Beyoncé Giselle Knowles-Carter (/biːˈjɒnseɪ/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record
producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions
as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father,
Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release
of Beyoncé's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five
Grammy Awards and featured the Billboard Hot 100 number-one singles "Crazy in Love" and "Baby Boy". Following the
disbandment of Destiny's Child in June 2005, she released her second solo album, B'Day (2006), which contained hits
"Déjà Vu", "Irreplaceable", and "Beautiful Liar". Beyoncé also ventured into acting, with a Golden Globe-nominated
performance in Dreamgirls (2006), and starring roles in The Pink Panther (2006) and Obsessed (2009). Her marriage
to rapper Jay Z and portrayal of Etta James in Cadillac Records (2008) influenced her third album, I Am... Sasha
Fierce (2008), which saw the birth of her alter-ego Sasha Fierce and earned a record-setting six Grammy Awards in
2010, including Song of the Year for "Single Ladies (Put a Ring on It)". Beyoncé took a hiatus from music in 2010
and took over management of her career; her fourth album 4 (2011) was subsequently mellower in tone, exploring 1970s
funk, 1980s pop, and 1990s soul. Her critically acclaimed fifth studio album, Beyoncé (2013), was distinguished from
previous releases by its experimental production and exploration of darker themes. A self-described "modern-day feminist",
Beyoncé creates songs that are often characterized by themes of love, relationships, and monogamy, as well as female
sexuality and empowerment. On stage, her dynamic, highly choreographed performances have led to critics hailing her
as one of the best entertainers in contemporary popular music. Throughout a career spanning 19 years, she has sold
over 118 million records as a solo artist, and a further 60 million with Destiny's Child, making her one of the best-selling
music artists of all time. She has won 20 Grammy Awards and is the most nominated woman in the award's history. The
Recording Industry Association of America recognized her as the Top Certified Artist in America during the 2000s
decade. In 2009, Billboard named her the Top Radio Songs Artist of the Decade and the Top Female Artist of the 2000s, and in 2011 their Artist of the Millennium. Time listed her among the 100 most influential people in the world in 2013 and 2014. Forbes magazine also listed her as the most powerful female musician of 2015.

Beyoncé Giselle Knowles was born in Houston, Texas, to Celestine Ann "Tina" Knowles (née Beyincé), a hairdresser and salon owner, and Mathew
Knowles, a Xerox sales manager. Beyoncé's name is a tribute to her mother's maiden name. Beyoncé's younger sister
Solange is also a singer and a former member of Destiny's Child. Mathew is African-American, while Tina is of Louisiana
Creole descent (with African, Native American, French, Cajun, and distant Irish and Spanish ancestry). Through her
mother, Beyoncé is a descendant of Acadian leader Joseph Broussard. She was raised in a Methodist household. Beyoncé
attended St. Mary's Elementary School in Fredericksburg, Texas, where she enrolled in dance classes. Her singing
talent was discovered when dance instructor Darlette Johnson began humming a song and she finished it, able to hit
the high-pitched notes. Beyoncé's interest in music and performing continued after winning a school talent show at
age seven, singing John Lennon's "Imagine" to beat competitors aged 15 and 16. In the fall of 1990, Beyoncé enrolled in Parker Elementary
School, a music magnet school in Houston, where she would perform with the school's choir. She also attended the
High School for the Performing and Visual Arts and later Alief Elsik High School. Beyoncé was also a member of the
choir at St. John's United Methodist Church as a soloist for two years.

At age eight, Beyoncé and childhood friend Kelly Rowland met LaTavia Roberson at an audition for an all-girl entertainment group. They were placed into a group with three other girls called Girl's Tyme, which rapped and danced on the talent show circuit in Houston. After
seeing the group, R&B producer Arne Frager brought them to his Northern California studio and placed them in Star
Search, the largest talent show on national TV at the time. Girl's Tyme failed to win, and Beyoncé later said the
song they performed was not good. In 1995, Beyoncé's father resigned from his job to manage the group. The move reduced Beyoncé's family's income by half, and her parents were forced to move into separate apartments. Mathew cut the
original line-up to four and the group continued performing as an opening act for other established R&B girl groups.
The girls auditioned before record labels and were finally signed to Elektra Records, moving to Atlanta Records briefly
to work on their first recording, only to be cut by the company. This put further strain on the family, and Beyoncé's
parents separated. On October 5, 1995, Dwayne Wiggins's Grass Roots Entertainment signed the group. In 1996, the
girls began recording their debut album under an agreement with Sony Music, the Knowles family reunited, and shortly
after, the group got a contract with Columbia Records. The group changed their name to Destiny's Child in 1996, based
upon a passage in the Book of Isaiah. In 1997, Destiny's Child released their major label debut song "Killing Time"
on the soundtrack to the 1997 film, Men in Black. The following year, the group released their self-titled debut
album, scoring their first major hit "No, No, No". The album established the group as a viable act in the music industry,
with moderate sales and winning the group three Soul Train Lady of Soul Awards for Best R&B/Soul Album of the Year,
Best R&B/Soul or Rap New Artist, and Best R&B/Soul Single for "No, No, No". The group released their multi-platinum
second album The Writing's on the Wall in 1999. The record features some of the group's most widely known songs such
as "Bills, Bills, Bills", the group's first number-one single, "Jumpin' Jumpin'" and "Say My Name", which became
their most successful song at the time, and would remain one of their signature songs. "Say My Name" won the Best
R&B Performance by a Duo or Group with Vocals and the Best R&B Song at the 43rd Annual Grammy Awards. The Writing's
on the Wall sold more than eight million copies worldwide. During this time, Beyoncé recorded a duet with Marc Nelson,
an original member of Boyz II Men, on the song "After All Is Said and Done" for the soundtrack to the 1999 film,
The Best Man. LeToya Luckett and Roberson became unhappy with Mathew's managing of the band and eventually were replaced
by Farrah Franklin and Michelle Williams. Beyoncé experienced depression following the split with Luckett and Roberson
after the media, critics, and blogs publicly blamed her for it. Her long-standing boyfriend left her
at this time. The depression was so severe it lasted for a couple of years, during which she occasionally kept herself
in her bedroom for days and refused to eat anything. Beyoncé stated that she struggled to speak about her depression
because Destiny's Child had just won their first Grammy Award and she feared no one would take her seriously. Beyoncé
would later speak of her mother as the person who helped her fight it. Franklin was dismissed, leaving just Beyoncé,
Rowland, and Williams. The remaining band members recorded "Independent Women Part I", which appeared on the soundtrack
to the 2000 film, Charlie's Angels. It became their best-charting single, topping the U.S. Billboard Hot 100 chart
for eleven consecutive weeks. In early 2001, while Destiny's Child was completing their third album, Beyoncé landed
a major role in the MTV made-for-television film, Carmen: A Hip Hopera, starring alongside American actor Mekhi Phifer.
Set in Philadelphia, the film is a modern interpretation of the 19th century opera Carmen by French composer Georges
Bizet. When the third album Survivor was released in May 2001, Luckett and Roberson filed a lawsuit claiming that
the songs were aimed at them. The album debuted at number one on the U.S. Billboard 200, with first-week sales of 663,000 copies. The album spawned other number-one hits, "Bootylicious" and the title track, "Survivor", the
latter of which earned the group a Grammy Award for Best R&B Performance by a Duo or Group with Vocals. After releasing
their holiday album 8 Days of Christmas in October 2001, the group announced a hiatus to further pursue solo careers.
In July 2002, Beyoncé continued her acting career playing Foxxy Cleopatra alongside Mike Myers in the comedy film,
Austin Powers in Goldmember, which spent its first weekend atop the US box office and grossed $73 million. Beyoncé
released "Work It Out" as the lead single from its soundtrack album which entered the top ten in the UK, Norway,
and Belgium. In 2003, Beyoncé starred opposite Cuba Gooding, Jr., in the musical comedy The Fighting Temptations
as Lilly, a single mother whom Gooding's character falls in love with. The film received mixed reviews from critics
but grossed $30 million in the U.S. Beyoncé released "Fighting Temptation" as the lead single from the film's soundtrack
album, a collaboration with Missy Elliott, MC Lyte, and Free that was also used to promote the film. Another of Beyoncé's contributions
to the soundtrack, "Summertime", fared better on the US charts.

Beyoncé's first solo recording was a feature on Jay
Z's "'03 Bonnie & Clyde" that was released in October 2002, peaking at number four on the U.S. Billboard Hot 100
chart. Her first solo album Dangerously in Love was released on June 24, 2003, after Michelle Williams and Kelly
Rowland had released their solo efforts. The album sold 317,000 copies in its first week, debuted atop the Billboard
200, and has since sold 11 million copies worldwide. The album's lead single, "Crazy in Love", featuring Jay Z, became
Beyoncé's first number-one single as a solo artist in the US. The single "Baby Boy" also reached number one, and
the singles "Me, Myself and I" and "Naughty Girl" both reached the top five. The album earned Beyoncé a then record-tying
five awards at the 46th Annual Grammy Awards; Best Contemporary R&B Album, Best Female R&B Vocal Performance for
"Dangerously in Love 2", Best R&B Song and Best Rap/Sung Collaboration for "Crazy in Love", and Best R&B Performance
by a Duo or Group with Vocals for "The Closer I Get to You" with Luther Vandross. In November 2003, she embarked
on the Dangerously in Love Tour in Europe and later toured alongside Missy Elliott and Alicia Keys for the Verizon
Ladies First Tour in North America. On February 1, 2004, Beyoncé performed the American national anthem at Super
Bowl XXXVIII, at the Reliant Stadium in Houston, Texas. After the release of Dangerously in Love, Beyoncé had planned
to produce a follow-up album using several of the left-over tracks. However, this was put on hold so she could concentrate
on recording Destiny Fulfilled, the final studio album by Destiny's Child. Released on November 15, 2004, in the
US and peaking at number two on the Billboard 200, Destiny Fulfilled included the singles "Lose My Breath" and "Soldier",
which reached the top five on the Billboard Hot 100 chart. Destiny's Child embarked on a worldwide concert tour,
Destiny Fulfilled... and Lovin' It and during the last stop of their European tour, in Barcelona on June 11, 2005,
Rowland announced that Destiny's Child would disband following the North American leg of the tour. The group released
their first compilation album Number 1's on October 25, 2005, in the US and accepted a star on the Hollywood Walk
of Fame in March 2006.

Beyoncé's second solo album B'Day was released on September 5, 2006, in the US, to coincide
with her twenty-fifth birthday. It sold 541,000 copies in its first week and debuted atop the Billboard 200, becoming
Beyoncé's second consecutive number-one album in the United States. The album's lead single "Déjà Vu", featuring
Jay Z, reached the top five on the Billboard Hot 100 chart. The second international single "Irreplaceable" was a
commercial success worldwide, reaching number one in Australia, Hungary, Ireland, New Zealand and the United States.
B'Day also produced three other singles; "Ring the Alarm", "Get Me Bodied", and "Green Light" (released in the United
Kingdom only). Her first acting role of 2006 was in the comedy film The Pink Panther starring opposite Steve Martin,
grossing $158.8 million at the box office worldwide. Her second film Dreamgirls, the film version of the 1981 Broadway
musical loosely based on The Supremes, received acclaim from critics and grossed $154 million internationally. In
it, she starred opposite Jennifer Hudson, Jamie Foxx, and Eddie Murphy playing a pop singer based on Diana Ross.
To promote the film, Beyoncé released "Listen" as the lead single from the soundtrack album. In April 2007, Beyoncé
embarked on The Beyoncé Experience, her first worldwide concert tour, visiting 97 venues and grossing over $24 million. Beyoncé conducted pre-concert food donation drives during six major stops in conjunction with her pastor at St.
John's and America's Second Harvest. At the same time, B'Day was re-released with five additional songs, including
her duet with Shakira, "Beautiful Liar".

On April 4, 2008, Beyoncé married Jay Z. She publicly revealed their marriage
in a video montage at the listening party for her third studio album, I Am... Sasha Fierce, in Manhattan's Sony Club
on October 22, 2008. I Am... Sasha Fierce was released on November 18, 2008 in the United States. The album formally
introduces Beyoncé's alter ego Sasha Fierce, conceived during the making of her 2003 single "Crazy in Love", selling
482,000 copies in its first week, debuting atop the Billboard 200, and giving Beyoncé her third consecutive number-one
album in the US. The album featured the number-one song "Single Ladies (Put a Ring on It)" and the top-five songs
"If I Were a Boy" and "Halo". "Halo" became the longest-running Hot 100 single of her career, and its US success helped Beyoncé attain more top-ten singles on the list than any other woman during the 2000s. The album also included the successful "Sweet Dreams" and the singles "Diva", "Ego", "Broken-Hearted Girl" and "Video
Phone". The music video for "Single Ladies" has been parodied and imitated around the world, spawning the "first
major dance craze" of the Internet age according to the Toronto Star. The video has won several awards, including
Best Video at the 2009 MTV Europe Music Awards, the 2009 Scottish MOBO Awards, and the 2009 BET Awards. At the 2009
MTV Video Music Awards, the video was nominated for nine awards, ultimately winning three including Video of the
Year. Its failure to win the Best Female Video category, which went to American country pop singer Taylor Swift's
"You Belong with Me", led to Kanye West interrupting the ceremony and Beyoncé improvising a re-presentation of Swift's
award during her own acceptance speech. In March 2009, Beyoncé embarked on the I Am... World Tour, her second headlining
worldwide concert tour, consisting of 108 shows, grossing $119.5 million. Beyoncé further expanded her acting career,
starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. Her performance in the film received
praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award
nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. Beyoncé
donated her entire salary from the film to Phoenix House, an organization of rehabilitation centers for heroin addicts
around the country. On January 20, 2009, Beyoncé performed James' "At Last" at the First Couple's first inaugural
ball. Beyoncé starred opposite Ali Larter and Idris Elba in the thriller, Obsessed. She played Sharon Charles, a
mother and wife who learns of a woman's obsessive behavior over her husband. Although the film received negative
reviews from critics, the movie did well at the US box office, grossing $68 million—$60 million more than Cadillac
Records—on a budget of $20 million. The fight scene finale between Sharon and the character played by Ali Larter
also won the 2010 MTV Movie Award for Best Fight. At the 52nd Annual Grammy Awards, Beyoncé received ten nominations,
including Album of the Year for I Am... Sasha Fierce, Record of the Year for "Halo", and Song of the Year for "Single
Ladies (Put a Ring on It)", among others. She tied with Lauryn Hill for most Grammy nominations in a single year
by a female artist. In 2010, Beyoncé was featured on Lady Gaga's single "Telephone" and its music video. The song
topped the US Pop Songs chart, becoming the sixth number-one for both Beyoncé and Gaga, tying them with Mariah Carey
for most number-ones since the Nielsen Top 40 airplay chart launched in 1992. "Telephone" received a Grammy Award
nomination for Best Pop Collaboration with Vocals.

Beyoncé announced a hiatus from her music career in January 2010,
heeding her mother's advice, "to live life, to be inspired by things again". During the break she and her father
parted ways as business partners. Beyoncé's musical break lasted nine months and saw her visit multiple European
cities, the Great Wall of China, the Egyptian pyramids, Australia, English music festivals and various museums and
ballet performances. In 2011, documents obtained by WikiLeaks revealed that Beyoncé was one of many entertainers
who performed for the family of Libyan ruler Muammar Gaddafi. Rolling Stone reported that the music industry was
urging them to return the money they earned for the concerts; a spokesperson for Beyoncé later confirmed to The Huffington
Post that she donated the money to the Clinton Bush Haiti Fund. Later that year she became the first solo female
artist to headline the main Pyramid stage at the 2011 Glastonbury Festival in over twenty years, and was named the
highest-paid performer in the world per minute.

Her fourth studio album 4 was released on June 28, 2011, in the US.
4 sold 310,000 copies in its first week and debuted atop the Billboard 200 chart, giving Beyoncé her fourth consecutive
number-one album in the US. The album was preceded by two of its singles "Run the World (Girls)" and "Best Thing
I Never Had", which both attained moderate success. The fourth single "Love on Top" was a commercial success in the
US. 4 also produced four other singles; "Party", "Countdown", "I Care" and "End of Time". "Eat, Play, Love", a cover
story written by Beyoncé for Essence that detailed her 2010 career break, won her a writing award from the New York
Association of Black Journalists. In late 2011, she took the stage at New York's Roseland Ballroom for four nights
of special performances: the 4 Intimate Nights with Beyoncé concerts saw her perform the 4 album to standing-room-only crowds.

On January 7, 2012, Beyoncé gave birth to her first child, a daughter, Blue Ivy Carter, at Lenox Hill
Hospital in New York. Five months later, she performed for four nights at Revel Atlantic City's Ovation Hall to celebrate
the resort's opening, her first performances since giving birth to Blue Ivy. In January 2013, Destiny's Child released
Love Songs, a compilation album of the romance-themed songs from their previous albums and a newly recorded track,
"Nuclear". Beyoncé performed the American national anthem singing along with a pre-recorded track at President Obama's
second inauguration in Washington, D.C. The following month, Beyoncé performed at the Super Bowl XLVII halftime show,
held at the Mercedes-Benz Superdome in New Orleans. The performance stands as the second most tweeted about moment
in history at 268,000 tweets per minute. At the 55th Annual Grammy Awards, Beyoncé won for Best Traditional R&B Performance
for "Love on Top". Her feature-length documentary film, Life Is But a Dream, first aired on HBO on February 16, 2013.
The film, which she directed and produced herself, featured footage from her childhood, her as a mother and businesswoman,
recording, rehearsing for live performances, and her return to the spotlight following Blue Ivy's birth. Its DVD
release in November 2013 was accompanied by footage from the Revel Presents: Beyoncé Live concerts and a new song,
"God Made You Beautiful". In February 2013, Beyoncé signed a global publishing agreement with Warner/Chappell Music,
which would cover her future songwriting and then-upcoming studio album. Beyoncé embarked on The Mrs. Carter Show
World Tour on April 15 in Belgrade, Serbia; the tour included 132 dates that ran through to March 2014. It became
the most successful tour of her career and one of the most-successful tours of all time. In May, Beyoncé's cover
of Amy Winehouse's "Back to Black" with André 3000 on The Great Gatsby soundtrack was released. She was also honorary
chair of the 2013 Met Gala. Beyoncé voiced Queen Tara in the 3D CGI animated film, Epic, released by 20th Century
Fox on May 24, and recorded an original song for the film, "Rise Up", co-written with Sia.

On December 13, 2013,
Beyoncé unexpectedly released her eponymous fifth studio album on the iTunes Store without any prior announcement
or promotion. The album debuted atop the Billboard 200 chart, giving Beyoncé her fifth consecutive number-one album
in the US. This made her the first woman in the chart's history to have her first five studio albums debut at number
one. The album received critical acclaim and commercial success, selling one million digital copies worldwide in six
days; The New York Times noted the album's unconventional, unexpected release as significant. Musically an electro-R&B
album, it concerns darker themes previously unexplored in her work, such as "bulimia, postnatal depression [and]
the fears and insecurities of marriage and motherhood". The single "Drunk in Love", featuring Jay Z, peaked at number
two on the Billboard Hot 100 chart. In April 2014, after much speculation in the weeks before, Beyoncé and Jay Z
officially announced their On the Run Tour. It served as the couple's first co-headlining stadium tour together.
On August 24, 2014, she received the Video Vanguard Award at the 2014 MTV Video Music Awards. Knowles also took home
three competitive awards: Best Video with a Social Message and Best Cinematography for "Pretty Hurts", as well as
best collaboration for "Drunk in Love". In November, Forbes reported that Beyoncé was the top-earning woman in music
for the second year in a row, earning $115 million in the year, more than double her earnings in 2013. The album Beyoncé was reissued with new material in three forms: an extended play, a box set, and a full platinum edition. At
the 57th Annual Grammy Awards in February 2015, Beyoncé was nominated for six awards, ultimately winning three: Best
R&B Performance and Best R&B Song for "Drunk in Love", and Best Surround Sound Album for Beyoncé. She was nominated
for Album of the Year, but the award was won by Beck for his Morning Phase album. In August, the cover of the September issue of Vogue magazine was unveiled online with Beyoncé as the cover star, making her the first African-American artist and only the third African-American woman to cover the September issue. She headlined the 2015 Made in America
festival in early September and also the Global Citizen Festival later that month. Beyoncé made an uncredited featured
appearance on the track "Hymn for the Weekend" by British rock band Coldplay, on their seventh studio album A Head
Full of Dreams (2015), which saw release in December. On January 7, 2016, Pepsi announced Beyoncé would perform alongside
Coldplay at Super Bowl 50 in February. Knowles has previously performed at four Super Bowl shows throughout her career,
serving as the main headliner of the 47th Super Bowl halftime show in 2013. On February 6, 2016, one day before her
performance at the Super Bowl, Beyoncé released a new single exclusively on music streaming service Tidal called
"Formation".

Beyoncé is believed to have first started a relationship with Jay Z after a collaboration on "'03 Bonnie
& Clyde", which appeared on his seventh album The Blueprint 2: The Gift & The Curse (2002). Beyoncé appeared as Jay
Z's girlfriend in the music video for the song, which would further fuel speculation of their relationship. On April
4, 2008, Beyoncé and Jay Z were married without publicity. As of April 2014, the couple have sold a combined 300
million records together. The couple are known for their private relationship, although they have appeared to become
more relaxed in recent years. Beyoncé suffered a miscarriage in 2010 or 2011, describing it as "the saddest thing"
she had ever endured. She returned to the studio and wrote music in order to cope with the loss. In April 2011, Beyoncé
and Jay Z traveled to Paris to shoot the album cover for 4, and she unexpectedly became pregnant there.
In August, the couple attended the 2011 MTV Video Music Awards, at which Beyoncé performed "Love on Top" and started
the performance saying "Tonight I want you to stand up on your feet, I want you to feel the love that's growing inside
of me". At the end of the performance, she dropped her microphone, unbuttoned her blazer and rubbed her stomach,
confirming the pregnancy she had alluded to earlier in the evening. Her appearance helped that year's MTV Video Music
Awards become the most-watched broadcast in MTV history, pulling in 12.4 million viewers; the announcement was listed
in Guinness World Records for "most tweets per second recorded for a single event" on Twitter, receiving 8,868 tweets
per second and "Beyonce pregnant" was the most Googled term the week of August 29, 2011. On January 7, 2012, Beyoncé
gave birth to a daughter, Blue Ivy Carter, at Lenox Hill Hospital in New York under heavy security. Two days later,
Jay Z released "Glory", a song dedicated to their child, on his website Lifeandtimes.com. The song detailed the couple's
pregnancy struggles, including a miscarriage Beyoncé suffered before becoming pregnant with Blue Ivy. Blue Ivy's
cries are included at the end of the song, and she was officially credited as "B.I.C." on it. At two days old, she
became the youngest person ever to appear on a Billboard chart when "Glory" debuted on the Hot R&B/Hip-Hop Songs
chart.

Beyoncé and husband Jay Z are friends with President Barack Obama and First Lady Michelle Obama. She performed
"America the Beautiful" at the 2009 presidential inauguration, as well as "At Last" during the first inaugural dance
at the Neighborhood Ball two days later. Beyoncé and Jay Z held a fundraiser at the latter's 40/40 Club in Manhattan
for Obama's 2012 presidential campaign, which raised $4 million. Beyoncé uploaded pictures of her paper ballot on Tumblr, confirming she had voted in support of the Democratic Party and encouraging others to do so. She also performed
the American national anthem at his second inauguration, singing along with a pre-recorded track. She publicly endorsed
same sex marriage on March 26, 2013, after the Supreme Court debate on California's Proposition 8. In July 2013,
Beyoncé and Jay-Z attended a rally in response to the acquittal of George Zimmerman for the shooting of Trayvon Martin.
In an interview published by Vogue in April 2013, Beyoncé was asked if she considers herself a feminist, to which
she said, "that word can be very extreme... But I guess I am a modern-day feminist. I do believe in equality". She
would later align herself more publicly with the movement, sampling "We should all be feminists", a speech delivered
by Nigerian author Chimamanda Ngozi Adichie at a TEDxEuston conference in April 2013, in her song "Flawless", released
later that year. She has also contributed to the Ban Bossy campaign, which uses television and social media to encourage
leadership in girls. In 2015, Beyoncé signed an open letter for which the ONE Campaign had been collecting signatures; the letter was addressed to Angela Merkel and Nkosazana Dlamini-Zuma, urging them to focus on women as they headed the G7 in Germany and the AU in South Africa respectively, bodies that would begin setting development-funding priorities ahead of a major UN summit in September 2015 to establish new development goals for the generation. Following the death of Freddie Gray, Beyoncé and Jay-Z, among other notable figures, met with his family.
After the imprisonment of protesters of Gray's death, Beyoncé and Jay-Z donated thousands of dollars to bail them
out.

Forbes magazine began reporting on Beyoncé's earnings in 2008, calculating that the $80 million she earned between June 2007 and June 2008 from her music, tours, films, and clothing line made her the world's best-paid music personality at the time, above Madonna and Celine Dion. They placed her fourth on the Celebrity 100 list in 2009 and ninth on
the "Most Powerful Women in the World" list in 2010. The following year, Forbes placed her eighth on the "Best-Paid
Celebrities Under 30" list, having earned $35 million in the past year for her clothing line and endorsement deals.
In 2012, Forbes placed Beyoncé at number 16 on the Celebrity 100 list, twelve places lower than three years earlier, yet
still having earned $40 million in the past year for her album 4, clothing line and endorsement deals. In the same
year, Beyoncé and Jay Z placed at number one on the "World's Highest-Paid Celebrity Couples", for collectively earning
$78 million. The couple made it into the previous year's Guinness World Records as the "highest-earning power couple"
for collectively earning $122 million in 2009. For the years 2009 to 2011, Beyoncé earned an average of $70 million
per year, and earned $40 million in 2012. In 2013, Beyoncé's endorsements of Pepsi and H&M made her and Jay Z the
world's first billion-dollar couple in the music industry. That year, Beyoncé was ranked as the fourth most powerful celebrity in the Forbes rankings. MTV estimated that by the end of 2014, Beyoncé would become the highest-paid black musician in history; she achieved this in April 2014. In June 2014, Beyoncé ranked number one on the Forbes Celebrity 100 list, earning an estimated $115 million between June 2013 and June 2014. This marked the first time she had topped the Celebrity 100 list, and those were her highest yearly earnings to date. As of May 2015, her net worth
is estimated to be $250 million.

Beyoncé's vocal range spans four octaves. Jody Rosen highlights her tone and timbre
as particularly distinctive, describing her voice as "one of the most compelling instruments in popular music". Another critic calls her a "vocal acrobat, being able to sing long and complex melismas and vocal runs effortlessly, and in key". Her vocal abilities mean she is identified as the centerpiece of Destiny's Child. The Daily Mail calls
Beyoncé's voice "versatile", capable of exploring power ballads, soul, rock belting, operatic flourishes, and hip
hop. Jon Pareles of The New York Times commented that her voice is "velvety yet tart, with an insistent flutter and
reserves of soul belting". Rosen notes that the hip hop era highly influenced Beyoncé's strange rhythmic vocal style,
but also finds her quite traditionalist in her use of balladry, gospel and falsetto. Other critics praise her range
and power, with Chris Richards of The Washington Post saying she was "capable of punctuating any beat with goose-bump-inducing
whispers or full-bore diva-roars." Beyoncé's music is generally R&B, but she also incorporates pop, soul and funk
into her songs. 4 demonstrated Beyoncé's exploration of 90s-style R&B, as well as a greater use of soul and hip hop
than on previous releases. While she almost exclusively releases songs in English, Beyoncé recorded several
Spanish songs for Irreemplazable (re-recordings of songs from B'Day for a Spanish-language audience), and the re-release
of B'Day. To record these, Beyoncé was coached phonetically by American record producer Rudy Perez. She has received
co-writing credits for most of the songs recorded with Destiny's Child and her solo efforts. Her early songs were
personally driven and female-empowerment themed compositions like "Independent Women" and "Survivor", but after the
start of her relationship with Jay Z she transitioned to more man-tending anthems such as "Cater 2 U". Beyoncé has
also received co-producing credits for most of the records in which she has been involved, especially during her
solo efforts. However, she does not formulate beats herself, but typically comes up with melodies and ideas during
production, sharing them with producers. In 2001, she became the first African-American woman and second woman songwriter
to win the Pop Songwriter of the Year award at the American Society of Composers, Authors, and Publishers Pop Music
Awards. Beyoncé was the third woman to have writing credits on three number one songs ("Irreplaceable", "Grillz"
and "Check on It") in the same year, after Carole King in 1971 and Mariah Carey in 1991. She is tied with American
songwriter Diane Warren at third with nine songwriting credits on number-one singles. (The latter wrote her 9/11-motivated
song "I Was Here" for 4.) In May 2011, Billboard magazine listed Beyoncé at number 17 on their list of the "Top 20
Hot 100 Songwriters", for having co-written eight singles that hit number one on the Billboard Hot 100 chart. She
was one of only three women on that list. Beyoncé names Michael Jackson as her major musical influence. At the age
of five, Beyoncé attended her first concert, a Michael Jackson performance, at which she says she realised her purpose. When
she presented him with a tribute award at the World Music Awards in 2006, Beyoncé said, "if it wasn't for Michael
Jackson, I would never ever have performed." She admires Diana Ross as an "all-around entertainer" and Whitney Houston,
who she said "inspired me to get up there and do what she did." She credits Mariah Carey's singing and her song "Vision
of Love" as influencing her to begin practicing vocal runs as a child. Her other musical influences include Aaliyah,
Prince, Lauryn Hill, Sade Adu, Donna Summer, Mary J. Blige, Janet Jackson, Anita Baker and Rachelle Ferrell. The
feminism and female empowerment themes on Beyoncé's second solo album B'Day were inspired by her role in Dreamgirls
and by singer Josephine Baker. Beyoncé paid homage to Baker by performing "Déjà Vu" at the 2006 Fashion Rocks concert
wearing Baker's trademark mini-hula skirt embellished with fake bananas. Beyoncé's third solo album I Am... Sasha
Fierce was inspired by Jay Z and especially by Etta James, whose "boldness" inspired Beyoncé to explore other musical
genres and styles. Her fourth solo album, 4, was inspired by Fela Kuti, 1990s R&B, Earth, Wind & Fire, DeBarge, Lionel
Richie, Teena Marie with additional influences by The Jackson 5, New Edition, Adele, Florence and the Machine, and
Prince. Beyoncé has stated that she is personally inspired by US First Lady Michelle Obama, saying "She proves you
can do it all" and she has described Oprah Winfrey as "the definition of inspiration and a strong woman". She has
also discussed how Jay Z is a continuing inspiration to her, both with what she describes as his lyrical genius and
in the obstacles he has overcome in his life. Beyoncé has expressed admiration for the artist Jean-Michel Basquiat,
posting in a letter "what I find in the work of Jean-Michel Basquiat, I search for in every day in music... he is
lyrical and raw". In February 2013, Beyoncé said that Madonna inspired her to take control of her own career. She
commented: "I think about Madonna and how she took all of the great things she achieved and started the label and
developed other artists. But there are not enough of those women." In 2006, Beyoncé introduced her all-female tour
band Suga Mama (also the name of a song on B'Day), which includes bassists, drummers, guitarists, horn players, keyboardists
and percussionists. Her background singers, The Mamas, consist of Montina Cooper-Donnell, Crystal Collins and Tiffany
Moniqué Riddick. They made their debut appearance at the 2006 BET Awards and re-appeared in the music videos for
"Irreplaceable" and "Green Light". The band have supported Beyoncé in most subsequent live performances, including
her 2007 concert tour The Beyoncé Experience, 2009–2010 I Am... World Tour and 2013–2014 The Mrs. Carter Show World
Tour. Beyoncé has received praise for her stage presence and voice during live performances. Jarett Wieselman of
the New York Post placed her at number one on his list of the Five Best Singer/Dancers. According to Barbara Ellen
of The Guardian, Beyoncé is the most in-charge female artist she's seen onstage, while Alice Jones of The Independent
wrote she "takes her role as entertainer so seriously she's almost too good." The ex-President of Def Jam L.A. Reid
has described Beyoncé as the greatest entertainer alive. Jim Farber of the Daily News and Stephanie Classen of Star
Phoenix both praised her strong voice and her stage presence. Described as being "sexy, seductive and provocative"
when performing on stage, Beyoncé has said that she originally created the alter ego "Sasha Fierce" to keep that
stage persona separate from who she really is. She described Sasha as being "too aggressive, too strong, too sassy
[and] too sexy", stating, "I'm not like her in real life at all." Sasha was conceived during the making of "Crazy
in Love", and Beyoncé introduced her with the release of her 2008 album I Am... Sasha Fierce. In February 2010, she
announced in an interview with Allure magazine that she was comfortable enough with herself to no longer need Sasha
Fierce. However, Beyoncé announced in May 2012 that she would bring her back for her Revel Presents: Beyoncé Live
shows later that month. Beyoncé has been described as having a wide-ranging sex appeal, with music journalist Touré
writing that since the release of Dangerously in Love, she has "become a crossover sex symbol". Offstage, Beyoncé
says that while she likes to dress sexily, her onstage dress "is absolutely for the stage." Due to her curves and
the term's catchiness, in the 2000s the media often used the term "Bootylicious" (a portmanteau of booty and delicious)
to describe Beyoncé; the term was popularized by Destiny's Child's single of the same name. In 2006, it
was added to the Oxford English Dictionary. In September 2010, Beyoncé made her runway modelling debut at Tom Ford's
Spring/Summer 2011 fashion show. She was named "World's Most Beautiful Woman" by People and the "Hottest Female Singer
of All Time" by Complex in 2012. In January 2013, GQ placed her on its cover, featuring her atop its "100 Sexiest
Women of the 21st Century" list. VH1 listed her at number 1 on its 100 Sexiest Artists list. Several wax figures
of Beyoncé are found at Madame Tussauds Wax Museums in major cities around the world, including New York, Washington,
D.C., Amsterdam, Bangkok, Hollywood and Sydney. According to Italian fashion designer Roberto Cavalli, Beyoncé uses
different fashion styles to work with her music while performing. Her mother co-wrote a book, published in 2002,
titled Destiny's Style, an account of how fashion had an impact on the trio's success. The B'Day Anthology Video Album
showed many instances of fashion-oriented footage, depicting classic to contemporary wardrobe styles. In 2007, Beyoncé
was featured on the cover of the Sports Illustrated Swimsuit Issue, becoming the second African-American woman to
do so, after Tyra Banks, and People magazine recognized Beyoncé as the best-dressed celebrity. The Bey Hive is the name given
to Beyoncé's fan base. Fans were previously titled "The Beyontourage" (a portmanteau of Beyoncé and entourage).
The name Bey Hive derives from the word beehive, purposely misspelled to resemble her first name, and was coined
by fans after petitions on the social networking service Twitter and online news reports during competitions.
In 2006, the animal rights organization People for the Ethical Treatment of Animals (PETA), criticized Beyoncé for
wearing and using fur in her clothing line House of Deréon. In 2011, she appeared on the cover of French fashion
magazine L'Officiel, in blackface and tribal makeup that drew criticism from the media. A statement released from
a spokesperson for the magazine said that Beyoncé's look was "far from the glamorous Sasha Fierce" and that it was
"a return to her African roots". Beyoncé's lighter skin color and costuming have drawn criticism from some in the
African-American community. Emmett Price, a professor of music at Northeastern University, wrote in 2007 that he
thinks race plays a role in many of these criticisms, saying white celebrities who dress similarly do not attract
as many comments. In 2008, L'Oréal was accused of whitening her skin in their Feria hair color advertisements, responding
that "it is categorically untrue", and in 2013, Beyoncé herself criticized H&M for their proposed "retouching" of
promotional images of her, and according to Vogue requested that only "natural pictures be used". In The New Yorker,
music critic Jody Rosen described Beyoncé as "the most important and compelling popular musician of the twenty-first
century... the result, the logical end point, of a century-plus of pop." When The Guardian named her Artist of
the Decade, Llewyn-Smith wrote, "Why Beyoncé? [...] Because she made not one but two of the decade's greatest singles,
with Crazy in Love and Single Ladies (Put a Ring on It), not to mention her hits with Destiny's Child; and this was
the decade when singles – particularly R&B singles – regained their status as pop's favourite medium. [...] [She]
and not any superannuated rock star was arguably the greatest live performer of the past 10 years." In 2013, Beyoncé
made the Time 100 list, Baz Luhrmann writing "no one has that voice, no one moves the way she moves, no one can hold
an audience the way she does... When Beyoncé does an album, when Beyoncé sings a song, when Beyoncé does anything,
it's an event, and it's broadly influential. Right now, she is the heir-apparent diva of the USA — the reigning national
voice." In 2014, Beyoncé was listed again on the Time 100 and also featured on the cover of the issue. Beyoncé's
work has influenced numerous artists including Adele, Ariana Grande, Lady Gaga, Bridgit Mendler, Rihanna, Kelly Rowland,
Sam Smith, Meghan Trainor, Nicole Scherzinger, Rita Ora, Zendaya, Cheryl Cole, JoJo, Alexis Jordan, Jessica Sanchez,
and Azealia Banks. American indie rock band White Rabbits also cited her as an inspiration for their third album Milk
Famous (2012), and her friend Gwyneth Paltrow studied Beyoncé at her live concerts while learning to become a musical performer
for the 2010 film Country Strong. Nicki Minaj has stated that seeing Beyoncé's Pepsi commercial influenced her decision
to appear in the company's 2012 global campaign. Her solo debut single, "Crazy in Love", was named VH1's "Greatest Song
of the 2000s", NME's "Best Track of the 00s" and "Pop Song of the Century", considered by Rolling Stone to be one
of the 500 greatest songs of all time, earned two Grammy Awards and is one of the best-selling singles of all time
at around 8 million copies. The music video for "Single Ladies (Put a Ring on It)", which achieved fame for its intricate
choreography and its deployment of jazz hands, was credited by the Toronto Star as having started the "first major
dance craze of both the new millennium and the Internet", triggering a number of parodies of the dance choreography
and a legion of amateur imitators on YouTube. In 2013, Drake released a single titled "Girls Love Beyoncé", which
featured an interpolation of Destiny's Child's "Say My Name" and discussed his relationship with women. In January
2012, research scientist Bryan Lessard named Scaptia beyonceae, a species of horse fly found in northern Queensland,
Australia, after Beyoncé, due to the fly's unique golden hairs on its abdomen. In July 2014, a Beyoncé exhibit was
introduced into the "Legends of Rock" section of the Rock and Roll Hall of Fame. The black leotard from the "Single
Ladies" video and her outfit from the Super Bowl half time performance are among several pieces housed at the museum.
Beyoncé has received numerous awards. As a solo artist she has sold over 15 million albums in the US, and over 118
million records worldwide (a further 60 million with Destiny's Child), making her one of the best-selling
music artists of all time. The Recording Industry Association of America (RIAA) listed Beyoncé as the top certified
artist of the 2000s, with a total of 64 certifications. Her songs "Crazy in Love", "Single Ladies (Put a Ring on
It)", "Halo", and "Irreplaceable" are some of the best-selling singles of all time worldwide. In 2009, The Observer
named her the Artist of the Decade and Billboard named her the Top Female Artist and Top Radio Songs Artist of the
Decade. In 2010, Billboard named her in their "Top 50 R&B/Hip-Hop Artists of the Past 25 Years" list at number 15.
In 2012 VH1 ranked her third on their list of the "100 Greatest Women in Music". Beyoncé was the first female artist
to be honored with the International Artist Award at the American Music Awards. She has also received the Legend
Award at the 2008 World Music Awards and the Billboard Millennium Award at the 2011 Billboard Music Awards. Beyoncé
has won 20 Grammy Awards, both as a solo artist and member of Destiny's Child, making her the second most honored
female artist by the Grammys, behind Alison Krauss, and the most-nominated woman in Grammy Award history, with 52 nominations.
"Single Ladies (Put a Ring on It)" won Song of the Year in 2010 while "Say My Name" and "Crazy in Love" had previously
won Best R&B Song. Dangerously in Love, B'Day and I Am... Sasha Fierce have all won Best Contemporary R&B Album.
Beyoncé set the record for the most Grammy awards won by a female artist in one night in 2010 when she won six awards,
breaking the tie she previously held with Alicia Keys, Norah Jones, Alison Krauss, and Amy Winehouse, with Adele
equaling this in 2012. Following her role in Dreamgirls she was nominated for Best Original Song for "Listen" and
Best Actress at the Golden Globe Awards, and Outstanding Actress in a Motion Picture at the NAACP Image Awards. Beyoncé
won two awards at the Broadcast Film Critics Association Awards 2006: Best Song for "Listen" and Best Original Soundtrack
for Dreamgirls: Music from the Motion Picture. Beyoncé has worked with Pepsi since 2002, and in 2004 appeared in
a Gladiator-themed commercial with Britney Spears, Pink, and Enrique Iglesias. In 2012, Beyoncé signed a $50 million
deal to endorse Pepsi. The Center for Science in the Public Interest (CSPINET) wrote Beyoncé an open letter asking
her to reconsider the deal because of the unhealthiness of the product and to donate the proceeds to a medical organisation.
Nevertheless, NetBase found that Beyoncé's campaign was the most talked about endorsement in April 2013, with a 70
per cent positive audience response to the commercial and print ads. Beyoncé has worked with Tommy Hilfiger for the
fragrances True Star (singing a cover version of "Wishing on a Star") and True Star Gold; she also promoted Emporio
Armani's Diamonds fragrance in 2007. Beyoncé launched her first official fragrance, Heat, in 2010. The commercial,
which featured the 1956 song "Fever", was shown after the watershed in the United Kingdom, as it begins with an image
of Beyoncé appearing to lie naked in a room. In February 2011, Beyoncé launched her second fragrance, Heat Rush.
Beyoncé's third fragrance, Pulse, was launched in September 2011. In 2013, The Mrs. Carter Show Limited Edition version
of Heat was released. The six editions of Heat are the world's best-selling celebrity fragrance line, with sales
of over $400 million. The release of the video game Starpower: Beyoncé was cancelled after Beyoncé pulled out of a
$100 million deal with GateFive, who alleged that the cancellation meant the sacking of 70 staff and millions of pounds
lost in development. The dispute was settled out of court in June 2013; her lawyers said that she had cancelled because
GateFive had lost its financial backers. Beyoncé also has had deals with American Express, Nintendo DS and L'Oréal
since the age of 18. In October 2014, it was announced that Beyoncé with her management company Parkwood Entertainment
would be partnering with London-based fashion retailer Topshop, in a new 50/50 split subsidiary business named Parkwood
Topshop Athletic Ltd. The new division was created for Topshop to break into the activewear market, with an athletic,
street wear brand being produced. "Creating a partnership with Beyoncé, one of the most hard-working and talented
people in the world, who spends many hours of her life dancing, rehearsing and training is a unique opportunity to
develop this category", stated Sir Philip Green on the partnership. The company and collection are set to launch and
hit stores in the fall of 2015. On March 30, 2015, it was announced that Beyoncé is a co-owner, with various other
music artists, in the music streaming service Tidal. The service specialises in lossless audio and high definition
music videos. Beyoncé's husband Jay Z acquired the parent company of Tidal, Aspiro, in the first quarter of 2015.
Including Beyoncé and Jay-Z, sixteen artist stakeholders (such as Kanye West, Rihanna, Madonna, Chris Martin and Nicki
Minaj) co-own Tidal, with the majority owning a 3% equity stake. The idea of having an all-artist-owned
streaming service was created by those involved to adapt to the increased demand for streaming within the current
music industry, and to rival other streaming services such as Spotify, which have been criticised for their low payout
of royalties. "The challenge is to get everyone to respect music again, to recognize its value", stated Jay-Z on
the release of Tidal. Beyoncé and her mother introduced House of Deréon, a contemporary women's fashion line, in
2005. The concept is inspired by three generations of women in their family, the name paying tribute to Beyoncé's
grandmother, Agnèz Deréon, a respected seamstress. According to Tina, the overall style of the line best reflects
her and Beyoncé's taste and style. Beyoncé and her mother founded their family's company Beyond Productions, which
provides the licensing and brand management for House of Deréon, and its junior collection, Deréon. House of Deréon
pieces were exhibited in Destiny's Child's shows and tours during their Destiny Fulfilled era. The collection features
sportswear, denim offerings with fur, outerwear and accessories that include handbags and footwear, and is available
at department and specialty stores across the US and Canada. In 2005, Beyoncé teamed up with House of Brands, a shoe
company, to produce a range of footwear for House of Deréon. In January 2008, Starwave Mobile launched Beyoncé Fashion
Diva, a "high-style" mobile game with a social networking component, featuring the House of Deréon collection. In
July 2009, Beyoncé and her mother launched a new junior apparel label, Sasha Fierce for Deréon, for back-to-school
selling. The collection included sportswear, outerwear, handbags, footwear, eyewear, lingerie and jewelry. It was
available at department stores including Macy's and Dillard's, and specialty stores Jimmy Jazz and Against All Odds.
On May 27, 2010, Beyoncé teamed up with clothing store C&A to launch Deréon by Beyoncé at their stores in Brazil.
The collection included tailored blazers with padded shoulders, little black dresses, embroidered tops and shirts
and bandage dresses. In October 2014, Beyoncé signed a deal to launch an activewear line of clothing with British
fashion retailer Topshop. The 50-50 venture is called Parkwood Topshop Athletic Ltd and was originally scheduled to
launch its first dance, fitness and sports ranges in autumn 2015; the line will now launch in April 2016. After Hurricane Katrina
in 2005, Beyoncé and Rowland founded the Survivor Foundation to provide transitional housing for victims in the Houston
area, to which Beyoncé contributed an initial $250,000. The foundation has since expanded to work with other charities
in the city, and also provided relief following Hurricane Ike three years later. Beyoncé participated in George Clooney
and Wyclef Jean's Hope for Haiti Now: A Global Benefit for Earthquake Relief telethon and was named the official
face of the limited-edition CFDA "Fashion For Haiti" T-shirt, made by Theory, which raised a total of $1 million.
On March 5, 2010, Beyoncé and her mother Tina opened the Beyoncé Cosmetology Center at the Brooklyn Phoenix House,
offering a seven-month cosmetology training course for men and women. In April 2011, Beyoncé joined forces with US
First Lady Michelle Obama and the National Association of Broadcasters Education Foundation, to help boost the latter's
campaign against child obesity by reworking her single "Get Me Bodied". Following the death of Osama bin Laden, Beyoncé
released her cover of the Lee Greenwood song "God Bless the USA", as a charity single to help raise funds for the
New York Police and Fire Widows' and Children's Benefit Fund. In December 2012, Beyoncé, along with a variety of other
celebrities, teamed up and produced a video campaign for "Demand A Plan", a bipartisan effort by a group of 950 US
mayors and others designed to influence the federal government into rethinking its gun control laws, following the
Sandy Hook Elementary School shooting. Beyoncé became an ambassador for the 2012 World Humanitarian Day campaign
donating her song "I Was Here" and its music video, shot in the UN, to the campaign. In 2013, it was announced that
Beyoncé would work with Salma Hayek and Frida Giannini on a Gucci "Chime for Change" campaign that aims to spread
female empowerment. The campaign, which aired on February 28, was set to her new music. A concert for the cause took
place on June 1, 2013 in London and included other acts like Ellie Goulding, Florence and the Machine, and Rita Ora.
In advance of the concert, she appeared in a campaign video released on 15 May 2013, where she, along with Cameron
Diaz, John Legend and Kylie Minogue, described inspiration from their mothers, while a number of other artists celebrated
personal inspiration from other women. This led to a call for viewers to submit photos of women who inspired them,
a selection of which was shown at the concert. Beyoncé said about her mother Tina Knowles that her gift was "finding
the best qualities in every human being." With help of the crowdfunding platform Catapult, visitors of the concert
could choose between several projects promoting the education of women and girls. Beyoncé is also taking part in "Miss
a Meal", a food-donation campaign, and supporting the Goodwill charity through online charity auctions at Charitybuzz
that support job creation throughout Europe and the U.S.
Frédéric François Chopin (/ˈʃoʊpæn/; French pronunciation: [fʁe.de.ʁik fʁɑ̃.swa ʃɔ.pɛ̃]; 22 February or 1 March 1810 – 17
October 1849), born Fryderyk Franciszek Chopin,[n 1] was a Polish and French (by citizenship and his father's birth)
composer and virtuoso pianist of the Romantic era who wrote primarily for the solo piano. He gained and has maintained
renown worldwide as one of the leading musicians of his era, whose "poetic genius was based on a professional technique
that was without equal in his generation." Chopin was born in what was then the Duchy of Warsaw, and grew up in Warsaw,
which after 1815 became part of Congress Poland. A child prodigy, he completed his musical education and composed
his earlier works in Warsaw before leaving Poland at the age of 20, less than a month before the outbreak of the
November 1830 Uprising. At the age of 21 he settled in Paris. Thereafter, during the last 18 years of his life, he
gave only some 30 public performances, preferring the more intimate atmosphere of the salon. He supported himself
by selling his compositions and teaching piano, for which he was in high demand. Chopin formed a friendship with
Franz Liszt and was admired by many of his musical contemporaries, including Robert Schumann. In 1835 he obtained
French citizenship. After a failed engagement to Maria Wodzińska, from 1837 to 1847 he maintained an often troubled
relationship with the French writer George Sand. A brief and unhappy visit to Majorca with Sand in 1838–39 was one
of his most productive periods of composition. In his last years, he was financially supported by his admirer Jane
Stirling, who also arranged for him to visit Scotland in 1848. Through most of his life, Chopin suffered from poor
health. He died in Paris in 1849, probably of tuberculosis. All of Chopin's compositions include the piano. Most
are for solo piano, though he also wrote two piano concertos, a few chamber pieces, and some songs to Polish lyrics.
His keyboard style is highly individual and often technically demanding; his own performances were noted for their
nuance and sensitivity. Chopin invented the concept of the instrumental ballade. His major piano works also include mazurkas,
waltzes, nocturnes, polonaises, études, impromptus, scherzos, preludes and sonatas, some published only after his
death. Influences on his compositional style include Polish folk music, the classical tradition of J. S. Bach, Mozart
and Schubert, the music of all of whom he admired, as well as the Paris salons where he was a frequent guest. His
innovations in style, musical form, and harmony, and his association of music with nationalism, were influential
throughout and after the late Romantic period. In his native Poland, in France, where he composed most of his works,
and beyond, Chopin's music, his status as one of music's earliest superstars, his association (if only indirect)
with political insurrection, his love life and his early death have made him, in the public consciousness, a leading
symbol of the Romantic era. His works remain popular, and he has been the subject of numerous films and biographies
of varying degrees of historical accuracy. Fryderyk Chopin was born in Żelazowa Wola, 46 kilometres (29 miles) west
of Warsaw, in what was then the Duchy of Warsaw, a Polish state established by Napoleon. The parish baptismal record
gives his birthday as 22 February 1810, and cites his given names in the Latin form Fridericus Franciscus (in Polish,
he was Fryderyk Franciszek). However, the composer and his family used the birthdate 1 March,[n 2] which is now generally
accepted as the correct date. Fryderyk's father, Nicolas Chopin, was a Frenchman from Lorraine who had emigrated
to Poland in 1787 at the age of sixteen. Nicolas tutored children of the Polish aristocracy, and in 1806 married
Justyna Krzyżanowska, a poor relative of the Skarbeks, one of the families for whom he worked. Fryderyk was baptized
on Easter Sunday, 23 April 1810, in the same church where his parents had married, in Brochów. His eighteen-year-old
godfather, for whom he was named, was Fryderyk Skarbek, a pupil of Nicolas Chopin. Fryderyk was the couple's second
child and only son; he had an elder sister, Ludwika (1807–55), and two younger sisters, Izabela (1811–81) and Emilia
(1812–27). Nicolas was devoted to his adopted homeland, and insisted on the use of the Polish language in the household.
In October 1810, six months after Fryderyk's birth, the family moved to Warsaw, where his father acquired a post
teaching French at the Warsaw Lyceum, then housed in the Saxon Palace. Fryderyk lived with his family in the Palace
grounds. The father played the flute and violin; the mother played the piano and gave lessons to boys in the boarding
house that the Chopins kept. Chopin was of slight build, and even in early childhood was prone to illnesses. Fryderyk
may have had some piano instruction from his mother, but his first professional music tutor, from 1816 to 1821, was
the Czech pianist Wojciech Żywny. His elder sister Ludwika also took lessons from Żywny, and occasionally played
duets with her brother. It quickly became apparent that he was a child prodigy. By the age of seven Fryderyk had
begun giving public concerts, and in 1817 he composed two polonaises, in G minor and B-flat major. His next work,
a polonaise in A-flat major of 1821, dedicated to Żywny, is his earliest surviving musical manuscript. In 1817 the
Saxon Palace was requisitioned by Warsaw's Russian governor for military use, and the Warsaw Lyceum was reestablished
in the Kazimierz Palace (today the rectorate of Warsaw University). Fryderyk and his family moved to a building,
which still survives, adjacent to the Kazimierz Palace. During this period, Fryderyk was sometimes invited to the
Belweder Palace as playmate to the son of the ruler of Russian Poland, Grand Duke Constantine; he played the piano
for the Duke and composed a march for him. Julian Ursyn Niemcewicz, in his dramatic eclogue, "Nasze Przebiegi" ("Our
Discourses", 1818), attested to "little Chopin's" popularity. From September 1823 to 1826 Chopin attended the Warsaw
Lyceum, where he received organ lessons from the Czech musician Wilhelm Würfel during his first year. In the autumn
of 1826 he began a three-year course under the Silesian composer Józef Elsner at the Warsaw Conservatory, studying
music theory, figured bass and composition.[n 3] Throughout this period he continued to compose and to give recitals
in concerts and salons in Warsaw. He was engaged by the inventors of a mechanical organ, the "eolomelodicon", and
on this instrument in May 1825 he performed his own improvisation and part of a concerto by Moscheles. The success
of this concert led to an invitation to give a similar recital on the instrument before Tsar Alexander I, who was
visiting Warsaw; the Tsar presented him with a diamond ring. At a subsequent eolomelodicon concert on 10 June 1825,
Chopin performed his Rondo Op. 1. This was the first of his works to be commercially published and earned him his
first mention in the foreign press, when the Leipzig Allgemeine Musikalische Zeitung praised his "wealth of musical
ideas". During 1824–28 Chopin spent his vacations away from Warsaw, at a number of locales.[n 4] In 1824 and 1825,
at Szafarnia, he was a guest of Dominik Dziewanowski, the father of a schoolmate. Here for the first time he encountered
Polish rural folk music. His letters home from Szafarnia (to which he gave the title "The Szafarnia Courier"), written
in a very modern and lively Polish, amused his family with their spoofing of the Warsaw newspapers and demonstrated
the youngster's literary gift. In 1827, soon after the death of Chopin's youngest sister Emilia, the family moved
from the Warsaw University building, adjacent to the Kazimierz Palace, to lodgings just across the street from the
university, in the south annex of the Krasiński Palace on Krakowskie Przedmieście,[n 5] where Chopin lived until
he left Warsaw in 1830.[n 6] Here his parents continued running their boarding house for male students; the Chopin
Family Parlour (Salonik Chopinów) became a museum in the 20th century. In 1829 the artist Ambroży Mieroszewski executed
a set of portraits of Chopin family members, including the first known portrait of the composer.[n 7] Four boarders
at his parents' apartments became Chopin's intimates: Tytus Woyciechowski, Jan Nepomucen Białobłocki, Jan Matuszyński
and Julian Fontana; the latter two would become part of his Paris milieu. He was friendly with members of Warsaw's
young artistic and intellectual world, including Fontana, Józef Bohdan Zaleski and Stefan Witwicki. He was also attracted
to the singing student Konstancja Gładkowska. In letters to Woyciechowski, he indicated which of his works, and even
which of their passages, were influenced by his fascination with her; his letter of 15 May 1830 revealed that the
slow movement (Larghetto) of his Piano Concerto No. 1 (in E minor) was secretly dedicated to her – "It should be
like dreaming in beautiful springtime – by moonlight." His final Conservatory report (July 1829) read: "Chopin F.,
third-year student, exceptional talent, musical genius." In September 1828 Chopin, while still a student, visited
Berlin with a family friend, zoologist Feliks Jarocki, enjoying operas directed by Gaspare Spontini and attending
concerts by Carl Friedrich Zelter, Felix Mendelssohn and other celebrities. On an 1829 return trip to Berlin, he
was a guest of Prince Antoni Radziwiłł, governor of the Grand Duchy of Posen—himself an accomplished composer and
aspiring cellist. For the prince and his pianist daughter Wanda, he composed his Introduction and Polonaise brillante
in C major for cello and piano, Op. 3. Back in Warsaw that year, Chopin heard Niccolò Paganini play the violin, and
composed a set of variations, Souvenir de Paganini. It may have been this experience which encouraged him to commence
writing his first Études (1829–32), exploring the capacities of his own instrument. On 11 August, three weeks after
completing his studies at the Warsaw Conservatory, he made his debut in Vienna. He gave two piano concerts and received
many favourable reviews—in addition to some commenting (in Chopin's own words) that he was "too delicate for those
accustomed to the piano-bashing of local artists". In one of these concerts, he premiered his Variations on Là ci
darem la mano, Op. 2 (variations on an aria from Mozart's opera Don Giovanni) for piano and orchestra. He returned
to Warsaw in September 1829, where he premiered his Piano Concerto No. 2 in F minor, Op. 21 on 17 March 1830. Chopin's
successes as a composer and performer opened the door to western Europe for him, and on 2 November 1830, he set out,
in the words of Zdzisław Jachimecki, "into the wide world, with no very clearly defined aim, forever." With Woyciechowski,
he headed for Austria, intending to go on to Italy. Later that month, in Warsaw, the November 1830 Uprising broke
out, and Woyciechowski returned to Poland to enlist. Chopin, now alone in Vienna, was nostalgic for his homeland,
and wrote to a friend, "I curse the moment of my departure." When in September 1831 he learned, while travelling
from Vienna to Paris, that the uprising had been crushed, he expressed his anguish in the pages of his private journal:
"Oh God! ... You are there, and yet you do not take vengeance!" Jachimecki ascribes to these events the composer's
maturing "into an inspired national bard who intuited the past, present and future of his native Poland."

Chopin arrived in Paris in late September 1831; he would never return to Poland, thus becoming one of many expatriates of
the Polish Great Emigration. In France he used the French versions of his given names, and after receiving French
citizenship in 1835, he travelled on a French passport. However, Chopin remained close to his fellow Poles in exile
as friends and confidants and he never felt fully comfortable speaking French. Chopin's biographer Adam Zamoyski
writes that he never considered himself to be French, despite his father's French origins, and always saw himself
as a Pole. In Paris, Chopin encountered artists and other distinguished figures, and found many opportunities to
exercise his talents and achieve celebrity. During his years in Paris he was to become acquainted with, among many
others, Hector Berlioz, Franz Liszt, Ferdinand Hiller, Heinrich Heine, Eugène Delacroix, and Alfred de Vigny. Chopin
was also acquainted with the poet Adam Mickiewicz, principal of the Polish Literary Society, some of whose verses
he set as songs. Two Polish friends in Paris were also to play important roles in Chopin's life there. His fellow
student at the Warsaw Conservatory, Julian Fontana, had originally tried unsuccessfully to establish himself in England;
Albert Grzymała, who in Paris became a wealthy financier and society figure, often acted as Chopin's adviser and
"gradually began to fill the role of elder brother in [his] life." Fontana was to become, in the words of Michałowski
and Samson, Chopin's "general factotum and copyist". At the end of 1831, Chopin received the first major endorsement
from an outstanding contemporary when Robert Schumann, reviewing the Op. 2 Variations in the Allgemeine musikalische
Zeitung (his first published article on music), declared: "Hats off, gentlemen! A genius." On 26 February 1832 Chopin
gave a debut Paris concert at the Salle Pleyel which drew universal admiration. The critic François-Joseph Fétis
wrote in the Revue et gazette musicale: "Here is a young man who ... taking no model, has found, if not a complete
renewal of piano music, ... an abundance of original ideas of a kind to be found nowhere else ..." After this concert,
Chopin realized that his essentially intimate keyboard technique was not optimal for large concert spaces. Later
that year he was introduced to the wealthy Rothschild banking family, whose patronage also opened doors for him to
other private salons (social gatherings of the aristocracy and artistic and literary elite). By the end of 1832 Chopin
had established himself among the Parisian musical elite, and had earned the respect of his peers such as Hiller,
Liszt, and Berlioz. He no longer depended financially upon his father, and in the winter of 1832 he began earning
a handsome income from publishing his works and teaching piano to affluent students from all over Europe. This freed
him from the strains of public concert-giving, which he disliked. Chopin seldom performed publicly in Paris. In later
years he generally gave a single annual concert at the Salle Pleyel, a venue that seated three hundred. He played
more frequently at salons, but preferred playing at his own Paris apartment for small groups of friends. The musicologist
Arthur Hedley has observed that "As a pianist Chopin was unique in acquiring a reputation of the highest order on
the basis of a minimum of public appearances—few more than thirty in the course of his lifetime." The list of musicians
who took part in some of his concerts provides an indication of the richness of Parisian artistic life during this
period. Examples include a concert on 23 March 1833, in which Chopin, Liszt and Hiller performed (on pianos) a concerto
by J.S. Bach for three keyboards; and, on 3 March 1838, a concert in which Chopin, his pupil Adolphe Gutmann, Charles-Valentin
Alkan, and Alkan's teacher Joseph Zimmermann performed Alkan's arrangement, for eight hands, of two movements from
Beethoven's 7th symphony. Chopin was also involved in the composition of Liszt's Hexameron; he wrote the sixth (and
final) variation on Bellini's theme. Chopin's music soon found success with publishers, and in 1833 he contracted
with Maurice Schlesinger, who arranged for it to be published not only in France but, through his family connections,
also in Germany and England. In the spring of 1834, Chopin attended the Lower Rhenish Music Festival in Aix-la-Chapelle
with Hiller, and it was there that Chopin met Felix Mendelssohn. After the festival, the three visited Düsseldorf,
where Mendelssohn had been appointed musical director. They spent what Mendelssohn described as "a very agreeable
day", playing and discussing music at his piano, and met Friedrich Wilhelm Schadow, director of the Academy of Art,
and some of his eminent pupils such as Lessing, Bendemann, Hildebrandt and Sohn. In 1835 Chopin went to Carlsbad,
where he spent time with his parents; it was the last time he would see them. On his way back to Paris, he met old
friends from Warsaw, the Wodzińskis. He had made the acquaintance of their daughter Maria in Poland five years earlier,
when she was eleven. This meeting prompted him to stay for two weeks in Dresden, when he had previously intended
to return to Paris via Leipzig. The sixteen-year-old girl's portrait of the composer is considered, along with Delacroix's, to be among Chopin's best likenesses. In October he finally reached Leipzig, where he met Schumann, Clara Wieck and
Felix Mendelssohn, who organised for him a performance of his own oratorio St. Paul, and who considered him "a perfect
musician". In July 1836 Chopin travelled to Marienbad and Dresden to be with the Wodziński family, and in September
he proposed to Maria, whose mother Countess Wodzińska approved in principle. Chopin went on to Leipzig, where he
presented Schumann with his G minor Ballade. At the end of 1836 he sent Maria an album in which his sister Ludwika
had inscribed seven of his songs, and his 1835 Nocturne in C-sharp minor, Op. 27, No. 1. The anodyne thanks he received
from Maria proved to be the last letter he was to have from her.

Although it is not known exactly when Chopin first
met Liszt after arriving in Paris, on 12 December 1831 he mentioned in a letter to his friend Woyciechowski that
"I have met Rossini, Cherubini, Baillot, etc.—also Kalkbrenner. You would not believe how curious I was about Herz,
Liszt, Hiller, etc." Liszt was in attendance at Chopin's Parisian debut on 26 February 1832 at the Salle Pleyel,
which led him to remark: "The most vigorous applause seemed not to suffice to our enthusiasm in the presence of this
talented musician, who revealed a new phase of poetic sentiment combined with such happy innovation in the form of
his art." The two became friends, and for many years lived in close proximity in Paris, Chopin at 38 Rue de la Chaussée-d'Antin,
and Liszt at the Hôtel de France on the Rue Lafitte, a few blocks away. They performed together on seven occasions
between 1833 and 1841. The first, on 2 April 1833, was at a benefit concert organized by Hector Berlioz for his bankrupt
Shakespearean actress wife Harriet Smithson, during which they played George Onslow's Sonata in F minor for piano
duet. Later joint appearances included a benefit concert for the Benevolent Association of Polish Ladies in Paris.
Their last appearance together in public was for a charity concert conducted for the Beethoven Memorial in Bonn,
held at the Salle Pleyel and the Paris Conservatory on 25 and 26 April 1841. Although the two displayed great respect
and admiration for each other, their friendship was uneasy and had some qualities of a love-hate relationship. Harold
C. Schonberg believes that Chopin displayed a "tinge of jealousy and spite" towards Liszt's virtuosity on the piano,
and others have also argued that he had become enchanted with Liszt's theatricality, showmanship and success. Liszt
was the dedicatee of Chopin's Op. 10 Études, and his performance of them prompted the composer to write to Hiller,
"I should like to rob him of the way he plays my studies." However, Chopin expressed annoyance in 1843 when Liszt
performed one of his nocturnes with the addition of numerous intricate embellishments; Chopin remarked that Liszt should play the music as written or not play it at all, prompting an apology. Most biographers of Chopin state that
after this the two had little to do with each other, although in his letters dated as late as 1848 he still referred
to him as "my friend Liszt". Some commentators point to events in the two men's romantic lives which led to a rift
between them; there are claims that Liszt had displayed jealousy of his mistress Marie d'Agoult's obsession with
Chopin, while others believe that Chopin had become concerned about Liszt's growing relationship with George Sand.

In 1836, at a party hosted by Marie d'Agoult, Chopin met the French author George Sand (born [Amantine] Aurore [Lucile]
Dupin). Short (under five feet, or 152 cm), dark, big-eyed and a cigar smoker, she initially repelled Chopin, who
remarked, "What an unattractive person la Sand is. Is she really a woman?" However, by early 1837 Maria Wodzińska's
mother had made it clear to Chopin in correspondence that a marriage with her daughter was unlikely to proceed. It
is thought that she was influenced by his poor health and possibly also by rumours about his associations with women
such as d'Agoult and Sand. Chopin finally placed the letters from Maria and her mother in a package on which he wrote,
in Polish, "My tragedy". Sand, in a letter to Grzymała of June 1838, admitted strong feelings for the composer and
debated whether to abandon a current affair in order to begin a relationship with Chopin; she asked Grzymała to assess
Chopin's relationship with Maria Wodzińska, without realising that the affair, at least from Maria's side, was over.
In June 1837 Chopin visited London incognito in the company of the piano manufacturer Camille Pleyel, where he played
at a musical soirée at the house of English piano maker James Broadwood. On his return to Paris, his association
with Sand began in earnest, and by the end of June 1838 they had become lovers. Sand, who was six years older than
the composer, and who had had a series of lovers, wrote at this time: "I must say I was confused and amazed at the
effect this little creature had on me ... I have still not recovered from my astonishment, and if I were a proud
person I should be feeling humiliated at having been carried away ..." The two spent a miserable winter on Majorca
(8 November 1838 to 13 February 1839), where, together with Sand's two children, they had journeyed in the hope of
improving the health of Chopin and that of Sand's 15-year-old son Maurice, and also to escape the threats of Sand's
former lover Félicien Mallefille. After discovering that the couple were not married, the deeply traditional Catholic
people of Majorca became inhospitable, making accommodation difficult to find. This compelled the group to take lodgings
in a former Carthusian monastery in Valldemossa, which gave little shelter from the cold winter weather. On 3 December,
Chopin complained about his bad health and the incompetence of the doctors in Majorca: "Three doctors have visited
me ... The first said I was dead; the second said I was dying; and the third said I was about to die." He also had
problems having his Pleyel piano sent to him. It finally arrived from Paris in December. Chopin wrote to Pleyel in
January 1839: "I am sending you my Preludes [(Op. 28)]. I finished them on your little piano, which arrived in the
best possible condition in spite of the sea, the bad weather and the Palma customs." Chopin was also able to undertake
work on his Ballade No. 2, Op. 38; two Polonaises, Op. 40; and the Scherzo No. 3, Op. 39. Although this period had
been productive, the bad weather had such a detrimental effect on Chopin's health that Sand determined to leave the
island. To avoid further customs duties, Sand sold the piano to a local French couple, the Canuts.[n 8] The group
travelled first to Barcelona, then to Marseilles, where they stayed for a few months while Chopin convalesced. In
May 1839 they headed for the summer to Sand's estate at Nohant, where they spent most summers until 1846. In autumn
they returned to Paris, where Chopin's apartment at 5 rue Tronchet was close to Sand's rented accommodation at the
rue Pigalle. He frequently visited Sand in the evenings, but both retained some independence. In 1842 he and Sand
moved to the Square d'Orléans, living in adjacent buildings. At the funeral of the tenor Adolphe Nourrit in Paris
in 1839, Chopin made a rare appearance at the organ, playing a transcription of Franz Schubert's lied Die Gestirne.
On 26 July 1840 Chopin and Sand were present at the dress rehearsal of Berlioz's Grande symphonie funèbre et triomphale,
composed to commemorate the tenth anniversary of the July Revolution. Chopin was reportedly unimpressed with the
composition. During the summers at Nohant, particularly in the years 1839–43, Chopin found quiet, productive days
during which he composed many works, including his Polonaise in A-flat major, Op. 53. Among the visitors to Nohant
were Delacroix and the mezzo-soprano Pauline Viardot, whom Chopin had advised on piano technique and composition.
Delacroix gives an account of staying at Nohant in a letter of 7 June 1842.

From 1842 onwards, Chopin showed signs
of serious illness. After a solo recital in Paris on 21 February 1842, he wrote to Grzymała: "I have to lie in bed
all day long, my mouth and tonsils are aching so much." He was forced by illness to decline a written invitation
from Alkan to participate in a repeat performance of the Beethoven Seventh Symphony arrangement at Erard's on 1 March
1843. Late in 1844, Charles Hallé visited Chopin and found him "hardly able to move, bent like a half-opened penknife
and evidently in great pain", although his spirits returned when he started to play the piano for his visitor. Chopin's
health continued to deteriorate, particularly from this time onwards. Modern research suggests that apart from any
other illnesses, he may also have suffered from temporal lobe epilepsy. Chopin's relations with Sand were soured
in 1846 by problems involving her daughter Solange and Solange's fiancé, the young fortune-hunting sculptor Auguste
Clésinger. The composer frequently took Solange's side in quarrels with her mother; he also faced jealousy from Sand's
son Maurice. Chopin was utterly indifferent to Sand's radical political pursuits, while Sand looked on his society
friends with disdain. As the composer's illness progressed, Sand had become less of a lover and more of a nurse to
Chopin, whom she called her "third child". In letters to third parties, she vented her impatience, referring to him
as a "child", a "little angel", a "sufferer" and a "beloved little corpse". In 1847 Sand published her novel Lucrezia
Floriani, whose main characters—a rich actress and a prince in weak health—could be interpreted as Sand and Chopin;
the story was uncomplimentary to Chopin, who could not have missed the allusions as he helped Sand correct the printer's
galleys. In 1847 he did not visit Nohant, and he quietly ended their ten-year relationship following an angry correspondence
which, in Sand's words, made "a strange conclusion to nine years of exclusive friendship." The two would never meet
again. Chopin's output as a composer throughout this period declined in quantity year by year. Whereas in 1841 he
had written a dozen works, only six were written in 1842 and six shorter pieces in 1843. In 1844 he wrote only the
Op. 58 sonata. 1845 saw the completion of three mazurkas (Op. 59). Although these works were more refined than many
of his earlier compositions, Zamoyski opines that "his powers of concentration were failing and his inspiration was
beset by anguish, both emotional and intellectual." Chopin's public popularity as a virtuoso began to wane, as did
the number of his pupils, and this, together with the political strife and instability of the time, caused him to
struggle financially. In February 1848, with the cellist Auguste Franchomme, he gave his last Paris concert, which
included three movements of the Cello Sonata Op. 65.

Chopin's life was covered in a BBC TV documentary, Chopin – The Women Behind The Music (2010), and in a 2010 documentary made by Angelo Bozzolini and Roberto Prosseda for Italian
television. Chopin's life and his relations with George Sand have been fictionalized in numerous films. The 1945
biographical film A Song to Remember earned Cornel Wilde an Academy Award nomination as Best Actor for his portrayal
of the composer. Other film treatments have included: La valse de l'adieu (France, 1928) by Henry Roussel, with Pierre
Blanchar as Chopin; Impromptu (1991), starring Hugh Grant as Chopin; La note bleue (1991); and Chopin: Desire for
Love (2002). Possibly the first venture into fictional treatments of Chopin's life was a fanciful operatic version of some of its events: the opera Chopin, written by Giacomo Orefice and produced in Milan in 1901. All the music is derived
from that of Chopin. Chopin has figured extensively in Polish literature, both in serious critical studies of his
life and music and in fictional treatments. The earliest manifestation was probably an 1830 sonnet on Chopin by Leon
Ulrich. French writers on Chopin (apart from Sand) have included Marcel Proust and André Gide; and he has also featured
in works of Gottfried Benn and Boris Pasternak. There are numerous biographies of Chopin in English (see bibliography
for some of these). Numerous recordings of Chopin's works are available. On the occasion of the composer's bicentenary,
the critics of The New York Times recommended performances by the following contemporary pianists (among many others):
Martha Argerich, Vladimir Ashkenazy, Emanuel Ax, Evgeny Kissin, Murray Perahia, Maurizio Pollini and Krystian Zimerman.
The Warsaw Chopin Society organizes the Grand prix du disque de F. Chopin for notable Chopin recordings, held every
five years. The British Library notes that "Chopin's works have been recorded by all the great pianists of the recording
era." The earliest recording was an 1895 performance by Paul Pabst of the Nocturne in E major Op. 62 No. 2. The British
Library site makes available a number of historic recordings, including some by Alfred Cortot, Ignaz Friedman, Vladimir
Horowitz, Benno Moiseiwitsch, Paderewski, Arthur Rubinstein, Xaver Scharwenka and many others. A select discography
of recordings of Chopin works by pianists representing the various pedagogic traditions stemming from Chopin is given
by Methuen-Campbell in his work tracing the lineage and character of those traditions. Chopin's music remains very
popular and is regularly performed, recorded and broadcast worldwide. The world's oldest monographic music competition,
the International Chopin Piano Competition, founded in 1927, is held every five years in Warsaw. The Fryderyk Chopin
Institute of Poland lists on its website over eighty societies world-wide devoted to the composer and his music.
The Institute site also lists nearly 1,500 performances of Chopin works on YouTube as of January 2014. Chopin's music
was used in the 1909 ballet Chopiniana, choreographed by Michel Fokine and orchestrated by Alexander Glazunov. Sergei
Diaghilev commissioned additional orchestrations—from Stravinsky, Anatoly Lyadov, Sergei Taneyev and Nikolai Tcherepnin—for
later productions, which used the title Les Sylphides.

In April 1848, during the Revolution in Paris, Chopin left
for London, where he performed at several concerts and at numerous receptions in great houses. This tour was suggested
to him by his Scottish pupil Jane Stirling and her elder sister. Stirling also made all the logistical arrangements
and provided much of the necessary funding. In London Chopin took lodgings at Dover Street, where the firm of Broadwood
provided him with a grand piano. At his first engagement, on 15 May at Stafford House, the audience included Queen
Victoria and Prince Albert. The Prince, who was himself a talented musician, moved close to the keyboard to view
Chopin's technique. Broadwood also arranged concerts for him; among those attending were Thackeray and the singer
Jenny Lind. Chopin was also sought after for piano lessons, for which he charged the high fee of one guinea (£1.05
in present British currency) per hour, and for private recitals for which the fee was 20 guineas. At a concert on
7 July he shared the platform with Viardot, who sang arrangements of some of his mazurkas to Spanish texts. In late
summer he was invited by Jane Stirling to visit Scotland, where he stayed at Calder House near Edinburgh and at Johnstone
Castle in Renfrewshire, both owned by members of Stirling's family. She clearly had a notion of going beyond mere
friendship, and Chopin was obliged to make it clear to her that this could not be so. He wrote at this time to Grzymała
"My Scottish ladies are kind, but such bores", and responding to a rumour about his involvement, answered that he
was "closer to the grave than the nuptial bed." He gave a public concert in Glasgow on 27 September, and another
in Edinburgh, at the Hopetoun Rooms on Queen Street (now Erskine House) on 4 October. In late October 1848, while
staying at 10 Warriston Crescent in Edinburgh with the Polish physician Adam Łyszczyński, he wrote out his last will
and testament—"a kind of disposition to be made of my stuff in the future, if I should drop dead somewhere", he wrote
to Grzymała. Chopin made his last public appearance on a concert platform at London's Guildhall on 16 November 1848,
when, in a final patriotic gesture, he played for the benefit of Polish refugees. By this time he was very seriously
ill, weighing under 99 pounds (45 kg), and his doctors were aware that his sickness was at a terminal
stage. At the end of November, Chopin returned to Paris. He passed the winter in unremitting illness, but gave occasional
lessons and was visited by friends, including Delacroix and Franchomme. Occasionally he played, or accompanied the
singing of Delfina Potocka, for his friends. During the summer of 1849, his friends found him an apartment in Chaillot,
out of the centre of the city, for which the rent was secretly subsidised by an admirer, Princess Obreskoff. Here
in June 1849 he was visited by Jenny Lind. With his health further deteriorating, Chopin desired to have a family
member with him. In June 1849 his sister Ludwika came to Paris with her husband and daughter, and in September, supported
by a loan from Jane Stirling, he took an apartment at Place Vendôme 12. After 15 October, when his condition took
a marked turn for the worse, only a handful of his closest friends remained with him, although Viardot remarked sardonically
that "all the grand Parisian ladies considered it de rigueur to faint in his room." Some of his friends provided
music at his request; among them, Potocka sang and Franchomme played the cello. Chopin requested that his body be
opened after death (for fear of being buried alive) and his heart returned to Warsaw where it rests at the Church
of the Holy Cross. He also bequeathed his unfinished notes on a piano tuition method, Projet de méthode, to Alkan
for completion. On 17 October, after midnight, the physician leaned over him and asked whether he was suffering greatly.
"No longer", he replied. He died a few minutes before two o'clock in the morning. Those present at the deathbed appear
to have included his sister Ludwika, Princess Marcelina Czartoryska, Sand's daughter Solange, and his close friend
Thomas Albrecht. Later that morning, Solange's husband Clésinger made Chopin's death mask and a cast of his left
hand. Chopin's disease and the cause of his death have since been a matter of discussion. His death certificate gave
the cause as tuberculosis, and his physician, Jean Cruveilhier, was then the leading French authority on this disease.
Other possibilities have been advanced, including cystic fibrosis, cirrhosis and alpha 1-antitrypsin deficiency. However,
the attribution of tuberculosis as principal cause of death has not been disproved. Permission for DNA testing, which
could put the matter to rest, has been denied by the Polish government. The funeral, held at the Church of the Madeleine
in Paris, was delayed almost two weeks, until 30 October. Entrance was restricted to ticket holders as many people
were expected to attend. Over 3,000 people arrived without invitations, from as far as London, Berlin and Vienna,
and were excluded. Mozart's Requiem was sung at the funeral; the soloists were the soprano Jeanne-Anais Castellan,
the mezzo-soprano Pauline Viardot, the tenor Alexis Dupont, and the bass Luigi Lablache; Chopin's Preludes No. 4
in E minor and No. 6 in B minor were also played. The organist at the funeral was Louis Lefébure-Wély. The funeral
procession to Père Lachaise Cemetery, which included Chopin's sister Ludwika, was led by the aged Prince Adam Czartoryski.
The pallbearers included Delacroix, Franchomme, and Camille Pleyel. At the graveside, the Funeral March from Chopin's
Piano Sonata No. 2 was played, in Reber's instrumentation. Chopin's tombstone, featuring the muse of music, Euterpe,
weeping over a broken lyre, was designed and sculpted by Clésinger. The expenses of the funeral and monument, amounting
to 5,000 francs, were covered by Jane Stirling, who also paid for the return of the composer's sister Ludwika to
Warsaw. Ludwika took Chopin's heart in an urn, preserved in alcohol, back to Poland in 1850.[n 9] She also took a
collection of two hundred letters from Sand to Chopin; after 1851 these were returned to Sand, who seems to have
destroyed them.

Over 230 works of Chopin survive; some compositions from early childhood have been lost. All his
known works involve the piano, and only a few range beyond solo piano music, as either piano concertos, songs or
chamber music. Chopin was educated in the tradition of Beethoven, Haydn, Mozart and Clementi; he used Clementi's
piano method with his own students. He was also influenced by Hummel's development of virtuoso, yet Mozartian, piano
technique. He cited Bach and Mozart as the two most important composers in shaping his musical outlook. Chopin's
early works are in the style of the "brilliant" keyboard pieces of his era as exemplified by the works of Ignaz Moscheles,
Friedrich Kalkbrenner, and others. Less direct in the earlier period are the influences of Polish folk music and
of Italian opera. Much of what became his typical style of ornamentation (for example, his fioriture) is taken from
singing. His melodic lines were increasingly reminiscent of the modes and features of the music of his native country,
such as drones. Chopin took the new salon genre of the nocturne, invented by the Irish composer John Field, to a
deeper level of sophistication. He was the first to write ballades and scherzi as individual concert pieces. He essentially
established a new genre with his own set of free-standing preludes (Op. 28, published 1839). He exploited the poetic
potential of the concept of the concert étude, already being developed in the 1820s and 1830s by Liszt, Clementi
and Moscheles, in his two sets of studies (Op. 10 published in 1833, Op. 25 in 1837). Chopin also endowed popular
dance forms with a greater range of melody and expression. Chopin's mazurkas, while originating in the traditional
Polish dance (the mazurek), differed from the traditional variety in that they were written for the concert hall
rather than the dance hall; "it was Chopin who put the mazurka on the European musical map." The series of seven
polonaises published in his lifetime (another nine were published posthumously), beginning with the Op. 26 pair (published
1836), set a new standard for music in the form. His waltzes were also written specifically for the salon recital
rather than the ballroom and are frequently at rather faster tempos than their dance-floor equivalents. Some of Chopin's
well-known pieces have acquired descriptive titles, such as the Revolutionary Étude (Op. 10, No. 12), and the Minute
Waltz (Op. 64, No. 1). However, with the exception of his Funeral March, the composer never named an instrumental
work beyond genre and number, leaving all potential extramusical associations to the listener; the names by which
many of his pieces are known were invented by others. There is no evidence to suggest that the Revolutionary Étude
was written with the failed Polish uprising against Russia in mind; it merely appeared at that time. The Funeral
March, the third movement of his Sonata No. 2 (Op. 35), the one case where he did give a title, was written before
the rest of the sonata, but no specific event or death is known to have inspired it. The last opus number that Chopin
himself used was 65, allocated to the Cello Sonata in G minor. He expressed a deathbed wish that all his unpublished
manuscripts be destroyed. At the request of the composer's mother and sisters, however, his musical executor Julian
Fontana selected 23 unpublished piano pieces and grouped them into eight further opus numbers (Opp. 66–73), published
in 1855. In 1857, 17 Polish songs that Chopin wrote at various stages of his life were collected and published as
Op. 74, though their order within the opus did not reflect the order of composition. Works published since 1857 have
received alternative catalogue designations instead of opus numbers. The present standard musicological reference
for Chopin's works is the Kobylańska Catalogue (usually represented by the initials 'KK'), named for its compiler,
the Polish musicologist Krystyna Kobylańska. Chopin's original publishers included Maurice Schlesinger and Camille
Pleyel. His works soon began to appear in popular 19th-century piano anthologies. The first collected edition was
by Breitkopf & Härtel (1878–1902). Among modern scholarly editions of Chopin's works are the version under the name
of Paderewski published between 1937 and 1966 and the more recent Polish "National Edition", edited by Jan Ekier,
both of which contain detailed explanations and discussions regarding choices and sources. Improvisation stands at
the centre of Chopin's creative processes. However, this does not imply impulsive rambling: Nicholas Temperley writes
that "improvisation is designed for an audience, and its starting-point is that audience's expectations, which include
the current conventions of musical form." The works for piano and orchestra, including the two concertos, are held
by Temperley to be "merely vehicles for brilliant piano playing ... formally longwinded and extremely conservative".
After the piano concertos (which are both early, dating from 1830), Chopin made no attempts at large-scale multi-movement
forms, save for his late sonatas for piano and for cello; "instead he achieved near-perfection in pieces of simple
general design but subtle and complex cell-structure." Rosen suggests that an important aspect of Chopin's individuality
is his flexible handling of the four-bar phrase as a structural unit. J. Barrie Jones suggests that "amongst the
works that Chopin intended for concert use, the four ballades and four scherzos stand supreme", and adds that "the
Barcarolle Op. 60 stands apart as an example of Chopin's rich harmonic palette coupled with an Italianate warmth
of melody." Temperley opines that these works, which contain "immense variety of mood, thematic material and structural
detail", are based on an extended "departure and return" form; "the more the middle section is extended, and the
further it departs in key, mood and theme, from the opening idea, the more important and dramatic is the reprise
when it at last comes." Chopin's mazurkas and waltzes are all in straightforward ternary or episodic form, sometimes
with a coda. The mazurkas often show more folk features than many of his other works, sometimes including modal scales
and harmonies and the use of drone basses. However, some also show unusual sophistication, for example Op. 63 No.
3, which includes a canon at one beat's distance, a great rarity in music. Chopin's polonaises show a marked advance
on those of his Polish predecessors in the form (who included his teachers Zywny and Elsner). As with the traditional
polonaise, Chopin's works are in triple time and typically display a martial rhythm in their melodies, accompaniments
and cadences. Unlike most of their precursors, they also require a formidable playing technique. The 21 nocturnes
are more structured, and of greater emotional depth, than those of Field (whom Chopin met in 1833). Many of the Chopin
nocturnes have middle sections marked by agitated expression (and often making very difficult demands on the performer)
which heightens their dramatic character. Chopin's études are largely in straightforward ternary form. He used them
to teach his own technique of piano playing—for instance playing double thirds (Op. 25, No. 6), playing in octaves
(Op. 25, No. 10), and playing repeated notes (Op. 10, No. 7). The preludes, many of which are very brief (some consisting
of simple statements and developments of a single theme or figure), were described by Schumann as "the beginnings
of studies". Inspired by J.S. Bach's The Well-Tempered Clavier, Chopin's preludes move up the circle of fifths (rather
than Bach's chromatic scale sequence) to create a prelude in each major and minor tonality. The preludes were perhaps
not intended to be played as a group, and may even have been used by him and later pianists as generic preludes to
others of his pieces, or even to music by other composers, as Kenneth Hamilton suggests: he notes a 1922 recording
by Ferruccio Busoni in which the Prelude Op. 28 No. 7 is followed by the Étude Op. 10 No. 5. The two mature
piano sonatas (No. 2, Op. 35, written in 1839 and No. 3, Op. 58, written in 1844) are in four movements. In Op. 35,
Chopin was able to combine within a formal large musical structure many elements of his virtuosic piano technique—"a
kind of dialogue between the public pianism of the brilliant style and the German sonata principle". The last movement,
a brief (75-bar) perpetuum mobile in which the hands play in unmodified octave unison throughout, was found shocking
and unmusical by contemporaries, including Schumann. The Op. 58 sonata is closer to the German tradition, including
many passages of complex counterpoint, "worthy of Brahms" according to the music historians Kornel Michałowski and
Jim Samson. Chopin's harmonic innovations may have arisen partly from his keyboard improvisation technique. Temperley
says that in his works "novel harmonic effects frequently result from the combination of ordinary appoggiaturas or
passing notes with melodic figures of accompaniment", and cadences are delayed by the use of chords outside the home
key (neapolitan sixths and diminished sevenths), or by sudden shifts to remote keys. Chord progressions sometimes
anticipate the shifting tonality of later composers such as Claude Debussy, as does Chopin's use of modal harmony.
In 1841, Léon Escudier wrote of a recital given by Chopin that year, "One may say that Chopin is the creator of a
school of piano and a school of composition. In truth, nothing equals the lightness, the sweetness with which the
composer preludes on the piano; moreover nothing may be compared to his works full of originality, distinction and
grace." Chopin refused to conform to a standard method of playing and believed that there was no set technique for
playing well. His style was based extensively on his use of very independent finger technique. In his Projet de méthode
he wrote: "Everything is a matter of knowing good fingering ... we need no less to use the rest of the hand, the
wrist, the forearm and the upper arm." He further stated: "One needs only to study a certain position of the hand
in relation to the keys to obtain with ease the most beautiful quality of sound, to know how to play short notes
and long notes, and [to attain] unlimited dexterity." The consequences of this approach to technique in Chopin's
music include the frequent use of the entire range of the keyboard, passages in double octaves and other chord groupings,
swiftly repeated notes, the use of grace notes, and the use of contrasting rhythms (four against three, for example)
between the hands. Polish composers of the following generation included virtuosi such as Moritz Moszkowski, but,
in the opinion of J. Barrie Jones, Chopin's "one worthy successor" among his compatriots was Karol Szymanowski (1882–1937).
Edvard Grieg, Antonín Dvořák, Isaac Albéniz, Pyotr Ilyich Tchaikovsky and Sergei Rachmaninoff, among others, are
regarded by critics as having been influenced by Chopin's use of national modes and idioms. Alexander Scriabin was
devoted to the music of Chopin, and his early published works include nineteen mazurkas, as well as numerous études
and preludes; his teacher Nikolai Zverev drilled him in Chopin's works to improve his virtuosity as a performer.
In the 20th century, composers who paid homage to (or in some cases parodied) the music of Chopin included George
Crumb, Bohuslav Martinů, Darius Milhaud, Igor Stravinsky and Heitor Villa-Lobos. Jonathan Bellman writes that modern
concert performance style—set in the "conservatory" tradition of late 19th- and 20th-century music schools, and suitable
for large auditoria or recordings—militates against what is known of Chopin's more intimate performance technique.
The composer himself said to a pupil that "concerts are never real music, you have to give up the idea of hearing
in them all the most beautiful things of art." Contemporary accounts indicate that in performance, Chopin avoided
rigid procedures sometimes incorrectly attributed to him, such as "always crescendo to a high note", but that he
was concerned with expressive phrasing, rhythmic consistency and sensitive colouring. Berlioz wrote in 1853 that
Chopin "has created a kind of chromatic embroidery ... whose effect is so strange and piquant as to be impossible
to describe ... virtually nobody but Chopin himself can play this music and give it this unusual turn". Hiller wrote
that "What in the hands of others was elegant embellishment, in his hands became a colourful wreath of flowers."
Chopin's music is frequently played with rubato, "the practice in performance of disregarding strict time, 'robbing'
some note-values for expressive effect". There are differing opinions as to how much, and what type, of rubato is
appropriate for his works. Charles Rosen comments that "most of the written-out indications of rubato in Chopin are
to be found in his mazurkas ... It is probable that Chopin used the older form of rubato so important to Mozart ...
[where] the melody note in the right hand is delayed until after the note in the bass ... An allied form of this
rubato is the arpeggiation of the chords thereby delaying the melody note; according to Chopin's pupil, Karol Mikuli,
Chopin was firmly opposed to this practice." Friederike Müller, a pupil of Chopin, wrote: "[His] playing was always
noble and beautiful; his tones sang, whether in full forte or softest piano. He took infinite pains to teach his
pupils this legato, cantabile style of playing. His most severe criticism was 'He—or she—does not know how to join
two notes together.' He also demanded the strictest adherence to rhythm. He hated all lingering and dragging, misplaced
rubatos, as well as exaggerated ritardandos ... and it is precisely in this respect that people make such terrible
errors in playing his works." With his mazurkas and polonaises, Chopin has been credited with introducing to music
a new sense of nationalism. Schumann, in his 1836 review of the piano concertos, highlighted the composer's strong
feelings for his native Poland, writing that "Now that the Poles are in deep mourning [after the failure of the November
1830 rising], their appeal to us artists is even stronger ... If the mighty autocrat in the north [i.e. Nicholas
I of Russia] could know that in Chopin's works, in the simple strains of his mazurkas, there lurks a dangerous enemy,
he would place a ban on his music. Chopin's works are cannon buried in flowers!" The biography of Chopin published
in 1863 under the name of Franz Liszt (but probably written by Carolyne zu Sayn-Wittgenstein) claims that Chopin
"must be ranked first among the first musicians ... individualizing in themselves the poetic sense of an entire nation."
Some modern commentators have argued against exaggerating Chopin's primacy as a "nationalist" or "patriotic" composer.
George Golos refers to earlier "nationalist" composers in Central Europe, including Poland's Michał Kleofas Ogiński
and Franciszek Lessel, who utilised polonaise and mazurka forms. Barbara Milewski suggests that Chopin's experience
of Polish music came more from "urbanised" Warsaw versions than from folk music, and that attempts (by Jachimecki
and others) to demonstrate genuine folk music in his works are without basis. Richard Taruskin impugns Schumann's
attitude toward Chopin's works as patronizing and comments that Chopin "felt his Polish patriotism deeply and sincerely"
but consciously modelled his works on the tradition of Bach, Beethoven, Schubert and Field. A reconciliation of these
views is suggested by William Atwood: "Undoubtedly [Chopin's] use of traditional musical forms like the polonaise
and mazurka roused nationalistic sentiments and a sense of cohesiveness amongst those Poles scattered across Europe
and the New World ... While some sought solace in [them], others found them a source of strength in their continuing
struggle for freedom. Although Chopin's music undoubtedly came to him intuitively rather than through any conscious
patriotic design, it served all the same to symbolize the will of the Polish people ..." Jones comments that "Chopin's
unique position as a composer, despite the fact that virtually everything he wrote was for the piano, has rarely
been questioned." He also notes that Chopin was fortunate to arrive in Paris in 1831—"the artistic environment, the
publishers who were willing to print his music, the wealthy and aristocratic who paid what Chopin asked for their
lessons"—and these factors, as well as his musical genius, also fuelled his contemporary and later reputation. While
his illness and his love-affairs conform to some of the stereotypes of romanticism, the rarity of his public recitals
(as opposed to performances at fashionable Paris soirées) led Arthur Hutchings to suggest that "his lack of Byronic
flamboyance [and] his aristocratic reclusiveness make him exceptional" among his romantic contemporaries, such as
Liszt and Henri Herz. Chopin's qualities as a pianist and composer were recognized by many of his fellow musicians.
Schumann named a piece for him in his suite Carnaval, and Chopin later dedicated his Ballade No. 2 in F major to
Schumann. Elements of Chopin's music can be traced in many of Liszt's later works. Liszt later transcribed for piano
six of Chopin's Polish songs. A less fraught friendship was with Alkan, with whom he discussed elements of folk music,
and who was deeply affected by Chopin's death. Two of Chopin's long-standing pupils, Karol Mikuli (1821–1897) and
Georges Mathias, were themselves piano teachers and passed on details of his playing to their own students, some
of whom (such as Raoul Koczalski) were to make recordings of his music. Other pianists and composers influenced by
Chopin's style include Louis Moreau Gottschalk, Édouard Wolff (1816–1880) and Pierre Zimmermann. Debussy dedicated
his own 1915 piano Études to the memory of Chopin; he frequently played Chopin's music during his studies at the
Paris Conservatoire, and undertook the editing of Chopin's piano music for the publisher Jacques Durand.
The exact nature of relations between Tibet and the Ming dynasty of China (1368–1644) is unclear. Analysis of the relationship
is further complicated by modern political conflicts and the application of Westphalian sovereignty to a time when
the concept did not exist. Some Mainland Chinese scholars, such as Wang Jiawei and Nyima Gyaincain, assert that the
Ming dynasty had unquestioned sovereignty over Tibet, pointing to the Ming court's issuing of various titles to Tibetan
leaders, Tibetans' full acceptance of these titles, and a renewal process for successors of these titles that involved
traveling to the Ming capital. Scholars within China also argue that Tibet has been an integral part of China since
the 13th century and that it was thus a part of the Ming Empire. But most scholars outside China, such as Turrell
V. Wylie, Melvin C. Goldstein, and Helmut Hoffman, say that the relationship was one of suzerainty, that Ming titles
were only nominal, that Tibet remained an independent region outside Ming control, and that it simply paid tribute
until the Jiajing Emperor (1521–1566), who ceased relations with Tibet. Some scholars note that Tibetan leaders during
the Ming frequently engaged in civil war and conducted their own foreign diplomacy with neighboring states such as
Nepal. Some scholars underscore the commercial aspect of the Ming-Tibetan relationship, noting the Ming dynasty's
shortage of horses for warfare and thus the importance of the horse trade with Tibet. Others argue that the significant
religious nature of the relationship of the Ming court with Tibetan lamas is underrepresented in modern scholarship.
In hopes of reviving the unique relationship of the earlier Mongol leader Kublai Khan (r. 1260–1294) and his spiritual
superior Drogön Chögyal Phagpa (1235–1280) of the Sakya school of Tibetan Buddhism, the Yongle Emperor (r. 1402–1424)
made a concerted effort to build a secular and religious alliance with Deshin Shekpa (1384–1415), the Karmapa of
the Karma Kagyu school. However, the Yongle Emperor's attempts were unsuccessful. The Ming initiated sporadic armed
intervention in Tibet during the 14th century, but did not garrison permanent troops there. At times the Tibetans
also used armed resistance against Ming forays. The Wanli Emperor (r. 1572–1620) made attempts to reestablish Sino-Tibetan
relations after the Mongol-Tibetan alliance initiated in 1578, which affected the foreign policy of the subsequent
Qing dynasty (1644–1912) of China in their support for the Dalai Lama of the Gelug school. By the late 16th century,
the Mongols were successful armed protectors of the Gelug Dalai Lama, after increasing their presence in the Amdo
region. This culminated in Güshi Khan's (1582–1655) conquest of Tibet from 1637 to 1642 and the establishment of the
Ganden Phodrang regime by the 5th Dalai Lama with his help. Tibet was once a strong power contemporaneous with Tang
China (618–907). Until the Tibetan Empire's collapse in the 9th century, it was the Tang's major rival in dominating
Inner Asia. The Yarlung rulers of Tibet also signed various peace treaties with the Tang, culminating in a treaty
in 821 that fixed the borders between Tibet and China. During the Five Dynasties and Ten Kingdoms period of China
(907–960), the fractured political realm of China saw no threat in a Tibet that was in just as much political
disarray, and there was little in the way of Sino-Tibetan relations. Few documents involving Sino-Tibetan contacts survive
from the Song dynasty (960–1279). The Song were far more concerned with countering northern enemy states of the Khitan-ruled
Liao dynasty (907–1125) and Jurchen-ruled Jin dynasty (1115–1234). In 1207, the Mongol ruler Genghis Khan (r. 1206–1227)
conquered and subjugated the ethnic Tangut state of the Western Xia (1038–1227). In the same year, he established
diplomatic relations with Tibet by sending envoys there. The conquest of the Western Xia alarmed Tibetan rulers,
who decided to pay tribute to the Mongols. However, when they ceased to pay tribute after Genghis Khan's death, his
successor Ögedei Khan (r. 1229–1241) launched an invasion into Tibet. The Mongol prince Godan, a grandson of Genghis
Khan, raided as far as Lhasa. During his attack in 1240, Prince Godan summoned Sakya Pandita (1182–1251), leader
of the Sakya school of Tibetan Buddhism, to his court in what is now Gansu in Western China. With Sakya Pandita's
submission to Godan in 1247, Tibet was officially incorporated into the Mongol Empire during the regency of Töregene
Khatun (1241–1246). Michael C. van Walt van Praag writes that Godan granted Sakya Pandita temporal authority over
a still politically fragmented Tibet, stating that "this investiture had little real impact" but it was significant
in that it established the unique "Priest-Patron" relationship between the Mongols and the Sakya lamas. Starting
in 1236, the Mongol prince Kublai, who later ruled as Khagan from 1260 to 1294, was granted a large appanage in North
China by his superior, Ögedei Khan. Karma Pakshi, 2nd Karmapa Lama (1203–1283)—the head lama of the Karma Kagyu lineage
of Tibetan Buddhism—rejected Kublai's invitation, so instead Kublai invited Drogön Chögyal Phagpa (1235–1280), successor
and nephew of Sakya Pandita, who came to his court in 1253. Kublai instituted a unique relationship with the Phagpa
lama, which recognized Kublai as a superior sovereign in political affairs and the Phagpa lama as the senior instructor
to Kublai in religious affairs. Kublai also made Drogön Chögyal Phagpa the director of the government agency known
as the Bureau of Buddhist and Tibetan Affairs and the ruling priest-king of Tibet, which comprised thirteen different
states ruled by myriarchies. Kublai Khan did not conquer the Song dynasty in South China until 1279, so Tibet was
a component of the early Mongol Empire before it was combined into one of its descendant empires with the whole of
China under the Yuan dynasty (1271–1368). Van Praag writes that this conquest "marked the end of independent China,"
which was then incorporated into the Yuan dynasty that ruled China, Tibet, Mongolia, Korea, parts of Siberia and
Upper Burma. Morris Rossabi, a professor of Asian history at Queens College, City University of New York, writes
that "Khubilai wished to be perceived both as the legitimate Khan of Khans of the Mongols and as the Emperor of China.
Though he had, by the early 1260s, become closely identified with China, he still, for a time, claimed universal
rule", and yet "despite his successes in China and Korea, Khubilai was unable to have himself accepted as the Great
Khan". Thus, with such limited acceptance of his position as Great Khan, Kublai Khan increasingly became identified
with China and sought support as Emperor of China. In 1358, the Sakya viceregal regime installed by the Mongols in
Tibet was overthrown in a rebellion by the Phagmodru myriarch Tai Situ Changchub Gyaltsen (1302–1364). The Mongol
Yuan court was forced to accept him as the new viceroy, and Changchub Gyaltsen and his successors, the Phagmodrupa
Dynasty, gained de facto rule over Tibet. In 1368, a Han Chinese revolt known as the Red Turban Rebellion toppled
the Mongol Yuan dynasty in China. Zhu Yuanzhang then established the Ming dynasty, ruling as the Hongwu Emperor (r.
1368–1398). It is not clear how much the early Ming court understood the civil war going on in Tibet between rival
religious sects, but the first emperor was anxious to avoid the same trouble that Tibet had caused for the Tang dynasty.
Instead of recognizing the Phagmodru ruler, the Hongwu Emperor sided with the Karmapa of the nearer Kham region and
southeastern Tibet, sending envoys out in the winter of 1372–1373 to ask the Yuan officeholders to renew their titles
for the new Ming court. As evident in his imperial edicts, the Hongwu Emperor was well aware of the Buddhist link
between Tibet and China and wanted to foster it. Rolpe Dorje, 4th Karmapa Lama (1340–1383) rejected the Hongwu Emperor's
invitation, although he did send some disciples as envoys to the court in Nanjing. The Hongwu Emperor also entrusted
his guru Zongluo, one of many Buddhist monks at court, to head a religious mission into Tibet in 1378–1382 in order
to obtain Buddhist texts. However, the early Ming government enacted a law, later rescinded, which forbade Han Chinese
to learn the tenets of Tibetan Buddhism. There is little detailed evidence of Chinese—especially lay Chinese—studying
Tibetan Buddhism until the Republican era (1912–1949). Despite these missions on behalf of the Hongwu Emperor, Morris
Rossabi writes that the Yongle Emperor (r. 1402–1424) "was the first Ming ruler actively to seek an extension of
relations with Tibet." According to the official Twenty-Four Histories, the History of Ming compiled in 1739 by the
subsequent Qing dynasty (1644–1912), the Ming dynasty established the "É-Lì-Sī Army-Civilian Marshal Office" (Chinese:
俄力思軍民元帥府) in western Tibet and installed the "Ü-Tsang Itinerant High Commandery" and "Amdo-Kham Itinerant High Commandery"
to administer Kham. The Mingshi states that administrative offices were set up under these high commanderies, including
one Itinerant Commandery, three Pacification Commissioner's Offices, six Expedition Commissioner's Offices, four
Wanhu offices (myriarchies, in command of 10,000 households each) and seventeen Qianhu offices (chiliarchies, each
in command of 1,000 households). The Ming court appointed three Princes of Dharma (法王) and five Princes (王), and
granted many other titles, such as Grand State Tutors (大國師) and State Tutors (國師), to the important schools of Tibetan
Buddhism, including the Karma Kagyu, Sakya, and Gelug. According to Wang Jiawei and Nyima Gyaincain, leading officials
of these organs were all appointed by the central government and were subject to the rule of law. Yet Van Praag describes
the distinct and long-lasting Tibetan law code established by the Phagmodru ruler Tai Situ Changchub Gyaltsen as
one of many reforms to revive old Imperial Tibetan traditions. The late Turrell V. Wylie, a former professor of the
University of Washington, and Li Tieh-tseng argue that the reliability of the heavily censored History of Ming as
a credible source on Sino-Tibetan relations is questionable, in the light of modern scholarship. Other historians
also assert that these Ming titles were nominal and did not actually confer the authority that the earlier Yuan titles
had. Van Praag writes that the "numerous economically motivated Tibetan missions to the Ming Court are referred to
as 'tributary missions' in the Ming Shih." Van Praag writes that these "tributary missions" were simply prompted
by China's need for horses from Tibet, since a viable horse market in Mongol lands was closed as a result of incessant
conflict. Morris Rossabi also writes that "Tibet, which had extensive contacts with China during the Yuan, scarcely
had diplomatic relations with the Ming." Historians disagree on what the relationship was between the Ming court
and Tibet and whether or not Ming China had sovereignty over Tibet. Van Praag writes that Chinese court historians
viewed Tibet as an independent foreign tributary and had little interest in Tibet besides a lama-patron relationship.
The historian Tsepon W. D. Shakabpa supports van Praag's position. However, Wang Jiawei and Nyima Gyaincain state
that these assertions by van Praag and Shakabpa are "fallacies". Wang and Nyima argue that the Ming emperor sent
edicts to Tibet twice in the second year of the Ming dynasty, and demonstrated that he viewed Tibet as a significant
region to pacify by urging various Tibetan tribes to submit to the authority of the Ming court. They note that at
the same time, the Mongol Prince Punala, who had inherited his position as ruler of areas of Tibet, went to Nanjing
in 1371 to pay tribute and show his allegiance to the Ming court, bringing with him the seal of authority issued
by the Yuan court. They also state that since successors of lamas granted the title of "prince" had to travel to
the Ming court to renew this title, and since lamas called themselves princes, the Ming court therefore had "full
sovereignty over Tibet." They state that the Ming dynasty, by issuing imperial edicts to invite ex-Yuan officials
to the court for official positions in the early years of its founding, won submission from ex-Yuan religious and
administrative leaders in the Tibetan areas, and thereby incorporated Tibetan areas into the rule of the Ming court.
Thus, they conclude, the Ming court won the power to rule Tibetan areas formerly under the rule of the Yuan dynasty.
Journalist and author Thomas Laird, in his book The Story of Tibet: Conversations with the Dalai Lama, writes that
Wang and Nyima present the government viewpoint of the People's Republic of China in their Historical Status of China's
Tibet, and fail to realize that China was "absorbed into a larger, non-Chinese political unit" during the Mongol
Yuan dynasty, which Wang and Nyima paint as a characteristic Chinese dynasty succeeded by the Ming. Laird asserts
that the ruling Mongol khans never administered Tibet as part of China and instead ruled them as separate territories,
comparing the Mongols with the British who colonized India and New Zealand, yet stating this does not make India
part of New Zealand as a consequence. Of later Mongol and Tibetan accounts interpreting the Mongol conquest of Tibet,
Laird asserts that "they, like all non-Chinese historical narratives, never portray the Mongol subjugation of Tibet
as a Chinese one." The Columbia Encyclopedia distinguishes between the Yuan dynasty and the other Mongol Empire khanates
of Ilkhanate, Chagatai Khanate and the Golden Horde. It describes the Yuan dynasty as "A Mongol dynasty of China
that ruled from 1271 to 1368, and a division of the great empire conquered by the Mongols. Founded by Kublai Khan,
who adopted the Chinese dynastic name of Yüan in 1271." The Encyclopedia Americana describes the Yuan dynasty as
"the line of Mongol rulers in China" and adds that the Mongols "proclaimed a Chinese-style Yüan dynasty at Khanbaliq
(Beijing)." The Metropolitan Museum of Art writes that the Mongol rulers of the Yuan dynasty "adopted Chinese political
and cultural models; ruling from their capitals in Dadu, they assumed the role of Chinese emperors," although Tibetologist
Thomas Laird dismisses the Yuan dynasty as a non-Chinese polity and plays down its Chinese characteristics. The Metropolitan
Museum of Art also noted that in spite of the gradual assimilation of Yuan monarchs, the Mongol rulers largely ignored
the literati and imposed harsh policies discriminating against southern Chinese. In his Kublai Khan: His Life and
Times, Rossabi explains that Kublai "created government institutions that either resembled or were the same as the
traditional Chinese ones", and he "wished to signal to the Chinese that he intended to adopt the trappings and style
of a Chinese ruler". Nevertheless, under the ethno-geographic caste hierarchy, the Mongols and other ethnicities
were accorded higher status than the Han Chinese majority. Although Han Chinese who were recruited as advisers were
often actually more influential than high officials, their status was not as well defined. Kublai also abolished
the imperial examinations of China's civil service legacy, which was not reinstated until Ayurbarwada Buyantu Khan's
reign (1311–1320). Rossabi writes that Kublai recognized that in order to rule China, "he had to employ Chinese advisors
and officials, yet he could not rely totally on Chinese advisers because he had to maintain a delicate balancing
act between ruling the sedentary civilization of China and preserving the cultural identity and values of the Mongols."
And "in governing China, he was concerned with the interests of his Chinese subjects, but also with exploiting the
resources of the empire for his own aggrandizement. His motivations and objectives alternated from one to the other
throughout his reign," according to Rossabi. Van Praag writes in The Status of Tibet that the Tibetans and Mongols,
on the other hand, upheld a dual system of rule and an interdependent relationship that legitimated the succession
of Mongol khans as universal Buddhist rulers, or chakravartin. Van Praag writes that "Tibet remained a unique part
of the Empire and was never fully integrated into it," citing examples such as a licensed border market that existed
between China and Tibet during the Yuan. The official position of the Ministry of Foreign Affairs of the People's
Republic of China is that the Ming implemented a policy of managing Tibet according to conventions and customs, granting
titles and setting up administrative organs over Tibet. The State Council Information Office of the People's Republic
states that the Ming dynasty's Ü-Tsang Commanding Office governed most areas of Tibet. It also states that while
the Ming abolished the policy council set up by the Mongol Yuan to manage local affairs in Tibet and the Mongol system
of Imperial Tutors to govern religious affairs, the Ming adopted a policy of bestowing titles upon religious leaders
who had submitted to the Ming dynasty. For example, an edict of the Hongwu Emperor in 1373 appointed the Tibetan
leader Choskunskyabs as the General of the Ngari Military and Civil Wanhu Office, stating: Chen Qingying, Professor
of History and Director of the History Studies Institute under the China Tibetology Research Center in Beijing, writes
that the Ming court conferred new official positions on ex-Yuan Tibetan leaders of the Phachu Kargyu and granted
them lower-ranking positions. Of the county (zong or dzong) leaders of Neiwo Zong and Renbam Zong, Chen states that
when "the Emperor learned the actual situation of the Phachu Kargyu, the Ming court then appointed the main Zong
leaders to be senior officers of the Senior Command of Dbus and Gtsang." The official posts that the Ming court established
in Tibet, such as senior and junior commanders, offices of Qianhu (in charge of 1,000 households), and offices of
Wanhu (in charge of 10,000 households), were all hereditary positions according to Chen, but he asserts that "the
succession of some important posts still had to be approved by the emperor," while old imperial mandates had to be
returned to the Ming court for renewal. According to Tibetologist John Powers, Tibetan sources counter this narrative
of titles granted by the Chinese to Tibetans with various titles which the Tibetans gave to the Chinese emperors
and their officials. Tribute missions from Tibetan monasteries to the Chinese court brought back not only titles,
but large, commercially valuable gifts which could subsequently be sold. The Ming emperors sent invitations to ruling
lamas, but the lamas sent subordinates rather than coming themselves, and no Tibetan ruler ever explicitly accepted
the role of being a vassal of the Ming. Hans Bielenstein writes that as far back as the Han dynasty (202 BCE–220
CE), the Han Chinese government "maintained the fiction" that the foreign officials administering the various "Dependent
States" and oasis city-states of the Western Regions (composed of the Tarim Basin and oasis of Turpan) were true
Han representatives due to the Han government's conferral of Chinese seals and seal cords to them. Wang and Nyima
state that after the official title "Education Minister" was granted to Tai Situ Changchub Gyaltsen (1302–1364) by
the Yuan court, this title appeared frequently with his name in various Tibetan texts, while his Tibetan title "Degsi"
(sic; properly sde-srid or desi) is seldom mentioned. Wang and Nyima take this to mean that "even in the later period
of the Yuan dynasty, the Yuan imperial court and the Phagmodrupa Dynasty maintained a Central-local government relation."
The Tai Situpa is even supposed to have written in his will: "In the past I received loving care from the emperor
in the east. If the emperor continues to care for us, please follow his edicts and the imperial envoy should be well
received." However, Hok-Lam Chan, a professor of history at the University of Washington, writes that Changchub Gyaltsen's
aims were to recreate the old Tibetan Kingdom that existed during the Chinese Tang dynasty, to build "nationalist
sentiment" amongst Tibetans, and to "remove all traces of Mongol suzerainty." Georges Dreyfus, a professor of religion
at Williams College, writes that it was Changchub Gyaltsen who adopted the old administrative system of Songtsän
Gampo (c. 605–649)—the first leader of the Tibetan Empire to establish Tibet as a strong power—by reinstating its
legal code of punishments and administrative units. For example, instead of the 13 governorships established by the
Mongol Sakya viceroy, Changchub Gyaltsen divided Central Tibet into districts (dzong) with district heads (dzong
dpon) who had to conform to old rituals and wear clothing styles of old Imperial Tibet. Van Praag asserts that Changchub
Gyaltsen's ambitions were to "restore to Tibet the glories of its Imperial Age" by reinstating secular administration,
promoting "national culture and traditions," and installing a law code that survived into the 20th century. According
to Chen, the Ming officer of Hezhou (modern day Linxia) informed the Hongwu Emperor that the general situation in
Dbus and Gtsang "was under control," and so he suggested to the emperor that he offer the second Phagmodru ruler,
Jamyang Shakya Gyaltsen, an official title. According to the Records of the Founding Emperor, the Hongwu Emperor
issued an edict granting the title "Initiation State Master" to Sagya Gyaincain, while the latter sent envoys to
the Ming court to hand over his jade seal of authority along with tribute of colored silk and satin, statues of the
Buddha, Buddhist scriptures, and sarira. Dreyfus writes that after the Phagmodrupa lost its centralizing power over
Tibet in 1434, several attempts by other families to establish hegemonies failed over the next two centuries until
1642 with the 5th Dalai Lama's effective hegemony over Tibet. The Ming dynasty granted titles to lamas of schools
such as the Karmapa Kargyu, but the latter had previously declined Mongol invitations to receive titles. When the
Ming Yongle Emperor invited Je Tsongkhapa (1357–1419), founder of the Gelug school, to come to the Ming court and
pay tribute, the latter declined. Wang and Nyima write that this was due to old age and physical weakness, and also
because of efforts being made to build three major monasteries. Chen Qingying states that Tsongkhapa wrote a letter
to decline the Emperor's invitation, and in this reply, Tsongkhapa wrote: A. Tom Grunfeld says that Tsongkhapa claimed
ill health in his refusal to appear at the Ming court, while Rossabi adds that Tsongkhapa cited the "length and arduousness
of the journey" to China as another reason not to make an appearance. This first request by the Ming was made in
1407, but the Ming court sent another embassy in 1413, this one led by the eunuch Hou Xian (侯顯; fl. 1403–1427), which
was again refused by Tsongkhapa. Rossabi writes that Tsongkhapa did not want to entirely alienate the Ming court,
so he sent his disciple Chosrje Shākya Yeshes to Nanjing in 1414 on his behalf, and upon his arrival in 1415 the
Yongle Emperor bestowed upon him the title of "State Teacher"—the same title earlier awarded the Phagmodrupa ruler
of Tibet. The Xuande Emperor (r. 1425–1435) even granted this disciple Chosrje Shākya Yeshes the title of a "King"
(王). This title does not appear to have held any practical meaning, or to have given its holder any power, at Tsongkhapa's
Ganden Monastery. Wylie notes that this—like the Karma Kargyu—cannot be seen as a reappointment of Mongol Yuan offices,
since the Gelug school was created after the fall of the Yuan dynasty. Dawa Norbu argues that modern Chinese Communist
historians tend to be in favor of the view that the Ming simply reappointed old Yuan dynasty officials in Tibet and
perpetuated their rule of Tibet in this manner. Norbu writes that, although this would have been true for the eastern
Tibetan regions of Amdo and Kham's "tribute-cum-trade" relations with the Ming, it was untrue if applied to the western
Tibetan regions of Ü-Tsang and Ngari. After the Phagmodrupa Changchub Gyaltsen, these were ruled by "three successive
nationalistic regimes," which Norbu writes "Communist historians prefer to ignore." Laird writes that the Ming appointed
titles to eastern Tibetan princes, and that "these alliances with eastern Tibetan principalities are the evidence
China now produces for its assertion that the Ming ruled Tibet," despite the fact that the Ming did not send an army
to replace the Mongols after they left Tibet. Yiu Yung-chin states that the furthest western extent of the Ming dynasty's
territory was Gansu, Sichuan, and Yunnan while "the Ming did not possess Tibet." Shih-Shan Henry Tsai writes that
the Yongle Emperor sent his eunuch Yang Sanbao into Tibet in 1413 to gain the allegiance of various Tibetan princes,
while the Yongle Emperor paid a small fortune in return gifts for tributes in order to maintain the loyalty of neighboring
vassal states such as Nepal and Tibet. However, Van Praag states that Tibetan rulers upheld their own separate relations
with the kingdoms of Nepal and Kashmir, and at times "engaged in armed confrontation with them." Even though the
Gelug exchanged gifts with and sent missions to the Ming court up until the 1430s, the Gelug was not mentioned in
the Mingshi or the Mingshi Lu. On this, historian Li Tieh-tseng says of Tsongkhapa's refusal of Ming invitations
to visit the Yongle Emperor's court: Wylie asserts that this type of censorship of the History of Ming distorts the
true picture of the history of Sino-Tibetan relations, while the Ming court granted titles to various lamas regardless
of their sectarian affiliations in an ongoing civil war in Tibet between competing Buddhist factions. Wylie argues
that Ming titles of "King" granted indiscriminately to various Tibetan lamas or even their disciples should not be
viewed as reappointments to earlier Yuan dynasty offices, since the viceregal Sakya regime established by the Mongols
in Tibet was overthrown by the Phagmodru myriarchy before the Ming existed. Helmut Hoffman states that the Ming upheld
the facade of rule over Tibet through periodic missions of "tribute emissaries" to the Ming court and by granting
nominal titles to ruling lamas, but did not actually interfere in Tibetan governance. Melvyn C. Goldstein writes
that the Ming had no real administrative authority over Tibet, as the various titles given to Tibetan leaders did
not confer authority as the earlier Mongol Yuan titles had. He asserts that "by conferring titles on Tibetans already
in power, the Ming emperors merely recognized political reality." Hugh Edward Richardson writes that the Ming dynasty
exercised no authority over the succession of Tibetan ruling families, the Phagmodru (1354–1435), Rinpungpa (1435–1565),
and Tsangpa (1565–1642). In his usurpation of the throne from the Jianwen Emperor (r. 1398–1402), the Yongle Emperor
was aided by the Buddhist monk Yao Guangxiao, and like his father, the Hongwu Emperor, the Yongle Emperor was "well-disposed
towards Buddhism", claims Rossabi. On March 10, 1403, the Yongle Emperor invited Deshin Shekpa, 5th Karmapa Lama
(1384–1415), to his court, even though the fourth Karmapa had rejected the invitation of the Hongwu Emperor. A Tibetan
translation in the 16th century preserves the letter of the Yongle Emperor, which the Association for Asian Studies
notes is polite and complimentary towards the Karmapa. The letter of invitation reads, In order to seek out the Karmapa,
the Yongle Emperor dispatched his eunuch Hou Xian and the Buddhist monk Zhi Guang (d. 1435) to Tibet. Traveling to
Lhasa either through Qinghai or via the Silk Road to Khotan, Hou Xian and Zhi Guang did not return to Nanjing until
1407. During his travels beginning in 1403, Deshin Shekpa was induced by further exhortations by the Ming court to
visit Nanjing by April 10, 1407. Norbu writes that the Yongle Emperor, following the tradition of Mongol emperors
and their reverence for the Sakya lamas, showed an enormous amount of deference towards Deshin Shekpa. The Yongle
Emperor came out of the palace in Nanjing to greet the Karmapa and did not require him to kowtow like a tributary
vassal. According to Karma Thinley, the emperor gave the Karmapa the place of honor at his left, and on a higher
throne than his own. Rossabi and others describe a similar arrangement made by Kublai Khan and the Sakya Phagpa lama,
writing that Kublai would "sit on a lower platform than the Tibetan cleric" when receiving religious instructions
from him. Throughout the following month, the Yongle Emperor and his court showered the Karmapa with presents. At
Linggu Temple in Nanjing, he presided over the religious ceremonies for the Yongle Emperor's deceased parents, while
twenty-two days of his stay were marked by religious miracles that were recorded in five languages on a gigantic
scroll that bore the Emperor's seal. During his stay in Nanjing, Deshin Shekpa was bestowed the title "Great Treasure
Prince of Dharma" by the Yongle Emperor. Elliot Sperling asserts that the Yongle Emperor, in bestowing Deshin Shekpa
with the title of "King" and praising his mystical abilities and miracles, was trying to build an alliance with the
Karmapa as the Mongols had with the Sakya lamas, but Deshin Shekpa rejected the Yongle Emperor's offer. In fact,
this was the same title that Kublai Khan had offered the Sakya Phagpa lama, but Deshin Shekpa persuaded the Yongle
Emperor to grant the title to religious leaders of other Tibetan Buddhist sects. Tibetan sources say Deshin Shekpa
also persuaded the Yongle Emperor not to impose his military might on Tibet as the Mongols had previously done. Thinley
writes that before the Karmapa returned to Tibet, the Yongle Emperor began planning to send a military force into
Tibet to forcibly give the Karmapa authority over all the Tibetan Buddhist schools but Deshin Shekpa dissuaded him.
However, Hok-Lam Chan states that "there is little evidence that this was ever the emperor's intention" and that
evidence indicates that Deshin Shekpa was invited strictly for religious purposes. Marsha Weidner states that Deshin
Shekpa's miracles "testified to the power of both the emperor and his guru and served as a legitimizing tool for
the emperor's problematic succession to the throne," referring to the Yongle Emperor's conflict with the previous
Jianwen Emperor. Tsai writes that Deshin Shekpa aided the legitimacy of the Yongle Emperor's rule by providing him
with portents and omens which demonstrated Heaven's favor of the Yongle Emperor on the Ming throne. With the example
of the Ming court's relationship with the fifth Karmapa and other Tibetan leaders, Norbu states that Chinese Communist
historians have failed to realize the significance of the religious aspect of the Ming-Tibetan relationship. He writes
that the meetings of lamas with the Emperor of China were exchanges of tribute between "the patron and the priest"
and were not merely instances of a political subordinate paying tribute to a superior. He also notes that the items
of tribute were Buddhist artifacts which symbolized "the religious nature of the relationship." Josef Kolmaš writes
that the Ming dynasty did not exercise any direct political control over Tibet, content with their tribute relations
that were "almost entirely of a religious character." Patricia Ann Berger writes that the Yongle Emperor's courting
and granting of titles to lamas was his attempt to "resurrect the relationship between China and Tibet established
earlier by the Yuan dynastic founder Khubilai Khan and his guru Phagpa." She also writes that the later Qing emperors
and their Mongol associates viewed the Yongle Emperor's relationship with Tibet as "part of a chain of reincarnation
that saw this Han Chinese emperor as yet another emanation of Manjusri." The Information Office of the State Council
of the PRC preserves an edict of the Zhengtong Emperor (r. 1435–1449) addressed to the Karmapa in 1445, written after
the latter's agent had brought holy relics to the Ming court. Zhengtong had the following message delivered to the
Great Treasure Prince of Dharma, the Karmapa: Despite this glowing message by the Emperor, Chan writes that a year
later in 1446, the Ming court cut off all relations with the Karmapa hierarchs. Until then, the court was unaware
that Deshin Shekpa had died in 1415. The Ming court had believed that the representatives of the Karma Kagyu who
continued to visit the Ming capital were sent by the Karmapa. Tsai writes that shortly after the visit by Deshin
Shekpa, the Yongle Emperor ordered the construction of a road and of trading posts in the upper reaches of the Yangzi
and Mekong Rivers in order to facilitate trade with Tibet in tea, horses, and salt. The trade route passed through
Sichuan and crossed Shangri-La County in Yunnan. Wang and Nyima assert that this "tribute-related trade" of the Ming
exchanging Chinese tea for Tibetan horses—while granting Tibetan envoys and Tibetan merchants explicit permission
to trade with Han Chinese merchants—"furthered the rule of the Ming dynasty court over Tibet". Rossabi and Sperling
note that this trade in Tibetan horses for Chinese tea existed long before the Ming. Peter C. Perdue says that Wang
Anshi (1021–1086), realizing that China could not produce enough militarily capable steeds, had also aimed to obtain
horses from Inner Asia in exchange for Chinese tea. The Chinese needed horses not only for cavalry but also as draft
animals for the army's supply wagons. The Tibetans required Chinese tea not only as a common beverage but also as
a religious ceremonial supplement. The Ming government imposed a monopoly on tea production and attempted to regulate
this trade with state-supervised markets, but these collapsed in 1449 due to military failures and internal ecological
and commercial pressures on the tea-producing regions. Van Praag states that the Ming court established diplomatic
delegations with Tibet merely to secure urgently needed horses. Wang and Nyima argue that these were not diplomatic
delegations at all, that Tibetan areas were ruled by the Ming since Tibetan leaders were granted positions as Ming
officials, that horses were collected from Tibet as a mandatory "corvée" tax, and therefore Tibetans were "undertaking
domestic affairs, not foreign diplomacy". Sperling writes that the Ming simultaneously bought horses in the Kham
region while fighting Tibetan tribes in Amdo and receiving Tibetan embassies in Nanjing. He also argues that the
embassies of Tibetan lamas visiting the Ming court were for the most part efforts to promote commercial transactions
between the lamas' large, wealthy entourage and Ming Chinese merchants and officials. Kolmaš writes that while the
Ming maintained a laissez-faire policy towards Tibet and limited the numbers of the Tibetan retinues, the Tibetans
sought to maintain a tributary relationship with the Ming because imperial patronage provided them with wealth and
power. Laird writes that Tibetans eagerly sought Ming court invitations since the gifts the Tibetans received for
bringing tribute were much greater in value than the tribute itself. As for the Yongle Emperor's gifts to his Tibetan and
Nepalese vassals such as silver wares, Buddha relics, utensils for Buddhist temples and religious ceremonies, and
gowns and robes for monks, Tsai writes "in his effort to draw neighboring states to the Ming orbit so that he could
bask in glory, the Yongle Emperor was quite willing to pay a small price". The Information Office of the State Council
of the PRC lists the Tibetan tribute items as oxen, horses, camels, sheep, fur products, medical herbs, Tibetan incenses,
thangkas (painted scrolls), and handicrafts; while the Ming awarded Tibetan tribute-bearers an equal value of gold,
silver, satin and brocade, bolts of cloth, grains, and tea leaves. Silk workshops during the Ming also catered specifically
to the Tibetan market with silk clothes and furnishings featuring Tibetan Buddhist iconography. While the Ming dynasty
traded horses with Tibet, it upheld a policy of outlawing border markets in the north, which Laird sees as an effort
to punish the Mongols for their raids and to "drive them from the frontiers of China." However, after Altan Khan
(1507–1582)—leader of the Tümed Mongols who overthrew the Oirat Mongol confederation's hegemony over the steppes—made
peace with the Ming dynasty in 1571, he persuaded the Ming to reopen their border markets in 1573. This provided
the Chinese with a new supply of horses that the Mongols had in excess; it was also a relief to the Ming, since they
were unable to stop the Mongols from periodic raiding. Laird says that despite the fact that later Mongols believed
Altan forced the Ming to view him as an equal, Chinese historians argue that he was simply a loyal Chinese citizen.
By 1578, Altan Khan formed a formidable Mongol-Tibetan alliance with the Gelug that the Ming viewed from afar without
intervention. Patricia Ebrey writes that Tibet, like Joseon Korea and other neighboring states to the Ming, settled
for its tributary status while there were no troops or governors of Ming China stationed in its territory. Laird
writes that "after the Mongol troops left Tibet, no Ming troops replaced them." Wang and Nyima state that, despite
the fact that the Ming refrained from sending troops to subdue Tibet and refrained from garrisoning Ming troops there,
these measures were unnecessary so long as the Ming court upheld close ties with Tibetan vassals and their forces.
However, there were instances in the 14th century when the Hongwu Emperor did use military force to quell unrest
in Tibet. John D. Langlois writes that there was unrest in Tibet and western Sichuan, which the Marquis Mu Ying (沐英)
was commissioned to quell in November 1378 after he established a Taozhou garrison in Gansu. Langlois notes that
by October 1379, Mu Ying had allegedly captured 30,000 Tibetan prisoners and 200,000 domesticated animals. Yet invasion
went both ways; the Ming general Qu Neng, under the command of Lan Yu, was ordered to repel a Tibetan assault into
Sichuan in 1390. Discussions of strategy in the mid Ming dynasty focused primarily on recovery of the Ordos region,
which the Mongols used as a rallying base to stage raids into Ming China. Norbu states that the Ming dynasty, preoccupied
with the Mongol threat to the north, could not spare additional armed forces to enforce or back up their claim of
sovereignty over Tibet; instead, they relied on "Confucian instruments of tribute relations", heaping an unlimited
number of titles and gifts on Tibetan lamas through acts of diplomacy. Sperling states that the delicate relationship
between the Ming and Tibet was "the last time a united China had to deal with an independent Tibet," that there was
a potential for armed conflict at their borders, and that the ultimate goal of Ming foreign policy with Tibet was
not subjugation but "avoidance of any kind of Tibetan threat." P. Christiaan Klieger argues that the Ming court's
patronage of high Tibetan lamas "was designed to help stabilize border regions and protect trade routes." Historians
Luciano Petech and Sato Hisashi argue that the Ming upheld a "divide-and-rule" policy towards a weak and politically
fragmented Tibet after the Sakya regime had fallen. Chan writes that this was perhaps the calculated strategy of
the Yongle Emperor, as exclusive patronage to one Tibetan sect would have given it too much regional power. Sperling
finds no textual evidence in either Chinese or Tibetan sources to support this thesis of Petech and Hisashi. Norbu
asserts that their thesis is largely based on the list of Ming titles conferred on Tibetan lamas rather than "comparative
analysis of developments in China and Tibet." Rossabi states that this theory "attributes too much influence to the
Chinese," pointing out that Tibet was already politically divided when the Ming dynasty began. Rossabi also discounts
the "divide-and-rule" theory on the grounds of the Yongle Emperor's failed attempt to build a strong relationship
with the fifth Karmapa—one which he hoped would parallel Kublai Khan's earlier relationship with the Sakya Phagpa
lama. Instead, the Yongle Emperor followed the Karmapa's advice of giving patronage to many different Tibetan lamas.
The Association for Asian Studies states that there is no known written evidence to suggest that later leaders of
the Gelug—Gendün Drup (1391–1474) and Gendün Gyatso (1475–1571)—had any contacts with Ming China. These two religious
leaders were preoccupied with an overriding concern for dealing with the powerful secular Rinpungpa princes, who
were patrons and protectors of the Karma Kargyu lamas. The Rinpungpa leaders were relatives of the Phagmodrupa, yet
their authority shifted over time from simple governors to rulers in their own right over large areas of Ü-Tsang.
The prince of Rinbung occupied Lhasa in 1498 and excluded the Gelug from attending New Year's ceremonies and prayers,
the most important event for the Gelug school. While the task of New Year's prayers in Lhasa was granted to the Karmapa and
others, Gendün Gyatso traveled in exile looking for allies. However, it was not until 1518 that the secular Phagmodru
ruler captured Lhasa from the Rinbung, and thereafter the Gelug was given rights to conduct the New Year's prayer.
When the Drikung Kagyu abbot of Drigung Monastery threatened Lhasa in 1537, Gendün Gyatso was forced to abandon the
Drepung Monastery, although he eventually returned. The Zhengde Emperor (r. 1505–1521), who enjoyed the company of
lamas at court despite protests from the censorate, had heard tales of a "living Buddha" which he desired to host
at the Ming capital; this was none other than the Rinpung-supported Mikyö Dorje, 8th Karmapa Lama then occupying
Lhasa. Zhengde's top advisors made every attempt to dissuade him from inviting this lama to court, arguing that Tibetan
Buddhism was wildly heterodox and unorthodox. Despite protests by the Grand Secretary Liang Chu, in 1515 the Zhengde
Emperor sent his eunuch official Liu Yun of the Palace Chancellery on a mission to invite this Karmapa to Beijing.
Liu commanded a fleet of hundreds of ships requisitioned along the Yangtze, consuming 2,835 g (100 oz) of silver
a day in food expenses while stationed for a year in Chengdu of Sichuan. After procuring the necessary gifts for the
mission, he departed with a cavalry force of about 1,000 troops. When the request was delivered, the Karmapa lama
refused to leave Tibet despite the Ming force brought to coerce him. The Karmapa launched a surprise ambush on Liu
Yun's camp, seizing all the goods and valuables while killing or wounding half of Liu Yun's entire escort. After
this fiasco, Liu fled for his life, but only returned to Chengdu several years later to find that the Zhengde Emperor
had died. Elliot Sperling, a specialist of Indian studies and the director of the Tibetan Studies program at Indiana
University’s Department of Central Eurasia Studies, writes that "the idea that Tibet became part of China in the
13th century is a very recent construction." He writes that Chinese writers of the early 20th century were of the
view that Tibet was not annexed by China until the Manchu Qing dynasty invasion during the 18th century. He also
states that Chinese writers of the early 20th century described Tibet as a feudal dependency of China, not an integral
part of it. Sperling states that this is because "Tibet was ruled as such, within the empires of the Mongols and
the Manchus" and also that "China's intervening Ming dynasty ... had no control over Tibet." He writes that the Ming
relationship with Tibet is problematic for China's insistence on its unbroken sovereignty over Tibet since the 13th
century. As for the Tibetan view that Tibet was never subject to the rule of the Yuan or Qing emperors of China,
Sperling also discounts this by stating that Tibet was "subject to rules, laws and decisions made by the Yuan and
Qing rulers" and that even Tibetans described themselves as subjects of these emperors. Josef Kolmaš, a sinologist,
Tibetologist, and Professor of Oriental Studies at the Academy of Sciences of the Czech Republic, writes that it
was during the Qing dynasty "that developments took place on the basis of which Tibet came to be considered an organic
part of China, both practically and theoretically subject to the Chinese central government." Yet he states that
this was a radical change in regards to all previous eras of Sino-Tibetan relations. P. Christiaan Klieger, an anthropologist
and scholar of the California Academy of Sciences in San Francisco, writes that the vice royalty of the Sakya regime
installed by the Mongols established a patron and priest relationship between Tibetans and Mongol converts to Tibetan
Buddhism. According to him, the Tibetan lamas and Mongol khans upheld a "mutual role of religious prelate and secular
patron," respectively. He adds that "Although agreements were made between Tibetan leaders and Mongol khans, Ming
and Qing emperors, it was the Republic of China and its Communist successors that assumed the former imperial tributaries
and subject states as integral parts of the Chinese nation-state." China Daily, a CCP-controlled news organization
founded in 1981, states in a 2008 article that although there were dynastic changes after Tibet was incorporated into
the territory of Yuan dynasty's China in the 13th century, "Tibet has remained under the jurisdiction of the central
government of China." It also states that the Ming dynasty "inherited the right to rule Tibet" from the Yuan dynasty,
and repeats the claims in the Mingshi about the Ming establishing two itinerant high commands over Tibet. China Daily
states that the Ming handled Tibet's civil administration, appointed all leading officials of these administrative
organs, and punished Tibetans who broke the law. The party-controlled People's Daily, the state-controlled Xinhua
News Agency, and the state-controlled national television network China Central Television posted the same article
that China Daily had, the only difference being their headlines and some additional text. During the reign of the
Jiajing Emperor (r. 1521–1567), the native Chinese ideology of Daoism was fully sponsored at the Ming court, while
Tibetan Vajrayana and even Chinese Buddhism were ignored or suppressed. Even the History of Ming states that the
Tibetan lamas discontinued their trips to Ming China and its court at this point. Grand Secretary Yang Tinghe under
Jiajing was determined to break the eunuch influence at court which typified the Zhengde era, an example being the
costly escort of the eunuch Liu Yun as described above in his failed mission to Tibet. The court eunuchs were in
favor of expanding and building new commercial ties with foreign countries such as Portugal, which Zhengde deemed
permissible since he had an affinity for foreign and exotic people. With the death of Zhengde and ascension of Jiajing,
the politics at court shifted in favor of the Neo-Confucian establishment which not only rejected the Portuguese
embassy of Fernão Pires de Andrade (d. 1523), but had a predisposed animosity towards Tibetan Buddhism and lamas.
Evelyn S. Rawski, a professor in the Department of History of the University of Pittsburgh, writes that the Ming's
unique relationship with Tibetan prelates essentially ended with Jiajing's reign while Ming influence in the Amdo
region was supplanted by the Mongols. Meanwhile, the Tumed Mongols began moving into the Kokonor region (modern Qinghai),
raiding the Ming Chinese frontier and even as far as the suburbs of Beijing under Altan Khan (1507–1582). Klieger
writes that Altan Khan's presence in the west effectively reduced Ming influence and contact with Tibet. After Altan
Khan made peace with the Ming dynasty in 1571, he invited the third hierarch of the Gelug—Sönam Gyatso (1543–1588)—to
meet him in Amdo (modern Qinghai) in 1578, where Altan Khan bestowed on him and his two predecessors the title
of Dalai Lama—"Ocean Teacher". The full title was "Dalai Lama Vajradhara", "Vajradhara" meaning "Holder of the Thunderbolt"
in Sanskrit. Victoria Huckenpahler notes that Vajradhara is considered by Buddhists to be the primordial Buddha of
limitless and all-pervasive beneficial qualities, a being that "represents the ultimate aspect of enlightenment."
Goldstein writes that Sönam Gyatso also enhanced Altan Khan's standing by granting him the title "king of religion,
majestic purity". Rawski writes that the Dalai Lama officially recognized Altan Khan as the "Protector of the Faith".
Laird writes that Altan Khan abolished the native Mongol practices of shamanism and blood sacrifice, while the Mongol
princes and subjects were coerced by Altan to convert to Gelug Buddhism—or face execution if they persisted in their
shamanistic ways. Committed to their religious leader, Mongol princes began requesting the Dalai Lama to bestow titles
on them, which demonstrated "the unique fusion of religious and political power" wielded by the Dalai Lama, as Laird
writes. Kolmaš states that the spiritual and secular Mongol-Tibetan alliance of the 13th century was renewed by this
alliance constructed by Altan Khan and Sönam Gyatso. Van Praag writes that this restored the original Mongol patronage
of a Tibetan lama and "to this day, Mongolians are among the most devout followers of the Gelugpa and the Dalai Lama."
Angela F. Howard writes that this unique relationship not only provided the Dalai Lama and Panchen Lama with religious
and political authority in Tibet, but that Altan Khan gained "enormous power among the entire Mongol population."
Rawski writes that Altan Khan's conversion to the Gelug "can be interpreted as an attempt to expand his authority
in his conflict with his nominal superior, Tümen Khan." To further cement the Mongol-Tibetan alliance, the great-grandson
of Altan Khan—Yonten Gyatso (1589–1616)—was made the fourth Dalai Lama. In 1642, the 5th Dalai Lama (1617–1682)
became the first to wield effective political control over Tibet. Sonam Gyatso, after being granted the grandiose
title by Altan Khan, departed for Tibet. Before he left, he sent a letter and gifts to the Ming Chinese official
Zhang Juzheng (1525–1582), which arrived on March 12, 1579. Sometime in August or September of that year, Sonam Gyatso's
representative stationed with Altan Khan received a return letter and gift from the Wanli Emperor (r. 1572–1620),
who also conferred upon Sonam Gyatso a title; this was the first official contact between a Dalai Lama and a government
of China. However, Laird states that when Wanli invited him to Beijing, the Dalai Lama declined the offer due to
a prior commitment, even though he was only 400 km (250 mi) from Beijing. Laird adds that "the power of the Ming
emperor did not reach very far at the time." Although not recorded in any official Chinese records, Sonam Gyatso's
biography states that Wanli again conferred titles on Sonam Gyatso in 1588, and invited him to Beijing for a second
time, but Sonam Gyatso was unable to visit China as he died the same year in Mongolia working with Altan Khan's son
to further the spread of Buddhism. Of the third Dalai Lama, China Daily states that the "Ming dynasty showed him
special favor by allowing him to pay tribute." China Daily then says that Sonam Gyatso was granted the title Dorjichang
or Vajradhara Dalai Lama in 1587 [sic], but China Daily does not mention who granted him the title. Without mentioning
the role of the Mongols, China Daily states that it was the successive Qing dynasty which established the title of
Dalai Lama and his power in Tibet: "In 1653, the Qing emperor granted an honorific title to the fifth Dalai Lama
and then did the same for the fifth Panchen Lama in 1713, officially establishing the titles of the Dalai Lama and
the Panchen Erdeni, and their political and religious status in Tibet." Chen states that the fourth Dalai Lama Yonten
Gyatso was granted the title "Master of Vajradhara" and an official seal by the Wanli Emperor in 1616. This was noted
in the Biography of the Fourth Dalai Lama, which stated that one Soinam Lozui delivered the seal of the Emperor to
the Dalai Lama. The Wanli Emperor had invited Yonten Gyatso to Beijing in 1616, but just like his predecessor he
died before being able to make the journey. Kolmaš writes that, as the Mongol presence in Tibet increased, culminating
in the conquest of Tibet by a Mongol leader in 1642, the Ming emperors "viewed with apparent unconcern these developments
in Tibet." He adds that the Ming court's lack of concern for Tibet was one of the reasons why the Mongols pounced
on the chance to reclaim their old vassal of Tibet and "fill once more the political vacuum in that country." On
the mass Mongol conversion to Tibetan Buddhism under Altan Khan, Laird writes that "the Chinese watched these developments
with interest, though few Chinese ever became devout Tibetan Buddhists." In 1565, the powerful Rinbung princes were
overthrown by one of their own ministers, Karma Tseten who styled himself as the Tsangpa, "the one of Tsang", and
established his base of power at Shigatse. The second successor of this first Tsang king, Karma Phuntsok Namgyal,
took control of the whole of Central Tibet (Ü-Tsang), reigning from 1611 to 1621. Despite this, the leaders of Lhasa
still claimed their allegiance to the Phagmodru as well as the Gelug, while the Ü-Tsang king allied with the Karmapa.
Tensions rose between the nationalistic Ü-Tsang ruler and the Mongols who safeguarded their Mongol Dalai Lama in
Lhasa. The fourth Dalai Lama refused to give an audience to the Ü-Tsang king, which sparked a conflict as the latter
began assaulting Gelug monasteries. Chen writes of the speculation over the fourth Dalai Lama's mysterious death
and the plot of the Ü-Tsang king to have him murdered for "cursing" him with illness, although Chen writes that the
murder was most likely the result of a feudal power struggle. In 1618, only two years after Yonten Gyatso died, the
Gelug and the Karma Kargyu went to war, the Karma Kargyu supported by the secular Ü-Tsang king. The Ü-Tsang ruler
had a large number of Gelugpa lamas killed, occupied their monasteries at Drepung and Sera, and outlawed any attempts
to find another Dalai Lama. In 1621, the Ü-Tsang king died and was succeeded by his young son Karma Tenkyong, an
event which stymied the war effort as the latter accepted the six-year-old Lozang Gyatso as the new Dalai Lama. Despite
the new Dalai Lama's diplomatic efforts to maintain friendly relations with the new Ü-Tsang ruler, Sonam Rapten (1595–1657; also known as Sonam Chöpel), the Dalai Lama's chief steward and treasurer at Drepung, made efforts to overthrow the Ü-Tsang king, which led to
another conflict. In 1633, the Gelugpas and several thousand Mongol adherents defeated the Ü-Tsang king's troops
near Lhasa before a peaceful negotiation was settled. Goldstein writes that in this the "Mongols were again playing
a significant role in Tibetan affairs, this time as the military arm of the Dalai Lama." When an ally of the Ü-Tsang
ruler threatened destruction of the Gelugpas again, the fifth Dalai Lama Lozang Gyatso pleaded for help from the
Mongol prince Güshi Khan (1582–1655), leader of the Khoshut (Qoshot) tribe of the Oirat Mongols, who was then on
a pilgrimage to Lhasa. Güshi Khan accepted his role as protector, and from 1637 to 1640 he not only defeated the Gelugpas'
enemies in the Amdo and Kham regions, but also resettled his entire tribe into Amdo. Sonam Chöpel urged Güshi Khan
to assault the Ü-Tsang king's homebase of Shigatse, which Güshi Khan agreed upon, enlisting the aid of Gelug monks
and supporters. In 1642, after a year's siege of Shigatse, the Ü-Tsang forces surrendered. Güshi Khan then captured
and summarily executed Karma Tenkyong, the ruler of Ü-Tsang and King of Tibet. Soon after the victory in Ü-Tsang, Güshi
Khan organized a welcoming ceremony for Lozang Gyatso once he arrived a day's ride from Shigatse, presenting his
conquest of Tibet as a gift to the Dalai Lama. In a second ceremony held within the main hall of the Shigatse fortress,
Güshi Khan enthroned the Dalai Lama as the ruler of Tibet, but conferred the actual governing authority to the regent
Sonam Chöpel. Although Güshi Khan had granted the Dalai Lama "supreme authority", as Goldstein writes, the title of 'King of Tibet' was conferred upon Güshi Khan, who spent his summers in pastures north of Lhasa and occupied Lhasa
each winter. Van Praag writes that at this point Güshi Khan maintained control over the armed forces, but accepted
his inferior status towards the Dalai Lama. Rawski writes that the Dalai Lama shared power with his regent and Güshi
Khan during his early secular and religious reign. However, Rawski states that he eventually "expanded his own authority
by presenting himself as Avalokiteśvara through the performance of rituals," by building the Potala Palace and other
structures on traditional religious sites, and by emphasizing lineage reincarnation through written biographies.
Goldstein states that the government of Güshi Khan and the Dalai Lama persecuted the Karma Kagyu sect, confiscated
their wealth and property, and even converted their monasteries into Gelug monasteries. Rawski writes that this Mongol
patronage allowed the Gelugpas to dominate the rival religious sects in Tibet. Meanwhile, the Chinese Ming dynasty
fell to the rebellion of Li Zicheng (1606–1645) in 1644, yet his short-lived Shun dynasty was crushed by the Manchu
invasion and the Han Chinese general Wu Sangui (1612–1678). China Daily states that when the following Qing dynasty
replaced the Ming dynasty, it merely "strengthened administration of Tibet." However, Kolmaš states that the Dalai
Lama was very observant of what was going on in China and accepted a Manchu invitation in 1640 to send envoys to
their capital at Mukden in 1642, before the Ming collapsed. Dawa Norbu, William Rockhill, and George N. Patterson
write that when the Shunzhi Emperor (r. 1644–1661) of the subsequent Qing dynasty invited the fifth Dalai Lama Lozang
Gyatso to Beijing in 1652, Shunzhi treated the Dalai Lama as an independent sovereign of Tibet. Patterson writes
that this was an effort of Shunzhi to secure an alliance with Tibet that would ultimately lead to the establishment
of Manchu rule over Mongolia. In this meeting with the Qing emperor, Goldstein asserts that the Dalai Lama was not
someone to be trifled with due to his alliance with Mongol tribes, some of which were declared enemies of the Qing.
Van Praag states that Tibet and the Dalai Lama's power was recognized by the "Manchu Emperor, the Mongolian Khans
and Princes, and the rulers of Ladakh, Nepal, India, Bhutan, and Sikkim." When the Dzungar Mongols attempted to spread
their territory from what is now Xinjiang into Tibet, the Kangxi Emperor (r. 1661–1722) responded to Tibetan pleas
for aid with his own expedition to Tibet, occupying Lhasa in 1720. By 1751, during the reign of the Qianlong Emperor
(r. 1735–1796), a protectorate and permanent Qing dynasty garrison was established in Tibet. As of 1751, Albert Kolb
writes that "Chinese claims to suzerainty over Tibet date from this time."
The iPod is a line of portable media players and multi-purpose pocket computers designed and marketed by Apple Inc. The first
line was released on October 23, 2001, about 8½ months after iTunes (Macintosh version) was released. The most recent
iPod redesigns were announced on July 15, 2015. There are three current versions of the iPod: the ultra-compact iPod
Shuffle, the compact iPod Nano and the touchscreen iPod Touch. Like other digital music players, iPods can serve
as external data storage devices. Storage capacity varies by model, ranging from 2 GB for the iPod Shuffle to 128
GB for the iPod Touch (previously 160 GB for the iPod Classic, which is now discontinued). Apple's iTunes software
(and other alternative software) can be used to transfer music, photos, videos, games, contact information, e-mail
settings, Web bookmarks, and calendars, to the devices supporting these features from computers using certain versions
of Apple Macintosh and Microsoft Windows operating systems. Before the release of iOS 5, the iPod branding was used
for the media player included with the iPhone and iPad, a combination of the Music and Videos apps on the iPod Touch.
As of iOS 5, separate apps named "Music" and "Videos" are standardized across all iOS-powered products. While the
iPhone and iPad have essentially the same media player capabilities as the iPod line, they are generally treated
as separate products. During the middle of 2010, iPhone sales overtook those of the iPod. In mid-2015, a new model
of the iPod Touch was announced by Apple, and was officially released on the Apple store on July 15, 2015. The sixth
generation iPod Touch includes a wide variety of spec improvements such as the upgraded A8 processor and higher-quality
screen. The core is over 5 times faster than previous models and is built to be roughly on par with the iPhone 5S.
It is available in five colors: space gray, pink, gold, silver, and (PRODUCT)RED. Though the iPod was released
in 2001, its price and Mac-only compatibility caused sales to be relatively slow until 2004. The iPod line came from
Apple's "digital hub" category, when the company began creating software for the growing market of personal digital
devices. Digital cameras, camcorders and organizers had well-established mainstream markets, but the company found
existing digital music players "big and clunky or small and useless" with user interfaces that were "unbelievably
awful," so Apple decided to develop its own. As ordered by CEO Steve Jobs, Apple's hardware engineering chief Jon
Rubinstein assembled a team of engineers to design the iPod line, including hardware engineers Tony Fadell and Michael
Dhuey, and design engineer Sir Jonathan Ive. Rubinstein had already discovered the Toshiba disk drive while meeting with an Apple supplier in Japan, had purchased the rights to it for Apple, and had already worked out how the
screen, battery, and other key elements would work. The aesthetic was inspired by the 1958 Braun T3 transistor radio
designed by Dieter Rams, while the wheel based user interface was prompted by Bang & Olufsen's BeoCom 6000 telephone.
The product ("the Walkman of the twenty-first century") was developed in less than one year and unveiled on October
23, 2001. Jobs announced it as a Mac-compatible product with a 5 GB hard drive that put "1,000 songs in your pocket."
Apple did not develop the iPod software entirely in-house, instead using PortalPlayer's reference platform based
on two ARM cores. The platform had rudimentary software running on a commercial microkernel embedded operating system.
PortalPlayer had previously been working on an IBM-branded MP3 player with Bluetooth headphones. Apple contracted
another company, Pixo, to help design and implement the user interface under the direct supervision of Steve Jobs.
As development progressed, Apple continued to refine the software's look and feel. Starting with the iPod Mini, the
Chicago font was replaced with Espy Sans. Later iPods switched fonts again to Podium Sans—a font similar to Apple's
corporate font, Myriad. iPods with color displays then adopted some Mac OS X themes like Aqua progress bars, and
brushed metal meant to evoke a combination lock. In 2007, Apple modified the iPod interface again with the introduction
of the sixth-generation iPod Classic and third-generation iPod Nano by changing the font to Helvetica and, in most
cases, splitting the screen in half by displaying the menus on the left and album artwork, photos, or videos on the
right (whichever was appropriate for the selected item). In 2006, Apple presented a special edition of the fifth-generation iPod dedicated to the Irish rock band U2. Like its predecessor, this iPod bore the engraved signatures of the four band members on its back, but it was the first time the company changed the color of the metal (black rather than silver). This
iPod was only available with 30 GB of storage capacity. The special edition entitled purchasers to an exclusive video
with 33 minutes of interviews and performance by U2, downloadable from the iTunes Store. In September 2007, during
a lawsuit with patent holding company Burst.com, Apple drew attention to a patent for a similar device that was developed
in 1979. Kane Kramer applied for a UK patent for his design of a "plastic music box" in 1981, which he called the
IXI. He was unable to secure funding to renew the US$120,000 worldwide patent, so it lapsed and Kramer never profited
from his idea. The name iPod was proposed by Vinnie Chieco, a freelance copywriter, who (with others) was called
by Apple to figure out how to introduce the new player to the public. After Chieco saw a prototype, he thought of
the movie 2001: A Space Odyssey and the phrase "Open the pod bay doors, HAL", which refers to the white EVA Pods
of the Discovery One spaceship. Chieco saw an analogy to the relationship between the spaceship and the smaller independent
pods in the relationship between a personal computer and the music player. Apple researched the trademark and found
that it was already in use. Joseph N. Grasso of New Jersey had originally listed an "iPod" trademark with the U.S.
Patent and Trademark Office (USPTO) in July 2000 for Internet kiosks. The first iPod kiosks had been demonstrated
to the public in New Jersey in March 1998, and commercial use began in January 2000, but had apparently been discontinued
by 2001. The trademark was registered by the USPTO in November 2003, and Grasso assigned it to Apple Computer, Inc.
in 2005. In mid-2015, several new color schemes for all of the current iPod models were spotted in the latest version
of iTunes, 12.2. Belgian website Belgium iPhone originally found the images when plugging in an iPod for the first
time, and subsequent leaked photos were found by Pierre Dandumont. The third-generation iPod had a weak bass response,
as shown in audio tests. The combination of the undersized DC-blocking capacitors and the typical low-impedance of
most consumer headphones form a high-pass filter, which attenuates the low-frequency bass output. Similar capacitors
were used in the fourth-generation iPods. The problem is reduced when using high-impedance headphones and is completely
masked when driving high-impedance (line level) loads, such as an external headphone amplifier. The first-generation
iPod Shuffle uses a dual-transistor output stage, rather than a single capacitor-coupled output, and does not exhibit
reduced bass response for any load. For all iPods released in 2006 and earlier, some equalizer (EQ) sound settings
would distort the bass sound far too easily, even on undemanding songs. This would happen for EQ settings like R&B,
Rock, Acoustic, and Bass Booster, because the equalizer amplified the digital audio level beyond the software's limit,
causing distortion (clipping) on bass instruments. From the fifth-generation iPod on, Apple introduced a user-configurable
volume limit in response to concerns about hearing loss. Users report that in the sixth-generation iPod, the maximum
volume output level is limited to 100 dB in EU markets. Apple previously had to remove iPods from shelves in France
for exceeding this legal limit. However, users who bought a new sixth-generation iPod in late 2013 reported a new option that allowed them to disable the EU volume limit; these units reportedly shipped with updated software that allowed the change. Older sixth-generation iPods, however, are unable to update to this software
version. Originally, a FireWire connection to the host computer was used to update songs or recharge the battery.
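The weak bass response described earlier follows from basic circuit theory: an undersized DC-blocking capacitor in series with the headphone load forms a first-order RC high-pass filter with cutoff f_c = 1/(2πRC). A minimal sketch of that arithmetic, using an assumed capacitor value for illustration (the iPod's actual component values are not given here):

```python
import math

def highpass_cutoff_hz(c_farads: float, r_ohms: float) -> float:
    """First-order RC high-pass cutoff: f_c = 1 / (2*pi*R*C).

    C is the DC-blocking (coupling) capacitor; R is the load
    impedance seen by the output stage.
    """
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Assumed 100 uF coupling capacitor (illustrative value only).
C = 100e-6

# Low-impedance earbuds (16 ohms): cutoff lands near 100 Hz,
# audibly attenuating bass.
print(round(highpass_cutoff_hz(C, 16), 1))      # ≈ 99.5 Hz

# High-impedance line-level load (10 kilohms): cutoff falls far
# below the audible range, so the roll-off is masked.
print(round(highpass_cutoff_hz(C, 10_000), 3))  # ≈ 0.159 Hz
```

This shows why high-impedance headphones or an external headphone amplifier mask the problem: raising R pushes the cutoff below audibility.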
The battery could also be charged with a power adapter that was included with the first four generations. The third
generation began including a 30-pin dock connector, allowing for FireWire or USB connectivity. This provided better
compatibility with non-Apple machines, as most of them did not have FireWire ports at the time. Eventually Apple
began shipping iPods with USB cables instead of FireWire, although the latter was available separately. As of the
first-generation iPod Nano and the fifth-generation iPod Classic, Apple discontinued using FireWire for data transfer
(while still allowing for use of FireWire to charge the device) in an attempt to reduce cost and form factor. As
of the second-generation iPod Touch and the fourth-generation iPod Nano, FireWire charging ability has been removed.
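The FireWire-to-USB transition described above can be tabulated. A small sketch summarizing only what the passage states, with representative model names:

```python
# Summary of the connector transitions described above; values list the
# interfaces usable for data transfer and for charging at each stage.
CONNECTIVITY = {
    "1st-gen iPod":       {"data": ["FireWire"],        "charge": ["FireWire"]},
    # 3rd gen introduced the 30-pin dock connector, adding USB alongside FireWire.
    "3rd-gen iPod":       {"data": ["FireWire", "USB"], "charge": ["FireWire", "USB"]},
    # 1st-gen Nano / 5th-gen Classic dropped FireWire data but kept FireWire charging.
    "1st-gen iPod Nano":  {"data": ["USB"],             "charge": ["FireWire", "USB"]},
    # 2nd-gen Touch / 4th-gen Nano removed FireWire charging as well.
    "2nd-gen iPod Touch": {"data": ["USB"],             "charge": ["USB"]},
}

def supports_firewire_data(model: str) -> bool:
    return "FireWire" in CONNECTIVITY[model]["data"]

print(supports_firewire_data("1st-gen iPod Nano"))  # False
```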
The second-, third-, and fourth-generation iPod Shuffles use a single 3.5 mm minijack phone connector which acts
as both a headphone jack and a data port for the dock. The dock connector also allowed the iPod to connect to accessories,
which often supplement the iPod's music, video, and photo playback. Apple sells a few accessories, such as the now-discontinued
iPod Hi-Fi, but most are manufactured by third parties such as Belkin and Griffin. Some peripherals use their own
interface, while others use the iPod's own screen. Because the dock connector is a proprietary interface, the implementation
of the interface requires paying royalties to Apple. Apple introduced a new 8-pin dock connector, named Lightning,
on September 12, 2012 with their announcement of the iPhone 5, the fifth generation iPod Touch, and the seventh generation
iPod Nano, which all feature it. The new connector replaces the older 30-pin dock connector used by older iPods,
iPhones, and iPads. Apple Lightning cables have pins on both sides of the plug so it can be inserted with either
side facing up. Many accessories have been made for the iPod line. A large number are made by third party companies,
although many, such as the iPod Hi-Fi, are made by Apple. Some accessories add extra features that other music players
have, such as sound recorders, FM radio tuners, wired remote controls, and audio/visual cables for TV connections.
Other accessories offer unique features like the Nike+iPod pedometer and the iPod Camera Connector. Other notable
accessories include external speakers, wireless remote controls, protective cases, screen films, and wireless earphones.
Among the first accessory manufacturers were Griffin Technology, Belkin, JBL, Bose, Monster Cable, and SendStation.
BMW released the first iPod automobile interface, allowing drivers of newer BMW vehicles to control an iPod using
either the built-in steering wheel controls or the radio head-unit buttons. Apple announced in 2005 that similar
systems would be available for other vehicle brands, including Mercedes-Benz, Volvo, Nissan, Toyota, Alfa Romeo,
Ferrari, Acura, Audi, Honda, Renault, Infiniti and Volkswagen. Scion offers standard iPod connectivity on all their
cars. Some independent stereo manufacturers including JVC, Pioneer, Kenwood, Alpine, Sony, and Harman Kardon also
have iPod-specific integration solutions. Alternative connection methods include adapter kits (that use the cassette
deck or the CD changer port), audio input jacks, and FM transmitters such as the iTrip—although personal FM transmitters
are illegal in some countries. Many car manufacturers have added audio input jacks as standard. Beginning in mid-2007,
four major airlines, United, Continental, Delta, and Emirates, reached agreements to install iPod seat connections.
The free service would allow passengers to power and charge an iPod, and view video and music libraries on individual
seat-back displays. Originally KLM and Air France were reported to be part of the deal with Apple, but they later
released statements explaining that they were only contemplating the possibility of incorporating such systems. The
iPod line can play several audio file formats including MP3, AAC/M4A, Protected AAC, AIFF, WAV, Audible audiobook,
and Apple Lossless. The iPod photo introduced the ability to display JPEG, BMP, GIF, TIFF, and PNG image file formats.
Fifth and sixth generation iPod Classics, as well as third generation iPod Nanos, can additionally play MPEG-4 (H.264/MPEG-4
AVC) and QuickTime video formats, with restrictions on video dimensions, encoding techniques and data-rates. Originally,
iPod software only worked with Mac OS; iPod software for Microsoft Windows was launched with the second generation
model. Unlike most other media players, Apple does not support Microsoft's WMA audio format—but a converter for WMA
files without Digital Rights Management (DRM) is provided with the Windows version of iTunes. MIDI files also cannot
be played, but can be converted to audio files using the "Advanced" menu in iTunes. Alternative open-source audio
formats, such as Ogg Vorbis and FLAC, are not supported without installing custom firmware onto an iPod (e.g., Rockbox).
During installation, an iPod is associated with one host computer. Each time an iPod connects to its host computer,
iTunes can synchronize entire music libraries or music playlists either automatically or manually. Song ratings can
be set on an iPod and synchronized later to the iTunes library, and vice versa. A user can access, play, and add
music on a second computer if an iPod is set to manual and not automatic sync, but anything added or edited will
be reversed upon connecting and syncing with the main computer and its library. If a user wishes to automatically
sync music with another computer, an iPod's library will be entirely wiped and replaced with the other computer's
library. iPods with color displays use anti-aliased graphics and text, with sliding animations. All iPods (except
the 3rd-generation iPod Shuffle, the 6th and 7th generation iPod Nano, and the iPod Touch) have five buttons and the later
generations have the buttons integrated into the click wheel – an innovation that gives an uncluttered, minimalist
interface. The buttons perform basic functions such as menu, play, pause, next track, and previous track. Other operations,
such as scrolling through menu items and controlling the volume, are performed by using the click wheel in a rotational
manner. The 3rd-generation iPod Shuffle does not have any controls on the actual player; instead it has a small control
on the earphone cable, with volume-up and -down buttons and a single button for play and pause, next track, etc.
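The split between button presses and rotational input described above can be modeled in a few lines. This is a conceptual sketch only, not Apple's firmware: the class name, method names, and the context flag are all hypothetical, and it assumes rotation scrolls menus in menu context and adjusts volume in playback context, as the passage states.

```python
from dataclasses import dataclass, field

@dataclass
class ClickWheelUI:
    """Toy model of click-wheel input handling (hypothetical, for illustration)."""
    menu_items: list
    cursor: int = 0
    volume: int = 50
    in_now_playing: bool = False  # when True, rotation adjusts volume

    def press(self, button: str) -> str:
        # The five buttons perform basic functions.
        actions = {"menu": "go_back", "play": "toggle_play",
                   "select": "open_item", "next": "next_track",
                   "prev": "previous_track"}
        return actions[button]

    def rotate(self, steps: int) -> None:
        # Rotational input scrolls menus or changes volume, depending on context.
        if self.in_now_playing:
            self.volume = max(0, min(100, self.volume + steps))
        else:
            self.cursor = max(0, min(len(self.menu_items) - 1,
                                     self.cursor + steps))

ui = ClickWheelUI(menu_items=["Music", "Photos", "Videos", "Settings"])
ui.rotate(2)
print(ui.menu_items[ui.cursor])  # "Videos"
```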
The iPod Touch has no click-wheel; instead it uses a 3.5" touch screen along with a home button, sleep/wake button
and (on the second and third generations of the iPod Touch) volume-up and -down buttons. The user interface for the
iPod Touch is virtually identical to that of the iPhone, the main difference being the lack of a phone application. Both devices use
iOS. The iTunes Store (introduced April 29, 2003) is an online media store run by Apple and accessed through iTunes.
The store became the market leader soon after its launch and Apple announced the sale of videos through the store
on October 12, 2005. Full-length movies became available on September 12, 2006. At the time the store was introduced,
purchased audio files used the AAC format with added encryption, based on the FairPlay DRM system. Up to five authorized
computers and an unlimited number of iPods could play the files. Burning the files with iTunes as an audio CD, then
re-importing would create music files without the DRM. The DRM could also be removed using third-party software.
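The five-computer limit described above can be sketched as a simple authorization set. This is a conceptual model of the policy only, not the actual FairPlay implementation; the class and method names are hypothetical:

```python
class FairPlayAccount:
    """Toy model of FairPlay's authorized-computer limit (illustrative only)."""
    MAX_COMPUTERS = 5  # up to five authorized computers per account

    def __init__(self):
        self.authorized = set()

    def authorize(self, computer_id: str) -> bool:
        """Authorize a computer; fails once five are registered."""
        if computer_id in self.authorized:
            return True  # already authorized, no slot consumed
        if len(self.authorized) >= self.MAX_COMPUTERS:
            return False
        self.authorized.add(computer_id)
        return True

    def deauthorize(self, computer_id: str) -> None:
        """Free a slot so a different computer can be authorized."""
        self.authorized.discard(computer_id)

acct = FairPlayAccount()
for i in range(5):
    acct.authorize(f"mac-{i}")
print(acct.authorize("mac-5"))  # False: limit reached
```

Note that iPods themselves were not counted against the limit; per the passage, an unlimited number of iPods could play the files.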
However, in a deal with Apple, EMI began selling DRM-free, higher-quality songs on the iTunes Store, in a category
called "iTunes Plus." While individual songs were made available at a cost of US$1.29, 30¢ more than the cost of
a regular DRM song, entire albums were available for the same price, US$9.99, as DRM encoded albums. On October 17,
2007, Apple lowered the cost of individual iTunes Plus songs to US$0.99 per song, the same as DRM encoded tracks.
On January 6, 2009, Apple announced that DRM had been removed from 80% of the music catalog, and that it would be removed from all music by April 2009. iPods cannot play music files from competing music stores that use rival DRM
technologies like Microsoft's protected WMA or RealNetworks' Helix DRM. Example stores include Napster and MSN Music.
RealNetworks claims that Apple is creating problems for itself by using FairPlay to lock users into using the iTunes
Store. Steve Jobs stated that Apple makes little profit from song sales, although Apple uses the store to promote
iPod sales. However, iPods can also play music files from online stores that do not use DRM, such as eMusic or Amie
Street. Universal Music Group decided not to renew their contract with the iTunes Store on July 3, 2007. Universal
would instead supply iTunes in an 'at will' capacity. Apple debuted the iTunes Wi-Fi Music Store on September 5, 2007,
in its Media Event entitled "The Beat Goes On...". This service allows users to access the Music Store from either
an iPhone or an iPod Touch and download songs directly to the device that can be synced to the user's iTunes Library
over a WiFi connection, or, in the case of an iPhone, the telephone network. Video games are playable on various
versions of iPods. The original iPod had the game Brick (originally invented by Apple's co-founder Steve Wozniak)
included as an easter egg hidden feature; later firmware versions added it as a menu option. Later revisions of the
iPod added three more games: Parachute, Solitaire, and Music Quiz. In September 2006, the iTunes Store began to offer
additional games for purchase with the launch of iTunes 7, compatible with the fifth generation iPod with iPod software
1.2 or later. Those games were: Bejeweled, Cubis 2, Mahjong, Mini Golf, Pac-Man, Tetris, Texas Hold 'Em, Vortex,
Asphalt 4: Elite Racing and Zuma. Additional games have since been added. These games work on the 6th and 5th generation
iPod Classic and the 5th and 4th generation iPod Nano. With third parties like Namco, Square Enix, Electronic Arts,
Sega, and Hudson Soft all making games for the iPod, Apple's MP3 player has taken steps towards entering the video
game handheld console market. Even video game magazines like GamePro and EGM have recently reviewed and rated most of these games. The games come in the form of .ipg files, which are actually .zip archives in disguise. When unzipped, they reveal executable files along with common audio and image files, leading to the possibility
of third party games. Apple has not publicly released a software development kit (SDK) for iPod-specific development.
Apps produced with the iPhone SDK are compatible only with the iOS on the iPod Touch and iPhone, which cannot run
clickwheel-based games. Unlike many other MP3 players, simply copying audio or video files to the drive with a typical
file management application will not allow an iPod to properly access them. The user must use software that has been
specifically designed to transfer media files to iPods, so that the files are playable and viewable. Usually iTunes
is used to transfer media to an iPod, though several alternative third-party applications are available on a number
of different platforms. iTunes 7 and above can transfer purchased media of the iTunes Store from an iPod to a computer,
provided that the computer containing the DRM-protected media is authorized to play it. Media files are stored on an
iPod in a hidden folder, along with a proprietary database file. The hidden content can be accessed on the host operating
system by enabling hidden files to be shown. The media files can then be recovered manually by copying the files
or folders off the iPod. Many third-party applications also allow easy copying of media files off of an iPod. In
2005, Apple faced two lawsuits claiming patent infringement by the iPod line and its associated technologies: Advanced
Audio Devices claimed the iPod line breached its patent on a "music jukebox", while a Hong Kong-based IP portfolio
company called Pat-rights filed a suit claiming that Apple's FairPlay technology breached a patent issued to inventor
Ho Keung Tse. The latter case also includes the online music stores of Sony, RealNetworks, Napster, and Musicmatch
as defendants. Apple's application to the United States Patent and Trademark Office for a patent on "rotational user
inputs", as used on the iPod interface, received a third "non-final rejection" (NFR) in August 2005. Also in August
2005, Creative Technology, one of Apple's main rivals in the MP3 player market, announced that it held a patent on
part of the music selection interface used by the iPod line, which Creative Technology dubbed the "Zen Patent", granted
on August 9, 2005. On May 15, 2006, Creative filed another suit against Apple with the United States District Court
for the Northern District of California. Creative also asked the United States International Trade Commission to
investigate whether Apple was breaching U.S. trade laws by importing iPods into the United States. On August 24,
2006, Apple and Creative announced a broad settlement to end their legal disputes. Apple agreed to pay Creative US$100 million for a paid-up license to use Creative's awarded patent in all Apple products. As part of the agreement, Apple would recoup part of its payment if Creative succeeded in licensing the patent. Creative then announced
its intention to produce iPod accessories by joining the Made for iPod program. Since October 2004, the iPod line
has dominated digital music player sales in the United States, with over 90% of the market for hard drive-based players
and over 70% of the market for all types of players. During the year from January 2004 to January 2005, the high
rate of sales caused its U.S. market share to increase from 31% to 65% and in July 2005, this market share was measured
at 74%. In January 2007, the iPod's market share reached 72.7%, according to Bloomberg Online. On January 8, 2004, Hewlett-Packard
(HP) announced that they would sell HP-branded iPods under a license agreement from Apple. Several new retail channels
were used—including Wal-Mart—and these iPods eventually made up 5% of all iPod sales. In July 2005, HP stopped selling
iPods due to unfavorable terms and conditions imposed by Apple. On April 9, 2007, it was announced that Apple had
sold its one-hundred millionth iPod, making it the biggest selling digital music player of all time. In April 2007,
Apple reported second-quarter revenue of US$5.2 billion, of which 32% came from iPod sales. Apple and several
industry analysts suggest that iPod users are likely to purchase other Apple products such as Mac computers. On October
22, 2007, Apple reported quarterly revenue of US$6.22 billion, of which 30.69% came from Apple notebook sales, 19.22%
from desktop sales and 26% from iPod sales. Apple's fiscal 2007 revenue increased to US$24.01 billion, with US$3.5 billion
in profits. Apple ended the fiscal year 2007 with US$15.4 billion in cash and no debt. On January 22, 2008, Apple
reported its best quarterly revenue and earnings to date. Apple posted record revenue of US$9.6 billion
and record net quarterly profit of US$1.58 billion. 42% of Apple's revenue for the first fiscal quarter of 2008 came
from iPod sales, followed by 21% from notebook sales and 16% from desktop sales. On October 21, 2008, Apple reported
that only 14.21% of total revenue for the fourth fiscal quarter of 2008 came from iPods. At the September 9, 2009 keynote
presentation at the Apple Event, Phil Schiller announced total cumulative sales of iPods exceeded 220 million. The
continual decline of iPod sales since 2009 has not surprised Apple, as CFO Peter Oppenheimer explained in June 2009: "We expect our traditional MP3 players to decline over time as we cannibalize ourselves with the iPod Touch and the iPhone." Since 2009, the company's iPod sales have decreased every financial quarter, and in 2013 no new model was introduced. iPods have won several awards ranging
from engineering excellence to most innovative audio product to fourth best computer product of 2006. iPods often receive favorable reviews, scoring well on looks, clean design, and ease of use. PC World says that the iPod line has "altered the landscape for portable audio players". Several industries are modifying their products
to work better with both the iPod line and the AAC audio format. Examples include CD copy-protection schemes, and
mobile phones, such as phones from Sony Ericsson and Nokia, which play AAC files rather than WMA. Besides earning
a reputation as a respected entertainment device, the iPod has also been accepted as a business device. Government
departments, major institutions and international organisations have turned to the iPod line as a delivery mechanism
for business communication and training, such as the Royal and Western Infirmaries in Glasgow, Scotland, where iPods
are used to train new staff. iPods have also gained popularity for use in education. Apple offers more information
on educational uses for iPods on their website, including a collection of lesson plans. There has also been academic
research in this area, both in nursing education and in more general K-16 education. Duke University provided iPods to
all incoming freshmen in the fall of 2004, and the iPod program continues today with modifications. Entertainment
Weekly put it on its end-of-the-decade, "best-of" list, saying, "Yes, children, there really was a time when we roamed
the earth without thousands of our favorite jams tucked comfortably into our hip pockets. Weird." The iPod has also
been credited with accelerating shifts within the music industry. By popularizing digital music storage, the iPod let users abandon entire albums in favor of specific singles, which hastened the end of the Album Era in popular music. The advertised battery life on most models differs from the real-world
achievable life. For example, the fifth generation 30 GB iPod is advertised as having up to 14 hours of music playback.
An MP3.com report stated that this was virtually unachievable under real-life usage conditions, with a writer for
MP3.com getting on average less than 8 hours from an iPod. In 2003, class action lawsuits were brought against Apple
complaining that the battery charges lasted for shorter lengths of time than stated and that the battery degraded
over time. The lawsuits were settled by offering individuals either US$50 store credit or a free battery replacement.
iPod batteries are not designed to be removed or replaced by the user, although some users have been able to open
the case themselves, usually following instructions from third-party vendors of iPod replacement batteries. Compounding
the problem, Apple initially would not replace worn-out batteries. The official policy was that the customer should
buy a refurbished replacement iPod, at a cost almost equivalent to a brand new one. All lithium-ion batteries lose
capacity during their lifetime even when not in use (guidelines are available for prolonging life-span) and this
situation led to a market for third-party battery replacement kits. Apple announced a battery replacement program
on November 14, 2003, a week before a high-publicity stunt and website by the Neistat Brothers. The initial cost
was US$99, and it was lowered to US$59 in 2005. One week later, Apple offered an extended iPod warranty for US$59.
For the iPod Nano, soldering tools are needed because the battery is soldered onto the main board. Fifth generation
iPods have their battery attached to the backplate with adhesive. The first generation iPod Nano may overheat and
pose a health and safety risk. Affected iPod Nanos were sold between September 2005 and December 2006. This is due
to a flawed battery used by Apple from a single battery manufacturer. Apple recommended that owners of affected iPod
Nanos stop using them. Under an Apple product replacement program, affected Nanos were replaced with current generation
Nanos free of charge. iPods have been criticized for alleged short life-span and fragile hard drives. A 2005 survey
conducted on the MacInTouch website found that the iPod line had an average failure rate of 13.7% (although they
note that comments from respondents indicate that "the true iPod failure rate may be lower than it appears"). It
concluded that some models were more durable than others. In particular, failure rates for iPods employing hard drives
were usually above 20%, while those with flash memory had a failure rate below 10%. In late 2005, many users complained
that the surface of the first generation iPod Nano can become scratched easily, rendering the screen unusable. A
class action lawsuit was also filed. Apple initially considered the issue a minor defect, but later began shipping
these iPods with protective sleeves. On June 11, 2006, the British tabloid The Mail on Sunday reported
that iPods are mainly manufactured by workers who earn no more than US$50 per month and work 15-hour shifts. Apple
investigated the case with independent auditors and found that, while some of the plant's labour practices met Apple's
Code of Conduct, others did not: employees worked over 60 hours a week for 35% of the time, and worked more than
six consecutive days for 25% of the time. Foxconn, Apple's manufacturer, initially denied the abuses, but when an
auditing team from Apple found that workers had been working longer hours than were allowed under Chinese law, they
promised to prevent workers working more hours than the code allowed. Apple hired a workplace standards auditing
company, Verité, and joined the Electronic Industry Code of Conduct Implementation Group to oversee the measures.
On December 31, 2006, workers at the Foxconn factory in Longhua, Shenzhen formed a union affiliated with the All-China
Federation of Trade Unions, the Chinese government-approved union umbrella organization. In 2010, a number of workers
committed suicide at Foxconn operations in China. Apple, HP, and others stated that they were investigating the situation. Foxconn guards have been videotaped beating employees. Another employee killed himself in 2009 after an Apple prototype went missing; in messages to friends, he claimed he had been beaten and interrogated. As of
2006, the iPod was produced by about 14,000 workers in the U.S. and 27,000 overseas. Further, the salaries attributed
to this product were overwhelmingly distributed to highly skilled U.S. professionals, as opposed to lower skilled
U.S. retail employees or overseas manufacturing labor. One interpretation of this result is that U.S. innovation
can create more jobs overseas than domestically. All iPods except the iPod Touch can function in "disk mode" as mass storage devices to store data files, though this may not be the default behavior; the iPod Touch requires special software for such use. If an iPod is formatted on a Mac OS computer, it uses the HFS+
file system format, which allows it to serve as a boot disk for a Mac computer. If it is formatted on Windows, the
FAT32 format is used. With the release of the Windows-compatible iPod, the default file system used on the iPod line
switched from HFS+ to FAT32, although it can be reformatted to either file system (excluding the iPod Shuffle which
is strictly FAT32). Generally, if a new iPod (excluding the iPod Shuffle) is initially plugged into a computer running
Windows, it will be formatted with FAT32, and if initially plugged into a Mac running Mac OS it will be formatted
with HFS+.
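The manual recovery procedure described earlier in this section (show hidden files, then copy the media out of the iPod's hidden folder while skipping the proprietary database) can be sketched in Python. The folder layout (`iPod_Control/Music`) and file names below are assumptions for illustration, and the demo builds a stand-in directory tree rather than touching a real device:

```python
import shutil
import tempfile
from pathlib import Path

def recover_media(mount_point: Path, dest: Path) -> list:
    """Copy audio files out of the iPod's hidden media folder."""
    music_dir = mount_point / "iPod_Control" / "Music"
    dest.mkdir(parents=True, exist_ok=True)
    recovered = []
    for f in sorted(music_dir.rglob("*")):
        # Skip subfolders, the proprietary database, and non-audio files.
        if f.is_file() and f.suffix.lower() in {".mp3", ".m4a", ".aac", ".wav"}:
            target = dest / f.name
            shutil.copy2(f, target)
            recovered.append(target)
    return recovered

# Demo against a stand-in directory tree instead of a real iPod.
root = Path(tempfile.mkdtemp())
music = root / "ipod" / "iPod_Control" / "Music" / "F00"
music.mkdir(parents=True)
(music / "ABCD.m4a").write_bytes(b"fake audio")
(root / "ipod" / "iPod_Control" / "iTunesDB").write_bytes(b"db")  # database, not copied

out = recover_media(root / "ipod", root / "recovered")
print([p.name for p in out])  # ['ABCD.m4a']
```

In practice the hidden folder only becomes visible after enabling hidden files in the host operating system, as the text notes; the copy step itself is an ordinary file operation.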
The Legend of Zelda: Twilight Princess (Japanese: ゼルダの伝説 トワイライトプリンセス, Hepburn: Zeruda no Densetsu: Towairaito Purinsesu?)
is an action-adventure game developed and published by Nintendo for the GameCube and Wii home video game consoles.
It is the thirteenth installment in The Legend of Zelda series. Originally planned for release on the GameCube
in November 2005, Twilight Princess was delayed by Nintendo to allow its developers to refine the game, add more
content, and port it to the Wii. The Wii version was released alongside the console in North America in November
2006, and in Japan, Europe, and Australia the following month. The GameCube version was released worldwide in December
2006.[b] The story focuses on series protagonist Link, who tries to prevent Hyrule from being engulfed by a corrupted
parallel dimension known as the Twilight Realm. To do so, he takes the form of both a Hylian and a wolf, and is assisted
by a mysterious creature named Midna. The game takes place hundreds of years after Ocarina of Time and Majora's Mask,
in an alternate timeline from The Wind Waker. At the time of its release, Twilight Princess was considered the greatest
entry in the Zelda series by many critics, including writers for 1UP.com, Computer and Video Games, Electronic Gaming
Monthly, Game Informer, GamesRadar, IGN, and The Washington Post. It received several Game of the Year awards, and
was the most critically acclaimed game of 2006. In 2011, the Wii version was rereleased under the Nintendo Selects
label. A high-definition port for the Wii U, The Legend of Zelda: Twilight Princess HD, will be released in March
2016. The Legend of Zelda: Twilight Princess is an action-adventure game focused on combat, exploration, and item
collection. It uses the basic control scheme introduced in Ocarina of Time, including context-sensitive action buttons
and L-targeting (Z-targeting on the Wii), a system that allows the player to keep Link's view focused on an enemy
or important object while moving and attacking. Link can walk, run, and attack, and will automatically jump when
running off of or reaching for a ledge.[c] Link uses a sword and shield in combat, complemented with secondary weapons
and items, including a bow and arrows, a boomerang, bombs, and the Clawshot (similar to the Hookshot introduced earlier
in The Legend of Zelda series).[d] While L-targeting, projectile-based weapons can be fired at a target without
the need for manual aiming.[c] The context-sensitive button mechanic allows one button to serve a variety of functions,
such as talking, opening doors, and pushing, pulling, and throwing objects.[e] The on-screen display shows what action,
if any, the button will trigger, determined by the situation. For example, if Link is holding a rock, the context-sensitive
button will cause Link to throw the rock if he is moving or targeting an object or enemy, or place the rock on the
ground if he is standing still.[f] The GameCube and Wii versions feature several minor differences in their controls.
The Wii version of the game makes use of the motion sensors and built-in speaker of the Wii Remote. The speaker emits
the sounds of a bowstring when shooting an arrow, Midna's laugh when she gives advice to Link, and the series' trademark
"chime" when discovering secrets. The player controls Link's sword by swinging the Wii Remote. Other attacks are
triggered using similar gestures with the Nunchuk. Unique to the GameCube version is the ability for the player to
control the camera freely, without entering a special "lookaround" mode required by the Wii; however, in the GameCube
version, only two of Link's secondary weapons can be equipped at a time, as opposed to four in the Wii version.[g]
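The context-sensitive button mechanic described above is essentially a dispatch on the current game state. A minimal sketch of the rock example (the state keys and action names are illustrative, not taken from the game's actual code):

```python
def context_action(state: dict) -> str:
    """Pick the action the context-sensitive button would trigger."""
    if state.get("holding_rock"):
        # Throw while moving or targeting; otherwise set the rock down.
        if state.get("moving") or state.get("targeting"):
            return "throw rock"
        return "place rock"
    if state.get("facing_door"):
        return "open door"
    if state.get("facing_npc"):
        return "talk"
    return "none"

# The on-screen display would show whichever action is returned.
print(context_action({"holding_rock": True, "moving": True}))  # throw rock
print(context_action({"holding_rock": True}))                  # place rock
```

The key design point is that one physical button maps to many in-game verbs, with the display always telling the player which verb is currently bound.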
The game features nine dungeons—large, contained areas where Link battles enemies, collects items, and solves puzzles.
Link navigates these dungeons and fights a boss at the end in order to obtain an item or otherwise advance the plot.
The dungeons are connected by a large overworld, across which Link can travel on foot; on his horse, Epona; or by
teleporting. When Link enters the Twilight Realm, the void that corrupts parts of Hyrule, he transforms into a wolf.[h]
He is eventually able to transform between his Hylian and wolf forms at will. As a wolf, Link loses the ability to
use his sword, shield, or any secondary items; he instead attacks by biting, and defends primarily by dodging attacks.
However, "Wolf Link" gains several key advantages in return—he moves faster than he does as a human (though riding
Epona is still faster) and digs holes to create new passages and uncover buried items, and has improved senses, including
the ability to follow scent trails.[i] He also carries Midna, a small imp-like creature who gives him hints, uses
an energy field to attack enemies, helps him jump long distances, and eventually allows Link to "warp" to any of
several preset locations throughout the overworld.[j] Using Link's wolf senses, the player can see and listen to
the wandering spirits of those affected by the Twilight, as well as hunt for enemy ghosts named Poes.[k] The artificial
intelligence (AI) of enemies in Twilight Princess is more advanced than that of enemies in The Wind Waker. Enemies
react to defeated companions and to arrows or slingshot pellets that pass by, and can detect Link from a greater
distance than was possible in previous games. There is very little voice acting in the game, as is the case in most
Zelda titles to date. Link remains silent in conversation, but grunts when attacking or injured and gasps when surprised.
His emotions and responses are largely indicated visually by nods and facial expressions. Other characters have similar
language-independent verbalizations, including laughter, surprised or fearful exclamations, and screams. The character
of Midna has the most voice acting—her on-screen dialog is often accompanied by a babble of pseudo-speech, which
was produced by scrambling the phonemes of English phrases sampled by Japanese voice actress
Akiko Kōmoto. Twilight Princess takes place several centuries after Ocarina of Time and Majora's Mask, and begins
with a youth named Link who is working as a ranch hand in Ordon Village. One day, the village is attacked by Bulblins,
who carry off the village's children with Link in pursuit before he encounters a wall of Twilight. A Shadow Beast
pulls him beyond the wall into the Realm of Twilight, where he is transformed into a wolf and imprisoned. Link is
soon freed by an imp-like Twilight being named Midna, who dislikes Link but agrees to help him if he obeys her unconditionally.
She guides him to Princess Zelda. Zelda explains that Zant, the King of the Twilight, has stolen the light from three
of the four Light Spirits and conquered Hyrule. In order to save Hyrule, Link must first restore the Light Spirits
by entering the Twilight-covered areas and, as a wolf, recover the Spirits' lost light. He must do this by collecting
the multiple "Tears of Light"; once all the Tears of Light are collected for one area, he restores that area's Light
Spirit. As he restores them, the Light Spirits return Link to his Hylian form. During this time, Link also helps
Midna find the Fused Shadows, fragments of a relic containing powerful dark magic. In return, she helps Link find
Ordon Village's children while helping the monkeys of Faron, the Gorons of Eldin, and the Zoras of Lanayru. Once
Link has restored the Light Spirits and Midna has all the Fused Shadows, they are ambushed by Zant. After he relieves
Midna of the Fused Shadow fragments, she ridicules him for abusing his tribe's magic, but Zant reveals that his power
comes from another source as he uses it to turn Link back into a wolf, and then leaves Midna in Hyrule to die from
the world's light. Bringing a dying Midna to Zelda, Link learns he needs the Master Sword to return to human form.
Zelda sacrifices herself to heal Midna with her power before vanishing mysteriously. Midna is moved by Zelda's sacrifice,
and begins to care more about Link and the fate of the light world. After gaining the Master Sword, Link is cleansed
of the magic that kept him in wolf form, obtaining the Shadow Crystal. Now able to use it to switch between both
forms at will, Link is led by Midna to the Mirror of Twilight located deep within the Gerudo Desert, the only known
gateway between the Twilight Realm and Hyrule. However, they discover that the mirror is broken. The Sages there
explain that Zant tried to destroy it, but he was only able to shatter it into fragments; only the true ruler of
the Twili can completely destroy the Mirror of Twilight. They also reveal that they used it a century ago to banish
Ganondorf, the Gerudo leader who attempted to steal the Triforce, to the Twilight Realm when executing him failed.
Assisted by an underground resistance group they meet in Castle Town, Link and Midna set out to retrieve the missing
shards of the Mirror, defeating the beings the shards have corrupted. Once the portal has been restored, Midna is revealed to be the
true ruler of the Twilight Realm, usurped by Zant when he cursed her into her current form. Confronting Zant, Link
and Midna learn that Zant's coup was made possible when he forged a pact with Ganondorf, who asked for Zant's assistance
in conquering Hyrule. After Link defeats Zant, Midna recovers the Fused Shadows, but destroys Zant after learning
that only Ganondorf's death can release her from her curse. Returning to Hyrule, Link and Midna find Ganondorf in
Hyrule Castle, with a lifeless Zelda suspended above his head. Ganondorf fights Link by possessing Zelda's body and
eventually by transforming into a beast, but Link defeats him and Midna is able to resurrect Zelda. Ganondorf then
revives, and Midna teleports Link and Zelda outside the castle so she can hold him off with the Fused Shadows. However,
as Hyrule Castle collapses, it is revealed that Ganondorf was victorious as he crushes Midna's helmet. Ganondorf
engages Link on horseback, and, assisted by Zelda and the Light Spirits, Link eventually knocks Ganondorf off his
horse and they duel on foot before Link strikes down Ganondorf and plunges the Master Sword into his chest. With
Ganondorf dead, the Light Spirits not only bring Midna back to life, but restore her to her true form. After bidding
farewell to Link and Zelda, Midna returns home before destroying the Mirror of Twilight with a tear to maintain balance
between Hyrule and the Twilight Realm. Near the end, as Hyrule Castle is rebuilt, Link is shown leaving Ordon Village
heading to parts unknown. In 2003, Nintendo announced that a new The Legend of Zelda game was in the works for the
GameCube by the same team that had created the cel-shaded The Wind Waker. At the following year's Game Developers
Conference, director Eiji Aonuma unintentionally revealed that the game's sequel was in development under the working
title The Wind Waker 2; it was set to use a similar graphical style to that of its predecessor. Nintendo of America
told Aonuma that North American sales of The Wind Waker were sluggish because its cartoon appearance created the
impression that the game was designed for a young audience. Concerned that the sequel would have the same problem,
Aonuma expressed to producer Shigeru Miyamoto that he wanted to create a realistic Zelda game that would appeal to
the North American market. Miyamoto, hesitant about solely changing the game's presentation, suggested the team's
focus should instead be on coming up with gameplay innovations. He advised that Aonuma should start by doing what
could not be done in Ocarina of Time, particularly horseback combat.[l] In four months, Aonuma's team managed to
present realistic horseback riding,[l] which Nintendo later revealed to the public with a trailer at Electronic Entertainment
Expo 2004. The game was scheduled to be released the next year, and was no longer a follow-up to The Wind Waker;
a true sequel to it was released for the Nintendo DS in 2007, in the form of Phantom Hourglass. Miyamoto explained
in interviews that the graphical style was chosen to satisfy demand, and that it better fit the theme of an older
incarnation of Link. The game runs on a modified version of The Wind Waker engine. Prior Zelda games have employed a theme of
two separate, yet connected, worlds. In A Link to the Past, Link travels between a "Light World" and a "Dark World";
in Ocarina of Time, as well as in Oracle of Ages, Link travels between two different time periods. The Zelda team
sought to reuse this motif in the series' latest installment. It was suggested that Link transform into a wolf, much
like he metamorphoses into a rabbit in the Dark World of A Link to the Past.[m] The story of the game was created
by Aonuma, and later underwent several changes by scenario writers Mitsuhiro Takano and Aya Kyogoku. Takano created
the script for the story scenes, while Kyogoku and Takayuki Ikkaku handled the actual in-game script. Aonuma left
his team working on the new idea while he directed The Minish Cap for the Game Boy Advance. When he returned, he
found the Twilight Princess team struggling. Emphasis on the parallel worlds and the wolf transformation had made
Link's character unbelievable. Aonuma also felt the gameplay lacked the caliber of innovation found in Phantom Hourglass,
which was being developed with touch controls for the Nintendo DS. At the same time, the Wii was under development
with the code name "Revolution". Miyamoto thought that the Revolution's pointing device, the Wii Remote, was well
suited for aiming arrows in Zelda, and suggested that Aonuma consider using it.[n] Aonuma had anticipated creating
a Zelda game for what would later be called the Wii, but had assumed that he would need to complete Twilight Princess
first. His team began work developing a pointing-based interface for the bow and arrow, and Aonuma found that aiming
directly at the screen gave the game a new feel, just like the DS control scheme for Phantom Hourglass. Aonuma felt
confident this was the only way to proceed, but worried about consumers who had been anticipating a GameCube release.
Developing two versions would mean delaying the previously announced 2005 release, still disappointing the consumer.
Satoru Iwata felt that having both versions would satisfy users in the end, even though they would have to wait for
the finished product. Aonuma then started working on both versions in parallel.[o] Transferring GameCube development
to the Wii was relatively simple, since the Wii was being created to be compatible with the GameCube.[o] At E3 2005,
Nintendo released a small number of Nintendo DS game cards containing a preview trailer for Twilight Princess. They
also announced that Zelda would appear on the Wii (then codenamed "Revolution"), but it was not clear to the media
if this meant Twilight Princess or a different game. The team worked on a Wii control scheme, adapting camera control
and the fighting mechanics to the new interface. A prototype was created that used a swinging gesture to control
the sword from a first-person viewpoint, but was unable to show the variety of Link's movements. When the third-person
view was restored, Aonuma thought it felt strange to swing the Wii Remote with the right hand to control the sword
in Link's left hand, so the entire Wii version map was mirrored.[p] Details about Wii controls began to surface in
December 2005 when British publication NGC Magazine claimed that when a GameCube copy of Twilight Princess was played
on the Revolution, it would give the player the option of using the Revolution controller. Miyamoto confirmed the
Revolution controller-functionality in an interview with Nintendo of Europe and Time reported this soon after. However,
support for the Wii controller did not make it into the GameCube release. At E3 2006, Nintendo announced that both
versions would be available at the Wii launch, and had a playable version of Twilight Princess for the Wii.[p] Later,
the GameCube release was pushed back to a month after the launch of the Wii. Nintendo staff members reported that
demo users complained about the difficulty of the control scheme. Aonuma realized that his team had implemented Wii
controls under the mindset of "forcing" users to adapt, instead of making the system intuitive and easy to use. He
began rethinking the controls with Miyamoto to focus on comfort and ease.[q] The camera movement was reworked and
item controls were changed to avoid accidental button presses.[r] In addition, the new item system required use of
the button that had previously been used for the sword. To solve this, sword controls were transferred back to gestures—something
E3 attendees had commented they would like to see. This reintroduced the problem of using a right-handed swing to
control a left-handed sword attack. The team did not have enough time before release to rework Link's character model,
so they instead flipped the entire game—everything was made a mirror image.[s] Link was now right-handed, and references
to "east" and "west" were switched around. The GameCube version, however, was left with the original orientation.
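Flipping the entire game as described above amounts to mirroring one axis of every map and swapping the direction labels tied to that axis. A toy illustration (the tile-map representation is invented for the example):

```python
def mirror_map(grid: list) -> list:
    """Flip a tile map horizontally, reversing each row."""
    return [row[::-1] for row in grid]

def mirror_direction(d: str) -> str:
    """Swap east/west references; north/south are unaffected."""
    return {"east": "west", "west": "east"}.get(d, d)

print(mirror_map(["AB.", "..C"]))   # ['.BA', 'C..']
print(mirror_direction("east"))     # west
```

Mirroring the world rather than reworking the character model was cheaper late in development, since every asset and direction reference flips consistently at once.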
The Twilight Princess player's guide focuses on the Wii version, but has a section in the back with mirror-image
maps for GameCube users.[t] The game's score was composed by Toru Minegishi and Asuka Ohta, with series regular Koji
Kondo serving as the sound supervisor. Minegishi took charge of composition and sound design in Twilight Princess,
providing all field and dungeon music under the supervision of Kondo. For the trailers, three pieces were written
by different composers, two of which were created by Mahito Yokota and Kondo. Michiru Ōshima created orchestral arrangements
for the three compositions, later to be performed by an ensemble conducted by Yasuzo Takemoto. Kondo's piece was
later chosen as music for the E3 2005 trailer and for the demo movie after the game's title screen. Media requests
at the trade show prompted Kondo to consider using orchestral music for the other tracks in the game as well, a notion
reinforced by his preference for live instruments. He originally envisioned a full 50-person orchestra for action
sequences and a string quartet for more "lyrical moments", though the final product used sequenced music instead.
Kondo later cited the lack of interactivity that comes with orchestral music as one of the main reasons for the decision.
Both six- and seven-track versions of the game's soundtrack were released on November 19, 2006, as part of a Nintendo
Power promotion and bundled with replicas of the Master Sword and the Hylian Shield. Following the discovery of a
buffer overflow vulnerability in the Wii version of Twilight Princess, an exploit known as the "Twilight Hack" was
developed, allowing the execution of custom code from a Secure Digital (SD) card on the console. A properly designed
save file would cause the game to load unsigned code, which could include Executable and Linkable Format (ELF) programs
and homebrew Wii applications. Versions 3.3 and 3.4 of the Wii Menu prevented copying exploited save files onto the
console until circumvention methods were discovered, and version 4.0 of the Wii Menu patched the vulnerability. A
high-definition remaster of the game, The Legend of Zelda: Twilight Princess HD, is being developed by Tantalus Media
for the Wii U. Officially announced during a Nintendo Direct presentation on November 12, 2015, it features enhanced
graphics and Amiibo functionality. The game will be released in North America and Europe on March 4, 2016; in Australia
on March 5, 2016; and in Japan on March 10, 2016. Special bundles of the game contain a Wolf Link Amiibo figurine,
which unlocks a Wii U-exclusive dungeon called the "Cave of Shadows" and can carry data over to the upcoming 2016
Zelda game. Other Zelda-related Amiibo figurines have distinct functions: Link and Toon Link replenish arrows, Zelda
and Sheik restore Link's health, and Ganondorf causes Link to take twice as much damage. A CD containing 20 musical
selections from the game was available as a GameStop preorder bonus in the United States; it is included in all bundles
in Japan, Europe, and Australia. Twilight Princess was released to universal critical acclaim and
commercial success. It received perfect scores from major publications such as 1UP.com, Computer and Video Games,
Electronic Gaming Monthly, Game Informer, GamesRadar, and GameSpy. On the review aggregators GameRankings and Metacritic,
Twilight Princess has average scores of 95% and 95 for the Wii version and scores of 95% and 96 for the GameCube
version. GameTrailers in their review called it one of the greatest games ever created. On release, Twilight Princess
was considered to be the greatest Zelda game ever made by many critics including writers for 1UP.com, Computer and
Video Games, Electronic Gaming Monthly, Game Informer, GamesRadar, IGN and The Washington Post. Game Informer called
it "so creative that it rivals the best that Hollywood has to offer". GamesRadar praised Twilight Princess as "a
game that deserves nothing but the absolute highest recommendation". Cubed3 hailed Twilight Princess as "the single
greatest videogame experience". Twilight Princess's graphics were praised for their art style and animation, although the game was designed for the GameCube, whose hardware is less powerful than that of the next-generation consoles. Both
IGN and GameSpy pointed out the existence of blurry textures and low-resolution characters. Despite these complaints,
Computer and Video Games felt the game's atmosphere was superior to that of any previous Zelda game, and regarded
Twilight Princess's Hyrule as the best version ever created. PALGN praised the game's cinematics, noting that "the
cutscenes are the best ever in Zelda games". Regarding the Wii version, GameSpot's Jeff Gerstmann said the Wii controls
felt "tacked-on", although 1UP.com said the remote-swinging sword attacks were "the most impressive in the entire
series". Gaming Nexus considered Twilight Princess's soundtrack to be the best of this generation, though IGN criticized
its MIDI-formatted songs for lacking "the punch and crispness" of their orchestrated counterparts. Hyper's Javier
Glickman commended the game for its "very long quests, superb Wii controls and being able to save anytime". However,
he criticised it for "no voice acting, no orchestral score and slightly outdated graphics". Twilight Princess received
the awards for Best Artistic Design, Best Original Score, and Best Use of Sound from IGN for its GameCube version.
Both IGN and Nintendo Power gave Twilight Princess the awards for Best Graphics and Best Story. Twilight Princess
received Game of the Year awards from GameTrailers, 1UP.com, Electronic Gaming Monthly, Game Informer, Games Radar,
GameSpy, Spacey Awards, X-Play and Nintendo Power. It was also given awards for Best Adventure Game from the Game
Critics Awards, X-Play, IGN, GameTrailers, 1UP.com, and Nintendo Power. The game was considered the Best Console
Game by the Game Critics Awards and GameSpy. The game placed 16th in Official Nintendo Magazine's list of the 100
Greatest Nintendo Games of All Time. IGN ranked the game as the 4th-best Wii game. Nintendo Power ranked the game
as the third-best game to be released on a Nintendo system in the 2000s decade. In the PAL region, which covers most
of Africa, Asia, Europe, and Oceania, Twilight Princess is the best-selling entry in the Zelda series. During its
first week, the game was sold with three of every four Wii purchases. The game had sold 5.82 million copies on the
Wii as of March 31, 2011, and 1.32 million on the GameCube as of March 31, 2007. A Japan-exclusive
manga series based on Twilight Princess, penned and illustrated by Akira Himekawa, was first released on February
8, 2016. The series is available solely via publisher Shogakukan's MangaOne mobile application. While the manga adaptation
began almost ten years after the initial release of the game on which it is based, it launched only a month before
the release of the high-definition remake.
Spectre (2015) is the twenty-fourth James Bond film produced by Eon Productions. It features Daniel Craig in his fourth performance
as James Bond, and Christoph Waltz as Ernst Stavro Blofeld, with the film marking the character's re-introduction
into the series. It was directed by Sam Mendes as his second James Bond film following Skyfall, and was written by
John Logan, Neal Purvis, Robert Wade and Jez Butterworth. It is distributed by Metro-Goldwyn-Mayer and Columbia Pictures.
With a budget around $245 million, it is the most expensive Bond film and one of the most expensive films ever made.
The story sees Bond pitted against the global criminal organisation Spectre, marking the group's first appearance
in an Eon Productions film since 1971's Diamonds Are Forever,[N 2] and tying Craig's series of films together with
an overarching storyline. Several recurring James Bond characters, including M, Q and Eve Moneypenny, return, with
the new additions of Léa Seydoux as Dr. Madeleine Swann, Dave Bautista as Mr. Hinx, Andrew Scott as Max Denbigh and
Monica Bellucci as Lucia Sciarra. Spectre was released on 26 October 2015 in the United Kingdom on the same night
as the world premiere at the Royal Albert Hall in London, followed by a worldwide release. It was released in the
United States on 6 November 2015. It became the second James Bond film to be screened in IMAX venues after Skyfall,
although it was not filmed with IMAX cameras. Spectre received mixed reviews upon its release; although criticised
for its length, lack of screen time for new characters, and writing, it received praise for its action sequences
and cinematography. The theme song, "Writing's on the Wall", received mixed reviews, particularly compared to the
previous theme; nevertheless, it won the Golden Globe for Best Original Song and was nominated for the Academy Award
in the same category. As of 20 February 2016, Spectre has grossed over $879 million worldwide. Following Gareth Mallory's promotion to M, James Bond, on an unofficial mission in Mexico City ordered by a posthumous message from the previous M, kills three men plotting a terrorist bombing during the Day of the Dead and gives chase
to Marco Sciarra, an assassin who survived the attack. In the ensuing struggle, Bond steals his ring, which is emblazoned
with a stylised octopus, and then kills Sciarra by kicking him out of a helicopter. Upon returning to London, Bond
is indefinitely suspended from field duty by M, who is in the midst of a power struggle with C, the head of the privately backed Joint Intelligence Service, which consists of the recently merged MI5 and MI6. C campaigns for Britain to join eight other countries in forming "Nine Eyes", a global surveillance and intelligence co-operation initiative between the nine member states, and uses his influence to close down the '00' section, believing it to be outdated. Bond disobeys M's order
and travels to Rome to attend Sciarra's funeral. That evening he visits Sciarra's widow Lucia, who tells him about
Spectre, a criminal organisation to which her husband belonged. Bond infiltrates a Spectre meeting, where he identifies
the leader, Franz Oberhauser. When Oberhauser addresses Bond by name, Bond escapes and is pursued by Mr. Hinx, a Spectre assassin. Moneypenny informs Bond that the information he collected leads to Mr. White, a former member of Quantum,
a subsidiary of Spectre. Bond asks her to investigate Oberhauser, who was presumed dead years earlier. Bond travels
to Austria to find White, who is dying of thallium poisoning. He admits to growing disenchanted with Quantum and
tells Bond to find and protect his daughter, Dr. Madeleine Swann, who will take him to L'Américain; this will in turn
lead him to Spectre. White then commits suicide. Bond locates Swann at the Hoffler Klinik, but she is abducted by
Hinx. Bond rescues her and the two meet Q, who discovers that Sciarra's ring links Oberhauser to Bond's previous
missions, identifying Le Chiffre, Dominic Greene and Raoul Silva as Spectre agents. Swann reveals that L'Américain
is a hotel in Tangier. The two travel to the hotel and discover White's secret room where they find co-ordinates
pointing to Oberhauser's operations base in the desert. They travel by train to the nearest station, but are once
again confronted by Hinx; they engage in a fight throughout the train, in which Mr. Hinx is eventually thrown off the
train by Bond with Swann's assistance. After arriving at the station, Bond and Swann are escorted to Oberhauser's
base. There, he reveals that Spectre has been staging terrorist attacks around the world, creating a need for the
Nine Eyes programme. In return Spectre will be given unlimited access to intelligence gathered by Nine Eyes. Bond
is tortured as Oberhauser discusses their shared history: after the younger Bond was orphaned, Oberhauser's father,
Hannes, became his temporary guardian. Believing that Bond supplanted his role as son, Oberhauser killed his father
and staged his own death, subsequently adopting the name Ernst Stavro Blofeld and going on to form Spectre. Bond
and Swann escape, destroying the base in the process, leaving Blofeld to apparently die during the explosion. Bond
and Swann return to London where they meet M, Bill Tanner, Q, and Moneypenny; they intend to arrest C and stop Nine
Eyes from going online. Swann leaves Bond, telling him she cannot be part of a life involving espionage, and is subsequently kidnapped. On the way to arrest C, the group is ambushed and Bond is abducted, but the rest still proceed with the plan. After
Q succeeds in preventing the Nine Eyes from going online, a brief struggle between M and C ends with the latter falling
to his death. Meanwhile, Bond is taken to the old MI6 building, which is scheduled for demolition, and frees himself.
Moving throughout the ruined labyrinth, he encounters a disfigured Blofeld, who tells him that he has three minutes to escape the building before explosives are detonated, or die trying to save Swann. Bond finds Swann and the two
escape by boat as the building collapses. Bond shoots down Blofeld's helicopter, which crashes onto Westminster Bridge.
As Blofeld crawls away from the wreckage, Bond confronts him but ultimately leaves him to be arrested by M. Bond
leaves the bridge with Swann. The ownership of the Spectre organisation—originally stylised "SPECTRE" as an acronym
of SPecial Executive for Counter-intelligence, Terrorism, Revenge and Extortion—and its characters had been at the
centre of long-standing litigation starting in 1961 between Ian Fleming and Kevin McClory over the film rights to
the novel Thunderball. The dispute began after Fleming incorporated elements of an undeveloped film script written
by McClory and screenwriter Jack Whittingham—including characters and plot points—into Thunderball, which McClory
contested in court, claiming ownership over elements of the novel. In 1963, Fleming settled out of court with McClory,
in an agreement which awarded McClory the film rights. This enabled him to become a producer for the 1965 film Thunderball—with
Albert R. Broccoli and Harry Saltzman as executive producers—and the non-Eon film Never Say Never Again, an updated
remake of Thunderball, in 1983.[N 3] A second remake, entitled Warhead 2000 A.D., was planned for production and
release in the 1990s before being abandoned. Under the terms of the 1963 settlement, the literary rights stayed with
Fleming, allowing the Spectre organisation and associated characters to continue appearing in print. In November
2013 MGM and the McClory estate formally settled the issue with Danjaq, LLC—sister company of Eon Productions—with
MGM acquiring the full film rights to the concept of Spectre and all of the characters associated with
it. With the acquisition of the film rights and the organisation's re-introduction to the series' continuity, the
SPECTRE acronym was discarded and the organisation reimagined as "Spectre". In November 2014, Sony Pictures Entertainment
was targeted by hackers who released details of confidential e-mails between Sony executives regarding several high-profile
film projects. Included within these were several memos relating to the production of Spectre, claiming that the
film was over budget, detailing early drafts of the script written by John Logan, and expressing Sony's frustration
with the project. Eon Productions later issued a statement confirming the leak of what they called "an early version
of the screenplay". Despite being an original story, Spectre draws on Ian Fleming's source material, most notably
in the character of Franz Oberhauser, played by Christoph Waltz. Oberhauser shares his name with Hannes Oberhauser,
a background character in the short story "Octopussy" from the Octopussy and The Living Daylights collection, who is named in the film as having been a temporary legal guardian of a young Bond in 1983. Similarly, Charmian Bond
is shown to have been his full-time guardian, observing the back story established by Fleming. With the acquisition
of the rights to Spectre and its associated characters, screenwriters Neal Purvis and Robert Wade revealed that the
film would provide a minor retcon to the continuity of the previous films, with the Quantum organisation alluded
to in Casino Royale and introduced in Quantum of Solace reimagined as a division within Spectre rather than an independent
organisation. Further references to Fleming's material can be found throughout the film; an MI6 safehouse is called
"Hildebrand Rarities and Antiques", a reference to the short story "The Hildebrand Rarity" from the For Your Eyes
Only short story collection. Bond's torture by Blofeld mirrors his torture by the title character of Kingsley Amis' continuation novel Colonel Sun. The main cast was revealed in December 2014 at
the 007 Stage at Pinewood Studios. Daniel Craig returned for his fourth appearance as James Bond, while Ralph Fiennes,
Naomie Harris and Ben Whishaw reprised their roles as M, Eve Moneypenny and Q respectively, having been established
in Skyfall. Rory Kinnear also reprised his role as Bill Tanner in his third appearance in the series. Christoph Waltz
was cast in the role of Franz Oberhauser, though he refused to comment on the nature of the part. It was later revealed
with the film's release that he is Ernst Stavro Blofeld. Dave Bautista was cast as Mr. Hinx after producers sought
an actor with a background in contact sports. After casting Bérénice Lim Marlohe, a relative newcomer, as Sévérine
in Skyfall, Mendes consciously sought out a more experienced actor for the role of Madeleine Swann, ultimately casting
Léa Seydoux in the role. Monica Bellucci joined the cast as Lucia Sciarra, becoming, at the age of fifty, the oldest
actress to be cast as a Bond girl. In an interview with Danish website Euroman, Jesper Christensen revealed
he would be reprising his role as Mr. White from Casino Royale and Quantum of Solace. Christensen's character was
reportedly killed off in a scene intended to be used as an epilogue to Quantum of Solace, before it was removed from
the final cut of the film, enabling his return in Spectre. In addition to the principal cast, Alessandro Cremona
was cast as Marco Sciarra, Stephanie Sigman was cast as Estrella, and Detlef Bothe was cast as a villain for scenes
shot in Austria. In February 2015 over fifteen hundred extras were hired for the pre-title sequence set in Mexico,
though they were duplicated in the film, giving the effect of around ten thousand extras. In March 2013 Mendes said
he would not return to direct the next film in the series, then known as Bond 24; he later reversed his decision and announced that he would return, as he found the script and the plans for the long-term future of the franchise appealing. In
directing Skyfall and Spectre, Mendes became the first director to oversee two consecutive Bond films since John
Glen directed The Living Daylights and Licence to Kill in 1987 and 1989. Skyfall writer John Logan resumed his role
of scriptwriter, collaborating with Neal Purvis and Robert Wade, who returned for their sixth Bond film.[N 4] The
writer Jez Butterworth also worked on the script, alongside Mendes and Craig. Dennis Gassner returned as the film's
production designer, while cinematographer Hoyte van Hoytema took over from Roger Deakins. In July 2015 Mendes noted
that the combined crew of Spectre numbered over one thousand, making it a larger production than Skyfall. Craig is
listed as co-producer. Mendes revealed that production would begin on 8 December 2014 at Pinewood Studios, with filming
taking seven months. Mendes also confirmed several filming locations, including London, Mexico City and Rome. Van
Hoytema shot the film on Kodak 35 mm film stock. Early filming took place at Pinewood Studios, and around London,
with scenes variously featuring Craig and Harris at Bond's flat, and Craig and Kinnear travelling down the River
Thames. Filming started in Austria in December 2014, with production taking in the area around Sölden—including the
Ötztal Glacier Road, Rettenbach glacier and the adjacent ski resort and cable car station—and Obertilliach and Lake
Altaussee, before concluding in February 2015. Scenes filmed in Austria centred on the Ice Q Restaurant, standing
in for the fictional Hoffler Klinik, a private medical clinic in the Austrian Alps. Filming included an action scene
featuring a Land Rover Defender Bigfoot and a Range Rover Sport. Production was temporarily halted first by an injury
to Craig, who sprained his knee whilst shooting a fight scene, and later by an accident involving a filming vehicle
that saw three crew members injured, at least one of them seriously. Filming temporarily returned to England to shoot
scenes at Blenheim Palace in Oxfordshire, which stood in for a location in Rome, before moving on to the city itself
for a five-week shoot across the city, with locations including the Ponte Sisto bridge and the Roman Forum. The production
faced opposition from a variety of special interest groups and city authorities, who were concerned about the potential
for damage to historical sites around the city, and problems with graffiti and rubbish appearing in the film. A car
chase scene set along the banks of the Tiber River and through the streets of Rome featured an Aston Martin DB10
and a Jaguar C-X75. The C-X75 was originally developed as a hybrid electric vehicle with four independent electric motors powered by two jet turbines, before the project was cancelled. The version used for filming was converted
to use a conventional internal combustion engine, to minimise the potential for disruption from mechanical problems
with the complex hybrid system. The C-X75s used for filming were developed by the engineering division of Formula
One racing team Williams, who built the original C-X75 prototype for Jaguar. With filming completed in Rome, production
moved to Mexico City in late March to shoot the film's opening sequence, with scenes to include the Day of the Dead
festival filmed in and around the Zócalo and the Centro Histórico district. The planned scenes required the city
square to be closed for filming a sequence involving a fight aboard a Messerschmitt-Bölkow-Blohm Bo 105 helicopter
flown by stunt pilot Chuck Aaron, which called for modifications to be made to several buildings to prevent damage.
This particular scene in Mexico required 1,500 extras, 10 giant skeletons and 250,000 paper flowers. Reports in the
Mexican media added that the film's second unit would move to Palenque in the state of Chiapas, to film aerial manoeuvres
considered too dangerous to shoot in an urban area. Following filming in Mexico, and during a scheduled break, Craig
was flown to New York to undergo minor surgery to fix his knee injury. It was reported that filming was not affected
and he had returned to filming at Pinewood Studios as planned on 22 April. A brief scene at London's City Hall was filmed on 18 April 2015, while Mendes was on location. On 17 May 2015 filming took place on the Thames in London.
Stunt scenes involving Craig and Seydoux on a speedboat as well as a low flying helicopter near Westminster Bridge
were shot at night, with filming temporarily closing both Westminster and Lambeth Bridges. Scenes were also shot
on the river near MI6's headquarters at Vauxhall Cross. The crew returned to the river less than a week later to
film scenes solely set on Westminster Bridge. The London Fire Brigade was on set to simulate rain as well as monitor
smoke used for filming. Craig, Seydoux, and Waltz, as well as Harris and Fiennes, were seen being filmed. Prior to
this, scenes involving Fiennes were shot at a restaurant in Covent Garden. Filming then took place in Trafalgar Square.
In early June, the crew, as well as Craig, Seydoux, and Waltz, returned to the Thames for a final time to continue
filming scenes previously shot on the river. After wrapping up in England, production travelled to Morocco in June,
with filming taking place in Oujda, Tangier and Erfoud, after preliminary work was completed by the production's
second unit. An explosion filmed in Morocco holds a Guinness World Record for the "Largest film stunt explosion"
in cinematic history, with the record credited to production designer Chris Corbould. Principal photography concluded
on 5 July 2015. A wrap-up party for Spectre was held in commemoration before entering post-production. Filming took
128 days. Whilst filming in Mexico City, speculation in the media claimed that the script had been altered to accommodate
the demands of Mexican authorities—reportedly influencing details of the scene and characters, casting choices, and
modifying the script in order to portray the country in a "positive light"—in order to secure tax concessions and
financial support worth up to $20 million for the film. This was denied by producer Michael G. Wilson, who stated
that the scene had always been intended to be shot in Mexico as production had been attracted to the imagery of the
Day of the Dead, and that the script had been developed from there. Production of Skyfall had previously faced similar
problems while attempting to secure permits to shoot the film's pre-title sequence in India before moving to Istanbul.
Thomas Newman returned as Spectre's composer. Rather than composing the score once the film had moved into post-production,
Newman worked during filming. The theatrical trailer released in July 2015 contained a rendition of John Barry's
On Her Majesty's Secret Service theme. Mendes revealed that the final film would have more than one hundred minutes
of music. The soundtrack album was released on 23 October 2015 in the UK and 6 November 2015 in the USA on the Decca
Records label. In September 2015 it was announced that Sam Smith and regular collaborator Jimmy Napes had written
the film's title theme, "Writing's on the Wall", with Smith performing it for the film. Smith said the song came
together in one session and that he and Napes wrote it in under half an hour before recording a demo. Satisfied with
the quality, the demo was used in the final release. The song was released as a digital download on 25 September
2015. It received mixed reviews from critics and fans, particularly in comparison to Adele's "Skyfall". The mixed
reception to the song led to Shirley Bassey trending on Twitter on the day it was released. It became the first Bond
theme to reach number one in the UK Singles Chart. The English band Radiohead also composed a song for the film,
which went unused. During the December 2014 press conference announcing the start of filming, Aston Martin and Eon
unveiled the new DB10 as the official car for the film. The DB10 was designed in collaboration between Aston Martin
and the filmmakers, with only 10 being produced especially for Spectre as a celebration of the 50th anniversary of
the company's association with the franchise. Only eight of those 10 were used for the film, however; the remaining
two were used for promotional work. After modifying the Jaguar C-X75 for the film, Williams F1 carried the 007 logo
on their cars at the 2015 Mexican Grand Prix, with the team playing host to the cast and crew ahead of the Mexican
premiere of the film. To promote the film, production continued the trend established during Skyfall's production
of releasing still images of clapperboards and video blogs on Eon's official social media accounts. On 13 March 2015,
several members of the cast and crew, including Craig, Whishaw, Wilson and Mendes, as well as previous James Bond
actor, Sir Roger Moore, appeared in a sketch written by David Walliams and the Dawson Brothers for Comic Relief's
Red Nose Day on BBC One. In the sketch, they film a behind-the-scenes mockumentary on the filming of Spectre. The
first teaser trailer for Spectre was released worldwide in March 2015, followed by the theatrical trailer in July
and the final trailer in October. Spectre had its world premiere in London on 26 October 2015 at the Royal Albert
Hall, the same day as its general release in the United Kingdom and Republic of Ireland. Following the announcement
of the start of filming, Paramount Pictures brought forward the release of Mission: Impossible – Rogue Nation to
avoid competing with Spectre. In March 2015 IMAX corporation announced that Spectre would be screened in its cinemas,
following Skyfall's success with the company. In the UK it received a wider release than Skyfall, with a minimum
of 647 cinemas including 40 IMAX screens, compared to Skyfall's 587 locations and 21 IMAX screens. As of 21 February 2016, Spectre has grossed $879.3 million worldwide; $138.1 million of the takings have been generated from
the UK market and $199.8 million from North America. In the United Kingdom, the film grossed £4.1 million ($6.4 million)
from its Monday preview screenings. It grossed £6.3 million ($9.2 million) on its opening day and then £5.7 million
($8.8 million) on Wednesday, setting UK records for both days. In the film's first seven days it grossed £41.7 million
($63.8 million), breaking the UK record for highest first-week opening, set by Harry Potter and the Prisoner of Azkaban's
£23.88 million ($36.9 million) in 2004. Its Friday–Saturday gross was £20.4 million ($31.2 million) compared to Skyfall's
£20.1 million ($31 million). The film also broke the record for the best per-screen opening average with $110,000,
a record previously held by The Dark Knight with $100,200. It has grossed a total of $136.3 million there. In the
U.K., it surpassed Avatar to become the country's highest-grossing IMAX release ever with $10.09 million. Spectre
opened in Germany with $22.45 million (including previews), setting a new record for the biggest Saturday of all time; in Australia with $8.7 million (including previews); and in South Korea with $8.2 million (including previews).
Despite the 13 November Paris attacks, which led to numerous theaters being closed down, the film opened with $14.6
million (including $2 million in previews) in France. In Mexico, where part of the film was shot, it debuted with $4.5 million, more than double Skyfall's opening. It also bested its predecessor's opening in various Nordic regions
where MGM is distributing, such as in Finland ($2.66 million) and Norway ($2.91 million), and in other markets like
Denmark ($4.2 million), the Netherlands ($3.38 million), and Sweden ($3.1 million). In India, it opened at No. 1
with $4.8 million which is 4% above the opening of Skyfall. It topped the German-speaking Switzerland box office
for four weeks and in the Netherlands, it has held the No. 1 spot for seven weeks straight where it has topped Minions
to become the top movie of the year. The top earning markets are Germany ($70.3 million) and France ($38.8 million).
In Paris, it has the second-highest ticket sales of all time, with 4.1 million tickets sold, behind only Spider-Man 3, which sold over 6.32 million tickets in 2007. In the United States and Canada, the film opened on 6 November 2015,
and in its opening weekend, was originally projected to gross $70–75 million from 3,927 screens, the widest release
for a Bond film. However, after grossing $5.25 million from its early Thursday night showings and $28 million on
its opening day, weekend projections were increased to $75–80 million. The film ended up grossing $70.4 million in
its opening weekend (about $20 million less than Skyfall's $90.6 million debut, including IMAX previews), but nevertheless
finished first at the box office. IMAX generated $9.1 million for Spectre at 374 screens, premium large format made
$8 million from 429 cinemas, reaping 11% of the film's opening, which means that Spectre earned $17.1 million (23%)
of its opening weekend total in large-format venues. Cinemark XD generated $1.85 million in 112 XD locations. In
China, it opened on 12 November and earned $15 million on its opening day, which is the second biggest 2D single
day gross for a Hollywood film behind the $18.5 million opening day of Mission: Impossible – Rogue Nation and occupying
43% of all available screens which included $790,000 in advance night screenings. Through its opening weekend, it
earned $48.1 million from 14,700 screens which is 198% ahead of Skyfall, a new record for a Hollywood 2D opening.
IMAX contributed $4.6 million on 246 screens, also a new record for a three-day opening for a November release (breaking
Interstellar's record). In its second weekend, it added $12.1 million, falling precipitously by 75%, the second-worst second-weekend drop for any major Hollywood release in China in 2015. It grossed a total of $84.7 million there after four weekends; despite the strong opening, it failed to reach the $100 million mark as projected. Spectre has
received mixed reviews, with many reviewers giving the film either highly positive or highly negative feedback. Many
critics praised the film's opening scene, action sequences, stuntwork, cinematography and performances from the cast.
In some early reviews, the film received favourable comparisons with its predecessor, Skyfall. Rotten Tomatoes sampled
274 reviews and judged 64% of the critiques to be positive, saying that the film "nudges Daniel Craig's rebooted
Bond closer to the glorious, action-driven spectacle of earlier entries, although it's admittedly reliant on established
007 formula." On Metacritic, the film has a rating of 60 out of 100, based on 48 critics, indicating "mixed or average
reviews". Audiences polled by CinemaScore gave the film an average grade of "A−" on an A+ to F scale. Prior to its
UK release, Spectre mostly received positive reviews. Mark Kermode, writing in The Guardian, gave the film four out
of five stars, observing that the film did not live up to the standard set by Skyfall, but was able to tap into audience
expectations. Writing in the same publication, Peter Bradshaw gave the film a full five stars, calling it "inventive,
intelligent and complex", and singling out Craig's performance as the film's highlight. In another five star review,
The Daily Telegraph's Robbie Collin described Spectre as "a swaggering show of confidence", lauding it as "a feat
of pure cinematic necromancy." In an otherwise positive, but overall less enthusiastic review, IGN's Chris Tilly
considered Spectre "solid if unspectacular", and gave the film a 7.2 score (out of a possible 10), saying that "the
film falls frustratingly short of greatness." Critical appraisal of the film was mixed in the United States. In a
lukewarm review for RogerEbert.com, Matt Zoller Seitz gave the film 2.5 stars out of 4, describing Spectre as inconsistent
and unable to capitalise on its potential. Kenneth Turan, reviewing the film for Los Angeles Times, concluded that
Spectre "comes off as exhausted and uninspired". Manohla Dargis of The New York Times panned the film as having "nothing
surprising" and sacrificing its originality for the sake of box office returns. Forbes' Scott Mendelson also heavily
criticised the film, denouncing Spectre as "the worst 007 movie in 30 years". Darren Franich of Entertainment Weekly
viewed Spectre as "an overreaction to our current blockbuster moment", aspiring "to be a serialized sequel" and proving
"itself as a Saga". While noting that "[n]othing that happens in Spectre holds up to even minor logical scrutiny",
he had "come not to bury Spectre, but to weirdly praise it. Because the final act of the movie is so strange, so
willfully obtuse, that it deserves extra attention." In a positive review for Rolling Stone, Peter Travers gave the film 3.5 stars out of 4, writing, "The 24th movie about the British MI6 agent with a license to kill is party time for Bond fans, a fierce, funny, gorgeously produced valentine to the longest-running franchise in movies". In another positive review, Mick LaSalle of the San Francisco Chronicle gave it a perfect 100 score, stating: “One of the great
satisfactions of Spectre is that, in addition to all the stirring action, and all the timely references to a secret
organization out to steal everyone’s personal information, we get to believe in Bond as a person.” Stephen Whitty
from the New York Daily News gave it an 80 grade, saying: “Craig is cruelly efficient. Dave Bautista makes a good,
Oddjob-like assassin. And while Lea Seydoux doesn’t leave a huge impression as this film’s “Bond girl,” perhaps it’s
because we’ve already met — far too briefly — the hypnotic Monica Bellucci, as the first real “Bond woman” since
Diana Rigg.” Richard Roeper of the Chicago Sun-Times gave it a 75 grade. He stated: “This is the 24th Bond film
and it ranks solidly in the middle of the all-time rankings, which means it’s still a slick, beautifully photographed,
action-packed, international thriller with a number of wonderfully, ludicrously entertaining set pieces, a sprinkling
of dry wit, myriad gorgeous women and a classic psycho-villain who is clearly out of his mind but seems to like it
that way.” Michael Phillips of the Chicago Tribune gave it a 75 grade. He stated: “For all its workmanlike
devotion to out-of-control helicopters, “Spectre” works best when everyone’s on the ground, doing his or her job,
driving expensive fast cars heedlessly, detonating the occasional wisecrack, enjoying themselves and their beautiful
clothes.” Guy Lodge of Variety gave it a 70 score, stating: “What’s missing is the unexpected emotional urgency
of “Skyfall,” as the film sustains its predecessor’s nostalgia kick with a less sentimental bent.” Christopher Orr,
writing in The Atlantic, also criticised the film, saying that Spectre "backslides on virtually every [aspect]".
Lawrence Toppman of The Charlotte Observer called Craig's performance "Bored, James Bored." Alyssa Rosenberg, writing
for The Washington Post, stated that the film turned into "a disappointingly conventional Bond film." In India, it
was reported that the Indian Central Board of Film Certification (CBFC) censored kissing scenes featuring Monica
Bellucci, Daniel Craig, and Léa Seydoux. They also muted all profanity. This prompted criticism of the board online,
especially on Twitter. A sequel to Spectre will begin development in spring 2016. Sam Mendes has stated he will not
return to direct the next 007 film. Christoph Waltz has signed on for two more films in the series, but his return
depends on whether or not Craig will again portray Bond.
The 2008 Sichuan earthquake, also known as the Great Sichuan earthquake, measured at 8.0 Ms and 7.9 Mw, occurred at
02:28:01 PM China Standard Time at the epicenter (06:28:01 UTC) on May 12 in Sichuan province; it killed 69,197 people and left 18,222 missing.
It is also known as the Wenchuan earthquake (Chinese: 汶川大地震; pinyin: Wènchuān dà dìzhèn; literally: "Great Wenchuan
earthquake"), after the location of the earthquake's epicenter, Wenchuan County, Sichuan. The epicenter was 80 kilometres
(50 mi) west-northwest of Chengdu, the provincial capital, with a focal depth of 19 km (12 mi). The earthquake was
also felt in nearby countries and as far away as both Beijing and Shanghai—1,500 km (930 mi) and 1,700 km (1,060
mi) away—where office buildings swayed with the tremor. Strong aftershocks, some exceeding magnitude 6, continued
to hit the area even months after the main quake, causing new casualties and damage. Official figures (as of July
21, 2008 12:00 CST) stated that 69,197 were confirmed dead, including 68,636 in Sichuan province, and 374,176 injured,
with 18,222 listed as missing. The earthquake left about 4.8 million people homeless, though the number could be
as high as 11 million. Approximately 15 million people lived in the affected area. It was the deadliest earthquake
to hit China since the 1976 Tangshan earthquake, which killed at least 240,000 people, and the strongest in the country
since the 1950 Chayu earthquake, which registered at 8.5 on the Richter magnitude scale. It is the 21st deadliest
earthquake of all time. On November 6, 2008, the central government announced that it would spend 1 trillion RMB
(about US $146.5 billion) over the next three years to rebuild areas ravaged by the earthquake, as part of the Chinese
economic stimulus program. The earthquake had a magnitude of 8.0 Ms and 7.9 Mw. The epicenter was in Wenchuan County,
Ngawa Tibetan and Qiang Autonomous Prefecture, 80 km west/northwest of the provincial capital of Chengdu, with its
main tremor occurring at 14:28:01.42 China Standard Time (06:28:01.42 UTC) on May 12, 2008, and lasting for around two
minutes. Almost 80% of the buildings in the quake zone were destroyed. According to a study by the China Earthquake Administration
(CEA), the earthquake occurred along the Longmenshan fault, a thrust structure along the border of the Indo-Australian
Plate and Eurasian Plate. Seismic activities concentrated on its mid-fracture (known as Yingxiu-Beichuan fracture).
The rupture lasted close to 120 seconds, with the majority of the energy released in the first 80 seconds. Starting from Wenchuan,
the rupture propagated at an average speed of 3.1 kilometers per second, oriented 49° toward the northeast, rupturing a total
of about 300 km. Maximum displacement amounted to 9 meters. The focus was deeper than 10 km. Malaysia-based Yazhou
Zhoukan conducted an interview with former researcher at the China Seismological Bureau Geng Qingguo (耿庆国), in which
Geng claimed that a confidential written report was sent to the State Seismological Bureau on April 30, 2008, warning
about the possible occurrence of a significant earthquake in Ngawa Prefecture region of Sichuan around May 8, with
a range of 10 days before or after the quake. Geng, while acknowledging that earthquake prediction was broadly considered
problematic by the scientific community, believed that "the bigger the earthquake, the easier it is to predict."
Geng had long attempted to establish a correlation between the occurrence of droughts and earthquakes; Premier Zhou
Enlai reportedly took an interest in Geng's work. Geng's drought-earthquake correlation theory was first released
in 1972, and said to have successfully predicted the 1975 Haicheng and 1976 Tangshan earthquakes. The same Yazhou
Zhoukan article pointed out the inherent difficulties associated with predicting earthquakes. In response, an official
with the Seismological Bureau stated that "earthquake prediction is widely acknowledged around the world to be difficult
from a scientific standpoint." The official also denied that the Seismological Bureau had received reports predicting
the earthquake. In a United States Geological Survey (USGS) study, preliminary rupture models of the earthquake indicated
displacement of up to 9 meters along a fault approximately 240 km long by 20 km deep. The earthquake generated deformations
of the surface greater than 3 meters and increased the stress (and probability of occurrence of future events) at
the northeastern and southwestern ends of the fault. On May 20, USGS seismologist Tom Parsons warned that there was
a "high risk" of a major M>7 aftershock over the following weeks or months. Japanese seismologist Yuji Yagi at the University
of Tsukuba said that the earthquake occurred in two stages: "The 155-mile Longmenshan Fault tore in two sections,
the first one ripping about seven yards, followed by a second one that sheared four yards." His data also showed
that the earthquake lasted about two minutes and released 30 times the energy of the Great Hanshin earthquake of
1995 in Japan, which killed over 6,000 people. He pointed out that the shallowness of the epicenter and the density
of population greatly increased the severity of the earthquake. Teruyuki Kato, a seismologist at the University of
Tokyo, said that the seismic waves of the quake traveled a long distance without losing their power because of the
firmness of the terrain in central China. According to reports from Chengdu, the capital of Sichuan province, the
earthquake tremors lasted for "about two or three minutes". Between 64 and 104 major aftershocks, ranging in magnitude
from 4.0 to 6.1, were recorded within 72 hours of the main quake. According to Chinese official counts, "by 12:00
CST, November 6, 2008 there had been 42,719 total aftershocks, of which 246 ranged from 4.0 Ms to 4.9 Ms, 34 from
5.0 Ms to 5.9 Ms, and 8 from 6.0 Ms to 6.4 Ms; the strongest aftershock measured 6.4 Ms." The latest aftershock exceeding
M6 occurred on August 5, 2008. (The Ms 6.1 earthquake on August 30, 2008 in southern Sichuan was not part of this
series because it was caused by a different fault. See 2008 Panzhihua earthquake for details.) The map of earthquake
intensity published by CEA after surveying 500,000 km2 of the affected area shows a maximum liedu (intensity) of XI on the China
Seismic Intensity Scale (CSIS), described as "very destructive" on the European Macroseismic Scale (EMS) from which
CSIS drew reference. (USGS, using the Modified Mercalli intensity scale (MM), also placed maximum intensity at XI,
"very disastrous".) Two south-west-north-east stripes of liedu XI are centered around Yingxiu, Wenchuan (the town
closest to the epicenter of the main quake) and Beichuan (the town repeatedly struck by strong aftershocks including
one registering MS 6.1 on Aug 1, 2008), both in Sichuan Province, occupying a total of 2,419 km2. The Yingxiu liedu-XI
zone is about 66 km long and 20 km wide along Wenchuan–Dujiangyan–Pengzhou; the Beichuan liedu-XI zone is about 82
km long and 15 km wide along An County–Beichuan–Pingwu. The area with liedu X (comparable to X on EMS, "destructive"
and X on MM, "disastrous") spans 3,144 km2. The area affected by earthquakes exceeding liedu VI totals 440,442 km2,
occupying an oval 936 km long and 596 km wide, spanning three provinces and one autonomous region. The Longmen Shan
Fault System is situated in the eastern border of the Tibetan Plateau and contains several faults. This earthquake
ruptured at least two imbricate structures in Longmen Shan Fault System, i.e. the Beichuan Fault and the Guanxian–Anxian
Fault. In the epicentral area, the average slip in Beichuan Fault was about 3.5 metres (11 ft) vertical, 3.5 metres
(11 ft) horizontal-parallel to the fault, and 4.8 metres (16 ft) horizontal-perpendicular to the fault. In the area
about 30 kilometres (19 mi) northeast of the epicenter, the surface slip on Beichuan Fault was almost purely dextral
strike-slip up to about 3 metres (9.8 ft), while the average slip in Guanxian–Anxian Fault was about 2 metres (6
ft 7 in) vertical and 2.3 metres (7 ft 7 in) horizontal. Office buildings in Shanghai's financial district, including
the Jin Mao Tower and the Hong Kong New World Tower, were evacuated. A receptionist at the Tibet Hotel in Chengdu
said things were "calm" after the hotel evacuated its guests. Meanwhile, workers at a Ford plant in Sichuan were
evacuated for about 10 minutes. Chengdu Shuangliu International Airport was shut down, and the control tower and
regional radar control evacuated. One SilkAir flight was diverted and landed in Kunming as a result. Cathay Pacific
delayed both legs of its quadruple daily Hong Kong to London route due to this disruption in air traffic services.
Chengdu Shuangliu Airport reopened later on the evening of May 12, offering limited service as the airport began
to be used as a staging area for relief operations. Reporters in Chengdu said they saw cracks on walls of some residential
buildings in the downtown areas, but no buildings collapsed. Many Beijing office towers were evacuated, including
the building housing the media offices for the organizers of the 2008 Summer Olympics. None of the Olympic venues
were damaged. Meanwhile, a cargo train carrying 13 petrol tanks derailed in Hui County, Gansu, and caught on fire
after the rail was distorted. All of the highways into Wenchuan, and others throughout the province, were damaged,
resulting in delayed arrival of the rescue troops. In Beichuan County, 80% of the buildings collapsed according to
Xinhua News. In the city of Shifang, the collapse of two chemical plants led to leakage of some 80 tons of liquid
ammonia, with hundreds of people reported buried. In the city of Dujiangyan, south-east of the epicenter, a whole
school collapsed, burying 900 students; fewer than 60 survived. The Juyuan Middle School, where many teenagers
were buried, was excavated by civilians and cranes. Dujiangyan is home of the Dujiangyan Irrigation System, an ancient
water diversion project which is still in use and is a UNESCO World Heritage Site. The project's famous Fish Mouth
was cracked but not severely damaged otherwise. Both the Shanghai Stock Exchange and the Shenzhen Stock Exchange
suspended trading of companies based in southwestern China. Copper rose over speculations that production in southwestern
China may be affected, and oil prices dropped over speculations that demand from China would fall. Immediately after
the earthquake event, mobile and terrestrial telecommunications were cut to the affected and surrounding area, with
all internet capabilities cut to the Sichuan area too. Elements of telecommunications were restored by the government
piece by piece over the next number of months as the situation in the Sichuan province gradually improved. Eventually,
a handful of major news and media websites were made accessible online in the region, albeit with dramatically pared
back webpages. China Mobile had more than 2,300 base stations suspended due to power disruption or severe telecommunication
traffic congestion. Half of the wireless communications were lost in the Sichuan province. China Unicom's service
in Wenchuan and four nearby counties was cut off, with more than 700 towers suspended. Initially, officials were
unable to contact the Wolong National Nature Reserve, home to around 280 giant pandas. However, the Foreign Ministry
later said that a group of 31 British tourists visiting the Wolong Panda Reserve in the quake-hit area returned safe
and uninjured to Chengdu. Nonetheless, the well-being of an even greater number of pandas in the neighbouring panda
reserves remained unknown. Five security guards at the reserve were killed by the earthquake. Six pandas escaped
after their enclosures were damaged. By May 20, two pandas at the reserve were found to be injured, while the search
continued for another two adult pandas that went missing after the quake. By May 28, 2008, one panda was still missing.
The missing panda was later found dead under the rubble of an enclosure. Nine-year-old Mao Mao, a mother of five
at the breeding center, was discovered on Monday, her body crushed by a wall in her enclosure. Panda keepers and
other workers placed her remains in a small wooden crate and buried her outside the breeding centre. The Zipingpu
Hydropower Plant (simplified Chinese: 紫坪铺水库; traditional Chinese: 紫坪鋪水庫) located 20 km east of the epicenter was
damaged. A recent inspection indicated that the damage was less severe than initially feared, and it remains structurally
stable and safe. The Tulong reservoir upstream was in danger of collapse. About 2,000 troops were allocated to
Zipingpu to relieve the pressure through the spillway. In total, 391 dams, most of them small, were reported
damaged by the quake. According to Chinese state officials, the quake caused 69,180 known deaths including 68,636
in Sichuan province; 18,498 people are listed as missing, and 374,176 injured, but these figures may further increase
as more reports come in. This estimate includes 158 earthquake relief workers who were killed in landslides
as they tried to repair roads. One rescue team reported only 2,300 survivors from the town of Yingxiu in Wenchuan
County, out of a total population of about 9,000. Between 3,000 and 5,000 people were killed in Beichuan County, Sichuan alone;
in the same location, 10,000 people were injured and 80% of the buildings were destroyed. The old county seat of
Beichuan was abandoned and preserved as part of the Beichuan Earthquake Museum. Eight schools were toppled in Dujiangyan.
A 56-year-old was killed in Dujiangyan during a rescue attempt on the Lingyanshan Ropeway, where due to the earthquake
11 tourists from Taiwan had been trapped inside cable cars since May 13. A 4-year-old boy named Zhu Shaowei (traditional
Chinese: 朱紹維; simplified Chinese: 朱绍维; pinyin: Zhū Shàowéi) was also killed in Mianzhu City when a house collapsed
on him and another was reported missing. Experts point out that the earthquake hit an area that has been largely
neglected and untouched by China's economic rise. Health care is poor in inland areas such as Sichuan, highlighting
the widening gap between prosperous urban dwellers and struggling rural people. Vice Minister of Health Gao Qiang
told reporters in Beijing that the "public health care system in China is insufficient." The Vice Minister of Health
also suggested that the government would pick up the costs of care to earthquake victims, many of whom have little
or no insurance: "The government should be responsible for providing medical treatment to them," he said. In terms
of school casualties, thousands of school children died due to shoddy construction. In Mianyang City, seven schools
collapsed, burying at least 1,700 people. At least 7,000 school buildings throughout the province collapsed. Another
700 students were buried in a school in Hanwang. At least 600 students and staff died at Juyuan Elementary School.
Up to 1,300 children and teachers died at Beichuan Middle School. Details of school casualties had been under non-governmental
investigation since December 2008 by volunteers including artist and architect Ai Weiwei, who had been constantly
posting updates on his blog since March 2009. The official tally of students killed in the earthquake was not released
until May 7, 2009, almost a year after the earthquake. According to the state-run Xinhua news agency, the earthquake
killed 5,335 students and left another 546 children disabled. In the aftermath of the earthquake, the Chinese government
declared that parents who had lost their only children would get free treatment from fertility clinics to reverse
vasectomies and tubal ligations conducted by family planning authorities. The earthquake left at least 5 million
people without housing, although the number could be as high as 11 million. Millions of livestock and a significant
amount of agriculture were also destroyed, including 12.5 million animals, mainly birds. In Sichuan province,
a million pigs died out of a total of 60 million. Catastrophe modeling firm AIR Worldwide reported official estimates
of insurers' losses at US$1 billion from the earthquake; estimated total damages exceeded US$20 billion. The firm valued
Chengdu, which at the time had an urban population of 4.5 million people, at around US$115 billion, with only a small
portion covered by insurance. Reginald DesRoches, a professor of civil and environmental engineering at Georgia Tech,
pointed out that the massive damage of properties and houses in the earthquake area was because China did not create
an adequate seismic design code until after the devastating 1976 Tangshan earthquake. DesRoches said: "If the buildings
were older and built prior to that 1976 earthquake, chances are they weren't built for adequate earthquake forces."
In the days following the disaster, an international reconnaissance team of engineers was dispatched to the region
to make a detailed preliminary survey of damaged buildings. Their findings show a variety of reasons why many constructions
failed to withstand the earthquake. News reports indicate that the poorer, rural villages were hardest hit. Swaminathan
Krishnan, assistant professor of civil engineering and geophysics at the California Institute of Technology said:
"the earthquake occurred in the rural part of China. Presumably, many of the buildings were just built; they were
not designed, so to speak." Swaminathan Krishnan further added: "There are very strong building codes in China, which
take care of earthquake issues and seismic design issues. But many of these buildings presumably were quite old and
probably were not built with any regulations overseeing them." Even with the five largest cities in Sichuan suffering
only minor damage from the quake, some estimates of the economic loss run higher than US$75 billion, making the earthquake
one of the costliest natural disasters in Chinese history. Strong aftershocks continued to strike even months after
the main quake. On May 25, an aftershock of 6.0 Mw (6.4 Ms according to CEA) hit northeast of the original earthquake's
epicenter, in Qingchuan County, Sichuan, causing eight deaths, 1000 injuries, and destroying thousands of buildings.
On May 27, two aftershocks, one 5.2 Mw in Qingchuan County and one 5.7 Mw in Ningqiang County, Shaanxi, led to the
collapse of more than 420,000 homes and injured 63 people. The same area suffered two more aftershocks of 5.6 and
6.0 Ms (5.8 and 5.5 Mw, respectively, according to USGS) on July 23, resulting in 1 death, 6 serious injuries, collapse
of hundreds of homes and damaging kilometers of highways. Pingwu County and Beichuan County, Sichuan, also northeast
of Wenchuan and close to the epicenter of a 7.2 Ms earthquake in 1976, suffered a 6.1 Ms aftershock (5.7 Mw according
to USGS) on August 1; it caused 2 deaths, 345 injuries, collapse of 707 homes, damages to over 1,000 homes, and blocked
25 kilometres (16 mi) of country roads. As late as August 5, yet another aftershock of 6.1 Ms (6.2 Mw according to
USGS) hit Qingchuan, Sichuan, causing 1 death, 32 injuries, telecommunication interruptions, and widespread hill
slides blocking roads in the area including a national highway. Executive vice governor Wei Hong confirmed on November
21, 2008 that more than 90,000 people in total were dead or missing in the earthquake. He stated that 200,000 homes
had been rebuilt, and 685,000 were under reconstruction, but 1.94 million households were still without permanent
shelter. 1,300 schools had been reconstructed, with initial relocation of 25 townships, including Beichuan and Wenchuan,
two of the most devastated areas. The government spent $441 billion on relief and reconstruction efforts. General
Secretary and President Hu Jintao announced that the disaster response would be rapid. Just 90 minutes after the
earthquake, Premier Wen Jiabao, who has an academic background in geomechanics, flew to the earthquake area to oversee
the rescue work. Soon afterward, the Ministry of Health said that it had sent ten emergency medical teams to Wenchuan
County. On the same day, the Chengdu Military Region Command dispatched 50,000 troops and armed police to help with
disaster relief work in Wenchuan County. However, due to the rough terrain and close proximity of the quake's epicenter,
the soldiers found it very difficult to get help to the rural regions of the province. The National Disaster Relief
Commission initiated a "Level II emergency contingency plan", which covers the most serious class of natural disasters.
The plan was raised to Level I at 22:15 CST, May 12. An earthquake emergency relief team of 184 people (consisting of 12
people from the State Seismological Bureau, 150 from the Beijing Military Area Command, and 22 from the Armed Police
General Hospital) left Beijing from Nanyuan Airport late May 12 in two military transport planes to travel to Wenchuan
County. An article in the China Digital Times reported a close analysis by an alleged Chinese construction engineer
known online as “Book Blade” (书剑子). On Children's Day, June 1, 2008, many parents went to the rubble
of schools to mourn for their children. The surviving children, who were mostly living in relief centres, performed
ceremonies marking the special day, but also acknowledging the earthquake. Central state-owned enterprises cumulatively
donated more than $48.6 million. China National Petroleum Corp and Sinopec donated 10 million yuan each to the disaster
area. On May 16 China stated it had also received $457 million in donated money and goods for rescue efforts so far,
including $83 million from 19 countries and four international organizations. Saudi Arabia was the largest aid donor
to China, providing close to €40,000,000 in financial assistance, and an additional €8,000,000 worth of relief materials.
In 2008, the State Council established a counterpart support plan (《汶川地震灾后恢复重建对口支援方案》). The plan arranged for 19 eastern
and central provinces and municipalities to help 18 counties, on a "one province helps one affected county" basis. The plan
spanned three years, with each province or municipality contributing no less than one percent of its annual budget. An article in Science
suggested that the construction and filling of the Zipingpu Dam may have triggered the earthquake. The chief engineer
of the Sichuan Geology and Mineral Bureau said that the sudden shift of a huge quantity of water into the region
could have relaxed the tension between the two sides of the fault, allowing them to move apart, and could have increased
the direct pressure on it, causing a violent rupture. The effect was "25 times more" than a year's worth of natural
stress from tectonic movement. The government had disregarded warnings about so many large-scale dam projects in
a seismically active area. Researchers have been denied access to seismological and geological data to examine the
cause of the quake further. The earthquake also provided opportunities for researchers to retrospectively fit data
to models aimed at future earthquake prediction. Using data from the Intermagnet Lanzhou geomagnetic observatory, geologists
Lazo Pekevski from the Ss. Cyril and Methodius University of Skopje in Macedonia and Strachimir Mavrodiev from the
Bulgarian Academy of Sciences attempted to establish a "time prediction method" through collecting statistics on
geomagnetism with tidal gravitational potential. Using this method, they were said to have predicted the time of
the 2008 Sichuan earthquake with an accuracy of ±1 day. The same study, however, acknowledges the limitations of earthquake
prediction models, and does not claim that the location of the quake could have been accurately predicted. In a press
conference held by the State Council Information Office the day after the earthquake, geologist Zhang Xiaodong, deputy
director of CEA's Seismic Monitoring Network Center, restated that earthquake prediction was a global issue, in the
sense that no proven methods exist, and that no prediction notification was received before the earthquake. Seismologist
Gary Gibson of Monash University in Australia told Deutsche Presse-Agentur that he also did not see anything that
could be regarded as having 'predicted' the earthquake's occurrence. In 2002, Chinese geologist Chen Xuezhong published
a Seismic Risk Analysis study in which he came to the conclusion that beginning with 2003, attention should be paid
to the possibility of an earthquake with a magnitude of over 7.0 occurring in Sichuan region. He based his study
on statistical correlation. That Sichuan is a seismically active area has been discussed for years prior to the quake,
though few studies point to a specific date and time. The earthquake was the worst to strike the Sichuan area in
over 30 years. Following the quake, experts and the general public sought information on whether or not the earthquake
could have been predicted in advance, and whether or not studying statistics related to the quake could result in
better prediction of earthquakes in the future. Earthquake prediction is not yet an established science; there is no
consensus within the scientific community that earthquake "prediction" is possible. Many rescue teams, including
that of the Taipei Fire Department from Taiwan, were reported ready to join the rescue effort in Sichuan as early
as Wednesday. However, the Red Cross Society of China said on May 13 that "it was inconvenient currently due to
the traffic problem to the hardest hit areas closest to the epicenter." The Red Cross Society of China also stated
that the disaster areas need tents, medical supplies, drinking water and food; however it recommended donating cash
instead of other items, as it had not been possible to reach roads that were completely damaged or places that were
blocked off by landslides. Landslides continuously threatened the progress of a search and rescue group of 80 men,
each carrying about 40 kg of relief supplies, from a motorized infantry brigade under commander Yang Wenyao, as they
tried to reach the ethnically Tibetan village of Sier, at an elevation of 4,000 m above sea level in Pingwu County. The
extreme terrain conditions precluded the use of helicopter evacuation, and over 300 of the Tibetan villagers were
stranded in their demolished village for five days without food and water before the rescue group finally arrived
to help the injured and stranded villagers down the mountain. Persistent heavy rain and landslides in Wenchuan County
and the nearby area badly affected rescue efforts. At the start of rescue operations on May 12, 20 helicopters were
deployed for the delivery of food, water, and emergency aid, and also the evacuation of the injured and reconnaissance
of quake-stricken areas. By 17:37 CST on May 13, a total of over 15,600 troops and militia reservists from the Chengdu
Military Region had joined the rescue force in the heavily affected areas. A commander reported from Yingxiu Town,
Wenchuan, that around 3,000 survivors were found, while the status of the other inhabitants (around 9,000) remained
unclear. Some 1,300 rescuers reached the epicenter, and 300 pioneer troops reached the seat of Wenchuan at about 23:30
CST. By 12:17 CST, May 14, 2008, communication in the seat of Wenchuan was partly revived. On the afternoon of May
14, 15 Special Operations Troops, along with relief supplies and communications gear, parachuted into inaccessible
Mao County, northeast of Wenchuan. By May 15, Premier Wen Jiabao ordered the deployment of an additional 90 helicopters,
of which 60 were to be provided by the PLAAF, and 30 were to be provided by the civil aviation industry, bringing
the total number of aircraft deployed in relief operations by the air force, army, and civil aviation to over
150, resulting in the largest non-combat airlifting operation in People's Liberation Army history. Beijing accepted
the aid of the Tzu Chi Foundation from Taiwan late on May 13. Tzu Chi was the first force from outside the People's
Republic of China to join the rescue effort. China stated it would gratefully accept international help to cope with
the quake. A direct chartered cargo flight was made by China Airlines from Taiwan Taoyuan International Airport to
Chengdu Shuangliu International Airport sending some 100 tons of relief supplies donated by the Tzu Chi Foundation
and the Red Cross Society of Taiwan to the affected areas. Approval from mainland Chinese authorities was sought,
and the chartered flight departed Taipei at 17:00 CST, May 15 and arrived in Chengdu by 20:30 CST. A rescue team
from the Red Cross in Taiwan was also scheduled to depart Taipei on a Mandarin Airlines direct chartered flight to
Chengdu at 15:00 CST on May 16. On May 16, rescue groups from South Korea, Japan, Singapore, Russia and Taiwan arrived
to join the rescue effort. The United States shared some of its satellite images of the quake-stricken areas with
Chinese authorities. During the weekend, the US sent into China two U.S. Air Force C-17's carrying supplies, which
included tents and generators. Xinhua reported 135,000 Chinese troops and medics were involved in the rescue effort
across 58 counties and cities. The Internet was extensively used for passing information to aid rescue and recovery
efforts. For example, the official news agency Xinhua set up an online rescue request center in order to find the
blind spots of disaster recovery. After learning that rescue helicopters had trouble landing in the epicenter area in Wenchuan, a student proposed a landing spot online, and it was chosen as the first touchdown site for the helicopters. Volunteers also set up several websites to help store contact information for victims and evacuees.
On May 31, a rescue helicopter carrying earthquake survivors and crew members crashed in fog and turbulence in Wenchuan
county. No one survived. On May 12, 2009, China marked the first anniversary of the quake with a moment of silence
as people across the nation remembered the dead. The government also opened access to the sealed ruins of the Beichuan
county seat for three days, after which they were to be preserved as a state earthquake relic museum, to remind people
of the terrible disaster. There were also several concerts across the country to raise money for the survivors of
the quake. Following the earthquake, donations were made by people from all over mainland China, with booths set
up in schools, at banks, and around gas stations. People also donated blood, resulting, according to Xinhua, in long line-ups in most major Chinese cities. Many donated through text messaging on mobile phones to accounts set up by China Unicom and China Mobile. By May 16, the Chinese government had allocated a total of $772 million for earthquake
relief, up sharply from the $159 million allocated as of May 14. The Red Cross Society of China flew 557 tents and 2,500
quilts valued at 788,000 yuan (US$113,000) to Wenchuan County. The Amity Foundation had already begun relief work in the region and earmarked US$143,000 for disaster relief. The Sichuan Ministry of Civil Affairs said that it had provided 30,000 tents for those left homeless. The central government estimated that over 7,000 inadequately
engineered schoolrooms collapsed in the earthquake. Chinese citizens have since invented a catch phrase: "tofu-dregs
schoolhouses" (Chinese: 豆腐渣校舍) to mock both the poor quality and the sheer number of these inferior constructions that killed so many schoolchildren. Due to the one-child policy, many families lost their only child when schools in the region
collapsed during the earthquake. Consequently, Sichuan provincial and local officials have lifted the restriction
for families whose only child was either killed or severely injured in the disaster. So-called "illegal children"
under 18 years of age may be registered as legal replacements for their dead siblings; if the dead child was illegal,
no further outstanding fines would apply. Reimbursement would not, however, be offered for fines that were already
levied. On the evening of May 18, CCTV-1 hosted a special four-hour program called The Giving of Love (simplified
Chinese: 爱的奉献; traditional Chinese: 愛的奉獻), hosted by regulars from the CCTV New Year's Gala and round-the-clock coverage
anchor Bai Yansong. It was attended by a wide range of entertainment, literary, business and political figures from
mainland China, Hong Kong, Singapore and Taiwan. Donations of the evening totalled 1.5 billion Chinese Yuan (~US$208
million). Of the donations, CCTV gave the biggest corporate contribution at ¥50 million. Almost at the same time
in Taiwan, a similarly themed programme aired, hosted by sitting president Ma Ying-jeou. In June, Hong Kong
actor Jackie Chan, who donated $1.57 million to the victims, made a music video alongside other artists entitled
"Promise"; the song was composed by Andy Lau. The Artistes 512 Fund Raising Campaign, an 8-hour fundraising marathon,
was held on June 1 in Hong Kong; it was attended by some 200 Sinosphere musicians and celebrities. In Singapore,
MediaCorp Channel 8 hosted a 'live' programme 让爱川流不息 to raise funds for the victims. Rescue efforts performed by
the Chinese government were praised by western media, especially in comparison with Myanmar's blockage of foreign
aid during Cyclone Nargis, as well as China's previous performance during the 1976 Tangshan earthquake. China's openness
during the media coverage of the Sichuan earthquake led a professor at Peking University to say, “This is the first time [that] the Chinese media has lived up to international standards”. The Los Angeles Times praised China's media coverage of the quake as being "democratic". As a result of the magnitude 7.9 earthquake and the many strong aftershocks,
many rivers became blocked by large landslides, which resulted in the formation of "quake lakes" behind the blockages;
water pooled rapidly behind these natural landslide dams, and it was feared that the blockages would eventually crumble under the weight of the ever-increasing water mass, potentially endangering
the lives of millions of people living downstream. As of May 27, 2008, 34 lakes had formed due to earthquake debris
blocking and damming rivers, and it was estimated that 28 of them were still of potential danger to the local people.
Entire villages had to be evacuated because of the resultant flooding. The most precarious of these quake-lakes was
the one located in the extremely difficult terrain at Mount Tangjia in Beichuan County, Sichuan, accessible only
by foot or air; an Mi-26T heavy lift helicopter belonging to the China Flying Dragon Special Aviation Company was
used to bring heavy earthmoving tractors to the affected location. This operation was coupled with the work done
by PLAAF Mi-17 helicopters bringing in PLA engineering corps, explosive specialists and other personnel to join 1,200
soldiers who arrived on site by foot. Five tons of fuel to operate the machinery was airlifted to the site, where
a sluice was constructed to allow the safe discharge of the bottlenecked water. Downstream, more than 200,000 people
were evacuated from Mianyang by June 1 in anticipation of the dam bursting. The State Council declared a three-day
period of national mourning for the quake victims starting from May 19, 2008; the PRC's national flag and the regional flags of the Hong Kong and Macau Special Administrative Regions were flown at half-mast. It was the first time that a national
mourning period had been declared for something other than the death of a state leader, and many have called it the
biggest display of mourning since the death of Mao Zedong. At 14:28 CST on May 19, 2008, a week after the earthquake,
the Chinese public held a moment of silence. People stood silent for three minutes while air defense, police and
fire sirens, and the horns of vehicles, vessels and trains sounded. Cars and trucks on Beijing's roads also came
to a halt. People spontaneously burst into cheering "Zhongguo jiayou!" (Let's go, China!) and "Sichuan jiayou" (Let's
go, Sichuan!) afterwards. The Ningbo Organizing Committee of the Beijing Olympic torch relay announced that the relay,
scheduled to take place in Ningbo during the national mourning, would be suspended for the duration of the mourning period.
The route of the torch through the country was scaled down, and there was a minute of silence when the next leg started
in the city of Ruijin, Jiangxi, on the Wednesday after the quake. Many websites converted their home pages to black and
white; Sina.com and Sohu, major internet portals, limited their homepages to news items and removed all advertisements.
Chinese video sharing websites Youku and Tudou displayed a black background and placed multiple videos showing earthquake
footage and news reports. The Chinese version of MSN, cn.msn.com, also displayed banner ads about the earthquake
and the relief efforts. Other entertainment websites, including various gaming sites, such as the Chinese servers
for World of Warcraft, shut down altogether or displayed links to earthquake donation pages. After the moments
of silence, in Tiananmen Square, crowds spontaneously burst out cheering various slogans, including "Long Live China".
Casinos in Macau closed down. Ye Zhiping, the principal of Sangzao Middle School in Sangzao, one of the largest in
An County, has been credited with proactive action that spared the lives of all 2,323 pupils in attendance when the
earthquake happened. During a three-year period that ended in 2007, he oversaw a major overhaul of his school. During
that time he obtained more than 400,000 yuan (US$60,000) from the county education department, money used to widen
and strengthen concrete pillars and the balcony railing of all four storeys of his school, as well as secure its
concrete floors. However, Reuters reported in June that Chinese prosecutors had joined an official inquiry into ten schools that collapsed during May's devastating earthquake, to gather first-hand evidence of construction quality, launch preliminary inquiries, and prepare for possible investigations into professional crime. It was also reported that safety checks were to be carried out at schools across China after last month's
earthquake. The New York Times reported that "government officials in Beijing and Sichuan have said they are investigating
the collapses. In an acknowledgment of the weakness of building codes in the countryside, the National Development
and Reform Commission said on May 27 that it had drafted an amendment to improve construction standards for primary
and middle schools in rural areas. Experts are reviewing the draft, the commission said." To limit protests, officials
pushed parents to sign a document forbidding them from holding protests in exchange for money, but some who refused
to sign were threatened. The payment amounts varied from school to school but were approximately the same. In Hanwang,
parents were offered a package of US$8,800 in cash and a per-parent pension of nearly US$5,600. Furthermore,
officials used other methods of silencing: riot police officers broke up protests by parents; the authorities set
up cordons around the schools; and officials ordered the Chinese news media to stop reporting on school collapses.
Besides parents, Liu Shaokun (刘绍坤), a Sichuan school teacher, was detained on June 25, 2008 for "disseminating rumors
and destroying social order" about the Sichuan earthquake. Liu’s family was later told that he was being investigated
on suspicion of the crime of inciting subversion. Liu had travelled to Shifang, taken photos of collapsed school
buildings, and put them online. He had also expressed his anger at “the shoddy tofu-dregs buildings” (豆腐渣工程) in a
media interview. He was ordered to serve one year of re-education through labor (RTL). According to the organization
Human Rights in China, Liu has been released to serve his RTL sentence outside of the labor camp. In January 2010,
Hong Kong-based English newspaper The Standard reported that writer Tan Zuoren, who had attempted to document shoddy construction that may have led to massive casualties in schools, was sentenced to prison, ostensibly for writing an article in 2007 in support of the 1989 pro-democracy movement. Because of the magnitude of the quake and the media attention
on China, foreign nations and organizations immediately responded to the disaster by offering condolences and assistance.
On May 14, UNICEF reported that China formally requested the support of the international community to respond to
the needs of affected families. By May 14, the Ministry of Civil Affairs stated that 10.7 billion yuan (approximately
US$1.5 billion) had been donated by the Chinese public. Houston Rockets center Yao Ming, one of the country's most
popular sports icons, gave $214,000 and $71,000 to the Red Cross Society of China. The association had also collected a total of $26 million in donations by that point. Multinational firms located in China also announced large
amounts of donations. Francis Marcus of the International Federation of the Red Cross praised the Chinese rescue
effort as "swift and very efficient" in Beijing on Tuesday. But he added the scale of the disaster was such that
"we can't expect that the government can do everything and handle every aspect of the needs". The Economist noted
that China reacted to the disaster "rapidly and with uncharacteristic openness", contrasting it with Burma's secretive
response to Cyclone Nargis, which devastated that country 10 days before the earthquake. All Mainland Chinese television
stations (along with some stations in Hong Kong and expatriate communities) cancelled all regularly-scheduled programming,
displayed their logo in grayscale, and replaced their cancelled programmes with live earthquake footage from CCTV-1
for multiple days after the quake. Even pay television channels (such as Channel V) had their programmes suspended.
Although the Chinese government was initially praised for its response to the quake (especially in comparison to
Myanmar's ruling military junta's blockade of aid during Cyclone Nargis), it then saw an erosion in confidence over
the school construction scandal. On May 29, 2008, government officials began inspecting the ruins of thousands of
schools that collapsed, searching for clues about why they crumbled. Thousands of parents around the province have
accused local officials and builders of cutting corners in school construction, noting that after the quake other
nearby buildings were little damaged. In the aftermath of the quake, many local governments promised to formally
investigate the school collapses, but as of July 17, 2008 across Sichuan, parents of children lost in collapsed schools
complained they had yet to receive any reports. Local officials urged them not to protest but the parents demonstrated
and demanded an investigation. Furthermore, censors discouraged stories of poorly built schools from being published
in the media and there was an incident where police drove the protestors away. The AP reported that "The state-controlled
media has largely ignored the issue, apparently under the propaganda bureau's instructions. Parents and volunteers
who have questioned authorities have been detained and threatened." On May 15, 2008, Geoffrey York of Globeandmail.com
reported that the shoddily constructed buildings are commonly called "tofu buildings" because builders cut corners
by replacing steel rods with thin iron wires for concrete reinforcement; using inferior-grade cement, if any at
all; and using fewer bricks than they should. One local was quoted in the article as saying that "the supervising
agencies did not check to see if it met the national standards." However, questions still remain: some of the corrupt government officials have not been brought to justice, and many families who lost their only child are still seeking compensation and justice for what happened. According to the Times, many parents were warned by
the government not to stage a protest under the threat of arrest.
New York—often called New York City or the City of New York to distinguish it from the State of New York, of which it is
a part—is the most populous city in the United States and the center of the New York metropolitan area, the premier
gateway for legal immigration to the United States and one of the most populous urban agglomerations in the world.
A global power city, New York exerts a significant impact upon commerce, finance, media, art, fashion, research,
technology, education, and entertainment, its fast pace defining the term New York minute. Home to the headquarters
of the United Nations, New York is an important center for international diplomacy and has been described as the
cultural and financial capital of the world. Situated on one of the world's largest natural harbors, New York City
consists of five boroughs, each of which is a separate county of New York State. The five boroughs – Brooklyn, Queens,
Manhattan, the Bronx, and Staten Island – were consolidated into a single city in 1898. With a census-estimated 2014
population of 8,491,079 distributed over a land area of just 305 square miles (790 km2), New York is the most densely
populated major city in the United States. As many as 800 languages are spoken in New York, making it the most linguistically
diverse city in the world. By 2014 census estimates, the New York City metropolitan region remains by a significant
margin the most populous in the United States, as defined by both the Metropolitan Statistical Area (20.1 million
residents) and the Combined Statistical Area (23.6 million residents). In 2013, the MSA produced a gross metropolitan
product (GMP) of nearly US$1.39 trillion, while in 2012, the CSA generated a GMP of over US$1.55 trillion, both ranking
first nationally by a wide margin and behind the GDP of only twelve and eleven countries, respectively. New York
City traces its roots to its 1624 founding as a trading post by colonists of the Dutch Republic and was named New
Amsterdam in 1626. The city and its surroundings came under English control in 1664. New York served as the capital
of the United States from 1785 until 1790. It has been the country's largest city since 1790. The Statue of Liberty
greeted millions of immigrants as they came to the Americas by ship in the late 19th and early 20th centuries and
is a globally recognized symbol of the United States and its democracy. Many districts and landmarks in New York
City have become well known, and the city received a record 56 million tourists in 2014, hosting three of the world's
ten most visited tourist attractions in 2013. Several sources have ranked New York the most photographed city in
the world. Times Square, iconic as the world's "heart" and its "Crossroads", is the brightly illuminated hub of the
Broadway Theater District, one of the world's busiest pedestrian intersections, and a major center of the world's
entertainment industry. The names of many of the city's bridges, skyscrapers, and parks are known around the world.
Anchored by Wall Street in the Financial District of Lower Manhattan, New York City has been called both the most
economically powerful city and the leading financial center of the world, and the city is home to the world's two
largest stock exchanges by total market capitalization, the New York Stock Exchange and NASDAQ. Manhattan's real
estate market is among the most expensive in the world. Manhattan's Chinatown incorporates the highest concentration
of Chinese people in the Western Hemisphere, with multiple signature Chinatowns developing across the city. Providing
continuous 24/7 service, the New York City Subway is one of the most extensive metro systems worldwide, with 469
stations in operation. New York City's higher education network comprises over 120 colleges and universities, including
Columbia University, New York University, and Rockefeller University, which have been ranked among the top 35 in
the world. During the Wisconsinan glaciation, the New York City region was situated at the edge of a large ice sheet
over 1,000 feet in depth. The ice sheet scraped away large amounts of soil, leaving the bedrock that serves as the
geologic foundation for much of New York City today. Later on, the ice sheet would help split apart what are now
Long Island and Staten Island. In the precolonial era, the area of present-day New York City was inhabited by various
bands of Algonquian tribes of Native Americans, including the Lenape, whose homeland, known as Lenapehoking, included
Staten Island; the western portion of Long Island, including the area that would become Brooklyn and Queens; Manhattan;
the Bronx; and the Lower Hudson Valley. The first documented visit by a European was in 1524 by Giovanni da Verrazzano,
a Florentine explorer in the service of the French crown, who sailed his ship La Dauphine into New York Harbor. He
claimed the area for France and named it "Nouvelle Angoulême" (New Angoulême). A Spanish expedition led by captain
Estêvão Gomes, a Portuguese sailing for Emperor Charles V, arrived in New York Harbor in January 1525 aboard the
purpose-built caravel "La Anunciada" and charted the mouth of the Hudson River, which he named Rio de San Antonio.
Heavy ice kept him from further exploration, and he returned to Spain in August. The first scientific map to show
the North American East coast continuously, the 1527 world map known as the Padrón Real, was informed by Gomes' expedition,
and labeled the Northeast as Tierra de Esteban Gómez in his honor. In 1609, English explorer Henry Hudson re-discovered
the region when he sailed his ship the Halve Maen ("Half Moon" in Dutch) into New York Harbor while searching for
the Northwest Passage to the Orient for his employer, the Dutch East India Company. He proceeded to sail up what
he named the North River, also called the Mauritius River, and now known as the Hudson River, to the site of the present-day
New York State capital of Albany in the belief that it might represent an oceanic tributary. When the river narrowed
and was no longer saline, he realized it was not a maritime passage and sailed back downriver. He made a ten-day
exploration of the area and claimed the region for his employer. In 1614, the area between Cape Cod and Delaware
Bay would be claimed by the Netherlands and called Nieuw-Nederland (New Netherland). The first non-Native American
inhabitant of what would eventually become New York City was Dominican trader Juan Rodriguez (transliterated to Dutch
as Jan Rodrigues). Born in Santo Domingo of Portuguese and African descent, he arrived in Manhattan during the winter
of 1613–1614, trapping for pelts and trading with the local population as a representative of the Dutch. Broadway,
from 159th Street to 218th Street, is named Juan Rodriguez Way in his honor. A permanent European presence in New
Netherland began in 1624 – making New York the 12th oldest continuously occupied European-established settlement
in the continental United States – with the founding of a Dutch fur trading settlement on Governors Island. In 1625,
construction was started on a citadel and a Fort Amsterdam on Manhattan Island, later called New Amsterdam (Nieuw
Amsterdam). The colony of New Amsterdam was centered at the site which would eventually become Lower Manhattan. The
Dutch colonial Director-General Peter Minuit purchased the island of Manhattan from the Canarsie, a small band of
the Lenape, in 1626 for a value of 60 guilders (about $1000 in 2006); a disproved legend says that Manhattan was
purchased for $24 worth of glass beads. In 1664, Peter Stuyvesant, the Director-General of the colony of New Netherland,
surrendered New Amsterdam to the English without bloodshed. The English promptly renamed the fledgling city "New
York" after the Duke of York (later King James II). On August 24, 1673, Dutch captain Anthonio Colve took over the
colony of New York from England and rechristened it "New Orange" to honor the Prince of Orange, King William III.
However, facing defeat from the British and French, who had teamed up to destroy Dutch trading routes, the Dutch
returned the island to England in 1674. At the end of the Second Anglo-Dutch War, the English gained New Amsterdam
(New York) in North America in exchange for Dutch control of Run, an Indonesian island. Several intertribal wars
among the Native Americans and some epidemics brought on by contact with the Europeans caused sizable population
losses for the Lenape between the years 1660 and 1670. By 1700, the Lenape population had diminished to 200. New
York grew in importance as a trading port while under British rule in the early 1700s. It also became a center of
slavery, with 42% of households holding slaves by 1730, more than any other city other than Charleston, South Carolina.
Most slaveholders held a few or several domestic slaves, but others hired them out as laborers. Slavery became
integrally tied to New York's economy through the labor of slaves throughout the port, and the banks and shipping
tied to the South. Discovery of the African Burying Ground in the 1990s, during construction of a new federal courthouse
near Foley Square, revealed that tens of thousands of Africans had been buried in the area in the colonial years.
The trial in Manhattan of John Peter Zenger in 1735 helped to establish the freedom of the press in North America.
In 1754, Columbia University was founded under charter by King George II as King's College in Lower Manhattan. The
Stamp Act Congress met in New York in October 1765 as the Sons of Liberty organized in the city, skirmishing over
the next ten years with British troops stationed there. The Battle of Long Island, the largest battle of the American
Revolutionary War, was fought in August 1776 entirely within the modern-day borough of Brooklyn. After the battle,
in which the Americans were defeated and which was followed by a series of smaller armed engagements, the city
became the British military and political base of operations in North America. The city was a haven for Loyalist
refugees, as well as escaped slaves who joined the British lines for freedom newly promised by the Crown for all
fighters. As many as 10,000 escaped slaves crowded into the city during the British occupation. When the British
forces evacuated at the close of the war in 1783, they transported 3,000 freedmen for resettlement in Nova Scotia.
They resettled other freedmen in England and the Caribbean. The only attempt at a peaceful solution to the war took
place at the Conference House on Staten Island between American delegates, including Benjamin Franklin, and British
general Lord Howe on September 11, 1776. Shortly after the British occupation began, the Great Fire of New York occurred,
a large conflagration on the West Side of Lower Manhattan, which destroyed about a quarter of the buildings in the
city, including Trinity Church. In 1785, the assembly of the Congress of the Confederation made New York the national
capital shortly after the war. New York was the last capital of the U.S. under the Articles of Confederation and
the first capital under the Constitution of the United States. In 1789, the first President of the United States,
George Washington, was inaugurated; the first United States Congress and the Supreme Court of the United States each
assembled for the first time, and the United States Bill of Rights was drafted, all at Federal Hall on Wall Street.
By 1790, New York had surpassed Philadelphia as the largest city in the United States. Under New York State's gradual
abolition act of 1799, children of slave mothers were to be eventually freed but were held in indentured
servitude until their mid-to-late twenties. Together with slaves freed by their masters after the Revolutionary War
and escaped slaves, a significant free-black population gradually developed in Manhattan. Under such influential
United States founders as Alexander Hamilton and John Jay, the New York Manumission Society worked for abolition
and established the African Free School to educate black children. It was not until 1827 that slavery was completely
abolished in the state, and free blacks struggled afterward with discrimination. New York interracial abolitionist
activism continued; among its leaders were graduates of the African Free School. The city's black population reached
more than 16,000 in 1840. In the 19th century, the city was transformed by development relating to its status as
a trading center, as well as by European immigration. The city adopted the Commissioners' Plan of 1811, which expanded
the city street grid to encompass all of Manhattan. The 1825 completion of the Erie Canal through central New York
connected the Atlantic port to the agricultural markets and commodities of the North American interior via the Hudson
River and the Great Lakes. Local politics became dominated by Tammany Hall, a political machine supported by Irish
and German immigrants. Several prominent American literary figures lived in New York during the 1830s and 1840s,
including William Cullen Bryant, Washington Irving, Herman Melville, Rufus Wilmot Griswold, John Keese, Nathaniel
Parker Willis, and Edgar Allan Poe. Public-minded members of the contemporaneous business elite lobbied for the establishment
of Central Park, which in 1857 became the first landscaped park in an American city. The Great Irish Famine brought
a large influx of Irish immigrants. Over 200,000 were living in New York by 1860, upwards of a quarter of the city's
population. There was also extensive immigration from the German provinces, where revolutions had disrupted societies,
and Germans comprised another 25% of New York's population by 1860. Democratic Party candidates were consistently
elected to local office, increasing the city's ties to the South and its dominant party. In 1861, Mayor Fernando
Wood called on the aldermen to declare independence from Albany and the United States after the South seceded, but
his proposal was not acted on. Anger at new military conscription laws during the American Civil War (1861–1865),
which spared wealthier men who could afford to pay a $300 (equivalent to $5,766 in 2016) commutation fee to hire
a substitute, led to the Draft Riots of 1863, whose most visible participants were ethnic Irish working class. The
situation deteriorated into attacks on New York's elite, followed by attacks on black New Yorkers and their property
after fierce competition for a decade between Irish immigrants and blacks for work. Rioters burned the Colored Orphan
Asylum to the ground, but more than 200 children escaped harm due to efforts of the New York City Police Department,
which was mainly made up of Irish immigrants. According to historian James M. McPherson (2001), at least 120 people
were killed. In all, eleven black men were lynched over five days, and the riots forced hundreds of blacks to flee
the city for Williamsburg, Brooklyn, as well as New Jersey; the black population in Manhattan fell below 10,000 by
1865, a level last seen in 1820. The white working class had established dominance. Violence by longshoremen
against black men was especially fierce in the docks area. It was one of the worst incidents of civil unrest in American
history. In 1898, the modern City of New York was formed with the consolidation of Brooklyn (until then a separate
city), the County of New York (which then included parts of the Bronx), the County of Richmond, and the western portion
of the County of Queens. The opening of the subway in 1904, first built as separate private systems, helped bind
the new city together. Throughout the first half of the 20th century, the city became a world center for industry,
commerce, and communication. In 1904, the steamship General Slocum caught fire in the East River, killing 1,021 people
on board. In 1911, the Triangle Shirtwaist Factory fire, the city's worst industrial disaster, took the lives of
146 garment workers and spurred the growth of the International Ladies' Garment Workers' Union and major improvements
in factory safety standards. New York's non-white population was 36,620 in 1890. New York City was a prime destination
in the early twentieth century for African Americans during the Great Migration from the American South, and by 1916,
New York City was home to the largest urban African diaspora in North America. The Harlem Renaissance of literary
and cultural life flourished during the era of Prohibition. The larger economic boom generated construction of skyscrapers
competing in height and creating an identifiable skyline. New York became the most populous urbanized area in the
world in the early 1920s, overtaking London. The metropolitan area surpassed the 10 million mark in the early 1930s,
becoming the first megacity in human history. The difficult years of the Great Depression saw the election of reformer
Fiorello La Guardia as mayor and the fall of Tammany Hall after eighty years of political dominance. Returning World
War II veterans created a post-war economic boom and the development of large housing tracts in eastern Queens. New
York emerged from the war unscathed as the leading city of the world, with Wall Street leading America's place as
the world's dominant economic power. The United Nations Headquarters was completed in 1952, solidifying New York's
global geopolitical influence, and the rise of abstract expressionism in the city precipitated New York's displacement
of Paris as the center of the art world. The Stonewall riots were a series of spontaneous, violent demonstrations
by members of the gay community against a police raid that took place in the early morning hours of June 28, 1969,
at the Stonewall Inn in the Greenwich Village neighborhood of Lower Manhattan. They are widely considered to constitute
the single most important event leading to the gay liberation movement and the modern fight for LGBT rights in the
United States. In the 1970s, job losses due to industrial restructuring caused New York City to suffer from economic
problems and rising crime rates. While a resurgence in the financial industry greatly improved the city's economic
health in the 1980s, New York's crime rate continued to increase through that decade and into the beginning of the
1990s. By the mid-1990s, crime rates started to drop dramatically due to revised police strategies, improving economic
opportunities, gentrification, and new residents, both American transplants and new immigrants from Asia and Latin
America. Important new sectors, such as Silicon Alley, emerged in the city's economy. New York's population reached
all-time highs in the 2000 Census and then again in the 2010 Census. The city and surrounding area suffered the bulk
of the economic damage and the largest loss of human life in the aftermath of the September 11, 2001 attacks, when 10 of the 19 terrorists associated with al-Qaeda piloted American Airlines Flight 11 into the North Tower of the World Trade Center and United Airlines Flight 175 into the South Tower; the impacts ultimately destroyed both towers, killing 2,192 civilians, 343 firefighters, and 71 law enforcement officers who were in the towers and in the surrounding
area. The rebuilding of the area has produced a new One World Trade Center, a 9/11 memorial and museum, and other new buildings and infrastructure. The World Trade Center PATH station, which opened on July 19, 1909 as
the Hudson Terminal, was also destroyed in the attack. A temporary station was built and opened on November 23, 2003.
A permanent station, the World Trade Center Transportation Hub, is currently under construction. The new One World
Trade Center is the tallest skyscraper in the Western Hemisphere and the fourth-tallest building in the world by
pinnacle height, with its spire reaching a symbolic 1,776 feet (541.3 m) in reference to the year of American independence.
The Occupy Wall Street protests in Zuccotti Park in the Financial District of Lower Manhattan began on September
17, 2011, receiving global attention and spawning the Occupy movement against social and economic inequality worldwide.
When a rival Republican presidential candidate ridiculed the liberalism of "New York values" in January 2016, Donald Trump, then leading in the polls, vigorously defended his city. The National Review, a conservative
magazine published in the city since its founding by William F. Buckley, Jr. in 1955, commented, "By hearkening back
to New York's heart after 9/11, for a moment Trump transcended politics. How easily we forget, but for weeks after
the terror attacks, New York was America." New York City is situated in the Northeastern United States, in southeastern
New York State, approximately halfway between Washington, D.C. and Boston. The location at the mouth of the Hudson
River, which feeds into a naturally sheltered harbor and then into the Atlantic Ocean, has helped the city grow in
significance as a trading port. Most of New York City is built on the three islands of Long Island, Manhattan, and
Staten Island. The Hudson River flows through the Hudson Valley into New York Bay. Between New York City and Troy,
New York, the river is an estuary. The Hudson River separates the city from the U.S. state of New Jersey. The East
River—a tidal strait—flows from Long Island Sound and separates the Bronx and Manhattan from Long Island. The Harlem
River, another tidal strait between the East and Hudson Rivers, separates most of Manhattan from the Bronx. The Bronx
River, which flows through the Bronx and Westchester County, is the only entirely fresh water river in the city.
The city's land has been altered substantially by human intervention, with considerable land reclamation along the
waterfronts since Dutch colonial times; reclamation is most prominent in Lower Manhattan, with developments such
as Battery Park City in the 1970s and 1980s. Some of the natural relief in topography has been evened out, especially
in Manhattan. The city's total area is 468.9 square miles (1,214 km2). 164.1 sq mi (425 km2) of this is water and
304.8 sq mi (789 km2) is land. The highest point in the city is Todt Hill on Staten Island, which, at 409.8 feet
(124.9 m) above sea level, is the highest point on the Eastern Seaboard south of Maine. The summit of the ridge is
mostly covered in woodlands as part of the Staten Island Greenbelt. New York has architecturally noteworthy buildings
in a wide range of styles and from distinct time periods, from the saltbox style Pieter Claesen Wyckoff House in
Brooklyn, the oldest section of which dates to 1656, to the modern One World Trade Center, the skyscraper at Ground
Zero in Lower Manhattan and currently the most expensive new office tower in the world. Manhattan's skyline, with
its many skyscrapers, is universally recognized, and the city has been home to several of the tallest buildings in
the world. As of 2011, New York City had 5,937 high-rise buildings, of which 550 completed structures were at least
330 feet (100 m) high, both second in the world after Hong Kong, with over 50 completed skyscrapers taller than 656
feet (200 m). These include the Woolworth Building (1913), an early Gothic Revival skyscraper built with massively scaled Gothic detailing. The 1916 Zoning Resolution required setbacks in new buildings, and restricted towers to
a percentage of the lot size, to allow sunlight to reach the streets below. The Art Deco style of the Chrysler Building
(1930) and Empire State Building (1931), with their tapered tops and steel spires, reflected the zoning requirements.
The buildings have distinctive ornamentation, such as the eagles at the corners of the 61st floor on the Chrysler
Building, and are considered some of the finest examples of the Art Deco style. A highly influential example of the
International Style in the United States is the Seagram Building (1957), distinctive for its façade using visible
bronze-toned I-beams to evoke the building's structure. The Condé Nast Building (2000) is a prominent example of
green design in American skyscrapers and has received an award from the American Institute of Architects as well
as AIA New York State for its design. The character of New York's large residential districts is often defined by
the elegant brownstone rowhouses and townhouses and shabby tenements that were built during a period of rapid expansion
from 1870 to 1930. In contrast, New York City also has neighborhoods that are less densely populated and feature
free-standing dwellings. In neighborhoods such as Riverdale (in the Bronx), Ditmas Park (in Brooklyn), and Douglaston
(in Queens), large single-family homes are common in various architectural styles such as Tudor Revival and Victorian.
Stone and brick became the city's building materials of choice after the construction of wood-frame houses was limited
in the aftermath of the Great Fire of 1835. A distinctive feature of many of the city's buildings is the wooden roof-mounted
water towers. In the 1800s, the city required their installation on buildings higher than six stories to prevent
the need for excessively high water pressures at lower elevations, which could break municipal water pipes. Garden
apartments became popular during the 1920s in outlying areas, such as Jackson Heights. According to the United States
Geological Survey, an updated analysis of seismic hazard in July 2014 revealed a "slightly lower hazard for tall
buildings" in New York City than previously assessed. Scientists based the lowered estimate on a reduced likelihood of the slow shaking near the city that would be most damaging to taller structures in a nearby earthquake. There are hundreds of distinct neighborhoods throughout
the five boroughs of New York City, many with a definable history and character to call their own. If the boroughs
were each independent cities, four of the boroughs (Brooklyn, Queens, Manhattan, and the Bronx) would be among the
ten most populous cities in the United States. Under the Köppen climate classification, using the 0 °C (32 °F) coldest
month (January) isotherm, New York City itself experiences a humid subtropical climate (Cfa) and is thus the northernmost
major city on the North American continent with this categorization. The suburbs to the immediate north and west
lie in the transition zone from a humid subtropical (Cfa) to a humid continental climate (Dfa). The area averages
234 days with at least some sunshine annually, and averages 57% of possible sunshine annually, accumulating 2,535
hours of sunshine per annum. The city lies within USDA plant hardiness zone 7b. Winters are cold and damp, and prevailing
wind patterns that blow offshore minimize the moderating effects of the Atlantic Ocean; yet the Atlantic and the
partial shielding from colder air by the Appalachians keep the city warmer in the winter than inland North American
cities at similar or lesser latitudes such as Pittsburgh, Cincinnati, and Indianapolis. The daily mean temperature
in January, the area's coldest month, is 32.6 °F (0.3 °C); however, temperatures usually drop to 10 °F (−12 °C) several
times per winter, and reach 50 °F (10 °C) several days each winter month. Spring and autumn are unpredictable and
can range from chilly to warm, although they are usually mild with low humidity. Summers are typically warm to hot
and humid, with a daily mean temperature of 76.5 °F (24.7 °C) in July and an average humidity level of 72%. Nighttime
conditions are often exacerbated by the urban heat island phenomenon, while daytime temperatures exceed 90 °F (32
°C) on an average of 17 days each summer and in some years exceed 100 °F (38 °C). In the warmer months, the dew point,
a measure of atmospheric moisture, ranges from 57.3 °F (14.1 °C) in June to 62.0 °F (16.7 °C) in August. Extreme
temperatures have ranged from −15 °F (−26 °C), recorded on February 9, 1934, up to 106 °F (41 °C) on July 9, 1936.
The city receives 49.9 inches (1,270 mm) of precipitation annually, spread fairly evenly throughout the year. Average
winter snowfall between 1981 and 2010 has been 25.8 inches (66 cm), but this varies considerably from year to year.
Hurricanes and tropical storms are rare in the New York area but not unheard of. Hurricane Sandy brought a destructive storm surge to New York City on the evening of October
29, 2012, flooding numerous streets, tunnels, and subway lines in Lower Manhattan and other areas of the city and
cutting off electricity in many parts of the city and its suburbs. The storm and its profound impacts have prompted
the discussion of constructing seawalls and other coastal barriers around the shorelines of the city and the metropolitan
area to minimize the risk of destructive consequences from another such event in the future. The City of New York
has a complex park system, with various lands operated by the National Park Service, the New York State Office of
Parks, Recreation and Historic Preservation, and the New York City Department of Parks and Recreation. In its 2013
ParkScore ranking, The Trust for Public Land reported that the park system in New York City was the second best park
system among the 50 most populous U.S. cities, behind the park system of Minneapolis. ParkScore ranks urban park
systems by a formula that analyzes median park size, park acres as percent of city area, the percent of city residents
within a half-mile of a park, spending of park services per resident, and the number of playgrounds per 10,000 residents.
Gateway National Recreation Area contains over 26,000 acres (10,521.83 ha) in total, most of it surrounded by New
York City, including the Jamaica Bay Wildlife Refuge in Brooklyn and Queens, over 9,000 acres (36 km2) of salt marsh,
islands, and water, including most of Jamaica Bay. Also in Queens, the park includes a significant portion of the
western Rockaway Peninsula, most notably Jacob Riis Park and Fort Tilden. In Staten Island, the park includes Fort
Wadsworth, with historic pre-Civil War era Battery Weed and Fort Tompkins, and Great Kills Park, with beaches, trails,
and a marina. The Statue of Liberty National Monument and Ellis Island Immigration Museum are managed by the National
Park Service and are in both the states of New York and New Jersey. They are joined in the harbor by Governors Island
National Monument, in New York. Historic sites under federal management on Manhattan Island include Castle Clinton
National Monument; Federal Hall National Memorial; Theodore Roosevelt Birthplace National Historic Site; General
Grant National Memorial ("Grant's Tomb"); African Burial Ground National Monument; and Hamilton Grange National Memorial.
Hundreds of private properties are listed on the National Register of Historic Places or designated as National Historic Landmarks, such as the Stonewall Inn in Greenwich Village, the catalyst of the modern gay rights movement. There
are seven state parks within the confines of New York City, including Clay Pit Ponds State Park Preserve, a natural
area which includes extensive riding trails, and Riverbank State Park, a 28-acre (110,000 m2) facility that rises
69 feet (21 m) over the Hudson River. New York City has over 28,000 acres (110 km2) of municipal parkland and 14
miles (23 km) of public beaches. Parks in New York City include Central Park, Prospect Park, Flushing Meadows–Corona
Park, Forest Park, and Washington Square Park. The largest municipal park in the city is Pelham Bay Park with 2,700
acres (1,093 ha). New York City is home to Fort Hamilton, the U.S. military's only active duty installation within
the city. Established in 1825 in Brooklyn on the site of a small battery utilized during the American Revolution,
it is one of America's longest-serving military forts. Today, Fort Hamilton serves as the headquarters of the North
Atlantic Division of the United States Army Corps of Engineers as well as for the New York City Recruiting Battalion.
It also houses the 1179th Transportation Brigade, the 722nd Aeromedical Staging Squadron, and a military entrance
processing station. Other formerly active military reservations still utilized for National Guard and military training
or reserve operations in the city include Fort Wadsworth in Staten Island and Fort Totten in Queens. New York City
is the most-populous city in the United States, with an estimated record high of 8,491,079 residents as of 2014,
reflecting more immigration into than out-migration from the city since the 2010 United States Census. More than twice
as many people live in New York City as in the second-most populous U.S. city (Los Angeles), and within a smaller
area. New York City gained more residents between April 2010 and July 2014 (316,000) than any other U.S. city. New
York City's population amounts to about 40% of New York State's population and a similar percentage of the New York
metropolitan area population. In 2014, the city had an estimated population density of 27,858 people per square mile
(10,756/km²), rendering it the most densely populated of all municipalities housing over 100,000 residents in the
United States; however, several small cities (of fewer than 100,000 residents) in adjacent Hudson County, New Jersey, are denser overall, as per the 2000 Census. Geographically coextensive with New York County, the borough of Manhattan's
population density of 71,672 people per square mile (27,673/km²) makes it the highest of any county in the United
States and higher than the density of any individual American city. The city's population in 2010 was 44% white (33.3%
non-Hispanic white), 25.5% black (23% non-Hispanic black), 0.7% Native American, and 12.7% Asian. Hispanics of any
race represented 28.6% of the population, while Asians constituted the fastest-growing segment of the city's population
between 2000 and 2010; the non-Hispanic white population declined 3 percent, the smallest recorded decline in decades;
and for the first time since the Civil War, the number of blacks declined over a decade. Throughout its history,
the city has been a major port of entry for immigrants into the United States; more than 12 million European immigrants
were received at Ellis Island between 1892 and 1924. The term "melting pot" was first coined to describe densely
populated immigrant neighborhoods on the Lower East Side. By 1900, Germans constituted the largest immigrant group,
followed by the Irish, Jews, and Italians. In 1940, whites represented 92% of the city's population. Approximately
37% of the city's population is foreign born. In New York, no single country or region of origin dominates. The ten
largest sources of foreign-born individuals in the city as of 2011 were the Dominican Republic, China, Mexico, Guyana,
Jamaica, Ecuador, Haiti, India, Russia, and Trinidad and Tobago, while the Bangladeshi immigrant population has since
become one of the fastest growing in the city, counting over 74,000 by 2013. Asian Americans in New York City, according
to the 2010 Census, number more than one million, greater than the combined totals of San Francisco and Los Angeles.
New York contains the highest total Asian population of any U.S. city proper. The New York City borough of Queens
is home to the state's largest Asian American population and the largest Andean (Colombian, Ecuadorian, Peruvian,
and Bolivian) populations in the United States, and is also the most ethnically diverse urban area in the world.
The Chinese population constitutes the fastest-growing nationality in New York State; multiple satellites of the
original Manhattan Chinatown (紐約華埠), in Brooklyn (布鲁克林華埠), and around Flushing, Queens (法拉盛華埠), are thriving as traditionally
urban enclaves, while also expanding rapidly eastward into suburban Nassau County (拿騷縣) on Long Island (長島), as the
New York metropolitan region and New York State have become the top destinations for new Chinese immigrants, respectively,
and large-scale Chinese immigration continues into New York City and surrounding areas. In 2012, 6.3% of New York
City was of Chinese ethnicity, with nearly three-fourths living in either Queens or Brooklyn, geographically on Long
Island. A community numbering 20,000 Korean-Chinese (Chaoxianzu (Chinese: 朝鲜族) or Joseonjok (Hangul: 조선족)) is centered
in Flushing, Queens, while New York City is also home to the largest Tibetan population outside China, India, and
Nepal, also centered in Queens. Koreans made up 1.2% of the city's population, and Japanese 0.3%. Filipinos were
the largest Southeast Asian ethnic group at 0.8%, followed by Vietnamese, who made up 0.2% of New York City's population
in 2010. Indians are the largest South Asian group, comprising 2.4% of the city's population, with Bangladeshis and
Pakistanis at 0.7% and 0.5%, respectively. Queens is the preferred borough of settlement for Asian Indians, Koreans,
and Filipinos, as well as Malaysians and other Southeast Asians; while Brooklyn is receiving large numbers of both
West Indian as well as Asian Indian immigrants. New York City has the largest European and non-Hispanic white population
of any American city. At 2.7 million in 2012, New York's non-Hispanic white population is larger than the non-Hispanic
white populations of Los Angeles (1.1 million), Chicago (865,000), and Houston (550,000) combined. The European diaspora
residing in the city is very diverse. According to 2012 Census estimates, there were roughly 560,000 Italian Americans,
385,000 Irish Americans, 253,000 German Americans, 223,000 Russian Americans, 201,000 Polish Americans, and 137,000
English Americans. Additionally, Greek and French Americans numbered 65,000 each, with those of Hungarian descent
estimated at 60,000 people. Ukrainian and Scottish Americans numbered 55,000 and 35,000, respectively. People identifying
ancestry from Spain numbered 30,838 total in 2010. People of Norwegian and Swedish descent both stood at about 20,000
each, while people of Czech, Lithuanian, Portuguese, Scotch-Irish, and Welsh descent all numbered between 12,000 and 14,000
people. Arab Americans number over 160,000 in New York City, with the highest concentration in Brooklyn. Central
Asians, primarily Uzbek Americans, are a rapidly growing segment of the city's non-Hispanic white population, enumerating
over 30,000, and including over half of all Central Asian immigrants to the United States, most settling in Queens
or Brooklyn. Albanian Americans are most highly concentrated in the Bronx. The wider New York City metropolitan area,
with over 20 million people, about 50% greater than the second-place Los Angeles metropolitan area in the United
States, is also ethnically diverse. The New York region continues to be by far the leading metropolitan gateway for
legal immigrants admitted into the United States, substantially exceeding the combined totals of Los Angeles and
Miami, the next most popular gateway regions. It is home to the largest Jewish as well as Israeli communities outside
Israel, with the Jewish population in the region numbering over 1.5 million in 2012 and including many diverse Jewish
sects from around the Middle East and Eastern Europe. The metropolitan area is also home to 20% of the nation's Indian
Americans and at least 20 Little India enclaves, as well as 15% of all Korean Americans and four Koreatowns; the
largest Asian Indian population in the Western Hemisphere; the largest Russian American, Italian American, and African
American populations; the largest Dominican American, Puerto Rican American, and South American and second-largest
overall Hispanic population in the United States, numbering 4.8 million; and includes at least 6 established Chinatowns
within New York City alone, with the urban agglomeration comprising a population of 779,269 overseas Chinese as of
2013 Census estimates, the largest outside of Asia. Ecuador, Colombia, Guyana, Peru, and Brazil were the top source
countries from South America for legal immigrants to the New York City region in 2013; the Dominican Republic, Jamaica,
Haiti, and Trinidad and Tobago in the Caribbean; Egypt, Ghana, and Nigeria from Africa; and El Salvador, Honduras,
and Guatemala in Central America. Amidst a resurgence of Puerto Rican migration to New York City, this population
had increased to approximately 1.3 million in the metropolitan area as of 2013. The New York metropolitan area is
home to a self-identifying gay and bisexual community estimated at 568,903 individuals, the largest in the United
States and one of the world's largest. Same-sex marriages in New York were legalized on June 24, 2011 and were authorized
to take place beginning 30 days thereafter. Christianity (59%), particularly Catholicism (33%), was the most widely practiced religion in New York as of 2014, followed by Judaism, with approximately 1.1 million Jews in New York City,
over half living in Brooklyn. Islam ranks third in New York City, with official estimates ranging between 600,000
and 1,000,000 observers and including 10% of the city's public schoolchildren, followed by Hinduism, Buddhism, and
a variety of other religions, as well as atheism. In 2014, 24% self-identified with no organized religious affiliation.
New York City has a high degree of income disparity, as indicated by its Gini coefficient of 0.5 for the city overall
and 0.6 for Manhattan. The disparity is driven by wage growth in high-income brackets, while wages have stagnated
for middle and lower-income brackets. In the first quarter of 2014, the average weekly wage in New York County (Manhattan)
was $2,749, representing the highest total among large counties in the United States. In 2013, New York City had
the highest number of billionaires of any city in the world, more than the next five U.S. cities combined; among them was former Mayor Michael R. Bloomberg. New York also had the highest density of millionaires per capita among major U.S.
cities in 2014, at 4.6% of residents. Lower Manhattan has been experiencing a baby boom, with the area south of Canal
Street witnessing 1,086 births in 2010, 12% more than in 2009 and over twice the number born in 2001. New York is
a global hub of international business and commerce. In 2012, New York City topped the first Global Economic Power
Index, published by The Atlantic (to be differentiated from a namesake list published by the Martin Prosperity Institute),
with cities ranked according to criteria reflecting their presence on similar lists as published by other entities.
The city is a major center for banking and finance, retailing, world trade, transportation, tourism, real estate,
new media as well as traditional media, advertising, legal services, accountancy, insurance, theater, fashion, and
the arts in the United States; while Silicon Alley, metonymous for New York's broad-spectrum high technology sphere,
continues to expand. The Port of New York and New Jersey is also a major economic engine, handling record cargo volume
in the first half of 2014. Many Fortune 500 corporations are headquartered in New York City, as are a large number
of foreign corporations. One out of ten private sector jobs in the city is with a foreign company. New York City
has been ranked first among cities across the globe in attracting capital, business, and tourists. This ability to
attract foreign investment helped New York City top the FDi Magazine American Cities of the Future ranking for 2013.
Real estate is a major force in the city's economy, as the total value of all New York City property was assessed
at US$914.8 billion for the 2015 fiscal year. The Time Warner Center is the property with the highest-listed market
value in the city, at US$1.1 billion in 2006. New York City is home to some of the nation's—and the world's—most
valuable real estate. 450 Park Avenue was sold on July 2, 2007 for US$510 million, about $1,589 per square foot ($17,104/m²),
breaking the barely month-old record for an American office building of $1,476 per square foot ($15,887/m²) set in
the June 2007 sale of 660 Madison Avenue. According to Forbes, in 2014, Manhattan was home to six of the top ten
zip codes in the United States by median housing price. As of 2013, the global advertising agencies of Omnicom Group
and Interpublic Group, both based in Manhattan, had combined annual revenues of approximately US$21 billion, reflecting
New York City's role as the top global center for the advertising industry, which is metonymously referred to as
"Madison Avenue". The city's fashion industry provides approximately 180,000 employees with $11 billion in annual
wages. Other important sectors include medical research and technology, non-profit institutions, and universities.
Manufacturing accounts for a significant but declining share of employment, although the city's garment industry
is showing a resurgence in Brooklyn. Food processing is a US$5 billion industry that employs more than 19,000 residents.
Chocolate is New York City's leading specialty-food export, with up to US$234 million worth of exports each year.
Entrepreneurs were forming a "Chocolate District" in Brooklyn as of 2014, while Godiva, one of the world's largest
chocolatiers, continues to be headquartered in Manhattan. New York City's most important economic sector lies in
its role as the headquarters of the U.S. financial industry, metonymously known as Wall Street. The city's securities
industry, enumerating 163,400 jobs in August 2013, continues to form the largest segment of the city's financial
sector and an important economic engine, accounting in 2012 for 5 percent of the city's private sector jobs, 8.5
percent (US$3.8 billion) of its tax revenue, and 22 percent of the city's total wages, including an average salary
of US$360,700. Many large financial companies are headquartered in New York City, and the city is also home to a
burgeoning number of financial startup companies. Lower Manhattan is the third-largest central business district
in the United States and is home to the New York Stock Exchange, on Wall Street, and the NASDAQ, at 165 Broadway,
representing the world's largest and second largest stock exchanges, respectively, when measured both by overall
average daily trading volume and by total market capitalization of their listed companies in 2013. Investment banking
fees on Wall Street totaled approximately $40 billion in 2012, while in 2013, senior New York City bank officers
who manage risk and compliance functions earned as much as $324,000 annually. In fiscal year 2013–14, Wall Street's
securities industry generated 19% of New York State's tax revenue. New York City remains the largest global center
for trading in public equity and debt capital markets, driven in part by the size and financial development of the
U.S. economy. In July 2013, NYSE Euronext, the operator of the New York Stock Exchange, took over the administration
of the London interbank offered rate from the British Bankers Association. New York also leads in hedge fund management;
private equity; and the monetary volume of mergers and acquisitions. Several investment banks and investment managers headquartered in Manhattan are important participants in other global financial centers. New York is also the
principal commercial banking center of the United States. Many of the world's largest media conglomerates are also
based in the city. Manhattan contained over 500 million square feet (46.5 million m2) of office space in 2015, making
it the largest office market in the United States, while Midtown Manhattan, with nearly 400 million square feet (37.2
million m2) in 2015, is the largest central business district in the world. Silicon Alley, centered in Manhattan,
has evolved into a metonym for the sphere encompassing the New York City metropolitan region's high technology industries
involving the Internet, new media, telecommunications, digital media, software development, biotechnology, game design,
financial technology ("fintech"), and other fields within information technology that are supported by its entrepreneurship
ecosystem and venture capital investments. In the first half of 2015, Silicon Alley generated over US$3.7 billion
in venture capital investment across a broad spectrum of high technology enterprises, most based in Manhattan, with
others in Brooklyn, Queens, and elsewhere in the region. High technology startup companies and employment are growing
in New York City and the region, bolstered by the city's position in North America as the leading Internet hub and
telecommunications center, including its vicinity to several transatlantic fiber optic trunk lines, New York's intellectual
capital, and its extensive outdoor wireless connectivity. Verizon Communications, headquartered at 140 West Street
in Lower Manhattan, was at the final stages in 2014 of completing a US$3 billion fiberoptic telecommunications upgrade
throughout New York City. As of 2014, New York City hosted 300,000 employees in the tech sector. The biotechnology
sector is also growing in New York City, based upon the city's strength in academic scientific research and public
and commercial financial support. On December 19, 2011, then Mayor Michael R. Bloomberg announced his choice of Cornell
University and Technion-Israel Institute of Technology to build a US$2 billion graduate school of applied sciences
called Cornell Tech on Roosevelt Island with the goal of transforming New York City into the world's premier technology
capital. By mid-2014, Accelerator, a biotech investment firm, had raised more than US$30 million from investors,
including Eli Lilly and Company, Pfizer, and Johnson & Johnson, for initial funding to create biotechnology startups
at the Alexandria Center for Life Science, which encompasses more than 700,000 square feet (65,000 m2) on East 29th
Street and promotes collaboration among scientists and entrepreneurs at the center and with nearby academic, medical,
and research institutions. The New York City Economic Development Corporation's Early Stage Life Sciences Funding
Initiative and venture capital partners, including Celgene, General Electric Ventures, and Eli Lilly, committed a
minimum of US$100 million to help launch 15 to 20 ventures in life sciences and biotechnology. Tourism is a vital
industry for New York City, which has witnessed a growing combined volume of international and domestic tourists
– receiving approximately 51 million tourists in 2011, 54 million in 2013, and a record 56.4 million in 2014. Tourism
generated an all-time high US$61.3 billion in overall economic impact for New York City in 2014. I Love New York
(stylized I ❤ NY) is both a logo and a song that are the basis of an advertising campaign and have been used since
1977 to promote tourism in New York City, and later to promote New York State as well. The trademarked logo, owned
by New York State Empire State Development, appears in souvenir shops and brochures throughout the city and state,
some licensed, many not. The song is the state song of New York. Major tourist destinations include Times Square;
Broadway theater productions; the Empire State Building; the Statue of Liberty; Ellis Island; the United Nations
Headquarters; museums such as the Metropolitan Museum of Art; greenspaces such as Central Park and Washington Square
Park; Rockefeller Center; the Manhattan Chinatown; luxury shopping along Fifth and Madison Avenues; and events such
as the Halloween Parade in Greenwich Village; the Macy's Thanksgiving Day Parade; the lighting of the Rockefeller
Center Christmas Tree; the St. Patrick's Day parade; seasonal activities such as ice skating in Central Park in the
wintertime; the Tribeca Film Festival; and free performances in Central Park at Summerstage. Major attractions in
the boroughs outside Manhattan include Flushing Meadows-Corona Park and the Unisphere in Queens; the Bronx Zoo; Coney
Island, Brooklyn; and the New York Botanical Garden in the Bronx. The New York Wheel, a 630-foot Ferris wheel, was
under construction at the northern shore of Staten Island in 2015, overlooking the Statue of Liberty, New York Harbor,
and the Lower Manhattan skyline. Manhattan was on track to have an estimated 90,000 hotel rooms at the end of 2014,
a 10% increase from 2013. In October 2014, the Anbang Insurance Group, based in China, purchased the Waldorf Astoria
New York for US$1.95 billion, making it the world's most expensive hotel ever sold. New York is a prominent location
for the American entertainment industry, with many films, television series, books, and other media being set there.
As of 2012, New York City was the second largest center for filmmaking and television production in the United States,
producing about 200 feature films annually, employing 130,000 individuals, and generating an estimated $7.1 billion
in direct expenditures, and by volume, New York is the world leader in independent film production; one-third of
all American independent films are produced in New York City. The Association of Independent Commercial Producers
is also based in New York. In the first five months of 2014 alone, location filming for television pilots in New
York City exceeded the record production levels for all of 2013, and New York surpassed Los Angeles as the top North American city for television-pilot filming during the 2013/2014 cycle. New York City is additionally a center for
the advertising, music, newspaper, digital media, and publishing industries and is also the largest media market
in North America. Some of the city's media conglomerates and institutions include Time Warner, the Thomson Reuters
Corporation, the Associated Press, Bloomberg L.P., the News Corporation, The New York Times Company, NBCUniversal,
the Hearst Corporation, AOL, and Viacom. Seven of the world's top eight global advertising agency networks have their
headquarters in New York. Two of the top three record labels' headquarters are in New York: Sony Music Entertainment
and Warner Music Group. Universal Music Group also has offices in New York. New media enterprises are contributing
an increasingly important component to the city's central role in the media sphere. More than 200 newspapers and
350 consumer magazines have an office in the city, and the publishing industry employs about 25,000 people. Two of
the three national daily newspapers in the United States are New York papers: The Wall Street Journal and The New
York Times, which has won the most Pulitzer Prizes for journalism. Major tabloid newspapers in the city include the New York Daily News, which was founded in 1919 by Joseph Medill Patterson, and the New York Post, founded in 1801
by Alexander Hamilton. The city also has a comprehensive ethnic press, with 270 newspapers and magazines published
in more than 40 languages. El Diario La Prensa is New York's largest Spanish-language daily and the oldest in the
nation. The New York Amsterdam News, published in Harlem, is a prominent African American newspaper. The Village
Voice is the largest alternative newspaper. The television industry developed in New York and is a significant employer
in the city's economy. The three major American broadcast networks are all headquartered in New York: ABC, CBS, and
NBC. Many cable networks are based in the city as well, including MTV, Fox News, HBO, Showtime, Bravo, Food Network,
AMC, and Comedy Central. The City of New York operates a public broadcast service, NYCTV, that has produced several
original Emmy Award-winning shows covering music and culture in city neighborhoods and city government. New York
is also a major center for non-commercial educational media. The oldest public-access television channel in the United
States is the Manhattan Neighborhood Network, founded in 1971. WNET is the city's major public television station
and a primary source of national Public Broadcasting Service (PBS) television programming. WNYC, a public radio station
owned by the city until 1997, has the largest public radio audience in the United States. The New York City Public
Schools system, managed by the New York City Department of Education, is the largest public school system in the
United States, serving about 1.1 million students in more than 1,700 separate primary and secondary schools. The
city's public school system includes nine specialized high schools to serve academically and artistically gifted
students. The New York City Charter School Center assists the setup of new charter schools. There are approximately
900 additional privately run secular and religious schools in the city. Over 600,000 students are enrolled in New
York City's over 120 higher education institutions, the highest number of any city in the United States, including
over half a million in the City University of New York (CUNY) system alone in 2014. In 2005, three out of five Manhattan
residents were college graduates, and one out of four had a postgraduate degree, forming one of the highest concentrations
of highly educated people in any American city. New York City is home to such notable private universities as Barnard
College, Columbia University, Cooper Union, Fordham University, New York University, New York Institute of Technology,
Pace University, and Yeshiva University. The public CUNY system is one of the largest universities in the nation,
comprising 24 institutions across all five boroughs: senior colleges, community colleges, and other graduate/professional
schools. The public State University of New York (SUNY) system also serves New York City, as well as the rest of
the state. The city also has other smaller private colleges and universities, including many religious and special-purpose
institutions, such as St. John's University, The Juilliard School, Manhattan College, The College of Mount Saint
Vincent, The New School, Pratt Institute, The School of Visual Arts, The King's College, and Wagner College. The
New York Public Library, which has the largest collection of any public library system in the United States, serves
Manhattan, the Bronx, and Staten Island. Queens is served by the Queens Borough Public Library, the nation's second
largest public library system, while the Brooklyn Public Library serves Brooklyn. The New York City Health and Hospitals
Corporation (HHC) operates the public hospitals and clinics in New York City. A public benefit corporation with $6.7
billion in annual revenues, HHC is the largest municipal healthcare system in the United States, serving 1.4 million patients, including more than 475,000 uninsured city residents. HHC was created in 1969 by the New York State Legislature as a public benefit corporation (Chapter 1016 of the Laws of 1969). It is similar to a municipal agency but has a Board
of Directors. HHC operates 11 acute care hospitals, five nursing homes, six diagnostic and treatment centers, and
more than 70 community-based primary care sites, serving primarily the poor and working class. HHC's MetroPlus Health
Plan is one of the New York area's largest providers of government-sponsored health insurance and is the plan of
choice for nearly half a million New Yorkers. The best-known hospital in the HHC system is Bellevue Hospital,
the oldest public hospital in the United States. Bellevue is the designated hospital for treatment of the President
of the United States and other world leaders if they become sick or injured while in New York City. The president
of HHC is Ramanathan Raju, MD, a surgeon and former CEO of the Cook County health system in Illinois. The New York
City Police Department (NYPD) is the largest police force in the United States by a significant margin, with
over 35,000 sworn officers. Members of the NYPD are frequently referred to by politicians, the media, and their own
police cars by the nickname, New York's Finest. In 2012, New York City had the lowest overall crime rate and the
second lowest murder rate among the largest U.S. cities, having become significantly safer after a spike in crime
in the 1970s through 1990s. Violent crime in New York City decreased more than 75% from 1993 to 2005, and continued
decreasing during periods when the nation as a whole saw increases. By 2002, New York City's crime rate was similar
to that of Provo, Utah, and was ranked 197th in crime among the 216 U.S. cities with populations greater than 100,000.
In 2005, the homicide rate was at its lowest level since 1966, and in 2007 the city recorded fewer than 500 homicides for the first time since crime statistics were first published in 1963. In the first six months of 2010, 95.1% of all murder victims and 95.9% of all shooting victims in New York City were black or Hispanic; additionally, 90.2% of those arrested for murder and 96.7% of those arrested for shooting someone were black or Hispanic.
New York experienced a record low of 328 homicides in 2014 and has a far lower murder rate than other major American
cities. Organized crime has long been associated with New York City, beginning with the Forty Thieves and the Roach
Guards in the Five Points in the 1820s. The 20th century saw a rise in the Mafia, dominated by the Five Families,
as well as in gangs, including the Black Spades. The Mafia presence has declined in the city in the 21st century.
The New York City Fire Department (FDNY) provides fire protection, technical rescue, primary response to biological,
chemical, and radioactive hazards, and emergency medical services for the five boroughs of New York City. The New
York City Fire Department is the largest municipal fire department in the United States and the second largest in
the world after the Tokyo Fire Department. The FDNY employs approximately 11,080 uniformed firefighters and over
3,300 uniformed EMTs and paramedics. The FDNY's motto is New York's Bravest. The department faces multifaceted firefighting challenges, many of them unique to New York. In addition to responding to building
types that range from wood-frame single family homes to high-rise structures, there are many secluded bridges and
tunnels, as well as large parks and wooded areas that can give rise to brush fires. New York is also home to one
of the largest subway systems in the world, consisting of hundreds of miles of tunnel with electrified track. The
FDNY headquarters is located at 9 MetroTech Center in Downtown Brooklyn, and the FDNY Fire Academy is located on
Randalls Island. There are three Bureau of Fire Communications alarm offices which receive and dispatch alarms to
appropriate units. One office, at 11 Metrotech Center in Brooklyn, houses Manhattan/Citywide, Brooklyn, and Staten
Island Fire Communications. The Bronx and Queens offices are in separate buildings. Numerous major American cultural movements began in the city, including the Harlem Renaissance in literature and visual art, which established the African-American literary canon in the United States; abstract expressionism (also known as the New York School) in painting; and hip hop, punk, salsa, disco, freestyle, Tin Pan Alley, and jazz in music. The city was a center of jazz in the 1940s and abstract expressionism in the 1950s, and was the birthplace of hip hop in the 1970s; its punk and hardcore scenes were influential in the 1970s and 1980s. New York has also long had a flourishing scene for Jewish American literature, and the city has been considered the dance capital of the world. The city is also widely celebrated in popular lore, frequently
the setting for books, movies (see List of films set in New York City), and television programs. New York Fashion
Week is one of the world's preeminent fashion events and is afforded extensive coverage by the media. New York has
also frequently been ranked the top fashion capital of the world on the annual list compiled by the Global Language
Monitor. New York City has more than 2,000 arts and cultural organizations and more than 500 art galleries of all
sizes. The city government funds the arts with a larger annual budget than the National Endowment for the Arts. Wealthy
business magnates in the 19th century built a network of major cultural institutions, such as the famed Carnegie
Hall and The Metropolitan Museum of Art, that would become internationally established. The advent of electric lighting
led to elaborate theater productions, and in the 1880s, New York City theaters on Broadway and along 42nd Street
began featuring a new stage form that became known as the Broadway musical. Strongly influenced by the city's immigrants,
productions such as those of Harrigan and Hart, George M. Cohan, and others used song in narratives that often reflected
themes of hope and ambition. Forty of the city's theaters, with more than 500 seats each, are collectively known
as Broadway, after the major thoroughfare that crosses the Times Square Theater District, sometimes referred to as
"The Great White Way". According to The Broadway League, Broadway shows sold approximately US$1.27 billion worth
of tickets in the 2013–2014 season, an 11.4% increase from US$1.139 billion in the 2012–2013 season. Attendance in
2013–2014 stood at 12.21 million, representing a 5.5% increase from the 2012–2013 season's 11.57 million. New York
City's food culture includes a variety of international cuisines influenced by the city's immigrant history. Central
European and Italian immigrants originally made the city famous for bagels, cheesecake, and New York-style pizza,
while Chinese and other Asian restaurants, sandwich joints, trattorias, diners, and coffeehouses have become ubiquitous.
Some 4,000 mobile food vendors licensed by the city, many immigrant-owned, have made Middle Eastern foods such as
falafel and kebabs popular examples of modern New York street food. The city is also home to nearly one thousand
of the finest and most diverse haute cuisine restaurants in the world, according to Michelin. The New York City Department
of Health and Mental Hygiene assigns letter grades to the city's 24,000 restaurants based upon their inspection results.
New York City is home to the headquarters of the National Football League, Major League Baseball, the National Basketball
Association, the National Hockey League, and Major League Soccer. The New York metropolitan area hosts the most sports
teams in these five professional leagues. Participation in professional sports in the city predates all professional
leagues, and the city has been continuously hosting professional sports since the birth of the Brooklyn Dodgers in
1882. The city has played host to over forty major professional teams in the five sports and their respective competing
leagues, both current and historic. Four of the ten most expensive stadiums ever built worldwide (MetLife Stadium,
the new Yankee Stadium, Madison Square Garden, and Citi Field) are located in the New York metropolitan area. Madison
Square Garden, its predecessor, as well as the original Yankee Stadium and Ebbets Field, are some of the most famous
sporting venues in the world, the latter two having been commemorated on U.S. postage stamps. New York has been described
as the "Capital of Baseball". There have been 35 Major League Baseball World Series and 73 pennants won by New York
teams. It is one of only five metro areas (Los Angeles, Chicago, Baltimore–Washington, and the San Francisco Bay
Area being the others) to have two baseball teams. Additionally, there have been 14 World Series in which two New
York City teams played each other, known as a Subway Series and occurring most recently in 2000. No other metropolitan
area has had this happen more than once (Chicago in 1906, St. Louis in 1944, and the San Francisco Bay Area in 1989).
The city's two current Major League Baseball teams are the New York Mets, who play at Citi Field in Queens, and the
New York Yankees, who play at Yankee Stadium in the Bronx. The two teams compete in six games of interleague play every regular season, a series that has also come to be called the Subway Series. The Yankees have won a record 27 championships, while the
Mets have won the World Series twice. The city also was once home to the Brooklyn Dodgers (now the Los Angeles Dodgers),
who won the World Series once, and the New York Giants (now the San Francisco Giants), who won the World Series five
times. Both teams moved to California in 1958. There are also two Minor League Baseball teams in the city, the Brooklyn
Cyclones and Staten Island Yankees. The city is represented in the National Football League by the New York Giants
and the New York Jets, although both teams play their home games at MetLife Stadium in nearby East Rutherford, New
Jersey, which hosted Super Bowl XLVIII in 2014. The New York Islanders and the New York Rangers represent the city
in the National Hockey League. Also within the metropolitan area are the New Jersey Devils, who play in nearby Newark,
New Jersey. The city's National Basketball Association teams are the Brooklyn Nets and the New York Knicks, while
the New York Liberty is the city's Women's National Basketball Association team. The first national college-level basketball
championship, the National Invitation Tournament, was held in New York in 1938 and remains in the city. The city
is well known for its links to basketball, which is played in nearly every park in the city by local youth, many
of whom have gone on to play for major college programs and in the NBA. The annual United States Open Tennis Championships
is one of the world's four Grand Slam tennis tournaments and is held at the National Tennis Center in Flushing Meadows-Corona
Park, Queens. The New York Marathon is one of the world's largest, and its 2004–2006 runnings hold the top three places for the most finishers of any marathon, including 37,866 finishers in 2006. The Millrose Games is
an annual track and field meet whose featured event is the Wanamaker Mile. Boxing is also a prominent part of the
city's sporting scene, with events like the Amateur Boxing Golden Gloves being held at Madison Square Garden each
year. The city is also considered the host of the Belmont Stakes, the last, longest, and oldest of horse racing's Triple Crown races, held just over the city's border at Belmont Park on the first or second Saturday of June. The city
also hosted the 1932 U.S. Open golf tournament and the 1930 and 1939 PGA Championships, and has been host city for
both events several times, most notably at nearby Winged Foot Golf Club. Many sports are associated with New York's
immigrant communities. Stickball, a street version of baseball, was popularized by youths in the 1930s, and a street
in the Bronx was renamed Stickball Boulevard in the late 2000s to memorialize this. The iconic New York City Subway
system is the largest rapid transit system in the world when measured by stations in operation, with 469, and by
length of routes. New York's subway is notable for nearly the entire system remaining open 24 hours a day, in contrast
to the overnight shutdown common to systems in most cities, including Hong Kong, London, Paris, Seoul, and Tokyo.
The New York City Subway is also the busiest metropolitan rail transit system in the Western Hemisphere, with 1.75
billion passenger rides in 2014, while Grand Central Terminal, also popularly referred to as "Grand Central Station",
is the world's largest railway station by number of train platforms. Public transport is essential in New York City.
54.6% of New Yorkers commuted to work in 2005 using mass transit. This is in contrast to the rest of the United States,
where about 90% of commuters drive automobiles to their workplace. According to the US Census Bureau, New York City
residents spend an average of 38.4 minutes a day getting to work, the longest commute time in the nation among large
cities. New York is the only US city in which a majority (52%) of households do not have a car; only 22% of Manhattanites
own a car. Due to their high usage of mass transit, New Yorkers spend less of their household income on transportation
than the national average, saving $19 billion annually on transportation compared to other urban Americans. New York
City's public bus fleet is the largest in North America, and the Port Authority Bus Terminal, the main intercity
bus terminal of the city, serves 7,000 buses and 200,000 commuters daily, making it the busiest bus station in the
world. New York's airspace is the busiest in the United States and one of the world's busiest air transportation
corridors. The three busiest airports in the New York metropolitan area are John F. Kennedy International Airport, Newark Liberty International Airport, and LaGuardia Airport; 109 million travelers used these three airports in 2012. JFK and Newark Liberty were the busiest and fourth busiest
U.S. gateways for international air passengers, respectively, in 2012; as of 2011, JFK was the busiest airport for
international passengers in North America. Plans have advanced to expand passenger volume at a fourth airport, Stewart
International Airport near Newburgh, New York, by the Port Authority of New York and New Jersey. Plans were announced
in July 2015 to entirely rebuild LaGuardia Airport in a multibillion-dollar project to replace its aging facilities.
The Staten Island Ferry is the world's busiest ferry route, carrying approximately 20 million passengers annually on the 5.2-mile (8.4 km) route between Staten Island and Lower Manhattan and running 24 hours a day. Other ferry systems shuttle
commuters between Manhattan and other locales within the city and the metropolitan area. The George Washington Bridge
is the world's busiest motor vehicle bridge, connecting Manhattan to Bergen County, New Jersey. The Verrazano-Narrows
Bridge is the longest suspension bridge in the Americas and one of the world's longest. The Brooklyn Bridge is an
icon of the city itself. The towers of the Brooklyn Bridge are built of limestone, granite, and Rosendale cement,
and their architectural style is neo-Gothic, with characteristic pointed arches above the passageways through the
stone towers. This bridge was also the longest suspension bridge in the world from its opening until 1903, and was
the first steel-wire suspension bridge. Manhattan Island is linked to New York City's outer boroughs and New Jersey
by several tunnels as well. The Lincoln Tunnel, which carries 120,000 vehicles a day under the Hudson River between
New Jersey and Midtown Manhattan, is the busiest vehicular tunnel in the world. The tunnel was built instead of a
bridge to allow unfettered passage of large passenger and cargo ships that sailed through New York Harbor and up
the Hudson River to Manhattan's piers. The Holland Tunnel, connecting Lower Manhattan to Jersey City, New Jersey,
was the world's first mechanically ventilated vehicular tunnel when it opened in 1927. The Queens-Midtown Tunnel,
built to relieve congestion on the bridges connecting Manhattan with Queens and Brooklyn, was the largest non-federal
project of its time when it was completed in 1940. President Franklin D. Roosevelt was the first person to drive
through it. The Hugh L. Carey Tunnel runs underneath Battery Park and connects the Financial District at the southern
tip of Manhattan to Red Hook in Brooklyn. New York's high rate of public transit use, over 200,000 daily cyclists
as of 2014, and many pedestrian commuters make it the most energy-efficient major city in the United States. Walk
and bicycle modes of travel account for 21% of all modes for trips in the city; nationally the rate for metro regions
is about 8%. In both its 2011 and 2015 rankings, Walk Score named New York City the most walkable large city in the
United States. Citibank sponsored the introduction of 10,000 public bicycles for the city's bike-share project in
the summer of 2013. Research conducted by Quinnipiac University showed that a majority of New Yorkers support the
initiative. New York City's numerical "in-season cycling indicator" hit an all-time high
in 2013. New York City is supplied with drinking water by the protected Catskill Mountains watershed. As a result
of the watershed's integrity and undisturbed natural water filtration system, New York is one of only four major
cities in the United States the majority of whose drinking water is pure enough not to require purification by water
treatment plants. The Croton Watershed north of the city is undergoing construction of a US$3.2 billion water purification
plant to augment New York City's water supply by an estimated 290 million gallons daily, representing a greater than
20% addition to the city's current availability of water. The ongoing expansion of New York City Water Tunnel No.
3, an integral part of the New York City water supply system, is the largest capital construction project in the
city's history. The City Council is a unicameral body consisting of 51 council members whose districts are defined by geographic population boundaries. The mayor and council members are elected to four-year terms and are limited to three consecutive terms, but may run again after a four-year break. The New York City Administrative Code, the New York City Rules, and the City Record are the code of local
laws, compilation of regulations, and official journal, respectively. The Democratic Party holds the majority of
public offices. As of November 2008, 67% of registered voters in the city were Democrats. New York City has not been
carried by a Republican in a statewide or presidential election since President Calvin Coolidge won the five boroughs
in 1924. In 2012, Democrat Barack Obama became the first presidential candidate of any party to receive more than
80% of the overall vote in New York City, sweeping all five boroughs. Party platforms center on affordable housing,
education, and economic development, and labor politics are of importance in the city. Much of the scientific research
in the city is done in medicine and the life sciences. New York City has the most post-graduate life sciences degrees
awarded annually in the United States, with 127 Nobel laureates having roots in local institutions as of 2004, and
in 2012, 43,523 licensed physicians were practicing in New York City. Major biomedical research institutions include
Memorial Sloan–Kettering Cancer Center, Rockefeller University, SUNY Downstate Medical Center, Albert Einstein College
of Medicine, Mount Sinai School of Medicine, and Weill Cornell Medical College, which are being joined by the Cornell University/Technion-Israel
Institute of Technology venture on Roosevelt Island. Each year HHC's facilities provide about 225,000 admissions,
one million emergency room visits and five million clinic visits to New Yorkers. HHC facilities treat nearly one-fifth
of all general hospital discharges and more than one third of emergency room and hospital-based clinic visits in
New York City. Sociologists and criminologists have not reached consensus on the explanation for the dramatic decrease
in the city's crime rate. Some attribute the phenomenon to new tactics used by the NYPD, including its use of CompStat
and the broken windows theory. Others cite the end of the crack epidemic and demographic changes, including from
immigration. Another theory is that widespread exposure to lead pollution from automobile exhaust, which can lower
intelligence and increase aggression levels, incited the initial crime wave in the mid-20th century, most acutely
affecting heavily trafficked cities like New York. A strong correlation was found demonstrating that violent crime
rates in New York and other big cities began to fall after lead was removed from American gasoline in the 1970s.
Another theory cited to explain New York City's falling homicide rate is the inverse correlation between the number
of murders and the city's increasingly wet climate. New York City has been described as the cultural capital
of the world by the diplomatic consulates of Iceland and Latvia and by New York's Baruch College. A book containing
a series of essays titled New York, culture capital of the world, 1940–1965 has also been published as showcased
by the National Library of Australia. In describing New York, author Tom Wolfe said, "Culture just seems to be in
the air, like part of the weather." Lincoln Center for the Performing Arts, anchoring Lincoln Square on the Upper
West Side of Manhattan, is home to numerous influential arts organizations, including the Metropolitan Opera, New
York City Opera, New York Philharmonic, and New York City Ballet, as well as the Vivian Beaumont Theater, the Juilliard
School, Jazz at Lincoln Center, and Alice Tully Hall. The Lee Strasberg Theatre and Film Institute is in Union Square,
and Tisch School of the Arts is based at New York University, while Central Park SummerStage presents performances
of free plays and music in Central Park. New York City is home to hundreds of cultural institutions and historic
sites, many of which are internationally known. Museum Mile is the name for a section of Fifth Avenue running from
82nd to 105th streets on the Upper East Side of Manhattan, in an area sometimes called Upper Carnegie Hill. The Mile,
which contains one of the densest displays of culture in the world, is actually three blocks longer than one mile
(1.6 km). Ten museums occupy the length of this section of Fifth Avenue. The tenth museum, the Museum for African Art, joined the ensemble in 2009; however, its museum at 110th Street, the first new museum constructed on the Mile since the Guggenheim in 1959, did not open until late 2012. In addition to other programming, the museums collaborate on the annual Museum Mile Festival, held each June, to promote the museums and increase visitation. Many of
the world's most lucrative art auctions are held in New York City. The New York area is home to a distinctive regional
speech pattern called the New York dialect, alternatively known as Brooklynese or New Yorkese. It has generally been
considered one of the most recognizable accents within American English. The classic version of this dialect is centered
on middle and working-class people of European descent. However, the influx of non-European immigrants in recent
decades has led to changes in this distinctive dialect, and the traditional form of this speech pattern is no longer
as prevalent among general New Yorkers as in the past. The traditional New York area accent is characterized as non-rhotic,
so that the sound [ɹ] does not appear at the end of a syllable or immediately before a consonant; hence the pronunciation
of the city name as "New Yawk." There is no [ɹ] in words like park [pɑək] or [pɒək] (with vowel backed and diphthongized
due to the low-back chain shift), butter [bʌɾə], or here [hiə]. In another feature called the low back chain shift,
the [ɔ] vowel sound of words like talk, law, cross, chocolate, and coffee and the often homophonous [ɔr] in core
and more are tensed and usually raised more than in General American. In the most old-fashioned and extreme versions
of the New York dialect, the vowel sounds of words like "girl" and of words like "oil" became a diphthong [ɜɪ]. This
would often be misperceived by speakers of other accents as a reversal of the er and oy sounds, so that girl is pronounced
"goil" and oil is pronounced "erl"; this leads to the caricature of New Yorkers saying things like "Joizey" (Jersey),
"Toidy-Toid Street" (33rd St.) and "terlet" (toilet). The character Archie Bunker from the 1970s sitcom All in the Family (played by Carroll O'Connor) was a notable example of this pattern of speech, which continues to fade. In soccer, New York City is represented by New York City FC of Major League Soccer,
who play their home games at Yankee Stadium. The New York Red Bulls play their home games at Red Bull Arena in nearby
Harrison, New Jersey. Historically, the city is known for the New York Cosmos, the highly successful former professional
soccer team which was the American home of Pelé, one of the world's most famous soccer players. A new version of
the New York Cosmos was formed in 2010, and began play in the second division North American Soccer League in 2013.
The Cosmos play their home games at James M. Shuart Stadium on the campus of Hofstra University, just outside the
New York City limits in Hempstead, New York. Mass transit in New York City, most of which runs 24 hours a day, accounts
for one in every three users of mass transit in the United States, and two-thirds of the nation's rail riders live
in the New York City Metropolitan Area. New York City's commuter rail network is the largest in North America. The
rail network, connecting New York City to its suburbs, consists of the Long Island Rail Road, Metro-North Railroad,
and New Jersey Transit. The combined systems converge at Grand Central Terminal and Pennsylvania Station and contain
more than 250 stations and 20 rail lines. In Queens, the elevated AirTrain people mover system connects JFK International
Airport to the New York City Subway and the Long Island Rail Road; a separate AirTrain system is planned alongside
the Grand Central Parkway to connect LaGuardia Airport to these transit systems. For intercity rail, New York City
is served by Amtrak, whose busiest station by a significant margin is Pennsylvania Station on the West Side of Manhattan,
from which Amtrak provides connections to Boston, Philadelphia, and Washington, D.C. along the Northeast Corridor,
as well as long-distance train service to other North American cities. The Staten Island Railway rapid transit system
solely serves Staten Island, operating 24 hours a day. The Port Authority Trans-Hudson (PATH train) links Midtown
and Lower Manhattan to northeastern New Jersey, primarily Hoboken, Jersey City, and Newark. Like the New York City
Subway, the PATH operates 24 hours a day, meaning that three of the six rapid transit systems in the world that operate on 24-hour schedules are wholly or partly in New York (the others are a portion of the Chicago 'L', the PATCO Speedline serving Philadelphia, and the Copenhagen Metro). Multibillion-dollar heavy-rail transit projects under construction in New York City include the Second Avenue Subway, the East Side Access project, and the 7 Subway Extension. Other
features of the city's transportation infrastructure encompass more than 12,000 yellow taxicabs; various competing
startup transportation network companies; and an aerial tramway that transports commuters between Roosevelt Island
and Manhattan Island. Despite New York's heavy reliance on its vast public transit system, streets are a defining
feature of the city. Manhattan's street grid plan greatly influenced the city's physical development. Several of
the city's streets and avenues, like Broadway, Wall Street, Madison Avenue, and Seventh Avenue are also used as metonyms
for national industries there: the theater, finance, advertising, and fashion organizations, respectively. New York
City also has an extensive web of expressways and parkways, which link the city's boroughs to each other as well
as to northern New Jersey, Westchester County, Long Island, and southwestern Connecticut through various bridges
and tunnels. Because these highways serve millions of outer borough and suburban residents who commute into Manhattan,
motorists are commonly stranded for hours in daily traffic jams, particularly during rush hour. New York City is located on one of the world's largest natural harbors, and the boroughs of Manhattan
and Staten Island are primarily coterminous with islands of the same names, while Queens and Brooklyn are located
at the west end of the larger Long Island, and The Bronx is located at the southern tip of New York State's mainland.
This situation of boroughs separated by water led to the development of an extensive infrastructure of bridges and
tunnels. Nearly all of the city's major bridges and tunnels are notable, and several have broken or set records.
The Queensboro Bridge is an important piece of cantilever architecture. The Manhattan Bridge, Throgs Neck Bridge,
Triborough Bridge, and Verrazano-Narrows Bridge are all examples of Structural Expressionism. New York City has focused
on reducing its environmental impact and carbon footprint. Mass transit use in New York City is the highest in the
United States. Also, by 2010, the city had 3,715 hybrid taxis and other clean diesel vehicles, representing around
28% of New York's taxi fleet in service, the most of any city in North America. The city government was a petitioner
in the landmark Massachusetts v. Environmental Protection Agency Supreme Court case forcing the EPA to regulate greenhouse
gases as pollutants. The city is also a leader in the construction of energy-efficient green office buildings, including
the Hearst Tower among others. Mayor Bill de Blasio has committed to an 80% reduction in greenhouse gas emissions
between 2014 and 2050 to reduce the city's contributions to climate change, beginning with a comprehensive "Green
Buildings" plan. Newtown Creek, a 3.5-mile-long (6-kilometer) estuary that forms part of the border between the
boroughs of Brooklyn and Queens, has been designated a Superfund site for environmental clean-up and remediation
of the waterway's recreational and economic resources for many communities. One of the most heavily used bodies of
water in the Port of New York and New Jersey, it had been one of the most contaminated industrial sites in the country,
containing years of discarded toxins, an estimated 30 million US gallons (110,000 m3) of spilled oil, including the
Greenpoint oil spill, raw sewage from New York City's sewer system, and other accumulation. New York City has been
a metropolitan municipality with a mayor-council form of government since its consolidation in 1898. The government
of New York is more centralized than that of most other U.S. cities; the city government is responsible
for public education, correctional institutions, public safety, recreational facilities, sanitation, water supply,
and welfare services. Each borough is coextensive with a judicial district of the state Unified Court System, of
which the Criminal Court and the Civil Court are the local courts, while the New York Supreme Court conducts major
trials and appeals. Manhattan hosts the First Department of the Supreme Court, Appellate Division while Brooklyn
hosts the Second Department. There are also several extrajudicial administrative courts, which are executive agencies
and not part of the state Unified Court System. Uniquely among major American cities, New York is divided between,
and is host to the main branches of, two different US district courts: the District Court for the Southern District
of New York, whose main courthouse is on Foley Square near City Hall in Manhattan and whose jurisdiction includes
Manhattan and the Bronx, and the District Court for the Eastern District of New York, whose main courthouse is in
Brooklyn and whose jurisdiction includes Brooklyn, Queens, and Staten Island. The US Court of Appeals for the Second
Circuit and US Court of International Trade are also based in New York, also on Foley Square in Manhattan. New York
is the most important source of political fundraising in the United States, as four of the top five ZIP codes in
the nation for political contributions are in Manhattan. The top ZIP code, 10021 on the Upper East Side, generated
the most money for the 2004 presidential campaigns of George W. Bush and John Kerry. The city has a strong imbalance
of payments with the national and state governments. It receives 83 cents in services for every $1 it sends to the
federal government in taxes (or annually sends $11.4 billion more than it receives back). The city also sends an
additional $11 billion more each year to the state of New York than it receives back. In 2006, the Sister City Program
of the City of New York, Inc. was restructured and renamed New York City Global Partners. New York City has expanded
its international outreach via this program to a network of cities worldwide, promoting the exchange of ideas and
innovation between their citizenry and policymakers, according to the city's website. New York's historic sister
cities are denoted below by the year they joined New York City's partnership network.
To Kill a Mockingbird is a novel by Harper Lee published in 1960. It was immediately successful, winning the Pulitzer Prize,
and has become a classic of modern American literature. The plot and characters are loosely based on the author's
observations of her family and neighbors, as well as on an event that occurred near her hometown in 1936, when she
was 10 years old. As a Southern Gothic novel and a Bildungsroman, the primary themes of To Kill a Mockingbird involve
racial injustice and the destruction of innocence. Scholars have noted that Lee also addresses issues of class, courage,
compassion, and gender roles in the American Deep South. The book is widely taught in schools in the United States
with lessons that emphasize tolerance and decry prejudice. Despite its themes, To Kill a Mockingbird has been subject
to campaigns for removal from public classrooms, often challenged for its use of racial epithets. Reaction to the
novel varied widely upon publication. Literary analysis of it is sparse, considering the number of copies sold and
its widespread use in education. Author Mary McDonough Murphy, who collected individual impressions of To Kill a
Mockingbird by several authors and public figures, calls the book "an astonishing phenomenon". In 2006, British
librarians ranked the book ahead of the Bible as one "every adult should read before they die". It was adapted into
an Oscar-winning film in 1962 by director Robert Mulligan, with a screenplay by Horton Foote. Since 1990, a play
based on the novel has been performed annually in Harper Lee's hometown of Monroeville, Alabama. To Kill a Mockingbird
was Lee's only published book until Go Set a Watchman, an earlier draft of To Kill a Mockingbird, was published on
July 14, 2015. Lee continued to respond to her work's impact until her death in February 2016, although she had refused
any personal publicity for herself or the novel since 1964. Born in 1926, Harper Lee grew up in the Southern town
of Monroeville, Alabama, where she became close friends with soon-to-be famous writer Truman Capote. She attended
Huntingdon College in Montgomery (1944–45), and then studied law at the University of Alabama (1945–49). While attending
college, she wrote for campus literary magazines: Huntress at Huntingdon and the humor magazine Rammer Jammer at
the University of Alabama. At both colleges, she wrote short stories and other works about racial injustice, a rarely
mentioned topic on such campuses at the time. In 1950, Lee moved to New York City, where she worked as a reservation
clerk for British Overseas Airways Corporation; there, she began writing a collection of essays and short stories
about people in Monroeville. Hoping to be published, Lee presented her writing in 1957 to a literary agent recommended
by Capote. An editor at J. B. Lippincott, who bought the manuscript, advised her to quit the airline and concentrate
on writing. Donations from friends allowed her to write uninterrupted for a year. After finishing the first draft
and returning it to Lippincott, the manuscript, at that point titled "Go Set a Watchman", fell into the hands of
Therese von Hohoff Torrey — known professionally as Tay Hohoff — a small, wiry veteran editor in her late 50s. Hohoff
was impressed. “[T]he spark of the true writer flashed in every line,” she would later recount in a corporate history
of Lippincott. But as Hohoff saw it, the manuscript was by no means fit for publication. It was, as she described
it, “more a series of anecdotes than a fully conceived novel.” During the next couple of years, she led Lee from
one draft to the next until the book finally achieved its finished form and was retitled To Kill a Mockingbird. Lee
had lost her mother, who suffered from mental illness, six years before she met Hohoff at Lippincott’s offices. Her
father, a lawyer on whom Atticus was modeled, would die two years after the publication of To Kill a Mockingbird.
Ultimately, Lee spent over two and a half years writing To Kill a Mockingbird. The book was published on July 11,
1960. After the "Watchman" title was rejected, the book was initially re-titled Atticus, but Lee renamed it "To Kill a Mockingbird"
to reflect that the story went beyond just a character portrait. The editorial team at Lippincott warned Lee that
she would probably sell only several thousand copies. In 1964, Lee recalled her hopes for the book when she said,
"I never expected any sort of success with 'Mockingbird.' ... I was hoping for a quick and merciful death at the
hands of the reviewers but, at the same time, I sort of hoped someone would like it enough to give me encouragement.
Public encouragement. I hoped for a little, as I said, but I got rather a whole lot, and in some ways this was just
about as frightening as the quick, merciful death I'd expected." Instead of a "quick and merciful death", Reader's
Digest Condensed Books chose the book for reprinting in part, which gave it a wide readership immediately. Since
the original publication, the book has never been out of print. The story takes place during three years (1933–35)
of the Great Depression in the fictional "tired old town" of Maycomb, Alabama, the seat of Maycomb County. It focuses
on six-year-old Jean Louise Finch (Scout), who lives with her older brother, Jem, and their widowed father, Atticus,
a middle-aged lawyer. Jem and Scout befriend a boy named Dill, who visits Maycomb to stay with his aunt each summer.
The three children are terrified of, and fascinated by, their neighbor, the reclusive Arthur "Boo" Radley. The adults
of Maycomb are hesitant to talk about Boo, and, for many years, few have seen him. The children feed one another's
imagination with rumors about his appearance and reasons for remaining hidden, and they fantasize about how to get
him out of his house. After two summers of friendship with Dill, Scout and Jem find that someone leaves them small
gifts in a tree outside the Radley place. Several times the mysterious Boo makes gestures of affection to the children,
but, to their disappointment, he never appears in person. Judge Taylor appoints Atticus to defend Tom Robinson, a
black man who has been accused of raping a young white woman, Mayella Ewell. Although many of Maycomb's citizens
disapprove, Atticus agrees to defend Tom to the best of his ability. Other children taunt Jem and Scout for Atticus's
actions, calling him a "nigger-lover". Scout is tempted to stand up for her father's honor by fighting, even though
he has told her not to. Atticus faces a group of men intent on lynching Tom. This danger is averted when Scout, Jem,
and Dill shame the mob into dispersing by forcing them to view the situation from Atticus' and Tom's points of view.
Atticus does not want Jem and Scout to be present at Tom Robinson's trial. No seat is available on the main floor,
so by invitation of Rev. Sykes, Jem, Scout, and Dill watch from the colored balcony. Atticus establishes that the
accusers—Mayella and her father, Bob Ewell, the town drunk—are lying. It also becomes clear that the friendless Mayella
made sexual advances toward Tom, and that her father caught her and beat her. Despite significant evidence of Tom's
innocence, the jury convicts him. Jem's faith in justice is badly shaken, as is Atticus', when the hapless Tom
is shot and killed while trying to escape from prison. Despite Tom's conviction, Bob Ewell is humiliated by the events
of the trial, Atticus explaining that he "destroyed [Ewell's] last shred of credibility at that trial." Ewell vows
revenge, spitting in Atticus' face, trying to break into the judge's house, and menacing Tom Robinson's widow. Finally,
he attacks the defenseless Jem and Scout while they walk home on a dark night after the school Halloween pageant.
One of Jem's arms is broken in the struggle, but amid the confusion someone comes to the children's rescue. The mysterious
man carries Jem home, where Scout realizes that he is Boo Radley. Sheriff Tate arrives and discovers that Bob Ewell
has died during the fight. The sheriff argues with Atticus about the prudence and ethics of charging Jem (whom Atticus
believes to be responsible) or Boo (whom Tate believes to be responsible). Atticus eventually accepts the sheriff's
story that Ewell simply fell on his own knife. Boo asks Scout to walk him home, and after she says goodbye to him
at his front door he disappears again. While standing on the Radley porch, Scout imagines life from Boo's perspective,
and regrets that they had never repaid him for the gifts he had given them. Lee has said that To Kill a Mockingbird
is not an autobiography, but rather an example of how an author "should write about what he knows and write truthfully".
Nevertheless, several people and events from Lee's childhood parallel those of the fictional Scout. Lee's father,
Amasa Coleman Lee, was an attorney, similar to Atticus Finch, and in 1919, he defended two black men accused of murder.
After they were convicted, hanged and mutilated, he never tried another criminal case. Lee's father was also the
editor and publisher of the Monroeville newspaper. Although more of a proponent of racial segregation than Atticus,
he gradually became more liberal in his later years. Though Scout's mother died when she was a baby, Lee was 25 when
her mother, Frances Cunningham Finch, died. Lee's mother was prone to a nervous condition that rendered her mentally
and emotionally absent. Lee had a brother named Edwin, who—like the fictional Jem—was four years older than his sister.
As in the novel, a black housekeeper came daily to care for the Lee house and family. Lee modeled the character of
Dill on her childhood friend, Truman Capote, known then as Truman Persons. Just as Dill lived next door to Scout
during the summer, Capote lived next door to Lee with his aunts while his mother visited New York City. Like Dill,
Capote had an impressive imagination and a gift for fascinating stories. Both Lee and Capote were atypical children:
both loved to read. Lee was a scrappy tomboy who was quick to fight, but Capote was ridiculed for his advanced vocabulary
and lisp. She and Capote made up and acted out stories they wrote on an old Underwood typewriter Lee's father gave
them. They became good friends when both felt alienated from their peers; Capote called the two of them "apart people".
In 1960, Capote and Lee traveled to Kansas together to investigate the multiple murders that were the basis for Capote's
nonfiction novel In Cold Blood. The origin of Tom Robinson is less clear, although many have speculated that his
character was inspired by several models. When Lee was 10 years old, a white woman near Monroeville accused a black
man named Walter Lett of raping her. The story and the trial were covered by her father's newspaper which reported
that Lett was convicted and sentenced to death. After a series of letters appeared claiming Lett had been falsely
accused, his sentence was commuted to life in prison. He died there of tuberculosis in 1937. Scholars believe that
Robinson's difficulties reflect the notorious case of the Scottsboro Boys, in which nine black men were convicted
of raping two white women on negligible evidence. However, in 2005, Lee stated that she had in mind something less
sensational, although the Scottsboro case served "the same purpose" to display Southern prejudices. Emmett Till,
a black teenager who was murdered for flirting with a white woman in Mississippi in 1955, and whose death is credited
as a catalyst for the Civil Rights Movement, is also considered a model for Tom Robinson. Writing about Lee's style
and use of humor in a tragic story, scholar Jacqueline Tavernier-Courbin states: "Laughter ... [exposes] the gangrene
under the beautiful surface but also by demeaning it; one can hardly ... be controlled by what one is able to laugh
at." Scout's precocious observations about her neighbors and behavior inspire National Endowment for the Arts director
David Kipen to call her "hysterically funny". To address complex issues, however, Tavernier-Courbin notes that Lee
uses parody, satire, and irony effectively by using a child's perspective. After Dill promises to marry her, then
spends too much time with Jem, Scout reasons the best way to get him to pay attention to her is to beat him up, which
she does several times. Scout's first day in school is a satirical treatment of education; her teacher says she must
undo the damage Atticus has wrought in teaching her to read and write, and forbids Atticus from teaching her further.
Lee treats the most unfunny situations with irony, however, as Jem and Scout try to understand how Maycomb embraces
racism and still tries sincerely to remain a decent society. Satire and irony are used to such an extent that Tavernier-Courbin
suggests one interpretation for the book's title: Lee is doing the mocking—of education, the justice system, and
her own society by using them as subjects of her humorous disapproval. Critics also note the entertaining methods
used to drive the plot. When Atticus is out of town, Jem locks a Sunday school classmate in the church basement with
the furnace during a game of Shadrach. This prompts their black housekeeper Calpurnia to escort Scout and Jem to
her church, which allows the children a glimpse into her personal life, as well as Tom Robinson's. Scout falls asleep
during the Halloween pageant and makes a tardy entrance onstage, causing the audience to laugh uproariously. She
is so distracted and embarrassed that she prefers to go home in her ham costume, which saves her life. Scholars have
characterized To Kill a Mockingbird as both a Southern Gothic and coming-of-age or Bildungsroman novel. The grotesque
and near-supernatural qualities of Boo Radley and his house, and the element of racial injustice involving Tom Robinson
contribute to the aura of the Gothic in the novel. Lee used the term "Gothic" to describe the architecture of Maycomb's
courthouse and in regard to Dill's exaggeratedly morbid performances as Boo Radley. Outsiders are also an important
element of Southern Gothic texts and Scout and Jem's questions about the hierarchy in the town cause scholars to
compare the novel to Catcher in the Rye and Adventures of Huckleberry Finn. Despite challenging the town's systems,
Scout reveres Atticus as an authority above all others, because he believes that following one's conscience is the
highest priority, even when the result is social ostracism. However, scholars debate about the Southern Gothic classification,
noting that Boo Radley is in fact human, protective, and benevolent. Furthermore, in addressing themes such as alcoholism,
incest, rape, and racial violence, Lee wrote about her small town realistically rather than melodramatically. She
portrays the problems of individual characters as universal underlying issues in every society. As children coming
of age, Scout and Jem face hard realities and learn from them. Lee seems to examine Jem's sense of loss about how
his neighbors have disappointed him more than Scout's. Jem says to their neighbor Miss Maudie the day after the trial,
"It's like bein' a caterpillar wrapped in a cocoon ... I always thought Maycomb folks were the best folks in the
world, least that's what they seemed like". This leads him to struggle with understanding the separations of race
and class. Just as the novel is an illustration of the changes Jem faces, it is also an exploration of the realities
Scout must face as an atypical girl on the verge of womanhood. As one scholar writes, "To Kill a Mockingbird can
be read as a feminist Bildungsroman, for Scout emerges from her childhood experiences with a clear sense of her place
in her community and an awareness of her potential power as the woman she will one day be." The second part of the
novel deals with what book reviewer Harding LeMay termed "the spirit-corroding shame of the civilized white Southerner
in the treatment of the Negro". In the years following its release, many reviewers considered To Kill a Mockingbird
a novel primarily concerned with race relations. Claudia Durst Johnson considers it "reasonable to believe" that
the novel was shaped by two events involving racial issues in Alabama: Rosa Parks' refusal to yield her seat on a
city bus to a white person, which sparked the 1955 Montgomery Bus Boycott, and the 1956 riots at the University of
Alabama after Autherine Lucy and Polly Myers were admitted (Myers eventually withdrew her application and Lucy was
expelled, but reinstated in 1980). In writing about the historical context of the novel's construction, two other
literary scholars remark: "To Kill a Mockingbird was written and published amidst the most significant and conflict-ridden
social change in the South since the Civil War and Reconstruction. Inevitably, despite its mid-1930s setting, the
story told from the perspective of the 1950s voices the conflicts, tensions, and fears induced by this transition."
Scholar Patrick Chura, who suggests Emmett Till was a model for Tom Robinson, enumerates the injustices endured by
the fictional Tom that Till also faced. Chura notes the icon of the black rapist causing harm to the representation
of the "mythologized vulnerable and sacred Southern womanhood". Any transgressions by black males that merely hinted
at sexual contact with white females during the time the novel was set often resulted in a punishment of death for
the accused. Tom Robinson's trial was juried by poor white farmers who convicted him despite overwhelming evidence
of his innocence, as more educated and moderate white townspeople supported the jury's decision. Furthermore, the
victim of racial injustice in To Kill a Mockingbird was physically impaired, which made him unable to commit the
act he was accused of, but also crippled him in other ways. Roslyn Siegel includes Tom Robinson as an example of
the recurring motif among white Southern writers of the black man as "stupid, pathetic, defenseless, and dependent
upon the fair dealing of the whites, rather than his own intelligence to save him". Although Tom is spared from being
lynched, he is killed with excessive violence during an attempted escape from prison, shot seventeen times. The theme
of racial injustice appears symbolically in the novel as well. For example, Atticus must shoot a rabid dog, even
though it is not his job to do so. Carolyn Jones argues that the dog represents prejudice within the town of Maycomb,
and Atticus, who waits on a deserted street to shoot the dog, must fight against the town's racism without help from
other white citizens. He is also alone when he faces a group intending to lynch Tom Robinson and once more in the
courthouse during Tom's trial. Lee even uses dreamlike imagery from the mad dog incident to describe some of the
courtroom scenes. Jones writes, "[t]he real mad dog in Maycomb is the racism that denies the humanity of Tom Robinson
.... When Atticus makes his summation to the jury, he literally bares himself to the jury's and the town's anger."
In a 1964 interview, Lee remarked that her aspiration was "to be ... the Jane Austen of South Alabama." Both Austen
and Lee challenged the social status quo and valued individual worth over social standing. When Scout embarrasses
her poorer classmate, Walter Cunningham, at the Finch home one day, Calpurnia, their black cook, chastises and punishes
her for doing so. Atticus respects Calpurnia's judgment, and later in the book even stands up to his sister, the
formidable Aunt Alexandra, when she strongly suggests they fire Calpurnia. One writer notes that Scout, "in Austenian
fashion", satirizes women with whom she does not wish to identify. Literary critic Jean Blackall lists the priorities
shared by the two authors: "affirmation of order in society, obedience, courtesy, and respect for the individual
without regard for status". Scholars argue that Lee's approach to class and race was more complex "than ascribing
racial prejudice primarily to 'poor white trash' ... Lee demonstrates how issues of gender and class intensify prejudice,
silence the voices that might challenge the existing order, and greatly complicate many Americans' conception of
the causes of racism and segregation." Lee's use of the middle-class narrative voice is a literary device that allows
an intimacy with the reader, regardless of class or cultural background, and fosters a sense of nostalgia. Sharing
Scout and Jem's perspective, the reader is allowed to engage in relationships with the conservative antebellum Mrs.
Dubose; the lower-class Ewells, and the Cunninghams who are equally poor but behave in vastly different ways; the
wealthy but ostracized Mr. Dolphus Raymond; and Calpurnia and other members of the black community. The children
internalize Atticus' admonition not to judge someone until they have walked around in that person's skin, gaining
a greater understanding of people's motives and behavior. The novel has been noted for its poignant exploration of
different forms of courage. Scout's impulsive inclination to fight students who insult Atticus reflects her attempt
to stand up for him and defend him. Atticus is the moral center of the novel, however, and he teaches Jem one of
the most significant lessons of courage. In a statement that foreshadows Atticus' motivation for defending Tom Robinson
and describes Mrs. Dubose, who is determined to break herself of a morphine addiction, Atticus tells Jem that courage
is "when you're licked before you begin but you begin anyway and you see it through no matter what". Charles Shields,
who has written the only book-length biography of Harper Lee to date, suggests that the reason for the novel's enduring
popularity and impact is that "its lessons of human dignity and respect for others remain fundamental and universal".
Atticus' lesson to Scout that "you never really understand a person until you consider things from his point of view—until
you climb around in his skin and walk around in it" exemplifies his compassion. She ponders the comment when listening
to Mayella Ewell's testimony. When Mayella reacts with confusion to Atticus' question of whether she has any friends, Scout
offers that she must be lonelier than Boo Radley. Having walked Boo home after he saves their lives, Scout stands
on the Radley porch and considers the events of the previous three years from Boo's perspective. One writer remarks,
"... [w]hile the novel concerns tragedy and injustice, heartache and loss, it also carries with it a strong sense
[of] courage, compassion, and an awareness of history to be better human beings." Just as Lee explores Jem's development
in coming to grips with a racist and unjust society, Scout realizes what being female means, and several female characters
influence her development. Scout's primary identification with her father and older brother allows her to describe
the variety and depth of female characters in the novel both as one of them and as an outsider. Scout's primary female
models are Calpurnia and her neighbor Miss Maudie, both of whom are strong willed, independent, and protective. Mayella
Ewell also has an influence; Scout watches her destroy an innocent man in order to hide her desire for him. The female
characters who comment the most on Scout's lack of willingness to adhere to a more feminine role are also those who
promote the most racist and classist points of view. For example, Mrs. Dubose chastises Scout for not wearing a dress
and camisole, and indicates she is ruining the family name by not doing so, in addition to insulting Atticus' intentions
to defend Tom Robinson. By balancing the masculine influences of Atticus and Jem with the feminine influences of
Calpurnia and Miss Maudie, one scholar writes, "Lee gradually demonstrates that Scout is becoming a feminist in the
South, for with the use of first-person narration, she indicates that Scout/Jean Louise still maintains the ambivalence
about being a Southern lady she possessed as a child." Absent mothers and abusive fathers are another theme in the
novel. Scout and Jem's mother died before Scout could remember her, Mayella's mother is dead, and Mrs. Radley is
silent about Boo's confinement to the house. Apart from Atticus, the fathers described are abusers. Bob Ewell, it
is hinted, molested his daughter, and Mr. Radley imprisons his son in his house until Boo is remembered only as a
phantom. Bob Ewell and Mr. Radley represent a form of masculinity that Atticus does not, and the novel suggests that
such men as well as the traditionally feminine hypocrites at the Missionary Society can lead society astray. Atticus
stands apart as a unique model of masculinity; as one scholar explains: "It is the job of real men who embody the
traditional masculine qualities of heroic individualism, bravery, and an unshrinking knowledge of and dedication
to social justice and morality, to set the society straight." Allusions to legal issues in To Kill a Mockingbird,
particularly in scenes outside of the courtroom, have drawn the attention of legal scholars. Claudia Durst Johnson
writes that "a greater volume of critical readings has been amassed by two legal scholars in law journals than by
all the literary scholars in literary journals". The opening quote by the 19th-century essayist Charles Lamb reads:
"Lawyers, I suppose, were children once." Johnson notes that even in Scout and Jem's childhood world, compromises
and treaties are struck with each other by spitting on one's palm and laws are discussed by Atticus and his children:
is it right that Bob Ewell hunts and traps out of season? Many social codes are broken by people in symbolic courtrooms:
Mr. Dolphus Raymond has been exiled by society for taking a black woman as his common-law wife and having interracial
children; Mayella Ewell is beaten by her father in punishment for kissing Tom Robinson; by being turned into a non-person,
Boo Radley receives a punishment far greater than any court could have given him. Scout repeatedly breaks codes and
laws and reacts to her punishment for them. For example, she refuses to wear frilly clothes, saying that Aunt Alexandra's
"fanatical" attempts to place her in them made her feel "a pink cotton penitentiary closing in on [her]". Johnson
states, "[t]he novel is a study of how Jem and Scout begin to perceive the complexity of social codes and how the
configuration of relationships dictated by or set off by those codes fails or nurtures the inhabitants of (their)
small worlds." Songbirds and their associated symbolism appear throughout the novel. The family's last name of Finch
is also Lee's mother's maiden name. The titular mockingbird is a key motif of this theme, which first appears
when Atticus, having given his children air-rifles for Christmas, allows their Uncle Jack to teach them to shoot.
Atticus warns them that, although they can "shoot all the bluejays they want", they must remember that "it's a sin
to kill a mockingbird". Confused, Scout approaches her neighbor Miss Maudie, who explains that mockingbirds never
harm other living creatures. She points out that mockingbirds simply provide pleasure with their songs, saying, "They
don't do one thing but sing their hearts out for us." Writer Edwin Bruell summarized the symbolism when he wrote
in 1964, "'To kill a mockingbird' is to kill that which is innocent and harmless—like Tom Robinson." Scholars have
noted that Lee often returns to the mockingbird theme when trying to make a moral point. Despite her editors' warnings
that the book might not sell well, it quickly became a sensation, bringing acclaim to Lee in literary circles, in
her hometown of Monroeville, and throughout Alabama. The book went through numerous subsequent printings and became
widely available through its inclusion in the Book of the Month Club and editions released by Reader's Digest Condensed
Books. One year after its publication, To Kill a Mockingbird had been translated into ten languages. In the years
since, it has sold more than 30 million copies and been translated into more than 40 languages. The novel has never
been out of print in hardcover or paperback, and has become part of the standard literature curriculum. A 2008 survey
of secondary books read by students between grades 9–12 in the U.S. indicates the novel is the most widely read book
in these grades. A 1991 survey by the Book of the Month Club and the Library of Congress Center for the Book found
that To Kill a Mockingbird was rated behind only the Bible in books that are "most often cited as making a difference".[note
1] It is considered by some to be the Great American Novel. Many writers compare their perceptions of To Kill a Mockingbird
as adults with when they first read it as children. Mary McDonagh Murphy interviewed celebrities including Oprah
Winfrey, Rosanne Cash, Tom Brokaw, and Harper's sister Alice Lee, who read the novel and compiled their impressions
of it as children and adults into a book titled Scout, Atticus, and Boo. One of the most significant impacts To Kill
a Mockingbird has had is Atticus Finch's model of integrity for the legal profession. As scholar Alice Petry explains,
"Atticus has become something of a folk hero in legal circles and is treated almost as if he were an actual person."
Morris Dees of the Southern Poverty Law Center cites Atticus Finch as the reason he became a lawyer, and Richard
Matsch, the federal judge who presided over the Timothy McVeigh trial, counts Atticus as a major judicial influence.
One law professor at the University of Notre Dame stated that the most influential textbook he taught from was To
Kill a Mockingbird, and an article in the Michigan Law Review claims, "No real-life lawyer has done more for the
self-image or public perception of the legal profession," before questioning whether, "Atticus Finch is a paragon
of honor or an especially slick hired gun". In 1992, an Alabama editorial called for the death of Atticus, saying
that as liberal as Atticus was, he still worked within a system of institutionalized racism and sexism and should
not be revered. The editorial sparked a flurry of responses from attorneys who entered the profession because of
him and esteemed him as a hero. Critics of Atticus maintain he is morally ambiguous and does not use his legal skills
to challenge the racist status quo in Maycomb. However, in 1997, the Alabama State Bar erected a monument to Atticus
in Monroeville, marking his existence as the "first commemorative milestone in the state's judicial history". In
2008, Lee herself received an honorary special membership to the Alabama State Bar for creating Atticus who "has
become the personification of the exemplary lawyer in serving the legal needs of the poor". To Kill a Mockingbird
has been a source of significant controversy since it became the subject of classroom study as early as 1963. The
book's racial slurs, profanity, and frank discussion of rape have led people to challenge its appropriateness in
libraries and classrooms across the United States. The American Library Association reported that To Kill a Mockingbird
was number 21 of the 100 most frequently challenged books of 2000–2009. One of the first incidents of the book being
challenged was in Hanover, Virginia, in 1966: a parent protested that the use of rape as a plot device was immoral.
Johnson cites examples of letters to local newspapers, which ranged from amusement to fury; those letters expressing
the most outrage, however, complained about Mayella Ewell's attraction to Tom Robinson over the depictions of rape.
Upon learning the school administrators were holding hearings to decide the book's appropriateness for the classroom,
Harper Lee sent $10 to The Richmond News Leader suggesting it be used toward the enrollment of "the Hanover County
School Board in any first grade of its choice". The National Education Association in 1968 placed the novel second
on a list of books receiving the most complaints from private organizations—after Little Black Sambo. The novel is
cited as a factor in the success of the civil rights movement in the 1960s, however, in that it "arrived at the right
moment to help the South and the nation grapple with the racial tensions (of) the accelerating civil rights movement".
Its publication is so closely associated with the Civil Rights Movement that many studies of the book and biographies
of Harper Lee include descriptions of important moments in the movement, despite the fact that she had no direct
involvement in any of them. Civil Rights leader Andrew Young comments that part of the book's effectiveness is that
it "inspires hope in the midst of chaos and confusion" and by using racial epithets portrays the reality of the times
in which it was set. Young views the novel as "an act of humanity" in showing the possibility of people rising above
their prejudices. Alabama author Mark Childress compares it to the impact of Uncle Tom's Cabin, a book that is popularly
implicated in starting the U.S. Civil War. Childress states the novel "gives white Southerners a way to understand
the racism that they've been brought up with and to find another way. And most white people in the South were good
people. Most white people in the South were not throwing bombs and causing havoc ... I think the book really helped
them come to understand what was wrong with the system in the way that any number of treatises could never do, because
it was popular art, because it was told from a child's point of view." Lee's childhood friend, author Truman Capote,
wrote on the dust jacket of the first edition, "Someone rare has written this very fine first novel: a writer with
the liveliest sense of life, and the warmest, most authentic sense of humor. A touching book; and so funny, so likeable."
This comment has been construed to suggest that Capote wrote the book or edited it heavily. In 2003, a Tuscaloosa
newspaper quoted Capote's biological father, Archulus Persons, as claiming that Capote had written "almost all" of
the book. In 2006, a letter Capote wrote to a neighbor in Monroeville in 1959 was donated to Monroeville's literary
heritage museum; in it, Capote mentioned that Lee was writing a book that was to be published soon. Extensive notes
between Lee and her editor at Lippincott also refute the rumor of Capote's authorship. Lee's older sister, Alice,
responded to the rumor, saying: "That's the biggest lie ever told." During the years immediately following the novel's
publication, Harper Lee enjoyed the attention its popularity garnered her, granting interviews, visiting schools,
and attending events honoring the book. In 1961, when To Kill a Mockingbird was in its 41st week on the bestseller
list, it was awarded the Pulitzer Prize, stunning Lee. It also won the Brotherhood Award of the National Conference
of Christians and Jews in the same year, and the Paperback of the Year award from Bestsellers magazine in 1962. Starting
in 1964, Lee began to turn down interviews, complaining that the questions were monotonous, and grew concerned that
the attention she received bordered on the kind of publicity celebrities sought. Since then, she has declined to talk with
reporters about the book. She also steadfastly refused to provide an introduction, writing in 1995: "Introductions
inhibit pleasure, they kill the joy of anticipation, they frustrate curiosity. The only good thing about Introductions
is that in some cases they delay the dose to come. Mockingbird still says what it has to say; it has managed to survive
the years without preamble." In 2001, Lee was inducted into the Alabama Academy of Honor. In the same year, Chicago
mayor Richard M. Daley initiated a reading program throughout the city's libraries, and chose his favorite book,
To Kill a Mockingbird, as the first title of the One City, One Book program. Lee declared that "there is no greater
honor the novel could receive". By 2004, the novel had been chosen by 25 communities for variations of the citywide
reading program, more than any other novel. David Kipen of the National Endowment for the Arts, who supervised The
Big Read, states "people just seem to connect with it. It dredges up things in their own lives, their interactions
across racial lines, legal encounters, and childhood. It's just this skeleton key to so many different parts of people's
lives, and they cherish it." In 2006, Lee was awarded an honorary doctorate from the University of Notre Dame. During
the ceremony, the students and audience gave Lee a standing ovation, and the entire graduating class held up copies
of To Kill a Mockingbird to honor her.[note 5] Lee was awarded the Presidential Medal of Freedom on November 5, 2007
by President George W. Bush. In his remarks, Bush stated, "One reason To Kill a Mockingbird succeeded is the wise
and kind heart of the author, which comes through on every page ... To Kill a Mockingbird has influenced the character
of our country for the better. It's been a gift to the entire world. As a model of good writing and humane sensibility,
this book will be read and studied forever." The book was made into the well-received 1962 film of the same title,
starring Gregory Peck as Atticus Finch. The film's producer, Alan J. Pakula, remembered Universal Pictures executives
questioning him about a potential script: "They said, 'What story do you plan to tell for the film?' I said, 'Have
you read the book?' They said, 'Yes.' I said, 'That's the story.'" The movie was a hit at the box office, quickly
grossing more than $20 million from a $2-million budget. It won three Oscars: Best Actor for Gregory Peck, Best Art
Direction-Set Decoration, Black-and-White, and Best Writing, Screenplay Based on Material from Another Medium for
Horton Foote. It was nominated for five more Oscars including Best Actress in a Supporting Role for Mary Badham,
the actress who played Scout. Harper Lee was pleased with the movie, saying: "In that film the man and the part met...
I've had many, many offers to turn it into musicals, into TV or stage plays, but I've always refused. That film was
a work of art." Peck met Lee's father, the model for Atticus, before the filming. Lee's father died before the film's
release, and Lee was so impressed with Peck's performance that she gave him her father's pocketwatch, which he had
with him the evening he was awarded the Oscar for best actor. Years later, he was reluctant to tell Lee that the
watch was stolen out of his luggage at London Heathrow Airport. When Peck eventually did tell Lee, he said she responded,
"'Well, it's only a watch.' Harper—she feels deeply, but she's not a sentimental person about things." Lee and Peck
shared a friendship long after the movie was made. Peck's grandson was named "Harper" in her honor. In May 2005,
Lee made an uncharacteristic appearance at the Los Angeles Public Library at the request of Peck's widow Veronique,
who said of Lee: "She's like a national treasure. She's someone who has made a difference...with this book. The book
is still as strong as it ever was, and so is the film. All the kids in the United States read this book and see the
film in the seventh and eighth grades and write papers and essays. My husband used to get thousands and thousands
of letters from teachers who would send them to him." The book has also been adapted as a play by Christopher Sergel.
It debuted in 1990 in Monroeville, a town that labels itself "The Literary Capital of Alabama". The play runs every
May on the county courthouse grounds and townspeople make up the cast. White male audience members are chosen at
the intermission to make up the jury. During the courtroom scene the production moves into the Monroe County Courthouse
and the audience is racially segregated. Author Albert Murray said of the relationship of the town to the novel (and
the annual performance): "It becomes part of the town ritual, like the religious underpinning of Mardi Gras. With
the whole town crowded around the actual courthouse, it's part of a central, civic education—what Monroeville aspires
to be." Sergel's play toured in the UK starting at West Yorkshire Playhouse in Leeds in 2006, and again in 2011 starting
at the York Theatre Royal, both productions featuring Duncan Preston as Atticus Finch. The play also opened the 2013
season at Regent's Park Open Air Theatre in London where it played to full houses and starred Robert Sean Leonard
as Atticus Finch, his first London appearance in 22 years. The production is returning to the venue to close the
2014 season, prior to a UK Tour. An earlier draft of To Kill a Mockingbird, titled Go Set a Watchman, was controversially
released on July 14, 2015. This draft, which was completed in 1957, is set 20 years after the time period depicted
in To Kill a Mockingbird but is not a continuation of the narrative. This earlier version of the story follows an
adult Scout Finch who travels from New York to visit her father, Atticus Finch, in Maycomb, Alabama, where she is
confronted by the intolerance in her community. The Watchman manuscript was believed to have been lost until Lee's
lawyer Tonja Carter discovered it, although this claim has been widely disputed. Watchman contains early versions
of many of the characters from To Kill a Mockingbird. According to Lee's agent Andrew Nurnberg, Mockingbird was originally
intended to be the first book of a trilogy: "They discussed publishing Mockingbird first, Watchman last, and a shorter
connecting novel between the two." This assertion has been discredited, however, by the rare books expert James S.
Jaffe, who reviewed the pages at the request of Lee's attorney and found them to be only another draft of "To Kill
a Mockingbird". The statement was also contrary to Jonathan Mahler's description of how "Watchman" was seen as just
the first draft of "Mockingbird". The fact that many passages overlap between the two books, in some cases word
for word, also refutes this assertion. The novel is renowned for its warmth and humor, despite dealing with the serious
issues of rape and racial inequality. The narrator's father, Atticus Finch, has served as a moral hero for many readers
and as a model of integrity for lawyers. One critic explains the novel's impact by writing, "In the twentieth century,
To Kill a Mockingbird is probably the most widely read book dealing with race in America, and its protagonist, Atticus
Finch, the most enduring fictional image of racial heroism." The strongest element of style noted by critics and
reviewers is Lee's talent for narration, which in an early review in Time was called "tactile brilliance". Writing
a decade later, another scholar noted, "Harper Lee has a remarkable gift of story-telling. Her art is visual, and
with cinematographic fluidity and subtlety we see a scene melting into another scene without jolts of transition."
Lee combines the narrator's voice of a child observing her surroundings with a grown woman's reflecting on her childhood,
using the ambiguity of this voice combined with the narrative technique of flashback to play intricately with perspectives.
This narrative method allows Lee to tell a "delightfully deceptive" story that mixes the simplicity of childhood
observation with adult situations complicated by hidden motivations and unquestioned tradition. However, at times
the blending causes reviewers to question Scout's preternatural vocabulary and depth of understanding. Both Harding
LeMay and the novelist and literary critic Granville Hicks expressed doubt that children as sheltered as Scout and
Jem could understand the complexities and horrors involved in the trial for Tom Robinson's life. Harper Lee has remained
famously detached from interpreting the novel since the mid-1960s. However, she gave some insight into her themes
when, in a rare letter to the editor, she wrote in response to the passionate reaction her book caused: "Surely it
is plain to the simplest intelligence that To Kill a Mockingbird spells out in words of seldom more than two syllables
a code of honor and conduct, Christian in its ethic, that is the heritage of all Southerners." When the book was
released, reviewers noted that it was divided into two parts, and opinion was mixed about Lee's ability to connect
them. The first part of the novel concerns the children's fascination with Boo Radley and their feelings of safety
and comfort in the neighborhood. Reviewers were generally charmed by Scout and Jem's observations of their quirky
neighbors. One writer was so impressed by Lee's detailed explanations of the people of Maycomb that he categorized
the book as Southern romantic regionalism. This sentimentalism can be seen in Lee's representation of the Southern
caste system to explain almost every character's behavior in the novel. Scout's Aunt Alexandra attributes Maycomb's
inhabitants' faults and advantages to genealogy (families that have gambling streaks and drinking streaks), and the
narrator sets the action and characters amid a finely detailed background of the Finch family history and the history
of Maycomb. This regionalist theme is further reflected in Mayella Ewell's apparent powerlessness to admit her advances
toward Tom Robinson, and Scout's definition of "fine folks" being people with good sense who do the best they can
with what they have. The South itself, with its traditions and taboos, seems to drive the plot more than the characters.
Tom Robinson is the chief example among several innocents destroyed carelessly or deliberately throughout the novel.
However, scholar Christopher Metress connects the mockingbird to Boo Radley: "Instead of wanting to exploit Boo for
her own fun (as she does in the beginning of the novel by putting on gothic plays about his history), Scout comes
to see him as a 'mockingbird'—that is, as someone with an inner goodness that must be cherished." The last pages
of the book illustrate this as Scout relates the moral of a story Atticus has been reading to her, and in allusions
to both Boo Radley and Tom Robinson states about a character who was misunderstood, "when they finally saw him, why
he hadn't done any of those things ... Atticus, he was real nice," to which he responds, "Most people are, Scout,
when you finally see them." The novel exposes the loss of innocence so frequently that reviewer R. A. Dave claims
that because every character has to face, or even suffer, defeat, the book takes on elements of a classical tragedy.
In exploring how each character deals with his or her own personal defeat, Lee builds a framework to judge whether
the characters are heroes or fools. She guides the reader in such judgments, alternating between unabashed adoration
and biting irony. Scout's experience with the Missionary Society is an ironic juxtaposition of women who mock her,
gossip, and "reflect a smug, colonialist attitude toward other races" while giving the "appearance of gentility,
piety, and morality". Conversely, when Atticus loses Tom's case, he is last to leave the courtroom, except for his
children and the black spectators in the colored balcony, who rise silently as he walks underneath them, to honor
his efforts. Initial reactions to the novel were varied. The New Yorker declared it "skilled, unpretentious, and
totally ingenious", and The Atlantic Monthly's reviewer rated it as "pleasant, undemanding reading", but found the
narrative voice—"a six-year-old girl with the prose style of a well-educated adult"—to be implausible. Time magazine's
1960 review of the book states that it "teaches the reader an astonishing number of useful truths about little girls
and about Southern life" and calls Scout Finch "the most appealing child since Carson McCullers' Frankie got left
behind at the wedding". The Chicago Sunday Tribune noted the even-handed approach to the narration of the novel's
events, writing: "This is in no way a sociological novel. It underlines no cause ... To Kill a Mockingbird is a novel
of strong contemporary national significance." Not all reviewers were enthusiastic. Some lamented the use of poor
white Southerners and one-dimensional black victims, and Granville Hicks labeled the book "melodramatic and contrived".
When the book was first released, Southern writer Flannery O'Connor commented, "I think for a child's book it does
all right. It's interesting that all the folks that are buying it don't know they're reading a child's book. Somebody
ought to say what it is." Carson McCullers apparently agreed with the Time magazine review, writing to a cousin:
"Well, honey, one thing we know is that she's been poaching on my literary preserves." The 50th anniversary of the
novel's release was met with celebrations and reflections on its impact. Eric Zorn of the Chicago Tribune praises
Lee's "rich use of language" but writes that the central lesson is that "courage isn't always flashy, isn't always
enough, but is always in style". Jane Sullivan in the Sydney Morning Herald agrees, stating that the book "still
rouses fresh and horrified indignation" as it examines morality, a topic that has recently become unfashionable.
Chimamanda Ngozi Adichie writing in The Guardian states that Lee, rare among American novelists, writes with "a fiercely
progressive ink, in which there is nothing inevitable about racism and its very foundation is open to question",
comparing her to William Faulkner, who wrote about racism as an inevitability. Literary critic Rosemary Goring in
Scotland's The Herald notes the connections between Lee and Jane Austen, stating that the book's central theme, that "one’s
moral convictions are worth fighting for, even at the risk of being reviled", is eloquently discussed. Native Alabamian
Allen Barra sharply criticized Lee and the novel in The Wall Street Journal, calling Atticus a "repository of cracker-barrel
epigrams" and the novel a "sugar-coated myth" of Alabama history. Barra writes, "It's time to stop pretending
that To Kill a Mockingbird is some kind of timeless classic that ranks with the great works of American literature.
Its bloodless liberal humanism is sadly dated". Thomas Mallon in The New Yorker criticizes Atticus' stiff and self-righteous
demeanor, and calls Scout "a kind of highly constructed doll" whose speech and actions are improbable. Although acknowledging
that the novel works, Mallon blasts Lee's "wildly unstable" narrative voice for developing a story about a content
neighborhood until it begins to impart morals in the courtroom drama, following with his observation that "the book
has begun to cherish its own goodness" by the time the case is over.[note 2] Defending the book, Akin Ajayi writes
that justice "is often complicated, but must always be founded upon the notion of equality and fairness for all."
Ajayi states that the book forces readers to question issues about race, class, and society, but that it was not
written to resolve them. Furthermore, despite the novel's thematic focus on racial injustice, its black characters
are not fully examined. In its use of racial epithets, stereotyped depictions of superstitious blacks, and Calpurnia,
who to some critics is an updated version of the "contented slave" motif and to others simply unexplored, the book
is viewed as marginalizing black characters. One writer asserts that the use of Scout's narration serves as a convenient
mechanism for readers to be innocent and detached from the racial conflict. Scout's voice "functions as the not-me
which allows the rest of us—black and white, male and female—to find our relative position in society". A teaching
guide for the novel published by The English Journal cautions, "what seems wonderful or powerful to one group of
students may seem degrading to another". A Canadian language arts consultant found that the novel resonated well
with white students, but that black students found it "demoralizing". Another criticism, articulated by Michael Lind,
is that the novel indulges in classist stereotyping and demonization of poor rural "white trash". Diane McWhorter,
Pulitzer Prize-winning historian of the Birmingham civil rights campaign, asserts that To Kill a Mockingbird condemns
racism instead of racists, and states that every child in the South has moments of racial cognitive dissonance when
they are faced with the harsh reality of inequality. This feeling causes them to question the beliefs with which
they have been raised, which for many children is what the novel does. McWhorter writes of Lee, "for a white person
from the South to write a book like this in the late 1950s is really unusual—by its very existence an act of protest."[note
4] Author James McBride calls Lee brilliant but stops short of calling her brave: "I think by calling Harper Lee
brave you kind of absolve yourself of your own racism ... She certainly set the standards in terms of how these issues
need to be discussed, but in many ways I feel ... the moral bar's been lowered. And that's really distressing. We
need a thousand Atticus Finches." McBride, however, defends the book's sentimentality, and the way Lee approaches
the story with "honesty and integrity". According to a National Geographic article, the novel is so revered in Monroeville
that people quote lines from it like Scripture; yet Harper Lee herself refused to attend any performances, because
"she abhors anything that trades on the book's fame". To underscore this sentiment, Lee demanded that a book of recipes
named Calpurnia's Cookbook not be published and sold out of the Monroe County Heritage Museum. David Lister in The
Independent states that Lee's refusal to speak to reporters made them desire to interview her all the more, and her
silence "makes Bob Dylan look like a media tart". Despite her discouragement, a rising number of tourists made
Monroeville a destination, hoping to see Lee's inspiration for the book, or Lee herself. Local residents call them
"Mockingbird groupies", and although Lee was not reclusive, she refused publicity and interviews with an emphatic
"Hell, no!"
Solar energy is radiant light and heat from the Sun harnessed using a range of ever-evolving technologies such as solar heating,
photovoltaics, solar thermal energy, solar architecture and artificial photosynthesis. The Earth receives 174,000
terawatts (TW) of incoming solar radiation (insolation) at the upper atmosphere. Approximately 30% is reflected back
to space while the rest is absorbed by clouds, oceans and land masses. The spectrum of solar light at the Earth's
surface is mostly spread across the visible and near-infrared ranges with a small part in the near-ultraviolet. Most
people around the world live in areas with insolation levels of 150 to 300 watts per square meter or 3.5 to 7.0 kWh/m2
per day. Solar radiation is absorbed by the Earth's land surface, oceans – which cover about 71% of the globe – and
atmosphere. Warm air containing evaporated water from the oceans rises, causing atmospheric circulation or convection.
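The two insolation units quoted above are related by simple arithmetic: a watt is a joule per second, so a constant average power in W/m2 sustained over a 24-hour day converts to kWh/m2 per day by multiplying by 24 and dividing by 1,000. A quick sketch:

```python
# Convert an average insolation level (W/m^2) into daily energy (kWh/m^2/day).
# 1 W sustained for 24 h is 24 Wh = 0.024 kWh, hence the factor 24/1000.
def wm2_to_kwh_per_m2_day(watts_per_m2: float) -> float:
    return watts_per_m2 * 24 / 1000

for w in (150, 300):
    print(f"{w} W/m^2 -> {wm2_to_kwh_per_m2_day(w):.1f} kWh/m^2 per day")
```

A constant 150 W/m2 works out to 3.6 kWh/m2 per day and 300 W/m2 to 7.2, consistent with the 3.5 to 7.0 kWh/m2 range quoted in the text (real irradiance is not constant, so the quoted figures sit slightly below the idealized conversion).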
When the air reaches a high altitude, where the temperature is low, water vapor condenses into clouds, which rain
onto the Earth's surface, completing the water cycle. The latent heat of water condensation amplifies convection,
producing atmospheric phenomena such as wind, cyclones and anti-cyclones. Sunlight absorbed by the oceans and land
masses keeps the surface at an average temperature of 14 °C. By photosynthesis green plants convert solar energy
into chemically stored energy, which produces food, wood and the biomass from which fossil fuels are derived. The
total solar energy absorbed by Earth's atmosphere, oceans and land masses is approximately 3,850,000 exajoules (EJ)
per year. Measured against 2002 figures, Earth receives more solar energy in one hour than the world consumed in that entire year. Photosynthesis captures approximately
3,000 EJ per year in biomass. The amount of solar energy reaching the surface of the planet is so vast that in one
year it is about twice as much as will ever be obtained from all of the Earth's non-renewable resources of coal,
oil, natural gas, and mined uranium combined. Solar technologies are broadly characterized as either passive or active
depending on the way they capture, convert and distribute sunlight; they enable solar energy to be harnessed at different
levels around the world, mostly depending on distance from the equator. Although solar energy refers primarily to
the use of solar radiation for practical ends, all renewable energies, other than geothermal and tidal, derive their
energy from the Sun in a direct or indirect way. Active solar techniques use photovoltaics, concentrated solar power,
solar thermal collectors, pumps, and fans to convert sunlight into useful outputs. Passive solar techniques include
selecting materials with favorable thermal properties, designing spaces that naturally circulate air, and orienting
a building toward the Sun. Active solar technologies increase the supply of energy and are considered
supply side technologies, while passive solar technologies reduce the need for alternate resources and are generally
considered demand side technologies. In 1897, Frank Shuman, a U.S. inventor, engineer and solar energy pioneer built
a small demonstration solar engine that worked by reflecting solar energy onto square boxes filled with ether, which
has a lower boiling point than water, and were fitted internally with black pipes which in turn powered a steam engine.
In 1908 Shuman formed the Sun Power Company with the intent of building larger solar power plants. He, along with
his technical advisor A.S.E. Ackermann and British physicist Sir Charles Vernon Boys, developed an improved system
using mirrors to reflect solar energy upon collector boxes, increasing heating capacity to the extent that water
could now be used instead of ether. Shuman then constructed a full-scale steam engine powered by low-pressure water,
enabling him to patent the entire solar engine system by 1912. Shuman built the world’s first solar thermal power
station in Maadi, Egypt, between 1912 and 1913. Shuman’s plant used parabolic troughs to power a 45–52 kilowatts
(60–70 hp) engine that pumped more than 22,000 litres (4,800 imp gal; 5,800 US gal) of water per minute from the
Nile River to adjacent cotton fields. Although the outbreak of World War I and the discovery of cheap oil in the
1930s discouraged the advancement of solar energy, Shuman’s vision and basic design were resurrected in the 1970s
with a new wave of interest in solar thermal energy. In 1916 Shuman was quoted in the media advocating solar energy's
utilization. Solar hot water systems use sunlight to heat water. In low geographical latitudes (below 40
degrees) from 60 to 70% of the domestic hot water use with temperatures up to 60 °C can be provided by solar heating
systems. The most common types of solar water heaters are evacuated tube collectors (44%) and glazed flat plate collectors
(34%) generally used for domestic hot water; and unglazed plastic collectors (21%) used mainly to heat swimming pools.
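The collector area such a system needs can be estimated from the heat equation Q = m·c·ΔT. In the sketch below, the tank size, temperatures, insolation level and collector efficiency are all illustrative assumptions, not figures from the text:

```python
# Energy to heat water: Q = m * c * dT, with c of water ~4186 J/(kg*K).
# All sizing figures here are illustrative assumptions.
C_WATER = 4186  # specific heat of water, J/(kg*K)

def heat_energy_kwh(litres: float, t_in: float, t_out: float) -> float:
    joules = litres * C_WATER * (t_out - t_in)  # 1 L of water ~ 1 kg
    return joules / 3.6e6                       # J -> kWh

q = heat_energy_kwh(200, 15, 60)  # a 200 L tank heated from 15 to 60 degC
area = q / (5.0 * 0.5)            # 5 kWh/m^2/day insolation, 50%-efficient collector
print(f"{q:.1f} kWh/day, ~{area:.1f} m^2 of collector")
```

Under these assumptions the daily load is roughly 10.5 kWh, met by about 4 m2 of collector, which is in line with the size of typical domestic installations.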
As of 2007, the total installed capacity of solar hot water systems is approximately 154 thermal gigawatts (GWth).
China is the world leader in their deployment with 70 GWth installed as of 2006 and a long-term goal of 210 GWth
by 2020. Israel and Cyprus are the per capita leaders in the use of solar hot water systems with over 90% of homes
using them. In the United States, Canada and Australia heating swimming pools is the dominant application of solar
hot water with an installed capacity of 18 GWth as of 2005. In the United States, heating, ventilation and air conditioning
(HVAC) systems account for 30% (4.65 EJ/yr) of the energy used in commercial buildings and nearly 50% (10.1 EJ/yr)
of the energy used in residential buildings. Solar heating, cooling and ventilation technologies can be used to offset
a portion of this energy. Thermal mass is any material that can be used to store heat—heat from the Sun in the case
of solar energy. Common thermal mass materials include stone, cement and water. Historically they have been used
in arid climates or warm temperate regions to keep buildings cool by absorbing solar energy during the day and radiating
stored heat to the cooler atmosphere at night. However, they can be used in cold temperate areas to maintain warmth
as well. The size and placement of thermal mass depend on several factors such as climate, daylighting and shading
conditions. When properly incorporated, thermal mass maintains space temperatures in a comfortable range and reduces
the need for auxiliary heating and cooling equipment. A solar chimney (or thermal chimney, in this context) is a
passive solar ventilation system composed of a vertical shaft connecting the interior and exterior of a building.
As the chimney warms, the air inside is heated causing an updraft that pulls air through the building. Performance
can be improved by using glazing and thermal mass materials in a way that mimics greenhouses. Deciduous trees and
plants have been promoted as a means of controlling solar heating and cooling. When planted on the southern side
of a building in the northern hemisphere or the northern side in the southern hemisphere, their leaves provide shade
during the summer, while the bare limbs allow light to pass during the winter. Since bare, leafless trees shade 1/3
to 1/2 of incident solar radiation, there is a balance between the benefits of summer shading and the corresponding
loss of winter heating. In climates with significant heating loads, deciduous trees should not be planted on the
Equator facing side of a building because they will interfere with winter solar availability. They can, however,
be used on the east and west sides to provide a degree of summer shading without appreciably affecting winter solar
gain. Solar cookers use sunlight for cooking, drying and pasteurization. They can be grouped into three broad categories:
box cookers, panel cookers and reflector cookers. The simplest solar cooker is the box cooker first built by Horace
de Saussure in 1767. A basic box cooker consists of an insulated container with a transparent lid. It can be used
effectively with partially overcast skies and will typically reach temperatures of 90–150 °C (194–302 °F). Panel
cookers use a reflective panel to direct sunlight onto an insulated container and reach temperatures comparable to
box cookers. Reflector cookers use various concentrating geometries (dish, trough, Fresnel mirrors) to focus light
on a cooking container. These cookers reach temperatures of 315 °C (599 °F) and above but require direct light to
function properly and must be repositioned to track the Sun. Solar concentrating technologies such as parabolic dish,
trough and Scheffler reflectors can provide process heat for commercial and industrial applications. The first commercial
system was the Solar Total Energy Project (STEP) in Shenandoah, Georgia, USA, where a field of 114 parabolic dishes
provided 50% of the process heating, air conditioning and electrical requirements for a clothing factory. This grid-connected
cogeneration system provided 400 kW of electricity plus thermal energy in the form of 401 kW steam and 468 kW chilled
water, and had a one-hour peak load thermal storage. Evaporation ponds are shallow pools that concentrate dissolved
solids through evaporation. The use of evaporation ponds to obtain salt from sea water is one of the oldest applications
of solar energy. Modern uses include concentrating brine solutions used in leach mining and removing dissolved solids
from waste streams. Clothes lines, clotheshorses, and clothes racks dry clothes through evaporation by wind and sunlight
without consuming electricity or gas. In some states of the United States legislation protects the "right to dry"
clothes. Unglazed transpired collectors (UTC) are perforated sun-facing walls used for preheating ventilation air.
UTCs can raise the incoming air temperature up to 22 °C (40 °F) and deliver outlet temperatures of 45–60 °C (113–140
°F). The short payback period of transpired collectors (3 to 12 years) makes them a more cost-effective alternative
than glazed collection systems. As of 2003, over 80 systems with a combined collector area of 35,000 square metres
(380,000 sq ft) had been installed worldwide, including an 860 m2 (9,300 sq ft) collector in Costa Rica used for
drying coffee beans and a 1,300 m2 (14,000 sq ft) collector in Coimbatore, India, used for drying marigolds. Solar
distillation can be used to make saline or brackish water potable. The first recorded instance of this was by 16th-century
Arab alchemists. A large-scale solar distillation project was first constructed in 1872 in the Chilean mining town
of Las Salinas. The plant, which had solar collection area of 4,700 m2 (51,000 sq ft), could produce up to 22,700
L (5,000 imp gal; 6,000 US gal) per day and operate for 40 years. Individual still designs include single-slope,
double-slope (or greenhouse type), vertical, conical, inverted absorber, multi-wick, and multiple effect. These stills
can operate in passive, active, or hybrid modes. Double-slope stills are the most economical for decentralized domestic
purposes, while active multiple effect units are more suitable for large-scale applications. Solar water disinfection
(SODIS) involves exposing water-filled plastic polyethylene terephthalate (PET) bottles to sunlight for several hours.
Exposure times vary depending on weather and climate from a minimum of six hours to two days during fully overcast
conditions. It is recommended by the World Health Organization as a viable method for household water treatment and
safe storage. Over two million people in developing countries use this method for their daily drinking water. Solar
energy may be used in a water stabilisation pond to treat waste water without chemicals or electricity. A further
environmental advantage is that algae grow in such ponds and consume carbon dioxide in photosynthesis, although algae
may produce toxic chemicals that make the water unusable. Solar power is anticipated to become the world's largest
source of electricity by 2050, with solar photovoltaics and concentrated solar power contributing 16 and 11 percent
to the global overall consumption, respectively. Commercial CSP plants were first developed in the 1980s. The SEGS
CSP installation in the Mojave Desert of California, built in stages since 1985 to an eventual 354 MW, is the largest
solar power plant in the world. Other large CSP plants include the 150 MW Solnova Solar Power Station and the 100 MW Andasol solar
power station, both in Spain. The 250 MW Agua Caliente Solar Project, in the United States, and the 221 MW Charanka
Solar Park in India, are the world’s largest photovoltaic plants. Solar projects exceeding 1 GW are being developed,
but most of the deployed photovoltaics are in small rooftop arrays of less than 5 kW, which are grid connected using
net metering and/or a feed-in tariff. In 2013 solar generated less than 1% of the world's total grid electricity.
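The annual output of a small rooftop array of the kind described above can be roughly estimated as the product of area, insolation, module efficiency and system losses. The array size, insolation value, efficiency and performance ratio below are illustrative assumptions, not data from the text:

```python
# Back-of-envelope annual PV yield (all parameter values are illustrative):
# energy = area * daily insolation * days * module efficiency * performance ratio.
def annual_pv_kwh(area_m2: float, kwh_m2_day: float,
                  efficiency: float = 0.20, performance_ratio: float = 0.8) -> float:
    return area_m2 * kwh_m2_day * 365 * efficiency * performance_ratio

# A nominal ~5 kW rooftop array is on the order of 25 m^2 of 20%-efficient modules.
print(f"{annual_pv_kwh(25, 5.0):.0f} kWh/year")
```

Under these assumptions the array delivers on the order of 7,300 kWh per year; the performance ratio lumps together inverter, wiring, temperature and soiling losses.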
In the last two decades, photovoltaics (PV), also known as solar PV, has evolved from a pure niche market of small
scale applications towards becoming a mainstream electricity source. A solar cell is a device that converts light
directly into electricity using the photovoltaic effect. The first solar cell was constructed by Charles Fritts
in the 1880s. In 1931 a German engineer, Dr Bruno Lange, developed a photo cell using silver selenide in place of
copper oxide. Although the prototype selenium cells converted less than 1% of incident light into electricity, both
Ernst Werner von Siemens and James Clerk Maxwell recognized the importance of this discovery. Following the work
of Russell Ohl in the 1940s, researchers Gerald Pearson, Calvin Fuller and Daryl Chapin created the crystalline silicon
solar cell in 1954. These early solar cells cost 286 USD/watt and reached efficiencies of 4.5–6%. By 2012 available
efficiencies exceed 20% and the maximum efficiency of research photovoltaics is over 40%. Concentrating Solar Power
(CSP) systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. The
concentrated heat is then used as a heat source for a conventional power plant. A wide range of concentrating technologies
exists; the most developed are the parabolic trough, the concentrating linear Fresnel reflector, the Stirling dish
and the solar power tower. Various techniques are used to track the Sun and focus light. In all of these systems
a working fluid is heated by the concentrated sunlight, and is then used for power generation or energy storage.
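The energy flow in such a plant can be sketched as a chain of a few factors: collected heat is aperture area times direct normal irradiance times optical efficiency, and electric output is that heat times the heat-engine efficiency. The mirror area, irradiance and efficiency values below are illustrative assumptions, not data for any plant named in the text:

```python
# CSP back-of-envelope (all parameter values are illustrative assumptions):
# heat collected = aperture * DNI * optical efficiency; electricity = heat * engine efficiency.
def csp_electric_mw(aperture_m2: float, dni_w_m2: float = 850,
                    optical_eff: float = 0.7, engine_eff: float = 0.35) -> float:
    watts = aperture_m2 * dni_w_m2 * optical_eff * engine_eff
    return watts / 1e6  # W -> MW

print(f"{csp_electric_mw(500_000):.1f} MW")  # a 50-hectare mirror field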
The common features of passive solar architecture are orientation relative to the Sun, compact proportion (a low
surface area to volume ratio), selective shading (overhangs) and thermal mass. When these features are tailored to
the local climate and environment they can produce well-lit spaces that stay in a comfortable temperature range.
Socrates' Megaron House is a classic example of passive solar design. The most recent approaches to solar design
use computer modeling tying together solar lighting, heating and ventilation systems in an integrated solar design
package. Active solar equipment such as pumps, fans and switchable windows can complement passive design and improve
system performance. Urban heat islands (UHI) are metropolitan areas with higher temperatures than that of the surrounding
environment. The higher temperatures are a result of increased absorption of sunlight by urban materials such
as asphalt and concrete, which have lower albedos and higher heat capacities than those in the natural environment.
A straightforward method of counteracting the UHI effect is to paint buildings and roads white and plant trees. Using
these methods, a hypothetical "cool communities" program in Los Angeles has projected that urban temperatures could
be reduced by approximately 3 °C at an estimated cost of US$1 billion, giving estimated total annual benefits of
US$530 million from reduced air-conditioning costs and healthcare savings. Agriculture and horticulture seek to optimize
the capture of solar energy in order to optimize the productivity of plants. Techniques such as timed planting cycles,
tailored row orientation, staggered heights between rows and the mixing of plant varieties can improve crop yields.
While sunlight is generally considered a plentiful resource, the exceptions highlight the importance of solar energy
to agriculture. During the short growing seasons of the Little Ice Age, French and English farmers employed fruit
walls to maximize the collection of solar energy. These walls acted as thermal masses and accelerated ripening by
keeping plants warm. Early fruit walls were built perpendicular to the ground and facing south, but over time, sloping
walls were developed to make better use of sunlight. In 1699, Nicolas Fatio de Duillier even suggested using a tracking
mechanism which could pivot to follow the Sun. Applications of solar energy in agriculture aside from growing crops
include pumping water, drying crops, brooding chicks and drying chicken manure. More recently the technology has
been embraced by vintners, who use the energy generated by solar panels to power grape presses. Greenhouses convert
solar light to heat, enabling year-round production and the growth (in enclosed environments) of specialty crops
and other plants not naturally suited to the local climate. Primitive greenhouses were first used during Roman times
to produce cucumbers year-round for the Roman emperor Tiberius. The first modern greenhouses were built in Europe
in the 16th century to keep exotic plants brought back from explorations abroad. Greenhouses remain an important
part of horticulture today, and plastic transparent materials have also been used to similar effect in polytunnels
and row covers. Development of a solar-powered car has been an engineering goal since the 1980s. The World Solar
Challenge is a biennial solar-powered car race, where teams from universities and enterprises compete over 3,021
kilometres (1,877 mi) across central Australia from Darwin to Adelaide. In 1987, when it was founded, the winner's
average speed was 67 kilometres per hour (42 mph) and by 2007 the winner's average speed had improved to 90.87 kilometres
per hour (56.46 mph). The North American Solar Challenge and the planned South African Solar Challenge are comparable
competitions that reflect an international interest in the engineering and development of solar powered vehicles.
In 1975, the first practical solar boat was constructed in England. By 1995, passenger boats incorporating PV panels
began appearing and are now used extensively. In 1996, Kenichi Horie made the first solar powered crossing of the
Pacific Ocean, and the sun21 catamaran made the first solar powered crossing of the Atlantic Ocean in the winter
of 2006–2007. There were plans to circumnavigate the globe in 2010. In 1974, the unmanned AstroFlight Sunrise plane
made the first solar flight. On 29 April 1979, the Solar Riser made the first flight in a solar-powered, fully controlled,
man carrying flying machine, reaching an altitude of 40 feet (12 m). In 1980, the Gossamer Penguin made the first
piloted flights powered solely by photovoltaics. This was quickly followed by the Solar Challenger which crossed
the English Channel in July 1981. In 1990 Eric Scott Raymond in 21 hops flew from California to North Carolina using
solar power. Developments then turned back to unmanned aerial vehicles (UAV) with the Pathfinder (1997) and subsequent
designs, culminating in the Helios which set the altitude record for a non-rocket-propelled aircraft at 29,524 metres
(96,864 ft) in 2001. The Zephyr, developed by BAE Systems, is the latest in a line of record-breaking solar aircraft,
making a 54-hour flight in 2007, and month-long flights were envisioned by 2010. As of 2015, Solar Impulse, an electric
aircraft, is currently circumnavigating the globe. It is a single-seat plane powered by solar cells and capable of
taking off under its own power. The design allows the aircraft to remain airborne for 36 hours. Solar chemical
processes use solar energy to drive chemical reactions. These processes offset energy that would otherwise come from
a fossil fuel source and can also convert solar energy into storable and transportable fuels. Solar induced chemical
reactions can be divided into thermochemical and photochemical processes. A variety of fuels can be produced by artificial photosynthesis.
The multielectron catalytic chemistry involved in making carbon-based fuels (such as methanol) from reduction of
carbon dioxide is challenging; a feasible alternative is hydrogen production from protons, though use of water as
the source of electrons (as plants do) requires mastering the multielectron oxidation of two water molecules to molecular
oxygen. Some have envisaged working solar fuel plants in coastal metropolitan areas by 2050 – the splitting of sea
water providing hydrogen to be run through adjacent fuel-cell electric power plants and the pure water by-product
going directly into the municipal water system. Another vision involves all human structures covering the earth's
surface (i.e., roads, vehicles and buildings) doing photosynthesis more efficiently than plants. Hydrogen production
technologies have been a significant area of solar chemical research since the 1970s. Aside from electrolysis driven by
photovoltaic or photochemical cells, several thermochemical processes have also been explored. One such route uses
concentrators to split water into oxygen and hydrogen at high temperatures (2,300–2,600 °C or 4,200–4,700 °F). Another
approach uses the heat from solar concentrators to drive the steam reformation of natural gas thereby increasing
the overall hydrogen yield compared to conventional reforming methods. Thermochemical cycles characterized by the
decomposition and regeneration of reactants present another avenue for hydrogen production. The Solzinc process under
development at the Weizmann Institute uses a 1 MW solar furnace to decompose zinc oxide (ZnO) at temperatures above
1,200 °C (2,200 °F). This initial reaction produces pure zinc, which can subsequently be reacted with water to produce
hydrogen. Thermal mass systems can store solar energy in the form of heat at domestically useful temperatures for
daily or interseasonal durations. Thermal storage systems generally use readily available materials with high specific
heat capacities such as water, earth and stone. Well-designed systems can lower peak demand, shift time-of-use to
off-peak hours and reduce overall heating and cooling requirements. Phase change materials such as paraffin wax and
Glauber's salt are other thermal storage media. These materials are inexpensive, readily available, and can deliver
domestically useful temperatures (approximately 64 °C or 147 °F). The "Dover House" (in Dover, Massachusetts) was
the first to use a Glauber's salt heating system, in 1948. Solar energy can also be stored at high temperatures using
molten salts. Salts are an effective storage medium because they are low-cost, have a high specific heat capacity
and can deliver heat at temperatures compatible with conventional power systems. The Solar Two used this method of
energy storage, allowing it to store 1.44 terajoules (400,000 kWh) in its 68-cubic-metre storage tank with an annual
storage efficiency of about 99%. Off-grid PV systems have traditionally used rechargeable batteries to store excess
electricity. With grid-tied systems, excess electricity can be sent to the transmission grid, while standard grid
electricity can be used to meet shortfalls. Net metering programs give household systems a credit for any electricity
they deliver to the grid. This is handled by 'rolling back' the meter whenever the home produces more electricity
than it consumes. If the net electricity use is below zero, the utility then rolls over the kilowatt hour credit
to the next month. Other approaches involve the use of two meters, to measure electricity consumed vs. electricity
produced. This is less common due to the increased installation cost of the second meter. Most standard meters accurately
measure in both directions, making a second meter unnecessary. Pumped-storage hydroelectricity stores energy in the
form of water pumped, when surplus energy is available, from a lower-elevation reservoir to a higher-elevation one.
The energy is recovered when demand is high by releasing the water, with the pump operating in reverse as a hydroelectric generator.
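The energy such a reservoir stores follows from gravitational potential energy, E = ρ·g·h·V, reduced by the round-trip efficiency. The reservoir volume, head and efficiency in the sketch below are illustrative assumptions:

```python
# Pumped-storage energy: E = rho * g * head * volume, times round-trip efficiency.
# Reservoir size, head and efficiency here are illustrative assumptions.
RHO, G = 1000.0, 9.81  # water density kg/m^3, gravity m/s^2

def stored_kwh(volume_m3: float, head_m: float, efficiency: float = 0.75) -> float:
    joules = RHO * G * head_m * volume_m3 * efficiency
    return joules / 3.6e6  # J -> kWh

print(f"{stored_kwh(1_000_000, 300):,.0f} kWh")  # 1 million m^3 over a 300 m head
```

Under these assumptions the reservoir holds about 600 MWh, enough to run a mid-sized power station for several hours; the 75% factor stands in for combined pumping and generating losses.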
The 1973 oil embargo and 1979 energy crisis caused a reorganization of energy policies around the world and brought
renewed attention to developing solar technologies. Deployment strategies focused on incentive programs such as the
Federal Photovoltaic Utilization Program in the US and the Sunshine Program in Japan. Other efforts included the
formation of research facilities in the US (SERI, now NREL), Japan (NEDO), and Germany (Fraunhofer Institute for
Solar Energy Systems ISE). Commercial solar water heaters began appearing in the United States in the 1890s. These
systems saw increasing use until the 1920s but were gradually replaced by cheaper and more reliable heating fuels.
As with photovoltaics, solar water heating attracted renewed attention as a result of the oil crises in the 1970s
but interest subsided in the 1980s due to falling petroleum prices. Development in the solar water heating sector
progressed steadily throughout the 1990s and growth rates have averaged 20% per year since 1999. Although generally
underestimated, solar water heating and cooling is by far the most widely deployed solar technology with an estimated
capacity of 154 GW as of 2007. The International Energy Agency has said that solar energy can make considerable contributions
to solving some of the most urgent problems the world now faces: The International Organization for Standardization
has established a number of standards relating to solar energy equipment. For example, ISO 9050 relates to glass
in building while ISO 10217 relates to the materials used in solar water heaters. Solar energy is an important source of renewable
energy and its technologies are broadly characterized as either passive solar or active solar depending on the way
they capture and distribute solar energy or convert it into solar power. Active solar techniques include the use
of photovoltaic systems, concentrated solar power and solar water heating to harness the energy. Passive solar techniques
include orienting a building to the Sun, selecting materials with favorable thermal mass or light dispersing properties,
and designing spaces that naturally circulate air. The large magnitude of solar energy available makes it a highly
appealing source of electricity. The United Nations Development Programme in its 2000 World Energy Assessment found
that the annual potential of solar energy was 1,575–49,837 exajoules (EJ). This is several times larger than the
total world energy consumption, which was 559.8 EJ in 2012. In 2011, the International Energy Agency said that "the
development of affordable, inexhaustible and clean solar energy technologies will have huge longer-term benefits.
It will increase countries’ energy security through reliance on an indigenous, inexhaustible and mostly import-independent
resource, enhance sustainability, reduce pollution, lower the costs of mitigating global warming, and keep fossil
fuel prices lower than otherwise. These advantages are global. Hence the additional costs of the incentives for early
deployment should be considered learning investments; they must be wisely spent and need to be widely shared". The
potential solar energy that could be used by humans differs from the amount of solar energy present near the surface
of the planet because factors such as geography, time variation, cloud cover, and the land available to humans limit
the amount of solar energy that we can acquire. Geography affects solar energy potential because areas that are closer
to the equator receive a greater amount of solar radiation. However, the use of photovoltaics that can follow the position
of the Sun can significantly increase the solar energy potential in areas that are farther from the equator. Time
variation affects the potential of solar energy because during the nighttime there is little solar radiation on the
surface of the Earth for solar panels to absorb. This limits the amount of energy that solar panels can absorb in
one day. Cloud cover can affect the potential of solar panels because clouds block incoming light from the Sun and
reduce the light available for solar cells. In addition, land availability has a large effect on the available solar
energy because solar panels can only be set up on land that is available and suitable for them. Roofs have
proven to be a suitable place for solar cells, as many people have discovered that they can collect energy directly
from their homes this way. Other suitable areas are otherwise unused tracts of land on which large solar plants
can be established. In 2000, the United Nations Development Programme, UN Department of Economic
and Social Affairs, and World Energy Council published an estimate of the potential solar energy that could be used
by humans each year that took into account factors such as insolation, cloud cover, and the land that is usable by
humans. The estimate found that solar energy has a global potential of 1,575–49,837 EJ per year (see table below).
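The scale of these estimates can be checked with quick arithmetic against the other figures in the article (559.8 EJ of world consumption in 2012; 3,850,000 EJ per year absorbed by the atmosphere, oceans and land):

```python
# Ratios implied by the figures quoted in the text.
low, high = 1575, 49837   # EJ/yr, estimated usable solar potential
consumption = 559.8       # EJ, world energy consumption in 2012
print(low / consumption, high / consumption)  # potential is ~3x to ~89x consumption

absorbed = 3_850_000      # EJ/yr absorbed by atmosphere, oceans and land masses
print(absorbed / (365 * 24))  # ~440 EJ per hour, i.e. close to a year's consumption
```

Even the low end of the potential range is several times total world consumption, and the hourly absorbed flux is of the same order as a full year of human energy use, which is the basis of the "one hour versus one year" comparison made earlier in the article.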
Solar power is the conversion of sunlight into electricity, either directly using photovoltaics (PV), or indirectly
using concentrated solar power (CSP). CSP systems use lenses or mirrors and tracking systems to focus a large area
of sunlight into a small beam. PV converts light into electric current using the photovoltaic effect. Sunlight has
influenced building design since the beginning of architectural history. Advanced solar architecture and urban planning
methods were first employed by the Greeks and Chinese, who oriented their buildings toward the south to provide light
and warmth. A solar balloon is a black balloon that is filled with ordinary air. As sunlight shines on the balloon,
the air inside is heated and expands causing an upward buoyancy force, much like an artificially heated hot air balloon.
Some solar balloons are large enough for human flight, but usage is generally limited to the toy market as the surface-area
to payload-weight ratio is relatively high. Beginning with the surge in coal use which accompanied the Industrial
Revolution, energy consumption has steadily transitioned from wood and biomass to fossil fuels. The early development
of solar technologies starting in the 1860s was driven by an expectation that coal would soon become scarce. However,
development of solar technologies stagnated in the early 20th century in the face of the increasing availability,
economy, and utility of coal and petroleum. In 2011, a report by the International Energy Agency found that solar
energy technologies such as photovoltaics, solar hot water and concentrated solar power could provide a third of
the world’s energy by 2060 if politicians commit to limiting climate change. The energy from the sun could play a
key role in de-carbonizing the global economy alongside improvements in energy efficiency and imposing costs on greenhouse
gas emitters. "The strength of solar is the incredible variety and flexibility of applications, from small scale
to big scale".
Kanye Omari West (/ˈkɑːnjeɪ/; born June 8, 1977) is an American hip hop recording artist, record producer, rapper, fashion
designer, and entrepreneur. He is among the most acclaimed musicians of the 21st century, attracting both praise
and controversy for his work and his outspoken public persona. Raised in Chicago, West briefly attended art school
before becoming known as a producer for Roc-A-Fella Records in the early 2000s, producing hit singles for artists
such as Jay-Z and Alicia Keys. Intent on pursuing a solo career as a rapper, West released his debut album The College
Dropout in 2004 to widespread commercial and critical success, and founded record label GOOD Music. He went on to
explore a variety of different musical styles on subsequent albums that included the baroque-inflected Late Registration
(2005), the arena-inspired Graduation (2007), and the starkly polarizing 808s & Heartbreak (2008). In 2010, he released
his critically acclaimed fifth album, the maximalist My Beautiful Dark Twisted Fantasy, and the following year he
collaborated with Jay-Z on the joint LP Watch the Throne (2011). West released his abrasive sixth album, Yeezus,
to further critical praise in 2013. Following a series of recording delays and work on non-musical projects, West's
seventh album, The Life of Pablo, was released in 2016. West is one of the best-selling artists of all time, having
sold more than 32 million albums and 100 million digital downloads worldwide. He has won a total of 21 Grammy Awards,
making him one of the most awarded artists of all time and the most Grammy-awarded artist of his age. Three of his
albums rank on Rolling Stone's 2012 "500 Greatest Albums of All Time" list; two of his albums rank first and eighth, respectively, on Pitchfork Media's The 100 Best Albums of 2010–2014. He has also been included in a number
of Forbes annual lists. Time named him one of the 100 most influential people in the world in 2005 and 2015. Kanye
Omari West was born on June 8, 1977 in Atlanta, Georgia. His parents divorced when he was three and he and his mother
moved to Chicago, Illinois. His father, Ray West, is a former Black Panther and was one of the first black photojournalists
at The Atlanta Journal-Constitution. Ray West was later a Christian counselor, and in 2006, opened the Good Water
Store and Café in Lexington Park, Maryland with startup capital from his son. West's mother, Dr. Donda C. (Williams)
West, was a professor of English at Clark Atlanta University, and the Chair of the English Department at Chicago
State University before retiring to serve as his manager. West was raised in a middle-class background, attending
Polaris High School in suburban Oak Lawn, Illinois after living in Chicago. At the age of 10, West moved with his
mother to Nanjing, China, where she was teaching at Nanjing University as part of an exchange program. According
to his mother, West was the only foreigner in his class, but settled in well and quickly picked up the language,
although he has since forgotten most of it. When asked about his grades in high school, West replied, "I got A's
and B's. And I'm not even frontin'." West demonstrated an affinity for the arts at an early age; he began writing
poetry when he was five years old. His mother recalled that she first took notice of West's passion for drawing and
music when he was in the third grade. Growing up in Chicago, West became deeply involved in its hip hop
scene. He started rapping in the third grade and began making musical compositions in the seventh grade, eventually
selling them to other artists. At age thirteen, West wrote a rap song called "Green Eggs and Ham" and began to persuade
his mother to pay $25 an hour for time in a recording studio. It was a small, crude basement studio where a microphone
hung from the ceiling by a wire clothes hanger. Although this wasn't what West's mother wanted, she nonetheless supported
him. West crossed paths with producer/DJ No I.D., with whom he quickly formed a close friendship. No I.D. soon became
West's mentor, and it was from him that West learned how to sample and program beats after he received his first
sampler at age 15. After graduating from high school, West received a scholarship to attend Chicago's American Academy
of Art in 1997 and began taking painting classes, but shortly after transferred to Chicago State University to study
English. He soon realized that his busy class schedule was detrimental to his musical work, and at 20 he dropped
out of college to pursue his musical dreams. This action greatly displeased his mother, who was also a professor
at the university. She later commented, "It was drummed into my head that college is the ticket to a good life...
but some career goals don't require college. For Kanye to make an album called College Dropout it was more about
having the guts to embrace who you are, rather than following the path society has carved out for you." Kanye West
began his early production career in the mid-1990s, making beats primarily for burgeoning local artists, eventually
developing a style that involved speeding up vocal samples from classic soul records. His first official production
credits came at the age of nineteen when he produced eight tracks on Down to Earth, the 1996 debut album of a Chicago
rapper named Grav. For a time, West acted as a ghost producer for Deric "D-Dot" Angelettie. Because of his association
with D-Dot, West wasn't able to release a solo album, so he formed and became a member and producer of the Go-Getters,
a late-1990s Chicago rap group composed of him, GLC, Timmy G, Really Doe, and Arrowstar. His group was managed by
John "Monopoly" Johnson, Don Crowley, and Happy Lewis under the management firm Hustle Period. After attending a
series of promotional photo shoots and making some radio appearances, The Go-Getters released their first and only
studio album World Record Holders in 1999. The album featured other Chicago-based rappers such as Rhymefest, Mikkey
Halsted, Miss Criss, and Shayla G. Meanwhile, the production was handled by West, Arrowstar, Boogz, and Brian "All
Day" Miller. West spent much of the late-1990s producing records for a number of well-known artists and music groups.
The third song on Foxy Brown's second studio album Chyna Doll was produced by West. Her second effort subsequently
became the very first hip-hop album by a female rapper to debut at the top of the U.S. Billboard 200 chart in its
first week of release. West produced three of the tracks on Harlem World's first and only album The Movement alongside
Jermaine Dupri and the production duo Trackmasters. His songs featured rappers Nas, Drag-On, and R&B singer Carl
Thomas. The ninth track from World Party, the last Goodie Mob album to feature the rap group's four founding members
prior to their break-up, was co-produced by West with his manager Deric "D-Dot" Angelettie. At the close of the millennium,
West ended up producing six songs for Tell 'Em Why U Madd, an album that was released by D-Dot under the alias of
The Madd Rapper; a fictional character he created for a skit on The Notorious B.I.G.'s second and final studio album
Life After Death. West's songs featured guest appearances from rappers such as Ma$e, Raekwon, and Eminem. West got
his big break in the year 2000, when he began to produce for artists on Roc-A-Fella Records. West came to achieve
recognition and is often credited with revitalizing Jay-Z's career with his contributions to the rap mogul's influential
2001 album The Blueprint. The Blueprint is consistently ranked among the greatest hip-hop albums, and the critical
and financial success of the album generated substantial interest in West as a producer. Serving as an in-house producer
for Roc-A-Fella Records, West produced records for other artists from the label, including Beanie Sigel, Freeway,
and Cam'ron. He also crafted hit songs for Ludacris, Alicia Keys, and Janet Jackson. Despite his success as a producer,
West's true aspiration was to be a rapper. Though he had developed his rapping long before he began producing, it
was often a challenge for West to be accepted as a rapper, and he struggled to attain a record deal. Multiple record
companies ignored him because he did not portray the gangsta image prominent in mainstream hip hop at the time. After
a series of meetings with Capitol Records, West was ultimately denied an artist deal. According to Capitol Records'
A&R, Joe Weinberger, he was approached by West and almost signed a deal with him, but another person in the company
convinced Capitol's president not to. Desperate to keep West from defecting to another label, then-label head Damon
Dash reluctantly signed West to Roc-A-Fella Records. Jay-Z later admitted that Roc-A-Fella was initially reluctant
to support West as a rapper, claiming that many saw him as a producer first and foremost, and that his background
contrasted with that of his labelmates. West's breakthrough came a year later on October 23, 2002, when, while driving
home from a California recording studio after working late, he fell asleep at the wheel and was involved in a near-fatal
car crash. The crash left him with a shattered jaw, which had to be wired shut in reconstructive surgery. The accident
inspired West; two weeks after being admitted to the hospital, he recorded a song at the Record Plant Studios with
his jaw still wired shut. The composition, "Through The Wire", expressed West's experience after the accident, and
helped lay the foundation for his debut album, as according to West "all the better artists have expressed what they
were going through". West added that "the album was my medicine", as working on the record distracted him from the
pain. "Through The Wire" was first available on West's Get Well Soon... mixtape, released December 2002. At the same
time, West announced that he was working on an album called The College Dropout, whose overall theme was to "make
your own decisions. Don't let society tell you, 'This is what you have to do.'" Carrying a Louis Vuitton backpack
filled with old disks and demos to the studio and back, West crafted much of his production for his debut album in
less than fifteen minutes at a time. He recorded the remainder of the album in Los Angeles while recovering from
the car accident. Once he had completed the album, it was leaked months before its release date. However, West decided
to use the opportunity to review the album, and The College Dropout was significantly remixed, remastered, and revised
before being released. As a result, certain tracks originally destined for the album were subsequently retracted,
among them "Keep the Receipt" with Ol' Dirty Bastard and "The Good, the Bad, and the Ugly" with Consequence. West
meticulously refined the production, adding string arrangements, gospel choirs, improved drum programming and new
verses. West's perfectionism led The College Dropout to have its release postponed three times from its initial date
in August 2003. The College Dropout was eventually issued by Roc-A-Fella in February 2004, shooting to number two
on the Billboard 200 as his debut single, "Through the Wire" peaked at number fifteen on the Billboard Hot 100 chart
for five weeks. "Slow Jamz", his second single featuring Twista and Jamie Foxx, became an even bigger success: it
became the three musicians' first number one hit. The College Dropout received near-universal critical acclaim from
contemporary music critics, was voted the top album of the year by two major music publications, and has consistently
been ranked among the great hip-hop works and debut albums by artists. "Jesus Walks", the album's fourth single,
perhaps exposed West to a wider audience; the song's subject matter concerns faith and Christianity. The song nevertheless
reached the top 20 of the Billboard pop charts, despite industry executives' predictions that a song containing such
blatant declarations of faith would never make it to radio. The College Dropout would eventually be certified triple
platinum in the US, and garnered West 10 Grammy nominations, including Album of the Year, and Best Rap Album (which
it received). During this period, West also founded GOOD Music, a record label and management company that would
go on to house affiliate artists and producers, such as No I.D. and John Legend. At the time, the focal point of
West's production style was the use of sped-up vocal samples from soul records. However, partly because of the acclaim
of The College Dropout, such sampling had been much copied by others; with that overuse, and also because West felt
he had become too dependent on the technique, he decided to find a new sound. Beginning his second effort that fall,
West would invest two million dollars and take over a year to craft his second album. West was significantly inspired
by Roseland NYC Live, a 1998 live album by English trip hop group Portishead, produced with the New York Philharmonic
Orchestra. Early in his career, the live album had inspired him to incorporate string arrangements into his hip-hop
production. Though West had not been able to afford many live instruments around the time of his debut album, the
money from his commercial success enabled him to hire a string orchestra for his second album Late Registration.
West collaborated with American film score composer Jon Brion, who served as the album's co-executive producer for
several tracks. Although Brion had no prior experience in creating hip-hop records, he and West found that they could
productively work together after their first afternoon in the studio where they discovered that neither confined
his musical knowledge and vision to one specific genre. Late Registration sold over 2.3 million units in the United
States alone by the end of 2005 and was considered by industry observers as the only successful major album release
of the fall season, which had been plagued by steadily declining CD sales. While West had encountered controversy
a year prior when he stormed out of the American Music Awards of 2004 after losing Best New Artist, the rapper's
first large-scale controversy came just days following Late Registration's release, during a benefit concert for
Hurricane Katrina victims. In September 2005, NBC broadcast A Concert for Hurricane Relief, and West was a featured
speaker. When West was presenting alongside actor Mike Myers, he deviated from the prepared script. Myers spoke next
and continued to read the script. Once it was West's turn to speak again, he said, "George Bush doesn't care about
black people." West's comment reached much of the United States, leading to mixed reactions; President Bush would
later call it one of the most "disgusting moments" of his presidency. West raised further controversy in January
2006 when he posed on the cover of Rolling Stone wearing a crown of thorns. Fresh off spending the previous year
touring the world with U2 on their Vertigo Tour, West felt inspired to compose anthemic rap songs that could operate
more efficiently in large arenas. To this end, West incorporated the synthesizer into his hip-hop production, utilized
slower tempos, and experimented with electronic music, drawing influence from music of the 1980s. In addition to U2, West
drew musical inspiration from arena rock bands such as The Rolling Stones and Led Zeppelin in terms of melody and
chord progression. To make his next effort, the third in a planned tetralogy of education-themed studio albums, more
introspective and personal in lyricism, West listened to folk and country singer-songwriters Bob Dylan and Johnny
Cash in hopes of developing methods to augment his wordplay and storytelling ability. West's third studio album,
Graduation, garnered major publicity when its release date pitted West in a sales competition against rapper 50 Cent's
Curtis. Upon their September 2007 releases, Graduation outsold Curtis by a large margin, debuting at number one on
the U.S. Billboard 200 chart and selling 957,000 copies in its first week. Graduation once again continued the string
of critical and commercial successes by West, and the album's lead single, "Stronger", garnered the rapper his third
number-one hit. "Stronger", which samples French house duo Daft Punk, has been credited not only with encouraging other hip-hop artists to incorporate house and electronica elements into their music, but also with playing a part in the revival of disco and electro-infused music in the late 2000s. Ben Detrick of XXL cited the outcome of the
sales competition between 50 Cent's Curtis and West's Graduation as being responsible for altering the direction
of hip-hop and paving the way for new rappers who didn't follow the hardcore-gangster mold, writing, "If there was
ever a watershed moment to indicate hip-hop's changing direction, it may have come when 50 Cent competed with Kanye
in 2007 to see whose album would claim superior sales." West's life took a different direction when his mother, Donda
West, died of complications from cosmetic surgery involving abdominoplasty and breast reduction in November 2007.
Months later, West and fiancée Alexis Phifer ended their engagement and their long-term intermittent relationship,
which had begun in 2002. The events profoundly affected West, who set off for his 2008 Glow in the Dark Tour shortly
thereafter. Purportedly because his emotions could not be conveyed through rapping, West decided to sing using the
voice audio processor Auto-Tune, which would become a central part of his next effort. West had previously experimented
with the technology on his debut album The College Dropout for the background vocals of "Jesus Walks" and "Never
Let Me Down." Recorded mostly in Honolulu, Hawaii in three weeks, West announced his fourth album, 808s & Heartbreak,
at the 2008 MTV Video Music Awards, where he performed its lead single, "Love Lockdown". Music audiences were taken
aback by the uncharacteristic production style and the presence of Auto-Tune, which typified the pre-release response
to the record. 808s & Heartbreak, which features extensive use of the eponymous Roland TR-808 drum machine and contains
themes of love, loneliness, and heartache, was released by Island Def Jam to capitalize on Thanksgiving weekend in
November 2008. Reviews were positive, though slightly more mixed than his previous efforts. Despite this, the record's
singles demonstrated outstanding chart performances. Upon its release, the lead single "Love Lockdown" debuted at
number three on the Billboard Hot 100 and became a "Hot Shot Debut", while follow-up single "Heartless" performed
similarly and became his second consecutive "Hot Shot Debut" by debuting at number four on the Billboard Hot 100.
While it was criticized prior to release, 808s & Heartbreak had a significant effect on hip-hop music, encouraging
other rappers to take more creative risks with their productions. In 2012, Rolling Stone journalist Matthew Trammell
asserted that the record was ahead of its time and wrote, "Now that popular music has finally caught up to it, 808s
& Heartbreak has revealed itself to be Kanye’s most vulnerable work, and perhaps his most brilliant." West's incident the following year at the 2009 MTV Video Music Awards was arguably his biggest controversy, and led to widespread
outrage throughout the music industry. During the ceremony, West crashed the stage and grabbed the microphone from
winner Taylor Swift in order to proclaim that, instead, Beyoncé's video for "Single Ladies (Put a Ring on It)", nominated
for the same award, was "one of the best videos of all time". He was subsequently withdrawn from the remainder of
the show for his actions. West's tour with Lady Gaga was cancelled in response to the controversy, and it was suggested
that the incident was partially responsible for 808s & Heartbreak's lack of nominations at the 52nd Grammy Awards.
Following the highly publicized incident, West took a brief break from music and threw himself into fashion, only
to hole up in Hawaii for the next few months writing and recording his next album. Importing his favorite producers
and artists to work on and inspire his recording, West kept engineers behind the boards 24 hours a day and slept
only in increments. Noah Callahan-Bever, a writer for Complex, was present during the sessions and described the
"communal" atmosphere thus: "With the right songs and the right album, he can overcome any and all controversy,
and we are here to contribute, challenge, and inspire." A variety of artists contributed to the project, including
close friends Jay-Z, Kid Cudi and Pusha T, as well as off-the-wall collaborations, such as with Justin Vernon of
Bon Iver. My Beautiful Dark Twisted Fantasy, West's fifth studio album, was released in November 2010 to rave reviews
from critics, many of whom described it as his best work that solidified his comeback. In stark contrast to his previous
effort, which featured a minimalist sound, Dark Fantasy adopts a maximalist philosophy and deals with themes of celebrity
and excess. The record included the international hit "All of the Lights", and Billboard hits "Power", "Monster",
and "Runaway", the latter of which accompanied a 35-minute film of the same name. During this time, West initiated
the free music program GOOD Fridays through his website, offering a free download of previously unreleased songs
each Friday, some of which were included on the album. This promotion ran from August 20 to December 17, 2010.
Dark Fantasy went on to go platinum in the United States, but its omission as a contender for Album of the Year at
the 54th Grammy Awards was viewed as a "snub" by several media outlets. Following a headlining set at Coachella 2011
that was described by The Hollywood Reporter as "one of the greatest hip-hop sets of all time", West released the collaborative
album Watch the Throne with Jay-Z. By employing a sales strategy that released the album digitally weeks before its
physical counterpart, Watch the Throne became one of the few major label albums in the Internet age to avoid a leak.
"Niggas in Paris" became the record's highest charting single, peaking at number five on the Billboard Hot 100. In
2012, West released the compilation album Cruel Summer, a collection of tracks by artists from West's record label
GOOD Music. Cruel Summer produced four singles, two of which charted within the top twenty of the Hot 100: "Mercy"
and "Clique". West also directed a film of the same name that premiered at the 2012 Cannes Film Festival in a custom pyramid-shaped screening pavilion featuring seven screens. Sessions for West's sixth solo effort began to take shape in early 2013 in the living room of his personal loft at a Paris hotel. Determined to "undermine the commercial",
he once again brought together close collaborators and attempted to incorporate Chicago drill, dancehall, acid house,
and industrial music. Primarily inspired by architecture, West's perfectionist tendencies led him to contact producer
Rick Rubin fifteen days shy of its due date to strip down the record's sound in favor of a more minimalist approach.
Initial promotion of his sixth album included worldwide video projections of the album's music and live television
performances. Yeezus, West's sixth album, was released June 18, 2013 to rave reviews from critics. It became the
rapper's sixth consecutive number one debut, but also marked his lowest solo opening week sales. Def Jam issued "Black
Skinhead" to radio in July 2013 as the album's lead single. On September 6, 2013, Kanye West announced he would be
headlining his first solo tour in five years, to support Yeezus, with fellow American rapper Kendrick Lamar accompanying
him along the way. In June 2013, West and television personality Kim Kardashian announced the birth of their first
child, North. In October 2013, the couple announced their engagement to widespread media attention. In November 2013,
West stated that he was beginning work on his next studio album, hoping to release it by mid-2014, with production
by Rick Rubin and Q-Tip. In December 2013, Adidas announced the beginning of an official apparel collaboration with
West, to be premiered the following year. In May 2014, West and Kardashian were married in a private ceremony in
Florence, Italy, with a variety of artists and celebrities in attendance. West released a single, "Only One", featuring
Paul McCartney, on December 31, 2014. "FourFiveSeconds", a single jointly produced with Rihanna and McCartney, was
released in January 2015. West also appeared on the Saturday Night Live 40th Anniversary Special, where he premiered
a new song entitled "Wolves", featuring Sia Furler and fellow Chicago rapper, Vic Mensa. In February 2015, West premiered
his clothing collaboration with Adidas, entitled Yeezy Season 1, to generally positive reviews. This would include
West's Yeezy Boost sneakers. In March 2015, West released the single "All Day" featuring Theophilus London, Allan
Kingdom and Paul McCartney. West performed the song at the 2015 BRIT Awards with a number of US rappers and UK grime
MCs, including Skepta, Wiley, Novelist, Fekky, Krept & Konan, Stormzy, Allan Kingdom, Theophilus London and Vic
Mensa. He would premiere the second iteration of his clothing line, Yeezy Season 2, in September 2015 at New York
Fashion Week. Having initially announced a new album entitled So Help Me God slated for a 2014 release, in March
2015 West announced that the album would instead be tentatively called SWISH. Later that month, West was awarded
an honorary doctorate by the School of the Art Institute of Chicago for his contributions to music, fashion, and
popular culture, officially making him an honorary DFA. The next month, West headlined at the Glastonbury Festival
in the UK, despite a petition signed by almost 135,000 people against his appearance. At one point, he told the audience:
"You are now watching the greatest living rock star on the planet." Media outlets, including social media sites such
as Twitter, were sharply divided on his performance. NME stated, "The decision to book West for the slot has proved
controversial since its announcement, and the show itself appeared to polarise both Glastonbury goers and those who
tuned in to watch on their TVs." The publication added that "he's letting his music speak for and prove itself."
The Guardian said that "his set has a potent ferocity – but there are gaps and stutters, and he cuts a strangely
lone figure in front of the vast crowd." In December 2015, West released a song titled "Facts". He announced in January
2016 on Twitter that SWISH would be released on February 11, after releasing new song "Real Friends" and a snippet
of "No More Parties in L.A." with Kendrick Lamar. This also revived the GOOD Fridays initiative, in which West released new singles every Friday. On January 26, 2016, West revealed he had renamed the album from SWISH to Waves, and also
announced the premiere of his Yeezy Season 3 clothing line at Madison Square Garden. In early 2016, several weeks
prior to the release of his new album, West became embroiled in a short-lived social media altercation with rapper
Wiz Khalifa on Twitter that eventually involved their mutual ex-partner, Amber Rose, who objected to West's mention
of her and Khalifa's child. The feud involved allegations by Rose concerning her sexual relationship with West, and
received significant media attention. As of February 2, 2016, West and Khalifa had reconciled. Several days ahead
of the album's release, West again changed the title, this time to The Life of Pablo. On February 11, West premiered
the album at Madison Square Garden as part of the presentation of his Yeezy Season 3 clothing line. Following the
preview, West announced that he would be modifying the track list once more before its release to the public, and
further delayed its release to finalize the recording of the track "Waves" at the behest of co-writer Chance the
Rapper. He released the album exclusively on Tidal on February 14, 2016, following a performance on SNL. West's musical
career has been defined by frequent stylistic shifts, and has seen him develop and explore a variety of different
musical approaches and genres throughout his work. When asked about his musical inspirations, he has named A Tribe
Called Quest, Stevie Wonder, Michael Jackson, George Michael, LL Cool J, Phil Collins and Madonna as early interests.
He has further described musician David Bowie as one of his "most important inspirations," and named producer Puff
Daddy as the "most important cultural figure in my life." Early in his career, West pioneered a style of production
dubbed "chipmunk soul" which utilized pitched-up vocal samples, usually from soul and R&B songs, along with his own
drums and instrumentation. His first major release featuring his trademark soulful vocal sampling style was "This
Can't Be Life", a track from Jay-Z’s The Dynasty: Roc La Familia. West has said that Wu-Tang Clan producer RZA influenced
him in his style, and has named Wu-Tang rappers Ghostface Killah and Ol' Dirty Bastard as inspirations. RZA spoke
positively of the comparisons, stating in an interview for Rolling Stone, "All good. I got super respect for Kanye
[...] [he] is going to inspire people to be like him." West further developed his style on his 2004 debut album,
The College Dropout. After a rough version was leaked, he meticulously refined the production, adding string arrangements,
gospel choirs, and improved drum programming. For his second album, Late Registration (2005), he collaborated with
film score composer Jon Brion and drew influence from non-rap influences such as English trip hop group Portishead.
Blending West's primary soulful hip hop production with Brion's elaborate chamber pop orchestration, the album experimentally
incorporated a wide array of different genres and prominent orchestral elements, including string arrangements, piano
chords, brass flecks, and horn riffs among other symphonic instrumentation. It also incorporated a myriad of foreign
and vintage instruments not typical in popular music, let alone hip hop, such as a celesta, harpsichord, Chamberlin,
CS-80 analog synthesizer, Chinese bells and berimbau, vibraphones, and marimba. Rolling Stone described Late Registration
as West claiming "the whole world of music as hip-hop turf" chronicling the album as "his mad quest to explode every
cliché about hip-hop identity." Critic Robert Christgau wrote that "there's never been hip-hop so complex and subtle
musically." For a period of time, Kanye West stood as the sole current pop star to tour with a string section, as
audible on his 2006 live album Late Orchestration. With his third album, Graduation (2007), West moved away from
the sound of his previous releases and towards a more atmospheric, rock-tinged, electronic-influenced soundscape.
The musical evolution arose from him listening to music genres encompassing European Britpop and Euro-disco, American
alternative and indie-rock, and his native Chicago house. Towards this end, West retracted much of the live instrumentation
that characterized his previous album and replaced it with heavy, gothic synthesizers, distorted synth-chords, rave
stabs, house beats, electro-disco rhythms, and a wide array of modulated electronic noises and digital audio-effects.
In addition, West drew musical inspiration from arena rock bands such as The Rolling Stones, U2, and Led Zeppelin
in terms of melody and chord progression. West's fourth studio album, 808s & Heartbreak (2008), marked an even more
radical departure from his previous releases, largely abandoning rap and hip hop stylings in favor of a stark electropop
sound composed of virtual synthesis, the Roland TR-808 drum machine, and explicitly auto-tuned vocal tracks. Drawing
inspiration from artists such as Gary Numan, TJ Swan and Boy George, and maintaining a "minimal but functional" approach
towards the album's studio production, West explored the electronic feel produced by Auto-Tune and utilized the sounds
created by the 808, manipulating its pitch to produce a distorted, electronic sound; he then sought to juxtapose
mechanical sounds with the traditional sounds of taiko drums and choir monks. The album's music features austere
production and elements such as dense drums, lengthy strings, droning synthesizers, and somber piano, and drew comparisons
to the work of 1980s post-punk and new wave groups, with West himself later confessing an affinity with British post-punk
group Joy Division. Rolling Stone journalist Matthew Trammell asserted that the record was ahead of its time and
wrote in a 2012 article, "Now that popular music has finally caught up to it, 808s & Heartbreak has revealed itself
to be Kanye’s most vulnerable work, and perhaps his most brilliant." West's fifth album, My Beautiful Dark Twisted
Fantasy, has been noted by writers for its maximalist aesthetic and its incorporation of elements from West's previous
four albums. Entertainment Weekly's Simon Vozick-Levinson perceives that such elements "all recur at various points",
namely "the luxurious soul of 2004's The College Dropout, the symphonic pomp of Late Registration, the gloss of 2007's
Graduation, and the emotionally exhausted electro of 2008's 808s & Heartbreak". Sean Fennessey of The Village Voice
writes that West "absorb[ed] the gifts of his handpicked collaborators, and occasionally elevat[ed] them" on previous
studio albums, citing such collaborators as Jon Brion for Late Registration, DJ Toomp for Graduation, and
Kid Cudi for 808s & Heartbreak. Describing his sixth studio album Yeezus (2013) as "a protest to music," West embraced
an abrasive style that incorporated industrial music, acid house, dancehall, punk, electro, and Chicago drill. Inspired
by the minimalist design of Le Corbusier and primarily electronic in nature, the album features distorted drum machines
and "synthesizers that sound like they're malfunctioning, low-resolution samplers that add a pixelated digital aura
to the most analog sounds." To this end, the album incorporates glitches reminiscent of CD skips or corrupted MP3s,
and Auto-Tuned vocals are modulated to a point where they are difficult to decipher. It also continues West's
practice of eclectic sampling: he employs a sample of Nina Simone's "Strange Fruit," an obscure Hindi sample on "I
Am a God", and a sample of 1970s Hungarian rock group Omega on "New Slaves". "On Sight" interpolates a melody from
"Sermon (He'll Give Us What We Really Need)" by the Holy Name of Mary Choral Family. Rolling Stone called the album
a "brilliant, obsessive-compulsive career auto-correct". In September 2005, West announced that he would release
his Pastelle Clothing line in spring 2006, claiming "Now that I have a Grammy under my belt and Late Registration
is finished, I am ready to launch my clothing line next spring." The line was developed over the following four years
– with multiple pieces teased by West himself – before the line was ultimately cancelled in 2009. In 2009, West collaborated
with Nike to release his own shoe, the Air Yeezys, with a second version released in 2012. In January 2009, West
introduced his first shoe line designed for Louis Vuitton during Paris Fashion Week. The line was released in summer
2009. West has additionally designed footwear for Bape and Italian shoemaker Giuseppe Zanotti. On October 1, 2011,
Kanye West premiered his women's fashion label, DW Kanye West, at Paris Fashion Week. He received support from DSquared2
duo Dean and Dan Caten, Olivier Theyskens, Jeremy Scott, Azzedine Alaïa, and the Olsen twins, who were also in attendance
during his show. His debut fashion show received mixed-to-negative reviews, ranging from reserved observations by
Style.com to excoriating commentary by The Wall Street Journal, The New York Times, the International Herald Tribune,
Elleuk.com, The Daily Telegraph, Harper's Bazaar and many others. On March 6, 2012, West premiered a second fashion
line at Paris Fashion Week. The line's reception was markedly improved from the previous presentation, with a number
of critics heralding West for his "much improved" sophomore effort. On December 3, 2013, Adidas officially confirmed
a new shoe collaboration deal with West. After months of anticipation and rumors, West confirmed the release of the
Adidas Yeezy Boosts with a Twitter announcement directing fans to the domain yeezy.supply. In 2015, West unveiled
his Yeezy Season clothing line, premiering Season 1 in collaboration with Adidas early in the year. The release of
the Yeezy Boosts and the full Adidas collaboration was showcased in New York City on February 12, 2015, with free
streaming to 50 cinemas in 13 countries around the world. An initial release of the Adidas Yeezy Boosts was limited
to 9000 pairs to be available only in New York City via the Adidas smartphone app; the Adidas Yeezy Boosts were sold
out within 10 minutes. The shoes released worldwide on February 28, 2015, were limited to select boutique stores
and the Adidas UK stores. He followed with Season 2 later that year at New York Fashion Week. On February 11, 2016,
West premiered his Yeezy Season 3 clothing line at Madison Square Garden in conjunction with the preview of his album
The Life of Pablo. In August 2008, West revealed plans to open 10 Fatburger restaurants in the Chicago area; the
first opened in September 2008 in Orland Park, and a second followed in January 2009. His company, KW Foods LLC, bought
the rights to the chain in Chicago, but ultimately only those two locations opened. In February 2011, West shut down the
Fatburger located in Orland Park. Later that year, the remaining Beverly location was also shuttered. West founded the record
label and production company GOOD Music in 2004, in conjunction with Sony BMG, shortly after releasing his debut
album, The College Dropout. John Legend, Common, and West were the label's inaugural artists. The label houses artists
including West, Big Sean, Pusha T, Teyana Taylor, Yasiin Bey / Mos Def, D'banj and John Legend, and producers including
Hudson Mohawke, Q-Tip, Travis Scott, No I.D., Jeff Bhasker, and S1. GOOD Music has released ten albums certified
gold or higher by the Recording Industry Association of America (RIAA). In November 2015, West appointed Pusha T
the new president of GOOD Music. On January 5, 2012, West announced his establishment of the creative content company
DONDA, named after his late mother Donda West. In his announcement, West proclaimed that the company would "pick
up where Steve Jobs left off"; DONDA would operate as "a design company which will galvanize amazing thinkers in
a creative space to bounce their dreams and ideas" with the "goal to make products and experiences that people want
and can afford." West is notoriously secretive about the company's operations, maintaining neither an official website
nor a social media presence. In stating DONDA's creative philosophy, West articulated the need to "put creatives
in a room together with like minds" in order to "simplify and aesthetically improve everything we see, taste, touch,
and feel." Contemporary critics have noted the consistent minimalist aesthetic exhibited throughout DONDA creative
projects. On March 30, 2015, it was announced that West is a co-owner, with various other music artists, in the music
streaming service Tidal. The service specializes in lossless audio and high-definition music videos. Jay Z acquired
the parent company of Tidal, Aspiro, in the first quarter of 2015. Sixteen artist stakeholders, including Jay-Z,
Beyoncé, Rihanna, Madonna, Chris Martin, and Nicki Minaj, co-own Tidal, with the majority owning a
3% equity stake. The idea of an artist-owned streaming service arose among those involved as a way to adapt
to the increased demand for streaming within the music industry, and to rival other streaming services such
as Spotify, which has been criticized for its low royalty payouts. "The challenge is to get everyone to respect
music again, to recognize its value", stated Jay-Z on the release of Tidal. West, alongside his mother, founded the
"Kanye West Foundation" in Chicago in 2003, tasked with a mission to battle dropout and illiteracy rates, while partnering
with community organizations to provide underprivileged youth access to music education. In 2007, West and the
foundation partnered with Strong American Schools as part of their "Ed in '08" campaign. As spokesman for the campaign,
West appeared in a series of PSAs for the organization, and hosted an inaugural benefit concert in August of that
year. In 2008, following the death of West's mother, the foundation was rechristened "The Dr. Donda West Foundation."
The foundation ceased operations in 2011. West has additionally appeared in and participated in many fundraisers and benefit
concerts, and has done community work for Hurricane Katrina relief, the Kanye West Foundation, the Millions More
Movement, 100 Black Men of America, a Live Earth benefit concert, a World Water Day rally and march, Nike runs, and
an MTV special giving young Iraq War veterans struggling with debt and PTSD a second chance after returning
home. West has been an outspoken and controversial celebrity throughout his career, receiving both criticism and
praise from many, including the mainstream media, other artists and entertainers, and two U.S. presidents. On September
2, 2005, during a benefit concert for Hurricane Katrina relief on NBC, A Concert for Hurricane Relief, West (a featured
speaker) accused President George W. Bush of not "car[ing] about black people". When West was presenting alongside
actor Mike Myers, he deviated from the prepared script to criticize the media's portrayal of hurricane victims.
Myers spoke next and continued to read the script. Once it was West's turn to speak again, he said, "George Bush
doesn't care about black people." At this point, telethon producer Rick Kaplan cut off the microphone and then cut
away to Chris Tucker, who was unaware of the cut for a few seconds. Still, West's comment reached much of the United
States. Bush stated in an interview that the comment was "one of the most disgusting moments" of his presidency.
In November 2010, in a taped interview with Matt Lauer for the Today show, West expressed regret for his criticism
of Bush. "I would tell George Bush in my moment of frustration, I didn't have the grounds to call him a racist",
he told Lauer. "I believe that in a situation of high emotion like that we as human beings don't always choose the
right words." The following day, Bush reacted to the apology in a live interview with Lauer saying he appreciated
the rapper's remorse. "I'm not a hater", Bush said. "I don't hate Kanye West. I was talking about an environment
in which people were willing to say things that hurt. Nobody wants to be called a racist if in your heart you believe
in equality of races." Reactions were mixed, but some felt that West had no need to apologize. "It was not the particulars
of your words that mattered, it was the essence of a feeling of the insensitivity towards our communities that many
of us have felt for far too long", argued Def Jam co-founder Russell Simmons. Bush himself was receptive to the apology,
saying, "I appreciate that. It wasn't just Kanye West who was talking like that during Katrina, I cited him as an
example, I cited others as an example as well. You know, I appreciate that." In September 2013, West was widely rebuked
by human rights groups for performing in Kazakhstan at the wedding of authoritarian President Nursultan Nazarbayev's
grandson. He traveled to Kazakhstan, which has one of the poorest human rights records in the world, as a personal
guest of Nazarbayev. Other notable Western performers, including Sting, have previously cancelled performances in
the country over human rights concerns. West was reportedly paid US$3 million for his performance. West had previously
participated in cultural boycotts, joining Shakira and Rage Against The Machine in refusing to perform in Arizona
after the 2010 implementation of stop-and-search laws directed against potential illegal aliens. Later in 2013, West
launched a tirade on Twitter directed at talk show host Jimmy Kimmel after his ABC program Jimmy Kimmel Live! ran
a sketch on September 25 involving two children re-enacting West's recent interview with Zane Lowe for BBC Radio
1 in which he called himself the biggest rock star on the planet. Kimmel revealed the following night that West had called
him to demand an apology shortly before taping. During a November 26, 2013 radio interview, West explained why he
believed that President Obama had problems pushing policies in Washington: "Man, let me tell you something about
George Bush and oil money and Obama and no money. People want to say Obama can't make these moves or he's not executing.
That's because he ain't got those connections. Black people don't have the same level of connections as Jewish people...We
ain't Jewish. We don't got family that got money like that." In response to his comments, the Anti-Defamation League
stated: "There it goes again, the age-old canard that Jews are all-powerful and control the levers of power in government."
On December 21, 2013, West backed off of the original comment and told a Chicago radio station that "I thought I
was giving a compliment, but if anything it came off more ignorant. I don’t know how being told you have money is
an insult." In February 2016, West again became embroiled in controversy when he posted a tweet seemingly asserting
Bill Cosby's innocence in the wake of over 50 women making allegations of sexual assault directed at Cosby. In 2004,
West had the first of a number of public incidents at music award events. At the American Music
Awards of 2004, West stormed out of the auditorium after losing Best New Artist to country singer Gretchen Wilson.
He later commented, "I felt like I was definitely robbed [...] I was the best new artist this year." After the 2006
Grammy nominations were released, West said he would "really have a problem" if he did not win Album of the Year,
saying, "I don't care what I do, I don't care how much I stunt – you can never take away from the amount of work
I put into it. I don't want to hear all of that politically correct stuff." On November 2, 2006, when his "Touch
the Sky" failed to win Best Video at the MTV Europe Music Awards, West went onto the stage as the award was being
presented to Justice and Simian for "We Are Your Friends" and argued that he should have won the award instead. Hundreds
of news outlets worldwide criticized the outburst. On November 7, 2006, West apologized for this outburst publicly
during his performance as support act for U2 for their Vertigo concert in Brisbane. He later spoofed the incident
on the 33rd season premiere of Saturday Night Live in September 2007. On September 9, 2007, West suggested that his
race had something to do with his being overlooked for opening the 2007 MTV Video Music Awards (VMAs) in favor of
Britney Spears; he claimed, "Maybe my skin’s not right." West was performing at the event; that night, he lost all
five awards that he was nominated for, including Best Male Artist and Video of the Year. After the show, he was visibly
upset that he had lost at the VMAs two years in a row, stating that he would not come back to MTV ever again. He
also appeared on several radio stations saying that when he made the song "Stronger", it was his dream to open
the VMAs with it. He also stated that Spears had not had a hit in a long time and that MTV had exploited
her for ratings. On September 13, 2009, during the 2009 MTV Video Music Awards while Taylor Swift was accepting her
award for Best Female Video for "You Belong with Me", West went on stage and grabbed the microphone to proclaim that
Beyoncé's video for "Single Ladies (Put a Ring on It)", nominated for the same award, was "one of the best videos
of all time". He was subsequently removed from the remainder of the show for his actions. When Beyoncé later won
the award for Best Video of the Year for "Single Ladies (Put a Ring on It)", she called Swift up on stage so that
she could finish her acceptance speech. West was criticized by various celebrities for the outburst, and by President
Barack Obama, who called West a "jackass". In addition, West's VMA disruption sparked a large influx of Internet
memes, with "Let you finish" photo-jokes spreading across blogs, forums, and Twitter. He posted a tweet soon after the
event where he stated, "Everybody wanna booooo me but I'm a fan of real pop culture... I'm not crazy y'all, I'm just
real." He then posted two apologies for the outburst on his personal blog; one on the night of the incident, and
the other the following day, when he also apologized during an appearance on The Jay Leno Show. After Swift appeared
on The View two days after the outburst, partly to discuss the matter, West called her to apologize personally. Swift
said she accepted his apology. In September 2010, West wrote a series of apologetic tweets addressed to Swift including
"Beyonce didn't need that. MTV didn't need that and Taylor and her family friends and fans definitely didn't want
or need that" and concluding with "I'm sorry Taylor." He also revealed he had written a song for Swift and if she
did not accept the song, he would perform it himself. However, on November 8, 2010, in an interview with a Minnesota
radio station, he seemed to recant his past apologies by attempting to describe the act at the 2009 awards show as
"selfless" and downgrade the perception of disrespect it created. In "Famous," a track from his 2016 album The Life
of Pablo, West implies that this incident led to Swift's stardom, rapping, "I feel like me and Taylor might still
have sex/ Why? I made that bitch famous." After some media backlash about the reference, West posted on Twitter "I
did not diss Taylor Swift and I've never dissed her...First thing is I'm an artist and as an artist I will express
how I feel with no censorship." He continued by adding that he had asked both Swift and his wife, Kim Kardashian,
for permission to publish the line. On February 8, 2015, at the 57th Annual Grammy Awards, West walked on stage as
Beck was accepting his award for Album of the Year and then walked off stage, making everyone think he was joking
around. After the awards show, West stated in an interview that he was not joking and that "Beck needs to respect
artistry, he should have given his award to Beyoncé". On February 26, 2015, he publicly apologized to Beck on Twitter.
On August 30, 2015, West was presented with the Michael Jackson Video Vanguard Award at the MTV Video Music Awards.
In his acceptance speech, he stated, "Y'all might be thinking right now, 'I wonder did he smoke something before
he came out here?' And the answer is: 'Yes, I rolled up a little something. I knocked the edge off.'" At the end
of his speech, he announced, "I have decided in 2020 to run for president." Music fans around the globe have turned
to Change.org to try to block West's participation in various events. The largest unsuccessful petition, directed
at the Glastonbury Festival 2015, drew more than 133,000 signatories stating they would prefer a rock band to headline. On July
20, 2015, within five days of West's announcement as the headlining artist of the closing ceremonies of the 2015
Pan American Games, a Change.org petition collected over 50,000 signatures for West's removal as headliner, arguing
that the headlining artist should be Canadian. Near the end of his Pan American Games closing ceremony performance,
West closed the show by tossing his faulty microphone into the air and walking off stage. West began
an on-and-off relationship with designer Alexis Phifer in 2002, and they became engaged in August 2006. The pair
ended their 18-month engagement in 2008. West subsequently dated model Amber Rose from 2008 until the summer of 2010.
West began dating reality star and longtime friend Kim Kardashian in April 2012. West and Kardashian became engaged
in October 2013, and married on May 24, 2014 at Forte di Belvedere in Florence, Italy. Their private ceremony was
subject to widespread mainstream coverage, with West taking issue with the couple's portrayal in the media. They
have two children: daughter North "Nori" West (born June 15, 2013) and son Saint West (born December 5, 2015). In
April 2015, West and Kardashian traveled to Jerusalem to have North baptized in the Armenian Apostolic Church at
the Cathedral of St. James. The couple's high status and respective careers have resulted in their relationship becoming
subject to heavy media coverage; The New York Times referred to their marriage as "a historic blizzard of celebrity."
On November 10, 2007, at approximately 7:35 pm, paramedics responding to an emergency call transported West's mother,
Donda West, to the nearby Centinela Freeman Hospital in Marina del Rey, California. She was unresponsive in the emergency
room, and after resuscitation attempts, doctors pronounced her dead at approximately 8:30 pm, at age 58. The Los
Angeles County coroner's office said in January 2008 that Donda West had died of heart disease while suffering "multiple
post-operative factors" after plastic surgery. She had undergone liposuction and breast reduction. Beverly Hills
plastic surgeon Andre Aboolian had refused to perform the surgery because she had a health condition that placed her
at risk for a heart attack. Aboolian referred her to an internist to investigate her cardiac issue. She never met
with the doctor recommended by Aboolian and had the procedures performed by a third doctor, Jan Adams. Adams sent
condolences to Donda West's family but declined to publicly discuss the procedure, citing confidentiality. West's
family, through celebrity attorney Ed McPherson, filed complaints with the Medical Board against Adams and Aboolian
for violating patient confidentiality following her death. Adams had previously been under scrutiny by the medical
board. He appeared on Larry King Live on November 20, 2007, but left before speaking. Two days later, he appeared
again, with his attorney, stating he was there to "defend himself". He said that the recently released autopsy results
"spoke for themselves". The final coroner's report, dated January 10, 2008, concluded that Donda West died of "coronary
artery disease and multiple post-operative factors due to or as a consequence of liposuction and mammoplasty". The
funeral and burial for Donda West were held in Oklahoma City on November 20, 2007. West played his first concert following
the funeral at The O2 in London on November 22. He dedicated a performance of "Hey Mama", as well as a cover of Journey's
"Don't Stop Believin'", to his mother, and did so on all other dates of his Glow in the Dark tour. At a December
2008 press conference in New Zealand, West spoke about his mother's death for the first time. "It was like losing
an arm and a leg and trying to walk through that", he told reporters. California governor Arnold Schwarzenegger signed
the "Donda West Law", legislation which makes it mandatory for patients to provide medical clearance for elective
cosmetic surgery. In December 2006, Robert "Evel" Knievel sued West for trademark infringement in West's video for
"Touch the Sky". Knievel took issue with a "sexually charged video" in which West takes on the persona of "Evel Kanyevel"
and attempts flying a rocket over a canyon. The suit claimed infringement on Knievel's trademarked name and likeness.
Knievel also claimed that the "vulgar and offensive" images depicted in the video damaged his reputation. The suit
sought monetary damages and an injunction to stop distribution of the video. West's attorneys argued that the music
video amounted to satire and therefore was covered under the First Amendment. Just days before his death in November
2007, Knievel amicably settled the suit after a visit from West, saying, "I thought he was a wonderful
guy and quite a gentleman." On September 11, 2008, West and his road manager/bodyguard Don "Don C." Crowley were
arrested at Los Angeles International Airport and booked on charges of felony vandalism after an altercation with
the paparazzi in which West and Crowley broke the photographers' cameras. West was later released from the Los Angeles
Police Department's Pacific Division station in Culver City on $20,000 bail bond. On September 26, 2008, the Los
Angeles County District Attorney's Office said it would not file felony counts against West over the incident. Instead,
the case file was forwarded to the city attorney's office, which on March 18, 2009 charged West with one count of misdemeanor
vandalism, one count of grand theft, and one count of battery, and charged his manager with three counts of each. West's
and Crowley's arraignment was delayed from an original date of April 14, 2009. West was arrested again on November
14, 2008 at the Hilton hotel near Gateshead after another scuffle involving a photographer outside the famous Tup
Tup Palace nightclub in Newcastle upon Tyne. He was later released "with no further action", according to a police
spokesperson. On July 19, 2013, West was leaving LAX as he was surrounded by dozens of paparazzi. West became increasingly
agitated as a photographer, Daniel Ramos, continued to ask him why people were not allowed to speak in his presence.
West then said, "I told you don't talk to me, right? You trying to get me in trouble so I steal off on you and have
to pay you like $250,000 and shit." He then allegedly charged the man and grabbed him and his camera. The incident,
captured by TMZ, lasted a few seconds before a female voice could be heard telling West to stop. West then
released the man and his camera and drove away from the scene. Medics were later called to the scene on behalf
of the photographer who was grabbed. It was reported that West could be charged with felony attempted robbery over the
matter. However, the charges were reduced to misdemeanor criminal battery and attempted grand theft. In March 2014,
West was sentenced to serve two years' probation for the misdemeanor battery conviction and required to attend 24
anger management sessions, perform 250 hours of community service and pay restitution to Ramos. After the success
of his song "Jesus Walks" from the album The College Dropout, West was questioned on his beliefs and said, "I will
say that I'm spiritual. I have accepted Jesus as my Savior. And I will say that I fall short every day." More recently,
in September 2014, West referred to himself as a Christian during one of his concerts. West is among the most critically
acclaimed artists of the twenty-first century, receiving praise from music critics, fans, fellow musicians, artists,
and wider cultural figures for his work. AllMusic editor Jason Birchmeier writes of his impact, "As his career progressed
throughout the early 21st century, West shattered certain stereotypes about rappers, becoming a superstar on his
own terms without adapting his appearance, his rhetoric, or his music to fit any one musical mold." Jon Caramanica
of The New York Times said that West has been "a frequent lightning rod for controversy, a bombastic figure who can
count rankling two presidents among his achievements, along with being a reliably dyspeptic presence at award shows
(when he attends them)." Village Voice Media senior editor Ben Westhoff dubbed him the greatest hip hop artist of
all time, writing that "he's made the best albums and changed the game the most, and his music is the most likely
to endure," while Complex called him the 21st century's "most important artist of any art form, of any genre." The
Guardian has compared West to David Bowie, arguing that "there is nobody else who can sell as many records as West
does (30m-odd album sales and counting) while remaining so resolutely experimental and capable of stirring things
up culturally and politically." West's middle-class background, flamboyant fashion sense and outspokenness have additionally
set him apart from other rappers. Early in his career, he was among the first rappers to publicly criticize the preponderance
of homophobia in hip hop. The sales competition between rapper 50 Cent's Curtis and West's Graduation altered the
direction of hip hop and helped pave the way for new rappers who did not follow the hardcore-gangster mold. Rosie
Swash of The Guardian viewed the sales competition as a historical moment in hip-hop, because it "highlighted the
diverging facets of hip-hop in the last decade; the former was gangsta rap for the noughties, while West was the
thinking man's alternative." Rolling Stone credited West with transforming hip hop's mainstream, "establishing a
style of introspective yet glossy rap [...]", and called him "as interesting and complicated a pop star as the 2000s
produced—a rapper who mastered, upped and moved beyond the hip-hop game, a producer who created a signature sound
and then abandoned it to his imitators, a flashy, free-spending sybarite with insightful things to say about college,
culture and economics, an egomaniac with more than enough artistic firepower to back it up." His 2008 album 808s
& Heartbreak polarized both listeners and critics upon its release, but was commercially successful and impacted
hip hop and pop stylistically, as it laid the groundwork for a new wave of artists who generally eschewed typical
rap braggadocio for intimate subject matter and introspection, including Frank Ocean, The Weeknd, Drake, Future,
Kid Cudi, Childish Gambino, Lil Durk, Chief Keef, and Soulja Boy. According to Ben Detrick of XXL magazine, West
effectively led a new wave of artists, including Kid Cudi, Wale, Lupe Fiasco, Kidz in the Hall, and Drake, who lacked
the interest or ability to rap about gunplay or drug-dealing. A substantial number of artists and other figures have
been influenced by, or complimented, West's work, including hip hop artists RZA of Wu-Tang Clan, Chuck D of Public
Enemy, and DJ Premier of Gang Starr. Both Drake and Casey Veggies have acknowledged being influenced directly by
West. Non-rap artists such as English singer-songwriters Adele and Lily Allen, New Zealand artist Lorde, rock band
Arctic Monkeys, pop singer Halsey, Sergio Pizzorno of English rock band Kasabian and American indie rock group MGMT
have cited West as an influence. Experimental and electronic artists such as James Blake, Daniel Lopatin, and Tim
Hecker have also cited West's work as an inspiration. Experimental rock pioneer and Velvet Underground founder Lou
Reed, in a review of West's album Yeezus, wrote that "the guy really, really, really is talented. He's really trying
to raise the bar. No one's near doing what he’s doing, it’s not even on the same planet." Musicians such as Paul
McCartney and Prince have also commended West's work. Tesla Motors CEO and inventor Elon Musk complimented West in a piece for Time magazine's 100 most influential people list. West's first six solo studio albums, all of which have gone platinum, have received numerous awards and critical acclaim. All of his albums have
been commercially successful, with Yeezus, his sixth solo album, becoming his fifth consecutive No. 1 album in the
U.S. upon release. West has had six songs exceed 3 million in digital sales as of December 2012, with "Gold Digger"
selling 3,086,000, "Stronger" selling 4,402,000, "Heartless" selling 3,742,000, "E.T." selling over 4,000,000, "Love
Lockdown" selling over 3,000,000, and "Niggas in Paris" selling over 3,000,000, placing him third in overall digital
sales of the past decade. He has sold over 30 million digital songs in the United States making him one of the best-selling
digital artists of all-time. As of 2013, West has won a total of 21 Grammy Awards, making him one of the most awarded
artists of all-time. About.com ranked Kanye West No. 8 on their "Top 50 Hip-Hop Producers" list. On May 16, 2008,
Kanye West was crowned by MTV as the year's No. 1 "Hottest MC in the Game." On December 17, 2010, Kanye West was
voted as the MTV Man of the Year by MTV. Billboard ranked Kanye West No. 3 on their list of Top 10 Producers of the
Decade. West ties with Bob Dylan for having topped the annual Pazz & Jop critic poll the most number of times ever,
with four number-one albums each. West has also been included twice in the Time 100 annual lists of the most influential
people in the world, as well as being listed in a number of Forbes annual lists. In its 2012 list of the "500 Greatest Albums of All Time", Rolling Stone included three of West's albums—The College Dropout at number 298, Late Registration
at number 118, and My Beautiful Dark Twisted Fantasy at number 353. The Pitchfork online music publication ranked
My Beautiful Dark Twisted Fantasy as the best album of the decade "so far"—between 2010 and 2014—on August 19, 2014, while Yeezus was ranked eighth on that 100-album list. During the same week, the song
"Runaway" (featuring Pusha T) was ranked in the third position in the publication's list of the 200 "best tracks"
released since 2010. West's outspoken views and ventures outside of music have received significant mainstream attention.
He has been a frequent source of controversy and public scrutiny for his conduct at award shows, on social media,
and in other public settings. His more publicized comments include his declaration that President George W. Bush
"doesn't care about black people" during a live 2005 television broadcast for Hurricane Katrina relief, and his interruption
of singer Taylor Swift at the 2009 MTV Video Music Awards. West's efforts as a designer include collaborations with
Nike, Louis Vuitton, and A.P.C. on both clothing and footwear, and have most prominently resulted in the Yeezy Season
collaboration with Adidas beginning in 2013. He is the founder and head of the creative content company DONDA.
Buddhism (/ˈbʊdɪzəm/) is a nontheistic religion[note 1] or philosophy (Sanskrit: धर्म dharma; Pali: धम्म dhamma) that encompasses
a variety of traditions, beliefs and spiritual practices largely based on teachings attributed to Gautama Buddha,
commonly known as the Buddha ("the awakened one"). According to Buddhist tradition, the Buddha lived and taught in
the eastern part of the Indian subcontinent, in present-day Nepal, sometime between the 6th and 4th centuries BCE.[note
1] He is recognized by Buddhists as an awakened or enlightened teacher who shared his insights to help sentient beings
end their suffering through the elimination of ignorance and craving. Buddhists believe that this is accomplished
through the direct understanding and perception of dependent origination and the Four Noble Truths. Two major extant
branches of Buddhism are generally recognized by scholars: Theravada ("The School of the Elders") and Mahayana ("The
Great Vehicle"). Vajrayana, a body of teachings attributed to Indian siddhas, may be viewed as a third branch or
merely a part of Mahayana. Theravada has a widespread following in Sri Lanka and Southeast Asia. Mahayana, which includes the traditions of Pure Land, Zen, Nichiren Buddhism, Shingon, and Tiantai (Tendai), is found throughout East Asia.
Tibetan Buddhism, which preserves the Vajrayana teachings of eighth century India, is practiced in regions surrounding
the Himalayas, Mongolia and Kalmykia. Buddhists number between an estimated 488 million[web 1] and 535 million, making Buddhism one of the world's major religions. In Theravada Buddhism, the ultimate goal is the attainment of the sublime
state of Nirvana, achieved by practicing the Noble Eightfold Path (also known as the Middle Way), thus escaping what
is seen as a cycle of suffering and rebirth. Mahayana Buddhism instead aspires to Buddhahood via the bodhisattva
path, a state wherein one remains in this cycle to help other beings reach awakening. Tibetan Buddhism aspires to
Buddhahood or rainbow body. Buddhist schools vary on the exact nature of the path to liberation, the importance and
canonicity of various teachings and scriptures, and especially their respective practices. Buddhism denies a creator
deity and posits that mundane deities such as Mahabrahma are misperceived to be a creator. The foundations of Buddhist
tradition and practice are the Three Jewels: the Buddha, the Dharma (the teachings), and the Sangha (the community).
Taking "refuge in the triple gem" has traditionally been a declaration and commitment to being on the Buddhist path,
and in general distinguishes a Buddhist from a non-Buddhist. Other practices include the Ten Meritorious Deeds: giving charity to reduce greed; following ethical precepts; renouncing conventional living and becoming
a monastic; the development of mindfulness and practice of meditation; cultivation of higher wisdom and discernment;
study of scriptures; devotional practices; ceremonies; and in the Mahayana tradition, invocation of buddhas and bodhisattvas.
The following narrative draws on the Nidānakathā of the Jataka tales of the Theravada, which is ascribed to Buddhaghoṣa in
the 5th century CE. Earlier biographies such as the Buddhacarita, the Lokottaravādin Mahāvastu, and the Sarvāstivādin
Lalitavistara Sūtra, give different accounts. Scholars are hesitant to make unqualified claims about the historical
facts of the Buddha's life. Most accept that he lived, taught and founded a monastic order, but do not consistently
accept all of the details contained in his biographies. According to author Michael Carrithers, while there are good
reasons to doubt the traditional account, "the outline of the life must be true: birth, maturity, renunciation, search,
awakening and liberation, teaching, death." In writing her biography of the Buddha, Karen Armstrong noted, "It is
obviously difficult, therefore, to write a biography of the Buddha that meets modern criteria, because we have very
little information that can be considered historically sound... [but] we can be reasonably confident Siddhatta Gotama
did indeed exist and that his disciples preserved the memory of his life and teachings as well as they could." The evidence of the early texts suggests that Siddhārtha Gautama was born in a community that was on the
periphery, both geographically and culturally, of the northeastern Indian subcontinent in the fifth century BCE.
It was either a small republic, in which case his father was an elected chieftain, or an oligarchy, in which case
his father was an oligarch. According to this narrative, shortly after the birth of young prince Gautama, an astrologer
named Asita visited the young prince's father, Suddhodana, and prophesied that Siddhartha would either become a great
king or renounce the material world to become a holy man, depending on whether he saw what life was like outside
the palace walls. Śuddhodana was determined to see his son become a king, so he prevented him from leaving the palace
grounds. But at age 29, despite his father's efforts, Gautama ventured beyond the palace several times. In a series
of encounters—known in Buddhist literature as the four sights—he learned of the suffering of ordinary people, encountering
an old man, a sick man, a corpse and, finally, an ascetic holy man, apparently content and at peace with the world.
These experiences prompted Gautama to abandon royal life and take up a spiritual quest. Gautama first went to study
with famous religious teachers of the day, and mastered the meditative attainments they taught. But he found that
they did not provide a permanent end to suffering, so he continued his quest. He next attempted an extreme asceticism,
which was a religious pursuit common among the śramaṇas, a religious culture distinct from the Vedic one. Gautama
underwent prolonged fasting, breath-holding, and exposure to pain. He almost starved himself to death in the process.
He realized that he had taken this kind of practice to its limit, and had not put an end to suffering. So in a pivotal
moment he accepted milk and rice from a village girl and changed his approach. He devoted himself to anapanasati
meditation, through which he discovered what Buddhists call the Middle Way (Skt. madhyamā-pratipad): a path of moderation
between the extremes of self-indulgence and self-mortification.[web 2][web 3] Gautama was now determined to complete
his spiritual quest. At the age of 35, he famously sat in meditation under a Ficus religiosa tree now called the
Bodhi Tree in the town of Bodh Gaya and vowed not to rise before achieving enlightenment. After many days, he finally
destroyed the fetters of his mind, thereby liberating himself from the cycle of suffering and rebirth, and arose
as a fully enlightened being (Skt. samyaksaṃbuddha). Soon thereafter, he attracted a band of followers and instituted
a monastic order. Now, as the Buddha, he spent the rest of his life teaching the path of awakening he had discovered,
traveling throughout the northeastern part of the Indian subcontinent, and died at the age of 80 (483 BCE) in Kushinagar,
India. A southern branch of the original fig tree, preserved in Anuradhapura, Sri Lanka, is known as the Jaya Sri Maha Bodhi. Within Buddhism, samsara is defined as the continual repetitive cycle of birth and death that arises from
ordinary beings' grasping and fixating on a self and experiences. Specifically, samsara refers to the process of
cycling through one rebirth after another within the six realms of existence,[note 2] where each realm can be understood as a physical realm or a psychological state characterized by a particular type of suffering. Samsara arises out of
avidya (ignorance) and is characterized by dukkha (suffering, anxiety, dissatisfaction). In the Buddhist view, liberation
from samsara is possible by following the Buddhist path. In Buddhism, Karma (from Sanskrit: "action, work") is the
force that drives saṃsāra—the cycle of suffering and rebirth for each being. Good, skillful deeds (Pali: "kusala")
and bad, unskillful (Pāli: "akusala") actions produce "seeds" in the mind that come to fruition either in this life
or in a subsequent rebirth. The avoidance of unwholesome actions and the cultivation of positive actions is called
sīla. Karma specifically refers to those actions of body, speech or mind that spring from mental intent (cetanā),
and bring about a consequence or phala "fruit" or vipāka "result". In Theravada Buddhism there can be no divine salvation
or forgiveness for one's karma, since it is a purely impersonal process that is a part of the makeup of the universe. In Mahayana Buddhism, the texts of certain Mahayana sutras (such as the Lotus Sutra, the Aṅgulimālīya Sūtra
and the Mahāyāna Mahāparinirvāṇa Sūtra) claim that the recitation or merely the hearing of their texts can expunge
great swathes of negative karma. Some forms of Buddhism (for example, Vajrayana) regard the recitation of mantras
as a means for cutting off of previous negative karma. The Japanese Pure Land teacher Genshin taught that Amitābha
has the power to destroy the karma that would otherwise bind one in saṃsāra. Rebirth refers to a process whereby
beings go through a succession of lifetimes as one of many possible forms of sentient life, each running from conception
to death. The doctrine of anattā (Sanskrit anātman) rejects the concepts of a permanent self or an unchanging, eternal
soul, as it is called in Hinduism and Christianity. According to Buddhism there ultimately is no such thing as a
self independent from the rest of the universe. Buddhists also refer to themselves as believers in the anatta doctrine—Nairatmyavadin or Anattavadin. Rebirth in subsequent existences must be understood as the continuation of
a dynamic, ever-changing process of pratītyasamutpāda ("dependent arising") determined by the laws of cause and effect
(karma), rather than of one being reincarnating from one existence to the next. These realms are further subdivided into 31 planes of existence.[web 4] Rebirths in some of the higher heavens, known as the Śuddhāvāsa Worlds or Pure
Abodes, can be attained only by skilled Buddhist practitioners known as anāgāmis (non-returners). Rebirths in the
Ārūpyadhātu (formless realms) can be attained by only those who can meditate on the arūpajhānas, the highest object
of meditation. According to East Asian and Tibetan Buddhism, there is an intermediate state (Tibetan "bardo") between
one life and the next. The orthodox Theravada position rejects this; however there are passages in the Samyutta Nikaya
of the Pali Canon that seem to lend support to the idea that the Buddha taught of an intermediate stage between one
life and the next. The teachings on the Four Noble Truths are regarded as central to the teachings of
Buddhism, and are said to provide a conceptual framework for Buddhist thought. These four truths explain the nature
of dukkha (suffering, anxiety, unsatisfactoriness), its causes, and how it can be overcome. The four truths are:[note 4] the truth of dukkha; the truth of the origin of dukkha; the truth of the cessation of dukkha; and the truth of the path leading to the cessation of dukkha. The first truth explains the nature of dukkha. Dukkha is commonly translated as "suffering", "anxiety", "unsatisfactoriness", "unease", etc., and it is said to have three aspects: the suffering of ordinary pain, the suffering produced by change, and the suffering of conditioned states. The second truth is that the origin of dukkha
can be known. Within the context of the four noble truths, the origin of dukkha is commonly explained as craving
(Pali: tanha) conditioned by ignorance (Pali: avijja). On a deeper level, the root cause of dukkha is identified
as ignorance (Pali: avijja) of the true nature of things. The third noble truth is that the complete cessation of
dukkha is possible, and the fourth noble truth identifies a path to this cessation.[note 7] The Noble Eightfold Path—the
fourth of the Buddha's Noble Truths—consists of a set of eight interconnected factors or conditions that, when developed together, lead to the cessation of dukkha. These eight factors are: Right View (or Right Understanding), Right Intention
(or Right Thought), Right Speech, Right Action, Right Livelihood, Right Effort, Right Mindfulness, and Right Concentration.
Ajahn Sucitto describes the path as "a mandala of interconnected factors that support and moderate each other." The
eight factors of the path are not to be understood as stages, in which each stage is completed before moving on to
the next. Rather, they are understood as eight significant dimensions of one's behaviour—mental, spoken, and bodily—that
operate in dependence on one another; taken together, they define a complete path, or way of living. While he searched
for enlightenment, Gautama combined the yoga practice of his teacher Kalama with what later became known as "the
immeasurables". Gautama thus invented a new kind of human, one without egotism. What Thich Nhat Hanh calls the "Four Immeasurable Minds" of love, compassion, joy, and equanimity
are also known as brahmaviharas, divine abodes, or simply as four immeasurables.[web 5] Pema Chödrön calls them the
"four limitless ones". Of the four, mettā or loving-kindness meditation is perhaps the best known.[web 5] The Four
Immeasurables are taught as a form of meditation that cultivates "wholesome attitudes towards all sentient beings."[web
6][web 7] An important guiding principle of Buddhist practice is the Middle Way (or Middle Path), which is said to
have been discovered by Gautama Buddha prior to his enlightenment. The Middle Way has several definitions, the best known being the path of moderation between the extremes of self-indulgence and self-mortification. Buddhist
scholars have produced a number of intellectual theories, philosophies and world view concepts (see, for example,
Abhidharma, Buddhist philosophy and Reality in Buddhism). Some schools of Buddhism discourage doctrinal study, while others regard it as essential practice. The concept of liberation (nirvāṇa)—the goal of the Buddhist path—is closely
related to overcoming ignorance (avidyā), a fundamental misunderstanding or mis-perception of the nature of reality.
In awakening to the true nature of the self and all phenomena one develops dispassion for the objects of clinging,
and is liberated from suffering (dukkha) and the cycle of incessant rebirths (saṃsāra). To this end, the Buddha recommended
viewing things as characterized by the three marks of existence. Impermanence (Pāli: anicca) expresses the Buddhist
notion that all compounded or conditioned phenomena (all things and experiences) are inconstant, unsteady, and impermanent.
Everything we can experience through our senses is made up of parts, and its existence is dependent on external conditions.
Everything is in constant flux, and so conditions and the thing itself are constantly changing. Things are constantly
coming into being, and ceasing to be. Since nothing lasts, there is no inherent or fixed nature to any object or
experience. According to the doctrine of impermanence, life embodies this flux in the aging process, the cycle of
rebirth (saṃsāra), and in any experience of loss. The doctrine asserts that because things are impermanent, attachment
to them is futile and leads to suffering (dukkha). Suffering (Pāli: दुक्ख dukkha; Sanskrit: दुःख duḥkha) is also a
central concept in Buddhism. The word roughly corresponds to a number of terms in English including suffering, pain,
unsatisfactoriness, sorrow, affliction, anxiety, dissatisfaction, discomfort, anguish, stress, misery, and frustration.
Although the term is often translated as "suffering", its philosophical meaning is more analogous to "disquietude"
as in the condition of being disturbed. As such, "suffering" is too narrow a translation with "negative emotional
connotations"[web 9] that can give the impression that the Buddhist view is pessimistic, but Buddhism seeks to be
neither pessimistic nor optimistic, but realistic. In English-language Buddhist literature translated from Pāli,
"dukkha" is often left untranslated, so as to encompass its full range of meaning.[note 8] Not-self (Pāli: anatta;
Sanskrit: anātman) is the third mark of existence. Upon careful examination, one finds that no phenomenon is really
"I" or "mine"; these concepts are in fact constructed by the mind. In the Nikayas anatta is not meant as a metaphysical
assertion, but as an approach for gaining release from suffering. In fact, the Buddha rejected both of the metaphysical
assertions "I have a Self" and "I have no Self" as ontological views that bind one to suffering.[note 9] When asked
if the self was identical with the body, the Buddha refused to answer. By analyzing the constantly changing physical
and mental constituents (skandhas) of a person or object, the practitioner comes to the conclusion that neither the respective parts nor the person as a whole constitutes a self. The doctrine of pratītyasamutpāda (Sanskrit; Pali: paticcasamuppāda;
Tibetan Wylie: rten cing 'brel bar 'byung ba; Chinese: 緣起) is an important part of Buddhist metaphysics. It states
that phenomena arise together in a mutually interdependent web of cause and effect. It is variously rendered into
English as "dependent origination", "conditioned genesis", "dependent relationship", "dependent co-arising", "interdependent
arising", or "contingency". The best-known application of the concept of pratītyasamutpāda is the scheme of Twelve
Nidānas (from Pāli "nidāna" meaning "cause, foundation, source or origin"), which explain the continuation of the
cycle of suffering and rebirth (saṃsāra) in detail.[note 10] The Twelve Nidānas describe a causal connection between
the subsequent characteristics or conditions of cyclic existence, each one giving rise to the next: ignorance, volitional formations, consciousness, name-and-form, the six sense bases, contact, feeling, craving, clinging, becoming, birth, and aging-and-death. Sentient beings
always suffer throughout saṃsāra until they free themselves from this suffering (dukkha) by attaining Nirvana. Then
the absence of the first Nidāna—ignorance—leads to the absence of the others. Mahayana Buddhism received significant
theoretical grounding from Nagarjuna (perhaps c. 150–250 CE), arguably the most influential scholar within the Mahayana
tradition. Nagarjuna's primary contribution to Buddhist philosophy was the systematic exposition of the concept of
śūnyatā, or "emptiness", widely attested in the Prajñāpāramitā sutras that emerged in his era. The concept of emptiness
brings together other key Buddhist doctrines, particularly anatta and dependent origination, to refute the metaphysics
of Sarvastivada and Sautrantika (extinct non-Mahayana schools). For Nagarjuna, it is not merely sentient beings that
are empty of ātman; all phenomena (dharmas) are without any svabhava (literally "own-nature" or "self-nature"), and
thus without any underlying essence; they are "empty" of being independent; thus the heterodox theories of svabhava
circulating at the time were refuted on the basis of the doctrines of early Buddhism. Nagarjuna's school of thought
is known as the Mādhyamaka. Some of the writings attributed to Nagarjuna made explicit references to Mahayana texts,
but his philosophy was argued within the parameters set out by the agamas. He may have arrived at his positions from
a desire to achieve a consistent exegesis of the Buddha's doctrine as recorded in the Canon. In the eyes of Nagarjuna
the Buddha was not merely a forerunner, but the very founder of the Mādhyamaka system. Sarvastivada teachings—which
were criticized by Nāgārjuna—were reformulated by scholars such as Vasubandhu and Asanga and were adapted into the
Yogacara school. While the Mādhyamaka school held that asserting the existence or non-existence of any ultimately
real thing was inappropriate, some exponents of Yogacara asserted that the mind and only the mind is ultimately real
(a doctrine known as cittamatra). Not all Yogacarins asserted that mind was truly existent; Vasubandhu and Asanga
in particular did not.[web 11] These two schools of thought, in opposition or synthesis, form the basis of subsequent
Mahayana metaphysics in the Indo-Tibetan tradition. Besides emptiness, Mahayana schools often place emphasis on the
notions of perfected spiritual insight (prajñāpāramitā) and Buddha-nature (tathāgatagarbha). There are conflicting
interpretations of the tathāgatagarbha in Mahāyāna thought. The idea may be traced to Abhidharma, and ultimately
to statements of the Buddha in the Nikāyas. In Tibetan Buddhism, according to the Sakya school, tathāgatagarbha is
the inseparability of the clarity and emptiness of one's mind. In Nyingma, tathāgatagarbha also generally refers
to inseparability of the clarity and emptiness of one's mind. According to the Gelug school, it is the potential
for sentient beings to awaken since they are empty (i.e. dependently originated). According to the Jonang school,
it refers to the innate qualities of the mind that express themselves as omniscience etc. when adventitious obscurations
are removed. The "Tathāgatagarbha Sutras" are a collection of Mahayana sutras that present a unique model of Buddha-nature.
Even though this collection was generally ignored in India, East Asian Buddhism accords some significance to these
texts. Nirvana (Sanskrit; Pali: "Nibbāna") means "cessation", "extinction" (of craving and ignorance and therefore
suffering and the cycle of involuntary rebirths (saṃsāra)), "extinguished", "quieted", "calmed"; it is also known
as "Awakening" or "Enlightenment" in the West. The term for anybody who has achieved nirvana, including the Buddha,
is arahant. Bodhi (Pāli and Sanskrit, in devanagari: बोधि) is a term applied to the experience of Awakening of arahants.
Bodhi literally means "awakening", but it is more commonly translated into English as "enlightenment". In Early Buddhism,
bodhi carried a meaning synonymous to nirvana, using only some different metaphors to describe the experience, which
implies the extinction of raga (greed, craving),[web 12] dosa (hate, aversion)[web 13] and moha (delusion).[web 14]
In the later school of Mahayana Buddhism, the status of nirvana was downgraded in some scriptures, coming to refer
only to the extinction of greed and hate, implying that delusion was still present in one who attained nirvana, and
that one needed to attain bodhi to eradicate delusion. Therefore, according to Mahayana Buddhism, the arahant has
attained only nirvana, thus still being subject to delusion, while the bodhisattva not only achieves nirvana but
full liberation from delusion as well. He thus attains bodhi and becomes a buddha. In Theravada Buddhism, bodhi and
nirvana carry the same meaning as in the early texts, that of being freed from greed, hate and delusion. The term
parinirvana is also encountered in Buddhism, and this generally refers to the complete nirvana attained by the arahant
at the moment of death, when the physical body expires. According to Buddhist traditions a Buddha is a fully awakened
being who has completely purified his mind of the three poisons of desire, aversion and ignorance. A Buddha is no
longer bound by Samsara and has ended the suffering which unawakened people experience in life. Buddhists do not
consider Siddhartha Gautama to have been the only Buddha. The Pali Canon refers to many previous ones (see List of
the 28 Buddhas), while the Mahayana tradition additionally has many Buddhas of celestial, rather than historical,
origin (see Amitabha or Vairocana as examples; for lists of many thousands of Buddha names, see Taishō Shinshū Daizōkyō
numbers 439–448). A common Theravada and Mahayana Buddhist belief is that the next Buddha will be one named Maitreya
(Pali: Metteyya). In Theravada doctrine, a person may awaken from the "sleep of ignorance" by directly realizing
the true nature of reality; such people are called arahants and occasionally buddhas. After numerous lifetimes of
spiritual striving, they have reached the end of the cycle of rebirth, no longer reincarnating as human, animal,
ghost, or other being. The commentaries to the Pali Canon classify these awakened beings into three types: sammasambuddhas, paccekabuddhas, and savakabuddhas. Bodhi and nirvana carry the same meaning, that of being freed from craving, hate, and delusion. In attaining bodhi, the
arahant has overcome these obstacles. As a further distinction, the extinction of only hatred and greed (in the sensory
context) with some residue of delusion, is called anagami. In the Mahayana, the Buddha tends not to be viewed as
merely human, but as the earthly projection of a beginningless and endless, omnipresent being (see Dharmakaya) beyond
the range and reach of thought. Moreover, in certain Mahayana sutras, the Buddha, Dharma and Sangha are viewed essentially
as One: all three are seen as the eternal Buddha himself. The Buddha's death is seen as an illusion; he is living on in other planes of existence, and monks are therefore permitted to offer "new truths" based on his input. Mahayana
also differs from Theravada in its concept of śūnyatā (that ultimately nothing has existence), and in its belief
in bodhisattvas (enlightened people who vow to continue being reborn until all beings can be enlightened).
The method of self-exertion or "self-power"—without reliance on an external force or being—stands in contrast to another
major form of Buddhism, Pure Land, which is characterized by utmost trust in the salvific "other-power" of Amitabha
Buddha. Pure Land Buddhism is a very widespread and perhaps the most faith-orientated manifestation of Buddhism and
centres upon the conviction that faith in Amitabha Buddha and the chanting of homage to his name liberates one at
death into the Blissful (安樂), Pure Land (淨土) of Amitabha Buddha. This Buddhic realm is variously construed as a foretaste
of Nirvana, or as essentially Nirvana itself. The great vow of Amitabha Buddha to rescue all beings from samsaric
suffering is viewed within Pure Land Buddhism as universally efficacious, if only one has faith in the power of that
vow or chants his name. Buddhists believe Gautama Buddha was the first to achieve enlightenment in this Buddha era
and is therefore credited with the establishment of Buddhism. A Buddha era is the stretch of history during which
people remember and practice the teachings of the earliest known Buddha. This Buddha era will end when all the knowledge,
evidence and teachings of Gautama Buddha have vanished. This belief therefore maintains that many Buddha eras have
started and ended throughout the course of human existence.[web 15][web 16] The Gautama Buddha, therefore, is the
Buddha of this era, who taught directly or indirectly to all other Buddhas in it (see types of Buddhas). In addition,
Mahayana Buddhists believe there are innumerable other Buddhas in other universes. A Theravada commentary says that
Buddhas arise one at a time in this world element, and not at all in others. The understandings of this matter reflect
widely differing interpretations of basic terms, such as "world realm", between the various schools of Buddhism.
The idea of the decline and gradual disappearance of the teaching has been influential in East Asian Buddhism. Pure
Land Buddhism holds that it has declined to the point where few are capable of following the path, so it may be best
to rely on the power of Amitābha. Bodhisattva means "enlightenment being", and generally refers to one who is on
the path to buddhahood. Traditionally, a bodhisattva is anyone who, motivated by great compassion, has generated
bodhicitta, which is a spontaneous wish to attain Buddhahood for the benefit of all sentient beings. Theravada Buddhism
primarily uses the term in relation to Gautama Buddha's previous existences, but has traditionally acknowledged and
respected the bodhisattva path as well.[web 17] According to Jan Nattier, the term Mahāyāna "Great Vehicle" was originally an honorary synonym for Bodhisattvayāna, the "Bodhisattva Vehicle." The Aṣṭasāhasrikā Prajñāpāramitā Sūtra, an early
and important Mahayana text, contains a simple and brief definition for the term bodhisattva: "Because he has enlightenment
as his aim, a bodhisattva-mahāsattva is so called." Mahayana Buddhism encourages everyone to become bodhisattvas
and to take the bodhisattva vow, where the practitioner promises to work for the complete enlightenment of all beings
by practicing the six pāramitās. According to Mahayana teachings, these perfections are: dāna, śīla, kṣanti, vīrya,
dhyāna, and prajñā. A famous saying by the 8th-century Indian Buddhist scholar-saint Shantideva, which the 14th Dalai
Lama often cites as his favourite verse, summarizes the Bodhisattva's intention (Bodhicitta) as follows: "For as
long as space endures, and for as long as living beings remain, until then may I too abide to dispel the misery of
the world." Devotion is an important part of the practice of most Buddhists. Devotional practices
include bowing, offerings, pilgrimage, and chanting. In Pure Land Buddhism, devotion to the Buddha Amitabha is the
main practice. In Nichiren Buddhism, devotion to the Lotus Sutra is the main practice. Buddhism traditionally incorporates
states of meditative absorption (Pali: jhāna; Skt: dhyāna). The most ancient sustained expression of yogic ideas
is found in the early sermons of the Buddha. One key innovative teaching of the Buddha was that meditative absorption
must be combined with liberating cognition. The difference between the Buddha's teaching and the yoga presented in
early Brahminic texts is striking. Meditative states alone are not an end, for according to the Buddha, even the
highest meditative state is not liberating. Instead of attaining a complete cessation of thought, some sort of mental
activity must take place: a liberating cognition, based on the practice of mindful awareness. Meditation was an aspect
of the practice of the yogis in the centuries preceding the Buddha. The Buddha built upon the yogis' concern with
introspection and developed their meditative techniques, but rejected their theories of liberation. In Buddhism,
mindfulness and clear awareness are to be developed at all times; in pre-Buddhist yogic practices there is no such
injunction. A yogi in the Brahmanical tradition is not to practice while defecating, for example, while a Buddhist
monastic should do so. Religious knowledge or "vision" was indicated as a result of practice both within and outside
of the Buddhist fold. According to the Samaññaphala Sutta, this sort of vision arose for the Buddhist adept as a
result of the perfection of "meditation" coupled with the perfection of "discipline" (Pali sīla; Skt. śīla). Some
of the Buddha's meditative techniques were shared with other traditions of his day, but the idea that ethics are
causally related to the attainment of "transcendent wisdom" (Pali paññā; Skt. prajñā) was original.[web 18] The Buddhist
texts are probably the earliest describing meditation techniques. They describe meditative practices and states that
existed before the Buddha as well as those first developed within Buddhism. Two Upanishads written after the rise
of Buddhism do contain full-fledged descriptions of yoga as a means to liberation. While there is no convincing evidence
for meditation in pre-Buddhist early Brahminic texts, Wynne argues that formless meditation originated in the Brahminic
or Shramanic tradition, based on strong parallels between Upanishadic cosmological statements and the meditative
goals of the two teachers of the Buddha as recorded in the early Buddhist texts. He mentions less likely possibilities
as well. Having argued that the cosmological statements in the Upanishads also reflect a contemplative tradition,
he argues that the Nasadiya Sukta contains evidence for a contemplative tradition, even as early as the late Rig
Vedic period. Traditionally, the first step in most Buddhist schools requires taking refuge in the Three Jewels (Sanskrit:
tri-ratna, Pāli: ti-ratana)[web 19] as the foundation of one's religious practice. The practice of taking refuge
on behalf of young or even unborn children is mentioned in the Majjhima Nikaya, recognized by most scholars as an
early text (cf. Infant baptism). Tibetan Buddhism sometimes adds a fourth refuge, in the lama. In Mahayana, the person
who chooses the bodhisattva path makes a vow or pledge, considered the ultimate expression of compassion. In Mahayana,
too, the Three Jewels are perceived as possessed of an eternal and unchanging essence and as having an irreversible
effect: "The Three Jewels have the quality of excellence. Just as real jewels never change their faculty and goodness,
whether praised or reviled, so are the Three Jewels (Refuges), because they have an eternal and immutable essence.
These Three Jewels bring a fruition that is changeless, for once one has reached Buddhahood, there is no possibility
of falling back to suffering." According to the scriptures, Gautama Buddha presented himself as a model. The Dharma
offers a refuge by providing guidelines for the alleviation of suffering and the attainment of Nirvana. The Sangha
is considered to provide a refuge by preserving the authentic teachings of the Buddha and providing further examples
that the truth of the Buddha's teachings is attainable. Śīla (Sanskrit) or sīla (Pāli) is usually translated into
English as "virtuous behavior", "morality", "moral discipline", "ethics" or "precept". It is an action committed
through the body, speech, or mind, and involves an intentional effort. It is one of the three practices (sīla, samādhi,
and paññā) and the second pāramitā. It refers to moral purity of thought, word, and deed. The four conditions of
śīla are chastity, calmness, quiet, and extinguishment. Śīla is the foundation of Samādhi/Bhāvana (Meditative cultivation)
or mind cultivation. Keeping the precepts promotes not only the peace of mind of the cultivator, which is internal,
but also peace in the community, which is external. According to the Law of Karma, keeping the precepts is meritorious
and it acts as causes that would bring about peaceful and happy effects. Keeping these precepts keeps the cultivator
from rebirth in the four woeful realms of existence. Śīla refers to overall principles of ethical behavior. There
are several levels of sīla, which correspond to "basic morality" (five precepts), "basic morality with asceticism"
(eight precepts), "novice monkhood" (ten precepts) and "monkhood" (Vinaya or Patimokkha). Lay people generally undertake
to live by the five precepts, which are common to all Buddhist schools. If they wish, they can choose to undertake
the eight precepts, which add basic asceticism. The precepts are not formulated as imperatives, but as training rules
that laypeople undertake voluntarily to facilitate practice. In Buddhist thought, the cultivation of dana and ethical
conduct themselves refine consciousness to such a level that rebirth in one of the lower heavens is likely, even
if there is no further Buddhist practice. There is nothing improper or un-Buddhist about limiting one's aims to this
level of attainment. In the eight precepts, the third precept on sexual misconduct is made more strict, and becomes
a precept of celibacy, and three further ascetic precepts are added. The complete list of ten precepts may be observed by laypeople
for short periods; in it, the seventh precept is partitioned into two and a tenth is added. Vinaya
is the specific moral code for monks and nuns. It includes the Patimokkha, a set of 227 rules for monks in the Theravadin
recension. The precise content of the vinayapitaka (scriptures on Vinaya) differs slightly according to different
schools, and different schools or subschools set different standards for the degree of adherence to Vinaya. Novice-monks
use the ten precepts, which are the basic precepts for monastics. Regarding the monastic rules, the Buddha constantly
reminds his hearers that it is the spirit that counts. On the other hand, the rules themselves are designed to assure
a satisfying life, and provide a perfect springboard for the higher attainments. Monastics are instructed by the
Buddha to live as "islands unto themselves". In this sense, living life as the vinaya prescribes it is, as one scholar
puts it: "more than merely a means to an end: it is very nearly the end in itself." In Eastern Buddhism, there is
also a distinctive Vinaya and ethics contained within the Mahayana Brahmajala Sutra (not to be confused with the
Pali text of that name) for Bodhisattvas, where, for example, the eating of meat is frowned upon and vegetarianism
is actively encouraged (see vegetarianism in Buddhism). In Japan, this has almost completely displaced the monastic
vinaya, and allows clergy to marry. Buddhist meditation is fundamentally concerned with two themes: transforming
the mind and using it to explore itself and other phenomena. According to Theravada Buddhism the Buddha taught two
types of meditation, samatha meditation (Sanskrit: śamatha) and vipassanā meditation (Sanskrit: vipaśyanā). In Chinese
Buddhism, these exist (translated chih kuan), but Chán (Zen) meditation is more popular. According to Peter Harvey,
whenever Buddhism has been healthy, not only monks, nuns, and married lamas, but also more committed lay people have
practiced meditation. According to Routledge's Encyclopedia of Buddhism, in contrast, throughout most of Buddhist
history before modern times, serious meditation by lay people has been unusual. The evidence of the early texts suggests
that at the time of the Buddha, many male and female lay practitioners did practice meditation, some even to the
point of proficiency in all eight jhānas (see the next section regarding these).[note 11] In the language of the
Noble Eightfold Path, samyaksamādhi is "right concentration". The primary means of cultivating samādhi is meditation.
Upon development of samādhi, one's mind becomes purified of defilement, calm, tranquil, and luminous. Once the meditator
achieves a strong and powerful concentration (jhāna, Sanskrit ध्यान dhyāna), his mind is ready to penetrate and gain
insight (vipassanā) into the ultimate nature of reality, eventually obtaining release from all suffering. The cultivation
of mindfulness is essential to mental concentration, which is needed to achieve insight. Samatha meditation starts
from being mindful of an object or idea, which is expanded to one's body, mind and entire surroundings, leading to
a state of total concentration and tranquility (jhāna). There are many variations in the style of meditation, from
sitting cross-legged or kneeling to chanting or walking. The most common method of meditation is to concentrate on
one's breath (anapanasati), because this practice can lead to both samatha and vipassanā. In Buddhist practice,
it is said that while samatha meditation can calm the mind, only vipassanā meditation can reveal how the mind was
disturbed to start with, which is what leads to insight knowledge (jñāna; Pāli ñāṇa) and understanding (prajñā Pāli
paññā), and thus can lead to nirvāṇa (Pāli nibbāna). When one is in jhana, all defilements are suppressed temporarily.
Only understanding (prajñā or vipassana) eradicates the defilements completely. Jhanas are also states in which Arahants
abide in order to rest. In Theravāda Buddhism, the cause of human existence and suffering is identified as craving,
which carries with it the various defilements. These various defilements are traditionally summed up as greed, hatred
and delusion. These are believed to be deeply rooted afflictions of the mind that create suffering and stress. To be free
from suffering and stress, these defilements must be permanently uprooted through internal investigation, analysis,
experience, and understanding of their true nature by using jhāna, a technique of the Noble
Eightfold Path. This then leads the meditator to realize the Four Noble Truths, Enlightenment and Nibbāna. Nibbāna
is the ultimate goal of Theravadins. Prajñā (Sanskrit) or paññā (Pāli) means wisdom that is based on a realization
of dependent origination, The Four Noble Truths and the three marks of existence. Prajñā is the wisdom that is able
to extinguish afflictions and bring about bodhi. It is spoken of as the principal means of attaining nirvāṇa, through
its revelation of the true nature of all things as dukkha (unsatisfactoriness), anicca (impermanence) and anatta
(not-self). Prajñā is also listed as the sixth of the six pāramitās of the Mahayana. Initially, prajñā is attained
at a conceptual level by means of listening to sermons (dharma talks), reading, studying, and sometimes reciting
Buddhist texts and engaging in discourse. Once the conceptual understanding is attained, it is applied to daily life
so that each Buddhist can verify the truth of the Buddha's teaching at a practical level. Notably, one could in theory
attain Nirvana at any point of practice, whether deep in meditation, listening to a sermon, conducting the business
of one's daily life, or any other activity. Zen Buddhism (禅), pronounced Chán in Chinese, seon in Korean or zen in
Japanese (derived from the Sanskrit term dhyāna, meaning "meditation") is a form of Buddhism that became popular
in China, Korea and Japan and that lays special emphasis on meditation.[note 12] Zen places less emphasis on scriptures
than some other forms of Buddhism and prefers to focus on direct spiritual breakthroughs to truth. Zen Buddhism is
divided into two main schools: Rinzai (臨済宗) and Sōtō (曹洞宗), the former greatly favouring the use in meditation of
the koan (公案, a meditative riddle or puzzle) as a device for spiritual breakthrough, and the latter (while certainly
employing koans) focusing more on shikantaza or "just sitting".[note 13] Zen Buddhist teaching is often full of paradox,
in order to loosen the grip of the ego and to facilitate the penetration into the realm of the True Self or Formless
Self, which is equated with the Buddha himself.[note 14] According to Zen master Kosho Uchiyama, when thoughts and
fixation on the little "I" are transcended, an Awakening to a universal, non-dual Self occurs: "When we let go of
thoughts and wake up to the reality of life that is working beyond them, we discover the Self that is living universal
non-dual life (before the separation into two) that pervades all living creatures and all existence." Thinking and
thought must therefore not be allowed to confine and bind one. Though based upon Mahayana, Tibeto-Mongolian Buddhism
is one of the schools that practice Vajrayana or "Diamond Vehicle" (also referred to as Mantrayāna, Tantrayāna, Tantric
Buddhism, or esoteric Buddhism). It accepts all the basic concepts of Mahāyāna, but also includes a vast array of
spiritual and physical techniques designed to enhance Buddhist practice. Tantric Buddhism is largely concerned with
ritual and meditative practices. One component of the Vajrayāna is harnessing psycho-physical energy through ritual,
visualization, physical exercises, and meditation as a means of developing the mind. Using these techniques, it is
claimed that a practitioner can achieve Buddhahood in one lifetime, or even in as little as three years. In the Tibetan
tradition, these practices can include sexual yoga, though only for some very advanced practitioners. Historically,
the roots of Buddhism lie in the religious thought of ancient India during the second half of the first millennium
BCE. That was a period of social and religious turmoil, as there was significant discontent with the sacrifices and
rituals of Vedic Brahmanism.[note 15] It was challenged by numerous new ascetic religious and philosophical groups
and teachings that broke with the Brahmanic tradition and rejected the authority of the Vedas and the Brahmans.[note
16] These groups, whose members were known as shramanas, were a continuation of a non-Vedic strand of Indian thought
distinct from Indo-Aryan Brahmanism.[note 17] Scholars have reasons to believe that ideas such as samsara, karma
(in the sense of the influence of morality on rebirth), and moksha originated in the shramanas, and were later adopted
by Brahmin orthodoxy.[note 18][note 19][note 20][note 21][note 22][note 23] This view is supported by a study of
the region where these notions originated. Buddhism arose in Greater Magadha, which stretched from Sravasti, the
capital of Kosala in the north-west, to Rajagrha in the south-east. This land, to the east of Aryavarta, the land
of the Aryas, was recognized as non-Vedic. Other Vedic texts reveal a dislike of the people of Magadha, in all probability
because the Magadhas at this time were not Brahmanised. It was not until the 2nd or 3rd centuries BCE
that the eastward spread of Brahmanism into Greater Magadha became significant. Ideas that developed in Greater Magadha
prior to this were not subject to Vedic influence. These include rebirth and karmic retribution that appear in a
number of movements in Greater Magadha, including Buddhism. These movements inherited notions of rebirth and karmic
retribution from an earlier culture. At the same time, these movements were influenced by, and in some
respects continued, philosophical thought within the Vedic tradition as reflected e.g. in the Upanishads. These movements
included, besides Buddhism, various skeptics (such as Sanjaya Belatthiputta), atomists (such as Pakudha Kaccayana),
materialists (such as Ajita Kesakambali), antinomians (such as Purana Kassapa); the most important ones in the 5th
century BCE were the Ajivikas, who emphasized the rule of fate, the Lokayata (materialists), the Ajnanas (agnostics)
and the Jains, who stressed that the soul must be freed from matter. Many of these new movements shared the same
conceptual vocabulary—atman ("Self"), buddha ("awakened one"), dhamma ("rule" or "law"), karma ("action"), nirvana
("extinguishing"), samsara ("eternal recurrence") and yoga ("spiritual practice").[note 24] The shramanas rejected
the Veda, and the authority of the brahmans, who claimed they possessed revealed truths not knowable by any ordinary
human means. Moreover, they declared that the entire Brahmanical system was fraudulent: a conspiracy of the brahmans
to enrich themselves by charging exorbitant fees to perform bogus rites and give useless advice. A particular target
of the Buddha's criticism was Vedic animal sacrifice.[web 18] He also mocked the Vedic "hymn of the cosmic man". However, the
Buddha was not anti-Vedic, and declared that the Veda in its true form was declared by "Kashyapa" to certain rishis,
who by severe penances had acquired the power to see by divine eyes. He names the Vedic rishis, and declared that
the original Veda of the rishis[note 25] was altered by a few Brahmins who introduced animal sacrifices. The Buddha
says that it was on account of this alteration of the true Veda that he refused to pay respect to the Vedas of his time. However,
he did not denounce the union with Brahman,[note 26] or the idea of the self uniting with the Self. At the same time,
the traditional Brahmanical religion itself gradually underwent profound changes, transforming it into what is recognized
as early Hinduism. Information about the oldest teachings may be obtained by analysis of the oldest texts. One method to obtain
information on the oldest core of Buddhism is to compare the oldest extant versions of the Theravadin Pali Canon
and other texts.[note 27] The reliability of these sources, and the possibility of drawing out a core of the oldest teachings,
is a matter of dispute. According to Vetter, inconsistencies
remain, and other methods must be applied to resolve those inconsistencies.[note 28] A core problem in the study
of early Buddhism is the relation between dhyana and insight. Schmithausen, in his often-cited article On some Aspects
of Descriptions or Theories of 'Liberating Insight' and 'Enlightenment' in Early Buddhism notes that the mention
of the four noble truths as constituting "liberating insight", which is attained after mastering the Rupa Jhanas,
is a later addition to texts such as Majjhima Nikaya 36. Bruce Matthews notes that there is no cohesive
presentation of karma in the Sutta Pitaka, which may mean that the doctrine was incidental to the main perspective
of early Buddhist soteriology. Schmithausen is a notable scholar who has questioned whether karma already played
a role in the theory of rebirth of earliest Buddhism.[note 32] According to Vetter, the Buddha at first
sought "the deathless" (amata/amrta), which is concerned with the here and now; only after this
realization did he become acquainted with the doctrine of rebirth. Bronkhorst disagrees, and concludes that the
Buddha "introduced a concept of karma that differed considerably from the commonly held views of his time." According
to Bronkhorst, it was not physical and mental activity as such that was seen as responsible for rebirth, but intention and
desire. According to Tilmann Vetter, the core of earliest Buddhism is the practice of dhyāna. Bronkhorst agrees that
dhyana was a Buddhist invention, whereas Norman notes that "the Buddha's way to release [...] was by means of meditative
practices." Discriminating insight into transiency as a separate path to liberation was a later development. According
to the Mahāsaccakasutta,[note 33] from the fourth jhana the Buddha gained bodhi. Yet, it is not clear what he was
awakened to. "Liberating insight" is a later addition to this text, and reflects a later development
and understanding in early Buddhism. The mention of the four truths as constituting
"liberating insight" introduces a logical problem, since the four truths depict a linear path of practice, the knowledge
of which is in itself not depicted as being liberating.[note 34] Although "Nibbāna" (Sanskrit: Nirvāna) is the common
term for the desired goal of this practice, many other terms can be found throughout the Nikayas, which are not specified.[note
35] According to Vetter, the description of the Buddhist path may initially have been as simple as the term "the
middle way". In time, this short description was elaborated, resulting in the description of the eightfold path.
According to both Bronkhorst and Anderson, the four truths became a substitution for prajna, or "liberating insight",
in the suttas in those texts where "liberating insight" was preceded by the four jhanas. According to Bronkhorst,
the four truths may not have been formulated in earliest Buddhism, and did not serve in earliest Buddhism as a description
of "liberating insight". Gotama's teachings may have been personal, "adjusted to the need of each person." The three
marks of existence may reflect Upanishadic or other influences. K.R. Norman supposes that these terms were already
in use at the Buddha's time, and were familiar to his hearers. The history of Indian Buddhism may be divided into
five periods: Early Buddhism (occasionally called Pre-sectarian Buddhism); Nikaya or Sectarian Buddhism, the period
of the early Buddhist schools; Early Mahayana Buddhism; Later Mahayana Buddhism; and Esoteric Buddhism
(also called Vajrayana Buddhism). Pre-sectarian Buddhism is the earliest phase of Buddhism, recognized by nearly
all scholars. Its main scriptures are the Vinaya Pitaka and the four principal Nikayas or Agamas. Certain basic teachings
appear in many places throughout the early texts, so most scholars conclude that Gautama Buddha must have taught
something similar to the Three marks of existence, the Five Aggregates, dependent origination, karma and rebirth,
the Four Noble Truths, the Noble Eightfold Path, and nirvana. Some scholars disagree, and have proposed many other
theories. According to the scriptures, soon after the parinirvāṇa (from Sanskrit: "highest extinguishment") of Gautama
Buddha, the first Buddhist council was held. As with any ancient Indian tradition, transmission of teaching was done
orally. The primary purpose of the assembly was to collectively recite the teachings to ensure that no errors occurred
in oral transmission. In the first council, Ānanda, a cousin of the Buddha and his personal attendant, was called
upon to recite the discourses (sūtras, Pāli suttas) of the Buddha, and, according to some sources, the abhidhamma.
Upāli, another disciple, recited the monastic rules (vinaya). Most scholars regard the traditional accounts of the
council as greatly exaggerated if not entirely fictitious.[note 36] Richard Gombrich notes that the Sangiti Sutta
(Digha Nikaya 33) describes Sariputta leading communal recitations of the Buddha's teaching for its preservation
during the Buddha's lifetime, and that something similar to the First Council must have taken place for the Buddhist
scriptures to be composed. According to most
scholars, at some period after the Second Council the Sangha began to break into separate factions.[note 37] The
various accounts differ as to when the actual schisms occurred. According to the Dipavamsa of the Pāli tradition,
they started immediately after the Second Council, the Puggalavada tradition places it in 137 AN, the Sarvastivada
tradition of Vasumitra says it was in the time of Ashoka and the Mahasanghika tradition places it much later, nearly
100 BCE. The root schism was between the Sthaviras and the Mahāsāṅghikas. The fortunate survival of accounts from
both sides of the dispute reveals disparate traditions. The Sthavira group offers two quite distinct reasons for
the schism. The Dipavamsa of the Theravāda says that the losing party in the Second Council dispute broke away in
protest and formed the Mahasanghika. This contradicts the Mahasanghikas' own vinaya, which shows them as on the same,
winning side. The Mahāsāṅghikas argued that the Sthaviras were trying to expand the vinaya and may also have challenged
what they perceived were excessive claims or inhumanly high criteria for arhatship. Both parties, therefore, appealed
to tradition. The Sthaviras gave rise to several schools, one of which was the Theravāda school. Originally, these
schisms were caused by disputes over vinaya, and monks following different schools of thought seem to have lived
happily together in the same monasteries, but eventually, by about 100 CE if not earlier, schisms were being caused
by doctrinal disagreements too. Following (or leading up to) the schisms, each Saṅgha started to accumulate an Abhidharma,
a detailed scholastic reworking of doctrinal material appearing in the Suttas, according to schematic classifications.
These Abhidharma texts do not contain systematic philosophical treatises, but summaries or numerical lists. Scholars
generally date these texts to around the 3rd century BCE, 100 to 200 years after the death of the Buddha. Therefore
the seven Abhidharma works are generally claimed not to represent the words of the Buddha himself, but those of disciples
and great scholars.[note 38] Every school had its own version of the Abhidharma, with different theories and different
texts. The different Abhidharmas of the various schools did not agree with each other. Scholars disagree on whether
the Mahasanghika school had an Abhidhamma Pitaka or not.[note 38] Several scholars have suggested that the Prajñāpāramitā
sūtras, which are among the earliest Mahāyāna sūtras, developed among the Mahāsāṃghika along the Kṛṣṇa River in the
Āndhra region of South India. The earliest Mahāyāna sūtras include the very first versions of the Prajñāpāramitā
genre, along with texts concerning Akṣobhya Buddha, which were probably written down in the 1st century BCE in the
south of India. Guang Xing states, "Several scholars have suggested that the Prajñāpāramitā probably developed among
the Mahāsāṃghikas in southern India, in the Āndhra country, on the Kṛṣṇa River." A.K. Warder believes that "the Mahāyāna
originated in the south of India and almost certainly in the Āndhra country." Anthony Barber and Sree Padma note
that "historians of Buddhist thought have been aware for quite some time that such pivotally important Mahayana Buddhist
thinkers as Nāgārjuna, Dignaga, Candrakīrti, Āryadeva, and Bhavaviveka, among many others, formulated their theories
while living in Buddhist communities in Āndhra." They note that the ancient Buddhist sites in the lower Kṛṣṇa Valley,
including Amaravati, Nāgārjunakoṇḍā and Jaggayyapeṭa "can be traced to at least the third century BCE, if not earlier."
Akira Hirakawa notes the "evidence suggests that many Early Mahayana scriptures originated in South India." There
is no evidence that Mahāyāna ever referred to a separate formal school or sect of Buddhism, but rather that it existed
as a certain set of ideals, and later doctrines, for bodhisattvas. Initially it was known as Bodhisattvayāna (the
"Vehicle of the Bodhisattvas"). Paul Williams has also noted that the Mahāyāna never had nor ever attempted to have
a separate Vinaya or ordination lineage from the early schools of Buddhism, and therefore each bhikṣu or bhikṣuṇī
adhering to the Mahāyāna formally belonged to an early school. This continues today with the Dharmaguptaka ordination
lineage in East Asia, and the Mūlasarvāstivāda ordination lineage in Tibetan Buddhism. Therefore Mahāyāna was never
a separate rival sect of the early schools. From the accounts of Chinese monks visiting India, we now know that both Mahāyāna and
non-Mahāyāna monks in India often lived in the same monasteries side by side. Much of the early extant evidence for
the origins of Mahāyāna comes from early Chinese translations of Mahāyāna texts. These Mahāyāna teachings were first
propagated into China by Lokakṣema, the first translator of Mahāyāna sūtras into Chinese during the 2nd century CE.[note
39] Some scholars have traditionally considered the earliest Mahāyāna sūtras to include the very first versions of
the Prajñāpāramitā series, along with texts concerning Akṣobhya Buddha, which were probably composed in the 1st century
BCE in the south of India.[note 40] During the period of Late Mahayana Buddhism, four major types of thought developed:
Madhyamaka, Yogacara, Tathagatagarbha, and Buddhist Logic as the last and most recent. In India, the two main philosophical
schools of the Mahayana were the Madhyamaka and the later Yogacara. According to Dan Lusthaus, Madhyamaka and Yogacara
have a great deal in common, and the commonality stems from early Buddhism. There were no great Indian teachers associated
with tathagatagarbha thought. Buddhism may have spread only slowly in India until the time of the Mauryan emperor
Ashoka, who was a public supporter of the religion. The support of Aśoka and his descendants led to the construction
of more stūpas (Buddhist religious memorials) and to efforts to spread Buddhism throughout the enlarged Maurya empire
and even into neighboring lands—particularly to the Iranian-speaking regions of Afghanistan and Central Asia, beyond
the Mauryas' northwest border, and to the island of Sri Lanka south of India. These two missions, in opposite directions,
would ultimately lead, in the first case to the spread of Buddhism into China, and in the second case, to the emergence
of Theravāda Buddhism and its spread from Sri Lanka to the coastal lands of Southeast Asia. This period marks the
first known spread of Buddhism beyond India. According to the edicts of Aśoka, emissaries were sent to various countries
west of India to spread Buddhism (Dharma), particularly in eastern provinces of the neighboring Seleucid Empire,
and even farther to Hellenistic kingdoms of the Mediterranean. It is a matter of disagreement among scholars whether
or not these emissaries were accompanied by Buddhist missionaries. The gradual spread of Buddhism into adjacent areas
meant that it came into contact with new ethnic groups. During this period Buddhism was exposed to a variety of
influences, from Persian and Greek civilization, to changing trends in non-Buddhist Indian religions—themselves influenced
by Buddhism. Striking examples of this syncretistic development can be seen in the emergence of Greek-speaking Buddhist
monarchs in the Indo-Greek Kingdom, and in the development of the Greco-Buddhist art of Gandhāra. A Greek king, Menander,
has even been immortalized in the Buddhist canon. The Theravada school spread south from India in the 3rd century
BCE, to Sri Lanka, Thailand and Burma, and later also to Indonesia. The Dharmaguptaka school spread (also in the 3rd century
BCE) north to Kashmir, Gandhara and Bactria (Afghanistan). The Silk Road transmission of Buddhism to China is most
commonly thought to have started in the late 2nd or the 1st century CE, though the literary sources are all open
to question.[note 41] The first documented translation efforts by foreign Buddhist monks in China were in the 2nd
century CE, probably as a consequence of the expansion of the Kushan Empire into the Chinese territory of the Tarim
Basin. In the 2nd century CE, Mahayana Sutras spread to China, and then to Korea and Japan, and were translated into
Chinese. During the Indian period of Esoteric Buddhism (from the 8th century onwards), Buddhism spread from India
to Tibet and Mongolia. By the late Middle Ages, Buddhism had become virtually extinct in India, although it continued
to exist in surrounding countries. It is now again gaining strength worldwide. China and India are now starting to
fund Buddhist shrines in various Asian countries as they compete for influence in the region.[web 20] Formal membership
varies between communities, but basic lay adherence is often defined in terms of a traditional formula in which the
practitioner takes refuge in The Three Jewels: the Buddha, the Dharma (the teachings of the Buddha), and the Sangha
(the Buddhist community). At the present time, the teachings of all three branches of Buddhism have spread throughout
the world, and Buddhist texts are increasingly translated into local languages. While in the West Buddhism is often
seen as exotic and progressive, in the East it is regarded as familiar and traditional. Buddhists in Asia are frequently
well organized and well funded. In countries such as Cambodia and Bhutan, it is recognized as the state religion
and receives government support. Modern influences increasingly lead to new forms of Buddhism that significantly
depart from traditional beliefs and practices. A number of modern movements or tendencies in Buddhism emerged during
the second half of the 20th century, including the Dalit Buddhist movement (also sometimes called 'neo-Buddhism'),
Engaged Buddhism, and the further development of various Western Buddhist traditions. In the second half of the 20th
century, a modern movement in Nichiren Buddhism, Soka Gakkai (Value Creation Society), emerged in Japan and spread
further to other countries. Soka Gakkai International (SGI) is a lay Buddhist movement linking more than 12 million
people around the world, and is currently described as "the most diverse" and "the largest lay Buddhist movement
in the world".[web 21] Buddhism is practiced by an estimated 488 million,[web 1] 495 million, or 535 million people
as of the 2010s, representing 7% to 8% of the world's total population. China is the country with the largest population
of Buddhists, approximately 244 million or 18.2% of its total population.[web 1] They are mostly followers of Chinese
schools of Mahayana, making this the largest body of Buddhist traditions. Mahayana, also practiced in broader East
Asia, is followed by over half of world Buddhists.[web 1] According to a demographic analysis reported by Peter Harvey
(2013): Mahayana has 360 million adherents; Theravada has 150 million adherents; and Vajrayana has 18.2 million adherents.
Seven million additional Buddhists are found outside of Asia. According to Johnson and Grim (2013), Buddhism has
grown from a total of 138 million adherents in 1910, of which 137 million were in Asia, to 495 million in 2010, of
which 487 million are in Asia. According to them, there was a fast annual growth of Buddhism in Pakistan, Saudi Arabia,
Lebanon and several Western European countries (1910–2010). More recently (2000–2010), the countries with highest
growth rates are Qatar, the United Arab Emirates, Iran and some African countries. Some scholars[note 44] use other
schemes. Buddhists themselves have a variety of other schemes. Hinayana (literally "lesser vehicle") is used by Mahayana
followers to name the family of early philosophical schools and traditions from which contemporary Theravada emerged,
but as this term is rooted in the Mahayana viewpoint and can be considered derogatory, a variety of other terms are
increasingly used instead, including Śrāvakayāna, Nikaya Buddhism, early Buddhist schools, sectarian Buddhism, conservative
Buddhism, mainstream Buddhism and non-Mahayana Buddhism. Not all traditions of Buddhism share the same philosophical
outlook, or treat the same concepts as central. Each tradition, however, does have its own core concepts, and some
comparisons can be drawn between them. For example, according to one Buddhist ecumenical organization,[web 23] several
concepts are common to both major Buddhist branches. Theravada ("Doctrine of the Elders", or "Ancient Doctrine") is the
oldest surviving Buddhist school. It is relatively conservative, and generally closest to early Buddhism. The name
Theravāda comes from the ancestral Sthāvirīya, one of the early Buddhist schools, from which the Theravadins claim
descent. After unsuccessfully trying to modify the Vinaya, a small group of "elderly members", i.e. sthaviras, broke
away from the majority Mahāsāṃghika during the Second Buddhist council, giving rise to the Sthavira sect. Sinhalese
Buddhist reformers in the late nineteenth and early twentieth centuries portrayed the Pali Canon as the original
version of scripture. They also emphasized Theravada being rational and scientific. Theravāda is primarily practiced
today in Sri Lanka, Burma, Laos, Thailand, Cambodia as well as small portions of China, Vietnam, Malaysia and Bangladesh.
It has a growing presence in the west. Theravadin Buddhists believe that personal effort is required to realize rebirth.
Monks follow the vinaya: meditating, teaching and serving their lay communities. Laypersons can perform good actions,
producing merit. Mahayana Buddhism flourished in India from the 5th century CE onwards, during the dynasty of the
Guptas. Mahāyāna centres of learning were established, the most important one being the Nālandā University in north-eastern
India. Mahayana schools recognize all or part of the Mahayana Sutras. Some of these sutras became for Mahayanists
a manifestation of the Buddha himself, and faith in and veneration of those texts are stated in some sutras (e.g.
the Lotus Sutra and the Mahaparinirvana Sutra) to lay the foundations for the later attainment of Buddhahood itself.
Native Mahayana Buddhism is practiced today in China, Japan, Korea, Singapore, parts of Russia and most of Vietnam
(also commonly referred to as "Eastern Buddhism"). The Buddhism practiced in Tibet, the Himalayan regions, and Mongolia
is also Mahayana in origin, but is discussed below under the heading of Vajrayana (also commonly referred to as "Northern
Buddhism"). There are a variety of strands in Eastern Buddhism, of which "the Pure Land school of Mahayana is the
most widely practised today". In most of this area, however, they are fused into a single unified form of Buddhism.
In Japan in particular, they form separate denominations, the five major ones being: Nichiren, peculiar to Japan;
Pure Land; Shingon, a form of Vajrayana; Tendai; and Zen. In Korea, nearly all Buddhists belong to the Chogye school,
which is officially Son (Zen), but with substantial elements from other traditions. Various classes of Vajrayana
literature developed as a result of royal courts sponsoring both Buddhism and Saivism. The Mañjusrimulakalpa, which
later came to be classified under Kriyatantra, states that mantras taught in the Saiva, Garuda and Vaisnava tantras
will be effective if applied by Buddhists since they were all taught originally by Manjushri. The Guhyasiddhi of
Padmavajra, a work associated with the Guhyasamaja tradition, prescribes acting as a Saiva guru and initiating members
into Saiva Siddhanta scriptures and mandalas. The Samvara tantra texts adopted the pitha list from the Saiva text
Tantrasadbhava, introducing a copying error where a deity was mistaken for a place. Buddhist scriptures and other
texts exist in great variety. Different schools of Buddhism place varying levels of value on learning the various
texts. Some schools venerate certain texts as religious objects in themselves, while others take a more scholastic
approach. Buddhist scriptures are mainly written in Pāli, Tibetan, Mongolian, and Chinese. Some texts still exist
in Sanskrit and Buddhist Hybrid Sanskrit. Unlike many religions, Buddhism has no single central text that is universally
referred to by all traditions. However, some scholars have referred to the Vinaya Pitaka and the first four Nikayas
of the Sutta Pitaka as the common core of all Buddhist traditions. This could be considered misleading,
as Mahāyāna considers these merely a preliminary, and not a core, teaching. The Tibetan Buddhists have not even translated
most of the āgamas (though theoretically they recognize them) and they play no part in the religious life of either
clergy or laity in China and Japan. Other scholars say there is no universally accepted common core. The size and
complexity of the Buddhist canons have been seen by some (including Buddhist social reformer Babasaheb Ambedkar)
as presenting barriers to the wider understanding of Buddhist philosophy. Over the years, various attempts have been
made to synthesize a single Buddhist text that can encompass all of the major principles of Buddhism. In the Theravada
tradition, condensed 'study texts' were created that combined popular or influential scriptures into single volumes
that could be studied by novice monks. Later in Sri Lanka, the Dhammapada was championed as a unifying scripture.
Dwight Goddard collected a sample of Buddhist scriptures, with the emphasis on Zen, along with other classics of
Eastern philosophy, such as the Tao Te Ching, into his 'Buddhist Bible' in the 1920s. More recently, Dr. Babasaheb
Ambedkar attempted to create a single, combined document of Buddhist principles in "The Buddha and His Dhamma". Other
such efforts have persisted to present day, but currently there is no single text that represents all Buddhist traditions.
The Pāli Tipitaka, which means "three baskets", refers to the Vinaya Pitaka, the Sutta Pitaka, and the Abhidhamma
Pitaka. The Vinaya Pitaka contains disciplinary rules for the Buddhist monks and nuns, as well as explanations of
why and how these rules were instituted, supporting material, and doctrinal clarification. The Sutta Pitaka contains
discourses ascribed to Gautama Buddha. The Abhidhamma Pitaka contains material often described as systematic expositions
of Gautama Buddha's teachings. The Pāli Tipitaka is the only early Tipitaka (Sanskrit: Tripiṭaka) to survive
intact in its original language, but a number of early schools had their own recensions of the Tipitaka featuring
much of the same material. We have portions of the Tipitakas of the Sārvāstivāda, Dharmaguptaka, Sammitya, Mahāsaṅghika,
Kāśyapīya, and Mahīśāsaka schools, most of which survive in Chinese translation only. According to some sources,
some early schools of Buddhism had five or seven pitakas. According to the scriptures, soon after the death of the
Buddha, the first Buddhist council was held; a monk named Mahākāśyapa (Pāli: Mahākassapa) presided. The goal of the
council was to record the Buddha's teachings. Upāli recited the vinaya. Ānanda, the Buddha's personal attendant,
was called upon to recite the dhamma. These became the basis of the Tripitaka. However, this record was initially
transmitted orally in the form of chanting, and was committed to writing in the last century BCE. Both the sūtras and the
vinaya of every Buddhist school contain a wide variety of elements including discourses on the Dharma, commentaries
on other teachings, cosmological and cosmogonical texts, stories of Gautama Buddha's previous lives, and various
other subjects. Much of the material in the Canon is not specifically "Theravadin", but is instead the collection
of teachings that this school preserved from the early, non-sectarian body of teachings. According to Peter Harvey,
it contains material at odds with later Theravadin orthodoxy. He states: "The Theravadins, then, may have added texts
to the Canon for some time, but they do not appear to have tampered with what they already had from an earlier period."
The Mahayana sutras are a very broad genre of Buddhist scriptures that the Mahayana Buddhist tradition holds are
original teachings of the Buddha. Some adherents of Mahayana accept both the early teachings (including in this the
Sarvastivada Abhidharma, which was criticized by Nagarjuna and is in fact opposed to early Buddhist thought) and
the Mahayana sutras as authentic teachings of Gautama Buddha, and claim they were designed for different types of
persons and different levels of spiritual understanding. The Mahayana sutras often claim to articulate the Buddha's
deeper, more advanced doctrines, reserved for those who follow the bodhisattva path. That path is explained as being
built upon the motivation to liberate all living beings from unhappiness. Hence the name Mahāyāna (lit., the Great
Vehicle). According to Mahayana tradition, the Mahayana sutras were transmitted in secret, came from other Buddhas
or Bodhisattvas, or were preserved in non-human worlds because human beings at the time could not understand them.
Approximately six hundred Mahayana sutras have survived in Sanskrit or in Chinese or Tibetan translations. In addition,
East Asian Buddhism recognizes some sutras regarded by scholars as of Chinese rather than Indian origin. Generally,
scholars conclude that the Mahayana scriptures were composed from the 1st century CE onwards: "Large numbers of Mahayana
sutras were being composed in the period between the beginning of the common era and the fifth century", five centuries
after the historical Gautama Buddha. Some of these had their roots in other scriptures composed in the 1st century
BCE. It was not until after the 5th century CE that the Mahayana sutras started to influence the behavior of mainstream
Buddhists in India: "But outside of texts, at least in India, at exactly the same period, very different—in fact
seemingly older—ideas and aspirations appear to be motivating actual behavior, and old and established Hinayana
groups appear to be the only ones that are patronized and supported." These texts were apparently not universally
accepted among Indian Buddhists when they appeared; the pejorative label Hinayana was applied by Mahayana supporters
to those who rejected the Mahayana sutras. Only the Theravada school does not include the Mahayana scriptures in
its canon. As the modern Theravada school is descended from a branch of Buddhism that diverged and established itself
in Sri Lanka prior to the emergence of the Mahayana texts, debate exists as to whether the Theravada were historically
included in the hinayana designation; in the modern era, this label is seen as derogatory, and is generally avoided.
Scholar Isabelle Onians asserts that although "the Mahāyāna ... very occasionally referred contemptuously to earlier
Buddhism as the Hinayāna, the Inferior Way," "the preponderance of this name in the secondary literature is far out
of proportion to occurrences in the Indian texts." She notes that the term Śrāvakayāna was "the more politically
correct and much more usual" term used by Mahāyānists. Jonathan Silk has argued that the term "Hinayana" was used
to refer to whomever one wanted to criticize on any given occasion, and did not refer to any definite grouping of
Buddhists. Buddhism provides many opportunities for comparative study with a diverse range of subjects. For example,
Buddhism's emphasis on the Middle way not only provides a unique guideline for ethics but has also allowed Buddhism
to peacefully coexist with various differing beliefs, customs and institutions in countries where it has resided
throughout its history. Also, its moral and spiritual parallels with other systems of thought—for example, with various
tenets of Christianity—have been subjects of close study. In addition, the Buddhist concept of dependent origination
has been compared to modern scientific thought, as well as Western metaphysics. There are differences of opinion
on the question of whether or not Buddhism should be considered a religion. Many sources commonly refer to Buddhism
as a religion. For example:
American Idol is an American singing competition series created by Simon Fuller and produced by 19 Entertainment, and is
distributed by FremantleMedia North America. It began airing on Fox on June 11, 2002, as an addition to the Idols
format based on the British series Pop Idol and has since become one of the most successful shows in the history
of American television. The concept of the series is to find new solo recording artists, with the winner being determined
by the viewers in America. Winners chosen by viewers through telephone, Internet, and SMS text voting were Kelly
Clarkson, Ruben Studdard, Fantasia Barrino, Carrie Underwood, Taylor Hicks, Jordin Sparks, David Cook, Kris Allen,
Lee DeWyze, Scotty McCreery, Phillip Phillips, Candice Glover, Caleb Johnson, and Nick Fradiani. American Idol employs
a panel of judges who critique the contestants' performances. The original judges were record producer and music
manager Randy Jackson, pop singer and choreographer Paula Abdul and music executive and manager Simon Cowell. The
judging panel for the most recent season consisted of country singer Keith Urban, singer and actress Jennifer Lopez,
and jazz singer Harry Connick, Jr. The show was originally hosted by radio personality Ryan Seacrest and comedian
Brian Dunkleman, with Seacrest continuing on for the rest of the seasons. The success of American Idol has been described
as "unparalleled in broadcasting history". The series was also said by a rival TV executive to be "the most impactful
show in the history of television". It has become a recognized springboard for launching the career of many artists
as bona fide stars. According to Billboard magazine, in its first ten years, "Idol has spawned 345 Billboard chart-toppers
and a platoon of pop idols, including Kelly Clarkson, Carrie Underwood, Chris Daughtry, Fantasia, Ruben Studdard,
Jennifer Hudson, Clay Aiken, Adam Lambert and Jordin Sparks while remaining a TV ratings juggernaut." For an unprecedented
eight consecutive years, from the 2003–04 television season through the 2010–11 season, either its performance or
result show had been ranked number one in U.S. television ratings. The popularity of American Idol, however, declined,
and on May 11, 2015, Fox announced that the series would conclude its run in its fifteenth season. American Idol
was based on the British show Pop Idol created by Simon Fuller, which was in turn inspired by the New Zealand television
singing competition Popstars. Television producer Nigel Lythgoe saw it in Australia and helped bring it over to Britain.
Fuller was inspired by the idea from Popstars of employing a panel of judges to select singers in audition. He then
added other elements, such as telephone voting by the viewing public (which at the time was already in use in shows
such as the Eurovision Song Contest), the drama of backstories and real-life soap opera unfolding in real time. The
show debuted in 2001 in Britain with Lythgoe as showrunner—the executive producer and production leader—and Simon
Cowell as one of the judges, and was a big success with the viewing public. In 2001, Fuller, Cowell, and TV producer
Simon Jones attempted to sell the Pop Idol format to the United States, but the idea was met with poor response from
United States television networks. However, Rupert Murdoch, head of Fox's parent company, was persuaded to buy the
show by his daughter Elisabeth, who was a fan of the British show. The show was renamed American Idol: The Search
for a Superstar and debuted in the summer of 2002. Cowell was initially offered the job as showrunner but refused;
Lythgoe then took over that position. Much to Cowell's surprise, it became one of the hit shows for the summer that
year. The show, with the personal engagement of the viewers with the contestants through voting, and the presence
of the acid-tongued Cowell as a judge, grew into a phenomenon. By 2004, it had become the most-watched show in the
U.S., a position it then held for seven consecutive seasons. The show had originally planned on having four judges
following the Pop Idol format; however, only three judges had been found by the time of the audition round in the
first season, namely Randy Jackson, Paula Abdul and Simon Cowell. A fourth judge, radio DJ Stryker, was originally
chosen but he dropped out citing "image concerns". In the second season, New York radio personality Angie Martinez
had been hired as a fourth judge but withdrew after only a few days of auditions because she was not comfortable with
giving out criticism. The show decided to continue with the three-judge format until season eight. All three original
judges stayed on the judging panel for eight seasons. In season eight, Latin Grammy Award-nominated singer–songwriter
and record producer Kara DioGuardi was added as a fourth judge. She stayed for two seasons and left the show before
season ten. Paula Abdul left the show before season nine after failing to agree terms with the show producers. Emmy
Award-winning talk show host Ellen DeGeneres replaced Paula Abdul for that season, but left after just one season.
On January 11, 2010, Simon Cowell announced that he was leaving the show to pursue introducing the American version
of his show The X Factor to the USA for 2011. Jennifer Lopez and Steven Tyler joined the judging panel in season
ten, but both left after two seasons. They were replaced by three new judges, Mariah Carey, Nicki Minaj and Keith
Urban, who joined Randy Jackson in season 12. However both Carey and Minaj left after one season, and Randy Jackson
also announced that he would depart the show after twelve seasons as a judge but would return as a mentor. Urban
is the only judge from season 12 to return in season 13. He was joined by previous judge Jennifer Lopez and former
mentor Harry Connick, Jr. Lopez, Urban and Connick, Jr. all returned as judges for the show's fourteenth and fifteenth
seasons. Guest judges may occasionally be introduced. In season two, guest judges such as Lionel Richie and Robin
Gibb were used, and in season three Donna Summer, Quentin Tarantino and some of the mentors also joined as judges
to critique the performances in the final rounds. Guest judges were used in the audition rounds for seasons four,
six, nine, and fourteen such as Gene Simmons and LL Cool J in season four, Jewel and Olivia Newton-John in season
six, Shania Twain in season eight, Neil Patrick Harris, Avril Lavigne and Katy Perry in season nine, and season eight
runner-up, Adam Lambert, in season fourteen. The first season was co-hosted by Ryan Seacrest and Brian Dunkleman.
Dunkleman quit thereafter, making Seacrest the sole emcee of the show starting with season two. Beginning in the
tenth season, permanent mentors were brought in during the live shows to help guide the contestants
with their song choice and performance. Jimmy Iovine was the mentor in the tenth through twelfth seasons, former
judge Randy Jackson was the mentor for the thirteenth season and Scott Borchetta was the mentor for the fourteenth
and fifteenth season. The mentors regularly bring in guest mentors to aid them, including Akon, Alicia Keys, Lady
Gaga, and current judge Harry Connick, Jr. The eligible age range for contestants is currently fifteen to twenty-eight
years old. The initial age limit was sixteen to twenty-four in the first three seasons, but the upper limit was raised
to twenty-eight in season four, and the lower limit was reduced to fifteen in season ten. The contestants must be
legal U.S. residents, cannot have advanced to particular stages of the competition in previous seasons (varies depending
on the season, currently by the semi-final stage until season thirteen), and must not hold any current recording
or talent representation contract by the semi-final stage (in previous years by the audition stage). Contestants
go through at least three sets of cuts. The first is a brief audition with a few other contestants in front of selectors
which may include one of the show's producers. Although auditions can exceed 10,000 in each city, only a few hundred
of these make it past the preliminary round of auditions. Successful contestants then sing in front of producers,
where more may be cut. Only then can they proceed to audition in front of the judges, which is the only audition
stage shown on television. Those selected by the judges are sent to Hollywood. Between 10 and 60 people in each city
may make it to Hollywood. Once in Hollywood, the contestants perform individually or in groups in
a series of rounds. Until season ten, there were usually three rounds of eliminations in Hollywood. In the first
round the contestants emerged in groups but performed individually. For the next round, the contestants put themselves
in small groups and perform a song together. In the final round, the contestants perform solo with a song of their
choice a cappella or accompanied by a band—depending on the season. In seasons two and three, contestants were
also asked to write original lyrics or melody in an additional round after the first round. In season seven, the
group round was eliminated and contestants could, after a first solo performance and with the judges' approval, skip a second
solo round and move directly to the final Hollywood round. In season twelve, the executive producers split up the
females and males and chose the members to form the groups in the group round. In seasons ten and eleven, a further
round was added in Las Vegas, where the contestants perform in groups based on a theme, followed by one final solo
round to determine the semi-finalists. At the end of this stage of the competition, 24 to 36 contestants are selected
to move on to the semi-final stage. In season twelve the Las Vegas round became a Sudden Death round, where the judges
had to choose five guys and five girls each night (four nights) to make the top twenty. In season thirteen, a new
round called "Hollywood or Home" was added, where if the judges were uncertain about some contestants, those contestants
were required to perform soon after landing in Los Angeles, and those who failed to impress were sent back home before
they reached Hollywood. From the semi-finals onwards, the fate of the contestants is decided by public vote. During
the contestant's performance as well as the recap at the end, a toll-free telephone number for each contestant is
displayed on the screen. For a two-hour period after the episode ends (up to four hours for the finale) in each US
time zone, viewers may call or send a text message to their preferred contestant's telephone number, and each call
or text message is registered as a vote for that contestant. Viewers are allowed to vote as many times as they can
within the two-hour voting window. However, the show reserves the right to discard votes by power dialers. One or
more of the least popular contestants may be eliminated in successive weeks until a winner emerges. Over 110 million
votes were cast in the first season, and by season ten the seasonal total had increased to nearly 750 million. Voting
via text messaging was made available in the second season when AT&T Wireless joined as a sponsor of the show, and
7.5 million text messages were sent to American Idol that season. The number of text messages rapidly increased,
reaching 178 million texts by season eight. Online voting was offered for the first time in season ten. The votes
are counted and verified by Telescope Inc. In the first three seasons, the semi-finalists were split into different
groups to perform individually in their respective night. In season one, there were three groups of ten, with the
top three contestants from each group making the finals. In seasons two and three, there were four groups of eight,
and the top two of each selected. These seasons also featured a wildcard round, where contestants who failed to qualify
were given another chance. In season one, only one wildcard contestant was chosen by the judges, giving a total of
ten finalists. In seasons two and three, each of the three judges championed one contestant with the public advancing
a fourth into the finals, making 12 finalists in all. From seasons four to seven and nine, the twenty-four semi-finalists
were divided by gender in order to ensure an equal gender division in the top twelve. The men and women sang separately
on consecutive nights, and the bottom two in each group were eliminated each week until only six of each remained
to form the top twelve. The wildcard round returned in season eight, wherein there were three groups of twelve, with
three contestants moving forward (the highest male, the highest female, and the next highest-placed singer) for
each night, and four wildcards were chosen by the judges to produce a final 13. Starting in season ten, the girls and
boys perform on separate nights. In seasons ten and eleven, five of each gender were chosen, and three wildcards
were chosen by the judges to form a final 13. In season twelve, the top twenty semifinalists were split into gender
groups, with five of each gender advancing to form the final 10. In season thirteen, there were thirty semifinalists,
but only twenty semifinalists (ten for each gender) were chosen by the judges to perform on the live shows, with
five in each gender and three wildcards chosen by the judges composing the final 13. The finals are broadcast in
prime time from CBS Television City in Los Angeles, in front of a live studio audience. The finals lasted eight weeks
in season one and eleven weeks in seasons two through nine; seasons ten and eleven lasted twelve weeks, season twelve
lasted ten weeks, and season thirteen lasted thirteen weeks. Each finalist performs
songs based on a weekly theme which may be a musical genre such as Motown, disco, or big band, songs by artists such
as Michael Jackson, Elvis Presley or The Beatles, or more general themes such as Billboard Number 1 hits or songs
from the contestant's year of birth. Contestants usually work with a celebrity mentor related to the theme. In season
ten, Jimmy Iovine was brought in as a mentor for the season. Initially the contestants sing one song each week, but
this is increased to two songs from top four or five onwards, then three songs for the top two or three. The most
popular contestants are usually not revealed in the results show. Instead, typically the three contestants (two in
later rounds) who received the lowest number of votes are called to the center of the stage. One of these three is
usually sent to safety; however the two remaining are not necessarily the bottom two. The contestant with the fewest
votes is then revealed and eliminated from the competition. A montage of the eliminated contestant's time on the
show is played and they give their final performance. However, in season six, during the series' first ever Idol
Gives Back episode, no contestant was eliminated, but on the following week, two were sent home. Moreover, starting
in season eight, the judges may overturn the viewers' decision with a "Judges' Save" if they unanimously agree to do so. "The
save" can only be used once, and only up through the top five. In the eighth, ninth, tenth, and fourteenth seasons,
a double elimination then took place in the week following the activation of the save, but in the eleventh and thirteenth
seasons, a regular single elimination took place. The save was not activated in the twelfth season and consequently,
a non-elimination took place in the week after its expiration with the votes then carrying over into the following
week. The "Fan Save" was introduced in the fourteenth season. During the finals, viewers are given a five-minute
window to vote via Twitter for which of the contestants in danger of elimination will move on to the next show,
starting with the Top 8. The finale is the two-hour last episode of the season, culminating
in revealing the winner. For seasons one, three through six, and fourteen, it was broadcast from the Dolby Theatre,
which has an audience capacity of approximately 3,400. The finale for season two took place at the Gibson Amphitheatre,
which has an audience capacity of over 6,000. In seasons seven through thirteen, the venue was at the Nokia Theatre,
which holds an audience of over 7,000. The winner receives a record deal with a major label, which may be for up
to six albums, and secures a management contract with American Idol-affiliated 19 Management (which has the right
of first refusal to sign all contestants), as well as various lucrative contracts. All winners prior to season nine
reportedly earned at least $1 million in their first year as winner. All the runners-up of the first ten seasons,
as well as some other finalists, have also received record deals with major labels. However, starting in season
11, the runner-up may only be guaranteed a single-only deal. BMG/Sony (seasons 1–9) and UMG (seasons 10 onwards) had the
right of first refusal to sign contestants for three months after the season's finale. Starting in the fourteenth
season, the winner was signed with Big Machine Records. Prominent music mogul Clive Davis also produced some of the
selected contestants' albums, such as Kelly Clarkson, Clay Aiken, Fantasia Barrino and Diana DeGarmo. All top 10
(11 in seasons 10 and 12) finalists earn the privilege of going on a tour, where the participants may each earn a
six-figure sum. Each season premieres with the audition round, taking place in different cities. The audition episodes
typically feature a mix of potential finalists, interesting characters and woefully inadequate contestants. Each
successful contestant receives a golden ticket to proceed on to the next round in Hollywood. Based on their performances
during the Hollywood round (Las Vegas round for seasons 10 onwards), 24 to 36 contestants are selected by the judges
to participate in the semifinals. From the semifinal onwards the contestants perform their songs live, with the judges
making their critiques after each performance. The contestants are voted for by the viewing public, and the outcome
of the public votes is then revealed in the results show typically on the following night. The results shows feature
group performances by the contestants as well as guest performers. The Top-three results show also features the homecoming
events for the Top 3 finalists. The season reaches its climax in a two-hour results finale show, where the winner
of the season is revealed. With the exception of seasons one and two, the contestants in the semifinals onwards perform
in front of a studio audience. They perform with a full band in the finals. From season four to season nine, the
American Idol band was led by Rickey Minor; from season ten onwards, by Ray Chew. Behind the scenes, vocal coaches
and song arrangers such as Michael Orland and Debra Byrd may also assist the contestants. Starting with
season seven, contestants may perform with a musical instrument from the Hollywood rounds onwards. In the first nine
seasons, performances were usually aired live on Tuesday nights, followed by the results shows on Wednesdays in the
United States and Canada, but moved to Wednesdays and Thursdays in season ten. The first season of American Idol
debuted as a summer replacement show in June 2002 on the Fox network. It was co-hosted by Ryan Seacrest and Brian
Dunkleman. In the audition rounds, 121 contestants were selected from around 10,000 who attended the auditions. These
were cut to 30 for the semifinal, with ten going on to the finals. One semifinalist, Delano Cagnolatti, was disqualified
for lying to evade the show's age limit. One of the early favorites, Tamyra Gray, was eliminated at the top four,
the first of several such shock eliminations that were to be repeated in later seasons. Christina Christian was hospitalized
before the top six result show due to chest pains and palpitations, and she was eliminated while she was in the hospital.
Jim Verraros was the first openly gay contestant on the show; his sexual orientation was revealed in his online journal,
which was removed during the competition at the request of the show's producers over concerns that it might
unfairly influence votes. The final showdown was between Justin Guarini, one of the early favorites, and Kelly
Clarkson. Clarkson was not initially thought of as a contender, but impressed the judges with some good performances
in the final rounds, such as her performance of Aretha Franklin's "Natural Woman", and Betty Hutton's "Stuff Like
That There", and eventually won the crown on September 4, 2002. In what was to become a tradition, Clarkson performed
the coronation song during the finale, and released the song immediately after the season ended. The single, "A Moment
Like This", went on to break a 38-year-old record held by The Beatles for the biggest leap to number one on the Billboard
Hot 100. Guarini did not release any song immediately after the show and remains the only runner-up not to do so.
Both Clarkson and Guarini made a musical film, From Justin to Kelly, which was released in 2003 but was widely panned.
Clarkson has since become the most successful Idol contestant internationally, with worldwide album sales of more
than 23 million. Following the success of season one, the second season was moved up to air in January 2003. The
number of episodes increased, as did the show's budget and the charge for commercial spots. Dunkleman left the show,
leaving Seacrest as the lone host. Kristin Adams was a correspondent for this season. Corey Clark was disqualified
during the finals for having an undisclosed police record; however, he later alleged that he and Paula Abdul had
an affair while on the show and that this contributed to his expulsion. Clark also claimed that Abdul gave him preferential
treatment on the show due to their affair. The allegations were dismissed by Fox after an independent investigation.
Two semi-finalists were also disqualified that year – Jaered Andrews for an arrest on an assault charge, and Frenchie
Davis for having previously modeled for an adult website. Ruben Studdard emerged as the winner, beating Clay Aiken
by a small margin. Out of a total of 24 million votes, Studdard finished just 134,000 votes ahead of Aiken. This
slim margin of victory was controversial due to the large number of calls that failed to get through. In an interview
prior to season five, executive producer Nigel Lythgoe indicated that Aiken had led the fan voting from the wildcard
week onward until the finale. Both finalists found success after the show, but Aiken out-performed Studdard's coronation
song "Flying Without Wings" with his single release from the show "This Is the Night", as well as in their subsequent
album releases. The fourth-place finisher Josh Gracin also enjoyed some success as a country singer. Season three
premiered on January 19, 2004. One of the most talked-about contestants during the audition process was William Hung
whose off-key rendition of Ricky Martin's "She Bangs" received widespread attention. His exposure on Idol landed
him a record deal and surprisingly he became the third best-selling singer from that season. Much media attention
on the season had been focused on the three black singers, Fantasia Barrino, LaToya London, and Jennifer Hudson,
dubbed the Three Divas. All three unexpectedly landed on the bottom three on the top seven result show, with Hudson
controversially eliminated. Elton John, who was one of the mentors that season, called the results of the votes "incredibly
racist". The prolonged stays of John Stevens and Jasmine Trias in the finals, despite negative comments from the
judges, had aroused resentment, so much so that John Stevens reportedly received a death threat, which he dismissed
as a joke 'blown out of proportion'. The performance of "Summertime" by Barrino, later known simply as "Fantasia",
at the Top 8 was widely praised, and Simon Cowell considered it his favorite Idol moment in the nine seasons he was
on the show. Fantasia and Diana DeGarmo were the last two finalists, and Fantasia was crowned as the winner. Fantasia
released as her coronation single "I Believe", a song co-written by season one finalist Tamyra Gray, and DeGarmo
released "Dreams". Fantasia went on to gain some successes as a recording artist, while Hudson, who placed seventh,
became the only Idol contestant so far to win both an Academy Award and a Grammy. Season four premiered on January
18, 2005; this was the first season of the series to be aired in high definition, although the finale of season
three was also aired in high definition. The number of those attending the auditions had by now increased to over
100,000 from the 10,000 of the first season. The age limit was raised to 28 this season, and among those who benefited
from the new rule were Constantine Maroulis and Bo Bice, the two rockers of the show. The top 12 finalists originally
included Mario Vazquez, but he dropped out citing 'personal reasons' and was replaced by Nikko Smith. Later, an employee
of FremantleMedia, which produces the show, sued the company for wrongful termination, claiming that he was dismissed
after complaining about lewd behavior by Vazquez toward him during the show. During the top 11 week, due to a mix-up
with the contestants' telephone numbers, voting was repeated on what was normally the results night, with the result
reveal postponed until the following night. In May 2005, Carrie Underwood was announced the winner, with Bice the
runner-up. Both Underwood and Bice released the coronation song "Inside Your Heaven". Underwood has since sold 65
million records worldwide and become the most successful Idol contestant in the U.S., selling over 14 million album
copies there. Underwood has won seven Grammy Awards, the most of any American Idol alumnus. Season five began on
January 17, 2006. It remains the highest-rated season in the show's run so far. Two of the more prominent contestants
during the Hollywood round were the Brittenum twins, who were later disqualified for identity theft. Chris Daughtry's
performance of Fuel's "Hemorrhage (In My Hands)" on the show was widely praised and led to an invitation to join
the band as Fuel's new lead singer, an invitation he declined. His performance of Live's version of "I Walk the Line"
was well received by the judges but later criticized in some quarters for not crediting the arrangement to Live.
He was eliminated at the top four in a shocking result. On May 30, 2006, Taylor Hicks was named American Idol, with
Katharine McPhee the runner-up. "Do I Make You Proud" was released as Hicks' first single, and "My Destiny" as McPhee's.
Despite being eliminated earlier in the season, Chris Daughtry (as lead singer of the band Daughtry) became the most
successful recording artist from this season. Other contestants, such as Hicks, McPhee, Bucky Covington, Mandisa,
Kellie Pickler, and Elliott Yamin, have had varying levels of success. Season six began on Tuesday, January 16, 2007.
The premiere drew a massive audience of 37.3 million viewers, peaking in the last half hour at more than 41 million.
Teenager Sanjaya Malakar was the season's most talked-about contestant for his unusual hairdo, and for managing to
survive elimination for many weeks due in part to the weblog Vote for the Worst and satellite radio personality Howard
Stern, who both encouraged fans to vote for him. However, on April 18, Sanjaya was voted off. This season saw the
first Idol Gives Back telethon-inspired event, which raised more than $76 million in corporate and viewer donations.
No contestant was eliminated that week, but two (Phil Stacey and Chris Richardson) were eliminated the next. Melinda
Doolittle was eliminated in the final three. In the May 23 season finale, Jordin Sparks was declared the winner, with
Blake Lewis the runner-up. This season also saw the launch of the American Idol Songwriter contest, which allows
fans to vote for the "coronation song". Thousands of recordings of original songs were submitted by songwriters,
and 20 entries were selected for the public vote. The winning song, "This Is My Now", was performed by both finalists
during the finale and released by Sparks on May 24, 2007. Sparks has had some success as a recording artist post-Idol.
Season seven premiered on January 15, 2008, with a two-day, four-hour premiere. The media focused
on the professional status of the season seven contestants, the so-called 'ringers', many of whom, including Kristy
Lee Cook, Brooke White, Michael Johns, and in particular Carly Smithson, had prior recording contracts. Contestant
David Hernandez also attracted some attention due to his past employment as a stripper. For the finals, American
Idol debuted a new state-of-the-art set and stage on March 11, 2008, along with a new on-air look. David Cook's performance
of "Billie Jean" on top-ten night was lauded by the judges, but provoked controversy when they apparently mistook
the Chris Cornell arrangement to be David Cook's own even though the performance was introduced as Cornell's version.
Cornell himself said he was 'flattered' and praised David Cook's performance. David Cook was taken to the hospital
after the top-nine performance show due to heart palpitations and high blood pressure. David Archuleta's performance
of John Lennon's "Imagine" was considered by many as one of the best of the season. Jennifer Lopez, who was brought
in as a judge in season ten, called it a beautiful song-moment that she will never forget. Jason Castro's semi-final
performance of "Hallelujah" also received considerable attention, and it propelled Jeff Buckley's version of the
song to the top of the Billboard digital song chart. This was the first season in which contestants' recordings were
released onto iTunes after their performances, and although sales information was not released so as not to prejudice
the contest, leaked information indicated that contestants' songs frequently reached the top of iTunes sales charts.
The finalists were Cook and Archuleta. David Cook was announced the winner on May 21, 2008, the first rocker to win
the show. Both Cook and Archuleta had some success as recording artists with both selling over a million albums in
the U.S. The American Idol Songwriter contest was also held this season. From ten of the most popular submissions,
each of the final two contestants chose a song to perform, although neither of their selections was used as the "coronation
song". The winning song, "The Time of My Life", was recorded by David Cook and released on May 22, 2008. Season eight
premiered on January 13, 2009. Mike Darnell, the president of alternative programming for Fox, stated that the season
would focus more on the contestants' personal life. Much early attention on the show was therefore focused on the
widowhood of Danny Gokey. In the first major change to the judging panel, a fourth judge, Kara DioGuardi,
was introduced. This was also the first season without executive producer Nigel Lythgoe who left to focus on the
international versions of his show So You Think You Can Dance. The Hollywood round was moved to the Kodak Theatre
for 2009 and was also extended to two weeks. Idol Gives Back was canceled for this season due to the global recession
at the time. There were 13 finalists this season, but two were eliminated in the first result show of the finals.
A new feature introduced was the "Judges' Save", and Matt Giraud was saved from elimination at the top seven by the
judges when he received the fewest votes. The next week, Lil Rounds and Anoop Desai were eliminated. The two finalists
were Kris Allen and Adam Lambert, both of whom had previously landed in the bottom three at the top five. Allen won
the contest in the most controversial voting result since season two. It was claimed, later retracted, that 38 million
of the 100 million votes cast on the night came from Allen's home state of Arkansas alone, and that AT&T employees
unfairly influenced the votes by giving lessons on power-texting at viewing parties in Arkansas. Both Allen and Lambert
released the coronation song, "No Boundaries" which was co-written by DioGuardi. This is the first season in which
the winner failed to achieve gold album status, and none from that season achieved platinum album status in the U.S.
Season nine premiered on January 12, 2010. The upheaval at the judging panel continued. Ellen DeGeneres joined
as a judge to replace Paula Abdul at the start of Hollywood Week. Crystal Bowersox, who has type 1 diabetes, fell
ill due to diabetic ketoacidosis on the morning of the girls' performance night for the top 20 week and was hospitalized.
The schedule was rearranged so the boys performed first and she could perform the following night instead; she later
revealed that Ken Warwick, the show producer, wanted to disqualify her but she begged to be allowed to stay on the
show. Michael Lynche was the lowest vote getter at top nine and was given the Judges' Save. The next week Katie Stevens
and Andrew Garcia were eliminated. That week, Adam Lambert was invited back to be a mentor, the first Idol alum to
do so. Idol Gives Back returned this season on April 21, 2010, and raised $45 million. A special tribute to Simon
Cowell was presented in the finale for his final season with the show. Many figures from the show's past, including
Paula Abdul, made an appearance. The final two contestants were Lee DeWyze and Bowersox. DeWyze was declared the
winner during the May 26 finale. No new song was used as coronation song this year; instead, the two finalists each
released a cover song – DeWyze chose U2's "Beautiful Day", and Bowersox chose Patty Griffin's "Up to the Mountain".
This is the first season where neither finalist achieved significant album sales. Season ten of the series premiered
on January 19, 2011. Many changes were introduced this season, from the format to the personnel of the show. Jennifer
Lopez and Steven Tyler joined Randy Jackson as judges following the departures of Simon Cowell (who left to launch
the U.S. version of The X Factor), Kara DioGuardi (whose contract was not renewed) and Ellen DeGeneres, while Nigel
Lythgoe returned as executive producer. Jimmy Iovine, chairman of the Interscope Geffen A&M label group, the new
partner of American Idol, acted as the in-house mentor in place of weekly guest mentors, although in later episodes
special guest mentors such as Beyoncé, will.i.am and Lady Gaga were brought in. Season ten is the first to include
online auditions where contestants could submit a 40-second video audition via Myspace. Karen Rodriguez was one such
auditioner and reached the final rounds. One of the more prominent contestants this year was Chris Medina, whose
story of caring for his brain-damaged fiancée received widespread coverage. Medina was cut in the Top 40 round. Casey
Abrams, who suffers from ulcerative colitis, was hospitalized twice and missed the Top 13 result show. The judges
used their one save on Abrams in the Top 11 week, and as a result this was the first season that 11 finalists went on
tour instead of 10. The following week, Naima Adedapo and Thia Megia were both eliminated.
Pia Toscano, one of the presumed favorites to advance far in the season, was unexpectedly eliminated on April 7,
2011, finishing in ninth place. Her elimination drew criticisms from some former Idol contestants, as well as actor
Tom Hanks. The two finalists in 2011 were Lauren Alaina and Scotty McCreery, both teenage country singers. McCreery
won the competition on May 25, becoming the youngest male winner and the fourth male in a row to win American Idol.
McCreery released his first single, "I Love You This Big", as his coronation song, and Alaina released "Like My Mother
Does". McCreery's debut album, Clear as Day, became the first debut album by an Idol winner to reach No. 1 on the
US Billboard 200 since Ruben Studdard's Soulful in 2003, and he became the youngest male artist to reach No. 1 on
the Billboard 200. Season 11 premiered on January 18, 2012. On February 23, it was announced that one more finalist
would join the Top 24 making it the Top 25, and that was Jermaine Jones. However, on March 14, Jones was disqualified
in 12th place for concealing arrests and outstanding warrants. Jones denied the accusation that he concealed his
arrests. Finalist Phillip Phillips suffered from kidney pain and was taken to the hospital before the Top 13 results
show, and later received a medical procedure to alleviate a blockage caused by kidney stones. He was reported to have
had eight surgeries during his Idol run, and had considered quitting the show due to the pain. He underwent surgery to
remove the stones and reconstruct his kidney soon after the season had finished. Jessica Sanchez received the fewest
number of votes during the Top 7 week, and the judges decided to use their "save" option on her, making her the first
female recipient of the save. The following week, unlike previous seasons, Colton Dixon was the only contestant sent
home. Sanchez later made the final two, the first season where a recipient of the save reached the finale. Phillips
became the winner, beating Sanchez. Prior to the announcement of the winner, season five finalist Ace Young proposed
marriage to season three runner-up Diana DeGarmo on stage – which she accepted. Phillips released "Home" as his coronation
song, while Sanchez released "Change Nothing". Phillips' "Home" has since become the best selling of all coronation
songs. Season 12 premiered on January 16, 2013. Judges Jennifer Lopez and Steven Tyler left the show after two seasons.
This season's judging panel consisted of Randy Jackson, along with Mariah Carey, Keith Urban and Nicki Minaj. This
was the first season since season nine to have four judges on the panel. The pre-season buzz and the early episodes
of the show were dominated by the feud between the judges Minaj and Carey after a video of their dispute was leaked
to TMZ. The top 10 contestants comprised five males and five females; however, the males were eliminated consecutively
in the first five weeks, with Lazaro Arbos the last male to be eliminated. For the first time in the show's history,
the top 5 contestants were all female. It was also the first time that the judges' "save" was not used; after it
expired, the top four contestants were given an extra week to perform again, with their votes carried over and no
elimination that week. Twenty-three-year-old Candice Glover won the season, with Kree Harrison taking the runner-up spot. Glover
is the first female to win American Idol since Jordin Sparks. Glover released "I Am Beautiful" as a single while
Harrison released "All Cried Out" immediately after the show. Glover sold poorly with her debut album, and this is
also the first season that the runner-up was not signed by a music label. Towards the end of the season, Randy Jackson,
the last remaining of the original judges, announced that he would no longer serve as a judge to pursue other business
ventures. Both judges Mariah Carey and Nicki Minaj also decided to leave after one season to focus on their music
careers. The thirteenth season premiered on January 15, 2014, with Ryan Seacrest returning as host. Randy Jackson
and Keith Urban returned, though Jackson moved from the judging panel to the role of in-house mentor. Mariah Carey and
Nicki Minaj left the panel after one season. Former judge Jennifer Lopez and former mentor Harry Connick, Jr. joined
Urban on the panel. Also, Nigel Lythgoe and Ken Warwick were replaced as executive producers by Per Blankens, Jesse
Ignjatovic and Evan Pragger. Bill DeRonde replaced Warwick as a director of the audition episodes, while Louis J.
Horvitz replaced Gregg Gelfand as a director of the show. This was the first season where the contestants were permitted
to perform songs they wrote themselves in the final rounds. In the Top 8, Sam Woolf received the fewest votes, but
he was saved from elimination by the judges. The 500th episode of the series was the Top 3 performance night. Caleb
Johnson was named the winner of the season, with Jena Irene as the runner-up. Johnson released "As Long as You Love
Me" as his coronation single while Irene released "We Are One". The fourteenth season premiered on January 7, 2015.
Ryan Seacrest returned to host, while Jennifer Lopez, Keith Urban and Harry Connick, Jr. returned for their respective
fourth, third and second seasons as judges. Eighth season runner-up Adam Lambert filled in for Urban during the New
York City auditions. Randy Jackson did not return as the in-house mentor for this season. Changes this season include
only airing one episode a week during the final ten. Coca-Cola ended its longtime sponsorship of the show, and Ford
Motor Company maintained a reduced role. The winner of the season also received a recording contract with Big Machine
Records. Nick Fradiani won the season, defeating Clark Beckham. By winning, Fradiani became the first winner from
the Northeast region. Fradiani released "Beautiful Life" as his coronation single while Beckham released "Champion".
Jax, the third place finalist, also released a single called "Forcefield". Fox announced on May 11, 2015 that the
fifteenth season would be the final season of American Idol; as such, the season is expected to have an additional
focus on the program's alumni. Ryan Seacrest returns as host, with Harry Connick Jr., Keith Urban, and Jennifer Lopez
all returning for their respective third, fourth, and fifth seasons as judges. Since the show's inception in 2002,
ten of the fourteen Idol winners, including its first five, have come from the Southern United States. A large number
of other notable finalists during the series' run have also hailed from the American South, including Clay Aiken,
Kellie Pickler, and Chris Daughtry, who are all from North Carolina. In 2012, an analysis of the 131 contestants
who have appeared in the finals of all seasons of the show up to that point found that 48% have some connection to
the Southern United States. The show itself is popular in the Southern United States, with households in the Southeastern
United States 10% more likely to watch American Idol during the eighth season in 2009, and those in the East Central
region, such as Kentucky, were 16 percent more likely to tune into the series. Data from Nielsen SoundScan, a music-sales
tracking service, showed that of the 47 million CDs sold by Idol contestants through January 2010, 85 percent were
by contestants with ties to the American South. Theories given for the success of Southerners on Idol have been:
more versatility with musical genres, as the Southern U.S. is home to several music genre scenes; not having as many
opportunities to break into the pop music business; text-voting due to the South having the highest percentage of
cell-phone only households; and the strong heritage of music and singing, which is notable in the Bible Belt, where
it is in church that many people get their start in public singing. Others also suggest that the Southern character
of these contestants appeals to the South, as does local pride. According to season five winner Taylor Hicks, who
is from the state of Alabama, "People in the South have a lot of pride ... So, they're adamant about supporting the
contestants who do well from their state or region." For five consecutive seasons, starting in season seven, the
title was given to a white male who plays the guitar – a trend that Idol pundits call the "White guy with guitar"
or "WGWG" factor. Just hours before the season eleven finale, where Phillip Phillips was named the winner, Richard
Rushfield, author of the book American Idol: The Untold Story, said, "You have this alliance between young girls
and grandmas and they see it, not necessarily as a contest to create a pop star competing on the contemporary radio,
but as ... who's the nicest guy in a popularity contest," he says, "And that has led to this dynasty of four, and
possibly now five, consecutive, affable, very nice, good-looking white boys." The show had been criticized in earlier
seasons over the onerous contract contestants had to sign that gave excessive control to 19 Entertainment over their
future career, and handed a large part of their future earnings to the management. Voting results have been a consistent
source of controversy. The mechanism of voting had also aroused considerable criticisms, most notably in season two
when Ruben Studdard beat Clay Aiken in a close vote, and in season eight, when the massive increase in text votes
(100 million more text votes than season 7) fueled the texting controversy. Concerns about power voting have been
expressed from the very first season. Since 2004, votes also have been affected to a limited degree by online communities
such as DialIdol, Vote for the Worst (closed in 2013), and Vote for the Girls (started 2010). Idol Gives Back is
a special charity event started in season six featuring performances by celebrities and various fund-raising initiatives.
This event was also held in seasons seven and nine and has raised nearly $185 million in total. American Idol premiered
in June 2002 and became the surprise summer hit show of 2002. The first show drew 9.9 million viewers, giving Fox
the best viewing figure for the 8:30 pm spot in over a year. The audience steadily grew, and by finale night, the
audience had averaged 23 million, with more than 40 million watching some part of that show. That episode was placed
third amongst all age groups, but more importantly it led in the 18–49 demographic, the age group most valued by
advertisers. The growth continued into the next season, starting with a season premiere of 26.5 million. The season
attracted an average of 21.7 million viewers, and was placed second overall amongst the 18–49 age group. The finale
night when Ruben Studdard won over Clay Aiken was also the highest-rated ever American Idol episode at 38.1 million
for the final hour. By season three, the show had become the top show in the 18–49 demographic, a position it has
held for all subsequent years up to and including season ten, and its competition stages ranked first in the nationwide
overall ratings. By season four, American Idol had become the most watched series amongst all viewers on American
TV for the first time, with an average viewership of 26.8 million. The show reached its peak in season five with
numbers averaging 30.6 million per episode, and season five remains the highest-rated season of the series. Season
six premiered with the series' highest-rated debut episode and a few of its succeeding episodes rank among the most
watched episodes of American Idol. During this time, many television executives began to regard the show as a programming
force unlike any seen before, as its consistent dominance of up to two hours two or three nights a week exceeded
the 30- or 60-minute reach of previous hits such as NBC's The Cosby Show. The show was dubbed "the Death Star", and
competing networks often rearranged their schedules in order to minimize losses. However, season six also showed
a steady decline in viewership over the course of the season. The season finale saw a drop in ratings of 16% from
the previous year. Season six was the first season wherein the average results show rated higher than the competition
stages (unlike in the previous seasons), and became the second highest-rated of the series after the preceding season.
The loss of viewers continued into season seven. The premiere was down 11% among total viewers, and the results show
in which Kristy Lee Cook was eliminated delivered its lowest-rated Wednesday show among the 18–34 demo since the
first season in 2002. However, the ratings rebounded for the season seven finale with the excitement over the battle
of the Davids, and improved over season six as the series' third most watched finale. The strong finish of season
seven also helped Fox become the most watched TV network in the country for the first time since its inception, a
first ever in American television history for a non-Big Three major broadcast network. Overall ratings for the season
were down 10% from season six, which is in line with the fall in viewership across all networks due in part to the
2007–2008 Writers Guild of America strike. The declining trend nevertheless continued into season eight, as total viewer numbers fell by 5–10% for early episodes compared to season seven, and by 9% for the finale. In season nine, Idol's six-year unbeaten streak in the ratings was broken when NBC's coverage of the 2010 Winter Olympics
on February 17 beat Idol in the same time slot with 30.1 million viewers over Idol's 18.4 million. Nevertheless,
American Idol overall finished its ninth season as the most watched TV series for the sixth year running, breaking
the previous record of five consecutive seasons achieved by CBS' All in the Family and NBC's The Cosby Show. In season
ten, the total viewer numbers for the first week of shows fell 12–13%, and by up to 23% in the 18–49 demo compared
to season nine. Later episodes, however, retained viewers better, and the season ended on a high with a significant
increase in viewership for the finale – up 12% for the adults 18–49 demo and a 21% increase in total viewers from
the season nine finale. While overall viewership increased that season, the show's audience continued to age year on year – the median viewer age in season ten was 47.2, compared to 32.1 in the first season. By
the time of the 2010–11 television season, Fox was in its seventh consecutive season of victory overall in the 18–49
demographic ratings in the United States. Season eleven, however, suffered a steep drop in ratings, a drop attributed
by some to the arrival of new shows such as The Voice and The X Factor. The ratings for the first two episodes of season eleven fell 16–21% in overall viewer numbers and 24–27% in the 18–49 demo, while the season finale fell 27% in total viewers and 30% in the 18–49 demo. The average viewership for the season fell below 20 million viewers for the first time since 2003, a drop of 23% in total viewers and 30% in the 18–49 demo. For the first time in eight
years, American Idol lost the leading position in both total viewers and the 18–49 demo, coming in second to NBC Sunday Night Football, although the strength of Idol in its second year in the Wednesday–Thursday primetime slots helped Fox achieve the longest run of 18–49 demographic victories in the Nielsen ratings, eight straight years from 2004 to 2012. The loss of viewers continued into season 12, which saw the show hit a number of series lows in the 18–49 demo. The finale had 7.2 million fewer viewers than the previous season's, a drop of 44% in the 18–49 demo. The season's viewership averaged 13.3 million, a drop of 24% from the previous season. The thirteenth
season suffered a huge decline in the 18–49 demographic, a drop of 28% from the twelfth season, and American Idol
lost its Top 10 position in the Nielsen ratings by the end of the 2013–14 television season for the first time since
its entry to the rankings in 2003, although the series had never fallen out of the Nielsen Top 30 rankings since its 2002 debut. The continuing decline prompted further changes for season 14, including the loss of Coca-Cola as the show's major sponsor, and a decision to broadcast only one two-hour show
per week during the top 12 rounds (with results from the previous week integrated into the performance show, rather
than having a separate results show). On May 11, 2015, prior to the fourteenth season finale, Fox announced that
the fifteenth season of American Idol would be its last. Despite these changes, the show's ratings declined even more sharply: the fourteenth season finale was the lowest-rated finale ever, averaging only 8.03 million viewers. The enormous success of the show and the revenue it generated were transformative for
Fox Broadcasting Company. American Idol and fellow competing shows Survivor and Who Wants to Be a Millionaire were
jointly credited with expanding reality television programming in the United States in the 1990s and 2000s, and
Idol became the most watched non-scripted primetime television series for almost a decade, from 2003 to 2012, breaking
records on U.S. television (dominated by drama shows and sitcoms in the preceding decades). The show pushed Fox to
become the number one U.S. TV network amongst adults 18–49, the key demographic coveted by advertisers, for an unprecedented
eight consecutive years by 2012. Its success also helped lift the ratings of other shows that were scheduled around
it, such as House and Bones, and for years Idol served as Fox's strongest primetime platform for promoting the network's eventual hit shows of the 2010s, such as Glee and New Girl. The show, its creator
Simon Fuller claimed, "saved Fox". The show's massive success in the mid-2000s and early 2010s spawned a number of
imitating singing-competition shows, such as Rock Star, Nashville Star, The Voice, Rising Star, The Sing-Off, and
The X Factor. Its format also served as a blueprint for non-singing TV shows such as Dancing with the Stars and So
You Think You Can Dance, most of which contribute to the current highly competitive reality TV landscape on American
television. As one of the most successful shows in U.S. television history, American Idol has had a strong impact not
just on television, but also in the wider world of entertainment. It helped create a number of highly successful
recording artists, such as Kelly Clarkson, Daughtry and Carrie Underwood, as well as others of varying notability.
American Idol alumni have had success on record charts around the world; in the U.S. they achieved
345 number ones on the Billboard charts in its first ten years. According to Fred Bronson, author of books on the
Billboard charts, no other entity has ever created as many hit-making artists and best-selling albums and singles.
In 2007, American Idol alumni accounted for 2.1% of all music sales. Its alumni have had a massive impact on radio; by 2007, American Idol had become "a dominant force in radio" according to Rich Meyer, president of Mediabase, a research company that monitors radio stations. By 2010, four winners each had more than a million radio spins, with Kelly
Clarkson leading the field with over four million spins. As of 2013, American Idol alumni had sold over 59 million albums and 120 million singles and digital track downloads in their post-Idol careers in the United States
alone. The impact of American Idol is also strongly felt in musical theatre, where many Idol alumni have forged successful careers; the striking effect of former contestants on Broadway has been widely noted, and the casting of a popular Idol contestant can lead to significantly increased ticket sales. Other alumni have
gone on to work in television and films, the most notable being Jennifer Hudson who, on the recommendation of the
Idol vocal coach Debra Byrd, won a role in Dreamgirls and subsequently received an Academy Award for her performance.
Early reviews were mixed in their assessment. Ken Tucker of Entertainment Weekly considered that "As TV, American
Idol is crazily entertaining; as music, it's dust-mote inconsequential". Others, however, thought that "the most
striking aspect of the series was the genuine talent it revealed". It was also described as a "sadistic musical bake-off",
and "a romp in humiliation". Other aspects of the show have attracted criticisms. The product placement in the show
in particular was noted, and some critics were harsh about what they perceived as its blatant commercial calculations
– Karla Peterson of The San Diego Union-Tribune charged that American Idol is "a conniving multimedia monster" that
has "absorbed the sin of our debauched culture and spit them out in a lump of reconstituted evil". The decision to
send the season one winner to sing the national anthem at the Lincoln Memorial on the first anniversary of the September
11 attacks in 2002 was also poorly received by many. Lisa de Moraes of The Washington Post noted sarcastically that
"The terrorists have won" and, with a sideswipe at the show's commercialism and voting process, that the decision
as to who "gets to turn this important site into just another cog in the 'Great American Idol Marketing Mandala'
is in the hands of the millions of girls who have made American Idol a hit. Them and a handful of phone-redialer
geeks who have been clocking up to 10,000 calls each week for their contestant of choice (but who, according to Fox,
are in absolutely no way skewing the outcome)." Some later writers were more positive about the show. Michael Slezak, also of Entertainment Weekly, thought that "for all its bloated, synthetic, product-shilling, money-making
trappings, Idol provides a once-a-year chance for the average American to combat the evils of today's music business."
Singer Sheryl Crow, who would later act as a mentor on the show, however, took the view that the show "undermines
art in every way and promotes commercialism". Pop music critic Ann Powers nevertheless suggested that Idol has "reshaped
the American songbook", "led us toward a new way of viewing ourselves in relationship to mainstream popular culture",
and connects "the classic Hollywood dream to the multicentered popular culture of the future." Others focused on
the personalities in the show; Ramin Setoodeh of Newsweek accused judge Simon Cowell's cruel critiques of helping to establish a wider culture of meanness, writing that "Simon Cowell has dragged the rest of us in the mud with him." Some, such as singer John Mayer, disparaged the contestants, suggesting that those who appeared
on Idol are not real artists with self-respect. Some in the entertainment industry were critical of the star-making
aspect of the show. Usher, a mentor on the show, bemoaning the loss of the "true art form of music", thought that
shows like American Idol made it seem "so easy that everyone can do it, and that it can happen overnight", and that
"television is a lie". Musician Michael Feinstein, while acknowledging that the show had uncovered promising performers,
said that American Idol "isn't really about music. It's about all the bad aspects of the music business – the arrogance
of commerce, this sense of 'I know what will make this person a star; artists themselves don't know.' " That American
Idol is seen to be a fast track to success for its contestants has been a cause of resentment for some in the industry.
LeAnn Rimes, commenting on Carrie Underwood winning Best Female Artist in Country Music Awards over Faith Hill in
2006, said that "Carrie has not paid her dues long enough to fully deserve that award". It is a common theme that
has been echoed by many others. Elton John, who had appeared as a mentor in the show but turned down an offer to
be a judge on American Idol, commenting on talent shows in general, said that "there have been some good acts but
the only way to sustain a career is to pay your dues in small clubs". The success of the show's alumni however has
led to a more positive assessment of the show, and the show was described as having "proven it has a valid way to
pick talent and a proven way to sell records". While the industry is divided on the show's success, its impact is felt particularly strongly in the country music format. According to a CMT executive, reflecting on the success of Idol alumni
in the country genre, "if you want to try and get famous fast by going to a cattle call audition on TV, Idol reasonably
remains the first choice for anyone," and that country music and Idol "go together well". American Idol was nominated
for the Emmy Award for Outstanding Reality-Competition Program for nine years but never won. Director Bruce Gower won a Primetime Emmy Award for Outstanding Directing for a Variety, Music or Comedy Series in 2009, and the show won one Creative Arts Emmy each in 2007 and 2008, three in 2009, and two in 2011, as well as a Governors Award in 2007 for its Idol Gives
Back edition. It won the People's Choice Award, which honors the popular culture of the previous year as voted by
the public, for favorite competition/reality show in 2005, 2006, 2007, 2010, 2011 and 2012. It won the first Critics'
Choice Television Award in 2011 for Best Reality Competition. The dominance of American Idol in the ratings had made
it the most profitable show in U.S. TV for many years. The show was estimated to generate $900 million for the year
2004 through sales of TV ads, albums, merchandise and concert tickets. By season seven, the show was estimated to
earn around $900 million from its ad revenue alone, not including ancillary sponsorship deals and other income. One
estimate puts the total TV revenue for the first eight seasons of American Idol at $6.4 billion. Sponsors that bought fully integrated packages could expect a variety of promotions of their products, such as product placement, adverts integrated into the show, and other promotional opportunities. Other off-air promotional
partners pay for the rights to feature "Idol" branding on their packaging, products and marketing programs. American
Idol also partnered with Disney in its theme park attraction The American Idol Experience. American Idol became the
most expensive series on broadcast networks for advertisers starting in season four, and by the next season it had
broken the record in advertising rate for a regularly scheduled prime-time network series, selling over $700,000
for a 30-second slot, and reaching up to $1.3 million for the finale. Its ad prices reached a peak in season seven
at $737,000. Estimated revenue more than doubled from $404 million in season three to $870 million in season six.
While that declined from season eight onwards, it still earned significantly more than its nearest competitor, with
advertising revenue topping $800 million annually the next few seasons. However, the sharp drop in ratings in season
eleven also resulted in a sharp drop in advertising rate for season twelve, and the show lost its leading position
as the costliest show for advertisers. By 2014, ad revenue had fallen to $427 million, and a 30-second spot
went for less than $300,000. Ford Motor Company and Coca-Cola were two of the first sponsors of American Idol in
its first season. The sponsorship deal cost around $10 million in season one, rising to $35 million by season 7,
and between $50 million and $60 million in season 10. The third major sponsor, AT&T Wireless, joined in the second season but
ended after season 12, and Coca-Cola officially ended its sponsorship after season 13 amidst the declining ratings
of Idol in the mid-2010s. iTunes sponsored the show from season seven onwards. American Idol's prominent display of its sponsors' logos and products had been noted since the early seasons. By season six, Idol showed 4,349 product placements according
to Nielsen Media Research. The branded entertainment integration proved beneficial to its advertisers – promotion
of AT&T text-messaging as a means to vote successfully introduced the technology into the wider culture, and Coca-Cola
saw its brand equity increase during the show's run. Coca-Cola's archrival PepsiCo declined to sponsor American Idol at
the show's start. What the Los Angeles Times later called "missing one of the biggest marketing opportunities in
a generation" contributed to Pepsi losing market share, by 2010 falling to third place from second in the United
States. PepsiCo sponsored the American version of Cowell's The X Factor, until its cancellation, in hopes of not repeating its Idol mistake. The top ten finalists (eleven in season ten) toured at the end of every season. In the season twelve
tour a semi-finalist who won a sing-off was also added to the tour. Kellogg's Pop-Tarts was the sponsor for the first
seven seasons, and Guitar Hero was added for the season seven tour. M&M's Pretzel Chocolate Candies was a sponsor
of the season nine tour. The season five tour was the most successful, with a gross of over $35 million. American
Idol has traditionally released studio recordings of contestants' performances as well as the winner's coronation
single for sale. For the first five seasons, the recordings were released as a compilation album at the end of the
season. All five of these albums reached the top ten of the Billboard 200, which at the time made American Idol the most successful
soundtrack franchise of any motion picture or television program. Starting late in season five, individual performances
were released during the season as digital downloads, initially from the American Idol official website only. In
season seven the live performances and studio recordings were made available during the season from iTunes when it
joined as a sponsor. In season ten the weekly studio recordings were also released as a compilation digital album immediately after each performance night. 19 Recordings, a recording label owned by 19 Entertainment, holds the rights to phonographic material recorded by all the contestants. 19 originally partnered with Bertelsmann Music Group (BMG) to promote and distribute the recordings through its labels RCA Records, Arista Records, J Records, and Jive Records. From 2005 to 2007, BMG operated in a joint venture with Sony Music Entertainment known as Sony BMG Music Entertainment, and from 2008 to 2010 Sony Music handled the distribution following its acquisition of BMG's share. In 2010, Sony was replaced as the music label for American Idol by UMG's Interscope-Geffen-A&M Records. On February 14, 2009, The Walt Disney Company debuted "The American Idol
Experience" at its Disney's Hollywood Studios theme park at the Walt Disney World Resort in Florida. In this live
production, co-produced by 19 Entertainment, park guests chose from a list of songs and auditioned privately for
Disney cast members. Those selected then performed on a stage in a 1,000-seat theater replicating the Idol set. Three
judges, whose mannerisms and style mimicked those of the real Idol judges, critiqued the performances. Audience members
then voted for their favorite performer. There were several preliminary-round shows during the day that culminated
in a "finals" show in the evening where one of the winners of the previous rounds that day was selected as the overall
winner. The winner of the finals show received a "Dream Ticket" that granted them front-of-the-line privileges at
any future American Idol audition. The attraction closed on August 30, 2014. American Idol is broadcast to over 100
nations outside of the United States. In most nations these are not live broadcasts and may be tape delayed by several
days or weeks. In Canada, the first thirteen seasons of American Idol were aired live by CTV and/or CTV Two, in simulcast
with Fox. CTV dropped Idol after its thirteenth season and in August 2014, Yes TV announced that it had picked up
Canadian rights to American Idol beginning in its 2015 season. In Latin America, the show is broadcast and subtitled
by Sony Entertainment Television. In Southeast Asia, it is broadcast by STAR World every Thursday and Friday, nine or ten hours after its United States telecast. In the Philippines, it aired on the same schedule: from 2002 to 2007 on ABC 5, from 2008 to 2011 on QTV and then GMA News TV, and since 2012 on ETC. In Australia, it is aired a few hours after the U.S. telecast. It was aired on Network Ten from 2002 to
2007 and then again in 2013, on Fox8 from 2008 to 2012, and from season 13 onwards on the digital channel Eleven,
a sister channel to Network Ten. In the United Kingdom, episodes are aired one day after the U.S. broadcast on digital
channel ITV2. As of season 12, the episodes air on 5*. It is also aired in Ireland on TV3 two days after the telecast.
In Brazil and Israel, the show airs two days after its original broadcast. In the instances where the airing is delayed,
the shows may sometimes be combined into one episode to summarize the results. In Italy, the twelfth season was broadcast
by La3. Individual contestants have generated controversy in this competition for their past actions, or for being
'ringers' planted by the producers. A number of contestants had been disqualified for various reasons, such as for
having an existing contract or undisclosed criminal records, although the show had been accused of a double standard for disqualifying some but not others. In the seasonal rankings, based on average total viewers per episode, American Idol holds the distinction of having the longest winning streak in the Nielsen annual television ratings; it
became the highest-rated of all television programs in the United States overall for an unprecedented seven consecutive
years, or eight consecutive (and total) years when either its performance or result show was ranked number one overall.
The domestic dog (Canis lupus familiaris or Canis familiaris) is a domesticated canid which has been selectively bred for
millennia for various behaviors, sensory capabilities, and physical attributes. Although initially thought to have
originated as a manmade variant of an extant canid species (variously supposed as being the dhole, golden jackal,
or gray wolf), extensive genetic studies undertaken during the 2010s indicate that dogs diverged from an extinct
wolf-like canid in Eurasia 40,000 years ago. As the oldest domesticated animal, dogs' long association with people has allowed them to become uniquely attuned to human behavior, as well as to thrive on a starch-rich diet that would be
inadequate for other canid species. Dogs perform many roles for people, such as hunting, herding, pulling loads,
protection, assisting police and military, companionship, and, more recently, aiding handicapped individuals. This
impact on human society has given them the nickname "man's best friend" in the Western world. In some cultures, however,
dogs are a source of meat. The term "domestic dog" is generally used for both domesticated and feral varieties.
The English word dog comes from Middle English dogge, from Old English docga, a "powerful dog breed". The term may
possibly derive from Proto-Germanic *dukkōn, represented in Old English finger-docce ("finger-muscle"). The word
also shows the familiar petname diminutive -ga also seen in frogga "frog", picga "pig", stagga "stag", wicga "beetle,
worm", among others. The term dog may ultimately derive from the earliest layer of Proto-Indo-European vocabulary.
In 14th-century England, hound (from Old English: hund) was the general word for all domestic canines, and dog referred
to a subtype of hound, a group including the mastiff. It is believed this "dog" type was so common, it eventually
became the prototype of the category "hound". By the 16th century, dog had become the general word, and hound had
begun to refer only to types used for hunting. The word "hound" is ultimately derived from the Proto-Indo-European
word *kwon- "dog". In breeding circles, a male canine is referred to as a dog, while a female is called a bitch (Middle
English bicche, from Old English bicce, ultimately from Old Norse bikkja). A group of offspring is a litter. The
father of a litter is called the sire, and the mother is called the dam. Offspring are, in general, called pups or
puppies, from French poupée, until they are about a year old. The process of birth is whelping, from the Old English
word hwelp. In 1758, the taxonomist Linnaeus published in Systema Naturae a categorization of species which included
the Canis species. Canis is a Latin word meaning dog, and the list included the dog-like carnivores: the domestic
dog, wolves, foxes and jackals. The dog was classified as Canis familiaris, which means "Dog-family" or the family
dog. On the next page he recorded the wolf as Canis lupus, which means "Dog-wolf". In 1978, a review aimed at reducing
the number of recognized Canis species proposed that "Canis dingo is now generally regarded as a distinctive feral
domestic dog. Canis familiaris is used for domestic dogs, although taxonomically it should probably be synonymous
with Canis lupus." In 1982, the first edition of Mammal Species of the World listed Canis familiaris under Canis
lupus with the comment: "Probably ancestor of and conspecific with the domestic dog, familiaris. Canis familiaris
has page priority over Canis lupus, but both were published simultaneously in Linnaeus (1758), and Canis lupus has
been universally used for this species", which avoided classifying the wolf as the family dog. The dog is now listed
among the many other Latin-named subspecies of Canis lupus as Canis lupus familiaris. In 2003, the ICZN ruled in
its Opinion 2027 that if wild animals and their domesticated derivatives are regarded as one species, then the scientific
name of that species is the scientific name of the wild animal. In 2005, the third edition of Mammal Species of the
World upheld Opinion 2027 with the name Canis lupus and the note: "Includes the domestic dog as a subspecies, with the
dingo provisionally separate - artificial variants created by domestication and selective breeding". However, Canis
familiaris is sometimes used due to an ongoing nomenclature debate: wild and domestic animals are separately recognizable entities, the ICZN allows users a choice as to which name to use, and a number of internationally recognized researchers prefer Canis familiaris. Later genetic studies strongly supported dogs and gray wolves
forming two sister monophyletic clades within the one species, and that the common ancestor of dogs and extant wolves
is extinct. The origin of the domestic dog (Canis lupus familiaris or Canis familiaris) is not clear. Whole genome
sequencing indicates that the dog, the gray wolf and the extinct Taymyr wolf diverged at around the same time 27,000–40,000
years ago. These dates imply that the earliest dogs arose in the time of human hunter-gatherers and not agriculturists.
Modern dogs are more closely related to ancient wolf fossils that have been found in Europe than they are to modern
gray wolves. Nearly all dog breeds' genetic closeness to the gray wolf is due to admixture; the exception is several Arctic dog breeds, which are instead close to the Taymyr wolf of North Asia through admixture. Domestic dogs have been selectively bred
for millennia for various behaviors, sensory capabilities, and physical attributes. Modern dog breeds show more variation
in size, appearance, and behavior than any other domestic animal. Dogs are predators and scavengers, and like many
other predatory mammals, the dog has powerful muscles, fused wrist bones, a cardiovascular system that supports both
sprinting and endurance, and teeth for catching and tearing. Dogs are highly variable in height and weight. The smallest
known adult dog was a Yorkshire Terrier that stood only 6.3 cm (2.5 in) at the shoulder, 9.5 cm (3.7 in) in length
along the head-and-body, and weighed only 113 grams (4.0 oz). The largest known dog was an English Mastiff which
weighed 155.6 kg (343 lb) and was 250 cm (98 in) from the snout to the tail. The tallest dog is a Great Dane that
stands 106.7 cm (42.0 in) at the shoulder. The coats of domestic dogs are of two varieties: "double" being common
with dogs (as well as wolves) originating from colder climates, made up of a coarse guard hair and a soft down hair,
or "single", with the topcoat only. Domestic dogs often display the remnants of countershading, a common natural
camouflage pattern. A countershaded animal will have dark coloring on its upper surfaces and light coloring below,
which reduces its general visibility. Thus, many breeds will have an occasional "blaze", stripe, or "star" of white
fur on their chest or underside. There are many different shapes for dog tails: straight, straight up, sickle, curled,
or cork-screw. As with many canids, one of the primary functions of a dog's tail is to communicate its emotional
state, which can be important in getting along with others. In some hunting dogs, however, the tail is traditionally
docked to avoid injuries. In some breeds, such as the Braque du Bourbonnais, puppies can be born with a short tail
or no tail at all. Some breeds of dogs are prone to certain genetic ailments such as elbow and hip dysplasia, blindness,
deafness, pulmonic stenosis, cleft palate, and trick knees. Two serious medical conditions particularly affecting
dogs are pyometra, affecting unspayed females of all types and ages, and bloat, which affects the larger breeds or
deep-chested dogs. Both of these are acute conditions, and can kill rapidly. Dogs are also susceptible to parasites
such as fleas, ticks, and mites, as well as hookworms, tapeworms, roundworms, and heartworms. A number of common
human foods and household ingestibles are toxic to dogs, including chocolate solids (theobromine poisoning), onion
and garlic (thiosulphate, sulfoxide or disulfide poisoning), grapes and raisins, macadamia nuts, xylitol, as well
as various plants and other potentially ingested materials. The nicotine in tobacco can also be dangerous. Dogs can
ingest it by scavenging in garbage or ashtrays or by eating cigars and cigarettes. Signs include vomiting of large amounts (e.g., from eating cigar butts), diarrhea, abdominal pain, loss of coordination, collapse,
or death. Dogs are highly susceptible to theobromine poisoning, typically from ingestion of chocolate. Theobromine
is toxic to dogs because, although the dog's metabolism is capable of breaking down the chemical, the process is
so slow that even small amounts of chocolate can be fatal, especially dark chocolate. In 2013, a study found that
mixed breeds live on average 1.2 years longer than pure breeds, and that increasing body-weight was negatively correlated
with longevity (i.e. the heavier the dog the shorter its lifespan). The typical lifespan of dogs varies widely among
breeds, but for most the median longevity, the age at which half the dogs in a population have died and half are
still alive, ranges from 10 to 13 years. Individual dogs may live well beyond the median of their breed. The breed
with the shortest lifespan (among breeds for which there is a questionnaire survey with a reasonable sample size)
is the Dogue de Bordeaux, with a median longevity of about 5.2 years, but several breeds, including Miniature Bull
Terriers, Bloodhounds, and Irish Wolfhounds are nearly as short-lived, with median longevities of 6 to 7 years. The
longest-lived breeds, including Toy Poodles, Japanese Spitz, Border Terriers, and Tibetan Spaniels, have median longevities
of 14 to 15 years. Averaged across all sizes, the median longevity of mixed-breed dogs is one or more years longer than that of purebred dogs averaged across all breeds. The dog widely reported to be the longest-lived is
"Bluey", who died in 1939 and was claimed to be 29.5 years old at the time of his death. On 5 December 2011, Pusuke,
the world's oldest living dog recognized by the Guinness Book of World Records, died aged 26 years and 9 months. In domestic dogs, sexual maturity occurs around six to twelve months of age for both males and females, although this
can be delayed until up to two years old for some large breeds. This is the time at which female dogs will have their
first estrous cycle. They will experience subsequent estrous cycles biannually, during which the body prepares for
pregnancy. At the peak of the cycle, females will come into estrus, being mentally and physically receptive to copulation.
Because the ova survive and are capable of being fertilized for a week after ovulation, a female that mates with more than one male can bear a litter with more than one sire. Dogs bear their litters roughly 58 to 68 days after fertilization, with an average
of 63 days, although the length of gestation can vary. An average litter consists of about six puppies, though this
number may vary widely based on the breed of dog. In general, toy dogs produce from one to four puppies in each litter,
while much larger breeds may average as many as twelve. Neutering refers to the sterilization of animals, usually
by removal of the male's testicles or the female's ovaries and uterus, in order to eliminate the ability to procreate
and reduce sex drive. Because of the overpopulation of dogs in some countries, many animal control agencies, such
as the American Society for the Prevention of Cruelty to Animals (ASPCA), advise that dogs not intended for further
breeding should be neutered, so that they do not have undesired puppies that may have to later be euthanized. Neutering
reduces problems caused by hypersexuality, especially in male dogs. Spayed female dogs are less likely to develop
some forms of cancer, affecting mammary glands, ovaries, and other reproductive organs. However, neutering increases
the risk of urinary incontinence in female dogs, and prostate cancer in males, as well as osteosarcoma, hemangiosarcoma,
cruciate ligament rupture, obesity, and diabetes mellitus in either sex. Dog intelligence is the ability of the dog
to perceive information and retain it as knowledge for applying to solve problems. Dogs have been shown to learn
by inference. A study with Rico, a Border Collie, showed that he knew the labels of over 200 different items. He inferred the names
of novel items by exclusion learning and correctly retrieved those novel items immediately and also 4 weeks after
the initial exposure. Dogs have advanced memory skills. A study documented the learning and memory capabilities of
a border collie, "Chaser", who had learned the names of over 1,000 words and could retrieve the corresponding items by verbal command. Dogs
are able to read and react appropriately to human body language such as gesturing and pointing, and to understand
human voice commands. Dogs demonstrate a theory of mind by engaging in deception. A study showed compelling evidence
that Australian dingoes can outperform domestic dogs in non-social problem-solving experiments, indicating that domestic
dogs may have lost much of their original problem-solving abilities once they joined humans. Another study indicated
that after undergoing training to solve a simple manipulation task, dogs that are faced with an insoluble version
of the same problem look at the human, while socialized wolves do not. Modern domestic dogs use humans to solve their
problems for them. Dog behavior is the internally coordinated responses (actions or inactions) of the domestic dog
(individuals or groups) to internal and/or external stimuli. As the oldest domesticated species, with estimates of domestication ranging
from 9,000 to 30,000 years BCE, the minds of dogs inevitably have been shaped by millennia of contact with humans. As
a result of this physical and social evolution, dogs, more than any other species, have acquired the ability to understand
and communicate with humans and they are uniquely attuned to our behaviors. Behavioral scientists have uncovered
a surprising set of social-cognitive abilities in the otherwise humble domestic dog. These abilities are not possessed
by the dog's closest canine relatives nor by other highly intelligent mammals such as great apes. Rather, these skills
parallel some of the social-cognitive skills of human children. Dog communication is about how dogs "speak" to each
other, how they understand messages that humans send to them, and how humans can translate the ideas that dogs are
trying to transmit. These communication behaviors include eye gaze, facial expression, vocalization, body posture
(including movements of bodies and limbs) and gustatory communication (scents, pheromones and taste). Humans communicate
with dogs by using vocalization, hand signals and body posture. Despite their close genetic relationship and the
ability to inter-breed, there are a number of diagnostic features to distinguish the gray wolves from domestic dogs.
Domesticated dogs are clearly distinguishable from wolves by starch gel electrophoresis of red blood cell acid phosphatase.
The tympanic bullae are large, convex and almost spherical in gray wolves, while the bullae of dogs are smaller,
compressed and slightly crumpled. Compared to equally sized wolves, dogs tend to have 20% smaller skulls and 30%
smaller brains. The teeth of gray wolves are also proportionately larger than those of dogs; the premolars and
molars of wolves are much less crowded and have more complex cusp patterns. Wolves do not have dewclaws on their
back legs, unless there has been admixture with dogs. Dogs lack a functioning pre-caudal gland, and most enter estrus
twice yearly, unlike gray wolves which only do so once a year. Dogs require fewer calories to function than wolves.
The dog's limp ears may be the result of atrophy of the jaw muscles. The skin of domestic dogs tends to be thicker
than that of wolves, with some Inuit tribes favoring the former for use as clothing due to its greater resistance
to wear and tear in harsh weather. Unlike other domestic species which were primarily selected for production-related
traits, dogs were initially selected for their behaviors. In 2016, a study found that there were only 11 fixed genes
that showed variation between wolves and dogs. These gene variations were unlikely to have been the result of natural
evolution, and indicate selection on both morphology and behavior during dog domestication. These genes have been
shown to have an impact on the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight
response (i.e. selection for tameness), and emotional processing. Dogs generally show reduced fear and aggression
compared to wolves. Some of these genes have been associated with aggression in some dog breeds, indicating their
importance in both the initial domestication and then later in breed formation. The global dog population is estimated
at 525 million based on a transparent methodology; other estimates have not made their methodology available. All
dog population estimates are based on regional human population densities and land uses.
Although large wild dogs, like wolves, are apex predators, they can be killed in territory disputes with wild animals.
Furthermore, in areas where both dogs and other large predators live, dogs can be a major food source for big cats
or canines. Reports from Croatia indicate wolves kill dogs more frequently than they kill sheep. Wolves in Russia
apparently limit feral dog populations. In Wisconsin, more compensation has been paid for dog losses than for livestock losses.
Some wolf pairs have been reported to prey on dogs by having one wolf lure the dog out into heavy brush where the
second animal waits in ambush. In some instances, wolves have displayed an uncharacteristic fearlessness of humans
and buildings when attacking dogs, to the extent that they have to be beaten off or killed. Coyotes and big cats
have also been known to attack dogs. Leopards in particular are known to have a predilection for dogs, and have been
recorded to kill and consume them regardless of the dog's size or ferocity. Tigers in Manchuria, Indochina, Indonesia,
and Malaysia are reputed to kill dogs with the same vigor as leopards. Striped hyenas are major predators of village
dogs in Turkmenistan, India, and the Caucasus. Reptiles such as alligators and pythons have been known to kill and
eat dogs. Despite their descent from wolves and classification as Carnivora, dogs are variously described in scholarly
and other writings as carnivores or omnivores. Unlike obligate carnivores, such as the cat family with its shorter
small intestine, dogs can adapt to a wide-ranging diet, and depend on neither meat-specific protein nor a very
high level of protein to fulfill their basic dietary requirements. Dogs can healthily digest a variety
of foods, including vegetables and grains, and can consume a large proportion of these in their diet. Comparing dogs
and wolves, dogs have adaptations in genes involved in starch digestion that contribute to an increased ability to
thrive on a starch-rich diet. Most breeds of dog are at most a few hundred years old, having been artificially selected
for particular morphologies and behaviors by people for specific functional roles. Through this selective breeding,
the dog has developed into hundreds of varied breeds, and shows more behavioral and morphological variation than
any other land mammal. For example, height measured to the withers ranges from 15.2 centimetres (6.0 in) in the Chihuahua
to about 76 cm (30 in) in the Irish Wolfhound; color varies from white through grays (usually called "blue") to black,
and browns from light (tan) to dark ("red" or "chocolate") in a wide variation of patterns; coats can be short or
long, coarse-haired to wool-like, straight, curly, or smooth. It is common for most breeds to shed this coat. While
all dogs are genetically very similar, natural selection and selective breeding have reinforced certain characteristics
in certain populations of dogs, giving rise to dog types and dog breeds. Dog types are broad categories based on
function, genetics, or characteristics. Dog breeds are groups of animals that possess a set of inherited characteristics
that distinguishes them from other animals within the same species. Modern dog breeds are non-scientific classifications
of dogs kept by modern kennel clubs. Purebred dogs of one breed are genetically distinguishable from purebred dogs
of other breeds, but the means by which kennel clubs classify dogs is unsystematic. Systematic analyses of the dog
genome have revealed only four major types of dogs that can be said to be statistically distinct. These include the
"old world dogs" (e.g., Malamute and Shar Pei), "Mastiff"-type (e.g., English Mastiff), "herding"-type (e.g., Border
Collie), and "all others" (also called "modern"- or "hunting"-type). Domestic dogs inherited complex behaviors, such
as bite inhibition, from their wolf ancestors, which would have been pack hunters with complex body language. These
sophisticated forms of social cognition and communication may account for their trainability, playfulness, and ability
to fit into human households and social situations, and these attributes have given dogs a relationship with humans
that has enabled them to become one of the most successful species on the planet today. The dogs' value
to early human hunter-gatherers led to them quickly becoming ubiquitous across world cultures. Dogs perform many
roles for people, such as hunting, herding, pulling loads, protection, assisting police and military, companionship,
and, more recently, aiding handicapped individuals. This impact on human society has given them the nickname "man's
best friend" in the Western world. In some cultures, however, dogs are also a source of meat. Humans would also have
derived enormous benefit from the dogs associated with their camps. For instance, dogs would have improved sanitation
by cleaning up food scraps. Dogs may have provided warmth, as referred to in the Australian Aboriginal expression
"three dog night" (an exceptionally cold night), and they would have alerted the camp to the presence of predators
or strangers, using their acute hearing to provide an early warning. Anthropologists believe the most significant
benefit would have been the use of dogs' robust sense of smell to assist with the hunt. The relationship between
the presence of a dog and success in the hunt is often mentioned as a primary reason for the domestication of the
wolf, and a 2004 study of hunter groups with and without a dog gives quantitative support to the hypothesis that
the benefits of cooperative hunting were an important factor in wolf domestication. Emigrants from Siberia who walked
across the Bering land bridge into North America may have had dogs in their company, and one writer suggests that
the use of sled dogs may have been critical to the success of the waves that entered North America roughly 12,000
years ago, although the earliest archaeological evidence of dog-like canids in North America dates from about 9,400
years ago. Dogs were an important part of life for the Athabascan population in North America, and were their
only domesticated animal. Dogs also carried much of the load in the migration of the Apache and Navajo tribes 1,400
years ago. Use of dogs as pack animals in these cultures often persisted after the introduction of the horse to North
America. "The most widespread form of interspecies bonding occurs between humans and dogs" and the keeping of dogs
as companions, particularly by elites, has a long history. (As a possible example, at the Natufian culture site of
Ain Mallaha in Israel, dated to 12,000 BC, the remains of an elderly human and a four-to-five-month-old puppy were
found buried together). However, pet dog populations grew significantly after World War II as suburbanization increased.
In the 1950s and 1960s, dogs were kept outside more often than they tend to be today (the expression "in the
doghouse", describing exclusion from the group, reflects the distance between the doghouse and the home) and were
still primarily functional, acting as a guard, children's playmate, or walking companion. From the 1980s, there have
been changes in the role of the pet dog, such as the increased role of dogs in the emotional support of their human
guardians. People and dogs have become increasingly integrated and implicated in each other's lives, to the point
where pet dogs actively shape the way a family and home are experienced. There have been two major trends in the
changing status of pet dogs. The first has been the 'commodification' of the dog, shaping it to conform to human
expectations of personality and behaviour. The second has been the broadening of the concept of the family and the
home to include dogs-as-dogs within everyday routines and practices. There is a vast range of commodity forms available
to transform a pet dog into an ideal companion. The list of goods, services and places available is enormous: from
dog perfumes, couture, furniture and housing, to dog groomers, therapists, trainers and caretakers, dog cafes, spas,
parks and beaches, and dog hotels, airlines and cemeteries. While dog training as an organized activity can be traced
back to the 18th century, in the last decades of the 20th century it became a high profile issue as many normal dog
behaviors such as barking, jumping up, digging, rolling in dung, fighting, and urine marking (which dogs do to establish
territory through scent), became increasingly incompatible with the new role of a pet dog. Dog training books, classes
and television programs proliferated as the process of commodifying the pet dog continued. The majority of contemporary
people with dogs describe their pet as part of the family, although some ambivalence about the relationship is evident
in the popular reconceptualization of the dog–human family as a pack. A dominance model of dog–human relationships
has been promoted by some dog trainers, such as on the television program Dog Whisperer. However it has been disputed
that "trying to achieve status" is characteristic of dog–human interactions. Pet dogs play an active role in family
life; for example, a study of conversations in dog–human families showed how family members use the dog as a resource,
talking to the dog, or talking through the dog, to mediate their interactions with each other. Another study of dogs'
roles in families showed many dogs have set tasks or routines undertaken as family members, the most common of which
was helping with the washing-up by licking the plates in the dishwasher, and bringing in the newspaper from the lawn.
Increasingly, human family members are engaging in activities centered on the perceived needs and interests of the
dog, or in which the dog is an integral partner, such as dog dancing and dog yoga. According to statistics published
by the American Pet Products Manufacturers Association in the National Pet Owner Survey in 2009–2010, it is estimated
there are 77.5 million people with pet dogs in the United States. The same survey shows nearly 40% of American households
own at least one dog, of which 67% own just one dog, 25% two dogs and nearly 9% more than two dogs. There does not
seem to be any gender preference among dogs as pets, as the statistical data reveal an equal number of female and
male pet dogs. Yet, although several programs promote pet adoption, less than a fifth of owned
dogs come from a shelter. A recent study using magnetic resonance imaging (MRI) on humans and dogs together showed
that dogs respond to voices in the same way and use the same parts of the brain as humans to do so. This gives dogs the
ability to recognize emotional human sounds, making them friendly social pets for humans. Dogs have lived and worked
with humans in so many roles that they have earned the unique nickname, "man's best friend", a phrase used in other
languages as well. They have been bred for herding livestock, hunting (e.g. pointers and hounds), rodent control,
guarding, helping fishermen with nets, detection dogs, and pulling loads, in addition to their roles as companions.
In 1957, a husky-terrier mix named Laika became the first animal to orbit the Earth. Service dogs such as guide dogs,
utility dogs, assistance dogs, hearing dogs, and psychological therapy dogs provide assistance to individuals with
physical or mental disabilities. Some dogs owned by epileptics have been shown to alert their handler when the handler
shows signs of an impending seizure, sometimes well in advance of onset, allowing the guardian to seek safety, medication,
or medical care. In conformation shows, also referred to as breed shows, a judge familiar with the specific dog breed
evaluates individual purebred dogs for conformity with their established breed type as described in the breed standard.
As the breed standard only deals with the externally observable qualities of the dog (such as appearance, movement,
and temperament), separately tested qualities (such as ability or health) are not part of the judging in conformation
shows. Dog meat is consumed in some East Asian countries, including Korea, China, and Vietnam, a practice that dates
back to antiquity. It is estimated that 13–16 million dogs are killed and consumed in Asia every year. Other cultures,
such as Polynesia and pre-Columbian Mexico, also consumed dog meat in their history. However, Western, South Asian,
African, and Middle Eastern cultures, in general, regard consumption of dog meat as taboo. In some places, however,
such as in rural areas of Poland, dog fat is believed to have medicinal properties—being good for the lungs for instance.
Dog meat is also consumed in some parts of Switzerland. Proponents of eating dog meat have argued that placing a
distinction between livestock and dogs is Western hypocrisy, and that there is no difference in eating the meat
of different animals. The most popular Korean dog dish is gaejang-guk (also called bosintang), a spicy stew meant
to balance the body's heat during the summer months; followers of the custom claim this is done to ensure good health
by balancing one's gi, or vital energy of the body. A 19th century version of gaejang-guk explains that the dish
is prepared by boiling dog meat with scallions and chili powder. Variations of the dish contain chicken and bamboo
shoots. While the dishes are still popular in Korea with a segment of the population, dog is not as widely consumed
as beef, chicken, and pork. Citing a 2008 study, the U.S. Centers for Disease Control and Prevention estimated in 2015 that 4.5 million
people in the USA are bitten by dogs each year. A 2015 study estimated that 1.8% of the U.S. population is bitten
each year. In the 1980s and 1990s the US averaged 17 fatalities per year, while in the 2000s this increased to
26. 77% of dog bites are from the pet of a family member or friend, and 50% of attacks occur on the property of the dog's
legal owner. A Colorado study found bites in children were less severe than bites in adults. The incidence of dog
bites in the US is 12.9 per 10,000 inhabitants, but for boys aged 5 to 9, the incidence rate is 60.7 per 10,000.
Moreover, children have a much higher chance to be bitten in the face or neck. Sharp claws with powerful muscles
behind them can lacerate flesh in a scratch that can lead to serious infections. In the United States, cats and dogs
are a factor in more than 86,000 falls each year. It has been estimated around 2% of dog-related injuries treated
in UK hospitals are domestic accidents. The same study found that while dog involvement in road traffic accidents
was difficult to quantify, dog-associated road accidents involving injury more commonly involved two-wheeled vehicles.
Toxocara canis (dog roundworm) eggs in dog feces can cause toxocariasis. In the United States, about 10,000 cases
of Toxocara infection are reported in humans each year, and almost 14% of the U.S. population is infected. In Great
Britain, 24% of soil samples taken from public parks contained T. canis eggs. Untreated toxocariasis can cause retinal
damage and decreased vision. Dog feces can also contain hookworms that cause cutaneous larva migrans in humans. A
2005 paper states "recent research has failed to support earlier findings that pet ownership is associated with a
reduced risk of cardiovascular disease, a reduced use of general practitioner services, or any psychological or physical
benefits on health for community dwelling older people. Research has, however, pointed to significantly less absenteeism
from school through sickness among children who live with pets." In one study, new guardians reported a highly significant
reduction in minor health problems during the first month following pet acquisition, and this effect was sustained
in those with dogs through to the end of the study. In addition, people with pet dogs took considerably more physical
exercise than those with cats and those without pets. The results provide evidence that keeping pets may have positive
effects on human health and behaviour, and that for guardians of dogs these effects are relatively long-term. Pet
guardianship has also been associated with increased coronary artery disease survival, with human guardians being
significantly less likely to die within one year of an acute myocardial infarction than those who did not own dogs.
The health benefits of dogs can result from contact with dogs in general, and not solely from having dogs as pets.
For example, when in the presence of a pet dog, people show reductions in cardiovascular, behavioral, and psychological
indicators of anxiety. Other health benefits are gained from exposure to immune-stimulating microorganisms, which,
according to the hygiene hypothesis, can protect against allergies and autoimmune diseases. The benefits of contact
with a dog also include social support, as dogs are able to not only provide companionship and social support themselves,
but also to act as facilitators of social interactions between humans. One study indicated that wheelchair users
experience more positive social interactions with strangers when they are accompanied by a dog than when they are
not. In 2015, a study found that pet owners were significantly more likely to get to know people in their neighborhood
than non-pet owners. The practice of using dogs and other animals as a part of therapy dates back to the late 18th
century, when animals were introduced into mental institutions to help socialize patients with mental disorders.
Animal-assisted intervention research has shown that animal-assisted therapy with a dog can increase social behaviors,
such as smiling and laughing, among people with Alzheimer's disease. One study demonstrated that children with ADHD
and conduct disorders who participated in an education program with dogs and other animals showed increased attendance,
increased knowledge and skill objectives, and decreased antisocial and violent behavior compared to those who were
not in an animal-assisted program. Medical detection dogs are capable of detecting diseases by sniffing a person
directly or samples of urine or other specimens. Dogs can detect odour in one part per trillion, as their brain's
olfactory cortex is (relative to total brain size) 40 times larger than that of humans. Dogs may have as many as 300 million
odour receptors in their nose, while humans may have only 5 million. Each dog is trained specifically to detect
a single condition, from blood glucose levels indicative of diabetes to cancer. Training a cancer-detection dog takes about six
months. A Labrador Retriever called Daisy has detected 551 cancer patients with an accuracy of 93 percent and received
the Blue Cross (for pets) Medal for her life-saving skills. In Greek mythology, Cerberus is a three-headed watchdog
who guards the gates of Hades. In Norse mythology, a bloody, four-eyed dog called Garmr guards Helheim. In Persian
mythology, two four-eyed dogs guard the Chinvat Bridge. In Philippine mythology, Kimat, the pet of Tadaklan,
god of thunder, is responsible for lightning. In Welsh mythology, Annwn is guarded by Cŵn Annwn. In Hindu mythology,
Yama, the god of death owns two watch dogs who have four eyes. They are said to watch over the gates of Naraka. Hunter
god Muthappan from North Malabar region of Kerala has a hunting dog as his mount. Dogs are found in and out of the
Muthappan Temple and offerings at the shrine take the form of bronze dog figurines. In Islam, dogs are viewed as
unclean because they are viewed as scavengers. In 2015 city councillor Hasan Küçük of The Hague called for dog ownership
to be made illegal in that city. Islamic activists in Lérida, Spain, lobbied for dogs to be kept out of Muslim neighborhoods,
saying their presence violated Muslims' religious freedom. In Britain, police sniffer dogs are carefully used, and
are not permitted to contact passengers, only their luggage. They are required to wear leather dog booties when searching
mosques or Muslim homes. Jewish law does not prohibit keeping dogs and other pets. Jewish law requires Jews to feed
dogs (and other animals that they own) before themselves, and make arrangements for feeding them before obtaining
them. In Christianity, dogs represent faithfulness. In Asian countries such as China, Korea, and Japan, dogs are
viewed as kind protectors. The role of the dog in Chinese mythology includes a position as one of the twelve animals
which cyclically represent years (the zodiacal dog). Cultural depictions of dogs in art extend back thousands of
years to when dogs were portrayed on the walls of caves. Representations of dogs became more elaborate as individual
breeds evolved and the relationships between human and canine developed. Hunting scenes were popular in the Middle
Ages and the Renaissance. Dogs were depicted to symbolize guidance, protection, loyalty, fidelity, faithfulness,
watchfulness, and love. Dogs are also vulnerable to some of the same health conditions as humans, including diabetes,
dental and heart disease, epilepsy, cancer, hypothyroidism, and arthritis. Some dog breeds have acquired traits through
selective breeding that interfere with reproduction. Male French Bulldogs, for instance, are incapable of mounting
the female. For many dogs of this breed, the female must be artificially inseminated in order to reproduce. Although
it is said that the "dog is man's best friend", this applies mainly to the 17–24% of dogs kept as pets in developed countries; in the developing
world most dogs are feral, village or community dogs, with pet dogs uncommon. These live their lives as scavengers and
have never been owned by humans, with one study showing their most common response when approached by strangers was
to run away (52%) or respond with aggression (11%). We know little about these dogs, nor about the dogs that live
in developed countries that are feral, stray or are in shelters, yet the great majority of modern research on dog
cognition has focused on pet dogs living in human homes. Wolves, and their dog descendants, would have derived significant
benefits from living in human camps—more safety, more reliable food, lower caloric needs, and more chance to breed.
They would have benefited from humans' upright gait that gives them larger range over which to see potential predators
and prey, as well as color vision that, at least by day, gives humans better visual discrimination. Camp dogs would
also have benefited from human tool use, as in bringing down larger prey and controlling fire for a range of purposes.
The cohabitation of dogs and humans would have greatly improved the chances of survival for early human groups, and
the domestication of dogs may have been one of the key forces that led to human success. The scientific evidence
is mixed as to whether companionship of a dog can enhance human physical health and psychological wellbeing. Studies
suggesting that there are benefits to physical health and psychological wellbeing have been criticised for being
poorly controlled, and finding that "[t]he health of elderly people is related to their health habits and social
supports but not to their ownership of, or attachment to, a companion animal." Earlier studies have shown that people
who keep pet dogs or cats exhibit better mental and physical health than those who do not, making fewer visits to
the doctor and being less likely to be on medication than non-guardians.
The 2008 Summer Olympics torch relay was run from March 24 until August 8, 2008, prior to the 2008 Summer Olympics, with
the theme of "one world, one dream". Plans for the relay were announced on April 26, 2007, in Beijing, China. The
relay, called by the organizers the "Journey of Harmony", lasted 129 days and carried the torch 137,000 km
(85,000 mi) – the longest distance of any Olympic torch relay since the tradition was started ahead of the 1936 Summer
Olympics. After being lit at the birthplace of the Olympic Games in Olympia, Greece on March 24, the torch traveled
to the Panathinaiko Stadium in Athens, and then to Beijing, arriving on March 31. From Beijing, the torch followed
a route passing through six continents. The torch visited cities along the Silk Road, symbolizing ancient links
between China and the rest of the world. The relay also included an ascent with the flame to the top of Mount Everest
on the border of Nepal and Tibet, China from the Chinese side, which was closed specially for the event. In many
cities along the North American and European route, the torch relay was protested by advocates of Tibetan independence,
animal rights, and legal online gambling, and people protesting against China's human rights record, resulting in
confrontations at a few of the relay locations. These protests, which ranged from hundreds of people in San Francisco,
to effectively none in Pyongyang, forced the path of the torch relay to be changed or shortened on a number of occasions.
The torch was extinguished by Chinese security officials several times during the Paris leg for security reasons,
and once in protest in Paris. The attacks on the torch in London and Paris were described as "despicable" by the
Chinese government, condemning them as "deliberate disruptions... who gave no thought to the Olympic spirit or the
laws of Britain and France" and who "tarnish the lofty Olympic spirit", and vowed they would continue with the relay
and not allow the protests to "impede the Olympic spirit". Large-scale counter-protests by overseas Chinese and foreign-based
Chinese nationals became prevalent in later segments of the relay. In San Francisco, the number of supporters was
much greater than the number of protesters, and in Australia, Japan, and South Korea, the counter-protesters overwhelmed
the protesters. A couple of skirmishes between the protesters and supporters were reported. No major protests were
visible in the Latin America, Africa, and Western Asia legs of the torch relay. Prompted by the chaotic torch relays
in Western Europe and North America, the president of the International Olympic Committee, Jacques Rogge, described the situation as a "crisis" for the organization and stated that any athletes displaying Tibetan flags at Olympic venues could be expelled from the games, though he stopped short of cancelling the relay altogether despite calls
to do so by some IOC members. The outcome of the relay influenced the IOC's decision to scrap global relays in future
editions of the games. In June 2008, the Beijing Games' Organizing Committee announced that the planned international
torch relay for the Paralympic Games had been cancelled. The Committee stated that the relay was being cancelled
to enable the Chinese government to "focus on the rescue and relief work" following the Sichuan earthquake. The Olympic Torch, designed by a team from Lenovo Group, is based on traditional scrolls and uses a traditional Chinese design known as "Lucky Cloud"; its design also references the traditional Chinese concept of the five elements that make up the entire universe. Made from aluminum, it is 72 centimetres high and weighs 985 grams. The torch is designed to remain lit in winds of 65 kilometres per hour (37 miles per hour) and in rain of up to 50 millimetres (2 inches) per hour. An ignition key is used to ignite and extinguish the flame. The torch is fueled by cans of propane, each of which lights the torch for 15 minutes. Internationally, the torch and its accompanying party traveled
in a chartered Air China Airbus A330 (registered B-6075), painted in the red and yellow colors of the Olympic Games.
Air China was chosen by the Beijing Organizing Committee as the designated Olympic torch carrier in March 2008 for its long-standing participation in the Olympic cause. The plane traveled a total of 137,000 km (85,000 mi) over 130 days through 21 countries and regions. The route carried the torch through six continents from March to August 2008. The planned route originally included a stop in Taipei between Ho Chi Minh
City and Hong Kong, but there was disagreement in Beijing and Taipei over language used to describe whether it was
an international or a domestic part of the route. While the Olympic committees of China and Chinese Taipei reached
initial consensus on the approach, the government of the Republic of China in Taiwan intervened, stating that this
placement could be interpreted as placing Taiwan on the same level as Hong Kong and Macau, an implication it objected
to. The Beijing Organizing Committee attempted to continue negotiation, but further disputes arose over the flag
or the anthem of the Republic of China along the 24 km torch route in Taiwan. By the midnight deadline for concluding
the negotiations on September 21, 2007, Taiwan and China were unable to come to terms on the issue of the Torch Relay. In the end, both sides of the Taiwan Strait decided to eliminate the Taipei leg. Greece: On March 24, 2008,
the Olympic Flame was ignited at Olympia, Greece, site of the ancient Olympic Games. The actress Maria Nafpliotou,
in the role of a High Priestess, ignited the torch of the first torchbearer, Alexandros Nikolaidis of Greece, a taekwondo silver medalist at the 2004 Summer Olympics, who handed the flame over to the second torchbearer, Luo Xuejuan of China, Olympic champion in women's breaststroke. Following the recent unrest in Tibet, three members of Reporters
Without Borders, including Robert Ménard, breached security and attempted to disrupt a speech by Liu Qi, the head
of Beijing's Olympic organising committee during the torch lighting ceremony in Olympia, Greece. The People's Republic
of China called this a "disgraceful" attempt to sabotage the Olympics. On March 30, 2008 in Athens, during ceremonies
marking the handing over of the torch from Greek officials to organizers of the Beijing games, demonstrators shouted
'Free Tibet' and unfurled banners; some 10 of the 15 protesters were taken into police detention. After the hand-off,
protests continued internationally, with particularly violent confrontations with police in Nepal. China: In China,
the torch was first welcomed by Politburo Standing Committee member Zhou Yongkang and State Councilor Liu Yandong.
It was subsequently passed on to CPC General Secretary Hu Jintao. A call to boycott the French hypermarket chain Carrefour from
May 1 began spreading through mobile text messaging and online chat rooms amongst the Chinese over the weekend from
April 12, accusing the company's major shareholder, the LVMH Group, of donating funds to the Dalai Lama. There were
also calls to extend the boycott to include French luxury goods and cosmetic products. According to the Washington
Times on April 15, however, the Chinese government was attempting to "calm the situation" through censorship: "All
comments posted on popular Internet forum Sohu.com relating to a boycott of Carrefour have been deleted." Chinese
protesters organized boycotts of the French-owned retail chain Carrefour in major Chinese cities including Kunming,
Hefei and Wuhan, accusing the French nation of pro-secessionist conspiracy and anti-Chinese racism. Some burned French flags, some added Nazi swastikas to the French flag, and others spread short online messages calling for large protests in front of the French embassy and consulates. The Carrefour boycott was met with anti-boycott demonstrators who insisted
on entering one of the Carrefour stores in Kunming, only to be blocked by boycotters wielding large Chinese flags
and hit by water bottles. The BBC reported that hundreds of people demonstrated in Beijing, Wuhan, Hefei, Kunming
and Qingdao. In response to the demonstrations, an editorial in the People's Daily urged Chinese people to "express
[their] patriotic enthusiasm calmly and rationally, and express patriotic aspiration in an orderly and legal manner".
Kazakhstan: The first torchbearer in Almaty, where the Olympic torch arrived for the first time ever on April 2,
was the President of Kazakhstan Nursultan Nazarbaev. The route ran 20 km from Medeo stadium to Astana Square. There
were reports that Uighur activists were arrested and some were deported back to China. Turkey: The torch relay leg
in Istanbul, held on April 3, started on Sultanahmet Square and finished in Taksim Square. Uyghurs living in Turkey
protested at Chinese treatment of their compatriots living in Xinjiang. Several protesters who tried to disrupt the
relay were promptly arrested by the police. Russia: On April 5 the Olympic torch arrived at Saint Petersburg, Russia.
The length of the torch relay route in the city was 20 km, with the start at the Victory Square and finish at the
Palace Square. Mixed martial arts icon and former PRIDE Heavyweight Champion Fedor Emelianenko was one of the torchbearers, making him the first active MMA fighter to carry the Olympic flame. Great
Britain: The torch relay leg held in London, the host city of the 2012 Summer Olympics, on April 6 began at Wembley
Stadium, passed through the City of London, and eventually ended at O2 Arena in the eastern part of the city. The
48 km (30 mi) leg took a total of seven and a half hours to complete, and attracted protests by pro-Tibetan independence
and pro-Human Rights supporters, prompting changes to the planned route and an unscheduled move onto a bus, which
was then briefly halted by protestors. Home Secretary Jacqui Smith officially complained to the Beijing Organising Committee about the conduct of the tracksuit-clad Chinese security guards. The Chinese officials, seen manhandling protesters, were described as "thugs" by both the London Mayor, Ken Livingstone, and Lord Coe, chairman of the London Olympic Committee. A Metropolitan police briefing paper revealed that security for the torch relay cost £750,000
and the participation of the Chinese security team had been agreed in advance, despite the Mayor stating, "We did
not know beforehand these thugs were from the security services. Had I known so, we would have said no." Of the 80
torch-bearers in London, Sir Steve Redgrave, who started the relay, mentioned to the media that he had received e-mailed
pleas to boycott the event and could "see why they would like to make an issue" of it. Francesca Martinez and Richard
Vaughan refused to carry the torch, while Konnie Huq decided to carry it and also speak out against China. The pro-Tibetan
Member of Parliament Norman Baker asked all bearers to reconsider. Amid pressure from both directions, Prime Minister
Gordon Brown welcomed the torch outside 10 Downing Street without holding or touching it. The London relay saw the
torch surrounded by what the BBC described as "a mobile protective ring." Protests began as soon as Redgrave started
the event, leading to at least thirty-five arrests. In Ladbroke Grove a demonstrator attempted to snatch the torch
from Konnie Huq in a momentary struggle, and in a separate incident, a fire extinguisher was set off near the torch.
The Chinese ambassador carried the torch through Chinatown after an unpublicized change to the route amid security
concerns. The torch made an unscheduled move onto a bus along Fleet Street amid security concerns and efforts to
evade the protesters. In an effort to counter the pro-Tibet protesters and show their support for the 2008 Beijing
Olympics, more than 2,000 Chinese also gathered on the torch route and demonstrated with signs, banners and Chinese
flags. A large number of supporters were concentrated in Trafalgar Square, displaying the Olympic slogan "One World,
One Dream". France: The torch relay leg in Paris, held on April 7, began on the first level of the Eiffel Tower and
finished at the Stade Charléty. The relay was initially supposed to cover 28 km, but it was shortened at the demand
of Chinese officials following widespread protests by pro-Tibet and human rights activists, who repeatedly attempted
to disrupt, hinder or halt the procession. At the request of the Chinese authorities, a scheduled ceremony at the town hall was cancelled and the torch finished the relay by bus instead of being carried by athletes. Paris City officials had announced plans to greet the Olympic flame with peaceful protest
when the torch was to reach the French capital. The city government attached a banner reading "Paris defends human
rights throughout the world" to the City Hall, in an attempt to promote values "of all humanity and of human rights."
Members from Reporters Without Borders turned out in large numbers to protest. An estimated 3,000 French police protected
the Olympic torch relay as it departed from the Eiffel Tower and criss-crossed Paris amid threat of protests. Widespread
pro-Tibet protests, including an attempt by more than one demonstrator to extinguish the flame with water or fire
extinguishers, prompted relay authorities to put out the flame five times (according to the police authorities in
Paris) and load the torch onto a bus, at the demand of Chinese officials. This was later denied by the Chinese Ministry
of Foreign Affairs, despite video footage broadcast by French television network France 2 which showed Chinese flame
attendants extinguishing the torch. Backup flames traveled with the relay at all times to relight the torch. French judoka
and torchbearer David Douillet expressed his annoyance at the Chinese flame attendants who extinguished the torch
which he was about to hand over to Teddy Riner: "I understand they're afraid of everything, but this is just annoying.
They extinguished the flame despite the fact that there was no risk, and they could see it and they knew it. I don't
know why they did it." Chinese officials canceled the torch relay ceremony amidst disruptions, including a Tibetan
flag flown from a window in the City Hall by Green Party officials. The third torchbearer in the Paris leg, Jin Jing,
who was disabled and carried the torch in a wheelchair, was assaulted several times by unidentified protestors apparently from the pro-Tibetan-independence camp. In interviews, Jin Jing said that she was "tugged at, scratched" and "kicked",
but that she "did not feel the pain at the time." She received praise from ethnic Chinese worldwide as "Angel in
Wheelchair". The Chinese government gave the comment that "the Chinese respect France a lot" but "Paris [has slapped]
its own face." Reporters Without Borders organised several symbolic protests, including scaling the Eiffel Tower
to hang a protest banner from it, and hanging an identical banner from the Notre Dame cathedral. Several hundred
pro-Tibet protesters gathered at the Trocadéro with banners and Tibetan flags, and remained there for a peaceful
protest, never approaching the torch relay itself. Among them was Jane Birkin, who spoke to the media about the "lack
of freedom of speech" in China. Also present was Thupten Gyatso, President of the French Tibetan community, who called
upon pro-Tibet demonstrators to "remain calm, non-violent, peaceful". French members of Parliament and other French
politicians also organised a protest. All political parties in Parliament—UMP, Socialists, New Centre, Communists,
Democratic Movement (centre) and Greens—jointly requested a pause in the National Assembly's session, which was granted,
so that MPs could step outside and unfurl a banner which read "Respect for Human Rights in China". The coach containing
the torch drove past the National Assembly and the assembled protesting MPs, who shouted "Freedom for Tibet!" several
times as it passed. French police were criticised for their handling of the events, and notably for confiscating
Tibetan flags from demonstrators. The newspaper Libération commented: "The police did so much that only the Chinese
were given freedom of expression. The Tibetan flag was forbidden everywhere except on the Trocadéro." Minister of
the Interior Michèle Alliot-Marie later stated that the police had not been ordered to do so, and that they had acted
on their own initiative. A cameraman for France 2 was struck in the face by a police officer, knocked unconscious,
and had to be sent to hospital. United States of America: The torch relay's North American leg occurred in San Francisco,
California on April 9. On the day of the relay officials diverted the torch run to an unannounced route. The start
was at McCovey Cove, where Norman Bellingham of the U.S. Olympic Committee gave the torch to the first torchbearer,
Chinese 1992 Olympic champion swimmer Lin Li. The planned closing ceremony at Justin Herman Plaza was cancelled and
instead, a ceremony was held at San Francisco International Airport, where the torch was to leave for Buenos Aires.
The route changes allowed the run to avoid large numbers of China supporters and protesters against China. As people
found out there would be no closing ceremony at Justin Herman Plaza, there were angry reactions. One demonstrator
was quoted as saying that the route changes were an effort to "thwart any organized protest that had been planned."
San Francisco Board of Supervisors President Aaron Peskin, a critic of Mayor Gavin Newsom, said that it was a "cynical
plan to please the Bush State Department and the Chinese government because of the incredible influence of money."
Newsom, on the other hand, said he felt it was in "everyone's best interest" and that he believed people had been
"afforded the right to protest and support the torch" despite the route changes. Peter Ueberroth, head of the U.S.
Olympic Committee, praised the route changes, saying, "The city of San Francisco, from a global perspective, will
be applauded." People who saw the torch were surprised and cheered as shown from live video of CBS and NBC. The cost
to the city for hosting the event was reported to be USD $726,400, nearly half of which has been recovered by private
fundraising. Mayor Gavin Newsom said that "exponential" costs associated with mass arrests were avoided by his decision
to change the route in consultation with police chief Heather Fong. On April 1, 2008, the San Francisco Board of
Supervisors approved a resolution addressing human rights concerns ahead of the Beijing Olympic torch's arrival in San Francisco on April 9. The resolution welcomed the torch with "alarm and protest at the failure of China to meet
its past solemn promises to the international community, including the citizens of San Francisco, to cease the egregious
and ongoing human rights abuses in China and occupied Tibet." On April 8, numerous protests were planned including
one at the city's United Nations Plaza led by actor Richard Gere and Archbishop Desmond Tutu. Some advocates for
Tibet, Darfur, and the spiritual practice Falun Gong, planned to protest the April 9 arrival of the torch in San
Francisco. China had already requested the torch route in San Francisco be shortened. On April 7, 2008, two days
prior to the actual torch relay, three activists carrying Tibetan flags scaled the suspension cables of the Golden
Gate Bridge to unfurl two banners, one saying "One World, One Dream. Free Tibet", and the other, "Free Tibet '08".
Among them was San Francisco resident Laurel Sutherlin, who spoke to the local TV station KPIX-CBS5 live from a cellphone,
urging the International Olympic Committee to ask China not to allow the torch to go through Tibet. Sutherlin said he was worried that the torch's planned route through Tibet would lead to more arrests and that Chinese officials would use force to stifle dissent. The three activists and five supporters faced charges related to trespassing, conspiracy
and causing a public nuisance. The torch was lit at a park outside AT&T Park at about 1:17 pm PDT (20:17 UTC) and briefly held aloft by American and Chinese Olympic officials. The relay descended into confusion as the first runner in the elaborately planned relay disappeared into a warehouse on a waterfront pier, where the torch stayed for half an hour.
There were clashes between thousands of pro-China demonstrators, many of whom said they were bused in by the Chinese
Consulate and other pro-China groups, and both pro-Tibet and Darfur protesters. The non-Chinese demonstrators were
reported to have been swamped and trailed by angry crowds. Around 2 pm PDT (21:00 UTC), the torch resurfaced about
3 km (1.9 mi) away from the stadium along Van Ness Avenue, a heavily trafficked thoroughfare that was not on official
route plans. Television reports showed the flame flanked by motorcycles and uniformed police officers. Two torchbearers
carried the flame running slowly behind a truck and surrounded by Olympic security guards. During the torch relay, two torchbearers managed to display Tibetan flags in protest and were ejected from the relay: Andrew Michael, who uses a wheelchair and is the Vice President for Sustainable Development for the Bay Area Council and Director of Partnerships For Change, and environmental advocate Majora Carter. The closing ceremony at Justin Herman
Plaza was canceled due to the presence of large numbers of protesters at the site. The torch run ended with a final
stretch through San Francisco's Marina district and was then moved by bus to San Francisco International Airport
for a makeshift closing ceremony at the terminal, from which the free media was excluded. The San Jose Mercury News described
the "deceiving" event as "a game of Where's Waldo, played against the landscape of a lovely city." International
Olympic Committee President Jacques Rogge said the San Francisco relay had "fortunately" avoided much of the disruptions
that marred the legs in London and Paris, but "was, however, not the joyous party that we had wished it to be." Argentina:
The torch relay leg in Buenos Aires, Argentina, held on April 11, began with an artistic show at the Lola Mora amphitheatre
in Costanera Sur. At the end of the show, the mayor of Buenos Aires, Mauricio Macri, gave the torch to the first torchbearer,
Carlos Espínola. The leg finished at the Buenos Aires Riding Club in the Palermo district, the last torchbearer being
Gabriela Sabatini. The 13.8 km route included landmarks like the obelisk and Plaza de Mayo. The day was marked by
several pro-Tibet protests, which included a giant banner reading "Free Tibet", and an alternative "human rights
torch" that was lit by protesters and paraded along the route the flame was to take. Most of these protests were
peaceful in nature, and the torch was not impeded. Chinese immigrants also turned out in support of the Games, but
only minor scuffles were reported between both groups. Runners surrounded by rows of security carried the Olympic
flame past thousands of jubilant Argentines in the most trouble-free torch relay in nearly a week. People showered
the parade route with confetti as banks, government offices and businesses took an impromptu half-day holiday for
the only Latin American stop on the flame's five-continent journey. Argentine activists told a news conference that
they would not try to snuff out the torch's flame as demonstrators had in Paris and London. "I want to announce that
we will not put out the Olympic torch," said pro-Tibet activist Jorge Carcavallo. "We'll be carrying out surprise
actions throughout the city of Buenos Aires, but all of these will be peaceful." Among other activities, protesters organized an alternative march that went from the Obelisk to the city hall, featuring their own "Human Rights Torch." According to a representative from the NGO 'Human Rights Torch Relay', their objective was to "show the contradiction between the Olympic Games and the presence of widespread human rights violations in China". The outreach director of HRTR, Susan Prager, is also the communication director of "Friends of Falun Gong", a quasi-governmental non-profit funded by former Congressman Tom Lantos's wife and Ambassador Mark Palmer of the NED. A major setback to the event was caused by footballer Diego Maradona, scheduled
to open the relay through Buenos Aires, pulling out in an attempt to avoid the Olympic controversy. Trying to avoid
the scenes that marred the relay in the UK, France and the US, the city government designed a complex security operation to protect the torch relay, involving 1,200 police officers and 3,000 other people, including public employees and
volunteers. Overall, the protests were peaceful in nature, although there were a few incidents such as the throwing
of several water balloons in an attempt to extinguish the Olympic flame, and minor scuffles between Olympic protesters
and supporters from Chinese immigrant communities. Tanzania: Dar es Salaam was the torch's only stop in Africa, on
April 13. The relay began at the grand terminal of the TAZARA Railway, which was China's largest foreign aid project
of the 1970s, and continued for 5 km through the old city to the Benjamin Mkapa National Stadium in Temeke, which
was built with Chinese aid in 2005. The torch was lit by Vice-President Ali Mohamed Shein. About a thousand people
followed the relay, waving the Olympic flag. The only noted instance of protest was Nobel Peace Prize laureate Wangari
Maathai's withdrawal from the list of torchbearers, in protest against human rights abuses in Tibet. Sultanate of
Oman: Muscat was the torch's only stop in the Middle East, on April 14. The relay covered 20 km. No protests or incidents
were reported. One of the torchbearers was Syrian actress Sulaf Fawakherji. Pakistan: The Olympic torch reached Islamabad
for the first time ever on April 16. President Pervez Musharraf and Prime Minister Yousaf Raza Gillani spoke at the
opening ceremony of the relay. Security was high, for what one newspaper called the "most sensitive leg" of the torch's
Olympic journey. The relay was initially supposed to carry the torch around Islamabad, but the entire relay was cancelled
due to security concerns regarding "militant threats or anti-China protests", and replaced by an indoor ceremony
with the torch carried around the track of Jinnah Stadium. In fear of violent protests and bomb attacks, the torch
relay in Pakistan took place in a stadium behind closed doors. Although the relay was behind closed doors, thousands
of policemen and soldiers guarded the flame. As a consequence, no incidents arose. India: Due to concerns about pro-Tibet
protests, the relay through New Delhi on April 17 was cut to just 2.3 km (less than 1.5 miles), which was shared
amongst 70 runners. It concluded at the India Gate. The event was peaceful due to the public not being allowed at
the relay. A total of five intended torchbearers (Kiran Bedi, Soha Ali Khan, Sachin Tendulkar, Bhaichung Bhutia and Sunil Gavaskar) withdrew from the event, citing "personal reasons" or, in Bhutia's case, explicitly wishing to "stand by the people of Tibet and their struggle" and to protest against the PRC "crackdown" in Tibet. Bhutia, the Indian national football captain, who is Sikkimese, was the first athlete to refuse to run with the torch. Indian film star Aamir Khan stated on his personal blog that the "Olympic Games do not belong to China" and confirmed taking part in the torch relay "with a prayer in his heart for the people of Tibet, and ... for all people across the world who are victims of human rights violations". Rahul Gandhi, son of the Congress President Sonia Gandhi and scion of the Nehru-Gandhi family, also
refused to carry the torch. Wary of protests, the Indian authorities decided to shorten the route of the relay in New Delhi, and gave it the security normally associated with Republic Day celebrations, which are considered
terrorist targets. Chinese intelligence's expectations of points on the relay route that would be particularly 'vulnerable'
to protesters were presented to the Indian ambassador to Beijing, Nirupama Sen. The Indian media responded angrily
to the news that the ambassador, a distinguished lady diplomat, was summoned to the Foreign Ministry at 2 am local
time; the news was later denied by anonymous sources in Delhi. The Indian media reported that India's Commerce Minister,
Kamal Nath, cancelled an official trip to Beijing in protest, though both Nath and Chinese sources have denied it.
India rejected Chinese demands that the torch route be kept clear of India's 150,000-strong Tibetan exile community, which would have required a ban on congregating near the curtailed 3 km route. In response, Indian officials said India was a democracy, and "a wholesale ban on protests was out of the question". Contradicting some other reports, Indian officials also refused permission to the "Olympic Holy Flame Protection Unit". The combined effect was a "rapid deterioration" of relations between India and China. Meanwhile, the Tibetan government in exile, which is based in India, stated
that it did not support the disruption of the Olympic torch relay. The noted Indian social activist and retired Indian Police Service (IPS) officer Kiran Bedi refused to participate, saying she did not want to run in the event as a "caged woman". On April 15, Bollywood actress Soha Ali Khan pulled out of the Olympic torch relay, citing "very strong personal reasons". On April 16, a protest was organised in Delhi "against Chinese repression in Tibet", and
was broken up by the police. Thailand: The April 18 relay through Bangkok was the Olympic flame's first visit to
Thailand. The relay covered just over 10 km, and included Bangkok's Chinatown. The torch was carried past Democracy
Monument, Chitralada Palace and a number of other city landmarks. M.R. Narisa Chakrabongse, Green World Foundation
(GWF) chairwoman, withdrew from the torch-running ceremony, protesting against China's actions in Tibet. Several
hundred protesters were present, along with Olympic supporters. Thai authorities threatened to arrest foreign protesters
and ban them from future entry into Thailand. A coalition of Thai human rights groups announced that it would organise
a "small demonstration" during the relay, and several hundred people did indeed take part in protests, facing Beijing
supporters. In Bangkok, students told the media that the Chinese Embassy provided them with transportation
and gave them shirts to wear. Malaysia: The event was held in the capital city, Kuala Lumpur, on April 21. The 16.5 km relay began at the historic Independence Square and passed in front of several city landmarks before coming to an end at the iconic Petronas Twin Towers. Among the landmarks the Olympic flame passed were the Parliament House, the National Mosque, KL Tower and Merdeka Stadium. A team of 1,000 personnel from the Malaysian police Special
Action Squad guarded the event and escorted the torchbearers. The last time an Olympic torch relay was held in Malaysia
was the 1964 Tokyo edition. Just days before the relay supporters of Falun Gong demonstrated in front of the Chinese
embassy in the Malaysian capital. As many as 1,000 personnel from the special police unit were expected to be deployed
on the day of the relay. A Japanese family with Malaysian citizenship, together with their 5-year-old child, unfurled a Tibetan flag at Independence Square, where the relay began; they were hit with plastic air-filled batons by a group of Chinese nationals and heckled by a crowd of Chinese citizens, who shouted: "Taiwan and Tibet belong to China." Later in the day, Chinese volunteers forcefully took placards from two other Malaysians protesting at the relay; one of them was hit on the head. Indonesia: The Olympic flame
reached Jakarta on April 22. The original 20 km relay through Jakarta was cancelled due to security concerns, at the request of the Chinese embassy, and the torch was instead carried around the city's main stadium, as it had been in Islamabad. Only invitees and journalists were admitted inside the stadium. Several dozen pro-Tibet protesters gathered outside and were dispersed by the police. Australia: The event was held in Canberra,
Australian Capital Territory on April 24, and covered around 16 km of Canberra's central areas, from Reconciliation
Place to Commonwealth Park. Upon its arrival in Canberra, the Olympic flame was presented by Chinese officials to
local Aboriginal elder Agnes Shea, of the Ngunnawal people. She, in turn, offered them a message stick, as a gift
of peace and welcome. Hundreds of pro-Tibet protesters and thousands of Chinese students reportedly attended. Demonstrators
and counter-demonstrators were kept apart by the Australian Federal Police. Preparations for the event were marred
by a disagreement over the role of the Chinese flame attendants, with Australian and Chinese officials arguing publicly
over their function and prerogatives during a press conference. Following the events in Olympia, there were reports
that China requested permission to deploy People's Liberation Army personnel along the relay route to protect the
flame in Canberra. Australian authorities stated that such a request, if it were to be made, would be refused. Chinese
officials labeled it a rumor. Australian police were given powers to search relay spectators, following a call
by the Chinese Students and Scholars Association for Chinese Australian students to "go defend our sacred torch"
against "ethnic degenerate scum and anti-China separatists". Tony Goh, chairman of the Australian Council of Chinese
Organisations, said the ACCO would be taking "thousands" of pro-Beijing demonstrators to Canberra by bus, to
support the torch relay. Zhang Rongan, a Chinese Australian student organising pro-Beijing demonstrations, told the
press that Chinese diplomats were assisting with the organization of buses, meals and accommodation for pro-Beijing
demonstrators, and helping them organise a "peaceful show of strength". Foreign Minister Stephen Smith said Chinese
officials were urging supporters to "turn up and put a point of view", but that he had no objection to it as long
as they remained peaceful. Intended torchbearer Lin Hatfield Dodds withdrew from the event, explaining that she wished
to express concern about China's human rights record. Foreign Minister Stephen Smith said her decision was "a very
good example of peacefully making a point". Up to 600 pro-Tibet protesters were expected to attend the relay, along
with between 2,000 and 10,000 Chinese supporters. Taking note of the high number of Chinese supporters, Ted Quinlan,
head of the Canberra torch relay committee, said: "We didn't expect this reaction from the Chinese community. It
is obviously a well-coordinated plan to take the day by weight of numbers. But we have assurances that it will be
done peacefully." Australia's ACT Chief Minister, Jon Stanhope, confirmed that the Chinese embassy was closely involved in ensuring that "pro-China demonstrators vastly outnumbered Tibetan activists." Australian freestyle swimmer
and five-time Olympic gold medalist Ian Thorpe ended the Australian leg of the torch relay April 24, 2008, touching
the flame to light a cauldron after a run that was only marginally marked by protests. People demonstrated both for
China and for Tibet. At least five people were arrested during the torch relay. Police said "the five were arrested
for interfering with the event under special powers enacted in the wake of massive protests against Chinese policy
toward Tibet." At one point, groups of Chinese students surrounded and intimidated pro-Tibet protesters. One person
had to be pulled aboard a police launch when a group of pro-Chinese students looked like they might force him into
the lake. Japan: The event was held in Nagano, which hosted the 1998 Winter Olympics, on April 26. Japanese Buddhist
temple Zenkō-ji, which was originally scheduled to be the starting point for the Olympic torch relay in Nagano, refused
to host the torch and pulled out of the relay plans, amid speculation that monks there sympathized with protesters against the Chinese government, as well as the risk of disruption by violent protests. Parts of Zenkō-ji temple's main building (Zenkō-ji Hondō), reconstructed in 1707 and one of the National Treasures of Japan, were then vandalized with spray paint.
A new starting point, previously the site of a municipal building and now a parking lot, was chosen by the city.
An event the city had planned to hold at the Minami Nagano Sports Park following the torch relay was also canceled
out of concern about disruptions caused by demonstrators protesting against China's recent crackdown in Tibet. Thousands
of riot police were mobilized to protect the torch along its route. The show of force kept most protesters in check,
but slogans shouted by pro-China or pro-Tibet demonstrators, Japanese nationalists, and human rights organizations
flooded the air. Five men were arrested and four injured amidst scenes of mob violence. The torch route was packed
with mostly peaceful demonstrators. The public was not allowed at the parking lot where the relay started. The Zenkō-ji monks held a prayer ceremony for victims of the recent events in Tibet. More than 100 police officers
ran with the torch and riot police lined the streets while three helicopters flew above. Only two Chinese guards
were allowed to accompany the torch because of Japan's concern over their treatment of demonstrators at previous
relays. A man with a Tibetan flag tried to stop the torch at the beginning of the relay but was dragged off by police.
Some raw eggs were also thrown from the crowd. South Korea: The event was held in Seoul, which hosted the 1988 Summer
Olympics, on April 27. Intended torchbearers Choi Seung-kook and Park Won-sun boycotted the event to protest against
the Chinese government's crackdown in Tibet. More than 8,000 riot police were deployed to guard the 24-kilometre
route, which began at Olympic Park, built for the 1988 Summer Games. On the day of the torch
relay in Seoul, Chinese students clashed with protesters, throwing rocks, bottles, and punches. A North Korean defector
whose brother defected to China but was captured and executed by the DPRK, attempted to set himself on fire in protest
of China's treatment of North Korean refugees. He poured gasoline on himself but police quickly surrounded him and
carried him away. Two other demonstrators tried to storm the torch but failed. Fighting broke out near the beginning
of the relay between a group of 500 Chinese supporters and approximately 50 protesters who carried a banner that
read: "Free North Korean refugees in China." The students threw stones and water bottles as approximately 2,500 police
tried to keep the groups separated. Police said they arrested five people, including a Chinese student accused of throwing rocks. Thousands of Chinese followed the torch on its 4.5-hour journey, some chanting, "Go
China, go Olympics!" By the end of the relay, Chinese students had become violent, and Korean media reported that they were "lynching" anyone who disagreed with them. One policeman was also rushed to hospital after being attacked by Chinese students. On April 29, the Minister of Justice, Kim Kyung-han, told the prime minister that he would find "every single Chinese who was involved and bring them to justice." Later in the day, South Korea's Prosecutor's
Office, National Police Agency, Ministry of Foreign Affairs and National Intelligence Service made a joint statement
saying that they would deport every Chinese student involved in the incident. China defended the conduct
of the students. North Korea: The event was held in Pyongyang on April 28. It was the first time that the Olympic torch had traveled to North Korea. A crowd of thousands, organized by the authoritarian regime, watched the beginning of the relay in Pyongyang, waving pink paper flowers, small flags with the Beijing Olympics logo, and in some cases Chinese flags.
Chinese flags. The event was presided over by the head of the country's parliament, Kim Yong Nam. The North, an ally
of China, has been critical of disruptions to the torch relay elsewhere and has supported Beijing in its actions
against protests in Tibet. Kim passed the torch to the first runner Pak Du Ik, who played on North Korea's 1966 World
Cup soccer team, as he began the 19-kilometre route through Pyongyang. The relay began from the large sculpted flame
of the obelisk of the Juche Tower, which commemorates the national ideology of Juche, or "self-reliance", created
by the country's late founding President Kim Il Sung, father of leader Kim Jong Il, who did not attend. The United Nations and its children's agency UNICEF withdrew their staff, saying they were not sure the event would help their mission of raising awareness of conditions for children, and amid concerns that the relay would be used as a propaganda stunt. "It was unconscionable," said a UN official who was briefed on the arguments. North Korea
is frequently listed among the world’s worst offenders against human rights. Vietnam: The event was held in Ho Chi
Minh City on April 29. Some 60 torchbearers carried the torch from the downtown Opera House to the Military Zone
7 Competition Hall stadium near Tan Son Nhat International Airport along an undisclosed route. Vietnam is involved
in a territorial dispute with China (and other countries) over sovereignty of the Spratly and Paracel Islands; tensions had risen following reports that the Chinese government had established a county-level city named Sansha in the disputed territories, resulting in anti-Chinese demonstrations in December 2007 in Hanoi and Ho Chi
Minh City. However, to sustain its relationship with China, the Vietnamese government actively sought to head off
protests during the torch relay, with Prime Minister Nguyễn Tấn Dũng warning government agencies that "hostile forces"
might try to disrupt the torch relay. Prior to the relay, seven anti-China protesters were arrested in Hanoi after
unfurling a banner and shouting "Boycott the Beijing Olympics" through a loudhailer at a market. A Vietnamese American
was deported for planning protests against the torch, while a prominent blogger, Điếu Cày (real name Nguyễn Văn Hải),
who blogged about protests around the world and who called for demonstrations in Vietnam, was arrested on charges
of tax evasion. Outside Vietnam, there were protests by overseas Vietnamese in Paris, San Francisco and Canberra.
Lê Minh Phiếu, a torchbearer and Vietnamese law student studying in France, wrote a letter to the president of the International Olympic Committee protesting China's "politicisation of the Olympics", citing torch-relay maps on the official Beijing Olympic website that depicted the disputed islands as Chinese territory, and posted the letter on his blog. One day before the relay was to start, the official website appeared to have been updated to remove the
disputed islands and dotted lines marking China's maritime claims in the South China Sea. Hong Kong: The event was
held in Hong Kong on May 2. In the ceremony held at the Hong Kong Cultural Centre in Tsim Sha Tsui, Chief Executive
Donald Tsang handed the torch to the first torchbearer, Olympic medalist Lee Lai Shan. The torch relay then traveled
through Nathan Road, Lantau Link, Sha Tin (crossing the Shing Mun River via a dragon boat, which had never been used before in the history of Olympic torch relays), and Victoria Harbour (crossed by Tin Hau, a VIP vessel managed by the Marine
Department) before ending at Golden Bauhinia Square in Wan Chai. A total of 120 torchbearers, consisting of celebrities, athletes and pro-Beijing politicians, were selected to participate in the event. No politicians from the pro-democracy
camp were selected as torchbearers. One torchbearer could not participate due to a flight delay. It was estimated that
more than 200,000 spectators came out and watched the relay. Many enthusiastic supporters wore red shirts and waved
large Chinese flags. According to Hong Kong Chief Secretary for Administration Henry Tang, 3,000 police were deployed
to ensure order. There were several protests along the torch relay route. Members of the Hong Kong Alliance in Support
of Patriotic Democratic Movements in China, including pro-democracy activist Szeto Wah, waved novelty inflatable
plastic Olympic flames, which they said symbolised democracy. They wanted accountability for the Tiananmen Square
protests of 1989 and the implementation of democracy in Hong Kong. Political activist and Legislative Council member
Leung Kwok-hung (Longhair) also joined the protest, saying "I'm very proud that in Hong Kong we still have people
brave enough to speak out." Pro-democracy activists were overwhelmed by a crowd of torch supporters shouting insults like "running dog," "traitor," and "get out!," as well as "I love the Communist Party." At the same time, about 10 members of
the Civil Human Rights Front had orange banners calling for human rights improvements and universal suffrage. Onlookers
were saying "Aren't you Chinese?" in Mandarin as they tried to cover the orange banners with a large Chinese
national flag. One woman had an orange sign that said, "Olympic flame for democracy", while a man carried a poster
with a tank and the slogan "One world, two dreams". A university student and former RTHK radio host, Christina Chan,
wrapped the Tibetan snow lion flag around her body and later began waving it. Several onlookers heckled Chan, shouting
"What kind of Chinese are you?" and "What a shame!" In the end, she and some of the protesters were taken away against
their will by the authorities via a police vehicle "for their own protection." Chan subsequently sued the Hong Kong government, claiming her human rights were breached (case number HCAL139/08). The Color Orange democracy
group, led by Danish sculptor Jens Galschiøt, originally planned to join the Hong Kong Alliance relay and paint the
"Pillar of Shame", a structure he built in Hong Kong to commemorate the 1989 Tiananmen Square protests. However,
Galschiøt and two other people were denied entry to Hong Kong on April 26, 2008 due to "immigration reasons" and
were forced to leave Hong Kong. In response, Lee Cheuk Yan, vice chairman of the Hong Kong Alliance in Support of
Patriotic Democratic Movements in China, said, "It's outrageous that the government is willing to sacrifice the image
of Hong Kong because of the torch relay." Hollywood actress Mia Farrow was also briefly questioned at the Hong Kong
airport though officials allowed her to enter. She later gave a speech criticizing China's relations with Sudan in
Hong Kong, where a small minority of people were also protesting against China's role in the Darfur crisis. Legislator Cheung Man-kwong also said that the government's decision to allow Farrow to enter while denying others was a double standard and a violation of Hong Kong's "one country, two systems" policy. Macao: The event was held in Macau on May
3. It was the first time that the Olympic torch had traveled to Macau. A ceremony was held at Macau Fisherman's Wharf.
Afterward, the torch traveled through Macau, passing by a number of landmarks including A-Ma Temple, Macau Tower,
Ponte Governador Nobre de Carvalho, Ponte de Sai Van, Macau Cultural Centre, Macau Stadium and then back to the Fisherman's
Wharf for the closing ceremony. Parts of the route near the Ruins of St. Paul's and Taipa were shortened due to large
crowds of supporters blocking narrow streets. A total of 120 torchbearers participated in this event including casino
tycoon Stanley Ho. Leong Hong Man and Leong Heng Teng were the first and last torchbearers in the relay, respectively.
An article published in the Macao Daily News criticized the list of torchbearers for not fully representing the Macanese and for including too many non-athletes (some of whom had already been torchbearers at other sporting events). A Macau resident was arrested on April 26 for posting a message on cyberctm.com encouraging
people to disrupt the relay. Both orchidbbs.com and cyberctm.com Internet forums were shut down from May 2 to 4.
This fueled speculation that the shutdowns were targeting speeches against the relay. The head of the Bureau of Telecommunications
Regulation denied that the shutdowns of the websites were politically motivated. About 2,200 police were deployed on the streets, and there were no interruptions. China: The torch returned to mainland China for the first time since April. The
torch arrived in Sanya, Hainan on May 4 with celebrations attended by International Olympic Committee (IOC) officials
and Chinese celebrities such as Jackie Chan. The entire relay through mainland China was largely a success, with many people
welcoming the arrival of the torch along the way. The coverage of the events by the media came under scrutiny during
the relay. Chinese media coverage of the torch relay was distinct in a number of ways from coverage elsewhere. Western reporters in Beijing described Chinese media coverage as partial and censored (for example, when Chinese media did not broadcast Reporters Without Borders' disruption of the torch-lighting ceremony), whereas Chinese netizens in turn accused Western media coverage of being biased. The French newspaper Libération was criticised by the
Chinese State press agency Xinhua for its allegedly biased reporting; Xinhua suggested that Libération needed "a
stinging slap in the face" for having "insulted the Olympic flame" and "supported a handful of saboteurs". In response
to pro-Tibet and pro-human rights protests, the Chinese media focused on the more disruptive protesters, referring
for example to "a very small number of 'Tibet independence' secessionists and a handful of so-called human rights-minded
NGO activists" intent on "disrupting and sabotaging the Beijing Olympic Games". However, the Chinese media published
articles about crowds supporting the torch relay. Xinhua and CCTV quoted relay spectators who condemned the protests,
to a greater extent than most Western media, but did not quote any alternate viewpoints, providing no coverage of
support for the protests by some ordinary citizens in Western countries. They quoted athletes who expressed pride at
taking part in the relays, to a greater extent than Western media, but not those who, like Marie-José Pérec, expressed
understanding and support for the protestors. The Beijing Organising Committee for the Games mentioned the "smiling faces of the elderly, children and the artists on the streets", and cheering, supportive Londoners. Xinhua said
that protesters were "radicals" who "trampled human rights" and whose activities were condemned by "the people of
the world who cordially love the Olympic spirit". Reports on the Delhi relay were similarly distinct. Despite intended
torchbearers Kiran Bedi, Soha Ali Khan, Sachin Tendulkar and Bhaichung Bhutia all withdrawing from the event, the
official Chinese website for the relay reported "Indian torchbearers vow to run for spirit of Olympics", and quoted
torchbearers Manavjit Singh Sandhu, Abhinav Bindra, Ayaan Ali Khan and Rajinder Singh Rahelu all stating that sports
and politics should not be mixed. Some Western media reported on Chinese accusations of Western media bias.
The Daily Telegraph published an opinion piece by the Chinese ambassador to the United Kingdom, Fu Ying, who accused
Western media of "demonising" China during their coverage of the torch relays. The Telegraph also asked its readers
to send their views in response to the question "Is the West demonising China?" The BBC reported on a demonstration
in Sydney by Chinese Australians "voicing support for Beijing amid controversy over Tibet" and protesting against
what they saw as Western media bias. The report showed demonstrators carrying signs which read "Shame on some Western
media", "BBC CNN lies too" and "Stop media distortion!". One demonstrator interviewed by the BBC stated: "I saw some
news from CNN, from the BBC, some media [inaudible], and they are just lying." Libération also reported that it had
been accused of bias by the Chinese media. On April 17, Xinhua condemned what it called "biased coverage of the Lhasa
riots and the Olympic torch relay by the U.S.-based Cable News Network (CNN)". The same day, the Chinese government
called on CNN to "apologise" for having allegedly insulted the Chinese people, and for "attempting to incite the
Chinese people against the government". CNN issued a statement on April 14, responding to China over the "thugs and goons" comment by Jack Cafferty. On April 19, the BBC reported that 1,300 people had gathered outside BBC buildings in Manchester
and London, protesting against what they described as Western media bias. Several days earlier, the BBC had published
an article entitled "The challenges of reporting in China", responding to earlier criticism. The BBC's Paul Danahar
noted that Chinese people were now "able to access the BBC News website for the first time, after years of strict
censorship", and that "many were critical of our coverage". He provided readers with a reminder of censorship in
China, and added: "People who criticise the media for their coverage in Tibet should acknowledge that we were and
still are banned from reporting there." He also quoted critical Chinese responses, and invited readers to comment.
On April 20, the People's Daily published a report entitled "Overseas Chinese rally against biased media coverage,
for Olympics". It included images of Chinese people demonstrating in France, the United Kingdom, Germany and the
United States. One picture showed Chinese demonstrators holding a sign which claimed, incorrectly, that the BBC had
not reported on Jin Jing. The People's Daily quoted one protestor who claimed the "BBC on some of the recent events
has misled the British public and the rest of the world by providing intensive untruthful reports and biased coverage."
On April 4, it was reported that the Chinese government appeared to be running an anti-CNN website that criticizes
the cable network's coverage of recent events. The site claimed to have been created by a Beijing citizen. However, foreign correspondents in Beijing voiced suspicions that Anti-CNN may have been a semi-governmental website. A Chinese
government spokesman insisted the site was spontaneously set up by a Chinese citizen angered over media coverage.
The Beijing Olympic Organizing Committee sent out a team of 30 unarmed attendants selected from the People's Armed
Police to escort the flame throughout its journey. According to the Asia Times, the attendants were sworn in as the "Beijing Olympic Games Sacred Flame Protection Unit" during a ceremony in August 2007; their main job is to keep the Olympic flame alight throughout the journey and to assist in transferring the flame between the torches, the lanterns and the cauldrons.
They wear matching blue tracksuits and are intended to accompany the torch every step of the way. One of the torch
attendants, dubbed "Second Right Brother," has developed a significant online fan-base, particularly among China's
female netizens. In China, a call to boycott the French hypermarket chain Carrefour from May 1 began spreading through mobile
text messaging and online chat rooms amongst the Chinese over the weekend from April 12, accusing the company's major
shareholder, the LVMH Group, of donating funds to the Dalai Lama. There were also calls to extend the boycott to
include French luxury goods and cosmetic products. Chinese protesters organized boycotts of the French-owned retail
chain Carrefour in major Chinese cities including Kunming, Hefei and Wuhan, accusing the French nation of pro-secessionist
conspiracy and anti-Chinese racism. Some burned French flags, some added swastikas (for their connotations of Nazism) to the French flag, and others spread short online messages calling for large protests in front of French consulates and the embassy. Some shoppers who insisted on entering one of the Carrefour stores in Kunming were blocked by boycotters
wielding large Chinese flags and hit by water bottles. Hundreds of people joined anti-French rallies in Beijing,
Wuhan, Hefei, Kunming and Qingdao, which quickly spread to other cities like Xi'an, Harbin and Jinan. Carrefour denied
any support or involvement in the Tibetan issue, and had its staff in its Chinese stores wear uniforms emblazoned
with the Chinese national flag and caps with Olympic insignia, as well as the words "Beijing 2008", to show its support for the games. The effort had to cease when BOCOG deemed the use of official Olympic insignia illegal and a violation of copyright. In response to the demonstrations, the Chinese government attempted to calm
the situation, possibly fearing the protests might spiral out of control as had happened in previous years, including the anti-Japanese protests in 2005. State media and commentaries began to call for calm, such as an editorial in
the People's Daily which urged Chinese people to "express [their] patriotic enthusiasm calmly and rationally, and
express patriotic aspiration in an orderly and legal manner". The government also began to patrol and censor internet forums such as Sohu.com, removing comments related to the Carrefour boycott. In the days prior to the
planned boycott, evidence of efforts by Chinese authorities to choke the mass boycott's efforts online became even
more evident, including barring searches of words related to the French protests, but protests broke out nonetheless
in front of Carrefour's stores in Beijing, Changsha, Fuzhou and Shenyang on May 1. In Japan, the Mayor of Nagano, Shoichi Washizawa, said before the Nagano leg that it had become a "great nuisance" for the city to host the torch relay. Washizawa's aides said the mayor's remark was not a criticism of the relay itself but of the potential disruptions
and confusion surrounding it. An employee of the Nagano City Office ridiculed the protests in Europe, saying "They are doing something foolish" in a televised interview. Nagano City later officially apologized and explained that what he had meant was that "such violent protests were not easy to accept". The Olympic flame is supposed to remain lit for the whole relay. When the torch is extinguished at night,
on airplanes, in bad weather, or during protests (such as the several occasions in Paris), the Olympic Flame is kept
alight in a set of 8 lanterns. A union planned to protest at the relay for better living conditions.
Hong Kong legislator Michael Mak Kwok-fung and activist Chan Cheong, both members of the League of Social Democrats,
were not allowed to enter Macau. Chinese media also reported on Jin Jing, whom the official Chinese torch relay
website described as "heroic" and an "angel", whereas Western media initially gave her little mention – despite a
Chinese claim that "Chinese Paralympic athlete Jin Jing has garnered much attention from the media". Two additional
teams of 40 attendants each were to accompany the flame on its mainland China route. This arrangement, however, sparked several controversies.
In modern molecular biology and genetics, the genome is the genetic material of an organism. It consists of DNA (or RNA in
RNA viruses). The genome includes both the genes and the non-coding sequences of the DNA/RNA. The term was created
in 1920 by Hans Winkler, professor of botany at the University of Hamburg, Germany. The Oxford Dictionary suggests the name is a blend of the words gene and chromosome; however, see omics for a more thorough discussion. A few related -ome words already existed, such as biome and rhizome, forming a vocabulary into which genome fits systematically.
Some organisms have multiple copies of chromosomes: diploid, triploid, tetraploid and so on. In classical genetics,
in a sexually reproducing organism (typically eukarya) the gamete has half the number of chromosomes of the somatic
cell and the genome is a full set of chromosomes in a diploid cell. The halving of the genetic material in gametes
is accomplished by the segregation of homologous chromosomes during meiosis. In haploid organisms, including the cells of bacteria and archaea, in organelles such as mitochondria and chloroplasts, and in viruses, which similarly contain genes, the single chain or set of circular or linear chains of DNA (or RNA, for some viruses) likewise constitutes the genome.
The term genome can be applied specifically to mean what is stored on a complete set of nuclear DNA (i.e., the "nuclear
genome") but can also be applied to what is stored within organelles that contain their own DNA, as with the "mitochondrial
genome" or the "chloroplast genome". Additionally, the genome can comprise non-chromosomal genetic elements such
as viruses, plasmids, and transposable elements. When people say that the genome of a sexually reproducing species
has been "sequenced", typically they are referring to a determination of the sequences of one set of autosomes and
one of each type of sex chromosome, which together represent both of the possible sexes. Even in species that exist
in only one sex, what is described as a "genome sequence" may be a composite read from the chromosomes of various
individuals. Colloquially, the phrase "genetic makeup" is sometimes used to signify the genome of a particular individual
or organism.[citation needed] The study of the global properties of genomes of related organisms is usually referred
to as genomics, which distinguishes it from genetics which generally studies the properties of single genes or groups
of genes. Both the number of base pairs and the number of genes vary widely from one species to another, and there
is only a rough correlation between the two (an observation known as the C-value paradox). At present, the highest
known number of genes is around 60,000, for the protozoan causing trichomoniasis (see List of sequenced eukaryotic
genomes), almost three times as many as in the human genome. In 1976, Walter Fiers at the University of Ghent (Belgium)
was the first to establish the complete nucleotide sequence of a viral RNA-genome (Bacteriophage MS2). The next year
Fred Sanger completed the first DNA-genome sequence: Phage Φ-X174, of 5386 base pairs. The first complete genome
sequences among all three domains of life were released within a short period during the mid-1990s: The first bacterial
genome to be sequenced was that of Haemophilus influenzae, completed by a team at The Institute for Genomic Research
in 1995. A few months later, the first eukaryotic genome was completed, with sequences of the 16 chromosomes of budding
yeast Saccharomyces cerevisiae published as the result of a European-led effort begun in the mid-1980s. The first
genome sequence for an archaeon, Methanococcus jannaschii, was completed in 1996, again by The Institute for Genomic
Research. The development of new technologies has made it dramatically easier and cheaper to do sequencing, and the
number of complete genome sequences is growing rapidly. The US National Institutes of Health maintains one of several
comprehensive databases of genomic information. Among the thousands of completed genome sequencing projects are those for rice, the mouse, the plant Arabidopsis thaliana, the puffer fish, and the bacterium E. coli. In December 2013, scientists first sequenced the entire genome of a Neanderthal, an extinct human species. The genome was extracted
from the toe bone of a 130,000-year-old Neanderthal found in a Siberian cave. New sequencing technologies, such as
massive parallel sequencing have also opened up the prospect of personal genome sequencing as a diagnostic tool,
as pioneered by Manteia Predictive Medicine. A major step toward that goal was the completion in 2007 of the full
genome of James D. Watson, one of the co-discoverers of the structure of DNA. Whereas a genome sequence lists the
order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a
genome sequence and aids in navigating around the genome. The Human Genome Project was organized to map and to sequence
the human genome. A fundamental step in the project was the release of a detailed genomic map by Jean Weissenbach
and his team at the Genoscope in Paris. Genome composition describes the makeup of the contents of a haploid genome, including genome size and the proportions of non-repetitive and repetitive DNA. By comparing genome compositions, scientists can better understand the evolutionary history of a given genome.
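Such composition comparisons start from simple per-genome statistics. The following is a minimal sketch in Python, assuming the common soft-masking convention in which repetitive regions are written in lowercase (as produced by tools such as RepeatMasker); the function name genome_composition and the toy sequence are illustrative, not from any particular library, and a real analysis would parse FASTA files with dedicated tools.

```python
def genome_composition(seq: str) -> dict:
    """Compute basic composition statistics for a DNA sequence.

    Assumes soft-masking: repetitive regions are lowercase, the rest uppercase.
    Returns genome size in base pairs, the GC fraction, and the fraction of
    the sequence annotated as repetitive.
    """
    size = len(seq)
    # GC content is counted case-insensitively over the whole sequence.
    gc = sum(1 for base in seq.upper() if base in "GC")
    # Lowercase bases are treated as soft-masked (repetitive) regions.
    repetitive = sum(1 for base in seq if base.islower())
    return {
        "size_bp": size,
        "gc_fraction": gc / size,
        "repetitive_fraction": repetitive / size,
    }

# Toy example: 20 bp with a 6 bp soft-masked (lowercase) repeat.
stats = genome_composition("ATGCGCATTAacacacGGCT")
print(stats["size_bp"])              # 20
print(stats["repetitive_fraction"])  # 0.3
```

Comparing these dictionaries across genomes gives a crude version of the contrast drawn below between prokaryotes (mostly non-repetitive, coding DNA) and eukaryotes (often dominated by repetitive DNA).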
When talking about genome composition, one should distinguish between prokaryotes and eukaryotes, as their content structures differ greatly. In prokaryotes, most of the genome (85–90%) is non-repetitive DNA, meaning it consists mainly of coding DNA, while non-coding regions take up only a small part. By contrast, eukaryotes have the feature
of exon-intron organization of protein coding genes; the variation of repetitive DNA content in eukaryotes is also
extremely high. In mammals and plants, the major part of the genome is composed of repetitive DNA. Most biological
entities that are more complex than a virus sometimes or always carry additional genetic material besides that which
resides in their chromosomes. In some contexts, such as sequencing the genome of a pathogenic microbe, "genome" is
meant to include information stored on this auxiliary material, which is carried in plasmids. In such circumstances
then, "genome" describes all of the genes and information on non-coding DNA that have the potential to be present.
In eukaryotes such as plants, protozoa and animals, however, "genome" carries the typical connotation of only information
on chromosomal DNA. So although these organisms contain chloroplasts or mitochondria that have their own DNA, the
genetic information contained by DNA within these organelles is not considered part of the genome. In fact, mitochondria
are sometimes said to have their own genome often referred to as the "mitochondrial genome". The DNA found within
the chloroplast may be referred to as the "plastome". Genome size is the total number of DNA base pairs in one copy
of a haploid genome. Genome size is positively correlated with morphological complexity among prokaryotes and lower eukaryotes; however, from mollusks upward among the higher eukaryotes, this correlation no longer holds. This phenomenon also indicates the strong influence that repetitive DNA exerts on genomes. Since
genomes are very complex, one research strategy is to reduce the number of genes in a genome to the bare minimum
and still have the organism in question survive. There is experimental work being done on minimal genomes for single
cell organisms as well as minimal genomes for multi-cellular organisms (see Developmental biology). The work is both
in vivo and in silico. The proportion of non-repetitive DNA is calculated by dividing the length of non-repetitive DNA by genome size. Protein-coding genes and RNA-coding genes are generally non-repetitive DNA. A bigger genome
does not mean more genes, and the proportion of non-repetitive DNA decreases along with increasing genome size in
higher eukaryotes. The proportion of non-repetitive DNA varies considerably between species. Some prokaryotes, such as E. coli, have only non-repetitive DNA; lower eukaryotes such as C. elegans and the fruit fly still possess more non-repetitive DNA than repetitive DNA; and higher eukaryotes tend to have more repetitive DNA than non-repetitive DNA. In some plants and amphibians, the proportion of non-repetitive DNA is no more than 20%, making it a minority component. The proportion of repetitive DNA is calculated by dividing the length of repetitive DNA by genome size. There are two categories of repetitive DNA in the genome: tandem repeats and interspersed repeats. Tandem repeats are
usually caused by slippage during replication, unequal crossing-over, and gene conversion; satellite DNA and microsatellites are forms of tandem repeats in the genome. Although tandem repeats account for a significant proportion of the genome, the largest proportion in mammals belongs to the other type, interspersed repeats. Interspersed repeats mainly come from
transposable elements (TEs), but they also include some protein coding gene families and pseudogenes. Transposable
elements are able to integrate into the genome at another site within the cell. It is believed that TEs are an important
driving force on genome evolution of higher eukaryotes. TEs can be classified into two categories, Class 1 (retrotransposons)
and Class 2 (DNA transposons). Retrotransposons are transcribed into RNA, which is then duplicated at another site in the genome; they can be divided into long terminal repeat (LTR) and non-LTR retrotransposons. DNA transposons generally move by "cut and paste" within the genome, although duplication has also been observed. Class 2 TEs do not use an RNA intermediate; they are common in bacteria but have also been found in metazoans. Genomes
are more than the sum of an organism's genes and have traits that may be measured and studied without reference to
the details of any particular genes and their products. Researchers compare traits such as chromosome number (karyotype),
genome size, gene order, codon usage bias, and GC-content to determine what mechanisms could have produced the great
variety of genomes that exist today (for recent overviews, see Brown 2002; Saccone and Pesole 2003; Benfey and Protopapas
2004; Gibson and Muse 2004; Reese 2004; Gregory 2005). Duplications play a major role in shaping the genome. Duplication
may range from extension of short tandem repeats, to duplication of a cluster of genes, and all the way to duplication
of entire chromosomes or even entire genomes. Such duplications are probably fundamental to the creation of genetic
novelty. Horizontal gene transfer is invoked to explain how there is often extreme similarity between small portions
of the genomes of two organisms that are otherwise very distantly related. Horizontal gene transfer seems to be common
among many microbes. Also, eukaryotic cells seem to have experienced a transfer of some genetic material from their
chloroplast and mitochondrial genomes to their nuclear chromosomes.
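The non-repetitive and repetitive proportions described above are simple ratios of sequence length to genome size. As a minimal illustration (the figures below are placeholder values chosen for the example, not measured data):

```python
# Sketch: genome composition proportions as defined in the text.
# Both proportions are fractions of total haploid genome size.

def composition(genome_size_bp: int, non_repetitive_bp: int) -> tuple[float, float]:
    """Return (non-repetitive, repetitive) proportions of a haploid genome."""
    repetitive_bp = genome_size_bp - non_repetitive_bp
    return non_repetitive_bp / genome_size_bp, repetitive_bp / genome_size_bp

# A hypothetical prokaryote-like genome, ~90% non-repetitive DNA
# (illustrative numbers only).
non_rep, rep = composition(genome_size_bp=4_600_000, non_repetitive_bp=4_140_000)
print(f"non-repetitive: {non_rep:.0%}, repetitive: {rep:.0%}")
```

The two proportions always sum to one, since every base pair is counted as either repetitive or non-repetitive.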
A comprehensive school is a state school that does not select its intake on the basis of academic achievement or aptitude.
This is in contrast to the selective school system, where admission is restricted on the basis of selection criteria.
The term is commonly used in relation to England and Wales, where comprehensive schools were introduced on an experimental
basis in the 1940s and became more widespread from 1965. About 90% of British secondary school pupils now attend
comprehensive schools. They correspond broadly to the public high school in the United States and Canada and to the
German Gesamtschule.[citation needed] Comprehensive schools are primarily about providing an entitlement curriculum
to all children, without selection whether due to financial considerations or attainment. A consequence of that is
a wider ranging curriculum, including practical subjects such as design and technology and vocational learning, which
were less common or non-existent in grammar schools. Providing post-16 education cost-effectively becomes more challenging
for smaller comprehensive schools, because of the number of courses needed to cover a broader curriculum with comparatively
fewer students. This is why schools have tended to get larger and also why many local authorities have organised
secondary education into 11–16 schools, with the post-16 provision provided by Sixth Form colleges and Further Education
Colleges. Comprehensive schools do not select their intake on the basis of academic achievement or aptitude, but
there are demographic reasons why the attainment profiles of different schools vary considerably. In addition, government
initiatives such as the City Technology Colleges and Specialist schools programmes have made the comprehensive ideal
less certain. In these schools children could be selected on the basis of curriculum aptitude related to the school's
specialism, even though the schools take quotas from each quartile of the attainment range to ensure they are
not selective by attainment. A problem with this is whether the quotas should be taken from a normal distribution
or from the specific distribution of attainment in the immediate catchment area. In the selective school system,
which survives in several parts of the United Kingdom, admission is dependent on selection criteria, most commonly
a cognitive test or tests. Although comprehensive schools were introduced to England and Wales in 1965, there are
164 selective grammar schools still in operation[citation needed] (though this is a small number compared
to approximately 3500 state secondary schools in England). Most comprehensives are secondary schools for children
between the ages of 11 to 16, but in a few areas there are comprehensive middle schools, and in some places the secondary
level is divided into two, for students aged 11 to 14 and those aged 14 to 18, roughly corresponding to the US middle
school (or junior high school) and high school, respectively. With the advent of key stages in the National Curriculum
some local authorities reverted from the Middle School system to 11–16 and 11–18 schools so that the transition between
schools corresponds to the end of one key stage and the start of another. In principle, comprehensive schools were
conceived as "neighbourhood" schools for all students in a specified catchment area. Current education reforms, including the Academies Programme, Free Schools and University Technical Colleges, will no doubt have some impact on the comprehensive
ideal but it is too early to say to what degree. Finland has used comprehensive schools since the 1970s, in the sense
that everyone is expected to complete the nine grades of peruskoulu, from age 7 to 16. The division into lower
comprehensive school (grades 1–6, ala-aste, alakoulu) and upper comprehensive school (grades 7–9, yläaste, yläkoulu)
has been discontinued. Germany has a comprehensive school known as the Gesamtschule. While some German schools such
as the Gymnasium and the Realschule have rather strict entrance requirements, the Gesamtschule does not have such
requirements. They offer college preparatory classes for the students who are doing well, general education classes
for average students, and remedial courses for those who are struggling. In most cases, students attending
a Gesamtschule may graduate with the Hauptschulabschluss, the Realschulabschluss or the Abitur depending on how well
they did in school. The percentage of students attending a Gesamtschule varies by Bundesland. In the State of Brandenburg
more than 50% of all students attended a Gesamtschule in 2007, while in the State of Bavaria less than 1% did. Starting
in 2010/2011, Hauptschulen were merged with Realschulen and Gesamtschulen to form a new type of comprehensive school
in the German States of Berlin and Hamburg, called Stadtteilschule in Hamburg and Sekundarschule in Berlin (see:
Education in Berlin, Education in Hamburg). The "Mittelschule" is a school in some States of Germany that offers
regular classes and remedial classes but no college preparatory classes. In some States of Germany, the Hauptschule
does not exist, and any student who has not been accepted by another school has to attend the Mittelschule. Students
may be awarded the Hauptschulabschluss or the Mittlere Reife but not the Abitur. Comprehensive schools have been
accused of grade inflation after a study revealed that Gymnasium senior students of average mathematical ability
found themselves at the very bottom of their class and had an average grade of "Five", which means "Failed". Gesamtschule
senior students of average mathematical ability found themselves in the upper half of their class and had an average
grade of "Three Plus". When a central Abitur examination was established in the State of North Rhine-Westphalia,
it was revealed that Gesamtschule students did worse than could be predicted by their grades or class rank. Barbara
Sommer (Christian Democratic Union), Education Minister of North Rhine-Westphalia, commented that: Looking at the
performance gap between comprehensives and the Gymnasium [at the Abitur central examination] [...] it is difficult
to understand why the Social Democratic Party of Germany wants to do away with the Gymnasium. [...] The comprehensives
do not help students achieve [...] I am sick and tired of the comprehensive schools blaming their problems on the
social class origins of their students. What kind of attitude is this to blame their own students? She also called
the Abitur awarded by the Gymnasium the true Abitur and the Abitur awarded by the Gesamtschule "Abitur light". As
a reaction, Sigrid Beer (Alliance '90/The Greens) stated that comprehensives were structurally discriminated against
by the government, which favoured the Gymnasiums. She also said that many of the students awarded the Abitur by the
comprehensives came from "underprivileged groups" and sneering at their performance was a "piece of impudence". Gesamtschulen
might put bright working class students at risk, according to several studies, which have shown that an achievement
gap opens between working class students attending a comprehensive and their middle class peers. Also working class
students attending a Gymnasium or a Realschule outperform students from similar backgrounds attending a comprehensive.
However, it is not students attending a comprehensive, but students attending a Hauptschule, who perform the poorest. A study by Helmut Fend (who had long been a fierce proponent of comprehensive schools) revealed
that comprehensive schools do not help working class students. He compared alumni of the tripartite system to alumni
of comprehensive schools. While working class alumni of comprehensive schools were awarded better school diplomas
at age 35, they held similar occupational positions as working class alumni of the tripartite system and were as
unlikely to graduate from college. Gibraltar opened its first comprehensive school in 1972. Between the ages of 12
and 16 two comprehensive schools cater for girls and boys separately. Students may also continue into the sixth form
to complete their A-levels. Comprehensive schools were introduced into Ireland in 1966 through an initiative of Patrick
Hillery, Minister for Education, to give a broader range of education compared to that of the vocational school system,
which was then the only system of schools completely controlled by the state. Until then, education in Ireland was
largely dominated by religious interests; the voluntary secondary school system was a particular realisation
of this. The comprehensive school system is still relatively small and to an extent has been superseded by the community
school concept. The Irish word for a comprehensive school is a 'scoil chuimsitheach.' In Ireland comprehensive schools
were an earlier model of state schools, introduced in the late 1960s and largely replaced by the secular community
model of the 1970s. The comprehensive model generally incorporated older schools that were under Roman Catholic or
Protestant ownership, and the various denominations still manage the school as patrons or trustees. The state owns
the school property, which is vested in the trustees in perpetuity. The model was adopted to make state schools more
acceptable to a largely conservative society of the time. The introduction of the community school model in the 1970s
controversially removed the denominational basis of the schools, but religious interests were invited to be represented
on the Boards of Management. Community schools are divided into two models, the community school vested in the Minister
for Education and the community college vested in the local Education and Training Board. Community colleges tended
to be amalgamations of unviable local schools under the umbrella of a new community school model, but community schools
have tended to be entirely new foundations. The first comprehensives were set up after the Second World War. In 1946,
for example, Walworth School was one of five 'experimental' comprehensive schools set up by the London County Council.
Another early comprehensive school was Holyhead County School in Anglesey in 1949. Other early examples of comprehensive
schools included Woodlands Boys School in Coventry (opened in 1954) and Tividale Comprehensive School in Tipton.
The largest expansion of comprehensive schools resulted from a policy decision taken in 1965 by Anthony Crosland,
Secretary of State for Education in the 1964–1970 Labour government. The policy decision was implemented by Circular
10/65, an instruction to local education authorities to plan for conversion. Students sat the 11+ examination in
their last year of primary education and were sent to one of a secondary modern, secondary technical or grammar school
depending on their perceived ability. Secondary technical schools were never widely implemented and for 20 years
there was a virtual bipartite system which saw fierce competition for the available grammar school places, which
varied between 15% and 25% of total secondary places, depending on location.[citation needed] In 1970 Margaret Thatcher
became Secretary of State for Education of the new Conservative government. She ended the compulsion on local authorities
to convert; however, many local authorities were so far down the path that it would have been prohibitively expensive
to attempt to reverse the process, and more comprehensive schools were established under Mrs Thatcher than any other
education secretary. By 1975 the majority of local authorities in England and Wales had abandoned the 11-plus examination
and moved to a comprehensive system. Over that 10-year period many secondary modern schools and grammar schools were
amalgamated to form large neighbourhood comprehensives, whilst a number of new schools were built to accommodate
a growing school population. By the mid-1970s the system had been almost fully implemented, with virtually no secondary
modern schools remaining. Many grammar schools were either closed or changed to comprehensive status. Some local
authorities, including Sandwell and Dudley in the West Midlands, changed all of their state secondary schools to comprehensive
schools during the 1970s. In 1976 the future Labour prime minister James Callaghan launched what became known as
the 'great debate' on the education system. He went on to list the areas he felt needed closest scrutiny: the case
for a core curriculum, the validity and use of informal teaching methods, the role of school inspection and the future
of the examination system. Comprehensive schools remain the most common type of state secondary school in England,
and the only type in Wales. They account for around 90% of pupils, or 64% if one does not count schools with low-level
selection. This figure varies by region. Since the 1988 Education Reform Act, parents have a right to choose which
school their child should go to, or not to send them to school at all and to home educate them instead. The
concept of "school choice" introduces the idea of competition between state schools, a fundamental change to the
original "neighbourhood comprehensive" model, and is partly intended as a means by which schools that are perceived
to be inferior are forced either to improve or, if hardly anyone wants to go there, to close down. Government policy
is currently promoting 'specialisation' whereby parents choose a secondary school appropriate for their child's interests
and skills. Most initiatives focus on parental choice and information, implementing a pseudo-market incentive to
encourage better schools. This logic has underpinned the controversial league tables of school performance. Scotland
has a very different educational system from England and Wales, though also based on comprehensive education. It
has different ages of transfer, different examinations and a different philosophy of choice and provision. All publicly
funded primary and secondary schools are comprehensive. The Scottish Government has rejected plans for specialist
schools as of 2005. Education in Northern Ireland differs slightly from systems used elsewhere in the United Kingdom,
but it is more similar to that used in England and Wales than it is to Scotland. There is some controversy about
comprehensive schools. In Germany, as a rule of thumb, those supporting The Left Party, the Social Democratic Party of Germany
and Alliance '90/The Greens are in favour of comprehensive schools, while those supporting the Christian Democratic
Union and the Free Democratic Party are opposed to them.
The Republic of the Congo (French: République du Congo), also known as Congo, Congo Republic, West Congo[citation needed],
or Congo-Brazzaville, is a country located in Central Africa. It is bordered by five countries: Gabon to the west;
Cameroon to the northwest; the Central African Republic to the northeast; the Democratic Republic of the Congo to
the east and south; and the Angolan exclave of Cabinda to the southwest. The region was dominated by Bantu-speaking
tribes, who built trade links leading into the Congo River basin. Congo-Brazzaville was formerly part of the French
colony of Equatorial Africa. Upon independence in 1960, the former colony of French Congo became the Republic of
the Congo. The People's Republic of the Congo was a Marxist–Leninist one-party state from 1970 to 1991. Multi-party
elections have been held since 1992, although a democratically elected government was ousted in the 1997 Republic
of the Congo Civil War and President Denis Sassou Nguesso has ruled for 26 of the past 36 years. The political stability
and development of hydrocarbon production have made the Republic of the Congo the fourth-largest oil producer in the Gulf of Guinea and provided the country with relative prosperity despite the poor state of its infrastructure and public services
and an unequal distribution of oil revenues. Bantu-speaking peoples who founded tribes during the Bantu expansions
largely displaced and absorbed the earliest inhabitants of the region, the Pygmy people, about 1500 BC. The Bakongo,
a Bantu ethnic group that also occupied parts of present-day Angola, Gabon, and Democratic Republic of the Congo,
formed the basis for ethnic affinities and rivalries among those countries. Several Bantu kingdoms—notably those
of the Kongo, the Loango, and the Teke—built trade links leading into the Congo River basin. The Portuguese explorer
Diogo Cão reached the mouth of the Congo in 1484. Commercial relationships quickly grew between the inland Bantu
kingdoms and European merchants who traded various commodities, manufactured goods, and people captured from the
hinterlands. After centuries as a major hub for transatlantic trade, direct European colonization of the Congo river
delta began in the late 19th century, subsequently eroding the power of the Bantu societies in the region. The area
north of the Congo River came under French sovereignty in 1880 as a result of Pierre de Brazza's treaty with Makoko
of the Bateke. This Congo Colony became known first as French Congo, then as Middle Congo in 1903. In 1908, France
organized French Equatorial Africa (AEF), comprising Middle Congo, Gabon, Chad, and Oubangui-Chari (the modern Central
African Republic). The French designated Brazzaville as the federal capital. Economic development during the first
50 years of colonial rule in Congo centered on natural-resource extraction. The methods were often brutal: construction
of the Congo–Ocean Railroad following World War I has been estimated to have cost at least 14,000 lives. During the
Nazi occupation of France during World War II, Brazzaville functioned as the symbolic capital of Free France between
1940 and 1943. The Brazzaville Conference of 1944 heralded a period of major reform in French colonial policy. Congo
benefited from the postwar expansion of colonial administrative and infrastructure spending as a result of its central
geographic location within AEF and the federal capital at Brazzaville. It also received a local legislature after
the adoption of the 1946 constitution that established the Fourth Republic. Following the revision of the French
constitution that established the Fifth Republic in 1958, the AEF dissolved into its constituent parts, each of which
became an autonomous colony within the French Community. During these reforms, Middle Congo became known as the Republic
of the Congo in 1958 and published its first constitution in 1959. Antagonism between the pro-Opangault Mbochis and
the pro-Youlou Balalis resulted in a series of riots in Brazzaville in February 1959, which the French Army subdued.
The Republic of the Congo received full independence from France on August 15, 1960. Fulbert Youlou ruled as the
country's first president until labour elements and rival political parties instigated a three-day uprising that
ousted him. The Congolese military took charge of the country briefly and installed a civilian provisional government
headed by Alphonse Massamba-Débat. Under the 1963 constitution, Massamba-Débat was elected President for a five-year
term. During Massamba-Débat's term in office the regime adopted "scientific socialism" as the country's constitutional
ideology. In 1965, Congo established relations with the Soviet Union, the People's Republic of China, North Korea
and North Vietnam. Massamba-Débat's regime also invited several hundred Cuban army troops into the country to train
his party's militia units and these troops helped his government survive a coup in 1966 led by paratroopers loyal
to future President Marien Ngouabi. Nevertheless, Massamba-Débat was unable to reconcile various institutional, tribal
and ideological factions within the country and his regime ended abruptly with a bloodless coup d'état in September
1968. Marien Ngouabi, who had participated in the coup, assumed the presidency on December 31, 1968. One year later,
President Ngouabi proclaimed Congo Africa's first "people's republic", the People's Republic of the Congo, and announced
the decision of the National Revolutionary Movement to change its name to the Congolese Labour Party (PCT). Ngouabi
survived an attempted coup in 1972 but was assassinated on March 16, 1977. An 11-member Military Committee of the
Party (CMP) was then named to head an interim government with Joachim Yhombi-Opango to serve as President of the
Republic. Two years later, Yhombi-Opango was forced from power and Denis Sassou Nguesso became the new president.
Sassou Nguesso aligned the country with the Eastern Bloc and signed a twenty-year friendship pact with the Soviet
Union. Over the years, Sassou had to rely more on political repression and less on patronage to maintain his dictatorship.
Pascal Lissouba, who became Congo's first elected president (1992–1997) during the period of multi-party democracy,
attempted to implement economic reforms with IMF backing to liberalise the economy. In June 1996 the IMF approved
a three-year SDR69.5m (US$100m) enhanced structural adjustment facility (ESAF) and was on the verge of announcing
a renewed annual agreement when civil war broke out in Congo in mid-1997. Congo's democratic progress was derailed
in 1997 when Lissouba and Sassou started to fight for power in the civil war. As presidential elections scheduled
for July 1997 approached, tensions between the Lissouba and Sassou camps mounted. On June 5, President Lissouba's
government forces surrounded Sassou's compound in Brazzaville and Sassou ordered members of his private militia (known
as "Cobras") to resist. Thus began a four-month conflict that destroyed or damaged much of Brazzaville and caused
tens of thousands of civilian deaths. In early October, the Angolan socialist regime began an invasion of Congo to
install Sassou in power. In mid-October, the Lissouba government fell. Soon thereafter, Sassou declared himself president.
In the controversial elections in 2002, Sassou won with almost 90% of the vote cast. His two main rivals, Lissouba
and Bernard Kolelas, were prevented from competing and the only remaining credible rival, Andre Milongo, advised
his supporters to boycott the elections and then withdrew from the race. A new constitution, agreed upon by referendum
in January 2002, granted the president new powers, extended his term to seven years, and introduced a new bicameral
assembly. International observers took issue with the organization of the presidential election and the constitutional
referendum, both of which were reminiscent in their organization of Congo's era of the one-party state. Following
the presidential elections, fighting restarted in the Pool region between government forces and rebels led by Pastor
Ntumi; a peace treaty to end the conflict was signed in April 2003. Sassou also won the following presidential election
in July 2009. According to the Congolese Observatory of Human Rights, a non-governmental organization, the election
was marked by "very low" turnout and "fraud and irregularities". Congo-Brazzaville has had a multi-party political
system since the early 1990s, although the system is heavily dominated by President Denis Sassou Nguesso; he has
lacked serious competition in the presidential elections held under his rule. Sassou Nguesso is backed by his own
Congolese Labour Party (French: Parti Congolais du Travail) as well as a range of smaller parties. Internationally,
Sassou's regime has been hit by corruption revelations despite attempts to censor them. One French investigation
found over 110 bank accounts and dozens of lavish properties in France; Sassou denounced embezzlement investigations
as "racist" and "colonial". On March 27, 2015 Sassou Nguesso announced that his government would hold a referendum
to change the country's 2002 constitution and allow him to run for a third consecutive term in office. On October
25 the government held a referendum to allow Sassou Nguesso to run in the next election. The government claimed that
the proposal was approved by 92 percent of voters, with 72 percent of eligible voters participating. The opposition, which had boycotted the referendum, claimed that the government's statistics were false and that the vote was a sham.
In 2008, the main media were owned by the government, but many more privately run forms of media were being created.
There is one government-owned television station and around 10 small private television channels. Many Pygmies belong
from birth to Bantus in a relationship many refer to as slavery. The Congolese Human Rights Observatory says that
the Pygmies are treated as property the same way "pets" are. On December 30, 2010, the Congolese parliament adopted
a law for the promotion and protection of the rights of indigenous peoples. This law is the first of its kind in
Africa, and its adoption is a historic development for indigenous peoples on the continent. Congo is located in the
central-western part of sub-Saharan Africa, along the Equator, lying between latitudes 4°N and 5°S, and longitudes
11° and 19°E. To the south and east of it is the Democratic Republic of Congo. It is also bounded by Gabon to the
west, Cameroon and the Central African Republic to the north, and Cabinda (Angola) to the southwest. It has a short
coast on the Atlantic Ocean. The capital, Brazzaville, is located on the Congo River, in the south of the country,
immediately across from Kinshasa, the capital of the Democratic Republic of the Congo. The southwest of the country
is a coastal plain for which the primary drainage is the Kouilou-Niari River; the interior of the country consists
of a central plateau between two basins to the south and north. Forests are under increasing exploitation pressure.
Since the country is located on the Equator, the climate is consistent year-round, with the average day temperature
being a humid 24 °C (75 °F) and nights generally between 16 °C (61 °F) and 21 °C (70 °F). The average yearly rainfall
ranges from 1,100 millimetres (43 in) in the south, in the Niari Valley, to over 2,000 millimetres (79 in) in central parts
of the country. The dry season is from June to August while in the majority of the country the wet season has two
rainfall maxima: one in March–May and another in September–November. In 2006–07, researchers from the Wildlife Conservation
Society studied gorillas in heavily forested regions centered on the Ouesso district of the Sangha Region. They suggest
a population on the order of 125,000 Western Lowland Gorillas, whose isolation from humans has been largely preserved
by inhospitable swamps. The economy is a mixture of village agriculture and handicrafts, an industrial sector based
largely on petroleum, support services, and a government characterized by budget problems and overstaffing. Petroleum
extraction has supplanted forestry as the mainstay of the economy. In 2008, the oil sector accounted for 65% of GDP,
85% of government revenue, and 92% of exports. The country also has large untapped mineral wealth. In the early 1980s,
rapidly rising oil revenues enabled the government to finance large-scale development projects with GDP growth averaging
5% annually, one of the highest rates in Africa. The government has mortgaged a substantial portion of its petroleum
earnings, contributing to a shortage of revenues. The January 12, 1994 devaluation of Franc Zone currencies by 50% resulted
in inflation of 46% in 1994, but inflation has subsided since. Economic reform efforts continued with the support
of international organizations, notably the World Bank and the International Monetary Fund. The reform program came
to a halt in June 1997 when civil war erupted. When Sassou Nguesso returned to power at the end of the war in October
1997, he publicly expressed interest in moving forward on economic reforms and privatization and in renewing cooperation
with international financial institutions. However, economic progress was badly hurt by slumping oil prices and the
resumption of armed conflict in December 1998, which worsened the republic's budget deficit. The current administration
presides over an uneasy internal peace and faces difficult economic problems of stimulating recovery and reducing
poverty, despite record-high oil prices since 2003. Natural gas and diamonds are also recent major Congolese exports,
although Congo was excluded from the Kimberley Process in 2004 amid allegations that most of its diamond exports
were in fact being smuggled out of the neighboring Democratic Republic of Congo; it was re-admitted to the group
in 2007. The Republic of the Congo also has large untapped base metal, gold, iron and phosphate deposits. The country
is a member of the Organization for the Harmonization of Business Law in Africa (OHADA). The Congolese government
signed an agreement in 2009 to lease 200,000 hectares of land to South African farmers to reduce its dependence on
imports. Transport in the Republic of the Congo includes land, air and water transportation. The country's rail system
was built by forced laborers during the 1930s and largely remains in operation. There are also over 1000 km of paved
roads and two major international airports (Maya-Maya Airport and Pointe Noire Airport) which have flights to Paris
and many African cities. The country also has a large port on the Atlantic Ocean at Pointe-Noire and others along
the Congo River at Brazzaville and Impfondo. The Republic of the Congo's sparse population is concentrated in the
southwestern portion of the country, leaving the vast areas of tropical jungle in the north virtually uninhabited.
Thus, Congo is one of the most urbanized countries in Africa, with 70% of its total population living in a few urban
areas, namely in Brazzaville, Pointe-Noire or one of the small cities or villages lining the 534-kilometre (332 mi)
railway which connects the two cities. In rural areas, industrial and commercial activity has declined rapidly in
recent years, leaving rural economies dependent on the government for support and subsistence. Ethnically and linguistically
the population of the Republic of the Congo is diverse—Ethnologue recognises 62 spoken languages in the country—but
can be grouped into three categories. The Kongo are the largest ethnic group and form roughly half of the population.
The most significant subgroups of the Kongo are Laari in Brazzaville and Pool regions and Vili around Pointe-Noire
and along the Atlantic coast. The second largest group are the Teke, who live to the north of Brazzaville and make up 17% of the population. The Boulangui (M'Boshi) live in the northwest and in Brazzaville and form 12% of the population. Pygmies
make up 2% of Congo's population. Before the 1997 war, about 9,000 Europeans and other non-Africans lived in Congo,
most of whom were French; only a fraction of this number remains. Around 300 American expatriates reside in the Congo.
According to the CIA World Factbook, the people of the Republic of the Congo are largely a mix of Catholics (33.1%), Awakening
Lutherans (22.3%) and other Protestants (19.9%). Followers of Islam make up 1.6%, and this is primarily due to an
influx of foreign workers into the urban centers. Public expenditure on health was at 8.9% of GDP in 2004, whereas
private expenditure was at 1.3%. As of 2012, the HIV/AIDS prevalence was at 2.8% among 15- to 49-year-olds. Health
expenditure was at US$30 per capita in 2004. A large proportion of the population is undernourished, with malnutrition
being a problem in Congo-Brazzaville. There were 20 physicians per 100,000 persons in the early 2000s. As
of 2010, the maternal mortality rate was 560 deaths/100,000 live births, and the infant mortality rate was 59.34
deaths/1,000 live births. Female genital mutilation (FGM) is rare in the country, being confined to limited geographic
areas of the country. Public expenditure on education, as a share of GDP, was lower in 2002–05 than in 1991. Public education is theoretically free and mandatory for under-16-year-olds, but in practice expenses exist. The net primary enrollment rate was 44% in 2005, much lower than the 79% of 1991. The country has universities, and education between ages six and sixteen is compulsory.
Pupils who complete six years of primary school and seven years of secondary school obtain a baccalaureate. At the
university, students can obtain a bachelor's degree in three years and a master's after four. Marien Ngouabi University—which
offers courses in medicine, law and several other fields—is the country's only public university. Instruction at
all levels is in French, and the educational system as a whole models the French system. The educational infrastructure
has been seriously degraded as a result of political and economic crises. There are no seats in most classrooms,
forcing children to sit on the floor. Enterprising individuals have set up private schools, but they often lack the
technical knowledge and familiarity with the national curriculum to teach effectively. Families frequently enroll
their children in private schools only to find they cannot make the payments.
A prime minister is the most senior minister of cabinet in the executive branch of government, often in a parliamentary or
semi-presidential system. In many systems, the prime minister selects and may dismiss other members of the cabinet,
and allocates posts to members within the government. In most systems, the prime minister is the presiding member
and chairman of the cabinet. In a minority of systems, notably in semi-presidential systems of government, a prime
minister is the official who is appointed to manage the civil service and execute the directives of the head of state.
In parliamentary systems fashioned after the Westminster system, the prime minister is the presiding and actual head
of government and head of the executive branch. In such systems, the head of state or the head of state's official
representative (i.e. the monarch, president, or governor-general) usually holds a largely ceremonial position, although
often with reserve powers. The prime minister is often, but not always, a member of parliament
and is expected with other ministers to ensure the passage of bills through the legislature. In some monarchies the
monarch may also exercise executive powers (known as the royal prerogative) that are constitutionally vested in the
crown and may be exercised without the approval of parliament. As well as being head of government, a prime minister
may have other roles or titles—the Prime Minister of the United Kingdom, for example, is also First Lord of the Treasury
and Minister for the Civil Service. Prime ministers may take other ministerial posts—for example, during the Second
World War, Winston Churchill was also Minister of Defence (although there was then no Ministry of Defence), and in
the current cabinet of Israel, Benjamin Netanyahu also serves as Minister of Communications, Foreign Affairs, Regional Cooperation, Economy and Interior. The term prime minister, or premier ministre, was first used of Cardinal Richelieu when in 1625 he was named to head the royal council as prime minister of France.
Louis XIV and his descendants generally attempted to avoid giving this title to their chief ministers. The term prime
minister in the sense that we know it originated in the 18th century in the United Kingdom when members of parliament
disparagingly used the title in reference to Sir Robert Walpole. Over time, the title became honorific and remains
so in the 21st century. The monarchs of England and the United Kingdom had ministers in whom they placed special
trust and who were regarded as the head of the government. Examples were Thomas Cromwell under Henry VIII; William
Cecil, Lord Burghley under Elizabeth I; Clarendon under Charles II and Godolphin under Queen Anne. These ministers
held a variety of formal posts, but were commonly known as "the minister", the "chief minister", the "first minister"
and finally the "prime minister". The power of these ministers depended entirely on the personal favour of the monarch.
Although managing the parliament was among the necessary skills of holding high office, they did not depend on a
parliamentary majority for their power. Although there was a cabinet, it was appointed entirely by the monarch, and
the monarch usually presided over its meetings. When the monarch grew tired of a first minister, he or she could
be dismissed, or worse: Cromwell was executed and Clarendon driven into exile when they lost favour. Kings sometimes
divided power equally between two or more ministers to prevent one minister from becoming too powerful. Late in Anne's
reign, for example, the Tory ministers Harley and St John shared power. In the mid-17th century, after the English
Civil War (1642–1651), Parliament strengthened its position relative to the monarch then gained more power through
the Glorious Revolution of 1688 and passage of the Bill of Rights in 1689. The monarch could no longer establish
any law or impose any tax without its permission and thus the House of Commons became a part of the government. It
is at this point that a modern style of prime minister begins to emerge. A tipping point in the evolution of the
prime ministership came with the death of Anne in 1714 and the accession of George I to the throne. George spoke
no English, spent much of his time at his home in Hanover, and had neither knowledge of, nor interest in, the details
of English government. In these circumstances it was inevitable that the king's first minister would become the de
facto head of the government. From 1721 this was the Whig politician Robert Walpole, who held office for twenty-one
years. Walpole chaired cabinet meetings, appointed all the other ministers, dispensed the royal patronage and packed
the House of Commons with his supporters. Under Walpole, the doctrine of cabinet solidarity developed. Walpole required
that no minister other than himself have private dealings with the king, and also that when the cabinet had agreed
on a policy, all ministers must defend it in public, or resign. As a later prime minister, Lord Melbourne, said,
"It matters not what we say, gentlemen, so long as we all say the same thing." Walpole always denied that he was
"prime minister", and throughout the 18th century parliamentarians and legal scholars continued to deny that any
such position was known to the Constitution. George II and George III made strenuous efforts to reclaim the personal
power of the monarch, but the increasing complexity and expense of government meant that a minister who could command
the loyalty of the Commons was increasingly necessary. The long tenure of the wartime prime minister William Pitt
the Younger (1783–1801), combined with the mental illness of George III, consolidated the power of the post. The
title was first referred to on government documents during the administration of Benjamin Disraeli but did not appear
in the formal British Order of precedence until 1905. By the late 20th century, the majority of the world's countries
had a prime minister or equivalent minister, holding office under either a constitutional monarchy or a ceremonial
president. The main exceptions to this system have been the United States and the presidential republics in Latin
America modelled on the U.S. system, in which the president directly exercises executive authority. Bahrain's prime
minister, Sheikh Khalifah bin Sulman Al Khalifah, has been in the post since 1970, making him the longest serving
non-elected prime minister. The post of prime minister may be encountered both in constitutional monarchies (such
as Belgium, Denmark, Japan, Luxembourg, the Netherlands, Norway, Malaysia, Morocco, Spain, Sweden, Thailand, Canada,
Australia, New Zealand, and the United Kingdom), and in parliamentary republics in which the head of state is an
elected official (such as Finland, the Czech Republic, France, Greece, Hungary, India, Indonesia, Ireland, Pakistan,
Portugal, Montenegro, Croatia, Bulgaria, Romania, Serbia and Turkey). See also "First Minister", "Premier", "Chief
Minister", "Chancellor", "Taoiseach", "Statsminister" and "Secretary of State": alternative titles usually equivalent
in meaning to, or translated as, "prime minister". This contrasts with the presidential system, in which the president
(or equivalent) is both the head of state and the head of the government. In some presidential or semi-presidential
systems, such as those of France, Russia or South Korea, the prime minister is an official generally appointed by
the president but usually approved by the legislature and responsible for carrying out the directives of the president
and managing the civil service. The head of government of the People's Republic of China is referred to as the Premier
of the State Council and the premier of the Republic of China (Taiwan) is also appointed by the president, but requires
no approval by the legislature. Appointment of the prime minister of France requires no approval by the parliament
either, but the parliament may force the resignation of the government. In these systems, it is possible for the
president and the prime minister to be from different political parties if the legislature is controlled by a party
different from that of the president. When it arises, such a state of affairs is usually referred to as (political)
cohabitation. Bangladesh's constitution clearly outlines the functions and powers of the Prime Minister, and also
details the process of his/her appointment and dismissal. The constitution of the People's Republic of China places the premier just one rank below the National People's Congress; in Chinese the title is rendered 总理 (pinyin: Zǒnglǐ). Canada's constitution, being a 'mixed' or hybrid constitution (a constitution that is partly formally
codified and partly uncodified) originally did not make any reference whatsoever to a prime minister, with her or
his specific duties and method of appointment instead dictated by "convention". In the Constitution Act, 1982, passing
reference to a "Prime Minister of Canada" was added, though only regarding the composition of conferences of federal
and provincial first ministers. The Czech Republic's constitution clearly outlines the functions and powers of the Prime
Minister of the Czech Republic, and also details the process of his/her appointment and dismissal. The United Kingdom's
constitution, being uncodified and largely unwritten, makes no mention of a prime minister. Though it had de facto
existed for centuries, its first mention in official state documents did not occur until the first decade of the
twentieth century. Accordingly, it is often said "not to exist"; indeed, there are several instances of parliament
declaring this to be the case. The prime minister sits in the cabinet solely by virtue of occupying another office,
either First Lord of the Treasury (office in commission), or more rarely Chancellor of the Exchequer (the last of
whom was Balfour in 1905). Most prime ministers in parliamentary systems are not appointed for a specific term in
office and in effect may remain in power through a number of elections and parliaments. For example, Margaret Thatcher
was only ever appointed prime minister on one occasion, in 1979. She remained continuously in power until 1990, though
she used the assembly of each House of Commons after a general election to reshuffle her cabinet. Some states, however,
do have a term of office of the prime minister linked to the period in office of the parliament. Hence the Irish
Taoiseach is formally 'renominated' after every general election. (Some constitutional experts have questioned whether
this process is actually in keeping with the provisions of the Irish constitution, which appear to suggest that a
taoiseach should remain in office, without the requirement of a renomination, unless s/he has clearly lost the general
election.) The prime minister is normally chosen from the political party that commands a majority of seats
in the lower house of parliament. In parliamentary systems, governments are generally required to have the confidence
of the lower house of parliament (though a small minority of parliaments, by giving a right to block supply to upper
houses, in effect make the cabinet responsible to both houses, though in reality upper houses, even when they have
the power, rarely exercise it). Where they lose a vote of confidence, have a motion of no confidence passed against
them, or where they lose supply, most constitutional systems require either the resignation of the government or a request for a parliamentary dissolution. The latter in effect allows the government
to appeal the opposition of parliament to the electorate. However, in many jurisdictions a head of state may refuse
a parliamentary dissolution, requiring the resignation of the prime minister and his or her government. In most modern
parliamentary systems, the prime minister is the person who decides when to request a parliamentary dissolution.
Older constitutions often vest this power in the cabinet. In the United Kingdom, for example, the tradition whereby
it is the prime minister who requests a dissolution of parliament dates back to 1918. Prior to then, it was the entire
government that made the request. Similarly, though the modern 1937 Irish constitution grants to the Taoiseach the
right to make the request, the earlier 1922 Irish Free State Constitution vested the power in the Executive Council
(the then name for the Irish cabinet). In Australia, the Prime Minister is expected to step down if s/he loses the
majority support of his/her party under a spill motion, as did Tony Abbott, Julia Gillard and Kevin Rudd. In the Russian constitution the prime minister is actually titled Chairman of the Government, while the Irish
prime minister is called the Taoiseach (which is rendered into English as prime minister), and in Israel he is Rosh
HaMemshalah meaning "head of the government". In many cases, though commonly used, "prime minister" is not the official
title of the office-holder; the Spanish prime minister is the President of the Government (Presidente del Gobierno).
Other common forms include president of the council of ministers (for example in Italy, Presidente del Consiglio
dei Ministri), President of the Executive Council, or Minister-President. In the Scandinavian countries the prime
minister is called statsminister in the native languages (i.e. minister of state). In federations, the head of government
of subnational entities such as provinces is most commonly known as the premier, chief minister, governor or minister-president.
The convention in the English language is to call nearly all national heads of government "prime minister" (sometimes
modified to the equivalent term of premier), regardless of the correct title of the head of government as applied
in his or her respective country. The few exceptions to the rule are Germany and Austria, whose heads of government
titles are almost always translated as Chancellor; Monaco, whose head of government is referred to as the Minister
of State; and Vatican City, for which the head of government is titled the Secretary of State. In the case of Ireland,
the head of government is occasionally referred to as the Taoiseach by English speakers. A stand-out case is the
President of Iran, who is not actually a head of state, but the head of the government of Iran. He is referred to
as "president" in both the Persian and English languages. In non-Commonwealth countries the prime minister may be
entitled to the style of Excellency like a president. In some Commonwealth countries prime ministers and former prime
ministers are styled Right Honourable due to their position, for example the Prime Minister of Canada. In the United Kingdom the prime minister and former prime ministers may also be styled Right Honourable; however, this is not due to their position as head of government but is a privilege of being current members of Her Majesty's
Most Honourable Privy Council. In the UK, where devolved government is in place, the leaders of the Scottish, Northern
Irish and Welsh Governments are styled First Minister. In India, the prime minister is referred to as "Pradhan Mantri",
meaning "prime minister". In Pakistan, the prime minister is referred to as "Wazir-e-Azam", meaning "Grand Vizier".
The prime minister's executive office is usually called the Office of the Prime Minister in Canada and other Commonwealth countries; in the United Kingdom it is called the Cabinet Office. Some prime ministers' offices also incorporate the role of the cabinet. In other countries it may be called the Prime Minister's Department or, as in Australia, the Department of the Prime Minister and Cabinet. The prestige of British institutions in the 19th century and the
growth of the British Empire saw the British model of cabinet government, headed by a prime minister, widely copied,
both in other European countries and in British colonial territories as they developed self-government. In some places
alternative titles such as "premier", "chief minister", "first minister of state", "president of the council" or
"chancellor" were adopted, but the essentials of the office were the same.
Institute of technology (also: university of technology, polytechnic university, technikon, and technical college) is a designation
employed for a wide range of learning institutions awarding different types of degrees and often operating at varying
levels of the educational system. It may be an institution of higher education and advanced engineering and scientific
research or professional vocational education, specializing in science, engineering, and technology or different
sorts of technical subjects. It may also refer to a secondary education school focused on vocational training. The term institute of technology is often abbreviated IT and is not to be confused with information technology.
The English term polytechnic appeared in the early 19th century, from the French École Polytechnique, an engineering
school founded in 1794 in Paris. The French term comes from the Greek πολύ (polú or polý) meaning "many" and τεχνικός
(tekhnikós) meaning "arts". While the terms "institute of technology" and "polytechnic" are synonymous, the preferred term varies from country to country. Institutes of technology
and polytechnics have been in existence since at least the 18th century, but became popular after World War II with
the expansion of engineering and applied science education, associated with the new needs created by industrialization.
The world's first institution of technology, the Berg-Schola (today its legal successor is the University of Miskolc)
was founded by the Court Chamber of Vienna in Selmecbánya, Kingdom of Hungary in 1735 in order to train specialists
of precious metal and copper mining according to the requirements of the industrial revolution in Hungary. The oldest
German Institute of Technology is the Braunschweig University of Technology (founded in 1745 as "Collegium Carolinum").
Another early example is the École Polytechnique, which has educated French élites since its foundation in 1794. In some
cases, polytechnics or institutes of technology are engineering schools or technical colleges. In several countries,
like Germany, the Netherlands, Switzerland and Turkey, institutes of technology and polytechnics are institutions
of higher education, and have been accredited to award academic degrees and doctorates. Famous examples are the Istanbul
Technical University, ETH Zurich, İYTE, Delft University of Technology and RWTH Aachen, all considered universities. In countries like Iran, Finland, Malaysia, Portugal, Singapore or the United Kingdom, there is often a significant and frequently confusing distinction between polytechnics and universities. In the UK a binary system of higher education emerged
consisting of universities (research orientation) and Polytechnics (engineering and applied science and professional
practice orientation). Polytechnics offered university-equivalent degrees, from bachelor's and master's to PhD, that were validated and governed at the national level by the independent UK Council for National Academic Awards. In 1992 UK polytechnics were designated as universities, which meant they could award their own degrees. The CNAA was
disbanded. The UK's first polytechnic, the Royal Polytechnic Institution (now the University of Westminster) was
founded in 1838 in Regent Street, London. In Ireland the term institute of technology is the more favored synonym for a regional technical college, though the latter is the legally correct term. Dublin Institute of Technology, however, is a university in all but name, as it can confer degrees in accordance with law; Cork Institute of Technology and a number of other Institutes of Technology have delegated authority from HETAC to make awards to and including master's degree level (Level 9 of the National Framework for Qualifications, NFQ) for all areas of study, and to doctorate level in a number of fields. Although polytechnics and institutes of technology are today generally considered similar institutions of higher learning across many countries, they have historically differed in status, teaching competences, and organizational history. In many cases a polytechnic was an elite technological university concentrating on applied science and engineering; the name may also be a former designation for a vocational institution that has since been granted the right to award academic degrees and so can truly be called an institute of technology. A number of polytechnics providing higher education are simply the result of a formal upgrading from their original and historical role as intermediate technical education schools. In some situations, former polytechnics
or other non-university institutions have emerged solely through an administrative change of statutes, which often
included a name change with the introduction of new designations like institute of technology, polytechnic university,
university of applied sciences, or university of technology for marketing purposes. The emergence of so many upgraded polytechnics and former vocational and technical schools converted into more university-like institutions has caused concern where the resulting lack of specialized intermediate technical professionals has led to industrial skill shortages in some fields, and has also been associated with a rise in the graduate unemployment rate. This is mostly the case in countries where the education system is not controlled by the state and anyone can grant degrees. Evidence has also shown a decline in the general quality of teaching and in graduates' preparation for the workplace, due to the fast-paced conversion of technical institutions into more advanced higher-level institutions. Mentz, Kotze and Van der Merwe (2008) argue that all the tools are in place to promote the debate on the place of
technology in higher education in general and in Universities of Technology specifically. This debate can follow lines such as:
• To what degree is technology defined as a concept?
• What is the scope of technology discourse?
• What is the place and relation of science with technology?
• How useful is the Mitcham framework in thinking about technology in South Africa?
• Can a measure of cooperation, as opposed to competition, be achieved amongst higher education institutions?
• Who ultimately is responsible for vocational training, and what is the role of technology in this?
In Australia during the 1970s to early 1990s, the term was used to describe state-owned and -funded technical schools
that offered both vocational and higher education. They were part of the College of Advanced Education system. In
the 1990s most of these merged with existing universities, or formed new ones of their own. These new universities
often took the title University of Technology, for marketing rather than legal purposes. The most prominent
such university in each state founded the Australian Technology Network a few years later. Since the mid-1990s, the
term has been applied to some technically minded technical and further education (TAFE) institutes. A recent example
is the Melbourne Polytechnic rebranding and repositioning in 2014 from Northern Melbourne Institute of TAFE. These
primarily offer vocational education, although some like Melbourne Polytechnic are expanding into higher education
offering vocationally oriented applied bachelor degrees. This usage of the term is historically most prevalent in NSW and the ACT. The new terminology is apt given that institutions of this category are becoming very much like the institutes of the 1970s–1990s period. In Tasmania in 2009 the old college system and TAFE Tasmania began a 3-year restructure to become the Tasmanian Polytechnic (www.polytechnic.tas.edu.au), the Tasmanian Skills Institute (www.skillsinstitute.tas.edu.au) and the Tasmanian Academy (www.academy.tas.edu.au). In the higher education sector, there are seven designated Universities of Technology in Australia (though not all use the phrase "university of technology"; the Universities of Canberra and South Australia, for example, used to be Colleges of Advanced Education before transitioning into fully-fledged universities with the ability, most importantly, to confer doctorates). Fachhochschule is a German type of
tertiary education institution, later adopted in Austria and Switzerland. They do not focus exclusively on technology,
but may also offer courses in social science, medicine, business and design. They grant bachelor's degrees and master's
degrees, and focus more on teaching than research and more on specific professions than on science. Hogeschool is
used in Belgium and in the Netherlands. The hogeschool has many similarities to the Fachhochschule in the German
language areas and to the ammattikorkeakoulu in Finland. Hogeschool institutions in the Flemish Community of Belgium
(such as the Erasmus Hogeschool Brussel) are currently undergoing a process of academization. They form associations
with a university and integrate research into the curriculum, which will allow them to deliver academic master's
degrees. In the Netherlands, four former institutes of technology have become universities over the past decades.
These are the current three Technical Universities (at Delft, Eindhoven and Enschede), plus the former agricultural
institute in Wageningen. In Cambodia, there are Institutes of Technology/Polytechnic Institutes, and Universities that
offer instruction in a variety of programs that can lead to: certificates, diplomas, and degrees. Institutes of Technology/Polytechnic
Institutes and Universities tend to be independent institutions. In Canada, there are Affiliate Schools, Colleges,
Institutes of Technology/Polytechnic Institutes, and Universities that offer instruction in a variety of programs
that can lead to: engineering and applied science degrees, apprenticeship and trade programs, certificates, and diplomas.
Affiliate Schools are polytechnic divisions belonging to a national university and offer select technical and engineering
programs. Colleges, Institutes of Technology/Polytechnic Institutes, and Universities tend to be independent institutions.
Credentials are typically conferred at the undergraduate level, however university-affiliated schools like the École
de technologie supérieure and the École Polytechnique de Montréal (both of which are located in Quebec), also offer
graduate and postgraduate programs, in accordance with provincial higher education guidelines. Canadian higher education
institutions, at all levels, undertake directed and applied research with financing allocated through public funding,
private equity, or industry sources. Some of Canada's most esteemed colleges and polytechnic institutions also partake
in collaborative institute-industry projects, leading to technology commercialization, made possible through the
scope of Polytechnics Canada, a national alliance of eleven leading research-intensive colleges and institutes of
technology. China's modern higher education began in 1895 with the Imperial Tientsin University which was a polytechnic
plus a law department. Liberal arts were not offered until three years later at Capital University. To this day,
about half of China's elite universities remain essentially polytechnical. In Croatia there are many polytechnic
institutes and colleges that offer a polytechnic education. The law about polytechnic education in Croatia was passed
in 1997. EPN is known for research and education in the applied sciences, astronomy, atmospheric physics, engineering
and physical sciences. The Geophysics Institute monitors the country's seismic, tectonic and volcanic activity in
the continental territory and in the Galápagos Islands. One of the oldest observatories in South America is the Quito
Astronomical Observatory, founded in 1873 and located 12 minutes south of the Equator in Quito, Ecuador. It is the
National Observatory of Ecuador, located in the Historic Center of Quito and managed by the National Polytechnic
School. The Nuclear Science Department at EPN is the only one in Ecuador and has a large infrastructure of irradiation
facilities, such as a cobalt-60 source and electron beam processing.
Its mission is to provide high quality education, training and research in the areas of science and technology to
produce qualified professionals that can apply their knowledge and skills in the country's development. MIT raises
funds from non-governmental organizations and individuals who support the mission and objectives of the Institute.
Tigray Development Association, its supporters, and REST have provided the initial funds for the launching of the
Institute. As a result of the unstinting efforts made by the Provisional Governing Board to obtain technical and
financial assistance, the Institute has so far secured financial and material support as well as pledges of sponsorship
for 50 students, covering their tuition fees, room and board up to graduation. The MIT has also been able to create
linkages with some universities and colleges in the United States of America, which have provided manpower and material
support to MIT. The institute is governed by a provisional governing board. Universities of Technology are categorised
as universities, are allowed to grant B.Sc. (Tech.), M.Sc. (Tech.), Lic.Sc. (Tech.), Ph.D. and D.Sc.(Tech.) degrees
and roughly correspond to Instituts de technologie of French-speaking areas and Technische Universität of Germany
in prestige. In addition to universities of technology, some universities, e.g. University of Oulu and Åbo Akademi
University, are allowed to grant the B.Sc. (Tech.), M.Sc. (Tech.) and D.Sc. (Tech.) degrees. Universities of Technology
are academically similar to other (non-polytechnic) universities. Prior to the Bologna process, M.Sc. (Tech.) required
180 credits, whereas M.Sc. from a normal university required 160 credits. The credits between Universities of Technology
and normal universities are comparable. Polytechnic schools are distinct from academic universities in Finland. Ammattikorkeakoulu
is the common term in Finland, as is the Swedish alternative "yrkeshögskola" – their focus is on studies leading
to a degree (for instance insinööri, engineer; in international use, Bachelor of Engineering) that is different in
kind from, but comparable in level to, an academic bachelor's degree awarded by a university. Since 2006 the polytechnics have
offered studies leading to master's degrees (Master of Engineering). After January 1, 2006, some Finnish ammattikorkeakoulus
switched the English term "polytechnic" to the term "university of applied sciences" in the English translations
of their legal names. The ammattikorkeakoulu has many similarities to the hogeschool in Belgium and in the Netherlands
and to the Fachhochschule in the German language areas. Collegiate universities grouping several engineering schools
or multi-site clusters of French grandes écoles provide sciences and technology curricula as autonomous higher education
engineering institutes. In addition, France's education system includes many institutes of technology, embedded within
most French universities. They are referred to as institut universitaire de technologie (IUT). Instituts
universitaires de technologie provide undergraduate technology curricula. 'Polytech institutes', embedded as a part
of eleven French universities provide both undergraduate and graduate engineering curricula. In the French-speaking
part of Switzerland, the term haute école spécialisée is also used for the type of institution called Fachhochschule
in the German-speaking part of the country (see below). Higher education systems influenced by the French education
system established at the end of the 18th century use terminology derived from the French École polytechnique. Such
terms include Écoles Polytechniques (Algeria, Belgium, Canada, France, Switzerland, Tunisia), Escola Politécnica
(Brazil, Spain) and Polytechnicum (Eastern Europe). Fachhochschulen were first founded in the early 1970s. They do not
focus exclusively on technology, but may also offer courses in social science, medicine, business and design. They
grant bachelor's degrees and master's degrees, and focus more on teaching than research and more on specific professions
than on science. Technische Universität (abbreviation: TU) are the common terms for universities of technology or
technical university. These institutions can grant habilitation and doctoral degrees and focus on research. The nine
largest and most renowned Technische Universitäten in Germany have formed TU9 German Institutes of Technology as
community of interests. Technische Universitäten normally have faculties or departments of natural sciences and
often of economics but can also have units of cultural and social sciences and arts. RWTH Aachen, TU Dresden and
TU München also have a faculty of medicine associated with university hospitals (Klinikum Aachen, University Hospital
Dresden, Rechts der Isar Hospital). There are 17 universities of technology in Germany with about 290,000 students
enrolled. The four states of Bremen, Mecklenburg-Vorpommern, Saxony-Anhalt and Schleswig-Holstein do not operate
a Technische Universität. Saxony and Lower Saxony have the highest counts of TUs; in Saxony, three out of four
universities are universities of technology. The Niedersächsische Technische Hochschule is a joint venture of TU Clausthal,
TU Braunschweig and University of Hanover. Some universities in Germany can also be seen as institutes of technology
due to comprising a wide spread of technical sciences and having a history as a technical university. In Greece,
there are two "polytechnics" that are part of public higher education and confer a 5-year diploma (Diplom Uni, 300
ECTS – ISCED 5A): the National Technical University of Athens and the Technical University of Crete. There are also
Greek Higher Technological Educational Institutes (Ανώτατα Τεχνολογικά Εκπαιδευτικά Ιδρύματα – Α.Τ.Ε.Ι.). After the
N.1404/1983 Higher Education Reform Act (Ν.1404/1983 - 2916/2001 - Ν.3549/2007 - N.3685/2008 - N.4009/2011), the
Technological Educational Institutes constitute a part of public higher education in Greece that is parallel and
equivalent to the universities. They confer a 4-year bachelor's degree (Diplom FH) (240 ECTS – ISCED 5A). The first
polytechnic in Hong Kong is The Hong Kong Polytechnic, established in 1972 through upgrading the Hong Kong Technical
College (Government Trade School before 1947). The second polytechnic, the City Polytechnic of Hong Kong, was founded
in 1984. These polytechnics award diplomas and higher diplomas, as well as academic degrees. As in the United Kingdom,
the two polytechnics were granted university status in 1994 and were renamed The Hong Kong Polytechnic University and
the City University of Hong Kong respectively. The Hong Kong University of Science and Technology, a university with
a focus in applied science, engineering and business, was founded in 1991. The world's first institute of technology,
the Berg-Schola (Bergschule), was established in Selmecbánya, Kingdom of Hungary, by the Court Chamber of Vienna
in 1735 to provide further education and train specialists in precious-metal and copper mining. In 1762 the institute
was raised to the rank of an academy, providing higher education courses. After the Treaty of Trianon the institute
had to be moved to Sopron. There are 16 autonomous Indian Institutes of Technology in addition to 30 National Institutes of Technology
which are Government Institutions. In addition to these there are many other Universities which offer higher technical
courses. The authority over technical education in India is the AICTE. In India there are many polytechnic institutes
and colleges that offer a polytechnic education. A Diploma in Engineering is a specific academic award usually given
in technical or vocational courses, e.g. engineering, pharmacy and design. These institutions offer a three-year
diploma in engineering after the tenth class and are affiliated with the state board of technical education of their
respective state governments. Afterwards, one can apply for the post of junior engineer or continue higher studies
by appearing for the exams of the AMIE to become an engineering graduate. There are four public institutes of technology
in Indonesia that are owned by the government of Indonesia. Beyond these, there are hundreds of other institutes
owned by private parties or other organizations. However, in Bahasa Indonesia, Politeknik carries a rather different
meaning than Institut Teknologi. A Politeknik provides vocational education and typically offers three-year Diploma
degrees, which are similar to associate degrees, rather than the full four-year bachelor's degree and the more advanced master's
and doctoral degrees being offered by an Institut Teknologi. Ireland has an "Institute of Technology" system, formerly
referred to as the Regional Technical College (RTC) system. The terms "IT" and "ITs" are now widely used to describe
an Institute of Technology. These institutions offer sub-degree, degree and postgraduate level studies. Unlike
the Irish university system an Institute of Technology also offers sub-degree programmes such as 2-year Higher Certificate
programme in various academic fields of study. Some institutions have "delegated authority" that allows them to make
awards in their own name, after authorisation by the Higher Education & Training Awards Council. Dublin Institute
of Technology developed separately from the Regional Technical College system, and after several decades of association
with the University of Dublin, Trinity College it acquired the authority to confer its own degrees. In higher education,
Politecnico refers to a technical university awarding degrees in engineering. Historically there were two Politecnici,
one in each of the two largest industrial cities of the north. In 2003, the Ministry of Education, Universities and
Research and the Ministry of Economy and Finance jointly established the Istituto Italiano di Tecnologia (Italian
Institute of Technology), headquartered in Genoa with 10 laboratories around Italy, which however focuses on research
and does not offer undergraduate degrees. In Japan, an institute of technology (工業大学, kōgyō daigaku) is a type of
university that specializes in the sciences. See also the Imperial College of Engineering, which was the forerunner
of the University of Tokyo's engineering faculty. Polytechnics in Malaysia have operated for almost 44 years. The
institutions provide courses for the bachelor's degree and Bachelor of Science (BSc) (offered at Premier Polytechnics
for the September 2013 and 2014 intakes), Advanced Diploma, Diploma and Special Skills Certificate. The system was
established by the Ministry of Education with the help of UNESCO in 1969, with RM24.5 million from the United Nations
Development Programme (UNDP) used to fund the pioneer polytechnic, Politeknik Ungku Omar, located in Ipoh, Perak.
Malaysia has since developed 32 polytechnics across all states, offering engineering, agriculture, commerce, hospitality
and design courses; enrolment grew from 60,840 students in 2009 to 87,440 students in 2012. The only technical university in Mauritius is the University
of Technology, Mauritius with its main campus situated in La Tour Koenig, Pointe aux Sables. It has a specialized
mission with a technology focus. It applies both traditional and non-traditional approaches to teaching, training,
research and consultancy. The university has been founded with the aim to play a key role in the economic and social
development of Mauritius through the development of programmes of direct relevance to the country’s needs, for example
in areas like technology, sustainable development science, and public sector policy and management. New Zealand polytechnics
are established under the Education Act 1989 as amended, and are considered state-owned tertiary institutions along
with universities, colleges of education, and wānanga; there is today often much crossover in courses and qualifications
offered between all these types of Tertiary Education Institutions. Some have officially taken the title 'institute
of technology', which is a term recognized in government strategies as equal to the term 'polytechnic'. One has
opted for the name 'Universal College of Learning' (UCOL), and another 'Unitec New Zealand'. These are legal names
but not recognized terms like 'polytechnic' or 'institute of technology'. Many if not all now grant at least bachelor-level
degrees. Since the 1990s, there has been consolidation in New Zealand's state-owned tertiary education system. In
the polytechnic sector: Wellington Polytechnic amalgamated with Massey University. The Central Institute of Technology
explored a merger with the Waikato Institute of Technology, which was abandoned, but later, after financial concerns,
controversially amalgamated with Hutt Valley Polytechnic, which in turn became Wellington Institute of Technology.
Some smaller polytechnics in the North Island, such as Wairarapa Polytechnic, amalgamated with UCOL. (The only other
amalgamations have been in the colleges of education.) The Auckland University of Technology is the only polytechnic
to have been elevated to university status; while Unitec has had repeated attempts blocked by government policy and
consequent decisions; Unitec has not been able to convince the courts to overturn these decisions. The polytechnic
institutes in Pakistan offer a diploma spanning three years in different branches. Students are admitted to the
diploma program based on their results in the 10th grade standardized exams. The main purpose of Polytechnic Institutes
is to train people in various trades. After successfully completing a diploma at a polytechnic, students can gain
lateral entry to undergraduate engineering degree courses called BE, which are conducted by engineering colleges
affiliated to universities, a University of Engineering & Technology or a University of Engineering Sciences. Universities
of Engineering & Technology and Universities of Engineering Sciences are the recognized universities that grant bachelor's
and master's degrees in undergraduate and graduate studies respectively. The Bachelor of Science degree awarded by
Universities of Engineering & Technology or Universities of Engineering Sciences is a four-year full-time program
taken after finishing 13 years of education (international high school certificate), known in Pakistan as the F.Sc
and equivalent to British A-Levels. Politechnika (translated as "technical university" or "university of technology")
is the main designation for a technical university in Poland, and the country's biggest technical universities carry this name. The designation "Institute
of Technology" is not used in Portugal, where it would be meaningless. However, Portugal has had higher education
institutions called polytechnics since the 1980s. After 1998 they were upgraded to institutions which are allowed
to confer bachelor's degrees (the Portuguese licenciatura). Before then, they only awarded short-cycle degrees known
as bacharelatos, which did not provide access to further education. Since the Bologna Process in 2007, they have
been allowed to offer 2nd-cycle (master's) degrees to their students. The polytechnical higher education
system provides a more practical training and is profession-oriented, while the university higher education system
has a strong theoretical basis and is highly research-oriented. Polytechnics in Singapore provide industry-oriented
education equivalent to a junior college or sixth form college in the UK. Singapore retains a system similar to,
but not the same as, the one used in the United Kingdom from 1970 to 1992, distinguishing between polytechnics and
universities. Unlike the British polytechnic system, Singapore polytechnics do not offer bachelor's, master's or PhD degrees.
Under this system, most Singaporean students sit for their O-Level examinations after four or five years of education
in secondary school, and apply for a place at either a technical school termed ITE, a polytechnic or a university-preparatory
school (a junior college or the Millennia Institute, a centralized institute). Polytechnic graduates may be granted
transfer credits when they apply to local and overseas universities, depending on the overall performance in their
grades, as well as the university's policies on transfer credits. A few secondary schools now offer a six-year program
which leads directly to university entrance. Polytechnics offer three-year diploma courses in fields such as information
technology, engineering subjects and other vocational fields, like psychology and nursing. There are five polytechnics
in Singapore. The world's first institution of technology or technical university
with tertiary technical education is the Banská Akadémia in Banská Štiavnica, Slovakia, founded in 1735 and raised
to the status of an academy on December 13, 1762 by Queen Maria Theresa, in order to train specialists in the silver
and gold mining and metallurgy of the surrounding region. Teaching started in 1764. Later, the departments of Mathematics,
Mechanics and Hydraulics and of Forestry were established. The academy's buildings are still in place today and are
used for teaching, and it published the world's first book on electrotechnics. South Africa has completed a process of transforming
its "higher education landscape". Historically a division has existed in South Africa between Universities and Technikons
(polytechnics) as well as between institutions servicing particular racial and language groupings. In 1993 Technikons
were afforded the power to award certain technology degrees. Beginning in 2004 former Technikons have either been
merged with traditional Universities to form Comprehensive Universities or have become Universities of Technology,
however, the Universities of Technology have not to date acquired all of the traditional rights and privileges of
a University (such as the ability to confer a wide range of degrees). Most of Thailand's institutes of technology
were developed from technical colleges that, in the past, could not grant bachelor's degrees; today, however, they are
university level institutions, some of which can grant degrees to the doctoral level. Examples are Pathumwan Institute
of Technology (developed from Pathumwan Technical School), King Mongkut's Institute of Technology Ladkrabang (Nondhaburi
Telecommunications Training Centre), and King Mongkut's Institute of Technology North Bangkok (Thai-German Technical
School). There are two former institutes of technology, which already changed their name to "University of Technology":
Rajamangala University of Technology (formerly Institute of Technology and Vocational Education) and King Mongkut's
University of Technology Thonburi (Thonburi Technology Institute). Institutes of technology with different origins
are Asian Institute of Technology, which developed from SEATO Graduate School of Engineering, and Sirindhorn International
Institute of Technology, an engineering school of Thammasat University. Suranaree University of Technology is the
only government-owned technological university in Thailand that was established (1989) as such; while Mahanakorn
University of Technology is the best-known private technological institute. Technical colleges in Thailand are associated
with bitter rivalries which erupt into frequent off-campus brawls and killings of students in public locations. These
have been going on for nearly a decade, with innocent bystanders also commonly among the injured, and even the military
under martial law has been unable to stop them from occurring. In Turkey and the Ottoman Empire,
the oldest technical university is Istanbul Technical University. Its graduates contributed to a wide variety of
activities in scientific research and development. In the 1950s, two technical universities were opened in Ankara
and Trabzon. In recent years, Yildiz University was reorganized as Yildiz Technical University and two institutes of technology were
founded in Kocaeli and Izmir. In 2010, another technical university named Bursa Technical University was founded
in Bursa. Moreover, a sixth technical university is about to be opened in Konya named Konya Technical University.
Polytechnics were tertiary education teaching institutions in England, Wales and Northern Ireland. Since 1970 UK
Polytechnics operated under the binary system of education along with universities. Polytechnics offered diplomas
and degrees (bachelor's, master's, PhD) validated at the national level by the UK Council for National Academic Awards
(CNAA). They particularly excelled in engineering and applied science degree courses, similar to technological universities
in the USA and continental Europe. The comparable institutions in Scotland were collectively referred to as Central
Institutions. Britain's first Polytechnic, the Royal Polytechnic Institution later known as the Polytechnic of Central
London (now the University of Westminster) was established in 1838 at Regent Street in London and its goal was to
educate and popularize engineering and scientific knowledge and inventions in Victorian Britain "at little expense."
The London Polytechnic led a mass movement to create numerous polytechnic institutes across the UK in the late 19th
century. Most polytechnic institutes were established at the centre of major metropolitan cities, and their focus
was on engineering, applied science and technology education. In 1956, some colleges of technology received the designation
College of Advanced Technology. They became universities in the 1960s, meaning they could award their own degrees.
The designation "Institute of Technology" was occasionally used by polytechnics (Bolton), Central Institutions (Dundee,
Robert Gordon's), and postgraduate universities, (Cranfield and Wessex), most of which later adopted the designation
University, and there were two "Institutes of Science and Technology": UMIST and UWIST, part of the University of
Wales. Loughborough University was called Loughborough University of Technology from 1966 to 1996, the only institution
in the UK to have had such a designation. Polytechnics were granted university status under the Further and Higher
Education Act 1992. This meant that Polytechnics could confer degrees without the oversight of the national CNAA
organization. These institutions are sometimes referred to as post-1992 universities. Schools called "technical institute"
or "technical school" that were formed in the early 20th century provided further education between high school and
University or Polytechnic. Most technical institutes have been merged into regional colleges and some have been designated
university colleges if they are associated with a local university. Polytechnic Institutes are technological universities,
many dating back to the mid-19th century. A handful of world-renowned elite American universities include the phrases
"Institute of Technology", "Polytechnic Institute", "Polytechnic University", or similar phrasing in their names;
these are generally research-intensive universities with a focus on engineering, science and technology. The earliest
and most famous of these institutions are, respectively, Rensselaer Polytechnic Institute (RPI, 1824), New York University
Tandon School of Engineering (1854) and the Massachusetts Institute of Technology (MIT, 1861). Conversely, schools
dubbed "technical colleges" or "technical institutes" generally provide post-secondary training in technical and
mechanical fields, focusing on training vocational skills primarily at a community college level—parallel and sometimes
equivalent to the first two years at a bachelor's degree-granting institution. Institutes of technology in Venezuela
were developed in the 1950s as an option for post-secondary education in technical and scientific courses, following
the French polytechnic concept. At that time, technical education was considered essential for the development of
a sound middle-class economy. Most of these institutes award diplomas after three or three and a half years of education.
The implementation of institutes of technology (IUT, from the Spanish Instituto Universitario de Tecnología) began
with the creation of the first IUT in Caracas, the capital city of Venezuela, called IUT Dr. Federico Rivero Palacio.
It adopted the system of the French Instituts Universitaires de Technologie, using French personnel and a study system
based on three-year periods, with research and engineering facilities at the same level as the main national universities,
in order to confer French-equivalent degrees. This IUT is the first and only one in Venezuela whose degrees are accepted
as French equivalents. Following this model and its high-level degrees, some other IUTs were created in Venezuela;
however, the term IUT was not always used appropriately, resulting in some institutions of mediocre quality with
no equivalent degree in France. Later, some private institutions sprang up using IUT in their names, but they are
not regulated by the original French system and award lower quality degrees.
The Wayback Machine is a digital archive of the World Wide Web and other information on the Internet created by the Internet
Archive, a nonprofit organization, based in San Francisco, California, United States. It was set up by Brewster Kahle
and Bruce Gilliat, and is maintained with content from Alexa Internet. The service enables users to see archived
versions of web pages across time, which the archive calls a "three dimensional index." Since 1996, they have been
archiving cached pages of web sites onto their large cluster of Linux nodes. They revisit sites every few weeks or
months and archive a new version if the content has changed. Sites can also be captured on the fly by visitors who
are offered a link to do so. The intent is to capture and archive content that otherwise would be lost whenever a
site is changed or closed down. Their grand vision is to archive the entire Internet. The name Wayback Machine was
chosen as a droll reference to a plot device in an animated cartoon series, The Rocky and Bullwinkle Show. In one
of the animated cartoon's component segments, Peabody's Improbable History, lead characters Mr. Peabody and Sherman
routinely used a time machine called the "WABAC machine" (pronounced way-back) to witness, participate in, and, more
often than not, alter famous events in history. In 1996 Brewster Kahle, with Bruce Gilliat, developed software to
crawl and download all publicly accessible World Wide Web pages, the Gopher hierarchy, the Netnews (Usenet) bulletin
board system, and downloadable software. The information collected by these "crawlers" does not include all the information
available on the Internet, since much of the data is restricted by the publisher or stored in databases that are
not accessible. These "crawlers" also respect the robots exclusion standard for websites whose owners opt for them
not to appear in search results or be cached. To overcome inconsistencies in partially cached web sites, Archive-It.org
was developed in 2005 by the Internet Archive as a means of allowing institutions and content creators to voluntarily
harvest and preserve collections of digital content, and create digital archives. Information had been kept on digital
tape for five years, with Kahle occasionally allowing researchers and scientists to tap into the clunky database.
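The robots-exclusion behaviour described above can be sketched with Python's standard library. The robots.txt rules below are a made-up example for an illustrative site, not the Internet Archive's actual crawling configuration:

```python
import urllib.robotparser

# Hypothetical robots.txt rules for an example site (illustrative only).
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

# A crawler honouring the robots exclusion standard skips disallowed paths,
# so such pages are neither indexed nor cached.
print(parser.can_fetch("ia_archiver", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("ia_archiver", "https://example.com/index.html"))         # True
```

Because excluded pages never enter the index, a site owner's robots.txt can control what a compliant archive retains, which is the mechanism at issue in the Netbula case described later in this article.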
When the archive reached its fifth anniversary, it was unveiled and opened to the public in a ceremony at the University
of California, Berkeley. Snapshots usually become available more than six months after they are archived or, in some
cases, even later; it can take twenty-four months or longer. The frequency of snapshots is variable, so not all tracked
web site updates are recorded. Sometimes there are intervals of several weeks or years between snapshots. After August
2008 sites had to be listed on the Open Directory in order to be included. According to Jeff Kaplan of the Internet
Archive in November 2010, other sites were still being archived, but more recent captures would become visible only
after the next major indexing, an infrequent operation. As of 2009, the Wayback Machine contained approximately
three petabytes of data and was growing at a rate of 100 terabytes each month; the growth rate reported in 2003 was
12 terabytes/month. The data is stored on PetaBox rack systems manufactured by Capricorn Technologies. In 2009, the
Internet Archive migrated its customized storage architecture to Sun Open Storage, and hosts a new data center in
a Sun Modular Datacenter on Sun Microsystems' California campus. In 2011 a new, improved version of the Wayback Machine,
with an updated interface and fresher index of archived content, was made available for public testing. In March
2011, it was said on the Wayback Machine forum that "The Beta of the new Wayback Machine has a more complete and
up-to-date index of all crawled materials into 2010, and will continue to be updated regularly. The index driving
the classic Wayback Machine only has a little bit of material past 2008, and no further index updates are planned,
as it will be phased out this year". In October 2013, the company announced the "Save a Page" feature which allows
any Internet user to archive the contents of a URL. The feature has since been abused to host malicious
binaries. In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula
to disable the robots.txt file on its web site that was causing the Wayback Machine to retroactively remove access
to previous versions of pages it had archived from Netbula's site, pages that Chordiant believed would support its
case. Netbula objected to the motion on the ground that defendants were asking to alter Netbula's web site and that
they should have subpoenaed Internet Archive for the pages directly. An employee of Internet Archive filed a sworn
statement supporting Chordiant's motion, however, stating that it could not produce the web pages by any other means
"without considerable burden, expense and disruption to its operations." Magistrate Judge Howard Lloyd in the Northern
District of California, San Jose Division, rejected Netbula's arguments and ordered them to disable the robots.txt
blockage temporarily in order to allow Chordiant to retrieve the archived pages that they sought. In an October 2004
case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. R. Evid. Serv. 673 (N.D. Ill. Oct.
15, 2004), a litigant attempted to use the Wayback Machine archives as a source of admissible evidence, perhaps for
the first time. Telewizja Polska is the provider of TVP Polonia and EchoStar operates the Dish Network. Prior to
the trial proceedings, EchoStar indicated that it intended to offer Wayback Machine snapshots as proof of the past
content of Telewizja Polska's web site. Telewizja Polska brought a motion in limine to suppress the snapshots on
the grounds of hearsay and unauthenticated source, but Magistrate Judge Arlander Keys rejected Telewizja Polska's
assertion of hearsay and denied TVP's motion in limine to exclude the evidence at trial. At the trial, however,
District Court Judge Ronald Guzman overruled Magistrate Keys' findings and held that neither
the affidavit of the Internet Archive employee nor the underlying pages (i.e., the Telewizja Polska website) were
admissible as evidence. Judge Guzman reasoned that the employee's affidavit contained both hearsay and inconclusive
supporting statements, and the purported web page printouts were not self-authenticating. Provided
some additional requirements are met (e.g. providing an authoritative statement of the archivist), the United States
Patent and Trademark Office and the European Patent Office will accept date stamps from the Internet Archive as evidence of when
a given Web page was accessible to the public. These dates are used to determine if a Web page is available as prior
art, for instance in examining a patent application. There are technical limitations to archiving a web site, and
as a consequence, it is possible for opposing parties in litigation to misuse the results provided by web site archives.
This problem can be exacerbated by the practice of submitting screen shots of web pages in complaints, answers, or
expert witness reports, when the underlying links are not exposed and can therefore contain errors. For example,
archives such as the Wayback Machine do not fill out forms and therefore do not include the contents of non-RESTful
e-commerce databases in their archives. In Europe the Wayback Machine could be interpreted as violating copyright
laws. Only the content creator can decide where their content is published or duplicated, so the Archive would have
to delete pages from its system upon request of the creator. The exclusion policies for the Wayback Machine may be
found in the FAQ section of the site. The Wayback Machine also retroactively respects robots.txt files, i.e., pages
that currently are blocked to robots on the live web temporarily will be made unavailable from the archives as well.
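The Robots Exclusion Standard described above works the same way for any compliant crawler: before fetching a URL, the crawler consults the site's robots.txt. As a minimal illustration (not part of the original text, and using only Python's standard library; the rules shown are hypothetical, not the Archive's own), a crawler's check might look like this:

```python
# Sketch: how a compliant crawler honors the Robots Exclusion Standard.
# The rules below are a hypothetical robots.txt that opts the whole site
# out of crawling, similar to what a site owner blocking the Wayback
# Machine would install.
from urllib.robotparser import RobotFileParser

rules = """User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler calls can_fetch() before requesting any URL;
# with "Disallow: /" every path is off-limits to every user agent.
print(parser.can_fetch("ia_archiver", "http://example.com/page.html"))  # False
```

Because the Archive applies such rules retroactively, a single `Disallow: /` line is enough not only to stop future crawls but also to hide previously archived pages.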
In late 2002, the Internet Archive removed various sites that were critical of Scientology from the Wayback Machine.
An error message stated that this was in response to a "request by the site owner." Later, it was clarified that
lawyers from the Church of Scientology had demanded the removal and that the site owners did not want their material
removed. In 2003, Harding Earley Follmer & Frailey defended a client from a trademark dispute using the Archive's
Wayback Machine. The attorneys were able to demonstrate that the claims made by the plaintiff were invalid, based
on the content of their web site from several years prior. The plaintiff, Healthcare Advocates, then amended their
complaint to include the Internet Archive, accusing the organization of copyright infringement as well as violations
of the DMCA and the Computer Fraud and Abuse Act. Healthcare Advocates claimed that, since they had installed a robots.txt
file on their web site, even if after the initial lawsuit was filed, the Archive should have removed all previous
copies of the plaintiff web site from the Wayback Machine. The lawsuit was settled out of court. Robots.txt is used
as part of the Robots Exclusion Standard, a voluntary protocol the Internet Archive respects that disallows bots
from indexing certain pages delineated by their creator as off-limits. As a result, the Internet Archive has rendered
a number of web sites inaccessible through the Wayback Machine. Currently, the Internet
Archive applies robots.txt rules retroactively; if a site blocks the Internet Archive, such as Healthcare Advocates,
any previously archived pages from the domain are rendered unavailable as well. In cases of blocked sites, only the
robots.txt file is archived. The Internet Archive states, however, "Sometimes a website owner will contact us directly
and ask us to stop crawling or archiving a site. We comply with these requests." In addition, the web site says:
"The Internet Archive is not interested in preserving or offering access to Web sites or other Internet documents
of persons who do not want their materials in the collection." In December 2005, activist Suzanne Shell filed suit
demanding Internet Archive pay her US $100,000 for archiving her web site profane-justice.org between 1999 and 2004.
Internet Archive filed a declaratory judgment action in the United States District Court for the Northern District
of California on January 20, 2006, seeking a judicial determination that Internet Archive did not violate Shell's
copyright. Shell responded and brought a countersuit against Internet Archive for archiving her site, which she alleges
is in violation of her terms of service. On February 13, 2007, a judge for the United States District Court for the
District of Colorado dismissed all counterclaims except breach of contract. The Internet Archive did not move to
dismiss the copyright infringement claims Shell asserted arising out of its copying activities, which would also go forward.
On April 25, 2007, Internet Archive and Suzanne Shell jointly announced the settlement of their lawsuit. The Internet
Archive said it "...has no interest in including materials in the Wayback Machine of persons who do not wish to have
their Web content archived. We recognize that Ms. Shell has a valid and enforceable copyright in her Web site and
we regret that the inclusion of her Web site in the Wayback Machine resulted in this litigation." Shell said, "I
respect the historical value of Internet Archive's goal. I never intended to interfere with that goal nor cause it
any harm." In 2013–14 a pornographic actor was trying to remove archived images of himself, first by sending multiple
DMCA requests to the Archive and then in the Federal Court of Canada.
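Alongside the "Save a Page" feature described above, the Internet Archive exposes a public "availability" JSON endpoint for looking up the snapshot of a URL closest to a given date. As an illustration (this API is an assumption not described in the text above; no network request is made here), the lookup URL can be built like so:

```python
# Sketch: constructing a Wayback Machine availability-API lookup URL.
# The endpoint and parameter names reflect the Archive's public JSON API;
# treat them as assumptions rather than guarantees.
from urllib.parse import urlencode

def availability_url(page_url, timestamp=None):
    """Return the archive.org availability-API URL for page_url.

    timestamp is an optional YYYYMMDD string selecting the closest snapshot.
    """
    params = {"url": page_url}
    if timestamp:
        params["timestamp"] = timestamp
    return "https://archive.org/wayback/available?" + urlencode(params)

print(availability_url("example.com", "20060101"))
# https://archive.org/wayback/available?url=example.com&timestamp=20060101
```

Fetching that URL returns JSON whose `archived_snapshots.closest` entry, if present, points at the nearest archived capture, which is consistent with the variable snapshot frequency noted earlier.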
The Dutch Republic, also known as the Republic of the Seven United Netherlands (Republiek der Zeven Verenigde Nederlanden),
Republic of the United Netherlands or Republic of the Seven United Provinces (Republiek der Zeven Verenigde Provinciën),
was a republic in Europe existing from 1581, when part of the Netherlands separated from Spanish rule, until 1795.
It preceded the Batavian Republic, the Kingdom of Holland, the United Kingdom of the Netherlands, and ultimately
the modern Kingdom of the Netherlands. Alternative names include the United Provinces (Verenigde Provinciën), Federated
Dutch Provinces (Foederatae Belgii Provinciae), and Dutch Federation (Belgica Foederata). Until the 16th century,
the Low Countries – corresponding roughly to the present-day Netherlands, Belgium, and Luxembourg – consisted of
a number of duchies, counties, and Prince-bishoprics, almost all of which were under the supremacy of the Holy Roman
Empire, with the exception of the county of Flanders, which was under the Kingdom of France. Most of the Low Countries
had come under the rule of the House of Burgundy and subsequently the House of Habsburg. In 1549 Holy Roman Emperor
Charles V issued the Pragmatic Sanction, which further unified the Seventeen Provinces under his rule. Charles was
succeeded by his son, King Philip II of Spain. In 1568 the Netherlands, led by William I of Orange, revolted against
Philip II because of high taxes, persecution of Protestants by the government, and Philip's efforts to modernize
and centralize the devolved medieval government structures of the provinces. This was the start of the Eighty Years'
War. In 1579 a number of the northern provinces of the Low Countries signed the Union of Utrecht, in which they promised
to support each other in their defence against the Spanish army. This was followed in 1581 by the Act of Abjuration,
the declaration of independence of the provinces from Philip II. In 1582 the United Provinces invited Francis, Duke
of Anjou to lead them; but after a failed attempt to take Antwerp in 1583, the duke left the Netherlands again. After
the assassination of William of Orange (10 July 1584), both Henry III of France and Elizabeth I of England declined
the offer of sovereignty. However, the latter agreed to turn the United Provinces into a protectorate of England
(Treaty of Nonsuch, 1585), and sent the Earl of Leicester as governor-general. This was unsuccessful and in 1588
the provinces became a confederacy. The Union of Utrecht is regarded as the foundation of the Republic of the Seven
United Provinces, which was not recognized by the Spanish Empire until the Peace of Westphalia in 1648. The Republic
of the United Provinces lasted until a series of republican revolutions in 1783–1795 created the Batavian Republic.
During this period, republican forces took several major cities of the Netherlands. After initially fleeing, the
monarchist forces came back with British, Austrian, and Prussian troops and retook the Netherlands. The republican
forces fled to France, but then successfully re-invaded alongside the army of the French republic. After the French
Republic became the French Empire under Napoleon, the Batavian Republic was replaced by the Napoleonic Kingdom of
Holland. The Netherlands regained independence from France in 1813. In the Anglo-Dutch Treaty of 1814 the names "United
Provinces of the Netherlands" and "United Netherlands" were used. In 1815 it was rejoined with the Austrian Netherlands,
Luxembourg and Liège (the "Southern provinces") to become the Kingdom of the Netherlands, informally known as the
Kingdom of the United Netherlands, to create a strong buffer state north of France. After Belgium and Luxembourg
became independent, the state became unequivocally known as the Kingdom of the Netherlands, as it remains today.
During the Dutch Golden Age, from the late 16th century into the 17th, the Dutch Republic dominated world trade,
conquering a vast colonial empire and operating the largest fleet of merchantmen of any nation. The County
of Holland was the wealthiest and most urbanized region in the world. The free trade spirit of the time received
a strong augmentation through the development of a modern, effective stock market in the Low Countries. The Netherlands
has the oldest stock exchange in the world, founded in 1602 by the Dutch East India Company. While Rotterdam has
the oldest bourse in the Netherlands, the world's first stock exchange – that of the Dutch East-India Company – went
public in six different cities. Later, a court ruled that the company had to reside legally in a single city, so
Amsterdam is recognized as the oldest such institution based on modern trading principles. While the banking system
evolved in the Low Countries, it was quickly incorporated by the well-connected English, stimulating English economic
output. Between 1590 and 1712 the Dutch also possessed one of the strongest and fastest navies in the world, allowing
for their varied conquests including breaking the Portuguese sphere of influence on the Indian Ocean and in the Orient,
as well as a lucrative slave trade from Africa and the Pacific. The republic was a confederation of seven provinces,
which had their own governments and were very independent, and a number of so-called Generality Lands. The latter
were governed directly by the States General (Staten-Generaal in Dutch), the federal government. The States General
were seated in The Hague and consisted of representatives of each of the seven provinces. The provinces of the republic
were seven in number, ranked in official feudal order. In fact, there was an eighth province, the County of Drenthe, but this area was so
poor it was exempt from paying federal taxes and as a consequence was denied representation in the States General.
Each province was governed by the Provincial States; the main executive official (though not the official head of
state) was a raadspensionaris. In times of war, the stadtholder, who commanded the army, would have more power than
the raadspensionaris. In theory, the stadtholders were freely appointed by and subordinate to the states of each
province. However, in practice the princes of Orange of the House of Orange-Nassau, beginning with William the Silent,
were always chosen as stadtholders of most of the provinces. Zeeland and usually Utrecht had the same stadtholder
as Holland. There was a constant power struggle between the Orangists, who supported the stadtholders and specifically
the princes of Orange, and the Republicans, who supported the States General and hoped to replace the semi-hereditary
nature of the stadtholdership with a true republican structure. After the Peace of Westphalia, several border territories
were assigned to the United Provinces. They were federally-governed Generality Lands (Generaliteitslanden). They
were Staats-Brabant (present North Brabant), Staats-Vlaanderen (present Zeeuws-Vlaanderen), Staats-Limburg (around
Maastricht) and Staats-Oppergelre (around Venlo, after 1715). The States General of the United Provinces were in
control of the Dutch East India Company (VOC) and the Dutch West India Company (WIC), but some shipping expeditions
were initiated by some of the provinces, mostly Holland and/or Zeeland. The framers of the US Constitution were influenced
by the Constitution of the Republic of the United Provinces, as Federalist No. 20, by James Madison, shows. Such
influence appears, however, to have been of a negative nature, as Madison describes the Dutch confederacy as exhibiting
"Imbecility in the government; discord among the provinces; foreign influence and indignities; a precarious existence
in peace, and peculiar calamities from war." Apart from this, the American Declaration of Independence is similar
to the Act of Abjuration, essentially the declaration of independence of the United Provinces, but concrete evidence
that the former directly influenced the latter is absent. In the Union of Utrecht of 20 January 1579, Holland and
Zeeland were granted the right to accept only one religion (in practice, Calvinism). Every other province had the
freedom to regulate the religious question as it wished, although the Union stated every person should be free in
the choice of personal religion and that no person should be prosecuted based on religious choice. William of Orange
had been a strong supporter of public and personal freedom of religion and hoped to unite Protestants and Catholics
in the new union, and, for him, the Union was a defeat. In practice, Catholic services in all provinces were quickly
forbidden, and the Reformed Church became the "public" or "privileged" church in the Republic. During the Republic,
any person who wished to hold public office had to conform to the Reformed Church and take an oath to this effect.
The extent to which different religions or denominations were persecuted depended much on the time period and regional
or city leaders. In the beginning, this was especially focused on Roman Catholics, being the religion of the enemy.
In 17th-century Leiden, for instance, people opening their homes to services could be fined 200 guilders (a year's
wage for a skilled tradesman) and banned from the city. Throughout this, however, personal freedom of religion existed
and was one factor – along with economic reasons – in causing large immigration of religious refugees from other
parts of Europe. In the first years of the Republic, controversy arose within the Reformed Church, mainly around
the subject of predestination. This has become known as the struggle between Arminianism and Gomarism, or between
Remonstrants and Contra-Remonstrants. In 1618 the Synod of Dort tackled this issue, which led to the banning of the
Remonstrant faith. Beginning in the 18th century, the situation changed from more or less active persecution of religious
services to a state of restricted toleration of other religions, as long as their services took place secretly in
private churches.
Symbiosis (from Greek σύν "together" and βίωσις "living") is close and often long-term interaction between two different
biological species. In 1877 Albert Bernhard Frank used the word symbiosis (which previously had been used to depict
people living together in community) to describe the mutualistic relationship in lichens. In 1879, the German mycologist
Heinrich Anton de Bary defined it as "the living together of unlike organisms." The definition of symbiosis has varied
among scientists. Some believe symbiosis should only refer to persistent mutualisms, while others believe it should
apply to any type of persistent biological interaction (in other words mutualistic, commensalistic, or parasitic).
After 130 years of debate, current biology and ecology textbooks now use the latter "de Bary" definition or an even
broader definition (where symbiosis means all species interactions), with the restrictive definition (in other words,
that symbiosis means mutualism) no longer used. Some symbiotic relationships are obligate, meaning that both symbionts
entirely depend on each other for survival. For example, many lichens consist of fungal and photosynthetic symbionts
that cannot live on their own. Others are facultative (optional): they can, but do not have to, live with the other
organism. Symbiotic relationships include those associations in which one organism lives on another (ectosymbiosis,
such as mistletoe), or where one partner lives inside the other (endosymbiosis, such as lactobacilli and other bacteria
in humans or Symbiodinium in corals). Symbiosis is also classified by physical attachment of the organisms; symbiosis
in which the organisms have bodily union is called conjunctive symbiosis, and symbiosis in which they are not in
union is called disjunctive symbiosis. Endosymbiosis is any symbiotic relationship in which one symbiont lives within
the tissues of the other, either within the cells or extracellularly. Examples include diverse microbiomes, rhizobia,
nitrogen-fixing bacteria that live in root nodules on legume roots; actinomycete nitrogen-fixing bacteria called
Frankia, which live in alder tree root nodules; single-celled algae inside reef-building corals; and bacterial endosymbionts
that provide essential nutrients to about 10%–15% of insects. Ectosymbiosis, also referred to as exosymbiosis, is
any symbiotic relationship in which the symbiont lives on the body surface of the host, including the inner surface
of the digestive tract or the ducts of exocrine glands. Examples of this include ectoparasites such as lice, commensal
ectosymbionts such as the barnacles that attach themselves to the jaw of baleen whales, and mutualist ectosymbionts
such as cleaner fish. Mutualism or interspecies reciprocal altruism is a relationship between individuals of different
species where both individuals benefit. In general, only lifelong interactions involving close physical and biochemical
contact can properly be considered symbiotic. Mutualistic relationships may be either obligate for both species,
obligate for one but facultative for the other, or facultative for both. Many biologists restrict the definition
of symbiosis to close mutualist relationships. A large percentage of herbivores have mutualistic gut flora that help
them digest plant matter, which is more difficult to digest than animal prey. This gut flora is made up of cellulose-digesting
protozoans or bacteria living in the herbivores' intestines. Coral reefs are the result of mutualisms between coral
organisms and various types of algae that live inside them. Most land plants and land ecosystems rely on mutualisms
between the plants, which fix carbon from the air, and mycorrhizal fungi, which help in extracting water and minerals
from the ground. An example of mutual symbiosis is the relationship between the ocellaris clownfish that dwell among
the tentacles of Ritteri sea anemones. The territorial fish protects the anemone from anemone-eating fish, and in
turn the stinging tentacles of the anemone protect the clownfish from its predators. A special mucus on the clownfish
protects it from the stinging tentacles. A further example is the goby fish, which sometimes lives together with
a shrimp. The shrimp digs and cleans up a burrow in the sand in which both the shrimp and the goby fish live. The
shrimp is almost blind, leaving it vulnerable to predators when outside its burrow. In case of danger the goby fish
touches the shrimp with its tail to warn it. When that happens both the shrimp and goby fish quickly retreat into
the burrow. Different species of gobies (Elacatinus spp.) also exhibit mutualistic behavior through cleaning up ectoparasites
in other fish. Another non-obligate symbiosis is known from encrusting bryozoans and hermit crabs that live in a
close relationship. The bryozoan colony (Acanthodesia commensale) develops a circumrotatory growth and offers the
crab (Pseudopagurus granulimanus) a helicospiral-tubular extension of its living chamber that initially was situated
within a gastropod shell. One of the most spectacular examples of obligate mutualism is between the siboglinid tube
worms and symbiotic bacteria that live at hydrothermal vents and cold seeps. The worm has no digestive tract and
is wholly reliant on its internal symbionts for nutrition. The bacteria oxidize either hydrogen sulfide or methane,
which the host supplies to them. These worms were discovered in the late 1980s at the hydrothermal vents near the
Galapagos Islands and have since been found at deep-sea hydrothermal vents and cold seeps in all of the world's oceans.
During mutualistic symbioses, the host cell lacks some of the nutrients, which are provided by the endosymbiont.
As a result, the host favors the endosymbiont's growth processes within itself by producing specialized cells. These
cells affect the genetic composition of the host in order to regulate the increasing population of the endosymbionts
and to ensure that these genetic changes are passed on to the offspring via vertical transmission (heredity). Adaptation
of the endosymbiont to the host's lifestyle leads to many changes in the endosymbiont, the foremost being a drastic
reduction in its genome size. Many genes involved in metabolism, DNA repair, and recombination are lost, while
important genes participating in DNA-to-RNA transcription, protein translation, and DNA/RNA replication are retained.
That is, the decrease in genome size is due to loss of protein-coding genes and not to shrinking of inter-genic
regions or open reading frame (ORF) size. Species evolving with such reduced genomes can accumulate an increased
number of noticeable differences between them, leading to changes in their evolutionary rates. As the endosymbiotic
bacteria associated with these insects are passed to offspring strictly via vertical transmission, the intracellular
bacteria go through many bottlenecks in the process, resulting in decreased effective population sizes compared with
free-living bacteria. The inability of the endosymbiotic bacteria to reinstate their wild-type phenotype via
recombination is known as Muller's ratchet. Muller's ratchet, together with smaller effective population sizes,
has led to an accretion of deleterious mutations in the non-essential genes of the intracellular bacteria. This may
have been due to a lack of selection pressure in the nutrient-rich environment of the host. Commensalism describes
a relationship between two living organisms where one benefits and the other is not significantly harmed or helped.
It is derived from the English word commensal, used of human social interaction. That word derives in turn from a
medieval Latin term formed from com- and mensa, meaning "sharing a table". Commensal relationships may involve one organism
using another for transportation (phoresy) or for housing (inquilinism), or it may also involve one organism using
something another created, after its death (metabiosis). Examples of metabiosis are hermit crabs using gastropod
shells to protect their bodies and spiders building their webs on plants. A parasitic relationship is one in which
one member of the association benefits while the other is harmed. This is also known as antagonistic or antipathetic
symbiosis. Parasitic symbioses take many forms, from endoparasites that live within the host's body to ectoparasites
that live on its surface. In addition, parasites may be necrotrophic, which is to say they kill their host, or biotrophic,
meaning they rely on their host's surviving. Biotrophic parasitism is an extremely successful mode of life. Depending
on the definition used, as many as half of all animals have at least one parasitic phase in their life cycles, and
it is also frequent in plants and fungi. Moreover, almost all free-living animals are host to one or more parasite
taxa. An example of a biotrophic relationship would be a tick feeding on the blood of its host. Amensalism is the
type of relationship that exists where one species is inhibited or completely obliterated and one is unaffected.
This type of symbiosis is relatively uncommon in rudimentary reference texts, but is omnipresent in the natural world.
There are two types of amensalism, competition and antibiosis. Competition is where a larger or stronger
organism deprives a smaller or weaker one of a resource. Antibiosis occurs when one organism is damaged or killed
by another through a chemical secretion. An example of competition is a sapling growing under the shadow of a mature
tree. The mature tree can begin to rob the sapling of necessary sunlight and, if the mature tree is very large, it
can take up rainwater and deplete soil nutrients. Throughout the process the mature tree is unaffected. Indeed, if
the sapling dies, the mature tree gains nutrients from the decaying sapling. Note that these nutrients become available
because of the sapling's decomposition, rather than from the living sapling, which would be a case of parasitism.
An example of antibiosis is Juglans nigra (black walnut), secreting juglone, a substance which destroys many
herbaceous plants within its root zone. Amensalism is an interaction where an organism inflicts harm on another
organism without receiving any cost or benefit itself. A clear case of amensalism is where sheep or cattle trample
grass. Whilst the presence of the grass causes negligible detrimental effects to the animal's hoof, the grass suffers
from being crushed. Amensalism is often used to describe strongly asymmetrical competitive interactions, such as
has been observed between the Spanish ibex and weevils of the genus Timarcha which feed upon the same type of shrub.
Whilst the presence of the weevil has almost no influence on food availability, the presence of ibex has an enormous
detrimental effect on weevil numbers, as they consume significant quantities of plant matter and incidentally ingest
the weevils upon it. Synnecrosis is a rare type of symbiosis in which the interaction between species is detrimental
to both organisms involved. It is a short-lived condition, as the interaction eventually causes death. Because of
this, evolution selects against synnecrosis and it is uncommon in nature. An example of this is the relationship
between some species of bees and victims of the bee sting. Species of bees that die after stinging their prey inflict
pain on themselves (albeit to protect the hive) as well as on the victim. This term is rarely used. While historically,
symbiosis has received less attention than other interactions such as predation or competition, it is increasingly
recognized as an important selective force behind evolution, with many species having a long history of interdependent
co-evolution. In fact, the evolution of all eukaryotes (plants, animals, fungi, and protists) is believed under the
endosymbiotic theory to have resulted from a symbiosis between various sorts of bacteria. This theory is supported
by certain organelles dividing independently of the cell, and the observation that some organelles seem to have their
own nucleic acid. The biologist Lynn Margulis, famous for her work on endosymbiosis, contends that symbiosis is a
major driving force behind evolution. She considers Darwin's notion of evolution, driven by competition, to be incomplete
and claims that evolution is strongly based on co-operation, interaction, and mutual dependence among organisms.
According to Margulis and Dorion Sagan, "Life did not take over the globe by combat, but by networking." Symbiosis
played a major role in the co-evolution of flowering plants and the animals that pollinate them. Many plants that
are pollinated by insects, bats, or birds have highly specialized flowers modified to promote pollination by a specific
pollinator that is also correspondingly adapted. The first flowering plants in the fossil record had relatively simple
flowers. Adaptive speciation quickly gave rise to many diverse groups of plants, and, at the same time, corresponding
speciation occurred in certain insect groups. Some groups of plants developed nectar and large sticky pollen, while
insects evolved more specialized morphologies to access and collect these rich food sources. In some taxa of plants
and insects the relationship has become dependent, where the plant species can only be pollinated by one species
of insect.
The Canadian Armed Forces (CAF; French: Forces armées canadiennes, FAC), or Canadian Forces (CF) (French: les Forces canadiennes,
FC), is the unified armed force of Canada, as constituted by the National Defence Act, which states: "The Canadian
Forces are the armed forces of Her Majesty raised by Canada and consist of one Service called the Canadian Armed
Forces." This unified institution consists of sea, land, and air elements referred to as the Royal Canadian Navy
(RCN), Canadian Army, and Royal Canadian Air Force (RCAF). Personnel may belong to either the Regular Force or the
Reserve Force, which has four sub-components: the Primary Reserve, Supplementary Reserve, Cadet Organizations Administration
and Training Service, and the Canadian Rangers. Under the National Defence Act, the Canadian Armed Forces are an
entity separate and distinct from the Department of National Defence (the federal government department responsible
for administration and formation of defence policy), which also exists as the civilian support system for the Forces.
The Commander-in-Chief of the Canadian Armed Forces is the reigning Canadian monarch, Queen Elizabeth II, who is
represented by the Governor General of Canada. The Canadian Armed Forces are led by the Chief of the Defence Staff,
who is advised and assisted by the Armed Forces Council. During the Cold War, a principal focus of Canadian defence
policy was contributing to the security of Europe in the face of the Soviet military threat. Toward that end, Canadian
ground and air forces were based in Europe from the early 1950s until the early 1990s. However, since the end of
the Cold War, as the North Atlantic Treaty Organization (NATO) has moved much of its defence focus "out of area",
the Canadian military has also become more deeply engaged in international security operations in various other parts
of the world – most notably in Afghanistan since 2002. Canadian defence policy today is based on the Canada First
Defence Strategy, introduced in 2008. Based on that strategy, the Canadian military is oriented and being equipped
to carry out six core missions within Canada, in North America and globally. Consistent with the missions and priorities outlined above, the Canadian
Armed Forces also contribute to the conduct of Canadian defence diplomacy through a range of activities, including
the deployment of Canadian Defence Attachés, participation in bilateral and multilateral military forums (e.g. the
System of Cooperation Among the American Air Forces), ship and aircraft visits, military training and cooperation,
and other such outreach and relationship-building efforts. Prior to Confederation in 1867, residents of the colonies
in what is now Canada served as regular members of French and British forces and in local militia groups. The latter
aided in the defence of their respective territories against attacks by other European powers, Aboriginal peoples,
and later American forces during the American Revolutionary War and War of 1812, as well as in the Fenian raids,
Red River Rebellion, and North-West Rebellion. Consequently, the lineages of some Canadian army units stretch back
to the early 19th century, when militia units were formed to assist in the defence of British North America against
invasion by the United States. The responsibility for military command remained with the British Crown-in-Council,
with a commander-in-chief for North America stationed at Halifax until the final withdrawal of British Army and Royal
Navy units from that city in 1906. Thereafter, the Royal Canadian Navy was formed, and, with the advent of military
aviation, the Royal Canadian Air Force. These forces were organised under the Department of Militia and Defence,
and split into the Permanent and Non-Permanent Active Militias—frequently shortened to simply The Militia. By 1923,
the department was merged into the Department of National Defence, but land forces in Canada were not referred to
as the Canadian Army until November 1940. The first overseas deployment of Canadian military forces occurred during
the Second Boer War, when several units were raised to serve under British command. Similarly, when the United Kingdom
entered into conflict with Germany in the First World War, Canadian troops were called to participate in European
theatres. The Canadian Crown-in-Council then decided to send its forces into the Second World War, as well as the
Korean War. Since 1947, Canadian military units have participated in more than 200 operations worldwide, and completed
72 international operations. Canadian soldiers, sailors, and aviators came to be considered world-class professionals
through conspicuous service during these conflicts and the country's integral participation in NATO during the Cold
War, First Gulf War, Kosovo War, and in United Nations Peacekeeping operations, such as the Suez Crisis, Golan Heights,
Cyprus, Croatia, Bosnia, Afghanistan, and Libya. Canada maintained an aircraft carrier from 1957 to 1970 during the
Cold War, which never saw combat but participated in patrols during the Cuban Missile Crisis. Battles which are particularly
notable to the Canadian military include the Battle of Vimy Ridge, the Dieppe Raid, the Battle of Ortona, the Battle
of Passchendaele, the Normandy Landings, the Battle for Caen, the Battle of the Scheldt, the Battle of Britain, the
Battle of the Atlantic, the strategic bombing of German cities, and more recently the Battle of Medak Pocket, in
Croatia. At the end of the Second World War, Canada possessed the fourth-largest air force and fifth-largest naval
surface fleet in the world, as well as the largest volunteer army ever fielded. Conscription for overseas service
was introduced only near the end of the war, and only 2,400 conscripts actually made it into battle. Originally,
Canada was thought to have had the third-largest navy in the world, but with the fall of the Soviet Union, new data
based on Japanese and Soviet sources found that to be incorrect. The current iteration of the Canadian Armed Forces
dates from 1 February 1968, when the Royal Canadian Navy, Canadian Army, and Royal Canadian Air Force were merged
into a unified structure and superseded by elemental commands. Its roots, however, lie in colonial militia groups
that served alongside garrisons of the French and British armies and navies; a structure that remained in place until
the early 20th century. Thereafter, a distinctly Canadian army and navy was established, followed by an air force,
that, because of the constitutional arrangements at the time, remained effectively under the control of the British
government until Canada gained legislative independence from the United Kingdom in 1931, in part due to the distinguished
achievement and sacrifice of the Canadian Corps in the First World War. After the 1980s, the use of the "Canadian
Armed Forces" name gave way to "Canadian Forces";[citation needed] The "Canadian Armed Forces" name returned in 2013.
Land Forces during this period also deployed in support of peacekeeping operations within United Nations sanctioned
conflicts. The nature of the Canadian Forces has continued to evolve. They have been deployed in Afghanistan until
2011, under the NATO-led United Nations International Security Assistance Force (ISAF), at the request of the Government
of Afghanistan. The Armed Forces are today funded by approximately $20.1 billion annually and are presently ranked 74th in size among the world's armed forces by total personnel, and 58th in terms of active personnel, standing at a strength of roughly 68,000 regulars, plus 27,000 reservists, 5,000 Rangers, and 19,000 supplementary reserves, for a total force of approximately 119,000. The number of primary reserve personnel is expected to rise to 30,000 by 2020, and the number of active personnel to at least 70,000; with the 5,000 Rangers and 19,000 supplementary personnel, total strength would then be around 124,000. These individuals serve on
numerous CF bases located in all regions of the country, and are governed by the Queen's Regulations and Orders and
the National Defence Act. In 2008 the Government of Canada made efforts, through the Canada First Defence Strategy,
to modernize the Canadian Armed Forces, through the purchase of new equipment, improved training and readiness, as
well as the establishment of the Canadian Special Operations Regiment. More funds were also put towards recruitment,
which had been dwindling throughout the 1980s and '90s, possibly because the Canadian populace had come to perceive
the CAF as peacekeepers rather than as soldiers, as shown in a 2008 survey conducted for the Department of National
Defence. The poll found that nearly two thirds of Canadians agreed with the country's participation in the invasion
of Afghanistan, and that the military should be stronger, but also that the purpose of the forces should be different,
such as more focused on responding to natural disasters. Then CDS, Walter Natynczyk, said later that year that while
recruiting has become more successful, the CF was facing a problem with its rate of loss of existing members, which
increased between 2006 and 2008 from 6% to 9.2% annually. The 2006 renewal and re-equipment effort has resulted in
the acquisition of specific equipment (main battle tanks, artillery, unmanned air vehicles and other systems) to
support the mission in Afghanistan. It has also encompassed initiatives to renew certain so-called "core capabilities"
(such as the air force's medium range transport aircraft fleet – the C-130 Hercules – and the army's truck and armoured
vehicle fleets). In addition, new systems (such as C-17 Globemaster III strategic transport aircraft and CH-47 Chinook
heavy-lift helicopters) have also been acquired for the Armed Forces. Although the viability of the Canada First
Defence Strategy continues to suffer setbacks from challenging and evolving fiscal and other factors, it originally set out a series of long-term modernization aims. In the 1950s, the recruitment of women was open to roles in medicine, communication, logistics, and administration.
The roles of women in the CAF began to expand in 1971, after the Department reviewed the recommendations of the Royal
Commission on the Status of Women, at which time it lifted the ceiling of 1,500 women personnel, and gradually expanded
employment opportunities into the non-traditional areas—vehicle drivers and mechanics, aircraft mechanics, air-traffic
controllers, military police, and firefighters. The Department further reviewed personnel policies in 1978 and 1985,
after Parliament passed the Canadian Human Rights Act and the Canadian Charter of Rights and Freedoms. As a result
of these reviews, the Department changed its policies to permit women to serve at sea in replenishment ships and
in a diving tender, with the army service battalions, in military police platoons and field ambulance units, and
in most air squadrons. In 1987, occupations and units with the primary role of preparing for direct involvement in
combat on the ground or at sea were still closed to women: infantry, armoured corps, field artillery, air-defence
artillery, signals, field engineers, and naval operations. On 5 February 1987, the Minister of National Defence created
an office to study the impact of employing men and women in combat units. These trials were called Combat-Related
Employment of Women. All military occupations were open to women in 1989, with the exception of submarine service,
which opened in 2000. Throughout the 1990s, the introduction of women into the combat arms increased the potential
recruiting pool by about 100 percent. It also provided opportunities for all persons to serve their country to the
best of their abilities. Women were fully integrated in all occupations and roles by the government of Jean Chrétien, and by 8 March 2000 were allowed to serve on submarines. All equipment must be suitable for a mixed-gender force.
Combat helmets, rucksacks, combat boots, and flak jackets are designed to ensure women have the same level of protection
and comfort as their male colleagues. The women's uniform is similar in design to the men's uniform, but conforms
to the female figure, and is functional and practical. Women are also provided with an annual financial entitlement
for the purchase of brassiere undergarments. The following is the hierarchy of the Canadian Armed Forces. It begins
at the top with the most senior-ranking personnel and works its way into lower organizations. The Canadian constitution
determines that the Commander-in-Chief of the Canadian Armed Forces is the country's sovereign, who, since 1904, has authorized his or her viceroy, the governor general, to exercise the duties ascribed to the post of Commander-in-Chief and, since 1905, to hold the associated title. All troop deployment and disposition orders, including declarations
of war, fall within the royal prerogative and are issued as Orders in Council, which must be signed by either the
monarch or governor general. Under the Westminster system's parliamentary customs and practices, however, the monarch
and viceroy must generally follow the advice of his or her ministers in Cabinet, including the prime minister and
minister of national defence, who are accountable to the elected House of Commons. The Armed Forces' 115,349 personnel
are divided into a hierarchy of numerous ranks of officers and non-commissioned members. The governor general appoints,
on the advice of the prime minister, the Chief of the Defence Staff (CDS) as the highest ranking commissioned officer
in the Armed Forces and who, as head of the Armed Forces Council, is in command of the Canadian Forces. The Armed
Forces Council generally operates from National Defence Headquarters (NDHQ) in Ottawa, Ontario. On the Armed Forces
Council sit the heads of Canadian Joint Operations Command and Canadian Special Operations Forces Command, the Vice
Chief of the Defence Staff, and the heads of the Royal Canadian Navy, the Canadian Army, the Royal Canadian Air Force
and other key Level 1 organizations. The sovereign and most other members of the Canadian Royal Family also act as
colonels-in-chief, honorary air commodores, air commodores-in-chief, admirals, and captains-general of Canadian Forces
units, though these positions are ceremonial. Canada's Armed Forces operate out of 27 Canadian Forces bases (CFB)
across the country, including NDHQ. This number has been gradually reduced since the 1970s with bases either being
closed or merged. Both officers and non-commissioned members receive their basic training at the Canadian Forces
Leadership and Recruit School in Saint-Jean-sur-Richelieu. Officers will generally either directly enter the Canadian
Armed Forces with a degree from a civilian university, or receive their commission upon graduation from the Royal
Military College of Canada. Specific element and trade training is conducted at a variety of institutions throughout
Canada, and to a lesser extent, the world. The Royal Canadian Navy (RCN), headed by the Commander of the Royal Canadian
Navy, includes 33 warships and submarines deployed in two fleets: Maritime Forces Pacific (MARPAC) at CFB Esquimalt
on the west coast, and Maritime Forces Atlantic (MARLANT) at Her Majesty's Canadian Dockyard in Halifax on the east
coast, as well as one formation: the Naval Reserve Headquarters (NAVRESHQ) at Quebec City, Quebec. The fleet is augmented
by various aircraft and supply vessels. The RCN participates in NATO exercises and operations, and ships are deployed
all over the world in support of multinational deployments. The Canadian Army is headed by the Commander of the Canadian
Army and administered through four divisions—the 2nd Canadian Division, the 3rd Canadian Division, the 4th Canadian
Division and the 5th Canadian Division—the Canadian Army Doctrine and Training System and the Canadian Army Headquarters.
Currently, the Regular Force component of the Army consists of three field-ready brigade groups: 1 Canadian Mechanized
Brigade Group, at CFB Edmonton and CFB Shilo; 2 Canadian Mechanized Brigade Group, at CFB Petawawa and CFB Gagetown;
and 5 Canadian Mechanized Brigade Group, at CFB Valcartier and Quebec City. Each contains one regiment each of artillery,
armour, and combat engineers, three battalions of infantry (all scaled in the British fashion), one battalion for
logistics, a squadron for headquarters/signals, and several smaller support organizations. A tactical helicopter
squadron and a field ambulance are co-located with each brigade, but do not form part of the brigade's command structure.
The 2nd, 3rd and 4th Canadian Divisions each has a Regular Force brigade group, and each division except the 1st
has two to three Reserve Force brigade groups. In total, there are ten Reserve Force brigade groups. The 5th Canadian
Division and the 2nd Canadian Division each have two Reserve Force brigade groups, while the 4th Canadian Division
and the 3rd Canadian Division each have three Reserve Force brigade groups. Major training and support establishments
exist at CFB Gagetown, CFB Montreal and CFB Wainwright. The Royal Canadian Air Force (RCAF) is headed by the Commander
of the Royal Canadian Air Force. The commander of 1 Canadian Air Division and Canadian NORAD Region, based in Winnipeg,
is responsible for the operational command and control of Air Force activities throughout Canada and worldwide. 1
Canadian Air Division operations are carried out through eleven wings located across Canada. The commander of 2 Canadian
Air Division is responsible for training and support functions. 2 Canadian Air Division operations are carried out
at two wings. Wings represent the grouping of various squadrons, both operational and support, under a single tactical
commander reporting to the operational commander and vary in size from several hundred personnel to several thousand.
Major air bases are located in British Columbia, Alberta, Saskatchewan, Manitoba, Ontario, Quebec, Nova Scotia, and
Newfoundland and Labrador, while administrative and command and control facilities are located in Winnipeg and North
Bay. A Canadian component of the NATO Airborne Early Warning Force is also based at NATO Air Base Geilenkirchen near
Geilenkirchen, Germany. The RCAF and Joint Task Force (North) (JTFN) also maintain at various points throughout Canada's
northern region a chain of forward operating locations, each capable of supporting fighter operations. Elements of
CF-18 squadrons periodically deploy to these airports for short training exercises or Arctic sovereignty patrols.
The Canadian Joint Operations Command is an operational element established in October 2012 with the merger of Canada
Command, the Canadian Expeditionary Force Command and the Canadian Operational Support Command. The new command,
created as a response to the cost-cutting measures in the 2012 federal budget, combines the resources, roles and
responsibilities of the three former commands under a single headquarters. The Canadian Special Operations Forces
Command (CANSOFCOM) is a formation capable of operating independently but primarily focused on generating special
operations forces (SOF) elements to support CJOC. The command includes Joint Task Force 2 (JTF2), the Canadian Joint
Incident Response Unit (CJIRU) based at CFB Trenton, as well as the Canadian Special Operations Regiment (CSOR) and
427 Special Operations Aviation Squadron (SOAS) based at CFB Petawawa. Among other things, the Information Management
Group is responsible for the conduct of electronic warfare and the protection of the Armed Forces' communications
and computer networks. Within the group, this operational role is fulfilled by the Canadian Forces Information Operations
Group, headquartered at CFS Leitrim in Ottawa, which operates the following units: the Canadian Forces Information
Operations Group Headquarters (CFIOGHQ), the Canadian Forces Electronic Warfare Centre (CFEWC), the Canadian Forces
Network Operation Centre (CFNOC), the Canadian Forces Signals Intelligence Operations Centre (CFSOC), the Canadian
Forces Station (CFS) Leitrim, and the 764 Communications Squadron. In June 2011 the Canadian Armed Forces Chief of
Force Development announced the establishment of a new organization, the Directorate of Cybernetics, headed by a
Brigadier General, the Director General Cyber (DG Cyber). Within that directorate the newly established CAF Cyber
Task Force, has been tasked to design and build cyber warfare capabilities for the Canadian Armed Forces. The Health
Services Group is a joint formation that includes over 120 general or specialized units and detachments providing
health services to the Canadian Armed Forces. With few exceptions, all elements are under command of the Surgeon
General for domestic support and force generation, or temporarily assigned under command of a deployed Joint Task
Force through Canadian Joint Operations Command. The Canadian Armed Forces have a total reserve force of approximately
50,000 primary and supplementary reservists who can be called upon in times of national emergency or threat. Approximately 26,000
citizen soldiers, sailors, and airmen and women, trained to the level of and interchangeable with their Regular Force
counterparts, and posted to CAF operations or duties on a casual or ongoing basis, make up the Primary Reserve. This
group is represented, though not commanded, at NDHQ by the Chief of Reserves and Cadets, who is usually a major general
or rear admiral, and is divided into four components that are each operationally and administratively responsible
to its corresponding environmental command in the Regular Force – the Naval Reserve (NAVRES), Land Force Reserve
(LFR), and Air Reserve (AIRRES) – in addition to one force that does not fall under an environmental command, the
Health Services Reserve under the Canadian Forces Health Services Group. The Cadet Organizations Administration and
Training Service (COATS) consists of officers and non-commissioned members who conduct training, safety, supervision
and administration of nearly 60,000 cadets aged 12 to 18 years in the Canadian Cadet Movement. The majority of members
in COATS are officers of the Cadet Instructors Cadre (CIC) branch of the CAF. Members of the Reserve Force Sub-Component
COATS who are not employed part-time (Class A) or full-time (Class B) may be held on the "Cadet Instructor Supplementary
Staff List" (CISS List) in anticipation of employment in the same manner as other reservists are held as members
of the Supplementary Reserve. The Canadian Rangers, who provide surveillance and patrol services in Canada's arctic
and other remote areas, are an essential reserve force component used for Canada's exercise of sovereignty over its
northern territory. Service dress is suitable for CAF members to wear on any occasion, barring "dirty work"
or combat. With gloves, swords, and medals (No. 1 or 1A), it is suitable for ceremonial occasions and "dressed down"
(No. 3 or lower), it is suitable for daily wear. Generally, after the elimination of base dress (although still defined
for the Air Force uniform), operational dress is now the daily uniform worn by most members of the CF, unless service
dress is prescribed (such as at the NDHQ, on parades, at public events, etc.). Approved parkas are authorized for
winter wear in cold climates and a light casual jacket is also authorized for cooler days. The navy, most army, and
some other units have, for very specific occasions, a ceremonial/regimental full dress, such as the naval "high-collar"
white uniform, kilted Highland, Scottish, and Irish regiments, and the scarlet uniforms of the Royal Military Colleges.
Authorized headdress for the Canadian Armed Forces are the: beret, wedge cap, ballcap, Yukon cap, and tuque (toque).
Each is coloured according to the distinctive uniform worn: navy (white or navy blue), army (rifle green or "regimental"
colour), air force (light blue). Adherents of the Sikh faith may wear uniform turbans (dastar) (or patka, when operational)
and Muslim women may wear uniform tucked hijabs under their authorized headdress. Jews may wear yarmulke under their
authorized headdress and when bareheaded. The beret is probably the most widely worn headgear and is worn with almost
all orders of dress (with the exception of the more formal orders of Navy and Air Force dress), and the colour of
which is determined by the wearer's environment, branch, or mission. Naval personnel, however, seldom wear berets,
preferring either service cap or authorized ballcaps (shipboard operational dress), which only the Navy wear. Air
Force personnel, particularly officers, prefer the wedge cap to any other form of headdress. There is no naval variant
of the wedge cap. The Yukon cap and tuque are worn only with winter dress, although clearance and combat divers may
wear tuques year-round as a watch cap. Soldiers in Highland, Scottish, and Irish regiments generally wear alternative
headdress, including the glengarry, balmoral, tam o'shanter, and caubeen instead of the beret. The officer cadets
of both Royal Military Colleges wear gold-braided "pillbox" (cavalry) caps with their ceremonial dress and have a
unique fur "Astrakhan" for winter wear. The Canadian Army wears the CG634 helmet. The Constitution of Canada gives
the federal government exclusive responsibility for national defence, and expenditures are thus outlined in the federal
budget. For the 2008–2009 fiscal year, the amount allocated for defence spending was CAD$18.9 billion. This regular
funding was augmented in 2005 with an additional CAD$12.5 billion over five years, as well as a commitment to increasing
regular force troop levels by 5,000 persons, and the primary reserve by 3,000 over the same period. In 2006, a further
CAD$5.3 billion over five years was provided to allow for 13,000 more regular force members, and 10,000 more primary
reserve personnel, as well as CAD$17.1 billion for the purchase of new trucks for the Canadian Army, transport aircraft
and helicopters for the Royal Canadian Air Force, and joint support ships for the Royal Canadian Navy. Although the
Canadian Armed Forces are a single service, there are three similar but distinctive environmental uniforms (DEUs):
navy blue (which is actually black) for the navy, rifle green for the army, and light blue for the air force. CAF
members in operational occupations generally wear the DEU to which their occupation "belongs." CAF members in non-operational
occupations (the "purple" trades) are allocated a uniform according to the "distribution" of their branch within
the CAF, association of the branch with one of the former services, and the individual's initial preference. Therefore,
on any given day, in any given CAF unit, all three coloured uniforms may be seen.
In 1059, the right of electing the pope was reserved to the principal clergy of Rome and the bishops of the seven suburbicarian
sees. In the 12th century the practice of appointing ecclesiastics from outside Rome as cardinals began, with each
of them assigned a church in Rome as his titular church or linked with one of the suburbicarian dioceses, while still
being incardinated in a diocese other than that of Rome. The term cardinal at one time applied to
any priest permanently assigned or incardinated to a church, or specifically to the senior priest of an important
church, based on the Latin cardo (hinge), meaning "principal" or "chief". The term was applied in this sense as early
as the ninth century to the priests of the tituli (parishes) of the diocese of Rome. The Church of England retains
an instance of this origin of the title, which is held by the two senior members of the College of Minor Canons of
St Paul's Cathedral. There is disagreement about the origin of the term, but general consensus that "cardinalis"
from the word cardo (meaning 'pivot' or 'hinge') was first used in late antiquity to designate a bishop or priest
who was incorporated into a church for which he had not originally been ordained. In Rome the first persons to be
called cardinals were the deacons of the seven regions of the city at the beginning of the 6th century, when the
word began to mean “principal,” “eminent,” or "superior." The name was also given to the senior priest in each of
the "title" churches (the parish churches) of Rome and to the bishops of the seven sees surrounding the city. By
the 8th century the Roman cardinals constituted a privileged class among the Roman clergy. They took part in the
administration of the church of Rome and in the papal liturgy. By decree of a synod of 769, only a cardinal was eligible
to become pope. In 1059, during the pontificate of Nicholas II, cardinals were given the right to elect the pope
under the Papal Bull In nomine Domini. For a time this power was assigned exclusively to the cardinal bishops, but
the Third Lateran Council in 1179 gave back the right to the whole body of cardinals. Cardinals were granted the
privilege of wearing the red hat by Pope Innocent IV in 1244. In cities other than Rome, the name cardinal began
to be applied to certain church men as a mark of honour. The earliest example of this occurs in a letter sent by
Pope Zacharias in 747 to Pippin III (the Short), ruler of the Franks, in which Zacharias applied the title to the
priests of Paris to distinguish them from country clergy. This meaning of the word spread rapidly, and from the 9th
century various episcopal cities had a special class among the clergy known as cardinals. The use of the title was
reserved for the cardinals of Rome in 1567 by Pius V. In the year 1563 the influential Ecumenical Council of Trent,
headed by Pope Pius IV, wrote about the importance of selecting good Cardinals. According to this historic council
"nothing is more necessary to the Church of God than that the holy Roman pontiff apply that solicitude which by the
duty of his office he owes the universal Church in a very special way by associating with himself as cardinals the
most select persons only, and appoint to each church most eminently upright and competent shepherds; and this the
more so, because our Lord Jesus Christ will require at his hands the blood of the sheep of Christ that perish through
the evil government of shepherds who are negligent and forgetful of their office." The earlier influence of temporal
rulers, notably the French kings, reasserted itself through the influence of cardinals of certain nationalities or
politically significant movements. Traditions even developed entitling certain monarchs, including those of Austria,
Spain, and Portugal, to nominate one of their trusted clerical subjects to be created cardinal, a so-called crown-cardinal. In early modern times, cardinals often had important roles in secular affairs. In some cases, they took on
powerful positions in government. In Henry VIII's England, his chief minister was Cardinal Wolsey. Cardinal Richelieu's
power was so great that he was for many years effectively the ruler of France. Richelieu's successor was also a cardinal, Jules Mazarin. Guillaume Dubois and André-Hercule de Fleury complete the list of the "four great" cardinals to have ruled France. In Portugal, due to a succession crisis, one cardinal, Henry, King of Portugal, was
crowned king, the only example of a cardinal-king. Pope Sixtus V limited the number of cardinals to 70, comprising
six cardinal bishops, 50 cardinal priests, and 14 cardinal deacons. Starting in the pontificate of Pope John XXIII,
that limit has been exceeded. At the start of 1971, Pope Paul VI set the number of cardinal electors at a maximum
of 120, but set no limit on the number of cardinals generally. He also established a maximum age of eighty years
for electors. His action deprived twenty-five living cardinals, including the three living cardinals elevated by
Pope Pius XI, of the right to participate in a conclave. Popes can dispense from church laws and
have sometimes brought the number of cardinals under the age of 80 to more than 120. Pope Paul VI also increased
the number of cardinal bishops by giving that rank to patriarchs of the Eastern Catholic Churches. Each cardinal
takes on a titular church, either a church in the city of Rome or one of the suburbicarian sees. The only exception
is for patriarchs of Eastern Catholic Churches. Nevertheless, cardinals possess no power of governance nor are they
to intervene in any way in matters which pertain to the administration of goods, discipline, or the service of their
titular churches. They are allowed to celebrate Mass and hear confessions and lead visits and pilgrimages to their
titular churches, in coordination with the staff of the church. They often support their churches monetarily, and
many Cardinals do keep in contact with the pastoral staffs of their titular churches. The Dean of the College of
Cardinals in addition to such a titular church also receives the titular bishopric of Ostia, the primary suburbicarian
see. Cardinals governing a particular Church retain that church. In 1630, Pope Urban VIII decreed their title to
be Eminence (previously, it had been "illustrissimo" and "reverendissimo") and decreed that their secular rank would
equate to that of a Prince, making them second only to the Pope and crowned monarchs. In accordance with tradition, they
sign by placing the title "Cardinal" (abbreviated Card.) after their personal name and before their surname as, for
instance, "John Card(inal) Doe" or, in Latin, "Ioannes Card(inalis) Cognomen". Some writers, such as James-Charles
Noonan, hold that, in the case of cardinals, the form used for signatures should be used also when referring to them
in English. Official sources such as the Archdiocese of Milwaukee and Catholic News Service say that the correct
form for referring to a cardinal in English is normally as "Cardinal [First name] [Surname]". This is the rule given
also in stylebooks not associated with the Catholic Church. This style is also generally followed on the websites
of the Holy See and episcopal conferences. Oriental Patriarchs who are created Cardinals customarily use "Sanctae
Ecclesiae Cardinalis" as their full title, probably because they do not belong to the Roman clergy. In Latin, on
the other hand, the [First name] Cardinal [Surname] order is used in the proclamation of the election of a new pope
by the cardinal protodeacon: "Annuntio vobis gaudium magnum; habemus Papam: Eminentissimum ac Reverendissimum Dominum,
Dominum (first name) Sanctae Romanae Ecclesiae Cardinalem (last name), ..." (Meaning: "I announce to you a great
joy; we have a Pope: The Most Eminent and Most Reverend Lord, Lord (first name) Cardinal of the Holy Roman Church
(last name), ...") This assumes that the new pope had been a cardinal just before becoming pope; the most recent
election of a non-cardinal as pope was in 1378. While the incumbents of some sees are regularly made cardinals, and
some countries are entitled to at least one cardinal by concordat (usually earning their primate the cardinal's hat),
no see carries an actual right to the cardinalate, not even if its bishop is a Patriarch. Cardinal bishops (cardinals
of the episcopal order) are among the most senior prelates of the Catholic Church. Though in modern times most cardinals
are also bishops, the term "cardinal bishop" only refers to the cardinals who are titular bishops of one of the "suburbicarian"
sees. In early times, the privilege of papal election was not reserved to the cardinals, and for centuries the person
elected was customarily a Roman priest and never a bishop from elsewhere. To preserve apostolic succession the rite
of consecrating him a bishop had to be performed by someone who was already a bishop. The rule remains that, if the
person elected Pope is not yet a bishop, he is consecrated by the Dean of the College of Cardinals, the Cardinal
Bishop of Ostia. There are seven suburbicarian sees: Ostia, Albano, Porto and Santa Rufina, Palestrina, Sabina and
Mentana, Frascati and Velletri. Velletri was united with Ostia from 1150 until 1914, when Pope Pius X separated them
again, but decreed that whatever cardinal bishop became Dean of the College of Cardinals would keep the suburbicarian
see he already held, adding to it that of Ostia, with the result that there continued to be only six cardinal bishops.
Since 1962, the cardinal bishops have only a titular relationship with the suburbicarian sees, with no powers of
governance over them. Each see has its own bishop, with the exception of Ostia, in which the Cardinal Vicar of the
see of Rome is apostolic administrator. A cardinal (Latin: sanctae romanae ecclesiae cardinalis, literally cardinal
of the Holy Roman Church) is a senior ecclesiastical leader, an ecclesiastical prince, and usually (now always for
those created when still within the voting age-range) an ordained bishop of the Roman Catholic Church. The cardinals
of the Church are collectively known as the College of Cardinals. The duties of the cardinals include attending the
meetings of the College and making themselves available individually or in groups to the Pope as requested. Most
have additional duties, such as leading a diocese or archdiocese or managing a department of the Roman Curia. A cardinal's
primary duty is electing the pope when the see becomes vacant. During the sede vacante (the period between a pope's
death or resignation and the election of his successor), the day-to-day governance of the Holy See is in the hands
of the College of Cardinals. The right to enter the conclave of cardinals where the pope is elected is limited to
those who have not reached the age of 80 years by the day the vacancy occurs. Cardinals have in canon law a "privilege
of forum" (i.e., exemption from being judged by ecclesiastical tribunals of ordinary rank): only the pope is competent
to judge them in matters subject to ecclesiastical jurisdiction (cases that refer to matters that are spiritual or
linked with the spiritual, or with regard to infringement of ecclesiastical laws and whatever contains an element
of sin, where culpability must be determined and the appropriate ecclesiastical penalty imposed). The pope either
decides the case himself or delegates the decision to a tribunal, usually one of the tribunals or congregations of
the Roman Curia. Without such delegation, no ecclesiastical court, even the Roman Rota, is competent to judge a canon
law case against a cardinal. Cardinals are, however, subject to the civil and criminal law like everybody else. To
symbolize their bond with the papacy, the pope gives each newly appointed cardinal a gold ring, which is traditionally
kissed by Catholics when greeting a cardinal (as with a bishop's episcopal ring). The pope chooses the image on the
outside: under Pope Benedict XVI it was a modern depiction of the crucifixion of Jesus, with Mary and John to each
side. The ring includes the pope's coat of arms on the inside.[citation needed] In previous times, at the consistory
at which the pope named a new cardinal, he would bestow upon him a distinctive wide-brimmed hat called a galero.
This custom was discontinued in 1969 and the investiture now takes place with the scarlet biretta. In ecclesiastical
heraldry, however, the scarlet galero is still displayed on the cardinal's coat of arms. Cardinals had the right
to display the galero in their cathedral, and when a cardinal died, it would be suspended from the ceiling above
his tomb. Some cardinals will still have a galero made, even though it is not officially part of their apparel.[citation
needed] Eastern Catholic cardinals continue to wear the normal dress appropriate to their liturgical tradition, though
some may line their cassocks with scarlet and wear scarlet fascias, or in some cases, wear Eastern-style cassocks
entirely of scarlet. When in choir dress, a Latin-rite cardinal wears scarlet garments — the blood-like red symbolizes
a cardinal's willingness to die for his faith. Excluding the rochet — which is always white — the scarlet garments
include the cassock, mozzetta, and biretta (over the usual scarlet zucchetto). The biretta of a cardinal is distinctive
not merely for its scarlet color, but also for the fact that it does not have a pompon or tassel on the top as do
the birettas of other prelates. Until the 1460s, it was customary for cardinals to wear a violet or blue cape unless
granted the privilege of wearing red when acting on papal business. A cardinal's normal-wear cassock is black but has scarlet piping and a scarlet fascia (sash). Occasionally, a cardinal wears a scarlet ferraiolo, which is a cape worn over
the shoulders, tied at the neck in a bow by narrow strips of cloth in the front, without any 'trim' or piping on
it. It is because of the scarlet color of cardinals' vesture that the bird of the same name has become known as such.[citation
needed] If conditions change, so that the pope judges it safe to make the appointment public, he may do so at any
time. The cardinal in question then ranks in precedence with those raised to the cardinalate at the time of his in
pectore appointment. If a pope dies before revealing the identity of an in pectore cardinal, the cardinalate expires.
During the Western Schism, many cardinals were created by the contending popes. Beginning with the reign of Pope
Martin V, cardinals were created without publishing their names until later, termed creati et reservati in pectore.
At various times, there have been cardinals who had only received first tonsure and minor orders but not yet been
ordained as deacons or priests. Though clerics, they were inaccurately called "lay cardinals" and were permitted
to marry. Teodolfo Mertel was among the last of the lay cardinals. When he died in 1899 he was the last surviving
cardinal who was not at least ordained a priest. With the revision of the Code of Canon Law promulgated in 1917 by
Pope Benedict XV, only those who are already priests or bishops may be appointed cardinals. Since the time of Pope
John XXIII a priest who is appointed a cardinal must be consecrated a bishop, unless he obtains a dispensation. A
cardinal who is not a bishop is still entitled to wear and use the episcopal vestments and other pontificalia (episcopal
regalia: mitre, crozier, zucchetto, pectoral cross and ring). Even if not a bishop, any cardinal has both actual
and honorary precedence over non-cardinal patriarchs, as well as the archbishops and bishops who are not cardinals,
but he cannot perform the functions reserved solely to bishops, such as ordination. The prominent priests who since
1962 were not ordained bishops on their elevation to the cardinalate were over the age of 80 or near to it, and so
no cardinal who was not a bishop has participated in recent papal conclaves. Until 1917, it was possible for someone
who was not a priest, but only in minor orders, to become a cardinal (see "lay cardinals", below), but they were
enrolled only in the order of cardinal deacons. For example, in the 16th century, Reginald Pole was a cardinal for
18 years before he was ordained a priest. In 1917 it was established that all cardinals, even cardinal deacons, had
to be priests, and, in 1962, Pope John XXIII set the norm that all cardinals be ordained as bishops, even if they
are only priests at the time of appointment. As a consequence of these two changes, canon 351 of the 1983 Code of
Canon Law requires that a cardinal be at least in the order of priesthood at his appointment, and that those who
are not already bishops must receive episcopal consecration. Several cardinals aged over 80 or close to it when appointed
have obtained dispensation from the rule of having to be a bishop. These were all appointed cardinal-deacons, but
one of them, Roberto Tucci, lived long enough to exercise the right of option and be promoted to the rank of cardinal-priest.
The Cardinal Camerlengo of the Holy Roman Church, assisted by the Vice-Camerlengo and the other prelates of the office
known as the Apostolic Camera, has functions that in essence are limited to a period of sede vacante of the papacy.
He is to collate information about the financial situation of all administrations dependent on the Holy See and present
the results to the College of Cardinals, as they gather for the papal conclave. The cardinal protodeacon, the senior
cardinal deacon in order of appointment to the College of Cardinals, has the privilege of announcing a new pope's
election and name (once he has been ordained to the Episcopate) from the central balcony at the Basilica of Saint
Peter in Vatican City State. In the past, during papal coronations, the proto-deacon also had the honor of bestowing
the pallium on the new pope and crowning him with the papal tiara. However, in 1978 Pope John Paul I chose not to
be crowned and opted for a simpler papal inauguration ceremony, and his three successors followed that example. As
a result, the Cardinal protodeacon's privilege of crowning a new pope has effectively ceased although it could be
revived if a future Pope were to restore a coronation ceremony. However, the proto-deacon still has the privilege
of bestowing the pallium on a new pope at his papal inauguration. “Acting in the place of the Roman Pontiff, he also
confers the pallium upon metropolitan bishops or gives the pallium to their proxies.” The current cardinal proto-deacon
is Renato Raffaele Martino. When not celebrating Mass but still serving a liturgical function, such as the semiannual
Urbi et Orbi papal blessing, some Papal Masses and some events at Ecumenical Councils, cardinal deacons can be recognized
by the dalmatics they don with the simple white mitre (the so-called mitra simplex). As of 2005, there were over
50 churches recognized as cardinalatial deaconries, though there were only 30 cardinals of the order of deacons.
Cardinal deacons have long enjoyed the right to "opt for the order of cardinal priests" (optazione) after they have
been cardinal deacons for 10 years. They may on such elevation take a vacant "title" (a church allotted to a cardinal
priest as the church in Rome with which he is associated) or their diaconal church may be temporarily elevated to
a cardinal priest's "title" for that occasion. When elevated to cardinal priests, they take their precedence according
to the day they were first made cardinal deacons (thus ranking above cardinal priests who were elevated to the college
after them, regardless of order). Cardinals elevated to the diaconal order are mainly officials of the Roman Curia
holding various posts in the church administration. Their number and influence have varied through the years. While historically predominantly Italian, the group has become much more internationally diverse in recent years: in 1939 about half were Italian, but by 1994 the proportion had fallen to one third. Their influence in the election of the Pope has been considered important; they are better informed and connected than cardinals located elsewhere, but their level of unity has varied. Under the 1587 decree of Pope Sixtus V, which fixed the maximum size of the College
of Cardinals, there were 14 cardinal deacons. Later the number increased. As late as 1939 almost half of the cardinals
were members of the curia. Pius XII reduced this to 24 percent; John XXIII brought it back up to 37 percent, but Paul VI brought it down to 27 percent, a ratio John Paul II maintained. Cardinal deacons derive originally
from the seven deacons in the Papal Household and the seven deacons who supervised the Church's works in the districts
of Rome during the early Middle Ages, when church administration was effectively the government of Rome and provided
all social services. Cardinal deacons are given title to one of these deaconries. The cardinal deacons are the lowest-ranking
cardinals. Cardinals elevated to the diaconal order are either officials of the Roman Curia or priests elevated after
their 80th birthday. Bishops with diocesan responsibilities, however, are created cardinal priests. The cardinal
who is the longest-serving member of the order of cardinal priests is titled cardinal protopriest. He had certain
ceremonial duties in the conclave that have effectively ceased because he would generally have already reached age
80, at which cardinals are barred from the conclave. The current cardinal protopriest is Paulo Evaristo Arns of Brazil.
The Dean of the College of Cardinals, or Cardinal-dean, is the primus inter pares of the College of Cardinals, elected
by the cardinal bishops holding suburbicarian sees from among their own number, an election, however, that must be
approved by the Pope. Formerly the position of dean belonged by right to the longest-serving of the cardinal bishops.
In 1965, Pope Paul VI decreed in his motu proprio Ad Purpuratorum Patrum that patriarchs of the Eastern Catholic
Churches who were named cardinals (i.e., patriarch cardinals) would also be part of the episcopal order, ranking
after the six cardinal bishops of the suburbicarian sees (who had been relieved of direct responsibilities for those
sees by Pope John XXIII three years earlier). Patriarch cardinals do not receive title of a suburbicarian see, and
as such they cannot elect the dean or become dean. There are currently three Eastern patriarchs who are cardinal bishops. Cardinal priests are the most numerous of the three orders of cardinals in the Catholic Church, ranking
above the cardinal deacons and below the cardinal bishops. Those who are named cardinal priests today are generally
bishops of important dioceses throughout the world, though some hold Curial positions. In modern times, the name
"cardinal priest" is interpreted as meaning a cardinal who is of the order of priests. Originally, however, this
referred to certain key priests of important churches of the Diocese of Rome, who were recognized as the cardinal
priests, the important priests chosen by the pope to advise him in his duties as Bishop of Rome (the Latin cardo
means "hinge"). Certain clerics in many dioceses at the time, not just that of Rome, were said to be the key personnel
— the term gradually became exclusive to Rome to indicate those entrusted with electing the bishop of Rome, the pope.
While the cardinalate has long been expanded beyond the Roman pastoral clergy and Roman Curia, every cardinal priest
has a titular church in Rome, though they may be bishops or archbishops elsewhere, just as cardinal bishops are given
one of the suburbicarian dioceses around Rome. Pope Paul VI abolished all administrative rights cardinals had with
regard to their titular churches, though the cardinal's name and coat of arms are still posted in the church, and
they are expected to celebrate Mass and preach there if convenient when they are in Rome. While the number of cardinals
was small from the times of the Roman Empire to the Renaissance, and frequently smaller than the number of recognized
churches entitled to a cardinal priest, in the 16th century the College expanded markedly. In 1587, Pope Sixtus V
sought to arrest this growth by fixing the maximum size of the College at 70, including 50 cardinal priests, about
twice the historical number. This limit was respected until 1958, and the list of titular churches was modified only
on rare occasions, generally when a building fell into disrepair. When Pope John XXIII abolished the limit, he began
to add new churches to the list, which Popes Paul VI and John Paul II continued to do. Today there are close to 150
titular churches, out of over 300 churches in Rome. For a period ending in the mid-20th century, long-serving cardinal
priests were entitled to fill vacancies that arose among the cardinal bishops, just as cardinal deacons of ten years'
standing are still entitled to become cardinal priests. Since then, cardinals have been advanced to cardinal bishop
exclusively by papal appointment. A cardinal named in pectore is known only to the pope; not even the cardinal so
named is necessarily aware of his elevation, and in any event cannot function as a cardinal while his appointment
is in pectore. Today, cardinals are named in pectore to protect them or their congregations from reprisals if their
identities were known.
The Iranian languages or Iranic languages form a branch of the Indo-Iranian languages, which in turn are a branch of the
Indo-European language family. The speakers of Iranian languages are known as Iranian peoples. Historical Iranian
languages are grouped in three stages: Old Iranian (until 400 BCE), Middle Iranian (400 BCE – 900 CE), and New Iranian
(since 900 CE). Of the Old Iranian languages, the better understood and recorded ones are Old Persian (a language
of Achaemenid Iran) and Avestan (the language of the Avesta). Middle Iranian languages included Middle Persian (a
language of Sassanid Iran), Parthian, and Bactrian. As of 2008, there were an estimated 150–200 million native speakers
of Iranian languages. Ethnologue estimates there are 86 Iranian languages, the largest amongst them being Persian,
Pashto, Kurdish, and Balochi. The term Iranian is applied to any language which descends from the ancestral Proto-Iranian
language. The name Iranian derives from Arya, a word of Persian and Sanskrit origin. The use of the term for the Iranian language
family was introduced in 1836 by Christian Lassen. Robert Needham Cust used the term Irano-Aryan in 1878, and Orientalists
such as George Abraham Grierson and Max Müller contrasted Irano-Aryan (Iranian) and Indo-Aryan (Indic). Some recent
scholarship, primarily in German, has revived this convention. All Iranian languages are descended from a common
ancestor, Proto-Iranian. In turn, and together with Proto-Indo-Aryan and the Nuristani languages, Proto-Iranian descends
from a common ancestor Proto-Indo-Iranian. The Indo-Iranian languages are thought to have originated in Central Asia.
The Andronovo culture is the suggested candidate for the common Indo-Iranian culture ca. 2000 BC. It was situated
precisely in the western part of Central Asia that borders present-day Russia (and present-day Kazakhstan). It was
in relative proximity to the other satem ethno-linguistic groups of the Indo-European family, like Thracian, Balto-Slavic
and others, and to common Indo-European's original homeland (more precisely, the steppes of southern Russia to the
north of the Caucasus), according to the reconstructed linguistic relationships of common Indo-European. Proto-Iranian
thus dates to some time after the Proto-Indo-Iranian break-up, or the early second millennium BCE, as the Old Iranian
languages began to break off and evolve separately as the various Iranian tribes migrated and settled in vast areas
of southeastern Europe, the Iranian plateau, and Central Asia. The multitude of Middle Iranian languages and peoples indicates that great linguistic diversity must have existed among the ancient speakers of Iranian languages. Of that variety of languages/dialects, direct evidence of only two has survived. Old Persian is the Old Iranian
dialect as it was spoken in south-western Iran by the inhabitants of Parsa, who also gave their name to their region
and language. Genuine Old Persian is best attested in one of the three languages of the Behistun inscription, composed
circa 520 BC, and which is the last inscription (and only inscription of significant length) in which Old Persian
is still grammatically correct. Later inscriptions are comparatively brief, and typically simply copies of words
and phrases from earlier ones, often with grammatical errors, which suggests that by the 4th century BC the transition
from Old Persian to Middle Persian was already far advanced, but efforts were still being made to retain an "old"
quality for official proclamations. The other directly attested Old Iranian dialects are the two forms of Avestan,
which take their name from their use in the Avesta, the liturgical texts of indigenous Iranian religion that now
goes by the name of Zoroastrianism but in the Avesta itself is simply known as vohu daena (later: behdin). The language
of the Avesta is subdivided into two dialects, conventionally known as "Old (or 'Gathic') Avestan", and "Younger
Avestan". These terms, which date to the 19th century, are slightly misleading since 'Younger Avestan' is not only
much younger than 'Old Avestan', but also from a different geographic region. The Old Avestan dialect is very archaic,
and at roughly the same stage of development as Rigvedic Sanskrit. On the other hand, Younger Avestan is at about
the same linguistic stage as Old Persian, but by virtue of its use as a sacred language retained its "old" characteristics
long after the Old Iranian languages had yielded to their Middle Iranian stage. Unlike Old Persian, which has Middle
Persian as its known successor, Avestan has no clearly identifiable Middle Iranian stage (the effect of Middle Iranian
is indistinguishable from effects due to other causes). In addition to Old Persian and Avestan, which are the only
directly attested Old Iranian languages, all Middle Iranian languages must have had a predecessor "Old Iranian" form
of that language, and thus can all be said to have had an (at least hypothetical) "Old" form. Such hypothetical Old
Iranian languages include Carduchi (the hypothetical predecessor to Kurdish) and Old Parthian. Additionally, the
existence of unattested languages can sometimes be inferred from the impact they had on neighbouring languages. Such
transfer is known to have occurred for Old Persian, which has (what is called) a "Median" substrate in some of its
vocabulary. Also, foreign references to languages can also provide a hint to the existence of otherwise unattested
languages, for example through toponyms/ethnonyms or in the recording of vocabulary, as Herodotus did for what he
called "Scythian". Conventionally, Iranian languages are grouped in "western" and "eastern" branches. These terms
have little meaning with respect to Old Avestan as that stage of the language may predate the settling of the Iranian
peoples into western and eastern groups. The geographic terms also have little meaning when applied to Younger Avestan
since it is not known where that dialect (or dialects) was spoken either. What is certain is only that Avestan (in all its forms) and Old Persian are distinct; and since Old Persian is "western" and Avestan was not Old Persian, Avestan acquired a default assignment to "eastern". Confusing the issue is the introduction of a western Iranian substrate in later
Avestan compositions and redactions undertaken at the centers of imperial power in western Iran (either in the south-west
in Persia, or in the north-west in Nisa/Parthia and Ecbatana/Media). Two of the earliest dialectal divisions among
Iranian indeed happen to not follow the later division into Western and Eastern blocks. These concern the fate of
the Proto-Indo-Iranian first-series palatal consonants, *ć and *dź. As a common intermediate stage, it is possible to reconstruct depalatalized affricates, *c and *dz. (This coincides with the state of affairs in the neighboring Nuristani languages.) A further complication, however, concerns the consonant clusters *ćw and *dźw. It is possible that other distinct dialect groups were already in existence during this period. Good candidates are the hypothetical ancestor
languages of Alanian/Scytho-Sarmatian subgroup of Scythian in the far northwest; and the hypothetical "Old Parthian"
(the Old Iranian ancestor of Parthian) in the near northwest, where original *dw > *b (paralleling the development
of *ćw). What is known in Iranian linguistic history as the "Middle Iranian" era is thought to begin around the 4th
century BCE and to last through the 9th century CE. Linguistically, the Middle Iranian languages are conventionally classified
into two main groups, Western and Eastern. The Western family includes Parthian (Arsacid Pahlavi) and Middle Persian,
while Bactrian, Sogdian, Khwarezmian, Saka, and Old Ossetic (Scytho-Sarmatian) fall under the Eastern category. The
two languages of the Western group were linguistically very close to each other, but quite distinct from their eastern
counterparts. On the other hand, the Eastern group was an areal entity whose languages retained some similarity to
Avestan. They were inscribed in various Aramaic-derived alphabets which had ultimately evolved from the Achaemenid
Imperial Aramaic script, though Bactrian was written using an adapted Greek script. Middle Persian (Pahlavi) was
the official language under the Sasanian dynasty in Iran. It was in use from the 3rd century CE until the beginning
of the 10th century. The script used for Middle Persian matured significantly during this era. Middle Persian,
Parthian and Sogdian were also used as literary languages by the Manichaeans, whose texts also survive in various
non-Iranian languages, from Latin to Chinese. Manichaean texts were written in a script closely akin to the Syriac
script. Following the Islamic Conquest of Persia (Iran), there were important changes in the role of the different
dialects within the Persian Empire. The old prestige form of Middle Iranian, also known as Pahlavi, was replaced
by a new standard dialect called Dari as the official language of the court. The name Dari comes from the word darbâr
(دربار), which refers to the royal court, where many of the poets, protagonists, and patrons of the literature flourished.
The Saffarid dynasty in particular was the first in a line of many dynasties to officially adopt the new language
in 875 CE. Dari may have been heavily influenced by regional dialects of eastern Iran, whereas the earlier Pahlavi
standard was based more on western dialects. This new prestige dialect became the basis of Standard New Persian.
Medieval Iranian scholars such as Abdullah Ibn al-Muqaffa (8th century) and Ibn al-Nadim (10th century) associated
the term "Dari" with the eastern province of Khorasan, while they used the term "Pahlavi" to describe the dialects
of the northwestern areas between Isfahan and Azerbaijan, and "Pârsi" ("Persian" proper) to describe the Dialects
of Fars. They also noted that the unofficial language of the royalty itself was yet another dialect, "Khuzi", associated
with the western province of Khuzestan. The Islamic conquest also brought with it the adoption of Arabic script for
writing Persian and, much later, Kurdish, Pashto and Balochi. In each case the script was adapted to the language by the addition of a few letters. This development probably occurred some time during the second half of the 8th century, when the
old middle Persian script began dwindling in usage. The Arabic script remains in use in contemporary modern Persian.
Tajik script was first Latinised in the 1920s under the Soviet nationality policy of the time. The script was, however, subsequently
Cyrillicized in the 1930s by the Soviet government. The geographical regions in which Iranian languages were spoken
were pushed back in several areas by newly neighbouring languages. Arabic spread into some parts of Western Iran
(Khuzestan), and Turkic languages spread through much of Central Asia, displacing various Iranian languages such
as Sogdian and Bactrian in parts of what is today Turkmenistan, Uzbekistan and Tajikistan. In Eastern Europe, mostly
comprising the territory of modern-day Ukraine, southern European Russia, and parts of the Balkans, the core region
of the native Scythians, Sarmatians, and Alans had been decisively taken over, through absorption and assimilation (e.g. Slavicisation) by the various Proto-Slavic populations of the region, by the 6th century AD. This
resulted in the displacement and extinction of the once predominant Scythian languages of the region. Sogdian's close
relative Yaghnobi barely survives in a small area of the Zarafshan valley east of Samarkand, and Scythian survives as Ossetic in the Caucasus, the sole remnant of the once predominant Scythian languages of Eastern Europe proper and large parts of the North Caucasus. Various small Iranian languages derived from Eastern Iranian survive in the Pamirs.
Lighting or illumination is the deliberate use of light to achieve a practical or aesthetic effect. Lighting includes the
use of both artificial light sources like lamps and light fixtures, as well as natural illumination by capturing
daylight. Daylighting (using windows, skylights, or light shelves) is sometimes used as the main source of light
during daytime in buildings. This can save energy in place of using artificial lighting, which represents a major
component of energy consumption in buildings. Proper lighting can enhance task performance, improve the appearance
of an area, or have positive psychological effects on occupants. Indoor lighting is usually accomplished using light
fixtures, and is a key part of interior design. Lighting can also be an intrinsic component of landscape projects.
Forms of lighting include alcove lighting, which like most other uplighting is indirect. This is often done with
fluorescent lighting (first available at the 1939 World's Fair) or rope light, occasionally with neon lighting, and
recently with LED strip lighting. It is a form of backlighting. Recessed lighting (often called "pot lights" in Canada,
"can lights" or "high hats" in the US) is popular, with fixtures mounted into the ceiling structure so as to appear
flush with it. These downlights can use narrow beam spotlights, or wider-angle floodlights, both of which are bulbs
having their own reflectors. There are also downlights with internal reflectors designed to accept common 'A' lamps
(light bulbs) which are generally less costly than reflector lamps. Downlights can be incandescent, fluorescent,
HID (high intensity discharge) or LED. With the discovery of fire, the earliest form of artificial lighting used
to illuminate an area were campfires or torches. As early as 400,000 BCE, fire was kindled in the caves of Peking
Man. Prehistoric people used primitive oil lamps to illuminate surroundings. These lamps were made from naturally
occurring materials such as rocks, shells, horns and stones, were filled with grease, and had a fiber wick. Lamps
typically used animal or vegetable fats as fuel. Hundreds of these lamps (hollow worked stones) have been found in
the Lascaux caves in modern-day France, dating to about 15,000 years ago. Oily animals (birds and fish) were also
used as lamps after being threaded with a wick. Fireflies have been used as lighting sources. Candles and glass and
pottery lamps were also invented. Chandeliers were an early form of "light fixture". Major reductions in the cost
of lighting occurred with the discovery of whale oil and kerosene. Gas lighting was economical enough to power street
lights in major cities starting in the early 1800s, and was also used in some commercial buildings and in the homes
of wealthy people. The gas mantle boosted the luminosity of utility lighting and of kerosene lanterns. The next major
drop in price came about with the incandescent light bulb powered by electricity. Over time, electric lighting became
ubiquitous in developed countries. Segmented sleep patterns disappeared, improved nighttime lighting made more activities
possible at night, and more street lights reduced urban crime. Lighting fixtures come in a wide variety of styles
for various functions. The most important functions are as a holder for the light source, to provide directed light
and to avoid visual glare. Some are very plain and functional, while some are pieces of art in themselves. Nearly
any material can be used, so long as it can tolerate the excess heat and is in keeping with safety codes. An important
property of light fixtures is the luminous efficacy or wall-plug efficiency, meaning the amount of usable light emanating
from the fixture per unit of energy consumed, usually measured in lumens per watt. A fixture using replaceable light sources can
also have its efficiency quoted as the percentage of light passed from the "bulb" to the surroundings. The more transparent
the lighting fixture is, the higher its efficacy. Shading the light will normally decrease efficacy but increase the
directionality and the visual comfort probability. Color temperature for white light sources also affects their use
for certain applications. The color temperature of a white light source is the temperature in Kelvin of a theoretical
black body emitter that most closely matches the spectral characteristics of the lamp. An incandescent bulb has a
color temperature around 2800 to 3000 Kelvin; daylight is around 6400 Kelvin. Lower color temperature lamps have
relatively more energy in the yellow and red part of the visible spectrum, while high color temperatures correspond
to lamps with more of a blue-white appearance. For critical inspection or color matching tasks, or for retail displays
of food and clothing, the color temperature of the lamps will be selected for the best overall lighting effect. Lighting
is classified by intended use as general, accent, or task lighting, depending largely on the distribution of the
light produced by the fixture. Track lighting, invented by Lightolier, was popular for a time because
it was much easier to install than recessed lighting, and individual fixtures are decorative and can be easily aimed
at a wall. It has regained some popularity recently in low-voltage tracks, which often look nothing like their predecessors
because they do not have the safety issues that line-voltage systems have, and are therefore less bulky and more
ornamental in themselves. A master transformer feeds all of the fixtures on the track or rod with 12 or 24 volts,
instead of each light fixture having its own line-to-low voltage transformer. There are traditional spots and floods,
as well as other small hanging fixtures. A modified version of this is cable lighting, where lights are hung from
or clipped to bare metal cables under tension. A sconce is a wall-mounted fixture, particularly one that shines up
and sometimes down as well. A torchiere is an uplight intended for ambient lighting. It is typically a floor lamp
but may be wall-mounted like a sconce. The portable or table lamp is probably the most common fixture, found in many
homes and offices. The standard lamp and shade that sits on a table is general lighting, while the desk lamp is considered
task lighting. Magnifier lamps are also task lighting. The illuminated ceiling was once popular in the 1960s and
1970s but fell out of favor after the 1980s. This uses diffuser panels hung like a suspended ceiling below fluorescent
lights, and is considered general lighting. Other forms include neon, which is not usually intended to illuminate
anything else, but to actually be an artwork in itself. This would probably fall under accent lighting, though in
a dark nightclub it could be considered general lighting. In a movie theater, steps in the aisles are usually marked
with a row of small lights for convenience and safety, when the film has started and the other lights are off. Traditionally
made up of small low wattage, low voltage lamps in a track or translucent tube, these are rapidly being replaced
with LED-based versions. Street lights are used to light roadways and walkways at night. Some manufacturers are designing
LED and photovoltaic luminaires to provide an energy-efficient alternative to traditional street light fixtures.
Floodlights can be used to illuminate outdoor playing fields or work zones during nighttime hours. The most common
type of floodlights are metal halide and high pressure sodium lights. Sometimes security lighting can be used along
roadways in urban areas, or behind homes or commercial facilities. These are extremely bright lights used to deter
crime. Security lights may include floodlights. Entry lights can be used outside to illuminate and signal the entrance
to a property. These lights are installed for safety, security, and for decoration. Vehicles typically include headlamps
and tail lights. Headlamps are white or selective yellow lights placed in the front of the vehicle, designed to illuminate
the upcoming road and to make the vehicle more visible. Many manufacturers are turning to LED headlights as an energy-efficient
alternative to traditional headlamps. Tail and brake lights are red and emit light to the rear so as to reveal the
vehicle's direction of travel to following drivers. White rear-facing reversing lamps indicate that the vehicle's
transmission has been placed in the reverse gear, warning anyone behind the vehicle that it is moving backwards,
or about to do so. Flashing turn signals on the front, side, and rear of the vehicle indicate an intended change
of position or direction. In the late 1950s, some automakers began to use electroluminescent technology to backlight
their cars' speedometers and other gauges or to draw attention to logos or other decorative elements. Commonly called
'light bulbs', lamps are the removable and replaceable part of a light fixture, which converts electrical energy
into electromagnetic radiation. While lamps have traditionally been rated and marketed primarily in terms of their
power consumption, expressed in watts, proliferation of lighting technology beyond the incandescent light bulb has
eliminated the correspondence of wattage to the amount of light produced. For example, a 60 W incandescent light
bulb produces about the same amount of light as a 13 W compact fluorescent lamp. Each of these technologies has a
different efficacy in converting electrical energy to visible light. Visible light output is typically measured in
lumens. This unit only quantifies the visible radiation, and excludes invisible infrared and ultraviolet light. A
wax candle produces on the close order of 13 lumens, a 60 watt incandescent lamp makes around 700 lumens, and a 15-watt
compact fluorescent lamp produces about 800 lumens, but actual output varies by specific design. Rating and marketing
emphasis is shifting away from wattage and towards lumen output, to give the purchaser a directly applicable basis
upon which to select a lamp. Lighting design as it applies to the built environment is known as 'architectural lighting
design'. Lighting of structures considers aesthetic elements as well as practical considerations of quantity of light
required, occupants of the structure, energy efficiency and cost. Artificial lighting design takes into account the amount
of daylight received in an internal space, using a daylight factor calculation. For simple installations, hand calculations
based on tabular data are used to provide an acceptable lighting design. More critical or optimized designs now routinely
use mathematical modeling on a computer, using software such as Radiance, which allows an architect to quickly undertake
complex calculations to review the benefit of a particular design. In some design instances, materials used on walls
and furniture play a key role in the lighting effect: for example, dark paint tends to absorb light, making the room
appear smaller and dimmer than it is, whereas light paint does the opposite. In addition to paint, reflective surfaces
also affect lighting design. Photometric
studies (also sometimes referred to as "layouts" or "point by points") are often used to simulate lighting designs
for projects before they are built or renovated. This enables architects, lighting designers, and engineers to determine
whether a proposed lighting setup will deliver the amount of light intended. They will also be able to determine
the contrast ratio between light and dark areas. In many cases these studies are referenced against IESNA or CIBSE
recommended lighting practices for the type of application. Depending on the type of area, different design aspects
may be emphasized for safety or practicality (e.g. maintaining uniform light levels, avoiding glare, or highlighting
certain areas). Specialized software is often used to create these studies, typically combining two-dimensional
digital CAD drawings with lighting calculation software (e.g. AGi32 or DIALux). Lighting illuminates the performers
and artists in a live theatre, dance, or musical performance, and is selected and arranged to create dramatic effects.
Stage lighting uses general illumination technology in devices configured for easy adjustment of their output characteristics.
The setup of stage lighting is tailored for each scene of each production. Dimmers, colored filters, reflectors,
lenses, motorized or manually aimed lamps, and different kinds of flood and spot lights are among the tools used
by a stage lighting designer to produce the desired effects. A set of lighting cues are prepared so that the lighting
operator can control the lights in step with the performance; complex theatre lighting systems use computer control
of lighting instruments. Motion picture and television production use many of the same tools and methods of stage
lighting. Especially in the early days of these industries, very high light levels were required and heat produced
by lighting equipment presented substantial challenges. Modern cameras require less light, and modern light sources
emit less heat. Measurement of light or photometry is generally concerned with the amount of useful light falling
on a surface and the amount of light emerging from a lamp or other source, along with the colors that can be rendered
by this light. The human eye responds differently to light from different parts of the visible spectrum, therefore
photometric measurements must take the luminosity function into account when measuring the amount of useful light.
The basic SI unit of measurement is the candela (cd), which describes luminous intensity; all other photometric
units are derived from the candela. Luminance, for instance, is a measure of the density of luminous intensity in a
given direction. It describes the amount of light that passes through or is emitted from a particular area, and falls
within a given solid angle. The SI unit for luminance is candela per square metre (cd/m2). The CGS unit of luminance
is the stilb, which is equal to one candela per square centimetre or 10 kcd/m2. The amount of useful light emitted
from a source, or the luminous flux, is measured in lumens (lm). The SI unit of illuminance and luminous emittance,
being the luminous power per unit area, is the lux (lx). It is used in photometry as a measure of the intensity, as
perceived by the human eye, of light that hits or passes through a surface. It is analogous to the radiometric unit
watts per square metre, but with the power at each wavelength weighted according to the luminosity function, a standardized
model of human visual brightness perception. In English, "lux" is used in both singular and plural. Several measurement
methods have been developed to control glare resulting from indoor lighting design. The Unified Glare Rating (UGR),
the Visual Comfort Probability, and the Daylight Glare Index are some of the most well-known methods of measurement.
In addition to these methods, four main factors influence the degree of discomfort glare: the luminance of the
glare source, the solid angle of the glare source, the background luminance, and the position of the glare source
in the field of view must all be taken into account. To define light source color properties, the lighting industry
predominantly relies on two metrics, correlated color temperature (CCT), commonly used as an indication of the apparent
"warmth" or "coolness" of the light emitted by a source, and color rendering index (CRI), an indication of the light source’s
ability to make objects appear natural. For example, in order to meet the expectations for good color rendering in
retail applications, research suggests using the well-established CRI along with another metric called gamut area
index (GAI). GAI represents the relative separation of object colors illuminated by a light source; the greater the
GAI, the greater the apparent saturation or vividness of the object colors. As a result, light sources which balance
both CRI and GAI are generally preferred over ones that have only high CRI or only high GAI. Typical measurements
of light have used a dosimeter. Dosimeters measure an individual's or an object's exposure to something in the environment,
such as light dosimeters and ultraviolet dosimeters. In order to specifically measure the amount of light entering
the eye, a personal circadian light meter called the Daysimeter has been developed. This is the first device created
to accurately measure and characterize light (intensity, spectrum, timing, and duration) entering the eye that affects
the human body's clock.
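The relationships among these photometric units can be made concrete with a short arithmetic sketch. The lamp figures below are the approximate values quoted in this article; the room dimensions are hypothetical, and the assumption of uniform illuminance is an idealisation:

```python
# Luminous flux (in lumens) of typical sources, as quoted above.
candle_lm = 13             # wax candle, ~13 lm
incandescent_60w_lm = 700  # 60 W incandescent, ~700 lm
cfl_15w_lm = 800           # 15 W compact fluorescent, ~800 lm

# Luminous efficacy: usable light per unit electrical power (lm/W).
print(incandescent_60w_lm / 60)  # ~11.7 lm/W
print(cfl_15w_lm / 15)           # ~53.3 lm/W

# Illuminance (lux = lm/m^2): luminous flux spread over a surface.
# Hypothetical 4 m x 5 m room, all flux assumed to reach the floor evenly.
room_area_m2 = 4 * 5
print(cfl_15w_lm / room_area_m2)  # 40.0 lux average illuminance

# Luminance units: 1 stilb = 1 cd/cm^2 = 10,000 cd/m^2 = 10 kcd/m^2.
stilb_in_cd_per_m2 = 1 * 100**2
print(stilb_in_cd_per_m2)  # 10000
```

On these figures the CFL delivers roughly four to five times as much light per watt as the incandescent lamp, which is the same comparison the article draws between a 60 W incandescent bulb and a 13-15 W CFL.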
The small, head-mounted device measures an individual's daily rest and activity patterns, as well as exposure
to short-wavelength light that stimulates the circadian system. The device measures activity and light together at
regular time intervals and electronically stores and logs its operating temperature. The Daysimeter can gather data
for up to 30 days for analysis. Specification of illumination requirements is the basic concept of deciding how much
illumination is required for a given task. Clearly, much less light is required to illuminate a hallway compared
to that needed for a word processing work station. Generally speaking, the energy expended is proportional to the
design illumination level. For example, a lighting level of 400 lux might be chosen for a work environment involving
meeting rooms and conferences, whereas a level of 80 lux could be selected for building hallways. If the hallway
standard simply emulates the conference room needs, then much more energy will be consumed than is needed. Unfortunately,
most of the lighting standards even today have been specified by industrial groups who manufacture and sell lighting,
so that a historical commercial bias exists in designing most building lighting, especially for office and industrial
settings. Lighting control systems reduce energy usage and cost by helping to provide light only when and where it
is needed. Lighting control systems typically incorporate the use of time schedules, occupancy control, and photocell
control (i.e. daylight harvesting). Some systems also support demand response and will automatically dim or turn off
lights to take advantage of utility incentives. Lighting control systems are sometimes incorporated into larger building
automation systems. Many newer control systems are using wireless mesh open standards (such as ZigBee), which provide
benefits including easier installation (no need to run control wires) and interoperability with other standards-based
building control systems (e.g. security). Occupancy sensors can control lighting, allowing operation whenever someone
is within the area being scanned. When motion can no longer be detected, the lights shut off. Passive infrared
sensors react to changes in heat, such as the pattern created by a moving person. The control must have an unobstructed
view of the building area being scanned. Doors, partitions, stairways, etc. will block motion detection and reduce
its effectiveness. The best applications for passive infrared occupancy sensors are open spaces with a clear view
of the area being scanned. Ultrasonic sensors transmit sound above the range of human hearing and monitor the time
it takes for the sound waves to return. A break in the pattern caused by any motion in the area triggers the control.
Ultrasonic sensors can see around obstructions and are best for areas with cabinets and shelving, restrooms, and
open areas requiring 360-degree coverage. Some occupancy sensors utilize both passive infrared and ultrasonic technology,
but are usually more expensive. They can be used to control one lamp, one fixture or many fixtures. Daylighting is
the oldest method of interior lighting. Daylighting is simply designing a space to use as much natural light as possible.
This decreases energy consumption and costs, and requires less heating and cooling from the building. Daylighting
has also been proven to have positive effects on patients in hospitals as well as work and school performance. Due
to a lack of information indicating the likely energy savings, daylighting schemes are not yet widely adopted in most
buildings. In recent years, light-emitting diodes (LEDs) have become increasingly efficient, leading to an extraordinary
increase in the use of solid state lighting. In many situations, controlling the light emission of LEDs may be done
most effectively by using the principles of nonimaging optics. Beyond the energy factors being considered, it is
important not to over-design illumination, lest adverse health effects such as headache frequency, stress, and increased
blood pressure be induced by the higher lighting levels. In addition, glare or excess light can decrease worker efficiency.
Analysis of lighting quality particularly emphasizes use of natural lighting, but also considers spectral content
if artificial light is to be used. Not only will greater reliance on natural light reduce energy consumption, but
it will also favorably impact human health and performance. New studies have shown that the performance of students is influenced
by the time and duration of daylight in their regular schedules. Designing school facilities to incorporate the right
types of light at the right time of day for the right duration may improve student performance and well-being. Similarly,
designing lighting systems that maximize the right amount of light at the appropriate time of day for the elderly
may help relieve symptoms of Alzheimer's disease. The human circadian system is entrained to a 24-hour light-dark pattern that mimics the earth's natural light/dark pattern. When those patterns are disrupted, they disrupt the natural circadian cycle. Circadian disruption may lead to numerous health problems including breast cancer, seasonal affective disorder, delayed sleep phase syndrome, and other ailments.
A study documented by Robert Ulrich, conducted between 1972 and 1981, surveyed 23 surgical patients assigned to rooms looking out on a natural scene. The study concluded that patients assigned to rooms with windows allowing lots of natural light had shorter postoperative hospital stays, received fewer negative evaluative comments in nurses' notes, and took fewer potent analgesics than 23 matched patients in similar rooms with windows facing a brick wall. This study suggests that the natural scenery and daylight exposure were indeed healthier for patients than the little light admitted by windows facing a brick wall. Beyond increased work performance, proper usage of windows and daylighting crosses the boundaries between pure aesthetics and overall health.
Alison Jing Xu, assistant professor of management at the University of Toronto Scarborough, and Aparna Labroo of Northwestern University conducted a series of studies analyzing the correlation between lighting and human emotion. The researchers asked participants to rate a number of things such as: the spiciness of chicken-wing sauce, the aggressiveness of a fictional character, how attractive someone was, their feelings about specific words, and the taste of two juices, all under different lighting conditions. In their study, they found that both positive and negative human emotions are felt more intensely in bright light. Professor Xu stated, "we found that on sunny days depression-prone people actually become more depressed." They also found that dim light leads people to make more rational decisions and settle negotiations more easily. In the dark, emotions are slightly suppressed; in bright light, they are intensified.
In 1849, Dr. Abraham Gesner, a Canadian geologist, devised a method by which kerosene could be distilled from petroleum. Earlier coal-gas methods had been used for lighting since the 1820s, but they were expensive. Gesner's kerosene was cheap, easy to produce, could be burned in existing lamps, and did not produce an offensive odor
as did most whale oil. It could be stored indefinitely, unlike whale oil, which would eventually spoil. The American
petroleum boom began in the 1850s. By the end of the decade there were 30 kerosene plants operating in the United
States. The cheaper, more efficient fuel began to drive whale oil out of the market. John D. Rockefeller was most
responsible for the commercial success of kerosene. He set up a network of kerosene distilleries which would later
become Standard Oil, thus completely abolishing the need for whale-oil lamps. These types of lamps may catch fire
or emit carbon monoxide, and are sometimes odorous, making them problematic for people with asthma. Compact fluorescent
lamps (CFLs) use less power to supply the same amount of light as an incandescent lamp; however, they contain
mercury, which is a disposal hazard. Because CFLs can reduce electric consumption, many organizations have undertaken
measures to encourage the adoption of CFLs. Some electric utilities and local governments have subsidized CFLs or
provided them free to customers as a means of reducing electric demand. For a given light output, CFLs use between
one fifth and one quarter of the power of an equivalent incandescent lamp. One of the simplest and quickest ways
for a household or business to become more energy efficient is to adopt CFLs as the main lamp source, as suggested
by the Alliance for Climate Protection. Unlike incandescent lamps, CFLs need a little time to 'warm up' and reach full brightness. Care should be taken when selecting CFLs, because not all of them are suitable for dimming. LED lamps have been advocated as the newest and best environmental
lighting method. According to the Energy Saving Trust, LED lamps use only 10% of the power of a standard incandescent
bulb, whereas compact fluorescent lamps use 20% and energy-saving halogen lamps 70%. The lifetime is also much longer
— up to 50,000 hours. A downside is still the initial cost, which is higher than that of compact fluorescent lamps.
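The Energy Saving Trust percentages above translate into a simple power comparison. This is a rough sketch: the 60 W incandescent baseline and the ~50,000-hour LED lifetime are the figures quoted in this article, the percentages are treated as exact, and the lifetime-energy comparison ignores the fact that an incandescent bulb would need replacing many times over that period:

```python
# Approximate power draw for light output equivalent to a 60 W
# incandescent bulb, using the percentages quoted above.
incandescent_w = 60
fractions = {
    "energy-saving halogen": 0.70,
    "compact fluorescent":   0.20,
    "LED":                   0.10,
}

for tech, frac in fractions.items():
    print(f"{tech}: ~{incandescent_w * frac:.0f} W")
# energy-saving halogen: ~42 W
# compact fluorescent: ~12 W
# LED: ~6 W

# Energy used over the ~50,000 h LED lifetime quoted above (in kWh),
# ignoring the incandescent bulb's much shorter service life.
led_kwh = incandescent_w * 0.10 * 50_000 / 1000
incandescent_kwh = incandescent_w * 50_000 / 1000
print(round(led_kwh), round(incandescent_kwh))  # ~300 vs 3000 kWh
```

Even with the higher purchase price noted above, the roughly tenfold reduction in energy over the lamp's lifetime is what drives the economic case for LEDs.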
Light pollution is a growing problem resulting from excess light given off by numerous signs, houses, and buildings.
Polluting light is often wasted light involving unnecessary energy costs and carbon dioxide emissions. Light pollution
is described as artificial light that is excessive or intrudes where it is not wanted. Well-designed lighting sends
light only where it is needed without scattering it elsewhere. Poorly designed lighting can also compromise safety.
For example, glare creates safety issues around buildings by causing very sharp shadows, temporarily blinding passersby
making them vulnerable to would-be assailants. From a military standpoint, lighting is a critical part of the battlefield
conditions. Shadows are good places to hide, while bright areas are more exposed. It is often beneficial to fight
with the Sun or other light source behind you, giving your enemy disturbing visual glare and partially hiding your
own movements in backlight. If natural light is not present, searchlights and flares can be used. However, the use
of light may disclose your own hidden position, and modern warfare has seen increased use of night vision through
the use of infrared cameras and image intensifiers. Flares can also be used by the military to mark positions, usually
for targeting, but laser-guided and GPS weapons have eliminated this need for the most part. The International Commission
on Illumination (CIE) is an international authority and standards-defining organization on color and lighting. It publishes
widely used standard metrics such as the various CIE color spaces and the color rendering index. The Illuminating Engineering
Society of North America (IESNA), in conjunction with organizations like ANSI and ASHRAE, publishes guidelines, standards,
and handbooks that allow categorization of the illumination needs of different built environments. Manufacturers
of lighting equipment publish photometric data for their products, which defines the distribution of light released
by a specific luminaire. This data is typically expressed in standardized form defined by the IESNA. The International
Association of Lighting Designers (IALD) is an organization which focuses on the advancement of lighting design education
and the recognition of independent professional lighting designers. Those fully independent designers who meet the
requirements for professional membership in the association typically append the abbreviation IALD to their name.
The Professional Lighting Designers Association (PLDA), formerly known as ELDA, is an organisation focusing on the
promotion of the profession of Architectural Lighting Design. They publish a monthly newsletter and organise different
events throughout the world. The National Council on Qualifications for the Lighting Professions (NCQLP) offers the
Lighting Certification Examination which tests rudimentary lighting design principles. Individuals who pass this
exam become ‘Lighting Certified’ and may append the abbreviation LC to their name. This certification process is
one of three national (U.S.) examinations (the others are CLEP and CLMC) in the lighting industry and is open not
only to designers, but to lighting equipment manufacturers, electric utility employees, etc. The Professional Lighting
And Sound Association (PLASA) is a UK-based trade organisation representing the 500+ individual and corporate members
drawn from the technical services sector. Its members include manufacturers and distributors of stage and entertainment
lighting, sound, rigging and similar products and services, and affiliated professionals in the area. They lobby
for and represent the interests of the industry at various levels, interacting with government and regulating bodies
and presenting the case for the entertainment industry. Example subjects of this representation include the ongoing
review of radio frequencies (which may or may not affect the radio bands that wireless microphones and other
devices use) and engaging with the issues surrounding the introduction of the RoHS (Restriction of Hazardous Substances
Directive) regulations.
Separation of powers is a political doctrine originating in Montesquieu's The Spirit of the Laws, in which he
argued for a constitutional government with three separate branches. Each of the three branches would
have defined abilities to check the powers of the other branches. This idea was called separation of powers. This
philosophy heavily influenced the writing of the United States Constitution, according to which the Legislative,
Executive, and Judicial branches of the United States government are kept distinct in order to prevent abuse of power.
This United States form of separation of powers is associated with a system of checks and balances. During the Age
of Enlightenment, philosophers such as John Locke advocated the principle in their writings, whereas others, such
as Thomas Hobbes, strongly opposed it. Montesquieu was one of the foremost supporters of separating the legislature,
the executive, and the judiciary. His writings considerably influenced the opinions of the framers of the United
States Constitution. Strict separation of powers did not operate in the United Kingdom, the political structure of
which served in most instances as a model for the government created by the U.S. Constitution.
Under the UK Westminster system, based on parliamentary sovereignty and responsible government, Parliament
(consisting of the Sovereign (King-in-Parliament), House of Lords and House of Commons) was the supreme lawmaking
authority. The executive branch acted in the name of the King ("His Majesty's Government"), as did the judiciary.
The King's Ministers were in most cases members of one of the two Houses of Parliament, and the Government needed
to sustain the support of a majority in the House of Commons. One minister, the Lord Chancellor, was at the same
time the sole judge in the Court of Chancery and the presiding officer in the House of Lords. Therefore, it may be
seen that the three branches of British government often violated the strict principle of separation of powers, even
though there were many occasions when the different branches of the government disagreed with each other. Some U.S.
states did not observe a strict separation of powers in the 18th century. In New Jersey, the Governor also functioned
as a member of the state's highest court and as the presiding officer of one house of the New Jersey Legislature.
The President of Delaware was a member of the Court of Appeals; the presiding officers of the two houses of the state
legislature also served in the executive department as Vice Presidents. In both Delaware and Pennsylvania, members
of the executive council served at the same time as judges. On the other hand, many southern states explicitly required
separation of powers. Maryland, Virginia, North Carolina and Georgia all kept the branches of government "separate
and distinct." Congress has the sole power to legislate for the United States. Under the nondelegation doctrine,
Congress may not delegate its lawmaking responsibilities to any other agency. In this vein, the Supreme Court held
in the 1998 case Clinton v. City of New York that Congress could not delegate a "line-item veto" to the President,
as such a power would violate the Presentment Clause of the Constitution. Where Congress does not make great and sweeping delegations
of its authority, the Supreme Court has been less stringent. One of the earliest cases involving the exact limits
of non-delegation was Wayman v. Southard, 23 U.S. (10 Wheat.) 1, 42 (1825). Congress had delegated to the courts the
power to prescribe judicial procedure; it was contended that Congress had thereby unconstitutionally clothed the
judiciary with legislative powers. While Chief Justice John Marshall conceded that the determination of rules of
procedure was a legislative function, he distinguished between "important" subjects and mere details. Marshall wrote
that "a general provision may be made, and power given to those who are to act under such general provisions, to
fill up the details." Marshall's words and future court decisions gave Congress much latitude in delegating powers.
It was not until the 1930s that the Supreme Court held a delegation of authority unconstitutional. In a case involving
the creation of the National Recovery Administration, A.L.A. Schechter Poultry Corp. v. United States, 295
U.S. 495 (1935), the Court held that Congress could not authorize the president to formulate codes of "fair competition,"
and that Congress must set some standards governing the actions of executive officers. The Court, however, has deemed
that phrases such as "just and reasonable," "public interest" and "public convenience" suffice. Executive power is
vested, with exceptions and qualifications, in the President. Under Section 2, the president is Commander
in Chief of the Army and Navy, and of the Militia of the several states when called into service; has power to make treaties and
appointments to office "with the Advice and Consent of the Senate"; and receives Ambassadors and Public Ministers.
Under Section 3, he must "take care that the laws be faithfully executed." By using these words, the Constitution does not require
the president to personally enforce the law; rather, officers subordinate to the president may perform such duties.
The Constitution empowers the president to ensure the faithful execution of the laws made by Congress and approved
by the President. Congress may itself terminate such appointments by impeachment, and may restrict the president. Bodies
such as the War Claims Commission, the Interstate Commerce Commission and the Federal Trade Commission—all quasi-judicial—often
have direct Congressional oversight. Congress often writes legislation to restrain executive officials to the performance
of their duties, as laid out by the laws Congress passes. In INS v. Chadha (1983), the Supreme Court decided (a)
The prescription for legislative action in Art. I, § 1—requiring all legislative powers to be vested in a Congress
consisting of a Senate and a House of Representatives—and § 7—requiring every bill passed by the House and Senate,
before becoming law, to be presented to the president, and, if he disapproves, to be repassed by two-thirds of the
Senate and House—represents the Framers' decision that the legislative power of the Federal Government be exercised
in accord with a single, finely wrought and exhaustively considered procedure. This procedure is an integral part
of the constitutional design for the separation of powers. Further rulings clarified the case; even both Houses acting
together cannot override executive vetoes without a two-thirds majority. Legislation may always prescribe regulations governing
executive officers. Judicial power—the power to decide cases and controversies—is vested in the Supreme Court and
inferior courts established by Congress. The judges must be appointed by the president with the advice and consent
of the Senate, hold office during good behavior and receive compensations that may not be diminished during their
continuance in office. If a court's judges do not have such attributes, the court may not exercise the judicial power
of the United States. Courts exercising the judicial power are called "constitutional courts." Congress may establish
"legislative courts," which do not take the form of judicial agencies or commissions, whose members do not have the
same security of tenure or compensation as the constitutional court judges. Legislative courts may not exercise the
judicial power of the United States. In Murray's Lessee v. Hoboken Land & Improvement Co. (1856), the Supreme Court
held that a legislative court may not decide "a suit at the common law, or in equity, or admiralty," as such a suit
is inherently judicial. Legislative courts may only adjudicate "public rights" questions (cases between the government
and an individual and political determinations). The president exercises a check over Congress through his power
to veto bills, but Congress may override any veto (excluding the so-called "pocket veto") by a two-thirds majority
in each house. When the two houses of Congress cannot agree on a date for adjournment, the president may settle the
dispute. Either house or both houses may be called into emergency session by the president. The Vice President serves
as president of the Senate, but he may only vote to break a tie. The president, as noted above, appoints judges with
the Senate's advice and consent. He also has the power to issue pardons and reprieves. Such pardons are not subject
to confirmation by either the House of Representatives or the Senate, or even to acceptance by the recipient. The
president is the civilian Commander in Chief of the Army and Navy of the United States. He has the authority to command
them to take appropriate military action in the event of a sudden crisis. However, only the Congress is explicitly
granted the power to declare war per se, as well as to raise, fund and maintain the armed forces. Congress also has
the duty and authority to prescribe the laws and regulations under which the armed forces operate, such as the Uniform
Code of Military Justice, and requires that all Generals and Admirals appointed by the president be confirmed by
a majority vote of the Senate before they can assume their office. Courts check both the executive branch and the
legislative branch through judicial review. This concept is not written into the Constitution, but was envisioned
by many of the Constitution's Framers (for example, The Federalist Papers mention it). The Supreme Court established
a precedent for judicial review in Marbury v. Madison. There were protests by some at this decision, born chiefly
of political expediency, but political realities in the particular case paradoxically restrained opposing views from
asserting themselves. For this reason, precedent alone established the principle that a court may strike down a law
it deems unconstitutional. A common misperception is that the Supreme Court is the only court that may determine
constitutionality; the power is exercised even by the inferior courts. But only Supreme Court decisions are binding
across the nation. Decisions of a Court of Appeals, for instance, are binding only in the circuit over which the
court has jurisdiction. The power to review the constitutionality of laws may be limited by Congress, which has the
power to set the jurisdiction of the courts. The only constitutional limit on Congress' power to set the jurisdiction
of the judiciary relates to the Supreme Court; the Supreme Court may exercise only appellate jurisdiction except
in cases involving states and cases affecting foreign ambassadors, ministers or consuls. The Chief Justice presides
in the Senate during a president's impeachment trial. The rules of the Senate, however, generally do not grant much
authority to the presiding officer. Thus, the Chief Justice's role in this regard is a limited one. The Constitution
does not explicitly indicate the pre-eminence of any particular branch of government. However, James Madison wrote
in Federalist 51, regarding the ability of each branch to defend itself from actions by the others, that "it is not
possible to give to each department an equal power of self-defense. In republican government, the legislative authority
necessarily predominates." Throughout America's history, dominance of one of the three branches has essentially been
a see-saw struggle between Congress and the president. Both have had periods of great power and weakness, such as
immediately after the Civil War, when Republicans had a majority in Congress and were able to pass major legislation
and override most of the president's vetoes. They also passed acts to essentially make the president subordinate
to Congress, such as the Tenure of Office Act. Johnson's later impeachment also cost the presidency much political
power. However the president has also exercised greater power largely during the 20th century. Both Roosevelts greatly
expanded the powers of the president and wielded great power during their terms. The first six presidents of the
United States did not make extensive use of the veto power: George Washington only vetoed two bills, James Monroe
one, and John Adams, Thomas Jefferson and John Quincy Adams none. James Madison, a firm believer in a strong executive,
vetoed seven bills. None of the first six Presidents, however, used the veto to direct national policy. It was Andrew
Jackson, the seventh President, who was the first to use the veto as a political weapon. During his two terms in
office, he vetoed twelve bills—more than all of his predecessors combined. Furthermore, he defied the Supreme Court
in enforcing the policy of ethnically cleansing Native American tribes ("Indian Removal"); he stated (perhaps apocryphally),
"John Marshall has made his decision. Now let him enforce it!" Some of Jackson's successors made no use of the veto
power, while others used it intermittently. It was only after the Civil War that presidents began to use the power
to truly counterbalance Congress. Andrew Johnson, a Democrat, vetoed several Reconstruction bills passed by the "Radical
Republicans." Congress, however, managed to override fifteen of Johnson's twenty-nine vetoes. Furthermore, it attempted
to curb the power of the presidency by passing the Tenure of Office Act. The Act required Senate approval for the
dismissal of senior Cabinet officials. When Johnson deliberately violated the Act, which he felt was unconstitutional
(Supreme Court decisions later vindicated such a position), the House of Representatives impeached him; he was acquitted
in the Senate by one vote. Johnson's impeachment was perceived to have done great damage to the presidency, which
came to be almost subordinate to Congress. Some believed that the president would become a mere figurehead, with
the Speaker of the House of Representatives becoming a de facto Prime Minister. Grover Cleveland, the first Democratic
President following Johnson, attempted to restore the power of his office. During his first term, he vetoed over
four hundred bills—twice as many bills as his twenty-one predecessors combined. He also began to suspend bureaucrats
who were appointed as a result of the patronage system, replacing them with more "deserving" individuals. The Senate,
however, refused to confirm many new nominations, instead demanding that Cleveland turn over the confidential records
relating to the suspensions. Cleveland steadfastly refused, asserting, "These suspensions are my executive acts ...
I am not responsible to the Senate, and I am unwilling to submit my actions to them for judgment." Cleveland's popular
support forced the Senate to back down and confirm the nominees. Furthermore, Congress finally repealed the controversial
Tenure of Office Act that had been passed during the Johnson Administration. Overall, this meant that Cleveland's
Administration marked the end of presidential subordination. Several twentieth-century presidents have attempted
to greatly expand the power of the presidency. Theodore Roosevelt, for instance, claimed that the president was permitted
to do whatever was not explicitly prohibited by the law—in direct contrast to his immediate successor, William Howard
Taft. Franklin Delano Roosevelt held considerable power during the Great Depression. Congress had granted Franklin
Roosevelt sweeping authority; in Panama Refining v. Ryan, the Court for the first time struck down a Congressional
delegation of power as violative of the doctrine of separation of powers. The aforementioned Schechter Poultry Corp.
v. United States, another separation of powers case, was also decided during Franklin Roosevelt's presidency. In
response to many unfavorable Supreme Court decisions, Roosevelt introduced a "Court Packing" plan, under which more
seats would be added to the Supreme Court for the president to fill. Such a plan (which was defeated in Congress)
would have seriously undermined the judiciary's independence and power. Richard Nixon used national security as a
basis for his expansion of power. He asserted, for example, that "the inherent power of the President to safeguard
the security of the nation" authorized him to order a wiretap without a judge's warrant. Nixon also asserted that
"executive privilege" shielded him from all legislative oversight; furthermore, he impounded federal funds (that
is to say, he refused to spend money that Congress had appropriated for government programs). In the specific cases
aforementioned, however, the Supreme Court ruled against Nixon, partly because of the ongoing criminal investigation
into the Watergate tapes, even though it acknowledged the general need for executive privilege. Since then, Nixon's
successors have sometimes asserted that they may act in the interests of national security or that executive privilege
shields them from Congressional oversight. Though such claims have in general been more limited than Nixon's, one
may still conclude that the presidency's power has been greatly augmented since the eighteenth and nineteenth centuries.
On one side of this debate it is said that separation of powers means that powers are shared among different
branches; no one branch may act unilaterally on issues (other than perhaps minor questions), but must obtain some
form of agreement across branches. That is, it is argued that "checks and balances" apply to the Judicial branch
as well as to the other branches. On the other side, it is said that each branch holds certain powers exclusively.
An example of the first view is the regulation of attorneys and judges, and the
establishment of rules for the conduct of the courts, by the Congress and in the states the legislatures. Although
in practice these matters are delegated to the Supreme Court, the Congress holds these powers and delegates them
to the Supreme Court only for convenience in light of the Supreme Court's expertise, but can withdraw that delegation
at any time. An example of the second view at the State level is found in the view of the Florida Supreme Court,
that only the Florida Supreme Court may license and regulate attorneys appearing before the courts of Florida, and
only the Florida Supreme Court may set rules for procedures in the Florida courts. The State of
New Hampshire also follows this system. One may claim that the judiciary has historically been the
weakest of the three branches. In fact, its power to exercise judicial review—its sole meaningful check on the other
two branches—is not explicitly granted by the U.S. Constitution. The U.S. Supreme Court exercised its power to strike
down congressional acts as unconstitutional only twice prior to the Civil War: in Marbury v. Madison (1803) and Dred
Scott v. Sandford (1857). The Supreme Court has since then made more extensive use of judicial review. Many political
scientists believe that separation of powers is a decisive factor in what they see as a limited degree of American
exceptionalism. In particular, John W. Kingdon made this argument, claiming that separation of powers contributed
to the development of a unique political structure in the United States. He attributes the unusually large number
of interest groups active in the United States, in part, to the separation of powers; it gives groups more places
to try to influence, and creates more potential group activity. He also cites its complexity as one of the reasons
for lower citizen participation.
Architecture (Latin architectura, from the Greek ἀρχιτέκτων arkhitekton "architect", from ἀρχι- "chief" and τέκτων "builder")
is both the process and the product of planning, designing, and constructing buildings and other physical structures.
Architectural works, in the material form of buildings, are often perceived as cultural symbols and as works of art.
Historical civilizations are often identified with their surviving architectural achievements. The earliest surviving
written work on the subject of architecture is De architectura, by the Roman architect Vitruvius in the early 1st
century AD. According to Vitruvius, a good building should satisfy the three principles of firmitas, utilitas, venustas,
commonly known by the original translation – firmness, commodity and delight – or, in modern English, durability,
utility and beauty. According to Vitruvius, the architect should strive to fulfill each of these three attributes as well as possible.
Leon Battista Alberti, who elaborates on the ideas of Vitruvius in his treatise, De Re Aedificatoria, saw beauty
primarily as a matter of proportion, although ornament also played a part. For Alberti, the rules of proportion were
those that governed the idealised human figure, the Golden mean. The most important aspect of beauty was therefore
an inherent part of an object, rather than something applied superficially; and was based on universal, recognisable
truths. The notion of style in the arts was not developed until the 16th century, with the writing of Vasari: by
the 18th century, his Lives of the Most Excellent Painters, Sculptors, and Architects had been translated into
French, Spanish and English. In the early 19th century, Augustus Welby Northmore Pugin wrote Contrasts (1836), which,
as the title suggested, contrasted the modern, industrial world, which he disparaged, with an idealized image of
the neo-medieval world. Gothic architecture, Pugin believed, was the only "true Christian form of architecture." The
19th-century English art critic, John Ruskin, in his Seven Lamps of Architecture, published 1849, was much narrower
in his view of what constituted architecture. Architecture was the "art which so disposes and adorns the edifices
raised by men ... that the sight of them" contributes "to his mental health, power, and pleasure". For Ruskin, the
aesthetic was of overriding significance. His work goes on to state that a building is not truly a work of architecture
unless it is in some way "adorned". For Ruskin, a well-constructed, well-proportioned, functional building needed
string courses or rustication, at the very least. On the difference between the ideals of architecture and mere construction,
the renowned 20th-century architect Le Corbusier wrote: "You employ stone, wood, and concrete, and with these materials
you build houses and palaces: that is construction. Ingenuity is at work. But suddenly you touch my heart, you do
me good. I am happy and I say: This is beautiful. That is Architecture". While the notion that structural and aesthetic
considerations should be entirely subject to functionality was met with both popularity and skepticism, it had the
effect of introducing the concept of "function" in place of Vitruvius' "utility". "Function" came to be seen as encompassing
all criteria of the use, perception and enjoyment of a building, not only practical but also aesthetic, psychological
and cultural. Among the philosophies that have influenced modern architects and their approach to building design
are rationalism, empiricism, structuralism, poststructuralism, and phenomenology. In the late 20th century a new
concept was added to those included in the compass of both structure and function, the consideration of sustainability,
hence sustainable architecture. To satisfy the contemporary ethos a building should be constructed in a manner which
is environmentally friendly in terms of the production of its materials, its impact upon the natural and built environment
of its surrounding area and the demands that it makes upon non-sustainable power sources for heating, cooling, water
and waste management and lighting. Building first evolved out of the dynamics between needs (shelter, security, worship,
etc.) and means (available building materials and attendant skills). As human cultures developed and knowledge began
to be formalized through oral traditions and practices, building became a craft, and "architecture" is the name given
to the most highly formalized and respected versions of that craft. It is widely assumed that architectural success
was the product of a process of trial and error, with progressively less trial and more replication as the results
of the process proved increasingly satisfactory. What is termed vernacular architecture continues to be produced
in many parts of the world. Indeed, vernacular buildings make up most of the built world that people experience every
day. Early human settlements were mostly rural. Due to a surplus in production, the economy began to expand, resulting
in urbanization and thus creating urban areas which grew and evolved very rapidly in some cases, such as those of Çatal
Höyük in Anatolia and Mohenjo Daro of the Indus Valley Civilization in modern-day Pakistan. In many ancient civilizations,
such as those of Egypt and Mesopotamia, architecture and urbanism reflected the constant engagement with the divine
and the supernatural, and many ancient cultures resorted to monumentality in architecture to represent symbolically
the political power of the ruler, the ruling elite, or the state itself. Early Asian writings on architecture include
the Kao Gong Ji of China from the 7th–5th centuries BCE; the Shilpa Shastras of ancient India and Manjusri Vasthu
Vidya Sastra of Sri Lanka. The architecture of different parts of Asia developed along different lines from that
of Europe, with Buddhist, Hindu and Sikh architecture each having different characteristics. Buddhist architecture, in
particular, showed great regional diversity. Hindu temple architecture, which developed around the 3rd century BCE,
is governed by concepts laid down in the Shastras, and is concerned with expressing the macrocosm and the microcosm.
In many Asian countries, pantheistic religion led to architectural forms that were designed specifically to enhance
the natural landscape. Islamic architecture began in the 7th century CE, incorporating architectural forms from the
ancient Middle East and Byzantium, but also developing features to suit the religious and social needs of the society.
Examples can be found throughout the Middle East, North Africa, Spain and the Indian Sub-continent. The widespread
application of the pointed arch was to influence European architecture of the Medieval period. The major architectural
undertakings were the buildings of abbeys and cathedrals. From about 900 CE onwards, the movements of both clerics
and tradesmen carried architectural knowledge across Europe, resulting in the pan-European styles Romanesque and
Gothic. In Renaissance Europe, from about 1400 onwards, there was a revival of Classical learning accompanied by
the development of Renaissance Humanism which placed greater emphasis on the role of the individual in society than
had been the case during the Medieval period. Buildings were ascribed to specific architects – Brunelleschi, Alberti,
Michelangelo, Palladio – and the cult of the individual had begun. There was still no dividing line between artist,
architect and engineer, or any of the related vocations, and the appellation was often one of regional preference.
Architecture has to do with planning and designing form, space and ambience to reflect functional, technical, social,
environmental and aesthetic considerations. It requires the creative manipulation and coordination of materials and
technology, and of light and shadow. Often, conflicting requirements must be resolved. The practice of architecture
also encompasses the pragmatic aspects of realizing buildings and structures, including scheduling, cost estimation
and construction administration. Documentation produced by architects, typically drawings, plans and technical specifications,
defines the structure and/or behavior of a building or other kind of system that is to be or has been constructed.
Nunzia Rondanini stated, "Through its aesthetic dimension architecture goes beyond the functional aspects that it
has in common with other human sciences. Through its own particular way of expressing values, architecture can stimulate
and influence social life without presuming that, in and of itself, it will promote social development. To restrict
the meaning of (architectural) formalism to art for art's sake is not only reactionary; it can also be a purposeless
quest for perfection or originality which degrades form into a mere instrumentality". The architecture and urbanism
of the Classical civilizations such as the Greek and the Roman evolved from civic ideals rather than religious or
empirical ones and new building types emerged. Architectural "style" developed in the form of the Classical orders.
Texts on architecture have been written since ancient times. These texts provided both general advice and specific
formal prescriptions or canons. Some examples of canons are found in the writings of the 1st-century BCE Roman architect
Vitruvius. Some of the most important early examples of canonic architecture are religious. In Europe during the
Medieval period, guilds were formed by craftsmen to organise their trades and written contracts have survived, particularly
in relation to ecclesiastical buildings. The role of architect was usually one with that of master mason, or Magister
lathomorum as they are sometimes described in contemporary documents. A revival of the Classical style in architecture
was accompanied by a burgeoning of science and engineering which affected the proportions and structure of buildings.
At this stage, it was still possible for an artist to design a bridge as the level of structural calculations involved
was within the scope of the generalist. With the emerging knowledge in scientific fields and the rise of new materials
and technology, architecture and engineering began to separate, and the architect began to concentrate on aesthetics
and the humanist aspects, often at the expense of technical aspects of building design. There was also the rise of
the "gentleman architect" who usually dealt with wealthy clients and concentrated predominantly on visual qualities
derived usually from historical prototypes, typified by the many country houses of Great Britain that were created
in the Neo Gothic or Scottish Baronial styles. Formal architectural training in the 19th century, for example at
the École des Beaux-Arts in France, gave much emphasis to the production of beautiful drawings and little to context
and feasibility. Effective architects generally received their training in the offices of other architects, graduating
to the role from draughtsmen or clerks. Meanwhile, the Industrial Revolution laid open the door for mass production
and consumption. Aesthetics became a criterion for the middle class as ornamented products, once within the province
of expensive craftsmanship, became cheaper under machine production. Vernacular architecture became increasingly
ornamental. House builders could use current architectural design in their work by combining features found in pattern
books and architectural journals. Around the beginning of the 20th century, a general dissatisfaction with the emphasis
on revivalist architecture and elaborate decoration gave rise to many new lines of thought that served as precursors
to Modern Architecture. Notable among these is the Deutscher Werkbund, formed in 1907 to produce better-quality
machine-made objects. The rise of the profession of industrial design is usually placed here. Following this lead, the Bauhaus
school, founded in Weimar, Germany in 1919, redefined the bounds of architecture as previously set throughout history, viewing
the creation of a building as the ultimate synthesis—the apex—of art, craft, and technology. When modern architecture
was first practiced, it was an avant-garde movement with moral, philosophical, and aesthetic underpinnings. Immediately
after World War I, pioneering modernist architects sought to develop a completely new style appropriate for a new
post-war social and economic order, focused on meeting the needs of the middle and working classes. They rejected
the architectural practice of the academic refinement of historical styles which served the rapidly declining aristocratic
order. The approach of the Modernist architects was to reduce buildings to pure forms, removing historical references
and ornament in favor of functionalist details. Buildings displayed their functional and structural elements, exposing
steel beams and concrete surfaces instead of hiding them behind decorative forms. Architects such as Frank Lloyd
Wright developed Organic architecture, in which the form was defined by its environment and purpose, with an aim
to promote harmony between human habitation and the natural world with prime examples being Robie House and Fallingwater.
Architects such as Mies van der Rohe, Philip Johnson and Marcel Breuer worked to create beauty based on the inherent
qualities of building materials and modern construction techniques, trading traditional historic forms for simplified
geometric forms, celebrating the new means and methods made possible by the Industrial Revolution, including steel-frame
construction, which gave birth to high-rise superstructures. By mid-century, Modernism had morphed into the International
Style, an aesthetic epitomized in many ways by the Twin Towers of New York's World Trade Center designed by Minoru
Yamasaki. Many architects resisted modernism, finding it devoid of the decorative richness of historical styles.
As the first generation of modernists began to die after WWII, a second generation of architects including Paul Rudolph,
Marcel Breuer, and Eero Saarinen tried to expand the aesthetics of modernism with Brutalism, buildings with expressive
sculptural facades made of unfinished concrete. But an even younger postwar generation critiqued modernism and
Brutalism for being too austere, standardized, monotone, and not taking into account the richness of human experience
offered in historical buildings across time and in different places and cultures. One such reaction to the cold aesthetic
of modernism and Brutalism is the school of metaphoric architecture, which includes such things as biomorphism and
zoomorphic architecture, both using nature as the primary source of inspiration and design. While it is considered
by some to be merely an aspect of postmodernism, others consider it to be a school in its own right and a later development
of expressionist architecture. Beginning in the late 1950s and 1960s, architectural phenomenology emerged as an important
movement in the early reaction against modernism, with architects like Charles Moore in the USA, Christian Norberg-Schulz
in Norway, and Ernesto Nathan Rogers and Vittorio Gregotti in Italy, who collectively popularized an interest in
a new contemporary architecture aimed at expanding human experience using historical buildings as models and precedents.
Postmodernism produced a style that combined contemporary building technology and cheap materials, with the aesthetics
of older pre-modern and non-modern styles, from high classical architecture to popular or vernacular regional building
styles. Robert Venturi famously defined postmodern architecture as a "decorated shed" (an ordinary building which
is functionally designed inside and embellished on the outside), and upheld it against modernist and brutalist "ducks"
(buildings with unnecessarily expressive tectonic forms). Since the 1980s, as the complexity of buildings began to
increase (in terms of structural systems, services, energy and technologies), the field of architecture became multi-disciplinary
with specializations for each project type, technological expertise or project delivery methods. In addition, there
has been an increased separation of the 'design' architect from the 'project' architect who ensures that the project meets the required standards and deals with matters of liability. The preparatory processes
for the design of any large building have become increasingly complicated, and require preliminary studies of such
matters as durability, sustainability, quality, money, and compliance with local laws. A large structure can no longer
be the design of one person but must be the work of many. Modernism and Postmodernism have been criticised by some
members of the architectural profession who feel that successful architecture is not a personal, philosophical, or
aesthetic pursuit by individualists; rather it has to consider everyday needs of people and use technology to create
liveable environments, with the design process being informed by studies of behavioral, environmental, and social
sciences. Environmental sustainability has become a mainstream issue, with profound effect on the architectural profession.
Many developers, those who support the financing of buildings, have become educated to encourage the facilitation
of environmentally sustainable design, rather than solutions based primarily on immediate cost. Major examples of
this can be found in Passive solar building design, greener roof designs, biodegradable materials, and more attention
to a structure's energy usage. This major shift in architecture has also changed architecture schools to focus more
on the environment. Sustainability in architecture was pioneered by Frank Lloyd Wright, in the 1960s by Buckminster
Fuller and in the 1970s by architects such as Ian McHarg and Sim Van der Ryn in the US and Brenda and Robert Vale
in the UK and New Zealand. There has been an acceleration in the number of buildings which seek to meet green building
sustainable design principles. Sustainable practices that were at the core of vernacular architecture increasingly
provide inspiration for environmentally and socially sustainable contemporary techniques. The U.S. Green Building
Council's LEED (Leadership in Energy and Environmental Design) rating system has been instrumental in this. Concurrently,
the recent movements of New Urbanism, Metaphoric architecture and New Classical Architecture promote a sustainable
approach towards construction, that appreciates and develops smart growth, architectural tradition and classical
design. This stands in contrast to modernist and globally uniform architecture, and opposes solitary housing estates and suburban sprawl.
The Human Development Index (HDI) is a composite statistic of life expectancy, education, and income per capita indicators,
which are used to rank countries into four tiers of human development. A country scores higher HDI when the life
expectancy at birth is longer, the education period is longer, and the income per capita is higher. The HDI was developed
by the Pakistani economist Mahbub ul Haq and published by the United Nations Development Programme; it is often framed in terms of whether people are able to "be" and "do" desirable things in their lives. The 2010 Human Development Report
introduced an Inequality-adjusted Human Development Index (IHDI). While the simple HDI remains useful, it stated
that "the IHDI is the actual level of human development (accounting for inequality)," and "the HDI can be viewed
as an index of 'potential' human development (or the maximum IHDI that could be achieved if there were no inequality)."
The origins of the HDI are found in the annual Development Reports of the United Nations Development Programme (UNDP).
These were devised and launched by Pakistani economist Mahbub ul Haq in 1990 and had the explicit purpose "to shift
the focus of development economics from national income accounting to people-centered policies". To produce the Human
Development Reports, Mahbub ul Haq formed a group of development economists including Paul Streeten, Frances Stewart,
Gustav Ranis, Keith Griffin, Sudhir Anand, and Meghnad Desai. Working alongside Nobel laureate Amartya Sen, they
worked on capabilities and functions that provided the underlying conceptual framework. Haq was sure that a simple
composite measure of human development was needed in order to convince the public, academics, and politicians that
they can and should evaluate development not only by economic advances but also improvements in human well-being.
Sen initially opposed this idea, but he soon went on to help Haq develop the Index. Sen was worried that it was going
to be difficult to capture the full complexity of human capabilities in a single index, but Haq persuaded him that
only a single number would shift the immediate attention of politicians from economic to human well-being. The HDI is built from four indicators: LE, life expectancy at birth; MYS, mean years of schooling (the years of schooling completed by a person 25 years of age or older); EYS, expected years of schooling (the years a 5-year-old child is expected to spend in school over his or her lifetime); and GNIpc, gross national income per capita at purchasing power parity. The formula defining the HDI is promulgated by the United Nations Development Programme (UNDP). In general, to transform a raw variable, say x, into a unit-free index between 0 and 1 (which allows different indices to be added together), the following formula is used: x-index = (x − min(x)) / (max(x) − min(x)), where min(x) and max(x) are the lowest and highest values the variable x can attain. The 2015 Human Development
Report by the United Nations Development Program was released on December 14, 2015, and calculates HDI values based
on estimates for 2014. Below is the list of the "very high human development" countries: The 2014 Human Development
Report by the United Nations Development Program was released on July 24, 2014, and calculates HDI values based on
estimates for 2013. Below is the list of the "very high human development" countries: The Inequality-adjusted Human
Development Index (IHDI) is a "measure of the average level of human development of people in a society once inequality
is taken into account." Countries in the top quartile of HDI ("very high human development" group) with a missing
IHDI: New Zealand, Singapore, Hong Kong, Liechtenstein, Brunei, Qatar, Saudi Arabia, Andorra, United Arab Emirates,
Bahrain, Cuba, and Kuwait. Some countries were not included for various reasons, primarily the lack of necessary
data. The following United Nations Member States were not included in the 2014 report: North Korea, Marshall Islands,
Monaco, Nauru, San Marino, Somalia, India, Pakistan, South Sudan, and Tuvalu. The 2013 Human Development Report by
the United Nations Development Program was released on March 14, 2013, and calculates HDI values based on estimates
for 2012. Below is the list of the "very high human development" countries: The Inequality-adjusted Human Development
Index (IHDI) is a "measure of the average level of human development of people in a society once inequality is taken
into account." Some countries were not included for various reasons, mainly the unavailability of certain crucial
data. The following United Nations Member States were not included in the 2011 report: North Korea, Marshall Islands,
Monaco, Nauru, San Marino, South Sudan, Somalia and Tuvalu. The 2010 Human Development Report by the United Nations
Development Program was released on November 4, 2010, and calculates HDI values based on estimates for 2010. Below
is the list of the "very high human development" countries: The 2010 Human Development Report was the first to calculate
an Inequality-adjusted Human Development Index (IHDI), which factors in inequalities in the three basic dimensions
of human development (income, life expectancy, and education). Below is a list of countries in the top quartile by
IHDI: Some countries were not included for various reasons, mainly the unavailability of certain crucial data. The
following United Nations Member States were not included in the 2010 report. Cuba lodged a formal protest at its
lack of inclusion. The UNDP explained that Cuba had been excluded due to the lack of an "internationally reported
figure for Cuba’s Gross National Income adjusted for Purchasing Power Parity". All other indicators for Cuba were
available, and reported by the UNDP, but the lack of one indicator meant that no ranking could be attributed to the
country. The situation has been addressed and, in later years, Cuba has ranked as a High Human Development country.
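The dimension-index normalization described above, together with the UNDP's post-2010 aggregation method (a geometric mean of the three dimension indices), can be sketched in Python. This is a minimal sketch: the goalpost values below are assumed from recent reports (they vary by report year), and the input figures are illustrative rather than official UNDP data.

```python
from math import log, prod

# Goalposts assumed from recent UNDP reports (they change between report years):
# life expectancy 20-85 years, mean years of schooling 0-15,
# expected years of schooling 0-18, GNI per capita $100-$75,000 (PPP).

def dimension_index(value, lo, hi):
    """Transform a raw variable into a unit-free index between 0 and 1."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def hdi(life_expectancy, mean_years_schooling, expected_years_schooling, gni_per_capita):
    health = dimension_index(life_expectancy, 20, 85)
    # The education index averages the two schooling sub-indices.
    education = (dimension_index(mean_years_schooling, 0, 15)
                 + dimension_index(expected_years_schooling, 0, 18)) / 2
    # Income is normalized on a log scale to reflect diminishing returns.
    income = dimension_index(log(gni_per_capita), log(100), log(75000))
    # Since 2010 the HDI is the geometric mean of the three dimension indices.
    return prod([health, education, income]) ** (1 / 3)

# Illustrative inputs roughly matching a very-high-development country:
print(round(hdi(82.0, 12.6, 17.5, 64992), 3))  # → 0.946
```

Because the geometric mean penalizes imbalance between dimensions, a country cannot compensate for a very low score in one dimension with high scores in the others, which was a motivation for abandoning the pre-2010 arithmetic mean.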
The 2009 Human Development Report by UNDP was released on October 5, 2009, and covers the period up to 2007. It was
titled "Overcoming barriers: Human mobility and development". The top countries by HDI were grouped in a new category
called "very high human development". The report refers to these countries as developed countries. They are: Some
countries were not included for various reasons, such as being a non-UN member or unable or unwilling to provide
the necessary data at the time of publication. Besides the states with limited recognition, the following states
were also not included. A new index was released on December 18, 2008. This so-called "statistical update" covered
the period up to 2006 and was published without an accompanying Human Development Report. The update is relevant
due to newly released estimates of purchasing power parities (PPP), implying substantial adjustments for many countries,
resulting in changes in HDI values and, in many cases, HDI ranks. The Human Development Report for 2007/2008 was
launched in Brasília, Brazil, on November 27, 2007. Its focus was on "Fighting climate change: Human solidarity in
a divided world." Most of the data used for the report are derived largely from 2005 or earlier, thus indicating
an HDI for 2005. Not all UN member states choose to or are able to provide the necessary statistics. The report showed
a small increase in world HDI in comparison with last year's report. This rise was fueled by a general improvement
in the developing world, especially of the least developed countries group. This marked improvement at the bottom
was offset by a decrease in the HDI of high-income countries. An HDI below 0.5 is considered to represent "low development".
All 22 countries in that category are located in Africa. The highest-scoring Sub-Saharan countries, Gabon and South
Africa, are ranked 119th and 121st, respectively. Nine countries departed from this category this year and joined
the "medium development" group. An HDI of 0.8 or more is considered to represent "high development". This includes
all developed countries, such as those in North America, Western Europe, Oceania, and Eastern Asia, as well as some
developing countries in Eastern Europe, Central and South America, Southeast Asia, the Caribbean, and the oil-rich
Arabian Peninsula. Seven countries were promoted to this category this year, leaving the "medium development" group:
Albania, Belarus, Brazil, Libya, Macedonia, Russia and Saudi Arabia. On the following table, green arrows represent an increase in ranking over the previous study, while red arrows represent a decrease in ranking. They are followed by the number of spaces they moved. Blue dashes represent a nation that did not move in the rankings since the
previous study. The list below displays the top-ranked country from each year of the Human Development Index. Norway
has been ranked the highest twelve times, Canada eight times, followed by Japan which has been ranked highest three
times. Iceland has been ranked highest twice. The Human Development Index has been criticized on a number of grounds
including alleged ideological biases towards egalitarianism and so-called "Western models of development", failure
to include any ecological considerations, lack of consideration of technological development or contributions to
the human civilization, focusing exclusively on national performance and ranking, lack of attention to development
from a global perspective, measurement error of the underlying statistics, and on the UNDP's changes in formula which
can lead to severe misclassification in the categorisation of 'low', 'medium', 'high' or 'very high' human development
countries. Economists Hendrik Wolff, Howard Chong and Maximilian Auffhammer discuss the HDI from the perspective
of data error in the underlying health, education and income statistics used to construct the HDI. They identified
three sources of data error which are due to (i) data updating, (ii) formula revisions and (iii) thresholds to classify
a country’s development status and conclude that 11%, 21% and 34% of all countries can be interpreted as currently
misclassified in the development bins due to the three sources of data error, respectively. The authors suggest that
the United Nations should discontinue the practice of classifying countries into development bins because, they claim, the cut-off values seem arbitrary, can provide incentives for strategic behavior in reporting official statistics, and have the potential to misguide politicians, investors, charity donors and the public who use the HDI at large. In 2010 the UNDP reacted to the criticism and updated the thresholds to classify nations as low, medium,
and high human development countries. In a comment to The Economist in early January 2011, the Human Development
Report Office responded to a January 6, 2011 article in the magazine which discusses the Wolff et al. paper. The
Human Development Report Office states that they undertook a systematic revision of the methods used for the calculation
of the HDI and that the new methodology directly addresses the critique by Wolff et al. in that it generates a system
for continuous updating of the human development categories whenever formula or data revisions take place. The HDI
has extended its geographical coverage: David Hastings, of the United Nations Economic and Social Commission for
Asia and the Pacific, published a report geographically extending the HDI to 230+ economies, whereas the UNDP HDI
for 2009 enumerates 182 economies and coverage for the 2010 HDI dropped to 169 countries. Note: The green arrows, red arrows, and blue dashes represent changes in rank. The changes in rank are not relative to the HDI list above, but are according to the source (p. 168), calculated with the exclusion of countries which are missing IHDI data. Note: The green arrows, red arrows, and blue dashes represent changes in rank when compared to
the new 2012 data HDI for 2011 – published in the 2012 report. Countries in the top quartile of HDI ("very high human
development" group) with a missing IHDI: New Zealand, Chile, Japan, Hong Kong, Singapore, Taiwan, Liechtenstein,
Brunei, Andorra, Qatar, Barbados, United Arab Emirates, and Seychelles. The 2011 Human Development Report was released
on 2 November 2011, and calculated HDI values based on estimates for 2011. Below is the list of the "very high human
development" countries (equal to the top quartile): Note: The green arrows, red arrows, and blue dashes represent changes in rank when compared to the 2011 HDI data for 2010 – published in the 2011 report (p. 131). Below
is a list of countries in the top quartile by Inequality-adjusted Human Development Index (IHDI). According to the
report, the IHDI is a "measure of the average level of human development of people in a society once inequality is
taken into account." Note: The green arrows, red arrows, and blue dashes represent changes in rank when
compared to the 2011 HDI list, for countries listed in both rankings. Countries in the top quartile of HDI ("very
high human development" group) with a missing IHDI include: New Zealand, Liechtenstein, Japan, Hong Kong, Singapore,
Taiwan, United Arab Emirates, Andorra, Brunei, Malta, Qatar, Bahrain, Chile, Argentina and Barbados. Note: The green
arrows, red arrows, and blue dashes represent changes in rank when compared to the 2010 HDI list, for countries
listed in both rankings. Some countries were not included for various reasons, such as being a non-UN member, unable,
or unwilling to provide the necessary data at the time of publication. Besides the states with limited recognition,
the following states were also not included.
Some definitions of southern Europe, also known as Mediterranean Europe, include the countries of the Iberian peninsula (Spain
and Portugal), the Italian peninsula, southern France and Greece. Other definitions sometimes include the Balkan
countries of southeast Europe, which are geographically in the southern part of Europe, but which have different
historical, political, economic, and cultural backgrounds. Different methods can be used to define southern Europe,
including its political, economic, and cultural attributes. Southern Europe can also be defined by its natural features
— its geography, climate, and flora. Southern Europe's most emblematic climate is that of the Mediterranean climate,
which has become a typically known characteristic of the area. The Mediterranean climate covers much of Portugal,
Spain, Southeast France, Italy, Croatia, Albania, Montenegro, Greece, the Western and Southern coastal regions of
Turkey as well as the Mediterranean islands. Those areas of Mediterranean climate present similar vegetations and
landscapes throughout, including dry hills, small plains, pine forests and olive trees. Cooler climates can be found
in certain parts of Southern European countries, for example within the mountain ranges of Spain and Italy. Additionally,
the north coast of Spain experiences a wetter Atlantic climate. Southern Europe's flora is that of the Mediterranean
Region, one of the phytochoria recognized by Armen Takhtajan. The Mediterranean and Submediterranean climate regions
in Europe are found in much of Southern Europe, mainly in Southern Portugal, most of Spain, the southern coast of
France, Italy, the Croatian coast, much of Bosnia, Montenegro, Albania, Macedonia, Greece, and the Mediterranean
islands. The period known as classical antiquity began with the rise of the city-states of Ancient Greece. Greek
influence reached its zenith under the expansive empire of Alexander the Great, spreading throughout Asia. The Roman
Empire came to dominate the entire Mediterranean basin in a vast empire based on Roman law and Roman legions. It
promoted trade, tolerance, and Greek culture. By 300 AD the Roman Empire was divided into the Western Roman Empire
based in Rome, and the Eastern Roman Empire based in Constantinople. The attacks of the Germanic peoples of northern
Europe led to the Fall of the Western Roman Empire in AD 476, a date which traditionally marks the end of the classical
period and the start of the Middle Ages. During the Middle Ages, the Eastern Roman Empire survived, though modern
historians refer to this state as the Byzantine Empire. In Western Europe, Germanic peoples moved into positions
of power in the remnants of the former Western Roman Empire and established kingdoms and empires of their own. The
period known as the Crusades, a series of religiously motivated military expeditions originally intended to bring
the Levant back into Christian rule, began. Several Crusader states were founded in the eastern Mediterranean. These
were all short-lived. The Crusaders would have a profound impact on many parts of Europe. Their Sack of Constantinople
in 1204 brought an abrupt end to the Byzantine Empire. Though it would later be re-established, it would never recover
its former glory. The Crusaders would establish trade routes that would develop into the Silk Road and open the way
for the merchant republics of Genoa and Venice to become major economic powers. The Reconquista, a related movement,
worked to reconquer Iberia for Christendom. The Late Middle Ages represented a period of upheaval in Europe. The
epidemic known as the Black Death and an associated famine caused demographic catastrophe in Europe as the population
plummeted. Dynastic struggles and wars of conquest kept many of the states of Europe at war for much of the period.
In the Balkans, the Ottoman Empire, a Turkish state originating in Anatolia, encroached steadily on former Byzantine
lands, culminating in the Fall of Constantinople in 1453. Beginning roughly in the 14th century in Florence, and
later spreading through Europe with the development of the printing press, a Renaissance of knowledge challenged
traditional doctrines in science and theology, with the Arabic texts and thought bringing about rediscovery of classical
Greek and Roman knowledge. The Reconquista of Portugal and Spain led to a series of oceanic explorations resulting
in the Age of Discovery that established direct links with Africa, the Americas, and Asia, while religious wars continued
to be fought in Europe, which ended in 1648 with the Peace of Westphalia. The Spanish crown maintained its hegemony
in Europe and was the leading power on the continent until the signing of the Treaty of the Pyrenees, which ended
a conflict between Spain and France that had begun during the Thirty Years' War. An unprecedented series of major
wars and political revolutions took place around Europe and indeed the world in the period between 1610 and 1700.
Observers at the time, and many historians since, have argued that the wars caused the revolutions. Galileo Galilei improved the telescope and invented an early thermometer, which allowed him to observe and describe the solar system. Leonardo da
Vinci painted the most famous work in the world. Guglielmo Marconi invented the radio. European overseas expansion
led to the rise of colonial empires, producing the Columbian Exchange. The combination of resource inflows from the
New World and the Industrial Revolution of Great Britain, allowed a new economy based on manufacturing instead of
subsistence agriculture. The period between 1815 and 1871 saw a large number of revolutionary attempts and independence
wars. Balkan nations began to regain independence from the Ottoman Empire. Italy unified into a nation state. The
capture of Rome in 1870 ended the Papal temporal power. Rivalry in a scramble for empires spread in what is known
as The Age of Empire. The outbreak of World War I in 1914 was precipitated by the rise of nationalism in Southeastern
Europe as the Great Powers took up sides. The Allies defeated the Central Powers in 1918. During the Paris Peace
Conference the Big Four imposed their terms in a series of treaties, especially the Treaty of Versailles. The Nazi
regime under Adolf Hitler came to power in 1933, and along with Mussolini's Italy sought to gain control of the continent
by the Second World War. Following the Allied victory in the Second World War, Europe was divided by the Iron Curtain.
The countries in Southeastern Europe were dominated by the Soviet Union and became communist states. The major non-communist
Southern European countries joined a US-led military alliance (NATO) and formed the European Economic Community amongst
themselves. The countries in the Soviet sphere of influence joined the military alliance known as the Warsaw Pact
and the economic bloc called Comecon. Yugoslavia was neutral. Italy became a major industrialized country again, due
to its post-war economic miracle. The European Union (EU) involved the division of powers, with taxation, health
and education handled by the nation states, while the EU had charge of market rules, competition, legal standards
and environmentalism. The Soviet economic and political system collapsed, leading to the end of communism in the
satellite countries in 1989, and the dissolution of the Soviet Union itself in 1991. As a consequence, Europe's integration
deepened, the continent became depolarised, and the European Union expanded to subsequently include many of the formerly
communist European countries – Romania and Bulgaria (2007) and Croatia (2013). The most widely spoken family of languages
in southern Europe are the Romance languages, the heirs of Latin, which have spread from the Italian peninsula, and
are emblematic of Southwestern Europe. (See the Latin Arch.) By far the most common Romance languages in Southern
Europe are: Italian, which is spoken by over 50 million people in Italy, San Marino, and the Vatican; and Spanish,
which is spoken by over 40 million people in Spain and Gibraltar. Other common Romance languages include: Romanian,
which is spoken in Romania and Moldova; Portuguese, which is spoken in Portugal; Catalan, which is spoken in eastern
Spain; and Galician, which is spoken in northwestern Spain. The Hellenic languages or Greek language are widely spoken
in Greece and in the Greek part of Cyprus. Additionally, other varieties of Greek are spoken in small communities
in parts of other European countries. Several South Slavic languages are spoken by millions of people in Southern
Europe. Serbian is spoken in Serbia, Bosnia, and Croatia; Bulgarian is spoken in Bulgaria; Croatian is spoken in
Croatia and Bosnia; Bosnian is spoken in Bosnia; Slovene is spoken in Slovenia; and Macedonian is spoken in Macedonia.
English is used as a second language in parts of Southern Europe. As a primary language, however, English has only
a small presence in Southern Europe, only in Gibraltar (alongside Spanish) and Malta (secondary to Maltese). There
are other language groupings in Southern Europe. Albanian is spoken in Albania, Kosovo, Macedonia, and parts of
Greece. Maltese is a Semitic language that is the official language of Malta. The Basque language is spoken in the
Basque Country, a region in northern Spain and southwestern France. The predominant religion in southern Europe is
Christianity. Christianity spread throughout Southern Europe during the Roman Empire, and Christianity was adopted
as the official religion of the Roman Empire in the year 380 AD. Due to the historical break of the Christian Church
into the western half based in Rome and the eastern half based in Constantinople, different branches of Christianity
are predominant in different parts of Europe. Christians in the western half of Southern Europe — e.g., Portugal,
Spain, Italy — are generally Roman Catholic. Christians in the eastern half of Southern Europe — e.g., Greece, Macedonia
— are generally Greek Orthodox. For its official works and publications, the United Nations Organization groups countries
under a classification of regions. The assignment of countries or areas to specific groupings is for statistical
convenience and does not imply any assumption regarding political or other affiliation of countries or territories
by the United Nations. Southern Europe, as grouped for statistical convenience by the United Nations (the sub-regions
according to the UN), includes the following countries and territories: The European Travel Commission divides the European
region on the basis of Tourism Decision Metrics (TDM) model. Countries which belong to the Southern/Mediterranean
Europe are:
BBC Television is a service of the British Broadcasting Corporation. The corporation, which has operated in the United Kingdom
under the terms of a Royal Charter since 1927, has produced television programmes from its own studios since 1932, although the start of its regular service of television broadcasts is dated to 2 November 1936. The BBC's domestic television
channels are broadcast without any commercial advertising and collectively they account for more than 30% of all
UK viewing. The services are funded by a television licence. The BBC operates several television networks, television
stations (although there is generally very little distinction between the two terms in the UK), and related programming
services in the United Kingdom. As well as being a broadcaster, the corporation also produces a large number of its
own programmes in-house, thereby ranking as one of the world's largest television production companies. Baird Television
Ltd. made Britain's first television broadcast, on 30 September 1929 from its studio in Long Acre, London, via the
BBC's London transmitter, using the electromechanical system pioneered by John Logie Baird. This system used a vertically-scanned
image of 30 lines – just enough resolution for a close-up of one person, and with a bandwidth low enough to use existing
radio transmitters. Simultaneous transmission of sound and picture was achieved on 30 March 1930, by using the BBC's
new twin transmitter at Brookmans Park. By late 1930, 30 minutes of morning programmes were broadcast Monday to Friday,
and 30 minutes at midnight on Tuesdays and Fridays, after BBC radio went off the air. Baird broadcasts via the BBC
continued until June 1932. The BBC began its own regular television programming from the basement of Broadcasting
House, London, on 22 August 1932. The studio moved to larger quarters in 16 Portland Place, London, in February 1934,
and continued broadcasting the 30-line images, carried by telephone line to the medium wave transmitter at Brookmans
Park, until 11 September 1935, by which time advances in all-electronic television systems made the electromechanical
broadcasts obsolete. After a series of test transmissions and special broadcasts that began in August, regular BBC
television broadcasts officially resumed on 2 November 1936, from a converted wing of Alexandra Palace in London,
which housed two studios, various scenery stores, make-up areas, dressing rooms, offices, and the transmitter itself,
now broadcasting on the VHF band. BBC television initially used two systems, on alternate weeks: the 240-line Baird
intermediate film system and the 405-line Marconi-EMI system, making the BBC the world's first regular high-definition
television service, broadcasting Monday to Saturday from 15:00 to 16:00 and 21:00 to 22:00. The two systems were
to run on a trial basis for six months; early television sets supported both resolutions. However, the Baird system,
which used a mechanical camera for filmed programming and Farnsworth image dissector cameras for live programming,
proved too cumbersome and visually inferior, and ended with closedown (at 22:00) on Saturday 13 February 1937. Initially,
the station's range was officially a 40-kilometre radius of the Alexandra Palace transmitter—in practice, however,
transmissions could be picked up a good deal further away, and on one occasion in 1938 were picked up by engineers
at RCA in New York, who were experimenting with a British television set. At launch, the service reached just a few hundred viewers in the immediate area. The first programme broadcast – and thus the first ever on a dedicated TV channel
– was "Opening of the BBC Television Service" at 15:00. The first major outside broadcast was the coronation of King
George VI and Queen Elizabeth in May 1937. The service was reaching an estimated 25,000–40,000 homes before the outbreak
of World War II. On 1 September 1939, two days before Britain declared war on Germany, the station was taken off air with little warning: the government was concerned that the VHF transmissions would act as an ideal beacon for German bombers homing in on London, and the service's engineers and technicians would be needed for the war effort, in particular on the radar programme. The last programme transmitted
was a Mickey Mouse cartoon, Mickey's Gala Premier (1933), which was followed by test transmissions; this refutes the popular memory that broadcasting was suspended before the end of the cartoon. According
to figures from Britain's Radio Manufacturers Association, 18,999 television sets had been manufactured from 1936
to September 1939, when production was halted by the war. BBC Television returned on 7 June 1946 at 15:00. Jasmine
Bligh, one of the original announcers, made the first announcement, saying, 'Good afternoon everybody. How are you?
Do you remember me, Jasmine Bligh?'. The Mickey Mouse cartoon of 1939 was repeated twenty minutes later. Alexandra Palace was the home base of the channel until the early 1950s, when the majority of production moved into the newly acquired Lime Grove Studios. Postwar broadcast coverage was extended to Birmingham in 1949 with the opening of the Sutton Coldfield transmitting station, and by the mid-1950s most of the country was covered, transmitting a 405-line interlaced image on VHF. The BBC Television Service (renamed "BBC tv" in 1960) showed popular programming,
including drama, comedies, documentaries, game shows, and soap operas, covering a wide range of genres and regularly
competed with ITV to become the channel with the highest ratings for that week. The channel also introduced the science
fiction show Doctor Who on 23 November 1963, at 17:16, which went on to become one of Britain's most iconic and beloved television programmes. BBC TV was renamed BBC1 in 1964, after the launch of BBC2 (now BBC Two), the UK's third television station (ITV was the second), whose remit was to provide more niche programming. The channel was
due to launch on 20 April 1964, but was put off the air by a massive power failure that affected much of London,
caused by a fire at Battersea Power Station. A videotape made on the opening night was rediscovered in 2003 by a
BBC technician. In the end the launch went ahead the following night, hosted by Denis Tuohy holding a candle. BBC2
was the first British channel to use UHF and 625-line pictures, giving higher definition than the existing VHF 405-line
system. On 1 July 1967, BBC Two became the first television channel in Europe to broadcast regularly in colour, using
the West German PAL system, which remained in use for decades before being gradually superseded by digital systems. (BBC
One and ITV began 625-line colour broadcasts simultaneously on 15 November 1969). Unlike other terrestrial channels,
BBC Two does not have soap opera or standard news programming, but a range of programmes intended to be eclectic
and diverse (although if a programme has high audience ratings it is often eventually repositioned to BBC One). The
different remit of BBC2 allowed its first controller, Sir David Attenborough, to commission the first heavyweight documentaries and documentary series such as Civilisation, The Ascent of Man and Horizon. In 1967 Tom and Jerry cartoons first aired on BBC One, with around two episodes shown every evening at 17:00 and occasional morning showings on
CBBC. The BBC stopped airing the famous cartoon duo in 2000. David Attenborough was later granted sabbatical leave
from his job as Controller to work with the BBC Natural History Unit which had existed since the 1950s. This unit
is now famed throughout the world for producing high quality programmes with Attenborough such as Life on Earth,
The Private Life of Plants, The Blue Planet, The Life of Mammals, Planet Earth and Frozen Planet. National and regional
variations also occur within the BBC One and BBC Two schedules. England's BBC One output is split up into fifteen
regions (such as South West and East), which exist mainly to produce local news programming, but also occasionally
opt out of the network to show programmes of local importance (such as major local events). The other nations of
the United Kingdom (Wales, Scotland and Northern Ireland) have been granted more autonomy from the English network;
for example, programmes are mostly introduced by local announcers, rather than by those in London. BBC One and BBC
Two schedules in the other UK nations can vary immensely from BBC One and BBC Two in England. Programmes, such as
the politically fuelled Give My Head Peace (produced by BBC Northern Ireland) and the soap opera River City (produced
by BBC Scotland), have been created specifically to cater for some viewers in their respective nations, who may have
found programmes created for English audiences irrelevant. BBC Scotland produces daily programmes for its Gaelic-speaking
viewers, including current affairs, political and children's programming such as the popular Eòrpa and Dè a-nis?.
BBC Wales also produces a large amount of Welsh language programming for S4C, particularly news, sport and other
programmes, especially the soap opera Pobol y Cwm ('People of the Valley'). The UK nations also produce a number
of programmes that are shown across the UK, such as BBC Scotland's comedy series Chewin' the Fat, and BBC Northern
Ireland's talk show Patrick Kielty Almost Live. The BBC is also renowned for its production of costume dramas, such
as Jane Austen's Pride and Prejudice and contemporary social dramas such as Boys from the Blackstuff and Our Friends
in the North. The BBC has come under pressure to commission more programmes from independent British production companies,
and indeed is legally required to source 25% of its output from such companies by the terms of the Broadcasting Act
1990. Programmes have also been imported mainly from English-speaking countries: notable—though no longer shown—examples
include The Simpsons from the United States and Neighbours from Australia. Because of the availability of programmes
in English, few programmes need subtitles or dubbing, unlike much European television. The BBC also introduced Ceefax, the first teletext service, in 1974, allowing viewers to view textual information such as the latest news on their television. Ceefax did not make a full transition to digital television, instead being replaced by the new interactive BBCi service. In March 2003 the BBC announced that from the end of May 2003
(subsequently deferred to 14 July) it intended to transmit all eight of its domestic television channels (including
the 15 regional variations of BBC 1) unencrypted from the Astra 2D satellite. This move was estimated to save the
BBC £85 million over the next five years. While the "footprint" of the Astra 2D satellite was smaller than that of
Astra 2A, from which it was previously broadcast encrypted, it meant that viewers with appropriate equipment were
able to receive BBC channels "free-to-air" over much of Western Europe. Consequently, some rights concerns have needed
to be resolved with programme providers such as Hollywood studios and sporting organisations, which have expressed
concern about the unencrypted signal leaking out. This led to some broadcasts being made unavailable on the Sky Digital
platform, such as Scottish Premier League and Scottish Cup football, while on other platforms such broadcasts were
not disrupted. Later, when rights contracts were renewed, this problem was resolved. On 5 July 2004, the BBC celebrated
the fiftieth anniversary of its television news bulletins (although it had produced the Television Newsreel for several
years before 1954). This event was marked by the release of a DVD, which showed highlights of the BBC's television
coverage of significant events over the half-century, as well as changes in the format of the BBC television news;
from the newsreel format of the first BBC Television News bulletins, to the 24-hour, worldwide news coverage available
in 2004. A special edition of Radio Times was also produced, as well as a special section of the BBC News Online
website. In 2005 the pioneering BBC television series Little Angels won a BAFTA award. Little Angels was the first
reality parenting show and its most famous episode saw Welsh actress Jynine James try to cope with the tantrums of
her six-year-old son. The BBC Television department, headed by Jana Bennett, was absorbed into a new, much larger group, BBC Vision, in late 2006. The new group was part of a larger restructuring within the BBC with the onset of new media
outlets and technology. In 2008, the BBC began experimenting with live streaming of certain channels in the UK, and
in November 2008, all standard BBC television channels were made available to watch online. In February 2016, it
was confirmed by BBC Worldwide that Keeping Up Appearances is the corporation's most exported television programme,
being sold nearly 1000 times to overseas broadcasters. The BBC domestic television channels do not broadcast advertisements;
they are instead funded by a television licence fee which TV viewers are required to pay annually. This includes
viewers who watch real-time streams of the BBC's channels online or via their mobile phone. The BBC's international
television channels are funded by advertisements and subscription. As a division within the BBC, Television was formerly
known as BBC Vision for a few years in the early 21st century, until its name reverted to Television in 2013. It
is responsible for the commissioning, producing, scheduling and broadcasting of all programming on the BBC's television
channels, and is led by Danny Cohen. BBC Japan was a general entertainment channel, which operated between December
2004 and April 2006. It ceased operations after its Japanese distributor folded.
Arnold Alois Schwarzenegger (/ˈʃwɔːrtsənˌɛɡər/; German: [ˈaɐ̯nɔlt ˈalɔʏs ˈʃvaɐ̯tsn̩ˌɛɡɐ]; born July 30, 1947) is an Austrian-American
actor, filmmaker, businessman, investor, author, philanthropist, activist, former professional bodybuilder and politician.
He served two terms as the 38th Governor of California from 2003 until 2011. Schwarzenegger began weight training
at the age of 15. He won the Mr. Universe title at age 20 and went on to win the Mr. Olympia contest seven times.
Schwarzenegger has remained a prominent presence in bodybuilding and has written many books and articles on the sport.
He is widely considered to be among the greatest bodybuilders of all time, as well as the sport's biggest icon. Schwarzenegger
gained worldwide fame as a Hollywood action film icon. His breakthrough film was the sword-and-sorcery epic Conan
the Barbarian in 1982, which was a box-office hit and resulted in a sequel. In 1984, he appeared in James Cameron's
science-fiction thriller film The Terminator, which was a massive critical and box-office success. Schwarzenegger
subsequently reprised the Terminator character in the franchise's later installments in 1991, 2003, and 2015. He
appeared in a number of successful films, such as Commando (1985), The Running Man (1987), Predator (1987), Twins
(1988), Total Recall (1990), Kindergarten Cop (1990) and True Lies (1994). He was nicknamed the "Austrian Oak" in
his bodybuilding days, "Arnie" during his acting career, and "The Governator" (a portmanteau of "Governor" and "The
Terminator", one of his best-known movie roles). As a Republican, he was first elected on October 7, 2003, in a special
recall election to replace then-Governor Gray Davis. Schwarzenegger was sworn in on November 17, to serve the remainder
of Davis's term. Schwarzenegger was then re-elected on November 7, 2006, in California's 2006 gubernatorial election,
to serve a full term as governor, defeating Democrat Phil Angelides, who was California State Treasurer at the time.
Schwarzenegger was sworn in for his second term on January 5, 2007. In 2011, Schwarzenegger completed his second
term as governor. Schwarzenegger was born in Thal, a village bordering the city of Graz in Styria, Austria and christened
Arnold Alois. His parents were Gustav Schwarzenegger (August 17, 1907 – December 13, 1972), and Aurelia Schwarzenegger
(née Jadrny; July 29, 1922 – August 2, 1998). Gustav was the local chief of police, and had served in World War II
as a Hauptfeldwebel after voluntarily joining the Nazi Party in 1938, though he was discharged in 1943 following
a bout of malaria. He married Arnold's mother on October 20, 1945; he was 38 and she was 23 years old. According
to Schwarzenegger, both of his parents were very strict: "Back then in Austria it was a very different world, if
we did something bad or we disobeyed our parents, the rod was not spared." He grew up in a Roman Catholic family
who attended Mass every Sunday. Gustav had a preference for his elder son, Meinhard (July 17, 1946 – May 20, 1971),
over Arnold. His favoritism was "strong and blatant," which stemmed from an unfounded suspicion that Arnold was not
his biological child. Schwarzenegger has said his father had "no patience for listening or understanding your problems."
Schwarzenegger had a good relationship with his mother and kept in touch with her until her death. In later life,
Schwarzenegger commissioned the Simon Wiesenthal Center to research his father's wartime record, which came up with
no evidence of Gustav's being involved in atrocities, despite Gustav's membership in the Nazi Party and SA. Schwarzenegger's
father's background received wide press attention during the 2003 California recall campaign. At school, Schwarzenegger
was apparently in the middle but stood out for his "cheerful, good-humored and exuberant" character. Money was a
problem in their household; Schwarzenegger recalled that one of the highlights of his youth was when the family bought
a refrigerator. As a boy, Schwarzenegger played several sports, heavily influenced by his father. He picked up his
first barbell in 1960, when his soccer coach took his team to a local gym. At the age of 14, he chose bodybuilding
over soccer as a career. Asked whether he was 13 when he started weightlifting, Schwarzenegger responded:
"I actually started weight training when I was 15, but I'd been participating in sports, like soccer, for years,
so I felt that although I was slim, I was well-developed, at least enough so that I could start going to the gym
and start Olympic lifting." However, his official website biography claims: "At 14, he started an intensive training
program with Dan Farmer, studied psychology at 15 (to learn more about the power of mind over body) and at 17, officially
started his competitive career." During a speech in 2001, he said, "My own plan formed when I was 14 years old. My
father had wanted me to be a police officer like he was. My mother wanted me to go to trade school." Schwarzenegger
took to visiting a gym in Graz, where he also frequented the local movie theaters to see bodybuilding idols such
as Reg Park, Steve Reeves, and Johnny Weissmuller on the big screen. When Reeves died in 2000, Schwarzenegger fondly
remembered him: "As a teenager, I grew up with Steve Reeves. His remarkable accomplishments allowed me a sense of
what was possible, when others around me didn't always understand my dreams. Steve Reeves has been part of everything
I've ever been fortunate enough to achieve." In 1961, Schwarzenegger met former Mr. Austria Kurt Marnul, who invited
him to train at the gym in Graz. He was so dedicated as a youngster that he broke into the local gym on weekends,
when it was usually closed, so that he could train. "It would make me sick to miss a workout... I knew I couldn't
look at myself in the mirror the next morning if I didn't do it." When Schwarzenegger was asked about his first movie
experience as a boy, he replied: "I was very young, but I remember my father taking me to the Austrian theaters and
seeing some newsreels. The first real movie I saw, that I distinctly remember, was a John Wayne movie." On May 20,
1971, his brother, Meinhard, died in a car accident. Meinhard had been drinking and was killed instantly. Schwarzenegger
did not attend his funeral. Meinhard was due to marry Erika Knapp, and the couple had a three-year-old son, Patrick.
Schwarzenegger would pay for Patrick's education and help him to emigrate to the United States. Gustav died the following
year from a stroke. In Pumping Iron, Schwarzenegger claimed that he did not attend his father's funeral because he
was training for a bodybuilding contest. Later, he and the film's producer said this story was taken from another
bodybuilder for the purpose of showing the extremes that some would go to for their sport and to make Schwarzenegger's
image more cold and machine-like in order to fan controversy for the film. Barbara Baker, his first serious girlfriend,
has said he informed her of his father's death without emotion and that he never spoke of his brother. Over time,
he has given at least three versions of why he was absent from his father's funeral. In an interview with Fortune
in 2004, Schwarzenegger told how he suffered what "would now be called child abuse" at the hands of his father: "My
hair was pulled. I was hit with belts. So was the kid next door. It was just the way it was. Many of the children
I've seen were broken by their parents, which was the German-Austrian mentality. They didn't want to create an individual.
It was all about conforming. I was one who did not conform, and whose will could not be broken. Therefore, I became
a rebel. Every time I got hit, and every time someone said, 'you can't do this,' I said, 'this is not going to be
for much longer, because I'm going to move out of here. I want to be rich. I want to be somebody.'" Schwarzenegger
served in the Austrian Army in 1965 to fulfill the one year of service required at the time of all 18-year-old Austrian
males. During his army service, he won the Junior Mr. Europe contest. He went AWOL during basic training so he could
take part in the competition and spent a week in military prison: "Participating in the competition meant so much
to me that I didn't carefully think through the consequences." He won another bodybuilding contest in Graz, at Steirer
Hof Hotel (where he had placed second). He was voted best built man of Europe, which made him famous. "The Mr. Universe
title was my ticket to America – the land of opportunity, where I could become a star and get rich." Schwarzenegger
made his first plane trip in 1966, attending the NABBA Mr. Universe competition in London. He would come in second
in the Mr. Universe competition, not having the muscle definition of American winner Chester Yorton. Charles "Wag"
Bennett, one of the judges at the 1966 competition, was impressed with Schwarzenegger and offered to coach him.
As Schwarzenegger had little money, Bennett invited him to stay in his crowded family home above one of his two gyms
in Forest Gate, London, England. Yorton's leg definition had been judged superior, and Schwarzenegger, under a training
program devised by Bennett, concentrated on improving the muscle definition and power in his legs. Staying in the
East End of London helped Schwarzenegger improve his rudimentary grasp of the English language. Also in 1966, Schwarzenegger
had the opportunity to meet childhood idol Reg Park, who became his friend and mentor. The training paid off and,
in 1967, Schwarzenegger won the title for the first time, becoming the youngest ever Mr. Universe at the age of 20.
He would go on to win the title a further three times. Schwarzenegger then flew back to Munich, training for four
to six hours daily, attending business school and working in a health club (Rolf Putziger's gym where he worked and
trained from 1966 to 1968), returning to London in 1968 to win his next Mr. Universe title. He frequently told Roger
C. Field, his English coach and friend in Munich at that time, "I'm going to become the greatest actor!" Schwarzenegger,
who dreamed of moving to the U.S. since the age of 10, and saw bodybuilding as the avenue through which to do so,
realized his dream by moving to the United States in September 1968 at the age of 21, speaking little English. There
he trained at Gold's Gym in Venice, Los Angeles, California, under Joe Weider. From 1970 to 1974, one of Schwarzenegger's
weight training partners was Ric Drasin, a professional wrestler who designed the original Gold's Gym logo in 1973.
Schwarzenegger also became good friends with professional wrestler Superstar Billy Graham. In 1970, at age 23, he
captured his first Mr. Olympia title in New York, and would go on to win the title a total of seven times. Immigration
law firm Siskind & Susser have stated that Schwarzenegger may have been an illegal immigrant at some point in the
late 1960s or early 1970s because of violations in the terms of his visa. LA Weekly would later say in 2002 that
Schwarzenegger is the most famous immigrant in America, who "overcame a thick Austrian accent and transcended the
unlikely background of bodybuilding to become the biggest movie star in the world in the 1990s". In 1977, Schwarzenegger's
autobiography/weight-training guide Arnold: The Education of a Bodybuilder was published and became a huge success.
After taking English classes at Santa Monica College in California, he earned a BA by correspondence from the University of Wisconsin–Superior, graduating in 1979 with a degree in international marketing of fitness and business administration. He has said that during this time he ran into a friend who told him that he was teaching Transcendental Meditation
(TM), which prompted Schwarzenegger to reveal he had been struggling with anxiety for the first time in his life:
"Even today, I still benefit from [the year of TM] because I don't merge and bring things together and see everything
as one big problem." Schwarzenegger is considered among the most important figures in the history of bodybuilding,
and his legacy is commemorated in the Arnold Classic annual bodybuilding competition. Schwarzenegger has remained
a prominent face in the bodybuilding sport long after his retirement, in part because of his ownership of gyms and
fitness magazines. He has presided over numerous contests and awards shows. For many years, he wrote a monthly column
for the bodybuilding magazines Muscle & Fitness and Flex. Shortly after being elected Governor, he was appointed
executive editor of both magazines, in a largely symbolic capacity. The magazines agreed to donate $250,000 a year
to the Governor's various physical fitness initiatives. When the deal, including the contract that gave Schwarzenegger
at least $1 million a year, was made public in 2005, many criticized it as being a conflict of interest since the
governor's office made decisions concerning regulation of dietary supplements in California. Consequently, Schwarzenegger
relinquished the executive editor role in 2005. American Media Inc., which owns Muscle & Fitness and Flex, announced
in March 2013 that Schwarzenegger had accepted their renewed offer to be executive editor of the magazines. One of
the first competitions he won was the Junior Mr. Europe contest in 1965. He won Mr. Europe the following year, at
age 19. He would go on to compete in, and win, many bodybuilding contests. His bodybuilding victories included five
Mr. Universe (4 – NABBA [England], 1 – IFBB [USA]) wins, and seven Mr. Olympia wins, a record which would stand until
Lee Haney won his eighth consecutive Mr. Olympia title in 1991. Schwarzenegger continues to work out even today.
When asked about his personal training during the 2011 Arnold Classic, he said that he was still working out half an hour with weights every day. In 1967, Schwarzenegger won the Munich stone-lifting contest, in which a stone weighing
508 German pounds (254 kg/560 lbs.) is lifted between the legs while standing on two foot rests. Schwarzenegger's
goal was to become the greatest bodybuilder in the world, which meant becoming Mr. Olympia. His first attempt was
in 1969, when he lost to three-time champion Sergio Oliva. However, Schwarzenegger came back in 1970 and won the
competition, making him the youngest ever Mr. Olympia at the age of 23, a record he still holds to this day. He continued
his winning streak in the 1971–74 competitions. In 1975, Schwarzenegger was once again in top form, and won the title
for the sixth consecutive time, beating Franco Columbu. After the 1975 Mr. Olympia contest, Schwarzenegger announced
his retirement from professional bodybuilding. Months before the 1975 Mr. Olympia contest, filmmakers George Butler
and Robert Fiore persuaded Schwarzenegger to compete, in order to film his training in the bodybuilding documentary
called Pumping Iron. Schwarzenegger had only three months to prepare for the competition, after losing significant
weight to appear in the film Stay Hungry with Jeff Bridges. Lou Ferrigno proved not to be a threat, and a lighter-than-usual
Schwarzenegger convincingly won the 1975 Mr. Olympia. Schwarzenegger came out of retirement, however, to compete
in the 1980 Mr. Olympia. Schwarzenegger was training for his role in Conan, and he got into such good shape because
of the running, horseback riding and sword training, that he decided he wanted to win the Mr. Olympia contest one
last time. He kept this plan a secret, in the event that a training accident would prevent his entry and cause him
to lose face. Schwarzenegger had been hired to provide color commentary for network television when, at the eleventh hour, he announced that since he was there, "Why not compete?" Schwarzenegger ended up winning the event with only
seven weeks of preparation. After being declared Mr. Olympia for a seventh time, Schwarzenegger then officially retired
from competition. Schwarzenegger has admitted to using performance-enhancing anabolic steroids while they were legal,
writing in 1977 that "steroids were helpful to me in maintaining muscle size while on a strict diet in preparation
for a contest. I did not use them for muscle growth, but rather for muscle maintenance when cutting up." He has called
the drugs "tissue building." In 1999, Schwarzenegger sued Dr. Willi Heepe, a German doctor who publicly predicted
his early death on the basis of a link between his steroid use and his later heart problems. As the doctor had never
examined him personally, Schwarzenegger collected a US$10,000 libel judgment against him in a German court. In 1999,
Schwarzenegger also sued and settled with The Globe, a U.S. tabloid which had made similar predictions about the
bodybuilder's future health. Schwarzenegger wanted to move from bodybuilding into acting, finally achieving it when
he was chosen to play the role of Hercules in 1970's Hercules in New York. Credited under the name "Arnold Strong,"
his accent in the film was so thick that his lines were dubbed after production. His second film appearance was as
a deaf mute hit-man for the mob in director Robert Altman's The Long Goodbye (1973), which was followed by a much
more significant part in the film Stay Hungry (1976), for which he was awarded a Golden Globe for New Male Star of
the Year. Schwarzenegger has discussed his early struggles in developing his acting career. "It was very difficult
for me in the beginning – I was told by agents and casting people that my body was 'too weird', that I had a funny
accent, and that my name was too long. You name it, and they told me I had to change it. Basically, everywhere I
turned, I was told that I had no chance." Schwarzenegger drew attention and boosted his profile in the bodybuilding
film Pumping Iron (1977), elements of which were dramatized; in 1991, he purchased the rights to the film, its outtakes,
and associated still photography. In 1977, he also appeared in an episode of the ABC situation comedy The San Pedro
Beach Bums. Schwarzenegger auditioned for the title role of The Incredible Hulk, but did not win the role because
of his height. Later, Lou Ferrigno got the part of Dr. David Banner's alter ego. Schwarzenegger appeared with Kirk
Douglas and Ann-Margret in the 1979 comedy The Villain. In 1980, he starred in a biographical film of the 1950s actress
Jayne Mansfield as Mansfield's husband, Mickey Hargitay. Schwarzenegger's breakthrough film was the sword-and-sorcery
epic Conan the Barbarian in 1982, which was a box-office hit. This was followed by a sequel, Conan the Destroyer,
in 1984, although it was not as successful as its predecessor. In 1983, Schwarzenegger starred in the promotional
video, Carnival in Rio. In 1984, he made his first appearance as the eponymous character, and what some would say
was his acting career's signature role, in James Cameron's science fiction thriller film The Terminator. Following
this, Schwarzenegger made Red Sonja in 1985. During the 1980s, audiences had an appetite for action films, with both
Schwarzenegger and Sylvester Stallone becoming international stars. Schwarzenegger's roles reflected his sense of humor, separating him from more serious action heroes; the comedy thriller Last Action Hero, for instance, featured an alternative-universe poster for Terminator 2: Judgment Day starring Stallone. He made a number of successful films,
such as Commando (1985), Raw Deal (1986), The Running Man (1987), Predator (1987), and Red Heat (1988). Twins (1988),
a comedy with Danny DeVito, also proved successful. Total Recall (1990) netted Schwarzenegger $10 million and 15%
of the film's gross. The science fiction film was based on the Philip K. Dick short story "We Can Remember
It for You Wholesale". Kindergarten Cop (1990) reunited him with director Ivan Reitman, who directed him in Twins.
Schwarzenegger had a brief foray into directing, first with a 1990 episode of the TV series Tales from the Crypt,
entitled "The Switch", and then with the 1992 telemovie Christmas in Connecticut. He has not directed since. Schwarzenegger's
commercial peak was his return as the title character in 1991's Terminator 2: Judgment Day, which was the highest-grossing
film of 1991. In 1993, the National Association of Theatre Owners named him the "International Star of the Decade".
His next film project, the 1993 self-aware action comedy spoof Last Action Hero, was released opposite Jurassic Park,
and did not do well at the box office. His next film, the comedy drama True Lies (1994), was a popular spy film,
and saw Schwarzenegger reunited with James Cameron. That same year, the comedy Junior was released, the last of Schwarzenegger's
three collaborations with Ivan Reitman and again co-starring Danny DeVito. This film brought him his second Golden
Globe nomination, this time for Best Actor – Musical or Comedy. It was followed by the action thriller Eraser (1996),
the Christmas comedy Jingle All The Way (1996), and the comic book-based Batman & Robin (1997), in which he played
the villain Mr. Freeze. This was his final film before taking time to recuperate from a back injury. Following the
critical failure of Batman & Robin, his film career and box office prominence went into decline. He returned with
the supernatural thriller End of Days (1999), later followed by the action films The 6th Day (2000) and Collateral
Damage (2002), both of which failed to do well at the box office. In 2003, he made his third appearance as the title
character in Terminator 3: Rise of the Machines, which went on to earn over $150 million domestically. In tribute
to Schwarzenegger in 2002, Forum Stadtpark, a local cultural association, proposed plans to build a 25-meter (82
ft) tall Terminator statue in a park in central Graz. Schwarzenegger reportedly said he was flattered, but thought
the money would be better spent on social projects and the Special Olympics. His film appearances after becoming
Governor of California included a three-second cameo appearance in The Rundown, and the 2004 remake of Around the
World in 80 Days. In 2005, he appeared as himself in the film The Kid & I. He voiced Baron von Steuben in the Liberty's
Kids episode "Valley Forge". He had been rumored to be appearing in Terminator Salvation as the original T-800; he
denied his involvement, but his likeness ultimately did appear briefly, inserted into the film using stock footage from the first Terminator movie. Schwarzenegger made a cameo appearance in Sylvester Stallone's The Expendables. In January 2011, just weeks after leaving office in California, Schwarzenegger announced
that he was reading several new scripts for future films, one of them being the World War II action drama With Wings
as Eagles, written by Randall Wallace, based on a true story. On March 6, 2011, at the Arnold Seminar of the Arnold
Classic, Schwarzenegger revealed that he was being considered for several films, including sequels to The Terminator
and remakes of Predator and The Running Man, and that he was "packaging" a comic book character. The character was
later revealed to be the Governator, star of the comic book and animated series of the same name. Schwarzenegger
inspired the character and co-developed it with Stan Lee, who would have produced the series. Schwarzenegger would
have voiced the Governator. On May 20, 2011, Schwarzenegger's entertainment counsel announced that all movie projects
currently in development were being halted: "Schwarzenegger is focusing on personal matters and is not willing to
commit to any production schedules or timelines". On July 11, 2011, it was announced that Schwarzenegger was considering
a comeback film despite his legal problems. He appeared in The Expendables 2 (2012), and starred in The Last Stand
(2013), his first leading role in 10 years, and Escape Plan (2013), his first co-starring role alongside Sylvester
Stallone. He starred in Sabotage, released in March 2014, and appeared in The Expendables 3, released in August 2014.
He starred in the fifth Terminator movie Terminator Genisys in 2015 and will reprise his role as Conan the Barbarian
in The Legend of Conan. Schwarzenegger has been a registered Republican for many years. As an actor, his political
views were always well known, as they contrasted with those of many other prominent Hollywood stars, a community generally considered liberal and Democratic-leaning. At the 2004 Republican National Convention, Schwarzenegger
gave a speech explaining why he was a Republican. In 1985, Schwarzenegger appeared in "Stop the Madness", an anti-drug
music video sponsored by the Reagan administration. He first came to wide public notice as a Republican during the
1988 presidential election, accompanying then-Vice President George H.W. Bush at a campaign rally. Schwarzenegger's
first political appointment was as chairman of the President's Council on Physical Fitness and Sports, on which he
served from 1990 to 1993. He was nominated by George H. W. Bush, who dubbed him "Conan the Republican". He later
served as Chairman for the California Governor's Council on Physical Fitness and Sports under Governor Pete Wilson.
In an interview with Talk magazine in late 1999, Schwarzenegger was asked if he thought of running for office. He
replied, "I think about it many times. The possibility is there, because I feel it inside." The Hollywood Reporter
claimed shortly after that Schwarzenegger sought to end speculation that he might run for governor of California.
Following his initial comments, Schwarzenegger said, "I'm in show business – I am in the middle of my career. Why
would I go away from that and jump into something else?" Schwarzenegger announced his candidacy in the 2003 California
recall election for Governor of California on the August 6, 2003 episode of The Tonight Show with Jay Leno. Schwarzenegger
had the most name recognition in a crowded field of candidates, but he had never held public office and his political
views were unknown to most Californians. His candidacy immediately became national and international news, with media
outlets dubbing him the "Governator" (referring to The Terminator movies, see above) and "The Running Man" (the name
of another one of his films), and calling the recall election "Total Recall" (yet another movie starring Schwarzenegger).
Schwarzenegger declined to participate in several debates with other recall replacement candidates, and appeared
in only one debate on September 24, 2003. On October 7, 2003, the recall election resulted in Governor Gray Davis
being removed from office with 55.4% of the Yes vote in favor of a recall. Schwarzenegger was elected Governor of
California under the second question on the ballot with 48.6% of the vote to choose a successor to Davis. Schwarzenegger
defeated Democrat Cruz Bustamante, fellow Republican Tom McClintock, and others. His nearest rival, Bustamante, received
31% of the vote. In total, Schwarzenegger won the election by about 1.3 million votes. Under the regulations of the
California Constitution, no runoff election was required. Schwarzenegger was the second foreign-born governor of
California after Irish-born Governor John G. Downey in 1862. As soon as Schwarzenegger was elected governor, Willie
Brown said he would start a drive to recall the governor. Schwarzenegger was equally entrenched in what he considered
to be his mandate to clean up gridlock. Building on a catchphrase from the sketch "Hans and Franz" from Saturday
Night Live (which partly parodied his bodybuilding career), Schwarzenegger called the Democratic State politicians
"girlie men". Schwarzenegger's early victories included repealing an unpopular increase in the vehicle registration
fee as well as preventing driver's licenses from being issued to illegal immigrants, but later he began to feel the
backlash when powerful state unions began to oppose his various initiatives. Key among his reckoning with political
realities was a special election he called in November 2005, in which four ballot measures he sponsored were defeated.
Schwarzenegger accepted personal responsibility for the defeats and vowed to continue to seek consensus for the people
of California. He would later comment that "no one could win if the opposition raised 160 million dollars to defeat
you". The U.S. Supreme Court later found the public employee unions' use of compulsory fundraising during the campaign
had been illegal in Knox v. Service Employees International Union, Local 1000. Schwarzenegger then went against the
advice of fellow Republican strategists and appointed a Democrat, Susan Kennedy, as his Chief of Staff. Schwarzenegger
gradually moved towards a more politically moderate position, determined to build a winning legacy with only a short
time to go until the next gubernatorial election. Schwarzenegger ran for re-election against Democrat Phil Angelides,
the California State Treasurer, in the 2006 elections, held on November 7, 2006. Despite a poor year nationally for
the Republican party, Schwarzenegger won re-election with 56.0% of the vote compared with 38.9% for Angelides, a
margin of well over one million votes. In recent years, many commentators have seen Schwarzenegger as moving away
from the right and towards the center of the political spectrum. After hearing a speech by Schwarzenegger at the
2006 Martin Luther King, Jr. breakfast, San Francisco mayor Gavin Newsom said that, "[H]e's becoming a Democrat […
H]e's running back, not even to the center. I would say center-left". It was rumored that Schwarzenegger might run
for the United States Senate in 2010, as his governorship would be term-limited by that time. This turned out to
be false. Wendy Leigh, who wrote an unofficial biography on Schwarzenegger, claims he plotted his political rise
from an early age using the movie business and bodybuilding as building blocks to escape a depressing home. Leigh
portrays Schwarzenegger as obsessed with power and quotes him as saying, "I wanted to be part of the small percentage
of people who were leaders, not the large mass of followers. I think it is because I saw leaders use 100% of their
potential – I was always fascinated by people in control of other people." Schwarzenegger has said that it was never
his intention to enter politics, but he says, "I married into a political family. You get together with them and
you hear about policy, about reaching out to help people. I was exposed to the idea of being a public servant and
Eunice and Sargent Shriver became my heroes." Eunice Kennedy Shriver, a sister of John F. Kennedy, was Schwarzenegger's mother-in-law; her husband, Sargent Shriver, was his father-in-law. He cannot run for president
as he is not a natural born citizen of the United States. In The Simpsons Movie (2007), he is portrayed as the president,
and in the Sylvester Stallone movie, Demolition Man (1993, ten years before his first run for political office),
it is revealed that a constitutional amendment was passed allowing Schwarzenegger to become president. Schwarzenegger
is a dual Austrian/United States citizen. He holds Austrian citizenship by birth and has held U.S. citizenship since
becoming naturalized in 1983. As an Austrian, and thus a European, he was able to win the 2007 European Voice campaigner
of the year award for taking action against climate change with the California Global Warming Solutions Act of 2006
and plans to introduce an emissions trading scheme with other US states and possibly with the EU. Schwarzenegger's
endorsement in the Republican primary of the 2008 U.S. presidential election was highly sought; despite being good
friends with candidates Rudy Giuliani and Senator John McCain, Schwarzenegger remained neutral throughout 2007 and
early 2008. Giuliani dropped out of the presidential race on January 30, 2008, largely because of a poor showing
in Florida, and endorsed McCain. Later that night, Schwarzenegger was in the audience at a Republican debate at the
Ronald Reagan Presidential Library in California. The following day, he endorsed McCain, joking, "It's Rudy's fault!"
(in reference to his friendships with both candidates and that he could not make up his mind). Schwarzenegger's endorsement
was thought to be a boost for Senator McCain's campaign; both spoke about their concerns for the environment and
economy. In its April 2010 report, the progressive ethics watchdog group Citizens for Responsibility and Ethics in Washington
named Schwarzenegger one of 11 "worst governors" in the United States because of various ethics issues throughout
Schwarzenegger's term as governor. Although he began his tenure as governor with record high approval ratings (as
high as 89% in December 2003), he left office with a record-low 23%, only one percentage point higher than Gray Davis's rating
when he was recalled in October 2003. During his initial campaign for governor, allegations of sexual and personal
misconduct were raised against Schwarzenegger, dubbed "Gropegate". Within the last five days before the election,
news reports appeared in the Los Angeles Times recounting allegations of sexual misconduct from several individual
women, six of whom eventually came forward with their personal stories. Three of the women claimed he had grabbed
their breasts, a fourth said he placed his hand under her skirt on her buttock. A fifth woman claimed Schwarzenegger
tried to take off her bathing suit in a hotel elevator, and the last said he pulled her onto his lap and asked her
about a sex act. Schwarzenegger admitted that he has "behaved badly sometimes" and apologized, but also stated that
"a lot of [what] you see in the stories is not true". This came after an interview in adult magazine Oui from 1977
surfaced, in which Schwarzenegger discussed attending sexual orgies and using substances such as marijuana. Schwarzenegger
is shown smoking a marijuana joint after winning Mr. Olympia in the 1975 documentary film Pumping Iron. In an interview
with GQ magazine in October 2007, Schwarzenegger said, "[Marijuana] is not a drug. It's a leaf. My drug was pumping
iron, trust me." His spokesperson later said the comment was meant to be a joke. British television personality Anna
Richardson settled a libel lawsuit in August 2006 against Schwarzenegger, his top aide, Sean Walsh, and his publicist,
Sheryl Main. A joint statement read: "The parties are content to put this matter behind them and are pleased that
this legal dispute has now been settled." Richardson claimed they tried to tarnish her reputation by dismissing her
allegations that Schwarzenegger touched her breast during a press event for The 6th Day in London. She claimed Walsh
and Main libeled her in a Los Angeles Times article when they contended she encouraged his behavior. Schwarzenegger
became a naturalized U.S. citizen on September 17, 1983. Shortly before he gained his citizenship, he asked the Austrian
authorities for the right to keep his Austrian citizenship, as Austria does not usually allow dual citizenship. His
request was granted, and he retained his Austrian citizenship. In 2005, Peter Pilz, a member of the Austrian Parliament
from the Austrian Green Party, demanded that Parliament revoke Schwarzenegger's Austrian citizenship due to his decision
not to prevent the executions of Donald Beardslee and Stanley Williams, causing damage of reputation to Austria,
where the death penalty has been abolished since 1968. This demand was based on Article 33 of the Austrian Citizenship
Act that states: "A citizen, who is in the public service of a foreign country, shall be deprived of his citizenship,
if he heavily damages the reputation or the interests of the Austrian Republic." Pilz claimed that Schwarzenegger's
actions in support of the death penalty (prohibited in Austria under Protocol 13 of the European Convention on Human
Rights) had indeed done damage to Austria's reputation. Schwarzenegger explained his actions by referring to the
fact that his only duty as Governor of California was to prevent an error in the judicial system. On September 27,
2006 Schwarzenegger signed a bill creating the nation's first cap on greenhouse gas emissions. The law set new regulations
on the amount of emissions utilities, refineries and manufacturing plants are allowed to release into the atmosphere.
Schwarzenegger also signed a second global warming bill that prohibits large utilities and corporations in California
from making long-term contracts with suppliers who do not meet the state's greenhouse gas emission standards. The
two bills are part of a plan to reduce California's emissions by 25 percent, to 1990 levels, by 2020. In 2005, Schwarzenegger
issued an executive order calling to reduce greenhouse gases to 80 percent below 1990 levels by 2050. Schwarzenegger
signed another executive order on October 17, 2006 allowing California to work with the Northeast's Regional Greenhouse
Gas Initiative. They plan to reduce carbon dioxide emissions by issuing a limited amount of carbon credits to each
power plant in participating states. Any power plants that exceed emissions for the amount of carbon credits will
have to purchase more credits to cover the difference. The plan took effect in 2009. In addition to using his political
power to fight global warming, the governor has taken steps at his home to reduce his personal carbon footprint.
Schwarzenegger has adapted one of his Hummers to run on hydrogen and another to run on biofuels. He has also installed
solar panels to heat his home. In recognition of his contribution to the direction of the US motor industry, Schwarzenegger
was invited to open the 2009 SAE World Congress in Detroit, on April 20, 2009. In October 2013, the New York Post
reported that Schwarzenegger was exploring a future run for president. The former California governor would face
a constitutional hurdle; Article II, Section I, Clause V nominally prevents individuals who are not natural-born
citizens of the United States from assuming the office. He has reportedly been lobbying legislators about a possible
constitutional change, or filing a legal challenge to the provision. Columbia University law professor Michael Dorf
observed that Schwarzenegger's possible lawsuit could ultimately win him the right to run for the office, noting,
"The law is very clear, but it’s not 100 percent clear that the courts would enforce that law rather than leave it
to the political process." Schwarzenegger has had a highly successful business career. Following his move to the
United States, Schwarzenegger became a "prolific goal setter", writing his objectives at the start of each year on index cards – like starting a mail order business or buying a new car – and then achieving them. By the age
of 30, Schwarzenegger was a millionaire, well before his career in Hollywood. His financial independence came from
his success as a budding entrepreneur with a series of successful business ventures and investments. In 1968, Schwarzenegger
and fellow bodybuilder Franco Columbu started a bricklaying business. The business flourished thanks to the pair's
marketing savvy and an increased demand following the 1971 San Fernando earthquake. Schwarzenegger and Columbu used
profits from their bricklaying venture to start a mail order business, selling bodybuilding and fitness-related equipment
and instructional tapes. Schwarzenegger rolled profits from the mail order business and his bodybuilding competition
winnings into his first real estate investment venture: an apartment building he purchased for $10,000. He would
later go on to invest in a number of real estate holding companies. Schwarzenegger was a founding celebrity investor
in the Planet Hollywood chain of international theme restaurants (modeled after the Hard Rock Cafe) along with Bruce
Willis, Sylvester Stallone and Demi Moore. Schwarzenegger severed his financial ties with the business in early 2000.
Schwarzenegger said the company had not had the success he had hoped for, claiming he wanted to focus his attention
on "new US global business ventures" and his movie career. He also invested in a shopping mall in Columbus, Ohio.
He has talked about some of those who have helped him over the years in business: "I couldn't have learned about
business without a parade of teachers guiding me... from Milton Friedman to Donald Trump... and now, Les Wexner and
Warren Buffett. I even learned a thing or two from Planet Hollywood, such as when to get out! And I did!" He has
significant ownership in Dimensional Fund Advisors, an investment firm. Schwarzenegger is also the owner of Arnold's
Sports Festival, which he started in 1989 and is held annually in Columbus, Ohio. The festival hosts thousands of international health and fitness professionals and has expanded into a three-day expo. He also owns a movie
production company called Oak Productions, Inc. and Fitness Publications, a joint publishing venture with Simon &
Schuster. In 1992, Schwarzenegger and his wife opened a restaurant in Santa Monica called Schatzi On Main. Schatzi
literally means "little treasure," colloquial for "honey" or "darling" in German. In 1998, he sold his restaurant.
Schwarzenegger's net worth had been conservatively estimated at $100–$200 million. After his separation from his wife, Maria Shriver, in 2011, his net worth was estimated at approximately $400 million, and even as high as $800 million, based on tax returns he filed in 2006. Over the years he has invested his bodybuilding and movie earnings in an array of stocks, bonds, privately controlled companies, and real estate holdings worldwide, making an accurate estimate of his net worth difficult to calculate, particularly in light of declining real estate values owing to economic recessions in the U.S. and Europe since the late 2000s. In June 1997, Schwarzenegger spent
$38 million of his own money on a private Gulfstream jet. Schwarzenegger once said of his fortune, "Money doesn't
make you happy. I now have $50 million, but I was just as happy when I had $48 million." He has also stated, "I've
made many millions as a businessman many times over." In 1969, Schwarzenegger met Barbara Outland (later Barbara
Outland Baker), an English teacher he lived with until 1974. Schwarzenegger talked about Barbara in his memoir in
1977: "Basically it came down to this: she was a well-balanced woman who wanted an ordinary, solid life, and I was
not a well-balanced man, and hated the very idea of ordinary life." Baker has described Schwarzenegger as "[a] joyful
personality, totally charismatic, adventurous, and athletic" but claims towards the end of the relationship he became
"insufferable – classically conceited – the world revolved around him". Baker published her memoir in 2006, entitled
Arnold and Me: In the Shadow of the Austrian Oak. Although Baker, at times, painted an unflattering portrait of her
former lover, Schwarzenegger actually contributed to the tell-all book with a foreword, and also met with Baker for
three hours. Baker claims, for example, that she only learned of his being unfaithful after they split, and talks
of a turbulent and passionate love life. Schwarzenegger has made it clear that their respective recollection of events
can differ. The couple first met six to eight months after his arrival in the U.S.; their first date was watching
the first Apollo Moon landing on television. They shared an apartment in Santa Monica for three and a half years,
and having little money, would visit the beach all day, or have barbecues in the back yard. Although Baker claims
that when she first met him, he had "little understanding of polite society" and she found him a turn-off, she says,
"He's as much a self-made man as it's possible to be – he never got encouragement from his parents, his family, his
brother. He just had this huge determination to prove himself, and that was very attractive … I'll go to my grave
knowing Arnold loved me." Schwarzenegger met his next paramour, Sue Moray, a Beverly Hills hairdresser's assistant,
on Venice Beach in July 1977. According to Moray, the couple led an open relationship: "We were faithful when we
were both in LA … but when he was out of town, we were free to do whatever we wanted." Schwarzenegger met Maria Shriver
at the Robert F. Kennedy Tennis Tournament in August 1977, and went on to have a relationship with both women until
August 1978, when Moray (who knew of his relationship with Shriver) issued an ultimatum. On April 26, 1986, Schwarzenegger
married television journalist Maria Shriver, niece of President John F. Kennedy, in Hyannis, Massachusetts. The Rev.
John Baptist Riordan performed the ceremony at St. Francis Xavier Catholic Church. They have four children: Katherine
Eunice Schwarzenegger (born December 13, 1989 in Los Angeles); Christina Maria Aurelia Schwarzenegger (born July
23, 1991 in Los Angeles); Patrick Arnold Shriver Schwarzenegger (born September 18, 1993 in Los Angeles); and Christopher
Sargent Shriver Schwarzenegger (born September 27, 1997 in Los Angeles). Schwarzenegger lives in an 11,000-square-foot
(1,000 m2) home in Brentwood. The divorcing couple currently own vacation homes in Sun Valley, Idaho and Hyannis
Port, Massachusetts. They attended St. Monica's Catholic Church. Following their separation, Schwarzenegger was reported to be dating physical therapist Heather Milligan. On May 9, 2011, Shriver and Schwarzenegger ended their
relationship after 25 years of marriage, with Shriver moving out of the couple's Brentwood mansion. On May 16, 2011,
the Los Angeles Times revealed that Schwarzenegger had fathered a son more than fourteen years earlier with an employee
in their household, Mildred Patricia 'Patty' Baena. "After leaving the governor's office I told my wife about this
event, which occurred over a decade ago," Schwarzenegger said in a statement issued to The Times. In the statement,
Schwarzenegger did not mention that he had confessed to his wife only after Shriver had confronted him with the information,
which she had done after confirming with the housekeeper what she had suspected about the child. Fifty-year-old Baena,
of Guatemalan origin, was employed by the family for 20 years and retired in January 2011. The pregnant Baena was
working in the home while Shriver was pregnant with the youngest of the couple’s four children. Baena's son with
Schwarzenegger, Joseph, was born on October 2, 1997; Shriver gave birth to Christopher on September 27, 1997. Schwarzenegger
says it took seven or eight years before he found out that he had fathered a child with his housekeeper. It wasn't
until the boy "started looking like me, that's when I kind of got it. I put things together," the action star and
former California governor told 60 Minutes. Schwarzenegger has taken financial responsibility for the child "from
the start and continued to provide support." KNX 1070 radio reported that in 2010 he bought a new four-bedroom house,
with a pool, for Baena and their son in Bakersfield, about 112 miles (180 km) north of Los Angeles. Baena separated
from her husband, Rogelio, in 1997, a few months after Joseph's birth, and filed for divorce in 2008. Baena's ex-husband
says that the child's birth certificate was falsified and that he plans to sue Schwarzenegger for engaging in conspiracy
to falsify a public document, a serious crime in California. Schwarzenegger has consulted an attorney, Bob Kaufman.
Kaufman has previously handled divorce cases for celebrities such as Jennifer Aniston and Reese Witherspoon. Schwarzenegger
will keep the Brentwood home as part of their divorce settlement and Shriver has purchased a new home nearby so that
the children may travel easily between their parents' homes. They will share custody of the two minor children. Schwarzenegger
came under fire after the initial petition did not include spousal support and a reimbursement of attorney's fees.
However, he claims this was not intentional and that he signed the initial documents without having properly read
them. Schwarzenegger has filed amended divorce papers remedying this. After the scandal, actress Brigitte Nielsen
came forward and stated that she too had an affair with Schwarzenegger while he was in a relationship with Shriver,
saying, "Maybe I wouldn't have got into it if he said 'I'm going to marry Maria' and this is dead serious, but he
didn't, and our affair carried on." When asked in 2014 "Of all the things you are famous for … which are you least
proud of?", Schwarzenegger replied "I'm least proud of the mistakes I made that caused my family pain and split us
up". Schwarzenegger was born with a bicuspid aortic valve, an aortic valve with only two leaflets (a normal aortic
valve has three leaflets). Schwarzenegger opted in 1997 for a replacement heart valve made of his own transplanted
tissue; medical experts predicted he would require heart valve replacement surgery in the following two to eight
years as his valve would progressively degrade. Schwarzenegger apparently opted against a mechanical valve, the only
permanent solution available at the time of his surgery, because it would have sharply limited his physical activity
and capacity to exercise. On January 8, 2006, while Schwarzenegger was riding his Harley Davidson motorcycle in Los
Angeles, with his son Patrick in the sidecar, another driver backed into the street he was riding on, causing him
and his son to collide with the car at a low speed. While his son and the other driver were unharmed, the governor
sustained a minor injury to his lip, requiring 15 stitches. "No citations were issued", said Officer Jason Lee, a
Los Angeles Police Department spokesman. Schwarzenegger did not obtain his motorcycle license until July 3, 2006.
Schwarzenegger tripped over his ski pole and broke his right femur while skiing in Sun Valley, Idaho, with his family
on December 23, 2006. On December 26, 2006, he underwent a 90-minute operation in which cables and screws were used
to wire the broken bone back together. He was released from the St. John's Health Center on December 30, 2006. Schwarzenegger's
private jet made an emergency landing at Van Nuys Airport on June 19, 2009, after the pilot reported smoke coming
from the cockpit, according to a statement released by the governor's press secretary. No one was harmed in the incident.
Schwarzenegger's official height of 6'2" (1.88 m) has been brought into question by several articles. In his bodybuilding
days in the late 1960s, he was measured to be 6'1.5" (1.87 m), a height confirmed by his fellow bodybuilders. However,
in 1988 both the Daily Mail and Time Out magazine mentioned that Schwarzenegger appeared noticeably shorter. Prior
to running for Governor, Schwarzenegger's height was once again questioned in an article by the Chicago Reader. As
Governor, Schwarzenegger engaged in a light-hearted exchange with Assemblyman Herb Wesson over their heights. At
one point, Wesson made an unsuccessful attempt to, in his own words, "settle this once and for all and find out how
tall he is" by using a tailor's tape measure on the Governor. Schwarzenegger retaliated by placing a pillow stitched
with the words "Need a lift?" on the chair of the five-foot-five-inch (165 cm) Wesson before a negotiating session in
his office. Bob Mulholland also claimed Schwarzenegger was 5'10" (1.78 m) and that he wore risers in his boots. In
1999, Men's Health magazine stated his height was 5'10". Schwarzenegger's autobiography, Total Recall, was released
in October 2012. He devotes one chapter, called "The Secret", to his extramarital affair. The majority of the book covers his successes in the three major phases of his life: bodybuilder, actor, and Governor of California. Schwarzenegger
was the first civilian to purchase a Humvee. He was so enamored of the vehicle that he lobbied the Humvee's manufacturer,
AM General, to produce a street-legal, civilian version, which they did in 1992; the first two Hummers they sold
were also purchased by Schwarzenegger. He was in the news in 2014 for buying a rare Bugatti Veyron Grand Sport Vitesse.
In the summer of 2015 he was spotted and filmed driving the car, which is painted silver and fitted with bright forged-aluminium wheels. Schwarzenegger's Bugatti has its interior adorned in dark brown leather. The Hummers that Schwarzenegger bought in 1992 are so large
– each weighs 6,300 lb (2,900 kg) and is 7 feet (2.1 m) wide – that they are classified as large trucks, and U.S.
fuel economy regulations do not apply to them. During the gubernatorial recall campaign he announced that he would
convert one of his Hummers to burn hydrogen. The conversion was reported to have cost about US$21,000. After the
election, he signed an executive order to jump-start the building of hydrogen refueling plants called the California
Hydrogen Highway Network, and gained a U.S. Department of Energy grant to help pay for its projected US$91,000,000
cost. California took delivery of the first H2H (Hydrogen Hummer) in October 2004. Arnold Schwarzenegger has been
involved for many years with the Special Olympics, which were founded by his ex-mother-in-law, Eunice Kennedy
Shriver. In 2007, Schwarzenegger was the official spokesperson for the Special Olympics which were held in Shanghai,
China. Schwarzenegger believes that quality school opportunities should be made available to children who might not
normally be able to access them. In 1995, he founded the Inner City Games Foundation (ICG) which provides cultural,
educational and community enrichment programming to youth. ICG is active in 15 cities around the country and serves
over 250,000 children in over 400 schools countrywide. He has also been involved with After-School All-Stars, and
founded the Los Angeles branch in 2002. ASAS is an after school program provider, educating youth about health, fitness
and nutrition. In 2012, Schwarzenegger helped to found the Schwarzenegger Institute for State and Global Policy,
which is a part of the USC Sol Price School of Public Policy at the University of Southern California. The Institute's
mission is to "[advance] post-partisanship, where leaders put people over political parties and work together to
find the best ideas and solutions to benefit the people they serve," and to "seek to influence public policy and
public debate in finding solutions to the serious challenges we face." Schwarzenegger serves as chairman of the Institute.
Plymouth (i/ˈplɪməθ/) is a city on the south coast of Devon, England, about 37 miles (60 km) south-west of Exeter and 190
miles (310 km) west-south-west of London, between the mouths of the rivers Plym to the east and Tamar to the west
where they join Plymouth Sound to form the boundary with Cornwall. Plymouth's early history extends to the Bronze
Age, when a first settlement emerged at Mount Batten. This settlement continued as a trading post for the Roman Empire,
until it was surpassed by the more prosperous village of Sutton, now called Plymouth. In 1620, the Pilgrim Fathers
departed Plymouth for the New World and established Plymouth Colony – the second English settlement in what is now
the United States of America. During the English Civil War the town was held by the Parliamentarians and was besieged
between 1642 and 1646. Throughout the Industrial Revolution, Plymouth grew as a commercial shipping port, handling
imports and passengers from the Americas, and exporting local minerals (tin, copper, lime, china clay and arsenic)
while the neighbouring town of Devonport became a strategic Royal Naval shipbuilding and dockyard town. In 1914 three
neighbouring independent towns, viz., the county borough of Plymouth, the county borough of Devonport, and the urban
district of East Stonehouse were merged to form a single County Borough. The combined town took the name of Plymouth
which, in 1928, achieved city status. The city's naval importance later led to its targeting and partial destruction
during World War II, an act known as the Plymouth Blitz. After the war the city centre was completely rebuilt and
subsequent expansion led to the incorporation of Plympton and Plymstock along with other outlying suburbs in 1967.
The city is home to 261,546 (mid-2014 est.) people, making it the 30th most populous built-up area in the United
Kingdom. It is governed locally by Plymouth City Council and is represented nationally by three MPs. Plymouth's economy
remains strongly influenced by shipbuilding and seafaring including ferry links to Brittany (Roscoff and St Malo)
and Spain (Santander), but has tended toward a service-based economy since the 1990s. It has the largest operational
naval base in Western Europe – HMNB Devonport and is home to Plymouth University. Upper Palaeolithic deposits, including
bones of Homo sapiens, have been found in local caves, and artefacts dating from the Bronze Age to the Middle Iron
Age have been found at Mount Batten showing that it was one of the main trading ports of the country at that time.
An unidentified settlement named 'TAMARI OSTIA' (mouth/estuaries of the Tamar) is listed in Ptolemy's Geographia
and is presumed to be located in the area of the modern city. The settlement of Plympton, further up the River Plym
than the current Plymouth, was also an early trading port, but the river silted up in the early 11th century and
forced the mariners and merchants to settle at the current day Barbican near the river mouth. At the time this village
was called Sutton, meaning south town in Old English. The name Plym Mouth, meaning "mouth of the River Plym" was
first mentioned in a Pipe Roll of 1211. The name Plymouth first officially replaced Sutton in a charter of King Henry
VI in 1440. See Plympton for the derivation of the name Plym. During the Hundred Years' War a French attack (1340)
burned a manor house and took some prisoners, but failed to get into the town. In 1403 the town was burned by Breton
raiders. In the late fifteenth century a 'castle quadrate' was constructed close to the area now known as The Barbican;
it included four round towers, one at each corner, as featured on the city coat of arms. The castle served to protect
Sutton Pool, which is where the fleet was based in Plymouth prior to the establishment of Plymouth Dockyard. In 1512
an Act of Parliament was passed for further fortifying Plymouth, and a series of fortifications were then built,
including defensive walls at the entrance to Sutton Pool (across which a chain would be extended in time of danger).
Defences on St Nicholas Island also date from this time, and a string of six artillery blockhouses were built, including
one on Fishers Nose at the south-eastern corner of the Hoe. This location was further strengthened by the building
of a fort (later known as Drake's Fort) in 1596, which itself went on to provide the site for the Citadel, established
in the 1660s (see below). During the 16th century locally produced wool was the major export commodity. Plymouth
was the home port for successful maritime traders, among them Sir John Hawkins, who led England's first foray into
the Atlantic slave trade, as well as Sir Francis Drake, Mayor of Plymouth in 1581 and 1593. According to legend,
Drake insisted on completing his game of bowls on the Hoe before engaging the Spanish Armada in 1588. In 1620 the
Pilgrim Fathers set sail for the New World from Plymouth, establishing Plymouth Colony – the second English colony
in what is now the United States of America. During the English Civil War Plymouth sided with the Parliamentarians
and was besieged for almost four years by the Royalists. The last major attack by the Royalist was by Sir Richard
Grenville leading thousands of soldiers towards Plymouth, but they were defeated by the Plymothians at Freedom Fields
Park. The civil war ended as a Parliamentary win, but monarchy was restored by King Charles II in 1660, who imprisoned
many of the Parliamentary heroes on Drake's Island. Construction of the Royal Citadel began in 1665, after the Restoration;
it was armed with cannon facing both out to sea and into the town, rumoured to be a reminder to residents not to
oppose the Crown. Mount Batten tower also dates from around this time. Throughout the 17th century Plymouth had gradually
lost its pre-eminence as a trading port. By the mid-17th century commodities manufactured elsewhere in England cost
too much to transport to Plymouth and the city had no means of processing sugar or tobacco imports, although it did
play a relatively small part in the Atlantic slave trade during the early 18th century. In the nearby parish of Stoke
Damerel the first dockyard, HMNB Devonport, opened in 1690 on the eastern bank of the River Tamar. Further docks
were built here in 1727, 1762 and 1793. The settlement that developed here was called "Dock" or "Plymouth Dock" at
the time, and a new town, separate from Plymouth, grew up. In 1712 there were 318 men employed and by 1733 it had
grown to a population of 3,000 people. Before the latter half of the 18th century, grain, timber and then coal were
Plymouth's main imports. During this time the real source of wealth was from the neighbouring town of Plymouth Dock
(renamed in 1824 to Devonport) and the major employer in the entire region was the dockyard. The Three Towns conurbation
of Plymouth, Stonehouse and Devonport enjoyed some prosperity during the late 18th and early 19th century and were
enriched by a series of neo-classical urban developments designed by London architect John Foulston. Foulston was
important for both Devonport and Plymouth and was responsible for several grand public buildings, many now destroyed,
including the Athenaeum, the Theatre Royal and Royal Hotel, and much of Union Street. Local chemist William Cookworthy
established his somewhat short-lived Plymouth Porcelain venture in 1768 to exploit the recently discovered deposits
of local china clay, an industry which continues to contribute a portion of the city's income. As an associate and host
of engineer John Smeaton he was indirectly involved with the development of the Eddystone Lighthouse. The 1-mile-long
(2 km) Breakwater in Plymouth Sound was designed by John Rennie in order to protect the fleet moving in and out of
Devonport; work started in 1812. Numerous technical difficulties and repeated storm damage meant that it was not
completed until 1841, twenty years after Rennie's death. In the 1860s, a ring of Palmerston forts was constructed
around the outskirts of Devonport, to protect the dockyard from attack from any direction. Some of the greatest imports
to Plymouth from the Americas and Europe during the latter half of the 19th century included maize, wheat, barley,
sugar cane, guano, sodium nitrate and phosphate. Aside from the dockyard in the town of Devonport, industries in Plymouth
such as the gasworks, the railways and tramways and a number of small chemical works had begun to develop in the
19th century, continuing into the 20th century. During the First World War, Plymouth was the port of entry for many
troops from around the Empire and also developed as a facility for the manufacture of munitions. Although major units
of the Royal Navy moved to the safety of Scapa Flow, Devonport was an important base for escort vessels and repairs.
Flying boats operated from Mount Batten. In the Second World War, Devonport was the headquarters of Western Approaches
Command until 1941 and Sunderland flying boats were operated by the Royal Australian Air Force. It was an important
embarkation point for US troops for D-Day. The city was heavily bombed by the Luftwaffe, in a series of 59 raids
known as the Plymouth Blitz. Although the dockyards were the principal targets, much of the city centre and over
3,700 houses were completely destroyed and more than 1,000 civilians lost their lives, largely because of Plymouth's
status as a major port. Charles Church was hit by incendiary bombs and partially destroyed in 1941 during the Blitz,
but has not been demolished; it now stands as an official permanent monument to the bombing of Plymouth during World
War II. The redevelopment of the city was planned by Sir Patrick Abercrombie in his 1943 Plan for Plymouth whilst
simultaneously working on the reconstruction plan for London. Between 1951 and 1957 over 1000 homes were completed
every year, mostly using innovative prefabricated systems of just three main types; by 1964 over 20,000 new homes
had been built, transforming the dense, overcrowded and unsanitary slums of the pre-war city into a low-density, dispersed
suburbia. Most of the city centre shops had been destroyed, and those that remained were cleared to enable a zoned
reconstruction according to his plan. In 1962 the modernist high-rise Civic Centre was constructed, an architecturally
significant example of the mid-twentieth-century civic slab-and-tower set piece; it was allowed to fall into disrepair by its
owner, Plymouth City Council, but was recently Grade II listed by English Heritage to prevent its demolition. Post-war,
Devonport Dockyard was kept busy refitting aircraft carriers such as the Ark Royal and, later, nuclear submarines
while new light industrial factories were constructed in the newly zoned industrial sector attracting rapid growth
of the urban population. The army had substantially left the city by 1971, with barracks pulled down in the 1960s;
however, the city remains home to 42 Commando of the Royal Marines. The first record of the existence of a settlement
at Plymouth was in the Domesday Book in 1086 as Sudtone, Saxon for south farm, located at the present day Barbican.
From Saxon times, it was in the hundred of Roborough. In 1254 it gained status as a town and in 1439, became the
first town in England to be granted a Charter by Parliament. Between 1439 and 1934, Plymouth had a Mayor. In 1914
the county boroughs of Plymouth and Devonport, and the urban district of East Stonehouse merged to form a single
county borough of Plymouth. Collectively they were referred to as "The Three Towns". In 1919 Nancy Astor was elected
the first female member of parliament to take her seat in the British House of Commons, for the constituency
of Plymouth Sutton. Taking over the seat from her husband, Waldorf Astor, Lady Astor was a vibrantly active campaigner
for her constituents. Plymouth was granted city status on 18 October 1928. The city's first Lord Mayor
was appointed in 1935 and its boundaries further expanded in 1967 to include the town of Plympton and the parish
of Plymstock. In 1945, Plymouth-born Michael Foot was elected Labour MP for the war-torn constituency of Plymouth
Devonport; after serving as Secretary of State for Employment, in which role he was responsible for the Health and Safety at
Work etc. Act 1974, he went on to become one of the most distinguished leaders of the Labour Party. The 1971 Local Government
White Paper proposed abolishing county boroughs, which would have left Plymouth, a town of 250,000 people, being
administered from a council based at the smaller Exeter, on the other side of the county. This led to Plymouth lobbying
for the creation of a Tamarside county, to include Plymouth, Torpoint, Saltash, and the rural hinterland. The campaign
was not successful, and Plymouth ceased to be a county borough on 1 April 1974 with responsibility for education,
social services, highways and libraries transferred to Devon County Council. All powers returned when the city became
a unitary authority on 1 April 1998 under recommendations of the Banham Commission. In the Parliament of the United
Kingdom, Plymouth is represented by the three constituencies of Plymouth Moor View, Plymouth Sutton and Devonport
and South West Devon and within the European Parliament as South West England. In the 2015 general election all three
constituencies returned Conservative MPs: Oliver Colvile (for Sutton and Devonport), Gary Streeter (for South West Devon)
and Johnny Mercer (for Moor View). The City of Plymouth is divided into 20 wards, 17 of which elect three councillors
and three of which elect two, making a total council of 57. A third of the council is up for election in each of three
consecutive years, with no elections in the fourth year, when County Council elections take place. The total electorate
for Plymouth was 188,924 in April 2015.
The local election of 7 May 2015 resulted in a political composition of 28 Labour councillors, 26 Conservative and
3 UKIP resulting in a Labour administration. Plymouth City Council is formally twinned with: Brest, France (1963),
Gdynia, Poland (1976), Novorossiysk, Russia (1990), San Sebastián, Spain (1990) and Plymouth, United States (2001).
Plymouth was granted the dignity of Lord Mayor by King George V in 1935. The position is elected each year by a group
of six councillors. It is traditional that the position of the Lord Mayor alternates between the Conservative Party
and the Labour Party annually and that the Lord Mayor chooses the Deputy Lord Mayor. Conservative councillor Dr John
Mahony is the incumbent for 2015–16. The Lord Mayor's official residence is 3 Elliot Terrace, located on the Hoe.
Once a home of Waldorf and Nancy Astor, it was given by Lady Astor to the City of Plymouth as an official residence
for future Lord Mayors and is also used today for civic hospitality, as lodgings for visiting dignitaries and High
Court judges and it is also available to hire for private events. The Civic Centre municipal office building in Armada
Way became a listed building in June 2007 because of its quality and period features, but has become the centre of
a controversy as the council planned for its demolition estimating that it could cost £40m to refurbish it, resulting
in possible job losses. Plymouth lies between the River Plym to the east and the River Tamar to the west; both rivers
flow into the natural harbour of Plymouth Sound. Since 1967, the unitary authority of Plymouth has included the
once-independent towns of Plympton and Plymstock, which lie to the east of the River Plym. The River Tamar forms
the county boundary between Devon and Cornwall and its estuary forms the Hamoaze on which is sited Devonport Dockyard.
The River Plym, which flows off Dartmoor to the north-east, forms a smaller estuary to the east of the city called
Cattewater. Plymouth Sound is protected from the sea by the Plymouth Breakwater, in use since 1814. In the Sound
is Drake's Island which is seen from Plymouth Hoe, a flat public area on top of limestone cliffs. The Unitary Authority
of Plymouth is 79.84 square kilometres (30.83 sq mi). The topography rises from sea level to a height, at Roborough,
of about 509 feet (155 m) above Ordnance Datum (AOD). Geologically, Plymouth has a mixture of Devonian
slate, granite and Middle Devonian limestone. Plymouth Sound, Shores and Cliffs is a Site of Special Scientific Interest,
because of its geology. The bulk of the city is built upon Upper Devonian slates and shales and the headlands at
the entrance to Plymouth Sound are formed of Lower Devonian slates, which can withstand the power of the sea. A band
of Middle Devonian limestone runs west to east from Cremyll to Plymstock including the Hoe. Local limestone may be
seen in numerous buildings, walls and pavements throughout Plymouth. To the north and north east of the city is the
granite mass of Dartmoor; the granite was mined and exported via Plymouth. Rocks brought down the Tamar from Dartmoor
include ores containing tin, copper, tungsten, lead and other minerals. There is evidence that the middle Devonian
limestone belt at the south edge of Plymouth and in Plymstock was quarried at West Hoe, Cattedown and Radford. On
27 April 1944 Sir Patrick Abercrombie's Plan for Plymouth to rebuild the bomb-damaged city was published; it called
for demolition of the few remaining pre-War buildings in the city centre to make way for their replacement with wide,
parallel, modern boulevards aligned east–west linked by a north–south avenue (Armada Way) linking the railway station
with the vista of Plymouth Hoe. A peripheral road system connecting the historic Barbican on the east and Union Street
to the west determines the principal form of the city centre, even following pedestrianisation of the shopping centre
in the late 1980s, and continues to inform the present 'Vision for Plymouth' developed by a team led by Barcelona-based
architect David MacKay in 2003 which calls for revivification of the city centre with mixed-use and residential.
In suburban areas, post-War prefabs had already begun to appear by 1946, and over 1,000 permanent council houses
were built each year from 1951–57 according to the Modernist zoned low-density garden city model advocated by Abercrombie.
By 1964 over 20,000 new homes had been built, more than 13,500 of them permanent council homes and 853 built by the
Admiralty. Plymouth is home to 28 parks with an average size of 45,638 square metres (491,240 sq ft). Its largest
park is Central Park, with other sizeable green spaces including Victoria Park, Freedom Fields Park, Alexandra Park,
Devonport Park and the Hoe. Along with the rest of South West England, Plymouth has a temperate oceanic climate (Köppen
Cfb) which is generally wetter and milder than the rest of England. This means a wide range of exotic plants can
be grown. The annual mean temperature is approximately 11 °C (52 °F). Due to the modifying effect of the sea, the
seasonal range is smaller than in most other parts of the UK. As a result, summer highs are lower than the city's southerly
latitude would suggest, while the coldest month, February, has mean minimum temperatures as mild as
between 3 and 4 °C (37 and 39 °F). Snow is rare, usually amounting to no more than a few flakes, but there have been
exceptions, notably the European winter storms of 2009–10, which in early January covered Plymouth in at least 1
inch (2.5 cm) of snow; more on higher ground. Another period of notable snow occurred from 17–19 December 2010 when
up to 8 inches (20 cm) of snow fell through the period – though only 2 inches (5.1 cm) would lie at any one time
due to melt. Over the 1961–1990 period, annual snowfall accumulation averaged less than 7 cm (3 in) per year. July
and August are the warmest months with mean daily maxima over 19 °C (66 °F). Rainfall tends to be associated with
Atlantic depressions or with convection. The Atlantic depressions are more vigorous in autumn and winter and most
of the rain which falls in those seasons in the south-west is from this source. Average annual rainfall is around
980 millimetres (39 in). November to March have the highest mean wind speeds, with June to August having the lightest
winds. The predominant wind direction is from the south-west. South West England has a favoured location when the
Azores High pressure area extends north-eastwards towards the UK, particularly in summer. Coastal areas have average
annual sunshine totals over 1,600 hours. Typically, the warmest day of the year (1971–2000) will achieve a temperature
of 26.6 °C (80 °F), although in June 1976 the temperature reached 31.6 °C (89 °F), the site record. On average, 4.25
days of the year will report a maximum temperature of 25.1 °C (77 °F) or above. During the winter half of the year,
the coldest night will typically fall to −4.1 °C (25 °F) although in January 1979 the temperature fell to −8.8 °C
(16 °F). Typically, 18.6 nights of the year will register an air frost. The University of Plymouth enrolls 25,895
total students as of 2014/15 (22nd largest in the UK out of 165). It also employs 3,000 staff with an annual income
of around £160 million. It was founded in 1992 from Polytechnic South West (formerly Plymouth Polytechnic) following
the Further and Higher Education Act 1992. It has courses in maritime business, marine engineering, marine biology
and Earth, ocean and environmental sciences, surf science, shipping and logistics. The university formed a joint
venture with the fellow Devonian University of Exeter in 2000, establishing the Peninsula College of Medicine and
Dentistry. The college is ranked 8th out of 30 universities in the UK in 2011 for medicine. Its dental school was
established in 2006; it also provides free dental care in an attempt to improve access in the South
West. The University of St Mark & St John (known as "Marjon" or "Marjons") specialises in teacher training, and offers
training across the country and abroad. The city is also home to two large colleges. The City College Plymouth provides
courses from the most basic to Foundation degrees for approximately 26,000 students. Plymouth College of Art offers
a selection of courses including media. Founded in 1856, it is now one of only four independent colleges
of art and design in the UK. Plymouth also has 71 state primary phase schools, 13 state secondary schools, eight
special schools and three selective state grammar schools, Devonport High School for Girls, Devonport High School
for Boys and Plymouth High School for Girls. There is also an independent school Plymouth College. The city was also
home to the Royal Naval Engineering College; opened in 1880 in Keyham, it trained engineering students for five years
before they completed the remaining two years of the course at Greenwich. The college closed in 1910, but in 1940
a new college opened at Manadon. This was renamed Dockyard Technical College in 1959 before finally closing in 1994;
training was transferred to the University of Southampton. Plymouth is home to the Marine Biological Association
of the United Kingdom (MBA) which conducts research in all areas of the marine sciences. The Plymouth Marine Laboratory
is an offshoot of the MBA. Together with the National Marine Aquarium, the Sir Alister Hardy Foundation for Ocean
Sciences, Plymouth University's Marine Institute and the Diving Diseases Research Centre, these marine-related organisations
form the Plymouth Marine Sciences Partnership. The Plymouth Marine Laboratory focuses on global issues of
climate change and sustainability. It monitors the effects of ocean acidity on corals and shellfish and reports the
results to the UK government. It also cultivates algae that could be used to make biofuels or in the treatment of
waste water by using technology such as photo-bioreactors. It works alongside the Boots Group to investigate the
use of algae in skin care products, taking advantage of the chemicals the algae produce to protect themselves
from the sun. At the 2011 Census, the Office for National Statistics reported that Plymouth's unitary authority
area population was 256,384, an increase of 15,664 on the 2001 census, which had recorded that Plymouth
had a population of 240,720. The Plymouth urban area had a population of 260,203 in 2011 (the urban sprawl which
extends outside the authority's boundaries). The city's average household size was 2.3 persons. At the time of the
2011 UK census, the ethnic composition of Plymouth's population was 96.2% White (of which 92.9% was White British), with
the largest minority ethnic group being Chinese at 0.5%. The white Irish ethnic group saw the largest decline in
its share of the population since the 2001 Census (−24%), while the Other Asian and Black African groups saw the largest
increases (360% and 351% respectively). This excludes the two new ethnic groups added to the 2011 census of Gypsy
or Irish Traveller and Arab. The population rose rapidly during the second half of the 19th century, but declined
by over 1.6% from 1931 to 1951. Plymouth's gross value added (a measure of the size of its economy) was £5,169 million
in 2013, making up 25% of Devon's GVA. Its GVA per person was £19,943, which was £3,812 below the national average of
£23,755. Plymouth's unemployment rate was 7.0% in 2014, 2.0 points higher than the South
West average and 0.8 points higher than the average for Great Britain (England, Wales and Scotland). A 2014 profile
by the National Health Service showed Plymouth had higher than average levels of poverty and deprivation (26.2% of
population among the poorest 20.4% nationally). Life expectancy, at 78.3 years for men and 82.1 for women, was the
lowest of any region in the South West of England. Because of its coastal location, the economy of Plymouth has traditionally
been maritime, in particular the defence sector with over 12,000 people employed and approximately 7,500 in the armed
forces. The Plymouth Gin Distillery has been producing Plymouth Gin since 1793, which was exported around the world
by the Royal Navy. During the 1930s, it was the most widely distributed gin and has a controlled term of origin.
Since the 1980s, employment in the defence sector has decreased substantially and the public sector is now prominent
particularly in administration, health, education, medicine and engineering. Devonport Dockyard is the UK's only
naval base that refits nuclear submarines and the Navy estimates that the Dockyard generates about 10% of Plymouth's
income. Plymouth has the largest cluster of marine and maritime businesses in the south west with 270 firms operating
within the sector. Other substantial employers include the university with almost 3,000 staff, as well as the Tamar
Science Park employing 500 people in 50 companies. Several employers have chosen to locate their headquarters in
Plymouth, including Hemsley Fraser. Plymouth has a post-war shopping area in the city centre with substantial pedestrianisation.
At the west end of the zone inside a grade II listed building is the Pannier Market that was completed in 1959 –
pannier meaning "basket" from French, so it translates as "basket market". In terms of retail floorspace, Plymouth
is ranked in the top five in the South West, and 29th nationally. Plymouth was one of the first ten British cities
to trial the new Business Improvement District initiative. The Tinside Pool is situated at the foot of the Hoe and
became a grade II listed building in 1998 before being restored to its 1930s look for £3.4 million. Plymouth Council
is currently undertaking a project of urban redevelopment called the "Vision for Plymouth" launched by the architect
David Mackay and backed by both Plymouth City Council and the Plymouth Chamber of Commerce (PCC). Its projects range
from shopping centres, a cruise terminal and a boulevard to increasing the population to 300,000 and building 33,000
dwellings. In 2004 the old Drake Circus shopping centre and Charles Cross car park were demolished and replaced by
the latest Drake Circus Shopping Centre, which opened in October 2006. It received negative feedback before opening
when David Mackay said it was already "ten years out of date". In contrast, the Theatre Royal's production and education
centre, TR2, which was built on wasteland at Cattedown, was a runner-up for the RIBA Stirling Prize for Architecture
in 2003. There is a project involving the future relocation of Plymouth City Council's headquarters, the civic centre,
to the current location of the Bretonside bus station; it would involve both the bus station and civic centre being
demolished and rebuilt together at that location, with the land from the civic centre being sold off. Other suggestions
include the demolition of the Plymouth Pavilions entertainment arena to create a canal "boulevard" linking Millbay
to the city centre. Millbay is being regenerated with mixed residential, retail and office space alongside the ferry
port. The A38 dual-carriageway runs from east to west across the north of the city. Within the city it is designated
as 'The Parkway' and represents the boundary between the urban parts of the city and the generally more recent suburban
areas. Heading east, it connects Plymouth to the M5 motorway about 40 miles (65 km) away near Exeter; and heading
west it connects Cornwall and Devon via the Tamar Bridge. Regular bus services are provided by Plymouth Citybus,
First South West and Target Travel. There are three Park and ride services located at Milehouse, Coypool (Plympton)
and George Junction (Plymouth City Airport), which are operated by First South West. A regular international ferry
service provided by Brittany Ferries operates from Millbay taking cars and foot passengers directly to France (Roscoff)
and Spain (Santander) on the three ferries, MV Armorique, MV Bretagne and MV Pont-Aven. There is a passenger ferry
between Stonehouse and the Cornish hamlet of Cremyll, which is believed to have operated continuously since 1204.
There is also a pedestrian ferry from the Mayflower Steps to Mount Batten, and an alternative to using the Tamar
Bridge via the Torpoint Ferry (vehicle and pedestrian) across the River Tamar. The city's airport was Plymouth City Airport, about 4 miles (6 km) north of the city centre. The airport was home to the local airline Air Southwest, which
operated flights across the United Kingdom and Ireland. In June 2003, a report by the South West RDA was published
looking at the future of aviation in the south-west and the possible closure of airports. It concluded that the best
option for the south-west was to close Plymouth City Airport and expand Exeter International Airport and Newquay
Cornwall Airport, although it did conclude that this was not the best option for Plymouth. In April 2011, it was
announced that the airport would close, which it did on 23 December. However, FlyPlymouth plans to reopen the city airport by 2018, providing daily services to various destinations including London. Plymouth railway station,
which opened in 1877, is managed by Great Western Railway and also sees trains on the CrossCountry network. Smaller
stations are served by local trains on the Tamar Valley Line and Cornish Main Line. First Great Western has recently come under fire for widespread rail service cuts across the south-west, which affect Plymouth greatly. Three MPs from the three main political parties in the region have lobbied on the grounds that the train services are vital to the regional economy. There are calls for the former LSWR Exeter to Plymouth railway to be reopened to connect Cornwall and Plymouth to the rest of the UK railway system on an all-weather basis. There are proposals to reopen the line from Tavistock to Bere Alston for
a through service to Plymouth. On the night of 4 February 2014, amid high winds and extremely rough seas, part of
the sea wall at Dawlish was breached washing away around 40 metres (130 ft) of the wall and the ballast under the
railway immediately behind. The line was closed. Network Rail began repair work and the line reopened on 4 April
2014. In the wake of widespread disruption caused by damage to the mainline track at Dawlish by coastal storms in
February 2014, Network Rail are considering reopening the Tavistock to Okehampton and Exeter section of the line
as an alternative to the coastal route. Plymouth has about 150 churches and its Roman Catholic cathedral (1858) is
in Stonehouse. The city's oldest church is St Andrew's (Anglican) located at the top of Royal Parade—it is the largest
parish church in Devon and has been a site of gathering since AD 800. The city also includes five Baptist churches,
over twenty Methodist chapels, and thirteen Roman Catholic churches. In 1831 the first Brethren assembly in England,
a movement of conservative non-denominational Evangelical Christians, was established in the city, so that Brethren
are often called Plymouth Brethren, although the movement did not begin locally. Plymouth has the first known reference
to Jews in the South West from Sir Francis Drake's voyages in 1577 to 1580, as his log mentioned "Moses the Jew"
– a man from Plymouth. The Plymouth Synagogue is a Grade II* listed building, built in 1762, and is the oldest Ashkenazi synagogue in the English-speaking world. There are also places of worship for Islam, Bahá'í, Buddhism, Unitarianism,
Chinese beliefs and Humanism. 58.1% of the population described themselves in the 2011 census return as being at
least nominally Christian and 0.8% as Muslim with all other religions represented by less than 0.5% each. The portion
of people without a religion is 32.9%, above the national average of 24.7%. A further 7.1% did not state their religious belief.
Since the 2001 Census, the number of Christians and Jews has decreased (-16% and -7% respectively), while all other
religions have increased and non-religious people have almost doubled in number. Built in 1815, Union Street was
at the heart of Plymouth's historical culture. It became known as the servicemen's playground, as it was where sailors
from the Royal Navy would seek entertainment of all kinds. During the 1930s, there were 30 pubs and it attracted
such performers as Charlie Chaplin to the New Palace Theatre. It is now the late-night hub of Plymouth's entertainment
strip, but has a reputation for trouble at closing hours. Outdoor events and festivals are held including the annual
British Firework Championships in August, which attracts tens of thousands of people to the waterfront. In August 2006 the world record for the most simultaneous fireworks was broken by Roy Lowry of the University of Plymouth, over Plymouth Sound. Since 1992 the Music of the Night has been performed in the Royal Citadel by the
29 Commando Regiment and local performers to raise money for local and military charities. The city's main theatres
are the Theatre Royal (1,315 capacity), its Drum Theatre (200 capacity), and its production and creative learning
centre, TR2. The Plymouth Pavilions is a multi-use venue for the city, staging music concerts, basketball matches
and stand-up comedy. There are also three cinemas: Reel Cinema at Derrys Cross, Plymouth Arts Centre at Looe Street
and a Vue cinema at the Barbican Leisure Park. The Plymouth City Museum and Art Gallery, which has six galleries, is operated by Plymouth City Council with free admission. The Plymouth Athenaeum, which includes a local interest library,
is a society dedicated to the promotion of learning in the fields of science, technology, literature and art. From
1961 to 2009 it also housed a theatre. Plymouth is the regional television centre of BBC South West. A team of journalists is headquartered at Plymouth for the ITV West Country regional station, after a merger with ITV West forced ITV Westcountry to close on 16 February 2009. The main local newspapers serving Plymouth are The Herald and Western Morning News; the main local radio stations are Radio Plymouth, BBC Radio Devon, Heart South West and Pirate FM.
Plymouth is home to Plymouth Argyle F.C., who play in the fourth tier of English football, Football League Two. The team's home ground, Home Park, is located in Central Park. The club links itself with the group of English non-conformists that left Plymouth for the New World in 1620: its nickname is "The Pilgrims". The city
also has four Non-League football clubs; Plymouth Parkway F.C. who play at Bolitho Park, Elburton Villa F.C. who
play at Haye Road, Vospers Oak Villa F.C. who play at Weston Mill and Plymstock United F.C. who play at Deans Cross.
All four clubs play in the South West Peninsula League. Other sports clubs include Plymouth Albion R.F.C. and the
Plymouth Raiders basketball club. Plymouth Albion Rugby Football Club is a rugby union club that was founded in 1875 and currently competes in the third tier of professional English rugby. They play at the Brickfields. Plymouth
Raiders play in the British Basketball League – the top tier of British basketball. They play at the Plymouth Pavilions
entertainment arena and were founded in 1983. Plymouth cricket club was formed in 1843; its current 1st XI plays in the Devon Premier League. Plymouth Devils are a speedway team in the British Premier League. Plymouth was home to
an American football club, the Plymouth Admirals until 2010. Plymouth is also home to Plymouth Marjons Hockey Club,
with their 1st XI playing in the National League last season. Plymouth is an important centre for watersports, especially
scuba diving and sailing. The Port of Plymouth Regatta is one of the oldest regattas in the world, and has been held
regularly since 1823. In September 2011, Plymouth hosted the America's Cup World Series for nine days. Since 1973
Plymouth has been supplied water by South West Water. Prior to the 1973 takeover it was supplied by Plymouth County
Borough Corporation. Before the 19th century two leats were built in order to provide drinking water for the town.
They carried water from Dartmoor to Plymouth. A watercourse, known as Plymouth or Drake's Leat, was opened on 24
April 1591 to tap the River Meavy. The Devonport Leat was constructed to carry fresh drinking water to the expanding
town of Devonport and its ever growing dockyard. It was fed by three Dartmoor rivers: The West Dart, Cowsic and Blackabrook.
It seems to have been carrying water since 1797, but it was officially completed in 1801. It was originally designed
to carry water to Devonport town, but has since been shortened and now carries water to Burrator Reservoir, which
feeds most of the water supply of Plymouth. Burrator Reservoir is located about 5 miles (8 km) north of the city
and was constructed in 1898 and expanded in 1928. Plymouth City Council is responsible for waste management throughout
the city and South West Water is responsible for sewerage. Plymouth's electricity is supplied from the National Grid
and distributed to Plymouth via Western Power Distribution. On the outskirts of Plympton is a combined-cycle gas-fired power station, the Langage Power Station, which started to produce electricity for Plymouth at the end of 2009. Her Majesty's
Courts Service provide a Magistrates' Court and a Combined Crown and County Court in the city. The Plymouth Borough
Police, formed in 1836, eventually became part of Devon and Cornwall Constabulary. There are police stations at Charles
Cross and Crownhill (the Divisional HQ) and smaller stations at Plympton and Plymstock. The city has one of the Devon
and Cornwall Area Crown Prosecution Service Divisional offices. Plymouth has five fire stations, located at Camel's Head, Crownhill, Greenbank, Plympton and Plymstock, as part of Devon and Somerset Fire and Rescue Service. The
Royal National Lifeboat Institution have an Atlantic 85 class lifeboat and Severn class lifeboat stationed at Millbay
Docks. Plymouth is served by Plymouth Hospitals NHS Trust and the city's NHS hospital is Derriford Hospital 4 miles
(6 km) north of the city centre. The Royal Eye Infirmary is located at Derriford Hospital. South Western Ambulance
Service NHS Foundation Trust operates in Plymouth and the rest of the south west; its headquarters are in Exeter.
The mid-19th-century burial ground at Ford Park Cemetery was reopened in 2007 by a successful trust, and the City Council operates two large early 20th-century cemeteries at Weston Mill and Efford, both with crematoria and chapels. There is also a privately owned cemetery on the outskirts of the city, Drake Memorial Park, which does not allow headstones to mark graves, only brass plaques set into the ground. After the English Civil War the Royal Citadel was built in
1666 on the east end of Plymouth Hoe, to defend the port from naval attacks, suppress Plymothian Parliamentary leanings
and to train the armed forces. Guided tours are available in the summer months. Further west is Smeaton's Tower,
which was built in 1759 as a lighthouse on rocks 14 miles (23 km) off shore, but dismantled and the top two thirds
rebuilt on the Hoe in 1877. It is open to the public and has views over the Plymouth Sound and the city from the
lantern room. Plymouth has 20 war memorials, of which nine are on The Hoe, including the Plymouth Naval Memorial, remembering those killed in World Wars I and II, and the Armada Memorial, commemorating the defeat of the Spanish Armada. The
early port settlement of Plymouth, called "Sutton", approximates to the area now referred to as the Barbican and
has 100 listed buildings and the largest concentration of cobbled streets in Britain. The Pilgrim Fathers left for
the New World in 1620 near the commemorative Mayflower Steps in Sutton Pool. Also on Sutton Pool is the National
Marine Aquarium which displays 400 marine species and includes Britain's deepest aquarium tank. On the northern outskirts
of the city, Crownhill Fort is a well restored example of a "Palmerston's Folly". It is owned by the Landmark Trust
and is open to the public. To the west of the city is Devonport, one of Plymouth's historic quarters. As part of
Devonport's millennium regeneration project, the Devonport Heritage Trail has been introduced, complete with over
70 waymarkers outlining the route. Plymouth is often used as a base by visitors to Dartmoor, the Tamar Valley and
the beaches of south-east Cornwall. Kingsand, Cawsand and Whitsand Bay are popular. The Roland Levinsky building,
the landmark building of the University of Plymouth, is located in the city's central quarter. Designed by leading
architect Henning Larsen, the building was opened in 2008 and houses the University's Arts faculty. It has been consistently
considered one of the UK's most beautiful university buildings. People from Plymouth are known as Plymothians or, less formally, as Janners. The latter term is described as meaning a person from Devon, deriving from Cousin Jan (the Devon form of John), but in naval circles more particularly anyone from the Plymouth area. The Elizabethan navigator Sir Francis
Drake was born in the nearby town of Tavistock and was the mayor of Plymouth. He was the first Englishman to circumnavigate
the world and was known by the Spanish as El Draco meaning "The Dragon" after he raided many of their ships. He died
of dysentery in 1596 off the coast of Puerto Rico. In 2002 a mission to recover his body and bring it to Plymouth
was allowed by the Ministry of Defence. His cousin and contemporary John Hawkins was a Plymouth man. Painter Sir
Joshua Reynolds, founder and first president of the Royal Academy was born and educated in nearby Plympton, now part
of Plymouth. William Cookworthy born in Kingsbridge set up his successful porcelain business in the city and was
a close friend of John Smeaton designer of the Eddystone Lighthouse. On 26 January 1786, Benjamin Robert Haydon,
an English painter who specialised in grand historical pictures, was born here. The naturalist Dr William Elford
Leach FRS, who did much to pave the way in Britain for Charles Darwin, was born at Hoe Gate in 1791. Antarctic explorers
Robert Falcon Scott and Frank Bickerton both lived in the city. Artists include Beryl Cook whose paintings depict
the culture of Plymouth and Robert Lenkiewicz, whose paintings investigated themes of vagrancy, sexual behaviour
and suicide, lived in the city from the 1960s until his death in 2002. Illustrator and creator of children's series
Mr Benn and King Rollo, David McKee, was born and brought up in South Devon and trained at Plymouth College of Art.
Jazz musician John Surman, born in nearby Tavistock, has close connections to the area, evidenced by his 2012 album
Saltash Bells. The avant garde prepared guitarist Keith Rowe was born in the city before establishing the jazz free
improvisation band AMM in London in 1965 and MIMEO in 1997. The musician and film director Cosmo Jarvis has lived
in several towns in South Devon and has filmed videos in and around Plymouth. The city also has ties to actors Sir Donald Sinden and Judi Trott. George Passmore of the Turner Prize-winning duo Gilbert and George was born in the city, as was Labour
politician Michael Foot whose family reside at nearby Trematon Castle. Notable athletes include swimmer Sharron Davies,
diver Tom Daley, dancer Wayne Sleep, and footballer Trevor Francis. Other past residents include journalist and newspaper editor William Henry Wills, composer Ron Goodwin, journalist Angela Rippon and comedian Dawn French. Canadian politician and legal scholar Chris Axworthy hails from Plymouth. America-based actor Donald Moffat, whose roles include
American Vice President Lyndon B. Johnson in the film The Right Stuff, and fictional President Bennett in Clear and
Present Danger, was born in Plymouth.
Heresy is any provocative belief or theory that is strongly at variance with established beliefs or customs. A heretic is
a proponent of such claims or beliefs. Heresy is distinct from both apostasy, which is the explicit renunciation
of one's religion, principles or cause, and blasphemy, which is an impious utterance or action concerning God or
sacred things. The term is usually used to refer to violations of important religious teachings, but is used also
of views strongly opposed to any generally accepted ideas. It is used in particular in reference to Christianity,
Judaism, Islam and Marxism. In certain historical Christian, Islamic and Jewish cultures, among others, espousing
ideas deemed heretical has been and in some cases still is subjected not merely to punishments such as excommunication,
but even to the death penalty. The term heresy is from the Greek αἵρεσις, which originally meant "choice" or "thing chosen", but came to mean the "party or school of a man's choice" and also referred to the process whereby a young person would examine various philosophies to determine how to live. The word "heresy" is usually used within a Christian,
Jewish, or Islamic context, and implies slightly different meanings in each. The founder or leader of a heretical
movement is called a heresiarch, while individuals who espouse heresy or commit heresy are known as heretics. Heresiology
is the study of heresy. According to Titus 3:10, a divisive person should be warned twice before the church separates from him. The Greek for the phrase "divisive person" became a technical term in the early Church for a type of "heretic"
who promoted dissension. In contrast correct teaching is called sound not only because it builds up in the faith,
but because it protects against the corrupting influence of false teachers. The Church Fathers identified Jews and
Judaism with heresy. They saw deviations from Orthodox Christianity as heresies that were essentially Jewish in spirit.
Tertullian implied that it was the Jews who most inspired heresy in Christianity: "From the Jew the heretic has accepted
guidance in this discussion [that Jesus was not the Christ.]" Saint Peter of Antioch referred to Christians that
refused to venerate religious images as having "Jewish minds". The use of the word "heresy" was given wide currency
by Irenaeus in his 2nd century tract Contra Haereses (Against Heresies) to describe and discredit his opponents during
the early centuries of the Christian community.[citation needed] He described the community's beliefs and doctrines
as orthodox (from ὀρθός, orthos "straight" + δόξα, doxa "belief") and the Gnostics' teachings as heretical.[citation
needed] He also pointed out the concept of apostolic succession to support his arguments. Constantine the Great,
who along with Licinius had decreed toleration of Christianity in the Roman Empire by what is commonly called the
"Edict of Milan", and was the first Roman Emperor to be baptized a Christian, set precedents for later policy. By Roman law the Emperor
was Pontifex Maximus, the high priest of the College of Pontiffs (Collegium Pontificum) of all recognized religions
in ancient Rome. To put an end to the doctrinal debate initiated by Arius, Constantine called the first of what would
afterwards be called the ecumenical councils and then enforced orthodoxy by Imperial authority. The first known usage
of the term in a legal context was in AD 380 by the Edict of Thessalonica of Theodosius I, which made Christianity
the state church of the Roman Empire. Prior to the issuance of this edict, the Church had no state-sponsored support
for any particular legal mechanism to counter what it perceived as "heresy". By this edict the state's authority
and that of the Church became somewhat overlapping. One of the outcomes of this blurring of Church and state was
the sharing of state powers of legal enforcement with church authorities. This reinforcement of the Church's authority
gave church leaders the power to, in effect, pronounce the death sentence upon those whom the church considered heretical.
Within six years of the official criminalization of heresy by the Emperor, the first Christian heretic to be executed,
Priscillian, was condemned in 386 by Roman secular officials for sorcery, and put to death with four or five followers.
However, his accusers were excommunicated both by Ambrose of Milan and Pope Siricius, who opposed Priscillian's heresy,
but "believed capital punishment to be inappropriate at best and usually unequivocally evil". For some years after
the Reformation, Protestant churches were also known to execute those they considered heretics, including Catholics.
The last known heretic executed by sentence of the Roman Catholic Church was Spanish schoolmaster Cayetano Ripoll
in 1826. The number of people executed as heretics under the authority of the various "ecclesiastical authorities"[note
1] is not known.[note 2] One of the first examples of the word as translated from the Nag Hammadi's Apocalypse of Peter was: "they will cleave to the name of a dead man thinking that they will become pure. But they will become greatly
defiled and they will fall into the name of error and into the hands of an evil cunning man and a manifold dogma,
and they will be ruled heretically". In the Roman Catholic Church, obstinate and willful manifest heresy is considered
to spiritually cut one off from the Church, even before excommunication is incurred. The Codex Justinianus (1:5:12)
defines as a heretic "everyone who is not devoted to the Catholic Church and to our Orthodox holy Faith". The Church
had always dealt harshly with strands of Christianity that it considered heretical, but before the 11th century these
tended to centre around individual preachers or small localised sects, like Arianism, Pelagianism, Donatism, Marcionism
and Montanism. The diffusion of the almost Manichaean sect of Paulicians westwards gave birth to the famous 11th
and 12th-century heresies of Western Europe. The first was that of the Bogomils in modern-day Bosnia, a sort of sanctuary
between Eastern and Western Christianity. By the 11th century, more organised groups such as the Patarini, the Dulcinians,
the Waldensians and the Cathars were beginning to appear in the towns and cities of northern Italy, southern France
and Flanders. In France the Cathars grew to represent a popular mass movement and the belief was spreading to other
areas. The Cathar Crusade was initiated by the Roman Catholic Church to eliminate the Cathar heresy in Languedoc.
Heresy was a major justification for the Inquisition (Inquisitio Haereticae Pravitatis, Inquiry on Heretical Perversity)
and for the European wars of religion associated with the Protestant Reformation. Galileo Galilei was brought before
the Inquisition for heresy, but abjured his views and was sentenced to house arrest, under which he spent the rest
of his life. Galileo was found "vehemently suspect of heresy", namely of having held the opinions that the Sun lies
motionless at the centre of the universe, that the Earth is not at its centre and moves, and that one may hold and
defend an opinion as probable after it has been declared contrary to Holy Scripture. He was required to "abjure,
curse and detest" those opinions. Pope St. Gregory stigmatized Judaism and the Jewish People in many of his writings.
He described Jews as enemies of Christ: "The more the Holy Spirit fills the world, the more perverse hatred dominates
the souls of the Jews." He labeled all heresy as "Jewish", claiming that Judaism would "pollute [Catholics and] deceive
them with sacrilegious seduction." The identification of Jews with heretics in particular occurred several times in Roman-Christian law. In Eastern Christianity heresy most commonly refers to those beliefs declared heretical by the
first seven Ecumenical Councils.[citation needed] Since the Great Schism and the Protestant Reformation, various
Christian churches have also used the concept in proceedings against individuals and groups those churches deemed
heretical. The Orthodox Church also rejects the early Christian heresies such as Arianism, Gnosticism, Origenism,
Montanism, Judaizers, Marcionism, Docetism, Adoptionism, Nestorianism, Monophysitism, Monothelitism and Iconoclasm.
In his work "On the Jews and Their Lies" (1543), German Reformation leader Martin Luther claims that Jewish history
was "assailed by much heresy", and that Christ the logos swept away the Jewish heresy and goes on to do so, "as it
still does daily before our eyes." He stigmatizes Jewish Prayer as being "blasphemous" (sic) and a lie, and vilifies
Jews in general as being spiritually "blind" and "surely possessed by all devils." Luther calls the members of the
Orthodox Catholic Church "papists" and heretics, and has a special spiritual problem with Jewish circumcision. In
England, the 16th-century European Reformation resulted in a number of executions on charges of heresy. During the
thirty-eight years of Henry VIII's reign, about sixty heretics, mainly Protestants, were executed and a rather greater
number of Catholics lost their lives on grounds of political offences such as treason, notably Sir Thomas More and
Cardinal John Fisher, for refusing to accept the king's supremacy over the Church in England. Under Edward VI, the
heresy laws were repealed in 1547 only to be reintroduced in 1554 by Mary I; even so two radicals were executed in
Edward's reign (one for denying the reality of the incarnation, the other for denying Christ's divinity). Under Mary,
around two hundred and ninety people were burned at the stake between 1555 and 1558 after the restoration of papal
jurisdiction. When Elizabeth I came to the throne, the concept of heresy was retained in theory but severely restricted
by the 1559 Act of Supremacy and the one hundred and eighty or so Catholics who were executed in the forty-five years
of her reign were put to death because they were considered members of "...a subversive fifth column." The last execution
of a "heretic" in England occurred under James VI and I in 1612. Although the charge was technically one of "blasphemy"
there was one later execution in Scotland (still at that date an entirely independent kingdom) when in 1697 Thomas
Aikenhead was accused, among other things, of denying the doctrine of the Trinity. Another example of the persecution
of heretics under Protestant rule was the execution of the Boston martyrs in 1659, 1660, and 1661. These executions
resulted from the actions of the Anglican Puritans, who at that time wielded political as well as ecclesiastic control
in the Massachusetts Bay Colony. At the time, the colony leaders were apparently hoping to achieve their vision of
a "purer absolute theocracy" within their colony.[citation needed] As such, they perceived the teachings and practices
of the rival Quaker sect as heretical, even to the point where laws were passed and executions were performed with
the aim of ridding their colony of such perceived "heresies".[citation needed] Notably, the Eastern Orthodox and Oriental Orthodox communions generally regard the Puritans themselves as having been heterodox or heretical.
The era of mass persecution and execution of heretics under the banner of Christianity came to an end in 1826 with
the last execution of a "heretic", Cayetano Ripoll, by the Catholic Inquisition. Although less common than in earlier
periods, in modern times, formal charges of heresy within Christian churches still occur. Issues in the Protestant
churches have included modern biblical criticism and the nature of God. In the Catholic Church, the Congregation
for the Doctrine of the Faith criticizes writings for "ambiguities and errors" without using the word "heresy". Perhaps
due to the many modern negative connotations associated with the term heretic, such as the Spanish inquisition, the
term is used less often today. The subject of Christian heresy opens up broader questions as to who has a monopoly
on spiritual truth, as explored by Jorge Luis Borges in the short story "The Theologians" within the compilation
Labyrinths. Ottoman Sultan Selim the Grim, who regarded the Shia Qizilbash as heretics, reportedly proclaimed that "the killing of one Shiite had as much otherworldly reward as killing 70 Christians." In some modern-day nations and regions
in which Sharia law is ostensibly practiced, heresy remains an offense punishable by death. One example is the 1989
fatwa issued by the government of Iran, offering a substantial bounty for anyone who succeeds in the assassination
of author Salman Rushdie, whose writings were declared heretical. Orthodox Judaism considers heretical the views of Jews who depart from traditional Jewish principles of faith. In addition, the more right-wing groups
within Orthodox Judaism hold that all Jews who reject the simple meaning of Maimonides's 13 principles of Jewish
faith are heretics. As such, most of Orthodox Judaism considers Reform and Reconstructionist Judaism heretical movements,
and regards most of Conservative Judaism as heretical. The liberal wing of Modern Orthodoxy is more tolerant of Conservative
Judaism, particularly its right wing, as there is some theological and practical overlap between these groups. The
act of using Church of Scientology techniques in a form different than originally described by Hubbard is referred
to within Scientology as "squirreling" and is said by Scientologists to be high treason. The Religious Technology
Center has prosecuted breakaway groups that have practiced Scientology outside the official Church without authorization.
In other contexts the term does not necessarily have pejorative overtones and may even be complimentary when used,
in areas where innovation is welcome, of ideas that are in fundamental disagreement with the status quo in any practice
and branch of knowledge. Scientist and author Isaac Asimov considered heresy as an abstraction; his views appear in "Forward: The Role of the Heretic", where he mentions religious, political, socioeconomic and scientific heresies. He divided
scientific heretics into endoheretics (those from within the scientific community) and exoheretics (those from without).
Characteristics were ascribed to both and examples of both kinds were offered. Asimov concluded that science orthodoxy
defends itself well against endoheretics (by control of science education, grants and publication as examples), but
is nearly powerless against exoheretics. He acknowledged by examples that heresy has repeatedly become orthodoxy.
The revisionist paleontologist Robert T. Bakker, who published his findings as The Dinosaur Heresies, treated the
mainstream view of dinosaurs as dogma: "I have enormous respect for dinosaur paleontologists past and present. But on average, for the last fifty years, the field hasn't tested dinosaur orthodoxy severely enough" (p. 27); "Most taxonomists, however, have viewed such new terminology as dangerously destabilizing to the traditional and well-known scheme..." (p. 462). This book apparently influenced Jurassic Park. The illustrations by the author show dinosaurs
in very active poses, in contrast to the traditional perception of lethargy. He is an example of a recent scientific
endoheretic. Immanuel Velikovsky is an example of a recent scientific exoheretic; he lacked appropriate scientific credentials and did not publish in scientific journals. While the details of his work are in scientific disrepute,
the concept of catastrophic change (extinction event and punctuated equilibrium) has gained acceptance in recent
decades. The term heresy is also used as an ideological pigeonhole for contemporary writers because, by definition,
heresy depends on contrasts with an established orthodoxy. For example, tongue-in-cheek contemporary usages of heresy, such as categorizing a "Wall Street heresy", a "Democratic heresy" or a "Republican heresy", are metaphors
that invariably retain a subtext that links orthodoxies in geology or biology or any other field to religion. These
expanded metaphoric senses allude to both the difference between the person's views and the mainstream and the boldness
of such a person in propounding these views.
The Warsaw Pact (formally the Treaty of Friendship, Co-operation, and Mutual Assistance, sometimes informally called WarPac, akin
in format to NATO) was a collective defense treaty among the Soviet Union and seven Soviet satellite states in Central
and Eastern Europe in existence during the Cold War. The Warsaw Pact was the military complement to the Council for
Mutual Economic Assistance (Comecon), the regional economic organization for the communist states of Central and
Eastern Europe. The Warsaw Pact was created in reaction to the integration of West Germany into NATO in 1955 per
the Paris Pacts of 1954, but it is also considered to have been motivated by Soviet desires to maintain control over
military forces in Central and Eastern Europe. While the Warsaw Pact was established as a balance of power or counterweight
to NATO, there was no direct confrontation between them. Instead, the conflict was fought on an ideological basis.
Both NATO and the Warsaw Pact led to the expansion of military forces and their integration into the respective blocs.
The Warsaw Pact's largest military engagement was the Warsaw Pact invasion of Czechoslovakia (with the participation
of all Pact nations except Romania and Albania). The Pact failed to function when the Revolutions of 1989 spread
through Eastern Europe, beginning with the Solidarity movement in Poland and its success in June 1989. On 25 February
1991, the Pact was declared at an end at a meeting of defense and foreign ministers from the remaining member states
meeting in Hungary. On 1 July 1991, the Czechoslovak President Václav Havel formally declared an end to the Warsaw
Treaty Organization of Friendship, Co-operation, and Mutual Assistance which had been established in 1955. The USSR
itself was dissolved in December 1991. The Warsaw Treaty's organization was two-fold: the Political Consultative
Committee handled political matters, and the Combined Command of Pact Armed Forces controlled the assigned multi-national
forces, with headquarters in Warsaw, Poland. Furthermore, the Supreme Commander of the Unified Armed Forces of the
Warsaw Treaty Organization, which commanded and controlled all the military forces of the member countries, was also a
First Deputy Minister of Defense of the USSR, and the Chief of Combined Staff of the Unified Armed Forces of the
Warsaw Treaty Organization was also a First Deputy Chief of the General Staff of the Armed Forces of the USSR. Therefore,
although ostensibly an international collective security alliance, the USSR dominated the Warsaw Treaty armed forces.
The strategy behind the formation of the Warsaw Pact was driven by the desire of the Soviet Union to dominate Central
and Eastern Europe. This policy was driven by ideological and geostrategic reasons. Ideologically, the Soviet Union
arrogated the right to define socialism and communism and act as the leader of the global socialist movement. A corollary
to this idea was the necessity of intervention if a country appeared to be violating core socialist ideas and Communist
Party functions, which was explicitly stated in the Brezhnev Doctrine. Geostrategic principles also drove the Soviet
Union to prevent invasion of its territory by Western European powers. Before the creation of the Warsaw Pact, fearing
a rearmed Germany, the Czechoslovak leadership had sought to create a security pact with East Germany and Poland; these
states protested strongly against the re-militarization of West Germany. The Warsaw Pact was primarily put in place as a
consequence of the rearming of West Germany inside NATO. Soviet leaders, like those of many European countries on both
the western and eastern sides, feared Germany once again becoming a military power and a direct threat; German militarism
remained a fresh memory among Soviets and Eastern Europeans. As the Soviet Union already had bilateral treaties with all
of its eastern satellites, the Pact has long been considered 'superfluous', and because of the rushed way in which it
was conceived, NATO officials labeled it a 'cardboard castle'. Previously, in March 1954, the USSR, fearing the restoration
of German militarism in West Germany, requested admission to NATO. The Soviet request to join NATO arose in the aftermath
of the Berlin Conference of January–February 1954. Soviet foreign minister Molotov made proposals to have Germany
reunified and elections for a pan-German government, under conditions of withdrawal of the four powers armies and
German neutrality, but all were refused by the other foreign ministers, Dulles (USA), Eden (UK) and Bidault (France).
Proposals for the reunification of Germany were nothing new: earlier on 20 March 1952, talks about a German reunification,
initiated by the so-called 'Stalin Note', ended after the United Kingdom, France, and the United States insisted that
a unified Germany should not be neutral and should be free to join the European Defence Community and rearm. James
Dunn (USA), who met in Paris with Eden, Adenauer and Robert Schuman (France), affirmed that "the object should be
to avoid discussion with the Russians and to press on the European Defense Community". According to John Gaddis, "there
was little inclination in Western capitals to explore this offer" from the USSR. While historian Rolf Steininger asserts
that Adenauer's conviction that “neutralization means sovietization” was the main factor in the rejection of the
soviet proposals, Adenauer also feared that unification might have resulted in the end of the CDU's dominance in
the Bundestag. One month later, the proposed European Treaty was rejected not only by supporters of the EDC but also
by western opponents of the European Defense Community (like French Gaullist leader Palewski) who perceived it as
"unacceptable in its present form because it excludes the USA from participation in the collective security system
in Europe". The Soviets then decided to make a new proposal to the governments of the USA, UK and France, stating their
readiness to accept the participation of the USA in the proposed General European Agreement. Considering that another argument
deployed against the Soviet proposal was that it was perceived by western powers as "directed against the North Atlantic
Pact and its liquidation", the Soviets decided to declare their "readiness to examine jointly with other interested
parties the question of the participation of the USSR in the North Atlantic bloc", specifying that "the admittance
of the USA into the General European Agreement should not be conditional on the three western powers agreeing to
the USSR joining the North Atlantic Pact". Again all proposals, including the request to join NATO, were rejected
by UK, US, and French governments shortly after. Emblematic was the position of British General Hastings Ismay, supporter
of NATO expansion, who said that NATO "must grow until the whole free world gets under one umbrella." He opposed
the request to join NATO made by the USSR in 1954 saying that "the Soviet request to join NATO is like an unrepentant
burglar requesting to join the police force". In April 1954 Adenauer made his first visit to the USA, meeting Nixon,
Eisenhower and Dulles. Ratification of the EDC was being delayed, but the US representatives made it clear to Adenauer
that the EDC would have to become a part of NATO. Memories of the Nazi occupation were still strong, and the rearmament of
Germany was feared by France too. On 30 August 1954 the French Parliament rejected the EDC, thus ensuring its failure
and blocking a major objective of US policy towards Europe: to associate Germany militarily with the West. The US
Department of State started to elaborate alternatives: Germany would be invited to join NATO or, in the case of French
obstructionism, strategies to circumvent a French veto would be implemented in order to obtain a German rearmament
outside NATO. On 23 October 1954 – only nine years after the Allies (UK, USA and USSR) defeated Nazi Germany ending World
War II in Europe – the admission of the Federal Republic of Germany to the North Atlantic Pact was finally decided.
The incorporation of West Germany into the organization on 9 May 1955 was described as "a decisive turning point
in the history of our continent" by Halvard Lange, Foreign Affairs Minister of Norway at the time. In November 1954,
the USSR requested a new European Security Treaty in a final, unsuccessful attempt to avoid a remilitarized West
Germany potentially opposed to the Soviet Union. On 14 May 1955, the USSR and seven other European
countries "reaffirming their desire for the establishment of a system of European collective security based on the
participation of all European states irrespective of their social and political systems" established the Warsaw Pact
in response to the integration of the Federal Republic of Germany into NATO, declaring that: "a remilitarized Western
Germany and the integration of the latter in the North-Atlantic bloc [...] increase the danger of another war and
constitutes a threat to the national security of the peaceable states; [...] in these circumstances the peaceable
European states must take the necessary measures to safeguard their security". One of the founding members, East
Germany, was allowed to re-arm by the Soviet Union, and the National People's Army was established as the armed forces
of the country to counter the rearmament of West Germany. The eight member countries of the Warsaw Pact pledged the
mutual defense of any member who would be attacked. Relations among the treaty signatories were based upon mutual
non-intervention in the internal affairs of the member countries, respect for national sovereignty, and political
independence. However, almost all governments of those member states were indirectly controlled by the Soviet Union.
In July 1963 the Mongolian People's Republic asked to join the Warsaw Pact under Article 9 of the treaty. This would
have required a special protocol, since the text of the treaty applied only to Europe. Due to the emerging Sino-Soviet
split, Mongolia remained an observer. It was agreed in 1966 that Soviet troops would be stationed in
Mongolia. For 36 years, NATO and the Warsaw Pact never directly waged war against each other in Europe;
the United States and the Soviet Union and their respective allies implemented strategic policies aimed at the containment
of each other in Europe, while working and fighting for influence within the wider Cold War on the international
stage. In 1956, following the Imre Nagy government's declaration of Hungary's withdrawal from the Warsaw Pact,
Soviet troops entered the country and removed the government. Soviet forces crushed the nationwide revolt, leading
to the death of an estimated 2,500 Hungarian citizens. The multi-national Communist armed forces' sole joint action
was the Warsaw Pact invasion of Czechoslovakia in August 1968. All member countries, with the exception of the Socialist
Republic of Romania and the People's Republic of Albania participated in the invasion. On 25 February 1991, the Warsaw
Pact was declared disbanded at a meeting of defense and foreign ministers from remaining Pact countries meeting in
Hungary. On 1 July 1991, in Prague, the Czechoslovak President Václav Havel formally ended the 1955 Warsaw Treaty
Organization of Friendship, Cooperation, and Mutual Assistance and so disestablished the Warsaw Treaty after 36 years
of military alliance with the USSR. The treaty had been de facto disbanded in December 1989 during the violent
revolution in Romania, which toppled the communist government without military intervention from other member states.
The USSR disestablished itself in December 1991. On 12 March 1999, the Czech Republic, Hungary, and Poland joined
NATO; Bulgaria, Estonia, Latvia, Lithuania, Romania, and Slovakia joined in March 2004; Albania joined on 1 April
2009. In November 2005, the Polish government opened its Warsaw Treaty archives to the Institute of National Remembrance,
which published some 1,300 declassified documents in January 2006. Yet the Polish government withheld publication of
100 documents, pending their military declassification. Eventually, 30 of the reserved 100 documents were published;
70 remained secret, and unpublished. Among the documents published is the Warsaw Treaty's nuclear war plan, Seven
Days to the River Rhine – a short, swift counter-attack capturing Austria, Denmark, Germany and the Netherlands east
of the River Rhine, using nuclear weapons, in self-defense, after a NATO first strike. The plan originated as a 1979
field training exercise war game and metamorphosed into official Warsaw Treaty battle doctrine until the late 1980s
– which is why the People's Republic of Poland served as a base for nuclear weapons, housing first 178, then 250
tactical-range rockets. Doctrinally, as a Soviet-style (offensive) battle plan, Seven Days to the River Rhine gave commanders few
defensive-war strategies for fighting NATO in Warsaw Treaty territory.[citation needed] Earlier, Molotov, fearing
that the EDC would be directed in the future against the USSR and "seeking to prevent the formation of groups of
European States directed against other European States", had made a proposal for a General European Treaty on Collective
Security in Europe "open to all European States without regard as to their social systems", which would have included
a unified Germany (thus rendering the EDC – perceived by the USSR as a threat – unusable). But Eden, Dulles and Bidault
opposed the proposal.
Materialism is a form of philosophical monism which holds that matter is the fundamental substance in nature, and that all
phenomena, including mental phenomena and consciousness, are identical with material interactions. Materialism is
closely related to physicalism, the view that all that exists is ultimately physical. Philosophical physicalism has
evolved from materialism with the discoveries of the physical sciences to incorporate more sophisticated notions
of physicality than mere ordinary matter, such as: spacetime, physical energies and forces, dark matter, and so on.
Thus the term "physicalism" is preferred over "materialism" by some, while others use the terms as if they are synonymous.
Materialism belongs to the class of monist ontology. As such, it is different from ontological theories based on
dualism or pluralism. For singular explanations of the phenomenal reality, materialism would be in contrast to idealism,
neutral monism, and spiritualism. Despite the large number of philosophical schools and subtle nuances between many,
all philosophies are said to fall into one of two primary categories, which are defined in contrast to each other:
idealism and materialism. The basic proposition of these two categories pertains to the nature of reality, and
the primary distinction between them is the way they answer two fundamental questions: "what does reality consist
of?" and "how does it originate?" To idealists, spirit or mind or the objects of mind (ideas) are primary, and matter
secondary. To materialists, matter is primary, and mind or spirit or ideas are secondary, the product of matter acting
upon matter. The materialist view is perhaps best understood in its opposition to the doctrines of immaterial substance
applied to the mind historically, famously by René Descartes. However, by itself materialism says nothing about how
material substance should be characterized. In practice, it is frequently assimilated to one variety of physicalism
or another. During the 19th century, Karl Marx and Friedrich Engels extended the concept of materialism to elaborate
a materialist conception of history centered on the roughly empirical world of human activity (practice, including
labor) and the institutions created, reproduced, or destroyed by that activity (see materialist conception of history).
Later Marxists developed the notion of dialectical materialism which characterized later Marxist philosophy and method.
Materialism developed, possibly independently, in several geographically separated regions of Eurasia during what
Karl Jaspers termed the Axial Age (approximately 800 to 200 BC). In Ancient Indian philosophy, materialism developed
around 600 BC with the works of Ajita Kesakambali, Payasi, Kanada, and the proponents of the Cārvāka school of philosophy.
Kanada became one of the early proponents of atomism. The Nyaya–Vaisesika school (600 BC - 100 BC) developed one
of the earliest forms of atomism, though their proofs of God and their positing that consciousness was not material
preclude labelling them as materialists. Buddhist atomism and the Jaina school continued the atomic tradition. Materialism
is often associated with reductionism, according to which the objects or phenomena individuated at one level of description,
if they are genuine, must be explicable in terms of the objects or phenomena at some other level of description —
typically, at a more reduced level. Non-reductive materialism explicitly rejects this notion, however, taking the
material constitution of all particulars to be consistent with the existence of real objects, properties, or phenomena
not explicable in the terms canonically used for the basic material constituents. Jerry Fodor influentially argues
this view, according to which empirical laws and explanations in "special sciences" like psychology or geology are
invisible from the perspective of basic physics. A vigorous literature has grown up around the relation between
these views. Ancient Greek philosophers like Thales, Anaxagoras (ca. 500 BC – 428 BC), Epicurus and Democritus prefigure
later materialists. The Latin poem De Rerum Natura by Lucretius (ca. 99 BC – ca. 55 BC) reflects the mechanistic
philosophy of Democritus and Epicurus. According to this view, all that exists is matter and void, and all phenomena
result from different motions and conglomerations of base material particles called "atoms" (literally: "indivisibles").
De Rerum Natura provides mechanistic explanations for phenomena such as erosion, evaporation, wind, and sound. Famous
principles like "nothing can touch body but body" first appeared in the works of Lucretius. Democritus and Epicurus
however did not hold to a monist ontology since they held to the ontological separation of matter and space i.e.
space being "another kind" of being, indicating that the definition of "materialism" is wider than the scope given
for it in this article. The later Indian materialist Jayarāśi Bhaṭṭa (6th century), in his work Tattvopaplavasimha ("The upsetting
of all principles") refuted the Nyaya Sutra epistemology. The materialistic Cārvāka philosophy appears to have died
out some time after 1400. When Madhavacharya compiled Sarva-darśana-samgraha (a digest of all philosophies) in the
14th century, he had no Cārvāka/Lokāyata text to quote from, or even refer to. In early 12th-century al-Andalus,
the Arab philosopher Ibn Tufail (Abubacer) wrote discussions on materialism in his philosophical novel, Hayy
ibn Yaqdhan (Philosophus Autodidactus), while vaguely foreshadowing the idea of a historical materialism. The French
cleric Pierre Gassendi (1592-1665) represented the materialist tradition in opposition to the attempts of René Descartes
(1596-1650) to provide the natural sciences with dualist foundations. There followed the materialist and atheist
abbé Jean Meslier (1664-1729), Julien Offray de La Mettrie, the German-French Paul-Henri Thiry Baron d'Holbach (1723-1789),
the Encyclopedist Denis Diderot (1713-1784), and other French Enlightenment thinkers; as well as (in England) John
"Walking" Stewart (1747-1822), whose insistence in seeing matter as endowed with a moral dimension had a major impact
on the philosophical poetry of William Wordsworth (1770-1850). Arthur Schopenhauer (1788-1860) wrote that "...materialism
is the philosophy of the subject who forgets to take account of himself". He claimed that an observing subject can
only know material objects through the mediation of the brain and its particular organization. That is, the brain
itself is the "determiner" of how material objects will be experienced or perceived. The German materialist and atheist
anthropologist Ludwig Feuerbach would signal a new turn in materialism through his book, The Essence of Christianity
(1841), which provided a humanist account of religion as the outward projection of man's inward nature. Feuerbach's
materialism would later heavily influence Karl Marx. Many current and recent philosophers—e.g., Daniel Dennett, Willard
Van Orman Quine, Donald Davidson, and Jerry Fodor—operate within a broadly physicalist or materialist framework,
producing rival accounts of how best to accommodate mind, including functionalism, anomalous monism, identity theory,
and so on. The nature and definition of matter - like other key concepts in science and philosophy - have occasioned
much debate. Is there a single kind of matter (hyle) which everything is made of, or multiple kinds? Is matter a
continuous substance capable of expressing multiple forms (hylomorphism), or a number of discrete, unchanging constituents
(atomism)? Does it have intrinsic properties (substance theory), or is it lacking them (prima materia)? One challenge
to the traditional concept of matter as tangible "stuff" came with the rise of field physics in the 19th century.
Relativity shows that matter and energy (including the spatially distributed energy of fields) are interchangeable.
This enables the ontological view that energy is prima materia and matter is one of its forms. On the other hand,
the Standard Model of particle physics uses quantum field theory to describe all interactions. On this view it could
be said that fields are prima materia and the energy is a property of the field. According to the dominant cosmological
model, the Lambda-CDM model, less than 5% of the universe's energy density is made up of the "matter" described by
the Standard Model of particle physics, and the majority of the universe is composed of dark matter and dark energy
- with little agreement amongst scientists about what these are made of. With the advent of quantum physics, some
scientists believed the concept of matter had merely changed, while others believed the conventional position could
no longer be maintained. For instance Werner Heisenberg said "The ontology of materialism rested upon the illusion
that the kind of existence, the direct 'actuality' of the world around us, can be extrapolated into the atomic range.
This extrapolation, however, is impossible... atoms are not things." Likewise, some philosophers feel that
these dichotomies necessitate a switch from materialism to physicalism. Others use the terms "materialism" and "physicalism"
interchangeably. Some modern-day physicists and science writers—such as Paul Davies and John Gribbin—have argued
that materialism has been disproven by certain scientific findings in physics, such as quantum mechanics and chaos
theory. In 1991, Gribbin and Davies released their book The Matter Myth, the first chapter of which, "The Death of
Materialism", contained the following passage: Davies' and Gribbin's objections are shared by proponents of digital
physics who view information rather than matter to be fundamental. Their objections were also shared by some founders
of quantum theory, such as Max Planck, who wrote: According to the Catholic Encyclopedia of 1907-1912, materialism,
defined as "a philosophical system which regards matter as the only reality in the world [...] denies the existence
of God and the soul". Materialism, in this view, therefore becomes incompatible with most world religions, including
Christianity, Judaism, and Islam. In such a context one can conflate materialism with atheism. Most of Hinduism and
transcendentalism regard all matter as an illusion called Maya, blinding humans from knowing "the truth". Maya is
the limited, purely physical and mental reality in which our everyday consciousness has become entangled. Maya is
destroyed for a person when they perceive Brahman with transcendental knowledge. In contrast, Joseph Smith, the
founder of the Latter Day Saint movement, taught: "There is no such thing as immaterial matter. All spirit is matter,
but it is more fine or pure, and can only be discerned by purer eyes; We cannot see it; but when our bodies are purified
we shall see that it is all matter." This spirit element has always existed; it is co-eternal with God. It is also
called "intelligence" or "the light of truth", which like all observable matter "was not created or made, neither
indeed can be". Members of the Church of Jesus Christ of Latter-day Saints view the revelations of Joseph Smith as
a restoration of original Christian doctrine, which they believe post-apostolic theologians began to corrupt in the
centuries after Christ. The writings of many of these theologians indicate a clear influence of Greek metaphysical
philosophies such as Neoplatonism, which characterized divinity as an utterly simple, immaterial, formless, substance/essence
(ousia) that transcended all that was physical. Despite strong opposition from many Christians, this metaphysical
depiction of God eventually became incorporated into the doctrine of the Christian church, displacing the original
Judeo-Christian concept of a physical, corporeal God who created humans in His image and likeness. An argument for
idealism, such as those of Hegel and Berkeley, is ipso facto an argument against materialism. Matter can be argued
to be redundant, as in bundle theory, and mind-independent properties can in turn be reduced to subjective percepts.
Berkeley presents an example of the latter by pointing out that it is impossible to gather direct evidence of matter,
as there is no direct experience of matter; all that is experienced is perception, whether internal or external.
As such, the existence of matter can only be assumed from the apparent (perceived) stability of perceptions; it finds
absolutely no evidence in direct experience. If matter and energy are seen as necessary to explain the physical world,
but incapable of explaining mind, dualism results. Emergence, holism, and process philosophy seek to ameliorate the
perceived shortcomings of traditional (especially mechanistic) materialism without abandoning materialism entirely.
Some critics object to materialism as part of an overly skeptical, narrow or reductivist approach to theorizing,
rather than to the ontological claim that matter is the only substance. Particle physicist and Anglican theologian
John Polkinghorne objects to what he calls promissory materialism — claims that materialistic science will eventually
succeed in explaining phenomena it has not so far been able to explain. Polkinghorne prefers "dual-aspect monism"
to faith in materialism. Modern philosophical materialists extend the definition of matter to include other scientifically
observable entities, such as energy, forces, and the curvature of space. However, philosophers such as Mary Midgley suggest that
the concept of "matter" is elusive and poorly defined. Materialism typically contrasts with dualism, phenomenalism,
idealism, vitalism, and dual-aspect monism. Its materiality can, in some ways, be linked to the concept of Determinism,
as espoused by Enlightenment thinkers. Scientific "materialism" is often synonymous with, and has so far been described
as, a reductive materialism. In recent years, Paul and Patricia Churchland have advocated a radically contrasting
position (at least in regard to certain hypotheses); eliminative materialism holds that some mental phenomena
simply do not exist at all, and that talk of those mental phenomena reflects a totally spurious "folk psychology"
and introspection illusion. That is, an eliminative materialist might suggest that a concept like "belief" simply
has no basis in fact - the way folk science speaks of demon-caused illnesses. With reductive materialism at one
end of a continuum (our theories will reduce to facts) and eliminative materialism at the other (certain theories
will need to be eliminated in light of new facts), revisionary materialism lies somewhere in the middle. Some scientific
materialists have been criticized, for example by Noam Chomsky, for failing to provide clear definitions for what
constitutes matter, leaving the term "materialism" without any definite meaning. Chomsky also states that since the
concept of matter may be affected by new scientific discoveries, as has happened in the past, scientific materialists
are being dogmatic in assuming the opposite. The concept of matter has changed in response to new scientific discoveries.
Thus materialism has no definite content independent of the particular theory of matter on which it is based. According
to Noam Chomsky, any property can be considered material, if one defines matter such that it has that property. Kant
argued against all three of materialism, subjective idealism (which he contrasts with his "transcendental idealism"),
and dualism. However, Kant also argues that change and time require an enduring substrate, and does so in connection
with his Refutation of Idealism. Postmodern/poststructuralist thinkers also express a skepticism about any all-encompassing
metaphysical scheme. Philosopher Mary Midgley, among others, argues that materialism is a self-refuting idea, at
least in its eliminative form.
A Christian is a person who adheres to Christianity, an Abrahamic, monotheistic religion based
on the life and teachings of Jesus Christ. "Christian" derives from the Koine Greek word Christós (Χριστός), a translation
of the Biblical Hebrew term mashiach. There are diverse interpretations of Christianity which sometimes conflict.
However, "Whatever else they might disagree about, Christians are at least united in believing that Jesus has a unique
significance." The term "Christian" is also used adjectivally to describe anything associated with Christianity,
or in a proverbial sense "all that is noble, and good, and Christ-like." It is also used as a label to identify people
who associate with the cultural aspects of Christianity, irrespective of personal religious beliefs or practices.
According to a 2011 Pew Research Center survey, there were 2.2 billion Christians around the world in 2010, up from
about 600 million in 1910. By 2050, the Christian population is expected to exceed 3 billion. According to a 2012
Pew Research Center survey Christianity will remain the world's largest religion in 2050, if current trends continue.
Today, about 37% of all Christians live in the Americas, and about 26% live in Europe, 24% of total Christians live
in sub-Saharan Africa, about 13% in Asia and the Pacific, and 1% of the world's Christians live in the Middle East
and North Africa. About half of all Christians worldwide are Catholic, while more than a third are Protestant (37%).
Orthodox communions comprise 12% of the world's Christians. Other Christian groups make up the remainder. Christians
make up the majority of the population in 158 countries and territories. Some 280 million Christians live as minorities.
The Greek word Χριστιανός (Christianos), meaning "follower of Christ", comes from Χριστός (Christos), meaning "anointed
one", with an adjectival ending borrowed from Latin to denote adhering to, or even belonging to, as in slave ownership.
In the Greek Septuagint, christos was used to translate the Hebrew מָשִׁיחַ (Mašíaḥ, messiah), meaning "[one who
is] anointed." In other European languages, equivalent words to Christian are likewise derived from the Greek, such
as Chrétien in French and Cristiano in Spanish. The first recorded use of the term (or its cognates in other languages)
is in the New Testament, in Acts 11:26. After Barnabas brought Saul (Paul) to Antioch, where they taught the disciples
for about a year, the text says: "[...] the disciples were called Christians first in Antioch." The second mention
of the term follows in Acts 26:28, where Herod Agrippa II replied to Paul the Apostle, "Then Agrippa said unto Paul,
Almost thou persuadest me to be a Christian." The third and final New Testament reference to the term is in 1 Peter
4:16, which exhorts believers: "Yet if [any man suffer] as a Christian, let him not be ashamed; but let him glorify
God on this behalf." Kenneth Samuel Wuest holds that all three original New Testament verses' usages reflect a derisive
element in the term Christian to refer to followers of Christ who did not acknowledge the emperor of Rome. The city
of Antioch, where someone gave them the name Christians, had a reputation for coming up with such nicknames. However
Peter's apparent endorsement of the term led to its being preferred over "Nazarenes" and the term Christianoi from
1 Peter becomes the standard term in the Early Church Fathers from Ignatius and Polycarp onwards. The earliest occurrences
of the term in non-Christian literature include Josephus, referring to "the tribe of Christians, so named from him;"
Pliny the Younger in correspondence with Trajan; and Tacitus, writing near the end of the 1st century. In the Annals
he relates that "by vulgar appellation [they were] commonly called Christians" and identifies Christians as Nero's
scapegoats for the Great Fire of Rome. Another term for Christians which appears in the New Testament is "Nazarenes"
which is used by the Jewish lawyer Tertullus in Acts 24. Tertullian (Against Marcion 4:8) records that "the Jews
call us Nazarenes," while around 331 AD Eusebius records that Christ was called a Nazoraean from the name Nazareth,
and that in earlier centuries "Christians" were once called "Nazarenes." The Hebrew equivalent of "Nazarenes", Notzrim,
occurs in the Babylonian Talmud, and is still the modern Israeli Hebrew term for Christian. A wide range of beliefs
and practices is found across the world among those who call themselves Christian. Denominations and sects disagree
on a common definition of "Christianity". For example, Timothy Beal notes the disparity of beliefs among those who
identify as Christians in the United States. Linda Woodhead attempts to provide a common belief thread
for Christians by noting that "Whatever else they might disagree about, Christians are at least united in believing
that Jesus has a unique significance." Philosopher Michael Martin, in his book The Case Against Christianity, evaluated
three historical Christian creeds (the Apostles' Creed, the Nicene Creed and the Athanasian Creed) to establish a
set of basic assumptions which include belief in theism, the historicity of Jesus, the Incarnation, salvation through
faith in Jesus, and Jesus as an ethical role model. The identification of Jesus as the Messiah is not accepted by
Judaism. The term for a Christian in Hebrew is נוּצְרי (Notzri—"Nazarene"), a Talmudic term originally derived from
the fact that Jesus came from the Galilean village of Nazareth, today in northern Israel. Adherents of Messianic
Judaism are referred to in modern Hebrew as יְהוּדִים מָשִׁיחַיים (Yehudim Meshihi'im—"Messianic Jews"). In Arabic-speaking
cultures, two words are commonly used for Christians: Naṣrānī (نصراني), plural Naṣārā (نصارى) is generally understood
to be derived from Nazareth through the Syriac (Aramaic); Masīḥī (مسيحي) means followers of the Messiah. The term
Nasara rose to prominence in July 2014, after the Fall of Mosul to the terrorist organization Islamic State of Iraq
and the Levant. The nun or ن— the first letter of Nasara—was spray-painted on the property of Christians ejected
from the city. Where there is a distinction, Nasrani refers to people from a Christian culture and Masihi means those
with a religious faith in Jesus. In some countries Nasrani tends to be used generically for non-Muslim Western foreigners,
e.g. "blond people." Another Arabic word sometimes used for Christians, particularly in a political context, is Ṣalībī
(صليبي "Crusader") from ṣalīb (صليب "cross") which refers to Crusaders and has negative connotations. However, Salibi
is a modern term; historically, Muslim writers described European Christian Crusaders as al-Faranj or Alfranj (الفرنج)
and Firinjīyah (الفرنجيّة) in Arabic. This word comes from the Franks and can be seen in the Arab history text Al-Kamil
fi al-Tarikh by Ali ibn al-Athir. The most common Persian word is Masīhī (مسیحی), from Arabic. Other words are Nasrānī
(نصرانی), from Syriac for "Nazarene", and Tarsā (ترسا), from Middle Persian word Tarsāg, also meaning "Christian",
derived from tars, meaning "fear, respect". The Syriac term Nasrani (Nazarene) has also been attached to the Saint
Thomas Christians of Kerala, India. In the Indian subcontinent, Christians call themselves Isaai (Hindi: ईसाई, Urdu:
عیسائی), and are also known by this term to adherents of other religions. This is related to the name they call
Jesus, 'Isa Masih, and literally means 'the followers of 'Isa'. In the past, the Malays used to call the Portuguese
Serani from the Arabic Nasrani, but the term now refers to the modern Kristang creoles of Malaysia. The Chinese word
is 基督徒 (pinyin: jīdū tú), literally "Christ follower." The two characters now pronounced Jīdū in Mandarin Chinese,
were originally pronounced Ki-To in Cantonese as a representation of Latin "Cristo". In Vietnam, the
same two characters read Cơ đốc, and a "follower of Christianity" is a tín đồ Cơ đốc giáo. In Japan, the term kirishitan
(written in Edo period documents 吉利支丹, 切支丹, and in modern Japanese histories as キリシタン), from Portuguese cristão,
referred to Roman Catholics in the 16th and 17th centuries before the religion was banned by the Tokugawa shogunate.
Today, Christians are referred to in Standard Japanese as キリスト教徒, Kirisuto-kyōto or the English-derived term クリスチャン
kurisuchan. Korean still uses 기독교도, Kidok-kyo-do for "Christian", though the Greek form Kurisudo 그리스도 has now replaced
the old Sino-Korean Kidok, which refers to Christ himself. The region of modern Eastern Europe and Central Eurasia
(Russia, Ukraine and other countries of the ex-USSR) have a long history of Christianity and Christian communities
on its lands. In ancient times, in the first centuries after the birth of Christ, when this region was known as Scythia,
Christians already lived there. Later the region saw the first states to adopt Christianity officially: initially
Armenia (301 AD) and Georgia (337 AD), and later Kyivan Rus (Russian: Великое княжество Русское, ca. 988 AD). People
of that time referred to themselves as Christians (христиане, крестьяне) and Russians (русские); both terms had strong
Christian connotations. Over time the term "крестьяне" acquired the meaning "peasants of Christian faith" and later
simply "peasants" (the main part of the population of the region), while the term "христиане" retained its religious
meaning and the term "русские" came to denote members of the heterogeneous Russian nation formed on the basis of a
common Christian faith and language, which strongly influenced the history and development of the region. In the region
the "Pravoslav faith" (православная вера, Orthodox faith) or "Russian faith" (русская вера) from earliest times became
almost as well known as the original "Christian faith" (христианская, крестьянская вера). In some contexts the term
"cossack" (козак, казак, "free man by the will of God") was used to denote "free" Christians of steppe origin
and Russian language. As of the early 21st century, Christianity has approximately 2.4 billion adherents. The faith
represents about a third of the world's population and is the largest religion in the world. Christians have composed
about 33 percent of the world's population for around 100 years. The largest Christian denomination is the Roman
Catholic Church, with 1.17 billion adherents, representing half of all Christians. Christianity remains the dominant
religion in the Western world, where 70% are Christians. A 2011 Pew Research Center survey found that 76.2% of people
in Europe, 73.3% in Oceania, and about 86.0% in the Americas (90% in Latin America and 77.4% in North America) described themselves
as Christians. According to a 2012 Pew Research Center survey, if current trends continue, Christianity will remain
the world's largest religion by 2050, when the Christian population is expected to exceed 3 billion. Muslims have
an average of 3.1 children per woman, the highest rate of all religious groups; Christians are second, with 2.7 children
per woman. High birth rates and conversion were cited as the reasons for Christian population growth. A 2015 study
found that approximately 10.2 million Muslims had converted to Christianity. Christianity is growing
in Africa, Asia, Latin America, the Muslim world, and Oceania. According to Scientific Elite: Nobel Laureates in the
United States by Harriet Zuckerman, a review of American Nobel Prizes awarded between 1901 and 1972, 72% of American
Nobel Prize laureates identified a Protestant background. Overall, Protestants won 84.2% of the Nobel Prizes in Chemistry,
60% in Medicine, and 58.6% in Physics awarded to Americans between 1901 and 1972. Christians have made myriad contributions
in a broad and diverse range of fields, including the sciences, arts, politics, literature, and business. According
to 100 Years of Nobel Prizes, a review of Nobel Prizes awarded between 1901 and 2000 reveals that 65.4% of Nobel Prize
laureates identified Christianity in its various forms as their religious preference.
Sony Music Entertainment Inc. (sometimes known as Sony Music or by the initials SME) is an American music corporation managed
and operated by Sony Corporation of America (SCA), a subsidiary of Japanese conglomerate Sony Corporation. In 1929,
the enterprise was first founded as American Record Corporation (ARC) and, in 1938, was renamed Columbia Recording
Corporation, following ARC's acquisition by CBS. In 1966, the company was reorganized to become CBS Records. In 1987,
Sony Corporation of Japan bought the company, and in 1991, renamed it SME. It is the world's second largest recorded
music company, after Universal Music Group. In 2004, SME and Bertelsmann Music Group merged as Sony BMG Music Entertainment.
When Sony acquired BMG's half of the conglomerate in 2008, Sony BMG reverted to the SME name. The buyout led to the
dissolution of BMG, which then relaunched as BMG Rights Management. Of the "Big Three" record companies, SME is the
second largest, behind Universal Music Group and ahead of Warner Music Group. In 1929, ARC was founded through a merger
of several smaller record companies; this enterprise ultimately became SME. In
the depths of the Great Depression, the Columbia Phonograph Company (founded in 1888) in the U.S. (including its
Okeh Records subsidiary) was acquired by ARC in 1934. ARC was acquired in 1938 by the Columbia Broadcasting System
(CBS, which, in turn, had been formed by the Columbia Phonograph Company, but then sold off). ARC was renamed Columbia
Recording Corporation. The Columbia Phonograph Company had international subsidiaries and affiliates such as the
Columbia Graphophone Company in the United Kingdom, but they were sold off prior to CBS acquiring American Columbia.
RCA Victor Records executive Ted Wallerstein convinced CBS head William S. Paley to buy ARC and Paley made Wallerstein
head of the newly acquired record company. The renamed company made Columbia its flagship label with Okeh its subsidiary
label while deemphasizing ARC's other labels. This allowed ARC's leased labels Brunswick Records and Vocalion Records
to revert to former owner Warner Bros. which sold the labels to Decca Records. Columbia kept the Brunswick catalogue
recorded from December 1931 onward which was reissued on the Columbia label as well as the Vocalion label material
from the same time period which was reissued on the Okeh label. Wallerstein, who was promoted at the end of 1947
from president to chairman of the record company, restored Columbia's status as a leading record company and spearheaded
the successful introduction of the long playing (LP) record before he retired as Columbia's chairman in 1951. James
Conkling then became head of Columbia Records. Also in 1951, Columbia severed its ties with the EMI-owned record
label of the same name and began a UK distribution deal with Philips Records, whereas Okeh Records continued to be
distributed by EMI on the Columbia label. Columbia founded Epic Records in 1953. In 1956, Conkling left Columbia;
he would help establish the National Academy of Recording Arts and Sciences before eventually becoming the first
president of the newly launched Warner Bros. Records, and Goddard Lieberson began the first of two stints as head
of the record company. In 1958, Columbia founded another label, Date Records, which initially issued rockabilly music.
In 1960, Columbia/CBS began negotiations with its main international distributor Philips Records with the goal of
CBS starting its own global record company. Philips' acquisition of Mercury Records in the US in 1961 paved the way
for this. CBS only had the rights to the Columbia name in North America; therefore the international arm founded
in 1961 and launched in 1962 utilized the "CBS Records" name only, with Philips Records distributing the label in
Europe. CBS's Mexican record company, Discos Columbia, was renamed Discos CBS by 1963. By 1962, their Columbia Record
Productions unit was operating four plants around the United States located in Los Angeles; Terre Haute, Indiana;
Bridgeport, Connecticut; and Pitman, New Jersey, which manufactured records for not only Columbia's own labels, but
also for independent record labels. In 1964, CBS established its own UK distribution with the acquisition of Oriole
Records. EMI continued to distribute Epic and Okeh label material on the Columbia label in the UK until the distribution
deal with EMI expired in 1968 when CBS took over distribution. With the record company a global operation in 1965,
the Columbia Broadcasting System upper management started pondering changing the name of their record company subsidiary
from Columbia Records to CBS Records. Also in late 1965, the Date subsidiary label was revived. This label released
the first string of hits for Peaches & Herb and scored a few minor hits from various other artists. Date's biggest
success was "Time of the Season" by the Zombies, peaking at #2 in 1969. The label was discontinued in 1970. In 1966,
CBS reorganized its corporate structure, with Lieberson promoted to head the new "CBS-Columbia Group", which made the
now renamed CBS Records company a separate unit of this new group run by Clive Davis. In March 1968, CBS and Sony
formed CBS/Sony Records, a Japanese business joint venture. With Sony being one of the developers behind the compact
disc digital music media, a compact disc production plant was constructed in Japan under the joint venture, allowing
CBS to begin supplying some of the first compact disc releases for the American market in 1983. The CBS Records Group
was led very successfully by Clive Davis until his dismissal in 1972, after it was discovered that Davis had used
CBS funds to finance his personal life, including an expensive bar mitzvah party for his son. He was replaced first
by former head Goddard Lieberson, then in 1975 by the colorful and controversial lawyer Walter Yetnikoff, who led
the company until 1990. In February 2016, over a hundred thousand people signed a petition in just twenty-four hours,
calling for a boycott of Sony Music and all other Sony-affiliated businesses after rape allegations against music
producer Dr. Luke were made by musical artist Kesha. Kesha asked a New York City Supreme Court to free her from her
contract with Sony Music, but the court denied the request, prompting widespread public and media response. In recent
years, dozens of rights-holders, including Sony Music, have sent complaints about
Wikipedia.org directly to Google to have content removed. In July 2013, Sony Music withdrew from the Greek market
due to an economic crisis. Albums released by Sony Music in Greece from domestic and foreign artists are carried
by Feelgood Records. In March 2012, Sony Music reportedly closed its Philippines office due to piracy, moving distribution
of SME releases in the Philippines to Ivory Music. Doug Morris, who was head of Warner Music Group, then
Universal Music, became chairman and CEO of the company on July 1, 2011. Sony Music underwent a restructuring after
Morris' arrival. He was joined by L.A. Reid, who became the chairman and CEO of Epic Records. Under Reid, multiple
artists from the Jive half of the former RCA/Jive Label Group moved to Epic. Peter Edge became the new CEO of the
RCA Records unit. The RCA Music Group closed down Arista, J Records and Jive Records in October 2011, with the artists
from those labels being moved to RCA Records. In the 1980s to early 1990s, there was a CBS imprint label in the US
known as CBS Associated Records. Tony Martell, veteran CBS and Epic Records A&R Vice President was head of this label
and signed artists including Ozzy Osbourne, the Fabulous Thunderbirds, Electric Light Orchestra, Joan Jett, and Henry
Lee Summer. This label was a part of (Epic/Portrait/Associated) wing of sub labels at CBS which shared the same national
and regional staff as the rest of Epic Records and was a part of the full CBS Records worldwide distribution system.
In 1986, CBS sold its music publishing arm, CBS Songs, to Stephen Swid, Martin Bandier, and Charles Koppelman for
$125 million making it the foundation of their SBK Entertainment. By 1987, CBS was the only "big three" American
TV network to have a co-owned record company. ABC had sold its record division to MCA Records in 1979, and in 1986,
NBC's parent company RCA was sold to General Electric, who then sold off all other RCA units, including the record
division (which was bought by Ariola Records, later known as BMG). On November 17, 1987, SCA acquired CBS Records,
which hosted such acts as Michael Jackson, for US$2 billion. CBS Inc., now CBS Corporation, retained the rights to
the CBS name for music recordings but granted Sony a temporary license to use the CBS name. CBS Corporation founded
a new CBS Records in 2006, which is distributed by Sony through its RED subsidiary. In 1989, CBS Records re-entered
the music publishing business by acquiring Nashville music publisher Tree International Publishing for more than
$30 million. RCA/Jive Label Group CEO Barry Weiss left the company in March 2011 to become the new CEO of Island
Def Jam and Universal Republic, which were both part of Universal Music Group. Weiss had been the RCA/Jive Label
Group CEO since 2008 and was head of Jive Records since 1991. On October 11, 2011, Doug Morris announced that Mel
Lewinter had been named Executive Vice President of Label Strategy. Lewinter previously served as chairman and CEO
of Universal Motown Republic Group. In January 2012, Dennis Kooker was named President of Global Digital Business
and US Sales. In August 2004, Sony entered a joint venture with equal partner Bertelsmann, merging Sony Music and
Bertelsmann Music Group of Germany to establish Sony BMG Music Entertainment. However, Sony continued to operate its
Japanese music business independently from Sony BMG while BMG Japan was made part of the merger. On August 5, 2008,
SCA and Bertelsmann announced that Sony had agreed to acquire Bertelsmann's 50% stake in Sony BMG. Sony completed
its acquisition of Bertelsmann's 50% stake in the companies' joint venture on October 1, 2008. The company, once
again named Sony Music Entertainment Inc., became a wholly owned subsidiary of Sony Corporation through its US subsidiary
SCA. The last few albums to feature a Sony BMG logo were Thriller 25 by Michael Jackson, I Am... Sasha Fierce by
Beyoncé, Keeps Gettin' Better: A Decade of Hits by Christina Aguilera, and Safe Trip Home by Dido. A temporary logo
was unveiled beginning December 1, 2008. The present logo was unveiled in March 2009. On July 1, 2009, SME and IODA
announced their global strategic partnership to leverage combined worldwide online retail distribution networks and
complementary technologies to support independent labels and music rightsholders. In March 2010, Sony Corp partnered
with The Michael Jackson Company in a contract worth more than $250 million, the largest deal in recorded music history.
The merger made Columbia and Epic sister labels to RCA Records, which was once owned by RCA which also owned CBS
rival NBC. It also started the process of bringing BMG's Arista Records back under common ownership with its former
parent Columbia Pictures, a Sony division since 1989, and also brought Arista founder Clive Davis back into the fold.
Davis is still with Sony Music as Chief Creative Officer. In 1995, Sony and Michael Jackson formed a joint venture
which merged Sony's music publishing operations with Jackson's ATV Music to form Sony/ATV Music Publishing. Sony
renamed the record company Sony Music Entertainment (SME) on January 1, 1991, fulfilling the terms set under the
1988 buyout, which granted only a transitional license to the CBS trademark. The CBS Associated label was renamed
Epic Associated. Also on January 1, 1991, to replace the CBS label, Sony reintroduced the Columbia label worldwide,
which it previously held in the United States and Canada only, after it acquired the international rights to the
trademark from EMI in 1990. Japan is the only country where Sony does not have rights to the Columbia name as it
is controlled by Nippon Columbia, an unrelated company. Thus, to this day, Sony Music Entertainment Japan does
not use the Columbia trademark for Columbia label recordings from outside Japan which are issued in Japan. The Columbia
Records trademark's rightsholder in Spain was Bertelsmann Music Group, Germany, which Sony Music subsequently subsumed
via a 2004 merger, followed by a 2008 buyout. In 1970, CBS Records revived the Embassy Records imprint in UK and
Europe, which had been defunct since CBS had taken control of Embassy's parent company, Oriole, in 1964. The purpose
of the revived Embassy imprint was to release budget reissues of albums that had originally been released in the
United States on Columbia Records (or its subsidiaries). Many albums, by artists as diverse as Andy Williams, Johnny
Cash, Barbra Streisand, the Byrds, Tammy Wynette, Laura Nyro and Sly & the Family Stone were issued on Embassy, before
the label was once again discontinued in 1980. In 2011–2012, Sony Music Inc. expressed support for SOPA and PIPA.
The Stop Online Piracy Act and Protect IP Act stirred up controversy within the entertainment industry following
their introduction in the United States Congress. In May 2012, Sony Music filed charges against the website
IsoHunt. The plaintiffs claim in the court document filed at the Supreme Court of British Columbia: "The IsoHunt
Websites have been designed and are operated by the defendants with the sole purpose of profiting from rampant copyright
infringement which defendants actively encourage, promote, authorize, induce, aid, abet, materially contribute to
and commercially profit from."
Oklahoma City is the capital and largest city of the state of Oklahoma. The county seat of Oklahoma County, the city ranks
27th among United States cities in population. The population grew following the 2010 Census and was estimated
at 620,602 as of July 2014. As of 2014, the Oklahoma City metropolitan area had a population
of 1,322,429, and the Oklahoma City-Shawnee Combined Statistical Area had a population of 1,459,758 (Chamber of Commerce)
residents, making it Oklahoma's largest metropolitan area. Oklahoma City's city limits extend into Canadian, Cleveland,
and Pottawatomie counties, though much of those areas outside of the core Oklahoma County area are suburban or rural
(watershed). The city ranks as the eighth-largest city in the United States by land area (including consolidated
city-counties; it is the largest city in the United States by land area whose government is not consolidated with
that of a county or borough). Oklahoma City, lying in the Great Plains region, features one of the largest livestock
markets in the world. Oil, natural gas, petroleum products and related industries are the largest sector of the local
economy. The city is situated in the middle of an active oil field and oil derricks dot the capitol grounds. The
federal government employs large numbers of workers at Tinker Air Force Base and the United States Department of
Transportation's Mike Monroney Aeronautical Center (these two sites house several offices of the Federal Aviation
Administration and the Transportation Department's Enterprise Service Center, respectively). Oklahoma City lies on
the I-35 Corridor, one of the primary travel corridors into neighboring Texas and Mexico. Located in the Frontier
Country region of the state, the city's northeast section lies in an ecological region known as the Cross Timbers.
The city was founded during the Land Run of 1889, and grew to a population of over 10,000 within hours of its founding.
The city was the scene of the April 19, 1995 bombing of the Alfred P. Murrah Federal Building, in which 168 people
died. It was the deadliest terror attack in the history of the United States until the attacks of September 11, 2001,
and remains the deadliest act of domestic terrorism in U.S. history. Oklahoma City was settled on April 22, 1889,
when the area known as the "Unassigned Lands" was opened for settlement in an event known as "The Land Run". Some
10,000 homesteaders settled the area that would become the capital of Oklahoma. The town grew quickly; the population
doubled between 1890 and 1900. Early leaders of the development of the city included Anton Classen, John Shartel,
Henry Overholser and James W. Maney. By the time Oklahoma was admitted to the Union in 1907, Oklahoma City had surpassed
Guthrie, the territorial capital, as the population center and commercial hub of the new state. Soon after, the capital
was moved from Guthrie to Oklahoma City. Oklahoma City was a major stop on Route 66 during the early part of the
20th century; it was prominently mentioned in Bobby Troup's 1946 jazz classic, "(Get Your Kicks on) Route 66", later
made famous by artist Nat King Cole. Before World War II, Oklahoma City developed major stockyards, attracting jobs
and revenue formerly in Chicago and Omaha, Nebraska. With the 1928 discovery of oil within the city limits (including
under the State Capitol), Oklahoma City became a major center of oil production. Post-war growth accompanied the
construction of the Interstate Highway System, which made Oklahoma City a major interchange as the convergence of
I-35, I-40 and I-44. It was also aided by federal development of Tinker Air Force Base. Patience Latting was elected
Mayor of Oklahoma City in 1971, becoming the city's first female mayor. Latting was also the first woman to serve
as mayor of a U.S. city with over 350,000 residents. In 1993, the city passed a massive redevelopment package known
as the Metropolitan Area Projects (MAPS), intended to rebuild the city's core with civic projects to establish more
activities and life to downtown. The city added a new baseball park; central library; renovations to the civic center,
convention center and fairgrounds; and a water canal in the Bricktown entertainment district. Water taxis transport
passengers within the district, adding color and activity along the canal. MAPS has become one of the most successful
public-private partnerships undertaken in the U.S., exceeding $3 billion in private investment as of 2010. As a result
of MAPS, the population living in downtown housing has increased sharply, together with demand for additional
residential and retail amenities, such as grocery, services, and shops. Since the MAPS projects' completion, the
downtown area has seen continued development. Several downtown buildings are undergoing renovation/restoration. Notable
among these was the restoration of the Skirvin Hotel in 2007. The famed First National Center is being renovated.
Residents of Oklahoma City suffered substantial losses on April 19, 1995 when Timothy McVeigh detonated a bomb in
front of the Murrah building. The building was destroyed (the remnants of which had to be imploded in a controlled
demolition later that year), more than 100 nearby buildings suffered severe damage, and 168 people were killed. The
site has been commemorated as the Oklahoma City National Memorial and Museum. Since its opening in 2000, over three
million people have visited. Every year on April 19, survivors, families and friends return to the memorial to read
the names of each person lost. The "Core-to-Shore" project was created to relocate I-40 one mile (1.6 km) south and
replace it with a boulevard to create a landscaped entrance to the city. This also allows the central portion of
the city to expand south and connect with the shore of the Oklahoma River. Several elements of "Core to Shore" were
included in the MAPS 3 proposal approved by voters in late 2009. According to the United States Census Bureau, the
city has a total area of 620.34 square miles (1,606.7 km2), of which, 601.11 square miles (1,556.9 km2) of it is
land and 19.23 square miles (49.8 km2) of it is water. The total area is 3.09 percent water. Oklahoma City lies in
the Sandstone Hills region of Oklahoma, known for hills of 250 to 400 feet (76 to 122 m) and two species of oak: blackjack
oak (Quercus marilandica) and post oak (Q. stellata). The northeastern part of the city and its eastern suburbs fall
into an ecological region known as the Cross Timbers. The city is roughly bisected by the North Canadian River (recently
renamed the Oklahoma River inside city limits). The North Canadian once had sufficient flow to flood every year,
wreaking destruction on surrounding areas, including the central business district and the original Oklahoma City
Zoo. In the 1940s, a dam was built on the river to control flooding, which reduced its level. In the 1990s,
as part of the citywide revitalization project known as MAPS, the city built a series of low-water dams, returning
water to the portion of the river flowing near downtown. The city has three large lakes: Lake Hefner and Lake Overholser,
in the northwestern quarter of the city; and the largest, Lake Stanley Draper, in the sparsely populated far southeast
portion of the city. The population density normally reported for Oklahoma City using the area of its city limits
can be misleading. Its urbanized zone covers roughly 244 sq mi (630 km2), resulting in a density of 2,500 per
square mile (2013 est), compared with larger rural watershed areas incorporated by the city, which cover the remaining
377 sq mi (980 km2) of the city limits. The city is bisected geographically and culturally by the North Canadian
River, which divides North Oklahoma City and South Oklahoma City. The two halves of the city were founded and
platted as separate cities, but soon grew together. The north side is characterized by very diverse and
fashionable urban neighborhoods near the city center and sprawling suburbs further north. South Oklahoma City is
generally more blue-collar and working-class and significantly more industrial, having grown up around the Stockyards and meat-packing plants at the turn of the 20th century, and is currently the center of the city's rapidly growing Latino community. Downtown Oklahoma City, which has 7,600 residents, is seeing an influx of new private investment and large-scale public works projects, which have helped to resuscitate a central business district left almost deserted
by the Oil Bust of the early 1980s. The centerpiece of downtown is the newly renovated Crystal Bridge and Myriad
Botanical Gardens, one of the few elements of the Pei Plan to be completed. In the next few years, a massive new central park will link the gardens near the CBD, and the new convention center to be built just south of it, to the North Canadian River as part of a works project known as Core to Shore; the new park is part of MAPS 3, a collection of
civic projects funded by a 1-cent temporary (seven-year) sales tax increase. Oklahoma City has a humid subtropical
climate (Köppen: Cfa), with frequent variations in weather daily and seasonally, except during the consistently hot
and humid summer months. Prolonged and severe droughts (sometimes leading to wildfires in the vicinity) as well as
very heavy rainfall leading to flash flooding and flooding occur with some regularity. Consistent winds, usually
from the south or south-southeast during the summer, help temper the hotter weather. Consistent northerly winds during
the winter can intensify cold periods. Severe ice storms and snowstorms happen sporadically during the winter. The
average temperature is 61.4 °F (16.3 °C), with the monthly daily average ranging from 39.2 °F (4.0 °C) in January
to 83.0 °F (28.3 °C) in July. Extremes range from −17 °F (−27 °C) on February 12, 1899 to 113 °F (45 °C) on August
11, 1936 and August 3, 2012; the last sub-zero (°F) reading was −5 °F (−21 °C) on February 10, 2011. Temperatures
reach 100 °F (38 °C) on 10.4 days of the year, 90 °F (32 °C) on nearly 70 days, and fail to rise above freezing on
8.3 days. The city receives about 35.9 inches (91.2 cm) of precipitation annually, of which 8.6 inches (21.8 cm)
is snow. Oklahoma City has a very active severe weather season from March through June, especially during April and
May. Being in the center of what is colloquially referred to as Tornado Alley, it is prone to especially frequent
and severe tornadoes, as well as very severe hailstorms and occasional derechoes. Tornadoes have occurred in every
month of the year and a secondary smaller peak also occurs during autumn, especially October. The Oklahoma City metropolitan
area is one of the most tornado-prone major cities in the world, with about 150 tornadoes striking within the city
limits since 1890. Since the time weather records have been kept, Oklahoma City has been struck by thirteen violent
tornadoes, eleven F/EF4s and two F/EF5s. On May 3, 1999, parts of southern Oklahoma City and nearby suburban communities
suffered from one of the most powerful tornadoes on record, an F5 on the Fujita scale, with wind speeds estimated
by radar at 318 mph (510 km/h). On May 20, 2013, far southwest Oklahoma City, along with Newcastle and Moore, was
hit again by an EF5 tornado; it was 0.5 to 1.3 miles (0.80 to 2.09 km) wide and killed 23 people. Less than two weeks
later, on May 31, another outbreak affected the Oklahoma City area, including an EF1 and an EF0 within the city and
a tornado several miles west of the city that was 2.6 miles (4.2 km) in width, the widest tornado ever recorded.
With 19.48 inches of rainfall, May 2015 was by far Oklahoma City's record-wettest month since record keeping began
in 1890. Across Oklahoma and Texas generally, there was record flooding in the latter part of the month. As of the
2010 census, there were 579,999 people, 230,233 households, and 144,120 families residing in the city. The population
density was 956.4 inhabitants per square mile (369.3/km²). There were 256,930 housing units at an average density
of 375.9 per square mile (145.1/km²). Of the 230,233 households, 29.4% had children under the age of 18 living with them, 43.4% were married couples living together, 13.9% had a female householder with no husband present, and 37.4% were non-families. One-person households accounted for 30.5% of all households, and 8.7% of all households had someone living alone who was 65 years of age or older. The average household size was 2.47 and the average family
size was 3.11. In the 2000 Census, Oklahoma City's age composition was 25.5% under the age of 18, 10.7% from 18 to
24, 30.8% from 25 to 44, 21.5% from 45 to 64, and 11.5% who were 65 years of age or older. The median age was 34
years. For every 100 females there were 95.6 males. For every 100 females age 18 and over, there were 92.7 males.
Oklahoma City has experienced significant population increases since the late 1990s. In May 2014, the U.S. Census Bureau announced Oklahoma City had an estimated population of 620,602 in 2014 and that it had grown 5.3 percent between
April 2010 and June 2013. Since the official Census in 2000, Oklahoma City had grown 21 percent (a 114,470 raw increase)
according to the Bureau estimates. The 2014 estimate of 620,602 is the largest population Oklahoma City has ever
recorded. It is the first city in the state to record a population greater than 600,000 residents and has the largest municipal population of the Great Plains region (OK, KS, NE, SD, ND). Oklahoma City is the principal city of the
eight-county Oklahoma City Metropolitan Statistical Area in Central Oklahoma and is the state's largest urbanized
area. Based on population rank, the metropolitan area was the 42nd largest in the nation as of 2012. With regard to Mexican drug cartels, Oklahoma City has traditionally been the territory of the notorious Juárez Cartel, but the
Sinaloa Cartel has been reported as trying to establish a foothold in Oklahoma City. There are many rival gangs in Oklahoma City; one of them, the Southside Locos, traditionally known as Sureños, is headquartered in the city. Oklahoma City has also had its share of brutal crimes, particularly in the 1970s. The worst occurred in 1978, when six employees of a Sirloin Stockade restaurant on the city's south side were murdered execution-style
in the restaurant's freezer. An intensive investigation followed, and the three individuals involved, who also killed
three others in Purcell, Oklahoma, were identified. One, Harold Stafford, died in a motorcycle accident in Tulsa
not long after the restaurant murders. Another, Verna Stafford, was initially sentenced to death but, after being granted a new trial, was resentenced to life without parole. Roger Dale Stafford, considered the mastermind
of the murder spree, was executed by lethal injection at the Oklahoma State Penitentiary in 1995. The Oklahoma City Police Department has a uniformed force of 1,169 officers and more than 300 civilian employees. The Department has a central
police station and five substations covering 2,500 police reporting districts that average 1/4 square mile in size.
On April 19, 1995, the Alfred P. Murrah Federal Building was destroyed by a fertilizer bomb manufactured and detonated
by Timothy McVeigh. The blast and catastrophic collapse killed 168 people and injured over 680. The blast shockwave
destroyed or damaged 324 buildings within a 340-meter radius, destroyed or burned 86 cars, and shattered glass in
258 nearby buildings, causing at least an estimated $652 million worth of damage. McVeigh was executed by lethal injection on June 11, 2001. The bombing remains the deadliest act of domestic terrorism in US history and was the deadliest terrorist attack on US soil prior to the September 11 attacks. Other large employers within the MSA region, though not in Oklahoma City proper, include: Tinker Air
Force Base (27,000); University of Oklahoma (11,900); University of Central Oklahoma (2,900); and Norman Regional
Hospital (2,800). According to the Oklahoma City Chamber of Commerce, the metropolitan area's economic output grew
by 33 percent between 2001 and 2005 due chiefly to economic diversification. Its gross metropolitan product was $43.1
billion in 2005 and grew to $61.1 billion in 2009. In 2008, Forbes magazine named Oklahoma City the most "recession
proof city in America". The magazine reported that the city had falling unemployment, one of the strongest housing
markets in the country and solid growth in energy, agriculture and manufacturing. However, during the early 1980s,
Oklahoma City had one of the worst job and housing markets due to the bankruptcy of Penn Square Bank in 1982 and
then the post-1985 crash in oil prices.[citation needed] Other theaters include Lyric Theatre, Jewel Box Theatre,
Kirkpatrick Auditorium, the Poteet Theatre, the Oklahoma City Community College Bruce Owen Theater and the 488-seat
Petree Recital Hall, at the Oklahoma City University campus. The university also opened the Wanda L Bass School of
Music and auditorium in April 2006. Science Museum Oklahoma (formerly Kirkpatrick Science and Air Space Museum at Omniplex) houses science and aviation exhibits and an IMAX theater. The museum formerly housed the International Photography Hall of Fame (IPHF), which exhibited photographs, cameras, and other artifacts preserving the history of photography. IPHF honors those who have made significant contributions to the art and/or science of photography; it relocated to St. Louis, Missouri, in 2013. The Museum of Osteology houses more
than 300 real animal skeletons. Focusing on the form and function of the skeletal system, this 7,000 sq ft (650 m2)
museum displays hundreds of skulls and skeletons from all corners of the world. Exhibits include adaptation, locomotion,
classification and diversity of the vertebrate kingdom. The Museum of Osteology is the only one of its kind in America.
The National Cowboy & Western Heritage Museum has galleries of western art and is home to the Hall of Great Western
Performers. The city will also be home to the American Indian Cultural Center and Museum, which began construction in 2009 on the south side of Interstate 40, southeast of Bricktown, although completion of the facility has been held up by insufficient funding. The Oklahoma City National Memorial in the northern part of Oklahoma City's downtown was created, as the inscription on its eastern gate reads, "to honor the victims, survivors, rescuers, and all who were changed forever on April 19, 1995"; the memorial was built on the land formerly occupied by the Alfred P. Murrah Federal Building complex prior to its 1995 bombing. The outdoor Symbolic Memorial
can be visited 24 hours a day for free, and the adjacent Memorial Museum, located in the former Journal Record building
damaged by the bombing, can be entered for a small fee. The site is also home to the National Memorial Institute
for the Prevention of Terrorism, a non-partisan, nonprofit think tank devoted to the prevention of terrorism. The
American Banjo Museum located in the Bricktown Entertainment district is dedicated to preserving and promoting the
music and heritage of America's native musical instrument, the banjo. Its collection is valued at $3.5 million. Interpretive exhibits trace the evolution of the banjo from its humble roots in American slavery to bluegrass, folk, and world music. The Oklahoma History Center is the history museum of the
state of Oklahoma. Located across the street from the governor's mansion at 800 Nazih Zuhdi Drive in northeast Oklahoma
City, the museum opened in 2005 and is operated by the Oklahoma Historical Society. It preserves the history of Oklahoma
from the prehistoric to the present day. Oklahoma City is home to several professional sports teams, including the
Oklahoma City Thunder of the National Basketball Association. The Thunder is the city's second "permanent" major
professional sports franchise after the now-defunct AFL Oklahoma Wranglers and is the third major-league team to
call the city home when considering the temporary hosting of the New Orleans/Oklahoma City Hornets for the 2005–06
and 2006–07 NBA seasons. Other professional sports clubs in Oklahoma City include the Oklahoma City Dodgers, the
Triple-A affiliate of the Los Angeles Dodgers, the Oklahoma City Energy FC of the United Soccer League, and the Crusaders of Oklahoma Rugby Football Club of USA Rugby. Chesapeake Energy Arena downtown is the principal multipurpose arena
in the city which hosts concerts, NHL exhibition games, and many of the city's pro sports teams. In 2008, the Oklahoma
City Thunder became the major tenant. Located nearby in Bricktown, the Chickasaw Bricktown Ballpark is the home to
the city's baseball team, the Dodgers. "The Brick", as it is locally known, is considered one of the finest minor
league parks in the nation.[citation needed] Oklahoma City is the annual host of the Big 12 Baseball Tournament,
the World Cup of Softball, and the annual NCAA Women's College World Series. The city held the first and second rounds of the 2005 NCAA Men's Basketball Tournament and hosted the Big 12 Men's and Women's Basketball Tournaments in 2007 and 2009.
The major universities in the area – University of Oklahoma, Oklahoma City University, and Oklahoma State University
– often schedule major basketball games and other sporting events at Chesapeake Energy Arena and Chickasaw Bricktown
Ballpark, although most home games are played at their campus stadiums. Other major sporting events include Thoroughbred
and Quarter horse racing circuits at Remington Park and numerous horse shows and equine events that take place at
the state fairgrounds each year. There are numerous golf courses and country clubs spread around the city. The state
of Oklahoma hosts a highly competitive high school football culture, with many teams in the Oklahoma City metropolitan
area. The Oklahoma Secondary School Activities Association (OSSAA) organizes high school football into eight distinct
classes based on the size of school enrollment. Beginning with the largest, the classes are: 6A, 5A, 4A, 3A, 2A,
A, B, and C. Class 6A is broken into two divisions. Oklahoma City area schools in this division include: Edmond North,
Mustang, Moore, Yukon, Edmond Memorial, Edmond Santa Fe, Norman North, Westmoore, Southmoore, Putnam City North,
Norman, Putnam City, Putnam City West, U.S. Grant, Midwest City. The Oklahoma City Thunder of the National Basketball
Association (NBA) has called Oklahoma City home since the 2008–09 season, when owner Clayton Bennett relocated the
franchise from Seattle, Washington. The Thunder plays home games at the Chesapeake Energy Arena in downtown Oklahoma
City, known affectionately in the national media as 'the Peake' and 'Loud City'. The Thunder is known by several
nicknames, including "OKC Thunder" and simply "OKC", and its mascot is Rumble the Bison. After a lackluster debut season in Oklahoma City in 2008–09, the Thunder secured the eighth seed in the 2010 NBA Playoffs the next year on the strength of its first 50-win season, winning two games in the first round against the Los Angeles
Lakers. In 2012, Oklahoma City made it to the NBA Finals, but lost to the Miami Heat in five games. In 2013 the Thunder
reached the Western Conference semifinals without All-Star guard Russell Westbrook, who was injured in their first
round series against the Houston Rockets, only to lose to the Memphis Grizzlies. In 2014 Oklahoma City again reached
the NBA's Western Conference Finals but eventually lost to the San Antonio Spurs in six games. The Oklahoma City Thunder has been regarded by sports analysts as one of the elite franchises of the NBA's Western Conference and as a media darling touted as the future of the league. Oklahoma City has earned Northwest Division titles every year since 2009 and has consistently improved its win total, reaching 59 wins in 2014. The Thunder is led by first-year head coach
Billy Donovan and is anchored by several NBA superstars, including perennial All-Star point guard Russell Westbrook,
2014 MVP and four-time NBA scoring champion Kevin Durant, and Defensive Player of the Year nominee and shot-blocker
Serge Ibaka. In the aftermath of Hurricane Katrina, the NBA's New Orleans Hornets (now the New Orleans Pelicans)
temporarily relocated to the Ford Center, playing the majority of its home games there during the 2005–06 and 2006–07
seasons. The team became the first NBA franchise to play regular-season games in the state of Oklahoma.[citation
needed] The team was known as the New Orleans/Oklahoma City Hornets while playing in Oklahoma City. The team ultimately
returned to New Orleans full-time for the 2007–08 season. The Hornets played their final home game in Oklahoma City
during the exhibition season on October 9, 2007 against the Houston Rockets. One of the more prominent landmarks
downtown is the Crystal Bridge at the Myriad Botanical Gardens, a large downtown urban park. Designed by I. M. Pei,
the Crystal Bridge is a tropical conservatory in the area. The park has an amphitheater, known as the Water Stage.
In 2007, following a renovation of the stage, Oklahoma Shakespeare in the Park relocated to the Myriad Gardens. The
Myriad Gardens will undergo a massive renovation in conjunction with the recently built Devon Tower directly north
of it. The Oklahoma City Zoo and Botanical Garden is home to numerous natural habitats, WPA era architecture and
landscaping, and hosts major touring concerts during the summer at its amphitheater. Oklahoma City also has two amusement
parks, Frontier City theme park and White Water Bay water park. Frontier City is an 'Old West'-themed amusement park.
The park also features a recreation of a western gunfight at the 'OK Corral' and many shops that line the "Western"
town's main street. Frontier City also hosts a national concert circuit at its amphitheater during the summer. Oklahoma
City also has a combination racetrack and casino open year-round, Remington Park, which hosts both Quarter horse
(March – June) and Thoroughbred (August – December) seasons. Walking trails line Lake Hefner and Lake Overholser
in the northwest part of the city and downtown at the canal and the Oklahoma River. The majority of the east shore
area is taken up by parks and trails, including a new leashless dog park and the postwar-era Stars and Stripes Park.
Lake Stanley Draper is the city's largest and most remote lake. Oklahoma City has a major park in each quadrant of
the city, going back to the first parks masterplan. Will Rogers Park, Lincoln Park, Trosper Park, and Woodson Park
were once connected by the Grand Boulevard loop, some sections of which no longer exist. Martin Park Nature Center
is a natural habitat in far northwest Oklahoma City. Will Rogers Park is home to the Lycan Conservatory, the Rose
Garden, and Butterfly Garden, all built in the WPA era. Oklahoma City is home to the American Banjo Museum, which
houses a large collection of highly decorated banjos from the early 20th century and exhibits on the history of the
banjo and its place in American history. Concerts and lectures are also held there. In April 2005, the Oklahoma City
Skate Park at Wiley Post Park was renamed the Mat Hoffman Action Sports Park to recognize Mat Hoffman, an Oklahoma City area resident and businessman who was instrumental in the design of the skate park and is a 10-time BMX World
Vert champion. In March 2009, the Mat Hoffman Action Sports Park was named by the National Geographic Society Travel
Guide as one of the "Ten Best." The City of Oklahoma City has operated under a council-manager form of city government
since 1927. Mick Cornett serves as Mayor, having first been elected in 2004, and re-elected in 2006, 2010, and 2014.
Eight councilpersons represent each of the eight wards of Oklahoma City. City Manager Jim Couch was appointed in
late 2000. Couch previously served as assistant city manager, Metropolitan Area Projects Plan (MAPS) director and
utilities director prior to his service as city manager. The city is home to several colleges and universities. Oklahoma
City University, formerly known as Epworth University, was founded by the United Methodist Church on September 1,
1904 and is renowned for its performing arts, science, mass communications, business, law, and athletic programs.
OCU has its main campus in the north-central section of the city, near the city's Chinatown area. OCU Law is located
in the Midtown district near downtown, in the old Central High School building. The University of Oklahoma has several
institutions of higher learning in the city and metropolitan area, with OU Medicine and the University of Oklahoma
Health Sciences Center campuses located east of downtown in the Oklahoma Health Center district, and the main campus
located to the south in the suburb of Norman. OU Medicine hosts the state's only Level One trauma center. OU
Health Sciences Center is one of the nation's largest independent medical centers, employing more than 12,000 people.
OU is one of only four major universities in the nation to operate six medical schools.[clarification needed] The
third-largest university in the state, the University of Central Oklahoma, is located just north of the city in the
suburb of Edmond. Oklahoma Christian University, one of the state's private liberal arts institutions, is located
just south of the Edmond border, inside the Oklahoma City limits. Oklahoma City Community College in south Oklahoma
City is the second-largest community college in the state. Rose State College is located east of Oklahoma City in
suburban Midwest City. Oklahoma State University–Oklahoma City is located in the "Furniture District" on the Westside.
Northeast of the city is Langston University, the state's historically black college (HBCU). Langston also has an
urban campus in the eastside section of the city. Southern Nazarene University, which was founded by the Church of
the Nazarene, is a university located in suburban Bethany, which is surrounded by the Oklahoma City city limits.
Although technically not a university, the FAA's Mike Monroney Aeronautical Center has many aspects of an institution
of higher learning. Its FAA Academy is accredited by the North Central Association of Colleges and Schools. Its Civil
Aerospace Medical Institute (CAMI) has a medical education division responsible for aeromedical education in general
as well as the education of aviation medical examiners in the U.S. and 93 other countries. In addition, the National Academy of Sciences offers Research Associateship Programs providing fellowships and other grants for CAMI research. Oklahoma
City is home to the state's largest school district, Oklahoma City Public Schools. The district's Classen School
of Advanced Studies and Harding Charter Preparatory High School rank high among public schools nationally according
to a formula that looks at the number of Advanced Placement, International Baccalaureate and/or Cambridge tests taken
by the school's students divided by the number of graduating seniors. In addition, OKCPS's Belle Isle Enterprise
Middle School was named the top middle school in the state according to the Academic Performance Index and received the Blue Ribbon School Award in 2004 and again in 2011. KIPP Reach College Preparatory School in Oklahoma
City received the 2012 National Blue Ribbon, and its school leader, Tracy McDaniel Sr., was awarded the Terrel H. Bell Award for Outstanding Leadership. The Oklahoma School of Science and Mathematics, a school for some of the
state's most gifted math and science pupils, is also located in Oklahoma City. Oklahoma City has several public career
and technology education schools associated with the Oklahoma Department of Career and Technology Education, the
largest of which are Metro Technology Center and Francis Tuttle Technology Center. Private career and technology
education schools in Oklahoma City include Oklahoma Technology Institute, Platt College, Vatterott College, and Heritage
College. The Dale Rogers Training Center in Oklahoma City is a nonprofit vocational training center for individuals
with disabilities. The Oklahoman is Oklahoma City's major daily newspaper and is the most widely circulated in the
state. NewsOK.com is the Oklahoman's online presence. Oklahoma Gazette is Oklahoma City's independent newsweekly,
featuring such staples as local commentary, feature stories, restaurant reviews, movie listings, and music and entertainment coverage. The Journal Record is the city's daily business newspaper and okcBIZ is a monthly publication that
covers business news affecting those who live and work in Central Oklahoma. There are numerous community and international
newspapers locally that cater to the city's ethnic mosaic, such as The Black Chronicle, headquartered in the Eastside,
the OK VIETIMES and Oklahoma Chinese Times, located in Asia District, and various Hispanic community publications.
The Campus is the student newspaper at Oklahoma City University. Gay publications include The Gayly Oklahoman. An
upscale lifestyle publication called Slice Magazine is circulated throughout the metropolitan area. In addition,
there is a magazine published by Back40 Design Group called The Edmond Outlook. It contains local commentary and
human interest pieces direct-mailed to over 50,000 Edmond residents. Oklahoma City was home to several pioneers in
radio and television broadcasting. Oklahoma City's WKY Radio was the first radio station transmitting west of the
Mississippi River and the third radio station in the United States. WKY received its federal license in 1921 and
has continually broadcast under the same call letters since 1922. In 1928, WKY was purchased by E.K. Gaylord's Oklahoma
Publishing Company and affiliated with the NBC Red Network; in 1949, WKY-TV (channel 4) went on the air and later
became the first independently owned television station in the U.S. to broadcast in color. In mid-2002, WKY radio was purchased outright by Citadel Broadcasting, which was bought out by Cumulus Broadcasting in 2011. The Gaylord family had earlier sold WKY-TV, in 1976, and the station has since gone through a succession of owners (what is now KFOR-TV is owned by Tribune Broadcasting as of December 2013). The major U.S. broadcast television networks have affiliates in the
Oklahoma City market (ranked 41st for television by Nielsen and 48th for radio by Arbitron, covering a 34-county area serving the central, north-central and west-central sections of Oklahoma), including NBC affiliate KFOR-TV (channel
4), ABC affiliate KOCO-TV (channel 5), CBS affiliate KWTV-DT (channel 9, the flagship of locally based Griffin Communications),
PBS station KETA-TV (channel 13, the flagship of the state-run OETA member network), Fox affiliate KOKH-TV (channel
25), CW affiliate KOCB (channel 34), independent station KAUT-TV (channel 43), MyNetworkTV affiliate KSBI-TV (channel
52), and Ion Television owned-and-operated station KOPX-TV (channel 62). The market is also home to several religious
stations including TBN owned-and-operated station KTBO-TV (channel 14) and Norman-based Daystar owned-and-operated
station KOCM (channel 46). Oklahoma City is protected by the Oklahoma City Fire Department (OKCFD), which employs
1,015 paid professional firefighters. The current Chief of Department is G. Keith Bryant; the department is also commanded by three Deputy Chiefs, who, along with the department chief, oversee the Operational Services, Prevention
Services, and Support Services bureaus. The OKCFD currently operates out of 37 fire stations, located throughout
the city in six battalions. The OKCFD also operates a fire apparatus fleet of 36 engines (including 30 paramedic
engines), 13 ladders, 16 brush patrol units, six water tankers, two hazardous materials units, one Technical Rescue
Unit, one Air Supply Unit, six Arson Investigation Units, and one Rehabilitation Unit. Each engine is staffed with
a driver, an officer, and one to two firefighters, while each ladder company is staffed with a driver, an officer,
and one firefighter. Minimum staffing per shift is 213 personnel. The Oklahoma City Fire Department responds to over
70,000 emergency calls annually. Oklahoma City is an integral point on the United States Interstate Network, with
three major interstate highways – Interstate 35, Interstate 40, and Interstate 44 – bisecting the city. Interstate
240 connects Interstate 40 and Interstate 44 in south Oklahoma City, while Interstate 235 spurs from Interstate 44
in north-central Oklahoma City into downtown. Major state expressways through the city include Lake Hefner Parkway
(SH-74), the Kilpatrick Turnpike, Airport Road (SH-152), and Broadway Extension (US-77), which continues from I-235,
connecting Central Oklahoma City to Edmond. Lake Hefner Parkway runs through northwest Oklahoma City, while Airport
Road runs through southwest Oklahoma City and leads to Will Rogers World Airport. The Kilpatrick Turnpike loops around
north and west Oklahoma City. Oklahoma City also has several major national and state highways within its city limits.
Shields Boulevard (US-77) continues from E.K. Gaylord Boulevard in downtown Oklahoma City and runs south eventually
connecting to I-35 near the suburb of Moore. Northwest Expressway (Oklahoma State Highway 3) runs from North Classen
Boulevard in north-central Oklahoma City to the northwestern suburbs. Oklahoma City is served by two primary airports,
Will Rogers World Airport and the much smaller Wiley Post Airport (incidentally, the two honorees died in the same plane crash in Alaska). Will Rogers World Airport is the state's busiest commercial airport, with over 3.6 million
passengers annually. Tinker Air Force Base, in southeast Oklahoma City, is the largest military air depot in the nation, a major maintenance and deployment facility for the Navy and the Air Force, and the second-largest military
institution in the state (after Fort Sill in Lawton). METRO Transit is the city's public transit company. The main
transfer terminal is located downtown at NW 5th Street and Hudson Avenue. METRO Transit maintains limited coverage
of the city's main street grid using a hub-and-spoke system from the main terminal, making many journeys impractical because of the small number of bus routes offered and the fact that most trips require a transfer downtown. The city has recognized transit as a major issue for the rapidly growing and urbanizing city and has initiated several studies in recent years to improve the existing bus system, starting with a plan known as the Fixed Guideway Study. This
study identified several potential commuter transit routes from the suburbs into downtown OKC as well as feeder-line
bus and/or rail routes throughout the city. In December 2009, Oklahoma City voters passed MAPS 3, a $777 million (seven-year, 1-cent sales tax) initiative, which will include approximately $130 million in funding for an estimated 5-to-6-mile (8.0 to 9.7 km) modern streetcar in downtown Oklahoma City and the establishment of a transit hub. The streetcar is expected to begin construction in 2014 and be in operation around 2017. Oklahoma City and the surrounding metropolitan
area are home to a number of health care facilities and specialty hospitals. In Oklahoma City's MidTown district
near downtown resides the state's oldest and largest single-site hospital, St. Anthony Hospital and Physicians Medical
Center. OU Medicine, an academic medical institution located on the campus of The University of Oklahoma Health Sciences
Center, is home to OU Medical Center. OU Medicine operates Oklahoma's only level-one trauma center at the OU Medical
Center and the state's only level-one trauma center for children at Children's Hospital at OU Medicine, both of which
are located in the Oklahoma Health Center district. Other medical facilities operated by OU Medicine include OU Physicians
and OU Children's Physicians, the OU College of Medicine, the Oklahoma Cancer Center and OU Medical Center Edmond,
the latter being located in the northern suburb of Edmond. INTEGRIS Health owns several hospitals, including INTEGRIS
Baptist Medical Center, the INTEGRIS Cancer Institute of Oklahoma, and the INTEGRIS Southwest Medical Center. INTEGRIS
Health operates hospitals, rehabilitation centers, physician clinics, mental health facilities, independent living
centers and home health agencies located throughout much of Oklahoma. INTEGRIS Baptist Medical Center was named in
U.S. News & World Report's 2012 list of Best Hospitals. INTEGRIS Baptist Medical Center is ranked as high-performing in
the following categories: Cardiology and Heart Surgery; Diabetes and Endocrinology; Ear, Nose and Throat; Gastroenterology;
Geriatrics; Nephrology; Orthopedics; Pulmonology; and Urology. The Midwest Regional Medical Center is located in the suburb of Midwest City; other major hospitals in the city include the Oklahoma Heart Hospital and the Mercy Health
Center. There are 347 physicians for every 100,000 people in the city. In the American College of Sports Medicine's
annual ranking of the United States' 50 most populous metropolitan areas on the basis of community health, Oklahoma
City took last place in 2010, falling five places from its 2009 rank of 45. The ACSM's report, published as part
of its American Fitness Index program, cited, among other things, the poor diet of residents, low levels of physical
fitness, higher incidences of obesity, diabetes, and cardiovascular disease than the national average, low access
to recreational facilities like swimming pools and baseball diamonds, the paucity of parks and low investment by
the city in their development, the high percentage of households below the poverty level, and the lack of a state-mandated physical education curriculum as contributing factors.
A hunter-gatherer is a human living in a society in which most or all food is obtained by foraging (collecting wild plants
and pursuing wild animals), in contrast to agricultural societies, which rely mainly on domesticated species. Hunting
and gathering was humanity's first and most successful adaptation, occupying at least 90 percent of human history.
Following the invention of agriculture, hunter-gatherers have been displaced or conquered by farming or pastoralist
groups in most parts of the world. Only a few contemporary societies are classified as hunter-gatherers, and many
supplement their foraging activity with horticulture and/or keeping animals. In the 1950s, Lewis Binford suggested
that early humans were obtaining meat via scavenging, not hunting. Early humans in the Lower Paleolithic lived in
forests and woodlands, which allowed them to collect seafood, eggs, nuts, and fruits besides scavenging. Rather than
killing large animals for meat, according to this view, they used carcasses of such animals that had either been
killed by predators or that had died of natural causes. Archaeological and genetic data suggest that the source populations
of Paleolithic hunter-gatherers survived in sparsely wooded areas and dispersed through areas of high primary productivity
while avoiding dense forest cover. According to the endurance running hypothesis, long-distance running as in persistence
hunting, a method still practiced by some hunter-gatherer groups in modern times, was likely the driving evolutionary
force leading to the evolution of certain human characteristics. This hypothesis does not necessarily contradict
the scavenging hypothesis: both subsistence strategies could have been in use – sequentially, alternating or even
simultaneously. Hunting and gathering was presumably the subsistence strategy employed by human societies beginning
some 1.8 million years ago, by Homo erectus, and from its appearance some 0.2 million years ago by Homo sapiens.
It remained the only mode of subsistence until the end of the Mesolithic period some 10,000 years ago, and after
this was replaced only gradually with the spread of the Neolithic Revolution. Starting at the transition from the Middle to the Upper Paleolithic period, some 80,000 to 70,000 years ago, some hunter-gatherer bands began to specialize,
concentrating on hunting a smaller selection of (often larger) game and gathering a smaller selection of food. This
specialization of work also involved creating specialized tools, like fishing nets and hooks and bone harpoons. The
transition into the subsequent Neolithic period is chiefly defined by the unprecedented development of nascent agricultural
practices. Agriculture originated and spread in several different areas including the Middle East, Asia, Mesoamerica,
and the Andes beginning as early as 12,000 years ago. Forest gardening was also being used as a food production system
in various parts of the world over this period. Forest gardens originated in prehistoric times along jungle-clad
river banks and in the wet foothills of monsoon regions.[citation needed] In the gradual process of families improving
their immediate environment, useful tree and vine species were identified, protected and improved, whilst undesirable
species were eliminated. Eventually superior foreign species were selected and incorporated into the gardens. Many
groups continued their hunter-gatherer ways of life, although their numbers have continually declined, partly as
a result of pressure from growing agricultural and pastoral communities. Many of them reside in the developing world,
either in arid regions or tropical forests. Areas that were formerly available to hunter-gatherers were—and continue
to be—encroached upon by the settlements of agriculturalists. In the resulting competition for land use, hunter-gatherer
societies either adopted these practices or moved to other areas. In addition, Jared Diamond has blamed this decline on the decreasing availability of wild foods, particularly animal resources. In North and South America, for example, most large
mammal species had gone extinct by the end of the Pleistocene—according to Diamond, because of overexploitation by
humans, although the overkill hypothesis he advocates is strongly contested.[by whom?] As the number and size of
agricultural societies increased, they expanded into lands traditionally used by hunter-gatherers. This process of
agriculture-driven expansion led to the development of the first forms of government in agricultural centers, such
as the Fertile Crescent, Ancient India, Ancient China, Olmec, Sub-Saharan Africa and Norte Chico. As a result of
the now near-universal human reliance upon agriculture, the few contemporary hunter-gatherer cultures usually live
in areas unsuitable for agricultural use. Most hunter-gatherers are nomadic or semi-nomadic and live in temporary
settlements. Mobile communities typically construct shelters using impermanent building materials, or they may use
natural rock shelters, where they are available. Some hunter-gatherer cultures, such as the indigenous peoples of
the Pacific Northwest Coast, lived in particularly rich environments that allowed them to be sedentary or semi-sedentary.
Hunter-gatherers tend to have an egalitarian social ethos, although settled hunter-gatherers (for example, those
inhabiting the Northwest Coast of North America) are an exception to this rule. Nearly all African hunter-gatherers
are egalitarian, with women roughly as influential and powerful as men. The egalitarianism typical of human hunters
and gatherers is never total, but is striking when viewed in an evolutionary context. One of humanity's two closest primate relatives, the chimpanzee, is anything but egalitarian, forming hierarchies that are often
dominated by an alpha male. So great is the contrast with human hunter-gatherers that it is widely argued by palaeoanthropologists
that resistance to being dominated was a key factor driving the evolutionary emergence of human consciousness, language,
kinship and social organization. Anthropologists maintain that hunter-gatherers do not have permanent leaders; instead,
the person taking the initiative at any one time depends on the task being performed. In addition to social and economic
equality in hunter-gatherer societies, there is often, though not always, sexual parity as well. Hunter-gatherers
are often grouped together based on kinship and band (or tribe) membership. Postmarital residence among hunter-gatherers
tends to be matrilocal, at least initially. Young mothers can enjoy childcare support from their own mothers, who
continue living nearby in the same camp. The systems of kinship and descent among human hunter-gatherers were relatively
flexible, although there is evidence that early human kinship in general tended to be matrilineal. It is easy for
Western-educated scholars to fall into the trap of viewing hunter-gatherer social and sexual arrangements in the
light of Western values.[editorializing] One common arrangement is the sexual division of labour, with women doing
most of the gathering, while men concentrate on big game hunting. It might be imagined that this arrangement oppresses
women, keeping them in the domestic sphere. However, according to some observers, hunter-gatherer women would not
understand this interpretation. Since childcare is collective, with every baby having multiple mothers and male carers,
the domestic sphere is not atomised or privatised but an empowering place to be.[citation needed] In all hunter-gatherer
societies, women appreciate the meat brought back to camp by men. An illustrative account is Megan Biesele's study
of the southern African Ju/'hoan, 'Women Like Meat'. Recent archaeological research suggests that the sexual division
of labor was the fundamental organisational innovation that gave Homo sapiens the edge over the Neanderthals, allowing
our ancestors to migrate from Africa and spread across the globe. To this day, most hunter-gatherers have a symbolically
structured sexual division of labour. However, it is true that in a small minority of cases, women hunt the same
kind of quarry as men, sometimes doing so alongside men. The best-known example is the Aeta people of the Philippines.
According to one study, "About 85% of Philippine Aeta women hunt, and they hunt the same quarry as men. Aeta women
hunt in groups and with dogs, and have a 31% success rate as opposed to 17% for men. Their rates are even better
when they combine forces with men: mixed hunting groups have a full 41% success rate among the Aeta." Among the Ju/'hoansi
people of Namibia, women help men track down quarry. Women in the Australian Martu also primarily hunt small animals
like lizards to feed their children and maintain relations with other women. At the 1966 "Man the Hunter" conference,
anthropologists Richard Borshay Lee and Irven DeVore suggested that egalitarianism was one of several central characteristics
of nomadic hunting and gathering societies because mobility requires minimization of material possessions throughout
a population. Therefore, no surplus of resources can be accumulated by any single member. Other characteristics Lee
and DeVore proposed were flux in territorial boundaries as well as in demographic composition. At the same conference,
Marshall Sahlins presented a paper entitled, "Notes on the Original Affluent Society", in which he challenged the
popular view of hunter-gatherers' lives as "solitary, poor, nasty, brutish and short," as Thomas Hobbes had put it
in 1651. According to Sahlins, ethnographic data indicated that hunter-gatherers worked far fewer hours and enjoyed
more leisure than typical members of industrial society, and they still ate well. Their "affluence" came from the
idea that they were satisfied with very little in the material sense. Later, in 1996, Ross Sackett performed two
distinct meta-analyses to empirically test Sahlins's view. The first of these studies looked at 102 time-allocation
studies, and the second one analyzed 207 energy-expenditure studies. Sackett found that adults in foraging and horticultural
societies work, on average, about 6.5 hours a day, whereas people in agricultural and industrial societies work
on average 8.8 hours a day. Mutual exchange and sharing of resources (i.e., meat gained from hunting) are important
in the economic systems of hunter-gatherer societies. Therefore, these societies can be described as based on a "gift
economy." Hunter-gatherer societies manifest significant variability, depending on climate zone/life zone, available
technology and societal structure. Archaeologists examine hunter-gatherer tool kits to measure variability across
different groups. Collard et al. (2005) found temperature to be the only statistically significant factor to impact
hunter-gatherer tool kits. Using temperature as a proxy for risk, Collard et al.'s results suggest that environments
with extreme temperatures pose a threat to hunter-gatherer systems significant enough to warrant increased variability
of tools. These results support Torrence's (1989) theory that risk of failure is indeed the most important factor
in determining the structure of hunter-gatherer toolkits. One way to divide hunter-gatherer groups is by their return
systems. James Woodburn uses the categories "immediate return" hunter-gatherers for egalitarian and "delayed return"
for nonegalitarian. Immediate return foragers consume their food within a day or two after they procure it. Delayed
return foragers store the surplus food (Kelly, 31). Hunting-gathering was the common human mode of subsistence throughout
the Paleolithic, but the observation of current-day hunters and gatherers does not necessarily reflect Paleolithic
societies; the hunter-gatherer cultures examined today have had much contact with modern civilization and do not
represent "pristine" conditions found in uncontacted peoples. The transition from hunting and gathering to agriculture
is not necessarily a one way process. It has been argued that hunting and gathering represents an adaptive strategy,
which may still be exploited, if necessary, when environmental change causes extreme food stress for agriculturalists.
In fact, it is sometimes difficult to draw a clear line between agricultural and hunter-gatherer societies, especially
since the widespread adoption of agriculture and resulting cultural diffusion that has occurred in the last 10,000
years.[citation needed] This anthropological view has remained unchanged since the 1960s.[clarification needed][citation
needed] In the early 1980s, a small but vocal segment of anthropologists and archaeologists attempted to demonstrate
that contemporary groups usually identified as hunter-gatherers do not, in most cases, have a continuous history
of hunting and gathering, and that in many cases their ancestors were agriculturalists and/or pastoralists[citation
needed] who were pushed into marginal areas as a result of migrations, economic exploitation, and/or violent conflict
(see, for example, the Kalahari Debate). The result of their effort has been the general acknowledgement that there
has been complex interaction between hunter-gatherers and non-hunter-gatherers for millennia.[citation needed] Some
of the theorists who advocate this "revisionist" critique imply that, because the "pure hunter-gatherer" disappeared
not long after colonial (or even agricultural) contact began, nothing meaningful can be learned about prehistoric
hunter-gatherers from studies of modern ones (Kelly, 24-29; see Wilmsen). Lee and Guenther have rejected most of the
arguments put forward by Wilmsen. Doron Shultziner and others have argued that we can learn a lot about the life-styles
of prehistoric hunter-gatherers from studies of contemporary hunter-gatherers—especially their impressive levels
of egalitarianism. Many hunter-gatherers consciously manipulate the landscape through cutting or burning undesirable
plants while encouraging desirable ones, some even going to the extent of slash-and-burn to create habitat for game
animals. These activities are on an entirely different scale to those associated with agriculture, but they are nevertheless
domestication on some level. Today, almost all hunter-gatherers depend to some extent upon domesticated food sources
either produced part-time or traded for products acquired in the wild. Some agriculturalists also regularly hunt
and gather (e.g., farming during the frost-free season and hunting during the winter). Still others in developed
countries go hunting, primarily for leisure. In the Brazilian rainforest, those groups that recently did, or even
continue to, rely on hunting and gathering techniques seem to have adopted this lifestyle, abandoning most agriculture,
as a way to escape colonial control and as a result of the introduction of European diseases reducing their populations
to levels where agriculture became difficult.[citation needed][dubious – discuss] There are nevertheless a number
of contemporary hunter-gatherer peoples who, after contact with other societies, continue their ways of life with
very little external influence. One such group is the Pila Nguru (Spinifex people) of Western Australia, whose habitat
in the Great Victoria Desert has proved unsuitable for European agriculture (and even pastoralism).[citation needed]
Another are the Sentinelese of the Andaman Islands in the Indian Ocean, who live on North Sentinel Island and to
date have maintained their independent existence, repelling attempts to engage with and contact them.[citation needed]
Evidence suggests big-game hunter-gatherers crossed the Bering Strait from Asia (Eurasia) into North America over a land bridge (Beringia) that existed between 47,000 and 14,000 years ago. Around 18,500–15,500 years ago, these hunter-gatherers
are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched
between the Laurentide and Cordilleran ice sheets. Another route proposed is that, either on foot or using primitive
boats, they migrated down the Pacific coast to South America. Hunter-gatherers would eventually flourish all over
the Americas, primarily based in the Great Plains of the United States and Canada, with offshoots as far east as
the Gaspé Peninsula on the Atlantic coast, and as far south as Monte Verde, Chile.[citation needed] American hunter-gatherers
were spread over a wide geographical area, thus there were regional variations in lifestyles. However, all the individual
groups shared a common style of stone tool production, making knapping styles and progress identifiable. These early Paleo-Indian-period lithic reduction tool adaptations have been found across the Americas, utilized by highly mobile
bands consisting of approximately 25 to 50 members of an extended family. The Archaic period in the Americas saw
a changing environment featuring a warmer more arid climate and the disappearance of the last megafauna. The majority
of population groups at this time were still highly mobile hunter-gatherers; but now individual groups started to
focus on resources available to them locally; thus, with the passage of time, there is a pattern of increasing regional generalization, as in the Southwest, Arctic, Poverty Point, Dalton and Plano traditions. These regional adaptations would become the norm, with less reliance on hunting and gathering and a more mixed economy of small game, fish, seasonally available wild vegetables, and harvested plant foods.
The United Nations Population Fund (UNFPA), formerly the United Nations Fund for Population Activities, is a UN organization.
The UNFPA says it "is the lead UN agency for delivering a world where every pregnancy is wanted, every childbirth
is safe and every young person's potential is fulfilled." Its work involves the improvement of reproductive health, including the creation of national strategies and protocols and the provision of supplies and services. The organization has
recently been known for its worldwide campaign against obstetric fistula and female genital mutilation. The UNFPA
supports programs in more than 150 countries, territories and areas spread across four geographic regions: Arab States
and Europe, Asia and the Pacific, Latin America and the Caribbean, and sub-Saharan Africa. Around three quarters
of the staff work in the field. It is a member of the United Nations Development Group and part of its Executive
Committee. UNFPA began operations in 1969 as the United Nations Fund for Population Activities (the name was changed
in 1987) under the administration of the United Nations Development Fund. In 1971 it was placed under the authority
of the United Nations General Assembly. In September 2015, the 193 member states of the United Nations unanimously
adopted the Sustainable Development Goals, a set of 17 goals aiming to transform the world over the next 15 years.
These goals are designed to eliminate poverty, discrimination, abuse and preventable deaths, address environmental
destruction, and usher in an era of development for all people, everywhere. The Sustainable Development Goals are
ambitious, and they will require enormous efforts across countries, continents, industries and disciplines - but
they are achievable. UNFPA is working with governments, partners and other UN agencies to directly tackle many of
these goals - in particular Goal 3 on health, Goal 4 on education and Goal 5 on gender equality - and contributes
in a variety of ways to achieving many of the rest. Executive Directors and Under-Secretaries General of the UN:
2011–present: Dr Babatunde Osotimehin (Nigeria)
2000–2010: Ms Thoraya Ahmed Obaid (Saudi Arabia)
1987–2000: Dr Nafis Sadik (Pakistan)
1969–1987: Mr Rafael M. Salas (Philippines)
UNFPA is the world's largest multilateral source of funding for population
and reproductive health programs. The Fund works with governments and non-governmental organizations in over 150
countries with the support of the international community, supporting programs that help women, men and young people. According to UNFPA, these elements promote the right to "reproductive health", that is, physical, mental, and social
health in matters related to reproduction and the reproductive system. The Fund raises awareness of and supports
efforts to meet these needs in developing countries, advocates close attention to population concerns, and helps
developing nations formulate policies and strategies in support of sustainable development. Dr. Osotimehin assumed
leadership in January 2011. The Fund is also represented by UNFPA Goodwill Ambassadors and a Patron. UNFPA works
in partnership with governments, along with other United Nations agencies, communities, NGOs, foundations and the
private sector, to raise awareness and mobilize the support and resources needed to achieve its mission to promote
the rights and health of women and young people. UNFPA has been falsely accused by anti-family planning groups of
providing support for government programs that have promoted forced abortions and coercive sterilizations. Controversies regarding these claims have resulted in a sometimes shaky relationship between the organization and three presidential administrations (those of Ronald Reagan, George H. W. Bush and George W. Bush), each of which withheld funding from the UNFPA.
Contributions from governments and the private sector to UNFPA in 2014 exceeded $1 billion. The amount includes $477
million to the organization’s core resources and $529 million earmarked for specific programs and initiatives. UNFPA
provided aid to Peru's reproductive health program in the mid-to-late '90s. When it was discovered a Peruvian program
had been engaged in carrying out coercive sterilizations, UNFPA called for reforms and protocols to protect the rights
of women seeking assistance. UNFPA was not involved in the scandal, but continued work with the country after the
abuses had become public to help end the abuses and reform laws and practices. From 2002 through 2008, the Bush Administration
denied funding to UNFPA that had already been allocated by the US Congress, partly on the refuted claims that the
UNFPA supported Chinese government programs which include forced abortions and coercive sterilizations. In a letter
from the Undersecretary of State for Political Affairs Nicholas Burns to Congress, the administration said it had
determined that UNFPA’s support for China’s population program “facilitates (its) government’s coercive abortion
program”, thus violating the Kemp-Kasten Amendment, which bans the use of United States aid to finance organizations
that support or take part in managing a program of coercive abortion or sterilization. UNFPA's connection to China's
administration of forced abortions was refuted by investigations carried out by various US, UK, and UN teams sent
to examine UNFPA activities in China. Specifically, a three-person U.S State Department fact-finding team was sent
on a two-week tour throughout China. It wrote in a report to the State Department that it found "no evidence that
UNFPA has supported or participated in the management of a program of coercive abortion or involuntary sterilization
in China," as has been charged by critics. However, according to then-Secretary of State Colin Powell, the UNFPA
contributed vehicles and computers to the Chinese to carry out their population control policies. However, both the
Washington Post and the Washington Times reported that Powell simply fell in line, signing a brief written by someone
else. Rep. Christopher H. Smith (R-NJ), criticized the State Department investigation, saying the investigators were
shown "Potemkin Villages" where residents had been intimidated into lying about the family-planning program. Dr.
Nafis Sadik, former director of UNFPA said her agency had been pivotal in reversing China's coercive population control
methods, but a 2005 report by Amnesty International and a separate report by the United States State Department found
that coercive techniques were still regularly employed by the Chinese, casting doubt upon Sadik's statements. But
Amnesty International found no evidence that UNFPA had supported the coercion. A 2001 study conducted by the pro-life
Population Research Institute (PRI) falsely claimed that the UNFPA shared an office with the Chinese family planning
officials who were carrying out forced abortions. "We located the family planning offices, and in that family planning
office, we located the UNFPA office, and we confirmed from family planning officials there that there is no distinction
between what the UNFPA does and what the Chinese Family Planning Office does," said Scott Weinberg, a spokesman for
PRI. However, United Nations members disagreed and approved UNFPA’s new country programme in January 2006. The more
than 130 members of the “Group of 77” developing countries in the United Nations expressed support for the UNFPA
programmes. In addition, speaking for the European democracies Norway, Denmark, Sweden, Finland, the Netherlands, France, Belgium, Switzerland and Germany, the United Kingdom stated, “UNFPA’s activities in China, as in the rest
of the world, are in strict conformity with the unanimously adopted Programme of Action of the ICPD, and play a key
role in supporting our common endeavor, the promotion and protection of all human rights and fundamental freedoms.”
President Bush denied funding to the UNFPA. Over the course of the Bush Administration, a total of $244 million in
Congressionally approved funding was blocked by the Executive Branch. In response, the EU decided to fill the gap
left behind by the US under the Sandbaek report. According to its Annual Report for 2008, the UNFPA received its
funding mainly from European governments: of the total income of $845.3 million, $118 million was donated by the Netherlands, $67 million by Sweden, $62 million by Norway, $54 million by Denmark, $53 million by the UK, $52 million by Spain, and $19 million by Luxembourg. The European Commission donated a further $36 million. The most important non-European donor state was Japan ($36 million). The number of donors
exceeded 180 in one year. In America, nonprofit organizations like Friends of UNFPA (formerly Americans for UNFPA)
worked to compensate for the loss of United States federal funding by raising private donations. In January 2009
President Barack Obama restored US funding to UNFPA, saying in a public statement that he would "look forward to
working with Congress to restore US financial support for the UN Population Fund. By resuming funding to UNFPA, the
US will be joining 180 other donor nations working collaboratively to reduce poverty, improve the health of women
and children, prevent HIV/AIDS and provide family planning assistance to women in 154 countries."
The Russian Soviet Federative Socialist Republic (Russian SFSR or RSFSR; Russian: Российская Советская Федеративная Социалистическая
Республика, tr. Rossiyskaya Sovetskaya Federativnaya Sotsialisticheskaya Respublika listen (help·info)) commonly
referred to as Soviet Russia or simply as Russia, was a sovereign state in 1917–22, the largest, most populous, and
most economically developed republic of the Soviet Union in 1922–91 and a sovereign part of the Soviet Union with
its own legislation in 1990–91. The Republic comprised sixteen autonomous republics, five autonomous oblasts, ten
autonomous okrugs, six krais, and forty oblasts. Russians formed the largest ethnic group. To the west it bordered
Finland, Norway and Poland; and to the south, China, Mongolia and North Korea whilst bordering the Arctic Ocean to
the north, the Pacific Ocean to the east, and the Black Sea and Caspian Sea to the south. Within the USSR, it bordered
the Baltic republics (Lithuania, Latvia and Estonia), the Byelorussian SSR and the Ukrainian SSR to the west. To
the south it bordered the Georgian, Azerbaijan and Kazakh SSRs. Under the leadership of Vladimir Lenin, the Bolsheviks
established the Soviet state on 7 November [O.S. 25 October] 1917, immediately after the Russian Provisional Government,
which governed the Russian Republic, was overthrown during the October Revolution. Initially, the state did not have
an official name and was not recognized by neighboring countries for five months. Meanwhile, anti-Bolsheviks coined
the mocking label "Sovdepia" for the nascent state of the "Soviets of Workers' and Peasants' Deputies". On December
30, 1922, with the creation of the Soviet Union, Russia became one of six republics within the federation of the
Union of Soviet Socialist Republics. The final Soviet name for the republic, the Russian Soviet Federative Socialist
Republic, was adopted in the Soviet Constitution of 1936. By that time, Soviet Russia had gained roughly the same
borders of the old Tsardom of Russia before the Great Northern War of 1700. On December 25, 1991, following the collapse
of the Soviet Union, the republic was renamed the Russian Federation, which it remains to this day. This name and
"Russia" were specified as the official state names in the April 21, 1992 amendment to the existing constitution
and were retained as such in the 1993 Constitution of Russia. The international borders of the RSFSR touched Poland
on the west; Norway and Finland on the northwest; and to its southeast were the Democratic People's Republic of Korea,
Mongolian People's Republic, and the People's Republic of China. Within the Soviet Union, the RSFSR bordered the
Ukrainian, Belarusian, Estonian, Latvian and Lithuanian SSRs to its west and Azerbaijan, Georgian and Kazakh SSRs
to the south. The Soviet regime first came to power on November 7, 1917, immediately after the Russian Provisional
Government, which governed the Russian Republic, was overthrown in the October Revolution. The state it governed,
which did not have an official name, would be unrecognized by neighboring countries for another five months. On January
25, 1918, at the third meeting of the All-Russian Congress of Soviets, the unrecognized state was renamed the Soviet
Russian Republic. On March 3, 1918, the Treaty of Brest-Litovsk was signed, giving away much of the land of the former
Russian Empire to Germany, in exchange for peace in World War I. On July 10, 1918, the Russian Constitution of 1918
renamed the country the Russian Socialist Federative Soviet Republic. By 1918, during the Russian Civil War, several
states within the former Russian Empire had seceded, reducing the size of the country even more. In 1943, Karachay
Autonomous Oblast was dissolved by Joseph Stalin, when the Karachays were exiled to Central Asia for their alleged
collaboration with the Germans and its territory was incorporated into the Georgian SSR. The economy of Russia became heavily industrialized,
accounting for about two-thirds of the electricity produced in the USSR. It was, by 1961, the third largest producer
of petroleum due to new discoveries in the Volga-Urals region and Siberia, trailing only the United States and Saudi
Arabia. In 1974, there were 475 institutes of higher education in the republic providing education in 47 languages
to some 23,941,000 students. A network of territorially-organized public-health services provided health care. After
1985, the restructuring policies of the Gorbachev administration relatively liberalised the economy, which had become
stagnant since the late 1970s, with the introduction of non-state owned enterprises such as cooperatives. The effects
of market policies led to the failure of many enterprises and total instability by 1990. On June 12, 1990, the Congress
of People's Deputies adopted the Declaration of State Sovereignty. On June 12, 1991, Boris Yeltsin was elected the
first President. On December 8, 1991, heads of Russia, Ukraine and Belarus signed the Belavezha Accords. The agreement
declared dissolution of the USSR by its founder states (i.e. denunciation of 1922 Treaty on the Creation of the USSR)
and established the CIS. On December 12, the agreement was ratified by the Russian Parliament, whereby the Russian
SFSR denounced the Treaty on the Creation of the USSR and de facto declared Russia's independence from the USSR.
On December 25, 1991, the Russian SFSR was renamed the Russian Federation. On December 26, 1991, the USSR was self-dissolved
by the Soviet of Nationalities, which by that time was the only functioning house of the Supreme Soviet (the other
house, Soviet of the Union, had already lost the quorum after recall of its members by the union republics). After
dissolution of the USSR, Russia declared that it assumed the rights and obligations of the dissolved central Soviet
government, including UN membership. Internationally, in 1920, the RSFSR was recognized as an independent state only
by Estonia, Finland, Latvia and Lithuania in the Treaty of Tartu and by the short-lived Irish Republic. For most
of the Soviet Union's existence, it was commonly referred to as "Russia," even though technically "Russia" was only
one republic within the larger union—albeit by far the largest, most powerful and most highly developed. Roughly
70% of the area in the RSFSR consisted of broad plains, with mountainous tundra regions mainly concentrated in the
east. The area is rich in mineral resources, including petroleum, natural gas, and iron ore. On December 30, 1922,
the First Congress of the Soviets of the USSR approved the Treaty on the Creation of the USSR, by which Russia was
united with the Ukrainian Soviet Socialist Republic, Byelorussian Soviet Socialist Republic, and Transcaucasian Soviet
Federal Socialist Republic into a single federal state, the Soviet Union. The treaty was later incorporated into the 1924 Soviet Constitution, adopted on January 31, 1924, by the Second Congress of Soviets of the USSR. Many
regions in Russia were affected by the Soviet famine of 1932–1933: Volga; Central Black Soil Region; North Caucasus;
the Urals; the Crimea; part of Western Siberia; and the Kazak ASSR. With the adoption of the 1936 Soviet Constitution
on December 5, 1936, the size of the RSFSR was significantly reduced. The Kazakh ASSR and Kirghiz ASSR were transformed
into the Kazakh and Kirghiz Soviet Socialist Republics. The Karakalpak Autonomous Socialist Soviet Republic was transferred
to the Uzbek SSR. The final name for the republic during the Soviet era was adopted by the Russian Constitution of
1937, which renamed it the Russian Soviet Federative Socialist Republic. On March 3, 1944, on the orders of Stalin,
the Chechen-Ingush ASSR was disbanded and its population forcibly deported upon the accusations of collaboration
with the invaders and separatism. The territory of the ASSR was divided between other administrative units of the Russian
SFSR and the Georgian SSR. On October 11, 1944, the Tuvan People's Republic joined the Russian SFSR as the Tuvan
Autonomous Oblast, in 1961 becoming an Autonomous Soviet Socialist Republic. After reconquering Estonia and Latvia
in 1944, the Russian SFSR annexed their easternmost territories around Ivangorod and within the modern Pechorsky
and Pytalovsky Districts in 1944–1945. At the end of World War II, Soviet troops occupied southern Sakhalin Island
and the Kuril Islands, making them part of the RSFSR. The status of the southernmost Kurils remains in dispute with
Japan. On April 17, 1946, the Kaliningrad Oblast (the northern portion of the former German province of East Prussia) was
annexed by the Soviet Union and made part of the Russian SFSR. On February 8, 1955, Malenkov was officially demoted
to deputy Prime Minister. As First Secretary of the Central Committee of the Communist Party, Nikita Khrushchev's
authority was significantly enhanced by Malenkov's demotion. On January 9, 1957, Karachay Autonomous Oblast and Chechen-Ingush
Autonomous Soviet Socialist Republic were restored by Khrushchev and they were transferred from the Georgian SSR
back to the Russian SFSR. In 1964, Nikita Khrushchev was removed from his position of power and replaced with Leonid
Brezhnev. Under his rule, the Russian SFSR and the rest of the Soviet Union went through an era of stagnation that persisted even after his death in 1982, ending only when Mikhail Gorbachev took power and introduced liberal reforms in Soviet
society. On June 12, 1990, the Congress of People's Deputies of the Republic adopted the Declaration of State Sovereignty
of the Russian SFSR, which was the beginning of the "War of Laws", pitting the Soviet Union against the Russian Federation
and other constituent republics. On March 17, 1991, an all-Russian referendum created the post of President of the
RSFSR. On June 12, Boris Yeltsin was elected President of Russia by popular vote. During an unsuccessful coup attempt
on August 19–21, 1991 in Moscow, the capital of the Soviet Union and Russia, President of Russia Yeltsin strongly
supported the President of the Soviet Union, Mikhail Gorbachev. On August 23, after the failure of GKChP, in the
presence of Gorbachev, Yeltsin signed a decree suspending all activity by the Communist Party of the Russian SFSR
in the territory of Russia. On November 6, he went further, banning the Communist Parties of the USSR and the RSFSR
from the territory of the RSFSR. On December 8, 1991, at Viskuli near Brest (Belarus), the President of the Russian
SFSR and the heads of Byelorussian SSR and Ukrainian SSR signed the "Agreement on the Establishment of the Commonwealth
of Independent States" (known in media as Belavezha Accords). The document, consisting of a preamble and fourteen
articles, stated that the Soviet Union ceased to exist as a subject of international law and geopolitical reality.
However, citing the historical community of their peoples and the relations between them, their bilateral treaties, the desire for a democratic rule of law, and the intention to develop their relations on the basis of mutual recognition and respect
for state sovereignty, the parties agreed to the formation of the Commonwealth of Independent States. On December
12, the agreement was ratified by the Supreme Soviet of the Russian SFSR by an overwhelming majority: 188 votes for,
6 against, 7 abstentions. On the same day, the Supreme Soviet of the Russian SFSR denounced the Treaty on the Creation
of the USSR and recalled all Russian deputies from the Supreme Soviet of the Soviet Union. The legality of this act
is the subject of discussions because, according to the 1978 Constitution (Basic Law) of the Russian SFSR, the Russian
Supreme Soviet had no right to do so. However, by this time the Soviet government had been rendered more or less
impotent, and was in no position to object. Although the December 12 vote is sometimes reckoned as the moment that
the RSFSR seceded from the collapsing Soviet Union, this is not the case. It appears that the RSFSR took the line
that it was not possible to secede from an entity that no longer existed. On December 24, Yeltsin informed the Secretary-General
of the United Nations that, by agreement of the member states of the CIS, the Russian Federation would assume the membership
of the Soviet Union in all UN organs (including permanent membership in the UN Security Council). Thus, Russia is
considered to be an original member of the UN (since October 24, 1945) along with Ukraine (Ukrainian SSR) and Belarus
(Byelorussian SSR). On December 25—just hours after Gorbachev resigned as president of the Soviet Union—the Russian
SFSR was renamed the Russian Federation (Russia), reflecting that it was now a sovereign state with Yeltsin assuming
the Presidency. The change was originally published on January 6, 1992 (Rossiyskaya Gazeta). By law, the old RSFSR name was still permitted for use in official business (forms, seals and stamps) during 1992. The Russian
Federation's Constitution (Fundamental Law) of 1978, as amended in 1991–1992, remained in effect until
the 1993 Russian constitutional crisis. The Government was known officially as the Council of People's Commissars
(1917–1946), Council of Ministers (1946–1978) and Council of Ministers–Government (1978–1991). The first government
was headed by Vladimir Lenin as "Chairman of the Council of People's Commissars of the Russian SFSR" and the last
by Boris Yeltsin as both head of government and head of state under the title "President". The Russian SFSR was controlled
by the Communist Party of the Soviet Union, until the abortive 1991 August coup, which prompted President Yeltsin
to suspend the recently created Communist Party of the Russian Soviet Federative Socialist Republic.
Alexander Graham Bell (March 3, 1847 – August 2, 1922) was a Scottish-born[N 3] scientist, inventor, engineer and innovator
who is credited with patenting the first practical telephone. Bell's father, grandfather, and brother had all been
associated with work on elocution and speech, and both his mother and wife were deaf, profoundly influencing Bell's
life's work. His research on hearing and speech further led him to experiment with hearing devices which eventually
culminated in Bell being awarded the first U.S. patent for the telephone in 1876.[N 4] Bell considered his most famous
invention an intrusion on his real work as a scientist and refused to have a telephone in his study.[N 5] Many other
inventions marked Bell's later life, including groundbreaking work in optical telecommunications, hydrofoils and
aeronautics. Although Bell was not one of the 33 founders of the National Geographic Society, he had a strong influence
on the magazine while serving as the second president from January 7, 1898 until 1903. Alexander Bell was born in
Edinburgh, Scotland, on March 3, 1847. The family home was at 16 South Charlotte Street, and has a stone inscription
marking it as Alexander Graham Bell's birthplace. He had two brothers: Melville James Bell (1845–70) and Edward Charles
Bell (1848–67), both of whom would die of tuberculosis. His father was Professor Alexander Melville Bell, a phonetician,
and his mother was Eliza Grace (née Symonds). Born as just "Alexander Bell", at age 10 he made a plea to his father
to have a middle name like his two brothers.[N 6] For his 11th birthday, his father acquiesced and allowed him to
adopt the name "Graham", chosen out of respect for Alexander Graham, a Canadian being treated by his father who had
become a family friend. To close relatives and friends he remained "Aleck". As a child, young Bell displayed a natural
curiosity about his world, resulting in gathering botanical specimens as well as experimenting even at an early age.
His best friend was Ben Herdman, a neighbor whose family operated a flour mill, the scene of many forays. Young Bell
asked what needed to be done at the mill. He was told that wheat had to be dehusked through a laborious process, and at
the age of 12, Bell built a homemade device that combined rotating paddles with sets of nail brushes, creating a
simple dehusking machine that was put into operation and used steadily for a number of years. In return, John Herdman
gave both boys the run of a small workshop in which to "invent". From his early years, Bell showed a sensitive nature
and a talent for art, poetry, and music that was encouraged by his mother. With no formal training, he mastered the
piano and became the family's pianist. Despite being normally quiet and introspective, he reveled in mimicry and
"voice tricks" akin to ventriloquism that continually entertained family guests during their occasional visits. Bell
was also deeply affected by his mother's gradual deafness (she began to lose her hearing when he was 12), and learned
a manual finger language so he could sit at her side and tap out silently the conversations swirling around the family
parlour. He also developed a technique of speaking in clear, modulated tones directly into his mother's forehead
wherein she would hear him with reasonable clarity. Bell's preoccupation with his mother's deafness led him to study
acoustics. His family was long associated with the teaching of elocution: his grandfather, Alexander Bell, in London,
his uncle in Dublin, and his father, in Edinburgh, were all elocutionists. His father published a variety of works
on the subject, several of which are still well known, especially his The Standard Elocutionist (1860), which appeared
in Edinburgh in 1868. The Standard Elocutionist appeared in 168 British editions and sold over a quarter of a million
copies in the United States alone. In this treatise, his father explains his methods of how to instruct deaf-mutes
(as they were then known) to articulate words and read other people's lip movements to decipher meaning. Bell's father
taught him and his brothers not only to write Visible Speech but to identify any symbol and its accompanying sound.
Bell became so proficient that he became a part of his father's public demonstrations and astounded audiences with
his abilities. He could decipher Visible Speech representing virtually every language, including Latin, Scottish
Gaelic, and even Sanskrit, accurately reciting written tracts without any prior knowledge of their pronunciation.
As a young child, Bell, like his brothers, received his early schooling at home from his father. At an early age,
however, he was enrolled at the Royal High School, Edinburgh, Scotland, which he left at age 15, completing only
the first four forms. His school record was undistinguished, marked by absenteeism and lacklustre grades. His main
interest remained in the sciences, especially biology, while he treated other school subjects with indifference,
to the dismay of his demanding father. Upon leaving school, Bell travelled to London to live with his grandfather,
Alexander Bell. During the year he spent with his grandfather, a love of learning was born, with long hours spent
in serious discussion and study. The elder Bell took great efforts to have his young pupil learn to speak clearly
and with conviction, the attributes that his pupil would need to become a teacher himself. At age 16, Bell secured
a position as a "pupil-teacher" of elocution and music, in Weston House Academy, at Elgin, Moray, Scotland. Although
he was enrolled as a student in Latin and Greek, he instructed classes himself in return for board and £10 per session.
The following year, he attended the University of Edinburgh; joining his older brother Melville who had enrolled
there the previous year. In 1868, not long before he departed for Canada with his family, Bell completed his matriculation
exams and was accepted for admission to the University of London. His father encouraged Bell's interest in speech
and, in 1863, took his sons to see a unique automaton, developed by Sir Charles Wheatstone based on the earlier work
of Baron Wolfgang von Kempelen. The rudimentary "mechanical man" simulated a human voice. Bell was fascinated by
the machine and after he obtained a copy of von Kempelen's book, published in German, and had laboriously translated
it, he and his older brother Melville built their own automaton head. Their father, highly interested in their project,
offered to pay for any supplies and spurred the boys on with the enticement of a "big prize" if they were successful.
While his brother constructed the throat and larynx, Bell tackled the more difficult task of recreating a realistic
skull. His efforts resulted in a remarkably lifelike head that could "speak", albeit only a few words. The boys would
carefully adjust the "lips" and when a bellows forced air through the windpipe, a very recognizable "Mama" ensued,
to the delight of neighbors who came to see the Bell invention. Intrigued by the results of the automaton, Bell continued
to experiment with a live subject, the family's Skye Terrier, "Trouve". After he taught it to growl continuously,
Bell would reach into its mouth and manipulate the dog's lips and vocal cords to produce a crude-sounding "Ow ah
oo ga ma ma". With little convincing, visitors believed his dog could articulate "How are you grandma?" More indicative
of his playful nature, his experiments convinced onlookers that they saw a "talking dog". However, these initial
forays into experimentation with sound led Bell to undertake his first serious work on the transmission of sound,
using tuning forks to explore resonance. At the age of 19, he wrote a report on his work and sent it to philologist
Alexander Ellis, a colleague of his father (who would later be portrayed as Professor Henry Higgins in Pygmalion).
Ellis immediately wrote back indicating that the experiments were similar to existing work in Germany, and also lent
Bell a copy of Hermann von Helmholtz's work, The Sensations of Tone as a Physiological Basis for the Theory of Music.
Dismayed to find that groundbreaking work had already been undertaken by Helmholtz who had conveyed vowel sounds
by means of a similar tuning fork "contraption", he pored over the German scientist's book. Working from his own
mistranslation of a French edition, Bell then fortuitously made a deduction that would be the underpinning
of all his future work on transmitting sound, reporting: "Without knowing much about the subject, it seemed to me
that if vowel sounds could be produced by electrical means, so could consonants, so could articulate speech." He
also later remarked: "I thought that Helmholtz had done it ... and that my failure was due only to my ignorance of
electricity. It was a valuable blunder ... If I had been able to read German in those days, I might never have commenced
my experiments!"[N 7] In 1865, when the Bell family moved to London, Bell returned to Weston House as an assistant
master and, in his spare hours, continued experiments on sound using a minimum of laboratory equipment. Bell concentrated
on experimenting with electricity to convey sound and later installed a telegraph wire from his room in Somerset
College to that of a friend. Throughout late 1867, his health faltered mainly through exhaustion. His younger brother,
Edward "Ted," was similarly bed-ridden, suffering from tuberculosis. While Bell recovered (by then referring to himself
in correspondence as "A.G. Bell") and served the next year as an instructor at Somerset College, Bath, England, his
brother's condition deteriorated. Edward would never recover. Upon his brother's death, Bell returned home in 1867.
His older brother Melville had married and moved out. With aspirations to obtain a degree at University College London,
Bell considered his next years as preparation for the degree examinations, devoting his spare time at his family's
residence to studying. Helping his father in Visible Speech demonstrations and lectures brought Bell to Susanna E.
Hull's private school for the deaf in South Kensington, London. His first two pupils were "deaf mute" girls who made
remarkable progress under his tutelage. While his older brother seemed to achieve success on many fronts including
opening his own elocution school, applying for a patent on an invention, and starting a family, Bell continued as
a teacher. However, in May 1870, Melville died from complications due to tuberculosis, causing a family crisis. His
father had also suffered a debilitating illness earlier in life and had been restored to health by a convalescence
in Newfoundland. Bell's parents embarked upon a long-planned move when they realized that their remaining son was
also sickly. Acting decisively, Alexander Melville Bell asked Bell to arrange for the sale of all the family property,[N
8] conclude all of his brother's affairs (Bell took over his last student, curing a pronounced lisp), and join his
father and mother in setting out for the "New World". Reluctantly, Bell also had to conclude a relationship with
Marie Eccleston, who, as he had surmised, was not prepared to leave England with him. In 1870, at age 23, Bell, his
brother's widow, Caroline (Margaret Ottaway), and his parents travelled on the SS Nestorian to Canada. After landing
at Quebec City the Bells transferred to another steamer to Montreal and then boarded a train to Paris, Ontario, to
stay with the Reverend Thomas Henderson, a family friend. After a brief stay with the Hendersons, the Bell family
purchased a farm of 10.5 acres (42,000 m2) at Tutelo Heights (now called Tutela Heights), near Brantford, Ontario.
The property consisted of an orchard, large farm house, stable, pigsty, hen-house, and a carriage house, which bordered
the Grand River.[N 9] At the homestead, Bell set up his own workshop in the converted carriage house near to what
he called his "dreaming place", a large hollow nestled in trees at the back of the property above the river. Despite
his frail condition upon arriving in Canada, Bell found the climate and environs to his liking, and rapidly improved.[N
10] He continued his interest in the study of the human voice and when he discovered the Six Nations Reserve across
the river at Onondaga, he learned the Mohawk language and translated its unwritten vocabulary into Visible Speech
symbols. For his work, Bell was awarded the title of Honorary Chief and participated in a ceremony where he donned
a Mohawk headdress and danced traditional dances.[N 11] After setting up his workshop, Bell continued experiments
based on Helmholtz's work with electricity and sound. He also modified a melodeon (a type of pump organ) so that
it could transmit its music electrically over a distance. Once the family was settled in, both Bell and his father
made plans to establish a teaching practice and in 1871, he accompanied his father to Montreal, where Melville was
offered a position to teach his System of Visible Speech. Bell's father was invited by Sarah Fuller, principal of
the Boston School for Deaf Mutes (which continues today as the public Horace Mann School for the Deaf), in Boston,
Massachusetts, to introduce the Visible Speech System by providing training for Fuller's instructors, but he declined
the post in favor of his son. Traveling to Boston in April 1871, Bell proved successful in training the school's
instructors. He was subsequently asked to repeat the program at the American Asylum for Deaf-mutes in Hartford, Connecticut,
and the Clarke School for the Deaf in Northampton, Massachusetts. Returning home to Brantford after six months abroad,
Bell continued his experiments with his "harmonic telegraph".[N 12] The basic concept behind his device was that
messages could be sent through a single wire if each message was transmitted at a different pitch, but work on both
the transmitter and receiver was needed. Unsure of his future, he first contemplated returning to London to complete
his studies, but decided to return to Boston as a teacher. His father helped him set up his private practice by contacting
Gardiner Greene Hubbard, the president of the Clarke School for the Deaf for a recommendation. Teaching his father's
system, in October 1872, Alexander Bell opened his "School of Vocal Physiology and Mechanics of Speech" in Boston,
which attracted a large number of deaf pupils, with his first class numbering 30 students. While he was working as
a private tutor, one of his most famous pupils was Helen Keller, who came to him as a young child unable to see,
hear, or speak. She was later to say that Bell dedicated his life to the penetration of that "inhuman silence which
separates and estranges." In 1893, Keller performed the sod-breaking ceremony for the construction of Bell's new Volta Bureau, dedicated to "the increase and diffusion of knowledge relating to the deaf". Several influential
people of the time, including Bell, viewed deafness as something that should be eradicated, and also believed that
with resources and effort they could teach the deaf to speak and avoid the use of sign language, thus enabling their
integration within the wider society from which many were often being excluded. In several schools, children were
mistreated, for example by having their hands tied behind their backs so they could not communicate by signing—the
only language they knew—in an attempt to force them to use oral communication. Owing to his efforts to suppress
the teaching of sign language, Bell is often viewed negatively by those embracing Deaf culture. In the following
year, Bell became professor of Vocal Physiology and Elocution at the Boston University School of Oratory. During
this period, he alternated between Boston and Brantford, spending summers in his Canadian home. At Boston University,
Bell was "swept up" by the excitement engendered by the many scientists and inventors residing in the city. He continued
his research in sound and endeavored to find a way to transmit musical notes and articulate speech, but although
absorbed by his experiments, he found it difficult to devote enough time to experimentation. While days and evenings
were occupied by his teaching and private classes, Bell began to stay awake late into the night, running experiment
after experiment in rented facilities at his boarding house. Keeping "night owl" hours, he worried that his work
would be discovered and took great pains to lock up his notebooks and laboratory equipment. Bell had a specially
made table where he could place his notes and equipment inside a locking cover. Worse still, his health deteriorated
as he suffered severe headaches. Returning to Boston in fall 1873, Bell made a fateful decision to concentrate on
his experiments in sound. Deciding to give up his lucrative private Boston practice, Bell retained only two students,
six-year-old "Georgie" Sanders, deaf from birth, and 15-year-old Mabel Hubbard. Each pupil would play an important
role in the next developments. George's father, Thomas Sanders, a wealthy businessman, offered Bell a place to stay
in nearby Salem with Georgie's grandmother, complete with a room to "experiment". Although the offer was made by
George's mother and followed the year-long arrangement in 1872 where her son and his nurse had moved to quarters
next to Bell's boarding house, it was clear that Mr. Sanders was backing the proposal. The arrangement was for teacher
and student to continue their work together, with free room and board thrown in. Mabel was a bright, attractive girl
who was ten years Bell's junior, but became the object of his affection. Having lost her hearing after a near-fatal
bout of scarlet fever close to her fifth birthday,[N 13] she had learned to read lips but her father, Gardiner Greene
Hubbard, Bell's benefactor and personal friend, wanted her to work directly with her teacher. By 1874, Bell's initial
work on the harmonic telegraph had entered a formative stage, with progress made both at his new Boston "laboratory"
(a rented facility) and at his family home in Canada.[N 14] While working that summer in Brantford,
Bell experimented with a "phonautograph", a pen-like machine that could draw shapes of sound waves on smoked glass
by tracing their vibrations. Bell thought it might be possible to generate undulating electrical currents that corresponded
to sound waves. Bell also thought that multiple metal reeds tuned to different frequencies like a harp would be able
to convert the undulating currents back into sound. But he had no working model to demonstrate the feasibility of
these ideas. In 1874, telegraph message traffic was rapidly expanding and in the words of Western Union President
William Orton, had become "the nervous system of commerce". Orton had contracted with inventors Thomas Edison and
Elisha Gray to find a way to send multiple telegraph messages on each telegraph line to avoid the great cost of constructing
new lines. When Bell mentioned to Gardiner Hubbard and Thomas Sanders that he was working on a method of sending
multiple tones on a telegraph wire using a multi-reed device, the two wealthy patrons began to financially support
Bell's experiments. Patent matters would be handled by Hubbard's patent attorney, Anthony Pollok. In March 1875,
Bell and Pollok visited the famous scientist Joseph Henry, who was then director of the Smithsonian Institution,
and asked Henry's advice on the electrical multi-reed apparatus that Bell hoped would transmit the human voice by
telegraph. Henry replied that Bell had "the germ of a great invention". When Bell said that he did not have the necessary
knowledge, Henry replied, "Get it!" That declaration greatly encouraged Bell to keep trying, even though he did not
have the equipment needed to continue his experiments, nor the ability to create a working model of his ideas. However,
a chance meeting in 1874 between Bell and Thomas A. Watson, an experienced electrical designer and mechanic at the
electrical machine shop of Charles Williams, changed all that. With financial support from Sanders and Hubbard, Bell
hired Thomas Watson as his assistant,[N 15] and the two of them experimented with acoustic telegraphy. On June 2,
1875, Watson accidentally plucked one of the reeds and Bell, at the receiving end of the wire, heard the overtones
of the reed; overtones that would be necessary for transmitting speech. That demonstrated to Bell that only one reed
or armature was necessary, not multiple reeds. This led to the "gallows" sound-powered telephone, which could transmit
indistinct, voice-like sounds, but not clear speech. In 1875, Bell developed an acoustic telegraph and drew up a
patent application for it. Since he had agreed to share U.S. profits with his investors Gardiner Hubbard and Thomas
Sanders, Bell requested that an associate in Ontario, George Brown, attempt to patent it in Britain, instructing
his lawyers to apply for a patent in the U.S. only after they received word from Britain (Britain would issue patents
only for discoveries not previously patented elsewhere). Meanwhile, Elisha Gray was also experimenting with acoustic
telegraphy and thought of a way to transmit speech using a water transmitter. On February 14, 1876, Gray filed a
caveat with the U.S. Patent Office for a telephone design that used a water transmitter. That same morning, Bell's
lawyer filed Bell's application with the patent office. There is considerable debate about who arrived first and
Gray later challenged the primacy of Bell's patent. Bell was in Boston on February 14 and did not arrive in Washington
until February 26. Bell's patent 174,465 was issued on March 7, 1876, by the U.S. Patent Office. Bell's
patent covered "the method of, and apparatus for, transmitting vocal or other sounds telegraphically ... by causing
electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sound."[N 16]
Bell returned to Boston the same day and the next day resumed work, drawing in his notebook a diagram similar
to that in Gray's patent caveat. On March 10, 1876, three days after his patent was issued, Bell succeeded in getting
his telephone to work, using a liquid transmitter similar to Gray's design. Vibration of the diaphragm caused a needle
to vibrate in the water, varying the electrical resistance in the circuit. When Bell spoke the famous sentence "Mr.
Watson—Come here—I want to see you" into the liquid transmitter, Watson, listening at the receiving end in an adjoining
room, heard the words clearly. Although Bell was, and still is, accused of stealing the telephone from Gray, Bell
used Gray's water transmitter design only after Bell's patent had been granted, and only as a proof of concept scientific
experiment, to prove to his own satisfaction that intelligible "articulate speech" (Bell's words) could be electrically
transmitted. After March 1876, Bell focused on improving the electromagnetic telephone and never used Gray's liquid
transmitter in public demonstrations or commercial use. The question of priority for the variable resistance feature
of the telephone was raised by the examiner before he approved Bell's patent application. He told Bell that his claim
for the variable resistance feature was also described in Gray's caveat. Bell pointed to a variable resistance device
in Bell's previous application in which Bell described a cup of mercury, not water. Bell had filed the mercury application
at the patent office a year earlier on February 25, 1875, long before Elisha Gray described the water device. In
addition, Gray abandoned his caveat, and because he did not contest Bell's priority, the examiner approved Bell's
patent on March 3, 1876. Gray had reinvented the variable resistance telephone, but Bell was the first to write down
the idea and the first to test it in a telephone. The patent examiner, Zenas Fisk Wilber, later stated in an affidavit
that he was an alcoholic who was much in debt to Bell's lawyer, Marcellus Bailey, with whom he had served in the
Civil War. He claimed he showed Gray's patent caveat to Bailey. Wilber also claimed (after Bell arrived in Washington
D.C. from Boston) that he showed Gray's caveat to Bell and that Bell paid him $100. Bell claimed they discussed the
patent only in general terms, although in a letter to Gray, Bell admitted that he learned some of the technical details.
Bell denied in an affidavit that he ever gave Wilber any money. Continuing his experiments in Brantford, Bell brought
home a working model of his telephone. On August 3, 1876, from the telegraph office in Mount Pleasant five miles
(eight km) away from Brantford, Bell sent a tentative telegram indicating that he was ready. With curious onlookers
packed into the office as witnesses, faint voices were heard replying. The following night, he amazed guests as well
as his family when a message was received at the Bell home from Brantford, four miles (six km) distant, along an
improvised wire strung up along telegraph lines and fences, and laid through a tunnel. This time, guests at the household
distinctly heard people in Brantford reading and singing. These experiments clearly proved that the telephone could
work over long distances. Bell and his partners, Hubbard and Sanders, offered to sell the patent outright to Western
Union for $100,000. The president of Western Union balked, countering that the telephone was nothing but a toy. Two
years later, he told colleagues that if he could get the patent for $25 million he would consider it a bargain. By
then, the Bell company no longer wanted to sell the patent. Bell's investors would become millionaires, while he
fared well from residuals and at one point had assets of nearly one million dollars. Bell began a series of public
demonstrations and lectures to introduce the new invention to the scientific community as well as the general public.
A short time later, his demonstration of an early telephone prototype at the 1876 Centennial Exposition in Philadelphia
brought the telephone to international attention. Influential visitors to the exhibition included Emperor Pedro II
of Brazil. Later Bell had the opportunity to demonstrate the invention personally to Sir William Thomson (later,
Lord Kelvin), a renowned Scottish scientist, as well as to Queen Victoria, who had requested a private audience at
Osborne House, her Isle of Wight home. She called the demonstration "most extraordinary". The enthusiasm surrounding
Bell's public displays laid the groundwork for universal acceptance of the revolutionary device. The Bell Telephone
Company was created in 1877, and by 1886, more than 150,000 people in the U.S. owned telephones. Bell Company engineers
made numerous other improvements to the telephone, which emerged as one of the most successful products ever. In
1879, the Bell company acquired Edison's patents for the carbon microphone from Western Union. This made the telephone
practical for longer distances, and it was no longer necessary to shout to be heard at the receiving telephone. In
January 1915, Bell made the first ceremonial transcontinental telephone call. Calling from the AT&T head office at
15 Dey Street in New York City, Bell was heard by Thomas Watson at 333 Grant Avenue in San Francisco. The New York
Times reported on the call. As is sometimes common in scientific discoveries, simultaneous developments can occur, as evidenced
by a number of inventors who were at work on the telephone. Over a period of 18 years, the Bell Telephone Company
faced 587 court challenges to its patents, including five that went to the U.S. Supreme Court, but none was successful
in establishing priority over the original Bell patent and the Bell Telephone Company never lost a case that had
proceeded to a final trial stage. Bell's laboratory notes and family letters were the key to establishing a long
lineage to his experiments. The Bell company lawyers successfully fought off myriad lawsuits generated initially
around the challenges by Elisha Gray and Amos Dolbear. In personal correspondence to Bell, both Gray and Dolbear
had acknowledged his prior work, which considerably weakened their later claims. On January 13, 1887, the U.S. Government
moved to annul the patent issued to Bell on the grounds of fraud and misrepresentation. After a series of decisions
and reversals, the Bell company won a decision in the Supreme Court, though a couple of the original claims from
the lower court cases were left undecided. By the time that the trial wound its way through nine years of legal battles,
the U.S. prosecuting attorney had died and the two Bell patents (No. 174,465 dated March 7, 1876 and No. 186,787
dated January 30, 1877) were no longer in effect, although the presiding judges agreed to continue the proceedings
due to the case's importance as a "precedent". With a change in administration and charges of conflict of interest
(on both sides) arising from the original trial, the U.S. Attorney General dropped the lawsuit on November 30, 1897,
leaving several issues undecided on the merits. During a deposition filed for the 1887 trial, Italian inventor Antonio
Meucci also claimed to have created the first working model of a telephone in Italy in 1834. In 1886, in the first
of three cases in which he was involved, Meucci took the stand as a witness in the hopes of establishing his invention's
priority. Meucci's evidence in this case was disputed due to a lack of material evidence for his inventions as his
working models were purportedly lost at the laboratory of American District Telegraph (ADT) of New York, which was
later incorporated as a subsidiary of Western Union in 1901. Meucci's work, like many other inventors of the period,
was based on earlier acoustic principles and despite evidence of earlier experiments, the final case involving Meucci
was eventually dropped upon Meucci's death. However, due to the efforts of Congressman Vito Fossella, the U.S. House
of Representatives on June 11, 2002 stated that Meucci's "work in the invention of the telephone should be acknowledged",
even though this did not put an end to a still contentious issue.[N 17] Some modern scholars do not agree with the
claims that Bell's work on the telephone was influenced by Meucci's inventions.[N 18] The value of the Bell patent
was acknowledged throughout the world, and patent applications were made in most major countries, but when Bell had
delayed the German patent application, the electrical firm of Siemens & Halske (S&H) managed to set up a rival manufacturer
of Bell telephones under their own patent. The Siemens company produced near-identical copies of the Bell telephone
without having to pay royalties. The establishment of the International Bell Telephone Company in Brussels, Belgium
in 1880, as well as a series of agreements in other countries eventually consolidated a global telephone operation.
The strain put on Bell by his constant appearances in court, necessitated by the legal battles, eventually resulted
in his resignation from the company.[N 19] On July 11, 1877, a few days after the Bell Telephone Company was established,
Bell married Mabel Hubbard (1857–1923) at the Hubbard estate in Cambridge, Massachusetts. His wedding present to
his bride was to turn over 1,487 of his 1,497 shares in the newly formed Bell Telephone Company. Shortly thereafter,
the newlyweds embarked on a year-long honeymoon in Europe. During that excursion, Bell took a handmade model of his
telephone with him, making it a "working holiday". The courtship had begun years earlier; however, Bell waited until
he was more financially secure before marrying. Although the telephone appeared to be an "instant" success, it was
not initially a profitable venture and Bell's main sources of income were from lectures until after 1897. One unusual
request exacted by his fiancée was that he use "Alec" rather than the family's earlier familiar name of "Aleck".
From 1876, he would sign his name "Alec Bell". They had four children. The Bell family home was in Cambridge, Massachusetts,
until 1880 when Bell's father-in-law bought a house in Washington, D.C., and later in 1882 bought a home in the same
city for Bell's family, so that they could be with him while he attended to the numerous court cases involving patent
disputes. Bell was a British subject throughout his early life in Scotland and later in Canada until 1882, when he
became a naturalized citizen of the United States. In 1915, he characterized his status as: "I am not one of those
hyphenated Americans who claim allegiance to two countries." Despite this declaration, Bell has been proudly claimed
as a "native son" by all three countries he resided in: the United States, Canada, and the United Kingdom. By 1885,
a new summer retreat was contemplated. That summer, the Bells had a vacation on Cape Breton Island in Nova Scotia,
spending time at the small village of Baddeck. Returning in 1886, Bell started building an estate on a point across
from Baddeck, overlooking Bras d'Or Lake. By 1889, a large house, christened The Lodge was completed and two years
later, a larger complex of buildings, including a new laboratory, were begun that the Bells would name Beinn Bhreagh
(Gaelic: beautiful mountain) after Bell's ancestral Scottish highlands.[N 21] Bell also built the Bell Boatyard on
the estate, employing up to 40 people building experimental craft as well as wartime lifeboats and workboats for
the Royal Canadian Navy and pleasure craft for the Bell family. An enthusiastic boater, Bell and his family sailed
or rowed a long series of vessels on Bras d'Or Lake, ordering additional vessels from the H.W. Embree and Sons boatyard
in Port Hawkesbury, Nova Scotia. In his final, and some of his most productive years, Bell split his residency between
Washington, D.C., where he and his family initially resided for most of the year, and at Beinn Bhreagh where they
spent increasing amounts of time. Until the end of his life, Bell and his family would alternate between the two
homes, but Beinn Bhreagh would, over the next 30 years, become more than a summer home as Bell became so absorbed
in his experiments that his annual stays lengthened. Both Mabel and Bell became immersed in the Baddeck community
and were accepted by the villagers as "their own".[N 22] The Bells were still in residence at Beinn Bhreagh when
the Halifax Explosion occurred on December 6, 1917. Mabel and Bell mobilized the community to help victims in Halifax.
Although Alexander Graham Bell is most often associated with the invention of the telephone, his interests were extremely
varied. According to one of his biographers, Charlotte Gray, Bell's work ranged "unfettered across the scientific
landscape" and he often went to bed voraciously reading the Encyclopædia Britannica, scouring it for new areas of
interest. The range of Bell's inventive genius is represented only in part by the 18 patents granted in his name
alone and the 12 he shared with his collaborators. These included 14 for the telephone and telegraph, four for the
photophone, one for the phonograph, five for aerial vehicles, four for "hydroairplanes" and two for selenium cells.
Bell's inventions spanned a wide range of interests and included a metal jacket to assist in breathing, the audiometer
to detect minor hearing problems, a device to locate icebergs, investigations on how to separate salt from seawater,
and work on finding alternative fuels. Bell worked extensively in medical research and invented techniques for teaching
speech to the deaf. During his Volta Laboratory period, Bell and his associates considered impressing a magnetic
field on a record as a means of reproducing sound. Although the trio briefly experimented with the concept, they
could not develop a workable prototype. They abandoned the idea, never realizing they had glimpsed a basic principle
which would one day find its application in the tape recorder, the hard disc and floppy disc drive and other magnetic
media. Bell's own home used a primitive form of air conditioning, in which fans blew currents of air across great
blocks of ice. He also anticipated modern concerns with fuel shortages and industrial pollution. Methane gas, he
reasoned, could be produced from the waste of farms and factories. At his Canadian estate in Nova Scotia, he experimented
with composting toilets and devices to capture water from the atmosphere. In a magazine interview published shortly
before his death, he reflected on the possibility of using solar panels to heat houses. Bell and his assistant Charles
Sumner Tainter jointly invented a wireless telephone, named a photophone, which allowed for the transmission of both
sounds and normal human conversations on a beam of light. Both men later became full associates in the Volta Laboratory
Association. On June 21, 1880, Bell's assistant transmitted a wireless voice telephone message a considerable distance,
from the roof of the Franklin School in Washington, D.C., to Bell at the window of his laboratory, some 213 metres
(700 ft) away, 19 years before the first voice radio transmissions. Bell believed the photophone's principles were
his life's "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest
invention [I have] ever made, greater than the telephone". The photophone was a precursor to the fiber-optic communication
systems which achieved popular worldwide usage in the 1980s. Its master patent was issued in December 1880, many
decades before the photophone's principles came into popular use. Bell is also credited with developing one of the
early versions of a metal detector in 1881. The device was quickly put together in an attempt to find the bullet
in the body of U.S. President James Garfield. According to some accounts, the metal detector worked flawlessly in
tests but did not find the assassin's bullet partly because the metal bed frame on which the President was lying
disturbed the instrument, resulting in static. The president's surgeons, who were skeptical of the device, ignored
Bell's requests to move the president to a bed not fitted with metal springs. Alternatively, although Bell had detected
a slight sound on his first test, the bullet may have been lodged too deeply to be detected by the crude apparatus.
Bell's own detailed account, presented to the American Association for the Advancement of Science in 1882, differs
in several particulars from most of the many and varied versions now in circulation, most notably by concluding that
extraneous metal was not to blame for failure to locate the bullet. Perplexed by the peculiar results he had obtained
during an examination of Garfield, Bell "...proceeded to the Executive Mansion the next morning...to ascertain from
the surgeons whether they were perfectly sure that all metal had been removed from the neighborhood of the bed. It
was then recollected that underneath the horse-hair mattress on which the President lay was another mattress composed
of steel wires. Upon obtaining a duplicate, the mattress was found to consist of a sort of net of woven steel wires,
with large meshes. The extent of the [area that produced a response from the detector] having been so small, as compared
with the area of the bed, it seemed reasonable to conclude that the steel mattress had produced no detrimental effect."
In a footnote, Bell adds that "The death of President Garfield and the subsequent post-mortem examination, however,
proved that the bullet was at too great a distance from the surface to have affected our apparatus." The March 1906
Scientific American article by American pioneer William E. Meacham explained the basic principle of hydrofoils and
hydroplanes. Bell considered the invention of the hydroplane as a very significant achievement. Based on information
gained from that article, he began to sketch concepts of what is now called a hydrofoil boat. Bell and assistant Frederick
W. "Casey" Baldwin began hydrofoil experimentation in the summer of 1908 as a possible aid to airplane takeoff from
water. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models. This led him and
Bell to the development of practical hydrofoil watercraft. During his world tour of 1910–11, Bell and Baldwin met
with Forlanini in France. They had rides in the Forlanini hydrofoil boat over Lake Maggiore. Baldwin described it
as being as smooth as flying. On returning to Baddeck, Bell and Baldwin built a number of initial concepts as experimental models,
including the Dhonnas Beag, the first self-propelled Bell-Baldwin hydrofoil. The experimental boats were essentially
proof-of-concept prototypes that culminated in the more substantial HD-4, powered by Renault engines. A top speed
of 54 miles per hour (87 km/h) was achieved, with the hydrofoil exhibiting rapid acceleration, good stability and
steering along with the ability to take waves without difficulty. In 1913, Dr. Bell hired Walter Pinaud, a Sydney
yacht designer and builder as well as the proprietor of Pinaud's Yacht Yard in Westmount, Nova Scotia to work on
the pontoons of the HD-4. Pinaud soon took over the boatyard at Bell Laboratories on Beinn Bhreagh, Bell's estate
near Baddeck, Nova Scotia. Pinaud's experience in boat-building enabled him to make useful design changes to the
HD-4. After the First World War, work began again on the HD-4. Bell's report to the U.S. Navy permitted him to obtain
two 350 horsepower (260 kilowatts) engines in July 1919. On September 9, 1919, the HD-4 set a world marine speed
record of 70.86 miles per hour (114.04 kilometres per hour), a record which stood for ten years. In 1898, Bell experimented
with tetrahedral box kites and wings constructed of multiple compound tetrahedral kites covered in maroon silk.[N
23] The tetrahedral wings were named Cygnet I, II and III, and were flown both unmanned and manned (Cygnet I crashed
during a flight carrying Selfridge) in the period from 1907 to 1912. Some of Bell's kites are on display at the Alexander
Graham Bell National Historic Site. Bell was a supporter of aerospace engineering research through the Aerial Experiment
Association (AEA), officially formed at Baddeck, Nova Scotia, in October 1907 at the suggestion of his wife Mabel
and with her financial support after the sale of some of her real estate. The AEA was headed by Bell and the founding
members were four young men: American Glenn H. Curtiss, a motorcycle manufacturer at the time and who held the title
"world's fastest man", having ridden his self-constructed motor bicycle around in the shortest time, and who was
later awarded the Scientific American Trophy for the first official one-kilometre flight in the Western hemisphere,
and who later became a world-renowned airplane manufacturer; Lieutenant Thomas Selfridge, an official observer from
the U.S. Federal government and one of the few people in the army who believed that aviation was the future; Frederick
W. Baldwin, the first Canadian and first British subject to pilot a public flight in Hammondsport, New York, and
J.A.D. McCurdy; Baldwin and McCurdy were new engineering graduates from the University of Toronto. The AEA's work
progressed to heavier-than-air machines, applying their knowledge of kites to gliders. Moving to Hammondsport, the
group then designed and built the Red Wing, framed in bamboo and covered in red silk and powered by a small air-cooled
engine. On March 12, 1908, over Keuka Lake, the biplane lifted off on the first public flight in North America.[N 24][N 25]
The innovations that were incorporated into this design included a cockpit enclosure and tail rudder (later
variations on the original design would add ailerons as a means of control). One of the AEA's inventions, a practical
wingtip form of the aileron, was to become a standard component on all aircraft.[N 26] The White Wing and June Bug
were to follow and by the end of 1908, over 150 flights without mishap had been accomplished. However, the AEA had
depleted its initial reserves and only a $15,000 grant from Mrs. Bell allowed it to continue with experiments. Lt.
Selfridge had also become the first person killed in a powered heavier-than-air flight in a crash of the Wright Flyer
at Fort Myer, Virginia, on September 17, 1908. Their final aircraft design, the Silver Dart, embodied all of the
advancements found in the earlier machines. On February 23, 1909, Bell was present as the Silver Dart flown by J.A.D.
McCurdy from the frozen ice of Bras d'Or, made the first aircraft flight in Canada. Bell had worried that the flight
was too dangerous and had arranged for a doctor to be on hand. With the successful flight, the AEA disbanded and
the Silver Dart would revert to Baldwin and McCurdy who began the Canadian Aerodrome Company and would later demonstrate
the aircraft to the Canadian Army. Bell was connected with the eugenics movement in the United States. In his lecture
Memoir upon the formation of a deaf variety of the human race presented to the National Academy of Sciences on November
13, 1883 he noted that congenitally deaf parents were more likely to produce deaf children and tentatively suggested
that couples where both parties were deaf should not marry. However, it was his hobby of livestock breeding which
led to his appointment to biologist David Starr Jordan's Committee on Eugenics, under the auspices of the American
Breeders' Association. The committee unequivocally extended the principle to man. From 1912 until 1918 he was the
chairman of the board of scientific advisers to the Eugenics Record Office associated with Cold Spring Harbor Laboratory
in New York, and regularly attended meetings. In 1921, he was the honorary president of the Second International
Congress of Eugenics held under the auspices of the American Museum of Natural History in New York. Organisations
such as these advocated passing laws (with success in some states) that established the compulsory sterilization
of people deemed to be, as Bell called them, a "defective variety of the human race." By the late 1930s, about half
the states in the U.S. had eugenics laws, and California's compulsory sterilization law was used as a model for that
of Nazi Germany. A large number of Bell's writings, personal correspondence, notebooks, papers and other documents
reside at both the United States Library of Congress Manuscript Division (as the Alexander Graham Bell Family Papers),
and at the Alexander Graham Bell Institute, Cape Breton University, Nova Scotia; major portions of which are available
for online viewing. In 1880, Bell received the Volta Prize with a purse of 50,000 francs (approximately US$250,000
in today's dollars) for the invention of the telephone from the Académie française, representing the French government.
Among the luminaries who judged were Victor Hugo and Alexandre Dumas. The Volta Prize was conceived by Napoleon Bonaparte
in 1801, and named in honor of Alessandro Volta, with Bell receiving the third grand prize in its history. Since
Bell was becoming increasingly affluent, he used his prize money to create endowment funds (the 'Volta Fund') and
institutions in and around the United States capital of Washington, D.C. These included the prestigious 'Volta Laboratory
Association' (1880), also known as the Volta Laboratory and as the 'Alexander Graham Bell Laboratory', and which
eventually led to the Volta Bureau (1887) as a center for studies on deafness which is still in operation in Georgetown,
Washington, D.C. The Volta Laboratory became an experimental facility devoted to scientific discovery, and the very
next year it improved Edison's phonograph by substituting wax for tinfoil as the recording medium and incising the
recording rather than indenting it, key upgrades that Edison himself later adopted. The laboratory was also the site
where he and his associate invented his "proudest achievement", "the photophone", the "optical telephone" which presaged
fibre optical telecommunications, while the Volta Bureau would later evolve into the Alexander Graham Bell Association
for the Deaf and Hard of Hearing (the AG Bell), a leading center for the research and pedagogy of deafness. In partnership
with Gardiner Greene Hubbard, Bell helped establish the publication Science during the early 1880s. In 1898, Bell
was elected as the second president of the National Geographic Society, serving until 1903, and was primarily responsible
for the extensive use of illustrations, including photography, in the magazine. He also became a Regent of the Smithsonian
Institution (1898–1922). The French government conferred on him the decoration of the Légion d'honneur (Legion of
Honor); the Royal Society of Arts in London awarded him the Albert Medal in 1902; the University of Würzburg, Bavaria,
granted him a PhD, and he was awarded the Franklin Institute's Elliott Cresson Medal in 1912. He was one of the founders
of the American Institute of Electrical Engineers in 1884, and served as its president from 1891 to 1892. Bell was later
awarded the AIEE's Edison Medal in 1914 "For meritorious achievement in the invention of the telephone". Honors and
tributes flowed to Bell in increasing numbers as his most famous invention became ubiquitous and his personal fame
grew. Bell received numerous honorary degrees from colleges and universities, to the point that the requests almost
became burdensome. During his life he also received dozens of major awards, medals and other tributes. These included
statuary monuments to both him and the new form of communication his telephone created, notably the Bell Telephone
Memorial erected in his honor in Alexander Graham Bell Gardens in Brantford, Ontario, in 1917. In 1936 the US Patent
Office declared Bell first on its list of the country's greatest inventors, leading to the US Post Office issuing
a commemorative stamp honoring Bell in 1940 as part of its 'Famous Americans Series'. The First Day of Issue ceremony
was held on October 28 in Boston, Massachusetts, the city where Bell spent considerable time on research and working
with the deaf. The Bell stamp became very popular and sold out in little time. The stamp became, and remains to this
day, the most valuable one of the series. The 150th anniversary of Bell's birth in 1997 was marked by a special issue
of commemorative £1 banknotes from the Royal Bank of Scotland. The illustrations on the reverse of the note include
Bell's face in profile, his signature, and objects from Bell's life and career: users of the telephone over the ages;
an audio wave signal; a diagram of a telephone receiver; geometric shapes from engineering structures; representations
of sign language and the phonetic alphabet; the geese which helped him to understand flight; and the sheep which
he studied to understand genetics. Additionally, the Government of Canada honored Bell in 1997 with a C$100 gold
coin, in tribute also to the 150th anniversary of his birth, and with a silver dollar coin in 2009 in honor of the
100th anniversary of flight in Canada. That first flight was made by an airplane designed under Dr. Bell's tutelage,
named the Silver Dart. Bell's image, and also those of his many inventions have graced paper money, coinage and postal
stamps in numerous countries worldwide for many dozens of years. Alexander Graham Bell was ranked 57th among the
100 Greatest Britons (2002) in an official BBC nationwide poll, and among the Top Ten Greatest Canadians (2004),
and the 100 Greatest Americans (2005). In 2006 Bell was also named as one of the 10 greatest Scottish scientists
in history after having been listed in the National Library of Scotland's 'Scottish Science Hall of Fame'. Bell's
name is still widely known and used as part of the names of dozens of educational institutes, corporate namesakes,
street and place names around the world. Bell died of complications arising from diabetes on August 2, 1922, at his
private estate, Beinn Bhreagh, Nova Scotia, at age 75. Bell had also been afflicted with pernicious anemia. His last
view of the land he had inhabited was by moonlight on his mountain estate at 2:00 a.m.[N 29][N 30] While tending
to him after his long illness, Mabel, his wife, whispered, "Don't leave me." By way of reply, Bell traced the sign
for "no" in the air, and then he died. Bell's coffin was constructed of Beinn Bhreagh pine by his laboratory staff,
lined with the same red silk fabric used in his tetrahedral kite experiments. To help celebrate his life, his wife
asked guests not to wear black (the traditional funeral color) while attending his service, during which soloist
Jean MacDonald sang a verse of Robert Louis Stevenson's "Requiem": Dr. Alexander Graham Bell was buried atop Beinn
Bhreagh mountain, on his estate where he had resided increasingly for the last 35 years of his life, overlooking
Bras d'Or Lake. He was survived by his wife Mabel, his two daughters, Elsie May and Marian, and nine of his grandchildren.
The bel (B) and the smaller decibel (dB) are units of measurement of sound intensity invented by Bell Labs and named
after him.[N 28] Since 1976 the IEEE's Alexander Graham Bell Medal has been awarded to honor outstanding contributions
in the field of telecommunications.
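The bel and decibel quantify a power ratio on a base-10 logarithmic scale. As an illustrative sketch using the standard textbook definition (not drawn from this article):

```python
import math

def power_ratio_db(p, p_ref):
    """Express the ratio of power p to a reference power p_ref in decibels.

    One bel is a tenfold increase in power; a decibel is one tenth of a
    bel, hence the factor of 10 in the standard definition.
    """
    return 10 * math.log10(p / p_ref)

# A hundredfold increase in power is 2 bels, i.e. 20 dB;
# doubling the power is roughly 3.01 dB.
print(power_ratio_db(100.0, 1.0))
print(power_ratio_db(2.0, 1.0))
```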
A pub /pʌb/, or public house, is, despite its name, a private house; it is called a public house because it is licensed to
sell alcohol to the general public. It is a drinking establishment in Britain, Ireland, New Zealand, Australia, Canada,
Denmark and New England. In many places, especially in villages, a pub can be the focal point of the community. The
writings of Samuel Pepys describe the pub as the heart of England. The history of pubs can be traced back to Roman
taverns, through the Anglo-Saxon alehouse to the development of the modern tied house system in the 19th century.
Historically, pubs have been socially and culturally distinct from cafés, bars and German beer halls. Most pubs offer
a range of beers, wines, spirits, and soft drinks and snacks. Traditionally the windows of town pubs were of smoked
or frosted glass to obscure the clientele from the street but from the 1990s onwards, there has been a move towards
clear glass, in keeping with brighter interiors. The owner, tenant or manager (licensee) of a pub is properly known
as the "pub landlord". The term publican (in historical Roman usage a public contractor or tax farmer) has come into
use since Victorian times to designate the pub landlord. Known as "locals" to regulars, pubs are typically chosen
for their proximity to home or work, the availability of a particular beer, as a place to smoke (or avoid it), hosting
a darts team, having a pool or snooker table, or appealing to friends. Until the 1970s most of the larger pubs also
featured an off-sales counter or attached shop for the sales of beers, wines and spirits for home consumption. In
the 1970s the newly built supermarkets and high street chain stores or off-licences undercut the pub prices to such
a degree that within ten years all but a handful of pubs had closed their off-sale counters, which had often been
referred to colloquially as the jug and bottle. The inhabitants of the British Isles have been drinking ale since
the Bronze Age, but it was with the arrival of the Roman Empire on its shores in the 1st century, and the construction
of the Roman road networks, that the first inns, called tabernae, in which travellers could obtain refreshment began
to appear. After the departure of Roman authority in the 5th century and the fall of the Romano-British kingdoms,
the Anglo-Saxons established alehouses that grew out of domestic dwellings; the Anglo-Saxon alewife would put a green
bush up on a pole to let people know her brew was ready. These alehouses quickly evolved into meeting houses for
the folk to socially congregate, gossip and arrange mutual help within their communities. Herein lies the origin
of the modern public house, or "Pub" as it is colloquially called in England. They rapidly spread across the Kingdom,
becoming so commonplace that in 965 King Edgar decreed that there should be no more than one alehouse per village.
A traveller in the early Middle Ages could obtain overnight accommodation in monasteries, but later a demand for
hostelries grew with the popularity of pilgrimages and travel. The Hostellers of London were granted guild status
in 1446 and in 1514 the guild became the Worshipful Company of Innholders. Inns are buildings where travellers can
seek lodging and, usually, food and drink. They are typically located in the country or along a highway. In Europe,
they possibly first sprang up when the Romans built a system of roads two millennia ago.[citation needed] Some inns
in Europe are several centuries old. In addition to providing for the needs of travellers, inns traditionally acted
as community gathering places. In Europe, it is the provision of accommodation, if anything, that now distinguishes
inns from taverns, alehouses and pubs. The latter tend to provide alcohol (and, in the UK, soft drinks and often
food), but less commonly accommodation. Inns tend to be older and grander establishments: historically they provided
not only food and lodging, but also stabling and fodder for the traveller's horse(s) and on some roads fresh horses
for the mail coach. Famous London inns include The George, Southwark and The Tabard. There is however no longer a
formal distinction between an inn and other kinds of establishment. Many pubs use "Inn" in their name, either because
they are long established former coaching inns, or to summon up a particular kind of image, or in many cases simply
as a pun on the word "in", as in "The Welcome Inn", the name of many pubs in Scotland. The original services of an
inn are now also available at other establishments, such as hotels, lodges, and motels, which focus more on lodging
customers than on other services, although they usually provide meals; pubs, which are primarily alcohol-serving
establishments; and restaurants and taverns, which serve food and drink. In North America, the lodging aspect of
the word "inn" lives on in hotel brand names like Holiday Inn, and in some state laws that refer to lodging operators
as innkeepers. The Inns of Court and Inns of Chancery in London started as ordinary inns where barristers met to
do business, but became institutions of the legal profession in England and Wales. Traditional English ale was made
solely from fermented malt. The practice of adding hops to produce beer was introduced from the Netherlands in the
early 15th century. Alehouses would each brew their own distinctive ale, but independent breweries began to appear
in the late 17th century. By the end of the century almost all beer was brewed by commercial breweries. The 18th
century saw a huge growth in the number of drinking establishments, primarily due to the introduction of gin. Gin
was brought to England by the Dutch after the Glorious Revolution of 1688 and became very popular after the government
created a market for "cuckoo grain" or "cuckoo malt" that was unfit to be used in brewing and distilling by allowing
unlicensed gin and beer production, while imposing a heavy duty on all imported spirits. As thousands of gin-shops
sprang up all over England, brewers fought back by increasing the number of alehouses. By 1740 the production of
gin had increased to six times that of beer and because of its cheapness it became popular with the poor, leading
to the so-called Gin Craze. Over half of the 15,000 drinking establishments in London were gin shops. The drunkenness
and lawlessness created by gin was seen to lead to ruination and degradation of the working classes. The distinction[clarification
needed] was illustrated by William Hogarth in his engravings Beer Street and Gin Lane. The Gin Act 1736 imposed high
taxes on retailers and led to riots in the streets. The prohibitive duty was gradually reduced and finally abolished
in 1742. The Gin Act 1751 however was more successful. It forced distillers to sell only to licensed retailers and
brought gin shops under the jurisdiction of local magistrates. By the early 19th century, encouraged by lower duties
on gin, the gin houses or "Gin Palaces" had spread from London to most cities and towns in Britain, with most of
the new establishments illegal and unlicensed. These bawdy, loud and unruly drinking dens so often described by Charles
Dickens in his Sketches by Boz (published 1835–1836) increasingly came to be held as unbridled cesspits of immorality
or crime and the source of much ill-health and alcoholism among the working classes. Under a banner of "reducing
public drunkenness" the Beer Act of 1830 introduced a new lower tier of premises permitted to sell alcohol, the Beer
Houses. At the time beer was viewed as harmless, nutritious and even healthy. Young children were often given what
was described as small beer, which was brewed to have a low alcohol content, as the local water was often unsafe.
Even the evangelical church and temperance movements of the day viewed the drinking of beer very much as a secondary
evil and a normal accompaniment to a meal. The freely available beer was thus intended to wean the drinkers off the
evils of gin, or so the thinking went. Under the 1830 Act any householder who paid rates could apply, with a one-off
payment of two guineas (roughly equal in value to £168 today), to sell beer or cider in his home (usually the front
parlour) and even to brew his own on his premises. The permission did not extend to the sale of spirits and fortified
wines, and any beer house discovered selling those items was closed down and the owner heavily fined. Beer houses
were not permitted to open on Sundays. The beer was usually served in jugs or dispensed directly from tapped wooden
barrels on a table in the corner of the room. Often profits were so high the owners were able to buy the house next
door to live in, turning every room in their former home into bars and lounges for customers. In the first year,
400 beer houses opened and within eight years there were 46,000 across the country, far outnumbering the combined
total of long-established taverns, pubs, inns and hotels. Because permission was so cheap and easy to obtain, and the
potential profits so large, the number of beer houses continued to rise, and in some towns nearly every other house
in a street could be a beer house. Finally, in 1869, the growth was checked
by magisterial control and new licensing laws were introduced. Only then was it made harder to get a licence, and
the licensing laws which operate today were formulated. Although the new licensing laws prevented new beer houses
from being created, those already in existence were allowed to continue and many did not close until nearly the end
of the 19th century. A very small number remained into the 21st century. The vast majority of the beer houses applied
for the new licences and became full pubs. These usually small establishments can still be identified in many towns,
seemingly oddly located in the middle of otherwise terraced housing part way up a street, unlike purpose-built pubs
that are usually found on corners or road junctions. Many of today's respected real ale micro-brewers in the UK started
as home based Beer House brewers under the 1830 Act. The beer houses tended to avoid the traditional pub names like
The Crown, The Red Lion, The Royal Oak etc. and, if they did not simply name their place Smith's Beer House, they
would apply topical pub names in an effort to reflect the mood of the times. There was already regulation on public
drinking spaces in the 17th and 18th centuries,[citation needed] and the income earned from licences was beneficial
to the crown. Tavern owners were required to possess a licence to sell ale, and a separate licence for distilled
spirits. From the mid-19th century on the opening hours of licensed premises in the UK were restricted. However licensing
was gradually liberalised after the 1960s, until contested licensing applications became very rare, and the remaining
administrative function was transferred to Local Authorities in 2005. The Wine and Beerhouse Act 1869 reintroduced
the stricter controls of the previous century. The sale of beers, wines or spirits required a licence for the premises
from the local magistrates. Further provisions regulated gaming, drunkenness, prostitution and undesirable conduct
on licensed premises, enforceable by prosecution or more effectively by the landlord under threat of forfeiting his
licence. Licences were only granted, transferred or renewed at special Licensing Sessions courts, and were limited
to respectable individuals. Often these were ex-servicemen or ex-policemen; retiring to run a pub was popular amongst
military officers at the end of their service. Licence conditions varied widely, according to local practice. They
would specify permitted hours, which might require Sunday closing, or conversely permit all-night opening near a
market. Typically they might require opening throughout the permitted hours, and the provision of food or lavatories.
Once obtained, licences were jealously protected by the licensees (who were expected to be generally present, not
an absentee owner or company), and even "Occasional Licences" to serve drinks at temporary premises such as fêtes
would usually be granted only to existing licensees. Objections might be made by the police, rival landlords or anyone
else on the grounds of infractions such as serving drunks, disorderly or dirty premises, or ignoring permitted hours.
Detailed licensing records were kept, giving the Public House, its address, owner, licensee and misdemeanours of
the licensees, often going back for hundreds of years[citation needed]. Many of these records survive and can be
viewed, for example, at the London Metropolitan Archives centre. The restrictions were tightened by the Defence of
the Realm Act of August 1914, which, along with the introduction of rationing and the censorship of the press for
wartime purposes, restricted pubs' opening hours to 12 noon–2:30 pm and 6:30 pm–9:30 pm. Opening for the full licensed
hours was compulsory, and closing time was equally firmly enforced by the police; a landlord might lose his licence
for infractions. Pubs were closed under the Act and compensation paid, for example in Pembrokeshire. There was a
special case established under the State Management Scheme where the brewery and licensed premises were bought and
run by the state until 1973, most notably in Carlisle. During the 20th century elsewhere, both the licensing laws
and enforcement were progressively relaxed, and there were differences between parishes; in the 1960s, at closing
time in Kensington at 10:30 pm, drinkers would rush over the parish boundary to be in good time for "Last Orders"
in Knightsbridge before 11 pm, a practice observed in many pubs adjoining licensing area boundaries. Some Scottish
and Welsh parishes remained officially "dry" on Sundays (although often this merely required knocking at the back
door of the pub). These restricted opening hours led to the tradition of lock-ins. However, closing times were increasingly
disregarded in the country pubs. In England and Wales by 2000 pubs could legally open from 11 am (12 noon on Sundays)
through to 11 pm (10:30 pm on Sundays). That year was also the first to allow continuous opening for 36 hours from
11 am on New Year's Eve to 11 pm on New Year's Day. In addition, many cities had by-laws to allow some pubs to extend
opening hours to midnight or 1 am, whilst nightclubs had long been granted late licences to serve alcohol into the
morning. Pubs near London's Smithfield market, Billingsgate fish market and Covent Garden fruit and flower market
could stay open 24 hours a day since Victorian times to provide a service to the shift working employees of the markets.
Scotland's and Northern Ireland's licensing laws have long been more flexible, allowing local authorities to set
pub opening and closing times. In Scotland, this stemmed out of[clarification needed] a late repeal of the wartime
licensing laws, which stayed in force until 1976. The Licensing Act 2003, which came into force on 24 November 2005,
consolidated the many laws into a single Act. This allowed pubs in England and Wales to apply to the local council
for the opening hours of their choice. It was argued that this would end the concentration of violence around 11:30
pm, when people had to leave the pub, making policing easier. In practice, alcohol-related hospital admissions rose
following the change in the law, with alcohol involved in 207,800 admissions in 2006/7. Critics claimed that these
laws would lead to "24-hour drinking". By the time the law came into effect, 60,326 establishments had applied for
longer hours and 1,121 had applied for a licence to sell alcohol 24 hours a day. However nine months later many pubs
had not changed their hours, although some stayed open longer at the weekend, but rarely beyond 1:00 am. A "lock-in"
is when a pub owner lets drinkers stay in the pub after the legal closing time, on the theory that once the doors
are locked, it becomes a private party rather than a pub. Patrons may put money behind the bar before official closing
time, and redeem their drinks during the lock-in so no drinks are technically sold after closing time. The origin
of the British lock-in was a reaction to 1915 changes in the licensing laws in England and Wales, which curtailed
opening hours to stop factory workers from turning up drunk and harming the war effort. Since 1915, the UK licensing
laws had changed very little, with comparatively early closing times. The tradition of the lock-in therefore remained.
Since the implementation of Licensing Act 2003, premises in England and Wales may apply to extend their opening hours
beyond 11 pm, allowing round-the-clock drinking and removing much of the need for lock-ins. Since the smoking ban,
some establishments operated a lock-in during which the remaining patrons could smoke without repercussions but,
unlike drinking lock-ins, allowing smoking in a pub was still a prosecutable offence. In March 2006, a law was introduced
to forbid smoking in all enclosed public places in Scotland. Wales followed suit in April 2007, with England introducing
the ban in July 2007. Pub landlords had raised concerns prior to the implementation of the law that a smoking ban
would have a negative impact on sales. After two years, the impact of the ban was mixed; some pubs suffered declining
sales, while others developed their food sales. The Wetherspoon pub chain reported in June 2009 that profits were
at the top end of expectations; however, Scottish & Newcastle's takeover by Carlsberg and Heineken was reported in
January 2008 as partly the result of its weakness following falling sales due to the ban. Similar bans are applied
in Australian pubs with smoking only allowed in designated areas. By the end of the 18th century a new room in the
pub was established: the saloon.[citation needed] Beer establishments had always provided entertainment of some sort—singing,
gaming or sport.[citation needed] Balls Pond Road in Islington was named after an establishment run by a Mr Ball
that had a duck pond at the rear, where drinkers could, for a fee, go out and take a potshot at the ducks. More common,
however, was a card room or a billiard room.[citation needed] The saloon was a room where for an admission fee or
a higher price of drinks, singing, dancing, drama or comedy was performed and drinks would be served at the table.[citation
needed] From this came the popular music hall form of entertainment—a show consisting of a variety of acts.[citation
needed] A most famous London saloon was the Grecian Saloon in The Eagle, City Road, which is still famous because
of a nursery rhyme: "Up and down the City Road / In and out The Eagle / That's the way the money goes / Pop goes
the weasel." This meant that the customer had spent all his money at The Eagle, and needed to pawn his "weasel" to
get some more. The meaning of the "weasel" is unclear but the two most likely definitions are: a flat iron used for
finishing clothing; or rhyming slang for a coat (weasel and stoat). A few pubs have stage performances such as serious
drama, stand-up comedy, musical bands, cabaret or striptease; however juke boxes, karaoke and other forms of pre-recorded
music have otherwise replaced the musical tradition of a piano or guitar and singing.[citation needed] By the 20th
century, the saloon, or lounge bar, had become a middle-class room[citation needed]—carpets on the floor, cushions
on the seats, and a penny or two on the prices,[citation needed] while the public bar, or tap room, remained working
class with bare boards, sometimes with sawdust to absorb the spitting and spillages (known as "spit and sawdust"),
hard bench seats, and cheap beer[citation needed]. This bar was known as the four-ale bar from the days when the
cheapest beer served there cost 4 pence (4d) a quart.[citation needed] Later, the public bars gradually improved
until sometimes almost the only difference was in the prices, so that customers could choose between economy and
exclusivity (or youth and age, or a jukebox or dartboard).[citation needed] With the blurring of class divisions
in the 1960s and 1970s,[citation needed] the distinction between the saloon and the public bar was often seen as
archaic,[citation needed] and was frequently abolished, usually by the removal of the dividing wall or partition.[citation
needed] While the names of saloon and public bar may still be seen on the doors of pubs, the prices (and often the
standard of furnishings and decoration) are the same throughout the premises, and many pubs now comprise one large
room. However the modern importance of dining in pubs encourages some establishments to maintain distinct rooms or
areas. The "snug", sometimes called the smoke room, was typically a small, very private room with access to the bar
that had a frosted glass external window, set above head height. A higher price was paid for beer in the snug and
nobody could look in and see the drinkers. It was not only the wealthy visitors who would use these rooms. The snug
was for patrons who preferred not to be seen in the public bar. Ladies would often enjoy a private drink in the snug
in a time when it was frowned upon for women to be in a pub. The local police officer might nip in for a quiet pint,
the parish priest for his evening whisky, or lovers for a rendezvous. CAMRA have surveyed the 50,000 pubs in Britain
and they believe that there are very few pubs that still have classic snugs. These are on a historic interiors list
in order that they can be preserved. It was the pub that first introduced the concept of the bar counter being used
to serve the beer. Until that time beer establishments used to bring the beer out to the table or benches, as remains
the practice in (for example) beer gardens and other drinking establishments in Germany. A bar might be provided
for the manager to do paperwork while keeping an eye on his or her customers, but the casks of ale were kept in a
separate taproom. When the first pubs were built, the main room was the public room with a large serving bar copied
from the gin houses, the idea being to serve the maximum number of people in the shortest possible time. It became
known as the public bar[citation needed]. The other, more private, rooms had no serving bar—they had the beer brought
to them from the public bar. There are a number of pubs in the Midlands or the North which still retain this set
up, but these days the beer is fetched by the customer from the taproom or public bar. One of these is The Vine, known
locally as The Bull and Bladder, in Brierley Hill near Birmingham; another is the Cock at Broom, Bedfordshire, where
a series of small rooms are served drinks and food by waiting staff. In the Manchester district the public bar was known as the
"vault", other rooms being the lounge and snug as usual elsewhere. By the early 1970s there was a tendency to change
to one large drinking room and breweries were eager to invest in interior design and theming. Isambard Kingdom Brunel,
the British engineer and railway builder, introduced the idea of a circular bar into the Swindon station pub in order
that customers were served quickly and did not delay his trains. These island bars became popular as they also allowed
staff to serve customers in several different rooms surrounding the bar. A "beer engine" is a device for pumping
beer, originally manually operated and typically used to dispense beer from a cask or container in a pub's basement
or cellar. The first beer pump known in England is believed to have been invented by John Lofting (b. Netherlands,
1659; d. Great Marlow, Buckinghamshire, 1742), an inventor, manufacturer and merchant of London. The London Gazette of
17 March 1691 published a patent in favour of John Lofting for a fire engine, but remarked upon and recommended another
invention of his, for a beer pump: "Whereas their Majesties have been Graciously Pleased to grant Letters patent
to John Lofting of London Merchant for a New Invented Engine for Extinguishing Fires which said Engine have found
every great encouragement. The said Patentee hath also projected a Very Useful Engine for starting of beer and other
liquors which will deliver from 20 to 30 barrels an hour which are completely fixed with Brass Joints and Screws
at Reasonable Rates. Any Person that hath occasion for the said Engines may apply themselves to the Patentee at his
house near St Thomas Apostle London or to Mr. Nicholas Wall at the Workshoppe near Saddlers Wells at Islington or
to Mr. William Tillcar, Turner, his agent at his house in Woodtree next door to the Sun Tavern London." Strictly
the term refers to the pump itself, which is normally manually operated, though electrically powered and gas powered
pumps are occasionally used. When manually powered, the term "handpump" is often used to refer to both the pump and
the associated handle. After the development of the large London Porter breweries in the 18th century, the trend
grew for pubs to become tied houses which could only sell beer from one brewery (a pub not tied in this way was called
a Free house). The usual arrangement for a tied house was that the pub was owned by the brewery but rented out to
a private individual (landlord) who ran it as a separate business (even though contracted to buy the beer from the
brewery). Another very common arrangement was (and is) for the landlord to own the premises (whether freehold or
leasehold) independently of the brewer, but then to take a mortgage loan from a brewery, either to finance the purchase
of the pub initially, or to refurbish it, and be required as a term of the loan to observe the solus tie. A trend
in the late 20th century was for breweries to run their pubs directly, using managers rather than tenants. Most such
breweries, such as the regional brewery Shepherd Neame in Kent and Young's and Fuller's in London, control hundreds
of pubs in a particular region of the UK, while a few, such as Greene King, are spread nationally. The landlord of
a tied pub may be an employee of the brewery—in which case he/she would be a manager of a managed house, or a self-employed
tenant who has entered into a lease agreement with a brewery, a condition of which is the legal obligation (trade
tie) only to purchase that brewery's beer. The beer selection is mainly limited to beers brewed by that particular
company. The Beer Orders, passed in 1989, were aimed at getting tied houses to offer at least one alternative beer,
known as a guest beer, from another brewery. This law has now been repealed but while in force it dramatically altered
the industry. Some pubs still offer a regularly changing selection of guest beers. Organisations such as Wetherspoons,
Punch Taverns and O'Neill's were formed in the UK in the wake of the Beer Orders. A PubCo is a company involved in
the retailing but not the manufacture of beverages, while a Pub chain may be run either by a PubCo or by a brewery.
Pubs within a chain will usually have items in common, such as fittings, promotions, ambience and range of food and
drink on offer. A pub chain will position itself in the marketplace for a target audience. One company may run several
pub chains aimed at different segments of the market. Pubs for use in a chain are bought and sold in large units,
often from regional breweries which are then closed down. Newly acquired pubs are often renamed by the new owners,
and many people resent the loss of traditional names, especially if their favourite regional beer disappears at the
same time. A brewery tap is the nearest outlet for a brewery's beers. This is usually a room or bar in the brewery
itself, though the name may be applied to the nearest pub. The term is not applied to a brewpub which brews and sells
its beer on the same premises. A "country pub" by tradition is a rural public house. However, the distinctive culture
surrounding country pubs, that of functioning as a social centre for a village and rural community, has been changing
over the last thirty or so years. In the past, many rural pubs provided opportunities for country folk to meet and
exchange (often local) news, while others—especially those away from village centres—existed for the general purpose,
before the advent of motor transport, of serving travellers as coaching inns. In more recent years, however, many
country pubs have either closed down, or have been converted to establishments intent on providing seating facilities
for the consumption of food, rather than a venue for members of the local community meeting and convivially drinking.
Pubs that cater for a niche clientele, such as sports fans or people of certain nationalities are known as theme
pubs. Examples of theme pubs include sports bars, rock pubs, biker pubs, Goth pubs, strip pubs, gay bars, karaoke
bars and Irish pubs. In 1393 King Richard II compelled landlords to erect signs outside their premises. The legislation
stated "Whosoever shall brew ale in the town with intention of selling it must hang out a sign, otherwise he shall
forfeit his ale." This was to make alehouses easily visible to passing inspectors, borough ale tasters, who would
decide the quality of the ale they provided. William Shakespeare's father, John Shakespeare, was one such inspector.
Another important factor was that during the Middle Ages a large proportion of the population would have been illiterate
and so pictures on a sign were more useful than words as a means of identifying a public house. For this reason there
was often no reason to write the establishment's name on the sign and inns opened without a formal written name,
the name being derived later from the illustration on the pub's sign. The earliest signs were often not painted but
consisted, for example, of paraphernalia connected with the brewing process such as bunches of hops or brewing implements,
which were suspended above the door of the pub. In some cases local nicknames, farming terms and puns were used.
Local events were often commemorated in pub signs. Simple natural or religious symbols such as 'The Sun', 'The Star'
and 'The Cross' were incorporated into pub signs, sometimes being adapted to incorporate elements of the heraldry
(e.g. the coat of arms) of the local lords who owned the lands upon which the pub stood. Some pubs have Latin inscriptions.
Other subjects that lent themselves to visual depiction included the name of battles (e.g. Trafalgar), explorers,
local notables, discoveries, sporting heroes and members of the royal family. Some pub signs are in the form of a
pictorial pun or rebus. For example, a pub in Crowborough, East Sussex called The Crow and Gate has an image of a
crow with gates as wings. Most British pubs still have decorated signs hanging over their doors, and these retain
their original function of enabling the identification of the pub. Today's pub signs almost always bear the name
of the pub, both in words and in pictorial representation. The more remote country pubs often have stand-alone signs
directing potential customers to their door. Pub names are used to identify and differentiate each pub. Modern names
are sometimes a marketing ploy or attempt to create "brand awareness", frequently using a comic theme thought to
be memorable, Slug and Lettuce for a pub chain being an example. Interesting origins are not confined to old or traditional
names, however. Names and their origins can be broken up into a relatively small number of categories. As many pubs
are centuries old, many of their early customers were unable to read, and pictorial signs could be readily recognised
when lettering and words could not be read. Pubs often have traditional names. A common name is the "Marquis of Granby".
These pubs were named after John Manners, Marquess of Granby, who was the son of John Manners, 3rd Duke of Rutland
and a general in the 18th century British Army. He showed a great concern for the welfare of his men, and on their
retirement, provided funds for many of them to establish taverns, which were subsequently named after him. All pubs
granted their licence in 1780 were called the Royal George[citation needed], after King George III, and the twentieth
anniversary of his coronation. Many names for pubs that appear nonsensical may have come from corruptions of old
slogans or phrases, such as "The Bag o'Nails" (Bacchanals), "The Goat and Compasses" (God Encompasseth Us), "The
Cat and the Fiddle" (Chaton Fidèle: Faithful Kitten) and "The Bull and Bush", which purportedly celebrates the victory
of Henry VIII at "Boulogne Bouche" or Boulogne-sur-Mer Harbour. Traditional games are played in pubs, ranging from
the well-known darts, skittles, dominoes, cards and bar billiards, to the more obscure Aunt Sally, Nine Men's Morris
and ringing the bull. In the UK betting is legally limited to certain games such as cribbage or dominoes, played
for small stakes. In recent decades the game of pool (both the British and American versions) has increased in popularity, as have other table-based games such as snooker and table football. Increasingly, more modern games
such as video games and slot machines are provided. Pubs hold special events, from tournaments of the aforementioned
games to karaoke nights to pub quizzes. Some play pop music and hip-hop (dance bar), or show football and rugby union
on big screen televisions (sports bar). Shove ha'penny and Bat and trap were also popular in pubs south of London.
Some pubs in the UK also have football teams composed of regular customers. Many of these teams are in leagues that
play matches on Sundays, hence the term "Sunday League Football". Bowling is found in association with pubs in some
parts of the country and the local team will play matches against teams invited from elsewhere on the pub's bowling
green. Pubs may be venues for pub songs and live music. During the 1970s pubs provided an outlet for a number of
bands, such as Kilburn and the High Roads, Dr. Feelgood and The Kursaal Flyers, who formed a musical genre called
Pub rock that was a precursor to Punk music. Many pubs were drinking establishments, and little emphasis was placed
on the serving of food, other than sandwiches and "bar snacks", such as pork scratchings, pickled eggs, salted crisps
and peanuts which helped to increase beer sales. In South East England (especially London) it was common until recent
times for vendors selling cockles, whelks, mussels, and other shellfish to sell to customers during the evening and
at closing time. Many mobile shellfish stalls would set up near pubs, a practice that continues in London's East
End. Otherwise, pickled cockles and mussels may be offered by the pub in jars or packets. In the 1950s some British
pubs would offer "a pie and a pint", with hot individual steak and ale pies made easily on the premises by the proprietor's
wife during the lunchtime opening hours. The ploughman's lunch became popular in the late 1960s. In the late 1960s
"chicken in a basket", a portion of roast chicken with chips, served on a napkin, in a wicker basket became popular
due to its convenience. Quality dropped but variety increased with the introduction of microwave ovens and freezer
food. "Pub grub" expanded to include British food items such as steak and ale pie, shepherd's pie, fish and chips,
bangers and mash, Sunday roast, ploughman's lunch, and pasties. In addition, dishes such as burgers, chicken wings,
lasagne and chilli con carne are often served. Some pubs offer elaborate hot and cold snacks free to customers at
Sunday lunchtimes, to prevent them getting hungry and leaving for their lunch at home. Since the 1990s food has become
a more important part of a pub's trade, and today most pubs serve lunches and dinners at the table in addition to
(or instead of) snacks consumed at the bar. They may have a separate dining room. Some pubs serve meals to a higher
standard, to match good restaurant standards; these are sometimes termed gastropubs. A gastropub concentrates on
quality food. The name is a portmanteau of pub and gastronomy and was coined in 1991 when David Eyre and Mike Belben
took over The Eagle pub in Clerkenwell, London. The concept of a restaurant in a pub reinvigorated both pub culture
and British dining, though has occasionally attracted criticism for potentially removing the character of traditional
pubs. CAMRA maintains a "National Inventory" of historical notability and of architecturally and decoratively notable
pubs. The National Trust owns thirty-six public houses of historic interest including the George Inn, Southwark,
London and The Crown Liquor Saloon, Belfast, Northern Ireland. The highest pub in the United Kingdom is the Tan Hill
Inn, Yorkshire, at 1,732 feet (528 m) above sea level. The remotest pub on the British mainland is The Old Forge
in the village of Inverie, Lochaber, Scotland. There is no road access and it may only be reached by an 18-mile (29
km) walk over mountains, or a 7-mile (11 km) sea crossing. Likewise, The Berney Arms in Norfolk has no road access.
It may be reached by foot or by boat, and by train as it is served by the nearby Berney Arms railway station, which
likewise has no road access and serves no other settlement. A number of pubs claim to be the oldest surviving establishment
in the United Kingdom, although in several cases original buildings have been demolished and replaced on the same
site. Others are ancient buildings that saw uses other than as a pub during their history. Ye Olde Fighting Cocks
in St Albans, Hertfordshire, holds the Guinness World Record for the oldest pub in England, as it is an 11th-century
structure on an 8th-century site. Ye Olde Trip to Jerusalem in Nottingham is claimed to be the "oldest inn in England".
It has a claimed date of 1189, based on the fact it is constructed on the site of the Nottingham Castle brewhouse;
the present building dates from around 1650. Likewise, The Nags Head in Burntwood, Staffordshire only dates back
to the 16th century, but there has been a pub on the site since at least 1086, as it is mentioned in the Domesday
Book. There is archaeological evidence that parts of the foundations of The Old Ferryboat Inn in Holywell may date
to AD 460, and there is evidence of ale being served as early as AD 560. The Bingley Arms, Bardsey, Yorkshire, is
claimed to date to 905 AD. Ye Olde Salutation Inn in Nottingham dates from 1240, although the building served as
a tannery and a private residence before becoming an inn sometime before the English Civil War. The Adam and Eve
in Norwich was first recorded in 1249, when it was an alehouse for the workers constructing nearby Norwich Cathedral.
Ye Olde Man & Scythe in Bolton, Lancashire, is mentioned by name in a charter of 1251, but the current building is
dated 1631. Its cellars are the only surviving part of the older structure. The town of Stalybridge in Cheshire is
thought to have the pubs with both the longest and shortest names in the United Kingdom — The Old 13th Cheshire Rifleman
Corps Inn and the Q Inn. The number of pubs in the UK has declined year on year, at least since 1982. Various reasons
are put forward for this, such as the failure of some establishments to keep up with customer requirements. Others
claim the smoking ban of 2007, intense competition from gastro-pubs, the availability of cheap alcohol in supermarkets
or the general economic climate are either to blame, or are factors in the decline. Changes in demographics may be
an additional factor. The Lost Pubs Project listed 28,095 closed pubs on 21 April 2015, with photographs of many.
In 2015 the rate of pub closures came under the scrutiny of Parliament in the UK, with a promise of legislation to
improve relations between owners and tenants. The highwayman Dick Turpin used the Swan Inn at Woughton-on-the-Green
in Buckinghamshire as his base. In the 1920s John Fothergill (1876–1957) was the innkeeper of the Spread Eagle in
Thame, Oxfordshire, and published his autobiography, An Innkeeper's Diary (London: Chatto & Windus, 1931). During his
idiosyncratic occupancy many famous people came to stay, such as H. G. Wells. United States president George W. Bush
fulfilled his lifetime ambition of visiting a 'genuine British pub' during his November 2003 state visit to the UK
when he had lunch and a pint of non-alcoholic lager (Bush being a teetotaler) with British Prime Minister Tony Blair
at the Dun Cow pub in Sedgefield, County Durham in Blair's home constituency. There were approximately 53,500 public
houses in 2009 in the United Kingdom. This number has been declining every year, so that nearly half of the smaller
villages no longer have a local pub. Many of London's pubs are known to have been used by famous people, but in some
cases, such as the association between Samuel Johnson and Ye Olde Cheshire Cheese, this is speculative, based on
little more than the fact that the person is known to have lived nearby. However, Charles Dickens is known to have
visited the Cheshire Cheese, the Prospect of Whitby, Ye Olde Cock Tavern and many others. Samuel Pepys is also associated
with the Prospect of Whitby and the Cock Tavern. The Fitzroy Tavern is a pub situated at 16 Charlotte Street in the
Fitzrovia district, to which it gives its name. It became famous (or according to others, infamous) during a period
spanning the 1920s to the mid-1950s as a meeting place for many of London's artists, intellectuals and bohemians
such as Dylan Thomas, Augustus John, and George Orwell. Several establishments in Soho, London, have associations
with well-known, post-war literary and artistic figures, including the Pillars of Hercules, The Colony Room and the
Coach and Horses. The Canonbury Tavern, Canonbury, was the prototype for Orwell's ideal English pub, The Moon Under
Water. The Red Lion in Parliament Square is close to the Palace of Westminster and is consequently used by political
journalists and members of parliament. The pub is equipped with a Division bell that summons MPs back to the chamber
when they are required to take part in a vote. The Punch Bowl, Mayfair was at one time jointly owned by Madonna and
Guy Ritchie. The Coleherne public house in Earls Court was a well-known gay pub from the 1950s. It attracted many
well-known patrons, such as Freddie Mercury, Kenny Everett and Rudolph Nureyev. It was used by the serial-killer
Colin Ireland to pick up victims. In 1966 The Blind Beggar in Whitechapel became infamous as the scene of a murder
committed by gangster Ronnie Kray. The Ten Bells is associated with several of the victims of Jack the Ripper. In
1955, Ruth Ellis, the last woman executed in the United Kingdom, shot David Blakely as he emerged from The Magdala
in South Hill Park, Hampstead; the bullet holes can still be seen in the walls outside. It is said that Vladimir
Lenin and a young Joseph Stalin met in the Crown and Anchor pub (now known as The Crown Tavern) on Clerkenwell Green
when the latter was visiting London in 1903. The Angel, Islington was formerly a coaching inn, the first on the route
northwards out of London, where Thomas Paine is believed to have written much of Rights of Man. It was mentioned
by Charles Dickens, became a Lyons Corner House, and is now a Co-operative Bank. The Eagle and Child and the Lamb
and Flag, Oxford, were regular meeting places of the Inklings, a writers' group which included J. R. R. Tolkien and
C. S. Lewis. The Eagle in Cambridge is where Francis Crick interrupted patrons' lunchtime on 28 February 1953 to
announce that he and James Watson had "discovered the secret of life" after they had come up with their proposal
for the structure of DNA. The anecdote is related in Watson's book The Double Helix and commemorated with a blue
plaque on the outside wall. The major soap operas on British television each feature a pub, and these pubs have become
household names. The Rovers Return is the pub in Coronation Street, the British soap broadcast on ITV. The Queen
Vic (short for the Queen Victoria) is the pub in EastEnders, the major soap on BBC One; the Woolpack is the pub in ITV's
Emmerdale. The sets of each of the three major television soap operas have been visited by some of the members of
the royal family, including Queen Elizabeth II. The centrepiece of each visit was a trip into the Rovers, the Queen
Vic, or the Woolpack to be offered a drink. The Bull in the BBC Radio 4 soap opera The Archers is an important meeting
point. Although "British" pubs found outside of Britain and its former colonies are often themed bars owing little
to the original British pub, a number of "true" pubs may be found around the world. In Denmark—a country, like Britain,
with a long tradition of brewing—a number of pubs have opened which eschew "theming", and which instead focus on
the business of providing carefully conditioned beer, often independent of any particular brewery or chain, in an
environment which would not be unfamiliar to a British pub-goer. Some import British cask ale, rather than beer in
kegs, to provide the full British real ale experience to their customers. This newly established Danish interest
in British cask beer and the British pub tradition is reflected by the fact that some 56 British cask beers were
available at the 2008 European Beer Festival in Copenhagen, which was attended by more than 20,000 people. In Ireland,
pubs are known for their atmosphere or "craic". In Irish, a pub is referred to as teach tábhairne ("tavernhouse")
or teach óil ("drinkinghouse"). Live music, either sessions of traditional Irish music or varieties of modern popular
music, is frequently featured in the pubs of Ireland. Pubs in Northern Ireland are largely identical to their counterparts
in the Republic of Ireland except for the lack of spirit grocers. A side effect of "The Troubles" was that the lack
of a tourist industry meant that a higher proportion of traditional bars have survived the wholesale refitting of
Irish pub interiors in the 'English style' in the 1950s and 1960s. New Zealand sports a number of Irish pubs. The
most popular term in English-speaking Canada used for a drinking establishment was "tavern", until the 1970s when
the term "bar" became widespread as in the United States. In the 1800s the term used was "public house" as in England
but "pub culture" did not spread to Canada. A fake "English looking" pub trend started in the 1990s, built into existing
storefronts, like regular bars. Most universities in Canada have campus pubs which are central to student life, as
it would be bad form just to serve alcohol to students without providing some type of basic food. Often these pubs
are run by the students' union. The gastropub concept has caught on, as traditional British influences are to be
found in many Canadian dishes. There are now pubs in the large cities of Canada that cater to anyone interested in
a "pub" type drinking environment.[citation needed]
An Internet service provider (ISP) is an organization that provides services for accessing and using the Internet. Internet
service providers may be organized in various forms, such as commercial, community-owned, non-profit, or otherwise
privately owned. Internet services typically provided by ISPs include Internet access, Internet transit, domain name
registration, web hosting, Usenet service, and colocation. The Internet was developed as a network between government
research laboratories and participating departments of universities. By the late 1980s, a process was set in place
towards public, commercial use of the Internet. The remaining restrictions were removed by 1995, 4 years after the
introduction of the World Wide Web. In 1989, the first ISPs were established in Australia and the United States.
In Brookline, Massachusetts, The World became the first commercial ISP in the US. Its first customer was served in
November 1989. On 23 April 2014, the U.S. Federal Communications Commission (FCC) was reported to be considering
a new rule that will permit ISPs to offer content providers a faster track to send content, thus reversing their
earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according
to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On 15 May 2014, the FCC decided
to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising
net neutrality; and second, reclassify broadband as a telecommunication service, thereby preserving net neutrality.
On 10 November 2014, President Barack Obama recommended that the FCC reclassify broadband Internet service as a telecommunications
service in order to preserve net neutrality. On 16 January 2015, Republicans presented legislation, in the form of
a U.S. Congress H.R. discussion draft bill, that makes concessions to net neutrality but prohibits the FCC from accomplishing
the goal or enacting any further regulation affecting Internet service providers. On 31 January 2015, AP News reported
that the FCC will present the notion of applying ("with some caveats") Title II (common carrier) of the Communications
Act of 1934 to the internet in a vote expected on 26 February 2015. Adoption of this notion would reclassify internet
service from an information service to a telecommunications service and, according to Tom Wheeler, chairman of the FCC,
ensure net neutrality. The FCC is expected to enforce net neutrality in its vote, according to the New York Times.
On 26 February 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC Chairman, Tom Wheeler,
commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech.
They both stand for the same concept." On 12 March 2015, the FCC released the specific details of the net neutrality
rules. On 13 April 2015, the FCC published the final rule on its new "Net Neutrality" regulations. ISPs provide Internet
access, employing a range of technologies to connect users to their network. Available technologies have ranged from
computer modems with acoustic couplers to telephone lines, to television cable (CATV), wireless Ethernet (wi-fi),
and fiber optics. For users and small businesses, traditional options include copper wires to provide dial-up, DSL,
typically asymmetric digital subscriber line (ADSL), cable modem or Integrated Services Digital Network (ISDN) (typically
basic rate interface). Using fiber-optics to end users is called Fiber To The Home or similar names. Customers with more demanding requirements (such as medium-to-large businesses, or other ISPs) can use higher-speed DSL (such
as single-pair high-speed digital subscriber line), Ethernet, metropolitan Ethernet, gigabit Ethernet, Frame Relay,
ISDN Primary Rate Interface, ATM (Asynchronous Transfer Mode) and synchronous optical networking (SONET). A mailbox
provider is an organization that provides services for hosting electronic mail domains with access to storage for
mail boxes. It provides email servers to send, receive, accept, and store email for end users or other organizations.
Many mailbox providers are also access providers, while others are not (e.g., Yahoo! Mail, Outlook.com, Gmail, AOL
Mail, Pobox). The definition given in RFC 6650 covers email hosting services, as well as the relevant departments
of companies, universities, organizations, groups, and individuals that manage their mail servers themselves. The
task is typically accomplished by implementing Simple Mail Transfer Protocol (SMTP) and possibly providing access
to messages through Internet Message Access Protocol (IMAP), the Post Office Protocol, Webmail, or a proprietary
protocol. Internet hosting services provide email, web-hosting, or online storage services. Other services include
virtual server, cloud services, or physical server operation. Just as their customers pay them for Internet access,
ISPs themselves pay upstream ISPs for Internet access. An upstream ISP usually has a larger network than the contracting
ISP or is able to provide the contracting ISP with access to parts of the Internet the contracting ISP by itself
has no access to. In the simplest case, a single connection is established to an upstream ISP and is used to transmit
data to or from areas of the Internet beyond the home network; this mode of interconnection is often cascaded multiple
times until reaching a tier 1 carrier. In reality, the situation is often more complex. ISPs with more than one point
of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of
multiple upstream ISPs and may have connections to each one of them at one or more point of presence. Transit ISPs
provide large amounts of bandwidth for connecting hosting ISPs and access ISPs. A virtual ISP (VISP) is an operation
that purchases services from another ISP, sometimes called a wholesale ISP in this context, which allows the VISP's
customers to access the Internet using services and infrastructure owned and operated by the wholesale ISP. VISPs
resemble mobile virtual network operators and competitive local exchange carriers for voice communications. Free
ISPs are Internet service providers that provide service free of charge. Many free ISPs display advertisements while
the user is connected; like commercial television, in a sense they are selling the user's attention to the advertiser.
Other free ISPs, sometimes called freenets, are run on a nonprofit basis, usually with volunteer staff.[citation
needed] A wireless Internet service provider (WISP) is an Internet service provider with a network based on wireless
networking. Technology may include commonplace Wi-Fi wireless mesh networking, or proprietary equipment designed
to operate over open 900 MHz, 2.4 GHz, 4.9, 5.2, 5.4, 5.7, and 5.8 GHz bands or licensed frequencies such as 2.5
GHz (EBS/BRS), 3.65 GHz (NN) and in the UHF band (including the MMDS frequency band) and LMDS.[citation needed] ISPs
may engage in peering, where multiple ISPs interconnect at peering points or Internet exchange points (IXs), allowing
routing of data between each network, without charging one another for the data transmitted—data that would otherwise
have passed through a third upstream ISP, incurring charges from the upstream ISP. Network hardware, software and
specifications, as well as the expertise of network management personnel are important in ensuring that data follows
the most efficient route, and upstream connections work reliably. A tradeoff between cost and efficiency is possible.[citation
needed] Internet service providers in many countries are legally required (e.g., via Communications Assistance for
Law Enforcement Act (CALEA) in the U.S.) to allow law enforcement agencies to monitor some or all of the information
transmitted by the ISP. Furthermore, in some countries ISPs are subject to monitoring by intelligence agencies. In
the U.S., a controversial National Security Agency program known as PRISM provides for broad monitoring of Internet
users' traffic and has raised concerns about potential violations of the privacy protections in the Fourth Amendment
to the United States Constitution. Modern ISPs integrate a wide array of surveillance and packet sniffing equipment
into their networks, which then feeds the data to law-enforcement/intelligence networks (such as DCSNet in the United
States, or SORM in Russia) allowing monitoring of Internet traffic in real time.
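The mail-handling roles described above (SMTP for submitting and transferring a message, IMAP or POP for retrieving it from the mailbox store) can be sketched with Python's standard library. The server names, addresses, and credentials below are hypothetical placeholders rather than real services, so the actual network calls are shown only as comments:

```python
# Sketch of the protocols a mailbox provider implements.
# All hostnames and credentials are hypothetical placeholders.
from email.message import EmailMessage

# An email message as it would be handed to an SMTP server for delivery.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.net"
msg["Subject"] = "Test"
msg.set_content("Hello from an SMTP sketch.")

# Submission to the sender's mail server would look like
# (not executed here, since no real server exists):
#   import smtplib
#   with smtplib.SMTP("smtp.example.com", 587) as s:
#       s.starttls()
#       s.login("alice", "app-password")
#       s.send_message(msg)

# Retrieval from the recipient's mailbox via IMAP would look like:
#   import imaplib
#   with imaplib.IMAP4_SSL("imap.example.net") as m:
#       m.login("bob", "app-password")
#       m.select("INBOX")

print(msg["Subject"])
```

The division of labour mirrors the text: SMTP moves the message between servers, while IMAP (or POP) gives the end user access to the stored mailbox.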
Comics are a medium used to express ideas by images, often combined with text or other visual information. Comics frequently
takes the form of juxtaposed sequences of panels of images. Often textual devices such as speech balloons, captions,
and onomatopoeia indicate dialogue, narration, sound effects, or other information. Size and arrangement of panels
contribute to narrative pacing. Cartooning and similar forms of illustration are the most common image-making means
in comics; fumetti is a form which uses photographic images. Common forms of comics include comic strips, editorial
and gag cartoons, and comic books. Since the late 20th century, bound volumes such as graphic novels, comics albums,
and tankōbon have become increasingly common, and online webcomics have proliferated in the 21st century. The history
of comics has followed different paths in different cultures. Scholars have posited a pre-history as far back as
the Lascaux cave paintings. By the mid-20th century, comics flourished particularly in the United States, western
Europe (especially in France and Belgium), and Japan. The history of European comics is often traced to Rodolphe
Töpffer's cartoon strips of the 1830s, and became popular following the success in the 1930s of strips and books
such as The Adventures of Tintin. American comics emerged as a mass medium in the early 20th century with the advent
of newspaper comic strips; magazine-style comic books followed in the 1930s, in which the superhero genre became
prominent after Superman appeared in 1938. Histories of Japanese comics and cartooning (manga) propose origins as
early as the 12th century. Modern comic strips emerged in Japan in the early 20th century, and the output of comics
magazines and books rapidly expanded in the post-World War II era with the popularity of cartoonists such as Osamu
Tezuka. Comics has had a lowbrow reputation for much of its history, but towards the end of the 20th century began
to find greater acceptance with the public and in academia. The English term comics is used as a singular noun when
it refers to the medium and a plural when referring to particular instances, such as individual strips or comic books.
Though the term derives from the humorous (or comic) work that predominated in early American newspaper comic strips,
it has become standard also for non-humorous works. It is common in English to refer to the comics of different cultures
by the terms used in their original languages, such as manga for Japanese comics, or bandes dessinées for French-language
comics. There is no consensus amongst theorists and historians on a definition of comics; some emphasize the combination
of images and text, some sequentiality or other image relations, and others historical aspects such as mass reproduction
or the use of recurring characters. The increasing cross-pollination of concepts from different comics cultures and
eras has further made definition difficult. The European, American, and Japanese comics traditions have followed
different paths. Europeans have seen their tradition as beginning with the Swiss Rodolphe Töpffer from as early as
1827 and Americans have seen the origin of theirs in Richard F. Outcault's 1890s newspaper strip The Yellow Kid,
though many Americans have come to recognize Töpffer's precedence. Japan had a long prehistory of satirical cartoons
and comics leading up to the World War II era. The ukiyo-e artist Hokusai popularized the Japanese term for comics
and cartooning, manga, in the early 19th century. In the post-war era modern Japanese comics began to flourish when
Osamu Tezuka produced a prolific body of work. Towards the close of the 20th century, these three traditions converged
in a trend towards book-length comics: the comics album in Europe, the tankōbon[a] in Japan, and the graphic novel
in the English-speaking countries. Outside of these genealogies, comics theorists and historians have seen precedents
for comics in the Lascaux cave paintings in France (some of which appear to be chronological sequences of images),
Egyptian hieroglyphs, Trajan's Column in Rome, the 11th-century Norman Bayeux Tapestry, the 1370 bois Protat woodcut,
the 15th-century Ars moriendi and block books, Michelangelo's The Last Judgment in the Sistine Chapel, and William
Hogarth's 18th-century sequential engravings, amongst others.[b] Illustrated humour periodicals were popular in 19th-century
Britain, the earliest of which was the short-lived The Glasgow Looking Glass in 1825. The most popular was Punch,
which popularized the term cartoon for its humorous caricatures. On occasion the cartoons in these magazines appeared
in sequences; the character Ally Sloper featured in the earliest serialized comic strip when the character began
to feature in its own weekly magazine in 1884. American comics developed out of such magazines as Puck, Judge, and
Life. The success of illustrated humour supplements in the New York World and later the New York American, particularly
Outcault's The Yellow Kid, led to the development of newspaper comic strips. Early Sunday strips were full-page and
often in colour. Between 1896 and 1901 cartoonists experimented with sequentiality, movement, and speech balloons.
Shorter, black-and-white daily strips began to appear early in the 20th century, and became established in newspapers
after the success in 1907 of Bud Fisher's Mutt and Jeff. Humour strips predominated at first, and in the 1920s and
1930s strips with continuing stories in genres such as adventure and drama also became popular. Thin periodicals
called comic books appeared in the 1930s, at first reprinting newspaper comic strips; by the end of the decade, original
content began to dominate. The success in 1938 of Action Comics and its lead hero Superman marked the beginning of
the Golden Age of Comic Books, in which the superhero genre was prominent. The popularity of superhero comic books
declined following World War II, while comic book sales continued to increase as other genres proliferated, such
as romance, westerns, crime, horror, and humour. Following a sales peak in the early 1950s, the content of comic
books (particularly crime and horror) was subjected to scrutiny from parent groups and government agencies, which
culminated in Senate hearings that led to the establishment of the Comics Code Authority self-censoring body. The
Code has been blamed for stunting the growth of American comics and maintaining its low status in American society
for much of the remainder of the century. Superheroes re-established themselves as the most prominent comic book
genre by the early 1960s. Underground comix challenged the Code and readers with adult, countercultural content in
the late 1960s and early 1970s. The underground gave birth to the alternative comics movement in the 1980s and its
mature, often experimental content in non-superhero genres. From the 1980s, mainstream sensibilities were reasserted
and serialization became less common as the number of comics magazines decreased and many comics began to be published
directly as albums. Smaller publishers such as L'Association that published longer works in non-traditional formats
by auteur-istic creators also became common. Since the 1990s, mergers resulted in fewer large publishers, while smaller
publishers proliferated. Sales overall continued to grow despite the trend towards a shrinking print market. Japanese
comics and cartooning (manga),[g] have a history that has been seen as far back as the anthropomorphic characters
in the 12th-to-13th-century Chōjū-jinbutsu-giga, 17th-century toba-e and kibyōshi picture books, and woodblock prints
such as ukiyo-e which were popular between the 17th and 20th centuries. The kibyōshi contained examples of sequential
images, movement lines, and sound effects. Illustrated magazines for Western expatriates introduced Western-style
satirical cartoons to Japan in the late 19th century. New publications in both the Western and Japanese styles became
popular, and at the end of the 1890s, American-style newspaper comics supplements began to appear in Japan, as well
as some American comic strips. 1900 saw the debut of the Jiji Manga in the Jiji Shinpō newspaper—the first use of
the word "manga" in its modern sense, and where, in 1902, Rakuten Kitazawa began the first modern Japanese comic
strip. By the 1930s, comic strips were serialized in large-circulation monthly girls' and boys' magazines and collected
into hardback volumes. The modern era of comics in Japan began after World War II, propelled by the success of the
serialized comics of the prolific Osamu Tezuka and the comic strip Sazae-san. Genres and audiences diversified over
the following decades. Stories are usually first serialized in magazines which are often hundreds of pages thick
and may contain over a dozen stories; they are later compiled in tankōbon-format books. At the turn of the 20th and 21st centuries, nearly a quarter of all printed material in Japan was comics. Translations became extremely popular in
foreign markets—in some cases equaling or surpassing the sales of domestic comics. Comic strips are generally short,
multipanel comics that traditionally most commonly appeared in newspapers. In the US, daily strips have normally
occupied a single tier, while Sunday strips have been given multiple tiers. In the early 20th century, daily strips
were typically in black-and-white and Sundays were usually in colour and often occupied a full page. Formats of specialized comics periodicals vary greatly in different cultures. Comic books, primarily an American format, are thin
periodicals usually published in colour. European and Japanese comics are frequently serialized in magazines—monthly
or weekly in Europe, and usually black-and-white and weekly in Japan. Japanese comics magazines typically run to hundreds
of pages. Book-length comics take different forms in different cultures. European comics albums are most commonly
printed in A4-size colour volumes. In English-speaking countries, bound volumes of comics are called graphic novels
and are available in various formats. Despite incorporating the term "novel"—a term normally associated with fiction—"graphic
novel" also refers to non-fiction and collections of short works. Japanese comics are collected in volumes called
tankōbon following magazine serialization. Gag and editorial cartoons usually consist of a single panel, often incorporating
a caption or speech balloon. Definitions of comics which emphasize sequence usually exclude gag, editorial, and other
single-panel cartoons; they can be included in definitions that emphasize the combination of word and image. Gag
cartoons first began to proliferate in broadsheets published in Europe in the 18th and 19th centuries, and the term
"cartoon"[h] was first used to describe them in 1843 in the British humour magazine Punch. Comics in the US has had
a lowbrow reputation stemming from its roots in mass culture; cultural elites sometimes saw popular culture as threatening
culture and society. In the latter half of the 20th century, popular culture won greater acceptance, and the lines
between high and low culture began to blur. Comics nevertheless continued to be stigmatized, as the medium was seen
as entertainment for children and illiterates. The graphic novel—book-length comics—began to gain attention after
Will Eisner popularized the term with his book A Contract with God (1978). The term became widely known to the public after the commercial success of Maus, Watchmen, and The Dark Knight Returns in the mid-1980s. In the 21st
century graphic novels became established in mainstream bookstores and libraries and webcomics became common. The
francophone Swiss Rodolphe Töpffer produced comic strips beginning in 1827, and published theories behind the form.
Cartoons appeared widely in newspapers and magazines from the 19th century. The success of Zig et Puce in 1925 popularized
the use of speech balloons in European comics, after which Franco-Belgian comics began to dominate. The Adventures
of Tintin, with its signature clear line style, was first serialized in newspaper comics supplements beginning in
1929, and became an icon of Franco-Belgian comics. Following the success of Le Journal de Mickey (1934–44), dedicated
comics magazines and full-colour comics albums became the primary outlet for comics in the mid-20th century. As in
the US, at the time comics were seen as infantile and a threat to culture and literacy; commentators stated that
"none bear up to the slightest serious analysis",[c] and that comics were "the sabotage of all art and all literature".[d]
In the 1960s, the term bandes dessinées ("drawn strips") came into wide use in French to denote the medium. Cartoonists
began creating comics for mature audiences, and the term "Ninth Art"[e] was coined, as comics began to attract public
and academic attention as an artform. A group including René Goscinny and Albert Uderzo founded the magazine Pilote
in 1959 to give artists greater freedom over their work. Goscinny and Uderzo's The Adventures of Asterix appeared
in it and went on to become the best-selling French-language comics series. From 1960, the satirical and taboo-breaking
Hara-Kiri defied censorship laws in the countercultural spirit that led to the May 1968 events. Frustration with
censorship and editorial interference led a group of Pilote cartoonists to found the adults-only L'Écho des savanes
in 1972. Adult-oriented and experimental comics flourished in the 1970s, such as in the experimental science fiction
of Mœbius and others in Métal hurlant; even mainstream publishers took to publishing prestige-format adult comics.
Historical narratives of manga tend to focus either on its recent, post-WWII history, or on attempts to demonstrate
deep roots in the past, such as to the Chōjū-jinbutsu-giga picture scroll of the 12th and 13th centuries, or the
early 19th-century Hokusai Manga. The first historical overview of Japanese comics was Seiki Hosokibara's Nihon Manga-Shi[i]
in 1924. Early post-war Japanese criticism was mostly of a left-wing political nature until the 1986 publication
of Tomofusa Kure's Modern Manga: The Complete Picture,[j] which de-emphasized politics in favour of formal aspects,
such as structure and a "grammar" of comics. The field of manga studies increased rapidly, with numerous books on
the subject appearing in the 1990s. Formal theories of manga have focused on developing a "manga expression theory",[k]
with emphasis on spatial relationships in the structure of images on the page, distinguishing the medium from film
or literature, in which the flow of time is the basic organizing element. Comics studies courses have proliferated
at Japanese universities, and the Japan Society for Studies in Cartoon and Comics[l] was established in 2001 to
promote comics scholarship. The publication of Frederik L. Schodt's Manga! Manga! The World of Japanese Comics in
1983 led to the spread of use of the word manga outside Japan to mean "Japanese comics" or "Japanese-style comics".
Coulton Waugh attempted the first comprehensive history of American comics with The Comics (1947). Will Eisner's
Comics and Sequential Art (1985) and Scott McCloud's Understanding Comics (1993) were early attempts in English to
formalize the study of comics. David Carrier's The Aesthetics of Comics (2000) was the first full-length treatment
of comics from a philosophical perspective. Prominent American attempts at definitions of comics include Eisner's,
McCloud's, and Harvey's. Eisner described what he called "sequential art" as "the arrangement of pictures or images
and words to narrate a story or dramatize an idea"; Scott McCloud defined comics as "juxtaposed pictorial and other
images in deliberate sequence, intended to convey information and/or to produce an aesthetic response in the viewer",
a strictly formal definition which detached comics from its historical and cultural trappings. R. C. Harvey defined
comics as "pictorial narratives or expositions in which words (often lettered into the picture area within speech
balloons) usually contribute to the meaning of the pictures and vice versa". Each definition has had its detractors.
Harvey saw McCloud's definition as excluding single-panel cartoons, and objected to McCloud's de-emphasizing verbal
elements, insisting "the essential characteristic of comics is the incorporation of verbal content". Aaron Meskin
saw McCloud's theories as an artificial attempt to legitimize the place of comics in art history. Cross-cultural
study of comics is complicated by the great difference in meaning and scope of the words for "comics" in different
languages. The French term for comics, bandes dessinées ("drawn strips"), emphasizes the juxtaposition of drawn images
as a defining factor, which can imply the exclusion of even photographic comics. The term manga is used in Japanese
to indicate all forms of comics, cartooning, and caricature. Webcomics are comics that are available on the internet.
They are able to reach large audiences, and new readers usually can access archived installments. Webcomics can make
use of an infinite canvas—meaning they are not constrained by size or dimensions of a page. Some consider storyboards
and wordless novels to be comics. Film studios, especially in animation, often use sequences of images as guides
for film sequences. These storyboards are not intended as an end product and are rarely seen by the public. Wordless
novels are books which use sequences of captionless images to deliver a narrative. Similar to the problems of defining
literature and film, no consensus has been reached on a definition of the comics medium, and attempted definitions
and descriptions have fallen prey to numerous exceptions. Theorists such as Töpffer, R. C. Harvey, Will Eisner, David
Carrier, Alain Rey, and Lawrence Grove emphasize the combination of text and images, though there are prominent examples
of pantomime comics throughout its history. Other critics, such as Thierry Groensteen and Scott McCloud, have emphasized
the primacy of sequences of images. Towards the close of the 20th century, different cultures' discoveries of each
other's comics traditions, the rediscovery of forgotten early comics forms, and the rise of new forms made defining
comics a more complicated task. European comics studies began with Töpffer's theories of his own work in the 1840s,
which emphasized panel transitions and the visual–verbal combination. No further progress was made until the 1970s.
Pierre Fresnault-Deruelle then took a semiotics approach to the study of comics, analyzing text–image relations,
page-level image relations, and image discontinuities, or what Scott McCloud later dubbed "closure". In 1987, Henri
Vanlier introduced the term multicadre, or "multiframe", to refer to the comics page as a semantic unit. By the
1990s, theorists such as Benoît Peeters and Thierry Groensteen turned attention to artists' poïetic creative choices.
Thierry Smolderen and Harry Morgan have held relativistic views of the definition of comics, seeing it as a medium that has taken various, equally valid forms over its history. Morgan sees comics as a subset of "les littératures dessinées" (or
"drawn literatures"). French theory has come to give special attention to the page, in distinction from American
theories such as McCloud's which focus on panel-to-panel transitions. Since the mid-2000s, Neil Cohn has begun analyzing
how comics are understood using tools from cognitive science, extending beyond theory by using actual psychological
and neuroscience experiments. This work has argued that sequential images and page layouts both use separate rule-bound
"grammars" to be understood that extend beyond panel-to-panel transitions and categorical distinctions of types of
layouts, and that the brain's comprehension of comics is similar to comprehending other domains, such as language
and music. Many cultures have taken their words for comics from English, including Russian (Russian: Комикс, komiks)
and German (comic). Similarly, the Chinese term manhua and the Korean manhwa derive from the Chinese characters with
which the Japanese term manga is written. The English term comics derives from the humorous (or "comic") work which
predominated in early American newspaper comic strips; usage of the term has become standard for non-humorous works
as well. The term "comic book" has a similarly confusing history: they are most often not humorous; nor are they
regular books, but rather periodicals. It is common in English to refer to the comics of different cultures by the
terms used in their original languages, such as manga for Japanese comics, or bandes dessinées for French-language
Franco-Belgian comics. While comics are often the work of a single creator, the labour of making them is frequently
divided between a number of specialists. There may be separate writers and artists, and artists may specialize in
parts of the artwork such as characters or backgrounds, as is common in Japan. Particularly in American superhero
comic books, the art may be divided between a penciller, who lays out the artwork in pencil; an inker, who finishes
the artwork in ink; a colourist; and a letterer, who adds the captions and speech balloons. Panels are individual
images containing a segment of action, often surrounded by a border. Prime moments in a narrative are broken down
into panels via a process called encapsulation. The reader puts the pieces together via the process of closure by
using background knowledge and an understanding of panel relations to combine panels mentally into events. The size,
shape, and arrangement of panels each affect the timing and pacing of the narrative. The contents of a panel may
be asynchronous, with events depicted in the same image not necessarily occurring at the same time. Text is frequently
incorporated into comics via speech balloons, captions, and sound effects. Speech balloons indicate dialogue (or
thought, in the case of thought balloons), with tails pointing at their respective speakers. Captions can give voice
to a narrator, convey characters' dialogue or thoughts, or indicate place or time. Speech balloons themselves are
strongly associated with comics, such that the addition of one to an image is sufficient to turn the image into comics.
Sound effects mimic non-vocal sounds textually using onomatopoeia sound-words. Cartooning is most frequently used
in making comics, traditionally using ink (especially India ink) with dip pens or ink brushes; mixed media and digital
technology have become common. Cartooning techniques such as motion lines and abstract symbols are often employed.
The term comics refers to the comics medium when used as an uncountable noun and thus takes the singular: "comics
is a medium" rather than "comics are a medium". When comic appears as a countable noun, it refers to instances of
the medium, such as individual comic strips or comic books: "Tom's comics are in the basement."
Saint Helena (/ˌseɪnt həˈliːnə/ SAYNT-hə-LEE-nə) is a volcanic tropical island in the South Atlantic Ocean, 4,000 kilometres
(2,500 mi) east of Rio de Janeiro and 1,950 kilometres (1,210 mi) west of the Cunene River, which marks the border
between Namibia and Angola in southwestern Africa. It is part of the British Overseas Territory of Saint Helena,
Ascension and Tristan da Cunha. Saint Helena measures about 16 by 8 kilometres (10 by 5 mi) and has a population
of 4,255 (2008 census). It was named after Saint Helena of Constantinople. The island was uninhabited when discovered
by the Portuguese in 1502. One of the most remote islands in the world, it was for centuries an important stopover
for ships sailing to Europe from Asia and South Africa. Napoleon was imprisoned there in exile by the British, as
were Dinuzulu kaCetshwayo (for leading a Zulu army against British rule) and more than 5,000 Boers taken prisoner
during the Second Boer War. Between 1791 and 1833, Saint Helena became the site of a series of experiments in conservation,
reforestation and attempts to boost rainfall artificially. This environmental intervention was closely linked to
the conceptualisation of the processes of environmental change and helped establish the roots of environmentalism.
Most historical accounts state that the island was discovered on 21 May 1502 by the Galician navigator João da Nova
sailing at the service of Portugal, and that he named it "Santa Helena" after Helena of Constantinople. Another theory
holds that the island found by da Nova was actually Tristan da Cunha, 2,430 kilometres (1,510 mi) to the south, and
that Saint Helena was discovered by some of the ships attached to the squadron of Estêvão da Gama expedition on 30
July 1503 (as reported in the account of clerk Thomé Lopes). However, a paper published in 2015 reviewed the discovery
date and dismissed 18 August as too late for da Nova to make a discovery and then return to Lisbon by 11 September 1502, whether he sailed from St Helena or Tristan da Cunha. It demonstrated that 21 May is probably a Protestant rather than a Catholic or Orthodox feast-day, first quoted in 1596 by Jan Huyghen van Linschoten, who was probably mistaken because the island was discovered several decades before the Reformation and the start of Protestantism. The alternative
discovery date of 3 May, the Catholic feast-day for the finding of the True Cross by Saint Helena in Jerusalem, quoted
by Odoardo Duarte Lopes and Sir Thomas Herbert is suggested as being historically more credible. The Portuguese found
the island uninhabited, with an abundance of trees and fresh water. They imported livestock, fruit trees and vegetables,
and built a chapel and one or two houses. Though they formed no permanent settlement, the island was an important
rendezvous point and source of food for ships travelling from Asia to Europe, and frequently sick mariners were left
on the island to recover, before taking passage on the next ship to call on the island. Englishman Sir Francis Drake
probably located the island on the final leg of his circumnavigation of the world (1577–1580). Further visits by
other English explorers followed, and, once Saint Helena’s location was more widely known, English ships of war began
to lie in wait in the area to attack Portuguese India carracks on their way home. In developing their Far East trade,
the Dutch also began to frequent the island. The Portuguese and Spanish soon gave up regularly calling at the island,
partly because they used ports along the West African coast, but also because of attacks on their shipping, the desecration
of their chapel and religious icons, destruction of their livestock and destruction of plantations by Dutch and English
sailors. The Dutch Republic formally made claim to Saint Helena in 1633, although there is no evidence that they
ever occupied, colonised or fortified it. By 1651, the Dutch had mainly abandoned the island in favour of their colony
at the Cape of Good Hope. In 1657, Oliver Cromwell granted the English East India Company a charter to govern Saint
Helena and the following year the company decided to fortify the island and colonise it with planters. The first
governor, Captain John Dutton, arrived in 1659, making Saint Helena one of Britain's oldest colonies outside North
America and the Caribbean. A fort and houses were built. After the Restoration of the English monarchy in 1660, the
East India Company received a royal charter giving it the sole right to fortify and colonise the island. The fort
was renamed James Fort and the town Jamestown, in honour of the Duke of York, later James II of England. Between
January and May 1673, the Dutch East India Company forcibly took the island, before English reinforcements restored
English East India Company control. The company experienced difficulty attracting new immigrants, and sentiments
of unrest and rebellion fomented among the inhabitants. Ecological problems, including deforestation, soil erosion,
vermin and drought, led Governor Isaac Pyke to suggest in 1715 that the population be moved to Mauritius, but this
was not acted upon and the company continued to subsidise the community because of the island's strategic location.
A census in 1723 recorded 1,110 people, including 610 slaves. 18th-century governors tried to tackle the island's
problems by implementing tree plantation, improving fortifications, eliminating corruption, building a hospital,
tackling the neglect of crops and livestock, controlling the consumption of alcohol and introducing legal reforms.
From about 1770, the island enjoyed a lengthy period of prosperity. Captain James Cook visited the island in 1775
on the final leg of his second circumnavigation of the world. St. James' Church was erected in Jamestown in 1774
and in 1791–92 Plantation House was built, and has since been the official residence of the Governor. On leaving
the University of Oxford in 1676, Edmond Halley visited Saint Helena and set up an astronomical observatory with
a 7.3-metre-long (24 ft) aerial telescope with the intention of studying stars from the Southern Hemisphere. The
site of this telescope is near Saint Matthew's Church in Hutt's Gate, in the Longwood district. The 680-metre (2,230 ft) hill there is named Halley's Mount in his honour. Throughout this period, Saint Helena was an important
port of call of the East India Company. East Indiamen would stop there on the return leg of their voyages to British
India and China. At Saint Helena ships could replenish supplies of water and provisions, and during war time, form
convoys that would sail under the protection of vessels of the Royal Navy. Captain James Cook's vessel HMS Endeavour
anchored and resupplied off the coast of St Helena in May 1771, on her return from the European discovery of the
east coast of Australia and rediscovery of New Zealand. The importation of slaves was made illegal in 1792. Governor
Robert Patton (1802–1807) recommended that the company import Chinese labour to supplement the rural workforce. The
coolie labourers arrived in 1810, and their numbers reached 600 by 1818. Many were allowed to stay, and their descendants
became integrated into the population. An 1814 census recorded 3,507 people on the island. In 1815, the British government
selected Saint Helena as the place of detention of Napoleon Bonaparte. He was taken to the island in October 1815.
Napoleon stayed at the Briars pavilion on the grounds of the Balcombe family's home until his permanent residence,
Longwood House, was completed in December 1815. Napoleon died there on 5 May 1821. After Napoleon's death, the thousands
of temporary visitors were soon withdrawn and the East India Company resumed full control of Saint Helena. Between
1815 and 1830, the EIC made available to the government of the island the packet schooner St Helena, which made multiple
trips per year between the island and the Cape carrying passengers both ways, and supplies of wine and provisions
back to the island. Owing to Napoleon's praise of Saint Helena’s coffee during his exile on the island, the product
enjoyed a brief popularity in Paris in the years after his death. Although the importation of slaves to St Helena
had been banned in 1792, the phased emancipation of over 800 resident slaves did not take place until 1827, which
was still some six years before the British Parliament passed legislation to ban slavery in the colonies. Under the
provisions of the 1833 India Act, control of Saint Helena was passed from the East India Company to the British Crown,
becoming a crown colony. Subsequent administrative cost-cutting triggered the start of a long-term population decline
whereby those who could afford to do so tended to leave the island for better opportunities elsewhere. The latter
half of the 19th century saw the advent of steam ships not reliant on trade winds, as well as the diversion of Far
East trade away from the traditional South Atlantic shipping lanes to a route via the Red Sea (which, prior to the
building of the Suez Canal, involved a short overland section). These factors contributed to a decline in the number
of ships calling at the island from 1,100 in 1855 to only 288 in 1889. In 1840, a British naval station established
to suppress the African slave trade was based on the island, and between 1840 and 1849 over 15,000 freed slaves,
known as "Liberated Africans", were landed there. In 1858, the French emperor Napoleon III gained possession, in the name of the French government, of Longwood House and the lands around it, the last residence of Napoleon I (who died there in 1821). It is still French property, administered by a French representative and under the authority
of the French Ministry of Foreign Affairs. On 11 April 1898, the American Joshua Slocum, on his famous solo round-the-world voyage, arrived at Jamestown. He departed on 20 April 1898 for the final leg of his circumnavigation, having been extended hospitality by the governor, His Excellency Sir R A Standale, presented two lectures on his voyage, and been invited to Longwood by the French consular agent. A local industry manufacturing fibre from New Zealand
flax was successfully reestablished in 1907 and generated considerable income during the First World War. Ascension
Island was made a dependency of Saint Helena in 1922, and Tristan da Cunha followed in 1938. During the Second World
War, the United States built Wideawake airport on Ascension in 1942, but no military use was made of Saint Helena.
During this period, the island enjoyed increased revenues through the sale of flax, with prices peaking in 1951.
However, the industry declined because of transportation costs and competition from synthetic fibres. The decision
by the British Post Office to use synthetic fibres for its mailbags was a further blow, contributing to the closure
of the island's flax mills in 1965. From 1958, the Union Castle shipping line gradually reduced its service calls
to the island. Curnow Shipping, based in Avonmouth, replaced the Union-Castle Line mailship service in 1977, using
the RMS (Royal Mail Ship) St Helena. The British Nationality Act 1981 reclassified Saint Helena and the other Crown
colonies as British Dependent Territories. The islanders lost their right of abode in Britain. For the next 20 years,
many could find only low-paid work with the island government, and the only available employment outside Saint Helena
was on the Falkland Islands and Ascension Island. The Development and Economic Planning Department, which still operates,
was formed in 1988 to contribute to raising the living standards of the people of Saint Helena. In 1989, Prince Andrew
launched the replacement RMS St Helena to serve the island; the vessel was specially built for the Cardiff–Cape Town
route and features a mixed cargo/passenger layout. The Saint Helena Constitution took effect in 1989 and provided
that the island would be governed by a Governor and Commander-in-Chief, and an elected Executive and Legislative
Council. In 2002, the British Overseas Territories Act 2002 granted full British citizenship to the islanders, and
renamed the Dependent Territories (including Saint Helena) the British Overseas Territories. In 2009, Saint Helena
and its two territories received equal status under a new constitution, and the British Overseas Territory was renamed
Saint Helena, Ascension and Tristan da Cunha. The UK government has spent £250 million on the construction of the island's airport. Expected to be fully operational in early 2016, it is intended to help the island towards self-sufficiency and encourage economic development, reducing dependence on British government aid. The airport is also expected to kick-start the tourism industry, with up to 30,000 visitors expected annually. As of August 2015, ticketing was postponed
until an airline could be firmly designated. Located in the South Atlantic Ocean on the Mid-Atlantic Ridge, more
than 2,000 kilometres (1,200 mi) from the nearest major landmass, Saint Helena is one of the most remote places in
the world. The nearest port on the continent is Namibe in southern Angola, and the nearest international airport is the Quatro de Fevereiro Airport of Angola's capital, Luanda; connections to Cape Town in South Africa are used for
most shipping needs, such as the mail boat that serves the island, the RMS St Helena. The island is associated with
two other isolated islands in the southern Atlantic, also British territories: Ascension Island about 1,300 kilometres
(810 mi) due northwest in more equatorial waters and Tristan da Cunha, which is well outside the tropics 2,430 kilometres
(1,510 mi) to the south. The island is situated in the Western Hemisphere and has the same longitude as Cornwall
in the United Kingdom. Despite its remote location, it is classified as being in West Africa by the United Nations.
The island of Saint Helena has a total area of 122 km2 (47 sq mi), and is composed largely of rugged terrain of volcanic
origin (the last volcanic eruptions occurred about 7 million years ago). Coastal areas are covered in volcanic rock and are warmer and drier than the centre. The highest point of the island is Diana's Peak at 818 m (2,684 ft). In 1996
it became the island's first national park. Much of the island is covered by New Zealand flax, a legacy of former
industry, but there are some original trees augmented by plantations, including those of the Millennium Forest project
which was established in 2002 to replant part of the lost Great Wood and is now managed by the Saint Helena National
Trust. When the island was discovered, it was covered with unique indigenous vegetation, including a remarkable cabbage
tree species. The island's hinterland must have been a dense tropical forest but the coastal areas were probably
also quite green. The modern landscape is very different, with widespread bare rock in the lower areas, although
inland it is green, mainly due to introduced vegetation. There are no native land mammals, but cattle, cats, dogs,
donkeys, goats, mice, rabbits, rats and sheep have been introduced, and native species have been adversely affected
as a result. The dramatic change in landscape must be attributed to these introductions. As a result, the string
tree (Acalypha rubrinervis) and the St Helena olive (Nesiota elliptica) are now extinct, and many of the other endemic
plants are threatened with extinction. There are several rocks and islets off the coast, including: Castle Rock,
Speery Island, the Needle, Lower Black Rock, Upper Black Rock (South), Bird Island (Southwest), Black Rock, Thompson's
Valley Island, Peaked Island, Egg Island, Lady's Chair, Lighter Rock (West), Long Ledge (Northwest), Shore Island,
George Island, Rough Rock Island, Flat Rock (East), the Buoys, Sandy Bay Island, the Chimney, White Bird Island and
Frightus Rock (Southeast), all of which are within one kilometre (0.62 miles) of the shore. The national bird of
Saint Helena is the Saint Helena plover, known locally as the wirebird. It appears on the coat of arms of Saint Helena
and on the flag. The climate of Saint Helena is tropical, marine and mild, tempered by the Benguela Current and trade
winds that blow almost continuously. The climate varies noticeably across the island. Temperatures in Jamestown,
on the north leeward shore, range between 21–28 °C (70–82 °F) in the summer (January to April) and 17–24 °C (63–75
°F) during the remainder of the year. The temperatures in the central areas are, on average, 5–6 °C (9.0–10.8 °F)
lower. Jamestown also has a very low annual rainfall, while 750–1,000 mm (30–39 in) falls per year on the higher
ground and the south coast, where it is also noticeably cloudier. There are weather recording stations in the Longwood
and Blue Hill districts. Saint Helena is divided into eight districts, each with a community centre. The districts
also serve as statistical subdivisions. The island is a single electoral area and elects twelve representatives to
the Legislative Council of fifteen. Saint Helena was first settled by the English in 1659, and the island has a population
of about 4,250 inhabitants, mainly descended from people from Britain – settlers ("planters") and soldiers – and
slaves who were brought there from the beginning of settlement – initially from Africa (the Cape Verde Islands, Gold
Coast and west coast of Africa are mentioned in early records), then India and Madagascar. Eventually the planters
felt there were too many slaves and no more were imported after 1792. In 1840, St Helena became a provisioning station
for the British West Africa Squadron, which was suppressing the slave trade (mainly to Brazil), and many thousands of slaves were freed
on the island. These were all African, and about 500 stayed while the rest were sent on to the West Indies and Cape
Town, and eventually to Sierra Leone. Imported Chinese labourers arrived in 1810, reaching a peak of 618 in 1818,
after which numbers were reduced. Only a few older men remained after the British Crown took over the government
of the island from the East India Company in 1834. The majority were sent back to China, although records in the
Cape suggest that they never got any farther than Cape Town. There were also a very few Indian lascars who worked
under the harbour master. The citizens of Saint Helena hold British Overseas Territories citizenship. On 21 May 2002,
full British citizenship was restored by the British Overseas Territories Act 2002. See also British nationality
law. Since the post-Napoleonic period, there has been a long pattern of emigration from the island during periods of unemployment. The majority of "Saints" emigrated to the UK, South Africa and, in the early years, Australia. The population
has steadily declined since the late 1980s and has dropped from 5,157 at the 1998 census to 4,255 in 2008. In the
past emigration was characterised by young unaccompanied persons leaving to work on long-term contracts on Ascension
and the Falkland Islands, but since "Saints" were re-awarded UK citizenship in 2002, emigration to the UK by a wider
range of wage-earners has accelerated due to the prospect of higher wages and better progression prospects. Most
residents belong to the Anglican Communion and are members of the Diocese of St Helena, which has its own bishop
and includes Ascension Island. The 150th anniversary of the diocese was celebrated in June 2009. Other Christian
denominations on the island include: Roman Catholic (since 1852), Salvation Army (since 1884), Baptist (since 1845)
and, in more recent times, Seventh-day Adventist (since 1949), New Apostolic and Jehovah's Witnesses (of which one
in 35 residents is a member, the highest ratio of any country). The Catholics are pastorally served by the Mission
sui iuris of Saint Helena, Ascension and Tristan da Cunha, whose office of ecclesiastical superior is vested in the
Apostolic Prefecture of the Falkland Islands. Executive authority in Saint Helena is vested in Queen Elizabeth II
and is exercised on her behalf by the Governor of Saint Helena. The Governor is appointed by the Queen on the advice
of the British government. Defence and Foreign Affairs remain the responsibility of the United Kingdom. There are
fifteen seats in the Legislative Council of Saint Helena, a unicameral legislature, in addition to a Speaker and
a Deputy Speaker. Twelve of the fifteen members are elected in elections held every four years. The three ex officio
members are the Chief Secretary, Financial Secretary and Attorney General. The Executive Council is presided over
by the Governor, and consists of three ex officio officers and five elected members of the Legislative Council appointed
by the Governor. There is no elected Chief Minister, and the Governor acts as the head of government. In January
2013 it was proposed that the Executive Council would be led by a "Chief Councillor" who would be elected by the
members of the Legislative Council and would nominate the other members of the Executive Council. These proposals
were put to a referendum on 23 March 2013 where they were defeated by 158 votes to 42 on a 10% turnout. One commentator
has observed that, notwithstanding the high unemployment resulting from the loss of full passports during 1981–2002,
the level of loyalty to the British monarchy by the St Helena population is probably not exceeded in any other part
of the world. King George VI is the only reigning monarch to have visited the island. This was in 1947 when the King,
accompanied by Queen Elizabeth (later the Queen Mother), Princess Elizabeth (later Queen Elizabeth II) and Princess
Margaret were travelling to South Africa. Prince Philip arrived at St Helena in 1957 and then his son Prince Andrew
visited as a member of the armed forces in 1984 and his sister the Princess Royal arrived in 2002. In 2012, the government
of St. Helena funded the creation of the St. Helena Human Rights Action Plan 2012-2015. Work is being done under
this action plan, including publishing awareness-raising articles in local newspapers, providing support for members
of the public with human rights queries, and extending several UN Conventions on human rights to St. Helena. In recent
years, there have been reports of child abuse in St Helena. Britain’s Foreign and Commonwealth Office (FCO)
has been accused of lying to the United Nations about child abuse in St Helena to cover up allegations, including
cases of a police officer having raped a four-year-old girl and of a police officer having mutilated a two-year-old.
St Helena has long been known for its high proportion of endemic birds and vascular plants. The highland areas contain
most of the 400 endemic species recognised to date. Much of the island has been identified by BirdLife International
as being important for bird conservation, especially the endemic Saint Helena plover or wirebird, and for seabirds
breeding on the offshore islets and stacks, in the north-east and the south-west Important Bird Areas. On the basis
of these endemics and an exceptional range of habitats, Saint Helena is on the United Kingdom's tentative list for
future UNESCO World Heritage Sites. St Helena's biodiversity, however, also includes marine vertebrates, invertebrates
(freshwater, terrestrial and marine), fungi (including lichen-forming species), non-vascular plants, seaweeds and
other biological groups. To date, very little is known about these, although more than 200 lichen-forming fungi have
been recorded, including 9 endemics, suggesting that many significant discoveries remain to be made. The island had
a monocrop economy until 1966, based on the cultivation and processing of New Zealand flax for rope and string. St
Helena's economy is now weak, and is almost entirely sustained by aid from the British government. The public sector
dominates the economy, accounting for about 50% of gross domestic product. Inflation was running at 4% in 2005. There
have been increases in the cost of fuel, power and all imported goods. The tourist industry is heavily based on the
promotion of Napoleon's imprisonment. A golf course also exists, and there is considerable potential for sport-fishing tourism.
Three hotels operate on the island but the arrival of tourists is directly linked to the arrival and departure schedule
of the RMS St Helena. Some 3,200 short-term visitors arrived on the island in 2013. Saint Helena produces what is
said to be the most expensive coffee in the world. It also produces and exports Tungi Spirit, made from the fruit
of the prickly or cactus pears, Opuntia ficus-indica ("Tungi" is the local St Helenian name for the plant). Ascension
Island, Tristan da Cunha and Saint Helena all issue their own postage stamps which provide a significant income.
Quoted at constant 2002 prices, GDP fell from £12 million in 1999-2000 to £11 million in 2005-06. Imports are mainly
from the UK and South Africa and amounted to £6.4 million in 2004-05 (quoted on an FOB basis). Exports are much smaller,
amounting to £0.2 million in 2004-05. Exports are mainly fish and coffee; Philatelic sales were £0.06 million in
2004-05. The limited number of visiting tourists spent about £0.4 million in 2004-05, representing a contribution
to GDP of 3%. Public expenditure rose from £10 million in 2001-02 to £12 million in 2005-06 to £28m in 2012-13. The
contribution of UK budgetary aid to total SHG government expenditure rose from £4.6 million to £6.4 million to
£12.1 million over the same period. Wages and salaries represent about 38% of recurrent expenditure. Unemployment
levels are low (31 individuals in 2013, compared to 50 in 2004 and 342 in 1998). Employment is dominated by the public
sector; the number of government positions fell from 1,142 in 2006 to just over 800 in 2013. St Helena’s private
sector employs approximately 45% of the employed labour force and is largely dominated by small and micro businesses
with 218 private businesses employing 886 in 2004. Household survey results suggest the percentage of households
spending less than £20 per week on a per capita basis fell from 27% to 8% between 2000 and 2004, implying a decline
in income poverty. Nevertheless, 22% of the population claimed social security benefit in 2006/7, most of them aged
over 60, a sector that represents 20% of the population. In 1821, Saul Solomon issued 70,560 copper tokens worth a halfpenny each, inscribed "Payable at St Helena by Solomon, Dickson and Taylor" – presumably London partners – which circulated
alongside the East India Company's local coinage until the Crown took over the island in 1836. The coin remains readily
available to collectors. Today Saint Helena has its own currency, the Saint Helena pound, which is at parity with
the pound sterling. The government of Saint Helena produces its own coinage and banknotes. The Bank of Saint Helena
was established on Saint Helena and Ascension Island in 2004. It has branches in Jamestown on Saint Helena, and Georgetown,
Ascension Island and it took over the business of the St. Helena government savings bank and Ascension Island Savings
Bank. Saint Helena is one of the most remote islands in the world, has one commercial airport under construction,
and travel to the island is by ship only. A large military airfield is located on Ascension Island, with two Friday
flights to RAF Brize Norton, England (as from September 2010). These RAF flights offer a limited number of seats
to civilians. The ship RMS Saint Helena runs between St Helena and Cape Town on a 5-day voyage, also visiting Ascension
Island and Walvis Bay, and occasionally voyaging north to Tenerife and Portland, UK. It berths in James Bay, St Helena
approximately thirty times per year. The RMS Saint Helena was due for decommissioning in 2010. However, its service
life has been extended indefinitely until the airport is completed. After a long period of rumour and consultation,
the British government announced plans to construct an airport in Saint Helena in March 2005. The airport was expected
to be completed by 2010. However an approved bidder, the Italian firm Impregilo, was not chosen until 2008, and then
the project was put on hold in November 2008, allegedly due to new financial pressures brought on by the financial
crisis of 2007–2010. By January 2009, construction had not commenced and no final contracts had been signed. Governor
Andrew Gurr departed for London in an attempt to speed up the process and solve the problems. On 22 July 2010, the
British government agreed to help pay for the new airport using taxpayer money. In November 2011 a new deal between
the British government and South African civil engineering company Basil Read was signed and the airport was scheduled
to open in February 2016, with flights to and from South Africa and the UK. In March 2015 South African airline Comair
became the preferred bidder to provide weekly air service between the island and Johannesburg, starting from 2016.
The first aircraft, a South African Beechcraft King Air 200, landed at the new airport on 15 September 2015, prior
to conducting a series of flights to calibrate the airport's radio navigation equipment. The first helicopter landing
at the new airfield was conducted by the Wildcat HMA.2 ZZ377 from 825 Squadron 201 Flight, embarked on visiting HMS
Lancaster on 23 October 2015. A minibus offers a basic service to carry people around Saint Helena, with most services
designed to take people into Jamestown for a few hours on weekdays to conduct their business. Car hire is available
for visitors. Radio St Helena, which started operations on Christmas Day 1967, provided a local radio service that
had a range of about 100 km (62 mi) from the island, and also broadcast internationally on shortwave radio (11092.5
kHz) on one day a year. The station presented news, features and music in collaboration with its sister newspaper,
the St Helena Herald. It closed on 25 December 2012 to make way for a new three-channel FM service, also funded by
the St. Helena Government and run by the South Atlantic Media Services (formerly St. Helena Broadcasting (Guarantee)
Corporation). Saint FM provided a local radio service for the island which was also available on internet radio and
relayed in Ascension Island. The station was not government funded. It was launched in January 2005 and closed on
21 December 2012. It broadcast news, features and music in collaboration with its sister newspaper, the St Helena
Independent (which continues). Saint FM Community Radio took over the radio channels vacated by Saint FM and launched
on 10 March 2013. The station operates as a limited-by-guarantee company owned by its members and is registered as
a fund-raising Association. Membership is open to everyone, and grants access to a live audio stream. St Helena Online
is a not-for-profit internet news service run from the UK by a former print and BBC journalist, working in partnership
with Saint FM and the St Helena Independent. Sure South Atlantic Ltd ("Sure") offers television for the island via
17 analogue terrestrial UHF channels, offering a mix of British, US, and South African programming. The channels
are from DSTV and include Mnet, SuperSport and BBC channels. The feed signal, from MultiChoice DStv in South Africa,
is received by a satellite dish at Bryant's Beacon from Intelsat 7 in the Ku band. SURE provide the telecommunications
service in the territory through a digital copper-based telephone network including ADSL-broadband service. In August
2011 the first fibre-optic link on the island was installed, connecting the television receive antennas
at Bryant's Beacon to the Cable & Wireless Technical Centre in the Briars. A satellite ground station with a 7.6-metre
(25 ft) satellite dish installed in 1989 at The Briars is the only international connection providing satellite links
through Intelsat 707 to Ascension island and the United Kingdom. Since all international telephone and internet communications
rely on this single satellite link, both internet and telephone services are subject to sun outages. Saint Helena
has the international calling code +290 which, since 2006, Tristan da Cunha shares. Saint Helena telephone numbers
changed from 4 to 5 digits on 1 October 2013 by being prefixed with the digit "2", i.e. 2xxxx, with the range 5xxxx
being reserved for mobile numbering, and 8xxx being used for Tristan da Cunha numbers (these are still shown as 4 digits).
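The 2013 renumbering rule described above (prefix existing 4-digit fixed-line numbers with "2", under the shared +290 country code) can be sketched in a few lines; the sample number below is hypothetical, used purely for illustration:

```python
def renumber(old: str) -> str:
    """Apply the 1 October 2013 St Helena renumbering: a 4-digit
    fixed-line number gains the prefix "2" to become 5 digits."""
    if len(old) != 4 or not old.isdigit():
        raise ValueError("expected a 4-digit St Helena number")
    return "2" + old

def international(local: str) -> str:
    # Full international form uses the +290 code shared with
    # Tristan da Cunha since 2006.
    return "+290 " + local

print(international(renumber("1234")))  # +290 21234
```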
Saint Helena has a 10/3.6 Mbit/s internet link via Intelsat 707 provided by SURE. Serving a population of more than
4,000, this single satellite link is considered inadequate in terms of bandwidth. ADSL-broadband service is provided
with maximum speeds of up to 1,536 kbit/s downstream and 512 kbit/s upstream offered on contract levels from lite
£16 per month to gold+ at £190 per month. There are a few public WiFi hotspots in Jamestown, which are also being
operated by SURE (formerly Cable & Wireless). The South Atlantic Express, a 10,000 km (6,214 mi) submarine communications
cable connecting Africa to South America, run by the undersea fibre optic provider eFive, will pass St Helena relatively
closely. Initially there were no plans to land the cable and install a landing station ashore, although one could supply St Helena's
population with sufficient bandwidth to fully leverage the benefits of today's Information Society. In January 2012,
a group of supporters petitioned the UK government to meet the cost of landing the cable at St Helena. On 6 October
2012, eFive agreed to reroute the cable through St. Helena after a successful lobbying campaign by A Human Right,
a San Francisco-based NGO working on initiatives to ensure all people are connected to the Internet. Islanders have
sought the assistance of the UK Department for International Development and Foreign and Commonwealth Office in funding
the £10m required to bridge the connection from a local junction box on the cable to the island. The UK Government
have announced that a review of the island's economy would be required before such funding would be agreed to. The
island has two local newspapers, both of which are available on the Internet. The St Helena Independent has been
published since November 2005. The Sentinel newspaper was introduced in 2012. Education is free and compulsory between
the ages of 5 and 16. The island has three primary schools for students aged 4 to 11: Harford, Pilling, and St Paul’s.
Prince Andrew School provides secondary education for students aged 11 to 18. At the beginning of the academic year
2009-10, 230 students were enrolled in primary school and 286 in secondary school. The Education and Employment Directorate
also offers programmes for students with special needs, vocational training, adult education, evening classes, and
distance learning. The island has a public library (the oldest in the Southern Hemisphere) and a mobile library service
which operates weekly in rural areas. The UK national curriculum is adapted for local use. A range of qualifications is offered – from GCSE, A/S and A2, to Level 3 Diplomas and VRQ qualifications. Sports played on the island include
football, cricket, volleyball, tennis, golf, motocross, shooting sports and yachting. Saint Helena has sent teams
to a number of Commonwealth Games. Saint Helena is a member of the International Island Games Association. The Saint
Helena cricket team made its debut in international cricket in Division Three of the African region of the World
Cricket League in 2011. The Governor's Cup is a yacht race between Cape Town and Saint Helena island, held every
two years in December/January; the most recent event was in December 2010. In Jamestown a timed run takes place up
Jacob's Ladder every year, with people coming from all over the world to take part. There are scouting and guiding
groups on Saint Helena and Ascension Island. Scouting was established on Saint Helena island in 1912. Lord and Lady
Baden-Powell visited the Scouts on Saint Helena on the return from their 1937 tour of Africa. The visit is described
in Lord Baden-Powell's book entitled African Adventures.
In phonetics, aspiration is the strong burst of breath that accompanies either the release or, in the case of preaspiration,
the closure of some obstruents. In English, aspirated consonants are allophones in complementary distribution with
their unaspirated counterparts, but in some other languages, notably most Indian and East Asian languages, the difference
is contrastive. To feel or see the difference between aspirated and unaspirated sounds, one can put a hand or a lit
candle in front of one's mouth, and say pin [pʰɪn] and then spin [spɪn]. One should either feel a puff of air or
see a flicker of the candle flame with pin that one does not get with spin. In most dialects of English, the initial
consonant is aspirated in pin and unaspirated in spin. In the International Phonetic Alphabet (IPA), aspirated consonants
are written using the symbols for voiceless consonants followed by the aspiration modifier letter ⟨◌ʰ⟩, a superscript
form of the symbol for the voiceless glottal fricative ⟨h⟩. For instance, ⟨p⟩ represents the voiceless bilabial stop,
and ⟨pʰ⟩ represents the aspirated bilabial stop. Voiced consonants are seldom actually aspirated. Symbols for voiced
consonants followed by ⟨◌ʰ⟩, such as ⟨bʰ⟩, typically represent consonants with breathy voiced release (see below).
In the grammatical tradition of Sanskrit, aspirated consonants are called voiceless aspirated, and breathy-voiced
consonants are called voiced aspirated. There are no dedicated IPA symbols for degrees of aspiration and typically
only two degrees are marked: unaspirated ⟨k⟩ and aspirated ⟨kʰ⟩. An old symbol for light aspiration was ⟨ʻ⟩, but
this is now obsolete. The aspiration modifier letter may be doubled to indicate especially strong or long aspiration.
Hence, the two degrees of aspiration in Korean stops are sometimes transcribed ⟨kʰ kʰʰ⟩ or ⟨kʻ⟩ and ⟨kʰ⟩, but they
are usually transcribed [k] and [kʰ], with the details of voice-onset time given numerically. Preaspirated consonants
are marked by placing the aspiration modifier letter before the consonant symbol: ⟨ʰp⟩ represents the preaspirated
bilabial stop. Unaspirated or tenuis consonants are occasionally marked with the modifier letter for unaspiration
⟨◌˭⟩, a superscript equal sign: ⟨t˭⟩. Usually, however, unaspirated consonants are left unmarked: ⟨t⟩. Voiceless
consonants are produced with the vocal folds open (spread) and not vibrating, and voiced consonants are produced
when the vocal folds are fractionally closed and vibrating (modal voice). Voiceless aspiration occurs when the vocal
cords remain open after a consonant is released. An easy way to measure this is by noting the consonant's voice-onset
time, as the voicing of a following vowel cannot begin until the vocal cords close. Phonetically in some languages,
such as Navajo, aspiration of stops tends to be realised as voiceless velar airflow; aspiration of affricates is
realised as an extended length of the frication. Aspirated consonants are not always followed by vowels or other
voiced sounds. For example, in Eastern Armenian, aspiration is contrastive even word-finally, and aspirated consonants
occur in consonant clusters. In Wahgi, consonants are aspirated only in final position. Armenian and Cantonese have
aspiration that lasts about as long as English aspirated stops, in addition to unaspirated stops. Korean has lightly
aspirated stops that fall between the Armenian and Cantonese unaspirated and aspirated stops as well as strongly
aspirated stops whose aspiration lasts longer than that of Armenian or Cantonese. (See voice-onset time.) Aspiration
varies with place of articulation. The Spanish voiceless stops /p t k/ have voice-onset times (VOTs) of about 5,
10, and 30 milliseconds, whereas English aspirated /p t k/ have VOTs of about 60, 70, and 80 ms. Voice-onset time
in Korean has been measured at 20, 25, and 50 ms for /p t k/ and 90, 95, and 125 for /pʰ tʰ kʰ/. When aspirated consonants
are doubled or geminated, the stop is held longer and then has an aspirated release. An aspirated affricate consists
of a stop, fricative, and aspirated release. A doubled aspirated affricate has a longer hold in the stop portion
and then has a release consisting of the fricative and aspiration. Icelandic and Faroese have preaspirated [ʰp ʰt
ʰk]; some scholars interpret these as consonant clusters as well. In Icelandic, preaspirated stops contrast with
double stops and single stops: Preaspirated stops also occur in most Sami languages; for example, in North Sami,
the unvoiced stop and affricate phonemes /p/, /t/, /ts/, /tʃ/, /k/ are pronounced preaspirated ([ʰp], [ʰt], [ʰts],
[ʰtʃ], [ʰk]) when they occur in medial or final position. Although most aspirated obstruents in the world's languages
are stops and affricates, aspirated fricatives such as [sʰ], [fʰ] or [ɕʰ] have been documented in Korean, in a few
Tibeto-Burman languages, in some Oto-Manguean languages, and in the Siouan language Ofo. Some languages, such as
Choni Tibetan, have up to four contrastive aspirated fricatives [sʰ] [ɕʰ], [ʂʰ] and [xʰ]. True aspirated voiced consonants,
as opposed to murmured (breathy-voice) consonants such as the [bʱ], [dʱ], [ɡʱ] that are common in the languages of
India, are extremely rare. They have been documented in Kelabit, Taa, and the Kx'a languages. Reported aspirated voiced
stops, affricates and clicks are [b͡pʰ, d͡tʰ, d͡tsʰ, d͡tʃʰ, ɡ͡kʰ, ɢ͡qʰ, ᶢʘʰ, ᶢǀʰ, ᶢǁʰ, ᶢǃʰ, ᶢǂʰ]. Aspiration has
varying significance in different languages. It is either allophonic or phonemic, and may be analyzed as an underlying
consonant cluster. In some languages, such as English, aspiration is allophonic. Stops are distinguished primarily
by voicing, and voiceless stops are sometimes aspirated, while voiced stops are usually unaspirated. They are unaspirated
for almost all speakers when immediately following word-initial s, as in spill, still, skill. After an s elsewhere
in a word they are normally unaspirated as well, except sometimes in compound words. When the consonants in a cluster
like st are analyzed as belonging to different morphemes (heteromorphemic) the stop is aspirated, but when they are
analyzed as belonging to one morpheme the stop is unaspirated. For instance, distend has unaspirated
[t] since it is not analyzed as two morphemes, but distaste has an aspirated middle [tʰ] because it is analyzed as
dis- + taste and the word taste has an aspirated initial t. In many languages, such as Armenian, Korean, Thai, Indo-Aryan
languages, Dravidian languages, Icelandic, Ancient Greek, and the varieties of Chinese, tenuis and aspirated consonants
are phonemic. Unaspirated consonants like [p˭ s˭] and aspirated consonants like [pʰ ʰp sʰ] are separate phonemes,
and words are distinguished by whether they have one or the other. In Danish and most southern varieties of German,
the "lenis" consonants transcribed for historical reasons as ⟨b d ɡ⟩ are distinguished from their fortis counterparts
⟨p t k⟩, mainly in their lack of aspiration. Standard Chinese (Mandarin) has stops and affricates distinguished by
aspiration: for instance, /t tʰ/, /t͡s t͡sʰ/. In pinyin, tenuis stops are written with letters that represent voiced
consonants in English, and aspirated stops with letters that represent voiceless consonants. Thus d represents /t/,
and t represents /tʰ/. Wu Chinese has a three-way distinction in stops and affricates: /p pʰ b/. In addition to aspirated
and unaspirated consonants, there is a series of muddy consonants, like /b/. These are pronounced with slack or breathy
voice: that is, they are weakly voiced. Muddy consonants as initial cause a syllable to be pronounced with low pitch
or light (陽 yáng) tone. Many Indo-Aryan languages have aspirated stops. Sanskrit, Hindi, Bengali, Marathi, and Gujarati
have a four-way distinction in stops: voiceless, aspirated, voiced, and breathy-voiced or voiced aspirated, such
as /p pʰ b bʱ/. Punjabi has lost breathy-voiced consonants, which resulted in a tone system, and therefore has a
distinction between voiceless, aspirated, and voiced: /p pʰ b/. Some of the Dravidian languages, such as Telugu,
Tamil, Malayalam, and Kannada, have a distinction between voiced and voiceless, aspirated and unaspirated only in
loanwords from Indo-Aryan languages. In native Dravidian words, there is no distinction between these categories
and stops are underspecified for voicing and aspiration. Western Armenian has a two-way distinction between aspirated
and voiced: /tʰ d/. Western Armenian aspirated /tʰ/ corresponds to Eastern Armenian aspirated /tʰ/ and voiced /d/,
and Western voiced /d/ corresponds to Eastern voiceless /t/. Some forms of Greek before the Koine Greek period are
reconstructed as having aspirated stops. The Classical Attic dialect of Ancient Greek had a three-way distinction
in stops like Eastern Armenian: /t tʰ d/. These stops were called ψιλά, δασέα, μέσα "thin, thick, middle" by Koine
Greek grammarians. There were aspirated stops at three places of articulation: labial, coronal, and velar /pʰ tʰ
kʰ/. Earlier Greek, represented by Mycenaean Greek, likely had a labialized velar aspirated stop /kʷʰ/, which later
became labial, coronal, or velar depending on dialect and phonetic environment. The other Ancient Greek dialects,
Ionic, Doric, Aeolic, and Arcadocypriot, likely had the same three-way distinction at one point, but Doric seems
to have had a fricative in place of /tʰ/ in the Classical period, and the Ionic and Aeolic dialects sometimes lost
aspiration (psilosis). Later, during the Koine Greek period, the aspirated and voiced stops /tʰ d/ of Attic Greek
lenited to voiceless and voiced fricatives, yielding /θ ð/ in Medieval and Modern Greek. The term aspiration sometimes
refers to the sound change of debuccalization, in which a consonant is lenited (weakened) to become a glottal stop
or fricative [ʔ h ɦ]. So-called voiced aspirated consonants are nearly always pronounced instead with breathy voice,
a type of phonation or vibration of the vocal folds. The modifier letter ⟨◌ʰ⟩ after a voiced consonant actually represents
a breathy-voiced or murmured consonant, as with the "voiced aspirated" bilabial stop ⟨bʰ⟩ in the Indo-Aryan languages.
This consonant is therefore more accurately transcribed as ⟨b̤⟩, with the diacritic for breathy voice, or with the
modifier letter ⟨bʱ⟩, a superscript form of the symbol for the voiced glottal fricative ⟨ɦ⟩. Some linguists restrict
the double-dot subscript ⟨◌̤⟩ to murmured sonorants, such as vowels and nasals, which are murmured throughout their
duration, and use the superscript hook-aitch ⟨◌ʱ⟩ for the breathy-voiced release of obstruents.
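The voice-onset-time figures quoted above (roughly 5–30 ms for Spanish /p t k/, 60–80 ms for English aspirated stops, and up to 125 ms for strongly aspirated Korean stops) can be turned into a rough classifier. A minimal sketch; the cut-off values here are illustrative assumptions drawn from those figures, not standard phonetic thresholds:

```python
def classify_vot(vot_ms: float) -> str:
    # Illustrative boundaries, assumed from the VOT values in the
    # text: short-lag (Spanish-like) stops fall under ~35 ms,
    # English-style aspirated stops around 60-80 ms, and strongly
    # aspirated Korean stops at 90 ms and above.
    if vot_ms < 35:
        return "unaspirated (short lag)"
    elif vot_ms < 90:
        return "aspirated"
    else:
        return "strongly aspirated"

for label, vot in [("Spanish /k/", 30), ("English /k/", 80), ("Korean /kʰ/", 125)]:
    print(label, "->", classify_vot(vot))
```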
Hydrogen is a chemical element with chemical symbol H and atomic number 1. With an atomic weight of 1.00794
u, hydrogen is the lightest element on the periodic table. Its monatomic form (H) is the most abundant chemical substance
in the Universe, constituting roughly 75% of all baryonic mass. Non-remnant stars are mainly composed of
hydrogen in its plasma state. The most common isotope of hydrogen, termed protium (name rarely used, symbol 1H),
has one proton and no neutrons. The universal emergence of atomic hydrogen first occurred during the recombination
epoch. At standard temperature and pressure, hydrogen is a colorless, odorless, tasteless, non-toxic, nonmetallic,
highly combustible diatomic gas with the molecular formula H2. Since hydrogen readily forms covalent compounds with
most non-metallic elements, most of the hydrogen on Earth exists in molecular forms such as water
or organic compounds. Hydrogen plays a particularly important role in acid–base reactions as many acid-base reactions
involve the exchange of protons between soluble molecules. In ionic compounds, hydrogen can take the form of a negative
charge (i.e., anion) when it is known as a hydride, or as a positively charged (i.e., cation) species denoted by
the symbol H+. The hydrogen cation is written as though composed of a bare proton, but in reality, hydrogen cations
in ionic compounds are always more complex species than that would suggest. As the only neutral atom for which the
Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played
a key role in the development of quantum mechanics. Hydrogen gas was first artificially produced in the early 16th
century, via the mixing of metals with acids. In 1766–81, Henry Cavendish was the first to recognize that hydrogen
gas was a discrete substance, and that it produces water when burned, a property which later gave it its name: in
Greek, hydrogen means "water-former". Industrial production is mainly from the steam reforming of natural gas, and
less often from more energy-intensive hydrogen production methods like the electrolysis of water. Most hydrogen is
employed near its production site, with the two largest uses being fossil fuel processing (e.g., hydrocracking) and
ammonia production, mostly for the fertilizer market. Hydrogen is a concern in metallurgy as it can embrittle many
metals, complicating the design of pipelines and storage tanks. Hydrogen gas (dihydrogen or molecular hydrogen) is
highly flammable and will burn in air at a very wide range of concentrations between 4% and 75% by volume. The enthalpy of combustion for hydrogen is −286 kJ/mol: 2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol). Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The mixtures may be ignited by spark, heat, or sunlight. The hydrogen
autoignition temperature, the temperature of spontaneous ignition in air, is 500 °C (932 °F). Pure hydrogen-oxygen
flames emit ultraviolet light and with high oxygen mix are nearly invisible to the naked eye, as illustrated by the
faint plume of the Space Shuttle Main Engine compared to the highly visible plume of a Space Shuttle Solid Rocket
Booster. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous.
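The flammability range and combustion enthalpy quoted above can be turned into a small sketch; the function names are illustrative, not from any standard library:

```python
# Sketch: check whether a hydrogen-air mixture lies in the flammable
# range given above (4-75% H2 by volume) and estimate the heat released
# on complete combustion, using the -286 kJ/mol enthalpy of combustion.

FLAMMABLE_RANGE = (4.0, 75.0)      # % H2 by volume in air
ENTHALPY_COMBUSTION = -286.0       # kJ per mol H2 (liquid water product)

def is_flammable(h2_percent: float) -> bool:
    """True if the H2 concentration lies within the flammable range in air."""
    low, high = FLAMMABLE_RANGE
    return low <= h2_percent <= high

def combustion_heat(mol_h2: float) -> float:
    """Heat released (kJ, positive) by burning mol_h2 moles of H2."""
    return -ENTHALPY_COMBUSTION * mol_h2

print(is_flammable(2.0))     # below the lower flammability limit
print(is_flammable(30.0))    # well inside the flammable range
print(combustion_heat(2.0))  # 2 mol H2 releases 572 kJ
```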
Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg
airship was an infamous example of hydrogen combustion; the cause is debated, but the visible orange flames were
the result of a rich mixture of hydrogen to oxygen combined with carbon compounds from the airship skin. H2 reacts
with every oxidizing element. Hydrogen can react spontaneously and violently at room temperature with chlorine and
fluorine to form the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride, which are also potentially
dangerous acids. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom,
which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the Sun. However, the
electromagnetic force attracts electrons and protons to one another, while planets and celestial objects are attracted
to each other by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics
by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore
only certain allowed energies. A more accurate description of the hydrogen atom comes from a purely quantum mechanical
treatment that uses the Schrödinger equation, Dirac equation or even the Feynman path integral formulation to calculate
the probability density of the electron around the proton. The most complicated treatments allow for the small effects
of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state
hydrogen atom has no angular momentum at all—an illustration of how the "planetary orbit" conception of electron
motion differs from reality. There exist two different spin isomers of hydrogen diatomic molecules that differ by
the relative spin of their nuclei. In the orthohydrogen form, the spins of the two protons are parallel and form
a triplet state with a molecular spin quantum number of 1 (1⁄2+1⁄2); in the parahydrogen form the spins are antiparallel
and form a singlet with a molecular spin quantum number of 0 (1⁄2–1⁄2). At standard temperature and pressure, hydrogen
gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form". The equilibrium
ratio of orthohydrogen to parahydrogen depends on temperature, but because the ortho form is an excited state and
has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium
state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure parahydrogen
differ significantly from those of the normal form because of differences in rotational heat capacities, as discussed
more fully in spin isomers of hydrogen. The ortho/para distinction also occurs in other hydrogen-containing molecules
or functional groups, such as water and methylene, but is of little significance for their thermal properties. The
uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed
H2 contains large quantities of the high-energy ortho form that converts to the para form very slowly. The ortho/para
ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion
from ortho to para is exothermic and produces enough heat to evaporate some of the hydrogen liquid, leading to loss
of liquefied material. Catalysts for the ortho-para interconversion, such as ferric oxide, activated carbon, platinized
asbestos, rare earth metals, uranium compounds, chromic oxide, or some nickel compounds, are used during hydrogen
cooling. While H2 is not very reactive under standard conditions, it does form compounds with most elements. Hydrogen
can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I), or oxygen;
in these compounds hydrogen takes on a partial positive charge. When bonded to fluorine, oxygen, or nitrogen, hydrogen
can participate in a form of medium-strength noncovalent bonding with other similar molecules between their hydrogens
called hydrogen bonding, which is critical to the stability of many biological molecules. Hydrogen also forms compounds
with less electronegative elements, such as the metals and metalloids, in which it takes on a partial negative charge.
These compounds are often known as hydrides. Hydrogen forms a vast array of compounds with carbon called the hydrocarbons,
and an even vaster array with heteroatoms that, because of their general association with living things, are called
organic compounds. The study of their properties is known as organic chemistry and their study in the context of
living organisms is known as biochemistry. By some definitions, "organic" compounds are only required to contain
carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this
class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions
of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated
synthetic pathways, which seldom involve elementary hydrogen. Compounds of hydrogen are often called hydrides, a
term that is used fairly loosely. The term "hydride" suggests that the H atom has acquired a negative or anionic
character, denoted H−, and is used when hydrogen forms a compound with a more electropositive element. The existence
of the hydride anion, suggested by Gilbert N. Lewis in 1916 for group I and II salt-like hydrides, was demonstrated
by Moers in 1920 by the electrolysis of molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen
at the anode. For hydrides other than group I and II metals, the term is quite misleading, considering the low electronegativity
of hydrogen. An exception in group II hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all
main-group elements, the number and combination of possible compounds varies widely; for example, there are over
100 binary borane hydrides known, but only one binary aluminium hydride. Binary indium hydride has not yet been identified,
although larger complexes exist. In inorganic chemistry, hydrides can also serve as bridging ligands that link two
metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in
boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes. Oxidation of hydrogen removes
its electron and gives H+, which contains no electrons and a nucleus which is usually composed of one proton. That
is why H+ is often called a proton. This species is central to discussion of acids. Under the Brønsted–Lowry theory,
acids are proton donors, while bases are proton acceptors. A bare proton, H+, cannot exist in solution or in ionic
crystals, because of its unstoppable attraction to other atoms or molecules with electrons. Except at the high temperatures
associated with plasmas, such protons cannot be removed from the electron clouds of atoms and molecules, and will
remain attached to them. However, the term 'proton' is sometimes used loosely and metaphorically to refer to positively
charged or cationic hydrogen attached to other species in this fashion, and as such is denoted "H+" without any implication
that any single protons exist freely as a species. To avoid the implication of the naked "solvated proton" in solution,
acidic aqueous solutions are sometimes considered to contain a less unlikely fictitious species, termed the "hydronium ion" (H3O+). However, even in this case, such solvated hydrogen cations are more realistically conceived as being organized into clusters that form species closer to H9O4+. Other oxonium ions are found when water is in acidic solution with other solvents. Although exotic on Earth, one of the most common ions in the universe is the H3+ ion,
known as protonated molecular hydrogen or the trihydrogen cation. Hydrogen has three naturally occurring isotopes,
denoted 1H, 2H and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed
in nature. Hydrogen is the only element that has different names for its isotopes in common use today. During the
early study of radioactivity, various heavy radioactive isotopes were given their own names, but such names are no
longer used, except for deuterium and tritium. The symbols D and T (instead of 2H and 3H) are sometimes used for
deuterium and tritium, but the corresponding symbol for protium, P, is already in use for phosphorus and thus is
not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry
allows any of D, T, 2H, and 3H to be used, although 2H and 3H are preferred. In 1671, Robert Boyle discovered and
described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. In
1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid
reaction "flammable air". He speculated that "flammable air" was in fact identical to the hypothetical substance called "phlogiston", further finding in 1781 that the gas produces water when burned. He is usually given credit
for its discovery as an element. In 1783, Antoine Lavoisier gave the element the name hydrogen (from the Greek ὑδρο-
hydro meaning "water" and -γενής genes meaning "creator") when he and Laplace reproduced Cavendish's finding that
water is produced when hydrogen is burned. Lavoisier produced hydrogen for his experiments on mass conservation by
reacting a flux of steam with metallic iron through an incandescent iron tube heated in a fire. The anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the following reactions: Fe + H2O → FeO + H2, and 3 Fe + 4 H2O → Fe3O4 + 4 H2.
Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention,
the vacuum flask. He produced solid hydrogen the next year. Deuterium was discovered in December 1931 by Harold Urey,
and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists
of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. François Isaac de Rivaz built
the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen in 1806. Edward
Daniel Clarke invented the hydrogen gas blowpipe in 1819. The Döbereiner's lamp and limelight were invented in 1823.
The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first
reliable form of air travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German
count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins;
the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910 and by the outbreak
of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships
were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made
by the British airship R34 in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves
in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose.
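The safety trade-off behind the helium decision can be quantified: lift per cubic metre is the density difference between air and the lifting gas, so helium sacrifices surprisingly little lift despite twice hydrogen's molar mass. A minimal sketch, assuming ideal-gas densities at 0 °C and 1 atm:

```python
# Sketch comparing hydrogen and helium as lifting gases using ideal-gas
# densities (rho = P*M / (R*T)) at 0 degC and 1 atm. Most of the lift
# comes from displacing air, so helium's lift is only slightly lower.

R = 8.314            # J/(mol K)
T = 273.15           # K
P = 101325.0         # Pa
MOLAR_MASS = {"air": 0.028964, "H2": 0.002016, "He": 0.004003}  # kg/mol

def density(gas: str) -> float:
    """Ideal-gas density in kg/m^3."""
    return P * MOLAR_MASS[gas] / (R * T)

lift_h2 = density("air") - density("H2")
lift_he = density("air") - density("He")
print(f"H2 lift: {lift_h2:.2f} kg/m^3")
print(f"He lift: {lift_he:.2f} kg/m^3 ({lift_he / lift_h2:.0%} of hydrogen's)")
```

Under these assumptions helium delivers roughly 93% of hydrogen's lift, which is why it was seen as a safe substitute.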
Therefore, H2 was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on 6 May 1937.
The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause,
but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. But the
damage to hydrogen's reputation as a lifting gas was already done. That same year, 1937, the first hydrogen-cooled turbogenerator went into service at Dayton, Ohio, operated by the Dayton Power & Light Co., with gaseous hydrogen as a coolant in the rotor and the stator; because of hydrogen's high thermal conductivity, this is the most common type in its
field today. The nickel-hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation Technology Satellite-2 (NTS-2). For example, the ISS, Mars Odyssey, and the Mars Global Surveyor are equipped with nickel-hydrogen
batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries,
which were finally replaced in May 2009, more than 19 years after launch, and 13 years over their design life. Because
of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the
spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic
structure. Furthermore, the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ allowed
fuller understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment
of the hydrogen atom had been developed in the mid-1920s. One of the first quantum effects to be explicitly noticed
(but not understood at the time) was a Maxwell observation involving hydrogen, half a century before full quantum
mechanical theory arrived. Maxwell observed that the specific heat capacity of H2 unaccountably departs from that
of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic
temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy
levels, which are particularly wide-spaced in H2 because of its low mass. These widely spaced levels inhibit equal
partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier
atoms do not have such widely spaced levels and do not exhibit the same effect. Hydrogen, as atomic H, is the most
abundant chemical element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms
(most of the mass of the universe, however, is not in the form of chemical-element type matter, but rather is postulated
to occur as yet-undetected forms of mass such as dark matter and dark energy). This element is found in great abundance
in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital
role in powering stars through the proton-proton reaction and the CNO cycle nuclear fusion. Throughout the universe,
hydrogen is mostly found in the atomic and plasma states whose properties are quite different from molecular hydrogen.
As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity
and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced
by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving
rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the interstellar medium.
The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological
baryonic density of the Universe up to redshift z=4. Under ordinary conditions on Earth, elemental hydrogen exists
as the diatomic gas, H2. However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of
its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. However, hydrogen
is the third most abundant element on the Earth's surface, mostly in the form of chemical compounds such as hydrocarbons
and water. Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus, as is methane,
itself a hydrogen source of increasing importance. A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This charged ion has also been observed in the upper atmosphere of the planet Jupiter. The ion is relatively stable in the environment of outer space due to the low temperature and density. H3+ is one of the most abundant ions in the Universe, and it plays a notable role in the chemistry of the interstellar medium. Neutral triatomic hydrogen H3 can only exist in an excited form and is unstable. By contrast, the positive hydrogen molecular ion (H2+) is a rare molecule in the universe. H2 is produced in chemistry and biology laboratories, often as a by-product of other reactions;
in industry for the hydrogenation of unsaturated substrates; and in nature as a means of expelling reducing equivalents
in biochemical reactions. The electrolysis of water is a simple method of producing hydrogen. A low voltage current
is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically
the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas
is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert
metals. (Iron, for instance, would oxidize, and thus decrease the amount of oxygen given off.) The theoretical maximum
efficiency (electricity used vs. energetic value of hydrogen produced) is in the range 80–94%. An alloy of aluminium
and gallium in pellet form added to water can be used to generate hydrogen. The process also produces alumina, but
the expensive gallium, which prevents the formation of an oxide skin on the pellets, can be re-used. This has important
potential implications for a hydrogen economy, as hydrogen can be produced on-site and does not need to be transported.
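The electrolysis described above obeys Faraday's law: each H2 molecule formed at the cathode requires two electrons, so the charge passed fixes the yield. A minimal sketch (the example current and duration are illustrative):

```python
# Sketch of electrolysis stoichiometry: charge passed -> moles of
# electrons -> moles of H2 (2 electrons per molecule) -> grams of H2.
# Assumes an ideal cell with 100% current efficiency.

FARADAY = 96485.0   # C per mol of electrons
M_H2 = 2.016        # g/mol

def hydrogen_from_electrolysis(current_a: float, seconds: float) -> float:
    """Grams of H2 produced at the cathode of an ideal electrolysis cell."""
    charge = current_a * seconds        # coulombs
    mol_electrons = charge / FARADAY
    mol_h2 = mol_electrons / 2.0        # 2 e- per H2 molecule
    return mol_h2 * M_H2

# Illustrative run: 10 A for one hour
print(f"{hydrogen_from_electrolysis(10.0, 3600.0):.3f} g")
```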
Hydrogen can be prepared in several different ways, but economically the most important processes involve removal
of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas.
At high temperatures (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2: CH4 + H2O → CO + 3 H2. This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure H2 is the most marketable product and Pressure Swing Adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon: CH4 → C + 2 H2. Consequently, steam reforming typically employs an excess of H2O. Additional hydrogen can be recovered from the steam by use of carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst; this reaction is also a common industrial source of carbon dioxide: CO + H2O → CO2 + H2.
Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber
process for the production of ammonia, hydrogen is generated from natural gas. Electrolysis of brine to yield chlorine
also produces hydrogen as a co-product. There are more than 200 thermochemical cycles which can be used for water
splitting, around a dozen of these cycles such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle,
zinc zinc-oxide cycle, sulfur-iodine cycle, copper-chlorine cycle and hybrid sulfur cycle are under research and
in testing phase to produce hydrogen and oxygen from water and heat without using electricity. A number of laboratories
(including in France, Germany, Greece, Japan, and the USA) are developing thermochemical methods to produce hydrogen
from solar energy and water. Under anaerobic conditions, iron and steel alloys are slowly oxidized by the protons of water, which are concomitantly reduced to molecular hydrogen (H2). The anaerobic corrosion of iron leads first to the formation of ferrous hydroxide (green rust) and can be described by the reaction Fe + 2 H2O → Fe(OH)2 + H2. In its turn, under anaerobic conditions, the ferrous hydroxide (Fe(OH)2) can be oxidized by the protons of water to form magnetite and molecular hydrogen, a process described by the Schikorr reaction: 3 Fe(OH)2 → Fe3O4 + 2 H2O + H2. In the absence of atmospheric oxygen (O2), in deep geological conditions far from the Earth's atmosphere, hydrogen (H2) is produced during serpentinization by the anaerobic oxidation by water protons (H+) of the ferrous (Fe2+) silicate present in the crystal lattice of fayalite (Fe2SiO4, the olivine iron-endmember). The corresponding reaction, forming magnetite (Fe3O4), quartz (SiO2), and hydrogen (H2), is 3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2. Of all the fault gases formed in power transformers,
hydrogen is the most common and is generated under most fault conditions; thus, formation of hydrogen is an early
indication of serious problems in the transformer's life cycle. Large quantities of H2 are needed in the petroleum and chemical industries. The largest application of H2 is for the processing ("upgrading") of fossil fuels, and in the production of ammonia. The key consumers of H2 in the petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking. H2 has several other important uses. H2 is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing
agent of metallic ores. Hydrogen is highly soluble in many rare earth and transition metals and is soluble in both
nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities
in the crystal lattice. These properties may be useful when hydrogen is purified by passage through hot palladium
disks, but the gas's high solubility is a metallurgical problem, contributing to the embrittlement of many metals,
complicating the design of pipelines and storage tanks. Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding. H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies. Because H2 is lighter
than air, having a little more than 1⁄14 of the density of air, it was once widely used as a lifting gas in balloons
and airships. In more recent applications, hydrogen is used pure or mixed with nitrogen (sometimes called forming
gas) as a tracer gas for minute leak detection. Applications can be found in the automotive, chemical, power generation,
aerospace, and telecommunications industries. Hydrogen is an authorized food additive (E 949) that allows food package
leak testing among other anti-oxidizing properties. Hydrogen's rarer isotopes also each have specific applications.
Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion
reactions. Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects.
Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label
in the biosciences, and as a radiation source in luminous paints. Hydrogen is commonly used in power stations as
a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules.
These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases. Hydrogen
is not an energy resource, except in the hypothetical context of commercial nuclear fusion power plants using deuterium
or tritium, a technology presently far from development. The Sun's energy comes from nuclear fusion of hydrogen,
but this process is difficult to achieve controllably on Earth. Elemental hydrogen from solar, biological, or electrical sources requires more energy to make than is obtained by burning it, so in these cases hydrogen functions as an energy carrier, like a battery. Hydrogen may be obtained from fossil sources (such as methane), but these sources
are unsustainable. The energy density per unit volume of both liquid hydrogen and compressed hydrogen gas at any
practicable pressure is significantly less than that of traditional fuel sources, although the energy density per
unit fuel mass is higher. Nevertheless, elemental hydrogen has been widely discussed in the context of energy, as
a possible future carrier of energy on an economy-wide scale. For example, CO2 sequestration followed by carbon capture and storage could be conducted at the point of H2 production from fossil fuels. Hydrogen used in transportation
would burn relatively cleanly, with some NOx emissions, but without carbon emissions. However, the infrastructure
costs associated with full conversion to a hydrogen economy would be substantial. Fuel cells can convert hydrogen
and oxygen directly to electricity more efficiently than internal combustion engines. Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties. It
is also a potential electron donor in various oxide materials, including ZnO, SnO2, CdO, MgO, ZrO2, HfO2, La2O3,
Y2O3, TiO2, SrTiO3, LaAlO3, SiO2, Al2O3, ZrSiO4, HfSiO4, and SrZrO3. H2 is a product of some types of anaerobic metabolism
and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes
called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its component two protons
and two electrons. Creation of hydrogen gas occurs in the transfer of reducing equivalents produced during pyruvate
fermentation to water. The natural cycle of hydrogen production and consumption by organisms is called the hydrogen
cycle. Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in
the light reactions in all photosynthetic organisms. Some such organisms, including the alga Chlamydomonas reinhardtii
and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to
form H2 gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial
hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen. Efforts have also been undertaken with
genetically modified alga in a bioreactor. Hydrogen poses a number of hazards to human safety, from potential detonations
and fires when mixed with air to being an asphyxiant in its pure, oxygen-free form. In addition, liquid hydrogen
is a cryogen and presents dangers (such as frostbite) associated with very cold liquids. Hydrogen dissolves in many
metals, and, in addition to leaking out, may have adverse effects on them, such as hydrogen embrittlement, leading
to cracks and explosions. Hydrogen gas leaking into external air may spontaneously ignite. Moreover, a hydrogen fire, while extremely hot, is almost invisible and thus can lead to accidental burns. Even interpreting the hydrogen
data (including safety data) is confounded by a number of phenomena. Many physical and chemical properties of hydrogen
depend on the parahydrogen/orthohydrogen ratio (it often takes days or weeks at a given temperature to reach the
equilibrium ratio, for which the data is usually given). Hydrogen detonation parameters, such as critical detonation
pressure and temperature, strongly depend on the container geometry.
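The temperature dependence of the ortho/para ratio described earlier can be sketched as a Boltzmann sum over rotational levels, assuming rigid-rotor energies E_J ∝ J(J+1) with a rotational temperature of about 87.6 K for H2 (an assumed textbook value):

```python
import math

# Sketch of the ortho/para equilibrium: para-H2 occupies even-J rotational
# levels and ortho-H2 odd-J levels with a threefold nuclear-spin
# degeneracy, so the equilibrium fraction follows from partition sums.

THETA_ROT = 87.6  # K, approximate rotational temperature of H2 (assumed)

def ortho_fraction(temp_k: float, j_max: int = 20) -> float:
    """Equilibrium fraction of ortho-H2 at temperature temp_k."""
    z_para = sum((2 * j + 1) * math.exp(-THETA_ROT * j * (j + 1) / temp_k)
                 for j in range(0, j_max, 2))       # even J: para
    z_ortho = 3 * sum((2 * j + 1) * math.exp(-THETA_ROT * j * (j + 1) / temp_k)
                      for j in range(1, j_max, 2))  # odd J: ortho, weight 3
    return z_ortho / (z_para + z_ortho)

print(f"{ortho_fraction(300.0):.2f}")  # ~0.75: the 3:1 "normal form"
print(f"{ortho_fraction(20.0):.4f}")   # almost pure parahydrogen
```

This reproduces both statements in the text: roughly 75% ortho at standard temperature, and almost exclusively para at liquid-hydrogen temperatures.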
The Space Race was a 20th-century competition between two Cold War rivals, the Soviet Union (USSR) and the United States
(US), for supremacy in spaceflight capability. It had its origins in the missile-based nuclear arms race between
the two nations that occurred following World War II, enabled by captured German rocket technology and personnel.
The technological superiority required for such supremacy was seen as necessary for national security, and symbolic
of ideological superiority. The Space Race spawned pioneering efforts to launch artificial satellites, unmanned space
probes of the Moon, Venus, and Mars, and human spaceflight in low Earth orbit and to the Moon. The competition began
on August 2, 1955, when the Soviet Union responded to the US announcement four days earlier of intent to launch artificial
satellites for the International Geophysical Year, by declaring they would also launch a satellite "in the near future".
The Soviet Union beat the US to this, with the October 4, 1957 orbiting of Sputnik 1, and later beat the US to the
first human in space, Yuri Gagarin, on April 12, 1961. The Space Race peaked with the July 20, 1969 US landing of
the first humans on the Moon with Apollo 11. The USSR attempted manned lunar missions but failed, and eventually cancelled them and concentrated on Earth-orbital space stations. A period of détente followed with the April 1972 agreement
on a co-operative Apollo–Soyuz Test Project, resulting in the July 1975 rendezvous in Earth orbit of a US astronaut
crew with a Soviet cosmonaut crew. The Space Race can trace its origins to Germany, beginning in the 1930s and continuing
during World War II when Nazi Germany researched and built operational ballistic missiles. Starting in the early
1930s, during the last stages of the Weimar Republic, German aerospace engineers experimented with liquid-fueled
rockets, with the goal that one day they would be capable of reaching high altitudes and traversing long distances.
The head of the German Army's Ballistics and Munitions Branch, Lieutenant Colonel Karl Emil Becker, gathered a small
team of engineers that included Walter Dornberger and Leo Zanssen, to figure out how to use rockets as long-range
artillery in order to get around the Treaty of Versailles' ban on research and development of long-range cannons.
Wernher von Braun, a young engineering prodigy, was recruited by Becker and Dornberger to join their secret army
program at Kummersdorf-West in 1932. Von Braun had dreams about conquering outer space with rockets, and did not
initially see the military value in missile technology. During the Second World War, General Dornberger was the military
head of the army's rocket program, Zanssen became the commandant of the Peenemünde army rocket centre, and von Braun
was the technical director of the ballistic missile program. They would lead the team that built the Aggregate-4
(A-4) rocket, which became the first vehicle to reach outer space during its test flight program in 1942 and 1943.
By 1943, Germany began mass-producing the A-4 as the Vergeltungswaffe 2 ("Vengeance Weapon" 2, or more commonly, V-2), a ballistic missile with a 320-kilometer (200 mi) range that carried a 1,130-kilogram (2,490 lb) warhead at 4,000 kilometers per hour (2,500 mph). Its supersonic speed meant there was no defense against it, and radar detection
provided little warning. Germany used the weapon to bombard southern England and parts of Allied-liberated western
Europe from 1944 until 1945. After the war, the V-2 became the basis of early American and Soviet rocket designs.
At war's end, American, British, and Soviet scientific intelligence teams competed to capture Germany's rocket engineers
along with the German rockets themselves and the designs on which they were based. Each of the Allies captured a
share of the available members of the German rocket team, but the United States benefited the most with Operation
Paperclip, recruiting von Braun and most of his engineering team, who later helped develop the American missile and
space exploration programs. The United States also acquired a large number of complete V-2 rockets. The German rocket
center in Peenemünde was located in the eastern part of Germany, which became the Soviet zone of occupation. On Stalin's
orders, the Soviet Union sent its best rocket engineers to this region to see what they could salvage for future
weapons systems. The Soviet rocket engineers were led by Sergei Korolev. He had been involved in space clubs and
early Soviet rocket design in the 1930s, but was arrested in 1938 during Joseph Stalin's Great Purge and imprisoned
for six years in Siberia. After the war, he became the USSR's chief rocket and spacecraft engineer, essentially the
Soviet counterpart to von Braun. His identity was kept a state secret throughout the Cold War, and he was identified
publicly only as "the Chief Designer." In the West, his name was only officially revealed when he died in 1966. After
almost a year in the area around Peenemünde, Soviet officials moved most of the captured German rocket specialists
to Gorodomlya Island on Lake Seliger, about 240 kilometers (150 mi) northwest of Moscow. They were not allowed to
participate in Soviet missile design, but were used as problem-solving consultants to the Soviet engineers. They
helped in the following areas: the creation of a Soviet version of the A-4; work on "organizational schemes"; research
in improving the A-4 main engine; development of a 100-ton engine; assistance in the "layout" of plant production
rooms; and preparation of rocket assembly using German components. With their help, particularly Helmut Groettrup's
group, Korolev reverse-engineered the A-4 and built his own version of the rocket, the R-1, in 1948. Later, he developed
his own distinct designs, though many of these designs were influenced by the Groettrup Group's G4-R10 design from
1949. The Germans were eventually repatriated in 1951–53. The American professor Robert H. Goddard had worked on
developing solid-fuel rockets since 1914, and demonstrated a light battlefield rocket to the US Army Signal Corps
only five days before the signing of the armistice that ended World War I. He also started developing liquid-fueled
rockets in 1921; yet he had not been taken seriously by the public, and was not sponsored by the government as part
of the post-WW II rocket development effort. Von Braun, himself inspired by Goddard's work, was bemused by this when
debriefed by his American handlers, asking them, "Why didn't you just ask Dr. Goddard?" Von Braun
and his team were sent to Fort Bliss, Texas, adjacent to the United States Army's White Sands Proving Ground in New Mexico, in 1945. They set about assembling the captured V-2s and began a program of launching them and instructing American engineers in
their operation. These tests led to the first rocket to take photos from outer space, and the first two-stage rocket,
the WAC Corporal–V-2 combination, in 1949. The German rocket team was moved from Fort Bliss to the Army's new Redstone
Arsenal, located in Huntsville, Alabama, in 1950. From here, von Braun and his team would develop the Army's first
operational medium-range ballistic missile, the Redstone rocket, which would, in slightly modified versions, launch both America's first satellite and the first piloted Mercury space missions. It became the basis for both the Jupiter
and Saturn family of rockets. In simple terms, the Cold War could be viewed as an expression of the ideological struggle
between communism and capitalism. The United States faced a new uncertainty beginning in September 1949, when it
lost its monopoly on the atomic bomb. American intelligence agencies discovered that the Soviet Union had exploded
its first atomic bomb, with the consequence that the United States potentially could face a future nuclear war that,
for the first time, might devastate its cities. Given this new danger, the United States participated in an arms
race with the Soviet Union that included development of the hydrogen bomb, as well as intercontinental strategic
bombers and intercontinental ballistic missiles (ICBMs) capable of delivering nuclear weapons. A new fear of communism
and its sympathizers swept the United States during the 1950s, which devolved into paranoid McCarthyism. With communism
spreading in China, Korea, and Eastern Europe, Americans came to feel so threatened that popular and political culture
condoned extensive "witch-hunts" to expose communist spies. Part of the American reaction to the Soviet atomic and
hydrogen bomb tests included maintaining a large Air Force, under the control of the Strategic Air Command (SAC).
SAC employed intercontinental strategic bombers, as well as medium-bombers based close to Soviet airspace (in western
Europe and in Turkey) that were capable of delivering nuclear payloads. For its part, the Soviet Union harbored fears
of invasion. Having suffered at least 27 million casualties during World War II after being invaded by Nazi Germany
in 1941, the Soviet Union was wary of its former ally, the United States, which until late 1949 was the sole possessor
of atomic weapons. The United States had used these weapons operationally during World War II, and it could use them
again against the Soviet Union, laying waste to its cities and military centers. Since the Americans had a much larger
air force than the Soviet Union, and the United States maintained advance air bases near Soviet territory, in 1947
Stalin ordered the development of intercontinental ballistic missiles (ICBMs) in order to counter the perceived American
threat. In 1953, Korolev was given the go-ahead to develop the R-7 Semyorka rocket, which represented a major advance
from the German design. Although some of its components (notably boosters) still resembled the German G-4, the new
rocket incorporated staged design, a completely new control system, and a new fuel. It was successfully tested on
August 21, 1957 and became the world's first fully operational ICBM the following month. It would later be used to
launch the first satellite into space, and derivatives would launch all piloted Soviet spacecraft. The United States
had multiple rocket programs divided among the different branches of the American armed services, which meant that
each force developed its own ICBM program. The Air Force initiated ICBM research in 1945 with the MX-774. However,
its funding was cancelled and only three partially successful launches were conducted in 1947. In 1950, von Braun
began testing the Army's PGM-11 Redstone rocket family at Cape Canaveral. In 1951, the Air Force began a new ICBM
program called MX-1593, and by 1955 this program was receiving top-priority funding. The MX-1593 program evolved
to become the Atlas-A, with its maiden launch occurring June 11, 1957, becoming the first successful American ICBM.
Its upgraded version, the Atlas-D rocket, would later serve as an operational nuclear ICBM and as the orbital launch
vehicle for Project Mercury and the remote-controlled Agena Target Vehicle used in Project Gemini. In 1955, with
both the United States and the Soviet Union building ballistic missiles that could be utilized to launch objects
into space, the "starting line" was drawn for the Space Race. In separate announcements, just four days apart, both
nations publicly announced that they would launch artificial Earth satellites by 1957 or 1958. On July 29, 1955,
James C. Hagerty, President Dwight D. Eisenhower's press secretary, announced that the United States intended to
launch "small Earth circling satellites" between July 1, 1957, and December 31, 1958, as part of their contribution
to the International Geophysical Year (IGY). Four days later, at the Sixth Congress of the International Astronautical
Federation in Copenhagen, scientist Leonid I. Sedov spoke to international reporters at the Soviet embassy, and announced
his country's intention to launch a satellite as well, in the "near future". On August 30, 1955, Korolev managed
to get the Soviet Academy of Sciences to create a commission whose purpose was to beat the Americans into Earth orbit:
this was the de facto start date for the Space Race. The Council of Ministers of the Soviet Union began a policy
of treating development of its space program as a classified state secret. Initially, President Eisenhower was worried
that a satellite passing above a nation at over 100 kilometers (62 mi) might be construed as violating that nation's
sovereign airspace. He was concerned that the Soviet Union would accuse the Americans of an illegal overflight, thereby
scoring a propaganda victory at his expense. Eisenhower and his advisors believed that a nation's airspace sovereignty
did not extend beyond the Kármán line, the acknowledged boundary of outer space, and he used the 1957–58 International Geophysical
Year launches to establish this principle in international law. Eisenhower also feared that he might cause an international
incident and be called a "warmonger" if he were to use military missiles as launchers. Therefore, he selected the
untried Naval Research Laboratory's Vanguard rocket, which was a research-only booster. This meant that von Braun's
team was not allowed to put a satellite into orbit with their Jupiter-C rocket, because of its intended use as a
future military vehicle. On September 20, 1956, von Braun and his team did launch a Jupiter-C that was capable of
putting a satellite into orbit, but the launch was used only as a suborbital test of nose cone reentry technology.
The Soviet success with Sputnik 1 caused public controversy in the United States, and Eisenhower ordered the civilian rocket and
satellite project, Vanguard, to move up its timetable and launch its satellite much sooner than originally planned.
The December 6, 1957 Project Vanguard launch attempt at Cape Canaveral Air Force Station in Florida, broadcast live to a US television audience, was a monumental failure: the rocket exploded a few seconds after launch, and the episode became an international joke. The satellite appeared in newspapers under the names Flopnik, Stayputnik, Kaputnik,
and Dudnik. In the United Nations, the Soviet delegate offered the U.S. representative aid "under the Soviet program
of technical assistance to backwards nations." Only in the wake of this very public failure did von Braun's Redstone
team get the go-ahead to launch their Jupiter-C rocket as soon as they could. In Britain, the USA's Western Cold
War ally, the reaction was mixed: some members of the population celebrated the fact that the Soviets had reached
space first, while others feared the destructive potential that military uses of spacecraft might bring. On January
31, 1958, nearly four months after the launch of Sputnik 1, von Braun's team successfully launched the United States' first satellite on a four-stage Juno I rocket derived from the US Army's Redstone missile, at Cape Canaveral.
The satellite Explorer 1 was 30.8 pounds (14.0 kg) in mass. It carried a micrometeorite gauge and a Geiger-Müller
tube. Its 194-by-1,368-nautical-mile (360 by 2,534 km) orbit carried it in and out of an Earth-encompassing radiation belt, at times saturating the tube's capacity and confirming what Dr. James Van Allen, a space scientist at the
University of Iowa, had theorized. The belt, named the Van Allen radiation belt, is a doughnut-shaped zone of high-level
radiation intensity around the Earth above the magnetic equator. Van Allen was also the man who designed and built
the satellite instrumentation of Explorer 1. The satellite actually measured three phenomena: cosmic ray and radiation
levels, the temperature in the spacecraft, and the frequency of collisions with micrometeorites. The satellite had
no memory for data storage, and therefore had to transmit continuously. Two months later, in March 1958, a second satellite
was sent into orbit with augmented cosmic ray instruments. On April 2, 1958, President Eisenhower reacted to the
Soviet space lead in launching the first satellite, by recommending to the US Congress that a civilian agency be
established to direct nonmilitary space activities. Congress, led by Senate Majority Leader Lyndon B. Johnson, responded
by passing the National Aeronautics and Space Act, which Eisenhower signed into law on July 29, 1958. This law turned
the National Advisory Committee for Aeronautics (NACA) into the National Aeronautics and Space Administration (NASA). It
also created a Civilian-Military Liaison Committee, chaired by the President, responsible for coordinating the nation's
civilian and military space programs. On October 21, 1959, Eisenhower approved the transfer of the Army's remaining
space-related activities to NASA. On July 1, 1960, the Redstone Arsenal became NASA's George C. Marshall Space Flight
Center, with von Braun as its first director. Development of the Saturn rocket family, which when mature, would finally
give the US parity with the Soviets in terms of lifting capability, was thus transferred to NASA. In 1958, Korolev
upgraded the R-7 to be able to launch a 400-kilogram (880 lb) payload to the Moon. Three secret 1958 attempts to
launch Luna E-1-class impactor probes failed. The fourth attempt, Luna 1, launched successfully on January 2, 1959,
but missed the Moon. The fifth attempt on June 18 also failed at launch. The 390-kilogram (860 lb) Luna 2 successfully
impacted the Moon on September 14, 1959. The 278.5-kilogram (614 lb) Luna 3 successfully flew by the Moon and sent
back pictures of its far side on October 6, 1959. The US reacted to the Luna program by embarking on the Ranger program
in 1959, managed by NASA's Jet Propulsion Laboratory. The Block I Ranger 1 and Ranger 2 suffered Atlas-Agena launch
failures in August and November 1961. The 727-pound (330 kg) Block II Ranger 3 launched successfully on January 26,
1962, but missed the Moon. The 730-pound (330 kg) Ranger 4 became the first US spacecraft to reach the Moon, but
its solar panels and navigational system failed near the Moon and it impacted the far side without returning any
scientific data. Ranger 5 ran out of power and missed the Moon by 725 kilometers (391 nmi) on October 21, 1962. The
first successful Ranger mission was the 806-pound (366 kg) Block III Ranger 7 which impacted on July 31, 1964. By
1959, American observers believed that the Soviet Union would be the first to get a human into space, because of
the time needed to prepare for Mercury's first launch. On April 12, 1961, the USSR surprised the world again by launching
Yuri Gagarin into a single orbit around the Earth in a craft they called Vostok 1. They dubbed Gagarin the first
cosmonaut, a term roughly translated from Russian and Greek as "sailor of the universe". Although he had the ability to
take over manual control of his spacecraft in an emergency by opening an envelope he had in the cabin that contained
a code that could be typed into the computer, it was flown in an automatic mode as a precaution; medical science
at that time did not know what would happen to a human in the weightlessness of space. Vostok 1 orbited the Earth
for 108 minutes and made its reentry over the Soviet Union, with Gagarin ejecting from the spacecraft at 7,000 meters
(23,000 ft), and landing by parachute. The Fédération Aéronautique Internationale (International Federation of Aeronautics)
credited Gagarin with the world's first human space flight, although their qualifying rules for aeronautical records
at the time required pilots to take off and land with their craft. For this reason, the Soviet Union omitted from
their FAI submission the fact that Gagarin did not land with his capsule. When the FAI filing for Gherman Titov's
second Vostok flight in August 1961 disclosed the ejection landing technique, the FAI committee decided to investigate,
and concluded that the technological accomplishment of human spaceflight lay in the safe launch, orbiting, and return,
rather than the manner of landing, and so revised their rules accordingly, keeping Gagarin's and Titov's records
intact. Gagarin became a national hero of the Soviet Union and the Eastern Bloc, and a worldwide celebrity. Moscow
and other cities in the USSR held mass demonstrations, the scale of which was second only to the World War II Victory
Parade of 1945. April 12 was declared Cosmonautics Day in the USSR, and is celebrated today in Russia as one of the
official "Commemorative Dates of Russia." In 2011, it was declared the International Day of Human Space Flight by
the United Nations. The US Air Force had been developing a program to launch the first man in space, named Man in
Space Soonest. This program studied several different types of one-man space vehicles, settling on a ballistic re-entry
capsule launched on a derivative Atlas missile, and selecting a group of nine candidate pilots. After NASA's creation,
the program was transferred over to the civilian agency and renamed Project Mercury on November 26, 1958. NASA selected
a new group of astronaut (from the Greek for "star sailor") candidates from Navy, Air Force and Marine test pilots,
and narrowed this down to a group of seven for the program. Capsule design and astronaut training began immediately,
working toward preliminary suborbital flights on the Redstone missile, followed by orbital flights on the Atlas.
Each flight series would first start unmanned, then carry a primate, then finally men. Three weeks after Gagarin's flight, on May
5, 1961, Alan Shepard became the first American in space, launched in a ballistic trajectory on Mercury-Redstone
3, in a spacecraft he named Freedom 7. Though he did not achieve orbit like Gagarin, he was the first person to exercise
manual control over his spacecraft's attitude and retro-rocket firing. After his successful return, Shepard was celebrated
as a national hero, honored with parades in Washington, New York and Los Angeles, and received the NASA Distinguished
Service Medal from President John F. Kennedy. Gagarin's flight had changed the political climate; Kennedy now sensed the humiliation and
fear on the part of the American public over the Soviet lead. He sent a memo dated April 20, 1961, to Vice President
Lyndon B. Johnson, asking him to look into the state of America's space program, and into programs that could offer
NASA the opportunity to catch up. The two major options at the time seemed to be, either establishment of an Earth
orbital space station, or a manned landing on the Moon. Johnson in turn consulted with von Braun, who answered Kennedy's
questions based on his estimates of US and Soviet rocket lifting capability. Based on this, Johnson responded to
Kennedy, concluding that much more was needed to reach a position of leadership, and recommending that the manned
Moon landing was far enough in the future that the US had a fighting chance to achieve it first. Kennedy ultimately
decided to pursue what became the Apollo program, and on May 25 took the opportunity to ask for Congressional support
in a Cold War speech titled "Special Message on Urgent National Needs". He justified the program in terms of its importance to national security, and its potential to focus the nation's energies on other scientific and social fields.
He rallied popular support for the program in his "We choose to go to the Moon" speech, on September 12, 1962, before
a large crowd at Rice University Stadium, in Houston, Texas, near the construction site of the new Manned Spacecraft
Center facility. American Virgil "Gus" Grissom repeated Shepard's suborbital flight in Liberty Bell 7 on
July 21, 1961. Almost a year after the Soviet Union put a human into orbit, astronaut John Glenn became the first
American to orbit the Earth, on February 20, 1962. His Mercury-Atlas 6 mission completed three orbits in the Friendship
7 spacecraft, and splashed down safely in the Atlantic Ocean, after a tense reentry, due to what falsely appeared
from the telemetry data to be a loose heat-shield. As the first American in orbit, Glenn became a national hero,
and received a ticker-tape parade in New York City, reminiscent of that given for Charles Lindbergh. On February
23, 1962, President Kennedy escorted him in a parade at Cape Canaveral Air Force Station, where he presented Glenn with the NASA Distinguished Service Medal. The United States launched three more Mercury flights after Glenn's: Aurora 7 on May
24, 1962 duplicated Glenn's three orbits; Sigma 7 on October 3, 1962, six orbits; and Faith 7 on May 15, 1963, 22
orbits (32.4 hours), the maximum capability of the spacecraft. NASA at first intended to launch one more mission,
extending the spacecraft's endurance to three days, but since this would not beat the Soviet record, it was decided
instead to concentrate on developing Project Gemini. Gherman Titov became the first Soviet cosmonaut to exercise
manual control of his Vostok 2 craft on August 6, 1961. The Soviet Union demonstrated 24-hour launch pad turnaround
and the capability to launch two piloted spacecraft, Vostok 3 and Vostok 4, in essentially identical orbits, on August
11 and 12, 1962. The two spacecraft came within approximately 6.5 kilometers (4.0 mi) of one another, close enough for radio communication, and Vostok 4 set a record of nearly four days in space. Though the two craft's orbits were as nearly identical as possible given the accuracy of the launch rocket's guidance system, slight variations drew them from that closest approach to as far apart as 2,850 kilometers (1,540 nautical miles). The Vostok had no maneuvering rockets to permit the space rendezvous required to keep two spacecraft a controlled distance apart. The Soviet Union duplicated its dual-launch
feat with Vostok 5 and Vostok 6 (June 16, 1963). This time they launched the first woman (also the first civilian),
Valentina Tereshkova, into space on Vostok 6. Launching a woman was reportedly Korolev's idea, and it was accomplished
purely for propaganda value. Tereshkova was one of a small corps of female cosmonauts who were amateur parachutists,
but Tereshkova was the only one to fly. The USSR did not again open its cosmonaut corps to women until 1980, two years
after the United States opened its astronaut corps to women. The Soviets kept the details and true appearance of
the Vostok capsule secret until the April 1965 Moscow Economic Exhibition, where it was first displayed without its
aerodynamic nose cone concealing the spherical capsule. The "Vostok spaceship" had been first displayed at the July
1961 Tushino air show, mounted on its launch vehicle's third stage, with the nose cone in place. A tail section with
eight fins was also added, in an apparent attempt to confuse western observers. This spurious tail section also appeared
on official commemorative stamps and a documentary. On September 20, 1963, in a speech before the United Nations
General Assembly, President Kennedy proposed that the United States and the Soviet Union join forces in their efforts
to reach the Moon. Soviet Premier Nikita Khrushchev initially rejected Kennedy's proposal. On October 2, 1997, it was reported that Khrushchev's son Sergei claimed Khrushchev had been poised to accept it at the time of Kennedy's assassination on November 22, 1963. In the weeks after Kennedy's speech, Khrushchev had reportedly concluded that both nations might realize cost benefits and technological gains from a joint venture, and decided to accept the offer based on a measure of rapport built during their years as leaders of the world's two superpowers; but after the assassination he dropped the idea, since he did not have the same trust in Kennedy's successor, Lyndon Johnson. As President, Johnson steadfastly
pursued the Gemini and Apollo programs, promoting them as Kennedy's legacy to the American public. One week after
Kennedy's death, he issued an executive order renaming the Cape Canaveral and Apollo launch facilities after Kennedy.
Focused by the commitment to a Moon landing, in January 1962 the US announced Project Gemini, a two-man spacecraft
that would support the later three-man Apollo by developing the key spaceflight technologies of space rendezvous
and docking of two craft, flight durations of sufficient length to simulate going to the Moon and back, and extra-vehicular
activity to accomplish useful work outside the spacecraft. The Soviet space program's greater advances at the time allowed it to achieve other significant firsts, including the first EVA "spacewalk" and the
first mission performed by a crew in shirt-sleeves. Gemini took a year longer than planned to accomplish its first
flight, allowing the Soviets to achieve another first, launching Voskhod 1 on October 12, 1964, the first spacecraft
with a three-cosmonaut crew. The USSR touted another technological achievement during this mission: it was the first
space flight during which cosmonauts performed in a shirt-sleeve environment. However, flying without spacesuits
was not due to safety improvements in the Soviet spacecraft's environmental systems; rather this innovation was accomplished
because the craft's limited cabin space did not allow for spacesuits. Flying without spacesuits exposed the cosmonauts
to significant risk in the event of potentially fatal cabin depressurization. This feat would not be repeated until
the US Apollo Command Module flew in 1968; this later mission was designed from the outset to safely transport three
astronauts in a shirt-sleeve environment while in space. Between October 14 and 16, 1964, Leonid Brezhnev and a small cadre of high-ranking Communist Party officials deposed Khrushchev as Soviet government leader, a day after Voskhod
1 landed, in what was called the "Wednesday conspiracy". The new political leaders, along with Korolev, ended the
technologically troublesome Voskhod program, cancelling Voskhod 3 and 4, which were in the planning stages, and started
concentrating on the race to the Moon. Voskhod 2 would end up being Korolev's final achievement before his death
on January 14, 1966, as it would become the last of the many space firsts that demonstrated the USSR's domination
in spacecraft technology during the early 1960s. According to historian Asif Siddiqi, Korolev's accomplishments marked
"the absolute zenith of the Soviet space program, one never, ever attained since." There would be a two-year pause
in Soviet piloted space flights while Voskhod's replacement, the Soyuz spacecraft, was designed and developed. On
March 18, 1965, about a week before the first American piloted Project Gemini space flight, the USSR accelerated
the competition by launching the two-cosmonaut Voskhod 2 mission with Pavel Belyayev and Alexey Leonov. Voskhod
2's design modifications included the addition of an inflatable airlock to allow for extravehicular activity (EVA),
also known as a spacewalk, while keeping the cabin pressurized so that the capsule's electronics wouldn't overheat.
Leonov performed the first-ever EVA as part of the mission. A fatality was narrowly avoided when Leonov's spacesuit
expanded in the vacuum of space, preventing him from re-entering the airlock. In order to overcome this, he had to
partially depressurize his spacesuit to a potentially dangerous level. He succeeded in safely re-entering the ship,
but he and Belyayev faced further challenges when the spacecraft's atmospheric controls flooded the cabin with oxygen, raising its concentration to 45%, which had to be lowered to acceptable levels before reentry. The reentry involved two more challenges:
an improperly timed retrorocket firing caused the Voskhod 2 to land 386 kilometers (240 mi) off its designated target
area, the city of Perm; and the instrument compartment's failure to detach from the descent apparatus caused the
spacecraft to become unstable during reentry. Most of the novice pilots on the early missions would command the later
missions. In this way, Project Gemini built up spaceflight experience for the pool of astronauts who would be chosen
to fly the Apollo lunar missions. The circumlunar program (Zond), created by Vladimir Chelomey's design bureau OKB-52,
was to fly two cosmonauts in a stripped-down Soyuz 7K-L1, launched by Chelomey's Proton UR-500 rocket. The Zond sacrificed
habitable cabin volume for equipment by omitting the Soyuz orbital module. Chelomey had gained favor with Khrushchev by employing members of Khrushchev's family. Korolev's lunar landing program was designated N1/L3, for its N1 superbooster
and a more advanced Soyuz 7K-L3 spacecraft, also known as the lunar orbital craft ("Lunniy Orbitalny Korabl", LOK),
with a crew of two. A separate lunar lander ("Lunniy Korabl", LK), would carry a single cosmonaut to the lunar surface.
The US and USSR began discussions on the peaceful uses of space as early as 1958, presenting issues for debate to
the United Nations, which created a Committee on the Peaceful Uses of Outer Space in 1959. On May 10, 1962, Vice
President Johnson addressed the Second National Conference on the Peaceful Uses of Space revealing that the United
States and the USSR both supported a resolution passed by the Political Committee of the UN General Assembly in December 1962, which not only urged member nations to "extend the rules of international law to outer space," but also to
cooperate in its exploration. Following the passage of this resolution, Kennedy began his communications proposing a cooperative American–Soviet space program. The UN ultimately created a Treaty on Principles Governing the Activities
of States in the Exploration and Use of Outer Space, including the Moon and Other Celestial Bodies, which was signed
by the United States, USSR, and United Kingdom on January 27, 1967, and entered into force the following October
10. In 1967, both nations faced serious challenges that brought their programs to temporary halts. Both had been
rushing at full speed toward the first piloted flights of Apollo and Soyuz, without paying sufficient attention to growing
design and manufacturing problems. The results proved fatal to both pioneering crews. On January 27, 1967, the same
day the US and USSR signed the Outer Space Treaty, the crew of the first manned Apollo mission, Command Pilot Virgil
"Gus" Grissom, Senior Pilot Edward H. White, and Pilot Roger Chaffee, were killed in a fire that swept through their
spacecraft cabin during a ground test, less than a month before the planned February 21 launch. An investigative
board determined the fire was probably caused by an electrical spark, and quickly grew out of control, fed by the
spacecraft's pure oxygen atmosphere. Crew escape was made impossible by inability to open the plug door hatch cover
against the greater-than-atmospheric internal pressure. The board also found design and construction flaws in the
spacecraft, and procedural failings, including failure to appreciate the hazard of the pure-oxygen atmosphere, as
well as inadequate safety procedures. All these flaws had to be corrected over the next twenty-two months before the
first piloted flight could be made. Mercury and Gemini veteran Grissom had been a favored choice of Deke Slayton,
NASA's Director of Flight Crew Operations, to make the first piloted landing. Meanwhile, the Soviet Union was having
its own problems with Soyuz development. Engineers reported 200 design faults to party leaders, but their concerns
"were overruled by political pressures for a series of space feats to mark the anniversary of Lenin's birthday."
On April 24, 1967, the single pilot of Soyuz 1, Vladimir Komarov, became the first in-flight spaceflight
fatality. The mission was planned to be a three-day test, to include the first Soviet docking with an unpiloted Soyuz
2, but the mission was plagued with problems. Early on, Komarov's craft lacked sufficient electrical power because
only one of two solar panels had deployed. Then the automatic attitude control system began malfunctioning and eventually
failed completely, resulting in the craft spinning wildly. Komarov was able to stop the spin with the manual system,
which was only partially effective. The flight controllers aborted his mission after only one day. During the emergency
re-entry, a fault in the landing parachute system caused the primary chute to fail, and the reserve chute became
tangled with the drogue chute; Komarov was killed on impact. Fixing the spacecraft faults caused an eighteen-month
delay before piloted Soyuz flights could resume. The United States recovered from the Apollo 1 fire, fixing the fatal
flaws in an improved version of the Block II command module. The US proceeded with unpiloted test launches of the
Saturn V launch vehicle (Apollo 4 and Apollo 6) and the Lunar Module (Apollo 5) during the latter half of 1967 and
early 1968. Apollo 1's mission to check out the Apollo Command/Service Module in Earth orbit was accomplished by
Grissom's backup crew commanded by Walter Schirra on Apollo 7, launched on October 11, 1968. The eleven-day mission
was a total success, as the spacecraft performed a virtually flawless mission, paving the way for the United States
to continue with its lunar mission schedule. The Soviet Union also fixed the parachute and control problems with
Soyuz, and the next piloted mission Soyuz 3 was launched on October 26, 1968. The goal was to complete Komarov's
rendezvous and docking mission with the un-piloted Soyuz 2. Ground controllers brought the two craft to within 200
meters (660 ft) of each other, then cosmonaut Georgy Beregovoy took control. He got within 40 meters (130 ft) of
his target, but was unable to dock before expending 90 percent of his maneuvering fuel, due to a piloting error that
put his spacecraft into the wrong orientation and forced Soyuz 2 to automatically turn away from his approaching
craft. The first docking of Soviet spacecraft was finally realised in January 1969 by the Soyuz 4 and Soyuz 5 missions.
It was the first-ever docking of two manned spacecraft, and the first transfer of crew from one space vehicle to
another. The Soviet Zond spacecraft was not yet ready for piloted circumlunar missions in 1968, after five unsuccessful
and partially successful automated test launches: Cosmos 146 on March 10, 1967; Cosmos 154 on April 8, 1967; Zond
1967A September 27, 1967; Zond 1967B on November 22, 1967. Zond 4 was launched on March 2, 1968, and successfully
made a circumlunar flight. After its successful flight around the Moon, Zond 4 encountered problems with its Earth
reentry on March 9, and was ordered destroyed by an explosive charge 15,000 meters (49,000 ft) over the Gulf of Guinea.
The Soviet official announcement said that Zond 4 was an automated test flight which ended with its intentional destruction,
due to its recovery trajectory positioning it over the Atlantic Ocean instead of over the USSR. During the summer
of 1968, the Apollo program hit another snag: the first pilot-rated Lunar Module (LM) was not ready for orbital tests
in time for a December 1968 launch. NASA planners overcame this challenge by changing the mission flight order, delaying
the first LM flight until March 1969, and sending Apollo 8 into lunar orbit without the LM in December. This mission
was in part motivated by intelligence rumors the Soviet Union might be ready for a piloted Zond flight during late
1968. In September 1968, Zond 5 made a circumlunar flight with tortoises on board and returned to Earth, accomplishing
the first successful water landing of the Soviet space program in the Indian Ocean. It also scared NASA planners,
as it took them several days to figure out that it was only an automated flight, not piloted, because voice recordings
were transmitted from the craft en route to the Moon. On November 10, 1968 another automated test flight, Zond 6
was launched, but this time encountered difficulties in its Earth reentry, and depressurized and deployed its parachute
too early, causing it to crash-land only 16 kilometers (9.9 mi) from where it had been launched six days earlier.
It turned out there was no chance of a piloted Soviet circumlunar flight during 1968, due to the unreliability of
the Zonds. On December 21, 1968, Frank Borman, James Lovell, and William Anders became the first humans to ride the
Saturn V rocket into space on Apollo 8. They also became the first to leave low-Earth orbit and go to another celestial
body, and entered lunar orbit on December 24. They made ten orbits in twenty hours, and transmitted one of the most
watched TV broadcasts in history, with their Christmas Eve program from lunar orbit, that concluded with a reading
from the biblical Book of Genesis. Two and a half hours after the broadcast, they fired their engine to perform the
first trans-Earth injection to leave lunar orbit and return to the Earth. Apollo 8 safely landed in the Pacific Ocean
on December 27, in NASA's first dawn splashdown and recovery. The American Lunar Module was finally ready for a successful
piloted test flight in low Earth orbit on Apollo 9 in March 1969. The next mission, Apollo 10, conducted a "dress
rehearsal" for the first landing in May 1969, flying the LM in lunar orbit as close as 47,400 feet (14.4 km) above
the surface, the point where the powered descent to the surface would begin. With the LM proven to work well, the
next step was to attempt the actual landing. Unknown to the Americans, the Soviet Moon program was in deep trouble.
After two successive launch failures of the N1 rocket in 1969, Soviet plans for a piloted landing suffered delay.
The launch pad explosion of the N1 on July 3, 1969, was a significant setback. The rocket hit the pad after an engine
shutdown, destroying itself and the launch facility. Without the N1 rocket, the USSR could not send a large enough
payload to the Moon to land a human and return him safely. Apollo 11 was prepared with the goal of a July landing
in the Sea of Tranquility. The crew, selected in January 1969, consisted of commander (CDR) Neil Armstrong, Command
Module Pilot (CMP) Michael Collins, and Lunar Module Pilot (LMP) Edwin "Buzz" Aldrin. They trained for the mission
until just before the actual launch day. On July 16, 1969, at exactly 9:32 am EDT, the Saturn V rocket, AS-506, lifted
off from Kennedy Space Center Launch Complex 39 in Florida. The trip to the Moon took just over three days. After
achieving lunar orbit, Armstrong and Aldrin transferred into the Lunar Module, named Eagle, and after a landing gear
inspection by Collins, who remained in the Command/Service Module Columbia, began their descent. After overcoming several computer
overload alarms caused by an antenna switch left in the wrong position, and a slight downrange error, Armstrong took
over manual flight control at about 180 meters (590 ft), and guided the Lunar Module to a safe landing spot at 20:18:04
UTC, July 20, 1969 (3:18:04 pm CDT). The first humans on the Moon would wait another six hours before they ventured
out of their craft. At 02:56 UTC, July 21 (9:56 pm CDT July 20), Armstrong became the first human to set foot on
the Moon. The first step was witnessed by at least one-fifth of the population of Earth, or about 723 million people.
His first words when he stepped off the LM's landing footpad were, "That's one small step for [a] man, one giant
leap for mankind." Aldrin joined him on the surface almost 20 minutes later. Altogether, they spent just under two
and one-quarter hours outside their craft. The next day, they performed the first launch from another celestial body,
and rendezvoused back with Columbia. Apollo 11 left lunar orbit and returned to Earth, landing safely in the Pacific
Ocean on July 24, 1969. When the spacecraft splashed down, 2,982 days had passed since Kennedy's commitment to landing
a man on the Moon and returning him safely to the Earth before the end of the decade; the mission was completed with
161 days to spare. With the safe completion of the Apollo 11 mission, the Americans won the race to the Moon. The
first landing was followed by another, precision landing on Apollo 12 in November 1969. NASA had achieved its first
landing goal with enough Apollo spacecraft and Saturn V launchers left for eight follow-on lunar landings through
Apollo 20, conducting extended-endurance missions and transporting the landing crews in Lunar Roving Vehicles on
the last five. They also planned an Apollo Applications Program to develop a longer-duration Earth orbital workshop
(later named Skylab) to be constructed in orbit from a spent S-IVB upper stage, using several launches of the smaller
Saturn IB launch vehicle. But planners soon decided this could be done more efficiently by using the two live stages
of a Saturn V to launch the workshop pre-fabricated from an S-IVB (which was also the Saturn V third stage), which
immediately removed Apollo 20. Belt-tightening budget cuts soon led NASA to cut Apollo 18 and 19 as well, but keep
three extended/Lunar Rover missions. Apollo 13 encountered an in-flight spacecraft failure and had to abort its lunar
landing in April 1970, returning its crew safely but temporarily grounding the program again. It resumed with four
successful landings on Apollo 14 (February 1971), Apollo 15 (July 1971), Apollo 16 (April 1972), and Apollo 17 (December
1972). Meanwhile, the USSR continued briefly trying to perfect their N1 rocket, finally canceling it in 1976, after
two more launch failures in 1971 and 1972. Having lost the race to the Moon, the USSR decided to concentrate on orbital
space stations. During 1969 and 1970, they launched six more Soyuz flights after Soyuz 3, then launched the first
space station, the Salyut 1 laboratory designed by Kerim Kerimov, on April 19, 1971. Three days later, the Soyuz
10 crew attempted to dock with it, but failed to achieve a secure enough connection to safely enter the station.
The Soyuz 11 crew of Vladislav Volkov, Georgi Dobrovolski and Viktor Patsayev successfully docked on June 7, and
completed a record 22-day stay. The crew died during their reentry on June 30, becoming the second in-flight space
fatality. They were asphyxiated when their spacecraft's cabin lost all pressure shortly after undocking. The disaster
was blamed on a faulty cabin pressure valve that allowed all the air to vent into space. The crew was not wearing
pressure suits and had no chance of survival once the leak occurred. Salyut 1's orbit was increased to prevent premature
reentry, but further piloted flights were delayed while the Soyuz was redesigned to fix the new safety problem. The
station re-entered the Earth's atmosphere on October 11, after 175 days in orbit. The USSR attempted to launch a
second Salyut-class station designated Durable Orbital Station-2 (DOS-2) on July 29, 1972, but a rocket failure caused
it to fail to achieve orbit. After the DOS-2 failure, the USSR attempted to launch four more Salyut-class stations
through 1975, with another failure due to an explosion of the final rocket stage, which punctured the station with
shrapnel so that it could not hold pressure. While all of the Salyuts were presented to the public as non-military
scientific laboratories, some of them were actually covers for the military Almaz reconnaissance stations. The United
States launched the orbital workstation Skylab 1 on May 14, 1973. It weighed 169,950 pounds (77,090 kg), was 58 feet
(18 m) long by 21.7 feet (6.6 m) in diameter, with a habitable volume of 10,000 cubic feet (280 m3). Skylab was damaged
during the ascent to orbit, losing one of its solar panels and a meteoroid thermal shield. Subsequent manned missions
repaired the station, and the final mission's crew, Skylab 4, set the Space Race endurance record with 84 days in
orbit when the mission ended on February 8, 1974. Skylab stayed in orbit another five years before reentering the
Earth's atmosphere over the Indian Ocean and Western Australia on July 11, 1979. In May 1972, President Richard M.
Nixon and Soviet Premier Leonid Brezhnev negotiated an easing of relations known as detente, creating a temporary
"thaw" in the Cold War. In the spirit of good sportsmanship, the time seemed right for cooperation rather than competition,
and the notion of a continuing "race" began to subside. The two nations planned a joint mission to dock the last
US Apollo craft with a Soyuz, known as the Apollo-Soyuz Test Project (ASTP). To prepare, the US designed a docking
module for the Apollo that was compatible with the Soviet docking system, which allowed any of their craft to dock
with any other (e.g. Soyuz/Soyuz as well as Soyuz/Salyut). The module was also necessary as an airlock to allow the
men to visit each other's craft, which had incompatible cabin atmospheres. The USSR used the Soyuz 16 mission in
December 1974 to prepare for ASTP. The joint mission began when Soyuz 19 was first launched on July 15, 1975 at 12:20
UTC, and the Apollo craft was launched with the docking module six and a half hours later. The two craft rendezvoused
and docked on July 17 at 16:19 UTC. The three astronauts conducted joint experiments with the two cosmonauts, and
the crews shook hands, exchanged gifts, and visited each other's craft. In the 1970s, the United States began developing
a new generation of reusable orbital spacecraft known as the Space Shuttle, and launched a range of unmanned probes.
The USSR continued to develop space station technology with the Salyut program and Mir ('Peace' or 'World', depending
on the context) space station, supported by Soyuz spacecraft. They developed their own large space shuttle under
the Buran program. However, the USSR dissolved in 1991, and the remains of its space program passed primarily to
Russia and other former Soviet republics. The United States and Russia would work together in space with the Shuttle–Mir Program,
and again with the International Space Station. The Russian R-7 rocket family, which launched the first Sputnik at
the beginning of the space race, is still in use today. It services the International Space Station (ISS) as the
launcher for both the Soyuz and Progress spacecraft. It also ferries both Russian and American crews to and from
the station. American concerns that they had fallen behind the Soviet Union in the race to space led quickly to a
push by legislators and educators for greater emphasis on mathematics and the physical sciences in American schools.
The United States' National Defense Education Act of 1958 increased funding for these goals from childhood education
through the post-graduate level.
A web browser (commonly referred to as a browser) is a software application for retrieving, presenting, and traversing information
resources on the World Wide Web. An information resource is identified by a Uniform Resource Identifier (URI/URL)
and may be a web page, image, video or other piece of content. Hyperlinks present in resources enable users easily
to navigate their browsers to related resources. Although browsers are primarily intended to use the World Wide Web,
they can also be used to access information provided by web servers in private networks or files in file systems.
The first web browser was invented in 1990 by Sir Tim Berners-Lee. Berners-Lee is the director of the World Wide
Web Consortium (W3C), which oversees the Web's continued development, and is also the founder of the World Wide Web
Foundation. His browser was called WorldWideWeb and later renamed Nexus. In 1993, browser software was further innovated
by Marc Andreessen with the release of Mosaic, "the world's first popular browser", which made the World Wide Web
system easy to use and more accessible to the average person. One of the first graphical web browsers, Mosaic sparked
the internet boom of the 1990s and led to an explosion in web use. Andreessen, the leader of the Mosaic team at the
National Center for Supercomputing Applications (NCSA), soon started
his own company, named Netscape, and released the Mosaic-influenced Netscape Navigator in 1994, which quickly became
the world's most popular browser, accounting for 90% of all web use at its peak (see usage share of web browsers).
Microsoft responded with its Internet Explorer in 1995, also heavily influenced by Mosaic, initiating the industry's
first browser war. Bundled with Windows, Internet Explorer gained dominance in the web browser market; Internet Explorer
usage share peaked at over 95% by 2002. Opera debuted in 1996; it has never achieved widespread use, having less
than 2% browser usage share as of February 2012 according to Net Applications. Its Opera Mini version held an additional
share, amounting to 1.1% of overall browser use in April 2011, and is focused on the fast-growing mobile phone web browser
market, being preinstalled on over 40 million phones. It is also available on several other embedded systems, including
Nintendo's Wii video game console. In 1998, Netscape launched what was to become the Mozilla Foundation in an attempt
to produce a competitive browser using the open source software model. That browser would eventually evolve into
Firefox, which developed a respectable following while still in the beta stage of development; shortly after the
release of Firefox 1.0 in late 2004, Firefox (all versions) accounted for 7% of browser use. As of August 2011, Firefox
has a 28% usage share. Apple's Safari had its first beta release in January 2003; as of April 2011, it had a dominant
share of Apple-based web browsing, accounting for just over 7% of the entire browser market. The most recent major
entrant to the browser market is Chrome, first released in September 2008. Chrome's take-up has increased significantly
year by year, doubling its usage share from 8% to 16% by August 2011. This increase seems largely to be at the
expense of Internet Explorer, whose share has tended to decrease from month to month. In December 2011, Chrome overtook
Internet Explorer 8 as the most widely used web browser but still had lower usage than all versions of Internet Explorer
combined. Chrome's user-base continued to grow and in May 2012, Chrome's usage passed the usage of all versions of
Internet Explorer combined. By April 2014, Chrome's usage had hit 45%. Internet Explorer, on the other hand, was
bundled free with the Windows operating system (and was also downloadable free), and therefore it was funded partly
by the sales of Windows to computer manufacturers and direct to users. Internet Explorer also used to be available
for the Mac. It is likely that releasing IE for the Mac was part of Microsoft's overall strategy to fight threats
to its quasi-monopoly platform dominance - threats such as web standards and Java - by making some web developers,
or at least their managers, assume that there was "no need" to develop for anything other than Internet Explorer.
In this respect, IE may have contributed to Windows and Microsoft applications sales in another way, through "lock-in"
to Microsoft's browser. In January 2009, the European Commission announced it would investigate the bundling of Internet
Explorer with Windows operating systems from Microsoft, saying "Microsoft's tying of Internet Explorer to the Windows
operating system harms competition between web browsers, undermines product innovation and ultimately reduces consumer
choice." (See Microsoft Corp v Commission.) Safari and Mobile Safari were likewise always included with OS X and iOS respectively,
so, similarly, they were originally funded by sales of Apple computers and mobile devices, and formed part of the
overall Apple experience to customers. Today, most commercial web browsers are paid by search engine companies to
make their engine the default, or to include them as another option. For example, Google pays Mozilla, the maker of Firefox,
to make Google Search the default search engine in Firefox. Mozilla makes enough money from this deal that it does
not need to charge users for Firefox. In addition, Google Search is also (as one would expect) the default search
engine in Google Chrome. Users searching for websites or items on the Internet would be led to Google's search results
page, increasing the ad revenue that funds development at Google, including development of Google Chrome. The primary purpose of a
web browser is to bring information resources to the user ("retrieval" or "fetching"), allowing them to view the
information ("display", "rendering"), and then access other information ("navigation", "following links"). This process
begins when the user inputs a Uniform Resource Locator (URL), for example http://en.wikipedia.org/, into the browser.
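This dispatch on the URL's prefix can be sketched with Python's standard urllib.parse module; the handler names in the table below are illustrative placeholders, not the internals of any real browser:

```python
from urllib.parse import urlparse

# Illustrative mapping from URI scheme to the component that would handle it.
# These handler names are placeholders invented for this sketch.
HANDLERS = {
    "http": "network (HTTP)",
    "https": "network (HTTPS)",
    "ftp": "network (FTP)",
    "file": "local filesystem",
    "mailto": "external e-mail application",
    "news": "external newsgroup reader",
}

def dispatch(url: str) -> str:
    """Pick a handler for a URL based on its scheme prefix."""
    scheme = urlparse(url).scheme
    return HANDLERS.get(scheme, "unrecognized scheme")

print(dispatch("http://en.wikipedia.org/"))  # network (HTTP)
print(dispatch("mailto:user@example.org"))   # external e-mail application
```

Schemes the browser cannot handle itself (such as mailto: and news:) map to external applications, matching the hand-off described in the text.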
The prefix of the URL, known as the URI scheme, determines how the URL will be interpreted. The most
commonly used kind of URI starts with http: and identifies a resource to be retrieved over the Hypertext Transfer
Protocol (HTTP). Many browsers also support a variety of other prefixes, such as https: for HTTPS, ftp: for the File
Transfer Protocol, and file: for local files. Prefixes that the web browser cannot directly handle are often handed
off to another application entirely. For example, mailto: URIs are usually passed to the user's default e-mail application,
and news: URIs are passed to the user's default newsgroup reader. In the case of http, https, file, and others, once
the resource has been retrieved the web browser will display it. HTML and associated content (image files, formatting
information such as CSS, etc.) is passed to the browser's layout engine to be transformed from markup to an interactive
document, a process known as "rendering". Aside from HTML, web browsers can generally display any kind of content
that can be part of a web page. Most browsers can display images, audio, video, and XML files, and often have plug-ins
to support Flash applications and Java applets. Upon encountering a file of an unsupported type or a file that is
set up to be downloaded rather than displayed, the browser prompts the user to save the file to disk. Information
resources may contain hyperlinks to other information resources. Each link contains the URI of a resource to go to.
When a link is clicked, the browser navigates to the resource indicated by the link's target URI, and the process
of bringing content to the user begins again. Available web browsers range in features from minimal, text-based user
interfaces with bare-bones support for HTML to rich user interfaces supporting a wide variety of file formats and
protocols. Browsers which include additional components to support e-mail, Usenet news, and Internet Relay Chat (IRC),
are sometimes referred to as "Internet suites" rather than merely "web browsers". All major web browsers allow the
user to open multiple information resources at the same time, either in different browser windows or in different
tabs of the same window. Major browsers also include pop-up blockers to prevent unwanted windows from "popping up"
without the user's consent. A browser extension is a computer program that extends the functionality of a web browser.
Every major web browser supports the development of browser extensions. Most web browsers can display a list of web
pages that the user has bookmarked so that the user can quickly return to them. Bookmarks are also called "Favorites"
in Internet Explorer. In addition, all major web browsers have some form of built-in web feed aggregator. In Firefox,
web feeds are formatted as "live bookmarks" and behave like a folder of bookmarks corresponding to recent entries
in the feed. In Opera, a more traditional feed reader is included which stores and displays the contents of the feed.
Most browsers support HTTP Secure and offer quick and easy ways to delete the web cache, download history, form and
search history, cookies, and browsing history. For a comparison of the current security vulnerabilities of browsers,
see comparison of web browsers. Early web browsers supported only a very simple version of HTML. The rapid development
of proprietary web browsers led to the development of non-standard dialects of HTML, leading to problems with interoperability.
Modern web browsers support a combination of standards-based and de facto HTML and XHTML, which should be rendered
in the same way by all browsers. Web browsers consist of a user interface, layout engine, rendering engine, JavaScript
interpreter, UI backend, networking component, and data persistence component. Each component handles a different
function, and together they provide all the capabilities of a web browser.
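The navigation step described earlier, following a hyperlink's target URI and resolving relative references against the current page's URL, can be sketched with Python's standard library. The HTML snippet and URLs here are hypothetical examples:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects hyperlink targets, resolved against the page's own URL."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # A relative href is resolved against the current page's URL,
                    # just as a browser does before navigating to the link.
                    self.links.append(urljoin(self.base_url, value))

# Hypothetical page content and base URL for illustration.
page = '<p><a href="/wiki/URL">URL</a> and <a href="http://example.org/">ext</a></p>'
parser = LinkExtractor("http://en.wikipedia.org/wiki/Web_browser")
parser.feed(page)
print(parser.links)
# ['http://en.wikipedia.org/wiki/URL', 'http://example.org/']
```

A real layout engine does far more (building a document tree, applying CSS, executing scripts), but the resolve-then-fetch cycle for each clicked link follows this pattern.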
The BeiDou Navigation Satellite System (BDS, simplified Chinese: 北斗卫星导航系统; traditional Chinese: 北斗衛星導航系統; pinyin: Běidǒu
wèixīng dǎoháng xìtǒng) is a Chinese satellite navigation system. It consists of two separate satellite constellations
– a limited test system that has been operating since 2000, and a full-scale global navigation system that is currently
under construction. The first BeiDou system, officially called the BeiDou Satellite Navigation Experimental System
(simplified Chinese: 北斗卫星导航试验系统; traditional Chinese: 北斗衛星導航試驗系統; pinyin: Běidǒu wèixīng dǎoháng shìyàn xìtǒng) and
also known as BeiDou-1, consists of three satellites and offers limited coverage and applications. It has been offering
navigation services, mainly for customers in China and neighboring regions, since 2000. The second generation of
the system, officially called the BeiDou Navigation Satellite System (BDS) and also known as COMPASS or BeiDou-2,
will be a global satellite navigation system consisting of 35 satellites, and is under construction as of January
2015. It became operational in China in December 2011, with 10 satellites in use, and began offering services
to customers in the Asia-Pacific region in December 2012. It is planned to begin serving global customers upon its
completion in 2020. In mid-2015, China started the build-up of the third-generation BeiDou system (BDS-3) for the
global coverage constellation. The first BDS-3 satellite was launched on 30 September 2015. As of March 2016, four BDS-3
in-orbit validation satellites have been launched. According to China Daily, fifteen years after the satellite system
was launched, it is now generating $31.5 billion for major companies such as China Aerospace Science and Industry
Corp, AutoNavi Holdings Ltd, and China North Industries Group Corp. The official English name of the system is BeiDou
Navigation Satellite System. It is named after the Big Dipper constellation, which is known in Chinese as Běidǒu.
The name literally means "Northern Dipper", the name given by ancient Chinese astronomers to the seven brightest
stars of the Ursa Major constellation. Historically, this set of stars was used in navigation to locate the North
Star Polaris. As such, the name BeiDou also serves as a metaphor for the purpose of the satellite navigation system.
The original idea of a Chinese satellite navigation system was conceived by Chen Fangyun and his colleagues in the
1980s. According to the China National Space Administration, the development of the system would be carried out in
three steps. The first satellite, BeiDou-1A, was launched on 30 October 2000, followed by BeiDou-1B on 20 December
2000. The third satellite, BeiDou-1C (a backup satellite), was put into orbit on 25 May 2003. The successful launch
of BeiDou-1C also meant the establishment of the BeiDou-1 navigation system. On 2 November 2006, China announced
that from 2008 BeiDou would offer an open service with an accuracy of 10 meters, timing of 0.2 microseconds, and
speed of 0.2 meters/second. In February 2007, the fourth and last satellite of the BeiDou-1 system,
BeiDou-1D (sometimes called BeiDou-2A, serving as a backup satellite), was sent up into space. It was reported that
the satellite had suffered from a control system malfunction but was then fully restored. In April 2007, the first
satellite of BeiDou-2, namely Compass-M1 (to validate frequencies for the BeiDou-2 constellation) was successfully
put into its working orbit. The second BeiDou-2 constellation satellite Compass-G2 was launched on 15 April 2009.
On 15 January 2010, the official website of the BeiDou Navigation Satellite System went online, and the system's
third satellite (Compass-G1) was carried into its orbit by a Long March 3C rocket on 17 January 2010. On 2 June 2010,
the fourth satellite was launched successfully into orbit. The fifth orbiter was launched into space from Xichang
Satellite Launch Center by an LM-3A carrier rocket on 1 August 2010. Three months later, on 1 November 2010, the
sixth satellite was sent into orbit by LM-3C. Another satellite, the Beidou-2/Compass IGSO-5 (fifth inclined geosynchronous
orbit) satellite, was launched from the Xichang Satellite Launch Center by a Long March-3A on 1 December 2011 (UTC).
In September 2003, China intended to join the European Galileo positioning system project and was to invest €230
million (US$296 million; £160 million) in Galileo over the next few years. At the time, it was believed that China's
"BeiDou" navigation system would then only be used by its armed forces. In October 2004, China officially joined
the Galileo project by signing the Agreement on the Cooperation in the Galileo Program between the "Galileo Joint
Undertaking" (GJU) and the "National Remote Sensing Centre of China" (NRSCC). Based on the Sino-European Cooperation
Agreement on the Galileo program, China Galileo Industries (CGI), the prime contractor for China's involvement in
the Galileo program, was founded in December 2004. By April 2006, eleven cooperation projects within the Galileo framework
had been signed between China and the EU. However, the Hong Kong-based South China Morning Post reported in January 2008
that China was unsatisfied with its role in the Galileo project and was to compete with Galileo in the Asian market.
BeiDou-1 is an experimental regional navigation system, which consists of four satellites (three working satellites
and one backup satellite). The satellites themselves were based on the Chinese DFH-3 geostationary communications
satellite and had a launch weight of 1,000 kilograms (2,200 pounds) each. Unlike the American GPS, Russian GLONASS,
and European Galileo systems, which use medium Earth orbit satellites, BeiDou-1 uses satellites in geostationary
orbit. This means that the system does not require a large constellation of satellites, but it also limits the coverage
to areas on Earth where the satellites are visible. The area that can be serviced is from longitude 70°E to 140°E
and from latitude 5°N to 55°N. The system operates at a frequency of 2491.75 MHz. The first satellite, BeiDou-1A, was launched
on October 31, 2000. The second satellite, BeiDou-1B, was successfully launched on December 21, 2000. The last operational
satellite of the constellation, BeiDou-1C, was launched on May 25, 2003. In 2007, the official Xinhua News Agency
reported that the resolution of the BeiDou system was as high as 0.5 metres. With the existing user terminals it
appears that the calibrated accuracy is 20 m (100 m uncalibrated). In 2008, a BeiDou-1 ground terminal cost around
CN¥20,000 (US$2,929), almost 10 times the price of a contemporary GPS terminal. The price of the terminals was
explained as being due to the cost of imported microchips. At the China High-Tech Fair ELEXCON of November 2009 in
Shenzhen, a BeiDou terminal priced at CN¥3,000 was presented. According to Sun Jiadong, the chief designer of
the navigation system, "Many organizations have been using our system for a while, and they like it very much." BeiDou-2
(formerly known as COMPASS) is not an extension to the older BeiDou-1, but rather supersedes it outright. The new
system will be a constellation of 35 satellites, which include 5 geostationary orbit satellites for backward compatibility
with BeiDou-1, and 30 non-geostationary satellites (27 in medium Earth orbit and 3 in inclined geosynchronous orbit),
that will offer complete coverage of the globe. The ranging signals are based on the CDMA principle and have a complex
structure typical of Galileo or modernized GPS. Similar to other GNSS systems, there will be two levels of positioning
service: open and restricted (military). The public service shall be available globally to general users. When all
the currently planned GNSS systems are deployed, users will benefit from a total constellation of
75+ satellites, which will significantly improve all aspects of positioning, especially the availability of signals
in so-called urban canyons. The general designer of the COMPASS navigation system is Sun Jiadong, who is also the
general designer of its predecessor, the original BeiDou navigation system. There are two levels of service provided
— a free service to civilians and licensed service to the Chinese government and military. The free civilian service
has a 10-meter location-tracking accuracy, synchronizes clocks with an accuracy of 10 nanoseconds, and measures speeds
to within 0.2 m/s. The restricted military service has a location accuracy of 10 centimetres, can be used for communication,
and will supply information about the system status to the user. To date, the military service has been granted only
to the People's Liberation Army and to the Military of Pakistan. Frequencies for COMPASS are allocated in four bands:
E1, E2, E5B, and E6 and overlap with Galileo. The fact of overlapping could be convenient from the point of view
of the receiver design, but on the other hand raises the issues of inter-system interference, especially within E1
and E2 bands, which are allocated for Galileo's publicly regulated service. However, under International Telecommunication
Union (ITU) policies, the first nation to start broadcasting in a specific frequency will have priority to that frequency,
and any subsequent users will be required to obtain permission prior to using that frequency, and otherwise ensure
that their broadcasts do not interfere with the original nation's broadcasts. It now appears that Chinese COMPASS
satellites will start transmitting in the E1, E2, E5B, and E6 bands before Europe's Galileo satellites and thus have
primary rights to these frequency ranges. Although little was officially announced by Chinese authorities about the
signals of the new system, the launch of the first COMPASS satellite permitted independent researchers not only to
study general characteristics of the signals, but even to build a COMPASS receiver. Compass-M1 is an experimental
satellite launched for signal testing and validation and for the frequency filing on 14 April 2007. The role of Compass-M1
for Compass is similar to the role of the GIOVE satellites for the Galileo system. The orbit of Compass-M1 is nearly
circular, has an altitude of 21,150 km and an inclination of 55.5 degrees. Compass-M1 transmits in 3 bands: E2, E5B,
and E6. In each frequency band two coherent sub-signals have been detected with a phase shift of 90 degrees (in quadrature).
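The 90° ("in quadrature") relationship between the two sub-signals is what lets a receiver separate them: a cosine and a sine carrier at the same frequency are orthogonal over a whole number of cycles, so independent codes can share the band. A minimal, stdlib-only Python sketch illustrates the principle; the cycle count and sample count here are arbitrary illustration values, not actual BeiDou signal parameters:

```python
import math

# Two carriers at the same frequency, 90 degrees apart (the "I" and "Q" components).
freq_cycles = 4          # whole number of carrier cycles in the window (illustrative)
n_samples = 1000         # samples across the window (illustrative)

t = [i / n_samples for i in range(n_samples)]
i_carrier = [math.cos(2 * math.pi * freq_cycles * x) for x in t]
q_carrier = [math.sin(2 * math.pi * freq_cycles * x) for x in t]

def inner(a, b):
    """Normalized inner product of two sampled signals."""
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Each carrier correlates with itself (average power 0.5) but not with the
# other, so a receiver can recover the I and Q codes independently.
print(round(inner(i_carrier, i_carrier), 3))       # ~0.5
print(round(inner(q_carrier, q_carrier), 3))       # ~0.5
print(round(abs(inner(i_carrier, q_carrier)), 3))  # ~0.0
```

The same orthogonality argument is why the "I" and "Q" components can carry codes of different lengths, as described below, without interfering with each other.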
These signal components are further referred to as "I" and "Q". The "I" components have shorter codes and are likely
to be intended for the open service. The "Q" components have much longer codes, are more interference resistive,
and are probably intended for the restricted service. IQ modulation has been a standard method in both wired and wireless
digital modulation since the Morse-keyed carrier signals of a century ago. The investigation of the transmitted signals started
immediately after the launch of Compass-M1 on 14 April 2007. Soon after, in June 2007, engineers at CNES reported
the spectrum and structure of the signals. A month later, researchers from Stanford University reported the complete
decoding of the “I” signal components. The knowledge of the codes allowed a group of engineers at Septentrio to
build a COMPASS receiver and report tracking and multipath characteristics of the “I” signals on E2 and E5B. Characteristics
of the "I" signals on E2 and E5B are generally similar to the civilian codes of GPS (L1-CA and L2C), but Compass
signals have somewhat greater power. The notation of Compass signals used on this page follows the naming of the
frequency bands and agrees with the notation used in the American literature on the subject, but the notation used
by the Chinese seems to be different and is quoted in the first row of the table. In December 2011, the system went
into operation on a trial basis. It has started providing navigation, positioning and timing data to China and the
neighbouring area for free from 27 December. During this trial run, Compass will offer positioning accuracy to within
25 meters, but the precision will improve as more satellites are launched. Upon the system's official launch, it
pledged to offer general users positioning information accurate to the nearest 10 m, measure speeds within 0.2 m
per second, and provide signals for clock synchronisation accurate to 0.02 microseconds. The BeiDou-2 system began
offering services for the Asia-Pacific region in December 2012. At this time, the system could provide positioning
data between longitude 55°E to 180°E and from latitude 55°S to 55°N. In December 2011, Xinhua stated that "[t]he
basic structure of the Beidou system has now been established, and engineers are now conducting comprehensive system
test and evaluation. The system will provide test-run services of positioning, navigation and time for China and
the neighboring areas before the end of this year, according to the authorities." The system became operational in
the China region that same month. The global navigation system is expected to be finished by 2020. As of December 2012, 16
satellites for BeiDou-2 had been launched, 14 of them in service. The first satellite of the second-generation
system, Compass-M1, was launched in 2007. It was followed by nine further satellites during 2009–2011, achieving functional
regional coverage. A total of 16 satellites were launched during this phase. In 2015, the system began its transition
towards global coverage with the first launch of a new-generation of satellites, and the 17th one within the new
system. On July 25, 2015, the 18th and 19th satellites were successfully launched from the Xichang Satellite Launch
Center, marking the first time for China to launch two satellites at once on top of a Long March 3B/Expedition-1
carrier rocket. The Expedition-1 is an independent upper stage capable of delivering one or more spacecraft into
different orbits. The three latest satellites will jointly undergo testing of a new system of navigation signaling
and inter-satellite links, and start providing navigation services when ready.
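As a closing aside, the Compass-M1 orbit figures quoted above (a near-circular orbit at 21,150 km altitude) are enough to recover its orbital period from Kepler's third law, T = 2π√(a³/μ). A rough sketch, using standard textbook values for Earth's gravitational parameter and equatorial radius:

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_378.137       # km, Earth's equatorial radius

altitude_km = 21_150                  # Compass-M1 altitude as quoted in the text
a = R_EARTH + altitude_km             # semi-major axis of a near-circular orbit
period_s = 2 * math.pi * math.sqrt(a**3 / MU_EARTH)

print(round(period_s / 3600, 1))      # orbital period in hours, ~12.6
```

A period of roughly half a sidereal day is characteristic of medium Earth orbit navigation satellites, consistent with the MEO constellations of GPS, GLONASS, and Galileo mentioned earlier.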
Canon law is the body of laws and regulations made by ecclesiastical authority (Church leadership), for the government of
a Christian organization or church and its members. It is the internal ecclesiastical law governing the Catholic
Church (both Latin Church and Eastern Catholic Churches), the Eastern and Oriental Orthodox churches, and the individual
national churches within the Anglican Communion. The way that such church law is legislated, interpreted and at times
adjudicated varies widely among these three bodies of churches. In all three traditions, a canon was originally a
rule adopted by a church council; these canons formed the foundation of canon law. The word derives from Greek kanon / Ancient Greek: κανών
(cf. Arabic qanun / قانون, Hebrew kaneh / קנה), "straight"; a rule, code, standard, or measure; the root meaning in all
these languages is "reed" (cf. the Romance-language ancestors of the English word "cane"). The Apostolic Canons or
Ecclesiastical Canons of the Same Holy Apostles is a collection of ancient ecclesiastical decrees (eighty-five in
the Eastern, fifty in the Western Church) concerning the government and discipline of the Early Christian Church,
incorporated with the Apostolic Constitutions, which are part of the Ante-Nicene Fathers. In the fourth century, the
First Council of Nicaea (325) called the disciplinary measures of the Church canons: the term canon, κανών, means
in Greek a rule. There is a very early distinction between the rules enacted by the Church and the legislative measures
taken by the State called leges, Latin for laws. In the Catholic Church, canon law is the system of laws and legal
principles made and enforced by the Church's hierarchical authorities to regulate its external organization and government
and to order and direct the activities of Catholics toward the mission of the Church. Roman Catholic canon
law also covers the five main rites (groups) of churches which are in full union with the Roman Catholic Church
and the Supreme Pontiff. In the Roman Church, universal positive ecclesiastical laws, based upon either immutable
divine and natural law, or changeable circumstantial and merely positive law, derive formal authority and promulgation
from the office of pope, who as Supreme Pontiff possesses the totality of legislative, executive, and judicial power
in his person. The actual subject material of the canons is not just doctrinal or moral in nature, but all-encompassing
of the human condition. The Catholic Church has what is claimed to be the oldest continuously functioning internal
legal system in Western Europe, much later than Roman law but predating the evolution of modern European civil law
traditions. What began with rules ("canons") adopted by the Apostles at the Council of Jerusalem in the first century
has developed into a highly complex legal system encapsulating not just norms of the New Testament, but some elements
of the Hebrew (Old Testament), Roman, Visigothic, Saxon, and Celtic legal traditions. The history of Latin canon
law can be divided into four periods: the jus antiquum, the jus novum, the jus novissimum and the Code of Canon Law.
In relation to the Code, history can be divided into the jus vetus (all law before the Code) and the jus novum (the
law of the Code, or jus codicis). The canon law of the Eastern Catholic Churches, which had developed some different
disciplines and practices, underwent its own process of codification, resulting in the Code of Canons of the Eastern
Churches promulgated in 1990 by Pope John Paul II. It is a fully developed legal system, with all the necessary elements:
courts, lawyers, judges, a fully articulated legal code, principles of legal interpretation, and coercive penalties,
though it lacks civilly-binding force in most secular jurisdictions. The academic degrees in canon law are the J.C.B.
(Juris Canonici Baccalaureatus, Bachelor of Canon Law, normally taken as a graduate degree), J.C.L. (Juris Canonici
Licentiatus, Licentiate of Canon Law) and the J.C.D. (Juris Canonici Doctor, Doctor of Canon Law). Because of its
specialized nature, advanced degrees in civil law or theology are normal prerequisites for the study of canon law.
Much of the legislative style was adapted from the Roman Law Code of Justinian. As a result, Roman ecclesiastical
courts tend to follow the Roman Law style of continental Europe with some variation, featuring collegiate panels
of judges and an investigative form of proceeding, called "inquisitorial", from the Latin "inquirere", to enquire.
This is in contrast to the adversarial form of proceeding found in the common law system of English and U.S. law,
which features such things as juries and single judges. The institutions and practices of canon law paralleled the
legal development of much of Europe, and consequently both modern civil law and common law (legal system) bear the
influences of canon law. Edson Luiz Sampel, a Brazilian expert in canon law, says that canon law is contained in
the genesis of various institutes of civil law, such as the law in continental Europe and Latin American countries.
Sampel explains that canon law has significant influence in contemporary society. Canonical jurisprudential theory
generally follows the principles of Aristotelian-Thomistic legal philosophy. While the term "law" is never explicitly
defined in the Code, the Catechism of the Catholic Church cites Aquinas in defining law as "...an ordinance of reason
for the common good, promulgated by the one who is in charge of the community" and reformulates it as "...a rule
of conduct enacted by competent authority for the sake of the common good." The law of the Eastern Catholic Churches
in full union with Rome was in much the same state as that of the Latin or Western Church before 1917; much more
diversity in legislation existed in the various Eastern Catholic Churches. Each had its own special law, in which
custom still played an important part. In 1929 Pius XI informed the Eastern Churches of his intention to work out
a Code for the whole of the Eastern Church. The publication of these Codes for the Eastern Churches regarding the
law of persons took place between 1949 and 1958, but the work was finalized nearly 30 years later. The first Code of Canon Law,
1917, was mostly for the Roman Rite, with limited application to the Eastern Churches. After the Second Vatican Council
(1962–1965), another edition was published specifically for the Roman Rite in 1983. Most recently, in 1990, the Vatican
produced the Code of Canons of the Eastern Churches, which became the first code of Eastern Catholic canon law. The
Greek-speaking Orthodox have collected canons and commentaries upon them in a work known as the Pēdálion (Greek:
Πηδάλιον, "Rudder"), so named because it is meant to "steer" the Church. The Orthodox Christian tradition in general
treats its canons more as guidelines than as laws, the bishops adjusting them to cultural and other local circumstances.
Some Orthodox canon scholars point out that, had the Ecumenical Councils (which deliberated in Greek) meant for the
canons to be used as laws, they would have called them nómoi/νόμοι (laws) rather than kanónes/κανόνες (rules), but
almost all Orthodox conform to them. The dogmatic decisions of the Councils, though, are to be obeyed rather than
to be treated as guidelines, since they are essential for the Church's unity. In the Church of England, the ecclesiastical
courts that formerly decided many matters such as disputes relating to marriage, divorce, wills, and defamation,
still have jurisdiction of certain church-related matters (e.g. discipline of clergy, alteration of church property,
and issues related to churchyards). Their separate status dates back to the 12th century when the Normans split them
off from the mixed secular/religious county and local courts used by the Saxons. In contrast to the other courts
of England the law used in ecclesiastical matters is at least partially a civil law system, not common law, although
heavily governed by parliamentary statutes. Since the Reformation, ecclesiastical courts in England have been royal
courts. The teaching of canon law at the Universities of Oxford and Cambridge was abrogated by Henry VIII; thereafter
practitioners in the ecclesiastical courts were trained in civil law, receiving a Doctor of Civil Law (D.C.L.) degree
from Oxford, or a Doctor of Laws (LL.D.) degree from Cambridge. Such lawyers (called "doctors" and "civilians") were
centered at "Doctors Commons", a few streets south of St Paul's Cathedral in London, where they monopolized probate,
matrimonial, and admiralty cases until their jurisdiction was removed to the common law courts in the mid-19th century.
Other churches in the Anglican Communion around the world (e.g., the Episcopal Church in the United States, and the
Anglican Church of Canada) still function under their own private systems of canon law. As of 2004, there
are principles of canon law common to the churches within the Anglican Communion: their existence can be factually
established; each province or church contributes through its own legal system to the principles of canon law common
within the Communion; these principles have a strong persuasive authority and are fundamental to the self-understanding
of each of the churches of the Communion; they have a living force and contain in themselves the possibility
of further development; and their existence both demonstrates and promotes unity within the
Anglican Communion. In Presbyterian and Reformed churches, canon law is known as "practice and procedure" or "church
order", and includes the church's laws respecting its government, discipline, legal practice and worship. Roman canon
law had been criticized by the Presbyterians as early as 1572 in the Admonition to Parliament. The protest centered
on the standard defense that canon law could be retained so long as it did not contradict the civil law. According
to Polly Ha, the Reformed Church Government refuted this, claiming that the bishops had been enforcing canon law for
1500 years. The Book of Concord is the historic doctrinal statement of the Lutheran Church, consisting of ten credal
documents recognized as authoritative in Lutheranism since the 16th century. However, the Book of Concord is a confessional
document (stating orthodox belief) rather than a book of ecclesiastical rules or discipline, like canon law. Each
Lutheran national church establishes its own system of church order and discipline, though these are referred to
as "canons."
Communications in Somalia encompasses the communications services and capacity of Somalia. Telecommunications, internet,
radio, print, television and postal services in the nation are largely concentrated in the private sector. Several
of the telecom firms have begun expanding their activities abroad. The Federal government operates two official radio
and television networks, which exist alongside a number of private and foreign stations. Print media in the country
is also progressively giving way to news radio stations and online portals, as internet connectivity and access increases.
Additionally, the national postal service is slated to be officially relaunched in 2013 after a long absence. In
2012, a National Communications Act was also approved by Cabinet members, which lays the foundation for the establishment
of a National Communications regulator in the broadcasting and telecommunications sectors. After the start of the
civil war, various new telecommunications companies began to spring up in the country and competed to provide missing
infrastructure. Somalia now offers some of the most technologically advanced and competitively priced telecommunications
and internet services in the world. Funded by Somali entrepreneurs and backed by expertise from China, Korea and
Europe, these nascent telecommunications firms offer affordable mobile phone and internet services that are not available
in many other parts of the continent. Customers can conduct money transfers (such as through the popular Dahabshiil)
and other banking activities via mobile phones, as well as easily gain wireless Internet access. After forming partnerships
with multinational corporations such as Sprint, ITT and Telenor, these firms now offer the cheapest and clearest
phone calls in Africa. These Somali telecommunication companies also provide services to every city, town and hamlet
in Somalia. There are presently around 25 mainlines per 1,000 persons, and the local availability of telephone lines
(tele-density) is higher than in neighboring countries; three times greater than in adjacent Ethiopia. Prominent
Somali telecommunications companies include Somtel Network, Golis Telecom Group, Hormuud Telecom, Somafone, Nationlink,
Netco, Telcom and Somali Telecom Group. Hormuud Telecom alone grosses about $40 million a year. Despite their rivalry,
several of these companies signed an interconnectivity deal in 2005 that allows them to set prices, maintain and
expand their networks, and ensure that competition does not get out of control. In 2008, Dahabshiil Group acquired
a majority stake in Somtel Network, a Hargeisa-based telecommunications firm specialising in high speed broadband,
mobile internet, LTE services, mobile money transfer and mobile phone services. The acquisition provided Dahabshiil
with the necessary platform for a subsequent expansion into mobile banking, a growth industry in the regional banking
sector. In 2014, Somalia's three largest telecommunication operators, Hormuud Telecom, NationLink and Somtel, also
signed an interconnection agreement. The cooperative deal will see the firms establish the Somali Telecommunication
Company (STC), which will allow their mobile clients to communicate across the three networks. Investment in the
telecom industry is held to be one of the clearest signs that Somalia's economy has continued to develop. The sector
provides key communication services, and in the process facilitates job creation and income generation. On March
22, 2012, the Somali Cabinet unanimously approved the National Communications Act, which paves the way for the establishment
of a National Communications regulator in the broadcasting and telecommunications sectors. The bill was passed following
consultations between government representatives and communications, academic and civil society stakeholders. According
to the Ministry of Information, Posts and Telecommunication, the Act is expected to create an environment conducive
to investment and the certainty it provides will encourage further infrastructural development, resulting in more
efficient service delivery. The Somali Postal Service (Somali Post) is the national postal service of the Federal
Government of Somalia. It is part of the Ministry of Information, Posts and Telecommunication. The national postal
infrastructure was completely destroyed during the civil war. In order to fill the vacuum, Somali Post signed an
agreement in 2003 with the United Arab Emirates' Emirates Post to process mail to and from Somalia. Emirates Post's
mail transit hub at the Dubai International Airport was then used to forward mail from Somalia to the UAE and various
Western destinations, including Italy, the Netherlands, the United Kingdom, Sweden, Switzerland and Canada. Concurrently,
the Somali Transitional Federal Government began preparations to revive the national postal service. The government's
overall reconstruction plan for Somali Post is structured into three Phases spread out over a period of ten years.
Phase I will see the reconstruction of the postal headquarters and General Post Office (GPO), as well as the establishment
of 16 branch offices in the capital and 17 in regional bases. As of March 2012, the Somali authorities have re-established
Somalia's membership with the Universal Postal Union (UPU), and taken part once again in the Union's affairs. They
have also rehabilitated the GPO in Mogadishu, and appointed an official Postal Consultant to provide professional
advice on the renovations. Phase II of the rehabilitation project involves the construction of 718 postal outlets
from 2014 to 2016. Phase III is slated to begin in 2017, with the objective of creating 897 postal outlets by 2022.
On 1 November 2013, international postal services for Somalia officially resumed. The Universal Postal Union is now
assisting the Somali Postal Service to develop its capacity, including providing technical assistance and basic mail
processing equipment. There are a number of radio news agencies based in Somalia. Established during the colonial
period, Radio Mogadishu initially broadcast news items in both Somali and Italian. The station was modernized with
Russian assistance following independence in 1960, and began offering home service in Somali, Amharic and Oromo.
After closing down operations in the early 1990s due to the civil war, the station was officially re-opened in the
early 2000s by the Transitional National Government. In the late 2000s, Radio Mogadishu also launched a complementary
website of the same name, with news items in Somali, Arabic and English. Other radio stations based in Mogadishu
include Mustaqbal Media corporation and the Shabelle Media Network, the latter of which was in 2010 awarded the Media
of the Year prize by the Paris-based journalism organisation, Reporters Without Borders (RSF). In total, about one
short-wave and ten private FM radio stations broadcast from the capital, with several radio stations broadcasting
from the central and southern regions. The northeastern Puntland region has around six private radio stations, including
Radio Garowe, Radio Daljir, Radio Codka-Nabbada and Radio Codka-Mudug. Radio Gaalkacyo, formerly known as Radio Free
Somalia, operates from Galkayo in the north-central Mudug province. Additionally, the Somaliland region in the northwest
has one government-operated radio station. The Mogadishu-based Somali National Television is the principal national
public service broadcaster. On March 18, 2011, the Ministry of Information of the Transitional Federal Government
began experimental broadcasts of the new TV channel. After a 20-year hiatus, the station was shortly thereafter officially
re-launched on April 4, 2011. SNTV broadcasts 24 hours a day, and can be viewed both within Somalia and abroad via
terrestrial and satellite platforms. Additionally, Somalia has several private television networks, including Horn
Cable Television and Universal TV. Two such TV stations re-broadcast Al-Jazeera and CNN. Eastern Television Network
and SBC TV air from Bosaso, the commercial capital of Puntland. The Puntland and Somaliland regions also each have
one government-run TV channel, Puntland TV and Radio and Somaliland National TV, respectively. In the early 2000s,
print media in Somalia reached a peak in activity. Around 50 newspapers were published in Mogadishu alone during
this period, including Qaran, Mogadishu Times, Sana'a, Shabelle Press, Ayaamaha, Mandeeq, Sky Sport, Goal, The Nation,
Dalka, Panorama, Aayaha Nolosha, Codka Xuriyada and Xidigta Maanta. In 2003, as new free electronic media outlets
started to proliferate, advertisers increasingly began switching over from print ads to radio and online commercials
in order to reach more customers. A number of the broadsheets in circulation subsequently closed down operations,
as they were no longer able to cover printing costs in the face of the electronic revolution. In 2012, the political
papers Xog Doon and Xog Ogaal, along with Horyaal Sports, were reportedly the last remaining newspapers printed in the capital. According
to Issa Farah, a former editor with the Dalka broadsheet, newspaper publishing in Somalia is likely to experience
a resurgence if the National Somali Printing Press is re-opened and the sector is given adequate public support.
According to the Centre for Law and Democracy (CLD) and the African Union/United Nations Information Support Team
(IST), Somalia did not have systemic internet blocking or filtering as of December 2012. The application of content
standards online was also unclear. Somalia established its first ISP in 1999, one of the last countries in Africa
to get connected to the Internet. According to the telecommunications resource Balancing Act, internet
connectivity has since grown considerably, with around 53% of the entire nation covered as of 2009. Both internet
commerce and telephony have consequently become among the quickest growing local businesses. According to the Somali
Economic Forum, the number of internet users in Somalia rose from only 200 in the year 2000 to 106,000 users in 2011,
with the percentage continuing to rise. The number of mobile subscribers is similarly expected to rise from 512,682
in 2008 to around 6.1 million by 2015. The Somali Telecommunication Association (STA), a watchdog organization that
oversees the policy development and regulatory framework of Somalia's ICT sector, reported in 2006 that there were
over half a million users of internet services within the territory. There were also 22 established ISPs and 234
cyber cafes, with an annual growth rate of 15.6%. As of 2009, dial up, wireless and satellite services were available.
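The user-count figures quoted above imply a very steep compound growth rate. As a back-of-the-envelope check, using only the two endpoints given (200 users in 2000, 106,000 in 2011) and ignoring the certainly uneven year-to-year variation:

```python
# Compound annual growth rate (CAGR) implied by the two user counts quoted above.
users_2000, users_2011 = 200, 106_000
years = 2011 - 2000

cagr = (users_2011 / users_2000) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly a 77% compound annual growth rate
```

The same formula applied to the mobile-subscriber projection (512,682 in 2008 to about 6.1 million by 2015) implies a compound rate of roughly 40% per year.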
Dial up internet services in Somalia were among the fastest growing on the continent, with an annual landline growth
rate of over 12.5%. The increase in usage was largely due to innovative policy initiatives adopted by the various
Somali telecom operators, including free local in-town calls, a flat rate of $10 per month for unlimited calls, a
low charge of $0.005 per minute for Internet connections, and a one-time connection fee of $50. Global Internet Company,
a firm jointly owned by the major Somali telecommunication networks Hormuud Telecom, Telcom Somalia and Nationlink,
was the country's largest ISP. It was at the time the only provider of dial up services in Somalia's south-central
regions. In the northern Puntland and Somaliland regions, online networks offered internet dial up services to their
own group of subscribers. Among these firms was Golis Telecom Somalia in the northeast and Telesom in the northwest.
Broadband wireless services were offered by both dial up and non-dial up ISPs in major cities, such as Mogadishu,
Bosaso, Hargeisa, Galkayo and Kismayo. Pricing ranged from $150 to $300 a month for unlimited internet access, with
bandwidth rates of 64 kbit/s up and down. The main patrons of these wireless services were scholastic institutions,
corporations, and UN, NGO and diplomatic missions. Mogadishu had the biggest subscriber base nationwide and was also
the headquarters of the largest wireless internet services, among which were Dalkom (Wanaag HK), Orbit, Unitel and
Webtel. As of 2009, Internet via satellite had a steady growth rate of 10% to 15% per year. It was particularly in
demand in remote areas that did not have either dialup or wireless online services. The local telecommunications
company Dalkom Somalia provided internet over satellite, as well as premium routes for media operators and content
providers, and international voice gateway services for global carriers. It also offered inexpensive bandwidth through
its internet backbone, whereas bandwidth ordinarily cost customers from $2,500 to $3,000 per month through the major
international bandwidth providers. The main clients of these local satellite services were internet cafes, money
transfer firms and other companies, as well as international community representatives. In total, there were over
300 local satellite terminals available across the nation, which were linked to teleports in Europe and Asia. Demand
for the satellite services gradually began to fall as broadband wireless access rose. However, it increased in rural
areas, as the main client base for the satellite services extended their operations into more remote locales. In
December 2012, Hormuud Telecom launched its Tri-Band 3G service for internet and mobile clients. The first of its
kind in the country, this third generation mobile telecommunications technology offers users a faster and more secure
connection. In November 2013, Somalia received its first fiber optic connection. The country previously had to rely
on expensive satellite links due to the civil conflict, which limited internet usage. However, residents now have
access to broadband internet cable for the first time after an agreement reached between Hormuud Telecom and Liquid
Telecom. The deal will see Liquid Telecom link Hormuud to its 17,000 km (10,500 mile) network of terrestrial cables,
which will deliver faster internet capacity. The fiber optic connection will also make online access more affordable
to the average user. This in turn is expected to further increase the number of internet users. Dalkom Somalia reached
a similar agreement with the West Indian Ocean Cable Company (WIOCC) Ltd, which it holds shares in. Effective the
first quarter of 2014, the deal will establish fiber optic connectivity to and from Somalia via the EASSy cable.
The new services are expected to reduce the cost of international bandwidth and to better optimize performance, thereby
further broadening internet access. Dalkom Somalia is concurrently constructing a 1,000 square mile state-of-the-art
data center in Mogadishu. The site will facilitate direct connection into the international fiber optic network by
hosting equipment for all of the capital's ISPs and telecommunication companies.
Catalan (/ˈkætəlæn/; autonym: català [kətəˈla] or [kataˈla]) is a Romance language named for its origins in Catalonia, in northeastern Spain and adjoining parts of France. It is the national and only official language of Andorra,
and a co-official language of the Spanish autonomous communities of Catalonia, the Balearic Islands, and Valencia
(where the language is known as Valencian, and there exist regional standards). It also has semi-official status
in the city of Alghero on the Italian island of Sardinia. It is also spoken with no official recognition in parts
of the Spanish autonomous communities of Aragon (La Franja) and Murcia (Carche), and in the historic French region
of Roussillon/Northern Catalonia, roughly equivalent to the department of Pyrénées-Orientales. According to the Statistical Institute of Catalonia, in 2008 Catalan was the second most commonly used language in Catalonia, after Spanish, as a native or self-defining language. The Generalitat de Catalunya spends part of its annual budget on the promotion
of the use of Catalan in Catalonia and in other territories. Catalan evolved from Vulgar Latin around the eastern
Pyrenees in the 9th century. During the Low Middle Ages it saw a golden age as the literary and dominant language
of the Crown of Aragon, and was widely used all over the Mediterranean. The union of Aragon with the other territories
of Spain in 1479 marked the start of the decline of the language. In 1659 Spain ceded Northern Catalonia to France,
and Catalan was banned in both states in the early 18th century. 19th-century Spain saw a Catalan literary revival,
which culminated in the 1913 orthographic standardization, and the officialization of the language during the Second
Spanish Republic (1931–39). However, the Francoist dictatorship (1939–75) banned the language again. Since the Spanish
transition to democracy (1975–1982), Catalan has been recognized as an official language, language of education,
and language of mass media, all of which have contributed to its increased prestige. There is no parallel in Europe
of such a large, bilingual, non-state speech community. Catalan dialects are relatively uniform, and are mutually
intelligible. They are divided into two blocks, Eastern and Western, differing mostly in pronunciation. The terms
"Catalan" and "Valencian" (respectively used in Catalonia and the Valencian Community) are two different varieties
of the same language. There are two institutions regulating the two standard varieties, the Institute of Catalan
Studies in Catalonia and the Valencian Academy of the Language in Valencia. Catalan shares many traits with its neighboring
Romance languages. However, despite being mostly situated in the Iberian Peninsula, Catalan differs more from Iberian
Romance (such as Spanish and Portuguese) in terms of vocabulary, pronunciation, and grammar than from Gallo-Romance
(Occitan, French, Gallo-Italic languages, etc.). These similarities are most notable with Occitan. Catalan has an
inflectional grammar, with two genders (masculine, feminine), and two numbers (singular, plural). Pronouns are also
inflected for case, animacy[citation needed] and politeness, and can be combined in very complex ways. Verbs are
split into several paradigms and are inflected for person, number, tense, aspect, mood, and gender. In terms of pronunciation,
Catalan has many words ending in a wide variety of consonants and some consonant clusters, in contrast with many
other Romance languages. The word Catalan derives from the territory of Catalonia, itself of disputed etymology.
The main theory suggests that Catalunya (Latin Gathia Launia) derives from the name Gothia or Gauthia ("Land of the
Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, whence Gothland
> Gothlandia > Gothalania > Catalonia theoretically derived. In English, the term referring to a person first appears
in the mid 14th century as Catelaner, followed in the 15th century as Catellain (from French). It is attested as a language name since at least 1652. Catalan can be pronounced as /ˈkætəlæn/, /kætəˈlæn/ or /ˈkætələn/. The endonym is pronounced
/kə.təˈɫa/ in the Eastern Catalan dialects, and /ka.taˈɫa/ in the Western dialects. In the Valencian Community, the
term valencià (/va.len.siˈa/) is frequently used instead. The names "Catalan" and "Valencian" are two names for the
same language. See also status of Valencian below. By the 9th century, Catalan had evolved from Vulgar Latin on both
sides of the eastern end of the Pyrenees, as well as the territories of the Roman province of Hispania Tarraconensis
to the south. From the 8th century onwards the Catalan counts extended their territory southwards and westwards at
the expense of the Muslims, bringing their language with them. This process was given definitive impetus with the
separation of the County of Barcelona from the Carolingian Empire in 988. In the 11th century, documents written
in macaronic Latin begin to show Catalan elements, with texts written almost completely in Romance appearing by 1080.
Old Catalan shared many features with Gallo-Romance, diverging from Old Occitan between the 11th and 14th centuries.
During the 11th and 12th centuries the Catalan rulers expanded their territory up to the north of the Ebro river, and in the 13th century
they conquered the Land of Valencia and the Balearic Islands. The city of Alghero in Sardinia was repopulated with
Catalan speakers in the 14th century. The language also reached Murcia, which became Spanish-speaking in the 15th
century. In the Low Middle Ages, Catalan went through a golden age, reaching a peak of maturity and cultural richness.
Examples include the work of Majorcan polymath Ramon Llull (1232–1315), the Four Great Chronicles (13th–14th centuries),
and the Valencian school of poetry culminating in Ausiàs March (1397–1459). By the 15th century, the city of Valencia
had become the sociocultural center of the Crown of Aragon, and Catalan was present all over the Mediterranean world.
During this period, the Royal Chancery propagated a highly standardized language. Catalan was widely used as an official
language in Sicily until the 15th century, and in Sardinia until the 17th. During this period, the language was what
Costa Carreras terms "one of the 'great languages' of medieval Europe". Martorell's outstanding novel of chivalry
Tirant lo Blanc (1490) shows a transition from Medieval to Renaissance values, something that can also be seen in
Metge's work. The first book produced with movable type in the Iberian Peninsula was printed in Catalan. With the
union of the crowns of Castile and Aragon (1479), the use of Spanish gradually became more prestigious. Starting
in the 16th century, Catalan literature experienced a decline, the language came under the influence of Spanish,
and the urban and literary classes became bilingual. With the Treaty of the Pyrenees (1659), Spain ceded the northern
part of Catalonia to France, and soon thereafter the local Catalan varieties came under the influence of French,
which in 1700 became the sole official language of the region. Shortly after the French Revolution (1789), the French
First Republic prohibited official use of, and enacted discriminatory policies against, the nonstandard languages
of France (patois), such as Catalan, Alsatian, Breton, Occitan, Flemish, and Basque. Following the French capture
of Algeria (1833), that region saw several waves of Catalan-speaking settlers. People from the Spanish Alacant province
settled around Oran, whereas Algiers received immigration from Northern Catalonia and Minorca. Their speech was known
as patuet. By 1911, the number of Catalan speakers was around 100,000. After the declaration of independence of Algeria
in 1962, almost all the Catalan speakers fled to Northern Catalonia (as Pieds-Noirs) or Alacant. Nowadays, France
only recognizes French as an official language. Nevertheless, on 10 December 2007, the General Council of the Pyrénées-Orientales
officially recognized Catalan as one of the languages of the department and seeks to further promote it in public
life and education. The decline of Catalan continued in the 16th and 17th centuries. The Catalan defeat in the War of the Spanish Succession (1714) initiated a series of measures imposing the use of Spanish in legal documentation. In
parallel, however, the 19th century saw a Catalan literary revival (Renaixença), which has continued up to the present
day. This period starts with Aribau's Ode to the Homeland (1833), followed in the second half of the 19th century
and the early 20th by the work of Verdaguer (poetry), Oller (realist novel), and Guimerà (drama). In Catalonia, the teaching of Catalan is mandatory in all schools,
but it is possible to study in Spanish within the public education system of Catalonia in two situations: if the teacher assigned to a class chooses to use Spanish, or for one or more recently arrived students during their learning process. There is also some intergenerational shift towards Catalan. In Andorra, Catalan has always been the sole
official language. Since the promulgation of the 1993 constitution, several Andorranization policies have been enforced,
like Catalan medium education. On the other hand, there are several language shift processes currently taking place.
In Northern Catalonia, Catalan has followed the same trend as the other minority languages of France, with most of
its native speakers being 60 or older (as of 2004). Catalan is studied as a foreign language by 30% of primary education students, and by 15% of secondary students. The cultural association La Bressola promotes a network of community-run
schools engaged in Catalan language immersion programs. In the Alicante province Catalan is being replaced by Spanish,
and in Alghero by Italian. There are also well ingrained diglossic attitudes against Catalan in the Valencian Community,
Ibiza, and to a lesser extent, in the rest of the Balearic islands. The ascription of Catalan to the Occitano-Romance
branch of Gallo-Romance languages is not shared by all linguists and philologists, particularly among Spanish ones,
such as Ramón Menéndez Pidal. Catalan bears varying degrees of similarity to the linguistic varieties subsumed under
the cover term Occitan language (see also differences between Occitan and Catalan and Gallo-Romance languages). Thus,
as should be expected from closely related languages, Catalan today shares many traits with its neighboring Romance languages (Italian, Sardinian, Occitan, and Spanish).
However, despite being mostly situated in the Iberian Peninsula, Catalan has marked differences with the Ibero-Romance
group (Spanish and Portuguese) in terms of pronunciation, grammar, and especially vocabulary; showing instead its
closest affinity with Occitan and to a lesser extent Gallo-Romance (French, Franco-Provençal, Gallo-Italian). According
to Ethnologue, the lexical similarity between Catalan and other Romance languages is: 87% with Italian; 85% with
Portuguese; 80% with Spanish; 76% with Ladin; 75% with Sardinian; and 73% with Romanian. During much of its history,
and especially during the Francoist dictatorship (1939–1975), the Catalan language has often been denigrated as a mere
dialect of Spanish. This view, based on political and ideological considerations, has no linguistic validity. Spanish
and Catalan have important differences in their sound systems, lexicon, and grammatical features, placing the language
in a number of respects closer to Occitan (and French). There is evidence that, from at least the 2nd century AD,
the vocabulary and phonology of Roman Tarraconensis was different from the rest of Roman Hispania. Differentiation
has arisen generally because Spanish, Asturian, and Galician-Portuguese share certain peripheral archaisms (Spanish
hervir, Asturian/Portuguese ferver vs. Catalan bullir, Occitan bolir "to boil") and innovatory regionalisms (Sp novillo,
Ast nuviellu vs. Cat torell, Oc taurèl "bullock"), while Catalan has a shared history with the Western Romance innovative
core, especially Occitan. The Germanic superstrate has had different outcomes in Spanish and Catalan. For example,
Catalan fang "mud" and rostir "to roast", of Germanic origin, contrast with Spanish lodo and asar, of Latin origin;
whereas Catalan filosa "spinning wheel" and pols "temple", of Latin origin, contrast with Spanish rueca and sien,
of Germanic origin. The same happens with Arabic loanwords. Thus, Catalan alfàbia "large earthenware jar" and rajola
"tile", of Arabic origin, contrast with Spanish tinaja and teja, of Latin origin; whereas Catalan oli "oil" and oliva
"olive", of Latin origin, contrast with Spanish aceite and aceituna. However, the Arabic element in Spanish is generally
much more prevalent. Situated between two large linguistic blocks (Ibero-Romance and Gallo-Romance), Catalan has
many unique lexical choices, such as enyorar "to miss somebody", apaivagar "to calm down somebody", or rebutjar "reject".
The Catalan-speaking territories are sometimes referred to as the Països Catalans (Catalan Countries), a denomination based on cultural affinity and common heritage, which has also had a subsequent political interpretation but no official status. Various
interpretations of the term may include some or all of these regions. In contrast with other Romance languages, Catalan
has many monosyllabic words, and words ending in a wide variety of consonants and some consonant clusters. Also, Catalan has final obstruent devoicing, thus featuring many couplets like amic ("male friend") vs. amiga ("female
friend"). Central Catalan is considered the standard pronunciation of the language. The descriptions below are mostly
for this variety. For the differences in pronunciation of the different dialects, see the section pronunciation of
dialects in this article. Catalan has inherited the typical vowel system of Vulgar Latin, with seven stressed phonemes:
/a ɛ e i ɔ o u/, a common feature in Western Romance, except Spanish. Balearic has also instances of stressed /ə/.
Dialects differ in the different degrees of vowel reduction, and the incidence of the pair /ɛ e/. In Central Catalan,
unstressed vowels reduce to three: /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct. The other dialects have different
vowel reduction processes (see the section pronunciation of dialects in this article). Catalan sociolinguistics studies
the situation of Catalan in the world and the different varieties that this language presents. It is a subdiscipline of Catalan philology and related studies, and its objective is to analyse the relation between the Catalan language, its speakers, and their social context (including that of other languages in contact with it). The dialects of the
Catalan language feature a relative uniformity, especially when compared to other Romance languages, in terms of vocabulary, semantics, syntax, morphology, and phonology. Mutual intelligibility between dialects is very high,
estimates ranging from 90% to 95%. The only exception is the isolated, idiosyncratic Alguerese dialect. Catalan is split into two major dialectal blocks: Eastern Catalan and Western Catalan. The main difference lies in the treatment
of unstressed a and e; which have merged to /ə/ in Eastern dialects, but which remain distinct as /a/ and /e/ in
Western dialects. There are a few other differences in pronunciation, verbal morphology, and vocabulary. Western
Catalan comprises the two dialects of Northwestern Catalan and Valencian; the Eastern block comprises four dialects:
Central Catalan, Balearic, Rossellonese, and Alguerese. Each dialect can be further subdivided into several subdialects.
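The contrast between the two blocks' unstressed-vowel systems, using the reduction rules given later in this section (Eastern: /a e ɛ/ > [ə], /o ɔ u/ > [u]; Western: /e ɛ/ > [e], /o ɔ/ > [o]), can be sketched as a simple lookup table. This is an illustrative sketch only; the function name and dialect labels are invented for the example and are not standard linguistic notation.

```python
# Illustrative sketch: unstressed-vowel reduction in the two Catalan
# dialect blocks, per the rules described in the text. Phonemes are
# written as plain strings.

# Eastern Catalan (except Majorcan): /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct.
EASTERN = {"a": "ə", "e": "ə", "ɛ": "ə", "o": "u", "ɔ": "u", "u": "u", "i": "i"}

# Western Catalan: /e ɛ/ > [e]; /o ɔ/ > [o]; /a u i/ remain distinct.
WESTERN = {"a": "a", "e": "e", "ɛ": "e", "o": "o", "ɔ": "o", "u": "u", "i": "i"}

def reduce_unstressed(vowel: str, block: str) -> str:
    """Return the unstressed realization of a stressed vowel phoneme."""
    table = EASTERN if block == "eastern" else WESTERN
    return table[vowel]

# The defining East/West difference: unstressed a and e merge only in the East.
assert reduce_unstressed("a", "eastern") == reduce_unstressed("e", "eastern") == "ə"
assert reduce_unstressed("a", "western") != reduce_unstressed("e", "western")
```

The Eastern system collapses seven stressed vowels to three unstressed ones ([ə], [u], [i]), whereas the Western system keeps five distinct.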
Central Catalan is considered the standard pronunciation of the language and has the highest number of speakers.
It is spoken in the densely populated regions of the Barcelona province, the eastern half of the province of Tarragona,
and most of the province of Girona. In Eastern
Catalan (except Majorcan), unstressed vowels reduce to three: /a e ɛ/ > [ə]; /o ɔ u/ > [u]; /i/ remains distinct.
There are a few instances of unreduced [e], [o] in some words. Alguerese has lowered [ə] to [a]. In Majorcan, unstressed
vowels reduce to four: /a e ɛ/ follow the Eastern Catalan reduction pattern; however /o ɔ/ reduce to [o], with /u/
remaining distinct, as in Western Catalan. In Western Catalan, unstressed vowels reduce to five: /e ɛ/ > [e]; /o
ɔ/ > [o]; /a u i/ remain distinct. This reduction pattern, inherited from Proto-Romance, is also found in Italian
and Portuguese. Some Western dialects present further reduction or vowel harmony in some cases. Central, Western,
and Balearic differ in the lexical incidence of stressed /e/ and /ɛ/. Usually, words with /ɛ/ in Central Catalan
correspond to /ə/ in Balearic and /e/ in Western Catalan. Words with /e/ in Balearic almost always have /e/ in Central
and Western Catalan as well.[vague] As a result, Central Catalan has a much higher incidence of /e/. In Western Catalan verbs, the first person present indicative desinence is -e (∅ in verbs of the 2nd and 3rd conjugation), or -o: e.g. parle, tem, sent (Valencian); parlo, temo, sento (Northwestern). In Eastern Catalan verbs, the desinence is -o, -i or ∅ in all conjugations: e.g. parlo (Central), parl (Balearic), parli (Northern) ('I speak'). In nouns and adjectives, Western Catalan maintains the /n/ of medieval plurals in proparoxytone words (e.g. hòmens 'men', jóvens 'youth'), whereas Eastern Catalan loses it (e.g. homes 'men', joves 'youth'). Despite its relative lexical
unity, the two dialectal blocks of Catalan (Eastern and Western) show some differences in word choices. Any lexical
divergence within any of the two groups can be explained as an archaism. Also, usually Central Catalan acts as an
innovative element. Standard Catalan, accepted by virtually all speakers, is mostly based on Eastern Catalan, which
is the most widely used dialect. Nevertheless, the standards of Valencia and the Balearics admit alternative forms,
mostly traditional ones, which are not current in eastern Catalonia. The most notable difference between both standards
is some tonic ⟨e⟩ accentuation, for instance: francès, anglès (IEC) – francés, anglés (AVL). Nevertheless, AVL's
standard keeps the grave accent ⟨è⟩, without pronouncing this ⟨e⟩ as /ɛ/, in some words like: què ('what'), or València.
Other divergences include the use of ⟨tl⟩ (AVL) instead of ⟨tll⟩ in some words, as in ametla/ametlla ('almond') and espatla/espatlla ('back'); the use of elided demonstratives (este 'this', eixe 'that') on the same level as reinforced ones (aquest, aqueix); and the use of many verbal forms common in Valencian (some of them also common in the rest of Western Catalan), such as the subjunctive mood, the inchoative conjugation in -ix- on the same level as -eix-, and the priority use of the -e morpheme in the 1st person singular present indicative of -ar verbs: jo compre instead of jo compro ('I buy').
In the Balearic Islands, the IEC's standard is used but adapted for the Balearic dialect by the University of the Balearic Islands' philological section. For instance, the IEC says it is correct to write either cantam or cantem ('we sing'), but the University says that the priority form in the Balearic Islands must be cantam in all fields.
Another feature of the Balearic standard is the zero ending (∅) in the 1st person singular present indicative: jo compr
('I buy'), jo tem ('I fear'), jo dorm ('I sleep'). In Alghero, the IEC has adapted its standard to the Alguerese
dialect. In this standard one can find, among other features: the definite article lo instead of el, special possessive
pronouns and determinants la mia ('mine'), lo sou/la sua ('his/her'), lo tou/la tua ('yours'), and so on, the use
of -v- /v/ in the imperfect tense in all conjugations: cantava, creixiva, llegiva; the use of many words that are archaic elsewhere but usual in Alguerese: manco instead of menys ('less'), calqui u instead of algú ('someone'), qual/quala instead
of quin/quina ('which'), and so on; and the adaptation of weak pronouns. In 2011, the Aragonese government passed
a decree for the establishment of a new language regulator of Catalan in La Franja (the so-called Catalan-speaking
areas of Aragon). The new entity, designated as the Acadèmia Aragonesa del Català, will allow optional education in Catalan and the standardization of the Catalan language in La Franja. Valencian is classified as a Western dialect,
along with the northwestern varieties spoken in Western Catalonia (provinces of Lleida and the western half of Tarragona).
The various forms of Catalan and Valencian are mutually intelligible (with intelligibility ranging from 90% to 95%). Linguists, including
Valencian scholars, deal with Catalan and Valencian as the same language. The official regulating body of the language
of the Valencian Community, the Valencian Academy of Language (Acadèmia Valenciana de la Llengua, AVL) declares the
linguistic unity between Valencian and Catalan varieties. The AVL, created by the Valencian parliament, is in charge
of dictating the official rules governing the use of Valencian, and its standard is based on the Norms of Castelló
(Normes de Castelló). Currently, everyone who writes in Valencian uses this standard, except the Royal Academy of
Valencian Culture (Acadèmia de Cultura Valenciana, RACV), which uses for Valencian an independent standard. Despite
the position of the official organizations, an opinion poll carried out between 2001 and 2004 showed that the majority
of the Valencian people consider Valencian different from Catalan. This position is promoted by people who do not
use Valencian regularly. Furthermore, the data indicates that younger generations educated in Valencian are much
less likely to hold these views. A minority of Valencian scholars active in fields other than linguistics defends the position of the RACV. This clash of opinions has sparked much controversy. For example, during the
drafting of the European Constitution in 2004, the Spanish government supplied the EU with translations of the text
into Basque, Galician, Catalan, and Valencian, but the latter two were identical. Literary Catalan allows the use of words from different dialects, except those of very restricted use. However,
from the 19th century onwards, there has been a tendency to favor words of Northern dialects to the detriment of others,
even though nowadays there is a greater freedom of choice. Like other languages, Catalan has a large list of learned
words from Greek and Latin. This process started very early, and one can find such examples in Ramon Llull's work.
In the fourteenth and fifteenth centuries, Catalan had a far greater number of Greco-Latin learned words than other Romance languages, as can be seen for example in Roís de Corella's writings. The process of morphological
derivation in Catalan follows the same principles as the other Romance languages, where agglutination is common.
Many times, several affixes are appended to a preexisting lexeme, and some sound alternations can occur, for example
elèctric [əˈlɛktrik] ("electrical") vs. electricitat [ələktrisiˈtat]. Prefixes are usually appended to verbs, as in preveure ("foresee"). In gender inflection, the most notable feature (compared to Portuguese, Spanish, or Italian) is the loss of the typical masculine suffix -o. Thus, the alternation of -o/-a has been replaced by ø/-a.
There are only a few exceptions, like minso/minsa ("scarce"). Many morphological alternations that are not completely predictable may occur. Catalan has few suppletive couplets, like Italian and Spanish, and unlike French. Thus, Catalan
has noi/noia ("boy"/"girl") and gall/gallina ("cock"/"hen"), whereas French has garçon/fille and coq/poule. There
is a tendency to abandon traditionally gender-invariable adjectives in favour of marked ones, something prevalent
in Occitan and French. Thus, one can find bullent/bullenta ("boiling") in contrast with traditional bullent/bullent.
As in the other Western Romance languages, the main plural expression is the suffix -s, which may create morphological
alternations similar to the ones found in gender inflection, albeit more rarely. The most important one is the addition
of -o- before certain consonant groups, a phonetic phenomenon that does not affect feminine forms: el pols/els polsos
("the pulse"/"the pulses") vs. la pols/les pols ("the dust"/"the dusts"). The inflection of determinatives is complex,
especially because of the high number of elisions, but is similar to that of the neighboring languages. Catalan has more contractions
of preposition + article than Spanish, like dels ("of + the [plural]"), but not as many as Italian (which has sul,
col, nel, etc.). Central Catalan has abandoned almost completely unstressed possessives (mon, etc.) in favour of
constructions of article + stressed forms (el meu, etc.), a feature shared with Italian. The morphology of Catalan
personal pronouns is complex, especially in unstressed forms, which are numerous (13 distinct forms, compared to 11
in Spanish or 9 in Italian). Features include the gender-neutral ho and the great degree of freedom when combining
different unstressed pronouns (65 combinations). Catalan pronouns exhibit T–V distinction, like all other Romance
languages (and most European languages, but not Modern English). This feature implies the use of a different set
of second person pronouns for formality. The freedom with which unstressed pronouns can be combined allows Catalan to use extraposition extensively, much more than French or Spanish. Thus, Catalan can have m'hi recomanaren ("they recommended me to him"), whereas in French one must say ils m'ont recommandé à lui, and in Spanish me recomendaron a él. This allows the placement of almost any
nominal term as a sentence topic, without having to use the passive voice so often (as in French or English), or
identifying the direct object with a preposition (as in Spanish). As in all the Romance languages, Catalan verbal inflection is more complex than the nominal. Suffixation is omnipresent, whereas morphological alternations play
a secondary role. Vowel alternations are active, as are infixation and suppletion. However, these are not as productive
as in Spanish, and are mostly restricted to irregular verbs. The Catalan verbal system is basically common to all
Western Romance, except that most dialects have replaced the synthetic indicative perfect with a periphrastic form
of anar ("to go") + infinitive. Catalan verbs are traditionally divided into three conjugations, with vowel themes
-a-, -e-, -i-, the last two being split into two subtypes. However, this division is mostly theoretical. Only the
first conjugation is nowadays productive (with about 3500 common verbs), whereas the third (the subtype of servir,
with about 700 common verbs) is semiproductive. The verbs of the second conjugation are fewer than 100, and it is
not possible to create new ones, except by compounding. In Spain, every person officially has two surnames, one of
which is the father's first surname and the other is the mother's first surname. The law contemplates the possibility
of joining both surnames with the Catalan conjunction i ("and").
Boston (pronounced /ˈbɒstən/) is the capital and largest city of the Commonwealth of Massachusetts in the United States.
Boston also served as the historic county seat of Suffolk County until Massachusetts disbanded county government
in 1999. The city proper covers 48 square miles (124 km2) with an estimated population of 655,884 in 2014, making
it the largest city in New England and the 24th largest city in the United States. The city is the economic and cultural
anchor of a substantially larger metropolitan area called Greater Boston, home to 4.7 million people and the tenth-largest
metropolitan statistical area in the country. Greater Boston as a commuting region is home to 8.1 million people,
making it the sixth-largest combined statistical area in the United States. One of the oldest cities in the United
States, Boston was founded on the Shawmut Peninsula in 1630 by Puritan settlers from England. It was the scene of
several key events of the American Revolution, such as the Boston Massacre, the Boston Tea Party, the Battle of Bunker
Hill, and the Siege of Boston. Upon American independence from Great Britain, the city continued to be an important
port and manufacturing hub, as well as a center for education and culture. Through land reclamation and municipal
annexation, Boston has expanded beyond the original peninsula. Its rich history attracts many tourists, with Faneuil
Hall alone drawing over 20 million visitors per year. Boston's many firsts include the United States' first public
school, Boston Latin School (1635), and first subway system (1897). The area's many colleges and universities make
Boston an international center of higher education and medicine, and the city is considered to be a world leader
in innovation. Boston's economic base also includes finance, professional and business services, biotechnology, information
technology, and government activities. Households in the city claim the highest average rate of philanthropy in the
United States; businesses and institutions rank amongst the top in the country for environmental sustainability and
investment. The city has one of the highest costs of living in the United States, though it remains high on world
livability rankings. Boston's early European settlers had first called the area Trimountaine (after its "three mountains"—only
traces of which remain today) but later renamed it Boston after Boston, Lincolnshire, England, the origin of several
prominent colonists. The renaming, on September 7, 1630 (Old Style),[b] was by Puritan colonists from England, who
had moved over from Charlestown earlier that year in quest of fresh water. Their settlement was initially limited
to the Shawmut Peninsula, at that time surrounded by the Massachusetts Bay and Charles River and connected to the
mainland by a narrow isthmus. The peninsula is known to have been inhabited as early as 5000 BC. In 1629, the Massachusetts
Bay Colony's first governor, John Winthrop, led the signing of the Cambridge Agreement, a key founding document of
the city. Puritan ethics and their focus on education influenced its early history; America's first public school
was founded in Boston in 1635. Over the next 130 years, the city participated in four French and Indian Wars, until
the British defeated the French and their native allies in North America. Boston was the largest town in British
North America until Philadelphia grew larger in the mid 18th century. Many of the crucial events of the American
Revolution—the Boston Massacre, the Boston Tea Party, Paul Revere's midnight ride, the battles of Lexington and Concord
and Bunker Hill, the Siege of Boston, and many others—occurred in or near Boston. After the Revolution, Boston's
long seafaring tradition helped make it one of the world's wealthiest international ports, with rum, fish, salt,
and tobacco being particularly important. The Embargo Act of 1807, adopted during the Napoleonic Wars, and the War
of 1812 significantly curtailed Boston's harbor activity. Although foreign trade returned after these hostilities,
Boston's merchants had found alternatives for their capital investments in the interim. Manufacturing became an important
component of the city's economy, and by the mid-19th century, the city's industrial manufacturing overtook international
trade in economic importance. Until the early 20th century, Boston remained one of the nation's largest manufacturing
centers and was notable for its garment production and leather-goods industries. A network of small rivers bordering
the city and connecting it to the surrounding region facilitated shipment of goods and led to a proliferation of
mills and factories. Later, a dense network of railroads furthered the region's industry and commerce. During this
period Boston flourished culturally as well, admired for its rarefied literary life and generous artistic patronage,
with members of old Boston families—eventually dubbed Boston Brahmins—coming to be regarded as the nation's social
and cultural elites. Boston was an early port of the Atlantic triangular slave trade in the New England colonies,
but was soon overtaken by Salem, Massachusetts and Newport, Rhode Island. Eventually Boston became a center of the
abolitionist movement. The city reacted strongly to the Fugitive Slave Law of 1850, contributing to President Franklin
Pierce's attempt to make an example of Boston after the Anthony Burns Fugitive Slave Case. In 1822, the citizens
of Boston voted to change the official name from "the Town of Boston" to "the City of Boston", and on March 4, 1822,
the people of Boston accepted the charter incorporating the City. At the time Boston was chartered as a city, the
population was about 46,226, while the area of the city was only 4.7 square miles (12 km2). In the 1820s, Boston's
population grew rapidly, and the city's ethnic composition changed dramatically with the first wave of European immigrants.
Irish immigrants dominated the first wave of newcomers during this period, especially following the Irish Potato
Famine; by 1850, about 35,000 Irish lived in Boston. In the latter half of the 19th century, the city saw increasing numbers of Irish, Germans, Lebanese, Syrians, French Canadians, and Russian and Polish Jews settle in the city.
By the end of the 19th century, Boston's core neighborhoods had become enclaves of ethnically distinct immigrants—Italians
inhabited the North End, Irish dominated South Boston and Charlestown, and Russian Jews lived in the West End. Irish
and Italian immigrants brought with them Roman Catholicism. Currently, Catholics make up Boston's largest religious
community, and since the early 20th century, the Irish have played a major role in Boston politics—prominent figures
include the Kennedys, Tip O'Neill, and John F. Fitzgerald. Between 1631 and 1890, the city tripled its area through
land reclamation by filling in marshes, mud flats, and gaps between wharves along the waterfront. The largest reclamation
efforts took place during the 19th century; beginning in 1807, the crown of Beacon Hill was used to fill in a 50-acre
(20 ha) mill pond that later became the Haymarket Square area. The present-day State House sits atop this lowered
Beacon Hill. Reclamation projects in the middle of the century created significant parts of the South End, the West
End, the Financial District, and Chinatown. After The Great Boston Fire of 1872, workers used building rubble as
landfill along the downtown waterfront. During the mid-to-late 19th century, workers filled almost 600 acres (2.4
km2) of brackish Charles River marshlands west of Boston Common with gravel brought by rail from the hills of Needham
Heights. The city annexed the adjacent towns of South Boston (1804), East Boston (1836), Roxbury (1868), Dorchester
(including present day Mattapan and a portion of South Boston) (1870), Brighton (including present day Allston) (1874),
West Roxbury (including present day Jamaica Plain and Roslindale) (1874), Charlestown (1874), and Hyde Park (1912).
Other proposals, for the annexation of Brookline, Cambridge, and Chelsea, were unsuccessful. By the early and mid-20th
century, the city was in decline as factories became old and obsolete, and businesses moved out of the region for
cheaper labor elsewhere. Boston responded by initiating various urban renewal projects under the direction of the
Boston Redevelopment Authority (BRA), which was established in 1957. In 1958, BRA initiated a project to improve
the historic West End neighborhood. Extensive demolition was met with vociferous public opposition. The BRA subsequently
reevaluated its approach to urban renewal in its future projects, including the construction of Government Center.
In 1965, the first Community Health Center in the United States opened, the Columbia Point Health Center, in the
Dorchester neighborhood. It mostly served the massive Columbia Point public housing complex adjoining it, which was
built in 1953. The health center is still in operation and was rededicated in 1990 as the Geiger-Gibson Community
Health Center. The Columbia Point complex itself was redeveloped and revitalized into a mixed-income community called
Harbor Point Apartments from 1984 to 1990. By the 1970s, the city's economy was booming after 30 years of economic downturn. A large number of high rises were constructed in the Financial District and in Boston's Back Bay during this period. This boom continued into the mid-1980s and later resumed. Hospitals such as Massachusetts General Hospital,
Beth Israel Deaconess Medical Center, and Brigham and Women's Hospital lead the nation in medical innovation and
patient care. Schools such as Boston College, Boston University, the Harvard Medical School, Northeastern University,
Wentworth Institute of Technology, Berklee College of Music and Boston Conservatory attract students to the area.
Nevertheless, the city experienced conflict starting in 1974 over desegregation busing, which resulted in unrest
and violence around public schools throughout the mid-1970s. Boston is an intellectual, technological, and political center, but it has lost some important regional institutions, including The Boston Globe, acquired by The New York Times, and local financial institutions such as FleetBoston Financial, which was acquired by Charlotte-based Bank of America in 2004. Boston-based department stores Jordan Marsh and Filene's
have both been merged into the Cincinnati–based Macy's. Boston has experienced gentrification in the latter half
of the 20th century, with housing prices increasing sharply since the 1990s. Living expenses have risen, and Boston
has one of the highest costs of living in the United States; it was ranked the 129th most expensive major city in
the world in a 2011 survey of 214 cities. Despite cost of living issues, Boston ranks high on livability ratings,
ranking 36th worldwide in quality of living in 2011 in a survey of 221 major cities. On April 15, 2013, two Chechen
Islamist brothers detonated two bombs near the finish line of the Boston Marathon, killing three people and injuring
roughly 264. Boston has an area of 89.6 square miles (232.1 km2)—48.4 square miles (125.4 km2) (54.0%) of land and
41.2 square miles (106.7 km2) (46.0%) of water. The city's official elevation, as measured at Logan International
Airport, is 19 ft (5.8 m) above sea level. The highest point in Boston is Bellevue Hill at 330 feet (100 m) above
sea level, and the lowest point is at sea level. Situated on the shore of the Atlantic Ocean, Boston is the only state
capital in the contiguous United States with an oceanic coastline. Boston is surrounded by the "Greater Boston" region
and is contiguously bordered by the cities and towns of Winthrop, Revere, Chelsea, Everett, Somerville, Cambridge,
Newton, Brookline, Needham, Dedham, Canton, Milton, and Quincy. The Charles River separates Boston from Watertown
and the majority of Cambridge, and the mass of Boston from its own Charlestown neighborhood. To the east lie Boston
Harbor and the Boston Harbor Islands National Recreation Area (which includes part of the city's territory, specifically
Calf Island, Gallops Island, Great Brewster Island, Green Island, Little Brewster Island, Little Calf Island, Long
Island, Lovells Island, Middle Brewster Island, Nixes Mate, Outer Brewster Island, Rainsford Island, Shag Rocks,
Spectacle Island, The Graves, and Thompson Island). The Neponset River forms the boundary between Boston's southern
neighborhoods and the city of Quincy and the town of Milton. The Mystic River separates Charlestown from Chelsea
and Everett, and Chelsea Creek and Boston Harbor separate East Boston from Boston proper. Boston is sometimes called
a "city of neighborhoods" because of the profusion of diverse subsections; the city government's Office of Neighborhood
Services has officially designated 23 neighborhoods. More than two-thirds of inner Boston's modern land area did
not exist when the city was founded, but was created via the gradual filling in of the surrounding tidal areas over
the centuries, notably with earth from the leveling or lowering of Boston's three original hills (the "Trimountain",
after which Tremont Street is named), and with gravel brought by train from Needham to fill the Back Bay. Downtown
and its immediate surroundings consist largely of low-rise (often Federal style and Greek Revival) masonry buildings,
interspersed with modern highrises, notably in the Financial District, Government Center, and South Boston. Back
Bay includes many prominent landmarks, such as the Boston Public Library, Christian Science Center, Copley Square,
Newbury Street, and New England's two tallest buildings—the John Hancock Tower and the Prudential Center. Near the
John Hancock Tower is the old John Hancock Building with its prominent illuminated beacon, the color of which forecasts
the weather. Smaller commercial areas are interspersed among areas of single-family homes and wooden/brick multi-family
row houses. The South End Historic District is the largest surviving contiguous Victorian-era neighborhood in the
US. The geography of downtown and South Boston was particularly impacted by the Central Artery/Tunnel Project (known
unofficially as the "Big Dig"), which allowed for the removal of the unsightly elevated Central Artery and the incorporation
of new green spaces and open areas. Boston has a continental climate with some maritime influence, and using the
−3 °C (27 °F) coldest month (January) isotherm, the city lies within the transition zone from a humid subtropical
climate (Köppen Cfa) to a humid continental climate (Köppen Dfa), although the suburbs north and west of the city
are significantly colder in winter and solidly fall under the latter categorization; the city lies at the transition
between USDA plant hardiness zones 6b (most of the city) and 7a (Downtown, South Boston, and East Boston neighborhoods).
Summers are typically warm to hot, rainy, and humid, while winters oscillate between periods of cold rain and snow,
with cold temperatures. Spring and fall are usually mild, with varying conditions dependent on wind direction and
jet stream positioning. Prevailing wind patterns that blow offshore minimize the influence of the Atlantic Ocean.
The hottest month is July, with a mean temperature of 73.4 °F (23.0 °C). The coldest month is January, with a mean
of 29.0 °F (−1.7 °C). Periods exceeding 90 °F (32 °C) in summer and below freezing in winter are not uncommon but
rarely extended, with about 13 and 25 days per year seeing each, respectively. The most recent sub-0 °F (−18 °C)
reading occurred on February 14, 2016, when the temperature dipped to −9 °F (−23 °C), the coldest reading since
1957. In addition, several decades may pass between 100 °F (38 °C) readings, with the most recent such occurrence
on July 22, 2011 when the temperature reached 103 °F (39 °C). The city's average window for freezing temperatures
is November 9 through April 5.[c] Official temperature records have ranged from −18 °F (−28 °C) on February 9, 1934,
up to 104 °F (40 °C) on July 4, 1911; the record cold daily maximum is 2 °F (−17 °C) on December 30, 1917, while,
conversely, the record warm daily minimum is 83 °F (28 °C) on August 2, 1975. Boston's coastal location on the North
Atlantic moderates its temperature, but makes the city very prone to Nor'easter weather systems that can produce
much snow and rain. The city averages 43.8 inches (1,110 mm) of precipitation a year, with 43.8 inches (111 cm) of
snowfall per season. Snowfall increases dramatically as one goes inland away from the city (especially north and
west of the city)—away from the moderating influence of the ocean. Most snowfall occurs from December through March,
as most years see no measurable snow in April and November, and snow is rare in May and October. There is also high
year-to-year variability in snowfall; for instance, the winter of 2011–12 saw only 9.3 in (23.6 cm) of accumulating
snow, but the previous winter, the corresponding figure was 81.0 in (2.06 m).[d] Fog is fairly common, particularly
in spring and early summer, and the occasional tropical storm or hurricane can threaten the region, especially in
late summer and early autumn. Due to its situation along the North Atlantic, the city often receives sea breezes,
especially in the late spring, when water temperatures are still quite cold and temperatures at the coast can be
more than 20 °F (11 °C) colder than a few miles inland, sometimes dropping by that amount near midday. Thunderstorms
occur from May to September and are occasionally severe, with large hail, damaging winds, and heavy downpours. Although
downtown Boston has never been struck by a violent tornado, the city itself has experienced many tornado warnings.
Damaging storms are more common to areas north, west, and northwest of the city. Boston has a relatively sunny climate
for a coastal city at its latitude, averaging over 2,600 hours of sunshine per annum. In 2010, Boston was estimated
to have 617,594 residents (a density of 12,200 persons/sq mile, or 4,700/km2) living in 272,481 housing units—a
5% population increase over 2000. The city is the third most densely populated large U.S. city of over half a million
residents. Some 1.2 million persons may be within Boston's boundaries during work hours, and as many as 2 million
during special events. This fluctuation of people is caused by hundreds of thousands of suburban residents who travel
to the city for work, education, health care, and special events. In the city, the population was spread out with
21.9% at age 19 and under, 14.3% from 20 to 24, 33.2% from 25 to 44, 20.4% from 45 to 64, and 10.1% who were 65 years
of age or older. The median age was 30.8 years. For every 100 females, there were 92.0 males. For every 100 females
age 18 and over, there were 89.9 males. There were 252,699 households, of which 20.4% had children under the age
of 18 living in them, 25.5% were married couples living together, 16.3% had a female householder with no husband
present, and 54.0% were non-families. 37.1% of all households were made up of individuals and 9.0% had someone living
alone who was 65 years of age or older. The average household size was 2.26 and the average family size was 3.08.
The median household income in Boston was $51,739, while the median income for a family was $61,035. Full-time year-round
male workers had a median income of $52,544 versus $46,540 for full-time year-round female workers. The per capita
income for the city was $33,158. 21.4% of the population and 16.0% of families were below the poverty line. Of the
total population, 28.8% of those under the age of 18 and 20.4% of those 65 and older were living below the poverty
line. In 1950, whites represented 94.7% of Boston's population. From the 1950s to the end of the 20th century, the
proportion of non-Hispanic whites in the city declined; in 2000, non-Hispanic whites made up 49.5% of the city's
population, making the city majority-minority for the first time. However, in recent years the city has experienced
significant gentrification, in which affluent whites have moved into formerly non-white areas. In 2006, the US Census
Bureau estimated that non-Hispanic whites again formed a slight majority. But as of 2010, in part due to the housing
crash, as well as increased efforts to make more affordable housing more available, the minority population has rebounded.
This may also have to do with an increased Latino population and more clarity surrounding US Census statistics, which
indicate a Non-Hispanic White population of 47 percent (some reports give slightly lower figures). People of Irish
descent form the largest single ethnic group in the city, making up 15.8% of the population, followed by Italians,
accounting for 8.3% of the population. People of West Indian and Caribbean ancestry are another sizable group, at
6.0%, about half of whom are of Haitian ancestry. Over 27,000 Chinese Americans made their home in Boston city proper
in 2013, and the city hosts a growing Chinatown accommodating heavily traveled Chinese-owned bus lines to and from
Chinatown, Manhattan. Some neighborhoods, such as Dorchester, have received an influx of people of Vietnamese ancestry
in recent decades. Neighborhoods such as Jamaica Plain and Roslindale have experienced a growing number of Dominican
Americans. The city and greater area also has a growing immigrant population of South Asians, including the tenth-largest
Indian community in the country. The city has a sizable Jewish population with an estimated 25,000 Jews within the
city and 227,000 within the Boston metro area; the number of congregations in Boston is estimated at 22. The adjacent
communities of Brookline and Newton are both approximately one-third Jewish. The city, especially the East Boston
neighborhood, has a significant Hispanic community. Hispanics in Boston are mostly of Puerto Rican (30,506 or 4.9%
of total city population), Dominican (25,648 or 4.2% of total city population), Salvadoran (10,850 or 1.8% of city
population), Colombian (6,649 or 1.1% of total city population), Mexican (5,961 or 1.0% of total city population),
and Guatemalan (4,451 or 0.7% of total city population) ethnic origin. When including all Hispanic national origins,
they number 107,917. In Greater Boston, these numbers grow significantly with Puerto Ricans numbering 175,000+, Dominicans
95,000+, Salvadorans 40,000+, Guatemalans 31,000+, Mexicans 25,000+, and Colombians numbering 22,000+. According
to a 2014 study by the Pew Research Center, 57% of the population of the city identified themselves as Christians,
with 25% professing attendance at a variety of churches that could be considered Protestant, and 29% professing Roman
Catholic beliefs, while 33% claimed no religious affiliation. The same study says that other religions (including Judaism,
Buddhism, Islam, and Hinduism) collectively make up about 10% of the population. As of 2010 the Catholic Church had
the highest number of adherents as a single denomination in the Boston-Cambridge-Newton Metro area, with more than
two million members and 339 churches, followed by the Episcopal Church with 58,000 adherents in 160 churches. The
United Church of Christ had 55,000 members and 213 churches. The UCC is the successor of the city's Puritan religious
traditions. Old South Church in Boston is one of the oldest congregations in the United States. It was organized
in 1669 by dissenters from the First Church in Boston (1630). Notable past members include Samuel Adams, William
Dawes, Benjamin Franklin, Samuel Sewall, and Phillis Wheatley. In 1773, Adams gave the signals from the Old South
Meeting House that started the Boston Tea Party. A global city, Boston is placed among the top 30 most economically
powerful cities in the world. The Greater Boston metropolitan area, with $363 billion in output, has the sixth-largest
economy in the country and 12th-largest in the world. Boston's colleges and universities have a significant effect
on the regional economy. Boston attracts more than 350,000 college students from around the world, who contribute
more than $4.8 billion annually to the city's economy. The area's schools are major employers and attract industries
to the city and surrounding region. The city is home to a number of technology companies and is a hub for biotechnology,
with the Milken Institute rating Boston as the top life sciences cluster in the country. Boston receives the highest
absolute amount of annual funding from the National Institutes of Health of all cities in the United States. The
city is considered highly innovative for a variety of reasons, including the presence of academia, access to venture
capital, and the presence of many high-tech companies. The Route 128 corridor and Greater Boston continue to be a
major center for venture capital investment, and high technology remains an important sector. Tourism also composes
a large part of Boston's economy, with 21.2 million domestic and international visitors spending $8.3 billion in
2011; excluding visitors from Canada and Mexico, over 1.4 million international tourists visited Boston in 2014,
with those from China and the United Kingdom leading the list. Boston's status as a state capital as well as the
regional home of federal agencies has made law and government another major component of the city's economy.
The city is a major seaport along the United States' East Coast and the oldest continuously operated industrial and
fishing port in the Western Hemisphere. Other important industries are financial services, especially mutual funds
and insurance. Boston-based Fidelity Investments helped popularize the mutual fund in the 1980s and has made Boston
one of the top financial cities in the United States. The city is home to the headquarters of Santander Bank, and
Boston is a center for venture capital firms. State Street Corporation, which specializes in asset management and
custody services, is based in the city. Boston is a printing and publishing center — Houghton Mifflin Harcourt is
headquartered within the city, along with Bedford-St. Martin's Press and Beacon Press. Pearson PLC publishing units
also employ several hundred people in Boston. The city is home to three major convention centers—the Hynes Convention
Center in the Back Bay, and the Seaport World Trade Center and Boston Convention and Exhibition Center on the South
Boston waterfront. The General Electric Corporation announced in January 2016 its decision to move the company's
global headquarters to the Seaport District in Boston, from Fairfield, Connecticut, citing factors including Boston's
preeminence in the realm of higher education. The Boston Public Schools enrolls 57,000 students attending 145 schools,
including the renowned Boston Latin Academy, John D. O'Bryant School of Math & Science, and Boston Latin School.
The Boston Latin School, established in 1635, is the oldest public high school in the US; Boston also operates the United
States' second oldest public high school, and its oldest public elementary school. The system's students are 40%
Hispanic or Latino, 35% Black or African American, 13% White, and 9% Asian. There are private, parochial, and charter
schools as well, and approximately 3,300 minority students attend participating suburban schools through the Metropolitan
Educational Opportunity Council. Some of the most renowned and highly ranked universities in the world are located
in the Boston area. Four members of the Association of American Universities are in Greater Boston (more than any
other metropolitan area): Harvard University, the Massachusetts Institute of Technology, Boston University, and Brandeis
University. Hospitals, universities, and research institutions in Greater Boston received more than $1.77 billion
in National Institutes of Health grants in 2013, more money than any other American metropolitan area. Greater Boston
has more than 100 colleges and universities, with 250,000 students enrolled in Boston and Cambridge alone. Its largest
private universities include Boston University (the city's fourth-largest employer) with its main campus along Commonwealth
Avenue and a medical campus in the South End; Northeastern University in the Fenway area; Suffolk University near
Beacon Hill, which includes a law school and a business school; and Boston College, which straddles the Boston (Brighton)–Newton
border. Boston's only public university is the University of Massachusetts Boston, on Columbia Point in Dorchester.
Roxbury Community College and Bunker Hill Community College are the city's two public community colleges. Altogether,
Boston's colleges and universities employ over 42,600 people, accounting for nearly 7 percent of the city's workforce.
Smaller private schools include Babson College, Bentley University, Boston Architectural College, Emmanuel College,
Fisher College, MGH Institute of Health Professions, Massachusetts College of Pharmacy and Health Sciences, Simmons
College, Wellesley College, Wheelock College, Wentworth Institute of Technology, New England School of Law (originally
established as America's first all female law school), and Emerson College. Metropolitan Boston is home to several
conservatories and art schools, including Lesley University College of Art and Design, Massachusetts College of Art,
the School of the Museum of Fine Arts, New England Institute of Art, New England School of Art and Design (Suffolk
University), Longy School of Music of Bard College, and the New England Conservatory (the oldest independent conservatory
in the United States). Other conservatories include the Boston Conservatory and Berklee College of Music, which has
made Boston an important city for jazz music. Several universities located outside Boston have a major presence in
the city. Harvard University, the nation's oldest institute of higher education, is centered across the Charles River
in Cambridge but has the majority of its land holdings and a substantial amount of its educational activities in
Boston. Its business, medical, dental, and public health schools are located in Boston's Allston and Longwood neighborhoods.
Harvard has plans for additional expansion into Allston. The Massachusetts Institute of Technology (MIT), which originated
in Boston and was long known as "Boston Tech", moved across the river to Cambridge in 1916. Tufts University, whose
main campus is north of the city in Somerville and Medford, locates its medical and dental school in Boston's Chinatown
at Tufts Medical Center, a 451-bed academic medical institution that is home to both a full-service hospital for
adults and the Floating Hospital for Children. Like many major American cities, Boston has seen a great reduction
in violent crime since the early 1990s. Boston's low crime rate since the 1990s has been credited to the Boston Police
Department's collaboration with neighborhood groups and church parishes to prevent youths from joining gangs, as
well as involvement from the United States Attorney and District Attorney's offices. This helped lead in part to
what has been touted as the "Boston Miracle". Murders in the city dropped from 152 in 1990 (for a murder rate of
26.5 per 100,000 people) to just 31—not one of them a juvenile—in 1999 (for a murder rate of 5.26 per 100,000). In
2008, there were 62 reported homicides. Through December 20 in each of 2014 and 2015, the Boston Police Department reported
52 and 39 homicides, respectively. Boston shares many cultural roots with greater New England, including a dialect
of the non-rhotic Eastern New England accent known as Boston English, and a regional cuisine with a large emphasis
on seafood, salt, and dairy products. Irish Americans are a major influence on Boston's politics and religious institutions.
Boston also has its own collection of neologisms known as Boston slang. Boston has been called the "Athens of America"
for its literary culture, earning a reputation as "the intellectual capital of the United States." In the nineteenth
century, Ralph Waldo Emerson, Henry David Thoreau, Nathaniel Hawthorne, Margaret Fuller, James Russell Lowell, and
Henry Wadsworth Longfellow wrote in Boston. Some consider the Old Corner Bookstore, where these writers met and where
The Atlantic Monthly was first published, to be the "cradle of American literature". In 1852, the Boston Public Library
was founded as the first free library in the United States. Boston's literary culture continues today thanks to the
city's many universities and the Boston Book Festival. Music is cherished in Boston. The Boston Symphony Orchestra
is one of the "Big Five," a group of the greatest American orchestras, and the classical music magazine Gramophone
called it one of the "world's best" orchestras. Symphony Hall (located west of Back Bay) is home to the Boston Symphony
Orchestra (and the related Boston Youth Symphony Orchestra, which is the largest youth orchestra in the nation)
and the Boston Pops Orchestra. The British newspaper The Guardian called Boston Symphony Hall "one of the top venues
for classical music in the world," adding that "Symphony Hall in Boston was where science became an essential part
of concert hall design." Other concerts are held at the New England Conservatory's Jordan Hall. The Boston Ballet
performs at the Boston Opera House. Other performing-arts organizations located in the city include the Boston Lyric
Opera Company, Opera Boston, Boston Baroque (the first permanent Baroque orchestra in the US), and the Handel and
Haydn Society (one of the oldest choral companies in the United States). The city is a center for contemporary classical
music with a number of performing groups, several of which are associated with the city's conservatories and universities.
These include the Boston Modern Orchestra Project and Boston Musica Viva. Several theaters are located in or near
the Theater District south of Boston Common, including the Cutler Majestic Theatre, Citi Performing Arts Center,
the Colonial Theater, and the Orpheum Theatre. There are several major annual events such as First Night, which occurs
on New Year's Eve, the Boston Early Music Festival, the annual Boston Arts Festival at Christopher Columbus Waterfront
Park, and Italian summer feasts in the North End honoring Catholic saints. The city is the site of several events
during the Fourth of July period. They include the week-long Harborfest festivities and a Boston Pops concert accompanied
by fireworks on the banks of the Charles River. Because of the city's prominent role in the American Revolution,
several historic sites relating to that period are preserved as part of the Boston National Historical Park. Many
are found along the Freedom Trail, which is marked by a red line of bricks embedded in the ground. The city is also
home to several art museums, including the Museum of Fine Arts and the Isabella Stewart Gardner Museum. The Institute
of Contemporary Art is housed in a contemporary building designed by Diller Scofidio + Renfro in the Seaport District.
The University of Massachusetts Boston campus on Columbia Point houses the John F. Kennedy Library. The Boston Athenaeum
(one of the oldest independent libraries in the United States), Boston Children's Museum, Bull & Finch Pub (whose
building is known from the television show Cheers), Museum of Science, and the New England Aquarium are within the
city. Boston has been a noted religious center from its earliest days. The Roman Catholic Archdiocese of Boston serves
nearly 300 parishes and is based in the Cathedral of the Holy Cross (1875) in the South End, while the Episcopal
Diocese of Massachusetts, with the Cathedral Church of St. Paul (1819) as its episcopal seat, serves just under 200
congregations. Unitarian Universalism has its headquarters on Beacon Hill. The Christian Scientists are headquartered
in Back Bay at the Mother Church (1894). The oldest church in Boston is First Church in Boston, founded in 1630.
King's Chapel, the city's first Anglican church, was founded in 1686 and converted to Unitarianism in 1785. Other
churches include Christ Church (better known as Old North Church, 1723), the oldest church building in the city,
Trinity Church (1733), Park Street Church (1809), Old South Church (1874), Jubilee Christian Church and Basilica
and Shrine of Our Lady of Perpetual Help on Mission Hill (1878). Air quality in Boston is generally very good: during
the ten-year period 2004–2013, there were only 4 days in which the air was unhealthy for the general public, according
to the EPA. Some of the cleaner energy facilities in Boston include the Allston green district, with three ecologically
compatible housing facilities. Boston is also breaking ground on multiple green affordable housing facilities to
help reduce the carbon footprint of the city while simultaneously making these initiatives financially available
to a greater population. Boston's climate plan is updated every three years and was most recently modified in 2013.
This legislation includes the Building Energy Reporting and Disclosure Ordinance, which requires the city's larger
buildings to disclose their yearly energy and water use statistics and partake in an energy assessment every five
years. These statistics are made public by the city, thereby increasing incentives for buildings to be more environmentally
conscious. Another initiative, presented by the late Mayor Thomas Menino, is the Renew Boston Whole Building Incentive,
which reduces the cost of living in buildings that are deemed energy efficient. This, much like the green housing
developments, gives people of low socioeconomic status an opportunity to find housing in communities that support
the environment. The ultimate goal of this initiative is to enlist 500 Bostonians to participate in a free, in-home
energy assessment. Many older buildings in certain areas of Boston are supported by wooden piles driven into the
area's fill; these piles remain sound if submerged in water, but are subject to dry rot if exposed to air for long
periods. Groundwater levels have been dropping, to varying degrees, in many areas of the city, due in part to an
increase in the amount of rainwater discharged directly into sewers rather than absorbed by the ground. A city agency,
the Boston Groundwater Trust, coordinates monitoring of groundwater levels throughout the city via a network of public
and private monitoring wells. However, Boston's drinking water supply, from the Quabbin and Wachusett Reservoirs
to the west, is one of the very few in the country so pure as to satisfy federal water quality standards without
filtration. Boston has teams in the four major North American professional sports leagues plus Major League Soccer,
and, as of 2014, has won 36 championships in these leagues. It is one of six cities (along with Chicago, Detroit,
Los Angeles, New York and Philadelphia) to have won championships in all four major sports. It has been suggested
that Boston is the new "TitleTown, USA", as the city's professional sports teams have won nine championships since
2001: Patriots (2001, 2003, 2004, and 2014), Red Sox (2004, 2007, and 2013), Celtics (2008), and Bruins (2011). This
love of sports has made Boston the United States Olympic Committee's choice to bid to hold the 2024 Summer Olympic
Games, but the city cited financial concerns when it withdrew its bid on July 27, 2015. The Boston Red Sox, a founding
member of the American League of Major League Baseball in 1901, play their home games at Fenway Park, near Kenmore
Square in the city's Fenway section. Built in 1912, it is the oldest sports arena or stadium in active use in the
United States among the four major professional American sports leagues, encompassing Major League Baseball, the
National Football League, National Basketball Association, and the National Hockey League. Boston was the site of
the first game of the first modern World Series, in 1903. The series was played between the AL Champion Boston Americans
and the NL champion Pittsburgh Pirates. Persistent reports that the team was known in 1903 as the "Boston Pilgrims"
appear to be unfounded. Boston's first professional baseball team was the Red Stockings, one of the charter members
of the National Association in 1871, and of the National League in 1876. The team played under that name until 1883,
under the name Beaneaters until 1911, and under the name Braves from 1912 until they moved to Milwaukee after the
1952 season. Since 1966 they have played in Atlanta as the Atlanta Braves. The TD Garden, formerly called the FleetCenter
and built to replace the old, since-demolished Boston Garden, is adjoined to North Station and is the home of two
major league teams: the Boston Bruins of the National Hockey League and the Boston Celtics of the National Basketball
Association. The arena seats 18,624 for basketball games and 17,565 for ice hockey games. The Bruins were the first
American member of the National Hockey League and an Original Six franchise. The Boston Celtics were founding members
of the Basketball Association of America, one of the two leagues that merged to form the NBA. The Celtics have the
distinction of having won more championships than any other NBA team, with seventeen. While they have played in suburban
Foxborough since 1971, the New England Patriots of the National Football League were founded in 1960 as the Boston
Patriots, changing their name after relocating. The team won the Super Bowl after the 2001, 2003, 2004, and 2014
seasons. They share Gillette Stadium with the New England Revolution of Major League Soccer. The Boston Breakers
of Women's Professional Soccer, which formed in 2009, play their home games at Dilboy Stadium in Somerville. The
area's many colleges and universities are active in college athletics. Four NCAA Division I members play in the city—Boston
College, Boston University, Harvard University, and Northeastern University. Of the four, only Boston College participates
in college football at the highest level, the Football Bowl Subdivision. Harvard participates in the second-highest
level, the Football Championship Subdivision. One of the best known sporting events in the city is the Boston Marathon,
the 26.2-mile (42.2 km) race which is the world's oldest annual marathon, run on Patriots' Day in April. On April
15, 2013, two explosions killed three people and injured hundreds at the marathon. Another major annual event is
the Head of the Charles Regatta, held in October. Boston Common, located near the Financial District and Beacon Hill,
is the oldest public park in the United States. Along with the adjacent Boston Public Garden, it is part of the Emerald
Necklace, a string of parks designed by Frederick Law Olmsted to encircle the city. The Emerald Necklace includes
Jamaica Pond, Boston's largest body of freshwater, and Franklin Park, the city's largest park and home of the Franklin
Park Zoo. Another major park is the Esplanade, located along the banks of the Charles River. The Hatch Shell, an
outdoor concert venue, is located adjacent to the Charles River Esplanade. Other parks are scattered throughout the
city, with the major parks and beaches located near Castle Island; in Charlestown; and along the Dorchester, South
Boston, and East Boston shorelines. Boston's park system is well-reputed nationally. In its 2013 ParkScore ranking,
The Trust for Public Land reported that Boston was tied with Sacramento and San Francisco for having the third-best
park system among the 50 most populous US cities. ParkScore ranks city park systems by a formula that analyzes the
city's median park size, park acres as percent of city area, the percent of residents within a half-mile of a park,
spending on park services per resident, and the number of playgrounds per 10,000 residents. Boston has a strong mayor–council government system in which the mayor (elected every fourth year) has extensive executive power. Marty Walsh
became Mayor in January 2014, his predecessor Thomas Menino's twenty-year tenure having been the longest in the city's
history. The Boston City Council is elected every two years; there are nine district seats, and four citywide "at-large"
seats. The School Committee, which oversees the Boston Public Schools, is appointed by the mayor. In addition to
city government, numerous commissions and state authorities—including the Massachusetts Department of Conservation
and Recreation, the Boston Public Health Commission, the Massachusetts Water Resources Authority (MWRA), and the
Massachusetts Port Authority (Massport)—play a role in the life of Bostonians. As the capital of Massachusetts, Boston
plays a major role in state politics. The city has several federal facilities, including the John F. Kennedy Federal
Office Building, the Thomas P. O'Neill Federal Building, the United States Court of Appeals for the First Circuit,
the United States District Court for the District of Massachusetts, and the Federal Reserve Bank of Boston. Federally,
Boston is split between two congressional districts. The northern three-fourths of the city is in the 7th district,
represented by Mike Capuano since 1998. The southern fourth is in the 8th district, represented by Stephen Lynch.
Both are Democrats; a Republican has not represented a significant portion of Boston in over a century. The state's
senior member of the United States Senate is Democrat Elizabeth Warren, first elected in 2012. The state's junior
member of the United States Senate is Democrat Ed Markey, who was elected in 2013 to succeed John Kerry after Kerry's
appointment and confirmation as the United States Secretary of State. The Boston Globe and the Boston Herald are
two of the city's major daily newspapers. The city is also served by other publications such as Boston magazine,
The Improper Bostonian, DigBoston, and the Boston edition of Metro. The Christian Science Monitor, headquartered
in Boston, was formerly a worldwide daily newspaper but ended publication of daily print editions in 2009, switching
to continuous online and weekly magazine format publications. The Boston Globe also releases a teen publication to
the city's public high schools, called Teens in Print or T.i.P., which is written by the city's teens and delivered
quarterly within the school year. The city's growing Latino population has given rise to a number of local and regional
Spanish-language newspapers. These include El Planeta (owned by the former publisher of The Boston Phoenix), El Mundo,
and La Semana. Siglo21, with its main offices in nearby Lawrence, is also widely distributed. Various LGBT publications
serve the city's large LGBT (lesbian, gay, bisexual and transgender) community such as The Rainbow Times, the only
minority- and lesbian-owned LGBT newsmagazine. Founded in 2006, The Rainbow Times is now based in Boston but
serves all of New England. Boston is the largest broadcasting market in New England, with the radio market being
the 11th largest in the United States. Several major AM stations include talk radio WRKO, sports/talk station WEEI,
and CBS Radio's WBZ (AM), which broadcasts a news radio format. A variety of commercial FM radio formats serve the area,
as do NPR stations WBUR and WGBH. College and university radio stations include WERS (Emerson), WHRB (Harvard), WUMB
(UMass Boston), WMBR (MIT), WZBC (Boston College), WMFO (Tufts University), WBRS (Brandeis University), WTBU (Boston
University, campus and web only), WRBB (Northeastern University) and WMLN-FM (Curry College). The Boston television
DMA, which also includes Manchester, New Hampshire, is the 8th largest in the United States. The city is served by
stations representing every major American network, including WBZ-TV and its sister station WSBK-TV (the former a
CBS O&O, the latter a MyNetworkTV affiliate), WCVB-TV (ABC), WHDH (NBC), WFXT (Fox), and WLVI (The CW). The city
is also home to PBS station WGBH-TV, a major producer of PBS programs, which also operates WGBX. Spanish-language
television networks, including MundoFox (WFXZ-CD), Univision (WUNI), Telemundo (WNEU), and Telefutura (WUTF-DT),
have a presence in the region, with WNEU and WUTF serving as network owned-and-operated stations. Most of the area's
television stations have their transmitters in nearby Needham and Newton along the Route 128 corridor. Six Boston
television stations are carried by Canadian satellite television provider Bell TV and by cable television providers
in Canada. The Longwood Medical and Academic Area, adjacent to the Fenway district, is home to a large number of
medical and research facilities, including Beth Israel Deaconess Medical Center, Brigham and Women's Hospital, Children's
Hospital Boston, Dana-Farber Cancer Institute, Harvard Medical School, Joslin Diabetes Center, and the Massachusetts
College of Pharmacy and Health Sciences. Prominent medical facilities, including Massachusetts General Hospital,
Massachusetts Eye and Ear Infirmary and Spaulding Rehabilitation Hospital are located in the Beacon Hill area. St.
Elizabeth's Medical Center is in Brighton Center of the city's Brighton neighborhood. New England Baptist Hospital
is in Mission Hill. The city has Veterans Affairs medical centers in the Jamaica Plain and West Roxbury neighborhoods.
The Boston Public Health Commission, an agency of the Massachusetts government, oversees health concerns for city
residents. Boston EMS provides pre-hospital emergency medical services to residents and visitors. Many of Boston's
medical facilities are associated with universities. The facilities in the Longwood Medical and Academic Area and
in Massachusetts General Hospital are affiliated with Harvard Medical School. Tufts Medical Center (formerly Tufts-New
England Medical Center), located in the southern portion of the Chinatown neighborhood, is affiliated with Tufts
University School of Medicine. Boston Medical Center, located in the South End neighborhood, is the primary teaching
facility for the Boston University School of Medicine as well as the largest trauma center in the Boston area; it
was formed by the merger of Boston University Hospital and Boston City Hospital, which was the first municipal hospital
in the United States. Logan Airport, located in East Boston and operated by the Massachusetts Port Authority (Massport),
is Boston's principal airport. Nearby general aviation airports are Beverly Municipal Airport to the north, Hanscom
Field to the west, and Norwood Memorial Airport to the south. Massport also operates several major facilities within
the Port of Boston, including a cruise ship terminal and facilities to handle bulk and container cargo in South Boston,
and other facilities in Charlestown and East Boston. Downtown Boston's streets grew organically, so they do not form
a planned grid, unlike those in later-developed Back Bay, East Boston, the South End, and South Boston. Boston is
the eastern terminus of I-90, which in Massachusetts runs along the Massachusetts Turnpike. The elevated portion
of the Central Artery, which carried most of the through traffic in downtown Boston, was replaced with the O'Neill
Tunnel during the Big Dig, substantially completed in early 2006. With nearly a third of Bostonians using public
transit for their commute to work, Boston has the fifth-highest rate of public transit usage in the country. Boston's
subway system, the Massachusetts Bay Transportation Authority (MBTA—known as the "T") operates the oldest underground
rapid transit system in the Americas, and is the fourth-busiest rapid transit system in the country, with 65.5 miles
(105 km) of track on four lines. The MBTA also operates busy bus and commuter rail networks, and water shuttles.
Amtrak's Northeast Corridor and Chicago lines originate at South Station, which serves as a major intermodal transportation
hub, and stop at Back Bay. Fast Northeast Corridor trains, which serve New York City, Washington, D.C., and points
in between, also stop at Route 128 Station in the southwestern suburbs of Boston. Meanwhile, Amtrak's Downeaster
service to Maine originates at North Station, despite the current lack of a dedicated passenger rail link between
the two rail hubs, other than the "T" subway lines. Nicknamed "The Walking City", Boston hosts more pedestrian commuters
than do other comparably populated cities. Owing to factors such as the compactness of the city and large student
population, 13 percent of the population commutes by foot, the highest share of pedestrian commuters among major American cities. In 2011, Walk Score ranked Boston the third most walkable city in
the United States. As of 2015, Walk Score still ranks Boston as the third most walkable US city, with a Walk
Score of 80, a Transit Score of 75, and a Bike Score of 70. Between 1999 and 2006, Bicycling magazine named Boston
three times as one of the worst cities in the US for cycling; regardless, it has one of the highest rates of bicycle
commuting. In 2008, as a consequence of improvements made to bicycling conditions within the city, the same magazine
put Boston on its "Five for the Future" list as a "Future Best City" for biking, and Boston's bicycle commuting percentage
increased from 1% in 2000 to 2.1% in 2009. The bikeshare program called Hubway launched in late July 2011, logging
more than 140,000 rides before the close of its first season. The neighboring municipalities of Cambridge, Somerville,
and Brookline joined the Hubway program in summer 2012.
Universal Studios Inc. (also known as Universal Pictures) is an American film studio, owned by Comcast through its wholly
owned subsidiary NBCUniversal, and is one of Hollywood's "Big Six" film studios. Its production studios are at 100
Universal City Plaza Drive in Universal City, California. Distribution and other corporate offices are in New York
City. Universal Studios is a member of the Motion Picture Association of America (MPAA). Universal was founded in
1912 by German-born Carl Laemmle (pronounced "LEM-lee"), Mark Dintenfass, Charles O. Baumann, Adam Kessel, Pat Powers,
William Swanson, David Horsley, Robert H. Cochrane, and Jules Brulatour. It is the world's fourth oldest major film
studio, after the renowned French studios Gaumont Film Company and Pathé, and the Danish Nordisk Film company. One story has Laemmle watching a box office for hours,
counting patrons and calculating the day's takings. Within weeks of his Chicago trip, Laemmle gave up dry goods to
buy the first several nickelodeons. For Laemmle and other such entrepreneurs, the creation in 1908 of the Edison-backed
Motion Picture Trust meant that exhibitors were expected to pay fees for Trust-produced films they showed. Based
on the Latham Loop used in cameras and projectors, along with other patents, the Trust collected fees on all aspects
of movie production and exhibition, and attempted to enforce a monopoly on distribution. Soon, Laemmle and other
disgruntled nickelodeon owners decided to avoid paying Edison by producing their own pictures. In June 1909, Laemmle
started the Yankee Film Company with partners Abe Stern and Julius Stern. That company quickly evolved into the Independent
Moving Pictures Company (IMP), with studios in Fort Lee, New Jersey, where many early films in America's first motion
picture industry were produced in the early 20th century. Laemmle broke with Edison's custom of refusing to give
billing and screen credits to performers. By naming the movie stars, he attracted many of the leading players of
the time, contributing to the creation of the star system. In 1910, he promoted Florence Lawrence, formerly known
as "The Biograph Girl", and actor King Baggot, in what may be the first instance of a studio using stars in its marketing.
The Universal Film Manufacturing Company was incorporated in New York on April 30, 1912. Laemmle, who emerged as
president in July 1912, was the primary figure in the partnership with Dintenfass, Baumann, Kessel, Powers, Swanson,
Horsley, and Brulatour. Eventually all would be bought out by Laemmle. The new Universal studio was a vertically
integrated company, with movie production, distribution and exhibition venues all linked in the same corporate entity,
the central element of the studio system era. On March 15, 1915, Laemmle opened the world's largest motion picture
production facility, Universal City Studios, on a 230-acre (0.9-km²) converted farm just over the Cahuenga Pass from
Hollywood. Studio management became the third facet of Universal's operations, with the studio incorporated as a
distinct subsidiary organization. Unlike other movie moguls, Laemmle opened his studio to tourists. Universal became
the largest studio in Hollywood, and remained so for a decade. However, it sought an audience mostly in small towns,
producing mostly inexpensive melodramas, westerns and serials. In its early years Universal released three brands
of feature films — Red Feather, low-budget programmers; Bluebird, more ambitious productions; and Jewel, their prestige
motion pictures. Directors included Jack Conway, John Ford, Rex Ingram, Robert Z. Leonard, George Marshall and Lois
Weber, one of the few women directing films in Hollywood. Despite Laemmle's role as an innovator, he was an extremely
cautious studio chief. Unlike rivals Adolph Zukor, William Fox, and Marcus Loew, Laemmle chose not to develop a theater
chain. He also financed all of his own films, refusing to take on debt. This policy nearly bankrupted the studio
when actor-director Erich von Stroheim insisted on excessively lavish production values for his films Blind Husbands
(1919) and Foolish Wives (1922), but Universal shrewdly gained a return on some of the expenditure by launching a
sensational ad campaign that attracted moviegoers. Character actor Lon Chaney became a drawing card for Universal
in the 1920s, appearing steadily in dramas. His two biggest hits for Universal were The Hunchback of Notre Dame (1923)
and The Phantom of the Opera (1925). During this period Laemmle entrusted most of the production policy decisions
to Irving Thalberg. Thalberg had been Laemmle's personal secretary, and Laemmle was impressed by his cogent observations
of how efficiently the studio could be operated. Promoted to studio chief, Thalberg gave Universal's product
a touch of class, but MGM's head of production Louis B. Mayer lured Thalberg away from Universal with a promise of
better pay. Without his guidance Universal became a second-tier studio, and would remain so for several decades.
In 1926, Universal opened a production unit in Germany, Deutsche Universal-Film AG, under the direction of Joe Pasternak.
This unit produced three to four films per year until 1936, migrating to Hungary and then Austria in the face of
Hitler's increasing domination of central Europe. With the advent of sound, these productions were made in the German
language or, occasionally, Hungarian or Polish. In the U.S., Universal Pictures did not distribute any of this subsidiary's
films, but at least some of them were exhibited through other, independent, foreign-language film distributors based
in New York, without benefit of English subtitles. Nazi persecution and a change in ownership for the parent Universal
Pictures organization resulted in the dissolution of this subsidiary. In the early years, Universal had a "clean
picture" policy. However, by April 1927, Carl Laemmle considered this to be a mistake as "unclean pictures" from
other studios were generating more profit while Universal was losing money. Universal owned the rights to the "Oswald
the Lucky Rabbit" character, although Walt Disney and Ub Iwerks had created Oswald, and their films had enjoyed a
successful theatrical run. After Charles Mintz had unsuccessfully demanded that Disney accept a lower fee for producing
the property, Mintz produced the films with his own group of animators. Instead, Disney and Iwerks created Mickey
Mouse, who in 1928 starred in the first synchronized-sound animated short, Steamboat Willie. This moment effectively established Walt Disney Studios' foothold, while Universal became a minor player in film animation. Universal subsequently severed
its link to Mintz and formed its own in-house animation studio to produce Oswald cartoons headed by Walter Lantz.
In 2006, after almost 80 years, NBC Universal sold all Walt Disney-produced Oswald cartoons, along with the rights
to the character himself, back to Disney. In return, Disney released ABC sportscaster Al Michaels from his contract
so he could work on NBC's Sunday night NFL football package. However, Universal retained ownership of Oswald cartoons
produced for them by Walter Lantz from 1929 to 1943. In 1928, Laemmle Sr. made his son, Carl Jr., head of Universal Pictures as a 21st birthday present. Universal already had a reputation for nepotism—at one time, 70 of Carl Sr.'s relatives were supposedly on the payroll. Many of them were nephews, resulting in Carl Sr. being known around the
studios as "Uncle Carl." Ogden Nash famously quipped in rhyme, "Uncle Carl Laemmle/Has a very large faemmle." Among
these relatives was future Academy Award winning director/producer William Wyler. "Junior" Laemmle persuaded his
father to bring Universal up to date. He bought and built theaters, converted the studio to sound production, and
made several forays into high-quality production. His early efforts included the critically mauled part-talkie version
of Edna Ferber's novel Show Boat (1929); the lavish musical Broadway (1929), which included Technicolor sequences; and Universal's first all-color musical feature, King of Jazz (1930). The more serious All Quiet on the Western Front (1930) won its year's Best Picture Oscar. Laemmle Jr. created a niche for the studio, beginning a series
of horror films which extended into the 1940s, affectionately dubbed Universal Horror. Among them are Frankenstein
(1931), Dracula (also 1931), The Mummy (1932), and The Invisible Man (1933). Other Laemmle productions of this
period include Imitation of Life (1934) and My Man Godfrey (1936). Universal's forays into high-quality production
spelled the end of the Laemmle era at the studio. Taking on the task of modernizing and upgrading a film conglomerate
in the depths of the depression was risky, and for a time Universal slipped into receivership. The theater chain
was scrapped, but Carl, Jr. held fast to distribution, studio and production operations. The end for the Laemmles
came with a lavish version of Show Boat (1936), a remake of the studio's earlier 1929 part-talkie, produced as a high-quality, big-budget film rather than as a B-picture. The new film, which began production in late 1935, featured several stars from the Broadway stage version and, unlike the 1929 film, was based on the Broadway musical rather than the novel. Carl Jr.'s spending habits alarmed company stockholders. They would not allow production to start
on Show Boat unless the Laemmles obtained a loan. Universal was forced to seek a $750,000 production loan from the
Standard Capital Corporation, pledging the Laemmle family's controlling interest in Universal as collateral. It was
the first time Universal had borrowed money for a production in its 26-year history. The production went $300,000
over budget; Standard called in the loan, the cash-strapped Universal could not pay, and Standard foreclosed, seizing control of the studio on April 2, 1936. Although Universal's 1936 Show Boat (released a little over a month later) became a critical and financial success, it was not enough to save the Laemmles' involvement with the studio. They were unceremoniously
removed from the company they had founded. Because the Laemmles personally oversaw production, Show Boat was released
(despite the takeover) with Carl Laemmle and Carl Laemmle Jr.'s names on the credits and in the advertising campaign
of the film. Standard Capital's J. Cheever Cowdin had taken over as president and chairman of the board of directors,
and instituted severe cuts in production budgets. Gone were the big ambitions, and though Universal had a few big
names under contract, those it had been cultivating, like William Wyler and Margaret Sullavan, left. Meanwhile, producer
Joe Pasternak, who had been successfully producing light musicals with young sopranos for Universal's German subsidiary,
repeated his formula in America. Teenage singer Deanna Durbin starred in Pasternak's first American film, Three Smart
Girls (1936). The film was a box-office hit and reputedly restored the studio's solvency. The success of the film
led Universal to offer her a contract, which for the first five years of her career produced her most successful
pictures. When Pasternak stopped producing Durbin's pictures, and she outgrew her screen persona and pursued more
dramatic roles, the studio signed 13-year-old Gloria Jean for her own series of Pasternak musicals from 1939; she
went on to star with Bing Crosby, W. C. Fields, and Donald O'Connor. A popular Universal film of the late 1930s was
Destry Rides Again (1939), starring James Stewart as Destry and Marlene Dietrich in her comeback role after leaving
Paramount Studios. By the early 1940s, the company was concentrating on the lower-budget productions that were its
main staple: westerns, melodramas, serials and sequels to the studio's horror pictures, the latter now solely B pictures.
The studio fostered many series: The Dead End Kids and Little Tough Guys action features and serials (1938–43); the
comic adventures of infant Baby Sandy (1938–41); comedies with Hugh Herbert (1938–42) and The Ritz Brothers (1940–43);
musicals with Robert Paige, Jane Frazee, The Andrews Sisters, and The Merry Macs (1938–45); and westerns with Tom
Mix (1932–33), Buck Jones (1933–36), Bob Baker (1938–39), Johnny Mack Brown (1938–43), Rod Cameron (1944–45), and
Kirby Grant (1946–47). Universal could seldom afford its own stable of stars, and often borrowed talent from other
studios, or hired freelance actors. In addition to Stewart and Dietrich, Margaret Sullavan and Bing Crosby were
two of the major names that made a couple of pictures for Universal during this period. Some stars came from radio,
including Edgar Bergen, W. C. Fields, and the comedy team of Abbott and Costello (Bud Abbott and Lou Costello). Abbott
and Costello's military comedy Buck Privates (1941) gave the former burlesque comedians a national and international
profile. During the war years Universal did have a co-production arrangement with producer Walter Wanger and his
partner, director Fritz Lang, which lent the studio a measure of prestige. Universal's core audience base
was still found in the neighborhood movie theaters, and the studio continued to please the public with low- to medium-budget
films: Basil Rathbone and Nigel Bruce in new Sherlock Holmes mysteries (1942–46), teenage musicals with Gloria Jean,
Donald O'Connor, and Peggy Ryan (1942–43), and screen adaptations of radio's Inner Sanctum Mysteries with Lon Chaney,
Jr. (1943–45). Alfred Hitchcock was also borrowed for two films from Selznick International Pictures: Saboteur (1942)
and Shadow of a Doubt (1943). As Universal's main product had always been low-budget film, it was one of the last
major studios to have a contract with Technicolor. The studio did not make use of the three-strip Technicolor process
until Arabian Nights (1942), starring Jon Hall and Maria Montez. The following year, Technicolor was also used in
Universal's remake of their 1925 horror melodrama, Phantom of the Opera with Claude Rains and Nelson Eddy. With the
success of their first two pictures, a regular schedule of high-budget, Technicolor films followed. In 1945, the
British entrepreneur J. Arthur Rank, hoping to expand his American presence, bought into a four-way merger with Universal,
the independent company International Pictures, and producer Kenneth Young. The new combine, United World Pictures,
was a failure and was dissolved within one year. Rank and International remained interested in Universal, however,
culminating in the studio's reorganization as Universal-International. William Goetz, a founder of International,
was made head of production at the renamed Universal-International Pictures Inc., which also served as an import-export
subsidiary, and copyright holder for the production arm's films. Goetz, a son-in-law of Louis B. Mayer, decided to
bring "prestige" to the new company. He stopped the studio's low-budget production of B movies and serials, and curtailed
Universal's horror and "Arabian Nights" cycles. Distribution and copyright control remained under the name of Universal
Pictures Company Inc. Goetz set out an ambitious schedule. Universal-International became responsible for the American
distribution of Rank's British productions, including such classics as David Lean's Great Expectations (1946) and
Laurence Olivier's Hamlet (1948). Broadening its scope further, Universal-International branched out into the lucrative
non-theatrical field, buying a majority stake in home-movie dealer Castle Films in 1947, and taking the company over
entirely in 1951. For three decades, Castle would offer "highlights" reels from the Universal film library to home-movie
enthusiasts and collectors. Goetz licensed Universal's pre–Universal-International film library to Jack Broeder's
Realart Pictures for cinema re-release but Realart was not allowed to show the films on television. The production
arm of the studio still struggled. While there were to be a few hits like The Killers (1946) and The Naked City (1948),
Universal-International's new theatrical films often met with disappointing response at the box office. By the late
1940s, Goetz was out, and the studio returned to low-budget films. The inexpensive Francis (1950), the first film
in a series about a talking mule, and Ma and Pa Kettle (1949), which also launched a series, became mainstays of the company.
Once again, the films of Abbott and Costello, including Abbott and Costello Meet Frankenstein (1948), were among
the studio's top-grossing productions. But at this point Rank lost interest and sold his shares to the investor Milton
Rackmil, whose Decca Records would take full control of Universal in 1952. Besides Abbott and Costello, the studio
retained the Walter Lantz cartoon studio, whose product was released with Universal-International's films. In the
1950s, Universal-International resumed their series of Arabian Nights films, many starring Tony Curtis. The studio
also had a success with monster and science fiction films produced by William Alland, with many directed by Jack
Arnold. Other successes were the melodramas directed by Douglas Sirk and produced by Ross Hunter, although film
critics thought less well of them on first release than they have since. Among Universal-International's
stable of stars were Rock Hudson, Tony Curtis, Jeff Chandler, Audie Murphy, and John Gavin. Though Decca would continue
to keep picture budgets lean, it was favored by changing circumstances in the film business, as other studios let
their contract actors go in the wake of the 1948 U.S. vs. Paramount Pictures, et al. decision. Leading actors were
increasingly free to work where and when they chose, and in 1950 MCA agent Lew Wasserman made a deal with Universal
for his client James Stewart that would change the rules of the business. Wasserman's deal gave Stewart a share in
the profits of three pictures in lieu of a large salary. When one of those films, Winchester '73, proved to be a
hit, the arrangement would become the rule for many future productions at Universal, and eventually at other studios
as well. By the late 1950s, the motion picture business was again changing. The combination of the studio/theater-chain
break-up and the rise of television shrank the audience for cinema productions. The Music Corporation of
America (MCA), then predominately a talent agency, had also become a powerful television producer, renting space
at Republic Studios for its Revue Productions subsidiary. After a period of complete shutdown, a moribund Universal
agreed to sell its 360-acre (1.5 km²) studio lot to MCA in 1958 for $11 million; MCA renamed it Revue Studios. MCA owned
the studio lot, but not Universal Pictures, yet was increasingly influential on Universal's product. The studio lot
was upgraded and modernized, while MCA clients like Doris Day, Lana Turner, Cary Grant, and director Alfred Hitchcock
were signed to Universal Pictures contracts. The long-awaited takeover of Universal Pictures by MCA, Inc. happened
in mid-1962 as part of the MCA-Decca Records merger. The company reverted in name to Universal Pictures. As a final
gesture before leaving the talent agency business, virtually every MCA client was signed to a Universal contract.
In 1964 MCA formed Universal City Studios, Inc., merging the motion pictures and television arms of Universal Pictures
Company and Revue Productions (officially renamed as Universal Television in 1966). And so, with MCA in charge, Universal
became a full-blown, A-film movie studio, with leading actors and directors under contract; offering slick, commercial
films; and a studio tour subsidiary launched in 1964. Television production made up much of the studio's output,
with Universal heavily committed, in particular, to deals with NBC (which later merged with Universal to form NBC
Universal; see below) providing up to half of all prime time shows for several seasons. An innovation during this
period championed by Universal was the made-for-television movie. At this time, Hal B. Wallis, who had latterly worked
as a major producer at Paramount, moved over to Universal, where he produced several films, among them a lavish version
of Maxwell Anderson's Anne of the Thousand Days (1969), and the equally lavish Mary, Queen of Scots (1971). Though
neither could claim to be a big financial hit, both films received Academy Award nominations, and Anne was nominated
for Best Picture, Best Actor (Richard Burton), Best Actress (Geneviève Bujold), and Best Supporting Actor (Anthony
Quayle). Wallis retired from Universal after making the film Rooster Cogburn (1975), a sequel to True Grit (1969),
which Wallis had produced at Paramount. Rooster Cogburn co-starred John Wayne, reprising his Oscar-winning role from
the earlier film, and Katharine Hepburn, in their only film together. The film was only a moderate success. In the early
1970s, Universal teamed up with Paramount Pictures to form Cinema International Corporation, which distributed films
by Paramount and Universal worldwide. Though Universal did produce occasional hits, among them Airport (1970), The
Sting (1973), American Graffiti (also 1973), Earthquake (1974), and a big box-office success which restored the company's
fortunes: Jaws (1975), Universal during the decade was primarily a television studio. When Metro-Goldwyn-Mayer purchased
United Artists in 1981, MGM could not drop out of the CIC venture to merge its overseas operations with those of
United Artists. However, with future film productions from both names being released under the MGM/UA Entertainment banner, CIC
decided to merge UA's international units with MGM's and reformed as United International Pictures. There would be
other film hits like E.T. the Extra-Terrestrial (1982), Back to the Future (1985), Field of Dreams (1989), and Jurassic
Park (1993), but the film business was financially unpredictable. UIP began distributing films by start-up studio
DreamWorks in 1997, due to connections the founders had with Paramount, Universal, and Amblin Entertainment. In
2001, MGM dropped out of the UIP venture, and 20th Century Fox's international arm has handled distribution
of its titles since. Anxious to expand the company's broadcast and cable presence, longtime MCA head Lew
Wasserman sought a rich partner. He located Japanese electronics manufacturer Matsushita Electric (now known as Panasonic),
which agreed to acquire MCA for $6.6 billion in 1990. Meanwhile, around this time, the production subsidiary was
renamed Universal Studios Inc., and (in 1990) MCA created MCA/Universal Home Video Inc. for the VHS video cassette
(later DVD) sales industry. Matsushita provided a cash infusion, but the clash of cultures was too great to overcome,
and five years later Matsushita sold an 80% stake in MCA/Universal to Canadian drinks distributor Seagram for $5.7
billion. Seagram sold off its stake in DuPont to fund this expansion into the entertainment industry. Hoping to build
an entertainment empire around Universal, Seagram bought PolyGram in 1999 and other entertainment properties, but
the fluctuating profits characteristic of Hollywood were no substitute for the reliable income stream gained from
the previously held shares in DuPont. To raise money, Seagram head Edgar Bronfman Jr. sold Universal's television
holdings, including cable network USA, to Barry Diller (these same properties would be bought back later at greatly
inflated prices). In June 2000, Seagram was sold to French water utility and media company Vivendi, which owned StudioCanal;
the conglomerate then became known as Vivendi Universal. Afterward, Universal Pictures acquired the United States
distribution rights of several of StudioCanal's films, such as Mulholland Drive (which received an Oscar nomination)
and Brotherhood of the Wolf (which became the second-highest-grossing French-language film in the United States since
1980). Universal Pictures and StudioCanal also co-produced several films, such as Love Actually (a film budgeted
at $40 million that eventually grossed $246 million worldwide). In late 2000, the New York Film Academy was permitted to use
the Universal Studios backlot for student film projects in an unofficial partnership. Burdened with debt, in 2004
Vivendi Universal sold 80% of Vivendi Universal Entertainment (including the studio and theme parks) to General Electric,
parent of NBC. The resulting media super-conglomerate was renamed NBCUniversal, while Universal Studios Inc. remained
the name of the production subsidiary. After that deal, GE owned 80% of NBC Universal; Vivendi held the remaining
20%, with an option to sell its share in 2006. GE purchased Vivendi's share in NBCU in 2011 and in turn sold 51%
of the company to cable provider Comcast. Comcast merged the former GE subsidiary with its own cable-television programming
assets, creating the current NBCUniversal. Following Federal Communications Commission (FCC) approval, the Comcast-GE
deal was closed on January 29, 2011. In March 2013, Comcast bought the remaining 49% of NBCUniversal for $16.7 billion.
In late 2005, Viacom's Paramount Pictures acquired DreamWorks SKG after acquisition talks between GE and DreamWorks
stalled. Universal's long time chairperson, Stacey Snider, left the company in early 2006 to head up DreamWorks.
Snider was replaced by then-Vice Chairman Marc Shmuger and Focus Features head David Linde. On October 5, 2009, Marc
Shmuger and David Linde were ousted, and their co-chairperson jobs were consolidated under former president of worldwide
marketing and distribution Adam Fogelson, who became the single chairperson. Donna Langley was also promoted to co-chairperson.
In 2009, Stephanie Sperber founded Universal Partnerships & Licensing within Universal to license consumer products
for Universal. In September 2013, Adam Fogelson was ousted as co-chairman of Universal Pictures, and Donna
Langley was promoted to sole chairman. In addition, NBCUniversal International Chairman Jeff Shell would be appointed as Chairman
of the newly created Filmed Entertainment Group. Longtime studio head Ron Meyer would give up oversight of the film
studio and be appointed Vice Chairman of NBCUniversal, providing consultation to CEO Steve Burke on all of the company's
operations. Meyer still retains oversight of Universal Parks and Resorts. Universal's multi-year film financing
deal with Elliott Management expired in 2013. In July 2013, Universal made an agreement with Legendary Pictures to
market, co-finance, and distribute Legendary's films for five years starting in 2014, the year that Legendary's similar
agreement with Warner Bros. expired. In June 2014, Universal Partnerships took over licensing consumer products for
NBC and Sprout with expectation that all licensing would eventually be centralized within NBCUniversal. In May 2015,
Gramercy Pictures was revived by Focus Features as a genre label concentrating on action, sci-fi, and horror
films. As of 2015, Universal is the only studio to have released three billion-dollar films in one year; this distinction
was achieved in 2015 with Furious 7, Jurassic World and Minions. In the early 1950s, Universal set up its own distribution
company in France, and in the late 1960s, the company also started a production company in Paris, Universal Productions
France S.A., although sometimes credited under the name of the distribution company, Universal Pictures France. Except
for the first two films it produced, Claude Chabrol's Le scandale (English title The Champagne Murders) and Romain
Gary's Les oiseaux vont mourir au Pérou (English title Birds in Peru), it was only involved in French or other European
co-productions, the most notable ones being Louis Malle's Lacombe, Lucien, Bertrand Blier's Les Valseuses (English
title Going Places), and Fred Zinnemann's The Day of the Jackal. In all, it was involved in approximately 20 French
film productions. In the early 1970s, the unit was incorporated into the French Cinema International Corporation
arm.
Estonian (eesti keel [ˈeːsti ˈkeːl] ( listen)) is the official language of Estonia, spoken natively by about 1.1 million
people in Estonia and tens of thousands in various migrant communities. It belongs to the Finnic branch of the Uralic
language family. One distinctive feature that has caused a great amount of interest among linguists is what is traditionally
seen as three degrees of phonemic length: short, long, and "overlong", such that /sɑdɑ/, /sɑˑdɑ/ and /sɑːdɑ/ are
distinct. In actuality, the distinction is not purely in the phonemic length, and the underlying phonological mechanism
is still disputed.[citation needed] Estonian belongs to the Finnic branch of the Uralic languages, along with Finnish,
Karelian, and other nearby languages. The Uralic languages do not belong to the Indo-European languages. Estonian
is distantly related to Hungarian and to the Sami languages. Estonian has been influenced by Swedish, German (initially
Middle Low German, which was the lingua franca of the Hanseatic League and spoken natively in the territories of
what is today known as Estonia by a sizeable burgher community of Baltic Germans, later Estonian was also influenced
by standard German), and Russian, though it is not related to them genetically. Like Finnish and Hungarian, Estonian
is a somewhat agglutinative language, but unlike them, it has lost vowel harmony, the front vowels occurring exclusively
on the first or stressed syllable, although in older texts the vowel harmony can still be recognized. Furthermore,
the apocope of word-final sounds is extensive and has contributed to a shift from a purely agglutinative to a fusional
language.[citation needed] The basic word order is subject–verb–object. The two different historical Estonian languages
(sometimes considered dialects), the North and South Estonian languages, are based on the ancestors of modern Estonians'
migration into the territory of Estonia in at least two different waves, both groups speaking considerably different
Finnic vernaculars. Modern standard Estonian has evolved on the basis of the dialects of Northern Estonia. The domination
of Estonia after the Northern Crusades, from the 13th century to 1918 by Denmark, Germany, Sweden, and Russia delayed
indigenous literacy in Estonia.[citation needed] The oldest written records of the Finnic languages of Estonia date
from the 13th century. The Origines Livoniae (the Chronicle of Henry of Livonia) contains Estonian place names, words
and fragments of sentences. The earliest extant samples of connected (northern) Estonian are the so-called Kullamaa
prayers dating from 1524 and 1528. In 1525 the first book published in the Estonian language was printed. The book
was a Lutheran manuscript, which never reached the reader and was destroyed immediately after publication. The first
extant Estonian book is a bilingual German-Estonian translation of the Lutheran catechism by S. Wanradt and J. Koell
dating to 1535, during the Protestant Reformation period. An Estonian grammar book to be used by priests was printed
in German in 1637. The New Testament was translated into southern Estonian in 1686 (northern Estonian, 1715). The
two regional languages were unified on the basis of northern Estonian by Anton thor Helle. The birth of native Estonian literature
was in 1810 to 1820 when the patriotic and philosophical poems by Kristjan Jaak Peterson were published. Peterson,
who was the first student at the then German-language University of Dorpat to acknowledge his Estonian origin, is
commonly regarded as a herald of Estonian national literature and considered the founder of modern Estonian poetry.
His birthday on March 14 is celebrated in Estonia as Mother Tongue Day. A fragment from Peterson's poem "Kuu"
expresses the claim to reestablish the birthright of the Estonian language. From 1525 to 1917, 14,503 titles were
published in Estonian, as opposed to the 23,868 titles which were published between 1918 and 1940.[citation needed]
Writings in Estonian became significant only in the 19th century with the spread of the ideas of the Age of Enlightenment,
during the Estophile Enlightenment Period (1750–1840). Although Baltic Germans at large regarded the future of Estonians
as being a fusion with themselves, the Estophile educated class admired the ancient culture of the Estonians and
their era of freedom before the conquests by Danes and Germans in the 13th century. After the Estonian War of Independence
in 1919, the Estonian language became the state language of the newly independent country. In 1945, 97.3% of Estonia's
population considered itself ethnic Estonian and spoke the language. When Estonia was invaded and occupied by the Soviet Union
in World War II, the status of the Estonian language changed to the first of two official languages (Russian being
the other one). As with Latvia many immigrants entered Estonia under Soviet encouragement. In the second half of
the 1970s, the pressure of bilingualism (for Estonians) intensified, resulting in widespread knowledge of Russian
throughout the country. The Russian language was termed as ‘the language of friendship of nations’ and was taught
to Estonian children, sometimes as early as in kindergarten. Although teaching Estonian to non-Estonians in schools
was compulsory, in practice learning the language was often considered unnecessary. During the perestroika era, the
Law on the Status of the Estonian Language was adopted in January 1989. The collapse of the Soviet Union led to the
restoration of the Republic of Estonia's independence. Estonian went back to being the only state language in Estonia,
which in practice meant that use of Estonian was promoted while the use of Russian was discouraged. The return of
Soviet immigrants to their countries of origin has brought the proportion of Estonians in Estonia back above 70%.
Again as in Latvia, many of the remaining non-Estonians in Estonia have today adopted the Estonian language; about
40% spoke it at the 2000 census. The Estonian dialects are divided into two groups – the northern and southern dialects, historically
associated with the cities of Tallinn in the north and Tartu in the south, in addition to a distinct kirderanniku
dialect, that of the northeastern coast of Estonia. The northern group consists of the keskmurre or central dialect
that is also the basis for the standard language, the läänemurre or western dialect, roughly corresponding to Läänemaa
and Pärnumaa, the saarte murre (islands') dialect of Saaremaa and Hiiumaa and the idamurre or eastern dialect on
the northwestern shore of Lake Peipsi. The southern group (South Estonian language) consists of the Tartu, Mulgi,
Võru (Võro) and Setu (Seto) dialects. These are sometimes considered either variants of a South Estonian language,
or separate languages altogether. Also, Seto and Võro distinguish themselves from each other less by language and
more by their culture and their respective Christian confession. Like Finnish, Estonian employs the Latin script
as the basis for its alphabet, which adds the letters ä, ö, ü, and õ, plus the later additions š and ž. The letters
c, q, w, x and y are limited to proper names of foreign origin, and f, z, š, and ž appear in loanwords and foreign
names only. Ö and ü are pronounced similarly to their equivalents in Swedish and German. Unlike in standard German
but like Finnish and Swedish (when followed by 'r'), Ä is pronounced [æ], as in English mat. The vowels Ä, Ö and
Ü are clearly separate phonemes and inherent in Estonian, although the letter shapes come from German. The letter
õ denotes /ɤ/, unrounded /o/, or a close-mid back unrounded vowel. It is almost identical to the Bulgarian ъ /ɤ̞/
and the Vietnamese ơ, and is used to transcribe the Russian ы. Although the Estonian orthography is generally guided
by phonemic principles, with each grapheme corresponding to one phoneme, there are some historical and morphological
deviations from this: for example preservation of the morpheme in declension of the word (writing b, g, d in places
where p, k, t is pronounced) and in the use of 'i' and 'j'.[clarification needed] Where it is very impractical or
impossible to type š and ž, they are substituted with sh and zh in some written texts, although this is considered
incorrect. Otherwise, the h in sh represents a voiceless glottal fricative, as in Pasha (pas-ha); this also applies
to some foreign names. Modern Estonian orthography is based on the Newer Orthography created by Eduard Ahrens in
the second half of the 19th century based on Finnish orthography. The Older Orthography it replaced was created in
the 17th century by Bengt Gottfried Forselius and Johann Hornung based on standard German orthography. Earlier writing
in Estonian had by and large used an ad hoc orthography based on Latin and Middle Low German orthography. Some influences
of the standard German orthography, such as writing 'W'/'w' instead of 'V'/'v', persisted well into the 1930s.
Estonian words and names quoted in international publications from Soviet sources are often
back-transliterations from the Russian transliteration. Examples are the use of "ya" for "ä" (e.g. Pyarnu instead
of Pärnu), "y" instead of "õ" (e.g., Pylva instead of Põlva) and "yu" instead of "ü" (e.g., Pyussi instead of Püssi).
Even in the Encyclopædia Britannica one can find "ostrov Khiuma", where "ostrov" means "island" in Russian and "Khiuma"
is back-transliteration from Russian instead of "Hiiumaa" (Hiiumaa > Хийума(а) > Khiuma). Typologically, Estonian
represents a transitional form from an agglutinating language to a fusional language. The canonical word order is
SVO (subject–verb–object). In Estonian, nouns and pronouns do not have grammatical gender, but nouns and adjectives
decline in fourteen cases: nominative, genitive, partitive, illative, inessive, elative, allative, adessive, ablative,
translative, terminative, essive, abessive, and comitative, with the case and number of the adjective(s) always agreeing
with that of the noun (except in the terminative, essive, abessive and comitative, where there is agreement only
for the number, the adjective being in the genitive form). Thus the illative for kollane maja ("a yellow house")
is kollasesse majja ("into a yellow house"), but the terminative is kollase majani ("as far as a yellow house").
With respect to the Proto-Finnic language, elision has occurred; thus, the actual case marker may be absent, but
the stem is changed, cf. maja – majja and the Pohjanmaa dialect of Finnish maja – majahan. The direct object of the
verb appears either in the accusative (for total objects) or in the partitive (for partial objects). The accusative
coincides with the genitive in the singular and with the nominative in the plural. Accusative vs. partitive case opposition
of the object used with transitive verbs creates a telicity contrast, just as in Finnish. This is a rough equivalent
of the perfective vs. imperfective aspect opposition. The verbal system lacks a distinctive future tense (the present
tense serves here) and features special forms to express an action performed by an undetermined subject (the "impersonal").
Although the Estonian and Germanic languages are of very different origins, one can identify many similar words in
Estonian and English, for example. This is primarily because the Estonian language has borrowed nearly one third
of its vocabulary from Germanic languages, mainly from Low Saxon (Middle Low German) during the period of German
rule, and High German (including standard German). The percentage of Low Saxon and High German loanwords can be estimated
at 22–25 percent, with Low Saxon making up about 15 percent.[citation needed] Often 'b' and 'p' are interchangeable:
for example, 'baggage' becomes 'pagas', and 'lob' (to throw) becomes 'loopima'. The initial letter 's' is often dropped:
for example, 'skool' becomes 'kool', and 'stool' becomes 'tool'. Estonian language planners such as Ado Grenzstein (a
journalist active in Estonia in the 1870s–90s) tried to use formation ex nihilo, Urschöpfung; i.e. they created new
words out of nothing. The most famous reformer of Estonian, Johannes Aavik (1880–1973), used creations ex nihilo
(cf. ‘free constructions’, Tauli 1977), along with other sources of lexical enrichment such as derivations, compositions
and loanwords (often from Finnish; cf. Saareste and Raun 1965: 76). In Aavik’s dictionary (1921), which lists approximately
4000 words, there are many words which were (allegedly) created ex nihilo, many of which are in common use today.
Many of the coinages that have been considered (often by Aavik himself) as words concocted ex nihilo
could well have been influenced by foreign lexical items, for example words from Russian, German, French, Finnish,
English and Swedish. Aavik had a broad classical education and knew Ancient Greek, Latin and French. Consider roim
‘crime’ versus English crime or taunima ‘to condemn, disapprove’ versus Finnish tuomita ‘to condemn, to judge’ (these
Aavikisms appear in Aavik’s 1921 dictionary). These words might be better regarded as a peculiar manifestation of
morpho-phonemic adaptation of a foreign lexical item.
Paper is a thin material produced by pressing together moist fibres of cellulose pulp derived from wood, rags or grasses,
and drying them into flexible sheets. It is a versatile material with many uses, including writing, printing, packaging,
cleaning, and a number of industrial and construction processes. The pulp papermaking process is said to have been
developed in China during the early 2nd century AD, possibly as early as AD 105, by the Han court eunuch
Cai Lun, although the earliest archaeological fragments of paper derive from the 2nd century BC in China. The modern
pulp and paper industry is global, with China leading its production and the United States close behind. The oldest
known archaeological fragments of the immediate precursor to modern paper date to the 2nd century BC in China. The
pulp papermaking process is ascribed to Cai Lun, a 2nd-century AD Han court eunuch. With paper as an effective substitute
for silk in many applications, China could export silk in greater quantity, contributing to a Golden Age. Knowledge
of papermaking spread from China through the Middle East to medieval Europe in the 13th century, where the first
water-powered paper mills were built. Because paper was introduced to the West through the city of Baghdad, it was first
called bagdatikos. In the 19th century, industrial manufacture greatly lowered its cost, enabling mass exchange of
information and contributing to significant cultural shifts. In 1844, the Canadian inventor Charles Fenerty and the
German F. G. Keller independently developed processes for pulping wood fibres. The word "paper" is etymologically
derived from Latin papyrus, which comes from the Greek πάπυρος (papuros), the word for the Cyperus papyrus plant.
Papyrus is a thick, paper-like material produced from the pith of the Cyperus papyrus plant, which was used in ancient
Egypt and other Mediterranean cultures for writing before the introduction of paper into the Middle East and Europe.
Although the word paper is etymologically derived from papyrus, the two are produced very differently and the development
of the first is distinct from the development of the second. Papyrus is a lamination of natural plant fibres, while
paper is manufactured from fibres whose properties have been changed by maceration. To make pulp from wood, a chemical
pulping process separates lignin from cellulose fibres. This is accomplished by dissolving lignin in a cooking liquor,
so that it may be washed from the cellulose; this preserves the length of the cellulose fibres. Papers made from chemical
pulps are also known as wood-free papers (not to be confused with tree-free paper) because they do not contain
lignin, which deteriorates over time. The pulp can also be bleached to produce white paper, but this consumes 5%
of the fibres; chemical pulping processes are not used to make paper made from cotton, which is already 90% cellulose.
There are three main chemical pulping processes: the sulfite process dates back to the 1840s and was the dominant
method before the Second World War. The kraft process, invented in the 1870s and first used in the 1890s,
is now the most commonly practiced method; one of its advantages is that its chemical reaction with lignin produces
heat, which can be used to run a generator. Most pulping operations using the kraft process are net contributors
to the electricity grid or use the electricity to run an adjacent paper mill. Another advantage is that this process
recovers and reuses all inorganic chemical reagents. Soda pulping is another specialty process used to pulp straws,
bagasse and hardwoods with high silicate content. There are two major mechanical pulps: thermomechanical pulp
(TMP) and groundwood pulp (GW). In the TMP process, wood is chipped and then fed into large steam-heated refiners,
where the chips are squeezed and converted to fibres between two steel discs. In the groundwood process, debarked
logs are fed into grinders where they are pressed against rotating stones to be made into fibres. Mechanical pulping
does not remove the lignin, so the yield is very high, over 95%; however, it causes the paper thus produced to turn yellow
and become brittle over time. Mechanical pulps have rather short fibres, thus producing weak paper. Although large
amounts of electrical energy are required to produce mechanical pulp, it costs less than the chemical kind. Recycled
papers can be made from 100% recycled materials or blended with virgin pulp, although they are generally neither as
strong nor as bright as papers made purely from the latter. Besides the fibres, pulps may contain fillers such as chalk
or china clay, which improve the paper's characteristics for printing or writing. Additives for sizing purposes may be mixed
with it and/or applied to the paper web later in the manufacturing process; the purpose of such sizing is to establish
the correct level of surface absorbency to suit ink or paint. Pressing the sheet removes the water by force; once
the water is forced from the sheet, a special kind of felt (not to be confused with traditional felt)
is used to collect the water; when making paper by hand, a blotter sheet is used instead. Drying involves
using air and/or heat to remove water from the paper sheets; in the earliest days of paper making this was done by
hanging the sheets like laundry; in more modern times various forms of heated drying mechanisms are used. On the
paper machine the most common is the steam-heated can dryer. These can reach temperatures above 200 °F (93 °C) and
are used in long sequences of more than 40 cans, where the heat they produce can easily dry the paper to less
than 6% moisture. Paper at this point is uncoated. Coated paper has a thin layer of material such as calcium carbonate
or china clay applied to one or both sides in order to create a surface more suitable for high-resolution halftone
screens. (Uncoated papers are rarely suitable for screens above 150 lpi.) Coated or uncoated papers may have their
surfaces polished by calendering. Coated papers are divided into matte, semi-matte or silk, and gloss. Gloss papers
give the highest optical density in the printed image. The paper is then fed onto reels if it is to be used on web
printing presses, or cut into sheets for other printing processes or other purposes. The fibres in the paper basically
run in the machine direction. Sheets are usually cut "long-grain", i.e. with the grain parallel to the longer dimension
of the sheet. All paper produced by paper machines such as the Fourdrinier machine is wove paper, i.e. the wire mesh
that transports the web leaves a pattern that has the same density along the paper grain and across the grain. Textured
finishes, watermarks and wire patterns imitating hand-made laid paper can be created by the use of appropriate rollers
in the later stages of the machine. Wove paper does not exhibit "laidlines", which are small regular lines left behind
on paper when it was handmade in a mould made from rows of metal wires or bamboo. Laidlines are very close together.
They run perpendicular to the "chainlines", which are further apart. Handmade paper similarly exhibits "deckle edges",
or rough and feathery borders. The thickness of paper is often measured by caliper, which is typically given in thousandths
of an inch in the United States and in thousandths of a mm in the rest of the world. Paper may be between 0.07 and
0.18 millimetres (0.0028 and 0.0071 in) thick. Paper is often characterized by weight. In the United States, the
weight assigned to a paper is the weight of a ream, 500 sheets, of varying "basic sizes", before the paper is cut
into the size it is sold to end customers. For example, a ream of 20 lb, 8.5 in × 11 in (216 mm × 279 mm) paper weighs
5 pounds, because it has been cut from a larger sheet into four pieces. In the United States, printing paper is generally
20 lb, 24 lb, or 32 lb at most. Cover stock is generally 68 lb, and 110 lb or more is considered card stock. In Europe,
and other regions using the ISO 216 paper sizing system, the weight is expressed in grammes per square metre (g/m2
or usually just g) of the paper. Printing paper is generally between 60 g and 120 g. Anything heavier than 160 g
is considered card. The weight of a ream therefore depends on the dimensions of the paper and its thickness. Most
commercial paper sold in North America is cut to standard paper sizes based on customary units and is defined by
the length and width of a sheet of paper. The ISO 216 system used in most other countries is based on the surface
area of a sheet of paper, not on a sheet's width and length. It was first adopted in Germany in 1922 and generally
spread as nations adopted the metric system. The largest standard size paper is A0 (A zero), measuring one square
meter (approx. 1189 × 841 mm). Two sheets of A1, placed upright side by side, fit exactly into one sheet of A0 laid
on its side. Similarly, two sheets of A2 fit into one sheet of A1 and so forth. Common sizes used in the office and
the home are A4 and A3 (A3 is the size of two A4 sheets). The density of paper ranges from 250 kg/m3 (16 lb/cu ft)
for tissue paper to 1,500 kg/m3 (94 lb/cu ft) for some speciality paper. Printing paper is about 800 kg/m3 (50 lb/cu
ft). Much of the early paper made from wood pulp contained significant amounts of alum, a variety of aluminium sulfate
salts that is significantly acidic. Alum was added to paper to assist in sizing, making it somewhat water resistant
so that inks did not "run" or spread uncontrollably. Early papermakers did not realize that the alum they added liberally
to cure almost every problem encountered in making their product would eventually be detrimental. The cellulose fibres
that make up paper are hydrolyzed by acid, and the presence of alum would eventually degrade the fibres until the
paper disintegrated in a process that has come to be known as "slow fire". Documents written on rag paper were significantly
more stable. The use of non-acidic additives to make paper is becoming more prevalent, and the stability of these
papers is less of an issue. Paper made from mechanical pulp contains significant amounts of lignin, a major component
in wood. In the presence of light and oxygen, lignin reacts to give yellow materials, which is why newsprint and
other mechanical paper yellows with age. Paper made from bleached kraft or sulfite pulps does not contain significant
amounts of lignin and is therefore better suited for books, documents and other applications where whiteness of the
paper is essential. Paper made from wood pulp is not necessarily less durable than a rag paper. The ageing behavior
of a paper is determined by its manufacture, not the original source of the fibres. Furthermore, tests sponsored
by the Library of Congress prove that all paper is at risk of acid decay, because cellulose itself produces formic,
acetic, lactic and oxalic acids. Mechanical pulping yields almost a tonne of pulp per tonne of dry wood used, which
is why mechanical pulps are sometimes referred to as "high yield" pulps. With almost twice the yield of chemical
pulping, mechanical pulp is often cheaper. Mass-market paperback books and newspapers tend to use mechanical papers.
Book publishers tend to use acid-free paper, made from fully bleached chemical pulps for hardback and trade paperback
books. Worldwide consumption of paper has risen by 400% in the past 40 years, leading to an increase in deforestation,
with 35% of harvested trees being used for paper manufacture. Most paper companies also plant trees to help regrow
forests. Logging of old growth forests accounts for less than 10% of wood pulp, but is one of the most controversial
issues. Paper waste accounts for up to 40% of total waste produced in the United States each year, which adds up
to 71.6 million tons of paper waste per year in the United States alone. The average office worker in the US prints
31 pages every day. Americans also use on the order of 16 billion paper cups per year. Conventional bleaching of
wood pulp using elemental chlorine produces and releases into the environment large amounts of chlorinated organic
compounds, including chlorinated dioxins. Dioxins are recognized as a persistent environmental pollutant, regulated
internationally by the Stockholm Convention on Persistent Organic Pollutants. Dioxins are highly toxic, and health
effects on humans include reproductive, developmental, immune and hormonal problems. They are known to be carcinogenic.
Over 90% of human exposure is through food, primarily meat, dairy, fish and shellfish, as dioxins accumulate in the
food chain in the fatty tissue of animals. Some manufacturers have started using a new, significantly more environmentally
friendly alternative to expanded plastic packaging. Made out of paper, and known commercially as paperfoam, the new
packaging has very similar mechanical properties to some expanded plastic packaging, but is biodegradable and can
also be recycled with ordinary paper. With increasing environmental concerns about synthetic coatings (such as PFOA)
and the higher prices of hydrocarbon based petrochemicals, there is a focus on zein (corn protein) as a coating for
paper in high grease applications such as popcorn bags. Paper recycling processes can use either chemically or mechanically
produced pulp; by mixing it with water and applying mechanical action the hydrogen bonds in the paper can be broken
and fibres separated again. Most recycled paper contains a proportion of virgin fibre for the sake of quality; generally
speaking, de-inked pulp is of the same quality or lower than the collected paper it was made from.
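The weight and sizing conventions described above can be sketched numerically. This is a hypothetical illustration: the 17 in × 22 in "bond" basic size is an assumption (it is the conventional basic size for US office paper, but other grades such as cover and index use different basic sizes), and the A-series halving follows the ISO 216 rounding-down rule.

```python
# Numerical sketch of the paper weight and sizing conventions above.
# ASSUMPTION: 17 in x 22 in "bond" basic size, one ream = 500 sheets.

BOND_BASIC_SIZE_IN = (17.0, 22.0)

def cut_ream_weight(basis_weight_lb, sheet_w_in, sheet_h_in):
    """Weight (lb) of a 500-sheet ream after cutting to a smaller size."""
    basic_area = BOND_BASIC_SIZE_IN[0] * BOND_BASIC_SIZE_IN[1]
    return basis_weight_lb * (sheet_w_in * sheet_h_in) / basic_area

def basis_to_gsm(basis_weight_lb):
    """Convert a US bond basis weight to grammage (g/m^2)."""
    ream_area_m2 = 500 * BOND_BASIC_SIZE_IN[0] * BOND_BASIC_SIZE_IN[1] * 0.00064516
    return basis_weight_lb * 453.59237 / ream_area_m2

def a_series_mm(n):
    """Dimensions (mm) of ISO 216 size An: start from A0 = 841 x 1189 mm
    and halve the longer side at each step, rounding down."""
    short, long_ = 841, 1189
    for _ in range(n):
        short, long_ = long_ // 2, short  # halved long side becomes the new short side
    return min(short, long_), max(short, long_)

def density_kg_m3(grammage_g_m2, caliper_mm):
    """Sheet density from grammage and thickness; g/m^2 over mm equals kg/m^3."""
    return grammage_g_m2 / caliper_mm

print(cut_ream_weight(20, 8.5, 11))  # 5.0 -- the "cut into four pieces" example
print(round(basis_to_gsm(20), 1))    # 75.2 g/m^2
print(a_series_mm(4))                # (210, 297) -- A4
print(density_kg_m3(80, 0.1))        # 800.0 kg/m^3, typical printing paper
```

The first result matches the text's example: a 17 in × 22 in basic sheet cuts into exactly four 8.5 in × 11 in sheets, so a cut ream weighs one quarter of the basis weight.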
Adult contemporary music (AC) is a style of music, ranging from 1960s vocal and 1970s soft rock music to predominantly ballad-heavy
music of the present day, with varying degrees of easy listening, pop, soul, rhythm and blues, quiet storm, and rock
influence. Adult contemporary is essentially a continuation of the easy listening and soft rock style that became popular
in the 1960s and 1970s with some adjustments that reflect the evolution of pop/rock music. Adult contemporary tends
to have lush, soothing and highly polished qualities, with a strong emphasis on melody and harmonies. It is
usually melodic enough to get a listener's attention, and is inoffensive and pleasurable enough to work well as background
music. Like most of pop music, its songs tend to be written in a basic format employing a verse–chorus structure.
Adult contemporary is heavy on romantic sentimental ballads which mostly use acoustic instruments (though bass guitar
is usually used) such as acoustic guitars, pianos, saxophones, and sometimes an orchestral set. The electric guitars
are normally faint and high-pitched. However, recent adult contemporary music often features synthesizers (and
other electronics, such as drum machines). AC radio stations may play mainstream music, but they exclude hip
hop, dance tracks, hard rock, and some forms of teen pop, as these are less popular amongst their adult
target demographic. AC radio often targets the 25–44 age group, the
demographic that has received the most attention from advertisers since the 1960s. A common practice in recent years
is that many adult contemporary stations play less new music because they also give ample airtime to hits of the
past, so the de-emphasis on new songs slows the progression of the AC chart. Over the years, AC has spawned subgenres
including "hot AC", "soft AC" (also known as "lite AC"), "urban AC", "rhythmic AC", and "Christian AC" (a softer
type of contemporary Christian music). Some stations play only "hot AC", "soft AC", or only one of the variety of
subgenres. Therefore, it is not usually considered a specific genre of music; it is merely an assemblage of selected
tracks from musicians of many different genres. Adult contemporary traces its roots to the 1960s easy listening format,
which adopted a 70–80% instrumental to 20–30% vocal mix. A few offered 90% instrumentals, and a handful were entirely
instrumental. The easy listening format, as it was first known, was born of a desire by some radio stations in the
late 1950s and early 1960s to continue playing current hit songs but distinguish themselves from being branded as
"rock and roll" stations. Billboard first published the Easy Listening chart July 17, 1961, with 20 songs; the first
number one was "Boll Weevil Song" by Brook Benton. The chart described itself as "not too far out in either direction".
Initially, the vocalists consisted of artists such as Frank Sinatra, Doris Day, Johnny Mathis, Connie Francis, Nat
King Cole, Perry Como, and others. The custom recordings were usually instrumental versions of current or recent
rock and roll or pop hit songs, a move intended to give the stations more mass appeal without selling out. Some stations
would also occasionally play earlier big band-era recordings from the 1940s and early 1950s. After 1965, differences
between the Hot 100 chart and the Easy Listening chart became more pronounced. Better reflecting what middle of the
road stations were actually playing, the composition of the chart changed dramatically. As rock music continued to
harden, there was much less crossover between the Hot 100 and Easy Listening chart than there had been in the early
half of the 1960s. Roger Miller, Barbra Streisand and Bobby Vinton were among the chart's most popular performers.
One big impetus for the development of the AC radio format was that, when rock and roll music first became popular
in the mid-1950s, many more conservative radio stations wanted to continue to play current hit songs while shying
away from rock. These middle of the road (or "MOR") stations also frequently included older, pre-rock-era adult standards
and big band titles to further appeal to adult listeners who had grown up with those songs. Another big impetus for
the evolution of the AC radio format was the popularity of easy listening or "beautiful music" stations, stations
with music specifically designed to be purely ambient. Whereas most easy listening music was instrumental, created
by relatively unknown artists, and rarely purchased, AC was an attempt to create a similar "lite" format by choosing
certain tracks (both hit singles and album cuts) of popular artists. Hard rock had been established as a mainstream
genre by 1965. From the end of the 1960s, it became common to divide mainstream rock music into soft and hard rock,
with both emerging as major radio formats in the US. Soft rock was often derived from folk rock, using acoustic instruments
and putting more emphasis on melody and harmonies. Major artists included Barbra Streisand, Carole King, Cat Stevens,
James Taylor and Bread. The Hot 100 and Easy Listening charts became more similar again toward the end of the 1960s
and into the early and mid-1970s, when the texture of much of the music played on Top 40 radio once more began to
soften. The adult contemporary format began evolving into the sound that later defined it, with rock-oriented acts
such as Chicago, The Eagles, and Elton John becoming associated with the format. Soft rock reached its commercial peak
in the mid-to-late 1970s with acts such as Toto, England Dan & John Ford Coley, Air Supply, Seals and Crofts, America
and the reformed Fleetwood Mac, whose Rumours (1977) was the best-selling album of the decade. By 1977, some radio
stations, like New York's WTFM and NBC-owned WYNY, had switched to an all-soft rock format. By the 1980s, tastes
had changed and radio formats reflected this change, including musical artists such as Journey. Walter Sabo and his
team at NBC brought in major personalities from the AM Band to the FM Band taking the format from a background to
a foreground listening experience. The addition of major radio stars such as Dan Daniel, Steve O'Brien, Dick Summers,
Don Bleu and Tom Parker made it possible to fully monetize the format and provide the foundation for financial success
enjoyed to this day. Radio stations played Top 40 hits regardless of genre, although most were in the same genre
until the mid-1970s when different forms of popular music started to target different demographic groups, such as
disco vs. hard rock. This evolved into specialized radio stations that played specific genres of music, and generally
followed the evolution of artists in those genres. By the early 1970s, softer songs by artists like The Carpenters,
Anne Murray, John Denver, Barry Manilow, and even Streisand, began to be played more often on "Top 40" radio and
others were added to the mix on many AC stations. Also, some of these stations even played softer songs by Elvis
Presley, Linda Ronstadt, Elton John, Rod Stewart, Billy Joel, and other rock-based artists. Much of the music recorded
by singer-songwriters such as Diana Ross, James Taylor, Carly Simon, Carole King and Janis Ian got as much, if not
more, airplay on this format than on Top 40 stations. Easy Listening radio also began including songs by artists
who had begun in other genres, such as rock and roll or R&B. In addition, several early disco songs did well on
the Adult Contemporary format. On April 7, 1979, the Easy Listening chart officially became known as Adult Contemporary,
and those two words have remained consistent in the name of the chart ever since. Adult contemporary music became
one of the most popular radio formats of the 1980s. The growth of AC was a natural result of the generation that
first listened to the more "specialized" music of the mid-to-late 1970s growing older and taking no interest in the
heavy metal and rap/hip-hop music that a new generation had pushed to a significant role in the Top 40 charts by
the end of the decade. Mainstream AC itself has evolved in a similar fashion over the years; traditional AC artists
like Barbra Streisand, the Carpenters, Dionne Warwick, Barry Manilow, John Denver, and Olivia Newton-John found it
harder to have major Top 40 hits as the 1980s wore on, and due to the influence of MTV, artists who were staples
of the Contemporary Hit Radio format, such as Richard Marx, Michael Jackson, Bonnie Tyler, George Michael, Phil Collins,
and Laura Branigan began crossing over to the AC charts with greater frequency. Collins has been described by AllMusic
as "one of the most successful pop and adult contemporary singers of the '80s and beyond". However, with the combination
of MTV and AC radio, adult contemporary appeared harder to define as a genre, with established soft-rock artists
of the past still charting pop hits and receiving airplay alongside mainstream radio fare from newer artists at the
time. The amount of crossover between the AC chart and the Hot 100 has varied based on how much the passing pop music
trends of the times appealed to adult listeners. Not many disco or new wave songs were particularly successful on
the AC chart during the late 1970s and early 1980s, and much of the hip-hop and harder rock music featured on CHR
formats later in the decade would have been unacceptable on AC radio. Although dance-oriented, electronic pop and
ballad-oriented rock dominated the 1980s, soft rock songs still enjoyed a mild success thanks to artists like Sheena
Easton, Amy Grant, Lionel Richie, Christopher Cross, Dan Hill, Leo Sayer, Billy Ocean, Julio Iglesias, Bertie Higgins
and Tommy Page. No song spent more than six weeks at #1 on this chart during the 1980s, with nine songs accomplishing
that feat. Two of these were by Lionel Richie, "You Are" in 1983 and "Hello" in 1984, which also reached #1 on the
Hot 100. In 1989, Linda Ronstadt released Cry Like a Rainstorm, Howl Like the Wind, described by critics as "the
first true Adult Contemporary album of the decade", featuring American soul singer Aaron Neville on several of the
twelve tracks. The album was certified Triple Platinum in the United States alone and became a major success throughout
the globe. The Grammy Award-winning singles, "Don't Know Much" and "All My Life", were both long-running #1 Adult
Contemporary hits. Several additional singles from the disc made the AC Top 10 as well. The album won over many critics
seeking to define AC, and appeared to increase the tolerance and acceptance of AC music in mainstream day-to-day
radio play. The early 1990s marked the softening of urban R&B at the same time alternative rock emerged and traditional
pop saw a significant resurgence. This in part led to a widening of the market: not only could stations cater to more
niche markets, but it also became customary for artists to make AC-friendly singles. Unlike the majority of 1980s
mainstream singers, the 1990s mainstream pop/R&B singers such as All-4-One, Boyz II Men, Rob Thomas, Christina Aguilera,
Backstreet Boys and Savage Garden generally crossed over to the AC charts. Latin pop artists such as Lynda Thomas,
Ricky Martin, Marc Anthony, Selena, Enrique Iglesias and Luis Miguel also enjoyed success in the AC charts. In addition
to Celine Dion, who has had significant success on this chart, other artists with multiple number ones on the AC
chart in the 1990s include Mariah Carey, Phil Collins, Michael Bolton, Whitney Houston and Shania Twain. Newer female
singer-songwriters such as Sarah McLachlan, Natalie Merchant, Jewel, Melissa Etheridge and Sheryl Crow also broke
through on the AC chart during this time. In 1996, Billboard created a new chart called Adult Top 40, which reflects
programming on radio stations that exists somewhere between "adult contemporary" music and "pop" music. Although
they are sometimes mistaken for each other, the Adult Contemporary chart and the Adult Top 40 chart are separate
charts, and songs reaching one chart might not reach the other. In addition, hot AC is another subgenre of radio
programming that is distinct from the Hot Adult Contemporary Tracks chart as it exists today, despite the apparent
similarity in name. In response to the pressure on hot AC, a new kind of AC format has recently cropped up on American
radio. The urban adult contemporary format (a term coined by Barry Mayo) usually attracts a large number of African
Americans and sometimes Caucasian listeners through playing a great deal of R&B (without any form of rapping), gospel
music, classic soul and dance music (including disco). Another format, rhythmic AC, in addition to playing all the
popular hot and soft AC music, past and present, places a heavy emphasis on disco as well as 1980s and 1990s dance
hits, such as those by Amber, C&C Music Factory and Black Box, and includes dance remixes of pop songs, such as the
Soul Solution mix of Toni Braxton's "Unbreak My Heart". In its early years of existence, the smooth jazz format was
considered to be a form of AC, although it was mainly instrumental and bore a strong resemblance to the soft
AC-styled music. For many years, artists like George Benson, Kenny G and Dave Koz had crossover hits that were played
on both smooth jazz and soft AC stations. A notable pattern that developed during the 2000s and 2010s has been for
certain pop songs to have lengthy runs on AC charts, even after the songs have fallen off the Hot 100. Adrian Moreira,
senior vice president for adult music for RCA Music Group, said, "We've seen a fairly tidal shift in what AC will
play". Rather than emphasizing older songs, adult contemporary was playing many of the same songs as top 40 and adult
top 40, but only after the hits had become established. An article on MTV's website by Corey Moss describes this
trend: "In other words, AC stations are where pop songs go to die a very long death. Or, to optimists, to get a second
life." With the mixture of radio friendly AC tunes with some rock and pop fare also landing on the pop charts, mainstream
songs won over many critics in the need to define AC, and appeared to change the tolerance and acceptance of AC music
into mainstream day to day radio play. Part of the reason why more and more hot AC stations are forced to change
is that less and less new music fits their bill; most new rock is too alternative for mainstream radio and most new
pop is now influenced heavily by dance-pop and electronic dance music. A popular trend in this era was remixing dance
music hits into adult contemporary ballads, especially in the US, (for example, the "Candlelight Mix" versions of
"Heaven" by DJ Sammy, "Listen To Your Heart" by D.H.T., and "Everytime We Touch" by Cascada). Adult contemporary
has long characterized itself as family-friendly, but edited versions of "Perfect" by P!nk and "Forget You" by Cee
Lo Green showed up in the format in 2011. While most artists became established in other formats before moving to
adult contemporary, Michael Bublé and Josh Groban started out as AC artists. Throughout this decade, artists such
as Nick Lachey, James Blunt, John Mayer, Bruno Mars, Jason Mraz, Kelly Clarkson, Adele, Clay Aiken and Susan Boyle
have become successful thanks to a ballad heavy sound. Much as some hot AC and modern rock artists have crossed over
into each other, so too has soft AC crossed with country music in this decade. Country musicians such as Faith Hill,
Shania Twain, LeAnn Rimes and Carrie Underwood have had success on both charts. Since the mid-2000s, the mainstreaming
of bands like Wilco and Feist have pushed indie rock into the adult contemporary conversation. In the early 2010s,
indie musicians like Imagine Dragons, Mumford & Sons, Of Monsters & Men, The Lumineers and Ed Sheeran also had indie
songs that crossed over to the adult contemporary charts. The consolidation of the "hot AC" format contrasted with
the near-demise of most other AC formats: beginning with the 2005-2007 economic downturn and eventual recession, most
stations went for the more chart-based CHR, along with the top 40, urban and even Latino formats. Diminishing physical
record sales also proved a major blow to the AC genre. The "soft" AC format reinvented itself in the late 2000s/early
2010s as a result of its declining relevance, adopting a more upmarket, middle-of-the-road approach, with a selection
of "oldies" (usually from the 1960s/70s onwards), primarily rock, jazz, R&B and pop music. Newer songs are more often
(but not limited to) "easy listening" fare, this amount varying depending on the age of the station's target demographic.
Termed "the acoustic equivalent to Prozac", soft adult contemporary, a more adult-oriented version of AC, was born
in the late 1970s and grew in the early 1980s. WEEI-FM in Boston was the first station to use the term "soft rock",
with ad slogans such as, "Fleetwood Mac ... without the yack" and "Joni ... without the baloney". The vast majority
of music played on soft AC stations is mellow, more acoustic, and primarily solo vocalists. Other popular names for
the format include "Warm", "Sunny", "Bee" (or "B") and (particularly in Canada) "EZ Rock". The format can be seen
as a more contemporary successor to and combination of the middle of the road (MOR), beautiful music, easy listening
and soft rock formats. Many stations in the soft AC format capitalize on its appeal to office workers (many of them
females aged 25–54, a key advertiser demographic), and brand themselves as stations "everyone at work can agree on"
(KOST originated that phrase as a primary tagline, and other soft AC stations have followed suit). A large portion
of the music played on this format is considered either oldies or recurrents. It often deals with modern romantic and
sexual relationships (and sometimes other adult themes such as work, raising children, and family) in a thoughtful
and complex way. Soft AC, which has never minded keeping songs in high rotation literally for years in some cases,
does not appear necessarily to be facing similar pressures to expand its format. Soft AC includes a larger amount
of older music, especially classic R&B, soul, and 1960s and 1970s music, than hot AC. The soft AC format may soon
be facing the demographic pressures that the jazz and big band formats faced in the 1960s and 1970s and that the
oldies format is starting to face today, with the result that one may hear soft AC less on over-the-air radio and
more on satellite radio systems in coming years. Much of the music and artists that were traditionally played on
soft AC stations have been relegated to the adult standards format, which is itself disappearing because of aging
demographics. Some soft AC stations have found a niche by incorporating more oldies into their playlists and are
more open to playing softer songs that fit the "traditional" definition of AC. Artists contributing to this format
include mainly soft rock/pop singers such as Andy Williams, Johnny Mathis, Nana Mouskouri, Celine Dion, Julio Iglesias,
Frank Sinatra, Barry Manilow, Engelbert Humperdinck, and Marc Anthony. Hot adult contemporary radio stations play
a variety of classic hits and contemporary mainstream music aimed at an adult audience. Some Hot AC stations concentrate
slightly more on pop music and alternative rock to target the Generation Z audience, though they include the more
youth-oriented teen pop, urban and rhythmic dance tracks. This format often includes dance-pop (such as upbeat songs
by Madonna, Cher, Gloria Estefan and Kylie Minogue), power pop (mainly by boy bands such as Backstreet Boys and Westlife),
and adult-oriented soft rock music that is ballad-driven (typically by Aerosmith, The Eagles, Sting,
Toto and The Moody Blues). Generally, Hot AC radio stations target their music output towards the 18-54 age group
and a demographic audience of both men and women. Modern adult contemporary can be a variation of hot AC, and includes
modern rock titles in its presentation. In 1997, Mike Marino of KMXB in Las Vegas described the format as reaching
"an audience that has outgrown the edgier hip-hop or alternative music but hasn't gotten old and sappy enough for
the soft ACs." The format's artists included Alanis Morissette, Counting Crows, Gin Blossoms, Bon Jovi, Train, No
Doubt, The Script, The Cranberries, Lifehouse, Sarah McLachlan, Sara Bareilles, John Mayer, Jewel, and Ingrid Michaelson.
Unlike modern rock, which went after 18-34 men, this format appealed to women. Urban AC is a form of AC music geared
towards adult African-American audiences, and therefore, the artists that are played on these stations are most often
black, such as Des'ree, whose album I Ain't Movin' was massively popular amongst both African American audiences and
the wider national audience. The urban AC stations resemble soft AC rather than hot AC; they play predominantly
R&B and soul music with little hip-hop. This is reflected in many of the urban AC radio stations' taglines, such
as "Today's R&B and classic soul", "The best variety of R&B hits and oldies" and "(City/Region)'s R&B leader". Urban
AC's core artists include Luther Vandross, Trey Songz, Patti LaBelle, Toni Braxton, Whitney Houston, Aretha Franklin,
Frank Ocean, Craig David and Mariah Carey. A more elaborate form of urban AC is the rhythmic oldies format, which
focuses primarily on "old school" R&B and soul hits from the 1960s to the 1990s, including Motown and disco hits.
The format includes soul or disco artists such as ABBA, The Village People, The Jackson 5, Donna Summer, Tina Charles,
Gloria Gaynor and the Bee Gees. Rhythmic oldies stations still exist today, but target African-Americans as opposed
to a mass audience. A format called quiet storm is often included in urban adult contemporary, and is often played
during the evening, blending the urban AC and soft AC styles of music. The music that is played is strictly ballads
and slow jams, mostly but not limited to Black and Latino artists. Popular artists in the quiet storm format are
Teena Marie, Freddie Jackson, Johnny Gill, Lalah Hathaway, Vanessa L. Williams, Toni Braxton, and En Vogue among
others. Anita Baker, Sade, Regina Belle, and Luther Vandross are other examples of artists who appeal to mainstream
AC, urban AC and smooth jazz listeners. Some soft AC and urban AC stations like to play smooth jazz on the weekends.
In recent years, the Smooth Jazz format has been renamed Smooth AC in an attempt to lure younger listeners. Adult
contemporary R&B may be played on both soft AC stations and urban AC. It is a form of neo soul R&B that places emphasis
on songcraft and sophistication. As the use of drum machines, synthesizers, and sequencers dominates R&B-rooted music,
adult contemporary R&B tends to take most of its cues from the more refined strains of 1970s soul, such as smooth
soul, Philly soul and quiet storm. Classic songwriting touches and organic-leaning instrumentation, often featuring
string arrangements and horn charts, are constants. In the 1980s, lush jazz-R&B fusion (George Benson, Patti Austin,
Al Jarreau) and stylish crossover R&B (Anita Baker and Luther Vandross, New Edition and Keith Sweat) were equally
successful within the mainstream. In the 1990s and early 2000s (decade), artists as sonically contrasting as R. Kelly,
Leona Lewis (mainly ballads) and Jill Scott all fit the bill, provided the audience for the material was mature.
By riding and contributing to nearly all of the trends, no one has exemplified the style more than Babyface, whose
career thrived over 20 years as a member of the Deele (Two Occasions), a solo artist (Whip Appeal, When Can I See
You), and a songwriter/producer (Toni Braxton's Breathe Again, Boyz II Men's I'll Make Love to You). Contemporary
Christian music (CCM) has several subgenres, one being "Christian AC". Radio & Records, for instance, lists Christian
AC among its format charts. There has been crossover to mainstream and hot AC formats by many of the core artists
of the Christian AC genre, notably Amy Grant, Michael W. Smith, Kathy Troccoli, Steven Curtis Chapman, Plumb, and
more recently, MercyMe. In recent years it has become common for many AC stations, particularly soft AC stations,
to play primarily or exclusively Christmas music during the Christmas season in November and December. While these
tend mostly to be contemporary seasonal recordings by the same few artists featured under the normal format, most
stations will also air some vintage holiday tunes from older pop, MOR, and adult standards artists – such as Nat
King Cole, Bing Crosby, Dean Martin, The Carpenters, Percy Faith, Johnny Mathis and Andy Williams – many of whom
would never be played on these stations during the rest of the year. These Christmas music marathons typically start
during the week before Thanksgiving Day and end after Christmas Day, sometimes extending to New Year's Day. Afterwards,
the stations usually resume their normal music fare. Several stations begin the holiday format much earlier, at the
beginning of November. The roots of this tradition can be traced back to the beautiful music and easy listening stations
of the 1960s and 1970s.
Daylight saving time (DST) or summer time is the practice of advancing clocks by one hour during summer months so that
evening daylight lasts an hour longer, at the cost of later sunrise times. Typically, regions with
summer time adjust clocks forward one hour close to the start of spring and adjust them backward in the autumn to
standard time. New Zealander George Hudson proposed the modern idea of daylight saving in 1895. Germany and Austria-Hungary
organized the first nationwide implementation, starting on 30 April 1916. Many countries have used it at various
times since then, particularly since the energy crisis of the 1970s. The practice has received both advocacy and
criticism. Putting clocks forward benefits retailing, sports, and other activities that exploit sunlight after working
hours, but can cause problems for evening entertainment and for other activities tied to sunlight, such as farming.
Although some early proponents of DST aimed to reduce evening use of incandescent lighting, which used to be a primary
use of electricity, modern heating and cooling usage patterns differ greatly and research about how DST affects energy
use is limited or contradictory. DST clock shifts sometimes complicate timekeeping and can disrupt travel, billing,
record keeping, medical devices, heavy equipment, and sleep patterns. Computer software can often adjust clocks automatically,
but policy changes by various jurisdictions of the dates and timings of DST may be confusing. Industrialized societies
generally follow a clock-based schedule for daily activities that does not change over the course of the year.
The time of day that individuals begin and end work or school, and the coordination of mass transit, for example,
usually remain constant year-round. In contrast, an agrarian society's daily routines for work and personal conduct
are more likely governed by the length of daylight hours and by solar time, which change seasonally because of the
Earth's axial tilt. North and south of the tropics daylight is longer in summer and shorter in winter, the effect
becoming greater as one moves away from the tropics. By synchronously resetting all clocks in a region to one hour
ahead of Standard Time (one hour "fast"), individuals who follow such a year-round schedule will wake an hour earlier
than they would have otherwise; they will begin and complete daily work routines an hour earlier, and they will have
available to them an extra hour of daylight after their workday activities. However, they will have one less hour
of daylight at the start of each day, making the policy less practical during winter. While the times of sunrise
and sunset change at roughly equal rates as the seasons change, proponents of Daylight Saving Time argue that most
people prefer a greater increase in daylight hours after the typical "nine-to-five" workday. Supporters have also
argued that DST decreases energy consumption by reducing the need for lighting and heating, but the actual effect
on overall energy use is heavily disputed. The manipulation of time at higher latitudes (for example Iceland, Nunavut
or Alaska) has little impact on daily life, because the length of day and night changes more extremely throughout
the seasons (in comparison to other latitudes), and thus sunrise and sunset times are significantly out of sync with
standard working hours regardless of manipulations of the clock. DST is also of little use for locations near the
equator, because these regions see only a small variation in daylight in the course of the year. Although they did
not fix their schedules to the clock in the modern sense, ancient civilizations adjusted daily schedules to the sun
more flexibly than modern DST does, often dividing daylight into twelve hours regardless of day length, so that each
daylight hour was longer during summer. For example, Roman water clocks had different scales for different months
of the year: at Rome's latitude the third hour from sunrise, hora tertia, started by modern standards at 09:02 solar
time and lasted 44 minutes at the winter solstice, but at the summer solstice it started at 06:58 and lasted 75 minutes.
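The Roman scheme can be reproduced with a short calculation: divide the span from sunrise to sunset into twelve equal parts, and the length of each "hour" follows from the length of the day. A minimal Python sketch, using illustrative solstice sunrise and sunset times for Rome's latitude (assumed round figures chosen to match the durations quoted above; the exact historical values differ slightly):

```python
def roman_hours(sunrise_min, sunset_min):
    """Divide daylight into 12 equal 'hours' (so hours are unequal across seasons).

    Arguments are minutes after midnight, solar time. Returns the length of one
    daylight hour and the start of hora tertia (the third hour from sunrise),
    both in minutes.
    """
    daylight = sunset_min - sunrise_min
    hour_len = daylight / 12
    # The third hour begins after two full daylight hours have elapsed.
    hora_tertia_start = sunrise_min + 2 * hour_len
    return hour_len, hora_tertia_start

# Assumed solstice times at Rome: ~8h48m of daylight in winter, ~15h in summer.
winter = roman_hours(7 * 60 + 36, 16 * 60 + 24)
summer = roman_hours(4 * 60 + 30, 19 * 60 + 30)
print(winter[0])  # 44.0 minutes per daylight hour at the winter solstice
print(summer[0])  # 75.0 minutes per daylight hour at the summer solstice
```

With these inputs hora tertia begins at 544 minutes (09:04) in winter and 420 minutes (07:00) in summer, within a couple of minutes of the 09:02 and 06:58 figures above; the small discrepancy comes from the rounded sunrise times assumed here.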
After ancient times, equal-length civil hours eventually supplanted unequal ones, so civil time no longer varies by season.
Unequal hours are still used in a few traditional settings, such as some Mount Athos monasteries and all Jewish ceremonies.
During his time as an American envoy to France, Benjamin Franklin, who had published the old English proverb "Early to
bed, and early to rise, makes a man healthy, wealthy and wise", anonymously published a letter suggesting that Parisians
economize on candles by rising earlier to use morning sunlight. This 1784 satire proposed taxing shutters, rationing
candles, and waking the public by ringing church bells and firing cannons at sunrise. Despite common misconception,
Franklin did not actually propose DST; 18th-century Europe did not even keep precise schedules. However, this soon
changed as rail and communication networks came to require a standardization of time unknown in Franklin's day. Modern
DST was first proposed by the New Zealand entomologist George Hudson, whose shift-work job gave him leisure time
to collect insects, and led him to value after-hours daylight. In 1895 he presented a paper to the Wellington Philosophical
Society proposing a two-hour daylight-saving shift, and after considerable interest was expressed in Christchurch,
he followed up in an 1898 paper. Many publications credit DST's proposal to the prominent English builder and outdoorsman
William Willett, who independently conceived DST in 1905 during a pre-breakfast ride, when he observed with dismay
how many Londoners slept through a large part of a summer's day. An avid golfer, he also disliked cutting short his
round at dusk. His solution was to advance the clock during the summer months, a proposal he published two years
later. The proposal was taken up by the Liberal Member of Parliament (MP) Robert Pearce, who introduced the first
Daylight Saving Bill to the House of Commons on 12 February 1908. A select committee was set up to examine the issue,
but Pearce's bill did not become law, and several other bills failed in the following years. Willett lobbied for
the proposal in the UK until his death in 1915. Starting on 30 April 1916, Germany and its World War I ally Austria-Hungary
were the first to use DST (German: Sommerzeit) as a way to conserve coal during wartime. Britain, most of its allies,
and many European neutrals soon followed suit. Russia and a few other countries waited until the next year and the
United States adopted it in 1918. Broadly speaking, Daylight Saving Time was abandoned in the years after the war
(with some notable exceptions, including Canada, the UK, France, and Ireland). However, it was brought
back for periods of time in many different places during the following decades, and commonly during the Second World
War. It became widely adopted, particularly in North America and Europe starting in the 1970s as a result of the
1970s energy crisis. Since then, the world has seen many enactments, adjustments, and repeals. For specific details,
an overview is available at Daylight saving time by country. In the case of the United States where a one-hour shift
occurs at 02:00 local time, in spring the clock jumps forward from the last moment of 01:59 standard time to 03:00
DST and that day has 23 hours, whereas in autumn the clock jumps backward from the last moment of 01:59 DST to 01:00
standard time, repeating that hour, and that day has 25 hours. A digital display of local time does not read 02:00
exactly at the shift to summer time, but instead jumps from 01:59:59.9 forward to 03:00:00.0. Clock shifts are usually
scheduled near a weekend midnight to lessen disruption to weekday schedules. A one-hour shift is customary, but Australia's
Lord Howe Island uses a half-hour shift. Twenty-minute and two-hour shifts have been used in the past. Coordination
strategies differ when adjacent time zones shift clocks. The European Union shifts all at once, at 01:00 UTC or 02:00
CET or 03:00 EET; for example, Eastern European Time is always one hour ahead of Central European Time. Most of North
America shifts at 02:00 local time, so its zones do not shift at the same time; for example, Mountain Time is temporarily
(for one hour) zero hours ahead of Pacific Time, instead of one hour ahead, in the autumn and two hours, instead
of one, ahead of Pacific Time in the spring. In the past, Australian districts went even further and did not always
agree on start and end dates; for example, in 2008 most DST-observing areas shifted clocks forward on October 5 but
Western Australia shifted on October 26. In some cases only part of a country shifts; for example, in the US, Hawaii
and most of Arizona do not observe DST. Start and end dates vary with location and year. Since 1996 European Summer
Time has been observed from the last Sunday in March to the last Sunday in October; previously the rules were not
uniform across the European Union. Starting in 2007, most of the United States and Canada observe DST from the second
Sunday in March to the first Sunday in November, almost two-thirds of the year. The 2007 US change was part of the
Energy Policy Act of 2005; previously, from 1987 through 2006, the start and end dates were the first Sunday in April
and the last Sunday in October, and Congress retains the right to go back to the previous dates now that an energy-consumption
study has been done. Proponents for permanently retaining November as the month for ending DST point to Halloween
as a reason to delay the change in order to allow extra daylight for the evening of October 31. Beginning and ending
dates are roughly the reverse in the southern hemisphere. For example, mainland Chile observed DST from the second
Saturday in October to the second Saturday in March, with transitions at 24:00 local time. The time difference between
the United Kingdom and mainland Chile could therefore be five hours during the Northern summer, three hours during
the Southern summer and four hours a few weeks per year because of mismatch of changing dates. DST is generally not
observed near the equator, where sunrise times do not vary enough to justify it. Some countries observe it only in
some regions; for example, southern Brazil observes it while equatorial Brazil does not. Only a minority of the world's
population uses DST because Asia and Africa generally do not observe it. Daylight saving has caused controversy since
it began. Winston Churchill argued that it enlarges "the opportunities for the pursuit of health and happiness among
the millions of people who live in this country" and pundits have dubbed it "Daylight Slaving Time". Historically,
retailing, sports, and tourism interests have favored daylight saving, while agricultural and evening entertainment
interests have opposed it, and its initial adoption was prompted by energy crises and war. The fate of Willett's
1907 proposal illustrates several political issues involved. The proposal attracted many supporters, including Balfour,
Churchill, Lloyd George, MacDonald, Edward VII (who used half-hour DST at Sandringham), the managing director of
Harrods, and the manager of the National Bank. However, the opposition was stronger: it included Prime Minister H.
H. Asquith, Christie (the Astronomer Royal), George Darwin, Napier Shaw (director of the Meteorological Office),
many agricultural organizations, and theatre owners. After many hearings the proposal was narrowly defeated in a
Parliament committee vote in 1909. Willett's allies introduced similar bills every year from 1911 through 1914, to
no avail. The US was even more skeptical: Andrew Peters introduced a DST bill to the US House of Representatives
in May 1909, but it soon died in committee. After Germany and its allies led the way by starting DST on 30 April 1916
during World War I to alleviate hardships from wartime coal shortages and air
raid blackouts, the political equation changed in other countries; the United Kingdom first used DST on 21 May 1916.
US retailing and manufacturing interests led by Pittsburgh industrialist Robert Garland soon began lobbying for DST,
but were opposed by railroads. The US's 1917 entry to the war overcame objections, and DST was established in 1918.
The war's end swung the pendulum back. Farmers continued to dislike DST, and many countries repealed it after the
war. Britain was an exception: it retained DST nationwide but over the years adjusted transition dates for several
reasons, including special rules during the 1920s and 1930s to avoid clock shifts on Easter mornings. The US was
more typical: Congress repealed DST after 1919. President Woodrow Wilson, like Willett an avid golfer, vetoed the
repeal twice but his second veto was overridden. Only a few US cities retained DST locally thereafter, including
New York so that its financial exchanges could maintain an hour of arbitrage trading with London, and Chicago and
Cleveland to keep pace with New York. Wilson's successor Warren G. Harding opposed DST as a "deception". Reasoning
that people should instead get up and go to work earlier in the summer, he ordered District of Columbia federal employees
to start work at 08:00 rather than 09:00 during summer 1922. Some businesses followed suit though many others did
not; the experiment was not repeated. The history of time in the United States includes DST during both world wars,
but no standardization of peacetime DST until 1966. In May 1965, for two weeks, St. Paul, Minnesota and Minneapolis,
Minnesota were on different times, when the capital city decided to join most of the nation by starting Daylight
Saving Time while Minneapolis opted to follow the later date set by state law. In the mid-1980s, Clorox (parent of
Kingsford Charcoal) and 7-Eleven provided the primary funding for the Daylight Saving Time Coalition behind the 1987
extension to US DST, and both Idaho senators voted for it based on the premise that during DST fast-food restaurants
sell more French fries, which are made from Idaho potatoes. In 1992, after a three-year trial of daylight saving
in Queensland, Australia, a referendum on daylight saving was held and defeated with a 54.5% 'no' vote – with regional
and rural areas strongly opposed, while those in the metropolitan south-east were in favor. In 2005, the Sporting
Goods Manufacturers Association and the National Association of Convenience Stores successfully lobbied for the 2007
extension to US DST. In December 2008, the Daylight Saving for South East Queensland (DS4SEQ) political party was
officially registered in Queensland, advocating the implementation of a dual-time zone arrangement for Daylight Saving
in South East Queensland while the rest of the state maintains standard time. DS4SEQ contested the March 2009 Queensland
State election with 32 candidates and received one percent of the statewide primary vote, equating to around 2.5%
across the 32 electorates contested. After a three-year trial, more than 55% of Western Australians voted against
DST in 2009, with rural areas strongly opposed. On 14 April 2010, after being approached by the DS4SEQ political
party, Queensland Independent member Peter Wellington, introduced the Daylight Saving for South East Queensland Referendum
Bill 2010 into Queensland Parliament, calling for a referendum to be held at the next State election on the introduction
of daylight saving into South East Queensland under a dual-time zone arrangement. The Bill was defeated in Queensland
Parliament on 15 June 2011. In the UK the Royal Society for the Prevention of Accidents supports a proposal to observe
SDST's additional hour year-round, but is opposed by some groups, such as postal workers and farmers, and particularly
by those living in the northern regions of the UK. In some Muslim countries DST is temporarily abandoned during Ramadan
(the month when no food should be eaten between sunrise and sunset), since the DST would delay the evening dinner.
Ramadan took place in July and August in 2012. This concerns at least Morocco and Palestine, although Iran keeps
DST during Ramadan. Most Muslim countries do not use DST, partially for this reason. The 2011 declaration by Russia
that it would not turn its clocks back and stay in DST all year long was subsequently followed by a similar declaration
from Belarus. The plan generated widespread complaints due to the dark of wintertime morning, and thus was abandoned
in 2014. The country changed its clocks to standard time on 26 October 2014 and intends to remain on it permanently.
Proponents of DST generally argue that it saves energy, promotes outdoor leisure activity in the evening (in summer),
and is therefore good for physical and psychological health, reduces traffic accidents, reduces crime, or is good
for business. Groups that tend to support DST are urban workers, retail businesses, outdoor sports enthusiasts and
businesses, tourism operators, and others who benefit from increased light during the evening in summer. Opponents
argue that actual energy savings are inconclusive, that DST increases health risks such as heart attack, that DST
can disrupt morning activities, and that the act of changing clocks twice a year is economically and socially disruptive
and cancels out any benefit. Farmers have tended to oppose DST. Common agreement about the day's layout or schedule
confers so many advantages that a standard DST schedule has generally been chosen over ad hoc efforts to get up earlier.
The advantages of coordination are so great that many people ignore whether DST is in effect by altering their nominal
work schedules to coordinate with television broadcasts or daylight. DST is commonly not observed during most of
winter, because its mornings are darker; workers may have no sunlit leisure time, and children may need to leave
for school in the dark. Since DST is applied to many varying communities, its effects may be very different depending
on their culture, light levels, geography, and climate; that is why it is hard to make generalized conclusions about
the absolute effects of the practice. Some areas may adopt DST simply as a matter of coordination with others rather
than for any direct benefits. DST's potential to save energy comes primarily from its effects on residential lighting,
which consumes about 3.5% of electricity in the United States and Canada. Delaying the nominal time of sunset and
sunrise reduces the use of artificial light in the evening and increases it in the morning. As Franklin's 1784 satire
pointed out, lighting costs are reduced if the evening reduction outweighs the morning increase, as in high-latitude
summer when most people wake up well after sunrise. An early goal of DST was to reduce evening usage of incandescent
lighting, which used to be a primary use of electricity. Although energy conservation remains an important goal,
energy usage patterns have greatly changed since then, and recent research is limited and reports contradictory results.
Electricity use is greatly affected by geography, climate, and economics, making it hard to generalize from single
studies. Several studies have suggested that DST increases motor fuel consumption. The 2008 DOE report found no significant
increase in motor gasoline consumption due to the 2007 United States extension of DST. Retailers, sporting goods
makers, and other businesses benefit from extra afternoon sunlight, as it induces customers to shop and to participate
in outdoor afternoon sports. In 1984, Fortune magazine estimated that a seven-week extension of DST would yield an
additional $30 million for 7-Eleven stores, and the National Golf Foundation estimated the extension would increase
golf industry revenues $200 million to $300 million. A 1999 study estimated that DST increases the revenue of the
European Union's leisure sector by about 3%. Conversely, DST can adversely affect farmers, parents of young children,
and others whose hours are set by the sun; they have traditionally opposed the practice, although some farmers
are neutral. One reason why farmers oppose DST is that grain is best harvested after dew evaporates, so when field
hands arrive and leave earlier in summer their labor is less valuable. Dairy farmers are another group who complain
of the change. Their cows are sensitive to the timing of milking, so delivering milk earlier disrupts their systems.
Today some farmers' groups are in favor of DST. Changing clocks and DST rules has a direct economic cost, entailing
extra work to support remote meetings, computer applications and the like. For example, a 2007 North American rule
change cost an estimated $500 million to $1 billion, and Utah State University economist William F. Shughart II has
estimated the lost opportunity cost at around $1.7 billion. Although it has been argued that clock shifts correlate
with decreased economic efficiency, and that in 2000 the daylight-saving effect implied an estimated one-day loss
of $31 billion on US stock exchanges, the estimated numbers depend on the methodology. The results have been disputed,
and the original authors have rebutted the points raised by critics. In 1975 the US DOT conservatively identified
a 0.7% reduction in traffic fatalities during DST, and estimated the real reduction at 1.5% to 2%, but the 1976 NBS
review of the DOT study found no differences in traffic fatalities. In 1995 the Insurance Institute for Highway Safety
estimated a reduction of 1.2%, including a 5% reduction in crashes fatal to pedestrians. Others have found similar
reductions. Single/Double Summer Time (SDST), a variant where clocks are one hour ahead of the sun in winter and
two in summer, has been projected to reduce traffic fatalities by 3% to 4% in the UK, compared to ordinary DST. However,
accidents do increase by as much as 11% during the two weeks that follow the end of British Summer Time. It is not
clear whether sleep disruption contributes to fatal accidents immediately after the spring clock shifts. A correlation
between clock shifts and traffic accidents has been observed in North America and the UK but not in Finland or Sweden.
If this effect exists, it is far smaller than the overall reduction in traffic fatalities. A 2009 US study found
that on Mondays after the switch to DST, workers sleep an average of 40 minutes less, and are injured at work more
often and more severely. In the 1970s the US Law Enforcement Assistance Administration (LEAA) found a reduction of
10% to 13% in Washington, D.C.'s violent crime rate during DST. However, the LEAA did not filter out other factors,
and it examined only two cities and found crime reductions only in one and only in some crime categories; the DOT
decided it was "impossible to conclude with any confidence that comparable benefits would be found nationwide". Outdoor
lighting has a marginal and sometimes even contradictory influence on crime and fear of crime. In several countries,
fire safety officials encourage citizens to use the two annual clock shifts as reminders to replace batteries in
smoke and carbon monoxide detectors, particularly in autumn, just before the heating and candle season causes an
increase in home fires. Similar twice-yearly tasks include reviewing and practicing fire escape and family disaster
plans, inspecting vehicle lights, checking storage areas for hazardous materials, reprogramming thermostats, and
seasonal vaccinations. Locations without DST can instead use the first days of spring and autumn as reminders. DST
has mixed effects on health. In societies with fixed work schedules it provides more afternoon sunlight for outdoor
exercise. It alters sunlight exposure; whether this is beneficial depends on one's location and daily schedule, as
sunlight triggers vitamin D synthesis in the skin, but overexposure can lead to skin cancer. DST may help in depression
by causing individuals to rise earlier, but some argue the reverse. The Retinitis Pigmentosa Foundation Fighting
Blindness, chaired by blind sports magnate Gordon Gund, successfully lobbied in 1985 and 2005 for US DST extensions.
Clock shifts were found to increase the risk of heart attack by 10 percent, and to disrupt sleep and reduce its efficiency.
Effects on seasonal adaptation of the circadian rhythm can be severe and last for weeks. A 2008 study found that
although male suicide rates rise in the weeks after the spring transition, the relationship weakened greatly after
adjusting for season. A 2008 Swedish study found that heart attacks were significantly more common the first three
weekdays after the spring transition, and significantly less common the first weekday after the autumn transition.
The government of Kazakhstan cited health complications due to clock shifts as a reason for abolishing DST in 2005.
In March 2011, Dmitri Medvedev, president of Russia, claimed that "stress of changing clocks" was the motivation
for Russia to stay in DST all year long. Officials at the time talked about an annual increase in suicides. An unexpected
adverse effect of daylight saving time may lie in the fact that an extra part of morning rush hour traffic occurs
before dawn and traffic emissions then cause higher air pollution than during daylight hours. DST's clock shifts
have the obvious disadvantage of complexity. People must remember to change their clocks; this can be time-consuming,
particularly for mechanical clocks that cannot be moved backward safely. People who work across time zone boundaries
need to keep track of multiple DST rules, as not all locations observe DST or observe it the same way. The length
of the calendar day becomes variable; it is no longer always 24 hours. Disruption to meetings, travel, broadcasts,
billing systems, and records management is common, and can be expensive. During an autumn transition from 02:00 to
01:00, a clock reads times from 01:00:00 through 01:59:59 twice, possibly leading to confusion. Damage to a German
steel facility occurred during a DST transition in 1993, when a computer timing system linked to a radio time synchronization
signal allowed molten steel to cool for one hour less than the required duration, resulting in spattering of molten
steel when it was poured. Medical devices may generate adverse events that could harm patients, without being obvious
to clinicians responsible for care. These problems are compounded when the DST rules themselves change; software
developers must test and perhaps modify many programs, and users must install updates and restart applications. Consumers
must update devices such as programmable thermostats with the correct DST rules, or manually adjust the devices'
clocks. A common strategy to resolve these problems in computer systems is to express time in Coordinated
Universal Time (UTC) rather than in the local time zone. For example, Unix-based computer systems use the UTC-based
Unix time internally. Some clock-shift problems could be avoided by adjusting clocks continuously or at least more
gradually—for example, Willett at first suggested weekly 20-minute transitions—but this would add complexity and
has never been implemented. DST inherits and can magnify the disadvantages of standard time. For example, when reading
a sundial, one must compensate for it along with time zone and natural discrepancies. Also, sun-exposure guidelines
such as avoiding the sun within two hours of noon become less accurate when DST is in effect. As explained by Richard
Meade in the English Journal of the (American) National Council of Teachers of English, the form daylight savings
time (with an "s") was, by 1978, already much more common than the older form daylight saving time in American English
("the change has been virtually accomplished"). Nevertheless, even dictionaries such as Merriam-Webster's, American
Heritage, and Oxford, which describe actual usage instead of prescribing outdated usage (and therefore also list
the newer form), still list the older form first. This is because the older form is still very common in print and
preferred by many editors. ("Although daylight saving time is considered correct, daylight savings time (with an
"s") is commonly used.") The first two words are sometimes hyphenated (daylight-saving[s] time). Merriam-Webster's
also lists the forms daylight saving (without "time"), daylight savings (without "time"), and daylight time. In Britain,
Willett's 1907 proposal used the term daylight saving, but by 1911 the term summer time replaced daylight saving
time in draft legislation. Continental Europe uses similar phrases, such as Sommerzeit in Germany, zomertijd in Dutch-speaking
regions, kesäaika in Finland, horario de verano or hora de verano in Spain and heure d'été in France, whereas in
Italy the term is ora legale, that is, legal time (legally enforced time) as opposed to "ora solare", solar time,
in winter. The name of local time typically changes when DST is observed. American English replaces standard with
daylight: for example, Pacific Standard Time (PST) becomes Pacific Daylight Time (PDT). In the United Kingdom, the
standard term for UK time when advanced by one hour is British Summer Time (BST), and British English typically inserts
summer into other time zone names, e.g. Central European Time (CET) becomes Central European Summer Time (CEST).
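The standard/daylight renaming, and the repeated 01:00–02:00 hour at the autumn transition described above, can both be observed programmatically from the IANA time zone data. A minimal Python sketch (assuming the system tz database or the tzdata package is available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

pacific = ZoneInfo("America/Los_Angeles")

# The same zone reports a different abbreviation depending on the date:
winter = datetime(2023, 1, 15, 12, 0, tzinfo=pacific)
summer = datetime(2023, 7, 15, 12, 0, tzinfo=pacific)
print(winter.tzname())  # PST
print(summer.tzname())  # PDT

# During the autumn fall-back, 01:30 local time occurs twice; the
# fold attribute (PEP 495) selects the first or second occurrence:
ambiguous = datetime(2023, 11, 5, 1, 30, tzinfo=pacific)
print(ambiguous.tzname())                  # PDT (first pass through 01:30)
print(ambiguous.replace(fold=1).tzname())  # PST (second pass)
```

This also illustrates why wall-clock timestamps alone cannot disambiguate the repeated hour: extra information (the fold flag, or an explicit UTC offset) is needed.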
The North American mnemonic "spring forward, fall back" (also "spring ahead ...", "spring up ...", and "... fall
behind") helps people remember which direction to shift clocks. Changes to DST rules cause problems in existing computer
installations. For example, the 2007 change to DST rules in North America required many computer systems to be upgraded,
with the greatest impact on email and calendaring programs; the upgrades consumed a significant effort by corporate
information technologists. Some applications standardize on UTC to avoid problems with clock shifts and time zone
differences. Likewise, most modern operating systems internally handle and store all times as UTC and only convert
to local time for display. However, even if UTC is used internally, the systems still require information on time
zones to correctly calculate local time where it is needed. Many systems in use today base their date/time calculations
on data derived from the IANA time zone database, also known as zoneinfo. The IANA time zone database maps a name
to the named location's historical and predicted clock shifts. This database is used by many computer software systems,
including most Unix-like operating systems, Java, and the Oracle RDBMS; HP's "tztab" database is similar but incompatible.
When temporal authorities change DST rules, zoneinfo updates are installed as part of ordinary system maintenance.
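Because zoneinfo records historical as well as current rules, ordinary date arithmetic reflects past rule changes. A sketch using Python's zoneinfo module to show the 2007 North American rule change (dates chosen for illustration):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# The 2007 US rule change moved the start of DST from the first Sunday
# in April to the second Sunday in March; zoneinfo retains both rules,
# so the same calendar date falls on different sides of the transition:
before = datetime(2006, 3, 15, 12, 0, tzinfo=ny)
after = datetime(2007, 3, 15, 12, 0, tzinfo=ny)
print(before.tzname())  # EST: mid-March 2006 is still standard time
print(after.tzname())   # EDT: mid-March 2007 is already daylight time
```

This is why a fixed offset rule mishandles older timestamps: correct localization of a historical instant requires the rules in force at that instant, not the current ones.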
In Unix-like systems the TZ environment variable specifies the location name, as in TZ=':America/New_York'. In many
of those systems there is also a system-wide setting that is applied if the TZ environment variable is not set: this
setting is controlled by the contents of the /etc/localtime file, which is usually a symbolic link or hard link to
one of the zoneinfo files. Internal time is stored in timezone-independent epoch time; the TZ is used by each of
potentially many simultaneous users and processes to independently localize time display. Older or stripped-down
systems may support only the TZ values required by POSIX, which specify at most one start and end rule explicitly
in the value. For example, TZ='EST5EDT,M3.2.0/02:00,M11.1.0/02:00' specifies time for the eastern United States starting
in 2007. Such a TZ value must be changed whenever DST rules change, and the new value applies to all years, mishandling
some older timestamps. As with zoneinfo, a user of Microsoft Windows configures DST by specifying the name of a location,
and the operating system then consults a table of rule sets that must be updated when DST rules change. Procedures
for specifying the name and updating the table vary with release. Updates are not issued for older versions of Microsoft
Windows. Windows Vista supports at most two start and end rules per time zone setting. In a Canadian location observing
DST, a single Vista setting supports both 1987–2006 and post-2006 time stamps, but mishandles some older time stamps.
Older Microsoft Windows systems usually store only a single start and end rule for each zone, so that the same Canadian
setting reliably supports only post-2006 time stamps. These limitations have caused problems. For example, before
2005, DST in Israel varied each year and was skipped some years. Windows 95 used rules correct for 1995 only, causing
problems in later years. In Windows 98, Microsoft marked Israel as not having DST, forcing Israeli users to shift
their computer clocks manually twice a year. The 2005 Israeli Daylight Saving Law established predictable rules using
the Jewish calendar but Windows zone files could not represent the rules' dates in a year-independent way. Partial
workarounds, which mishandled older time stamps, included manually switching zone files every year and a Microsoft
tool that switches zones automatically. In 2013, Israel standardized its daylight saving time according to the Gregorian
calendar. Microsoft Windows keeps the system real-time clock in local time. This causes several problems, including
compatibility when multi booting with operating systems that set the clock to UTC, and double-adjusting the clock
when multi booting different Windows versions, such as with a rescue boot disk. This approach is a problem even in
Windows-only systems: there is no support for per-user timezone settings, only a single system-wide setting. In 2008
Microsoft hinted that future versions of Windows would partially support a Windows registry entry, RealTimeIsUniversal,
that had been introduced many years earlier, when Windows NT supported RISC machines with UTC clocks, but had not
been maintained. Since then at least two fixes related to this feature have been published by Microsoft. The NTFS
file system used by recent versions of Windows stores file time stamps in UTC, but displays them corrected to
local—or seasonal—time. However, the FAT filesystem commonly used on removable devices stores only the local time.
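Since clock adjustments can leave identical files with differing displayed timestamps, comparing file contents (for instance with a checksum, as suggested for duplicate detection) is more reliable than comparing dates. A minimal sketch, with the function name being an illustrative choice:

```python
import hashlib

def file_digest(path, chunk_size=1 << 16):
    """Return a SHA-256 hex digest of a file's contents, so duplicates
    can be detected even when DST-shifted timestamps disagree."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large files do not need to fit in memory.
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Two files with equal digests have identical contents regardless of what their filesystems report as modification times.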
Consequently, when a file is copied from the hard disk onto separate media, its time will be set to the current local
time. If the time adjustment is changed, the timestamps of the original file and the copy will be different. The
same effect can be observed when compressing and uncompressing files with some file archivers. It is the NTFS file
whose displayed time changes with the seasonal adjustment. This effect should be kept in mind when trying to determine if a file is a duplicate of another,
although there are other methods of comparing files for equality (such as using a checksum algorithm). A move to
"permanent daylight saving time" (staying on summer hours all year with no time shifts) is sometimes advocated, and
has in fact been implemented in some jurisdictions such as Argentina, Chile, Iceland, Singapore, Uzbekistan and Belarus.
Advocates cite the same advantages as normal DST without the problems associated with the twice yearly time shifts.
However, many remain unconvinced of the benefits, citing the same problems and the relatively late sunrises, particularly
in winter, that year-round DST entails. Russia switched to permanent DST from 2011 to 2014, but the move proved unpopular
because of the late sunrises in winter, so the country switched permanently back to "standard" or "winter" time in
2014. Xinjiang, China; Argentina; Chile; Iceland; Russia and other areas skew time zones westward, in effect observing
DST year-round without complications from clock shifts. For example, Saskatoon, Saskatchewan, is at 106°39′ W longitude,
slightly west of center of the idealized Mountain Time Zone (105° W), but the time in Saskatchewan is Central Standard
Time (90° W) year-round, so Saskatoon is always about 67 minutes ahead of mean solar time, thus effectively observing
daylight saving time year-round. Conversely, northeast India and a few other areas skew time zones eastward, in effect
observing negative DST. The United Kingdom and Ireland experimented with year-round DST from 1968 to 1971 but abandoned
it because of its unpopularity, particularly in northern regions. Western France, Spain, and other areas skew time
zones and shift clocks, in effect observing DST in winter with an extra hour in summer. Nome, Alaska, is at 165°24′
W longitude, which is just west of center of the idealized Samoa Time Zone (165° W), but Nome observes Alaska Time
(135° W) with DST, so it is slightly more than two hours ahead of the sun in winter and three in summer. Double daylight
saving time has been used on occasion; for example, it was used in some European countries during and shortly after
World War II when it was referred to as "Double Summer Time". See British Double Summer Time and Central European
Midsummer Time for details.
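The skewed-zone offsets quoted above (Saskatoon, Nome) follow from simple longitude arithmetic: the Earth rotates 360° in 24 hours, i.e. 1° every 4 minutes. A sketch reproducing the figures in the text (function name is illustrative):

```python
def solar_offset_minutes(longitude_w, zone_meridian_w):
    """Minutes by which the zone's clock runs ahead of local mean solar
    time: the Earth rotates 1 degree of longitude every 4 minutes."""
    return round((longitude_w - zone_meridian_w) * 4)

# Saskatoon (106 deg 39' W) keeps Central Standard Time (90 deg W meridian):
print(solar_offset_minutes(106 + 39 / 60, 90))   # 67 minutes ahead of the sun
# Nome (165 deg 24' W) keeps Alaska Time (135 deg W meridian):
print(solar_offset_minutes(165 + 24 / 60, 135))  # 122 minutes, slightly over two hours
```

Adding the one-hour DST shift to Nome's winter figure gives the roughly three-hour summer offset mentioned above.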
The Royal Institute of British Architects (RIBA) is a professional body for architects primarily in the United Kingdom, but
also internationally, founded for the advancement of architecture under its charter granted in 1837 and Supplemental
Charter granted in 1971. Originally named the Institute of British Architects in London, it was formed in 1834 by
several prominent architects, including Philip Hardwick, Thomas Allom, William Donthorne, Thomas Leverton Donaldson,
William Adams Nicholson, John Buonarotti Papworth, and Thomas de Grey, 2nd Earl de Grey. After the grant of the royal
charter it had become known as the Royal Institute of British Architects in London, eventually dropping the reference
to London in 1892. In 1934, it moved to its current headquarters on Portland Place, with the building being opened
by King George V and Queen Mary. It was granted its Royal Charter in 1837 under King William IV. Supplemental Charters
of 1887, 1909 and 1925 were replaced by a single Charter in 1971, and there have been minor amendments since then.
The original Charter of 1837 set out the purpose of the Royal Institute to be: '… the general advancement of Civil
Architecture, and for promoting and facilitating the acquirement of the knowledge of the various arts and sciences
connected therewith…' The operational framework is provided by the Byelaws, which are more frequently updated than
the Charter. Any revisions to the Charter or Byelaws require the Privy Council's approval. The design of the Institute's
Mycenaean lions medal and the motto 'Usui civium, decori urbium' has been attributed to Thomas Leverton Donaldson,
who had been honorary secretary until 1839. The RIBA Guide to its Archive and History (Angela Mace, 1986) records
that the first official version of this badge was used as a bookplate for the Institute's library and publications
from 1835 to 1891, when it was redesigned by J. H. Metcalfe. It was again redesigned in 1931 by Eric Gill and in 1960
by Joan Hassall. The description in the 1837 by-laws was: "gules, two lions rampant guardant or, supporting a column
marked with lines chevron, proper, all standing on a base of the same; a garter surrounding the whole with the inscription
Institute of British Architects, anno salutis MDCCCXXXIV; above a mural crown proper, and beneath the motto Usui
civium decori urbium ". In the nineteenth and twentieth centuries the RIBA and its members had a leading part in
the promotion of architectural education in the United Kingdom, including the establishment of the Architects' Registration
Council of the United Kingdom (ARCUK) and the Board of Architectural Education under the Architects (Registration)
Acts, 1931 to 1938. A member of the RIBA, Lionel Bailey Budden, then Associate Professor in the Liverpool University
School of Architecture, had contributed the article on Architectural Education published in the fourteenth edition
of the Encyclopædia Britannica (1929). His School, Liverpool, was one of the twenty schools named for the purpose
of constituting the statutory Board of Architectural Education when the 1931 Act was passed. Soon after the passing
of the 1931 Act, in the book published on the occasion of the Institute's centenary celebration in 1934, Harry Barnes,
FRIBA, Chairman of the Registration Committee, mentioned that ARCUK could not be a rival of any architectural association,
least of all the RIBA, given the way ARCUK was constituted. Barnes commented that the Act's purpose was not protecting
the architectural profession, and that the legitimate interests of the profession were best served by the (then)
architectural associations in which some 80 per cent of those practising architecture were to be found. The RIBA
Guide to its Archive and History (1986) has a section on the "Statutory registration of architects" with a bibliography
extending from a draft bill of 1887 to one of 1969. The Guide's section on "Education" records the setting up in
1904 of the RIBA Board of Architectural Education, and the system by which any school which applied for recognition,
whose syllabus was approved by the Board and whose examinations were conducted by an approved external examiner,
and whose standard of attainment was guaranteed by periodical inspections by a "Visiting Board" from the BAE, could
be placed on the list of "recognized schools" and its successful students could qualify for exemption from RIBA examinations.
The content of the acts, particularly section 1 (1) of the amending act of 1938, shows the importance which was then
attached to giving architects the responsibility of superintending or supervising the building works of local authorities
(for housing and other projects), rather than persons professionally qualified only as municipal or other engineers.
By the 1970s another issue had emerged affecting education for qualification and registration for practice as an
architect, due to the obligation imposed on the United Kingdom and other European governments to comply with European
Union Directives concerning mutual recognition of professional qualifications in favour of equal standards across
borders, in furtherance of the policy for a single market of the European Union. This led to proposals for reconstituting
ARCUK. Eventually, in the 1990s, before proceeding, the government issued a consultation paper "Reform of Architects
Registration" (1994). The change of name to "Architects Registration Board" was one of the proposals which was later
enacted in the Housing Grants, Construction and Regeneration Act 1996 and reenacted as the Architects Act 1997; another
was the abolition of the ARCUK Board of Architectural Education. RIBA Visiting Boards continue to assess courses
for exemption from the RIBA's examinations in architecture. Under arrangements made in 2011 the validation criteria
are jointly held by the RIBA and the Architects Registration Board, but unlike the ARB, the RIBA also validates courses
outside the UK. The RIBA is a member organisation, with 44,000 members. Chartered Members are entitled to call themselves
chartered architects and to append the post-nominals RIBA after their name; Student Members are not permitted to
do so. Fellowships of the institute are no longer granted; those who continue to hold the title
append FRIBA instead. RIBA is based at 66 Portland Place, London—a 1930s Grade II* listed building designed by architect
George Grey Wornum with sculptures by Edward Bainbridge Copnall and James Woodford. Parts of the London building
are open to the public, including the Library. It has a large architectural bookshop, a café, restaurant and lecture
theatres. Rooms are hired out for events. The Institute also maintains a dozen regional offices around the United
Kingdom; it opened its first regional office, for the East of England, at Cambridge in 1966. RIBA Enterprises is the
commercial arm of RIBA, with a registered office in Newcastle upon Tyne, a base at 15 Bonhill Street in London, and
an office in Newark. It employs over 250 staff, approximately 180 of whom are based in Newcastle. Its services include
RIBA Insight, RIBA Appointments, and RIBA Publishing. It publishes the RIBA Product Selector and RIBA Journal. Also
based in Newcastle is NBS, the National Building Specification, which has 130 staff and deals with building regulations
and the Construction Information Service. RIBA Bookshops, which operates online and at 66 Portland Place, is also
part of RIBA Enterprises. The British Architectural Library, sometimes referred to as the RIBA Library, was established
in 1834 upon the founding of the institute with donations from members. Now, with over four million items, it is
one of the three largest architectural libraries in the world and the largest in Europe. Some items from the collections
are on permanent display at the Victoria and Albert Museum (V&A) in the V&A + RIBA Architecture Gallery and included
in temporary exhibitions at the RIBA and across Europe and North America. The overcrowded
condition of the library was one of the reasons why the RIBA moved from 9 Conduit Street to larger premises at 66
Portland Place in 1934. The library remained open throughout World War Two and was able to shelter the archives of
Modernist architect Adolf Loos during the war. The library is based at two public sites: the Reading Room at the
RIBA's headquarters, 66 Portland Place, London; and the RIBA Architecture Study Rooms in the Henry Cole Wing of the
V&A. The Reading Room, designed by the building's architect George Grey Wornum and his wife Miriam, retains its original
1934 Art Deco interior with open bookshelves, original furniture and double-height central space. The study rooms,
opened in 2004, were designed by Wright & Wright. The library is funded entirely by the RIBA but it is open to the
public without charge. It operates a free education programme aimed at students, education groups and families, and
an information service for RIBA members and the public through the RIBA Information Centre. Since 2004, through the
V&A + RIBA Architecture Partnership, the RIBA and V&A have worked together to promote the understanding and enjoyment
of architecture. In 2004, the two institutions created the Architecture Gallery (Room 128) at the V&A showing artefacts
from the collections of both institutions; this was the first permanent gallery devoted to architecture in the UK.
The adjacent Architecture Exhibition Space (Room 128a) is used for temporary displays related to architecture. Both
spaces were designed by Gareth Hoskins Architects. At the same time the RIBA Library Drawing and Archives Collections
moved from 21 Portman Place to new facilities in the Henry Cole Wing at the V&A. Under the Partnership new study
rooms were opened where members of the public could view items from the RIBA and V&A architectural collections under
the supervision of curatorial staff. These and the nearby education room were designed by Wright & Wright Architects.
RIBA runs many awards including the Stirling Prize for the best new building of the year, the Royal Gold Medal (first
awarded in 1848), which honours a distinguished body of work, and the Stephen Lawrence Prize for projects with a
construction budget of less than £500,000. The RIBA also awards the President's Medals for student work, which are
regarded as the most prestigious awards in architectural education, and the RIBA President's Awards for Research.
The RIBA European Award was inaugurated in 2005 for work in the European Union, outside the UK. The RIBA National
Award and the RIBA International Award were established in 2007. Since 1966, the RIBA also judges regional awards
which are presented locally in the UK regions (East, East Midlands, London, North East, North West, Northern Ireland,
Scotland, South/South East, South West/Wessex, Wales, West Midlands and Yorkshire). Architectural design competitions
are used by organisations planning to build a new building or refurbish an existing one. They can be used
for buildings, engineering work, structures, landscape design projects or public realm artworks. A competition typically
asks architects and/or designers to submit a design proposal in response to a given brief. The winning design
will then be selected by an independent jury panel of design professionals and client representatives. The independence
of the jury is vital to the fair conduct of a competition. In addition to the Architects Registration Board, the
RIBA provides accreditation to architecture schools in the UK under a course validation procedure. It also provides
validation to international courses without input from the ARB. The RIBA has three parts to the education process:
Part I, generally a three-year first degree; a year out of at least one year's work experience in an architectural
practice, which precedes Part II, generally a two-year postgraduate diploma or master's. A further year out
must be taken before the RIBA Part III professional exams can be taken. Overall it takes a minimum of seven years
before an architecture student can seek chartered status. In 2007, RIBA called for minimum space standards in newly
built British houses after research was published suggesting that British houses were falling behind other European
countries. "The average new home sold to people today is significantly smaller than that built in the 1920s... We're
way behind the rest of Europe—even densely populated Holland has better proportioned houses than are being built
in the country. So let's see minimum space standards for all new homes," said RIBA president Jack Pringle.
The National Archives and Records Administration (NARA) is an independent agency of the United States government charged
with preserving and documenting government and historical records and with increasing public access to those documents,
which comprise the National Archives. NARA is officially responsible for maintaining and publishing the legally authentic
and authoritative copies of acts of Congress, presidential proclamations and executive orders, and federal regulations.
The NARA also transmits votes of the Electoral College to Congress. The Archivist of the United States is the chief
official overseeing the operation of the National Archives and Records Administration. The Archivist not only maintains
the official documentation of the passage of amendments to the U.S. Constitution by state legislatures, but has the
authority to declare when the constitutional threshold for passage has been reached, and therefore when an act has
become an amendment. The Office of the Federal Register publishes the Federal Register, Code of Federal Regulations,
and United States Statutes at Large, among others. It also administers the Electoral College. The National Historical
Publications and Records Commission (NHPRC)—the agency's grant-making arm—awards funds to state and local governments,
public and private archives, colleges and universities, and other nonprofit organizations to preserve and publish
historical records. Since 1964, the NHPRC has awarded some 4,500 grants. The Office of Government Information Services
(OGIS) is a Freedom of Information Act (FOIA) resource for the public and the government. Congress has charged NARA
with reviewing FOIA policies, procedures and compliance of Federal agencies and to recommend changes to FOIA. NARA's
mission also includes resolving FOIA disputes between Federal agencies and requesters. Originally, each branch and
agency of the U.S. government was responsible for maintaining its own documents, which often resulted in records
loss and destruction. Congress established the National Archives Establishment in 1934 to centralize federal record
keeping, with the Archivist of the United States as chief administrator; the first Archivist, R.D.W. Connor, began
serving that year. As a result of a recommendation of the first Hoover Commission, in 1949 the National Archives was
placed within the newly formed General Services Administration (GSA). The Archivist served as a subordinate official
to the GSA Administrator until the National Archives and Records Administration (NARA) became an independent agency
on April 1, 1985. In March 2006, it
was revealed by the Archivist of the United States in a public hearing that a memorandum of understanding between
NARA and various government agencies existed to "reclassify", i.e., withdraw from public access, certain documents
in the name of national security, and to do so in a manner such that researchers would not be likely to discover
the process (the U.S. reclassification program). An audit indicated that more than one third of the documents withdrawn since 1999
did not contain sensitive information. The program was originally scheduled to end in 2007. In 2010, Executive Order
13526 created the National Declassification Center to coordinate declassification practices across agencies, provide
secure document services to other agencies, and review records in NARA custody for declassification. NARA's holdings
are classed into "record groups" reflecting the governmental department or agency from which they originated. Records
include paper documents, microfilm, still pictures, motion pictures, and electronic media. Archival descriptions
of the permanent holdings of the federal government in the custody of NARA are stored in Archival Research Catalog
(ARC). The archival descriptions include information on traditional paper holdings, electronic records, and artifacts.
As of December 2012, the catalog consisted of about 10 billion logical data records describing 527,000 artifacts
and encompassing 81% of NARA's records. There are also 922,000 digital copies of already digitized materials. Most
records at NARA are in the public domain, as works of the federal government are excluded from copyright protection.
However, records from other sources may still be protected by copyright or donor agreements. Executive Order 13526
directs originating agencies to declassify documents if possible before shipment to NARA for long-term storage, but
NARA also stores some classified documents until they can be declassified. Its Information Security Oversight Office
monitors and sets policy for the U.S. government's security classification system. Many of NARA's most requested
records are frequently used for genealogy research. This includes census records from 1790 to 1930, ships' passenger
lists, and naturalization records. The National Archives Building, known informally as Archives I, located north
of the National Mall on Constitution Avenue in Washington, D.C., opened as its original headquarters in 1935. It
holds the original copies of the three main formative documents of the United States and its government: the Declaration
of Independence, the Constitution, and the Bill of Rights. It also hosts a copy of the 1297 Magna Carta confirmed
by Edward I. These are displayed to the public in the main chamber of the National Archives, which is called the
Rotunda for the Charters of Freedom. The National Archives Building also exhibits other important American historical
documents such as the Louisiana Purchase Treaty, the Emancipation Proclamation, and collections of photography and
other historically and culturally significant American artifacts. Once inside the Rotunda for the Charters of Freedom,
there are no lines to see the individual documents and visitors are allowed to walk from document to document as
they wish. For over 30 years the National Archives forbade flash photography, but the advent of cameras with
automatic flashes made the rule increasingly difficult to enforce. As a result, all filming, photographing,
and videotaping by the public in the exhibition areas has been prohibited since February 25, 2010. Because of space
constraints, NARA opened a second facility, known informally as Archives II, in 1994 near the University of Maryland,
College Park campus (8601 Adelphi Road, College Park, MD, 20740-6001). Largely because of this proximity, NARA and
the University of Maryland engage in cooperative initiatives. The College Park campus includes an archaeological
site that was listed on the National Register of Historic Places in 1996. The Washington National Records Center
(WNRC), located in Suitland, Maryland, is a large warehouse-type facility that stores federal records still
under the control of the creating agency. Federal government agencies pay a yearly fee for storage at the facility.
In accordance with federal records schedules, documents at WNRC are transferred to the legal custody of the National
Archives after a certain point (this usually involves a relocation of the records to College Park). Temporary records
at WNRC are either retained for a fee or destroyed after their retention periods have elapsed. WNRC also offers research services
and maintains a small research room. The National Archives Building in downtown Washington holds record collections
such as all existing federal census records, ships' passenger lists, military unit records from the American Revolution
to the Philippine–American War, records of the Confederate government, the Freedmen's Bureau records, and pension
and land records. There are facilities across the country with research rooms, archival holdings, and microfilms
of documents of federal agencies and courts pertinent to each region. In addition, Federal Records Centers exist
in each region that house materials owned by Federal agencies. Federal Records Centers are not open for public research.
For example, the FRC in Lenexa, Kansas, holds items related to the treatment of John F. Kennedy after his fatal shooting
in 1963. NARA also maintains the Presidential Library system, a nationwide network of libraries for preserving and
making available the documents of U.S. presidents since Herbert Hoover. The Presidential Libraries include: Libraries
and museums have been established for other presidents, but they are not part of the NARA presidential library system,
and are operated by private foundations, historical societies, or state governments, including the Abraham Lincoln,
Rutherford B. Hayes, William McKinley, Woodrow Wilson and Calvin Coolidge libraries. For example, the Abraham Lincoln
Presidential Library and Museum is owned and operated by the state of Illinois. In an effort to make its holdings
more widely available and more easily accessible, the National Archives began entering into public–private partnerships
in 2006. A joint venture with Google will digitize and offer NARA video online. On January 10, 2007, the National Archives and Fold3.com (formerly
Footnote) launched a pilot project to digitize historic documents from the National Archives holdings. Allen Weinstein
explained that this partnership would "allow much greater access to approximately 4.5 million pages of important
documents that are currently available only in their original format or on microfilm" and "would also enhance NARA's
efforts to preserve its original records." In July 2007, the National Archives announced it would make its collection
of Universal Newsreels from 1929 to 1967 available for purchase through CreateSpace, an Amazon.com subsidiary. During
the announcement, Weinstein noted that the agreement would "... reap major benefits for the public-at-large and for
the National Archives," adding, "While the public can come to our College Park, MD research room to view films and
even copy them at no charge, this new program will make our holdings much more accessible to millions of people who
cannot travel to the Washington, DC area." The agreement also calls for the CreateSpace partnership to provide the National
Archives with digital reference and preservation copies of the films as part of NARA's preservation program. In May
2008, the National Archives announced a five-year agreement to digitize selected records including the complete U.S.
Federal Census Collection, 1790–1930, passenger lists from 1820–1960 and WWI and WWII draft registration cards. The
partnership agreement allows for exclusive use of the digitized records by Ancestry.com for a five-year embargo period,
after which the digital records will be turned over to the National Archives. On June 18, 2009, the National Archives
announced the launch of a YouTube channel "to showcase popular archived films, inform the public about upcoming
events around the country, and bring National Archives exhibits to the people." Also in 2009, the National Archives
launched a Flickr photostream to share portions of its photographic holdings with the general public. A new teaching
with documents website premiered in 2010 and was developed by the education team. The website features 3,000 documents,
images, and recordings from the holdings of the Archives. The site also features lesson plans and tools for creating
new classroom activities and lessons. In 2011 the National Archives initiated a Wikiproject on the English Wikipedia
to expand collaboration in making its holdings widely available through Wikimedia.
Tristan da Cunha /ˈtrɪstən də ˈkuːnjə/, colloquially Tristan, is both a remote group of volcanic islands in the south Atlantic
Ocean and the main island of that group. It is the most remote inhabited archipelago in the world, lying 2,000 kilometres
(1,200 mi) from the nearest inhabited land, Saint Helena, 2,400 kilometres (1,500 mi) from the nearest continental
land, South Africa, and 3,360 kilometres (2,090 mi) from South America. The territory consists of the main island,
also named Tristan da Cunha, which has a north–south length of 11.27 kilometres (7.00 mi) and has an area of 98 square
kilometres (38 sq mi), along with the smaller, uninhabited Nightingale Islands and the wildlife reserves of Inaccessible
and Gough Islands. Tristan da Cunha is part of the British overseas territory of Saint Helena, Ascension and Tristan
da Cunha. This includes Saint Helena and equatorial Ascension Island some 3,730 kilometres (2,318 mi) to the north
of Tristan. The island has a population of 267 as of January 2016. The islands were first sighted in 1506 by Portuguese
explorer Tristão da Cunha; rough seas prevented a landing. He named the main island after himself, Ilha de Tristão
da Cunha, which was anglicised from its earliest mention on British Admiralty charts to Tristan da Cunha Island.
Some sources state that the Portuguese made the first landing in 1520, when the Lás Rafael captained by Ruy Vaz Pereira
called at Tristan for water. The first undisputed landing was made in 1643 by the crew of the Heemstede, captained
by Claes Gerritsz Bierenbroodspot. The first permanent settler was Jonathan Lambert, from Salem, Massachusetts, United
States, who arrived at the islands in December 1810 with two other men. Lambert publicly declared the islands his
property and named them the Islands of Refreshment. After being joined by a fourth man, Andrew Millet, three of the four
died in 1812; the sole survivor among the original settlers, Thomas Currie (or Tommaso Corri),
remained as a farmer on the island. In 1816, the United Kingdom annexed the islands, ruling them from the Cape Colony
in South Africa. This is reported to have primarily been a measure to ensure that the French would be unable to use
the islands as a base for a rescue operation to free Napoleon Bonaparte from his prison on Saint Helena. The occupation
also prevented the United States from using Tristan da Cunha as a cruiser base, as it had during the War of 1812.
In 1867, Prince Alfred, Duke of Edinburgh and second son of Queen Victoria, visited the islands. The main settlement,
Edinburgh of the Seven Seas, was named in honour of his visit. Lewis Carroll's youngest brother, the Reverend Edwin
Heron Dodgson, served as an Anglican missionary and schoolteacher in Tristan da Cunha in the 1880s. The islands were
occupied by a garrison of British Marines and a civilian population was gradually built up. Whalers also set up on
the islands as a base for operations in the Southern Atlantic. However, the opening of the Suez Canal in 1869, together
with the gradual move from sailing ships to coal-fired steam ships, increased the isolation of the islands, as they
were no longer needed as a stopping port or for shelter on journeys from Europe to East Asia. On 12 January 1938, the
islands were declared a dependency of Saint Helena by Letters Patent. Until roughly this period, passing ships had
stopped only irregularly at the island, for a few hours at a time. During World War II, the islands were used as a top
secret Royal Navy weather and radio station codenamed HMS Atlantic Isle, to monitor Nazi U-boats (which were required
to maintain radio contact) and shipping movements in the South Atlantic Ocean. The first Administrator, Surgeon Lieutenant
Commander E.J.S. Woolley, was appointed by the British government during this time. In 1958 as part of an experiment,
Operation Argus, the United States Navy detonated an atomic bomb 160 kilometres (100 mi) high in the upper atmosphere
about 175 kilometres (109 mi) southeast of the main island. The 1961 eruption of Queen Mary's Peak forced the evacuation
of the entire population via Cape Town to England. The following year a Royal Society expedition went to the islands
to assess the damage, and reported that the settlement of Edinburgh of the Seven Seas had been only marginally affected.
Most families returned in 1963. On 23 May 2001, the islands experienced an extratropical cyclone that generated winds
up to 190 kilometres per hour (120 mph). A number of structures were severely damaged and a large number of cattle
were killed, prompting emergency aid from the British government. On 4 December 2007, an outbreak of an acute
virus-induced flu was reported. This outbreak was compounded by Tristan's lack of suitable and sufficient medical
supplies. On 13 February 2008, fire destroyed the fishing factory and the four generators that supplied power to
the island. On 14 March 2008, new generators were installed and uninterrupted power was restored. This fire was devastating
to the island because fishing is a mainstay of the economy. While a new factory was being planned and built, M/V
Kelso came to the island and acted as a factory ship, with island fishermen based on board for stints normally of
one week. The new facility was ready in July 2009, for the start of the 2009–10 fishing season. On 16 March 2011,
the freighter MS Oliva ran aground on Nightingale Island, spilling tons of heavy fuel oil into the ocean, leaving
an oil slick threatening the island's population of rockhopper penguins. Nightingale Island has no fresh water, so
the penguins were transported to Tristan da Cunha for cleaning. In November 2011, the sailing yacht Puma's Mar Mostro,
a participant in the Volvo Ocean Race, arrived at the island after her mast broke during the first leg, from Alicante
to Cape Town. Media coverage of the event brought the island, its inhabitants and their way of life to worldwide attention. From
December 1937 to March 1938 a Norwegian party made the first ever scientific expedition to Tristan da Cunha. During
their stay, the expeditionary party recorded observations of the island's topography, its people and how they lived
and worked, and the flora and fauna that inhabited the island. The main island is generally
mountainous. The only flat area is on the north-west coast, which is the location of the only settlement, Edinburgh
of the Seven Seas. The highest point is a volcano called Queen Mary's Peak 2,062 metres (6,765.1 ft), which is covered
by snow in winter. The other islands of the group are uninhabited, except for a weather station with a staff of six
on Gough Island, which has been operated by South Africa since 1956 (since 1963 at its present location at Transvaal
Bay on the south-east coast). The archipelago has a wet oceanic climate with pleasant temperatures but consistent
moderate to heavy rainfall and very limited sunshine, due to the persistent westerly winds. The number of rainy days
is comparable to the Aleutian Islands at a much higher latitude in the northern hemisphere, while sunshine hours
are comparable to Juneau, Alaska, 20° farther from the equator. Frost is unknown below elevations of 500 metres (1,600
ft) and summer temperatures are similarly mild, never reaching 25 °C (77 °F). Sandy Point on the east coast is reputed
to be the warmest and driest place on the island, being in the lee of the prevailing winds. Tristan is primarily
known for its wildlife. The island has been identified as an Important Bird Area by BirdLife International because
there are 13 known species of breeding seabirds on the island and two species of resident land birds. The seabirds
include northern rockhopper penguins, Atlantic yellow-nosed albatrosses, sooty albatrosses, Atlantic petrels, great-winged
petrels, soft-plumaged petrels, broad-billed prions, grey petrels, great shearwaters, sooty shearwaters, Tristan
skuas, Antarctic terns and brown noddies. Tristan and Gough Islands are the only known breeding sites in the world
for the Atlantic petrel (Pterodroma incerta; IUCN status EN). Inaccessible Island is also the only known breeding
ground of the spectacled petrel (Procellaria conspicillata; IUCN status VU). The Tristan albatross (IUCN status
CR) is known to breed only on Gough and Inaccessible Islands: all nest on Gough except for one or two pairs who nest
on Inaccessible Island. The island's unique social and economic organisation has evolved over the years, but is based
on the principles set out by William Glass in 1817 when he established a settlement based on equality. All Tristan
families are farmers, owning their own stock and/or fishing. All land is communally owned. All households have plots
of land at The Patches on which they grow potatoes. Livestock numbers are strictly controlled to conserve pasture
and to prevent better-off families from accumulating wealth. Unless the community votes to change its law, no outsiders
are allowed to buy land or settle on Tristan; in theory, the whole island would first have to be put up for sale. All
people – including children and pensioners – are involved in farming, while adults also hold salaried jobs, most working
for the Government and a small number in domestic service. Many of the men are involved in the
fishing industry, going to sea in good weather. The nominal fishing season lasts 90 days; however, during the 2013
fishing season – 1 July through 30 September – there were only 10 days suitable for fishing. The 1961 volcanic eruption
destroyed the Tristan da Cunha canned crawfish factory, which was rebuilt a short time later. The crawfish catchers
and processors work for the South African company Ovenstone, which has an exclusive contract to sell crawfish to
the United States and Japan. Even though Tristan da Cunha is a UK overseas territory, it is not permitted direct
access to European Union markets. Economic conditions in recent years have meant that the islanders
have had to draw from their reserves. The islands' financial problems may cause delays in updating communication
equipment and improving education on the island. The fire of 13 February 2008 (see History) resulted in major temporary
economic disruption. There is an annual break from government and factory work which begins before Christmas and
lasts for 3 weeks. Break-Up Day is usually marked with parties at various work "departments". Break-Up includes the
Island Store, which means that families must be organised to have a full larder of provisions during the period.
In 2013, the Island Store closed a week earlier than usual to conduct a comprehensive inventory, and all purchases
had to be made by Friday 13 December as the shop did not open again until a month later. Healthcare is funded by
the government and provided by one resident doctor from South Africa and five nurses. Surgery and facilities for complex
childbirth are therefore limited, and emergencies can necessitate communicating with passing fishing vessels so the
injured person can be ferried to Cape Town. As of late 2007, IBM and Beacon Equity Partners, co-operating with Medweb,
the University of Pittsburgh Medical Center and the island's government on "Project Tristan", have supplied the island's
doctor with access to long-distance telemedical help, making it possible to send EKG and X-ray images to doctors
in other countries for instant consultation. This system has been limited owing to the poor reliability of Internet
connections and an absence of qualified technicians on the island to service fibre optic links between the hospital
and Internet centre at the administration buildings. The Tristan Song Project was a collaboration between St Mary's
School and amateur composers in England, led by music teacher Tony Triggs. It began in 2010 and involved St Mary's
pupils writing poems and Tony Triggs providing musical settings by himself and his pupils. A desktop publication
entitled Rockhopper Penguins and Other Songs (2010) embraced most of the songs completed that year and funded a consignment
of guitars to the School. In February 2013 the Tristan Post Office issued a set of four Song Project stamps featuring
island musical instruments and lyrics from Song Project songs about Tristan's volcano and wildlife. In 2014 the Project
broadened its scope and continues as the International Song Project. The islands have a population of 267 (as of January 2016). The main
settlement is Edinburgh of the Seven Seas (known locally as "The Settlement"). The only religion is Christianity,
with denominations of Anglican and Roman Catholic. The current population is thought to have descended from 15 ancestors,
eight males and seven females, who arrived on the island at various times between 1816 and 1908. The male founders
originated from Scotland, England, The Netherlands, the United States and Italy, belonging to three Y-haplogroups: I
(M170), R-SRY10831.2 and R (M207) (xSRY10831.2) and share just eight surnames: Glass, Green, Hagan, Lavarello, Patterson,
Repetto, Rogers, and Swain.[n 1] There are 80 families on the island. Tristan da Cunha's isolation has led to an
unusual, patois-like dialect of English described by the writer Simon Winchester as "a sonorous amalgam of Home Counties
lockjaw and nineteenth century idiom, Afrikaans slang and Italian." Bill Bryson documents some examples of the island's
dialect in his book, The Mother Tongue. Executive authority is vested in the Queen, who is represented in the territory
by the Governor of Saint Helena. As the Governor resides permanently in Saint Helena, an Administrator is appointed
to represent the Governor in the islands. The Administrator is a career civil servant in the Foreign Office and is
selected by London. Since 1998, each Administrator has served a single, three-year term (which begins in September,
upon arrival of the supply ship from Cape Town). The Administrator acts as the local head of government, and takes
advice from the Tristan da Cunha Island Council. Alex Mitham was appointed Tristan da Cunha’s 22nd Administrator
and arrived, with his wife Hasene, to take over from Sean Burns in September 2013. The Island Council is made up
of eight elected and three appointed members, who serve a three-year term which begins in February (or March). The remote
location of the islands makes transport to the outside world difficult. Lacking an airport, the islands can be reached
only by sea. Fishing boats from South Africa service the islands eight or nine times a year. The RMS Saint Helena
used to connect the main island to St Helena and South Africa once each year during its January voyage, but has done
so only twice in the last few years, in 2006 and 2011. The wider territory has access to air travel: Ascension Island
is served by RAF Ascension Island, and a new airport on St Helena, financed by the United Kingdom government, is under
construction and due for completion in 2016. There is, however, no direct, regular service to Tristan da Cunha itself
from either location. The harbour at Edinburgh of the Seven Seas is called Calshot Harbour, named after the place
in Hampshire where the islanders temporarily stayed during the volcanic eruption. Although Tristan da Cunha shares
the +290 code with St Helena, residents have access to the Foreign and Commonwealth Office Telecommunications Network,
provided by Global Crossing. This uses a London 020 numbering range, meaning that numbers are accessed via the UK
telephone numbering plan. From 1998 to 2006, internet access was available in Tristan da Cunha, but its high cost made
it almost unaffordable for the local population, who used it primarily to send email. The connection was also extremely
unreliable, connecting through a 64 kbit/s satellite phone connection provided by Inmarsat. From 2006, a very-small-aperture
terminal provides 3072 kbit/s of publicly accessible bandwidth via an internet cafe.
The University of Kansas (KU) is a public research university and the largest in the U.S. state of Kansas. KU branch campuses
are located in the towns of Lawrence, Wichita, Overland Park, Salina, and Kansas City, Kansas, with the main campus
located in Lawrence on Mount Oread, the highest point in Lawrence. Founded March 21, 1865, the university opened in
1866 under a charter granted by the Kansas State Legislature in 1864, following enabling legislation passed in 1863
under the Kansas State Constitution. That constitution had been adopted two years after the former Kansas Territory
was admitted to the Union as the 34th state in 1861, in the wake of the violent 1850s conflict known as "Bleeding
Kansas". The university's Medical Center and University Hospital are located in Kansas City, Kansas. The Edwards
Campus is in Overland Park, Kansas, in the Kansas City metropolitan area. There are also educational and research
sites in Parsons and Topeka, and branches of the University of Kansas School of Medicine in Wichita and Salina. The
university is one of the 62 members of the Association of American Universities. Enrollment at the Lawrence and Edwards
campuses was 23,597 students in fall 2014; an additional 3,371 students were enrolled at the KU Medical Center for
a total enrollment of 26,968 students across the three campuses. The university overall employed 2,663 faculty members
in fall 2012. On February 20, 1863, Kansas Governor Thomas Carney signed into law a bill creating the state university
in Lawrence. The law was conditioned upon a gift from Lawrence of a $15,000 endowment fund and a site for the university,
in or near the town, of not less than forty acres (16 ha) of land. If Lawrence failed to meet these conditions, Emporia
instead of Lawrence would get the university. The site selected for the university was a hill known as Mount Oread,
which was owned by former Kansas Governor Charles L. Robinson. Robinson and his wife Sara deeded the 40-acre (16
ha) site to the State of Kansas in exchange for land elsewhere. The philanthropist Amos Adams Lawrence donated $10,000
of the necessary endowment fund, and the citizens of Lawrence raised the remaining cash by issuing notes backed by
Governor Carney. On November 2, 1863, Governor Carney announced that Lawrence had met the conditions to get the state
university, and the following year the university was officially organized. The school's Board of Regents held its
first meeting in March 1865, the event from which KU dates its founding. Work on the first college building
began later that year. The university opened for classes on September 12, 1866, and the first class graduated in
1873. During World War II, Kansas was one of 131 colleges and universities nationally that took part in the V-12
Navy College Training Program which offered students a path to a Navy commission. KU is home to the Robert J. Dole
Institute of Politics, the Beach Center on Disability, Lied Center of Kansas and radio stations KJHK, 90.7 FM, and
KANU, 91.5 FM. The university is host to several museums including the University of Kansas Natural History Museum
and the Spencer Museum of Art. The libraries of the University include the Watson Library, Spencer Research Library,
and Anschutz Library, which commemorates the businessman Philip Anschutz, an alumnus of the University. The University
of Kansas is a large, state-sponsored university, with five campuses. KU features the College of Liberal Arts & Sciences,
which includes the School of the Arts and the School of Public Affairs & Administration; and the schools of Architecture,
Design & Planning; Business; Education; Engineering; Health Professions; Journalism & Mass Communications; Law; Medicine;
Music; Nursing; Pharmacy; and Social Welfare. The university offers more than 345 degree programs. The city management
and urban policy program was ranked first in the nation, and the special education program second, by U.S. News &
World Report's 2016 rankings. USN&WR also ranked several programs in the top 25 among U.S. universities. The University
of Kansas School of Architecture, Design, and Planning (SADP), with its main building being Marvin Hall, traces its
architectural roots to the creation of the architectural engineering degree program in KU's School of Engineering
in 1912. The Bachelor of Architecture degree was added in 1920. In 1969, the School of Architecture and Urban Design
(SAUD) was formed with three programs: architecture, architectural engineering, and urban planning. In 2001 architectural
engineering merged with civil and environmental engineering. The design programs from the discontinued School of
Fine Arts were merged into the school in 2009 forming the current School of Architecture, Design, and Planning. According
to the journal DesignIntelligence, which annually publishes "America's Best Architecture and Design Schools," the
School of Architecture and Urban Design at the University of Kansas was named the best in the Midwest and ranked
11th among all undergraduate architecture programs in the U.S. in 2012. The University of Kansas School of Business
is a public business school located on the main campus of the University of Kansas in Lawrence, Kansas. The KU School
of Business was founded in 1924 and currently has more than 80 faculty members and approximately 1500 students. Named
one of the best business schools in the Midwest by Princeton Review, the KU School of Business has been continually
accredited by the Association to Advance Collegiate Schools of Business (AACSB) for both its undergraduate and graduate
programs in business and accounting. The University of Kansas School of Law was the top law school in the state of
Kansas, and 68th nationally, according to the 2014 U.S. News & World Report "Best Graduate Schools" edition. Classes
are held in Green Hall at W 15th St and Burdick Dr, which is named after former dean James Green. The KU School of
Engineering is an ABET accredited, public engineering school located on the main campus. The School of Engineering
was officially founded in 1891, although engineering degrees were awarded as early as 1873. In the U.S. News & World
Report's "America’s Best Colleges" 2016 issue, KU’s School of Engineering was ranked tied for 90th among national
universities. Notable alumni include: Alan Mulally (BS/MS), former President and CEO of Ford Motor Company, Lou Montulli,
co-founder of Netscape and author of the Lynx web browser, Brian McClendon (BSEE 1986), VP of Engineering at Google,
Charles E. Spahr (1934), former CEO of Standard Oil of Ohio. The William Allen White School of Journalism and Mass
Communications is recognized for its ability to prepare students to work in a variety of media when they graduate.
The school offers two tracks of study: News and Information and Strategic Communication. This professional school
teaches its students reporting for print, online and broadcast, strategic campaigning for PR and advertising, photojournalism
and video reporting and editing. The J-School's students maintain various publications on campus, including The University
Daily Kansan, Jayplay magazine, KUJH TV and KJHK radio. In 2008, the Fiske Guide to Colleges praised the KU J-School
for its strength. In 2010, the School of Journalism and Mass Communications finished second at the prestigious Hearst
Foundation national writing competition. The University of Kansas Medical Center features three schools: the School
of Medicine, School of Nursing, and School of Health Professions. Furthermore, each of the three schools has its
own programs of graduate study. As of the Fall 2013 semester, there were 3,349 students enrolled at KU Med. The Medical
Center also offers four-year instruction at the Wichita campus, and features a medical school campus in Salina, Kansas
that is devoted to rural health care. KU's Edwards Campus is in Overland Park, Kansas. Established in 1993, its goal
is to provide adults with the opportunity to complete college degrees. About 2,100 students attend the Edwards Campus,
with an average age of 32. Programs available at the Edwards Campus include developmental psychology, public administration,
social work, systems analysis, information technology, engineering management and design. Tuition at KU is 13 percent
below the national average, according to the College Board, and the University is considered a best buy in the region.
Beginning in the 2007–2008 academic year, first-time freshmen at KU pay a fixed tuition rate for 48 months
according to the Four-Year Tuition Compact passed by the Kansas Board of Regents. For the 2014–15 academic year,
tuition was $318 per credit hour for in-state freshmen and $828 for out-of-state freshmen. For transfer students,
who do not take part in the compact, 2014–15 per-credit-hour tuition was $295 for in-state undergraduates and $785
for out-of-state undergraduates, subject to annual increases. Students enrolled in 6 or more credit hours also paid
an annual required campus fee of $888. The schools of architecture, music, arts, business, education, engineering,
journalism, law, pharmacy, and social welfare charge additional fees. KU's School of Business launched interdisciplinary
management science graduate studies in operations research during Fall Semester 1965. The program provided the foundation
for decision science applications supporting NASA Project Apollo Command Capsule Recovery Operations. KU's academic
computing department was an early participant in the development of the Internet and developed the early text-based
Lynx web browser. Lynx began as a hypertext browser for a campus-wide information system and was later extended to
support Tim Berners-Lee's HTTP and HTML. The school's sports teams, wearing crimson and royal blue, are called the Kansas Jayhawks. They
participate in the NCAA's Division I and in the Big 12 Conference. KU has won thirteen National Championships: five
in men's basketball (two Helms Foundation championships and three NCAA championships), three in men's indoor track
and field, three in men's outdoor track and field, one in men's cross country and one in women's outdoor track and
field. The home course for KU Cross Country is Rim Rock Farm. Their most recent championship came on June 8, 2013
when the KU women's track and field team won the NCAA outdoor championship in Eugene, Oregon, becoming the first University of
Kansas women's team to win a national title. KU football dates from 1890, and has played in the Orange Bowl three
times: 1948, 1968, and 2008. They are currently coached by David Beaty, who was hired in 2014. In 2008, under the
leadership of Mark Mangino, the #7 Jayhawks emerged victorious in their first BCS bowl game, the FedEx Orange Bowl,
with a 24–21 victory over the #3 Virginia Tech Hokies. This capstone victory marked the end of the most successful
season in school history, in which the Jayhawks went 12–1 (.923). The team plays at Memorial Stadium, which recently
underwent a $31 million renovation to add the Anderson Family Football Complex, adding a football practice facility
adjacent to the stadium complete with indoor partial practice field, weight room, and new locker room. The KU men's
basketball team has fielded a team every year since 1898. The Jayhawks are a perennial national contender currently
coached by Bill Self. The team has won five national titles, including three NCAA tournament championships in 1952,
1988, and 2008. The basketball program is currently the second-winningest program in college basketball history, with
an overall record of 2,070–806 through the 2011–12 season. The team plays at Allen Fieldhouse. Perhaps its best recognized
player was Wilt Chamberlain, who played in the 1950s. Kansas has counted among its coaches Dr. James Naismith (the
inventor of basketball and only coach in Kansas history to have a losing record), Basketball Hall of Fame inductee
Phog Allen ("the Father of basketball coaching"), Basketball Hall of Fame inductee Roy Williams of the University
of North Carolina at Chapel Hill, and Basketball Hall of Fame inductee and former NBA Champion Detroit Pistons coach
Larry Brown. In addition, legendary University of Kentucky coach and Basketball Hall of Fame inductee Adolph Rupp
played for KU's 1922 and 1923 Helms National Championship teams, and NCAA Hall of Fame inductee and University of
North Carolina Coach Dean Smith played for KU's 1952 NCAA Championship team. Both Rupp and Smith played under Phog
Allen. Allen also coached Hall of Fame coaches Dutch Lonborg and Ralph Miller. Allen founded the National Association
of Basketball Coaches (NABC), which started what is now the NCAA Tournament. The Tournament began in 1939 under the
NABC and the next year was handed off to the newly formed NCAA. Sheahon Zenger was introduced as KU's new athletic
director in January 2011. Under former athletic director Lew Perkins, the department's budget increased from $27.2
million in 2003 (10th in the conference) to over $50 million currently, thanks in large part to money raised from
a new priority seating policy at Allen Fieldhouse, a new $26.67 million eight-year contract with Adidas replacing
an existing contract with Nike, and a new $40.2 million seven-year contract with ESPN Regional Television. The additional
funds brought improvements to the university. The University of Kansas has had more teams (70) compete
in the National Debate Tournament than any other university. Kansas has won the tournament 5 times (1954, 1970, 1976,
1983, and 2009) and had 12 teams make it to the final four. Kansas trails only Northwestern (13), Dartmouth (6),
and Harvard (6) for most tournaments won. Kansas also won the 1981–82 Copeland Award. Songs commonly played and sung
at events such as commencement, convocation and athletic games include "I'm a Jayhawk", "Fighting Jayhawk", "Kansas
Song", "Sunflower Song", "Crimson and the Blue", "Red and Blue", the "Rock Chalk, Jayhawk" chant, "Home on the Range"
and "Stand Up and Cheer." The school newspaper of the University of Kansas is the University Daily Kansan, which in
2007 placed first in the Intercollegiate Writing Competition of the William Randolph Hearst Foundation, often called
"The Pulitzers of College Journalism". In Winter 2008, a group
of students created KUpedia, a wiki about all things KU. They have received student funding for operations in 2008–09.
The KU Department of English publishes the Coal City Review, an annual literary journal of prose, poetry, reviews
and illustrations. The Review typically features the work of many writers, but periodically spotlights one author,
as in the case of 2006 Nelson Poetry Book Award-winner Voyeur Poems by Matthew Porubsky. The University Daily Kansan
operates outside of the university's William Allen White School of Journalism and reaches an audience of at least
30,000 daily readers through its print and online publications. The university houses the following public broadcasting
stations: KJHK, a student-run campus radio station, KUJH-LP, an independent station that primarily broadcasts public
affairs programs, and KANU, the NPR-affiliated radio station. Kansas Public Radio station KANU was one of the first
public radio stations in the nation. KJHK, the campus radio station, has roots dating back to 1952 and is run entirely by students.
The first union was built on campus in 1926 as a campus community center. The unions are still the "living rooms"
of campus today and include three locations – the Kansas Union and Burge Union at the Lawrence Campus and Jayhawk
Central at the Edwards Campus. The KU Memorial Unions Corporation manages the KU Bookstore (with seven locations).
The KU Bookstore is the official bookstore of KU. The Corporation also includes KU Dining Services, with more than
20 campus locations, including The Market (inside the Kansas Union) and The Underground (located in Wescoe Hall).
The KU Bookstore and KU Dining Services are not-for-profit, with proceeds going back to support student programs,
such as Student Union Activities. KU Endowment was established in 1891 as America’s first foundation for a public
university. Its mission is to partner with donors in providing philanthropic support to build a greater University
of Kansas. The Community Tool Box is a public service of the University maintained by the Work Group for Community
Health and Development. It is a free, online resource that contains more than 7,000 pages of practical information
for promoting community health and development, and is a global resource for both professionals and grassroots groups
engaged in the work of community health and development.
Nanjing (Chinese: 南京, "Southern Capital") is a city situated in the heartland of the lower Yangtze River region in China, which has long been a major centre of culture, education, research, politics, economy, transport networks and tourism. It is the capital of Jiangsu province of the People's Republic of China and the second largest city in East China, with a total population of 8,216,100, and is legally the capital of the Republic of China, which lost the mainland during the civil war. The city, whose name means "Southern Capital", has a prominent place in Chinese history
and culture, having served as the capitals of various Chinese dynasties, kingdoms and republican governments dating
from the 3rd century AD to 1949. Prior to the advent of pinyin romanization, Nanjing's city name was spelled as Nanking
or Nankin. Nanjing has a number of other names, and some historical names are now used as names of districts of the city. Among them is Jiangning (江寧), whose first character Jiang (江, "river") is the first part of the name Jiangsu and whose second character Ning (寧, simplified form 宁, "peace") is the short name of Nanjing. When the city served as a national capital, for instance under the ROC, Jing (京) was adopted as its abbreviation. Although the city, located in the southern part of China, became a Chinese national capital as early as the Jin dynasty, the name Nanjing was not designated until the Ming dynasty, about a thousand years later. Nanjing is also known as Jinling (金陵, literally "Gold Mountain"), an old name in use since the Warring States Period of the Zhou dynasty.
Located in the Yangtze River Delta and at the center of East China, Nanjing is home to one of the world's largest inland
ports. Nanjing is also one of the fifteen sub-provincial cities in the People's Republic of China's administrative
structure, enjoying jurisdictional and economic autonomy only slightly less than that of a province. Nanjing has
been ranked seventh in the evaluation of "Cities with Strongest Comprehensive Strength" issued by the National Statistics
Bureau, and second in the evaluation of cities with most sustainable development potential in the Yangtze River Delta.
It has also been awarded the title of 2008 Habitat Scroll of Honour of China, Special UN Habitat Scroll of Honour
Award and National Civilized City. Nanjing boasts many high-quality universities and research institutes; its number of universities listed among the 100 National Key Universities, including Nanjing University, ranks third nationally. The ratio of college students to total population ranks first among large cities nationwide. Nanjing is one of the three Chinese
top research centres according to Nature Index. Nanjing, one of the nation's most important cities for over a thousand
years, is recognized as one of the Four Great Ancient Capitals of China. It was, in aggregate, the world's largest city for hundreds of years, enjoying peace and prosperity but also enduring wars and disasters. Nanjing served as the capital
of Eastern Wu, one of the three major states in the Three Kingdoms period (211-280); the Eastern Jin and each of
the Southern Dynasties (Liu Song, Southern Qi, Liang and Chen), which successively ruled southern China from 317-589;
the Southern Tang, one of the Ten Kingdoms (937-76); the Ming dynasty when, for the first time, all of China was
ruled from the city (1368-1421); and the Republic of China (1927–37, 1945–49) prior to its flight to Taiwan during
the Chinese Civil War. The city also served as the seat of the rebel Taiping Heavenly Kingdom (1851–64) and the Japanese
puppet regime of Wang Jingwei (1940–45) during the Second Sino-Japanese War, and suffered appalling atrocities in
both conflicts, including the Nanjing Massacre. It has served as the capital city of Jiangsu province since the People's Republic of China was established, and is still the nominal capital of the Republic of China, accommodating many of its important
heritage sites, including the Presidential Palace and Sun Yat-sen Mausoleum. Nanjing is famous for human historical
landscapes, mountains and waters such as Fuzimiao, Ming Palace, Chaotian Palace, Porcelain Tower, Drum Tower, Stone
City, City Wall, Qinhuai River, Xuanwu Lake and Purple Mountain. Key cultural facilities include Nanjing Library,
Nanjing Museum and Art Museum. Archaeological discovery shows that "Nanjing Man" lived in the area more than 500,000 years ago. The zun, a kind of wine vessel, existed in the Beiyinyangying culture of Nanjing about 5,000 years ago. In the late period of the Shang dynasty, Taibo of Zhou came to Jiangnan and established the Wu state; according to some historians, based on discoveries in the Taowu and Hushu cultures, his first stop was in the Nanjing area. According to
legend, Fuchai, King of the State of Wu, founded a fort named Yecheng (冶城) in today's Nanjing area in 495
BC. Later in 473 BC, the State of Yue conquered Wu and constructed the fort of Yuecheng (越城) on the outskirts of
the present-day Zhonghua Gate. In 333 BC, after eliminating the State of Yue, the State of Chu built Jinling Yi (金陵邑)
in the western part of present-day Nanjing. It was renamed Moling (秣陵) during the reign of Qin Shi Huang. Since then,
the city experienced destruction and renewal many times. The area was successively part of Kuaiji,
Zhang and Danyang prefectures in the Qin and Han dynasties, and part of the Yangzhou region, one of the nation's 13 supervisory and administrative regions established in the 5th year of Yuanfeng in the Han dynasty (106 BC). Nanjing was later
the capital city of Danyang Prefecture, and had been the capital city of Yangzhou for about 400 years from late Han
to early Tang. Nanjing first became a state capital in 229 AD, when the state of Eastern Wu founded by Sun Quan during
the Three Kingdoms period relocated its capital to Jianye (建業), a city extended on the basis of Jinling Yi in 211
AD. Although conquered by the Western Jin dynasty in 280, Nanjing and its neighbouring areas had been well cultivated
and developed into one of the commercial, cultural and political centers of China during the rule of Eastern Wu. This
city would play a vital role in the following centuries. Shortly after the unification of the region, the Western Jin dynasty collapsed: first came the rebellions of eight Jin princes contending for the throne, and later rebellions and invasions by the Xiongnu and other nomadic peoples destroyed the rule of the Jin dynasty in the north. In 317, remnants
of the Jin court, as well as nobles and wealthy families, fled from the north to the south and reestablished the
Jin court in Nanjing, which was then called Jiankang (建康), replacing Luoyang. This was the first time the national capital had moved to the south. During the period of North–South division, Nanjing remained the capital of
the Southern dynasties for more than two and a half centuries. During this time, Nanjing was the international hub
of East Asia. Based on historical documents, the city had 280,000 registered households. Assuming an average Nanjing
household had about 5.1 people at that time, the city had more than 1.4 million residents. A number of sculptural
ensembles of that era, erected at the tombs of royals and other dignitaries, have survived (in various degrees of
preservation) in Nanjing's northeastern and eastern suburbs, primarily in Qixia and Jiangning District. Possibly
the best preserved of them is the ensemble of the Tomb of Xiao Xiu (475–518), a brother of Emperor Wu of Liang. The
period of division ended when the Sui dynasty reunified China; the conquering Sui razed almost the entire city of Nanjing, reducing it to a small town. The city was renamed Shengzhou (昇州) in the Tang dynasty and revived during the late Tang. It was chosen as the capital and called Jinling (金陵) during the Southern Tang (937–976), a state that succeeded the Wu state. It was renamed Jiangning (江寧) in the Northern Song dynasty and renamed Jiankang in the Southern Song dynasty. Jiankang's textile industry burgeoned and thrived during the Song dynasty despite the constant
threat of foreign invasions from the north by the Jurchen-led Jin dynasty. The court of Da Chu, a short-lived puppet
state established by the Jurchens, and the court of Song were once in the city. Song was eventually exterminated
by the Mongol empire under the name Yuan and in Yuan dynasty the city's status as a hub of the textile industry was
further consolidated. The first emperor of the Ming dynasty, Zhu Yuanzhang (the Hongwu Emperor), who overthrew the
Yuan dynasty, renamed the city Yingtian, rebuilt it, and made it the dynastic capital in 1368. He constructed a 48
km (30 mi) long city wall around Yingtian, as well as a new Ming Palace complex, and government halls. It took 200,000
laborers 21 years to finish the project. The present-day City Wall of Nanjing was mainly built during that time and
today it remains in good condition and has been well preserved. It is among the longest surviving city walls in China.
The Jianwen Emperor ruled from 1398 to 1402. It is believed that Nanjing was the largest city in the world from 1358
to 1425 with a population of 487,000 in 1400. Nanjing remained the capital of the Ming Empire until 1421, when the
third emperor of the Ming dynasty, the Yongle Emperor, relocated the capital to Beijing. Besides the city wall, other
famous Ming-era structures in the city included the famous Ming Xiaoling Mausoleum and Porcelain Tower, although
the latter was destroyed by the Taipings in the 19th century either in order to prevent a hostile faction from using
it to observe and shell the city or from superstitious fear of its geomantic properties. A monument to the huge human
cost of some of the gigantic construction projects of the early Ming dynasty is the Yangshan Quarry (located some
15–20 km (9–12 mi) east of the walled city and Ming Xiaoling mausoleum), where a gigantic stele, cut on the orders
of the Yongle Emperor, lies abandoned, just as it was left 600 years ago when it was understood it was impossible
to move or complete it. As the center of the empire, early-Ming Nanjing had worldwide connections. It was home of
the admiral Zheng He, who went to sail the Pacific and Indian Oceans, and it was visited by foreign dignitaries,
such as a king from Borneo (Boni 渤泥), who died during his visit to China in 1408. The Tomb of the King of Boni, with
a spirit way and a tortoise stele, was discovered in Yuhuatai District (south of the walled city) in 1958, and has
been restored. Over two centuries after the removal of the capital to Beijing, Nanjing was destined to become the
capital of a Ming emperor one more time. After the fall of Beijing to Li Zicheng's rebel forces and then to the Manchu-led
Qing dynasty in the spring of 1644, the Ming prince Zhu Yousong was enthroned in Nanjing in June 1644 as the Hongguang
Emperor. His short reign was described by later historians as the first reign of the so-called Southern Ming dynasty.
Zhu Yousong, however, fared a lot worse than his ancestor Zhu Yuanzhang three centuries earlier. Beset by factional
conflicts, his regime could not offer effective resistance when the Qing army, led by the Manchu prince Dodo, approached Jiangnan the next spring. Days after Yangzhou fell to the Manchus in late May 1645, the Hongguang
Emperor fled Nanjing, and the imperial Ming Palace was looted by local residents. On June 6, Dodo's troops approached
Nanjing, and the commander of the city's garrison, Zhao the Earl of Xincheng, promptly surrendered the city to them.
The Manchus soon ordered all male residents of the city to shave their heads in the Manchu queue style. They requisitioned
a large section of the city for the bannermen's cantonment, and destroyed the former imperial Ming Palace, but otherwise
the city was spared the mass murders and destruction that befell Yangzhou. Under the Qing dynasty (1644–1911), the
Nanjing area was known as Jiangning (江寧) and served as the seat of government for the Viceroy of Liangjiang. It had
been visited by the Kangxi and Qianlong emperors a number of times on their tours of the southern provinces. Nanjing
was invaded by British troops during the close of the First Opium War, which was ended by the Treaty of Nanjing in
1842. As the capital of the brief-lived rebel Taiping Heavenly Kingdom, founded by the Taiping rebels in the mid-19th century, Nanjing was known as Tianjing (天京, "Heavenly Capital" or "Capital of Heaven"). Both the Qing viceroy and
the Taiping king resided in buildings that would later be known as the Presidential Palace. When Qing forces led
by Zeng Guofan retook the city in 1864, a massive slaughter occurred in the city with over 100,000 estimated to have
committed suicide or fought to the death. From the time the Taiping Rebellion began, Qing forces had allowed no rebels speaking its dialect to surrender, and this policy of mass murder of civilians was carried out in Nanjing. The Xinhai Revolution led
to the founding of the Republic of China in January 1912 with Sun Yat-sen as the first provisional president and
Nanking was selected as its new capital. However, the Qing Empire controlled large regions to the north, so revolutionaries
asked Yuan Shikai to replace Sun as president in exchange for the abdication of Puyi, the Last Emperor. Yuan demanded
the capital be Beijing (closer to his power base). In 1927, the Kuomintang (KMT; Nationalist Party) under Generalissimo
Chiang Kai-shek again established Nanjing as the capital of the Republic of China, and this became internationally
recognized once KMT forces took Beijing in 1928. The following decade is known as the Nanking decade. In 1937, the
Empire of Japan started a full-scale invasion of China after invading Manchuria in 1931, beginning the Second Sino-Japanese
War (often considered a theatre of World War II). Their troops occupied Nanjing in December and carried out the systematic
and brutal Nanking Massacre (the "Rape of Nanking"). Even children, the elderly, and nuns are reported to have suffered
at the hands of the Imperial Japanese Army. The total death toll, including estimates made by the International Military
Tribunal for the Far East and the Nanjing War Crimes Tribunal, was between 300,000 and 350,000. The city itself was
also severely damaged during the massacre. The Nanjing Massacre Memorial Hall was built in 1985 to commemorate this
event. A few days before the fall of the city, the National Government of China was relocated to the southwestern
city Chungking (Chongqing) and resumed Chinese resistance. In 1940, a Japanese-collaborationist government known
as the "Nanjing Regime" or "Reorganized National Government of China" led by Wang Jingwei was established in Nanjing
as a rival to Chiang Kai-shek's government in Chongqing. In 1946, after the Surrender of Japan, the KMT relocated
its central government back to Nanjing. On 21 April 1949, Communist forces crossed the Yangtze River. On April 23, 1949,
the Communist People's Liberation Army (PLA) captured Nanjing. The KMT government retreated to Canton (Guangzhou)
until October 15, Chongqing until November 25, and then Chengdu before retreating to Taiwan on December 10. By late
1949, the PLA was pursuing remnants of KMT forces southwards in southern China, and only Tibet was left. After the
establishment of the People's Republic of China in October 1949, Nanjing was initially a province-level municipality,
but it was soon merged into Jiangsu province and again became the provincial capital, replacing Zhenjiang, to which the capital had been transferred in 1928; it retains that status to this day. Nanjing, with a total land area of 6,598 square kilometres
(2,548 sq mi), is situated in the heartland of the drainage area of the lower reaches of the Yangtze River, and in the Yangtze River
Delta, one of the largest economic zones of China. The Yangtze River flows past the west side and then north side
of Nanjing City, while the Ningzheng Ridge surrounds the north, east and south side of the city. The city is 300
kilometres (190 mi) west-northwest of Shanghai, 1,200 kilometres (750 mi) south-southeast of Beijing, and 1,400 kilometres
(870 mi) east-northeast of Chongqing. The downstream Yangtze River flows from Jiujiang, Jiangxi, through Anhui and
Jiangsu to the East China Sea. North of the downstream Yangtze drainage basin lies the Huai River basin, and south of it the Zhe River basin; the two are connected to the Yangtze by the Grand Canal east of Nanjing. The area around Nanjing is called the Hsiajiang (下江, Downstream River) region, with Jianghuai (江淮) denoting its northern part and Jiangzhe (江浙) its southern part.
The region is also known as Dongnan (東南, South East, the Southeast) and Jiangnan (江南, River South, south of Yangtze).
Nanjing borders Yangzhou to the northeast, one town downstream when following the north bank of the Yangtze, Zhenjiang
to the east, one town downstream when following the south bank of the Yangtze, and Changzhou to the southeast. On
its western boundary is Anhui province, where Nanjing borders five prefecture-level cities, Chuzhou to the northwest,
Wuhu, Chaohu and Maanshan to the west and Xuancheng to the southwest. Nanjing is the intersection of Yangtze River,
an east-west water transport artery, and Nanjing–Beijing railway, a south-north land transport artery, hence the
name “door of the east and west, throat of the south and north”. Furthermore, the western part of the Ningzhen range is in Nanjing; the dragon-like Zhong Mountain curls in the east of the city, and the tiger-like Stone Mountain crouches in the west, hence the saying “the Zhong Mountain, a dragon curling, and the Stone Mountain, a tiger crouching”. Sun Yat-sen spoke highly of Nanjing in the “Constructive Scheme for Our Country”: “The position of Nanjing is wonderful since mountains, lakes and plains are all integrated in it. It is hard to find another city like
this.” Nanjing has a humid subtropical climate (Köppen Cfa) and is under the influence of the East Asian monsoon.
The four seasons are distinct, with damp conditions throughout the year: very hot and muggy summers, cold, damp winters, and spring and autumn of reasonable length in between. Along with Chongqing and Wuhan, Nanjing is traditionally
referred to as one of the "Three Furnacelike Cities" along the Yangtze River (长江流域三大火炉) for the perennially high
temperatures in the summertime. However, the time from mid-June to the end of July is the plum blossom blooming season
in which the meiyu (rainy season of East Asia; literally "plum rain") occurs, during which the city experiences a
period of mild rain as well as dampness. Typhoons are uncommon but possible in the late stages of summer and early
part of autumn. The annual mean temperature is around 15.46 °C (59.8 °F), with the monthly 24-hour average temperature
ranging from 2.4 °C (36.3 °F) in January to 27.8 °C (82.0 °F) in July. Extremes since 1951 have ranged from −14.0
°C (7 °F) on 6 January 1955 to 40.7 °C (105 °F) on 22 August 1959. On average precipitation falls 115 days out of
the year, and the average annual rainfall is 1,062 millimetres (42 in). With monthly percent possible sunshine ranging
from 37 percent in March to 52 percent in August, the city receives 1,983 hours of bright sunshine annually. Nanjing
is endowed with rich natural resources, which include more than 40 kinds of minerals. Among them, iron and sulfur
reserves make up 40 percent of those of Jiangsu province. Its reserves of strontium rank first in East Asia and the
South East Asia region. Nanjing also possesses abundant water resources, both from the Yangtze River and groundwater.
In addition, it has several natural hot springs such as Tangshan Hot Spring in Jiangning and Tangquan Hot Spring
in Pukou. Surrounded by the Yangtze River and mountains, Nanjing also enjoys beautiful natural scenery. Natural lakes
such as Xuanwu Lake and Mochou Lake are located in the centre of the city and are easily accessible to the public,
while hills like Purple Mountain are covered with evergreens and oaks and host various historical and cultural sites.
Sun Quan relocated his capital to Nanjing at Liu Bei's suggestion, as Liu Bei had been impressed by Nanjing's impeccable geographic position while negotiating an alliance with Sun Quan. Sun Quan then renamed the city from Moling (秣陵) to
Jianye (建鄴) shortly thereafter. A dense wave of smog began in the Central and Eastern part of China on 2 December
2013 across a distance of around 1,200 kilometres (750 mi), including Tianjin, Hebei, Shandong, Jiangsu, Anhui, Shanghai
and Zhejiang. A lack of cold air flow, combined with slow-moving air masses carrying industrial emissions, collected
airborne pollutants to form a thick layer of smog over the region. The smog heavily polluted central and southern Jiangsu Province, especially in and around Nanjing, with its AQI pollution index at "severely polluted" for five straight days and "heavily polluted" for nine. On 3 December 2013, levels of PM2.5 particulate matter averaged over 943 micrograms per cubic metre, falling to over 338 micrograms per cubic metre on 4 December 2013. Between 3:00 pm,
3 December and 2:00pm, 4 December local time, several expressways from Nanjing to other Jiangsu cities were closed,
stranding dozens of passenger buses in Zhongyangmen bus station. From 5 to 6 December, Nanjing issued a red alert
for air pollution and closed down all schools from kindergarten through middle school. Children's Hospital outpatient services increased by 33 percent, and the general incidence of bronchitis, pneumonia, and upper respiratory tract infections increased significantly. The smog dissipated on 12 December. Officials blamed the dense pollution on a lack of wind, automobile exhaust emissions under low air pressure, and coal-powered district heating systems in the North China region. Prevailing winds
blew low-hanging air masses of factory emissions (mostly SO2) towards China's east coast. At present, the full name
of the government of Nanjing is "People's Government of Nanjing City" and the city is under the one-party rule of
the CPC, with the CPC Nanjing Committee Secretary as the de facto governor of the city and the mayor as the executive
head of the government working under the secretary. According to the Sixth China Census, the total population of
the City of Nanjing reached 8.005 million in 2010. The statistics in 2011 estimated the total population to be 8.11
million. The birth rate was 8.86 per thousand and the death rate was 6.88 per thousand. The urban area had a population of
6.47 million people. The sex ratio of the city population was 107.31 males to 100 females. As in most of eastern
China the ethnic makeup of Nanjing is predominantly Han nationality (98.56 percent), with 50 other minority nationalities.
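The sex-ratio figure above converts into population shares with simple arithmetic. A minimal sketch, assuming the 2010 census total and the ratio quoted in the text; the derived male count is an illustration, not a census statistic:

```python
# Sex ratio quoted above: 107.31 males per 100 females (2010 census).
ratio_males_per_100_females = 107.31
total_population = 8_005_000  # 2010 census total quoted above

# Share of males = males / (males + females) in a cohort of 107.31 + 100.
male_share = ratio_males_per_100_females / (ratio_males_per_100_females + 100)

print(round(male_share * 100, 2))            # ~51.76 percent male
print(round(total_population * male_share))  # implied male count, roughly 4.14 million
```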
In 1999, 77,394 residents belonged to minority nationalities, among whom the vast majority (64,832) were Hui, making up 83.76 percent of the minority population. The second and third largest minority groups were the Manchu (2,311)
and Zhuang (533) nationalities. Most of the minority nationalities resided in Jianye District, comprising 9.13 percent
of the district's population. Since the Three Kingdoms period, Nanjing has been an industrial centre for textiles
and minting owing to its strategic geographical location and convenient transportation. During the Ming dynasty,
Nanjing's industry was further expanded, and the city became one of the most prosperous cities in China and the world.
It led in textiles, minting, printing, shipbuilding and many other industries, and was the busiest business center
in East Asia. Textiles boomed particularly in the Qing dynasty: the industry created around 200,000 jobs, and there were about 50,000 satin looms in the city in the 18th and 19th centuries. Into the first half of the twentieth
century, after the establishment of the ROC, Nanjing gradually shifted from being a production hub to a heavy consumption city, mainly because of the rapid expansion of its wealthy population after Nanjing once again claimed the political spotlight of China. A number of huge department stores such as Zhongyang Shangchang sprouted up, attracting
merchants from all over China to sell their products in Nanjing. In 1933, the revenue generated by the food and entertainment
industry in the city exceeded the combined output of the manufacturing and agriculture industries. One third of the city's population worked in the service industry. In the 1950s, after the PRC was established by the CPC, the government invested
heavily in the city to build a series of state-owned heavy industries, as part of the national plan of rapid industrialization,
converting it into a heavy industry production centre of East China. Overenthusiastic in building a “world-class”
industrial city, the government also made many disastrous mistakes during development, such as spending hundreds
of millions of yuan to mine for non-existent coal, resulting in negative economic growth in the late 1960s. From
1960s to 1980s there were Five Pillar Industries, namely, electronics, cars, petrochemical, iron and steel, and power,
each with big state-owned firms. After the Reform and Opening-up restored the market economy, the state-owned enterprises found themselves unable to compete with efficient multinational and local private firms, and so were either mired in heavy debt or forced into bankruptcy or privatization, resulting in large numbers of laid-off workers who were technically not unemployed but effectively jobless. The city's current economy is largely newly developed on this foundation. Service industries dominate, accounting for about 60 percent of the city's GDP, with finance, culture, and tourism the top three among them. Industries of information technology,
energy saving and environmental protection, new energy, smart power grid and intelligent equipment manufacturing
have become pillar industries. Big private firms include Suning, Yurun, Sanpower, Fuzhong, Hiteker, 5stars, Jinpu,
Tiandi, CTTQ Pharmaceutical and Simcere Pharmaceutical. Big state-owned firms include Panda Electronics, Yangzi Petrochemical,
Jinling Petrochemical, Nanjing Chemical, Nanjing Steel, Jincheng Motors, Jinling Pharmaceutical, Chenguang and NARI.
The city has also attracted foreign investment: multinational firms such as Siemens, Ericsson, Volkswagen, Iveco, A.O. Smith, and Sharp have established production lines there, and multinationals such as Ford, IBM, Lucent, Samsung and SAP have established research centers there. Many leading China-based firms such as Huawei, ZTE and Lenovo have key
R & D institutes in the city. Nanjing is an industrial technology research and development hub, hosting many R &
D centers and institutions, especially in areas of electronics technology, information technology, computer software,
biotechnology and pharmaceutical technology and new material technology. In recent years, Nanjing has been developing
its economy, commerce, industry, as well as city construction. In 2013 the city's GDP was RMB 801 billion (3rd in
Jiangsu), and GDP per capita (current price) was RMB 98,174 (US$16,041), an 11 percent increase from 2012. The average
urban resident's disposable income was RMB 36,200, while the average rural resident's net income was RMB 14,513.
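The quoted totals can be cross-checked with simple arithmetic. A minimal sketch using only the figures quoted above; the implied population and exchange rate are derived values for illustration:

```python
# 2013 figures for Nanjing as quoted above.
gdp_rmb = 801e9              # total GDP, RMB 801 billion
gdp_per_capita_rmb = 98_174  # GDP per capita, RMB
gdp_per_capita_usd = 16_041  # quoted USD equivalent

implied_population = gdp_rmb / gdp_per_capita_rmb
implied_rmb_per_usd = gdp_per_capita_rmb / gdp_per_capita_usd

print(round(implied_population / 1e6, 2))  # ~8.16 million, consistent with the census population
print(round(implied_rmb_per_usd, 2))       # ~6.12 RMB per USD, plausible for 2013
```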
The registered urban unemployment rate was 3.02 percent, lower than the national average (4.3 percent). Nanjing's
Gross Domestic Product ranked 12th in China in 2013, and its overall competitiveness ranked 6th in the mainland and 8th including
Taiwan and Hong Kong in 2009. Nanjing is the transportation hub in eastern China and the downstream Yangtze River
area. Different means of transportation constitute a three-dimensional transport system that includes land, water
and air. As in most other Chinese cities, public transportation is the dominant mode of travel of the majority of
the citizens. As of October 2014, Nanjing had five bridges and two tunnels over the Yangtze River, tying districts north of the river to the city centre on the south bank. Nanjing is an important railway hub in eastern
China. It serves as a rail junction for the Beijing–Shanghai (Jinghu) railway (itself composed of the old Jinpu and Huning railways), the Nanjing–Tongling Railway (Ningtong), the Nanjing–Qidong Railway (Ningqi), and the Nanjing–Xi'an Railway (Ningxi), which encompasses the Hefei–Nanjing Railway. Nanjing is connected to the national high-speed railway network by the Beijing–Shanghai
High-Speed Railway and Shanghai–Wuhan–Chengdu Passenger Dedicated Line, with several more high-speed rail lines under
construction. Among all 17 railway stations in Nanjing, passenger rail service is mainly provided by Nanjing Railway
Station and Nanjing South Railway Station, while other stations like Nanjing West Railway Station, Zhonghuamen Railway
Station and Xianlin Railway Station serve minor roles. Nanjing Railway Station was first built in 1968. On November 12, 1999, the station was damaged in a serious fire. Reconstruction of the station was finished on September
1, 2005. Nanjing South Railway Station, which is one of the 5 hub stations on Beijing–Shanghai High-Speed Railway,
has officially been claimed as the largest railway station in Asia and the second largest in the world in terms of
GFA (Gross Floor Area). Construction of Nanjing South Station began on 10 January 2008. The station was opened for
public service in 2011. Express highways such as Hu–Ning, Ning–He, Ning–Hang enable commuters to travel to Shanghai,
Hefei, Hangzhou, and other important cities quickly and conveniently. Inside the city of Nanjing, there are 230 km
(140 mi) of highways, with a highway coverage density of 3.38 kilometres per hundred square kilometres (5.44 mi/100
sq mi). The total road coverage density of the city is 112.56 kilometres per hundred square kilometres (181.15 mi/100
sq mi). The two arterial roads in Nanjing are Zhongshan Road and Hanzhong Road, which cross at the city centre,
Xinjiekou. The city also boasts an efficient network of public transportation, which mainly consists of bus, taxi
and metro systems. The bus network, run by three companies since 2011, provides more than 370 routes covering all parts of the city and suburban areas. Nanjing Metro Line 1 started service on September 3, 2005,
with 16 stations and a length of 21.72 km. Line 2 and the 24.5 km-long south extension of Line 1 officially opened
to passenger service on May 28, 2010. At present, Nanjing has a metro system with a grand total of 223.6 kilometers
(138.9 mi) of route and 121 stations across six lines: Line 1, Line 2, Line 3, Line 10, Line S1 and Line S8. The city is
planning to complete a 17-line Metro and light-rail system by 2030. The expansion of the Metro network will greatly
facilitate intracity transportation and reduce the current heavy traffic congestion. Nanjing's airport, Lukou
International Airport, serves both national and international flights. In 2013, Nanjing airport handled 15,011,792
passengers and 255,788.6 tonnes of freight. The airport currently has 85 routes to national and international destinations,
which include Japan, Korea, Thailand, Malaysia, Singapore, USA and Germany. The airport is connected by a 29-kilometre
(18 mi) highway directly to the city center, and is also linked to various intercity highways, making it accessible
to passengers from the surrounding cities. A railway, the Ninggao Intercity Line, is being built to link the airport
with Nanjing South Railway Station. Lukou Airport was opened on 28 June 1997, replacing Nanjing Dajiaochang Airport
as the main airport serving Nanjing. Dajiaochang Airport is still used as a military air base. The Port of Nanjing is the largest inland port in China, with annual cargo tonnage reaching 191,970,000 t in 2012. The port area is 98 kilometres
(61 mi) in length and has 64 berths including 16 berths for ships with a tonnage of more than 10,000. Nanjing is
also the biggest container port along the Yangtze River; in March 2004, the one-million-container-capacity base, Longtan Containers Port Area, opened, further consolidating Nanjing as the leading port in the region. As of 2010,
it operated six public ports and three industrial ports. In the 1960s, the first Nanjing Yangtze River Bridge was
completed, and served as the only bridge crossing over the Lower Yangtze in eastern China at that time. The bridge
was a source of pride and an important symbol of modern China, having been built and designed by the Chinese themselves
following failed surveys by other nations and the reliance on and then rejection of Soviet expertise. Begun in 1960
and opened to traffic in 1968, the bridge is a two-tiered road and rail design spanning 4,600 metres on the upper
deck, with approximately 1,580 metres spanning the river itself. Since then four more bridges and two tunnels have
been built. Going in the downstream direction, the Yangtze crossings in Nanjing are: Dashengguan Bridge, Line 10
Metro Tunnel, Third Bridge, Nanjing Yangtze River Tunnel, First Bridge, Second Bridge and Fourth Bridge. Being one
of the four ancient capitals of China, Nanjing has always been a cultural centre attracting intellectuals from all
over the country. In the Tang and Song dynasties, Nanjing was a place where poets gathered and composed poems reminiscent
of its luxurious past; during the Ming and Qing dynasties, the city was the official imperial examination centre
(Jiangnan Examination Hall) for the Jiangnan region, again acting as a hub where different thoughts and opinions
converged and thrived. Today, with a long cultural tradition and strong support from local educational institutions,
Nanjing is commonly viewed as a “city of culture” and one of the more pleasant Chinese cities to live in. Some of the
leading art groups of China are based in Nanjing; they include the Qianxian Dance Company, Nanjing Dance Company,
Jiangsu Peking Opera Institute and Nanjing Xiaohonghua Art Company among others. Jiangsu Province Kun Opera is one
of the best theatres for Kunqu, China's oldest stage art. It is considered a conservative and traditional troupe.
Nanjing also has professional opera troupes for the Yang, Yue (shaoxing), Xi and Jing (Chinese opera varieties) as
well as Suzhou pingtan, spoken theatre and puppet theatre. Jiangsu Art Gallery is the largest gallery in Jiangsu
Province, presenting some of the best traditional and contemporary art pieces of China; many other smaller-scale
galleries, such as Red Chamber Art Garden and Jinling Stone Gallery, also have their own special exhibitions. Many
traditional festivals and customs were observed in the old times, which included climbing the City Wall on January
16, bathing in Qing Xi on March 3, hill hiking on September 9 and others (the dates are in Chinese lunar calendar).
Almost none of them, however, are still celebrated by modern Nanjingese. Instead, Nanjing, as a popular tourist destination,
hosts a series of government-organised events throughout the year. The annual International Plum Blossom Festival
held in Plum Blossom Hill, the largest plum collection in China, attracts thousands of tourists both domestically
and internationally. Other events include Nanjing Baima Peach Blossom and Kite Festival, Jiangxin Zhou Fruit Festival
and Linggu Temple Sweet Osmanthus Festival. Nanjing Library, founded in 1907, houses more than 10 million volumes
of printed materials and is the third largest library in China, after the National Library in Beijing and Shanghai
Library. Other libraries, such as the city-owned Jinling Library and various district libraries, also provide a considerable amount of information to citizens. Nanjing University Library is the second largest university library in China
after Peking University Library, and the fifth largest nationwide, especially in the number of precious collections.
Nanjing has some of the oldest and finest museums in China. Nanjing Museum, formerly known as National Central Museum
during the ROC period, was China's first modern museum and remains one of the country's leading museums, holding 400,000 items in its permanent collection. The museum is notable for its enormous collection of Ming and Qing imperial porcelain,
which is among the largest in the world. Other museums include the City Museum of Nanjing in the Chaotian Palace,
the Oriental Metropolitan Museum, the China Modern History Museum in the Presidential Palace, the Nanjing Massacre
Memorial Hall, the Taiping Kingdom History Museum, Jiangning Imperial Silk Manufacturing Museum, Nanjing Yunjin Museum,
Nanjing City Wall Cultural Museum, Nanjing Customs Museum in Ganxi House, Nanjing Astronomical History Museum, Nanjing
Paleontological Museum, Nanjing Geological Museum, Nanjing Riverstones Museum, and other museums and memorials such as the Zheng He Memorial and the Jinling Four Modern Calligraphers Memorial. Most of Nanjing's major theatres are multi-purpose,
used as convention halls, cinemas, musical halls and theatres on different occasions. The major theatres include
the People's Convention Hall and the Nanjing Arts and Culture Center. The Capital Theatre, well known in the past, is now a museum of theatre and film. Traditionally, Nanjing's nightlife was mostly centered around the Nanjing Fuzimiao (Confucius
Temple) area along the Qinhuai River, where night markets, restaurants and pubs thrived. Boating at night in the
river was a main attraction of the city. Thus, one can see the statues of the famous teachers and educators of the
past not too far from those of the courtesans who educated the young men in the other arts. In the past 20 years,
several commercial streets have been developed, hence the nightlife has become more diverse: there are shopping malls
opening late in the Xinjiekou CBD and Hunan Road. The well-established "Nanjing 1912" district hosts a wide variety
of recreational facilities ranging from traditional restaurants and western pubs to dance clubs. There are two major
areas where bars are densely located; one is in the 1912 block; the other is along Shanghai Road and its neighbourhood.
Both are popular with international residents of the city. The radish is also a typical food associated with the people of Nanjing, a connection that has spread by word of mouth as an interesting fact across China for many years. According
to Nanjing.GOV.cn, "There is a long history of growing radish in Nanjing especially the southern suburb. In the spring,
the radish tastes very juicy and sweet. It is well-known that people in Nanjing like eating radish. And the people
are even addressed as 'Nanjing big radish', which means they are unsophisticated, passionate and conservative. From
health perspective, eating radish can help to offset the stodgy food that people take during the Spring Festival".
As a major Chinese city, Nanjing is home to many professional sports teams. Jiangsu Sainty, a football club currently playing in the Chinese Super League, is a long-term tenant of the Nanjing Olympic Sports Center. Jiangsu Nangang Basketball Club has long been one of the major clubs contending for the title in China's top-level league, the CBA. The Jiangsu men's and women's volleyball teams are also traditionally considered to be at the top level of China's volleyball league. There are two major sports centers in Nanjing: Wutaishan Sports Center and Nanjing Olympic Sports Center.
Both are comprehensive sports complexes, each including a stadium, gymnasium, natatorium, tennis courts, and other facilities. Wutaishan Sports Center was established in 1952 and was one of the oldest and most advanced stadiums in the early years of the People's Republic of China. In 2005, a new venue, the Nanjing Olympic Sports Center, was constructed in Nanjing to host the 10th National Games of the People's Republic of China. Whereas the main stadium at Wutaishan has a capacity of 18,500, the Nanjing Olympic Sports Center has a more advanced stadium seating 60,000 spectators; its gymnasium holds 13,000 and its natatorium 3,000. On
10 February 2010, the 122nd IOC session at Vancouver announced Nanjing as the host city for the 2nd Summer Youth
Olympic Games. The slogan of the 2014 Youth Olympic Games was “Share the Games, Share our Dreams”. The Nanjing 2014
Youth Olympic Games featured all 28 sports on the Olympic programme and were held from 16 to 28 August. The Nanjing
Youth Olympic Games Organising Committee (NYOGOC) worked together with the International Olympic Committee (IOC)
to attract the best young athletes from around the world to compete at the highest level. Off the competition fields,
an integrated culture and education programme focused on discussions about education, Olympic values, social challenges,
and cultural diversity. The YOG aims to spread the Olympic spirit and encourage sports participation. Nanjing is
one of the most beautiful cities of mainland China with lush green parks, natural scenic lakes, small mountains,
historical buildings and monuments, relics and much more, which attract thousands of tourists every year. Because the city was once designated as the national capital, many structures were built during that period, and some of them still remain today and are open to tourists. Nanjing has been the educational centre of southern China for more than
1,700 years. As of 2013, there were 75 institutions of higher learning. The number of national key laboratories, national key disciplines, and academicians of the Chinese Academy of Sciences and the Chinese Academy of Engineering all rank third
in the nation. It boasts some of the most prominent educational institutions in the region, some of which are listed
as follows:
The Arena Football League (AFL) is the highest level of professional indoor American football in the United States. It was
founded in 1987 by Jim Foster, making it the third longest-running professional football league in North America,
after the Canadian Football League and the National Football League. It is played indoors on a 68-yard field (about
half the distance of an NFL field), resulting in a faster-paced and higher-scoring game. The sport was invented in
the early 1980s and patented by Foster, a former executive of the United States Football League and the National
Football League. For its 2015 season, the league consisted of 12 teams, all from the United States; however, upon
the completion of the regular season, the league announced that the two teams it had assumed operation of during
the season would cease all operations effective immediately; a regular season game slated between the two had previously
been canceled and declared a tie. Subsequently, one of the remaining teams, the Spokane Shock, severed its ties with
the league to join the competing IFL. The AFL is divided into two conferences – the American Conference and National
Conference. Starting in 2016, each conference will have only four teams, as the champion San Jose SaberCats announced
in November 2015 that they were ceasing activity for "reasons not associated with League operations." The 2016 regular
season consists of an 18-week schedule during which each team plays 16 games and has two bye weeks. Each team plays two or three games against the teams within its own conference, and two games (one home, one away) against each team in the other conference.
The 2015 season started during the last week of March and ran weekly into late August. At the end of the regular
season, all teams from each conference (the conference winner and three wild card teams) play in the AFL playoffs,
an eight-team single-elimination tournament that culminates with the championship game, known as the ArenaBowl. From
1987 to 2004, in 2010 and 2011, and again starting in 2014, the game was played at the site of the higher-seeded team.
From 2005 to 2008, the games were at neutral sites, Las Vegas and New Orleans. In 2012, the league championship returned
to a neutral site and ArenaBowl XXV was held at the New Orleans Arena; ArenaBowl XXVI was held in Orlando. The 2016
season will begin April 1, 2016. From 2000 to 2009, the AFL had its own developmental league, the af2. The AFL played
22 seasons from 1987 to 2008; internal issues caused the league to cancel its 2009 season, though the af2 did play.
Later that year both the AFL and af2 were dissolved and reorganized as a new corporation comprising teams from both
leagues, and the AFL returned in 2010. The Arena Football League has its headquarters in Chicago, Illinois. Jim Foster,
a promotions manager with the National Football League, conceived of indoor football while watching an indoor soccer
match at Madison Square Garden in 1981. While at the game, he wrote his idea on a 9x12 envelope, with sketches of
the field and notes on gameplay. He presented the idea to a few friends at the NFL offices, where he received praise
and encouragement for his concept. After solidifying the rules and a business plan, and supplemented with sketches
by a professional artist, Foster presented his idea to various television networks. He reached an agreement with
NBC for a "test game". Plans for arena football were put on hold in 1982 as the United States Football League was
launched. Foster left the NFL to accept a position in the USFL. He eventually became executive vice-president with
the Chicago Blitz, where he returned to his concept of arena football. In 1983, he began organizing the test game
in his spare time from his job with the Blitz. By 1985, the USFL had ceased football operations and he began devoting
all his time to arena football, and on April 27, 1986, his concept was realized when the test game was played. The
test game was played in Rockford, Illinois, at the Rockford MetroCentre. Sponsors were secured, and players and coaches
from local colleges were recruited to volunteer to play for the teams, the Chicago Politicians and Rockford Metros,
with the guarantee of a tryout should the league take off. Interest was high enough following the initial test game
that Foster decided to put on a second, "showcase", game. The second game was held on February 26, 1987 at the Rosemont
Horizon in Chicago with a budget of $20,000, up from $4,000 in the test game. Foster also invited ESPN to send a
film crew to the game; a highlights package aired on SportsCenter. Following the successes of his trial-run games,
Foster moved ahead with his idea for arena football. He founded the Arena Football League with four teams: the Pittsburgh
Gladiators, Denver Dynamite, Washington Commandos, and Chicago Bruisers. Foster appointed the legendary Darrel "Mouse"
Davis, godfather of the "run and shoot" and modern pro offenses, as executive director of football operations. Davis
hired the original coaches and was the architect of the league's original wide-open offensive playbooks. The first
game in Arena Football League history was played on June 19, 1987, between the Gladiators and Commandos at Pittsburgh
Civic Arena in front of 12,117 fans. The game was deliberately not televised so that it could be analyzed and any
follies and failures would not be subject to national public scrutiny. Following the inaugural game, tweaks and adjustments
were made, and the first season continued. The Dynamite and Bruisers played in the first-ever televised AFL game
the next night, on June 20, 1987, at the Rosemont Horizon in suburban Chicago on ESPN with Bob Rathbun and Lee Corso
calling the play. The broadcast showed a short clip of the Commandos-Gladiators game. Each team played six games,
two against each other team. The top two teams, Denver and Pittsburgh, then competed in the first-ever AFL championship
game, ArenaBowl I. On September 30, 1987, Foster filed an application with the United States Patent and Trademark
Office to patent his invented sport. The patent application covered the rules of the game, specifically detailing
the goalposts and rebound netting and their impact on gameplay. Foster's application was granted on March 27, 1990.
The patent expired on September 30, 2007. From the 1987 season until the late 1990s, the most exposure the league
would receive was on ESPN, which aired tape-delayed games, often well after midnight, and often edited to match the
allotted time slot. The league received its first taste of wide exposure in 1998, when ArenaBowl XII was televised
nationally as part of ABC's old Wide World of Sports. On July 23, 1989, much of America learned of the
AFL for an unintended reason, when the Pittsburgh Gladiators' head coach, Joe Haering, made football history by punching
commissioner Jim Foster during a game with the Chicago Bruisers. The national media ran with the story, including
a photo in USA Today. The game was played between the two teams in Sacramento's Arco Arena, as part of the AFL's
'Barnstorming America' tour. Foster had walked onto the field of play to mediate an altercation between the two teams
when Haering, a former NFL assistant, punched him in the jaw. Haering was suspended without pay. One of the league's
early success stories was the Detroit Drive. A primary team for some of the AFL's most highly regarded players, including
George LaFrance and Gary and Alvin Rettig, as well as being a second career chance for quarterback Art Schlichter,
the Drive regularly played before sold out crowds at Joe Louis Arena, and went to the ArenaBowl every year of their
existence (1988–1993). The AFL's first dynasty came to an end when their owner, Mike Ilitch (who also owned Little
Caesars Pizza and the Detroit Red Wings) bought the Detroit Tigers baseball team and sold the AFL team. Although
the Drive moved to Massachusetts for the 1994 season, the AFL had a number of other teams which it considered "dynasties",
including the Tampa Bay Storm (the only team that has existed in some form for all twenty-eight contested seasons),
their arch-rival the Orlando Predators, the now-defunct San Jose SaberCats of the present decade, and their rivals
the Arizona Rattlers. In 1993, the league staged its first All-Star Game in Des Moines, Iowa, the future home of
the long-running Iowa Barnstormers, as a fundraiser for flood victims in the area. The National Conference defeated
the American Conference 64–40 in front of a crowd of 7,189. The second All-Star Game was held in October 2013 as a two-game event, the first in Honolulu, Hawaii, and the second in Beijing, China. While some teams have enjoyed considerable on-field
and even financial success, many teams in the history of the league have enjoyed little success either on or off
of the field of play. A number of franchises existed as a series of largely unrelated teams under numerous management groups until they folded (an example is the New York CityHawks, whose owners transferred
the team from New York to Hartford to become the New England Sea Wolves after two seasons, then after another two
seasons were sold and became the Toronto Phantoms, who lasted another two seasons until folding). There are a number
of reasons why these teams failed, including financially weak ownership groups, lack of deep financial support from
some owners otherwise capable of providing it, lack of media exposure, and the host city's evident lack of interest
in its team or the sport as a whole. The year 2000 brought heightened interest in the AFL. Then-St. Louis Rams quarterback
Kurt Warner, who was MVP of Super Bowl XXXIV, was first noticed because he played quarterback for the AFL's Iowa
Barnstormers. While many sports commentators and fans continued to ridicule the league, Warner's story gave the league
positive exposure, and it brought the league a new television deal with TNN, which, unlike ESPN, televised regular
season games live. While it was not financially lucrative, it helped set the stage for what the league would become
in the new millennium. The year also brought a spin-off league, the af2, intended to be a developmental league,
comparable to the National Football League's NFL Europe. There was a lot of expansion in the 2000s. Expansion teams
included the Austin Wranglers, Carolina Cobras, Los Angeles Avengers, Chicago Rush, Detroit Fury, Dallas Desperados,
Colorado Crush, New Orleans VooDoo, Philadelphia Soul, Nashville Kats, Kansas City Brigade, New York Dragons and
Utah Blaze. Some of these teams, including the Crush, Desperados, Kats, and VooDoo, were owned by the same group
which owned the NFL teams in their host cities. The NFL purchased, but never exercised, an option to buy a major
interest in the AFL. Of all of these teams, only the Soul still competes in the AFL today. In 2003, the season expanded
to 16 games. There were also several rule changes in this period. In 2005, players were no longer allowed to run
out of bounds. The only way for a player to go out of bounds presently is if he is tackled into or deliberately contacts
the side boards. This was also the first year the ArenaBowl was played at a neutral site. In 2007, free substitution
was allowed, ending the "iron man" era of one-platoon football. And in 2008, the "jack" linebacker was allowed to
go sideboard to sideboard without being penalized for "illegal defense". After 12 years as commissioner of the AFL,
David Baker retired unexpectedly on July 25, 2008, just two days before ArenaBowl XXII; deputy commissioner Ed Policy
was named interim commissioner until Baker's replacement was found. Baker explained, "When I took over as commissioner,
I thought it would be for one year. It turned into 12. But now it's time." In October 2008, Tom Benson announced
that the New Orleans VooDoo were ceasing operations and folding "based on circumstances currently affecting the league
and the team". Shortly thereafter, an article in Sports Business Journal announced that the AFL had a tentative agreement
to sell a $100 million stake in the league to Platinum Equity; in exchange, Platinum Equity would create a centralized,
single-entity business model that would streamline league and team operations and allow the league to be more profitable.
Benson's move to shut down the VooDoo came during the Platinum Equity conference call, leading to speculation that
he had folded because of the deal. Because of the sudden loss of the New Orleans franchise, the league announced
in October that the beginning of the free agency period would be delayed in order to accommodate a dispersal draft.
Dates were eventually announced as December 2 for the dispersal draft and December 4 for free agency, but shortly
before the draft the league issued a press release announcing the draft had been postponed one day to December 3.
Shortly thereafter, another press release announced that the draft would be held on December 9 and free agency would
commence on December 11. However, the draft still never took place, and instead another press release was issued
stating that both the draft and free agency had been postponed indefinitely. Rumors began circulating that the league
was in trouble and on the verge of folding, but owners denied those claims. It was soon revealed the players' union
had agreed to cut the salary cap for the 2009 season to prevent a total cessation of operations. However, the announced
Platinum Equity investment never materialized. Although the Arenafootball2 league played its tenth season in 2009,
a conference call in December 2008 resulted in enough votes from owners and cooperation from the AFLPA for the AFL
to suspend the entire 2009 season in order to create "a long-term plan to improve its economic model". In doing so,
the AFL became the second sports league to cancel an entire season, after the National Hockey League cancelled the
2004-05 season because of a lockout. The AFL also became the third sports league to lose its postseason (the first
being Major League Baseball, which lost its postseason in 1994 because of a strike). Efforts to reformat the league's
business model were placed under the leadership of Columbus Destroyers owner Jim Renacci and interim commissioner
Policy. High hopes for the AFL waned when interim commissioner Ed Policy announced his resignation, citing the obsolescence
of his position in the reformatted league. Two weeks later, the Los Angeles Avengers announced that they were formally
folding the franchise. One month later, the league missed the deadline to formally ratify the new collective bargaining
agreement and announced that it was eliminating health insurance for the players. Progress on the return stalled,
and no announcements were made regarding the future of the league. On July 20, 2009, Sports Business Journal reported
that the AFL owed approximately $14 million to its creditors and were considering filing for Chapter 11 bankruptcy
protection. In early August 2009, numerous media outlets began reporting that the AFL was folding permanently and
would file for Chapter 7 bankruptcy. The league released a statement on August 4 to the effect that while the league
was not folding, it was suspending league operations indefinitely. Despite this, several of the league's creditors
filed papers to force a Chapter 7 liquidation if the league did not do so voluntarily. This request was granted on
August 7, though converted to a Chapter 11 reorganization on August 26. Following the suspension of the AFL's 2009
season, league officials and owners of af2 (which had played its season as scheduled) began discussing the future
of arena football and the two leagues. With its 50.1 percent ownership of af2, the AFL's bankruptcy and dissolution
prompted the dissolution of af2 as well. That league was formally considered disbanded on September 8, 2009, when
no owner committed his or her team to the league's eleventh season by that deadline. For legal reasons, af2 league
officials and owners agreed to form a new legal entity, Arena Football 1 (AF1), with former AFL teams the Arizona
Rattlers and Orlando Predators joining the former af2. All assets of the Arena Football League were put up for auction.
On November 11, 2009, the new league announced its intention to purchase the entire assets of the former AFL; the
assets included the team names and logos of all but one of the former AFL and af2 teams. The lone exception was that
of the Dallas Desperados; Desperados owner Jerry Jones had purposely designed the Desperados' properties around those
of the Dallas Cowboys, making the two inseparable. The auction occurred on November 25, 2009. The assets were awarded
to Arena Football 1 on December 7, 2009, with a winning bid of $6.1 million. On February 17, 2010, AF1 announced
it would use the "Arena Football League" name. The league announced plans for the upcoming season and details of
its contract with NFL Network to broadcast AFL games in 2010. AF1 teams were given the option of restoring historical
names to their teams. In addition to the historical teams, the league added two new expansion franchises, the Dallas
Vigilantes and the Jacksonville Sharks. For the 2011 season, the Philadelphia Soul, Kansas City Brigade, San Jose
SaberCats, New Orleans VooDoo, and the Georgia Force returned to the AFL after having last played in 2008. However,
the Grand Rapids Rampage, Colorado Crush, Columbus Destroyers, Los Angeles Avengers, and the New York Dragons did
not return. The league added one expansion team, the Pittsburgh Power. Former Pittsburgh Steelers wide receiver Lynn
Swann was one of the team's owners. It was the first time the AFL returned to Pittsburgh since the Pittsburgh Gladiators
were an original franchise in 1987 before becoming the Tampa Bay Storm. The Brigade changed its name to the Command,
becoming the Kansas City Command. Although a returning team, the Bossier–Shreveport Battle Wings moved to New Orleans as the VooDoo, taking over the identity formerly owned by New Orleans Saints owner Tom Benson. The Alabama Vipers
moved to Duluth, Georgia to become the new Georgia Force (the earlier franchise of that name being a continuation
of the first Nashville Kats franchise). On October 25, 2010, it was announced that the Oklahoma City Yard Dawgz would not return. The Milwaukee
Iron also changed names to the Milwaukee Mustangs, the name of Milwaukee's original AFL team that had existed from
1994 to 2001. In 2012, the AFL celebrated its silver anniversary for its 25th season of operations. The season kicked
off on March 9, 2012. The Tulsa Talons moved to San Antonio, Texas and Jeffrey Vinik became owner of the Tampa Bay
Storm. The Dallas Vigilantes were left off the schedule for the 2012 season with no announcement from the management,
raising speculations that either the team had suspended operations for the season or was ceasing operations altogether.
(Apparently the latter was the case as the organization did not field a team for the 2013 season or any subsequent
one either.) Like the National Football League, the AFL postponed the free agency period to October 31 due to Hurricane
Sandy. It was announced on December 12, 2012, that the AFL reached a partnership agreement with NET10 Wireless to
be the first non-motorsports-related professional sports league in the United States to have a title sponsor, renaming
it the NET10 Wireless Arena Football League. The redesigned website showed the new logo which incorporated the current
AFL logo with the one from NET10 Wireless. The title sponsorship agreement ended in 2014 after a two-year partnership.
In 2013, the league expanded with the addition of two new franchises to play in 2014, the Los Angeles Kiss (owned
by Gene Simmons and Paul Stanley of the legendary rock band Kiss) and the Portland Thunder. In 2014, the league announced
the granting of a new franchise to former Mötley Crüe frontman Vince Neil, previously part-owner of the Jacksonville
Sharks. That franchise, the Las Vegas Outlaws, were originally to play in Las Vegas at the MGM Grand Garden Arena
in 2015, but instead played their home games at the Thomas & Mack Center, previous home to the Las Vegas Sting and
Las Vegas Gladiators. After 20 years as a familiar name to the league, an AFL mainstay, the Iowa Barnstormers, departed
the league to join the Indoor Football League. The San Antonio Talons folded on October 13, 2014, after the league
(which owned the team) failed to find a new owner. On November 16, 2014, despite a successful season on the field, the Pittsburgh Power became the second team to cease operations after the 2014 season, a result of poor attendance. The league later announced that the Power would go dormant for 2015 while it searched for new ownership.
Jerry Kurz also stepped down as commissioner of the AFL as he was promoted to be the AFL's first president. Former
Foxwoods CEO Scott Butera was hired as his successor as commissioner. On August 9, 2015, ESPN reported that the New
Orleans VooDoo and Las Vegas Outlaws had ceased operations, effective immediately, a claim which was subsequently
validated on the AFL website. On September 1, 2015, the Spokane Shock officially left the AFL and joined the IFL
under the new name Spokane Empire, becoming the fifth active AFL/af2 franchise to leave for the IFL since bankruptcy
(Iowa Barnstormers, Tri-Cities Fever, Green Bay Blizzard and Arkansas Twisters—now the Texas Revolution—left previously).
Rumors began swirling with regards to bringing the AFL back to Austin and San Antonio, Texas. Both cities have hosted
franchises in the past (Austin Wranglers, San Antonio Force and San Antonio Talons), but an AFL spokesman, BJ Pickard,
was quoted as saying, "News to me." Announcements have yet to be made on any sort of expansion plans. An expected
"big announcement" on Friday, October 30 at a San Antonio Spurs game never came to fruition. On November 12, the
league announced the defending champion San Jose SaberCats would be ceasing operations due to "reasons unrelated
to League operations". A statement from the league indicated that the AFL is working to secure new, long-term owners
for the franchise. This leaves the AFL with eight teams for 2016. On January 6, 2016, the league took over "ownership
and operational control" of the Portland Thunder from its previous owners. The AFL stated this move was made after
months of trying to work out an arrangement "to provide financial and operational support." On February 3, 2016, it
was announced that the franchise would start from scratch and no longer be called the "Thunder," as the name and trademarks belong to former franchise owner Terry Emmert (similar to the Jerry Jones move with the Desperados). AFL commissioner Scott Butera said that a new identity would be announced at a later date. The league's 2016 schedule, announced
on the league's website on December 10, 2015, shows an eight-team league playing a 16-game regular season over 18
weeks, with two bye weeks for each team, one on a rotational basis and the other a "universal bye" for all teams
during the Independence Day weekend, the first weekend in July. All teams will qualify for the postseason, meaning
that the regular season will serve only to establish seeding. From the league's inception through ArenaBowl XVIII,
the championship game was played at the home of the highest-seeded remaining team. The AFL then switched to a neutral-site
championship, with ArenaBowls XIX and XX in Las Vegas. New Orleans Arena, home of the New Orleans VooDoo, served
as the site of ArenaBowl XXI on July 29, 2007. This was the first professional sports championship to be staged in
the city since Hurricane Katrina struck in August 2005. The San Jose SaberCats earned their third championship in
six years by defeating the Columbus Destroyers 55–33. ArenaBowl XXI in New Orleans was deemed a success, and the
city was chosen to host ArenaBowl XXII, in which the Philadelphia Soul defeated the defending champion San Jose SaberCats.
In 2010, the location returned to being decided by which of the two participating teams was seeded higher. In ArenaBowl
XXIII, the Spokane Shock defeated the Tampa Bay Storm at their home arena, Spokane Arena, in Spokane, Washington.
In ArenaBowl XXIV, the Jacksonville Sharks, coming off of a victory in their conference final game four nights earlier,
traveled to US Airways Center in Phoenix and defeated the Arizona Rattlers 73–70. ArenaBowl XXV returned to a neutral
site and was once again played in New Orleans, where the Rattlers returned and defeated the Philadelphia Soul. Since
2014, the ArenaBowl has been played at the home arena of the higher-seeded team. The practice of playing one or two preseason exhibition
games by each team before the start of the regular season was discontinued when the NBC contract was initiated, and
the regular season was extended from 14 games, its length since 1996, to 16 games from 2001 to 2010 and again since 2016. From 2011 to 2015, the regular season expanded to 18 games, with each team having two bye weeks
and the option of two preseason games. In August 2012, the AFL announced a new project in China, known as the China American Football League. The CAFL project is headed by ESPN NFL analyst and Philadelphia Soul majority owner and president Ron Jaworski. The plans were to establish a six-team league that would play a 10-week schedule that was
slated to start in October 2014. AFL coaches and trainers would travel to China to help teach the rules of the sport to squads made up of Chinese and American players, with the goal of starting an official Chinese arena league.
Ganlan Media International was given exclusive rights to the new Chinese league. AFL Global and Ganlan Media were created in 2012 by businessman Martin E. Judge, founder and owner of the Judge Group. AFL Global, LLC looks to introduce and launch professional Arena Football teams and franchises in various locations throughout
the world (like NFL Europe). After their successful trip to China to help promote the game, they formally announced
plans to further develop AFL China by the fall of 2014 by starting a comprehensive training program in May 2013 with
exhibition games planned for the cities of Beijing and Guangzhou in October. This would be the first time professional football of any kind would be played in China with the support of the Chinese government and the CRFA (Chinese Rugby Football Association). Key persons involved include founder and CEO Martin E. Judge, partner Ron Jaworski, CAFL
CEO Gary Morris and president David Niu. Ganlan Media has since dropped this name and will carry the league's name
as its corporate identity. The scheduled 2014 launch proved too ambitious for the group; its official website now cites an anticipated beginning of professional play in 2016 and shows photos from a six-team collegiate tournament staged in early November 2015. Beginning with the 2003 season, the AFL made a deal with NBC to televise league games,
which was renewed for another two years in 2005. In conjunction with this, the league moved the beginning of the
season from May to February (the week after the NFL's Super Bowl) and scheduled most of its games on Sunday instead
of Friday or Saturday as it had in the past. In 2006, because of the XX Winter Olympic Games, the Stanley Cup playoffs
and the Daytona 500, NBC scaled back from weekly coverage to scattered coverage during the regular season, but committed
to a full playoff schedule ending with the 20th ArenaBowl. NBC and the Arena Football League officially severed ties
on June 30, 2006, having failed to reach a new broadcast deal. Las Vegas owner Jim Ferraro stated during a radio
interview that the reason why a deal failed is because ESPN refused to show highlights or even mention a product
being broadcast on NBC. On December 19, 2006, ESPN announced the purchase of a minority stake in the AFL. This deal
included television rights for the ESPN family of networks. ESPN would televise a minimum of 17 regular season games,
most on Monday nights, and nine playoff games, including ArenaBowl XXI on ABC. The deal resulted in added exposure
on ESPN's SportsCenter. However, after the original AFL filed for bankruptcy, this arrangement did not carry over
to the new AFL, which is a separate legal entity. The AFL also had a regional-cable deal with FSN, where FSN regional
affiliates in AFL markets carried local team games. In some areas, such as with the Arizona Rattlers, Fox Sports
affiliates still carry the games. After its return in 2010, the AFL had its national television deal with the NFL
Network for a weekly Friday night game. All AFL games not on the NFL Network could be seen for free online, provided
by Ustream. The NFL Network ceased airing Arena Football League games partway through the 2012 season as a result
of ongoing labor problems within the league. Briefly, the games were broadcast on a tape delay to prevent the embarrassment
that would result should the players stage a work stoppage immediately prior to a scheduled broadcast. (In at least
one instance this actually happened, resulting in a non-competitive game being played with replacement players,
and further such incidents were threatened.) Once the labor issues were resolved, the NFL Network resumed the practice
of broadcasting a live Friday night game. NFL Network dropped the league at the end of the 2012 season. For the 2013
season, the league's new national broadcast partner was the CBS Sports Network. CBSSN would air 19 regular season
games and two playoff games. CBS would also air the ArenaBowl, marking the first time since 2008 that the league's
finale aired on network television. Regular season CBSSN broadcast games are usually on Saturday nights. As the games
are being shown live, the start times are not uniform as with most football broadcast packages, but vary with the
time zone in which the home team is located. This means that the AFL may appear either before or after CBSSN's featured Major League Lacrosse game. Starting in 2014, ESPN returned to the AFL as a broadcast partner, with
weekly games being shown on CBS Sports Network, ESPN, ESPN2, ESPNEWS along with all games being broadcast on ESPN3
for free live on WatchESPN. ArenaBowl XXVII was also broadcast on ESPN. Most teams also have a local TV station broadcast
their games locally and all games are available on local radio. The first video game based on the AFL was Arena Football
for the C-64 released in 1988. On May 18, 2000, Kurt Warner's Arena Football Unleashed was released by Midway Games
for the PlayStation game console. On February 7, 2006 EA Sports released Arena Football for the PlayStation 2 and
Xbox. EA Sports released another AFL video game, titled Arena Football: Road to Glory, on February 21, 2007, for
the PlayStation 2. In 2001, Jeff Foley published War on the Floor: An Average Guy Plays in the Arena Football League
and Lives to Write About It. The book details a journalist's two preseasons (1999 and 2000) as an offensive specialist/writer
with the now-defunct Albany Firebirds. The 5-foot-6 (168 cm), self-described "unathletic writer" played in three
preseason games and had one catch for −2 yards. The AFL currently operates under the single-entity model, with the
league owning the rights to the teams, players, and coaches. The single-entity model was adopted in 2010 when the
league emerged from bankruptcy. Prior to that, the league followed the franchise model more common in North American
professional sports leagues; each team essentially operated as its own business and the league itself was a separate
entity which in exchange for franchise fees paid by the team owners provided rules, officials, scheduling and the
other elements of organizational structure. A pool of money is allotted to teams to aid in travel costs. Average
attendance for AFL games was around 10,000–11,000 per game in the 1990s, though during the recession connected to
the dot-com bubble and the September 11, 2001 attacks average attendance dropped below 10,000 for several years.
From the start of the 2004 season until the final season of the original league in 2008, average attendance was above
12,000, with 12,392 in 2007. Eleven of the seventeen teams in operation in 2007 had average attendance figures over
13,000. In 2008, the overall attendance average increased to 12,957, with eight teams exceeding 13,000 per game.
In 2010, the first year of the reconstituted league following bankruptcy, the overall attendance average decreased
to 8,135, with only one team (Tampa Bay) exceeding 13,000 per game.
The term dialect (from Latin dialectus, dialectos, from the ancient Greek word διάλεκτος diálektos, "discourse", from διά
diá, "through" and λέγω legō, "I speak") is used in two distinct ways to refer to two different types of linguistic
phenomena. One usage—the more common among linguists—refers to a variety of a language that is a characteristic of
a particular group of the language's speakers. The term is applied most often to regional speech patterns, but a
dialect may also be defined by other factors, such as social class. A dialect that is associated with a particular
social class can be termed a sociolect, a dialect that is associated with a particular ethnic group can be termed
an ethnolect, and a regional dialect may be termed a regiolect. According to this definition, any variety of a language
constitutes "a dialect", including any standard varieties. The other usage refers to a language that is socially
subordinated to a regional or national standard language, often historically cognate or related to the standard language,
but not actually derived from it. In this sense, unlike in the first usage, the standard language would not itself
be considered a "dialect," as it is the dominant language in a particular state or region, whether in terms of social
or political status, official status, predominance or prevalence, or all of the above. Meanwhile, the "dialects"
subordinate to the standard language are generally not variations on the standard language but rather separate (but
often related) languages in and of themselves. For example, most of the various regional Romance languages of Italy,
often colloquially referred to as Italian "dialects," are, in fact, not actually derived from modern standard Italian,
but rather evolved from Vulgar Latin separately and individually from one another and independently of standard Italian,
long prior to the diffusion of a national standardized language throughout what is now Italy. These various Latin-derived
regional languages are therefore, in a linguistic sense, not truly "dialects" of the standard Italian language, but
are instead better defined as their own separate languages. Conversely, with the spread of standard Italian throughout
Italy in the 20th century, various regional versions or varieties of standard Italian developed, generally as a mix
of the national standard Italian with local regional languages and local accents. These variations on standard Italian,
known as regional Italian, would more appropriately be called "dialects" in accordance with the first linguistic
definition of "dialect," as they are in fact derived partially or mostly from standard Italian. A dialect is distinguished
by its vocabulary, grammar, and pronunciation (phonology, including prosody). Where a distinction can be made only
in terms of pronunciation (including prosody, or just prosody itself), the term accent may be preferred over dialect.
Other types of speech varieties include jargons, which are characterized by differences in lexicon (vocabulary);
slang; patois; pidgins; and argots. A standard dialect (also known as a standardized dialect or "standard language")
is a dialect that is supported by institutions. Such institutional support may include government recognition or
designation; presentation as being the "correct" form of a language in schools; published grammars, dictionaries,
and textbooks that set forth a correct spoken and written form; and an extensive formal literature that employs that
dialect (prose, poetry, non-fiction, etc.). There may be multiple standard dialects associated with a single language.
For example, Standard American English, Standard British English, Standard Canadian English, Standard Indian English,
Standard Australian English, and Standard Philippine English may all be said to be standard dialects of the English
language. A nonstandard dialect, like a standard dialect, has a complete vocabulary, grammar, and syntax, but is
usually not the beneficiary of institutional support. Examples of a nonstandard English dialect are Southern American
English, Western Australian English, Scouse and Tyke. The Dialect Test was designed by Joseph Wright to compare different
English dialects with each other. There is no universally accepted criterion for distinguishing two different languages
from two dialects (i.e. varieties) of the same language. A number of rough measures exist, sometimes leading to contradictory
results. The distinction is therefore subjective and depends on the user's frame of reference. For example, there
is discussion about whether Limón Creole English should be considered "a kind" of English or a different language. This creole is spoken on the Caribbean coast of Costa Rica (Central America) by descendants of Jamaican people. The position that Costa Rican linguists support depends on the university to which they belong. The most common, and most purely
linguistic, criterion is that of mutual intelligibility: two varieties are said to be dialects of the same language
if being a speaker of one variety confers sufficient knowledge to understand and be understood by a speaker of the
other; otherwise, they are said to be different languages. However, this definition becomes problematic in the case
of dialect continua, in which it may be the case that dialect B is mutually intelligible with both dialect A and
dialect C but dialects A and C are not mutually intelligible with each other. In this case the criterion of mutual
intelligibility makes it impossible to decide whether A and C are dialects of the same language or not. Cases may
also arise in which a speaker of dialect X can understand a speaker of dialect Y, but not vice versa; the mutual
intelligibility criterion founders here as well. Another occasionally used criterion for discriminating dialects
from languages is that of linguistic authority, a more sociolinguistic notion. According to this definition, two
varieties are considered dialects of the same language if (under at least some circumstances) they would defer to
the same authority regarding some questions about their language. For instance, to learn the name of a new invention,
or an obscure foreign species of plant, speakers of Bavarian German and East Franconian German might each consult
a German dictionary or ask a German-speaking expert in the subject. By way of contrast, although Yiddish is classified
by linguists as a language in the "Middle High German" group of languages, a Yiddish speaker would not consult a
German dictionary to determine the word to use in such a case. By the definition most commonly used by linguists,
any linguistic variety can be considered a "dialect" of some language—"everybody speaks a dialect". According to
that interpretation, the criteria above merely serve to distinguish whether two varieties are dialects of the same
language or dialects of different languages. In 1967, Heinz Kloss developed the framework of abstand and ausbau languages to describe speech communities that, while unified politically and/or culturally, include multiple dialects which, though closely related genetically, may be divergent to the point of inter-dialect unintelligibility. The terms
"language" and "dialect" are not necessarily mutually exclusive: There is nothing contradictory in the statement
"the language of the Pennsylvania Dutch is a dialect of German". There are various terms that linguists may use to
avoid taking a position on whether the speech of a community is an independent language in its own right or a dialect
of another language. Perhaps the most common is "variety"; "lect" is another. A more general term is "languoid",
which does not distinguish between dialects, languages, and groups of languages, whether genealogically related or
not. In many societies, however, a particular dialect, often the sociolect of the elite class, comes to be identified
as the "standard" or "proper" version of a language by those seeking to make a social distinction, and is contrasted
with other varieties. As a result of this, in some contexts the term "dialect" refers specifically to varieties with
low social status. In this secondary sense of "dialect", language varieties are often called dialects rather than
languages: The status of "language" is not solely determined by linguistic criteria, but it is also the result of
a historical and political development. Romansh came to be a written language, and therefore it is recognized as
a language, even though it is very close to the Lombardic alpine dialects. An opposite example is the case of Chinese,
whose variations such as Mandarin and Cantonese are often called dialects and not languages, despite their mutual
unintelligibility. Modern nationalism, as developed especially since the French Revolution, has made the distinction
between "language" and "dialect" an issue of great political importance. A group speaking a separate "language" is
often seen as having a greater claim to being a separate "people", and thus to be more deserving of its own independent
state, while a group speaking a "dialect" tends to be seen not as "a people" in its own right, but as a sub-group,
part of a bigger people, which must content itself with regional autonomy.[citation needed] The distinction between
language and dialect is thus inevitably made at least as much on a political basis as on a linguistic one, and can
lead to great political controversy, or even armed conflict. The Yiddish linguist Max Weinreich published the expression,
A shprakh iz a dialekt mit an armey un flot ("אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמײ און פֿלאָט": "A language
is a dialect with an army and navy") in YIVO Bleter 25.1, 1945, p. 13. The significance of the political factors
in any attempt at answering the question "what is a language?" is great enough to cast doubt on whether any strictly
linguistic definition, without a socio-cultural approach, is possible. This is illustrated by the frequency with
which the army-navy aphorism is cited. When talking about the German language, the term German dialects is only used
for the traditional regional varieties. That allows them to be distinguished from the regional varieties of modern
standard German. The German dialects show a wide spectrum of variation. Most of them are not mutually intelligible.
German dialectology traditionally names the major dialect groups after Germanic tribes from which they were assumed
to have descended.[citation needed] The extent to which the dialects are spoken varies according to a number of factors:
In Northern Germany, dialects are less common than in the South. In cities, dialects are less common than in the
countryside. In a public environment, dialects are less common than in a familiar environment. The situation in Switzerland
and Liechtenstein is different from the rest of the German-speaking countries. The Swiss German dialects are the
default everyday language in virtually every situation, whereas standard German is seldom spoken. Some Swiss German
speakers perceive standard German to be a foreign language. The Low German varieties spoken in Germany are often
counted among the German dialects. This reflects the modern situation where they are roofed by standard German. This
is different from the situation in the Middle Ages when Low German had strong tendencies towards an ausbau language.
Italy is home to a vast array of native regional minority languages, most of which are Romance-based and have their
own local variants. These regional languages are often referred to colloquially or in non-linguistic circles as Italian
"dialects," or dialetti (standard Italian for "dialects"). However, the majority of the regional languages in Italy
are in fact not actually "dialects" of standard Italian in the strict linguistic sense, as they are not derived from
modern standard Italian but instead evolved locally from Vulgar Latin independent of standard Italian, with little
to no influence from what is now known as "standard Italian." They are therefore better classified as individual
languages rather than "dialects." In addition to having evolved, for the most part, separately from one another and
with distinct individual histories, the Latin-based regional Romance languages of Italy are also better classified
as separate languages rather than true "dialects" due to the often high degree in which they lack mutual intelligibility.
Though the regional Italian languages are mostly mutually unintelligible, the exact degree of unintelligibility
varies, often correlating with geographical distance or geographical barriers between the languages, with some regional
Italian languages that are closer in geographical proximity to each other or closer to each other on the dialect
continuum being more or less mutually intelligible. For instance, a speaker of purely Eastern Lombard, a language
in Northern Italy's Lombardy region that includes the Bergamasque dialect, would have severely limited mutual intelligibility
with a purely standard Italian speaker and would be nearly completely unintelligible to a speaker of a pure Sicilian
language variant. Due to Eastern Lombard's status as a Gallo-Italic language, an Eastern Lombard speaker may, in
fact, have more mutual intelligibility with an Occitan, Catalan, or French speaker than with a standard Italian or Sicilian language speaker. Meanwhile, a Sicilian language speaker would have a greater degree of mutual intelligibility with
a speaker of the more closely related Neapolitan language, but far less mutual intelligibility with a person speaking
Sicilian Gallo-Italic, a language that developed in isolated Lombard emigrant communities on the same island as the
Sicilian language. Modern standard Italian itself is heavily based on the Latin-derived Florentine Tuscan language.
The Tuscan-based language that would eventually become modern standard Italian had been used in poetry and literature
since at least the 12th century, and it first became widely known in Italy through the works of authors such as Dante
Alighieri, Giovanni Boccaccio, Niccolò Machiavelli, and Petrarch. Dante's Florentine-Tuscan literary Italian thus
became the language of the literate and upper class in Italy, and it spread throughout the peninsula as the lingua
franca among the Italian educated class as well as Italian traveling merchants. The economic prowess and cultural
and artistic importance of Tuscany in the Late Middle Ages and the Renaissance further encouraged the diffusion of
the Florentine-Tuscan Italian throughout Italy and among the educated and powerful, though local and regional languages
remained the main languages of the common people. During the Risorgimento, proponents of Italian republicanism and
Italian nationalism, such as Alessandro Manzoni, stressed the importance of establishing a uniform national language
in order to better create an Italian national identity. With the unification of Italy in the 1860s, standard Italian
became the official national language of the new Italian state, while the various unofficial regional languages of
Italy gradually became regarded as subordinate "dialects" to Italian, increasingly associated negatively with lack
of education or provincialism. However, at the time of the Italian Unification, standard Italian still existed mainly
as a literary language, and only 2.5% of Italy's population could speak standard Italian. In the early 20th century,
the vast conscription of Italian men from all throughout Italy during World War I is credited with facilitating the
diffusion of standard Italian among less educated Italian men, as these men from various regions with various regional
languages were forced to communicate with each other in a common tongue while serving in the Italian military. With
the eventual spread of the radio and television throughout Italy and the establishment of public education, Italians
from all regions were increasingly exposed to standard Italian, while literacy rates among all social classes improved.
Today, the majority of Italians are able to speak standard Italian, though many Italians still speak their regional
language regularly or as their primary day-to-day language, especially at home with family or when communicating
with Italians from the same town or region. However, to some Italians, speaking a regional language, especially in
a formal setting or outside of one's region, may carry a stigma or negative connotations associated with being lower
class, uneducated, boorish, or overly informal. Italians in different regions today may also speak regional varieties
of standard Italian, or regional Italian dialects, which, unlike the majority of languages of Italy, are actually
dialects of standard Italian rather than separate languages. A regional Italian dialect is generally standard Italian
that has been heavily influenced or mixed with local or regional native languages and accents. The languages of Italy
are primarily Latin-based Romance languages, with the most widely spoken languages falling within the Italo-Dalmatian
language family. This wide category includes numerous regional languages. The Sardinian language is considered to be its own Romance language
family, separate not only from standard Italian but also the wider Italo-Dalmatian family, and it includes the Campidanese
Sardinian and Logudorese Sardinian variants. However, Gallurese, Sassarese, and Corsican are also spoken in Sardinia,
and these languages are considered closely related or derived from the Italian Tuscan language and thus are Italo-Dalmatian
languages. Furthermore, the Gallo-Romance language of Ligurian and the Catalan Algherese dialect are also spoken
in Sardinia. The classification of speech varieties as dialects or languages and their relationship to other varieties
of speech can be controversial and the verdicts inconsistent. English and Serbo-Croatian illustrate the point. English
and Serbo-Croatian each have two major variants (British and American English, and Serbian and Croatian, respectively),
along with numerous other varieties. For political reasons, analyzing these varieties as "languages" or "dialects"
yields inconsistent results: British and American English, spoken by close political and military allies, are almost
universally regarded as dialects of a single language, whereas the standard languages of Serbia and Croatia, which
differ from each other to a similar extent as the dialects of English, are being treated by some linguists from the
region as distinct languages, largely because the two countries oscillate from being brotherly to being bitter enemies.
(The Serbo-Croatian language article deals with this topic much more fully.) Similar examples abound. Macedonian,
although mutually intelligible with Bulgarian, certain dialects of Serbian and to a lesser extent the rest of the
South Slavic dialect continuum, is considered by Bulgarian linguists to be a Bulgarian dialect, in contrast with
the contemporary international view and the view in the Republic of Macedonia, which regards it as a language in
its own right. Nevertheless, before the establishment of a literary standard for Macedonian in 1944, most sources
in and out of Bulgaria referred to the southern Slavonic dialect continuum covering the area of
today's Republic of Macedonia as Bulgarian dialects. In Lebanon, a part of the Christian population
considers "Lebanese" to be in some sense a distinct language from Arabic and not merely a dialect. During the civil
war Christians often used Lebanese Arabic officially, and sporadically used the Latin script to write Lebanese, thus
further distinguishing it from Arabic. All Lebanese laws are written in the standard literary form of Arabic, though
parliamentary debate may be conducted in Lebanese Arabic. In Tunisia, Algeria, and Morocco, the Darijas (the spoken North
African varieties) are sometimes considered to differ markedly from other Arabic dialects. Officially, North African
countries give preference to Literary Arabic and conduct much of their political and religious life
in it (owing to its place in Islam), and refrain from declaring each country's specific variety to be a separate language,
because Literary Arabic is the liturgical language of Islam and the language of the Islamic sacred book, the Qur'an.
Nevertheless, especially since the 1960s, the Darijas have gained increasing use and influence in the cultural life
of these countries. Examples of cultural domains where the Darijas' use has become dominant include theatre, film, music,
television, advertising, social media, folk-tale books and company names. In the 19th century, the Tsarist Government
of the Russian Empire claimed that Ukrainian was merely a dialect of Russian and not a language on its own. The differences
were few and caused by the conquest of western Ukraine by the Polish-Lithuanian Commonwealth. However, the dialects
in Ukraine eventually differed substantially from the dialects in Russia. The German Empire conquered Ukraine during
World War I and was planning on either annexing it or installing a puppet king, but was defeated by the Entente,
with major involvement by the Ukrainian Bolsheviks. After conquering the rest of Ukraine from the Whites, Ukraine
joined the USSR and was enlarged (gaining Crimea and then Eastern Galicia), whence a process of Ukrainization was
begun, with encouragement from Moscow. After World War II, due to Ukrainian collaborationism with the Axis powers
in an attempt to gain independence, Moscow changed its policy towards repression of the Ukrainian language. Today
the boundary between the Ukrainian and Russian languages is still not clearly drawn, with an intermediate
dialect between them, called Surzhyk, developing in Ukraine. There have been cases of a variety of speech being deliberately
reclassified to serve political purposes. One example is Moldovan. In 1996, the Moldovan parliament, citing fears
of "Romanian expansionism", rejected a proposal from President Mircea Snegur to change the name of the language to
Romanian, and in 2003 a Moldovan–Romanian dictionary was published, purporting to show that the two countries speak
different languages. Linguists of the Romanian Academy reacted by declaring that all the Moldovan words were also
Romanian words; while in Moldova, the head of the Academy of Sciences of Moldova, Ion Bărbuţă, described the dictionary
as a politically motivated "absurdity". Unlike most written languages, whose alphabets indicate pronunciation, Chinese
is written with characters that developed from logograms and do not always give hints to their pronunciation. Although the written
characters have remained relatively consistent for the last two thousand years, the pronunciation and grammar in different
regions have diverged to an extent that the varieties of the spoken language are often mutually unintelligible. As
a result of a series of southward migrations throughout history, the regional languages of the south, including Xiang,
Wu, Gan, Min, Yue (Cantonese), and Hakka, often show traces of Old Chinese or Middle Chinese. From the Ming dynasty
onward, Beijing has been the capital of China and the dialect spoken in Beijing has had the most prestige among other
varieties. With the founding of the Republic of China, Standard Mandarin was designated as the official language,
based on the spoken language of Beijing. Since then, other spoken varieties have been regarded as fangyan (dialects). Cantonese
is still the most commonly used language in Hong Kong, Macau and among some overseas Chinese communities, whereas
Southern Min has been accepted in Taiwan as an important local language along with Mandarin. Many historical linguists
view any speech form as a dialect of the older medium of communication from which it developed.[citation needed]
This point of view sees the modern Romance languages as dialects of Latin, modern Greek as a dialect of Ancient Greek,
Tok Pisin as a dialect of English, and the North Germanic languages as dialects of Old Norse. This paradigm is not entirely problem-free.
It sees genetic relationships as paramount: the "dialects" of a "language" (which itself may be a "dialect" of a
yet older language) may or may not be mutually intelligible. Moreover, a parent language may spawn several "dialects"
which themselves subdivide any number of times, with some "branches" of the tree changing more rapidly than others.
This can give rise to the situation in which two dialects (defined according to this paradigm) with a somewhat distant
genetic relationship are mutually more readily comprehensible than more closely related dialects. In one opinion,
this pattern is clearly present among the modern Romance languages, with Italian and Spanish having a high degree
of mutual comprehensibility, which neither language shares with French, despite some claiming that both languages
are genetically closer to French than to each other:[citation needed] In fact, French-Italian and French-Spanish
relative mutual incomprehensibility is due to French having undergone more rapid and more pervasive phonological
change than have Spanish and Italian, not to real or imagined distance in genetic relationship. In fact, Italian
and French share many root words in common that do not even appear in Spanish. For example, the Italian and
French words for various foods, some family relationships, and body parts are very similar to each other, yet most
of those words are completely different in Spanish. Italian "avere" and "essere" as auxiliaries for forming compound
tenses are used similarly to French "avoir" and "être". Spanish only retains "haber" and has done away with "ser"
in forming compound tenses. However, when it comes to phonological structures, Italian and Spanish have undergone
less change than French, with the result that some native speakers of Italian and Spanish may attain a degree of
mutual comprehension that permits extensive communication.[citation needed] One language, Interlingua, was developed
so that the languages of Western civilization would act as its dialects. Drawing from such concepts as the international
scientific vocabulary and Standard Average European, linguists[who?] developed a theory that the modern Western languages
were actually dialects of a hidden or latent language.[citation needed] Researchers at the International Auxiliary
Language Association extracted words and affixes that they considered to be part of Interlingua's vocabulary. In
theory, speakers of the Western languages would understand written or spoken Interlingua immediately, without prior
study, since their own languages were its dialects. This has often turned out to be true, especially, but not solely,
for speakers of the Romance languages and educated speakers of English. Interlingua has also been found to assist
in the learning of other languages. In one study, Swedish high school students learning Interlingua were able to
translate passages from Spanish, Portuguese, and Italian that students of those languages found too difficult to
understand. The vocabulary of Interlingua, however, extends beyond the Western language families.
The city of Bern or Berne (German: Bern, pronounced [bɛrn]; French: Berne [bɛʁn]; Italian: Berna [ˈbɛrna]; Romansh:
Berna [ˈbɛrnɐ]; Bernese German: Bärn [b̥æːrn]) is the de facto capital of Switzerland, referred to by
the Swiss as their Bundesstadt (German for "federal city"). With a population of 140,634 (November
2015), Bern is the fifth most populous city in Switzerland. The Bern agglomeration, which includes 36 municipalities,
had a population of 406,900 in 2014. The metropolitan area had a population of 660,000 in 2000. Bern is also the
capital of the Canton of Bern, the second most populous of Switzerland's cantons. The official language of Bern is
(the Swiss variety of Standard) German, but the main spoken language is the Alemannic Swiss German dialect called
Bernese German. In 1983 the historic old town in the centre of Bern became a UNESCO World Heritage Site. Bern is
ranked among the world’s top ten cities for the best quality of life (2010). The etymology of the name Bern is uncertain.
According to the local legend, based on folk etymology, Berchtold V, Duke of Zähringen, the founder of the city of
Bern, vowed to name the city after the first animal he met on the hunt, and this turned out to be a bear. It has
long been considered likely that the city was named after the Italian city of Verona, which at the time was known
as Bern in Middle High German. As a result of the find of the Bern zinc tablet in the 1980s, it is now more common
to assume that the city was named after a pre-existing toponym of Celtic origin, possibly *berna "cleft". The bear
was the heraldic animal of the seal and coat of arms of Bern from at least the 1220s. The earliest reference to the
keeping of live bears in the Bärengraben dates to the 1440s. No archaeological evidence that indicates a settlement
on the site of today's city centre prior to the 12th century has been found so far. In antiquity, a Celtic oppidum
stood on the Engehalbinsel (peninsula) north of Bern, fortified since the 2nd century BC (late La Tène period), thought
to be one of the twelve oppida of the Helvetii mentioned by Caesar. During the Roman era, there was a Gallo-Roman
vicus on the same site. The Bern zinc tablet has the name Brenodor ("dwelling of Breno"). In the Early Middle Ages,
there was a settlement in Bümpliz, now a city district of Bern, some 4 km (2 mi) from the medieval city. The medieval
city is a foundation of the Zähringer ruling family, which rose to power in Upper Burgundy in the 12th century. According
to 14th century historiography (Cronica de Berno, 1309), Bern was founded in 1191 by Berthold V, Duke of Zähringen.
In 1353 Bern joined the Swiss Confederacy, becoming one of the eight cantons of the formative period of 1353 to 1481.
Bern invaded and conquered Aargau in 1415 and Vaud in 1536, as well as other smaller territories; thereby becoming
the largest city-state north of the Alps, by the 18th century comprising most of what is today the canton of Bern
and the canton of Vaud. The city grew out towards the west of the boundaries of the peninsula formed by the River
Aare. The Zytglogge tower marked the western boundary of the city from 1191 until 1256, when the Käfigturm took over
this role until 1345. It was, in turn, succeeded by the Christoffelturm (formerly located close to the site of the
modern-day railway station) until 1622. During the time of the Thirty Years' War, two new fortifications – the so-called
big and small Schanze (entrenchment) – were built to protect the whole area of the peninsula. After a major blaze
in 1405, the city's original wooden buildings were gradually replaced by half-timbered houses and subsequently the
sandstone buildings which came to be characteristic for the Old Town. Despite the waves of pestilence that hit Europe
in the 14th century, the city continued to grow: mainly due to immigration from the surrounding countryside. Bern
was occupied by French troops in 1798 during the French Revolutionary Wars, when it was stripped of parts of its
territories. It regained control of the Bernese Oberland in 1802, and following the Congress of Vienna of 1814, it
newly acquired the Bernese Jura. At this time, it once again became the largest canton of the confederacy as it stood
during the Restoration and until the secession of the canton of Jura in 1979. Bern was made the Federal City (seat
of the Federal Assembly) within the new Swiss federal state in 1848. A number of congresses of the socialist First
and Second Internationals were held in Bern, particularly during World War I when Switzerland was neutral; see Bern
International. The city's population rose from about 5,000 in the 15th century to about 12,000 by 1800 and to above
60,000 by 1900, passing the 100,000 mark during the 1920s. Population peaked during the 1960s at 165,000, and has
since decreased slightly, to below 130,000 by 2000. As of October 2015, the resident population stood at 140,634,
of which 100,634 were Swiss citizens and 40,000 (30%) resident foreigners. A further estimated 350,000 people live
in the immediate urban agglomeration. Bern lies on the Swiss plateau in the Canton of Bern, slightly west of the
centre of Switzerland and 20 km (12 mi) north of the Bernese Alps. The countryside around Bern was formed by glaciers
during the most recent Ice Age. The two mountains closest to Bern are Gurten with a height of 864 m (2,835 ft) and
Bantiger with a height of 947 m (3,107 ft). The site of the old observatory in Bern is the point of origin of the
CH1903 coordinate system at 46°57′08.66″N 7°26′22.50″E. The
city was originally built on a hilly peninsula surrounded by the River Aare, but outgrew natural boundaries by the
19th century. A number of bridges have been built to allow the city to expand beyond the Aare. Bern is built on very
uneven ground. There is an elevation difference of several metres between the inner city districts on the Aare (Matte,
Marzili) and the higher ones (Kirchenfeld, Länggasse). Bern has an area, as of 2009, of 51.62 square kilometers
(19.93 sq mi). Of this area, 9.79 square kilometers (3.78 sq mi) or 19.0% is used for agricultural purposes, while
17.33 square kilometers (6.69 sq mi) or 33.6% is forested. Of the rest of the land, 23.25 square kilometers (8.98
sq mi) or 45.0% is settled (buildings or roads), 1.06 square kilometers (0.41 sq mi) or 2.1% is either rivers or
lakes and 0.16 square kilometers (0.062 sq mi) or 0.3% is unproductive land. The City Council (Gemeinderat) constitutes
the executive government of the City of Bern and operates as a collegiate authority. It is composed of five councilors
(German: Gemeinderat/-rätin), each presiding over a directorate (Direktion) comprising several departments and bureaus.
The president of the executive department acts as mayor (Stadtpräsident). In the mandate period 2013–2016 (Legislatur)
the City Council is presided over by Stadtpräsident Alexander Tschäppät. Departmental tasks, coordination measures and
implementation of laws decreed by the City Parliament are carried out by the City Council. The regular election of the
City Council by residents entitled to vote is held every four years. Any resident of Bern allowed to vote can be
elected as a member of the City Council. The delegates are selected by means of a Majorz (majority) system. The mayor is
likewise directly elected by public vote, while the heads of the other directorates are assigned by the collegiate body.
The executive body holds its meetings in the Erlacherhof, built by architect Albrecht Stürler after 1747. As of 2015,
Bern's City Council is made up of two representatives of the SP (Social Democratic Party, of whom one is also the
mayor), and one each of CVP (Christian Democratic Party), GB (Green Alliance of Berne), and FDP (FDP.The Liberals),
giving the left parties a majority of three out of five seats. The last election was held on 25 November 2012. The
City Parliament (de: Stadtrat, fr: Conseil de ville) holds legislative power. It is made up of 80 members, with elections
held every four years. The City Parliament decrees regulations and by-laws that are executed by the City Council
and the administration. The delegates are selected by means of a system of proportional representation. The sessions
of the City Parliament are public. Unlike members of the City Council, members of the City Parliament are not politicians
by profession, and they are paid a fee based on their attendance. Any resident of Bern allowed to vote can be elected
as a member of the City Parliament. The parliament holds its meetings in the Stadthaus (Town Hall). The last regular
election of the City Parliament was held on 25 November 2012 for the mandate period (German: Legislatur, French:
la législature) from 2013 to 2016. Currently the City Parliament consists of 23 members of the Social Democratic Party
(SP/PS), 11 Swiss People's Party (SVP/UDC), 8 Green Alliance of Berne (GB), 8 Grüne Freie Liste (GFL) (Green Free
List), 7 The Liberals (FDP/PLR), 7 Conservative Democratic Party (BDP/PBD), 7 Green Liberal Party (GLP/PVL), 2 Christian
Democratic People's Party (CVP/PDC), 2 Evangelical People's Party (EVP/PEV), 1 Junge Alternative (JA!) (or Young
Alternatives), 1 Grüne Partei Bern - Demokratische Alternative (GPB-DA) (or Green Party Bern - Democratic Alternative),
1 Swiss Party of Labour (PdA), 1 Alternative Linke Bern (AL) and finally one independent. The following parties combine
their parliamentary power in parliamentary groups (German: Fraktion(en)): Independent and AL and GPB-DA and PdA (4),
SP (23), GB and JA! (9), GFL and EVP (10), GLP (7), BDP and CVP (9), FDP (7), and SVP (11). This gives the left parties
an absolute majority of 46 seats. Bern has a population of 140,634 people, 34% of whom are resident
foreign nationals. Over the 10 years between 2000 and 2010, the population changed at a rate of 0.6%. Migration accounted
for 1.3%, while births and deaths accounted for −2.1%. Most of the population (as of 2000) speaks German
(104,465 or 81.2%) as their first language, Italian is the second most common (5,062 or 3.9%) and French is the third
(4,671 or 3.6%). There are 171 people who speak Romansh. The city council of Bern decided against having
twinned cities, except for a temporary cooperation with the Austrian city of Salzburg during UEFA Euro 2008. As
of 2008, the population was 47.5% male and 52.5% female. The population was made up of 44,032 Swiss men (35.4%
of the population) and 15,092 (12.1%) non-Swiss men. There were 51,531 Swiss women (41.4%) and 13,726 (11.0%) non-Swiss
women. Of the population in the municipality, 39,008 or about 30.3% were born in Bern and lived there in 2000. There
were 27,573 or 21.4% who were born in the same canton, while 25,818 or 20.1% were born somewhere else in Switzerland,
and 27,812 or 21.6% were born outside of Switzerland. As of 2000, children and teenagers (0–19 years old)
make up 15.1% of the population, while adults (20–64 years old) make up 65% and seniors (over 64 years old) make
up 19.9%. As of 2000, there were 59,948 people who were single and never married in the municipality. There
were 49,873 married individuals, 9,345 widows or widowers and 9,468 divorced individuals. As of 2000,
there were 67,115 private households in the municipality, and an average of 1.8 persons per household. There were
34,981 households consisting of only one person and 1,592 households with five or more people. In 2000,
a total of 65,538 apartments (90.6% of the total) were permanently occupied, while 5,352 apartments (7.4%) were seasonally
occupied and 1,444 apartments (2.0%) were empty. As of 2009, the construction rate of new housing units was
1.2 new units per 1000 residents. As of 2003, the average price to rent an average apartment in Bern was 1108.92
Swiss francs (CHF) per month (US$890, £500, €710 approx. exchange rate from 2003). The average rate for a one-room
apartment was 619.82 CHF (US$500, £280, €400), a two-room apartment was about 879.36 CHF (US$700, £400, €560), a
three-room apartment was about 1040.54 CHF (US$830, £470, €670) and a six or more room apartment cost an average
of 2094.80 CHF (US$1680, £940, €1340). The average apartment price in Bern was 99.4% of the national average of 1116
CHF. The vacancy rate for the municipality, in 2010, was 0.45%. From the 2000 census, 60,455 or 47.0%
belonged to the Swiss Reformed Church, while 31,510 or 24.5% were Roman Catholic. Of the rest of the population,
there were 1,874 members of an Orthodox church (or about 1.46% of the population), there were 229 persons (or about
0.18% of the population) who belonged to the Christian Catholic Church, and there were 5,531 persons (or about 4.30%
of the population) who belonged to another Christian church. There were 324 persons (or about 0.25% of the population)
who were Jewish, and 4,907 (or about 3.81% of the population) who were Muslim. There were 629 persons who were Buddhist,
1,430 persons who were Hindu and 177 persons who belonged to another church. 16,363 (or about 12.72% of the population)
belonged to no church, are agnostic or atheist, and 7,855 persons (or about 6.11% of the population) did not answer
the question. On 14 December 2014 the Haus der Religionen was inaugurated. The structure of Bern's city centre is
largely medieval and has been recognised by UNESCO as a Cultural World Heritage Site. Perhaps its most famous sight
is the Zytglogge (Bernese German for "Time Bell"), an elaborate medieval clock tower with moving puppets. It also
has an impressive 15th-century Gothic cathedral, the Münster, and a 15th-century town hall. Thanks to 6 kilometres
(4 miles) of arcades, the old town boasts one of the longest covered shopping promenades in Europe. Since the 16th
century, the city has had a bear pit, the Bärengraben, at the far end of the Nydeggbrücke to house its heraldic animals.
The four current bears are kept in an open-air enclosure nearby, and two other young bears, a present from the
Russian president, are kept in Dählhölzli zoo. The Federal Palace (Bundeshaus), built from 1857 to 1902, which houses
the national parliament, government and part of the federal administration, can also be visited. Albert Einstein
lived in a flat at the Kramgasse 49, the site of the Einsteinhaus, from 1903 to 1905, the year in which the Annus
Mirabilis Papers were published. The Rose Garden (Rosengarten), from which a scenic panoramic view of the medieval
town centre can be enjoyed, is a well-kept Rosarium on a hill, converted into a park from a former cemetery in 1913.
There are eleven Renaissance allegorical statues on public fountains in the Old Town. Nearly all the 16th century
fountains, except the Zähringer fountain which was created by Hans Hiltbrand, are the work of the Fribourg master
Hans Gieng. One of the more interesting fountains is the Kindlifresserbrunnen (Bernese German: Child Eater Fountain
but often translated Ogre Fountain) which is claimed to represent a Jew, the Greek god Chronos or a Fastnacht figure
that scares disobedient children. Switzerland's inventory of heritage sites of national significance includes the entire Old Town, which is also a UNESCO World Heritage Site, and
many sites within and around it. Some of the most notable sites in the Old Town include the Cathedral, which was started
in 1421 and is the tallest cathedral in Switzerland, the Zytglogge and Käfigturm towers, which mark two successive
expansions of the Old Town, and the Holy Ghost Church, which is one of the largest Swiss Reformed churches in Switzerland.
Within the Old Town, there are eleven 16th century fountains, most attributed to Hans Gieng, that are on the list.
Bern has several dozen cinemas. As is customary in Switzerland, films are generally shown in their original language
(e.g., English) with German and French subtitles. Only a small number of screenings are dubbed in German. Bern was
the site of the 1954 FIFA World Cup Final, in which the favoured Hungarian Golden Team was beaten
3–2 by West Germany in a huge upset. The football team BSC Young Boys is based in Bern at the Stade de Suisse Wankdorf, which also
was one of the venues for the European football championship 2008 in which it hosted 3 matches. SC Bern is the major
ice hockey team of Bern who plays at the PostFinance Arena. The team has ranked highest in attendance for a European
hockey team for more than a decade. The PostFinance Arena was the main host of the 2009 IIHF Ice Hockey World Championship,
including the opening game and the final of the tournament. Bern was a candidate to host the 2010 Winter Olympics,
but withdrew its bid in September 2002 after a referendum showed that the bid was not supported by
locals. Those games were eventually awarded to Vancouver, Canada. As of 2010, Bern had an unemployment rate
of 3.3%. As of 2008, there were 259 people employed in the primary economic sector and about 59 businesses
involved in this sector. 16,413 people were employed in the secondary sector and there were 950 businesses in this
sector. 135,973 people were employed in the tertiary sector, with 7,654 businesses in this sector. In 2008
the total number of full-time equivalent jobs was 125,037. The number of jobs in the primary sector was 203, of which
184 were in agriculture and 19 were in forestry or lumber production. The number of jobs in the secondary sector
was 15,476, of which 7,650 (49.4%) were in manufacturing, 51 (0.3%) were in mining and 6,389 (41.3%) were in
construction. The number of jobs in the tertiary sector was 109,358. In the tertiary sector, 11,396 or 10.4% were
in wholesale or retail sales or the repair of motor vehicles, 10,293 or 9.4% were in the movement and storage of
goods, 5,090 or 4.7% were in a hotel or restaurant, 7,302 or 6.7% were in the information industry, 8,437 or 7.7%
were the insurance or financial industry, 10,660 or 9.7% were technical professionals or scientists, 5,338 or 4.9%
were in education and 17,903 or 16.4% were in health care. In 2000, there were 94,367 workers who commuted
into the municipality and 16,424 workers who commuted away. The municipality is a net importer of workers, with about
5.7 workers entering the municipality for every one leaving. Of the working population, 50.6% used public transport
to get to work, and 20.6% used a private car. The University of Bern, whose buildings are mainly located in the Länggasse
quarter, is based in the city, as are the University of Applied Sciences (Fachhochschule) and several vocational
schools. In Bern, about 50,418 (39.2%) of the population have completed non-mandatory upper secondary education,
and 24,311 (18.9%) have completed additional higher education (either university or a Fachhochschule). Of the
24,311 who completed tertiary schooling, 51.6% were Swiss men, 33.0% were Swiss women, 8.9% were non-Swiss men and
6.5% were non-Swiss women. The Canton of Bern school system provides one year of non-obligatory kindergarten, followed
by six years of primary school. This is followed by three years of obligatory lower secondary school where the pupils
are separated according to ability and aptitude. Following lower secondary school, pupils may attend additional schooling
or they may enter an apprenticeship. During the 2009–10 school year, there were a total of 10,979 pupils attending
classes in Bern. There were 89 kindergarten classes with a total of 1,641 pupils in the municipality. Of the kindergarten
pupils, 32.4% were permanent or temporary residents of Switzerland (not citizens) and 40.2% have a different mother
language than the classroom language. The municipality had 266 primary classes and 5,040 pupils. Of the primary pupils,
30.1% were permanent or temporary residents of Switzerland (not citizens) and 35.7% have a different mother language
than the classroom language. During the same year, there were 151 lower secondary classes with a total of 2,581 pupils.
There were 28.7% who were permanent or temporary residents of Switzerland (not citizens) and 32.7% have a different
mother language than the classroom language. Bern is home to 8 libraries. These libraries include: the Schweiz. Nationalbibliothek/
Bibliothèque nationale suisse, the Universitätsbibliothek Bern, the Kornhausbibliotheken Bern, the BFH Wirtschaft
und Verwaltung Bern, the BFH Gesundheit, the BFH Soziale Arbeit, the Hochschule der Künste Bern, Gestaltung und Kunst
and the Hochschule der Künste Bern, Musikbibliothek. There was a combined total (as of 2008) of 10,308,336
books or other media in the libraries, and in the same year a total of 2,627,973 items were loaned out. As of 2000,
there were 9,045 pupils in Bern who came from another municipality, while 1,185 residents attended schools outside
the municipality. A funicular railway leads from the Marzili district to the Bundeshaus. The Marzilibahn funicular
is, with a length of 106 m (348 ft), the second shortest public railway in Europe after the Zagreb funicular. Bern
is also served by Bern Airport, located outside the city near the town of Belp. The regional airport, colloquially
called Bern-Belp or Belpmoos, is connected to several European cities. Additionally Zürich Airport, Geneva Airport
and EuroAirport Basel-Mulhouse-Freiburg also serve as international gateways, all reachable within two hours by car
or train from Bern.
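Several of the statistics quoted in the Bern article above are internally consistent and can be verified with a few lines of arithmetic. The following Python sketch checks three of them: the land-use percentages against the total area of 51.62 km², the parliamentary-group seat counts against the 80-seat City Parliament, and the commuter ratio of about 5.7. All numbers are copied directly from the text, not drawn from any official dataset.

```python
# Sanity checks on figures quoted in the text (all numbers copied from it).

# Land use: component areas (km^2) as shares of the 51.62 km^2 total.
total_km2 = 51.62
land_use = {"agricultural": 9.79, "forested": 17.33, "settled": 23.25,
            "water": 1.06, "unproductive": 0.16}
for name, km2 in land_use.items():
    # Matches the quoted 19.0%, 33.6%, 45.0%, 2.1% and 0.3%.
    print(f"{name}: {km2 / total_km2:.1%}")

# City Parliament: the parliamentary groups should account for all 80 seats.
groups = {"Ind/AL/GPB-DA/PdA": 4, "SP": 23, "GB/JA!": 9, "GFL/EVP": 10,
          "GLP": 7, "BDP/CVP": 9, "FDP": 7, "SVP": 11}
assert sum(groups.values()) == 80

# Commuters: about 5.7 workers enter the municipality for every one leaving.
incoming, outgoing = 94_367, 16_424
print(f"net import ratio: {incoming / outgoing:.1f}")  # 5.7
```

The percentage check rounds each share to one decimal place, which is the precision used in the text.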
Westminster Abbey, formally titled the Collegiate Church of St Peter at Westminster, is a large, mainly Gothic abbey church
in the City of Westminster, London, located just to the west of the Palace of Westminster. It is one of the most
notable religious buildings in the United Kingdom and has been the traditional place of coronation and burial site
for English and, later, British monarchs. Between 1540 and 1556 the abbey had the status of a cathedral. Since 1560,
however, the building has been neither an abbey nor a cathedral, having instead the status of a Church of England "Royal
Peculiar"—a church responsible directly to the sovereign. The building itself is the original abbey church. According
to a tradition first reported by Sulcard in about 1080, a church was founded at the site (then known as Thorn Ey
(Thorn Island)) in the 7th century, at the time of Mellitus, a Bishop of London. Construction of the present church
began in 1245, on the orders of King Henry III. Since 1066, when Harold Godwinson and William the Conqueror were
crowned, the coronations of English and British monarchs have been held there. There have been at least 16 royal
weddings at the abbey since 1100. Two were of reigning monarchs (Henry I and Richard II), although, before 1919,
there had been none for some 500 years. The first reports of the abbey are based on a late tradition claiming that
a young fisherman called Aldrich on the River Thames saw a vision of Saint Peter near the site. This tradition seems to have been
cited to justify the gifts of salmon from Thames fishermen that the abbey received in later years. In the present
era, the Fishmongers' Company still gives a salmon every year. The proven origins are that in the 960s or early 970s,
Saint Dunstan, assisted by King Edgar, installed a community of Benedictine monks here. Between 1042 and 1052 King
Edward the Confessor began rebuilding St Peter's Abbey to provide himself with a royal burial church. It was the
first church in England built in the Romanesque style. The building was not completed until around 1090 but was consecrated
on 28 December 1065, only a week before Edward's death on 5 January 1066. A week later he was buried in the church,
and nine years later his wife Edith was buried alongside him. His successor, Harold II, was probably crowned in the
abbey, although the first documented coronation is that of William the Conqueror later the same year. The only extant
depiction of Edward's abbey, together with the adjacent Palace of Westminster, is in the Bayeux Tapestry. Some of
the lower parts of the monastic dormitory, an extension of the South Transept, survive in the Norman undercroft of
the Great School, including a door said to come from the previous Saxon abbey. Increased endowments supported a community that grew from a dozen monks in Dunstan's original foundation to a maximum of about eighty, although there was also a large community of lay brothers who supported the monastery's extensive property and activities. The abbot
and monks, in proximity to the royal Palace of Westminster, the seat of government from the later 12th century, became
a powerful force in the centuries after the Norman Conquest. The abbot often was employed on royal service and in
due course took his place in the House of Lords as of right. Released from the burdens of spiritual leadership, which
passed to the reformed Cluniac movement after the mid-10th century, and occupied with the administration of great
landed properties, some of which lay far from Westminster, "the Benedictines achieved a remarkable degree of identification
with the secular life of their times, and particularly with upper-class life", Barbara Harvey concludes, to the extent
that her depiction of daily life provides a wider view of the concerns of the English gentry in the High and Late
Middle Ages.[citation needed] The proximity of the Palace of Westminster did not extend to providing monks or abbots
with high royal connections; in social origin the Benedictines of Westminster were as modest as most of the order.
The abbot remained Lord of the Manor of Westminster as a town of two to three thousand persons grew around it: as
a consumer and employer on a grand scale the monastery helped fuel the town economy, and relations with the town
remained unusually cordial, but no enfranchising charter was issued during the Middle Ages. The abbey built shops
and dwellings on the west side, encroaching upon the sanctuary.[citation needed] The abbey became the coronation
site of Norman kings. None were buried there until Henry III, intensely devoted to the cult of the Confessor, rebuilt
the abbey in Anglo-French Gothic style as a shrine to venerate King Edward the Confessor and as a suitably regal
setting for Henry's own tomb, under the highest Gothic nave in England. The Confessor's shrine subsequently played
a great part in his canonisation. The work continued between 1245 and 1517 and was largely finished by the architect
Henry Yevele in the reign of Richard II. Henry III also commissioned a unique Cosmati pavement in front of the High
Altar (the pavement has recently undergone a major cleaning and conservation programme and was re-dedicated by the
Dean at a service on 21 May 2010). Henry VII added a Perpendicular style chapel dedicated to the Blessed Virgin Mary
in 1503 (known as the Henry VII Chapel or the "Lady Chapel"). Much of the stone came from Caen, in France (Caen stone),
the Isle of Portland (Portland stone) and the Loire Valley region of France (tuffeau limestone).[citation needed]
In 1535, during the assessment attendant on the Dissolution of the Monasteries, the abbey's annual income of £2,400–2,800[citation needed] (£1,310,000 to £1,530,000 as of 2016) rendered it second in wealth only to Glastonbury Abbey.
Henry VIII assumed direct royal control in 1539 and granted the abbey the status of a cathedral by charter in 1540,
simultaneously issuing letters patent establishing the Diocese of Westminster. By granting the abbey cathedral status
Henry VIII gained an excuse to spare it from the destruction or dissolution which he inflicted on most English abbeys
during this period. Westminster diocese was dissolved in 1550, but the abbey was recognised (in 1552, retroactively
to 1550) as a second cathedral of the Diocese of London until 1556. The already-old expression "robbing Peter to
pay Paul" may have been given a new lease of life when money meant for the abbey, which is dedicated to Saint Peter,
was diverted to the treasury of St Paul's Cathedral. The abbey was restored to the Benedictines under the Catholic
Mary I of England, but they were again ejected under Elizabeth I in 1559. In 1560, Elizabeth re-established Westminster
as a "Royal Peculiar" – a church of the Church of England responsible directly to the Sovereign, rather than to a
diocesan bishop – and made it the Collegiate Church of St Peter (that is, a non-cathedral church with an attached
chapter of canons, headed by a dean). The last of Mary's abbots was made the first dean. The abbey suffered damage during
the turbulent 1640s, when it was attacked by Puritan iconoclasts, but was again protected by its close ties to the
state during the Commonwealth period. Oliver Cromwell was given an elaborate funeral there in 1658, only to be disinterred
in January 1661 and posthumously hanged from a gibbet at Tyburn. The abbey's two western towers were built between
1722 and 1745 by Nicholas Hawksmoor, constructed from Portland stone in an early example of Gothic Revival design.
Purbeck marble was used for the walls and the floors of Westminster Abbey, even though the various tombstones are
made of different types of marble. Further rebuilding and restoration occurred in the 19th century under Sir George
Gilbert Scott. A narthex (a portico or entrance hall) for the west front was designed by Sir Edwin Lutyens in the
mid-20th century but was not built. Images of the abbey prior to the construction of the towers are scarce, though
the abbey's official website states that the building was without towers following Yevele's renovation, with just
the lower segments beneath the roof level of the Nave completed. Until the 19th century, Westminster was the third
seat of learning in England, after Oxford and Cambridge. It was here that the first third of the King James Bible
Old Testament and the last half of the New Testament were translated. The New English Bible was also put together
here in the 20th century. Westminster suffered minor damage during the Blitz on 15 November 1940. In the 1990s two
icons by the Russian icon painter Sergei Fyodorov were hung in the abbey. On 6 September 1997 the funeral of Diana,
Princess of Wales, was held at the Abbey. On 17 September 2010 Pope Benedict XVI became the first pope to set foot
in the abbey. Since the coronations in 1066 of both King Harold and William the Conqueror, coronations of English and British monarchs have been held in the abbey. In 1216, Henry III was unable to be crowned in London when he first
came to the throne, because the French prince Louis had taken control of the city, and so the king was crowned in
Gloucester Cathedral. This coronation was deemed by the Pope to be improper, and a further coronation was held in
the abbey on 17 May 1220. The Archbishop of Canterbury is the traditional cleric in the coronation ceremony.[citation
needed] King Edward's Chair (or St Edward's Chair), the throne on which English and British sovereigns have been
seated at the moment of coronation, is housed within the abbey and has been used at every coronation since 1308.
From 1301 to 1996 (except for a short time in 1950 when it was temporarily stolen by Scottish nationalists), the
chair also housed the Stone of Scone upon which the kings of Scots are crowned. Although the Stone is now kept in
Scotland, in Edinburgh Castle, at future coronations it is intended that the Stone will be returned to St Edward's
Chair for use during the coronation ceremony.[citation needed] Westminster Abbey is a collegiate church governed
by the Dean and Chapter of Westminster, as established by Royal charter of Queen Elizabeth I in 1560, which created
it as the Collegiate Church of St Peter Westminster and a Royal Peculiar under the personal jurisdiction of the Sovereign.
The members of the Chapter are the Dean and four canons residentiary, assisted by the Receiver General and Chapter
Clerk. One of the canons is also Rector of St Margaret's Church, Westminster, and often also holds the post of Chaplain
to the Speaker of the House of Commons. In addition to the Dean and canons, there are at present two full-time minor
canons: one is the precentor, the other the sacrist. The office of Priest Vicar was created in the 1970s for those
who assist the minor canons. Together with the clergy and Receiver General and Chapter Clerk, various lay officers
constitute the college, including the Organist and Master of the Choristers, the Registrar, the Auditor, the Legal
Secretary, the Surveyor of the Fabric, the Head Master of the choir school, the Keeper of the Muniments and the Clerk
of the Works, as well as 12 lay vicars, 10 choristers and the High Steward and High Bailiff. Henry III rebuilt the
abbey in honour of a royal saint, Edward the Confessor, whose relics were placed in a shrine in the sanctuary. Henry
III himself was interred nearby, as were many of the Plantagenet kings of England, their wives and other relatives.
Until the death of George II of Great Britain in 1760, most kings and queens were buried in the abbey, some notable
exceptions being Henry VI, Edward IV, Henry VIII and Charles I who are buried in St George's Chapel at Windsor Castle.
Other exceptions include Richard III, now buried at Leicester Cathedral, and the de facto queen Lady Jane Grey, buried
in the chapel of St Peter ad Vincula in the Tower of London. Most monarchs and royals who died after 1760 are buried
either in St George's Chapel or at Frogmore to the east of Windsor Castle.[citation needed] From the Middle Ages,
aristocrats were buried inside chapels, while monks and other people associated with the abbey were buried in the
cloisters and other areas. One of these was Geoffrey Chaucer, who was buried here as he had apartments in the abbey
where he was employed as master of the King's Works. Other poets, writers and musicians were buried or memorialised
around Chaucer in what became known as Poets' Corner. Abbey musicians such as Henry Purcell were also buried in their
place of work.[citation needed] Subsequently, it became one of Britain's most significant honours to be buried or
commemorated in the abbey. The practice of burying national figures in the abbey began under Oliver Cromwell with
the burial of Admiral Robert Blake in 1657. The practice spread to include generals, admirals, politicians, doctors
and scientists such as Isaac Newton, buried on 4 April 1727, and Charles Darwin, buried 26 April 1882. Another was
William Wilberforce who led the movement to abolish slavery in the United Kingdom and the Plantations, buried on
3 August 1833. Wilberforce was buried in the north transept, close to his friend, the former Prime Minister, William
Pitt.[citation needed] During the early 20th century it became increasingly common to bury cremated remains rather
than coffins in the abbey. In 1905 the actor Sir Henry Irving was cremated and his ashes buried in Westminster Abbey,
making him the first person to be cremated prior to interment at the abbey. The majority of interments
at the Abbey are of cremated remains, but some burials still take place – Frances Challen, wife of the Rev Sebastian
Charles, Canon of Westminster, was buried alongside her husband in the south choir aisle in 2014. Members of the
Percy Family have a family vault, The Northumberland Vault, in St Nicholas's chapel within the abbey. In the floor,
just inside the great west door, in the centre of the nave, is the tomb of The Unknown Warrior, an unidentified British
soldier killed on a European battlefield during the First World War. He was buried in the abbey on 11 November 1920.
This grave is the only one in the abbey on which it is forbidden to walk.[citation needed] At the east end of the
Lady Chapel is a memorial chapel to the airmen of the RAF who were killed in the Second World War. It incorporates
a memorial window to the Battle of Britain, which replaces an earlier Tudor stained glass window destroyed in the
war. On Saturday 6 September 1997, the formal, though not state, funeral of Diana, Princess of Wales, was held. It was a royal ceremonial funeral, including royal pageantry and Anglican funeral liturgy. A second public service was held on Sunday in response to public demand. The burial occurred privately later the same day. Diana's former
husband, sons, mother, siblings, a close friend, and a clergyman were present. Diana's body was clothed in a black
long-sleeved dress designed by Catherine Walker, which she had chosen some weeks before. A set of rosary beads was
placed in her hands, a gift she had received from Mother Teresa. Her grave is on the grounds of her family estate,
Althorp, on a private island.[citation needed] Westminster School and Westminster Abbey Choir School are also in
the precincts of the abbey. It was natural for the learned and literate monks to be entrusted with education, and in 1179 the Pope required the Benedictine monks to maintain a charity school. The Choir School educates and trains
the choirboys who sing for services in the Abbey. The organ was built by Harrison & Harrison in 1937, then with four
manuals and 84 speaking stops, and was used for the first time at the coronation of King George VI. Some pipework
from the previous Hill organ of 1848 was revoiced and incorporated in the new scheme. The two organ cases, designed
in the late 19th century by John Loughborough Pearson, were re-instated and coloured in 1959. In 1982 and 1987, Harrison
and Harrison enlarged the organ under the direction of the then abbey organist Simon Preston to include an additional
Lower Choir Organ and a Bombarde Organ: the current instrument now has five manuals and 109 speaking stops. In 2006,
the console of the organ was refurbished by Harrison and Harrison, and space was prepared for two additional 16 ft
stops on the Lower Choir Organ and the Bombarde Organ. One part of the instrument, the Celestial Organ, is currently
not connected or playable. The bells at the abbey were overhauled in 1971. The ring is now made up of ten bells,
hung for change ringing, cast in 1971, by the Whitechapel Bell Foundry, tuned to the notes: F#, E, D, C#, B, A, G,
F#, E and D. The Tenor bell in D (588.5 Hz) has a weight of 30 cwt, 1 qtr, 15 lb (3403 lb or 1544 kg). In addition
there are two service bells, cast by Robert Mot, in 1585 and 1598 respectively, a Sanctus bell cast in 1738 by Richard
Phelps and Thomas Lester and two unused bells—one cast about 1320, by the successor to R de Wymbish, and a second
cast in 1742, by Thomas Lester. The two service bells and the 1320 bell, along with a fourth small silver "dish bell",
kept in the refectory, have been noted as being of historical importance by the Church Buildings Council of the Church
of England. The chapter house was built concurrently with the east parts of the abbey under Henry III, between about
1245 and 1253. It was restored by Sir George Gilbert Scott in 1872. The entrance is approached from the east cloister
walk and includes a double doorway with a large tympanum above. Inner and outer vestibules lead to the octagonal
chapter house, which is of exceptional architectural purity. It is built in a Geometrical Gothic style with an octagonal
crypt below. A pier of eight shafts carries the vaulted ceiling. To the sides are blind arcading, remains of 14th-century
paintings and numerous stone benches above which are innovatory large 4-light quatrefoiled windows. These are virtually
contemporary with the Sainte-Chapelle, Paris. The chapter house has an original mid-13th-century tiled pavement.
A door within the vestibule dates from around 1050 and is believed to be the oldest in England.[citation needed]
The exterior includes flying buttresses added in the 14th century and a leaded tent-lantern roof on an iron frame
designed by Scott. The Chapter house was originally used in the 13th century by Benedictine monks for daily meetings.
It later became a meeting place of the King's Great Council and the Commons, predecessors of Parliament. The Pyx
Chamber formed the undercroft of the monks' dormitory. It dates to the late 11th century and was used as a monastic
and royal treasury. The outer walls and circular piers are of 11th-century date, several of the capitals were enriched
in the 12th century and the stone altar added in the 13th century. The term pyx refers to the boxwood chest in which
coins were held and presented to a jury during the Trial of the Pyx, in which newly minted coins were presented to
ensure they conformed to the required standards. The chapter house and Pyx Chamber at Westminster Abbey are in the
guardianship of English Heritage, but under the care and management of the Dean and Chapter of Westminster. English
Heritage have funded a major programme of work on the chapter house, comprising repairs to the roof, gutters, stonework
on the elevations and flying buttresses as well as repairs to the lead light. The Westminster Abbey Museum is located
in the 11th-century vaulted undercroft beneath the former monks' dormitory in Westminster Abbey. This is one of the
oldest areas of the abbey, dating back almost to the foundation of the church by Edward the Confessor in 1065. This
space has been used as a museum since 1908. The exhibits include a collection of royal and other funeral effigies
(funeral saddle, helm and shield of Henry V), together with other treasures, including some panels of mediaeval glass,
12th-century sculpture fragments, Mary II's coronation chair and replicas of the coronation regalia, and historic
effigies of Edward III, Henry VII and his queen, Elizabeth of York, Charles II, William III, Mary II and Queen Anne.
Later wax effigies include a likeness of Horatio, Viscount Nelson, wearing some of his own clothes and another of
Prime Minister William Pitt, Earl of Chatham, modelled by the American-born sculptor Patience Wright.[citation needed]
During recent conservation of Elizabeth I's effigy, a unique corset dating from 1603 was found on the figure and
is now displayed separately.[citation needed] A recent addition to the exhibition is the late 13th-century Westminster
Retable, England's oldest altarpiece, which was most probably designed for the high altar of the abbey. Although
it has been damaged in past centuries, the panel has been expertly cleaned and conserved. In June 2009 the first
major building work at the abbey for 250 years was proposed. A corona—a crown-like architectural feature—was intended
to be built around the lantern over the central crossing, replacing an existing pyramidal structure dating from the
1950s. This was part of a wider £23m development of the abbey expected to be completed in 2013. On 4 August 2010
the Dean and Chapter announced that, "[a]fter a considerable amount of preliminary and exploratory work", efforts
toward the construction of a corona would not be continued. In 2012, architects Panter Hudspith completed refurbishment
of the 14th-century food-store originally used by the abbey's monks, converting it into a restaurant with English
oak furniture by the Covent Garden-based furniture makers Luke Hughes and Company. A project that is going ahead is the
creation of The Queen's Diamond Jubilee Galleries in the medieval triforium of the abbey. The aim is to create a
new display area for the abbey's treasures in the galleries high up around the abbey's nave. To this end a new Gothic
access tower with lift has been designed by the abbey architect and Surveyor of the Fabric, Ptolemy Dean. It is planned
that the new galleries will open in 2018.
Political corruption is the use of powers by government officials for illegitimate private gain. An illegal act by an officeholder
constitutes political corruption only if the act is directly related to their official duties, is done under color
of law or involves trading in influence. Forms of corruption vary, but include bribery, extortion, cronyism, nepotism,
gombeenism, parochialism, patronage, influence peddling, graft, and embezzlement. Corruption may facilitate criminal enterprises such as drug trafficking, money laundering, and human trafficking, though it is not restricted to these activities.
Misuse of government power for other purposes, such as repression of political opponents and general police brutality,
is also considered political corruption. The activities that constitute illegal corruption differ depending on the
country or jurisdiction. For instance, some political funding practices that are legal in one place may be illegal
in another. In some cases, government officials have broad or ill-defined powers, which make it difficult to distinguish
between legal and illegal actions. Worldwide, bribery alone is estimated to involve over 1 trillion US dollars annually.
A state of unrestrained political corruption is known as a kleptocracy, literally meaning "rule by thieves". Some
forms of corruption – now called "institutional corruption" – are distinguished from bribery and other kinds of obvious
personal gain. A similar problem of corruption arises in any institution that depends on financial support from people
who have interests that may conflict with the primary purpose of the institution. In politics, corruption undermines
democracy and good governance by flouting or even subverting formal processes. Corruption in elections and in the
legislature reduces accountability and distorts representation in policymaking; corruption in the judiciary compromises
the rule of law; and corruption in public administration results in the inefficient provision of services. It violates
a basic principle of republicanism regarding the centrality of civic virtue. More generally, corruption erodes the
institutional capacity of government if procedures are disregarded, resources are siphoned off, and public offices
are bought and sold. Corruption undermines the legitimacy of government and such democratic values as trust and tolerance.
Recent evidence suggests that levels of corruption amongst high-income democracies can vary significantly depending on the level of accountability of decision-makers. Evidence from fragile states also shows that corruption
and bribery can adversely impact trust in institutions. In the private sector, corruption increases the cost of business
through the price of illicit payments themselves, the management cost of negotiating with officials and the risk
of breached agreements or detection. Although some claim corruption reduces costs by cutting bureaucracy, the availability
of bribes can also induce officials to contrive new rules and delays. Openly removing costly and lengthy regulations
is better than covertly allowing them to be bypassed by using bribes. Where corruption inflates the cost of business,
it also distorts the playing field, shielding firms with connections from competition and thereby sustaining inefficient
firms. Corruption also generates economic distortion in the public sector by diverting public investment into capital
projects where bribes and kickbacks are more plentiful. Officials may increase the technical complexity of public
sector projects to conceal or pave the way for such dealings, thus further distorting investment. Corruption also
lowers compliance with construction, environmental, or other regulations, reduces the quality of government services
and infrastructure, and increases budgetary pressures on government. Economists argue that one of the factors behind
the differing economic development in Africa and Asia is that in Africa, corruption has primarily taken the form
of rent extraction with the resulting financial capital moved overseas rather than invested at home (hence the stereotypical,
but often accurate, image of African dictators having Swiss bank accounts). In Nigeria, for example, more than $400
billion was stolen from the treasury by Nigeria's leaders between 1960 and 1999. University of Massachusetts Amherst
researchers estimated that from 1970 to 1996, capital flight from 30 Sub-Saharan countries totaled $187bn, exceeding
those nations' external debts. (The results, expressed in retarded or suppressed development, have been modeled in
theory by economist Mancur Olson.) In the case of Africa, one of the factors for this behavior was political instability,
and the fact that new governments often confiscated previous government's corruptly obtained assets. This encouraged
officials to stash their wealth abroad, out of reach of any future expropriation. In contrast, Asian administrations
such as Suharto's New Order often took a cut on business transactions or provided conditions for development, through
infrastructure investment, law and order, etc. Corruption is often most evident in countries with the smallest per
capita incomes, relying on foreign aid for health services. Local political interception of donated money from overseas
is especially prevalent in Sub-Saharan African nations, where it was reported in the 2006 World Bank Report that
about half of the funds that were donated for health were never invested in the health sector or given
to those needing medical attention. Instead, the donated money was expended through "counterfeit drugs, siphoning
off of drugs to the black market, and payments to ghost employees". Ultimately, there is a sufficient amount of money
for health in developing countries, but local corruption denies the wider citizenry the resources they require. Corruption
facilitates environmental destruction. While corrupt societies may have formal legislation to protect the environment,
it cannot be enforced if officials can easily be bribed. The same applies to social rights such as worker protection, unionization, and child labor laws. Violation of these laws and rights enables corrupt countries to gain illegitimate economic
advantage in the international market. The Nobel Prize-winning economist Amartya Sen has observed that "there is
no such thing as an apolitical food problem." While drought and other naturally occurring events may trigger famine
conditions, it is government action or inaction that determines its severity, and often even whether or not a famine
will occur. Governments with strong tendencies towards kleptocracy can undermine food security even when harvests
are good. Officials often steal state property. In Bihar, India, more than 80% of the subsidized food aid to the poor
is stolen by corrupt officials. Similarly, food aid is often robbed at gunpoint by governments, criminals, and warlords
alike, and sold for a profit. The 20th century offers many examples of governments undermining the food security
of their own nations – sometimes intentionally. The scale of humanitarian aid to the poor and unstable regions of
the world grows, but it is highly vulnerable to corruption, with food aid, construction and other highly valued assistance
as the most at risk. Food aid can be directly and physically diverted from its intended destination, or indirectly
through the manipulation of assessments, targeting, registration and distributions to favor certain groups or individuals.
In construction and shelter there are numerous opportunities for diversion and profit through substandard workmanship,
kickbacks for contracts and favouritism in the provision of valuable shelter material. Thus while humanitarian aid
agencies are usually most concerned about aid being diverted by the inclusion of too many beneficiaries, recipients themselves are most
concerned about exclusion. Access to aid may be limited to those with connections, to those who pay bribes or are
forced to give sexual favors. Equally, those able to do so may manipulate statistics to inflate the number of beneficiaries
and siphon off additional assistance. Corruption is not specific to poor, developing, or transition countries. In
western countries, cases of bribery and other forms of corruption in all possible fields exist: under-the-table payments
made to reputed surgeons by patients attempting to be on top of the list of forthcoming surgeries, bribes paid by
suppliers to the automotive industry in order to sell low-quality connectors used for instance in safety equipment
such as airbags, bribes paid by suppliers to manufacturers of defibrillators (to sell low-quality capacitors), contributions
paid by wealthy parents to the "social and culture fund" of a prestigious university in exchange for accepting
their children, bribes paid to obtain diplomas, financial and other advantages granted to unionists by members of
the executive board of a car manufacturer in exchange for employer-friendly positions and votes, etc. Examples are
endless. These various manifestations of corruption can ultimately present a danger for the public health; they can
discredit specific, essential institutions or social relationships. Corruption can also affect the various components
of sports activities (referees, players, medical and laboratory staff involved in anti-doping controls, members of
national sport federation and international committees deciding about the allocation of contracts and competition
places). Ultimately, the distinction between public and private sector corruption sometimes appears rather artificial,
and national anti-corruption initiatives may need to avoid legal and other loopholes in the coverage of the instruments.
In the context of political corruption, a bribe may involve a payment given to a government official in exchange
for the use of their official powers. Bribery requires two participants: one to give the bribe, and one to take it. Either
may initiate the corrupt offering; for example, a customs official may demand bribes to let through allowed (or disallowed)
goods, or a smuggler might offer bribes to gain passage. In some countries the culture of corruption extends to every
aspect of public life, making it extremely difficult for individuals to operate without resorting to bribes. Bribes
may be demanded in order for an official to do something he is already paid to do. They may also be demanded in order
to bypass laws and regulations. In addition to their role in private financial gain, bribes are also used to intentionally
and maliciously cause harm to another (i.e., with no financial incentive).[citation needed] In some developing nations,
up to half of the population has paid bribes during the past 12 months. In recent years,[when?] the international
community has made efforts to encourage countries to dissociate active and passive bribery and to incriminate them
as separate offences.[citation needed] This dissociation aims to make the early steps (offering, promising, requesting
an advantage) of a corrupt deal already an offence and, thus, to give a clear signal (from a criminal-policy point-of-view)
that bribery is not acceptable.[citation needed] Furthermore, such a dissociation makes the prosecution of bribery
offences easier since it can be very difficult to prove that two parties (the bribe-giver and the bribe-taker) have
formally agreed upon a corrupt deal. In addition, there is often no such formal deal but only a mutual understanding,
for instance when it is common knowledge in a municipality that to obtain a building permit one has to pay a "fee"
to the decision maker to obtain a favorable decision. A working definition of corruption is also provided as follows
in article 3 of the Civil Law Convention on Corruption (ETS 174): For the purpose of this Convention, "corruption"
means requesting, offering, giving or accepting, directly or indirectly, a bribe or any other undue advantage or
prospect thereof, which distorts the proper performance of any duty or behavior required of the recipient of the
bribe, the undue advantage or the prospect thereof. Trading in influence, or influence peddling, refers to a person
selling his/her influence over the decision making process to benefit a third party (person or institution). The
difference from bribery is that this is a trilateral relationship. From a legal point of view, the role of the third
party (who is the target of the influence) does not really matter although he/she can be an accessory in some instances.
It can be difficult to make a distinction between this form of corruption and some forms of extreme and loosely regulated
lobbying where for instance law- or decision-makers can freely "sell" their vote, decision power or influence to
those lobbyists who offer the highest compensation, including where for instance the latter act on behalf of powerful
clients such as industrial groups who want to avoid the passing of specific environmental, social, or other regulations
perceived as too stringent. Where lobbying is sufficiently regulated, it becomes possible to establish a
distinguishing criterion and to consider that trading in influence involves the use of "improper influence", as in article
12 of the Criminal Law Convention on Corruption (ETS 173) of the Council of Europe. Patronage refers to favoring
supporters, for example with government employment. This may be legitimate, as when a newly elected government changes
the top officials in the administration in order to effectively implement its policy. It can be seen as corruption
if this means that incompetent persons, as a payment for supporting the regime, are selected before more able ones.
In non-democracies, government officials are often selected for loyalty rather than ability. They may be almost
exclusively selected from a particular group (for example, Sunni Arabs in Saddam Hussein's Iraq, the nomenklatura
in the Soviet Union, or the Junkers in Imperial Germany) that support the regime in return for such favors. A similar
problem can also be seen in Eastern Europe, for example in Romania, where the government is often accused of patronage
(when a new government comes to power it rapidly changes most of the officials in the public sector). Favoring relatives
(nepotism) or personal friends (cronyism) of an official is a form of illegitimate private gain. This may be combined
with bribery, for example demanding that a business should employ a relative of an official controlling regulations
affecting the business. The most extreme example is when the entire state is inherited, as in North Korea or Syria.
A lesser form might be in the Southern United States with Good ol' boys, where women and minorities are excluded.
A milder form of cronyism is an "old boy network", in which appointees to official positions are selected only from
a closed and exclusive social network – such as the alumni of particular universities – instead of appointing the
most competent candidate. Seeking to harm enemies becomes corruption when official powers are illegitimately used
as means to this end. For example, trumped-up charges are often brought up against journalists or writers who bring
up politically sensitive issues, such as a politician's acceptance of bribes. Gombeenism refers to an individual
who is dishonest and corrupt for the purpose of personal, usually monetary, gain, while parochialism, also known
as parish-pump politics, is the placing of local or vanity projects ahead of the national interest. For instance,
in Irish politics, populist left-wing political parties often apply these terms to mainstream establishment
political parties, citing the many cases of corruption in Ireland, such as the Irish banking crisis, which
found evidence of bribery, cronyism and collusion, where in some cases politicians who were coming to the end of
their political careers would receive a senior management or committee position in a company they had dealings with.
Electoral fraud is illegal interference with the process of an election. Acts of fraud affect vote counts to bring
about an election result, whether by increasing the vote share of the favored candidate, depressing the vote share
of the rival candidates, or both. Also called voter fraud, the mechanisms involved include illegal voter registration,
intimidation at polls, and improper vote counting. Embezzlement is the theft of entrusted funds. It is political
when it involves public money taken by a public official for use by anyone not specified by the public. A common
type of embezzlement is that of personal use of entrusted government resources; for example, when an official assigns
public employees to renovate his own house. A kickback is an official's share of misappropriated funds allocated
from his or her organization to an organization involved in corrupt bidding. For example, suppose that a politician
is in charge of choosing how to spend some public funds. He can give a contract to a company that is not the best
bidder, or allocate more than they deserve. In this case, the company benefits, and in exchange for betraying the
public, the official receives a kickback payment, which is a portion of the sum the company received. This sum itself
may be all or a portion of the difference between the actual (inflated) payment to the company and the (lower) market-based
price that would have been paid had the bidding been competitive. Kickbacks are not limited to government officials;
any situation in which people are entrusted to spend funds that do not belong to them is susceptible to this kind
of corruption. An unholy alliance is a coalition among seemingly antagonistic groups for ad hoc or hidden gain, generally
some influential non-governmental group forming ties with political parties, supplying funding in exchange for the
favorable treatment. Like patronage, unholy alliances are not necessarily illegal, but unlike patronage, an unholy
alliance, by its deceptive nature and often great financial resources, can be much more dangerous to the public interest.
An early use of the term was by former US President Theodore "Teddy" Roosevelt. An illustrative example of official
involvement in organized crime can be found from 1920s and 1930s Shanghai, where Huang Jinrong was a police chief
in the French concession, while simultaneously being a gang boss and co-operating with Du Yuesheng, the local gang
ringleader. The relationship kept the flow of profits from the gang's gambling dens, prostitution, and protection
rackets undisturbed.[citation needed] The United States accused Manuel Noriega's government in Panama of being a
"narcokleptocracy", a corrupt government profiting on illegal drug trade. Later the U.S. invaded Panama and captured
Noriega. Thomas Jefferson observed a tendency for "The functionaries of every government ... to command at will the
liberty and property of their constituents. There is no safe deposit [for liberty and property] ... without information.
Where the press is free, and every man able to read, all is safe." Recent research supports Jefferson's claim. Brunetti
and Weder found "evidence of a significant relationship between more press freedom and less corruption in a large
cross-section of countries." They also presented "evidence which suggests that the direction of causation runs from
higher press freedom to lower corruption." Adserà, Boix, and Payne found that increases in newspaper readership led
to increased political accountability and lower corruption in data from roughly 100 countries and from different
states in the US. Snyder and Strömberg found "that a poor fit between newspaper markets and political districts reduces
press coverage of politics. ... Congressmen who are less covered by the local press work less for their constituencies:
they are less likely to stand witness before congressional hearings ... . Federal spending is lower in areas where
there is less press coverage of the local members of congress." Schulhofer-Wohl and Garrido found that the year after
the Cincinnati Post closed in 2007, "fewer candidates ran for municipal office in the Kentucky suburbs most reliant
on the Post, incumbents became more likely to win reelection, and voter turnout and campaign spending fell." An analysis
of the evolution of mass media in the US and Europe since World War II noted mixed results from the growth of the
Internet: "The digital revolution has been good for freedom of expression [and] information [but] has had mixed effects
on freedom of the press": It has disrupted traditional sources of funding, and new forms of Internet journalism have
replaced only a tiny fraction of what's been lost. Transparency International, an anti-corruption NGO, pioneered
this field with the CPI, first released in 1995. This work is often credited with breaking a taboo and forcing the
issue of corruption into high level development policy discourse. Transparency International currently publishes
three measures, updated annually: a CPI (based on aggregating third-party polling of public perceptions of how corrupt
different countries are); a Global Corruption Barometer (based on a survey of general public attitudes toward and
experience of corruption); and a Bribe Payers Index, looking at the willingness of foreign firms to pay bribes. The
Corruption Perceptions Index is the best known of these metrics, though it has drawn much criticism and may be declining
in influence. In 2013 Transparency International published a report on the "Government Defence Anti-corruption Index".
This index evaluates the risk of corruption in countries' military sector. The World Bank collects a range of data
on corruption, including survey responses from over 100,000 firms worldwide and a set of indicators of governance
and institutional quality. Moreover, one of the six dimensions of governance measured by the Worldwide Governance
Indicators is Control of Corruption, which is defined as "the extent to which power is exercised for private gain,
including both petty and grand forms of corruption, as well as 'capture' of the state by elites and private interests."
While the definition itself is fairly precise, the data aggregated into the Worldwide Governance Indicators is based
on any available polling: questions range from "is corruption a serious problem?" to measures of public access to
information, and are not consistent across countries. Despite these weaknesses, the global coverage of these datasets
has led to their widespread adoption, most notably by the Millennium Challenge Corporation. A number of parties have
collected survey data, from the public and from experts, to try to gauge the level of corruption and bribery, as
well as its impact on political and economic outcomes. A second wave of corruption metrics has been created by Global
Integrity, the International Budget Partnership, and many lesser known local groups. These metrics include the Global
Integrity Index, first published in 2004. These second wave projects aim to create policy change by identifying resources
more effectively and creating checklists toward incremental reform. Global Integrity and the International Budget
Partnership each dispense with public surveys and instead use in-country experts to evaluate "the opposite of corruption"
– which Global Integrity defines as the public policies that prevent, discourage, or expose corruption. These approaches
complement the first-wave, awareness-raising tools by giving governments facing public outcry a checklist which measures
concrete steps toward improved governance. Typical second wave corruption metrics do not offer the worldwide coverage
found in first wave projects, and instead focus on localizing information gathered to specific problems and creating
deep, "unpackable"[clarification needed] content that matches quantitative and qualitative data. Alternative approaches,
such as the British aid agency's Drivers of Change research, skip numbers and promote an understanding of corruption
via political economy analysis of who controls power in a given society. Extensive and diverse public spending is,
in itself, inherently at risk of cronyism, kickbacks, and embezzlement. Complicated regulations and arbitrary, unsupervised
official conduct exacerbate the problem. This is one argument for privatization and deregulation. Opponents of privatization
see the argument as ideological. The argument that corruption necessarily follows from the opportunity is weakened
by the existence of countries with low to non-existent corruption but large public sectors, like the Nordic countries.
These countries score high on the Ease of Doing Business Index, due to good and often simple regulations, and have
rule of law firmly established. Therefore, due to their lack of corruption in the first place, they can run large
public sectors without inducing political corruption. Recent evidence that takes both the size of expenditures and
regulatory complexity into account has found that high-income democracies with more expansive state sectors do indeed
have higher levels of corruption. Like other governmental economic activities, privatization, such as the
sale of government-owned property, is particularly at risk of cronyism. Privatizations in Russia, Latin America,
and East Germany were accompanied by large-scale corruption during the sale of state-owned companies. Those with
political connections unfairly gained large wealth, which has discredited privatization in these regions. While media
have reported widely the grand corruption that accompanied the sales, studies have argued that in addition to increased
operating efficiency, daily petty corruption is, or would be, larger without privatization, and that corruption is
more prevalent in non-privatized sectors. Furthermore, there is evidence to suggest that extralegal and unofficial
activities are more prevalent in countries that privatized less. In the European Union, the principle of subsidiarity
is applied: a government service should be provided by the lowest, most local authority that can competently provide
it. One effect is that distributing funds among multiple authorities discourages embezzlement, because even small
missing sums will be noticed. In contrast, in a centralized authority, even minute proportions of public funds can
amount to large sums of money. If the highest echelons of a government also profit from corruption or embezzlement
from the state's treasury, the state is sometimes referred to with the neologism kleptocracy. Members of the government can
take advantage of the natural resources (e.g., diamonds and oil in a few prominent cases) or state-owned productive
industries. A number of corrupt governments have enriched themselves via foreign aid, which is often spent on showy
buildings and armaments. A corrupt dictatorship typically results in many years of general hardship and suffering
for the vast majority of citizens as civil society and the rule of law disintegrate. In addition, corrupt dictators
routinely ignore economic and social problems in their quest to amass ever more wealth and power. The classic case
of a corrupt, exploitive dictator often given is the regime of Marshal Mobutu Sese Seko, who ruled the Democratic
Republic of the Congo (which he renamed Zaire) from 1965 to 1997. It is said that usage of the term kleptocracy gained
popularity largely in response to a need to accurately describe Mobutu's regime. Another classic case is Nigeria,
especially under the rule of General Sani Abacha who was de facto president of Nigeria from 1993 until his death
in 1998. He is reputed to have stolen some US$3–4 billion. He and his relatives are often mentioned in Nigerian 419
letter scams claiming to offer vast fortunes for "help" in laundering his stolen "fortunes", which in reality turn
out not to exist. More than $400 billion was stolen from the treasury by Nigeria's leaders between 1960 and 1999.
More recently, articles in various financial periodicals, most notably Forbes magazine, have pointed to Fidel Castro,
leader of Cuba since 1959, as likely being the beneficiary of up to $900 million, based
on "his control" of state-owned companies. Opponents of his regime claim that he has used money amassed through weapons
sales, narcotics, international loans, and confiscation of private property to enrich himself and his political cronies
who hold his dictatorship together, and that the $900 million published by Forbes is merely a portion of his assets,
although this remains unproven. Further conventions were adopted at the regional level under the aegis of the Organization
of American States (OAS or OEA), the African Union, and in 2003, at the universal level under that of the United
Nations Convention against Corruption. Measuring corruption statistically is difficult if not impossible due to the
illicit nature of the transaction and imprecise definitions of corruption. While "corruption" indices first appeared
in 1995 with the Corruption Perceptions Index (CPI), all of these metrics address different proxies for corruption,
such as public perceptions of the extent of the problem. There are two channels of corruption of the judiciary:
corruption by the state (through budget planning and various privileges) and private corruption. In many transitional
and developing countries, the budget of the judiciary is almost completely controlled by the executive. This control
undermines the separation of powers, as it creates a critical financial dependence of the judiciary. The proper
distribution of national wealth, including government spending on the judiciary, is a subject of constitutional
economics. Judicial corruption can be difficult
to completely eradicate, even in developed countries. Mobile telecommunications and radio broadcasting help to fight
corruption, especially in developing regions like Africa, where other forms of communications are limited. In India,
the anti-corruption bureau fights against corruption, and a new ombudsman bill called Jan Lokpal Bill is being prepared.
In the 1990s, initiatives were taken at an international level (in particular by the European Community, the Council
of Europe, the OECD) to put a ban on corruption: in 1996, the Committee of Ministers of the Council of Europe, for
instance, adopted a comprehensive Programme of Action against Corruption and, subsequently, issued a series of anti-corruption
standard-setting instruments: The purpose of these instruments was to address the various forms of corruption (involving
the public sector, the private sector, the financing of political activities, etc.) whether they had a strictly domestic
or also a transnational dimension. To monitor the implementation at national level of the requirements and principles
provided in those texts, a monitoring mechanism – the Group of States Against Corruption (also known as GRECO;
French: Groupe d'Etats contre la corruption) – was created.
Classical music is art music produced or rooted in the traditions of Western music, including both liturgical (religious)
and secular music. While a similar term is also used to refer to the period from 1750 to 1820 (the Classical period),
this article is about the broad span of time from roughly the 11th century to the present day, which includes the
Classical period and various other periods. The central norms of this tradition became codified between 1550 and
1900, which is known as the common practice period. The major time divisions of classical music are as follows: the
early music period, which includes the Medieval (500–1400) and the Renaissance (1400–1600) eras; the Common practice
period, which includes the Baroque (1600–1750), Classical (1750–1820), and Romantic eras (1804–1910); and the 20th
century (1901–2000), which includes the modern era (1890–1930), overlapping with the late 19th century; the high
modern era (mid-20th century); and the contemporary or postmodern era (1975–2015).[citation needed] European art music is largely
distinguished from many other non-European and popular musical forms by its system of staff notation, in use since
about the 16th century. Western staff notation is used by composers to prescribe to the performer the pitches (e.g.,
melodies, basslines and/or chords), tempo, meter and rhythms for a piece of music. This leaves less room for practices
such as improvisation and ad libitum ornamentation, which are frequently heard in non-European art music and in popular
music styles such as jazz and blues. Another difference is that whereas most popular styles lend themselves to the
song form, classical music has been noted for its development of highly sophisticated forms of instrumental music
such as the concerto, symphony, sonata, and mixed vocal and instrumental styles such as opera which, since they are
written down, can attain a high level of complexity. The term "classical music" did not appear until the early 19th
century, in an attempt to distinctly canonize the period from Johann Sebastian Bach to Beethoven as a golden age.
The earliest reference to "classical music" recorded by the Oxford English Dictionary is from about 1836. Given the
wide range of styles in classical music, from Medieval plainchant sung by monks to Classical and Romantic symphonies
for orchestra from the 1700s and 1800s to avant-garde atonal compositions for solo piano from the 1900s, it is difficult
to list characteristics that can be attributed to all works of that type. However, there are characteristics that
classical music contains that few or no other genres of music contain, such as the use of a printed score and the
performance of very complex instrumental works (e.g., the fugue). As well, although the symphony did not exist through
the entire classical music period, from the mid-1700s to the 2000s the symphony ensemble—and the works written for
it—have become a defining feature of classical music. The key characteristic of classical music that distinguishes
it from popular music and folk music is that the repertoire tends to be written down in musical notation, creating
a musical part or score. This score typically determines details of rhythm, pitch, and, where two or more musicians
(whether singers or instrumentalists) are involved, how the various parts are coordinated. The written quality of
the music has enabled a high level of complexity within it: J.S. Bach's fugues, for instance, achieve a remarkable
marriage of boldly distinctive melodic lines weaving in counterpoint yet creating a coherent harmonic logic that
would be impossible in the heat of live improvisation. The use of written notation also preserves a record of the
works and enables Classical musicians to perform music from many centuries ago. Musical notation enables 2000s-era
performers to sing a choral work from the 1300s or a 1700s Baroque concerto with many of the features
of the music (the melodies, lyrics, forms, and rhythms) being reproduced. That said, the score does not provide complete
and exact instructions on how to perform a historical work. Even if the tempo is written with an Italian instruction
(e.g., Allegro), we do not know exactly how fast the piece should be played. As well, in the Baroque era, many works
that were designed for basso continuo accompaniment do not specify which instruments should play the accompaniment
or exactly how the chordal instrument (harpsichord, lute, etc.) should play the chords, which are not notated in
the part (only a figured bass symbol beneath the bass part is used to guide the chord-playing performer). The performer
and/or the conductor have a range of options for musical expression and interpretation of a scored piece, including
the phrasing of melodies, the time taken during fermatas (held notes) or pauses, and the use (or choice not to use)
of effects such as vibrato or glissando (these effects are possible on various stringed, brass and woodwind instruments
and with the human voice). Although Classical music in the 2000s has lost most of its tradition of musical improvisation,
from the Baroque era to the Romantic era, there are examples of performers who could improvise in the style of their
era. In the Baroque era, organ performers would improvise preludes, keyboard performers playing harpsichord would
improvise chords from the figured bass symbols beneath the bass notes of the basso continuo part and both vocal and
instrumental performers would improvise musical ornaments. J.S. Bach was particularly noted for his complex improvisations.
During the Classical era, the composer-performer Mozart was noted for his ability to improvise melodies in different
styles. During the Classical era, some virtuoso soloists would improvise the cadenza sections of a concerto. During
the Romantic era, Beethoven would improvise at the piano. For more information, see Improvisation. The instruments
currently used in most classical music were largely invented before the mid-19th century (often much earlier) and
codified in the 18th and 19th centuries. They consist of the instruments found in an orchestra or in a concert band,
together with several other solo instruments (such as the piano, harpsichord, and organ). The symphony orchestra
is the most widely known medium for classical music and includes members of the string, woodwind, brass, and percussion
families of instruments. The concert band consists of members of the woodwind, brass, and percussion families. It
generally has a larger variety and amount of woodwind and brass instruments than the orchestra but does not have
a string section. However, many concert bands use a double bass. The vocal practices changed a great deal over the
classical period, from the single-line monophonic Gregorian chant sung by monks in the Medieval period to the complex,
polyphonic choral works of the Renaissance and subsequent periods, which used multiple independent vocal melodies
at the same time. Many of the instruments used to perform medieval music still exist, but in different forms. Medieval
instruments included the wood flute (which in the 21st century is made of metal), the recorder and plucked string
instruments like the lute. As well, early versions of the organ, fiddle (or vielle), and trombone (called the sackbut)
existed. Medieval instruments in Europe had most commonly been used singly, often self-accompanied with a drone note,
or occasionally in parts. From at least as early as the 13th century through the 15th century there was a division
of instruments into haut (loud, shrill, outdoor instruments) and bas (quieter, more intimate instruments). During
the earlier medieval period, the vocal music from the liturgical genre, predominantly Gregorian chant, was monophonic,
using a single, unaccompanied vocal melody line. Polyphonic vocal genres, which used multiple independent vocal melodies,
began to develop during the high medieval era, becoming prevalent by the later 13th and early 14th century. Many
instruments originated during the Renaissance; others were variations of, or improvements upon, instruments that
had existed previously. Some have survived to the present day; others have disappeared, only to be recreated in order
to perform music of the period on authentic instruments. As in the modern day, instruments may be classified as brass,
strings, percussion, and woodwind. Brass instruments in the Renaissance were traditionally played by professionals
who were members of guilds, and they included the slide trumpet, the wooden cornet, the valveless trumpet and the
sackbut. Stringed instruments included the viol, the harp-like lyre, the hurdy-gurdy, the cittern and the lute. Keyboard
instruments with strings included the harpsichord and the virginal. Percussion instruments included the triangle,
the Jew's harp, the tambourine, the bells, the rumble-pot, and various kinds of drums. Woodwind instruments included
the double reed shawm, the reed pipe, the bagpipe, the transverse flute and the recorder. Vocal music in the Renaissance
is noted for the flourishing of an increasingly elaborate polyphonic style. The principal liturgical forms which
endured throughout the entire Renaissance period were masses and motets, with some other developments towards the
end, especially as composers of sacred music began to adopt secular forms (such as the madrigal) for their own designs.
Towards the end of the period, the early dramatic precursors of opera such as monody, the madrigal comedy, and the
intermedio are seen. Baroque instruments included some instruments from the earlier periods (e.g., the hurdy-gurdy
and recorder) and a number of new instruments (e.g., the cello, contrabass and fortepiano). Some instruments from
previous eras fell into disuse, such as the shawm and the wooden cornet. The key Baroque instruments for strings
included the violin, viol, viola, viola d'amore, cello, contrabass, lute, theorbo (which often played the basso continuo
parts), mandolin, cittern, Baroque guitar, harp and hurdy-gurdy. Woodwinds included the Baroque flute, Baroque oboe,
rackett, recorder and the bassoon. Brass instruments included the cornett, natural horn, Baroque trumpet, serpent
and the trombone. Keyboard instruments included the clavichord, tangent piano, the fortepiano (an early version of
the piano), the harpsichord and the pipe organ. Percussion instruments included the timpani, snare drum, tambourine
and the castanets. One major difference between Baroque music and the classical era that followed it is that the
types of instruments used in ensembles were much less standardized. Whereas a classical era string quartet consists
almost exclusively of two violins, a viola and a cello, a Baroque group accompanying a soloist or opera could include
one of several different types of keyboard instruments (e.g., pipe organ, harpsichord, or clavichord), additional
stringed chordal instruments (e.g., a lute) and an unspecified number of bass instruments performing the basso continuo
bassline, including bowed strings, woodwinds and brass instruments (e.g., a cello, contrabass, viol, bassoon, serpent,
etc.). The term "classical music" has two meanings: the broader meaning includes all Western art music from the Medieval
era to today, and the specific meaning refers to the music from the 1750s to the early 1830s—the era of Mozart and
Haydn. This section is about the more specific meaning. Classical musicians continued to use many of the instruments
from the Baroque era, such as the cello, contrabass, recorder, trombone, timpani, fortepiano and organ. While some
Baroque instruments fell into disuse (e.g., the theorbo and rackett), many Baroque instruments were changed into
the versions that are still in use today, such as the Baroque violin (which became the violin), the Baroque oboe
(which became the oboe) and the Baroque trumpet, which transitioned to the regular valved trumpet. The Classical
era stringed instruments were the four instruments which form the string section of the orchestra: the violin, viola,
cello and contrabass. Woodwinds included the basset clarinet, basset horn, clarinette d'amour, the Classical clarinet,
the chalumeau, the flute, oboe and bassoon. Keyboard instruments included the clavichord and the fortepiano. While
the harpsichord was still used in basso continuo accompaniment in the 1750s and 1760s, it fell out of use by the
end of the century. Brass instruments included the buccin, the ophicleide (a serpent replacement which was the precursor
of the tuba) and the natural horn. The "standard complement" of double winds and brass in the orchestra from the first
half of the 19th century is generally attributed to Beethoven. The exceptions to this are his Symphony No. 4, Violin
Concerto, and Piano Concerto No. 4, which each specify a single flute. The composer's instrumentation usually included
paired flutes, oboes, clarinets, bassoons, horns and trumpets. Beethoven carefully calculated the expansion of this
particular timbral "palette" in Symphonies 3, 5, 6, and 9 for an innovative effect. The third horn in the "Eroica"
Symphony arrives to provide not only some harmonic flexibility, but also the effect of "choral" brass in the Trio.
Piccolo, contrabassoon, and trombones add to the triumphal finale of his Symphony No. 5. A piccolo and a pair of
trombones help deliver "storm" and "sunshine" in the Sixth. The Ninth asks for a second pair of horns, for reasons
similar to the "Eroica" (four horns have since become standard); Beethoven's use of piccolo, contrabassoon, trombones,
and untuned percussion—plus chorus and vocal soloists—in his finale is his earliest suggestion that the timbral
boundaries of the symphony should be expanded. For several decades after he died, symphonic instrumentation was faithful
to Beethoven's well-established model, with few exceptions. In the Romantic era, the modern piano, with a more powerful,
sustained tone and a wider range, took over from the more delicate-sounding fortepiano. In the orchestra, the existing
Classical instruments and sections were retained (string section, woodwinds, brass and percussion), but these sections
were typically expanded to make a fuller, bigger sound. For example, while a Baroque orchestra may have had two double
bass players, a Romantic orchestra could have as many as ten. "As music grew more expressive, the standard orchestral
palette just wasn't rich enough for many Romantic composers." New woodwind instruments were added, such as the contrabassoon,
bass clarinet and piccolo, and new percussion instruments were added, including xylophones, drums, the celesta (a bell-like
keyboard instrument), large orchestral harps, bells, and triangles, and even wind machines for sound effects. Saxophones
appear in some scores from the late 19th century onwards. While it appears only as a featured solo instrument in some
works, for example Maurice Ravel's orchestration of Modest Mussorgsky's Pictures at an Exhibition and Sergei Rachmaninoff's
Symphonic Dances, the saxophone is included in other works, such as Ravel's Boléro and Sergei Prokofiev's Romeo and
Juliet Suites 1 and 2, as a member of the orchestral ensemble. The euphonium is featured in a
few late Romantic and 20th-century works, usually playing parts marked "tenor tuba", including Gustav Holst's The
Planets, and Richard Strauss's Ein Heldenleben. The Wagner tuba, a modified member of the horn family, appears in
Richard Wagner's cycle Der Ring des Nibelungen and several other works by Strauss, Béla Bartók, and others; it has
a prominent role in Anton Bruckner's Symphony No. 7 in E Major. Cornets appear in Pyotr Ilyich Tchaikovsky's ballet
Swan Lake, Claude Debussy's La Mer, and several orchestral works by Hector Berlioz. Unless these instruments are
played by members doubling on another instrument (for example, a trombone player changing to euphonium for a certain
passage), orchestras will use freelance musicians to augment their regular rosters. Electric instruments such as
the electric guitar, the electric bass and the ondes Martenot appear occasionally in the classical music of the 20th
and 21st centuries. Both classical and popular musicians have experimented in recent decades with electronic instruments
such as the synthesizer, electric and digital techniques such as the use of sampled or computer-generated sounds,
and instruments from other cultures such as the gamelan. Many instruments today associated with popular music filled
important roles in early classical music, such as bagpipes, vihuelas, hurdy-gurdies, and some woodwind instruments.
On the other hand, instruments such as the acoustic guitar, once associated mainly with popular music, gained prominence
in classical music in the 19th and 20th centuries. While equal temperament became gradually accepted as the dominant
musical temperament during the 18th century, different historical temperaments are often used for music from earlier
periods. For instance, music of the English Renaissance is often performed in meantone temperament. Performers who
have studied classical music extensively are said to be "classically trained". This training may be from private
lessons from instrument or voice teachers or from completion of a formal program offered by a conservatory, college,
or university, such as a B.Mus. or M.Mus. degree (which includes individual lessons from professors). In classical
music, "...extensive formal music education and training, often to postgraduate [Master's degree] level" is required.
Performance of classical music repertoire requires a proficiency in sight-reading and ensemble playing, harmonic
principles, strong ear training (to correct and adjust pitches by ear), knowledge of performance practice (e.g.,
Baroque ornamentation), and a familiarity with the style/musical idiom expected for a given composer or musical work
(e.g., a Brahms symphony or a Mozart concerto). Some "popular" genre musicians have had significant classical training,
such as Billy Joel, Elton John, the Van Halen brothers, Randy Rhoads and Ritchie Blackmore. Moreover, formal training
is not unique to the classical genre. Many rock and pop musicians have completed degrees in commercial music programs
such as those offered by the Berklee College of Music and many jazz musicians have completed degrees in music from
universities with jazz programs, such as the Manhattan School of Music and McGill University. Historically, major
professional orchestras have been mostly or entirely composed of male musicians. Some of the earliest cases of women
being hired in professional orchestras were in the position of harpist. The Vienna Philharmonic, for example, did
not accept women to permanent membership until 1997, far later than the other orchestras ranked among the world's
top five by Gramophone in 2008. The last major orchestra to appoint a woman to a permanent position was the Berlin
Philharmonic. As late as February 1996, the Vienna Philharmonic's principal flute, Dieter Flury, told Westdeutscher
Rundfunk that accepting women would be "gambling with the emotional unity (emotionelle Geschlossenheit) that this
organism currently has". In April 1996, the orchestra's press secretary wrote that "compensating for the expected
leaves of absence" of maternity leave would be a problem. In 1997, the Vienna Philharmonic was "facing protests during
a [US] tour" by the National Organization for Women and the International Alliance for Women in Music. Finally, "after
being held up to increasing ridicule even in socially conservative Austria, members of the orchestra gathered [on
28 February 1997] in an extraordinary meeting on the eve of their departure and agreed to admit a woman, Anna Lelkes,
as harpist." As of 2013, the orchestra has six female members; one of them, violinist Albena Danailova, became one
of the orchestra's concertmasters in 2008, the first woman to hold that position. In 2012, women still made up just
6% of the orchestra's membership. VPO president Clemens Hellsberg said the VPO now uses completely screened blind
auditions. In 2013, an article in Mother Jones stated that while "[m]any prestigious orchestras have significant
female membership—women outnumber men in the New York Philharmonic's violin section—and several renowned ensembles,
including the National Symphony Orchestra, the Detroit Symphony, and the Minnesota Symphony, are led by women violinists",
the double bass, brass, and percussion sections of major orchestras "...are still predominantly male." A 2014 BBC
article stated that the "...introduction of 'blind' auditions, where a prospective instrumentalist performs behind
a screen so that the judging panel can exercise no gender or racial prejudice, has seen the gender balance of traditionally
male-dominated symphony orchestras gradually shift." Works of classical repertoire often exhibit complexity in their
use of orchestration, counterpoint, harmony, musical development, rhythm, phrasing, texture, and form. Whereas most
popular styles are usually written in song forms, classical music is noted for its development of highly sophisticated
musical forms, like the concerto, symphony, sonata, and opera. Longer works are often divided into self-contained
pieces, called movements, often with contrasting characters or moods. For instance, symphonies written during the
Classical period are usually divided into four movements: (1) an opening Allegro in sonata form, (2) a slow movement,
(3) a minuet or scherzo, and (4) a final Allegro. These movements can then be further broken down into a hierarchy
of smaller units: first sections, then periods, and finally phrases. The major time divisions of classical music
up to 1900 are the early music period, which includes the Medieval (500–1400) and Renaissance (1400–1600) eras, and the
Common practice period, which includes the Baroque (1600–1750), Classical (1750–1830) and Romantic (1804–1910) eras.
Since 1900, classical periods have been reckoned more by calendar century than by particular stylistic movements
that have become fragmented and difficult to define. The 20th century calendar period (1901–2000) includes most of
the early modern musical era (1890–1930), the entire high modern (mid 20th-century), and the first 25 years of the
contemporary or postmodern musical era (1975–current). The 21st century has so far been characterized by a continuation
of the contemporary/postmodern musical era. The dates are generalizations, since the periods and eras overlap and
the categories are somewhat arbitrary, to the point that some authorities reverse terminologies and refer to a common
practice "era" comprising baroque, classical, and romantic "periods". For example, the use of counterpoint and fugue,
which is considered characteristic of the Baroque era (or period), was continued by Haydn, who is classified as typical
of the Classical era. Beethoven, who is often described as a founder of the Romantic era, and Brahms, who is classified
as Romantic, also used counterpoint and fugue, but other characteristics of their music define their era. The prefix
neo is used to describe a 20th-century or contemporary composition written in the style of an earlier era, such as
Classical or Romantic. Stravinsky's Pulcinella, for example, is a neoclassical composition because it is stylistically
similar to works of the Classical era. Burgh (2006) suggests that the roots of Western classical music ultimately
lie in ancient Egyptian art music via cheironomy and the ancient Egyptian orchestra, which dates to 2695 BC. This
was followed by early Christian liturgical music, which itself dates back to the Ancient Greeks[citation needed].
The development of individual tones and scales was made by ancient Greeks such as Aristoxenus and Pythagoras. Pythagoras
created a tuning system and helped to codify musical notation. Ancient Greek instruments such as the aulos (a reed
instrument) and the lyre (a stringed instrument similar to a small harp) eventually led to the modern-day instruments
of a classical orchestra. The antecedent to the early period was the era of ancient music before the fall of the
Roman Empire (476 AD). Very little music survives from this time, most of it from ancient Greece. The Medieval period
includes music from after the fall of Rome to about 1400. Monophonic chant, also called plainsong or Gregorian chant,
was the dominant form until about 1100. Polyphonic (multi-voiced) music developed from monophonic chant throughout
the late Middle Ages and into the Renaissance, including the more complex voicings of motets. The Renaissance era
was from 1400 to 1600. It was characterized by greater use of instrumentation, multiple interweaving melodic lines,
and the use of the first bass instruments. Social dancing became more widespread, so musical forms appropriate to
accompanying dance began to standardize. It is in this time that the notation of music on a staff and other elements
of musical notation began to take shape. This invention made possible the separation of the composition of a piece
of music from its transmission; without written music, transmission was oral, and subject to change every time it
was transmitted. With a musical score, a work of music could be performed without the composer's presence. The invention
of the movable-type printing press in the 15th century had far-reaching consequences on the preservation and transmission
of music. Typical stringed instruments of the early period include the harp, lute, vielle, and psaltery, while wind
instruments included the flute family (including recorder), shawm (an early member of the oboe family), trumpet,
and the bagpipes. Simple pipe organs existed, but were largely confined to churches, although there were portable
varieties. Later in the period, early versions of keyboard instruments like the clavichord and harpsichord began
to appear. Stringed instruments such as the viol had emerged by the 16th century, as had a wider variety of brass
and reed instruments. Printing enabled the standardization of descriptions and specifications of instruments, as
well as instruction in their use. The common practice period is when many of the ideas that make up western classical
music took shape, standardized, or were codified. It began with the Baroque era, running from roughly 1600 to the
middle of the 18th century. The Classical era followed, ending roughly around 1820. The Romantic era ran through
the 19th century, ending about 1910. Baroque music is characterized by the use of complex tonal counterpoint and
the use of a basso continuo, a continuous bass line. Music became more complex in comparison with the songs of earlier
periods. The beginnings of the sonata form took shape in the canzona, as did a more formalized notion of theme and
variations. The tonalities of major and minor as means for managing dissonance and chromaticism in music took full
shape. During the Baroque era, keyboard music played on the harpsichord and pipe organ became increasingly popular,
and the violin family of stringed instruments took the form generally seen today. Opera as a staged musical drama
began to differentiate itself from earlier musical and dramatic forms, and vocal forms like the cantata and oratorio
became more common. Vocalists began adding embellishments to melodies. Instrumental ensembles began to distinguish
and standardize by size, giving rise to the early orchestra for larger ensembles, with chamber music being written
for smaller groups of instruments where parts are played by individual (instead of massed) instruments. The concerto
as a vehicle for solo performance accompanied by an orchestra became widespread, although the relationship between
soloist and orchestra was relatively simple. The theories surrounding equal temperament began to be put in wider
practice, especially as it enabled a wider range of chromatic possibilities in hard-to-tune keyboard instruments.
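The arithmetic behind equal temperament can be sketched briefly. Dividing the octave (a 2:1 frequency ratio) into twelve identical semitone steps makes every key's intervals the same size, which is why modulation to any key became acceptable on keyboard instruments. This is an illustrative aside, not from the text; the MIDI-style note numbering (A4 = note 69 = 440 Hz) is my own assumption for the example.

```python
# Illustrative sketch of 12-tone equal temperament: each semitone multiplies
# frequency by 2**(1/12), so twelve semitones give exactly one octave.
# The MIDI-style numbering (A4 = note 69 = 440 Hz) is an assumption for
# this example, not something stated in the text.
def equal_tempered_freq(midi_note: int, a4_hz: float = 440.0) -> float:
    """Frequency in Hz of a note under 12-tone equal temperament."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

# Every key shares the same interval sizes; the equal-tempered fifth
# (7 semitones) is slightly narrow of the pure 3:2 fifth of older tunings:
equal_fifth = 2 ** (7 / 12)   # about 1.4983
pure_fifth = 3 / 2            # 1.5 exactly
```

This uniformity is the trade-off the passage describes: meantone and related older temperaments kept some intervals purer, at the cost of making remote keys sound badly out of tune on hard-to-retune keyboards.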
Although Bach did not use equal temperament (the system to which a modern piano is generally tuned), the shift from
the meantone system, common at the time, to various temperaments that made modulation between all keys musically
acceptable made possible Bach's Well-Tempered Clavier. The Classical era, from about 1750 to 1820, established many
of the norms of composition, presentation, and style, and was also when the piano became the predominant keyboard
instrument. The basic forces required for an orchestra became somewhat standardized (although they would grow as
the potential of a wider array of instruments was developed in the following centuries). Chamber music grew to include
ensembles with as many as 8 to 10 performers for serenades. Opera continued to develop, with regional styles in Italy,
France, and German-speaking lands. The opera buffa, a form of comic opera, rose in popularity. The symphony came
into its own as a musical form, and the concerto was developed as a vehicle for displays of virtuoso playing skill.
Orchestras no longer required a harpsichord (which had been part of the traditional continuo in the Baroque style),
and were often led by the lead violinist (now called the concertmaster). Wind instruments became more refined in
the Classical era. While double reeded instruments like the oboe and bassoon became somewhat standardized in the
Baroque, the clarinet family of single reeds was not widely used until Mozart expanded its role in orchestral, chamber,
and concerto settings. The music of the Romantic era, from roughly the first decade of the 19th century to the early
20th century, was characterized by increased attention to an extended melodic line, as well as expressive and emotional
elements, paralleling romanticism in other art forms. Musical forms began to break from the Classical era forms (even
as those were being codified), with free-form pieces like nocturnes, fantasias, and preludes being written where
accepted ideas about the exposition and development of themes were ignored or minimized. The music became more chromatic,
dissonant, and tonally colorful, with tensions (with respect to accepted norms of the older forms) about key signatures
increasing. The art song (or Lied) came to maturity in this era, as did the epic scales of grand opera, ultimately
transcended by Richard Wagner's Ring cycle. In the 19th century, musical institutions emerged from the control of
wealthy patrons, as composers and musicians could construct lives independent of the nobility. Increasing interest
in music by the growing middle classes throughout western Europe spurred the creation of organizations for the teaching,
performance, and preservation of music. The piano, which achieved its modern construction in this era (in part due
to industrial advances in metallurgy) became widely popular with the middle class, whose demands for the instrument
spurred a large number of piano builders. Many symphony orchestras date their founding to this era. Some musicians
and composers were the stars of the day; some, like Franz Liszt and Niccolò Paganini, fulfilled both roles. The family
of instruments used, especially in orchestras, grew. A wider array of percussion instruments began to appear. Brass
instruments took on larger roles, as the introduction of rotary valves made it possible for them to play a wider
range of notes. The size of the orchestra (typically around 40 in the Classical era) grew to be over 100. Gustav
Mahler's 1906 Symphony No. 8, for example, has been performed with over 150 instrumentalists and choirs of over 400.
European cultural ideas and institutions began to follow colonial expansion into other parts of the world. There
was also a rise, especially toward the end of the era, of nationalism in music (echoing, in some cases, political
sentiments of the time), as composers such as Edvard Grieg, Nikolai Rimsky-Korsakov, and Antonín Dvořák echoed traditional
music of their homelands in their compositions. Encompassing a wide variety of post-Romantic styles composed through
the year 2000, 20th-century classical music includes late Romantic, modern, high-modern, and postmodern styles of
composition. Modernism (1890–1930) marked an era when many composers rejected certain values of the common practice
period, such as traditional tonality, melody, instrumentation, and structure. The high-modern era saw the emergence
of neo-classical and serial music. A few authorities have claimed high-modernism as the beginning of postmodern music
from about 1930. Others have more or less equated postmodern music with the "contemporary music" composed from the
late 20th century through to the early 21st century. Almost all of the composers who are described in music textbooks
on classical music and whose works are widely performed as part of the standard concert repertoire are male composers,
even though there has been a large number of women composers throughout the classical music period. Musicologist
Marcia Citron has asked "[w]hy is music composed by women so marginal to the standard 'classical' repertoire?" Citron
"examines the practices and attitudes that have led to the exclusion of women composers from the received 'canon'
of performed musical works." She argues that in the 1800s, women composers typically wrote art songs for performance
in small recitals rather than symphonies intended for performance with an orchestra in a large hall, with the latter
works being seen as the most important genre for composers; since women composers did not write many symphonies,
they were deemed to be not notable as composers. In the "...Concise Oxford History of Music, Clara Shumann [sic]
is one of the only [sic] female composers mentioned." Abbey Philips states that "[d]uring the 20th century the women
who were composing/playing gained far less attention than their male counterparts." Modernist views hold that
classical music is primarily a written musical tradition, preserved in music notation, as opposed to being
transmitted orally, by rote, or by recordings of particular performances.[citation needed] While there are differences
between particular performances of a classical work, a piece of classical music is generally held to transcend any
interpretation of it. The use of musical notation is an effective method for transmitting classical music, since
the written music contains the technical instructions for performing the work. In 1996–1997, a research study was
conducted on a large population of middle school students in the Cherry Creek School District in Denver, Colorado, USA.
The study showed that students who actively listen to classical music before studying had higher academic scores.
The research further indicated that students who listened to the music prior to an examination also had positively
elevated achievement scores. Students who listened to rock-and-roll or country had moderately lower scores. The study
further indicated that students who used classical music during the course of study had a significant leap in their academic
performance, whereas those who listened to other types of music had significantly lowered academic scores. The research
was conducted over several schools within the Cherry Creek School District, through the University of Colorado.
This study is consistent with several other recent studies that reported significant results (e.g., by Mike Manthei
and Steve N. Kelly of the University of Nebraska at Omaha, and by Donald A. Hodges and Debra S. O'Connell of the
University of North Carolina at Greensboro). During the 1990s, several research papers and popular books were written
on what came to be called the "Mozart effect": an observed temporary, small elevation
of scores on certain tests as a result of listening to Mozart's works. The approach has been popularized in a book
by Don Campbell, and is based on an experiment published in Nature suggesting that listening to Mozart temporarily
boosted students' IQ by 8 to 9 points. This popularized version of the theory was expressed succinctly by the New
York Times music columnist Alex Ross: "researchers... have determined that listening to Mozart actually makes you
smarter." Promoters marketed CDs that were claimed to induce the effect. Florida passed a law requiring toddlers in state-run
schools to listen to classical music every day, and in 1998 the governor of Georgia budgeted $105,000 per year to
provide every child born in Georgia with a tape or CD of classical music. One of the co-authors of the original studies
of the Mozart effect commented "I don't think it can hurt. I'm all for exposing children to wonderful cultural experiences.
But I do think the money could be better spent on music education programs." Shawn Vancour argues that the commercialization
of classical music in the early 20th century served to harm the music industry through inadequate representation.
Several works from the Golden Age of Animation matched the action to classical music. Notable examples are Walt Disney's
Fantasia, Tom and Jerry's Johann Mouse, and Warner Bros.' Rabbit of Seville and What's Opera, Doc?. Similarly, movies
and television often revert to standard, clichéd snatches of classical music to convey refinement or opulence: some
of the most-often heard pieces in this category include Bach's Cello Suite No. 1, Mozart's Eine kleine Nachtmusik,
Vivaldi's Four Seasons, Mussorgsky's Night on Bald Mountain (as orchestrated by Rimsky-Korsakov), and Rossini's William
Tell Overture. The written score, however, does not usually contain explicit instructions as to how to interpret
the piece in terms of production or performance, apart from directions for dynamics, tempo and expression (to a certain
extent). This is left to the discretion of the performers, who are guided by their personal experience and musical
education, their knowledge of the work's idiom, their personal artistic tastes, and the accumulated body of historic
performance practices. Some critics express the opinion that it is only from the mid-19th century, and especially
in the 20th century, that the score began to hold such a high significance. Previously, improvisation (in preludes,
cadenzas and ornaments), rhythmic flexibility (e.g., tempo rubato), improvisatory deviation from the score, and an oral
tradition of playing were integral to the style. Yet in the 20th century, this oral tradition and the passing on of stylistic
features within classical music disappeared. Instead, musicians tend to use just the score to play music. Yet, even
with the score providing the key elements of the music, there is considerable controversy about how to perform the
works. Some of this controversy relates to the fact that this score-centric approach has led to performances that
emphasize metrically strict block-rhythms (just as the music is notated in the score). Improvisation once played
an important role in classical music. A remnant of this improvisatory tradition in classical music can be heard in
the cadenza, a passage found mostly in concertos and solo works, designed to allow skilled performers to exhibit
their virtuoso skills on the instrument. Traditionally this was improvised by the performer; however, it is often
written for (or occasionally by) the performer beforehand. Improvisation is also an important aspect in authentic
performances of operas of the Baroque era and of bel canto (especially operas of Vincenzo Bellini), and is best exemplified
by the da capo aria, a form by which famous singers typically perform variations of the thematic matter of the aria
in the recapitulation section ('B section' / the 'da capo' part). An example is Beverly Sills' complex, albeit pre-written,
variation of Da tempeste il legno infranto from Händel's Giulio Cesare. Certain staples of classical music are often
used commercially (either in advertising or in movie soundtracks). In television commercials, several passages have
become clichéd, particularly the opening of Richard Strauss' Also sprach Zarathustra (made famous in the film 2001:
A Space Odyssey) and the opening section "O Fortuna" of Carl Orff's Carmina Burana, often used in the horror genre;
other examples include the Dies Irae from the Verdi Requiem, Edvard Grieg's In the Hall of the Mountain King from
Peer Gynt, the opening bars of Beethoven's Symphony No. 5, Wagner's Ride of the Valkyries from Die Walküre, Rimsky-Korsakov's
Flight of the Bumblebee, and excerpts of Aaron Copland's Rodeo. Composers of classical music have often made use
of folk music (music created by musicians who are commonly not classically trained, often from a purely oral tradition).
Some composers, like Dvořák and Smetana, have used folk themes to impart a nationalist flavor to their work, while
others like Bartók have used specific themes lifted whole from their folk-music origins. Its written transmission,
along with the veneration bestowed on certain classical works, has led to the expectation that performers will play
a work in a way that realizes in detail the original intentions of the composer. During the 19th century the details
that composers put in their scores generally increased. Yet the opposite trend—admiration of performers for new "interpretations"
of the composer's work—can be seen, and it is not unknown for a composer to praise a performer for achieving a better
realization of the original intent than the composer was able to imagine. Thus, classical performers often achieve
high reputations for their musicianship, even if they do not compose themselves. Generally however, it is the composers
who are remembered more than the performers. The primacy of the composer's written score has also led, today, to
a relatively minor role played by improvisation in classical music, in sharp contrast to the practice of musicians
who lived during the Baroque, Classical and Romantic eras. Improvisation in classical music performance was common
during both the Baroque and early romantic eras, yet lessened strongly during the second half of the 19th and in
the 20th centuries. During the classical era, Mozart and Beethoven often improvised the cadenzas to their piano concertos
(and thereby encouraged others to do so), but they also provided written cadenzas for use by other soloists. In opera,
the practice of singing strictly by the score, i.e. come scritto, was famously propagated by soprano Maria Callas,
who called this practice 'straitjacketing' and implied that it allows the intention of the composer to be understood
better, especially when studying the music for the first time. Classical music has often incorporated elements
or material from popular music of the composer's time. Examples include occasional music such as Brahms' use of student
drinking songs in his Academic Festival Overture, genres exemplified by Kurt Weill's The Threepenny Opera, and the
influence of jazz on early- and mid-20th-century composers including Maurice Ravel, exemplified by the movement entitled
"Blues" in his sonata for violin and piano. Certain postmodern, minimalist and postminimalist classical composers
acknowledge a debt to popular music. Numerous examples show influence in the opposite direction, including popular
songs based on classical music, the use to which Pachelbel's Canon has been put since the 1970s, and the musical
crossover phenomenon, where classical musicians have achieved success in the popular music arena. In heavy metal,
a number of lead guitarists (playing electric guitar) modeled their playing styles on Baroque or Classical era instrumental
music, including Ritchie Blackmore and Randy Rhoads.
Slavs are the largest Indo-European ethno-linguistic group in Europe. They inhabit Central Europe, Eastern Europe, Southeast
Europe, North Asia and Central Asia. Slavs speak Indo-European Slavic languages and share, to varying degrees, some
cultural traits and historical backgrounds. From the early 6th century they spread to inhabit most of Central and
Eastern Europe and Southeast Europe, whilst Slavic mercenaries fighting for the Byzantines and Arabs settled Asia
Minor and even as far as Syria. The East Slavs colonised Siberia and Central Asia.[better source needed] Presently
over half of Europe's territory is inhabited by Slavic-speaking communities, but every Slavic ethnicity has emigrated
to other continents. Present-day Slavic people are classified into West Slavic (chiefly Poles, Czechs and Slovaks),
East Slavic (chiefly Russians, Belarusians, and Ukrainians), and South Slavic (chiefly Serbs, Bulgarians, Croats,
Bosniaks, Macedonians, Slovenes, and Montenegrins), though sometimes the West Slavs and East Slavs are combined into
a single group known as North Slavs. For a more comprehensive list, see the ethnocultural subdivisions. Modern Slavic
nations and ethnic groups are considerably diverse both genetically and culturally, and relations between them –
even within the individual ethnic groups themselves – are varied, ranging from a sense of connection to mutual feelings
of hostility. The Slavic autonym is reconstructed in Proto-Slavic as *Slověninъ, plural *Slověne. The oldest documents
written in Old Church Slavonic and dating from the 9th century attest Словѣне Slověne to describe the Slavs. Other
early Slavic attestations include Old East Slavic Словѣнѣ Slověně for "an East Slavic group near Novgorod." However,
the earliest written references to the Slavs under this name are in other languages. In the 6th century AD Procopius,
writing in Byzantine Greek, refers to the Σκλάβοι Sklaboi, Σκλαβηνοί Sklabēnoi, Σκλαυηνοί Sklauenoi, Σθλαβηνοί Sthlabenoi,
or Σκλαβῖνοι Sklabinoi, while his contemporary Jordanes refers to the Sclaveni in Latin. The Slavic autonym *Slověninъ
is usually considered (e.g. by Roman Jakobson) a derivation from slovo "word", originally denoting "people who speak
(the same language)," i.e. people who understand each other, in contrast to the Slavic word denoting "foreign people"
– němci, meaning "mumbling, murmuring people" (from Slavic *němъ – "mumbling, mute"). The word slovo ("word") and
the related slava ("fame") and slukh ("hearing") originate from the Proto-Indo-European root *ḱlew- ("be spoken of,
fame"), cognate with Ancient Greek κλέος (kléos, "fame"), whence the name Pericles, and Latin clueo ("be called"),
and English loud. The English word Slav could be derived from the Middle English word sclave, which was borrowed
from Medieval Latin sclavus or slavus, itself a borrowing of Byzantine Greek σκλάβος sklábos "slave," which was
in turn apparently derived from a misunderstanding of the Slavic autonym (denoting a speaker of their own languages).
The Byzantine term Sklavinoi was loaned into Arabic as Saqaliba صقالبة (sing. Saqlabi صقلبي) by medieval Arab historiographers.
However, the origin of this word is disputed. Alternative proposals for the etymology of *Slověninъ propounded by
some scholars have much less support. Lozinski argues that the word *slava once had the meaning of worshipper, in
this context meaning "practicer of a common Slavic religion," and from that evolved into an ethnonym. S.B. Bernstein
speculates that it derives from a reconstructed Proto-Indo-European *(s)lawos, cognate to Ancient Greek λαός laós
"population, people," which itself has no commonly accepted etymology. Meanwhile, others have pointed out that the
suffix -enin indicates a man from a certain place, which in this case should be a place called Slova or Slava, possibly
a river name. The Old East Slavic Slavuta for the Dnieper River was argued by Henrich Bartek (1907–1986) to be derived
from slova and also to be the origin of the name Slovene. The earliest mentions of Slavic raids across the lower River Danube may
be dated to the first half of the 6th century, yet no archaeological evidence of a Slavic settlement in the Balkans
could be securely dated before c. 600 AD. The Slavs, under the names of the Antes and the Sclaveni, make their first appearance
in Byzantine records in the early 6th century. Byzantine historiographers under Justinian I (527–565), such as Procopius
of Caesarea, Jordanes and Theophylact Simocatta describe tribes of these names emerging from the area of the Carpathian
Mountains, the lower Danube and the Black Sea, invading the Danubian provinces of the Eastern Empire. Procopius wrote
in 545 that "the Sclaveni and the Antae actually had a single name in the remote past; for they were both called
Spori in olden times." He also describes their social structure and beliefs. Jordanes tells us that the Sclaveni had swamps
and forests for their cities. Another 6th-century source refers to them living among nearly impenetrable forests,
rivers, lakes, and marshes. Menander Protector mentions a Daurentius (577–579) who slew an Avar envoy of Khagan
Bayan I. The Avars had asked the Slavs to accept Avar suzerainty; Daurentius declined and is reported as
saying: "Others do not conquer our land, we conquer theirs – so it shall always be for us". The relationship between
the Slavs and a tribe called the Veneti east of the River Vistula in the Roman period is uncertain. The name may
refer both to Balts and Slavs. According to eastern homeland theory, prior to becoming known to the Roman world,
Slavic-speaking tribes were part of the many multi-ethnic confederacies of Eurasia – such as the Sarmatian, Hun and
Gothic empires. The Slavs emerged from obscurity when the westward movement of Germans in the 5th and 6th centuries
CE (thought to be in conjunction with the movement of peoples from Siberia and Eastern Europe: Huns, and later Avars
and Bulgars) started the great migration of the Slavs, who settled the lands abandoned by Germanic tribes fleeing
the Huns and their allies: westward into the country between the Oder and the Elbe-Saale line; southward into Bohemia,
Moravia, much of present-day Austria, the Pannonian plain and the Balkans; and northward along the upper Dnieper
river. Perhaps some Slavs migrated with the movement of the Vandals to Iberia and north Africa. Around the 6th century,
Slavs appeared on Byzantine borders in great numbers. The Byzantine records note that grass would not
regrow in places where the Slavs had marched through, so great were their numbers. After a military movement even
the Peloponnese and Asia Minor were reported to have Slavic settlements. This southern movement has traditionally
been seen as an invasive expansion. By the end of the 6th century, Slavs had settled the Eastern Alps regions. When
their migratory movements ended, there appeared among the Slavs the first rudiments of state organizations, each
headed by a prince with a treasury and a defense force. This period also saw the beginnings of class differentiation,
with nobles pledging allegiance either to the Frankish/Holy Roman Emperors or to the Byzantine Emperors. In the 7th century,
the Frankish merchant Samo, who supported the Slavs fighting their Avar rulers, became the ruler of the first known
Slav state in Central Europe, which, however, most probably did not outlive its founder and ruler. This provided
the foundation for subsequent Slavic states to arise on the former territory of this realm with Carantania being
the oldest of them. Very old also are the Principality of Nitra and the Moravian principality (see under Great Moravia).
In this period, there existed central Slavic groups and states such as the Balaton Principality, but the subsequent
expansion of the Magyars, as well as the Germanisation of Austria, separated the northern and southern Slavs. The
First Bulgarian Empire was founded in 681; the Slavic language Old Bulgarian became its main and official language
in 864. Bulgaria was instrumental in the spread of Slavic literacy and Christianity to the rest of the Slavic
world. As of 1878, there were only three free Slavic states in the world: the Russian Empire, Serbia and Montenegro.
Bulgaria was also free but was de jure a vassal of the Ottoman Empire until official independence was declared in 1908.
In the entire Austro-Hungarian Empire of approximately 50 million people, about 23 million were Slavs. The Slavic
peoples who were, for the most part, denied a voice in the affairs of the Austro-Hungarian Empire, were calling for
national self-determination. During World War I, representatives of the Czechs, Slovaks, Poles, Serbs, Croats, and
Slovenes set up organizations in the Allied countries to gain sympathy and recognition. In 1918, after World War
I ended, the Slavs established such independent states as Czechoslovakia, the Second Polish Republic, and the State
of Slovenes, Croats and Serbs. During World War II, Hitler's Generalplan Ost (general plan for the East) entailed
killing, deporting, or enslaving the Slavic and Jewish population of occupied Eastern Europe to create Lebensraum
(living space) for German settlers. The Nazi Hunger Plan and Generalplan Ost would have led to the starvation of
80 million people in the Soviet Union. These partially fulfilled plans resulted in the deaths of an estimated 19.3
million civilians and prisoners of war. The first half of the 20th century in Russia and the Soviet Union was marked
by a succession of wars, famines and other disasters, each accompanied by large-scale population losses. Stephen
J. Lee estimates that, by the end of World War II in 1945, the Russian population was about 90 million fewer than
it could have been otherwise. Because of the vastness and diversity of the territory occupied by Slavic people, there
were several centers of Slavic consolidation. In the 19th century, Pan-Slavism developed as a movement among intellectuals,
scholars, and poets, but it rarely influenced practical politics and did not find support in some nations that had
Slavic origins. Pan-Slavism became compromised when the Russian Empire started to use it as an ideology justifying
its territorial conquests in Central Europe as well as subjugation of other ethnic groups of Slavic origins such
as Poles and Ukrainians, and the ideology became associated with Russian imperialism. The common Slavic experience
of communism, combined with the repeated use of the ideology in Soviet propaganda after World War II within the
Eastern bloc (Warsaw Pact), amounted to a forced political and economic hegemony of the Russian-dominated USSR.
A notable political union of the 20th century that covered most South Slavs was Yugoslavia, but it ultimately broke
apart in the 1990s along with the Soviet Union. The word "Slavs" was used in the national anthem of the Slovak Republic
(1939–1945), Yugoslavia (1943–1992) and the Federal Republic of Yugoslavia (1992–2003), later Serbia and Montenegro
(2003–2006). Former Soviet states, as well as countries that used to be satellite states or territories of the Warsaw
Pact, have numerous minority Slavic populations, many of whom are originally from the Russian SFSR, Ukrainian SSR
and Byelorussian SSR. Currently, Kazakhstan has the largest Slavic minority population, most of whom are Russians (Ukrainians,
Belarusians and Poles are present as well but in much smaller numbers). Pan-Slavism, a movement which came into prominence
in the mid-19th century, emphasized the common heritage and unity of all the Slavic peoples. The main focus was in
the Balkans where the South Slavs had been ruled for centuries by other empires: the Byzantine Empire, Austria-Hungary,
the Ottoman Empire, and Venice. The Russian Empire used Pan-Slavism as a political tool; as did the Soviet Union,
which gained political-military influence and control over most Slavic-majority nations between 1945 and 1948 and
retained a hegemonic role until the period 1989–1991. Slavic studies began as an almost exclusively linguistic and
philological enterprise. As early as 1833, Slavic languages were recognized as Indo-European. Slavic standard languages
which are official in at least one country: Belarusian, Bosnian, Bulgarian, Croatian, Czech, Macedonian, Montenegrin,
Polish, Russian, Serbian, Slovak, Slovene, and Ukrainian. The alphabet used generally depends on the religion usual
for the respective Slavic ethnic group: the Orthodox use the Cyrillic alphabet, the Roman Catholics use the Latin
alphabet, and the Bosniaks, who are Muslim, also use the Latin alphabet. A few Greek Catholic and Roman Catholic
groups use the Cyrillic alphabet, however. The Serbian and Montenegrin languages use both the Cyrillic and Latin
alphabets. There is also a Latin script for writing Belarusian, called the Lacinka alphabet. Proto-Slavic, the supposed
ancestor language of all Slavic languages,
is a descendant of common Proto-Indo-European, via a Balto-Slavic stage in which it developed numerous lexical and
morphophonological isoglosses with the Baltic languages. In the framework of the Kurgan hypothesis, "the Indo-Europeans
who remained after the migrations [from the steppe] became speakers of Balto-Slavic". Proto-Slavic, sometimes referred
to as Common Slavic or Late Proto-Slavic, is defined as the last stage of the language preceding the geographical
split of the historical Slavic languages. That language was uniform; on the basis of borrowings from foreign
languages and Slavic borrowings into other languages, it cannot be said to have had any recognizable dialects, which
suggests a comparatively compact homeland. Slavic linguistic unity was to some extent visible as late as Old Church Slavonic
manuscripts which, though based on local Slavic speech of Thessaloniki, could still serve the purpose of the first
common Slavic literary language. The pagan Slavic populations were Christianized between the 6th and 10th centuries.
Orthodox Christianity is predominant in the East and South Slavs, while Roman Catholicism is predominant in West
Slavs and the western South Slavs. The religious borders are largely comparable to the East–West Schism which began
in the 11th century. The majority of contemporary Slavic populations who profess a religion are Orthodox, followed
by Catholic, while a small minority are Protestant. There are minor Slavic Muslim groups. Religious delineations
by nationality can be very sharp; usually in the Slavic ethnic groups the vast majority of religious people share
the same religion. Some Slavs are atheist or agnostic: only 19% of Czechs professed belief in god/s in the 2005 Eurobarometer
survey. Slavs are customarily divided along geographical lines into three major subgroups: West Slavs, East Slavs,
and South Slavs, each with a distinct and diverse background based on the unique history, religion and culture of
particular Slavic groups within them. Apart from prehistorical archaeological cultures, the subgroups have had notable
cultural contact with non-Slavic Bronze- and Iron Age civilisations.

^1 Also considered part of the Rusyns.
^2 Considered transitional between Ukrainians and Belarusians.
^3 The ethnic affiliation of the Lemkos has become an ideological conflict. It has been alleged that among the Lemkos the idea of a "Carpatho-Ruthenian" nation is supported only by Lemkos residing in Transcarpathia and abroad.
^4 Most inhabitants of historic Moravia considered themselves Czechs, but a significant number declared a Moravian nationality distinct from Czech (although people from Bohemia and Moravia use the same official language).
^5 Also considered Poles.
^6 There are sources that show Silesians as part of the Poles. Parts of the southernmost population of Upper Silesia are sometimes considered Czech (controversial).
^7 A census category recognized as an ethnic group. Most Slavic Muslims (especially in Bosnia, Croatia, Montenegro and Serbia) now opt for Bosniak ethnicity, but some still use the "Muslim" designation. Bosniak and Muslim are considered two ethnonyms for a single ethnicity, and the terms may even be used interchangeably. However, a small number of people within Bosnia and Herzegovina declare themselves Bosniak but are not necessarily Muslim by faith.
^8 This identity continues to be used by a minority throughout the former Yugoslav republics. The nationality is also declared by diasporans living in the USA and Canada. There are many reasons why people prefer this affiliation.
^9 Sub-groups of Croats include Bunjevci (in Bačka), Šokci (in Slavonia and Vojvodina), Janjevci (in Kosovo), Burgenland Croats (in Austria), Bosniaks (in Hungary), Molise Croats (in Italy), Krashovans (in Romania), and Moravian Croats (in the Czech Republic).
^10 Sub-groups of Slovenes include Prekmurians, Hungarian Slovenes, Carinthian Slovenes, Venetian Slovenes, Resians, and the extinct Carantanians and Somogy Slovenes.

Note: Besides
ethnic groups, Slavs often identify themselves with the local geographical region in which they live. Some of the
major regional South Slavic groups include: Zagorci in northern Croatia, Istrijani in westernmost Croatia, Dalmatinci
in southern Croatia, Boduli in Adriatic islands, Vlaji in hinterland of Dalmatia, Slavonci in eastern Croatia, Bosanci
in Bosnia, Hercegovci in Herzegovina, Krajišnici in western Bosnia (a term more commonly used to refer to the Serbs
of Croatia, most of whom are descendants of the Grenzers and lived in the area which made up the Military
Frontier until the Croatian War of Independence), Semberci in northeast Bosnia, Srbijanci in Serbia proper, Šumadinci
in central Serbia, Vojvođani in northern Serbia, Sremci in Syrmia, Bačvani in northwest Vojvodina, Banaćani in Banat,
Sandžaklije (Muslims in Serbia/Montenegro border), Kosovci in Kosovo, Bokelji in southwest Montenegro, Trakiytsi
in Upper Thracian Lowlands, Dobrudzhantsi in north-east Bulgarian region, Balkandzhii in Central Balkan Mountains,
Miziytsi in north Bulgarian region, Warmiaks and Masurians in north-east Polish regions Warmia and Mazuria, Pirintsi
in Blagoevgrad Province, Ruptsi in the Rhodopes etc. The modern Slavic peoples carry a variety of mitochondrial DNA
haplogroups and Y-chromosome DNA haplogroups. Yet two paternal haplogroups predominate: R1a1a [M17] and I2a2a [L69.2=T/S163.2].
The frequency of Haplogroup R1a ranges from 63.39% in the Sorbs, through 56.4% in Poland, 54% in Ukraine, and 52% in
Russia and Belarus, down to 15.2% in the Republic of Macedonia, 14.7% in Bulgaria and 12.1% in Herzegovina. The correlation between
R1a1a [M17] and the speakers of Indo-European languages, particularly those of Eastern Europe (Russian) and Central
and Southern Asia, was noticed in the late 1990s. From this Spencer Wells and colleagues, following the Kurgan hypothesis,
deduced that R1a1a arose on the Pontic-Caspian steppe. Specific studies of Slavic genetics followed. In 2007 Rębała
and colleagues studied several Slavic populations with the aim of localizing the Proto-Slavic homeland. The significant
findings of this study are that: Marcin Woźniak and colleagues (2010) searched for specifically Slavic sub-group
of R1a1a [M17]. Working with haplotypes, they found a pattern among Western Slavs which turned out to correspond
to a newly discovered marker, M458, which defines subclade R1a1a7. This marker correlates remarkably well with the
distribution of Slavic-speakers today. The team led by Peter Underhill, which discovered M458, did not consider the
possibility that this was a Slavic marker, since they used the "evolutionary effective" mutation rate, which gave
a date far too old to be Slavic. Woźniak and colleagues pointed out that the pedigree mutation rate, giving a later
date, is more consistent with the archaeological record. Pomors are distinguished by the presence of Y Haplogroup
N among them. Postulated to originate from southeast Asia, it is found at high rates in Uralic peoples. Its presence
in Pomors (called "Northern Russians" in the report) attests to mixing with non-Slavic Finnic tribes
of northern Eurasia. Autosomally, Russians are generally similar to populations in central-eastern Europe but some
northern Russians are intermediate to Finno-Ugric groups. On the other hand, I2a1b1 (P41.2) is typical of the South
Slavic populations, being highest in Bosnia-Herzegovina (>50%). Haplogroup I2a2 is also commonly found in north-eastern
Italians. There is also a high concentration of I2a2a in the Moldavian region of Romania, Moldova and western Ukraine.
According to original studies, Hg I2a2 was believed to have arisen in the west Balkans sometime after the LGM, subsequently
spreading from the Balkans through the Central Russian Plain. Recently, Ken Nordtvedt has split I2a2 into two clades
– N (northern) and S (southern), according to where they arose relative to the Danube river. He proposes that N is slightly
older than S. He recalculated the age of I2a2 to be ~ 2550 years and proposed that the current distribution is explained
by a Slavic expansion from the area north-east of the Carpathians. In 2008, biochemist Boris Arkadievich Malyarchuk
(Russian: Борис Аркадьевич Малярчук) et al. of the Institute of Biological Problems of the North, Russian Academy
of Sciences, Magadan, Russia, used a sample (n=279) of Czech individuals to determine the frequency of "Mongoloid"
"mtDNA lineages". Malyarchuk found Czech mtDNA lineages were typical of "Slavic populations" with "1.8%" Mongoloid
mtDNA lineage. Malyarchuk added that "Slavic populations" "almost always" contain Mongoloid mtDNA lineage. Malyarchuk
said the Mongoloid component of Slavic people was partially added before the split of "Balto-Slavics" in 2,000–3,000
BC with additional Mongoloid mixture occurring among Slavics in the last 4,000 years. Malyarchuk said the "Russian
population" was developed by the "assimilation of the indigenous pre-Slavic population of Eastern Europe by true
Slavs" with additional "assimilation of Finno-Ugric populations" and "long-lasting" interactions with the populations
of "Siberia" and "Central Asia". Malyarchuk said that other Slavs' "Mongoloid component" was increased during the
waves of migration from "steppe populations (Huns, Avars, Bulgars and Mongols)", especially the decay of the "Avar
Khaganate". DNA samples from 1228 Russians show that, of the Y chromosomes analyzed, all except 20 (1.6%) fall into seven
major haplogroups, all characteristic of West Eurasian populations. Taken together, they account for 95% of the total
Russian Y chromosomal pool. Only 0.7% fell into haplogroups that are specific to East and South Asian populations.
Mitochondrial DNA (mtDNA) examined in Poles and Russians revealed the presence of all major European haplogroups,
which were characterized by similar patterns of distribution in Poles and Russians. An analysis of the DNA did not
reveal any specific combinations of unique mtDNA haplotypes and their subclusters. The DNA clearly shows that both
Poles and Russians are not different from the neighbouring European populations. Throughout their history, Slavs
came into contact with non-Slavic groups. In the postulated homeland region (present-day Ukraine), they had contacts
with the Iranic Sarmatians and the Germanic Goths. After their subsequent spread, they began assimilating non-Slavic
peoples. For example, in the Balkans, there were Paleo-Balkan peoples, such as Romanized and Hellenized (Jireček
Line) Illyrians, Thracians and Dacians, as well as Greeks and Celtic Scordisci. Over time, due to the larger number
of Slavs, most descendants of the indigenous populations of the Balkans were Slavicized. The Thracians and Illyrians
vanished from the population during this period – although the modern Albanian nation claims descent from the Illyrians.
Exceptions are Greece, where the smaller numbers of Slavs scattered there came to be Hellenized (aided in time by more
Greeks returning to Greece in the 9th century and by the role of the church and administration), and Romania, where
Slavic people settled en route to present-day Greece, the Republic of Macedonia, Bulgaria and East Thrace but were
eventually assimilated. The Bulgars were also assimilated by local Slavs, but their ruling status cast the nominal
legacy of the Bulgarian country and people onto all future generations. The Romance speakers within
the fortified Dalmatian cities managed to retain their culture and language for a long time, as Dalmatian Romance
was spoken until the high Middle Ages. However, they too were eventually assimilated into the body of Slavs. In the
Western Balkans, South Slavs and Germanic Gepids intermarried with Avar invaders, eventually producing a Slavicized
population. In Central Europe, the Slavs intermixed with Germanic and Celtic peoples, while the eastern
Slavs encountered Uralic and Scandinavian peoples. Scandinavians (Varangians) and Finnic peoples were involved in
the early formation of the Rus' state but were completely Slavicized after a century. Some Finno-Ugric tribes in
the north were also absorbed into the expanding Rus population. At the time of the Magyar migration, present-day
Hungary was inhabited by Slavs, numbering about 200,000, and by Romano-Dacians who were either assimilated or enslaved
by the Magyars. In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Kipchaks
and the Pechenegs, caused a massive migration of East Slavic populations to the safer, heavily forested regions of
the north. In the Middle Ages, groups of Saxon ore miners settled in medieval Bosnia, Serbia and Bulgaria where they
were Slavicized. Polabian Slavs (Wends) settled in parts of England (Danelaw), apparently as Danish allies. Polabian-Pomeranian
Slavs are also known to have even settled on Norse age Iceland. Saqaliba refers to the Slavic mercenaries and slaves
in the medieval Arab world in North Africa, Sicily and Al-Andalus. Saqaliba served as the caliphs' guards. In the 12th
century, there was intensification of Slavic piracy in the Baltics. The Wendish Crusade was started against the Polabian
Slavs in 1147, as a part of the Northern Crusades. Niklot, pagan chief of the Slavic Obodrites, began his open resistance
when Lothar III, Holy Roman Emperor, invaded Slavic lands. In August 1160 Niklot was killed and German colonization
(Ostsiedlung) of the Elbe-Oder region began. In Hanoverian Wendland, Mecklenburg-Vorpommern and Lusatia, the invaders
began Germanization. Early forms of Germanization were described by German monks: Helmold in the manuscript Chronicon
Slavorum and Adam of Bremen in Gesta Hammaburgensis ecclesiae pontificum. The Polabian language survived until the
beginning of the 19th century in what is now the German state of Lower Saxony. In Eastern Germany, around 20% of
Germans have Slavic paternal ancestry. Similarly, in Germany, around 20% of the foreign surnames are of Slavic origin.
Cossacks, although Slavic-speaking and Orthodox Christians, came from a mix of ethnic backgrounds, including Tatars
and other Turks. Many early members of the Terek Cossacks were Ossetians. The Gorals of southern Poland and northern
Slovakia are partially descended from Romance-speaking Vlachs who migrated into the region from the 14th to 17th
centuries and were absorbed into the local population. The population of Moravian Wallachia also descends from this
population. Conversely, some Slavs were assimilated into other populations. Although the majority continued south,
attracted by the riches of the territory which would become Bulgaria, a few remained in the Carpathian basin and
were ultimately assimilated into the Magyar or Romanian population. There is a large number of river names and other
placenames of Slavic origin in Romania.
Southampton (i/saʊθˈæmptən, -hæmptən/) is the largest city in the ceremonial county of Hampshire on the south coast of England,
and is situated 75 miles (121 km) south-west of London and 19 miles (31 km) north-west of Portsmouth. Southampton
is a major port and the closest city to the New Forest. It lies at the northernmost point of Southampton Water at
the confluence of the River Test and River Itchen, with the River Hamble joining to the south of the urban area.
The city, which is a unitary authority, has an estimated population of 253,651. The city's name is sometimes abbreviated
in writing to "So'ton" or "Soton", and a resident of Southampton is called a Sotonian. Significant employers in Southampton
include The University of Southampton, Southampton Solent University, Southampton Airport, Ordnance Survey, BBC South,
the NHS, ABP and Carnival UK. Southampton is noted for its association with the RMS Titanic, the Spitfire and more
generally in the World War II narrative as one of the departure points for D-Day, and more recently as the home port
of a number of the largest cruise ships in the world. Southampton has a large shopping centre and retail park called
WestQuay. In October 2014, the City Council approved a follow-up from the WestQuay park, called WestQuay Watermark.
Construction by Sir Robert McAlpine commenced in January 2015. Hammerson, the owners of the retail park, aim to have
at least 1,550 people employed on its premises at year-end 2016. In the 2001 census Southampton and Portsmouth were
recorded as parts of separate urban areas; however, by the time of the 2011 census they had merged to become
the sixth largest built-up area in England with a population of 855,569. This built-up area is part of the metropolitan
area known as South Hampshire, which is also known as Solent City, particularly in the media when discussing local
governance organisational changes. With a population of over 1.5 million this makes the region one of the United
Kingdom's most populous metropolitan areas. Archaeological finds suggest that the area has been inhabited since the
Stone Age. Following the Roman invasion of Britain in AD 43 and the conquest of the local Britons in AD 70, the
fortress settlement of Clausentum was established. It was an important trading port and defensive outpost of Winchester,
at the site of modern Bitterne Manor. Clausentum was defended by a wall and two ditches and is thought to have contained
a bath house. Clausentum was not abandoned until around 410. The Anglo-Saxons formed a new, larger, settlement across
the Itchen centred on what is now the St Mary's area of the city. The settlement was known as Hamwic, which evolved
into Hamtun and then Hampton. Archaeological excavations of this site have uncovered one of the best collections
of Saxon artefacts in Europe. It is from this town that the county of Hampshire gets its name. Viking raids from
840 onwards contributed to the decline of Hamwic in the 9th century, and by the 10th century a fortified settlement,
which became medieval Southampton, had been established. Following the Norman Conquest in 1066, Southampton became
the major port of transit between the then capital of England, Winchester, and Normandy. Southampton Castle was built
in the 12th century and by the 13th century Southampton had become a leading port, particularly involved in the import
of French wine in exchange for English cloth and wool. Surviving remains of 12th century merchants' houses such as
King John's House and Canute's Palace are evidence of the wealth that existed in the town at this time. In 1348,
the Black Death reached England via merchant vessels calling at Southampton. The town was sacked in 1338 by French,
Genoese and Monegasque ships (under Charles Grimaldi, who used the plunder to help found the principality of Monaco).
On visiting Southampton in 1339, Edward III ordered that walls be built to 'close the town'. The extensive rebuilding—part
of the walls dates from 1175—culminated in the completion of the western walls in 1380. Roughly half of the walls,
13 of the original towers, and six gates survive. The city walls include God's House Tower, built in 1417, the first
purpose-built artillery fortification in England. Over the years it has been used as home to the city's gunner, the
Town Gaol and even as storage for the Southampton Harbour Board. Until September 2011, it housed the Museum of Archaeology.
The walls were completed in the 15th century, but later development of several new fortifications along Southampton
Water and the Solent by Henry VIII meant that Southampton was no longer dependent upon its fortifications. On the
other hand, many of the medieval buildings once situated within the town walls are now in ruins or have disappeared
altogether. From successive incarnations of the motte and bailey castle, only a section of the bailey wall remains
today, lying just off Castle Way. The last remains of the Franciscan friary in Southampton, founded circa 1233 and
dissolved in 1538, were swept away in the 1940s. The site is now occupied by Friary House. Elsewhere, remnants of
the medieval water supply system devised by the friars can still be seen today. Constructed in 1290, the system carried
water from Conduit Head (remnants of which survive near Hill Lane, Shirley) some 1.7 kilometres to the site of the
friary inside the town walls. The friars granted use of the water to the town in 1310 and passed on ownership of
the water supply system itself in 1420. Further remains can be observed at Conduit House on Commercial Road. In 1642,
during the English Civil War, a Parliamentary garrison moved into Southampton. The Royalists advanced as far as Redbridge,
Southampton, in March 1644 but were prevented from taking the town. During the Middle Ages, shipbuilding became an
important industry for the town. Henry V's famous warship HMS Grace Dieu was built in Southampton. Walter Taylor's
18th century mechanisation of the block-making process was a significant step in the Industrial Revolution. From
1904 to 2004, the Thornycroft shipbuilding yard was a major employer in Southampton, building and repairing ships
used in the two World Wars. Prior to King Henry's departure for the Battle of Agincourt in 1415, the ringleaders
of the "Southampton Plot"—Richard, Earl of Cambridge, Henry Scrope, 3rd Baron Scrope of Masham, and Sir Thomas Grey
of Heton—were accused of high treason and tried at what is now the Red Lion public house in the High Street. They
were found guilty and summarily executed outside the Bargate. Southampton has been used for military embarkation,
including during 18th-century wars with the French, the Crimean war, and the Boer War. Southampton was designated
No. 1 Military Embarkation port during the Great War and became a major centre for treating the returning wounded
and POWs. It was also central to the preparations for the Invasion of Europe in 1944. Southampton became a spa town
in 1740. It had also become a popular site for sea bathing by the 1760s, despite the lack of a good quality beach.
Innovative buildings specifically for this purpose were built at West Quay, with baths that were filled and emptied
by the flow of the tide. The town experienced major expansion during the Victorian era. The Southampton Docks company
had been formed in 1835. In October 1838 the foundation stone of the docks was laid and the first dock opened in
1842. The structural and economic development of docks continued for the next few decades. The railway link to London
was fully opened in May 1840. Southampton subsequently became known as The Gateway to the Empire. In his 1854 book
"The Cruise of the Steam Yacht North Star" John Choules described Southampton thus: "I hardly know a town that can
show a more beautiful Main Street than Southampton, except it be Oxford. The High Street opens from the quay, and
under various names it winds in a gently sweeping line for one mile and a half, and is of very handsome width. The
variety of style and color of material in the buildings affords an exhibition of outline, light and color, that I
think is seldom equalled. The shops are very elegant, and the streets are kept exceedingly clean." The port was the
point of departure for the Pilgrim Fathers aboard Mayflower in 1620. In 1912, the RMS Titanic sailed from Southampton.
Four in five of the crew on board the vessel were Sotonians, with about a third of those who perished in the tragedy
hailing from the city. Southampton was subsequently the home port for the transatlantic passenger services operated
by Cunard with their Blue Riband liner RMS Queen Mary and her running mate RMS Queen Elizabeth. In 1938, Southampton
docks also became home to the flying boats of Imperial Airways. Southampton Container Terminals first opened in 1968
and has continued to expand. The Supermarine Spitfire was designed and developed in Southampton, evolving from the
Schneider trophy-winning seaplanes of the 1920s and 1930s. Its designer, R J Mitchell, lived in the Portswood area
of Southampton, and his house is today marked with a blue plaque. Heavy bombing of the factory in September 1940
destroyed it as well as homes in the vicinity, killing civilians and workers. World War II hit Southampton particularly
hard because of its strategic importance as a major commercial port and industrial area. Prior to the Invasion of
Europe, components for a Mulberry harbour were built here. After D-Day, Southampton docks handled military cargo
to help keep the Allied forces supplied, making it a key target of Luftwaffe bombing raids until late 1944. Southampton
docks were later featured in the television show 24: Live Another Day (Day 9: 9:00 p.m. – 10:00 p.m.). Some 630 people lost their
lives as a result of the air raids on Southampton and nearly 2,000 more were injured, not to mention the thousands
of buildings damaged or destroyed. Pockets of Georgian architecture survived the war, but much of the city was levelled.
There has been extensive redevelopment since World War II. Increasing traffic congestion in the 1920s led to partial
demolition of medieval walls around the Bargate in 1932 and 1938. However, a large portion of those walls remains.
A Royal Charter in 1952 upgraded University College at Highfield to the University of Southampton. Southampton acquired
city status, becoming the City of Southampton in 1964. After the establishment of Hampshire County Council under the Local Government Act 1888, Southampton became a county borough within the county of Hampshire, which meant that it had many features of a county, but governance was now shared between the Corporation in Southampton and the new county council.
A great source of confusion lies in the fact that the ancient shire county, along with its associated assizes, was known as the County of Southampton or Southamptonshire. This was officially changed to Hampshire in 1959, although
the county had been commonly known as Hampshire or Hantscire for centuries. Southampton became a non-metropolitan
district in 1974. Southampton as a port and city has had a long history of administrative independence from the surrounding county; as far back as the reign of King John the town and its port were removed from the writ of the King's Sheriff
in Hampshire and the rights of custom and toll were granted by the King to the burgesses of Southampton over the
port of Southampton and the Port of Portsmouth; this tax farm was granted for an annual fee of £200 in the charter
dated at Orival on 29 June 1199. The definition of the port of Southampton was apparently broader than today and
embraced all of the area between Lymington and Langstone. The corporation had resident representatives in Newport,
Lymington and Portsmouth. By a charter of Henry VI, granted on 9 March 1446/7 (25+26 Hen. VI, m. 32), the mayor,
bailiffs and burgesses of the towns and ports of Southampton and Portsmouth became a County incorporate and separate
from Hampshire. The status of the town was changed by a later charter of Charles I, which at once formally separated it from Portsmouth and recognised Southampton as a county. In the charter, dated 27 June 1640, the formal title of the town became 'The Town and County of the Town of Southampton'. These charters and Royal Grants, of which there
were many, also set out the governance and regulation of the town and port, which remained the 'constitution' of the town until the local government reorganisation of the later Victorian period. From about 1888 this saw the setting up of county councils across England and Wales, including Hampshire County Council, which took on some of the functions of government in the town. Under this regime, the Town and County of the Town of Southampton also became a county borough with shared responsibility for aspects of local government. On 24 February 1964 the status changed
again by a Charter of Elizabeth II, creating the City and County of the City of Southampton. The city has undergone
many changes to its governance over the centuries and once again became administratively independent from Hampshire
County as it was made into a unitary authority in a local government reorganisation on 1 April 1997, a result of
the 1992 Local Government Act. The district remains part of the Hampshire ceremonial county. Southampton City Council
consists of 48 councillors, 3 for each of the 16 wards. Council elections are held in early May for one third of
the seats (one councillor for each ward), elected for a four-year term, so there are elections three years out of
four. Since the 2015 council elections, the composition of the council is: There are three members of parliament
for the city: Royston Smith (Conservative) for Southampton Itchen, the constituency covering the east of the city;
Dr. Alan Whitehead (Labour) for Southampton Test, which covers the west of the city; and Caroline Nokes (Conservative)
for Romsey and Southampton North, which includes a northern portion of the city. The city has a Mayor and is one
of the 16 cities and towns in England and Wales to have a ceremonial sheriff who acts as a deputy for the Mayor.
The current and 793rd Mayor of Southampton is Linda Norris. Catherine McEwing is the current and 578th sheriff.
The town crier from 2004 until his death in 2014 was John Melody, who acted as master of ceremonies in the city and
who possessed a cry of 104 decibels. Southampton City Council has developed twinning links with Le Havre in France (since 1973); Rems-Murr-Kreis in Germany (since 1991); Trieste in Italy (since 2002); Hampton, Virginia, in the USA; Qingdao in China (since 1998); and Busan in South Korea (since 1978). The geography of Southampton is influenced by the sea
and rivers. The city lies at the northern tip of the Southampton Water, a deep water estuary, which is a ria formed
at the end of the last Ice Age. Here, the rivers Test and Itchen converge. The Test—which has salt marsh that makes
it ideal for salmon fishing—runs along the western edge of the city, while the Itchen splits Southampton in two—east
and west. The city centre is located between the two rivers. Town Quay is the original public quay, and dates from
the 13th century. Today's Eastern Docks were created in the 1830s by land reclamation of the mud flats between the
Itchen & Test estuaries. The Western Docks date from the 1930s when the Southern Railway Company commissioned a major
land reclamation and dredging programme. Most of the material used for reclamation came from dredging of Southampton
Water, to ensure that the port can continue to handle large ships. Southampton Water has the benefit of a double
high tide, with two high tide peaks, making the movement of large ships easier. This is not caused, as popularly supposed, by the presence of the Isle of Wight, but is a function of the shape and depth of the English Channel. In this area
the general water flow is distorted by more local conditions reaching across to France. The River Test runs along
the western border of the city, separating it from the New Forest. There are bridges over the Test from Southampton,
including the road and rail bridges at Redbridge in the south and the M27 motorway to the north. The River Itchen
runs through the middle of the city and is bridged in several places. The northernmost bridge, and the first to be
built, is at Mansbridge, where the A27 road crosses the Itchen. The original bridge is closed to road traffic, but
is still standing and open to pedestrians and cyclists. The river is bridged again at Swaythling, where Woodmill
Bridge separates the tidal and non-tidal sections of the river. Further south is Cobden Bridge, which is notable as
it was opened as a free bridge (it was originally named the Cobden Free Bridge), and was never a toll bridge. Downstream
of the Cobden Bridge is the Northam Railway Bridge, then the Northam Road Bridge, which was the first major pre-stressed
concrete bridge to be constructed in the United Kingdom. The southernmost, and newest, bridge on the Itchen is the
Itchen Bridge, which is a toll bridge. Southampton is divided into council wards, suburbs, constituencies, ecclesiastical
parishes, and other less formal areas. It has a number of parks and green spaces, the largest being the 148 hectare
Southampton Common, parts of which are used to host the annual summer festivals, circuses and fun fairs. The Common
includes Hawthorns Urban Wildlife Centre on the former site of Southampton Zoo, a paddling pool and several lakes
and ponds. Council estates are in the Weston, Thornhill and Townhill Park districts. The city is ranked 96th most
deprived out of all 354 Local Authorities in England. As with the rest of the UK, Southampton experiences an oceanic
climate (Köppen Cfb). Its southerly, low lying and sheltered location ensures it is among the warmer, sunnier cities
in the UK. It has held the record for the highest temperature in the UK for June at 35.6 °C (96.1 °F) since 1976.
The centre of Southampton is located above a large hot water aquifer that provides geothermal power to some of the
city's buildings. This energy is processed at a plant in the West Quay region in Southampton city centre, the only
geothermal power station in the UK. The plant provides private electricity for the Port of Southampton and hot water
to the Southampton District Energy Scheme used by many buildings including the WestQuay shopping centre. In a 2006
survey of carbon emissions in major UK cities conducted by British Gas, Southampton was ranked as being one of the
lowest carbon emitting cities in the United Kingdom. At the 2001 Census, 92.4 per cent of the city's populace was
White—including one per cent White Irish—3.8 per cent were South Asian, 1.0 per cent Black, 1.3 per cent Chinese
or other ethnic groups, and 1.5 per cent were of Mixed Race. Southampton had an estimated 236,900 people living within
the city boundary in 2011. There is a sizeable Polish population in the city, with estimates as high as 20,000. There
are 119,500 males within the city and 117,400 females. The 20–24 age range is the most populous, with an estimated
32,300 people falling in this age range. Next largest is the 25–29 range with 24,700 people and then 30–34 years
with 17,800. By population, Southampton is the largest monocentric city in the South East England region and the
second largest on the South Coast after Plymouth. Between 1996 and 2004, the population of the city increased by
4.9 per cent—the tenth biggest increase in England. In 2005, government statistics indicated that Southampton was the third most densely populated city in the country, after London and Portsmouth. Hampshire County Council
expects the city's population to grow by around a further two per cent between 2006 and 2013, adding around another
4,200 to the total number of residents. The highest increases are expected among the elderly. In March 2007 there
were 120,305 jobs in Southampton, and 3,570 people claiming job seeker's allowance, approximately 2.4 per cent of
the city's population. This compares with an average of 2.5 per cent for England as a whole. Just over a quarter
of the jobs available in the city are in the health and education sector. A further 19 per cent are in property and other business services, and the third largest sector is wholesale and retail, which accounts for 16.2 per cent. Between 1995
and 2004, the number of jobs in Southampton increased by 18.5 per cent. In January 2007, the average annual salary
in the city was £22,267. This was £1,700 lower than the national average and £3,800 less than the average for the
South East. Southampton has always been a port, and the docks have long been a major employer in the city. In particular,
it is a port for cruise ships; its heyday was the first half of the 20th century, and in particular the inter-war
years, when it handled almost half the passenger traffic of the UK. Today it remains home to luxury cruise ships,
as well as being the largest freight port on the Channel coast and fourth largest UK port by tonnage, with several
container terminals. Unlike some other ports, such as Liverpool, London, and Bristol, where industry and docks have
largely moved out of the city centres leaving room for redevelopment, Southampton retains much of its inner-city
industry. Despite the still active and expanding docklands to the west of the city centre, further enhanced with
the opening of a fourth cruise terminal in 2009, parts of the eastern docks have been redeveloped; the Ocean Village
development, which included a local marina and small entertainment complex, is a good example. Southampton is home
to the headquarters of both the Maritime and Coastguard Agency and the Marine Accident Investigation Branch of the
Department for Transport in addition to cruise operator Carnival UK. During the latter half of the 20th century,
a more diverse range of industry also came to the city, including aircraft and car manufacture, cables, electrical
engineering products, and petrochemicals. These now exist alongside the city's older industries of the docks, grain
milling, and tobacco processing. University Hospital Southampton NHS Foundation Trust is one of the city's largest
employers. It provides local hospital services to 500,000 people in the Southampton area and specialist regional
services to more than 3 million people across the South of England. The Trust owns and manages Southampton General
Hospital, the Princess Anne Hospital and a palliative care service at Countess Mountbatten House, part of the Moorgreen
Hospital site in the village of West End, just outside the city. Other major employers in the city include Ordnance
Survey, the UK's national mapping agency, whose headquarters is located in a new building on the outskirts of the
city, opened in February 2011. The Lloyd's Register Group has announced plans to move its London marine operations
to a specially developed site at the University of Southampton. The area of Swaythling is home to Ford's Southampton
Assembly Plant, where the majority of their Transit models are manufactured. Closure of the plant in 2013 was announced
in 2012, with the loss of hundreds of jobs. Southampton's largest retail centre, and 35th largest in the UK, is the
WestQuay Shopping Centre, which opened in September 2000 and hosts major high street stores including John Lewis
and Marks and Spencer. The centre was Phase Two of the West Quay development of the former Pirelli undersea cables
factory; the first phase of this was the West Quay Retail Park, while the third phase (Watermark Westquay) was put
on hold due to the recession. Work resumed in 2015, with plans for this third stage including shops, housing, a hotel and a public piazza alongside the Town Walls on Western Esplanade. Southampton has also been granted a licence
for a large casino. A further part of the redevelopment of the West Quay site resulted in a new store, opened on
12 February 2009, for Swedish home products retailer IKEA. Marlands is a smaller shopping centre, built in the 1990s
and located close to the northern side of WestQuay. Southampton currently has two disused shopping centres: the 1970s
Eaststreet mall, and the 1980s Bargate centre. Neither of these was ever commercially successful; the former has
been earmarked for redevelopment as a Morrison's supermarket, while the future of the latter is uncertain. There
is also the East Street area which has been designated for speciality shopping, with the aim of promoting smaller
retailers, alongside the chain store Debenhams. In 2007, Southampton was ranked 13th for shopping in the UK. Southampton's
strong economy is promoting redevelopment, and major projects are proposed, including the city's first skyscrapers
on the waterfront. The three towers proposed will stand 23 storeys high and will be surrounded by smaller apartment
blocks, office blocks and shops. There are also plans for a 15-storey hotel at the Ocean Village marina, and a 21-storey
hotel on the north eastern corner of the city centre, as part of a £100m development. According to 2004 figures,
Southampton contributes around £4.2 bn to the regional economy annually. The vast majority of this is from the service
sector, with the remainder coming from industry in the city. This figure has almost doubled since 1995. The city
is home to the longest surviving stretch of medieval walls in England, as well as a number of museums such as Tudor
House Museum, reopened on 30 July 2011 after undergoing extensive restoration and improvement; Southampton Maritime
Museum; God's House Tower, an archaeology museum about the city's heritage and located in one of the tower walls;
the Medieval Merchant's House; and Solent Sky, which focuses on aviation. The SeaCity Museum is located in the west
wing of the civic centre, formerly occupied by Hampshire Constabulary and the Magistrates' Court, and focuses on
Southampton's trading history and on the RMS Titanic. The museum received half a million pounds from the National
Lottery in addition to interest from numerous private investors and is budgeted at £28 million. The annual Southampton
Boat Show is held in September each year, with over 600 exhibitors present. It runs for just over a week at Mayflower
Park on the city's waterfront, where it has been held since 1968. The Boat Show itself is the climax of Sea City,
which runs from April to September each year to celebrate Southampton's links with the sea. The largest theatre in
the city is the 2,300 capacity Mayflower Theatre (formerly known as the Gaumont), which, as the largest theatre in
Southern England outside London, has hosted West End shows such as Les Misérables, The Rocky Horror Show and Chitty
Chitty Bang Bang, as well as regular visits from Welsh National Opera and English National Ballet. There is also
the Nuffield Theatre based at the University of Southampton's Highfield campus, which is the city's primary producing
theatre. It was awarded The Stage Award for Best Regional Theatre in 2015. It also hosts touring companies and local
performing societies (such as Southampton Operatic Society, the Maskers and the University Players). There are many
innovative art galleries in the city. The Southampton City Art Gallery at the Civic Centre is one of the best known
and as well as a nationally important Designated Collection, houses several permanent and travelling exhibitions.
The Millais Gallery at Southampton Solent University, the John Hansard Gallery at Southampton University as well
as smaller galleries including the Art House in Above Bar Street provide a different view. The city's Bargate is
also an art gallery run by the arts organisation "a space". A space also runs the Art Vaults project, which creatively
uses several of Southampton's medieval vaults, halls and cellars as venues for contemporary art installations. Southampton
has two large live music venues, the Mayflower Theatre (formerly the Gaumont Theatre) and the Guildhall. The Guildhall
has seen concerts from a wide range of popular artists including Pink Floyd, David Bowie, Delirious?, Manic Street
Preachers, The Killers, The Kaiser Chiefs, Amy Winehouse, Lostprophets, The Midnight Beast, Modestep, and All Time
Low. It also hosts classical concerts presented by the Bournemouth Symphony Orchestra, City of Southampton Orchestra,
Southampton Concert Orchestra, Southampton Philharmonic Choir and Southampton Choral Society. The city also has several
smaller music venues, including the Brook, The Talking Heads, The Soul Cellar, The Joiners and Turner Sims, as well
as smaller "club circuit" venues like Hampton's and Lennon's, and a number of public houses including the Platform
tavern, the Dolphin, the Blue Keys and many others. The Joiners has played host to such acts as Oasis, Radiohead,
Green Day, Suede, PJ Harvey, the Manic Street Preachers, Coldplay, the Verve, the Libertines and Franz Ferdinand,
while Hampton's and Lennon's have hosted early appearances by Kate Nash, Scouting for Girls and Band of Skulls. The
nightclub, Junk, has been nominated for the UK's best small nightclub, and plays host to a range of dance music's
top acts. The city is home or birthplace to a number of contemporary musicians such as R'n'B singer Craig David,
Coldplay drummer Will Champion, former Holloways singer Rob Skipper as well as 1980s popstar Howard Jones. Several
rock bands were formed in Southampton, including Band of Skulls, The Delays, Bury Tomorrow, Heart in Hand, Thomas
Tantrum (disbanded in 2011) and Kids Can't Fly (disbanded in 2014). James Zabiela, a highly regarded and recognised
name in dance music, is also from Southampton. Local media include the Southern Daily Echo newspaper based in Redbridge
and BBC South, which has its regional headquarters in the city centre opposite the civic centre. From there the BBC
broadcasts South Today, the local television news bulletin and BBC Radio Solent. The local ITV franchise is Meridian,
which has its headquarters in Whiteley, around nine miles (14 km) from the city. Until December 2004, the station's
studios were located in the Northam area of the city on land reclaimed from the River Itchen. That's Solent is a local television channel that began broadcasting in November 2014 and is based in and serves Southampton and Portsmouth. Southampton also has two community FM radio stations: the Queen's Award-winning Unity 101 Community Radio
(www.unity101.org) broadcasting full-time on 101.1 FM since 2006 to the Asian and Ethnic communities, and Voice FM
(http://www.voicefmradio.co.uk) located in St Mary's, which has been broadcasting full-time on 103.9 FM since September
2011, playing a wide range of music from rock to dance and Top 40. A third station, Awaaz FM (www.awaazfm.co.uk), is an internet-only radio station also catering for the Asian and ethnic communities. Commercial radio stations broadcasting
to the city include The Breeze, previously The Saint and currently broadcasting Hot adult contemporary music, Capital,
previously Power FM and Galaxy and broadcasting popular music, Wave 105 and Heart Hampshire, the latter previously
Ocean FM and both broadcasting adult contemporary music, and 106 Jack FM (www.jackradio.com), previously The Coast
106. In addition, Southampton University has a radio station called SURGE, broadcasting on the AM band as well as through
the web. Southampton is home to Southampton Football Club—nicknamed "The Saints"—who play in the Premier League at
St Mary's Stadium, having relocated in 2001 from their 103-year-old former stadium, "The Dell". They reached the
top flight of English football (First Division) for the first time in 1966, staying there for eight years. They lifted
the FA Cup with a shock victory over Manchester United in 1976, returned to the top flight two years later, and stayed
there for 27 years (becoming founder members of the Premier League in 1992) before they were relegated in 2005. The
club was promoted back to the Premier League in 2012 following a brief spell in the third-tier and severe financial
difficulties. In 2015, "The Saints" finished 7th in the Premier League, their highest league finish in 30 years,
after a remarkable season under new manager Ronald Koeman. Their highest league position came in 1984 when they were
runners-up in the old First Division. They were also runners-up in the 1979 Football League Cup final and 2003 FA
Cup final. Notable former managers include Ted Bates, Lawrie McMenemy, Chris Nicholl, Ian Branfoot and Gordon Strachan.
There is a strong rivalry with Portsmouth F.C. (the "South Coast derby"), which is located only about 30 km (19 mi)
away. The two local Sunday Leagues in the Southampton area are the City of Southampton Sunday Football League and
the Southampton and District Sunday Football League. Hampshire County Cricket Club play close to the city, at the
Rose Bowl in West End, after previously playing at the County Cricket Ground and the Antelope Ground, both near the
city centre. There is also the Southampton Evening Cricket League. The city hockey club, Southampton Hockey Club,
founded in 1938, is now one of the largest and most highly regarded clubs in Hampshire, fielding seven senior men's and five senior ladies' teams on a weekly basis, along with boys' and girls' teams from age six upwards. The city is also well provided
for in amateur men's and women's rugby with a number of teams in and around the city, the oldest of which is Trojans
RFC who were promoted to London South West 2 division in 2008/9. A notable former player is Anthony Allen, who played
with Leicester Tigers as a centre. Tottonians are also in London South West division 2 and Southampton RFC are in
Hampshire division 1 in 2009/10, alongside Millbrook RFC and Eastleigh RFC. Many of the sides run mini and midi teams
from under sevens up to under sixteens for both boys and girls. The city provides for yachting and water sports,
with a number of marinas. From 1977 to 2001 the Whitbread Round the World Race, now known as the Volvo Ocean Race, was based in Southampton's Ocean Village marina. The city also has the Southampton Sports Centre
which is the focal point for the public's sporting and outdoor activities and includes an Alpine Centre, theme park
and athletics centre which is used by professional athletes. There are also 11 other leisure venues, previously operated by the council, the operating rights to which have been sold to "Park Wood Leisure". Southampton was named "fittest city in the UK" in 2006 by Men's Fitness magazine. The results
were based on the incidence of heart disease, the amount of junk food and alcohol consumed, and the level of gym
membership. In 2007, it had slipped one place behind London, but was still ranked first when it came to the parks
and green spaces available for exercise and the amount of television watched by Sotonians was the lowest in the country.
Speedway racing took place at Banister Court Stadium in the pre-war era. It returned in the 1940s after WW2 and the
Saints operated until the stadium closed down at the end of 1963. A training track operated in the 1950s in the Hamble
area. Southampton is also home to one of the most successful College American Football teams in the UK, the Southampton
Stags, who play at the Wide Lane Sports Facility in Eastleigh. Southampton's police service is provided by Hampshire
Constabulary. The main base of the Southampton operation is a new, eight storey purpose-built building which cost
£30 million to construct. The building, located on Southern Road, opened in 2011 and is near to Southampton Central
railway station. Previously, the central Southampton operation was located within the west wing of the Civic Centre, but the ageing facilities, together with plans to construct a new museum in the old police station and magistrates' court, necessitated the move. There are additional police stations at Portswood, Banister Park, Bitterne, and Shirley
as well as a British Transport Police station at Southampton Central railway station. Southampton's fire cover is
provided by Hampshire Fire and Rescue Service. There are three fire stations within the city boundaries at St Mary's,
Hightown and Redbridge. According to Hampshire Constabulary figures, Southampton is currently safer than it has ever
been before, with dramatic reductions in violent crime year on year for the last three years. Data from the Southampton
Safer City Partnership shows there has been a reduction in all crimes in recent years and an increase in crime detection
rates. According to government figures, Southampton has a higher crime rate than the national average. There is some controversy regarding comparative crime statistics due to inconsistencies between different police forces' recording methodologies. For example, in Hampshire all reported incidents are recorded and all records then retained. However, in neighbouring Dorset crime reports withdrawn or shown to be false are not recorded, reducing apparent crime figures.
In the violence against the person category, the national average is 16.7 per 1000 population while Southampton is
42.4 per 1000 population. In the theft from a vehicle category, the national average is 7.6 per 1000 compared to
Southampton's 28.4 per 1000. Overall, for every 1,000 people in the city, 202 crimes are recorded. Hampshire Constabulary's
figures for 2009/10 show fewer incidents of recorded crime in Southampton than the previous year. The city has a
strong higher education sector. The University of Southampton and Southampton Solent University together have a student
population of over 40,000. The University of Southampton, which was founded in 1862 and received its Royal Charter
as a university in 1952, has over 22,000 students. The university is ranked in the top 100 research universities
in the world in the Academic Ranking of World Universities 2010. In 2010, the THES - QS World University Rankings
positioned the University of Southampton in the top 80 universities in the world. The university considers itself
one of the top 5 research universities in the UK. The university has a global reputation for research into engineering
sciences, oceanography, chemistry, cancer sciences, sound and vibration research, computer science and electronics,
optoelectronics, and textile conservation at the Textile Conservation Centre (which was due to close in October 2009).
It is also home to the National Oceanography Centre, Southampton (NOCS), the focus of Natural Environment Research
Council-funded marine research. Southampton Solent University has 17,000 students and its strengths are in the training,
design, consultancy, research and other services undertaken for business and industry. It is also host to the Warsash
Maritime Academy, which provides training and certification for the international shipping and off-shore oil industries.
In addition to school sixth forms at St Anne's and King Edward's there are two sixth form colleges: Itchen College
and Richard Taunton Sixth Form College. Some Southampton pupils travel outside the city, for example
to Barton Peveril College. Southampton City College is a further education college serving the city. The college
offers a range of vocational courses for school leavers, as well as ESOL programmes and Access courses for adult
learners. Over 40 per cent of school pupils in the city that responded to a survey claimed to have been the victim
of bullying. More than 2,000 took part and said that verbal bullying was the most common form, although physical
bullying was a close second for boys. Southampton has been reported to have the worst-behaved secondary schools in the UK: its suspension rate, approximately 1 in every 14 children, is three times the national average and the highest in the country for physical or verbal assaults against staff. Southampton is a major
UK port which has good transport links with the rest of the country. The M27 motorway, linking places along the south
coast of England, runs just to the north of the city. The M3 motorway links the city to London and also, via a link
to the A34 (part of the European route E05) at Winchester, with the Midlands and North. The M271 motorway is a spur
of the M27, linking it with the Western Docks and city centre. Southampton is also served by the rail network, which
is used both by freight services to and from the docks and passenger services as part of the national rail system.
The main station in the city is Southampton Central. Rail routes run east towards Portsmouth, north to Winchester,
the Midlands and London, and westwards to Bournemouth, Poole, Dorchester, Weymouth, Salisbury, Bristol and Cardiff.
The route to London was opened in 1840 by what was to become the London and South Western Railway Company. Both this
and its successor the Southern Railway (UK) played a significant role in the creation of the modern port following
their purchase and development of the town's docks. Local train services operate in the central, southern and eastern
sections of the city and are operated by South West Trains, with stations at Swaythling, St Denys, Millbrook, Redbridge,
Bitterne, Sholing and Woolston. Plans were announced by Hampshire County Council in July 2009 for the introduction
of tram-train running from Hythe (on what is now a freight-only line to Fawley) via Totton to Southampton Central
Station and on to Fareham via St Denys and Swanwick. The proposal follows a failed plan to bring light rail to
the Portsmouth and Gosport areas in 2005. The town was the subject of an attempt by a separate company, the Didcot,
Newbury and Southampton Railway, to open another rail route to the North in the 1880s and some building work, including
a surviving embankment, was undertaken in the Hill Lane area. Southampton Airport is a regional airport located in
the town of Eastleigh, just north of the city. It offers flights to UK and near European destinations, and is connected
to the city by a frequent rail service from Southampton Airport (Parkway) railway station, and by bus services. Many
of the world's largest cruise ships can regularly be seen in Southampton water, including record-breaking vessels
from Royal Caribbean and Carnival Corporation & plc. The latter has headquarters in Southampton, with its brands
including Princess Cruises, P&O Cruises and Cunard Line. The city has a particular connection to Cunard Line and
their fleet of ships. This was particularly evident on 11 November 2008 when the Cunard liner RMS Queen Elizabeth
2 departed the city for the final time amid a spectacular fireworks display after a full day of celebrations. Cunard
ships are regularly named in the city; for example, Queen Victoria was named by HRH The Duchess of Cornwall in December 2007, and the Queen named Queen Elizabeth in the city in October 2011. The Duchess of Cambridge performed
the naming ceremony of Royal Princess on 13 June 2013. At certain times of the year, The Queen Mary 2, Queen Elizabeth
and Queen Victoria may all visit Southampton at the same time, in an event commonly called 'Arrival of the Three
Queens'. The importance of Southampton to the cruise industry was indicated by P&O Cruises's 175th anniversary celebrations,
which included all seven of the company's liners visiting Southampton in a single day. Adonia, Arcadia, Aurora, Azura,
Oceana, Oriana and Ventura all left the city in a procession on 3 July 2012. While Southampton is no longer the base
for any cross-channel ferries, it is the terminus for three internal ferry services, all of which operate from terminals
at Town Quay. Two of these, a car ferry service and a fast catamaran passenger ferry service, provide links to East
Cowes and Cowes respectively on the Isle of Wight and are operated by Red Funnel. The third ferry is the Hythe Ferry,
providing a passenger service to Hythe on the other side of Southampton Water. Southampton used to be home to a number
of ferry services to the continent, with destinations such as San Sebastian, Lisbon, Tangier and Casablanca. A ferry
port was built during the 1960s. However, a number of these relocated to Portsmouth and by 1996, there were no longer
any car ferries operating from Southampton with the exception of services to the Isle of Wight. The land used for
Southampton Ferry Port was sold off and a retail and housing development was built on the site. The Princess Alexandra
Dock was converted into a marina. Reception areas for new cars now fill the Eastern Docks where passengers, dry docks
and trains used to be. Buses now provide the majority of local public transport. The main bus operators are First
Southampton and Bluestar. Other operators include Brijan Tours, Stagecoach and Xelabus. The other large service provider
is the Uni-link bus service (running from early in the morning to midnight), which was commissioned by the University
of Southampton to provide transport from the university to the town. Previously run by Enterprise, it is now run
by Bluestar. Free buses are provided by City-link. The City-link runs from the Red Funnel ferry terminal at Town
Quay to Central station via WestQuay and is operated by Bluestar. There is also a door-to-door minibus service called
Southampton Dial a Ride, for residents who cannot access public transport. This is funded by the council and operated
by SCA Support Services. There are two main termini for bus services. As the biggest operator, First uses stops around
Pound Tree Road. This leaves the other terminal of West Quay available for other operators. Uni-link passes West
Quay in both directions, and Wilts & Dorset drop passengers off and pick them up there, terminating at a series of
bus stands along the road. Certain Bluestar services also do this, while others stop at Bargate and some loop round
West Quay, stopping at Hanover Buildings. There was a tram system from 1879 to 1949.
A treaty is an agreement under international law entered into by actors in international law, namely sovereign states and
international organizations. A treaty may also be known as an (international) agreement, protocol, covenant, convention,
pact, or exchange of letters, among other terms. Regardless of terminology, all of these forms of agreements are,
under international law, equally considered treaties and the rules are the same. Treaties can be loosely compared
to contracts: both are means of willing parties assuming obligations among themselves, and a party to either that
fails to live up to their obligations can be held liable under international law. A treaty is an official, express
written agreement that states use to legally bind themselves. A treaty is the official document which expresses that
agreement in words; and it is also the objective outcome of a ceremonial occasion which acknowledges the parties
and their defined relationships. Since the late 19th century, most treaties have followed a fairly consistent format.
A treaty typically begins with a preamble describing the contracting parties and their joint objectives in executing
the treaty, as well as summarizing any underlying events (such as a war). Modern preambles are sometimes structured
as a single very long sentence formatted into multiple paragraphs for readability, in which each of the paragraphs
begins with a verb (desiring, recognizing, having, and so on). The contracting parties' full names or sovereign titles
are often included in the preamble, along with the full names and titles of their representatives, and a boilerplate
clause about how their representatives have communicated (or exchanged) their full powers (i.e., the official documents
appointing them to act on behalf of their respective states) and found them in good or proper form. After the preamble
comes numbered articles, which contain the substance of the parties' actual agreement. Each article heading usually
encompasses a paragraph. A long treaty may further group articles under chapter headings. Modern treaties, regardless
of subject matter, usually contain articles governing where the final authentic copies of the treaty will be deposited
and how any subsequent disputes as to their interpretation will be peacefully resolved. The end of a treaty, the
eschatocol (or closing protocol), is often signaled by a clause like "in witness whereof" or "in faith whereof,"
the parties have affixed their signatures, followed by the words "DONE at," then the site(s) of the treaty's execution
and the date(s) of its execution. The date is typically written in its most formal, longest possible form. For example,
the Charter of the United Nations was "DONE at the city of San Francisco the twenty-sixth day of June, one thousand
nine hundred and forty-five." If the treaty is executed in multiple copies in different languages, that fact is always
noted, and is followed by a stipulation that the versions in different languages are equally authentic. The signatures
of the parties' representatives follow at the very end. When the text of a treaty is later reprinted, such as in
a collection of treaties currently in effect, an editor will often append the dates on which the respective parties
ratified the treaty and on which it came into effect for each party. Bilateral treaties are concluded between two
states or entities. It is possible, however, for a bilateral treaty to have more than two parties; consider for instance
the bilateral treaties between Switzerland and the European Union (EU) following the Swiss rejection of the European
Economic Area agreement. Each of these treaties has seventeen parties. These however are still bilateral, not multilateral,
treaties. The parties are divided into two groups, the Swiss ("on the one part") and the EU and its member states
("on the other part"). The treaty establishes rights and obligations between the Swiss and the EU and the member
states severally—it does not establish any rights and obligations amongst the EU and its member states.[citation
needed] A multilateral treaty is concluded among several countries. The agreement establishes rights and obligations
between each party and every other party. Multilateral treaties are often regional.[citation needed] Treaties of
"mutual guarantee" are international compacts, e.g., the Treaty of Locarno which guarantees each signatory against
attack from another. Reservations are essentially caveats to a state's acceptance of a treaty. Reservations are unilateral
statements purporting to exclude or to modify the legal obligation and its effects on the reserving state. These
must be included at the time of signing or ratification, i.e. "a party cannot add a reservation after it has already
joined a treaty". Originally, international law did not accept treaty reservations, rejecting them unless all
parties to the treaty accepted the same reservations. However, in the interest of encouraging the largest number
of states to join treaties, a more permissive rule regarding reservations has emerged. While some treaties still
expressly forbid any reservations, they are now generally permitted to the extent that they are not inconsistent
with the goals and purposes of the treaty. When a state limits its treaty obligations through reservations, other
states party to that treaty have the option to accept those reservations, object to them, or object and oppose them.
If the state accepts them (or fails to act at all), both the reserving state and the accepting state are relieved
of the reserved legal obligation as concerns their legal obligations to each other (accepting the reservation does
not change the accepting state's legal obligations as concerns other parties to the treaty). If the state opposes,
the parts of the treaty affected by the reservation drop out completely and no longer create any legal obligations
on the reserving and accepting state, again only as concerns each other. Finally, if the state objects and opposes,
there are no legal obligations under that treaty between those two state parties whatsoever. The objecting and opposing
state essentially refuses to acknowledge the reserving state is a party to the treaty at all. There are three ways
an existing treaty can be amended. First, formal amendment requires State parties to the treaty to go through the
ratification process all over again. The re-negotiation of treaty provisions can be long and protracted, and often
some parties to the original treaty will not become parties to the amended treaty. When determining the legal obligations of two states, one a party only to the original treaty and the other a party to the amended treaty, the states will be bound only by the terms they both agreed upon. Treaties can also be amended informally by the treaty executive council when the changes are only procedural or technical. A change in customary international law can also amend a treaty, where state behavior evinces a new interpretation of the legal obligations under the treaty. Minor corrections to a treaty may
be adopted by a procès-verbal; but a procès-verbal is generally reserved for changes to rectify obvious errors in
the text adopted, i.e. where the text adopted does not correctly reflect the intention of the parties adopting it.
In international law and international relations, a protocol is generally a treaty or international agreement that
supplements a previous treaty or international agreement. A protocol can amend the previous treaty, or add additional
provisions. Parties to the earlier agreement are not required to adopt the protocol. Sometimes this is made clearer
by calling it an "optional protocol", especially where many parties to the first agreement do not support the protocol.
Some examples: the United Nations Framework Convention on Climate Change (UNFCCC) established a framework for the
development of binding greenhouse gas emission limits, while the Kyoto Protocol contained the specific provisions
and regulations later agreed upon. Treaties may be seen as 'self-executing', in that merely becoming a party puts
the treaty and all of its obligations in action. Other treaties may be non-self-executing and require 'implementing
legislation'—a change in the domestic law of a state party that will direct or enable it to fulfill treaty obligations.
An example of a treaty requiring such legislation would be one mandating local prosecution by a party for particular
crimes. The division between the two is often not clear and is often politicized in disagreements within a government
over a treaty, since a non-self-executing treaty cannot be acted on without the proper change in domestic law. If
a treaty requires implementing legislation, a state may be in default of its obligations by the failure of its legislature
to pass the necessary domestic laws. The language of treaties, like that of any law or contract, must be interpreted
when the wording does not seem clear or it is not immediately apparent how it should be applied in a perhaps unforeseen
circumstance. The Vienna Convention states that treaties are to be interpreted "in good faith" according to the "ordinary
meaning given to the terms of the treaty in their context and in the light of its object and purpose." International
legal experts also often invoke the 'principle of maximum effectiveness,' which interprets treaty language as having
the fullest force and effect possible to establish obligations between the parties. No one party to a treaty can
impose its particular interpretation of the treaty upon the other parties. Consent may be implied, however, if the
other parties fail to explicitly disavow that initially unilateral interpretation, particularly if that state has
acted upon its view of the treaty without complaint. Consent by all parties to the treaty to a particular interpretation
has the legal effect of adding another clause to the treaty – this is commonly called an 'authentic interpretation'.
International tribunals and arbiters are often called upon to resolve substantial disputes over treaty interpretations.
To establish the meaning in context, these judicial bodies may review the preparatory work from the negotiation and
drafting of the treaty as well as the final, signed treaty itself. One significant part of treaty making is that
signing a treaty implies recognition that the other side is a sovereign state and that the agreement being considered
is enforceable under international law. Hence, nations can be very careful about terming an agreement to be a treaty.
For example, within the United States, agreements between states are compacts and agreements between states and the
federal government or between agencies of the government are memoranda of understanding. Another situation can occur
when one party wishes to create an obligation under international law, but the other party does not. This factor
has been at work with respect to discussions between North Korea and the United States over security guarantees and
nuclear proliferation. The terminology can also be confusing because a treaty may be, and usually is, named something other than a treaty, such as a convention, protocol, or simply agreement. Conversely, some legal documents such as
the Treaty of Waitangi are internationally considered to be documents under domestic law. Treaties are not necessarily
permanently binding upon the signatory parties. As obligations in international law are traditionally viewed as arising
only from the consent of states, many treaties expressly allow a state to withdraw as long as it follows certain
procedures of notification. For example, the Single Convention on Narcotic Drugs provides that the treaty will terminate
if, as a result of denunciations, the number of parties falls below 40. Many treaties expressly forbid withdrawal.
Article 56 of the Vienna Convention on the Law of Treaties provides that where a treaty is silent on whether or not it can be denounced, there is a rebuttable presumption that it cannot be unilaterally denounced unless it is established that the parties intended to admit the possibility of denunciation or withdrawal, or a right of denunciation or withdrawal may be implied by the nature of the treaty. The possibility of withdrawal thus depends on the terms of the treaty and its travaux préparatoires. It has, for example, been held that
it is not possible to withdraw from the International Covenant on Civil and Political Rights. When North Korea declared
its intention to do this the Secretary-General of the United Nations, acting as registrar, said that original signatories
of the ICCPR had not overlooked the possibility of explicitly providing for withdrawal, but rather had deliberately
intended not to provide for it. Consequently, withdrawal was not possible. In practice, because of sovereignty, any
state can withdraw from any treaty at any time. The question of whether this is permitted is really a question of
how other states will react to the withdrawal; for instance, another state might impose sanctions or go to war over
a treaty violation. If a state party's withdrawal is successful, its obligations under that treaty are considered
terminated, and withdrawal by one party from a bilateral treaty of course terminates the treaty. When a state withdraws
from a multilateral treaty, that treaty will generally remain in force among the remaining parties, unless it can or should be interpreted otherwise as agreed among the remaining states parties to the treaty.[citation needed] If a party has materially violated or breached its treaty obligations, the other parties may invoke this
breach as grounds for temporarily suspending their obligations to that party under the treaty. A material breach
may also be invoked as grounds for permanently terminating the treaty itself. A treaty breach does not automatically
suspend or terminate treaty relations, however. It depends on how the other parties regard the breach and how they
resolve to respond to it. Sometimes treaties will provide for the seriousness of a breach to be determined by a tribunal
or other independent arbiter. An advantage of such an arbiter is that it prevents a party from prematurely and perhaps
wrongfully suspending or terminating its own obligations due to another's alleged material breach. Treaties sometimes
include provisions for self-termination, meaning that the treaty is automatically terminated if certain defined conditions
are met. Some treaties are intended by the parties to be only temporarily binding and are set to expire on a given
date. Other treaties may self-terminate if the treaty is meant to exist only under certain conditions.[citation needed]
A party may claim that a treaty should be terminated, even absent an express provision, if there has been a fundamental
change in circumstances. Such a change is sufficient if unforeseen, if it undermined the "essential basis" of consent
by a party, if it radically transforms the extent of obligations between the parties, and if the obligations are
still to be performed. A party cannot base this claim on change brought about by its own breach of the treaty. This
claim also cannot be used to invalidate treaties that established or redrew political boundaries.[citation needed]
The Islamic prophet Muhammad carried out a siege against the Banu Qaynuqa tribe, known as the Invasion of Banu Qaynuqa, in February 624. Muhammad ordered his followers to attack the Banu Qaynuqa Jews for allegedly breaking the treaty known as the Constitution of Medina by pinning the clothes of a Muslim woman, which led to her being stripped naked. As a result, a Muslim killed a Jew in retaliation, and the Jews in turn killed the Muslim man. This escalated into a chain of revenge killings, and enmity grew between Muslims and the Banu Qaynuqa, leading to the siege of their fortress. The tribe eventually surrendered to Muhammad, who initially wanted to kill the members of Banu Qaynuqa but ultimately yielded to Abdullah ibn Ubayy's insistence and agreed to expel the Qaynuqa. Muhammad also ordered another siege on the Banu Qurayza during the Invasion of Banu Qurayza because, according to Muslim tradition, he had been ordered to do so by the angel Gabriel. Al-Waqidi claims Muhammad had a treaty with the tribe which was torn apart. Stillman and Watt deny the authenticity of al-Waqidi's account, and al-Waqidi has been frequently criticized by Muslim writers, who claim that he is unreliable. According to Tabari and Ibn Hisham, 600-900 members of the Banu Qurayza were beheaded after they surrendered; another source (a Sunni hadith) says all males and one woman were beheaded. Two Muslims were killed. There are several reasons an otherwise valid and agreed upon treaty may be rejected as a binding international
agreement, most of which involve problems created at the formation of the treaty.[citation needed] For example, the
series of Japan-Korea treaties of 1905, 1907 and 1910 were protested, and they were confirmed as "already null and void"
in the 1965 Treaty on Basic Relations between Japan and the Republic of Korea. A party's consent to a treaty is invalid
if it had been given by an agent or body without power to do so under that state's domestic law. States are reluctant
to inquire into the internal affairs and processes of other states, and so a "manifest violation" is required such
that it would be "objectively evident to any State dealing with the matter". A strong presumption exists internationally
that a head of state has acted within his proper authority. It seems that no treaty has ever actually been invalidated
on this provision.[citation needed] Consent is also invalid if it is given by a representative who ignored restrictions
he is subject to by his sovereign during the negotiations, if the other parties to the treaty were notified of those
restrictions prior to his signing.[citation needed] According to the preamble in The Law of Treaties, treaties are
a source of international law. If an act or lack thereof is condemned under international law, the act will not assume
international legality even if approved by internal law. This means that in case of a conflict with domestic law,
international law will always prevail. Articles 46–53 of the Vienna Convention on the Law of Treaties set out the
only ways that treaties can be invalidated—considered unenforceable and void under international law. A treaty will
be invalidated due to either the circumstances by which a state party joined the treaty, or due to the content of
the treaty itself. Invalidation is separate from withdrawal, suspension, or termination (addressed above), which
all involve an alteration in the consent of the parties of a previously valid treaty rather than the invalidation
of that consent in the first place. A state's consent may be invalidated if there was an erroneous understanding
of a fact or situation at the time of conclusion, which formed the "essential basis" of the state's consent. Consent
will not be invalidated if the misunderstanding was due to the state's own conduct, or if the truth should have been
evident. Consent will also be invalidated if it was induced by the fraudulent conduct of another party, or by the
direct or indirect "corruption" of its representative by another party to the treaty. Coercion of either a representative,
or the state itself through the threat or use of force, if used to obtain the consent of that state to a treaty,
will invalidate that consent. A treaty is null and void if it is in violation of a peremptory norm. These norms,
unlike other principles of customary law, are recognized as permitting no violations and so cannot be altered through
treaty obligations. These are limited to such universally accepted prohibitions as those against the aggressive use
of force, genocide and other crimes against humanity, piracy, hostilities directed at civilian population, racial
discrimination and apartheid, slavery and torture, meaning that no state can legally assume an obligation to commit
or permit such acts. The United Nations Charter states that treaties must be registered with the UN to be invoked
before it or enforced in its judiciary organ, the International Court of Justice. This was done to prevent the proliferation
of secret treaties that occurred in the 19th and 20th centuries. Article 103 of the Charter also states that its members'
obligations under it outweigh any competing obligations under other treaties. After their adoption, treaties as well
as their amendments have to follow the official legal procedures of the United Nations, as applied by the Office
of Legal Affairs, including signature, ratification and entry into force. In function and effectiveness, the UN has
been compared to the pre-Constitutional United States Federal government by some[citation needed], giving a comparison
between modern treaty law and the historical Articles of Confederation. The Brazilian federal constitution states
that the power to enter into treaties is vested in the president and that such treaties must be approved by Congress
(articles 84, clause VIII, and 49, clause I). In practice, this has been interpreted as meaning that the executive
branch is free to negotiate and sign a treaty, but its ratification by the president is contingent upon the prior
approval of Congress. Additionally, the Federal Supreme Court has ruled that, following ratification and entry into
force, a treaty must be incorporated into domestic law by means of a presidential decree published in the federal
register in order to be valid in Brazil and applicable by the Brazilian authorities. The Federal Supreme Court has
established that treaties are subject to constitutional review and enjoy the same hierarchical position as ordinary
legislation (leis ordinárias, or "ordinary laws", in Portuguese). A more recent ruling by the Supreme Court in 2008
has altered that scheme somewhat, by stating that treaties containing human rights provisions enjoy a status above
that of ordinary legislation, though they remain beneath the constitution itself. Additionally, as per the 45th amendment
to the constitution, human rights treaties which are approved by Congress by means of a special procedure enjoy the
same hierarchical position as a constitutional amendment. The hierarchical position of treaties in relation to domestic
legislation is of relevance to the discussion on whether (and how) the latter can abrogate the former and vice versa.
The Brazilian federal constitution does not have a supremacy clause with the same effects as the one on the U.S.
constitution, a fact that is of interest to the discussion on the relation between treaties and state legislation.
In the United States, the term "treaty" has a different, more restricted legal sense than exists in international
law. United States law distinguishes what it calls treaties from executive agreements, congressional-executive agreements,
and sole executive agreements. All four classes are equally treaties under international law; they are distinct only
from the perspective of internal American law. The distinctions are primarily concerning their method of approval.
Whereas treaties require advice and consent by two-thirds of the Senators present, sole executive agreements may
be executed by the President acting alone. Some treaties grant the President the authority to fill in the gaps with
executive agreements, rather than additional treaties or protocols. And finally, congressional-executive agreements
require majority approval by both the House and the Senate, either before or after the treaty is signed by the President.
Currently, international agreements are executed by executive agreement rather than treaties at a rate of 10:1. Despite
the relative ease of executive agreements, the President still often chooses to pursue the formal treaty process
over an executive agreement in order to gain congressional support on matters that require the Congress to pass implementing
legislation or appropriate funds, and those agreements that impose long-term, complex legal obligations on the United
States. For example, the nuclear deal between the United States, Iran and other countries is not a treaty. The Supreme Court ruled
in the Head Money Cases that "treaties" do not have a privileged position over Acts of Congress and can be repealed
or modified (for the purposes of U.S. law) by any subsequent Act of Congress, just like any other law. The Supreme Court also ruled in Reid v. Covert that any treaty provision that conflicts with the Constitution is null and void under U.S. law. In India, legislative subjects are divided into three lists: the Union List, the State List and the Concurrent List. In the normal legislative process, subjects on the Union List may be legislated upon only by the central legislature, the Parliament of India; for subjects on the State List, only the respective state legislature may legislate; and for Concurrent subjects, both the centre and the states may make laws. To implement international treaties, however, Parliament can legislate on any subject, overriding the general division of subject lists. Treaties formed
an important part of European colonization and, in many parts of the world, Europeans attempted to legitimize their
sovereignty by signing treaties with indigenous peoples. In most cases these treaties were on extremely disadvantageous terms for the native people, who often did not appreciate the implications of what they were signing. In some rare
cases, such as with Ethiopia and Qing Dynasty China, the local governments were able to use the treaties to at least
mitigate the impact of European colonization. This involved learning the intricacies of European diplomatic customs
and then using the treaties either to prevent a power from overstepping its agreement or to play different powers against each other. In other cases, such as New Zealand and Canada, treaties allowed native peoples to maintain a minimum
amount of autonomy. In the case of indigenous Australians, unlike with the Māori of New Zealand, no treaty was ever
entered into with the indigenous peoples entitling the Europeans to land ownership, under the doctrine of terra nullius
(later overturned by Mabo v Queensland, establishing the concept of native title well after colonization was already
a fait accompli). Such treaties between colonizers and indigenous peoples are an important part of political discourse in the late 20th and early 21st centuries; the treaties being discussed have international standing, as has been stated in a treaty study by the UN. Prior to 1871, the government of the United States regularly entered into treaties with
Native Americans but the Indian Appropriations Act of March 3, 1871 (ch. 120, 16 Stat. 563) had a rider (25 U.S.C.
§ 71) attached that effectively ended the President's treaty-making by providing that no Indian nation or tribe shall
be acknowledged as an independent nation, tribe, or power with whom the United States may contract by treaty. The
federal government continued to maintain similar contractual relations with the Indian tribes after 1871 through agreements,
statutes, and executive orders.
Josip Broz Tito (Cyrillic: Јосип Броз Тито, pronounced [jǒsip brôːz tîto]; born Josip Broz; 7 May 1892[nb 1] – 4 May 1980)
was a Yugoslav revolutionary and statesman, serving in various roles from 1943 until his death in 1980. During World
War II he was the leader of the Partisans, often regarded as the most effective resistance movement in occupied Europe.
While his presidency has been criticized as authoritarian, and concerns about the repression of political opponents
have been raised, Tito was "seen by most as a benevolent dictator" due to his economic and diplomatic policies. He
was a popular public figure both in Yugoslavia and abroad. Viewed as a unifying symbol, he pursued internal policies that maintained
the peaceful coexistence of the nations of the Yugoslav federation. He gained further international attention as
the chief leader of the Non-Aligned Movement, working with Jawaharlal Nehru of India, Gamal Abdel Nasser of Egypt
and Sukarno of Indonesia. He was General Secretary (later Chairman of the Presidium) of the League of Communists
of Yugoslavia (1939–80), and went on to lead the World War II Yugoslav guerrilla movement, the Partisans (1941–45).
After the war, he was the Prime Minister (1944–63) and President (later President for Life) (1953–80) of the Socialist
Federal Republic of Yugoslavia (SFRY). From 1943 to his death in 1980, he held the rank of Marshal of Yugoslavia,
serving as the supreme commander of the Yugoslav military, the Yugoslav People's Army (JNA). With a highly favourable
reputation abroad in both Cold War blocs, Josip Broz Tito received some 98 foreign decorations, including the Legion
of Honour and the Order of the Bath. Josip Broz was born to a Croat father and Slovene mother in the village of Kumrovec,
Croatia. Drafted into military service, he distinguished himself, becoming the youngest Sergeant Major in the Austro-Hungarian
Army of that time. After being seriously wounded and captured by the Imperial Russians during World War I, Josip
was sent to a work camp in the Ural Mountains. He participated in the October Revolution, and later joined a Red
Guard unit in Omsk. Upon his return home, Broz found himself in the newly established Kingdom of Yugoslavia, where
he joined the Communist Party of Yugoslavia (KPJ). Tito was the chief architect of the second Yugoslavia, a socialist
federation that lasted from 1943 to 1991–92. Despite being one of the founders of the Cominform, he soon became the first Cominform member to defy Soviet hegemony and the only one to manage to leave the Cominform and pursue its own socialist program. Tito was a backer of independent roads to socialism (sometimes referred to as "national communism"). In
1951 he implemented a self-management system that differentiated Yugoslavia from other socialist countries. A turn
towards a model of market socialism brought economic expansion in the 1950s and 1960s and a decline during the 1970s.
His internal policies included the suppression of nationalist sentiment and the promotion of the "brotherhood and
unity" of the six Yugoslav nations. After Tito's death in 1980, tensions between the Yugoslav republics emerged and
in 1991 the country disintegrated and went into a series of wars and unrest that lasted the rest of the decade, and
which continue to impact most of the former Yugoslav republics. He remains a very controversial figure in the Balkans.
Josip Broz was born on 7 May 1892 in Kumrovec, in the northern Croatian region of Hrvatsko Zagorje in Austria-Hungary.[nb
1] He was the seventh child of Franjo and Marija Broz. His father, Franjo Broz (26 November 1860 – 16 December 1936),
was a Croat, while his mother Marija (25 March 1864 – 14 January 1918), was a Slovene. His parents were married on
21 January 1891. After spending part of his childhood years with his maternal grandfather Martin Javeršek in the
Slovenian village of Podsreda, he entered primary school at Kumrovec in 1900; he failed the second grade and graduated
in 1905. In 1907 he moved out of the rural environment and started working as a machinist's apprentice in Sisak.
There, he became aware of the labour movement and celebrated 1 May – Labour Day for the first time. In 1910, he joined
the union of metallurgy workers and at the same time the Social-Democratic Party of Croatia and Slavonia. Between
1911 and 1913, Broz worked for shorter periods in Kamnik (1911–1912, factory "Titan"), Cenkov, Munich and Mannheim,
where he worked for the Benz car factory; then he went to Wiener Neustadt, Austria, and worked as a test driver for
Daimler. In the autumn of 1913, he was conscripted into the Austro-Hungarian Army. He was sent to a school for non-commissioned
officers and became a sergeant, serving in the 25th Croatian Regiment based in Zagreb. In May 1914, Broz won a silver
medal at an army fencing competition in Budapest. At the outbreak of World War I in 1914, he was sent to Ruma, where
he was arrested for anti-war propaganda and imprisoned in the Petrovaradin fortress. In January 1915, he was sent
to the Eastern Front in Galicia to fight against Russia. He distinguished himself as a capable soldier, becoming
the youngest Sergeant Major in the Austro-Hungarian Army. For his bravery in the face of the enemy, he was recommended
for the Silver Bravery Medal but was taken prisoner of war before it could be formally presented. On 25 March 1915,
while in Bukovina, he was seriously wounded and captured by the Russians. After 13 months at the hospital, Broz was
sent to a work camp in the Ural Mountains where prisoners selected him for their camp leader. In February 1917, revolting
workers broke into the prison and freed the prisoners. Broz subsequently joined a Bolshevik group. In April 1917,
he was arrested again but managed to escape and participate in the July Days demonstrations in Petrograd (St. Petersburg)
on 16–17 July 1917. On his way to Finland, Broz was caught and imprisoned in the Peter and Paul Fortress for three
weeks. He was again sent to Kungur, but escaped from the train. He hid with a Russian family in Omsk, Siberia where
he met his future wife Pelagija Belousova. After the October Revolution, he joined a Red Guard unit in Omsk. Following
a White counteroffensive, he fled to Kirgiziya and subsequently returned to Omsk, where he married Belousova. In
the spring of 1918, he joined the Yugoslav section of the Russian Communist Party. By June of the same year, Broz
left Omsk to find work and support his family, and was employed as a mechanic near Omsk for a year. In January 1920,
Tito and his wife made a long and difficult journey home to Yugoslavia, where they arrived in September. Upon his return,
Broz joined the Communist Party of Yugoslavia. The CPY's influence on the political life of the Kingdom of Yugoslavia
was growing rapidly. In the 1920 elections the Communists won 59 seats in the parliament and became the third strongest
party. Winning numerous local elections, they gained a stronghold in the second largest city of Zagreb, electing
Svetozar Delić for mayor. After the assassination of Milorad Drašković, the Yugoslav Minister of the Interior, by
a young communist on 2 August 1921, the CPY was declared illegal under the Yugoslav State Security Act of 1921. During
1920 and 1921 all Communist-won mandates were nullified. Broz continued his work underground despite pressure on
Communists from the government. As 1921 began he moved to Veliko Trojstvo near Bjelovar and found work as a machinist.
In 1925, Broz moved to Kraljevica where he started working at a shipyard. He was elected as a union leader and a
year later he led a shipyard strike. He was fired and moved to Belgrade, where he worked in a train coach factory
in Smederevska Palanka. He was elected as Workers' Commissary but was fired as soon as his CPY membership was revealed.
Broz then moved to Zagreb, where he was appointed secretary of Metal Workers' Union of Croatia. In 1928, he became
the Zagreb Branch Secretary of the CPY. In the same year he was arrested, tried in court for his illegal communist
activities, and sent to jail. During his five years at Lepoglava prison he met Moša Pijade, who became his ideological
mentor. After his release, he lived incognito and assumed numerous noms de guerre, among them "Walter" and "Tito".
In 1934 the Zagreb Provincial Committee sent Tito to Vienna where all the Central Committee of the Communist Party
of Yugoslavia had sought refuge. He was appointed to the Committee and began gathering allies around him, among them
Edvard Kardelj, Milovan Đilas, Aleksandar Ranković and Boris Kidrič. In 1935, Tito travelled to the Soviet Union,
working for a year in the Balkans section of Comintern. He was a member of the Soviet Communist Party and the Soviet
secret police (NKVD). Tito was also involved in recruiting for the Dimitrov Battalion, a group of volunteers serving
in the Spanish Civil War. In 1936, the Comintern sent "Comrade Walter" (i.e. Tito) back to Yugoslavia to purge the
Communist Party there. In 1937, Stalin had the Secretary-General of the CPY, Milan Gorkić, murdered in Moscow. Subsequently
Tito was appointed Secretary-General of the still-outlawed CPY. On 6 April 1941, German forces, with Hungarian and
Italian assistance, launched an invasion of Yugoslavia. On 10 April 1941, Slavko Kvaternik proclaimed the Independent
State of Croatia, and Tito responded by forming a Military Committee within the Central Committee of the Yugoslav
Communist Party. Attacked from all sides, the armed forces of the Kingdom of Yugoslavia quickly crumbled. On 17 April
1941, after King Peter II and other members of the government fled the country, the remaining representatives of
the government and military met with the German officials in Belgrade. They quickly agreed to end military resistance.
On 1 May 1941, Tito issued a pamphlet calling on the people to unite in a battle against the occupation. On 27 June
1941, the Central Committee of the Communist Party of Yugoslavia appointed Tito Commander in Chief of all national liberation military forces. On 1 July 1941, the Comintern sent precise instructions calling for immediate
action. Despite conflicts with the rival monarchic Chetnik movement, Tito's Partisans succeeded in liberating territory,
notably the "Republic of Užice". During this period, Tito held talks with Chetnik leader Draža Mihailović on 19 September
and 27 October 1941. It is said that Tito ordered his forces to assist escaping Jews, and that more than 2,000 Jews
fought directly for Tito. On 21 December 1941, the Partisans created the First Proletarian Brigade (commanded by
Koča Popović) and on 1 March 1942, Tito created the Second Proletarian Brigade. In liberated territories, the Partisans
organised People's Committees to act as civilian government. The Anti-Fascist Council of National Liberation of Yugoslavia
(AVNOJ) convened in Bihać on 26–27 November 1942 and in Jajce on 29 November 1943. In the two sessions, the resistance
representatives established the basis for post-war organisation of the country, deciding on a federation of the Yugoslav
nations. In Jajce, a 67-member "presidency" was elected and established a nine-member National Committee of Liberation
(five communist members) as a de facto provisional government. Tito was named President of the National Committee
of Liberation. With the growing possibility of an Allied invasion in the Balkans, the Axis began to divert more resources
to the destruction of the Partisans' main force and its high command. This meant, among other things, a concerted
German effort to capture Josip Broz Tito personally. On 25 May 1944, he managed to evade the Germans after the Raid
on Drvar (Operation Rösselsprung), an airborne assault outside his Drvar headquarters in Bosnia. After the Partisans
managed to endure and avoid these intense Axis attacks between January and June 1943, and the extent of Chetnik collaboration
became evident, Allied leaders switched their support from Draža Mihailović to Tito. King Peter II, American President
Franklin Roosevelt and British Prime Minister Winston Churchill joined Soviet Premier Joseph Stalin in officially
recognising Tito and the Partisans at the Tehran Conference. This resulted in Allied aid being parachuted behind
Axis lines to assist the Partisans. On 17 June 1944 on the Dalmatian island of Vis, the Treaty of Vis (Viški sporazum)
was signed in an attempt to merge Tito's government (the AVNOJ) with the government in exile of King Peter II. The
Balkan Air Force was formed in June 1944 to control operations that were mainly aimed at aiding Tito's forces. In the first post-war years Tito was widely considered a communist leader very loyal to Moscow; indeed, he was often viewed
as second only to Stalin in the Eastern Bloc. In fact, Stalin and Tito had an uneasy alliance from the start, with
Stalin considering Tito too independent. On 12 September 1944, King Peter II called on all Yugoslavs to come together
under Tito's leadership and stated that those who did not were "traitors", by which time Tito was recognized by all
Allied authorities (including the government-in-exile) as the Prime Minister of Yugoslavia, in addition to commander-in-chief
of the Yugoslav forces. On 28 September 1944, the Telegraph Agency of the Soviet Union (TASS) reported that Tito
signed an agreement with the Soviet Union allowing "temporary entry" of Soviet troops into Yugoslav territory which
allowed the Red Army to assist in operations in the northeastern areas of Yugoslavia. With their strategic right
flank secured by the Allied advance, the Partisans prepared and executed a massive general offensive which succeeded
in breaking through German lines and forcing a retreat beyond Yugoslav borders. After the Partisan victory and the
end of hostilities in Europe, all external forces were ordered off Yugoslav territory. In the final days of World
War II in Yugoslavia, units of the Partisans were responsible for atrocities after the repatriations of Bleiburg,
and accusations of culpability were later raised against the Yugoslav leadership under Tito. At the time, Josip Broz Tito
repeatedly issued calls for surrender to the retreating column, offering amnesty and attempting to avoid a disorderly
surrender. On 14 May he dispatched a telegram to the supreme headquarters of the Slovene Partisan Army prohibiting "in the sternest language" the execution of prisoners of war and commanding the transfer of possible suspects to a military
court. On 7 March 1945, the provisional government of the Democratic Federal Yugoslavia (Demokratska Federativna
Jugoslavija, DFY) was assembled in Belgrade by Josip Broz Tito, while the provisional name allowed for either a republic
or monarchy. This government was headed by Tito as provisional Yugoslav Prime Minister and included representatives
from the royalist government-in-exile, among others Ivan Šubašić. In accordance with the agreement between resistance
leaders and the government-in-exile, post-war elections were held to determine the form of government. In November
1945, Tito's pro-republican People's Front, led by the Communist Party of Yugoslavia, won the elections with an overwhelming
majority, the vote having been boycotted by monarchists. During the period, Tito evidently enjoyed massive popular
support due to being generally viewed by the populace as the liberator of Yugoslavia. The Yugoslav administration
in the immediate post-war period managed to unite a country that had been severely affected by ultra-nationalist
upheavals and war devastation, while successfully suppressing the nationalist sentiments of the various nations in
favor of tolerance, and the common Yugoslav goal. After the overwhelming electoral victory, Tito was confirmed as
the Prime Minister and the Minister of Foreign Affairs of the DFY. The country was soon renamed the Federal People's
Republic of Yugoslavia (FPRY) (later finally renamed into Socialist Federal Republic of Yugoslavia, SFRY). On 29
November 1945, King Peter II was formally deposed by the Yugoslav Constituent Assembly. The Assembly drafted a new
republican constitution soon afterwards. Yugoslavia organized the Yugoslav People's Army (Jugoslavenska narodna armija,
or JNA) from the Partisan movement and became the fourth strongest army in Europe at the time. The State Security
Administration (Uprava državne bezbednosti/sigurnosti/varnosti, UDBA) was also formed as the new secret police, along
with a security agency, the Department of People's Security (Organ Zaštite Naroda (Armije), OZNA). Yugoslav intelligence
was charged with imprisoning and bringing to trial large numbers of Nazi collaborators; controversially, this included
Catholic clergymen due to the widespread involvement of Croatian Catholic clergy with the Ustaša regime. Draža Mihailović
was found guilty of collaboration, high treason and war crimes and was subsequently executed by firing squad in July
1946. Prime Minister Josip Broz Tito met with the president of the Bishops' Conference of Yugoslavia, Aloysius Stepinac
on 4 June 1945, two days after his release from imprisonment. The two could not reach an agreement on the state of
the Catholic Church. Under Stepinac's leadership, the bishops' conference released a letter condemning alleged Partisan
war crimes in September 1945. The following year Stepinac was arrested and put on trial. In October 1946, in its first special session in 75 years, the Vatican excommunicated Tito and the Yugoslav government for sentencing Stepinac
to 16 years in prison on charges of assisting Ustaše terror and of supporting forced conversions of Serbs to Catholicism.
Stepinac received preferential treatment in recognition of his status and the sentence was soon shortened and reduced
to house-arrest, with the option of emigration open to the archbishop. At the conclusion of the "Informbiro period",
reforms rendered Yugoslavia considerably more religiously liberal than the Eastern Bloc states. Unlike other new
communist states in east-central Europe, Yugoslavia liberated itself from Axis domination with limited direct support
from the Red Army. Tito's leading role in liberating Yugoslavia not only greatly strengthened his position in his
party and among the Yugoslav people, but also caused him to be more insistent that Yugoslavia had more room to follow
its own interests than other Bloc leaders who had more reasons (and pressures) to recognize Soviet efforts in helping
them liberate their own countries from Axis control. Although Tito was formally an ally of Stalin after World War
II, the Soviets had set up a spy ring in the Yugoslav party as early as 1945, giving way to an uneasy alliance. In the immediate aftermath of World War II, there occurred several armed incidents between Yugoslavia and
the Western Allies. Following the war, Yugoslavia acquired the Italian territory of Istria as well as the cities
of Zadar and Rijeka. Yugoslav leadership was looking to incorporate Trieste into the country as well, which was opposed
by the Western Allies. This led to several armed incidents, notably attacks by Yugoslav fighter planes on US transport
aircraft, causing bitter criticism from the West. From 1945 to 1948, at least four US aircraft were shot down. Stalin was opposed to these provocations, as he felt the USSR unready to face the West in open war
so soon after the losses of World War II and at the time when US had operational nuclear weapons whereas USSR had
yet to conduct its first test. In addition, Tito was openly supportive of the Communist side in the Greek Civil War,
while Stalin kept his distance, having agreed with Churchill not to pursue Soviet interests there, although he did
support the Greek communist struggle politically, as demonstrated in several assemblies of the UN Security Council.
In 1948, motivated by the desire to create a strong independent economy, Tito modeled his economic development plan
independently from Moscow, which resulted in a diplomatic escalation followed by a bitter exchange of letters. The Soviet answer on 4 May admonished Tito and the Communist Party of Yugoslavia (CPY) for
failing to admit and correct its mistakes, and went on to accuse them of being too proud of their successes against
the Germans, maintaining that the Red Army had saved them from destruction. Tito's response on 17 May suggested that
the matter be settled at the meeting of the Cominform to be held that June. However, Tito did not attend the second
meeting of the Cominform, fearing that Yugoslavia was to be openly attacked. In 1949 the crisis nearly escalated
into an armed conflict, as Hungarian and Soviet forces were massing on the northern Yugoslav frontier. On 28 June,
the other member countries expelled Yugoslavia, citing "nationalist elements" that had "managed in the course of
the past five or six months to reach a dominant position in the leadership" of the CPY. The assumption in Moscow
was that once it was known that he had lost Soviet approval, Tito would collapse; 'I will shake my little finger
and there will be no more Tito,' Stalin remarked. The expulsion effectively banished Yugoslavia from the international
association of socialist states, while other socialist states of Eastern Europe subsequently underwent purges of
alleged "Titoists". Stalin took the matter personally and arranged several assassination attempts on Tito, none of
which succeeded. In correspondence between the two leaders, Tito openly defied Stalin. One significant consequence of the tension arising between Yugoslavia and the Soviet Union was that Tito fought Yugoslav Stalinists with Stalin's methods. In other words, Aleksandar Ranković and the State Security Administration (UDBA) employed the same inhumane methods against
their opponents as Stalin did in the Soviet Union against his. Not every person accused of a political crime was
convicted and nobody was sentenced to death for his or her pro-Soviet feelings. However this repression, which lasted
until 1956, was marked by significant violations of human rights. Tito's estrangement from the USSR enabled Yugoslavia
to obtain US aid via the Economic Cooperation Administration (ECA), the same US aid institution which administered
the Marshall Plan. Still, he did not agree to align with the West, which was a common consequence of accepting American
aid at the time. After Stalin's death in 1953, relations with the USSR were relaxed and he began to receive aid as
well from the COMECON. In this way, Tito played East-West antagonism to his advantage. Instead of choosing sides,
he was instrumental in kick-starting the Non-Aligned Movement, which would function as a 'third way' for countries
interested in staying outside of the East-West divide. The event was significant not only for Yugoslavia and Tito,
but also for the global development of socialism, since it was the first major split between Communist states, casting
doubt on the Cominform's claims for socialism to be a unified force that would eventually control the whole world, as Tito became the first (and the only successful) socialist leader to defy Stalin's leadership in the Cominform. This
rift with the Soviet Union brought Tito much international recognition, but also triggered a period of instability
often referred to as the Informbiro period. Tito's form of communism was labeled "Titoism" by Moscow, which encouraged
purges against suspected "Titoists" throughout the Eastern Bloc. On 26 June 1950, the National Assembly supported
a crucial bill written by Milovan Đilas and Tito about "self-management" (samoupravljanje): a type of cooperative
independent socialist experiment that introduced profit sharing and workplace democracy in previously state-run enterprises
which then passed into the direct social ownership of the employees. On 13 January 1953, it was established that the law
on self-management was the basis of the entire social order in Yugoslavia. Tito also succeeded Ivan Ribar as the
President of Yugoslavia on 14 January 1953. After Stalin's death, Tito rejected the USSR's invitation for a visit to discuss normalization of relations between the two nations. Nikita Khrushchev and Nikolai Bulganin visited Tito in
Belgrade in 1955 and apologized for wrongdoings by Stalin's administration. Tito visited the USSR in 1956, which
signaled to the world that animosity between Yugoslavia and USSR was easing. However, the relationship between the
USSR and Yugoslavia would reach another low in the late 1960s. The Tito–Stalin split had large ramifications for countries outside the USSR and Yugoslavia. It has, for example, been
given as one of the reasons for the Slánský trial in Czechoslovakia, in which 14 high-level Communist officials were
purged, with 11 of them being executed. Stalin put pressure on Czechoslovakia to conduct purges in order to discourage
the spread of the idea of a "national path to socialism," which Tito espoused. Under Tito's leadership, Yugoslavia
became a founding member of the Non-Aligned Movement. In 1961, Tito co-founded the movement with Egypt's Gamal Abdel
Nasser, India's Jawaharlal Nehru, Indonesia's Sukarno and Ghana's Kwame Nkrumah, in an action called The Initiative
of Five (Tito, Nehru, Nasser, Sukarno, Nkrumah), thus establishing strong ties with third world countries. This move
did much to improve Yugoslavia's diplomatic position. On 1 September 1961, Josip Broz Tito became the first Secretary-General
of the Non-Aligned Movement. Tito's foreign policy led to relationships with a variety of governments, such as exchanging
visits (1954 and 1956) with Emperor Haile Selassie of Ethiopia, where a street was named in his honor. Tito was notable
for pursuing a foreign policy of neutrality during the Cold War and for establishing close ties with developing countries.
Tito's strong belief in self-determination caused an early rift with Stalin and, consequently, with the Eastern Bloc. His public speeches often reiterated that a policy of neutrality and cooperation with all countries would be natural as
long as these countries did not use their influence to pressure Yugoslavia to take sides. Relations with the United
States and Western European nations were generally cordial. Yugoslavia had a liberal travel policy permitting foreigners
to travel freely through the country and its citizens to travel worldwide, whereas such travel was limited by most Communist countries. A number of Yugoslav citizens worked throughout Western Europe. Tito met many world leaders
during his rule, such as Soviet rulers Joseph Stalin, Nikita Khrushchev and Leonid Brezhnev; Egypt's Gamal Abdel
Nasser, Indian politicians Jawaharlal Nehru and Indira Gandhi; British Prime Ministers Winston Churchill, James Callaghan
and Margaret Thatcher; U.S. Presidents Dwight D. Eisenhower, John F. Kennedy, Richard Nixon, Gerald Ford and Jimmy
Carter; other political leaders, dignitaries and heads of state that Tito met at least once in his lifetime included
Che Guevara, Fidel Castro, Yasser Arafat, Willy Brandt, Helmut Schmidt, Georges Pompidou, Queen Elizabeth II, Hua
Guofeng, Kim Il Sung, Sukarno, Sheikh Mujibur Rahman, Suharto, Idi Amin, Haile Selassie, Kenneth Kaunda, Gaddafi,
Erich Honecker, Nicolae Ceaușescu, János Kádár and Urho Kekkonen. He also met numerous celebrities. Tito visited
India from 22 December 1954 to 8 January 1955. After his return, he removed many restrictions on churches
and spiritual institutions in Yugoslavia. Tito also developed warm relations with Burma under U Nu, travelling to
the country in 1955 and again in 1959, though he did not receive the same treatment in 1959 from the new leader, Ne Win. Because of its neutrality, Yugoslavia was often one of the few Communist countries to maintain diplomatic relations with right-wing, anti-Communist governments. For example, Yugoslavia was the only communist country allowed to have
an embassy in Alfredo Stroessner's Paraguay. One notable exception to Yugoslavia's neutral stance toward anti-communist
countries was Chile under Pinochet; Yugoslavia was one of many countries which severed diplomatic relations with
Chile after Salvador Allende was overthrown. Yugoslavia also provided military aid and arms supplies to staunchly
anti-Communist regimes such as that of Guatemala under Kjell Eugenio Laugerud García. On 7 April 1963, the country
changed its official name to the Socialist Federal Republic of Yugoslavia. Reforms encouraged private enterprise
and greatly relaxed restrictions on freedom of speech and religious expression. Tito subsequently went on a tour
of the Americas. In Chile, two government ministers resigned over his visit to that country. In the autumn of 1960
Tito met President Dwight D. Eisenhower at the United Nations General Assembly meeting. Tito and Eisenhower discussed
a range of issues from arms control to economic development. When Eisenhower remarked that Yugoslavia's neutralism
was "neutral on his side", Tito replied that neutralism did not imply passivity but meant "not taking sides". In
1966 an agreement with the Vatican, fostered in part by the death in 1960 of anti-communist archbishop of Zagreb
Aloysius Stepinac and shifts in the church's approach to resisting communism originating in the Second Vatican Council,
accorded new freedom to the Yugoslav Roman Catholic Church, particularly to catechize and open seminaries. The agreement
also eased tensions, which had prevented the naming of new bishops in Yugoslavia since 1945. Tito's new socialism
met opposition from traditional communists, culminating in a conspiracy headed by Aleksandar Ranković. In the same year
Tito declared that Communists must henceforth chart Yugoslavia's course by the force of their arguments (implying
an abandonment of Leninist orthodoxy and development of liberal Communism). The State Security Administration (UDBA)
saw its power scaled back and its staff reduced to 5000. On 1 January 1967, Yugoslavia was the first communist country
to open its borders to all foreign visitors and abolish visa requirements. In the same year Tito became active in
promoting a peaceful resolution of the Arab–Israeli conflict. His plan called for Arabs to recognize the state of
Israel in exchange for the territories Israel had gained. In 1968, Tito offered to fly to Prague on three hours' notice
if Czechoslovak leader Alexander Dubček needed help in facing down the Soviets. In April 1969, Tito removed generals
Ivan Gošnjak and Rade Hamović in the aftermath of the invasion of Czechoslovakia due to the unpreparedness of the
Yugoslav army to respond to a similar invasion of Yugoslavia. In 1971, Tito was re-elected as President of Yugoslavia
by the Federal Assembly for the sixth time. In his speech before the Federal Assembly he introduced 20 sweeping constitutional
amendments that would provide an updated framework on which the country would be based. The amendments provided for
a collective presidency, a 22-member body consisting of elected representatives from six republics and two autonomous
provinces. The body would have a single chairman of the presidency, with the chairmanship rotating among the six republics.
When the Federal Assembly failed to agree on legislation, the collective presidency would have the power to rule by
decree. The amendments also provided for a stronger cabinet with considerable power to initiate and pursue legislation
independently from the Communist Party. Džemal Bijedić was chosen as the Premier. The new amendments aimed to decentralize
the country by granting greater autonomy to republics and provinces. The federal government would retain authority
only over foreign affairs, defense, internal security, monetary affairs, free trade within Yugoslavia, and development
loans to poorer regions. Control of education, healthcare, and housing would be exercised entirely by the governments
of the republics and the autonomous provinces. Tito's greatest strength, in the eyes of the western communists, had
been in suppressing nationalist insurrections and maintaining unity throughout the country. It was Tito's call for
unity, and related methods, that held together the people of Yugoslavia. This ability was put to a test several times
during his reign, notably during the Croatian Spring (also referred to as the Masovni pokret, maspok, meaning "Mass
Movement"), when the government suppressed both public demonstrations and dissenting opinions within the Communist
Party. Despite this suppression, many of maspok's demands were later realized with the new constitution, heavily
backed by Tito himself against opposition from the Serbian branch of the party.[citation needed] On 16 May 1974,
the new Constitution was passed, and the aging Tito was named president for life, a status which he would enjoy for
five years. Tito's visits to the United States avoided most of the Northeast due to large minorities of Yugoslav
emigrants bitter about communism in Yugoslavia. Security for the state visits was usually high to keep him away from
protesters, who would frequently burn the Yugoslav flag. During a visit to the United Nations in the late 1970s emigrants
shouted "Tito murderer" outside his New York hotel, prompting him to protest to United States authorities. After the
constitutional changes of 1974, Tito began reducing his role in the day-to-day running of the state. He continued
to travel abroad and receive foreign visitors, going to Beijing in 1977 and reconciling with a Chinese leadership
that had once branded him a revisionist. In turn, Chairman Hua Guofeng visited Yugoslavia in 1979. In 1978, Tito
traveled to the U.S. During the visit strict security was imposed in Washington, D.C. owing to protests by anti-communist
Croat, Serb and Albanian groups. Tito became increasingly ill over the course of 1979. During this time Vila Srna
was built for his use near Morović in the event of his recovery. On 7 January and again on 11 January 1980, Tito
was admitted to the Medical Centre in Ljubljana, the capital city of the SR Slovenia, with circulation problems in
his legs. His left leg was amputated soon afterward due to arterial blockages and he died of gangrene at the Medical
Centre Ljubljana on 4 May 1980 at 15:05, three days short of his 88th birthday. His funeral drew many world statesmen.
Based on the number of attending politicians and state delegations, at the time it was the largest state funeral
in history; this concentration of dignitaries would be unmatched until the funeral of Pope John Paul II in 2005 and
the memorial service of Nelson Mandela in 2013. Those who attended included four kings, 31 presidents, six princes,
22 prime ministers and 47 ministers of foreign affairs. They came from both sides of the Cold War, from 128 different
countries out of 154 UN members at the time. Tito was interred in a mausoleum in Belgrade, which forms part of a
memorial complex in the grounds of the Museum of Yugoslav History (formerly called "Museum 25 May" and "Museum of
the Revolution"). The actual mausoleum is called House of Flowers (Kuća Cveća) and numerous people visit the place
as a shrine to "better times". The museum keeps the gifts Tito received during his presidency. The collection also
includes original prints of Los Caprichos by Francisco Goya, and many others. The Government of Serbia has planned
to merge it into the Museum of the History of Serbia. At the time of his death, speculation began about whether his
successors could continue to hold Yugoslavia together. Ethnic divisions and conflict grew and eventually erupted
in a series of Yugoslav wars a decade after his death. During his life and especially in the first year after his
death, several places were named after Tito. Several of these places have since returned to their original names,
such as Podgorica, formerly Titograd (though Podgorica's international airport is still identified by the code TGD),
and Užice, formerly Titovo Užice, which reverted to its original name in 1992. Streets in Belgrade, the capital,
have all reverted to their original pre–World War II and pre-communist names as well. In 2004, Antun Augustinčić's
statue of Broz in his birthplace of Kumrovec was decapitated in an explosion. It was subsequently repaired. Twice
in 2008, protests took place in Zagreb's Marshal Tito Square, organized by a group called Circle for the Square (Krug
za Trg), with an aim to force the city government to rename it to its previous name, while a counter-protest by Citizens'
Initiative Against Ustašism (Građanska inicijativa protiv ustaštva) accused the "Circle for the Square" of historical
revisionism and neo-fascism. Croatian president Stjepan Mesić criticized the demonstrations calling for the name change. In
the Croatian coastal city of Opatija the main street (also its longest street) still bears the name of Marshal Tito,
as do streets in numerous towns in Serbia, mostly in the country's north. One of the main streets in downtown Sarajevo
is called Marshal Tito Street, and Tito's statue in a park in front of the university campus (the former JNA barracks
"Maršal Tito") in Marijin Dvor is a place where Bosnians and Sarajevans still commemorate and pay tribute to Tito.
The largest Tito monument in the world, about 10 m (33 ft) high, is located at Tito Square (Slovene:
Titov trg), the central square in Velenje, Slovenia. One of the main bridges in Slovenia's second largest city of
Maribor is Tito Bridge (Titov most). The central square in Koper, the largest Slovenian port city, is likewise named
Tito Square. Every year a "Brotherhood and Unity" relay race is organized in Montenegro, Macedonia and Serbia, ending
on May 25 at the "House of Flowers" in Belgrade, Tito's final resting place. At the same time, runners
in Slovenia, Croatia and Bosnia and Herzegovina set off for Kumrovec, Tito's birthplace in northern Croatia. The
relay is a leftover from Yugoslav times, when young people made a similar yearly trek on foot through Yugoslavia
that ended in Belgrade with a massive celebration. In the years following the dissolution of Yugoslavia, some historians
stated that human rights were suppressed in Yugoslavia under Tito, particularly in the first decade up until the
Tito-Stalin split. On 4 October 2011, the Slovenian Constitutional Court found a 2009 naming of a street in Ljubljana
after Tito to be unconstitutional. While several public areas in Slovenia (named during the Yugoslav period) do already
bear Tito's name, on the issue of renaming an additional street the court ruled against it. The court, however, explicitly
made it clear that the purpose of the review was "not a verdict on Tito as a figure or on his concrete actions, as
well as not a historical weighing of facts and circumstances". Tito has also been named as responsible for
systematic eradication of the ethnic German (Danube Swabian) population in Vojvodina by expulsions and mass executions
following the collapse of the German occupation of Yugoslavia at the end of World War II, in contrast to his inclusive
attitude towards other Yugoslav nationalities. Tito carried on numerous affairs and was married several times. In
1918 he was brought to Omsk, Russia, as a prisoner of war. There he met Pelagija Belousova who was then thirteen;
he married her a year later, and she moved with him to Yugoslavia. Pelagija bore him five children but only their
son Žarko Leon (born 4 February 1924) survived. When Tito was jailed in 1928, she returned to Russia. After the
divorce in 1936 she later remarried. In 1936, when Tito stayed at the Hotel Lux in Moscow, he met the Austrian comrade
Lucia Bauer. They married in October 1936, but the records of this marriage were later erased. His next relationship
was with Herta Haas, whom he married in 1940. Broz left for Belgrade after the April War, leaving Haas pregnant.
In May 1941, she gave birth to their son, Aleksandar "Mišo" Broz. Throughout his relationship with Haas, Tito
maintained a promiscuous lifestyle and had a parallel relationship with Davorjanka Paunović, who, under the codename
"Zdenka", served as a courier in the resistance and subsequently became his personal secretary. Haas and Tito suddenly
parted company in 1943 in Jajce during the second meeting of AVNOJ after she reportedly walked in on him and Davorjanka.
The last time Haas saw Broz was in 1946. Davorjanka died of tuberculosis in 1946 and Tito insisted that she be buried
in the backyard of the Beli Dvor, his Belgrade residence. His best known wife was Jovanka Broz. Tito was just shy
of his 59th birthday, while she was 27, when they finally married in April 1952, with state security chief Aleksandar
Ranković as the best man. Their eventual marriage came about somewhat unexpectedly, since Tito had rejected her
some years earlier, when his confidant Ivan Krajacic first brought her in. At that time, she was in her early
20s and Tito, objecting to her energetic personality, opted for the more mature opera singer Zinka Kunc instead.
Not one to be discouraged easily, Jovanka continued working at Beli Dvor, where she managed the staff and eventually
got another chance after Tito's strange relationship with Zinka failed. Since Jovanka was the only female companion
he married while in power, she also went down in history as Yugoslavia's first lady. Their relationship was not a
happy one, however. It went through many, often public, ups and downs, with episodes of infidelity and even
allegations of preparations for a coup d'état by the latter pair. Certain unofficial reports suggest Tito and Jovanka
even formally divorced in the late 1970s, shortly before his death. However, during Tito's funeral she was officially
present as his wife, and later claimed rights for inheritance. The couple did not have any children. Tito's notable
grandchildren include Aleksandra Broz, a prominent theatre director in Croatia; Svetlana Broz, a cardiologist and
writer in Bosnia-Herzegovina; and Josip "Joška" Broz, Edvard Broz and Natali Klasevski, an artisan of Bosnia-Herzegovina.
As the President, Tito had access to extensive (state-owned) property associated with the office, and maintained
a lavish lifestyle. In Belgrade he resided in the official residence, the Beli dvor, and maintained a separate private
home. The Brijuni islands were the site of the State Summer Residence from 1949 on. The pavilion was designed by
Jože Plečnik, and included a zoo. Close to 100 foreign heads of state were to visit Tito at the island residence,
along with film stars such as Elizabeth Taylor, Richard Burton, Sophia Loren, Carlo Ponti, and Gina Lollobrigida.
Another residence was maintained at Lake Bled, while the grounds at Karađorđevo were the site of "diplomatic hunts".
By 1974 the Yugoslav President had at his disposal 32 official residences, large and small, the yacht Galeb ("seagull"),
a Boeing 727 as the presidential airplane, and the Blue Train. After Tito's death the presidential Boeing 727 was
sold to Aviogenex, the Galeb remained docked in Montenegro, while the Blue Train was stored in a Serbian train shed
for over two decades. While Tito was the person who held the office of president for by far the longest period, the
associated property was not private and much of it continues to be in use by Yugoslav successor states, as public
property, or maintained at the disposal of high-ranking officials. When asked about his knowledge of languages, Tito
replied that he spoke Serbo-Croatian, German, Russian, and some English. A biographer also stated that he spoke "Serbo-Croatian
... Russian, Czech, Slovenian ... German (with a Viennese accent) ... understands and reads French and Italian ...
[and] also speaks Kirghiz." In his youth Tito attended Catholic Sunday school and was later an altar boy. After
an incident in which a priest slapped and shouted at him when he had difficulty helping the priest remove
his vestments, Tito never entered a church again. As an adult, he frequently declared that he was an atheist. Every
federal unit had a town or city with historic significance from the World War II period renamed to have Tito's name
included. The largest of these was Titograd, now Podgorica, the capital city of Montenegro. With the exception of
Titograd, the cities were renamed simply by the addition of the adjective "Tito's" ("Titov"). The cities were: In
the years since Tito's death, some people have disputed his identity. Tito's personal doctor, Aleksandar
Matunović, wrote a book about Tito in which he also questioned his true origin, noting that Tito's habits and lifestyle
could only mean that he was from an aristocratic family. Serbian journalist Vladan Dinić (born 1949), in Tito nije
tito, includes several possible alternate identities of Tito. In 2013, considerable media coverage was given to a
declassified NSA study in Cryptologic Spectrum, which concluded that Tito did not speak Serbo-Croatian as a native
and that his speech had features of other Slavic languages (Russian and Polish). The study included the hypothesis
that "a non-Yugoslav, perhaps a Russian or a Pole" had assumed Tito's identity, and also noted Draža Mihailović's
impressions of Tito's Russian origins. However, the NSA report was rejected by Croatian experts, who pointed out that
it failed to recognize that Tito was a native speaker of the very distinctive local Kajkavian dialect of Zagorje; his
flawless pronunciation of the acute accent, present only in Croatian dialects, was cited as the strongest proof of his Kajkavian origins.
As the Communist Party was outlawed in Yugoslavia starting on 30 December 1920, Josip Broz took on many assumed names
during his activity within the Party, including "Rudi", "Walter", and "Tito." Broz himself explains: Josip Broz Tito
received a total of 119 awards and decorations from 60 countries around the world (59 countries and Yugoslavia).
21 decorations were from Yugoslavia itself, 18 having been awarded once, and the Order of the National Hero on three
occasions. Of the 98 international awards and decorations, 92 were received once, and three on two occasions (Order
of the White Lion, Polonia Restituta, and Karl Marx). The most notable awards included the French Legion of Honour
and National Order of Merit, the British Order of the Bath, the Soviet Order of Lenin, the Japanese Order of the
Chrysanthemum, the German Federal Cross of Merit, and the Order of Merit of Italy. The decorations were seldom displayed,
however. After the Tito–Stalin split of 1948 and his inauguration as president in 1953, Tito rarely wore his uniform
except at military functions, and then (with rare exceptions) wore only his Yugoslav ribbons for practical
reasons. The awards were displayed in full only at his funeral in 1980. Tito's reputation as one
of the Allied leaders of World War II, along with his diplomatic position as the founder of the Non-Aligned Movement,
was the primary reason for this favorable international recognition. Some of the other foreign awards and decorations
of Josip Broz Tito include Order of Merit, Order of Manuel Amador Guerrero, Order of Prince Henry, Order of Independence,
Order of Merit, Order of the Nile, Order of the Condor of the Andes, Order of the Star of Romania, Order of the Gold
Lion of the House of Nassau, Croix de Guerre, Order of the Cross of Grunwald, Czechoslovak War Cross, Decoration
of Honour for Services to the Republic of Austria, Military Order of the White Lion, Nishan-e-Pakistan, Order of
Al Rafidain, Order of Carol I, Order of Georgi Dimitrov, Order of Karl Marx, Order of Manuel Amador Guerrero, Order
of Michael the Brave, Order of Pahlavi, Order of Sukhbaatar, Order of Suvorov, Order of the Liberator, Order of the
October Revolution, Order of the Queen of Sheba, Order of the White Rose of Finland, Partisan Cross, Royal Order
of Cambodia and Star of People's Friendship and Thiri Thudhamma Thingaha.[citation needed]
The Marshall Islands, officially the Republic of the Marshall Islands (Marshallese: Aolepān Aorōkin M̧ajeļ),[note 1] is an
island country located near the equator in the Pacific Ocean, slightly west of the International Date Line. Geographically,
the country is part of the larger island group of Micronesia. The country's population of 53,158 people (at the 2011
Census) is spread out over 29 coral atolls, comprising 1,156 individual islands and islets. The islands share maritime
boundaries with the Federated States of Micronesia to the west, Wake Island to the north,[note 2] Kiribati to the
south-east, and Nauru to the south. About 27,797 of the islanders (at the 2011 Census) live on Majuro, which contains
the capital. Micronesian colonists gradually settled the Marshall Islands during the 2nd millennium BC, with inter-island
navigation made possible using traditional stick charts. Islands in the archipelago were first explored by Europeans
in the 1520s, with Spanish explorer Alonso de Salazar sighting an atoll in August 1526. Other expeditions by Spanish
and English ships followed. The islands derive their name from British explorer John Marshall, who visited in 1788.
The islands were historically known by the inhabitants as "jolet jen Anij" (Gifts from God). The European powers
recognized the islands as part of the Spanish East Indies in 1874. However, Spain sold the islands to the German
Empire in 1884, and they became part of German New Guinea in 1885. In World War I the Empire of Japan occupied the
Marshall Islands, which in 1919 the League of Nations combined with other former German territories to form the South
Pacific Mandate. In World War II, the United States conquered the islands in the Gilbert and Marshall Islands campaign.
Along with other Pacific Islands, the Marshall Islands were then consolidated into the Trust Territory of the Pacific
Islands governed by the US. Self-government was achieved in 1979, and full sovereignty in 1986, under a Compact of
Free Association with the United States. Marshall Islands has been a United Nations member state since 1991. Politically,
the Marshall Islands is a presidential republic in free association with the United States, with the US providing
defense, subsidies, and access to U.S. based agencies such as the FCC and the USPS. With few natural resources, the
islands' wealth is based on a service economy, as well as some fishing and agriculture; aid from the United States
represents a large percentage of the islands' gross domestic product. The country uses the United States dollar as
its currency. The majority of the citizens of the Marshall Islands are of Marshallese descent, though there are small
numbers of immigrants from the United States, China, Philippines and other Pacific islands. The two official languages
are Marshallese, which is a member of the Malayo-Polynesian languages, and English. Almost the entire population
of the islands practises some religion, with three-quarters of the country either following the United Church of
Christ – Congregational in the Marshall Islands (UCCCMI) or the Assemblies of God. Micronesians settled the Marshall
Islands in the 2nd millennium BC, but there are no historical or oral records of that period. Over time, the Marshall
Island people learned to navigate over long ocean distances by canoe using traditional stick charts. Spanish explorer
Alonso de Salazar was the first European to see the islands in 1526, commanding the ship Santa Maria de la Victoria,
the only surviving vessel of the Loaísa Expedition. On August 21, he sighted an island (probably Taongi) at 14°N
that he named "San Bartolome". On September 21, 1529, Álvaro de Saavedra Cerón commanded the Spanish ship Florida,
on his second attempt to recross the Pacific from the Maluku Islands. He stood off a group of islands from which
local inhabitants hurled stones at his ship. These islands, which he named "Los Pintados", may have been Ujelang.
On October 1, he found another group of islands where he went ashore for eight days, exchanged gifts with the local
inhabitants and took on water. These islands, which he named "Los Jardines", may have been Enewetak or Bikini Atoll.
The Spanish ship San Pedro and two other vessels in an expedition commanded by Miguel López de Legazpi discovered
an island on January 9, 1565, possibly Mejit, at 10°N, which they named "Los Barbudos". The Spaniards went ashore
and traded with the local inhabitants. On January 10, the Spaniards sighted another island that they named "Placeres",
perhaps Ailuk; ten leagues away, they sighted another island that they called "Pajares" (perhaps Jemo). On January
12, they sighted another island at 10°N that they called "Corrales" (possibly Wotho). On January 15, the Spaniards
sighted another low island, perhaps Ujelang, at 10°N, where they described the people as "Barbudos". After that,
ships including the San Jeronimo, Los Reyes and Todos los Santos also visited the islands in different years. Captain
John Charles Marshall and Thomas Gilbert visited the islands in 1788. The islands were named for Marshall on Western
charts, although the natives have historically named their home "jolet jen Anij" (Gifts from God). Around 1820, Russian
explorer Adam Johann von Krusenstern and the French explorer Louis Isidore Duperrey named the islands after John
Marshall, and drew maps of the islands. The designation was repeated later on British maps.[citation needed] In 1824
the crew of the American whaler Globe mutinied and some of the crew put ashore on Mulgrave Island. One year later,
the American schooner Dolphin arrived and picked up two boys, the last survivors of a massacre by the natives due
to their brutal treatment of the women. A number of vessels visiting the islands were attacked and their crews
killed. In 1834, Captain DonSette and his crew were killed. Similarly, in 1845 the schooner Naiad punished a native
for stealing with such violence that the natives attacked the ship. Later that year a whaler's boat crew were killed.
In 1852 the San Francisco-based ships Glencoe and Sea Nymph were attacked and everyone aboard except for one crew
member was killed. The violence was usually attributed to retaliation for the ill treatment of the natives, often
punishment for petty theft, which was a common practice. In 1857, two missionaries successfully settled on Ebon, living among
the natives through at least 1870. Although the Spanish Empire had a residual claim on the Marshalls in 1874, when
she began asserting her sovereignty over the Carolines, she made no effort to prevent the German Empire from gaining
a foothold there. Britain also raised no objection to a German protectorate over the Marshalls in exchange for German
recognition of Britain's rights in the Gilbert and Ellice Islands. On October 13, 1885, SMS Nautilus under Captain
Rötger brought German emissaries to Jaluit. They signed a treaty with Kabua, whom the Germans had earlier recognized
as "King of the Ralik Islands," on October 15. Subsequently, seven other chiefs on seven other islands signed a treaty
in German and Marshallese and a final copy witnessed by Rötger on November 1 was sent to the German Foreign Office.
The Germans erected a sign declaring an "Imperial German Protectorate" at Jaluit. It has been speculated that the
crisis over the Carolines with Spain, which almost provoked a war, was in fact "a feint to cover the acquisition
of the Marshall Islands", which went almost unnoticed at the time, despite the islands being the largest source of
copra in Micronesia. Spain sold the islands to Germany in 1884 through papal mediation. A German trading company,
the Jaluit Gesellschaft, administered the islands from 1887 until 1905. It conscripted the islanders as laborers
and mistreated them. After the German–Spanish Treaty of 1899, in which Germany acquired the Carolines, Palau, and
the Marianas from Spain, Germany placed all of its Micronesian islands, including the Marshalls, under the governor
of German New Guinea. Catholic missionary Father A. Erdland, from the Sacred Heart of Jesus Society based in Hiltrup,
Germany, lived on Jaluit from around 1904 to 1914. He was very interested in the islands and conducted considerable
research on the Marshallese culture and language. He published a 376-page monograph on the islands in 1914. Father
H. Linckens, another missionary from the Sacred Heart of Jesus Society, visited the Marshall Islands in 1904 and 1911
for several weeks. He published a small work in 1912 about the Catholic mission activities and the people of the
Marshall Islands. Under German control, and even before then, Japanese traders and fishermen from time to time visited
the Marshall Islands, although contact with the islanders was irregular. After the Meiji Restoration (1868), the
Japanese government adopted a policy of turning the Japanese Empire into a great economic and military power in East
Asia. In 1914, Japan joined the Entente during World War I and captured various German Empire colonies, including
several in Micronesia. On September 29, 1914, Japanese troops occupied the Enewetak Atoll, and on September 30, 1914,
the Jaluit Atoll, the administrative centre of the Marshall Islands. After the war, on June 28, 1919, Germany signed
(under protest) the Treaty of Versailles. It renounced all of its Pacific possessions, including the Marshall Islands.
On December 17, 1920, the Council of the League of Nations approved the South Pacific Mandate for Japan to take over
all former German colonies in the Pacific Ocean located north of the Equator. The Administrative Centre of the Marshall
Islands archipelago remained Jaluit. The German Empire's interests in Micronesia had been primarily economic; Japanese
interests were territorial. Despite the Marshalls' small area and few resources, the absorption of the territory by Japan
would to some extent alleviate Japan's problem of an increasing population with a diminishing amount of available
land to house it. During its years of colonial rule, Japan moved more than 1,000 Japanese to the Marshall Islands
although they never outnumbered the indigenous peoples as they did in the Mariana Islands and Palau. The Japanese
enlarged the administration and appointed local leaders, which weakened the authority of traditional leaders. Japan
also tried to change the social organization of the islands from matrilineality to the Japanese patriarchal system,
but with no success. Moreover, during the 1930s, one third of all land up to the high water level was declared the
property of the Japanese government. Before Japan banned foreign traders, Catholic and Protestant missionaries were
allowed to operate on the archipelago. Indigenous people were educated in Japanese schools, and studied Japanese
language and Japanese culture. This policy was the government strategy not only in the Marshall Islands, but on all
the other mandated territories in Micronesia. On March 27, 1933, Japan gave notice of its withdrawal from the League of Nations,
but continued to manage the islands, and in the late 1930s began building air bases on several atolls. The Marshall
Islands were in an important geographical position, being the easternmost point in Japan's defensive ring at the
beginning of World War II. In the months before the attack on Pearl Harbor, Kwajalein Atoll was the administrative
center of the Japanese 6th Fleet Forces Service, whose task was the defense of the Marshall Islands. In World War
II, the United States, during the Gilbert and Marshall Islands campaign, invaded and occupied the islands in 1944,
destroying or isolating the Japanese garrisons. In just one month in 1944, Americans captured Kwajalein Atoll, Majuro
and Enewetak, and, in the next two months, the rest of the Marshall Islands, except for Wotje, Mili, Maloelap and
Jaluit. The battle in the Marshall Islands caused irreparable damage, especially to Japanese bases. During the American
bombing, the islands' population suffered from lack of food and various injuries. U.S. attacks started in mid-1943,
and caused half the Japanese garrison of 5,100 people on Mili Atoll to die from hunger by August 1945. Following
capture and occupation by the United States during World War II, the Marshall Islands, along with several other island
groups located in Micronesia, passed formally to the United States under United Nations auspices in 1947 as part
of the Trust Territory of the Pacific Islands established pursuant to Security Council Resolution 21. During the
early years of the Cold War from 1946 to 1958, the United States tested 67 nuclear weapons at its Pacific Proving
Grounds located in the Marshall Islands, including the largest atmospheric nuclear test ever conducted by the U.S.,
code named Castle Bravo. "The bombs had a total yield of 108,496 kilotons, over 7,200 times more powerful than the
atomic weapons used during World War II." With the 1952 test of the first U.S. hydrogen bomb, code named "Ivy Mike,"
the island of Elugelab in the Enewetak atoll was destroyed. In 1956, the United States Atomic Energy Commission regarded
the Marshall Islands as "by far the most contaminated place in the world." Nuclear claims between the U.S. and the
Marshall Islands are ongoing, and health effects from these nuclear tests linger. Project 4.1 was a medical study
conducted by the United States of those residents of the Bikini Atoll exposed to radioactive fallout. From 1956 to
August 1998, at least $759 million was paid to the Marshallese Islanders in compensation for their exposure to U.S.
nuclear weapon testing. In 1986, the Compact of Free Association with the United States entered into force, granting
the Republic of the Marshall Islands (RMI) its sovereignty. The Compact provided for aid and U.S. defense of the
islands in exchange for continued U.S. military use of the missile testing range at Kwajalein Atoll. The independence
procedure was formally completed under international law in 1990, when the UN officially ended the Trusteeship status
pursuant to Security Council Resolution 683. In 2008, extreme waves and high tides caused widespread flooding in
the capital city of Majuro and other urban centres, which lie just 3 feet (0.91 m) above sea level. On Christmas morning in 2008,
the government declared a state of emergency. In 2013, heavy waves once again breached the city walls of Majuro.
In 2013, the northern atolls of the Marshall Islands experienced drought. The drought left 6,000 people surviving
on less than 1 litre (0.22 imp gal; 0.26 US gal) of water per day. This resulted in the failure of food crops and
the spread of diseases such as diarrhea, pink eye, and influenza. These emergencies prompted the President of the United States to declare an emergency in the islands. This declaration activated support from US government agencies under
the Republic's "free association" status with the United States, which provides humanitarian and other vital support.
Following the 2013 emergencies, the Minister of Foreign Affairs Tony de Brum was encouraged by the Obama administration
in the United States to turn the crises into an opportunity to promote action against climate change. De Brum demanded
new commitment and international leadership to stave off further climate disasters from battering his country and
other similarly vulnerable countries. In September 2013, the Marshall Islands hosted the 44th Pacific Islands Forum
summit. De Brum proposed a Majuro Declaration for Climate Leadership to galvanize concrete action on climate change.
The government of the Marshall Islands operates under a mixed parliamentary-presidential system as set forth in its
Constitution. Elections are held every four years under universal suffrage (for all citizens aged 18 and over), with each of
the twenty-four constituencies (see below) electing one or more representatives (senators) to the lower house of
RMI's unicameral legislature, the Nitijela. (Majuro, the capital atoll, elects five senators.) The President, who
is head of state as well as head of government, is elected by the 33 senators of the Nitijela. Four of the five Marshallese
presidents who have been elected since the Constitution was adopted in 1979 have been traditional paramount chiefs.
Legislative power lies with the Nitijela. The upper house of Parliament, called the Council of Iroij, is an advisory
body comprising twelve tribal chiefs. The executive branch consists of the President and the Presidential Cabinet,
which consists of ten ministers appointed by the President with the approval of the Nitijela. The twenty-four electoral
districts into which the country is divided correspond to the inhabited islands and atolls. There are currently four
political parties in the Marshall Islands: Aelon̄ Kein Ad (AKA), United People's Party (UPP), Kien Eo Am (KEA) and
United Democratic Party (UDP). Rule is shared by the AKA and the UDP. The following senators are in the legislative
body: The Compact of Free Association with the United States gives the U.S. sole responsibility for international
defense of the Marshall Islands. It allows islanders to live and work in the United States and establishes economic
and technical aid programs. The Marshall Islands was admitted to the United Nations based on the Security Council's
recommendation on August 9, 1991, in Resolution 704 and the General Assembly's approval on September 17, 1991, in
Resolution 46/3. In international politics within the United Nations, the Marshall Islands has often voted consistently
with the United States with respect to General Assembly resolutions. On 28 April 2015, the Iranian navy seized the
Marshall Island-flagged MV Maersk Tigris near the Strait of Hormuz. The ship had been chartered by Germany's Rickmers
Ship Management, which stated that the ship contained no special cargo and no military weapons. The ship was reported
to be under the control of the Iranian Revolutionary Guard, according to the Pentagon. Tensions escalated in the region as Saudi-led coalition attacks in Yemen intensified. The Pentagon reported that the destroyer
USS Farragut and a maritime reconnaissance aircraft were dispatched upon receiving a distress call from the Tigris, and that all 34 crew members were detained. US defense officials said that they
would review U.S. defense obligations to the Government of the Marshall Islands in the wake of recent events and
also condemned the shots fired at the bridge as "inappropriate". It was reported in May 2015 that Tehran would release
the ship after it paid a penalty. The islands are located about halfway between Hawaii and Australia, north of Nauru
and Kiribati, east of the Federated States of Micronesia, and south of the U.S. territory of Wake Island, to which
it lays claim. The atolls and islands form two groups: the Ratak (sunrise) and the Ralik (sunset). The two island
chains lie approximately parallel to one another, running northwest to southeast, comprising about 750,000 square
miles (1,900,000 km2) of ocean but only about 70 square miles (180 km2) of land mass. Each includes 15 to 18 islands
and atolls. The country consists of a total of 29 atolls and five isolated islands. In October 2011, the government
declared that an area covering nearly 2,000,000 square kilometres (772,000 sq mi) of ocean would be reserved as a
shark sanctuary. This is the world's largest shark sanctuary, extending the worldwide ocean area in which sharks
are protected from 2,700,000 to 4,600,000 square kilometres (1,042,000 to 1,776,000 sq mi). In protected waters,
all shark fishing is banned and all by-catch must be released. However, some have questioned the ability of the Marshall
Islands to enforce this zone. The Marshall Islands also lays claim to Wake Island. While Wake has been administered
by the United States since 1899, the Marshallese government refers to it by the name Enen-kio. The climate is hot
and humid, with a wet season from May to November. Many Pacific typhoons begin as tropical storms in the Marshall
Islands region, and grow stronger as they move west toward the Mariana Islands and the Philippines. Due to its very
low elevation, the Marshall Islands are threatened by the potential effects of sea level rise. According to the president
of Nauru, the Marshall Islands are the most endangered nation in the world due to flooding from climate change. The population has outstripped the supply of freshwater, which comes mostly from rainfall. The northern atolls get 50 inches (1,300 mm) of
rainfall annually; the southern atolls about twice that. The threat of drought is commonplace throughout the island
chains. In 2007, the Marshall Islands joined the International Labour Organization, which means its labour laws will
comply with international benchmarks. This may impact business conditions in the islands. United States government
assistance is the mainstay of the economy. Under terms of the Amended Compact of Free Association, the U.S. is committed
to provide US$57.7 million per year in assistance to the Marshall Islands (RMI) through 2013, and then US$62.7 million
through 2023, at which time a trust fund, made up of U.S. and RMI contributions, will begin perpetual annual payouts.
The United States Army maintains the Ronald Reagan Ballistic Missile Defense Test Site on Kwajalein Atoll. Marshallese
land owners receive rent for the base. Agricultural production is concentrated on small farms. The most important commercial crops are coconuts, tomatoes, melons, and breadfruit. In 1999, a private
company built a tuna loining plant with more than 400 employees, mostly women. But the plant closed in 2005 after
a failed attempt to convert it to produce tuna steaks, a process that requires half as many employees. Operating
costs exceeded revenue, and the plant's owners tried to partner with the government to prevent closure. But government
officials personally interested in an economic stake in the plant refused to help. After the plant closed, it was
taken over by the government, which had been the guarantor of a $2 million loan to the business.
On September 15, 2007, Witon Barry (of the Tobolar Copra processing plant in the Marshall Islands capital of Majuro)
said power authorities, private companies, and entrepreneurs had been experimenting with coconut oil as alternative
to diesel fuel for vehicles, power generators, and ships. Coconut trees abound in the Pacific's tropical islands.
Copra, the meat of the coconut, yields coconut oil (1 liter for every 6 to 10 coconuts). In 2009, a 57 kW solar power
plant was installed, the largest in the Pacific at the time, including New Zealand. It is estimated that 330 kW of
solar and 450 kW of wind power would be required to make the College of the Marshall Islands energy self-sufficient.
Marshalls Energy Company (MEC), a government entity, provides the islands with electricity. In 2008, 420 solar home
systems of 200 Wp each were installed on Ailinglaplap Atoll, sufficient for limited electricity use. Historical population
figures are unknown. In 1862, the population was estimated at about 10,000. In 1960, the entire population was about
15,000. In July 2011, the number of island residents was estimated at about 72,191. Over two-thirds of the population live in the capital, Majuro, and in Ebeye, the secondary urban center, located on Kwajalein Atoll. This excludes
many who have relocated elsewhere, primarily to the United States. The Compact of Free Association allows them to
freely relocate to the United States and obtain work there. About 4,300 Marshall Islanders have relocated to Springdale, Arkansas, the largest concentration of Marshallese outside their island home.
Most of the residents are Marshallese, who are of Micronesian origin and migrated from Asia several thousand years
ago. A minority of Marshallese have some recent Asian ancestry, mainly Japanese. About one-half of the nation's population
lives on Majuro, the capital, and Ebeye, a densely populated island. The outer islands are sparsely populated due
to lack of employment opportunities and economic development. Life on the outer atolls is generally traditional.
Major religious groups in the Republic of the Marshall Islands include the United Church of Christ (formerly Congregational),
with 51.5% of the population; the Assemblies of God, 24.2%; the Roman Catholic Church, 8.4%; and The Church of Jesus
Christ of Latter-day Saints (Mormons), 8.3%. Also represented are Bukot Nan Jesus (also known as Assembly of God Part Two), 2.2%; Baptists, 1.0%; Seventh-day Adventists, 0.9%; Full Gospel, 0.7%; and the Baha'i Faith, 0.6%. Persons
without any religious affiliation account for a very small percentage of the population. There is also a small community
of Ahmadiyya Muslims based in Majuro, with the first mosque opening in the capital in September 2012. The Ministry
of Education (Marshall Islands) operates the state schools in the Marshall Islands. There are two tertiary institutions
operating in the Marshall Islands, the College of the Marshall Islands and the University of the South Pacific. The
Marshall Islands are served by the Marshall Islands International Airport in Majuro, the Bucholz Army Airfield in
Kwajalein, and other small airports and airstrips.
The szlachta ([ˈʂlaxta], exonym: nobility) was a legally privileged noble class with origins in the Kingdom of
Poland. It gained considerable institutional privileges between 1333 and 1370 during the reign of King Casimir III
the Great. In 1413, following a series of tentative personal unions between the Grand Duchy of Lithuania and the Crown Kingdom of Poland, the existing Lithuanian nobility formally joined this class. As the Polish-Lithuanian
Commonwealth (1569–1795) evolved and expanded in territory, its membership grew to include the leaders of Ducal Prussia,
Podolian and Ruthenian lands. The origins of the szlachta are shrouded in obscurity and mystery and have been the
subject of a variety of theories. Traditionally, its members were owners of landed property, often in the form
of "manor farms" or so-called folwarks. The nobility negotiated substantial and increasing political and legal privileges
for itself throughout its entire history until the decline of the Polish Commonwealth in the late 18th century. During
the Partitions of Poland from 1772 to 1795, its members began to lose these legal privileges and social status. From
that point until 1918, the legal status of the nobility was essentially dependent upon the policies of the three
partitioning powers: the Russian Empire, the Kingdom of Prussia, and the Habsburg Monarchy. The legal privileges
of the szlachta were abolished in the Second Polish Republic by the March Constitution of 1921. The notion
that all Polish nobles were social equals, regardless of their financial status or offices held, is enshrined in
a traditional Polish saying. The term szlachta is derived from the Old High German word slahta (modern German Geschlecht),
which means "(noble) family", much as many other Polish words pertaining to the nobility derive from German words—e.g.,
the Polish "rycerz" ("knight", cognate of the German "Ritter") and the Polish "herb" ("coat of arms", from the German
"Erbe", "heritage"). Poles of the 17th century assumed that "szlachta" came from the German "schlachten" ("to slaughter"
or "to butcher"); also suggestive is the German "Schlacht" ("battle"). Early Polish historians thought the term may
have derived from the name of the legendary proto-Polish chief, Lech, mentioned in Polish and Czech writings. Some
powerful Polish nobles were referred to as "magnates" (Polish singular: "magnat", plural: "magnaci") and "możny"
("magnate", "oligarch"; plural: "możni"); see Magnates of Poland and Lithuania. The Polish term "szlachta" designated
the formalized, hereditary noble class of Polish-Lithuanian Commonwealth. In official Latin documents of the old
Commonwealth, hereditary szlachta are referred to as "nobilitas" and are indeed the equivalent in legal status of
the English nobility. Today the word szlachta in the Polish language simply translates to "nobility". In its broadest
meaning, it can also denote some non-hereditary honorary knighthoods granted today by some European monarchs. Occasionally,
19th-century non-noble landowners were referred to as szlachta by courtesy or error, when they owned manorial estates
though they were not noble by birth. In the narrow sense, szlachta denotes the old-Commonwealth nobility. In the
past, a certain misconception sometimes led to the mistranslation of "szlachta" as "gentry" rather than "nobility". This mistaken practice arose because the economic status of some szlachta members was inferior to that of the
nobility in other European countries (see also Estates of the Realm regarding wealth and nobility). The szlachta
included those almost rich and powerful enough to be magnates down to rascals with a noble lineage, no land, no castle,
no money, no village, and no peasants. As some szlachta were poorer than some non-noble gentry, some particularly
impoverished szlachta were forced to become tenants of the wealthier gentry. In doing so, however, these szlachta
retained all their constitutional prerogatives, as it was not wealth or lifestyle (obtainable by the gentry), but
hereditary juridical status that determined nobility. The origins of the szlachta, while ancient, have always been considered obscure. As a result, its members often referred to it as odwieczna (perennial). Two popular historic
theories of origin forwarded by its members and earlier historians and chroniclers involved descent from the ancient
Iranian tribes known as Sarmatians or from Japheth, one of Noah's sons (by contrast, the peasantry were said to be
the offspring of another son of Noah, Ham—and hence subject to bondage under the Curse of Ham—and the Jews as the
offspring of Shem). Other fanciful theories included its foundation by Julius Caesar, Alexander the Great or regional leaders who had not mixed their bloodlines with those of 'slaves, prisoners, and aliens'. Another theory describes its derivation from a non-Slavic warrior class, forming a distinct element known as the Lechici/Lekhi (Lechitów) within the ancient Polonic tribal groupings (Indo-European caste systems). This hypothesis states that this upper class was not of Slavonic extraction and was of a different origin than the Slavonic peasants (kmiecie; Latin: cmethones) over whom they ruled. The szlachta were differentiated from the rural population. The nobleman's sense of distinction led to practices that in later periods would be characterized as racism. The szlachta were noble in the Aryan sense: "noble" in contrast to the people over whom they ruled after coming into contact with them. The szlachta traced their descent from Lech/Lekh, who probably founded the Polish kingdom in about the fifth century. Lechia was the name of Poland in antiquity, and the szlachta's own name for themselves was Lechici/Lekhi. An exact counterpart of szlachta society was the Meerassee system of tenure of southern India—an aristocracy of equality—settled as conquerors among a separate race. The Polish state paralleled the Roman Empire in that full rights of citizenship were limited to the szlachta. The szlachta were a caste, a military caste, as
in Hindu society. The documentation regarding Raciborz and Albert's tenure is the earliest surviving of the use of
the clan name and cry defining the honorable status of Polish knights. The names of knightly genealogiae only came
to be associated with heraldic devices later in the Middle Ages and in the early modern period. The Polish clan name
and cry ritualized the ius militare, i.e., the power to command an army; and they had been used some time before
1244 to define knightly status. (Górecki 1992, pp. 183–185). Around the 14th century, there was little difference
between knights and the szlachta in Poland. Members of the szlachta had the personal obligation to defend the country
(pospolite ruszenie), thereby becoming the kingdom's most privileged social class. Inclusion in the class was almost
exclusively based on inheritance. Concerning the early Polish tribes, geography contributed to long-standing traditions.
The Polish tribes were internalized and organized around a unifying religious cult, governed by the wiec, an assembly
of free tribesmen. Later, when safety required power to be consolidated, an elected prince was chosen to govern.
The election privilege was usually limited to elites. The tribes were ruled by clans (ród) consisting of people related
by blood or marriage and theoretically descending from a common ancestor, giving the ród/clan a highly developed
sense of solidarity. (See gens.) The starosta (or starszyna) had judicial and military power over the ród/clan, although
this power was often exercised with an assembly of elders. Strongholds called gród were built where the religious
cult was powerful, where trials were conducted, and where clans gathered in the face of danger. The opole was the
territory occupied by a single tribe. (Manteuffel 1982, p. 44) Mieszko I of Poland (c. 935 – 25 May 992) established
an elite knightly retinue from within his army, which he depended upon for success in uniting the Lekhitic tribes
and preserving the unity of his state. Documented proof exists of Mieszko I's successors utilizing such a retinue,
as well. Another class of knights were granted land by the prince, allowing them the economic ability to serve the
prince militarily. A Polish nobleman living at the time prior to the 15th century was referred to as a "rycerz",
very roughly equivalent to the English "knight," the critical difference being the status of "rycerz" was almost
strictly hereditary; the class of all such individuals was known as the "rycerstwo". Representing the wealthier families
of Poland and itinerant knights from abroad seeking their fortunes, this other class of rycerstwo, which became the
szlachta/nobility ("szlachta" becomes the proper term for Polish nobility beginning about the 15th century), gradually
formed apart from Mieszko I's and his successors' elite retinues. This rycerstwo/nobility obtained more privileges
granting them favored status. They were absolved from particular burdens and obligations under ducal law, resulting
in the belief only rycerstwo (those combining military prowess with high/noble birth) could serve as officials in
state administration. The Period of Division (A.D. 1138–1314), which included nearly 200 years of feudal
fragmentation and which stemmed from Bolesław III's division of Poland among his sons, was the genesis of the social
structure which saw the economic elevation of the great landowning feudal nobles (możni/Magnates, both ecclesiastical
and lay) from the rycerstwo they originated from. The prior social structure was one of Polish tribes united into
the historic Polish nation under a state ruled by the Piast dynasty, which appeared circa A.D. 850. Some możni
(Magnates) descending from past tribal dynasties regarded themselves as co-proprietors of Piast realms, even though
the Piasts attempted to deprive them of their independence. These możni (Magnates) constantly sought to undermine
princely authority. In Gall Anonym's chronicle, the nobility's alarm is noted when the Palatine Sieciech
"elevated those of a lower class over those who were noble born" entrusting them with state offices. (Manteuffel
1982, p. 149) In Lithuania Propria and in Samogitia prior to the creation of the Kingdom of Lithuania by Mindaugas,
nobles were called die beste leuten in German-language sources. In Lithuanian, nobles were called ponai. The higher nobility were called 'kunigai' or 'kunigaikščiai' (dukes), a loanword from the Scandinavian konung. They were the established local leaders and warlords. During the development of the state they gradually
became subordinated to higher dukes, and later to the King of Lithuania. With the expansion of the Lithuanian duchy into the lands of Ruthenia in the mid-14th century, a new term for the nobility appeared: bajorai, from the Ruthenian (modern Ukrainian and Belarusian) бояре. To this day this word is used in Lithuanian to denote nobility, not only Lithuania's own but also that of other countries. After the Union of Horodło the Lithuanian nobility acquired
equal status with the Polish szlachta, and over time began to become more and more polonized, although they did preserve
their national consciousness, and in most cases recognition of their Lithuanian family roots. In the 16th century
some of the Lithuanian nobility claimed that they were of Roman extraction, and the Lithuanian language was just
a morphed Latin language. This led to a paradox: the Polish nobility claimed its own ancestry from Sarmatian tribes, yet the Sarmatians were considered enemies of the Romans, so a new Roman-Sarmatian theory was created. Strong cultural ties with the Polish nobility meant that in the 16th century a new term for the Lithuanian nobility appeared: šlėkta, a direct loanword from the Polish szlachta. Strictly speaking, Lithuanians would also use this term, šlėkta (szlachta), for their own nobility, but Lithuanian linguists forbade the use of this Polish loanword, a refusal that complicates all naming. The process of polonization took place over a lengthy
period of time. At first only the highest members of the nobility were involved, although gradually a wider group
of the population was affected. The major effects on the lesser Lithuanian nobility took place after various sanctions
were imposed by the Russian Empire, such as removing Lithuania from the names of the Gubernyas a few years after the
November Uprising. After the January Uprising the sanctions went further, and Russian officials announced that "Lithuanians
are Russians seduced by Poles and Catholicism" and began to intensify russification, and to ban the printing of books
in the Lithuanian language. In Ruthenia the nobility gradually shifted its loyalty towards the multicultural and
multilingual Grand Duchy of Lithuania after the principalities of Halych and Volhynia became a part of it. Many noble
Ruthenian families intermarried with Lithuanian ones. The Orthodox nobles' rights were nominally equal to those enjoyed
by Polish and Lithuanian nobility, but there was cultural pressure to convert to Catholicism, which was greatly eased in 1596 by the Union of Brest. See, for example, the careers of Senator Adam Kisiel and Jerzy Franciszek Kulczycki.
In the Kingdom of Poland and later in the Polish-Lithuanian Commonwealth, ennoblement (nobilitacja) may be equated
with an individual given legal status as a szlachta (member of the Polish nobility). Initially, this privilege could
be granted by the monarch, but from 1641 onward this right was reserved for the sejm. Most often the individual
being ennobled would join an existing noble szlachta clan and assume the undifferentiated coat of arms of that clan.
According to heraldic sources, the total number of legal ennoblements issued between the 14th century and the mid-18th century is estimated at approximately 800. This is an average of only about two ennoblements per year, or only 0.00000014–0.000001 of the historical population (compare the historical demography of Poland). According to heraldic sources, an estimated 1,600 legal ennoblements occurred throughout the history of the Kingdom of Poland and the Polish-Lithuanian Commonwealth from the 14th century onward (half of them in the final years of the late 18th century).
In the late 14th century, in the Grand Duchy of Lithuania, Vytautas the Great reformed the Grand Duchy's army: instead
of calling all men to arms, he created forces comprising professional warriors—bajorai ("nobles"; see the cognate
"boyar"). As there were not enough nobles, Vytautas trained suitable men, relieving them of labor on the land and
of other duties; for their military service to the Grand Duke, they were granted land that was worked by hired men
(veldams). The newly formed noble families generally took up, as their family names, the Lithuanian pagan given names
of their ennobled ancestors; this was the case with the Goštautai, Radvilos, Astikai, Kęsgailos and others. These
families were granted their coats of arms under the Union of Horodlo (1413). Significant legislative changes in the
status of the szlachta, as defined by Robert Bideleux and Ian Jeffries, consist of its 1374 exemption from the land
tax, a 1425 guarantee against the 'arbitrary arrests and/or seizure of property' of its members, a 1454 requirement
that military forces and new taxes be approved by provincial Sejms, and statutes issued between 1496 and 1611 that
prescribed the rights of commoners. Nobles were born into a noble family, adopted by a noble family (this was abolished
in 1633) or ennobled by a king or Sejm for various reasons (bravery in combat, service to the state, etc.—yet this
was the rarest means of gaining noble status). Many nobles were in fact usurpers: commoners who moved into another part of the country and falsely claimed noble status. Hundreds of such false nobles were
denounced by Hieronim Nekanda Trepka in his Liber generationis plebeanorium (or Liber chamorum) in the first half
of the 16th century. The law forbade non-nobles from owning nobility-estates and promised the estate to the denouncer.
Trepka was an impoverished nobleman who lived the life of a townsman and collected hundreds of such stories, hoping to take over one of these estates. He does not seem to have ever succeeded in proving a case in court. Many sejms issued decrees
over the centuries in an attempt to resolve this issue, but with little success. It is unknown what percentage of
the Polish nobility came from the 'lower' orders of society, but most historians agree that nobles of such base origins
formed a 'significant' element of the szlachta. The Polish nobility enjoyed many rights that were not available to
the noble classes of other countries and, typically, each new monarch conceded them further privileges. Those privileges
became the basis of the Golden Liberty in the Polish–Lithuanian Commonwealth. Despite having a king, Poland was called
the nobility's Commonwealth because the king was elected by all interested members of hereditary nobility and Poland
was considered to be the property of this class, not of the king or the ruling dynasty. This state of affairs grew
up in part because of the extinction of the male-line descendants of the old royal dynasty (first the Piasts, then
the Jagiellons), and the selection by the nobility of the Polish king from among the dynasty's female-line descendants.
Poland's successive kings granted privileges to the nobility at the time of their election to the throne (the privileges
being specified in the king-elect's Pacta conventa) and at other times in exchange for ad hoc permission to raise
an extraordinary tax or a pospolite ruszenie. In 1355 in Buda King Casimir III the Great issued the first country-wide
privilege for the nobility, in exchange for their agreement that in the absence of Casimir's male heirs, the throne
would pass to his nephew, Louis I of Hungary. He decreed that the nobility would no longer be subject to 'extraordinary'
taxes, or use their own funds for military expeditions abroad. He also promised that during travels of the royal
court, the king and the court would pay for all expenses, instead of using facilities of local nobility. In 1374
King Louis of Hungary approved the Privilege of Koszyce (Polish: "przywilej koszycki" or "ugoda koszycka") in Košice
in order to guarantee the Polish throne for his daughter Jadwiga. He broadened the definition of who was a member
of the nobility and exempted the entire class from all but one tax (łanowy, which was limited to 2 grosze from łan
(an old measure of land size)). In addition, the King's right to raise taxes was abolished; no new taxes could be
raised without the agreement of the nobility. Henceforth, also, district offices (Polish: "urzędy ziemskie") were
reserved exclusively for local nobility, as the Privilege of Koszyce forbade the king to grant official posts and
major Polish castles to foreign knights. Finally, this privilege obliged the King to pay indemnities to nobles injured
or taken captive during a war outside Polish borders. In 1422 King Władysław II Jagiełło by the Privilege of Czerwińsk
(Polish: "przywilej czerwiński") established the inviolability of nobles' property (their estates could not be confiscated
except upon a court verdict) and ceded some jurisdiction over fiscal policy to the Royal Council (later, the Senat
of Poland), including the right to mint coinage. In 1430 with the Privileges of Jedlnia, confirmed at Kraków in 1433
(Polish: "przywileje jedlneńsko-krakowskie"), based partially on his earlier Brześć Kujawski privilege (April 25,
1425), King Władysław II Jagiełło granted the nobility a guarantee against arbitrary arrest, similar to the English
Magna Carta's Habeas corpus, known from its own Latin name as "neminem captivabimus (nisi jure victum)." Henceforth
no member of the nobility could be imprisoned without a warrant from a court of justice: the king could neither punish
nor imprison any noble at his whim. King Władysław's quid pro quo for this boon was the nobles' guarantee that his
throne would be inherited by one of his sons (who would be bound to honour the privileges theretofore granted to
the nobility). On May 2, 1447 the same king issued the Wilno Privilege which gave the Lithuanian boyars the same
rights as those possessed by the Polish szlachta. In 1454 King Casimir IV granted the Nieszawa Statutes (Polish:
"statuty cerkwicko-nieszawskie"), clarifying the legal basis of voivodship sejmiks (local parliaments). The king
could promulgate new laws, raise taxes, or call for a levée en masse (pospolite ruszenie) only with the consent of
the sejmiks, and the nobility were protected from judicial abuses. The Nieszawa Statutes also curbed the power of
the magnates, as the Sejm (national parliament) received the right to elect many officials, including judges, voivods
and castellans. These privileges were demanded by the szlachta as a compensation for their participation in the Thirteen
Years' War. The first "free election" (Polish: "wolna elekcja") of a king took place in 1492. (To be sure, some earlier
Polish kings had been elected with help from bodies such as that which put Casimir II on the throne, thereby setting
a precedent for free elections.) Only senators voted in the 1492 free election, which was won by John I Albert. For
the duration of the Jagiellonian Dynasty, only members of that royal family were considered for election; later,
there would be no restrictions on the choice of candidates. On April 26, 1496 King John I Albert granted the Privilege
of Piotrków (Polish: "Przywilej piotrkowski", "konstytucja piotrkowska" or "statuty piotrkowskie"), increasing the
nobility's feudal power over serfs. It bound the peasant to the land, as only one son (not the eldest) was permitted
to leave the village; townsfolk (Polish: "mieszczaństwo") were prohibited from owning land; and positions in the
Church hierarchy could be given only to nobles. On 23 October 1501, at Mielnik, the Polish–Lithuanian union was reformed
at the Union of Mielnik (Polish: unia mielnicka, unia piotrkowsko-mielnicka). It was there that the tradition of
the coronation Sejm (Polish: "Sejm koronacyjny") was founded. Once again the middle nobility (middle in wealth, not
in rank) attempted to reduce the power of the magnates with a law that made them impeachable before the Senate for
malfeasance. However, the Act of Mielno (Polish: Przywilej mielnicki) of 25 October did more to strengthen the magnate-dominated Senate of Poland than the lesser nobility. The nobles were given the right to disobey the King or his representatives—in
the Latin, "non praestanda oboedientia"—and to form confederations, armed rebellions against the king or state
officers, if the nobles thought that the law or their legitimate privileges were being infringed. On 3 May 1505 King
Alexander I Jagiellon granted the Act of "Nihil novi nisi commune consensu" (Latin: "I accept nothing new except
by common consent"). This forbade the king to pass any new law without the consent of the representatives of the
nobility, in Sejm and Senat assembled, and thus greatly strengthened the nobility's political position. Basically,
this act transferred legislative power from the king to the Sejm. This date commonly marks the beginning of the First
Rzeczpospolita, the period of a szlachta-run "Commonwealth". About that time the "executionist movement" (Polish:
"egzekucja praw", "execution of the laws") began to take form. Its members would seek to curb the power of the magnates
at the Sejm and to strengthen the power of king and country. In 1562 at the Sejm in Piotrków they would force the
magnates to return many leased crown lands to the king, and the king to create a standing army (wojsko kwarciane).
One of the most famous members of this movement was Jan Zamoyski. After his death in 1605, the movement lost its
political force. Until the death of Sigismund II Augustus, the last king of the Jagiellonian dynasty, monarchs could
be elected from within only the royal family. However, starting from 1573, practically any Polish noble or foreigner
of royal blood could become a Polish–Lithuanian monarch. Every newly elected king was supposed to sign two documents—the
Pacta conventa ("agreed pacts")—a confirmation of the king's pre-election promises, and Henrican articles (artykuły
henrykowskie, named after the first freely elected king, Henry of Valois). The latter document served as a virtual
Polish constitution and contained the basic laws of the Commonwealth. In 1578 King Stefan Batory created the Crown
Tribunal in order to reduce the enormous pressure on the Royal Court. This placed much of the monarch's juridical
power in the hands of the elected szlachta deputies, further strengthening the nobility class. In 1581 the Crown
Tribunal was joined by a counterpart in Lithuania, the Lithuanian Tribunal. For many centuries, wealthy and powerful
members of the szlachta sought to gain legal privileges over their peers. Few szlachta were wealthy enough to be
known as magnates (karmazyni—the "Crimsons", from the crimson colour of their boots). A proper magnate should be
able to trace noble ancestors back for many generations and own at least 20 villages or estates. He should also hold
a major office in the Commonwealth. Some historians estimate the number of magnates as 1% of the number of szlachta.
Out of approx. one million szlachta, tens of thousands of families, only 200–300 persons could be classed as great
magnates with country-wide possessions and influence, and 30–40 of them could be viewed as those with significant
impact on Poland's politics. Magnates often received gifts from monarchs, which significantly increased their wealth.
Often, those gifts were only temporary leases, which the magnates never returned (in the 16th century, the anti-magnate
opposition among szlachta was known as the ruch egzekucji praw—movement for execution of the laws—which demanded
that all such possessions be returned to their proper owner, the king). One of the most important victories of the
magnates was the late-16th-century right to create ordynacje (similar to majorats), which ensured that a family
which gained wealth and power could more easily preserve it. The ordynacje of the Radziwiłł, Zamoyski, Potocki
and Lubomirski families often rivalled the estates of the king and were important power bases for the magnates. The sovereignty
of szlachta was ended in 1795 by Partitions of Poland, and until 1918 their legal status was dependent on policies
of the Russian Empire, the Kingdom of Prussia or the Habsburg Monarchy. In the 1840s Nicholas I reduced 64,000 szlachta
to commoner status. Despite this, 62.8% of Russia's nobles were szlachta in 1858 and still 46.1% in 1897. Serfdom
was abolished in Russian Poland on February 19, 1864. It was deliberately enacted in a way that would ruin the szlachta.
It was the only area where peasants paid the market price in redemption for the land (the average for the empire
was 34% above the market price). All land taken from Polish peasants since 1846 was to be returned without redemption
payments. The ex-serfs could only sell land to other peasants, not szlachta. 90% of the ex-serfs in the empire who
actually gained land after 1861 were in the 8 western provinces. Along with those of Romania, Polish landless or domestic
serfs were the only ones to be given land after serfdom was abolished. All this was to punish the szlachta's role
in the uprisings of 1830 and 1863. By 1864, 80% of the szlachta were déclassé; a quarter of the petty nobles were worse off than the
average serf, and while 48.9% of land in Russian Poland was in peasant hands, nobles still held 46%. In the Second Polish Republic
the privileges of the nobility were lawfully abolished by the March Constitution in 1921 and as such not granted
by any future Polish law. The Polish nobility differed in many respects from the nobility of other countries. The
most important difference was that, while in most European countries the nobility lost power as the ruler strove
for absolute monarchy, in Poland the reverse process occurred: the nobility actually gained power at the expense
of the king, and the political system evolved into an oligarchy. Poland's nobility were also more numerous than those
of all other European countries, constituting some 10–12% of the total population of the historic Polish–Lithuanian Commonwealth,
a similar 10–12% among ethnic Poles on ethnic Polish lands (part of the Commonwealth), but up to 25% of all Poles worldwide
(the szlachta having more resources at their disposal for travel and conquest), and in some poorer regions (e.g., Mazowsze,
the area centred on Warsaw) nearly 30%. However, by some estimates the szlachta comprised around 8% of the total population
in 1791 (up from 6.6% in the 16th century), and no more than 16% of the Roman Catholic (mostly ethnically Polish)
population. Note, though, that the Polish szlachta usually incorporated most of the local nobility from the areas
absorbed by Poland–Lithuania (Ruthenian boyars, Livonian nobles, etc.). By contrast, the nobilities of other
European countries, except for Spain, amounted to a mere 1–3%. The era of the Polish nobility's sovereign rule nevertheless
ended earlier than in other countries (excluding France), in 1795 (see: Partitions of Poland); from then on their
legitimation and fate depended on the legislation and procedures of the Russian Empire, the Kingdom of Prussia or the Habsburg
Monarchy. Their privileges were gradually limited further, to be completely dissolved by the March Constitution
of Poland in 1921. There were a number of avenues to upward social mobility and the achievement of nobility. Poland's
nobility was not a rigidly exclusive, closed class. Many low-born individuals, including townsfolk, peasants and
Jews, could and did rise to official ennoblement in Polish society. Each szlachcic had enormous influence over the
country's politics, in some ways even greater than that enjoyed by the citizens of modern democratic countries. Between
1652 and 1791, any nobleman could nullify all the proceedings of a given sejm (Commonwealth parliament) or sejmik
(Commonwealth local parliament) by exercising his individual right of liberum veto (Latin for "I do not allow"),
except in the case of a confederated sejm or confederated sejmik. All children of the Polish nobility inherited their
noble status from a noble mother and father. Any individual could attain ennoblement (nobilitacja) for special services
to the state. A foreign noble might be naturalised as a Polish noble (Polish: "indygenat") by the Polish king (later,
from 1641, only by a general sejm). In theory at least, all Polish noblemen were social equals. Also in theory, they
were legal peers. Those who held 'real power' dignities were more privileged but these dignities were not hereditary.
Those who held honorary dignities were higher in 'ritual' hierarchy but these dignities were also granted for a lifetime.
Some tenancies became hereditary and went with both privilege and titles. Nobles who were not direct barons of the
Crown but held land from other lords were only peers "de iure". Note that the Polish landed gentry (ziemianie or
ziemiaństwo) was composed of any nobility that owned land: the magnates, the middle nobility, and those lesser
nobles who owned at least part of a village. As manorial lordships were also open to burgesses of
certain privileged royal cities, not all landed gentry had a hereditary title of nobility. Coats of arms were very
important to the Polish nobility. Its heraldic system evolved together with those of its neighbours in Central Europe, while
differing in many ways from the heraldry of other European countries. Polish knightly families had their counterparts,
links or roots in Moravia (e.g. Poraj) and Germany (e.g. Junosza). The most notable difference is that, contrary
to other European heraldic systems, Jews, Muslim Tatars and other minorities could be granted noble status.
Also, most families sharing origin would also share a coat-of-arms. They would also share arms with families adopted
into the clan (these would often have their arms officially altered upon ennoblement). Sometimes unrelated families
would be falsely attributed to the clan on the basis of similarity of arms. Noble families also often claimed
clan membership inaccurately. As a result, the number of coats of arms in this system was rather low, not exceeding 200 in the late
Middle Ages (40,000 in the late 18th century). Also, the tradition of differentiating between the coat of arms proper
and a lozenge granted to women did not develop in Poland. Usually men inherited the coat of arms from their fathers.
Also, the brisure was rarely used. The szlachta's prevalent mentality and ideology were manifested in "Sarmatism",
a name derived from a myth of the szlachta's origin in the powerful ancient nation of Sarmatians. This belief system
became an important part of szlachta culture and affected all aspects of their lives. It was popularized by poets
who exalted traditional village life, peace and pacifism. It was also manifested in oriental-style apparel (the żupan,
kontusz, sukmana, pas kontuszowy, delia); and made the scimitar-like szabla, too, a near-obligatory item of everyday
szlachta apparel. Sarmatism served to integrate the multi-ethnic nobility as it created an almost nationalistic sense
of unity and pride in the szlachta's "Golden Liberty" (złota wolność). Knowledge of Latin was widespread, and most
szlachta freely mixed Polish and Latin vocabulary (the latter, "macaronisms"—from "macaroni") in everyday conversation.
Prior to the Reformation, the Polish nobility were mostly either Roman Catholic or Orthodox with a small group of
Muslims. Many families, however, soon adopted the Reformed faiths. After the Counter-Reformation, when the Roman
Catholic Church regained power in Poland, the nobility became almost exclusively Catholic, despite the fact that
Roman Catholicism was not the majority religion in the Commonwealth (the Catholic and Orthodox churches each accounted
for some 40% of the citizen population, with the remaining 20% being Jews or members of Protestant denominations).
In the 18th century, many followers of Jacob Frank joined the ranks of the Jewish-descended Polish gentry. Although the Jewish
religion was not usually grounds for blocking or revoking noble status, some laws favoured religious conversion from
Judaism to Christianity (see: Neophyte) by rewarding it with ennoblement.
Publius Vergilius Maro (Classical Latin: [ˈpuː.blɪ.ʊs wɛrˈɡɪ.lɪ.ʊs ˈma.roː]; October 15, 70 BC – September 21, 19 BC), usually
called Virgil or Vergil /ˈvɜːrdʒᵻl/ in English, was an ancient Roman poet of the Augustan period. He is known for
three major works of Latin literature, the Eclogues (or Bucolics), the Georgics, and the epic Aeneid. A number of
minor poems, collected in the Appendix Vergiliana, are sometimes attributed to him. Virgil is traditionally ranked
as one of Rome's greatest poets. His Aeneid has been considered the national epic of ancient Rome from the time of
its composition to the present day. Modeled after Homer's Iliad and Odyssey, the Aeneid follows the Trojan refugee
Aeneas as he struggles to fulfill his destiny and arrive on the shores of Italy—in Roman mythology the founding act
of Rome. Virgil's work has had wide and deep influence on Western literature, most notably Dante's Divine Comedy,
in which Virgil appears as Dante's guide through hell and purgatory. Virgil's biographical tradition is thought to
depend on a lost biography by Varius, Virgil's editor, which was incorporated into the biography by Suetonius and
the commentaries of Servius and Donatus, the two great commentators on Virgil's poetry. Although the commentaries
no doubt record much factual information about Virgil, some of their evidence can be shown to rely on inferences
made from his poetry and allegorizing; thus, Virgil's biographical tradition remains problematic. The tradition holds
that Virgil was born in the village of Andes, near Mantua in Cisalpine Gaul. Analysis of his name has led to beliefs
that he descended from earlier Roman colonists. Modern speculation ultimately is not supported by narrative evidence
either from his own writings or his later biographers. Macrobius says that Virgil's father was of a humble background;
however, scholars generally believe that Virgil was from an equestrian landowning family which could afford to give
him an education. He attended schools in Cremona, Mediolanum, Rome and Naples. After considering briefly a career
in rhetoric and law, the young Virgil turned his talents to poetry. According to the commentators, Virgil received
his first education when he was five years old and he later went to Cremona, Milan, and finally Rome to study rhetoric,
medicine, and astronomy, which he soon abandoned for philosophy. From Virgil's admiring references to the neoteric
writers Pollio and Cinna, it has been inferred that he was, for a time, associated with Catullus' neoteric circle.
However, schoolmates considered Virgil extremely shy and reserved, according to Servius, and he was nicknamed "Parthenias"
or "maiden" because of his social aloofness. Virgil seems to have suffered bad health throughout his life and in
some ways lived the life of an invalid. According to the Catalepton, while in the school of Siro the Epicurean
at Naples, he began to write poetry. A group of small works attributed to the youthful Virgil by the commentators
survive collected under the title Appendix Vergiliana, but are largely considered spurious by scholars. One, the
Catalepton, consists of fourteen short poems, some of which may be Virgil's, and another, a short narrative poem
titled the Culex ("The Gnat"), was attributed to Virgil as early as the 1st century AD. The biographical tradition
asserts that Virgil began the hexameter Eclogues (or Bucolics) in 42 BC and it is thought that the collection was
published around 39–38 BC, although this is controversial. The Eclogues (from the Greek for "selections") are a group
of ten poems roughly modeled on the bucolic hexameter poetry ("pastoral poetry") of the Hellenistic poet Theocritus.
After his victory in the Battle of Philippi in 42 BC, fought against the army led by the assassins of Julius Caesar,
Octavian tried to pay off his veterans with land expropriated from towns in northern Italy, supposedly including,
according to the tradition, an estate near Mantua belonging to Virgil. The loss of his family farm and the attempt
through poetic petitions to regain his property have traditionally been seen as Virgil's motives in the composition
of the Eclogues. This is now thought to be an unsupported inference from interpretations of the Eclogues. In Eclogues
1 and 9, Virgil indeed dramatizes the contrasting feelings caused by the brutality of the land expropriations through
pastoral idiom, but offers no indisputable evidence of the supposed biographic incident. While some readers have
identified the poet himself with various characters and their vicissitudes, whether gratitude by an old rustic to
a new god (Ecl. 1), frustrated love by a rustic singer for a distant boy (his master's pet, Ecl. 2), or a master
singer's claim to have composed several eclogues (Ecl. 5), modern scholars largely reject such efforts to garner
biographical details from works of fiction, preferring to interpret an author's characters and themes as illustrations
of contemporary life and thought. The ten Eclogues present traditional pastoral themes with a fresh perspective.
Eclogues 1 and 9 address the land confiscations and their effects on the Italian countryside. 2 and 3 are pastoral
and erotic, discussing both homosexual love (Ecl. 2) and attraction toward people of any gender (Ecl. 3). Eclogue
4, addressed to Asinius Pollio, the so-called "Messianic Eclogue" uses the imagery of the golden age in connection
with the birth of a child (who the child was meant to be has been subject to debate). 5 and 8 describe the myth of
Daphnis in a song contest, 6, the cosmic and mythological song of Silenus; 7, a heated poetic contest, and 10 the
sufferings of the contemporary elegiac poet Cornelius Gallus. Virgil is credited in the Eclogues with establishing
Arcadia as a poetic ideal that still resonates in Western literature and visual arts and setting the stage for the
development of Latin pastoral by Calpurnius Siculus, Nemesianus, and later writers. Sometime after the publication
of the Eclogues (probably before 37 BC), Virgil became part of the circle of Maecenas, Octavian's capable agent d'affaires
who sought to counter sympathy for Antony among the leading families by rallying Roman literary figures to Octavian's
side. Virgil came to know many of the other leading literary figures of the time, including Horace, in whose poetry
he is often mentioned, and Varius Rufus, who later helped finish the Aeneid. At Maecenas' insistence (according to
the tradition) Virgil spent the ensuing years (perhaps 37–29 BC) on the long didactic hexameter poem called the Georgics
(from Greek, "On Working the Earth") which he dedicated to Maecenas. The ostensible theme of the Georgics is instruction
in the methods of running a farm. In handling this theme, Virgil follows in the didactic ("how to") tradition of
the Greek poet Hesiod's Works and Days and several works of the later Hellenistic poets. The four books of the Georgics
focus respectively on raising crops and trees (1 and 2), livestock and horses (3), and beekeeping and the qualities
of bees (4). Well-known passages include the beloved Laus Italiae of Book 2, the prologue description of the temple
in Book 3, and the description of the plague at the end of Book 3. Book 4 concludes with a long mythological narrative,
in the form of an epyllion which describes vividly the discovery of beekeeping by Aristaeus and the story of Orpheus'
journey to the underworld. Ancient scholars, such as Servius, conjectured that the Aristaeus episode replaced, at
the emperor's request, a long section in praise of Virgil's friend, the poet Gallus, who was disgraced by Augustus,
and who committed suicide in 26 BC. The Georgics' tone wavers between optimism and pessimism, sparking critical debate
on the poet's intentions, but the work lays the foundations for later didactic poetry. Virgil and Maecenas are said
to have taken turns reading the Georgics to Octavian upon his return from defeating Antony and Cleopatra at the Battle
of Actium in 31 BC. The Aeneid is widely considered Virgil's finest work and one of the most important poems in the
history of western literature. Virgil worked on the Aeneid during the last eleven years of his life (29–19 BC), commissioned,
according to Propertius, by Augustus. The epic poem consists of 12 books in dactylic hexameter verse which describe
the journey of Aeneas, a warrior fleeing the sack of Troy, to Italy, his battle with the Italian prince Turnus, and
the foundation of a city from which Rome would emerge. The Aeneid's first six books describe the journey of Aeneas
from Troy to Rome. Virgil made use of several models in the composition of his epic; Homer, the preeminent author
of classical epic, is everywhere present, but Virgil also makes special use of the Latin poet Ennius and the Hellenistic
poet Apollonius of Rhodes among the various other writers to which he alludes. Although the Aeneid casts itself firmly
into the epic mode, it often seeks to expand the genre by including elements of other genres such as tragedy and
aetiological poetry. Ancient commentators noted that Virgil seems to divide the Aeneid into two sections based on
the poetry of Homer; the first six books were viewed as employing the Odyssey as a model while the last six were
connected to the Iliad. Book 1 (at the head of the Odyssean section) opens with a storm which Juno, Aeneas' enemy
throughout the poem, stirs up against the fleet. The storm drives the hero to the coast of Carthage, which historically
was Rome's deadliest foe. The queen, Dido, welcomes the ancestor of the Romans, and under the influence of the gods
falls deeply in love with him. At a banquet in Book 2, Aeneas tells the story of the sack of Troy, the death of his
wife, and his escape, to the enthralled Carthaginians, while in Book 3 he recounts to them his wanderings over the
Mediterranean in search of a suitable new home. Jupiter in Book 4 recalls the lingering Aeneas to his duty to found
a new city, and he slips away from Carthage, leaving Dido to commit suicide, cursing Aeneas and calling down revenge
in a symbolic anticipation of the fierce wars between Carthage and Rome. In Book 5, Aeneas' father Anchises dies
and funeral games are celebrated for him. On reaching Cumae, in Italy in Book 6, Aeneas consults the Cumaean Sibyl,
who conducts him through the Underworld where Aeneas meets the dead Anchises who reveals Rome's destiny to his son.
Book 7 (beginning the Iliadic half) opens with an address to the muse and recounts Aeneas' arrival in Italy and betrothal
to Lavinia, daughter of King Latinus. Lavinia had already been promised to Turnus, the king of the Rutulians, who
is roused to war by the Fury Allecto and by Amata, Lavinia's mother. In Book 8, Aeneas allies with King Evander, who
occupies the future site of Rome, and is given new armor and a shield depicting Roman history. Book 9 records an
assault by Nisus and Euryalus on the Rutulians, Book 10, the death of Evander's young son Pallas, and 11 the death
of the Volscian warrior princess Camilla and the decision to settle the war with a duel between Aeneas and Turnus.
The Aeneid ends in Book 12 with the taking of Latinus' city, the death of Amata, and Aeneas' defeat and killing of
Turnus, whose pleas for mercy are spurned. The final book ends with the image of Turnus' soul lamenting as it flees
to the underworld. Critics of the Aeneid focus on a variety of issues. The tone of the poem as a whole is a particular
matter of debate; some see the poem as ultimately pessimistic and politically subversive to the Augustan regime,
while others view it as a celebration of the new imperial dynasty. Virgil makes use of the symbolism of the Augustan
regime, and some scholars see strong associations between Augustus and Aeneas, the one as founder and the other as
re-founder of Rome. A strong teleology, or drive towards a climax, has been detected in the poem. The Aeneid is full
of prophecies about the future of Rome, the deeds of Augustus, his ancestors, and famous Romans, and the Carthaginian
Wars; the shield of Aeneas even depicts Augustus' victory at Actium against Mark Antony and Cleopatra VII in 31 BC.
A further focus of study is the character of Aeneas. As the protagonist of the poem, Aeneas seems to waver constantly
between his emotions and commitment to his prophetic duty to found Rome; critics note the breakdown of Aeneas' emotional
control in the last sections of the poem where the "pious" and "righteous" Aeneas mercilessly slaughters Turnus.
The Aeneid appears to have been a great success. Virgil is said to have recited Books 2, 4, and 6 to Augustus; and
Book 6 apparently caused Augustus' sister Octavia to faint. Although the truth of this claim is subject to scholarly
scepticism, it has served as a basis for later art, such as Jean-Baptiste Wicar's Virgil Reading the Aeneid. According
to the tradition, Virgil traveled to Greece in about 19 BC to revise the Aeneid. After meeting Augustus in Athens
and deciding to return home, Virgil caught a fever while visiting a town near Megara. After crossing to Italy by
ship, weakened with disease, Virgil died in Brundisium harbor on September 21, 19 BC. Augustus ordered Virgil's literary
executors, Lucius Varius Rufus and Plotius Tucca, to disregard Virgil's own wish that the poem be burned, instead
ordering it published with as few editorial changes as possible. As a result, the text of the Aeneid that exists
may contain faults which Virgil was planning to correct before publication. However, the only obvious imperfections
are a few lines of verse that are metrically unfinished (i.e. not a complete line of dactylic hexameter). Some scholars
have argued that Virgil deliberately left these metrically incomplete lines for dramatic effect. Other alleged imperfections
are subject to scholarly debate. The works of Virgil almost from the moment of their publication revolutionized Latin
poetry. The Eclogues, Georgics, and above all the Aeneid became standard texts in school curricula with which all
educated Romans were familiar. Poets following Virgil often refer intertextually to his works to generate meaning
in their own poetry. The Augustan poet Ovid parodies the opening lines of the Aeneid in Amores 1.1.1–2, and his summary
of the Aeneas story in Book 14 of the Metamorphoses, the so-called "mini-Aeneid", has been viewed as a particularly
important example of post-Virgilian response to the epic genre. Lucan's epic, the Bellum Civile has been considered
an anti-Virgilian epic, disposing with the divine mechanism, treating historical events, and diverging drastically
from Virgilian epic practice. The Flavian poet Statius in his 12-book epic Thebaid engages closely with the poetry
of Virgil; in his epilogue he advises his poem not to "rival the divine Aeneid, but follow afar and ever venerate
its footsteps." In Silius Italicus, Virgil finds one of his most ardent admirers. With almost every line of his epic
Punica Silius references Virgil. Indeed, Silius is known to have bought Virgil's tomb and worshipped the poet. Partially
as a result of his so-called "Messianic" Fourth Eclogue—widely interpreted later to have predicted the birth of Jesus
Christ—Virgil was in later antiquity imputed to have the magical abilities of a seer; the Sortes Vergilianae, the
process of using Virgil's poetry as a tool of divination, is found in the time of Hadrian, and continued into the
Middle Ages. In a similar vein Macrobius in the Saturnalia credits the work of Virgil as the embodiment of human
knowledge and experience, mirroring the Greek conception of Homer. Virgil also found commentators in antiquity. Servius,
a commentator of the 4th century AD, based his work on the commentary of Donatus. Servius' commentary provides us
with a great deal of information about Virgil's life, sources, and references; however, many modern scholars find
the variable quality of his work and the often simplistic interpretations frustrating. Even as the Western Roman
empire collapsed, literate men acknowledged that Virgil was a master poet. Gregory of Tours read Virgil, whom he
quotes in several places, along with some other Latin poets, though he cautions that "we ought not to relate their
lying fables, lest we fall under sentence of eternal death." Dante made Virgil his guide in Hell and the greater
part of Purgatory in The Divine Comedy. Dante also mentions Virgil in De vulgari eloquentia, along with Ovid, Lucan
and Statius, as one of the four regulati poetae (ii, vi, 7). In the Middle Ages, Virgil's reputation was such that
it inspired legends associating him with magic and prophecy. From at least the 3rd century, Christian thinkers interpreted
Eclogues 4, which describes the birth of a boy ushering in a golden age, as a prediction of Jesus' birth. As such,
Virgil came to be seen on a similar level as the Hebrew prophets of the Bible as one who had heralded Christianity.
Possibly as early as the second century AD, Virgil's works were seen as having magical properties and were used for
divination. In what became known as the Sortes Vergilianae (Virgilian Lots), passages would be selected at random
and interpreted to answer questions. In the 12th century, starting around Naples but eventually spreading widely
throughout Europe, a tradition developed in which Virgil was regarded as a great magician. Legends about Virgil and
his magical powers remained popular for over two hundred years, arguably becoming as prominent as his writings themselves.
Virgil's legacy in medieval Wales was such that the Welsh version of his name, Fferyllt or Pheryllt, became a generic
term for magic-worker, and survives in the modern Welsh word for pharmacist, fferyllydd. The legend of Virgil in
his Basket arose in the Middle Ages, and is often seen in art and mentioned in literature as part of the Power of
Women literary topos, demonstrating the disruptive force of female attractiveness on men. In this story Virgil became
enamoured of a beautiful woman, sometimes described as the emperor's daughter or mistress and called Lucretia. She
played him along and agreed to an assignation at her house, which he was to sneak into at night by climbing into
a large basket let down from a window. When he did so, he was hoisted only halfway up the wall and then left trapped
there into the next day, exposed to public ridicule. The story paralleled that of Phyllis riding Aristotle. Among
other artists depicting the scene, Lucas van Leyden made a woodcut and later an engraving. The structure known as
"Virgil's tomb" is found at the entrance of an ancient Roman tunnel (also known as "grotta vecchia") in Piedigrotta,
a district two miles from the centre of Naples, near the Mergellina harbor, on the road heading north along the coast
to Pozzuoli. While Virgil was already the object of literary admiration and veneration before his death, in the Middle
Ages his name became associated with miraculous powers, and for a couple of centuries his tomb was the destination
of pilgrimages and veneration. In the Late Empire and Middle Ages Vergilius was spelled Virgilius. Two explanations
are commonly given for this alteration. One deduces a false etymology associated with the word virgo ("maiden" in
Latin) due to Virgil's excessive, "maiden"-like modesty. Alternatively, some argue that Vergilius was altered to
Virgilius by analogy with the Latin virga ("wand") due to the magical or prophetic powers attributed to Virgil in
the Middle Ages (this explanation is found in only a handful of manuscripts, however, and was probably not widespread).
In Norman schools (following the French practice), the habit was to anglicize Latin names by dropping their Latin
endings, hence Virgil. In the 19th century, some German-trained classicists in the United States suggested modification
to Vergil, as it is closer to his original name, and is also the traditional German spelling.[citation needed] Modern
usage permits both, though the Oxford guide to style recommends Vergilius to avoid confusion with the 8th-century
grammarian Virgilius Maro Grammaticus. Some post-Renaissance writers liked to affect the sobriquet "The Swan of Mantua".[citation
needed]
The Alps (/ælps/; Italian: Alpi [ˈalpi]; French: Alpes [alp]; German: Alpen [ˈʔalpm̩]; Slovene: Alpe [ˈáːlpɛ]) are the highest
and most extensive mountain range system that lies entirely in Europe, stretching approximately 1,200 kilometres
(750 mi) across eight Alpine countries: Austria, France, Germany, Italy, Liechtenstein, Monaco, Slovenia, and Switzerland.
The Caucasus Mountains are higher, and the Urals longer, but both lie partly in Asia. The mountains were formed over
tens of millions of years as the African and Eurasian tectonic plates collided. Extreme shortening caused by the
event resulted in marine sedimentary rocks rising by thrusting and folding into high mountain peaks such as Mont
Blanc and the Matterhorn. Mont Blanc spans the French–Italian border, and at 4,810 m (15,781 ft) is the highest mountain
in the Alps. The Alpine region contains about a hundred peaks higher than 4,000 m (13,123 ft), known as the
"four-thousanders". The altitude and size of the range affect the climate in Europe; in the mountains precipitation
levels vary greatly and climatic conditions consist of distinct zones. Wildlife such as ibex live among the higher peaks
at elevations of up to 3,400 m (11,155 ft), and plants such as Edelweiss grow in rocky areas at lower as well as higher
elevations. Evidence of human habitation in the Alps goes back to the Paleolithic era. A mummified man,
determined to be 5,000 years old, was discovered on a glacier at the Austrian–Italian border in 1991. By the 6th
century BC, the Celtic La Tène culture was well established. Hannibal famously crossed the Alps with a herd of elephants,
and the Romans had settlements in the region. In 1800 Napoleon crossed one of the mountain passes with an army of
40,000. The 18th and 19th centuries saw an influx of naturalists, writers, and artists, in particular the Romantics,
followed by the golden age of alpinism as mountaineers began to ascend the peaks. In World War II, Adolf Hitler kept
a base of operations in the Bavarian Alps throughout the war. The Alpine region has a strong cultural identity. The
traditional culture of farming, cheesemaking, and woodworking still exists in Alpine villages, although the tourist
industry began to grow early in the 20th century and expanded greatly after World War II to become the dominant industry
by the end of the century. The Winter Olympic Games have been hosted in the Swiss, French, Italian, Austrian and
German Alps. At present the region is home to 14 million people and has 120 million annual visitors. The English
word Alps derives from the Latin Alpes (through French). Maurus Servius Honoratus, an ancient commentator of Virgil,
says in his commentary (A. X 13) that all high mountains are called Alpes by Celts. The term may be common to Italo-Celtic,
because the Celtic languages have terms for high mountains derived from alp. This may be consistent with the theory
that in Greek Alpes is a name of non-Indo-European origin (which is common for prominent mountains and mountain ranges
in the Mediterranean region). According to the Oxford English Dictionary, the Latin Alpes might possibly derive from
a pre-Indo-European word *alb "hill"; "Albania" is a related derivation. Albania, a name not native to the region
known as the country of Albania, has been used as a name for a number of mountainous areas across Europe. In Roman
times, "Albania" was a name for the eastern Caucasus, while in the English language "Albania" (or "Albany") was occasionally
used as a name for Scotland. It is likely[weasel words] that alb ("white") and albus have common origins deriving
from the association of the tops of tall mountains or steep hills with snow. In modern languages the term alp, alm,
albe or alpe refers to grazing pastures in the alpine regions below the glaciers, not the peaks. An alp refers
to a high mountain pasture where cows are taken to be grazed during the summer months and where hay barns can be
found, and the term "the Alps", referring to the mountains, is a misnomer. The term for the mountain peaks varies
by nation and language: words such as horn, kogel, gipfel, spitz, and berg are used in German-speaking regions; mont,
pic, dent and aiguille in French-speaking regions; and monte, picco or cima in Italian-speaking regions. The Alps
are a crescent-shaped geographic feature of central Europe that ranges in an 800 km (500 mi) arc from east to west
and is 200 km (120 mi) in width. The mean height of the mountain peaks is 2.5 km (1.6 mi). The range stretches from
the Mediterranean Sea north above the Po basin, extending through France from Grenoble, eastward through mid and
southern Switzerland. The range continues toward Vienna in Austria, and east to the Adriatic Sea and into Slovenia.
To the south it dips into northern Italy and to the north extends to the south border of Bavaria in Germany. In areas
like Chiasso, Switzerland, and Neuschwanstein, Bavaria, the demarcation between the mountain range and the flatlands
is clear; in other places such as Geneva, the demarcation is less clear. The countries with the greatest alpine
territory are Switzerland, France, Austria and Italy. The highest portion of the range is divided by the glacial
trough of the Rhone valley, with the Pennine Alps from Mont Blanc to the Matterhorn and Monte Rosa on the southern
side, and the Bernese Alps on the northern. The peaks in the easterly portion of the range, in Austria and Slovenia,
are smaller than those in the central and western portions. The variances in nomenclature in the region spanned by
the Alps make classification of the mountains and subregions difficult, but a general classification is that of
the Eastern Alps and Western Alps, with the divide between the two occurring in eastern Switzerland, near the Splügen
Pass, according to geologist Stefan Schmid. The highest peaks of the Western Alps and Eastern Alps, respectively, are Mont
Blanc, at 4,810 m (15,781 ft), and Piz Bernina, at 4,049 m (13,284 ft). The second-highest major peaks are Monte
Rosa, at 4,634 m (15,200 ft), and Ortler, at 3,905 m (12,810 ft), respectively. A series of lower mountain ranges runs parallel
to the main chain of the Alps, including the French Prealps in France and the Jura Mountains in Switzerland and France.
The secondary chain of the Alps follows the watershed from the Mediterranean Sea to the Wienerwald, passing over
many of the highest and most well-known peaks in the Alps. From the Colle di Cadibona to Col de Tende it runs westwards,
before turning to the northwest and then, near the Colle della Maddalena, to the north. Upon reaching the Swiss border,
the line of the main chain heads approximately east-northeast, a heading it follows until its end near Vienna. The
Alps have been crossed for war and commerce, and by pilgrims, students and tourists. Crossing routes by road, train
or foot are known as passes, and usually consist of depressions in the mountains in which a valley leads from the
plains and hilly pre-mountainous zones. In the medieval period hospices were established by religious orders at the
summits of many of the main passes. The most important passes are the Col de l'Iseran (the highest), the Brenner
Pass, the Mont-Cenis, the Great St. Bernard Pass, the Col de Tende, the Gotthard Pass, the Semmering Pass, and the
Stelvio Pass. Crossing the Italian-Austrian border, the Brenner Pass separates the Ötztal Alps and Zillertal Alps
and has been in use as a trading route since the 14th century. The lowest of the Alpine passes at 985 m (3,232 ft),
the Semmering crosses from Lower Austria to Styria; since the 12th century when a hospice was built there it has
seen continuous use. A railroad with a tunnel 1 mile (1.6 km) long was built along the route of the pass in the mid-19th
century. With a summit of 2,469 m (8,100 ft), the Great St. Bernard Pass is one of the highest in the Alps, crossing
the Italian-Swiss border east of the Pennine Alps along the flanks of Mont Blanc. The pass was used by Napoleon Bonaparte
to cross 40,000 troops in 1800. The Saint Gotthard Pass crosses from Central Switzerland to Ticino; in the late 19th
century the 14 km (9 mi) long Saint Gotthard Tunnel was built connecting Lucerne in Switzerland, with Milan in Italy.
The Mont Cenis pass has been a major commercial road between Western Europe and Italy. Now the pass has been supplanted
by the Fréjus Road and Rail tunnel. At 2,756 m (9,042 ft), the Stelvio Pass in northern Italy is one of the highest
of the Alpine passes; the road was built in the 1820s. The highest pass in the Alps is the Col de l'Iseran in Savoy
(France) at 2,770 m (9,088 ft). Important geological concepts were established as naturalists began studying the
rock formations of the Alps in the 18th century. In the mid-19th century the now defunct theory of geosynclines was
used to explain the presence of "folded" mountain chains but by the mid-20th century the theory of plate tectonics
became widely accepted. The formation of the Alps (the Alpine orogeny) was an episodic process that began about 300
million years ago. In the Paleozoic Era the Pangaean supercontinent consisted of a single tectonic plate; it broke
into separate plates during the Mesozoic Era and the Tethys sea developed between Laurasia and Gondwana during the
Jurassic Period. The Tethys was later squeezed between colliding plates causing the formation of mountain ranges
called the Alpide belt, from Gibraltar through the Himalayas to Indonesia—a process that began at the end of the
Mesozoic and continues into the present. The formation of the Alps was a segment of this orogenic process, caused
by the collision between the African and the Eurasian plates that began in the late Cretaceous Period. Under extreme
compressive stresses and pressure, marine sedimentary rocks were uplifted, creating characteristic recumbent folds,
or nappes, and thrust faults. As the rising peaks underwent erosion, a layer of marine flysch sediments was deposited
in the foreland basin, and the sediments became involved in younger nappes (folds) as the orogeny progressed. Coarse
sediments from the continual uplift and erosion were later deposited in foreland areas as molasse. The molasse regions
in Switzerland and Bavaria were well-developed and saw further upthrusting of flysch. The Alpine orogeny occurred
in ongoing cycles through to the Paleogene causing differences in nappe structures, with a late-stage orogeny causing
the development of the Jura Mountains. A series of tectonic events in the Triassic, Jurassic and Cretaceous periods
gave rise to different paleogeographic regions. The Alps are subdivided according to lithology (rock composition) and nappe
structure according to the orogenic events that affected them. The geological subdivision differentiates the Western,
Eastern, and Southern Alps: the Helveticum in the north, the Penninicum and Austroalpine system in the center
and, south of the Periadriatic Seam, the Southern Alpine system. According to geologist Stefan Schmid, because the
Western Alps underwent a metamorphic event in the Cenozoic Era while the Austroalpine peaks underwent an event in
the Cretaceous Period, the two areas show distinct differences in nappe formations. Flysch deposits in the Southern
Alps of Lombardy probably occurred in the Cretaceous or later. Peaks in France, Italy and Switzerland lie in the
"Houillière zone", which consists of basement with sediments from the Mesozoic Era. High "massifs" with external
sedimentary cover are more common in the Western Alps and were affected by Neogene Period thin-skinned thrusting
whereas the Eastern Alps have comparatively few high peaked massifs. Similarly the peaks in Switzerland extending
to western Austria (Helvetic nappes) consist of thin-skinned sedimentary folding that detached from former basement
rock. In simple terms the structure of the Alps consists of layers of rock of European, African and oceanic (Tethyan)
origin. The bottom nappe structure is of continental European origin, above which are stacked marine sediment nappes,
topped off by nappes derived from the African plate. The Matterhorn is an example of the ongoing orogeny and shows
evidence of great folding. The tip of the mountain consists of gneisses from the African plate; the base of the peak,
below the glaciated area, consists of European basement rock. The sequence of Tethyan marine sediments and their
oceanic basement is sandwiched between rock derived from the African and European plates. The core regions of the
Alpine orogenic belt have been folded and fractured in such a manner that erosion created the characteristic steep
vertical peaks of the Swiss Alps that rise seemingly straight out of the foreland areas. Peaks such as Mont Blanc,
the Matterhorn, and high peaks in the Pennine Alps, the Briançonnais, and Hohe Tauern consist of layers of rock from
the various orogenies including exposures of basement rock. The Union Internationale des Associations d'Alpinisme
(UIAA) has defined a list of 82 "official" Alpine summits that reach at least 4,000 m (13,123 ft). The list includes
not only mountains, but also subpeaks with little prominence that are considered important mountaineering objectives.
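The selection rule just described can be sketched as a simple two-criterion filter. In this minimal sketch, the elevation and prominence figures are approximate values supplied purely for illustration, not data taken from the UIAA list itself:

```python
# Sketch of the selection described above: keep summits of at least
# 4,000 m elevation, then keep only those with at least 500 m of
# topographic prominence.  The figures below are approximate and
# included for illustration only.

PEAKS = [
    # (name, elevation_m, prominence_m)
    ("Mont Blanc", 4810, 4697),
    ("Monte Rosa (Dufourspitze)", 4634, 2165),
    ("Matterhorn", 4478, 1042),
    ("Piz Bernina", 4049, 2234),
    ("Mont Blanc de Courmayeur", 4748, 18),  # high subpeak, little prominence
    ("Ortler", 3905, 1953),                  # below the 4,000 m cut-off
]

def four_thousanders(peaks, min_elevation=4000, min_prominence=500):
    """Return the names of peaks passing both the elevation and prominence cut-offs."""
    return [name for name, elev, prom in peaks
            if elev >= min_elevation and prom >= min_prominence]

print(four_thousanders(PEAKS))
# ['Mont Blanc', 'Monte Rosa (Dufourspitze)', 'Matterhorn', 'Piz Bernina']
```

The prominence cut-off is what separates independent mountains from subsidiary summits: a subpeak like Mont Blanc de Courmayeur clears 4,000 m easily but rises only a few metres above the ridge connecting it to its parent, so it is excluded.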
Below are listed the 22 "four-thousanders" with at least 500 m (1,640 ft) of prominence. While Mont Blanc was first
climbed in 1786, most of the Alpine four-thousanders were climbed during the first half of the 19th century; the
ascent of the Matterhorn in 1865 marked the end of the golden age of alpinism. Karl Blodig (1859–1956) was among
the first to successfully climb all the major 4,000 m peaks. He completed his series of ascents in 1911. The first
British Mont Blanc ascent was in 1788; the first female ascent in 1819. By the mid-1850s Swiss mountaineers had ascended
most of the peaks and were eagerly sought as mountain guides. Edward Whymper reached the top of the Matterhorn in
1865 (after seven attempts), and in 1938 the last of the six great north faces of the Alps was climbed with the first
ascent of the Eiger Nordwand (north face of the Eiger). The Alps are a source of minerals that have been mined for
thousands of years. In the 8th to 6th centuries BC during the Hallstatt culture, Celtic tribes mined copper; later
the Romans mined gold for coins in the Bad Gastein area. Erzberg in Styria furnishes high-quality iron ore for the
steel industry. Crystals are found throughout much of the Alpine region such as cinnabar, amethyst, and quartz. The
cinnabar deposits in Slovenia are a notable source of cinnabar pigments. Alpine crystals have been studied and collected
for hundreds of years, and began to be classified in the 18th century. Leonhard Euler studied the shapes of crystals,
and by the 19th century crystal hunting was common in Alpine regions. David Friedrich Wiser amassed a collection
of 8000 crystals that he studied and documented. In the 20th century Robert Parker wrote a well-known work about
the rock crystals of the Swiss Alps; at the same period a commission was established to control and standardize the
naming of Alpine minerals. In the Miocene Epoch the mountains underwent severe erosion because of glaciation, which
was noted in the mid-19th century by naturalist Louis Agassiz who presented a paper proclaiming the Alps were covered
in ice at various intervals—a theory he formed when studying rocks near his Neuchâtel home which he believed originated
to the west in the Bernese Oberland. Because of his work he came to be known as the "father of the ice-age concept"
although other naturalists before him put forth similar ideas. Agassiz studied glacier movement in the 1840s at the
Unteraar Glacier where he found the glacier moved 100 m (328 ft) per year, more rapidly in the middle than at the
edges. His work was continued by other scientists and now a permanent laboratory exists inside a glacier under the
Jungfraujoch, devoted exclusively to the study of Alpine glaciers. Glaciers pick up rocks and sediment
as they flow. This causes erosion and the formation of valleys over time. The Inn valley is an example of a valley
carved by glaciers during the ice ages with a typical terraced structure caused by erosion. Eroded rocks from the
most recent ice age lie at the bottom of the valley while the top of the valley consists of erosion from earlier
ice ages. Glacial valleys have characteristically steep walls (reliefs); valleys with lower reliefs and talus slopes
are remnants of glacial troughs or previously infilled valleys. Moraines, piles of rock picked up during the movement
of the glacier, accumulate at edges, center and the terminus of glaciers. Alpine glaciers can be straight rivers
of ice, long sweeping rivers, spread in a fan-like shape (Piedmont glaciers), and curtains of ice that hang from
vertical slopes of the mountain peaks. The stress of the movement causes the ice to break and crack loudly, perhaps
explaining why the mountains were believed to be home to dragons in the medieval period. The cracking creates unpredictable
and dangerous crevasses, often invisible under new snowfall, which cause the greatest danger to mountaineers. Glaciers
end in ice caves (the Rhone Glacier), by trailing into a lake or river, or by shedding snowmelt on a meadow. Sometimes
a piece of glacier will detach or break resulting in flooding, property damage and loss of life. In the 17th century
about 2500 people were killed by an avalanche in a village on the French-Italian border; in the 19th century 120
homes in a village near Zermatt were destroyed by an avalanche. High levels of precipitation cause the glaciers to
descend to permafrost levels in some areas whereas in other, more arid regions, glaciers remain above about the 3,500
m (11,483 ft) level. The 1,817 square kilometres (702 sq mi) of the Alps covered by glaciers in 1876 had shrunk to
1,342 km2 (518 sq mi) by 1973, resulting in decreased river run-off levels. Forty percent of the glaciation in Austria
has disappeared since 1850, and 30% of that in Switzerland. The Alps provide lowland Europe with drinking water,
irrigation, and hydroelectric power. Although the area is only about 11 percent of the surface area of Europe, the
Alps provide up to 90 percent of water to lowland Europe, particularly to arid areas and during the summer months.
Cities such as Milan depend on Alpine runoff for up to 80 percent of their water. Water from the rivers is used in over 500
hydroelectricity power plants, generating as much as 2900 kilowatts of electricity. Major European rivers flow from
Switzerland, such as the Rhine, the Rhone, the Inn, the Ticino and the Po, all of which have headwaters in the Alps
and flow into neighbouring countries, finally emptying into the North Sea, the Mediterranean Sea, the Adriatic Sea
and the Black Sea. Other rivers such as the Danube have major tributaries flowing into them that originate in the
Alps. The Rhone is second to the Nile as a freshwater source to the Mediterranean Sea; the river begins as glacial
meltwater, flows into Lake Geneva, and from there to France where one of its uses is to cool nuclear power plants.
The Rhine originates in a 30 square kilometre area in Switzerland and represents almost 60 percent of water exported
from the country. Tributary valleys, some of which are complicated, channel water to the main valleys which can experience
flooding during the snow melt season when rapid runoff causes debris torrents and swollen rivers. The rivers form
lakes, such as Lake Geneva, a crescent shaped lake crossing the Swiss border with Lausanne on the Swiss side and
the town of Evian-les-Bains on the French side. In Germany, the medieval St. Bartholomew's chapel was built on the
south side of the Königssee, accessible only by boat or by climbing over the abutting peaks. Scientists have been
studying the impact of climate change and water use. For example, each year more water is diverted from rivers for
snowmaking in the ski resorts, the effect of which is yet unknown. Furthermore, the decrease of glaciated areas combined
with a succession of winters with lower-than-expected precipitation may have a future impact on the rivers in the
Alps as well as an effect on the water availability to the lowlands. The Alps are a classic example of what happens
when a temperate area at lower altitude gives way to higher-elevation terrain. Elevations around the world that have
cold climates similar to those of the polar regions have been called Alpine. A rise from sea level into the upper
regions of the atmosphere causes the temperature to decrease (see adiabatic lapse rate). The effect of mountain chains
on prevailing winds is to carry warm air belonging to the lower region into an upper zone, where it expands in volume
at the cost of a proportionate loss of temperature, often accompanied by precipitation in the form of snow or rain.
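As a rough numeric illustration of the cooling described above, the following sketch applies the standard-atmosphere lapse rate of about 6.5 °C per kilometre; both that figure and the starting temperature are assumptions chosen for illustration, not values from the text:

```python
# Minimal sketch of how air cools as it is forced upward over a range,
# using the standard-atmosphere lapse rate of ~6.5 degC per km.  The
# lapse rate and sea-level temperature are illustrative assumptions.

LAPSE_RATE_C_PER_KM = 6.5

def temperature_at_altitude(sea_level_temp_c: float, altitude_m: float) -> float:
    """Approximate temperature of an air parcel lifted to altitude_m."""
    return sea_level_temp_c - LAPSE_RATE_C_PER_KM * (altitude_m / 1000.0)

# Air at 20 degC lifted over a 2,500 m ridge cools by about 16 degC:
print(temperature_at_altitude(20.0, 2500))  # 3.75
```

In practice the rate varies with moisture: once the rising air saturates, condensation releases latent heat and slows the cooling, and that condensation is the precipitation the text describes falling as snow or rain.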
The height of the Alps is sufficient to divide the weather patterns in Europe into a wet north and a dry south because
moisture is sucked from the air as it flows over the high peaks. The severe weather in the Alps has been studied
since the 18th century; particularly the weather patterns such as the seasonal foehn wind. Numerous weather stations
were placed in the mountains early in the 20th century, providing continuous data for climatologists. Some
of the valleys are quite arid such as the Aosta valley in Italy, the Maurienne in France, the Valais in Switzerland,
and northern Tyrol. The areas that are not arid and receive high precipitation experience periodic flooding from
rapid snowmelt and runoff. The mean precipitation in the Alps ranges from a low of 2,600 mm (100 in) per year to
3,600 mm (140 in) per year, with the higher levels occurring at high altitudes. At altitudes between 1,000 and 3,000
m (3,281 and 9,843 ft), snowfall begins in November and accumulates through to April or May when the melt begins.
Snow lines vary from 2,400 to 3,000 m (7,874 to 9,843 ft), above which the snow is permanent and the temperatures
hover around the freezing point even in July and August. High-water levels in streams and rivers peak in June and July
when the snow is still melting at the higher altitudes. The Alps are split into five climatic zones, each with different
vegetation. The climate, plant life and animal life vary among the different sections or zones of the mountains.
The lowest zone is the colline zone, which exists between 500 and 1,000 m (1,640 and 3,281 ft), depending on the
location. The montane zone extends from 800 to 1,700 m (2,625 to 5,577 ft), followed by the sub-Alpine zone from
1,600 to 2,400 m (5,249 to 7,874 ft). The Alpine zone, extending from tree line to snow line, is followed by the
glacial zone, which covers the glaciated areas of the mountain. Climatic conditions show variances within the same
zones; for example, weather conditions at the head of a mountain valley, extending directly from the peaks, are colder
and more severe than those at the mouth of a valley, which tend to be milder and receive less snowfall. Various
models of climate change have been projected into the 22nd century for the Alps, with an expectation that a trend
toward increased temperatures will have an effect on snowfall, snowpack, glaciation, and river runoff. Thirteen thousand
species of plants have been identified in the Alpine regions. Alpine plants are grouped by habitat and soil type
which can be limestone or non-calcareous. The habitats range from meadows, bogs, woodland (deciduous and coniferous)
areas to soilless scree and moraines, and rock faces and ridges. A natural vegetation limit with altitude is given
by the presence of the chief deciduous trees—oak, beech, ash and sycamore maple. These do not reach exactly to the
same elevation, nor are they often found growing together; but their upper limit corresponds accurately enough to
the change from a temperate to a colder climate, which is further evidenced by a change in the wild herbaceous
vegetation. This limit usually lies about 1,200 m (3,940 ft) above the sea on the north side of the Alps, but on
the southern slopes it often rises to 1,500 m (4,920 ft), sometimes even to 1,700 m (5,580 ft). Above the forest,
there is often a band of short pine trees (Pinus mugo), which is in turn superseded by Alpenrosen, dwarf shrubs,
typically Rhododendron ferrugineum (on acid soils) or Rhododendron hirsutum (on alkaline soils). Although the Alpenrose
prefers acidic soil, the plants are found throughout the region. Above the tree line is the area defined as "alpine"
where alpine meadow plants are found that have adapted well to the harsh conditions of cold temperatures, aridity,
and high altitudes. The alpine area fluctuates greatly because of regional fluctuations in tree lines. Alpine plants
such as the Alpine gentian grow in abundance in areas such as the meadows above the Lauterbrunnental. Gentians are named
after the Illyrian king Gentius, and 40 species of the early-spring blooming flower grow in the Alps, in a range
of 1,500 to 2,400 m (4,921 to 7,874 ft). Writing about the gentians in Switzerland D. H. Lawrence described them
as "darkening the day-time, torch-like with the smoking blueness of Pluto's gloom." Gentians tend to "appear" repeatedly
as the spring blooming takes place at progressively later dates, moving from the lower altitude to the higher altitude
meadows where the snow melts much later than in the valleys. On the highest rocky ledges the spring flowers bloom
in the summer. At these higher altitudes, the plants tend to form isolated cushions. In the Alps, several species
of flowering plants have been recorded above 4,000 m (13,120 ft), including Ranunculus glacialis, Androsace alpina
and Saxifraga biflora. The Eritrichium nanum, commonly known as the King of the Alps, is the most elusive of the
alpine flowers, growing on rocky ridges at 2,600 to 3,750 m (8,530 to 12,303 ft). Perhaps the best known of the alpine
plants is the Edelweiss which grows in rocky areas and can be found at altitudes as low as 1,400 m (4,593 ft) and
as high as 3,400 m (11,155 ft). The plants that grow at the highest altitudes have adapted to conditions by specialization
such as growing in rock screes that give protection from winds. The extreme and stressful climatic conditions give
way to the growth of plant species with secondary metabolites important for medicinal purposes. Origanum vulgare,
Prunella vulgaris, Solanum nigrum and Urtica dioica are some of the more useful medicinal species found in the Alps.
Human interference has nearly exterminated the trees in many areas, and, except for the beech forests of the Austrian
Alps, forests of deciduous trees are rarely found after the extreme deforestation between the 17th and 19th centuries.
The vegetation has changed since the second half of the 20th century, as the high alpine meadows cease to be harvested
for hay or used for grazing which eventually might result in a regrowth of forest. In some areas the modern practice
of building ski runs by mechanical means has destroyed the underlying tundra from which the plant life cannot recover
during the non-skiing months, whereas areas that still practice a natural piste type of ski slope building preserve
the fragile underlayers. The Alps are a habitat for 30,000 species of wildlife, ranging from the tiniest snow fleas
to brown bears, many of which have made adaptations to the harsh cold conditions and high altitudes to the point
that some only survive in specific micro-climates either directly above or below the snow line. The largest mammals
to live at the highest altitudes are the alpine ibex, which have been sighted as high as 3,000 m (9,843 ft). The
ibex live in caves and descend to eat the succulent alpine grasses. Classified as antelopes, chamois are smaller
than ibex; they live above the tree line and are common throughout the entire alpine range. Areas
of the eastern Alps are still home to brown bears. In Switzerland the canton of Bern was named for the bears but
the last bear is recorded as having been killed in 1792 above Kleine Scheidegg by three hunters from Grindelwald.
Many rodents such as voles live underground. Marmots live almost exclusively above the tree line as high as 2,700
m (8,858 ft). They hibernate in large groups to provide warmth, and can be found in all areas of the Alps, in large
colonies that they build beneath the alpine pastures. Golden eagles and bearded vultures are the largest birds to be found
in the Alps; they nest high on rocky ledges and can be found at altitudes of 2,400 m (7,874 ft). The most common
bird is the alpine chough, which can be found scavenging at climbers' huts or at the Jungfraujoch, a high-altitude
tourist destination. Reptiles such as adders and vipers live up to the snow line; because they cannot bear the cold
temperatures they hibernate underground and soak up the warmth on rocky ledges. The high-altitude Alpine salamanders
have adapted to living above the snow line by giving birth to fully developed young rather than laying eggs. Brown
trout can be found in the streams up to the snow line. Molluscs such as the wood snail live up to the snow line. Popularly
gathered as food, the snails are now protected. A number of species of moths live in the Alps, some of which are
believed to have evolved in the same habitat up to 120 million years ago, long before the Alps were created. Blue
moths can commonly be seen drinking from the snow melt; some species of blue moths fly as high as 1,800 m (5,906
ft). The butterflies tend to be large, such as those from the swallowtail Parnassius family, with a habitat that
ranges to 1,800 m (5,906 ft). Twelve species of beetles have habitats up to the snow line; the most beautiful of them, Rosalia alpina, was formerly collected for its colours but is now protected. Spiders, such as the large wolf spider,
live above the snow line and can be seen as high as 400 m (1,312 ft). Scorpions can be found in the Italian Alps.
Some of the species of moths and insects show evidence of having been indigenous to the area from as long ago as
the Alpine orogeny. In Emosson in Valais, Switzerland, dinosaur tracks were found in the 1970s, dating probably from
the Triassic Period. About 10,000 years ago, when the ice melted after the last glacial period, late Paleolithic
communities were established along the lake shores and in cave systems. Evidence of human habitation has been found
in caves near Vercors, close to Grenoble; in Austria the Mondsee culture shows evidence of houses built on piles
to keep them dry. Standing stones have been found in Alpine areas of France and Italy. The rock drawings in Valcamonica
are more than 5000 years old; more than 200,000 drawings and etchings have been identified at the site. In 1991 a
mummy of a neolithic body, known as Ötzi the Iceman, was discovered by hikers on the Similaun glacier. His clothing
and gear indicate that he lived in an alpine farming community, while the location and manner of his death (an arrowhead was discovered in his shoulder) suggest he was travelling from one place to another. Analysis of the mitochondrial DNA of Ötzi has shown that he belongs to the K1 subclade, which cannot be categorized into any of the three modern
branches of that subclade. The new subclade has provisionally been named K1ö for Ötzi. Celtic tribes settled in Switzerland
between 1000 and 1500 BC. The Raetians lived in the eastern regions, while the west was occupied by the Helvetii and
the Allobrogi settled in the Rhone valley and in Savoy. Among the many substances Celtic tribes mined was salt in
areas such as Salzburg in Austria where evidence of the Hallstatt culture was found by a mine manager in the 19th
century. By the 6th century BC the La Tène culture was well established in the region, and became known for high
quality decorated weapons and jewelry. The Celts were the most widespread of the mountain tribes; their warriors were strong, tall and fair-skinned, and skilled with iron weapons, which gave them an advantage in warfare. During
the Second Punic War in 218 BC, the Carthaginian general Hannibal probably crossed the Alps with an army numbering
38,000 infantry, 8,000 cavalry, and 37 war elephants. This was one of the most celebrated achievements of any military
force in ancient warfare, although no evidence exists of the actual crossing or the place of crossing. The Romans,
however, had built roads along the mountain passes, which continued to be used through the medieval period to cross
the mountains and Roman road markers can still be found on the mountain passes. The Roman expansion brought the defeat
of the Allobrogi in 121 BC and during the Gallic Wars in 58 BC Julius Caesar overcame the Helvetii. The Rhaetians
continued to resist but were eventually conquered when the Romans turned northward to the Danube valley in Austria
and defeated the Brigantes. The Romans built settlements in the Alps; towns such as Aosta (named for Augustus) in
Italy, Martigny and Lausanne in Switzerland, and Partenkirchen in Bavaria show remains of Roman baths, villas, arenas
and temples. Much of the Alpine region was gradually settled by Germanic tribes (Lombards, Alemanni, Bavarii, and Franks) from the 6th to the 13th centuries, mixing with the local Celtic tribes. Christianity was established in the
region by the Romans, and saw the establishment of monasteries and churches in the high regions. The Frankish expansion
of the Carolingian Empire and the Bavarian expansion in the eastern Alps introduced feudalism and the building of
castles to support the growing number of dukedoms and kingdoms. Castello del Buonconsiglio in Trento, Italy, still
has intricate frescoes, excellent examples of Gothic art, in a tower room. In Switzerland, Château de Chillon is
preserved as an example of medieval architecture. Much of the medieval period was a time of power struggles between
competing dynasties such as the House of Savoy, the Visconti in northern Italy and the House of Habsburg in Austria
and Slovenia. In 1291, to protect themselves from incursions by the Habsburgs, four cantons in the middle of Switzerland
drew up a charter that is considered to be a declaration of independence from neighboring kingdoms. After a series
of battles fought in the 13th, 14th and 15th centuries, more cantons joined the confederacy and by the 16th century
Switzerland was well-established as a separate state. During the Napoleonic Wars in the late 18th century and early
19th century, Napoleon annexed territory formerly controlled by the Habsburgs and Savoys. In 1798 he established
the Helvetic Republic in Switzerland; two years later he led an army across the St. Bernard pass and conquered almost
all of the Alpine regions. After the fall of Napoléon, many alpine countries developed heavy defences to prevent any new invasion. Thus, Savoy built a series of fortifications in the Maurienne valley to protect the major alpine passes, such as the col du Mont-Cenis, which had been crossed by Charlemagne and his father to defeat the Lombards. The pass became very popular after the construction of a paved road ordered by Napoléon Bonaparte. The Barrière de l'Esseillon is a series of forts with heavy batteries, built on a cliff with a commanding view of the valley, with a gorge on one side and steep mountains on the other. In the 19th century, the monasteries built in the high Alps during
the medieval period to shelter travelers and as places of pilgrimage, became tourist destinations. The Benedictines
had built monasteries in Lucerne, Switzerland, and Oberammergau; the Cistercians in the Tyrol and at Lake Constance;
and the Augustinians had abbeys in the Savoy and one in the center of Interlaken, Switzerland. The Great St Bernard
Hospice, built in the 9th or 10th century at the summit of the Great Saint Bernard Pass, has been a shelter for travelers and a place for pilgrims since its inception; by the 19th century it had become a tourist attraction, with notable visitors
such as author Charles Dickens and mountaineer Edward Whymper. Radiocarbon-dated charcoal from around 50,000 years ago was found in the Drachenloch (Dragon's Hole) cave above the village of Vattis in the canton of St. Gallen, proving
that the high peaks were visited by prehistoric people. Seven bear skulls from the cave may have been buried by the
same prehistoric people. The peaks, however, were mostly ignored except for a few notable examples, and long left
to the exclusive attention of the people of the adjoining valleys. The mountain peaks were seen as terrifying, the
abode of dragons and demons, to the point that people blindfolded themselves to cross the Alpine passes. The glaciers
remained a mystery and many still believed the highest areas to be inhabited by dragons. Charles VII of France ordered
his chamberlain to climb Mont Aiguille in 1356. The knight reached the summit of Rocciamelone where he left a bronze
triptych of three crosses, a feat which he conducted with the use of ladders to traverse the ice. In 1492 Antoine
de Ville climbed Mont Aiguille, without reaching the summit, an experience he described as "horrifying and terrifying."
Leonardo da Vinci was fascinated by variations of light in the higher altitudes, and climbed a mountain—scholars
are uncertain which one; some believe it may have been Monte Rosa. From his description of a "blue like that of a
gentian" sky it is thought that he reached a significantly high altitude. In the 18th century four Chamonix men almost made the summit of Mont Blanc but were overcome by altitude sickness and snow blindness. Conrad Gessner was the first
naturalist to ascend the mountains in the 16th century, to study them, writing that in the mountains he found the
"theatre of the Lord". By the 19th century more naturalists began to arrive to explore, study and conquer the high
peaks; they were followed by artists, writers and painters. Two men who first explored the regions of ice and snow
were Horace-Bénédict de Saussure (1740–1799) in the Pennine Alps, and the Benedictine monk of Disentis Placidus a
Spescha (1752–1833). Born in Geneva, Saussure was enamored with the mountains from an early age; he left a law career
to become a naturalist and spent many years trekking through the Bernese Oberland, the Savoy, the Piedmont and Valais,
studying the glaciers and the geology, as he became an early proponent of the theory of rock upheaval. In 1787 Saussure was a member of the third ascent of Mont Blanc; today the summits of all the peaks have been climbed. Jean-Jacques
Rousseau was the first of many to present the Alps as a place of allure and beauty, banishing the prevalent conception
of the mountains as a hellish wasteland inhabited by demons. Rousseau's conception of alpine purity was later emphasized
with the publication of Albrecht von Haller's poem Die Alpen that described the mountains as an area of mythical
purity. Late in the 18th century the first wave of Romantics such as Goethe and Turner came to admire the scenery;
Wordsworth visited the area in 1790, writing of his experiences in The Prelude. Schiller later wrote the play William
Tell romanticising Swiss independence. After the end of the Napoleonic Wars, the Alpine countries began to see an
influx of poets, artists, and musicians, as visitors came to experience the sublime effects of monumental nature.
In 1816 Byron, Percy Bysshe Shelley and his wife Mary Shelley visited Geneva and all three were inspired by the scenery
in their writings. During these visits Shelley wrote the poem "Mont Blanc", Byron wrote "The Prisoner of Chillon"
and the dramatic poem Manfred, and Mary Shelley, who found the scenery overwhelming, conceived the idea for the novel
Frankenstein in her villa on the shores of Lake Geneva in the midst of a thunderstorm. When Coleridge travelled to
Chamonix, he declaimed, in defiance of Shelley, who had signed himself "Atheos" in the guestbook of the Hotel de
Londres near Montenvers, "Who would be, who could be an atheist in this valley of wonders". By the mid-19th century
scientists began to arrive en masse to study the geology and ecology of the region. Austrian-born Adolf Hitler had
a lifelong romantic fascination with the Alps and by the 1930s established a home in the Obersalzberg region outside
of Berchtesgaden. His first visit to the area was in 1923 and he maintained a strong tie there until the end of his
life. At the end of World War II the US Army occupied Obersalzberg, to prevent Hitler from retreating with the Wehrmacht
into the mountains. By 1940 the Third Reich had occupied many of the Alpine countries. Austria underwent a political
coup that made it part of the Third Reich; France had been invaded and Italy was a fascist regime. Switzerland was the only country in the region to avoid invasion. The Swiss Confederation mobilized its troops; following its doctrine of "armed neutrality", all males were required to have military training, a force General Eisenhower estimated at about 850,000. The Swiss commanders wired the infrastructure leading into the country, threatening to destroy bridges, railway tunnels and passes in the event of a Nazi invasion, and then retreated to the heart of the mountain peaks, where conditions were harsher and a military invasion would involve difficult and protracted battles.
Ski troops were trained for the war, and battles were waged in mountainous areas such as the battle at Riva Ridge
in Italy, where the American 10th Mountain Division encountered heavy resistance in February 1945. At the end of
the war, a substantial amount of Nazi plunder was found stored in Austria, where Hitler had hoped to retreat as the
war drew to a close. The salt mines surrounding the Altaussee area, where American troops found 75 kilos of gold
coins stored in a single mine, were used to store looted art, jewels, and currency; vast quantities of looted art
were found and returned to the owners. The population of the region is 14 million spread across eight countries.
On the rim of the mountains, on the plateaus and the plains the economy consists of manufacturing and service jobs
whereas in the higher altitudes and in the mountains farming is still essential to the economy. Farming and forestry
continue to be mainstays of Alpine culture, industries that provide for export to the cities and maintain the mountain
ecology. Much of the Alpine culture is unchanged since the medieval period when skills that guaranteed survival in
the mountain valleys and in the highest villages became mainstays, leading to strong traditions of carpentry, woodcarving,
baking and pastry-making, and cheesemaking. Farming had been a traditional occupation for centuries, although it
became less dominant in the 20th century with the advent of tourism. Grazing and pasture land are limited because
of the steep and rocky topography of the Alps. In mid-June cows are moved to the highest pastures close to the snowline,
where they are watched by herdsmen who stay in the high altitudes often living in stone huts or wooden barns during
the summers. Villagers celebrate the day the cows are herded up to the pastures and again when they return in mid-September.
The Alpanschluss or Désalpes ("coming down from the alps") is celebrated by decorating the cows with garlands and
enormous cowbells while the farmers dress in traditional costumes. Cheesemaking is an ancient tradition in most Alpine
countries. A wheel of cheese from the Emmental in Switzerland can weigh up to 45 kg (100 lb), and the Beaufort in Savoy can weigh up to 70 kg (150 lb). The owners of the cows traditionally receive from the cheesemakers a share of the cheese proportional to the milk their cows gave during the summer months in the high alps. Haymaking is an important
farming activity in mountain villages which has become somewhat mechanized in recent years, although the slopes are
so steep that usually scythes are necessary to cut the grass. Hay is normally brought in twice a year, often also
on festival days. Alpine festivals vary from country to country and often include the display of local costumes such
as dirndl and trachten, the playing of Alpenhorns, wrestling matches, some pagan traditions such as Walpurgis Night
and, in many areas, Carnival is celebrated before Lent. In the high villages people live in homes built according
to medieval designs that withstand cold winters. The kitchen is separated from the living area (called the stube,
the area of the home heated by a stove), and second-floor bedrooms benefit from rising heat. The typical Swiss chalet
originated in the Bernese Oberland. Chalets often face south or downhill, and are built of solid wood, with a steeply
gabled roof to allow accumulated snow to slide off easily. Stairs leading to upper levels are sometimes built on
the outside, and balconies are sometimes enclosed. Food is passed from the kitchen to the stube, where the dining
room table is placed. Some meals are communal, such as fondue, where a pot is set in the middle of the table for
each person to dip into. Other meals are still served in a traditional manner on carved wooden plates. Furniture
has been traditionally elaborately carved and in many Alpine countries carpentry skills are passed from generation
to generation. Roofs are traditionally constructed from Alpine rocks such as pieces of schist, gneiss or slate. Such
chalets are typically found in the higher parts of the valleys, as in the Maurienne valley in Savoy, where the amount of snow during the cold months is significant. The inclination of the roof cannot exceed 40%, allowing the snow to
stay on top, thereby functioning as insulation from the cold. In the lower areas where the forests are widespread,
wooden tiles are traditionally used. Commonly made of Norway spruce, they are called "tavaillon". The Alpine regions
are multicultural and linguistically diverse. Dialects are common, and vary from valley to valley and region to region.
In the Slavic Alps alone 19 dialects have been identified. Some of the French dialects spoken in the French, Swiss
and Italian alps of Aosta Valley derive from Arpitan, while the southern part of the western range is related to
Old Provençal; the German dialects derive from Germanic tribal languages. Romansh, spoken by two percent of the population
in southeast Switzerland, is an ancient Rhaeto-Romanic language derived from Latin, remnants of ancient Celtic languages
and perhaps Etruscan. At present the Alps are one of the more popular tourist destinations in the world with many
resorts such as Oberstdorf in Bavaria, Saalbach in Austria, Davos in Switzerland, Chamonix in France, and Cortina d'Ampezzo
in Italy recording more than a million annual visitors. With over 120 million visitors a year, tourism is integral to the Alpine economy, with much of it coming from winter sports, although summer visitors are an important component
of the tourism industry. The tourism industry began in the early 19th century when foreigners visited the Alps, traveled
to the bases of the mountains to enjoy the scenery, and stayed at the spa-resorts. Large hotels were built during
the Belle Époque; cog-railways, built early in the 20th century, brought tourists to ever higher elevations, with
the Jungfraubahn terminating at the Jungfraujoch, well above the eternal snow-line, after going through a tunnel
in the Eiger. During this period winter sports were slowly introduced: in 1882 the first figure skating championship
was held in St. Moritz, and downhill skiing became a popular sport with English visitors early in the 20th century,
as the first ski-lift was installed in 1908 above Grindelwald. In the first half of the 20th century the Olympic
Winter Games were held three times in Alpine venues: the 1924 Winter Olympics in Chamonix, France; the 1928 Winter
Olympics in St. Moritz, Switzerland; and the 1936 Winter Olympics in Garmisch-Partenkirchen, Germany. During World
War II the winter games were canceled but after that time the Winter Games have been held in St. Moritz (1948), Cortina
d'Ampezzo (1956), Innsbruck, Austria (1964 and 1976), Grenoble, France, (1968), Albertville, France, (1992), and
Torino (2006). In 1930 the Lauberhorn Rennen (Lauberhorn Race), was run for the first time on the Lauberhorn above
Wengen; the equally demanding Hahnenkamm was first run in the same year in Kitzbühel, Austria. Both races continue
to be held each January on successive weekends. The Lauberhorn is the more strenuous downhill race at 4.5 km (2.8
mi) and poses danger to racers who reach 130 km/h (81 mph) within seconds of leaving the start gate. During the post-World
War I period ski-lifts were built in Swiss and Austrian towns to accommodate winter visitors, but summer tourism
continued to be important; by the mid-20th century the popularity of downhill skiing increased greatly as it became
more accessible and in the 1970s several new villages were built in France devoted almost exclusively to skiing,
such as Les Menuires. Until this point Austria and Switzerland had been the traditional and more popular destinations
for winter sports, but by the end of the 20th century and into the early 21st century, France, Italy and the Tyrol
began to see increases in winter visitors. From 1980 to the present, ski-lifts have been modernized and snow-making
machines installed at many resorts, leading to concerns regarding the loss of traditional Alpine culture and questions
regarding sustainable development as the winter ski industry continues to develop quickly and the number of summer
tourists declines. The region is serviced by 4,200 km (2,600 mi) of roads used by 6 million vehicles. Train travel
is well established in the Alps, with, for instance, 120 km (75 mi) of track for every 1,000 km2 (390 sq mi) in a
country such as Switzerland. Most of Europe's highest railways are located there. Moreover, plans are underway to
build a 57 km (35 mi)-long sub-alpine tunnel connecting the older Lötschberg and Gotthard tunnels built in the 19th
century. Some high mountain villages, such as Avoriaz (in France), Wengen, and Zermatt (in Switzerland) are accessible
only by cable car or cog-rail trains, and are car free. Other villages in the Alps are considering becoming car free
zones or limiting the number of cars for reasons of sustainability of the fragile Alpine terrain. The lower regions
and larger towns of the Alps are well-served by motorways and main roads, but higher mountain passes and byroads,
which are amongst the highest in Europe, can be treacherous even in summer due to steep slopes. Many passes are closed
in winter. A multitude of airports around the Alps (and some within), as well as long-distance rail links from all
neighbouring countries, afford large numbers of travellers easy access from abroad.
A gene is a locus (or region) of DNA that encodes a functional RNA or protein product, and is the molecular unit of heredity.:Glossary
The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. Most biological
traits are under the influence of polygenes (many different genes) as well as the gene–environment interactions.
Some genetic traits are instantly visible, such as eye colour or number of limbs, and some are not, such as blood
type, risk for specific diseases, or the thousands of basic biochemical processes that comprise life. Genes can acquire
mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode
slightly different versions of a protein, which cause different phenotypic traits. Colloquial usage of the term "having
a gene" (e.g., "good genes," "hair colour gene") typically refers to having a different allele of the gene. Genes
evolve due to natural selection, or survival of the fittest, of the alleles. The concept of a gene continues to be
refined as new phenomena are discovered. For example, regulatory regions of a gene can be far removed from its coding
regions, and coding regions can be split into several exons. Some viruses store their genome in RNA instead of DNA
and some gene products are functional non-coding RNAs. Therefore, a broad, modern working definition of a gene is
any discrete locus of heritable, genomic sequence which affects an organism's traits by being expressed as a functional
product or by regulation of gene expression. The existence of discrete inheritable units was first suggested by Gregor
Mendel (1822–1884). From 1857 to 1864, he studied inheritance patterns in 8000 common edible pea plants, tracking
distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number
of differing characteristics in the original peas. Although he did not use the term gene, he explained his results
in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured
the distinction between genotype (the genetic material of an organism) and phenotype (the visible traits of that
organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and
recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.
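Mendel's 2^n rule can be sketched in a few lines of Python: with n independently assorting characteristics, each taking one of two forms, enumerating every combination yields exactly 2^n distinct trait combinations. This is an illustrative sketch, not from the source; the trait names are hypothetical examples.

```python
from itertools import product

# Hypothetical pea traits, each with two forms (illustrative only).
traits = {
    "seed shape": ("round", "wrinkled"),
    "seed colour": ("yellow", "green"),
    "pod shape": ("inflated", "constricted"),
}

# Enumerate every possible combination of one form per trait.
combinations = list(product(*traits.values()))

# The count matches Mendel's formula: 2^n for n differing characteristics.
print(len(combinations))    # 8
print(2 ** len(traits))     # 8
```

Each element of `combinations` is one possible phenotype profile, e.g. `("round", "yellow", "inflated")`, which is why the count grows as 2^n.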
Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each
parent contributed fluids to the fertilisation process and that the traits of the parents blended and mixed to produce
the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, which used the term gemmule
to describe hypothetical particles that would mix during reproduction. Although Mendel's work was largely unrecognized
after its first publication in 1866, it was 'rediscovered' in 1900 by three European scientists, Hugo de Vries, Carl
Correns, and Erich von Tschermak, who claimed to have reached similar conclusions in their own research. The word
gene is derived (via pangene) from the Ancient Greek word γένος (génos) meaning "race, offspring". Gene was coined
in 1909 by Danish botanist Wilhelm Johannsen to describe the fundamental physical and functional unit of heredity,
while the related word genetics was first used by William Bateson in 1905. Advances in understanding genes and inheritance
continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic
information by experiments in the 1940s to 1950s. The structure of DNA was studied by Rosalind Franklin using X-ray
crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule
whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication. Collectively,
this body of research established the central dogma of molecular biology, which states that proteins are translated
from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription
in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics. In 1972, Walter
Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for Bacteriophage
MS2 coat protein. The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved
the efficiency of sequencing and turned it into a routine laboratory tool. An automated version of the Sanger method
was used in early phases of the Human Genome Project. The theories developed in the 1930s and 1940s to integrate
molecular genetics with Darwinian evolution are called the modern evolutionary synthesis, a term introduced by Julian
Huxley. Evolutionary biologists subsequently refined this concept, such as George C. Williams' gene-centric view
of evolution. He proposed an evolutionary concept of the gene as a unit of natural selection with the definition:
"that which segregates and recombines with appreciable frequency.":24 In this view, the molecular gene transcribes
as a unit, and the evolutionary gene inherits as a unit. Related ideas emphasizing the centrality of genes in evolution
were popularized by Richard Dawkins. The vast majority of living organisms encode their genes in long strands of
DNA (deoxyribonucleic acid). DNA consists of a chain made from four types of nucleotide subunits, each composed of:
a five-carbon sugar (2'-deoxyribose), a phosphate group, and one of the four bases adenine, cytosine, guanine, and
thymine.:2.1 Two chains of DNA twist around each other to form a DNA double helix with the phosphate-sugar backbone
spiralling around the outside, and the bases pointing inwards with adenine base pairing to thymine and guanine to
cytosine. The specificity of base pairing occurs because adenine and thymine align to form two hydrogen bonds, whereas
cytosine and guanine form three hydrogen bonds. The two strands in a double helix must therefore be complementary,
with their sequence of bases matching such that the adenines of one strand are paired with the thymines of the other
strand, and so on.:4.1 Due to the chemical composition of the pentose residues of the bases, DNA strands have directionality.
One end of a DNA polymer contains an exposed hydroxyl group on the deoxyribose; this is known as the 3' end of the
molecule. The other end contains an exposed phosphate group; this is the 5' end. The two strands of a double-helix
run in opposite directions. Nucleic acid synthesis, including DNA replication and transcription occurs in the 5'→3'
direction, because new nucleotides are added via a dehydration reaction that uses the exposed 3' hydroxyl as a nucleophile.:27.2
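The pairing and directionality rules described above can be sketched as a small helper: A pairs with T, C pairs with G, and because the two strands run antiparallel, the complement of a strand written 5'→3' is obtained by complementing each base and reversing the sequence. This is a minimal sketch under those stated rules; the function name is an assumption, not anything defined in the source.

```python
# Watson-Crick pairing rules from the text: A-T (two hydrogen bonds),
# C-G (three hydrogen bonds).
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the complementary strand, also written 5'->3'.

    Reversal models the antiparallel orientation of the two strands.
    """
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # GCAT
```

Applying the function twice returns the original strand, reflecting that complementarity is symmetric between the two strands of the helix.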
The expression of genes encoded in DNA begins by transcribing the gene into RNA, a second type of nucleic acid that
is very similar to DNA, but whose monomers contain the sugar ribose rather than deoxyribose. RNA also contains the
base uracil in place of thymine. RNA molecules are less stable than DNA and are typically single-stranded. Genes
that encode proteins are composed of a series of three-nucleotide sequences called codons, which serve as the "words"
in the genetic "language". The genetic code specifies the correspondence during protein translation between codons
and amino acids. The genetic code is nearly the same for all known organisms.:4.1 The total complement of genes in
an organism or cell is known as its genome, which may be stored on one or more chromosomes. A chromosome consists
of a single, very long DNA helix on which thousands of genes are encoded.:4.2 The region of the chromosome at which
a particular gene is located is called its locus. Each locus contains one allele of a gene; however, members of a
population may have different alleles at the locus, each with a slightly different gene sequence. The majority of
eukaryotic genes are stored on a set of large, linear chromosomes. The chromosomes are packed within the nucleus
in complex with storage proteins called histones to form a unit called a nucleosome. DNA packaged and condensed in
this way is called chromatin.:4.2 The manner in which DNA is stored on the histones, as well as chemical modifications
of the histone itself, regulate whether a particular region of DNA is accessible for gene expression. In addition
to genes, eukaryotic chromosomes contain sequences involved in ensuring that the DNA is copied without degradation
of end regions and sorted into daughter cells during cell division: replication origins, telomeres and the centromere.:4.2
Replication origins are the sequence regions where DNA replication is initiated to make two copies of the chromosome.
Telomeres are long stretches of repetitive sequence that cap the ends of the linear chromosomes and prevent degradation
of coding and regulatory regions during DNA replication. The length of the telomeres decreases each time the genome
is replicated and has been implicated in the aging process. The centromere is required for binding spindle fibres
to separate sister chromatids into daughter cells during cell division.:18.2 Prokaryotes (bacteria and archaea) typically
store their genomes on a single large, circular chromosome. Similarly, some eukaryotic organelles contain a remnant
circular chromosome with a small number of genes.:14.4 Prokaryotes sometimes supplement their chromosome with additional
small circles of DNA called plasmids, which usually encode only a few genes and are transferable between individuals.
For example, the genes for antibiotic resistance are usually encoded on bacterial plasmids and can be passed between
individual cells, even those of different species, via horizontal gene transfer. Whereas the chromosomes of prokaryotes
are relatively gene-dense, those of eukaryotes often contain regions of DNA that serve no obvious function. Simple
single-celled eukaryotes have relatively small amounts of such DNA, whereas the genomes of complex multicellular
organisms, including humans, contain an absolute majority of DNA without an identified function. This DNA has often
been referred to as "junk DNA". However, more recent analyses suggest that, although protein-coding DNA makes up
barely 2% of the human genome, about 80% of the bases in the genome may be expressed, so the term "junk DNA" may
be a misnomer. The structure of a gene consists of many elements of which the actual protein coding sequence is often
only a small part. These include DNA regions that are not transcribed as well as untranslated regions of the RNA.
Firstly, flanking the open reading frame, all genes contain a regulatory sequence that is required for their expression.
In order to be expressed, genes require a promoter sequence. The promoter is recognized and bound by transcription
factors and RNA polymerase to initiate transcription.:7.1 A gene can have more than one promoter, resulting in messenger
RNAs (mRNA) that differ in how far they extend at the 5' end. Promoter regions have a consensus sequence; however,
highly transcribed genes have "strong" promoter sequences that bind the transcription machinery well, whereas others
have "weak" promoters that bind poorly and initiate transcription less frequently.:7.2 Eukaryotic promoter regions
are much more complex and difficult to identify than prokaryotic promoters.:7.3 Additionally, genes can have regulatory
regions many kilobases upstream or downstream of the open reading frame. These act by binding to transcription factors
which then cause the DNA to loop so that the regulatory sequence (and bound transcription factor) become close to
the RNA polymerase binding site. For example, enhancers increase transcription by binding an activator protein which
then helps to recruit the RNA polymerase to the promoter; conversely silencers bind repressor proteins and make the
DNA less available for RNA polymerase. The transcribed pre-mRNA contains untranslated regions at both ends which
contain a ribosome binding site, terminator and start and stop codons. In addition, most eukaryotic open reading
frames contain untranslated introns which are removed before the exons are translated. The sequences at the ends
of the introns dictate the splice sites used to generate the final mature mRNA, which encodes the protein or RNA product.
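The splicing step described above can be sketched in code. The following is a minimal illustration (the function name and coordinate convention are invented for this example, not drawn from any library): introns are removed and the flanking exons joined, with the hard part of the real process, recognizing splice sites from the sequences at the intron ends, deliberately left out.

```python
def splice(pre_mrna, introns):
    """Remove intron regions, given as (start, end) half-open index pairs,
    and concatenate the remaining exons into a mature transcript.
    Real spliceosomes recognize splice-site sequences at the intron ends;
    here the coordinates are simply supplied for illustration."""
    exons = []
    previous_end = 0
    for start, end in sorted(introns):
        exons.append(pre_mrna[previous_end:start])  # exon before this intron
        previous_end = end
    exons.append(pre_mrna[previous_end:])           # final exon
    return "".join(exons)

# One intron spanning indices 3..10 is excised; the two exons are joined.
mature = splice("AUGGGUAAGUCCC", [(3, 10)])
print(mature)  # -> AUGCCC
```

Alternative splicing would correspond to passing different intron coordinate sets for the same pre-mRNA, yielding different mature transcripts.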
Many prokaryotic genes are organized into operons, with multiple protein-coding sequences that are transcribed as
a unit. The products of operon genes typically have related functions and are involved in the same regulatory network.:7.3
Defining exactly what section of a DNA sequence comprises a gene is difficult. Regulatory regions of a gene such
as enhancers do not necessarily have to be close to the coding sequence on the linear molecule because the intervening
DNA can be looped out to bring the gene and its regulatory region into proximity. Similarly, a gene's introns can
be much larger than its exons. Regulatory regions can even be on entirely different chromosomes and operate in trans
to allow regulatory regions on one chromosome to come in contact with target genes on another chromosome. Early work
in molecular genetics suggested the model that one gene makes one protein. This model has been refined since the
discovery of genes that can encode multiple proteins by alternative splicing, and of coding sequences split into short
sections across the genome whose mRNAs are concatenated by trans-splicing. A broad operational definition is sometimes
used to encompass the complexity of these diverse phenomena, where a gene is defined as a union of genomic sequences
encoding a coherent set of potentially overlapping functional products. This definition categorizes genes by their
functional products (proteins or RNA) rather than their specific DNA loci, with regulatory elements classified as
gene-associated regions. In all organisms, two steps are required to read the information encoded in a gene's DNA
and produce the protein it specifies. First, the gene's DNA is transcribed to messenger RNA (mRNA).:6.1 Second, that
mRNA is translated to protein.:6.2 RNA-coding genes must still go through the first step, but are not translated
into protein. The process of producing a biologically functional molecule of either RNA or protein is called gene
expression, and the resulting molecule is called a gene product. The nucleotide sequence of a gene's DNA specifies
the amino acid sequence of a protein through the genetic code. Sets of three nucleotides, known as codons, each correspond
to a specific amino acid.:6 Additionally, a "start codon" and three "stop codons" indicate the beginning and end
of the protein coding region. There are 64 possible codons (four possible nucleotides at each of three positions,
hence 4³ possible codons) and only 20 standard amino acids; hence the code is redundant and multiple codons can specify
the same amino acid. The correspondence between codons and amino acids is nearly universal among all known living
organisms. Transcription produces a single-stranded RNA molecule known as messenger RNA, whose nucleotide sequence
is complementary to the DNA from which it was transcribed.:6.1 The mRNA acts as an intermediate between the DNA gene
and its final protein product. The gene's DNA is used as a template to generate a complementary mRNA. The mRNA matches
the sequence of the gene's DNA coding strand because it is synthesised as the complement of the template strand.
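The base-pairing relationships just described can be made concrete with a short sketch (hypothetical helper functions; only the pairing rules are modeled, not the polymerase chemistry):

```python
# Sketch of transcription: the mRNA is built as the complement of the
# template strand, and therefore matches the coding strand with U for T.
# Function names are illustrative, not from any specific library.

DNA_COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def coding_strand(template):
    """The coding strand is the DNA complement of the template strand."""
    return "".join(DNA_COMPLEMENT[base] for base in template)

def transcribe(template):
    """Build mRNA by complementing the template strand, pairing template
    A with RNA U (RNA uses uracil in place of thymine)."""
    rna_pairing = {"A": "U", "T": "A", "G": "C", "C": "G"}
    return "".join(rna_pairing[base] for base in template)

template = "TACGGAATT"
mrna = transcribe(template)
print(mrna)  # -> AUGCCUUAA
# The mRNA matches the coding strand, with U in place of T:
assert mrna == coding_strand(template).replace("T", "U")
```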
Transcription is performed by an enzyme called an RNA polymerase, which reads the template strand in the 3' to 5'
direction and synthesizes the RNA from 5' to 3'. To initiate transcription, the polymerase first recognizes and binds
a promoter region of the gene. Thus, a major mechanism of gene regulation is the blocking or sequestering of the promoter
region, either by tight binding by repressor molecules that physically block the polymerase, or by organizing the
DNA so that the promoter region is not accessible.:7 In prokaryotes, transcription occurs in the cytoplasm; for very
long transcripts, translation may begin at the 5' end of the RNA while the 3' end is still being transcribed. In
eukaryotes, transcription occurs in the nucleus, where the cell's DNA is stored. The RNA molecule produced by the
polymerase is known as the primary transcript and undergoes post-transcriptional modifications before being exported
to the cytoplasm for translation. One of the modifications performed is the splicing of introns which are sequences
in the transcribed region that do not encode protein. Alternative splicing mechanisms can result in mature transcripts
from the same gene having different sequences and thus coding for different proteins. This is a major form of regulation
in eukaryotic cells and also occurs in some prokaryotes.:7.5 Translation is the process by which a mature mRNA molecule
is used as a template for synthesizing a new protein.:6.2 Translation is carried out by ribosomes, large complexes
of RNA and protein responsible for carrying out the chemical reactions to add new amino acids to a growing polypeptide
chain by the formation of peptide bonds. The genetic code is read three nucleotides at a time, in units called codons,
via interactions with specialized RNA molecules called transfer RNA (tRNA). Each tRNA has three unpaired bases known
as the anticodon that are complementary to the codon it reads on the mRNA. The tRNA is also covalently attached to
the amino acid specified by the complementary codon. When the tRNA binds to its complementary codon in an mRNA strand,
the ribosome attaches its amino acid cargo to the new polypeptide chain, which is synthesized from amino terminus
to carboxyl terminus. During and after synthesis, most new proteins must fold into their active three-dimensional
structure before they can carry out their cellular functions.:3 Genes are regulated so that they are expressed only
when the product is needed, since expression draws on limited resources.:7 A cell regulates its gene expression depending
on its external environment (e.g. available nutrients, temperature and other stresses), its internal environment
(e.g. cell division cycle, metabolism, infection status), and its specific role if in a multicellular organism. Gene
expression can be regulated at any step: from transcriptional initiation, to RNA processing, to post-translational
modification of the protein. The regulation of lactose metabolism genes in E. coli (lac operon) was the first such
mechanism to be described in 1961. A typical protein-coding gene is first copied into RNA as an intermediate in the
manufacture of the final protein product.:6.1 In other cases, the RNA molecules are the actual functional products,
as in the synthesis of ribosomal RNA and transfer RNA. Some RNAs known as ribozymes are capable of enzymatic function,
and microRNA has a regulatory role. The DNA sequences from which such RNAs are transcribed are known as non-coding
RNA genes. Some viruses store their entire genomes in the form of RNA, and contain no DNA at all. Because they use
RNA to store genes, their cellular hosts may synthesize their proteins as soon as they are infected, without the
delay of waiting for transcription. On the other hand, RNA retroviruses, such as HIV, require the reverse transcription
of their genome from RNA into DNA before their proteins can be synthesized. RNA-mediated epigenetic inheritance has
also been observed in plants and very rarely in animals. Organisms inherit their genes from their parents. Asexual
organisms simply inherit a complete copy of their parent's genome. Sexual organisms have two copies of each chromosome
because they inherit one complete set from each parent.:1 According to Mendelian inheritance, variations in an organism's
phenotype (observable physical and behavioral characteristics) are due in part to variations in its genotype (particular
set of genes). Each gene specifies a particular trait, with different sequences of a gene (alleles) giving rise to
different phenotypes. Most eukaryotic organisms (such as the pea plants Mendel worked on) have two alleles for each
trait, one inherited from each parent.:20 Alleles at a locus may be dominant or recessive; dominant alleles give
rise to their corresponding phenotypes when paired with any other allele for the same trait, whereas recessive alleles
give rise to their corresponding phenotype only when paired with another copy of the same allele. For example, if
the allele specifying tall stems in pea plants is dominant over the allele specifying short stems, then pea plants
that inherit one tall allele from one parent and one short allele from the other parent will also have tall stems.
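The tall/short pea cross can be enumerated directly. A minimal sketch (allele symbols chosen for this example) of a cross between two heterozygous (Tt) plants, showing the classic 3:1 phenotype ratio:

```python
from itertools import product

# Cross two heterozygous (Tt) pea plants. "T" (tall) is dominant over
# "t" (short): any genotype containing "T" shows the tall phenotype.
parent1 = ["T", "t"]
parent2 = ["T", "t"]

# Each offspring inherits one allele from each parent (a Punnett square).
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]
phenotypes = ["tall" if "T" in genotype else "short" for genotype in offspring]

print(offspring)                                            # ['TT', 'Tt', 'Tt', 'tt']
print(phenotypes.count("tall"), phenotypes.count("short"))  # 3 1
```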
Mendel's work demonstrated that alleles assort independently in the production of gametes, or germ cells, ensuring
variation in the next generation. Although Mendelian inheritance remains a good model for many traits determined
by single genes (including a number of well-known genetic disorders) it does not include the physical processes of
DNA replication and cell division. The growth, development, and reproduction of organisms relies on cell division,
or the process by which a single cell divides into two usually identical daughter cells. This requires first making
a duplicate copy of every gene in the genome in a process called DNA replication.:5.2 The copies are made by specialized
enzymes known as DNA polymerases, which "read" one strand of the double-helical DNA, known as the template strand,
and synthesize a new complementary strand. Because the DNA double helix is held together by base pairing, the sequence
of one strand completely specifies the sequence of its complement; hence only one strand needs to be read by the
enzyme to produce a faithful copy. The process of DNA replication is semiconservative; that is, the copy of the genome
inherited by each daughter cell contains one original and one newly synthesized strand of DNA.:5.2 After DNA replication
is complete, the cell must physically separate the two copies of the genome and divide into two distinct membrane-bound
cells.:18.2 In prokaryotes (bacteria and archaea) this usually occurs via a relatively simple process called binary
fission, in which each circular genome attaches to the cell membrane and is separated into the daughter cells as
the membrane invaginates to split the cytoplasm into two membrane-bound portions. Binary fission is extremely fast
compared to the rates of cell division in eukaryotes. Eukaryotic cell division is a more complex process known as
the cell cycle; DNA replication occurs during a phase of this cycle known as S phase, whereas the process of segregating
chromosomes and splitting the cytoplasm occurs during M phase.:18.1 The duplication and transmission of genetic material
from one generation of cells to the next is the basis for molecular inheritance, and the link between the classical
and molecular pictures of genes. Organisms inherit the characteristics of their parents because the cells of the
offspring contain copies of the genes in their parents' cells. In asexually reproducing organisms, the offspring
will be a genetic copy or clone of the parent organism. In sexually reproducing organisms, a specialized form of
cell division called meiosis produces cells called gametes or germ cells that are haploid, or contain only one copy
of each gene.:20.2 The gametes produced by females are called eggs or ova, and those produced by males are called
sperm. Two gametes fuse to form a diploid fertilized egg, a single cell that has two sets of genes, with one copy
of each gene from the mother and one from the father.:20 During the process of meiotic cell division, an event called
genetic recombination or crossing-over can sometimes occur, in which a length of DNA on one chromatid is swapped
with a length of DNA on the corresponding homologous non-sister chromatid. This has no effect if the alleles on the chromatids are
the same, but results in reassortment of otherwise linked alleles if they are different.:5.5 The Mendelian principle
of independent assortment asserts that each of a parent's two alleles for each trait will sort independently into gametes;
which allele an organism inherits for one trait is unrelated to which allele it inherits for another trait. This
is in fact only true for genes that do not reside on the same chromosome, or are located very far from one another
on the same chromosome. The closer two genes lie on the same chromosome, the more closely they will be associated
in gametes and the more often they will appear together; genes that are very close are essentially never separated
because it is extremely unlikely that a crossover point will occur between them. This is known as genetic linkage.
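The effect of linkage on gamete frequencies can be written down analytically. In the standard model (assumed here, not stated in the text above), a recombination frequency r between two loci, near 0 for tightly linked genes and 0.5 for unlinked ones, fixes the expected proportions of parental and recombinant gametes; the names below are illustrative:

```python
def gamete_frequencies(r):
    """Expected gamete proportions for a doubly heterozygous parent with
    alleles A,B on one chromosome and a,b on its homolog (genotype AB/ab).
    r is the recombination frequency between the two loci (0 <= r <= 0.5)."""
    return {
        "AB": (1 - r) / 2, "ab": (1 - r) / 2,  # parental combinations
        "Ab": r / 2,       "aB": r / 2,        # recombinant combinations
    }

# Tight linkage: crossovers between the loci are rare, so the
# parental allele combinations dominate.
print(gamete_frequencies(0.01))
# No linkage (e.g. loci on different chromosomes): independent
# assortment, all four gametes equally likely.
print(gamete_frequencies(0.5))  # all values 0.25
```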
DNA replication is for the most part extremely accurate; however, errors (mutations) do occur.:7.6 The error rate
in eukaryotic cells can be as low as 10⁻⁸ per nucleotide per replication, whereas for some RNA viruses it can be
as high as 10⁻³. This means that each generation, each human genome accumulates 1–2 new mutations. Small mutations
can be caused by DNA replication and the aftermath of DNA damage and include point mutations in which a single base
is altered and frameshift mutations in which a single base is inserted or deleted. Either of these mutations can
change the gene by missense (change a codon to encode a different amino acid) or nonsense (a premature stop codon).
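The silent/missense/nonsense distinction can be demonstrated with a toy subset of the codon table (only a few of the 64 codons are included, and the function name is invented for this sketch):

```python
# Toy subset of the standard genetic code (RNA codons); "*" marks stop.
CODON_TABLE = {
    "AUG": "Met", "UGG": "Trp",
    "GAA": "Glu", "GAG": "Glu",   # a synonymous pair
    "GUA": "Val",
    "UAA": "*", "UAG": "*", "UGA": "*",
}

def classify_point_mutation(codon_before, codon_after):
    """Classify a single-codon substitution as silent, missense, or nonsense."""
    before = CODON_TABLE[codon_before]
    after = CODON_TABLE[codon_after]
    if after == "*":
        return "nonsense"   # premature stop codon
    if after == before:
        return "silent"     # synonymous change, same amino acid
    return "missense"       # different amino acid

print(classify_point_mutation("GAA", "GAG"))  # silent   (Glu -> Glu)
print(classify_point_mutation("GAA", "GUA"))  # missense (Glu -> Val)
print(classify_point_mutation("GAA", "UAA"))  # nonsense (Glu -> stop)
```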
Larger mutations can be caused by errors in recombination to cause chromosomal abnormalities including the duplication,
deletion, rearrangement or inversion of large sections of a chromosome. Additionally, the DNA repair mechanisms that
normally revert mutations can introduce errors when repairing the physical damage to the molecule is more important
than restoring an exact copy, for example when repairing double-strand breaks.:5.4 When multiple different alleles
for a gene are present in a species's population, that gene is said to be polymorphic. Most different alleles are functionally
equivalent; however, some alleles can give rise to different phenotypic traits. A gene's most common allele is called
the wild type, and rare alleles are called mutants. The genetic variation in relative frequencies of different alleles
in a population is due to both natural selection and genetic drift. The wild-type allele is not necessarily the ancestor
of less common alleles, nor is it necessarily fitter. Most mutations within genes are neutral, having no effect on
the organism's phenotype (silent mutations). Some mutations do not change the amino acid sequence because multiple
codons encode the same amino acid (synonymous mutations). Other mutations can be neutral if they lead to amino acid
sequence changes, but the protein still functions similarly with the new amino acid (e.g. conservative mutations).
Many mutations, however, are deleterious or even lethal, and are removed from populations by natural selection. Genetic
disorders are the result of deleterious mutations and can be due to spontaneous mutation in the affected individual,
or can be inherited. Finally, a small fraction of mutations are beneficial, improving the organism's fitness and
are extremely important for evolution, since their directional selection leads to adaptive evolution.:7.6 Genes with
a most recent common ancestor, and thus a shared evolutionary ancestry, are known as homologs. These genes appear
either from gene duplication within an organism's genome, where they are known as paralogous genes, or are the result
of divergence of the genes after a speciation event, where they are known as orthologous genes,:7.6 and often perform
the same or similar functions in related organisms. It is often assumed that the functions of orthologous genes are
more similar than those of paralogous genes, although the difference is minimal. The relationship between genes can
be measured by comparing the sequence alignment of their DNA.:7.6 Regions of homologous genes that retain high sequence
similarity are known as conserved sequences. Most changes to a gene's sequence do not affect its function, and so genes accumulate
mutations over time by neutral molecular evolution. Additionally, any selection on a gene will cause its sequence
to diverge at a different rate. Genes under stabilizing selection are constrained and so change more slowly whereas
genes under directional selection change sequence more rapidly. The sequence differences between genes can be used
for phylogenetic analyses to study how those genes have evolved and how the organisms they come from are related.
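A crude summary of such sequence divergence is percent identity over an alignment. A minimal sketch (ungapped comparison of pre-aligned, equal-length sequences; real alignment tools also handle insertions and deletions):

```python
def percent_identity(seq_a, seq_b):
    """Fraction of identical positions between two pre-aligned,
    equal-length sequences. Gap handling and the alignment step
    itself are out of scope for this sketch."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two hypothetical homologous stretches differing at two of ten positions:
print(percent_identity("ATGGCCAAGT", "ATGGACAATT"))  # -> 80.0
```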
The most common source of new genes in eukaryotic lineages is gene duplication, which creates copy number variation
of an existing gene in the genome. The resulting genes (paralogs) may then diverge in sequence and in function. Sets
of genes formed in this way comprise a gene family. Gene duplications and losses within a family are common and represent
a major source of evolutionary biodiversity. Sometimes, gene duplication may result in a nonfunctional copy of a
gene, or a functional copy may be subject to mutations that result in loss of function; such nonfunctional genes
are called pseudogenes.:7.6 De novo or "orphan" genes, whose sequence shows no similarity to existing genes, are
extremely rare. Estimates of the number of de novo genes in the human genome range from 18 to 60. Such genes are
typically shorter and simpler in structure than most eukaryotic genes, with few if any introns. Two primary sources
of orphan protein-coding genes are gene duplication followed by extremely rapid sequence change, such that the original
relationship is undetectable by sequence comparisons, and formation through mutation of "cryptic" transcription start
sites that introduce a new open reading frame in a region of the genome that did not previously code for a protein.
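The notion of a mutation introducing a new open reading frame can be illustrated with a naive ORF scan (forward strand only; the function name and the minimum-length threshold are invented for this sketch):

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_codons=2):
    """Naively scan a DNA sequence for open reading frames: an ATG start
    codon followed by an in-frame stop codon. min_codons is the minimum
    number of codons from ATG up to (not including) the stop.
    Returns (start_index, orf_sequence) pairs; forward strand only."""
    orfs = []
    for start in range(len(dna) - 2):
        if dna[start:start + 3] != "ATG":
            continue
        # Walk the frame codon by codon until a stop codon appears.
        for pos in range(start + 3, len(dna) - 2, 3):
            if dna[pos:pos + 3] in STOP_CODONS:
                if (pos - start) // 3 >= min_codons:
                    orfs.append((start, dna[start:pos + 3]))
                break
    return orfs

# A point mutation creating the ATG below would open a reading frame
# where none existed before:
print(find_orfs("CCATGAAACCCTAAGG"))  # -> [(2, 'ATGAAACCCTAA')]
```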
Horizontal gene transfer refers to the transfer of genetic material through a mechanism other than reproduction.
This mechanism is a common source of new genes in prokaryotes, sometimes thought to contribute more to genetic variation
than gene duplication. It is a common means of spreading antibiotic resistance, virulence, and adaptive metabolic
functions. Although horizontal gene transfer is rare in eukaryotes, likely examples have been identified of protist
and alga genomes containing genes of bacterial origin. The genome size, and the number of genes it encodes varies
widely between organisms. The smallest genomes occur in viruses (which can have as few as 2 protein-coding genes),
and viroids (which act as a single non-coding RNA gene). Conversely, plants can have extremely large genomes, with
rice containing >46,000 protein-coding genes. The total number of protein-coding genes (the Earth's proteome) is
estimated to be 5 million sequences. Although the number of base-pairs of DNA in the human genome has been known
since the 1960s, the estimated number of genes has changed over time as definitions of genes, and methods of detecting
them have been refined. Initial theoretical predictions of the number of human genes were as high as 2,000,000. Early
experimental measures indicated there to be 50,000–100,000 transcribed genes (expressed sequence tags). Subsequently,
the sequencing in the Human Genome Project indicated that many of these transcripts were alternative variants of
the same genes, and the total number of protein-coding genes was revised down to ~20,000 with 13 genes encoded on
the mitochondrial genome. Of the human genome, only 1–2% consists of protein-coding genes, with the remainder being
'noncoding' DNA such as introns, retrotransposons, and noncoding RNAs. Essential genes are the set of genes thought
to be critical for an organism's survival. This definition assumes the abundant availability of all relevant nutrients
and the absence of environmental stress. Only a small portion of an organism's genes are essential. In bacteria,
an estimated 250–400 genes are essential for Escherichia coli and Bacillus subtilis, which is less than 10% of their
genes. Half of these genes are orthologs in both organisms and are largely involved in protein synthesis. In the
budding yeast Saccharomyces cerevisiae the number of essential genes is slightly higher, at 1000 genes (~20% of their
genes). Although the number is more difficult to measure in higher eukaryotes, mice and humans are estimated to have
around 2000 essential genes (~10% of their genes). Housekeeping genes are critical for carrying out basic cell functions
and so are expressed at a relatively constant level (constitutively). Since their expression is constant, housekeeping
genes are used as experimental controls when analysing gene expression. Not all essential genes are housekeeping
genes since some essential genes are developmentally regulated or expressed at certain times during the organism's
life cycle. Gene nomenclature has been established by the HUGO Gene Nomenclature Committee (HGNC) for each known
human gene in the form of an approved gene name and symbol (short-form abbreviation), which can be accessed through
a database maintained by HGNC. Symbols are chosen to be unique, and each gene has only one symbol (although approved
symbols sometimes change). Symbols are preferably kept consistent with other members of a gene family and with homologs
in other species, particularly the mouse due to its role as a common model organism. Genetic engineering is the modification
of an organism's genome through biotechnology. Since the 1970s, a variety of techniques have been developed to specifically
add, remove and edit genes in an organism. Recently developed genome engineering techniques use engineered nuclease
enzymes to create targeted DNA repair in a chromosome to either disrupt or edit a gene when the break is repaired.
The related term synthetic biology is sometimes used to refer to extensive genetic engineering of an organism. Genetic
engineering is now a routine research tool with model organisms. For example, genes are easily added to bacteria
and lineages of knockout mice with a specific gene's function disrupted are used to investigate that gene's function.
Many organisms have been genetically modified for applications in agriculture, industrial biotechnology, and medicine.
For multicellular organisms, typically the embryo is engineered which grows into the adult genetically modified organism.
However, the genomes of cells in an adult organism can be edited using gene therapy techniques to treat genetic diseases.
Guinea-Bissau (i/ˈɡɪni bɪˈsaʊ/, GI-nee-bi-SOW), officially the Republic of Guinea-Bissau (Portuguese: República da Guiné-Bissau,
pronounced: [ʁeˈpublikɐ dɐ ɡiˈnɛ biˈsaw]), is a country in West Africa. It covers 36,125 square kilometres (13,948
sq mi) with an estimated population of 1,704,000. Guinea-Bissau was once part of the kingdom of Gabu, as well as
part of the Mali Empire. Parts of this kingdom persisted until the 18th century, while a few others were under some
rule by the Portuguese Empire since the 16th century. In the 19th century, it was colonized as Portuguese Guinea.
Upon independence, declared in 1973 and recognised in 1974, the name of its capital, Bissau, was added to the country's
name to prevent confusion with Guinea (formerly French Guinea). Guinea-Bissau has a history of political instability
since independence, and no elected president has successfully served a full five-year term. Only 14% of the population
speaks Portuguese, established as the official language in the colonial period. Almost half the population (44%)
speaks Crioulo, a Portuguese-based creole language, and the remainder speak a variety of native African languages.
The main religions are African traditional religions and Islam; there is a Christian (mostly Roman Catholic) minority.
The country's per-capita gross domestic product is one of the lowest in the world. Guinea-Bissau is a member of the
United Nations, African Union, Economic Community of West African States, Organisation of Islamic Cooperation, the
Latin Union, Community of Portuguese Language Countries, La Francophonie and the South Atlantic Peace and Cooperation
Zone. Guinea-Bissau was once part of the kingdom of Gabu, part of the Mali Empire; parts of this kingdom persisted
until the 18th century. Other parts of the territory in the current country were considered by the Portuguese as
part of their empire. Portuguese Guinea was known as the Slave Coast, as it was a major area for the exportation
of African slaves by Europeans to the western hemisphere. Previously, slaves had been traded north by Arabs to
North Africa and into the Middle East. Early reports of Europeans reaching this area include those of
the Venetian Alvise Cadamosto's voyage of 1455, the 1479–1480 voyage by Flemish-French trader Eustache de la Fosse,
and Diogo Cão. In the 1480s this Portuguese explorer reached the Congo River and the lands of Bakongo, setting up
the foundations of modern Angola, some 4200 km down the African coast from Guinea-Bissau. Although the rivers and
coast of this area were among the first places colonized by the Portuguese, who set up trading posts in the 16th
century, they did not explore the interior until the 19th century. The local African rulers in Guinea, some of whom
prospered greatly from the slave trade, controlled the inland trade and did not allow the Europeans into the interior.
They kept them in the fortified coastal settlements where the trading took place. African communities that fought
back against slave traders also distrusted European adventurers and would-be settlers. The Portuguese in Guinea were
largely restricted to the port of Bissau and Cacheu. A small number of European settlers established isolated farms
along Bissau's inland rivers. For a brief period in the 1790s, the British tried to establish a rival foothold on
an offshore island, at Bolama. But by the 19th century the Portuguese were sufficiently secure in Bissau to regard
the neighbouring coastline as their own special territory, also up north in part of present South Senegal. An armed
rebellion beginning in 1956 by the African Party for the Independence of Guinea and Cape Verde (PAIGC) under the
leadership of Amílcar Cabral gradually consolidated its hold on then Portuguese Guinea. Unlike guerrilla movements
in other Portuguese colonies, the PAIGC rapidly extended its military control over large portions of the territory,
aided by the jungle-like terrain, its easily reached borderlines with neighbouring allies, and large quantities of
arms from Cuba, China, the Soviet Union, and left-leaning African countries. Cuba also agreed to supply artillery
experts, doctors, and technicians. The PAIGC even managed to acquire a significant anti-aircraft capability in order
to defend itself against aerial attack. By 1973, the PAIGC was in control of many parts of Guinea, although the movement
suffered a setback in January 1973 when Cabral was assassinated. Independence was unilaterally declared on 24 September
1973. Recognition became universal following the 25 April 1974 socialist-inspired military coup in Portugal, which
overthrew Lisbon's Estado Novo regime. Luís Cabral, brother of Amílcar and co-founder of PAIGC, was appointed the
first President of Guinea-Bissau. Following independence, the PAIGC killed thousands of local Guinean soldiers who
had fought along with the Portuguese Army against guerrillas. Some escaped to settle in Portugal or other African
nations. One of the massacres occurred in the town of Bissorã. In 1980 the PAIGC acknowledged in its newspaper Nó
Pintcha (dated 29 November 1980) that many Guinean soldiers had been executed and buried in unmarked collective
graves in the woods of Cumerá, Portogole, and Mansabá. The country was controlled by a revolutionary council until
1984. The first multi-party elections were held in 1994. An army uprising in May 1998 led to the Guinea-Bissau Civil
War and the president's ousting in June 1999. Elections were held again in 2000, and Kumba Ialá was elected president.
In September 2003, a military coup was conducted. The military arrested Ialá on the charge of being "unable to solve
the problems". After being delayed several times, legislative elections were held in March 2004. A mutiny of military
factions in October 2004 resulted in the death of the head of the armed forces and caused widespread unrest. In June
2005, presidential elections were held for the first time since the coup that deposed Ialá. Ialá returned as the
candidate for the PRS, claiming to be the legitimate president of the country, but the election was won by former
president João Bernardo Vieira, deposed in the 1999 coup. Vieira beat Malam Bacai Sanhá in a runoff election. Sanhá
initially refused to concede, claiming that tampering and electoral fraud occurred in two constituencies including
the capital, Bissau. Despite reports of arms entering the country prior to the election and some "disturbances during
campaigning," including attacks on government offices by unidentified gunmen, foreign election monitors described
the 2005 election overall as "calm and organized". Three years later, PAIGC won a strong parliamentary majority,
with 67 of 100 seats, in the parliamentary election held in November 2008. In November 2008, President Vieira's official
residence was attacked by members of the armed forces, killing a guard but leaving the president unharmed. On 2 March
2009, however, Vieira was assassinated by what preliminary reports indicated to be a group of soldiers avenging the
death of the head of the joint chiefs of staff, General Batista Tagme Na Wai. Tagme had died in an explosion on 1 March 2009, the target of an assassination. Military leaders in the country pledged to respect the constitutional order
of succession. National Assembly Speaker Raimundo Pereira was appointed as an interim president until a nationwide
election on 28 June 2009. It was won by Malam Bacai Sanhá. On the evening of 12 April 2012, members of the country's
military staged a coup d'état and arrested the interim president and a leading presidential candidate. Former vice
chief of staff, General Mamadu Ture Kuruma, assumed control of the country in the transitional period and started
negotiations with opposition parties. Guinea-Bissau is a republic. In the past, the government had been highly centralized.
Multi-party governance was not established until mid-1991. The president is the head of state and the prime minister
is the head of government. Since 1974, no president has successfully served a full five-year term. At the legislative
level, a unicameral Assembleia Nacional Popular (National People's Assembly) is made up of 100 members. They are
popularly elected from multi-member constituencies to serve a four-year term. The judicial system is headed by a
Tribunal Supremo da Justiça (Supreme Court), made up of nine justices appointed by the president; they serve at the
pleasure of the president. João Bernardo "Nino" Vieira was elected in 2005 as President of Guinea-Bissau as an independent,
being declared winner of the second round by the CNE (Comité Nacional de Eleições). Vieira returned to power in 2005
six years after being ousted from office during a civil war. Previously, he held power for 19 years after taking
power in 1980 in a bloodless coup. In that action, he toppled the government of Luís Cabral. He was killed on 2 March
2009, possibly by soldiers in retaliation for the assassination of General Batista Tagme Na Waie, the head of the
joint chiefs of staff, killed in an explosion. Vieira's death did not trigger widespread violence, but there were
signs of turmoil in the country, according to the advocacy group Swisspeace. Malam Bacai Sanhá was elected after
a transition. In the 2009 election to replace the assassinated Vieira, Sanhá was the presidential candidate of the
PAIGC while Kumba Ialá was the presidential candidate of the PRS. In 2012, President Malam Bacai Sanhá died. He belonged to the PAIGC (African Party for the Independence of Guinea and Cape Verde), one of the two major
political parties in Guinea-Bissau, along with the PRS (Party for Social Renewal). There are more than 20 minor parties.
Guinea-Bissau is divided into eight regions (regiões) and one autonomous sector (sector autónomo). These, in turn,
are subdivided into 37 Sectors. The regions are: Guinea-Bissau is bordered by Senegal to the north and Guinea to
the south and east, with the Atlantic Ocean to its west. It lies mostly between latitudes 11° and 13°N (a small area
is south of 11°), and longitudes 13° and 17°W. At 36,125 square kilometres (13,948 sq mi), the country is larger
in size than Taiwan or Belgium. It lies at a low altitude; its highest point is 300 metres (984 ft). The terrain is mostly low coastal plain with swamps of Guinean mangroves rising to Guinean forest-savanna mosaic in the east.
Its monsoon-like rainy season alternates with periods of hot, dry harmattan winds blowing from the Sahara. The Bijagos
Archipelago lies off the mainland. Guinea-Bissau is warm all year round with little temperature fluctuation;
it averages 26.3 °C (79.3 °F). The average rainfall for Bissau is 2,024 millimetres (79.7 in) although this is almost
entirely accounted for during the rainy season which falls between June and September/October. From December through
April, the country experiences drought. Guinea-Bissau's GDP per capita is one of the lowest in the world, and its
Human Development Index is one of the lowest on earth. More than two-thirds of the population lives below the poverty
line. The economy depends mainly on agriculture; fish, cashew nuts and ground nuts are its major exports. A long
period of political instability has resulted in depressed economic activity, deteriorating social conditions, and
increased macroeconomic imbalances. It takes longer on average to register a new business in Guinea-Bissau (233 days
or about 33 weeks) than in any other country in the world except Suriname. [The Economist, Pocket World in Figures,
2008 Edition, London: Profile Books] Guinea-Bissau has started to show some economic advances after a pact of stability
was signed by the main political parties of the country, leading to an IMF-backed structural reform program. The
key challenges for the country in the period ahead are to achieve fiscal discipline, rebuild public administration,
improve the economic climate for private investment, and promote economic diversification. After the country became
independent from Portugal in 1974 due to the Portuguese Colonial War and the Carnation Revolution, the rapid exodus
of the Portuguese civilian, military, and political authorities resulted in considerable damage to the country's
economic infrastructure, social order, and standard of living. After several years of economic downturn and political
instability, in 1997, Guinea-Bissau entered the CFA franc monetary system, bringing about some internal monetary
stability. The civil war that took place in 1998 and 1999, and a military coup in September 2003 again disrupted
economic activity, leaving a substantial part of the economic and social infrastructure in ruins and intensifying
the already widespread poverty. Following the parliamentary elections in March 2004 and presidential elections in
July 2005, the country is trying to recover from the long period of instability, despite a still-fragile political
situation. Beginning around 2005, drug traffickers based in Latin America began to use Guinea-Bissau, along with
several neighboring West African nations, as a transshipment point to Europe for cocaine. The nation was described
by a United Nations official as being at risk of becoming a "narco-state". The government and the military have
done little to stop drug trafficking, which increased after the 2012 coup d'état. According to the 2010 revision of
the UN World Population Prospects, Guinea-Bissau's population was 1,515,000 in 2010, compared to 518,000 in 1950.
The proportion of the population below the age of 15 in 2010 was 41.3%, 55.4% were aged between 15 and 65 years of
age, while 3.3% were aged 65 years or older. Portuguese natives comprise a very small percentage of Guinea-Bissauans.
After Guinea-Bissau gained independence, most of the Portuguese nationals left the country. The country has a tiny
Chinese population. These include traders and merchants of mixed Portuguese and Chinese ancestry from Macau, a former
Asian Portuguese colony. 14% of the population speaks the official language Portuguese, the language of government
and national communication during centuries of colonial rule. 44% speak Kriol, a Portuguese-based creole language,
which is effectively a national language of communication among groups. The remainder speak a variety of native African
languages unique to ethnicities. Most Portuguese and Mestiços speak one of the African languages and Kriol as second
languages. French is also taught in schools because Guinea-Bissau is surrounded by French-speaking nations. Guinea-Bissau
is a full member of the Francophonie. Throughout the 20th century, most Bissau-Guineans practiced some form of Animism.
In the early 21st century, many have adopted Islam, which is now practiced by 50% of the country's population. Most
of Guinea-Bissau's Muslims are of the Sunni denomination with approximately 2% belonging to the Ahmadiyya sect. Approximately
10% of the country's population belong to the Christian community, and 40% continue to hold Indigenous beliefs. These
statistics can be misleading, however, as many residents practice syncretic forms of Islamic and Christian faiths,
combining their practices with traditional African beliefs. The prevalence of HIV-infection among the adult population
is 1.8%. Only 20% of infected pregnant women receive antiretroviral coverage to prevent transmission to newborns. Malaria kills more residents; 9% of the population have reported infection, and it causes three times as many deaths as AIDS. In 2008, fewer than half of children younger than five slept under antimalaria nets or had access to antimalarial
drugs. Despite declining rates in surrounding countries, cholera rates were reported in November 2012 to be on the
rise, with 1,500 cases reported and nine deaths. A 2008 cholera epidemic in Guinea-Bissau affected 14,222 people
and killed 225. The 2010 maternal mortality rate per 100,000 births for Guinea Bissau was 1000. This compares with
804.3 in 2008 and 966 in 1990. The under 5 mortality rate, per 1,000 births, was 195 and the neonatal mortality as
a percentage of under 5's mortality was 24. The number of midwives per 1,000 live births was 3; one out of eighteen pregnant women dies as a result of pregnancy. According to a 2013 UNICEF report, 50% of women in Guinea-Bissau had
undergone female genital mutilation. In 2010, Guinea Bissau had the 7th highest maternal mortality rate in the world.
Education is compulsory from the age of 7 to 13. The enrollment of boys is higher than that of girls. In 1998, the
gross primary enrollment rate was 53.5%, with higher enrollment ratio for males (67.7%) compared to females (40%).
Guinea-Bissau has several secondary schools (general as well as technical) and a number of universities, to which
an institutionally autonomous Faculty of Law as well as a Faculty of Medicine have been added. The music of Guinea-Bissau
is usually associated with the polyrhythmic gumbe genre, the country's primary musical export. However, civil unrest
and other factors have combined over the years to keep gumbe, and other genres, out of mainstream audiences, even
in generally syncretist African countries. The calabash is the primary musical instrument of Guinea-Bissau, and is
used in extremely swift and rhythmically complex dance music. Lyrics are almost always in Guinea-Bissau Creole, a
Portuguese-based creole language, and are often humorous and topical, revolving around current events and controversies,
especially AIDS. The word gumbe is sometimes used generically to refer to any music of the country, although it
most specifically refers to a unique style that fuses about ten of the country's folk music traditions. Tina and
tinga are other popular genres, while extant folk traditions include ceremonial music used in funerals, initiations
and other rituals, as well as Balanta brosca and kussundé, Mandinga djambadon, and the kundere sound of the Bissagos
Islands. Rice is a staple in the diet of residents near the coast and millet a staple in the interior. Fish, shellfish,
fruits and vegetables are commonly eaten along with cereal grains, milk, curd and whey. The Portuguese encouraged
peanut production. Vigna subterranea (Bambara groundnut) and Macrotyloma geocarpum (Hausa groundnut) are also grown.
Black-eyed peas are also part of the diet. Palm oil is harvested. Common dishes include soups and stews. Common ingredients
include yams, sweet potato, cassava, onion, tomato and plantain. Spices, peppers and chilis are used in cooking,
including Aframomum melegueta seeds (Guinea pepper). Flora Gomes is an internationally renowned film director; his
most famous film is Nha Fala (English: My Voice). Gomes's Mortu Nega (Death Denied) (1988) was the first fiction
film and the second feature film ever made in Guinea-Bissau. (The first feature film was N’tturudu, by director Umban
u’Kest in 1987.) At FESPACO 1989, Mortu Nega won the prestigious Oumarou Ganda Prize. Mortu Nega is in Creole with
English subtitles. In 1992, Gomes directed Udju Azul di Yonta, which was screened in the Un Certain Regard section
at the 1992 Cannes Film Festival. Gomes has also served on the boards of many Africa-centric film festivals.
This article covers numbered east-west streets in Manhattan, New York City. Major streets have their own linked articles;
minor streets are discussed here. The streets do not run exactly east–west, because the grid plan is aligned with
the Hudson River rather than with the cardinal directions. "West" is approximately 29 degrees north of true west.
The numbered streets carry crosstown traffic. In general, even-numbered streets are one-way eastbound and odd-numbered
streets are one-way westbound; several exceptions reverse this. Most wider streets carry two-way traffic, as do a few
of the narrow ones. Street names change from East to West (for instance, from East 10th Street to West 10th Street) at Broadway below 8th Street, and at Fifth Avenue from 8th Street and above. Although the numbered streets begin
just north of East Houston Street in the East Village, they generally do not extend west into Greenwich Village,
which already had streets when the grid plan was laid out by the Commissioners' Plan of 1811. Streets that do continue
farther west change direction before reaching the Hudson River. The grid covers the length of the island from 14th
Street north. 220th Street is the highest numbered street on Manhattan Island. Marble Hill is also within the borough
of Manhattan, so the highest street number in the borough is 228th Street. However, the numbering continues in the
Bronx up to 263rd Street. The lowest number is East First Street—which runs in Alphabet City near East Houston Street—as
well as First Place in Battery Park City. East 1st Street begins just north of East Houston Street at Avenue A and continues to Bowery. Peretz Square, a small triangular sliver park where Houston Street, First Street and First Avenue meet, marks the spot where the grid takes hold. East 2nd Street begins just north of East Houston Street at Avenue C and also continues to Bowery. The east end of East 3rd, 4th, 5th, and 7th Streets is Avenue D, with East 6th Street continuing further eastward and connecting to FDR Drive. The west end of these streets is Bowery and Third Avenue,
except for 3rd Street (formerly Amity Place; to Sixth Avenue) and 4th Street (to 13th Street), which extend west
and north, respectively, into Greenwich Village. Great Jones Street connects East 3rd to West 3rd. East 5th Street
goes west to Cooper Square, but is interrupted between Avenues B and C by The Earth School, Public School 364, and
between First Avenue and Avenue A by the Village View Apartments. 8th and 9th Streets run parallel to each other,
beginning at Avenue D, interrupted by Tompkins Square Park at Avenue B, resuming at Avenue A and continuing to Sixth
Avenue. West 8th Street is an important local shopping street. 8th Street between Avenue A and Third Avenue is called
St Mark's Place, but it is counted in the length below. 10th Street (40°44′03″N 74°00′11″W / 40.7342580°N 74.0029670°W
/ 40.7342580; -74.0029670) begins at the FDR Drive and Avenue C. West of Sixth Avenue, it turns southward about 40
degrees to join the Greenwich Village street grid and continue to West Street on the Hudson River. Because West 4th
Street turns northward at Sixth Avenue, it intersects 10th, 11th, 12th, and 13th Streets in the West Village. The
M8 bus operates on 10th Street in both directions between Avenue D and Avenue A, and eastbound between West Street
and Sixth Avenue. 10th Street has an eastbound bike lane from West Street to the East River. In 2009, the two-way
section of 10th Street between Avenue A and the East River had bicycle markings and sharrows installed, but it still
has no dedicated bike lane. West 10th Street was previously named Amos Street for Richard Amos. The end of West 10th
Street toward the Hudson River was once the home of Newgate Prison, New York City's first prison and the United States'
second. 11th Street is in two parts. It is interrupted by the block containing Grace Church between Broadway and
Fourth Avenue. East 11th Street runs from Fourth Avenue to Avenue C, past Webster Hall. West 11th Street
runs from Broadway to West Street. 11th Street and 6th Avenue was the location of the Old Grapevine tavern from the
1700s to its demolition in the early 20th century. 13th Street is in three parts. The first is a dead end from Avenue
C. The second starts at a dead end, just before Avenue B, and runs to Greenwich Avenue, and the third part is from
Eighth Avenue to Tenth Avenue. 14th Street is a main numbered street in Manhattan. It begins at Avenue C and ends
at West Street. Its length is 3.4 km (2.1 mi). It has six subway stations: 15th Street starts at FDR Drive, and 16th
Street starts at a dead end halfway between FDR Drive and Avenue C. Both are interrupted at Avenue C and continue from First Avenue to West Street; both are interrupted again at Union Square, and 16th Street also pauses at Stuyvesant Square.
On 17th Street (40°44′08″N 73°59′12″W / 40.735532°N 73.986575°W / 40.735532; -73.986575), traffic runs one way
along the street, from east to west, except for the stretch between Broadway and Park Avenue South, where traffic runs
in both directions. It forms the northern borders of both Union Square (between Broadway and Park Avenue South) and
Stuyvesant Square. Composer Antonín Dvořák's New York home was located at 327 East 17th Street, near Perlman Place.
The house was razed by Beth Israel Medical Center after it received approval of a 1991 application to demolish the
house and replace it with an AIDS hospice. Time Magazine was started at 141 East 17th Street. 18th Street has a local
subway station at the crossing with Seventh Avenue, served by the 1 2 trains on the IRT Broadway – Seventh Avenue
Line. There used to be an 18th Street station on the IRT Lexington Avenue Line at the crossing with Park Avenue South.
20th Street starts at Avenue C, and 21st and 22nd Streets begin at First Avenue. They all end at Eleventh Avenue.
Travel on the last block of the 20th, 21st and 22nd Streets, between Tenth and Eleventh Avenues, is in the opposite
direction than it is on the rest of the respective street. 20th Street is very wide from the Avenue C to First Avenue.
Between Second and Third Avenues, 21st Street is alternatively known as Police Officer Anthony Sanchez Way. Along
the northern perimeter of Gramercy Park, between Gramercy Park East and Gramercy Park West, 21st Street is known
as Gramercy Park North. 23rd Street is another main numbered street in Manhattan. It begins at FDR Drive and ends
at Eleventh Avenue. Its length is 3.1 km (1.9 mi). It carries two-way traffic. On 23rd Street there are five local subway
stations: 24th Street is in two parts. It starts at First Avenue and ends at Madison Avenue, because
of Madison Square Park. 25th Street, which is in three parts, starts at FDR Drive, is a pedestrian plaza between
Third Avenue and Lexington Avenue, and ends at Madison. Then West 24th and 25th Streets continue from Fifth Avenue
to Eleventh Avenue (25th) or Twelfth Avenue (24th). 27th Street is a one-way street that runs from Second Avenue to the
West Side Highway with an interruption between Eighth Avenue and Tenth Avenue. It is most noted for its strip between
Tenth and Eleventh Avenues, known as Club Row because it features numerous nightclubs and lounges. In recent years,
the nightclubs on West 27th Street have succumbed to stiff competition from Manhattan's Meatpacking District about
fifteen blocks south, and other venues in downtown Manhattan. Heading east, 27th Street passes through Chelsea Park
between Tenth and Ninth Avenues, with the Fashion Institute of Technology (FIT) on the corner of Eighth. On Madison
Avenue between 26th and 27th streets, on the site of the old Madison Square Garden, is the New York Life Building,
built in 1928 and designed by Cass Gilbert, with a square tower topped by a striking gilded pyramid. 27th Street passes one block north of Madison Square Park and culminates at Bellevue Hospital Center on First Avenue.
31st Street begins on the West Side at the West Side Yard, while 32nd Street, which includes a segment officially
known as Korea Way between Fifth Avenue and Broadway in Manhattan's Koreatown, begins at the entrance to Penn Station
and Madison Square Garden. On the East Side, both streets end at Second Avenue at Kips Bay Towers and NYU Medical
Center which occupy the area between 30th and 34th Streets. The Catholic church of St. Francis of Assisi is situated
at 135–139 West 31st Street. At 210 West is the Capuchin Monastery of St. John the Baptist, part of St. John the
Baptist Church on 30th Street. At the corner of Broadway and West 31st Street is the Grand Hotel. The former Hotel
Pierrepont was located at 43 West 32nd Street, The Continental NYC tower is at the corner of Sixth Avenue and 32nd
Street. 29 East 32nd Street was the location of the first building owned by the Grolier Club between 1890 and 1917.
35th Street runs from FDR Drive to Eleventh Avenue. Notable locations include East River Ferry, LaptopMD headquarters,
Mercy College Manhattan Campus, and Jacob K. Javits Convention Center. A section of East 58th Street 40°45′40.3″N
73°57′56.9″W / 40.761194°N 73.965806°W / 40.761194; -73.965806 between Lexington and Second Avenues is known as
Designers' Way and features a number of high-end interior design and decoration establishments. 90th Street is split into two segments. The first segment, West 90th Street, begins at Riverside Drive and ends at Central Park
West or West Drive, when it is open, in Central Park on the Upper West Side. The second segment of East 90th Street
begins at East Drive, at Engineers Gate of Central Park. When East Drive is closed, East 90th Street begins at Fifth
Avenue on the Upper East Side and curves to the right at the FDR Drive, becoming East End Avenue. Our Lady of Good Counsel Church is located on East 90th Street between Third Avenue and Second Avenue, across the street from Ruppert Towers (1601 and 1619 Third Avenue) and Ruppert Park. Asphalt Green is located on East 90th Street between York Avenue and East End Avenue. 112th Street starts in Morningside Heights and runs from Riverside Drive to Amsterdam
Avenue, where it meets the steps of the Cathedral of Saint John the Divine. The street resumes at the eastern edge
of Morningside Park and extends through Harlem before ending at First Avenue adjacent to Thomas Jefferson Park in East
Harlem. Notable locations include: 114th Street marks the southern boundary of Columbia University’s Morningside
Heights Campus and is the location of Butler Library, which is the University’s largest. Above 114th Street between
Amsterdam Avenue and Morningside Drive, there is a private indoor pedestrian bridge connecting two buildings on the
campus of St. Luke's–Roosevelt Hospital Center. 40°48′27″N 73°57′18″W / 40.8076°N 73.9549°W / 40.8076; -73.9549
120th Street traverses the neighborhoods of Morningside Heights, Harlem, and Spanish Harlem. It begins on Riverside
Drive at the Interchurch Center. It then runs east between the campuses of Barnard College and the Union Theological
Seminary, then crosses Broadway and runs between the campuses of Columbia University and Teacher's College. The street
is interrupted by Morningside Park. It then continues east, eventually running along the southern edge of Marcus
Garvey Park, passing by 58 West, the former residence of Maya Angelou. It then continues through Spanish Harlem;
when it crosses Pleasant Avenue, it becomes a two-way street and continues nearly to the East River, where for automobiles,
it turns north and becomes Paladino Avenue, and for pedestrians, continues as a bridge across FDR Drive. 40°48′32″N
73°57′14″W / 40.8088°N 73.9540°W / 40.8088; -73.9540 122nd Street is divided into three noncontiguous segments,
E 122nd Street, W 122nd Street, and W 122nd Street Seminary Row, by Marcus Garvey Memorial Park and Morningside Park.
E 122nd Street runs four blocks (2,250 feet (690 m)) west from the intersection of Second Avenue and terminates at
the intersection of Madison Avenue at Marcus Garvey Memorial Park. This segment runs in East Harlem and crosses portions
of Third Avenue, Lexington, and Park (Fourth Avenue). W 122nd Street runs six blocks (3,280 feet (1,000 m)) west
from the intersection of Mount Morris Park West at Marcus Garvey Memorial Park and terminates at the intersection
of Morningside Avenue at Morningside Park. This segment runs in the Mount Morris Historical District and crosses
portions of Lenox Avenue (Sixth Avenue), Seventh Avenue, Frederick Douglass Boulevard (Eighth Avenue), and Manhattan
Avenue. W 122nd Street Seminary Row runs three blocks (1,500 feet (460 m)) west from the intersection of Amsterdam
Avenue (Tenth Avenue) and terminates at the intersection of Riverside Drive. East of Amsterdam, Seminary Row bends
south along Morningside Park and is re-signed as Morningside Drive (Ninth Avenue). Seminary Row runs in Morningside
Heights, the district surrounding Columbia University, and crosses portions of Broadway and Claremont Avenue. Seminary
Row is named for the Union Theological Seminary and the Jewish Theological Seminary which it touches. Seminary Row
also runs by the Manhattan School of Music, Riverside Church, Sakura Park, Grant's Tomb, and Morningside Park. 122nd
Street is mentioned in the movie Taxi Driver by main character Travis Bickle as the location where a fellow cab driver
is assaulted with a knife. The street and the surrounding neighborhood of Harlem is then referred to as "Mau Mau
Land" by another character named Wizard, slang indicating it is a majority black area. 40°48′47″N 73°57′27″W / 40.813°N
73.9575°W / 40.813; -73.9575 La Salle Street is a street in West Harlem that runs just two blocks between Amsterdam
Avenue and Claremont Avenue. West of Convent Avenue, 125th Street was re-routed onto the old Manhattan Avenue. The
original 125th Street west of Convent Avenue was swallowed up to make the super-blocks where the low income housing
projects now exist. La Salle Street is the only vestige of the original routing. 40°48′52″N 73°56′53″W / 40.814583°N
73.947944°W / 40.814583; -73.947944 132nd Street runs east-west above Central Park and is located in Harlem just
south of Hamilton Heights. The main portion of 132nd Street runs eastbound from Frederick Douglass Boulevard to the northern end of Park Avenue, where there is a southbound exit from/entrance to the Harlem River Drive. After an interruption
from St. Nicholas Park and City College, there is another small stretch of West 132nd Street between Broadway and
Twelfth Avenue. The 132nd Street Community Garden is located on 132nd Street between Adam Clayton Powell Jr. Boulevard
and Malcolm X Boulevard. In 1997, the lot received a garden makeover; the Borough President's office funded the installation
of a $100,000 water distribution system that keeps the wide variety of trees green. The garden also holds a goldfish
pond and several benches. The spirit of the neighborhood lives in gardens like this one, planted and tended by local
residents. The Manhattanville Bus Depot (formerly known as the 132nd Street Bus Depot) is located on West 132nd and
133rd Street between Broadway and Riverside Drive in the Manhattanville neighborhood. 155th Street is a major crosstown
street considered to form the boundary between Harlem and Washington Heights. It is the northernmost of the 155 crosstown
streets mapped out in the Commissioners' Plan of 1811 that established the numbered street grid in Manhattan. 155th
Street starts on the West Side at Riverside Drive, crossing Broadway, Amsterdam Avenue and St. Nicholas Avenue. At
St. Nicholas Place, the terrain drops off steeply, and 155th Street is carried on a 1,600-foot (490 m) long viaduct,
a City Landmark constructed in 1893, that slopes down towards the Harlem River, continuing onto the Macombs Dam Bridge,
crossing over (but not intersecting with) the Harlem River Drive. A separate, unconnected section of 155th Street
runs under the viaduct, connecting Bradhurst Avenue and the Harlem River Drive. 181st Street is a major thoroughfare
running through the Washington Heights neighborhood. It runs from the Washington Bridge in the east, to the Henry
Hudson Parkway in the west, near the George Washington Bridge and the Hudson River. The west end is called Plaza
Lafayette. West of Fort Washington Avenue, 181st Street is largely residential, bordering Hudson Heights and having
a few shops to serve the local residents. East of Fort Washington Avenue, the street becomes increasingly commercial,
becoming dominated entirely by retail stores where the street reaches Broadway and continues as such until reaching
the Harlem River. It is the area's major shopping district. 181st Street is served by two New York City Subway lines;
there is a 181st Street station at Fort Washington Avenue on the IND Eighth Avenue Line (A trains) and a 181st Street
station at St. Nicholas Avenue on the IRT Broadway – Seventh Avenue Line (1 trains). The stations are about 500 metres
(550 yd) from each other and are not connected. The George Washington Bridge Bus Terminal is a couple of blocks south
on Fort Washington Avenue. 181st Street is also the last south/west exit in New York on the Trans-Manhattan Expressway
(I-95), just before crossing the George Washington Bridge to New Jersey. 187th Street crosses Washington Heights, running from Laurel Hill Terrace in the east to Chittenden Avenue in the west near the George Washington Bridge
and Hudson River. The street is interrupted by a long set of stairs east of Fort Washington Avenue leading to the
Broadway valley. West of there, it is mostly lined with store fronts and serves as a main shopping district for the
Hudson Heights neighborhood. 187th Street intersects with, from East to West, Laurel Hill Terrace, Amsterdam Avenue,
Audubon Avenue, St. Nicholas Avenue, Wadsworth Avenue, Broadway, Bennett Avenue, Overlook Terrace, Fort Washington
Avenue, Pinehurst Avenue, Cabrini Boulevard and Chittenden Avenue. The many institutions on 187th Street include
Mount Sinai Jewish Center, the Dombrov Shtiebel, and the uptown campus of Yeshiva University. The local public elementary
school P.S. 187 is located on Cabrini Boulevard, just north of the eponymous 187th Street.
The brain is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. Only
a few invertebrates such as sponges, jellyfish, adult sea squirts and starfish do not have a brain; diffuse or localised
nerve nets are present instead. The brain is located in the head, usually close to the primary sensory organs for
such senses as vision, hearing, balance, taste, and smell. The brain is the most complex organ in a vertebrate's
body. In a typical human, the cerebral cortex (the largest part) is estimated to contain 15–33 billion neurons, each
connected by synapses to several thousand other neurons. These neurons communicate with one another by means of long
protoplasmic fibers called axons, which carry trains of signal pulses called action potentials to distant parts of
the brain or body targeting specific recipient cells. Physiologically, the function of the brain is to exert centralized
control over the other organs of the body. The brain acts on the rest of the body both by generating patterns of
muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid
and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can
be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex
sensory input requires the information integrating capabilities of a centralized brain. The operations of individual
brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions has yet to be worked out. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism
from an electronic computer, but similar in the sense that it acquires information from the surrounding world, stores
it, and processes it in a variety of ways, analogous to the central processing unit (CPU) in a computer. This article
compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates.
It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain
differs from other brains are covered in the human brain article. Several topics that might be covered here are instead
covered there because much more can be said about them in a human context. The most important is brain disease and
the effects of brain damage, covered in the human brain article because the most common diseases of the human brain
either do not show up in other species, or else manifest themselves in different ways. The shape and size of the
brain varies greatly in different species, and identifying common features is often difficult. Nevertheless, there
are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain
structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more
primitive ones, or distinguish vertebrates from invertebrates. The simplest way to gain information about brain anatomy
is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural
state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced
apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter,
with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by
staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules
are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope,
and to trace the pattern of connections from one brain area to another. The brains of all species are composed primarily
of two broad classes of cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several
types, and perform a number of critical functions, including structural support, metabolic support, insulation, and
guidance of development. Neurons, however, are usually considered the most important cells in the brain. The property
that makes neurons unique is their ability to send signals to specific target cells over long distances. They send
these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects,
usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body.
The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral
cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become
a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form
of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along
the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100
per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst
of action potentials. Axons transmit signals to other neurons by means of specialized junctions called synapses.
A single axon may make as many as several thousand synaptic connections with other cells. When an action potential,
traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The
neurotransmitter binds to receptor molecules in the membrane of the target cell. Synapses are the key functional
elements of the brain. The essential function of the brain is cell-to-cell communication, and synapses are the points
at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses;
even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are
excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems
that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically
modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals
that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary
mechanism for learning and memory. Most of the space in the brain is taken up by axons, which are often bundled together
in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which
serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons.) Myelin is white,
making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast
to the darker-colored grey matter that marks areas with high densities of neuron cell bodies. Except for a few primitive
organisms such as sponges (which have no nervous system) and cnidarians (which have a nervous system consisting of
a diffuse nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric
body shape (that is, left and right sides that are approximate mirror images of each other). All bilaterians are
thought to have descended from a common ancestor that appeared early in the Cambrian period, 485-540 million years
ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body.
At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture
of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut
cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment,
with an especially large ganglion at the front, called the brain. The brain is small and simple in some species,
such as nematode worms; in other species, including vertebrates, it is the most complex organ in the body. Some types
of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain".
There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms, tunicates, and
acoelomorphs (a group of primitive flatworms). It has not been definitively established whether the existence of
these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved
in a way that led to the disappearance of a previously existing brain structure. Two groups of invertebrates have
notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids,
and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend
through the body of the animal. Arthropods have a central brain with three divisions and large optical lobes behind
each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates.
There are several invertebrate species whose brains have been studied intensively because they have properties that
make them convenient for experimental work. The first vertebrates appeared over 500 million years ago (Mya), during
the Cambrian period, and may have resembled the modern hagfish in form. Sharks appeared about 450 Mya, amphibians
about 400 Mya, reptiles about 350 Mya, and mammals about 200 Mya. Each species has an equally long evolutionary history,
but the brains of modern hagfishes, lampreys, sharks, amphibians, reptiles, and mammals show a gradient of size and
complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical
components, but many are rudimentary in the hagfish, whereas in mammals the foremost part (the telencephalon) is
greatly elaborated and expanded. Brains are most simply compared in terms of their size. The relationship between
brain size, body size and other variables has been studied across a wide range of vertebrate species. As a rule,
brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have
larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass
essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but
every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior.
For example, primates have brains 5 to 10 times larger than the formula predicts. Predators tend to have larger brains
than their prey, relative to body size. All vertebrate brains share a common underlying form, which appears most
clearly during early stages of embryonic development. In its earliest form, the brain appears as three swellings
at the front end of the neural tube; these swellings eventually become the forebrain, midbrain, and hindbrain (the
prosencephalon, mesencephalon, and rhombencephalon, respectively). At the earliest stages of brain development, the
three areas are roughly equal in size. In many classes of vertebrates, such as fish and amphibians, the three parts
remain similar in size in the adult, but in mammals the forebrain becomes much larger than the other parts, and the
midbrain becomes very small. The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish
on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded
by a system of connective tissue membranes called meninges that separate the skull from the brain. Blood vessels
enter the central nervous system through holes in the meningeal layers. The cells in the blood vessel walls are joined
tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though
at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases
of the brain). Neuroanatomists usually divide the vertebrate brain into six main regions: the telencephalon (cerebral
hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons, and medulla oblongata.
Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar
cortex, consist of layers that are folded or convoluted to fit within the available space. Other parts, such as the
thalamus and hypothalamus, consist of clusters of many small nuclei. Thousands of distinguishable areas can be identified
within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity. Although
the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to
substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic
components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain
has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure.
These distortions can make it difficult to match brain components from one species with those of another species.
The most obvious difference between the brains of mammals and other vertebrates is in terms of size. On average,
a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that
of a reptile of the same body size. Size, however, is not the only difference: there are also substantial differences
in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic
differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex
is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the
cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium
has evolved into a complex six-layered structure called the neocortex or isocortex. Several areas at the edge of the neocortex,
including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates.
The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which
plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many
of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large
portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates.
The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally
larger in proportion to body size. The most widely accepted way of comparing brain sizes across species is the so-called
encephalization quotient (EQ), which takes into account the nonlinearity of the brain-to-body relationship. Humans
have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values
higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially
lower. Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially
the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes
at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual
processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries
out functions that include planning, working memory, motivation, attention, and executive control. It takes up a
much larger proportion of the brain for primates than for other species, and an especially large fraction of the
human brain. For vertebrates, the early stages of neural development are similar across all species. As the embryo
transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline
of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward
to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord
of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three
vesicles that are the precursors of the forebrain, midbrain, and hindbrain. At the next stage, the forebrain splits
into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures)
and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits
into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the
medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated;
the resulting cells then migrate, sometimes for long distances, to their final positions. Once a neuron is in place,
it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from
the cell body and need to reach specific targets, grow in a particularly complex way. The tip of a growing axon consists
of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment,
causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular
direction at each point along its path. The result of this pathfinding process is that the growth cone navigates
through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses.
Considering the entire brain, thousands of genes create products that influence axonal pathfinding. In humans and
many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more
neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout
life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in
the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a
role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early
childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body,
they are generated throughout the lifespan. The functions of the brain depend on the ability of neurons to transmit
electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received
from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic
processes, most notably the interactions between neurotransmitters and receptors that take place at synapses. Neurotransmitters
are chemicals that are released at a synapse when an action potential activates it; they attach themselves
to receptor molecules on the membrane of the synapse's target cell, and thereby alter the electrical or chemical
properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter,
or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known
as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority
of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such
as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others. The two neurotransmitters
that are used most widely in the vertebrate brain are glutamate, which almost always exerts excitatory effects on
target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters
can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend
to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers
exert their sedative effects by enhancing the effects of GABA. There are dozens of other chemical neurotransmitters
that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for
example—the primary target of antidepressant drugs and many dietary aids—comes exclusively from a small brainstem
area called the Raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small
area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources
in the brain, but are not as ubiquitously distributed as glutamate and GABA. As a side effect of the electrochemical
processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers
of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside
the skull, using electroencephalography (EEG) or magnetoencephalography (MEG). EEG recordings, along with recordings
made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal
is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity,
which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves
during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity
when the animal is actively engaged in a task. During an epileptic seizure, the brain's inhibitory control mechanisms
fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave
and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions
of individual neurons is a major focus of current research in neurophysiology. All vertebrates have a blood–brain
barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body.
Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds
neurons, including levels of ions and nutrients. Brain tissue consumes a large amount of energy in proportion to
its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for
example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most
of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most
vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage
is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time,
but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis
for the functional brain imaging methods PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent
metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions
from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids. From an
evolutionary-biological perspective, the function of the brain is to provide coherent control over the actions of
an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli
impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body
from acting at cross-purposes to each other. The invention of electronic computers in the 1940s, along with the development
of mathematical information theory, led to a realization that brains can potentially be understood as information
processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field
now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated
the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer
and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded
from behaving animals has steadily moved theoretical concepts in the direction of increasing realism. The essence
of the information processing approach is to try to understand brain function in terms of information flow and implementation
of algorithms. One of the most influential early contributions was a 1959 paper titled "What the frog's eye tells the frog's brain": the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and
came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a
way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells
in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field
of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells
that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances
from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated
to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract
types of cognition such as space. Furthermore, even single neurons appear to be complex and capable of performing
computations. Brain models that do not reflect this are arguably too abstract to be representative of brain operation, while models that do try to capture it are computationally expensive and arguably intractable with present computational resources. Nevertheless, the Human Brain Project is attempting to build a realistic, detailed computational model of the entire human brain. It remains to be seen what the project can achieve in its time frame, and its wisdom has been publicly contested, with high-profile scientists on both
sides of the argument. One of the primary functions of a brain is to extract biologically relevant information from
sensory inputs. The human brain is provided with information about light, sound, the chemical composition of the
atmosphere, temperature, head orientation, limb position, the chemical composition of the bloodstream, and more.
In other animals additional senses may be present, such as the infrared heat-sense of snakes, the magnetic field
sense of some birds, or the electric field sense of some types of fish. Moreover, other animals may develop existing
sensory systems in new ways, such as the adaptation by bats of the auditory sense into a form of sonar. One way or
another, all of these sensory modalities are initially detected by specialized sensors that project signals into
the brain. Each sensory system begins with specialized receptor cells, such as light-receptive neurons in the retina
of the eye, vibration-sensitive neurons in the cochlea of the ear, or pressure-sensitive neurons in the skin. The
axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order
sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order
sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals
are sent to the cerebral cortex, where they are processed to extract biologically relevant features, and integrated
with signals coming from other sensory systems. Motor systems are areas of the brain that are directly or indirectly
involved in producing body movements, that is, in activating muscles. Except for the muscles that control the eye,
which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor
neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to
the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses,
and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from
the brain allow for more sophisticated control. The brain contains several motor areas that project directly to the
spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such
as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which
is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex,
a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to
the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal
tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements.
Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most
important secondary areas are the premotor cortex, basal ganglia, and cerebellum. In addition to all of the above,
the brain and spinal cord contain extensive circuitry to control the autonomic nervous system, which works by secreting
hormones and by modulating the "smooth" muscles of the gut. The autonomic nervous system affects heart rate, digestion,
respiration rate, salivation, perspiration, urination, sexual arousal, and several other processes. Most of its
functions are not under direct voluntary control. A key component of the arousal system is the suprachiasmatic nucleus
(SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes
cross. The SCN contains the body's central biological clock. Neurons there show activity levels that rise and fall
with a period of about 24 hours (circadian rhythms); these activity fluctuations are driven by rhythmic changes in
expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed
in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic
tract (RHT), that allows daily light-dark cycles to calibrate the clock. The SCN projects to a set of areas in the
hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component
of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the
lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals
to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma. Sleep involves
great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during
sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are
two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in
slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be
measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex
takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of
the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM
sleep; levels of acetylcholine show the reverse pattern. For any animal, survival requires maintaining a variety
of parameters of bodily state within a limited range of variation: these include temperature, water content, salt
concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal
to regulate the internal environment of its body—the milieu intérieur, as pioneering physiologist Claude Bernard
called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of
the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from
its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward
its optimum value. (This principle is widely used in engineering, for example in the control of temperature using
a thermostat.) In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region
at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus
is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions
relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them
relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels,
conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These
hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. Some of
the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus.
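The negative-feedback principle described above can be sketched as a toy control loop. This is purely illustrative: the set-point, gain, and starting value below are made-up numbers, not a model of any real hypothalamic circuit.

```python
# Toy negative-feedback loop, illustrating the homeostatic principle:
# a sensor compares a parameter to its set-point, and the error signal
# drives a response that shifts the parameter back toward its optimum.
# All numbers here are illustrative assumptions.

def regulate(value, set_point, gain=0.5, steps=20):
    """Nudge `value` back toward `set_point` via negative feedback."""
    history = [value]
    for _ in range(steps):
        error = set_point - value      # sensor: deviation from set-point
        value += gain * error          # response: move back toward optimum
        history.append(value)
    return history

trace = regulate(value=35.0, set_point=37.0)   # e.g. body temperature in °C
print(round(trace[-1], 3))  # converges near the 37.0 set-point
```

Because each step corrects a fixed fraction of the remaining error, the deviation shrinks geometrically, which is the defining behavior of a negative-feedback system such as a thermostat.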
The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes
in cellular activity. Most organisms studied to date utilize a reward–punishment mechanism: for instance, worms and
insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward-punishment
system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of
interconnected areas at the base of the forebrain. There is substantial evidence that the basal ganglia are the central
site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems
in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed
to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia
receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment
mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the
neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either
cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced. Almost all animals
are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because
behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Theorists
dating back to Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are
expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support
the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon
now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted
for at least several days. Since then technical advances have made these sorts of experiments much easier to carry
out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered
other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus,
basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial
role in the process. The field of neuroscience encompasses all approaches that seek to understand the brain and the
rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline
that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry,
the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify
neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial
intelligence and similar fields) and philosophy. The oldest method of studying the brain is anatomical, and until
the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains
and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure
of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal
neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has
allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical
imaging techniques to correlate variations in human brain structure with differences in cognition or behavior. Neurophysiologists
study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording
devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings
of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside
the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons.
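As a rough illustration of how such extracellular recordings are screened for the action potentials of individual neurons, here is a minimal threshold-crossing spike detector. The voltage trace and threshold are synthetic assumptions; real analysis pipelines add filtering and spike sorting on top of this basic step.

```python
# Threshold-crossing spike detection: the simplest way an extracellular
# voltage trace is screened for action potentials. The trace below is
# synthetic, not real data.

def detect_spikes(trace, threshold):
    """Return indices where the signal first crosses above `threshold`."""
    spikes = []
    above = False
    for i, v in enumerate(trace):
        if v >= threshold and not above:   # upward crossing: count one spike
            spikes.append(i)
        above = v >= threshold
    return spikes

# Synthetic trace: baseline noise with two spike-like excursions.
trace = [0.1, -0.2, 0.0, 2.5, 3.0, 0.2, -0.1, 0.0, 2.8, 0.1]
print(detect_spikes(trace, threshold=2.0))  # -> [3, 8]
```

Tracking the `above` flag ensures that a spike whose peak spans several samples is counted once, at its first threshold crossing, rather than once per sample.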
Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity
from animals that are awake and behaving without causing distress. The same techniques have occasionally been used
to study brain activity in human patients suffering from intractable epilepsy, in cases where there was a medical
necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging
techniques such as functional magnetic resonance imaging are also used to study brain activity; these techniques
have mainly been used with human subjects, because they require a conscious subject to remain motionless for long
periods of time, but they have the great advantage of being noninvasive. Another approach to brain function is to
examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges,
surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature
of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes
and other types of brain damage have been a key source of information about brain function. Because there is no ability
to experimentally control the nature of the damage, however, this information is often difficult to interpret. In
animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce
precise patterns of damage and then examine the consequences for behavior. Computational neuroscience encompasses
two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation.
On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making
use of systems of equations that describe their electrochemical activity; such simulations are known as biologically
realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating,
or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but
abstract out much of their biological complexity. The computational functions of the brain are studied both by computer
scientists and neuroscientists. Recent years have seen increasing applications of genetic and genomic techniques
to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity.
The most common subjects are mice, because of the availability of technical tools. It is now possible with relative
ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated
approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate
genes in specific parts of the brain, at specific times. The oldest brain to have been discovered was in Armenia
in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12 to 14-year-old
girl. Although the brain was shriveled, it was well preserved due to the climate found inside the cave. Early
philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart,
and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory
of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver.
Hippocrates, the "father of medicine", came down unequivocally in favor of the brain in his treatise on epilepsy.
The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about
how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating
that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that
nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated
as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until
the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes
and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed
that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors
of humans, and all behaviors of animals, could be explained mechanistically. The first real progress toward a modern
understanding of nervous function, though, came from the investigations of Luigi Galvani, who discovered that a shock
of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time,
each major advance in understanding has followed more or less directly from the development of a new technique of
investigation. Until the early years of the 20th century, the most important advances were derived from new methods
for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains
only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon. Without
such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which
it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist
Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic
structure and pattern of connectivity. In the first half of the 20th century, advances in electronics enabled investigation
of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the
biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse.
These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting
the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep. In the
second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional
brain imaging, and other fields progressively opened new windows into brain structure and function. In the United
States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research,
and to promote funding for such research. In the 21st century, these trends have continued, and several new approaches
have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be
recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered
experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties;
and neuroimaging.
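As a concrete sketch of the equation-based simulation described under computational neuroscience above, here is a minimal leaky integrate-and-fire model. It is a standard simplified "unit" that abstracts away most biological complexity, not a biologically realistic network, and every parameter below is an illustrative assumption.

```python
# Minimal leaky integrate-and-fire neuron: integrate a single equation
# for the membrane potential and emit a spike when it crosses threshold.
# Parameters are illustrative, not fitted to any real neuron.

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0):
    """Integrate dV/dt = (v_rest - V + I) / tau; spike on threshold."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(current):
        v += dt * (v_rest - v + i_in) / tau   # leaky integration
        if v >= v_thresh:                     # threshold crossing: spike
            spike_times.append(step * dt)
            v = v_reset                       # reset after the spike
    return spike_times

# Constant drive strong enough to push V past threshold repeatedly.
spikes = simulate_lif([20.0] * 1000)
print(len(spikes))  # fires periodically under constant input
```

Biologically realistic simulations replace this single equation with systems of equations (e.g. Hodgkin-Huxley-style ion-channel dynamics), but the structure of the computation, numerical integration of membrane state over time, is the same.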
Near East (French: Proche-Orient) is a geographical term that roughly encompasses Western Asia. Despite having varying definitions
within different academic circles, the term was originally applied to the maximum extent of the Ottoman Empire. The
term has fallen into disuse in English, and has been replaced by the term Middle East. The Encyclopædia Britannica
defines the Near East as including Armenia, Azerbaijan, Bahrain, Cyprus, Egypt, Georgia, Iran, Iraq, Israel, Jordan,
Kuwait, Lebanon, Libya, Oman, Palestine, Qatar, Saudi Arabia, Sudan, Syria, Turkey, the United Arab Emirates, the
West Bank, and Yemen. The Food and Agriculture Organization (FAO) of the United Nations defines the region similarly,
but also includes Afghanistan while excluding the countries of North Africa and the Palestinian territories. According
to the National Geographic Society, the terms Near East and Middle East denote the same territories and are 'generally
accepted as comprising the countries of the Arabian Peninsula, Cyprus, Egypt, Iraq, Iran, Israel, Jordan, Lebanon,
Palestinian territories, Syria, and Turkey'. At the beginning of the 19th century the Ottoman Empire included all
of the Balkan Peninsula north to the southern edge of the Hungarian Plain, but by 1914 had lost all of it except
Constantinople and Eastern Thrace to the rise of Balkan nationalism, which saw the independence of Greece, Serbia,
the Danubian Principalities and Bulgaria. Up until 1912 the Ottomans retained a band of territory including Albania,
Macedonia and Thrace, which were lost in the two Balkan Wars of 1912–13. The Ottoman Empire, believed to be about
to collapse, was portrayed in the press as "the sick man of Europe". The Balkan states, with the partial exception
of Bosnia and Albania, were primarily Christian. Starting in 1894 the Ottomans struck at the Armenians on the explicit
grounds that they were a non-Muslim people and as such were a potential threat to the Muslim empire within which
they resided. The Hamidian Massacres aroused the indignation of the entire Christian world. In the United States
the now aging Julia Ward Howe, author of the Battle Hymn of the Republic, leaped into the war of words and joined
the Red Cross. Relations of minorities within the Ottoman Empire and the disposition of former Ottoman lands became
known as the "Eastern Question," as the Ottomans lay to the east of Europe. It now became relevant to define the
east of the eastern question. In about the middle of the 19th century "Near East" came into use to describe that
part of the east closest to Europe. The term "Far East" appeared contemporaneously meaning Japan, China, Korea, Indonesia
and Viet Nam; in short, the East Indies. "Near East" applied to what had been mainly known as the Levant, which was
in the jurisdiction of the Ottoman Porte, or government. Those who used the term had little choice about its meaning.
They could not set foot on most of the shores of the southern and central Mediterranean from the Gulf of Sidra to
Albania without permits from the Ottoman Empire. Some regions beyond the Ottoman Porte were included. One was North
Africa west of Egypt. It was occupied by piratical kingdoms of the Barbary Coast, de facto independent since the
18th century, though formerly part of the empire at its apogee. Iran was included because it could not easily be reached
except through the Ottoman Empire or neighboring Russia. In the 1890s the term tended to focus on the conflicts in
the Balkan states and Armenia. The demise of the sick man of Europe left considerable confusion as to what was to
be meant by "Near East". It is now generally used only in historical contexts, to describe the countries of Western
Asia from the Mediterranean to (or including) Iran. There is, in short, no universally understood fixed inventory
of nations, languages or historical assets defined to be in it. The geographical terms "Near East" and "Far East"
referring to areas of the globe in or contiguous to the former British Empire and the neighboring colonies of the
Dutch, Portuguese, Spanish and Germans, fit together as a pair based on the opposites of far and near, suggesting
that they were innovated together. They appear together in the journals of the mid-19th century. Both terms were
used before then with local British and American meanings: the near or far east of a field, village or shire. There
was a linguistic predisposition to use such terms. The Romans had used them in near Gaul / far Gaul, near Spain /
far Spain and others. Before them the Greeks had the habit, which appears in Linear B, the oldest known script of
Europe, referring to the near province and the far province of the kingdom of Pylos. Usually these terms were given
with reference to a geographic feature, such as a mountain range or a river. Ptolemy's Geography divided Asia on
a similar basis. In the north is "Scythia this side of the Himalayas" and "Scythia beyond the Himalayas." To the
south is "India on this side of the Ganges" and "India beyond the Ganges." Asia began on the coast of Anatolia ("land
of the rising sun"). Beyond the Ganges and Himalayas (including the Tien Shan) were Serica and Serae (sections of
China) and some other identifiable far eastern locations known to the voyagers and geographers but not to the general
European public. By the time of John Seller's Atlas Maritima of 1670, "India Beyond the Ganges" had become "the East
Indies" including China, Korea, southeast Asia and the islands of the Pacific in a map that was every bit as distorted
as Ptolemy's, despite the lapse of approximately 1500 years. That "east" in turn was only an English translation
of Latin Oriens and Orientalis, "the land of the rising sun," used since Roman times for "east." The world map of
Jodocus Hondius of 1590 labels all of Asia from the Caspian to the Pacific as India Orientalis, shortly to appear
in translation as the East Indies. Elizabeth I of England, primarily interested in trade with the east, collaborated
with English merchants to form the first trading companies to the far-flung regions, using their own jargon. Their
goals were to obtain trading concessions by treaty. The queen chartered the Company of Merchants of the Levant, shortened
to Levant Company, and soon known also as The Turkey Company, in 1581. In 1582, the ship The Great Susan transported
the first ambassador, William Harborne, to the Ottoman Porte (government of the Ottoman Empire) at Constantinople.
Compared to Anatolia, Levant also means "land of the rising sun," but where Anatolia always only meant the projection
of land currently occupied by the Republic of Turkey, Levant meant anywhere in the domain ruled by the Ottoman Porte.
The East India Company (short for a much longer formal name) was chartered in 1600 for trade to the East Indies.
It has pleased western historians to write of a decline of the Ottoman Empire as though a stable and uncontested
polity of that name once existed. The borders did expand and contract but they were always dynamic and always in
"question" right from the beginning. The Ottoman Empire was created from the lands of the former eastern Roman Empire
on the occasion of the latter's violent demise. The last Roman emperor died fighting hand-to-hand in the streets
of his capital, Constantinople, overwhelmed by the Ottoman military, in May, 1453. The victors inherited his remaining
territory in the Balkans. The populations of those lands did not accept Turkish rule. The Turks to them were foreigners
with completely different customs, way of life, and language. Intervals when there was no unrest were rare. The Hungarians
had thrown off Turkish rule by 1688. Serbia was created by the Serbian Revolution, 1815–1833. The Greek War of Independence,
1821–1832, created modern Greece, which recovered most of the lands of ancient Greece, but could not gain Constantinople.
The Ottoman Porte was continuously under attack from some quarter in its empire, primarily the Balkans. Also, on
a number of occasions in the early 19th century, American and British warships had to attack the Barbary pirates
to stop their piracy and recover thousands of enslaved Europeans and Americans. In 1853 the Russian Empire on behalf
of the Slavic Balkan states began to question the very existence of the Ottoman Empire. The result was the Crimean
War, 1853–1856, in which the British Empire and the French Empire supported the Ottoman Empire in its struggle against
the incursions of the Russian Empire. Eventually, the Ottoman Empire lost control of the Balkan region. Until about
1855 the words near east and far east did not refer to any particular region. The far East, a phrase containing a
noun, East, qualified by an adjective, far, could be at any location in the "far east" of the speaker's home territory.
The Ottoman Empire, for example, was the far East as much as the East Indies. The Crimean War brought a change in
vocabulary with the introduction of terms more familiar to the late 19th century. The Russian Empire had entered
a more aggressive phase, becoming militarily active against the Ottoman Empire and also against China, with territorial
aggrandizement explicitly in mind. Rethinking its policy the British government decided that the two polities under
attack were necessary for the balance of power. It therefore undertook to oppose the Russians in both places, one
result being the Crimean War. During that war the administration of the British Empire began promulgating a new vocabulary,
giving specific regional meaning to "the Near East," the Ottoman Empire, and "the Far East," the East Indies. The
two terms were now compound nouns often shown hyphenated. In 1855 a reprint of a letter earlier sent to The Times
appeared in Littell's Living Age. Its author, an "official Chinese interpreter of 10 years' active service" and a
member of the Oriental Club, Thomas Taylor Meadows, was replying to the suggestion by another interpreter that the
British Empire was wasting its resources on a false threat from Russia against China. Toward the end of the letter
he used the new regional vocabulary. Much of the colonial administration belonged to this club, which had been formed by the Duke of Wellington.
Meadows' terminology must represent usage by that administration. If not the first use of the terms, the letter to
the Times was certainly one of the earliest presentations of this vocabulary to the general public. They became immediately
popular, supplanting "Levant" and "East Indies," which gradually receded to minor usages and then began to change
meaning. "Near East" remained popular in diplomatic, trade and journalistic circles, but a variation soon developed
among the scholars and the men of the cloth and their associates: "the Nearer East," reverting to the classical and
then more scholarly distinction of "nearer" and "farther." They undoubtedly saw a need to separate the Biblical lands
from the terrain of the Ottoman Empire. The Christians saw the country as the land of the Old and New Testaments,
where Christianity had developed. The scholars in the field of studies that eventually became Biblical archaeology
attempted to define it on the basis of archaeology. For example, The London Review of 1861 (Telford and Barber, unsigned)
in reviewing several works by Rawlinson, Layard and others, defined the scope of their survey. The regions in their inventory
were Assyria, Chaldea, Mesopotamia, Persia, Armenia, Egypt, Arabia, Syria, Palestine, Ethiopia, Caucasus, Libya,
Anatolia and Abyssinia. Explicitly excluded is India. No mention is made of the Balkans. In The Nearer East (1902), the archaeologist David George Hogarth proceeds to
say where and why in some detail, but no more mention is made of the classics. His analysis is geopolitical. His
map delineates the Nearer East with regular lines as though surveyed. They include Iran, the Balkans, but not the
Danube lands, Egypt, but not the rest of North Africa. Except for the Balkans, the region matches the later Middle
East. It differs from the Ottoman Empire of the times in including Greece and Iran. Hogarth gives no evidence of
being familiar with the contemporaneous initial concept of the Middle East. In the last years of the 19th century
the term "Near East" acquired considerable disrepute in eyes of the English-speaking public as did the Ottoman Empire
itself. The cause of the onus was the Hamidian Massacres of Armenians because they were Christians, but it seemed
to spill over into the protracted conflicts of the Balkans. For a time, "Near East" meant primarily the Balkans.
Robert Hichens' book The Near East (1913) is subtitled Dalmatia, Greece and Constantinople. The change is evident
in the reports of influential British travellers to the Balkans. In 1894, Sir Henry Norman, 1st Baronet, a journalist,
travelled to the Far East, afterwards writing a book called The Peoples and Politics of the Far East, which came
out in 1895. By "Far East" he meant Siberia, China, Japan, Korea, Siam and Malaya. As the book was a big success,
he was off to the Balkan states with his wife in 1896 to develop detail for a sequel, The People and Politics of
the Near East, which Scribners planned to publish in 1897. Mrs. Norman, a writer herself, wrote glowing letters of
the home and person of Mme. Zakki, "the wife of a Turkish cabinet minister," who, she said, was a cultivated woman
living in a country home full of books. As for the natives of the Balkans, they were "a semi-civilized people." The
book was never published. Instead the Normans whirled off to New York. Norman published the gist of his planned travel
book curiously mixed with vituperation against the Ottoman Empire in an article in June, 1896, in Scribner's Magazine.
The empire had descended from an enlightened civilization ruling over barbarians for their own good to something
considerably less. The difference was the Hamidian Massacres, which were being conducted even as the couple traveled
the Balkans. According to Norman now, the empire had been established by "the Moslem horde" from Asia, which was
stopped by "intrepid Hungary." Furthermore, "Greece shook off the turbaned destroyer of her people" and so on. The
Russians were suddenly liberators of oppressed Balkan states. Having portrayed the Armenians as revolutionaries in
the name of freedom with the expectation of being rescued by the intervention of Christian Europe, he states "but
her hope was vain." England had "turned her back." Norman concluded his exhortation with "In the Balkans one learns
to hate the Turk." Norman made sure that Gladstone read the article. Prince Nicolas of Montenegro wrote a letter
thanking him for his article. Throughout this article Norman uses "Near East" to mean the countries where "the eastern
question" applied; that is, to all of the Balkans. The countries and regions mentioned are Greece, Bulgaria, Serbia,
Bosnia-Herzegovina (which was Moslem and needed, in his view, to be suppressed), Macedonia, Montenegro, Albania,
Romania. The rest of the Ottoman domain is demoted to just "the east." If Norman was apparently attempting to change
British policy, it was perhaps William Miller (1864–1945), journalist and expert on the Near East, who did the most
in that direction. In essence, he signed the death warrant, so to speak, of the Age of Empires. The fall of the Ottoman
Empire ultimately enmeshed all the others as well. In the Travel and Politics in the Near East, 1898, Miller claimed
to have made four trips to the Balkans, 1894, 1896, 1897 and 1898, and to be, in essence, an expert on "the Near
East," by which he primarily meant the Balkans. Apart from the fact that he attended Oxford and played Rugby not
many biographical details have been promulgated. He was in effect (whatever his formal associations if any) a point
man of British near eastern intelligence. These were fighting words to be coming from a country that once insisted
Europe needed Turkey and was willing to spill blood over it. For his authority Miller invokes the people, citing
the "collective wisdom" of Europe, and introducing a concept to arise many times in the decades to follow under chilling
circumstances. If the British Empire was now going to side with the Russian Empire, the Ottoman Empire had no choice
but to cultivate a relationship with the Austro-Hungarian Empire, which was supported by the German Empire. In a
few years these alignments became the Triple Entente and the Triple Alliance (already formed in 1882), which were
in part a cause of World War I. By its end in 1918 three empires were gone, a fourth was about to fall to revolution,
and two more, the British and French, were forced to yield in revolutions started under the aegis of their own ideologies.
By 1916, when millions of Europeans were becoming casualties of imperial war in the trenches of eastern and western
Europe over "the eastern question," Arnold J. Toynbee, Hegelesque historian of civilization at large, was becoming
metaphysical about the Near East. Geography alone was not a sufficient explanation of the terms, he believed. If
the Ottoman Empire had been a sick man, then the Near East, in his view, had died with him. From the death of the Near East new nations were able to rise from
the ashes, notably the Republic of Turkey. Paradoxically it now aligned itself with the west rather than with the
east. Mustafa Kemal, its founder, a former Ottoman high-ranking officer, was insistent on this social revolution,
which, among other changes, liberated women from the strait rules still in effect in most Arabic-speaking countries.
The demise of the political Near East now left a gap where it had been, into which stepped the Middle East. The term
middle east as a noun and adjective was common in the 19th century in nearly every context except diplomacy and archaeology.
An uncountable number of places appear to have had their middle easts from gardens to regions, including the United
States. The innovation of the term "Near East" to mean the holdings of the Ottoman Empire as early as the Crimean
War had left a geographical gap. The East Indies, or "Far East," derived ultimately from Ptolemy's "India Beyond
the Ganges." The Ottoman Empire ended at the eastern border of Iraq. "India This Side of the Ganges" and Iran had
been omitted. The archaeologists counted Iran as "the Near East" because Old Persian cuneiform had been found there.
This usage did not sit well with the diplomats; India was left in an equivocal state. They needed a regional term.
The use of the term Middle East as a region of international affairs apparently began in British and American diplomatic
circles quite independently of each other over concern for the security of the same country: Iran, then known to
the west as Persia. In 1900 Thomas Edward Gordon published an article, The Problem of the Middle East.
The threat that caused Gordon, a diplomat and military officer, to publish the article was the resumption of work on a
railway from Russia to the Persian Gulf. Gordon, a published author, had not used the term previously, but he was
to use it from then on. A second strategic personality from American diplomatic and military circles, Alfred Thayer
Mahan, concerned about the naval vulnerability of the trade routes in the Persian Gulf and Indian Ocean, commented on the region's strategic importance in 1902. Apparently the sailor did not connect with the soldier, as Mahan believed he was innovating the term Middle
East. It was, however, already there to be seen. Until the period following World War I the Near East and the Middle
East coexisted, but they were not always seen as distinct. Bertram Lenox Simpson, a colonial officer eventually killed
in China, uses the terms together in his 1910 book, The Conflict of Color, as "the Near and Middle East." The total
super-region consisted of "India, Afghanistan, Persia, Arabistan, Asia Minor, and last, but not least, Egypt." Simpson
(under his pen-name, Weale) explains that this entire region "is politically one region – in spite of the divisions
into which it is academically divided." His own term revives "the Nearer East" as opposed to "the Far East." The
basis of Simpson's unity is color and colonial subjection. His color chart recognizes a spectrum of black, brown
and yellow, which at the time had been traditional since the late 19th century. Apart from these was "the great white
race", which the moderate Simpson tones down to simply the white race. The great whites were appearing as late as
the 1920s works of James Henry Breasted, which were taught as the gospel of ancient history throughout the entire
first half of the 20th century. A red wavelength was mainly of interest in America. The eastern question was modified
by Simpson to "The Problem of the Nearer East," which had nothing to do with the Ottomans but everything to do with
British colonialism. In Simpson's account, the regions of the Nearer East were occupied by "the brown men," with the yellow
in the Far East and the black in Africa. The color issue was not settled until Kenya became independent in 1963,
ending the last vestige of the British Empire. This view reveals a somewhat less than altruistic Christian intent
of the British Empire; however, it was paradoxical from the beginning, as Simpson and most other writers pointed
out. The Ottomans were portrayed as the slavers, but even as the American and British fleets were striking at the
Barbary pirates on behalf of freedom, their countries were promulgating a vigorous African slave trade of their own.
Charles George Gordon is known as the saint of all British colonial officers. A dedicated Christian, he spent his
time between assignments living among the poor and donating his salary on their behalf. He won Ottoman confidence
as a junior officer in the Crimean War. In his later career he became a high official in the Ottoman Empire, working
as Governor of Egypt for the Ottoman khedive for the purpose of conducting campaigns against slavers and slavery
in Egypt and the Sudan. The term "Near and Middle East" held the stage for a few years before World War I. It proved
to be less acceptable to a colonial point of view that saw the entire region as one. In 1916 Captain T.C. Fowle,
40th Pathans (troops of British India), wrote of a trip he had taken from Karachi to Syria just before the war. The
book does not contain a single instance of "Near East." Instead, the entire region is considered "the Middle East."
The formerly Near Eastern sections of his trip are now "Turkish" and not Ottoman. Subsequently, with the disgrace of "Near East" in diplomatic and military circles, "Middle East" prevailed. However, "Near East" continues in some
circles at the discretion of the defining agency or academic department. They are not generally considered distinct
regions as they were when originally defined. Although racial and colonial definitions of the Middle East are
no longer considered ideologically sound, the sentiment of unity persists. For much, but by no means all, of the
Middle East, the predominance of Islam lends some unity, as does the transient accident of geographical continuity.
Otherwise there is but little basis except for history and convention to lump together peoples of multiple, often
unrelated languages, governments, loyalties and customs. In the 20th century after decades of intense warfare and
political turmoil terms such as "Near East", "Far East" and "Middle East" were relegated to the experts, especially
in the new field of political science. The new wave of diplomats often came from those programs. Archaeology on the
international scene, although very much of intellectual interest to the major universities, fell into the shadow
of international relations. Their domain became the Ancient Near East, which could no longer be relied upon to be
the Near East. The Ottoman Empire was gone, along with all the other empires of the 19th century, replaced with independent
republics. Someone had to reconcile the present with the past. This duty was inherited by various specialized agencies
that were formed to handle specific aspects of international relations, now so complex as to be beyond the scope
and abilities of a diplomatic corps in the former sense. The ancient Near East is frozen in time. The living Near
East is primarily what the agencies say it is. In most cases this single term is inadequate to describe the geographical
range of their operations. The result is multiple definitions. The United States is the chief remaining nation to
assign official responsibilities to a region called the Near East. Within the government the State Department has
been most influential in promulgating the Near Eastern regional system. The countries of the former empires of the
19th century have in general abandoned the term and the subdivision in favor of Middle East, North Africa and various
forms of Asia. In many cases, such as France, no distinct regional substructures have been employed. Within the French diplomatic apparatus each country is handled individually, although regional terms, including Proche-Orient and Moyen-Orient, may be
used in a descriptive sense. The most influential agencies in the United States still using Near East as a working
concept are as follows. The Bureau of Near Eastern Affairs, a division of the United States Department of State,
is perhaps the most influential agency to still use the term Near East. Under the Secretary of State, it implements
the official diplomacy of the United States, also called statecraft by Secretary Clinton. The name of the bureau
is traditional and historic. There is, however, no distinct Middle East. All official Middle Eastern affairs are
referred to this bureau. Working closely in conjunction with the definition of the Near East provided by the State
Department is the Near East South Asia Center for Strategic Studies (NESA), an educational institution of the United
States Department of Defense. It teaches courses and holds seminars and workshops for government officials and military
officers who will work or are working within its region. As the name indicates, that region is a combination of State
Department regions; however, NESA is careful to identify the State Department region. As its Near East is not different
from the State Department's, it does not appear in the table. Its name, however, is not entirely accurate. For example,
its region includes Mauritania, a member of the State Department's Africa (Sub-Sahara). The Washington Institute
for Near East Policy (WINEP) is a non-profit organization for research and advice on Middle Eastern policy. It regards
its target countries as the Middle East but adopts the convention of calling them the Near East to be in conformance
with the practices of the State Department. Its views are independent. The WINEP bundles the countries of Northwest
Africa together under "North Africa." Details can be found in Policy Focus #65. The Library of Congress (LoC) is
an institution established by Congress to provide a research library for the government of the United States and
serve as a national library. It is under the supervision of the United States Congress Joint Committee on the Library
and the Librarian of Congress. The Near East is a separate topic and subdivision of the African and Middle Eastern
division. The Middle East contains a Hebraic section consisting of only Israel for a country, but including eleven
modern and ancient languages relating to Judaism, such as Yiddish, a Germanic-derived language of European Jews. The Near East is otherwise
nearly identical to the Middle East, except that it extends partly into Central Asia and the Caucasus, regions that
the State Department considers to be in Asia. The United Nations formulates multiple regional divisions as is convenient
for its various operations. But few of them include a Near East, and that poorly defined. UNICEF recognizes the "Middle
East and North Africa" region, where the Middle East is bounded by the Red Sea on the west and includes Iran on the
east. UNESCO recognizes neither a Near East nor a Middle East, dividing the countries instead among three regions:
Arab States, Asia and the Pacific, and Africa. Its division "does not forcibly reflect geography" but "refers to
the execution of regional activities." The United Nations Statistics Division defines Western Asia to contain the
countries included elsewhere in the Middle East. The Food and Agriculture Organization (FAO) describes its entire
theatre of operations as the Near East, but then assigns many of its members to other regions as well; for example,
Cyprus, Malta and Turkey are in both the European and the Near Eastern regions. Its total area extends further into
Central Asia than that of most agencies. The Central Intelligence Agency (CIA) is a quasi-independent agency of the
United States Government. It answers to multiple lines of authority. On the one hand, its director is appointed by the
president. It plays a significant role in providing the president with intelligence. On the other hand, Congress
oversees its operations through a committee. The CIA was first formed under the National Security Act of 1947 from
the army's Office of Strategic Services (OSS), which furnished both military intelligence and clandestine military
operations to the army during the crisis of World War II. Many revisions and redefinitions have taken place since
then. Although the name of the CIA reflects the original advised intent of Presidents Franklin D. Roosevelt and Harry
S. Truman, the government's needs for strategic services have frustrated that intent from the beginning. The press received by the agency in countless articles, novels and other media has tended to create various popular myths;
for example, that this agency replaced any intelligence effort other than that of the OSS, or that it contains the
central intelligence capability of the United States. Strategic services are officially provided by some 17 agencies
called the Intelligence Community. Army intelligence did not come to an end; in fact, all the branches of the Armed
Forces retained their intelligence services. This community is currently under the leadership (in addition to all
its other leadership) of the Office of the Director of National Intelligence. Under these complex circumstances regional
names are less useful. They are more historical than an accurate gauge of operations. The Directorate of Intelligence,
one of four directorates into which the CIA is divided, includes the Office of Near Eastern and South Asian Analysis
(NESA). Its duties are defined as "support on Middle Eastern and North African countries, as well as on the South
Asian nations of India, Pakistan, and Afghanistan." The total range of countries is in fact the same as the State
Department's Near East, but the names do not correspond. The Near East of the NESA is the same as the Middle East
defined in the CIA-published on-line resource, The World Factbook. Its list of countries is limited by the Red Sea,
comprises the entire eastern coast of the Mediterranean, including Israel, Turkey, the small nations of the Caucasus,
Iran and the states of the Arabian Peninsula. The U.S. Agency for International Development (USAID), an independent
agency under the Department of State established in place of the Marshall Plan for the purpose of determining and
distributing foreign aid, does not use the term Near East. Its definition of Middle East corresponds to that of the
State Department, which officially prefers the term Near East. The Foreign and Commonwealth Office of United Kingdom
recognises a Middle East and North Africa region, but not a Near East. Their original Middle East consumed the Near
East as far as the Red Sea, ceded India to the Asia and Oceania region, and went into partnership with North Africa
as far as the Atlantic. The Ministry of Foreign Affairs of the Republic of Greece conducts "bilateral relationships"
with the countries of the "Mediterranean – Middle East Region" but has formulated no Near East Region. The Ministry
of Foreign Affairs of the Republic of Turkey also does not use the term Near East. Its regions include the Middle
East, the Balkans and others. The Ancient Near East is a term of the 20th century intended to stabilize the geographical
application of Near East to ancient history. The Near East may acquire varying meanings, but the
Ancient Near East always has the same meaning: the ancient nations, people and languages of the enhanced Fertile
Crescent, a sweep of land from the Nile Valley through Anatolia and southward to the limits of Mesopotamia. Resorting
to this verbal device, however, did not protect the "Ancient Near East" from the inroads of "the Middle East." For
example, a high point in the use of "Ancient Near East" for Biblical scholars was Ancient Near Eastern Texts Relating to the Old Testament by James Bennett Pritchard, a textbook first published in 1950. The last great
book written by Leonard Woolley, British archaeologist, excavator of ancient Ur and associate of T.E. Lawrence and
Arthur Evans, was The Art of the Middle East, Including Persia, Mesopotamia and Palestine, published in 1961. Woolley
had completed it in 1960 two weeks before his death. The geographical ranges in each case are identical. Parallel
with the growth of specialized agencies for conducting or supporting statecraft in the second half of the 20th century
has been the collection of resources for scholarship and research typically in university settings. Most universities
teaching the liberal arts have library and museum collections. These are not new; however, the erection of these into "centres" of national and international interest in the second half of the 20th century has created larger
databases not available to the scholars of the past. Many of these focus on the Ancient Near East or Near East in
the sense of Ancient Near East. One such institution is the Centre for the Study of Ancient Documents (CSAD), founded by, and located centrally at, Oxford University, Great Britain. Among its many activities CSAD numbers "a long-term
project to create a library of digitised images of Greek inscriptions." These it arranges by region. The Egypt and
the Near East region besides Egypt includes Cyprus, Persia and Afghanistan but not Asia Minor (a separate region).
A large percentage of experts on the modern Middle East began their training in university departments named for
the Near East. Similarly the journals associated with these fields of expertise include the words Near East or Near
Eastern. The meaning of Near East in these numerous establishments and publications is Middle East. Expertise on
the modern Middle East is almost never mixed or confused with studies of the Ancient Near East, although often "Ancient
Near East" is abbreviated to "Near East" without any implication of modern times. For example, "Near Eastern Languages"
in the ancient sense includes such languages as Sumerian and Akkadian. In the modern sense, it is likely to mean
any or all of the Arabic languages.
Zhejiang, formerly romanized as Chekiang, is an eastern coastal province of China. Zhejiang is bordered by Jiangsu
province and Shanghai municipality to the north, Anhui province to the northwest, Jiangxi province to the west, and
Fujian province to the south; to the east is the East China Sea, beyond which lie the Ryukyu Islands of Japan. The
province's name derives from the Zhe River (浙江, Zhè Jiāng), the former name of the Qiantang River which flows past
Hangzhou and whose mouth forms Hangzhou Bay. It is usually glossed as meaning "Crooked" or "Bent River", from the
meaning of Chinese 折, but is more likely a phono-semantic compound formed from adding 氵 (the "water" radical used
for river names) to phonetic 折 (pinyin zhé but reconstructed Old Chinese *tet), preserving a proto-Wu name of the
local Yue, similar to Yuhang, Kuaiji, and Jiang. Zhejiang was the site of the Neolithic cultures of the Hemudu and
Liangzhu. A 2007 analysis of the DNA recovered from human remains in the archeological sites of prehistoric peoples
along the Yangtze River shows high frequencies of haplogroup O1 in the Liangzhu culture, linking them to Austronesian
and Tai-Kadai peoples. The area of modern Zhejiang was outside the major sphere of influence of the Shang civilization
during the second millennium BC. Instead, this area was populated by peoples collectively known as the Hundred Yue,
including the Dongyue and the Ouyue. The kingdom of Yue began to appear in the chronicles and records written during
the Spring and Autumn Period. According to the chronicles, the kingdom of Yue was located in northern Zhejiang. Shiji
claims that its leaders were descended from Yu the Great, the founder of the Xia dynasty. Evidence suggests that Baiyue and the
kingdom of Yue possessed their own culture and history that are different from those kingdoms in north and central
China, whose cultures and histories were carefully recorded in chronicles and histories during the Spring and Autumn
Period and into the Qin dynasty. The Song of the Yue Boatman (Chinese: 越人歌; pinyin: Yuèrén Gē; lit. "Song of the man of
Yue") was transliterated into Chinese and recorded by authors in north China or inland China of Hebei and Henan around
528 BC. The song shows that the Yue people spoke a language that was mutually unintelligible with the dialects spoken
in north and inland China. The Yue peoples seem to have had their own written script. The Sword of Goujian bears
bird-worm seal script. Yuenü (Chinese: 越女; pinyin: Yuènǚ; Wade–Giles: Yüeh-nü; literally: "the Lady of Yue") was
a swordswoman from the state of Yue. In order to check the growth of the kingdom of Wu, Chu pursued a policy of strengthening
Yue. Under King Goujian, Yue recovered from its early reverses and fully annexed the lands of its rival in 473 BC.
The Yue kings then moved their capital center from their original home around Mount Kuaiji in present-day Shaoxing
to the former Wu capital at present-day Suzhou. With no southern power to turn against Yue, Chu opposed it directly
and, in 333 BC, succeeded in destroying it. Yue's former lands were annexed by the Qin Empire in 222 BC and organized
into a commandery named for Kuaiji in Zhejiang but initially headquartered in Wu in Jiangsu. Kuaiji Commandery was
the initial power base for Xiang Liang and Xiang Yu's rebellion against the Qin Empire which initially succeeded
in restoring the kingdom of Chu but eventually fell to the Han. Under the Later Han, control of the area returned
to the settlement below Mount Kuaiji but authority over the Minyue hinterland was nominal at best and its Yue inhabitants
largely retained their own political and social structures. At the beginning of the Three Kingdoms era (220–280 CE),
Zhejiang was home to the warlords Yan Baihu and Wang Lang prior to their defeat by Sun Ce and Sun Quan, who eventually
established the Kingdom of Wu. Despite the removal of their court from Kuaiji to Jianye (present-day Nanjing), they
continued development of the region and benefitted from influxes of refugees fleeing the turmoil in northern China.
Industrial kilns were established and trade reached as far as Manchuria and Funan (south Vietnam). Zhejiang was part
of the Wu during the Three Kingdoms. Wu (229–280), commonly known as Eastern Wu or Sun Wu, was the most economically developed state among the Three Kingdoms (220–280 CE). The historical novel Romance of the Three Kingdoms records that Zhejiang had the best-equipped and strongest navy. The story depicts how the states of Wei (魏) and Shu (蜀), lacking material resources, avoided direct confrontation with Wu. In armed military conflicts with Wu, the two
states relied intensively on tactics of camouflage and deception to steal Wu's military resources including arrows
and bows. Despite the continuing prominence of Nanjing (then known as Jiankang), the settlement of Qiantang, the
former name of Hangzhou, remained one of the three major metropolitan centers in the south to provide major tax revenue
to the imperial centers in north China. The other two centers in the south were Jiankang and Chengdu. In 589,
Qiantang was raised in status and renamed Hangzhou. Following the fall of Wu and the turmoil of the Wu Hu uprising
against the Jin dynasty (265–420), most elite Chinese families collaborated with the non-Chinese rulers and military conquerors in the north. Others, who may have lost their social privileges, took refuge in areas south of the Yangtze River. Some of the Chinese refugees from north China might have resided in areas near Hangzhou. For example, the
clan of Zhuge Liang (181–234), a chancellor of the state of Shu Han from Central Plain in north China during the
Three Kingdoms period, gathered together on the outskirts of Hangzhou, forming an exclusive, closed settlement, Zhuge Village (Zhuge Cun), consisting of villagers all with the family name "Zhuge". The village has intentionally isolated itself
from the surrounding communities for centuries to this day, and only recently came to be known in public. It suggests
that a small number of powerful, elite Chinese refugees from the Central Plain might have taken refuge south
of the Yangtze River. However, considering the mountainous geography and relative lack of agrarian lands in Zhejiang,
most of these refugees might have resided in some areas in south China beyond Zhejiang, where fertile agrarian lands
and metropolitan resources were available, mainly north Jiangsu, west Fujian, Jiangxi, Hunan, Anhui, and provinces where less cohesive, organized regional governments had been in place. The metropolitan areas of Sichuan were another
hub for refugees, given that the state of Shu had long been founded and ruled by political and military elites from
the Central Plain and north China. Some refugees from the north China might have found residence in south China depending
on their social status and military power in the north. The rump Jin state and the successor Southern Dynasties contended for the support of elite Chinese from the Central Plain and from south of the Yangtze River. Zhejiang, as the heartland of the Jiangnan
(Yangtze River Delta), remained the wealthiest area during the Six Dynasties (220 or 222–589), Sui, and Tang. After
being incorporated into the Sui dynasty, its economic richness was used for the Sui dynasty's ambitions to expand
north and south, particularly into Korea and Vietnam. The plan led the Sui dynasty to restore and expand the network
which became the Grand Canal of China. The Canal regularly transported grains and resources from Zhejiang, through
its metropolitan center Hangzhou (and its hinterland along both the Zhe River and the shores of Hangzhou Bay), and
from Suzhou, and thence to the North China Plain. The debacle of the Sui's Korean campaigns led to its overthrow by the Tang,
who then presided over a centuries-long golden age for the country. Zhejiang was an important economic center of
the empire's Jiangnan East Circuit and was considered particularly prosperous. Throughout the Tang dynasty, the Grand Canal remained effective, transporting grains and material resources to the North China Plain and metropolitan centers
of the empire. As the Tang Dynasty disintegrated, Zhejiang constituted most of the territory of the regional kingdom
of Wuyue. The Song dynasty reëstablished unity around 960. Under the Song, the prosperity of South China began to
overtake that of North China. After the north was lost to the Jurchen Jin dynasty in 1127 following the Jingkang
Incident, Hangzhou became the capital of the Southern Song under the name Lin'an. Renowned for its prosperity and
beauty, it may have been the largest city in the world at the time. From then on, north Zhejiang and neighboring
south Jiangsu have been synonymous with luxury and opulence in Chinese culture. The Mongol conquest and the establishment
of the Yuan dynasty in 1279 ended Hangzhou's political clout, but its economy continued to prosper. Marco Polo visited
the city, which he called "Kinsay" (after the Chinese Jingshi, meaning "Capital City") claiming it was "the finest
and noblest city in the world". Celadon greenware had been made in the area since the 3rd-century
Jin dynasty, but it returned to prominence—particularly in Longquan—during the Southern Song and Yuan. Longquan greenware
is characterized by a thick unctuous glaze of a particular bluish-green tint over an otherwise undecorated light-grey
porcellaneous body that is delicately potted. Yuan Longquan celadons feature a thinner, greener glaze on increasingly
large vessels with decoration and shapes derived from Middle Eastern ceramic and metalwares. These were produced
in large quantities for the Chinese export trade to Southeast Asia, the Middle East, and (during the Ming) Europe.
By the Ming, however, production was notably deficient in quality. It is in this period that the Longquan kilns declined,
to be eventually replaced in popularity and ceramic production by the kilns of Jingdezhen in Jiangxi. "In 1727 the
to-min or 'idle people' of Cheh Kiang province (a Ningpo name still existing), the yoh-hu or 'music people' of Shanxi
province, the si-min or 'small people' of Kiang Su (Jiangsu) province, and the Tanka people or 'egg-people' of Canton
(to this day the boat population there), were all freed from their social disabilities, and allowed to count as free
men." "Cheh Kiang" is another romanization for Zhejiang. The Duomin (Chinese: 惰民; pinyin: duò mín; Wade–Giles: to-min)
are a caste of outcasts in this province. During the First Opium War, the British navy defeated Eight Banners forces
at Ningbo and Dinghai. Under the terms of the Treaty of Nanking, signed in 1842, Ningbo became one of the five Chinese
treaty ports opened to virtually unrestricted foreign trade. Much of Zhejiang came under the control of the Taiping
Heavenly Kingdom during the Taiping Rebellion, which resulted in a considerable loss of life in the province. In
1876, Wenzhou became Zhejiang's second treaty port. During the Second Sino-Japanese War, which led into World War
II, much of Zhejiang was occupied by Japan and placed under the control of the Japanese puppet state known as the
Reorganized National Government of China. Following the Doolittle Raid, most of the American B-25 crews that came
down in China eventually made it to safety with the help of Chinese civilians and soldiers. The Chinese people who
helped them, however, paid dearly for sheltering the Americans. The Imperial Japanese Army began the Zhejiang-Jiangxi
Campaign to intimidate the Chinese out of helping downed American airmen. The Japanese killed an estimated 250,000
civilians while searching for Doolittle’s men. After the People's Republic of China took control of Mainland China
in 1949, the Republic of China government based in Taiwan continued to control the Dachen Islands off the coast of
Zhejiang until 1955, even establishing a rival Zhejiang provincial government there, creating a situation similar
to Fujian province today. During the Cultural Revolution (1966–76), Zhejiang was in chaos and disunity, and its economy
was stagnant, especially during the high tide (1966–69) of the revolution. The agricultural policy favoring grain
production at the expense of industrial and cash crops intensified economic hardships in the province. Mao’s self-reliance
policy and the reduction in maritime trade cut off the lifelines of the port cities of Ningbo and Wenzhou. While
Mao invested heavily in railroads in interior China, no major railroads were built in South Zhejiang, where transportation
remained poor. Zhejiang benefited less from central government investment than some other provinces due to its lack
of natural resources, a location vulnerable to potential flooding from the sea, and an economic base at the national
average. Zhejiang, however, has been an epicenter of capitalist development in China, and has led the nation in the
development of a market economy and private enterprises. Northeast Zhejiang, as part of the Yangtze Delta, is flat,
more developed, and industrial. Zhejiang consists mostly of hills, which account for about 70% of its total area.
Altitudes tend to be highest in the south and west, and the highest peak of the province, Huangmaojian Peak (1,929
meters or 6,329 feet), is located there. Other prominent mountains include Mounts Yandang, Tianmu, Tiantai, and Mogan,
which reach altitudes of 700 to 1,500 meters (2,300 to 4,900 ft). Valleys and plains are found along the coastline
and rivers. The north of the province lies just south of the Yangtze Delta, and consists of plains around the cities
of Hangzhou, Jiaxing, and Huzhou, where the Grand Canal of China enters from the northern border to end at Hangzhou.
Another relatively flat area is found along the Qu River around the cities of Quzhou and Jinhua. Major rivers include
the Qiantang and Ou Rivers. Most rivers carve out valleys in the highlands, with plenty of rapids and other features
associated with such topography. Well-known lakes include the West Lake of Hangzhou and the South Lake of Jiaxing.
There are over three thousand islands along the rugged coastline of Zhejiang. The largest, Zhoushan Island, is Mainland
China's third largest island, after Hainan and Chongming. There are also many bays, of which Hangzhou Bay is the
largest. Zhejiang has a humid subtropical climate with four distinct seasons. Spring starts in March and is rainy
with changeable weather. Summer, from June to September, is long, hot, rainy, and humid. Fall is generally dry, warm
and sunny. Winters are short but cold except in the far south. Average annual temperature is around 15 to 19 °C (59
to 66 °F), average January temperature is around 2 to 8 °C (36 to 46 °F) and average July temperature is around 27
to 30 °C (81 to 86 °F). Annual precipitation is about 1,000 to 1,900 mm (39 to 75 in). There is plenty of rainfall
in early summer, and by late summer Zhejiang is directly threatened by typhoons forming in the Pacific. The eleven
prefecture-level divisions of Zhejiang are subdivided into 90 county-level divisions (36 districts, 20 county-level
cities, 33 counties, and one autonomous county). Those are in turn divided into 1,570 township-level divisions (761
towns, 505 townships, 14 ethnic townships, and 290 subdistricts). Hengdian, which belongs to Jinhua, is the largest base for shooting films and TV dramas in China and is called "China's Hollywood". The politics of Zhejiang is
structured in a dual party-government system like all other governing institutions in Mainland China. The Governor
of Zhejiang is the highest-ranking official in the People's Government of Zhejiang. However, in the province's dual
party-government governing system, the Governor is subordinate to the Zhejiang Communist Party of China (CPC) Provincial
Committee Secretary, colloquially termed the "Zhejiang CPC Party Chief". Several political figures who served as
Zhejiang's top political office of Communist Party Secretary have played key roles in various events in PRC history.
Tan Zhenlin (term 1949-1952), the inaugural Party Secretary, was one of the leading voices against Mao's Cultural
Revolution during the so-called February Countercurrent of 1967. Jiang Hua (term 1956-1968), was the "chief justice"
on the Special Court in the case against the Gang of Four in 1980. Three provincial Party Secretaries since the 1990s
have gone on to prominence at the national level. They include CPC General Secretary and President Xi Jinping (term
2002-2007), National People's Congress Chairman and former Vice-Premier Zhang Dejiang (term 1998-2002), and Zhao
Hongzhu (term 2007-2012), the Deputy Secretary of the Central Commission for Discipline Inspection, China's top anti-corruption
body. Of Zhejiang's fourteen Party Secretaries since 1949, none were native to the province. The province is traditionally
known as the "Land of Fish and Rice". True to its name, rice is the main crop, followed by wheat; north Zhejiang
is also a center of aquaculture in China, and the Zhoushan fishery is the largest fishery in the country. The main
cash crops include jute and cotton, and the province also leads the provinces of China in tea production. (The renowned
Longjing tea is a product of Hangzhou.) Zhejiang's towns have been known for handicraft production of goods such
as silk, for which it is ranked second among the provinces. Its many market towns connect the cities with the countryside.
Ningbo, Wenzhou, Taizhou and Zhoushan are important commercial ports. The Hangzhou Bay Bridge between Haiyan County
and Cixi is the longest bridge over a continuous body of sea water in the world. Zhejiang's main manufacturing sectors
are electromechanical industries, textiles, chemical industries, food, and construction materials. In recent years
Zhejiang has followed its own development model, dubbed the "Zhejiang model", which is based on prioritizing and
encouraging entrepreneurship, an emphasis on small businesses responsive to the whims of the market, large public
investments into infrastructure, and the production of low-cost goods in bulk for both domestic consumption and export.
As a result, Zhejiang has made itself one of the richest provinces, and the "Zhejiang spirit" has become something
of a legend within China. However, some economists now worry that this model is not sustainable: it is inefficient, placing unreasonable demands on raw materials and public utilities, and it may be a dead end, since the myriad small businesses in Zhejiang producing cheap goods in bulk have struggled to move into more sophisticated or technologically advanced industries. The economic heart of Zhejiang is moving from North Zhejiang, centered on Hangzhou, southeastward
to the region centered on Wenzhou and Taizhou. The per capita disposable income of urbanites in Zhejiang reached
24,611 yuan (US$3,603) in 2009, an annual real growth of 8.3%. The per capita net income of rural residents stood
at 10,007 yuan (US$1,465), a real growth of 8.1% year-on-year. Zhejiang's nominal GDP for 2011 was 3.20 trillion
yuan (US$506 billion) with a per capita GDP of 44,335 yuan (US$6,490). In 2009, Zhejiang's primary, secondary, and
tertiary industries were worth 116.2 billion yuan (US$17 billion), 1.1843 trillion yuan (US$173.4 billion), and 982.7
billion yuan (US$143.9 billion) respectively. On Thursday, September 15, 2011, more than 500 people from Hongxiao
Village protested over the large-scale death of fish in a nearby river. Angry protesters stormed the Zhejiang Jinko
Solar Company factory compound, overturned eight company vehicles, and destroyed the offices before police came to
disperse the crowd. Protests continued on the two following nights with reports of scuffles, officials said. Chen
Hongming, a deputy head of Haining's environmental protection bureau, said the factory's waste disposal had failed
pollution tests since April. The environmental watchdog had warned the factory, but it had not effectively controlled
the pollution, Chen added. Han Chinese make up the vast majority of the population, and the largest Han subgroup
are the speakers of Wu varieties of Chinese. There are also 400,000 members of ethnic minorities, including approximately
200,000 She people and approximately 20,000 Hui Chinese. Jingning She Autonomous County in Lishui
is the only She autonomous county in China. The predominant religions in Zhejiang are Chinese folk religions, Taoist
traditions and Chinese Buddhism. According to surveys conducted in 2007 and 2009, 23.02% of the population believes in and practices ancestor veneration, while 2.62% of the population identifies as Christian, down from 3.92% in 2004. The reports did not give figures for other religions; the remaining 74.36% of the population may be either irreligious
or involved in worship of nature deities, Buddhism, Confucianism, Taoism, folk religious sects, and small minorities
of Muslims. In mid-2015 the government of Zhejiang recognised folk religion as a "civil religion", beginning the registration
of more than twenty thousand folk religious associations. Buddhism has had an important presence since its arrival in
Zhejiang 1,800 years ago. Catholicism arrived 400 years ago in the province and Protestantism 150 years ago. Zhejiang
is one of the provinces of China with the largest concentrations of Protestants, especially notable in the city of
Wenzhou. In 1999 Zhejiang's Protestant population comprised 2.8% of the provincial population, a small percentage
but higher than the national average. The rapid development of religions in Zhejiang has driven the local committee
of ethnic and religious affairs to enact measures in 2014 to rationalise them, variously named "Three Rectifications
and One Demolition" operations or "Special Treatment Work on Illegally Constructed Sites of Religious and Folk Religion
Activities" according to the locality. These regulations have led to cases of demolition of churches and folk religion
temples, or the removal of crosses from churches' roofs and spires. An exemplary case was that of the Sanjiang Church.
Islam arrived 1,400 years ago in Zhejiang. Today Islam is practiced by a small number of people including virtually
all the Hui Chinese living in Zhejiang. Another religion present in the province is She shamanism (practiced by the She
ethnic minority). Zhejiang is mountainous and has therefore fostered the development of many distinct local cultures.
Linguistically speaking, Zhejiang is extremely diverse. Most inhabitants of Zhejiang speak Wu, but the Wu dialects
are very diverse, especially in the south, where one valley may speak a dialect completely unintelligible to the
next valley a few kilometers away. Other varieties of Chinese are spoken as well, mostly along the borders; Mandarin
and Huizhou dialects are spoken on the border with Anhui, while Min dialects are spoken on the border with Fujian.
(See Hangzhou dialect, Shaoxing dialect, Ningbo dialect, Wenzhou dialect, Taizhou dialect, Jinhua dialect, and Quzhou
dialect for more information). Throughout history there have been a series of lingua francas in the area to allow
for better communication. The dialects spoken in Hangzhou, Shaoxing, and Ningbo have taken on this role historically.
Since the founding of the People's Republic of China in 1949, Mandarin, which is not mutually intelligible with any
of the local dialects, has been promoted as the standard language of communication throughout China. As a result,
most of the population now can, to some degree, speak and comprehend Mandarin and can code-switch when necessary.
A majority of the population educated since 1978 can speak Mandarin. Urban residents tend to be more fluent in Mandarin
than rural people. Nevertheless, a Zhejiang accent is detectable in almost everyone from the area communicating in
Mandarin, and the home dialect remains an important part of the everyday lives and cultural identities of most Zhejiang
residents. Zhejiang is the home of Yueju (越劇), one of the most prominent forms of Chinese opera. Yueju originated
in Shengzhou and is traditionally performed by actresses only, in both male and female roles. Other important opera
traditions include Yongju (of Ningbo), Shaoju (of Shaoxing), Ouju (of Wenzhou), Wuju (of Jinhua), Taizhou Luantan
(of Taizhou) and Zhuji Luantan (of Zhuji). Longjing tea (also called dragon well tea), originating in Hangzhou, is
one of the most prestigious, if not the most prestigious, Chinese teas. Hangzhou is also renowned for its silk umbrellas
and hand fans. Zhejiang cuisine (itself subdivided into many traditions, including Hangzhou cuisine) is one of the
eight great traditions of Chinese cuisine. Since ancient times, north Zhejiang and neighbouring south Jiangsu have
been famed for their prosperity and opulence, and simply inserting north Zhejiang place names (Hangzhou,
Jiaxing, etc.) into poetry gave an effect of dreaminess, a practice followed by many noted poets. In particular,
the fame of Hangzhou (as well as Suzhou in neighbouring Jiangsu province) has led to the popular saying: "Above there
is heaven; below there is Suzhou and Hangzhou" (上有天堂,下有苏杭), a saying that continues to be a source of pride for the
people of these two still prosperous cities.
The Ministry of Defence (MoD) is the British government department responsible for implementing the defence policy set by
Her Majesty's Government, and is the headquarters of the British Armed Forces. The MoD states that its principal
objectives are to defend the United Kingdom of Great Britain and Northern Ireland and its interests and to strengthen
international peace and stability. With the collapse of the Soviet Union and the end of the Cold War, the MoD does
not foresee any short-term conventional military threat; rather, it has identified weapons of mass destruction, international
terrorism, and failed and failing states as the overriding threats to Britain's interests. The MoD also manages day-to-day
running of the armed forces, contingency planning and defence procurement. During the 1920s and 1930s, British civil
servants and politicians, looking back at the performance of the state during World War I, concluded that there was
a need for greater co-ordination between the three Services that made up the armed forces of the United Kingdom—the
British Army, the Royal Navy, and the Royal Air Force. The formation of a united ministry of defence was rejected
by David Lloyd George's coalition government in 1921; but the Chiefs of Staff Committee was formed in 1923, for the
purposes of inter-Service co-ordination. As rearmament became a concern during the 1930s, Stanley Baldwin created
the position of Minister for Coordination of Defence. Lord Chatfield held the post until the fall of Neville Chamberlain's
government in 1940; his success was limited by his lack of control over the existing Service departments and his
limited political influence. Winston Churchill, on forming his government in 1940, created the office of Minister
of Defence to exercise ministerial control over the Chiefs of Staff Committee and to co-ordinate defence matters.
The post was held by the Prime Minister of the day until Clement Attlee's government introduced the Ministry of Defence
Act of 1946. The new ministry was headed by a Minister of Defence who held a seat in the Cabinet. The three
existing service Ministers—the Secretary of State for War, the First Lord of the Admiralty, and the Secretary of
State for Air—remained in direct operational control of their respective services, but ceased to attend Cabinet.
From 1946 to 1964 five Departments of State did the work of the modern Ministry of Defence: the Admiralty, the War
Office, the Air Ministry, the Ministry of Aviation, and an earlier form of the Ministry of Defence. These departments
merged in 1964; the defence functions of the Ministry of Aviation Supply merged into the Ministry of Defence in 1971.
The Ministers and Chiefs of the Defence Staff are supported by a number of civilian, scientific and professional
military advisors. The Permanent Under-Secretary of State for Defence (generally known as the Permanent Secretary)
is the senior civil servant at the MoD. His or her role is to ensure the MoD operates effectively as a department
of the government. The current Chief of the Defence Staff, the professional head of the British Armed Forces, is
General Sir Nicholas Houghton, late Green Howards. He is supported by the Vice Chief of the Defence Staff, by the
professional heads of the three services of HM Armed Forces and by the Commander of Joint Forces Command. There are
also three Deputy Chiefs of the Defence Staff with particular remits, Deputy Chief of the Defence Staff (Capability),
Deputy CDS (Personnel and Training) and Deputy CDS (Operations). The Surgeon General represents the Defence Medical Services on the Defence Staff and is the clinical head of that service. Additionally, there are a number of Assistant
Chiefs of Defence Staff, including the Assistant Chief of the Defence Staff (Reserves and Cadets) and the Defence
Services Secretary in the Royal Household of the Sovereign of the United Kingdom, who is also the Assistant Chief
of Defence Staff (Personnel). The 1998 Strategic Defence Review and the 2003 Delivering Security in a Changing World
White Paper outlined the intended posture of the British Armed Forces. The MoD has since been regarded as a leader
in elaborating the post-Cold War organising concept of "defence diplomacy". As a result of the Strategic Defence
and Security Review 2010, Prime Minister David Cameron signed a 50-year treaty with French President Nicolas Sarkozy
that would have the two countries co-operate intensively in military matters. The UK is establishing air and naval
bases in the Persian Gulf, located in the UAE and Bahrain. A presence in Oman is also being considered. The Strategic
Defence and Security Review 2015 included £178 billion investment in new equipment and capabilities. The review set
a defence policy with four primary missions for the Armed Forces. Following the end of the Cold War, the threat of direct conventional military confrontation with other states has been replaced by terrorism. Sir Richard Dannatt predicted that British forces would be involved in combating "predatory non-state actors" for the foreseeable future, in
what he called an "era of persistent conflict". He told the Chatham House think tank that the fight against al-Qaeda
and other militant Islamist groups was "probably the fight of our generation". Dannatt criticised a remnant "Cold
War mentality", with military expenditures based on retaining a capability against a direct conventional strategic
threat. He said that only 10% of the MoD's equipment programme budget between 2003 and 2018 was to be invested
in the "land environment"—at a time when Britain was engaged in land-based wars in Afghanistan and Iraq. The Defence
Committee—Third Report "Defence Equipment 2009" cites an article from the Financial Times website stating that the
Chief of Defence Materiel, General Sir Kevin O’Donoghue, had instructed staff within Defence Equipment and Support
(DE&S) through an internal memorandum to reprioritize the approvals process to focus on supporting current operations
over the next three years; deterrence related programmes; those that reflect defence obligations both contractual
or international; and those where production contracts are already signed. The report also cites concerns over potential
cuts in the defence science and technology research budget; implications of inappropriate estimation of Defence Inflation
within budgetary processes; underfunding in the Equipment Programme; and a general concern over striking the appropriate
balance over a short-term focus (Current Operations) and long-term consequences of failure to invest in the delivery
of future UK defence capabilities on future combatants and campaigns. The then Secretary of State for Defence, Bob
Ainsworth MP, reinforced this reprioritisation of focus on current operations and did not rule out "major shifts" in defence spending. In the same article the First Sea Lord and Chief of the Naval Staff, Admiral Sir Mark Stanhope, Royal Navy, acknowledged that there was not enough money within the defence budget and that the service was preparing itself for
tough decisions and the potential for cutbacks. According to figures published by the London Evening Standard the
defence budget for 2009 is "more than 10% overspent" (figures cannot be verified) and the paper states that this
had caused Gordon Brown to say that defence spending must be cut. The MoD has been investing in IT to cut costs
and improve services for its personnel. The Ministry of Defence is one of the United Kingdom's largest landowners,
owning 227,300 hectares of land and foreshore (either freehold or leasehold) as of April 2014, which was valued at "about
£20 billion". The MoD also has "rights of access" to a further 222,000 hectares. In total, this is about 1.8% of
the UK land mass. The total annual cost to support the defence estate is "in excess of £3.3 billion". The defence
estate is divided into training areas & ranges (84.0%), research & development (5.4%), airfields (3.4%), barracks &
camps (2.5%), storage & supply depots (1.6%), and other (3.0%). These are largely managed by the Defence Infrastructure
Organisation. The headquarters of the MoD are in Whitehall and are now known as Main Building. This structure is
neoclassical in style and was originally built between 1938 and 1959 to designs by Vincent Harris to house the Air
Ministry and the Board of Trade. The northern entrance in Horse Guards Avenue is flanked by two monumental statues,
Earth and Water, by Charles Wheeler. Opposite stands the Gurkha Monument, sculpted by Philip Jackson and unveiled
in 1997 by Queen Elizabeth II. Within it is the Victoria Cross and George Cross Memorial, and nearby are memorials
to the Fleet Air Arm and RAF (to its east, facing the riverside). A major refurbishment of the building was completed
under a PFI contract by Skanska in 2004. Henry VIII's wine cellar at the Palace of Whitehall, built in 1514–1516
for Cardinal Wolsey, is in the basement of Main Building, and is used for entertainment. The entire vaulted brick
structure of the cellar was encased in steel and concrete and relocated nine feet (2.7 m) to the west and nearly 19 feet
(5.8 m) deeper in 1949, when construction was resumed at the site after World War II. This was carried out without
any significant damage to the structure. The most notable fraud conviction was that of Gordon Foxley, head of defence
procurement at the Ministry of Defence from 1981 to 1984. Police claimed he received at least £3.5m in total in corrupt
payments, such as substantial bribes from overseas arms contractors aiming to influence the allocation of contracts.
A government report covered by the Guardian in 2002 indicates that between 1940 and 1979, the Ministry of Defence
"turned large parts of the country into a giant laboratory to conduct a series of secret germ warfare tests on the
public" and many of these tests "involved releasing potentially dangerous chemicals and micro-organisms over vast
swaths of the population without the public being told." The Ministry of Defence claims that these trials were to
simulate germ warfare and that the tests were harmless. Still, families living in the areas of many of the tests have reported children with birth defects and physical and mental disabilities, and many are asking for a public inquiry. According to the report, the tests affected an estimated several million people, including one period between 1961
and 1968 where "more than a million people along the south coast of England, from Torquay to the New Forest, were
exposed to bacteria including E. coli and Bacillus globigii, which mimics anthrax." Two scientists commissioned by
the Ministry of Defence stated that these trials posed no risk to the public. This was confirmed by Sue Ellison,
a representative of Porton Down who said that the results from these trials "will save lives, should the country
or our forces face an attack by chemical and biological weapons." Asked whether such tests are still being carried
out, she said: "It is not our policy to discuss ongoing research." It is unknown whether or not the harmlessness
of the trials was known at the time of their occurrence. The MoD has been criticised for an ongoing fiasco, having
spent £240m on eight Chinook HC3 helicopters which only started to enter service in 2010, years after they were ordered
in 1995 and delivered in 2001. A National Audit Office report reveals that the helicopters have been stored in air-conditioned hangars in Britain since their 2001 delivery, while troops in Afghanistan have been forced to rely
on helicopters which are flying with safety faults. By the time the Chinooks are airworthy, the total cost of the
project could be as much as £500m. In April 2008, a £90m contract was signed with Boeing for a "quick fix" solution,
so they can fly by 2010: QinetiQ will downgrade the Chinooks—stripping out some of their more advanced equipment.
In October 2009, the MoD was heavily criticized for withdrawing the £20m bi-annual non-operational training budget
for the volunteer Territorial Army (TA), ending all non-operational training for 6 months until April 2010. The government
eventually backed down and restored the funding. The TA provides a small percentage of the UK's operational troops.
Its members train on weekly evenings and monthly weekends, as well as in two-week exercises held generally annually, and occasionally twice a year for troops doing other courses. The cuts would have meant a significant loss of personnel and would have
had adverse effects on recruitment. In 2013 it was found that the Ministry of Defence had overspent on its equipment
budget by £6.5bn on orders that could take up to 39 years to fulfil. The Ministry of Defence has been criticised
in the past for poor management and financial control, investing in projects that have taken 10 or even as much as 15 years to be delivered.
The term high definition once described a series of television systems originating from August 1936; however, these systems
were only high definition when compared to earlier systems that were based on mechanical systems with as few as 30
lines of resolution. The ongoing competition between companies and nations to create true "HDTV" spanned the entire
20th century, as each new system offered higher definition than the last. In the early 21st century, this race has continued with 4K, 5K, and current 8K systems. The British high-definition TV service started trials in August 1936
and a regular service on 2 November 1936 using both the (mechanical) Baird 240 line sequential scan (later to be
inaccurately rechristened 'progressive') and the (electronic) Marconi-EMI 405 line interlaced systems. The Baird
system was discontinued in February 1937. In 1938 France followed with their own 441-line system, variants of which
were also used by a number of other countries. The US NTSC 525-line system joined in 1941. In 1949 France introduced
an even higher-resolution standard at 819 lines, a system that would count as high definition even by today's standards, but it was monochrome only, and the technical limitations of the time prevented it from achieving the definition of which it should have been capable. All of these systems used interlacing and a 4:3 aspect ratio except the 240-line system
which was progressive (actually described at the time by the technically correct term "sequential") and the 405-line
system which started as 5:4 and later changed to 4:3. The 405-line system adopted the (at that time) revolutionary
idea of interlaced scanning to overcome the flicker problem of the 240-line with its 25 Hz frame rate. The 240-line
system could have doubled its frame rate but this would have meant that the transmitted signal would have doubled
in bandwidth, an unacceptable option as the video baseband bandwidth was required to be not more than 3 MHz. Colour
broadcasts started at similarly higher resolutions, first with the US NTSC color system in 1953, which was compatible
with the earlier monochrome systems and therefore had the same 525 lines of resolution. European standards did not
follow until the 1960s, when the PAL and SECAM color systems were added to the monochrome 625 line broadcasts. The
Nippon Hōsō Kyōkai (NHK, the Japan Broadcasting Corporation) began conducting research to "unlock the fundamental
mechanism of video and sound interactions with the five human senses" in 1964, after the Tokyo Olympics. NHK set
out to create an HDTV system that scored much higher in subjective tests than NTSC, which had previously been dubbed "HDTV". This new system, NHK Color, created in 1972, included 1,125 lines, a 5:3 aspect ratio and a 60 Hz refresh rate.
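The jump from NTSC's 525 lines to NHK's 1,125 lines can be put in perspective with a little arithmetic: roughly doubling the line count in both dimensions roughly quadruples the detail, which is why the MUSE system described later is said to provide "about four times the resolution". A minimal illustrative sketch (the line counts are from the text; the assumption that horizontal detail scales with the line count is a simplification):

```python
# Rough resolution comparison (illustrative; uses total line counts from the
# text, not active-line counts, and assumes horizontal detail scales likewise).
ntsc_lines = 525    # US NTSC system (1941)
nhk_lines = 1125    # NHK Color / Hi-Vision (1972)

vertical_gain = nhk_lines / ntsc_lines
# If horizontal detail grows in proportion to the line count, overall
# resolution grows roughly with the square of the vertical gain.
area_gain = vertical_gain ** 2

print(f"vertical gain: {vertical_gain:.2f}x")          # ≈ 2.14x
print(f"approximate overall gain: {area_gain:.2f}x")   # ≈ 4.59x, i.e. "about four times"
```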
The Society of Motion Picture and Television Engineers (SMPTE), headed by Charles Ginsburg, became the testing and
study authority for HDTV technology in the international arena. SMPTE would test HDTV systems from different companies
from every conceivable perspective, but the problem of combining the different formats plagued the technology for
many years. There were four major HDTV systems tested by SMPTE in the late 1970s, and in 1979 an SMPTE study group
released A Study of High Definition Television Systems. Since the formal adoption of digital video broadcasting's (DVB) widescreen HDTV transmission modes in the early 2000s, the 525-line NTSC (and PAL-M) systems, as well as the
European 625-line PAL and SECAM systems, are now regarded as standard definition television systems. In 1949, France
started its transmissions with an 819 lines system (with 737 active lines). The system was monochrome only, and was
used only on VHF for the first French TV channel. It was discontinued in 1983. In 1958, the Soviet Union developed
Тransformator (Russian: Трансформатор, meaning Transformer), the first high-resolution (definition) television system
capable of producing an image composed of 1,125 lines of resolution aimed at providing teleconferencing for military
command. It was a research project and the system was never deployed by either the military or consumer broadcasting.
In 1979, the Japanese state broadcaster NHK first developed consumer high-definition television with a 5:3 display
aspect ratio. The system, known as Hi-Vision or MUSE after its Multiple sub-Nyquist sampling encoding for encoding
the signal, required about twice the bandwidth of the existing NTSC system but provided about four times the resolution
(1080i/1125 lines). Satellite test broadcasts started in 1989, with regular testing starting in 1991 and regular
broadcasting of BS-9ch commencing on November 25, 1994, which featured commercial and NHK programming. In 1981, the
MUSE system was demonstrated for the first time in the United States, using the same 5:3 aspect ratio as the Japanese
system. Upon visiting a demonstration of MUSE in Washington, US President Ronald Reagan was impressed and officially
declared it "a matter of national interest" to introduce HDTV to the US. Several systems were proposed as the new
standard for the US, including the Japanese MUSE system, but all were rejected by the FCC because of their higher
bandwidth requirements. At this time, the number of television channels was growing rapidly and bandwidth was already
a problem. A new standard had to be more efficient, needing less bandwidth for HDTV than the existing NTSC. The limited
standardization of analog HDTV in the 1990s did not lead to global HDTV adoption as technical and economic constraints
at the time did not permit HDTV to use bandwidths greater than normal television. Early HDTV commercial experiments,
such as NHK's MUSE, required over four times the bandwidth of a standard-definition broadcast. Despite efforts made
to reduce analog HDTV to about twice the bandwidth of SDTV, these television formats were still distributable only
by satellite. In addition, recording and reproducing an HDTV signal was a significant technical challenge in the
early years of HDTV (Sony HDVS). Japan remained the only country with successful public broadcasting of analog HDTV,
with seven broadcasters sharing a single channel. Since 1972, the International Telecommunication Union's radio telecommunications
sector (ITU-R) had been working on creating a global recommendation for Analog HDTV. These recommendations, however,
did not fit in the broadcasting bands which could reach home users. The standardization of MPEG-1 in 1993 also led
to the acceptance of recommendation ITU-R BT.709. In anticipation of these standards, the Digital Video Broadcasting
(DVB) organisation was formed, an alliance of broadcasters, consumer electronics manufacturers and regulatory bodies.
The DVB develops and agrees upon specifications which are formally standardised by ETSI. DVB first created the standards
for DVB-S digital satellite TV, DVB-C digital cable TV and DVB-T digital terrestrial TV. These broadcasting systems
can be used for both SDTV and HDTV. In the US the Grand Alliance proposed ATSC as the new standard for SDTV and HDTV.
Both ATSC and DVB were based on the MPEG-2 standard, although DVB systems may also be used to transmit video using
the newer and more efficient H.264/MPEG-4 AVC compression standard. Common to all DVB standards is the use of highly efficient modulation techniques to further reduce bandwidth and, above all, to reduce receiver-hardware and antenna
requirements. In 1983, the International Telecommunication Union's radio telecommunications sector (ITU-R) set up
a working party (IWP11/6) with the aim of setting a single international HDTV standard. One of the thornier issues
concerned a suitable frame/field refresh rate, the world already having split into two camps, 25/50 Hz and 30/60
Hz, largely due to the differences in mains frequency. The IWP11/6 working party considered many views and throughout
the 1980s served to encourage development in a number of video digital processing areas, not least conversion between
the two main frame/field rates using motion vectors, which led to further developments in other areas. While a comprehensive
HDTV standard was not in the end established, agreement on the aspect ratio was achieved. Initially the existing
5:3 aspect ratio had been the main candidate but, due to the influence of widescreen cinema, the aspect ratio 16:9
(1.78) eventually emerged as a reasonable compromise between 5:3 (1.67) and the common 1.85 widescreen cinema
format. An aspect ratio of 16:9 was duly agreed upon at the first meeting of the IWP11/6 working party at the BBC's
Research and Development establishment in Kingswood Warren. The resulting ITU-R Recommendation ITU-R BT.709-2 ("Rec.
709") includes the 16:9 aspect ratio, a specified colorimetry, and the scan modes 1080i (1,080 actively interlaced
lines of resolution) and 1080p (1,080 progressively scanned lines). The British Freeview HD trials used MBAFF, which
contains both progressive and interlaced content in the same encoding. The recommendation also includes the alternative 1440×1152
HDMAC scan format. (According to some reports, a mooted 750-line (720p) format (720 progressively scanned lines)
was viewed by some at the ITU as an enhanced television format rather than a true HDTV format, and so was not included,
although 1920×1080i and 1280×720p systems for a range of frame and field rates were defined by several US SMPTE standards.)
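The aspect-ratio compromise described above is easy to verify numerically: 16:9 sits between the 5:3 ratio of early HDTV proposals and the common 1.85 cinema format. A small sketch (all three ratios come from the text; the code itself is just arithmetic):

```python
# Aspect ratios weighed in the ITU-R IWP11/6 discussions (values from the text).
ratios = {
    "5:3 (early HDTV, e.g. NHK/MUSE)": 5 / 3,
    "16:9 (Rec. 709 compromise)": 16 / 9,
    "1.85:1 (common widescreen cinema)": 1.85,
}

for name, value in ratios.items():
    print(f"{name}: {value:.3f}")

# 16:9 ≈ 1.778 lies between 5:3 ≈ 1.667 and 1.85.
assert 5 / 3 < 16 / 9 < 1.85
```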
HDTV technology was introduced in the United States in the late 1980s and made official in 1993 by the Digital HDTV
Grand Alliance, a group of television, electronic equipment, and communications companies consisting of AT&T Bell Labs,
General Instrument, Philips, Sarnoff, Thomson, Zenith and the Massachusetts Institute of Technology. Field testing
of HDTV at 199 sites in the United States was completed August 14, 1994. The first public HDTV broadcast in the United
States occurred on July 23, 1996 when the Raleigh, North Carolina television station WRAL-HD began broadcasting from
the existing tower of WRAL-TV southeast of Raleigh, winning a race to be first with the HD Model Station in Washington,
D.C., which began broadcasting July 31, 1996 with the callsign WHD-TV, based out of the facilities of NBC owned and
operated station WRC-TV. The American Advanced Television Systems Committee (ATSC) HDTV system had its public launch
on October 29, 1998, during the live coverage of astronaut John Glenn's return mission to space on board the Space
Shuttle Discovery. The signal was transmitted coast-to-coast, and was seen by the public in science centers, and
other public theaters specially equipped to receive and display the broadcast. The first HDTV transmissions in Europe,
albeit not direct-to-home, began in 1990, when the Italian broadcaster RAI used the HD-MAC and MUSE HDTV technologies
to broadcast the 1990 FIFA World Cup. The matches were shown in 8 cinemas in Italy and 2 in Spain. The connection
with Spain was made via the Olympus satellite link from Rome to Barcelona and then with a fiber optic connection
from Barcelona to Madrid. After some HDTV transmissions in Europe the standard was abandoned in the mid-1990s. The
first regular broadcasts started on January 1, 2004 when the Belgian company Euro1080 launched the HD1 channel with
the traditional Vienna New Year's Concert. Test transmissions had been active since the IBC exhibition in September
2003, but the New Year's Day broadcast marked the official launch of the HD1 channel, and the official start of direct-to-home
HDTV in Europe. Euro1080, a division of the former and now bankrupt Belgian TV services company Alfacam, broadcast
HDTV channels to break the pan-European stalemate of "no HD broadcasts mean no HD TVs bought means no HD broadcasts
..." and kick-start HDTV interest in Europe. The HD1 channel was initially free-to-air and mainly comprised sporting,
dramatic, musical and other cultural events broadcast with a multi-lingual soundtrack on a rolling schedule of 4
or 5 hours per day. These first European HDTV broadcasts used the 1080i format with MPEG-2 compression on a DVB-S
signal from SES's Astra 1H satellite. Euro1080 transmissions later changed to MPEG-4/AVC compression on a DVB-S2
signal in line with subsequent broadcast channels in Europe. Despite delays in some countries, the number of European
HD channels and viewers has risen steadily since the first HDTV broadcasts, with SES's annual Satellite Monitor market
survey for 2010 reporting more than 200 commercial channels broadcasting in HD from Astra satellites, 185 million
HD capable TVs sold in Europe (60 million in 2010 alone), and 20 million households (27% of all European digital
satellite TV homes) watching HD satellite broadcasts (16 million via Astra satellites). In December 2009 the United
Kingdom became the first European country to deploy high definition content using the new DVB-T2 transmission standard,
as specified in the Digital TV Group (DTG) D-book, on digital terrestrial television. The Freeview HD service currently
contains 10 HD channels (as of December 2013) and was rolled out region by region across the UK in accordance
with the digital switchover process, finally being completed in October 2012. However, Freeview HD is not the first
HDTV service over digital terrestrial television in Europe. If all three parameters are used, they are specified in the following form: [frame size][scanning system][frame or field rate] or [frame size]/[frame or field rate][scanning system]. Often, frame size or frame rate can be dropped if its value is implied from context. In
this case, the remaining numeric parameter is specified first, followed by the scanning system. For example, 1920×1080p25
identifies progressive scanning format with 25 frames per second, each frame being 1,920 pixels wide and 1,080 pixels
high. The 1080i25 or 1080i50 notation identifies interlaced scanning format with 25 frames (50 fields) per second,
each frame being 1,920 pixels wide and 1,080 pixels high. The 1080i30 or 1080i60 notation identifies interlaced scanning
format with 30 frames (60 fields) per second, each frame being 1,920 pixels wide and 1,080 pixels high. The 720p60
notation identifies progressive scanning format with 60 frames per second, each frame being 720 pixels high; 1,280
pixels horizontally are implied. 50 Hz systems support three scanning rates: 50i, 25p and 50p. 60 Hz systems support
a much wider set of frame rates: 59.94i, 60i, 23.976p, 24p, 29.97p, 30p, 59.94p and 60p. In the days of standard
definition television, the fractional rates were often rounded up to whole numbers, e.g. 23.976p was often called
24p, or 59.94i was often called 60i. 60 Hz high definition television supports both fractional and slightly different
integer rates, therefore strict usage of notation is required to avoid ambiguity. Nevertheless, 29.97i/59.94i is
almost universally called 60i, likewise 23.976p is called 24p. For the commercial naming of a product, the frame
rate is often dropped and is implied from context (e.g., a 1080i television set). A frame rate can also be specified
without a resolution. For example, 24p means 24 progressive scan frames per second, and 50i means 25 interlaced frames
per second. There is no single standard for HDTV color support. Colors are typically broadcast using a 10-bit-per-channel YUV color space but, depending on the underlying image-generating technology of the receiver, are then converted to an RGB color space using standardized algorithms. When transmitted directly over the Internet, the colors are typically pre-converted to 8-bit RGB channels for additional storage savings, on the assumption that the video will only be viewed on an sRGB computer screen. As an added benefit to the original broadcasters, the losses of this pre-conversion essentially make these files unsuitable for professional TV re-broadcasting. At a minimum,
HDTV has twice the linear resolution of standard-definition television (SDTV), thus showing greater detail than either
analog television or regular DVD. The technical standards for broadcasting HDTV also handle the 16:9 aspect ratio
images without using letterboxing or anamorphic stretching, thus increasing the effective image resolution. A very
high resolution source may require more bandwidth than available in order to be transmitted without loss of fidelity.
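The YUV-to-RGB conversion mentioned above can be sketched as follows. This is a simplified illustration using the BT.709 luma coefficients and full-range values in [0, 1]; real broadcast signals use limited-range 10-bit code values, which this sketch ignores:

```python
# Illustrative Y'CbCr -> R'G'B' conversion with BT.709 luma weights.
# Full-range values in [0, 1], Cb/Cr centered on 0 (an assumption; broadcast
# signals actually carry limited-range 10-bit integer code values).
KR, KB = 0.2126, 0.0722       # BT.709 red and blue luma coefficients
KG = 1.0 - KR - KB            # green coefficient

def ycbcr_to_rgb(y, cb, cr):
    """Convert one full-range Y'CbCr sample to R'G'B'."""
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# A neutral gray sample (Cb = Cr = 0) must map to equal R, G, B.
print(ycbcr_to_rgb(0.5, 0.0, 0.0))
```

A quick sanity check on such a matrix is that any sample with zero chroma comes back as a neutral color, as the print above demonstrates.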
The lossy compression that is used in all digital HDTV storage and transmission systems will distort the received
picture, when compared to the uncompressed source. The optimum format for a broadcast depends upon the type of videographic
recording medium used and the image's characteristics. For best fidelity to the source the transmitted field ratio,
lines, and frame rate should match those of the source. PAL, SECAM and NTSC frame rates technically apply only to
analogue standard definition television, not to digital or high definition broadcasts. However, with the roll out
of digital broadcasting, and later HDTV broadcasting, countries retained their heritage systems. HDTV in former PAL
and SECAM countries operates at a frame rate of 25/50 Hz, while HDTV in former NTSC countries operates at 30/60 Hz.
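The labeling convention described above ([frame size][scanning system][frame or field rate]) is regular enough to parse mechanically. The helper below, `parse_notation`, is a hypothetical illustration of that convention, not any standard API; the implied widths for 1080 and 720 lines come from the examples in the text:

```python
import re

# Sketch of a parser for HDTV labels such as "1080p25", "1080i50" or "720p60".
# The function name and return shape are illustrative only.
NOTATION = re.compile(r"^(\d+)([ip])(\d+(?:\.\d+)?)?$")

# Horizontal sizes implied when only the line count is given.
IMPLIED_WIDTH = {1080: 1920, 720: 1280}

def parse_notation(label):
    m = NOTATION.match(label)
    if not m:
        raise ValueError(f"not a recognised HDTV label: {label!r}")
    lines, scan, rate = int(m.group(1)), m.group(2), m.group(3)
    return {
        "width": IMPLIED_WIDTH.get(lines),
        "height": lines,
        "progressive": scan == "p",
        # For interlaced labels the number may count frames (1080i25) or
        # fields (1080i50); the parser just reports it as written.
        "rate": float(rate) if rate else None,
    }

print(parse_notation("1080p25"))
print(parse_notation("720p60"))
```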
Standard 35mm photographic film used for cinema projection has a much higher image resolution than HDTV systems,
and is exposed and projected at a rate of 24 frames per second (frame/s). To be shown on standard television, in
PAL-system countries, cinema film is scanned at the TV rate of 25 frame/s, causing a speedup of 4.1 percent, which
is generally considered acceptable. In NTSC-system countries, the TV scan rate of 30 frame/s would cause a perceptible
speedup if the same were attempted, and the necessary correction is performed by a technique called 3:2 Pulldown:
Over each successive pair of film frames, one is held for three video fields (1/20 of a second) and the next is held
for two video fields (1/30 of a second), giving a total time for the two frames of 1/12 of a second and thus achieving
the correct average film frame rate. Non-cinematic HDTV video recordings intended for broadcast are typically recorded
either in 720p or 1080i format as determined by the broadcaster. 720p is commonly used for Internet distribution
of high-definition video, because most computer monitors operate in progressive-scan mode. 720p also imposes less
strenuous storage and decoding requirements compared to both 1080i and 1080p. 1080p/24, 1080i/30, 1080i/25, and 720p/30
are most often used on Blu-ray Disc. In the US, residents in the line of sight of television station broadcast antennas
can receive free, over the air programming with a television set with an ATSC tuner (most sets sold since 2009 have
this). This is achieved with a TV aerial, just as it has been since the 1940s except now the major network signals
are broadcast in high definition (ABC, Fox, and Ion Television broadcast at 720p resolution; CBS, My Network TV,
NBC, PBS, and The CW at 1080i). As their digital signals more efficiently use the broadcast channel, many broadcasters
are adding multiple channels to their signals. Laws about antennas were updated before the change to digital terrestrial
broadcasts. These new laws prohibit homeowners' associations and city governments from banning the installation of
antennas. Additionally, cable-ready TV sets can display HD content without using an external box. They have a QAM
tuner built-in and/or a card slot for inserting a CableCARD. High-definition image sources include terrestrial broadcast,
direct broadcast satellite, digital cable, IPTV (including Google TV, Roku boxes, and Apple TV, or built into "Smart Televisions"),
Blu-ray video disc (BD), and internet downloads. Sony's PlayStation 3 has extensive HD compatibility because of its built-in Blu-ray disc player, as does Microsoft's Xbox 360 with its Netflix and Windows Media Center HTPC streaming capabilities and the Zune marketplace, where users can rent or purchase digital HD content. More recently, Nintendo released a high-definition gaming platform, the Wii U, which includes TV remote control features in addition to IPTV streaming services such as Netflix. The HD capabilities of these consoles have influenced
some developers to port games from past consoles onto the PS3, Xbox 360 and Wii U, often with remastered or upscaled
graphics. HDTV can be recorded to D-VHS (Digital-VHS or Data-VHS), W-VHS (analog only), to an HDTV-capable digital
video recorder (for example DirecTV's high-definition Digital video recorder, Sky HD's set-top box, Dish Network's
VIP 622 or VIP 722 high-definition Digital video recorder receivers, or TiVo's Series 3 or HD recorders), or an HDTV-ready
HTPC. Some cable boxes are capable of receiving or recording two or more broadcasts at a time in HDTV format, and
HDTV programming, some included in the monthly cable service subscription price, some for an additional fee, can
be played back with the cable company's on-demand feature. The massive amount of data storage required to archive
uncompressed streams meant that inexpensive uncompressed storage options were not available to the consumer. In 2008,
the Hauppauge 1212 Personal Video Recorder was introduced. This device accepts HD content through component video
inputs and stores the content in MPEG-2 format in a .ts file or in a Blu-ray compatible format .m2ts file on the
hard drive or DVD burner of a computer connected to the PVR through a USB 2.0 interface. More recent systems are
able to record a broadcast high definition program in its 'as broadcast' format or transcode to a format more compatible
with Blu-ray. Analog tape recorders with bandwidth capable of recording analog HD signals, such as W-VHS recorders,
are no longer produced for the consumer market and are both expensive and scarce in the secondary market. In the
United States, as part of the FCC's plug and play agreement, cable companies are required to provide customers who
rent HD set-top boxes with a set-top box with "functional" FireWire (IEEE 1394) on request. None of the direct broadcast
satellite providers have offered this feature on any of their supported boxes, but some cable TV companies have.
As of July 2004, boxes are not included in the FCC mandate. This content is protected by encryption known
as 5C. This encryption can prevent duplication of content or simply limit the number of copies permitted, thus effectively
denying most if not all fair use of the content.
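The 3:2 pulldown timing described earlier can be verified with exact rational arithmetic. The snippet below assumes the nominal 60 fields-per-second rate quoted in the text (real NTSC uses 59.94, which shifts every figure by the same factor):

```python
from fractions import Fraction

# 3:2 pulldown: over each pair of film frames, one frame is held for
# three video fields and the next for two, at 60 fields per second.
field = Fraction(1, 60)

long_frame = 3 * field    # 3 fields -> 1/20 s
short_frame = 2 * field   # 2 fields -> 1/30 s
pair = long_frame + short_frame

print(pair)        # 1/12 s for the two frames together
print(pair / 2)    # 1/24 s average, i.e. the 24 frame/s film rate
```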
Wood has been used for thousands of years both as a fuel and as a construction material. It is an organic material, a natural
composite of cellulose fibers (which are strong in tension) embedded in a matrix of lignin which resists compression.
Wood is sometimes defined as only the secondary xylem in the stems of trees, or it is defined more broadly to include
the same type of tissue elsewhere such as in the roots of trees or shrubs. In a living tree it performs
a support function, enabling woody plants to grow large or to stand up by themselves. It also conveys water and nutrients
between the leaves, other growing tissues, and the roots. Wood may also refer to other plant materials with comparable
properties, and to material engineered from wood, or wood chips or fiber. The Earth contains about 434 billion cubic
meters of growing stock forest, 47% of which is commercial. As an abundant, carbon-neutral renewable resource, woody
materials have been of intense interest as a source of renewable energy. In 1991, approximately 3.5 cubic kilometers
of wood were harvested. Dominant uses were for furniture and building construction. A 2011 discovery in the Canadian
province of New Brunswick uncovered the earliest known plants to have grown wood, approximately 395 to 400 million
years ago. Wood can be dated by carbon dating and in some species by dendrochronology to make inferences about when
a wooden object was created. People have used wood for millennia for many purposes, primarily as a fuel or as a construction
material for making houses, tools, weapons, furniture, packaging, artworks, and paper. The year-to-year variation
in tree-ring widths and isotopic abundances gives clues to the prevailing climate at that time. Wood, in the strict
sense, is yielded by trees, which increase in diameter by the formation, between the existing wood and the inner
bark, of new woody layers which envelop the entire stem, living branches, and roots. This process is known as secondary
growth; it is the result of cell division in the vascular cambium, a lateral meristem, and subsequent expansion of
the new cells. Where there are clear seasons, growth can occur in a discrete annual or seasonal pattern, leading
to growth rings; these can usually be most clearly seen on the end of a log, but are also visible on the other surfaces.
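The dating of wood by dendrochronology, mentioned above, rests on cross-dating: sliding an undated ring-width sequence along a dated master series until the year-to-year pattern lines up. The sketch below is a toy illustration with invented ring widths; real cross-dating uses long, statistically validated master chronologies:

```python
# Toy cross-dating sketch. Both series are invented for illustration;
# widths are in arbitrary units (e.g. mm per ring).
master = [1.2, 0.8, 1.5, 0.4, 0.9, 1.1, 0.3, 1.4, 0.7, 1.0]
sample = [0.4, 0.9, 1.1, 0.3]   # matches master[3:7]

def best_offset(master, sample):
    """Return the offset minimising the summed squared width differences."""
    def misfit(off):
        return sum((m - s) ** 2
                   for m, s in zip(master[off:off + len(sample)], sample))
    return min(range(len(master) - len(sample) + 1), key=misfit)

print(best_offset(master, sample))
```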
If these seasons are annual these growth rings are referred to as annual rings. Where there is no seasonal difference
growth rings are likely to be indistinct or absent. If there are differences within a growth ring, then the part
of a growth ring nearest the center of the tree, and formed early in the growing season when growth is rapid, is
usually composed of wider elements. It is usually lighter in color than that near the outer portion of the ring,
and is known as earlywood or springwood. The outer portion formed later in the season is then known as the latewood
or summerwood. However, there are major differences, depending on the kind of wood (see below). A knot is a particular
type of imperfection in a piece of wood; it will affect the technical properties of the wood, usually reducing the
local strength and increasing the tendency for splitting along the wood grain, but may be exploited for visual effect.
In a longitudinally sawn plank, a knot will appear as a roughly circular "solid" (usually darker) piece of wood around
which the grain of the rest of the wood "flows" (parts and rejoins). Within a knot, the direction of the wood (grain
direction) is up to 90 degrees different from the grain direction of the regular wood. In the tree a knot is either
the base of a side branch or a dormant bud. A knot (when the base of a side branch) is conical in shape (hence the
roughly circular cross-section) with the inner tip at the point in stem diameter at which the plant's vascular cambium
was located when the branch formed as a bud. During the development of a tree, the lower limbs often die, but may
remain attached for a time, sometimes years. Subsequent layers of growth of the attaching stem are no longer intimately
joined with the dead limb, but are grown around it. Hence, dead branches produce knots which are not attached, and
likely to drop out after the tree has been sawn into boards. In grading lumber and structural timber, knots are classified
according to their form, size, soundness, and the firmness with which they are held in place. This firmness is affected
by, among other factors, the length of time for which the branch was dead while the attaching stem continued to grow.
Knots do not necessarily influence the stiffness of structural timber; this depends on their size and location.
Stiffness and elastic strength are more dependent upon the sound wood than upon localized defects. The breaking strength
is very susceptible to defects. Sound knots do not weaken wood when subject to compression parallel to the grain.
In some decorative applications, wood with knots may be desirable to add visual interest. In applications where wood
is painted, such as skirting boards, fascia boards, door frames and furniture, resins present in the timber may continue
to 'bleed' through to the surface of a knot for months or even years after manufacture and show as a yellow or brownish
stain. A knot primer paint or solution, correctly applied during preparation, may do much to reduce this problem
but it is difficult to control completely, especially when using mass-produced kiln-dried timber stocks. Heartwood
(or duramen) is wood that as a result of a naturally occurring chemical transformation has become more resistant
to decay. Heartwood formation occurs spontaneously (it is a genetically programmed process). Once heartwood formation
is complete, the heartwood is dead. Some uncertainty still exists as to whether heartwood is truly dead, as it can
still chemically react to decay organisms, but only once. Heartwood is often visually distinct from the living sapwood,
and can be distinguished in a cross-section where the boundary will tend to follow the growth rings. For example,
it is sometimes much darker. However, other processes such as decay or insect invasion can also discolor wood, even
in woody plants that do not form heartwood, which may lead to confusion. Sapwood (or alburnum) is the younger, outermost
wood; in the growing tree it is living wood, and its principal functions are to conduct water from the roots to the
leaves and to store up and give back according to the season the reserves prepared in the leaves. However, by the
time they become competent to conduct water, all xylem tracheids and vessels have lost their cytoplasm and the cells
are therefore functionally dead. All wood in a tree is first formed as sapwood. The more leaves a tree bears and
the more vigorous its growth, the larger the volume of sapwood required. Hence trees making rapid growth in the open
have thicker sapwood for their size than trees of the same species growing in dense forests. Sometimes trees (of
species that do form heartwood) grown in the open may become of considerable size, 30 cm or more in diameter, before
any heartwood begins to form, for example, in second-growth hickory, or open-grown pines. The term heartwood derives
solely from its position and not from any vital importance to the tree. This is evidenced by the fact that a tree
can thrive with its heart completely decayed. Some species begin to form heartwood very early in life, so having
only a thin layer of live sapwood, while in others the change comes slowly. Thin sapwood is characteristic of such
species as chestnut, black locust, mulberry, osage-orange, and sassafras, while in maple, ash, hickory, hackberry,
beech, and pine, thick sapwood is the rule. Others never form heartwood. No definite relation exists between the
annual rings of growth and the amount of sapwood. Within the same species the cross-sectional area of the sapwood
is very roughly proportional to the size of the crown of the tree. If the rings are narrow, more of them are required
than where they are wide. As the tree gets larger, the sapwood must necessarily become thinner or increase materially
in volume. Sapwood is thicker in the upper portion of the trunk of a tree than near the base, because the age and
the diameter of the upper sections are less. When a tree is very young it is covered with limbs almost, if not entirely,
to the ground, but as it grows older some or all of them will eventually die and are either broken off or fall off.
Subsequent growth of wood may completely conceal the stubs which will however remain as knots. No matter how smooth
and clear a log is on the outside, it is more or less knotty near the middle. Consequently, the sapwood of an old
tree, and particularly of a forest-grown tree, will be freer from knots than the inner heartwood. Since in most uses
of wood, knots are defects that weaken the timber and interfere with its ease of working and other properties, it
follows that a given piece of sapwood, because of its position in the tree, may well be stronger than a piece of
heartwood from the same tree. It is remarkable that the inner heartwood of old trees remains as sound as it usually
does, since in many cases it is hundreds, and in a few instances thousands, of years old. Every broken limb or root,
or deep wound from fire, insects, or falling timber, may afford an entrance for decay, which, once started, may penetrate
to all parts of the trunk. The larvae of many insects bore into the trees and their tunnels remain indefinitely as
sources of weakness. Whatever advantages, however, that sapwood may have in this connection are due solely to its
relative age and position. If a tree grows all its life in the open and the conditions of soil and site remain unchanged,
it will make its most rapid growth in youth, and gradually decline. The annual rings of growth are for many years
quite wide, but later they become narrower and narrower. Since each succeeding ring is laid down on the outside of
the wood previously formed, it follows that unless a tree materially increases its production of wood from year to
year, the rings must necessarily become thinner as the trunk gets wider. As a tree reaches maturity its crown becomes
more open and the annual wood production is lessened, thereby reducing still more the width of the growth rings.
In the case of forest-grown trees so much depends upon the competition of the trees in their struggle for light and
nourishment that periods of rapid and slow growth may alternate. Some trees, such as southern oaks, maintain the
same width of ring for hundreds of years. Upon the whole, however, as a tree gets larger in diameter the width of
the growth rings decreases. Different pieces of wood cut from a large tree may differ decidedly, particularly if
the tree is big and mature. In some trees, the wood laid on late in the life of a tree is softer, lighter, weaker,
and more even-textured than that produced earlier, but in other trees, the reverse applies. This may or may not correspond
to heartwood and sapwood. In a large log the sapwood, because of the time in the life of the tree when it was grown,
may be inferior in hardness, strength, and toughness to equally sound heartwood from the same log. In a smaller tree,
the reverse may be true. In species which show a distinct difference between heartwood and sapwood the natural color
of heartwood is usually darker than that of the sapwood, and very frequently the contrast is conspicuous (see section
of yew log above). This is produced by deposits in the heartwood of chemical substances, so that a dramatic color
difference does not mean a dramatic difference in the mechanical properties of heartwood and sapwood, although there
may be a dramatic chemical difference. Some experiments on very resinous Longleaf Pine specimens indicate an increase
in strength, due to the resin which increases the strength when dry. Such resin-saturated heartwood is called "fat
lighter". Structures built of fat lighter are almost impervious to rot and termites; however they are very flammable.
Stumps of old longleaf pines are often dug, split into small pieces and sold as kindling for fires. Stumps thus dug
may actually have remained in the ground a century or more since the tree was cut. Spruce impregnated with crude resin and dried is also greatly
increased in strength thereby. Since the latewood of a growth ring is usually darker in color than the earlywood,
this fact may be used in judging the density, and therefore the hardness and strength of the material. This is particularly
the case with coniferous woods. In ring-porous woods the vessels of the early wood not infrequently appear on a finished
surface as darker than the denser latewood, though on cross sections of heartwood the reverse is commonly true. Except
in the manner just stated the color of wood is no indication of strength. Abnormal discoloration of wood often denotes
a diseased condition, indicating unsoundness. The black check in western hemlock is the result of insect attacks.
The reddish-brown streaks so common in hickory and certain other woods are mostly the result of injury by birds.
The discoloration is merely an indication of an injury, and in all probability does not of itself affect the properties
of the wood. Certain rot-producing fungi impart to wood characteristic colors which thus become symptomatic of weakness;
however an attractive effect known as spalting produced by this process is often considered a desirable characteristic.
Ordinary sap-staining is due to fungal growth, but does not necessarily produce a weakening effect. Water occurs in wood in three forms: in the cell walls, in the protoplasmic contents of the cells, and as free water in the cell cavities; in heartwood it occurs only in the first and last forms. Wood that is thoroughly air-dried retains 8–16% of the water in the cell
walls, and none, or practically none, in the other forms. Even oven-dried wood retains a small percentage of moisture,
but for all except chemical purposes, may be considered absolutely dry. The general effect of the water content upon
the wood substance is to render it softer and more pliable. A similar effect of common observation is in the softening
action of water on rawhide, paper, or cloth. Within certain limits, the greater the water content, the greater its
softening effect. Drying produces a decided increase in the strength of wood, particularly in small specimens. An
extreme example is the case of a completely dry spruce block 5 cm in section, which will sustain a permanent load
four times as great as a green (undried) block of the same size will. The greatest strength increase due to drying
is in the ultimate crushing strength, and strength at elastic limit in endwise compression; these are followed by
the modulus of rupture, and stress at elastic limit in cross-bending, while the modulus of elasticity is least affected.
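The moisture figures above are conventionally expressed on an oven-dry basis: the mass of water as a percentage of the wood's oven-dry mass. The sample masses in this sketch are invented for illustration:

```python
# Moisture content on the oven-dry basis conventional in wood science.
def moisture_content(wet_mass_g, oven_dry_mass_g):
    """Return moisture content in percent of oven-dry mass."""
    return 100.0 * (wet_mass_g - oven_dry_mass_g) / oven_dry_mass_g

# An air-dried specimen of 112 g that weighs 100 g after oven drying
# has a 12% moisture content, within the 8-16% air-dry range above.
print(moisture_content(112.0, 100.0))
```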
Wood is a heterogeneous, hygroscopic, cellular and anisotropic material. It consists of cells, and the cell walls
are composed of micro-fibrils of cellulose (40–50%) and hemicellulose (15–25%) impregnated with lignin (15–30%). In coniferous or softwood species the wood cells are mostly of one kind, tracheids, and as a result the material
is much more uniform in structure than that of most hardwoods. There are no vessels ("pores") in coniferous wood
such as one sees so prominently in oak and ash, for example. The structure of hardwoods is more complex. The water
conducting capability is mostly taken care of by vessels: in some cases (oak, chestnut, ash) these are quite large
and distinct, in others (buckeye, poplar, willow) too small to be seen without a hand lens. In discussing such woods
it is customary to divide them into two large classes, ring-porous and diffuse-porous. In ring-porous species, such
as ash, black locust, catalpa, chestnut, elm, hickory, mulberry, and oak, the larger vessels or pores (as cross sections
of vessels are called) are localised in the part of the growth ring formed in spring, thus forming a region of more
or less open and porous tissue. The rest of the ring, produced in summer, is made up of smaller vessels and a much
greater proportion of wood fibers. These fibers are the elements which give strength and toughness to wood, while
the vessels are a source of weakness. In diffuse-porous woods the pores are evenly sized, so that the water conducting capability is scattered throughout the growth ring instead of being collected in a band or row. Examples of this kind of wood are alder, basswood, birch, buckeye, maple, willow, and the Populus species such as aspen, cottonwood and poplar. Some species, such as walnut and cherry, are on the border between the two classes, forming an intermediate group. In temperate softwoods there often is a marked difference
between latewood and earlywood. The latewood will be denser than that formed early in the season. When examined under
a microscope the cells of dense latewood are seen to be very thick-walled and with very small cell cavities, while
those formed first in the season have thin walls and large cell cavities. The strength is in the walls, not the cavities.
Hence the greater the proportion of latewood the greater the density and strength. In choosing a piece of pine where
strength or stiffness is the important consideration, the principal thing to observe is the comparative amounts of
earlywood and latewood. The width of ring is not nearly so important as the proportion and nature of the latewood
in the ring. If a heavy piece of pine is compared with a lightweight piece it will be seen at once that the heavier
one contains a larger proportion of latewood than the other, and is therefore showing more clearly demarcated growth
rings. In white pines there is not much contrast between the different parts of the ring, and as a result the wood
is very uniform in texture and is easy to work. In hard pines, on the other hand, the latewood is very dense and
is deep-colored, presenting a very decided contrast to the soft, straw-colored earlywood. It is not only the proportion
of latewood, but also its quality, that counts. In specimens that show a very large proportion of latewood it may
be noticeably more porous and weigh considerably less than the latewood in pieces that contain but little. One can
judge comparative density, and therefore to some extent strength, by visual inspection. No satisfactory explanation
can as yet be given for the exact mechanisms determining the formation of earlywood and latewood. Several factors
may be involved. In conifers, at least, rate of growth alone does not determine the proportion of the two portions
of the ring, for in some cases the wood of slow growth is very hard and heavy, while in others the opposite is true.
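The qualitative claim above, that a greater proportion of latewood means denser and stronger wood, can be illustrated with a simple rule-of-mixtures estimate. The two component densities below are assumed round numbers, not measurements:

```python
# Rule-of-mixtures sketch of ring density from the latewood fraction.
# Component densities are assumed values for illustration only.
EARLYWOOD = 0.30   # g/cm^3, assumed earlywood density
LATEWOOD = 0.60    # g/cm^3, assumed latewood density

def ring_density(latewood_fraction):
    """Weighted average density for a given latewood volume fraction."""
    return latewood_fraction * LATEWOOD + (1.0 - latewood_fraction) * EARLYWOOD

print(ring_density(0.2))   # fast-grown ring with little latewood
print(ring_density(0.6))   # heavier ring with more latewood
```

The point of the sketch is only the monotone relationship: raising the latewood fraction raises the estimated density, matching the visual-inspection rule described in the text.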
The quality of the site where the tree grows undoubtedly affects the character of the wood formed, though it is not
possible to formulate a rule governing it. In general, however, it may be said that where strength or ease of working
is essential, woods of moderate to slow growth should be chosen. In ring-porous woods each season's growth is always
well defined, because the large pores formed early in the season abut on the denser tissue of the year before. In
the case of the ring-porous hardwoods there seems to exist a pretty definite relation between the rate of growth
of timber and its properties. This may be briefly summed up in the general statement that the more rapid the growth
or the wider the rings of growth, the heavier, harder, stronger, and stiffer the wood. This, it must be remembered,
applies only to ring-porous woods such as oak, ash, hickory, and others of the same group, and is, of course, subject
to some exceptions and limitations. In ring-porous woods of good growth it is usually the latewood in which the thick-walled,
strength-giving fibers are most abundant. As the breadth of ring diminishes, this latewood is reduced so that very
slow growth produces comparatively light, porous wood composed of thin-walled vessels and wood parenchyma. In good
oak these large vessels of the earlywood occupy from 6 to 10 percent of the volume of the log, while in inferior
material they may make up 25 percent or more. The latewood of good oak is dark-colored and firm, and consists mostly of
thick-walled fibers which form one-half or more of the wood. In inferior oak, this latewood is much reduced both
in quantity and quality. Such variation is very largely the result of rate of growth. Wide-ringed wood is often called
"second-growth", because the growth of the young timber in open stands after the old trees have been removed is more
rapid than in trees in a closed forest, and in the manufacture of articles where strength is an important consideration
such "second-growth" hardwood material is preferred. This is particularly the case in the choice of hickory for handles
and spokes. Here not only strength, but toughness and resilience are important. A series of tests on hickory by the
U.S. Forest Service bears this out. In the diffuse-porous woods, the demarcation between rings is not
always so clear and in some cases is almost (if not entirely) invisible to the unaided eye. Conversely, when there
is a clear demarcation there may not be a noticeable difference in structure within the growth ring. In diffuse-porous
woods, as has been stated, the vessels or pores are even-sized, so that the water conducting capability is scattered
throughout the ring instead of collected in the earlywood. The effect of rate of growth is, therefore, not the same
as in the ring-porous woods, approaching more nearly the conditions in the conifers. In general it may be stated
that such woods of medium growth afford stronger material than when very rapidly or very slowly grown. In many uses
of wood, total strength is not the main consideration. If ease of working is prized, wood should be chosen with regard
to its uniformity of texture and straightness of grain, which will in most cases occur when there is little contrast
between the latewood of one season's growth and the earlywood of the next. Structural material that resembles ordinary,
"dicot" or conifer wood in its gross handling characteristics is produced by a number of monocot plants, and these
also are colloquially called wood. Of these, bamboo, botanically a member of the grass family, has considerable economic
importance, larger culms being widely used as a building and construction material in their own right and, these
days, in the manufacture of engineered flooring, panels and veneer. Another major plant group that produces material
often called wood is the palms. Of much less importance are plants such as Pandanus, Dracaena and Cordyline.
In all of these plants, the structure and composition of the structural material differ markedly from those of ordinary
wood. The single most revealing property of wood as an indicator of wood quality is specific gravity (Timell 1986),
as both pulp yield and lumber strength are determined by it. Specific gravity is the ratio of the mass of a substance
to the mass of an equal volume of water; density is the ratio of the mass of a quantity of a substance to the volume
of that quantity, and is expressed in mass per unit volume, e.g., grams per millilitre (g/ml) or grams per cubic centimetre (g/cm3). The two measures
are numerically equivalent as long as the metric system is used. Upon drying, wood shrinks and its density increases.
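The relationship between density and specific gravity described above can be sketched in a short Python example; the sample mass and volume figures below are illustrative, not from the source:

```python
# Illustrative sketch (assumed example values, not from the source):
# computing wood density and specific gravity from a sample's mass and volume.

WATER_DENSITY_G_PER_CM3 = 1.0  # density of water in metric units, g/cm3

def density(mass_g: float, volume_cm3: float) -> float:
    """Density in g/cm3: the sample's mass divided by its volume."""
    return mass_g / volume_cm3

def specific_gravity(mass_g: float, volume_cm3: float) -> float:
    """Dimensionless ratio of the sample's density to that of water.
    Numerically equal to density in g/cm3, as the text notes."""
    return density(mass_g, volume_cm3) / WATER_DENSITY_G_PER_CM3

# A hypothetical 40 g wood sample occupying 80 cm3:
d = density(40.0, 80.0)            # 0.5 g/cm3
sg = specific_gravity(40.0, 80.0)  # 0.5 (dimensionless)
```

Measuring mass oven-dry but volume green (water-saturated) gives the minimum value the text calls basic specific gravity.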
Minimum values are associated with green (water-saturated) wood and are referred to as basic specific gravity (Timell
1986). Wood density is determined by multiple growth and physiological factors compounded into “one fairly easily
measured wood characteristic” (Elliott 1970). Age, diameter, height, radial growth, geographical location, site and
growing conditions, silvicultural treatment, and seed source, all to some degree influence wood density. Variation
is to be expected. Within an individual tree, the variation in wood density is often as great as or even greater
than that between different trees (Timell 1986). Variation of specific gravity within the bole of a tree can occur
in either the horizontal or vertical direction. It is common to classify wood as either softwood or hardwood.
The wood from conifers (e.g. pine) is called softwood, and the wood from dicotyledons (usually broad-leaved trees,
e.g. oak) is called hardwood. These names are a bit misleading, as hardwoods are not necessarily hard, and softwoods
are not necessarily soft. The well-known balsa (a hardwood) is actually softer than any commercial softwood. Conversely,
some softwoods (e.g. yew) are harder than many hardwoods. There is a strong relationship between the properties of
wood and the properties of the particular tree that yielded it. The density of wood varies with species. The density
of a wood correlates with its strength (mechanical properties). For example, mahogany is a medium-dense hardwood
that is excellent for fine furniture crafting, whereas balsa is light, making it useful for model building. One of
the densest woods is black ironwood. The chemical composition of wood varies from species to species, but is approximately
50% carbon, 42% oxygen, 6% hydrogen, 1% nitrogen, and 1% other elements (mainly calcium, potassium, sodium, magnesium,
iron, and manganese) by weight. Wood also contains sulfur, chlorine, silicon, phosphorus, and other elements in small
quantity. Aside from water, wood has three main components. Cellulose, a crystalline polymer derived from glucose,
constitutes about 41–43%. Next in abundance is hemicellulose, which is around 20% in deciduous trees but near 30%
in conifers. It is mainly five-carbon sugars that are linked in an irregular manner, in contrast to the cellulose.
Lignin is the third component, at around 27% in coniferous wood vs. 23% in deciduous trees. Lignin confers hydrophobic
properties, reflecting the fact that it is based on aromatic rings. These three components are interwoven, and direct
covalent linkages exist between the lignin and the hemicellulose. A major focus of the paper industry is the separation
of the lignin from the cellulose, from which paper is made. In chemical terms, the difference between hardwood and
softwood is reflected in the composition of the constituent lignin. Hardwood lignin is primarily derived from sinapyl
alcohol and coniferyl alcohol. Softwood lignin is mainly derived from coniferyl alcohol. Aside from the lignocellulose,
wood consists of a variety of low molecular weight organic compounds, called extractives. The wood extractives are
fatty acids, resin acids, waxes and terpenes. For example, rosin is exuded by conifers as protection from insects.
The extraction of these organic materials from wood provides tall oil, turpentine, and rosin. Wood has a long history
of being used as fuel, which continues to this day, mostly in rural areas of the world. Hardwood is preferred over
softwood because it creates less smoke and burns longer. Adding a woodstove or fireplace to a home is often felt
to add ambiance and warmth. Wood has been an important construction material since humans began building shelters,
houses and boats. Nearly all boats were made out of wood until the late 19th century, and wood remains in common
use today in boat construction. Elm in particular was used for this purpose as it resisted decay as long as it was
kept wet (it also served for water pipe before the advent of more modern plumbing). Wood to be used for construction
work is commonly known as lumber in North America. Elsewhere, lumber usually refers to felled trees, and the word
for sawn planks ready for use is timber. In Medieval Europe oak was the wood of choice for all wood construction,
including beams, walls, doors, and floors. Today a wider variety of woods is used: solid wood doors are often made
from poplar, small-knotted pine, and Douglas fir. New domestic housing in many parts of the world today is commonly
made from timber-framed construction. Engineered wood products are becoming a bigger part of the construction industry.
They may be used in both residential and commercial buildings as structural and aesthetic materials. In buildings
made of other materials, wood will still be found as a supporting material, especially in roof construction, in interior
doors and their frames, and as exterior cladding. Engineered wood products, glued building products "engineered"
for application-specific performance requirements, are often used in construction and industrial applications. Glued
engineered wood products are manufactured by bonding together wood strands, veneers, lumber or other forms of wood
fiber with glue to form a larger, more efficient composite structural unit. These products include glued laminated
timber (glulam), wood structural panels (including plywood, oriented strand board and composite panels), laminated
veneer lumber (LVL) and other structural composite lumber (SCL) products, parallel strand lumber, and I-joists. Approximately
100 million cubic meters of wood was consumed for this purpose in 1991. The trends suggest that particle board and
fiber board will overtake plywood. Wood unsuitable for construction in its native form may be broken down mechanically
(into fibers or chips) or chemically (into cellulose) and used as a raw material for other building materials, such
as engineered wood, as well as chipboard, hardboard, and medium-density fiberboard (MDF). Such wood derivatives are
widely used: wood fibers are an important component of most paper, and cellulose is used as a component of some synthetic
materials. Wood derivatives can also be used for kinds of flooring, for example laminate flooring. Wood has always
been used extensively for furniture, such as chairs and beds. It is also used for tool handles and cutlery, such
as chopsticks, toothpicks, and other utensils, like the wooden spoon. Further developments include new lignin glue
applications, recyclable food packaging, rubber tire replacement applications, anti-bacterial medical agents, and
high strength fabrics or composites. As scientists and engineers further learn and develop new techniques to extract
various components from wood, or alternatively to modify wood, for example by adding components to wood, new more
advanced products will appear on the marketplace. Moisture content electronic monitoring can also enhance next generation
wood protection. Wood has long been used as an artistic medium. It has been used to make sculptures and carvings
for millennia. Examples include the totem poles carved by North American indigenous people from conifer trunks, often
Western Red Cedar (Thuja plicata), and the Millennium clock tower, now housed in the National Museum of Scotland
in Edinburgh. It is also used in woodcut printmaking, and for engraving. Certain types of musical instruments, such
as those of the violin family, the guitar, the clarinet and recorder, the xylophone, and the marimba, are traditionally
made mostly or entirely of wood. The choice of wood may make a significant difference to the tone and resonant qualities
of the instrument, and tonewoods have widely differing properties, ranging from the hard and dense African blackwood
(used for the bodies of clarinets) to the light but resonant European spruce (Picea abies), which is traditionally
used for the soundboards of violins. The most valuable tonewoods, such as the ripple sycamore (Acer pseudoplatanus),
used for the backs of violins, combine acoustic properties with decorative color and grain which enhance the appearance
of the finished instrument. Despite their collective name, not all woodwind instruments are made entirely of wood.
The reeds used to play them, however, are usually made from Arundo donax, a type of monocot cane plant. Many types
of sports equipment are made of wood, or were constructed of wood in the past. For example, cricket bats are typically
made of white willow. The baseball bats which are legal for use in Major League Baseball are frequently made of ash
wood or hickory, and in recent years have been constructed from maple even though that wood is somewhat more fragile.
NBA courts have been traditionally made out of parquetry. Many other types of sports and recreation equipment, such
as skis, ice hockey sticks, lacrosse sticks and archery bows, were commonly made of wood in the past, but have since
been replaced with more modern materials such as aluminium, fiberglass, carbon fiber, titanium, and composite materials.
One noteworthy example of this trend is the golf club commonly known as the wood, the head of which was traditionally
made of persimmon wood in the early days of the game of golf, but is now generally made of synthetic materials. Little
is known about the bacteria that degrade cellulose. Symbiotic bacteria in Xylophaga may play a role in the degradation
of sunken wood; while bacteria such as Alphaproteobacteria, Flavobacteria, Actinobacteria, Clostridia, and Bacteroidetes
have been detected in wood submerged for over a year.
Somalis (Somali: Soomaali, Arabic: صومال) are an ethnic group inhabiting the Horn of Africa (Somali Peninsula). The overwhelming
majority of Somalis speak the Somali language, which is part of the Cushitic branch of the Afro-Asiatic family. They
are predominantly Sunni Muslim. Ethnic Somalis number around 16–20 million and are principally concentrated in Somalia
(around 12.3 million), Ethiopia (4.6 million), Kenya (2.4 million), and Djibouti (464,600), with many also residing
in parts of the Middle East, North America and Europe. Irir Samaale, the oldest common ancestor of several Somali
clans, is generally regarded as the source of the ethnonym Somali. The name "Somali" is, in turn, held to be derived
from the words soo and maal, which together mean "go and milk" — a reference to the ubiquitous pastoralism of the
Somali people. Another plausible etymology proposes that the term Somali is derived from the Arabic for "wealthy"
(dhawamaal), again referring to Somali riches in livestock. An ancient Chinese document from the 9th century referred
to the northern Somali coast — which was then called "Berbera" by Arab geographers in reference to the region's "Berber"
(Cushitic) inhabitants — as Po-pa-li. The first clear written reference to the sobriquet Somali, however, dates back
to the 15th century. During the wars between the Sultanate of Ifat based at Zeila and the Solomonic Dynasty, the
Abyssinian Emperor had one of his court officials compose a hymn celebrating a military victory over the Sultan of
Ifat's eponymous troops. Ancient rock paintings in Somalia which date back 5,000 years have been found in the northern
part of the country, depicting early life in the territory. The most famous of these is the Laas Geel complex, which
contains some of the earliest known rock art on the African continent and features many elaborate pastoralist sketches
of animal and human figures. In other places, such as the northern Dhambalin region, a depiction of a man on a horse
is postulated as being one of the earliest known examples of a mounted huntsman. Inscriptions have been found beneath
many of the rock paintings, but archaeologists have so far been unable to decipher this form of ancient writing.
During the Stone Age, the Doian culture and the Hargeisan culture flourished here with their respective industries
and factories. The oldest evidence of burial customs in the Horn of Africa comes from cemeteries in Somalia dating
back to 4th millennium BC. The stone implements from the Jalelo site in northern Somalia are said to be the most
important link in evidence of the universality in palaeolithic times between the East and the West. In antiquity,
the ancestors of the Somali people were an important link in the Horn of Africa connecting the region's commerce
with the rest of the ancient world. Somali sailors and merchants were the main suppliers of frankincense, myrrh and
spices, items which were considered valuable luxuries by the Ancient Egyptians, Phoenicians, Mycenaeans and Babylonians.
According to most scholars, the ancient Land of Punt and its inhabitants formed part of the ethnogenesis of the Somali
people. The ancient Puntites were a nation of people that had close relations with Pharaonic Egypt during the times
of Pharaoh Sahure and Queen Hatshepsut. The pyramidal structures, temples and ancient houses of dressed stone littered
around Somalia are said to date from this period. In the classical era, several ancient city-states such as Opone,
Essina, Sarapion, Nikon, Malao, Damo and Mosylon near Cape Guardafui, which competed with the Sabaeans, Parthians
and Axumites for the wealthy Indo-Greco-Roman trade, also flourished in Somalia. The birth of Islam on the opposite
side of Somalia's Red Sea coast meant that Somali merchants, sailors and expatriates living in the Arabian Peninsula
gradually came under the influence of the new religion through their converted Arab Muslim trading partners. With
the migration of fleeing Muslim families from the Islamic world to Somalia in the early centuries of Islam and the
peaceful conversion of the Somali population by Somali Muslim scholars in the following centuries, the ancient city-states
eventually transformed into Islamic Mogadishu, Berbera, Zeila, Barawa and Merca, which were part of the Berberi civilization.
The city of Mogadishu came to be known as the City of Islam, and controlled the East African gold trade for several
centuries. The Sultanate of Ifat, led by the Walashma dynasty with its capital at Zeila, ruled over parts of what
is now eastern Ethiopia, Djibouti, and northern Somalia. The historian al-Umari records that Ifat was situated near
the Red Sea coast, and states its size as 15 days travel by 20 days travel. Its army numbered 15,000 horsemen and
20,000 foot soldiers. Al-Umari also credits Ifat with seven "mother cities": Belqulzar, Kuljura, Shimi, Shewa, Adal,
Jamme and Laboo. In the Middle Ages, several powerful Somali empires dominated the regional trade including the Ajuran
Sultanate, which excelled in hydraulic engineering and fortress building, the Sultanate of Adal, whose general Ahmad
ibn Ibrahim al-Ghazi (Ahmed Gurey) was the first commander to use cannon warfare on the continent during Adal's conquest
of the Ethiopian Empire, and the Sultanate of the Geledi, whose military dominance forced governors of the Omani
empire north of the city of Lamu to pay tribute to the Somali Sultan Ahmed Yusuf. In the late 19th century, after
the Berlin conference had ended, European empires sailed with their armies to the Horn of Africa. The imperial clouds
wavering over Somalia alarmed the Dervish leader Mohammed Abdullah Hassan, who gathered Somali soldiers from across
the Horn of Africa and began one of the longest anti-colonial wars ever. The Dervish State successfully repulsed
the British empire four times and forced it to retreat to the coastal region. As a result of its successes against
the British, the Dervish State received support from the Ottoman and German empires. The Turks also named Hassan
Emir of the Somali nation, and the Germans promised to officially recognize any territories the Dervishes were to
acquire. After a quarter of a century of holding the British at bay, the Dervishes were finally defeated in 1920,
when Britain for the first time in Africa used airplanes to bomb the Dervish capital of Taleex. As a result of this
bombardment, former Dervish territories were turned into a protectorate of Britain. Italy similarly faced the same
opposition from Somali Sultans and armies and did not acquire full control of parts of modern Somalia until the Fascist
era in late 1927. This occupation lasted until 1941, when it was replaced by a British military administration. Following
World War II, Britain retained control of both British Somaliland and Italian Somaliland as protectorates. In 1945,
during the Potsdam Conference, the United Nations granted Italy trusteeship of Italian Somaliland, but only under
close supervision and on the condition — first proposed by the Somali Youth League (SYL) and other nascent Somali
political organizations, such as Hizbia Digil Mirifle Somali (HDMS) and the Somali National League (SNL) — that Somalia
achieve independence within ten years. British Somaliland remained a protectorate of Britain until 1960. To the extent
that Italy held the territory by UN mandate, the trusteeship provisions gave the Somalis the opportunity to gain
experience in political education and self-government. These were advantages that British Somaliland, which was to
be incorporated into the new Somali state, did not have. Although in the 1950s British colonial officials attempted,
through various administrative development efforts, to make up for past neglect, the protectorate stagnated. The
disparity between the two territories in economic development and political experience would cause serious difficulties
when it came time to integrate the two parts. Meanwhile, in 1948, under pressure from their World War II allies and
to the dismay of the Somalis, the British "returned" the Haud (an important Somali grazing area that was presumably
'protected' by British treaties with the Somalis in 1884 and 1886) and the Ogaden to Ethiopia, based on a treaty
they signed in 1897 in which the British ceded Somali territory to the Ethiopian Emperor Menelik in exchange for
his help against plundering by Somali clans. Britain included the proviso that the Somali nomads would retain their
autonomy, but Ethiopia immediately claimed sovereignty over them. This prompted an unsuccessful bid by Britain in
1956 to buy back the Somali lands it had turned over. Britain also granted administration of the almost exclusively
Somali-inhabited Northern Frontier District (NFD) to Kenyan nationalists despite an informal plebiscite demonstrating
the overwhelming desire of the region's population to join the newly formed Somali Republic. A referendum was held
in neighboring Djibouti (then known as French Somaliland) in 1958, on the eve of Somalia's independence in 1960,
to decide whether to join the Somali Republic or to remain with France. The referendum turned out in favour
of a continued association with France, largely due to a combined yes vote by the sizable Afar ethnic group and resident
Europeans. There was also widespread vote rigging, with the French expelling thousands of Somalis before the referendum
reached the polls. The majority of those who voted no were Somalis who were strongly in favour of joining a united
Somalia, as had been proposed by Mahmoud Harbi, Vice President of the Government Council. Harbi was killed in a plane
crash two years later. Djibouti finally gained its independence from France in 1977, and Hassan Gouled Aptidon, a
Somali who had campaigned for a yes vote in the referendum of 1958, eventually wound up as Djibouti's first president
(1977–1991). British Somaliland became independent on 26 June 1960 as the State of Somaliland, and the Trust Territory
of Somalia (the former Italian Somaliland) followed suit five days later. On 1 July 1960, the two territories united
to form the Somali Republic, albeit within boundaries drawn up by Italy and Britain. A government was formed by Abdullahi
Issa Mohamud, Muhammad Haji Ibrahim Egal, and other members of the trusteeship and protectorate governments, with Haji
Bashir Ismail Yusuf as President of the Somali National Assembly, Aden Abdullah Osman Daar as the President of the
Somali Republic and Abdirashid Ali Shermarke as Prime Minister (later to become President from 1967 to 1969). On
20 July 1961 and through a popular referendum, the people of Somalia ratified a new constitution, which was first
drafted in 1960. In 1967, Muhammad Haji Ibrahim Egal became Prime Minister, a position to which he was appointed
by Shermarke. Egal would later become the President of the autonomous Somaliland region in northwestern Somalia.
On 15 October 1969, while paying a visit to the northern town of Las Anod, Somalia's then President Abdirashid Ali
Shermarke was shot dead by one of his own bodyguards. His assassination was quickly followed by a military coup d'état
on 21 October 1969 (the day after his funeral), in which the Somali Army seized power without encountering armed
opposition — essentially a bloodless takeover. The putsch was spearheaded by Major General Mohamed Siad Barre, who
at the time commanded the army. Alongside Barre, the Supreme Revolutionary Council (SRC) that assumed power after
President Sharmarke's assassination was led by Lieutenant Colonel Salaad Gabeyre Kediye and Chief of Police Jama
Korshel. The SRC subsequently renamed the country the Somali Democratic Republic, dissolved the parliament and the
Supreme Court, and suspended the constitution. The revolutionary army established large-scale public works programs
and successfully implemented an urban and rural literacy campaign, which helped dramatically increase the literacy
rate. In addition to a nationalization program of industry and land, the new regime's foreign policy placed an emphasis
on Somalia's traditional and religious links with the Arab world, eventually joining the Arab League (AL) in 1974.
That same year, Barre also served as chairman of the Organization of African Unity (OAU), the predecessor of the
African Union (AU). Somali people in the Horn of Africa are divided among different countries (Somalia, Djibouti,
Ethiopia, and northeastern Kenya) that were artificially and some might say arbitrarily partitioned by the former
imperial powers. Pan-Somalism is an ideology that advocates the unification of all ethnic Somalis once part of Somali
empires such as the Ajuran Empire, the Adal Sultanate, the Gobroon Dynasty and the Dervish State under one flag and
one nation. The Siad Barre regime actively promoted Pan-Somalism, which eventually led to the Ogaden War between
Somalia on one side, and Ethiopia, Cuba and the Soviet Union on the other. According to Y chromosome studies by Sanchez
et al. (2005), Cruciani et al. (2004, 2007), the Somalis are paternally closely related to other Afro-Asiatic-speaking
groups in Northeast Africa. Besides comprising the majority of the Y-DNA in Somalis, the E1b1b1a (formerly E3b1a)
haplogroup also makes up a significant proportion of the paternal DNA of Ethiopians, Sudanese, Egyptians, Berbers,
North African Arabs, as well as many Mediterranean populations. Sanchez et al. (2005) observed the M78 subclade of
E1b1b in about 77% of their Somali male samples. According to Cruciani et al. (2007), the presence of this subhaplogroup
in the Horn region may represent the traces of an ancient migration from Egypt/Libya. After haplogroup E1b1b, the
second most frequently occurring Y-DNA haplogroup among Somalis is the West Asian haplogroup T (M70). It is observed
in slightly more than 10% of Somali males. Haplogroup T, like haplogroup E1b1b, is also typically found among populations
of Northeast Africa, North Africa, the Near East and the Mediterranean. According to mtDNA studies by Holden (2005)
and Richards et al. (2006), a significant proportion of the maternal lineages of Somalis consists of the M1 haplogroup.
This mitochondrial clade is common among Ethiopians and North Africans, particularly Egyptians and Algerians. M1
is believed to have originated in Asia, where its parent M clade represents the majority of mtDNA lineages. This
haplogroup is also thought to possibly correlate with the Afro-Asiatic language family. According to an autosomal
DNA study by Hodgson et al. (2014), the Afro-Asiatic languages were likely spread across Africa and the Near East
by an ancestral population(s) carrying a newly identified non-African genetic component, which the researchers dub
the "Ethio-Somali". This Ethio-Somali component is today most common among Afro-Asiatic-speaking populations in the
Horn of Africa. It reaches a frequency peak among ethnic Somalis, representing the majority of their ancestry. The
Ethio-Somali component is most closely related to the Maghrebi non-African genetic component, and is believed to
have diverged from all other non-African ancestries at least 23,000 years ago. On this basis, the researchers suggest
that the original Ethio-Somali carrying population(s) probably arrived in the pre-agricultural period from the Near
East, having crossed over into northeastern Africa via the Sinai Peninsula. The population then likely split into
two branches, with one group heading westward toward the Maghreb and the other moving south into the Horn. The analysis
of HLA antigens has also helped clarify the possible background of the Somali people, as the distribution of haplotype
frequencies varies among population groups (Mohamoud et al. 2006). The history of Islam in Somalia is
as old as the religion itself. The early persecuted Muslims fled to various places in the region, including the city
of Zeila in modern-day northern Somalia, so as to seek protection from the Quraysh. Somalis were among the first
populations on the continent to embrace Islam. With very few exceptions, Somalis are entirely Muslims, the majority
belonging to the Sunni branch of Islam and the Shafi`i school of Islamic jurisprudence, although a few are also adherents
of the Shia Muslim denomination. Qur'anic schools (also known as dugsi) remain the basic system of traditional religious
instruction in Somalia. They provide Islamic education for children, thereby filling a clear religious and social
role in the country. Known as the most stable local, non-formal system of education providing basic religious and
moral instruction, their strength rests on community support and their use of locally made and widely available teaching
materials. The Qur'anic system, which teaches the greatest number of students relative to other educational sub-sectors,
is oftentimes the only system accessible to Somalis in nomadic areas, as compared to urban ones. A study from 1993 found,
among other things, that "unlike in primary schools where gender disparity is enormous, around 40 per cent of Qur'anic
school pupils are girls; but the teaching staff have minimum or no qualification necessary to ensure intellectual
development of children." To address these concerns, the Somali government on its own part subsequently established
the Ministry of Endowment and Islamic Affairs, under which Qur'anic education is now regulated. In the Somali diaspora,
multiple Islamic fundraising events are held every year in cities like Birmingham, London, Toronto and Minneapolis,
where Somali scholars and professionals give lectures and answer questions from the audience. The purpose of these
events is usually to raise money for new schools or universities in Somalia, to help Somalis that have suffered as
a consequence of floods and/or droughts, or to gather funds for the creation of new mosques like the Abuubakar-As-Saddique
Mosque, which is currently undergoing construction in the Twin cities. In addition, the Somali community has produced
numerous important Muslim figures over the centuries, many of whom have significantly shaped the course of Islamic
learning and practice in the Horn of Africa, the Arabian Peninsula and well beyond. The clan groupings of the Somali
people are important social units, and clan membership plays a central part in Somali culture and politics. Clans
are patrilineal and are often divided into sub-clans, sometimes with many sub-divisions. The tombs of the founders
of the Darod, Dir and Isaaq major clans as well as the Abgaal sub-clan of the Hawiye are all located in northern
Somalia. Tradition holds this general area as an ancestral homeland of the Somali people. Somali society is traditionally
ethnically endogamous; to extend ties of alliance, marriage is often contracted with another ethnic Somali from a different
clan. Thus, for example, a recent study observed that in 89 marriages contracted by men of the Dhulbahante clan,
55 (62%) were with women of Dhulbahante sub-clans other than those of their husbands; 30 (33.7%) were with women
of surrounding clans of other clan families (Isaaq, 28; Hawiye, 3); and 3 (4.3%) were with women of other clans of
the Darod clan family (Majerteen 2, Ogaden 1). In 1975, the most prominent government reforms regarding family law
in a Muslim country were set in motion in the Somali Democratic Republic, which put women and men, including husbands
and wives, on complete equal footing. The 1975 Somali Family Law gave men and women equal division of property between
the husband and wife upon divorce and the exclusive right to control by each spouse over his or her personal property.
Somalis constitute the largest ethnic group in Somalia, at approximately 85% of the nation's inhabitants. They are
traditionally nomads, but since the late 20th century, many have moved to urban areas. While most Somalis can be
found in Somalia proper, large numbers also live in Ethiopia, Djibouti, Kenya, Yemen, the Middle East, South Asia
and Europe due to their seafaring tradition. Civil strife in the early 1990s greatly increased the size of the Somali
diaspora, as many of the best educated Somalis left for the Middle East, Europe and North America. In Canada, the
cities of Toronto, Ottawa, Calgary, Edmonton, Montreal, Vancouver, Winnipeg and Hamilton all harbor Somali populations.
Statistics Canada's 2006 census ranks people of Somali descent as the 69th largest ethnic group in Canada. While
the distribution of Somalis per country in Europe is hard to measure because the Somali community on the continent
has grown so quickly in recent years, an official 2010 estimate reported 108,000 Somalis living in the United Kingdom.
Somalis in Britain are largely concentrated in the cities of London, Sheffield, Bristol, Birmingham, Cardiff, Liverpool,
Manchester, Leeds, and Leicester, with London alone accounting for roughly 78% of Britain's Somali population. There
are also significant Somali communities in Sweden: 57,906 (2014); the Netherlands: 37,432 (2014); Norway: 38,413
(2015); Denmark: 18,645 (2014); and Finland: 16,721 (2014). In the United States, Minneapolis, Saint Paul, Columbus,
San Diego, Seattle, Washington, D.C., Houston, Atlanta, Los Angeles, Portland, Oregon, Denver, Nashville, Green Bay, Lewiston,
Portland, Maine and Cedar Rapids have the largest Somali populations. An estimated 20,000 Somalis emigrated to the
U.S. state of Minnesota some ten years ago and the Twin Cities (Minneapolis and Saint Paul) now have the highest
population of Somalis in North America. The city of Minneapolis hosts hundreds of Somali-owned and operated businesses
offering a variety of products, including leather shoes, jewelry and other fashion items, halal meat, and hawala
or money transfer services. Community-based video rental stores likewise carry the latest Somali films and music.
The number of Somalis has especially surged in the Cedar-Riverside area of Minneapolis. There is a sizable Somali
community in the United Arab Emirates. Somali-owned businesses line the streets of Deira, the Dubai city centre,
with only Iranians exporting more products from the city at large. Internet cafés, hotels, coffee shops, restaurants
and import-export businesses are all testimony to the Somalis' entrepreneurial spirit. Star African Air is also one
of three Somali-owned airlines which are based in Dubai. Besides their traditional areas of inhabitation in Greater
Somalia, a Somali community mainly consisting of entrepreneurs, academics, and students also exists in Egypt. In
addition, there is an historical Somali community in the general Sudan area. Primarily concentrated in the north
and Khartoum, the expatriate community mainly consists of students as well as some businesspeople. More recently,
Somali entrepreneurs have established themselves in Kenya, investing over $1.5 billion in the Somali enclave of Eastleigh
alone. In South Africa, Somali businesspeople also provide most of the retail trade in informal settlements around
the Western Cape province. The Somali language is a member of the Cushitic branch of the Afro-Asiatic family. Its
nearest relatives are the Afar and Saho languages. Somali is the best documented of the Cushitic languages, with
academic studies of it dating from before 1900. The exact number of speakers of Somali is unknown. One source estimates
that there are 7.78 million speakers of Somali in Somalia itself and 12.65 million speakers globally. The Somali
language is spoken by ethnic Somalis in Greater Somalia and the Somali diaspora. Somali dialects are divided into
three main groups: Northern, Benaadir and Maay. Northern Somali (or Northern-Central Somali) forms the basis for
Standard Somali. Benaadir (also known as Coastal Somali) is spoken on the Benadir coast from Adale to south of Merca,
including Mogadishu, as well as in the immediate hinterland. The coastal dialects have additional phonemes which
do not exist in Standard Somali. Maay is principally spoken by the Digil and Mirifle (Rahanweyn) clans in the southern
areas of Somalia. A number of writing systems have been used over the years for transcribing the language. Of these,
the Somali alphabet is the most widely used, and has been the official writing script in Somalia since the government
of former President of Somalia Mohamed Siad Barre formally introduced it in October 1972. The script was developed
by the Somali linguist Shire Jama Ahmed specifically for the Somali language, and uses all letters of the English
Latin alphabet except p, v and z. Besides Ahmed's Latin script, other orthographies that have been used for centuries
for writing Somali include the long-established Arabic script and Wadaad's writing. Indigenous writing systems developed
in the twentieth century include the Osmanya, Borama and Kaddare scripts, which were invented by Osman Yusuf Kenadid,
Abdurahman Sheikh Nuur and Hussein Sheikh Ahmed Kaddare, respectively. In addition to Somali, Arabic, which is also
an Afro-Asiatic tongue, is an official national language in both Somalia and Djibouti. Many Somalis speak it due
to centuries-old ties with the Arab world, the far-reaching influence of the Arabic media, and religious education.
Somalia and Djibouti are also both members of the Arab League. The culture of Somalia is an amalgamation of traditions
developed independently and through interaction with neighbouring and far away civilizations, such as other parts
of Northeast Africa, the Arabian Peninsula, India and Southeast Asia. The textile-making communities in Somalia are
a continuation of an ancient textile industry, as is the culture of wood carving, pottery and monumental architecture
that dominates Somali interiors and landscapes. The cultural diffusion of Somali commercial enterprise can be detected
in its cuisine, which contains Southeast Asian influences. Due to the Somali people's passionate love for and facility
with poetry, Somalia has often been referred to as a "Nation of Poets" and a "Nation of Bards" by scholars including,
among others, the Canadian novelist Margaret Laurence. All of these traditions, including festivals, martial arts,
dress, literature, sport and games such as Shax, have immensely contributed to the enrichment of Somali heritage.
Somalis have a rich musical heritage centered on traditional Somali folklore. Most Somali songs are pentatonic. That
is, they only use five pitches per octave in contrast to a heptatonic (seven note) scale, such as the major scale.
At first listen, Somali music might be mistaken for the sounds of nearby regions such as Ethiopia, Sudan or Arabia,
but it is ultimately recognizable by its own unique tunes and styles. Somali songs are usually the product of collaboration
between lyricists (midho), songwriters (lahan) and singers ('odka or "voice"). Growing out of the Somali people's
rich storytelling tradition, the first few feature-length Somali films and cinematic festivals emerged in the early
1960s, immediately after independence. Following the creation of the Somali Film Agency (SFA) regulatory body in
1975, the local film scene began to expand rapidly. The Somali filmmaker Ali Said Hassan concurrently served as the
SFA's representative in Rome. In the 1970s and early 1980s, popular musicals known as riwaayado were the main driving
force behind the Somali movie industry. Epic and period films as well as international co-productions followed suit,
facilitated by the proliferation of video technology and national television networks. Said Salah Ahmed during this
period directed his first feature film, The Somali Darwish (The Somalia Dervishes), devoted to the Dervish State.
In the 1990s and 2000s, a new wave of more entertainment-oriented movies emerged. Referred to as Somaliwood, this
upstart, youth-based cinematic movement has energized the Somali film industry and in the process introduced innovative
storylines, marketing strategies and production techniques. The young directors Abdisalam Aato of Olol Films and
Abdi Malik Isak are at the forefront of this quiet revolution. Somali art is the artistic culture of the Somali people,
both historic and contemporary. These include artistic traditions in pottery, music, architecture, wood carving and
other genres. Somali art is characterized by its aniconism, partly as a result of the vestigial influence of the
pre-Islamic mythology of the Somalis coupled with their ubiquitous Muslim beliefs. However, there have been cases
in the past of artistic depictions representing living creatures, such as certain ancient rock paintings in northern
Somalia, the golden birds on the Mogadishan canopies, and the plant decorations on religious tombs in southern Somalia.
More typically, intricate patterns and geometric designs, bold colors and monumental architecture were the norm.
Additionally, henna is an important part of Somali culture. It is worn by Somali women on their hands, arms, feet
and neck during weddings, Eid, Ramadan, and other festive occasions. Somali henna designs are similar to those in
the Arabian peninsula, often featuring flower motifs and triangular shapes. The palm is also frequently decorated
with a dot of henna and the fingertips are dipped in the dye. Henna parties are usually held before the wedding takes
place. Somali women have likewise traditionally applied kohl (kuul) to their eyes. Usage of the eye cosmetic in the
Horn region is believed to date to the ancient Land of Punt. Football is the most popular sport amongst Somalis.
Important competitions are the Somalia League and Somalia Cup. The multi-ethnic Ocean Stars, Somalia's national team,
first participated at the Olympic Games in 1972 and has sent athletes to compete in most Summer Olympic Games since
then. The equally diverse Somali beach soccer team also represents the country in international beach soccer competitions.
In addition, several international footballers such as Mohammed Ahamed Jama, Liban Abdi, Ayub Daud and Abdisalam
Ibrahim have played in European top divisions. The FIBA Africa Championship 1981 was hosted by Somalia from 15–23
December 1981. The games were played in Mogadishu, and the Somali national team received the bronze medal. Abdi Bile
won the 1500 m event at the World Championships in 1987, running the fastest final 800 m of any 1,500 meter race
in history. He was a two-time Olympian (1984 and 1996) and dominated the event in the late 1980s. Hussein Ahmed Salah,
a Somalia-born former long-distance runner from Djibouti, won a bronze medal in the marathon at the 1988 Summer Olympics.
He also won silver medals in this event at the 1987 and 1991 World Championships, as well as the 1985 IAAF World
Marathon Cup. Mo Farah is a double Olympic gold medal winner and world champion, and holds the European track record
for 10,000 metres, the British road record for 10,000 metres, the British indoor record in the 3000 metres, the British
track record for 5000 metres and the European indoor record for 5000 metres. Mohammed Ahmed is a Canadian
long-distance runner who represented Canada in the 10,000 meter races at the 2012 Summer Olympics and the 2013 World
Championships in Athletics. In the martial arts, Faisal Jeylani Aweys and Mohamed Deq Abdulle also took home a silver
medal and fourth place, respectively, at the 2013 Open World Taekwondo Challenge Cup in Tongeren. The Somali National
Olympic committee has devised a special support program to ensure continued success in future tournaments. Additionally,
Mohamed Jama has won both world and European titles in K1 and Thai Boxing. When not dressed in Westernized clothing
such as jeans and t-shirts, Somali men typically wear the macawis, which is a sarong-like garment worn around the
waist. On their heads, they often wrap a colorful turban or wear the koofiyad, an embroidered fez. Due to Somalia's
proximity to and close ties with the Arabian Peninsula, many Somali men also wear the jellabiya (jellabiyad or qamiis
in Somali), a long white garment common in the Arab world. During regular, day-to-day activities, Somali women usually
wear the guntiino, a long stretch of cloth tied over the shoulder and draped around the waist. It is usually made
out of alandi, which is a textile common in the Horn region and some parts of North Africa. The garment can be worn
in different styles. It can also be made with other fabrics, including white cloth with gold borders. For more formal
settings, such as at weddings or religious celebrations like Eid, women wear the dirac. It is a long, light, diaphanous
voile dress made of silk, chiffon, taffeta or saree fabric. The gown is worn over a full-length half-slip and a brassiere.
Known as the gorgorad, the underskirt is made out of silk and serves as a key part of the overall outfit. The dirac
is usually sparkly and very colorful, the most popular styles being those with gilded borders or threads. The fabric
is typically acquired from Somali clothing stores in tandem with the gorgorad. In the past, dirac fabric was also
frequently purchased from South Asian merchandisers. Married women tend to sport headscarves referred to as shaash.
They also often cover their upper body with a shawl, which is known as garbasaar. Unmarried or young women, however,
do not always cover their heads. Traditional Arabian garb, such as the jilbab and abaya, is also commonly worn. Additionally,
Somali women have a long tradition of wearing gold jewelry, particularly bangles. During weddings, the bride is frequently
adorned in gold. Many Somali women by tradition also wear gold necklaces and anklets. The Somali flag is an ethnic
flag conceived to represent ethnic Somalis. It was created in 1954 by the Somali scholar Mohammed Awale Liban, after
he had been selected by the labour trade union of the Trust Territory of Somalia to come up with a design. Upon independence
in 1960, the flag was adopted as the national flag of the nascent Somali Republic. The five-pointed Star of Unity
in the flag's center represents the Somali ethnic group inhabiting the five territories in Greater Somalia. Somali
cuisine varies from region to region and consists of a fusion of diverse culinary influences. It is the product of
Somalia's rich tradition of trade and commerce. Despite the variety, there remains one thing that unites the various
regional cuisines: all food is served halal. There are therefore no pork dishes, alcohol is not served, nothing that
died on its own is eaten, and no blood is incorporated. Qado or lunch is often elaborate. Varieties of bariis (rice),
the most popular probably being basmati, usually serve as the main dish. Spices like cumin, cardamom, cloves, cinnamon,
and garden sage are used to aromatize these different rice delicacies. Somalis eat dinner as late as 9 pm. During
Ramadan, supper is often served after Tarawih prayers; sometimes as late as 11 pm. Xalwo (halva) is a popular confection
eaten during festive occasions, such as Eid celebrations or wedding receptions. It is made from sugar, corn starch,
cardamom powder, nutmeg powder and ghee. Peanuts are also sometimes added to enhance texture and flavor. After meals,
homes are traditionally perfumed using frankincense (lubaan) or incense (cuunsi), which is prepared inside an incense
burner referred to as a dabqaad. Somali scholars have for centuries produced many notable examples of Islamic literature
ranging from poetry to Hadith. With the adoption of the Latin alphabet in 1972 to transcribe the Somali language,
numerous contemporary Somali authors have also released novels, some of which have gone on to receive worldwide acclaim.
Of these modern writers, Nuruddin Farah is probably the most celebrated. Books such as From a Crooked Rib and Links
are considered important literary achievements, works which have earned Farah, among other accolades, the 1998 Neustadt
International Prize for Literature. Farah Mohamed Jama Awl is another prominent Somali writer who is perhaps best
known for his Dervish era novel, Ignorance is the enemy of love. Young upstart Nadifa Mohamed was also awarded the
2010 Betty Trask Prize. Additionally, Mohamed Ibrahim Warsame 'Hadrawi' is considered by many to be the greatest
living Somali poet, and several of his works have been translated internationally. Somalis for centuries have practiced
a form of customary law, which they call Xeer. Xeer is a polycentric legal system where there is no monopolistic
agent that determines what the law should be or how it should be interpreted. The Xeer legal system is assumed to
have developed exclusively in the Horn of Africa since approximately the 7th century. There is no evidence that it
developed elsewhere or was greatly influenced by any foreign legal system. The fact that Somali legal terminology
is practically devoid of loan words from foreign languages suggests that Xeer is truly indigenous. The Xeer legal
system also requires a certain amount of specialization of different functions within the legal framework. Thus,
one can find odayal (judges), xeer boggeyaal (jurists), guurtiyaal (detectives), garxajiyaal (attorneys), murkhaatiyal
(witnesses) and waranle (police officers) to enforce the law. Somali architecture is a rich and diverse tradition
of engineering and designing. It involves multiple different construction types, such as stone cities, castles, citadels,
fortresses, mosques, mausoleums, towers, tombs, tumuli, cairns, megaliths, menhirs, stelae, dolmens, stone circles,
monuments, temples, enclosures, cisterns, aqueducts, and lighthouses. Spanning the ancient, medieval and early modern
periods in Greater Somalia, it also includes the fusion of Somalo-Islamic architecture with Western designs in contemporary
times. In ancient Somalia, pyramidical structures known in Somali as taalo were a popular burial style, with hundreds
of these dry stone monuments scattered around the country today. Houses were built of dressed stone similar to the
ones in Ancient Egypt. There are also examples of courtyards and large stone walls enclosing settlements, such as
the Wargaade Wall. The peaceful introduction of Islam in the early medieval era of Somalia's history brought Islamic
architectural influences from Arabia and Persia. This had the effect of stimulating a shift in construction from
drystone and other related materials to coral stone, sundried bricks, and the widespread use of limestone in Somali
architecture. Many of the new architectural designs, such as mosques, were built on the ruins of older structures.
This practice would continue throughout the following centuries. Scholarly research
concerning Somalis and Greater Somalia is known as Somali Studies. It consists of several disciplines such as anthropology,
sociology, linguistics, historiography and archaeology. The field draws from old Somali chronicles, records and oral
literature, in addition to written accounts and traditions about Somalis from explorers and geographers in the Horn
of Africa and the Middle East. Since 1980, prominent Somalist scholars from around the world have also gathered annually
to hold the International Congress of Somali Studies.
In European history, the Middle Ages or medieval period lasted from the 5th to the 15th century. It began with the collapse
of the Western Roman Empire and merged into the Renaissance and the Age of Discovery. The Middle Ages is the middle
period of the three traditional divisions of Western history: Antiquity, Medieval period, and Modern period. The
Medieval period is itself subdivided into the Early, the High, and the Late Middle Ages. Depopulation, deurbanisation,
invasion, and movement of peoples, which had begun in Late Antiquity, continued in the Early Middle Ages. The barbarian
invaders, including various Germanic peoples, formed new kingdoms in what remained of the Western Roman Empire. In
the 7th century, North Africa and the Middle East—once part of the Eastern Roman Empire—came under the rule of the
Caliphate, an Islamic empire, after conquest by Muhammad's successors. Although there were substantial changes in
society and political structures, the break with Antiquity was not complete. The still-sizeable Byzantine Empire
survived in the east and remained a major power. The empire's law code, the Code of Justinian, was rediscovered in
Northern Italy in 1070 and became widely admired later in the Middle Ages. In the West, most kingdoms incorporated
the few extant Roman institutions. Monasteries were founded as campaigns to Christianise pagan Europe continued.
The Franks, under the Carolingian dynasty, briefly established the Carolingian Empire during the later 8th and early
9th century. It covered much of Western Europe, but later succumbed to the pressures of internal civil wars combined
with external invasions—Vikings from the north, Magyars from the east, and Saracens from the south. During the High
Middle Ages, which began after 1000, the population of Europe increased greatly as technological and agricultural
innovations allowed trade to flourish and the Medieval Warm Period climate change allowed crop yields to increase.
Manorialism, the organisation of peasants into villages that owed rent and labour services to the nobles, and feudalism,
the political structure whereby knights and lower-status nobles owed military service to their overlords in return
for the right to rent from lands and manors, were two of the ways society was organised in the High Middle Ages.
The Crusades, first preached in 1095, were military attempts by Western European Christians to regain control of
the Holy Land from the Muslims. Kings became the heads of centralised nation states, reducing crime and violence
but making the ideal of a unified Christendom more distant. Intellectual life was marked by scholasticism, a philosophy
that emphasised joining faith to reason, and by the founding of universities. The theology of Thomas Aquinas, the
paintings of Giotto, the poetry of Dante and Chaucer, the travels of Marco Polo, and the architecture of Gothic cathedrals
such as Chartres are among the outstanding achievements toward the end of this period, and into the Late Middle Ages.
The Late Middle Ages was marked by difficulties and calamities including famine, plague, and war, which significantly
diminished the population of Europe; between 1347 and 1350, the Black Death killed about a third of Europeans. Controversy,
heresy, and schism within the Church paralleled the interstate conflict, civil strife, and peasant revolts that occurred
in the kingdoms. Cultural and technological developments transformed European society, concluding the Late Middle
Ages and beginning the early modern period. The Middle Ages is one of the three major periods in the most enduring
scheme for analysing European history: classical civilisation, or Antiquity; the Middle Ages; and the Modern Period.
Medieval writers divided history into periods such as the "Six Ages" or the "Four Empires", and considered their
time to be the last before the end of the world. When referring to their own times, they spoke of them as being "modern".
In the 1330s, the humanist and poet Petrarch referred to pre-Christian times as antiqua (or "ancient") and to the
Christian period as nova (or "new"). Leonardo Bruni was the first historian to use tripartite periodisation in his
History of the Florentine People (1442). Bruni and later historians argued that Italy had recovered since Petrarch's
time, and therefore added a third period to Petrarch's two. The "Middle Ages" first appears in Latin in 1469 as media
tempestas or "middle season". In early usage, there were many variants, including medium aevum, or "middle age",
first recorded in 1604, and media saecula, or "middle ages", first recorded in 1625. The alternative term "medieval"
(or occasionally "mediaeval") derives from medium aevum. Tripartite periodisation became standard after the German
17th century historian Christoph Cellarius divided history into three periods: Ancient, Medieval, and Modern. The
most commonly given starting point for the Middle Ages is 476, first used by Bruni.[A] For Europe as a whole, 1500
is often considered to be the end of the Middle Ages, but there is no universally agreed upon end date. Depending
on the context, events such as Christopher Columbus's first voyage to the Americas in 1492, the conquest of Constantinople
by the Turks in 1453, or the Protestant Reformation in 1517 are sometimes used. English historians often use the
Battle of Bosworth Field in 1485 to mark the end of the period. For Spain, dates commonly used are the death of King
Ferdinand II in 1516, the death of Queen Isabella I of Castile in 1504, or the conquest of Granada in 1492. Historians
from Romance-speaking countries tend to divide the Middle Ages into two parts: an earlier "High" and later "Low"
period. English-speaking historians, following their German counterparts, generally subdivide the Middle Ages into
three intervals: "Early", "High", and "Late". In the 19th century, the entire Middle Ages were often referred to
as the "Dark Ages",[B] but with the adoption of these subdivisions, use of this term was restricted to the Early
Middle Ages, at least among historians. The Roman Empire reached its greatest territorial extent during the 2nd century
AD; the following two centuries witnessed the slow decline of Roman control over its outlying territories. Economic
issues, including inflation, and external pressure on the frontiers combined to make the 3rd century politically
unstable, with emperors coming to the throne only to be rapidly replaced by new usurpers. Military expenses increased
steadily during the 3rd century, mainly in response to the war with Sassanid Persia, which revived in the middle
of the 3rd century. The army doubled in size, and cavalry and smaller units replaced the legion as the main tactical
unit. The need for revenue led to increased taxes and a decline in numbers of the curial, or landowning, class, and
decreasing numbers of them willing to shoulder the burdens of holding office in their native towns. More bureaucrats
were needed in the central administration to deal with the needs of the army, which led to complaints from civilians
that there were more tax-collectors in the empire than tax-payers. The Emperor Diocletian (r. 284–305) split the
empire into separately administered eastern and western halves in 286; the empire was not considered divided by its
inhabitants or rulers, as legal and administrative promulgations in one division were considered valid in the other.[C]
In 330, after a period of civil war, Constantine the Great (r. 306–337) refounded the city of Byzantium as the newly
renamed eastern capital, Constantinople. Diocletian's reforms strengthened the governmental bureaucracy, reformed
taxation, and strengthened the army, which bought the empire time but did not resolve the problems it was facing:
excessive taxation, a declining birthrate, and pressures on its frontiers, among others. Civil war between rival
emperors became common in the middle of the 4th century, diverting soldiers from the empire's frontier forces and
allowing invaders to encroach. For much of the 4th century, Roman society stabilised in a new form that differed
from the earlier classical period, with a widening gulf between the rich and poor, and a decline in the vitality
of the smaller towns. Another change was the Christianisation, or conversion of the empire to Christianity, a gradual
process that lasted from the 2nd to the 5th centuries. In 376, the Goths, fleeing from the Huns, received permission
from Emperor Valens (r. 364–378) to settle in the Roman province of Thracia in the Balkans. The settlement did not
go smoothly, and when Roman officials mishandled the situation, the Goths began to raid and plunder.[D] Valens,
attempting to put down the disorder, was killed fighting the Goths at the Battle of Adrianople on 9 August 378.
As well as the threat from such tribal confederacies from the north, internal divisions within the empire, especially
within the Christian Church, caused problems. In 401, the Visigoths invaded the Western Roman Empire and, although
briefly forced back from Italy, in 410 sacked the city of Rome. In 406 the Alans, Vandals, and Suevi crossed into
Gaul; over the next three years they spread across Gaul and in 409 crossed the Pyrenees Mountains into modern-day
Spain. The Migration Period began, during which various peoples, initially largely Germanic, moved across Europe.
The Franks, Alemanni, and the Burgundians all ended up in northern Gaul while the Angles, Saxons, and Jutes settled
in Britain. In the 430s the Huns began invading the empire; their king Attila (r. 434–453) led invasions into the
Balkans in 442 and 447, Gaul in 451, and Italy in 452. The Hunnic threat remained until Attila's death in 453, when
the Hunnic confederation he led fell apart. These invasions by the tribes completely changed the political and demographic
nature of what had been the Western Roman Empire. By the end of the 5th century the western section of the empire
was divided into smaller political units, ruled by the tribes that had invaded in the early part of the century.
The deposition of the last emperor of the west, Romulus Augustus, in 476 has traditionally marked the end of the
Western Roman Empire.[E] The Eastern Roman Empire, often referred to as the Byzantine Empire after the fall of its
western counterpart, had little ability to assert control over the lost western territories. The Byzantine emperors
maintained a claim over the territory, but none of the new kings in the west dared to elevate himself to the position
of emperor of the west. Byzantine control of most of the Western Empire could not be sustained; the reconquest of
the Italian peninsula and Mediterranean periphery by Justinian (r. 527–565) was the sole, and temporary, exception.
The political structure of Western Europe changed with the end of the united Roman Empire. Although the movements
of peoples during this period are usually described as "invasions", they were not just military expeditions but migrations
of entire peoples into the empire. Such movements were aided by the refusal of the western Roman elites to support
the army or pay the taxes that would have allowed the military to suppress the migration. The emperors of the 5th
century were often controlled by military strongmen such as Stilicho (d. 408), Aspar (d. 471), Ricimer (d. 472),
or Gundobad (d. 516), who were partly or fully of non-Roman background. When the line of western emperors ceased,
many of the kings who replaced them were from the same background. Intermarriage between the new kings and the Roman
elites was common. This led to a fusion of Roman culture with the customs of the invading tribes, including the popular
assemblies that allowed free male tribal members more say in political matters than was common in the Roman state.
Material artefacts left by the Romans and the invaders are often similar, and tribal items were often modelled on
Roman objects. Much of the scholarly and written culture of the new kingdoms was also based on Roman intellectual
traditions. An important difference was the gradual loss of tax revenue by the new polities. Many of the new political
entities no longer supported their armies through taxes, instead relying on granting them land or rents. This meant
there was less need for large tax revenues and so the taxation systems decayed. Warfare was common between and within
the kingdoms. Slavery declined as the supply weakened, and society became more rural.[F] Between the 5th and 8th
centuries, new peoples and individuals filled the political void left by Roman centralised government. The Ostrogoths
settled in Italy in the late 5th century under Theoderic (d. 526) and set up a kingdom marked by its co-operation
between the Italians and the Ostrogoths, at least until the last years of Theodoric's reign. The Burgundians settled
in Gaul, and after an earlier realm was destroyed by the Huns in 436 formed a new kingdom in the 440s. Between today's
Geneva and Lyon, it grew to become the realm of Burgundy in the late 5th and early 6th centuries. In northern Gaul,
the Franks and Britons set up small polities. The Frankish Kingdom was centred in north-eastern Gaul, and the first
king of whom much is known is Childeric (d. 481).[G] Under Childeric's son Clovis (r. 481–511), the Frankish kingdom
expanded and converted to Christianity. Britons, related to the natives of Britannia — modern-day Great Britain —
settled in what is now Brittany.[H] Other monarchies were established by the Visigoths in Iberia, the Suevi in north-western
Iberia, and the Vandals in North Africa. In the 6th century, the Lombards settled in northern Italy, replacing the
Ostrogothic kingdom with a grouping of duchies that occasionally selected a king to rule over them all. By the late
6th century this arrangement had been replaced by a permanent monarchy. The invasions brought new ethnic groups to
Europe, although some regions received a larger influx of new peoples than others. In Gaul for instance, the invaders
settled much more extensively in the north-east than in the south-west. Slavic peoples settled in Central and Eastern
Europe and the Balkan Peninsula. The settlement of peoples was accompanied by changes in languages. The Latin of
the Western Roman Empire was gradually replaced by languages based on, but distinct from, Latin, collectively known
as Romance languages. These changes from Latin to the new languages took many centuries. Greek remained the language
of the Byzantine Empire, but the migrations of the Slavs added Slavonic languages to Eastern Europe. As Western Europe
witnessed the formation of new kingdoms, the Eastern Roman Empire remained intact and experienced an economic revival
that lasted into the early 7th century. There were fewer invasions of the eastern section of the empire; most occurred
in the Balkans. Peace with Persia, the traditional enemy of Rome, lasted throughout most of the 5th century. The
Eastern Empire was marked by closer relations between the political state and Christian Church, with doctrinal matters
assuming an importance in eastern politics that they did not have in Western Europe. Legal developments included
the codification of Roman law; the first effort—the Theodosian Code—was completed in 438. Under Emperor Justinian
(r. 527–565), another compilation took place—the Corpus Juris Civilis. Justinian also oversaw the construction of
the Hagia Sophia in Constantinople and the reconquest of North Africa from the Vandals and Italy from the Ostrogoths,
under Belisarius (d. 565). The conquest of Italy was not complete, as a deadly outbreak of plague in 542 led to the
rest of Justinian's reign concentrating on defensive measures rather than further conquests. At the emperor's death,
the Byzantines had control of most of Italy, North Africa, and a small foothold in southern Spain. Justinian's reconquests
have been criticised by historians for overextending his realm and setting the stage for the Muslim conquests, but
many of the difficulties faced by Justinian's successors were due not just to over-taxation to pay for his wars but
to the essentially civilian nature of the empire, which made raising troops difficult. In the Eastern Empire the
slow infiltration of the Balkans by the Slavs added a further difficulty for Justinian's successors. It began gradually,
but by the late 540s Slavic tribes were in Thrace and Illyricum, and had defeated an imperial army near Adrianople
in 551. In the 560s the Avars began to expand from their base on the north bank of the Danube; by the end of the
6th century they were the dominant power in Central Europe and routinely able to force the eastern emperors to pay
tribute. They remained a strong power until 796. An additional problem to face the empire came as a result of the
involvement of Emperor Maurice (r. 582–602) in Persian politics when he intervened in a succession dispute. This
led to a period of peace, but when Maurice was overthrown, the Persians invaded and during the reign of Emperor Heraclius
(r. 610–641) controlled large chunks of the empire, including Egypt, Syria, and Asia Minor, until Heraclius' successful
counterattack. In 628 the empire secured a peace treaty and recovered all of its lost territories. In Western Europe,
some of the older Roman elite families died out while others became more involved with Church than secular affairs.
Values attached to Latin scholarship and education mostly disappeared, and while literacy remained important, it
became a practical skill rather than a sign of elite status. In the 4th century, Jerome (d. 420) dreamed that God
rebuked him for spending more time reading Cicero than the Bible. By the 6th century, Gregory of Tours (d. 594) had
a similar dream, but instead of being chastised for reading Cicero, he was chastised for learning shorthand. By the
late 6th century, the principal means of religious instruction in the Church had become music and art rather than
the book. Most intellectual efforts went towards imitating classical scholarship, but some original works were created,
along with now-lost oral compositions. The writings of Sidonius Apollinaris (d. 489), Cassiodorus (d. c. 585), and
Boethius (d. c. 525) were typical of the age. Changes also took place among laymen, as aristocratic culture focused
on great feasts held in halls rather than on literary pursuits. Clothing for the elites was richly embellished with
jewels and gold. Lords and kings supported entourages of fighters who formed the backbone of the military forces.[I]
Family ties within the elites were important, as were the virtues of loyalty, courage, and honour. These ties led
to the prevalence of the feud in aristocratic society, examples of which included those related by Gregory of Tours
that took place in Merovingian Gaul. Most feuds seem to have ended quickly with the payment of some sort of compensation.
Women took part in aristocratic society mainly in their roles as wives and mothers of men, with the role of mother
of a ruler being especially prominent in Merovingian Gaul. In Anglo-Saxon society the lack of many child rulers meant
a lesser role for women as queen mothers, but this was compensated for by the increased role played by abbesses of
monasteries. Only in Italy does it appear that women were always considered under the protection and control of a
male relative. Peasant society is much less documented than the nobility. Most of the surviving information available
to historians comes from archaeology; few detailed written records documenting peasant life remain from before the
9th century. Most of the descriptions of the lower classes come from either law codes or writers from the upper classes.
Landholding patterns in the West were not uniform; some areas had greatly fragmented landholding patterns, but in
other areas large contiguous blocks of land were the norm. These differences allowed for a wide variety of peasant
societies, some dominated by aristocratic landholders and others having a great deal of autonomy. Land settlement
also varied greatly. Some peasants lived in large settlements that numbered as many as 700 inhabitants. Others lived
in small groups of a few families and still others lived on isolated farms spread over the countryside. There were
also areas where the pattern was a mix of two or more of those systems. Unlike in the late Roman period, there was
no sharp break between the legal status of the free peasant and the aristocrat, and it was possible for a free peasant's
family to rise into the aristocracy over several generations through military service to a powerful lord. Roman city
life and culture changed greatly in the early Middle Ages. Although Italian cities remained inhabited, they contracted
significantly in size. Rome, for instance, shrank from a population of hundreds of thousands to around 30,000 by
the end of the 6th century. Roman temples were converted into Christian churches and city walls remained in use.
In Northern Europe, cities also shrank, while civic monuments and other public buildings were raided for building
materials. The establishment of new kingdoms often meant some growth for the towns chosen as capitals. Although there
had been Jewish communities in many Roman cities, the Jews suffered periods of persecution after the conversion of
the empire to Christianity. Officially they were tolerated, if subject to conversion efforts, and at times were even
encouraged to settle in new areas. Religious beliefs in the Eastern Empire and Persia were in flux during the late
6th and early 7th centuries. Judaism was an active proselytising faith, and at least one Arab political leader converted
to it.[J] Christianity had active missions competing with the Persians' Zoroastrianism in seeking converts, especially
among residents of the Arabian Peninsula. All these strands came together with the emergence of Islam in Arabia during
the lifetime of Muhammad (d. 632). After his death, Islamic forces conquered much of the Eastern Empire and Persia,
starting with Syria in 634–635 and reaching Egypt in 640–641, Persia between 637 and 642, North Africa in the later
7th century, and the Iberian Peninsula in 711. By 714, Islamic forces controlled much of the peninsula in a region
they called Al-Andalus. The Islamic conquests reached their peak in the mid-8th century. The defeat of Muslim forces
at the Battle of Poitiers in 732 led to the reconquest of southern France by the Franks, but the main reason for
the halt of Islamic growth in Europe was the overthrow of the Umayyad dynasty and its replacement by the Abbasid
dynasty. The Abbasids moved their capital to Baghdad and were more concerned with the Middle East than Europe, losing
control of sections of the Muslim lands. Umayyad descendants took over the Iberian Peninsula, the Aghlabids controlled
North Africa, and the Tulunids became rulers of Egypt. By the middle of the 8th century, new trading patterns were
emerging in the Mediterranean; trade between the Franks and the Arabs replaced the old Roman patterns of trade. Franks
traded timber, furs, swords and slaves in return for silks and other fabrics, spices, and precious metals from the
Arabs. The migrations and invasions of the 4th and 5th centuries disrupted trade networks around the Mediterranean.
African goods stopped being imported into Europe, first disappearing from the interior and by the 7th century being found
only in a few cities such as Rome or Naples. By the end of the 7th century, under the impact of the Muslim conquests,
African products were no longer found in Western Europe. The replacement of long-range trade goods with local
products was a trend throughout the old Roman lands during the Early Middle Ages. This was especially marked
in the lands that did not lie on the Mediterranean, such as northern Gaul or Britain. Non-local goods appearing in
the archaeological record are usually luxury goods. In the northern parts of Europe, not only were the trade networks
local, but the goods carried were simple, with little pottery or other complex products. Around the Mediterranean,
pottery remained prevalent and appears to have been traded over medium-range networks, not just produced locally.
The various Germanic states in the west all had coinages that imitated existing Roman and Byzantine forms. Gold continued
to be minted until the end of the 7th century, when it was replaced by silver coins. The basic Frankish silver coin
was the denarius or denier, while the Anglo-Saxon version was called a penny. From these areas, the denier or penny
spread throughout Europe during the centuries from 700 to 1000. Copper or bronze coins were not struck, nor were
gold except in Southern Europe. No silver coins denominated in multiple units were minted. Christianity was a major
unifying factor between Eastern and Western Europe before the Arab conquests, but the conquest of North Africa sundered
maritime connections between those areas. Increasingly the Byzantine Church differed in language, practices, and
liturgy from the western Church. The eastern church used Greek instead of the western Latin. Theological and political
differences emerged, and by the early and middle 8th century issues such as iconoclasm, clerical marriage, and state
control of the church had widened to the extent that the cultural and religious differences were greater than the
similarities. The formal break came in 1054, when the papacy and the patriarchy of Constantinople clashed over papal
supremacy and excommunicated each other, which led to the division of Christianity into two churches—the western
branch became the Roman Catholic Church and the eastern branch the Orthodox Church. The ecclesiastical structure
of the Roman Empire survived the movements and invasions in the west mostly intact, but the papacy was little regarded,
and few of the western bishops looked to the bishop of Rome for religious or political leadership. Many of the popes
prior to 750 were more concerned with Byzantine affairs and eastern theological controversies. The register, or archived
copies of the letters, of Pope Gregory the Great (pope 590–604) survived, and of those more than 850 letters, the
vast majority were concerned with affairs in Italy or Constantinople. The only part of Western Europe where the papacy
had influence was Britain, where Gregory had sent the Gregorian mission in 597 to convert the Anglo-Saxons to Christianity.
Irish missionaries were most active in Western Europe between the 5th and the 7th centuries, going first to England
and Scotland and then on to the continent. Under such monks as Columba (d. 597) and Columbanus (d. 615), they founded
monasteries, taught in Latin and Greek, and authored secular and religious works. The Early Middle Ages witnessed
the rise of monasticism in the West. The shape of European monasticism was determined by traditions and ideas that
originated with the Desert Fathers of Egypt and Syria. Most European monasteries were of the type that focuses on
community experience of the spiritual life, called cenobitism, which was pioneered by Pachomius (d. 348) in the 4th
century. Monastic ideals spread from Egypt to Western Europe in the 5th and 6th centuries through hagiographical
literature such as the Life of Anthony. Benedict of Nursia (d. 547) wrote the Benedictine Rule for Western monasticism
during the 6th century, detailing the administrative and spiritual responsibilities of a community of monks led by
an abbot. Monks and monasteries had a deep effect on the religious and political life of the Early Middle Ages, in
various cases acting as land trusts for powerful families, centres of propaganda and royal support in newly conquered
regions, and bases for missions and proselytisation. They were the main and sometimes only outposts of education
and literacy in a region. Many of the surviving manuscripts of the Latin classics were copied in monasteries in the
Early Middle Ages. Monks were also the authors of new works, including history, theology, and other subjects, written
by authors such as Bede (d. 735), a native of northern England who wrote in the late 7th and early 8th centuries.
The Frankish kingdom in northern Gaul split into kingdoms called Austrasia, Neustria, and Burgundy during the 6th
and 7th centuries, all of them ruled by the Merovingian dynasty, who were descended from Clovis. The 7th century
was a tumultuous period of wars between Austrasia and Neustria. Such warfare was exploited by Pippin (d. 640), the
Mayor of the Palace for Austrasia who became the power behind the Austrasian throne. Later members of his family
inherited the office, acting as advisers and regents. One of his descendants, Charles Martel (d. 741), won the Battle
of Poitiers in 732, halting the advance of Muslim armies across the Pyrenees.[K] Great Britain was divided into small
states dominated by the kingdoms of Northumbria, Mercia, Wessex, and East Anglia, which were descended from the Anglo-Saxon
invaders. Smaller kingdoms in present-day Wales and Scotland were still under the control of the native Britons and
Picts. Ireland was divided into even smaller political units, usually known as tribal kingdoms, under the control
of kings. There were perhaps as many as 150 local kings in Ireland, of varying importance. The Carolingian dynasty,
as the successors to Charles Martel are known, officially took control of the kingdoms of Austrasia and Neustria
in a coup of 753 led by Pippin III (r. 752–768). A contemporary chronicle claims that Pippin sought, and gained,
authority for this coup from Pope Stephen II (pope 752–757). Pippin's takeover was reinforced with propaganda that
portrayed the Merovingians as inept or cruel rulers, exalted the accomplishments of Charles Martel, and circulated
stories of the family's great piety. At the time of his death in 768, Pippin left his kingdom in the hands of his
two sons, Charles (r. 768–814) and Carloman (r. 768–771). When Carloman died of natural causes, Charles blocked the
succession of Carloman's young son and installed himself as the king of the united Austrasia and Neustria. Charles,
more often known as Charles the Great or Charlemagne, embarked upon a programme of systematic expansion in 774 that
unified a large portion of Europe, eventually controlling modern-day France, northern Italy, and Saxony. In the wars
that lasted beyond 800, he rewarded allies with war booty and command over parcels of land. In 774, Charlemagne conquered
the Lombards, which freed the papacy from the fear of Lombard conquest and marked the beginnings of the Papal States.[L]
The coronation of Charlemagne as emperor on Christmas Day 800 is regarded as a turning point in medieval history,
marking a return of the Western Roman Empire, since the new emperor ruled over much of the area previously controlled
by the western emperors. It also marks a change in Charlemagne's relationship with the Byzantine Empire, as the assumption
of the imperial title by the Carolingians asserted their equivalence to the Byzantine state. There were several differences
between the newly established Carolingian Empire and both the older Western Roman Empire and the concurrent Byzantine
Empire. The Frankish lands were rural in character, with only a few small cities. Most of the people were peasants
settled on small farms. Little trade existed and much of that was with the British Isles and Scandinavia, in contrast
to the older Roman Empire with its trading networks centred on the Mediterranean. The empire was administered by
an itinerant court that travelled with the emperor, as well as approximately 300 imperial officials called counts,
who administered the counties the empire had been divided into. Clergy and local bishops served as officials, as
well as the imperial officials called missi dominici, who served as roving inspectors and troubleshooters. Charlemagne's
court in Aachen was the centre of the cultural revival sometimes referred to as the "Carolingian Renaissance". Literacy
increased, as did development in the arts, architecture and jurisprudence, as well as liturgical and scriptural studies.
The English monk Alcuin (d. 804) was invited to Aachen and brought the education available in the monasteries of
Northumbria. Charlemagne's chancery—or writing office—made use of a new script today known as Carolingian minuscule,[M]
allowing a common writing style that advanced communication across much of Europe. Charlemagne sponsored changes
in church liturgy, imposing the Roman form of church service on his domains, as well as the Gregorian chant in liturgical
music for the churches. An important activity for scholars during this period was the copying, correcting, and dissemination
of basic works on religious and secular topics, with the aim of encouraging learning. New works on religious topics
and schoolbooks were also produced. Grammarians of the period modified the Latin language, changing it from the Classical
Latin of the Roman Empire into a more flexible form to fit the needs of the church and government. By the reign of
Charlemagne, the language had so diverged from the classical that it was later called Medieval Latin. Charlemagne
planned to continue the Frankish tradition of dividing his kingdom between all his heirs, but was unable to do so
as only one son, Louis the Pious (r. 814–840), was still alive by 813. Just before Charlemagne died in 814, he crowned
Louis as his successor. Louis's reign of 26 years was marked by numerous divisions of the empire among his sons and,
after 829, civil wars between various alliances of father and sons over the control of various parts of the empire.
Eventually, Louis recognised his eldest son Lothair I (d. 855) as emperor and gave him Italy. Louis divided the rest
of the empire between Lothair and Charles the Bald (d. 877), his youngest son. Lothair took East Francia, comprising
both banks of the Rhine and eastwards, leaving Charles West Francia with the empire to the west of the Rhineland
and the Alps. Louis the German (d. 876), the middle child, who had been rebellious to the last, was allowed to keep
Bavaria under the suzerainty of his elder brother. The division was disputed. Pepin II of Aquitaine (d. after 864),
the emperor's grandson, rebelled in a contest for Aquitaine, while Louis the German tried to annex all of East Francia.
Louis the Pious died in 840, with the empire still in chaos. A three-year civil war followed his death. By the Treaty
of Verdun (843), a kingdom between the Rhine and Rhone rivers was created for Lothair to go with his lands in Italy,
and his imperial title was recognised. Louis the German was in control of Bavaria and the eastern lands in modern-day
Germany. Charles the Bald received the western Frankish lands, comprising most of modern-day France. Charlemagne's
grandsons and great-grandsons divided their kingdoms between their descendants, eventually causing all internal cohesion
to be lost.[N] In 987 the Carolingian dynasty was replaced in the western lands, with the crowning of Hugh Capet
(r. 987–996) as king.[O][P] In the eastern lands the dynasty had died out earlier, in 911, with the death of Louis
the Child, and the selection of the unrelated Conrad I (r. 911–918) as king. The breakup of the Carolingian Empire
was accompanied by invasions, migrations, and raids by external foes. The Atlantic and northern shores were harassed
by the Vikings, who also raided the British Isles and settled there as well as in Iceland. In 911, the Viking chieftain
Rollo (d. c. 931) received permission from the Frankish King Charles the Simple (r. 898–922) to settle in what became
Normandy.[Q] The eastern parts of the Frankish kingdoms, especially Germany and Italy, were under continual Magyar
assault until the invaders' defeat at the Battle of Lechfeld in 955. The breakup of the Abbasid dynasty meant that
the Islamic world fragmented into smaller political states, some of which began expanding into Italy and Sicily,
as well as over the Pyrenees into the southern parts of the Frankish kingdoms. Efforts by local kings to fight the
invaders led to the formation of new political entities. In Anglo-Saxon England, King Alfred the Great (r. 871–899)
came to an agreement with the Viking invaders in the late 9th century, resulting in Danish settlements in Northumbria,
Mercia, and parts of East Anglia. By the middle of the 10th century, Alfred's successors had conquered Northumbria,
and restored English control over most of the southern part of Great Britain. In northern Britain, Kenneth MacAlpin
(d. c. 860) united the Picts and the Scots into the Kingdom of Alba. In the early 10th century, the Ottonian dynasty
had established itself in Germany, and was engaged in driving back the Magyars. Its efforts culminated in the coronation
in 962 of Otto I (r. 936–973) as Holy Roman Emperor. In 972, he secured recognition of his title by the Byzantine
Empire, which he sealed with the marriage of his son Otto II (r. 967–983) to Theophanu (d. 991), daughter of an earlier
Byzantine Emperor Romanos II (r. 959–963). By the late 10th century Italy had been drawn into the Ottonian sphere
after a period of instability; Otto III (r. 996–1002) spent much of his later reign in the kingdom. The western Frankish
kingdom was more fragmented, and although kings remained nominally in charge, much of the political power devolved
to the local lords. Missionary efforts to Scandinavia during the 9th and 10th centuries helped strengthen the growth
of kingdoms such as Sweden, Denmark, and Norway, which gained power and territory. Some kings converted to Christianity,
although not all by 1000. Scandinavians also expanded and colonised throughout Europe. Besides the settlements in
Ireland, England, and Normandy, further settlement took place in what became Russia and in Iceland. Swedish traders
and raiders ranged down the rivers of the Russian steppe, and even attempted to seize Constantinople in 860 and 907.
Christian Spain, initially driven into a small section of the peninsula in the north, expanded slowly south during
the 9th and 10th centuries, establishing the kingdoms of Asturias and León. In Eastern Europe, Byzantium revived
its fortunes under Emperor Basil I (r. 867–886) and his successors Leo VI (r. 886–912) and Constantine VII (r. 913–959),
members of the Macedonian dynasty. Commerce revived and the emperors oversaw the extension of a uniform administration
to all the provinces. The military was reorganised, which allowed the emperors John I (r. 969–976) and Basil II (r.
976–1025) to expand the frontiers of the empire on all fronts. The imperial court was the centre of a revival of
classical learning, a process known as the Macedonian Renaissance. Writers such as John Geometres (fl. early 10th
century) composed new hymns, poems, and other works. Missionary efforts by both eastern and western clergy resulted
in the conversion of the Moravians, Bulgars, Bohemians, Poles, Magyars, and Slavic inhabitants of the Kievan Rus'.
These conversions contributed to the founding of political states in the lands of those peoples—the states of Moravia,
Bulgaria, Bohemia, Poland, Hungary, and the Kievan Rus'. Bulgaria, which was founded around 680, at its height reached
from Budapest to the Black Sea and from the Dnieper River in modern Ukraine to the Adriatic Sea. By 1018, the last
Bulgarian nobles had surrendered to the Byzantine Empire. Few large stone buildings were constructed between the
Constantinian basilicas of the 4th century and the 8th century, although many smaller ones were built during the
6th and 7th centuries. By the beginning of the 8th century, the Carolingian Empire revived the basilica form of architecture.
One feature of the basilica is the use of a transept, or the "arms" of a cross-shaped building that are perpendicular
to the long nave. Other new features of religious architecture include the crossing tower and a monumental entrance
to the church, usually at the west end of the building. During the later Roman Empire, the principal military developments
were attempts to create an effective cavalry force as well as the continued development of highly specialised types
of troops. The creation of heavily armoured cataphract-type soldiers as cavalry was an important feature of the 5th-century
Roman military. The various invading tribes had differing emphasis on types of soldiers—ranging from the primarily
infantry Anglo-Saxon invaders of Britain to the Vandals and Visigoths, who had a high proportion of cavalry in their
armies. During the early invasion period, the stirrup had not been introduced into warfare, which limited the usefulness
of cavalry as shock troops because it was not possible to put the full force of the horse and rider behind blows
struck by the rider. The greatest change in military affairs during the invasion period was the adoption of the Hunnic
composite bow in place of the earlier, and weaker, Scythian composite bow. Another development was the increasing
use of longswords and the progressive replacement of scale armour by mail armour and lamellar armour. Carolingian
art was produced for a small group of figures around the court, and the monasteries and churches they supported.
It was dominated by efforts to regain the dignity and classicism of imperial Roman and Byzantine art, but was also
influenced by the Insular art of the British Isles. Insular art integrated the energy of Irish Celtic and Anglo-Saxon
Germanic styles of ornament with Mediterranean forms such as the book, and established many characteristics of art
for the rest of the medieval period. Surviving religious works from the Early Middle Ages are mostly illuminated
manuscripts and carved ivories, originally made for metalwork that has since been melted down. Objects in precious
metals were the most prestigious form of art, but almost all are lost except for a few crosses such as the Cross
of Lothair, several reliquaries, and finds such as the Anglo-Saxon burial at Sutton Hoo and the hoards of Gourdon
from Merovingian France, Guarrazar from Visigothic Spain and Nagyszentmiklós near Byzantine territory. There are
survivals from the large brooches in fibula or penannular form that were a key piece of personal adornment for elites,
including the Irish Tara Brooch. Highly decorated books were mostly Gospel Books and these have survived in larger
numbers, including the Insular Book of Kells, the Book of Lindisfarne, and the imperial Codex Aureus of St. Emmeram,
which is one of the few to retain its "treasure binding" of gold encrusted with jewels. Charlemagne's court seems
to have been responsible for the acceptance of figurative monumental sculpture in Christian art, and by the end of
the period near life-sized figures such as the Gero Cross were common in important churches. The importance of infantry
and light cavalry began to decline during the early Carolingian period, with a growing dominance of elite heavy cavalry.
The use of militia-type levies of the free population declined over the Carolingian period. Although much of the
Carolingian armies were mounted, a large proportion during the early period appear to have been mounted infantry,
rather than true cavalry. One exception was Anglo-Saxon England, where the armies were still composed of regional
levies, known as the fyrd, which were led by the local elites. In military technology, one of the main changes was
the return of the crossbow, which had been known in Roman times and reappeared as a military weapon during the last
part of the Early Middle Ages. Another change was the introduction of the stirrup, which increased the effectiveness
of cavalry as shock troops. A technological advance that had implications beyond the military was the horseshoe,
which allowed horses to be used in rocky terrain. The High Middle Ages was a period of tremendous expansion of population.
The estimated population of Europe grew from 35 to 80 million between 1000 and 1347, although the exact causes remain
unclear: improved agricultural techniques, the decline of slaveholding, a more clement climate and the lack of invasion
have all been suggested. As much as 90 per cent of the European population remained rural peasants. Many were no
longer settled in isolated farms but had gathered into small communities, usually known as manors or villages. These
peasants were often subject to noble overlords and owed them rents and other services, in a system known as manorialism.
There remained a few free peasants throughout this period and beyond, with more of them in the regions of Southern
Europe than in the north. The practice of assarting, or bringing new lands into production by offering incentives
to the peasants who settled them, also contributed to the expansion of population. Other sections of society included
the nobility, clergy, and townsmen. Nobles, both the titled nobility and simple knights, exploited the manors and
the peasants, although they did not own lands outright but were granted rights to the income from a manor or other
lands by an overlord through the system of feudalism. During the 11th and 12th centuries, these lands, or fiefs,
came to be considered hereditary, and in most areas they were no longer divisible between all the heirs as had been
the case in the early medieval period. Instead, most fiefs and lands went to the eldest son.[R] The dominance of
the nobility was built upon its control of the land, its military service as heavy cavalry, control of castles, and
various immunities from taxes or other impositions.[S] Castles, initially in wood but later in stone, began to be
constructed in the 9th and 10th centuries in response to the disorder of the time, and provided protection from invaders
as well as allowing lords defence from rivals. Control of castles allowed the nobles to defy kings or other overlords.
Nobles were stratified; kings and the highest-ranking nobility controlled large numbers of commoners and large tracts
of land, as well as other nobles. Beneath them, lesser nobles had authority over smaller areas of land and fewer
people. Knights were the lowest level of nobility; they controlled but did not own land, and had to serve other nobles.[T]
The clergy was divided into two types: the secular clergy, who lived out in the world, and the regular clergy, who
lived under a religious rule and were usually monks. Throughout the period monks remained a very small proportion
of the population, usually less than one per cent. Most of the regular clergy were drawn from the nobility, the same
social class that served as the recruiting ground for the upper levels of the secular clergy. The local parish priests
were often drawn from the peasant class. Townsmen were in a somewhat unusual position, as they did not fit into the
traditional three-fold division of society into nobles, clergy, and peasants. During the 12th and 13th centuries,
the ranks of the townsmen expanded greatly as existing towns grew and new population centres were founded. But throughout
the Middle Ages the population of the towns probably never exceeded 10 per cent of the total population. Jews also
spread across Europe during the period. Communities were established in Germany and England in the 11th and 12th
centuries, but Spanish Jews, long settled in Spain under the Muslims, came under Christian rule and increasing pressure
to convert to Christianity. Most Jews were confined to the cities, as they were not allowed to own land or be peasants.[U]
Besides the Jews, there were other non-Christians on the edges of Europe—pagan Slavs in Eastern Europe and Muslims
in Southern Europe. Women in the Middle Ages were officially required to be subordinate to some male, whether their
father, husband, or other kinsman. Widows, who were often allowed much control over their own lives, were still restricted
legally. Women's work generally consisted of household or other domestic tasks. Peasant women were usually
responsible for taking care of the household, child-care, as well as gardening and animal husbandry near the house.
They could supplement the household income by spinning or brewing at home. At harvest-time, they were also expected
to help with field-work. Townswomen, like peasant women, were responsible for the household, and could also engage
in trade. What trades were open to women varied by country and period. Noblewomen were responsible for running a
household, and could occasionally be expected to handle estates in the absence of male relatives, but they were usually
restricted from participation in military or government affairs. The only role open to women in the Church was that
of nuns, as they were unable to become priests. In central and northern Italy and in Flanders, the rise of towns
that were to a degree self-governing stimulated economic growth and created an environment for new types of trade
associations. Commercial cities on the shores of the Baltic entered into agreements known as the Hanseatic League,
and the Italian Maritime republics such as Venice, Genoa, and Pisa expanded their trade throughout the Mediterranean.[V]
Great trading fairs were established and flourished in northern France during the period, allowing Italian and German
merchants to trade with each other as well as local merchants. In the late 13th century new land and sea routes to
the Far East were pioneered, famously described in The Travels of Marco Polo written by one of the traders, Marco
Polo (d. 1324). Besides new trading opportunities, agricultural and technological improvements enabled an increase
in crop yields, which in turn allowed the trade networks to expand. Rising trade brought new methods of dealing with
money, and gold coinage was again minted in Europe, first in Italy and later in France and other countries. New forms
of commercial contracts emerged, allowing risk to be shared among merchants. Accounting methods improved, partly
through the use of double-entry bookkeeping; letters of credit also appeared, allowing easy transmission of money.
The High Middle Ages was the formative period in the history of the modern Western state. Kings in France, England,
and Spain consolidated their power, and set up lasting governing institutions. New kingdoms such as Hungary and Poland,
after their conversion to Christianity, became Central European powers. The Magyars settled Hungary around 900 under
King Árpád (d. c. 907) after a series of invasions in the 9th century. The papacy, long attached to an ideology of
independence from secular kings, first asserted its claim to temporal authority over the entire Christian world;
the Papal Monarchy reached its apogee in the early 13th century under the pontificate of Innocent III (pope 1198–1216).
Northern Crusades and the advance of Christian kingdoms and military orders into previously pagan regions in the
Baltic and Finnic north-east brought the forced assimilation of numerous native peoples into European culture. During
the early High Middle Ages, Germany was ruled by the Ottonian dynasty, which struggled to control the powerful dukes
ruling over territorial duchies tracing back to the Migration period. In 1024, they were replaced by the Salian dynasty,
who famously clashed with the papacy under Emperor Henry IV (r. 1084–1105) over church appointments as part of the
Investiture Controversy. His successors continued to struggle against the papacy as well as the German nobility.
A period of instability followed the death of Emperor Henry V (r. 1111–25), who died without heirs, until Frederick
I Barbarossa (r. 1155–90) took the imperial throne. Although he ruled effectively, the basic problems remained, and
his successors continued to struggle into the 13th century. Barbarossa's grandson Frederick II (r. 1220–1250), who
was also heir to the throne of Sicily through his mother, clashed repeatedly with the papacy. His court was famous
for its scholars and he was often accused of heresy. He and his successors faced many difficulties, including the
invasion of the Mongols into Europe in the mid-13th century. Mongols first shattered the Kievan Rus' principalities
and then invaded Eastern Europe in 1241, 1259, and 1287. Under the Capetian dynasty France slowly began to expand
its authority over the nobility, growing out of the Île-de-France to exert control over more of the country in the
11th and 12th centuries. They faced a powerful rival in the Dukes of Normandy, who in 1066 under William the Conqueror
(duke 1035–1087), conquered England (r. 1066–87) and created a cross-channel empire that lasted, in various forms,
throughout the rest of the Middle Ages. Normans also settled in Sicily and southern Italy, when Robert Guiscard (d.
1085) landed there in 1059 and established a duchy that later became the Kingdom of Sicily. Under the Angevin dynasty
of Henry II (r. 1154–89) and his son Richard I (r. 1189–99), the kings of England ruled over England and large areas
of France,[W] brought to the family by Henry II's marriage to Eleanor of Aquitaine (d. 1204), heiress to much of
southern France.[X] Richard's younger brother John (r. 1199–1216) lost Normandy and the rest of the northern French
possessions in 1204 to the French King Philip II Augustus (r. 1180–1223). This led to dissension among the English
nobility, while John's financial exactions to pay for his unsuccessful attempts to regain Normandy led in 1215 to
Magna Carta, a charter that confirmed the rights and privileges of free men in England. Under Henry III (r. 1216–72),
John's son, further concessions were made to the nobility, and royal power was diminished. The French monarchy continued
to make gains against the nobility during the late 12th and 13th centuries, bringing more territories within the
kingdom under their personal rule and centralising the royal administration. Under Louis IX (r. 1226–70), royal prestige
rose to new heights as Louis served as a mediator for most of Europe.[Y] In Iberia, the Christian states, which had
been confined to the north-western part of the peninsula, began to push back against the Islamic states in the south,
a period known as the Reconquista. By about 1150, the Christian north had coalesced into the five major kingdoms
of León, Castile, Aragon, Navarre, and Portugal. Southern Iberia remained under control of Islamic states, initially
under the Caliphate of Córdoba, which broke up in 1031 into a shifting number of petty states known as taifas, which
fought with the Christians until the Almohad Caliphate re-established centralised rule over Southern Iberia in the
1170s. Christian forces advanced again in the early 13th century, culminating in the capture of Seville in 1248.
In the 11th century, the Seljuk Turks took over much of the Middle East, occupying Persia during the 1040s, Armenia
in the 1060s, and Jerusalem in 1070. In 1071, the Turkish army defeated the Byzantine army at the Battle of Manzikert
and captured the Byzantine Emperor Romanus IV (r. 1068–71). The Turks were then free to invade Asia Minor, which
dealt a dangerous blow to the Byzantine Empire by seizing a large part of its population and its economic heartland.
Although the Byzantines regrouped and recovered somewhat, they never fully regained Asia Minor and were often on
the defensive. The Turks also had difficulties, losing control of Jerusalem to the Fatimids of Egypt and suffering
from a series of internal civil wars. The Byzantines also faced a revived Bulgaria, which in the late 12th and 13th
centuries spread throughout the Balkans. The crusades were intended to seize Jerusalem from Muslim control. The First
Crusade was proclaimed by Pope Urban II (pope 1088–99) at the Council of Clermont in 1095 in response to a request
from the Byzantine Emperor Alexios I Komnenos (r. 1081–1118) for aid against further Muslim advances. Urban promised
an indulgence to anyone who took part. Tens of thousands of people from all levels of society mobilised across Europe
and captured Jerusalem in 1099. One feature of the crusades was the pogroms against local Jews that often took place
as the crusaders left their countries for the East. These were especially brutal during the First Crusade, when the
Jewish communities in Cologne, Mainz, and Worms were destroyed, and other communities in cities between the rivers
Seine and Rhine suffered destruction. Another outgrowth of the crusades was the foundation of a new type of monastic
order, the military orders of the Templars and Hospitallers, which fused monastic life with military service. The
crusaders consolidated their conquests into crusader states. During the 12th and 13th centuries, there were a series
of conflicts between those states and the surrounding Islamic states. Appeals from those states to the papacy led
to further crusades, such as the Third Crusade, called to try to regain Jerusalem, which had been captured by Saladin
(d. 1193) in 1187.[Z] In 1203, the Fourth Crusade was diverted from the Holy Land to Constantinople, and captured
the city in 1204, setting up a Latin Empire of Constantinople and greatly weakening the Byzantine Empire. The Byzantines
recaptured the city in 1261, but never regained their former strength. By 1291 all the crusader states had been captured
or forced from the mainland, although a titular Kingdom of Jerusalem survived on the island of Cyprus for several
years afterwards. Popes called for crusades to take place elsewhere besides the Holy Land: in Spain, southern France,
and along the Baltic. The Spanish crusades became fused with the Reconquista of Spain from the Muslims. Although
the Templars and Hospitallers took part in the Spanish crusades, similar Spanish military religious orders were founded,
most of which had become part of the two main orders of Calatrava and Santiago by the beginning of the 13th century.
Northern Europe also remained outside Christian influence until the 11th century or later, and became a crusading
venue as part of the Northern Crusades of the 12th to 14th centuries. These crusades also spawned a military order,
the Order of the Sword Brothers. Another order, the Teutonic Knights, although originally founded in the crusader
states, focused much of its activity in the Baltic after 1225, and in 1309 moved its headquarters to Marienburg in
Prussia. During the 11th century, developments in philosophy and theology led to increased intellectual activity.
There was debate between the realists and the nominalists over the concept of "universals". Philosophical discourse
was stimulated by the rediscovery of Aristotle and his emphasis on empiricism and rationalism. Scholars such as Peter
Abelard (d. 1142) and Peter Lombard (d. 1164) introduced Aristotelian logic into theology. In the late 11th and early
12th centuries cathedral schools spread throughout Western Europe, signalling the shift of learning from monasteries
to cathedrals and towns. Cathedral schools were in turn replaced by the universities established in major European
cities. Philosophy and theology fused in scholasticism, an attempt by 12th- and 13th-century scholars to reconcile
authoritative texts, most notably Aristotle and the Bible. This movement tried to employ a systematic approach to truth
and reason and culminated in the thought of Thomas Aquinas (d. 1274), who wrote the Summa Theologica, or Summary
of Theology. Chivalry and the ethos of courtly love developed in royal and noble courts. This culture was expressed
in the vernacular languages rather than Latin, and comprised poems, stories, legends, and popular songs spread by
troubadours, or wandering minstrels. Often the stories were written down in the chansons de geste, or "songs of great
deeds", such as The Song of Roland or The Song of Hildebrand. Secular and religious histories were also produced.
Geoffrey of Monmouth (d. c. 1155) composed his Historia Regum Britanniae, a collection of stories and legends about
Arthur. Other works were more clearly history, such as Otto von Freising's (d. 1158) Gesta Friderici Imperatoris
detailing the deeds of Emperor Frederick Barbarossa, or William of Malmesbury's (d. c. 1143) Gesta Regum on the kings
of England. Legal studies advanced during the 12th century. Both secular law and canon law, or ecclesiastical law,
were studied in the High Middle Ages. Secular law, or Roman law, was advanced greatly by the discovery of the Corpus
Juris Civilis in the 11th century, and by 1100 Roman law was being taught at Bologna. This led to the recording and
standardisation of legal codes throughout Western Europe. Canon law was also studied, and around 1140 a monk named
Gratian (fl. 12th century), a teacher at Bologna, wrote what became the standard text of canon law—the Decretum.
Among the results of the Greek and Islamic influence on this period in European history was the replacement of Roman
numerals with the decimal positional number system and the invention of algebra, which allowed more advanced mathematics.
Astronomy advanced following the translation of Ptolemy's Almagest from Greek into Latin in the late 12th century.
Medicine was also studied, especially in southern Italy, where Islamic medicine influenced the school at Salerno.
In the 12th and 13th centuries, Europe experienced economic growth and innovation in methods of production. Major technological
advances included the invention of the windmill, the first mechanical clocks, the manufacture of distilled spirits,
and the use of the astrolabe. Convex spectacles were invented around 1286 by an unknown Italian artisan, probably
working in or near Pisa. The development of a three-field rotation system for planting crops[AA] increased the usage
of land from one half in use each year under the old two-field system to two-thirds under the new system, with a
consequent increase in production. The development of the heavy plough allowed heavier soils to be farmed more efficiently,
aided by the spread of the horse collar, which led to the use of draught horses in place of oxen. Horses are faster
than oxen and require less pasture, factors that aided the implementation of the three-field system. The construction
of cathedrals and castles advanced building technology, leading to the development of large stone buildings. Ancillary
structures included new town halls, houses, bridges, and tithe barns. Shipbuilding improved with the use of the rib
and plank method rather than the old Roman system of mortise and tenon. Other improvements to ships included the
use of lateen sails and the stern-post rudder, both of which increased the speed at which ships could be sailed.
In military affairs, the use of infantry with specialised roles increased. Along with the still-dominant heavy cavalry,
armies often included mounted and infantry crossbowmen, as well as sappers and engineers. Crossbows, which had been
known in Late Antiquity, increased in use partly because of the increase in siege warfare in the 10th and 11th centuries.[AB]
The increasing use of crossbows during the 12th and 13th centuries led to the use of closed-face helmets, heavy body
armour, as well as horse armour. Gunpowder was known in Europe by the mid-13th century with a recorded use in European
warfare by the English against the Scots in 1304, although it was then used only as an explosive and not as a propellant for projectiles.
Cannon were being used for sieges in the 1320s, and hand-held guns were in use by the 1360s. In the 10th century
the establishment of churches and monasteries led to the development of stone architecture that elaborated vernacular
Roman forms, from which the term "Romanesque" is derived. Where available, Roman brick and stone buildings were recycled
for their materials. From the tentative beginnings known as the First Romanesque, the style flourished and spread
across Europe in a remarkably homogeneous form. Just before 1000 there was a great wave of building stone churches
all over Europe. Romanesque buildings have massive stone walls, openings topped by semi-circular arches, small windows,
and, particularly in France, arched stone vaults. The large portal with coloured sculpture in high relief became
a central feature of façades, especially in France, and the capitals of columns were often carved with narrative
scenes of imaginative monsters and animals. According to art historian C. R. Dodwell, "virtually all the churches
in the West were decorated with wall-paintings", of which few survive. Simultaneous with the development in church
architecture, the distinctive European form of the castle was developed, and became crucial to politics and warfare.
Romanesque art, especially metalwork, was at its most sophisticated in Mosan art, in which distinct artistic personalities
including Nicholas of Verdun (d. 1205) become apparent, and an almost classical style is seen in works such as a
font at Liège, contrasting with the writhing animals of the exactly contemporary Gloucester Candlestick. Large illuminated
bibles and psalters were the typical forms of luxury manuscripts, and wall-painting flourished in churches, often
following a scheme with a Last Judgement on the west wall, a Christ in Majesty at the east end, and narrative biblical
scenes down the nave, or in the best surviving example, at Saint-Savin-sur-Gartempe, on the barrel-vaulted roof.
From the early 12th century, French builders developed the Gothic style, marked by the use of rib vaults, pointed
arches, flying buttresses, and large stained glass windows. It was used mainly in churches and cathedrals, and continued
in use until the 16th century in much of Europe. Classic examples of Gothic architecture include Chartres Cathedral
and Reims Cathedral in France as well as Salisbury Cathedral in England. Stained glass became a crucial element in
the design of churches, which continued to use extensive wall-paintings, now almost all lost. During this period
the practice of manuscript illumination gradually passed from monasteries to lay workshops, so that according to
Janetta Benton "by 1300 most monks bought their books in shops", and the book of hours developed as a form of devotional
book for lay-people. Metalwork continued to be the most prestigious form of art, with Limoges enamel a popular and
relatively affordable option for objects such as reliquaries and crosses. In Italy the innovations of Cimabue and
Duccio, followed by the Trecento master Giotto (d. 1337), greatly increased the sophistication and status of panel
painting and fresco. Increasing prosperity during the 12th century resulted in greater production of secular art;
many carved ivory objects such as gaming-pieces, combs, and small religious figures have survived. Monastic reform
became an important issue during the 11th century, as elites began to worry that monks were not adhering to the rules
binding them to a strictly religious life. Cluny Abbey, founded in the Mâcon region of France in 909, was established
as part of the Cluniac Reforms, a larger movement of monastic reform in response to this fear. Cluny quickly established
a reputation for austerity and rigour. It sought to maintain a high quality of spiritual life by placing itself under
the protection of the papacy and by electing its own abbot without interference from laymen, thus maintaining economic
and political independence from local lords. Monastic reform inspired change in the secular church. The ideals that
it was based upon were brought to the papacy by Pope Leo IX (pope 1049–1054), and provided the ideology of the clerical
independence that led to the Investiture Controversy in the late 11th century. This involved Pope Gregory VII (pope
1073–85) and Emperor Henry IV, who initially clashed over episcopal appointments, a dispute that turned into a battle
over the ideas of investiture, clerical marriage, and simony. The emperor saw the protection of the Church as one
of his responsibilities as well as wanting to preserve the right to appoint his own choices as bishops within his
lands, but the papacy insisted on the Church's independence from secular lords. These issues remained unresolved
after the compromise of 1122 known as the Concordat of Worms. The dispute represents a significant stage in the creation
of a papal monarchy separate from and equal to lay authorities. It also had the permanent consequence of empowering
German princes at the expense of the German emperors. The High Middle Ages was a period of great religious movements.
Besides the Crusades and monastic reforms, people sought to participate in new forms of religious life. New monastic
orders were founded, including the Carthusians and the Cistercians. The latter especially expanded rapidly in their
early years under the guidance of Bernard of Clairvaux (d. 1153). These new orders were formed in response to the
feeling of the laity that Benedictine monasticism no longer met the needs of the laymen, who along with those wishing
to enter the religious life wanted a return to the simpler eremitical monasticism of early Christianity, or to live
an Apostolic life. Religious pilgrimages were also encouraged. Old pilgrimage sites such as Rome, Jerusalem, and
Compostela received increasing numbers of visitors, and new sites such as Monte Gargano and Bari rose to prominence.
In the 13th century mendicant orders—the Franciscans and the Dominicans—who swore vows of poverty and earned their
living by begging, were approved by the papacy. Religious groups such as the Waldensians and the Humiliati also attempted
to return to the life of early Christianity in the middle 12th and early 13th centuries, but they were condemned
as heretical by the papacy. Others joined the Cathars, another heretical movement condemned by the papacy. In 1209,
a crusade was preached against the Cathars, the Albigensian Crusade, which in combination with the medieval Inquisition,
eliminated them. The first years of the 14th century were marked by famines, culminating in the Great Famine of 1315–17.
The causes of the Great Famine included the slow transition from the Medieval Warm Period to the Little Ice Age,
which left the population vulnerable when bad weather caused crop failures. The years 1313–14 and 1317–21 were excessively
rainy throughout Europe, resulting in widespread crop failures. The climate change—which resulted in a declining
average annual temperature for Europe during the 14th century—was accompanied by an economic downturn. These troubles
were followed in 1347 by the Black Death, a pandemic that spread throughout Europe during the following three years.[AC]
The death toll was probably about 35 million people in Europe, about one-third of the population. Towns were especially
hard-hit because of their crowded conditions.[AD] Large areas of land were left sparsely inhabited, and in some places
fields were left unworked. Wages rose as landlords sought to entice the reduced number of available workers to their
fields. Further problems were lower rents and lower demand for food, both of which cut into agricultural income.
Urban workers also felt that they had a right to greater earnings, and popular uprisings broke out across Europe.
Among the uprisings were the jacquerie in France, the Peasants' Revolt in England, and revolts in the cities of Florence
in Italy and Ghent and Bruges in Flanders. The trauma of the plague led to an increased piety throughout Europe,
manifested by the foundation of new charities, the self-mortification of the flagellants, and the scapegoating of
Jews. Conditions were further unsettled by the return of the plague throughout the rest of the 14th century; it continued
to strike Europe periodically during the rest of the Middle Ages. Society throughout Europe was disturbed by the
dislocations caused by the Black Death. Lands that had been marginally productive were abandoned, as the survivors
were able to acquire more fertile areas. Although serfdom declined in Western Europe it became more common in Eastern
Europe, as landlords imposed it on those of their tenants who had previously been free. Most peasants in Western
Europe managed to change the work they had previously owed to their landlords into cash rents. The percentage of
serfs amongst the peasantry declined from a high of 90 to closer to 50 per cent by the end of the period. Landlords
also became more conscious of common interests with other landholders, and they joined together to extort privileges
from their governments. Partly at the urging of landlords, governments attempted to legislate a return to the economic
conditions that existed before the Black Death. Non-clergy became increasingly literate, and urban populations began
to imitate the nobility's interest in chivalry. Jewish communities were expelled from England in 1290 and from France
in 1306. Although some were allowed back into France, most were not, and many Jews emigrated eastwards, settling
in Poland and Hungary. The Jews were expelled from Spain in 1492, and dispersed to Turkey, France, Italy, and Holland.
The rise of banking in Italy during the 13th century continued throughout the 14th century, fuelled partly by the
increasing warfare of the period and the needs of the papacy to move money between kingdoms. Many banking firms loaned
money to royalty, at great risk, as some were bankrupted when kings defaulted on their loans.[AE] Strong, royalty-based
nation states rose throughout Europe in the Late Middle Ages, particularly in England, France, and the Christian
kingdoms of the Iberian Peninsula: Aragon, Castile, and Portugal. The long conflicts of the period strengthened royal
control over their kingdoms and were extremely hard on the peasantry. Kings profited from warfare that extended royal
legislation and increased the lands they directly controlled. Paying for the wars required that methods of taxation
become more effective and efficient, and the rate of taxation often increased. The requirement to obtain the consent
of taxpayers allowed representative bodies such as the English Parliament and the French Estates General to gain
power and authority. Throughout the 14th century, French kings sought to expand their influence at the expense of
the territorial holdings of the nobility. They ran into difficulties when attempting to confiscate the holdings of
the English kings in southern France, leading to the Hundred Years' War, waged from 1337 to 1453. Early in the war
the English under Edward III (r. 1327–77) and his son Edward, the Black Prince (d. 1376),[AF] won the battles of
Crécy and Poitiers, captured the city of Calais, and won control of much of France.[AG] The resulting stresses almost
caused the disintegration of the French kingdom during the early years of the war. In the early 15th century, France
again came close to dissolving, but in the late 1420s the military successes of Joan of Arc (d. 1431) led to the
victory of the French and the capture of the last English possessions in southern France in 1453. The price was high,
as the population of France at the end of the Wars was likely half what it had been at the start of the conflict.
Conversely, the Wars had a positive effect on English national identity, doing much to fuse the various local identities
into a national English ideal. The conflict with France also helped create a national culture in England separate
from French culture, which had previously been the dominant influence. The dominance of the English longbow began
during early stages of the Hundred Years' War, and cannon appeared on the battlefield at Crécy in 1346. In modern-day
Germany, the Holy Roman Empire continued to rule, but the elective nature of the imperial crown meant there was no
enduring dynasty around which a strong state could form. Further east, the kingdoms of Poland, Hungary, and Bohemia
grew powerful. In Iberia, the Christian kingdoms continued to gain land from the Muslim kingdoms of the peninsula;
Portugal concentrated on expanding overseas during the 15th century, while the other kingdoms were riven by difficulties
over royal succession and other concerns. After losing the Hundred Years' War, England went on to suffer a long civil
war known as the Wars of the Roses, which lasted into the 1490s and only ended when Henry Tudor (r. 1485–1509 as
Henry VII) became king and consolidated power with his victory over Richard III (r. 1483–85) at Bosworth in 1485.
In Scandinavia, Margaret I of Denmark (r. in Denmark 1387–1412) consolidated Norway, Denmark, and Sweden in the Union
of Kalmar, which continued until 1523. The major power around the Baltic Sea was the Hanseatic League, a commercial
confederation of city states that traded from Western Europe to Russia. Scotland emerged from English domination
under Robert the Bruce (r. 1306–29), who secured papal recognition of his kingship in 1328. Although the Palaeologi
emperors recaptured Constantinople from the Western Europeans in 1261, they were never able to regain control of
much of the former imperial lands. They usually controlled only a small section of the Balkan Peninsula near Constantinople,
the city itself, and some coastal lands on the Black Sea and around the Aegean Sea. The former Byzantine lands in
the Balkans were divided between the new Kingdom of Serbia, the Second Bulgarian Empire and the city-state of Venice.
The power of the Byzantine emperors was threatened by a new Turkish tribe, the Ottomans, who established themselves
in Anatolia in the 13th century and steadily expanded throughout the 14th century. The Ottomans expanded into Europe,
reducing Bulgaria to a vassal state by 1366 and taking over Serbia after its defeat at the Battle of Kosovo in 1389.
Western Europeans rallied to the plight of the Christians in the Balkans and declared a new crusade in 1396; a great
army was sent to the Balkans, where it was defeated at the Battle of Nicopolis. Constantinople was finally captured
by the Ottomans in 1453. During the tumultuous 14th century, disputes within the leadership of the Church led to
the Avignon Papacy of 1305–78, also called the "Babylonian Captivity of the Papacy" (a reference to the Babylonian
captivity of the Jews), and then to the Great Schism, lasting from 1378 to 1418, when there were two and later three
rival popes, each supported by several states. Ecclesiastical officials convened at the Council of Constance in 1414,
and in the following year the council deposed one of the rival popes, leaving only two claimants. Further depositions
followed, and in November 1417 the council elected Martin V (pope 1417–31) as pope. Besides the schism, the western
church was riven by theological controversies, some of which turned into heresies. John Wycliffe (d. 1384), an English
theologian, was condemned as a heretic in 1415 for teaching that the laity should have access to the text of the
Bible as well as for holding views on the Eucharist that were contrary to church doctrine. Wycliffe's teachings influenced
two of the major heretical movements of the later Middle Ages: Lollardy in England and Hussitism in Bohemia. The
Bohemian movement originated with the teaching of Jan Hus, who was burned at the stake in 1415 after being condemned
as a heretic by the Council of Constance. The Hussite church, although the target of a crusade, survived beyond the
Middle Ages. Other heresies were manufactured, such as the accusations against the Knights Templar that resulted
in their suppression in 1312 and the division of their great wealth between the French King Philip IV (r. 1285–1314)
and the Hospitallers. The papacy further refined the practice in the Mass in the Late Middle Ages, holding that the
clergy alone was allowed to partake of the wine in the Eucharist. This further distanced the secular laity from the
clergy. The laity continued the practices of pilgrimages, veneration of relics, and belief in the power of the Devil.
Mystics such as Meister Eckhart (d. 1327) and Thomas à Kempis (d. 1471) wrote works that taught the laity to focus
on their inner spiritual life, which laid the groundwork for the Protestant Reformation. Besides mysticism, belief
in witches and witchcraft became widespread, and by the late 15th century the Church had begun to lend credence to
populist fears of witchcraft with its condemnation of witches in 1484 and the publication in 1486 of the Malleus
Maleficarum, the most popular handbook for witch-hunters. During the Later Middle Ages, theologians such as John
Duns Scotus (d. 1308)[AH] and William of Ockham (d. c. 1348) led a reaction against scholasticism, objecting to
the application of reason to faith. Their efforts undermined the prevailing Platonic idea of "universals". Ockham's
insistence that reason operates independently of faith allowed science to be separated from theology and philosophy.
Legal studies were marked by the steady advance of Roman law into areas of jurisprudence previously governed by customary
law. The lone exception to this trend was in England, where the common law remained pre-eminent. Other countries
codified their laws; legal codes were promulgated in Castile, Poland, and Lithuania. Education remained mostly focused
on the training of future clergy. The basic learning of the letters and numbers remained the province of the family
or a village priest, but the secondary subjects of the trivium—grammar, rhetoric, logic—were studied in cathedral
schools or in schools provided by cities. Commercial secondary schools spread, and some Italian towns had more than
one such enterprise. Universities also spread throughout Europe in the 14th and 15th centuries. Lay literacy rates
rose, but were still low; one estimate gave a literacy rate of ten per cent of males and one per cent of females
in 1500. The publication of vernacular literature increased, with Dante (d. 1321), Petrarch (d. 1374) and Giovanni
Boccaccio (d. 1375) in 14th-century Italy, Geoffrey Chaucer (d. 1400) and William Langland (d. c. 1386) in England,
and François Villon (d. 1464) and Christine de Pizan (d. c. 1430) in France. Much literature remained religious in
character, and although a great deal of it continued to be written in Latin, a new demand developed for saints' lives
and other devotional tracts in the vernacular languages. This was fed by the growth of the Devotio Moderna movement,
most prominently in the formation of the Brethren of the Common Life, but also in the works of German mystics such
as Meister Eckhart and Johannes Tauler (d. 1361). Theatre also developed in the guise of miracle plays put on by
the Church. At the end of the period, the development of the printing press in about 1450 led to the establishment
of publishing houses throughout Europe by 1500. In the early 15th century, the countries of the Iberian peninsula
began to sponsor exploration beyond the boundaries of Europe. Prince Henry the Navigator of Portugal (d. 1460) sent
expeditions that discovered the Canary Islands, the Azores, and Cape Verde during his lifetime. After his death,
exploration continued; Bartolomeu Dias (d. 1500) went around the Cape of Good Hope in 1486 and Vasco da Gama (d.
1524) sailed around Africa to India in 1498. The combined Spanish monarchies of Castile and Aragon sponsored the
voyage of exploration by Christopher Columbus (d. 1506) in 1492 that discovered the Americas. The English crown under
Henry VII sponsored the voyage of John Cabot (d. 1498) in 1497, which landed on Cape Breton Island. One of the major
developments in the military sphere during the Late Middle Ages was the increased use of infantry and light cavalry.
The English also employed longbowmen, but other countries were unable to create similar forces with the same success.
Armour continued to advance, spurred by the increasing power of crossbows, and plate armour was developed to protect
soldiers from crossbows as well as the hand-held guns that were developed. Pole arms reached new prominence with
the development of the Flemish and Swiss infantry armed with pikes and other long spears. In agriculture, the increased
usage of sheep with long-fibred wool allowed a stronger thread to be spun. In addition, the spinning wheel replaced
the traditional distaff for spinning wool, tripling production.[AI] A less technological refinement that still greatly
affected daily life was the use of buttons as closures for garments, which allowed for better fitting without having
to lace clothing on the wearer. Windmills were refined with the creation of the tower mill, allowing the upper part
of the windmill to be spun around to face the direction from which the wind was blowing. The blast furnace appeared
around 1350 in Sweden, increasing the quantity of iron produced and improving its quality. The first patent law in
1447 in Venice protected the rights of inventors to their inventions. The Late Middle Ages in Europe as a whole correspond
to the Trecento and Early Renaissance cultural periods in Italy. Northern Europe and Spain continued to use Gothic
styles, which became increasingly elaborate in the 15th century, until almost the end of the period. International
Gothic was a courtly style that reached much of Europe in the decades around 1400, producing masterpieces such as
the Très Riches Heures du Duc de Berry. All over Europe secular art continued to increase in quantity and quality,
and in the 15th century the mercantile classes of Italy and Flanders became important patrons, commissioning small
portraits of themselves in oils as well as a growing range of luxury items such as jewellery, ivory caskets, cassone
chests, and maiolica pottery. These objects also included the Hispano-Moresque ware produced by mostly Mudéjar potters
in Spain. Although royalty owned huge collections of plate, little survives except for the Royal Gold Cup. Italian
silk manufacture developed, so that western churches and elites no longer needed to rely on imports from Byzantium
or the Islamic world. In France and Flanders tapestry weaving of sets like The Lady and the Unicorn became a major
luxury industry. The large external sculptural schemes of Early Gothic churches gave way to more sculpture inside
the building, as tombs became more elaborate and other features such as pulpits were sometimes lavishly carved, as
in the Pulpit by Giovanni Pisano in Sant'Andrea. Painted or carved wooden relief altarpieces became common, especially
as churches created many side-chapels. Early Netherlandish painting by artists such as Jan van Eyck (d. 1441) and
Rogier van der Weyden (d. 1464) rivalled that of Italy, as did northern illuminated manuscripts, which in the 15th
century began to be collected on a large scale by secular elites, who also commissioned secular books, especially
histories. From about 1450 printed books rapidly became popular, though still expensive. There were around 30,000
different editions of incunabula, or works printed before 1500, by which time illuminated manuscripts were commissioned
only by royalty and a few others. Very small woodcuts, nearly all religious, were affordable even by peasants in
parts of Northern Europe from the middle of the 15th century. More expensive engravings supplied a wealthier market
with a variety of images. The medieval period is frequently caricatured as a "time of ignorance and superstition"
that placed "the word of religious authorities over personal experience and rational activity." This is a legacy
from both the Renaissance and Enlightenment, when scholars contrasted their intellectual cultures with those of the
medieval period, to the detriment of the Middle Ages. Renaissance scholars saw the Middle Ages as a period of decline
from the high culture and civilisation of the Classical world; Enlightenment scholars saw reason as superior to faith,
and thus viewed the Middle Ages as a time of ignorance and superstition. Others argue that reason was generally held
in high regard during the Middle Ages. Science historian Edward Grant writes, "If revolutionary rational thoughts
were expressed [in the 18th century], they were only made possible because of the long medieval tradition that established
the use of reason as one of the most important of human activities". Also, contrary to common belief, David Lindberg
writes, "the late medieval scholar rarely experienced the coercive power of the church and would have regarded himself
as free (particularly in the natural sciences) to follow reason and observation wherever they led". The caricature
of the period is also reflected in some more specific notions. One misconception, first propagated in the 19th century
and still very common, is that all people in the Middle Ages believed that the Earth was flat. This is untrue, as
lecturers in the medieval universities commonly argued that evidence showed the Earth was a sphere. Lindberg and
Ronald Numbers, another scholar of the period, state that there "was scarcely a Christian scholar of the Middle Ages
who did not acknowledge [Earth's] sphericity and even know its approximate circumference". Other misconceptions such
as "the Church prohibited autopsies and dissections during the Middle Ages", "the rise of Christianity killed off
ancient science", or "the medieval Christian church suppressed the growth of natural philosophy", are all cited by
Numbers as examples of widely popular myths that still pass as historical truth, although they are not supported
by current historical research.
Phonology is a branch of linguistics concerned with the systematic organization of sounds in languages. It has traditionally
focused largely on the study of the systems of phonemes in particular languages (and therefore used to be also called
phonemics, or phonematics), but it may also cover any linguistic analysis either at a level beneath the word (including
syllable, onset and rime, articulatory gestures, articulatory features, mora, etc.) or at all levels of language
where sound is considered to be structured for conveying linguistic meaning. Phonology also includes the study of
equivalent organizational systems in sign languages. The word phonology (as in the phonology of English) can also
refer to the phonological system (sound system) of a given language. This is one of the fundamental systems which
a language is considered to comprise, like its syntax and its vocabulary. Phonology is often distinguished from phonetics.
While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech, phonology
describes the way sounds function within a given language or across languages to encode meaning. For many linguists,
phonetics belongs to descriptive linguistics, and phonology to theoretical linguistics, although establishing the
phonological system of a language is necessarily an application of theoretical principles to analysis of phonetic
evidence. Note that this distinction was not always made, particularly before the development of the modern concept
of the phoneme in the mid 20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive
disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology
or laboratory phonology. The word phonology comes from the Greek φωνή, phōnḗ, "voice, sound," and the suffix -logy
(which is from Greek λόγος, lógos, "word, speech, subject of discussion"). Definitions of the term vary. Nikolai
Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of
language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction
between language and speech being basically Saussure's distinction between langue and parole). More recently, Lass
(1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language,
while in more narrow terms, "phonology proper is concerned with the function, behavior and organization of sounds
as linguistic items." According to Clark et al. (2007), it means the systematic use of sound to encode meaning in
any spoken human language, or the field of linguistics studying this use. The history of phonology may be traced
back to the Ashtadhyayi, the Sanskrit grammar composed by Pāṇini in the 4th century BC. In particular the Shiva Sutras,
an auxiliary text to the Ashtadhyayi, introduces what can be considered a list of the phonemes of the Sanskrit language,
with a notational system for them that is used throughout the main text, which deals with matters of morphology,
syntax and semantics. The Polish scholar Jan Baudouin de Courtenay (together with his former student Mikołaj Kruszewski)
introduced the concept of the phoneme in 1876, and his work, though often unacknowledged, is considered to be the
starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony
and morphophonology), and had a significant influence on the work of Ferdinand de Saussure. An influential school
of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy,
whose Grundzüge der Phonologie (Principles of Phonology), published posthumously in 1939, is among the most important
works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder
of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the
concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, who was one of the
most prominent linguists of the 20th century. In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of
English (SPE), the basis for generative phonology. In this view, phonological representations are sequences of segments
made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant,
and Morris Halle. The features describe aspects of articulation and perception, are from a universally fixed set,
and have the binary values + or −. There are at least two levels of representation: underlying representation and
surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into
the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological
theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology
into phonology, which both solved and created problems. Natural phonology is a theory based on the publications of
its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal
phonological processes that interact with one another; which ones are active and which are suppressed is language-specific.
Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic
groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered
with respect to each other and apply simultaneously (though the output of one process may be the input to another).
The second most prominent natural phonologist is Patricia Donegan (Stampe's wife); there are many natural phonologists
in Europe, and a few in the U.S., such as Geoffrey Nathan. The principles of natural phonology were extended to morphology
by Wolfgang U. Dressler, who founded natural morphology. In 1976 John Goldsmith introduced autosegmental phonology.
Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature
combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental
phonology later evolved into feature geometry, which became the standard theory of representation for theories of
the organization of phonology as different as lexical phonology and optimality theory. Government phonology, which
originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures,
is based on the notion that all languages necessarily follow a small set of principles and vary according to their
selection of certain binary parameters. That is, all languages' phonological structures are essentially the same,
but there is restricted variation that accounts for differences in surface realizations. Principles are held to be
inviolable, though parameters may sometimes come into conflict. Prominent figures in this field include Jonathan
Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris. In a course at the LSA summer institute
in 1991, Alan Prince and Paul Smolensky developed optimality theory—an overall architecture for phonology according
to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance;
a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint.
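The ranking logic described above lends itself to a short sketch: each constraint counts violations, candidates are scored as tuples ordered by constraint rank, and tuples compare lexicographically, so a lower-ranked constraint only decides when all higher-ranked constraints tie. The constraints and candidate forms below are hypothetical toy examples, not an analysis of any actual language.

```python
# A toy sketch of optimality-theoretic evaluation. MAX-IO and *CODA are
# simplified classroom versions of the usual faithfulness and markedness
# constraints; the forms are invented.

def max_io(form, underlying):
    """MAX-IO: penalize deletion -- one violation per underlying segment
    missing from the candidate (toy version: compare lengths)."""
    return len(underlying) - len(form)

def no_coda(form, underlying):
    """*CODA (toy version): one violation if the form ends in a consonant."""
    return 0 if form[-1] in "aeiou" else 1

def evaluate(underlying, candidates, ranked_constraints):
    """Pick the candidate with the best violation profile; tuples compare
    lexicographically, so higher-ranked constraints dominate."""
    return min(candidates,
               key=lambda f: tuple(c(f, underlying) for c in ranked_constraints))

# With MAX-IO ranked above *CODA, keeping the final /t/ is optimal:
print(evaluate("pat", ["pat", "pa"], [max_io, no_coda]))  # pat
# Re-ranking *CODA above MAX-IO makes deletion the winning repair:
print(evaluate("pat", ["pat", "pa"], [no_coda, max_io]))  # pa
```

Re-ranking the same two constraints changes the winner, which is exactly how the theory models cross-linguistic variation.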
The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become a dominant trend in
phonology. The appeal to phonetic grounding of constraints and representational elements (e.g. features) in various
approaches has been criticized by proponents of 'substance-free phonology', especially Mark Hale and Charles Reiss.
Broadly speaking, government phonology (or its descendant, strict-CV phonology) has a greater following in the United
Kingdom, whereas optimality theory is predominant in the United States. An integrated approach to
phonological theory that combines synchronic and diachronic accounts of sound patterns was initiated in recent years with Evolutionary
Phonology. An important part of traditional, pre-generative schools of phonology is studying which
sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in
English, the "p" sound in pot is aspirated (pronounced [pʰ]) while that in spot is not aspirated (pronounced [p]).
However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category,
that is of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the
unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are
perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and
they are consequently assigned to different phonemes. For example, in Thai, Hindi, and Quechua, there are minimal
pairs of words for which aspiration is the only contrasting feature (two words can have different meanings but with
the only difference in pronunciation being that one has an aspirated sound where the other has an unaspirated one).
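The hunt for minimal pairs described here can be sketched as a simple procedure over transcribed word lists: compare every pair of equal-length transcriptions and report those differing in exactly one segment. The tiny lexicon below is invented for illustration (the glosses are hypothetical); in practice the input would be field transcriptions.

```python
# A minimal sketch of minimal-pair hunting over phonetic transcriptions.
# Each lexicon entry pairs a transcription (a tuple of segments, so that
# multi-character symbols like "pʰ" count as one segment) with a gloss.

from itertools import combinations

def minimal_pairs(lexicon):
    """Yield pairs of equal-length words that differ in exactly one
    segment, along with the two contrasting segments."""
    for (w1, g1), (w2, g2) in combinations(lexicon, 2):
        if len(w1) != len(w2):
            continue
        diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
        if len(diffs) == 1:
            yield (w1, g1), (w2, g2), diffs[0]

# Hypothetical word list with invented glosses:
lexicon = [
    (("p", "a"), "aunt"),
    (("pʰ", "a"), "cloth"),
    (("b", "a"), "shoulder"),
]

for (w1, g1), (w2, g2), (s1, s2) in minimal_pairs(lexicon):
    print(f"{''.join(w1)} '{g1}' vs {''.join(w2)} '{g2}': {s1} ~ {s2}")
```

A pair differing only in aspiration (p ~ pʰ) with distinct meanings is evidence that the two sounds belong to different phonemes in that language.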
Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech
of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language
is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether
two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account
as well. The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v],
two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the
same phoneme in English, but later came to belong to separate phonemes. This is one of the main factors of historical
change of languages as described in historical linguistics. The findings and insights of speech perception and articulation
research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as
the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second,
actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice
words into simple segments without affecting speech perception. Different linguists therefore take different approaches
to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones
to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool
for linguistic analysis, or reflects an actual process in the way the human brain processes a language. Since the
early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider
basic units at a more abstract level, as components of morphemes; these units can be called morphophonemes, and
analysis using this approach is called morphophonology. In addition to the minimal units that can serve the purpose
of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different
forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, accent,
and intonation. Phonology also includes topics such as phonotactics (the phonological constraints on what sounds
can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound
changes through the application of phonological rules, sometimes in a given order, which can be feeding or bleeding)
as well as prosody, the study of suprasegmentals and topics such as stress and intonation. The principles of phonological
analysis can be applied independently of modality because they are designed to serve as general analytical tools,
not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes
in sign languages), even though the sub-lexical units are not instantiated as speech sounds.
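The feeding and bleeding rule orderings mentioned above can be sketched as ordered rewrite rules in the SPE style; the derivation below is loosely modeled on the textbook Finnish case in which word-final vowel raising feeds assibilation (/vete/ → [vesi]), though the regex encoding of the rules is a simplification for illustration.

```python
# A sketch of ordered rewrite-rule application: each rule is a
# (name, pattern, replacement) triple applied once, in sequence, to map
# an underlying form to a surface form.

import re

def apply_rules(underlying, rules):
    """Apply each rule once, in order, returning the surface form."""
    form = underlying
    for name, pattern, replacement in rules:
        form = re.sub(pattern, replacement, form)
    return form

raising = ("e -> i / _#", r"e$", "i")           # word-final e raises to i
assibilation = ("t -> s / _i", r"t(?=i)", "s")  # t becomes s before i

# Feeding order: raising creates the /i/ that assibilation needs.
print(apply_rules("vete", [raising, assibilation]))  # vesi
# Counterfeeding order: assibilation runs first and finds no /i/.
print(apply_rules("vete", [assibilation, raising]))  # veti
```

Swapping the two rules yields a counterfeeding order, where assibilation fails to apply because its triggering context is created only afterwards — the kind of opacity that ordered-rule theories handle naturally.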
Conventionally, a computer consists of at least one processing element, typically a central processing unit (CPU), and some
form of memory. The processing element carries out arithmetic and logic operations, and a sequencing and control
unit can change the order of operations in response to stored information. Peripheral devices allow information to
be retrieved from an external source, and the result of operations saved and retrieved. Mechanical analog computers
started appearing in the first century and were later used in the medieval era for astronomical calculations. In
World War II, mechanical analog computers were used for specialized military applications such as calculating torpedo
aiming. During this time the first electronic digital computers were developed. Originally they were the size of
a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based
on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction
of the space. Computers are small enough to fit into mobile devices, and mobile computers can be powered by small
batteries. Personal computers in their various forms are icons of the Information Age and are generally considered
as "computers". However, the embedded computers found in many devices from MP3 players to fighter aircraft and from
electronic toys to industrial robots are the most numerous. The first known use of the word "computer" was in 1613
in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of
Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number." It referred
to a person who carried out calculations, or computations. The word continued with the same meaning until the middle
of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine
that carries out computations. Devices have been used to aid computation for thousands of years, mostly using one-to-one
correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record keeping
aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items,
probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example.
The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since
then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a
checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid
to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog "computer",
according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901
in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa
100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until
a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and
navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The
astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to
Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable
of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical
calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented
the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with
a gear train and gear-wheels, circa 1000 AD. The sector, a calculating instrument used for solving problems in proportion,
trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed
in the late 16th century and found application in gunnery, surveying and navigation. The slide rule was invented
around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer
for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares
and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials,
circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are
still in widespread use, particularly for solving time–distance problems in light aircraft. To save space and for
ease of reading, these are typically circular devices rather than the classic linear slide rule shape. A popular
example is the E6B. In the 1770s Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that
could write holding a quill pen. By switching the number and order of its internal wheels different letters, and
hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions.
Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and
still operates. The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation
in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set
period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential
equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 Lord Kelvin had already
discussed the possible construction of such calculators, but he had been stymied by the limited output torque of
the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next
integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting
in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. Charles Babbage, an English mechanical
engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer",
he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary
difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design,
an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched
cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine
would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be
read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching
and loops, and integrated memory, making it the first design for a general-purpose computer that could be described
in modern terms as Turing-complete. The machine was about a century ahead of its time. All the parts for his machine
had to be made by hand — this was a major problem for a device with thousands of parts. Eventually, the project was
dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical
engine can be attributed chiefly not only to difficulties of politics and financing, but also to his desire to develop
an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his
son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888.
He gave a successful demonstration of its use in computing tables in 1906. The first modern analog computer was a
tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog
computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized
in 1876 by James Thomson, the brother of the more famous Lord Kelvin. The art of mechanical analog computing reached
its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built
on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these
devices were built before their obsolescence became obvious. By the 1950s the success of digital electronic computers
had spelled the end for most analog computing machines, but analog computers remain in use in some specialized applications
such as education (control systems) and aircraft (slide rule). The principle of the modern computer was first described
by mathematician and pioneering computer scientist Alan Turing, who set out the idea in his seminal 1936 paper, On
Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing
Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known
as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical
computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem
by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide
algorithmically whether a given Turing machine will ever halt. He also introduced the notion of a 'Universal Machine'
(now known as a Universal Turing machine), with the idea that such a machine could perform the tasks of any other
machine, or in other words, it is provably capable of computing anything that is computable by executing a program
stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the
modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation.
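A Turing machine is simple enough to simulate in a few lines. The sketch below is illustrative only (the dictionary-based transition table and function names are not Turing's notation): it runs a two-rule machine that appends a 1 to a unary number. The step limit is a practical nod to the halting problem, since no general procedure can decide in advance whether a given table will halt.

```python
# A minimal Turing machine simulator (an illustrative sketch).
# The transition table maps (state, symbol) -> (write, move, next_state).

def run_turing_machine(table, tape, state="start", halt="halt", max_steps=10_000):
    tape = dict(enumerate(tape))   # sparse tape; missing cells read as blank "_"
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            # Collect the written portion of the tape in order.
            return "".join(tape[i] for i in sorted(tape))
        write, move, state = table[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    raise RuntimeError("no halt within step limit (cf. the halting problem)")

# Example machine: append a '1' to a unary number (computes n -> n + 1).
successor = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}

print(run_turing_machine(successor, "111"))  # four 1s: "1111"
```

The same `run_turing_machine` function can execute any transition table handed to it, which is the essence of the universal machine: the table is just data.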
Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete,
which is to say, they have algorithm execution capability equivalent to a universal Turing machine. By 1938 the United
States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the
Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During
World War II similar devices were developed in other countries as well. Early digital computers were electromechanical;
electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and
were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created
by German engineer Konrad Zuse in 1939, was one of the earliest examples of an electromechanical relay computer.
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable,
fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated
at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64
words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering
numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles
Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially
more reliable, given the technologies available at that time. The Z3 was Turing complete. Purely electronic circuit
elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation
replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s,
began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built
in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic
data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry
of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic
digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in
a mechanically rotating drum for memory. During World War II, the British at Bletchley Park achieved a number of
successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first
attacked with the help of the electro-mechanical bombes. To crack the more sophisticated German Lorenz SZ 40/42 machine,
used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus.
He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test
in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked
its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used
a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a
variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built
(the Mk I was converted to a Mk II, making ten machines in total). Colossus Mark I contained 1,500 thermionic valves
(tubes); Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding
the decoding process. The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic
programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more
flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory.
Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry
from the stored program electronic machines that came later. Once a program was written, it had to be mechanically
set into the machine with manual resetting of plugs and switches. It combined the high speed of electronics with
the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand
times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory
was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the
University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of
1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum
tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. Early computing machines
had fixed programs. Changing a machine's function required re-wiring and re-structuring it. With the proposal
of the stored-program computer this changed. A stored-program computer includes by design an instruction set and
can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the
stored-program computer was laid by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory
and began work on developing an electronic stored-program digital computer. His 1945 report ‘Proposed Electronic
Calculator’ was the first specification for such a device. John von Neumann at the University of Pennsylvania also
circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Small-Scale Experimental Machine, nicknamed
Baby, was the world's first stored-program computer. It was built at the Victoria University of Manchester by Frederic
C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed
for the Williams tube, the first random-access digital storage device. Although the computer was considered "small
and primitive" by the standards of its time, it was the first working machine to contain all of the elements essential
to a modern electronic computer. As soon as the SSEM had demonstrated the feasibility of its design, a project was
initiated at the university to develop it into a more usable computer, the Manchester Mark 1. The Mark 1 in turn
quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.
Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later
machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors
of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development
of computers. The LEO I computer became operational in April 1951 and ran the world's first regular routine office
computer job. The bipolar transistor was invented in 1947. From 1955 onwards transistors replaced vacuum tubes in
computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have
many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Silicon junction
transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite service life. Transistorized computers
could contain tens of thousands of binary logic circuits in a relatively compact space. At the University of Manchester,
a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead
of valves. Their first transistorised computer, and the first in the world, was operational by 1953, and a second
version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock
waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely
transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of
the Atomic Energy Research Establishment at Harwell. The next great advance in computing power came with the advent
of the integrated circuit. The idea of the integrated circuit was first conceived by a radar scientist working for
the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public
description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington,
D.C. on 7 May 1952. The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at
Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully
demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February
1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic
circuit are completely integrated". Noyce also came up with his own idea of an integrated circuit half a year later
than Kilby. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it
was made of silicon, whereas Kilby's chip was made of germanium. This new development heralded an explosion in the
commercial and personal use of computers and led to the invention of the microprocessor. While the subject of exactly
which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition
of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004,
designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at Intel. With the continued miniaturization
of computing resources, and advancements in portable battery life, portable computers grew in popularity in the 2000s.
The same developments that spurred the growth of laptop computers and other portable computers allowed manufacturers
to integrate computing resources into cellular phones. These so-called smartphones and tablets run on a variety of
operating systems and have become the dominant computing device on the market, with manufacturers reporting having
shipped an estimated 237 million devices in 2Q 2013. In practical terms, a computer program may be just a few instructions
or extend to many millions of instructions, as do the programs for word processors and web browsers for example.
A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake
over many years of operation. Large computer programs consisting of several million instructions may take teams of
programmers years to write, and due to the complexity of the task almost certainly contain errors. Program execution
might be likened to reading a book. While a person will normally read each word and line in sequence, they may at
times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer
may sometimes go back and repeat the instructions in some section of the program over and over again until some internal
condition is met. This is called the flow of control within the program and it is what allows the computer to perform
tasks repeatedly without human intervention. In most computers, individual instructions are stored as machine code
with each instruction being given a unique number (its operation code or opcode for short). The command to add two
numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The
simplest computers are able to perform any of a handful of different instructions; the more complex computers have
several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers,
it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists
of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer
in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the
data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases,
a computer might store some or all of its program in memory that is kept separate from the data it operates on. This
is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits
of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs
as long lists of numbers (machine language) and while this technique was used with many early computers, it is extremely
tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic
instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as
ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs
written in assembly language into something the computer can actually understand (machine language) is usually done
by a computer program called an assembler. Programming languages provide various ways of specifying programs for
computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise.
They are purely written languages and are often difficult to read aloud. They are generally either translated into
machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter.
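The earlier points about numeric opcodes, the program counter, and conditional jumps can be made concrete with an interpreter for a toy stored-program machine. Everything below is invented for illustration (the opcode numbers, the two-cell instruction format, the memory layout); it is a sketch of the idea, not any real instruction set.

```python
# A toy von Neumann machine: program and data share one memory of numbers.
# Opcode numbers are invented for illustration, not any real instruction set.
LOAD, ADD, STORE, JNZ, HALT = 1, 2, 3, 4, 0

def run(memory):
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2                       # each instruction occupies two cells
        if op == LOAD:    acc = memory[arg]
        elif op == ADD:   acc += memory[arg]
        elif op == STORE: memory[arg] = acc
        elif op == JNZ:   pc = arg if acc != 0 else pc  # conditional jump: loops
        elif op == HALT:  return memory

memory = [0] * 23
memory[0:16] = [LOAD,21, ADD,20, STORE,21,     # total += counter
                LOAD,20, ADD,22, STORE,20,     # counter -= 1
                JNZ,0,  HALT,0]                # loop until counter reaches 0
memory[20], memory[21], memory[22] = 5, 0, -1  # counter, total, constant -1

print(run(memory)[21])  # sums 5+4+3+2+1 -> 15
```

Note that the program is itself just a list of numbers sitting in the same memory as its data, which is exactly the stored-program property described above.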
Sometimes programs are executed by a hybrid method of the two techniques. Machine languages and the assembly languages
that represent them (collectively termed low-level programming languages) tend to be unique to a particular type
of computer. For instance, an ARM architecture computer (such as may be found in a PDA or a hand-held videogame)
cannot understand the machine language of an Intel Pentium or the AMD Athlon 64 computer that might be in a PC. Though
considerably easier than in machine language, writing long programs in assembly language is often difficult and is
also error prone. Therefore, most practical programs are written in more abstract high-level programming languages
that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error).
High level languages are usually "compiled" into machine language (or sometimes into assembly language and then into
machine language) using another computer program called a compiler. High level languages are less related to the
workings of the target computer than assembly language, and more related to the language and structure of the problem(s)
to be solved by the final program. It is therefore often possible to use different compilers to translate the same
high level language program into the machine language of many different types of computer. This is part of the means
by which software like video games may be made available for different computer architectures such as personal computers
and various video game consoles. Fourth-generation languages (4GLs) are less procedural than third-generation languages.
The benefit of a 4GL is that it provides ways to obtain information without requiring the direct help of a programmer;
SQL is one example. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program,
or have only subtle effects. But in some cases, they may cause the program or the entire system to "hang", becoming
unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs
may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take
advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since
computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or
an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of
the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting
a relay in the Harvard Mark II computer in September 1947. A general purpose computer has four main components: the
arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed
I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands
to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit
represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it
represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more
of the circuits may control the state of one or more of the other circuits. The control unit (often called a control
system or central controller) manages the computer's various components; it reads and interprets (decodes) the program
instructions, transforming them into control signals that activate other parts of the computer. Control systems in
advanced computers may change the order of execution of some instructions to improve performance. A key component
common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location
in memory the next instruction is to be read from. Since the program counter is (conceptually) just another set of
memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the
next instruction to be read from a place 100 locations further down the program. Instructions that modify the program
counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often
conditional instruction execution (both examples of control flow). The sequence of operations that the control unit
goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex
CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes
all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing
unit (CPU). Early CPUs were composed of many separate components but since the mid-1970s CPUs have typically been
constructed on a single integrated circuit called a microprocessor. The set of arithmetic operations that a particular
ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry
functions such as sine, cosine, etc., and square roots. Some can only operate on whole numbers (integers) whilst
others use floating point to represent real numbers, albeit with limited precision. However, any computer that is
capable of performing just the simplest operations can be programmed to break down the more complex operations into
simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although
it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers
and return boolean truth values (true or false) depending on whether one is equal to, greater than or less than the
other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful
for creating complicated conditional statements and processing boolean logic. Superscalar computers may contain multiple
ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and
MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be
viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store
a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the
number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information
stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into
memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's
responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern
computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte
is able to represent 256 different numbers (2⁸ = 256); either from 0 to 255 or −128 to +127. To store larger numbers,
several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are
usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of
specialized applications or historical contexts. A computer can store any kind of information in memory if it can
be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains
a special set of memory cells called registers that can be read and written to much more rapidly than the main memory
area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used
for the most frequently needed data items to avoid having to access main memory every time data is needed. As data
is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and
control units) greatly increases the computer's speed. RAM can be read and written to anytime the CPU commands it,
but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically
used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power
to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program
called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever
the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required
software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like
hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned
off but is also rewritable. It is typically much slower than conventional ROM and RAM however, so its use is restricted
to applications where high speed is unnecessary. In more sophisticated computers there may be one or more RAM cache
memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache
are designed to move frequently needed data into the cache automatically, often without the need for any intervention
on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices
that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals
include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk
drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is
another form of I/O. While a computer may be viewed as running one gigantic program stored in its main memory, in
some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved
by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which
this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing
instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt,
the computer can return to that task later. If several programs are running "at the same time", then the interrupt
generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern
computers typically execute instructions several orders of magnitude faster than human perception, it may appear
that many programs are running at the same time even though only one is ever executing in any given instant. This
method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.
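Round-robin time-sharing can be sketched with Python generators, where each `yield` stands in for the end of a time slice. This is a cooperative simplification: real systems pre-empt tasks with interrupts, as described above, but the switching pattern is the same.

```python
# Cooperative "time-sharing" sketch: each program runs until it yields,
# and a scheduler switches between the programs in turn (round-robin).
from collections import deque

def program(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # give up the rest of this time slice

def scheduler(programs):
    ready = deque(programs)
    while ready:
        task = ready.popleft()
        try:
            next(task)             # run one slice of the task
            ready.append(task)     # still unfinished: queue it again
        except StopIteration:
            pass                   # task finished; drop it

scheduler([program("A", 3), program("B", 2)])
# Interleaved output: A step 0, B step 0, A step 1, B step 1, A step 2
```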
Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in
direct proportion to the number of programs it is running, but most programs spend much of their time waiting for
slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or
press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred.
This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable
speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration,
a technique once employed only in large and powerful machines such as supercomputers, mainframe computers and servers.
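Distributing work across several CPUs can be sketched with Python's standard library; the chunk boundaries and worker count below are arbitrary choices for illustration.

```python
# Multiprocessing sketch: split one large sum into chunks, one per worker process.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Split 0..1,000,000 into four chunks and sum them in parallel.
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(range(1_000_000))
```

Summing ranges is "embarrassingly parallel" in the sense used above: the chunks share no data, so no coordination between the processes is needed beyond collecting the results.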
Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now
widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular
often have unique architectures that differ significantly from the basic stored-program architecture and from
general purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized
computing hardware. Such designs tend to be useful only for specialized tasks due to the large scale of program organization
required to successfully utilize most of the available resources at once. Supercomputers usually see usage in large-scale
simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel"
tasks. Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's
SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial
systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began
to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA),
and the computer network that resulted was called the ARPANET. The technologies that made the ARPANET possible spread
and evolved. In time, the network spread beyond academic and military institutions and became known as the Internet.
The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating
systems and applications were modified to include the ability to define and access the resources of other computers
on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an
individual computer. Initially these facilities were available primarily to people working in high-tech environments,
but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of
cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact,
the number of computers that are networked is growing phenomenally. A very large proportion of personal computers
regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing
mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.
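The idea of treating another computer's resources as an extension of one's own can be sketched with Python's standard socket library. Here a tiny server thread on the loopback interface hands out a stored string, and a client connects and reads it as if the remote data were local. The payload and the use of an ephemeral port are invented for illustration.

```python
import socket
import threading

def serve_once(listener, payload):
    conn, _ = listener.accept()      # wait for one client to connect
    with conn:
        conn.sendall(payload)        # hand the stored resource over

# "Server" machine: bind to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once,
                 args=(server, b"shared file contents"),
                 daemon=True).start()

# "Client" machine: connect and fetch the remote resource.
with socket.create_connection(("127.0.0.1", port)) as client:
    data = client.recv(1024)

print(data.decode())
```

The same pattern, elaborated with naming, authentication, and richer protocols, underlies file sharing, networked printers, and the World Wide Web itself.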
The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing
them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with
a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other
computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able
to perform the same computational tasks, given enough time and storage capacity. A computer does not need to be electronic,
nor even have a processor, RAM, or a hard disk. While popular usage of the word "computer" is synonymous
with a personal electronic computer, the modern definition of a computer is literally: "A device that computes, especially
a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles,
stores, correlates, or otherwise processes information." Any device which processes information qualifies as a computer,
especially if the processing is purposeful.[citation needed] Historically, computers evolved from mechanical computers
and eventually from vacuum tubes to transistors. However, conceptually computational systems as flexible as a personal
computer can be built out of almost anything. For example, a computer can be made out of billiard balls (the billiard-ball
computer is an often-quoted example).[citation needed] More realistically, modern computers are made out of transistors
made of photolithographed semiconductors. There is active research to make computers out of many promising new types
of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers
are universal, and are able to calculate any computable function, and are limited only by their memory capacity and
operating speed. However different designs of computers can give very different performance for particular problems;
for example quantum computers can potentially break some modern encryption algorithms (by quantum factoring) very
quickly. A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative
solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of
the emerging field of artificial intelligence and machine learning. The term hardware covers all of those parts of
a computer that are tangible objects. Circuits, displays, power supplies, cables, keyboards, printers and mice are
all hardware. Software refers to parts of the computer which do not have a material form, such as programs, data,
protocols, etc. When software is stored in hardware that cannot easily be modified (such as BIOS ROM in an IBM PC
compatible), it is sometimes called "firmware". Firmware combines aspects of hardware and software: the BIOS chip inside
a computer, for example, is a physical component (hardware) located on the motherboard that stores the BIOS setup
program (software). When unprocessed data is sent to the computer with the help of input devices,
the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of
processing is mainly regulated by the CPU. Some examples of hand-operated input devices are:
Black people is a term used in certain countries, often in socially based systems of racial classification or of ethnicity,
to describe persons who are perceived to be dark-skinned compared to other given populations. As such, the meaning
of the expression varies widely both between and within societies, and depends significantly on context. For many
other individuals, communities and countries, "black" is also perceived as a derogatory, outdated, reductive or otherwise
unrepresentative label, and as a result is neither used nor defined. Different societies apply differing criteria
regarding who is classified as "black", and these social constructs have also changed over time. In a number of countries,
societal variables affect classification as much as skin color, and the social criteria for "blackness" vary. For
example, in North America the term black people is not necessarily an indicator of skin color or majority ethnic
ancestry, but it is instead a socially based racial classification related to being African American, with a family
history associated with institutionalized slavery. In South Africa and Latin America, for instance, mixed-race people
are generally not classified as "black." In South Pacific regions such as Australia and Melanesia, the term "black"
was applied by European colonists or adopted by populations with different histories and ethnic origins. The Romans interacted
with and later conquered parts of Mauretania, an early state that covered modern Morocco, western Algeria, and the
Spanish cities Ceuta and Melilla during the classical period. The people of the region were noted in Classical literature
as Mauri, which was subsequently rendered as Moors in English. Numerous communities of dark-skinned peoples are present
in North Africa, some dating from prehistoric communities. Others are descendants of the historical trans-Saharan
slave trade and, after the Arab invasions of North Africa in the 7th century, of slaves from the Arab slave trade
in North Africa. In the 18th century, the Moroccan Sultan Moulay Ismail "the Bloodthirsty" (1672–1727)
raised a corps of 150,000 black slaves, called his Black Guard, who coerced the country into submission. According
to Dr. Carlos Moore, resident scholar at Brazil's University of the State of Bahia, in the 21st century Afro-multiracials
in the Arab world, including Arabs in North Africa, self-identify in ways that resemble multi-racials in Latin America.
He claims that black-looking Arabs, much like black-looking Latin Americans, consider themselves white because they
have some distant white ancestry. Egyptian President Anwar Sadat had a mother who was a dark-skinned Nubian Sudanese
woman and a father who was a lighter-skinned Egyptian. In response to an advertisement for an acting position, as
a young man he said, "I am not white but I am not exactly black either. My blackness is tending to reddish". Due
to the patriarchal nature of Arab society, Arab men, including during the slave trade in North Africa, enslaved more
black women than men. They used more black female slaves in domestic service and agriculture than males. The men
interpreted the Qur'an to permit sexual relations between a male master and his female slave outside of marriage
(see Ma malakat aymanukum and sex), leading to many mixed-race children. When an enslaved woman became pregnant with
her Arab master's child, she was considered as umm walad or "mother of a child", a status that granted her privileged
rights. The child was given rights of inheritance to the father's property, so mixed-race children could share in
any wealth of the father. Because the society was patrilineal, the children took their fathers' social status at
birth and were born free. Some succeeded their fathers as rulers, such as Sultan Ahmad al-Mansur, who ruled Morocco
from 1578 to 1608. He was not technically considered a mixed-race child of a slave; his mother was Fulani and
a concubine of his father. Such tolerance for black persons, even when technically "free", was not so common in Morocco.
The long association of sub-Saharan peoples with slavery is shown in the term abd (Arabic: عبد, meaning "slave");
it is still frequently used in the Arabic-speaking world as a term for black people. In early 1991, non-Arabs of
the Zaghawa tribe of Sudan attested that they were victims of an intensifying Arab apartheid campaign, segregating
Arabs and non-Arabs (specifically people of sub-Saharan African descent). Sudanese Arabs, who controlled the government,
were widely referred to as practicing apartheid against Sudan's non-Arab citizens. The government was accused of
"deftly manipulat(ing) Arab solidarity" to carry out policies of apartheid and ethnic cleansing. American University
economist George Ayittey accused the Arab government of Sudan of practicing acts of racism against black citizens.
According to Ayittey, "In Sudan... the Arabs monopolized power and excluded blacks – Arab apartheid." Many African
commentators joined Ayittey in accusing Sudan of practising Arab apartheid. Alan Dershowitz described Sudan as an
example of a government that "actually deserve(s)" the appellation "apartheid." Former Canadian Minister of Justice
Irwin Cotler echoed the accusation. In South Africa, the period of colonization resulted in many unions and marriages
between European men and African women from various tribes, resulting in mixed-race children. As the Europeans acquired
territory and imposed rule over the Africans, they generally pushed mixed-race and Africans into second-class status.
During the first half of the 20th century, the Afrikaner-dominated government classified the population according
to four main racial groups: Black, White, Asian (mostly Indian), and Coloured. The Coloured group included people
of mixed Bantu, Khoisan, and European descent (with some Malay ancestry, especially in the Western Cape). The Coloured
definition occupied an intermediary political position between the Black and White definitions in South Africa. The
government imposed a system of legal racial segregation, a complex of laws known as apartheid. The apartheid bureaucracy
devised complex (and often arbitrary) criteria in the Population Registration Act of 1950 to determine who belonged in which
group. Minor officials administered tests to enforce the classifications. When it was unclear from a person's physical
appearance whether the individual should be considered Coloured or Black, the "pencil test" was used. A pencil was
inserted into a person's hair to determine if the hair was kinky enough to hold the pencil, rather than having it
pass through, as it would with smoother hair. If so, the person was classified as Black. Such classifications sometimes
divided families. Sandra Laing is a South African woman who was classified as Coloured by authorities during the
apartheid era, due to her skin colour and hair texture, although her parents could prove at least three generations
of European ancestors. At age 10, she was expelled from her all-white school. The officials' decisions based on her
anomalous appearance disrupted her family and adult life. She was the subject of the 2008 biographical dramatic film
Skin, which won numerous awards. During the apartheid era, those classed as "Coloured" were oppressed and discriminated
against. However, they had limited rights and overall slightly better socioeconomic conditions than those classed
as "Black". The government required that Blacks and Coloureds live in areas separate from Whites, creating large
townships located away from the cities as areas for Blacks. In the post-apartheid era, the Constitution of South
Africa has declared the country to be a "Non-racial democracy". In an effort to redress past injustices, the ANC
government has introduced laws in support of affirmative action policies for Blacks; under these they define "Black"
people to include "Africans", "Coloureds" and "Asians". Some affirmative action policies favor "Africans" over "Coloureds"
in terms of qualifying for certain benefits. Some South Africans categorized as "African Black" say that "Coloureds"
did not suffer as much as they did during apartheid. "Coloured" South Africans are known to discuss their dilemma
by saying, "we were not white enough under apartheid, and we are not black enough under the ANC (African National
Congress)".[citation needed] In 2008, the High Court in South Africa ruled that Chinese South Africans who were residents
during the apartheid era (and their descendants) are to be reclassified as "Black people," solely for the purposes
of accessing affirmative action benefits, because they were also "disadvantaged" by racial discrimination. Chinese
people who arrived in the country after the end of apartheid do not qualify for such benefits. Other than by appearance,
"Coloureds" can usually be distinguished from "Blacks" by language. Most speak Afrikaans or English as a first language,
as opposed to Bantu languages such as Zulu or Xhosa. They also tend to have more European-sounding names than Bantu
names. Historians estimate that between the advent of Islam in 650 CE and the abolition of slavery in the Arabian
Peninsula in the mid-20th century, 10 to 18 million sub-Saharan Black Africans were enslaved by Arab slave traders
and transported to the Arabian Peninsula and neighboring countries. This number far exceeded the number of slaves
who were taken to the Americas. Several factors affected the visibility of descendants of this diaspora in 21st-century
Arab societies: The traders shipped more female slaves than males, as there was a demand for them to serve as concubines
in harems in the Arabian Peninsula and neighboring countries. Male slaves were castrated in order to serve as harem
guards. The death toll of Black African slaves from forced labor was high. The mixed-race children of female slaves
and Arab owners were assimilated into the Arab owners' families under the patrilineal kinship system. As a result,
few distinctive Afro-Arab black communities have survived in the Arabian Peninsula and neighboring countries. Genetic
studies have found significant African female-mediated gene flow in Arab communities in the Arabian Peninsula and
neighboring countries: on average, 38% of maternal lineages in Yemen are of direct African descent, 16% in Oman
and Qatar, and 10% in Saudi Arabia and the United Arab Emirates. Distinctive and self-identified black communities have
been reported in countries such as Iraq, with a reported 1.2 million black people, and they attest to a history of
discrimination. African-Iraqis have sought minority status from the government, which would reserve some seats in
Parliament for representatives of their population. According to Alamin M. Mazrui et al., generally in the Arabian
Peninsula and neighboring countries, most of those of visible African descent are still classified and identify as
Arab, not black. About 150,000 East African and black people live in Israel, amounting to just over 2% of the nation's
population. The vast majority of these, some 120,000, are Beta Israel, most of whom are recent immigrants who came
during the 1980s and 1990s from Ethiopia. In addition, Israel is home to over 5,000 members of the African Hebrew
Israelites of Jerusalem movement that are descendants of African Americans who emigrated to Israel in the 20th century,
and who reside mainly in a distinct neighborhood in the Negev town of Dimona. Unknown numbers of black converts to
Judaism reside in Israel, most of them converts from the United Kingdom, Canada, and the United States. Additionally,
there are around 60,000 non-Jewish African immigrants in Israel, some of whom have sought asylum. Most of the migrants
are from communities in Sudan and Eritrea, particularly the Niger-Congo-speaking Nuba groups of the southern Nuba
Mountains; some are illegal immigrants. Beginning several centuries ago, during the period of the Ottoman Empire,
tens of thousands of Black Africans were brought by slave traders to plantations and agricultural areas situated
between Antalya and Istanbul in present-day Turkey. Some of their descendants remained in situ, and many migrated
to larger cities and towns. Other black slaves were transported to Crete, from where they or their descendants later
reached the İzmir area through the population exchange between Greece and Turkey in 1923, or indirectly from Ayvalık
in pursuit of work. The Siddi are an ethnic group inhabiting India and Pakistan whose members are descended from
Bantu peoples from Southeast Africa that were brought to the Indian subcontinent as slaves by Arab and Portuguese
merchants. Although it is commonly believed locally that "Siddi" derives from a word meaning "black", the term is
actually derived from "Sayyid", the title borne by the captains of the Arab vessels that first brought Siddi settlers
to the area. In the Makran strip of the Sindh and Balochistan provinces in southwestern Pakistan, these Bantu descendants
are known as the Makrani. There was a brief "Black Power" movement in Sindh in the 1960s and many Siddi are proud
of and celebrate their African ancestry. The Negritos are believed to be the first inhabitants of Southeast Asia.
Once inhabiting Taiwan, Vietnam, and various other parts of Asia, they are now confined primarily to Thailand, the
Malay Archipelago, and the Andaman and Nicobar Islands. Negrito means "little black person" in Spanish (the diminutive
of negro); it is what the Spaniards called the short-statured,
hunter-gatherer autochthones that they encountered in the Philippines. Despite this, Negritos are never referred
to as black today, and doing so would cause offense. The term Negrito itself has come under criticism in countries
like Malaysia, where it is now interchangeable with the more acceptable Semang, although this term actually refers
to a specific group. The common Thai word for Negritos literally means "frizzy hair". The term "Moors" has been used
in Europe in a broader, somewhat derogatory sense to refer to Muslims, especially those of Arab or Berber descent,
whether living in North Africa or Iberia. Moors were not a distinct or self-defined people. Medieval and early modern
Europeans applied the name to Muslim Arabs, Berbers, Black Africans and Europeans alike. Isidore of Seville, writing
in the 7th century, claimed that the Latin word Maurus was derived from the Greek mauron, μαύρον, which is the Greek
word for black. Indeed, by the time Isidore of Seville came to write his Etymologies, the word Maurus or "Moor" had
become an adjective in Latin, "for the Greeks call black, mauron". "In Isidore’s day, Moors were black by definition…"
Afro-Spaniards are Spanish nationals of West/Central African descent. They today mainly come from Angola, Brazil,
Cameroon, Cape Verde, Equatorial Guinea, Ghana, Gambia, Guinea-Bissau, Mali, Nigeria and Senegal. Additionally, many
Afro-Spaniards born in Spain are from the former Spanish colony Equatorial Guinea. Today, there are an estimated
683,000 Afro-Spaniards in Spain. According to the Office for National Statistics, at the 2001 census there were over
a million black people in the United Kingdom; 1% of the total population described themselves as "Black Caribbean",
0.8% as "Black African", and 0.2% as "Black other". Britain encouraged the immigration of workers from the Caribbean
after World War II; the first symbolic movement was those who came on the ship the Empire Windrush. The preferred
official umbrella term is "black and minority ethnic" (BME), but sometimes the term "black" is used on its own, to
express unified opposition to racism, as in the Southall Black Sisters, which started with a mainly British Asian
constituency, and the National Black Police Association, which has a membership of "African, African-Caribbean and
Asian origin". As African states became independent in the 1960s, the Soviet Union offered many of their citizens
the chance to study in Russia. Over a period of 40 years, about 400,000 African students from various countries moved
to Russia to pursue higher studies, including many Black Africans. This extended beyond the Soviet Union to many
countries of the Eastern bloc. Due to the Ottoman slave trade that had flourished in the Balkans, the coastal town
of Ulcinj in Montenegro had its own black community. As a consequence of the slave trade and privateer activity,
some 100 black people are reported to have lived in Ulcinj until 1878. The Ottoman Army also deployed an estimated 30,000 Black
African troops and cavalrymen to its expedition in Hungary during the Austro-Turkish War of 1716–18. Indigenous Australians
have been referred to as "black people" in Australia since the early days of European settlement. While originally
related to skin colour, the term is used today to indicate Aboriginal or Torres Strait Islander ancestry in general
and can refer to people of any skin pigmentation. Being identified as either "black" or "white" in Australia during
the 19th and early 20th centuries was critical in one's employment and social prospects. Various state-based Aboriginal
Protection Boards were established which had virtually complete control over the lives of Indigenous Australians
– where they lived, their employment, marriage, education and included the power to separate children from their
parents. Aborigines were not allowed to vote and were often confined to reserves and forced into low paid or effectively
slave labour. The social position of mixed-race or "half-caste" individuals varied over time. A 1913 report by Sir
Baldwin Spencer states that: After the First World War, however, it became apparent that the number of mixed-race
people was growing at a faster rate than the white population, and by 1930 fear of the "half-caste menace" undermining
the White Australia ideal from within was being taken as a serious concern. Dr. Cecil Cook, the Northern Territory
Protector of Natives, noted that: The official policy became one of biological and cultural assimilation: "Eliminate
the full-blood and permit the white admixture to half-castes and eventually the race will become white". This led
to different treatment for "black" and "half-caste" individuals, with lighter-skinned individuals targeted for removal
from their families to be raised as "white" people, restricted from speaking their native language and practising
traditional customs, a process now known as the Stolen Generation. The second half of the 20th century to the present
has seen a gradual shift towards improved human rights for Aboriginal people. In a 1967 referendum over 90% of the
Australian population voted to end constitutional discrimination and to include Aborigines in the national census.
During this period many Aboriginal activists began to embrace the term "black" and use their ancestry as a source
of pride. Activist Bob Maza said: In 1978 Aboriginal writer Kevin Gilbert received the National Book Council award
for his book Living Black: Blacks Talk to Kevin Gilbert, a collection of Aboriginal people's stories, and in 1998
was awarded (but refused to accept) the Human Rights Award for Literature for Inside Black Australia, a poetry anthology
and exhibition of Aboriginal photography. In contrast to previous definitions based solely on the degree of Aboriginal
ancestry, in 1990 the Government changed the legal definition of Aboriginal to include any: This nationwide acceptance
and recognition of Aboriginal people led to a significant increase in the number of people self-identifying as Aboriginal
or Torres Strait Islander. The reappropriation of the term "black" with a positive and more inclusive meaning has
resulted in its widespread use in mainstream Australian culture, including public media outlets, government agencies,
and private companies. In 2012, a number of high-profile cases highlighted the legal and community attitude that
identifying as Aboriginal or Torres Strait Islander is not dependent on skin colour, with well-known boxer Anthony
Mundine being widely criticised for questioning the "blackness" of another boxer and journalist Andrew Bolt being
successfully sued for publishing discriminatory comments about Aboriginals with light skin. In the Colonial America
of 1619, John Rolfe used negars in describing the slaves who were captured from West Africa and then shipped to the
Virginia colony. Later American English spellings, neger and neggar, prevailed in the northern colony of New York under
the Dutch, and in metropolitan Philadelphia's Moravian and Pennsylvania Dutch communities; the African Burial Ground
in New York City originally was known by the Dutch name "Begraafplaats van de Neger" (Cemetery of the Negro); an
early US occurrence of neger in Rhode Island dates from 1625. Thomas Jefferson also used the term "black" in his
Notes on the State of Virginia in allusion to the slave populations. By the 1900s, nigger had become a pejorative
word in the United States. In its stead, the term colored became the mainstream alternative to negro and its derived
terms. After the African-American civil rights movement, the terms colored and negro gave way to "black". Negro had
superseded colored as the most polite word for African Americans at a time when black was considered more offensive.
This term was accepted as normal, including by people classified as Negroes, until the later Civil Rights movement
in the late 1960s. One well-known example is the identification by Reverend Martin Luther King, Jr. of his own race
as "Negro" in his famous speech of 1963, I Have a Dream. During the American Civil Rights movement of the 1950s and
1960s, some African-American leaders in the United States, notably Malcolm X, objected to the word Negro because
they associated it with the long history of slavery, segregation, and discrimination that treated African Americans
as second-class citizens, or worse. Malcolm X preferred Black to Negro, but later gradually abandoned that as well
for Afro-American after leaving the Nation of Islam. In the first 200 years that black people were in the United
States, they primarily identified themselves by their specific ethnic group (closely allied to language) and not
by skin color. Individuals identified themselves, for example, as Ashanti, Igbo, Bakongo, or Wolof. However, when
the first captives were brought to the Americas, they were often combined with other groups from West Africa, and
individual ethnic affiliations were not generally acknowledged by English colonists. In areas of the Upper South,
different ethnic groups were brought together. This is significant as the captives came from a vast geographic region:
the West African coastline stretching from Senegal to Angola and in some cases from the south-east coast such as
Mozambique. A new African-American identity and culture was born that incorporated elements of the various ethnic
groups and of European cultural heritage, resulting in fusions such as the Black church and Black English. This new
identity was based on provenance and slave status rather than membership in any one ethnic group.[citation needed]
By contrast, slave records from Louisiana show that the French and Spanish colonists recorded more complete identities
of the West Africans, including ethnicities and given tribal names. The US racial or ethnic classification "black"
refers to people with all possible kinds of skin pigmentation, from the darkest through to the very lightest skin
colors, including albinos, if they are believed by others to have West African ancestry (in any discernible percentage),
or to exhibit cultural traits associated with being "African American". As a result, in the United States the term
"black people" is not an indicator of skin color or ethnic origin but is instead a socially based racial classification
related to being African American, with a family history associated with institutionalized slavery. Relatively dark-skinned
people can be classified as white if they fulfill other social criteria of "whiteness", and relatively light-skinned
people can be classified as black if they fulfill the social criteria for "blackness" in a particular setting. By
the early 19th century, the majority of black people in the United States were native-born, so the use of the term "African" became
problematic. Though initially a source of pride, many blacks feared that the use of African as an identity would
be a hindrance to their fight for full citizenship in the US. They also felt that it would give ammunition to those
who were advocating repatriating black people back to Africa. In 1835, black leaders called upon Black Americans
to remove the title of "African" from their institutions and replace it with "Negro" or "Colored American". A few
institutions chose to keep their historic names, such as the African Methodist Episcopal Church. African Americans
popularly used the terms "Negro" or "colored" for themselves until the late 1960s. In 1988, the civil rights leader
Jesse Jackson urged Americans to use instead the term "African American" because it had a historical cultural base
and was a construction similar to terms used by European descendants, such as German American, Italian American,
etc. Since then, African American and black have often had parallel status. However, controversy continues over which,
if either, of the two terms is more appropriate. Maulana Karenga argues that the term African-American is more appropriate
because it accurately articulates their geographical and historical origin.[citation needed] Others have argued that
"black" is a better term because "African" suggests foreignness, although Black Americans helped found the United
States. Still others believe that the term black is inaccurate because African Americans have a variety of skin tones.
Some surveys suggest that the majority of Black Americans have no preference for "African American" or "Black", although
they have a slight preference for "black" in personal settings and "African American" in more formal settings. The
U.S. census race definitions say that a "black" person is one having origins in any of the black (sub-Saharan) racial groups
of Africa. It includes people who indicate their race as "Black, African Am., or Negro" or who provide written entries
such as African American, Afro-American, Kenyan, Nigerian, or Haitian. The Census Bureau notes that these classifications
are socio-political constructs and should not be interpreted as scientific or anthropological. Most African Americans
also have European ancestry in varying amounts; a lesser proportion have some Native American ancestry. For instance,
genetic studies of African Americans show an ancestry that is on average 17–18% European. From the late 19th century,
the South used a colloquial term, the one-drop rule, to classify as black a person of any known African ancestry.
This practice of hypodescent was not put into law until the early 20th century. Legally the definition varied from
state to state. Racial definition was more flexible in the 18th and 19th centuries before the American Civil War.
For instance, President Thomas Jefferson held in slavery persons who were legally white (less than 25% black) under Virginia
law at the time: because they were born to enslaved mothers, they were enslaved from birth under the principle
of partus sequitur ventrem, which Virginia adopted into law in 1662. The concept of blackness in the United States
has been described as the degree to which one associates themselves with mainstream African-American culture, politics,
and values. To a certain extent, this concept is not so much about race but more about political orientation, culture
and behavior. Blackness can be contrasted with "acting white", where black Americans are said to behave with assumed
characteristics of stereotypical white Americans with regard to fashion, dialect, taste in music, and possibly, from
the perspective of a significant number of black youth, academic achievement. Due to the often political and cultural
contours of blackness in the United States, the notion of blackness can also be extended to non-black people. Toni
Morrison once described Bill Clinton as the first black President of the United States, because, as she put it, he
displayed "almost every trope of blackness". Christopher Hitchens was offended by the notion of Clinton as the first
black president, noting, "Mr Clinton, according to Toni Morrison, the Nobel Prize-winning novelist, is our first
black President, the first to come from the broken home, the alcoholic mother, the under-the-bridge shadows of our
ranking systems. Thus, we may have lost the mystical power to divine diabolism, but we can still divine blackness
by the following symptoms: broken homes, alcoholic mothers, under-the-bridge habits and (presumable from the rest
of [Arthur] Miller's senescent musings) the tendency to sexual predation and to shameless perjury about same." Some
black activists were also offended, claiming that Clinton used his knowledge of black culture to exploit black people
for political gain as no other president had before, while not serving black interests. They cite the lack of action
during the Rwandan Genocide and his welfare reform, which Larry Roberts said had led to the worst child poverty since
the 1960s. Others noted that the number of black people in jail increased during his administration. In July 2012,
Ancestry.com reported on historic and DNA research by its staff that discovered that Obama is likely a descendant
through his mother of John Punch, considered by some historians to be the first African slave in the Virginia colony.
An indentured servant, he was "bound for life" in 1640 after trying to escape. The story of him and his descendants
is that of multi-racial America since it appeared he and his sons married or had unions with white women, likely
indentured servants and working-class like them. Their multi-racial children were free because they were born to
free English women. Over time, Obama's line of the Bunch family (as they became known) were property owners and continued
to "marry white"; they became part of white society, likely by the early to mid-18th century. Approximately 12 million
Africans were shipped to the Americas during the Atlantic slave trade from 1492 to 1888, with 11.5 million of those
shipped to South America and the Caribbean. Brazil was the largest importer in the Americas, with 5.5 million African
slaves imported, followed by the British Caribbean with 2.76 million, the Spanish Caribbean and Spanish Mainland
with 1.59 million Africans, and the French Caribbean with 1.32 million. Today their descendants number approximately
150 million in South America and the Caribbean. In addition to skin color, other physical characteristics such as
facial features and hair texture are often variously used in classifying peoples as black in South America and the
Caribbean. In South America and the Caribbean, classification as black is also closely tied to social status and
socioeconomic variables, especially in light of social conceptions of "blanqueamiento" (racial whitening) and related
concepts. The concept of race in Brazil is complex. A Brazilian child was never automatically identified with the
racial type of one or both of his or her parents, nor were there only two categories to choose from. Between an individual
of unmixed West African descent and a very light mulatto individual, more than a dozen racial categories were acknowledged,
based on various combinations of hair color, hair texture, eye color, and skin color. These types grade into each
other like the colors of the spectrum, and no one category stands significantly isolated from the rest. In Brazil,
people are classified by appearance, not heredity. Scholars disagree over the effects of social status on racial
classifications in Brazil. It is generally believed that achieving upward mobility and education results in individuals
being classified as a category of lighter skin. The popular claim is that in Brazil, poor whites are considered black
and wealthy blacks are considered white. Some scholars disagree, arguing that "whitening" of one's social status
may be open to people of mixed race, a large part of the population known as pardo, but a person perceived as preto
(black) will continue to be classified as black regardless of wealth or social status. From the years 1500 to 1850,
an estimated 3.5 million captives were forcibly shipped from West/Central Africa to Brazil; the territory received
the highest number of slaves of any country in the Americas. Scholars estimate that more than half of the Brazilian
population is at least in part descended from these individuals. Brazil has the largest population of Afro-descendants
outside of Africa. In contrast to the US, during the slavery period and after, the Portuguese colonial government
and later Brazilian government did not pass formal anti-miscegenation or segregation laws. As in other Latin countries,
intermarriage was prevalent during the colonial period and continued afterward. In addition, people of mixed race
(pardo) often tended to marry white, and their descendants became accepted as white. As a result, some of the European
descended population also has West African or Amerindian blood. According to the last census of the 20th century,
in which Brazilians could choose from five color/ethnic categories with which they identified, 54% of individuals
identified as white, 6.2% identified as black, and 39.5% identified as pardo (brown) — a broad multi-racial category,
including tri-racial persons. By the 2000 census, demographic changes including the end of slavery, immigration from
Europe and Asia, assimilation of multiracial persons, and other factors had resulted in a population in which 6.2%
identified as black, 40% as pardo, and 55% as white. Essentially, most of the black population was
absorbed into the multi-racial category by intermixing. A 2007 genetic study found that at least 29% of the middle-class,
white Brazilian population had some recent (since 1822 and the end of the colonial period) African ancestry. Because
of the acceptance of miscegenation, Brazil has avoided the binary polarization of society into black and white. In
addition, it abolished slavery without a civil war. The bitter and sometimes violent racial tensions that have divided
the US are notably absent in Brazil. According to the 2010 census, 6.7% of Brazilians said they were black, compared
with 6.2% in 2000, and 43.1% said they were racially mixed, up from 38.5%. In 2010, Elio Ferreira de Araujo, Brazil's
minister for racial equality, attributed the increases to growing pride among his country's black and indigenous
communities. In the US, African Americans, who include multiracial people, earn 75% of what white people earn. In
Brazil, people of color earn less than 50% of what whites earn. Some have posited that the lower socioeconomic
status of people of color suggests that Brazil practices a kind of one-drop rule, or discrimination against people
who are not visibly European in ancestry. The gap in income between blacks and other non-whites is relatively small
compared to the large gap between whites and all people of color. Other social factors, such as illiteracy and education
levels, show the same patterns of disadvantage for people of color. Some commentators observe that the United States'
practice of segregation and white supremacy in the South, and discrimination in many areas outside that region, forced
many African Americans to unite in the civil rights struggle. They suggest that the fluid nature of race in Brazil
has divided individuals of African descent, between those with more or less ancestry. As a result, they have not
united for a stronger civil rights movement.[citation needed] Though Brazilians of at least partial African heritage
make up a large percentage of the population, few blacks have been elected as politicians. The city of Salvador,
Bahia, for instance, is 80% people of color, but voters have not elected a mayor of color. Journalists like to say
that US cities with black majorities, such as Detroit and New Orleans, have not elected white mayors since after
the civil rights movement, when the Voting Rights Act of 1965 protected the franchise for minorities, and blacks
in the South regained the power to vote for the first time since the turn of the 20th century. New Orleans elected
its first black mayor in the 1970s. New Orleans elected a white mayor after the widescale disruption and damage of
Hurricane Katrina in 2005. Critics note that people of color have limited media visibility. The Brazilian media has
been accused of hiding or overlooking the nation's Black, Indigenous, Multiracial and East Asian populations. For
example, the telenovelas or soaps are criticized for featuring actors who resemble northern Europeans rather than
actors of the more prevalent Southern European, light-skinned mulatto, and mestizo appearance. (Pardos
may achieve "white" status if they have attained middle-class or higher social status.) These patterns of discrimination
against non-whites have led some academic and other activists to advocate for use of the Portuguese term negro to
encompass all African-descended people, in order to stimulate a "black" consciousness and identity. This proposal
has been criticized since the term pardo is considered to include a wide range of multiracial people, such as caboclos
(mestizos), assimilated Amerindians and tri-racials, not only people of partial African and European descent. Trying
to identify this entire group as "black" would be a false imposition of a different identity from outside the culture
and deny people their other, equally valid, ancestries and cultures. It seems a one-drop rule in reverse.
The Times is a British daily national newspaper based in London. It began in 1785 under the title The Daily Universal Register
and became The Times on 1 January 1788. The Times and its sister paper The Sunday Times (founded in 1821) are published
by Times Newspapers, since 1981 a subsidiary of News UK, itself wholly owned by the News Corp group headed by Rupert
Murdoch. The Times and The Sunday Times do not share editorial staff, were founded independently and have only had
common ownership since 1967. The Times is the first newspaper to have borne that name, lending it to numerous other
papers around the world, including The Times of India (founded in 1838), The Straits Times (Singapore) (1845), The
New York Times (1851), The Irish Times (1859), Le Temps (France) (1861-1942), the Cape Times (South Africa) (1872),
the Los Angeles Times (1881), The Seattle Times (1891), The Manila Times (1898), The Daily Times (Malawi) (1900),
El Tiempo (Colombia) (1911), The Canberra Times (1926), and The Times (Malta) (1935). In these countries, the newspaper
is often referred to as The London Times or The Times of London. The Times is the originator of the widely used Times
Roman typeface, originally developed by Stanley Morison of The Times in collaboration with the Monotype Corporation
for its legibility in low-tech printing. In November 2006 The Times began printing headlines in a new font, Times
Modern. The Times was printed in broadsheet format for 219 years, but switched to compact size in 2004 in an attempt
to appeal more to younger readers and commuters using public transport. The Sunday Times remains a broadsheet. Though
traditionally a moderate newspaper and sometimes a supporter of the Conservative Party, it supported the Labour Party
in the 2001 and 2005 general elections. In 2004, according to MORI, the voting intentions of its readership were
40% for the Conservative Party, 29% for the Liberal Democrats, and 26% for Labour. The Times had an average daily
circulation of 394,448 in March 2014; in the same period, The Sunday Times had an average circulation of 839,077.
An American edition of The Times has been published since 6 June 2006. It has been heavily used by scholars and researchers
because of its widespread availability in libraries and its detailed index. A complete historical file of the digitized
paper is online from the publisher Gale Cengage. The Times was founded by publisher John Walter on 1 January 1785 as
The Daily Universal Register, with Walter in the role of editor. Walter had lost his previous job by the end of 1784, after
the insurance company where he worked went bankrupt because of claims arising from a Jamaican hurricane. Now
unemployed, Walter decided to start a new business. At around that time, Henry Johnson invented logography, a new
typesetting method said to be faster and more precise (three years later it was shown to be less efficient
than claimed). Walter bought the logography patent and, to exploit it, opened a printing house, where
he would produce a daily advertising sheet. The first publication of the newspaper The Daily Universal Register
in Great Britain was on 1 January 1785. Unhappy that people always omitted the word Universal, Walter changed the
title after 940 editions, on 1 January 1788, to The Times. In 1803, Walter handed ownership and editorship to his son
of the same name. Walter Sr had spent sixteen months in Newgate Prison for libel printed in The Times, but his pioneering
efforts to obtain Continental news, especially from France, helped build the paper's reputation among policy makers
and financiers. The Times used contributions from significant figures in the fields of politics, science, literature,
and the arts to build its reputation. For much of its early life, the profits of The Times were very large and the
competition minimal, so it could pay far better than its rivals for information or writers. Beginning in 1814, the
paper was printed on the new steam-driven cylinder press developed by Friedrich Koenig. In 1815, The Times had a
circulation of 5,000. Thomas Barnes was appointed general editor in 1817. In the same year, the paper's printer, James
Lawson, died and passed the business on to his son, John Joseph Lawson (1802–1852). Under the editorship of Barnes and
his successor in 1841, John Thadeus Delane, the influence of The Times rose to great heights, especially in politics
and in the City of London. Peter Fraser and Edward Sterling were two noted journalists, and gained for The Times
the pompous/satirical nickname 'The Thunderer' (from "We thundered out the other day an article on social and political
reform."). The increased circulation and influence of the paper was due in part to its early adoption of the steam-driven
rotary printing press. Distribution via steam trains to rapidly growing concentrations of urban populations helped
ensure the profitability of the paper and its growing influence. The Times was the first newspaper to send war correspondents
to cover particular conflicts. W. H. Russell, the paper's correspondent with the army in the Crimean War, was immensely
influential with his dispatches back to England. In other events of the nineteenth century, The Times opposed the
repeal of the Corn Laws until the number of demonstrations convinced the editorial board otherwise, and only reluctantly
supported aid to victims of the Irish Potato Famine. It enthusiastically supported the Great Reform Bill of 1832,
which reduced corruption and increased the electorate from 400,000 to 800,000 people (still a small minority
of the population). During the American Civil War, The Times represented the view of the wealthy classes, favouring
the secessionists, but it was not a supporter of slavery. The third John Walter, the founder's grandson, succeeded
his father in 1847. The paper continued as more or less independent, but from the 1850s The Times was beginning to
suffer from the rise in competition from the penny press, notably The Daily Telegraph and The Morning Post. During
the 19th century, it was not infrequent for the Foreign Office to approach The Times and ask for continental intelligence,
which was often superior to that conveyed by official sources.[citation needed] The Times faced financial extinction
in 1890 under Arthur Fraser Walter, but it was rescued by an energetic editor, Charles Frederic Moberly Bell. During
his tenure (1890–1911), The Times became associated with selling the Encyclopædia Britannica using aggressive American
marketing methods introduced by Horace Everett Hooper and his advertising executive, Henry Haxton. Due to legal fights
between the Britannica's two owners, Hooper and Walter Montgomery Jackson, The Times severed its connection in 1908
and was bought by pioneering newspaper magnate, Alfred Harmsworth, later Lord Northcliffe. In editorials published
on 29 and 31 July 1914, Wickham Steed, the Times's Chief Editor, argued that the British Empire should enter World
War I. On 8 May 1920, also under the editorship of Steed, The Times in an editorial endorsed the anti-Semitic fabrication
The Protocols of the Learned Elders of Zion as a genuine document, and called Jews the world's greatest danger. In
the leader entitled "The Jewish Peril, a Disturbing Pamphlet: Call for Inquiry", Steed wrote about The Protocols
of the Elders of Zion: The following year, when Philip Graves, the Constantinople (modern Istanbul) correspondent
of The Times, exposed The Protocols as a forgery, The Times retracted the editorial of the previous year. In 1922,
John Jacob Astor, son of the 1st Viscount Astor, bought The Times from the Northcliffe estate. The paper gained a
measure of notoriety in the 1930s with its advocacy of German appeasement; then-editor Geoffrey Dawson was closely
allied with those in the government who practised appeasement, most notably Neville Chamberlain. Kim Philby, a Soviet
double agent, was a correspondent for the newspaper in Spain during the Spanish Civil War of the late 1930s. Philby
was admired for his courage in obtaining high-quality reporting from the front lines of the bloody conflict. He later
joined MI6 during World War II, was promoted into senior positions after the war ended, then eventually defected
to the Soviet Union in 1963. Between 1941 and 1946, the left-wing British historian E.H. Carr was Assistant Editor.
Carr was well known for the strongly pro-Soviet tone of his editorials. In December 1944, when fighting broke out
in Athens between the Greek Communist ELAS and the British Army, Carr in a Times editorial sided with the Communists,
leading Winston Churchill to condemn him and the leader (editorial) in a speech to the House of Commons. As a result of Carr's
editorial, The Times became popularly known during that stage of World War II as the threepenny Daily Worker (the
price of the Daily Worker being one penny). On 3 May 1966 it resumed printing news on the front page; previously,
the front page had featured small advertisements, usually of interest to the moneyed classes in British society. In 1967,
members of the Astor family sold the paper to Canadian publishing magnate Roy Thomson. His Thomson Corporation brought
it under the same ownership as The Sunday Times to form Times Newspapers Limited. The Thomson Corporation management
were struggling to run the business due to the 1979 Energy Crisis and union demands. Management were left with no
choice but to find a buyer who was in a position to guarantee the survival of both titles, and also one who had the
resources and was committed to funding the introduction of modern printing methods. Several suitors appeared, including
Robert Maxwell, Tiny Rowland and Lord Rothermere; however, only one buyer was in a position to meet the full Thomson
remit, Australian media magnate Rupert Murdoch. Robert Holmes à Court, another Australian magnate had previously
tried to buy The Times in 1980. In 1981, The Times and The Sunday Times were bought from Thomson by Rupert Murdoch's
News International. The acquisition followed three weeks of intensive bargaining with the unions by company negotiators,
John Collier and Bill O'Neill. After 14 years as editor, William Rees-Mogg resigned the post upon completion of the
change of ownership. Murdoch began to make his mark on the paper by appointing Harold Evans as his replacement. One
of his most important changes was the introduction of new technology and efficiency measures. In March–May 1982,
following agreement with print unions, the hot-metal Linotype printing process used to print The Times since the
19th century was phased out and replaced by computer input and photo-composition. This allowed print room staff at
The Times and The Sunday Times to be reduced by half. However, direct input of text by journalists ("single stroke"
input) was still not achieved, and this was to remain an interim measure until the Wapping dispute of 1986, when
The Times moved from New Printing House Square in Gray's Inn Road (near Fleet Street) to new offices in Wapping.
Robert Fisk, seven times British International Journalist of the Year, resigned as foreign correspondent in 1988
over what he saw as "political censorship" of his article on the shooting-down of Iran Air Flight 655 in July 1988.
He wrote in detail about his reasons for resigning from the paper due to meddling with his stories, and the paper's
pro-Israel stance. In June 1990, The Times ceased its policy of using courtesy titles ("Mr", "Mrs", or "Miss" prefixes)
for living persons before full names on first reference, but it continues to use them before surnames on subsequent
references. The more formal style is now confined to the "Court and Social" page, though "Ms" is now acceptable in
that section, as well as before surnames in news sections. In November 2003, News International began producing the
newspaper in both broadsheet and tabloid sizes. On 13 September 2004, the weekday broadsheet was withdrawn from sale
in Northern Ireland. Since 1 November 2004, the paper has been printed solely in tabloid format. On 6 June 2005,
The Times redesigned its Letters page, dropping the practice of printing correspondents' full postal addresses. Published
letters were long regarded as one of the paper's key constituents. Author/solicitor David Green of Castle Morris
Pembrokeshire has had more letters published on the main letters page than any other known contributor – 158 by 31
January 2008. According to its leading article, "From Our Own Correspondents", removal of full postal addresses was
in order to fit more letters onto the page. In a 2007 meeting with the House of Lords Select Committee on Communications,
which was investigating media ownership and the news, Murdoch stated that the law and the independent board prevented
him from exercising editorial control. In May 2008 printing of The Times switched from Wapping to new plants at Broxbourne
on the outskirts of London, and Merseyside and Glasgow, enabling the paper to be produced with full colour on every
page for the first time. On 26 July 2012, to coincide with the official start of the London 2012 Olympics and the
issuing of a series of souvenir front covers, The Times added the suffix "of London" to its masthead. The Times features
news for the first half of the paper, the Opinion/Comment section begins after the first news section with world
news normally following this. The business pages begin on the centre spread, and are followed by The Register, containing
obituaries, Court & Social section, and related material. The sport section is at the end of the main paper. The
Times's current prices are £1.20 for the daily edition and £1.50 for the Saturday edition. The Times's main daily
supplement is times2, featuring various lifestyle columns. It was discontinued on 1 March 2010 but reintroduced
on 11 October 2010 after negative feedback. Its regular features include a puzzles section called Mind Games. Its
previous incarnation began on 5 September 2005, before which it was called T2 and previously Times 2. Regular features
include columns by a different columnist each weekday. There was a column by Marcus du Sautoy each Wednesday, for
example. The back pages are devoted to puzzles and contain sudoku, "Killer Sudoku", "KenKen", word polygon puzzles,
and a crossword simpler and more concise than the main "Times Crossword". The Game is included in the newspaper on
Mondays, and details all the weekend's football activity (Premier League and Football League Championship, League
One and League Two.) The Scottish edition of The Game also includes results and analysis from Scottish Premier League
games. The Saturday edition of The Times contains a variety of supplements. These supplements were relaunched in
January 2009 as: Sport, Weekend (including travel and lifestyle features), Saturday Review (arts, books, and ideas),
The Times Magazine (columns on various topics), and Playlist (an entertainment listings guide). The Times Magazine
features columns touching on various subjects such as celebrities, fashion and beauty, food and drink, homes and
gardens or simply writers' anecdotes. Notable contributors include Giles Coren, Food and Drink Writer of the Year
in 2005 and Nadiya Hussain, winner of BBC's The Great British Bake Off. The Times and The Sunday Times have had an
online presence since March 1999, originally at the-times.co.uk and sunday-times.co.uk, and later at timesonline.co.uk.
There are now two websites: thetimes.co.uk is aimed at daily readers, while thesundaytimes.co.uk provides
weekly magazine-like content. There are also iPad and Android editions of both newspapers. Since July 2010, News
UK has required readers who do not subscribe to the print edition to pay £2 per week to read The Times and The Sunday
Times online. The Times Digital Archive (1785–2008) is freely accessible via Gale databases to readers affiliated
with subscribing academic, public, and school libraries. Visits to the websites have decreased by 87% since the paywall
was introduced, from 21 million unique users per month to 2.7 million. In April 2009, the timesonline site had a
readership of 750,000 per day. As of October 2011, there were around 111,000 subscribers to The Times' digital
products. At the time of Harold Evans' appointment as editor in 1981, The Times had an average daily sale of 282,000
copies in comparison to the 1.4 million daily sales of its traditional rival The Daily Telegraph. By November 2005
The Times sold an average of 691,283 copies per day, the second-highest of any British "quality" newspaper (after
The Daily Telegraph, which had a circulation of 903,405 copies in the period), and the highest in terms of full-rate
sales. By March 2014, average daily circulation of The Times had fallen to 394,448 copies, compared to The Daily
Telegraph's 523,048, with the two retaining respectively the second-highest and highest circulations among British
"quality" newspapers. In contrast The Sun, the highest-selling "tabloid" daily newspaper in the United Kingdom, sold
an average of 2,069,809 copies in March 2014, and the Daily Mail, the highest-selling "middle market" British daily
newspaper, sold an average of 1,708,006 copies in the period. The Sunday Times has a significantly higher circulation
than The Times, and sometimes outsells The Sunday Telegraph. As of January 2013, The Times has a circulation of 399,339
and The Sunday Times of 885,612. In a 2009 national readership survey The Times was found to have the highest number
of ABC1 25–44 readers and the largest numbers of readers in London of any of the "quality" papers. The Times commissioned
the serif typeface Times New Roman, created by Victor Lardent at the English branch of Monotype, in 1931. It was
commissioned after Stanley Morison had written an article criticizing The Times for being badly printed and typographically
antiquated. The font was supervised by Morison and drawn by Victor Lardent, an artist from the advertising department
of The Times. Morison used an older font named Plantin as the basis for his design, but made revisions for legibility
and economy of space. Times New Roman made its debut in the issue of 3 October 1932. After one year, the design was
released for commercial sale. The Times stayed with Times New Roman for 40 years, but new production techniques and
the format change from broadsheet to tabloid in 2004 have caused the newspaper to switch font five times since 1972.
However, all the new fonts have been variants of the original Times New Roman font. Historically, the paper was not overtly
pro-Tory or Whig, but has been a long-time bastion of the English Establishment and empire. The Times adopted a stance
described as "peculiarly detached" at the 1945 general election; although it was increasingly critical of the Conservative
Party's campaign, it did not advocate a vote for any one party. However, the newspaper reverted to the Tories for
the next election five years later. It supported the Conservatives for the subsequent three elections, followed by
support for both the Conservatives and the Liberal Party for the next five elections, expressly supporting a Con-Lib
coalition in 1974. The paper then backed the Conservatives solidly until 1997, when it declined to make any party
endorsement but supported individual (primarily Eurosceptic) candidates. For the 2001 general election The Times
declared its support for Tony Blair's Labour government, which was re-elected by a landslide. It supported Labour
again in 2005, when Labour achieved a third successive win, though with a reduced majority. For the 2010 general
election, however, the newspaper declared its support for the Tories once again; the election ended in the Tories
taking the most votes and seats but having to form a coalition with the Liberal Democrats in order to form a government
as they had failed to gain an overall majority. This makes The Times one of the most varied newspapers in terms of political support
in British history. Some columnists in The Times are connected to the Conservative Party such as Daniel Finkelstein,
Tim Montgomerie, Matthew Parris and Matt Ridley, but there are also columnists connected to the Labour Party such
as David Aaronovitch, Phil Collins, Oliver Kamm and Jenni Russell. The Times occasionally makes endorsements for
foreign elections. In November 2012, it endorsed a second term for Barack Obama although it also expressed reservations
about his foreign policy. The Times, along with the British Film Institute, sponsors the "The Times" bfi London Film
Festival. It also sponsors the Cheltenham Literature Festival and the Asia House Festival of Asian Literature at
Asia House, London. The Times Literary Supplement (TLS) first appeared in 1902 as a supplement to The Times, becoming
a separately paid-for weekly literature and society magazine in 1914. The Times and the TLS have continued to be
co-owned, and as of 2012 the TLS is also published by News International and cooperates closely with The Times, with
its online version hosted on The Times website, and its editorial offices based in Times House, Pennington Street,
London. Times Atlases have been produced since 1895. They are currently produced by the Collins Bartholomew imprint
of HarperCollins Publishers. The flagship product is The Times Comprehensive Atlas of the World. The Sunday Times Travel Magazine, first published in 2003, is a 164-page monthly sold separately from the newspaper; it is Britain's best-selling travel magazine and includes news, features and insider guides. In the
dystopian future world of George Orwell's Nineteen Eighty-Four, The Times has been transformed into the organ of
the totalitarian ruling party, its editorials—of which several are quoted in the book—reflecting Big Brother's pronouncements.
Rex Stout's fictional detective Nero Wolfe is described as fond of solving the London Times' crossword puzzle at
his New York home, in preference to those of American papers. In the James Bond series by Ian Fleming, James Bond reads The Times. As Fleming writes in From Russia, with Love: "The Times was the only paper that Bond ever
read." In The Wombles, Uncle Bulgaria read The Times and asked the other Wombles to bring him any copies that they found amongst the litter. The newspaper played a central role in the episode "Very Behind the Times" (Series 2,
Episode 12).
New Delhi (i/ˌnjuː ˈdɛli/) is a municipality and district in Delhi which serves as the capital and seat of government of India. It also serves as the seat of the Government of Delhi. The foundation stone of the city was laid
by George V, Emperor of India, during the Delhi Durbar of 1911. It was designed by the British architects Sir Edwin Lutyens
and Sir Herbert Baker. The new capital was inaugurated on 13 February 1931, by India's Viceroy Lord Irwin. Although
colloquially Delhi and New Delhi as names are used interchangeably to refer to the jurisdiction of NCT of Delhi,
these are two distinct entities, and the latter is a small part of the former. New Delhi has been selected as one
of the hundred Indian cities to be developed as a smart city under PM Narendra Modi's flagship Smart Cities Mission.
Calcutta (now Kolkata) was the capital of India during the British Raj until December 1911. However, Delhi had served
as the political and financial centre of several empires of ancient India and the Delhi Sultanate, most notably of
the Mughal Empire from 1649 to 1857. During the early 1900s, a proposal was made to the British administration to
shift the capital of the British Indian Empire (as it was officially called) from Calcutta to Delhi. Unlike Calcutta,
which was located on the eastern coast of India, Delhi was at the centre of northern India and the Government of
British India felt that it would be logistically easier to administer India from the latter rather than the former.
On 12 December 1911, during the Delhi Durbar, George V, then Emperor of India, along with Queen Mary, his Consort,
made the announcement that the capital of the Raj was to be shifted from Calcutta to Delhi, while laying the foundation
stone for the Viceroy's residence in the Coronation Park, Kingsway Camp. The foundation stone of New Delhi was laid
by King George V and Queen Mary at the site of Delhi Durbar of 1911 at Kingsway Camp on 15 December 1911, during
their imperial visit. Large parts of New Delhi were planned by Edwin Lutyens (Sir Edwin from 1918), who first visited
Delhi in 1912, and Herbert Baker (Sir Herbert from 1926), both leading 20th-century British architects. The contract
was given to Sobha Singh (later Sir Sobha Singh). Construction began in earnest after World War I and was completed by
1931. The city that was later dubbed "Lutyens' Delhi" was inaugurated in ceremonies beginning on 10 February 1931
by Lord Irwin, the Viceroy. Lutyens designed the central administrative area of the city as a testament to Britain's
imperial aspirations. Lutyens soon began considering alternative sites. The Delhi Town Planning Committee, set up to plan the new imperial capital, with George Swinton as chairman and John A. Brodie and Lutyens as members, submitted reports for both North and South sites. However, the North site was rejected by the Viceroy when the cost of acquiring the necessary
properties was found to be too high. The central axis of New Delhi, which today faces east at India Gate, was previously
meant to be a north-south axis linking the Viceroy's House at one end with Paharganj at the other. Eventually, owing to space constraints and the presence of a large number of heritage sites on the North side, the committee settled on the South site.
A site atop the Raisina Hill, formerly Raisina Village, a Meo village, was chosen for the Rashtrapati Bhawan, then
known as the Viceroy's House. The reason for this choice was that the hill lay directly opposite the Dinapanah citadel,
which was also considered the site of Indraprastha, the ancient region of Delhi. Subsequently, the foundation stone
was shifted from the site of Delhi Durbar of 1911–1912, where the Coronation Pillar stood, and embedded in the walls
of the forecourt of the Secretariat. The Rajpath, also known as King's Way, stretched from the India Gate to the
Rashtrapati Bhawan. The Secretariat building, whose two blocks flank the Rashtrapati Bhawan and house ministries
of the Government of India, and the Parliament House, both designed by Herbert Baker, are located at the Sansad Marg
and run parallel to the Rajpath. In the south, land up to Safdarjung's Tomb was acquired in order to create what
is today known as Lutyens' Bungalow Zone. Before construction could begin on the rocky ridge of Raisina Hill, a circular
railway line around the Council House (now Parliament House), called the Imperial Delhi Railway, was built to transport
construction material and workers for the next twenty years. The last stumbling block was the Agra-Delhi railway
line that cut right through the site earmarked for the hexagonal All-India War Memorial (India Gate) and Kingsway
(Rajpath), which was a problem because the Old Delhi Railway Station served the entire city at that time. The line
was shifted to run along the Yamuna river, and it began operating in 1924. The New Delhi Railway Station opened in
1926 with a single platform at Ajmeri Gate near Paharganj and was completed in time for the city's inauguration in
1931. As construction of the Viceroy's House (the present Rashtrapati Bhavan), Central Secretariat, Parliament House,
and All-India War Memorial (India Gate) was winding down, the building of a shopping district and a new plaza, Connaught
Place, began in 1929, and was completed by 1933. Named after Prince Arthur, 1st Duke of Connaught (1850–1942), it
was designed by Robert Tor Russell, chief architect to the Public Works Department (PWD). After the capital of India
moved to Delhi, a temporary secretariat building was constructed in a few months in 1912 in North Delhi. Most of
the government offices of the new capital moved here from the 'Old secretariat' in Old Delhi (the building now houses
the Delhi Legislative Assembly), a decade before the new capital was inaugurated in 1931. Many employees were brought
into the new capital from distant parts of India, including the Bengal Presidency and Madras Presidency. Subsequently, housing for them was developed around the Gole Market area in the 1920s. Lodhi Colony, near the historic Lodhi Gardens, was built in the 1940s to house government employees, with bungalows for senior officials in the nearby Lodhi Estate area; it was the last residential area built by the British Raj. After India gained independence in 1947, limited autonomy was conferred on New Delhi, which was administered by a Chief Commissioner appointed by the Government of India. In
1956, Delhi was converted into a union territory and eventually the Chief Commissioner was replaced by a Lieutenant
Governor. The Constitution (Sixty-ninth Amendment) Act, 1991 declared the Union Territory of Delhi to be formally
known as National Capital Territory of Delhi. A system was introduced under which the elected Government was given
wide powers, excluding law and order which remained with the Central Government. The actual enforcement of the legislation
came in 1993. The first major extension of New Delhi outside of Lutyens' Delhi came in the 1950s when the Central
Public Works Department (CPWD) developed a large area of land southwest of Lutyens' Delhi to create the diplomatic
enclave of Chanakyapuri, where land was allotted for embassies, chanceries, high commissions and residences of ambassadors,
around a wide central vista, Shanti Path. With a total area of 42.7 km2 (16.5 sq mi), New Delhi forms a small part
of the Delhi metropolitan area. Because the city is located on the Indo-Gangetic Plain, there is little difference
in elevation across the city. New Delhi and surrounding areas were once a part of the Aravalli Range; all that is
left of those mountains is the Delhi Ridge, which is also called the Lungs of Delhi. While New Delhi lies on the
floodplains of the Yamuna River, it is essentially a landlocked city. East of the river is the urban area of Shahdara.
New Delhi falls under the seismic zone-IV, making it vulnerable to earthquakes. New Delhi lies on several fault lines
and thus experiences frequent earthquakes, most of them of mild intensity. There has, however, been a spike in the
number of earthquakes in the last six years, the most notable being a 5.4-magnitude earthquake in 2015 with its epicentre
in Nepal, a 4.7-magnitude earthquake on 25 November 2007, a 4.2-magnitude earthquake on 7 September 2011, a 5.2-magnitude
earthquake on 5 March 2012, and a swarm of twelve earthquakes, including four of magnitudes 2.5, 2.8, 3.1, and 3.3,
on 12 November 2013. The climate of New Delhi is a monsoon-influenced humid subtropical climate (Köppen Cwa) with
high variation between summer and winter in terms of both temperature and rainfall. The temperature varies from 46
°C (115 °F) in summers to around 0 °C (32 °F) in winters. The area's version of a humid subtropical climate is noticeably
different from many other cities with this climate classification in that it features long and very hot summers,
relatively dry and mild winters, a monsoonal period, and dust storms. Summers are long, extending from early April
to October, with the monsoon season occurring in the middle of the summer. Winter starts in November and peaks in
January. The annual mean temperature is around 25 °C (77 °F); monthly daily mean temperatures range from approximately
14 to 34 °C (57 to 93 °F). New Delhi's highest temperature ever recorded is 49.1 °C (120.4 °F) while the lowest temperature
ever recorded is −3.2 °C (26.2 °F). Those for Delhi metropolis stand at 49.9 °C (121.8 °F) and −3.2 °C (26.2 °F)
respectively. The average annual rainfall is 784 millimetres (30.9 in), most of which is during the monsoons in July
and August. In Mercer's 2015 annual quality-of-living survey, New Delhi ranked 154 out of 230 cities due to poor air quality and pollution. The World Health Organization ranked New Delhi as the world's most polluted city in 2014 among the roughly 1,600 cities it tracked around the world. In an attempt to curb air pollution in New Delhi, which gets worse during the winter, a temporary alternate-day travel scheme for cars, based on odd- and even-numbered license plates, was announced by the Delhi government in December 2015. In addition, trucks will
be allowed to enter India's capital only after 11 p.m., two hours later than the existing restriction. The driving
restriction scheme is planned to be implemented as a trial from January 1, 2016 for an initial period of 15 days.
The restriction will be in force between 8 a.m. and 8 p.m., and traffic will not be restricted on Sundays. Public
transportation service will be increased during the restriction period. On December 16, 2015, the Supreme Court of
India mandated several restrictions on Delhi's transportation system to curb pollution. Among the measures, the court
ordered to stop registrations of diesel cars and sport utility vehicles with an engine capacity of 2,000 cc and over
until March 31, 2016. The court also ordered all taxis in the Delhi region to switch to compressed natural gas by
March 1, 2016. Transportation vehicles more than 10 years old were banned from entering the capital. As the national capital of India, New Delhi is jointly administered by the Central Government of India and the local Government of Delhi; it is also the capital of the National Capital Territory (NCT) of Delhi. As of 2015, the government
structure of the New Delhi Municipal Council includes a chairperson, three members of New Delhi's Legislative Assembly,
two members nominated by the Chief Minister of the NCT of Delhi and five members nominated by the central government.
The head of state of Delhi is the Lieutenant Governor of the Union Territory of Delhi, appointed by the President
of India on the advice of the Central government and the post is largely ceremonial, as the Chief Minister of the
Union Territory of Delhi is the head of government and is vested with most of the executive powers. According to
the Indian constitution, if a law passed by Delhi's legislative assembly is repugnant to any law passed by the Parliament
of India, then the law enacted by the parliament will prevail over the law enacted by the assembly. New Delhi is
governed through a municipal government, known as the New Delhi Municipal Council (NDMC). Other urban areas of the
metropolis of Delhi are administered by the Municipal Corporation of Delhi (MCD). However, the entire metropolis
of Delhi is commonly known as New Delhi in contrast to Old Delhi. Much of New Delhi, planned by the leading 20th-century
British architect Edwin Lutyens, was laid out to be the central administrative area of the city as a testament to
Britain's imperial pretensions. New Delhi is structured around two central promenades called the Rajpath and the
Janpath. The Rajpath, or King's Way, stretches from the Rashtrapati Bhavan to the India Gate. The Janpath (Hindi:
"Path of the People"), formerly Queen's Way, begins at Connaught Circus and cuts the Rajpath at right angles. Nineteen foreign embassies are located on the nearby Shantipath (Hindi: "Path of Peace"), making it the largest diplomatic
enclave in India. At the heart of the city is the magnificent Rashtrapati Bhavan (formerly known as Viceroy's House)
which sits atop Raisina Hill. The Secretariat, which houses ministries of the Government of India, flanks out of
the Rashtrapati Bhavan. The Parliament House, designed by Herbert Baker, is located at the Sansad Marg, which runs
parallel to the Rajpath. Connaught Place is a large, circular commercial area in New Delhi, modelled after the Royal
Crescent in England. Twelve separate roads lead out of the outer ring of Connaught Place, one of them being the Janpath.
Indira Gandhi International Airport, situated to the southwest of Delhi, is the main gateway for the city's domestic
and international civilian air traffic. In 2012-13, the airport was used by more than 35 million passengers, making
it one of the busiest airports in South Asia. Terminal 3, which cost ₹96.8 billion (US$1.4 billion) to construct
between 2007 and 2010, handles an additional 37 million passengers annually. The Delhi Flying Club, established in
1928 with two de Havilland Moth aircraft named Delhi and Roshanara, was based at Safdarjung Airport which started
operations in 1929, when it was Delhi's only airport and the second in India. The airport functioned until 2001; in January 2002 the government closed it to flying activities because of security concerns following the September 2001 attacks in New York. Since then, the club has only carried out aircraft maintenance courses, and the airfield is used for helicopter rides to Indira Gandhi International Airport for VIPs, including the president and the prime minister.
In 2010, Indira Gandhi International Airport (IGIA) was conferred the fourth best airport award in the world in the
15–25 million category, and Best Improved Airport in the Asia-Pacific Region by Airports Council International. The
airport was rated the best airport in the world in the 25–40 million passengers category in 2015 by Airports Council International. Delhi Airport also received two awards, Best Airport in Central Asia/India and Best Airport Staff in Central Asia/India, at the Skytrax World Airport Awards 2015.
New Delhi has one of India's largest bus transport systems. Buses are operated by the state-owned Delhi Transport
Corporation (DTC), which owns the largest fleet of compressed natural gas (CNG)-fueled buses in the world. Personal vehicles, especially cars, also form a major share of the vehicles plying on New Delhi's roads. New Delhi has the highest number of registered cars of any metropolitan city in India. Taxis and auto rickshaws also ply the roads in large numbers, and New Delhi has one of the highest road densities in India. New Delhi is a major junction in
the Indian railway network and is the headquarters of the Northern Railway. The five main railway stations are New
Delhi railway station, Old Delhi, Nizamuddin Railway Station, Anand Vihar Railway Terminal and Sarai Rohilla. The
Delhi Metro, a mass rapid transit system built and operated by Delhi Metro Rail Corporation (DMRC), serves many parts
of Delhi and the neighbouring cities Faridabad, Gurgaon, Noida and Ghaziabad. As of August 2011, the metro consists
of six operational lines with a total length of 189 km (117 mi) and 146 stations, and several other lines are under
construction. It carries millions of passengers every day. In addition to the Delhi Metro, a suburban railway, the Delhi Suburban Railway, exists. The Delhi Metro is a rapid transit system serving New Delhi, Delhi, Gurgaon, Faridabad, Noida, and Ghaziabad in the National Capital Region of India. Delhi Metro is the world's 12th largest metro system in terms of length. Delhi Metro was India's first modern public transportation system and revolutionised travel
by providing a fast, reliable, safe, and comfortable means of transport. The network consists of six lines with a
total length of 189.63 kilometres (117.83 miles) with 142 stations, of which 35 are underground, five are at-grade,
and the remainder are elevated. All stations have escalators, elevators, and tactile tiles to guide the visually
impaired from station entrances to trains. It has a combination of elevated, at-grade, and underground lines, and
uses both broad gauge and standard gauge rolling stock. Four types of rolling stock are used: Mitsubishi-ROTEM Broad
gauge, Bombardier MOVIA, Mitsubishi-ROTEM Standard gauge, and CAF Beasain Standard gauge. Delhi Metro is being built
and operated by the Delhi Metro Rail Corporation Limited (DMRC), a state-owned company with equal equity participation
from the Government of India and the Government of the National Capital Territory of Delhi. However, the organisation is under the administrative control of the Ministry of Urban Development, Government of India. Besides the construction and operation
of Delhi metro, DMRC is also involved in the planning and implementation of metro rail, monorail and high-speed rail
projects in India and providing consultancy services to other metro projects in the country as well as abroad. The
Delhi Metro project was spearheaded by Padma Vibhushan E. Sreedharan, the Managing Director of DMRC and popularly
known as the "Metro Man" of India. He famously resigned from DMRC, taking moral responsibility for a metro bridge
collapse which took five lives. Sreedharan was awarded the prestigious Legion of Honour by the French Government
for his contribution to Delhi Metro. New Delhi has a population of 249,998. Hindi and Punjabi are the most widely
spoken languages in New Delhi and serve as the city's lingua franca. English is primarily used as the formal language by business and government institutions. New Delhi has a literacy rate of 89.38% according to the 2011 census, the highest in Delhi. Hinduism is the religion of 79.8% of New Delhi's population. There are also communities of Muslims
(12.9%), Sikhs (5.4%), Jains (1.1%) and Christians (0.9%) in Delhi. Other religious groups (2.5%) include Parsis,
Buddhists and Jews. New Delhi is a cosmopolitan city due to the multi-ethnic and multi-cultural presence of the vast
Indian bureaucracy and political system. The city's capital status has amplified the importance of national events
and holidays. National events such as Republic Day, Independence Day and Gandhi Jayanti (Gandhi's birthday) are celebrated
with great enthusiasm in New Delhi and the rest of India. On India's Independence Day (15 August) the Prime Minister
of India addresses the nation from the Red Fort. Most Delhiites celebrate the day by flying kites, which are considered
a symbol of freedom. The Republic Day Parade is a large cultural and military parade showcasing India's cultural
diversity and military might. Religious festivals include Diwali (the festival of light), Maha Shivaratri, Teej,
Guru Nanak Jayanti, Baisakhi, Durga Puja, Holi, Lohri, Eid ul-Fitr, Eid ul-Adha, Christmas, Chhath Puja and Mahavir
Jayanti. The Qutub Festival is a cultural event during which performances of musicians and dancers from all over
India are showcased at night, with the Qutub Minar as the chosen backdrop of the event. Other events such as Kite
Flying Festival, International Mango Festival and Vasant Panchami (the Spring Festival) are held every year in Delhi.
In 2007, the Japanese Buddhist organisation Nipponzan Myohoji decided to build a Peace Pagoda in the city containing
Buddha relics. It was inaugurated by the current Dalai Lama. The New Delhi town plan, like its architecture, was
chosen with a single chief consideration: to be a symbol of British power and supremacy. All other decisions were
subordinate to this, and it was this framework that dictated the choice and application of symbology and influences
from both Hindu and Islamic architecture. It took about 20 years to build the city from 1911. Many elements of New
Delhi architecture borrow from indigenous sources; however, they fit into a British Classical/Palladian tradition.
The fact that there were any indigenous features in the design was due to the persistence and urging of both the
Viceroy Lord Hardinge and historians like E.B. Havell. New Delhi is home to several historic sites and museums. The
National Museum began with an exhibition of Indian art and artefacts at the Royal Academy in London in the winter of 1947–48, which was afterwards shown at the Rashtrapati Bhawan in 1949 and went on to form the nucleus of a permanent National Museum. On 15 August 1949, the National Museum was formally inaugurated; it currently holds 200,000 works of art, both of Indian and foreign origin, covering over 5,000 years. The India Gate, built in 1931, was inspired by
the Arc de Triomphe in Paris. It is the national monument of India commemorating the 90,000 soldiers of the Indian
Army who lost their lives while fighting for the British Raj in World War I and the Third Anglo-Afghan War. The Rajpath, modelled on the Champs-Élysées in Paris, is the ceremonial boulevard of the Republic of India, located
in New Delhi. The annual Republic Day parade takes place here on 26 January. Gandhi Smriti in New Delhi is the location
where Mahatma Gandhi spent the last 144 days of his life and was assassinated on 30 January 1948. Raj Ghat is the place where Mahatma Gandhi was cremated on 31 January 1948 after his assassination; his ashes were buried there, making it his final resting place beside the Yamuna River. The memorial, a large square platform of black marble, was designed by the architect Vanu Bhuta. Jantar Mantar, located in Connaught Place, was built
by Maharaja Jai Singh II of Jaipur. It consists of 13 architectural astronomy instruments. The primary purpose of
the observatory was to compile astronomical tables, and to predict the times and movements of the sun, moon and planets.
New Delhi is home to Indira Gandhi Memorial Museum, National Gallery of Modern Art, National Museum of Natural History,
National Rail Museum, National Handicrafts and Handlooms Museum, National Philatelic Museum, Nehru Planetarium, Shankar's
International Dolls Museum, and Supreme Court of India Museum. New Delhi is particularly renowned for its beautifully
landscaped gardens that can look quite stunning in spring. The largest of these include Buddha Jayanti Park and the
historic Lodi Gardens. In addition, there are the gardens in the Presidential Estate, the gardens along the Rajpath
and India Gate, the gardens along Shanti Path, the Rose Garden, Nehru Park and the Railway Garden in Chanakya Puri.
Also of note is the garden adjacent to the Jangpura Metro Station near the Defence Colony Flyover, as are the roundabout
and neighbourhood gardens throughout the city. The city hosted the 2010 Commonwealth Games and annually hosts Delhi
Half Marathon foot-race. The city has previously hosted the 1951 Asian Games and the 1982 Asian Games. New Delhi
was interested in bidding for the 2019 Asian Games but was turned down by the government on 2 August 2010 amid allegations
of corruption in the 2010 Commonwealth Games. Major sporting venues in New Delhi include the Jawaharlal Nehru Stadium,
Ambedkar Stadium, Indira Gandhi Indoor Stadium, Feroz Shah Kotla Ground, R.K. Khanna Tennis Complex, Dhyan Chand
National Stadium and Siri Fort Sports Complex. New Delhi is the largest commercial city in northern India. It has
an estimated net State Domestic Product (FY 2010) of ₹1595 billion (US$23 billion) in nominal terms and ~₹6800 billion
(US$100 billion) in PPP terms. As of 2013, the per capita income of Delhi was Rs. 230,000, the second highest in India
after Goa. GSDP in Delhi at the current prices for 2012-13 is estimated at Rs 3.88 trillion (short scale) against
Rs 3.11 trillion (short scale) in 2011-12. Connaught Place, one of North India's largest commercial and financial
centres, is located in the northern part of New Delhi. Adjoining areas such as Barakhamba Road and ITO are also major commercial centres. The government and quasi-government sector has been the primary employer in New Delhi. The city's service
sector has expanded due in part to the large skilled English-speaking workforce that has attracted many multinational
companies. Key service industries include information technology, telecommunications, hotels, banking, media and
tourism. The 2011 World Wealth Report ranks economic activity in New Delhi at 39, but overall the capital is ranked
at 37, above cities like Jakarta and Johannesburg. New Delhi shares with Beijing the top position as the most targeted emerging-markets retail destination among Asia-Pacific markets. The Government of the National Capital Territory of Delhi
does not release any economic figures specifically for New Delhi but publishes an official economic report on the
whole of Delhi annually. According to the Economic Survey of Delhi, the metropolis has a net State Domestic Product
(SDP) of Rs. 83,085 crores (for the year 2004–05) and a per capita income of Rs. 53,976 ($1,200). In the year 2008–09, New Delhi had a per capita income of Rs. 1,16,886 ($2,595); it grew by 16.2% to reach Rs. 1,35,814 ($3,018) in the 2009–10 fiscal year. New Delhi's per capita GDP (at PPP) was $6,860 during the 2009–10 fiscal year, making it one of the richest cities
in India. The tertiary sector contributes 78.4% of Delhi's gross SDP followed by secondary and primary sectors with
20.2% and 1.4% contribution respectively. The gross state domestic product (GSDP) of Delhi at current prices for
the year 2011-12 has been estimated at Rs 3.13 lakh crore, which is an increase of 18.7 per cent over the previous
fiscal. The city is home to numerous international organisations. The Asian and Pacific Centre for Transfer of Technology
of the UNESCAP servicing the Asia-Pacific region is headquartered in New Delhi. New Delhi is home to most UN regional
offices in India namely the UNDP, UNODC, UNESCO, UNICEF, WFP, UNV, UNCTAD, FAO, UNFPA, WHO, World Bank, ILO, IMF,
UNIFEM, IFC and UNAIDS.
Bird migration is the regular seasonal movement, often north and south along a flyway, between breeding and wintering grounds.
Many species of bird migrate. Migration carries high costs in predation and mortality, including from hunting by
humans, and is driven primarily by availability of food. It occurs mainly in the northern hemisphere, where birds
are funnelled on to specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea. Historically,
migration has been recorded as much as 3,000 years ago by Ancient Greek authors including Homer and Aristotle, and
in the Book of Job, for species such as storks, turtle doves, and swallows. More recently, Johannes Leche began recording
dates of arrivals of spring migrants in Finland in 1749, and scientific studies have used techniques including bird
ringing and satellite tracking. Threats to migratory birds have grown with habitat destruction especially of stopover
and wintering sites, as well as structures such as power lines and wind farms. The Arctic tern holds the long-distance
migration record for birds, travelling between Arctic breeding grounds and the Antarctic each year. Some species
of tubenoses (Procellariiformes) such as albatrosses circle the earth, flying over the southern oceans, while others
such as Manx shearwaters migrate 14,000 km (8,700 mi) between their northern breeding grounds and the southern ocean.
Shorter migrations are common, including altitudinal migrations on mountains such as the Andes and Himalayas. The
timing of migration seems to be controlled primarily by changes in day length. Migrating birds navigate using celestial
cues from the sun and stars, the earth's magnetic field, and probably also mental maps. Records of bird migration
were made as much as 3,000 years ago by the Ancient Greek writers Hesiod, Homer, Herodotus and Aristotle. The Bible
also notes migrations, as in the Book of Job (39:26), where the inquiry is made: "Is it by your insight that the
hawk hovers, spreads its wings southward?" The author of Jeremiah (8:7) wrote: "Even the stork in the heavens knows
its seasons, and the turtle dove, the swift and the crane keep the time of their arrival." Aristotle noted that cranes
traveled from the steppes of Scythia to marshes at the headwaters of the Nile. Pliny the Elder, in his Historia Naturalis,
repeats Aristotle's observations. Aristotle however suggested that swallows and other birds hibernated. This belief
persisted as late as 1878, when Elliott Coues listed the titles of no fewer than 182 papers dealing with the hibernation
of swallows. Even the "highly observant" Gilbert White, in his posthumously published 1789 The Natural History of
Selborne, quoted a man's story about swallows being found in a chalk cliff collapse "while he was a schoolboy at
Brighthelmstone", though the man denied being an eyewitness. However, he also writes that "as to swallows being found
in a torpid state during the winter in the Isle of Wight or any part of this country, I never heard any such account
worth attending to", and that if early swallows "happen to find frost and snow they immediately withdraw for a time—a
circumstance this much more in favour of hiding than migration", since he doubts they would "return for a week or
two to warmer latitudes". It was not until the end of the eighteenth century that migration as an explanation for
the winter disappearance of birds from northern climes was accepted. Thomas Bewick's A History of British Birds (Volume
1, 1797) mentions a report from "a very intelligent master of a vessel" who, "between the islands of Minorca and
Majorca, saw great numbers of Swallows flying northward", and comments on the situation in Britain. Bewick then describes an experiment that succeeded in keeping swallows alive in Britain for several years, where they remained warm and dry through the winters.

Migration is the regular seasonal movement, often north and south,
undertaken by many species of birds. Bird movements include those made in response to changes in food availability,
habitat, or weather. Sometimes, journeys are not termed "true migration" because they are irregular (nomadism, invasions,
irruptions) or in only one direction (dispersal, movement of young away from natal area). Migration is marked by
its annual seasonality. Non-migratory birds are said to be resident or sedentary. Approximately 1,800 of the world's
10,000 bird species are long-distance migrants. Many bird populations migrate long distances along a flyway. The
most common pattern involves flying north in the spring to breed in the temperate or Arctic summer and returning
in the autumn to wintering grounds in warmer regions to the south. Of course, in the southern hemisphere the directions
are reversed, but there is less land area in the far south to support long-distance migration. The primary motivation
for migration appears to be food; for example, some hummingbirds choose not to migrate if fed through the winter.
Also, the longer days of the northern summer provide extended time for breeding birds to feed their young. This helps
diurnal birds to produce larger clutches than related non-migratory species that remain in the tropics. As the days
shorten in autumn, the birds return to warmer regions where the available food supply varies little with the season.
These advantages offset the high stress, physical exertion costs, and other risks of the migration. Predation can
be heightened during migration: Eleonora's falcon Falco eleonorae, which breeds on Mediterranean islands, has a very
late breeding season, coordinated with the autumn passage of southbound passerine migrants, which it feeds to its
young. A similar strategy is adopted by the greater noctule bat, which preys on nocturnal passerine migrants. The
higher concentrations of migrating birds at stopover sites make them prone to parasites and pathogens, which require
a heightened immune response. Within a species not all populations may be migratory; this is known as "partial migration".
Partial migration is very common in the southern continents; in Australia, 44% of non-passerine birds and 32% of
passerine species are partially migratory. In some species, the population at higher latitudes tends to be migratory
and will often winter at lower latitude. The migrating birds bypass the latitudes where other populations may be
sedentary, where suitable wintering habitats may already be occupied. This is an example of leap-frog migration.
Many fully migratory species show leap-frog migration (birds that nest at higher latitudes spend the winter at lower
latitudes), and many show the alternative, chain migration, where populations 'slide' more evenly north and south
without reversing order. Within a population, it is common for different ages and/or sexes to have different patterns
of timing and distance. Female chaffinches Fringilla coelebs in Eastern Fennoscandia migrate earlier in the autumn
than males do. Most migrations begin with the birds setting off across a broad front. Often, this front narrows into
one or more preferred routes termed flyways. These routes typically follow mountain ranges or coastlines, sometimes
rivers, and may take advantage of updrafts and other wind patterns or avoid geographical barriers such as large stretches
of open water. The specific routes may be genetically programmed or learned to varying degrees. The routes taken
on forward and return migration are often different. A common pattern in North America is clockwise migration, where
birds flying north tend to be further west, and those flying south tend to shift eastwards. Many, if not most, birds migrate
in flocks. For larger birds, flying in flocks reduces the energy cost. Geese in a V-formation may conserve 12–20%
of the energy they would need to fly alone. Red knots Calidris canutus and dunlins Calidris alpina were found in
radar studies to fly 5 km/h (3.1 mph) faster in flocks than when they were flying alone. Birds fly at varying altitudes
during migration. An expedition to Mt. Everest found skeletons of northern pintail Anas acuta and black-tailed godwit
Limosa limosa at 5,000 m (16,000 ft) on the Khumbu Glacier. Bar-headed geese Anser indicus have been recorded by
GPS flying at up to 6,540 metres (21,460 ft) while crossing the Himalayas, at the same time engaging in the highest
rates of climb to altitude for any bird. Anecdotal reports of them flying much higher have yet to be corroborated
with any direct evidence. Seabirds fly low over water but gain altitude when crossing land, and the reverse pattern
is seen in landbirds. However, most bird migration is in the range of 150 to 600 m (490 to 1,970 ft). Bird strike
aviation records from the United States show most collisions occur below 600 m (2,000 ft) and almost none above 1,800
m (5,900 ft). Bird migration is not limited to birds that can fly. Most species of penguin (Spheniscidae) migrate
by swimming. These routes can cover over 1,000 km (620 mi). Dusky grouse Dendragapus obscurus perform altitudinal
migration mostly by walking. Emus Dromaius novaehollandiae in Australia have been observed to undertake long-distance
movements on foot during droughts. The typical image of migration is of northern landbirds, such as swallows (Hirundinidae)
and birds of prey, making long flights to the tropics. However, many Holarctic wildfowl and finch (Fringillidae)
species winter in the North Temperate Zone, in regions with milder winters than their summer breeding grounds. For
example, the pink-footed goose Anser brachyrhynchus migrates from Iceland to Britain and neighbouring countries,
whilst the dark-eyed junco Junco hyemalis migrates from subarctic and arctic climates to the contiguous United States
and the American goldfinch from taiga to wintering grounds extending from the American South northwestward to Western
Oregon. Migratory routes and wintering grounds are traditional and learned by young during their first migration
with their parents. Some ducks, such as the garganey Anas querquedula, move completely or partially into the tropics.
The European pied flycatcher Ficedula hypoleuca also follows this migratory trend, breeding in Asia and Europe and
wintering in Africa. Often, the migration route of a long-distance migratory bird does not follow a straight line between breeding and wintering grounds. Rather, it may follow a hooked or arched path, with detours around geographical barriers. For most land birds, such barriers may consist of seas, large water bodies, or high mountain ranges, because
of the lack of stopover or feeding sites, or the lack of thermal columns for broad-winged birds. The same considerations
about barriers and detours that apply to long-distance land-bird migration apply to water birds, but in reverse:
a large area of land without bodies of water that offer feeding sites may also be a barrier to a bird that feeds
in coastal waters. Detours avoiding such barriers are observed: for example, brent geese Branta bernicla migrating
from the Taymyr Peninsula to the Wadden Sea travel via the White Sea coast and the Baltic Sea rather than directly
across the Arctic Ocean and northern Scandinavia. A similar situation occurs with waders (called shorebirds in North
America). Many species, such as dunlin Calidris alpina and western sandpiper Calidris mauri, undertake long movements
from their Arctic breeding grounds to warmer locations in the same hemisphere, but others such as semipalmated sandpiper
C. pusilla travel longer distances to the tropics in the Southern Hemisphere. For some species of waders, migration
success depends on the availability of certain key food resources at stopover points along the migration route. This
gives the migrants an opportunity to refuel for the next leg of the voyage. Some examples of important stopover locations
are the Bay of Fundy and Delaware Bay. Some bar-tailed godwits Limosa lapponica have the longest known non-stop flight
of any migrant, flying 11,000 km from Alaska to their New Zealand non-breeding areas. Prior to migration, 55 percent
of their bodyweight is stored as fat to fuel this uninterrupted journey. Seabird migration is similar in pattern
to those of the waders and waterfowl. Some, such as the black guillemot Cepphus grylle and some gulls, are quite
sedentary; others, such as most terns and auks breeding in the temperate northern hemisphere, move varying distances
south in the northern winter. The Arctic tern Sterna paradisaea has the longest-distance migration of any bird, and
sees more daylight than any other, moving from its Arctic breeding grounds to the Antarctic non-breeding areas. One
Arctic tern, ringed (banded) as a chick on the Farne Islands off the British east coast, reached Melbourne, Australia
in just three months from fledging, a sea journey of over 22,000 km (14,000 mi). Many tubenosed birds breed in the
southern hemisphere and migrate north in the southern winter. The most pelagic species, mainly in the 'tubenose'
order Procellariiformes, are great wanderers, and the albatrosses of the southern oceans may circle the globe as
they ride the "roaring forties" outside the breeding season. The tubenoses spread widely over large areas of open
ocean, but congregate when food becomes available. Many are also among the longest-distance migrants; sooty shearwaters
Puffinus griseus nesting on the Falkland Islands migrate 14,000 km (8,700 mi) between the breeding colony and the
North Atlantic Ocean off Norway. Some Manx shearwaters Puffinus puffinus do this same journey in reverse. As they
are long-lived birds, they may cover enormous distances during their lives; one record-breaking Manx shearwater is
calculated to have flown 8 million km (5 million miles) during its over-50 year lifespan. Some large broad-winged
birds rely on thermal columns of rising hot air to enable them to soar. These include many birds of prey such as
vultures, eagles, and buzzards, but also storks. These birds migrate in the daytime. Migratory species in these groups
have great difficulty crossing large bodies of water, since thermals only form over land, and these birds cannot
maintain active flight for long distances. Mediterranean and other seas present a major obstacle to soaring birds,
which must cross at the narrowest points. Massive numbers of large raptors and storks pass through areas such as
the Strait of Messina, Gibraltar, Falsterbo, and the Bosphorus at migration times. More common species, such as the
European honey buzzard Pernis apivorus, can be counted in hundreds of thousands in autumn. Other barriers, such as
mountain ranges, can also cause funnelling, particularly of large diurnal migrants. This is a notable factor in the
Central American migratory bottleneck. Batumi bottleneck in the Caucasus is one of the heaviest migratory funnels
on earth. Avoiding flying over the Black Sea surface and across high mountains, hundreds of thousands of soaring
birds funnel through an area around the city of Batumi, Georgia. Birds of prey such as honey buzzards which migrate
using thermals lose only 10 to 20% of their weight during migration, which may explain why they forage less during
migration than do smaller birds of prey with more active flight such as falcons, hawks and harriers. Many of the
smaller insectivorous birds including the warblers, hummingbirds and flycatchers migrate large distances, usually
at night. They land in the morning and may feed for a few days before resuming their migration. The birds are referred
to as passage migrants in the regions where they occur for short durations between the origin and destination. Nocturnal
migrants minimize predation, avoid overheating, and can feed during the day. One cost of nocturnal migration is the
loss of sleep. Migrants may be able to alter their quality of sleep to compensate for the loss. Many long-distance
migrants appear to be genetically programmed to respond to changing day length. Species that move short distances,
however, may not need such a timing mechanism, instead moving in response to local weather conditions. Thus mountain
and moorland breeders, such as wallcreeper Tichodroma muraria and white-throated dipper Cinclus cinclus, may move
only altitudinally to escape the cold higher ground. Other species such as merlin Falco columbarius and Eurasian
skylark Alauda arvensis move further, to the coast or towards the south. Species like the chaffinch are much less
migratory in Britain than those of continental Europe, mostly not moving more than 5 km in their lives. Short-distance
passerine migrants have two evolutionary origins. Those that have long-distance migrants in the same family, such
as the common chiffchaff Phylloscopus collybita, are species of southern hemisphere origins that have progressively
shortened their return migration to stay in the northern hemisphere. Species that have no long-distance migratory
relatives, such as the waxwings Bombycilla, are effectively moving in response to winter weather and the loss of
their usual winter food, rather than enhanced breeding opportunities. In the tropics there is little variation in
the length of day throughout the year, and it is always warm enough for a food supply, but altitudinal migration
occurs in some tropical birds. There is evidence that this enables the migrants to obtain more of their preferred
foods such as fruits. Sometimes circumstances such as a good breeding season followed by a food source failure the
following year lead to irruptions in which large numbers of a species move far beyond the normal range. Bohemian
waxwings Bombycilla garrulus exemplify this unpredictable variation in annual numbers, with five major arrivals in
Britain during the nineteenth century, but 18 between the years 1937 and 2000. Red crossbills Loxia curvirostra too
are irruptive, with widespread invasions across England noted in 1251, 1593, 1757, and 1791. Bird migration is primarily,
but not entirely, a Northern Hemisphere phenomenon. This is because land birds in high northern latitudes, where
food becomes scarce in winter, leave for areas further south (including the Southern Hemisphere) to overwinter, and
because the continental landmass is much larger in the Northern Hemisphere. In contrast, among (pelagic) seabirds,
species of the Southern Hemisphere are more likely to migrate. This is because there is a large area of ocean in
the Southern Hemisphere, and more islands suitable for seabirds to nest. The control of migration, its timing and
response are genetically controlled and appear to be a primitive trait that is present even in non-migratory species
of birds. The ability to navigate and orient themselves during migration is a much more complex phenomenon that may
include both endogenous programs as well as learning. The primary physiological cue for migration is the change in day length, which is linked to hormonal changes in the birds. In the period before migration,
many birds display higher activity or Zugunruhe (German: migratory restlessness), first described by Johann Friedrich
Naumann in 1795, as well as physiological changes such as increased fat deposition. The occurrence of Zugunruhe even
in cage-raised birds with no environmental cues (e.g. shortening of day and falling temperature) has pointed to the
role of circannual endogenous programs in controlling bird migrations. Caged birds display a preferential flight
direction that corresponds with the migratory direction they would take in nature, changing their preferential direction
at roughly the same time their wild conspecifics change course. In polygynous species with considerable sexual dimorphism,
males tend to return to the breeding sites earlier than females. This is termed protandry. Navigation is based
on a variety of senses. Many birds have been shown to use a sun compass. Using the sun for direction involves the
need for making compensation based on the time. Navigation has also been shown to be based on a combination of other
abilities including the ability to detect magnetic fields (magnetoception), use visual landmarks as well as olfactory
cues. Long distance migrants are believed to disperse as young birds and form attachments to potential breeding sites
and to favourite wintering sites. Once the site attachment is made they show high site-fidelity, visiting the same
wintering sites year after year. The ability of birds to navigate during migrations cannot be fully explained by
endogenous programming, even with the help of responses to environmental cues. Successful long-distance migration can probably only be fully explained by also accounting for the cognitive ability of the birds to recognize habitats and form mental maps. Satellite tracking of day-migrating raptors such as ospreys and
honey buzzards has shown that older individuals are better at making corrections for wind drift. Migratory birds
may use two electromagnetic tools to find their destinations: one that is entirely innate and another that relies
on experience. A young bird on its first migration flies in the correct direction according to the Earth's magnetic
field, but does not know how far the journey will be. It does this through a radical pair mechanism whereby chemical
reactions in special photo pigments sensitive to long wavelengths are affected by the field. Although this only works
during daylight hours, it does not use the position of the sun in any way. At this stage the bird is in the position
of a boy scout with a compass but no map, until it grows accustomed to the journey and can put its other capabilities
to use. With experience it learns various landmarks and this "mapping" is done by magnetites in the trigeminal system,
which tell the bird how strong the field is. Because birds migrate between northern and southern regions, the magnetic
field strengths at different latitudes let it interpret the radical pair mechanism more accurately and let it know
when it has reached its destination. There is a neural connection between the eye and "Cluster N", the part of the
forebrain that is active during migrational orientation, suggesting that birds may actually be able to see the magnetic
field of the earth. Migrating birds can lose their way and appear outside their normal ranges. This can be due to
flying past their destinations as in the "spring overshoot" in which birds returning to their breeding areas overshoot
and end up further north than intended. Certain areas, because of their location, have become famous as watchpoints
for such birds. Examples are the Point Pelee National Park in Canada, and Spurn in England. Reverse migration, where
the genetic programming of young birds fails to work properly, can lead to rarities turning up as vagrants thousands
of kilometres out of range. A related phenomenon called "abmigration" involves birds from one region joining similar
birds from a different breeding region in the common winter grounds and then migrating back along with the new population.
This is especially common in some waterfowl, which shift from one flyway to another. It has been possible to teach
a migration route to a flock of birds, for example in re-introduction schemes. After a trial with Canada geese Branta
canadensis, microlight aircraft were used in the US to teach safe migration routes to reintroduced whooping cranes
Grus americana. Birds need to alter their metabolism in order to meet the demands of migration. The storage of energy
through the accumulation of fat and the control of sleep in nocturnal migrants require special physiological adaptations.
In addition, the feathers of a bird suffer from wear and tear and need to be molted. The timing of this molt, usually once a year but sometimes twice, varies: some species molt before moving to their winter grounds, while others molt before returning to their breeding grounds. Apart from physiological adaptations, migration
sometimes requires behavioural changes such as flying in flocks to reduce the energy used in migration or the risk
of predation. Migration in birds is highly labile and is believed to have developed independently in many avian lineages.
While it is agreed that the behavioral and physiological adaptations necessary for migration are under genetic control,
some authors have argued that no genetic change is necessary for migratory behavior to develop in a sedentary species
because the genetic framework for migratory behavior exists in nearly all avian lineages. This explains the rapid
appearance of migratory behavior after the most recent glacial maximum. Theoretical analyses show that detours that
increase flight distance by up to 20% will often be adaptive on aerodynamic grounds: a bird that loads itself with food to cross a long barrier flies less efficiently. However, some species show circuitous migratory routes that reflect
historical range expansions and are far from optimal in ecological terms. An example is the migration of continental
populations of Swainson's thrush Catharus ustulatus, which fly far east across North America before turning south
via Florida to reach northern South America; this route is believed to be the consequence of a range expansion that
occurred about 10,000 years ago. Detours may also be caused by differential wind conditions, predation risk, or other
factors. Large scale climatic changes, as have been experienced in the past, are expected to have an effect on the
timing of migration. Studies have shown a variety of effects, including changes in the timing of migration and breeding, as well as population variations. The migration of birds also aids the movement of other species, including ectoparasites
such as ticks and lice, which in turn may carry micro-organisms including those of concern to human health. Due to
the global spread of avian influenza, bird migration has been studied as a possible mechanism of disease transmission,
but it has been found not to present a special risk; import of pet and domestic birds is a greater threat. Some viruses
that are maintained in birds without lethal effects, such as West Nile virus, may however be spread by migrating
birds. Birds may also have a role in the dispersal of propagules of plants and plankton. Some predators take advantage
of the concentration of birds during migration. Greater noctule bats feed on nocturnal migrating passerines. Some
birds of prey specialize on migrating waders. Bird migration routes have been studied by a variety of techniques
including the oldest, marking. Swans have been marked with a nick on the beak since about 1560 in England. Scientific
ringing was pioneered by Hans Christian Cornelius Mortensen in 1899. Other techniques include radar and satellite
tracking. Orientation behaviour studies have been traditionally carried out using variants of a setup known as the
Emlen funnel, which consists of a circular cage with the top covered by glass or wire-screen so that either the sky
is visible or the setup is placed in a planetarium or with other controls on environmental cues. The orientation
behaviour of the bird inside the cage is studied quantitatively using the distribution of marks that the bird leaves
on the walls of the cage. Other approaches used in pigeon homing studies make use of the direction in which the bird
vanishes on the horizon. Hunting along migration routes threatens some bird species. The populations of Siberian
cranes (Leucogeranus leucogeranus) that wintered in India declined due to hunting along the route, particularly in
Afghanistan and Central Asia. Birds were last seen in their favourite wintering grounds in Keoladeo National Park
in 2002. Structures such as power lines, wind farms and offshore oil-rigs have also been known to affect migratory
birds. Other migration hazards include pollution, storms, wildfires, and habitat destruction along migration routes,
denying migrants food at stopover points. For example, in the East Asian–Australasian Flyway, up to 65% of key intertidal
habitat at the Yellow Sea migration bottleneck has been destroyed since the 1950s.

Atlantic City is on Absecon Island, on the Atlantic coast. It was incorporated on May 1, 1854, from portions of Egg Harbor
Township and Galloway Township. The city borders Absecon, Brigantine, Pleasantville, Ventnor City and West Atlantic
City. Because of its location in South Jersey, hugging the Atlantic Ocean between marshlands and islands, Atlantic
City was viewed by developers as prime real estate and a potential resort town. In 1853, the first commercial hotel,
The Belloe House, located at Massachusetts and Atlantic Avenue, was built. The city was incorporated in 1854, the
same year in which the Camden and Atlantic Railroad train service began. Built on the edge of the bay, this served
as the direct link of this remote parcel of land with Philadelphia, Pennsylvania. That same year, construction of
the Absecon Lighthouse, designed by George Meade of the Corps of Topographical Engineers, was approved, with work
initiated the next year. By 1874, almost 500,000 passengers a year were coming to Atlantic City by rail. In Boardwalk
Empire: The Birth, High Times, and Corruption of Atlantic City, "Atlantic City's Godfather" Nelson Johnson describes
the inspiration of Dr. Jonathan Pitney (the "Father of Atlantic City") to develop Atlantic City as a health resort,
his efforts to convince the municipal authorities that a railroad to the beach would be beneficial, his successful
alliance with Samuel Richards (entrepreneur and member of the most influential family in southern New Jersey at the
time) to achieve that goal, the actual building of the railroad, and the experience of the first 600 riders, who
"were chosen carefully by Samuel Richards and Jonathan Pitney". The first boardwalk was built in 1870 along a portion
of the beach in an effort to help hotel owners keep sand out of their lobbies. Businesses were restricted and the
boardwalk was removed each year at the end of the peak season. Because of its effectiveness and popularity, the boardwalk
was expanded in length and width, and modified several times in subsequent years. The historic length of the boardwalk,
before the destructive 1944 Great Atlantic Hurricane, was about 7 miles (11 km) and it extended from Atlantic City
to Longport, through Ventnor and Margate. The first road connecting the city to the mainland at Pleasantville was
completed in 1870 and charged a 30-cent toll. Albany Avenue was the first road to the mainland that was available
without a toll. By 1878, because of the growing popularity of the city, one railroad line could no longer keep up
with demand. Soon, the Philadelphia and Atlantic City Railway was also constructed to transport tourists to Atlantic
City. At this point massive hotels like The United States and Surf House, as well as smaller rooming houses, had
sprung up all over town. The United States Hotel took up a full city block between Atlantic, Pacific, Delaware, and
Maryland Avenues. These hotels were not only impressive in size, but featured the most updated amenities, and were
considered quite luxurious for their time. In the early part of the 20th century, Atlantic City went through a radical
building boom. Many of the modest boarding houses that dotted the boardwalk were replaced with large hotels. Two
of the city's most distinctive hotels were the Marlborough-Blenheim Hotel and the Traymore Hotel. In 1903, Josiah
White III bought a parcel of land near Ohio Avenue and the boardwalk and built the Queen Anne style Marlborough House.
The hotel was a hit and, in 1905–06, he chose to expand the hotel and bought another parcel of land next door to
his Marlborough House. In an effort to make his new hotel a source of conversation, White hired the architectural
firm of Price and McLanahan. The firm made use of reinforced concrete, a new building material invented by Jean-Louis
Lambot in 1848 (Joseph Monier received the patent in 1867). The hotel's Spanish and Moorish themes, capped off with
its signature dome and chimneys, represented a step forward from other hotels that had a classically designed influence.
White named the new hotel the Blenheim and merged the two hotels into the Marlborough-Blenheim. Bally's Atlantic
City was later constructed at this location. The Traymore Hotel was located at the corner of Illinois Avenue and
the boardwalk. Begun in 1879 as a small boarding house, the hotel grew through a series of uncoordinated expansions.
By 1914, the hotel's owner, Daniel White, taking a hint from the Marlborough-Blenheim, commissioned the firm of Price
and McLanahan to build an even bigger hotel. Rising 16 stories, the tan brick and gold-capped hotel would become
one of the city's best-known landmarks. The hotel made use of ocean-facing hotel rooms by jutting its wings farther
from the main portion of the hotel along Pacific Avenue. One by one, additional large hotels were constructed along
the boardwalk, including the Brighton, Chelsea, Shelburne, Ambassador, Ritz Carlton, Mayflower, Madison House, and
the Breakers. The Quaker-owned Chalfonte House, opened in 1868, and Haddon House, opened in 1869, flanked North Carolina
Avenue at the beach end. Their original wood-frame structures would be enlarged, and even moved closer to the beach,
over the years. The modern Chalfonte Hotel, eight stories tall, opened in 1904. The modern Haddon Hall was built
in stages and was completed in 1929, at eleven stories. By this time, they were under the same ownership and merged
into the Chalfonte-Haddon Hall Hotel, becoming the city's largest hotel with nearly 1,000 rooms. By 1930, the Claridge,
the city's last large hotel before the casinos, opened its doors. The 400-room Claridge was built by a partnership
that included renowned Philadelphia contractor John McShain. At 24 stories, it would become known as the "Skyscraper
By The Sea." In 1883, salt water taffy was conceived in Atlantic
City by David Bradley. The traditional story is that Bradley's shop was flooded after a major storm, soaking his
taffy with salty Atlantic Ocean water. He sold some "salt water taffy" to a girl, who proudly walked down to the
beach to show her friends. Bradley's mother was in the back of the store when the sale was made, and loved the name,
and so salt water taffy was born. The 1920s, with tourism at its peak, are considered by many historians as Atlantic
City's golden age. During Prohibition, which was enacted nationally in 1919 and lasted until 1933, much liquor was
consumed and gambling regularly took place in the back rooms of nightclubs and restaurants. It was during Prohibition
that racketeer and political boss Enoch L. "Nucky" Johnson rose to power. Prohibition was largely unenforced in Atlantic
City, and, because alcohol that had been smuggled into the city with the acquiescence of local officials could be
readily obtained at restaurants and other establishments, the resort's popularity grew further. The city then dubbed
itself as "The World's Playground". Nucky Johnson's income, which reached as much as $500,000 annually, came from
the kickbacks he took on illegal liquor, gambling and prostitution operating in the city, as well as from kickbacks
on construction projects. During this time, Atlantic City was under the mayoral reign of Edward L. Bader, known for
his contributions to the construction, athletics and aviation of Atlantic City. Despite the opposition of many,
he purchased land that became the city's municipal airport and high school football stadium, both of which were later
named Bader Field in his honor. He led the initiative, in 1923, to construct the Atlantic City High School at Albany
and Atlantic Avenues. In November 1923, Bader initiated a public referendum during the general election, at which residents approved the construction of a Convention Center. The city passed an ordinance approving a bond issue
for $1.5 million to be used for the purchase of land for Convention Hall, now known as the Boardwalk Hall, finalized
September 30, 1924. Bader was also a driving force behind the creation of the Miss America competition. From May
13 to May 16 in 1929, Johnson hosted a conference for organized crime figures from all across America. The men who
called this meeting were Masseria family lieutenant Charles "Lucky" Luciano and former Chicago South Side Gang boss
Johnny "the Fox" Torrio, with heads of the Bugs and Meyer Mob, Meyer Lansky and Benjamin Siegel, being used as muscle
for the meeting. Like many older east coast cities after World War II, Atlantic City became plagued with poverty,
crime, corruption, and general economic decline in the mid-to-late 20th century. The neighborhood known as the "Inlet"
became particularly impoverished. The reasons for the resort's decline were multi-layered. First of all, the automobile
became more readily available to many Americans after the war. Atlantic City had initially relied upon visitors coming
by train and staying for a couple of weeks. The car allowed them to come and go as they pleased, and many people
would spend only a few days, rather than weeks. Also, the advent of suburbia played a huge role. With many families
moving to their own private houses, luxuries such as home air conditioning and swimming pools diminished their interest
in flocking to the luxury beach resorts during the hot summer. But perhaps the biggest factor in the decline in Atlantic
City's popularity came from cheap, fast jet service to other premier resorts, such as Miami Beach and the Bahamas.
The city hosted the 1964 Democratic National Convention which nominated Lyndon Johnson for President and Hubert Humphrey
as Vice President. The convention and the press coverage it generated, however, cast a harsh light on Atlantic City,
which by then was in the midst of a long period of economic decline. Many felt that the friendship between Johnson
and Governor of New Jersey Richard J. Hughes led Atlantic City to host the Democratic Convention. By the late 1960s,
many of the resort's once great hotels were suffering from embarrassing vacancy rates. Most of them were either shut
down, converted to cheap apartments, or converted to nursing home facilities by the end of the decade. Prior to and
during the advent of legalized gaming, many of these hotels were demolished. The Breakers, the Chelsea, the Brighton,
the Shelburne, the Mayflower, the Traymore, and the Marlborough-Blenheim were demolished in the 1970s and 1980s.
Of the many pre-casino resorts that bordered the boardwalk, only the Claridge, the Dennis, the Ritz-Carlton, and
the Haddon Hall survive to this day as parts of Bally's Atlantic City, a condo complex, and Resorts Atlantic City.
The old Ambassador Hotel was purchased by Ramada in 1978 and was gutted to become the Tropicana Casino and Resort
Atlantic City, only reusing the steelwork of the original building. Smaller hotels off the boardwalk, such as the
Madison also survived. In an effort to revitalize the city, New Jersey voters in 1976 passed a referendum approving casino gambling for Atlantic City; this came after a 1974 referendum on legalized gambling failed to pass. Immediately
after the legislation passed, the owners of the Chalfonte-Haddon Hall Hotel began converting it into the Resorts
International. It was the first legal casino in the eastern United States when it opened on May 26, 1978. Other casinos
were soon constructed along the Boardwalk and, later, in the marina district for a total of eleven today. The introduction
of gambling did not, however, quickly eliminate many of the urban problems that plagued Atlantic City. Many people
have suggested that it only served to exacerbate those problems, as attested to by the stark contrast between tourism
intensive areas and the adjacent impoverished working-class neighborhoods. In addition, Atlantic City has been less popular than Las Vegas as a gambling destination in the United States. Donald Trump helped bring big-name boxing bouts
to the city to attract customers to his casinos. The boxer Mike Tyson had most of his fights in Atlantic City in
the 1980s, which helped Atlantic City achieve nationwide attention as a gambling resort. Numerous highrise condominiums
were built for use as permanent residences or second homes. By the end of the decade it was one of the most popular tourist
destinations in the United States. With the redevelopment of Las Vegas and the opening of two casinos in Connecticut
in the early 1990s, along with newly built casinos in the nearby Philadelphia metro area in the 2000s, Atlantic City's
tourism began to decline due to its failure to diversify away from gaming. Determined to expand, in 1999 the Atlantic
City Redevelopment Authority partnered with Las Vegas casino mogul Steve Wynn to develop a new roadway to a barren
section of the city near the Marina. Nicknamed "The Tunnel Project", Steve Wynn planned the proposed 'Mirage Atlantic
City' around the idea that he would connect the $330 million tunnel stretching 2.5 miles (4.0 km) from the Atlantic
City Expressway to his new resort. The roadway was later officially named the Atlantic City-Brigantine Connector,
and funnels incoming traffic off of the expressway into the city's marina district and Brigantine, New Jersey. Although
Wynn's plans for development in the city were scrapped in 2002, the tunnel opened in 2001. The new roadway prompted
Boyd Gaming in partnership with MGM/Mirage to build Atlantic City's newest casino. The Borgata opened in July 2003,
and its success brought an influx of developers to Atlantic City with plans for building grand Las Vegas style mega
casinos to revitalize the aging city. Owing to economic conditions and the late 2000s recession, many of the proposed
mega casinos never went further than the initial planning stages. One of these developers was Pinnacle Entertainment,
who purchased the Sands Atlantic City, only to close it permanently November 11, 2006. The following year, the resort
was demolished in a dramatic, Las Vegas styled implosion, the first of its kind in Atlantic City. While Pinnacle
Entertainment intended to replace it with a $1.5–2 billion casino resort, the company canceled its construction plans
and instead planned to sell the land. The biggest disappointment came when MGM Resorts International announced that it would
pull out of all development for Atlantic City, effectively ending their plans for the MGM Grand Atlantic City. In
2006, Morgan Stanley purchased 20 acres (8.1 ha) directly north of the Showboat Atlantic City Hotel and Casino for
a new $2 billion plus casino resort. Revel Entertainment Group was named as the project's developer for the Revel
Casino. Revel was hindered with many problems, with the biggest setback to the company being in April 2010 when Morgan
Stanley, the owner of 90% of Revel Entertainment Group, decided to discontinue funding for continued construction
and put its stake in Revel up for sale. Early in 2010 the New Jersey state legislature passed a bill offering tax
incentives to attract new investors and complete the job, but a poll by Fairleigh Dickinson University's PublicMind
released in March 2010 showed that three of five voters (60%) opposed the legislation, and two-thirds of those opposed were "strongly" opposed. Ultimately, Governor Chris Christie offered Revel $261 million in state tax
credits to assist the casino once it opened. As of March 2011[update], Revel had completed all of the exterior work
and had continued work on the interior after finally receiving the funding necessary to complete construction. It
had a soft opening in April 2012, and was fully open by May 2012. Ten months later, in February 2013, after serious
losses and a write-down in the value of the resort from $2.4 billion to $450 million, Revel filed for Chapter 11
bankruptcy. It was restructured but still could not carry on and re-entered bankruptcy on June 19, 2014. It was put up for sale; however, as no suitable bids were received, the resort closed its doors on September 2, 2014. In the wake
of the closures and declining revenue from casinos, Governor Christie said in September 2014 that the state would
consider a 2015 referendum to end the 40-year-old monopoly that Atlantic City holds on casino gambling and allow gambling in other municipalities. With casino revenue declining from $5.2 billion in 2006 to $2.9 billion in 2013,
the state saw a drop in money from its 8% tax on those earnings, which is used to fund programs for senior citizens
and the disabled. "Superstorm Sandy" struck Atlantic City on October 29, 2012, causing flooding and power outages but leaving minimal damage to the tourist areas, including the Boardwalk and casino resorts, despite widespread
belief that the city's boardwalk had been destroyed. The source of the misinformation was a widely circulated photograph
of a damaged section of the Boardwalk that was slated for repairs, prior to the storm, and incorrect news reports
at the time of the disaster. The storm produced an all-time record low barometric pressure reading of 943 mb (27.85")
for not only Atlantic City, but the state of New Jersey. According to the United States Census Bureau, the city had
a total area of 17.037 square miles (44.125 km2), including 10.747 square miles (27.835 km2) of land and 6.290 square
miles (16.290 km2) of water (36.92%). Unincorporated communities, localities and place names located partially or
completely within the city include Chelsea, City Island, Great Island and Venice Park. Summers are typically warm
and humid with a July daily average of 75.6 °F (24.2 °C). During this time, the city gets a sea breeze off the ocean
that often makes daytime temperatures much cooler than inland areas, making Atlantic City a prime place for beating
the summer heat from June through September. Average highs even just a few miles west of Atlantic City exceed 85
°F (29 °C) in July. Near the coast, temperatures reach or exceed 90 °F (32 °C) on an average of only 6.8 days a year,
but this reaches 21 days at nearby Atlantic City Int'l.[a] Winters are cool, with January averaging 35.5 °F (2 °C).
Spring and autumn are erratic, although they are usually mild with low humidity. The average window for freezing
temperatures is November 20 to March 25, allowing a growing season of 239 days. Extreme temperatures range from −9
°F (−23 °C) on February 9, 1934 to 104 °F (40 °C) on August 7, 1918.[b] Annual precipitation is 40 inches (1,020 mm), which is spread fairly evenly throughout the year. Owing to its proximity to the Atlantic Ocean and its location in
South Jersey, Atlantic City receives less snow than a good portion of the rest of New Jersey. Even at the airport,
where low temperatures are often much lower than along the coast, snow averages only 16.5 inches (41.9 cm) each winter.
It is very common for rain to fall in Atlantic City while the northern and western parts of the state are receiving
snow. At the 2010 United States Census, there were 39,558 people, 15,504 households, and 8,558 families residing
in the city. The population density was 3,680.8 per square mile (1,421.2/km2). There were 20,013 housing units at
an average density of 1,862.2 per square mile (719.0/km2). The racial makeup of the city was 26.65% (10,543) White,
38.29% (15,148) Black or African American, 0.61% (242) Native American, 15.55% (6,153) Asian, 0.05% (18) Pacific
Islander, 14.03% (5,549) from other races, and 4.82% (1,905) from two or more races. Hispanics or Latinos of any
race were 30.45% (12,044) of the population. There were 15,504 households, of which 27.3% had children under the
age of 18 living with them, 25.9% were married couples living together, 22.2% had a female householder with no husband
present, and 44.8% were non-families. 37.5% of all households were made up of individuals, and 14.3% had someone
living alone who was 65 years of age or older. The average household size was 2.50 and the average family size was
3.34. In the city, 24.6% of the population were under the age of 18, 10.2% from 18 to 24, 26.8% from 25 to 44, 25.8%
from 45 to 64, and 12.7% who were 65 years of age or older. The median age was 36.3 years. For every 100 females
there were 96.2 males. For every 100 females age 18 and over, there were 94.4 males. The Census Bureau's 2006–2010
American Community Survey showed that (in 2010 inflation-adjusted dollars) median household income was $30,237 (with
a margin of error of +/- $2,354) and the median family income was $35,488 (+/- $2,607). Males had a median income
of $32,207 (+/- $1,641) versus $29,298 (+/- $1,380) for females. The per capita income for the city was $20,069 (+/-
$2,532). About 23.1% of families and 25.3% of the population were below the poverty line, including 36.6% of those
under age 18 and 16.8% of those age 65 or over. As of the 2000 United States Census there were 40,517 people, 15,848
households, and 8,700 families residing in the city. The population density was 3,569.8 people per square mile (1,378.3/km2).
There were 20,219 housing units at an average density of 1,781.4 per square mile (687.8/km2). The racial makeup of
the city was 44.16% black or African American, 26.68% White, 0.48% Native American, 10.40% Asian, 0.06% Pacific Islander,
13.76% other races, and 4.47% from two or more races. 24.95% of the population were Hispanic or Latino of any race.
19.44% of the population was non-Hispanic whites. There were 15,848 households out of which 27.7% had children under
the age of 18 living with them, 24.8% were married couples living together, 23.2% had a female householder with no
husband present, and 45.1% were non-families. 37.2% of all households were made up of individuals and 15.4% had someone
living alone who was 65 years of age or older. The average household size was 2.46 and the average family size was
3.26. In the city the population was spread out with 25.7% under the age of 18, 8.9% from 18 to 24, 31.0% from 25
to 44, 20.2% from 45 to 64, and 14.2% who were 65 years of age or older. The median age was 35 years. For every 100
females there were 96.1 males. For every 100 females age 18 and over, there were 93.2 males. The median income for
a household in the city was $26,969, and the median income for a family was $31,997. Males had a median income of
$25,471 versus $23,863 for females. The per capita income for the city was $15,402. About 19.1% of families and 23.6%
of the population were below the poverty line, including 29.1% of those under age 18 and 18.9% of those age 65 or
over. As of September 2014, the greater Atlantic City area has one of the highest unemployment rates in the country
at 13.8%, out of a labor force of around 141,000. In July 2010, Governor Chris Christie announced that a state takeover
of the city and local government "was imminent". Christie compared regulations in Atlantic City to an "antique car"; regulatory reform was a key piece of his plan, unveiled on July 22, to reinvigorate an industry mired in a four-year slump in revenue and hammered by fresh competition from casinos in the surrounding states of Delaware, Pennsylvania, Connecticut, and more recently, Maryland. In January 2011, Chris Christie announced the Atlantic
City Tourism District, a state-run district encompassing the boardwalk casinos, the marina casinos, the Atlantic
City Outlets, and Bader Field. Fairleigh Dickinson University's PublicMind poll surveyed New Jersey voters' attitudes
on the takeover. The February 16, 2011 survey showed that 43% opposed the measure while 29% favored direct state
oversight. Interestingly, the poll also found that even South Jersey voters expressed opposition to the plan; 40%
reported they opposed the measure and 37% reported they were in favor of it. On April 29, 2011, the boundaries for
the state-run tourism district were set. The district would include heavier police presence, as well as beautification
and infrastructure improvements. The CRDA would oversee all functions of the district and would make changes to attract
new businesses and attractions. New construction would be ambitious and may resort to eminent domain. The tourism
district would comprise several key areas in the city; the Marina District, Ducktown, Chelsea, South Inlet, Bader
Field, and Gardner's Basin. Also included are 10 roadways that lead into the district, including several in the city's
northern end, or North Beach. Gardner's Basin, which is home to the Atlantic City Aquarium, was initially left out
of the tourism district, while a residential neighborhood in the Chelsea section was removed from the final boundaries,
owing to complaints from the city. Also, the inclusion of Bader Field in the district was controversial and received
much scrutiny from mayor Lorenzo Langford, who cast the lone "no" vote on the creation of the district citing its
inclusion. Atlantic City is considered the "Gambling Capital of the East Coast" and currently has eight large
casinos and several smaller ones. In 2011, New Jersey's casinos employed approximately 33,000 employees, had 28.5
million visitors, made $3.3 billion in gaming revenue, and paid $278 million in taxes. They are regulated by the
New Jersey Casino Control Commission and the New Jersey Division of Gaming Enforcement. In the wake of the United
States' economic downturn and the legalization of gambling in adjacent and nearby states (including Delaware, Maryland,
New York, and Pennsylvania), four casino closures took place in 2014: the Atlantic Club on January 13; the Showboat
on August 31; the Revel, which was Atlantic City's second-newest casino, on September 2; and Trump Plaza, which originally
opened in 1984, and was the poorest performing casino in the city, on September 16. Executives at Trump Entertainment
Resorts, whose sole remaining property will be the Trump Taj Mahal, said in 2013 that they were considering the option
of selling the Taj and winding down and exiting the gaming and hotel business. Caesars Entertainment executives have
been reconsidering the future of their three remaining Atlantic City properties (Bally's, Caesars and Harrah's),
in the wake of a Chapter 11 bankruptcy filing by the company's casino operating unit in January 2015. Boardwalk Hall,
formally known as the "Historic Atlantic City Convention Hall", is an arena in Atlantic City along the boardwalk.
Boardwalk Hall was Atlantic City's primary convention center until the opening of the Atlantic City Convention Center
in 1997. The Atlantic City Convention Center includes 500,000 sq ft (46,000 m2) of showroom space, 5 exhibit halls,
45 meeting rooms with 109,000 sq ft (10,100 m2) of space, a garage with 1,400 parking spaces, and an adjacent Sheraton
hotel. Both the Boardwalk Hall and Convention Center are operated by the Atlantic City Convention & Visitors Authority.
Atlantic City (sometimes referred to as "Monopoly City") has become well-known over the years for its portrayal in
the U.S. version of the popular board game, Monopoly, in which properties on the board are named after locations
in and near Atlantic City. While the original incarnation of the game did not feature Atlantic City, it was in Indianapolis
that Ruth Hoskins learned the game, and took it back to Atlantic City. After she arrived, Hoskins made a new board
with Atlantic City street names, and taught it to a group of local Quakers. Marvin Gardens, the leading yellow property on the board, is actually a misspelling of the original location name, "Marven Gardens". The misspelling was
said to have been introduced by Charles Todd and passed on when his home-made Monopoly board was copied by Charles
Darrow and thence Parker Brothers. It was not until 1995 that Parker Brothers acknowledged this mistake and formally
apologized to the residents of Marven Gardens for the misspelling although the spelling error was not corrected.
Immunology is a branch of biomedical science that covers the study of immune systems in all organisms. It charts, measures,
and contextualizes the: physiological functioning of the immune system in states of both health and diseases; malfunctions
of the immune system in immunological disorders (such as autoimmune diseases, hypersensitivities, immune deficiency,
and transplant rejection); the physical, chemical and physiological characteristics of the components of the immune
system in vitro, in situ, and in vivo. Immunology has applications in numerous disciplines of medicine, particularly
in the fields of organ transplantation, oncology, virology, bacteriology, parasitology, psychiatry, and dermatology.
Before immunity was so named, from the etymological root immunis, which is Latin for "exempt", early physicians characterized organs that would later be proven to be essential components of the immune system. The important lymphoid
organs of the immune system are the thymus and bone marrow, and chief lymphatic tissues such as spleen, tonsils,
lymph vessels, lymph nodes, adenoids, and liver. When health conditions worsen to emergency status, portions of immune
system organs including the thymus, spleen, bone marrow, lymph nodes and other lymphatic tissues can be surgically
excised for examination while patients are still alive. Many components of the immune system are typically cellular in nature and not associated with any specific organ, but rather are embedded or circulating in various tissues located throughout the body. Classical immunology ties in with the fields of epidemiology and medicine. It studies the relationship
between the body systems, pathogens, and immunity. The earliest written mention of immunity can be traced back to
the plague of Athens in 430 BCE. Thucydides noted that people who had recovered from a previous bout of the disease
could nurse the sick without contracting the illness a second time. Many other ancient societies have references
to this phenomenon, but it was not until the 19th and 20th centuries that the concept developed into scientific
theory. The study of the molecular and cellular components that comprise the immune system, including their function
and interaction, is the central science of immunology. The immune system has been divided into a more primitive innate
immune system and, in vertebrates, an acquired or adaptive immune system. The latter is further divided into humoral
(or antibody) and cell-mediated components. The humoral (antibody) response is defined as the interaction between
antibodies and antigens. Antibodies are specific proteins released from a certain class of immune cells known as
B lymphocytes, while antigens are defined as anything that elicits the generation of antibodies ("anti"body "gen"erators).
Immunology rests on an understanding of the properties of these two biological entities and the cellular response
to both. Immunological research continues to become more specialized, pursuing non-classical models of immunity and
functions of cells, organs and systems not previously associated with the immune system (Yemeserach 2010). Clinical
immunology is the study of diseases caused by disorders of the immune system (failure, aberrant action, and malignant
growth of the cellular elements of the system). It also involves diseases of other systems, where immune reactions
play a part in the pathology and clinical features. Other immune system disorders include various hypersensitivities (such as asthma and other allergies), in which the immune system responds inappropriately to otherwise harmless compounds. The most well-known
disease that affects the immune system itself is AIDS, an immunodeficiency characterized by the suppression of CD4+
("helper") T cells, dendritic cells and macrophages by the Human Immunodeficiency Virus (HIV). The body’s capability
to react to antigen depends on a person's age, antigen type, maternal factors and the area where the antigen is presented.
Neonates are said to be in a state of physiological immunodeficiency, because both their innate and adaptive immunological
responses are greatly suppressed. Once born, a child’s immune system responds well to protein antigens but less well to glycoproteins and polysaccharides. In fact, many of the infections acquired by neonates are caused
by low virulence organisms like Staphylococcus and Pseudomonas. In neonates, opsonic activity and the ability to
activate the complement cascade is very limited. For example, the mean level of C3 in a newborn is approximately
65% of that found in the adult. Phagocytic activity is also greatly impaired in newborns. This is due to lower opsonic
activity, as well as diminished up-regulation of integrin and selectin receptors, which limit the ability of neutrophils
to interact with adhesion molecules in the endothelium. Their monocytes are slow and have a reduced ATP production,
which also limits the newborn's phagocytic activity. Although the number of total lymphocytes is significantly higher than in adults, cellular and humoral immunity are also impaired. Antigen-presenting cells in newborns have a reduced capability to activate T cells. Also, T cells of a newborn proliferate poorly and produce very small amounts of cytokines like IL-2, IL-4, IL-5, IL-12, and IFN-γ, which limits their capacity to activate the humoral response as well as the phagocytic activity of macrophages. B cells develop early during gestation but are not fully active. Maternal factors
also play a role in the body’s immune response. At birth, most of the immunoglobulin present is maternal IgG. Because
IgM, IgD, IgE and IgA don’t cross the placenta, they are almost undetectable at birth. Some IgA is provided by breast
milk. These passively-acquired antibodies can protect the newborn for up to 18 months, but their response is usually
short-lived and of low affinity. These antibodies can also produce a negative response. If a child is exposed to
the antibody for a particular antigen before being exposed to the antigen itself then the child will produce a dampened
response. Passively acquired maternal antibodies can suppress the antibody response to active immunization. Similarly
the response of T-cells to vaccination differs in children compared to adults, and vaccines that induce Th1 responses
in adults do not readily elicit these same responses in neonates. Between six and nine months after birth, a child’s
immune system begins to respond more strongly to glycoproteins, but there is usually no marked improvement in their
response to polysaccharides until they are at least one year old. This can be the reason for distinct time frames
found in vaccination schedules. During adolescence, the human body undergoes various physical, physiological and
immunological changes triggered and mediated by hormones, of which the most significant in females is 17-β-oestradiol
(an oestrogen) and, in males, is testosterone. Oestradiol usually begins to act around the age of 10 and testosterone
some months later. There is evidence that these steroids act directly not only on the primary and secondary sexual
characteristics but also have an effect on the development and regulation of the immune system, including an increased
risk in developing pubescent and post-pubescent autoimmunity. There is also some evidence that cell surface receptors
on B cells and macrophages may detect sex hormones in the system. Immunology is strongly experimental in everyday
practice but is also characterized by an ongoing theoretical attitude. Many theories have been suggested in immunology
from the end of the nineteenth century up to the present time. The end of the 19th century and the beginning of the
20th century saw a battle between "cellular" and "humoral" theories of immunity. According to the cellular theory
of immunity, represented in particular by Elie Metchnikoff, it was cells – more precisely, phagocytes – that were
responsible for immune responses. In contrast, the humoral theory of immunity, held by Robert Koch and Emil von Behring,
among others, stated that the active immune agents were soluble components (molecules) found in the organism’s “humors”
rather than its cells. In the mid-1950s, Frank Burnet, inspired by a suggestion made by Niels Jerne, formulated the
clonal selection theory (CST) of immunity. On the basis of CST, Burnet developed a theory of how an immune response
is triggered according to the self/nonself distinction: "self" constituents (constituents of the body) do not trigger
destructive immune responses, while "nonself" entities (e.g., pathogens, an allograft) trigger a destructive immune
response. The theory was later modified to reflect new discoveries regarding histocompatibility or the complex "two-signal"
activation of T cells. The self/nonself theory of immunity and the self/nonself vocabulary have been criticized,
but remain very influential. Bioscience is the broad major that undergraduate students interested in general well-being take in college. Immunology is a branch of bioscience at the undergraduate level, but the major becomes more specialized as students move on to graduate programs in immunology. The aim of immunology is to study the health of humans and animals through effective yet consistent research (AAAAI, 2013). Research is the most important part of an immunologist's job, as it makes up the largest portion of their work. Most graduate immunology schools follow
the AAI immunology courses, which are offered at numerous schools in the United States. For example, in New York State, several universities offer the AAI immunology courses: Albany Medical College, Cornell
University, Icahn School of Medicine at Mount Sinai, New York University Langone Medical Center, University at Albany
(SUNY), University at Buffalo (SUNY), University of Rochester Medical Center and Upstate Medical University (SUNY).
The AAI immunology courses include an Introductory Course and an Advanced Course. The Introductory Course is a course
that gives students an overview of the basics of immunology. In addition, this Introductory Course gives students
more information to complement general biology or science training. It also has two different parts: Part I is an
introduction to the basic principles of immunology and Part II is a clinically-oriented lecture series. On the other
hand, the Advanced Course is for those who wish to expand or update their understanding of immunology. Students who want to attend the Advanced Course are advised to have a background in the principles of immunology. Most schools require students to take electives in order to complete their degrees. A Master’s degree requires two years of study following the attainment of a bachelor's degree. A doctoral programme requires two additional years of study.
MPEG-1 or MPEG-2 Audio Layer III, more commonly referred to as MP3, is an audio coding format for digital audio which uses
a form of lossy data compression. It is a common audio format for consumer audio streaming or storage, as well as
a de facto standard of digital audio compression for the transfer and playback of music on most digital audio players.
The use of lossy compression is designed to greatly reduce the amount of data required to represent the audio recording
and still sound like a faithful reproduction of the original uncompressed audio for most listeners. An MP3 file that
is created using the setting of 128 kbit/s will result in a file that is about 1/11 the size of the CD file created
from the original audio source (44,100 samples per second × 16 bits per sample × 2 channels = 1,411,200 bit/s; MP3
compressed at 128 kbit/s: 128,000 bit/s [1 k = 1,000, not 1024, because it is a bit rate]. Ratio: 1,411,200/128,000
= 11.025). An MP3 file can also be constructed at higher or lower bit rates, with higher or lower resulting quality.
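The arithmetic in the parenthesis generalizes to any bit rate. A short Python sketch of the same calculation (the function name is illustrative):

```python
# Bit rate of uncompressed CD audio: samples/s x bits/sample x channels.
CD_BITRATE = 44_100 * 16 * 2          # = 1,411,200 bit/s

def compression_ratio(mp3_kbits: int) -> float:
    """Ratio of the CD bit rate to an MP3 bit rate (1 k = 1,000 for bit rates)."""
    return CD_BITRATE / (mp3_kbits * 1_000)

for rate in (128, 192, 320):
    print(f"{rate:3d} kbit/s -> {compression_ratio(rate):.3f}:1")
```

At 128 kbit/s this reproduces the 11.025:1 figure quoted above; higher bit rates shrink the ratio accordingly.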
The compression works by reducing the accuracy of certain parts of a sound that are considered to be beyond the auditory
resolution ability of most people. This method is commonly referred to as perceptual coding. It uses psychoacoustic
models to discard or reduce precision of components less audible to human hearing, and then records the remaining
information in an efficient manner. MP3 was designed by the Moving Picture Experts Group (MPEG) as part of its MPEG-1
standard and later extended in the MPEG-2 standard. The first subgroup for audio was formed by several teams of engineers
at Fraunhofer IIS, University of Hannover, AT&T-Bell Labs, Thomson-Brandt, CCETT, and others. MPEG-1 Audio (MPEG-1
Part 3), which included MPEG-1 Audio Layer I, II and III, was approved as a committee draft of the ISO/IEC standard in
1991, finalised in 1992 and published in 1993 (ISO/IEC 11172-3:1993). Backwards compatible MPEG-2 Audio (MPEG-2 Part
3) with additional bit rates and sample rates was published in 1995 (ISO/IEC 13818-3:1995). The MP3 lossy audio data
compression algorithm takes advantage of a perceptual limitation of human hearing called auditory masking. In 1894,
the American physicist Alfred M. Mayer reported that a tone could be rendered inaudible by another tone of lower
frequency. In 1959, Richard Ehmer described a complete set of auditory curves regarding this phenomenon. Ernst Terhardt
et al. created an algorithm describing auditory masking with high accuracy. This work added to a variety of reports
from authors dating back to Fletcher, and to the work that initially determined critical ratios and critical bandwidths.
The psychoacoustic masking codec was first proposed in 1979, apparently independently, by Manfred R. Schroeder, et
al. from Bell Telephone Laboratories, Inc. in Murray Hill, NJ, and M. A. Krasner both in the United States. Krasner
was the first to publish and to produce hardware for speech (not usable as music bit compression), but the publication
of his results as a relatively obscure Lincoln Laboratory Technical Report did not immediately influence the mainstream
of psychoacoustic codec development. Manfred Schroeder was already a well-known and revered figure in the worldwide
community of acoustical and electrical engineers, but his paper was not much noticed, since it described negative
results due to the particular nature of speech and the linear predictive coding (LPC) gain present in speech. Both
Krasner and Schroeder built upon the work performed by Eberhard F. Zwicker in the areas of tuning and masking of
critical bands, that in turn built on the fundamental research in the area from Bell Labs of Harvey Fletcher and
his collaborators. A wide variety of (mostly perceptual) audio compression algorithms were reported in IEEE's refereed
Journal on Selected Areas in Communications. That journal reported in February 1988 on a wide range of established,
working audio bit compression technologies, some of them using auditory masking as part of their fundamental design,
and several showing real-time hardware implementations. The immediate predecessors of MP3 were "Optimum Coding in
the Frequency Domain" (OCF), and Perceptual Transform Coding (PXFM). These two codecs, along with block-switching
contributions from Thomson-Brandt, were merged into a codec called ASPEC, which was submitted to MPEG, and which
won the quality competition, but that was mistakenly rejected as too complex to implement. The first practical implementation
of an audio perceptual coder (OCF) in hardware (Krasner's hardware was too cumbersome and slow for practical use),
was an implementation of a psychoacoustic transform coder based on Motorola 56000 DSP chips. As a doctoral student
at Germany's University of Erlangen-Nuremberg, Karlheinz Brandenburg began working on digital music compression in
the early 1980s, focusing on how people perceive music. He completed his doctoral work in 1989. MP3 is directly descended
from OCF and PXFM, representing the outcome of the collaboration of Brandenburg—working as a postdoc at AT&T-Bell
Labs with James D. Johnston ("JJ") of AT&T-Bell Labs—with the Fraunhofer Institut for Integrated Circuits, Erlangen,
with relatively minor contributions from the MP2 branch of psychoacoustic sub-band coders. In 1990, Brandenburg became
an assistant professor at Erlangen-Nuremberg. While there, he continued to work on music compression with scientists
at the Fraunhofer Society (in 1993 he joined the staff of the Fraunhofer Institute). The song "Tom's Diner" by Suzanne
Vega was the first song used by Karlheinz Brandenburg to develop the MP3. Brandenburg adopted the song for testing
purposes, listening to it again and again, each time refining the scheme, making sure it did not adversely affect
the subtlety of Vega's voice. In 1991, there were only two proposals available that could be completely assessed
for an MPEG audio standard: Musicam (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing)
and ASPEC (Adaptive Spectral Perceptual Entropy Coding). The Musicam technique, as proposed by Philips (the Netherlands),
CCETT (France) and Institut für Rundfunktechnik (Germany) was chosen due to its simplicity and error robustness,
as well as its low computational power associated with the encoding of high quality compressed audio. The Musicam
format, based on sub-band coding, was the basis of the MPEG Audio compression format (sampling rates, structure of
frames, headers, number of samples per frame). Much of its technology and ideas were incorporated into the definition
of ISO MPEG Audio Layer I and Layer II and the filter bank alone into Layer III (MP3) format as part of the computationally
inefficient hybrid filter bank. Under the chairmanship of Professor Musmann (University of Hannover) the editing
of the standard was made under the responsibilities of Leon van de Kerkhof (Layer I) and Gerhard Stoll (Layer II).
ASPEC was the joint proposal of AT&T Bell Laboratories, Thomson Consumer Electronics, Fraunhofer Society and CNET.
It provided the highest coding efficiency. A working group consisting of Leon van de Kerkhof (The Netherlands), Gerhard
Stoll (Germany), Leonardo Chiariglione (Italy), Yves-François Dehery (France), Karlheinz Brandenburg (Germany) and
James D. Johnston (USA) took ideas from ASPEC, integrated the filter bank from Layer 2, added some of their own ideas
and created MP3, which was designed to achieve the same quality at 128 kbit/s as MP2 at 192 kbit/s. All algorithms
for MPEG-1 Audio Layer I, II and III were approved in 1991 and finalized in 1992 as part of MPEG-1, the first standard
suite by MPEG, which resulted in the international standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio or MPEG-1 Part 3),
published in 1993. Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards,
MPEG-2, more formally known as international standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Part 3 or backwards compatible
MPEG-2 Audio or MPEG-2 Audio BC), originally published in 1995. MPEG-2 Part 3 (ISO/IEC 13818-3) defined additional
bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half that of
those originally defined in MPEG-1 Audio. This reduction in sampling rate serves to cut the available frequency fidelity
in half while likewise cutting the bitrate by 50%. MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding
of audio programs with more than two channels, up to 5.1 multichannel. An additional extension to MPEG-2 is named
MPEG-2.5 audio, as MPEG-3 already had a different meaning. This extension was developed at Fraunhofer IIS, the registered
MP3 patent holders. Like MPEG-2, MPEG-2.5 adds new sampling rates exactly half of that previously possible with MPEG-2.
It thus widens the scope of MP3 to include human speech and other applications requiring only 25% of the frequency
reproduction possible with MPEG-1. While not an ISO recognized standard, MPEG-2.5 is widely supported by both inexpensive
and brand name digital audio players as well as computer software based MP3 encoders and decoders. A sample rate
comparison between MPEG-1, 2 and 2.5 is given further down. MPEG-2.5 was not developed by MPEG and was never approved
as an international standard. MPEG-2.5 is thus an unofficial or proprietary extension to the MP3 format. Compression
efficiency of encoders is typically defined by the bit rate, because compression ratio depends on the bit depth and
sampling rate of the input signal. Nevertheless, compression ratios are often published. They may use the Compact
Disc (CD) parameters as references (44.1 kHz, 2 channels at 16 bits per channel or 2×16 bit), or sometimes the Digital
Audio Tape (DAT) SP parameters (48 kHz, 2×16 bit). Compression ratios with this latter reference are higher, which
demonstrates the problem with use of the term compression ratio for lossy encoders. Karlheinz Brandenburg used a
CD recording of Suzanne Vega's song "Tom's Diner" to assess and refine the MP3 compression algorithm. This song was
chosen because of its nearly monophonic nature and wide spectral content, making it easier to hear imperfections
in the compression format during playbacks. Some refer to Suzanne Vega as "The mother of MP3". This particular track
has an interesting property in that the two channels are almost, but not completely, the same, leading to a case
where Binaural Masking Level Depression causes spatial unmasking of noise artifacts unless the encoder properly recognizes
the situation and applies corrections similar to those detailed in the MPEG-2 AAC psychoacoustic model. Some more
critical audio excerpts (glockenspiel, triangle, accordion, etc.) were taken from the EBU V3/SQAM reference compact
disc and have been used by professional sound engineers to assess the subjective quality of the MPEG Audio formats.
A reference simulation software implementation, written in the C language and later known as ISO 11172-5, was developed
(in 1991–1996) by the members of the ISO MPEG Audio committee in order to produce bit compliant MPEG Audio files
(Layer 1, Layer 2, Layer 3). It was approved as a committee draft of ISO/IEC technical report in March 1994 and printed
as document CD 11172-5 in April 1994. It was approved as a draft technical report (DTR/DIS) in November 1994, finalized
in 1996 and published as international standard ISO/IEC TR 11172-5:1998 in 1998. The reference software in C language
was later published as a freely available ISO standard. Working in non-real time on a number of operating systems,
it was able to demonstrate the first real-time hardware decoding (DSP based) of compressed audio. Some other real-time
implementations of MPEG Audio encoders were available for the purpose of digital broadcasting (radio DAB, television
DVB) towards consumer receivers and set top boxes. On 7 July 1994, the Fraunhofer Society released the first software
MP3 encoder called l3enc. The filename extension .mp3 was chosen by the Fraunhofer team on 14 July 1995 (previously,
the files had been named .bit). With the first real-time software MP3 player WinPlay3 (released 9 September 1995),
many people were able to encode and play back MP3 files on their PCs. Because of the relatively small hard drives
back in that time (~ 500–1000 MB) lossy compression was essential to store non-instrument based (see tracker and
MIDI) music for playback on computer. As sound scholar Jonathan Sterne notes, "An Australian hacker acquired l3enc
using a stolen credit card. The hacker then reverse-engineered the software, wrote a new user interface, and redistributed
it for free, naming it 'thank you Fraunhofer'". In the second half of the 1990s, MP3 files began to spread on the Internet.
The popularity of MP3s began to rise rapidly with the advent of Nullsoft's audio player Winamp, released in 1997.
In 1998, the first portable solid-state digital audio player, the MPMan, developed by SaeHan Information Systems of
Seoul, South Korea, was released; the Rio PMP300 followed later in 1998, despite legal
suppression efforts by the RIAA. In November 1997, the website mp3.com was offering thousands of MP3s created by
independent artists for free. The small size of MP3 files enabled widespread peer-to-peer file sharing of music ripped
from CDs, which would have previously been nearly impossible. The first large peer-to-peer filesharing network, Napster,
was launched in 1999. The ease of creating and sharing MP3s resulted in widespread copyright infringement. Major
record companies argued that this free sharing of music reduced sales, and called it "music piracy". They reacted
by pursuing lawsuits against Napster (which was eventually shut down and later sold) and against individual users
who engaged in file sharing. Unauthorized MP3 file sharing continues on next-generation peer-to-peer networks. Some
authorized services, such as Beatport, Bleep, Juno Records, eMusic, Zune Marketplace, Walmart.com, Rhapsody, the
recording industry approved re-incarnation of Napster, and Amazon.com sell unrestricted music in the MP3 format.
An MP3 file is made up of MP3 frames, which consist of a header and a data block. This sequence of frames is called
an elementary stream. Due to the "bit reservoir", frames are not independent items and cannot usually be extracted
on arbitrary frame boundaries. The MP3 Data blocks contain the (compressed) audio information in terms of frequencies
and amplitudes. The MP3 header consists of a sync word, which is used to identify the beginning
of a valid frame. This is followed by a bit indicating that this is the MPEG standard and two bits that indicate
that layer 3 is used; hence MPEG-1 Audio Layer 3 or MP3. After this, the values will differ, depending on the MP3
file. ISO/IEC 11172-3 defines the range of values for each section of the header along with the specification of
the header. Most MP3 files today contain ID3 metadata, which precedes or follows the MP3 frames. The MPEG-1
standard does not include a precise specification for an MP3 encoder, but does provide example
psychoacoustic models, rate loop, and the like in the non-normative part of the original standard. At present, these
suggested implementations are quite dated. Implementers of the standard were supposed to devise their own algorithms
suitable for removing parts of the information from the audio input. As a result, there are many different MP3 encoders
available, each producing files of differing quality. Comparisons are widely available, so it is easy for a prospective
user of an encoder to research the best choice. An encoder that is proficient at encoding at higher bit rates (such
as LAME) is not necessarily as good at lower bit rates. During encoding, 576 time-domain samples are taken and are
transformed to 576 frequency-domain samples. If there is a transient, 192 samples are taken
instead of 576. This is done to limit the temporal spread of quantization noise accompanying the transient. (See
psychoacoustics.) Due to the tree structure of the filter bank, pre-echo problems are made worse, as the combined
impulse response of the two filter banks does not, and cannot, provide an optimum solution in time/frequency resolution.
Additionally, the combining of the two filter banks' outputs creates aliasing problems that must be handled partially
by the "aliasing compensation" stage; however, that creates excess energy to be coded in the frequency domain, thereby
decreasing coding efficiency. Decoding, on the other hand, is carefully defined in the standard.
Most decoders are "bitstream compliant", which means that the decompressed output that they produce from a given
MP3 file will be the same, within a specified degree of rounding tolerance, as the output specified mathematically
in the ISO/IEC standard document (ISO/IEC 11172-3). Therefore, comparison of decoders is usually based on how
computationally efficient they are (i.e., how much memory or CPU time they use in the decoding process). Encoder
/ decoder overall delay is not defined, which means there is no official provision for gapless playback. However,
some encoders such as LAME can attach additional metadata that will allow players that can handle it to deliver seamless
playback. When performing lossy audio encoding, such as creating an MP3 file, there is a trade-off between the amount
of space used and the sound quality of the result. Typically, the creator is allowed to set a bit rate, which specifies
how many kilobits the file may use per second of audio. The higher the bit rate, the larger the compressed file will
be, and, generally, the closer it will sound to the original file. With too low a bit rate, compression artifacts
(i.e., sounds that were not present in the original recording) may be audible in the reproduction. Some audio is
hard to compress because of its randomness and sharp attacks. When this type of audio is compressed, artifacts such
as ringing or pre-echo are usually heard. A sample of applause compressed with a relatively low bit rate provides
a good example of compression artifacts. Besides the bit rate of an encoded piece of audio, the quality of MP3 files
also depends on the quality of the encoder itself, and the difficulty of the signal being encoded. As the MP3 standard
allows quite a bit of freedom with encoding algorithms, different encoders may feature quite different quality, even
with identical bit rates. As an example, in a public listening test featuring two different MP3 encoders at about
128 kbit/s, one scored 3.66 on a 1–5 scale, while the other scored only 2.22. The simplest type of MP3 file uses
one bit rate for the entire file: this is known as Constant Bit Rate (CBR) encoding. Using a constant bit rate makes
encoding simpler and faster. However, it is also possible to create files where the bit rate changes throughout the
file. These are known as Variable Bit Rate (VBR) files. The idea behind this is that, in any piece of audio, some
parts will be much easier to compress, such as silence or music containing only a few instruments, while others will
be more difficult to compress. So, the overall quality of the file may be increased by using a lower bit rate for
the less complex passages and a higher one for the more complex parts. With some encoders, it is possible to specify
a given quality, and the encoder will vary the bit rate accordingly. Users who know a particular "quality setting"
that is transparent to their ears can use this value when encoding all of their music, and generally speaking not
need to worry about performing personal listening tests on each piece of music to determine the correct bit rate.
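The trade-off above is easy to quantify for constant-bit-rate files, whose size follows directly from bit rate and duration (VBR sizes, by contrast, depend on the material). A rough estimator (the function name is illustrative; real files are slightly larger because of tags and frame padding):

```python
def cbr_file_size_bytes(bitrate_kbps: int, seconds: float) -> int:
    """Approximate size of a CBR MP3: bit rate x duration / 8 bits per byte.
    Ignores metadata tags and frame padding, so real files run slightly larger."""
    return int(bitrate_kbps * 1_000 * seconds / 8)

# A 4-minute track at common bit rates:
for kbps in (128, 192, 320):
    mb = cbr_file_size_bytes(kbps, 240) / 1_000_000
    print(f"{kbps} kbit/s -> about {mb:.2f} MB")
```

A four-minute track is thus about 3.84 MB at 128 kbit/s but 9.6 MB at 320 kbit/s, which is the space cost of the quality margin.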
Perceived quality can be influenced by listening environment (ambient noise), listener attention, and listener training
and in most cases by listener audio equipment (such as sound cards, speakers and headphones). A test given to new
students by Stanford University Music Professor Jonathan Berger showed that student preference for MP3-quality music
has risen each year. Berger said the students seem to prefer the 'sizzle' sounds that MP3s bring to music. In an in-depth
study of MP3 audio quality, sound artist and composer Ryan Maguire's project "The Ghost in the MP3" isolates the
sounds lost during MP3 compression. In 2015, he released the track "moDernisT" (an anagram of "Tom's Diner"), composed
exclusively from the sounds deleted during MP3 compression of the song "Tom's Diner", the track originally used in
the formulation of the MP3 standard. A detailed account of the techniques used to isolate the sounds deleted during
MP3 compression, along with the conceptual motivation for the project, was published in the 2014 Proceedings of the
International Computer Music Conference. Several bit rates are specified in the MPEG-1 Audio Layer III standard:
32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256 and 320 kbit/s, with available sampling frequencies of 32,
44.1 and 48 kHz. MPEG-2 Audio Layer III allows bit rates of 8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128, 144,
160 kbit/s with sampling frequencies of 16, 22.05 and 24 kHz. MPEG-2.5 Audio Layer III is restricted to bit rates
of 8, 16, 24, 32, 40, 48, 56 and 64 kbit/s with sampling frequencies of 8, 11.025, and 12 kHz. Because of the
Nyquist–Shannon sampling theorem, frequency reproduction is always strictly less than half of the sampling frequency,
and imperfect filters require a larger margin for error (noise level versus sharpness of filter), so an 8 kHz sampling
rate limits the maximum frequency to 4 kHz, while the maximum 48 kHz sampling rate limits an MP3 to 24
kHz sound reproduction. A sample rate of 44.1 kHz is almost always used, because this is also used for CD audio,
the main source used for creating MP3 files. A greater variety of bit rates is used on the Internet. The rate of
128 kbit/s is commonly used, at a compression ratio of 11:1, offering adequate audio quality in a relatively small
space. As Internet bandwidth availability and hard drive sizes have increased, higher bit rates up to 320 kbit/s
are widespread. Uncompressed audio as stored on an audio-CD has a bit rate of 1,411.2 kbit/s, (16 bit/sample × 44100
samples/second × 2 channels / 1000 bits/kilobit), so the bitrates 128, 160 and 192 kbit/s represent compression ratios
of approximately 11:1, 9:1 and 7:1 respectively. Non-standard bit rates up to 640 kbit/s can be achieved with the
LAME encoder and the freeformat option, although few MP3 players can play those files. According to the ISO standard,
decoders are only required to be able to decode streams up to 320 kbit/s. Early MPEG Layer III encoders used what
is now called Constant Bit Rate (CBR). The software was only able to use a uniform bitrate on all frames in an MP3
file. Later more sophisticated MP3 encoders were able to use the bit reservoir to target an average bit rate selecting
the encoding rate for each frame based on the complexity of the sound in that portion of the recording. A more sophisticated
MP3 encoder can produce variable bitrate audio. MPEG audio may use bitrate switching on a per-frame basis, but only
layer III decoders must support it. VBR is used when the goal is to achieve a fixed level of quality. The final file
size of a VBR encoding is less predictable than with constant bitrate. Average bitrate is VBR implemented as a compromise
between the two: the bitrate is allowed to vary for more consistent quality, but is controlled to remain near an
average value chosen by the user, for predictable file sizes. Although an MP3 decoder must support VBR to be standards
compliant, historically some decoders have bugs with VBR decoding, particularly before VBR encoders became widespread.
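Under the hood, the per-frame bit budget follows from a fixed sample count: an MPEG-1 Layer III frame always carries 1152 samples, so its byte length is determined by the bit rate and sampling rate, plus an optional padding byte used to keep the average rate exact. A sketch of that calculation (function name is illustrative):

```python
def mp3_frame_length(bitrate: int, sample_rate: int, padding: bool = False) -> int:
    """Byte length of an MPEG-1 Layer III frame.

    Each frame holds 1152 PCM samples, i.e. 1152 / sample_rate seconds of
    audio; at `bitrate` bit/s that is bitrate * 1152 / sample_rate bits,
    or 144 * bitrate / sample_rate bytes, truncated, plus an optional
    padding byte.
    """
    return 144 * bitrate // sample_rate + (1 if padding else 0)

# 128 kbit/s at 44.1 kHz gives frames of 417 bytes (418 when padded).
print(mp3_frame_length(128_000, 44_100))
```

A VBR stream is simply a sequence of frames whose header bit-rate field, and hence frame length, changes from frame to frame.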
Layer III audio can also use a "bit reservoir", a partially full frame's ability to hold part of the next frame's
audio data, allowing temporary changes in effective bitrate, even in a constant bitrate stream. Internal handling
of the bit reservoir increases encoding delay. There is no scale factor band 21 (sfb21) for frequencies above
approximately 16 kHz, forcing the encoder to choose between less accurate representation in band 21 or less efficient
storage in all bands below band 21, the latter resulting in wasted bitrate in VBR encoding. A "tag" in an audio file
is a section of the file that contains metadata such as the title, artist, album, track number or other information
about the file's contents. The MP3 standards do not define tag formats for MP3 files, nor is there a standard container
format that would support metadata and obviate the need for tags. However, several de facto standards for tag formats
exist. As of 2010, the most widespread are ID3v1 and ID3v2, and the more recently introduced APEv2. These tags are
normally embedded at the beginning or end of MP3 files, separate from the actual MP3 frame data. MP3 decoders either
extract information from the tags, or just treat them as ignorable, non-MP3 junk data. ReplayGain is a standard for
measuring and storing the loudness of an MP3 file (audio normalization) in its metadata tag, enabling a ReplayGain-compliant
player to automatically adjust the overall playback volume for each file. MP3Gain may be used to reversibly modify
files based on ReplayGain measurements so that adjusted playback can be achieved on players without ReplayGain capability.
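A ReplayGain tag stores a gain value in decibels; a compliant player converts it to a linear scale factor of 10^(dB/20) and multiplies each sample by it. A minimal sketch (function names are illustrative, and real players also apply clipping-prevention logic beyond the simple clamp shown here):

```python
def replaygain_scale(gain_db: float) -> float:
    """Linear amplitude factor for a ReplayGain adjustment in decibels."""
    return 10.0 ** (gain_db / 20.0)

def apply_gain(samples, gain_db):
    """Scale 16-bit PCM samples, clamping results to the valid range."""
    s = replaygain_scale(gain_db)
    return [max(-32768, min(32767, round(x * s))) for x in samples]

# A -6 dB adjustment roughly halves the amplitude:
print(apply_gain([20000, -20000], -6.0))
```

MP3Gain achieves the same effect reversibly by rewriting the global gain fields inside the MP3 frames themselves, rather than scaling decoded samples at playback time.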
The basic MP3 decoding and encoding technology is patent-free in the European Union, all patents having expired there.
In the United States, the technology will be substantially patent-free on 31 December 2017 (see below). The majority
of MP3 patents expired in the US between 2007 and 2015. In the past, many organizations have claimed ownership of
patents related to MP3 decoding or encoding. These claims led to a number of legal threats and actions from a variety
of sources. As a result, uncertainty about which patents must be licensed in order to create MP3 products without
committing patent infringement in countries that allow software patents was a common feature of the early stages
of adoption of the technology. The initial near-complete MPEG-1 standard (parts 1, 2 and 3) was publicly available
on 6 December 1991 as ISO CD 11172. In most countries, patents cannot be filed after prior art has been made public,
and patents expire 20 years after the initial filing date, which can be up to 12 months later for filings in other
countries. As a result, patents required to implement MP3 expired in most countries by December 2012, 21 years after
the publication of ISO CD 11172. An exception is the United States, where patents filed prior to 8 June 1995 expire
17 years after the publication date of the patent, but application extensions make it possible for a patent to issue
much later than normally expected (see submarine patents). The various MP3-related patents expire on dates ranging
from 2007 to 2017 in the U.S. Patents filed for anything disclosed in ISO CD 11172 a year or more after its publication
are questionable. If only the known MP3 patents filed by December 1992 are considered, then MP3 decoding has been
patent-free in the US since 22 September 2015 when U.S. Patent 5,812,672 expired which had a PCT filing in October
1992. If the longest-running patent mentioned in the aforementioned references is taken as a measure, then the MP3
technology will be patent-free in the United States on 30 December 2017 when U.S. Patent 5,703,999, held by the Fraunhofer-Gesellschaft
and administered by Technicolor, expires. Technicolor (formerly called Thomson Consumer Electronics) claims to control
MP3 licensing of the Layer 3 patents in many countries, including the United States, Japan, Canada and EU countries.
Technicolor has been actively enforcing these patents. In September 1998, the Fraunhofer Institute sent a letter
to several developers of MP3 software stating that a license was required to "distribute and/or sell decoders and/or
encoders". The letter claimed that unlicensed products "infringe the patent rights of Fraunhofer and Thomson. To
make, sell and/or distribute products using the [MPEG Layer-3] standard and thus our patents, you need to obtain
a license under these patents from us." Sisvel S.p.A. and its U.S. subsidiary Audio MPEG, Inc. previously sued Thomson
for patent infringement on MP3 technology, but those disputes were resolved in November 2005 with Sisvel granting
Thomson a license to their patents. Motorola followed soon after, and signed with Sisvel to license MP3-related patents
in December 2005. Except for three patents, the US patents administered by Sisvel had all expired by 2015 (the
exceptions: U.S. Patent 5,878,080, expiring February 2017; U.S. Patent 5,850,456, expiring February 2017; and
U.S. Patent 5,960,037, expiring 9 April 2017). In September 2006, German officials seized MP3 players from SanDisk's
booth at the IFA show in Berlin after an Italian patents firm won an injunction on behalf of Sisvel against SanDisk
in a dispute over licensing rights. The injunction was later reversed by a Berlin judge, but that reversal was in
turn blocked the same day by another judge from the same court, "bringing the Patent Wild West to Germany" in the
words of one commentator. In February 2007, Texas MP3 Technologies sued Apple, Samsung Electronics and Sandisk in
eastern Texas federal court, claiming infringement of a portable MP3 player patent that Texas MP3 said it had been
assigned. Apple, Samsung, and Sandisk all settled the claims against them in January 2009. Alcatel-Lucent has asserted
several MP3 coding and compression patents, allegedly inherited from AT&T-Bell Labs, in litigation of its own. In
November 2006, before the companies' merger, Alcatel sued Microsoft for allegedly infringing seven patents. On 23
February 2007, a San Diego jury awarded Alcatel-Lucent US $1.52 billion in damages for infringement of two of them.
The court subsequently tossed the award, however, finding that one patent had not been infringed and that the other
was not even owned by Alcatel-Lucent; it was co-owned by AT&T and Fraunhofer, who had licensed it to Microsoft, the
judge ruled. That defense judgment was upheld on appeal in 2008. See Alcatel-Lucent v. Microsoft for more information.
Other lossy formats exist. Among these, mp3PRO, AAC, and MP2 are all members of the same technological family as
MP3 and depend on roughly similar psychoacoustic models. The Fraunhofer Gesellschaft owns many of the basic patents
underlying these formats as well, with others held by Dolby Labs, Sony, Thomson Consumer Electronics, and AT&T. There
are also open compression formats like Opus and Vorbis that are available free of charge and without any known patent
restrictions. Some of the newer audio compression formats, such as AAC, WMA Pro and Vorbis, are free of some limitations
inherent to the MP3 format that cannot be overcome by any MP3 encoder. Besides lossy compression methods, lossless
formats are a significant alternative to MP3 because they provide unaltered audio content, though with an increased
file size compared to lossy compression. Lossless formats include FLAC (Free Lossless Audio Codec), Apple Lossless
and many others.
House music is a genre of electronic dance music that originated in Chicago in the early 1980s. It was initially popularized
in Chicago, circa 1984. House music quickly spread to other American cities such as Detroit, New York City, and Newark
– all of which developed their own regional scenes. In the mid-to-late 1980s, house music became popular in Europe
as well as major cities in South America, and Australia. Early house music commercial success in Europe saw songs
such as "Pump Up The Volume" by MARRS (1987), "House Nation" by House Master Boyz and the Rude Boy of House (1987),
"Theme from S'Express" by S'Express (1988) and "Doctorin' the House" by Coldcut (1988) in the pop charts. Since the
early to mid-1990s, house music has been infused into mainstream pop and dance music worldwide. Early house music was
generally dance-based music characterized by repetitive 4/4 beats, rhythms mainly provided by drum machines, off-beat
hi-hat cymbals, and synthesized basslines. While house displayed several characteristics similar to disco music,
it was more electronic and minimalistic, and the repetitive rhythm of house was more important than the song itself.
House music in the 2010s, while keeping several of these core elements, notably the prominent kick drum on every
beat, varies widely in style and influence, ranging from the soulful and atmospheric deep house to the more minimalistic
microhouse. House music has also fused with several other genres creating fusion subgenres, such as euro house, tech
house, electro house and jump house. In the late 1980s, many local Chicago house music artists suddenly found themselves
presented with major label deals. House music proved to be a commercially successful genre and a more mainstream
pop-based variation grew increasingly popular. Artists and groups such as Madonna, Janet Jackson, Paula Abdul, Aretha
Franklin, Bananarama, Diana Ross, Tina Turner, Whitney Houston, Steps, Kylie Minogue, Björk, and C+C Music Factory have
all incorporated the genre into some of their work. After enjoying significant success in the early to mid-90s, house
music grew even larger during the second wave of progressive house (1999–2001). The genre has remained popular and
fused into other popular subgenres, for example, G-house, deep house, tech house and bass house. As of 2015, house
music remains extremely popular in both clubs and in the mainstream pop scene while retaining a foothold on underground
scenes across the globe. Various disco songs incorporated sounds produced with synthesizers and
drum machines, and some compositions were entirely electronic; examples include Giorgio Moroder's late 1970s productions
such as Donna Summer's hit single "I Feel Love" from 1977, Cerrone's "Supernature" (1977), Yellow Magic Orchestra's
synth-disco-pop productions from Yellow Magic Orchestra (1978), Solid State Survivor (1979), and several early 1980s
disco-pop productions by the Hi-NRG group Lime. Soul and disco influenced house music, plus mixing and editing techniques
earlier explored by disco, garage music and post-disco DJs, producers, and audio engineers such as Walter Gibbons,
Tom Moulton, Jim Burgess, Larry Levan, Ron Hardy, M & M, and others who produced longer, more repetitive, and percussive
arrangements of existing disco recordings. Early house producers such as Frankie Knuckles created similar compositions
from scratch, using samplers, synthesizers, sequencers, and drum machines. The electronic instrumentation and minimal
arrangement of Charanjit Singh's Synthesizing: Ten Ragas to a Disco Beat (1982), an album of Indian ragas performed
in a disco style, anticipated the sounds of acid house music, but it is not known to have had any influence on the
genre prior to the album's rediscovery in the 21st century. Rachel Cain, co-founder of the influential Trax Records,
was previously involved in the burgeoning punk scene and cites industrial and post-punk record store Wax Trax! Records
as an important connection between the ever-changing underground sounds of Chicago. While most proto-house DJs stuck mainly to their conventional repertoire of dance records, Frankie Knuckles and Ron Hardy, two influential pioneers of house music, were known for pushing beyond those bounds. The former, credited as "the Godfather of House," worked primarily with early disco music seasoned with newer, different sounds, whether post-punk or post-disco, while the latter produced unconventional DIY mixtapes, boiling with raw energy, which he played in the music club Muzic Box. Marshall Jefferson, who would later appear with
the Chicago house classic "Move Your Body (The House-Music Anthem)," (originally released on Chicago-based Trax Records)
got involved in house music after hearing Ron Hardy's music in Muzic Box. In the early 1980s, Chicago radio jocks
The Hot Mix 5, and club DJs Ron Hardy and Frankie Knuckles played various styles of dance music, including older
disco records (mostly Philly disco and Salsoul tracks), electro funk tracks by artists such as Afrika Bambaataa,
newer Italo disco, B-Boy hip hop music by Man Parrish, Jellybean Benitez, Arthur Baker, and John Robie, and electronic
pop music by Kraftwerk and Yellow Magic Orchestra. Some made and played their own edits of their favorite songs on
reel-to-reel tape, and sometimes mixed in effects, drum machines, and other rhythmic electronic instrumentation.
In this era, the hypnotic electronic dance song "On and On", produced in 1984 by Chicago DJ Jesse Saunders and co-written
by Vince Lawrence, had elements that became staples of the early house sound, such as the Roland TB-303 bass synthesizer
and minimal vocals as well as a Roland (specifically TR-808) drum machine and Korg (specifically Poly-61) synthesizer.
It also utilized the bassline from Player One's disco record "Space Invaders" (1979). "On and On" is sometimes cited
as the 'first house record', though other examples from around that time, such as J.M. Silk's "Music is the Key"
(1985), have also been cited. Starting in 1984, some of these DJs, inspired by Jesse Saunders' success with "On and
On", tried their hand at producing and releasing original compositions. These compositions used newly affordable
electronic instruments to emulate not just Saunders' song, but the edited, enhanced styles of disco and other dance
music they already favored. These homegrown productions were played on Chicago-area radio and in local discothèques
catering mainly to African-American and gay audiences. By 1985, although the exact origins of the term are debated,
"house music" encompassed these locally produced recordings. Subgenres of house, including deep house and acid house,
quickly emerged and gained traction. Deep house's origins can be traced to Chicago producer Mr Fingers's relatively
jazzy, soulful recordings "Mystery of Love" (1985) and "Can You Feel It?" (1986). According to author Richie Unterberger,
it moved house music away from its "posthuman tendencies back towards the lush" soulful sound of early disco music.
Acid house arose from Chicago artists' experiments with the squelchy Roland TB-303 bass synthesizer, and the style's
origin on vinyl is generally cited as Phuture's "Acid Tracks" (1987). Phuture, a group founded by Nathan "DJ Pierre"
Jones, Earl "Spanky" Smith Jr., and Herbert "Herb J" Jackson, is credited with having been the first to use the TB-303
in the house music context. The group's 12-minute "Acid Tracks" was recorded to tape and was played by DJ Ron Hardy
at the Music Box, where Hardy was resident DJ. Hardy once played it four times over the course of an evening until
the crowd responded favorably. The track also utilized a Roland TR-707 drum machine. Club play from pioneering Chicago
DJs such as Hardy and Lil Louis, local dance music record shops such as Importes, State Street Records, Loop Records,
Gramaphone Records and the popular Hot Mix 5 shows on radio station WBMX-FM helped popularize house music in Chicago.
Later, visiting DJs and producers from Detroit embraced the genre. Trax Records and DJ International Records, Chicago
labels with wider distribution, helped popularize house music inside and outside of Chicago. One 1986 house tune
called "Move Your Body" by Marshall Jefferson, taken from the appropriately titled "The House Music Anthem" EP, became
a big hit in Chicago and eventually worldwide. By 1986, UK labels were releasing house music by Chicago acts, and
by 1987 house tracks by Chicago DJs and producers were appearing on and topping the UK music chart. By this time,
house music released by Chicago-based labels was considered a must-play in clubs. The term "house music" is said
to have originated from a Chicago club called The Warehouse, which existed from 1977 to 1983. The Warehouse's clubbers, primarily black and gay, came to dance to music played by the club's resident DJ, Frankie Knuckles, whom
fans refer to as the "godfather of house". After the Warehouse closed in 1983, the crowds went to Knuckles' new club,
The Power Plant. In the Channel 4 documentary Pump Up The Volume, Knuckles remarks that the first time he heard the
term "house music" was upon seeing "we play house music" on a sign in the window of a bar on Chicago's South Side.
One of the people in the car with him joked, "you know, that's the kind of music you play down at the Warehouse!",
and then everybody laughed. South-Side Chicago DJ Leonard "Remix" Roy, in self-published statements, claims he put
such a sign in a tavern window because it was where he played music that one might find in one's home; in his case,
it referred to his mother's soul & disco records, which he worked into his sets. Farley Jackmaster Funk was quoted
as saying "In 1982, I was DJing at a club called The Playground and there was this kid named Leonard 'Remix' Roy
who was a DJ at a rival club called The Rink. He came over to my club one night, and into the DJ booth and said to
me, 'I've got the gimmick that's gonna take all the people out of your club and into mine – it's called House music.'
Now, where he got that name from or what made him think of it I don't know, so the answer lies with him." Chip E.'s
1985 recording "It's House" may also have helped to define this new form of electronic music. However, Chip E. himself
lends credence to the Knuckles association, claiming the name came from methods of labeling records at the Importes
Etc. record store, where he worked in the early 1980s: bins of music that DJ Knuckles played at the Warehouse nightclub
were labelled in the store "As Heard At The Warehouse", which was shortened to simply "House". Patrons later asked
for new music for the bins, which Chip E. implies was a demand the shop tried to meet by stocking newer local club
hits. In a 1986 interview, Rocky Jones, the former club DJ who ran the D.J. International record label, doesn't mention
Importes Etc., Frankie Knuckles, or the Warehouse by name, but agrees that "house" was a regional catch-all term
for dance music, and that it was once synonymous with older disco music. Larry Heard, a.k.a. "Mr. Fingers", claims
that the term "house" became popular due to many of the early DJs creating music in their own homes using synthesizers
and drum machines such as the Roland TR-808, TR-909, and the TB-303. These synthesizers were used
to create a house subgenre called acid house. Juan Atkins, an originator of Detroit techno music, claims the term
"house" reflected the exclusive association of particular tracks with particular clubs and DJs; those records helped
differentiate the clubs and DJs, and thus were considered to be their "house" records. In an effort to maintain such
exclusives, the DJs were inspired to create their own "house" records. House also served to relay political messages to people who were considered outcasts of society. The music appealed to those who didn't fit into
mainstream American society and was especially celebrated by many black males. Frankie Knuckles once said that the
Warehouse club in Chicago was like "church for people who have fallen from grace". The house producer Marshall Jefferson compared it to "old-time religion in the way that people just get happy and screamin'". Deep house carried many of the same messages of freedom for the black community. Detroit techno is an offshoot of Chicago house music. It
was developed starting in the late 80s, one of the earliest hits being "Big Fun" by Inner City. Detroit techno developed
as the legendary disc jockey The Electrifying Mojo conducted his own radio program at this time, influencing the
fusion of eclectic sounds into the signature Detroit techno sound. This sound, also influenced by European electronica
(Kraftwerk, Art of Noise), Japanese synthpop (Yellow Magic Orchestra), early B-boy Hip-Hop (Man Parrish, Soul Sonic
Force) and Italo disco (Doctor's Cat, Ris, Klein M.B.O.), was further pioneered by Juan Atkins, Derrick May, and
Kevin Saunderson, the "godfathers" of Detroit techno. Derrick May, a.k.a. "MAYDAY", and Thomas Barnett
released "Nude Photo" in 1987 on May's label "Transmat Records", which helped kickstart the Detroit techno music
scene and was put in heavy rotation on Chicago's Hot Mix 5 Radio DJ mix show and in many Chicago clubs. A year later, Transmat released what was to become one of techno and house music's classic anthems – the
seminal track "Strings of Life". Transmat Records went on to have many more successful releases, such as 1988's "Wiggin". Derrick May also had successful releases on Kool Kat Records and many remixes for a host of underground and mainstream recording artists. Kevin Saunderson's company KMS Records contributed
many releases that were as much house music as they were techno. These tracks were well received in Chicago and played
on Chicago radio and in clubs. These included Blake Baxter's 1986 recording "When We Used to Play / Work Your Body", and 1987's "Bounce Your Body to the Box", "Force Field", "The Sound / How to Play Our Music", "The Groove That Won't Stop" and a remix of "Grooving Without a Doubt". In 1988, as house music became more popular among general
audiences, Kevin Saunderson's group Inner City with Paris Gray released the 1988 hits "Big Fun" and "Good Life",
which eventually were picked up by Virgin Records. Each EP / 12 inch single sported remixes by Mike "Hitman" Wilson
and Steve "Silk" Hurley of Chicago and Derrick "Mayday" May and Juan Atkins of Detroit. In 1989, KMS had another
hit release, "Rock to the Beat", which became a theme in Chicago dance clubs. With house music already massive on the 1980s dance scene, it was only a matter of time before it penetrated the UK pop charts. The record generally credited as the first house hit in the UK was Farley "Jackmaster" Funk's "Love Can't
Turn Around" which reached #10 in the UK singles chart in September 1986. In January 1987, Chicago artist Steve "Silk"
Hurley's "Jack Your Body" reached number one in the UK, showing it was possible for house music to cross over. The
same month also saw Raze enter the top 20 with "Jack the Groove", and several further house hits reached the top
ten that year. Stock Aitken Waterman's productions for Mel and Kim, including the number-one hit "Respectable", added
elements of house to their previous Europop sound, and session group Mirage scored top-ten hits with "Jack Mix II"
and "Jack Mix IV", medleys of previous electro and Europop hits rearranged in a house style. Several labels were key to the rise of house music in the UK. The March 1987 DJ International Tour, featuring Knuckles, Jefferson, Fingers Inc. (Heard) and Adonis, boosted house in the UK. Following the number-one success of MARRS' "Pump
Up The Volume" in October, the years 1987 to 1989 also saw UK acts such as The Beatmasters, Krush, Coldcut, Yazz,
Bomb The Bass, S-Express, and Italy's Black Box opening the doors to a house music onslaught on the UK charts. Early
British house music quickly set itself apart from the original Chicago house sound; many of the early hits were based on sample montage, rap was often used for vocals (far more than in the US),
and humor was frequently an important element. One of the early anthemic tunes, "Promised Land" by Joe Smooth, was
covered and charted within a week by the Style Council. Europeans embraced house, and began booking legendary American
house DJs to play at the big clubs, such as Ministry of Sound, whose resident, Justin Berkmann, brought in Larry Levan.
The house scenes in cities such as Birmingham, Leeds, Sheffield and London were also served by many underground pirate radio stations and DJs, which helped bolster a genre that was contagious but otherwise ignored by the mainstream. The earliest and most influential UK house and techno record labels, such as Warp Records and Network Records
(otherwise known as Kool Kat records) helped introduce American and later Italian dance music to Britain as well
as promoting select UK dance music acts. House was also being developed on Ibiza, although no house artists or labels were coming from the island at the time. By the mid-1980s a distinct Balearic mix of house was discernible. Several clubs, such as Amnesia with DJ Alfredo, were playing a mix of rock,
pop, disco and house. These clubs, fueled by their distinctive sound and Ecstasy, began to have an influence on the
British scene. By late 1987, DJs such as Trevor Fung, Paul Oakenfold and Danny Rampling were bringing the Ibiza sound
to UK clubs such as the Haçienda in Manchester, and in London clubs such as Shoom in Southwark, Heaven, Future and
Spectrum. In the U.S., the music was being developed to create a more sophisticated sound,[citation needed] moving
beyond just drum loops and short samples. In Chicago, Marshall Jefferson had formed the house group Ten City with Byron Burke, Byron Stingily and Herb Lawson (the name a play on "intensity"). New York–based performers such as Mateo & Matos and Blaze
had slickly produced disco house tracks. In Detroit a proto-techno music sound began to emerge with the recordings
of Juan Atkins, Derrick May and Kevin Saunderson. Atkins, a former member of Cybotron, released Model 500 "No UFOs"
in 1985, which became a regional hit, followed by dozens of tracks on Transmat, Metroplex and Fragile. One of the
most unusual was "Strings of Life" by Derrick May, a darker, more intellectual strain of house. "Techno-Scratch"
was released by the Knights Of The Turntable in 1984, and had a similar techno sound to Cybotron. The manager of
the Factory nightclub and co-owner of the Haçienda, Tony Wilson, also promoted acid house culture on his weekly TV
show. The Midlands also embraced the late 1980s house scene with illegal parties and more legal dance clubs such
as The Hummingbird. Back in America the scene had still not progressed beyond a small number of clubs in Chicago,
Detroit, Newark and New York City. However, many independent Chicago-based record labels were making appearances
on the Dance Chart with their releases. In the UK, any house song released by a Chicago-based label was routinely
considered a must play at many clubs playing house music. Paradise Garage in New York City was still a top club.
The emergence of Todd Terry, a pioneer of the genre, was important in America. His cover of Class Action's Larry Levan-mixed "Weekend" demonstrated the continuum from underground disco to a new house sound, with hip-hop influences
evident in the quicker sampling and the more rugged bass-line. In the late 1980s, Nu Groove Records prolonged, if not launched, the careers of Rheji Burrell and Rhano Burrell, collectively known as Burrell (after a brief stay on Virgin America via Timmy Regisford and Frank Mendez), along with virtually every relevant DJ and producer in the New York underground scene. The Burrells are responsible for the "New York Underground" sound, and their 30-plus releases on this label alone support their standing in this style of house. Today, Nu Groove releases like the Burrells' enjoy a cult-like following, and mint vinyl can fetch $100 U.S. or more on the open market.
By the late 1980s, house had moved west, particularly to San Francisco, Oakland, Los Angeles, Fresno, San Diego and
Seattle. Los Angeles saw a huge explosion of underground raves and DJs, notably DJs Marques Wyatt and Billy Long,
who spun at Jewel's Catch One, the oldest dance club in America. In 1989, the L.A. based, former EBN-OZN singer/rapper
Robert Ozn started indie house label One Voice Records, releasing the Mike "Hitman" Wilson remix of Dada Nada's "Haunted
House," which garnered instant club and mix show radio play in Chicago, Detroit and New York as well as in the U.K.
and France. The record shot up to Number Five on the Billboard Club Chart, marking it as the first House record by
a white artist to chart in the U.S. Dada Nada, the moniker for Ozn's solo act, released in 1990 what has become
a classic example of jazz-based Deep House, the Frankie Knuckles and David Morales remix of Dada Nada's "Deep Love"
(One Voice Records/US, Polydor/UK), featuring Ozn's lush, crooning vocals and muted trumpet improvisational solos,
underscoring Deep House's progression into a genre that integrated jazz and pop songwriting structures – a feature
which continued to set it apart from Acid House and Techno. The early 1990s additionally saw the rise in mainstream
US popularity for house music. Pop recording artist Madonna's 1990 single "Vogue" became an international hit single
and topped the US charts. The single is credited as helping to bring house music to the US mainstream. Influential
gospel/R&B-influenced Aly-us released "Time Passes On" in 1993 (Strictly Rhythm), then later, "Follow Me" which received
radio airplay as well as being played in clubs. Another U.S. hit which received radio play was the single "Time for
the Percolator" by Cajmere, which became the prototype of the ghetto house subgenre. Cajmere started the Cajual and Relief
labels (amongst others). By the early 1990s artists such as Cajmere himself (under that name as well as Green Velvet
and as producer for Dajae), DJ Sneak, Glenn Underground and others did many recordings. The 1990s saw new Chicago
house artists emerge such as DJ Funk, who operates a Chicago house record label called Dance Mania. Ghetto house
and acid house were other house music styles that were also started in Chicago. In Britain, further experiments in
the genre boosted its appeal. House and rave clubs such as Lakota and Cream emerged across Britain, hosting house
and dance scene events. The 'chilling out' concept developed in Britain with ambient house albums such as The KLF's
Chill Out and Analogue Bubblebath by Aphex Twin. The Godskitchen superclub brand also began in the midst of the early
1990s rave scene. After initially hosting small nights in Cambridge and Northampton, the associated events scaled up in Milton Keynes, Birmingham and Leeds. A new indie dance scene also emerged in the 1990s. In New York, bands such
as Deee-Lite furthered house's international influence. Two distinctive tracks from this era were the Orb's "Little
Fluffy Clouds" (with a distinctive vocal sample from Rickie Lee Jones) and the Happy Mondays' "Wrote for Luck" ("WFL")
which was transformed into a dance hit by Vince Clarke. In England, The Eclipse, one of the few licensed venues, attracted people from up and down the country, as it was open until the early hours. The Criminal Justice and Public Order Act 1994 was a government attempt to ban large rave dance events featuring music with "repetitive beats". There were a number of abortive "Kill the Bill" demonstrations. The Spiral Tribe free party at Castlemorton was probably the nail in the coffin for illegal raves, helping force through the bill, which became law in November 1994. The music continued to
grow and change, as typified by Leftfield with "Release the Pressure", which introduced dub and reggae into the house
sound, although Leftfield had prior releases, such as "Not Forgotten" released in 1990 on Sheffield's Outer Rhythm
records. A new generation of clubs such as Liverpool's Cream and the Ministry of Sound were opened to provide a venue
for more commercial sounds. Major record companies began to open "superclubs" promoting their own acts. These superclubs
entered into sponsorship deals initially with fast food, soft drinks, and clothing companies. Flyers in clubs in
Ibiza often sported many corporate logos. A new subgenre, Chicago hard house, was developed by DJs such as Bad Boy
Bill, DJ Lynnwood, DJ Irene, Richard "Humpty" Vission and DJ Enrie, mixing elements of Chicago house, funky house
and hard house together. Additionally, producers such as George Centeno, Darren Ramirez, and Martin O. Cairo would
develop the Los Angeles Hard House sound. Similar to gabber or hardcore techno from the Netherlands, this sound was
often associated with the "rebel" culture of the time. These three producers are often considered "ahead of their time"
since many of the sounds they engineered during the late 20th century became more prominent during the 21st century.
Towards the end of the 1990s and into the 2000s, producers such as Daft Punk, Stardust, Cassius, St. Germain and
DJ Falcon began producing a new sound out of Paris's house scene. Together, they laid the groundwork for what would
be known as the French house movement. By combining the harder-edged-yet-soulful philosophy of Chicago house with
the melodies of obscure funk, state-of-the-art production techniques and the sound of analog synthesizers, they began
to create the standards that would shape all house music. Chicago Mayor Richard M. Daley proclaimed August 10, 2005
to be "House Unity Day" in Chicago, in celebration of the "21st anniversary of house music" (actually the 21st anniversary
of the founding of Trax Records, an independent Chicago-based house label). The proclamation recognized Chicago as
the original home of house music and that the music's original creators "were inspired by the love of their city,
with the dream that someday their music would spread a message of peace and unity throughout the world". DJs such
as Frankie Knuckles, Marshall Jefferson, Paul Johnson and Mickey Oliver celebrated the proclamation at the Summer
Dance Series, an event organized by Chicago's Department of Cultural Affairs. It was during this decade that vocal
house became firmly established, both in the underground and as part of the pop market, and labels such as Defected
Records, Roule and Om were at the forefront of championing the emerging sound. In the mid-2000s, fusion genres such
as electro house and fidget house emerged. This fusion is apparent in the crossover of musical styles
by artists such as Dennis Ferrer and Booka Shade, with the former's production style having evolved from the New
York soulful house scene and the latter's roots in techno. Numerous live performance events dedicated to house music
were founded during the course of the decade, including Shambhala Music Festival and major industry sponsored events
like Miami's Winter Music Conference. The genre even gained popularity in the Middle East, in cities such as Dubai and Abu Dhabi, and at events like Creamfields. The 2010s saw multiple new sounds in house music developed by numerous DJs. Sweden saw the rise of a snare-less "Swedish progressive house" with the emergence of Sebastian Ingrosso, Axwell, and Steve Angello (who together formed the trio Swedish House Mafia), as well as Avicii and Alesso. The Netherlands produced "Dirty Dutch", an electro house subgenre characterized by abrasive leads and darker arpeggios, with prominent DJs including Chuckie, Hardwell, Laidback Luke, Afrojack, R3hab, Bingo Players, Quintino, Alvaro, Cedric Gervais, and 2G. Elsewhere, fusion genres derived from 2000s progressive house returned to prominence, especially
with the help of DJs Calvin Harris, Eric Prydz, Mat Zo, Above & Beyond and Fonzerelli in Europe, Deadmau5, Kaskade,
Steve Aoki, Porter Robinson and Wolfgang Gartner in the US and Canada. The growing popularity of such artists led
to the emergence of electro house and progressive house blended sounds in popular music, such as Lady Gaga's "Marry the Night", The Black Eyed Peas' "The Best One Yet (The Boy)" and "Scream & Shout" by will.i.am and Britney Spears. Big room house has found increasing popularity since 2010, particularly through international dance music festivals
such as Tomorrowland, Ultra Music Festival, and Electric Daisy Carnival. In addition to these popular examples of
house, there has also been a reunification of contemporary house and its roots. Many hip hop and R&B artists also turn to house music to add mass appeal to the music they produce.
The terms upper case and lower case can be written as two consecutive words, connected with a hyphen (upper-case and lower-case),
or as a single word (uppercase and lowercase). These terms originated from the common layouts of the shallow drawers
called type cases used to hold the movable type for letterpress printing. Traditionally, the capital letters were
stored in a separate case that was located above the case that held the small letters, and the name proved easy to
remember since capital letters are taller. The convention followed by many British publishers (including scientific
publishers, like Nature, magazines, like The Economist and New Scientist, and newspapers, like The Guardian and The
Times) and U.S. newspapers is to use sentence-style capitalisation in headlines, where capitalisation follows the
same rules that apply for sentences. This convention is usually called sentence case. It may also be applied to publication
titles, especially in bibliographic references and library catalogues. Examples of global publishers whose English-language
house styles prescribe sentence-case titles and headings include the International Organization for Standardization.
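The sentence-case convention can be sketched in a few lines of code. The helper below is hypothetical and deliberately naive: it capitalises only the first letter and lower-cases the rest, whereas a real house style would also preserve proper nouns and acronyms.

```python
def sentence_case(headline: str) -> str:
    """Naive sentence case: capitalise the first letter, lower-case the rest.

    Unlike a real style guide, proper nouns and acronyms are NOT preserved.
    """
    text = headline.strip()
    return text[:1].upper() + text[1:].lower() if text else text

print(sentence_case("HOUSE MUSIC SPREADS TO EUROPE"))
# -> House music spreads to europe   ("Europe" shows the proper-noun gap)
```

Handling proper nouns correctly requires a word list or named-entity recognition, which is why publishers state the rule in prose rather than automate it fully.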
Similar developments have taken place in other alphabets. The lower-case script for the Greek alphabet has its origins
in the 7th century and acquired its quadrilinear form in the 8th century. Over time, uncial letter forms were increasingly
mixed into the script. The earliest dated Greek lower-case text is the Uspenski Gospels (MS 461), from the year 835. The modern practice of capitalising the first letter of every sentence seems to be imported (and is rarely
used when printing Ancient Greek materials even today). In orthography and typography, letter case (or just case)
is the distinction between the letters that are in larger upper case (also capital letters, capitals, caps, large
letters, or more formally majuscule, see Terminology) and smaller lower case (also small letters, or more formally
minuscule, see Terminology) in the written representation of certain languages. Here is a comparison of the upper and lower case versions of each letter included in the English alphabet (the exact representation will vary according to the font used): Aa Bb Cc Dd Ee Ff Gg Hh Ii Jj Kk Ll Mm Nn Oo Pp Qq Rr Ss Tt Uu Vv Ww Xx Yy Zz. Capitalisation in English, in terms of the general orthographic rules independent of context (e.g.
title vs. heading vs. text), is universally standardized for formal writing. (Informal communication, such as texting,
instant messaging or a handwritten sticky note, may not bother, but that is because its users usually do not expect
it to be formal.) In English, capital letters are used as the first letter of a sentence, a proper noun, or a proper
adjective. There are a few pairs of words of different meanings whose only difference is capitalisation of the first
letter. The names of the days of the week and the names of the months are also capitalised, as are the first-person
pronoun "I" and the interjection "O" (although the latter is uncommon in modern usage, with "oh" being preferred).
Other words normally start with a lower-case letter. There are, however, situations where further capitalisation
may be used to give added emphasis, for example in headings and titles (see below). In some traditional forms of
poetry, capitalisation has conventionally been used as a marker to indicate the beginning of a line of verse independent
of any grammatical feature. Originally alphabets were written entirely in majuscule letters, spaced between well-defined
upper and lower bounds. When written quickly with a pen, these tended to turn into rounder and much simpler forms.
It is from these that the first minuscule hands developed, the half-uncials and cursive minuscule, which no longer
stayed bound between a pair of lines. These in turn formed the foundations for the Carolingian minuscule script,
developed by Alcuin for use in the court of Charlemagne, which quickly spread across Europe. The advantage of the
minuscule over majuscule was improved, faster readability. Letter case is often prescribed by the
grammar of a language or by the conventions of a particular discipline. In orthography, the uppercase is primarily
reserved for special purposes, such as the first letter of a sentence or of a proper noun, which makes the lowercase
the more common variant in text. In mathematics, letter case may indicate the relationship between objects with uppercase
letters often representing "superior" objects (e.g. X could be a set containing the generic member x). Engineering
design drawings are typically labelled entirely in upper-case letters, which are easier to distinguish than lowercase,
especially when space restrictions require that the lettering be small. Typographically, the basic difference between
the majuscules and minuscules is not that the majuscules are big and minuscules small, but that the majuscules generally
have the same height. The height of the minuscules varies, as some of them have parts higher or lower than the average,
i.e. ascenders and descenders. In Times New Roman, for instance, b, d, f, h, k, l, t are the letters with ascenders,
and g, j, p, q, y are the ones with descenders. Similarly, in the old-style numerals still used by some traditional or classical fonts (although most also provide a set of alternative lining figures), 6 and 8 make up the ascender set, and 3, 4, 5, 7 and 9 the descender set. Most Western languages (particularly those with writing systems based on the
Latin, Cyrillic, Greek, Coptic, and Armenian alphabets) use letter cases in their written form as an aid to clarity.
Scripts using two separate cases are also called bicameral scripts. Many other writing systems make no distinction
between majuscules and minuscules – a system called unicameral script or unicase. This includes most syllabic and
other non-alphabetic scripts. The Georgian alphabet is special since it used to be bicameral, but today is mostly
used in a unicameral way. In Latin, papyri from Herculaneum dating before 79 AD (when it was destroyed) have been
found that have been written in old Roman cursive, where the early forms of minuscule letters "d", "h" and "r", for
example, can already be recognised. According to papyrologist Knut Kleve, "The theory, then, that the lower-case
letters have been developed from the fifth century uncials and the ninth century Carolingian minuscules seems to
be wrong." Both majuscule and minuscule letters existed, but the difference between the two variants was initially
stylistic rather than orthographic and the writing system was still basically unicameral: a given handwritten document
could use either one style or the other but these were not mixed. European languages, except for Ancient Greek and
Latin, did not make the case distinction before about 1300. As regards publication titles it is,
however, a common typographic practice among both British and U.S. publishers to capitalise significant words (and
in the United States, this is often applied to headings, too). This family of typographic conventions is usually
called title case. For example, R. M. Ritter's Oxford Manual of Style (2002) suggests capitalising "the first word
and all nouns, pronouns, adjectives, verbs and adverbs, but generally not articles, conjunctions and short prepositions".
This is an old form of emphasis, similar to the more modern practice of using a larger or boldface font for titles.
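The quoted Oxford-style rule can be sketched in a few lines of Python. This is a minimal illustration, not a complete title-casing implementation: the stop-word list below is an assumption for the example, since style guides draw the line between "short" and "long" prepositions differently.

```python
# A minimal title-case sketch of the rule quoted from Ritter's Oxford
# Manual of Style: capitalise the first word and all significant words,
# but not articles, conjunctions, or short prepositions.
# The stop-word list is an illustrative assumption, not a canonical set.
STOP_WORDS = {
    "a", "an", "the",                      # articles
    "and", "but", "or", "nor",             # conjunctions
    "of", "in", "on", "at", "to", "by",    # short prepositions
}

def title_case(text: str) -> str:
    words = text.lower().split()
    result = []
    for i, word in enumerate(words):
        # The first word is always capitalised; stop words stay lowercase.
        if i == 0 or word not in STOP_WORDS:
            word = word.capitalize()
        result.append(word)
    return " ".join(result)

print(title_case("the oxford manual of style"))  # → The Oxford Manual of Style
```

Note that even this tiny sketch must special-case the first word, which hints at why, as discussed below, the rules are arbitrary conventions rather than grammatically inherent ones.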
The rules for which words to capitalise are not based on any grammatically inherent correct/incorrect distinction
and are not universally standardized; they are arbitrary and differ between style guides, although in most styles
they tend to follow a few strong conventions. As briefly discussed in Unicode Technical Note #26, "In
terms of implementation issues, any attempt at a unification of Latin, Greek, and Cyrillic would wreak havoc [and]
make casing operations an unholy mess, in effect making all casing operations context sensitive […]". In other words,
while the shapes of letters like A, B, E, H, K, M, O, P, T, X, Y and so on are shared between the Latin, Greek, and
Cyrillic alphabets (and small differences in their canonical forms may be considered to be of a merely typographical
nature), it would still be problematic for a multilingual character set or a font to provide only a single codepoint
for, say, uppercase letter B, as this would make it quite difficult for a wordprocessor to change that single uppercase
letter to one of the three different choices for the lower-case letter, b (Latin), β (Greek), or в (Cyrillic). Without
letter case, a "unified European alphabet" – such as ABБCГDΔΕZЄЗFΦGHIИJ…Z, with an appropriate subset for each language
– is feasible; but considering letter case, it becomes very clear that these alphabets are rather distinct sets of
symbols.
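This separation can be checked directly. In the short Python illustration below, each of the three visually similar capitals lowercases to the letter of its own script, because Unicode assigns each script its own code points rather than unifying them:

```python
# Latin B, Greek Β, and Cyrillic В look alike in many fonts, but Unicode
# gives each its own code point, so case mapping stays context-free:
# each uppercase letter maps to the lowercase letter of its own script.
for ch in ("B", "\u0392", "\u0412"):        # Latin B, Greek Β, Cyrillic В
    low = ch.lower()
    print(f"U+{ord(ch):04X} {ch} -> U+{ord(low):04X} {low}")
# The three lowercase results are b (Latin), β (Greek), and в (Cyrillic):
# three distinct code points for three visually similar capitals.
```

Had the three capitals shared a single code point, `lower()` could not know which of the three lowercase forms to produce without extra context, which is exactly the "unholy mess" the technical note warns about.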
During the 14th century, nomadic tribes known as the Jornado hunted bison along the Rio Grande in the northeastern part of the state; they left numerous rock paintings throughout the region. When the Spanish explorers reached this area they found their descendants, the Suma and Manso tribes. In the southern part of the state, in a region
known as Aridoamerica, Chichimeca people survived by hunting, gathering, and farming between AD 300 and 1300. The
Chichimeca are the ancestors of the Tepehuan people. During the Napoleonic Occupation of Spain, Miguel Hidalgo y
Costilla, a Catholic priest of progressive ideas, declared Mexican independence in the small town of Dolores, Guanajuato
on September 16, 1810 with a proclamation known as the "Grito de Dolores". Hidalgo built a large support among intellectuals,
liberal priests and many poor people. Hidalgo fought to protect the rights of the poor and indigenous population.
He started on a march to the capital, Mexico City, but retreated back north when faced with the elite of the royal
forces at the outskirts of the capital. He established a liberal government from Guadalajara, Jalisco but was soon
forced to flee north by the royal forces that recaptured the city. Hidalgo attempted to reach the United States and
gain American support for Mexican independence. Hidalgo reached Saltillo, Coahuila, where he publicly resigned his
military post and rejected a pardon offered by Viceroy Francisco Venegas in return for Hidalgo's surrender. A short
time later, he and his supporters were captured by royalist Ignacio Elizondo at the Wells of Baján (Norias de Baján)
on March 21, 1811 and taken to the city of Chihuahua. Hidalgo forced the Bishop of Valladolid, Manuel Abad y Queipo,
to rescind the excommunication order he had circulated against him on September 24, 1810. Later, the Inquisition
issued an excommunication edict on October 13, 1810 condemning Miguel Hidalgo as a seditionary, apostate, and heretic.
At a convention of citizens called to select a new provisional ruler, Gutiérrez obtained the vote, with P. J. Escalante for his deputy, and a council to guide the administration. Santa Anna ordered the reinstatement of Mendarozqueta as comandante general. Gutiérrez yielded, but Escalante refused to surrender office; demonstrations of support ensued, and Escalante yielded only when troops were summoned from Zacatecas. A new election brought a new legislature, and conforming governors. In September 1835, José Urrea, a federalist army officer, came to power. The Treaty of Guadalupe Hidalgo,
signed on February 2, 1848, by American diplomat Nicholas Trist and Mexican plenipotentiary representatives Luis
G. Cuevas, Bernardo Couto, and Miguel Atristain, ended the war, gave the U.S. undisputed control of Texas, and established
the U.S.–Mexican border at the Rio Grande. As news of peace negotiations reached the state, new calls to arms began
to flare among the people of the state. But as the Mexican officials in Chihuahua heard that General Price was heading
back to Mexico with a large force comprising several companies of infantry and three companies of cavalry and one
division of light artillery from Santa Fe on February 8, 1848, Ángel Trías sent a message to Sacramento Pass to ask for cession of the area, as they understood the war had concluded. General Price, mistaking this for a deception
by the Mexican forces, continued to advance towards the state capital. On March 16, 1848 Price began negotiations
with Ángel Trías, but the Mexican leader responded with an ultimatum to General Price. The American forces engaged
with the Mexican forces near Santa Cruz de los Rosales on March 16, 1848. The Battle of Santa Cruz de los Rosales
was the last battle of the Mexican–American War and it occurred after the peace treaty was signed. The American forces
maintained control over the state capital for three months after the confirmation of the peace treaty. The American presence served to delay the possible secession of the state, which had been discussed at the end of 1847, and the state remained under United States occupation until May 22, 1848. As a consequence of the Reform War, the federal government
was bankrupt and could not pay its foreign debts to Spain, England, and France. On July 17, 1861, President Juárez
decreed a moratorium on payment to foreign debtors for a period of two years. Spain, England, and France did not
accept the moratorium by Mexico; they united at the Convention of the Triple Alliance on October 31, 1861 in which
they agreed to take possession of several custom stations within Mexico as payment. A delegation of the Triple Alliance
arrived in Veracruz in December 1861. President Juárez immediately sent his Foreign Affairs Minister, Manuel Doblado,
who was able to reduce the debts through the Pacto de Soledad (Soledad Pact). General Juan Prim of Spain persuaded
the English delegation to accept the terms of the Pacto de Soledad, but the French delegation refused. Maximilian
was deeply dissatisfied with General Bazaine's decision to abandon the state capital of Chihuahua and immediately
ordered Agustín B. Billaut to recapture the city. On December 11, 1865, Billaut with a force of 500 men took control
of the city. By January 31, 1866 Billaut was ordered to leave Chihuahua, but he left behind 500 men to maintain control.
At the zenith of their power, the imperialist forces controlled all but four states in Mexico; the only states to
maintain strong opposition to the French were Guerrero, Chihuahua, Sonora, and Baja California. After the death of President Benito Juárez in 1872, the first magistracy of the country was occupied by the vice-president Sebastián
Lerdo de Tejada, who called for new elections. Two candidates were registered; Lerdo de Tejada and General Porfirio
Díaz, one of the heroes of the Battle of Puebla, which had taken place on May 5, 1862. Lerdo de Tejada won the election,
but lost popularity after he announced his intent to run for re-election. On March 21, 1876, Don Porfirio Díaz rebelled
against President Sebastian Lerdo de Tejada. The Plan of Tuxtepec defended the "No Re-election" principle. On June
2, 1876 the garrisons in the state of Chihuahua surrendered to the authority of General Porfirio Díaz; Governor Antonio
Ochoa was arrested until all the Lerdista forces were suppressed throughout the state. Porfirio Díaz then helped Trías regain the governorship of the state of Chihuahua, allowing the Plan of Tuxtepec to be implemented. The victory of the Plan of Tuxtepec gave the interim presidency to José María Iglesias and later, as the only candidate, General Porfirio Díaz assumed the presidency on May 5, 1877. During the first years of the Porfiriato (Porfirio
Díaz Era), the Díaz administration had to combat several attacks from the Lerdista forces and the Apache. A new rebellion
led by the Lerdista party was orchestrated from exile in the United States. The Lerdista forces were able to temporarily
occupy the city of El Paso del Norte until mid-1877. During 1877 the northern parts of the state suffered through
a spell of extreme drought which was responsible for many deaths in El Paso del Norte. In March 1912, in Chihuahua,
Gen. Pascual Orozco revolted. Immediately President Francisco Madero commanded Gen. Victoriano Huerta of the Federal
Army, to put down the Orozco revolt. The governor of Chihuahua mobilized the state militia led by Colonel Pancho
Villa to supplement General Huerta. By June, Villa notified Huerta that the Orozco revolt had been put down and that
the militia would consider themselves no longer under Huerta's command and would depart. Huerta became furious and
ordered that Villa be executed. Raúl Madero, Madero's brother, intervened to save Villa's life. Jailed in Mexico City, Villa escaped and fled to the United States. Madero's time as leader was short-lived, ended by a coup d'état in 1913 led
by Gen. Victoriano Huerta; Orozco sided with Huerta, and Huerta made him one of his generals. Although Chihuahua is primarily identified with its namesake, the Chihuahuan Desert, it has more forests than any other state in Mexico, with the exception of Durango. Due to its varied climate, the state has a large variety of fauna and flora. The
state is mostly characterized by rugged mountainous terrain and wide river valleys. The Sierra Madre Occidental mountain
range, an extension of the Rocky Mountains, dominates the state's terrain and is home to the state's greatest attraction,
Las Barrancas del Cobre, or Copper Canyon, a canyon system larger and deeper than the Grand Canyon. On the slope
of the Sierra Madre Occidental mountains (around the regions of Casas Grandes, Cuauhtémoc and Parral), there are
vast prairies of short yellow grass, the source of the bulk of the state's agricultural production. Most of the inhabitants
live along the Rio Grande Valley and the Conchos River Valley. Public opinion pressured the U.S. government to bring
Villa to justice for the raid on Columbus, New Mexico; U.S. President Wilson sent Gen. John J. Pershing and some
5,000 troops into Mexico in an unsuccessful attempt to capture Villa. It was known as the Punitive Expedition. After
nearly a year of pursuing Villa, American forces returned to the United States. The American intervention had been
limited to the western sierras of Chihuahua. Villa had the advantage of intimately knowing the inhospitable terrain
of the Sonoran Desert and the almost impassable Sierra Madre mountains and always managed to stay one step ahead
of his pursuers. In 1923 Villa was assassinated by a group of seven gunmen who ambushed him while he was sitting
in the back seat of his car in Parral. La Cueva De Las Ventanas (The Cave of Windows), a series of cliff dwellings
along an important trade route, and Las Jarillas Cave, scattered along the canyons of the Sierra Madre in Northwestern
Chihuahua date between AD 1205 and 1260 and belong to the Paquimé culture. Cuarenta Casas is thought to have been
a branch settlement from Paquime to protect the trade route from attack. Archaeologists believe the civilization
began to decline during the 13th century and by the 15th century the inhabitants of Paquime sought refuge in the
Sierra Madre Occidental while others are thought to have emigrated north and joined the Ancestral Pueblo peoples.
According to anthropologists, the current native tribes (Yaqui, Mayo, Opata, and Tarahumara) are descendants of the Casas Grandes culture. In 1631 Juan Rangel de Biezma discovered a rich vein of silver, and subsequently established
San Jose del Parral near the site. Parral remained an important economic and cultural center for the next 300 years.
On December 8, 1659 Fray García de San Francisco founded the mission of Nuestra Señora de Guadalupe de Mansos del
Paso del Río del Norte and founded the town El Paso Del Norte (present day Ciudad Juárez) in 1667. The Spanish society
that developed in the region replaced the sparse population of indigenous peoples. The absence of servants and workers
forged the spirit of northern people as self-dependent, creative people that defended their European heritage. In
1680 settlers from Santa Fe, New Mexico sought refuge in El Paso Del Norte for twelve years after fleeing the attacks
from Pueblo tribes, but returned to Santa Fe in 1692 after Diego de Vargas recaptured the city and vicinity. In 1709,
Antonio de Deza y Ulloa founded the state capital Chihuahua City; shortly after, the city became the headquarters
for the regional mining offices of the Spanish crown known as Real de Minas de San Francisco de Cuéllar in honor
of the Viceroy of New Spain, Francisco Fernández de la Cueva Enríquez, Duke of Alburquerque and Marquis of Cuéllar.
The earliest evidence of human inhabitants of modern day Chihuahua was discovered in the area of Samalayuca and Rancho
Colorado. Clovis points have been found in northeastern Chihuahua that have been dated from 12,000 BC to 7000 BC.
It is thought that these inhabitants were hunter-gatherers. Inhabitants of the state later developed farming with
the domestication of corn. An archeological site in northern Chihuahua known as Cerro Juanaqueña revealed squash
cultivation, irrigation techniques, and ceramic artifacts dating to around 2000 BC. During the period from 2000–2005
it is estimated that 49,722 people left the state for the United States. Some 82,000 people are thought to have immigrated
to the state from 2000–2005 mainly coming from Veracruz (17.6%), United States (16.2%), Durango (13.2%), Coahuila
(8.0%) and Chiapas (4.5%). It is believed that a large number of undocumented immigrants from Central and South America live in the state, mainly settling in Ciudad Juárez. According to the 2005 census, the population
grew 1.06% from 2000 to 2005. The state has an uneven settlement of people and the lowest population density of any
Mexican state; according to the 2005 census there were 12 people per km2. Of all the 3,241,444 people in the state,
two-thirds (2,072,129) live in the cities of Ciudad Juárez and Chihuahua. Only three other cities have populations
over 100,000: Parral 101,147, Cuauhtémoc 105,725, and Delicias 108,187. In the constituent legislature or convention,
the conservative and liberal elements formed using the nicknames of Chirrines and Cuchas. The military entered as
a third party. The elections for the first regular legislature were disputed, and it was not until May 1, 1826, that
the body was installed. The liberals gained control and the opposition responded by fomenting a conspiracy. This
was promptly stopped with the aid of informers, and more strenuous measures were taken against the conservatives.
Extra powers were conferred on the Durango governor, Santiago Baca Ortiz, deputy to the first national congress,
and leader of the liberal party. In 1562 Francisco de Ibarra headed a personal expedition in search of the mythical
cities of Cibola and Quivira; he traveled through the present-day state of Chihuahua. Francisco de Ibarra is thought
to have been the first European to see the ruins of Paquime. In 1564 Rodrigo de Río de Loza, a lieutenant under Francisco
de Ibarra, stayed behind after the expedition and found gold at the foot of the mountains of the Sierra Madre Occidental;
he founded the first Spanish city in the region, Santa Barbara, in 1567, bringing 400 European families to the settlement.
A few years later in 1569 Franciscan missionaries led by Fray Agustín Rodríguez from the coast of Sinaloa and the
state of Durango founded the first mission in the state in Valle de San Bartolomé (present-day Valle de Allende).
Fray Agustín Rodríguez evangelized the native population until 1581. Between 1586 and 1588 an epidemic caused a temporary
exodus of the small population in the territory of Nueva Vizcaya. According to the Instituto Nacional de Estadística,
Geografía e Informática (INEGI), 95.6% of the population over the age of 15 could read and write Spanish, and 97.3%
of children of ages 8–14 could read and write Spanish. An estimated 93.5% of the population ages 6–14 attend an institution
of education. An estimated 12.8% of residents of the state have obtained a college degree. Average schooling is 8.5
years, which means that in general the average citizen over 15 years of age has gone as far as a second year in secondary
education. The state has the 12th-largest state economy in Mexico, accounting for 2.7% of the country’s GDP. Chihuahua
has the fifth highest manufacturing GDP in Mexico and ranks second for the most factories funded by foreign investment
in the country. As of 2011, the state had an estimated 396 billion pesos (31.1 billion dollars) of annual
GDP. According to official federal statistical studies, the service sector accounted for the largest portion of the
state economy at 59.28%; the manufacturing and industrial sector is estimated to account for 34.36% of the state's
GDP, with the agricultural sector accounting for 6.36% of the state's GDP. The manufacturing sector attracted the most foreign investment in the state, followed by the mining sector. In 2011, the state received approximately 884 million
dollars in remittances from the United States, which was 4.5% of all remittances from the United States to Mexico.
The general features of the preceding occurrence applied also to Chihuahua, although in a modified form. The first
person elected under the new constitution of 1825 was Simón Elías González, who, being in Sonora, was induced to remain
there. José Antonio Arcé took his place as ruler in Chihuahua. In 1829, González became general commander of Chihuahua,
when his term of office on the west coast expired. Arcé was less of a yorkino than his confrere of Durango. Although
unable to resist the popular demand for the expulsion of the Spaniards, he soon quarreled with the legislature, which
declared itself firmly for Guerrero, and announcing his support of Bustamante's revolution, he suspended, in March
1830, eight members of that body, the vice-governor, and several other officials, and expelled them from the state.
The course thus outlined was followed by Governor José Isidro Madero, who succeeded in 1830, associated with J. J.
Calvo as general commander, stringent laws being issued against secret societies, which were supposed to be the mainspring of the anti-clerical feeling among liberals. Between AD 300 and 1300 in the northern part of the state along
the wide, fertile valley on the San Miguel River the Casas Grandes (Big Houses) culture developed into an advanced
civilization. The Casas Grandes civilization is part of a major prehistoric archaeological culture known as Mogollon
which is related to the Ancestral Pueblo culture. Paquime was the center of the Casas Grandes civilization. Extensive
archaeological evidence shows commerce, agriculture, and hunting at Paquime and Cuarenta Casas (Forty Houses). The
state seemed at relative calm compared to the rest of the country due to its close ties to the United States until
1841. In 1843 the possibility of war was anticipated by the state government and it began to reinforce the defense
lines along the political boundary with Texas. Supplies of weapons were sent to fully equip the military, and steps were taken to improve efficiency at the presidios. Later, the state organized the Regimen for the Defenders of the Border, made up of light cavalry, four squads of two brigades, and a small force of 14 men and 42 officials, at a cost of 160,603 pesos per year. In the early 1840s, private citizens took it upon themselves
to stop the commercial supply caravans from the United States, but because the state was so far from the large suppliers in central Mexico, the caravan was allowed to continue in March 1844. Continuing to anticipate a war, the state legislature
on July 11, 1846 by decree enlisted 6,000 men to serve along the border; during that time Ángel Trías quickly rose
to power by portraying zealous anti-American rhetoric. Trías took the opportunity to dedicate important state resources
to gain economic concessions from the people and loans from many municipalities in preparation to defend the state;
he used all the money he received to equip and organize a large volunteer militia. Ángel Trías took measures for state self-dependence with regard to the state militia due to the diminishing financial support from the federal government. Santa Bárbara became the launching place for expeditions into New Mexico by Spanish conquistadors such as Antonio de
Espejo, Gaspar Castaño, Antonio Gutiérrez de Umaña, Francisco Leyba de Bonilla, and Vicente de Zaldívar. Several
expeditions were led to find a shorter route from Santa Barbara to New Mexico. In April 1598, Juan de Oñate finally
found a short route from Santa Barbara to New Mexico which came to be called El Paso del Norte (The Northern Pass).
The discovery of El Paso Del Norte was important for the expansion of El Camino Real de Tierra Adentro (The Inner
Land Royal Road) to link Spanish settlements in New Mexico to Mexico City; El Camino Real de Tierra Adentro facilitated
transport of settlers and supplies to New Mexico. Hidalgo is hailed as the Father of the Nation even though it was
Agustin de Iturbide and not Hidalgo who achieved Mexican Independence in 1821. Shortly after gaining independence,
the day to celebrate it varied between September 16, the day of Hidalgo's Grito, and September 27, the day Iturbide
rode into Mexico City to end the war. Later, political movements would favor the more liberal Hidalgo over the conservative
Iturbide, so that eventually September 16, 1810 became the officially recognized day of Mexican independence. The
reason for this is that Hidalgo is considered to be "precursor and creator of the rest of the heroes of the (Mexican
War of) Independence." Hidalgo has become an icon for Mexicans who resist tyranny in the country. Diego Rivera painted
Hidalgo's image in half a dozen murals. José Clemente Orozco depicted him with a flaming torch of liberty and considered
the painting among his best work. David Alfaro Siqueiros was commissioned by San Nicolas University in Morelia to
paint a mural for a celebration commemorating the 200th anniversary of Hidalgo's birth. The town of his parish was
renamed Dolores Hidalgo in his honor and the state of Hidalgo was created in 1869. Every year on the night of 15–16
September, the president of Mexico re-enacts the Grito from the balcony of the National Palace. This scene is repeated
by the heads of cities and towns all over Mexico. The remains of Miguel Hidalgo y Costilla lie in the column of the
Angel of Independence in Mexico City. Next to it is a lamp lit to represent the sacrifice of those who gave their
lives for Mexican Independence. On February 8, 1847, Doniphan continued his march with 924 men mostly from Missouri;
he accompanied a train of 315 wagons of a large commercial caravan heading to the state capital. Meanwhile, the Mexican
forces in the state had time to prepare a defense against the Americans. About 20 miles (32 km) north of the capital
where two mountain ranges join from east to west is the only pass into the capital; known as Sacramento Pass, this
point is now part of present-day Chihuahua City. The Battle of Sacramento was the most important battle fought in
the state of Chihuahua because it was the sole defense for the state capital. The battle ended quickly because of
some devastating defensive errors from the Mexican forces and the ingenious strategic moves by the American forces.
After their loss at the Battle of Sacramento, the remaining Mexican soldiers retreated south, leaving the city to
American occupation. Almost 300 Mexicans were killed in the battle, and almost 300 more were wounded. The Americans also confiscated large amounts of Mexican supplies and took 400 Mexican soldiers prisoner. American forces
maintained an occupation of the state capital for the rest of the Mexican–American War. The anti-clerical feeling
was widespread, and Durango supported the initial reaction against the government at Mexico. In May 1832, José Urrea,
a rising officer, supported the restoration of President Pedraza. On July 20, Governor Elorriaga was reinstated,
and Baca along with the legislative minority were brought back to form a new legislature, which met on September
1. Chihuahua showed no desire to imitate the revolutionary movement and Urrea prepared to invade the state. Comandante-general
J. J. Calvo threatened to retaliate, and a conflict seemed imminent. The entry of General Santa Anna into Mexico brought
calm, as the leaders waited for clarity. Under threat from the conservative forces, Governor Terrazas was deposed,
and the state legislature proclaimed martial law in the state in April 1864 and established Jesús José Casavantes
as the new governor. In response, José María Patoni decided to march to Chihuahua with presidential support. Meanwhile,
Maximilian von Habsburg, a younger brother of the Emperor of Austria, was proclaimed Emperor Maximilian I of Mexico
on April 10, 1864 with the backing of Napoleon III and a group of Mexican conservatives. Before President Benito
Juárez was forced to flee, Congress granted him an emergency extension of his presidency, which would go into effect
in 1865 when his term expired, and last until 1867. At the same time, the state liberals and conservatives compromised
to allow the popular Ángel Trías to take the governorship; by this time the French forces had taken control over the
central portions of the country and were making preparations to invade the northern states. The state united behind
the Plan of Ayutla and ratified the new constitution in 1855. The state was able to survive through the Reform War
with minimal damage due to the large number of liberal political figures. The 1858 conservative movement did not succeed in the state even after the conservative Zuloaga's successful military campaign, in which 1,000 men occupied the cities of Chihuahua and Parral. In August 1859, Zuloaga and his forces were defeated by the liberal Orozco and
his forces; Orozco soon after deposed the state governor, but had to flee to Durango two months later. In the late
1860s the conservative General Cajen briefly entered the state after his campaign through the state of Jalisco and
helped establish conservative politicians and drove out the liberal leaders Jesús González Ortega and José María Patoni.
Cajen took possession of the state capital and established himself as governor; he brooked no delay in uniting a
large force to combat the liberal forces which he defeated in La Batalla del Gallo. Cajen attained several advantages
over the liberals within the state, but soon lost his standing due to a strong resurgence of the liberal forces within
the state. The successful liberal leaders José María Patoni of Durango and J.E. Muñoz of Chihuahua quickly strengthened
their standing by implementing the presidential decree limiting the political rights of the clergy. The state elected
General Luis Terrazas, a liberal leader, as governor; he would continue to fight small battles within the state to
suppress conservative uprisings during 1861. After running imperial military affairs in the states of Coahuila and
Durango, General Agustín Enrique Brincourt made preparations to invade the state of Chihuahua. On July 8, 1865 Brincourt
crossed the Nazas River in northern Durango, heading toward Chihuahua. On July 22 Brincourt crossed the banks of
Río Florido into Ciudad Jiménez; one day later he arrived at Valle de Allende where he sent Colonel Pyot with a garrison
to take control of Hidalgo del Parral. Brincourt continued through Santa Rosalia de Camargo and Santa Cruz de Rosales.
President Juárez remained in the state capital until August 5, 1865 when he left for El Paso del Norte (present-day
Ciudad Juárez) due to evidence that the French were to attack the city. On the same day, the President named General
Manuel Ojinaga the new governor and placed him in charge of all the republican forces. Meanwhile, General Villagran
surprised the imperial forces in control of Hidalgo del Parral; after a short two-hour battle, Colonel Pyot was defeated
and forced to retreat. At the Battle of Parral, the French lost 55 men to the Republican forces. On August 13, 1865,
the French forces with an estimated 2,500 men arrived at the outskirts of Chihuahua City, and on August 15, 1865,
General Brincourt defeated the republican forces, taking control of the state capital. Brincourt designated Tomás
Zuloaga as Prefect of Chihuahua. Fearing the French would continue their campaign to El Paso del Norte, President
Juárez relocated to El Carrizal, a secluded place in the mountains near El Paso del Norte, in August 1865. It would
have been easy for the French forces to continue in pursuit of President Juárez across the border, but they feared
altercations with American forces. General François Achille Bazaine ordered the French troops to retreat back to
the state of Durango after only reaching a point one day's travel north of Chihuahua City. General Brincourt asked
for 1,000 men to be left behind to help maintain control over the state, but his request was denied. After the death
of General Ojinaga, the Republican government declared General Villagran in charge of the fight against the Imperial
forces. The French left the state on October 29, 1865. President Juárez returned to Chihuahua City on November 20,
1865 and remained in the city until December 9, 1865 when he returned to El Paso del Norte. Shortly after the president
left Chihuahua City, Terrazas was restored as governor of the state on December 11, 1865. General Aguirre moved to
the deserts of the southeastern portion of the state and defeated the French forces in Parral, led by Colonel Cottret.
By the middle of 1866, the state of Chihuahua was declared free of enemy control; Parral was the last French stronghold
within the state. On June 17, 1866, President Juárez arrived in Chihuahua City and remained in the capital until
December 10, 1866. During his two years in the state of Chihuahua, President Juárez passed ordinances regarding the
rights of adjudication of property and nationalized the property of the clergy. With the French forces and their allies at a distance, the Ministry of War, led by General Negrete, was able to reorganize the state's national guard into the Patriotic Battalion of Chihuahua, which was deployed to fight the French at the battle of Matamoros, Tamaulipas. After a series of major defeats and an escalating threat from Prussia, France began pulling troops out
of Mexico in late 1866. Disillusioned with the liberal political views of Maximilian, the Mexican conservatives abandoned
him, and in 1867 the last of the Emperor's forces were defeated. Maximilian was sentenced to death by a military
court; despite national and international pleas for amnesty, Juárez refused to commute the sentence. Maximilian was
executed by firing squad on June 19, 1867. The United States Congress declared war on Mexico on May 13, 1846, after only a few hours of debate. Although President José Mariano Paredes's issuance of a manifesto on May 23 is sometimes considered the declaration of war, Mexico's Congress officially declared war on July 7. After the American
invasion of New Mexico, Chihuahua sent 12,000 men led by Colonel Vidal to the border to stop the American military
advance into the state. The Mexican forces, impatient to confront the Americans, advanced about 20 miles (32 km) beyond El Paso del Norte along the Rio Grande. The first battle that Chihuahua fought was the Battle of El Bracito: on December 25, 1846, a Mexican force of 500 cavalry and 70 infantry confronted a force of 1,100–1,200 Americans. The battle ended badly for the Mexican forces, which were then forced to retreat back into the state of Chihuahua. By December 27, 1846, the American forces occupied El Paso del Norte. General Doniphan maintained
camp in El Paso del Norte awaiting supplies and artillery, which he received in February 1847. But the peace in the state did not last long; the elections of 1875 caused new hostilities. Ángel Trías led a new movement against the government in June 1875 and maintained control until September 18, 1875, when Donato Guerra, the orchestrator of the Revolution of the North, was captured. Donato Guerra was assassinated in a suburb of Chihuahua City, where he had been incarcerated for conspiring with Ángel Trías. During October 1875 several locations were controlled
by rebel forces, but the government finally regained control on November 25, 1875. Under Governor Miguel Ahumada,
the education system in the state was unified and brought under tighter control by the state government, and the
metric system was standardized throughout the state to replace the colonial system of weights and measures. On September
16, 1897, the Civilian Hospital of Chihuahua was inaugurated in Chihuahua City and became known as one of the best in
the country. In 1901 the Heroes Theater (Teatro de los Héroes) opened in Chihuahua City. On August 18, 1904, Governor
Terrazas was replaced by Governor Enrique C. Creel. From 1907 to 1911, the Creel administration succeeded in advancing
the state's legal system, modernizing the mining industry, and raising public education standards. In 1908 the Chihuahuan
State Penitentiary was built, and construction of the first large-scale dam project was initiated on the Chuviscar
River. During the same time, the streets of Chihuahua City were paved and numerous monuments were built in Chihuahua
City and Ciudad Juárez. Despite the internal stability (known as the paz porfiriana), modernization, and economic
growth in Mexico during the Porfiriato from 1876 to 1910, many across the state became deeply dissatisfied with the
political system. When Díaz first ran for office, he committed to a strict “No Re-election” policy under which he disqualified himself from serving consecutive terms. Eventually backtracking on many of his initial political positions, Díaz became a de facto dictator. He became increasingly unpopular due to his brutal suppression of political dissidents by using
the Rurales and manipulating the elections to solidify his political machine. The working class was frustrated with
the Díaz regime due to the corruption of the political system that had increased the inequality between the rich
and poor. The peasants felt disenfranchised by the policies that promoted the unfair distribution of land where 95%
of the land was owned by the top 5%. In response to Madero's call to action, Pascual Orozco (a wealthy mining baron)
and Chihuahua Governor Abraham González formed a powerful military union in the north, taking military control of
several northern Mexican cities with other revolutionary leaders, including Pancho Villa. Against Madero's wishes,
Orozco and Villa fought for and won Ciudad Juárez. After militias loyal to Madero defeated the Mexican federal army,
on May 21, 1911, Madero signed the Treaty of Ciudad Juárez with Díaz. It required that Díaz abdicate his rule and
be replaced by Madero. Insisting on a new election, Madero won overwhelmingly in late 1911, and he established a
liberal democracy and received support from the United States and popular leaders such as Orozco and Villa. Orozco
eventually became disappointed with Madero's government and led a rebellion against him. He organized his own
army, called "Orozquistas"—also called the Colorados ("Red Flaggers")—after Madero refused to agree to social reforms
calling for better working hours, pay and conditions. The rural working class, which had supported Madero, now took
up arms against him in support of Orozco. The situation became so tense that war with the United States seemed imminent.
On April 22, 1914, on the initiative of Felix A. Sommerfeld and Sherburne Hopkins, Pancho Villa traveled to Juárez
to calm fears along the border and asked President Wilson's emissary George Carothers to tell "Señor Wilson" that
he had no problems with the American occupation of Veracruz. Carothers wrote to Secretary William Jennings Bryan:
"As far as he was concerned we could keep Vera Cruz [sic] and hold it so tight that not even water could get in to
Huerta and . . . he could not feel any resentment". Whether trying to please the U.S. government or through the diplomatic
efforts of Sommerfeld and Carothers, or maybe as a result of both, Villa stepped out from under Carranza’s stated
foreign policy. The state of Chihuahua is the largest state in the country and is known as El Estado Grande (The
Big State); it accounts for 12.6% of the land area of Mexico. The state is landlocked, bordered by the Mexican states of Sonora to the west,
Sinaloa to the south-west, Durango to the south, and Coahuila to the east, and by the U.S. states of Texas to the
northeast and New Mexico to the north. The state is made up of three geologic regions: Mountains, Plains-Valleys,
and Desert, which occur in large bands from west to east. Because of the different geologic regions there are contrasting
climates and ecosystems. The climate in the state depends mainly on the elevation of the terrain. According to Köppen
climate classification the state has five major climate zones. The Sierra Madre Occidental dominates the western
part of the state; there are two main climates in this area: Subtropical Highland (Cfb) and Humid Subtropical (Cwa).
There are some microclimates in the state due to the varying topography, mostly found in the western side of the state.
The two best known microclimates are: Tropical savanna climate (Aw) in deep canyons located in the extreme southern
part of the state; Continental Mediterranean climate (Dsb) in the extremely high elevations of the Sierra Madre Occidental.
Satellite imagery shows that the vegetation is much greener in the west because of the cooler temperatures and larger amounts of precipitation compared to the rest of the state. The Chihuahuan Desert is home to a diverse
ecosystem that supports a large variety of mammals. The most common mammals in the desert include: desert cottontail
Sylvilagus audubonii, black-tailed jackrabbit Lepus californicus, hooded skunk Mephitis macroura, cactus mouse Peromyscus
eremicus, swift fox Vulpes velox, white-throated woodrat Neotoma albigula, pallid bat Antrozous pallidus, and coyote
Canis latrans. The most observed reptiles in the desert include: Mohave rattlesnake Crotalus scutulatus, twin-spotted
rattlesnake Crotalus pricei, prairie rattlesnake Crotalus viridis, ridge-nosed rattlesnake Crotalus willardi, whip
snake Masticophis flagellum, New Mexico whiptail Cnemidophorus neomexicanus, and red-spotted toad Bufo punctatus.
Villa and Carranza had different political goals, causing Villa to become an enemy of Carranza. After Carranza took
control in 1914, Villa and other revolutionaries who opposed him met at what was called the Convention of Aguascalientes.
The convention deposed Carranza in favor of Eulalio Gutiérrez. In the winter of 1914 Villa's and Zapata's troops
entered and occupied Mexico City. Villa was forced from the city in early 1915 and attacked the forces of Gen. Obregón
at the Battle of Celaya and was badly defeated in the bloodiest battle of the revolution, with thousands dead. With
the defeat of Villa, Carranza seized power. A short time later the United States recognized Carranza as president
of Mexico. Even though Villa's forces were badly depleted by his loss at Celaya, he continued his fight against the
Carranza government. Finally, in 1920, Obregón—who had defeated him at Celaya—reached an agreement with Villa to end his rebellion. The plains at the foot of the Sierra Madre Occidental form an elongated mesa known as the Altiplanicie
Mexicana that exhibits a steppe climate and serves as a transition zone from the mountain climate in the western
part of the state to the desert climate in the eastern side of the state. The steppe zone accounts for a third of
the state's area, and it experiences pronounced dry and wet seasons. The pronounced rainy season in the steppe is
usually observed in the months of July, August, and September. The steppe also encounters extreme temperatures that
often reach over 100 °F (38 °C) in the summer and drop below 32 °F (0 °C) in the winter. The steppe zone is an important agricultural
zone due to an extensive development of canals exploiting several rivers that flow down from the mountains. The steppe
zone is the most populated area of the state. The state has great biological diversity due to its large number of microclimates and dramatically varying terrain. The flora throughout the Sierra Madre Occidental mountain range varies with elevation.
Pine (Pinus) and oak (Quercus) species are usually found at an elevation of 2,000 m (6,560 ft) above sea level. The
most common species of flora found in the mountains are: Pinus, Quercus, Abies, Ficus, Vachellia, Ipomoea, Acacia,
Lysiloma, Bursera, Vitex, Tabebuia, Sideroxylon, Cordia, Fouquieria, Pithecellobium. The state is home to one of
the greatest varieties of species of the genus Pinus in the world. The lower elevations have steppe vegetation with
a variety of grasses and small bushes. Several species of Juniperus dot the steppe and the transition zone. The state
has one city with a population exceeding one million: Ciudad Juárez, ranked the eighth most populous city in the country; Chihuahua City is ranked the 16th most populous in Mexico. Chihuahua (along with Baja California)
is the only state in Mexico to have two cities ranked in the top 20 most populated. El Paso and Ciudad Juárez comprise
one of the largest binational metropolitan areas in the world, with a combined population of 2.4 million. Ciudad Juárez is one of the fastest-growing cities in the world despite being "the most violent zone in the world outside of declared war zones". The Federal Reserve Bank of Dallas reported that in Ciudad Juárez "the average annual growth over the 10-year period 1990–2000 was 5.3 percent. Juárez
experienced much higher population growth than the state of Chihuahua and than Mexico as a whole". Chihuahua City
has one of the highest literacy rates in the country at 98%; 35% of the population is aged 14 or below, 60% is between 15 and 65, and 5% is over 65. The growth rate is 2.4%. Some 76.5% of the population of the state of Chihuahua live in cities, which makes the state one of the most urbanized in Mexico. The French forces tried to subdue and capture the liberal government
based in Saltillo. On September 21, 1864, José María Patoni and Jesús González Ortega lost against the French forces
at the Battle of Estanzuelas; the supreme government led by President Juárez was forced to evacuate the city of Saltillo
and relocate to Chihuahua. Juárez stopped in Ciudad Jiménez, Valle de Allende, and Hidalgo del Parral, in turn. He
decreed Parral the capital of Mexico from October 2–5, 1864. Perceiving the threat from the advancing French forces,
the president continued his evacuation through Santa Rosalía de Camargo, Santa Cruz de Rosales, and finally Chihuahua,
Chihuahua. On October 12, 1864, the people of the state gave President Juárez an overwhelmingly supportive reception,
led by Governor Ángel Trías. On October 15, 1864 the city of Chihuahua was declared the temporary capital of Mexico.
President Benito Juárez was re-elected in the general election of 1867 in which he received strong liberal support,
especially in Chihuahua. Luis Terrazas was confirmed by the people of Chihuahua to be governor of the state. But
soon after the election, President Juárez had another crisis on his hands; the Juárez administration was suspected
of involvement in the assassination of the military chief José María Patoni, who was executed by General Canto in August 1868.
General Canto turned himself over to Donato Guerra. Canto was sentenced to death, but his sentence was later changed to 10 years' imprisonment. The sense of injustice gave rise to a new rebellion in 1869 that threatened the federal
government. In response, the Juárez administration took drastic measures by temporarily suspending constitutional
rights, but the governor of Chihuahua did not support this action. Hostilities continued to increase especially after
the election of 1871 which was perceived to be fraudulent. A new popular leader arose among the rebels, Porfirio
Díaz. The federal government was successful in quelling rebellions in Durango and Chihuahua. On July 18, 1872, President
Juárez died from a heart attack; soon after, many of his supporters ceased the fighting. Peace returned to Chihuahua
and the new government was led by Governor Antonio Ochoa (formerly a co-owner of the Batopilas silver mines) in 1873
after Luis Terrazas finished his term in 1872. The Díaz administration made political decisions and took legal measures
that allowed the elite throughout Mexico to concentrate the nation's wealth by favoring monopolies. During this time,
two-fifths of the state's territory was divided among 17 rich families which owned practically all of the arable
land in Chihuahua. The state economy grew at a rapid pace during the Porfiriato; the economy in Chihuahua was dominated
by agriculture and mining. The Díaz administration helped Governor Luis Terrazas by funding the Municipal Public
Library in Chihuahua City and passing a federal initiative for the construction of the railroad from Chihuahua City
to Ciudad Júarez. By 1881, the Central Mexican Railroad was completed which connected Mexico City to Ciudad Juárez.
In 1883 telephone lines were installed throughout the state, allowing communication between Chihuahua City and Aldama.
By 1888 telephone service had been extended from the capital to the cities of Julimes, Meoqui, and Hidalgo del Parral; the telecommunication network in the state covered an estimated 3,500 kilometers. The need for laborers to construct
the extensive infrastructure projects resulted in a significant Asian immigration, mostly from China. Asian immigrants
soon became integral to the state economy, opening restaurants, small grocery stores, and hotels. By the end of
the Terrazas term, the state experienced an increase in commerce, mining, and banking. When the banks were nationalized,
Chihuahua became the most important banking state in Mexico. The end of the Porfiriato came in 1910 with the beginning
of the Mexican Revolution. Díaz had stated that Mexico was ready for democracy and he would step down to allow other
candidates to compete for the presidency, but Díaz decided to run again in 1910 for the last time against Francisco
I. Madero. During the campaign Díaz incarcerated Madero on election day in 1910. Díaz was announced the winner of
the election by a landslide, triggering the revolution. Madero supporter Toribio Ortega took up arms with a group
of followers at Cuchillo Parado, Chihuahua on November 10, 1910. The uneasy alliance of Carranza, Obregón, Villa,
and Zapata eventually led the rebels to victory. The fight against Huerta formally ended on August 15, 1914, when
Álvaro Obregón signed a number of treaties in Teoloyucan in which the last of Huerta's forces surrendered to him
and recognized the constitutional government. On August 20, 1914, Carranza made a triumphal entry into Mexico City.
Carranza (supported by Obregón) was now the strongest candidate to fill the power vacuum and set himself up as head
of the new government. This government successfully printed money, passed laws, etc. The main mountain range in the
state is the Sierra Madre Occidental, which reaches a maximum altitude of 10,826 ft (3,300 m) at Cerro Mohinora. Mountains, which include large coniferous forests, account for one third of the state's surface area. The climate in the mountainous regions varies. Chihuahua has more forests than any other state in Mexico, making the area a bountiful source of wood; the mountainous areas are also rich in minerals important to Mexico's mining industry. Precipitation and
temperature in the mountainous areas depends on the elevation. Between the months of November and March snow storms
are possible in the lower elevations and are frequent in the higher elevations. Several watersheds located in the Sierra Madre Occidental supply all of the water that flows through the state; most of the rivers eventually empty into the Río Grande. Temperatures in some canyons in the state reach over 100 °F (38 °C) in the summer, while the same areas rarely drop below 32 °F (0 °C) in the winter. Microclimates found in the heart of the Sierra Madre Occidental in the state could
be considered tropical, and wild tropical plants have been found in some canyons. La Barranca del Cobre, or Copper
Canyon, is a spectacular canyon system larger and deeper than the Grand Canyon; it also contains Mexico's two
tallest waterfalls: Basaseachic Falls and Piedra Volada. There are two national parks found in the mountainous area
of the state: Cumbres de Majalca National Park and Basaseachic Falls National Park. In the far eastern part of the
state the Chihuahuan Desert dominates due to low precipitation and extremely high temperatures; some areas of the
eastern part of the state, such as the Sand Dunes of Samalayuca, are so dry that no vegetation is found. There are two distinctive
climate zones found in the eastern part of the state: Hot Desert (BWh) and Cool Desert (BWk) which are differentiated
by average annual temperature due to differences in elevation. There is a transition zone in the middle of the state
between the two extremely different climates from the east and west; this zone is the Steppe characterized by a compromise
between juxtaposed climate zones. The state is also a host to a large population of birds which include endemic species
and migratory species: greater roadrunner Geococcyx californianus, cactus wren Campylorhynchus brunneicapillus, Mexican
jay Aphelocoma ultramarina, Steller's jay Cyanocitta stelleri, acorn woodpecker Melanerpes formicivorus, canyon towhee
Pipilo fuscus, mourning dove Zenaida macroura, broad-billed hummingbird Cynanthus latirostris, Montezuma quail Cyrtonyx
montezumae, mountain trogon Trogon mexicanus, turkey vulture Cathartes aura, and golden eagle Aquila chrysaetos.
Trogon mexicanus is an endemic species found in the mountains in Mexico; it is considered an endangered species and has symbolic significance to Mexicans. During the Mexican Revolution, Álvaro Obregón invited a group
of Canadian German-speaking Mennonites to resettle in Mexico. By the late 1920s, some 7,000 had immigrated to Chihuahua
State and Durango State, almost all from Canada, with only a few from the U.S. and Russia. Today, Mexico accounts for
about 42% of all Mennonites in Latin America. Mennonites in the country stand out because of their light skin, hair,
and eyes. They are a largely insular community that speaks a form of German and wear traditional clothing. They own
their own businesses in various communities in Chihuahua, and account for about half of the state's farm economy,
excelling in cheese production. Agriculture is a relatively small component of the state's economy and varies greatly with the climate across the state. The state ranked first in Mexico for the production of the following
crops: oats, chile verde, cotton, apples, pecans, and membrillo. The state has an important dairy industry with large
milk processors throughout the state. Delicias is home to Alpura, the second-largest dairy company in Mexico. The
state has a large logging industry, ranking second in oak and third in pine in Mexico. The mining industry is small but continues to produce large amounts of minerals. The state ranked first in the country for the production
of lead with 53,169 metric tons. Chihuahua ranked second in Mexico for zinc at 150,211 metric tons, silver at 580,271
kg, and gold at 15,221.8 kg. Nueva Vizcaya (New Biscay) was the first province of northern New Spain to be explored
and settled by the Spanish. Around 1528, a group of Spanish explorers, led by Álvar Núñez Cabeza de Vaca, first entered the territory of what is now Chihuahua. The conquest of the territory lasted nearly a century and
encountered fierce resistance from the Conchos tribe, but the desire of the Spanish Crown to transform the region
into a bustling mining center led to a strong strategy to control the area. Hidalgo was turned over to the Bishop
of Durango, Francisco Gabriel de Olivares, for an official defrocking and excommunication on July 27, 1811. He was
then found guilty of treason by a military court and executed by firing squad on July 30 at 7 in the morning. Before
his execution, he thanked his jailers, Private Soldiers Ortega and Melchor, in letters for their humane treatment.
At his execution, Hidalgo placed his right hand over his heart to show the riflemen where they should aim. He also
refused the use of a blindfold. His body, along with the bodies of Allende, Aldama and José Mariano Jiménez were
decapitated, and the heads were put on display on the four corners of the Alhóndiga de Granaditas in Guanajuato.
The heads remained there for ten years until the end of the Mexican War of Independence to serve as a warning to
other insurgents. Hidalgo's headless body was first displayed outside the prison but then buried in the Church of
St Francis in Chihuahua. Those remains would later be transferred in 1824 to Mexico City. Because of the general
instability of the federal government during 1828, the installation of the new legislature did not take place until
the middle of the following year. It was quickly dissolved by Governor Santiago de Baca Ortiz, who replaced it with
one of a more pronounced Yorkino character. When Guerrero's liberal administration was overthrown in December, Gaspar de Ochoa
aligned with Anastasio Bustamante, and in February 1830, organized an opposition group that arrested the new governor,
F. Elorriaga, along with other prominent Yorkinos. He then summoned the legislature, which had been dissolved by
Baca. The civil and military authorities were now headed by J. A. Pescador and Simón Ochoa. Comandante general Simón
Elías González was nominated governor, and military command was given to Colonel J.J. Calvo, whose firmness had earned well-merited praise. The state was in the midst of a war with the Apaches, which became the focus of all its energy
and resources. After a review of the situation, Simón Elías González declared that the interests of the territory
would be best served by uniting the civil and military power, at least while the campaign lasted. He resigned under
opposition, but was renominated in 1837. During the American occupation of the state, the number of Indian attacks
was drastically reduced, but in 1848 the attacks resumed to such a degree that the Mexican officials had no choice
but to resume military projects to protect Mexican settlements in the state. Through the next three decades the state
faced constant attacks by indigenous peoples on Mexican settlements. After the occupation the people of the state were worried about potential attacks from the hostile indigenous tribes north of the Rio Grande; as a result, by a decree of July 19, 1848, the state established 18 military colonies along the Rio Grande. The new military colonies were
to replace the presidios as population centers to prevent future invasions by indigenous tribes; these policies remained
prominent in the state until 1883. Eventually the state replaced the old security system with a policy of forming militias from every Mexican in the state between the ages of 18 and 55 capable of serving, to fulfill the mandate of having six defenders for every 1,000 residents. The liberal political forces maintained strong control
over the state government until shortly after the French Intervention, which turned the tables in favor of the conservative
forces once again. The intervention had serious repercussions for the state of Chihuahua. President Juárez, in an
effort to organize a strong defense against the French, decreed a list of national guard units that every state had
to contribute to the Ministry of War and the Navy; Chihuahua was responsible for inducting 2,000 men. Regaining power,
Governor Luis Terrazas assigned the First Battalion of Chihuahua for integration into the national army led by General
Jesús González Ortega; the battalion was deployed to Puebla. After the defeat of the army in Puebla, the Juárez administration
was forced to abandon Mexico City; the president retreated further north seeking refuge in the state of Chihuahua.
President Juárez once again based his government in the state of Chihuahua and it served as the center for the resistance
against the French invasion throughout Mexico. On March 25, 1866, a battle ensued in the Plaza de Armas in the center
of Chihuahua City between the French imperial forces that were guarding the plaza and the Republican forces led by
General Terrazas. Being completely caught off guard, the French imperial forces sought refuge by bunkering themselves
in the Cathedral of the Holy Cross, Our Lady of Regla, and St Francis of Assisi, making it almost impossible to penetrate
their defenses. General Terrazas then decided to fire a heavy artillery barrage with 8 kg cannonballs. The first
cannon fired hit a bell in the tower of the church, instantly breaking it in half; soon after, 200 men of the imperial
army forces surrendered. The republican forces had recovered control over the state capital. The bell in the church
was declared a historical monument and can be seen today in the Cathedral. By April 1866, the state government had
established a vital trading route from Chihuahua City to San Antonio, Texas; the government began to replenish its supplies and reinforce its fight against the Imperial forces. The officials in Mexico City reduced the price of
corn from six cents to two cents a pound. The northern portion of the state continued to decline economically, which led to another revolt, led by G. Casavantes, in August 1879; Governor Trías was accused of misappropriation of funds and inefficient administration of the state. Casavantes took the state capital and occupied it briefly; he also succeeded in forcing Governor Trías into exile. Shortly afterwards, the federal government sent an entourage led by
Treviño; Casavantes was immediately ordered to resign his position. Casavantes declared political victory as he was
able to publicly accuse and depose Governor Trías. At the same time the states of Durango and Coahuila had a military
confrontation over territorial claims and water rights; this altercation between the states required additional federal
troops to stabilize the area. Later a dispute ensued again among the states of Coahuila, Durango, and Chihuahua over
the mountain range area known as Sierra Mojada, when large deposits of gold ore were discovered. The state of Chihuahua officially submitted a declaration of protest in May 1880 that was amicably settled shortly after. Despite the difficulties
at the beginning, Díaz was able to secure and stabilize the state, which earned the confidence and support of the
people. A handful of families owned large estates (known as haciendas) and controlled the greater part of the land
across the state while the vast majority of Chihuahuans were landless. The state economy was largely defined by ranching
and mining. At the expense of the working class, the Díaz administration promoted economic growth by encouraging
investment from foreign companies from the United Kingdom, France, Imperial Germany and the United States. The proletariat
was often exploited, and found no legal protection or political recourse to redress injustices. On March 26, 1913,
Venustiano Carranza issued the Plan de Guadalupe, which refused to recognize Huerta as president and called for war
between the two factions. Soon after the assassination of President Madero, Pancho Villa returned to Mexico to fight Huerta, but with only a handful of comrades. However, by 1913 his forces had swelled into an army of thousands, called
the División del Norte (Northern Division). Villa and his army, along with Emiliano Zapata and Álvaro Obregón, united
with Carranza to fight against Huerta. In March 1914 Carranza traveled to Ciudad Juárez, which served as the rebellion's
capital for the remainder of the struggle with Huerta. In April 1914 U.S. opposition to Huerta had reached its peak,
blockading the regime's ability to resupply from abroad. Carranza, trying to maintain his nationalist credentials, threatened war with the United States. In his spontaneous response to U.S. President Woodrow Wilson, Carranza asked "that the
president withdraw American troops from Mexico.” The desert zone also accounts for about a third of the state's surface
area. The Chihuahuan Desert is an international biome that also extends into the neighboring Mexican state of Coahuila
and into the U.S. states of Texas and New Mexico. The desert zone is mainly of flat topography with some small mountain
ranges that run north to south. The climate of the desert zone varies slightly across the state. The lower elevations of the desert zone are found in the north along the Rio Grande, which experiences hotter temperatures in summer and winter, while the southern portion of the desert zone experiences cooler temperatures due to its higher elevation. The Samalayuca dunes cover an area of about 150 km2; they are an impressive site of the Chihuahuan Desert and a protected area of the state due to unique species of plants and animals. The fauna in the state is just
as diverse as the flora and varies greatly due to the large contrast in climates. In the mountain zone of the state
the most observed mammals are: Mexican fox squirrel (Sciurus nayaritensis), antelope jackrabbit (Lepus alleni), raccoon
(Procyon lotor), hooded skunk (Mephitis macroura), wild boar (Sus scrofa), collared peccary (Pecari tajacu), white-tailed
deer (Odocoileus virginianus), mule deer (Odocoileus hemionus), American bison (Bison bison), cougar (Puma concolor), eastern cottontail (Sylvilagus floridanus), North American porcupine (Erethizon dorsatum), bobcat (Lynx rufus), Mexican wolf (Canis lupus baileyi), and coyote (Canis latrans). The American black bear (Ursus americanus) is also found, but in very small numbers. The Mexican wolf, once abundant, has been extirpated. The main cause of degradation has been grazing. Although there are many reptilian species in the mountains, the most observed include: Northern Mexican pine snake (Pituophis deppei jani), Texas horned lizard (Phrynosoma cornutum), rock rattlesnake (Crotalus lepidus), black-tailed rattlesnake (Crotalus molossus), and plateau tiger salamander (Ambystoma velasci), one of possibly many amphibians
to be found in the mountains. The last census in Mexico that asked for an individual's race, which was taken in 1921,
indicated that 50.09% of the population identified as Mestizo (mixed Amerindian and European descent). The second-largest
group was whites at 36.33% of the population. The third-largest group was the "pure indigenous" population, constituting
12.76% of the population. The remaining 0.82% of the population of Chihuahua was considered "other", i.e., neither
Mestizo, indigenous, nor white. The most important indigenous tribes of the state of Chihuahua are: During the 1990s
after NAFTA was signed, industrial development grew rapidly with foreign investment. Large factories known as maquiladoras
were built to export manufactured goods to the United States and Canada. Today, most of the maquiladoras produce
electronics, automobile, and aerospace components. There are more than 406 companies operating under the federal
IMMEX or Prosec program in Chihuahua. A large portion of the state's manufacturing sector consists of 425 factories divided among 25 industrial parks, accounting for 12.47% of the maquiladoras in Mexico and employing 294,026 people
in the state. While export-driven manufacturing is one of the most important components of the state's economy, the
industrial sector is quite diverse and can be broken down into several sectors, which are: electronics, agro-industrial,
wood base manufacturing, mineral, and biotech. Similar to the rest of the country, small businesses continue to be
the foundation of the state’s economy. Small business employs the largest portion of the population.[citation needed]
Shias believe that Imamah is one of the Principles of Faith (Usul al-Din). As verse 4:165 of the Quran expresses the necessity of appointing prophets so that the people would have no plea against Allah, the question arises of who will play the prophet's role after his demise. The same logic that necessitated the sending of prophets thus also applies to Imamah: Allah must appoint someone similar to the prophet in his attributes and Ismah as his successor, to guide the people without any deviation in religion. They refer to the verse (...This day I have perfected for
you your religion and completed My favor upon you and have approved for you Islam as religion...) 5:3 of the Quran, which was revealed to the prophet when he appointed Ali as his successor on the day of Ghadir Khumm. Imamah (Arabic: إمامة)
is the Shia Islam doctrine (belief) of religious, spiritual and political leadership of the Ummah. The Shia believe
that the Imams are the true Caliphs or rightful successors of Muhammad, and further that Imams are possessed of divine
knowledge and authority (Ismah) as well as being part of the Ahl al-Bayt, the family of Muhammad. These Imams have
the role of providing commentary and interpretation of the Quran as well as guidance to their tariqa followers as
is the case of the living Imams of the Nizari Ismaili tariqah. Within Shia Islam (Shiism), the various sects came
into being because they differed over their Imams' successions, just as the Shia - Sunni separation within Islam
itself had come into being from the dispute that had arisen over the succession to Muhammad. Each succession dispute
brought forth a different tariqah (literal meaning 'path'; extended meaning 'sect') within Shia Islam. Each Shia
tariqah followed its own particular Imam's dynasty, thus resulting in different numbers of Imams for each particular
Shia tariqah. When the dynastic line of the separating successor Imam ended with no heir to succeed him, then either
he (the last Imam) or his unborn successor was believed to have gone into concealment, that is, The Occultation.
The Divine Leader must be from the family of Muhammad.[citation needed] According to Ali al-Ridha,
since it is obligatory to obey him, there should be a sign to clearly indicate the Divine Leader. That sign is his
well-known ties of kinship with Muhammad and his clear appointment so that the people could distinguish him from
others, and be clearly guided toward him. Otherwise, others would be held nobler than Muhammad's offspring and would be followed and obeyed, while the offspring of Muhammad would be obedient and subject to the offspring of Muhammad’s enemies such as Abi Jahl or Ibn Abi Ma’eet.[original research?] However, Muhammad is much nobler than others, and so it is he who is to be in charge and obeyed. Moreover, once the prophethood of Muhammad is attested and the people obey him, no one would hesitate to follow his offspring, and this would not be hard for anyone, whereas following the offspring of corrupt families is difficult.[original research?] That is perhaps why the basic characteristic of Muhammad and the other prophets was their nobility:[original research?] none of them, it is said, originated from a disgraced family.[citation
needed] It is believed that all Muhammad's ancestors up to Adam were true Muslims. [a][citation needed] Jesus was
also from a pious family, as it is mentioned in the Quran that after his birth, people said to Mary: "O sister of Aaron, your father was not a man of evil, nor was your mother unchaste."[b][improper synthesis?] The word "Imām" denotes
a person who stands or walks "in front". For Sunni Islam, the word is commonly used to mean a person who leads the
course of prayer in the mosque. It also means the head of a madhhab ("school of thought"). However, from the Shia
point of view this is merely the basic understanding of the word in the Arabic language and, for its proper religious
usage, the word "Imam" is applicable only to those members of the house of Muhammad designated as infallible by the
preceding Imam. Although all these different Shia tariqahs belong to the Shia group (as opposed to the Sunni group)
in Islam, there are major doctrinal differences between the main Shia tariqahs. After that there is the complete
doctrinal break between all the different Shia tariqahs whose last Imams have gone into Occultation and the Shia
Nizari Ismailis who deny the very concept of Occultation. The Shia Nizari Ismailis by definition have to have a present
and living Imam until the end of time.[citation needed] Thus if any living Nizari Ismaili Imam fails to leave behind
a successor after him then the Nizari Ismailism’s cardinal principle would be broken and it’s very raison d'être
would come to an end. According to Ismā‘īlīsm, Allah has sent seven great prophets, known as “Nātıq” (Spoken), in order to disseminate and improve his Dīn of Islam. Each of these great prophets also has one assistant, known as the “Sāmad” (Silent) Imām. At the end of each silsila of seven “Sāmads”, one great “Nātıq” has been sent in order to renew the Dīn of Islam. After Adam and his son Seth, and after six “Nātıq”–“Sāmad” silsilas (Noah–Shem), (Abraham–Ishmael), (Moses–Aaron), (Jesus–Simeon), (Muhammad bin ʿAbd Allāh–Ali ibn Abu Tālib); the silsila of “Nātıqs” and “Sāmads” has been completed with (Muhammad bin Ismā‘īl as-ṣaghīr (Maymûn’ûl-Qaddāh)–ʿAbd Allāh Ibn-i Maymûn and
his sons). The Shia tariqah with a majority of adherents are the Twelvers who are commonly known as the "Shia". After
that come the Nizari Ismailis commonly known as the Ismailis; and then come the Mustalian Ismailis commonly known
as the "Bohras" with further schisms within their Bohri tariqah. The Druze tariqah (very small in number today) initially
were of the Fatimid Ismailis and separated from them (the Fatimid Ismailis) after the death of the Fatimid Imam and
Caliph Hakim Bi Amrillah. The Shia Sevener tariqah no longer exists. Another small tariqah is the Zaidi Shias, also
known as the Fivers and who do not believe in The Occultation of their last Imam. During the Minor Occultation (Ghaybat
al-Sughrá), it is believed that al-Mahdi maintained contact with his followers via deputies (Arab. an-nuwāb al-arbaʻa
or "the Four Leaders"). They represented him and acted as agents between him and his followers. Whenever the believers
faced a problem, they would write their concerns and send them to his deputy. The deputy would ascertain his verdict,
endorse it with his seal and signature and return it to the relevant parties. The deputies also collected zakat and
khums on his behalf. The Ismailis differ from Twelvers because they had living imams for centuries after the last
Twelver Imam went into concealment. They followed Isma'il ibn Jafar, elder brother of Musa al-Kadhim, as the rightful
Imam after his father Ja'far al-Sadiq. The Ismailis believe that whether Imam Ismail did or did not die before Imam
Ja'far, he had passed on the mantle of the imamate to his son Muḥammad ibn Ismail as the next imam. Thus, their line
of imams is as follows (the years of their individual imamats during the Common Era are given in brackets): The line
of imams of the Mustali Ismaili Shia Muslims (also known as the Bohras/Dawoodi Bohra) continued up to Aamir ibn Mustali.
After his death, they believe their 21st Imam Taiyab abi al-Qasim went into a Dawr-e-Satr (period of concealment)
that continues to this day. In the absence of an imam they are led by a Dai-al-Mutlaq (absolute missionary) who manages
the affairs of the Imam-in-Concealment until re-emergence of the Imam from concealment. Dawoodi Bohra's present 53rd
Da'i al-Mutlaq is His Holiness Syedna Mufaddal Saifuddin (TUS), who succeeded his predecessor, the 52nd Da'i al-Mutlaq
His Holiness Syedna Mohammed Burhanuddin (RA). Furthermore, there has been a split in the Dawoodi Bohra sect, which has led to the formation of the Qutbi Bohra sect, led by Khuzaima Qutbuddin. All Muslims believe
that Muhammad had said: "To whomsoever I am Mawla, Ali is his Mawla." This hadith has been narrated in different
ways by many different sources in no less than 45 hadith books[citation needed] of both Sunni and Shia collections.
This hadith has also been narrated by the collector of hadiths, al-Tirmidhi, 3713;[citation needed] as well as Ibn
Maajah, 121;[citation needed] etc. The major point of conflict between the Sunni and the Shia is in the interpretation
of the word 'Mawla'. For the Shia the word means 'Lord and Master' and has the same elevated significance as when
the term had been used to address Muhammad himself during his lifetime. At Ghadir Khumm oasis, just a few months before his death, Muhammad transferred this title to Ali both by speech and physically, by having his closest companions, including Abu Bakr, Umar and Uthman (the three future Caliphs who preceded Ali as Caliph), publicly accept Ali as their Lord and Master, taking Ali's hand in both of theirs as a token of their allegiance. Those who came to look upon Ali as Muhammad's immediate successor even before Muhammad's death came to be known as the Shia. However, for the Sunnis the word simply means
the 'beloved' or the 'revered' and has no other significance at all. By the verse Quran, 2:124, Shias believe that
Imamah is a divine position always Imamah is accompanied by the word guidance, of course a guidance by God's Command.A
kind of guidance which brings humanity to the goal. Regarding 17:71, no age can be without an Imam. So, according
to the upper verse 1.Imamah is a position which is appointed by God and must be specified by Him 2.Imam is protected
by a divine protection and no one exceles him in nobility 3. No age can be without an Imam and finally Imam knows
everything which is needed for human being to get to the truth and goal. According to the majority of Shī'a, namely
the Twelvers (Ithnā'ashariyya), the following is a listing of the rightful successors to Muḥammad. Each Imam was
the son of the previous Imam, except for Hussayn ibn ‘Alī, who was the brother of Hassan ibn ‘Alī. The belief in this
succession to Muḥammad stems from various Quranic verses which include: 75:36, 13:7, 35:24, 2:30, 2:124, 36:26, 7:142,
42:23.[citation needed] They support their discussion by citing Genesis 17:19–20 and Sunni hadith: Sahih Muslim, hadith number 4478, English translation by Abdul Hamid Siddiqui.[original research?]
Pitch is an auditory sensation in which a listener assigns musical tones to relative positions on a musical scale based primarily
on their perception of the frequency of vibration. Pitch is closely related to frequency, but the two are not equivalent.
Frequency is an objective, scientific attribute that can be measured. Pitch is each person's subjective perception
of a sound, which cannot be directly measured. However, this does not necessarily mean that most people won't agree
on which notes are higher and lower. Assigning each pitch a number, as in the MIDI standard, creates a linear pitch space in which octaves have size 12, semitones (the
distance between adjacent keys on the piano keyboard) have size 1, and A440 is assigned the number 69. (See Frequencies
of notes.) Distance in this space corresponds to musical intervals as understood by musicians. An equal-tempered
semitone is subdivided into 100 cents. The system is flexible enough to include "microtones" not found on standard
piano keyboards. For example, the pitch halfway between C (60) and C♯ (61) can be labeled 60.5. The relative pitches
of individual notes in a scale may be determined by one of a number of tuning systems. In the west, the twelve-note
chromatic scale is the most common method of organization, with equal temperament now the most widely used method
of tuning that scale. In it, the pitch ratio between any two successive notes of the scale is exactly the twelfth
root of two (or about 1.05946). In well-tempered systems (as used in the time of Johann Sebastian Bach, for example),
different methods of musical tuning were used. Almost all of these systems have one interval in common, the octave,
where the pitch of one note is double the frequency of another. For example, if the A above middle C is 440 Hz, the
A an octave above that is 880 Hz. According to the American National Standards Institute, pitch is the auditory
attribute of sound according to which sounds can be ordered on a scale from low to high. Since pitch is such a close
proxy for frequency, it is almost entirely determined by how quickly the sound wave is making the air vibrate and
has almost nothing to do with the intensity, or amplitude, of the wave. That is, "high" pitch means very rapid oscillation,
and "low" pitch corresponds to slower oscillation. Despite that, the idiom relating vertical height to sound pitch
is shared by most languages. At least in English, it is just one of many deep conceptual metaphors that involve up/down.
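As a quick illustration of the equal-tempered arithmetic described earlier (each semitone multiplies frequency by the twelfth root of two, so twelve semitones double it to an octave), here is a minimal Python sketch; the function name is ours, not from any standard library:

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12) ≈ 1.05946.
SEMITONE = 2 ** (1 / 12)

def transpose(freq_hz, semitones):
    """Frequency after moving a given number of equal-tempered semitones."""
    return freq_hz * SEMITONE ** semitones

print(round(SEMITONE, 5))             # 1.05946, the semitone ratio
print(round(transpose(440.0, 12), 1)) # 880.0 -- twelve semitones = one octave
print(round(transpose(440.0, 1), 2))  # 466.16 -- one semitone above A440
```

Stacking twelve such ratios yields exactly (2^(1/12))^12 = 2, which is why the octave survives intact in equal temperament while every other interval is slightly tempered.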
The exact etymological history of the musical sense of high and low pitch is still unclear. There is evidence that
humans do actually perceive that the source of a sound is slightly higher or lower in vertical space when the sound
frequency is increased or decreased. A sound generated on any instrument produces many modes of vibration that occur
simultaneously. A listener hears numerous frequencies at once. The vibration with the lowest frequency is called
the fundamental frequency; the other frequencies are overtones. Harmonics are an important class of overtones with
frequencies that are integer multiples of the fundamental. Whether or not the higher frequencies are integer multiples,
they are collectively called the partials, referring to the different parts that make up the total spectrum. The
pitch of complex tones can be ambiguous, meaning that two or more different pitches can be perceived, depending upon
the observer. When the actual fundamental frequency can be precisely determined through physical measurement, it
may differ from the perceived pitch because of overtones, also known as upper partials, harmonic or otherwise. A
complex tone composed of two sine waves of 1000 and 1200 Hz may sometimes be heard as up to three pitches: two spectral
pitches at 1000 and 1200 Hz, derived from the physical frequencies of the pure tones, and the combination tone at
200 Hz, corresponding to the repetition rate of the waveform. In a situation like this, the percept at 200 Hz is
commonly referred to as the missing fundamental, which is often the greatest common divisor of the frequencies present.
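Since the missing fundamental is often the greatest common divisor of the frequencies present, the 1000 Hz + 1200 Hz example above can be checked with a small sketch (assuming the partials are given as integer hertz; the helper name is illustrative):

```python
from functools import reduce
from math import gcd

def missing_fundamental(freqs_hz):
    """Greatest common divisor of the partials' frequencies (integer Hz)."""
    return reduce(gcd, freqs_hz)

# The two sine waves from the example in the text:
print(missing_fundamental([1000, 1200]))       # 200 Hz
print(missing_fundamental([800, 1000, 1200]))  # still 200 Hz
```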
The just-noticeable difference (jnd) (the threshold at which a change is perceived) depends on the tone's frequency
content. Below 500 Hz, the jnd is about 3 Hz for sine waves, and 1 Hz for complex tones; above 1000 Hz, the jnd for
sine waves is about 0.6% (about 10 cents). The jnd is typically tested by playing two tones in quick succession with
the listener asked if there was a difference in their pitches. The jnd becomes smaller if the two tones are played
simultaneously as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps
in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000
Hz, is 120. It is still possible for two sounds of indefinite pitch to clearly be higher or lower than one another.
For instance, a snare drum sounds higher pitched than a bass drum though both have indefinite pitch, because its
sound contains higher frequencies. In other words, it is possible and often easy to roughly discern the relative
pitches of two sounds of indefinite pitch, but sounds of indefinite pitch do not neatly correspond to any specific
pitch. A special type of pitch often occurs in free nature when sound reaches the ear of an observer directly from
the source, and also after reflecting off a sound-reflecting surface. This phenomenon is called repetition pitch,
because the addition of a true repetition of the original sound to itself is the basic prerequisite. For example,
one might refer to the A above middle C as a', A4, or 440 Hz. In standard Western equal temperament, the notion of
pitch is insensitive to "spelling": the description "G4 double sharp" refers to the same pitch as A4; in other temperaments,
these may be distinct pitches. Human perception of musical intervals is approximately logarithmic with respect to
fundamental frequency: the perceived interval between the pitches A220 and A440 is the same as the perceived
interval between the pitches A440 and A880. Motivated by this logarithmic perception, music theorists sometimes represent
pitches using a numerical scale based on the logarithm of fundamental frequency. For example, one can adopt the widely
used MIDI standard to map fundamental frequency, f, to a real number, p, as follows: p = 69 + 12 × log2(f/440). Temporal theories offer an alternative
that appeals to the temporal structure of action potentials, mostly the phase-locking and mode-locking of action
potentials to frequencies in a stimulus. The precise way this temporal structure helps code for pitch at higher levels
is still debated, but the processing seems to be based on an autocorrelation of action potentials in the auditory
nerve. However, it has long been noted that a neural mechanism that may accomplish a delay—a necessary operation
of a true autocorrelation—has not been found. At least one model shows that a temporal delay is unnecessary to produce
an autocorrelation model of pitch perception, appealing to phase shifts between cochlear filters; however, earlier
work has shown that certain sounds with a prominent peak in their autocorrelation function do not elicit a corresponding
pitch percept, and that certain sounds without a peak in their autocorrelation function nevertheless elicit a pitch.
To be a more complete model, autocorrelation must therefore apply to signals that represent the output of the cochlea,
as via auditory-nerve interspike-interval histograms. Some theories of pitch perception hold that pitch has inherent
octave ambiguities, and therefore is best decomposed into a pitch chroma, a periodic value around the octave, like
the note names in western music—and a pitch height, which may be ambiguous, that indicates the octave the pitch is
in.
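The logarithmic pitch numbering discussed earlier (with A440 mapped to 69 and octaves spanning 12) and the chroma/height decomposition described in the last sentence can be sketched as follows; splitting an integer pitch number into `p % 12` and `p // 12` is one simple convention, not the only one:

```python
from math import log2

def midi_pitch(freq_hz):
    """Map frequency to a MIDI-style pitch number: p = 69 + 12 * log2(f / 440)."""
    return 69 + 12 * log2(freq_hz / 440)

def chroma_and_height(p):
    """Split an integer pitch number into chroma (position within the octave)
    and height (which octave it falls in) -- one simple convention."""
    return p % 12, p // 12

print(midi_pitch(440.0))      # 69.0 (A440)
print(midi_pitch(880.0))      # 81.0 (an octave above: +12)
print(chroma_and_height(69))  # (9, 5); 81 gives (9, 6): same chroma, higher octave
```

Pitches an octave apart share a chroma but differ in height, which is exactly the octave ambiguity these theories describe.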
The motif of the England national football team has three lions passant guardant, the emblem of King Richard I, who reigned
from 1189 to 1199. The lions, often blue, have had minor changes to colour and appearance. Initially topped by a
crown, this was removed in 1949 when the FA was given an official coat of arms by the College of Arms; this introduced
ten Tudor roses, one for each of the regional branches of the FA. Since 2003, England top their logo with a star
to recognise their World Cup win in 1966; this was first embroidered onto the left sleeve of the home kit, and a
year later was moved to its current position, first on the away shirt. Although England's first away kits were blue,
England's traditional away colours are red shirts, white shorts and red socks. In 1996, England's away kit was changed
to grey shirts, shorts and socks. This kit was only worn three times, including against Germany in the semi-final
of Euro 96 but the deviation from the traditional red was unpopular with supporters and the England away kit remained
red until 2011, when a navy blue away kit was introduced. The away kit is also sometimes worn during home matches,
when a new edition has been released to promote it. England failed to qualify for the World Cup in 1974, 1978 and
1994. The team's earliest exit in the competition itself was its elimination in the first round in 1950, 1958 and
most recently in the 2014 FIFA World Cup, after being defeated in both their opening two matches for the first time,
versus Italy and Uruguay in Group D. In 1950, four teams remained after the first round, in 1958 eight teams remained
and in 2014 sixteen teams remained. In 2010, England suffered its most resounding World Cup defeat (4–1 to Germany)
in the Round of 16, after drawing with the United States and Algeria and defeating Slovenia 1–0 in the group stage.
Their first ever defeat on home soil to a foreign team was a 0–2 loss to the Republic of Ireland, on 21 September
1949 at Goodison Park. A 6–3 loss in 1953 to Hungary, was their second defeat by a foreign team at Wembley. In the
return match in Budapest, Hungary won 7–1. This still stands as England's worst ever defeat. After the game, a bewildered
Syd Owen said, "it was like playing men from outer space". In the 1954 FIFA World Cup, England reached the quarter-finals
for the first time, and lost 4–2 to reigning champions Uruguay. In February 2012, Capello resigned from his role
as England manager, following a disagreement with the FA over their request to remove John Terry from team captaincy
after accusations of racial abuse concerning the player. Following this, there was media speculation that Harry Redknapp
would take the job. However, on 1 May 2012, Roy Hodgson was announced as the new manager, just six weeks before UEFA
Euro 2012. England managed to finish top of their group, winning two and drawing one of their fixtures, but exited
the Championships in the quarter-finals via a penalty shoot-out, losing to Italy. The England national football
team represents England and the Crown Dependencies of Jersey, Guernsey and the Isle of Man for football matches as
part of FIFA-authorised events, and is controlled by The Football Association, the governing body for football in
England. England are one of the two oldest national teams in football, alongside Scotland, whom they played in the
world's first international football match in 1872. England's home ground is Wembley Stadium, London, and the current
team manager is Roy Hodgson. To begin with, England had no permanent home stadium. They joined FIFA in 1906 and played
their first ever games against countries other than the Home Nations on a tour of Central Europe in 1908. Wembley
Stadium was opened in 1923 and became their home ground. The relationship between England and FIFA became strained,
and this resulted in their departure from FIFA in 1928, before they rejoined in 1946. As a result, they did not compete
in a World Cup until 1950, in which they suffered a 1–0 defeat by the United States, failing to get past the
first round in one of the most embarrassing defeats in the team's history. England is quite a successful nation at
the UEFA European Football Championship, having finished in third place in 1968 and reached the semi-final in 1996.
England hosted Euro 96 and have appeared in eight UEFA European Championship Finals tournaments, tied for ninth-best.
The team has also reached the quarter-final on two recent occasions in 2004 and 2012. The team's worst result in
the competition was a first-round elimination in 1980, 1988, 1992 and 2000. The team did not enter in 1960, and they
failed to qualify in 1964, 1972, 1976, 1984, and 2008. England qualified for the 1970 FIFA World Cup in Mexico as
reigning champions, and reached the quarter-finals, where they were knocked out by West Germany. England had been
2–0 up, but were eventually beaten 3–2 after extra time. They failed to qualify for the 1974 FIFA World Cup, leading to Ramsey's dismissal, and for the 1978 tournament. Under Ron Greenwood, they managed to qualify for the 1982 FIFA World Cup in
Spain (the first time competitively since 1962); despite not losing a game, they were eliminated in the second group
stage. Sven-Göran Eriksson took charge of the team between 2001 and 2006, and was the first non–English manager of
England. Despite controversial press coverage of his personal life, Eriksson was consistently popular with the majority
of fans.[citation needed] He guided England to the quarter-finals of the 2002 FIFA World Cup, UEFA Euro 2004, and
the 2006 FIFA World Cup. He lost only five competitive matches during his tenure, and England rose to a No. 4 world
ranking under his guidance. His contract was extended by the Football Association by two years, to include UEFA Euro
2008. However, it was terminated by them at the 2006 FIFA World Cup's conclusion. All England matches are broadcast
with full commentary on BBC Radio 5 Live. From the 2008–09 season until the 2017–18 season, England's home and away
qualifiers, and friendlies both home and away are broadcast live on ITV (often with the exception of STV, the ITV
affiliate in central and northern Scotland). England's away qualifiers for the 2010 World Cup were shown on Setanta
Sports until that company's collapse. As a result of Setanta Sports's demise, England's World Cup qualifier in Ukraine
on 10 October 2009 was shown in the United Kingdom on a pay-per-view basis via the internet only. This one-off event
was the first time an England game had been screened in such a way. The number of subscribers, paying between £4.99
and £11.99 each, was estimated at between 250,000 and 300,000 and the total number of viewers at around 500,000.
England first appeared at the 1950 FIFA World Cup and have appeared in 14 FIFA World Cups; they are tied for sixth-best
in terms of number of wins alongside France and Spain. The national team is one of eight national teams to have won
at least one FIFA World Cup title. The England team won their first and only World Cup title in 1966. The tournament
was played on home soil and England defeated West Germany 4–2 in the final. In 1990, England finished in fourth place, losing 2–1 to host nation Italy in the third-place play-off after losing on penalties to eventual champions West Germany in the
semi-final. The team has also reached the quarter-final on two recent occasions in 2002 and 2006. Previously, they
reached this stage in 1954, 1962, 1970 and 1986.
In August 1836, two real estate entrepreneurs—Augustus Chapman Allen and John Kirby Allen—from New York, purchased 6,642
acres (26.88 km2) of land along Buffalo Bayou with the intent of founding a city. The Allen brothers decided to name
the city after Sam Houston, the popular general at the Battle of San Jacinto, who was elected President of Texas
in September 1836. The great majority of slaves in Texas came with their owners from the older slave states. Sizable
numbers, however, came through the domestic slave trade. New Orleans was the center of this trade in the Deep South,
but there were slave dealers in Houston. Thousands of enslaved African-Americans lived near the city before the Civil
War. Many of them worked on sugar and cotton plantations near the city, while most of those in the city limits had
domestic and artisan jobs. In 1860 forty-nine percent of the city's population was enslaved. A few slaves, perhaps
as many as 2,000 between 1835 and 1865, came through the illegal African trade. Post-war Texas grew rapidly as migrants
poured into the cotton lands of the state. They also brought or purchased enslaved African Americans, whose numbers
nearly tripled in the state from 1850 to 1860, from 58,000 to 182,566. When World War II started, tonnage levels
at the port decreased and shipping activities were suspended; however, the war did provide economic benefits for
the city. Petrochemical refineries and manufacturing plants were constructed along the ship channel because of the
demand for petroleum and synthetic rubber products by the defense industry during the war. Ellington Field, initially
built during World War I, was revitalized as an advanced training center for bombardiers and navigators. The Brown
Shipbuilding Company was founded in 1942 to build ships for the U.S. Navy during World War II. Due to the boom in
defense jobs, thousands of new workers migrated to the city, both blacks and whites competing for the higher-paying
jobs. President Roosevelt had established a policy of non-discrimination for defense contractors, and blacks gained
some opportunities, especially in shipbuilding, although not without resistance from whites and increasing social
tensions that erupted into occasional violence. Economic gains of blacks who entered defense industries continued
in the postwar years. One wave of the population boom ended abruptly in the mid-1980s, as oil prices fell precipitously.
The space industry also suffered in 1986 after the Space Shuttle Challenger disintegrated shortly after launch. There
was a cutback in some activities for a period. In the late 1980s, the city's economy suffered from the nationwide
recession. After the early 1990s recession, Houston made efforts to diversify its economy by focusing on aerospace
and health care/biotechnology, and reduced its dependence on the petroleum industry. Since the increase of oil prices
in the 2000s, the petroleum industry has again increased its share of the local economy. Underpinning Houston's land
surface are unconsolidated clays, clay shales, and poorly cemented sands up to several miles deep. The region's geology
developed from river deposits formed from the erosion of the Rocky Mountains. These sediments consist of a series
of sands and clays deposited on decaying organic marine matter that, over time, transformed into oil and natural
gas. Beneath the layers of sediment is a water-deposited layer of halite, a rock salt. The porous layers were compressed
over time and forced upward. As it pushed upward, the salt dragged surrounding sediments into salt dome formations,
often trapping oil and gas that seeped from the surrounding porous sands. The thick, rich, sometimes black, surface
soil is suitable for rice farming in suburban outskirts where the city continues to grow. In the 1960s, Downtown
Houston consisted of a collection of mid-rise office structures. Downtown was on the threshold of an energy industry–led
boom in 1970. A succession of skyscrapers were built throughout the 1970s—many by real estate developer Gerald D.
Hines—culminating with Houston's tallest skyscraper, the 75-floor, 1,002-foot (305 m)-tall JPMorgan Chase Tower (formerly
the Texas Commerce Tower), completed in 1982. It is the tallest structure in Texas, 15th tallest building in the
United States, and the 85th tallest skyscraper in the world, based on highest architectural feature. In 1983, the
71-floor, 992-foot (302 m)-tall Wells Fargo Plaza (formerly Allied Bank Plaza) was completed, becoming the second-tallest
building in Houston and Texas. Based on highest architectural feature, it is the 17th tallest in the United States
and the 95th tallest in the world. In 2007, downtown Houston had over 43 million square feet (4,000,000 m²) of office
space. In 2006, the Houston metropolitan area ranked first in Texas and third in the U.S. in the category of
"Best Places for Business and Careers" by Forbes magazine. Foreign governments have established 92 consular offices
in Houston's metropolitan area, the third highest in the nation. Forty foreign governments maintain trade and commercial offices in the city, which also hosts 23 active foreign chambers of commerce and trade associations. Twenty-five foreign banks representing 13 nations operate in Houston, providing financial assistance to the international community. Many annual events
celebrate the diverse cultures of Houston. The largest and longest-running is the Houston Livestock Show and Rodeo, held over 20 days from early to late March; it is the largest annual livestock show and rodeo in the world. Another
large celebration is the annual night-time Houston Pride Parade, held at the end of June. Other annual events include
the Houston Greek Festival, Art Car Parade, the Houston Auto Show, the Houston International Festival, and the Bayou
City Art Festival, which is considered to be one of the top five art festivals in the United States. Of the 10 most
populous U.S. cities, Houston has the most total area of parks and green space, 56,405 acres (228 km2). The city
also has over 200 additional green spaces, totaling over 19,600 acres (79 km2), that are managed by the city, including the Houston Arboretum and Nature Center. The Lee and Joe Jamail Skatepark is a public skatepark owned and operated by the city of Houston, and is one of the largest skateparks in Texas, a 30,000-square-foot (2,800 m2) in-ground facility. The Gerald D. Hines Waterwall Park, located in the Uptown District of the city, serves as a popular venue for tourists, weddings, and various celebrations. A 2011 study by Walk Score ranked Houston the 23rd most walkable
of the 50 largest cities in the United States. Wet'n'Wild SplashTown is a water park located north of Houston. The
city of Houston has a strong mayoral form of municipal government. Houston is a home rule city and all municipal
elections in the state of Texas are nonpartisan. The city's elected officials are the mayor, the city controller, and
16 members of the Houston City Council. The current mayor of Houston is Sylvester Turner, a Democrat elected on a
nonpartisan ballot. Houston's mayor serves as the city's chief administrator, executive officer, and official representative,
and is responsible for the general management of the city and for seeing that all laws and ordinances are enforced.
The Houston–The Woodlands–Sugar Land metropolitan area is served by one public television station and two public
radio stations. KUHT (HoustonPBS) is a PBS member station and was the first public television station in the United
States. Houston Public Radio is listener-funded and comprises two NPR member stations: KUHF (KUHF News) and KUHA
(Classical 91.7). KUHF is news/talk radio and KUHA is a classical music station. The University of Houston System
owns and holds broadcasting licenses to KUHT, KUHF, and KUHA. The stations broadcast from the Melcher Center for
Public Broadcasting, located on the campus of the University of Houston. Houston (i/ˈhjuːstən/ HYOO-stən) is the
most populous city in Texas and the fourth most populous city in the United States, located in Southeast Texas near
the Gulf of Mexico. With a census-estimated 2014 population of 2.239 million people, within a land area of 599.6
square miles (1,553 km2), it also is the largest city in the Southern United States, as well as the seat of Harris
County. It is the principal city of the Houston–The Woodlands–Sugar Land metropolitan area, the fifth-most populous in the United States. By 1860, Houston had emerged as a commercial and railroad hub for the export of cotton.
Railroad spurs from the Texas inland converged in Houston, where they met rail lines to the ports of Galveston and
Beaumont. During the American Civil War, Houston served as a headquarters for General John Bankhead Magruder, who
used the city as an organization point for the Battle of Galveston. After the Civil War, Houston businessmen initiated
efforts to widen the city's extensive system of bayous so the city could accept more commerce between downtown and
the nearby port of Galveston. By 1890, Houston was the railroad center of Texas. In August 2005, Houston became a
shelter to more than 150,000 people from New Orleans who evacuated from Hurricane Katrina. One month later, approximately
2.5 million Houston area residents evacuated when Hurricane Rita approached the Gulf Coast, leaving little damage
to the Houston area. This was the largest urban evacuation in the history of the United States. In September 2008,
Houston was hit by Hurricane Ike. As many as forty percent of Galveston Island residents refused to evacuate, fearing a repeat of the traffic problems that followed Hurricane Rita. Though Houston is the largest city in the United States
without formal zoning regulations, it has developed similarly to other Sun Belt cities because the city's land use
regulations and legal covenants have played a similar role. Regulations include mandatory lot size for single-family
houses and requirements that parking be available to tenants and customers. Such restrictions have had mixed results.
Though some have blamed the city's low density, urban sprawl, and lack of pedestrian-friendliness on these policies,
the city's land use has also been credited with keeping housing affordable, sparing Houston the worst
effects of the 2008 real estate crisis. The city issued 42,697 building permits in 2008 and was ranked first in the
list of healthiest housing markets for 2009. The Houston area is a leading center for building oilfield equipment.
Much of its success as a petrochemical complex is due to its busy ship channel, the Port of Houston. In the United
States, the port ranks first in international commerce and tenth among the largest ports in the world. Unlike in most places, high oil and gasoline prices are beneficial for Houston's economy, as many of its residents are employed
in the energy industry. Houston is the beginning or end point of numerous oil, gas, and products pipelines: The Houston
Theater District, located downtown, is home to nine major performing arts organizations and six performance halls.
It is the second-largest concentration of theater seats in a downtown area in the United States. Houston is one of
few United States cities with permanent, professional, resident companies in all major performing arts disciplines:
opera (Houston Grand Opera), ballet (Houston Ballet), music (Houston Symphony Orchestra), and theater (The Alley
Theatre). Houston is also home to folk artists, art groups and various small progressive arts organizations. Houston
attracts many touring Broadway acts, concerts, shows, and exhibitions for a variety of interests. Facilities in the
Theater District include the Jones Hall—home of the Houston Symphony Orchestra and Society for the Performing Arts—and
the Hobby Center for the Performing Arts. The Theater District is a 17-block area in the center of downtown Houston
that is home to the Bayou Place entertainment complex, restaurants, movies, plazas, and parks. Bayou Place is a large
multilevel building containing full-service restaurants, bars, live music, billiards, and Sundance Cinema. The Bayou
Music Center stages live concerts, stage plays, and stand-up comedy. Space Center Houston is the official visitors'
center of NASA's Lyndon B. Johnson Space Center. The Space Center has many interactive exhibits including moon rocks,
a shuttle simulator, and presentations about the history of NASA's manned space flight program. Other tourist attractions
include the Galleria (Texas's largest shopping mall, located in the Uptown District), Old Market Square, the Downtown
Aquarium, and Sam Houston Race Park. Minute Maid Park (home of the Astros) and Toyota Center (home of the Rockets),
are located in downtown Houston. Houston has the NFL's first retractable-roof stadium with natural grass, NRG Stadium
(home of the Texans). Minute Maid Park is also a retractable-roof stadium. Toyota Center also has the largest screen
for an indoor arena in the United States built to coincide with the arena's hosting of the 2013 NBA All-Star Game.
BBVA Compass Stadium, located in East Downtown, is a soccer-specific stadium for the Dynamo, the Dash, and the Texas Southern University football team. In addition, the NRG Astrodome, built in 1965, was the first indoor stadium in the world.
Other sports facilities include Hofheinz Pavilion (Houston Cougars basketball), Rice Stadium (Rice Owls football),
and Reliant Arena. TDECU Stadium is where the University of Houston Cougars football team plays. Houston
has hosted several major sports events: the 1968, 1986 and 2004 Major League Baseball All-Star Games; the 1989, 2006
and 2013 NBA All-Star Games; Super Bowl VIII and Super Bowl XXXVIII, as well as hosting the 2005 World Series and
1981, 1986, 1994 and 1995 NBA Finals, winning the latter two. Super Bowl LI is currently slated to be hosted in NRG
Stadium in 2017. Several private institutions of higher learning—ranging from liberal arts colleges, such as The
University of St. Thomas, Houston's only Catholic university, to Rice University, the nationally recognized research
university—are located within the city. Rice, with a total enrollment of slightly more than 6,000 students, has a
number of distinguished graduate programs and research institutes, such as the James A. Baker Institute for Public
Policy. Houston Baptist University, affiliated with the Baptist General Convention of Texas, offers bachelor's and
graduate degrees. It was founded in 1960 and is located in the Sharpstown area in Southwest Houston. Houston is the
seat of the internationally renowned Texas Medical Center, which contains the world's largest concentration of research
and healthcare institutions. All 49 member institutions of the Texas Medical Center are non-profit organizations.
They provide patient and preventive care, research, and education, and promote local, national, and international community well-being.
Employing more than 73,600 people, institutions at the medical center include 13 hospitals and two specialty institutions,
two medical schools, four nursing schools, and schools of dentistry, public health, pharmacy, and virtually all health-related
careers. It is where Life Flight, one of the first and still the largest air emergency services, was created, and where a very successful inter-institutional transplant program was developed. More heart surgeries are performed at the
Texas Medical Center than anywhere else in the world. Houston was the headquarters of Continental Airlines until
its 2010 merger with Chicago-based United Airlines; regulatory approval for the merger was granted
in October of that year. Bush Intercontinental became United Airlines' largest airline hub. The airline retained
a significant operational presence in Houston while offering more than 700 daily departures from the city. In early
2007, Bush Intercontinental Airport was named a model "port of entry" for international travelers by U.S. Customs
and Border Protection. Houston's economy has a broad industrial base in energy, manufacturing, aeronautics, and transportation.
It is also a leader in the health care sector and in building oilfield equipment; only New York City is home to more Fortune
500 headquarters within its city limits. The Port of Houston ranks first in the United States in international waterborne
tonnage handled and second in total cargo tonnage handled. Nicknamed the Space City, Houston is a global city, with
strengths in business, international trade, entertainment, culture, media, fashion, science, sports, technology,
education, medicine and research. The city has a population from various ethnic and religious backgrounds and a large
and growing international community. Houston is the most diverse city in Texas and has been described as the most
diverse in the United States. It is home to many cultural institutions and exhibits, which attract more than 7 million
visitors a year to the Museum District. Houston has an active visual and performing arts scene in the Theater District
and offers year-round resident companies in all major performing arts. Houston has mild winters in contrast to most
areas of the United States. In January, the normal mean temperature at Intercontinental Airport is 53.1 °F (11.7
°C), while that station has an average of 13 days with a low at or below freezing. Snowfall is rare. Recent snow
events in Houston include a storm on December 24, 2004, when one inch (2.5 cm) of snow accumulated in parts of the
metro area. Falls of at least one inch on both December 10, 2008 and December 4, 2009 marked the first time measurable
snowfall had occurred in two consecutive years in the city's recorded history. The coldest temperature officially
recorded in Houston was 5 °F (−15 °C) on January 18, 1940. Houston has historically received an ample amount of rainfall,
averaging about 49.8 in (1,260 mm) annually per 1981–2010 normals. Localized flooding often occurs, owing to the
extremely flat topography and widespread typical clay-silt prairie soils, which do not drain quickly. The University
of Houston System's annual impact on the Houston area's economy equates to that of a major corporation: $1.1 billion
in new funds attracted annually to the Houston area, $3.13 billion in total economic benefit and 24,000 local jobs
generated. This is in addition to the 12,500 new graduates the U.H. System produces every year who enter the workforce
in Houston and throughout the state of Texas. These degree-holders tend to stay in Houston. After five years, 80.5%
of graduates are still living and working in the region. Houston was founded in 1836 on land near the banks of Buffalo
Bayou (now known as Allen's Landing) and incorporated as a city on June 5, 1837. The city was named after former
General Sam Houston, who was president of the Republic of Texas and had commanded and won at the Battle of San Jacinto
25 miles (40 km) east of where the city was established. The burgeoning port and railroad industry, combined with
oil discovery in 1901, has induced continual surges in the city's population. In the mid-twentieth century, Houston
became the home of the Texas Medical Center—the world's largest concentration of healthcare and research institutions—and
NASA's Johnson Space Center, where the Mission Control Center is located. Located in the American South, Houston
is a diverse city with a large and growing international community. The metropolitan area is home to an estimated
1.1 million (21.4 percent) residents who were born outside the United States, with nearly two-thirds of the area's
foreign-born population from south of the United States–Mexico border. Additionally, more than one in five foreign-born
residents are from Asia. The city is home to the nation's third-largest concentration of consular offices, representing
86 countries. In 1900, after Galveston was struck by a devastating hurricane, efforts to make Houston into a viable
deep-water port were accelerated. The following year, oil discovered at the Spindletop oil field near Beaumont prompted
the development of the Texas petroleum industry. In 1902, President Theodore Roosevelt approved a $1 million improvement
project for the Houston Ship Channel. By 1910 the city's population had reached 78,800, almost doubling from a decade
before. African-Americans formed a large part of the city's population, numbering 23,929 people, or nearly one-third
of the residents. According to the United States Census Bureau, the city has a total area of 656.3 square miles (1,700
km2); this comprises 634.0 square miles (1,642 km2) of land and 22.3 square miles (58 km2) of water. The Piney Woods
is north of Houston. Most of Houston is located on the gulf coastal plain, and its vegetation is classified as temperate
grassland and forest. Much of the city was built on forested land, marshes, swamps, or prairie resembling the Deep South, all of which are still visible in surrounding areas. The flatness of the local terrain, combined with urban
sprawl, has made flooding a recurring problem for the city. Downtown stands about 50 feet (15 m) above sea level,
and the highest point in far northwest Houston is about 125 feet (38 m) in elevation. The city once relied on groundwater
for its needs, but land subsidence forced the city to turn to ground-level water sources such as Lake Houston, Lake
Conroe and Lake Livingston. The city owns surface water rights for 1.20 billion gallons of water a day in addition
to 150 million gallons a day of groundwater. During the summer months, temperatures commonly exceed 90 °F (32 °C): the city averages 106.5 days per year, most of them from June through September, with a high of 90 °F or above, and 4.6 days at or over 100 °F (38 °C). However, humidity usually yields a higher heat index. Summer
mornings average over 90 percent relative humidity. Winds are often light in the summer and offer little relief,
except in the far southeastern outskirts near the Gulf coast and Galveston. To cope with the strong humidity and
heat, people use air conditioning in nearly every vehicle and building. In 1980, Houston was described as the "most
air-conditioned place on earth". Officially, the hottest temperature ever recorded in Houston is 109 °F (43 °C),
which was reached both on September 4, 2000 and August 28, 2011. Houston is considered to be a politically divided
city whose balance of power often sways between Republicans and Democrats. Much of the city's wealthier areas vote
Republican while the city's working class and minority areas vote Democratic. According to the 2005 Houston Area
Survey, 68 percent of non-Hispanic whites in Harris County are declared or favor Republicans while 89 percent of
non-Hispanic blacks in the area are declared or favor Democrats. About 62 percent of Hispanics (of any race) in the
area are declared or favor Democrats. The city has often been known to be the most politically diverse city in Texas,
a state known for being generally conservative. As a result, the city is often a contested area in statewide elections.
In 2009, Houston became the first US city with a population over 1 million citizens to elect a gay mayor, by electing
Annise Parker. According to the 2010 Census, whites made up 51% of Houston's population; 26% of the total population
were non-Hispanic whites. Blacks or African Americans made up 25% of Houston's population. American Indians made
up 0.7% of the population. Asians made up 6% (1.7% Vietnamese, 1.3% Chinese, 1.3% Indian, 0.9% Pakistani, 0.4% Filipino,
0.3% Korean, 0.1% Japanese), while Pacific Islanders made up 0.1%. Individuals from some other race made up 15.2%
of the city's population, of which 0.2% were non-Hispanic. Individuals from two or more races made up 3.3% of the
city. At the 2000 Census, there were 1,953,631 people and the population density was 3,371.7 people per square mile
(1,301.8/km²). The racial makeup of the city was 49.3% White, 25.3% African American, 5.3% Asian, 0.7% American Indian,
0.1% Pacific Islander, 16.5% from some other race, and 3.1% from two or more races. In addition, Hispanics made up
37.4% of Houston's population while non-Hispanic whites made up 30.8%, down from 62.4% in 1970. The Houston–The Woodlands–Sugar
Land MSA's gross domestic product (GDP) in 2012 was $489 billion, making it the fourth-largest of any metropolitan
area in the United States and larger than Austria's, Venezuela's, or South Africa's GDP. Only 26 countries other
than the United States have a gross domestic product exceeding Houston's regional gross area product (GAP). In 2010,
mining (which consists almost entirely of exploration and production of oil and gas in Houston) accounted for 26.3%
of Houston's GAP, up sharply in response to high energy prices and a decreased worldwide surplus of oil production
capacity, followed by engineering services, health services, and manufacturing. The Baylor College of Medicine has
annually been considered within the top ten medical schools in the nation; likewise, the MD Anderson Cancer Center
has consistently ranked as one of the top two U.S. hospitals specializing in cancer care by U.S. News & World Report
since 1990. The Menninger Clinic, a renowned psychiatric treatment center, is affiliated with Baylor College of Medicine
and The Methodist Hospital System. With hospital locations nationwide and headquarters in Houston, the Triumph Healthcare
hospital system is the third-largest long-term acute care provider nationally. In 2013, Houston was identified as the #1 U.S. city for job creation by the U.S. Bureau of Labor Statistics: not only was it the first major city to regain all the jobs lost in the preceding economic downturn, but more than two jobs were added for every one lost after the crash. Economist and vice president of research at the Greater Houston Partnership Patrick Jankowski attributed
Houston's success to the ability of the region's real estate and energy industries to learn from historical mistakes.
Furthermore, Jankowski stated that "more than 100 foreign-owned companies relocated, expanded or started new businesses
in Houston" between 2008 and 2010, and this openness to external business boosted job creation during a period when
domestic demand was problematically low. Also in 2013, Houston again appeared on Forbes' list of Best Places for
Business and Careers. Three community college districts exist with campuses in and around Houston. The Houston Community
College System serves most of Houston. The northwestern through northeastern parts of the city are served by various
campuses of the Lone Star College System, while the southeastern portion of Houston is served by San Jacinto College,
and a northeastern portion is served by Lee College. The Houston Community College and Lone Star College systems
are within the 10 largest institutions of higher learning in the United States. METRO began light rail service on
January 1, 2004, with the inaugural track ("Red Line") running about 8 miles (13 km) from the University of Houston–Downtown (UHD) through the Texas Medical Center to its terminus at NRG Park. METRO is currently in the design phase of a 10-year expansion plan that will add five more lines and expand the current Red Line. Amtrak, the national
passenger rail system, provides service three times a week to Houston via the Sunset Limited (Los Angeles–New Orleans),
which stops at a train station on the north side of the downtown area. The station saw 14,891 boardings and alightings
in fiscal year 2008. In 2012, there was a 25 percent increase in ridership to 20,327 passengers embarking from the
Houston Amtrak station. The second-largest commercial airport is William P. Hobby Airport (named Houston International
Airport until 1967) which operates primarily short- to medium-haul domestic flights. However, in 2015 Southwest Airlines
launched service from a new international terminal at Hobby airport to several destinations in Mexico, Central America,
and the Caribbean. These were the first international flights flown from Hobby since 1969. Houston's aviation history
is showcased in the 1940 Air Terminal Museum located in the old terminal building on the west side of the airport.
Hobby Airport has been recognized with two awards for being one of the top five performing airports in the world
and for customer service by Airports Council International. The Houston area has over 150 active faults (an estimated 300 in total) with an aggregate length of up to 310 miles (500 km), including the Long Point–Eureka Heights
fault system which runs through the center of the city. There have been no significant historically recorded earthquakes
in Houston, but researchers do not discount the possibility of such quakes having occurred in the deeper past, nor
occurring in the future. Land in some areas southeast of Houston is sinking because water has been pumped out of
the ground for many years. This sinking may be associated with slip along the faults; however, the slippage is slow and is not considered an earthquake, which requires stationary faults to slip suddenly enough to create seismic waves. These faults
also tend to move at a smooth rate in what is termed "fault creep", which further reduces the risk of an earthquake.
Houston was incorporated in 1837 under the ward system of representation. The ward designation is the progenitor
of the eleven current-day geographically oriented Houston City Council districts. Locations in Houston are generally
classified as either being inside or outside the Interstate 610 Loop. The inside encompasses the central business
district and many residential neighborhoods that predate World War II. More recently, high-density residential areas
have been developed within the loop. The city's outlying areas, suburbs and enclaves are located outside of the loop.
Beltway 8 encircles the city another 5 miles (8.0 km) farther out. Centered on Post Oak Boulevard and Westheimer
Road, the Uptown District boomed during the 1970s and early 1980s when a collection of mid-rise office buildings,
hotels, and retail developments appeared along Interstate 610 west. Uptown became one of the most prominent instances
of an edge city. The tallest building in Uptown is the 64-floor, 901-foot (275 m)-tall, Philip Johnson and John Burgee
designed landmark Williams Tower (known as the Transco Tower until 1999). At the time of construction, it was believed
to be the world's tallest skyscraper outside of a central business district. The new 20-story Skanska building and BBVA Compass Plaza are the first office buildings built in Uptown in 30 years. The Uptown District is also home
to buildings designed by noted architects I. M. Pei, César Pelli, and Philip Johnson. In the late 1990s and early
2000s, there was a mini-boom of mid-rise and high-rise residential tower construction, with several over 30
stories tall. Since 2000 more than 30 high-rise buildings have gone up in Houston; all told, 72 high-rises tower
over the city, which adds up to about 8,300 units. In 2002, Uptown had more than 23 million square feet (2,100,000
m²) of office space with 16 million square feet (1,500,000 m²) of Class A office space. Houston is recognized worldwide
for its energy industry—particularly for oil and natural gas—as well as for biomedical research and aeronautics.
Renewable energy sources—wind and solar—are also growing economic bases in the city. The Houston Ship Channel is
also a large part of Houston's economic base. Because of these strengths, Houston is designated as a global city
by the Globalization and World Cities Study Group and Network and global management consulting firm A.T. Kearney.
The Houston area is the top U.S. market for exports, surpassing New York City in 2013, according to data released
by the U.S. Department of Commerce's International Trade Administration. In 2012, the Houston–The Woodlands–Sugar
Land area recorded $110.3 billion in merchandise exports. Petroleum products, chemicals, and oil and gas extraction
equipment accounted for approximately two-thirds of the metropolitan area's exports that year. The top three destinations
for exports were Mexico, Canada, and Brazil. In 2008, Houston received top ranking on Kiplinger's Personal Finance
Best Cities of 2008 list, which ranks cities on their local economy, employment opportunities, reasonable living
costs, and quality of life. The city ranked fourth for highest increase in the local technological innovation over
the preceding 15 years, according to Forbes magazine. In the same year, the city ranked second on the annual Fortune
500 list of company headquarters, first for Forbes magazine's Best Cities for College Graduates, and first on their
list of Best Cities to Buy a Home. In 2010, the city was rated the best city for shopping, according to Forbes. Houston
has sports teams for every major professional league except the National Hockey League (NHL). The Houston Astros
are a Major League Baseball (MLB) expansion team formed in 1962 (known as the "Colt .45s" until 1965) that made one
World Series appearance in 2005. The Houston Rockets are a National Basketball Association (NBA) franchise based
in the city since 1971. They have won two NBA Championships: in 1994 and 1995 under star players Hakeem Olajuwon,
Otis Thorpe, Clyde Drexler, Vernon Maxwell, and Kenny Smith. The Houston Texans are a National Football League (NFL)
expansion team formed in 2002. The Houston Dynamo are a Major League Soccer (MLS) franchise that has been based in
Houston since 2006; they won two MLS Cup titles, in 2006 and 2007. The Houston Dash play in the National Women's
Soccer League (NWSL). The Scrap Yard Dawgs, a women's pro softball team, is expected to begin play in National Pro Fastpitch (NPF) in 2016. The original city council line-up of 14 members (nine district-based and five at-large
positions) was based on a U.S. Justice Department mandate which took effect in 1979. At-large council members represent
the entire city. Under the city charter, once the population in the city limits exceeded 2.1 million residents, two
additional districts were to be added. The city of Houston's official 2010 census count was 600 shy of the required
number; however, as the city was expected to grow beyond 2.1 million shortly thereafter, the two additional districts
were added for, and the positions filled during, the August 2011 elections. Four separate and distinct state universities
are located in Houston. The University of Houston is a nationally recognized Tier One research university, and is
the flagship institution of the University of Houston System. The third-largest university in Texas, the University
of Houston has nearly 40,000 students on its 667-acre campus in southeast Houston. The University of Houston–Clear
Lake and the University of Houston–Downtown are stand-alone universities; they are not branch campuses of the University
of Houston. Located in the historic community of Third Ward is Texas Southern University, one of the largest historically
black colleges and universities in the United States. Houston is served by the Houston Chronicle, its only major
daily newspaper with wide distribution. The Hearst Corporation, which owns and operates the Houston Chronicle, bought
the assets of the Houston Post—its long-time rival and main competitor—when the Houston Post ceased operations in 1995.
The Houston Post was owned by the family of former Lieutenant Governor Bill Hobby of Houston. The only other major
publication to serve the city is the Houston Press—a free alternative weekly with a weekly readership of more than
300,000. Houston's highway system has a hub-and-spoke freeway structure serviced by multiple loops. The innermost
loop is Interstate 610, which encircles downtown, the medical center, and many core neighborhoods, with a diameter of around 8 miles (13 km). Beltway 8 and its freeway core, the Sam Houston Tollway, form the middle loop at a diameter of
roughly 23 miles (37 km). A proposed highway project, State Highway 99 (Grand Parkway), will form a third loop outside
of Houston, totaling 180 miles in length and making an almost-complete circumference, with the exception of crossing
the ship channel. As of June 2014, two of eleven segments of State Highway 99 have been completed to the west of
Houston, and three northern segments, totaling 38 miles, are actively under construction and scheduled to open to
traffic late in 2015. In addition to the Sam Houston Tollway loop mentioned above, the Harris County Toll Road Authority
currently operates four spoke tollways: The Katy Managed Lanes of Interstate 10, the Hardy Toll Road, the Westpark
Tollway, and the Fort Bend Parkway Extension. Other spoke roads either planned or under construction include the Crosby Freeway and the future Alvin Freeway. The primary city airport is George Bush Intercontinental Airport (IAH), the
tenth-busiest in the United States for total passengers, and twenty-eighth-busiest worldwide. Bush Intercontinental
currently ranks fourth in the United States for non-stop domestic and international service with 182 destinations.
In 2006, the United States Department of Transportation named IAH the fastest-growing of the top ten airports in
the United States. The Houston Air Route Traffic Control Center stands on the George Bush Intercontinental Airport
grounds.
In the Roman era, copper was principally mined on Cyprus, the origin of the name of the metal from aes cyprium (metal of Cyprus), later corrupted to cuprum, from which the words copper (English), cuivre (French), koper (Dutch) and Kupfer
(German) are all derived. Its compounds are commonly encountered as copper(II) salts, which often impart blue or
green colors to minerals such as azurite, malachite and turquoise and have been widely used historically as pigments.
Architectural structures built with copper corrode to give green verdigris (or patina). Decorative art prominently
features copper, both by itself and in the form of pigments. Copper occurs naturally as native copper and was known
to some of the oldest civilizations on record. It has a history of use that is at least 10,000 years old, and estimates
of its discovery place it at 9000 BC in the Middle East; a copper pendant was found in northern Iraq that dates to
8700 BC. There is evidence that gold and meteoric iron (but not iron smelting) were the only metals used by humans
before copper. The history of copper metallurgy is thought to have followed the following sequence: 1) cold working
of native copper, 2) annealing, 3) smelting, and 4) the lost wax method. In southeastern Anatolia, all four of these
metallurgical techniques appear more or less simultaneously at the beginning of the Neolithic, c. 7500 BC. However,
just as agriculture was independently invented in several parts of the world, copper smelting was invented locally
in several different places. It was probably discovered independently in China before 2800 BC, in Central America
perhaps around 600 AD, and in West Africa about the 9th or 10th century AD. Investment casting was invented in 4500–4000
BC in Southeast Asia and carbon dating has established mining at Alderley Edge in Cheshire, UK at 2280 to 1890 BC.
Ötzi the Iceman, a male dated from 3300–3200 BC, was found with an axe with a copper head 99.7% pure; high levels
of arsenic in his hair suggest his involvement in copper smelting. Experience with copper has assisted the development
of other metals; in particular, copper smelting led to the discovery of iron smelting. Production in the Old Copper
Complex in Michigan and Wisconsin is dated between 6000 and 3000 BC. Natural bronze, a type of copper made from ores
rich in silicon, arsenic, and (rarely) tin, came into general use in the Balkans around 5500 BC.[citation needed]
The gates of the Temple of Jerusalem used Corinthian bronze made by depletion gilding. It was most prevalent in Alexandria,
where alchemy is thought to have begun. In ancient India, copper was used in the holistic medical science Ayurveda
for surgical instruments and other medical equipment. Ancient Egyptians (~2400 BC) used copper for sterilizing wounds
and drinking water, and later on for headaches, burns, and itching. The Baghdad Battery, with copper cylinders soldered
to lead, dates back to 248 BC to AD 226 and resembles a galvanic cell, leading people to believe this was the first
battery; the claim has not been verified. Despite competition from other materials, copper remains the preferred
electrical conductor in nearly all categories of electrical wiring with the major exception being overhead electric
power transmission where aluminium is often preferred. Copper wire is used in power generation, power transmission,
power distribution, telecommunications, electronics circuitry, and countless types of electrical equipment. Electrical
wiring is the most important market for the copper industry. This includes building wire, communications cable, power
distribution cable, appliance wire, automotive wire and cable, and magnet wire. Roughly half of all copper mined
is used to manufacture electrical wire and cable conductors. Many electrical devices rely on copper wiring because
of its multitude of inherent beneficial properties, such as its high electrical conductivity, tensile strength, ductility,
creep (deformation) resistance, corrosion resistance, low thermal expansion, high thermal conductivity, solderability,
and ease of installation. Copper is biostatic, meaning bacteria will not grow on it. For this reason it has long
been used to line parts of ships to protect against barnacles and mussels. It was originally used pure, but has since
been superseded by Muntz metal. Similarly, as discussed in copper alloys in aquaculture, copper alloys have become
important netting materials in the aquaculture industry because they are antimicrobial and prevent biofouling, even
in extreme conditions and have strong structural and corrosion-resistant properties in marine environments. Copper,
silver and gold are in group 11 of the periodic table, and they share certain attributes: they have one s-orbital
electron on top of a filled d-electron shell and are characterized by high ductility and electrical conductivity.
The filled d-shells in these elements do not contribute much to the interatomic interactions, which are dominated
by the s-electrons through metallic bonds. Unlike in metals with incomplete d-shells, metallic bonds in copper lack a covalent character and are relatively weak. This explains the low hardness and high ductility of single crystals
of copper. At the macroscopic scale, introduction of extended defects to the crystal lattice, such as grain boundaries,
hinders flow of the material under applied stress, thereby increasing its hardness. For this reason, copper is usually
supplied in a fine-grained polycrystalline form, which has greater strength than monocrystalline forms. Copper is
synthesized in massive stars and is present in the Earth's crust at a concentration of about 50 parts per million
(ppm), where it occurs as native copper or in minerals such as the copper sulfides chalcopyrite and chalcocite, the
copper carbonates azurite and malachite, and the copper(I) oxide mineral cuprite. The largest mass of elemental copper
discovered weighed 420 tonnes and was found in 1857 on the Keweenaw Peninsula in Michigan, US. Native copper is a
polycrystal, with the largest described single crystal measuring 4.4×3.2×3.2 cm. In Greece, copper was known by the
name chalkos (χαλκός). It was an important resource for the Romans, Greeks and other ancient peoples. In Roman times,
it was known as aes Cyprium, aes being the generic Latin term for copper alloys and Cyprium from Cyprus, where much
copper was mined. The phrase was simplified to cuprum, hence the English copper. Aphrodite and Venus represented
copper in mythology and alchemy, because of its lustrous beauty, its ancient use in producing mirrors, and its association
with Cyprus, which was sacred to the goddess. The seven heavenly bodies known to the ancients were associated with
the seven metals known in antiquity, and Venus was assigned to copper. Compounds that contain a carbon-copper bond
are known as organocopper compounds. They are very reactive towards oxygen to form copper(I) oxide and have many
uses in chemistry. They are synthesized by treating copper(I) compounds with Grignard reagents, terminal alkynes
or organolithium reagents; in particular, the last reaction described produces a Gilman reagent. These can undergo
substitution with alkyl halides to form coupling products; as such, they are important in the field of organic synthesis.
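As a schematic sketch of the chemistry just described (standard textbook equations, not drawn from this text, with R and R′ standing for generic organic groups), the formation of a Gilman reagent and its substitution with an alkyl halide can be written as:

```latex
% Gilman reagent from an organolithium and a copper(I) halide
2\,\mathrm{RLi} + \mathrm{CuI} \longrightarrow \mathrm{R_2CuLi} + \mathrm{LiI}
% Substitution with an alkyl halide gives the cross-coupling product
\mathrm{R_2CuLi} + \mathrm{R'X} \longrightarrow \mathrm{R\!-\!R'} + \mathrm{RCu} + \mathrm{LiX}
```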
Copper(I) acetylide is highly shock-sensitive but is an intermediate in reactions such as the Cadiot-Chodkiewicz
coupling and the Sonogashira coupling. Conjugate addition to enones and carbocupration of alkynes can also be achieved
with organocopper compounds. Copper(I) forms a variety of weak complexes with alkenes and carbon monoxide, especially
in the presence of amine ligands. The uses of copper in art were not limited to currency: it was used by Renaissance
sculptors, in photographic technology known as the daguerreotype, and the Statue of Liberty. Copper plating and copper
sheathing for ships' hulls was widespread; the ships of Christopher Columbus were among the earliest to have this
feature. The Norddeutsche Affinerie in Hamburg was the first modern electroplating plant starting its production
in 1876. The German scientist Gottfried Osann invented powder metallurgy in 1830 while determining the metal's atomic
mass; around then it was discovered that the amount and type of alloying element (e.g., tin) to copper would affect
bell tones. Flash smelting was developed by Outokumpu in Finland and first applied at Harjavalta in 1949; the energy-efficient
process accounts for 50% of the world's primary copper production. Copper's greater conductivity versus other metals
enhances the electrical energy efficiency of motors. This is important because motors and motor-driven systems account
for 43–46% of all global electricity consumption and 69% of all electricity used by industry. Increasing the mass
and cross section of copper in a coil increases the electrical energy efficiency of the motor. Copper motor rotors,
a new technology designed for motor applications where energy savings are prime design objectives, are enabling general-purpose
induction motors to meet and exceed National Electrical Manufacturers Association (NEMA) premium efficiency standards.
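To make the efficiency claim concrete, here is a minimal sketch of the underlying arithmetic (the motor rating, efficiency figures, and running hours are illustrative assumptions, not numbers from the text):

```python
# Illustrative calculation: electrical energy drawn by a motor delivering a
# fixed shaft output at two different efficiency levels. All figures below
# are assumed for the example, not taken from the article.

def annual_input_kwh(shaft_kw: float, efficiency: float, hours: float) -> float:
    """Electrical energy (kWh) drawn per year for a given shaft output and efficiency."""
    return shaft_kw / efficiency * hours

# A hypothetical 15 kW motor running 4,000 hours per year:
baseline = annual_input_kwh(15.0, 0.90, 4000)  # standard-efficiency motor
improved = annual_input_kwh(15.0, 0.93, 4000)  # premium-efficiency motor (e.g., more copper in the rotor)
print(f"{baseline - improved:.0f} kWh saved per year")
```

Because input power scales as output divided by efficiency, even a few percentage points of efficiency translate into thousands of kilowatt-hours over a year of continuous industrial duty.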
Chromobacterium violaceum and Pseudomonas fluorescens can both mobilize solid copper, as a cyanide compound. The
ericoid mycorrhizal fungi associated with Calluna, Erica and Vaccinium can grow in copper metalliferous soils. The
ectomycorrhizal fungus Suillus luteus protects young pine trees from copper toxicity. A sample of the fungus Aspergillus
niger was found growing from gold-mining solution and was found to contain cyano metal complexes with metals such as gold, silver, copper, iron and zinc. The fungus also plays a role in the solubilization of heavy metal sulfides. Copper-alloy
touch surfaces have natural intrinsic properties to destroy a wide range of microorganisms (e.g., E. coli O157:H7,
methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Clostridium difficile, influenza A virus, adenovirus,
and fungi). Some 355 copper alloys were proven to kill more than 99.9% of disease-causing bacteria within just two
hours when cleaned regularly. The United States Environmental Protection Agency (EPA) has approved the registrations
of these copper alloys as "antimicrobial materials with public health benefits," which allows manufacturers to legally
make claims as to the positive public health benefits of products made with registered antimicrobial copper alloys.
In addition, the EPA has approved a long list of antimicrobial copper products made from these alloys, such as bedrails,
handrails, over-bed tables, sinks, faucets, door knobs, toilet hardware, computer keyboards, health club equipment,
shopping cart handles, etc. (for a comprehensive list of products, see: Antimicrobial copper-alloy touch surfaces#Approved
products). Copper doorknobs are used by hospitals to reduce the transfer of disease, and Legionnaires' disease is
suppressed by copper tubing in plumbing systems. Antimicrobial copper alloy products are now being installed in healthcare
facilities in the U.K., Ireland, Japan, Korea, France, Denmark, and Brazil[citation needed] and in the subway transit
system in Santiago, Chile, where copper-zinc alloy handrails will be installed in some 30 stations between 2011 and 2014.
Copper compounds in liquid form are used as a wood preservative, particularly in treating the original portions of structures
during restoration of damage due to dry rot. Together with zinc, copper wires may be placed over non-conductive roofing
materials to discourage the growth of moss.[citation needed] Textile fibers use copper to create antimicrobial protective
fabrics, as do ceramic glazes, stained glass and musical instruments. Electroplating commonly uses copper as a base
for other metals such as nickel. Copper has been in use for at least 10,000 years, but more than 95% of all copper ever
mined and smelted has been extracted since 1900, and more than half was extracted in only the last 24 years. As with
many natural resources, the total amount of copper on Earth is vast (around 10¹⁴ tons just in the top kilometer of
Earth's crust, or about 5 million years' worth at the current rate of extraction). However, only a tiny fraction
of these reserves is economically viable, given present-day prices and technologies. Various estimates of existing
copper reserves available for mining vary from 25 years to 60 years, depending on core assumptions such as the growth
rate. Recycling is a major source of copper in the modern world. Because of these and other factors, the future of
copper production and supply is the subject of much debate, including the concept of peak copper, analogous to peak
oil. The cultural role of copper has been important, particularly in currency. Romans in the 6th through 3rd centuries
BC used copper lumps as money. At first, the copper itself was valued, but gradually the shape and look of the copper
became more important. Julius Caesar had his own coins made from brass, while Octavianus Augustus Caesar's coins
were made from Cu-Pb-Sn alloys. With an estimated annual output of around 15,000 t, Roman copper mining and smelting
activities reached a scale unsurpassed until the time of the Industrial Revolution; the most intensely mined regions were Hispania, Cyprus, and Central Europe. The major applications of copper are in electrical wires
(60%), roofing and plumbing (20%) and industrial machinery (15%). Copper is mostly used as a pure metal, but when
a higher hardness is required it is combined with other elements to make an alloy (5% of total use) such as brass
and bronze. A small part of copper supply is used in production of compounds for nutritional supplements and fungicides
in agriculture. Machining of copper is possible, although it is usually necessary to use an alloy for intricate parts
to get good machinability characteristics. The softness of copper partly explains its high electrical conductivity
(59.6×10⁶ S/m) and thus also high thermal conductivity, which are the second highest (after silver) among pure metals
at room temperature. This is because the resistivity to electron transport in metals at room temperature mostly originates
from scattering of electrons on thermal vibrations of the lattice, which are relatively weak for a soft metal. The
maximum permissible current density of copper in open air is approximately 3.1×10⁶ A/m² of cross-sectional area,
above which it begins to heat excessively. As with other metals, if copper is placed against another metal, galvanic
corrosion will occur. Most copper is mined or extracted as copper sulfides from large open pit mines in porphyry
copper deposits that contain 0.4 to 1.0% copper. Examples include Chuquicamata in Chile, Bingham Canyon Mine in Utah,
United States and El Chino Mine in New Mexico, United States. According to the British Geological Survey, in 2005,
Chile was the top mine producer of copper with at least one-third world share followed by the United States, Indonesia
and Peru. Copper can also be recovered through the in-situ leach process. Several sites in the state of Arizona are
considered prime candidates for this method. The amount of copper in use is increasing and the quantity available
is barely sufficient to allow all countries to reach developed world levels of usage. Like aluminium, copper is 100%
recyclable without any loss of quality, regardless of whether it is in a raw state or contained in a manufactured
product. In volume, copper is the third most recycled metal after iron and aluminium. It is estimated that 80% of
the copper ever mined is still in use today. According to the International Resource Panel's Metal Stocks in Society
report, the global per capita stock of copper in use in society is 35–55 kg. Much of this is in more-developed countries
(140–300 kg per capita) rather than less-developed countries (30–40 kg per capita). The metal's distinctive natural
green patina has long been coveted by architects and designers. The final patina is a particularly durable layer
that is highly resistant to atmospheric corrosion, thereby protecting the underlying metal against further weathering.
It can be a mixture of carbonate and sulfate compounds in various amounts, depending upon environmental conditions
such as sulfur-containing acid rain. Architectural copper and its alloys can also be 'finished' to impart a particular look, feel, or color. Finishes include mechanical surface treatments, chemical coloring, and coatings. Gram quantities
of various copper salts have been taken in suicide attempts and produced acute copper toxicity in humans, possibly
due to redox cycling and the generation of reactive oxygen species that damage DNA. Corresponding amounts of copper
salts (30 mg/kg) are toxic in animals. A minimum dietary value for healthy growth in rabbits has been reported to
be at least 3 ppm in the diet. However, higher concentrations of copper (100 ppm, 200 ppm, or 500 ppm) in the diet
of rabbits may favorably influence feed conversion efficiency, growth rates, and carcass dressing percentages. Britain's
first use of brass occurred around the 3rd–2nd century BC. In North America, copper mining began with marginal workings
by Native Americans. Native copper is known to have been extracted from sites on Isle Royale with primitive stone
tools between 800 and 1600. Copper metallurgy was flourishing in South America, particularly in Peru around 1000
AD; it proceeded at a much slower rate on other continents. Copper burial ornamentals from the 15th century have
been uncovered, but the metal's commercial production did not start until the early 20th century. There are 29 isotopes of copper. ⁶³Cu and ⁶⁵Cu are stable, with ⁶³Cu comprising approximately 69% of naturally occurring copper; they both have a spin of 3⁄2. The other isotopes are radioactive, with the most stable being ⁶⁷Cu, with a half-life of 61.83 hours. Seven metastable isotopes have been characterized, with ⁶⁸ᵐCu the longest-lived, with a half-life of 3.8 minutes.
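As a quick numeric illustration of what these half-lives mean (a minimal sketch; the function is ours, using the standard exponential-decay relation):

```python
# Fraction of a radioisotope sample remaining after t hours, given its
# half-life: N(t)/N0 = (1/2)^(t / half_life).

def remaining_fraction(t_hours: float, half_life_hours: float) -> float:
    """Fraction of the original sample left after t_hours of decay."""
    return 0.5 ** (t_hours / half_life_hours)

# 67Cu (half-life 61.83 hours): fraction left after one week
print(remaining_fraction(7 * 24, 61.83))  # roughly 0.15
```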
Isotopes with a mass number above 64 decay by β⁻, whereas those with a mass number below 64 decay by β⁺. ⁶⁴Cu, which has a half-life of 12.7 hours, decays both ways. The alloy of copper and nickel, called cupronickel, is used in low-denomination
coins, often for the outer cladding. The US 5-cent coin called a nickel consists of 75% copper and 25% nickel and
has a homogeneous composition. The alloy consisting of 90% copper and 10% nickel is remarkable for its resistance
to corrosion and is used in various parts that are exposed to seawater. Alloys of copper with aluminium (about 7%)
have a pleasant golden color and are used in decorations. Some lead-free solders consist of tin alloyed with a small
proportion of copper and other metals. Polyols, compounds containing more than one alcohol functional group, generally
interact with cupric salts. For example, copper salts are used to test for reducing sugars. Specifically, using Benedict's
reagent and Fehling's solution the presence of the sugar is signaled by a color change from blue Cu(II) to reddish
copper(I) oxide. Schweizer's reagent and related complexes with ethylenediamine and other amines dissolve cellulose.
Amino acids form very stable chelate complexes with copper(II). Many wet-chemical tests for copper ions exist, one
involving potassium ferrocyanide, which gives a brown precipitate with copper(II) salts. Alloying copper with tin
to make bronze was first practiced about 4000 years after the discovery of copper smelting, and about 2000 years
after "natural bronze" had come into general use[citation needed]. Bronze artifacts from the Vinča culture date to
4500 BC. Sumerian and Egyptian artifacts of copper and bronze alloys date to 3000 BC. The Bronze Age began in Southeastern
Europe around 3700–3300 BC, in Northwestern Europe about 2500 BC. It ended with the beginning of the Iron Age, 2000–1000
BC in the Near East, 600 BC in Northern Europe. The transition between the Neolithic period and the Bronze Age was
formerly termed the Chalcolithic period (copper-stone), with copper tools being used with stone tools. This term
has gradually fallen out of favor because in some parts of the world the Chalcolithic and Neolithic are coterminous
at both ends. Brass, an alloy of copper and zinc, is of much more recent origin. It was known to the Greeks, but
became a significant supplement to bronze during the Roman Empire. Copper is an essential trace element in plants and animals, but not in some microorganisms. The human body contains copper at a level of about 1.4 to 2.1 mg per kg
of body mass. Stated differently, the RDA for copper in normal healthy adults is quoted as 0.97 mg/day and as 3.0
mg/day. Copper is absorbed in the gut, then transported to the liver bound to albumin. After processing in the liver,
copper is distributed to other tissues in a second phase. Copper transport here involves the protein ceruloplasmin,
which carries the majority of copper in blood. Ceruloplasmin also carries copper that is excreted in milk, and is
particularly well-absorbed as a copper source. Copper in the body normally undergoes enterohepatic circulation (about
5 mg a day, vs. about 1 mg per day absorbed in the diet and excreted from the body), and the body is able to excrete
some excess copper, if needed, via bile, which carries some copper out of the liver that is not then reabsorbed by
the intestine. The concentration of copper in ores averages only 0.6%, and most commercial ores are sulfides, especially
chalcopyrite (CuFeS₂) and to a lesser extent chalcocite (Cu₂S). These minerals are concentrated from crushed ores
to the level of 10–15% copper by froth flotation or bioleaching. Heating this material with silica in flash smelting
removes much of the iron as slag. The process exploits the greater ease of converting iron sulfides into their oxides,
which in turn react with the silica to form the silicate slag, which floats on top of the heated mass. The resulting
copper matte, consisting of Cu₂S, is then roasted to convert all sulfides into oxides (2 Cu₂S + 3 O₂ → 2 Cu₂O + 2 SO₂). Together with caesium and gold
(both yellow), and osmium (bluish), copper is one of only four elemental metals with a natural color other than gray
or silver. Pure copper is orange-red and acquires a reddish tarnish when exposed to air. The characteristic color
of copper results from the electronic transitions between the filled 3d and half-empty 4s atomic shells – the energy
difference between these shells is such that it corresponds to orange light. The same mechanism accounts for the
yellow color of gold and caesium. Copper has been used since ancient times as a durable, corrosion resistant, and
weatherproof architectural material. Roofs, flashings, rain gutters, downspouts, domes, spires, vaults, and doors
have been made from copper for hundreds or thousands of years. Copper's architectural use has been expanded in modern
times to include interior and exterior wall cladding, building expansion joints, radio frequency shielding, and antimicrobial
indoor products, such as attractive handrails, bathroom fixtures, and counter tops. Some of copper's other important
benefits as an architectural material include its low thermal movement, light weight, lightning protection, and its
recyclability.
A psychological identity relates to self-image (one's mental model of oneself), self-esteem, and individuality. Consequently,
Weinreich gives the definition "A person's identity is defined as the totality of one's self-construal, in which
how one construes oneself in the present expresses the continuity between how one construes oneself as one was in
the past and how one construes oneself as one aspires to be in the future"; this allows for definitions of aspects
of identity, such as: "One's ethnic identity is defined as that part of the totality of one's self-construal made
up of those dimensions that express the continuity between one's construal of past ancestry and one's future aspirations
in relation to ethnicity" (Weinreich, 1986a). The description or representation of individual and group identity
is a central task for psychologists, sociologists and anthropologists and those of other disciplines where "identity"
needs to be mapped and defined. How should one describe the identity of another, in ways which encompass both their
idiosyncratic qualities and their group memberships or identifications, both of which can shift according to circumstance?
Following on from the work of Kelly, Erikson, Tajfel and others, Weinreich's Identity Structure Analysis (ISA) is
"a structural representation of the individual's existential experience, in which the relationships between self
and other agents are organised in relatively stable structures over time … with the emphasis on the socio-cultural
milieu in which self relates to other agents and institutions" (Weinreich and Saunderson, eds., 2003, p. 1). Using
constructs drawn from the salient discourses of the individual, the group and cultural norms, the practical operationalisation
of ISA provides a methodology that maps how these are used by the individual, applied across time and milieus by
the "situated self" to appraise self and other agents and institutions (for example, resulting in the individual's
evaluation of self and significant others and institutions).[citation needed] Weinreich's identity variant similarly
includes the categories of identity diffusion, foreclosure and crisis, but with a somewhat different emphasis. Here,
with respect to identity diffusion for example, an optimal level is interpreted as the norm, as it is unrealistic
to expect an individual to resolve all their conflicted identifications with others; therefore we should be alert
to individuals with levels which are much higher or lower than the norm – highly diffused individuals are classified
as diffused, and those with low levels as foreclosed or defensive (Weinreich & Saunderson, 2003, pp. 65–67, 105–106).
Weinreich applies the identity variant in a framework which also allows for the transition from one to another by
way of biographical experiences and resolution of conflicted identifications situated in various contexts – for example,
an adolescent going through a family break-up may be in one state, whereas later, in a stable marriage with a secure professional role, he or she may be in another. Hence, though there is continuity, there is also development and change (Weinreich & Saunderson, 2003, pp. 22–23). Anthropologists have contributed to the debate by shifting the focus of research:
One of the first challenges for the researcher wishing to carry out empirical research in this area is to identify
an appropriate analytical tool. The concept of boundaries is useful here for demonstrating how identity works. In
the same way as Barth, in his approach to ethnicity, advocated the critical focus for investigation as being "the
ethnic boundary that defines the group rather than the cultural stuff that it encloses" (1969:15), social anthropologists
such as Cohen and Bray have shifted the focus of analytical study from identity to the boundaries that are used for
purposes of identification. If identity is a kind of virtual site in which the dynamic processes and markers used
for identification are made apparent, boundaries provide the framework on which this virtual site is built. They
concentrated on how the idea of community belonging is differently constructed by individual members and how individuals
within the group conceive ethnic boundaries. The inclusiveness of Weinreich's definition (above) directs attention
to the totality of one's identity at a given phase in time, and assists in elucidating component aspects of one's
total identity, such as one's gender identity, ethnic identity, occupational identity and so on. The definition readily
applies to the young child, to the adolescent, to the young adult, and to the older adult in various phases of the
life cycle. Depending on whether one is a young child or an adult at the height of one's powers, how one construes
oneself as one was in the past will refer to very different salient experiential markers. Likewise, how one construes
oneself as one aspires to be in the future will differ considerably according to one's age and accumulated experiences.
(Weinreich & Saunderson, eds., 2003, pp. 26–34). Although the self is distinct from identity, the literature of self-psychology
can offer some insight into how identity is maintained (Cote & Levine 2002, p. 24). From the vantage point of self-psychology,
there are two areas of interest: the processes by which a self is formed (the "I"), and the actual content of the
schemata which compose the self-concept (the "Me"). In the latter field, theorists have shown interest in relating
the self-concept to self-esteem, the differences between complex and simple ways of organizing self-knowledge, and
the links between those organizing principles and the processing of information (Cote & Levine 2002). At a general
level, self-psychology is compelled to investigate the question of how the personal self relates to the social environment.
To the extent that these theories place themselves in the tradition of "psychological" social psychology, they focus
on explaining an individual's actions within a group in terms of mental events and states. However, some "sociological"
social psychology theories go further by attempting to deal with the issue of identity at both the levels of individual
cognition and of collective behavior. Anthropologists have most frequently employed the term 'identity' to refer
to this idea of selfhood in a loosely Eriksonian way (Erikson 1972), covering properties based on the uniqueness and individuality
that make a person distinct from others. Identity became of more interest to anthropologists with the emergence
of modern concerns with ethnicity and social movements in the 1970s. This was reinforced by an appreciation, following
the trend in sociological thought, of the manner in which the individual is affected by and contributes to the overall
social context. At the same time, the Eriksonian approach to identity remained in force, with the result that identity
has continued until recently to be used in a largely socio-historical way to refer to qualities of sameness in relation
to a person's connection to others and to a particular group of people. Boundaries can be inclusive or exclusive
depending on how they are perceived by other people. An exclusive boundary arises, for example, when a person adopts
a marker that imposes restrictions on the behaviour of others. An inclusive boundary is created, by contrast, by
the use of a marker with which other people are ready and able to associate. At the same time, however, an inclusive
boundary will also impose restrictions on the people it has included by limiting their inclusion within other boundaries.
An example of this is the use of a particular language by a newcomer in a room full of people speaking various languages.
Some people may understand the language used by this person while others may not. Those who do not understand it
might take the newcomer's use of this particular language merely as a neutral sign of identity. But they might also
perceive it as imposing an exclusive boundary that is meant to mark them off from her. On the other hand, those who
do understand the newcomer's language could take it as an inclusive boundary, through which the newcomer associates
herself with them to the exclusion of the other people present. Equally, however, it is possible that people who
do understand the newcomer but who also speak another language may not want to speak the newcomer's language and
so see her marker as an imposition and a negative boundary. It is possible that the newcomer is either aware or unaware
of this, depending on whether she herself knows other languages or is conscious of the plurilingual quality of the
people there and is respectful of it or not. The "Neo-Eriksonian" identity status paradigm emerged in later years,
driven largely by the work of James Marcia. This paradigm focuses upon the twin concepts of exploration and commitment.
The central idea is that any individual's sense of identity is determined in large part by the explorations and commitments
that he or she makes regarding certain personal and social traits. It follows that the core of the research in this
paradigm investigates the degrees to which a person has made certain explorations, and the degree to which he or
she displays a commitment to those explorations. Many people gain a sense of positive self-esteem from their identity
groups, which furthers a sense of community and belonging. Another issue that researchers have attempted to address
is the question of why people engage in discrimination, i.e., why they tend to favor those they consider a part of
their "in-group" over those considered to be outsiders. Both questions have been given extensive attention by researchers
working in the social identity tradition. For example, in work relating to social identity theory it has been shown
that merely crafting a cognitive distinction between in-groups and out-groups can lead to subtle effects on people's evaluations
of others (Cote & Levine 2002). The first favours a primordialist approach which takes the sense of self and belonging
to a collective group as a fixed thing, defined by objective criteria such as common ancestry and common biological
characteristics. The second, rooted in social constructionist theory, takes the view that identity is formed by a
predominantly political choice of certain characteristics. In so doing, it questions the idea that identity is a
natural given, characterised by fixed, supposedly objective criteria. Both approaches need to be understood in their
respective political and historical contexts, characterised by debate on issues of class, race and ethnicity. While
they have been criticized, they continue to exert an influence on approaches to the conceptualisation of identity
today. The implications are multiple, as various research traditions now heavily utilize the lens of
identity to examine phenomena. One implication of identity and of identity construction can be seen
in occupational settings. This becomes increasingly challenging in stigmatized jobs or "dirty work" (Hughes, 1951).
Tracy and Trethewey (2005) state that "individuals gravitate toward and turn away from particular jobs depending
in part, on the extent to which they validate a 'preferred organizational self'" (Tracy & Trethewey 2005, p. 169).
Some jobs carry different stigmas or acclaims. In her analysis, Tracy uses the example of correctional officers trying
to shake the stigma of "glorified maids" (Tracy & Trethewey 2005). Tracy and Scott examine "the process by which people arrive at justifications
of and values for various occupational choices", among which are workplace satisfaction and overall quality of life
(Tracy & Scott 2006, p. 33). People in these types of jobs are forced to find ways to create an identity
they can live with. "Crafting a positive sense of self at work is more challenging when one's work is considered
'dirty' by societal standards" (Tracy & Scott 2006, p. 7). In other words, doing taint management is not just about
allowing the employee to feel good in that job. "If employees must navigate discourses that question the viability
of their work, and/or experience obstacles in managing taint through transforming dirty work into a badge of honor,
it is likely they will find blaming the client to be an efficacious route in affirming their identity" (Tracy & Scott
2006, p. 33). However, the formation of one's identity occurs through one's identifications with significant others
(primarily with parents and other individuals during one's biographical experiences, and also with "groups" as they
are perceived). These others may be benign, such that one aspires to their characteristics, values and beliefs (a
process of idealistic-identification), or malign, when one wishes to dissociate from their characteristics (a process
of defensive contra-identification) (Weinreich & Saunderson 2003, Chapter 1, pp 54–61). A person may display either
relative weakness or relative strength in terms of both exploration and commitments. When assigned categories, four
possible permutations result: identity diffusion, identity foreclosure, identity moratorium, and identity achievement.
Diffusion is when a person lacks both exploration in life and interest in committing even to those unchosen roles
that he or she occupies. Foreclosure is when a person has not chosen extensively in the past, but seems willing to
commit to some relevant values, goals, or roles in the future. Moratorium is when a person displays a kind of flightiness,
ready to make choices but unable to commit to them. Finally, achievement is when a person makes identity choices
and commits to them. These different explorations of 'identity' demonstrate how difficult a concept it is to pin
down. Since identity is a virtual thing, it is impossible to define it empirically. Discussions of identity use the
term with different meanings, from fundamental and abiding sameness to fluidity, contingency, negotiation and so
on. Brubaker and Cooper note a tendency in many scholars to confuse identity as a category of practice and as a category
of analysis (Brubaker & Cooper 2000, p. 5). Indeed, many scholars demonstrate a tendency to follow their own preconceptions
of identity, following more or less the frameworks listed above, rather than taking into account the mechanisms by
which the concept is crystallised as reality. In this environment, some analysts, such as Brubaker and Cooper, have
suggested doing away with the concept completely (Brubaker & Cooper 2000, p. 1). Others, by contrast, have sought
to introduce alternative concepts in an attempt to capture the dynamic and fluid qualities of human social self-expression.
Hall (1992, 1996), for example, suggests treating identity as a process, to take into account the reality of diverse
and ever-changing social experience. Some scholars have introduced the idea of identification, whereby identity is
perceived as made up of different components that are 'identified' and interpreted by individuals. The construction
of an individual sense of self is achieved by personal choices regarding who and what to associate with. Such approaches
are liberating in their recognition of the role of the individual in social interaction and the construction of identity.
Gender identity forms an important part of identity in psychology, as it dictates to a significant degree how one
views oneself both as a person and in relation to other people, ideas and nature. Other aspects of identity, such
as racial, religious, ethnic, or occupational, may also be more or less significant – or significant in some situations
but not in others (Weinreich & Saunderson 2003, pp 26–34). In cognitive psychology, the term "identity" refers to the
capacity for self-reflection and the awareness of self (Leary & Tangney 2003, p. 3). Erik Erikson (1902–1994) became
one of the earliest psychologists to take an explicit interest in identity. The Eriksonian framework rests upon a
distinction among the psychological sense of continuity, known as the ego identity (sometimes identified simply as
"the self"); the personal idiosyncrasies that separate one person from the next, known as the personal identity;
and the collection of social roles that a person might play, known as either the social identity or the cultural
identity. Erikson's work, in the psychodynamic tradition, aimed to investigate the process of identity formation
across a lifespan. Progressive strength in the ego identity, for example, can be charted in terms of a series of
stages in which identity is formed in response to increasingly sophisticated challenges. The process of forming a
viable sense of identity for the culture is conceptualized as an adolescent task, and those who do not manage a resynthesis
of childhood identifications are seen as being in a state of 'identity diffusion' whereas those who retain their
initially given identities unquestioned have 'foreclosed' identities (Weinreich & Saunderson 2003, pp 7–8). On some
readings of Erikson, the development of a strong ego identity, along with the proper integration into a stable society
and culture, leads to a stronger sense of identity in general. Accordingly, a deficiency in either of these factors
may increase the chance of an identity crisis or confusion (Cote & Levine 2002, p. 22). Laing's definition of identity
closely follows Erikson's, in emphasising the past, present and future components of the experienced self. He also
develops the concept of the "metaperspective of self", i.e. the self's perception of the other's view of self, which
has been found to be extremely important in clinical contexts such as anorexia nervosa. (Saunderson and O'Kane, 2005).
Harré also conceptualises components of self/identity – the "person" (the unique being I am to myself and others)
along with aspects of self (including a totality of attributes including beliefs about one's characteristics including
life history), and the personal characteristics displayed to others. Kenneth Gergen formulated additional classifications,
which include the strategic manipulator, the pastiche personality, and the relational self. The strategic manipulator
is a person who begins to regard all senses of identity merely as role-playing exercises, and who gradually becomes
alienated from his or her social "self". The pastiche personality abandons all aspirations toward a true or "essential"
identity, instead viewing social interactions as opportunities to play out, and hence become, the roles they play.
Finally, the relational self is a perspective by which persons abandon all sense of exclusive self, and view all
sense of identity in terms of social engagement with others. For Gergen, these strategies follow one another in phases,
and they are linked to the increase in popularity of postmodern culture and the rise of telecommunications technology.
As a non-directive and flexible analytical tool, the concept of boundaries helps both to map and to define the changeability
and mutability that are characteristic of people's experiences of the self in society. While identity is a volatile,
flexible and abstract 'thing', its manifestations and the ways in which it is exercised are often open to view. Identity
is made evident through the use of markers such as language, dress, behaviour and choice of space, whose effect depends
on their recognition by other social beings. Markers help to create the boundaries that define similarities or differences
between the marker wearer and the marker perceivers; their effectiveness depends on a shared understanding of their
meaning. In a social context, misunderstandings can arise due to a misinterpretation of the significance of specific
markers. Equally, an individual can use markers of identity to exert influence on other people without necessarily
fulfilling all the criteria that an external observer might typically associate with such an abstract identity.
The economy of Himachal Pradesh is currently the third-fastest growing economy in India. Himachal Pradesh
has been ranked fourth in the list of the highest per capita incomes of Indian states. This has made it one of the
wealthiest places in South Asia. An abundance of perennial rivers enables Himachal to sell hydroelectricity
to other states such as Delhi, Punjab, and Rajasthan. The economy of the state is highly dependent on three sources:
hydroelectric power, tourism, and agriculture. After independence, the Chief Commissioner's Province
of H.P. came into being on 15 April 1948 as a result of integration of 28 petty princely states (including feudal
princes and zaildars) in the promontories of the western Himalaya, known in full as the Simla Hills States and four
Punjab southern hill states by issue of the Himachal Pradesh (Administration) Order, 1948 under Sections 3 and 4
of the Extra-Provincial Jurisdiction Act, 1947 (later renamed as the Foreign Jurisdiction Act, 1947 vide A.O. of
1950). The State of Bilaspur was merged into Himachal Pradesh on 1 April 1954 by the Himachal Pradesh and Bilaspur
(New State) Act, 1954. Himachal became a part C state on 26 January 1950 with the implementation of the Constitution
of India, and a Lieutenant Governor was appointed. The Legislative Assembly was elected in 1952. Himachal Pradesh became a
union territory on 1 November 1956. The following areas of Punjab State, namely the Simla, Kangra, Kulu and Lahul and Spiti
Districts, the Nalagarh tehsil of Ambala District, the Lohara, Amb and Una kanungo circles, some area of the Santokhgarh kanungo
circle and some other specified areas of Una tehsil of Hoshiarpur District, besides some parts of the Dhar Kalan kanungo
circle of Pathankot tehsil of Gurdaspur District, were merged with Himachal Pradesh on 1 November 1966 on enactment
of the Punjab Reorganisation Act, 1966 by Parliament. On 18 December 1970, the State of Himachal Pradesh Act was
passed by Parliament and the new state came into being on 25 January 1971. Thus Himachal emerged as the 18th state
of the Indian Union. In the assembly elections held in November 2012, the Congress secured an absolute majority.
The Congress won 36 of the 68 seats while the BJP won only 26 of the 68 seats. Virbhadra Singh was sworn-in as Himachal
Pradesh's Chief Minister for a record sixth term in Shimla on 25 December 2012. Virbhadra Singh, who has held the
top office in Himachal five times in the past, was administered the oath of office and secrecy by Governor Urmila
Singh at an open ceremony at the historic Ridge Maidan in Shimla. Himachal is extremely rich in hydroelectric resources.
The state has about 25% of the national potential in this respect. It has been estimated that about 20,300 MW of hydroelectric
power can be generated in the state by constructing various major, medium, small and mini/micro hydel projects
on the five river basins. The state is also the first state in India to achieve the goal of having a bank account
for every family. At current prices, the total GDP was estimated at ₹ 254 billion, as against
₹ 230 billion in the year 2004–05, showing an increase of 10.5%. Recent years have witnessed the rapid establishment of
international entrepreneurship. Luxury hotels and food franchises of recognised brands, e.g. McDonald's, KFC and
Pizza Hut, have spread rapidly. The state is well known for its handicrafts. The carpets, leather works, shawls, metalware,
woodwork and paintings are worth appreciating. Pashmina shawls are a product that is highly in demand in Himachal
and all over the country. Himachali caps are a famous artwork of the people. The extremely cold winters of Himachal necessitated
wool weaving. Nearly every household in Himachal owns a pit-loom. Wool is considered pure and is used as a ritual
cloth. The well-known woven object is the shawl, ranging from fine pashmina to the coarse desar. Kullu is famous
for its shawls with striking patterns and vibrant colours. Kangra and Dharamshala are famous for Kangra miniature
paintings. The history of the area that now constitutes Himachal Pradesh dates back to the time when the Indus valley
civilisation flourished between 2250 and 1750 BCE. Tribes such as the Koilis, Halis, Dagis, Dhaugris, Dasa, Khasas,
Kinnars, and Kirats inhabited the region from the prehistoric era. During the Vedic period, several small republics
known as "Janapada" existed which were later conquered by the Gupta Empire. After a brief period of supremacy by
King Harshavardhana, the region was once again divided into several local powers headed by chieftains, including
some Rajput principalities. These kingdoms enjoyed a large degree of independence and were invaded by Delhi Sultanate
a number of times. Mahmud Ghaznavi conquered Kangra at the beginning of the 11th century. Timur and Sikander Lodi
also marched through the lower hills of the state and captured a number of forts and fought many battles. Several
hill states acknowledged Mughal suzerainty and paid regular tribute to the Mughals. Himachal has a rich heritage
of handicrafts. These include woolen and pashmina shawls, carpets, silver and metal ware, embroidered chappals, grass
shoes, Kangra and Gompa style paintings, woodwork, horse-hair bangles, wooden and metal utensils and various other
household items. These aesthetic and tasteful handicrafts declined under competition from machine-made goods and
also because of lack of marketing facilities. But now the demand for handicrafts has increased within and outside
the country. A district of Himachal Pradesh is an administrative geographical unit, headed by a Deputy Commissioner
or District Magistrate, an officer belonging to the Indian Administrative Service. The district magistrate or the
deputy commissioner is assisted by a number of officers belonging to Himachal Administrative Service and other Himachal
state services. Each district is subdivided into Sub-Divisions, governed by a sub-divisional magistrate, and again
into Blocks. Blocks consist of panchayats (village councils) and town municipalities. A Superintendent of Police,
an officer belonging to the Indian Police Service, is entrusted with the responsibility of maintaining law and order
and related issues of the district. He is assisted by the officers of the Himachal Police Service and other Himachal
Police officials. The era of planning in Himachal Pradesh started in 1948 along with the rest of India. The first five-year
plan allocated ₹ 52.7 million to Himachal. More than 50% of this expenditure was incurred on road construction since
it was felt that without proper transport facilities, the process of planning and development could not be carried
to the people, who mostly lived an isolated existence in faraway areas. Himachal now ranks fourth in respect of
per capita income among the states of the Indian Union. Census-wise, the state is placed 21st on the population chart,
followed by Tripura at 22nd place. Kangra district was top-ranked with a population of 1,507,223 (21.98%),
Mandi district 999,518 (14.58%), Shimla district 813,384 (11.86%), Solan district 576,670 (8.41%), Sirmaur district
530,164 (7.73%), Una district 521,057 (7.60%), Chamba district 518,844 (7.57%), Hamirpur district 454,293 (6.63%),
Kullu district 437,474 (6.38%), Bilaspur district 382,056 (5.57%), Kinnaur district 84,298 (1.23%) and Lahaul Spiti
31,528 (0.46%). Other religions that form a small percentage are Buddhism and Sikhism. The Lahaulis of Lahaul and
Spiti region are mainly Buddhists. Sikhs mostly live in towns and cities and constitute 1.16% of the state population.
For example, they form 10% of the population in Una District adjoining the state of Punjab and 17% in Shimla, the
state capital. The Buddhists, who constitute 1.15%, are mainly natives and tribals from Lahaul and Spiti, where they form
a majority of 60%, and Kinnaur, where they form 40%; however, the bulk are refugees from Tibet. The Muslims constitute
about 2.18% of the population of Himachal Pradesh. The Gurkhas, a martial tribe, came to power in Nepal in the
year 1768. They consolidated their military power and began to expand their territory. Gradually, the Gorkhas annexed
Sirmour and Shimla. Under the leadership of Amar Singh Thapa, the Gurkhas laid siege to Kangra. They managed to defeat
Sansar Chand Katoch, the ruler of Kangra, in 1806 with the help of many provincial chiefs. However, the Gurkhas could
not capture Kangra fort which came under Maharaja Ranjeet Singh in 1809. After the defeat, the Gurkhas began to expand
towards the south of the state. However, Raja Ram Singh, Raja of Siba State managed to capture the fort of Siba from
the remnants of Lahore Darbar in Samvat 1846, during the First Anglo-Sikh War. They came into direct conflict with
the British along the tarai belt after which the British expelled them from the provinces of the Satluj. The British
gradually emerged as the paramount power. In the revolt of 1857, or first Indian war of independence, arising from
a number of grievances against the British, the people of the hill states were not as politically active as were
those in other parts of the country. They and their rulers, with the exception of Bushahr, remained more or less
inactive. Some, including the rulers of Chamba, Bilaspur, Bhagal and Dhami, rendered help to the British government
during the revolt. Due to extreme variation in elevation, great variation occurs in the climatic conditions of Himachal.
The climate varies from hot and subhumid tropical in the southern tracts to, with more elevation, cold, alpine,
and glacial in the northern and eastern mountain ranges. The state has areas like Dharamsala that receive very heavy
rainfall, as well as those like Lahaul and Spiti that are cold and almost rainless. Broadly, Himachal experiences
three seasons: summer, winter, and rainy season. Summer lasts from mid-April till the end of June and most parts
become very hot (except in the alpine zone which experiences a mild summer) with the average temperature ranging
from 28 to 32 °C (82 to 90 °F). Winter lasts from late November till mid March. Snowfall is common in alpine tracts
(generally above 2,200 metres (7,218 ft) i.e. in the higher and trans-Himalayan region). Himachal Pradesh is governed
through a parliamentary system of representative democracy, a feature the state shares with other Indian states.
Universal suffrage is granted to residents. The legislature consists of elected members and special office bearers
such as the Speaker and the Deputy Speaker who are elected by the members. Assembly meetings are presided over by
the Speaker or the Deputy Speaker in the Speaker's absence. The judiciary is composed of the Himachal Pradesh High
Court and a system of lower courts. Executive authority is vested in the Council of Ministers headed by the Chief
Minister, although the titular head of government is the Governor. The Governor is the head of state appointed by
the President of India. The leader of the party or coalition with a majority in the Legislative Assembly is appointed
as the Chief Minister by the Governor, and the Council of Ministers are appointed by the Governor on the advice of
the Chief Minister. The Council of Ministers reports to the Legislative Assembly. The Assembly is unicameral with
68 Members of the Legislative Assembly (MLA). Terms of office run for 5 years, unless the Assembly is dissolved prior
to the completion of the term. Auxiliary authorities known as panchayats, for which local body elections are regularly
held, govern local affairs. Himachal is famous for its narrow-gauge railways: one is the UNESCO World
Heritage Kalka–Shimla Railway and the other is the Pathankot–Jogindernagar line. The total length of these two tracks is 259
kilometres (161 mi). The Kalka–Shimla Railway track passes through many tunnels, while the Pathankot–Jogindernagar line gently
meanders through a maze of hills and valleys. The state also has a standard-gauge railway track which connects Amb (Una district)
to Delhi. A survey is being conducted to extend this railway line to Kangra (via Nadaun). Other proposed railways
in the state are Baddi-Bilaspur, Dharamsala-Palampur and Bilaspur-Manali-Leh. Himachal Pradesh is famous for its
abundant natural beauty. After the war between Nepal and Britain, also known as the Anglo-Gorkha War (1814–1816),
the British colonial government came into power and the land now comprising Himachal Pradesh became part of the Punjab
Province of British India. In 1950, Himachal was declared a union territory, but after the State of Himachal Pradesh
Act 1971, Himachal emerged as the 18th state of the Republic of India. Hima means snow in Sanskrit, and the literal
meaning of the state's name is "In the lap of Himalayas". It was named by Acharya Diwakar Datt Sharma, one of the
great Sanskrit scholars of Himachal Pradesh. Though situated in a remote part of the country, Himachal Pradesh has
an active community of journalists and publishers. Several newspapers and magazines are published in more than one
language, and their reach extends to almost all the Hindi-speaking states. Radio and TV have also penetrated the state significantly.
Judging by the number of people writing to these media, there is a very large media-aware population in the state.
All major English daily newspapers are available in Shimla and district headquarters. Aapka Faisla, Amar Ujala, Panjab
Kesari and Divya Himachal are widely read Hindi daily newspapers with local editions. Government has alternated
between the Bharatiya Janata Party (BJP) and the Indian National Congress (INC); no third front has ever become significant.
In 2003, the state legislative assembly was won by the Indian National Congress and Virbhadra Singh was elected as
the chief minister of the state. In the assembly elections held in December 2007, the BJP secured a landslide victory.
The BJP won 41 of the 68 seats while the Congress won only 23 of the 68 seats. BJP's Prem Kumar Dhumal was sworn
in as Chief Minister of Himachal Pradesh on 30 December 2007. Though the state is deficient in food grains, it has
gained a lot in other spheres of agricultural production such as seed potato, ginger, vegetables, vegetable seeds,
mushrooms, chicory seeds, hops, olives and fig. Seed potato is mostly grown in the Shimla, Kullu and Lahaul areas.
Special efforts are being made to promote cultivation of crops like olives, figs, hops, mushrooms, flowers, pistachio
nuts, sarda melon and saffron. Solan is the largest vegetable producing district in the state. The district of Sirmaur
is also famous for growing flowers, and is the largest producer of flowers in the state. Himachal was one of the
few states that had remained largely untouched by external customs, largely due to its difficult terrain. With the
technological advancements, the state has changed very rapidly. It is a multireligious, multicultural as well as
multilingual state, like other Indian states. Some of the most commonly spoken languages include Hindi, Pahari, Dogri,
Kangri, Mandyali, Gojri and Kinnauri. The caste communities residing in Himachal include the Khatris, Brahmins
of the Hindu faith, the Sikh Brahmin caste Bhatra, Rajputs, Gujjars, Gaddis, Ghirths (Choudhary), Kannets, Rathis,
Kolis and Soods. There are tribal populations in the state which mainly comprise Kinnars, Pangawals, Sulehria, and
Lahaulis. The people of Himachal Pradesh are very simple and live a traditional 'Pahari' lifestyle. The Indian Institute
of Technology Mandi, Himachal Pradesh University Shimla, Institute of Himalayan Bioresource Technology (IHBT, CSIR
Lab), Palampur, the National Institute of Technology, Hamirpur, the Indian Institute of Information Technology, Una, the
Central University, Dharamshala, AP Goyal (Alakh Prakash Goyal) Shimla University, the Bahra University (Waknaghat,
Solan), the Baddi University of Emerging Sciences and Technologies, Baddi, IEC University, Shoolini University of Biotechnology
and Management Sciences, Solan, Manav Bharti University, Solan, the Jaypee University of Information Technology, Waknaghat,
Eternal University, Sirmaur, and Chitkara University, Solan, are some of the pioneer universities in the state. CSK Himachal
Pradesh Krishi Vishwavidyalya, Palampur, is one of the most renowned hill agriculture institutes in the world. Dr. Yashwant
Singh Parmar University of Horticulture and Forestry has earned a unique distinction in India for imparting teaching,
research and extension education in horticulture, forestry and allied disciplines. Further, state-run Jawaharlal
Nehru Government Engineering College started in 2006 at Sundernagar. Doordarshan is the state-owned television broadcaster.
Doordarshan Shimla also provides programs in the Pahari language. Multi-system operators provide a mix of Nepali, Hindi,
English, and international channels via cable. All India Radio is a public radio station. Private FM stations are
also available in a few cities like Shimla. BSNL, Reliance Infocomm, Tata Indicom, Tata Docomo, Aircel, Vodafone, Idea
Cellular and Airtel are the available cellular phone operators. Broadband internet is available in select towns and cities
and is provided by the state-run BSNL and by other private companies. Dial-up access is provided throughout the state
by BSNL and other providers. Himachal Pradesh is spread across valleys, and 90% of the population lives in villages and towns. Nevertheless, the state has achieved 100% sanitation coverage, and practically no house is without a toilet. The villages are well connected to roads and public health centers, and now to Lokmitra kendras using high-speed broadband. Shimla district has the highest urban population share, at 25%. According to a 2005 Transparency International survey, Himachal
Pradesh is ranked the second-least corrupt state in the country after Kerala. The hill stations of the state are
among the most visited places in the country. The government has successfully enforced environmental protection alongside tourism development, meeting European standards, and it is the only state which forbids the use of polyethylene and
tobacco products.[citation needed]
Nonverbal communication describes the process of conveying meaning in the form of non-word messages. Examples of nonverbal
communication include haptic communication, chronemic communication, gestures, body language, facial expression,
eye contact, and how one dresses. Nonverbal communication also relates to the intent of a message. Examples of intent are voluntary, intentional movements like shaking hands or winking, as well as involuntary ones, such as sweating. Speech
also contains nonverbal elements known as paralanguage, e.g. rhythm, intonation, tempo, and stress. There may even
be a pheromone component. Research has shown that up to 55% of human communication may occur through non-verbal facial
expressions, and a further 38% through paralanguage. It affects communication most at the subconscious level and
establishes trust. Likewise, written texts include nonverbal elements such as handwriting style, spatial arrangement
of words and the use of emoticons to convey emotion. Fungi communicate to coordinate and organize their growth and
development, such as the formation of mycelia and fruiting bodies. Fungi communicate with their own and related species as well as with non-fungal organisms in a great variety of symbiotic interactions, especially with bacteria, unicellular eukaryotes, plants, and insects, through biochemicals of biotic origin. The biochemicals trigger the fungal organism
to react in a specific manner, while if the same chemical molecules are not part of biotic messages, they do not
trigger the fungal organism to react. This implies that fungal organisms can differentiate between molecules taking
part in biotic messages and similar molecules being irrelevant in the situation. So far five different primary signalling
molecules are known to coordinate different behavioral patterns such as filamentation, mating, growth, and pathogenicity.
Behavioral coordination and the production of signaling substances are achieved through interpretation processes that enable the organism to distinguish between self and non-self, to recognize a biotic indicator or a biotic message from similar, related, or non-related species, and even to filter out "noise", i.e. similar molecules without biotic content. Communication
is usually described along a few major dimensions: message (what type of things are communicated), source/emisor/sender/encoder (by whom), form (in which form), channel (through which medium), and destination/receiver/target/decoder (to whom). Wilbur Schramm (1954) also indicated that we should examine the impact that
a message has (both desired and undesired) on the target of the message. Between parties, communication includes
acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many
forms, in one of the various manners of communication. The form depends on the abilities of the group communicating.
Together, communication content and form make messages that are sent towards a destination. The target can be oneself,
another person or being, another entity (such as a corporation or group of beings). Effective verbal or spoken communication
is dependent on a number of factors and cannot be fully isolated from other important interpersonal skills such as
non-verbal communication, listening skills and clarification. Human language can be defined as a system of symbols
(sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" also
refers to common properties of languages. Language learning normally occurs most intensively during human childhood.
Most of the thousands of human languages use patterns of sound or gesture for symbols which enable communication
with others around them. Languages tend to share certain properties, although there are exceptions. There is no defined
line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. Communication is a two-way process, not merely a one-way one. Family communication study looks at topics such as family rules, family roles
or family dialectics and how those factors could affect the communication between family members. Researchers develop
theories to understand communication behaviors. Family communication study also digs deep into certain time periods
of family life, such as marriage, parenthood, or divorce, and how communication functions in those situations. It is important for family members to understand communication as a trusted means of building a well-functioning family. The broad
field of animal communication encompasses most of the issues in ethology. Animal communication can be defined as
any behavior of one animal that affects the current or future behavior of another animal. The study of animal communication,
called zoosemiotics (distinguishable from anthroposemiotics, the study of human communication), has played an important
part in the development of ethology, sociobiology, and the study of animal cognition. Animal communication, and indeed
the understanding of the animal world in general, is a rapidly growing field, and even in the 21st century so far,
a great share of prior understanding related to diverse fields such as personal symbolic name use, animal emotions,
animal culture and learning, and even sexual conduct, long thought to be well understood, has been revolutionized.
A special field of animal communication has been investigated in more detail such as vibrational communication. The
first major model for communication was introduced by Claude Shannon and Warren Weaver for Bell Laboratories in 1949.
The original model was designed to mirror the functioning of radio and telephone technologies. Their initial model
consisted of three primary parts: sender, channel, and receiver. The sender was the part of a telephone a person
spoke into, the channel was the telephone itself, and the receiver was the part of the phone where one could hear
the other person. Shannon and Weaver also recognized that often there is static that interferes with one listening
to a telephone conversation, which they deemed noise. In a simple model, often referred to as the transmission model
or standard view of communication, information or content (e.g. a message in natural language) is sent in some form
(as spoken language) from an emisor/sender/encoder to a destination/receiver/decoder. This common conception
of communication simply views communication as a means of sending and receiving information. The strengths of this
model are simplicity, generality, and quantifiability. Claude Shannon and Warren Weaver structured this model around the following elements: an information source, which produces a message; a transmitter, which encodes the message into signals; a channel, to which signals are adapted for transmission; a noise source, which distorts the signal on its way through the channel; a receiver, which decodes the message from the signal; and a destination, where the message arrives. In a slightly more complex form, a sender and a receiver are linked reciprocally. This
second attitude of communication, referred to as the constitutive model or constructionist view, focuses on how an
individual communicates as the determining factor of the way the message will be interpreted. Communication is viewed
as a conduit; a passage in which information travels from one individual to another and this information becomes
separate from the communication itself. A particular instance of communication is called a speech act. The sender's
personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures,
or gender; which may alter the intended meaning of message contents. In the presence of "communication noise" on
the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech
act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes
of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook,
and that these two code books are, at the very least, similar if not identical. Although something like code books
is implied by the model, they are nowhere represented in the model, which creates many conceptual difficulties. Communication
is observed within the plant organism, i.e. within plant cells and between plant cells, between plants of the same
or related species, and between plants and non-plant organisms, especially in the root zone. Plant roots communicate
with rhizosphere bacteria, fungi, and insects within the soil. These interactions are governed by syntactic, pragmatic,
and semantic rules,[citation needed] and are possible because of the decentralized "nervous system" of plants. The
original meaning of the word "neuron" in Greek is "vegetable fiber" and recent research has shown that most of the
microorganism plant communication processes are neuron-like. Plants also communicate via volatiles when exposed to
herbivory attack behavior, thus warning neighboring plants. In parallel they produce other volatiles to attract parasites
which attack these herbivores. In stress situations plants can overwrite the genomes they inherited from their parents
and revert to that of their grand- or great-grandparents.[citation needed] Theories of coregulation describe communication
as a creative and dynamic continuous process, rather than a discrete exchange of information. Canadian media scholar
Harold Innis theorized that people use different types of media to communicate, and that the medium they choose offers different possibilities for the shape and durability of a society (Wark, McKenzie 1997). His famous example is ancient Egypt, which built itself out of media with very different properties: stone and papyrus. Papyrus is what he called 'space binding': it made possible the transmission of written orders across space and empires, enabling the waging of distant military campaigns and colonial administration. Stone, by contrast, is 'time binding': through the construction of temples and pyramids, rulers could sustain their authority from generation to generation. Through these media, societies could change and shape communication (Wark, McKenzie 1997). Companies with limited resources may choose to engage in only a few of these activities, while larger organizations
may employ a full spectrum of communications. Since it is difficult to develop such a broad range of skills, communications
professionals often specialize in one or two of these areas but usually have at least a working knowledge of most
of them. By far, the most important qualifications communications professionals can possess are excellent writing
ability, good 'people' skills, and the capacity to think critically and strategically.
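The Shannon and Weaver transmission model discussed above (sender → encoder → channel → decoder → receiver, with noise interfering on the channel) can be sketched as a tiny simulation. This is a minimal illustration under assumed conventions, not a standard implementation: the function names and the bit-flip notion of noise are chosen purely for simplicity.

```python
import random

def encode(message: str) -> list[int]:
    # Sender/encoder: convert the message into a signal (here, code points).
    return [ord(ch) for ch in message]

def channel(signal: list[int], noise_level: float, rng: random.Random) -> list[int]:
    # Channel: with probability `noise_level`, noise corrupts a symbol
    # (modelled here as flipping its lowest bit).
    return [s ^ 1 if rng.random() < noise_level else s for s in signal]

def decode(signal: list[int]) -> str:
    # Receiver/decoder: reconstruct the message from the received signal.
    return "".join(chr(s) for s in signal)

rng = random.Random(42)
sent = "hello world"
received = decode(channel(encode(sent), noise_level=0.3, rng=rng))
print(sent, "->", received)  # some characters may arrive corrupted
```

With `noise_level=0.0` the message is recovered exactly, which is the model's point: the fidelity of communication depends on the channel as much as on the sender and receiver.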
Grapes are a type of fruit that grow in clusters of 15 to 300, and can be crimson, black, dark blue, yellow, green, orange,
and pink. "White" grapes are actually green in color, and are evolutionarily derived from the purple grape. Mutations
in two regulatory genes of white grapes turn off production of anthocyanins, which are responsible for the color
of purple grapes. Anthocyanins and other pigment chemicals of the larger family of polyphenols in purple grapes are
responsible for the varying shades of purple in red wines. Grapes are typically an ellipsoid shape resembling a prolate
spheroid. The cultivation of the domesticated grape began 6,000–8,000 years ago in the Near East. Yeast, one of the
earliest domesticated microorganisms, occurs naturally on the skins of grapes, leading to the innovation of alcoholic
drinks such as wine. The earliest archeological evidence for a dominant position of wine-making in human culture
dates from 8,000 years ago in Georgia. The oldest winery was found in Armenia, dating to around 4000 BC.[citation
needed] By the 9th century AD the city of Shiraz was known to produce some of the finest wines in the Middle East.
Thus it has been proposed that Syrah red wine is named after Shiraz, a city in Persia where the grape was used to
make Shirazi wine.[citation needed] Ancient Egyptian hieroglyphics record the cultivation of purple grapes,[citation
needed] and history attests to the ancient Greeks, Phoenicians, and Romans growing purple grapes for both eating
and wine production[citation needed]. The growing of grapes would later spread to other regions in Europe, as well as North Africa, and eventually to North America. Comparing diets among Western countries, researchers have discovered
that although the French tend to eat higher levels of animal fat, the incidence of heart disease remains low in France.
This phenomenon has been termed the French paradox, and is thought to occur from protective benefits of regularly
consuming red wine. Apart from potential benefits of alcohol itself, including reduced platelet aggregation and vasodilation,
polyphenols (e.g., resveratrol), mainly in the grape skin, provide other suspected health benefits. Grape
juice is obtained from crushing and blending grapes into a liquid. The juice is often sold in stores or fermented
and made into wine, brandy, or vinegar. Grape juice that has been pasteurized, removing any naturally occurring yeast,
will not ferment if kept sterile, and thus contains no alcohol. In the wine industry, grape juice that contains 7–23%
of pulp, skins, stems and seeds is often referred to as "must". In North America, the most common grape juice is
purple and made from Concord grapes, while white grape juice is commonly made from Niagara grapes, both of which
are varieties of native American grapes, a different species from European wine grapes. In California, Sultana (known
there as Thompson Seedless) grapes are sometimes diverted from the raisin or table market to produce white juice.
Red wine may offer more health benefits than white because potentially beneficial compounds are present in grape
skin, and only red wine is fermented with skins. The amount of fermentation time a wine spends in contact with grape
skins is an important determinant of its resveratrol content. Ordinary non-muscadine red wine contains between 0.2
and 5.8 mg/L, depending on the grape variety, because it is fermented with the skins, allowing the wine to absorb
the resveratrol. By contrast, a white wine contains lower phenolic contents because it is fermented after removal
of skins. Commercially cultivated grapes can usually be classified as either table or wine grapes, based on their
intended method of consumption: eaten raw (table grapes) or used to make wine (wine grapes). While almost all of
them belong to the same species, Vitis vinifera, table and wine grapes have significant differences, brought about
through selective breeding. Table grape cultivars tend to have large, seedless fruit (see below) with relatively
thin skin. Wine grapes are smaller, usually seeded, and have relatively thick skins (a desirable characteristic in
winemaking, since much of the aroma in wine comes from the skin). Wine grapes also tend to be very sweet: they are
harvested at the time when their juice is approximately 24% sugar by weight. By comparison, commercially produced
"100% grape juice", made from table grapes, is usually around 15% sugar by weight. In the Bible, grapes are first
mentioned when Noah grows them on his farm (Genesis 9:20–21). Instructions concerning wine are given in the book
of Proverbs and in the book of Isaiah, such as in Proverbs 20:1 and Isaiah 5:20–25. Deuteronomy 18:3–5,14:22–27,16:13–15
tell of the use of wine during Jewish feasts. Grapes were also significant to both the Greeks and Romans, and their
god of agriculture, Dionysus, was linked to grapes and wine, being frequently portrayed with grape leaves on his
head. Grapes are especially significant for Christians, who since the Early Church have used wine in their celebration
of the Eucharist. Views on the significance of the wine vary between denominations. In Christian art, grapes often
represent the blood of Christ, such as the grape leaves in Caravaggio’s John the Baptist. There are several sources
of the seedlessness trait, and essentially all commercial cultivars get it from one of three sources: Thompson
Seedless, Russian Seedless, and Black Monukka, all being cultivars of Vitis vinifera. There are currently more than
a dozen varieties of seedless grapes. Several, such as Einset Seedless, Benjamin Gunnels's Prime seedless grapes,
Reliance, and Venus, have been specifically cultivated for hardiness and quality in the relatively cold climates of the northeastern United States and southern Ontario. Anthocyanins tend to be the main polyphenolics in purple grapes
whereas flavan-3-ols (i.e. catechins) are the more abundant phenolic in white varieties. Total phenolic content,
a laboratory index of antioxidant strength, is higher in purple varieties due almost entirely to anthocyanin density
in purple grape skin compared to absence of anthocyanins in white grape skin. It is these anthocyanins that are attracting
the efforts of scientists to define their properties for human health. Phenolic content of grape skin varies with
cultivar, soil composition, climate, geographic origin, and cultivation practices or exposure to diseases, such as
fungal infections. The Catholic Church uses wine in the celebration of the Eucharist because it is part of the tradition
passed down through the ages starting with Jesus Christ at the Last Supper, where Catholics believe the consecrated
bread and wine literally become the body and blood of Jesus Christ, a dogma known as transubstantiation. Wine is
used (not grape juice) both due to its strong Scriptural roots, and also to follow the tradition set by the early
Christian Church. The Code of Canon Law of the Catholic Church (1983), Canon 924 says that the wine used must be
natural, made from grapes of the vine, and not corrupt. In some circumstances, a priest may obtain special permission
to use grape juice for the consecration; however, this is extremely rare and typically requires sufficient impetus to warrant such a dispensation, such as the personal health of the priest.
Computer security, also known as cybersecurity or IT security, is the protection of information systems from theft or damage
to the hardware, the software, and to the information on them, as well as from disruption or misdirection of the
services they provide. It includes controlling physical access to the hardware, as well as protecting against harm
that may come via network access, data and code injection, and due to malpractice by operators, whether intentional,
accidental, or due to them being tricked into deviating from secure procedures. Denial of service attacks are designed
to make a machine or network resource unavailable to its intended users. Attackers can deny service to individual
victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim account to
be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network
attack from a single IP address can be blocked by adding a new firewall rule, many forms of Distributed denial of
service (DDoS) attacks are possible, where the attack comes from a large number of points – and defending is much
more difficult. Such attacks can originate from the zombie computers of a botnet, but a range of other techniques
are possible including reflection and amplification attacks, where innocent systems are fooled into sending traffic
to the victim. If access is gained to a car's internal controller area network, it is possible to disable the brakes
and turn the steering wheel. Computerized engine timing, cruise control, anti-lock brakes, seat belt tensioners,
door locks, airbags and advanced driver assistance systems make these disruptions possible, and self-driving cars
go even further. Connected cars may use Wi-Fi and Bluetooth to communicate with onboard consumer devices, and the
cell phone network to contact concierge and emergency assistance services or get navigational or entertainment information;
each of these networks is a potential entry point for malware or an attacker. Researchers in 2011 were even able
to use a malicious compact disc in a car's stereo system as a successful attack vector, and cars with built-in voice
recognition or remote assistance features have onboard microphones which could be used for eavesdropping. However,
relatively few organisations maintain computer systems with effective detection systems, and fewer still have organised
response mechanisms in place. As a result, as Reuters points out: "Companies for the first time report they are losing
more through electronic theft of data than physical stealing of assets". The primary obstacle to effective eradication
of cyber crime could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it
is basic evidence gathering by using packet capture appliances that puts criminals behind bars. One use of the term
"computer security" refers to technology that is used to implement secure operating systems. In the 1980s the United
States Department of Defense (DoD) used the "Orange Book" standards, but the current international standard ISO/IEC
15408, "Common Criteria" defines a number of progressively more stringent Evaluation Assurance Levels. Many common
operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification
required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design
and Tested") system is Integrity-178B, which is used in the Airbus A380 and several military jets. China's network
security and information technology leadership team was established February 27, 2014. The leadership team is tasked
with national security and long-term development and co-ordination of major issues related to network security and
information technology. Economic, political, cultural, social and military fields as related to network security
and information technology strategy, planning and major macroeconomic policy are being researched. The promotion
of national network security and information technology law are constantly under study for enhanced national security
capabilities. Eavesdropping is the act of surreptitiously listening to a private conversation, typically between
hosts on a network. For instance, programs such as Carnivore and NarusInsight have been used by the FBI and NSA to
eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with
no contact to the outside world) can be eavesdropped upon via monitoring the faint electro-magnetic transmissions
generated by the hardware; TEMPEST is a specification by the NSA referring to these attacks. Desktop computers and
laptops are commonly infected with malware either to gather passwords or financial account information, or to construct
a botnet to attack another target. Smart phones, tablet computers, smart watches, and other mobile devices such as
Quantified Self devices like activity trackers have also become targets and many of these have sensors such as cameras,
microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information,
including sensitive health information. Wi-Fi, Bluetooth, and the cell phone network on any of these devices could be
used as attack vectors, and sensors might be remotely activated after a successful breach. Within computer systems,
two of many security models capable of enforcing privilege separation are access control lists (ACLs) and capability-based
security. Using ACLs to confine programs has been proven to be insecure in many situations, such as if the host computer
can be tricked into indirectly allowing restricted file access, an issue known as the confused deputy problem. It
has also been shown that the promise of ACLs of giving access to an object to only one person can never be guaranteed
in practice. Both of these problems are resolved by capabilities. This does not mean practical flaws exist in all
ACL-based systems, but only that the designers of certain utilities must take responsibility to ensure that they
do not introduce flaws.[citation needed] In 1994, over a hundred intrusions were made by unidentified crackers into
the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able
to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were
able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected
networks of National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force
Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.
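The contrast between ACLs and capability-based security described earlier can be illustrated with a minimal sketch; the class names below are invented for illustration and do not correspond to any real security library. An ACL-protected object decides access by checking the caller's identity at access time, which is what makes the confused deputy problem possible, while a capability is an unforgeable reference whose mere possession grants access.

```python
class AclFile:
    """ACL model: the object keeps a list of principals allowed to read it."""
    def __init__(self, contents: str, allowed: set[str]):
        self.contents = contents
        self.allowed = allowed

    def read(self, principal: str) -> str:
        # The access decision checks the caller's claimed identity; a
        # privileged deputy can be tricked into asking on another's behalf.
        if principal not in self.allowed:
            raise PermissionError(f"{principal} may not read this file")
        return self.contents

class Capability:
    """Capability model: possession of this object *is* the permission."""
    def __init__(self, contents: str):
        self._contents = contents

    def read(self) -> str:
        # No identity check: whoever holds the capability may read.
        return self._contents

acl_file = AclFile("secret", allowed={"alice"})
print(acl_file.read("alice"))   # allowed: "alice" is on the list
cap = Capability("secret")      # handing `cap` to a program grants access
print(cap.read())
```

In the confused deputy scenario, a program holding broad ACL rights can be manipulated into reading on behalf of an unprivileged caller; under the capability model, the caller would have to be handed the `Capability` object itself.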
In July 2015, a hacker group known as "The Impact Team" successfully breached the extramarital relationship website
Ashley Madison. The group claimed that they had taken not only company data but user data as well. After the breach,
The Impact Team dumped emails from the company's CEO, to prove their point, and threatened to dump customer data
unless the website was taken down permanently. With this initial data release, the group stated “Avid Life Media
has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release
all customer records, including profiles with all the customers' secret sexual fantasies and matching credit card
transactions, real names and addresses, and employee documents and emails. The other websites may stay online.” When
Avid Life Media, the parent company that created the Ashley Madison website, did not take the site offline, The Impact Team released two more compressed files, one 9.7 GB and the second 20 GB. After the second data dump, Avid Life Media
CEO Noel Biderman resigned, but the website remained functional. The question of whether the government should intervene in the regulation of cyberspace is a highly polemical one. Indeed, for as long as it has existed, and by definition, cyberspace has been a virtual space free of government intervention. While everyone agrees that improving cybersecurity is more than vital, is the government the best actor to solve this issue? Many government officials and experts think that the government should step in and that there is a crucial need for regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel
discussion at the RSA Security Conference in San Francisco, he believes that the "industry only responds when you
threaten regulation. If industry doesn't respond (to the threat), you have to follow through." On the other hand,
executives from the private sector agree that improvements are necessary, but think that the government intervention
would affect their ability to innovate efficiently. On October 3, 2010, Public Safety Canada unveiled Canada’s Cyber
Security Strategy, following a Speech from the Throne commitment to boost the security of Canadian cyberspace. The
aim of the strategy is to strengthen Canada’s "cyber systems and critical infrastructure sectors, support economic
growth and protect Canadians as they connect to each other and to the world." Three main pillars define the strategy:
securing government systems, partnering to secure vital cyber systems outside the federal government, and helping
Canadians to be secure online. The strategy involves multiple departments and agencies across the Government of Canada.
The Cyber Incident Management Framework for Canada outlines these responsibilities, and provides a plan for coordinated
response between government and other partners in the event of a cyber incident. The Action Plan 2010–2015 for Canada's
Cyber Security Strategy outlines the ongoing implementation of the strategy. Computers control functions at many
utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening
and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected,
but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be
vulnerable to physical damage caused by malicious commands sent to industrial equipment (in that case uranium enrichment
centrifuges) which are infected via removable media. In 2014, the Computer Emergency Readiness Team, a division of
the Department of Homeland Security, investigated 79 hacking incidents at energy companies. Today, computer security
comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of
filtering network data between a host or a network and another network, such as the Internet, and can be implemented
as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating
systems such as Linux, built into the operating system kernel) to provide real time filtering and blocking. Another
implementation is a so-called physical firewall which consists of a separate machine filtering network traffic. Firewalls
are common amongst machines that are permanently connected to the Internet. Serious financial damage has been caused
by security breaches, but because there is no standard model for estimating the cost of an incident, the only data
available is that which is made public by the organizations involved. "Several computer security consulting firms
produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in
general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for
all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology
is basically anecdotal." While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously
introduced during the manufacturing process, hardware-based or assisted computer security also offers an alternative
to software-only computer security. Devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabled USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to compromise them. Each of these is covered in more detail
below. Public Safety Canada’s Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding
to threats to Canada’s critical infrastructure and cyber systems. The CCIRC provides support to mitigate cyber threats,
technical support to respond and recover from targeted cyber attacks, and provides online tools for members of Canada’s
critical infrastructure sectors. The CCIRC posts regular cyber security bulletins on the Public Safety Canada website.
The CCIRC also operates an online reporting tool where individuals and organizations can report a cyber incident.
Canada's Cyber Security Strategy is part of a larger, integrated approach to critical infrastructure protection,
and functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure. This
has led to new terms such as cyberwarfare and cyberterrorism. More and more critical infrastructure is being controlled
via computer programs that, while increasing efficiency, expose new vulnerabilities. The test will be to see if
governments and corporations that control critical systems such as energy, communications and other information will
be able to prevent attacks before they occur. As Jay Cross, the chief scientist of the Internet Time Group, remarked,
"Connectedness begets vulnerability." On September 27, 2010, Public Safety Canada partnered with STOP.THINK.CONNECT,
a coalition of non-profit, private sector, and government organizations dedicated to informing the general public
on how to protect themselves online. On February 4, 2014, the Government of Canada launched the Cyber Security Cooperation
Program. The program is a $1.5 million five-year initiative aimed at improving Canada’s cyber systems through grants
and contributions to projects in support of this objective. Public Safety Canada aims to begin an evaluation of Canada's
Cyber Security Strategy in early 2015. Public Safety Canada administers and routinely updates the GetCyberSafe portal
for Canadian citizens, and carries out Cyber Security Awareness Month during October. An unauthorized user gaining
physical access to a computer is most likely able to directly download data from it. They may also compromise security
by making operating system modifications, installing software worms, keyloggers, or covert listening devices. Even
when the system is protected by standard security measures, these can often be bypassed by booting another
operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module are designed
to prevent these attacks. Clickjacking, also known as a "UI redress attack" or "User Interface redress attack", is a
malicious technique in which an attacker tricks a user into clicking on a button or link on another webpage while
the user intended to click on the top-level page. This is done using multiple transparent or opaque layers. The attacker
is basically "hijacking" the clicks meant for the top-level page and routing them to some other irrelevant page,
most likely owned by someone else. A similar technique can be used to hijack keystrokes: by carefully crafting a combination of stylesheets, iframes, buttons, and text boxes, an attacker can lead a user to believe they are typing their password or other information into an authentic webpage while the input is actually being channeled into an invisible frame controlled by
the attacker. In 1988, only 60,000 computers were connected to the Internet, and most were mainframes, minicomputers
and professional workstations. On November 2, 1988, many started to slow down, because they were running a malicious
code that demanded processor time and that spread itself to other computers – the first internet "computer worm".
The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris, Jr., who said
he "wanted to count how many machines were connected to the Internet". In 2013 and 2014, a Russian/Ukrainian hacking
ring known as "Rescator" broke into Target Corporation computers, stealing roughly 40 million credit card numbers,
and then into Home Depot computers, stealing between 53 and 56 million credit card numbers. Warnings were delivered
at both corporations but were ignored; physical security breaches using self-checkout machines are believed to have played
a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of
threat intelligence operations at security technology company McAfee – meaning that the heists could have easily
been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts
has resulted in major attention from state and federal United States authorities, and the investigation is ongoing.
Berlin starts National Cyber Defense Initiative: On June 16, 2011, the German Minister for Home Affairs officially
opened the new German NCAZ (Nationales Cyber-Abwehrzentrum, National Center for Cyber Defense), located in Bonn. The
NCAZ cooperates closely with the BSI (Bundesamt für Sicherheit in der Informationstechnik, Federal Office for Information Security),
the BKA (Bundeskriminalamt, Federal Police Organisation), the BND (Bundesnachrichtendienst, Federal Intelligence Service),
the MAD (Amt für den Militärischen Abschirmdienst, Military Intelligence Service), and other national organisations in
Germany responsible for national security. According to the Minister, the primary task of the new organisation,
founded on February 23, 2011, is to detect and prevent attacks against the national infrastructure; he mentioned
incidents like Stuxnet as examples.
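The rule-based, first-match filtering that firewalls perform, described at the opening of this section, can be sketched in a few lines. This is a minimal illustration only: the `Rule` fields and the `filter_packet` helper are names invented for this sketch, not any real firewall's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str                 # "allow" or "deny"
    src: Optional[str] = None   # None matches any source address
    dport: Optional[int] = None # None matches any destination port

def filter_packet(rules, src, dport, default="deny"):
    """Return the action of the first rule matching the packet.

    Packets matching no rule fall through to the default policy,
    here deny-by-default, as most firewall configurations recommend.
    """
    for rule in rules:
        if rule.src is not None and rule.src != src:
            continue
        if rule.dport is not None and rule.dport != dport:
            continue
        return rule.action
    return default

# Example rule set: block one known-bad host, then permit HTTPS from anyone.
rules = [
    Rule("deny", src="203.0.113.7"),
    Rule("allow", dport=443),
]

print(filter_packet(rules, "198.51.100.2", 443))  # allow (HTTPS rule)
print(filter_packet(rules, "203.0.113.7", 443))   # deny (host rule matches first)
print(filter_packet(rules, "198.51.100.2", 23))   # deny (default policy)
```

Real packet filters such as Linux's netfilter evaluate far richer rule sets (interfaces, protocols, connection state) inside the kernel, but the first-match-wins evaluation order is the same idea.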
Orthodox Judaism is the approach to religious Judaism which subscribes to a tradition of mass revelation and adheres to the
interpretation and application of the laws and ethics of the Torah as legislated in the Talmudic texts by the Tanaim
and Amoraim. These texts were subsequently developed and applied by later authorities, known as the Gaonim, Rishonim,
and Acharonim. Orthodox Judaism generally includes Modern Orthodox Judaism (אורתודוקסיה מודרנית) and Ultra-Orthodox
or Haredi Judaism (יהדות חרדית), but contained within it is a wide range of philosophies. Although Orthodox Judaism would
probably be considered the mainstream expression of Judaism prior to the 19th century, for some Orthodox Judaism
is a modern self-identification that distinguishes it from traditional pre-modern Judaism. According to the New Jersey
Press Association, several media entities refrain from using the term "ultra-Orthodox", including the Religion Newswriters
Association; JTA, the global Jewish news service; and the Star-Ledger, New Jersey’s largest daily newspaper. The
Star-Ledger was the first mainstream newspaper to drop the term. Several local Jewish papers, including New York's
Jewish Week and Philadelphia's Jewish Exponent have also dropped use of the term. According to Rabbi Shammai Engelmayer,
spiritual leader of Temple Israel Community Center in Cliffside Park and former executive editor of Jewish Week,
this leaves "Orthodox" as "an umbrella term that designates a very widely disparate group of people very loosely
tied together by some core beliefs." Modern Orthodoxy comprises a fairly broad spectrum of movements, each drawing
on several distinct though related philosophies, which in some combination have provided the basis for all variations
of the movement today. In general, Modern Orthodoxy holds that Jewish law is normative and binding, while simultaneously
attaching a positive value to interaction with contemporary society. In this view, Orthodox Judaism can "be enriched"
by its intersection with modernity; further, "modern society creates opportunities to be productive citizens engaged
in the Divine work of transforming the world to benefit humanity". At the same time, in order to preserve the integrity
of halakha, any area of "powerful inconsistency and conflict" between Torah and modern culture must be avoided. Modern
Orthodoxy, additionally, assigns a central role to the "People of Israel". Externally, Orthodox Jews can be identified
by their manner of dress and family lifestyle. Orthodox women dress modestly by keeping most of their skin covered.
Additionally, married women cover their hair, most commonly with a scarf, but also with hats, bandanas,
berets, snoods or, sometimes, wigs. Orthodox men wear a skullcap known as a kipa and often fringes called "tzitzit".
Haredi men often grow beards and always wear black hats and suits, indoors and outdoors. However, Modern Orthodox
Jews are commonly indistinguishable in their dress from those around them. Orthodox Judaism holds that the words
of the Torah, including both the Written Law (Pentateuch) and those parts of the Oral Law which are "halacha leMoshe
m'Sinai", were dictated by God to Moses essentially as they exist today. The laws contained in the Written Torah
were given along with detailed explanations, the Oral Law, as to how to apply and interpret them. Although Orthodox Jews
believe that many elements of current religious law were decreed or added as "fences" around the law by the rabbis,
all Orthodox Jews believe that there is an underlying core of Sinaitic law and that this core of the religious laws
Orthodox Jews know today is thus directly derived from Sinai and directly reflects the Divine will. As such, Orthodox
Jews believe that one must be extremely careful in interpreting Jewish law. Orthodox Judaism holds that, given Jewish
law's Divine origin, no underlying principle may be compromised in accounting for changing political, social or economic
conditions; in this sense, "creativity" and development in Jewish law are limited. In reaction to the emergence of
Reform Judaism, a group of traditionalist German Jews emerged in support of some of the values of the Haskalah, but
also wanted to defend the classic, traditional interpretation of Jewish law and tradition. This group was led by
those who opposed the establishment of a new temple in Hamburg, as reflected in the booklet "Ele Divrei HaBerit".
As a group of Reform rabbis convened in Braunschweig, Rabbi Jacob Ettlinger of Altona published a manifesto entitled
"Shlomei Emunei Yisrael" in German and Hebrew, which 177 rabbis signed. At this time the first Orthodox Jewish
periodical, "Der Treue Zions Waechter", was launched with the Hebrew supplement "Shomer Zion HaNe'eman" (1845–1855).
In later years it was Rav Ettlinger's students Rabbi Samson Raphael Hirsch and Rabbi Azriel Hildesheimer of Berlin
who deepened the awareness and strength of Orthodox Jewry. Rabbi Samson Raphael Hirsch commented on this in 1854. Jewish
historians also note that certain customs of today's Orthodox are not continuations of past practice, but instead
represent innovations that would have been unknown to prior generations. For example, the now-widespread haredi tradition
of cutting a boy's hair for the first time on his third birthday (upshirin or upsheerin, Yiddish for "haircut") "originated
as an Arab custom that parents cut a newborn boy's hair and burned it in a fire as a sacrifice," and "Jews in Palestine
learned this custom from Arabs and adapted it to a special Jewish context." The Ashkenazi prohibition against eating
kitniyot (grains and legumes such as rice, corn, beans, and peanuts) during Passover was explicitly rejected in the
Talmud, has no known precedent before the 12th century and represented a minority position for hundreds of years
thereafter, but nonetheless has remained a mandatory prohibition among Ashkenazi Orthodox Jews due to their historic
adherence to the ReMA's rulings in the Shulchan Aruch. Haredi Judaism advocates segregation from non-Jewish culture,
although not from non-Jewish society entirely. It is characterised by its focus on community-wide Torah study. Haredi
Orthodoxy's differences with Modern Orthodoxy usually lie in interpretation of the nature of traditional halakhic
concepts and in acceptable application of these concepts. Thus, engaging in the commercial world is a legitimate
means to achieving a livelihood, but individuals should participate in modern society as little as possible. The
same outlook is applied with regard to obtaining degrees necessary to enter one's intended profession: where tolerated
in the Haredi society, attending secular institutions of higher education is viewed as a necessary but inferior activity.
Academic interest is instead to be directed toward the religious education found in the yeshiva. Both boys and girls
attend school and may proceed to higher Torah study, starting anywhere between the ages of 13 and 18. A significant
proportion of students, especially boys, remain in yeshiva until marriage (which is often arranged through facilitated
dating – see shiduch), and many study in a kollel (Torah study institute for married men) for many years after marriage.
Most Orthodox men (including many Modern Orthodox), even those not in Kollel, will study Torah daily. Orthodox Judaism
holds that on Mount Sinai, the Written Law was transmitted along with an Oral Law. The words of the Torah (Pentateuch)
were spoken to Moses by God; the laws contained in this Written Torah, the "Mitzvot", were given along with detailed
explanations in the oral tradition as to how to apply and interpret them. Furthermore, the Oral law includes principles
designed to create new rules. The Oral law is held to be transmitted with an extremely high degree of accuracy. Jewish
theologians who choose to emphasize the more evolutionary nature of the Halacha point to a famous story in the Talmud,
where Moses is miraculously transported to the House of Study of Rabbi Akiva and is clearly unable to follow the
ensuing discussion. Orthodox Judaism maintains the historical understanding of Jewish identity. A Jew is someone
who was born to a Jewish mother, or who converts to Judaism in accordance with Jewish law and tradition. Orthodoxy
thus rejects patrilineal descent as a means of establishing Jewish identity. Similarly, Orthodoxy strongly condemns
intermarriage. Intermarriage is seen as a deliberate rejection of Judaism, and an intermarried person is effectively
cut off from most of the Orthodox community. However, some Orthodox Jewish organizations do reach out to intermarried
Jews. There is significant disagreement within Orthodox Judaism, particularly between Haredi Judaism and
Modern Orthodox Judaism, about the extent and circumstances under which the proper application of Halakha should
be re-examined as a result of changing realities. As a general rule, Haredi Jews believe that when at all possible
the law should be maintained as it was understood by their authorities at the time of the Haskalah, believing that it had never
changed. Modern Orthodox authorities are more willing to assume that under scrupulous examination, identical principles
may lead to different applications in the context of modern life. To the Orthodox Jew, halakha is a guide, God's
Law, governing the structure of daily life from the moment he or she wakes up to the moment he or she goes to sleep. It
includes codes of behaviour applicable to a broad range of circumstances (and many hypothetical ones). There are
though a number of halakhic meta-principles that guide the halakhic process and in an instance of opposition between
a specific halakha and a meta-principle, the meta-principle often wins out. Examples of halakhic meta-principles
are: "Deracheha Darchei Noam" - the ways of Torah are pleasant, "Kavod Habriyot" - basic respect for human beings,
"Pikuach Nefesh" - the sanctity of human life. According to Orthodox Judaism, Jewish law today is based on the commandments
in the Torah, as viewed through the discussions and debates contained in classical rabbinic literature, especially
the Mishnah and the Talmud. Orthodox Judaism thus holds that the halakha represents the "will of God", either directly,
or as closely to directly as possible. The laws are from the word of God in the Torah, using a set of rules also
revealed by God to Moses on Mount Sinai, and have been derived with the utmost accuracy and care, and thus the Oral
Law is considered to be no less the word of God. If some of the details of Jewish law may have been lost over the
millennia, they were reconstructed in accordance with internally consistent rules; see The 13 rules by which Jewish
law was derived. Hasidic or Chasidic Judaism overlaps significantly with Haredi Judaism in its engagement with the
secular and commercial world, and in regard to social issues. It precedes the latter, however, and differs in its genesis and
focus. The movement originated in Eastern Europe (what is now Belarus and Ukraine) in the 18th century. Founded
by Israel ben Eliezer, known as the Baal Shem Tov (1698–1760), it emerged in an age of persecution of the Jewish
people, when a schism existed between scholarly and common European Jews. In addition to bridging this class gap,
Hasidic teachings sought to reintroduce joy in the performance of the commandments and in prayer through the popularisation
of Jewish mysticism (this joy had been suppressed in the intense intellectual study of the Talmud). The Ba'al Shem
Tov sought to combine rigorous scholarship with more emotional mitzvah observance. In a practical sense, what distinguishes
Hasidic Judaism from other forms of Haredi Judaism is the close-knit organization of Hasidic communities centered
on a Rebbe (sometimes translated as "Grand Rabbi"), and various customs and modes of dress particular to each community.
In some cases, there are religious ideological distinctions between Hasidic groups, as well. Another phenomenon that
sets Hasidic Judaism apart from general Haredi Judaism is the strong emphasis placed on speaking Yiddish; in (many)
Hasidic households and communities, Yiddish is spoken exclusively. In contrast to the general American Jewish community,
which is dwindling due to low fertility and high intermarriage and assimilation rates, the Orthodox Jewish community
of the United States is growing rapidly. Among Orthodox Jews, the fertility rate stands at about 4.1 children per
family, as compared to 1.9 children per family among non-Orthodox Jews, and intermarriage among Orthodox Jews is
practically non-existent, standing at about 2%, in contrast to a 71% intermarriage rate among non-Orthodox Jews.
In addition, Orthodox Judaism has a growing retention rate; while about half of those raised in Orthodox homes previously
abandoned Orthodox Judaism, that number is declining. According to The New York Times, the high growth rate of Orthodox
Jews will eventually render them the dominant demographic force in New York Jewry. On the other hand, Orthodox Jews
subscribing to Modern Orthodoxy in its American and UK incarnations tend to be far more right-wing than both non-Orthodox
and other Orthodox Jews. While the overwhelming majority of non-Orthodox American Jews are strongly liberal
and supporters of the Democratic Party, the Modern Orthodox subgroup of Orthodox Judaism tends to be far more conservative,
with roughly half describing themselves as political conservatives, and are mostly Republican Party supporters. Modern
Orthodox Jews, compared to both the non-Orthodox American Jewry and the Haredi and Hasidic Jewry, also tend to have
a stronger connection to Israel due to their attachment to Zionism. Modern Orthodoxy, as a stream of Orthodox Judaism
represented by institutions such as the U.S. National Council for Young Israel, is pro-Zionist and thus places a
high national, as well as religious, significance on the State of Israel, and its affiliates are, typically, Zionist
in orientation. It also practices involvement with non-Orthodox Jews that extends beyond "outreach (Kiruv)" to continued
institutional relations and cooperation; see further under Torah Umadda. Other "core beliefs" are a recognition of
the value and importance of secular studies, a commitment to equality of education for both men and women, and a
full acceptance of the importance of being able to financially support oneself and one's family. However, the Orthodox
claim to absolute fidelity to past tradition has been challenged by scholars who contend that the Judaism of the
Middle Ages bore little resemblance to that practiced by today's Orthodox. Rather, the Orthodox community, as a counterreaction
to the liberalism of the Haskalah movement, began to embrace far more stringent halachic practices than their predecessors,
most notably in matters of Kashrut and Passover dietary laws, where the strictest possible interpretation becomes
a religious requirement, even where the Talmud explicitly prefers a more lenient position, and even where a more
lenient position was practiced by prior generations. The roots of Orthodox Judaism can be traced to the late 18th
or early 19th century, when elements within German Jewry sought to reform Jewish belief and practice in response
to the Age of Enlightenment, Jewish Emancipation, and the Haskalah. They sought to modernize
education in light of contemporary scholarship. They rejected claims of the absolute divine authorship of the Torah,
declaring only biblical laws concerning ethics to be binding, and stated that the rest of halakha (Jewish law) need
not be viewed as normative for Jews in wider society. (see Reform Judaism). Politically, Orthodox Jews, given their
variety of movements and affiliations, tend not to conform easily to the standard left-right political spectrum,
with one of the key differences between the movements stemming from the groups' attitudes to Zionism. Generally speaking,
of the three key strands of Orthodox Judaism, Haredi Orthodox and Hasidic Orthodox Jews are at best ambivalent towards
the ideology of Zionism and the creation of the State of Israel, and there are many groups and organisations who
are outspokenly anti-Zionistic, seeing the ideology of Zionism as diametrically opposed to the teaching of the Torah,
and the Zionist administration of the State of Israel, with its emphasis on militarism and nationalism, as destructive
of the Judaic way of life. In practice, the emphasis on strictness has resulted in the rise of "homogeneous enclaves"
with other haredi Jews that are less likely to be threatened by assimilation and intermarriage, or even to interact
with other Jews who do not share their doctrines. Nevertheless, this strategy has proved successful and the number
of adherents to Orthodox Judaism, especially Haredi and Chassidic communities, has grown rapidly. Some scholars estimate
more Jewish men are studying in yeshivot (Talmudic schools) and Kollelim (post-graduate Talmudical colleges for married
(male) students) than at any other time in history. Orthodox Judaism collectively considers itself
the only true heir to the Jewish tradition. The Orthodox Jewish movements consider all non-Orthodox Jewish movements
to be unacceptable deviations from authentic Judaism; both because of other denominations' doubt concerning the verbal
revelation of Written and Oral Torah, and because of their rejection of Halakhic precedent as binding. As such, Orthodox
Jewish groups characterize non-Orthodox forms of Judaism as heretical; see the article on Relationships between Jewish
religious movements. Although sizable Orthodox Jewish communities are located throughout the United States, many
American Orthodox Jews live in New York State, particularly in the New York City Metropolitan Area. Two of the main
Orthodox communities in the United States are located in New York City and Rockland County. In New York City, the
neighborhoods of Borough Park, Midwood, Williamsburg, and Crown Heights, located in the borough of Brooklyn, have
particularly large Orthodox communities. The most rapidly growing community of American Orthodox Jews is located
in Rockland County and the Hudson Valley of New York, including the communities of Monsey, Monroe, New Square, and
Kiryas Joel. There are also sizable and rapidly growing Orthodox communities throughout New Jersey, particularly
in Lakewood, Teaneck, Englewood, Passaic, and Fair Lawn. Some scholars believe that Modern Orthodoxy arose from the
religious and social realities of Western European Jewry. While most Jews consider Modern Orthodoxy traditional today,
some (the hareidi and hasidic groups) within the Orthodox community consider some elements to be of questionable
validity. The neo-Orthodox movement holds that Hirsch's views are not accurately followed by Modern Orthodoxy. [See
Torah im Derech Eretz and Torah Umadda "Relationship with Torah im Derech Eretz" for a more extensive listing.] For
guidance in practical application of Jewish law, the majority of Orthodox Jews appeal to the Shulchan Aruch ("Code
of Jewish Law" composed in the 16th century by Rabbi Joseph Caro) together with its surrounding commentaries. Thus,
at a general level, there is a large degree of uniformity amongst all Orthodox Jews. Concerning the details, however,
there is often variance: decisions may be based on various of the standardized codes of Jewish Law that have been
developed over the centuries, as well as on the various responsa. These codes and responsa may differ from each other
as regards detail (and, reflecting the philosophical differences above, as regards the weight assigned to each).
By and large, however, the differences result from the historic dispersal of the Jews and the consequent development
of differences among regions in their practices (see minhag). Hirsch held the opinion that Judaism demands an application
of Torah thought to the entire realm of human experience, including the secular disciplines. His approach was termed
the Torah im Derech Eretz approach, or "neo-Orthodoxy". While insisting on strict adherence to Jewish beliefs and
practices, he held that Jews should attempt to engage and influence the modern world, and encouraged those secular
studies compatible with Torah thought. This pattern of religious and secular involvement has been evident at many
times in Jewish history. Scholars believe it was characteristic of the Jews in Babylon during the Amoraic and
Geonic periods, and likewise in early medieval Spain, shown by their engagement with both Muslim and Christian society.
It appeared as the traditional response to cultural and scientific innovation.
The word "animal" comes from the Latin animalis, meaning having breath, having soul or living being. In everyday non-scientific
usage the word excludes humans – that is, "animal" is often used to refer only to non-human members of the kingdom
Animalia; often, only closer relatives of humans such as mammals, or mammals and other vertebrates, are meant. The
biological definition of the word refers to all members of the kingdom Animalia, encompassing creatures as diverse
as sponges, jellyfish, insects, and humans. All animals have eukaryotic cells, surrounded by a characteristic extracellular
matrix composed of collagen and elastic glycoproteins. This may be calcified to form structures like shells, bones,
and spicules. During development, it forms a relatively flexible framework upon which cells can move about and be
reorganized, making complex structures possible. In contrast, other multicellular organisms, like plants and fungi,
have cells held in place by cell walls, and so develop by progressive growth. Also, unique to animal cells are the
following intercellular junctions: tight junctions, gap junctions, and desmosomes. Predation is a biological interaction
where a predator (a heterotroph that is hunting) feeds on its prey (the organism that is attacked). Predators may
or may not kill their prey prior to feeding on them, but the act of predation almost always results in the death
of the prey. The other main category of consumption is detritivory, the consumption of dead organic matter. It can
at times be difficult to separate the two feeding behaviours, for example, where parasitic species prey on a host
organism and then lay their eggs on it for their offspring to feed on its decaying corpse. Selective pressures imposed
on one another have led to an evolutionary arms race between prey and predator, resulting in various antipredator
adaptations. Among the other phyla, the Ctenophora and the Cnidaria, which includes sea anemones, corals, and jellyfish,
are radially symmetric and have digestive chambers with a single opening, which serves as both the mouth and the
anus. Both have distinct tissues, but they are not organized into organs. There are only two main germ layers, the
ectoderm and endoderm, with only scattered cells between them. As such, these animals are sometimes called diploblastic.
The tiny placozoans are similar, but they do not have a permanent digestive chamber. Animals have several characteristics
that set them apart from other living things. Animals are eukaryotic and multicellular, which separates them from
bacteria and most protists. They are heterotrophic, generally digesting food in an internal chamber, which separates
them from plants and algae. They are also distinguished from plants, algae, and fungi by lacking rigid cell walls.
All animals are motile, if only at certain life stages. In most animals, embryos pass through a blastula stage, which
is a characteristic exclusive to animals. A zygote initially develops into a hollow sphere, called a blastula, which
undergoes rearrangement and differentiation. In sponges, blastula larvae swim to a new location and develop into
a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to
form a gastrula with a digestive chamber, and two separate germ layers — an external ectoderm and an internal endoderm.
In most cases, a mesoderm also develops between them. These germ layers then differentiate to form tissues and organs.
Some paleontologists suggest that animals appeared much earlier than the Cambrian explosion, possibly as early as
1 billion years ago. Trace fossils such as tracks and burrows found in the Tonian period indicate the presence of
triploblastic worm-like metazoans, roughly as large (about 5 mm wide) and complex as earthworms. During the beginning
of the Tonian period around 1 billion years ago, there was a decrease in Stromatolite diversity, which may indicate
the appearance of grazing animals, since stromatolite diversity increased when grazing animals went extinct at the
End-Permian and End-Ordovician extinction events, and decreased shortly after the grazer populations recovered. However,
the discovery that tracks very similar to these early trace fossils are produced today by the giant single-celled
protist Gromia sphaerica casts doubt on their interpretation as evidence of early animal evolution. Animals are generally
considered to have evolved from a flagellated eukaryote. Their closest known living relatives are the choanoflagellates,
collared flagellates that have a morphology similar to the choanocytes of certain sponges. Molecular studies place
animals in a supergroup called the opisthokonts, which also includes the choanoflagellates, fungi and a few small
parasitic protists. The name comes from the posterior location of the flagellum in motile cells, such as most animal
spermatozoa, whereas other eukaryotes tend to have anterior flagella. The remaining animals form a monophyletic group
called the Bilateria. For the most part, they are bilaterally symmetric, and often have a specialized head with feeding
and sensory organs. The body is triploblastic, i.e. all three germ layers are well-developed, and tissues form distinct
organs. The digestive chamber has two openings, a mouth and an anus, and there is also an internal body cavity called
a coelom or pseudocoelom. There are exceptions to each of these characteristics, however — for instance adult echinoderms
are radially symmetric, and certain parasitic worms have extremely simplified body structures. Traditional morphological
and modern molecular phylogenetic analysis have both recognized a major evolutionary transition from "non-bilaterian"
animals, which are those lacking a bilaterally symmetric body plan (Porifera, Ctenophora, Cnidaria and Placozoa),
to "bilaterian" animals (Bilateria) whose body plans display bilateral symmetry. The latter are further classified
based on a major division between Deuterostomes and Protostomes. The relationships among non-bilaterian animals are
disputed, but all bilaterian animals are thought to form a monophyletic group. Current understanding of the relationships
among the major groups of animals is summarized by a cladogram. The Ecdysozoa are protostomes, named
after the common trait of growth by moulting or ecdysis. The largest animal phylum belongs here, the Arthropoda,
including insects, spiders, crabs, and their kin. All these organisms have a body divided into repeating segments,
typically with paired appendages. Two smaller phyla, the Onychophora and Tardigrada, are close relatives of the arthropods
and share these traits. The ecdysozoans also include the Nematoda or roundworms, perhaps the second largest animal
phylum. Roundworms are typically microscopic, and occur in nearly every environment where there is water. A number
are important parasites. Smaller phyla related to them are the Nematomorpha or horsehair worms, and the Kinorhyncha,
Priapulida, and Loricifera. These groups have a reduced coelom, called a pseudocoelom. Because of the great diversity
found in animals, it is more economical for scientists to study a small number of chosen species so that connections
can be drawn from their work and conclusions extrapolated about how animals function in general. Because they are
easy to keep and breed, the fruit fly Drosophila melanogaster and the nematode Caenorhabditis elegans have long been
the most intensively studied metazoan model organisms, and were among the first life-forms to be genetically sequenced.
This was facilitated by the severely reduced state of their genomes, but with many genes, introns, and linkages lost,
these ecdysozoans can teach us little about the origins of animals in general. The extent of this type of evolution
within the superphylum will be revealed by the crustacean, annelid, and molluscan genome projects currently in progress.
Analysis of the starlet sea anemone genome has emphasised the importance of sponges, placozoans, and choanoflagellates,
also being sequenced, in explaining the arrival of 1500 ancestral genes unique to the Eumetazoa. The Lophotrochozoa,
evolved within Protostomia, include two of the most successful animal phyla, the Mollusca and Annelida. The former,
which is the second-largest animal phylum by number of described species, includes animals such as snails, clams,
and squids, and the latter comprises the segmented worms, such as earthworms and leeches. These two groups have long
been considered close relatives because of the common presence of trochophore larvae, but the annelids were considered
closer to the arthropods because they are both segmented. Now, this is generally considered convergent evolution,
owing to many morphological and genetic differences between the two phyla. The Lophotrochozoa also include the Nemertea
or ribbon worms, the Sipuncula, and several phyla that have a ring of ciliated tentacles around the mouth, called
a lophophore. These were traditionally grouped together as the lophophorates, but it now appears that the lophophorate
group may be paraphyletic, with some closer to the nemerteans and some to the molluscs and annelids. They include
the Brachiopoda or lamp shells, which are prominent in the fossil record, the Entoprocta, the Phoronida, and possibly
the Bryozoa or moss animals. Several animal phyla are recognized for their lack of bilateral symmetry, and are thought
to have diverged from other animals early in evolution. Among these, the sponges (Porifera) were long thought to
have diverged first, representing the oldest animal phylum. They lack the complex organization found in most other
phyla. Their cells are differentiated, but in most cases not organized into distinct tissues. Sponges typically feed
by drawing in water through pores. However, a series of phylogenomic studies from 2008 to 2015 has found support for
Ctenophora, or comb jellies, as the basal lineage of animals. This result has been controversial, since it would
imply that sponges may not be so primitive, but may instead be secondarily simplified. Other researchers have
argued that the placement of Ctenophora as the earliest-diverging animal phylum is a statistical anomaly caused by
the high rate of evolution in ctenophore genomes. Deuterostomes differ from protostomes in several ways. Animals
from both groups possess a complete digestive tract. However, in protostomes, the first opening of the gut to appear
in embryological development (the archenteron) develops into the mouth, with the anus forming secondarily. In deuterostomes
the anus forms first, with the mouth developing secondarily. In most protostomes, cells simply fill in the interior
of the gastrula to form the mesoderm, called schizocoelous development, but in deuterostomes, it forms through invagination
of the endoderm, called enterocoelic pouching. Deuterostome embryos undergo radial cleavage during cell division,
while protostomes undergo spiral cleavage. The Platyzoa include the phylum Platyhelminthes, the flatworms. These
were originally considered some of the most primitive Bilateria, but it now appears they developed from more complex
ancestors. A number of parasites are included in this group, such as the flukes and tapeworms. Flatworms are acoelomates,
lacking a body cavity, as are their closest relatives, the microscopic Gastrotricha. The other platyzoan phyla are
mostly microscopic and pseudocoelomate. The most prominent are the Rotifera or rotifers, which are common in aqueous
environments. They also include the Acanthocephala or spiny-headed worms, the Gnathostomulida, Micrognathozoa, and
possibly the Cycliophora. These groups share the presence of complex jaws, from which they are called the Gnathifera.
Most animals indirectly use the energy of sunlight by eating plants or plant-eating animals. Most plants use light
to convert inorganic molecules in their environment into carbohydrates, fats, proteins and other biomolecules, characteristically
containing reduced carbon in the form of carbon-hydrogen bonds. Starting with carbon dioxide (CO2) and water (H2O),
photosynthesis converts the energy of sunlight into chemical energy in the form of simple sugars (e.g., glucose),
with the release of molecular oxygen. These sugars are then used as the building blocks for plant growth, including
the production of other biomolecules. When an animal eats plants (or eats other animals which have eaten plants),
the reduced carbon compounds in the food become a source of energy and building materials for the animal. They are
either used directly to help the animal grow, or broken down, releasing stored solar energy, and giving the animal
the energy required for motion.
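The photosynthetic conversion just described can be summarized by the standard balanced overall equation (not given in the text above, but consistent with it):

```
6 CO2 + 6 H2O + light energy -> C6H12O6 + 6 O2
```

Each mole of glucose formed stores part of the captured light energy in its carbon-hydrogen bonds, the reduced carbon the passage refers to.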
The process of making beer is known as brewing. A dedicated building for the making of beer is called a brewery, though beer
can be made in the home and has been for much of its history. A company that makes beer is called either a brewery
or a brewing company. Beer made on a domestic scale for non-commercial reasons is classified as homebrewing regardless
of where it is made, though most homebrewed beer is made in the home. Brewing beer is subject to legislation and
taxation in developed countries, which from the late 19th century largely restricted brewing to a commercial operation
only. However, the UK government relaxed legislation in 1963, followed by Australia in 1972 and the US in 1978, allowing
homebrewing to become a popular hobby. After boiling, the hopped wort is now cooled, ready for the yeast. In some
breweries, the hopped wort may pass through a hopback, which is a small vat filled with hops, to add aromatic hop
flavouring and to act as a filter; but usually the hopped wort is simply cooled for the fermenter, where the yeast
is added. During fermentation, the wort becomes beer in a process which requires a week to months depending on the
type of yeast and strength of the beer. In addition to producing ethanol, fine particulate matter suspended in the
wort settles during fermentation. Once fermentation is complete, the yeast also settles, leaving the beer clear.
Nearly all beer includes barley malt as the majority of the starch. This is because its fibrous hull remains attached
to the grain during threshing. After malting, barley is milled, which finally removes the hull, breaking it into
large pieces. These pieces remain with the grain during the mash, and act as a filter bed during lautering, when
sweet wort is separated from insoluble grain material. Other malted and unmalted grains (including wheat, rice, oats,
and rye, and less frequently, corn and sorghum) may be used. Some brewers have produced gluten-free beer, made with
sorghum with no barley malt, for those who cannot consume gluten-containing grains like wheat, barley, and rye. A
microbrewery, or craft brewery, produces a limited amount of beer. The maximum amount of beer a brewery can produce
and still be classed as a microbrewery varies by region and by authority, though is usually around 15,000 barrels
(1.8 megalitres, 396 thousand imperial gallons or 475 thousand US gallons) a year. A brewpub is a type of microbrewery
that incorporates a pub or other eating establishment. The highest density of breweries in the world, most of them
microbreweries, exists in the German Region of Franconia, especially in the district of Upper Franconia, which has
about 200 breweries. The Benedictine Weihenstephan Brewery in Bavaria, Germany, can trace its roots to the year 768,
as a document from that year refers to a hop garden in the area paying a tithe to the monastery. The brewery was
licensed by the City of Freising in 1040, and therefore is the oldest working brewery in the world. The alcohol in
beer comes primarily from the metabolism of sugars that are produced during fermentation. The quantity of fermentable
sugars in the wort and the variety of yeast used to ferment the wort are the primary factors that determine the amount
of alcohol in the final beer. Additional fermentable sugars are sometimes added to increase alcohol content, and
enzymes are often added to the wort for certain styles of beer (primarily "light" beers) to convert more complex
carbohydrates (starches) to fermentable sugars. Alcohol is a by-product of yeast metabolism and is toxic to the yeast;
typical brewing yeast cannot survive at alcohol concentrations above 12% by volume. Low temperatures and too little
fermentation time decrease the effectiveness of yeasts and consequently decrease the alcohol content. Cask-conditioned
ales (or cask ales) are unfiltered and unpasteurised beers. These beers are termed "real ale" by the CAMRA organisation.
Typically, when a cask arrives in a pub, it is placed horizontally on a frame called a "stillage" which is designed
to hold it steady and at the right angle, and then allowed to cool to cellar temperature (typically between 11–13
°C or 52–55 °F), before being tapped and vented—a tap is driven through a (usually rubber) bung at the bottom of
one end, and a hard spile or other implement is used to open a hole in the side of the cask, which is now uppermost.
The act of stillaging and then venting a beer in this manner typically disturbs all the sediment, so it must be left
for a suitable period to "drop" (clear) again, as well as to fully condition—this period can take anywhere from several
hours to several days. At this point the beer is ready to sell, either being pulled through a beer line with a hand
pump, or simply being "gravity-fed" directly into the glass. Beer contains ethyl alcohol, the same chemical that
is present in wine and distilled spirits and as such, beer consumption has short-term psychological and physiological
effects on the user. Different concentrations of alcohol in the human body have different effects on a person. The
effects of alcohol depend on the amount an individual has drunk, the percentage of alcohol in the beer and the timespan
that the consumption took place, the amount of food eaten and whether an individual has taken other prescription,
over-the-counter or street drugs, among other factors. Drinking enough to cause a blood alcohol concentration (BAC)
of 0.03%-0.12% typically causes an overall improvement in mood and possible euphoria, increased self-confidence and
sociability, decreased anxiety, a flushed, red appearance in the face and impaired judgment and fine muscle coordination.
A BAC of 0.09% to 0.25% causes lethargy, sedation, balance problems and blurred vision. A BAC from 0.18% to 0.30%
causes profound confusion, impaired speech (e.g., slurred speech), staggering, dizziness and vomiting. A BAC from
0.25% to 0.40% causes stupor, unconsciousness, anterograde amnesia, vomiting (death may occur due to inhalation of
vomit (pulmonary aspiration) while unconscious), and respiratory depression (potentially life-threatening). A BAC from
0.35% to 0.80% causes a coma (unconsciousness), life-threatening respiratory depression and possibly fatal alcohol
poisoning. As with all alcoholic drinks, drinking while driving, operating an aircraft or heavy machinery increases
the risk of an accident; many countries have penalties against drunk driving. Beer is sold in bottles and cans; it
may also be available on draught, particularly in pubs and bars. The brewing industry is a global business, consisting
of several dominant multinational companies and many thousands of smaller producers ranging from brewpubs to regional
breweries. The strength of beer is usually around 4% to 6% alcohol by volume (abv), although it may vary between
0.5% and 20%, with some breweries creating examples of 40% abv and above. Beer forms part of the culture of beer-drinking
nations and is associated with social traditions such as beer festivals, as well as a rich pub culture involving
activities like pub crawling, and pub games such as bar billiards. Beer is composed mostly of water. Regions have
water with different mineral components; as a result, different regions were originally better suited to making certain
types of beer, thus giving them a regional character. For example, Dublin has hard water well-suited to making stout,
such as Guinness; while the Plzeň Region has soft water well-suited to making Pilsner (pale lager), such as Pilsner
Urquell. The waters of Burton in England contain gypsum, which benefits making pale ale to such a degree that brewers
of pale ales will add gypsum to the local water in a process known as Burtonisation. In 1516, William IV, Duke of
Bavaria, adopted the Reinheitsgebot (purity law), perhaps the oldest food-quality regulation still in use in the
21st century, according to which the only allowed ingredients of beer are water, hops and barley-malt. Beer produced
before the Industrial Revolution continued to be made and sold on a domestic scale, although by the 7th century AD,
beer was also being produced and sold by European monasteries. During the Industrial Revolution, the production of
beer moved from artisanal manufacture to industrial manufacture, and domestic manufacture ceased to be significant
by the end of the 19th century. The development of hydrometers and thermometers changed brewing by allowing the brewer
more control of the process and greater knowledge of the results. Hops contain several characteristics that brewers
desire in beer. Hops contribute a bitterness that balances the sweetness of the malt; the bitterness of beers is
measured on the International Bitterness Units scale. Hops contribute floral, citrus, and herbal aromas and flavours
to beer. Hops have an antibiotic effect that favours the activity of brewer's yeast over less desirable microorganisms
and aids in "head retention", the length of time that a foamy head created by carbonation will last. The acidity
of hops is a preservative. The first step, where the wort is prepared by mixing the starch source (normally malted
barley) with hot water, is known as "mashing". Hot water (known as "liquor" in brewing terms) is mixed with crushed
malt or malts (known as "grist") in a mash tun. The mashing process takes around 1 to 2 hours, during which the starches
are converted to sugars, and then the sweet wort is drained off the grains. The grains are now washed in a process
known as "sparging". This washing allows the brewer to gather as much of the fermentable liquid from the grains as
possible. The process of filtering the spent grain from the wort and sparge water is called wort separation. The
traditional process for wort separation is lautering, in which the grain bed itself serves as the filter medium.
Some modern breweries prefer the use of filter frames which allow a more finely ground grist. The basic ingredients
of beer are water; a starch source, such as malted barley, able to be saccharified (converted to sugars) then fermented
(converted into ethanol and carbon dioxide); a brewer's yeast to produce the fermentation; and a flavouring such
as hops. A mixture of starch sources may be used, with a secondary starch source, such as maize (corn), rice or sugar,
often being termed an adjunct, especially when used as a lower-cost substitute for malted barley. Less widely used
starch sources include millet, sorghum and cassava root in Africa, potato in Brazil, and agave in Mexico, among
others. The amount of each starch source in a beer recipe is collectively called the grain bill. The first historical
mention of the use of hops in beer was from 822 AD in monastery rules written by Adalhard the Elder, also known as
Adalard of Corbie, though the date normally given for widespread cultivation of hops for use in beer is the thirteenth
century. Before the thirteenth century, and until the sixteenth century, during which hops took over as the dominant
flavouring, beer was flavoured with other plants; for instance, grains of paradise or alehoof. Combinations of various
aromatic herbs, berries, and even ingredients like wormwood would be combined into a mixture known as gruit and used
as hops are now used. Some beers today, such as Fraoch by the Scottish Heather Ales company and Cervoise Lancelot
by the French Brasserie-Lancelot company, use plants other than hops for flavouring. Stout and porter are dark beers
made using roasted malts or roast barley, and typically brewed with slow fermenting yeast. There are a number of
variations including Baltic porter, dry stout, and Imperial stout. The name Porter was first used in 1721 to describe
a dark brown beer popular with the street and river porters of London. This same beer later also became known as
stout, though the word stout had been used as early as 1677. The history and development of stout and porter are
intertwined. Many beers are sold in cans, though there is considerable variation in the proportion between different
countries. In Sweden in 2001, 63.9% of beer was sold in cans. People either drink from the can or pour the beer into
a glass. A technology developed by Crown Holdings for the 2010 FIFA World Cup is the 'full aperture' can, so named
because the entire lid is removed during the opening process, turning the can into a drinking cup. Cans protect the
beer from light (thereby preventing "skunked" beer) and have a seal less prone to leaking over time than bottles.
Cans were initially viewed as a technological breakthrough for maintaining the quality of a beer, then became commonly
associated with less expensive, mass-produced beers, even though the quality of storage in cans is much like that of bottles.
Plastic (PET) bottles are used by some breweries. The product claimed to be the strongest beer made is Schorschbräu's
2011 Schorschbock 57, at 57.5% abv. It was preceded by The End of History, a 55% Belgian ale, made by BrewDog in 2010.
The same company had previously made Sink The Bismarck!, a 41% abv IPA, and Tactical Nuclear Penguin, a 32% abv Imperial
stout. Each of these beers is made using the eisbock method of fractional freezing, in which a strong ale is partially
frozen and the ice is repeatedly removed, until the desired strength is reached, a process that may class the product
as spirits rather than beer. The German brewery Schorschbräu's Schorschbock, a 31% abv eisbock, and Hair of the Dog's
Dave, a 29% abv barley wine made in 1994, used the same fractional freezing method. A 60% abv blend of beer with
whiskey was jokingly claimed as the strongest beer by a Dutch brewery in July 2010. Most beers are cleared of yeast
by filtering when packaged in bottles and cans. However, bottle conditioned beers retain some yeast—either by being
unfiltered, or by being filtered and then reseeded with fresh yeast. It is usually recommended that the beer be poured
slowly, leaving any yeast sediment at the bottom of the bottle. However, some drinkers prefer to pour in the yeast;
this practice is customary with wheat beers. Typically, when serving a hefeweizen wheat beer, 90% of the contents
are poured, and the remainder is swirled to suspend the sediment before pouring it into the glass. Alternatively,
the bottle may be inverted prior to opening. Glass bottles are always used for bottle conditioned beers. It is considered
that overeating and lack of muscle tone, rather than beer consumption, are the main causes of a beer belly. A 2004 study, however, found a link between binge drinking and a beer belly; even so, the problem generally lies more in lack of exercise and overconsumption of carbohydrates than in the drink itself. Several diet books quote beer
as having an undesirably high glycemic index of 110, the same as maltose; however, the maltose in beer undergoes
metabolism by yeast during fermentation so that beer consists mostly of water, hop oils and only trace amounts of
sugars, including maltose. Around the world, there are many traditional and ancient starch-based drinks classed as
beer. In Africa, there are various ethnic beers made from sorghum or millet, such as Oshikundu in Namibia and Tella
in Ethiopia. Kyrgyzstan also has a beer made from millet; it is a low alcohol, somewhat porridge-like drink called
"Bozo". Bhutan, Nepal, Tibet and Sikkim also use millet in Chhaang, a popular semi-fermented rice/millet drink in
the eastern Himalayas. Further east in China are found Huangjiu and Choujiu—traditional rice-based beverages related
to beer. The earliest known chemical evidence of barley beer dates to circa 3500–3100 BC from the site of Godin Tepe
in the Zagros Mountains of western Iran. Some of the earliest Sumerian writings contain references to beer; examples
include a prayer to the goddess Ninkasi, known as "The Hymn to Ninkasi", which served as both a prayer as well as
a method of remembering the recipe for beer in a culture with few literate people, and the ancient advice from the ale-wife Siduri to Gilgamesh, recorded in the Epic of Gilgamesh ("Fill your belly. Day and night make merry"), may, at
least in part, have referred to the consumption of beer. The Ebla tablets, discovered in 1974 in Ebla, Syria, show
that beer was produced in the city in 2500 BC. A fermented beverage using rice and fruit was made in China around
7000 BC. Unlike in sake production, mould was not used to saccharify the rice (amylolytic fermentation); the rice was probably
prepared for fermentation by mastication or malting. The sweet wort collected from sparging is put into a kettle,
or "copper" (so called because these vessels were traditionally made from copper), and boiled, usually for about
one hour. During boiling, water in the wort evaporates, but the sugars and other components of the wort remain; this
allows more efficient use of the starch sources in the beer. Boiling also destroys any remaining enzymes left over
from the mashing stage. Hops are added during boiling as a source of bitterness, flavour and aroma. Hops may be added
at more than one point during the boil. The longer the hops are boiled, the more bitterness they contribute, but
the less hop flavour and aroma remains in the beer. The starch source in a beer provides the fermentable material
and is a key determinant of the strength and flavour of the beer. The most common starch source used in beer is malted
grain. Grain is malted by soaking it in water, allowing it to begin germination, and then drying the partially germinated
grain in a kiln. Malting grain produces enzymes that convert starches in the grain into fermentable sugars. Different
roasting times and temperatures are used to produce different colours of malt from the same grain. Darker malts will
produce darker beers. The brewing industry is a global business, consisting of several dominant multinational companies
and many thousands of smaller producers ranging from brewpubs to regional breweries. More than 133 billion litres
(35 billion gallons) are sold per year—producing total global revenues of $294.5 billion (£147.7 billion) in 2006.
The history of breweries has been one of absorbing smaller breweries in order to ensure economy of scale. In 2002
South African Breweries bought the North American Miller Brewing Company to found SABMiller, becoming the second
largest brewery, after North American Anheuser-Busch. In 2004 the Belgian Interbrew was the third largest brewery
by volume and the Brazilian AmBev was the fifth largest. They merged into InBev, becoming the largest brewery. In
2007, SABMiller surpassed InBev and Anheuser-Busch when it acquired Royal Grolsch, brewer of Dutch premium beer brand Grolsch. In 2008, InBev (the second-largest) bought Anheuser-Busch (the third largest), and the new Anheuser-Busch InBev company again became the largest brewer in the world. As of 2015, AB InBev is the largest brewery, with
SABMiller second, and Heineken International third. Beer ranges from less than 3% alcohol by volume (abv) to around
14% abv, though this strength can be increased to around 20% by re-pitching with champagne yeast, and to 55% abv
by the freeze-distilling process. The alcohol content of beer varies by local practice or beer style. The pale lagers
that most consumers are familiar with fall in the range of 4–6%, with a typical abv of 5%. The customary strength
of British ales is quite low, with many session beers being around 4% abv. Some beers, such as table beer, are of
such low alcohol content (1%–4%) that they are served instead of soft drinks in some schools. In many societies,
beer is the most popular alcoholic drink. Various social traditions and activities are associated with beer drinking,
such as playing cards, darts, or other pub games; attending beer festivals; engaging in zythology (the study of beer);
visiting a series of pubs in one evening; visiting breweries; beer-oriented tourism; or rating beer. Drinking games,
such as beer pong, are also popular. A relatively new profession is that of the beer sommelier, who informs restaurant
patrons about beers and food pairings. Beer is the world's most widely consumed and likely the oldest alcoholic beverage;
it is the third most popular drink overall, after water and tea. The production of beer is called brewing, which
involves the fermentation of starches, mainly derived from cereal grains—most commonly malted barley, although wheat,
maize (corn), and rice are widely used. Most beer is flavoured with hops, which add bitterness and act as a natural
preservative, though other flavourings such as herbs or fruit may occasionally be included. The fermentation process
causes a natural carbonation effect which is often removed during processing, and replaced with forced carbonation.
Some of humanity's earliest known writings refer to the production and distribution of beer: the Code of Hammurabi
included laws regulating beer and beer parlours, and "The Hymn to Ninkasi", a prayer to the Mesopotamian goddess
of beer, served as both a prayer and as a method of remembering the recipe for beer in a culture with few literate
people. Beer was spread through Europe by Germanic and Celtic tribes as far back as 3000 BC, and it was mainly brewed
on a domestic scale. The product that the early Europeans drank might not be recognised as beer by most people today.
Alongside the basic starch source, the early European beers might contain fruits, honey, numerous types of plants,
spices and other substances such as narcotic herbs. What they did not contain was hops, as that was a later addition,
first mentioned in Europe around 822 by a Carolingian Abbot and again in 1067 by Abbess Hildegard of Bingen. The
word ale comes from Old English ealu (plural ealoþ), in turn from Proto-Germanic *alu (plural *aluþ), ultimately
from the Proto-Indo-European base *h₂elut-, which holds connotations of "sorcery, magic, possession, intoxication".
The word beer comes from Old English bēor, from Proto-Germanic *beuzą, probably from Proto-Indo-European *bʰeusóm,
originally "brewer's yeast, beer dregs", although other theories have been provided connecting the word with Old
English bēow, "barley", or Latin bibere, "to drink". On the currency of two words for the same thing in the Germanic
languages, the 12th-century Old Icelandic poem Alvíssmál says, "Ale it is called among men, but among the gods, beer."
The strength of beers has climbed during the later years of the 20th century. Vetter 33, a 10.5% abv (33 degrees
Plato, hence Vetter "33") doppelbock, was listed in the 1994 Guinness Book of World Records as the strongest beer
at that time, though Samichlaus, by the Swiss brewer Hürlimann, had also been listed by the Guinness Book of World
Records as the strongest at 14% abv. Since then, some brewers have used champagne yeasts to increase the alcohol
content of their beers. Samuel Adams reached 20% abv with Millennium, and then surpassed that amount to 25.6% abv
with Utopias. The strongest beer brewed in Britain was Baz's Super Brew by Parish Brewery, a 23% abv beer. In September
2011, the Scottish brewery BrewDog produced Ghost Deer, which, at 28%, they claim to be the world's strongest beer
produced by fermentation alone. Draught beer's environmental impact can be 68% lower than bottled beer due to packaging
differences. A life cycle study of one beer brand, including grain production, brewing, bottling, distribution and
waste management, shows that the CO2 emissions from a 6-pack of micro-brew beer is about 3 kilograms (6.6 pounds).
The loss of natural habitat potential from the 6-pack of micro-brew beer is estimated to be 2.5 square meters (26
square feet). Downstream emissions from distribution, retail, storage and disposal of waste can be over 45% of a
bottled micro-brew beer's CO2 emissions. Where legal, the use of a refillable jug, reusable bottle or other reusable
containers to transport draught beer from a store or a bar, rather than buying pre-bottled beer, can reduce the environmental
impact of beer consumption. Drinking chilled beer began with the development of artificial refrigeration and by the
1870s had spread to those countries that concentrated on brewing pale lager. Chilling beer makes it more refreshing,
though below 15.5 °C the chilling starts to reduce taste awareness and reduces it significantly below 10 °C (50 °F).
Beer served unchilled, either cool or at room temperature, reveals more of its flavours. Cask Marque, a non-profit
UK beer organisation, has set a temperature standard range of 12°–14 °C (53°–57 °F) for cask ales to be served. The
main active ingredient of beer is alcohol, and therefore, the health effects of alcohol apply to beer. Consumption
of small quantities of alcohol (less than one drink in women and two in men) is associated with a decreased risk
of cardiac disease, stroke and diabetes mellitus. The long term health effects of continuous, moderate or heavy alcohol
consumption include the risk of developing alcoholism and alcoholic liver disease. A total of 3.3 million deaths
(5.9% of all deaths) are believed to be due to alcohol. Alcoholism often reduces a person's life expectancy by around
ten years. Alcohol use is the third leading cause of early death in the United States. A study published in the Neuropsychopharmacology
journal in 2013 revealed the finding that the flavour of beer alone could provoke dopamine activity in the brain
of the male participants, who wanted to drink more as a result. The 49 men in the study underwent positron emission tomography (PET) scans while a computer-controlled device sprayed minute amounts of beer, water and a sports drink onto their tongues. Compared with the taste of the sports drink, the taste of beer significantly increased the participants' desire to drink. Test results indicated that the flavour of the beer triggered a dopamine release, even though the alcohol content in the spray was insufficient to cause intoxication.
The biggest change in the 1930 census was in racial classification. Enumerators were instructed to no longer use the "Mulatto"
classification. Instead, they were given special instructions for reporting the race of interracial persons. A person
with both white and black ancestry (termed "blood") was to be recorded as "Negro," no matter the fraction of that
lineage (the "one-drop rule"). A person of mixed black and American Indian ancestry was also to be recorded as "Neg"
(for "Negro") unless he was considered to be "predominantly" American Indian and accepted as such within the community.
A person with both White and American Indian ancestry was to be recorded as an Indian, unless his American Indian
ancestry was small, and he was accepted as white within the community. In all situations in which a person had White
and some other racial ancestry, he was to be reported as that other race. Persons who had minority interracial ancestry
were to be reported as the race of their father. The race question in Census 2000 differed from previous censuses in several other ways. Most significantly, respondents were given the option of selecting one or more race categories to
indicate racial identities. Data show that nearly seven million Americans identified as members of two or more races.
Because of these changes, the Census 2000 data on race are not directly comparable with data from the 1990 census
or earlier censuses. Use of caution is therefore recommended when interpreting changes in the racial composition
of the US population over time. In September 1997, during the process of revision of racial categories previously
declared by OMB directive no. 15, the American Anthropological Association (AAA) recommended that OMB combine the
"race" and "ethnicity" categories into one question to appear as "race/ethnicity" for the 2000 US Census. The Interagency
Committee agreed, stating that "race" and "ethnicity" were not sufficiently defined and "that many respondents conceptualize
'race' and 'ethnicity' as one in the same [sic] underscor[ing] the need to consolidate these terms into one category,
using a term that is more meaningful to the American people." The racial categories represent a social-political
construct for the race or races that respondents consider themselves to be and "generally reflect a social definition
of race recognized in this country." OMB defines the concept of race as outlined for the U.S. Census as not "scientific
or anthropological" and takes into account "social and cultural characteristics as well as ancestry", using "appropriate
scientific methodologies" that are not "primarily biological or genetic in reference." The race categories include
both racial and national-origin groups. Race and ethnicity are considered separate and distinct identities, with
Hispanic or Latino origin asked as a separate question. Thus, in addition to their race or races, all respondents
are categorized by membership in one of two ethnic categories, which are "Hispanic or Latino" and "Not Hispanic or
Latino". However, the practice of separating "race" and "ethnicity" as different categories has been criticized both
by the American Anthropological Association and members of U.S. Commission on Civil Rights. President Franklin D.
Roosevelt promoted a "good neighbor" policy that sought better relations with Mexico. In 1935 a federal judge ruled
that three Mexican immigrants were ineligible for citizenship because they were not white, as required by federal
law. Mexico protested, and Roosevelt decided to circumvent the decision and make sure the federal government treated
Hispanics as white. The State Department, the Census Bureau, the Labor Department, and other government agencies
therefore made sure to uniformly classify people of Mexican descent as white. This policy encouraged the League of
United Latin American Citizens in its quest to minimize discrimination by asserting their whiteness. In 1997, OMB
issued a Federal Register notice regarding revisions to the standards for the classification of federal data on race
and ethnicity. OMB developed race and ethnic standards in order to provide "consistent data on race and ethnicity
throughout the Federal Government. The development of the data standards stem in large measure from new responsibilities
to enforce civil rights laws." Among the changes, OMB issued the instruction to "mark one or more races" after noting
evidence of increasing numbers of interracial children and wanting to capture the diversity in a measurable way and
having received requests by people who wanted to be able to acknowledge their or their children's full ancestry rather
than identifying with only one group. Prior to this decision, the Census and other government data collections asked
people to report only one race. The 1850 census saw a dramatic shift in the way information about residents was collected.
For the first time, free persons were listed individually instead of by head of household. There were two questionnaires:
one for free inhabitants and one for slaves. The question on the free inhabitants schedule about color was a column
that was to be left blank if a person was white, marked "B" if a person was black, and marked "M" if a person was
mulatto. Slaves were listed by owner, and classified by gender and age, not individually, and the question about
color was a column that was to be marked with a "B" if the slave was black and an "M" if mulatto. Although used in
the Census and the American Community Survey, "Some other race" is not an official race, and the Bureau considered
eliminating it prior to the 2000 Census. As the 2010 census form did not contain the question titled "Ancestry" found
in prior censuses, there were campaigns to get non-Hispanic West Indian Americans, Turkish Americans, Armenian Americans,
Arab Americans and Iranian Americans to indicate their ethnic or national background through the race question, specifically
the "Some other race" category. "Data on ethnic groups are important for putting into effect a number of federal
statutes (i.e., enforcing bilingual election rules under the Voting Rights Act; monitoring and enforcing equal employment
opportunities under the Civil Rights Act). Data on Ethnic Groups are also needed by local governments to run programs
and meet legislative requirements (i.e., identifying segments of the population who may not be receiving medical
services under the Public Health Act; evaluating whether financial institutions are meeting the credit needs of minority
populations under the Community Reinvestment Act)." For 1890, the Census Office changed the design of the population
questionnaire. Residents were still listed individually, but a new questionnaire sheet was used for each family.
Additionally, this was the first year that the census distinguished between different East Asian races, such as Japanese
and Chinese, due to increased immigration. This census also marked the beginning of the term "race" in the questionnaires.
Enumerators were instructed to write "White," "Black," "Mulatto," "Quadroon," "Octoroon," "Chinese," "Japanese,"
or "Indian." The federal government of the United States has mandated that "in data collection and presentation,
federal agencies are required to use a minimum of two ethnicities: 'Hispanic or Latino' and 'Not Hispanic or Latino'."
The Census Bureau defines "Hispanic or Latino" as "a person of Cuban, Mexican, Puerto Rican, South or Central American
or other Spanish culture or origin regardless of race." For discussion of the meaning and scope of the Hispanic or
Latino ethnicity, see the Hispanic and Latino Americans and Racial and ethnic demographics of the United States articles.
Unlike the Spanish milled dollar, the U.S. dollar is based upon a decimal system of values. In addition to the dollar, the Coinage Act officially established monetary units of the mill, or one-thousandth of a dollar (symbol ₥); the cent, or one-hundredth of a dollar (symbol ¢); the dime, or one-tenth of a dollar; and the eagle, or ten dollars, with prescribed weights and composition
of gold, silver, or copper for each. It was proposed in the mid-1800s that one hundred dollars be known as a union,
but no union coins were ever struck and only patterns for the $50 half union exist. However, only cents are in everyday
use as divisions of the dollar; "dime" is used solely as the name of the coin with the value of 10¢, while "eagle"
and "mill" are largely unknown to the general public, though mills are sometimes used in matters of tax levies, and
gasoline prices are usually in the form of $X.XX9 per gallon, e.g., $3.599, sometimes written as $3.59 9⁄10. When
currently issued in circulating form, denominations equal to or less than a dollar are emitted as U.S. coins while
denominations equal to or greater than a dollar are emitted as Federal Reserve notes (with the exception of gold,
silver and platinum coins valued up to $100 as legal tender, but worth far more as bullion). Both one-dollar coins
and notes are produced today, although the note form is significantly more common. In the past, "paper money" was
occasionally issued in denominations less than a dollar (fractional currency) and gold coins were issued for circulation
up to the value of $20 (known as the "double eagle", discontinued in the 1930s). The term eagle was used in the Coinage
Act of 1792 for the denomination of ten dollars, and subsequently was used in naming gold coins. Paper currency less
than one dollar in denomination, known as "fractional currency", was also sometimes pejoratively referred to as "shinplasters".
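The decimal relationships among these units are easy to verify; as a minimal sketch (unit values taken from the Coinage Act discussion above, with amounts held as integer mills to avoid floating-point error):

```python
# Decimal subdivisions of the U.S. dollar, expressed in mills
# (1 mill = $0.001), so every unit is an exact integer multiple.
MILL, CENT, DIME, DOLLAR, EAGLE = 1, 10, 100, 1_000, 10_000

def to_mills(dollars: float) -> int:
    """Convert a dollar amount to integer mills (rounded)."""
    return round(dollars * DOLLAR)

# A gasoline price of $3.599 per gallon is 3,599 mills:
price = to_mills(3.599)
assert price == 3599
assert price // CENT == 359   # whole cents
assert price % CENT == 9      # the trailing 9 mills
assert EAGLE == 10 * DOLLAR == 100 * DIME
```

Holding amounts as integer mills, rather than floating-point dollars, is the standard way to keep such sub-cent prices exact.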
In 1854, James Guthrie, then Secretary of the Treasury, proposed creating $100, $50 and $25 gold coins, which were
referred to as a "Union", "Half Union", and "Quarter Union", thus implying a denomination of 1 Union = $100. The
symbol $, usually written before the numerical amount, is used for the U.S. dollar (as well as for many other currencies).
The sign was the result of a late 18th-century evolution of the scribal abbreviation "ps" for the peso, the common
name for the Spanish dollars that were in wide circulation in the New World from the 16th to the 19th centuries.
These Spanish pesos or dollars were minted in Spanish America, namely in Mexico City; Potosí, Bolivia; and Lima,
Peru. The p and the s eventually came to be written over each other giving rise to $. Though still predominantly
green, post-2004 series incorporate other colors to better distinguish different denominations. As a result of a
2008 decision in an accessibility lawsuit filed by the American Council of the Blind, the Bureau of Engraving and
Printing is planning to implement a raised tactile feature in the next redesign of each note, except the $1 and the
version of the $100 bill already in process. It also plans larger, higher-contrast numerals, more color differences,
and distribution of currency readers to assist the visually impaired during the transition period. The Constitution
of the United States of America provides that the United States Congress has the power "To coin money". Laws implementing
this power are currently codified at 31 U.S.C. § 5112. Section 5112 prescribes the forms in which the United States
dollars should be issued. These coins are both designated in Section 5112 as "legal tender" in payment of debts.
The Sacagawea dollar is one example of the copper alloy dollar. The pure silver dollar is known as the American Silver
Eagle. Section 5112 also provides for the minting and issuance of other coins, which have values ranging from one
cent to 50 dollars. These other coins are more fully described in Coins of the United States dollar. In the 16th
century, Count Hieronymus Schlick of Bohemia began minting coins known as Joachimstalers (from German thal, or nowadays
usually Tal, "valley", cognate with "dale" in English), named for Joachimstal, the valley where the silver was mined
(St. Joachim's Valley, now Jáchymov; then part of the Kingdom of Bohemia, now part of the Czech Republic). Joachimstaler
was later shortened to the German Taler, a word that eventually found its way into Danish and Swedish as daler, Norwegian
as dalar and daler, Dutch as daler or daalder, Ethiopian as ታላሪ (talari), Hungarian as tallér, Italian as tallero,
and English as dollar. Alternatively, thaler is said to come from the German coin Guldengroschen ("great guilder",
being of silver but equal in value to a gold guilder), minted from the silver from Joachimsthal. The early currency
of the United States did not exhibit faces of presidents, as is the custom now; although today, by law, only the
portrait of a deceased individual may appear on United States currency. In fact, the newly formed government was
against having portraits of leaders on the currency, a practice compared to the policies of European monarchs. The
currency as we know it today did not get the faces it currently has until after the early 20th century; before that, the "heads" side of coinage used profile faces and striding, seated, and standing figures from Greek and Roman mythology
and composite Native Americans. The last coins to be converted to profiles of historic Americans were the dime (1946)
and the Dollar (1971). In 1862, paper money was issued without the backing of precious metals, due to the Civil War.
Silver and gold coins continued to be issued and in 1878 the link between paper money and coins was reinstated. This
disconnection from gold and silver backing also occurred during the War of 1812. The use of paper money not backed
by precious metals had also occurred under the Articles of Confederation from 1777 to 1788. With no solid backing
and being easily counterfeited, the continentals quickly lost their value, giving rise to the phrase "not worth a
continental". This was a primary reason for the "No state shall... make any thing but gold and silver coin a tender
in payment of debts" clause in article 1, section 10 of the United States Constitution. In February 2007, the U.S.
Mint, under the Presidential $1 Coin Act of 2005, introduced a new $1 U.S. Presidential dollar coin. Based on the
success of the "50 State Quarters" series, the new coin features a sequence of presidents in order of their inaugurations,
starting with George Washington, on the obverse side. The reverse side features the Statue of Liberty. To allow for
larger, more detailed portraits, the traditional inscriptions of "E Pluribus Unum", "In God We Trust", the year of
minting or issuance, and the mint mark will be inscribed on the edge of the coin instead of the face. This feature,
similar to the edge inscriptions seen on the British £1 coin, is not usually associated with U.S. coin designs. The
inscription "Liberty" has been eliminated, with the Statue of Liberty serving as a sufficient replacement. In addition,
due to the nature of U.S. coins, this will be the first time there will be circulating U.S. coins of different denominations
with the same president featured on the obverse (heads) side (Lincoln/penny, Jefferson/nickel, Franklin D. Roosevelt/dime,
Washington/quarter, Kennedy/half dollar, and Eisenhower/dollar). Another unusual feature of the new $1 coin series is that Grover Cleveland will appear on two coins, because he was the only U.S. president elected to two non-consecutive terms. When the Federal Reserve makes a purchase, it credits the seller's reserve account
(with the Federal Reserve). This money is not transferred from any existing funds—it is at this point that the Federal
Reserve has created new high-powered money. Commercial banks can freely withdraw in cash any excess reserves from
their reserve account at the Federal Reserve. To fulfill those requests, the Federal Reserve places an order for
printed money from the U.S. Treasury Department. The Treasury Department in turn sends these requests to the Bureau
of Engraving and Printing (to print new dollar bills) and the Bureau of the Mint (to stamp the coins). The value
of the U.S. dollar declined significantly during wartime, especially during the American Civil War, World War I,
and World War II. The Federal Reserve, which was established in 1913, was designed to furnish an "elastic" currency
subject to "substantial changes of quantity over short periods", which differed significantly from previous forms
of high-powered money such as gold, national bank notes, and silver coins. Over the very long run, the prior gold
standard kept prices stable—for instance, the price level and the value of the U.S. dollar in 1914 was not very different
from the price level in the 1880s. The Federal Reserve initially succeeded in maintaining the value of the U.S. dollar
and price stability, reversing the inflation caused by the First World War and stabilizing the value of the dollar
during the 1920s, before presiding over a 30% deflation in U.S. prices in the 1930s. There is ongoing debate about
whether central banks should target zero inflation (which would mean a constant value for the U.S. dollar over time)
or low, stable inflation (which would mean a continuously but slowly declining value of the dollar over time, as
is the case now). Although some economists are in favor of a zero inflation policy and therefore a constant value
for the U.S. dollar, others contend that such a policy limits the ability of the central bank to control interest
rates and stimulate the economy when needed. The word "dollar" is one of the words in the first paragraph of Section
9 of Article 1 of the U.S. Constitution. In that context, "dollars" is a reference to the Spanish milled dollar,
a coin that had a monetary value of 8 Spanish units of currency, or reales. In 1792 the U.S. Congress adopted legislation
titled An act establishing a mint, and regulating the Coins of the United States. Section 9 of that act authorized
the production of various coins, including "DOLLARS OR UNITS—each to be of the value of a Spanish milled dollar as
the same is now current, and to contain three hundred and seventy-one grains and four sixteenth parts of a grain
of pure, or four hundred and sixteen grains of standard silver". Section 20 of the act provided, "That the money
of account of the United States shall be expressed in dollars, or units... and that all accounts in the public offices
and all proceedings in the courts of the United States shall be kept and had in conformity to this regulation". In
other words, this act designated the United States dollar as the unit of currency of the United States. A "grand",
sometimes shortened to simply "G", is a common term for the amount of $1,000. The suffix "K" or "k" (from "kilo-")
is also commonly used to denote this amount (such as "$10k" to mean $10,000). However, the $1,000 note is no longer
in general use. A "large" or a "stack" usually refers to a multiple of $1,000 (such as "fifty large" meaning
$50,000). The $100 note is nicknamed "Benjamin", "Benji", "Ben", or "Franklin" (after Benjamin Franklin), "C-note"
(C being the Roman numeral for 100), "Century note" or "bill" (e.g. "two bills" being $200). The $50 note is occasionally
called a "yardstick" or a "grant" (after President Ulysses S. Grant, pictured on the obverse). The $20 note is referred
to as a "double sawbuck", "Jackson" (after Andrew Jackson), or "double eagle". The $10 note is referred to as a "sawbuck",
"ten-spot" or "Hamilton" (after Alexander Hamilton). The $5 note as "Lincoln", "fin", "fiver" or "five-spot". The
infrequently-used $2 note is sometimes called "deuce", "Tom", or "Jefferson" (after Thomas Jefferson). The $1 note
as a "single" or "buck". The dollar has also been referred to as a "bone" and "bones" in plural (e.g. "twenty bones"
is equal to $20). The newer designs, with portraits displayed in the main body of the obverse rather than in cameo
insets upon paper color-coded by denomination, are sometimes referred to as "bigface" notes or "Monopoly money".
The U.S. dollar was created by the Constitution and defined by the Coinage Act of 1792. It specified a "dollar", based on the Spanish milled dollar, to contain 371 grains and 4 sixteenth parts of a grain of pure silver or 416 grains (27.0 g) of standard silver, and an "eagle" to contain 247 and 4 eighths of a grain of pure gold or 270 grains (17 g) of standard gold. The choice of the value 371 grains arose from Alexander Hamilton's decision to base the new American
unit on the average weight of a selection of worn Spanish dollars. Hamilton got the treasury to weigh a sample of
Spanish dollars and the average weight came out to be 371 grains. A new Spanish dollar was usually about 377 grains
in weight, and so the new U.S. dollar was at a slight discount in relation to the Spanish dollar. The United States
Mint produces Proof Sets specifically for collectors and speculators. Silver Proofs tend to be the standard designs
but with the dime, quarter, and half dollar containing 90% silver. Starting in 1983 and ending in 1997, the Mint
also produced proof sets containing the year's commemorative coins alongside the regular coins. Another type of proof
set is the Presidential Dollar Proof Set where four special $1 coins are minted each year featuring a president.
Because of budget constraints and increasing stockpiles of these relatively unpopular coins, the production of new
Presidential dollar coins for circulation was suspended on December 13, 2011, by U.S. Treasury Secretary Timothy
F. Geithner. Future minting of such coins will be made solely for collectors. The Constitution provides that "a regular
Statement and Account of the Receipts and Expenditures of all public Money shall be published from time to time".
That provision of the Constitution is made specific by Section 331 of Title 31 of the United States Code. The sums
of money reported in the "Statements" are currently being expressed in U.S. dollars (for example, see the 2009 Financial
Report of the United States Government). The U.S. dollar may therefore be described as the unit of account of the
United States. Currently printed denominations are $1, $2, $5, $10, $20, $50, and $100. Notes above the $100 denomination
stopped being printed in 1946 and were officially withdrawn from circulation in 1969. These notes were used primarily
in inter-bank transactions or by organized crime; it was the latter usage that prompted President Richard Nixon to
issue an executive order in 1969 halting their use. With the advent of electronic banking, they became less necessary.
Notes in denominations of $500, $1,000, $5,000, $10,000 and $100,000 were all produced at one time; see large denomination
bills in U.S. currency for details. These notes are now collectors' items and are worth more than their face value
to collectors. The colloquialism "buck"(s) (much like the British word "quid"(s, pl) for the pound sterling) is often
used to refer to dollars of various nations, including the U.S. dollar. This term, dating to the 18th century, may
have originated with the colonial leather trade. It may also have originated from a poker term. "Greenback" is another
nickname originally applied specifically to the 19th century Demand Note dollars created by Abraham Lincoln to finance
the costs of the Civil War for the North. The original note was printed in black and green on the back side. It is
still used to refer to the U.S. dollar (but not to the dollars of other countries). Other well-known collective names for dollar denominations include "greenmail", "green" and "dead presidents" (the last because deceased
presidents are pictured on most bills). The value of the U.S. dollar was therefore no longer anchored to gold, and
it fell upon the Federal Reserve to maintain the value of the U.S. currency. The Federal Reserve, however, continued
to increase the money supply, resulting in stagflation and a rapidly declining value of the U.S. dollar in the 1970s.
This was largely due to the prevailing economic view at the time that inflation and real economic growth were linked
(the Phillips curve), and so inflation was regarded as relatively benign. Between 1965 and 1981, the U.S. dollar
lost two thirds of its value. The dollar was first based on the value and look of the Spanish dollar, used widely
in Spanish America from the 16th to the 19th centuries. The first dollar coins issued by the United States Mint (founded
1792) were similar in size and composition to the Spanish dollar, minted in Mexico and Peru. The Spanish, U.S. silver
dollars, and later, Mexican silver pesos circulated side by side in the United States, and the Spanish dollar and
Mexican peso remained legal tender until the Coinage Act of 1857. The coinage of various English colonies also circulated.
The lion dollar was popular in the Dutch New Netherland Colony (New York), but the lion dollar also circulated throughout
the English colonies during the 17th century and early 18th century. Examples circulating in the colonies were usually
worn so that the design was not fully distinguishable, thus they were sometimes referred to as "dog dollars". The
Gold Standard Act of 1900 abandoned the bimetallic standard and defined the dollar as 23.22 grains (1.505 g) of gold,
equivalent to setting the price of 1 troy ounce of gold at $20.67. Silver coins continued to be issued for circulation
until 1964, when all silver was removed from dimes and quarters, and the half dollar was reduced to 40% silver. Silver
half dollars were last issued for circulation in 1970. Gold coins were confiscated by Executive Order 6102 issued
in 1933 by Franklin Roosevelt. The gold standard was changed to 13.71 grains (0.888 g), equivalent to setting the
price of 1 troy ounce of gold at $35. This standard persisted until 1968. Early releases of the Washington coin included
error coins shipped primarily from the Philadelphia mint to Florida and Tennessee banks. Highly sought after by collectors,
and trading for as much as $850 each within a week of discovery, the error coins were identified by the absence of
the edge impressions "E PLURIBUS UNUM IN GOD WE TRUST 2007 P". The mint of origin is generally accepted to be mostly
Philadelphia, although identifying the source mint is impossible without opening a mint pack also containing marked
units. Because edge lettering is minted in both orientations with respect to "heads", some amateur collectors were initially duped into buying "upside down lettering error" coins. Some cynics also erroneously argue that the Federal Reserve
makes more profit from dollar bills than dollar coins because they wear out in a few years, whereas coins are more
permanent. The fallacy of this argument arises because new notes printed to replace worn out notes, which have been
withdrawn from circulation, bring in no net revenue to the government to offset the costs of printing new notes and
destroying the old ones. As most vending machines are incapable of making change in banknotes, they commonly accept
only $1 bills, though a few will give change in dollar coins. The U.S. Constitution provides that Congress shall
have the power to "borrow money on the credit of the United States". Congress has exercised that power by authorizing
Federal Reserve Banks to issue Federal Reserve Notes. Those notes are "obligations of the United States" and "shall
be redeemed in lawful money on demand at the Treasury Department of the United States, in the city of Washington,
District of Columbia, or at any Federal Reserve bank". Federal Reserve Notes are designated by law as "legal tender"
for the payment of debts. Congress has also authorized the issuance of more than 10 other types of banknotes, including
the United States Note and the Federal Reserve Bank Note. The Federal Reserve Note is the only type that remains
in circulation since the 1970s. Usually, the short-term goal of open market operations is to achieve a specific short-term
interest rate target. In other instances, monetary policy might instead entail the targeting of a specific exchange
rate relative to some foreign currency or else relative to gold. For example, in the case of the United States the
Federal Reserve targets the federal funds rate, the rate at which member banks lend to one another overnight. The
other primary means of conducting monetary policy include: (i) Discount window lending (as lender of last resort);
(ii) Fractional deposit lending (changes in the reserve requirement); (iii) Moral suasion (cajoling certain market
players to achieve specified outcomes); (iv) "Open mouth operations" (talking monetary policy with the market). Under
the Bretton Woods system established after World War II, the value of gold was fixed to $35 per ounce, and the value
of the U.S. dollar was thus anchored to the value of gold. Rising government spending in the 1960s, however, led
to doubts about the ability of the United States to maintain this convertibility, gold stocks dwindled as banks and
international investors began to convert dollars to gold, and as a result the value of the dollar began to decline.
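The fixed gold prices cited in this article follow arithmetically from the grain definitions of the dollar: a troy ounce is 480 grains, so the price per ounce is simply 480 divided by the gold content of one dollar. A minimal check:

```python
TROY_OUNCE_GRAINS = 480  # grains in one troy ounce

def price_per_ounce(grains_per_dollar: float) -> float:
    """Dollar price of a troy ounce of gold, given the gold content of $1."""
    return TROY_OUNCE_GRAINS / grains_per_dollar

# Gold Standard Act of 1900: $1 = 23.22 grains of gold  ->  $20.67/oz
assert round(price_per_ounce(23.22), 2) == 20.67
# Later standard: $1 = 13.71 grains of gold  ->  $35/oz (the Bretton Woods price)
assert round(price_per_ounce(13.71)) == 35
```

The same division reproduces both figures quoted elsewhere in the article, confirming that the "price of gold" under these standards was a restatement of the dollar's metallic definition rather than a market price.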
Facing an emerging currency crisis and the imminent danger that the United States would no longer be able to redeem
dollars for gold, gold convertibility was finally terminated in 1971 by President Nixon, resulting in the "Nixon
shock". The U.S. dollar is fiat money. It is the currency most used in international transactions and is the world's
most dominant reserve currency. Several countries use it as their official currency, and in many others it is the
de facto currency. Besides the United States, it is also used as the sole currency in two British Overseas Territories
in the Caribbean: the British Virgin Islands and the Turks and Caicos Islands. A few countries use only U.S. dollar paper money while minting their own coins, or also accept U.S. coins, such as the Susan B. Anthony dollar, as payment in U.S. dollars. Today, USD notes are made from cotton fiber paper, unlike most
common paper, which is made of wood fiber. U.S. coins are produced by the United States Mint. U.S. dollar banknotes
are printed by the Bureau of Engraving and Printing and, since 1914, have been issued by the Federal Reserve. The
"large-sized notes" issued before 1928 measured 7.42 inches (188 mm) by 3.125 inches (79.4 mm); small-sized notes,
introduced that year, measure 6.14 inches (156 mm) by 2.61 inches (66 mm) by 0.0043 inches (0.11 mm). When the current,
smaller-sized U.S. currency was introduced, it was referred to as Philippine-sized currency because the Philippines
had previously adopted the same size for its legal currency. From 1792, when the Mint Act was passed, the dollar
was defined as 371.25 grains (24.056 g) of silver. Many historians erroneously assume gold was standardized
at a fixed rate in parity with silver; however, there is no evidence of Congress making this law. This has to do
with Alexander Hamilton's suggestion to Congress of a fixed 15:1 ratio of silver to gold. The gold
coins that were minted however, were not given any denomination whatsoever and traded for a market value relative
to the Congressional standard of the silver dollar. 1834 saw a shift in the gold standard to 23.2 grains (1.50 g),
followed by a slight adjustment to 23.22 grains (1.505 g) in 1837 (16:1 ratio). Technically, all
these coins are still legal tender at face value, though some are far more valuable today for their numismatic value,
and for gold and silver coins, their precious metal value. From 1965 to 1970 the Kennedy half dollar was the only circulating coin with any silver content; the silver was removed in 1971 and replaced with cupronickel. However, since
1992, the U.S. Mint has produced special Silver Proof Sets in addition to the regular yearly proof sets with silver
dimes, quarters, and half dollars in place of the standard copper-nickel versions. In addition, an experimental $4.00 (Stella) coin was minted in 1879 but never placed into circulation; it is properly considered a pattern rather than an actual coin denomination. Dollar coins have not been very popular in the United States. Silver dollars
were minted intermittently from 1794 through 1935; a copper-nickel dollar of the same large size, featuring President
Dwight D. Eisenhower, was minted from 1971 through 1978. Gold dollars were also minted in the 19th century. The Susan
B. Anthony dollar coin was introduced in 1979; these proved to be unpopular because they were often mistaken for
quarters, due to their nearly equal size, their milled edge, and their similar color. Minting of these dollars for
circulation was suspended in 1980 (collectors' pieces were struck in 1981), but, as with all past U.S. coins, they
remain legal tender. As the stock of Anthony dollars held by the Federal Reserve, dispensed primarily to make change in postal and transit vending machines, had been virtually exhausted, additional Anthony dollars were struck in 1999. In 2000, a new $1 coin featuring Sacagawea (the Sacagawea dollar) was introduced, which corrected some
of the problems of the Anthony dollar by having a smooth edge and a gold color, without requiring changes to vending
machines that accept the Anthony dollar. However, this new coin has failed to achieve the popularity of the still-existing
$1 bill and is rarely used in daily transactions. The failure to simultaneously withdraw the dollar bill and weak
publicity efforts have been cited by coin proponents as primary reasons for the failure of the dollar coin to gain
popular support. The monetary base consists of coins and Federal Reserve Notes in circulation outside the Federal
Reserve Banks and the U.S. Treasury, plus deposits held by depository institutions at Federal Reserve Banks. The
adjusted monetary base increased from approximately 400 billion dollars in 1994 to 800 billion in 2005 and over 3,000 billion in 2013. The amount of cash in circulation is increased (or decreased) by the actions of the Federal
Reserve System. Eight times a year, the 12-person Federal Open Market Committee meets to determine U.S. monetary policy. Every business day, the Federal Reserve System engages in open market operations to carry out that monetary policy.
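In outline, buying securities injects dollars into the banking system and selling them withdraws dollars. This can be sketched as a toy balance-sheet model (an illustrative simplification for this article, not actual Federal Reserve accounting):

```python
class ToyBankingSystem:
    """Illustrative model: bank reserves rise when the central bank buys
    securities and fall when it sells them."""

    def __init__(self, reserves):
        self.reserves = reserves  # dollars banks hold at the central bank

    def fed_buys_securities(self, amount):
        # The Fed pays banks in dollars -> reserves (money supply) increase
        self.reserves += amount

    def fed_sells_securities(self, amount):
        # Banks pay the Fed in dollars -> reserves decrease
        self.reserves -= amount


system = ToyBankingSystem(reserves=1000)
system.fed_buys_securities(100)   # expansionary open market operation
system.fed_sells_securities(40)   # contractionary open market operation
print(system.reserves)  # prints 1060
```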
If the Federal Reserve desires to increase the money supply, it will buy securities (such as U.S. Treasury Bonds)
anonymously from banks in exchange for dollars. Conversely, it will sell securities to the banks in exchange for
dollars, to take dollars out of circulation. The decline in the value of the U.S. dollar corresponds to price inflation,
which is a rise in the general level of prices of goods and services in an economy over a period of time. A consumer
price index (CPI) is a measure estimating the average price of consumer goods and services purchased by households.
The United States Consumer Price Index, published by the Bureau of Labor Statistics, is a measure estimating the
average price of consumer goods and services in the United States. It reflects inflation as experienced by consumers
in their day-to-day living expenses. A graph showing the U.S. CPI relative to 1982–1984 and the annual year-over-year
change in CPI is shown at right.
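The year-over-year change in the CPI described above is a simple percentage change on the index values. A minimal sketch in Python, using hypothetical index numbers rather than actual BLS figures:

```python
def yoy_inflation(cpi_now, cpi_year_ago):
    """Year-over-year inflation: percentage change in the price index."""
    return (cpi_now - cpi_year_ago) / cpi_year_ago * 100

# Hypothetical index values (base period 1982-1984 = 100)
print(round(yoy_inflation(233.0, 230.0), 2))  # prints 1.3
```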
The Royal College of Chemistry was established by private subscription in 1845 as there was a growing awareness that practical
aspects of the experimental sciences were not well taught and that in the United Kingdom the teaching of chemistry
in particular had fallen behind that in Germany. As a result of a movement earlier in the decade, many politicians
donated funds to establish the college, including Benjamin Disraeli, William Gladstone and Robert Peel. It was also
supported by Prince Albert, who persuaded August Wilhelm von Hofmann to be the first professor. City and Guilds College grew out of the City and Guilds of London Institute for the Advancement of Technical Education (CGLI), founded in 1876 at a meeting of 16 of the City of London's livery companies, which aimed to improve the training of craftsmen, technicians, technologists, and engineers. The
two main objectives were to create a Central Institution in London and to conduct a system of qualifying examinations
in technical subjects. Faced with their continuing inability to find a substantial site, the Companies were eventually
persuaded by the Secretary of the Science and Art Department, General Sir John Donnelly (who was also a Royal Engineer)
to found their institution on the eighty-seven acre (350,000 m²) site at South Kensington bought by the 1851 Exhibition
Commissioners (for GBP 342,500) for 'purposes of art and science' in perpetuity. The latter two colleges were incorporated
by Royal Charter into the Imperial College of Science and Technology and the CGLI Central Technical College was renamed
the City and Guilds College in 1907, but not incorporated into Imperial College until 1910. In December 2005, Imperial
announced a science park programme at the Wye campus, with extensive housing; however, this was abandoned in September
2006 following complaints that the proposal infringed on Areas of Outstanding Natural Beauty, and that the true scale
of the scheme, which could have raised £110m for the College, was known to Kent and Ashford Councils and their consultants
but concealed from the public. One commentator observed that Imperial's scheme reflected "the state of democracy
in Kent, the transformation of a renowned scientific college into a grasping, highly aggressive, neo-corporate institution,
and the defence of the status of an Area of Outstanding Natural Beauty – throughout England, not just Wye – against rampant greed backed by the connivance of two important local authorities." Wye College campus was finally closed
in September 2009. The College's endowment is sub-divided into three distinct portfolios: (i) Unitised Scheme – a
unit trust vehicle for College, Faculties and Departments to invest endowments and unfettered income to produce returns
for the long term; (ii) Non-Core Property – a portfolio containing around 120 operational and developmental properties
which College has determined are not core to the academic mission; and (iii) Strategic Asset Investments – containing
College's shareholding in Imperial Innovations and other restricted equity holdings. During the year 2014/15, the
market value of the endowment increased by £78 million (18%) to £512.4 million on 31 July 2015. Imperial submitted
a total of 1,257 staff across 14 units of assessment to the 2014 Research Excellence Framework (REF) assessment.
In the REF results 46% of Imperial's submitted research was classified as 4*, 44% as 3*, 9% as 2* and 1% as 1*, giving
an overall GPA of 3.36. In rankings produced by Times Higher Education based upon the REF results Imperial was ranked
2nd overall for GPA and 8th for "research power" (compared to 6th and 7th respectively in the equivalent rankings
for the RAE 2008). In September 2014, Professor Stefan Grimm, of the Department of Medicine, was found dead after
being threatened with dismissal for failure to raise enough grant money. The College made its first public announcement
of his death on 4 December 2014. Grimm's last email accused his employers of bullying by demanding that he should
get grants worth at least £200,000 per year. His last email was viewed more than 100,000 times in the first four
days after it was posted. The College has announced an internal inquiry into Stefan Grimm's death. The inquest into his death has not yet reported. The Imperial College Boat Club was founded on 12 December
1919. The gold-medal-winning GB 8+ at the 2000 Sydney Olympics had been based at Imperial College's recently refurbished boathouse and included three alumni of the college along with their coach Martin McElroy. The club has been highly successful,
with many wins at Henley Royal Regatta including most recently in 2013 with victory in The Prince Albert Challenge
Cup event. The club has been home to numerous national squad oarsmen and women and is open to all rowers, not just students of Imperial College London. The Royal School of Mines was established by Sir Henry de la Beche in 1851,
developing from the Museum of Economic Geology, a collection of minerals, maps and mining equipment. He created a
school which laid the foundations for the teaching of science in the country, and which has its legacy today at Imperial.
Prince Albert was a patron and supporter of the later developments in science teaching, which led to the Royal College
of Chemistry becoming part of the Royal School of Mines, to the creation of the Royal College of Science and eventually
to these institutions becoming part of his plan for South Kensington being an educational region. In 2003 Imperial
was granted degree-awarding powers in its own right by the Privy Council. The London Centre for Nanotechnology was
established in the same year as a joint venture between UCL and Imperial College London. In 2004 the Tanaka Business
School (now named the Imperial College Business School) and a new Main Entrance on Exhibition Road were opened by
The Queen. The UK Energy Research Centre was also established in 2004 and opened its headquarters at Imperial College.
In November 2005 the Faculties of Life Sciences and Physical Sciences merged to become the Faculty of Natural Sciences.
Imperial's main campus is located in the South Kensington area of central London. It is situated in an area of South
Kensington, known as Albertopolis, which has a high concentration of cultural and academic institutions, adjacent
to the Natural History Museum, the Science Museum, the Victoria and Albert Museum, the Royal College of Music, the
Royal College of Art, the Royal Geographical Society and the Royal Albert Hall. Nearby public attractions include Kensington Palace, Hyde Park, Kensington Gardens, the National Art Library, and the Brompton Oratory.
The expansion of the South Kensington campus in the 1950s and 1960s absorbed the site of the former Imperial Institute,
designed by Thomas Collcutt, of which only the 287 foot (87 m) high Queen's Tower remains among the more modern buildings.
The Centre For Co-Curricular Studies provides elective subjects and language courses outside the field of science
for students in the other faculties and departments. Students are encouraged to take these classes either for credit
or in their own time, and in some departments this is mandatory. Courses exist in a wide range of topics including
philosophy, ethics in science and technology, history, modern literature and drama, art in the 20th century, and film studies. Language courses are available in French, German, Japanese, Italian, Russian, Spanish, Arabic and Mandarin
Chinese. The Centre For Co-Curricular Studies is home to the Science Communication Unit which offers master's degrees
in Science Communication and Science Media Production for science graduates. Furthermore, in terms of job prospects,
as of 2014 the average starting salary of an Imperial graduate was the highest of any UK university. In terms of
specific course salaries, the Sunday Times ranked Computing graduates from Imperial as earning the second highest
average starting salary in the UK after graduation, over all universities and courses. In 2012, the New York Times
ranked Imperial College as one of the top 10 most-welcomed universities by the global job market. In May 2014, the
university was voted highest in the UK for Job Prospects by students voting in the Whatuni Student Choice Awards.
Imperial is jointly ranked as the 3rd best university in the UK for the quality of graduates according to recruiters
from the UK's major companies. ICTV (formerly STOIC, the Student Television of Imperial College) is Imperial College Union's TV station, founded in 1969 and operated from a small TV studio in the Electrical Engineering
block. The department had bought an early AMPEX Type A 1-inch videotape recorder and this was used to produce an
occasional short news programme which was then played to students by simply moving the VTR and a monitor into a common
room. A cable link to the Southside halls of residence was laid in a tunnel under Exhibition Road in 1972. Besides
the news, early productions included a film of the Queen opening what was then called College Block and interview
programmes with DJ Mike Raven, Richard O'Brien and Monty Python producer Ian MacNaughton. The society was renamed
to ICTV for the start of the 2014/15 academic year. Non-academic alumni include: author H. G. Wells; McLaren and Ferrari chief designer Nicholas Tombazis; Rolls-Royce CEO Ralph Robins; Brian May of rock band Queen; Singapore Airlines CEO Chew Choon Seng; Prime Minister of New Zealand Julius Vogel; Prime Minister of India Rajiv Gandhi; Deputy Prime Minister of Singapore Teo Chee Hean; Chief Medical Officer for England Sir Liam Donaldson; Head Physician to the Queen Huw Thomas; Moonfruit CEO Wendy Tan White; businessman and philanthropist Winston Wong; and billionaire hedge fund manager Alan Howard. The Great Exhibition was organised by Prince Albert, Henry Cole, Francis Fuller and
other members of the Royal Society for the Encouragement of Arts, Manufactures and Commerce. The Great Exhibition
made a surplus of £186,000, which was used to create an area in South Kensington celebrating the encouragement of the arts, industry, and science. Albert insisted the surplus should be used as a home of culture and education for everyone, committed to finding practical solutions to the social challenges of the day. Prince Albert's
vision built the Victoria and Albert Museum, Science Museum, Natural History Museum, Geological Museum, Royal College
of Science, Royal College of Art, Royal School of Mines, Royal School of Music, Royal College of Organists, Royal
School of Needlework, Royal Geographical Society, Institute of Recorded Sound, Royal Horticultural Gardens, Royal
Albert Hall and the Imperial Institute. The royal colleges and the Imperial Institute later merged to form what is now Imperial
College London. In 1907, the newly established Board of Education found that greater capacity for higher technical
education was needed and a proposal to merge the City and Guilds College, the Royal School of Mines and the Royal
College of Science was approved and passed, creating The Imperial College of Science and Technology as a constituent
college of the University of London. Imperial's Royal Charter, granted by Edward VII, was officially signed on 8
July 1907. The main campus of Imperial College was constructed beside the buildings of the Imperial Institute in
South Kensington. In the financial year ended 31 July 2013, Imperial had a total net income of £822.0 million (2011/12
– £765.2 million) and total expenditure of £754.9 million (2011/12 – £702.0 million). Key sources of income included
£329.5 million from research grants and contracts (2011/12 – £313.9 million), £186.3 million from academic fees and
support grants (2011/12 – £163.1 million), £168.9 million from Funding Council grants (2011/12 – £172.4 million)
and £12.5 million from endowment and investment income (2011/12 – £8.1 million). During the 2012/13 financial year
Imperial had a capital expenditure of £124 million (2011/12 – £152 million). In 1988 Imperial merged with St Mary's
Hospital Medical School, becoming The Imperial College of Science, Technology and Medicine. In 1995 Imperial launched
its own academic publishing house, Imperial College Press, in partnership with World Scientific. Imperial merged
with the National Heart and Lung Institute in 1995 and the Charing Cross and Westminster Medical School, Royal Postgraduate
Medical School (RPMS) and the Institute of Obstetrics and Gynaecology in 1997. In the same year the Imperial College
School of Medicine was formally established and all of the property of Charing Cross and Westminster Medical School,
the National Heart and Lung Institute and the Royal Postgraduate Medical School were transferred to Imperial as the
result of the Imperial College Act 1997. In 1998 the Sir Alexander Fleming Building was opened by Queen Elizabeth
II to provide a headquarters for the College's medical and biomedical research. The 2008 Research Assessment Exercise
returned 26% of the 1225 staff submitted as being world-leading (4*) and a further 47% as being internationally excellent
(3*). The 2008 Research Assessment Exercise also showed five subjects – Pure Mathematics, Epidemiology and Public
Health, Chemical Engineering, Civil Engineering, and Mechanical, Aeronautical and Manufacturing Engineering – were
assessed to be the best[clarification needed] in terms of the proportion of internationally recognised research quality.
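The overall GPA quoted for the 2014 REF above is the star rating weighted by the share of research at each level. With the rounded percentages given, the arithmetic comes out at roughly 3.35, consistent with the published 3.36 once unrounded shares are used (a sketch of the weighting, not the official REF calculation):

```python
# Share of submitted research at each star level (from the REF results above)
shares = {4: 0.46, 3: 0.44, 2: 0.09, 1: 0.01}

# Grade point average: each star rating weighted by its share
gpa = sum(stars * share for stars, share in shares.items())
print(round(gpa, 2))  # prints 3.35
```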
Imperial College Healthcare NHS Trust was formed on 1 October 2007 by the merger of Hammersmith Hospitals NHS Trust
(Charing Cross Hospital, Hammersmith Hospital and Queen Charlotte's and Chelsea Hospital) and St Mary's NHS Trust
(St. Mary's Hospital and Western Eye Hospital) with Imperial College London Faculty of Medicine. It is an academic
health science centre and manages five hospitals: Charing Cross Hospital, Queen Charlotte's and Chelsea Hospital,
Hammersmith Hospital, St Mary's Hospital, and Western Eye Hospital. The Trust is currently the largest in the UK
and has an annual turnover of £800 million, treating more than a million patients a year.[citation needed] In 2003,
it was reported that one third of female academics "believe that discrimination or bullying by managers has held
back their careers". It was said then that "A spokesman for Imperial said the college was acting on the recommendations
and had already made changes". Nevertheless, allegations of bullying have continued: in 2007, concerns were raised
about the methods that were being used to fire people in the Faculty of Medicine. The new president of Imperial College, Alice Gast, says she sees bright lights on the horizon for female careers at Imperial College London. Imperial College
Union, the students' union at Imperial College, is run by five full-time sabbatical officers elected from the student
body for a tenure of one year, and a number of permanent members of staff. The Union is given a large subvention
by the university, much of which is spent on maintaining around 300 clubs, projects and societies. Examples of notable
student groups and projects are Project Nepal, which sends Imperial College students to work on educational development programmes in rural Nepal, and the El Salvador Project, a construction-based project in Central America. The Union
also hosts sports-related clubs such as Imperial College Boat Club and Imperial College Gliding Club. Imperial College
London is a public research university located in London, United Kingdom. It was founded by Prince Albert, who envisioned an area composed of the Natural History Museum, the Science Museum, the Victoria and Albert Museum, the Royal Albert Hall and the Imperial Institute. The Imperial Institute was opened by his wife, Queen Victoria, who laid the first stone.
In 1907, Imperial College London was formed by Royal Charter, and soon joined the University of London, with a focus
on science and technology. The college expanded its coursework to include medicine through its merger with St Mary's Hospital Medical School.
In 2004, Queen Elizabeth II opened the Imperial College Business School. Imperial became a university independent of the University of London during its centenary year. William Henry Perkin studied and worked at
the college under von Hofmann, but resigned his position after discovering the first synthetic dye, mauveine, in
1856. Perkin's discovery was prompted by his work with von Hofmann on the substance aniline, derived from coal tar,
and it was this breakthrough which sparked the synthetic dye industry, a boom which some historians have labelled
the second chemical revolution. His contribution led to the creation of the Perkin Medal, an award given annually
by the Society of Chemical Industry to a scientist residing in the United States for an "innovation in applied chemistry
resulting in outstanding commercial development". It is considered the highest honour in the industrial chemical industry. Imperial acquired Silwood Park in 1947 to provide a site for research and teaching in those aspects of
biology not well suited for the main London campus. Felix, Imperial's student newspaper, was launched on 9 December
1949. On 29 January 1950, the government announced its intention that Imperial should expand to meet the scientific
and technological challenges of the 20th century and a major expansion of the College followed over the next decade.
In 1959 the Wolfson Foundation donated £350,000 for the establishment of a new Biochemistry Department.[citation
needed] A special relationship between Imperial and the Indian Institute of Technology Delhi was established in 1963.[citation
needed]
In 1636 George, Duke of Brunswick-Lüneburg, ruler of the Brunswick-Lüneburg principality of Calenberg, moved his residence
to Hanover. The Dukes of Brunswick-Lüneburg were elevated by the Holy Roman Emperor to the rank of Prince-Elector
in 1692, and this elevation was confirmed by the Imperial Diet in 1708. Thus the principality was upgraded to the
Electorate of Brunswick-Lüneburg, colloquially known as the Electorate of Hanover after Calenberg's capital (see
also: House of Hanover). Its electors would later become monarchs of Great Britain (and from 1801, of the United
Kingdom of Great Britain and Ireland). The first of these was George I Louis, who acceded to the British throne in
1714. The last British monarch who ruled in Hanover was William IV. Semi-Salic law, which required succession by
the male line if possible, forbade the accession of Queen Victoria in Hanover. As a male-line descendant of George
I, Queen Victoria was herself a member of the House of Hanover. Her descendants, however, bore her husband's titular
name of Saxe-Coburg-Gotha. Three kings of Great Britain, or the United Kingdom, were concurrently also Electoral
Princes of Hanover. As an important railroad and road junction and production center, Hanover was a major target
for strategic bombing during World War II, including the Oil Campaign. Targets included the AFA (Stöcken), the Deurag-Nerag
refinery (Misburg), the Continental plants (Vahrenwald and Limmer), the United light metal works (VLW) in Ricklingen
and Laatzen (today Hanover fairground), the Hanover/Limmer rubber reclamation plant, the Hanomag factory (Linden)
and the tank factory M.N.H. Maschinenfabrik Niedersachsen (Badenstedt). Forced labourers were sometimes used from
the Hannover-Misburg subcamp of the Neuengamme concentration camp. Residential areas were also targeted, and more
than 6,000 civilians were killed by the Allied bombing raids. More than 90% of the city center was destroyed in a
total of 88 bombing raids. After the war, the Aegidienkirche was not rebuilt and its ruins were left as a war memorial.
The Hanover Zoo is one of the most renowned zoos in Europe. The zoo received the Park Scout Award for
the fourth year running in 2009/10, placing it among the best zoos in Germany. The zoo consists of several theme
areas: Sambesi, Meyers Farm, Gorilla-Mountain, Jungle-Palace, and Mullewapp. Some smaller areas are Australia, the
wooded area for wolves, and the so-called swimming area with many seabirds. There is also a tropical house, a jungle
house, and a show arena. The new Canadian-themed area, Yukon Bay, opened in 2010. In 2010 the Hanover Zoo had over
1.6 million visitors. Hanover's leading cabaret stage is the GOP Variety theatre, located in the Georgs Palace. Some other famous cabaret stages are the Variety Marlene, the Uhu-Theatre, the theatre Die Hinterbühne, the Rampenlicht Variety and the revue-stage TAK. The most important cabaret event is the Kleines Fest im Großen Garten (Little Festival in the Great Garden), the most successful cabaret festival in Germany. It features artists from around the world. Other important events are the Calenberger Cabaret Weeks, the Hanover Cabaret Festival and the Wintervariety.
"Hanover" is the traditional English spelling. The German spelling (with a double n) is becoming more popular in
English; recent editions of encyclopedias prefer the German spelling, and the local government uses the German spelling
on English websites. The English pronunciation /ˈhænəvər/, with stress on the first syllable and a reduced second
syllable, is applied to both the German and English spellings, which is different from German pronunciation [haˈnoːfɐ],
with stress on the second syllable and a long second vowel. The traditional English spelling is still used in historical
contexts, especially when referring to the British House of Hanover. After 1937 the Lord Mayor and the state commissioners
of Hanover were members of the NSDAP (Nazi party). A large Jewish population then existed in Hanover. In October
1938, 484 Hanoverian Jews of Polish origin were expelled to Poland, including the Grynszpan family. However, Poland
refused to accept them, leaving them stranded at the border with thousands of other Polish-Jewish deportees, fed
only intermittently by the Polish Red Cross and Jewish welfare organisations. The Grynszpans' son Herschel Grynszpan was in Paris at the time. When he learned of what was happening, he went to the German embassy in Paris and shot the German diplomat Ernst vom Rath, who died shortly afterwards. The Great Garden is an important European
baroque garden. The palace itself, however, was largely destroyed by Allied bombing but is currently under reconstruction.[citation
needed] Some points of interest are the Grotto (the interior was designed by the French artist Niki de Saint-Phalle),
the Gallery Building, the Orangerie and the two pavilions by Remy de la Fosse. The Great Garden consists of several
parts. The most popular ones are the Great Ground and the Nouveau Jardin. At the centre of the Nouveau Jardin is
Europe's highest garden fountain. The historic Garden Theatre has hosted, among other events, musicals by the German rock musician Heinz Rudolf Kunze.[citation needed] Some other popular sights are the Waterloo Column, the Laves House, the Wangenheim
Palace, the Lower Saxony State Archives, the Hanover Playhouse, the Kröpcke Clock, the Anzeiger Tower Block, the
Administration Building of the NORD/LB, the Cupola Hall of the Congress Centre, the Lower Saxony Stock, the Ministry
of Finance, the Garten Church, the Luther Church, the Gehry Tower (designed by the American architect Frank O. Gehry),
the specially designed Bus Stops, the Opera House, the Central Station, the Maschsee lake and the city forest Eilenriede,
which is one of the largest of its kind in Europe. With around 40 parks, forests and gardens, a couple of lakes,
two rivers and one canal, Hanover offers a large variety of leisure activities. Hanover is not only one of the most important exhibition cities in the world but also one of the German capitals for marksmen. The Schützenfest Hannover is the largest marksmen's fun fair in the world and takes place once a year, from late June to early July (4–13 July in 2014). It consists of more than 260 rides and inns, five large beer tents and a big entertainment programme. The highlight of the fun fair is the 12-kilometre (7 mi) Parade of the Marksmen, with more than 12,000 participants from all over the world, among them around 5,000 marksmen, 128 bands and more than 70 wagons, carriages and big festival vehicles. It is the longest procession in Europe. Around 2 million people visit the fun fair every year. Its landmark is the biggest transportable Ferris wheel in the world (60 m or 197 ft high). The fun fair traces its origins to 1529. The Schnellweg (expressway) system, a number
of Bundesstraße roads, forms a structure loosely resembling a large ring road together with A2 and A7. The roads
are B 3, B 6 and B 65, called Westschnellweg (B6 on the northern part, B3 on the southern part), Messeschnellweg
(B3, becomes A37 near Burgdorf, crosses A2, becomes B3 again, changes to B6 at Seelhorster Kreuz, then passes the
Hanover fairground as B6 and becomes A37 again before merging into A7) and Südschnellweg (starts out as B65, becomes
B3/B6/B65 upon crossing Westschnellweg, then becomes B65 again at Seelhorster Kreuz). In 1837, the personal union
of the United Kingdom and Hanover ended because William IV's heir in the United Kingdom was female (Queen Victoria).
Hanover could be inherited only by male heirs. Thus, Hanover passed to William IV's brother, Ernest Augustus, and
remained a kingdom until 1866, when it was annexed by Prussia during the Austro-Prussian War. Although Hanover was expected to defeat Prussia at the Battle of Langensalza, Prussia employed Moltke the Elder's Kesselschlacht order of battle to destroy the Hanoverian army instead. The city of Hanover became the capital of the Prussian Province of Hanover.
After the annexation, the people of Hanover generally opposed the Prussian government. In September 1941, through
the "Action Lauterbacher" plan, a ghettoisation of the remaining Hanoverian Jewish families began. Even before the
Wannsee Conference, on 15 December 1941, the first Jews from Hanover were deported to Riga. A total of 2,400 people
were deported, and very few survived. During the war seven concentration camps were constructed in Hanover, in which
many Jews were confined. Of the approximately 4,800 Jews who had lived in Hannover in 1938, fewer than 100 were still
in the city when troops of the United States Army arrived on 10 April 1945 to occupy Hanover at the end of the war.[citation
needed] Today, a memorial at the Opera Square is a reminder of the persecution of the Jews in Hanover. After the
war a large group of Orthodox Jewish survivors of the nearby Bergen-Belsen concentration camp settled in Hanover.
Hanover was founded in medieval times on the east bank of the River Leine. Its original name Honovere may mean "high
(river)bank", though this is debated (cf. das Hohe Ufer). Hanover was a small village of ferrymen and fishermen that
became a comparatively large town in the 13th century due to its position at a natural crossroads. As overland travel
was relatively difficult, its position on the upper navigable reaches of the river helped it to grow by increasing
trade. It was connected to the Hanseatic League city of Bremen by the Leine, and was situated near the southern edge
of the wide North German Plain and north-west of the Harz mountains, so that east-west traffic such as mule trains
passed through it. Hanover was thus a gateway to the Rhine, Ruhr and Saar river valleys, their industrial areas which
grew up to the southwest and the plains regions to the east and north, for overland traffic skirting the Harz between
the Low Countries and Saxony or Thuringia. After Napoleon imposed the Convention of Artlenburg (Convention of the
Elbe) on July 5, 1803, about 30,000 French soldiers occupied Hanover. The Convention also required disbanding the
army of Hanover. However, George III did not recognize the Convention of the Elbe. This resulted in a great number
of soldiers from Hanover eventually emigrating to Great Britain, where the King's German Legion was formed. It was
the only German army to fight against France throughout the entire Napoleonic Wars. The Legion later played an important
role in the Battle of Waterloo in 1815. The Congress of Vienna in 1815 elevated the electorate to the Kingdom of
Hanover. The capital town Hanover expanded to the western bank of the Leine and since then has grown considerably.
The Berggarten is an important European botanical garden.[citation needed] Some points of interest are the Tropical
House, the Cactus House, the Canary House and the Orchid House, which hosts one of the world's largest collections of orchids as well as free-flying birds and butterflies. Near the entrance to the Berggarten is the historic Library Pavillon.
The Mausoleum of the Guelphs is also located in the Berggarten. Like the Great Garden, the Berggarten also consists
of several parts, for example the Paradies and the Prairie Garden. There is also the Sea Life Centre Hanover, which
is the first tropical aquarium in Germany.[citation needed] Various industrial businesses are located in Hanover.
The Volkswagen Commercial Vehicles Transporter (VWN) factory at Hannover-Stöcken is the biggest employer in the region
and operates a huge plant at the northern edge of town adjoining the Mittellandkanal and Motorway A2. Jointly with
a factory of German tire and automobile parts manufacturer Continental AG, they have a coal-burning power plant.
Continental AG, founded in Hanover in 1871, is one of the city's major companies, as is Sennheiser. Since 2008 a takeover has been in progress: the Schaeffler Group from Herzogenaurach (Bavaria) holds the majority of the stock but, owing to the financial crisis, was required to deposit the options as securities at banks. TUI AG has its headquarters in Hanover.
Hanover is home to many insurance companies, many of which operate only in Germany. One major global reinsurance
company is Hannover Re, whose headquarters are east of the city centre. Around 40 theatres are located in Hanover.
The Opera House, the Schauspielhaus (Play House), the Ballhofeins, the Ballhofzwei and the Cumberlandsche Galerie
belong to the Lower Saxony State Theatre. The Theater am Aegi is Hanover's big theatre for musicals, shows and guest
performances. The Neues Theater (New Theatre) is the Boulevard Theatre of Hanover. The Theater für Niedersachsen
is another big theatre in Hanover, which also has its own musical company. Among the most important musical productions are the rock musicals of the German rock musician Heinz Rudolf Kunze, which take place at the Garden Theatre in
the Great Garden. Hannover 96 (nickname Die Roten or 'The Reds') is the top local football team that plays in the
Bundesliga top division. Home games are played at the HDI-Arena, which hosted matches in the 1974 and 2006 World
Cups and Euro 1988. Their reserve team, Hannover 96 II, plays in the fourth league; its home games were played in the traditional Eilenriedestadion until the team moved to the HDI-Arena due to DFL directives. Arminia Hannover is another very traditional football team in Hanover that played in the first league for years and now plays in the
Niedersachsen-West Liga (Lower Saxony League West). Home matches are played in the Rudolf-Kalweit-Stadium. With a
population of 518,000, Hanover is a major centre of Northern Germany and the country's thirteenth largest city. Hanover
also hosts annual commercial trade fairs such as the Hanover Fair and the CeBIT. Every year Hanover hosts the Schützenfest
Hannover, the world's largest marksmen's festival, and the Oktoberfest Hannover, the second largest Oktoberfest in
the world (beside Oktoberfest of Blumenau). In 2000, Hanover hosted the world fair Expo 2000. The Hanover fairground,
due to numerous extensions, especially for the Expo 2000, is the largest in the world. Hanover is of national importance
because of its universities and medical school, its international airport and its large zoo. The city is also a major
crossing point of railway lines and highways (Autobahnen), connecting European main lines in both the east-west (Berlin–Ruhr
area) and north-south (Hamburg–Munich, etc.) directions. Another point of interest is the Old Town. In the centre
are the large Marktkirche (Church St. Georgii et Jacobi, preaching venue of the bishop of the Lutheran Landeskirche
Hannovers) and the Old Town Hall. Nearby are the Leibniz House, the Nolte House, and the Beguine Tower. A picturesque quarter of the Old Town is the Kreuz-Church-Quarter around the Kreuz Church, with many small lanes. Nearby is
the old royal sports hall, now called the Ballhof theatre. On the edge of the Old Town are the Market Hall, the Leine
Palace, and the ruin of the Aegidien Church which is now a monument to the victims of war and violence. Through the
Marstall Gate you arrive at the bank of the river Leine, where the world-famous Nanas of Niki de Saint-Phalle are
located. They are part of the Mile of Sculptures, which starts from Trammplatz, leads along the river bank, crosses
Königsworther Square, and ends at the entrance of the Georgengarten. Near the Old Town is the district of Calenberger
Neustadt where the Catholic Basilica Minor of St. Clemens, the Reformed Church and the Lutheran Neustädter Hof- und
Stadtkirche St. Johannis are located. The Münzkabinett der TUI-AG is a coin cabinet. The Polizeigeschichtliche Sammlung Niedersachsen is the largest police museum in Germany. Textiles from all over the world can be seen in the Museum for Textile Art. The EXPOseeum is the museum of the world exhibition "EXPO 2000 Hannover". Carpets and objects from the Orient can be seen in the Oriental Carpet Museum. The Blind Man Museum is a rarity in Germany; the only other one is in Berlin. The Museum of Veterinary Medicine is unique in Germany. The Museum for Energy History describes the 150-year history of the application of energy. The Home Museum Ahlem shows the history of the
district of Ahlem. The Mahn- und Gedenkstätte Ahlem describes the history of the Jewish people in Hanover and the
Stiftung Ahlers Pro Arte / Kestner Pro Arte shows modern art. Modern art is also the main topic of the Kunsthalle
Faust, the Nord/LB Art Gallery, and the Foro Artistico / Eisfabrik.
Emotions are complex. According to some theories, they are a state of feeling that results in physical and psychological
changes that influence our behavior. The physiology of emotion is closely linked to arousal of the nervous system
with various states and strengths of arousal relating, apparently, to particular emotions. Emotion is also linked
to behavioral tendency. Extroverted people are more likely to be social and express their emotions, while introverted
people are more likely to be more socially withdrawn and conceal their emotions. Emotion is often the driving force
behind motivation, positive or negative. Emotion has been defined as a "positive or negative experience that
is associated with a particular pattern of physiological activity." According to other theories, emotions are not
causal forces but simply syndromes of components, which might include motivation, feeling, behavior, and physiological
changes, but no one of these components is the emotion, nor is the emotion an entity that causes these components.
Robert Plutchik agreed with Ekman's biologically driven perspective but developed the "wheel of emotions", suggesting
eight primary emotions grouped on a positive or negative basis: joy versus sadness; anger versus fear; trust versus
disgust; and surprise versus anticipation. Some basic emotions can be modified to form complex emotions. The complex
emotions could arise from cultural conditioning or association combined with the basic emotions. Alternatively, similar
to the way primary colors combine, primary emotions could blend to form the full spectrum of human emotional experience.
For example, interpersonal anger and disgust could blend to form contempt. Relationships exist between basic emotions,
resulting in positive or negative influences. Perspectives on emotions from evolutionary theory were initiated in
the late 19th century with Charles Darwin's book The Expression of the Emotions in Man and Animals. Darwin argued
that emotions actually served a purpose for humans, in communication and also in aiding their survival. Darwin, therefore,
argued that emotions evolved via natural selection and therefore have universal cross-cultural counterparts. Darwin
also detailed the virtues of experiencing emotions and the parallel experiences that occur in animals. This led the
way for animal research on emotions and the eventual determination of the neural underpinnings of emotion. An example
of this theory in action would be as follows: An emotion-evoking stimulus (snake) triggers a pattern of physiological
response (increased heart rate, faster breathing, etc.), which is interpreted as a particular emotion (fear). This
theory is supported by experiments in which manipulating the bodily state induces a desired emotional state. Some
people may believe that emotions give rise to emotion-specific actions: e.g. "I'm crying because I'm sad," or "I
ran away because I was scared." The issue with the James–Lange theory is that of causation (bodily states causing
emotions and being a priori), not that of the bodily influences on emotional experience (which can be argued and
is still quite prevalent today in biofeedback studies and embodiment theory). With the two-factor theory now incorporating
cognition, several theories began to argue that cognitive activity in the form of judgments, evaluations, or thoughts
were entirely necessary for an emotion to occur. One of the main proponents of this view was Richard Lazarus who
argued that emotions must have some cognitive intentionality. The cognitive activity involved in the interpretation
of an emotional context may be conscious or unconscious and may or may not take the form of conceptual processing.
Theories dealing with perception use either one or multiple perceptions in order to find an emotion (Goldie, 2007). A
recent hybrid of the somatic and cognitive theories of emotion is the perceptual theory. This theory is neo-Jamesian
in arguing that bodily responses are central to emotions, yet it emphasizes the meaningfulness of emotions or the
idea that emotions are about something, as is recognized by cognitive theories. The novel claim of this theory is
that conceptually-based cognition is unnecessary for such meaning. Rather the bodily changes themselves perceive
the meaningful content of the emotion because of being causally triggered by certain situations. In this respect,
emotions are held to be analogous to faculties such as vision or touch, which provide information about the relation
between the subject and the world in various ways. A sophisticated defense of this view is found in philosopher Jesse
Prinz's book Gut Reactions, and psychologist James Laird's book Feelings. Social sciences often examine emotion for
the role that it plays in human culture and social interactions. In sociology, emotions are examined for the role
they play in human society, social patterns and interactions, and culture. In anthropology, the study of humanity,
scholars use ethnography to undertake contextual analyses and cross-cultural comparisons of a range of human activities.
Some anthropology studies examine the role of emotions in human activities. In the field of communication sciences,
critical organizational scholars have examined the role of emotions in organizations, from the perspectives of managers,
employees, and even customers. A focus on emotions in organizations can be credited to Arlie Russell Hochschild's
concept of emotional labor. The University of Queensland hosts EmoNet, an e-mail distribution list representing a
network of academics that facilitates scholarly discussion of all matters relating to the study of emotion in organizational
settings. The list was established in January 1997 and has over 700 members from across the globe. A common way in
which emotions are conceptualized in sociology is in terms of the multidimensional characteristics including cultural
or emotional labels (e.g., anger, pride, fear, happiness), physiological changes (e.g., increased perspiration, changes
in pulse rate), expressive facial and body movements (e.g., smiling, frowning, baring teeth), and appraisals of situational
cues. One comprehensive theory of emotional arousal in humans has been developed by Jonathan Turner (2007; 2009). Two of the key eliciting factors for the arousal of emotions within this theory are expectation states and sanctions.
When people enter a situation or encounter with certain expectations for how the encounter should unfold, they will
experience different emotions depending on the extent to which expectations for Self, other and situation are met
or not met. People can also provide positive or negative sanctions directed at Self or other which also trigger different
emotional experiences in individuals. Turner analyzed a wide range of emotion theories across different fields of
research including sociology, psychology, evolutionary science, and neuroscience. Based on this analysis, he identified
four emotions that all researchers consider to be grounded in human neurology: assertive-anger, aversion-fear,
satisfaction-happiness, and disappointment-sadness. These four categories are called primary emotions and there is
some agreement amongst researchers that these primary emotions become combined to produce more elaborate and complex
emotional experiences. These more elaborate emotions are called first-order elaborations in Turner's theory and they
include sentiments such as pride, triumph, and awe. Emotions can also be experienced at different levels of intensity
so that feelings of concern are a low-intensity variation of the primary emotion aversion-fear whereas depression
is a higher intensity variant. In the late 19th century, the most influential theorists were William James (1842–1910)
and Carl Lange (1834–1900). James was an American psychologist and philosopher who wrote about educational psychology,
psychology of religious experience/mysticism, and the philosophy of pragmatism. Lange was a Danish physician and
psychologist. Working independently, they developed the James–Lange theory, a hypothesis on the origin and nature
of emotions. The theory states that within human beings, as a response to experiences in the world, the autonomic
nervous system creates physiological events such as muscular tension, a rise in heart rate, perspiration, and dryness
of the mouth. Emotions, then, are feelings which come about as a result of these physiological changes, rather than
being their cause. Research on emotion has increased significantly over the past two decades with many fields contributing
including psychology, neuroscience, endocrinology, medicine, history, sociology, and even computer science. The numerous
theories that attempt to explain the origin, neurobiology, experience, and function of emotions have only fostered
more intense research on this topic. Current areas of research in the concept of emotion include the development
of materials that stimulate and elicit emotion. In addition PET scans and fMRI scans help study the affective processes
in the brain. Emotion is also influenced by hormones and neurotransmitters such as dopamine, noradrenaline, serotonin,
oxytocin, cortisol and GABA. A distinction can be made between emotional episodes and emotional dispositions. Emotional
dispositions are also comparable to character traits, where someone may be said to be generally disposed to experience
certain emotions. For example, an irritable person is generally disposed to feel irritation more easily or quickly
than others do. Finally, some theorists place emotions within a more general category of "affective states" where
affective states can also include emotion-related phenomena such as pleasure and pain, motivational states (for example,
hunger or curiosity), moods, dispositions and traits. The idea that core affect is but one component of the emotion
led to a theory called “psychological construction.” According to this theory, an emotional episode consists of a
set of components, each of which is an ongoing process and none of which is necessary or sufficient for the emotion
to be instantiated. The set of components is not fixed, either by human evolutionary history or by social norms and
roles. Instead, the emotional episode is assembled at the moment of its occurrence to suit its specific circumstances.
One implication is that all cases of, for example, fear are not identical but instead bear a family resemblance to
one another. Walter Bradford Cannon agreed that physiological responses played a crucial role in emotions, but did
not believe that physiological responses alone could explain subjective emotional experiences. He argued that physiological
responses were too slow and often imperceptible, and that this could not account for the relatively rapid and intense subjective
awareness of emotion. He also believed that the richness, variety, and temporal course of emotional experiences could
not stem from physiological reactions, which reflected fairly undifferentiated fight-or-flight responses. An example
of this theory in action is as follows: An emotion-evoking event (snake) triggers simultaneously both a physiological
response and a conscious experience of an emotion. A situated perspective on emotion, developed by Paul E. Griffiths
and Andrea Scarantino, emphasizes the importance of external factors in the development and communication of emotion,
drawing upon the situationism approach in psychology. This theory is markedly different from both cognitivist and
neo-Jamesian theories of emotion, both of which see emotion as a purely internal process, with the environment only
acting as a stimulus to the emotion. In contrast, a situationist perspective on emotion views emotion as the product
of an organism investigating its environment, and observing the responses of other organisms. Emotion stimulates
the evolution of social relationships, acting as a signal to mediate the behavior of other organisms. In some contexts,
the expression of emotion (both voluntary and involuntary) could be seen as strategic moves in the transactions between
different organisms. The situated perspective on emotion states that conceptual thought is not an inherent part of
emotion, since emotion is an action-oriented form of skillful engagement with the world. Griffiths and Scarantino
suggested that this perspective on emotion could be helpful in understanding phobias, as well as the emotions of
infants and animals. Emotions are thought to be related to certain activities in brain areas that direct our attention,
motivate our behavior, and determine the significance of what is going on around us. Pioneering work by Broca (1878),
Papez (1937), and MacLean (1952) suggested that emotion is related to a group of structures in the center of the
brain called the limbic system, which includes the hypothalamus, cingulate cortex, hippocampi, and other structures.
More recent research has shown that some of these limbic structures are not as directly related to emotion as others
are while some non-limbic structures have been found to be of greater emotional relevance. In philosophy, emotions
are studied in sub-fields such as ethics, the philosophy of art (for example, sensory–emotional values, and matters
of taste and sentimentality), and the philosophy of music (see also Music and emotion). In history, scholars examine
documents and other sources to interpret and analyze past activities; speculation on the emotional state of the authors
of historical documents is one of the tools of interpretation. In literature and film-making, the expression of emotion
is the cornerstone of genres such as drama, melodrama, and romance. In communication studies, scholars study the
role that emotion plays in the dissemination of ideas and messages. Emotion is also studied in non-human animals
in ethology, a branch of zoology which focuses on the scientific study of animal behavior. Ethology is a combination
of laboratory and field science, with strong ties to ecology and evolution. Ethologists often study one type of behavior
(for example, aggression) in a number of unrelated animals. Sociological attention to emotion has varied over time.
Émile Durkheim (1915/1965) wrote about the collective effervescence or emotional energy that was experienced by members
of totemic rituals in Australian aborigine society. He explained how the heightened state of emotional energy achieved
during totemic rituals transported individuals above themselves giving them the sense that they were in the presence
of a higher power, a force, that was embedded in the sacred objects that were worshipped. These feelings of exaltation,
he argued, ultimately lead people to believe that there were forces that governed sacred objects. Some of the most
influential theorists on emotion from the 20th century have died in the last decade. They include Magda B. Arnold
(1903–2002), an American psychologist who developed the appraisal theory of emotions; Richard Lazarus (1922–2002),
an American psychologist who specialized in emotion and stress, especially in relation to cognition; Herbert A. Simon
(1916–2001), who included emotions into decision making and artificial intelligence; Robert Plutchik (1928–2006),
an American psychologist who developed a psychoevolutionary theory of emotion; Robert Zajonc (1923–2008) a Polish–American
social psychologist who specialized in social and cognitive processes such as social facilitation; Robert C. Solomon
(1942–2007), an American philosopher who contributed to the theories on the philosophy of emotions with books such
as What Is An Emotion?: Classic and Contemporary Readings (Oxford, 2003); Peter Goldie (1946–2011), a British philosopher
who specialized in ethics, aesthetics, emotion, mood and character; Nico Frijda (1927–2015), a Dutch psychologist
who advanced the theory that human emotions serve to promote a tendency to undertake actions that are appropriate
in the circumstances, detailed in his book The Emotions (1986). The word "emotion" dates back to 1579, when it was
adapted from the French word émouvoir, which means "to stir up". The term emotion was introduced into academic discussion
to replace passion. According to one dictionary, the earliest precursors of the word likely date back to the very origins of language. The modern word emotion is heterogeneous. In some uses of the word, emotions are intense feelings
that are directed at someone or something. On the other hand, emotion can be used to refer to states that are mild
(as in annoyed or content) and to states that are not directed at anything (as in anxiety and depression). One line
of research thus looks at the meaning of the word emotion in everyday language and this usage is rather different
from that in academic discourse. Another line of research asks about languages other than English, and one interesting
finding is that many languages have a similar but not identical term. Philip Bard contributed to the theory with
his work on animals. Bard found that sensory, motor, and physiological information all had to pass through the diencephalon
(particularly the thalamus), before being subjected to any further processing. Therefore, Cannon also argued that
it was not anatomically possible for sensory events to trigger a physiological response prior to triggering conscious
awareness and emotional stimuli had to trigger both physiological and experiential aspects of emotion simultaneously.
There are some theories on emotions arguing that cognitive activity in the form of judgments, evaluations, or thoughts
are necessary in order for an emotion to occur. A prominent philosophical exponent is Robert C. Solomon (for example,
The Passions, Emotions and the Meaning of Life, 1993). Solomon claims that emotions are judgments. He has put forward
a more nuanced view which responds to what he has called the ‘standard objection’ to cognitivism, the idea that a
judgment that something is fearsome can occur with or without emotion, so judgment cannot be identified with emotion.
The theory proposed by Nico Frijda where appraisal leads to action tendencies is another example. Emotions can motivate
social interactions and relationships and therefore are directly related with basic physiology, particularly with
the stress systems. This is important because emotions are related to the anti-stress complex, with an oxytocin-attachment
system, which plays a major role in bonding. Emotional phenotype temperaments affect social connectedness and fitness
in complex social systems (Kurt Kortschal 2013). These characteristics are shared with other species and taxa and
are due to the effects of genes and their continuous transmission. Information that is encoded in the DNA sequences
provides the blueprint for assembling proteins that make up our cells. Zygotes require genetic information from their
parental germ cells, and at every speciation event, heritable traits that have enabled its ancestor to survive and
reproduce successfully are passed down along with new traits that could be potentially beneficial to the offspring.
In the five million years since the lineages leading to modern humans and chimpanzees split, only about 1.2% of their
genetic material has been modified. This suggests that everything that separates us from chimpanzees must be encoded
in that very small amount of DNA, including our behaviors. Students of animal behavior have identified only
intraspecific examples of gene-dependent behavioral phenotypes. In voles (Microtus spp.) minor genetic differences
have been identified in a vasopressin receptor gene that corresponds to major species differences in social organization
and the mating system (Hammock & Young 2005). Another potential example with behavioral differences is the FOXP2
gene, which is involved in neural circuitry handling speech and language (Vargha-Khadem et al. 2005). Its present
form in humans differed from that of the chimpanzees by only a few mutations and has been present for about 200,000
years, coinciding with the beginning of modern humans (Enard et al. 2002). Speech, language, and social organization
are all part of the basis for emotions. Emotion, in everyday speech, is any relatively brief conscious experience
characterized by intense mental activity and a high degree of pleasure or displeasure. Scientific discourse has drifted
to other meanings and there is no consensus on a definition. Emotion is often intertwined with mood, temperament,
personality, disposition, and motivation. In some theories, cognition is an important aspect of emotion. Those acting
primarily on emotion may seem as if they are not thinking, but mental processes are still essential, particularly
in the interpretation of events. For example, the realization of danger and subsequent arousal of the nervous system
(e.g. rapid heartbeat and breathing, sweating, muscle tension) is integral to the experience of fear. Other theories,
however, claim that emotion is separate from and can precede cognition. Emotions have been described by some theorists
as discrete and consistent responses to internal or external events which have a particular significance for the
organism. Emotions are brief in duration and consist of a coordinated set of responses, which may include verbal,
physiological, behavioural, and neural mechanisms. Psychotherapist Michael C. Graham describes all emotions as existing
on a continuum of intensity. Thus fear might range from mild concern to terror, or shame might range from simple embarrassment
to toxic shame. Emotions have also been described as biologically given and a result of evolution because they provided
good solutions to ancient and recurring problems that faced our ancestors. Moods are feelings that tend to be less
intense than emotions and that often lack a contextual stimulus. For more than 40 years, Paul Ekman has supported
the view that emotions are discrete, measurable, and physiologically distinct. Ekman's most influential work revolved
around the finding that certain emotions appeared to be universally recognized, even in cultures that were preliterate
and could not have learned associations for facial expressions through media. Another classic study found that when
participants contorted their facial muscles into distinct facial expressions (e.g. disgust), they reported subjective
and physiological experiences that matched the distinct facial expressions. His research findings led him to classify
six emotions as basic: anger, disgust, fear, happiness, sadness and surprise. Western philosophy regarded emotion
in varying ways. In stoic theories it was seen as a hindrance to reason and therefore a hindrance to virtue. Aristotle
believed that emotions were an essential component of virtue. In the Aristotelian view all emotions (called passions)
corresponded to appetites or capacities. During the Middle Ages, the Aristotelian view was adopted and further developed
by scholasticism and Thomas Aquinas in particular. There are also theories of emotions in the works of philosophers
such as René Descartes, Niccolò Machiavelli, Baruch Spinoza and David Hume. In the 19th century emotions were considered
adaptive and were studied more frequently from an empiricist psychiatric perspective. In his 1884 article William
James argued that feelings and emotions were secondary to physiological phenomena. In his theory, James proposed
that the perception of what he called an "exciting fact" directly led to a physiological response, known as "emotion."
To account for different types of emotional experiences, James proposed that stimuli trigger activity in the autonomic
nervous system, which in turn produces an emotional experience in the brain. The Danish psychologist Carl Lange also
proposed a similar theory at around the same time, and therefore this theory became known as the James–Lange theory.
As James wrote, "the perception of bodily changes, as they occur, is the emotion." James further claims that "we
feel sad because we cry, angry because we strike, afraid because we tremble, and neither we cry, strike, nor tremble
because we are sorry, angry, or fearful, as the case may be." The history of emotions has become an increasingly
popular topic recently, with some scholars arguing that it is an essential category of analysis, not unlike class,
race, or gender. Historians, like other social scientists, assume that emotions, feelings and their expressions are
regulated in different ways by both different cultures and different historical times, and constructivist school
of history claims even that some sentiments and meta-emotions, for example Schadenfreude, are learnt and not only
regulated by culture. Historians of emotion trace and analyse the changing norms and rules of feeling, while examining
emotional regimes, codes, and lexicons from social, cultural or political history perspectives. Others focus on the
history of medicine, science or psychology. What somebody can and may feel (and show) in a given situation, towards
certain people or things, depends on social norms and rules. It is thus historically variable and open to change.
Several research centers have opened in the past few years in Germany, England, Spain, Sweden and Australia. Stanley
Schachter formulated his theory on the earlier work of a Spanish physician, Gregorio Marañón, who injected patients
with epinephrine and subsequently asked them how they felt. Interestingly, Marañón found that most of these patients
felt something but in the absence of an actual emotion-evoking stimulus, the patients were unable to interpret their
physiological arousal as an experienced emotion. Schachter did agree that physiological reactions played a big role
in emotions. He suggested that physiological reactions contributed to emotional experience by facilitating a focused
cognitive appraisal of a given physiologically arousing event and that this appraisal was what defined the subjective
emotional experience. Emotions were thus the result of a two-stage process: general physiological arousal, followed by the experience
of emotion. For example, physiological arousal (a pounding heart) occurs in response to an evoking stimulus (the sight
of a bear in the kitchen). The brain then quickly scans the area to explain the pounding, and notices the bear. Consequently,
the brain interprets the pounding heart as being the result of fearing the bear. With his student, Jerome Singer,
Schachter demonstrated that subjects can have different emotional reactions despite being placed into the same physiological
state with an injection of epinephrine. Subjects were observed to express either anger or amusement depending on
whether another person in the situation (a confederate) displayed that emotion. Hence, the combination of the appraisal
of the situation (cognitive) and the participants' reception of adrenaline or a placebo together determined the response.
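The two-stage process can be sketched as a toy function, purely as an illustration: the emotion label results from the combination of arousal and cognitive appraisal. The specific context-to-emotion pairings below are hypothetical; the theory itself does not prescribe them.

```python
# Toy sketch of Schachter's two-factor theory (illustrative, not a
# validated psychological model): an emotion is the joint product of
# general physiological arousal and a cognitive appraisal of context.

def two_factor_emotion(arousal: bool, appraisal: str) -> str:
    """Return an emotion label from arousal plus cognitive appraisal."""
    if not arousal:
        return "calm"  # without general arousal, no emotion is experienced
    # The same arousal is interpreted differently depending on appraisal,
    # mirroring the confederate conditions of the epinephrine experiment.
    appraisal_to_emotion = {
        "bear in the kitchen": "fear",
        "confederate acting angry": "anger",
        "confederate acting euphoric": "amusement",
    }
    return appraisal_to_emotion.get(appraisal, "unlabeled arousal")

print(two_factor_emotion(True, "bear in the kitchen"))       # fear
print(two_factor_emotion(True, "confederate acting angry"))  # anger
print(two_factor_emotion(False, "bear in the kitchen"))      # calm
```

The sketch captures the key claim of the Schachter–Singer experiment: identical physiological states yield different emotions under different appraisals.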
This experiment has been criticized in Jesse Prinz's (2004) Gut Reactions. In the 1990s, sociologists focused on
different aspects of specific emotions and how these emotions were socially relevant. For Cooley (1992), pride and
shame were the most important emotions that drive people to take various social actions. During every encounter,
he proposed that we monitor ourselves through the "looking glass" that the gestures and reactions of others provide.
Depending on these reactions, we either experience pride or shame and this results in particular paths of action.
Retzinger (1991) conducted studies of married couples who experienced cycles of rage and shame. Drawing predominantly
on Goffman and Cooley's work, Scheff (1990) developed a micro sociological theory of the social bond. The formation
or disruption of social bonds is dependent on the emotions that people experience during interactions. Emotion regulation
refers to the cognitive and behavioral strategies people use to influence their own emotional experience. For example,
one behavioral strategy is avoiding a situation in order to escape unwanted emotions; cognitive strategies include trying not to think about
the situation or doing distracting activities. Depending on the particular school's general emphasis on either
cognitive components of emotion, physical energy discharging, or on symbolic movement and facial expression components
of emotion, different schools of psychotherapy approach the regulation of emotion differently. Cognitively oriented
schools approach emotions via their cognitive components, as in rational emotive behavior therapy. Other schools approach
emotions via symbolic movement and facial expression components (like in contemporary Gestalt therapy). Based on
discoveries made through neural mapping of the limbic system, the neurobiological explanation of human emotion is
that emotion is a pleasant or unpleasant mental state organized in the limbic system of the mammalian brain. If distinguished
from reactive responses of reptiles, emotions would then be mammalian elaborations of general vertebrate arousal
patterns, in which neurochemicals (for example, dopamine, noradrenaline, and serotonin) step-up or step-down the
brain's activity level, as visible in body movements, gestures and postures. Emotions can likely be mediated by pheromones
(see fear). Many different disciplines have produced work on the emotions. Human sciences study the role of emotions
in mental processes, disorders, and neural mechanisms. In psychiatry, emotions are examined as part of the discipline's
study and treatment of mental disorders in humans. Nursing studies emotions as part of its approach to the provision
of holistic health care to humans. Psychology examines emotions from a scientific perspective by treating them as
mental processes and behavior and by exploring the underlying physiological and neurological processes. In neuroscience
sub-fields such as social neuroscience and affective neuroscience, scientists study the neural mechanisms of emotion
by combining neuroscience with the psychological study of personality, emotion, and mood. In linguistics, the expression
of emotion may change the meaning of sounds. In education, the role of emotions in relation to learning is examined.
Subsequent to these developments, Randall Collins (2004) formulated his interaction ritual theory by drawing on Durkheim's
work on totemic rituals that was extended by Goffman (1964/2013; 1967) into everyday focused encounters. Based on
interaction ritual theory, we experience different levels or intensities of emotional energy during face-to-face
interactions. Emotional energy is considered to be a feeling of confidence to take action and a boldness that one
experiences when they are charged up from the collective effervescence generated during group gatherings that reach
high levels of intensity. In the 2000s, research in computer science, engineering, psychology and neuroscience has
been aimed at developing devices that recognize human affect display and model emotions. In computer science, affective
computing is a branch of the study and development of artificial intelligence that deals with the design of systems
and devices that can recognize, interpret, and process human emotions. It is an interdisciplinary field spanning
computer sciences, psychology, and cognitive science. While the origins of the field may be traced as far back as
to early philosophical enquiries into emotion, the more modern branch of computer science originated with Rosalind
Picard's 1995 paper on affective computing. Detecting emotional information begins with passive sensors which capture
data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to
the cues humans use to perceive emotions in others. Another area within affective computing is the design of computational
devices proposed to exhibit either innate emotional capabilities or that are capable of convincingly simulating emotions.
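The sensing-then-interpreting pipeline described above can be sketched in a few lines. This is a minimal illustration only: the sensor names, signal thresholds, and labels are hypothetical, not a real device API or a validated affect model.

```python
# Minimal sketch of an affective-computing pipeline: passive sensors
# capture raw physiological data without interpreting it, and a separate
# step maps the readings to a coarse affect estimate. All thresholds
# below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class SensorFrame:
    heart_rate_bpm: float        # raw physiological signal
    skin_conductance_us: float   # microsiemens, raw signal

def estimate_arousal(frame: SensorFrame) -> str:
    """Map raw sensor readings to a coarse arousal label (toy heuristic)."""
    score = 0
    if frame.heart_rate_bpm > 100:
        score += 1
    if frame.skin_conductance_us > 10.0:
        score += 1
    return ["low", "medium", "high"][score]

print(estimate_arousal(SensorFrame(72, 4.0)))    # low
print(estimate_arousal(SensorFrame(115, 14.0)))  # high
```

The point of the separation is that the captured data, like the cues humans use, carries no affective meaning until an interpretation step assigns one.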
Emotional speech processing recognizes the user's emotional state by analyzing speech patterns. The detection and
processing of facial expression or body gestures is achieved through detectors and sensors. Emotions involve different
components, such as subjective experience, cognitive processes, expressive behavior, psychophysiological changes,
and instrumental behavior. At one time, academics attempted to identify the emotion with one of the components: William
James with a subjective experience, behaviorists with instrumental behavior, psychophysiologists with physiological
changes, and so on. More recently, emotion is said to consist of all the components. The different components of
emotion are categorized somewhat differently depending on the academic discipline. In psychology and philosophy,
emotion typically includes a subjective, conscious experience characterized primarily by psychophysiological expressions,
biological reactions, and mental states. A similar multicomponential description of emotion is found in sociology.
For example, Peggy Thoits described emotions as involving physiological components, cultural or emotional labels
(e.g., anger, surprise etc.), expressive body actions, and the appraisal of situations and contexts. In Scherer's
component process model of emotion, five crucial elements of emotion are said to exist. From the component process
perspective, emotion experience is said to require that all of these processes become coordinated and synchronized
for a short period of time, driven by appraisal processes. Although the inclusion of cognitive appraisal as one of
the elements is slightly controversial, since some theorists make the assumption that emotion and cognition are separate
but interacting systems, the component processing model provides a sequence of events that effectively describes
the coordination involved during an emotional episode. Through the use of multidimensional scaling, psychologists
can map out similar emotional experiences, which allows a visual depiction of the "emotional distance" between experiences.
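The "emotional distance" between experiences on such a map can be computed directly. In the sketch below the valence/arousal coordinates are hypothetical illustrations on a -1 to 1 scale; real values would come from multidimensional scaling of similarity judgments.

```python
# Sketch of a two-dimensional core-affect map: each experience gets a
# (valence, arousal) coordinate, and "emotional distance" is the
# Euclidean distance between coordinates. Coordinates are illustrative.

import math

emotions = {
    "excited": (0.7, 0.8),    # positive valence, high arousal
    "content": (0.8, -0.4),   # positive valence, low arousal
    "bored":   (-0.4, -0.7),  # negative valence, low arousal
    "afraid":  (-0.7, 0.8),   # negative valence, high arousal
}

def emotional_distance(a: str, b: str) -> float:
    """Euclidean distance between two experiences on the 2D map."""
    (v1, r1), (v2, r2) = emotions[a], emotions[b]
    return math.hypot(v1 - v2, r1 - r2)

# "excited" and "afraid" share high arousal but differ sharply in valence:
print(round(emotional_distance("excited", "afraid"), 2))   # 1.4
print(round(emotional_distance("excited", "content"), 2))  # 1.2
```

Plotting such coordinates is exactly the 2D depiction of valence and arousal described above.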
A further step can be taken by looking at the map's dimensions of the emotional experiences. The emotional experiences
are divided into two dimensions known as valence (how negative or positive the experience feels) and arousal (how
energized or enervated the experience feels). These two dimensions can be depicted on a 2D coordinate map. This two-dimensional
map was theorized to capture one important component of emotion called core affect. Core affect is not the only component
to emotion, but gives the emotion its hedonic and felt energy. More contemporary views along the evolutionary psychology
spectrum posit that both basic emotions and social emotions evolved to motivate (social) behaviors that were adaptive
in the ancestral environment. Current research[citation needed] suggests that emotion is an essential part of any
human decision-making and planning, and the famous distinction made between reason and emotion is not as clear as
it seems. Paul D. MacLean claims that emotion competes with even more instinctive responses, on one hand, and the
more abstract reasoning, on the other hand. The increased potential in neuroimaging has also allowed investigation
into evolutionarily ancient parts of the brain. Important neurological advances were derived from these perspectives
in the 1990s by Joseph E. LeDoux and António Damásio. Affective events theory is a communication-based theory developed by Howard M.
Weiss and Russell Cropanzano (1996) that looks at the causes, structures, and consequences of emotional experience
(especially in work contexts). This theory suggests that emotions are influenced and caused by events which in turn
influence attitudes and behaviors. This theoretical frame also emphasizes time in that human beings experience what
they call emotion episodes, a "series of emotional states extended over time and organized around an underlying theme."
This theory has been utilized by numerous researchers to better understand emotion from a communicative lens, and
was reviewed further by Howard M. Weiss and Daniel J. Beal in their article, "Reflections on Affective Events Theory",
published in Research on Emotion in Organizations in 2005. The motor centers of reptiles react to sensory cues of
vision, sound, touch, chemical, gravity, and motion with pre-set body movements and programmed postures. With the
arrival of night-active mammals, smell replaced vision as the dominant sense, and a different way of responding arose
from the olfactory sense, which is proposed to have developed into mammalian emotion and emotional memory. The mammalian
brain invested heavily in olfaction to succeed at night as reptiles slept—one explanation for why olfactory lobes
in mammalian brains are proportionally larger than in the reptiles. These odor pathways gradually formed the neural
blueprint for what was later to become our limbic brain. This still left open the question of whether the opposite
of approach in the prefrontal cortex is better described as moving away (Direction Model), as unmoving but with strength
and resistance (Movement Model), or as unmoving with passive yielding (Action Tendency Model). Support for the Action
Tendency Model (passivity related to right prefrontal activity) comes from research on shyness and research on behavioral
inhibition. Research that tested the competing hypotheses generated by all four models also supported the Action
Tendency Model. In economics, the social science that studies the production, distribution, and consumption of goods
and services, emotions are analyzed in some sub-fields of microeconomics, in order to assess the role of emotions
on purchase decision-making and risk perception. In criminology, a social science approach to the study of crime,
scholars often draw on behavioral sciences, sociology, and psychology; emotions are examined in criminology issues
such as anomie theory and studies of "toughness," aggressive behavior, and hooliganism. In law, which underpins civil
obedience, politics, economics and society, evidence about people's emotions is often raised in tort law claims for
compensation and in criminal law prosecutions against alleged lawbreakers (as evidence of the defendant's state of
mind during trials, sentencing, and parole hearings). In political science, emotions are examined in a number of
sub-fields, such as the analysis of voter decision-making. Attempts are frequently made to regulate emotion according
to the conventions of the society and the situation based on many (sometimes conflicting) demands and expectations
which originate from various entities. The emotion of anger is in many cultures discouraged in girls and women, while
fear is discouraged in boys and men. Expectations attached to social roles, such as "acting as a man" and not as a
woman, and the accompanying "feeling rules" contribute to the differences in expression of certain emotions. Some
cultures encourage or discourage happiness, sadness, or jealousy, and the free expression of the emotion of disgust
is considered socially unacceptable in most cultures. Some social institutions are seen as based on certain emotions,
such as love in the case of the contemporary institution of marriage. In advertising, such as health campaigns and political
messages, emotional appeals are commonly found. Recent examples include no-smoking health campaigns and political
campaigns emphasizing the fear of terrorism.
Everton were founder members of the Premier League in 1992, but struggled to find the right manager. Howard Kendall had returned
in 1990 but could not repeat his previous success, while his successor, Mike Walker, was statistically the least
successful Everton manager to date. When former Everton player Joe Royle took over in 1994 the club's form started
to improve; his first game in charge was a 2–0 victory over derby rivals Liverpool. Royle dragged Everton clear of
relegation, leading the club to the FA Cup for the fifth time in its history, defeating Manchester United 1–0 in
the final. The Tower has been inextricably linked with the Everton area since its construction in 1787. It was originally
used as a bridewell to incarcerate mainly drunks and minor criminals, and it still stands today on Everton Brow in
Netherfield Road. The tower was accompanied by two laurel wreaths on either side and, according to the College of
Arms in London, Kelly chose to include the laurels as they were the sign of winners. The crest was accompanied by
the club motto, "Nil Satis Nisi Optimum", meaning "Nothing but the best is good enough". On matchdays, in a tradition
going back to 1962, players walk out to the theme tune to Z-Cars, named "Johnny Todd", a traditional Liverpool children's
song collected in 1890 by Frank Kidson which tells the story of a sailor betrayed by his lover while away at sea,
although on two separate occasions in 1994 they ran out to different songs. In August 1994, the club played
2 Unlimited's song "Get Ready For This", and a month later, a reworking of the Creedence Clearwater Revival classic
"Bad Moon Rising". Both were met with complete disapproval by Everton fans. Everton hold the record for the most
seasons in England's top tier (Division One/Premier League), at 111 seasons out of 114 as of 2014–15 (the club played
in Division 2 in 1930–31 and from 1951–54). They are one of seven teams to have played all 22 seasons of the Premier
League since its inception in August 1992 – the others being Arsenal, Aston Villa, Chelsea, Liverpool, Manchester
United, and Tottenham Hotspur. Everton against Aston Villa is the most played fixture in England's top flight, as
of the 2012–13 season the two founder members of the Football League have played a record 196 league games. Formed
in 1878, Everton were founding members of The Football League in 1888 and won their first league championship two
seasons later. Following four league titles and two FA Cup wins, Everton experienced a lull in the immediate post
World War Two period until a revival in the 1960s which saw the club win two league championships and an FA Cup.
The mid-1980s represented their most recent period of sustained success, with two League Championship successes,
an FA Cup, and the 1985 European Cup Winners' Cup. The club's most recent major trophy was the 1995 FA Cup. The club's
supporters are known as Evertonians. Everton were relegated to the Second Division two years later during internal
turmoil at the club. However, the club was promoted at the first attempt, scoring a record number of goals in the
second division. On return to the top flight in 1931–32, Everton wasted no time in reaffirming their status and won
a fourth League title at the first opportunity. Everton also won their second FA Cup in 1933 with a 3–0 win against
Manchester City in the final. The era ended in 1938–39 with a fifth League title. The Everton board finally ran out
of patience with Smith and he was sacked in March 2002 after an FA Cup exit at Middlesbrough, with Everton in real
danger of relegation. David Moyes was his replacement and guided Everton to a safe finish in fifteenth place. In
2002–03 Everton finished seventh, their highest finish since 1996. A fourth-place finish in 2004–05 ensured Everton
qualified for the Champions League qualifying round. The team failed to make it through to the Champions League group
stage and were then eliminated from the UEFA Cup. Everton qualified for the 2007–08 and 2008–09 UEFA Cup competitions
and they were runners-up in the 2009 FA Cup Final. In May 2013, the club launched a new crest to improve the reproducibility
of the design in print and broadcast media, particularly on a small scale. Critics[who?] suggested that it was external
pressure from sports manufacturer Nike, Inc. that prompted the redesign, as the number of colours had been reduced
and the radial effect removed, making the kit more cost-efficient to reproduce.[citation needed] The redesign
was poorly received by supporters, with a poll on an Everton fan site registering a 91% negative response to the
crest. A protest petition reached over 22,000 signatures before the club offered an apology and announced a new crest
would be created for the 2014–15 season with an emphasis on fan consultation. Shortly afterwards, the Head of Marketing
left the club. Everton originally played in the southeast corner of Stanley Park, which was the site for the new
Liverpool F.C. stadium, with the first official match taking place in 1879. In 1882, a man named J. Cruitt donated
land at Priory Road which became the club's home before they moved to Anfield, which was Everton's home until 1892.
At this time, a dispute emerged between the club's committee and Anfield's owner, Everton chairman John Houlding, over
how the club was to be owned and run, which led to Houlding
attempting to gain full control of the club by registering the company "Everton F.C. and Athletic Grounds Ltd".
In response, Everton left Anfield for a new ground, Goodison Park, where the club have played ever since. Houlding
attempted to take over Everton's name, colours, fixtures and league position, but was denied by The Football Association.
Instead, Houlding formed a new club, Liverpool F.C. There have been indications since 1996 that Everton will move
to a new stadium. The original plan was for a new 60,000-seat stadium to be built, but in 2000 a proposal was submitted
to build a 55,000 seat stadium as part of the King's Dock regeneration. This was unsuccessful as Everton failed to
generate the £30 million needed for a half stake in the stadium project, with the city council rejecting the proposal
in 2003. Late in 2004, driven by Liverpool Council and the Northwest Development Corporation, the club entered talks
with Liverpool F.C. about sharing a proposed stadium on Stanley Park. Negotiations broke down as Everton failed to
raise 50% of the costs. On 11 January 2005, Liverpool announced that ground-sharing was not a possibility, proceeding
to plan their own Stanley Park Stadium. Everton have a large fanbase, with the eighth highest average attendance
in the Premier League in the 2008–09 season. The majority of Everton's matchday support comes from the North West
of England, primarily Merseyside, Cheshire, West Lancashire and parts of Western Greater Manchester along with many
fans who travel from North Wales and Ireland. Within the city of Liverpool support for Everton and city rivals Liverpool
is not determined on a geographical basis, with supporters mixed across the city. However, Everton's support heartland
is traditionally based in the North West of the city and in the southern parts of Sefton. Everton also have many
supporters' clubs worldwide, in places such as North America, Singapore, Indonesia, Lebanon, Malaysia, Thailand,
and Australia. The official supporters club is FOREVERTON, and there are also several fanzines including When Skies
are Grey and Speke from the Harbour, which are sold around Goodison Park on match days. The current manager, Roberto
Martínez, is the fourteenth permanent holder of the position since it was established in 1939. There have also been
four caretaker managers, and before 1939 the team was selected by either the club secretary or by committee. The
club's longest-serving manager has been Harry Catterick, who was in charge of the team from 1961–73, taking in 594
first team matches. The Everton manager to win most domestic and international trophies is Howard Kendall, who won
two Division One championships, the 1984 FA Cup, the 1984 UEFA Cup Winners' Cup, and three Charity Shields. Everton's
second successful era started when Harry Catterick was made manager in 1961. In 1962–63, his second season in charge,
Everton won the League title and in 1966 the FA Cup followed with a 3–2 win over Sheffield Wednesday. Everton again
reached the final in 1968, but this time were unable to overcome West Bromwich Albion at Wembley. Two seasons later
in 1969–70, Everton won the League championship, nine points clear of nearest rivals Leeds United. During this period,
Everton were the first English club to achieve five consecutive years in European competitions—seasons 1961–62 to
1966–67. On 16 June 2006, it was announced that Everton had entered into talks with Knowsley Council and Tesco over
the possibility of building a new 55,000 seat stadium, expandable to over 60,000, in Kirkby. The club took the unusual
move of giving its supporters a say in the club's future by holding a ballot on the proposal, finding a split of
59% to 41% in favour. Opponents to the plan included other local councils concerned by the effect of a large Tesco
store being built as part of the development, and a group of fans demanding that Everton should remain within the
city boundaries of Liverpool. Everton regularly take large numbers away from home both domestically and in European
fixtures. The club implements a loyalty points scheme offering the first opportunity to purchase away tickets to
season ticket holders who have attended the most away matches. Everton often sell out the full allocation in away
grounds and tickets sell particularly well for North West England away matches. In October 2009, Everton took 7,000
travelling fans to Benfica, their largest away crowd in Europe since the 1985 European Cup Winners' Cup Final.
Everton F.C. is a limited company with the board of directors holding a majority of the shares. The club's most recent
accounts, from May 2014, show a net total debt of £28.1 million, with a turnover of £120.5 million and a profit of
£28.2 million. The club's overdraft with Barclays Bank is secured against the Premier League's "Basic Award Fund",
a guaranteed sum given to clubs for competing in the Premier League. Everton agreed a long-term loan of £30 million
with Bear Stearns and Prudential plc in 2002 over the duration of 25 years; a consolidation of debts at the time
as well as a source of capital for new player acquisitions. Goodison Park is secured as collateral. Everton's biggest
rivalry is with neighbours Liverpool, against whom they contest the Merseyside derby. The Merseyside derby is usually
a sellout fixture, and has been known as the "friendly derby" because both sets of fans can often be seen side by
side, red and blue, inside the stadium both at Anfield and Goodison Park. On the field, however, recent matches have tended to be
extremely stormy affairs; the derby has had more red cards than any other fixture in Premiership history. The rivalry
stems from an internal dispute between Everton officials and the owners of Anfield, which was then Everton's home
ground, resulting in Everton moving to Goodison Park, and the subsequent formation of Liverpool F.C., in 1892. Neville
Southall holds the record for the most Everton appearances, having played 751 first-team matches between 1981 and
1997, and previously held the record for the most league clean sheets during a season (15). During the 2008–09 season,
this record was beaten by American goalkeeper Tim Howard (17). The late centre half and former captain Brian Labone
comes second, having played 534 times. The longest-serving player is goalkeeper Ted Sagar, who played for 23 years
between 1929 and 1953, either side of the Second World War, making a total of 495 appearances. The club's top goalscorer,
with 383 goals in all competitions, is Dixie Dean; the second-highest goalscorer is Graeme Sharp with 159. Dean still
holds the English national record of most goals in a season, with 60. The record attendance for an Everton home match
is 78,299 against Liverpool on 18 September 1948. Remarkably, there was only one injury at this game: Tom Fleetwood was
hit on the head by a coin thrown from the crowd whilst he marched around the perimeter with St Edward's Orphanage
Band, playing the cornet. Goodison Park, like all major English football grounds since the recommendations of the
Taylor Report were implemented, is now an all-seater and only holds just under 40,000, meaning it is unlikely that
this attendance record will ever be broken at Goodison. Everton's record transfer fee was paid to Chelsea for Belgian
forward Romelu Lukaku, a sum of £28 million. Everton bought the player after he had spent the previous season with the team
on loan. The club also owned and operated a professional basketball team, by the name of Everton Tigers, who competed
in the elite British Basketball League. The team was launched in the summer of 2007 as part of the club's Community
programme, and played their home games at the Greenbank Sports Academy. The team grew out of the Toxteth Tigers
community youth programme, which started in 1968. The team quickly became one of the most successful in the league,
winning the BBL Cup in 2009 and the play-offs in 2010. However, Everton withdrew funding before the 2010–11 season
and the team was relaunched as the Mersey Tigers. Everton also have links with Chilean team Everton de Viña del
Mar who were named after the English club. On 4 August 2010, the two Evertons played each other in a friendly named
the Copa Hermandad at Goodison Park to mark the centenary of the Chilean team, an occasion organised by The Ruleteros
Society, a society founded to promote connections between the two clubs. Other Evertons exist in Rosario in Colonia
Department, Uruguay, La Plata, and Río Cuarto in Argentina, Elk Grove, California in the United States, and in Cork,
Ireland. The club have entered the UK pop charts on four occasions under different titles during the 1980s and 1990s
when many clubs released a song to mark their reaching the FA Cup Final. "The Boys in Blue", released in 1984, peaked
at number 82. The following year the club scored their biggest hit when "Here We Go" peaked at number 14. In 1986 the club
released "Everybody's Cheering The Blues" which reached number 83. "All Together Now", a reworking of a song by Merseyside
band The Farm, was released for the 1995 FA Cup Final and reached number 27. When the club next reached the 2009
FA Cup Final, the tradition had passed into history and no song was released. The cup triumph was also Everton's
passport to the Cup Winners' Cup—their first European campaign in the post-Heysel era. Progress under Joe Royle continued
in 1995–96 as they climbed to sixth place in the Premiership. A fifteenth-place finish the following season saw Royle
resign towards the end of the campaign, to be temporarily replaced by club captain, Dave Watson. Howard Kendall was
appointed Everton manager for the third time in 1997, but the appointment proved unsuccessful as Everton finished
seventeenth in the Premiership; only avoiding relegation due to their superior goal difference over Bolton Wanderers.
Former Rangers manager Walter Smith then took over from Kendall in the summer of 1998 but only managed three successive
finishes in the bottom half of the table. Everton have had many other nicknames over the years. When the black kit
was worn Everton were nicknamed "The Black Watch", after the famous army regiment. Since going blue in 1901, Everton
have been given the simple nickname "The Blues". Everton's attractive style of play led to Steve Bloomer calling
the team "scientific" in 1928, which is thought to have inspired the nickname "The School of Science". The battling
1995 FA Cup winning side were known as "The Dogs of War". When David Moyes arrived as manager he proclaimed Everton
as "The People's Club", which has been adopted as a semi-official club nickname.
Old English (Ænglisc, Anglisc, Englisc) or Anglo-Saxon is the earliest historical form of the English language, spoken in
England and southern and eastern Scotland in the early Middle Ages. It was brought to Great Britain by Anglo-Saxon
settlers probably in the mid 5th century, and the first Old English literary works date from the mid 7th century.
After the Norman Conquest of 1066, English was replaced for a time as the language of the upper classes by Anglo-Norman,
a relative of French, and Old English developed into the next historical form of English, known as Middle English.
The four main dialectal forms of Old English were Mercian, Northumbrian, Kentish, and West Saxon. Mercian and Northumbrian
are together referred to as Anglian. In terms of geography the Northumbrian region lay north of the Humber River;
the Mercian lay north of the Thames and south of the Humber River; West Saxon lay south and southwest of the Thames;
and the smallest, the Kentish region, lay southeast of the Thames, in a small corner of England. The Kentish region, settled
by the Jutes from Jutland, has the scantiest literary remains. Old English contained a certain number of loanwords
from Latin, which was the scholarly and diplomatic lingua franca of Western Europe. It is sometimes possible to give
approximate dates for the borrowing of individual Latin words based on which patterns of sound change they have undergone.
Some Latin words had already been borrowed into the Germanic languages before the ancestral Angles and Saxons left
continental Europe for Britain. More entered the language when the Anglo-Saxons were converted to Christianity and
Latin-speaking priests became influential. It was also through Irish Christian missionaries that the Latin alphabet
was introduced and adapted for the writing of Old English, replacing the earlier runic system. Nonetheless, the largest
transfer of Latin-based (mainly Old French) words into English occurred after the Norman Conquest of 1066, and thus
in the Middle English rather than the Old English period. Old English nouns had grammatical gender, a feature absent
in modern English, which uses only natural gender. For example, the words sunne ("sun"), mōna ("moon") and wīf ("woman/wife")
were respectively feminine, masculine and neuter; this is reflected, among other things, in the form of the definite
article used with these nouns: sēo sunne ("the sun"), se mōna ("the moon"), þæt wīf ("the woman/wife"). Pronoun usage
could reflect either natural or grammatical gender, when those conflicted (as in the case of wīf, a neuter noun referring
to a female person). The Latin alphabet of the time still lacked the letters ⟨j⟩ and ⟨w⟩, and there was no ⟨v⟩ as
distinct from ⟨u⟩; moreover native Old English spellings did not use ⟨k⟩, ⟨q⟩ or ⟨z⟩. The remaining 20 Latin letters
were supplemented by four more: ⟨æ⟩ (æsc, modern ash) and ⟨ð⟩ (ðæt, now called eth or edh), which were modified Latin
letters, and thorn ⟨þ⟩ and wynn ⟨ƿ⟩, which are borrowings from the futhorc. A few letter pairs were used as digraphs,
representing a single sound. Also used was the Tironian note ⟨⁊⟩ (a character similar to the digit 7) for the conjunction
and, and a thorn with a crossbar through the ascender for the pronoun þæt. Macrons over vowels were originally used
not to mark long vowels (as in modern editions), but to indicate stress, or as abbreviations for a following m or
n. The first example is taken from the opening lines of the folk-epic Beowulf, a poem of some 3,000 lines and the
single greatest work of Old English. This passage describes how Hrothgar's legendary ancestor Scyld was found as
a baby, washed ashore, and adopted by a noble family. The translation is literal and represents the original poetic
word order. As such, it is not typical of Old English prose. The modern cognates of original words have been used
whenever practical to give a close approximation of the feel of the original poem. Old English is one of the West
Germanic languages, and its closest relatives are Old Frisian and Old Saxon. Like other old Germanic languages, it
is very different from Modern English and difficult for Modern English speakers to understand without study. Old
English grammar is quite similar to that of modern German: nouns, adjectives, pronouns, and verbs have many inflectional
endings and forms, and word order is much freer. The oldest Old English inscriptions were written using a runic system,
but from about the 9th century this was replaced by a version of the Latin alphabet. A later literary standard, dating
from the later 10th century, arose under the influence of Bishop Æthelwold of Winchester, and was followed by such
writers as the prolific Ælfric of Eynsham ("the Grammarian"). This form of the language is known as the "Winchester
standard", or more commonly as Late West Saxon. It is considered to represent the "classical" form of Old English.
It retained its position of prestige until the time of the Norman Conquest, after which English ceased for a time
to be of importance as a literary language. Old English was not static, and its usage covered a period of 700 years,
from the Anglo-Saxon settlement of Britain in the 5th century to the late 11th century, some time after the Norman
invasion. While noting that the establishment of such dates is an arbitrary process, Albert Baugh dates Old English from 450 to 1150, a period of full inflections during which English was a synthetic language. Perhaps around 85 per cent of Old English words
are no longer in use, but those that survived, to be sure, are basic elements of Modern English vocabulary. Due to
the centralisation of power and the Viking invasions, there is relatively little written record of the non-Wessex
dialects after Alfred's unification. Some Mercian texts continued to be written, however, and the influence of Mercian
is apparent in some of the translations produced under Alfred's programme, many of which were produced by Mercian
scholars. Other dialects certainly continued to be spoken, as is evidenced by the continued variation between their
successors in Middle and Modern English. In fact, what would become the standard forms of Middle English and of Modern
English are descended from Mercian rather than West Saxon, while Scots developed from the Northumbrian dialect. It
was once claimed that, owing to its position at the heart of the Kingdom of Wessex, the relics of Anglo-Saxon accent,
idiom and vocabulary were best preserved in the dialect of Somerset. The strength of the Viking influence on Old
English appears from the fact that the indispensable elements of the language - pronouns, modals, comparatives, pronominal
adverbs (like "hence" and "together"), conjunctions and prepositions - show the most marked Danish influence; the
best evidence of Scandinavian influence appears in the extensive word borrowings; as Jespersen indicates, no texts exist in either Scandinavia or Northern England from this time to give certain evidence of an influence on syntax. The influence of Old Norse on Old English was substantive, pervasive, and of a democratic character. Old Norse and Old English resembled each other closely, like cousins, and with some words in common they could roughly understand each other; in time the inflections melted away and the analytic pattern emerged. It is most “important to recognize
that in many words the English and Scandinavian language differed chiefly in their inflectional elements. The body
of the word was so nearly the same in the two languages that only the endings would put obstacles in the way of mutual
understanding. In the mixed population which existed in the Danelaw these endings must have led to much confusion,
tending gradually to become obscured and finally lost.” This blending of peoples and languages happily resulted in
“simplifying English grammar.” Unlike Modern English, Old English is a language rich in morphological diversity.
It maintains several distinct cases: the nominative, accusative, genitive, dative and (vestigially) instrumental.
The only remnants of this system in Modern English are in the forms of a few pronouns (such as I/me/mine, she/her,
who/whom/whose) and in the possessive ending -'s, which derives from the old (masculine and neuter) genitive ending
-es. In Old English, however, nouns and their modifying words take appropriate endings depending on their case. The
influence of Old Norse certainly helped move English from a synthetic language along the continuum to a more analytic
word order, and Old Norse most likely made a greater impact on the English language than any other language. The
eagerness of Vikings in the Danelaw to communicate with their southern Anglo-Saxon neighbors produced a friction
that led to the erosion of the complicated inflectional word-endings. Simeon Potter notes: “No less far-reaching
was the influence of Scandinavian upon the inflexional endings of English in hastening that wearing away and leveling
of grammatical forms which gradually spread from north to south. It was, after all, a salutary influence. The gain
was greater than the loss. There was a gain in directness, in clarity, and in strength.” The form of the verb varies
with person (first, second and third), number (singular and plural), tense (present and past), and mood (indicative,
subjunctive and imperative). Old English also sometimes uses compound constructions to express other verbal aspects,
the future and the passive voice; in these we see the beginnings of the compound tenses of Modern English. Old English
verbs include strong verbs, which form the past tense by altering the root vowel, and weak verbs, which use a suffix
such as -de. As in Modern English, and peculiar to the Germanic languages, the verbs formed two great classes: weak (regular) and strong (irregular). As is true today, strong verbs were the smaller class, and many of them have over time decayed into weak forms. Then, as now, dental suffixes indicated the past tense of the weak verbs, as in work and
worked. Old English is a West Germanic language, developing out of Ingvaeonic (also known as North Sea Germanic)
dialects from the 5th century. It came to be spoken over most of the territory of the Anglo-Saxon kingdoms which
became the Kingdom of England. This included most of present-day England, as well as part of what is now southeastern
Scotland, which for several centuries belonged to the Anglo-Saxon kingdom of Northumbria. Other parts of the island
– Wales and most of Scotland – continued to use Celtic languages, except in the areas of Scandinavian settlements
where Old Norse was spoken. Celtic speech also remained established in certain parts of England: Medieval Cornish
was spoken all over Cornwall and in adjacent parts of Devon, while Cumbric survived perhaps to the 12th century in
parts of Cumbria, and Welsh may have been spoken on the English side of the Anglo-Welsh border. Norse was also widely
spoken in the parts of England which fell under Danish law. Some of the most important surviving works of Old English
literature are Beowulf, an epic poem; the Anglo-Saxon Chronicle, a record of early English history; the Franks Casket,
an inscribed early whalebone artefact; and Cædmon's Hymn, a Christian religious poem. There are also a number of
extant prose works, such as sermons and saints' lives, biblical translations, and translated Latin works of the early
Church Fathers, legal documents, such as laws and wills, and practical works on grammar, medicine, and geography.
Still, poetry is considered the heart of Old English literature. Nearly all Anglo-Saxon authors are anonymous, with
a few exceptions, such as Bede and Cædmon. Cædmon, the earliest English poet we know by name, served as a lay brother
in the monastery at Whitby. Old English developed from a set of Anglo-Frisian or North Sea Germanic dialects originally
spoken by Germanic tribes traditionally known as the Angles, Saxons, and Jutes. As the Anglo-Saxons became dominant
in England, their language replaced the languages of Roman Britain: Common Brittonic, a Celtic language, and Latin,
brought to Britain by Roman invasion. Old English had four main dialects, associated with particular Anglo-Saxon
kingdoms: Mercian, Northumbrian, Kentish and West Saxon. It was West Saxon that formed the basis for the literary
standard of the later Old English period, although the dominant forms of Middle and Modern English would develop
mainly from Mercian. The speech of eastern and northern parts of England was subject to strong Old Norse influence
due to Scandinavian rule and settlement beginning in the 9th century. With the unification of the Anglo-Saxon kingdoms
(outside the Danelaw) by Alfred the Great in the later 9th century, the language of government and literature became
standardised around the West Saxon dialect (Early West Saxon). Alfred advocated education in English alongside Latin,
and had many works translated into the English language; some of them, such as Pope Gregory I's treatise Pastoral
Care, appear to have been translated by Alfred himself. In Old English, as is typical of the development of literature, poetry arose before prose, but it was King Alfred the Great (reigned 871–899) who chiefly inspired the growth of prose. Each of these
four dialects was associated with an independent kingdom on the island. Of these, Northumbria south of the Tyne,
and most of Mercia, were overrun by the Vikings during the 9th century. The portion of Mercia that was successfully
defended, and all of Kent, were then integrated into Wessex under Alfred the Great. From that time on, the West Saxon
dialect (then in the form now known as Early West Saxon) became standardised as the language of government, and as
the basis for the many works of literature and religious materials produced or translated from Latin in that period.
Another source of loanwords was Old Norse, which came into contact with Old English via the Scandinavian rulers and
settlers in the Danelaw from the late 9th century, and during the rule of Cnut and other Danish kings in the early
11th century. Many place-names in eastern and northern England are of Scandinavian origin. Norse borrowings are relatively
rare in Old English literature, being mostly terms relating to government and administration. The literary standard,
however, was based on the West Saxon dialect, away from the main area of Scandinavian influence; the impact of Norse
may have been greater in the eastern and northern dialects. Certainly in Middle English texts, which are more often
based on eastern dialects, a strong Norse influence becomes apparent. Modern English contains a great many, often
everyday, words that were borrowed from Old Norse, and the grammatical simplification that occurred after the Old
English period is also often attributed to Norse influence. Modern editions of Old English manuscripts generally
introduce some additional conventions. The modern forms of Latin letters are used, including ⟨g⟩ in place of the
insular G, ⟨s⟩ for long S, and others which may differ considerably from the insular script, notably ⟨e⟩, ⟨f⟩ and
⟨r⟩. Macrons are used to indicate long vowels, where usually no distinction was made between long and short vowels
in the originals. (In some older editions an acute accent mark was used for consistency with Old Norse conventions.)
Additionally, modern editions often distinguish between velar and palatal ⟨c⟩ and ⟨g⟩ by placing dots above the palatals:
⟨ċ⟩, ⟨ġ⟩. The letter wynn ⟨ƿ⟩ is usually replaced with ⟨w⟩, but æsc, eth and thorn are normally retained (except
when eth is replaced by thorn). Like other historical languages, Old English has been used by scholars and enthusiasts
of later periods to create texts either imitating Anglo-Saxon literature or deliberately transferring it to a different
cultural context. Examples include Alistair Campbell and J. R. R. Tolkien. A number of websites devoted to Neo-Paganism
and Historical re-enactment offer reference material and forums promoting the active use of Old English. By far the
most ambitious project is the Old English Wikipedia, but most of the Neo-Old English texts published
online bear little resemblance to the historical model and are riddled with very basic grammatical mistakes. Old
English was first written in runes, using the futhorc – a rune set derived from the Germanic 24-character elder futhark,
extended by five more runes used to represent Anglo-Saxon vowel sounds, and sometimes by several more additional
characters. From around the 9th century, the runic system came to be supplanted by a (minuscule) half-uncial script
of the Latin alphabet introduced by Irish Christian missionaries. This was replaced by insular script, a cursive
and pointed version of the half-uncial script. This was used until the end of the 12th century when continental Carolingian
minuscule (also known as Caroline) replaced the insular.
A fleet carrier is intended to operate with the main fleet and usually provides an offensive capability. These are the largest
carriers capable of fast speeds. By comparison, escort carriers were developed to provide defense for convoys of
ships. They were smaller and slower with lower numbers of aircraft carried. Most were built from mercantile hulls
or, in the case of merchant aircraft carriers, were bulk cargo ships with a flight deck added on top. Light aircraft
carriers were carriers that were fast enough to operate with the fleet but of smaller size with reduced aircraft
capacity. Soviet aircraft carriers now in use by Russia are actually called heavy aviation cruisers; while sized in the range of large fleet carriers, these ships were designed to deploy alone or with escorts, providing both strong defensive weaponry and heavy offensive missiles equivalent to those of a guided missile cruiser, in addition to supporting fighters and helicopters. This new-found importance of naval aviation forced nations to create a number of carriers,
in efforts to provide air superiority cover for every major fleet in order to ward off enemy aircraft. This extensive
usage required the construction of several new 'light' carriers. Escort aircraft carriers, such as USS Bogue, were
sometimes purpose-built, but most were converted from merchant ships as a stop-gap measure to provide anti-submarine
air support for convoys and amphibious invasions. Following this concept, light aircraft carriers built by the US,
such as USS Independence, represented a larger, more "militarized" version of the escort carrier. Although their aircraft complement was similar to that of escort carriers, they had the advantage of speed from their converted cruiser hulls. The UK 1942 Design Light Fleet Carrier was designed to be built quickly by civilian shipyards, with an expected service life of about three years. These ships served the Royal Navy during the war, and the hull design was chosen by nearly all navies that operated aircraft carriers after the war, until the 1980s. Emergencies also spurred the creation or conversion of highly unconventional
aircraft carriers. CAM ships were cargo-carrying merchant ships that could launch (but not retrieve) a single fighter aircraft from a catapult to defend the convoy from long-range German aircraft. Speaking in St. Petersburg, Russia
on 30 June 2011, the head of Russia's United Shipbuilding Corporation said his company expected to begin design work
for a new carrier in 2016, with a goal of beginning construction in 2018 and having the carrier achieve initial operational
capability by 2023. Several months later, on 3 November 2011 the Russian newspaper Izvestiya reported that the naval
building plan now included (first) the construction of a new shipyard capable of building large hull ships, after
which Moscow would build two nuclear-powered aircraft carriers (80,000 tons full load each) by 2027. The spokesperson
said one carrier would be assigned to the Russian Navy's Northern Fleet at Murmansk, and the second would be stationed
with the Pacific Fleet at Vladivostok. As "runways at sea", aircraft carriers have a flat-top flight deck, which
launches and recovers aircraft. Aircraft launch forward, into the wind, and are recovered from astern. The flight
deck is where the most notable differences between a carrier and a land runway are found. Creating such a surface
at sea poses constraints on the carrier – for example, the fact that it is a ship means that a full-length runway
would be costly to construct and maintain. This affects take-off procedure, as a shorter runway length of the deck
requires that aircraft accelerate more quickly to gain lift. This either requires a thrust boost, a vertical component
to its velocity, or a reduced take-off load (to lower mass). The differing types of deck configuration, as above,
influence the structure of the flight deck. The form of launch assistance a carrier provides is strongly related
to the types of aircraft embarked and the design of the carrier itself. Since the early 1950s on conventional carriers
it has been the practice to recover aircraft at an angle to port of the axial line of the ship. The primary function
of this angled deck is to allow aircraft that miss the arresting wires, referred to as a bolter, to become airborne
again without the risk of hitting aircraft parked forward. The angled deck allows the installation of one or two
"waist" catapults in addition to the two bow cats. An angled deck also improves launch and recovery cycle flexibility
with the option of simultaneous launching and recovery of aircraft. Another deck structure that can be seen is a
ski-jump ramp at the forward end of the flight deck. This was first developed to help STOVL aircraft take
off at far higher weights than is possible with a vertical or rolling takeoff on flat decks. Originally developed
by the Royal Navy, it since has been adopted by many navies for smaller carriers. A ski-jump ramp works by converting
some of the forward rolling movement of the aircraft into vertical velocity and is sometimes combined with the aiming
of jet thrust partly downwards. This allows heavily loaded and fueled aircraft a few more precious seconds to attain
sufficient air velocity and lift to sustain normal flight. Without a ski-jump, launching fully loaded and fueled aircraft such as the Harrier would not be possible on a smaller flat-deck ship; the aircraft would either stall out or crash directly into the sea. An aircraft carrier is a warship that serves as a seagoing airbase, equipped with a full-length flight
deck and facilities for carrying, arming, deploying, and recovering aircraft. Typically, it is the capital ship of
a fleet, as it allows a naval force to project air power worldwide without depending on local bases for staging aircraft
operations. Aircraft carriers are expensive to build and are critical assets. Aircraft carriers have evolved from
converted cruisers to nuclear-powered warships that carry numerous fighter planes, strike aircraft, helicopters,
and other types of aircraft. The 1903 advent of heavier-than-air fixed-wing aircraft was closely followed in 1910
by the first experimental take-off of an airplane, made from the deck of a United States Navy vessel (cruiser USS
Birmingham), and the first experimental landings were conducted in 1911. On 9 May 1912 the first airplane take-off
from a ship underway was made from the deck of the British Royal Navy's HMS Hibernia. Seaplane tender support ships
came next, with the French Foudre of 1911. In September 1914 the Imperial Japanese Navy Wakamiya conducted the world's
first successful ship-launched air raid: on 6 September 1914 a Farman aircraft launched by Wakamiya attacked the
Austro-Hungarian cruiser SMS Kaiserin Elisabeth and the German gunboat Jaguar in Kiaochow Bay off Tsingtao; neither
was hit. The first carrier-launched airstrike was the Tondern Raid in July 1918. Seven Sopwith Camels launched from
the converted battlecruiser HMS Furious damaged the German airbase at Tønder and destroyed two zeppelins. Modern
navies that operate such aircraft carriers treat them as the capital ship of the fleet, a role previously held by
the battleship. This change took place during World War II in response to air power becoming a significant factor
in warfare, driven by the superior range, flexibility and effectiveness of carrier-launched aircraft. Following the
war, carrier operations continued to increase in size and importance. Supercarriers, displacing 75,000 tonnes or
greater, have become the pinnacle of carrier development. Some are powered by nuclear reactors and form the core
of a fleet designed to operate far from home. Amphibious assault ships, such as USS Tarawa and HMS Ocean, serve the
purpose of carrying and landing Marines, and operate a large contingent of helicopters for that purpose. Also known
as "commando carriers" or "helicopter carriers", many have the capability to operate VSTOL aircraft. The Royal Australian
Navy is in the process of procuring two Canberra-class LHDs, the first of which was commissioned in November 2015,
while the second is expected to enter service in 2016. The ships will be the largest in Australian naval history.
Their primary roles are to embark, transport and deploy an embarked force and to carry out or support humanitarian
assistance missions. The LHD is capable of launching multiple helicopters at one time while maintaining an amphibious
capability of 1,000 troops and their supporting vehicles (tanks, armoured personnel carriers etc.). The Australian
Defence Minister has publicly raised the possibility of procuring F-35B STOVL aircraft for the carrier, stating that
it "has been on the table since day one" and that the LHDs are "STOVL capable". The British Royal Navy is constructing
two new larger STOVL aircraft carriers, the Queen Elizabeth class, to replace the three Invincible-class carriers.
The ships will be named HMS Queen Elizabeth and HMS Prince of Wales. They will be able to operate up to 40 aircraft
in peace time with a tailored group of up to 50, and will have a displacement of 70,600 tonnes. The ships are due
to become operational from 2020. Their primary aircraft complement will be made up of F-35B Lightning IIs, and their
ship's company will number around 680 with the total complement rising to about 1,600 when the air group is embarked.
Defensive weapons will include the Phalanx Close-In Weapons System for anti-aircraft and anti-missile defence; also
30 mm Automated Small Calibre Guns and miniguns for use against fast attack craft. The two ships will be the largest
warships ever built for the Royal Navy. The constraints of constructing a flight deck affect the role of a given
carrier strongly, as they influence the weight, type, and configuration of the aircraft that may be launched. For
example, assisted launch mechanisms are used primarily for heavy aircraft, especially those loaded with air-to-ground
weapons. CATOBAR is most commonly used on USN supercarriers as it allows the deployment of heavy jets with full loadouts,
especially on ground-attack missions. STOVL is used by other navies because it is cheaper to operate and still provides
good deployment capability for fighter aircraft. On the recovery side of the flight deck, the adaptation to the aircraft
loadout is mirrored. Non-VTOL or conventional aircraft cannot decelerate on their own, and almost all carriers using
them must have arrested-recovery systems (-BAR, e.g. CATOBAR or STOBAR) to recover their aircraft. Aircraft that
are landing extend a tailhook that catches on arrestor wires stretched across the deck to bring themselves to a stop
in a short distance. Post-WWII Royal Navy research on safer CATOBAR recovery eventually led to universal adoption
of a landing area angled off axis to allow aircraft that missed the arresting wires to "bolt" and safely return to
flight for another landing attempt rather than crashing into aircraft on the forward deck. Key personnel involved
in the flight deck include the shooters, the handler, and the air boss. Shooters are naval aviators or Naval Flight
Officers and are responsible for launching aircraft. The handler works just inside the island from the flight deck
and is responsible for the movement of aircraft before launching and after recovery. The "air boss" (usually a commander)
occupies the top bridge (Primary Flight Control, also called primary or the tower) and has the overall responsibility
for controlling launch, recovery and "those aircraft in the air near the ship, and the movement of planes on the
flight deck, which itself resembles a well-choreographed ballet." The captain of the ship spends most of his time
one level below primary on the Navigation Bridge. Below this is the Flag Bridge, designated for the embarked admiral
and his staff. The disadvantage of the ski-jump is the penalty it exacts on aircraft size, payload, and fuel load
(and thus range); heavily laden aircraft cannot launch using a ski-jump because their high loaded weight requires
either a longer takeoff roll than is possible on a carrier deck, or assistance from a catapult or JATO rocket. For
example, the Russian Su-33 is only able to launch from the carrier Admiral Kuznetsov with a minimal armament and
fuel load. Another disadvantage arises in mixed flight-deck operations where helicopters are also present, as on a US Landing Helicopter Dock or Landing Helicopter Assault amphibious assault ship: a ski-jump is not included, since it would eliminate one or more helicopter landing areas. The resulting flat deck limits the loading of Harriers, though this is somewhat mitigated by the longer rolling start provided by a long flight deck compared with many STOVL carriers. One STOBAR
carrier: Liaoning was originally built as the 57,000 tonne Soviet Admiral Kuznetsov-class carrier Varyag and was
later purchased as a stripped hulk by China in 1998 on the pretext of use as a floating casino, then partially rebuilt
and towed to China for completion. Liaoning was commissioned on 25 September 2012, and began service for testing
and training. On 24 or 25 November 2012, Liaoning successfully launched and recovered several Shenyang J-15 jet fighter
aircraft. She is classified as a training ship, intended to allow the navy to practice with carrier usage. On 26
December 2012, the People's Daily reported that it will take four to five years for Liaoning to reach full capacity,
mainly because the training and coordination involved will take the Chinese PLA Navy a significant amount of time to complete, as this is the first aircraft carrier in its possession. As it is a training ship, Liaoning is not assigned to any of China's operational fleets. India started the construction of a 40,000-tonne, 260-metre-long (850 ft) Vikrant-class
aircraft carrier in 2009. The new carrier will operate MiG-29K and naval HAL Tejas aircraft along with the Indian-made
helicopter HAL Dhruv. The ship will be powered by four gas-turbine engines and will have a range of 8,000 nautical
miles (15,000 kilometres), carrying 160 officers, 1,400 sailors, and 30 aircraft. The carrier is being constructed
by Cochin Shipyard. The ship was launched in August 2013 and is scheduled for commissioning in 2018. Carriers have
evolved since their inception in the early twentieth century from wooden vessels used to deploy balloons to nuclear-powered
warships that carry dozens of aircraft, including fighter jets and helicopters. As of 3 March 2016, there are thirty-seven
active aircraft carriers in the world within twelve navies. The United States Navy has 10 large nuclear-powered carriers
(known as supercarriers, carrying up to 90 aircraft each), the largest carriers in the world; the total deckspace
is over twice that of all other nations combined. As well as the supercarrier fleet, the US Navy has nine amphibious
assault ships used primarily for helicopters (sometimes called helicopter carriers); these can also carry up to 25
fighter jets, and in some cases, are as large as some other nations' fixed-wing carriers. There is no single definition
of an "aircraft carrier", and modern navies use several variants of the type. These variants are sometimes categorized
as sub-types of aircraft carriers, and sometimes as distinct types of naval aviation-capable ships. Aircraft carriers
may be classified according to the type of aircraft they carry and their operational assignments. Admiral Sir Mark
Stanhope, former head of the Royal Navy, has said that "To put it simply, countries that aspire to strategic international
influence have aircraft carriers". The aircraft carrier dramatically changed naval combat in World War II, because
air power was becoming a significant factor in warfare. The advent of aircraft as focal weapons was driven by the
superior range, flexibility and effectiveness of carrier-launched aircraft. They had greater range and precision than
naval guns, making them highly effective. The versatility of the carrier was demonstrated in November 1940 when HMS
Illustrious launched a long-range strike on the Italian fleet at their base in Taranto, signalling the beginning
of the effective and highly mobile aircraft strikes. This operation incapacitated three of the six battleships at
a cost of two torpedo bombers. World War II in the Pacific Ocean involved clashes between aircraft carrier fleets.
The 1941 Japanese surprise attack on Pearl Harbor was a clear illustration of the power projection capability afforded
by a large force of modern carriers. Concentrating six carriers in a single unit marked a turning point in naval history, as no
other nation had fielded anything comparable. However, the vulnerability of carriers compared to traditional battleships
when forced into a gun-range encounter was quickly illustrated by the sinking of HMS Glorious by German battleships
during the Norwegian campaign in 1940. The development of flattop vessels produced the first large fleet ships. In
1918, HMS Argus became the world's first carrier capable of launching and recovering naval aircraft. As a result
of the Washington Naval Treaty of 1922, which limited the construction of new heavy surface combat ships, most early
aircraft carriers were conversions of ships that were laid down (or had served) as different ship types: cargo ships,
cruisers, battlecruisers, or battleships. These conversions gave rise to the Lexington-class aircraft carriers (1927),
Akagi, and the Courageous class. Specialist carrier evolution was well underway, with several navies ordering and building
warships that were purposefully designed to function as aircraft carriers by the mid-1920s, resulting in the commissioning
of ships such as Hōshō (1922), HMS Hermes (1924), and Béarn (1927). During World War II, these ships would become
known as fleet carriers. In December 2009, then Indian Navy chief Admiral Nirmal Kumar Verma said
at his maiden navy week press conference that concepts currently being examined by the Directorate of Naval Design
for the second indigenous aircraft carrier (IAC-2), are for a conventionally powered carrier displacing over 50,000
tons and equipped with steam catapults (rather than the ski-jump on the Gorshkov/Vikramaditya and the IAC) to launch
fourth-generation aircraft. Later on in August 2013 Vice Admiral RK Dhowan, while talking about the detailed study
underway on the IAC-II project, said that nuclear propulsion was also being considered. The navy also evaluated the
Electromagnetic Aircraft Launch System (EMALS), which is being used by the US Navy in their latest Gerald R. Ford-class
aircraft carriers. General Atomics, the developer of the EMALS, was cleared by the US government to give a technical
demonstration to Indian Navy officers, who were impressed by the new capabilities of the system. The EMALS enables
launching varied aircraft including unmanned combat air vehicles (UCAV). The aim is to have a total of three aircraft
carriers in service, with two fully operational carriers and the third in refit. In August 2013, a launching ceremony
for Japan's largest military ship since World War II was held in Yokohama. The 820-foot-long (250 m), 19,500-ton
flattop Izumo was deployed in March 2015. The ship is able to carry up to 14 helicopters; however, only seven anti-submarine warfare (ASW) helicopters and two search-and-rescue (SAR) helicopters were planned for the initial aircraft complement. For other operations, 400 troops
and fifty 3.5 t trucks (or equivalent equipment) can also be carried. The flight deck has five helicopter landing
spots that allow simultaneous landings or take-offs. The ship is equipped with two Phalanx CIWS and two SeaRAM for
its defense. The destroyers of this class were initially intended to replace the two ships of the Shirane class,
which were originally scheduled to begin decommissioning in FY2014. The current US fleet of Nimitz-class carriers
will be followed into service (and in some cases replaced) by the ten-ship Gerald R. Ford class. It is expected that
the ships will be more automated in an effort to reduce the funding required to staff, maintain and operate the supercarriers. The main new features are the implementation of the Electromagnetic Aircraft Launch System (EMALS), which replaces the old steam catapults, and unmanned aerial vehicles. With the deactivation of USS Enterprise in December
2012 (decommissioning scheduled for 2016), the U.S. fleet comprises 10 active supercarriers. On 24 July 2007, the
House Armed Services Seapower subcommittee recommended seven or eight new carriers (one every four years). However,
the debate has deepened over budgeting for the $12–14.5 billion (plus $12 billion for development and research) for
the 100,000 ton Gerald R. Ford-class carrier (estimated service 2016) compared to the smaller $2 billion 45,000 ton
America-class amphibious assault ships, which are able to deploy squadrons of F-35Bs; the first of the class, USS America, is already active, a second, USS Tripoli, is under construction and nine more are planned. If the aircraft are VTOL-capable or helicopters, they do not need to decelerate and hence have no need for arresting gear. The arrested-recovery system has used an angled deck since the 1950s
because, in case the aircraft does not catch the arresting wire, the short deck allows easier take off by reducing
the number of objects between the aircraft and the end of the runway. It also has the advantage of separating the
recovery operation area from the launch area. Helicopters and aircraft capable of vertical or short take-off and
landing (V/STOL) usually recover by coming abreast the carrier on the port side and then using their hover capability
to move over the flight deck and land vertically without the need for arresting gear. The superstructure of a carrier
(such as the bridge, flight control tower) are concentrated in a relatively small area called an island, a feature
pioneered on HMS Hermes in 1923. While the island is usually built on the starboard side of the flight deck, the
Japanese aircraft carriers Akagi and Hiryū had their islands built on the port side. Very few carriers have been
designed or built without an island. The flush deck configuration proved to have significant drawbacks, primary of
which was management of the exhaust from the power plant. Fumes coming across the deck were a major issue in USS
Langley. In addition, lack of an island meant difficulties managing the flight deck, performing air traffic control,
a lack of radar housing placements and problems with navigating and controlling the ship itself. 1 CATOBAR carrier:
Charles de Gaulle is a 42,000 tonne nuclear-powered aircraft carrier, commissioned in 2001, and is the flagship of
the French Navy (Marine Nationale). The ship carries a complement of Dassault-Breguet Super Étendard, Dassault Rafale
M and E‑2C Hawkeye aircraft, EC725 Caracal and AS532 Cougar helicopters for combat search and rescue, as well as
modern electronics and Aster missiles. It is a CATOBAR-type carrier that uses two 75 m C13‑3 steam catapults, a shorter version of the catapult system installed on the U.S. Nimitz-class carriers, with one catapult at the bow and one
across the front of the landing area. Since World War II, aircraft carrier designs have increased in size to accommodate a steady
increase in aircraft size. The large, modern Nimitz class of US carriers has a displacement nearly four times that
of the World War II–era USS Enterprise, yet its complement of aircraft is roughly the same—a consequence of the steadily
increasing size and weight of military aircraft over the years. Today's aircraft carriers are so expensive that nations
which operate them risk significant political, economic, and military impact if a carrier is lost, or even used in
conflict. Conventional ("tailhook") aircraft rely upon a landing signal officer (LSO, radio call sign paddles) to
monitor the aircraft's approach, visually gauge glideslope, attitude, and airspeed, and transmit that data to the
pilot. Before the angled deck emerged in the 1950s, LSOs used colored paddles to signal corrections to the pilot
(hence the nickname). From the late 1950s onward, visual landing aids such as Optical Landing System have provided
information on proper glide slope, but LSOs still transmit voice calls to approaching pilots by radio. Although STOVL
aircraft are capable of taking off vertically from a spot on the deck, using the ramp and a running start is far
more fuel efficient and permits a heavier launch weight. As catapults are unnecessary, carriers with this arrangement
reduce weight, complexity, and space needed for complex steam or electromagnetic launching equipment; vertical landing aircraft also remove the need for arresting cables and related hardware. Russian, Chinese, and future Indian carriers
include a ski-jump ramp for launching lightly loaded conventional fighter aircraft but recover using traditional
carrier arresting cables and a tailhook on their aircraft. One CATOBAR carrier: São Paulo is a Clemenceau-class aircraft
carrier currently in service with the Brazilian Navy. São Paulo was first commissioned in 1963 by the French Navy
as Foch and was transferred in 2000 to Brazil, where she became the new flagship of the Brazilian Navy. During the
period from 2005–2010, São Paulo underwent extensive modernization. At the end of 2010, sea trials began, and as
of 2011, São Paulo had been evaluated by the CIASA (Inspection Commission and Training Advisory). She was
expected to rejoin the fleet in late 2013, but suffered another major fire in 2012. 1 STOBAR carrier: Admiral Flota
Sovetskovo Soyuza Kuznetsov: 55,000 tonne Admiral Kuznetsov-class STOBAR aircraft carrier. Launched in 1985 as Tbilisi,
renamed and operational from 1995. Without catapults she can launch and recover lightly fueled naval fighters for
air defense or anti-ship missions but not heavy conventional bombing strikes. Officially designated
an aircraft carrying cruiser, she is unique in carrying a heavy cruiser's complement of defensive weapons and large
P-700 Granit offensive missiles. The P-700 systems will be removed in the coming refit to enlarge her below decks
aviation facilities as well as upgrading her defensive systems. The Royal Navy is constructing two new larger STOVL
aircraft carriers, the Queen Elizabeth class, to replace the three now retired Invincible-class carriers. The ships
are HMS Queen Elizabeth and HMS Prince of Wales. They will be able to operate up to 40 aircraft during peacetime operations, with a tailored group of up to 50, and will have a displacement of 70,600 tonnes. HMS Queen Elizabeth is projected to be commissioned in 2017, followed by Prince of Wales in about 2020. The ships are due to become operational starting
in 2020. Their primary aircraft complement will be made up of F-35B Lightning IIs, and their ship's company will
number around 680 with the total complement rising to about 1600 when the air group is embarked. The two ships will
be the largest warships ever built for the Royal Navy.
In 2007, two FAA whistleblowers, inspectors Charalambe "Bobby" Boutris and Douglas E. Peters, came forward; Boutris said he had attempted to ground Southwest after finding cracks in the fuselage but was prevented by supervisors he said were friendly with the airline. This was validated by a report by the Department of Transportation which found FAA managers
had allowed Southwest Airlines to fly 46 airplanes in 2006 and 2007 that were overdue for safety inspections, ignoring
concerns raised by inspectors. Audits of other airlines resulted in two airlines grounding hundreds of planes, causing
thousands of flight cancellations. The House Transportation and Infrastructure Committee held hearings in April 2008.
Jim Oberstar, former chairman of the committee, said its investigation uncovered a pattern of regulatory abuse and
widespread regulatory lapses, allowing 117 aircraft to be operated commercially although not in compliance with FAA
safety rules. Oberstar said there was a "culture of coziness" between senior FAA officials and the airlines and "a
systematic breakdown" in the FAA's culture that resulted in "malfeasance, bordering on corruption." In 2008 the FAA
proposed to fine Southwest $10.2 million for failing to inspect older planes for cracks, and in 2009 Southwest and
the FAA agreed that Southwest would pay a $7.5 million penalty and would adopt new safety procedures, with the fine
doubling if Southwest failed to follow through. On July 22, 2008, in the aftermath of the Southwest Airlines inspection
scandal, a bill was unanimously approved in the House to tighten regulations concerning airplane maintenance procedures,
including the establishment of a whistleblower office and a two-year "cooling off" period that FAA inspectors or
supervisors of inspectors must wait before they can work for those they regulated. The bill also required rotation
of principal maintenance inspectors and stipulated that the word "customer" properly applies to the flying public,
not those entities regulated by the FAA. The bill died in a Senate committee that year. The FAA gradually assumed
additional functions. The hijacking epidemic of the 1960s had already brought the agency into the field of civil
aviation security. After the hijackings of September 11, 2001, that responsibility passed primarily to the Department of Homeland Security. The FAA became more involved with the environmental aspects of aviation in
1968 when it received the power to set aircraft noise standards. Legislation in 1970 gave the agency management of
a new airport aid program and certain added responsibilities for airport safety. During the 1960s and 1970s, the
FAA also started to regulate high altitude (over 500 feet) kite and balloon flying. On the eve of America's entry
into World War II, CAA began to extend its ATC responsibilities to takeoff and landing operations at airports. This
expanded role eventually became permanent after the war. The application of radar to ATC helped controllers in their
drive to keep abreast of the postwar boom in commercial air transportation. In 1946, meanwhile, Congress gave CAA
the added task of administering the federal-aid airport program, the first peacetime program of financial assistance
aimed exclusively at promoting development of the nation's civil airports. By the mid-1970s, the agency had achieved
a semi-automated air traffic control system using both radar and computer technology. This system required enhancement
to keep pace with air traffic growth, however, especially after the Airline Deregulation Act of 1978 phased out the
CAB's economic regulation of the airlines. A nationwide strike by the air traffic controllers union in 1981 forced
temporary flight restrictions but failed to shut down the airspace system. During the following year, the agency
unveiled a new plan for further automating its air traffic control facilities, but progress proved disappointing.
In 1994, the FAA shifted to a more step-by-step approach that has provided controllers with advanced equipment. The
Air Commerce Act of May 20, 1926, is the cornerstone of the federal government's regulation of civil aviation. This
landmark legislation was passed at the urging of the aviation industry, whose leaders believed the airplane could
not reach its full commercial potential without federal action to improve and maintain safety standards. The Act
charged the Secretary of Commerce with fostering air commerce, issuing and enforcing air traffic rules, licensing
pilots, certifying aircraft, establishing airways, and operating and maintaining aids to air navigation. The newly
created Aeronautics Branch, operating under the Department of Commerce, assumed primary responsibility for aviation
oversight. The approaching era of jet travel, and a series of midair collisions (most notable was the 1956 Grand
Canyon mid-air collision), prompted passage of the Federal Aviation Act of 1958. This legislation gave the CAA's
functions to a new independent body, the Federal Aviation Agency. The act transferred air safety regulation from
the CAB to the new FAA, and also gave the FAA sole responsibility for a common civil-military system of air navigation
and air traffic control. The FAA's first administrator, Elwood R. Quesada, was a former Air Force general and adviser
to President Eisenhower. On October 31, 2013, after outcry from media outlets, including heavy criticism from Nick
Bilton of The New York Times, the FAA announced it would allow airlines to expand passengers' use of portable electronic
devices during all phases of flight, but mobile phone calls will still be prohibited. Implementation will vary among
airlines. The FAA expects many carriers to show that their planes allow passengers to safely use their devices in
airplane mode, gate-to-gate, by the end of 2013. Devices must be held or put in the seat-back pocket during the actual
takeoff and landing. Mobile phones must be in airplane mode or with mobile service disabled, with no signal bars
displayed, and cannot be used for voice communications due to Federal Communications Commission regulations that
prohibit any airborne calls using mobile phones. If an air carrier provides Wi-Fi service during flight, passengers
may use it. Short-range Bluetooth accessories, like wireless keyboards, can also be used. In 1967, a new U.S. Department
of Transportation (DOT) combined major federal responsibilities for air and surface transport. The Federal Aviation
Agency's name changed to the Federal Aviation Administration as it became one of several agencies (e.g., Federal
Highway Administration, Federal Railroad Administration, the Coast Guard, and the Saint Lawrence Seaway Commission)
within DOT (albeit the largest). The FAA administrator would no longer report directly to the president but would
instead report to the Secretary of Transportation. New programs and budget requests would have to be approved by
DOT, which would then include these requests in the overall budget and submit it to the president. The Aeronautics
Branch was renamed the Bureau of Air Commerce in 1934 to reflect its enhanced status within the Department. As commercial
flying increased, the Bureau encouraged a group of airlines to establish the first three centers for providing air
traffic control (ATC) along the airways. In 1936, the Bureau itself took over the centers and began to expand the
ATC system. The pioneer air traffic controllers used maps, blackboards, and mental calculations to ensure the safe
separation of aircraft traveling along designated routes between cities. In 2014, the FAA changed its long-standing approach to selecting air traffic control candidates, eliminating preferences based on training and experience at flight schools in favor of a personality test open to anyone regardless of experience. The move was made to increase the racial diversity of air traffic controllers. Before the change, candidates who had completed coursework at participating colleges and universities could be "fast-tracked" for consideration. The agency eliminated that program and instead switched to a system open to the general public, with no need for any experience or even a college degree. Instead,
applicants could take "a biographical questionnaire" that many applicants found baffling. The FAA has been cited
as an example of regulatory capture, "in which the airline industry openly dictates to its regulators its governing
rules, arranging for not only beneficial regulation, but placing key people to head these regulators." Retired NASA
Office of Inspector General Senior Special Agent Joseph Gutheinz, who used to be a Special Agent with the Office
of Inspector General for the Department of Transportation and with FAA Security, is one of the most outspoken critics
of FAA. Rather than commend the agency for proposing a $10.2 million fine against Southwest Airlines for its failure
to conduct mandatory inspections in 2008, he said in an Associated Press story that penalties against airlines that violate FAA directives should be stiffer. At $25,000 per violation, Gutheinz said, airlines can justify rolling the dice and taking the chance on getting caught. He also said the FAA is often too quick to bend to pressure from airlines and pilots. Other experts have been critical of the constraints and expectations
under which the FAA is expected to operate. The dual roles of encouraging and regulating aerospace travel are contradictory. For example, to levy a heavy penalty upon an airline for violating an FAA regulation which
would impact their ability to continue operating would not be considered encouraging aerospace travel.
The boroughs of Liverpool, Knowsley, St Helens and Sefton were included in Merseyside. In Greater Manchester the successor
boroughs were Bury, Bolton, Manchester, Oldham (part), Rochdale, Salford, Tameside (part), Trafford (part) and Wigan.
Warrington and Widnes, south of the new Merseyside/Greater Manchester border were added to the new non-metropolitan
county of Cheshire. The urban districts of Barnoldswick and Earby, Bowland Rural District and the parishes of Bracewell
and Brogden and Salterforth from Skipton Rural District in the West Riding of Yorkshire became part of the new Lancashire.
One parish, Simonswood, was transferred from the borough of Knowsley in Merseyside to the district of West Lancashire
in 1994. In 1998 Blackpool and Blackburn with Darwen became independent unitary authorities. The Duchy of Lancaster
is one of two royal duchies in England. It has landholdings throughout the region and elsewhere, operating as a property
company, but also exercising the right of the Crown in the County Palatine of Lancaster. While the administrative
boundaries changed in the 1970s, the county palatine boundaries remain the same as the historic boundaries. As a
result, the High Sheriffs for Lancashire, Greater Manchester and Merseyside are appointed "within the Duchy and County
Palatine of Lancaster". Lancashire has a mostly comprehensive system with four state grammar schools. Not including
sixth form colleges, there are 77 state schools (not including Burnley's new schools) and 24 independent schools.
The Clitheroe area has secondary modern schools. Sixth form provision is limited at most schools in most districts,
with only Fylde and Lancaster districts having mostly sixth forms at schools. The rest depend on FE colleges and
sixth form colleges, where they exist. South Ribble has the largest school population and Fylde the smallest (only
three schools). Burnley's schools were essentially demolished and rebuilt in a wholesale reorganisation in 2006. There are many Church of England and Catholic faith schools in Lancashire. Lancashire has produced well-known teams in Super League, such as St Helens, Wigan and Warrington. The county was once the focal point for many of the sport's
professional competitions including the Lancashire League competition which ran from 1895 to 1970, and the Lancashire
County Cup which was abandoned in 1993. Rugby League has also seen a representative fixture between Lancashire and
Yorkshire contested 89 times since its inception in 1895. Currently there are several rugby league teams that are
based within the ceremonial county which include Blackpool Panthers, East Lancashire Lions, Blackpool Sea Eagles,
Bamber Bridge, Leyland Warriors, Chorley Panthers, Blackpool Stanley, Blackpool Scorpions and Adlington Rangers.
Lancashire had a lively culture of choral and classical music, with very large numbers of local church choirs from
the 17th century, leading to the foundation of local choral societies from the mid-18th century, often particularly
focused on performances of the music of Handel and his contemporaries. It also played a major part in the development
of brass bands which emerged in the county, particularly in the textile and coalfield areas, in the 19th century.
The first open competition for brass bands was held at Manchester in 1853, and continued annually until the 1980s.
The vibrant brass band culture of the area made an important contribution to the foundation and staffing of the Hallé
Orchestra from 1857, the oldest extant professional orchestra in the United Kingdom. The same local musical tradition
produced eminent figures such as Sir William Walton (1902–88), son of an Oldham choirmaster and music teacher, Sir
Thomas Beecham (1879–1961), born in St. Helens, who began his career by conducting local orchestras and Alan Rawsthorne
(1905–71) born in Haslingden. The conductor David Atherton, co-founder of the London Sinfonietta, was born in Blackpool
in 1944. Lancashire also produced more populist figures, such as early musical theatre composer Leslie Stuart (1863–1928),
born in Southport, who began his musical career as organist of Salford Cathedral. Lancashire emerged as a major commercial
and industrial region during the Industrial Revolution. Manchester and Liverpool grew into its largest cities, dominating global trade and contributing to the birth of modern capitalism. The county contained several mill towns and the collieries of the
Lancashire Coalfield. By the 1830s, approximately 85% of all cotton manufactured worldwide was processed in Lancashire.
Accrington, Blackburn, Bolton, Burnley, Bury, Chorley, Colne, Darwen, Nelson, Oldham, Preston, Rochdale and Wigan
were major cotton mill towns during this time. Blackpool was a centre for tourism for the inhabitants of Lancashire's
mill towns, particularly during wakes week. The county was subject to significant boundary reform in 1974 that removed
Liverpool and Manchester and most of their surrounding conurbations to form the metropolitan counties of Merseyside
and Greater Manchester. The detached northern part of Lancashire in the Lake District, including the Furness Peninsula
and Cartmel, was merged with Cumberland and Westmorland to form Cumbria. Lancashire lost 709 square miles of land
to other counties, about two fifths of its original area, although it did gain some land from the West Riding of
Yorkshire. Today the county borders Cumbria to the north, Greater Manchester and Merseyside to the south and North
and West Yorkshire to the east; with a coastline on the Irish Sea to the west. The county palatine boundaries remain
the same, with the Duke of Lancaster exercising sovereignty rights, including the appointment
of lords lieutenant in Greater Manchester and Merseyside. Lancashire is smaller than its historical extent following
a major reform of local government. In 1889, the administrative county of Lancashire was created, covering the historical
county except for the county boroughs such as Blackburn, Burnley, Barrow-in-Furness, Preston, Wigan, Liverpool and
Manchester. The area served by the Lord-Lieutenant (termed now a ceremonial county) covered the entirety of the administrative
county and the county boroughs, and was expanded whenever boroughs annexed areas in neighbouring counties such as
Wythenshawe in Manchester south of the River Mersey and historically in Cheshire, and southern Warrington. It did
not cover the western part of Todmorden, where the ancient border between Lancashire and Yorkshire passes through
the middle of the town. During the 20th century, the county became increasingly urbanised, particularly the southern
part. To the existing county boroughs of Barrow-in-Furness, Blackburn, Bolton, Bootle, Burnley, Bury, Liverpool,
Manchester, Oldham, Preston, Rochdale, Salford, St Helens and Wigan were added Blackpool (1904), Southport (1905),
and Warrington (1900). The county boroughs also had many boundary extensions. The borders around the Manchester area
were particularly complicated, with narrow protrusions of the administrative county between the county boroughs –
Lees urban district formed a detached part of the administrative county, between Oldham county borough and the West
Riding of Yorkshire. To the east of the county are upland areas leading to the Pennines. North of the Ribble is Beacon
Fell Country Park and the Forest of Bowland, another AONB. Much of the lowland in this area is devoted to dairy farming
and cheesemaking, whereas the higher ground is more suitable for sheep, and the highest ground is uncultivated moorland.
The valleys of the River Ribble and its tributary the Calder form a large gap to the west of the Pennines, overlooked
by Pendle Hill. Most of the larger Lancashire towns are in these valleys. South of the Ribble are the West Pennine
Moors and the Forest of Rossendale where former cotton mill towns are in deep valleys. The Lancashire Coalfield,
largely in modern-day Greater Manchester, extended into Merseyside and to Ormskirk, Chorley, Burnley and Colne in
Lancashire. The Duchy administers bona vacantia within the County Palatine, receiving the property of persons who
die intestate and where the legal ownership cannot be ascertained. There is no separate Duke of Lancaster; the title merged into the Crown many centuries ago, but the Duchy is administered by the Queen in Right of the Duchy of Lancaster. A separate court system for the county palatine was abolished by the Courts Act 1971. A particular form of The Loyal
Toast, 'The Queen, Duke of Lancaster' is in regular use in the county palatine. Lancaster serves as the county town
of the county palatine. The Lancashire economy relies strongly on the M6 motorway which runs from north to south,
past Lancaster and Preston. The M55 connects Preston to Blackpool and is 11.5 miles (18.3 km) long. The M65 motorway
from Colne connects Burnley, Accrington and Blackburn to Preston. The M61 from Preston via Chorley, and the M66 starting 500 metres (0.3 mi) inside the county boundary near Edenfield, provide links between Lancashire and Manchester and
the trans-Pennine M62. The M58 crosses the southernmost part of the county from the M6 near Wigan to Liverpool via
Skelmersdale. The major settlements in the ceremonial county are concentrated on the Fylde coast (the Blackpool Urban
Area), and a belt of towns running west-east along the M65: Preston, Blackburn, Accrington, Burnley, Nelson and Colne.
South of Preston are the towns of Leyland and Chorley; the three formed part of the Central Lancashire New Town designated
in 1970. The north of the county is predominantly rural and sparsely populated, except for the towns of Lancaster
and Morecambe which form a large conurbation of almost 100,000 people. Lancashire is home to a significant Asian
population, numbering over 70,000, or 6% of the county's population, concentrated largely in the former cotton
mill towns in the south east. Liverpool produced a number of nationally and internationally successful popular singers
in the 1950s, including traditional pop stars Frankie Vaughan and Lita Roza, and one of the most successful British
rock and roll stars in Billy Fury. Many Lancashire towns had vibrant skiffle scenes in the late 1950s, out of which
by the early 1960s a flourishing culture of beat groups began to emerge, particularly around Liverpool and Manchester.
It has been estimated that there were around 350 bands active in and around Liverpool in this era, often playing
ballrooms, concert halls and clubs, among them the Beatles. After their national success from 1962, a number of Liverpool
performers were able to follow them into the charts, including Gerry & the Pacemakers, the Searchers and Cilla Black.
The first act to break through in the UK who were not from Liverpool, or managed by Brian Epstein, were Freddie and
the Dreamers, who were based in Manchester, as were Herman's Hermits and the Hollies. Led by the Beatles, beat groups
from the region spearheaded the British Invasion of the US, which made a major contribution to the development of
rock music. After the decline of beat groups in the late 1960s the centre of rock culture shifted to London and there
were relatively few local bands who achieved national prominence until the growth of a disco funk scene and the punk
rock revolution in the mid and late 1970s. Lancashire has a long and highly productive tradition of music making.
In the early modern era the county shared in the national tradition of balladry, including perhaps the finest border
ballad, "The Ballad of Chevy Chase", thought to have been composed by the Lancashire-born minstrel Richard Sheale.
The county was also a common location for folk songs, including "The Lancashire Miller", "Warrington Ale" and "The
soldier's farewell to Manchester", while Liverpool, as a major seaport, was the subject of many sea shanties, including
"The Leaving of Liverpool" and "Maggie May", beside several local Wassailing songs. In the Industrial Revolution
changing social and economic patterns helped create new traditions and styles of folk song, often linked to migration
and patterns of work. These included processional dances, often associated with rushbearing or the Wakes Week festivities,
and types of step dance, most famously clog dancing. The county was established in 1182, later than many other counties.
During Roman times the area was part of the Brigantes tribal area in the military zone of Roman Britain. The towns
of Manchester, Lancaster, Ribchester, Burrow, Elslack and Castleshaw grew around Roman forts. In the centuries after
the Roman withdrawal in AD 410, the northern parts of the county probably formed part of the Brythonic kingdom of Rheged,
a successor entity to the Brigantes tribe. During the mid-8th century, the area was incorporated into the Anglo-Saxon
Kingdom of Northumbria, which became a part of England in the 10th century. By the census of 1971, the population
of Lancashire and its county boroughs had reached 5,129,416, making it the most populous geographic county in the
UK. The administrative county was also the most populous of its type outside London, with a population of 2,280,359
in 1961. On 1 April 1974, under the Local Government Act 1972, the administrative county was abolished, as were the
county boroughs. The urbanised southern part largely became part of two metropolitan counties, Merseyside and Greater
Manchester. The new county of Cumbria incorporated the Furness exclave. A local pioneer of folk song collection in
the first half of the 19th century was Shakespearean scholar James Orchard Halliwell, but it was not until the second
folk revival in the 20th century that the full range of song from the county, including industrial folk song, began
to gain attention. The county produced one of the major figures of the revival in Ewan MacColl, but also a local
champion in Harry Boardman, who from 1965 onwards probably did more than anyone to popularise and record the folk
song of the county. Perhaps the most influential folk artists to emerge from the region in the late 20th century
were Liverpool folk group The Spinners, and from Manchester folk troubadour Roy Harper and musician, comedian and
broadcaster Mike Harding. The region is home to numerous folk clubs, many of them catering to Irish and Scottish
folk music. Regular folk festivals include the Fylde Folk Festival at Fleetwood. The Red Rose of Lancaster is the
county flower found on the county's heraldic badge and flag. The rose was a symbol of the House of Lancaster, immortalised
in the verse "In the battle for England's head/York was white, Lancaster red" (referring to the 15th-century Wars
of the Roses). The traditional Lancashire flag, a red rose on a white field, was never officially registered. When
an attempt was made to register it with the Flag Institute, it was found that the same design had been registered by Montrose
in Scotland several hundred years earlier with the Lyon Office. Lancashire's official flag is instead registered as a red
rose on a gold field. More recent Lancashire-born composers include Hugh Wood (b. 1932, Parbold), Sir Peter Maxwell
Davies (b. 1934, Salford), Sir Harrison Birtwistle (b. 1934, Accrington), Gordon Crosse (b. 1937, Bury), John McCabe (1939–2015,
Huyton), Roger Smalley (1943–2015, Swinton), Nigel Osborne (b. 1948, Manchester), Steve Martland (1954–2013, Liverpool),
Simon Holt (b. 1958, Bolton) and Philip Cashian (b. 1963, Manchester). The Royal Manchester College of Music was founded
in 1893 to provide a northern counterpart to the London musical colleges. It merged with the Northern College of
Music (formed in 1920) to form the Royal Northern College of Music in 1972.
The Early Triassic lasted from about 250 million to 247 million years ago and was dominated by deserts, as Pangaea had
not yet broken up and its vast interior was therefore arid. The Earth had just witnessed a massive die-off in which about 95% of all species
went extinct. Among the most common animals were Lystrosaurus, labyrinthodonts, and Euparkeria, along with many other
creatures that managed to survive the Great Dying. Temnospondyli evolved during this time and would be the dominant
predator for much of the Triassic. The climatic changes of the late Jurassic and Cretaceous provided for further
adaptive radiation. The Jurassic was the height of archosaur diversity, and the first birds and eutherian mammals
also appeared. Angiosperms radiated sometime in the early Cretaceous, first in the tropics, but the even temperature
gradient allowed them to spread toward the poles throughout the period. By the end of the Cretaceous, angiosperms
dominated tree floras in many areas, although some evidence suggests that biomass was still dominated by cycad and
ferns until after the Cretaceous–Paleogene extinction. The lower (Triassic) boundary is set by the Permian–Triassic
extinction event, during which approximately 90% to 96% of marine species and 70% of terrestrial vertebrates became
extinct. It is also known as the "Great Dying" because it is considered the largest mass extinction in the Earth's
history. The upper (Cretaceous) boundary is set at the Cretaceous–Tertiary (KT) extinction event (now more accurately
called the Cretaceous–Paleogene (or K–Pg) extinction event), which may have been caused by the impactor that created
Chicxulub Crater on the Yucatán Peninsula. Towards the Late Cretaceous large volcanic eruptions are also believed
to have contributed to the Cretaceous–Paleogene extinction event. Approximately 50% of all genera became extinct,
including all of the non-avian dinosaurs. The Late Cretaceous spans from 100 million to 65 million years ago. The
Late Cretaceous featured a cooling trend that would continue on in the Cenozoic period. Eventually, tropics were
restricted to the equator and areas beyond the tropic lines featured extreme seasonal changes in weather. Dinosaurs
still thrived as new species such as Tyrannosaurus, Ankylosaurus, Triceratops and Hadrosaurs dominated the food web.
In the oceans, Mosasaurs ruled the seas to fill the role of the Ichthyosaurs, and huge plesiosaurs, such as Elasmosaurus,
evolved. Also, the first flowering plants evolved. At the end of the Cretaceous, the Deccan traps and other volcanic
eruptions were poisoning the atmosphere. As this was continuing, it is thought that a large meteor smashed into earth,
creating the Chicxulub Crater in an event known as the K-T Extinction, the fifth and most recent mass extinction
event, in which some 75% of species on Earth went extinct, including all non-avian dinosaurs; virtually every animal heavier
than about 10 kilograms died out. The age of the dinosaurs was over. The climate of the Cretaceous is less certain and more widely disputed.
Higher levels of carbon dioxide in the atmosphere are thought to have caused the world temperature gradient from
north to south to become almost flat: temperatures were about the same across the planet. Average temperatures were
also higher than today by about 10°C. In fact, by the middle Cretaceous, equatorial ocean waters (perhaps as warm
as 20 °C in the deep ocean) may have been too warm for sea life, and land areas
near the equator may have been deserts despite their proximity to water. The circulation of oxygen to the deep ocean
may also have been disrupted. For this reason, large volumes of organic matter that was unable
to decompose accumulated, eventually being deposited as "black shale". The Late Triassic spans from 237 million to
200 million years ago. Following the bloom of the Middle Triassic, the Late Triassic featured frequent heat spells,
as well as moderate precipitation (10–20 inches per year). The warming led to a boom of reptilian evolution
on land as the first true dinosaurs evolved, along with the pterosaurs. All this climatic change, however, resulted in
a large die-out known as the Triassic-Jurassic extinction event, in which all archosaurs (excluding ancient crocodiles),
most synapsids, and almost all large amphibians went extinct, as well as 34% of marine life in the fourth mass extinction
event of the world. The cause is debatable. The Early Cretaceous spans from 145 million to 100 million years ago.
The Early Cretaceous saw the expansion of seaways, and as a result, the decline and extinction of sauropods (except
in South America). Many coastal shallows were created, and that caused Ichthyosaurs to die out. Mosasaurs evolved
to replace them at the top of the marine food chain. Some island-hopping dinosaurs, like Eustreptospondylus, evolved to cope with
the coastal shallows and small islands of ancient Europe. Other dinosaurs rose up to fill the empty space that the
Jurassic-Cretaceous extinction left behind, such as Carcharodontosaurus and Spinosaurus. Among the most successful was
Iguanodon, which spread to every continent. Seasons returned and the poles became seasonally colder,
but dinosaurs such as Leaellynasaura still inhabited the polar forests year-round, and
others, like Muttaburrasaurus, migrated there during summer. Since it was too cold for crocodiles, the region was the
last stronghold for large amphibians, like Koolasuchus. Pterosaurs got larger as species like Tapejara and Ornithocheirus
evolved. Sea levels began to rise during the Jurassic, which was probably caused by an increase in seafloor spreading.
The formation of new crust beneath the surface displaced ocean waters by as much as 200 m (656 ft) more than today,
which flooded coastal areas. Furthermore, Pangaea began to rift into smaller divisions, bringing more land area in
contact with the ocean by forming the Tethys Sea. Temperatures continued to increase and began to stabilize. Humidity
also increased with the proximity of water, and deserts retreated. The Early Jurassic spans from 200 million years
to 175 million years ago. The climate was much more humid than the Triassic, and as a result, the world was very
tropical. In the oceans, plesiosaurs, ichthyosaurs and ammonites filled the waters as the dominant groups of the seas. On
land, dinosaurs and other reptiles established themselves as the dominant animals, with species such as Dilophosaurus
at the top. The first true crocodiles evolved, pushing the large amphibians to near extinction. All in all, reptiles
rose to rule the world. Meanwhile, the first true mammals evolved, but remained relatively small. Compared to
the vigorous convergent plate mountain-building of the late Paleozoic, Mesozoic tectonic deformation was comparatively
mild. The sole major Mesozoic orogeny occurred in what is now the Arctic, creating the Innuitian orogeny, the Brooks
Range, the Verkhoyansk and Cherskiy Ranges in Siberia, and the Khingan Mountains in Manchuria. This orogeny was related
to the opening of the Arctic Ocean and subduction of the North China and Siberian cratons under the Pacific Ocean.
Nevertheless, the era featured the dramatic rifting of the supercontinent Pangaea. Pangaea gradually split into a
northern continent, Laurasia, and a southern continent, Gondwana. This created the passive continental margin that
characterizes most of the Atlantic coastline (such as along the U.S. East Coast) today. Recent research indicates
that the specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety
of niches, took much longer to re-establish: recovery did not begin until the start of the mid-Triassic, 4 to 6 million
years after the extinction, and was not complete until 30 million years after the Permian–Triassic extinction event. Animal
life was then dominated by various archosaurian reptiles: dinosaurs, pterosaurs, and aquatic reptiles such as ichthyosaurs,
plesiosaurs, and mosasaurs. The era began in the wake of the Permian–Triassic extinction event, the largest well-documented
mass extinction in Earth's history, and ended with the Cretaceous–Paleogene extinction event, another mass extinction
which is known for having killed off non-avian dinosaurs, as well as other plant and animal species. The Mesozoic
was a time of significant tectonic, climate and evolutionary activity. The era witnessed the gradual rifting of the
supercontinent Pangaea into separate landmasses that would eventually move into their current positions. The climate
of the Mesozoic was varied, alternating between warming and cooling periods. Overall, however, the Earth was hotter
than it is today. Non-avian dinosaurs appeared in the Late Triassic and became the dominant terrestrial vertebrates
early in the Jurassic, occupying this position for about 135 million years until their demise at the end of the Cretaceous.
Birds first appeared in the Jurassic, having evolved from a branch of theropod dinosaurs. The first mammals also
appeared during the Mesozoic, but would remain small—less than 15 kg (33 lb)—until the Cenozoic. The Middle Triassic
spans from 247 million to 237 million years ago. The Middle Triassic featured the beginnings of the breakup of Pangaea,
and the beginning of the Tethys Sea. The ecosystem had recovered from the devastation that was the Great Dying. Phytoplankton,
coral, and crustaceans all had recovered, and the reptiles began to get bigger and bigger. New aquatic reptiles evolved
such as ichthyosaurs and nothosaurs. Meanwhile, on land, pine forests flourished, bringing along mosquitoes and fruit
flies. The first ancient crocodilians evolved, which sparked competition with the large amphibians that had since
ruled the freshwater world. The Late Jurassic spans from 163 million to 145 million years ago. The Late Jurassic
featured a massive extinction of sauropods and Ichthyosaurs due to the separation of Pangaea into Laurasia and Gondwana
in an extinction known as the Jurassic-Cretaceous extinction. Sea levels rose, destroying fern prairies and creating
shallows in its wake. Ichthyosaurs went extinct whereas sauropods, as a whole, did not die out in the Jurassic; in
fact, some species, like Titanosaurus, survived until the K-T extinction. The increase in sea levels opened up the
Atlantic sea way which would continue to get larger over time. The divided world would give opportunity for the diversification
of new dinosaurs. The Triassic was generally dry, a trend that began in the late Carboniferous, and highly seasonal,
especially in the interior of Pangaea. Low sea levels may have also exacerbated temperature extremes. With its high
specific heat capacity, water acts as a temperature-stabilizing heat reservoir, and land areas near large bodies
of water—especially the oceans—experience less variation in temperature. Because much of the land that constituted
Pangaea was distant from the oceans, temperatures fluctuated greatly, and the interior of Pangaea probably included
expansive areas of desert. Abundant red beds and evaporites such as halite support these conclusions, but evidence
exists that the generally dry climate of the Triassic was punctuated by episodes of increased rainfall. The most important
humid episodes were the Carnian Pluvial Event and one in the Rhaetian, a few million years before the Triassic–Jurassic
extinction event. The dominant land plant species of the time were gymnosperms, which are vascular, cone-bearing,
non-flowering plants such as conifers that produce seeds without a coating. This is opposed to the earth's current
flora, in which the dominant land plants in terms of number of species are angiosperms. One particular plant genus,
Ginkgo, is thought to have evolved at this time and is represented today by a single species, Ginkgo biloba. As well,
the extant genus Sequoia is believed to have evolved in the Mesozoic.
It was only in the 1980s that digital telephony transmission networks became possible, such as with ISDN networks, assuring
a minimum bit rate (usually 128 kilobits/s) for compressed video and audio transmission. During this time, there
was also research into other forms of digital video and audio communication. Many of these technologies, such as
the Media space, are not as widely used today as videoconferencing but were still an important area of research.
The first dedicated systems started to appear in the market as ISDN networks were expanding throughout the world.
One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an
Initial Public Offering in November, 1984. The MC controls the conferencing while it is active on the signaling plane,
which is simply where the system manages conferencing creation, endpoint signaling and in-conferencing controls.
This component negotiates parameters with every endpoint in the network and controls conferencing resources. While
the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from
each endpoint. The MP generates output streams from each endpoint and redirects the information to other endpoints
in the conference. High speed Internet connectivity has become more widely available at a reasonable cost and the
cost of video capture and display technology has decreased. Consequently, personal videoconferencing systems based
on a webcam, personal computer system, software compression and broadband Internet connectivity have become affordable
to the general public. Also, the hardware used for this technology has continued to improve in quality, and prices
have dropped dramatically. The availability of freeware (often as part of chat programs) has made software based
videoconferencing accessible to many. Videophone calls (also: videocalls, video chat as well as Skype and Skyping
in verb form), differ from videoconferencing in that they are designed to serve individuals, not groups. However, that distinction
has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software
clients that can allow for multiple parties on a call. In general everyday usage the term videoconferencing is now
frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing
are also now commonly referred to as a video link. Technological developments by videoconferencing developers in
the 2010s have extended the capabilities of video conferencing systems beyond the boardroom for use with hand-held
mobile devices that combine the use of video, audio and on-screen drawing capabilities broadcasting in real-time
over secure networks, independent of location. Mobile collaboration systems now allow multiple people in previously
unreachable locations, such as workers on an off-shore oil rig, to view and discuss issues with colleagues
thousands of miles away. Traditional videoconferencing system manufacturers have begun providing mobile applications
as well, such as those that allow for live and still image streaming. Videoconferencing provides students with the
opportunity to learn by participating in two-way communication forums. Furthermore, teachers and lecturers worldwide
can be brought to remote or otherwise isolated educational facilities. Students from diverse communities and backgrounds
can come together to learn about one another, although language barriers persist. Such students
are able to explore, communicate, analyze and share information and ideas with one another. Through videoconferencing,
students can visit other parts of the world to speak with their peers, and visit museums and educational facilities.
Such virtual field trips can provide enriched learning opportunities to students, especially those in geographically
isolated locations, and to the economically disadvantaged. Small schools can use these technologies to pool resources
and provide courses, such as in foreign languages, which could not otherwise be offered. One of the first demonstrations
of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's
videophone (trademarked as the "Picturephone") was introduced to the public at the 1964 New York World's Fair: two
deaf users were able to communicate freely with each other between the fair and another city. Various universities
and other organizations, including British Telecom's Martlesham facility, have also conducted extensive research
on signing via videotelephony. The use of sign language via videotelephony was hampered for many years due to the
difficulty of its use over slow analogue copper phone lines, coupled with the high cost of better quality ISDN (data)
phone lines. Those factors largely disappeared with the introduction of more efficient video codecs and the advent
of lower cost high-speed ISDN data and IP (Internet) services in the 1990s. In the increasingly globalized film industry,
videoconferencing has become useful as a method by which creative talent in many different locations can collaborate
closely on the complex details of film production. For example, for the 2013 award-winning animated film Frozen,
Burbank-based Walt Disney Animation Studios hired the New York City-based husband-and-wife songwriting team of Robert
Lopez and Kristen Anderson-Lopez to write the songs, which required two-hour-long transcontinental videoconferences
nearly every weekday for about 14 months. A videoconference system generally costs more than a videophone and
offers greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations
to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use
of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized
multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional
definitions by allowing multiple party videoconferencing via web-based applications. This technique was very expensive,
though, and could not be used for applications such as telemedicine, distance education, and business meetings. Attempts
at using normal telephony networks to transmit slow-scan video, such as the first systems developed by AT&T Corporation,
first researched in the 1950s, failed mostly due to the poor picture quality and the lack of efficient video compression
techniques. The greater 1 MHz bandwidth and 6 Mbit/s bit rate of the Picturephone in the 1970s also did not achieve
commercial success, mostly due to its high cost, but also due to a lack of network effect: with only a few hundred
Picturephones in the world, users had extremely few contacts they could actually call, and interoperability with
other videophone systems would not exist for decades. While videoconferencing technology was initially used primarily
within internal corporate communication networks, one of the first community service usages of the technology started
in 1992 through a partnership between PictureTel and IBM, which at the time were promoting a jointly
developed desktop based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified
Information and Assistance Network) grew to utilize a variety of videoconferencing platforms to create a multi-state
cooperative public service and distance education network consisting of several hundred schools, neighborhood centers,
libraries, science museums, zoos and parks, public assistance centers, and other community oriented organizations.
Simultaneous videoconferencing among three or more remote points is possible by means of a Multipoint Control Unit
(MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call).
All parties call the MCU, or the MCU can call each participating party in sequence. There
are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software, and others which
are a combination of hardware and software. An MCU is characterised according to the number of simultaneous calls
it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence,
in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be
embedded into dedicated videoconferencing units. Videoconferencing can enable individuals in distant locations to
participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction
with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially
for businesses with widespread offices. The technology is also used for telecommuting, in which employees work from
home. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the
respondents with access to video conferencing used it “all of the time” or “frequently”. Finally, in the 1990s, Internet
Protocol-based videoconferencing became possible, and more efficient video compression technologies were developed,
permitting desktop, or personal computer (PC)-based videoconferencing. In 1992 CU-SeeMe was developed at Cornell
by Tim Dorcey et al. In 1995 the first public videoconference between North America and Africa took place, linking
a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the Winter Olympics opening ceremony
in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven's Ninth Symphony simultaneously across five
continents in near-real time. The core technology used in a videoconferencing system is digital compression of audio
and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder).
Compression ratios of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled
packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). The use of audio
modems in the transmission line allows for the use of POTS, or the Plain Old Telephone Service, in some low-speed applications,
such as videotelephony, because they convert the digital pulses to/from analog waves in the audio spectrum range.
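The effect of the compression ratio mentioned above can be sketched with some back-of-the-envelope arithmetic. The resolution, colour depth, and frame rate below are illustrative assumptions (CIF was a common early conferencing format), not figures from the text:

```python
# Estimate why heavy compression is essential for videoconferencing.
# All input figures are assumptions for illustration.
width, height = 352, 288          # CIF resolution (assumed)
bits_per_pixel = 24               # 8 bits per RGB channel (assumed)
frames_per_second = 30

# Uncompressed bitrate: far beyond any dial-up or ISDN line.
raw_bitrate = width * height * bits_per_pixel * frames_per_second
print(f"raw: {raw_bitrate / 1e6:.1f} Mbit/s")              # ~73.0 Mbit/s

# At the 1:500 ratio cited above, the stream shrinks to roughly
# the capacity of a 128 kbit/s ISDN connection.
compressed_bitrate = raw_bitrate / 500
print(f"compressed: {compressed_bitrate / 1e3:.0f} kbit/s")  # ~146 kbit/s
```

This is why codec efficiency, rather than raw network speed, was the gating factor for early desktop videoconferencing.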
Typical use of the various technologies described above include calling or conferencing on a one-on-one, one-to-many
or many-to-many basis for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic
and rehabilitative use or services. New services utilizing videocalling and videoconferencing, such as teachers and
psychologists conducting online sessions, personal videocalls to inmates incarcerated in penitentiaries, and videoconferencing
to resolve airline engineering issues at maintenance facilities, are being created or evolving on an ongoing basis.
Videoconferencing is a highly useful technology for real-time telemedicine and telenursing applications, such as
diagnosis, consultation, and the transmission of medical images. With videoconferencing, patients may contact nurses
and physicians in emergency or routine situations; physicians and other paramedical professionals can discuss cases
across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making
more efficient use of health care money. For example, a rural medical center in Ohio, United States, used videoconferencing
to successfully cut the number of transfers of sick infants to a hospital 70 miles (110 km) away. This had previously
cost nearly $10,000 per transfer. VRS services have become well developed nationally in Sweden since 1997 and also
in the United States since the first decade of the 2000s. With the exception of Sweden, VRS has been provided in
Europe for only a few years since the mid-2000s, and as of 2010 has not been made available in many European Union
countries, with most European countries still lacking the legislation or the financing for large-scale VRS services,
or to provide the necessary telecommunication equipment to deaf users. Germany and the Nordic countries are among
the other leaders in Europe, while the United States is another world leader in the provisioning of VRS services.
In May 2005, the first high definition video conferencing systems, produced by LifeSize Communications, were displayed
at the Interop trade show in Las Vegas, Nevada, capable of providing video at 30 frames per second at a 1280 by 720
display resolution. Polycom introduced its first high definition video conferencing system to the market in 2006.
As of the 2010s, high definition resolution for videoconferencing became a popular feature, with most major suppliers
in the videoconferencing market offering it. Some systems are capable of multipoint conferencing with no MCU, stand-alone,
embedded or otherwise. These use a standards-based H.323 technique known as "decentralized multipoint", where each
station in a multipoint call exchanges video and audio directly with the other stations with no central "manager"
or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality
because they don't have to be relayed through a central point. Also, users can make ad-hoc multipoint calls without
any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of
some increased network bandwidth, because every station must transmit to every other station directly. The U.S. Social
Security Administration (SSA), which oversees the world's largest administrative judicial system under its Office
of Disability Adjudication and Review (ODAR), has made extensive use of videoconferencing to conduct hearings at
remote locations. In Fiscal Year (FY) 2009, the SSA conducted 86,320 videoconferenced
hearings, a 55% increase over FY 2008. In August 2010, the SSA opened its fifth and largest videoconferencing-only
National Hearing Center (NHC), in St. Louis, Missouri. This continues the SSA's effort to use video hearings as a
means to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New
Mexico, Baltimore, Maryland, Falls Church, Virginia, and Chicago, Illinois.
To unambiguously specify the date, dual dating or Old Style (O.S.) and New Style (N.S.) are sometimes used with dates. Dual
dating uses two consecutive years because of differences in the starting date of the year, or includes both the Julian
and Gregorian dates. Old Style and New Style (N.S.) indicate either whether the start of the Julian year has been
adjusted to start on 1 January (N.S.) even though documents written at the time use a different start of year (O.S.),
or whether a date conforms to the Julian calendar (O.S.) rather than the Gregorian (N.S.). The Gregorian calendar
was a reform of the Julian calendar instituted in 1582 by Pope Gregory XIII, after whom the calendar was named, by
papal bull Inter gravissimas dated 24 February 1582. The motivation for the adjustment was to bring the date for
the celebration of Easter to the time of year in which it was celebrated when it was introduced by the early Church.
Although a recommendation of the First Council of Nicaea in 325 specified that all Christians should celebrate Easter
on the same day, it took almost five centuries before virtually all Christians achieved that objective by adopting
the rules of the Church of Alexandria (see Easter for the issues which arose). Philip II of Spain decreed the change
from the Julian to the Gregorian calendar, which affected much of Roman Catholic Europe, as Philip was at the time
ruler over Spain and Portugal as well as much of Italy. In these territories, as well as in the Polish–Lithuanian
Commonwealth (ruled by Anna Jagiellon) and in the Papal States, the new calendar was implemented on the date specified
by the bull, with Julian Thursday, 4 October 1582, being followed by Gregorian Friday, 15 October 1582. The Spanish
and Portuguese colonies followed somewhat later de facto because of delay in communication. During the period between
1582, when the first countries adopted the Gregorian calendar, and 1923, when the last European country adopted it,
it was often necessary to indicate the date of some event in both the Julian calendar and in the Gregorian calendar,
for example, "10/21 February 1750/51", where the dual year accounts for some countries already beginning their numbered
year on 1 January while others were still using some other date. Even before 1582, the year sometimes had to be double
dated because of the different beginnings of the year in various countries. Woolley, writing in his biography of
John Dee (1527–1608/9), notes that immediately after 1582 English letter writers "customarily" used "two dates" on
their letters, one OS and one NS. The calendar was a refinement to the Julian calendar amounting to a 0.002% correction
in the length of the year. The motivation for the reform was to bring the date for the celebration of Easter to the
time of the year in which it was celebrated when it was introduced by the early Church. Because the celebration of
Easter was tied to the spring equinox, the Roman Catholic Church considered the steady drift in the date of Easter
caused by the year being slightly too long to be undesirable. The reform was adopted initially by the Catholic countries
of Europe. Protestants and Eastern Orthodox countries continued to use the traditional Julian calendar and adopted
the Gregorian reform after a time, for the sake of convenience in international trade. The last European country
to adopt the reform was Greece, in 1923. The Gregorian calendar is a solar calendar. A regular Gregorian year consists
of 365 days, but as in the Julian calendar, in a leap year, a leap day is added to February. In the Julian calendar
a leap year occurs every 4 years, but the Gregorian calendar omits 3 leap days every 400 years. In the Julian calendar,
this leap day was inserted by doubling 24 February, and the Gregorian reform did not change the date of the leap
day. In the modern period, it has become customary to number the days from the beginning of the month, and February
29 is often considered the leap day. Some churches, notably the Roman Catholic Church, delay February festivals
after the 23rd by one day in leap years. Easter was the Sunday after the 15th day of the Easter moon, whose 14th day was
allowed to precede the equinox. Where the two systems produced different dates there was generally a compromise so
that both churches were able to celebrate on the same day. By the 10th century all churches (except some on the eastern
border of the Byzantine Empire) had adopted the Alexandrian Easter, which still placed the vernal equinox on 21 March,
although Bede had already noted its drift in 725—it had drifted even further by the 16th century. Lilius's proposals
had two components. Firstly, he proposed a correction to the length of the year. The mean tropical year is 365.24219
days long. As the average length of a Julian year is 365.25 days, the Julian year is almost 11 minutes longer than
the mean tropical year. The discrepancy results in a drift of about three days every 400 years. Lilius's proposal
resulted in an average year of 365.2425 days (see Accuracy). At the time of Gregory's reform there had already been
a drift of 10 days since the Council of Nicaea, resulting in the vernal equinox falling on 10 or 11 March instead
of the ecclesiastically fixed date of 21 March, and if unreformed it would drift further. Lilius proposed that the
10-day drift should be corrected by deleting the Julian leap day on each of its ten occurrences over a period of
forty years, thereby providing for a gradual return of the equinox to 21 March. Most Western European countries changed
the start of the year to 1 January before they adopted the Gregorian calendar. For example, Scotland changed the
start of the Scottish New Year to 1 January in 1600 (this means that 1599 was a short year). England, Ireland and
the British colonies changed the start of the year to 1 January in 1752 (so 1751 was a short year with only 282 days)
though in England the start of the tax year remained at 25 March (O.S.), 5 April (N.S.) till 1800, when it moved
to 6 April. Later in 1752 in September the Gregorian calendar was introduced throughout Britain and the British colonies
(see the section Adoption). These two reforms were implemented by the Calendar (New Style) Act 1750. Extending the
Gregorian calendar backwards to dates preceding its official introduction produces a proleptic calendar, which should
be used with some caution. For ordinary purposes, the dates of events occurring prior to 15 October 1582 are generally
shown as they appeared in the Julian calendar, with the year starting on 1 January, and no conversion to their Gregorian
equivalents. For example, the Battle of Agincourt is universally considered to have been fought on 25 October 1415
which is Saint Crispin's Day. A language-independent alternative used in many countries is to hold up one's two fists
with the index knuckle of the left hand against the index knuckle of the right hand. Then, starting with January
from the little knuckle of the left hand, count knuckle, space, knuckle, space through the months. A knuckle represents
a month of 31 days, and a space represents a short month (a 28- or 29-day February or any 30-day month). The junction
between the hands is not counted, so the two index knuckles represent July and August. The Gregorian reform contained
two parts: a reform of the Julian calendar as used prior to Pope Gregory XIII's time and a reform of the lunar cycle
used by the Church, with the Julian calendar, to calculate the date of Easter. The reform was a modification of a
proposal made by Aloysius Lilius. His proposal included reducing the number of leap years in four centuries from
100 to 97, by making 3 out of 4 centurial years common instead of leap years. Lilius also produced an original and
practical scheme for adjusting the epacts of the moon when calculating the annual date of Easter, solving a long-standing
obstacle to calendar reform. Prior to 1917, Turkey used the lunar Islamic calendar with the Hegira era for general
purposes and the Julian calendar for fiscal purposes. The start of the fiscal year was eventually fixed at 1 March
and the year number was roughly equivalent to the Hegira year (see Rumi calendar). As the solar year is longer than
the lunar year this originally entailed the use of "escape years" every so often when the number of the fiscal year
would jump. From 1 March 1917 the fiscal year became Gregorian, rather than Julian. On 1 January 1926 the use of
the Gregorian calendar was extended to include use for general purposes and the number of the year became the same
as in other countries. Up to February 28 in the calendar you are converting from, add one day less, or subtract one
day more, than the calculated value. Remember to give February the appropriate number of days for the calendar you
are converting into. When subtracting days to move from Julian to Gregorian, be careful, when calculating
the Gregorian equivalent of February 29 (Julian), to remember that February 29 is discounted. Thus if the calculated
value is −4, the Gregorian equivalent of this date is February 24. In addition to the change in the mean length of
the calendar year from 365.25 days (365 days 6 hours) to 365.2425 days (365 days 5 hours 49 minutes 12 seconds),
a reduction of 10 minutes 48 seconds per year, the Gregorian calendar reform also dealt with the accumulated difference
between these lengths. The canonical Easter tables were devised at the end of the third century, when the vernal
equinox fell either on 20 March or 21 March depending on the year's position in the leap year cycle. As the rule
was that the full moon preceding Easter was not to precede the equinox the equinox was fixed at 21 March for computational
purposes and the earliest date for Easter was fixed at 22 March. The Gregorian calendar reproduced these conditions
by removing ten days. The Council of Trent approved a plan in 1563 for correcting the calendrical errors, requiring
that the date of the vernal equinox be restored to that which it held at the time of the First Council of Nicaea
in 325 and that an alteration to the calendar be designed to prevent future drift. This would allow for a more consistent
and accurate scheduling of the feast of Easter. In 1577, a Compendium was sent to expert mathematicians outside the
reform commission for comments. Some of these experts, including Giambattista Benedetti and Giuseppe Moleto, believed
Easter should be computed from the true motions of the sun and moon, rather than using a tabular method, but these
recommendations were not adopted. The reform adopted was a modification of a proposal made by the Calabrian doctor
Aloysius Lilius (or Lilio). A month after having decreed the reform, the pope with a brief of 3 April 1582 granted
to Antonio Lilio, the brother of Luigi Lilio, the exclusive right to publish the calendar for a period of ten years.
The Lunario Novo secondo la nuova riforma printed by Vincenzo Accolti, one of the first calendars printed in Rome
after the reform, notes at the bottom that it was signed with papal authorization and by Lilio (Con licentia delli
Superiori... et permissu Ant(onii) Lilij). The papal brief was later revoked, on 20 September 1582, because Antonio
Lilio proved unable to keep up with the demand for copies. The year used in dates during the Roman Republic and the
Roman Empire was the consular year, which began on the day when consuls first entered office—probably 1 May before
222 BC, 15 March from 222 BC and 1 January from 153 BC. The Julian calendar, which began in 45 BC, continued to use
1 January as the first day of the new year. Even though the year used for dates changed, the civil year always displayed
its months in the order January to December from the Roman Republican period until the present. In conjunction with
the system of months there is a system of weeks. A physical or electronic calendar provides conversion from a given
date to the weekday, and shows multiple dates for a given weekday and month. Calculating the day of the week is not
very simple, because of the irregularities in the Gregorian system. When the Gregorian calendar was adopted by each
country, the weekly cycle continued uninterrupted. For example, in the case of the few countries that adopted the
reformed calendar on the date proposed by Gregory XIII for the calendar's adoption, Friday, 15 October 1582, the
preceding date was Thursday, 4 October 1582 (Julian calendar). Because the spring equinox was tied to the date of
Easter, the Roman Catholic Church considered the seasonal drift in the date of Easter undesirable. The Church of
Alexandria celebrated Easter on the Sunday after the 14th day of the moon (computed using the Metonic cycle) that
falls on or after the vernal equinox, which they placed on 21 March. However, the Church of Rome still regarded 25
March as the equinox (until 342) and used a different cycle to compute the day of the moon. In the Alexandrian system,
since the 14th day of the Easter moon could fall at earliest on 21 March its first day could fall no earlier than
8 March and no later than 5 April. This meant that Easter varied between 22 March and 25 April. In Rome, Easter was
not allowed to fall later than 21 April, that being the day of the Parilia or birthday of Rome and a pagan festival. The
first day of the Easter moon could fall no earlier than 5 March and no later than 2 April. Ancient tables provided
the sun's mean longitude. Christopher Clavius, the architect of the Gregorian calendar, noted that the tables agreed
neither on the time when the sun passed through the vernal equinox nor on the length of the mean tropical year. Tycho
Brahe also noticed discrepancies. The Gregorian leap year rule (97 leap years in 400 years) was put forward by Petrus
Pitatus of Verona in 1560. He noted that it is consistent with the tropical year of the Alfonsine tables and with
the mean tropical year of Copernicus (De revolutionibus) and Reinhold (Prutenic tables). The three mean tropical
years in Babylonian sexagesimals as the excess over 365 days (the way they would have been extracted from the tables
of mean longitude) were 14,33,9,57 (Alphonsine), 14,33,11,12 (Copernicus) and 14,33,9,24 (Reinhold). All values are
the same to two places (14:33) and this is also the mean length of the Gregorian year. Thus Pitatus' solution would
have commended itself to the astronomers. "Old Style" (OS) and "New Style" (NS) are sometimes added to dates to identify
which system is used in the British Empire and other countries that did not immediately change. Because the Calendar
Act of 1750 altered the start of the year, and also aligned the British calendar with the Gregorian calendar, there
is some confusion as to what these terms mean. They can indicate that the start of the Julian year has been adjusted
to start on 1 January (NS) even though contemporary documents use a different start of year (OS); or to indicate
that a date conforms to the Julian calendar (OS), formerly in use in many countries, rather than the Gregorian calendar
(NS). The Gregorian calendar improves the approximation made by the Julian calendar by skipping three Julian leap
days in every 400 years, giving an average year of 365.2425 mean solar days long. This approximation has an error
of about one day per 3,300 years with respect to the mean tropical year. However, because of the precession of the
equinoxes, the error with respect to the vernal equinox (which occurs, on average, 365.24237 days apart near 2000)
is 1 day every 7,700 years, assuming a constant time interval between vernal equinoxes, which is not true. By any
criterion, the Gregorian calendar is substantially more accurate than the 1 day in 128 years error of the Julian
calendar (average year 365.25 days).
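The leap-year rules and the resulting average year lengths described above can be sketched in a few lines of Python (the function names are illustrative; the arithmetic follows directly from the rules in the text):

```python
def is_julian_leap(year):
    # Julian rule: a leap day every 4 years.
    return year % 4 == 0

def is_gregorian_leap(year):
    # Gregorian rule: as Julian, but 3 of every 4 centurial years
    # (those not divisible by 400) are common years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Average year length implied by each rule:
julian_avg = 365 + 1 / 4        # 365.25 days
gregorian_avg = 365 + 97 / 400  # 365.2425 days (97 leap years per 400)

# Drift against the mean tropical year of 365.24219 days:
# the Julian year comes out about 11 minutes too long (~3 days per
# 400 years), versus roughly one day in 3,300 years for the Gregorian.
julian_excess_minutes = (julian_avg - 365.24219) * 24 * 60
```

For example, 1900 is a leap year under the Julian rule but a common year under the Gregorian rule, while 2000 is a leap year under both.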
Known during development as Xbox Next, Xenon, Xbox 2, Xbox FS or NextBox, the Xbox 360 was conceived in early 2003. In February
2003, planning for the Xenon software platform began, and was headed by Microsoft's Vice President J Allard. That
month, Microsoft held an event for 400 developers in Bellevue, Washington to recruit support for the system. Also
that month, Peter Moore, former president of Sega of America, joined Microsoft. On August 12, 2003, ATI signed on
to produce the graphic processing unit for the new console, a deal which was publicly announced two days later. Before
the launch of the Xbox 360, several Alpha development kits were spotted using Apple's Power Mac G5 hardware. This
was because the Power Mac G5's PowerPC 970 processor uses the same PowerPC architecture as IBM's Xenon processor,
on which the Xbox 360 would eventually run. The cores of the Xenon processor were developed using a slightly modified version
of the PlayStation 3's Cell Processor PPE architecture. According to David Shippy and Mickie Phipps, the IBM employees
were "hiding" their work from Sony and Toshiba, IBM's partners in developing the Cell Processor. Jeff Minter created
the music visualization program Neon which is included with the Xbox 360. At launch, the Xbox 360 was available in
two configurations: the "Xbox 360" package (unofficially known as the 20 GB Pro or Premium), priced at US$399 or
GB£279.99, and the "Xbox 360 Core", priced at US$299 and GB£209.99. The original shipment of the Xbox 360 version
included a cut-down version of the Media Remote as a promotion. The Elite package was launched later at US$479. The
"Xbox 360 Core" was replaced by the "Xbox 360 Arcade" in October 2007 and a 60 GB version of the Xbox 360 Pro was
released on August 1, 2008. The Pro package was discontinued and marked down to US$249 on August 28, 2009 to be sold
until stock ran out, while the Elite was also marked down in price to US$299. The Xbox 360 launched with 14 games
in North America and 13 in Europe. The console's best-selling game for 2005, Call of Duty 2, sold over a million
copies. Five other games sold over a million copies in the console's first year on the market: Ghost Recon Advanced
Warfighter, The Elder Scrolls IV: Oblivion, Dead or Alive 4, Saints Row, and Gears of War. Gears of War would become
the best-selling game on the console with 3 million copies in 2006, before being surpassed in 2007 by Halo 3 with
over 8 million copies. The Xbox 360 supports videos in Windows Media Video (WMV) format (including high-definition
and PlaysForSure videos), as well as H.264 and MPEG-4 media. The December 2007 dashboard update added support for
the playback of MPEG-4 ASP format videos. The console can also display pictures and perform slideshows of photo collections
with various transition effects, and supports audio playback, with music player controls accessible through the Xbox
360 Guide button. Users may play back their own music while playing games or using the dashboard, and can play music
with an interactive visual synthesizer. When the Xbox 360 was released, Microsoft's online gaming service Xbox Live
was shut down for 24 hours and underwent a major upgrade, adding a basic non-subscription service called Xbox Live
Silver (later renamed Xbox Live Free) to its already established premium subscription-based service (which was renamed
Gold). Xbox Live Free is included with all SKUs of the console. It allows users to create a user profile, join
message boards, access Microsoft's Xbox Live Arcade and Marketplace, and talk to other members. A Live Free account
does not generally support multiplayer gaming; however, some games with limited online functions
(such as Viva Piñata), or games that feature their own subscription service (e.g. EA Sports games), can be played with
a Free account. Xbox Live also supports voice and video chat, the latter made possible by the Xbox Live Vision camera. The Xbox 360
features an online service, Xbox Live, which was expanded from its previous iteration on the original Xbox and received
regular updates during the console's lifetime. Available in free and subscription-based varieties, Xbox Live allows
users to: play games online; download games (through Xbox Live Arcade) and game demos; purchase and stream music,
television programs, and films through the Xbox Music and Xbox Video portals; and access third-party content services
through media streaming applications. In addition to online multimedia features, the Xbox 360 allows users to stream
media from local PCs. Several peripherals have been released, including wireless controllers, expanded hard drive
storage, and the Kinect motion sensing camera. The release of these additional services and peripherals helped the
Xbox brand grow from gaming-only to encompassing all multimedia, turning it into a hub for living-room computing
entertainment. The Xbox 360's advantage over its competitors was due to the release of high-profile titles from both
first-party and third-party developers. The 2007 Game Critics Awards honored the platform with 38 nominations and
12 wins – more than any other platform. By March 2008, the Xbox 360 had reached a software attach rate of 7.5 games
per console in the US; the rate was 7.0 in Europe, while its competitors were 3.8 (PS3) and 3.5 (Wii), according
to Microsoft. At the 2008 Game Developers Conference, Microsoft announced that it expected over 1,000 games available
for Xbox 360 by the end of the year. As well as enjoying exclusives such as additions to the Halo franchise and Gears
of War, the Xbox 360 has managed to gain a simultaneous release of titles that were initially planned to be PS3 exclusives,
including Devil May Cry, Ace Combat, Virtua Fighter, Grand Theft Auto IV, Final Fantasy XIII, Tekken 6, Metal Gear
Solid: Rising, and L.A. Noire. In addition, Xbox 360 versions of cross-platform games were generally considered
superior to their PS3 counterparts in 2006 and 2007, due in part to the difficulties of programming for the PS3.
To aid customers with defective consoles, Microsoft extended the Xbox 360's manufacturer's warranty to three years
for hardware failure problems that generate a "General Hardware Failure" error report. A "General Hardware Failure"
is recognized on all models released before the Xbox 360 S by three quadrants of the ring around the power button
flashing red. This error is often known as the "Red Ring of Death". In April 2009 the warranty was extended to also
cover failures related to the E74 error code. The warranty extension is not granted for any other types of failures
that do not generate these specific error codes. In 2009, IGN named the Xbox 360 the sixth-greatest video game console
of all time, out of a field of 25. Although not the best-selling console of the seventh generation, the Xbox 360
was deemed by TechRadar to be the most influential, by emphasizing digital media distribution and online gaming through
Xbox Live, and by popularizing game achievement awards. PC Magazine considered the Xbox 360 the prototype for online
gaming as it "proved that online gaming communities could thrive in the console space". Five years after the Xbox
360's original debut, the well-received Kinect motion capture camera was released, which set the record of being
the fastest selling consumer electronic device in history, and extended the life of the console. Edge ranked Xbox
360 the second-best console of the 1993–2013 period, stating "It had its own social network, cross-game chat, new
indie games every week, and the best version of just about every multiformat game...Killzone is no Halo and nowadays
Gran Turismo is no Forza, but it's not about the exclusives—there's nothing to trump Naughty Dog's PS3 output, after
all. Rather, it's about the choices Microsoft made back in the original Xbox's lifetime. The PC-like architecture
meant those early EA Sports titles ran at 60fps compared to only 30 on PS3, Xbox Live meant every dedicated player
had an existing friends list, and Halo meant Microsoft had the killer next-generation exclusive. And when developers
demo games on PC now they do it with a 360 pad—another industry benchmark, and a critical one." The Xbox 360 sold
much better than its predecessor, and although not the best-selling console of the seventh generation, it is regarded
as a success since it strengthened Microsoft as a major force in the console market at the expense of well-established
rivals. The inexpensive Nintendo Wii did sell the most console units but eventually saw a collapse of third-party
software support in its later years, and it has been viewed by some as a fad since the succeeding Wii U had a poor
debut in 2012. The PlayStation 3 struggled for a time due to being too expensive and initially lacking quality titles,
making it far less dominant than its predecessor, the PlayStation 2, and it took until late in the PlayStation 3's
lifespan for its sales and game titles to reach parity with the Xbox 360. TechRadar proclaimed that "Xbox 360 passes
the baton as the king of the hill – a position that puts all the more pressure on its successor, Xbox One". The Xbox
360's original graphical user interface was the Xbox 360 Dashboard, a tabbed interface that featured five "Blades"
(formerly four blades), and was designed by AKQA and Audiobrain. It could be launched automatically when the console
booted without a disc in it, or when the disc tray was ejected, but the user had the option to select what the console
does if a game is in the tray on startup, or if a disc is inserted when the console is already on. A simplified version of it was also accessible
at any time via the Xbox Guide button on the gamepad. This simplified version showed the user's gamercard, Xbox Live
messages and friends list. It also allowed for personal and music settings, in addition to voice or video chats,
or returning to the Xbox Dashboard from the game. Xbox Live Arcade is an online service operated by Microsoft that
is used to distribute downloadable video games to Xbox and Xbox 360 owners. In addition to classic arcade games such
as Ms. Pac-Man, the service offers some new original games like Assault Heroes. The Xbox Live Arcade also features
games from other consoles, such as the PlayStation game Castlevania: Symphony of the Night and PC games such as Zuma.
The service was first launched on November 3, 2004, using a DVD to load, and offered games for about US$5 to $15.
Items are purchased using Microsoft Points, a proprietary currency used to reduce credit card transaction charges.
On November 22, 2005, Xbox Live Arcade was re-launched with the release of the Xbox 360, in which it was now integrated
with the Xbox 360's dashboard. The games are generally aimed toward more casual gamers; examples of the more popular
titles are Geometry Wars, Street Fighter II' Hyper Fighting, and Uno. On March 24, 2010, Microsoft introduced the
Game Room to Xbox Live. Game Room is a gaming service for Xbox 360 and Microsoft Windows that lets players compete
in classic arcade and console games in a virtual arcade. At the 2007, 2008, and 2009 Consumer Electronics Shows,
Microsoft had announced that IPTV services would soon be made available to use through the Xbox 360. In 2007, Microsoft
chairman Bill Gates stated that IPTV on Xbox 360 was expected to be available to consumers by the holiday season,
using the Microsoft TV IPTV Edition platform. In 2008, Gates and president of Entertainment & Devices Robbie Bach
announced a partnership with BT in the United Kingdom, in which the BT Vision advanced TV service, using the newer
Microsoft Mediaroom IPTV platform, would be accessible via Xbox 360, planned for the middle of the year. BT Vision's
DVR-based features would not be available on Xbox 360 due to limited hard drive capacity. In 2010, while announcing
version 2.0 of Microsoft Mediaroom, Microsoft CEO Steve Ballmer mentioned that AT&T's U-verse IPTV service would
enable Xbox 360s to be used as set-top boxes later in the year. As of January 2010, IPTV on Xbox 360 had yet to be
deployed beyond limited trials. The Xbox Live Marketplace is a virtual market designed for the console that allows
Xbox Live users to download purchased or promotional content. The service offers movie and game trailers, game demos,
Xbox Live Arcade games and Xbox 360 Dashboard themes as well as add-on game content (items, costumes, levels etc.).
These features are available to both Free and Gold members on Xbox Live. A hard drive or memory unit is required
to store products purchased from Xbox Live Marketplace. In order to download priced content, users are required to
purchase Microsoft Points for use as scrip; though some products (such as trailers and demos) are free to download.
Microsoft Points can be obtained through prepaid cards in 1,600 and 4,000-point denominations. Microsoft Points can
also be purchased through Xbox Live with a credit card in 500, 1,000, 2,000 and 5,000-point denominations. Users
are able to view items available to download on the service through a PC via the Xbox Live Marketplace website. An
estimated seventy percent of Xbox Live users have downloaded items from the Marketplace. Launched worldwide across
2005–2006, the Xbox 360 was initially in short supply in many regions, including North America and Europe. The earliest
versions of the console suffered from a high failure rate, indicated by the so-called "Red Ring of Death", necessitating
an extension of the device's warranty period. Microsoft released two redesigned models of the console: the Xbox 360
S in 2010, and the Xbox 360 E in 2013. As of June 2014, 84 million Xbox 360 consoles have been sold worldwide, making
it the sixth-highest-selling video game console in history, and the highest-selling console made by an American company.
Although not the best-selling console of its generation, the Xbox 360 was deemed by TechRadar to be the most influential
through its emphasis on digital media distribution and multiplayer gaming on Xbox Live. The Xbox 360's successor,
the Xbox One, was released on November 22, 2013. Microsoft has stated that it plans to support the Xbox 360 until 2016.
The Xbox One is also backwards compatible with the Xbox 360. In May 2008 Microsoft announced that 10 million Xbox
360s had been sold and that it was the "first current generation gaming console" to surpass the 10 million figure
in the US. In the US, the Xbox 360 was the leader in current-generation home console sales until June 2008, when
it was surpassed by the Wii. The Xbox 360 has sold a total of 870,000 units in Canada as of August 1, 2008. Between
January 2011 and October 2013, the Xbox 360 was the best-selling console in the United States for 32 consecutive
months. TechRadar deemed the Xbox 360 the most influential game system through its emphasis on digital media distribution,
Xbox Live online gaming service, and game achievement feature. During the console's lifetime, the Xbox brand has
grown from gaming-only to encompassing all multimedia, turning it into a hub for "living-room computing environment".
Five years after the Xbox 360's original debut, the well-received Kinect motion capture camera was released, which
became the fastest selling consumer electronic device in history, and extended the life of the console. Kinect is
a "controller-free gaming and entertainment experience" for the Xbox 360. It was first announced on June 1, 2009
at the Electronic Entertainment Expo, under the codename, Project Natal. The add-on peripheral enables users to control
and interact with the Xbox 360 without a game controller by using gestures, spoken commands and presented objects
and images. The Kinect accessory is compatible with all Xbox 360 models, connecting to new models via a custom connector,
and to older ones via a USB and mains power adapter. During their CES 2010 keynote speech, Robbie Bach and Microsoft
CEO Steve Ballmer said that Kinect would be released during the holiday period (November–January) and would
work with every 360 console. Its name and release date of November 4, 2010 were officially announced on June 13, 2010,
prior to Microsoft's press conference at E3 2010. Since these problems surfaced, Microsoft has attempted to modify
the console to improve its reliability. Modifications include a reduction in the number, size, and placement of components,
the addition of dabs of epoxy on the corners and edges of the CPU and GPU as glue to prevent movement relative to
the board during heat expansion, and a second GPU heatsink to dissipate more heat. With the release of the redesigned
Xbox 360 S, the warranty for the newer models does not include the three-year extended coverage for "General Hardware
Failures". The newer Xbox 360 S model indicates system overheating when the console's power button begins to flash
red, unlike previous models where the first and third quadrant of the ring would light up red around the power button
if overheating occurred. The system will then warn the user of imminent system shutdown until the system has cooled,
whereas a flashing power button that alternates between green and red is an indication of a "General Hardware Failure",
unlike older models where three of the quadrants would light up red. On November 6, 2006, Microsoft announced the
Xbox Video Marketplace, an exclusive video store accessible through the console. Launched in the United States on
November 22, 2006, the first anniversary of the Xbox 360's launch, the service allows users in the United States
to download high-definition and standard-definition television shows and movies onto an Xbox 360 console for viewing.
With the exception of short clips, content is not currently available for streaming, and must be downloaded. Movies
are also available for rental. They expire 14 days after download or at the end of the first 24 hours after the
movie has begun playing, whichever comes first. Television episodes can be purchased to own, and are transferable
to an unlimited number of consoles. Downloaded files use 5.1 surround audio and are encoded using VC-1 for video
at 720p, with a bitrate of 6.8 Mbit/s. Television content is offered from MTV, VH1, Comedy Central, Turner Broadcasting,
and CBS; movie content comes from Warner Bros., Paramount, and Disney, along with other studios. While the original
Xbox sold poorly in Japan, selling just 2 million units while it was on the market (between 2002 and 2005), the Xbox 360 sold even more poorly, selling only 1.5 million units from 2005 to 2011. Edge magazine reported
in August 2011 that initially lackluster and subsequently falling sales in Japan, where Microsoft had been unable
to make serious inroads into the dominance of domestic rivals Sony and Nintendo, had led to retailers scaling down
and in some cases discontinuing sales of the Xbox 360 completely. Two major hardware revisions of the Xbox 360 have
succeeded the original models; the Xbox 360 S (also referred to as the "Slim") replaced the original "Elite" and
"Arcade" models in 2010. The S model carries a smaller, streamlined appearance with an angular case, and utilizes
a redesigned motherboard designed to alleviate the hardware and overheating issues experienced by prior models. It
also includes a proprietary port for use with the Kinect sensor. The Xbox 360 E, a further streamlined variation
of the 360 S with a two-tone rectangular case inspired by Xbox One, was released in 2013. In addition to its revised
aesthetics, Xbox 360 E also has one fewer USB port and no longer supports S/PDIF. Six games were initially available
in Japan, while eagerly anticipated titles such as Dead or Alive 4 and Enchanted Arms were released in the weeks
following the console's launch. Games targeted specifically for the region, such as Chromehounds, Ninety-Nine Nights,
and Phantasy Star Universe, were also released in the console's first year. Microsoft also had the support of Japanese
developer Mistwalker, founded by Final Fantasy creator Hironobu Sakaguchi. Mistwalker's first game, Blue Dragon,
was released in 2006 and had a limited-edition bundle which sold out quickly with over 10,000 pre-orders. Blue Dragon
is one of three Xbox 360 games to surpass 200,000 units in Japan, along with Tales of Vesperia and Star Ocean: The
Last Hope. Mistwalker's second game, Lost Odyssey also sold over 100,000 copies. Music, photos and videos can be
played from standard USB mass storage devices, Xbox 360 proprietary storage devices (such as memory cards or Xbox
360 hard drives), and servers or computers with Windows Media Center or Windows XP with Service Pack 2 or higher
within the local-area network in streaming mode. As the Xbox 360 uses a modified version of the UPnP AV protocol,
some alternative UPnP servers such as uShare (part of the GeeXboX project) and MythTV can also stream media to the
Xbox 360, allowing for similar functionality from non-Windows servers. This is possible with video files up to HD-resolution
and with several codecs (MPEG-2, MPEG-4, WMV) and container formats (WMV, MOV, TS). Xbox Live Gold includes the same
features as the free tier and adds integrated online game playing capabilities outside of third-party subscriptions. Microsoft
has allowed previous Xbox Live subscribers to maintain their profile information, friends list, and games history
when they make the transition to Xbox Live Gold. To transfer an Xbox Live account to the new system, users need to
link a Windows Live ID to their gamertag on Xbox.com. When users add an Xbox Live enabled profile to their console,
they are required to provide the console with their Passport account information and the last four digits of their
credit card number, which is used for verification purposes and billing. An Xbox Live Gold account has an annual
cost of US$59.99, C$59.99, NZ$90.00, GB£39.99, or €59.99. As of January 5, 2011, Xbox Live had over 30 million subscribers.
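The Video Marketplace rental terms described earlier (a rented movie expires 14 days after download, or 24 hours after playback first begins, whichever comes first) and the stated 6.8 Mbit/s video bitrate lend themselves to a quick sketch. The function names below are illustrative only, not part of any actual Xbox API:

```python
from datetime import datetime, timedelta

def rental_expiry(downloaded_at, first_played_at=None):
    """Rental rule: expires 14 days after download, or 24 hours after
    playback first begins, whichever comes first."""
    expiry = downloaded_at + timedelta(days=14)
    if first_played_at is not None:
        expiry = min(expiry, first_played_at + timedelta(hours=24))
    return expiry

def download_size_gb(duration_seconds, bitrate_mbit=6.8):
    """Approximate download size at the stated 6.8 Mbit/s video bitrate."""
    return bitrate_mbit * 1e6 * duration_seconds / 8 / 1e9

if __name__ == "__main__":
    dl = datetime(2006, 11, 22, 12, 0)
    play = datetime(2006, 11, 30, 20, 0)
    print(rental_expiry(dl))        # 14-day limit applies: 2006-12-06 12:00
    print(rental_expiry(dl, play))  # 24h-after-playback limit applies: 2006-12-01 20:00
    print(f"{download_size_gb(2 * 3600):.1f} GB")  # a 2-hour film is roughly 6.1 GB
```

At 6.8 Mbit/s, video alone for a two-hour film comes to roughly 6 GB, which explains why content had to be downloaded rather than streamed on early hard drives.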
On May 26, 2009, Microsoft announced the future release of the Zune HD (in the fall of 2009), the next addition to
the Zune product range. This affected the Xbox Live Video Store, as it was also announced that the Zune Video Marketplace and the Xbox Live Video Store would be merged to form the Zune Marketplace, which would arrive on Xbox Live in seven countries initially: the United Kingdom, the United States, France, Italy, Germany, Ireland and
Spain. Further details were released at the Microsoft press conference at E3 2009.
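The UPnP-based media streaming described above begins with SSDP discovery, the multicast handshake any UPnP client relies on to locate media servers (such as uShare or MythTV) on the local network. The following is a minimal, generic sketch of that discovery step using only the standard protocol, not Microsoft's modified variant:

```python
import socket

# SSDP multicast address and a standard M-SEARCH request asking for
# UPnP MediaServer devices on the LAN. This is generic UPnP discovery,
# not the Xbox 360's modified protocol.
SSDP_ADDR = ("239.255.255.250", 1900)
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:MediaServer:1\r\n"
    "\r\n"
)

def discover_media_servers(timeout=3.0):
    """Broadcast an M-SEARCH and collect LOCATION headers from replies.

    Each LOCATION URL points at a device description document the client
    would fetch next to browse and stream the server's media.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH.encode("ascii"), SSDP_ADDR)
    locations = []
    try:
        while True:
            data, _ = sock.recvfrom(4096)
            for line in data.decode(errors="replace").splitlines():
                if line.lower().startswith("location:"):
                    locations.append(line.split(":", 1)[1].strip())
    except socket.timeout:
        pass  # no more replies within the window
    finally:
        sock.close()
    return locations
```

Because the Xbox 360 deviates from stock UPnP AV, servers like uShare special-case it; the discovery step shown here, however, is common to both.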
Beginning in 1689, the colonies became involved in a series of wars between Great Britain and France for control of North
America, the most important of which were Queen Anne's War, in which the British conquered the French colony of Acadia,
and the final French and Indian War (1754–63), in which Britain took control of all the French colonies in North America.
This final war was to give thousands of colonists, including Virginia colonel George Washington, military experience
which they put to use during the American Revolutionary War. By far the largest military action in which the United
States engaged during this era was the War of 1812. With Britain locked in a major war with Napoleon's France, its
policy was to block American shipments to France. The United States sought to remain neutral while pursuing overseas
trade. Britain cut the trade and impressed seamen on American ships into the Royal Navy, despite intense protests.
Britain supported an Indian insurrection in the American Midwest, with the goal of creating an Indian state there
that would block American expansion. The United States finally declared war on the United Kingdom in 1812, the first
time the U.S. had officially declared war. Not hopeful of defeating the Royal Navy, the U.S. attacked the British
Empire by invading British Canada, hoping to use captured territory as a bargaining chip. The invasion of Canada
was a debacle, though concurrent wars with Native Americans on the western front (Tecumseh's War and the Creek War)
were more successful. After defeating Napoleon in 1814, Britain sent large veteran armies to invade New York, raid
Washington and capture the key control of the Mississippi River at New Orleans. The New York invasion was a fiasco
after the much larger British army retreated to Canada. The raiders succeeded in the burning of Washington on 25
August 1814, but were repulsed in their Chesapeake Bay Campaign at the Battle of Baltimore, where the British commander was killed. The major invasion in Louisiana was stopped by a one-sided military battle that killed the top three British
generals and thousands of soldiers. The winners were the commanding general of the Battle of New Orleans, Major General
Andrew Jackson, who became president, and the Americans, who basked in a victory over a much more powerful nation.
The peace treaty proved successful, and the U.S. and Britain never again went to war. The losers were the Indians,
who never gained the independent territory in the Midwest promised by Britain. Secretary of War Elihu Root (1899–1904)
led the modernization of the Army. His goal of a uniformed chief of staff as general manager and a European-type
general staff for planning was stymied by General Nelson A. Miles, but he did succeed in enlarging West Point and establishing
the U.S. Army War College as well as the General Staff. Root changed the procedures for promotions and organized
schools for the special branches of the service. He also devised the principle of rotating officers from staff to
line. Root was concerned about the Army's role in governing the new territories acquired in 1898 and worked out the
procedures for turning Cuba over to the Cubans, and wrote the charter of government for the Philippines. The United
States originally wished to remain neutral when World War I broke out in August 1914. However, it insisted on its
right as a neutral party to immunity from German submarine attack, even though its ships carried food and raw materials
to Britain. In 1917 the Germans resumed submarine attacks, knowing that it would lead to American entry. When the
U.S. declared war, the U.S. Army was still small by European standards and mobilization would take a year. Meanwhile,
the U.S. continued to provide supplies and money to Britain and France, and instituted a nationwide draft.
Industrial mobilization took longer than expected, so divisions were sent to Europe without equipment, relying instead
on the British and French to supply them. Memories and lessons from the Vietnam War are still a major factor in American
politics. One side views the war as a necessary part of the Containment policy, which allowed the enemy to choose
the time and place of warfare. Others note the U.S. made major strategic gains as the Communists were defeated in
Indonesia, and by 1972 both Moscow and Beijing were competing for American support, at the expense of their allies
in Hanoi. Critics see the conflict as a "quagmire"—an endless waste of American blood and treasure in a conflict
that did not concern US interests. Fears of another quagmire have been major factors in foreign policy debates ever
since. The draft became extremely unpopular, and President Nixon ended it in 1973, forcing the military (the Army
especially) to rely entirely upon volunteers. That raised the issue of how well the professional military reflected
overall American society and values; the soldiers typically took the position that their service represented the
highest and best American values. The Persian Gulf War was a conflict between Iraq and a coalition force of 34 nations
led by the United States. The lead up to the war began with the Iraqi invasion of Kuwait in August 1990 which was
met with immediate economic sanctions by the United Nations against Iraq. The coalition commenced hostilities in
January 1991, resulting in a decisive victory for the U.S.-led coalition forces, which drove Iraqi forces out of
Kuwait with minimal coalition deaths. Despite the low death toll, over 180,000 US veterans would later be classified
as "permanently disabled" according to the US Department of Veterans Affairs (see Gulf War Syndrome). The main battles
were aerial and ground combat within Iraq, Kuwait and bordering areas of Saudi Arabia. Land combat did not expand
outside of the immediate Iraq/Kuwait/Saudi border region, although the coalition bombed cities and strategic targets
across Iraq, and Iraq fired missiles on Israeli and Saudi cities. US troops participated in a UN peacekeeping mission
in Somalia beginning in 1992. By 1993 the US troops were augmented with Rangers and special forces with the aim of
capturing warlord Mohamed Farrah Aidid, whose forces had massacred peacekeepers from Pakistan. During a raid in downtown
Mogadishu, US troops became trapped overnight by a general uprising in the Battle of Mogadishu. Eighteen American
soldiers were killed, and a US television crew filmed graphic images of the body of one soldier being dragged through
the streets by an angry mob. Somali guerrillas paid a staggering toll, with an estimated 1,000–5,000 total casualties
during the conflict. After much public disapproval, American forces were quickly withdrawn by President Bill Clinton.
The incident profoundly affected US thinking about peacekeeping and intervention. The book Black Hawk Down was written
about the battle, and was the basis for the later movie of the same name. In January 2002, the U.S. sent more than
1,200 troops (later raised to 2,000) to assist the Armed Forces of the Philippines in combating terrorist groups
linked to al-Qaeda, such as Abu Sayyaf, under Operation Enduring Freedom – Philippines. Operations have taken place
mostly in the Sulu Archipelago, where terrorists and other groups are active. The majority of troops provide logistics.
However, there are special forces troops that are training and assisting in combat operations against the terrorist
groups. The British, for their part, lacked both a unified command and a clear strategy for winning. With the use
of the Royal Navy, the British were able to capture coastal cities, but control of the countryside eluded them. A
British sortie from Canada in 1777 ended with the disastrous surrender of a British army at Saratoga. With the coming
in 1777 of General von Steuben, the training and discipline along Prussian lines began, and the Continental Army
began to evolve into a modern force. France and Spain then entered the war against Great Britain as Allies of the
US, ending Britain's naval advantage and escalating the conflict into a world war. The Netherlands later joined France,
and the British were outnumbered on land and sea in a world war, as they had no major allies apart from Indian tribes.
When revolutionary France declared war on Great Britain in 1793, the United States sought to remain neutral, but
the Jay Treaty, which was favorable to Great Britain, angered the French government, which viewed it as a violation
of the 1778 Treaty of Alliance. French privateers began to seize U.S. vessels, which led to an undeclared "Quasi-War"
between the two nations. Fought at sea from 1798 to 1800, the United States won a string of victories in the Caribbean.
George Washington was called out of retirement to head a "provisional army" in case of invasion by France, but President
John Adams managed to negotiate a truce, in which France agreed to terminate the prior alliance and cease its attacks.
In the Treaty of Paris after the Revolution, the British had ceded the lands between the Appalachian Mountains and
the Mississippi River to the United States, without consulting the Shawnee, Cherokee, Choctaw and other smaller tribes
who lived there. Because many of the tribes had fought as allies of the British, the United States compelled tribal
leaders to sign away lands in postwar treaties, and began dividing these lands for settlement. This provoked a war
in the Northwest Territory in which the U.S. forces performed poorly; the Battle of the Wabash in 1791 was the most
severe defeat ever suffered by the United States at the hands of American Indians. President Washington dispatched
a newly trained army to the region, which decisively defeated the Indian confederacy at the Battle of Fallen Timbers
in 1794. Sectional tensions had long existed between the states located north of the Mason–Dixon line and those south
of it, primarily centered on the "peculiar institution" of slavery and the ability of states to overrule the decisions
of the national government. During the 1840s and 1850s, conflicts between the two sides became progressively more
violent. After the election of Abraham Lincoln in 1860 (who southerners thought would work to end slavery) states
in the South seceded from the United States, beginning with South Carolina in late 1860. On April 12, 1861, forces
of the South (known as the Confederate States of America or simply the Confederacy) opened fire on Fort Sumter, whose
garrison was loyal to the Union. The Spanish–American War was a short decisive war marked by quick, overwhelming
American victories at sea and on land against Spain. The Navy was well-prepared and won laurels, even as politicians
tried (and failed) to have it redeployed to defend East Coast cities against potential threats from the feeble Spanish
fleet. The Army performed well in combat in Cuba. However, it was too oriented to small posts in the West and not
as well-prepared for an overseas conflict. It relied on volunteers and state militia units, which faced logistical,
training and food problems in the staging areas in Florida. The United States freed Cuba (after an occupation by
the U.S. Army). By the peace treaty Spain ceded to the United States its colonies of Puerto Rico, Guam, and the Philippines.
The Navy set up coaling stations there and in Hawaii (which voluntarily joined the U.S. in 1898). The U.S. Navy now
had a major forward presence across the Pacific and (with the lease of Guantánamo Bay Naval Base in Cuba) a major
base in the Caribbean guarding the approaches to the Gulf Coast and the Panama Canal. The Philippine–American War
(1899–1902) was an armed conflict between a group of Filipino revolutionaries and the American forces following the
ceding of the Philippines to the United States after the defeat of Spanish forces in the Battle of Manila. The Army
sent in 100,000 soldiers (mostly from the National Guard) under General Elwell Otis. Defeated in the field and losing
its capital in March 1899, the poorly armed and poorly led rebels broke into armed bands. The insurgency collapsed
in March 1901 when the leader Emilio Aguinaldo was captured by General Frederick Funston and his Macabebe allies.
Casualties included 1,037 Americans killed in action and 3,340 who died from disease; 20,000 rebels were killed.
The loss of eight battleships and 2,403 Americans at Pearl Harbor forced the U.S. to rely on its remaining aircraft
carriers, which won a major victory over Japan at Midway just six months into the war, and on its growing submarine
fleet. The Navy and Marine Corps followed this up with an island hopping campaign across the central and south Pacific
in 1943–45, reaching the outskirts of Japan in the Battle of Okinawa. During 1942 and 1943, the U.S. deployed millions
of men and thousands of planes and tanks to the UK, beginning with the strategic bombing of Nazi Germany and occupied
Europe and leading up to the Allied invasions of occupied North Africa in November 1942, Sicily and Italy in 1943,
France in 1944, and the invasion of Germany in 1945, parallel with the Soviet invasion from the east. That led to
the surrender of Nazi Germany in May 1945. In the Pacific, the U.S. experienced much success in naval campaigns during
1944, but bloody battles at Iwo Jima and Okinawa in 1945 led the U.S. to look for a way to end the war with minimal
loss of American lives. The U.S. used atomic bombs on Hiroshima and Nagasaki to destroy the Japanese war effort and
to shock the Japanese leadership, which quickly caused the surrender of Japan. The Korean War was a conflict between
the United States and its United Nations allies and the communist powers under influence of the Soviet Union (also
a UN member nation) and the People's Republic of China (which later also gained UN membership). The principal combatants
were North and South Korea. Principal allies of South Korea included the United States, Canada, Australia, and the United
Kingdom, although many other nations sent troops under the aegis of the United Nations. Allies of North Korea included
the People's Republic of China, which supplied military forces, and the Soviet Union, which supplied combat advisors
and aircraft pilots, as well as arms, for the Chinese and North Korean troops. The war started badly for the US and
UN. North Korean forces struck massively in the summer of 1950 and nearly drove the outnumbered US and ROK defenders
into the sea. However the United Nations intervened, naming Douglas MacArthur commander of its forces, and UN-US-ROK
forces held a perimeter around Pusan, gaining time for reinforcement. MacArthur, in a bold but risky move, ordered
an amphibious invasion well behind the front lines at Inchon, cutting off and routing the North Koreans and quickly
crossing the 38th Parallel into North Korea. As UN forces continued to advance toward the Yalu River on the border
with Communist China, the Chinese crossed the Yalu River in October and launched a series of surprise attacks that
sent the UN forces reeling back across the 38th Parallel. Truman originally wanted a Rollback strategy to unify Korea;
after the Chinese successes he settled for a Containment policy to split the country. MacArthur argued for rollback
but was fired by President Harry Truman after disputes over the conduct of the war. Peace negotiations dragged on
for two years until President Dwight D. Eisenhower threatened China with nuclear weapons; an armistice was quickly
reached with the two Koreas remaining divided at the 38th parallel. North and South Korea are still today in a state
of war, having never signed a peace treaty, and American forces remain stationed in South Korea as part of American
foreign policy. Fighting on one side was a coalition of forces including the Republic of Vietnam (South Vietnam or
the "RVN"), the United States, supplemented by South Korea, Thailand, Australia, New Zealand, and the Philippines.
The allies fought against the North Vietnamese Army (NVA) as well as the National Liberation Front (NLF, also known
as the Viet Cong or "VC"), a guerrilla force within South Vietnam. The NVA received substantial military
and economic aid from the Soviet Union and China, turning Vietnam into a proxy war. The military history of the American
side of the war involved different strategies over the years. The bombing campaigns of the Air Force were tightly
controlled by the White House for political reasons, and until 1972 avoided the main Northern cities of Hanoi and
Haiphong and concentrated on bombing jungle supply trails, especially the Ho Chi Minh Trail. The most controversial
Army commander was William Westmoreland whose strategy involved systematic defeat of all enemy forces in the field,
despite heavy American casualties that alienated public opinion back home. In 1983 fighting between Palestinian refugees
and Lebanese factions reignited that nation's long-running civil war. A UN agreement brought an international force
of peacekeepers to occupy Beirut and guarantee security. US Marines landed in August 1982 along with Italian and
French forces. On October 23, 1983, a suicide bomber driving a truck filled with 6 tons of TNT crashed through a
fence and destroyed the Marine barracks, killing 241 Marines; seconds later, a second bomber leveled a French barracks,
killing 58. Subsequently the US Navy engaged in bombing of militia positions inside Lebanon. While US President Ronald
Reagan was initially defiant, political pressure at home eventually forced the withdrawal of the Marines in February
1984. The Persian Gulf War itself was one-sided almost from the beginning. The reasons for this are the subject of continuing
study by military strategists and academics. There is general agreement that US technological superiority was a crucial
factor but the speed and scale of the Iraqi collapse has also been attributed to poor strategic and tactical leadership
and low morale among Iraqi troops, which resulted from a history of incompetent leadership. After devastating initial
strikes against Iraqi air defenses and command and control facilities on 17 January 1991, coalition forces achieved
total air superiority almost immediately. The Iraqi air force was destroyed within a few days, with some planes fleeing
to Iran, where they were interned for the duration of the conflict. The overwhelming technological advantages of
the US, such as stealth aircraft and infrared sights, quickly turned the air war into a "turkey shoot". The heat
signature of any tank which started its engine made an easy target. Air defense radars were quickly destroyed by
radar-seeking missiles fired from wild weasel aircraft. Grainy video clips, shot from the nose cameras of missiles
as they aimed at impossibly small targets, were a staple of US news coverage and revealed to the world a new kind
of war, compared by some to a video game. Over 6 weeks of relentless pounding by planes and helicopters, the Iraqi
army was almost completely beaten but did not retreat, under orders from Iraqi President Saddam Hussein, and by the
time the ground forces invaded on 24 February, many Iraqi troops quickly surrendered to forces much smaller than
their own; in one instance, Iraqi forces attempted to surrender to a television camera crew that was advancing with
coalition forces. After just 100 hours of ground combat, and with all of Kuwait and much of southern Iraq under coalition
control, US President George H. W. Bush ordered a cease-fire and negotiations began resulting in an agreement for
cessation of hostilities. Some US politicians were disappointed by this move, believing Bush should have pressed
on to Baghdad and removed Hussein from power; there is little doubt that coalition forces could have accomplished
this if they had desired. Still, the political ramifications of removing Hussein would have broadened the scope of
the conflict greatly, and many coalition nations refused to participate in such an action, believing it would create
a power vacuum and destabilize the region. The War on Terrorism is a global effort by the governments of several
countries (primarily the United States and its principal allies) to neutralize international terrorist groups (primarily
Islamic Extremist terrorist groups, including al-Qaeda) and ensure that countries considered by the US and some of
its allies to be Rogue Nations no longer support terrorist activities. It has been adopted primarily as a response
to the September 11, 2001 attacks on the United States. Since 2001, terrorist-motivated attacks upon service members
have occurred in Arkansas and Texas. After the lengthy Iraq disarmament crisis culminated with an American demand
that Iraqi President Saddam Hussein leave Iraq, which was refused, a coalition led by the United States and the United
Kingdom fought the Iraqi army in the 2003 invasion of Iraq. Approximately 250,000 United States troops, with support
from 45,000 British, 2,000 Australian and 200 Polish combat forces, entered Iraq primarily through their staging
area in Kuwait. (Turkey had refused to permit its territory to be used for an invasion from the north.) Coalition
forces also supported Iraqi Kurdish militia, estimated to number upwards of 50,000. After approximately three weeks
of fighting, Hussein and the Ba'ath Party were forcibly removed, followed by 9 years of military presence by the
United States and the coalition fighting alongside the newly elected Iraqi government against various insurgent groups.
As a result of the Libyan Civil War, the United Nations enacted United Nations Security Council Resolution 1973,
which imposed a no-fly zone over Libya and mandated the protection of civilians from the forces of Muammar Gaddafi. The United
States, along with Britain, France and several other nations, committed a coalition force against Gaddafi's forces.
On 19 March, the first U.S. action was taken when 114 Tomahawk missiles launched by US and UK warships destroyed
shoreline air defenses of the Gaddafi regime. The U.S. continued to play a major role in Operation Unified Protector,
the NATO-directed mission that eventually incorporated all of the military coalition's actions in the theater. Throughout
the conflict however, the U.S. maintained it was playing a supporting role only and was following the UN mandate
to protect civilians, while the real conflict was between Gaddafi's loyalists and Libyan rebels fighting to depose
him. During the conflict, American drones were also deployed. General George Washington (1732–99) proved an excellent
organizer and administrator, who worked successfully with Congress and the state governors, selecting and mentoring
his senior officers, supporting and training his troops, and maintaining an idealistic Republican Army. His biggest
challenge was logistics, since neither Congress nor the states had the funding to provide adequately for the equipment,
munitions, clothing, paychecks, or even the food supply of the soldiers. As a battlefield tactician Washington was
often outmaneuvered by his British counterparts. As a strategist, however, he had a better idea of how to win the
war than they did. The British sent four invasion armies. Washington's strategy forced the first army out of Boston
in 1776, and was responsible for the surrender of the second and third armies at Saratoga (1777) and Yorktown (1781).
He limited the British control to New York and a few places while keeping Patriot control of the great majority of
the population. The Loyalists, on whom the British had relied too heavily, comprised about 20% of the population
but never were well organized. As the war ended, Washington watched proudly as the final British army quietly sailed
out of New York City in November 1783, taking the Loyalist leadership with them. Washington astonished the world
when, instead of seizing power, he retired quietly to his farm in Virginia. The Berbers along the Barbary Coast (modern
day Libya) sent pirates to capture merchant ships and hold the crews for ransom. The U.S. paid protection money until
1801, when President Thomas Jefferson refused to pay and sent in the Navy to challenge the Barbary States; the First
Barbary War followed. After the U.S.S. Philadelphia was captured in 1803, Lieutenant Stephen Decatur led a raid which
successfully burned the captured ship, preventing Tripoli from using or selling it. In 1805, after William Eaton
captured the city of Derna, Tripoli agreed to a peace treaty. The other Barbary states continued to raid U.S. shipping,
until the Second Barbary War in 1815 ended the practice. The American Civil War caught both sides unprepared. The
Confederacy hoped to win by getting Britain and France to intervene, or else by wearing down the North's willingness
to fight. The U.S. sought a quick victory focused on capturing the Confederate capital at Richmond, Virginia. The
Confederates under Robert E. Lee tenaciously defended their capital until the very end. The war spilled across the
continent, and even to the high seas. Most of the material and personnel of the South were used up, while the North
prospered. The Navy was modernized in the 1880s, and by the 1890s had adopted the naval power strategy of Captain
Alfred Thayer Mahan—as indeed did every major navy. The old sailing ships were replaced by modern steel battleships,
bringing them in line with the navies of Britain and Germany. In 1907, most of the Navy's battleships, with several
support vessels, dubbed the Great White Fleet, were featured in a 14-month circumnavigation of the world. Ordered
by President Theodore Roosevelt, it was a mission designed to demonstrate the Navy's capability to extend to the
global theater. The Mexican Revolution involved a civil war with hundreds of thousands of deaths and large numbers
fleeing combat zones. Tens of thousands fled to the U.S. President Wilson sent U.S. forces to occupy the Mexican
city of Veracruz for six months in 1914. It was designed to show the U.S. was keenly interested in the civil war
and would not tolerate attacks on Americans, especially the April 9, 1914, "Tampico Affair", which involved the arrest
of American sailors by soldiers of the regime of Mexican President Victoriano Huerta. In early 1916, Pancho Villa, a Mexican general, ordered 500 soldiers on a murderous raid on the American city of Columbus, New Mexico, with the
goal of robbing banks to fund his army. The German Secret Service encouraged Pancho Villa in his attacks to involve
the United States in an intervention in Mexico which would distract the United States from its growing involvement
in the war and divert aid from Europe to support the intervention. Wilson called up the state militias (National
Guard) and sent them and the U.S. Army under General John J. Pershing to punish Villa in the Pancho Villa Expedition.
Villa fled, with the Americans in pursuit deep into Mexico, thereby arousing Mexican nationalism. By early 1917 President
Venustiano Carranza had contained Villa and secured the border, so Wilson ordered Pershing to withdraw. After the
costly U.S. involvement in World War I, isolationism grew within the nation. Congress refused membership in the League
of Nations, and in response to the growing turmoil in Europe and Asia, the gradually more restrictive Neutrality
Acts were passed, which were intended to prevent the U.S. from supporting either side in a war. President Franklin
D. Roosevelt sought to support Britain, however, and in 1940 signed the Lend-Lease Act, which permitted an expansion
of the "cash and carry" arms trade to develop with Britain, which controlled the Atlantic sea lanes. World War II
holds a special place in the American psyche as the country's greatest triumph, and the U.S. military personnel of
World War II are frequently referred to as "the Greatest Generation." Over 16 million served (about 11% of the population),
and over 400,000 died during the war. The U.S. emerged as one of the two undisputed superpowers along with the Soviet
Union, and unlike the Soviet Union, the U.S. homeland was virtually untouched by the ravages of war. During and following
World War II, the United States and Britain developed an increasingly strong defense and intelligence relationship.
Manifestations of this include extensive basing of U.S. forces in the UK, shared intelligence, shared military technology
(e.g. nuclear technology), and shared procurement. The U.S. framed the war as part of its policy of containment of
Communism in Southeast Asia, but American forces were frustrated by an inability to engage the enemy in decisive battles,
corruption and incompetence in the Army of the Republic of Vietnam, and ever increasing protests at home. The Tet
Offensive in 1968, although a major military defeat for the NLF with half their forces eliminated, marked the psychological
turning point in the war. With President Richard M. Nixon opposed to containment and more interested in achieving
détente with both the Soviet Union and China, American policy shifted to "Vietnamization": providing very large
supplies of arms and letting the Vietnamese fight it out themselves. After more than 57,000 dead and many more wounded,
American forces withdrew in 1973 with no clear victory, and in 1975 South Vietnam was finally conquered by communist
North Vietnam and unified. Ongoing political tensions between Great Britain and the thirteen colonies reached a crisis
in 1774 when the British placed the province of Massachusetts under martial law after the Patriots protested taxes
they regarded as a violation of their constitutional rights as Englishmen. When shooting began at Lexington and Concord
in April 1775, militia units from across New England rushed to Boston and bottled up the British in the city. The
Continental Congress appointed George Washington as commander-in-chief of the newly created Continental Army, which
was augmented throughout the war by colonial militia. He drove the British out of Boston but in late summer 1776
they returned to New York and nearly captured Washington's army. Meanwhile, the revolutionaries expelled British
officials from the 13 states, and declared themselves an independent nation on July 4, 1776. Following the American
Revolutionary War, the United States faced potential military conflict on the high seas as well as on the western
frontier. The United States was a minor military power during this time, having only a modest army, Marine Corps,
and navy. A traditional distrust of standing armies, combined with faith in the abilities of local militia, precluded
the development of well-trained units and a professional officer corps. Jeffersonian leaders preferred a small army
and navy, fearing that a large military establishment would involve the United States in excessive foreign wars,
and potentially allow a domestic tyrant to seize power. After the Civil War, population expansion, railroad construction,
and the disappearance of the buffalo herds heightened military tensions on the Great Plains. Several tribes, especially
the Sioux and Comanche, fiercely resisted confinement to reservations. The main role of the Army was to keep indigenous
peoples on reservations and to end their wars against settlers and each other; William Tecumseh Sherman and Philip
Sheridan were in charge. A famous victory for the Plains Nations was the Battle of the Little Big Horn in 1876, when
Col. George Armstrong Custer and more than two hundred members of the 7th Cavalry were killed by a force consisting of
Native Americans from the Lakota, Northern Cheyenne, and Arapaho nations. The last significant conflict came in 1891.
Rear Admiral Bradley A. Fiske was at the vanguard of new technology in naval guns and gunnery, thanks to his innovations
in fire control between 1890 and 1910. He immediately grasped the potential for air power, and called for the development of
a torpedo plane. Fiske, as aide for operations in 1913–15 to Assistant Secretary Franklin D. Roosevelt, proposed
a radical reorganization of the Navy to make it a war-fighting instrument. Fiske wanted to centralize authority in
a chief of naval operations and an expert staff that would develop new strategies, oversee the construction of a
larger fleet, coordinate war planning including force structure, mobilization plans, and industrial base, and ensure
that the US Navy possessed the best possible war machines. Eventually, the Navy adopted his reforms and by 1915 started
to reorganize for possible involvement in the World War then underway. By summer 1918, a million American soldiers,
or "doughboys" as they were often called, of the American Expeditionary Forces were in Europe under the command of
John J. Pershing, with 25,000 more arriving every week. The failure of Germany's spring offensive exhausted its reserves,
leaving it unable to launch new offensives. The German Navy and home front then revolted, and a new German government
signed a conditional surrender, the Armistice, ending the war on the Western Front on November 11, 1918. Starting
in 1940 (18 months before Pearl Harbor), the nation mobilized, giving high priority to air power. American involvement
in World War II in 1940–41 was limited to providing war material and financial support to Britain, the Soviet Union,
and the Republic of China. The U.S. entered officially on 8 December 1941 following the Japanese attack on Pearl
Harbor, Hawaii. Japanese forces soon seized American, Dutch, and British possessions across the Pacific and Southeast
Asia, except for Australia, which became a main American forward base along with Hawaii. The Vietnam War was fought
between 1959 and 1975 on the ground in South Vietnam and bordering areas of Cambodia and Laos (see Secret
War) and in the strategic bombing (see Operation Rolling Thunder) of North Vietnam. American advisors came in the
late 1950s to help the RVN (Republic of Vietnam) combat Communist insurgents known as "Viet Cong." Major American
military involvement began in 1964, after Congress provided President Lyndon B. Johnson with blanket approval for
presidential use of force in the Gulf of Tonkin Resolution. Before the war, many observers believed the US and its
allies could win but might suffer substantial casualties (certainly more than any conflict since Vietnam), and that
the tank battles across the harsh desert might rival those of North Africa during World War II. After nearly 50 years
of proxy wars, and constant fears of another war in Europe between NATO and the Warsaw Pact, some thought the Persian
Gulf War might finally answer the question of which military philosophy would have reigned supreme. Iraqi forces
were battle-hardened after 8 years of war with Iran, and they were well equipped with late-model Soviet tanks and
jet fighters, but their antiaircraft weapons were crippled; in comparison, the US had no large-scale combat experience
since its withdrawal from Vietnam nearly 20 years earlier, and major changes in US doctrine, equipment and technology
since then had never been tested under fire. With the emergence of ISIL and its capture of large areas of Iraq and
Syria, a number of crises resulted that sparked international attention. ISIL had perpetrated sectarian killings
and war crimes in both Iraq and Syria. Gains made in the Iraq war were rolled back as Iraqi army units abandoned
their posts. Cities were taken over by the terrorist group which enforced its brand of Sharia law. The kidnapping
and decapitation of numerous Western journalists and aid-workers also garnered interest and outrage among Western
powers. The US intervened with airstrikes in Iraq over ISIL-held territories and assets in August, and in September
a coalition of US and Middle Eastern powers initiated a bombing campaign in Syria aimed at degrading and destroying
ISIL- and Al-Nusra-held territory.
Groups that emerged from the American psychedelic scene about the same time included Iron Butterfly, MC5, Blue Cheer and
Vanilla Fudge. San Francisco band Blue Cheer released a crude and distorted cover of Eddie Cochran's classic "Summertime
Blues", from their 1968 debut album Vincebus Eruptum, that outlined much of the later hard rock and heavy metal sound.
The same month, Steppenwolf released its self-titled debut album, including "Born to Be Wild", which contained the
first lyrical reference to heavy metal and helped popularise the style when it was used in the film Easy Rider (1969).
Iron Butterfly's In-A-Gadda-Da-Vida (1968), with its 17-minute-long title track, using organs and with a lengthy
drum solo, also prefigured later elements of the sound. From outside the United Kingdom and the United States, the
Canadian trio Rush released three distinctively hard rock albums in 1974–75 (Rush, Fly by Night and Caress of Steel)
before moving toward a more progressive sound with the 1976 album 2112. The Irish band Thin Lizzy, which had formed
in the late 1960s, made their most substantial commercial breakthrough in 1976 with the hard rock album Jailbreak
and their worldwide hit "The Boys Are Back in Town", which reached number 8 in the UK and number 12 in the US. Their
style, built around two duelling guitarists often playing leads in harmony, proved a major influence
on later bands. They reached their commercial, and arguably their artistic, peak with Black Rose: A Rock Legend (1979).
The arrival of Scorpions from Germany marked the geographical expansion of the subgenre. Australian-formed AC/DC,
with a stripped back, riff heavy and abrasive style that also appealed to the punk generation, began to gain international
attention from 1976, culminating in the release of their multi-platinum albums Let There Be Rock (1977) and Highway
to Hell (1979). Also influenced by a punk ethos were heavy metal bands like Motörhead, while Judas Priest abandoned
the remaining elements of the blues in their music, further differentiating the hard rock and heavy metal styles
and helping to create the New Wave of British Heavy Metal which was pursued by bands like Iron Maiden, Saxon and
Venom. Bon Jovi's third album, Slippery When Wet (1986), mixed hard rock with a pop sensibility and spent a total
of 8 weeks at the top of the Billboard 200 album chart, selling 12 million copies in the US while becoming the first
hard rock album to spawn three top 10 singles, two of which reached number one. The album has been credited with
widening the audiences for the genre, particularly by appealing to women as well as the traditional male dominated
audience, and opening the door to MTV and commercial success for other bands at the end of the decade. The anthemic
The Final Countdown (1986) by Swedish group Europe was an international hit, reaching number eight on the US charts
while hitting the top 10 in nine other countries. This era also saw more glam-infused American hard rock bands come
to the forefront, with both Poison and Cinderella releasing their multi-platinum début albums in 1986. Van Halen
released 5150 (1986), their first album with Sammy Hagar on lead vocals, which was number one in the US for three
weeks and sold over 6 million copies. By the second half of the decade, hard rock had become the most reliable form
of commercial popular music in the United States. Some established acts continued to enjoy commercial success, such
as Aerosmith, with their number one multi-platinum albums: Get a Grip (1993), which produced four Top 40 singles
and became the band's best-selling album worldwide (going on to sell over 10 million copies), and Nine Lives (1997).
In 1998, Aerosmith released the number one hit "I Don't Want to Miss a Thing", which remains the only single by a
hard rock band to debut at number one. AC/DC produced the double platinum Ballbreaker (1995). Bon Jovi appealed to
their hard rock audience with songs such as "Keep the Faith" (1992), but also achieved success in adult contemporary
radio, with the Top 10 ballads "Bed of Roses" (1993) and "Always" (1994). Bon Jovi's 1995 album These Days was a
bigger hit in Europe than it was in the United States, spawning four Top 10 singles on the UK Singles Chart. Metallica's
Load (1996) and ReLoad (1997) each sold in excess of 4 million copies in the US and saw the band develop a more melodic
and blues rock sound. As the initial impetus of grunge bands faltered in the middle years of the decade, post-grunge
bands emerged. They emulated the attitudes and music of grunge, particularly thick, distorted guitars, but with a
more radio-friendly commercially oriented sound that drew more directly on traditional hard rock. Among the most
successful acts were the Foo Fighters, Candlebox, Live, Collective Soul, Australia's Silverchair and England's Bush,
who all cemented post-grunge as one of the most commercially viable subgenres by the late 1990s. Similarly, some
post-Britpop bands that followed in the wake of Oasis, including Feeder and Stereophonics, adopted a hard rock or
"pop-metal" sound. Hard rock developed into a major form of popular music in the 1970s, with bands such as Led Zeppelin,
The Who, Deep Purple, Aerosmith, AC/DC and Van Halen. During the 1980s, some hard rock bands moved away from their
hard rock roots and more towards pop rock, while others began to return to a hard rock sound. Established bands made
a comeback in the mid-1980s, and the genre reached a commercial peak with glam metal bands like Bon Jovi and
Def Leppard and the rawer sounds of Guns N' Roses, which achieved great success in the later part of that
decade. Hard rock began losing popularity with the commercial success of grunge and later Britpop in the 1990s. The
roots of hard rock can be traced back to the 1950s, particularly electric blues, which laid the foundations for key
elements such as a rough declamatory vocal style, heavy guitar riffs, string-bending blues-scale guitar solos, strong
beat, thick riff-laden texture, and posturing performances. Electric blues guitarists began experimenting with hard
rock elements such as driving rhythms, distorted guitar solos and power chords in the 1950s, evident in the work
of Memphis blues guitarists such as Joe Hill Louis, Willie Johnson, and particularly Pat Hare, who captured a "grittier,
nastier, more ferocious electric guitar sound" on records such as James Cotton's "Cotton Crop Blues" (1954). Other
antecedents include Link Wray's instrumental "Rumble" in 1958, and the surf rock instrumentals of Dick Dale, such
as "Let's Go Trippin'" (1961) and "Misirlou" (1962). In the early 1970s the Rolling Stones developed their hard rock
sound with Exile on Main St. (1972). Though it initially received mixed reviews, according to critic Stephen Thomas Erlewine it is
now "generally regarded as the Rolling Stones' finest album". They continued to pursue the riff-heavy sound on albums
including It's Only Rock 'n' Roll (1974) and Black and Blue (1976). Led Zeppelin began to mix elements of world and
folk music into their hard rock from Led Zeppelin III (1970) and Led Zeppelin IV (1971). The latter included the
track "Stairway to Heaven", which would become the most played song in the history of album-oriented radio. Deep
Purple continued to define hard rock, particularly with their album Machine Head (1972), which included the tracks
"Highway Star" and "Smoke on the Water". In 1975 guitarist Ritchie Blackmore left, going on to form Rainbow and after
the break-up of the band the next year, vocalist David Coverdale formed Whitesnake. 1970 saw The Who release Live
at Leeds, often seen as the archetypal hard rock live album, and the following year they released their highly acclaimed
album Who's Next, which mixed heavy rock with extensive use of synthesizers. Subsequent albums, including Quadrophenia
(1973), built on this sound before Who Are You (1978), their last album before the death of pioneering rock drummer
Keith Moon later that year. The opening years of the 1980s saw a number of changes in personnel and direction of
established hard rock acts, including the deaths of Bon Scott, the lead singer of AC/DC, and John Bonham, drummer
with Led Zeppelin. Whereas Zeppelin broke up almost immediately afterwards, AC/DC pressed on, recording the album
Back in Black (1980) with their new lead singer, Brian Johnson. It became the fifth-highest-selling album of all
time in the US and the second-highest-selling album in the world. Black Sabbath had split with original singer Ozzy
Osbourne in 1979 and replaced him with Ronnie James Dio, formerly of Rainbow, giving the band a new sound and a period
of creativity and popularity beginning with Heaven and Hell (1980). Osbourne embarked on a solo career with Blizzard
of Ozz (1980), featuring American guitarist Randy Rhoads. Some bands, such as Queen, moved away from their hard rock
roots and more towards pop rock, while others, including Rush with Moving Pictures (1981), began to return to a hard
rock sound. The creation of thrash metal, which mixed heavy metal with elements of hardcore punk from about 1982,
particularly by Metallica, Anthrax, Megadeth and Slayer, helped to create extreme metal and further remove the style
from hard rock, although a number of these bands or their members would continue to record some songs closer to a
hard rock sound. Kiss moved away from their hard rock roots toward pop metal: firstly removing their makeup in 1983
for their Lick It Up album, and then adopting the visual and sound of glam metal for their 1984 release, Animalize,
both of which marked a return to commercial success. Pat Benatar was one of the first women to achieve commercial
success in hard rock, with three successive Top 5 albums between 1980 and 1982. Hard rock entered the 1990s as one
of the dominant forms of commercial music. The multi-platinum releases of AC/DC's The Razors Edge (1990), Guns N'
Roses' Use Your Illusion I and Use Your Illusion II (both in 1991), Ozzy Osbourne's No More Tears (1991), and Van
Halen's For Unlawful Carnal Knowledge (1991) showcased this popularity. Additionally, The Black Crowes released their
debut album, Shake Your Money Maker (1990), which contained a bluesy classic rock sound and sold five million copies.
In 1992, Def Leppard followed up 1987's Hysteria with Adrenalize, which went multi-platinum, spawned four Top 40
singles and held the number one spot on the US album chart for five weeks. The term "retro-metal" has been applied
to such bands as Texas-based The Sword, California's High on Fire, Sweden's Witchcraft and Australia's Wolfmother.
Wolfmother's self-titled 2005 debut album combined elements of the sounds of Deep Purple and Led Zeppelin. Fellow
Australians Airbourne's début album Runnin' Wild (2007) followed in the hard riffing tradition of AC/DC. England's
The Darkness' Permission to Land (2003), described as an "eerily realistic simulation of '80s metal and '70s glam",
topped the UK charts, going quintuple platinum. The follow-up, One Way Ticket to Hell... and Back (2005), reached
number 11, before the band broke up in 2006. Los Angeles band Steel Panther managed to gain a following by sending
up '80s glam metal. A more serious attempt to revive glam metal was made by bands of the sleaze metal movement in
Sweden, including Vains of Jenna, Hardcore Superstar and Crashdïet. In the 1960s, American and British blues and
rock bands began to modify rock and roll by adding harder sounds, heavier guitar riffs, bombastic drumming, and louder
vocals, from electric blues. Early forms of hard rock can be heard in the work of Chicago blues musicians Elmore
James, Muddy Waters, and Howlin' Wolf, The Kingsmen's version of "Louie Louie" (1963), which made it a garage rock
standard, and the songs of rhythm-and-blues-influenced British Invasion acts, including "You Really Got Me" by The
Kinks (1964), "My Generation" by The Who (1965), "Shapes of Things" (1966) by The Yardbirds and "(I Can't Get No)
Satisfaction" (1965) by The Rolling Stones. From the late 1960s, it became common to divide mainstream rock music
that emerged from psychedelia into soft and hard rock. Soft rock was often derived from folk rock, using acoustic
instruments and putting more emphasis on melody and harmonies. In contrast, hard rock was most often derived from
blues rock and was played louder and with more intensity. Emerging British acts included Free, who released their
signature song "All Right Now" (1970), which has received extensive radio airplay in both the UK and US. After the
breakup of the band in 1973, vocalist Paul Rodgers joined supergroup Bad Company, whose eponymous first album (1974)
was an international hit. The mixture of hard rock and progressive rock, evident in the works of Deep Purple, was
pursued more directly by bands like Uriah Heep and Argent. Scottish band Nazareth released their self-titled début
album in 1971, producing a blend of hard rock and pop that would culminate in their best-selling album, Hair of the Dog
(1975), which contained the proto-power ballad "Love Hurts". Having enjoyed some national success in the early 1970s,
Queen gained international recognition after the release of Sheer Heart Attack (1974) and A Night at the Opera (1975),
with a sound that used layered vocals and guitars and mixed hard rock with heavy metal, progressive rock, and even
opera. The latter featured the single "Bohemian Rhapsody", which stayed at number one in the UK charts for nine weeks.
Often categorised with the New Wave of British Heavy Metal, in 1981 Def Leppard released their second album High
'n' Dry, mixing glam-rock with heavy metal, and helping to define the sound of hard rock for the decade. The follow-up
Pyromania (1983), reached number two on the American charts and the singles "Photograph", "Rock of Ages" and "Foolin'",
helped by the emergence of MTV, all reached the Top 40. It was widely emulated, particularly by the emerging Californian
glam metal scene. This was followed by US acts like Mötley Crüe, with their albums Too Fast for Love (1981) and Shout
at the Devil (1983) and, as the style grew, the arrival of bands such as Ratt, White Lion, Twisted Sister and Quiet
Riot. Quiet Riot's album Metal Health (1983) was the first glam metal album, and arguably the first heavy metal album
of any kind, to reach number one in the Billboard music charts and helped open the doors for mainstream success by
subsequent bands. While these few hard rock bands managed to maintain success and popularity in the early part of
the decade, alternative forms of hard rock achieved mainstream success in the form of grunge in the US and Britpop
in the UK. This was particularly evident after the success of Nirvana's Nevermind (1991), which combined elements
of hardcore punk and heavy metal into a "dirty" sound that made use of heavy guitar distortion, fuzz and feedback,
along with darker lyrical themes than their "hair band" predecessors. Although most grunge bands had a sound that
sharply contrasted mainstream hard rock, several, including Pearl Jam, Alice in Chains, Mother Love Bone and Soundgarden,
were more strongly influenced by 1970s and 1980s rock and metal, while Stone Temple Pilots managed to turn alternative
rock into a form of stadium rock. However, all grunge bands shunned the macho, anthemic and fashion-focused aesthetics
particularly associated with glam metal. In the UK, Oasis were unusual among the Britpop bands of the mid-1990s in
incorporating a hard rock sound. Although Foo Fighters continued to be one of the most successful rock acts, with
albums like In Your Honor (2005) reaching number two in the US and UK, many of the first wave of post-grunge bands
began to fade in popularity. Acts like Creed, Staind, Puddle of Mudd and Nickelback took the genre into the 2000s
with considerable commercial success, abandoning most of the angst and anger of the original movement for more conventional
anthems, narratives and romantic songs. They were followed in this vein by new acts including Shinedown and Seether.
Acts with more conventional hard rock sounds included Andrew W.K., Beautiful Creatures and Buckcherry, whose breakthrough
album 15 (2006) went platinum and spawned the single "Sorry" (2007), which made the Top 10 of the Billboard Hot 100.
These were joined by bands with hard rock leanings that emerged in the mid-2000s from the garage rock or post punk
revival, including Black Rebel Motorcycle Club and Kings of Leon, and Queens of the Stone Age from the US, Three
Days Grace from Canada, Jet from Australia and The Datsuns from New Zealand. In 2009 Them Crooked Vultures, a supergroup
that brought together Foo Fighters' Dave Grohl, Queens of the Stone Age's Josh Homme and Led Zeppelin bass player
John Paul Jones attracted attention as a live act and released a self-titled debut album that reached the top 20
in the US and UK and the top ten in several other countries. Hard rock is a form of loud, aggressive rock music.
The electric guitar is often emphasised, used with distortion and other effects, both as a rhythm instrument using
repetitive riffs with a varying degree of complexity, and as a solo lead instrument. Drumming characteristically
focuses on driving rhythms, strong bass drum and a backbeat on snare, sometimes using cymbals for emphasis. The bass
guitar works in conjunction with the drums, occasionally playing riffs, but usually providing a backing for the rhythm
and lead guitars. Vocals are often growling, raspy, or involve screaming or wailing, sometimes in a high range, or
even falsetto voice. Blues rock acts that pioneered the sound included Cream, The Jimi Hendrix Experience, and The
Jeff Beck Group. Cream, in songs like "I Feel Free" (1966), combined blues rock with pop and psychedelia, particularly
in the riffs and guitar solos of Eric Clapton. Jimi Hendrix produced a form of blues-influenced psychedelic rock,
which combined elements of jazz, blues and rock and roll. From 1967 Jeff Beck brought lead guitar to new heights
of technical virtuosity and moved blues rock in the direction of heavy rock with his band, The Jeff Beck Group. Dave
Davies of The Kinks, Keith Richards of The Rolling Stones, Pete Townshend of The Who, Hendrix, Clapton and Beck all
pioneered the use of new guitar effects like phasing, feedback and distortion. The Beatles began producing songs
in the new hard rock style beginning with the White Album in 1968 and, with the track "Helter Skelter", attempted
to create a greater level of noise than the Who. Stephen Thomas Erlewine of AllMusic has described the "proto-metal
roar" of "Helter Skelter," while Ian MacDonald argued that "their attempts at emulating the heavy style were without
exception embarrassing." In the United States, macabre-rock pioneer Alice Cooper achieved mainstream success with
the top ten album School's Out (1972). In the following year blues rockers ZZ Top released their classic album Tres
Hombres and Aerosmith produced their eponymous début, as did Southern rockers Lynyrd Skynyrd and proto-punk outfit
New York Dolls, demonstrating the diverse directions being pursued in the genre. Montrose, featuring the instrumental
talent of Ronnie Montrose and the vocals of Sammy Hagar, and arguably the first all-American hard rock band to challenge
the British dominance of the genre, released their first album in 1973. Kiss built on the theatrics of Alice Cooper
and the look of the New York Dolls to produce a unique band persona, achieving their commercial breakthrough with
the double live album Alive! in 1975 and helping to take hard rock into the stadium rock era. In the mid-1970s Aerosmith
achieved their commercial and artistic breakthrough with Toys in the Attic (1975), which reached number 11 in the
American album chart, and Rocks (1976), which peaked at number three. Blue Öyster Cult, formed in the late 60s, picked
up on some of the elements introduced by Black Sabbath with their breakthrough live gold album On Your Feet or on
Your Knees (1975), followed by their first platinum album, Agents of Fortune (1976), containing the hit single "(Don't
Fear) The Reaper", which reached number 12 on the Billboard charts. Journey released their eponymous debut in 1975
and the next year Boston released their highly successful début album. In the same year, hard rock bands featuring
women saw commercial success as Heart released Dreamboat Annie and The Runaways débuted with their self-titled album.
While Heart had a more folk-oriented hard rock sound, the Runaways leaned more towards a mix of punk-influenced music
and hard rock. The Amboy Dukes, having emerged from the Detroit garage rock scene and most famous for their Top 20
psychedelic hit "Journey to the Center of the Mind" (1968), were dissolved by their guitarist Ted Nugent, who embarked
on a solo career that resulted in four successive multi-platinum albums between Ted Nugent (1975) and his best-selling
Double Live Gonzo (1978). Established bands made something of a comeback in the mid-1980s. After an 8-year separation,
Deep Purple returned with the classic Machine Head line-up to produce Perfect Strangers (1984), which reached number
five in the UK, hit the top five in five other countries, and was a platinum-seller in the US. After somewhat slower
sales of its fourth album, Fair Warning, Van Halen rebounded with the Top 3 album Diver Down in 1982, then reached
their commercial pinnacle with 1984. It reached number two on the Billboard album chart and provided the track "Jump",
which reached number one on the singles chart and remained there for several weeks. Heart, after floundering during
the first half of the decade, made a comeback with their eponymous ninth studio album, which hit number one and contained
four Top 10 singles including their first number one hit. The new medium of video channels was used with considerable
success by bands formed in previous decades. Among the first were ZZ Top, who mixed hard blues rock with new wave
music to produce a series of highly successful singles, beginning with "Gimme All Your Lovin'" (1983), which helped
their albums Eliminator (1983) and Afterburner (1985) achieve diamond and multi-platinum status respectively. Others
found renewed success in the singles charts with power ballads, including REO Speedwagon with "Keep on Loving You"
(1980) and "Can't Fight This Feeling" (1984), Journey with "Don't Stop Believin'" (1981) and "Open Arms" (1982),
Foreigner's "I Want to Know What Love Is", Scorpions' "Still Loving You" (both from 1984), Heart's "What About Love"
(1985) and "These Dreams" (1986), and Boston's "Amanda" (1986). In the new commercial climate glam metal bands like
Europe, Ratt, White Lion and Cinderella broke up, Whitesnake went on hiatus in 1991, and while many of these bands
would re-unite again in the late 1990s or early 2000s, they never reached the commercial success they saw in the
1980s or early 1990s. Other bands such as Mötley Crüe and Poison saw personnel changes which impacted those bands'
commercial viability during the decade. In 1995 Van Halen released Balance, a multi-platinum seller that would be
the band's last with Sammy Hagar on vocals. In 1996 David Lee Roth returned briefly and his replacement, former Extreme
singer Gary Cherone, was fired soon after the release of the commercially unsuccessful 1998 album Van Halen III and
Van Halen would not tour or record again until 2004. Guns N' Roses' original lineup was whittled away throughout
the decade. Drummer Steven Adler was fired in 1990, guitarist Izzy Stradlin left in late 1991 after recording Use
Your Illusion I and II with the band. Tensions between the other band members and lead singer Axl Rose continued
after the release of the 1993 covers album The Spaghetti Incident? Guitarist Slash left in 1996, followed by bassist
Duff McKagan in 1997. Axl Rose, the only original member, worked with a constantly changing lineup in recording an
album that would take over fifteen years to complete. In the late 1960s, the term heavy metal was used interchangeably
with hard rock, but gradually began to be used to describe music played with even more volume and intensity. While
hard rock maintained a bluesy rock and roll identity, including some swing in the back beat and riffs that tended
to outline chord progressions in their hooks, heavy metal's riffs often functioned as stand-alone melodies and had
no swing in them. Heavy metal took on "darker" characteristics after Black Sabbath's breakthrough at the beginning
of the 1970s. In the 1980s it developed a number of subgenres, often termed extreme metal, some of which were influenced
by hardcore punk, and which further differentiated the two styles. Despite this differentiation, hard rock and heavy
metal have existed side by side, with bands frequently standing on the boundary of, or crossing between, the genres.
By the end of the decade a distinct genre of hard rock was emerging with bands like Led Zeppelin, who mixed the music
of early rock bands with a more hard-edged form of blues rock and acid rock on their first two albums Led Zeppelin
(1969) and Led Zeppelin II (1969), and Deep Purple, who began as a progressive rock group but achieved their commercial
breakthrough with their fourth and distinctively heavier album, In Rock (1970). Also significant was Black Sabbath's
Paranoid (1970), which combined guitar riffs with dissonance and more explicit references to the occult and elements
of Gothic horror. All three of these bands have been seen as pivotal in the development of heavy metal, but where
metal further accentuated the intensity of the music, with bands like Judas Priest following Sabbath's lead into
territory that was often "darker and more menacing", hard rock tended to remain the more exuberant, good-time
music. With the rise of disco in the US and punk rock in the UK, hard rock's mainstream dominance was rivalled toward
the later part of the decade. Disco appealed to a more diverse group of people and punk seemed to take over the rebellious
role that hard rock once held. Early punk bands like The Ramones explicitly rebelled against the drum solos and extended
guitar solos that characterised stadium rock, with almost all of their songs clocking in around two minutes with
no guitar solos. However, new rock acts continued to emerge and record sales remained high into the 1980s. 1977 saw
the début and rise to stardom of Foreigner, who went on to release several platinum albums through to the mid-1980s.
Midwestern groups like Kansas, REO Speedwagon and Styx helped further cement heavy rock in the Midwest as a form
of stadium rock. In 1978, Van Halen emerged from the Los Angeles music scene with a sound based around the skills
of lead guitarist Eddie Van Halen. He popularised a guitar-playing technique of two-handed hammer-ons and pull-offs
called tapping, showcased on the song "Eruption" from the album Van Halen, which was highly influential in re-establishing
hard rock as a popular genre after the punk and disco explosion, while also redefining and elevating the role of
electric guitar. Established acts benefited from the new commercial climate, with Whitesnake's self-titled album
(1987) selling over 17 million copies, outperforming anything in Coverdale's or Deep Purple's catalogue before or
since. It featured the rock anthem "Here I Go Again '87" as one of four UK Top 20 singles. The follow-up Slip of the
Tongue (1989) went platinum, but according to critics Stephen Thomas Erlewine and Greg Prato, "it was a considerable disappointment
after the across-the-board success of Whitesnake". Aerosmith's comeback album Permanent Vacation (1987) would begin
a decade-long revival of their popularity. Crazy Nights (1987) by Kiss was the band's highest charting release in
the US since 1979 and the highest of their career in the UK. Mötley Crüe with Girls, Girls, Girls (1987) continued
their commercial success and Def Leppard with Hysteria (1987) hit their commercial peak, the latter producing seven
hit singles (a record for a hard rock act). Guns N' Roses released the best-selling début of all time, Appetite for
Destruction (1987). With a "grittier" and "rawer" sound than most glam metal, it produced three top 10 hits, including
the number one "Sweet Child O' Mine". Some of the glam rock bands that formed in the mid-1980s, such as White Lion
and Cinderella experienced their biggest success during this period with their respective albums Pride (1987) and
Long Cold Winter (1988) both going multi-platinum and launching a series of hit singles. In the last years of the
decade, the most notable successes were New Jersey (1988) by Bon Jovi, OU812 (1988) by Van Halen, Open Up and Say...
Ahh! (1988) by Poison, Pump (1989) by Aerosmith, and Mötley Crüe's most commercially successful album Dr. Feelgood
(1989). New Jersey spawned five Top 10 singles, a record for a hard rock act. From 25 June to 5 November 1988,
the number one spot on the Billboard 200 album chart was held by a hard rock album for 18 of 20 consecutive weeks;
the albums were OU812, Hysteria, Appetite for Destruction, and New Jersey. A final wave of glam rock bands arrived
in the late 1980s, and experienced success with multi-platinum albums and hit singles from 1989 until the early 1990s,
among them Extreme, Warrant, Slaughter and FireHouse. Skid Row also released their eponymous début (1989), reaching
number six on the Billboard 200, but they were to be one of the last major bands that emerged in the glam rock era.
A few hard rock bands from the 1970s and 1980s managed to sustain highly successful recording careers. Bon Jovi were
still able to achieve a commercial hit with "It's My Life" from their double platinum-certified album Crush (2000),
and AC/DC released the platinum-certified Stiff Upper Lip (2000). Aerosmith released a number two platinum album,
Just Push Play (2001), which saw the band foray further into pop with the Top 10 hit "Jaded", and a blues cover album,
Honkin' on Bobo, which reached number five in 2004. Heart achieved their first Top 10 album since the early 90s with
Red Velvet Car in 2010, becoming the first female-led hard rock band to earn Top 10 albums spanning five decades.
There were reunions and subsequent tours from Van Halen (with Hagar in 2004 and then Roth in 2007), The Who (delayed
in 2002 by the death of bassist John Entwistle until 2006) and Black Sabbath (with Osbourne 1997–2006 and Dio 2006–2010)
and even a one-off performance by Led Zeppelin (2007), renewing interest in previous eras. Additionally, hard
rock supergroups, such as Audioslave (with former members of Rage Against the Machine and Soundgarden) and Velvet
Revolver (with former members of Guns N' Roses, punk band Wasted Youth and Stone Temple Pilots singer Scott Weiland),
emerged and experienced some success. However, these bands were short-lived, ending in 2007 and 2008, respectively.
The long awaited Guns N' Roses album Chinese Democracy was finally released in 2008, but only went platinum and failed
to come close to the success of the band's late 1980s and early 1990s material. More successfully, AC/DC released
the double platinum-certified Black Ice (2008). Bon Jovi continued to enjoy success, branching into country music
with "Who Says You Can't Go Home", which reached number one on the Hot Country Singles chart in 2006, and the rock/country
album Lost Highway, which reached number one in 2007. In 2009, Bon Jovi released another number one album, The Circle,
which marked a return to their hard rock sound.
The term "Great Plains", for the region west of about the 96th or 98th meridian and east of the Rocky Mountains, was not
generally used before the early 20th century. Nevin Fenneman's 1916 study, Physiographic Subdivision of the United
States, brought the term Great Plains into more widespread usage. Before that the region was almost invariably called
the High Plains, in contrast to the lower Prairie Plains of the Midwestern states. Today the term "High Plains" is
used for a subregion of the Great Plains. Much of the Great Plains became open range, or rangeland where cattle roamed
free, hosting ranching operations where anyone was theoretically free to run cattle. In the spring and fall, ranchers
held roundups where their cowboys branded new calves, treated animals and sorted the cattle for sale. Such ranching
began in Texas and gradually moved northward. From 1866 to 1895, cowboys herded 10 million cattle north to railheads such
as Dodge City, Kansas and Ogallala, Nebraska; from there, cattle were shipped eastward. The first recorded encounters
between Europeans and Native Americans in the Great Plains occurred in Texas, Kansas and Nebraska from 1540 to 1542,
with the arrival of the Spanish conquistador Francisco Vázquez de Coronado. In the same period, Hernando
de Soto crossed what is now Oklahoma and Texas in a west-northwest direction. Today this route is known as the De Soto
Trail. The Spanish thought the Great Plains were the location of the mythological Quivira and Cíbola, a place said
to be rich in gold. The 100th meridian roughly corresponds with the line that divides the Great Plains into an area
that receives 20 inches (510 millimetres) or more of rainfall per year and an area that receives less than 20 in (510
mm). In this context, the High Plains, as well as Southern Alberta, south-western Saskatchewan and Eastern Montana
are mainly semi-arid steppe land and are generally characterised by rangeland or marginal farmland. The region (especially
the High Plains) is periodically subjected to extended periods of drought; high winds in the region may then generate
devastating dust storms. The eastern Great Plains near the eastern boundary falls in the humid subtropical climate
zone in the southern areas, and the northern and central areas fall in the humid continental climate. After 1870,
the new railroads across the Plains brought hunters who killed off almost all the bison for their hides. The railroads
offered attractive packages of land and transportation to European farmers, who rushed to settle the land. They (and
Americans as well) also took advantage of the homestead laws to obtain free farms. Land speculators and local boosters
identified many potential towns, and those reached by the railroad had a chance, while the others became ghost towns.
In Kansas, for example, nearly 5000 towns were mapped out, but by 1970 only 617 were actually operating. In the mid-20th
century, closeness to an interstate interchange determined whether a town would flourish or struggle for business. The
rural Plains have lost a third of their population since 1920. Several hundred thousand square miles (several hundred
thousand square kilometers) of the Great Plains have fewer than 6 inhabitants per square mile (2.3 inhabitants per
square kilometer)—the density standard Frederick Jackson Turner used to declare the American frontier "closed" in
1893. Many have fewer than 2 inhabitants per square mile (0.77 inhabitants per square kilometer). There are more
than 6,000 ghost towns in the state of Kansas alone, according to Kansas historian Daniel Fitzgerald. This problem
is often exacerbated by the consolidation of farms and the difficulty of attracting modern industry to the region.
In addition, the smaller school-age population has forced the consolidation of school districts and the closure of
high schools in some communities. The continuing population loss has led some to suggest that the current use of
the drier parts of the Great Plains is not sustainable, and there has been a proposal - the "Buffalo Commons" - to
return approximately 139,000 square miles (360,000 km2) of these drier parts to native prairie land. Although the
eastern image of farm life in the prairies emphasized the isolation of the lonely farmer and wife, plains residents
created busy social lives for themselves. They often sponsored activities that combined work, food and entertainment
such as barn raisings, corn huskings, quilting bees, Grange meetings, church activities and school functions. Women
organized shared meals and potluck events, as well as extended visits between families. The Grange was a nationwide
farmers' organization that reserved high offices for women and gave them a voice in public affairs. From the 1950s
on, many areas of the Great Plains have become productive crop-growing areas because of extensive irrigation on large
landholdings. The United States is a major exporter of agricultural products. The southern portion of the Great Plains
lies over the Ogallala Aquifer, a huge underground layer of water-bearing strata dating from the last ice age. Center
pivot irrigation is used extensively in drier sections of the Great Plains, resulting in aquifer depletion at a rate
that is greater than the ground's ability to recharge. The Great Plains is the broad expanse of flat land (a plain),
much of it covered in prairie, steppe and grassland, that lies west of the Mississippi River tallgrass prairie states
and east of the Rocky Mountains in the United States and Canada. This area covers parts, but not all, of the states
of Colorado, Kansas, Montana, Nebraska, New Mexico, North Dakota, Oklahoma, South Dakota, Texas, and Wyoming, and
the Canadian provinces of Alberta, Manitoba and Saskatchewan. The region is known for supporting extensive cattle
ranching and dry farming. The North American Environmental Atlas, produced by the Commission for Environmental Cooperation,
a NAFTA agency composed of the geographical agencies of the Mexican, American, and Canadian governments, uses the
"Great Plains" as an ecoregion synonymous with predominant prairies and grasslands rather than as a physiographic region
defined by topography. The Great Plains ecoregion includes five sub-regions: Temperate Prairies, West-Central Semi-Arid
Prairies, South-Central Semi-Arid Prairies, Texas Louisiana Coastal Plains, and Tamaulipas-Texas Semi-Arid Plain,
which overlap or expand upon other Great Plains designations. The railroads opened up the Great Plains for settlement,
as it was now possible to ship wheat and other crops at low cost to urban markets in the East and in Europe. Homestead
land was free for American settlers. Railroads sold their land at cheap rates to immigrants in expectation they would
generate traffic as soon as farms were established. Immigrants poured in, especially from Germany and Scandinavia.
On the plains, very few single men attempted to operate a farm or ranch by themselves; they clearly understood the
need for a hard-working wife, and numerous children, to handle the many chores, including child-rearing, feeding
and clothing the family, managing the housework, feeding the hired hands, and, especially after the 1930s, handling
paperwork and financial details. During the early years of settlement, farm women played an integral role in assuring
family survival by working outdoors. After approximately one generation, women increasingly left the fields, thus
redefining their roles within the family. New technology including sewing and washing machines encouraged women to
turn to domestic roles. The scientific housekeeping movement was promoted across the land by the media and government
extension agents, as well as by county fairs that featured achievements in home cookery and canning, advice columns
for women regarding farm bookkeeping, and home economics courses in the schools. During the Cenozoic era, specifically
about 25 million years ago during the Miocene and Pliocene epochs, the continental climate became favorable to the
evolution of grasslands. Existing forest biomes declined and grasslands became much more widespread. The grasslands
provided a new niche for mammals, including many ungulates and glires, that switched from browsing diets to grazing
diets. Traditionally, the spread of grasslands and the development of grazers have been strongly linked. However,
an examination of mammalian teeth suggests that it is the open, gritty habitat and not the grass itself which is
linked to diet changes in mammals, giving rise to the "grit, not grass" hypothesis. To allow for agricultural development
of the Great Plains and house a growing population, the US passed the Homestead Act of 1862, which allowed a settler
to claim up to 160 acres (65 ha) of land, provided that he lived on it for a period of five years and cultivated
it. The provisions were expanded under the Kinkaid Act of 1904 to include a homestead of an entire section (640 acres). Hundreds
of thousands of people claimed such homesteads, sometimes building sod houses out of the very turf of their land.
Many of them were not skilled dryland farmers and failures were frequent. Much of the Plains were settled during
relatively wet years. Government experts did not understand how farmers should cultivate the prairies and gave advice
counter to what would have worked. Germans from Russia who had previously farmed, under similar
circumstances, in what is now Ukraine were marginally more successful than other homesteaders. The Dominion Lands
Act of 1871 served a similar function for establishing homesteads on the prairies in Canada.
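The section above quotes several figures in both US customary and metric units (rainfall, homestead acreage, population density, the Buffalo Commons area). As a quick arithmetic cross-check, the rounded metric values follow from the standard conversion factors; the variable names below are illustrative only:

```python
# Standard conversion factors; the results cross-check the rounded
# metric figures quoted in the text.
IN_TO_MM = 25.4          # inches to millimetres
ACRE_TO_HA = 0.4046856   # acres to hectares
SQMI_TO_SQKM = 2.589988  # square miles to square kilometres

rain_mm = 20 * IN_TO_MM                       # 508 mm, quoted as ~510 mm
homestead_ha = 160 * ACRE_TO_HA               # ~65 ha per Homestead Act claim
density_km2 = 6 / SQMI_TO_SQKM                # ~2.3 inhabitants per km²
buffalo_commons_km2 = 139_000 * SQMI_TO_SQKM  # ~360,000 km²
```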
Infrared radiation is used in industrial, scientific, and medical applications. Night-vision devices using active near-infrared
illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses
sensor-equipped telescopes to penetrate dusty regions of space, such as molecular clouds; detect objects such as
planets, and to view highly red-shifted objects from the early days of the universe. Infrared thermal-imaging cameras
are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, and to detect overheating
of electrical apparatus. The onset of infrared is defined (according to different standards) at various values typically
between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human
eye is markedly less sensitive to light above 700 nm wavelength, so longer wavelengths make insignificant contributions
to scenes illuminated by common light sources. However, particularly intense near-IR light (e.g., from IR lasers,
IR LED sources, or from bright daylight with the visible light removed by colored gels) can be detected up to approximately
780 nm, and will be perceived as red light. Sources providing wavelengths as long as 1050 nm can be seen as a dull
red glow in intense sources, causing some difficulty in near-IR illumination of scenes in the dark (usually this
practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all
visible light leaks from around an IR-filter are blocked, and the eye is given a moment to adjust to the extremely
dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect
that consists of IR-glowing foliage. The concept of emissivity is important in understanding the infrared emissions
of objects. This is a property of a surface that describes how its thermal emissions deviate from the ideal of a
black body. To further explain, two objects at the same physical temperature will not show the same infrared image
if they have differing emissivity. For example, when a camera is set to a single pre-set emissivity value, objects
with higher actual emissivity will appear hotter, and those with lower emissivity cooler, than they really are. For that reason, incorrect selection of
emissivity will give inaccurate results when using infrared cameras and pyrometers. Infrared vibrational spectroscopy
(see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their
constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group
of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions
of the group as a whole. If an oscillation leads to a change in dipole in the molecule then it will absorb a photon
that has the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared
light. Typically, the technique is used to study organic compounds using light radiation from 4000–400 cm−1, the
mid-infrared. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information
about the sample composition in terms of chemical groups present and also its purity (for example, a wet sample will
show a broad O-H absorption around 3200 cm−1). In infrared photography, infrared filters are used to capture the
near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have
less effective filters and can "see" intense near-infrared, appearing as a bright purple-white color. This is especially
pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared
interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared
or terahertz radiation. Lack of bright sources can make terahertz photography more challenging than most other infrared
imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments
such as terahertz time-domain spectroscopy. Infrared reflectography, as it is called by art conservators,
can be applied to paintings to reveal underlying layers in a completely non-destructive manner, in particular the
underdrawing or outline drawn by the artist as a guide. This often reveals the artist's use of carbon black, which
shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting.
Art conservators are looking to see whether the visible layers of paint differ from the underdrawing or layers in
between – such alterations are called pentimenti when made by the original artist. This is very useful information
in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered
by over-enthusiastic restoration work. In general, the more pentimenti the more likely a painting is to be the prime
version. It also gives useful insights into working practices. The discovery of infrared radiation is ascribed to
the astronomer William Herschel, who published his results in 1800 before the Royal
Society of London. Herschel used a prism to refract light from the sun and detected the infrared, beyond the red
part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result
and called this radiation "calorific rays". The term "infrared" did not appear until late in the 19th century. Infrared radiation
is popularly known as "heat radiation", but light and electromagnetic waves of any frequency will
heat surfaces that absorb them. Infrared light from the Sun accounts for 49% of the heating of Earth, with the rest
being caused by visible light that is absorbed then re-radiated at longer wavelengths. Visible light or ultraviolet-emitting
lasers can char paper and incandescently hot objects emit visible radiation. Objects at room temperature will emit
radiation concentrated mostly in the 8 to 25 µm band, but this is not distinct from the emission of visible light
by incandescent objects and ultraviolet by even hotter objects (see black body and Wien's displacement law). Infrared
tracking, also known as infrared homing, refers to a passive missile guidance system, which uses the emission from
a target of electromagnetic radiation in the infrared part of the spectrum to track it. Missiles that use infrared
seeking are often referred to as "heat-seekers", since infrared (IR) is just below the visible spectrum of light
in frequency and is radiated strongly by hot bodies. Many objects such as people, vehicle engines, and aircraft generate
and retain heat, and as such, are especially visible in the infrared wavelengths of light compared to objects in
the background. High, cold ice clouds such as cirrus or cumulonimbus show up bright white; lower, warmer clouds such
as stratus or stratocumulus show up as grey, with intermediate clouds shaded accordingly. Hot land surfaces will show
up as dark-grey or black. One disadvantage of infrared imagery is that low cloud such as stratus or fog can be a
similar temperature to the surrounding land or sea surface and does not show up. However, using the difference in
brightness of the IR4 channel (10.3–11.5 µm) and the near-infrared channel (1.58–1.64 µm), low cloud can be distinguished,
producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing
a continuous sequence of weather to be studied. The sensitivity of Earth-based infrared telescopes is significantly
limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside
of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory
at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer
from this handicap, and so outer space is considered the ideal location for infrared astronomy. Near-infrared is
the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively
further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs. bands,
water absorption) and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050
nm, while InGaAs's sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific
configuration). Unfortunately, international standards for these specifications are not currently available. Heat
is energy in transit that flows due to temperature difference. Unlike heat transmitted by thermal conduction or thermal
convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular
spectrum of many wavelengths that is associated with emission from an object, due to the vibration of its molecules
at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures
such radiations are associated with spectra far above the infrared, extending into visible, ultraviolet, and even
X-ray regions (i.e., the solar corona). Thus, the popular association of infrared radiation with thermal radiation
is only a coincidence based on typical (comparatively low) temperatures often found near the surface of planet Earth.
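Wien's displacement law, referenced above, makes the link between temperature and peak emission wavelength concrete. A minimal sketch (the constant is the standard Wien displacement constant; the function name is an illustrative choice):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, in metre-kelvins

def peak_wavelength_um(temp_kelvin):
    """Wavelength of peak thermal emission, in micrometres."""
    return WIEN_B / temp_kelvin * 1e6

# An object near room temperature (~293 K) peaks around 10 µm, inside
# the 8-25 µm band cited earlier; the Sun's photosphere (~5778 K)
# peaks near 0.5 µm, in visible light.
room = peak_wavelength_um(293)
sun = peak_wavelength_um(5778)
```

This is why room-temperature surroundings are "infrared-bright" to a thermal camera while only much hotter objects glow visibly.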
Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 900–14,000
nanometers or 0.9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects
based on their temperatures, according to the black body radiation law, thermography makes it possible to "see" one's
environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature,
therefore thermography allows one to see variations in temperature (hence the name). The infrared portion of the
spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy will
glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars
before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so
nearby cool objects such as planets can be more readily detected. (In the visible light spectrum, the glare from
the star will drown out the reflected light from a planet.) Infrared is used in night vision equipment when there
is insufficient visible light to see. Night vision devices operate through a process involving the conversion of
ambient light photons into electrons that are then amplified by a chemical and electrical process and then converted
back into visible light. Infrared light sources can be used to augment the available ambient light for conversion
by night vision devices, increasing in-the-dark visibility without actually using a visible light source. IR data
transmission is also employed in short-range communication among computer peripherals and personal digital assistants.
These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and
IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that is focused by a plastic lens
into a narrow beam. The beam is modulated, i.e. switched on and off, to encode the data. The receiver uses a silicon
photodiode to convert the infrared radiation to an electric current. It responds only to the rapidly pulsing signal
created by the transmitter, and filters out slowly changing infrared radiation from ambient light. Infrared communications
are useful for indoor use in areas of high population density. IR does not penetrate walls and so does not interfere
with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances.
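The modulate-and-filter scheme described above (a rapidly pulsed beam that the receiver separates from slowly changing ambient infrared) can be sketched as a toy simulation. This is an illustration of the principle only, not the RC-5 or IrDA protocol; all names and parameters are invented:

```python
import numpy as np

SAMPLES_PER_BIT = 100

def modulate(bits):
    """Encode each 1-bit as a rapidly pulsed carrier; 0-bits stay dark."""
    sig = np.zeros(len(bits) * SAMPLES_PER_BIT)
    for i, b in enumerate(bits):
        if b:
            start = i * SAMPLES_PER_BIT
            sig[start:start + SAMPLES_PER_BIT:4] = 1.0  # pulse every 4th sample
    return sig

def demodulate(received):
    """Respond only to rapid changes, ignoring slowly varying ambient light."""
    fast = np.abs(np.diff(received, prepend=received[0]))
    return [int(fast[i * SAMPLES_PER_BIT:(i + 1) * SAMPLES_PER_BIT].max() > 0.5)
            for i in range(len(received) // SAMPLES_PER_BIT)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
ambient = np.linspace(0.0, 0.4, len(bits) * SAMPLES_PER_BIT)  # slow drift
recovered = demodulate(modulate(bits) + ambient)
```

Because the ambient term changes slowly, its sample-to-sample differences are tiny and the receiver's "fast" channel sees only the transmitter's pulses, which is the same reason a real IR photodiode front end high-pass filters its input.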
Infrared remote control protocols like RC-5 and SIRC are used to encode these commands. In the semiconductor industry,
infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring
the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction
coefficient (k) can be determined via the Forouhi-Bloomer dispersion equations. The reflectance from the infrared
light can also be used to determine the critical dimension, depth, and sidewall angle of high aspect ratio trench
structures. Infrared cleaning is a technique used by some motion picture film scanners and flatbed
scanners to reduce or remove the effect of dust and scratches upon the finished scan. It works by collecting an additional
infrared channel from the scan at the same position and resolution as the three visible color channels (red, green,
and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches
and dust. Once located, those defects can be corrected by scaling or replaced by inpainting. Earth's surface and
the clouds absorb visible and invisible radiation from the sun and re-emit much of the energy as infrared back to
atmosphere. Certain substances in the atmosphere, chiefly cloud droplets and water vapor, but also carbon dioxide,
methane, nitrous oxide, sulfur hexafluoride, and chlorofluorocarbons, absorb this infrared, and re-radiate it in
all directions including back to Earth. Thus, the greenhouse effect keeps the atmosphere and surface much warmer
than if the infrared absorbers were absent from the atmosphere.
Biodiversity, a contraction of "biological diversity," generally refers to the variety and variability of life on Earth.
One of the most widely used definitions defines it in terms of the variability within species, between species, and
between ecosystems. It is a measure of the variety of organisms present in different ecosystems. This can refer to
genetic variation, ecosystem variation, or species variation (number of species) within an area, biome, or planet.
Terrestrial biodiversity tends to be greater near the equator, which seems to be the result of the warm climate and
high primary productivity. Biodiversity is not distributed evenly on Earth. It is richest in the tropics. Marine
biodiversity tends to be highest along coasts in the Western Pacific, where sea surface temperature is highest and
in the mid-latitudinal band in all oceans. There are latitudinal gradients in species diversity. Biodiversity generally
tends to cluster in hotspots, and has been increasing through time, though this growth is likely to slow in the future. This
multilevel construct is consistent with the usage of Dasmann and Lovejoy. An explicit definition consistent with this interpretation
was first given in a paper by Bruce A. Wilcox commissioned by the International Union for the Conservation of Nature
and Natural Resources (IUCN) for the 1982 World National Parks Conference. Wilcox's definition was "Biological diversity
is the variety of life forms...at all levels of biological systems (i.e., molecular, organismic, population, species
and ecosystem)...". The 1992 United Nations Earth Summit defined "biological diversity" as "the variability among
living organisms from all sources, including, 'inter alia', terrestrial, marine, and other aquatic ecosystems, and
the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems".
This definition is used in the United Nations Convention on Biological Diversity. On the other hand, changes through
the Phanerozoic correlate much better with the hyperbolic model (widely used in population biology, demography and
macrosociology, as well as fossil biodiversity) than with exponential and logistic models. The latter models imply
that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or
a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback.
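The distinction among these three growth models can be made concrete with a toy Euler integration. The coefficients r, K, and C below are illustrative, not fitted to fossil data:

```python
def simulate(deriv, n0, steps=1000, dt=0.01):
    """Euler-integrate dN/dt = deriv(N) starting from N(0) = n0."""
    n = n0
    for _ in range(steps):
        n += deriv(n) * dt
    return n

r, K, C = 0.5, 100.0, 200.0
exponential = lambda n: r * n                # first-order positive feedback
logistic    = lambda n: r * n * (1 - n / K)  # growth checked by resource limit K
hyperbolic  = lambda n: n * n / C            # second-order positive feedback
```

Exponential growth keeps a constant per-capita rate, logistic growth saturates at the carrying capacity K, while under the hyperbolic model the per-capita rate itself rises with N, so growth accelerates unless damped by other dynamics.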
The hyperbolic pattern of the world population growth arises from a second-order positive feedback between the population
size and the rate of technological growth. The hyperbolic character of biodiversity growth can be similarly accounted
for by a feedback between diversity and community structure complexity. The similarity between the curves of biodiversity
and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend
with cyclical and stochastic dynamics. Interspecific crop diversity is, in part, responsible for offering variety
in what we eat. Intraspecific diversity, the variety of alleles within a single species, also offers us choice in
our diets. If a crop fails in a monoculture, we rely on agricultural diversity to replant the land with something
new. If a wheat crop is destroyed by a pest we may plant a hardier variety of wheat the next year, relying on intraspecific
diversity. We may forgo wheat production in that area and plant a different species altogether, relying on interspecific
diversity. Even an agricultural society which primarily grows monocultures, relies on biodiversity at some point.
In absolute terms, the planet has lost 52% of its biodiversity since 1970 according to a 2014 study by the World
Wildlife Fund. The Living Planet Report 2014 claims that "the number of mammals, birds, reptiles, amphibians and
fish across the globe is, on average, about half the size it was 40 years ago". Within that total, terrestrial wildlife populations declined by 39%, marine wildlife by 39%, and freshwater wildlife by 76%. Biodiversity
took the biggest hit in Latin America, plummeting 83 percent. High-income countries showed a 10% increase in biodiversity,
which was canceled out by a loss in low-income countries. This is despite the fact that high-income countries use
five times the ecological resources of low-income countries, which has been explained as the result of a process whereby wealthy
nations are outsourcing resource depletion to poorer nations, which are suffering the greatest ecosystem losses.
A 2007 study conducted by the National Science Foundation found that biodiversity and genetic diversity are codependent—that
diversity among species requires diversity within a species, and vice versa. "If any one type is removed from the
system, the cycle can break down, and the community becomes dominated by a single species." At present, the most
threatened ecosystems are found in fresh water, according to the Millennium Ecosystem Assessment 2005, which was
confirmed by the "Freshwater Animal Diversity Assessment", organised by the biodiversity platform, and the French
Institut de recherche pour le développement (MNHNP). Finally, an introduced species may unintentionally injure a
species that depends on the species it replaces. In Belgium, Prunus spinosa from Eastern Europe leafs much sooner
than its West European counterparts, disrupting the feeding habits of the Thecla betulae butterfly (which feeds on
the leaves). Introducing new species often leaves endemic and other local species unable to compete with the exotic
species and unable to survive. The exotic organisms may be predators, parasites, or may simply outcompete indigenous
species for nutrients, water and light. The forests play a vital role in harbouring more than 45,000 floral and 81,000
faunal species of which 5150 floral and 1837 faunal species are endemic. Plant and animal species confined to a specific
geographical area are called endemic species. In reserved forests, rights to activities like hunting and grazing
are sometimes given to communities living on the fringes of the forest, who sustain their livelihood partially or
wholly from forest resources or products. Unclassed forests cover 6.4 percent of the total forest area. Global agreements such as the Convention on Biological Diversity give
"sovereign national rights over biological resources" (not property). The agreements commit countries to "conserve
biodiversity", "develop resources for sustainability" and "share the benefits" resulting from their use. Biodiverse
countries that allow bioprospecting or collection of natural products, expect a share of the benefits rather than
allowing the individual or institution that discovers/exploits the resource to capture them privately. Bioprospecting
can become a type of biopiracy when such principles are not respected.[citation needed] Rapid environmental changes
typically cause mass extinctions. More than 99 percent of all species, amounting to over five billion species, that
ever lived on Earth are estimated to be extinct. Estimates on the number of Earth's current species range from 10
million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described.
The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37, weighing some 50 billion tonnes. In
comparison, the total mass of the biosphere has been estimated to be as much as 4 TtC (trillion tons of carbon).
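Those two figures are mutually consistent under the standard assumption that one DNA base pair has a mass of roughly 650 daltons (650 g/mol); a quick arithmetic check:

```python
AVOGADRO = 6.022e23        # base pairs per mole
BP_MASS = 650.0            # grams per mole of base pairs (approximate)
base_pairs = 5.0e37        # estimated DNA base pairs on Earth

grams = base_pairs * BP_MASS / AVOGADRO
tonnes = grams / 1.0e6     # 1 tonne = 1e6 g
# tonnes comes out near 5.4e10, i.e. on the order of 50 billion tonnes
```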
The history of biodiversity during the Phanerozoic (the last 540 million years), starts with rapid growth during
the Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared. Over
the next 400 million years or so, invertebrate diversity showed little overall trend, and vertebrate diversity shows
an overall exponential trend. This dramatic rise in diversity was marked by periodic, massive losses of diversity
classified as mass extinction events. A significant loss occurred when rainforests collapsed in the Carboniferous.
The worst was the Permian-Triassic extinction event, 251 million years ago. Vertebrates took 30 million years to
recover from this event. Jared Diamond describes an "Evil Quartet" of habitat destruction, overkill, introduced species,
and secondary extinctions. Edward O. Wilson prefers the acronym HIPPO, standing for Habitat destruction, Invasive
species, Pollution, human over-Population, and Over-harvesting. The most authoritative classification in use today
is IUCN's Classification of Direct Threats which has been adopted by major international conservation organizations
such as the US Nature Conservancy, the World Wildlife Fund, Conservation International, and BirdLife International.
Endemic species can be threatened with extinction through the process of genetic pollution, i.e. uncontrolled hybridization,
introgression and genetic swamping. Genetic pollution leads to homogenization or replacement of local genomes as
a result of either a numerical and/or fitness advantage of an introduced species. Hybridization and introgression
are side-effects of introduction and invasion. These phenomena can be especially detrimental to rare species that
come into contact with more abundant ones. The abundant species can interbreed with the rare species, swamping its
gene pool. This problem is not always apparent from morphological (outward appearance) observations alone. Some degree
of gene flow is normal adaptation, and not all gene and genotype constellations can be preserved. However, hybridization
with or without introgression may, nevertheless, threaten a rare species' existence. The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates at least from 3.5 billion years ago,
during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. There
are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical
evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western
Greenland. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia.
According to one of the researchers, "If life arose relatively quickly on Earth ... then it could be common in the
universe." The fossil record suggests that the last few million years featured the greatest biodiversity in history.
However, not all scientists support this view, since there is uncertainty as to how strongly the fossil record is
biased by the greater availability and preservation of recent geologic sections. Some scientists believe that corrected
for sampling artifacts, modern biodiversity may not be much different from biodiversity 300 million years ago, whereas
others consider the fossil record reasonably reflective of the diversification of life. Estimates of the present
global macroscopic species diversity vary from 2 million to 100 million, with a best estimate of somewhere near 9
million, the vast majority arthropods. Diversity appears to increase continually in the absence of natural selection.
Agricultural diversity can also be divided by whether it is ‘planned’ diversity or ‘associated’ diversity. This is
a functional classification that we impose and not an intrinsic feature of life or diversity. Planned diversity includes
the crops which a farmer has encouraged, planted or raised (e.g.: crops, covers, symbionts and livestock, among others),
which can be contrasted with the associated diversity that arrives among the crops, uninvited (e.g.: herbivores,
weed species and pathogens, among others). Biodiversity's relevance to human health is becoming an international
political issue, as scientific evidence builds on the global health implications of biodiversity loss. This issue
is closely linked with the issue of climate change, as many of the anticipated health risks of climate change are
associated with changes in biodiversity (e.g. changes in populations and distribution of disease vectors, scarcity
of fresh water, impacts on agricultural biodiversity and food resources etc.) This is because the species most likely
to disappear are those that buffer against infectious disease transmission, while surviving species tend to be the
ones that increase disease transmission, such as that of West Nile Virus, Lyme disease and Hantavirus, according
to a study co-authored by Felicia Keesing, an ecologist at Bard College, and Drew Harvell, associate director
for Environment of the Atkinson Center for a Sustainable Future (ACSF) at Cornell University. Since life began on
Earth, five major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The
Phanerozoic eon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion—a period
during which the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive
biodiversity losses classified as mass extinction events. In the Carboniferous, rainforest collapse led to a great
loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate
recovery took 30 million years. The most recent, the Cretaceous–Paleogene extinction event, occurred 65 million years
ago and has often attracted more attention than others because it resulted in the extinction of the dinosaurs. The
number of species invasions has been on the rise at least since the beginning of the 1900s. Species are increasingly
being moved by humans (on purpose and accidentally). In some cases the invaders are causing drastic changes and damage
to their new habitats (e.g.: zebra mussels and the emerald ash borer in the Great Lakes region and the lion fish
along the North American Atlantic coast). Some evidence suggests that invasive species are competitive in their new
habitats because they are subject to less pathogen disturbance. Others report confounding evidence that occasionally
suggest that species-rich communities harbor many native and exotic species simultaneously while some say that diverse
ecosystems are more resilient and resist invasive plants and animals. An important question is, "do invasive species
cause extinctions?" Many studies cite effects of invasive species on natives, but not extinctions. Invasive species
seem to increase local (i.e.: alpha diversity) diversity, which decreases turnover of diversity (i.e.: beta diversity).
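The alpha/beta/gamma effect described here can be illustrated with Whittaker's multiplicative relation, beta = gamma / alpha (one common formulation among several); the species names below are of course hypothetical:

```python
def diversity(sites):
    """sites: one set of species per local site.
    Returns (alpha, beta, gamma): mean local richness,
    between-site turnover, and total regional richness."""
    gamma = len(set().union(*sites))
    alpha = sum(len(s) for s in sites) / len(sites)
    beta = gamma / alpha
    return alpha, beta, gamma

# Two communities with no shared species: maximal turnover.
natives = [{"oak", "elm"}, {"ash", "birch"}]
# The same invader establishes at both sites: alpha rises, beta falls.
invaded = [s | {"kudzu"} for s in natives]
```

Here diversity(natives) gives (2.0, 2.0, 4) while diversity(invaded) gives (3.0, about 1.67, 5): local richness goes up while between-site turnover goes down, exactly the homogenization pattern described.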
Overall gamma diversity may be lowered because species are going extinct because of other causes, but even some of
the most insidious invaders (e.g.: Dutch elm disease, emerald ash borer, chestnut blight in North America) have not
caused their host species to become extinct. Extirpation, population decline, and homogenization of regional biodiversity
are much more common. Human activities have frequently been the cause of invasive species circumventing their barriers,
by introducing them for food and other purposes. Human activities therefore allow species to migrate to new areas (and thus become invasive) on time scales much shorter than those historically required for a species to extend its range. Brazil's Atlantic Forest is considered one such hotspot, containing roughly 20,000 plant species,
1,350 vertebrates, and millions of insects, about half of which occur nowhere else.[citation needed] The island of
Madagascar and India are also particularly notable. Colombia is characterized by high biodiversity, with the highest
rate of species by area unit worldwide and it has the largest number of endemics (species that are not found naturally
anywhere else) of any country. About 10% of the species of the Earth can be found in Colombia, including over 1,900 species of bird, more than in Europe and North America combined. Colombia has 10% of the world's mammal species, 14% of its amphibian species, and 18% of its bird species. Madagascar's dry deciduous forests and lowland
rainforests possess a high ratio of endemism.[citation needed] Since the island separated from mainland Africa 66
million years ago, many species and ecosystems have evolved independently.[citation needed] Indonesia's 17,000 islands
cover 735,355 square miles (1,904,560 km2) and contain 10% of the world's flowering plants, 12% of mammals, and 17%
of reptiles, amphibians and birds—along with nearly 240 million people. Many regions of high biodiversity and/or
endemism arise from specialized habitats which require unusual adaptations, for example, alpine environments in high
mountains, or Northern European peat bogs.[citation needed] The existence of a "global carrying capacity", limiting
the amount of life that can live at once, is debated, as is the question of whether such a limit would also cap the
number of species. While records of life in the sea show a logistic pattern of growth, life on land (insects, plants and tetrapods) shows an exponential rise in diversity. As one author states, "Tetrapods have not yet invaded 64 per
cent of potentially habitable modes, and it could be that without human influence the ecological and taxonomic diversity
of tetrapods would continue to increase in an exponential fashion until most or all of the available ecospace is
filled." From 1950 to 2011, world population increased from 2.5 billion to 7 billion and is forecast to reach a plateau
of more than 9 billion during the 21st century. Sir David King, former chief scientific adviser to the UK government,
told a parliamentary inquiry: "It is self-evident that the massive growth in the human population through the 20th
century has had more impact on biodiversity than any other single factor." At least until the middle of the 21st
century, worldwide losses of pristine biodiverse land will probably depend much on the worldwide human birth rate.
The control of associated biodiversity is one of the great agricultural challenges that farmers face. On monoculture
farms, the approach is generally to eradicate associated diversity using a suite of biologically destructive pesticides,
mechanized tools and transgenic engineering techniques, then to rotate crops. Although some polyculture farmers use
the same techniques, they also employ integrated pest management strategies as well as strategies that are more labor-intensive,
but generally less dependent on capital, biotechnology and energy. National parks and nature reserves are areas selected by governments or private organizations for special protection against damage or degradation, with the objective of biodiversity and landscape conservation. National parks are usually owned and managed by national or state governments.
A limit is placed on the number of visitors permitted to enter certain fragile areas. Designated trails or roads
are created. The visitors are allowed to enter only for study, cultural and recreation purposes. Forestry operations,
grazing of animals and hunting of animals are prohibited. Exploitation of habitat or wildlife is banned. During the
last century, decreases in biodiversity have been increasingly observed. In 2007, German Federal Environment Minister
Sigmar Gabriel cited estimates that up to 30% of all species will be extinct by 2050. Of these, about one eighth
of known plant species are threatened with extinction. Estimates reach as high as 140,000 species per year (based
on Species-area theory). This figure indicates unsustainable ecological practices, because few species emerge each
year.[citation needed] Almost all scientists acknowledge that the rate of species loss is greater now than at any
time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates.
As of 2012, some studies suggest that 25% of all mammal species could be extinct in 20 years. Habitat size and numbers
of species are systematically related. Physically larger species and those living at lower latitudes or in forests
or oceans are more sensitive to reduction in habitat area. Conversion to "trivial" standardized ecosystems (e.g.,
monoculture following deforestation) effectively destroys habitat for the more diverse species that preceded the
conversion. In some countries lack of property rights or lax law/regulatory enforcement necessarily leads to biodiversity
loss (degradation costs having to be supported by the community).[citation needed] Not all introduced species are
invasive, nor are all invasive species deliberately introduced. In cases such as the zebra mussel, invasion of US waterways
was unintentional. In other cases, such as mongooses in Hawaii, the introduction is deliberate but ineffective (nocturnal
rats were not vulnerable to the diurnal mongoose). In other cases, such as oil palms in Indonesia and Malaysia, the
introduction produces substantial economic benefits, but the benefits are accompanied by costly unintended consequences.
Less than 1% of all species that have been described have been studied beyond simply noting their existence. The
vast majority of Earth's species are microbial. Contemporary biodiversity physics is "firmly fixated on the visible
[macroscopic] world". For example, microbial life is metabolically and environmentally more diverse than multicellular
life (see e.g., extremophile). "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs." The inverse relationship of size and population recurs higher on the evolutionary
ladder—"to a first approximation, all multicellular species on Earth are insects". Insect extinction rates are high—supporting
the Holocene extinction hypothesis. The number and variety of plants, animals and other organisms that exist is known
as biodiversity. It is an essential component of nature, and it ensures the survival of the human species by providing
food, fuel, shelter, medicines and other resources to mankind. The richness of biodiversity depends on the climatic
conditions and area of the region. All species of plants taken together are known as flora and about 70,000 species
of plants are known to date. All species of animals taken together are known as fauna, which includes birds, mammals,
fish, reptiles, insects, crustaceans, molluscs, etc. The term biological diversity was used first by wildlife scientist
and conservationist Raymond F. Dasmann in his 1968 lay book A Different Kind of Country, which advocated conservation.
The term was widely adopted only after more than a decade, when in the 1980s it came into common usage in science
and environmental policy. Thomas Lovejoy, in the foreword to the book Conservation Biology, introduced the term to
the scientific community. Until then the term "natural diversity" was common, introduced by The Science Division
of The Nature Conservancy in an important 1975 study, "The Preservation of Natural Diversity." By the early 1980s
TNC's Science program and its head, Robert E. Jenkins, Lovejoy and other leading conservation scientists at the time
in America advocated the use of the term "biological diversity". Biodiversity provides critical support for drug
discovery and the availability of medicinal resources. A significant proportion of drugs are derived, directly or
indirectly, from biological sources: at least 50% of the pharmaceutical compounds on the US market are derived from
plants, animals, and micro-organisms, while about 80% of the world population depends on medicines from nature (used
in either modern or traditional medical practice) for primary healthcare. Only a tiny fraction of wild species has
been investigated for medical potential. Biodiversity has been critical to advances throughout the field of bionics.
Evidence from market analysis and biodiversity science indicates that the decline in output from the pharmaceutical
sector since the mid-1980s can be attributed to a move away from natural product exploration ("bioprospecting") in
favor of genomics and synthetic chemistry; indeed, claims about the value of undiscovered pharmaceuticals may not
provide enough incentive for companies in free markets to search for them because of the high cost of development;
meanwhile, natural products have a long history of supporting significant economic and health innovation. Marine
ecosystems are particularly important, although inappropriate bioprospecting can increase biodiversity loss, as well
as violating the laws of the communities and states from which the resources are taken. In agriculture and animal
husbandry, the Green Revolution popularized the use of conventional hybridization to increase yield. Often hybridized
breeds originated in developed countries and were further hybridized with local varieties in the developing world
to create high yield strains resistant to local climate and diseases. Local governments and industry have been pushing
hybridization. Formerly huge gene pools of various wild and indigenous breeds have collapsed causing widespread genetic
erosion and genetic pollution. This has resulted in loss of genetic diversity and biodiversity as a whole.
Originally based on the English alphabet, ASCII encodes 128 specified characters into seven-bit integers. The characters encoded are numbers 0 to 9, lowercase letters a to z, uppercase letters
A to Z, basic punctuation symbols, control codes that originated with Teletype machines, and a space. For example,
lowercase j would become binary 1101010 and decimal 106. ASCII includes definitions for 128 characters: 33 are non-printing
control characters (many now obsolete) that affect how text and space are processed, and 95 printable characters, including the space (which is considered an invisible graphic). The code itself was patterned so that most control
codes were together, and all graphic codes were together, for ease of identification. The first two columns (32 positions)
were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20 (hexadecimal); for the same reason, many special signs commonly used as separators
were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and
chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes,
as was done in the DEC SIXBIT code. Lowercase letters were therefore not interleaved with uppercase. To keep options
available for lowercase letters and other graphics, the special and numeric codes were arranged before the letters,
and the letter A was placed in position 41 (hexadecimal) to match the draft of the corresponding British standard.
The digits 0–9 were arranged so they correspond to values in binary prefixed with 011, making conversion with binary-coded
decimal straightforward. ASCII was incorporated into the Unicode character set as the first 128 symbols, so the 7-bit
ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with 7-bit
ASCII, as a UTF-8 file containing only ASCII characters is identical to an ASCII file containing the same sequence
of characters. Even more importantly, forward compatibility is ensured as software that recognizes only 7-bit ASCII
characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII
extensions such as ISO-8859-1) will preserve UTF-8 data unchanged. When a Teletype 33 ASR equipped with the automatic
paper tape reader received a Control-S (XOFF, an abbreviation for transmit off), it caused the tape reader to stop;
receiving Control-Q (XON, "transmit on") caused the tape reader to resume. This technique became adopted by several
early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending
overflow; it persists to this day in many systems as a manual output control technique. On some systems Control-S
retains its meaning but Control-Q is replaced by a second Control-S to resume output. The 33 ASR also could be configured
to employ Control-R (DC2) and Control-T (DC4) to start and stop the tape punch; on some units equipped with this
function, the corresponding keycap lettering above the letter read TAPE for start and TAPE with an overline for stop.
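The Control-S/Control-Q handshake survives as software flow control. A minimal sketch of the receiver-driven pacing, assuming nothing beyond the two ASCII control codes (the class name is illustrative, not any particular OS's implementation):

```python
XOFF = 0x13  # Control-S, ASCII DC3: "transmit off"
XON  = 0x11  # Control-Q, ASCII DC1: "transmit on"

class PacedSender:
    """Honors XON/XOFF flow-control bytes sent back by the receiver."""
    def __init__(self):
        self.paused = False

    def handle_control(self, byte):
        if byte == XOFF:     # receiver's buffer is filling: stop sending
            self.paused = True
        elif byte == XON:    # receiver has drained its buffer: resume
            self.paused = False

    def send(self, data):
        # Real implementations queue data while paused; here we just signal.
        return None if self.paused else data
```

Note that the codes are exactly the ASCII values produced by holding Control while typing S or Q: the Control key clears the high bits, so S (0x53) becomes 0x13 and Q (0x51) becomes 0x11.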
DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters, CR and LF, to mark the end of a line so
that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called
CRTs or terminals) came along, the convention was so well established that backward compatibility necessitated continuing
the convention. When Gary Kildall cloned RT-11 to create CP/M he followed established DEC convention. Until the introduction
of PC DOS in 1981, IBM had no hand in this because their 1970s operating systems used EBCDIC instead of ASCII and
they were oriented toward punch-card input and line printer output on which the concept of carriage return was meaningless.
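Software bridging these line-ending conventions still has to translate between them; a minimal sketch (the function name is illustrative) that normalizes lone LF, lone CR, or CR-LF pairs to the CR-LF convention:

```python
def to_crlf(text):
    """Normalize any mix of LF, CR, and CR-LF line endings to CR-LF."""
    # Collapse CR-LF pairs and lone CRs down to LF, then expand uniformly.
    unified = text.replace("\r\n", "\n").replace("\r", "\n")
    return unified.replace("\n", "\r\n")
```

The two-pass approach matters: expanding LF to CR-LF before collapsing lone CRs would double the CR in existing CR-LF pairs.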
IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being a clone of CP/M,
and Windows inherited it from MS-DOS. C trigraphs were created to solve this problem for ANSI C, although their late
introduction and inconsistent implementation in compilers limited their use. Many programmers kept their computers
on US-ASCII, so plain-text in Swedish, German etc. (for example, in e-mail or Usenet) contained "{, }" and similar
variants in the middle of words, something those programmers got used to. For example, a Swedish programmer mailing
another programmer asking if they should go for lunch, could get "N{ jag har sm|rg}sar." as the answer, which should
be "Nä jag har smörgåsar." meaning "No I've got sandwiches." The X3.2 subcommittee designed ASCII based on the earlier
teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit
patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate
with each other and to process, store, and communicate character-oriented information such as written language. Before
ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to
25 special graphic symbols. To include all these, and control characters compatible with the Comité Consultatif International
Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard, Fieldata, and early
EBCDIC, more than 64 codes were required for ASCII. ASCII itself was first used commercially during 1963 as a seven-bit
teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used
the earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features
such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work – according
to Bemer, "so much so that the code that was to become ASCII was first called the Bemer-Ross Code in Europe". Because
of his extensive work on ASCII, Bemer has been called "the father of ASCII." For example, character 10 represents
the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". RFC
2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace
control characters. Except for the control characters that prescribe elementary line-oriented formatting, ASCII does
not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such
as markup languages, address page and document layout and formatting. Some software assigned special meanings to
ASCII characters sent to the software from the terminal. Operating systems from Digital Equipment Corporation, for
example, interpreted a DEL input character as meaning "remove the previously typed character", and this interpretation
also became common in Unix systems. Most other systems used BS for that meaning and used DEL to mean "remove the
character at the cursor".[citation needed] That latter interpretation is the most common now.[citation needed] Computers
attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings,
machines running operating systems such as Multics using LF line endings, and machines running operating systems
such as OS/360 that represented lines as a character count followed by the characters of the line and that used EBCDIC
rather than ASCII. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between
hosts with different line-ending conventions and character sets could be supported by transmitting a standard text
format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would
translate between the local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including
use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII
mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used
for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention. From early in its
development, ASCII was intended to be just one of several national variants of an international character code standard,
ultimately published as ISO/IEC 646 (1972), which would share most characters in common but assign other locally
useful characters to several code points reserved for "national use." However, the four years that elapsed between
the publication of ASCII-1963 and ISO's first acceptance of an international recommendation during 1967 caused ASCII's
choices for the national use characters to seem to be de facto standards for the world, causing confusion and incompatibility
once other countries did begin to make their own assignments to these code points. Most early home computer systems
developed their own 8-bit character sets containing line-drawing and game glyphs, and often filled in some or all
of the control characters from 0–31 with more graphics. Kaypro CP/M computers used the "upper" 128 characters for
the Greek alphabet. The IBM PC defined code page 437, which replaced the control-characters with graphic symbols
such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such
as DOS supported these code pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation
developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal as one of the first extensions
designed more for international languages than for block graphics. The Macintosh defined Mac OS Roman, and PostScript also defined its own set; both contained international letters and typographic punctuation marks instead of graphics, more like modern character sets. ASCII (i/ˈæski/ ASS-kee), abbreviated from American Standard Code for
Information Interchange, is a character-encoding scheme (the IANA prefers the name US-ASCII). ASCII codes represent
text in computers, communications equipment, and other devices that use text. Most modern character-encoding schemes
are based on ASCII, though they support many additional characters. ASCII was the most common character encoding
on the World Wide Web until December 2007, when it was surpassed by UTF-8, which is fully backward compatible with ASCII.
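This backward compatibility is easy to demonstrate: any text containing only ASCII characters encodes to exactly the same bytes under both schemes. A minimal Python illustration (not tied to any particular system):

```python
# Pure-ASCII text yields identical bytes under ASCII and UTF-8,
# because UTF-8 encodes code points 0-127 as single, unchanged bytes.
text = "Hello, ASCII!"
assert text.encode("ascii") == text.encode("utf-8")

# Characters outside ASCII use multi-byte sequences whose bytes are
# all >= 0x80, so they can never be confused with ASCII characters.
assert "é".encode("utf-8") == b"\xc3\xa9"
```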
The committee debated the possibility of a shift function (like in ITA2), which would allow more than 64 codes to
be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the
following character codes. It allows compact encoding, but is less reliable for data transmission as an error in
transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided
against shifting, and so ASCII required at least a seven-bit code. Many more of the control codes have
been given meanings quite different from their original ones. The "escape" character (ESC, code 27), for example,
was intended originally to allow sending other control characters as literals instead of invoking their meaning.
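The idea of carrying a control character "as a literal" survives in most programming languages' string escapes; a Python illustration:

```python
# A control character can be carried as data by escaping it rather than
# letting it act: here ESC (code 27) travels inside a string literal.
ESC = "\x1b"
assert ord(ESC) == 27

# repr() escapes it back to a printable form instead of emitting it:
assert repr(ESC) == "'\\x1b'"
```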
This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain
characters have a reserved meaning. Over time this meaning has been co-opted and has eventually been changed. In
modern use, an ESC sent to the terminal usually indicates the start of a command sequence, usually in the form of
a so-called "ANSI escape code" (or, more properly, a "Control Sequence Introducer") beginning with ESC followed by
a "[" (left-bracket) character. An ESC sent from the terminal is most often used as an out-of-band character used
to terminate an operation, as in the TECO and vi text editors. In graphical user interface (GUI) and windowing systems,
ESC generally causes an application to abort its current operation or to exit (terminate) altogether. Older operating
systems such as TOPS-10, along with CP/M, tracked file length only in units of disk blocks and used Control-Z (SUB)
to mark the end of the actual text in the file. For this reason, EOF, or end-of-file, was used colloquially and conventionally
as a three-letter acronym for Control-Z instead of SUBstitute. The end-of-text code (ETX), also known as Control-C,
was inappropriate for a variety of reasons, while using Z as the control code to end a file is analogous to it ending
the alphabet and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses
the ETX code to interrupt and halt a program via an input data stream, usually from a keyboard. ASCII
developed from telegraphic codes. Its first commercial use was as a seven-bit teleprinter code promoted by Bell data
services. Work on the ASCII standard began on October 6, 1960, with the first meeting of the American Standards Association's
(ASA) X3.2 subcommittee. The first edition of the standard was published during 1963, underwent a major revision
during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes, the proposed
Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features
for devices other than teleprinters. The committee considered an eight-bit code, since eight bits (octets) would
allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require
all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to
minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one
position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets
as the native data type) that did not use parity checking typically set the eighth bit to 0. With the other special
characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any
assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate
at the time whether there should be more control characters rather than the lowercase alphabet. The indecision
did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase
characters to columns 6 and 7, and International Organization for Standardization TC 97 SC 2 voted during October
to incorporate the change into its draft standard. The X3.2.4 task group voted its approval for the change to ASCII
at its May 1963 meeting. Locating the lowercase letters in columns 6 and 7 caused the characters to differ in bit
pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction
of keyboards and printers. Other international standards bodies have ratified character encodings such as ISO/IEC
646 that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet
and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£). Almost
every country needed an adapted version of ASCII, since ASCII suited the needs of only the USA and a few other countries.
For example, Canada had its own version that supported French characters. Other adapted encodings include ISCII (India),
VISCII (Vietnam), and YUSCII (Yugoslavia). Although these encodings are sometimes referred to as ASCII, true ASCII
is defined strictly only by the ANSI standard. Probably the most influential single device on the interpretation
of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch
option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some
ways less fragile than magnetic tape. In particular, the Teletype Model 33 machine assignments for codes 17 (Control-Q,
DC1, also known as XON), 19 (Control-S, DC3, also known as XOFF), and 127 (Delete) became de facto standards. The
Model 33 was also notable for taking the description of Control-G (BEL, meaning audibly alert the operator) literally
as the unit contained an actual bell which it rang when it received a BEL character. Because the keytop for the O
key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant
use of code 15 (Control-O, Shift In) interpreted as "delete previous character" was also adopted by many early timesharing
systems but eventually became neglected. The inherent ambiguity of many control characters, combined with their historical
usage, created problems when transferring "plain text" files between systems. The best example of this is the newline
problem on various operating systems. Teletype machines required that a line of text be terminated with both "Carriage
Return" (which moves the printhead to the beginning of the line) and "Line Feed" (which advances the paper one line
without moving the printhead). The name "Carriage Return" comes from the fact that on a manual typewriter the carriage
holding the paper moved while the position where the typebars struck the ribbon remained stationary. The entire carriage
had to be pushed (returned) to the right in order to position the left margin of the paper for the next line. Many
of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important
subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed
the standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of
23456789- were "#$%_&'() – early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter
L) instead, but the 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in the second
column, rows 1–5, corresponding to the digits 1–5 in the adjacent column. The parentheses could not correspond to
9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by
removing _ (underscore) from 6 and shifting the remaining characters left, which corresponded to many European typewriters
that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably
the Teletype Model 33, which used the left-shifted layout corresponding to ASCII, not to traditional mechanical typewriters.
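Both the bit-paired shift relationship and the single-bit case difference described earlier show up directly in the code values; a small Python check (illustrative, not part of any standard):

```python
# Bit-paired layout: on an ASCII "bit-paired" keyboard, the Shift key
# effectively flipped a single bit of the transmitted code.
assert ord("1") ^ ord("!") == 0x10   # digit column vs shifted-symbol column
assert ord("5") ^ ord("%") == 0x10

# Likewise, placing lowercase in columns 6 and 7 leaves it one bit away
# from uppercase, simplifying case-insensitive matching:
assert ord("a") ^ ord("A") == 0x20
assert chr(ord("G") | 0x20) == "g"   # set bit 5 to lowercase a letter
```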
Electric typewriters, notably the more recently introduced IBM Selectric (1961), used a somewhat different layout
that has become standard on computers—following the IBM PC (1981), especially Model M (1984)—and thus shift values
for symbols on modern keyboards do not correspond as closely to the ASCII table as earlier keyboards did. The /?
pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so
they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged
mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=. Code 127 is officially named "delete" but
the Teletype label was "rubout". Since the original standard did not give detailed interpretation for most control
codes, interpretations of this code varied. The original Teletype meaning, and the intent of the standard, was to
make it an ignored character, the same as NUL (all zeroes). This was useful specifically for paper tape, because
punching the all-ones bit pattern on top of an existing mark would obliterate it. Tapes designed to be "hand edited"
could even be produced with spaces of extra NULs (blank tape) so that a block of characters could be "rubbed out"
and then replacements put into the empty space. Unfortunately, requiring two characters to mark the end of a line
introduces unnecessary complexity and questions as to how to interpret each character when encountered alone. To
simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as a line terminator.
Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. The original Macintosh OS,
Apple DOS, and ProDOS, on the other hand, used carriage return (CR) alone as a line terminator; however, since Apple
replaced these operating systems with the Unix-based OS X operating system, they now use line feed (LF) as well.
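The competing conventions above can be sketched, and normalized, in a few lines; this is a common idiom rather than part of any of the historical systems:

```python
# The same three lines of text under the historical conventions:
crlf_text = b"one\r\ntwo\r\nthree\r\n"  # CR-LF: Teletypes, Telnet NVT, DOS/Windows
lf_text = b"one\ntwo\nthree\n"          # LF alone: Multics, Unix, modern macOS
cr_text = b"one\rtwo\rthree\r"          # CR alone: classic Mac OS, Apple DOS

def to_unix(data: bytes) -> bytes:
    """Normalize any of the three conventions to LF-only.
    CR-LF must be replaced before lone CR, or each CR-LF pair
    would wrongly become two line breaks."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

assert to_unix(crlf_text) == to_unix(cr_text) == lf_text
```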
In the human digestive system, food enters the mouth, and mechanical digestion of the food starts with mastication (chewing) and the wetting contact of saliva. Saliva, a liquid secreted by the salivary
glands, contains salivary amylase, an enzyme which starts the digestion of starch in the food; the saliva also contains
mucus, which lubricates the food, and hydrogen carbonate, which provides the ideal conditions of pH (alkaline) for
amylase to work. After undergoing mastication and starch digestion, the food will be in the form of a small, round
slurry mass called a bolus. It will then travel down the esophagus and into the stomach by the action of peristalsis.
Gastric juice in the stomach starts protein digestion. Gastric juice mainly contains hydrochloric acid and pepsin.
As these two chemicals may damage the stomach wall, mucus is secreted by the stomach, providing a slimy layer that
acts as a shield against the damaging effects of the chemicals. At the same time protein digestion is occurring,
mechanical mixing occurs by peristalsis, which is waves of muscular contractions that move along the stomach wall.
This allows the mass of food to further mix with the digestive enzymes. Other animals, such as rabbits and rodents,
practise coprophagia behaviours: eating specialised faeces in order to re-digest food, especially in the case of
roughage. Capybara, rabbits, hamsters and other related species do not have a complex digestive system as do, for
example, ruminants. Instead they extract more nutrition from grass by giving their food a second pass through the
gut. Soft faecal pellets of partially digested food are excreted and generally consumed immediately. They also produce
normal droppings, which are not eaten. Digestive systems take many forms. There is a fundamental distinction between
internal and external digestion. External digestion developed earlier in evolutionary history, and most fungi still
rely on it. In this process, enzymes are secreted into the environment surrounding the organism, where they break
down an organic material, and some of the products diffuse back to the organism. Animals have a tube (gastrointestinal
tract) in which internal digestion occurs, which is more efficient because more of the broken down products can be
captured, and the internal chemical environment can be more efficiently controlled. The nitrogen fixing Rhizobia
are an interesting case, wherein conjugative elements naturally engage in inter-kingdom conjugation. Such elements
as the Agrobacterium Ti or Ri plasmids contain elements that can transfer to plant cells. Transferred genes enter
the plant cell nucleus and effectively transform the plant cells into factories for the production of opines, which
the bacteria use as carbon and energy sources. Infected plant cells form crown gall or root tumors. The Ti and Ri
plasmids are thus endosymbionts of the bacteria, which are in turn endosymbionts (or parasites) of the infected plant.
Teeth (singular tooth) are small whitish structures found in the jaws (or mouths) of many vertebrates that are used
to tear, scrape, and chew food. Teeth are not made of bone, but rather of tissues of varying density and hardness,
such as enamel, dentine and cementum. Human teeth have a blood and nerve supply which enables proprioception. This
is the ability of sensation when chewing: for example, if we were to bite into something too hard for our teeth, such as a chipped plate mixed into food, our teeth would send a message to our brain and we would realise that it cannot be chewed, so we would stop trying. The abomasum is the fourth and final stomach compartment in ruminants. It is a close equivalent
of a monogastric stomach (e.g., those in humans or pigs), and digesta is processed here in much the same way. It
serves primarily as a site for acid hydrolysis of microbial and dietary protein, preparing these protein sources
for further digestion and absorption in the small intestine. Digesta is finally moved into the small intestine, where
the digestion and absorption of nutrients occurs. Microbes produced in the reticulo-rumen are also digested in the
small intestine. An earthworm's digestive system consists of a mouth, pharynx, esophagus, crop, gizzard, and intestine.
The mouth is surrounded by strong lips, which act like a hand to grab pieces of dead grass, leaves, and weeds, with
bits of soil to help chew. The lips break the food down into smaller pieces. In the pharynx, the food is lubricated
by mucus secretions for easier passage. The esophagus adds calcium carbonate to neutralize the acids formed by food
matter decay. Temporary storage occurs in the crop where food and calcium carbonate are mixed. The powerful muscles
of the gizzard churn and mix the mass of food and dirt. When the churning is complete, the glands in the walls of
the gizzard add enzymes to the thick paste, which help chemically break down the organic matter. By peristalsis,
the mixture is sent to the intestine where friendly bacteria continue chemical breakdown. This releases carbohydrates,
protein, fat, and various vitamins and minerals for absorption into the body. Digestion of some fats can begin in
the mouth where lingual lipase breaks down some short chain lipids into diglycerides. However fats are mainly digested
in the small intestine. The presence of fat in the small intestine produces hormones that stimulate the release of
pancreatic lipase from the pancreas and bile from the liver which helps in the emulsification of fats for absorption
of fatty acids. Complete digestion of one molecule of fat (a triglyceride) results in a mixture of fatty acids, mono-
and di-glycerides, as well as some undigested triglycerides, but no free glycerol molecules. Digestion is the breakdown
of large insoluble food molecules into small water-soluble food molecules so that they can be absorbed into the watery
blood plasma. In certain organisms, these smaller substances are absorbed through the small intestine into the blood
stream. Digestion is a form of catabolism that is often divided into two processes based on how food is broken down:
mechanical and chemical digestion. The term mechanical digestion refers to the physical breakdown of large pieces
of food into smaller pieces which can subsequently be accessed by digestive enzymes. In chemical digestion, enzymes
break down food into the small molecules the body can use. Different phases of digestion take place including: the
cephalic phase, gastric phase, and intestinal phase. The cephalic phase occurs at the sight, thought and smell of
food, which stimulate the cerebral cortex. Taste and smell stimuli are sent to the hypothalamus and medulla oblongata.
After this, the signal is routed through the vagus nerve, triggering the release of acetylcholine. Gastric secretion at this phase rises
to 40% of maximum rate. Acidity in the stomach is not buffered by food at this point and thus acts to inhibit parietal
(secretes acid) and G cell (secretes gastrin) activity via D cell secretion of somatostatin. The gastric phase takes
3 to 4 hours. It is stimulated by distension of the stomach, presence of food in stomach and decrease in pH. Distention
activates long and myenteric reflexes. This activates the release of acetylcholine, which stimulates the release
of more gastric juices. As protein enters the stomach, it binds to hydrogen ions, which raises the pH of the stomach.
Inhibition of gastrin and gastric acid secretion is lifted. This triggers G cells to release gastrin, which in turn
stimulates parietal cells to secrete gastric acid. Gastric acid is about 0.5% hydrochloric acid (HCl), which lowers
the pH to the desired range of 1–3. Acid release is also triggered by acetylcholine and histamine. The intestinal phase
has two parts, the excitatory and the inhibitory. Partially digested food fills the duodenum. This triggers intestinal
gastrin to be released. Enterogastric reflex inhibits vagal nuclei, activating sympathetic fibers causing the pyloric
sphincter to tighten to prevent more food from entering, and inhibits local reflexes. In a channel transupport system,
several proteins form a contiguous channel traversing the inner and outer membranes of the bacteria. It is a simple
system, which consists of only three protein subunits: the ABC protein, membrane fusion protein (MFP), and outer
membrane protein (OMP). This secretion system transports various molecules, from ions and drugs to proteins of various sizes (20–900 kDa). The molecules secreted vary in size from the small Escherichia coli peptide colicin
V, (10 kDa) to the Pseudomonas fluorescens cell adhesion protein LapA of 900 kDa. In addition to the use of the multiprotein
complexes listed above, Gram-negative bacteria possess another method for release of material: the formation of outer
membrane vesicles. Portions of the outer membrane pinch off, forming spherical structures made of a lipid bilayer
enclosing periplasmic materials. Vesicles from a number of bacterial species have been found to contain virulence
factors, some have immunomodulatory effects, and some can directly adhere to and intoxicate host cells. While release
of vesicles has been demonstrated as a general response to stress conditions, the process of loading cargo proteins
seems to be selective. Underlying the process is muscle movement throughout the system through swallowing and peristalsis.
Each step in digestion requires energy, and thus imposes an "overhead charge" on the energy made available from absorbed
substances. Differences in that overhead cost are important influences on lifestyle, behavior, and even physical
structures. Examples may be seen in humans, who differ considerably from other hominids (lack of hair, smaller jaws
and musculature, different dentition, length of intestines, cooking, etc.). Digestion begins in the mouth with the
secretion of saliva and its digestive enzymes. Food is formed into a bolus by the mechanical mastication and swallowed
into the esophagus from where it enters the stomach through the action of peristalsis. Gastric juice contains hydrochloric
acid and pepsin, which would damage the walls of the stomach, so mucus is secreted for protection. In the stomach, further release of enzymes breaks the food down further, and this is combined with the churning action of the stomach.
The partially digested food enters the duodenum as a thick semi-liquid chyme. In the small intestine, the larger
part of digestion takes place and this is helped by the secretions of bile, pancreatic juice and intestinal juice.
The intestinal walls are lined with villi, and their epithelial cells are covered with numerous microvilli to improve
the absorption of nutrients by increasing the surface area of the intestine. Lactase is an enzyme that breaks down
the disaccharide lactose to its component parts, glucose and galactose. Glucose and galactose can be absorbed by
the small intestine. Approximately 65 percent of the adult population produce only small amounts of lactase and are
unable to eat unfermented milk-based foods. This is commonly known as lactose intolerance. Lactose intolerance varies
widely by ethnic heritage; more than 90 percent of peoples of east Asian descent are lactose intolerant, in contrast
to about 5 percent of people of northern European descent. After some time (typically 1–2 hours in humans, 4–6 hours
in dogs, 3–4 hours in house cats), the resulting thick liquid is called chyme. When the pyloric
sphincter valve opens, chyme enters the duodenum where it mixes with digestive enzymes from the pancreas and bile
juice from the liver and then passes through the small intestine, in which digestion continues. When the chyme is
fully digested, it is absorbed into the blood. 95% of absorption of nutrients occurs in the small intestine. Water
and minerals are reabsorbed back into the blood in the colon (large intestine), where the pH is slightly acidic, about 5.6–6.9. Some vitamins, such as biotin and vitamin K (K2MK7) produced by bacteria in the colon, are also absorbed
into the blood in the colon. Waste material is eliminated from the rectum during defecation. In mammals, preparation
for digestion begins with the cephalic phase in which saliva is produced in the mouth and digestive enzymes are produced
in the stomach. Mechanical and chemical digestion begin in the mouth where food is chewed, and mixed with saliva
to begin enzymatic processing of starches. The stomach continues to break food down mechanically and chemically through
churning and mixing with both acids and enzymes. Absorption occurs in the stomach and gastrointestinal tract, and
the process finishes with defecation. Protein digestion occurs in the stomach and duodenum in which 3 main enzymes,
pepsin secreted by the stomach and trypsin and chymotrypsin secreted by the pancreas, break down food proteins into
polypeptides that are then broken down by various exopeptidases and dipeptidases into amino acids. The digestive
enzymes, however, are mostly secreted as their inactive precursors, the zymogens. For example, trypsin is secreted
by the pancreas in the form of trypsinogen, which is activated in the duodenum by enterokinase to form trypsin. Trypsin
then cleaves proteins to smaller polypeptides.
Gymnasts sprint down a runway, which is a maximum of 25 meters in length, before hurdling onto a spring board. The gymnast
is allowed to choose where they start on the runway. The body position is maintained while "punching" (blocking using
only a shoulder movement) the vaulting platform. The gymnast then rotates to a standing position. In advanced gymnastics,
multiple twists and somersaults may be added before landing. Successful vaults depend on the speed of the run, the
length of the hurdle, the power the gymnast generates from the legs and shoulder girdle, the kinesthetic awareness
in the air, and the speed of rotation in the case of more difficult and complex vaults. According to FIG rules, only
women compete in rhythmic gymnastics. This is a sport that combines elements of ballet, gymnastics, dance, and apparatus
manipulation. The sport involves the performance of five separate routines with the use of five apparatus (ball, ribbon, hoop, clubs, rope) on a floor area, with a much greater emphasis on the aesthetic rather than the acrobatic.
There are also group routines consisting of 5 gymnasts and 5 apparatuses of their choice. Rhythmic routines are scored
out of a possible 30 points; the score for artistry (choreography and music) is averaged with the score for difficulty
of the moves and then added to the score for execution. Aesthetic Group Gymnastics (AGG) was developed from the Finnish
"naisvoimistelu". It differs from Rhythmic Gymnastics in that body movement is large and continuous and teams are larger. Athletes do not use apparatus in international AGG competitions, in contrast to Rhythmic Gymnastics, where ball,
ribbon, hoop and clubs are used on the floor area. The sport requires physical qualities such as flexibility, balance,
speed, strength, coordination and sense of rhythm where movements of the body are emphasized through the flow, expression
and aesthetic appeal. A good performance is characterized by uniformity and simultaneity. The competition program
consists of versatile and varied body movements, such as body waves, swings, balances, pivots, jumps and leaps, dance
steps, and lifts. The International Federation of Aesthetic Group Gymnastics (IFAGG) was established in 2003. This
apparatus may be made of hemp or a synthetic material which retains the qualities of lightness and suppleness. Its
length is in proportion to the size of the gymnast. The rope should, when held down by the feet, reach both of the
gymnasts' armpits. One or two knots at each end are for keeping hold of the rope while doing the routine. At the
ends (to the exclusion of all other parts of the rope) an anti-slip material, either coloured or neutral may cover
a maximum of 10 cm (3.94 in). The rope must be coloured, either all or partially and may either be of a uniform diameter
or be progressively thicker in the center provided that this thickening is of the same material as the rope. The
fundamental requirements of a rope routine include leaps and skipping. Other elements include swings, throws, circles,
rotations and figures of eight. In 2011, the FIG decided to nullify the use of rope in rhythmic gymnastics competitions.
The Federation of International Gymnastics (FIG) was founded in Liege in 1881. By the end of the nineteenth century,
men's gymnastics competition was popular enough to be included in the first "modern" Olympic Games in 1896. From
then on until the early 1950s, both national and international competitions involved a changing variety of exercises
gathered under the rubric, gymnastics, that would seem strange to today's audiences and that included for example,
synchronized team floor calisthenics, rope climbing, high jumping, running, and horizontal ladder. During the 1920s,
women organized and participated in gymnastics events. The first women's Olympic competition was primitive, only
involving synchronized calisthenics and track and field. These games were held in 1928, in Amsterdam. In the vaulting
events, gymnasts sprint down a 25 metres (82 ft) runway, jump onto a spring filled board or perform a roundoff, or
handspring entry onto a springboard (run/ take-off segment), land momentarily, inverted on the hands on the vaulting
horse, or vaulting table (pre flight segment), then propel themselves forward or backward, off of this platform to
a two footed landing (post flight segment). Every gymnast starts at a different point on the vault runway depending
on their height and strength. The post flight segment may include one or more multiple saltos or somersaults, and/or
twisting movements. A round-off entry vault, called a Yurchenko, is the most common vault in elite level gymnastics.
When performing a yurchenko, gymnasts "round-off" so hands are on the runway while the feet land on the springboard
(beatboard). From the roundoff position the gymnast travels backwards and executes a backhandspring so that the hands
land on the vaulting table. The gymnast then blocks off the vaulting platform into various twisting and/or somersaulting
combinations. The post flight segment brings the gymnast to her feet. A gymnast's score comes from deductions taken
from their start value. The start value of a routine is based on the difficulty of the elements the gymnast attempts
and whether or not the gymnast meets composition requirements. The composition requirements are different for each
apparatus; this score is called the D score. Deductions for execution and artistry are taken from 10.0; this score
is called the E score. The final score is calculated by taking deductions from the E score and adding the result
to the D score. Since 2007, the final score has thus been obtained by adding the difficulty (bonus) score and the
execution score together. The technical rules for the Japanese version of men's rhythmic gymnastics came
around the 1970s. For individuals, only four types of apparatus are used: the double rings, the stick, the rope,
and the clubs. Groups do not use any apparatus. The Japanese version includes tumbling performed on a spring floor.
Points are awarded based on a 10-point scale that measures the level of difficulty of the tumbling and apparatus handling.
On November 27–29, 2003, Japan hosted the first edition of the Men's Rhythmic Gymnastics World Championship. The word
gymnastics derives from the common Greek adjective γυμνός (gymnos) meaning "naked", by way of the related verb γυμνάζω
(gymnazo), whose meaning is "to train naked", "train in gymnastic exercise", and generally "to train, to exercise". The
verb had this meaning because athletes in ancient times exercised and competed without clothing. The word came into use
in the 1570s, from Latin gymnasticus, from Greek gymnastikos "fond of or skilled in bodily exercise," from gymnazein
"to exercise or train" (see gymnasium). By 1954, Olympic Games apparatus and events for both men and women had been
standardized in modern format, and uniform grading structures (including a point system from 1 to 15) had been agreed
upon. At this time, Soviet gymnasts astounded the world with highly disciplined and difficult performances, setting
a precedent that continues. The new medium of television helped publicize and initiate a modern age of gymnastics.
Both men's and women's gymnastics now attract considerable international interest, and excellent gymnasts can be
found on every continent. Nadia Comăneci received the first perfect score, at the 1976 Summer Olympics held in Montreal,
Canada. She was coached in Romania by Béla Károlyi, a coach of Hungarian ethnicity. Comăneci scored four of her perfect
tens on the uneven bars, two on the balance beam and one in the floor exercise. Even with Comăneci's perfect scores,
the Romanians lost the gold medal to the Soviet Union. Nevertheless, Comăneci became an Olympic icon. A typical pommel
horse exercise involves both single leg and double leg work. Single leg skills are generally found in the form of
scissors, an element often done on the pommels. Double leg work, however, is the main staple of this event. The gymnast
swings both legs in a circular motion (clockwise or counterclockwise depending on preference) and performs such skills
on all parts of the apparatus. To make the exercise more challenging, gymnasts will often include variations on a
typical circling skill by turning (moores and spindles) or by straddling their legs (flares). Routines end when the
gymnast performs a dismount, either by swinging his body over the horse or landing after a handstand; these skills
all require back strength, ranging from relatively easy handstands to more difficult back or front flips.
In a tumbling pass, dismount or vault, landing is the final phase, following take-off and flight. This is a critical
skill in terms of execution in competition scores, general performance, and injury occurrence. Without the necessary
magnitude of energy dissipation during impact, the risk of sustaining injuries during somersaulting increases. These
injuries commonly occur in the lower extremities and include cartilage lesions, ligament tears, and bone bruises/fractures.
To avoid such injuries, and to receive a high performance score, proper technique must be used by the gymnast. "The
subsequent ground contact or impact landing phase must be achieved using a safe, aesthetic and well-executed double
foot landing." A successful landing in gymnastics is classified as soft, meaning the knee and hip joints are at greater
than 63 degrees of flexion. Individual routines in trampolining involve a build-up phase during which the gymnast
jumps repeatedly to achieve height, followed by a sequence of ten bounces without pause during which the gymnast
performs a sequence of aerial skills. Routines are marked out of a maximum score of 10 points. Additional points
(with no maximum at the highest levels of competition) can be earned depending on the difficulty of the moves and
the length of time taken to complete the ten skills which is an indication of the average height of the jumps. In
high level competitions, there are two preliminary routines, one which has only two moves scored for difficulty and
one where the athlete is free to perform any routine. This is followed by a final routine which is optional. Some
competitions restart the score from zero for the finals, while others add the final score to the preliminary results. On
the uneven bars, the gymnast performs a routine on two horizontal bars set at different heights. These bars are made
of fiberglass covered in wood laminate, to prevent them from breaking. In the past, bars were made of wood, but the
bars were prone to breaking, providing an incentive to switch to newer technologies. The width and height of the
bars may be adjusted. In the past, the uneven parallel bars were closer together; they have been moved increasingly
further apart, allowing gymnasts to perform swinging, circling, transitional, and release moves that may pass over,
under, and between the two bars. At the Elite level, movements must pass through the handstand. Gymnasts often mount
the uneven bars using a springboard or a small mat. Chalk and grips (a leather strip with holes for fingers to protect
hands and improve performance) may be used while doing this event. The chalk helps take the moisture out of gymnasts'
hands to decrease friction and prevent rips (tears to the skin of the hands), while dowel grips help gymnasts grip the
bar. A higher flight phase results in a higher vertical ground reaction force. Vertical ground reaction force represents
the external force which gymnasts have to overcome with their muscle force, and it affects the gymnast's linear
and angular momentum. Another important variable that affects linear and angular momentum is the time the landing
takes: gymnasts can reduce the impact force by increasing the time taken to perform the landing, which they achieve
by increasing hip, knee and ankle amplitude. With increased flight height, the amplitude in the ankles, knees and
hips rises accordingly. Aerobic gymnastics (formerly sport aerobics) involves the performance of routines by individuals,
pairs, trios or groups up to 6 people, emphasizing strength, flexibility, and aerobic fitness rather than acrobatic
or balance skills. Routines are performed for all individuals on a 7x7 m floor, and also for 12–14 and 15–17 trios
and mixed pairs. From 2009, all senior trios and mixed pairs were required to perform on the larger floor (10x10 m); all
groups also perform on this floor. Routines generally last 60–90 seconds, depending on the age of the participant and routine
category. General gymnastics enables people of all ages and abilities to participate in performance groups of 6 to
more than 150 athletes. They perform synchronized, choreographed routines. Troupes may consist of both genders and
are not separated into age divisions. The largest general gymnastics exhibition is the quadrennial World Gymnaestrada
which was first held in 1939. In 1984, Gymnastics for All was officially recognized as a Sport Program by the FIG,
and subsequently by national gymnastics federations worldwide; its participants now number 30 million. Gymnastics
is a sport involving the performance of exercises requiring strength, flexibility,
balance and control. Internationally, all events are governed by the Fédération Internationale de Gymnastique (FIG).
Each country has its own national governing body affiliated to the FIG. Competitive artistic gymnastics is the
best known of the gymnastic events. It typically involves the women's events of vault, uneven bars, balance beam,
and floor exercise. Men's events are floor exercise, pommel horse, still rings, vault, parallel bars, and the high
bar. Gymnastics evolved from exercises used by the ancient Greeks that included skills for mounting and dismounting
a horse, and from circus performance skills. In late eighteenth- and early nineteenth-century Germany, two
pioneer physical educators – Johann Friedrich GutsMuths (1759–1839) and Friedrich Ludwig Jahn (1778–1852) – created
exercises for boys and young men on apparatus they had designed that ultimately led to what is considered modern
gymnastics. Don Francisco Amorós y Ondeano was born on February 19, 1770, in Valencia and died on August 8, 1848, in
Paris. He was a Spanish colonel, and the first person to introduce educative gymnastics in France. Jahn promoted the
use of parallel bars, rings and high bar in international competition. In 2006, FIG introduced a new points system
for Artistic gymnastics in which scores are no longer limited to 10 points. The system is used in the US for elite
level competition. Unlike the old code of points, there are two separate scores, an execution score and a difficulty
score. In the previous system, the execution score was the only score; it was, and still is, out of 10.00. During
the gymnast's performance, the judges deduct from this score only. A fall, on or off the event, is a 1.00 deduction in
elite-level gymnastics. The introduction of the difficulty score is a significant change. The gymnast's difficulty
score is based on what elements they perform and is subject to change if they do not perform or complete all the
skills, or do not connect a skill meant to be connected to another. Missed connection bonuses are the most common loss
from a difficulty score, as it can be difficult to connect multiple flight elements. It is very hard to connect skills
if the first skill is not performed correctly. The new code of points allows the gymnasts to gain higher scores based
on the difficulty of the skills they perform as well as their execution. There is no maximum score for difficulty,
as it can keep increasing as the difficulty of the skills increase. In the past, the floor exercise event was executed
on the bare floor or mats such as wrestling mats. Today, the floor event occurs on a carpeted 12m × 12m square, usually
consisting of hard foam over a layer of plywood, which is supported by springs or foam blocks generally called a
"spring" floor. This provides a firm surface that provides extra bounce or spring when compressed, allowing gymnasts
to achieve greater height and a softer landing after the composed skill. Gymnasts perform a choreographed routine
of up to 90 seconds in the floor exercise event. Depending on the level, they may choose their own music or, as
"compulsory gymnasts," must perform to default music. In some gymnastics associations, such as the United States
Association of Independent Gymnastics Clubs (USAIGC), gymnasts are allowed to have vocals in their music, but at USA Gymnastics competitions
a large deduction is taken from the score for having vocals in the music. The routine should consist of tumbling
lines, series of jumps, leaps, dance elements, acrobatic skills, and turns, or pivots, on one foot. A gymnast can
perform up to four tumbling lines, which usually include at least one flight element without hand support. Each level
of gymnastics requires the athlete to perform a different number of tumbling passes. In level 7 in the United States,
a gymnast is required to do 2–3, and in levels 8–10, at least 3–4 tumbling passes are required. In tumbling, athletes
perform an explosive series of flips and twists down a sprung tumbling track. Scoring is similar to trampolining.
Tumbling was originally contested as one of the events in Men's Artistic Gymnastics at the 1932 Summer Olympics,
and in 1955 and 1959 at the Pan American Games. From 1974 to 1998 it was included as an event for both genders at
the Acrobatic Gymnastics World Championships. The event has also been contested since 1976 at the Trampoline World
Championships. Since the recognition of Trampoline and Acrobatic Gymnastics as FIG disciplines in 1999, official
Tumbling competitions are only allowed as an event in Trampoline gymnastics meets. Men's rhythmic gymnastics is related
to both Men's artistic gymnastics and wushu martial arts. It emerged in Japan from stick gymnastics. Stick gymnastics
has been taught and performed for many years with the aim of improving physical strength and health. Male athletes
are judged on some of the same physical abilities and skills as their female counterparts, such as hand/body-eye
co-ordination, but tumbling, strength, power, and martial arts skills are the main focus, as opposed to flexibility
and dance in women's rhythmic gymnastics. There are a growing number of participants, competing alone and on a team;
it is most popular in Asia, especially in Japan, where high school and university teams compete fiercely. As of 2002,
there were 1,000 men's rhythmic gymnasts in Japan.
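The artistic gymnastics scoring described earlier, an open-ended difficulty (D) score added to an execution (E) score taken from 10.0, can be sketched as a short calculation. This is an illustrative example only, not an official FIG tool; the function name and the sample values are hypothetical.

```python
# Illustrative sketch of post-2006 artistic gymnastics scoring:
# an open-ended difficulty ("D") score plus an execution ("E") score
# that starts from 10.0 and loses deductions. Not an official FIG
# calculation; names and values below are hypothetical.

def final_score(d_score: float, e_deductions: float) -> float:
    """Return D score plus the E score, where the E score is 10.0
    minus all execution/artistry deductions (a fall at the elite
    level counts as a 1.00 deduction)."""
    e_score = 10.0 - e_deductions
    return d_score + e_score

# A hypothetical routine with difficulty 5.8 and 1.2 in total
# deductions (including a 1.00 fall):
print(round(final_score(5.8, 1.2), 2))  # prints 14.6
```

Because the D score has no ceiling, harder routines can outscore cleaner ones, which is the central change from the old 10.0-capped system.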
Domestically, Barcelona has won 23 La Liga, 27 Copa del Rey, 11 Supercopa de España, 3 Copa Eva Duarte and 2 Copa de la Liga
trophies, as well as being the record holder for the latter four competitions. In international club football, Barcelona
has won five UEFA Champions League titles, a record four UEFA Cup Winners' Cup, a shared record five UEFA Super Cup,
a record three Inter-Cities Fairs Cup and a record three FIFA Club World Cup trophies. Barcelona was ranked first
in the IFFHS Club World Ranking for 1997, 2009, 2011, 2012 and 2015 and currently occupies the second position on
the UEFA club rankings. The club has a long-standing rivalry with Real Madrid; matches between the two teams are
referred to as El Clásico. On 14 June 1925, in a spontaneous reaction against Primo de Rivera's dictatorship, the
crowd in the stadium jeered the Royal March. As a reprisal, the ground was closed for six months and Gamper was forced
to relinquish the presidency of the club. This coincided with the transition to professional football, and, in 1926,
the directors of Barcelona publicly claimed, for the first time, to operate a professional football club. On 3 July
1927, the club held a second testimonial match for Paulino Alcántara, against the Spanish national team. To kick
off the match, local journalist and pilot Josep Canudas dropped the ball onto the pitch from his airplane. In 1928,
victory in the Spanish Cup was celebrated with a poem titled "Oda a Platko", which was written by a member of the
Generation of '27, Rafael Alberti, inspired by the heroic performance of the Barcelona goalkeeper, Franz Platko.
On 23 June 1929, Barcelona won the inaugural Spanish League. A year after winning the championship, on 30 July 1930,
Gamper committed suicide after a period of depression brought on by personal and financial problems. The 1973–74
season saw the arrival of Johan Cruyff, who was bought for a world record £920,000 from Ajax. Already an established
player with Ajax, Cruyff quickly won over the Barcelona fans when he told the European press that he chose Barcelona
over Real Madrid because he could not play for a club associated with Francisco Franco. He further endeared himself
when he named his son Jordi, after the local Catalan Saint George. Alongside stars such as Juan Manuel Asensi, Carles
Rexach and Hugo Sotil, he helped the club win the 1973–74 La Liga title, its first since 1960, defeating Real Madrid
5–0 at the Bernabéu along the way. He was crowned European Footballer of the Year in 1973 during his first season
with Barcelona (his second Ballon d'Or win; he won his first while playing for Ajax in 1971). Cruyff received this
prestigious award a third time (the first player to do so) in 1974, while he was still with Barcelona. Like Maradona,
Ronaldo only stayed a short time before he left for Internazionale. However, new heroes emerged, such as Luís Figo,
Patrick Kluivert, Luis Enrique and Rivaldo, and the team won a Copa del Rey and La Liga double in 1998. In 1999,
the club celebrated its centenari, winning the Primera División title, and Rivaldo became the fourth Barcelona player
to be awarded European Footballer of the Year. Despite this domestic success, the failure to emulate Real Madrid
in the Champions League led to van Gaal and Núñez resigning in 2000. On 4 January 2016, Barcelona's transfer ban
ended. The same day, they registered 77 players across all categories and ages, and both of the previous summer's
signings, Arda Turan and Aleix Vidal, became eligible to play with the first team. On 10 February, qualifying for the sixth Copa
del Rey final in the last eight seasons, Luis Enrique’s Barcelona broke the club's record of 28 consecutive games
unbeaten in all competitions set by Guardiola’s team in the 2010–11 season, with a 1–1 draw with Valencia in the
second leg of the 2015–16 Copa del Rey. Though it is the most played local derby in the history of La Liga, it is
also the most unbalanced, with Barcelona overwhelmingly dominant. In the primera división league table, Espanyol
has only managed to finish above Barça on three occasions in 80 seasons (1928–2015) and the only all-Catalan Copa
del Rey final was won by Barça in 1957. Espanyol has the consolation of achieving the largest margin win with a 6–0
in 1951, while Barcelona's biggest win was 5–0 on five occasions (in 1933, 1947, 1964, 1975 and 1992). Espanyol achieved
a 2–1 win against Barça during the 2008–09 season, becoming the first team to defeat Barcelona at Camp Nou in their
treble-winning season. Barcelona is one of three founding members of the Primera División that have never been relegated
from the top division, along with Athletic Bilbao and Real Madrid. In 2009, Barcelona became the first Spanish club
to win the continental treble consisting of La Liga, Copa del Rey, and the UEFA Champions League, and also became
the first football club to win six out of six competitions in a single year, completing the sextuple in also winning
the Spanish Super Cup, UEFA Super Cup and FIFA Club World Cup. In 2011, the club became European champions again
and won five trophies. This Barcelona team, which reached a record six consecutive Champions League semi-finals and
won 14 trophies in just four years under Pep Guardiola, is considered by some in the sport to be the greatest team
of all time. In June 2015, Barcelona became the first European club in history to achieve the continental treble
twice. As of December 2015, Barcelona has won 23 La Liga, 27 Copa del Rey, 11 Supercopa de España, three
Copa Eva Duarte and two Copa de la Liga trophies, as well as being the record holder for the latter four
competitions. They have also won five UEFA Champions League, a record four UEFA Cup Winners' Cup, a shared record
five UEFA Super Cup and a record three FIFA Club World Cup trophies. They also won a record three Inter-Cities Fairs
Cup trophies, considered the predecessor to the UEFA Cup-Europa League. A month after the Spanish Civil War began
in 1936, several players from Barcelona enlisted in the ranks of those who fought against the military uprising,
along with players from Athletic Bilbao. On 6 August, Falangist soldiers near Guadarrama murdered club president
Josep Sunyol, a representative of the pro-independence political party. He was dubbed the martyr of barcelonisme,
and his murder was a defining moment in the history of FC Barcelona and Catalan identity. In the summer of 1937,
the squad was on tour in Mexico and the United States, where it was received as an ambassador of the Second Spanish
Republic. The tour led to the financial security of the club, but also resulted in half of the team seeking asylum
in Mexico and France, making it harder for the remaining team to contest for trophies. Ten years after the
inception of the youth program, La Masia, its young players began to graduate and play for the first team.
One of the first graduates, who would later earn international acclaim, was future Barcelona coach Pep Guardiola.
Under Cruyff's guidance, Barcelona won four consecutive La Liga titles from 1991 to 1994. They beat Sampdoria in
both the 1989 UEFA Cup Winners' Cup final and the 1992 European Cup final at Wembley, with a free kick goal from
Dutch international Ronald Koeman. They also won a Copa del Rey in 1990, the European Super Cup in 1992 and three
Supercopa de España trophies. With 11 trophies, Cruyff became the club's most successful manager at that point. He
also became the club's longest consecutive serving manager, serving eight years. Cruyff's fortune was to change,
and, in his final two seasons, he failed to win any trophies and fell out with president Núñez, resulting in his
departure. On the legacy of Cruyff's football philosophy and the passing style of play he introduced to the club,
future coach of Barcelona Pep Guardiola would state, "Cruyff built the cathedral, our job is to maintain and renovate
it." After the disappointment of the Gaspart era, the combination of a new young president, Joan Laporta, and a young
new manager, former Dutch and Milan star Frank Rijkaard, saw the club bounce back. On the field, an influx of international
players, including Ronaldinho, Deco, Henrik Larsson, Ludovic Giuly, Samuel Eto'o, and Rafael Márquez, combined with
home grown Spanish players, such as Carles Puyol, Andrés Iniesta, Xavi and Víctor Valdés, led to the club's return
to success. Barcelona won La Liga and the Supercopa de España in 2004–05, and Ronaldinho and Eto'o were voted first
and third, respectively, in the FIFA World Player of the Year awards. Barça beat Athletic Bilbao 4–1 in the 2009
Copa del Rey Final, winning the competition for a record-breaking 25th time. A historic 2–6 victory against Real
Madrid followed three days later and ensured that Barcelona became La Liga champions for the 2008–09 season. Barça
finished the season by beating the previous year's Champions League winners Manchester United 2–0 at the Stadio Olimpico
in Rome to win their third Champions League title and completed the first ever treble won by a Spanish team. The
team went on to win the 2009 Supercopa de España against Athletic Bilbao and the 2009 UEFA Super Cup against Shakhtar
Donetsk, becoming the first European club to win both domestic and European Super Cups following a treble. In December
2009, Barcelona won the 2009 FIFA Club World Cup, and became the first football club ever to accomplish the sextuple.
Barcelona accomplished two new records in Spanish football in 2010 as they retained the La Liga trophy with 99 points
and won the Spanish Super Cup trophy for a ninth time. It was announced in the summer of 2012 that Tito Vilanova, assistant
manager at FC Barcelona, would take over from Pep Guardiola as manager. Following his appointment, Barcelona went
on an incredible run that saw them hold the top spot on the league table for the entire season, recording only two
losses and amassing 100 points. Their top scorer once again was Lionel Messi, who scored 46 goals in the League,
including two hat-tricks. On 11 May 2013 Barcelona were crowned as the Spanish football champions for the 22nd time,
still with four games left to play. Ultimately Barcelona ended the season 15 points clear of rivals Real Madrid,
despite losing 2–1 to them at the beginning of March. They reached the semifinal stage of both the Copa del Rey and
the Champions League, going out to Real Madrid and Bayern Munich respectively. On 19 July, it was announced that
Vilanova was resigning as Barcelona manager because his throat cancer had returned, and he would be receiving treatment
for the second time after a three-month medical leave in December 2012. In late December, Barcelona's appeal to the
Court of Arbitration for Sport was unsuccessful and the original transfer ban was reinstated, leaving the club unable
to utilise the 2015 winter and summer transfer windows. On 5 January 2015, Zubizareta was sacked by the board after
4 years as director of football. The next month, Barcelona announced the formation of a new Football Area Technical
Commission, made up of vice-president Jordi Mestre, board member Javier Bordas, Carles Rexach and Ariedo Braida.
In addition to membership, as of 2010 there are 1,335 officially registered fan clubs, called penyes, around
the world. The fan clubs promote Barcelona in their locality and receive beneficial offers when visiting Barcelona.
Among the best supported teams globally, Barcelona has the highest social media following in the world among sports
teams, with over 90 million Facebook fans as of February 2016. The club has had many prominent people among its supporters,
including Pope John Paul II, who was an honorary member, and former prime minister of Spain José Luis Rodríguez Zapatero.
FC Barcelona has the second-highest average attendance of European football clubs, behind only Borussia Dortmund.
Barça's local rival has always been Espanyol. The Blanc-i-blaus, being one of the clubs granted royal patronage, were
founded exclusively by Spanish football fans, unlike the multinational nature of Barça's primary board. The founding
message of the club was clearly anti-Barcelona, and they disapprovingly saw FC Barcelona as a team of foreigners.
The rivalry was strengthened by what Catalans saw as a provocative representative of Madrid. Their original ground
was in the affluent district of Sarrià. The club's original crest was a quartered diamond-shaped crest topped by
the Crown of Aragon and the bat of King James, and surrounded by two branches, one of a laurel tree and the other
a palm. In 1910 the club held a competition among its members to design a new crest. The winner was Carles Comamala,
who at the time played for the club. Comamala's suggestion became the crest that the club wears today, with some
minor variations. The crest consists of the St George Cross in the upper-left corner with the Catalan flag beside
it, and the team colours at the bottom. In 1922, the number of supporters had surpassed 20,000 and by lending money
to the club, Barça was able to build the larger Camp de Les Corts, which had an initial capacity of 20,000 spectators.
After the Spanish Civil War the club started attracting more members and a larger number of spectators at matches.
This led to several expansion projects: the grandstand in 1944, the southern stand in 1946, and finally the northern
stand in 1950. After the last expansion, Les Corts could hold 60,000 spectators. On 16 March 1938, Barcelona came
under aerial bombardment from the Italian Air Force, causing more than 3,000 deaths, with one of the bombs hitting
the club's offices. A few months later, Catalonia came under occupation, and as a symbol of "undisciplined" Catalanism,
the club, now down to just 3,486 members, faced a number of restrictions. All signs of regional nationalism, including
language, flag and other signs of separatism were banned throughout Spain. The Catalan flag was banned and the club
were prohibited from using non-Spanish names. These measures forced the club to change its name to Club de Fútbol
Barcelona and to remove the Catalan flag from its crest. In June 1982, Diego Maradona was signed for a world record
fee of £5 million from Boca Juniors. In the following season, under coach César Luis Menotti, Barcelona won the Copa del Rey, beating
Real Madrid. However, Maradona's time with Barcelona was short-lived and he soon left for Napoli. At the start of
the 1984–85 season, Terry Venables was hired as manager and he won La Liga with noteworthy displays by German midfielder
Bernd Schuster. The next season, he took the team to their second European Cup final, only to lose on penalties to
Steaua Bucureşti during a dramatic evening in Seville. Founded in 1899 by a group of Swiss, English and Catalan footballers
led by Joan Gamper, the club has become a symbol of Catalan culture and Catalanism, hence the motto "Més que un club"
(More than a club). Unlike many other football clubs, the supporters own and operate Barcelona. It is the second
most valuable sports team in the world, worth $3.16 billion, and the world's second richest football club in terms
of revenue, with an annual turnover of €560.8 million. The official Barcelona anthem is the "Cant del Barça", written
by Jaume Picas and Josep Maria Espinàs. FC Barcelona had a successful start in regional and national cups, competing
in the Campionat de Catalunya and the Copa del Rey. In 1902, the club won its first trophy, the Copa Macaya, and
participated in the first Copa del Rey, losing 1–2 to Bizcaya in the final. Hans Gamper — now known as Joan Gamper
— became club president in 1908, finding the club in financial difficulty after not winning a competition since the
Campionat de Catalunya in 1905. Club president on five separate occasions between 1908 and 1925, he spent 25 years
in total at the helm. One of his main achievements was ensuring Barça acquired its own stadium and thus generated a
stable income. In 1943, Barcelona faced rivals Real Madrid in the semi-finals of Copa del Generalísimo (now the Copa
del Rey). The first match at Les Corts was won by Barcelona 3–0. Real Madrid comfortably won the second leg, beating
Barcelona 11–1. According to football writer Sid Lowe, "There have been relatively few mentions of the game [since]
and it is not a result that has been particularly celebrated in Madrid. Indeed, the 11–1 occupies a far more prominent
place in Barcelona's history." It has been alleged by local journalist Paco Aguilar that Barcelona's players were
threatened by police in the changing room, though nothing was ever proven. The 1960s were less successful for the
club, with Real Madrid monopolising La Liga. The completion of the Camp Nou, finished in 1957, meant the club had
little money to spend on new players. The 1960s saw the emergence of Josep Maria Fusté and Carles Rexach, and the
club won the Copa del Generalísimo in 1963 and the Fairs Cup in 1966. Barcelona restored some pride by beating Real
Madrid 1–0 in the 1968 Copa del Generalísimo final at the Bernabéu in front of Franco, with coach Salvador Artigas,
a former republican pilot in the civil war. Near the end of Franco's dictatorship, in 1974, the club changed its official
name back to Futbol Club Barcelona and reverted the crest to its original design, including the original letters
once again. Around this time, tensions began to arise between what was perceived as president Núñez's dictatorial
rule and the nationalistic support group, Boixos Nois. The group, identified with a left-wing separatism, repeatedly
demanded the resignation of Núñez and openly defied him through chants and banners at matches. At the same time,
Barcelona experienced an influx of skinheads, who often identified with right-wing separatism. The skinheads
slowly transferred the Boixos Nois' ideology from liberalism to fascism, which caused division within the group and
a sudden support for Núñez's presidency. Inspired by British hooligans, the remaining Boixos Nois became violent,
causing havoc leading to large-scale arrests. Despite being the favourites and starting strongly, Barcelona finished
the 2006–07 season without trophies. A pre-season US tour was later blamed for a string of injuries to key players,
including leading scorer Eto'o and rising star Lionel Messi. There was open feuding as Eto'o publicly criticized
coach Frank Rijkaard and Ronaldinho. Ronaldinho also admitted that a lack of fitness affected his form. In La Liga,
Barcelona were in first place for much of the season, but inconsistency in the New Year saw Real Madrid overtake
them to become champions. Barcelona advanced to the semi-finals of the Copa del Rey, winning the first leg against
Getafe 5–2, with a goal from Messi bringing comparison to Diego Maradona's goal of the century, but then lost the
second leg 4–0. They took part in the 2006 FIFA Club World Cup, but were beaten by a late goal in the final against
Brazilian side Internacional. In the Champions League, Barcelona were knocked out of the competition in the last
16 by eventual runners-up Liverpool on away goals. In August 2011, Barcelona won the UEFA Super Cup after defeating Porto 2–0 thanks to goals from Lionel Messi and Cesc Fàbregas. This extended the club's overall number of official trophies to 74, surpassing Real Madrid's total. The UEFA Super Cup victory also marked
another impressive achievement as Josep Guardiola won his 12th trophy out of 15 possible in only three years at the
helm of the club, becoming the all-time record holder of most titles won as a coach at FC Barcelona. In April 2014,
FIFA banned the club from buying players for the next two transfer windows for violating FIFA's rules on the transfer of footballers aged under 18. A statement on FIFA's website read "With regard to the case
in question, FC Barcelona has been found to be in breach of art. 19 of the Regulations in the case of ten minor players
and to have committed several other concurrent infringements in the context of other players, including under Annexe
2 of the Regulations. The Disciplinary Committee regarded the infringements as serious and decided to sanction the
club with a transfer ban at both national and international level for two complete and consecutive transfer periods,
together with a fine of CHF 450,000. Additionally, the club was granted a period of 90 days in which to regularise
the situation of all minor players concerned." FIFA rejected an appeal in August but the pending appeal to the Court
of Arbitration for Sport allowed Barcelona to sign players during the summer of 2014. On 11 August 2015, Barcelona started the 2015–16 season by winning a joint-record fifth European Super Cup, beating Sevilla FC 5–4 in the 2015 UEFA Super Cup. They ended the year with a 3–0 win over Argentine club River Plate in the 2015 FIFA Club World Cup Final on
20 December to win the trophy for a record third time, with Suárez, Messi and Iniesta the top three players of the
tournament. The FIFA Club World Cup was Barcelona's 20th international title, a record only matched by Egyptian club
Al Ahly SC. By scoring 180 goals in 2015 in all competitions, Barcelona set the record for most goals scored in a
calendar year, breaking Real Madrid's record of 178 goals scored in 2014. In the 2005–06 season, Barcelona repeated
their league and Supercup successes. The pinnacle of the league season arrived at the Santiago Bernabéu Stadium in
a 3–0 win over Real Madrid. It was Frank Rijkaard's second victory at the Bernabéu, making him the first Barcelona
manager to win there twice. Ronaldinho's performance was so impressive that after his second goal, which was Barcelona's
third, some Real Madrid fans gave him a standing ovation. In the Champions League, Barcelona beat the English club
Arsenal in the final. Trailing 1–0 to a 10-man Arsenal and with less than 15 minutes remaining, they came back to
win 2–1, with substitute Henrik Larsson, in his final appearance for the club, setting up goals for Samuel Eto'o
and fellow substitute Juliano Belletti, for the club's first European Cup victory in 14 years.

During the dictatorships
of Miguel Primo de Rivera (1923–1930) and especially of Francisco Franco (1939–1975), all regional cultures were
suppressed. All of the languages spoken in Spanish territory, except Spanish (Castilian) itself, were officially
banned. Symbolising the Catalan people's desire for freedom, Barça became 'More than a club' (Més que un club) for
the Catalans. According to Manuel Vázquez Montalbán, the best way for the Catalans to demonstrate their identity
was by joining Barça. It was less risky than joining a clandestine anti-Franco movement, and allowed them to express
their dissidence. During Franco's regime, however, the blaugrana team also benefited from a good relationship with the dictator at management level, even giving him two awards. After Laporta's departure from the club in
June 2010, Sandro Rosell was soon elected as the new president. The elections were held on 13 June, where he got
61.35% (57,088 votes, a record) of total votes. Rosell signed David Villa from Valencia for €40 million and Javier
Mascherano from Liverpool for €19 million. In November 2010, Barcelona defeated their main rival, Real Madrid 5–0
in El Clásico. In the 2010–11 season, Barcelona retained the La Liga trophy, their third title in succession, finishing
with 96 points. In April 2011, the club reached the Copa del Rey final, losing 1–0 to Real Madrid at the Mestalla
in Valencia. In May, Barcelona defeated Manchester United 3–1 in the 2011 Champions League Final, held at Wembley Stadium, a repeat of the 2009 final, winning their fourth European Cup. In August 2011, La Masia graduate Cesc Fàbregas
was bought from Arsenal and he would help Barcelona defend the Spanish Supercup against Real Madrid. The Supercup
victory brought the total number of official trophies to 73, matching the number of titles won by Real Madrid. In
1918 Espanyol started a counter-petition against autonomy, which at that time had become a pertinent issue. Later
on, an Espanyol supporter group would join the Falangists in the Spanish Civil War, siding with the fascists. Despite
these differences in ideology, the derbi has always been more relevant to Espanyol supporters than Barcelona ones
due to the difference in objectives. In recent years the rivalry has become less political, as Espanyol translated
its official name and anthem from Spanish to Catalan. FC Barcelona's all-time highest goalscorer in all competitions
(including friendlies) is Lionel Messi with 474 goals. Messi is also the all-time highest goalscorer for Barcelona
in all official competitions, excluding friendlies, with 445 goals. He is the record goalscorer for Barcelona in
European (82 goals) and international club competitions (90 goals), and the record league scorer with 305 goals in
La Liga. Four players have managed to score over 100 league goals at Barcelona: Lionel Messi (305), César Rodríguez
(192), László Kubala (131) and Samuel Eto'o (108). Barcelona won the treble in the 2014–2015 season, winning La Liga,
Copa del Rey and UEFA Champions League titles, and became the first European team to have won the treble twice. On
17 May, the club clinched their 23rd La Liga title after defeating Atlético Madrid. This was Barcelona's seventh
La Liga title in the last ten years. On 30 May, the club defeated Athletic Bilbao in the Copa del Rey final at Camp
Nou. On 6 June, Barcelona won the UEFA Champions League final with a 3–1 win against Juventus, which completed the
treble, the club's second in 6 years. Barcelona's attacking trio of Messi, Suárez and Neymar, dubbed MSN, scored
122 goals in all competitions, the most in a season for an attacking trio in Spanish football history. Prior to the
2011–2012 season, Barcelona had a long history of avoiding corporate sponsorship on the playing shirts. On 14 July
2006, the club announced a five-year agreement with UNICEF, which included having the UNICEF logo on their shirts. Under the agreement, the club donated €1.5 million per year to UNICEF (0.7 percent of its ordinary income, equal to the UN International Aid Target, cf. ODA) via the FC Barcelona Foundation. The FC Barcelona Foundation is an entity set
up in 1994 on the suggestion of then-chairman of the Economical-Statutory Committee, Jaime Gil-Aluja. The idea was
to set up a foundation that could attract financial sponsorships to support a non-profit sport company. In 2004,
a company could become one of 25 "Honorary members" by contributing between £40,000 and £60,000 (€54,800–82,300) per year. There are also 48 associate memberships available for an annual fee of £14,000 (€19,200) and an unlimited number of "patronages" for the cost of £4,000 per year (€5,500). It is unclear whether the honorary members have any formal
say in club policy, but according to the author Anthony King, it is "unlikely that Honorary Membership would not
involve at least some informal influence over the club". There is often a fierce rivalry between the two strongest
teams in a national league, and this is particularly the case in La Liga, where the game between Barcelona and Real
Madrid is known as El Clásico. From the start of national competitions the clubs were seen as representatives of
two rival regions in Spain: Catalonia and Castile, as well as of the two cities. The rivalry reflects what many regard
as the political and cultural tensions felt between Catalans and the Castilians, seen by one author as a re-enactment
of the Spanish Civil War. In 1980, when the stadium was in need of redesign to meet UEFA criteria, the club raised
money by offering supporters the opportunity to inscribe their name on the bricks for a small fee. The idea was popular
with supporters, and thousands of people paid the fee. Later this became the centre of controversy when media in
Madrid picked up reports that one of the stones was inscribed with the name of long-time Real Madrid chairman and
Franco supporter Santiago Bernabéu. In preparation for the 1992 Summer Olympics two tiers of seating were installed
above the previous roofline. It has a current capacity of 99,354, making it the largest stadium in Europe. Traditionally,
Espanyol was seen by the vast majority of Barcelona's citizens as a club which cultivated a kind of compliance to
the central authority, in stark contrast to Barça's revolutionary spirit. Also in the 1960s and 1970s, while FC Barcelona
acted as an integrating force for Catalonia's new arrivals from poorer regions of Spain expecting to find a better
life, Espanyol drew their support mainly from sectors close to the regime such as policemen, military officers, civil
servants and career fascists. The blue and red colours of the shirt were first worn in a match against Hispania in
1900. Several competing theories have been put forth for the blue and red design of the Barcelona shirt. The son
of the first president, Arthur Witty, claimed it was the idea of his father as the colours were the same as the Merchant
Taylor's School team. Another explanation, according to author Toni Strubell, is that the colours are from Robespierre's
First Republic. In Catalonia the common perception is that the colours were chosen by Joan Gamper and are those of
his home team, FC Basel. The club's most frequently used change colours have been yellow and orange. An away kit
featuring the red and yellow stripes of the flag of Catalonia has also been used. After the construction was complete
there was no further room for expansion at Les Corts. Back-to-back La Liga titles in 1948 and 1949 and the signing
of László Kubala in June 1950, who would later go on to score 196 goals in 256 matches, drew larger crowds to the
games. The club began to make plans for a new stadium. The building of Camp Nou commenced on 28 March 1954, before
a crowd of 60,000 Barça fans. The first stone of the future stadium was laid in place under the auspices of Governor
Felipe Acedo Colunga and with the blessing of Archbishop of Barcelona Gregorio Modrego. Construction took three years
and ended on 24 September 1957 with a final cost of 288 million pesetas, 336% over budget. Barcelona is one of the
most supported teams in the world, and has the largest social media following in the world among sports teams. Barcelona's
players have won a record number of Ballon d'Or awards (11), as well as a record number of FIFA World Player of the
Year awards (7). In 2010, the club made history when three players who came through its youth academy (Messi, Iniesta
and Xavi) were chosen as the three best players in the world in the FIFA Ballon d'Or awards, an unprecedented feat
for players from the same football school. With the new stadium, Barcelona participated in the inaugural version
of the Pyrenees Cup, which, at the time, consisted of the best teams of Languedoc, Midi and Aquitaine (Southern France),
the Basque Country and Catalonia; all were former members of the Marca Hispanica region. The contest was the most
prestigious in that era. From the inaugural year in 1910 to 1913, Barcelona won the competition four consecutive
times. Carles Comamala played an integral part in the four-time championship side, managing it along with Amechazurra
and Jack Greenwell. The latter became the club's first full-time coach in 1917. The last edition was held in 1914
in the city of Barcelona, which local rivals Espanyol won. In 1978, Josep Lluís Núñez became the first elected president
of FC Barcelona, and, since then, the members of Barcelona have elected the club president. The process of electing
a president of FC Barcelona was closely tied to Spain's transition to democracy and the end of Franco's dictatorship in 1975.
The new president's main objective was to develop Barcelona into a world-class club by giving it stability both on
and off the pitch. His presidency was to last for 22 years, and it deeply affected the image of Barcelona, as Núñez
held to a strict policy regarding wages and discipline, letting go of such players as Maradona, Romário and Ronaldo
rather than meeting their demands. The departures of Núñez and van Gaal were hardly noticed by the fans when compared
to that of Luís Figo, then club vice-captain. Figo had become a cult hero, and was considered by Catalans to be one
of their own. However, Barcelona fans were distraught by Figo's decision to join arch-rivals Real Madrid, and, during
subsequent visits to the Camp Nou, Figo was given an extremely hostile reception. Upon his first return, a piglet's
head and a full bottle of whiskey were thrown at him from the crowd. The next three years saw the club in decline,
and managers came and went. Van Gaal was replaced by Llorenç Serra Ferrer who, despite an extensive investment in
players in the summer of 2000, presided over a mediocre league campaign and a humiliating first-round Champions League
exit, and was eventually dismissed late in the season. Long-serving coach Carles Rexach was appointed as his replacement,
initially on a temporary basis, and managed to at least steer the club to the last Champions League spot on the final
day of the season. Despite better form in La Liga and a good run to the semi-finals of the Champions League, Rexach
was never viewed as a long-term solution and that summer Louis van Gaal returned to the club for a second spell as
manager. What followed, despite another decent Champions League performance, was one of the worst La Liga campaigns
in the club's history, with the team as low as 15th in February 2003. This led to van Gaal's resignation and replacement
for the rest of the campaign by Radomir Antić, though a sixth-place finish was the best that he could manage. At
the end of the season, Antić's short-term contract was not renewed, and club president Joan Gaspart resigned, his
position having been made completely untenable by such a disastrous season on top of the club's overall decline in
fortunes since he became president three years prior. Two days later, it was announced that Luis Enrique would return
to Barcelona as head coach, after he agreed to a two-year deal. He was recommended by sporting director Andoni Zubizarreta,
his former national teammate. Following Enrique's arrival, Barcelona broke their transfer record when they paid Liverpool
F.C. between €81 million and €94 million for striker Luis Suárez, who was serving a four-month ban from all football-related
activity imposed by the FIFA Disciplinary Committee after biting Italian defender Giorgio Chiellini during his appearance
for Uruguay in a World Cup group stage match. The nickname culé for a Barcelona supporter is derived from the Catalan
cul (English: arse), as the spectators at the first stadium, Camp de la Indústria, sat with their culs over the stand.
In Spain, about 25% of the population is said to be Barça sympathisers, second behind Real Madrid, supported by 32%
of the population. Throughout Europe, Barcelona is the favourite second-choice club. The club's membership figures
have seen a significant increase from 100,000 in the 2003–04 season to 170,000 in September 2009, the sharp rise
being attributed to the influence of Ronaldinho and then-president Joan Laporta's media strategy that focused on
Spanish and English online media. During the 1950s the rivalry was exacerbated further when there was a controversy
surrounding the transfer of Alfredo di Stéfano, who finally played for Real Madrid and was key to their subsequent
success. The 1960s saw the rivalry reach the European stage when they met twice in a controversial knock-out round
of the European Cup, with Madrid receiving unfavourable treatment from the referee. In 2002, the European encounter
between the clubs was dubbed the "Match of The Century" by Spanish media, and Madrid's win was watched by more than
500 million people. In 2010, Forbes valued Barcelona at around €752 million (US$1 billion), ranking
them fourth after Manchester United, Real Madrid, and Arsenal, based on figures from the 2008–09 season. According
to Deloitte, Barcelona had a recorded revenue of €366 million in the same period, ranking second to Real Madrid,
who generated €401 million in revenue. In 2013, Forbes magazine ranked Barcelona the third most valuable sports team
in the world, behind Real Madrid and Manchester United, with a value of $2.6 billion. In 2014, Forbes ranked them
the second most valuable sports team in the world, worth $3.2 billion, and Deloitte ranked them the world's fourth
richest football club in terms of revenue, with an annual turnover of €484.6 million. Barcelona is the only European
club to have played continental football every season since 1955, and one of three clubs to have never been relegated
from La Liga, along with Athletic Bilbao and Real Madrid. In 2009, Barcelona became the first club in Spain to win
the treble consisting of La Liga, Copa del Rey, and the Champions League. That same year, it also became the first
football club ever to win six out of six competitions in a single year, thus completing the sextuple, comprising
the aforementioned treble and the Spanish Super Cup, UEFA Super Cup and FIFA Club World Cup. In the 2014–15 season,
Barcelona won another historic treble, making them the first club in European football to win the treble twice.

Hoover began using wiretapping in the 1920s during Prohibition to arrest bootleggers. In the 1928 case Olmstead v. United
States, in which a bootlegger was caught through telephone tapping, the United States Supreme Court ruled that FBI
wiretaps did not violate the Fourth Amendment as unlawful search and seizure, as long as the FBI did not break into
a person's home to complete the tapping. After Prohibition's repeal, Congress passed the Communications Act of 1934,
which outlawed non-consensual phone tapping, but allowed bugging. In the 1939 case Nardone v. United States, the
court ruled that due to the 1934 law, evidence the FBI obtained by phone tapping was inadmissible in court. After
the 1967 case Katz v. United States overturned the 1928 case that had allowed wiretapping, Congress passed the Omnibus
Crime Control Act, allowing public authorities to tap telephones during investigations as long as they obtain a warrant
beforehand. In March 1971, the residential office of an FBI agent in Media, Pennsylvania was burglarized by a group
calling itself the Citizens' Commission to Investigate the FBI. Numerous files were taken and distributed to a range
of newspapers, including The Harvard Crimson. The files detailed the FBI's extensive COINTELPRO program, which included
investigations into lives of ordinary citizens—including a black student group at a Pennsylvania military college
and the daughter of Congressman Henry Reuss of Wisconsin. The country was "jolted" by the revelations, which included
assassinations of political activists, and the actions were denounced by members of Congress, including House Majority
Leader Hale Boggs. The phones of some members of Congress, including Boggs, had allegedly been tapped. From the end
of the 1980s to the early 1990s, the FBI reassigned more than 300 agents from foreign counter-intelligence duties
to violent crime, and made violent crime the sixth national priority. With cuts to other well-established departments, and because terrorism was no longer considered a threat after the end of the Cold War, the FBI assisted
local and state police forces in tracking fugitives who had crossed state lines, which is a federal offense. The
FBI Laboratory helped develop DNA testing, continuing its pioneering role in identification that began with its fingerprinting
system in 1924. The 9/11 Commission's final report on July 22, 2004 stated that the FBI and Central Intelligence
Agency (CIA) were both partially to blame for not pursuing intelligence reports that could have prevented the September
11, 2001 attacks. In its most damning assessment, the report concluded that the country had "not been well served"
by either agency and listed numerous recommendations for changes within the FBI. While the FBI has acceded to most
of the recommendations, including oversight by the new Director of National Intelligence, some former members of
the 9/11 Commission publicly criticized the FBI in October 2005, claiming it was resisting any meaningful changes.
The Criminal Justice Information Services (CJIS) Division is located in Clarksburg, West Virginia. Organized beginning
in 1991, the office opened in 1995 as the youngest agency division. The complex is the length of three football fields.
It provides a main repository for information in various data systems. Under the roof of the CJIS are the programs
for the National Crime Information Center (NCIC), Uniform Crime Reporting (UCR), Fingerprint Identification, Integrated
Automated Fingerprint Identification System (IAFIS), NCIC 2000, and the National Incident-Based Reporting System
(NIBRS). Many state and local agencies use these data systems as a source for their own investigations and contribute
to the database using secure communications. The FBI provides these sophisticated identification and information services to local, state, federal, and international law enforcement agencies. The FBI director is responsible for
the day-to-day operations at the FBI. Along with his deputies, the director makes sure cases and operations are handled correctly. The director is also responsible for ensuring that the leadership of each FBI field office is staffed with qualified agents. Before the Intelligence Reform and Terrorism Prevention Act was passed in the wake of the September 11 attacks, the FBI director directly briefed the President of the United States on any issues arising from within the FBI. Since then, the director reports to the Director of National Intelligence (DNI), who
in turn reports to the President. The Uniform Crime Reports (UCR) compile data from over 17,000 law enforcement agencies
across the country. They provide detailed data on the volume of crimes, including arrests, clearances (the closing of a case), and law enforcement officer information. The UCR focuses its data collection on violent crimes, hate crimes,
and property crimes. Created in the 1920s, the UCR system has not proven to be as uniform as its name implies. The
UCR data only reflect the most serious offense in the case of connected crimes and has a very restrictive definition
of rape. Since about 93% of the data submitted to the FBI is in this format, the UCR remains the publication of choice, as most states require law enforcement agencies to submit this data. FBI records show that 85% of COINTELPRO
resources targeted groups and individuals that the FBI deemed "subversive", including communist and socialist organizations;
organizations and individuals associated with the Civil Rights Movement, including Martin Luther King, Jr. and others
associated with the Southern Christian Leadership Conference, the National Association for the Advancement of Colored
People, and the Congress of Racial Equality and other civil rights organizations; black nationalist groups; the American
Indian Movement; a broad range of organizations labeled "New Left", including Students for a Democratic Society and
the Weathermen; almost all groups protesting the Vietnam War, as well as individual student demonstrators with no
group affiliation; the National Lawyers Guild; organizations and individuals associated with the women's rights movement;
nationalist groups such as those seeking independence for Puerto Rico, United Ireland, and Cuban exile movements
including Orlando Bosch's Cuban Power and the Cuban Nationalist Movement. The remaining 15% of COINTELPRO resources
were expended to marginalize and subvert white hate groups, including the Ku Klux Klan and the National States' Rights
Party. Although many of the FBI's functions are unique, its activities in support of national security are comparable to those of the British MI5 and the Russian FSB. Unlike the Central Intelligence Agency (CIA), which has no law enforcement authority and is focused on intelligence collection overseas, the FBI is primarily a domestic agency, maintaining 56
field offices in major cities throughout the United States, and more than 400 resident agencies in lesser cities
and areas across the nation. At an FBI field office, a senior-level FBI officer concurrently serves as the representative
of the Director of National Intelligence. The bureau's first official task was visiting and making surveys of the
houses of prostitution in preparation for enforcing the "White Slave Traffic Act," or Mann Act, passed on June 25,
1910. In 1932, it was renamed the United States Bureau of Investigation. The following year it was linked to the
Bureau of Prohibition and rechristened the Division of Investigation (DOI) before finally becoming an independent
service within the Department of Justice in 1935. In the same year, its name was officially changed from the Division
of Investigation to the present-day Federal Bureau of Investigation, or FBI. During the 1950s and 1960s, FBI officials
became increasingly concerned about the influence of civil rights leaders, whom they believed had communist ties
or were unduly influenced by them. In 1956, for example, Hoover sent an open letter denouncing Dr. T.R.M. Howard,
a civil rights leader, surgeon, and wealthy entrepreneur in Mississippi who had criticized FBI inaction in solving
recent murders of George W. Lee, Emmett Till, and other blacks in the South. The FBI carried out controversial domestic
surveillance in an operation it called COINTELPRO, short for "COunter-INTELligence PROgram." Its purpose was to investigate and disrupt the activities of dissident political organizations within the United States, including
both militant and non-violent organizations. Among its targets was the Southern Christian Leadership Conference,
a leading civil rights organization with clergy leadership. Beginning in the 1940s and continuing into the 1970s,
the bureau investigated cases of espionage against the United States and its allies. Eight Nazi agents who had planned
sabotage operations against American targets were arrested, and six were executed under their sentences (Ex parte Quirin).
Also during this time, a joint US/UK code-breaking effort (the Venona project)—with which the FBI was heavily involved—broke
Soviet diplomatic and intelligence communications codes, allowing the US and British governments to read Soviet communications.
This effort confirmed the existence of Americans working in the United States for Soviet intelligence. Hoover was
administering this project but failed to notify the Central Intelligence Agency (CIA) until 1952. Another notable
case is the arrest of Soviet spy Rudolf Abel in 1957. The discovery of Soviet spies operating in the US allowed Hoover
to pursue his longstanding obsession with the threat he perceived from the American Left, ranging from Communist
Party of the United States of America (CPUSA) union organizers to American liberals. In 2003 a congressional committee
called the FBI's organized crime informant program "one of the greatest failures in the history of federal law enforcement."
The FBI allowed four innocent men to be convicted of the March 1965 gangland murder of Edward "Teddy" Deegan in order
to protect Vincent Flemmi, an FBI informant. Three of the men were sentenced to death (sentences later reduced to life in prison), and the fourth defendant was sentenced to life in prison. Two of the four men died in prison after serving almost 30 years, and the other two were released after serving 32 and 36 years. In July 2007, U.S. District
Judge Nancy Gertner in Boston found that the bureau had helped convict the four men using false witness accounts by mobster
Joseph Barboza. The U.S. Government was ordered to pay $100 million in damages to the four defendants. Between 1993
and 1996, the FBI increased its counter-terrorism role in the wake of the 1993 World Trade Center bombing in
New York City, New York; the 1995 Oklahoma City bombing in Oklahoma City, Oklahoma; and the arrest of the Unabomber
in 1996. Technological innovation and the skills of FBI Laboratory analysts helped ensure that the three cases were
successfully prosecuted. Justice Department investigations into the FBI's roles in the Ruby Ridge and Waco incidents
were found to be obstructed by agents within the Bureau. During the 1996 Summer Olympics in Atlanta, Georgia, the
FBI was criticized for its investigation of the Centennial Olympic Park bombing. It has settled a dispute with Richard
Jewell, who was a private security guard at the venue, along with some media organizations, in regard to the leaking
of his name during the investigation. During the September 11, 2001 attacks on the World Trade Center, FBI agent
Leonard W. Hatton Jr. was killed during the rescue effort while helping the rescue personnel evacuate the occupants
of the South Tower and stayed when it collapsed. Within months after the attacks, FBI Director Robert Mueller, who
had been sworn in a week before the attacks, called for a re-engineering of FBI structure and operations. He made
countering every federal crime a top priority, including the prevention of terrorism, countering foreign intelligence
operations, addressing cyber security threats, other high-tech crimes, protecting civil rights, combating public
corruption, organized crime, white-collar crime, and major acts of violent crime. On July 8, 2007, The Washington
Post published excerpts from UCLA Professor Amy Zegart's book Spying Blind: The CIA, the FBI, and the Origins of
9/11. The Post reported from Zegart's book that government documents show the CIA and FBI missed 23 potential chances
to disrupt the terrorist attacks of September 11, 2001. The primary reasons for the failures included: agency cultures
resistant to change and new ideas; inappropriate incentives for promotion; and a lack of cooperation between the
FBI, CIA and the rest of the United States Intelligence Community. The book blamed the FBI's decentralized structure,
which prevented effective communication and cooperation among different FBI offices. The book suggested that the
FBI has not evolved into an effective counter-terrorism or counter-intelligence agency, due in large part to deeply
ingrained agency cultural resistance to change. For example, FBI personnel practices continue to treat all staff
other than special agents as support staff, classifying intelligence analysts alongside the FBI's auto mechanics
and janitors. The USA PATRIOT Act increased the powers allotted to the FBI, especially in wiretapping and monitoring
of Internet activity. One of the most controversial provisions of the act is the so-called sneak and peek provision,
granting the FBI powers to search a house while the residents are away, and not requiring them to notify the residents
for several weeks afterwards. Under the PATRIOT Act's provisions, the FBI also resumed inquiring into the library
records of those who are suspected of terrorism (something it had supposedly not done since the 1970s). The FBI Laboratory,
established with the formation of the BOI, did not appear in the J. Edgar Hoover Building until its completion in
1974. The lab serves as the primary lab for most DNA, biological, and physical work. Public tours of FBI headquarters
ran through the FBI laboratory workspace before the move to the J. Edgar Hoover Building. The services the lab conducts
include Chemistry, Combined DNA Index System (CODIS), Computer Analysis and Response, DNA Analysis, Evidence Response,
Explosives, Firearms and Tool marks, Forensic Audio, Forensic Video, Image Analysis, Forensic Science Research, Forensic
Science Training, Hazardous Materials Response, Investigative and Prospective Graphics, Latent Prints, Materials
Analysis, Questioned Documents, Racketeering Records, Special Photographic Analysis, Structural Design, and Trace
Evidence. The services of the FBI Laboratory are used by many state, local, and international agencies free of charge.
The lab also maintains a second lab at the FBI Academy. In 2000, the FBI began the Trilogy project to upgrade its
outdated information technology (IT) infrastructure. This project, originally scheduled to take three years and cost
around $380 million, ended up going far over budget and behind schedule. Efforts to deploy modern computers and networking
equipment were generally successful, but attempts to develop new investigation software, outsourced to Science Applications
International Corporation (SAIC), were not. Virtual Case File, or VCF, as the software was known, was plagued by
poorly defined goals, and repeated changes in management. In January 2005, more than two years after the software
was originally planned for completion, the FBI officially abandoned the project. At least $100 million (and much
more by some estimates) was spent on the project, which never became operational. The FBI has been forced to continue
using its decade-old Automated Case Support system, which IT experts consider woefully inadequate. In March 2005,
the FBI announced it was beginning a new, more ambitious software project, code-named Sentinel, which they expected
to complete by 2009. An FBI special agent is issued a Glock Model 22 pistol or a Glock 23 in .40 S&W caliber. If
they fail their first qualification, they are issued either a Glock 17 or Glock 19, to aid in their next qualification.
In May 1997, the FBI officially adopted the Glock .40 S&W pistol for general agent use and first issued it to New
Agent Class 98-1 in October 1997. At present, the Model 23 "FG&R" (finger groove and rail) is the issue sidearm.
New agents are issued firearms, on which they must qualify, on successful completion of their training at the FBI
Academy. The Glock 26 in 9×19mm Parabellum, and Glock Models 23 and 27 in .40 S&W caliber are authorized as secondary
weapons. Special agents are authorized to purchase and qualify with the Glock Model 21 in .45 ACP. Special agents
of the FBI HRT (Hostage Rescue Team), and regional SWAT teams are issued the Springfield Professional Model 1911A1
.45 ACP pistol (see FBI Special Weapons and Tactics Teams). Entry into the FBI is highly selective, with applicants intensely
scrutinized and assessed over an extended period. To apply to become an FBI agent, one must be between the ages of
23 and 37. Due to the decision in Robert P. Isabella v. Department of State and Office of Personnel Management, 2008
M.S.P.B. 146, preference-eligible veterans may apply after age 37. In 2009, the Office of Personnel Management issued
implementation guidance on the Isabella decision. The applicant must also hold American citizenship, be of high moral
character, have a clean record, and hold at least a four-year bachelor's degree. At least three years of professional
work experience prior to application is also required. All FBI employees require a Top Secret (TS) security clearance,
and in many instances, employees need a TS/SCI (Top Secret/Sensitive Compartmented Information) clearance. To obtain
a security clearance, all potential FBI personnel must pass a series of Single Scope Background Investigations (SSBI),
which are conducted by the Office of Personnel Management. Special Agent candidates also have to pass a Physical
Fitness Test (PFT), which includes a 300-meter run, one-minute sit-ups, maximum push-ups, and a 1.5-mile (2.4 km)
run. Personnel must pass a polygraph test with questions including possible drug use. Applicants who fail polygraphs
may not gain employment with the FBI. The FBI has maintained files on numerous people, including celebrities such
as Elvis Presley, Frank Sinatra, John Denver, John Lennon, Jane Fonda, Groucho Marx, Charlie Chaplin, the band MC5,
Lou Costello, Sonny Bono, Bob Dylan, Michael Jackson, and Mickey Mantle. The files were collected for various reasons.
Some of the subjects were investigated for alleged ties to the Communist party (Charlie Chaplin and Groucho Marx),
or in connection with antiwar activities during the Vietnam War (John Denver, John Lennon, and Jane Fonda). Numerous
celebrity files concern threats or extortion attempts against them (Sonny Bono, John Denver, John Lennon, Elvis Presley,
Michael Jackson, Mickey Mantle, Groucho Marx, and Frank Sinatra). In December 1994, after being tipped off by his
former FBI handler about a pending indictment under the Racketeer Influenced and Corrupt Organizations Act, Bulger
fled Boston and went into hiding. For 16 years, he remained at large. For 12 of those years, Bulger was prominently
listed on the FBI Ten Most Wanted Fugitives list. Beginning in 1997, the New England media exposed criminal actions
by federal, state, and local law enforcement officials tied to Bulger. The revelation caused great embarrassment
to the FBI. In 2002, Special Agent John J. Connolly was convicted of federal racketeering charges for helping Bulger
avoid arrest. In 2008, Special Agent Connolly completed his term on the federal charges and was transferred to Florida
where he was convicted of helping plan the murder of John B. Callahan, a Bulger rival. In 2014, that conviction was
overturned on a technicality. Connolly was the agent leading the investigation of Bulger. In August 2007 Virgil Griffith,
a Caltech computation and neural-systems graduate student, created a searchable database that linked changes made
by anonymous Wikipedia editors to companies and organizations from which the changes were made. The database cross-referenced
logs of Wikipedia edits with publicly available records pertaining to the internet IP addresses edits were made from.
Griffith was motivated by the edits from the United States Congress, and wanted to see if others were similarly promoting
themselves. The tool was designed to detect conflict of interest edits. Among his findings were that FBI computers
were used to edit the FBI article in Wikipedia. Although the edits correlated with known FBI IP addresses, there
was no proof that the changes actually came from a member or employee of the FBI, only that someone who had access
to their network had edited the FBI article in Wikipedia. Wikipedia spokespersons received Griffith's "WikiScanner"
positively, noting that it helped prevent conflicts of interest from influencing articles as well as increasing transparency
and mitigating attempts to remove or distort relevant facts. On February 20, 2001, the bureau announced that a special
agent, Robert Hanssen (born 1944), had been arrested for spying for the Soviet Union and then Russia from 1979 to
2001. He is serving 15 consecutive life sentences without the possibility of parole at ADX Florence, a federal supermax
prison near Florence, Colorado. Hanssen was arrested on February 18, 2001 at Foxstone Park near his home in Vienna,
Virginia, and was charged with selling US secrets to the USSR and subsequently Russia for more than US$1.4 million
in cash and diamonds over a 22-year period. On July 6, 2001, he pleaded guilty to 15 counts of espionage in the United
States District Court for the Eastern District of Virginia. His spying activities have been described by the US Department
of Justice's Commission for the Review of FBI Security Programs as "possibly the worst intelligence disaster in U.S.
history". The Federal Bureau of Investigation (FBI) is the domestic intelligence and security service of the United
States, which simultaneously serves as the nation's prime federal law enforcement organization. Operating under the
jurisdiction of the U.S. Department of Justice, the FBI is concurrently a member of the U.S. Intelligence Community and
reports to both the Attorney General and the Director of National Intelligence. A leading U.S. counterterrorism,
counterintelligence, and criminal investigative organization, the FBI has jurisdiction over violations of more than 200
categories of federal crimes. J. Edgar Hoover served as Director from 1924 to 1972, a combined 48 years with the
BOI, DOI, and FBI. He was chiefly responsible for creating the Scientific Crime Detection Laboratory, or the FBI
Laboratory, which officially opened in 1932, as part of his work to professionalize investigations by the government.
Hoover was substantially involved in most major cases and projects that the FBI handled during his tenure. After
Hoover's death, Congress passed legislation that limited the tenure of future FBI Directors to ten years. Despite
its domestic focus, the FBI also maintains a significant international footprint, operating 60 Legal Attache (LEGAT)
offices and 15 sub-offices in U.S. embassies and consulates across the globe. These overseas offices exist primarily
for the purpose of coordination with foreign security services and do not usually conduct unilateral operations in
the host countries. The FBI can and does at times carry out secret activities overseas, just as the CIA has a limited
domestic function; these activities generally require coordination across government agencies. In 1939, the Bureau
began compiling a custodial detention list with the names of those who would be taken into custody in the event of
war with Axis nations. The majority of the names on the list belonged to Issei community leaders, as the FBI investigation
built on an existing Naval Intelligence index that had focused on Japanese Americans in Hawaii and the West Coast,
but many German and Italian nationals also found their way onto the secret list. Robert Shivers, head of the Honolulu
office, obtained permission from Hoover to start detaining those on the list on December 7, 1941, while bombs were
still falling over Pearl Harbor. Mass arrests and searches of homes (in most cases conducted without warrants) began
a few hours after the attack, and over the next several weeks more than 5,500 Issei men were taken into FBI custody.
On February 19, 1942, President Franklin Roosevelt issued Executive Order 9066, authorizing the removal of Japanese
Americans from the West Coast. FBI Director Hoover opposed the subsequent mass removal and confinement of Japanese
Americans authorized under Executive Order 9066, but Roosevelt prevailed. The vast majority went along with the subsequent
exclusion orders, but in a handful of cases where Japanese Americans refused to obey the new military regulations,
FBI agents handled their arrests. The Bureau continued surveillance on Japanese Americans throughout the war, conducting
background checks on applicants for resettlement outside camp, and entering the camps (usually without the permission
of War Relocation Authority officials) and grooming informants in order to monitor dissidents and "troublemakers."
After the war, the FBI was assigned to protect returning Japanese Americans from attacks by hostile white communities.
In response to organized crime, on August 25, 1953, the FBI created the Top Hoodlum Program. The national office
directed field offices to gather information on mobsters in their territories and to report it regularly to Washington
for a centralized collection of intelligence on racketeers. After the Racketeer Influenced and Corrupt Organizations
Act, or RICO Act, took effect, the FBI began investigating the former Prohibition-organized groups, which had become
fronts for crime in major cities and small towns. All of the FBI work was done undercover and from within these organizations,
using the provisions provided in the RICO Act. Gradually the agency dismantled many of the groups. Although Hoover
initially denied the existence of a National Crime Syndicate in the United States, the Bureau later conducted operations
against known organized crime syndicates and families, including those headed by Sam Giancana and John Gotti. The
RICO Act is still used today against organized crime and any individuals who might fall under the Act. After Congress
passed the Communications Assistance for Law Enforcement Act (CALEA, 1994), the Health Insurance Portability and
Accountability Act (HIPAA, 1996), and the Economic Espionage Act (EEA, 1996), the FBI followed suit and underwent
a technological upgrade in 1998, just as it did with its CART team in 1991. Computer Investigations and Infrastructure
Threat Assessment Center (CITAC) and the National Infrastructure Protection Center (NIPC) were created to deal with
the increase in Internet-related problems, such as computer viruses, worms, and other malicious programs that threatened
US operations. With these developments, the FBI increased its electronic surveillance in public safety and national
security investigations, adapting to the telecommunications advancements that changed the nature of such problems.
For over 40 years, the FBI crime lab in Quantico believed lead in bullets had unique chemical signatures. It analyzed
the bullets with the goal of matching them chemically, not only to a single batch of ammunition coming out of a factory,
but also to a single box of bullets. The National Academy of Sciences conducted an 18-month independent review of
comparative bullet-lead analysis. In 2003, its National Research Council published a report whose conclusions called
into question 30 years of FBI testimony. It found the analytic model used by the FBI for interpreting results was
deeply flawed, and the conclusion, that bullet fragments could be matched to a box of ammunition, was so overstated
that it was misleading under the rules of evidence. One year later, the FBI decided to stop doing bullet lead analysis.
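The kind of inference the National Research Council objected to can be illustrated with a small sketch. Assuming a simplified "interval overlap" matching rule (the element list, measurement values, and 2-sigma threshold below are hypothetical illustrations, not the FBI's actual protocol), two fragments would be declared analytically indistinguishable when their measured trace-element concentration ranges overlap for every element; the overstatement lay in treating "indistinguishable" as evidence of a common box, since many separate batches and boxes of ammunition share the same composition:

```python
# Illustrative sketch of an interval-overlap matching rule for trace-element
# concentrations. All element names and numbers are hypothetical examples.

ELEMENTS = ["Sb", "Cu", "As", "Bi", "Ag", "Cd", "Sn"]  # trace elements measured

def intervals_overlap(mean_a, sd_a, mean_b, sd_b, k=2.0):
    """True if the two k-sigma concentration intervals share any point."""
    lo_a, hi_a = mean_a - k * sd_a, mean_a + k * sd_a
    lo_b, hi_b = mean_b - k * sd_b, mean_b + k * sd_b
    return lo_a <= hi_b and lo_b <= hi_a

def indistinguishable(frag_a, frag_b):
    """Declare two fragments 'analytically indistinguishable' if every
    measured element's intervals overlap.

    frag_a / frag_b: dict mapping element symbol -> (mean_ppm, sd_ppm).
    Note the flaw the NRC identified: passing this test never implied the
    fragments came from the same box, only from chemically similar lead.
    """
    return all(
        intervals_overlap(*frag_a[e], *frag_b[e])
        for e in ELEMENTS
        if e in frag_a and e in frag_b
    )

# Hypothetical measurements: two similar fragments and one dissimilar one.
frag1 = {"Sb": (700.0, 20.0), "Cu": (60.0, 5.0)}
frag2 = {"Sb": (730.0, 20.0), "Cu": (55.0, 5.0)}
frag3 = {"Sb": (900.0, 20.0), "Cu": (60.0, 5.0)}
```

Under this rule, `frag1` and `frag2` would be declared indistinguishable while `frag1` and `frag3` would not; the statistical criticism concerned what such a declaration was then claimed to prove, not the overlap arithmetic itself.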
The FBI's chief tool against organized crime is the Racketeer Influenced and Corrupt Organizations (RICO) Act. The
FBI is also charged with the responsibility of enforcing compliance of the United States Civil Rights Act of 1964
and investigating violations of the act in addition to prosecuting such violations with the United States Department
of Justice (DOJ). The FBI also shares concurrent jurisdiction with the Drug Enforcement Administration (DEA) in the
enforcement of the Controlled Substances Act of 1970. The FBI often works in conjunction with other Federal agencies,
including the U.S. Coast Guard (USCG) and U.S. Customs and Border Protection (CBP) in seaport and airport security,
and the National Transportation Safety Board in investigating airplane crashes and other critical incidents. Immigration
and Customs Enforcement Homeland Security Investigations (ICE-HSI) has nearly the same amount of investigative manpower
as the FBI, and investigates the largest range of crimes. In the wake of the September 11 attacks, then-Attorney
General Ashcroft assigned the FBI as the designated lead organization in terrorism investigations after the creation
of the U.S. Department of Homeland Security. ICE-HSI and the FBI are both integral members of the Joint Terrorism
Task Force. The FBI Academy, located in Quantico, Virginia, is home to the communications and computer laboratory
the FBI utilizes. It is also where new agents are sent for training to become FBI Special Agents. Going through the
21-week course is required for every Special Agent. First opened for use in 1972, the facility is located on 385 acres
(1.6 km2) of woodland. The Academy trains state and local law enforcement agencies, which are invited to the law
enforcement training center. The FBI units that reside at Quantico are the Field and Police Training Unit, Firearms
Training Unit, Forensic Science Research and Training Center, Technology Services Unit (TSU), Investigative Training
Unit, Law Enforcement Communication Unit, Leadership and Management Science Units (LSMU), Physical Training Unit,
New Agents' Training Unit (NATU), Practical Applications Unit (PAU), the Investigative Computer Training Unit and
the "College of Analytical Studies." The FBI frequently investigated Martin Luther King, Jr. In the mid-1960s, King
began publicly criticizing the Bureau for giving insufficient attention to the use of terrorism by white supremacists.
Hoover responded by publicly calling King the most "notorious liar" in the United States. In his 1991 memoir, Washington
Post journalist Carl Rowan asserted that the FBI had sent at least one anonymous letter to King encouraging him to
commit suicide. Historian Taylor Branch documents an anonymous November 1964 "suicide package" sent by the Bureau
that combined a letter to the civil rights leader telling him "You are done. There is only one way out for you..."
with audio recordings of King's sexual indiscretions. The National Incident Based Reporting System (NIBRS) crime
statistics system aims to address limitations inherent in UCR data. The system is used by law enforcement agencies
in the United States for collecting and reporting data on crimes. Local, state, and federal agencies generate NIBRS
data from their records management systems. Data is collected on every incident and arrest in the Group A offense
category. The Group A offenses are 46 specific crimes grouped in 22 offense categories. Specific facts about these
offenses are gathered and reported in the NIBRS system. In addition to the Group A offenses, eleven Group B offenses
are reported with only the arrest information. The NIBRS system captures greater detail than the summary-based UCR system.
As of 2004, 5,271 law enforcement agencies submitted NIBRS data. That amount represents 20% of the United States
population and 16% of the crime statistics data collected by the FBI. The FBI also spied upon and collected information
on Puerto Rican independence leader Pedro Albizu Campos and his Nationalist political party in the 1930s. Albizu Campos
was convicted three times in connection with deadly attacks on US government officials: in 1937 (Conspiracy to overthrow
the government of the United States), in 1950 (attempted murder), and in 1954 (after an armed assault on the US House
of Representatives while in session; although not present, Albizu Campos was considered the mastermind). The FBI operation
was covert and did not become known until U.S. Congressman Luis Gutierrez had it made public via the Freedom of Information
Act in the 1980s. The FBI is organized into functional branches and the Office of the Director, which contains most
administrative offices. An executive assistant director manages each branch. Each branch is then divided into offices
and divisions, each headed by an assistant director. The various divisions are further divided into sub-branches,
led by deputy assistant directors. Within these sub-branches there are various sections headed by section chiefs.
Section chiefs are ranked analogous to special agents in charge. The FBI is headquartered at the J. Edgar Hoover
Building in Washington, D.C., with 56 field offices in major cities across the United States. The FBI also maintains
over 400 resident agencies across the United States, as well as over 50 legal attachés at United States embassies
and consulates. Many specialized FBI functions are located at facilities in Quantico, Virginia, as well as a "data
campus" in Clarksburg, West Virginia, where 96 million sets of fingerprints "from across the United States are stored,
along with others collected by American authorities from prisoners in Saudi Arabia and Yemen, Iraq and Afghanistan."
The FBI is in the process of moving its Records Management Division, which processes Freedom of Information Act (FOIA)
requests, to Winchester, Virginia. Carnivore was an electronic eavesdropping software system implemented by the FBI
during the Clinton administration; it was designed to monitor email and electronic communications. After prolonged
negative coverage in the press, the FBI changed the name of its system from "Carnivore" to "DCS1000." DCS is reported
to stand for "Digital Collection System"; the system has the same functions as before. The Associated Press reported
in mid-January 2005 that the FBI essentially abandoned the use of Carnivore in 2001, in favor of commercially available
software, such as NarusInsight. FBI Directors are appointed by the President of the United States. They must be confirmed
by the United States Senate and serve a term of office of five years, with a maximum of ten years, if reappointed,
unless they resign or are fired by the President before their term ends. J. Edgar Hoover, appointed by Calvin Coolidge
in 1924, was by far the longest-serving director, serving until his death in 1972. In 1968, Congress passed legislation
as part of the Omnibus Crime Control and Safe Streets Act Pub.L. 90–351, June 19, 1968, 82 Stat. 197 that specified
a 10-year limit, a maximum of two 5-year terms, for future FBI Directors, as well as requiring Senate confirmation
of appointees. As the incumbent, this legislation did not apply to Hoover, only to his successors. The current FBI
Director is James B. Comey, who was appointed in 2013 by Barack Obama. The FBI has been frequently depicted in popular
media since the 1930s. The bureau has participated to varying degrees, which has ranged from direct involvement in
the creative process of film or TV series development, to providing consultation on operations and closed cases.
A few of the notable portrayals of the FBI on television are the 1993-2002 series The X-Files, which concerned investigations
into paranormal phenomena by five fictional Special Agents, and the fictional Counter Terrorist Unit (CTU) agency
in the TV drama 24, which is patterned after the FBI Counterterrorism Division. The 1991 movie Point Break is based
on the true story of an undercover FBI agent who infiltrated a gang of bank robbers. The 1997 movie Donnie Brasco
is based on the true story of undercover FBI agent Joseph D. Pistone infiltrating the Mafia. During the period from
1993 to 2011, FBI agents fired their weapons on 289 occasions; FBI internal reviews found the shots justified in
all but 5 cases; in none of those 5 cases were people wounded. Samuel Walker, a professor of criminal justice at the
University of Nebraska Omaha, said the number of shots found to be unjustified was "suspiciously low." In the same
time period, the FBI wounded 150 people, 70 of whom died; the FBI found all 150 shootings to be justified. Likewise,
during the period from 2011 to the present, all shootings by FBI agents have been found to be justified by internal
investigation. In a 2002 case in Maryland, an innocent man was shot and was later paid $1.3 million by the FBI after
agents mistook him for a bank robber; the internal investigation found that the shooting was justified, based on
the man's actions. In 2005, fugitive Puerto Rican Nationalist leader Filiberto Ojeda Ríos died in a gun battle with
FBI agents in what some charged was an assassination. Puerto Rico Governor Aníbal Acevedo
Vilá criticized the FBI assault as "improper" and "highly irregular" and demanded to know why his government was
not informed of it. The FBI refused to release information beyond the official press release, citing security and
agent privacy issues. The Puerto Rico Justice Department filed suit in federal court against the FBI and the US Attorney
General, demanding information crucial to the Commonwealth's own investigation of the incident. The case was dismissed
by the U.S. Supreme Court. Ojeda Rios' funeral was attended by a long list of dignitaries, including the highest authority
of the Roman Catholic Church in Puerto Rico, Archbishop Roberto Octavio González Nieves, ex-Governor Rafael Hernández
Colón, and numerous other personalities.
According to the apocryphal Gospel of James, Mary was the daughter of Saint Joachim and Saint Anne. Before Mary's conception,
Anne had been barren and was far advanced in years. Mary was given to service as a consecrated virgin in the Temple
in Jerusalem when she was three years old, much like Hannah took Samuel to the Tabernacle as recorded in the Old
Testament. Some apocryphal accounts state that at the time of her betrothal to Joseph, Mary was 12–14 years old,
and he was thirty years old, but such accounts are unreliable. The Gospel of Luke begins its account of Mary's life
with the Annunciation, when the angel Gabriel appeared to her and announced her divine selection to be the mother
of Jesus. According to gospel accounts, Mary was present at the Crucifixion of Jesus and is depicted as a member
of the early Christian community in Jerusalem. According to Apocryphal writings, at some time soon after her death,
her incorrupt body was assumed directly into Heaven, to be reunited with her soul, and the apostles thereupon found
the tomb empty; this is known in Christian teaching as the Assumption. The doctrines of the Assumption or Dormition
of Mary relate to her death and bodily assumption to Heaven. The Roman Catholic Church dogmatically defined the
doctrine of the Assumption in 1950, when Pope Pius XII promulgated Munificentissimus Deus. Whether the Virgin
Mary died or not is not defined dogmatically, however, although a reference to the death of Mary is made in Munificentissimus
Deus. In the Eastern Orthodox Church, the Assumption of the Virgin Mary is believed, and celebrated with her Dormition,
where they believe she died. After Mary continued in the "blood of her purifying" another 33 days for a total of
40 days, she brought her burnt offering and sin offering to the Temple in Jerusalem,[Luke 2:22] so the priest could
make atonement for her sins, being cleansed from her blood.[Leviticus 12:1-8] They also presented Jesus – "As it
is written in the law of the Lord, Every male that openeth the womb shall be called holy to the Lord" (Luke 2:23 and other
verses). After the prophecies of Simeon and the prophetess Anna in Luke 2:25-38 concluded, Joseph and Mary took Jesus
and "returned into Galilee, to their own city Nazareth".[Luke 2:39] According to the writer of Luke, Mary was a relative
of Elizabeth, wife of the priest Zechariah of the priestly division of Abijah, who was herself part of the lineage
of Aaron and so of the tribe of Levi.[Luke 1:5;1:36] Some of those who hold that the relationship with Elizabeth
was on the maternal side maintain that Mary, like Joseph, to whom she was betrothed, was of the House of David and
so of the Tribe of Judah, and that the genealogy of Jesus presented in Luke 3 from Nathan, third son of David and
Bathsheba, is in fact the genealogy of Mary, while the genealogy from Solomon given in
Matthew 1 is that of Joseph. (Aaron's wife Elisheba was of the tribe of Judah, so all their descendants are from
both Levi and Judah.)[Num.1:7 & Ex.6:23] Mary is also depicted as being present among the women at the crucifixion,
standing near "the disciple whom Jesus loved" along with Mary of Clopas and Mary Magdalene,[Jn
19:25-26] to which list Matthew 27:56 adds "the mother of the sons of Zebedee", presumably the Salome mentioned in
Mark 15:40. This representation is called a Stabat Mater. While not recorded in the Gospel accounts, Mary cradling
the dead body of her son is a common motif in art, called a "pietà" or "pity". The adoption of the mother of Jesus
as a virtual goddess may represent a reintroduction of aspects of the worship of Isis. "When looking at images of
the Egyptian goddess Isis and those of the Virgin Mary, one may initially observe iconographic similarities. These
parallels have led many scholars to suggest that there is a distinct iconographic relationship between Isis and Mary.
In fact, some scholars have gone even further, and have suggested, on the basis of this relationship, a direct link
between the cult of Mary and that of Isis." In the 19th century, a house near Ephesus in Turkey was found, based
on the visions of Anne Catherine Emmerich, an Augustinian nun in Germany. It has since been visited as the House
of the Virgin Mary by Roman Catholic pilgrims who consider it the place where Mary lived until her assumption. The
Gospel of John states that Mary went to live with the Disciple whom Jesus loved,[Jn 19:27] identified as John the
Evangelist. Irenaeus and Eusebius of Caesarea wrote in their histories that John later went to Ephesus,
which may provide the basis for the early belief that Mary also lived in Ephesus with John. Devotions to artistic
depictions of Mary vary among Christian traditions. There is a long tradition of Roman Catholic Marian art and no
image permeates Catholic art as does the image of Madonna and Child. The icon of the Virgin Theotokos with Christ
is without doubt the most venerated icon in the Orthodox Church. Both Roman Catholic and Orthodox Christians venerate
images and icons of Mary, given that the Second Council of Nicaea in 787 permitted their veneration with the understanding
that those who venerate the image are venerating the reality of the person it represents, and the 842 Synod of Constantinople
confirming the same. According to Orthodox piety and traditional practice, however, believers ought to pray before
and venerate only flat, two-dimensional icons, and not three-dimensional statues. Ephesus is a cultic centre of Mary,
the site of the first Church dedicated to her and the rumoured place of her death. Ephesus was previously a centre
for worship of Artemis, a virgin goddess, whose Temple at Ephesus was regarded as one of the Seven Wonders
of the Ancient World. The cult of Mary was furthered by Queen Theodora in the 6th century. According to William E.
Phipps, in the book Survivals of Roman Religion "Gordon Laing argues convincingly that the worship of Artemis as
both virgin and mother at the grand Ephesian temple contributed to the veneration of Mary." Some titles have a Biblical
basis, for instance the title Queen Mother has been given to Mary since she was the mother of Jesus, who was sometimes
referred to as the "King of Kings" due to his lineage from King David. The biblical basis for the term Queen can be
seen in the Gospel of Luke 1:32 and the Book of Isaiah 9:6, and Queen Mother from 1 Kings 2:19-20 and Jeremiah 13:18-19.
Other titles have arisen from reported miracles, special appeals or occasions for calling on Mary, e.g., Our Lady
of Good Counsel, Our Lady of Navigators or Our Lady of Ransom who protects captives. Despite Martin Luther's harsh
polemics against his Roman Catholic opponents over issues concerning Mary and the saints, theologians appear to agree
that Luther adhered to the Marian decrees of the ecumenical councils and dogmas of the church. He held fast to the
belief that Mary was a perpetual virgin and the Theotokos or Mother of God. Special attention is given to the assertion
that Luther, some three hundred years before the dogmatization of the Immaculate Conception by Pope Pius IX in 1854,
was a firm adherent of that view. Others maintain that Luther in later years changed his position on the Immaculate
Conception, which at that time was undefined in the Church, while still maintaining the sinlessness of Mary throughout
her life. For Luther, early in his life, the Assumption of Mary was an understood fact, although he later stated
that the Bible did not say anything about it and stopped celebrating its feast. Important to him was the belief that
Mary and the saints do live on after death. "Throughout his career as a priest-professor-reformer, Luther preached,
taught, and argued about the veneration of Mary with a verbosity that ranged from childlike piety to sophisticated
polemics. His views are intimately linked to his Christocentric theology and its consequences for liturgy and piety."
Luther, while revering Mary, came to criticize the "Papists" for blurring the line between high admiration of the
grace of God wherever it is seen in a human being, and religious service given to another creature. He considered
the Roman Catholic practice of celebrating saints' days and making intercessory requests addressed especially to
Mary and other departed saints to be idolatry. His final thoughts on Marian devotion and veneration are preserved
in a sermon preached at Wittenberg only a month before his death. Differences in feasts may also originate from doctrinal
issues—the Feast of the Assumption is such an example. Given that there is no agreement among all Christians on the
circumstances of the death, Dormition or Assumption of Mary, the Feast of the Assumption is celebrated by some denominations
and not others. While the Catholic Church celebrates the Feast of the Assumption on August 15, some Eastern Catholics
celebrate it as Dormition of the Theotokos, and may do so on August 28, if they follow the Julian calendar. The Eastern
Orthodox also celebrate it as the Dormition of the Theotokos, one of their 12 Great Feasts. Protestants do not celebrate
this or any other Marian feast. In paintings, Mary is traditionally portrayed in blue. This tradition can trace
its origin to the Byzantine Empire, from c.500 AD, where blue was "the colour of an empress". A more practical explanation
for the use of this colour is that in Medieval and Renaissance Europe, the blue pigment was derived from the rock
lapis lazuli, a stone imported from Afghanistan that was more valuable than gold. Beyond a painter's retainer, patrons
were expected to purchase any gold or lapis lazuli to be used in the painting. Hence, it was an expression of devotion
and glorification to swathe the Virgin in gowns of blue. Nontrinitarians, such as Unitarians, Christadelphians and
Jehovah's Witnesses also acknowledge Mary as the biological mother of Jesus Christ, but do not recognise Marian titles
such as "Mother of God" as these groups generally reject Christ's divinity. Since Nontrinitarian churches are typically
also mortalist, the issue of praying to Mary, whom they would consider "asleep", awaiting resurrection, does not
arise. Emanuel Swedenborg says that God, as he is in himself, could not directly approach evil spirits to redeem them
without destroying them (Exodus 33:20, John 1:18), so God impregnated Mary, who gave Jesus Christ access to the evil
heredity of the human race, which he could approach, redeem and save. The Qur'an relates detailed narrative accounts
of Maryam (Mary) in two places, Qur'an 3:35–47 and 19:16–34. These state beliefs in both the Immaculate Conception
of Mary and the Virgin birth of Jesus. The account given in Sura 19 is nearly identical with that in the Gospel according
to Luke, and both of these (Luke, Sura 19) begin with an account of the visitation of an angel upon Zakariya (Zecharias)
and Good News of the birth of Yahya (John), followed by the account of the annunciation. It mentions how Mary was
informed by an angel that she would become the mother of Jesus through the actions of God alone. The Perpetual Virginity
of Mary asserts Mary's real and perpetual virginity even in the act of giving birth to the Son of God made Man. The
term Ever-Virgin (Greek ἀειπάρθενος) is applied in this case, stating that Mary remained a virgin for the remainder
of her life, making Jesus her biological and only son, whose conception and birth are held to be miraculous. While
the Orthodox Churches hold the position articulated in the Protoevangelium of James that Jesus' brothers and sisters
are older children of Joseph the Betrothed, step-siblings from an earlier marriage that left him widowed, Roman Catholic
teaching follows the Latin father Jerome in considering them Jesus' cousins. Orthodox Christianity includes a large
number of traditions regarding the Ever Virgin Mary, the Theotokos. The Orthodox believe that she was and remained
a virgin before and after Christ's birth. The Theotokia (i.e., hymns to the Theotokos) are an essential part of the
Divine Services in the Eastern Church and their positioning within the liturgical sequence effectively places the
Theotokos in the most prominent place after Christ. Within the Orthodox tradition, the order of the saints begins
with the Theotokos, followed by Angels, Prophets, Apostles, Fathers, Martyrs, and so on, giving the Virgin Mary precedence over the
angels. She is also proclaimed as the "Lady of the Angels". The multiple churches that form the Anglican Communion
and the Continuing Anglican movement have different views on Marian doctrines and venerative practices given that
there is no single church with universal authority within the Communion and that the mother church (the Church of
England) understands itself to be both "catholic" and "Reformed". Thus unlike the Protestant churches at large, the
Anglican Communion (which includes the Episcopal Church in the United States) includes segments which still retain
some veneration of Mary. Although Calvin and Huldrych Zwingli honored Mary as the Mother of God in the 16th century,
they did so less than Martin Luther. Thus the idea of respect and high honor for Mary was not rejected by the first
Protestants, but they came to criticize the Roman Catholics for venerating Mary. Following the Council of Trent
in the 16th century, as Marian veneration became associated with Catholics, Protestant interest in Mary decreased.
During the Age of the Enlightenment, any residual interest in Mary within Protestant churches almost disappeared,
although Anglicans and Lutherans continued to honor her. In Methodism, Mary is honored as the Mother of God. Methodists
do not have any additional teachings on the Virgin Mary beyond what is mentioned in Scripture and the ecumenical
Creeds. As such, Methodists believe that Jesus was conceived in Mary's womb through the Holy Ghost and accept the doctrine
of the Virgin Birth, although they, along with Orthodox Christians and other Protestant Christians, reject the doctrine
of the Immaculate Conception. John Wesley, the principal founder of the Methodist movement within the Church of England,
believed that Mary "continued a pure and unspotted virgin", thus upholding the doctrine of the perpetual virginity
of Mary. Contemporary Methodism does hold that Mary was a virgin before, during, and immediately after the birth
of Christ. In addition, some Methodists also hold the doctrine of the Assumption of Mary as a pious opinion. She
is the only woman directly named in the Qur'an; declared (uniquely along with Jesus) to be a Sign of God to humanity;
as one who "guarded her chastity"; an obedient one; chosen of her mother and dedicated to Allah whilst still in the
womb; uniquely (amongst women) accepted into service by God; cared for by Zakariya (Zacharias), one of the prophets
in Islam; that in her childhood she resided in the Temple and uniquely had access to Al-Mihrab (understood to
be the Holy of Holies), and was provided with heavenly "provisions" by God. From the early stages of Christianity,
belief in the virginity of Mary and the virgin conception of Jesus, presented in the gospels as holy and supernatural,
was used by detractors, both political and religious, as a topic for discussions, debates and writings, specifically
aimed to challenge the divinity of Jesus and thus Christians and Christianity alike. In the 2nd century, as part
of the earliest anti-Christian polemics, Celsus suggested that Jesus was the illegitimate son of a Roman soldier
named Panthera. The views of Celsus drew responses from Origen, the Church Father in Alexandria, Egypt, who considered
it a fabricated story. How far Celsus sourced his view from Jewish sources remains a subject of discussion. Mary
had been venerated since Early Christianity, and is considered by millions to be the most meritorious saint of the
religion. The Eastern and Oriental Orthodox, Roman Catholic, Anglican, and Lutheran Churches believe that Mary, as
Mother of Jesus, is the Mother of God and the Theotokos, literally "Giver of birth to God". There is significant
diversity in the Marian beliefs and devotional practices of major Christian traditions. The Roman Catholic Church
holds distinctive Marian dogmas; namely her status as the mother of God; her Immaculate Conception; her perpetual
virginity; and her Assumption into heaven. Many Protestants minimize Mary's role within Christianity, based on the
argued brevity of biblical references. Mary (Maryam) also has a revered position in Islam, where a whole chapter
of the Qur'an is devoted to her, also describing the birth of Jesus. Mary resided in "her own house"[Lk.1:56] in
Nazareth in Galilee, possibly with her parents, and during her betrothal — the first stage of a Jewish marriage —
the angel Gabriel announced to her that she was to be the mother of the promised Messiah by conceiving him through
the Holy Spirit, and she responded, "I am the handmaid of the Lord. Let it be done unto me according to your word."
After a number of months, when Joseph was told of her conception in a dream by "an angel of the Lord", he planned
to divorce her; but the angel told him to not hesitate to take her as his wife, which Joseph did, thereby formally
completing the wedding rites.[Mt 1:18-25] The Virgin birth of Jesus was an almost universally held belief among Christians
from the 2nd until the 19th century. It is included in the two most widely used Christian creeds, which state that
Jesus "was incarnate of the Holy Spirit and the Virgin Mary" (the Nicene Creed in what is now its familiar form)
and the Apostles' Creed. The Gospel of Matthew describes Mary as a virgin who fulfilled the prophecy of Isaiah 7:14,
though it renders the Hebrew word almah ("young woman") in that verse as "virgin". The authors
of the Gospels of Matthew and Luke consider Jesus' conception not the result of intercourse and assert that Mary
had "no relations with man" before Jesus' birth.[Mt 1:18] [Mt 1:25] [Lk 1:34] This alludes to the belief that Mary
conceived Jesus through the action of God the Holy Spirit, and not through intercourse with Joseph or anyone else.
In the Catholic Church, Mary is accorded the title "Blessed" (Latin beatus, "blessed"; Greek μακάριος, makarios) in
recognition of her assumption to Heaven and her capacity to intercede on behalf of those who
pray to her. Catholic teachings make clear that Mary is not considered divine and prayers to her are not answered
by her but by God. The four Catholic dogmas regarding Mary are: Mother of God, Perpetual virginity
of Mary, Immaculate Conception (of Mary) and Assumption of Mary. The views of the Church Fathers still play an important
role in the shaping of Orthodox Marian perspective. However, the Orthodox views on Mary are mostly doxological, rather
than academic: they are expressed in hymns, praise, liturgical poetry and the veneration of icons. One of the most
loved Orthodox Akathists (i.e. standing hymns) is devoted to Mary and it is often simply called the Akathist Hymn.
Five of the twelve Great Feasts in Orthodoxy are dedicated to Mary. The Sunday of Orthodoxy directly links the Virgin
Mary's identity as Mother of God with icon veneration. A number of Orthodox feasts are connected with the miraculous
icons of the Theotokos. Mary's special position within God's purpose of salvation as "God-bearer" (Theotokos) is
recognised in a number of ways by some Anglican Christians. All the member churches of the Anglican Communion affirm
in the historic creeds that Jesus was born of the Virgin Mary, and celebrate the feast day of the Presentation
of Christ in the Temple. This feast is called in older prayer books the Purification of the Blessed Virgin Mary on
February 2. The Annunciation of our Lord to the Blessed Virgin on March 25 was from before the time of Bede until
the 18th century New Year's Day in England. The Annunciation is called the "Annunciation of our Lady" in the 1662
Book of Common Prayer. Anglicans also celebrate the Visitation of the Blessed Virgin on 31 May, though in some
provinces the traditional date of July 2 is kept. The feast of St. Mary the Virgin is observed on the traditional
day of the Assumption, August 15. The Nativity of the Blessed Virgin is kept on September 8. Protestants in general
reject the veneration and invocation of the Saints. Protestants typically hold that Mary was the mother of Jesus,
but was an ordinary woman devoted to God. Therefore, there is virtually no Marian veneration, Marian feasts, Marian
pilgrimages, Marian art, Marian music or Marian spirituality in today's Protestant communities. Within these views,
Roman Catholic beliefs and practices are at times rejected, e.g., theologian Karl Barth wrote that "the heresy of
the Catholic Church is its Mariology". The statement that Joseph "knew her not till she brought forth her first born
son" (Matthew 1:25, Douay-Rheims) has been debated among scholars, with some saying that she did not remain a virgin
and some saying that she was a perpetual virgin. Other scholars contend that the Greek word heos (i.e., until) denotes
a state up to a point, but does not mean that the state ended after that point, and that Matthew 1:25 does not confirm
or deny the virginity of Mary after the birth of Jesus. According to Biblical scholar Bart Ehrman the Hebrew word
almah, meaning a young woman of childbearing age, was translated into Greek as parthenos, which specifically means "virgin",
in Isaiah 7:14, which is commonly believed by Christians to be the prophecy of the Virgin Mary referred to in Matthew
1:23. While Matthew and Luke give differing versions of the virgin birth, John quotes the uninitiated Philip and
the disbelieving Jews gathered at Galilee referring to Joseph as Jesus's father. The hagiography of Mary and the
Holy Family can be contrasted with other material in the Gospels. These references include an incident which can
be interpreted as Jesus rejecting his family in the New Testament: "And his mother and his brothers arrived, and
standing outside, they sent in a message asking for him ... And looking at those who sat in a circle around him,
Jesus said, 'These are my mother and my brothers. Whoever does the will of God is my brother, and sister, and mother'."[Mk 3:31-35]
Other verses suggest a conflict between Jesus and his family, including an attempt to have Jesus restrained because
"he is out of his mind", and the famous quote: "A prophet is not without honor except in his own town, among his
relatives and in his own home." A leading Biblical scholar commented: "there are clear signs not only that Jesus's
family rejected his message during his public ministry but that he in turn spurned them publicly". Although the Catholics
and the Orthodox may honor and venerate Mary, they do not view her as divine, nor do they worship her. Roman Catholics
view Mary as subordinate to Christ, but uniquely so, in that she is seen as above all other creatures. Similarly,
theologian Sergei Bulgakov wrote that the Orthodox view Mary as "superior to all created beings" and "ceaselessly
pray for her intercession". However, she is not considered a "substitute for the One Mediator" who is Christ. "Let
Mary be in honor, but let worship be given to the Lord", he wrote. Similarly, Catholics do not worship Mary as a
divine being, but rather "hyper-venerate" her. In Roman Catholic theology, the term hyperdulia is reserved for Marian
veneration, latria for the worship of God, and dulia for the veneration of other saints and angels. The definition
of the three-level hierarchy of latria, hyperdulia and dulia goes back to the Second Council of Nicaea in 787. Mary
is referred to by the Eastern Orthodox Church, Oriental Orthodoxy, the Anglican Church, and all Eastern Catholic
Churches as Theotokos, a title recognized at the Third Ecumenical Council (held at Ephesus to address the teachings
of Nestorius, in 431). Theotokos (and its Latin equivalents, "Deipara" and "Dei genetrix") literally means "God-bearer".
The equivalent phrase "Mater Dei" (Mother of God) is more common in Latin and so also in the other languages used
in the Western Catholic Church, but this same phrase in Greek (Μήτηρ Θεοῦ), in the abbreviated form of the first
and last letter of the two words (ΜΡ ΘΥ), is the indication attached to her image in Byzantine icons. The Council
stated that the Church Fathers "did not hesitate to speak of the holy Virgin as the Mother of God". Roman Catholics
believe in the Immaculate Conception of Mary, as proclaimed Ex Cathedra by Pope Pius IX in 1854, namely that she
was filled with grace from the very moment of her conception in her mother's womb and preserved from the stain of
original sin. The Latin Rite of the Roman Catholic Church has a liturgical feast by that name, kept on December 8.
Orthodox Christians reject the Immaculate Conception dogma principally because their understanding of ancestral sin
(the Greek term corresponding to the Latin "original sin") differs from the Augustinian interpretation and that of
the Roman Catholic Church. The Protoevangelium of James, an extra-canonical book, has been the source of many Orthodox
beliefs on Mary. The account of Mary's life presented includes her consecration as a virgin at the temple at age
three. The High Priest Zachariah blessed Mary and informed her that God had magnified her name among many generations.
Zachariah placed Mary on the third step of the altar, whereby God gave her grace. While in the temple, Mary was miraculously
fed by an angel, until she was twelve years old. At that point an angel told Zachariah to betroth Mary to a widower
in Israel, who would be indicated. This story provides the theme of many hymns for the Feast of Presentation of Mary,
and icons of the feast depict the story. The Orthodox believe that Mary was instrumental in the growth of Christianity
during the life of Jesus, and after his Crucifixion, and Orthodox Theologian Sergei Bulgakov wrote: "The Virgin Mary
is the center, invisible, but real, of the Apostolic Church." In the Islamic tradition, Mary and Jesus were the only
children who could not be touched by Satan at the moment of their birth, for God imposed a veil between them and
Satan. According to author Shabbir Akhtar, the Islamic perspective on Mary's Immaculate Conception is compatible
with the Catholic doctrine of the same topic. "O People of the Book! Do not go beyond the bounds in your religion,
and do not say anything of Allah but the truth. The Messiah, Jesus son of Mary, was but a Messenger of God, and a
Word of His (Power) which He conveyed to Mary, and a spirit from Him. So believe in Allah (as the One, Unique God),
and His Messengers (including Jesus, as Messenger); and do not say: (Allah is one of) a trinity. Give up (this assertion)
– (it is) for your own good (to do so). Allah is but One Allah; All-Glorified He is in that He is absolutely above
having a son. To Him belongs whatever is in the heavens and whatever is on the earth. And Allah suffices as the One
to be relied on, to Whom affairs should be referred." (Qur'an 4:171) The issue of the parentage of Jesus in the Talmud
also affects the view of his mother. However, the Talmud does not mention Mary by name, and its treatment is considerate rather than
purely polemical. The story about Panthera is also found in the Toledot Yeshu, the literary origins of which cannot
be traced with any certainty; given that it is unlikely to predate the 4th century, it is far too late to include
authentic remembrances of Jesus. The Blackwell Companion to Jesus states that the Toledot Yeshu has no historical
facts as such, and was perhaps created as a tool for warding off conversions to Christianity. The name Panthera may
be a distortion of the term parthenos (virgin) and Raymond E. Brown considers the story of Panthera a fanciful explanation
of the birth of Jesus which includes very little historical evidence. Robert Van Voorst states that given that Toledot
Yeshu is a medieval document and due to its lack of a fixed form and orientation towards a popular audience, it is
"most unlikely" to have reliable historical information. The gospels of Matthew and Luke in the New Testament describe
Mary as a virgin (Greek: παρθένος, parthénos) and Christians believe that she conceived her son while a virgin by
the Holy Spirit. This took place when she was already betrothed to Joseph and was awaiting the concluding rite of
marriage, the formal home-taking ceremony. She married Joseph and accompanied him to Bethlehem, where Jesus was born.
According to ancient Jewish custom, Mary could have been betrothed at about 12; however, there is no direct evidence
of Mary's age at betrothal or in pregnancy. The term "betrothal" is an awkward translation of kiddushin; according
to Jewish law, those called "betrothed" were actually husband and wife. Since the angel Gabriel had told Mary
(according to Luke 1:36) that Elizabeth—having previously been barren—was then miraculously pregnant, Mary hurried
to see Elizabeth, who was living with her husband Zechariah in "Hebron, in the hill country of Judah". Mary arrived
at the house and greeted Elizabeth who called Mary "the mother of my Lord", and Mary spoke the words of praise that
later became known as the Magnificat from her first word in the Latin version.[Luke 1:46-55] After about three months,
Mary returned to her own house.[Lk 1:56-57] Christian Marian perspectives include a great deal of diversity. While
some Christians such as Roman Catholics and Eastern Orthodox have well established Marian traditions, Protestants
at large pay scant attention to Mariological themes. Roman Catholic, Eastern Orthodox, Oriental Orthodox, Anglican,
and Lutherans venerate the Virgin Mary. This veneration especially takes the form of prayer for intercession with
her Son, Jesus Christ. Additionally it includes composing poems and songs in Mary's honor, painting icons or carving
statues of her, and conferring titles on Mary that reflect her position among the saints.
The main passenger airport serving the metropolis and the state is Melbourne Airport (also called Tullamarine Airport), which
is the second busiest in Australia, and the Port of Melbourne is Australia's busiest seaport for containerised and
general cargo. Melbourne has an extensive transport network. The main metropolitan train terminus is Flinders Street
Station, and the main regional train and coach terminus is Southern Cross Station. Melbourne is also home to Australia's
most extensive freeway network and has the world's largest urban tram network. Between 1836 and 1842 Victorian Aboriginal
groups were largely dispossessed of their land. By January 1844, there were said to be 675 Aborigines resident
in squalid camps in Melbourne. The British Colonial Office appointed five Aboriginal Protectors for the Aborigines
of Victoria in 1839; however, their work was nullified by a land policy that favoured squatters taking possession
of Aboriginal lands. By 1845, fewer than 240 wealthy Europeans held all the pastoral licences then issued in Victoria
and became a powerful political and economic force in Victoria for generations to come. With the gold rush largely
over by 1860, Melbourne continued to grow on the back of continuing gold mining, as the major port for exporting
the agricultural products of Victoria, especially wool, and a developing manufacturing sector protected by high tariffs.
An extensive radial railway network centred on Melbourne and spreading out across the suburbs and into the countryside
was established from the late 1850s. Further major public buildings were begun in the 1860s and 1870s such as the
Supreme Court, Government House, and the Queen Victoria Market. The central city filled up with shops and offices,
workshops, and warehouses. Large banks and hotels faced the main streets, with fine townhouses in the east end of
Collins Street, contrasting with tiny cottages down laneways within the blocks. The Aboriginal population continued
to decline with an estimated 80% total decrease by 1863, due primarily to introduced diseases (particularly smallpox),
frontier violence, and dispossession from their lands. Melbourne (/ˈmɛlbərn/, locally /ˈmɛlbən/) is the capital and most
populous city in the Australian state of Victoria, and the second most populous city in Australia and Oceania. The
name "Melbourne" refers to the area of urban agglomeration (as well as a census statistical division) spanning 9,900
km2 (3,800 sq mi) which comprises the broader metropolitan area, as well as being the common name for its city centre.
The metropolis is located on the large natural bay of Port Phillip and expands into the hinterlands towards the Dandenong
and Macedon mountain ranges, Mornington Peninsula and Yarra Valley. Melbourne consists of 31 municipalities. It has
a population of 4,347,955 as of 2013, and its inhabitants are called Melburnians. A brash boosterism that had typified
Melbourne during this time ended in the early 1890s with a severe depression of the city's economy, sending the local
finance and property industries into a period of chaos during which 16 small "land banks" and building societies
collapsed, and 133 limited companies went into liquidation. The Melbourne financial crisis was a contributing factor
in the Australian economic depression of the 1890s and the Australian banking crisis of 1893. The effects of the
depression on the city were profound, with virtually no new construction until the late 1890s. An influx of interstate
and overseas migrants, particularly Irish, German and Chinese, saw the development of slums including a temporary
"tent city" established on the southern banks of the Yarra. Chinese migrants founded the Melbourne Chinatown in 1851,
which remains the longest continuous Chinese settlement in the Western World. In the aftermath of the Eureka Stockade,
mass public support for the plight of the miners resulted in major political changes to the colony, including changes
to working conditions across local industries including mining, agriculture and manufacturing. The nationalities
involved in the Eureka revolt and Burke and Wills expedition gave an indication of immigration flows in the second
half of the nineteenth century. The decade began with the Melbourne International Exhibition in 1880, held in the
large purpose-built Exhibition Building. In 1880 a telephone exchange was established and in the same year the foundations
of St Paul's were laid; in 1881 electric light was installed in the Eastern Market, and in the following year a
generating station capable of supplying 2,000 incandescent lamps was in operation. In 1885 the first line of the
Melbourne cable tramway system was built, becoming one of the world's most extensive systems by 1890. Melbourne is
also prone to isolated convective showers forming when a cold pool crosses the state, especially if there is considerable
daytime heating. These showers are often heavy and can bring hail, squalls and significant drops in temperature,
but they can also pass through very quickly, with a rapid clearing trend to sunny, relatively calm weather and
a return to the pre-shower temperature. This often occurs in the space of minutes and can be
repeated many times in a day, giving Melbourne a reputation for having "four seasons in one day", a phrase that is
part of local popular culture and familiar to many visitors to the city. The lowest temperature on record is −2.8
°C (27.0 °F), on 21 July 1869. The highest temperature recorded in Melbourne city was 46.4 °C (115.5 °F), on 7 February
2009. While snow is occasionally seen at higher elevations in the outskirts of the city, it has not been recorded
in the Central Business District since 1986. Melbourne's CBD, compared with other Australian cities, has comparatively
unrestricted height limits and as a result of waves of post-war development contains five of the six tallest buildings
in Australia, the tallest of which is the Eureka Tower, situated in Southbank. An observation deck near its
top overlooks all of Melbourne's other structures. The Rialto Tower, the city's second tallest, remains
the tallest building in the old CBD; its observation deck for visitors has recently closed. Since the mid-1990s,
Melbourne has maintained significant population and employment growth. There has been substantial international investment
in the city's industries and property market. Major inner-city urban renewal has occurred in areas such as Southbank,
Port Melbourne, Melbourne Docklands and more recently, South Wharf. According to the Australian Bureau of Statistics,
Melbourne sustained the highest population increase and economic growth rate of any Australian capital city in the
three years ended June 2004. These factors have led to population growth and further suburban expansion through the
2000s. Melbourne is experiencing high population growth, generating high demand for housing. This housing boom has
increased house prices and rents, as well as the availability of all types of housing. Subdivision regularly occurs
in the outer areas of Melbourne, with numerous developers offering house and land packages. However, after 10 years
of planning policies to encourage medium-density and high-density development in existing areas with greater access
to public transport and other services, Melbourne's middle and outer-ring suburbs have seen significant brownfields
redevelopment. Founded by free settlers from the British Crown colony of Van Diemen's Land on 30 August 1835, in
what was then the colony of New South Wales, it was incorporated as a Crown settlement in 1837. It was named "Melbourne"
by the Governor of New South Wales, Sir Richard Bourke, in honour of the British Prime Minister of the day, William
Lamb, 2nd Viscount Melbourne. It was officially declared a city by Queen Victoria in 1847, after which it became
the capital of the newly founded colony of Victoria in 1851. During the Victorian gold rush of the 1850s, it was
transformed into one of the world's largest and wealthiest cities. After the federation of Australia in 1901, it
served as the nation's interim seat of government until 1927. In response to attribution of recent climate change,
the City of Melbourne, in 2002, set a target to reduce carbon emissions to net zero by 2020 and Moreland City Council
established the Zero Moreland program; however, not all metropolitan municipalities have followed, with the City of
Glen Eira notably deciding in 2009 not to become carbon neutral. Melbourne has one of the largest urban footprints
in the world due to its low density housing, resulting in a vast suburban sprawl, with a high level of car dependence
and minimal public transport outside of inner areas. Much of the vegetation within the city consists of non-native species,
most of European origin, and in many cases it plays host to invasive species and noxious weeds. Significant introduced
urban pests include the common myna, feral pigeon, brown rat, European wasp, common starling and red fox. Many outlying
suburbs, particularly towards the Yarra Valley and the hills to the north-east and east, have gone for extended periods
without regenerative fires, leading to a lack of saplings and undergrowth in urbanised native bushland. The Department
of Sustainability and Environment partially addresses this problem by regularly burning off. Several national parks
have been designated around the urban area of Melbourne, including the Mornington Peninsula National Park, Port Phillip
Heads Marine National Park and Point Nepean National Park in the south east, Organ Pipes National Park to the north
and Dandenong Ranges National Park to the east. There are also a number of significant state parks just outside Melbourne.
Responsibility for regulating pollution falls under the jurisdiction of the EPA Victoria and several local councils.
Air quality, by world standards, is classified as good. Summer and autumn are the worst times of year for
atmospheric haze in the urban area. In 2012, the city contained a total of 594 high-rise buildings, with 8 under
construction, 71 planned and 39 at proposal stage, making the city's skyline the second largest in Australia. The CBD is dominated by modern office buildings, including the Rialto Towers (1986), built on the site of several grand classical Victorian buildings, two of which — the Rialto Building (1889) designed by William Pitt and the Winfield Building (1890) designed by Charles D'Ebro and Richard Speight — still remain today. More recently it has gained high-rise apartment buildings, including Eureka Tower (2006), which as of January 2014 was listed as the 13th tallest residential building in the world. In May and June 1835, the area which is now central and northern Melbourne was explored by John Batman,
a leading member of the Port Phillip Association in Van Diemen's Land (now known as Tasmania), who claimed to have
negotiated a purchase of 600,000 acres (2,400 km2) with eight Wurundjeri elders. Batman selected a site on the northern
bank of the Yarra River, declaring that "this will be the place for a village". Batman then returned to Launceston
in Tasmania. In early August 1835 a different group of settlers, including John Pascoe Fawkner, left Launceston on
the ship Enterprize. Fawkner was forced to disembark at Georgetown, Tasmania, because of outstanding debts. The remainder
of the party continued and arrived at the mouth of the Yarra River on 15 August 1835. On 30 August 1835 the party
disembarked and established a settlement at the site of the current Melbourne Immigration Museum. Batman and his
group arrived on 2 September 1835 and the two groups ultimately agreed to share the settlement. Melbourne is typical
of Australian capital cities in that after the turn of the 20th century, it expanded with the underlying notion of
a 'quarter acre home and garden' for every family, often referred to locally as the Australian Dream. This, coupled
with the popularity of the private automobile after 1945, led to the auto-centric urban structure present today
in the middle and outer suburbs. Much of metropolitan Melbourne is accordingly characterised by low density sprawl,
whilst its inner city areas feature predominantly medium-density, transit-oriented urban forms. The city centre,
Docklands, St. Kilda Road and Southbank areas feature high-density forms. With the wealth brought on by the gold
rush following closely on the heels of the establishment of Victoria as a separate colony and the subsequent need
for public buildings, a program of grand civic construction soon began. The 1850s and 1860s saw the commencement
of Parliament House, the Treasury Building, the Old Melbourne Gaol, Victoria Barracks, the State Library, University,
the General Post Office, Customs House, the Melbourne Town Hall and St Patrick's Cathedral, though many remained uncompleted
for decades, with some still not finished. Melbourne's rich and diverse literary history was recognised in 2008 when
it became the second UNESCO City of Literature. The State Library of Victoria is one of Australia's oldest cultural
institutions and one of many public and university libraries across the city. Melbourne also has Australia's widest
range of bookstores, as well the nation's largest publishing sector. The city is home to significant writers' festivals,
most notably the Melbourne Writers Festival. Several major literary prizes are open to local writers including the
Melbourne Prize for Literature and the Victorian Premier's Literary Awards. Significant novels set in Melbourne include
Fergus Hume's The Mystery of a Hansom Cab, Helen Garner's Monkey Grip and Christos Tsiolkas' The Slap. Notable writers
and poets from Melbourne include Thomas Browne, C. J. Dennis, Germaine Greer and Peter Carey. During a visit in 1885,
English journalist George Augustus Henry Sala coined the phrase "Marvellous Melbourne", which stuck long into the
twentieth century and is still used today by Melburnians. Growing building activity culminated in a "land boom" which,
in 1888, reached a peak of speculative development fuelled by consumer confidence and escalating land value. As a
result of the boom, large commercial buildings, coffee palaces, terrace housing and palatial mansions proliferated
in the city. The establishment of a hydraulic facility in 1887 allowed for the local manufacture of elevators, resulting
in the first construction of high-rise buildings; most notably the APA Building, amongst the world's tallest commercial
buildings upon completion in 1889. This period also saw the expansion of a major radial rail-based transport network.
Melbourne rates highly in education, entertainment, health care, research and development, tourism and sport, making
it the world's most liveable city for the fifth year in a row in 2015, according to the Economist Intelligence Unit.
It is a leading financial centre in the Asia-Pacific region, and ranks among the top 30 cities in the world in the
Global Financial Centres Index. Referred to as Australia's "cultural capital", it is the birthplace of Australian
impressionism, Australian rules football, the Australian film and television industries, and Australian contemporary
dance such as the Melbourne Shuffle. It is recognised as a UNESCO City of Literature and a major centre for street
art, music and theatre. It is home to many of Australia's largest and oldest cultural institutions such as the Melbourne
Cricket Ground, the National Gallery of Victoria, the State Library of Victoria and the UNESCO World Heritage-listed
Royal Exhibition Building. Some of Australia's most prominent and well known schools are based in Melbourne. Of the
top twenty high schools in Australia according to the Better Education ranking, six are located in Melbourne. There
has also been a rapid increase in the number of international students studying in the city. Furthermore, Melbourne
was ranked the world's fourth top university city in 2008 after London, Boston and Tokyo in a poll commissioned by
the Royal Melbourne Institute of Technology. Melbourne is the home of seven public universities: the University of
Melbourne, Monash University, Royal Melbourne Institute of Technology (RMIT University), Deakin University, La Trobe
University, Swinburne University of Technology and Victoria University. Height limits in the Melbourne CBD were lifted
in 1958, after the construction of ICI House, transforming the city's skyline with the introduction of skyscrapers.
Suburban expansion then intensified, serviced by new indoor malls beginning with Chadstone Shopping Centre. The post-war
period also saw a major renewal of the CBD and St Kilda Road which significantly modernised the city. New fire regulations
and redevelopment saw most of the taller pre-war CBD buildings either demolished or partially retained through a
policy of facadism. Many of the larger suburban mansions from the boom era were also either demolished or subdivided.
Melbourne is notable as the host city for the 1956 Summer Olympic Games (the first Olympic Games held in the southern
hemisphere and Oceania, with all previous games held in Europe and the United States), along with the 2006 Commonwealth
Games. Melbourne is so far the southernmost city to host the games. The city is home to three major annual international
sporting events: the Australian Open (one of the four Grand Slam tennis tournaments); the Melbourne Cup (horse racing);
and the Australian Grand Prix (Formula One). The Australian Masters golf tournament has also been held in Melbourne since
1979, having been co-sanctioned by the European Tour from 2006 to 2009. Melbourne was proclaimed the "World's Ultimate
Sports City" in 2006, 2008 and 2010. The city is home to the National Sports Museum, which until 2003 was located outside the members' pavilion at the Melbourne Cricket Ground. It reopened in 2008 in the Olympic Stand. Batman's
Treaty with the Aborigines was annulled by the New South Wales governor (who at the time governed all of eastern
mainland Australia), with compensation paid to members of the association. In 1836, Governor Bourke declared the
city the administrative capital of the Port Phillip District of New South Wales, and commissioned the first plan
for the city, the Hoddle Grid, in 1837. The settlement was named Batmania after Batman. However, later that year
the settlement was named "Melbourne" after the British Prime Minister, William Lamb, 2nd Viscount Melbourne, whose
seat was Melbourne Hall in the market town of Melbourne, Derbyshire. On 13 April 1837 the settlement's general post
office officially opened with that name. From 2006, the growth of the city extended into "green wedges" and beyond
the city's urban growth boundary. Predictions of the city's population reaching 5 million people pushed the state
government to review the growth boundary in 2008 as part of its Melbourne @ Five Million strategy. In 2009, Melbourne
was less affected by the late-2000s financial crisis than other Australian cities. At this time, more
new jobs were created in Melbourne than any other Australian city—almost as many as the next two fastest growing
cities, Brisbane and Perth, combined, and Melbourne's property market remained strong, resulting in historically
high property prices and widespread rent increases. Melbourne is also an important financial centre. Two of the big
four banks, NAB and ANZ, are headquartered in Melbourne. The city has carved out a niche as Australia's leading centre
for superannuation (pension) funds, with 40% of the total, and 65% of industry super-funds, including the $109 billion
Federal Government Future Fund. The city was rated 41st within the top 50 financial cities as surveyed by the MasterCard
Worldwide Centers of Commerce Index (2008), second only to Sydney (12th) in Australia. Melbourne is Australia's second-largest
industrial centre. It is the Australian base for a number of significant manufacturers including Boeing, truck-makers
Kenworth and Iveco, Cadbury as well as Bombardier Transportation and Jayco, among many others. It is also home to
a wide variety of other manufacturers, ranging from petrochemicals and pharmaceuticals to fashion garments, paper
manufacturing and food processing. The south-eastern suburb of Scoresby is home to Nintendo's Australian headquarters.
The city also boasts a research and development hub for Ford Australia, as well as a global design studio and technical
centre for General Motors and Toyota, respectively. The layout of the inner suburbs on a largely one-mile grid pattern,
cut through by wide radial boulevards, and string of gardens surrounding the central city was largely established
in the 1850s and 1860s. These areas were rapidly filled from the mid 1850s by the ubiquitous terrace house, as well
as detached houses and some grand mansions in large grounds, while some of the major roads developed as shopping
streets. Melbourne quickly became a major finance centre, home to several banks, the Royal Mint, and Australia's
first stock exchange in 1861. In 1855 the Melbourne Cricket Club secured possession of its now famous ground, the
MCG. Members of the Melbourne Football Club codified Australian football in 1859, and Yarra rowing clubs and "regattas"
became popular about the same time. In 1861 the Melbourne Cup was first run. In 1864 Melbourne acquired its first
public monument—the Burke and Wills statue. Over two-thirds of Melburnians speak only English at home (68.1%). Chinese
(mainly Cantonese and Mandarin) is the second-most-common language spoken at home (3.6%), with Greek third, Italian
fourth and Vietnamese fifth, each with more than 100,000 speakers. Although Victoria's net interstate migration has
fluctuated, the population of the Melbourne statistical division has grown by about 70,000 people a year since 2005.
Melbourne now attracts the largest proportion of international immigrants (48,000), outpacing Sydney's international migrant intake in percentage terms, and also draws strong interstate migration from Sydney and other capitals due to more affordable housing and cost of living. Melbourne has a temperate oceanic climate (Köppen
climate classification Cfb) and is well known for its changeable weather conditions. This is mainly due to Melbourne's
location on the boundary of the very hot inland areas and the cool southern ocean. This temperature differential
is most pronounced in the spring and summer months and can cause very strong cold fronts to form. These cold fronts
can be responsible for all sorts of severe weather from gales to severe thunderstorms and hail, large temperature
drops, and heavy rain. Another recent environmental issue in Melbourne was the Victorian government's project to deepen the shipping channels serving Melbourne's ports by dredging Port Phillip Bay — the Port Phillip Channel Deepening Project. It was subject
to controversy and strict regulations among fears that beaches and marine wildlife could be affected by the disturbance
of heavy metals and other industrial sediments. Other major pollution problems in Melbourne include levels of bacteria
including E. coli in the Yarra River and its tributaries caused by septic systems, as well as litter. Up to 350,000
cigarette butts enter the storm water runoff every day. Several programs are being implemented to minimise beach
and river pollution. In February 2010, The Transition Decade, an initiative to transition human society, economics
and environment towards sustainability, was launched in Melbourne. To counter the trend towards low-density suburban
residential growth, the government began a series of controversial public housing projects in the inner city by the
Housing Commission of Victoria, which resulted in demolition of many neighbourhoods and a proliferation of high-rise
towers. In later years, with the rapid rise of motor vehicle ownership, the investment in freeway and highway developments
greatly accelerated the outward suburban sprawl and declining inner city population. The Bolte government sought
to rapidly accelerate the modernisation of Melbourne. Major road projects including the remodelling of St Kilda Junction,
the widening of Hoddle Street and then the extensive 1969 Melbourne Transportation Plan changed the face of the city
into a car-dominated environment. The Melbourne rail network has its origins in privately built lines from the 1850s
gold rush era, and today the suburban network consists of 209 suburban stations on 16 lines which radiate from the
City Loop, a partially underground metro section of the network beneath the Central Business District (Hoddle Grid).
Flinders Street Station is Melbourne's busiest railway station, and was the world's busiest passenger station in
1926. It remains a prominent Melbourne landmark and meeting place. The city has rail connections with regional Victorian
cities, as well as direct interstate rail services to Sydney and Adelaide and beyond which depart from Melbourne's
other major rail terminus, Southern Cross Station in Spencer Street. In the 2013–2014 financial year, the Melbourne
rail network recorded 232.0 million passenger trips, the highest in its history. Many rail lines, along with dedicated
lines and rail yards are also used for freight. The Overland to Adelaide departs Southern Cross twice a week, while
the XPT to Sydney departs twice a day. RMIT University was also ranked among the top 51–100 universities in the world
in the subjects of accounting, business and management, communication and media studies, and computer science and information systems. The Swinburne University of Technology, based in the inner-city Melbourne suburb of Hawthorn, is ranked 76–100
in the world for Physics by the Academic Ranking of World Universities making Swinburne the only Australian university
outside the Group of Eight to achieve a top 100 rating in a science discipline. Deakin University maintains two major
campuses in Melbourne and Geelong, and is the third largest university in Victoria. In recent years, the number of
international students at Melbourne's universities has risen rapidly, a result of an increasing number of places
being made available to full fee paying students. Education in Melbourne is overseen by the Victorian Department
of Education and Early Childhood Development (DEECD), whose role is to 'provide policy and planning advice for the
delivery of education'. Melbourne is often referred to as Australia's garden city, and the state of Victoria was
once known as the garden state. There is an abundance of parks and gardens in Melbourne, many close to the CBD with
a variety of common and rare plant species amid landscaped vistas, pedestrian pathways and tree-lined avenues. Melbourne's
parks are often considered the best public parks in all of Australia's major cities. There are also many parks in
the surrounding suburbs of Melbourne, such as in the municipalities of Stonnington, Boroondara and Port Phillip,
south east of the central business district. The extensive area covered by urban Melbourne is formally divided into
hundreds of suburbs (for addressing and postal purposes), and administered as local government areas, 31 of which
are located within the metropolitan area. Port Phillip is often warmer than the surrounding oceans and/or the land
mass, particularly in spring and autumn; this can set up a "bay effect" similar to the "lake effect" seen in colder
climates where showers are intensified leeward of the bay. Relatively narrow streams of heavy showers can often affect
the same places (usually the eastern suburbs) for an extended period, while the rest of Melbourne and surrounds stays
dry. Nonetheless, owing to the rain shadow of the Otway Ranges, Melbourne is overall drier than average for southern
Victoria. Within the city and surrounds, however, rainfall varies widely, from around 425 millimetres (17 in) at
Little River to 1,250 millimetres (49 in) on the eastern fringe at Gembrook. Melbourne receives 48.6 clear days annually.
Dewpoint temperatures in the summer range from 9.5 °C (49.1 °F) to 11.7 °C (53.1 °F). The local councils are responsible
for providing the functions set out in the Local Government Act 1989 such as urban planning and waste management.
Most other government services are provided or regulated by the Victorian state government, which governs from Parliament
House in Spring Street. These include services which are associated with local government in other countries and
include public transport, main roads, traffic control, policing, education above preschool level, health and planning
of major infrastructure projects. The state government retains the right to override certain local government decisions,
including urban planning, and Melburnian issues often feature prominently in state elections. The Hoddle Grid (dimensions
of 1 by 1⁄2 mile (1.61 by 0.80 km)) forms the centre of Melbourne's central business district. The grid's southern
edge fronts onto the Yarra River. Office, commercial and public developments in the adjoining districts of Southbank
and Docklands have made these redeveloped areas into extensions of the CBD in all but name. The city centre has a
reputation for its historic and prominent lanes and arcades (most notably Block Place and Royal Arcade) which contain
a variety of shops and cafés and are a byproduct of the city's layout. The city is recognised for its mix of modern
architecture which intersects with an extensive range of nineteenth and early twentieth century buildings. Some of
the most architecturally noteworthy historic buildings include the World Heritage Site-listed Royal Exhibition Building,
constructed over a two-year period for the Melbourne International Exhibition in 1880, A.C. Goode House, a Neo Gothic
building located on Collins Street designed by Wright, Reed & Beaver (1891), William Pitt's Venetian Gothic style
Old Stock Exchange (1888), William Wardell's Gothic Bank (1883) which features some of Melbourne's finest interiors,
the incomplete Parliament House, St Paul's Cathedral (1891) and Flinders Street Station (1909), which was the busiest
commuter railway station in the world in the mid-1920s. Australian rules football and cricket are the most popular
sports in Melbourne. It is considered the spiritual home of the two sports in Australia. The first official Test
cricket match was played at the Melbourne Cricket Ground in March 1877. The origins of Australian rules football
can be traced to matches played next to the MCG in 1858. The Australian Football League is headquartered at Docklands
Stadium. Nine of the League's teams are based in the Melbourne metropolitan area: Carlton, Collingwood, Essendon,
Hawthorn, Melbourne, North Melbourne, Richmond, St Kilda, and Western Bulldogs. Up to five AFL matches are played
each week in Melbourne, attracting an average of 40,000 people per game. Additionally, the city annually hosts the AFL
Grand Final. Ship transport is an important component of Melbourne's transport system. The Port of Melbourne is Australia's
largest container and general cargo port and also its busiest. The port handled two million shipping containers in
a 12-month period during 2007, making it one of the top five ports in the Southern Hemisphere. Station Pier on Port
Phillip Bay is the main passenger ship terminal with cruise ships and the Spirit of Tasmania ferries which cross
Bass Strait to Tasmania docking there. Ferries and water taxis run from berths along the Yarra River as far upstream
as South Yarra and across Port Phillip Bay. CSL, one of the world's top five biotech companies, and Sigma Pharmaceuticals
have their headquarters in Melbourne. The two are the largest listed Australian pharmaceutical companies. Melbourne
has an important ICT industry that employs over 60,000 people (one third of Australia's ICT workforce), with a turnover
of $19.8 billion and export revenues of $615 million. In addition, tourism also plays an important role in Melbourne's
economy, with about 7.6 million domestic visitors and 1.88 million international visitors in 2004. In 2008, Melbourne
overtook Sydney in the amount of money that domestic tourists spent in the city, accounting for around $15.8 billion
annually. Melbourne has been attracting an increasing share of domestic and international conference markets. Construction
began in February 2006 of a $1 billion 5000-seat international convention centre, Hilton Hotel and commercial precinct
adjacent to the Melbourne Exhibition and Convention Centre to link development along the Yarra River with the Southbank
precinct and multibillion-dollar Docklands redevelopment. In recent years, Melton, Wyndham and Casey, part of the
Melbourne statistical division, have recorded the highest growth rate of all local government areas in Australia.
Melbourne could overtake Sydney in population by 2028. The ABS has projected in two scenarios that Sydney will remain larger than Melbourne beyond 2056, albeit by a margin of less than 3%, compared with a margin of 12% today. Under the first ABS scenario, Melbourne's population could overtake that of Sydney by 2037 or 2039, primarily due to larger internal migration losses assumed for Sydney. Another study claims that Melbourne will surpass
Sydney in population by 2040. Melbourne's live performance institutions date from the foundation of the city, with
the first theatre, the Pavilion, opening in 1841. The city's East End Theatre District includes theatres that similarly
date from 1850s to the 1920s, including the Princess Theatre, Regent Theatre, Her Majesty's Theatre, Forum Theatre,
Comedy Theatre, and the Athenaeum Theatre. The Melbourne Arts Precinct in Southbank is home to Arts Centre Melbourne,
which includes the State Theatre, Hamer Hall, the Playhouse and the Fairfax Studio. The Melbourne Recital Centre
and Southbank Theatre (principal home of the MTC, which includes the Sumner and Lawler performance spaces) are also
located in Southbank. The Sidney Myer Music Bowl, which dates from 1955, is located in the gardens of Kings Domain;
and the Palais Theatre is a feature of the St Kilda Beach foreshore. Three daily newspapers serve Melbourne: the
Herald Sun (tabloid), The Age (formerly broadsheet, now compact) and The Australian (national broadsheet). Six free-to-air
television stations service Greater Melbourne and Geelong: ABC Victoria (ABV), SBS Victoria (SBS), Seven Melbourne (HSV), Nine Melbourne (GTV), Ten Melbourne (ATV) and C31 Melbourne (MGV), a community television station. Each station (excluding
C31) broadcasts a primary channel and several multichannels. C31 is only broadcast from the transmitters at Mount
Dandenong and South Yarra. Hybrid digital/print media companies such as Broadsheet and ThreeThousand are based in
and primarily serve Melbourne. The city is home to many professional franchises/teams in national competitions including:
cricket clubs Melbourne Stars, Melbourne Renegades and Victorian Bushrangers, which play in the Big Bash League and
other domestic cricket competitions; soccer clubs Melbourne Victory and Melbourne City FC (known until June 2014
as Melbourne Heart), which play in the A-League competition (both teams play their home games at AAMI Park, with the Victory also playing home games at Etihad Stadium); rugby league club Melbourne Storm, which plays in the NRL competition;
rugby union clubs Melbourne Rebels and Melbourne Rising, which play in the Super Rugby and National Rugby Championship
competitions respectively; netball club Melbourne Vixens, which plays in the trans-Tasman trophy ANZ Championship;
basketball club Melbourne United, which plays in the NBL competition; Bulleen Boomers and Dandenong Rangers, which
play in the WNBL; ice hockey teams Melbourne Ice and Melbourne Mustangs, who play in the Australian Ice Hockey League;
and baseball club Melbourne Aces, which plays in the Australian Baseball League. Rowing is also a large part of Melbourne's
sporting identity, with a number of clubs located on the Yarra River, out of which many Australian Olympians trained.
The city previously held the nation's premier long-distance swimming event, the annual Race to Prince's Bridge, in
the Yarra River. Like many Australian cities, Melbourne has a high dependency on the automobile for transport, particularly
in the outer suburban areas where the largest number of cars are bought, with a total of 3.6 million private vehicles
using 22,320 km (13,870 mi) of road, and one of the highest lengths of road per capita in the world. The early 20th
century saw an increase in popularity of automobiles, resulting in large-scale suburban expansion. By the mid 1950s
there were just under 200 passenger vehicles per 1,000 people; by 2013 there were 600 passenger vehicles per 1,000 people.
Today it has an extensive network of freeways and arterial roadways used by private vehicles including freight as
well as public transport systems including bus and taxis. Major highways feeding into the city include the Eastern
Freeway, Monash Freeway and West Gate Freeway (which spans the large West Gate Bridge), whilst other freeways circumnavigate
the city or lead to other major cities, including CityLink (which spans the large Bolte Bridge), Eastlink, the Western
Ring Road, Calder Freeway, Tullamarine Freeway (main airport link) and the Hume Freeway which links Melbourne and
Sydney. After a trend of declining population density since World War II, the city has seen increased density in
the inner and western suburbs, aided in part by Victorian Government planning, such as Postcode 3000 and Melbourne
2030 which have aimed to curtail urban sprawl. According to the Australian Bureau of Statistics as of June 2013,
inner city Melbourne had the highest population density with 12,400 people per km2. Surrounding inner city suburbs
experienced an increase in population density between 2012 and 2013; Carlton (9,000 people per km2) and Fitzroy (7,900).
Television shows are produced in Melbourne, most notably Neighbours, Kath & Kim, Winners and Losers, Offspring, Underbelly, House Husbands, Wentworth and Miss Fisher's Murder Mysteries, along with national news-based programs such as The
Project, Insiders and ABC News Breakfast. Melbourne is also known as the game show capital of Australia; productions
such as Million Dollar Minute, Millionaire Hot Seat and Family Feud are all based in Melbourne. Reality television
productions such as Dancing with the Stars, MasterChef, The Block and The Real Housewives of Melbourne are all filmed
in and around Melbourne. Melbourne has four airports. Melbourne Airport, at Tullamarine, is the city's main international
and domestic gateway and second busiest in Australia. The airport is home base for passenger airlines Jetstar Airways
and Tiger Airways Australia and cargo airlines Australian air Express and Toll Priority; and is a major hub for Qantas
and Virgin Australia. Avalon Airport, located between Melbourne and Geelong, is a secondary hub of Jetstar. It is
also used as a freight and maintenance facility. Buses and taxis are the only forms of public transport to and from
the city's main airports. Air ambulance facilities are available for domestic and international transportation of
patients. Melbourne also has a significant general aviation airport, Moorabbin Airport in the city's south east that
also handles a small number of passenger flights. Essendon Airport, which was once the city's main airport, also handles
passenger flights, general aviation and some cargo flights. Melbourne has an integrated public transport system based
around extensive train, tram, bus and taxi systems. Flinders Street Station was the world's busiest passenger station
in 1927 and Melbourne's tram network overtook Sydney's to become the world's largest in the 1940s, at which time
25% of travellers used public transport, but by 2003 this had declined to just 7.6%. The public transport system was
privatised in 1999, symbolising the peak of the decline. Despite privatisation and successive governments persisting
with auto-centric urban development into the 21st century, there have since been large increases in public transport
patronage, with the mode share increasing to 14.8% of commuter trips and 8.4% of all trips. A target of 20% public transport
mode share for Melbourne by 2020 was set by the state government in 2006. Since 2006 public transport patronage has
grown by over 20%. Water storage and supply for Melbourne is managed by Melbourne Water, which is owned by the Victorian
Government. The organisation is also responsible for management of sewerage and the major water catchments in the
region as well as the Wonthaggi desalination plant and North–South Pipeline. Water is stored in a series of reservoirs
located within and outside the Greater Melbourne area. The largest dam, the Thomson River Dam, located in the Victorian
Alps, is capable of holding around 60% of Melbourne's water capacity, while smaller dams such as the Upper Yarra
Dam, Yan Yean Reservoir, and the Cardinia Reservoir carry secondary supplies. The discovery of gold in Victoria in
mid-1851 led to the Victorian gold rush, and Melbourne, which served as the major port and provided most services for the region, experienced rapid growth. Within months, the city's population had increased by some 60 per cent, from 25,000 to 40,000 inhabitants. Thereafter, growth was exponential, and by 1865 Melbourne had overtaken Sydney
as Australia's most populous city. Additionally, Melbourne, along with the Victorian regional cities of Ballarat and Geelong, was among the wealthiest cities in the world during the gold rush era. At the time of Australia's federation
on 1 January 1901, Melbourne became the seat of government of the federation. The first federal parliament was convened
on 9 May 1901 in the Royal Exhibition Building, subsequently moving to the Victorian Parliament House where it was
located until 1927, when it was moved to Canberra. The Governor-General of Australia resided at Government House
in Melbourne until 1930 and many major national institutions remained in Melbourne well into the twentieth century.
As the centre of Australia's "rust belt", Melbourne experienced an economic downturn between 1989 and 1992, following
the collapse of several local financial institutions. In 1992 the newly elected Kennett government began an effort to revive the economy through an aggressive programme of public works, coupled with the promotion of the city
as a tourist destination with a focus on major events and sports tourism. During this period the Australian Grand
Prix moved to Melbourne from Adelaide. Major projects included the construction of a new facility for the Melbourne
Museum, Federation Square, the Melbourne Exhibition and Convention Centre, Crown Casino and the CityLink tollway.
Other strategies included the privatisation of some of Melbourne's services, including power and public transport,
and a reduction in funding to public services such as health, education and public transport infrastructure. The
city reaches south-east through Dandenong to the growth corridor of Pakenham towards West Gippsland, and southward
through the Dandenong Creek valley, the Mornington Peninsula and the city of Frankston taking in the peaks of Olivers
Hill, Mount Martha and Arthurs Seat, extending along the shores of Port Phillip as a single conurbation to reach
the exclusive suburb of Portsea and Point Nepean. In the west, it extends along the Maribyrnong River and its tributaries
north towards Sunbury and the foothills of the Macedon Ranges, and along the flat volcanic plain country towards
Melton in the west, and Werribee at the foothills of the You Yangs granite ridge south-west of the CBD. The Little River,
and the township of the same name, marks the border between Melbourne and neighbouring Geelong city. Melbourne's
air quality is generally good and has improved significantly since the 1980s. Like many urban environments, the city
faces significant environmental issues, many of them relating to the city's large urban footprint and urban sprawl
and the demand for infrastructure and services. Among these issues are water usage, drought and low rainfall. Drought in Victoria, low rainfall and high temperatures deplete Melbourne's water supplies, and climate change may have a long-term impact on them. In response to low water supplies and low rainfall due to drought, the
government implemented water restrictions and a range of other options including: water recycling schemes for the
city, incentives for household water tanks, greywater systems, water consumption awareness initiatives, and other
water saving and reuse initiatives; also, in June 2007, the Bracks Government announced that a $3.1 billion Wonthaggi
desalination plant would be built on Victoria's south-east coast, capable of treating 150 billion litres of water
per year, as well as a 70 km (43 mi) pipeline from the Goulburn area in Victoria's north to Melbourne and a new water
pipeline linking Melbourne and Geelong. Both projects are being conducted under controversial Public-Private Partnerships
and a multitude of independent reports have found that neither project is required to supply water to the city and
that sustainable water management is the better solution; in the meantime, the city must weather the drought. Melbourne
is an international cultural centre, with cultural endeavours spanning major events and festivals, drama, musicals,
comedy, music, art, architecture, literature, film and television. The climate, waterfront location and nightlife
make it one of the most vibrant destinations in Australia. For five years in a row (as of 2015) it has held the top
position in a survey by The Economist Intelligence Unit of the world's most liveable cities on the basis of a number
of attributes which include its broad cultural offerings. The city celebrates a wide variety of annual cultural events
and festivals of all types, including Australia's largest free community festival—Moomba, the Melbourne International
Arts Festival, Melbourne International Film Festival, Melbourne International Comedy Festival and the Melbourne Fringe
Festival. The culture of the city is an important drawcard for tourists: just under two million international overnight visitors and 57.7 million domestic overnight visitors were recorded in the year ending March 2014. The Story of the
Kelly Gang, the world's first feature film, was shot in Melbourne in 1906. Melbourne filmmakers continued to produce
bushranger films until they were banned by Victorian politicians in 1912 for the perceived promotion of crime, thus
contributing to the decline of one of the silent film era's most productive industries. A notable film shot and set
in Melbourne during Australia's cinematic lull is On the Beach (1959). The 1970s saw the rise of the Australian New
Wave and its Ozploitation offshoot, instigated by Melbourne-based productions Stork and Alvin Purple. Picnic at Hanging
Rock and Mad Max, both shot in and around Melbourne, achieved worldwide acclaim. 2004 saw the construction of Melbourne's
largest film and television studio complex, Docklands Studios Melbourne, which has hosted many domestic productions,
as well as international features. Melbourne is also home to the headquarters of Village Roadshow Pictures, Australia's
largest film production company. Famous modern-day actors from Melbourne include Cate Blanchett, Rachel Griffiths,
Guy Pearce, Geoffrey Rush and Eric Bana. Residential architecture is not defined by a single architectural style,
but rather an eclectic mix of houses, townhouses, condominiums, and apartment buildings in the metropolitan area
(particularly in areas of urban sprawl). Free-standing dwellings with relatively large gardens are perhaps the most
common type of housing outside inner city Melbourne. Victorian terrace housing, townhouses and historic Italianate,
Tudor revival and Neo-Georgian mansions are all common in neighbourhoods such as Toorak. Melbourne has a highly diversified
economy with particular strengths in finance, manufacturing, research, IT, education, logistics, transportation and
tourism. Melbourne houses the headquarters for many of Australia's largest corporations, including five of the ten
largest in the country (based on revenue), and four of the largest six (based on market capitalisation): ANZ, BHP Billiton (the world's largest mining company), the National Australia Bank and Telstra. It also hosts such
representative bodies and think tanks as the Business Council of Australia and the Australian Council of Trade Unions.
Melbourne's suburbs also host the head offices of the Wesfarmers companies Coles (including Liquorland), Bunnings, Target, Kmart and Officeworks. The city is home to Australia's largest and busiest seaport, which handles more than $75 billion
in trade every year and 39% of the nation's container trade. Melbourne Airport provides an entry point for national
and international visitors, and is Australia's second busiest airport. Melbourne has the largest
Greek-speaking population outside of Europe, a population comparable to some larger Greek cities like Larissa and
Volos. Thessaloniki is Melbourne's Greek sister city. The Vietnamese surname Nguyen is the second most common in
Melbourne's phone book after Smith. The city also features substantial Indian, Sri Lankan, and Malaysian-born communities,
in addition to recent South African and Sudanese influxes. The cultural diversity is reflected in the city's restaurants
that serve international cuisines. Melbourne universities have campuses all over Australia and some internationally.
Swinburne University has campuses in Malaysia, while Monash has a research centre based in Prato, Italy. The University
of Melbourne, the second oldest university in Australia, was ranked first among Australian universities in the 2010
THES international rankings. The 2012–2013 Times Higher Education Supplement ranked the University of Melbourne as
the 28th (30th by QS ranking) best university in the world. Monash University was ranked as the 99th (60th by QS
ranking) best university in the world. Both universities are members of the Group of Eight, a coalition of leading
Australian tertiary institutions offering comprehensive and leading education. A long list of AM and FM radio stations
broadcast to greater Melbourne. These include "public" (i.e., state-owned ABC and SBS) and community stations. Many
commercial stations are network-owned: DMG has Nova 100 and Smooth; ARN controls Gold 104.3 and KIIS 101.1; and
Southern Cross Austereo runs both Fox and Triple M. Stations from towns in regional Victoria may also be heard (e.g.
93.9 Bay FM, Geelong). Youth alternatives include the ABC's Triple J and youth-run SYN. Triple J, and similarly PBS and Triple R, strive to play under-represented music. JOY 94.9 caters for gay, lesbian, bisexual and transgender audiences.
For fans of classical music there are 3MBS and ABC Classic FM. Light FM is a contemporary Christian station. AM stations
include the ABC's 774, Radio National, and News Radio, as well as Fairfax affiliates 3AW (talk) and Magic (easy listening).
For sport fans and enthusiasts there is SEN 1116. Melbourne has many community run stations that serve alternative
interests, such as 3CR and 3KND (Indigenous). Many suburbs have low powered community run stations serving local
audiences. Melbourne has the largest tram network in the world which had its origins in the city's 1880s land boom.
In 2013–2014, 176.9 million passenger trips were made by tram. Melbourne's is the only tram network in Australia comprising more than a single line; it consists of 250 km (155.3 mi) of track, 487 trams, 25 routes, and 1,763 tram stops. Around 80 per cent of the network shares road space with other vehicles, while the rest runs on separated track or light rail routes. Melbourne's trams are recognised as iconic cultural assets and a tourist attraction. Heritage
trams operate on the free City Circle route, intended for visitors to Melbourne, and heritage restaurant trams travel
through the city and surrounding areas during the evening. Melbourne is currently building 50 new E Class trams, some of which entered service in 2014. The E Class trams are about 30 metres long and are superior to the C2 class tram
of similar length. Melbourne's bus network consists of almost 300 routes which mainly service the outer suburbs and
fill the gaps in the network between rail and tram services. 127.6 million passenger trips were recorded on Melbourne's
buses in 2013–2014, an increase of 10.2 per cent on the previous year.
John was born to Henry II of England and Eleanor of Aquitaine on 24 December 1166. Henry had inherited significant territories
along the Atlantic seaboard—Anjou, Normandy and England—and expanded his empire by conquering Brittany. Henry married
the powerful Eleanor of Aquitaine, who reigned over the Duchy of Aquitaine and had a tenuous claim to Toulouse and
Auvergne in southern France, in addition to being the former wife of Louis VII of France. The result was the Angevin
Empire, named after Henry's paternal title as Count of Anjou and, more specifically, its seat in Angers.[nb 2] The
Empire, however, was inherently fragile: although all the lands owed allegiance to Henry, the disparate parts each
had their own histories, traditions and governance structures. As one moved south through Anjou and Aquitaine, the
extent of Henry's power in the provinces diminished considerably, scarcely resembling the modern concept of an empire
at all. Some of the traditional ties between parts of the empire such as Normandy and England were slowly dissolving
over time. It was unclear what would happen to the empire on Henry's death. Although the custom of primogeniture,
under which an eldest son would inherit all his father's lands, was slowly becoming more widespread across Europe,
it was less popular amongst the Norman kings of England. Most believed that Henry would divide the empire, giving
each son a substantial portion, and hoping that his children would continue to work together as allies after his
death. To complicate matters, much of the Angevin empire was held by Henry only as a vassal of the King of France
of the rival line of the House of Capet. Henry had often allied himself with the Holy Roman Emperor against France,
making the feudal relationship even more challenging. In 1173 John's elder brothers, backed by Eleanor, rose in revolt
against Henry in the short-lived rebellion of 1173 to 1174. Growing irritated with his subordinate position to Henry
II and increasingly worried that John might be given additional lands and castles at his expense, Henry the Young
King travelled to Paris and allied himself with Louis VII. Eleanor, irritated by her husband's persistent interference
in Aquitaine, encouraged Richard and Geoffrey to join their brother Henry in Paris. Henry II triumphed over the coalition
of his sons, but was generous to them in the peace settlement agreed at Montlouis. Henry the Young King was allowed
to travel widely in Europe with his own household of knights, Richard was given Aquitaine back, and Geoffrey was
allowed to return to Brittany; only Eleanor was imprisoned for her role in the revolt. After Richard's death on 6
April 1199 there were two potential claimants to the Angevin throne: John, whose claim rested on being the sole surviving
son of Henry II, and young Arthur I of Brittany, who held a claim as the son of John's elder brother Geoffrey. Richard
appears to have started to recognise John as his heir presumptive in the final years before his death, but the matter
was not clear-cut and medieval law gave little guidance as to how the competing claims should be decided. With Norman
law favouring John as the only surviving son of Henry II and Angevin law favouring Arthur as the only son of Henry's
elder son, the matter rapidly became an open conflict. John was supported by the bulk of the English and Norman nobility
and was crowned at Westminster, backed by his mother, Eleanor. Arthur was supported by the majority of the Breton,
Maine and Anjou nobles and received the support of Philip II, who remained committed to breaking up the Angevin territories
on the continent. With Arthur's army pressing up the Loire valley towards Angers and Philip's forces moving down
the valley towards Tours, John's continental empire was in danger of being cut in two. Although John was the Count
of Poitou and therefore the rightful feudal lord over the Lusignans, they could legitimately appeal John's actions
in France to his own feudal lord, Philip. Hugh did exactly this in 1201 and Philip summoned John to attend court
in Paris in 1202, citing the Le Goulet treaty to strengthen his case. John was unwilling to weaken his authority
in western France in this way. He argued that he need not attend Philip's court because of his special status as
the Duke of Normandy, who was exempt by feudal tradition from being called to the French court. Philip argued that
he was summoning John not as the Duke of Normandy, but as the Count of Poitou, which carried no such special status.
When John still refused to come, Philip declared John in breach of his feudal responsibilities, reassigned all of
John's lands that fell under the French crown to Arthur – with the exception of Normandy, which he took back for
himself – and began a fresh war against John. The nature of government under the Angevin monarchs was ill-defined
and uncertain. John's predecessors had ruled using the principle of vis et voluntas, or "force and will", taking
executive and sometimes arbitrary decisions, often justified on the basis that a king was above the law. Both Henry
II and Richard had argued that kings possessed a quality of "divine majesty"; John continued this trend and claimed
an "almost imperial status" for himself as ruler. During the 12th century, there were contrary opinions expressed
about the nature of kingship, and many contemporary writers believed that monarchs should rule in accordance with
the custom and the law, and take counsel of the leading members of the realm. There was as yet no model for what
should happen if a king refused to do so. Despite his claim to unique authority within England, John would sometimes
justify his actions on the basis that he had taken counsel with the barons. Modern historians remain divided as to
whether John suffered from a case of "royal schizophrenia" in his approach to government, or if his actions merely
reflected the complex model of Angevin kingship in the early 13th century. At the start of John's reign there was
a sudden change in prices, as bad harvests and high demand for food resulted in much higher prices for grain and
animals. This inflationary pressure was to continue for the rest of the 13th century and had long-term economic consequences
for England. The resulting social pressures were complicated by bursts of deflation that resulted from John's military
campaigns. It was usual at the time for the king to collect taxes in silver, which was then re-minted into new coins;
these coins would then be put in barrels and sent to royal castles around the country, to be used to hire mercenaries
or to meet other costs. At those times when John was preparing for campaigns in Normandy, for example, huge quantities
of silver had to be withdrawn from the economy and stored for months, which unintentionally resulted in periods during
which silver coins were simply hard to come by, commercial credit difficult to acquire and deflationary pressure
placed on the economy. The result was political unrest across the country. John attempted to address some of the
problems with the English currency in 1204 and 1205 by carrying out a radical overhaul of the coinage, improving
its quality and consistency. John, the youngest of five sons of King Henry II of England and Eleanor of Aquitaine,
was at first not expected to inherit significant lands. Following the failed rebellion of his elder brothers between
1173 and 1174, however, John became Henry's favourite child. He was appointed the Lord of Ireland in 1177 and given
lands in England and on the continent. John's elder brothers William, Henry and Geoffrey died young; by the time
Richard I became king in 1189, John was a potential heir to the throne. John unsuccessfully attempted a rebellion
against Richard's royal administrators whilst his brother was participating in the Third Crusade. Despite this, after
Richard died in 1199, John was proclaimed King of England, and came to an agreement with Philip II of France to recognise
John's possession of the continental Angevin lands at the peace treaty of Le Goulet in 1200. The character of John's
relationship with his second wife, Isabella of Angoulême, is unclear. John married Isabella whilst she was relatively
young – her exact date of birth is uncertain, and estimates place her age at the time of her marriage at no more than fifteen, and more probably around nine years old.[nb 15] Even by the standards of the time, Isabella was married whilst
very young. John did not provide a great deal of money for his wife's household and did not pass on much of the revenue
from her lands, to the extent that historian Nicholas Vincent has described him as being "downright mean" towards
Isabella. Vincent concluded that the marriage was not a particularly "amicable" one. Other aspects of their marriage
suggest a closer, more positive relationship. Chroniclers recorded that John had a "mad infatuation" with Isabella,
and certainly John had conjugal relationships with Isabella between at least 1207 and 1215; they had five children.
In contrast to Vincent, historian William Chester Jordan concludes that the pair were a "companionable couple" who
had a successful marriage by the standards of the day. When war with France broke out again in 1202, John achieved
early victories, but shortages of military resources and his treatment of Norman, Breton and Anjou nobles resulted
in the collapse of his empire in northern France in 1204. John spent much of the next decade attempting to regain
these lands, raising huge revenues, reforming his armed forces and rebuilding continental alliances. John's judicial
reforms had a lasting impact on the English common law system, as well as providing an additional source of revenue.
An argument with Pope Innocent III led to John's excommunication in 1209, a dispute finally settled by the king in
1213. John's attempt to defeat Philip in 1214 failed due to the French victory over John's allies at the battle of
Bouvines. When he returned to England, John faced a rebellion by many of his barons, who were unhappy with his fiscal
policies and his treatment of many of England's most powerful nobles. Although both John and the barons agreed to
the Magna Carta peace treaty in 1215, neither side complied with its conditions. Civil war broke out shortly afterwards,
with the barons aided by Louis of France. It soon descended into a stalemate. John died of dysentery contracted whilst
on campaign in eastern England during late 1216; supporters of his son Henry III went on to achieve victory over
Louis and the rebel barons the following year. Baronial unrest in England prevented the departure of the planned
1205 expedition, and only a smaller force under William Longespée deployed to Poitou. In 1206 John departed for Poitou
himself, but was forced to divert south to counter a threat to Gascony from Alfonso VIII of Castile. After a successful
campaign against Alfonso, John headed north again, taking the city of Angers. Philip moved south to meet John; the
year's campaigning ended in stalemate and a two-year truce was made between the two rulers. During John's early years,
Henry attempted to resolve the question of his succession. Henry the Young King had been crowned King of England
in 1170, but was not given any formal powers by his father; he was also promised Normandy and Anjou as part of his
future inheritance. Richard was to be appointed the Count of Poitou with control of Aquitaine, whilst Geoffrey was
to become the Duke of Brittany. At this time it seemed unlikely that John would ever inherit substantial lands, and
he was jokingly nicknamed "Lackland" by his father. John grew up to be around 5 ft 5 in (1.68 m) tall, relatively
short, with a "powerful, barrel-chested body" and dark red hair; he looked to contemporaries like an inhabitant of
Poitou. John enjoyed reading and, unusually for the period, built up a travelling library of books. He enjoyed gambling,
in particular at backgammon, and was an enthusiastic hunter, even by medieval standards. He liked music, although
not songs. John would become a "connoisseur of jewels", building up a large collection, and became famous for his
opulent clothes and also, according to French chroniclers, for his fondness for bad wine. As John grew up, he became
known for sometimes being "genial, witty, generous and hospitable"; at other moments, he could be jealous, over-sensitive
and prone to fits of rage, "biting and gnawing his fingers" in anger.[nb 3] When the Archbishop of Canterbury, Hubert
Walter, died on 13 July 1205, John became involved in a dispute with Pope Innocent III that would lead to the king's
excommunication. The Norman and Angevin kings had traditionally exercised a great deal of power over the church within
their territories. From the 1040s onwards, however, successive popes had put forward a reforming message that emphasised
the importance of the church being "governed more coherently and more hierarchically from the centre" and established
"its own sphere of authority and jurisdiction, separate from and independent of that of the lay ruler", in the words
of historian Richard Huscroft. After the 1140s, these principles had been largely accepted within the English church,
albeit with an element of concern about centralising authority in Rome. These changes brought the customary rights
of lay rulers such as John over ecclesiastical appointments into question. Pope Innocent was, according to historian
Ralph Turner, an "ambitious and aggressive" religious leader, insistent on his rights and responsibilities within
the church. In 1185 John made his first visit to Ireland, accompanied by 300 knights and a team of administrators.
Henry had tried to have John officially proclaimed King of Ireland, but Pope Lucius III would not agree. John's first
period of rule in Ireland was not a success. Ireland had only recently been conquered by Anglo-Norman forces, and
tensions were still rife between Henry II, the new settlers and the existing inhabitants. John infamously offended
the local Irish rulers by making fun of their unfashionable long beards, failed to make allies amongst the Anglo-Norman
settlers, began to lose ground militarily against the Irish and finally returned to England later in the year, blaming
the viceroy, Hugh de Lacy, for the fiasco. Henry the Young King fought a short war with his brother Richard in 1183
over the status of England, Normandy and Aquitaine. Henry II moved in support of Richard, and Henry the Young King
died from dysentery at the end of the campaign. With his primary heir dead, Henry rearranged the plans for the succession:
Richard was to be made King of England, albeit without any actual power until the death of his father; Geoffrey would
retain Brittany; and John would now become the Duke of Aquitaine in place of Richard. Richard refused to give up
Aquitaine; Henry II was furious and ordered John, with help from Geoffrey, to march south and retake the duchy by
force. The two attacked the capital of Poitiers, and Richard responded by attacking Brittany. The war ended in stalemate
and a tense family reconciliation in England at the end of 1184. Under mounting political pressure, John finally
negotiated terms for a reconciliation, and the papal terms for submission were accepted in the presence of the papal
legate Pandulf Verraccio in May 1213 at the Templar Church at Dover. As part of the deal, John offered to surrender
the Kingdom of England to the papacy for a feudal service of 1,000 marks (equivalent to £666 at the time) annually:
700 marks (£466) for England and 300 marks (£200) for Ireland, as well as recompensing the church for revenue lost
during the crisis. The agreement was formalised in the Bulla Aurea, or Golden Bull. This resolution produced mixed
responses. Although some chroniclers felt that John had been humiliated by the sequence of events, there was little
public reaction. Innocent benefited from the resolution of his long-standing English problem, but John probably gained
more, as Innocent became a firm supporter of John for the rest of his reign, backing him in both domestic and continental
policy issues. Innocent immediately turned against Philip, calling upon him to reject plans to invade England and
to sue for peace. John paid some of the compensation money he had promised the church, but he ceased making payments
in late 1214, leaving two-thirds of the sum unpaid; Innocent appears to have conveniently forgotten this debt for
the good of the wider relationship. The political turmoil continued. John began to explore an alliance with the French
king Philip II, freshly returned from the crusade. John hoped to acquire Normandy, Anjou and the other lands in France
held by Richard in exchange for allying himself with Philip. John was persuaded not to pursue an alliance by his
mother. Longchamp, who had left England after Walter's intervention, now returned, and argued that he had been wrongly
removed as justiciar. John intervened, suppressing Longchamp's claims in return for promises of support from the
royal administration, including a reaffirmation of his position as heir to the throne. When Richard still did not
return from the crusade, John began to assert that his brother was dead or otherwise permanently lost. Richard had
in fact been captured en route to England by the Duke of Austria and was handed over to Emperor Henry VI, who held
him for ransom. John seized the opportunity and went to Paris, where he formed an alliance with Philip. He agreed
to set aside his wife, Isabella of Gloucester, and marry Philip's sister, Alys, in exchange for Philip's support.
Fighting broke out in England between forces loyal to Richard and those being gathered by John. John's military position
was weak and he agreed to a truce; in early 1194 the king finally returned to England, and John's remaining forces
surrendered. John retreated to Normandy, where Richard finally found him later that year. Richard declared that his
younger brother – despite being 27 years old – was merely "a child who has had evil counsellors" and forgave him,
but removed his lands with the exception of Ireland. The political situation in England rapidly began to deteriorate.
Longchamp refused to work with Puiset and became unpopular with the English nobility and clergy. John exploited this
unpopularity to set himself up as an alternative ruler with his own royal court, complete with his own justiciar,
chancellor and other royal posts, and was happy to be portrayed as an alternative regent, and possibly the next king.
Armed conflict broke out between John and Longchamp, and by October 1191 Longchamp was isolated in the Tower of London
with John in control of the city of London, thanks to promises John had made to the citizens in return for recognition
as Richard's heir presumptive. At this point Walter of Coutances, the Archbishop of Rouen, returned to England, having
been sent by Richard to restore order. John's position was undermined by Walter's relative popularity and by the
news that Richard had married whilst in Cyprus, which presented the possibility that Richard would have legitimate
children and heirs. Letters of support from the pope arrived in April but by then the rebel barons had organised.
They congregated at Northampton in May and renounced their feudal ties to John, appointing Robert fitz Walter as
their military leader. This self-proclaimed "Army of God" marched on London, taking the capital as well as Lincoln
and Exeter. John's efforts to appear moderate and conciliatory had been largely successful, but once the rebels held
London they attracted a fresh wave of defectors from John's royalist faction. John instructed Langton to organise
peace talks with the rebel barons. The new peace would only last for two years; war recommenced in the aftermath
of John's decision in August 1200 to marry Isabella of Angoulême. In order to remarry, John first needed to abandon
Isabella, Countess of Gloucester, his first wife; John accomplished this by arguing that he had failed to get the necessary papal permission to marry Isabella in the first place – as she was his cousin, John could not have legally wed her without such permission.
It remains unclear why John chose to marry Isabella of Angoulême. Contemporary chroniclers argued that John had fallen
deeply in love with Isabella, and John may have been motivated by desire for an apparently beautiful, if rather young,
girl. On the other hand, the Angoumois lands that came with Isabella were strategically vital to John: by marrying
Isabella, John was acquiring a key land route between Poitou and Gascony, which significantly strengthened his grip
on Aquitaine.[nb 5] After his coronation, John moved south into France with military forces and adopted a defensive
posture along the eastern and southern Normandy borders. Both sides paused for desultory negotiations before the
war recommenced; John's position was now stronger, thanks to confirmation that the counts Baldwin IX of Flanders
and Renaud of Boulogne had renewed the anti-French alliances they had previously agreed to with Richard. The powerful
Anjou nobleman William des Roches was persuaded to switch sides from Arthur to John; suddenly the balance seemed
to be tipping away from Philip and Arthur in favour of John. Neither side was keen to continue the conflict, and
following a papal truce the two leaders met in January 1200 to negotiate possible terms for peace. From John's perspective,
what then followed represented an opportunity to stabilise control over his continental possessions and produce a
lasting peace with Philip in Paris. John and Philip negotiated the May 1200 Treaty of Le Goulet; by this treaty,
Philip recognised John as the rightful heir to Richard in respect to his French possessions, temporarily abandoning
the wider claims of his client, Arthur.[nb 4] John, in turn, abandoned Richard's former policy of containing Philip
through alliances with Flanders and Boulogne, and accepted Philip's right as the legitimate feudal overlord of John's
lands in France. John's policy earned him the disrespectful title of "John Softsword" from some English chroniclers,
who contrasted his behaviour with that of his more aggressive brother, Richard. The rebel barons responded by inviting the
French prince Louis to lead them: Louis had a claim to the English throne by virtue of his marriage to Blanche of
Castile, a granddaughter of Henry II. Philip may have provided him with private support but refused to openly support
Louis, who was excommunicated by Innocent for taking part in the war against John. Louis' planned arrival in England
presented a significant problem for John, as the prince would bring with him naval vessels and siege engines essential
to the rebel cause. Once John contained Alexander in Scotland, he marched south to deal with the challenge of the
coming invasion. Further desertions of John's local allies at the beginning of 1203 steadily reduced John's freedom
to manoeuvre in the region. He attempted to convince Pope Innocent III to intervene in the conflict, but Innocent's
efforts were unsuccessful. As the situation became worse for John, he appears to have decided to have Arthur killed,
with the aim of removing his potential rival and of undermining the rebel movement in Brittany. Arthur had initially
been imprisoned at Falaise and was then moved to Rouen. After this, Arthur's fate remains uncertain, but modern historians
believe he was murdered by John. The annals of Margam Abbey suggest that "John had captured Arthur and kept him alive
in prison for some time in the castle of Rouen ... when John was drunk he slew Arthur with his own hand and tying
a heavy stone to the body cast it into the Seine."[nb 7] Rumours of the manner of Arthur's death further reduced
support for John across the region. Arthur's sister, Eleanor, who had also been captured at Mirebeau, was kept imprisoned
by John for many years, albeit in relatively good conditions. John's position in France was considerably strengthened
by the victory at Mirebeau, but John's treatment of his new prisoners and of his ally, William des Roches, quickly
undermined these gains. Des Roches was a powerful Anjou noble, but John largely ignored him, causing considerable
offence, whilst the king kept the rebel leaders in such bad conditions that twenty-two of them died. At this time
most of the regional nobility were closely linked through kinship, and this behaviour towards their relatives was
regarded as unacceptable. William des Roches and others of John's regional allies in Anjou and Brittany deserted him
in favour of Philip, and Brittany rose in fresh revolt. John's financial situation was tenuous: once factors such
as the comparative military costs of materiel and soldiers were taken into account, Philip enjoyed a considerable,
although not overwhelming, advantage of resources over John.[nb 6] In the aftermath of John's death William Marshal
was declared the protector of the nine-year-old Henry III. The civil war continued until royalist victories at the
battles of Lincoln and Dover in 1217. Louis gave up his claim to the English throne and signed the Treaty of Lambeth.
The failed Magna Carta agreement was resuscitated by Marshal's administration and reissued in an edited form in 1217
as a basis for future government. Henry III continued his attempts to reclaim Normandy and Anjou until 1259, but
John's continental losses and the consequent growth of Capetian power in the 13th century proved to mark a "turning
point in European history". One of John's principal challenges was acquiring the large sums of money needed for his
proposed campaigns to reclaim Normandy. The Angevin kings had three main sources of income available to them, namely
revenue from their personal lands, or demesne; money raised through their rights as a feudal lord; and revenue from
taxation. Revenue from the royal demesne was inflexible and had been diminishing slowly since the Norman conquest.
Matters were not helped by Richard's sale of many royal properties in 1189, and taxation played a much smaller role
in royal income than in later centuries. English kings had widespread feudal rights which could be used to generate
income, including the scutage system, in which feudal military service was avoided by a cash payment to the king.
The king also derived income from fines, court fees and the sale of charters and other privileges. John intensified his efforts
to maximise all possible sources of income, to the extent that he has been described as "avaricious, miserly, extortionate
and moneyminded". John also used revenue generation as a way of exerting political control over the barons: debts
owed to the crown by the king's favoured supporters might be forgiven; collection of those owed by enemies was more
stringently enforced. The administration of justice was of particular importance to John. Several new processes had
been introduced to English law under Henry II, including novel disseisin and mort d'ancestor. These processes meant
the royal courts had a more significant role in local law cases, which had previously been dealt with only by regional
or local lords. John increased the professionalism of local sergeants and bailiffs, and extended the system of coroners
first introduced by Hubert Walter in 1194, creating a new class of borough coroners. John worked extremely hard to
ensure that this system operated well, through judges he had appointed, by fostering legal specialists and expertise,
and by intervening in cases himself. John continued to try relatively minor cases, even during military crises. Viewed
positively, Lewis Warren considers that John discharged "his royal duty of providing justice ... with a zeal and
a tirelessness to which the English common law is greatly indebted". Seen more critically, John may have been motivated
by the potential of the royal legal process to raise fees, rather than a desire to deliver simple justice; John's
legal system also only applied to free men, rather than to all of the population. Nonetheless, these changes were
popular with many free tenants, who acquired a more reliable legal system that could bypass the barons, against whom
such cases were often brought. John's reforms were less popular with the barons themselves, especially as they remained
subject to arbitrary and frequently vindictive royal justice. In the 1940s, new interpretations of John's reign began
to emerge, based on research into the record evidence of his reign, such as pipe rolls, charters, court documents
and similar primary records. Notably, an essay by Vivian Galbraith in 1945 proposed a "new approach" to understanding
the ruler. The use of recorded evidence was combined with an increased scepticism about two of the most colourful
chroniclers of John's reign, Roger of Wendover and Matthew Paris. In many cases the detail provided by these chroniclers,
both writing after John's death, was challenged by modern historians. Interpretations of Magna Carta and the role
of the rebel barons in 1215 have been significantly revised: although the charter's symbolic, constitutional value
for later generations is unquestionable, in the context of John's reign most historians now consider it a failed
peace agreement between "partisan" factions. There has been increasing debate about the nature of John's Irish policies.
Specialists in Irish medieval history, such as Sean Duffy, have challenged the conventional narrative established
by Lewis Warren, suggesting that Ireland was less stable by 1216 than was previously supposed. John was deeply suspicious
of the barons, particularly those with sufficient power and wealth to potentially challenge the king. Numerous barons
were subjected to John's malevolentia, even including William Marshal, a famous knight and baron normally held up
as a model of utter loyalty. The most infamous case, which went beyond anything considered acceptable at the time,
proved to be that of William de Braose, a powerful marcher lord with lands in Ireland. De Braose was subjected to
punitive demands for money, and when he refused to pay a huge sum of 40,000 marks (equivalent to £26,666 at the time),[nb
13] his wife and one of his sons were imprisoned by John, which resulted in their deaths. De Braose died in exile
in 1211, and his grandsons remained in prison until 1218. John's suspicions and jealousies meant that he rarely enjoyed
good relationships with even the leading loyalist barons. This trend for the king to rely on his own men at the expense
of the barons was exacerbated by the tradition of Angevin royal ira et malevolentia – "anger and ill-will" – and
John's own personality. From Henry II onwards, ira et malevolentia had come to describe the right of the king to
express his anger and displeasure at particular barons or clergy, building on the Norman concept of malevoncia –
royal ill-will. In the Norman period, suffering the king's ill-will meant difficulties in obtaining grants, honours
or petitions; Henry II had infamously expressed his fury and ill-will towards Thomas Becket; this ultimately resulted
in Becket's death. John now had the additional ability to "cripple his vassals" on a significant scale using his
new economic and judicial measures, which made the threat of royal anger all the more serious. John spent much of
1205 securing England against a potential French invasion. As an emergency measure, John recreated a version of Henry
II's Assize of Arms of 1181, with each shire creating a structure to mobilise local levies. When the threat of invasion
faded, John formed a large military force in England intended for Poitou, and a large fleet with soldiers under his
own command intended for Normandy. To achieve this, John reformed the English feudal contribution to his campaigns,
creating a more flexible system under which only one knight in ten would actually be mobilised, but would be financially
supported by the other nine; knights would serve for an indefinite period. John built up a strong team of engineers
for siege warfare and a substantial force of professional crossbowmen. The king was supported by a team of leading
barons with military expertise, including William Longespée, William the Marshal, Roger de Lacy and, until he fell
from favour, the marcher lord William de Braose. During the remainder of his reign, John focused on trying to retake
Normandy. The available evidence suggests that John did not regard the loss of the Duchy as a permanent shift in
Capetian power. Strategically, John faced several challenges: England itself had to be secured against possible French
invasion, the sea-routes to Bordeaux needed to be secured following the loss of the land route to Aquitaine, and
his remaining possessions in Aquitaine needed to be secured following the death of his mother, Eleanor, in April
1204. John's preferred plan was to use Poitou as a base of operations, advance up the Loire valley to threaten Paris,
pin down the French forces and break Philip's internal lines of communication before landing a maritime force in
the Duchy itself. Ideally, this plan would benefit from the opening of a second front on Philip's eastern frontiers
with Flanders and Boulogne – effectively a re-creation of Richard's old strategy of applying pressure from Germany.
All of this would require a great deal of money and soldiers. John remained Lord of Ireland throughout his reign.
He drew on the country for resources to fight his war with Philip on the continent. Conflict continued in Ireland
between the Anglo-Norman settlers and the indigenous Irish chieftains, with John manipulating both groups to expand
his wealth and power in the country. During Richard's rule, John had successfully increased the size of his lands
in Ireland, and he continued this policy as king. In 1210 the king crossed into Ireland with a large army to crush
a rebellion by the Anglo-Norman lords; he reasserted his control of the country and used a new charter to order compliance
with English laws and customs in Ireland. John stopped short of trying to actively enforce this charter on the native
Irish kingdoms, but historian David Carpenter suspects that he might have done so, had the baronial conflict in England
not intervened. Simmering tensions remained with the native Irish leaders even after John left for England. In the
late 12th and early 13th centuries the border and political relationship between England and Scotland was disputed,
with the kings of Scotland claiming parts of what is now northern England. John's father, Henry II, had forced William
the Lion to swear fealty to him at the Treaty of Falaise in 1174. This had been rescinded by Richard I in exchange
for financial compensation in 1189, but the relationship remained uneasy. John began his reign by reasserting his
sovereignty over the disputed northern counties. He refused William's request for the earldom of Northumbria, but
did not intervene in Scotland itself and focused on his continental problems. The two kings maintained a friendly
relationship, meeting in 1206 and 1207, until it was rumoured in 1209 that William was intending to ally himself
with Philip II of France. John invaded Scotland and forced William to sign the Treaty of Norham, which gave John
control of William's daughters and required a payment of £10,000. This effectively crippled William's power north
of the border, and by 1212 John had to intervene militarily to support the Scottish king against his internal rivals.[nb
16] John made no efforts to reinvigorate the Treaty of Falaise, though, and both William and Alexander remained independent
kings, supported by, but not owing fealty to, John. John treated the interdict as "the equivalent of a papal declaration
of war". He responded by attempting to punish Innocent personally and to drive a wedge between those English clergy
that might support him and those allying themselves firmly with the authorities in Rome. John seized the lands of
those clergy unwilling to conduct services, as well as those estates linked to Innocent himself; he arrested the
illicit concubines that many clerics kept during the period, only releasing them after the payment of fines; he seized
the lands of members of the church who had fled England, and he promised protection for those clergy willing to remain
loyal to him. In many cases, individual institutions were able to negotiate terms for managing their own properties
and keeping the produce of their estates. By 1209 the situation showed no signs of resolution, and Innocent threatened
to excommunicate John if he did not acquiesce to Langton's appointment. When this threat failed, Innocent excommunicated
the king in November 1209. Although theoretically a significant blow to John's legitimacy, this did not appear to
greatly worry the king. Two of John's close allies, Emperor Otto IV and Count Raymond VI of Toulouse, had already
suffered the same punishment themselves, and the significance of excommunication had been somewhat devalued. John
simply tightened his existing measures and accrued significant sums from the income of vacant sees and abbeys: one
1213 estimate, for example, suggested the church had lost an estimated 100,000 marks (equivalent to £66,666 at the
time) to John. Official figures suggest that around 14% of annual income from the English church was being appropriated
by John each year. John was incensed about what he perceived as an abrogation of his customary right as monarch to
influence the election. He complained both about the choice of Langton as an individual, as John felt he was overly
influenced by the Capetian court in Paris, and about the process as a whole. He barred Langton from entering England
and seized the lands of the archbishopric and other papal possessions. Innocent set a commission in place to try
to convince John to change his mind, but to no avail. Innocent then placed an interdict on England in March 1208,
prohibiting clergy from conducting religious services, with the exception of baptisms for the young, and confessions
and absolutions for the dying. The first part of the campaign went well, with John outmanoeuvring the forces under
the command of Prince Louis and retaking the county of Anjou by the end of June. John besieged the castle of Roche-au-Moine,
a key stronghold, forcing Louis to give battle against John's larger army. The local Angevin nobles refused to advance
with the king; left at something of a disadvantage, John retreated to La Rochelle. Shortly afterwards, Philip
won the hard-fought battle of Bouvines in the north against Otto and John's other allies, bringing an end to John's
hopes of retaking Normandy. A peace agreement was signed in which John returned Anjou to Philip and paid the French
king compensation; the truce was intended to last for six years. John arrived back in England in October. In 1214
John began his final campaign to reclaim Normandy from Philip. John was optimistic, as he had successfully built
up alliances with the Emperor Otto, Renaud of Boulogne and Count Ferdinand of Flanders; he was enjoying papal favour;
and he had successfully built up substantial funds to pay for the deployment of his experienced army. Nonetheless,
when John left for Poitou in February 1214, many barons refused to provide military service; mercenary knights had
to fill the gaps. John's plan was to split Philip's forces by pushing north-east from Poitou towards Paris, whilst
Otto, Renaud and Ferdinand, supported by William Longespée, marched south-west from Flanders. The rebels made the
first move in the war, seizing the strategic Rochester Castle, owned by Langton but left almost unguarded by the
archbishop. John was well prepared for a conflict. He had stockpiled money to pay for mercenaries and ensured the
support of the powerful marcher lords with their own feudal forces, such as William Marshal and Ranulf of Chester.
The rebels lacked the engineering expertise or heavy equipment necessary to assault the network of royal castles
that cut off the northern rebel barons from those in the south. John's strategy was to isolate the rebel barons in
London, protect his own supply lines to his key source of mercenaries in Flanders, prevent the French from landing
in the south-east, and then win the war through slow attrition. John put off dealing with the badly deteriorating
situation in North Wales, where Llywelyn the Great was leading a rebellion against the 1211 settlement. Neither John
nor the rebel barons seriously attempted to implement the peace accord. The rebel barons suspected that the proposed
baronial council would be unacceptable to John and that he would challenge the legality of the charter; they packed
the baronial council with their own hardliners and refused to demobilise their forces or surrender London as agreed.
Despite his promises to the contrary, John appealed to Innocent for help, observing that the charter compromised
the pope's rights under the 1213 agreement that had appointed him John's feudal lord. Innocent obliged; he declared
the charter "not only shameful and demeaning, but illegal and unjust" and excommunicated the rebel barons. The failure
of the agreement led rapidly to the First Barons' War. The king returned west but is said to have lost a significant
part of his baggage train along the way. Roger of Wendover provides the most graphic account of this, suggesting
that the king's belongings, including the Crown Jewels, were lost as he crossed one of the tidal estuaries which
empties into the Wash, being sucked in by quicksand and whirlpools. Accounts of the incident vary considerably between
the various chroniclers and the exact location of the incident has never been confirmed; the losses may have involved
only a few of his pack-horses. Modern historians assert that by October 1216 John faced a "stalemate", "a military
situation uncompromised by defeat". In September 1216 John began a fresh, vigorous attack. He marched from the Cotswolds,
feigned an offensive to relieve the besieged Windsor Castle, and attacked eastwards around London to Cambridge to
separate the rebel-held areas of Lincolnshire and East Anglia. From there he travelled north to relieve the rebel
siege at Lincoln and back east to King's Lynn, probably to order further supplies from the continent.[nb 17] In King's
Lynn, John contracted dysentery, which would ultimately prove fatal. Meanwhile, Alexander II invaded northern England
again, taking Carlisle in August and then marching south to give homage to Prince Louis for his English possessions;
John narrowly missed intercepting Alexander along the way. Tensions between Louis and the English barons began to
increase, prompting a wave of desertions, including William Marshal's son William and William Longespée, who both
returned to John's faction. In the 16th century political and religious changes altered the attitude of historians
towards John. Tudor historians were generally favourably inclined towards the king, focusing on John's opposition
to the Papacy and his promotion of the special rights and prerogatives of a king. Revisionist histories written by
John Foxe, William Tyndale and Robert Barnes portrayed John as an early Protestant hero, and John Foxe included the
king in his Book of Martyrs. John Speed's Historie of Great Britaine in 1632 praised John's "great renown" as a king;
he blamed the bias of medieval chroniclers for the king's poor reputation. Nineteenth-century fictional depictions
of John were heavily influenced by Sir Walter Scott's historical romance, Ivanhoe, which presented "an almost totally
unfavourable picture" of the king; the work drew on 16th-century histories of the period and on Shakespeare's play.
Scott's work influenced the late 19th-century children's writer Howard Pyle's book The Merry Adventures of Robin
Hood, which in turn established John as the principal villain within the traditional Robin Hood narrative. During
the 20th century, John was normally depicted in fictional books and films alongside Robin Hood. Sam De Grasse's role
as John in the black-and-white 1922 film version shows John committing numerous atrocities and acts of torture. Claude
Rains played John in the 1938 colour version alongside Errol Flynn, starting a trend for films to depict John as
an "effeminate ... arrogant and cowardly stay-at-home". The character of John acts either to highlight the virtues
of King Richard or to provide a contrast with the Sheriff of Nottingham, who is usually the "swashbuckling villain" opposing
Robin. An extreme version of this trend can be seen in the Disney cartoon version, for example, which depicts John,
voiced by Peter Ustinov, as a "cowardly, thumbsucking lion". Popular works that depict John beyond the Robin Hood
legends, such as James Goldman's play and later film, The Lion in Winter, set in 1183, commonly present him as an
"effete weakling", in this instance contrasted with the more masculine Henry II, or as a tyrant, as in A. A. Milne's
poem for children, "King John's Christmas". Historical interpretations of John have been subject to considerable
change over the years. Medieval chroniclers provided the first contemporary, or near contemporary, histories of John's
reign. One group of chroniclers wrote early in John's life, or around the time of his accession, including Richard
of Devizes, William of Newburgh, Roger of Hoveden and Ralph de Diceto. These historians were generally unsympathetic
to John's behaviour under Richard's rule, but slightly more positive towards the very earliest years of John's reign.
Reliable accounts of the middle and later parts of John's reign are more limited, with Gervase of Canterbury and
Ralph of Coggeshall writing the main accounts; neither of them was positive about John's performance as king. Much
of John's later, negative reputation was established by two chroniclers writing after the king's death, Roger of
Wendover and Matthew Paris, the latter claiming that John attempted conversion to Islam in exchange for military
aid from the Almohad ruler Muhammad al-Nasir – a story which modern historians consider untrue. Popular
representations of John first began to emerge during the Tudor period, mirroring the revisionist histories of the
time. The anonymous play The Troublesome Reign of King John portrayed the king as a "proto-Protestant martyr", similar
to that shown in John Bale's morality play Kynge Johan, in which John attempts to save England from the "evil agents
of the Roman Church". By contrast, Shakespeare's King John, a relatively anti-Catholic play that draws on The Troublesome
Reign for its source material, offers a more "balanced, dual view of a complex monarch as both a proto-Protestant
victim of Rome's machinations and as a weak, selfishly motivated ruler". Anthony Munday's play The Downfall and The
Death of Robert Earl of Huntington portrays many of John's negative traits, but adopts a positive interpretation
of the king's stand against the Roman Catholic Church, in line with the contemporary views of the Tudor monarchs.
By the middle of the 17th century, plays such as Robert Davenport's King John and Matilda, although based largely
on the earlier Elizabethan works, were transferring the role of Protestant champion to the barons and focusing more
on the tyrannical aspects of John's behaviour. Contemporary chroniclers were mostly critical of John's performance
as king, and his reign has since been the subject of significant debate and periodic revision by historians from
the 16th century onwards. Historian Jim Bradbury has summarised the contemporary historical opinion of John's positive
qualities, observing that John is today usually considered a "hard-working administrator, an able man, an able general".
Nonetheless, modern historians agree that he also had many faults as king, including what historian Ralph Turner
describes as "distasteful, even dangerous personality traits", such as pettiness, spitefulness and cruelty. These
negative qualities provided extensive material for fiction writers in the Victorian era, and John remains a recurring
character within Western popular culture, primarily as a villain in films and stories depicting the Robin Hood legends.
Henry II wanted to secure the southern borders of Aquitaine and decided to betroth his youngest son to Alais, the
daughter and heiress of Humbert III of Savoy. As part of this agreement John was promised the future inheritance
of Savoy, Piedmont, Maurienne, and the other possessions of Count Humbert. For his part in the potential marriage
alliance, Henry II transferred the castles of Chinon, Loudun and Mirebeau into John's name; as John was only five
years old his father would continue to control them for practical purposes. Henry the Young King was unimpressed
by this; although he had yet to be granted control of any castles in his new kingdom, these were effectively his
future property and had been given away without consultation. Alais made the trip over the Alps and joined Henry
II's court, but she died before marrying John, which left the prince once again without an inheritance. For the remaining
years of Richard's reign, John supported his brother on the continent, apparently loyally. Richard's policy on the
continent was to attempt to regain through steady, limited campaigns the castles he had lost to Philip II whilst
on crusade. He allied himself with the leaders of Flanders, Boulogne and the Holy Roman Empire to apply pressure
on Philip from Germany. In 1195 John successfully conducted a sudden attack and siege of Évreux castle, and subsequently
managed the defences of Normandy against Philip. The following year, John seized the town of Gamaches and led a raiding
party within 50 miles (80 km) of Paris, capturing the Bishop of Beauvais. In return for this service, Richard withdrew
his malevolentia (ill-will) towards John, restored him to the county of Gloucestershire and made him again the Count
of Mortain. Unfortunately, Isabella was already engaged to Hugh of Lusignan, an important member of a key Poitou
noble family and brother of Count Raoul of Eu, who possessed lands along the sensitive eastern Normandy border. Just
as John stood to benefit strategically from marrying Isabella, so the marriage threatened the interests of the Lusignans,
whose own lands currently provided the key route for royal goods and troops across Aquitaine. Rather than negotiating
some form of compensation, John treated Hugh "with contempt"; this resulted in a Lusignan uprising that was promptly
crushed by John, who also intervened to suppress Raoul in Normandy. In late 1203, John attempted to relieve Château
Gaillard, which although besieged by Philip was guarding the eastern flank of Normandy. John attempted a synchronised
operation involving land-based and water-borne forces, considered by most historians today to have been imaginative
in conception, but overly complex for forces of the period to have carried out successfully. John's relief operation
was blocked by Philip's forces, and John turned back to Brittany in an attempt to draw Philip away from eastern Normandy.
John successfully devastated much of Brittany, but did not deflect Philip's main thrust into the east of Normandy.
Opinions vary amongst historians as to the military skill shown by John during this campaign, with most recent historians
arguing that his performance was passable, although not impressive.[nb 8] John's situation began to deteriorate rapidly.
The eastern border region of Normandy had been extensively cultivated by Philip and his predecessors for several
years, whilst Angevin authority in the south had been undermined by Richard's giving away of various key castles
some years before. His use of routier mercenaries in the central regions had rapidly eaten away his remaining support
in this area too, which set the stage for a sudden collapse of Angevin power.[nb 9] John retreated across the
Channel in December, sending orders for the establishment of a fresh defensive line to the west of Château Gaillard.
In March 1204, Gaillard fell. John's mother Eleanor died the following month. This was not just a personal blow for
John, but threatened to unravel the widespread Angevin alliances across the far south of France. Philip moved south
around the new defensive line and struck upwards at the heart of the Duchy, now facing little resistance. By August,
Philip had taken Normandy and advanced south to occupy Anjou and Poitou as well. John's only remaining possession
on the Continent was now the Duchy of Aquitaine. The result was a sequence of innovative but unpopular financial
measures.[nb 10] John levied scutage payments eleven times in his seventeen years as king, as compared to eleven
times in total during the reign of the preceding three monarchs. In many cases these were levied in the absence of
any actual military campaign, which ran counter to the original idea that scutage was an alternative to actual military
service. John maximised his right to demand relief payments when estates and castles were inherited, sometimes charging
enormous sums, beyond barons' abilities to pay. Building on the successful sale of sheriff appointments in 1194,
John initiated a new round of appointments, with the new incumbents making back their investment through increased
fines and penalties, particularly in the forests. Another innovation of Richard's, increased charges levied on widows
who wished to remain single, was expanded under John. John continued to sell charters for new towns, including the
planned town of Liverpool, and charters were sold for markets across the kingdom and in Gascony.[nb 11] The king
introduced new taxes and extended existing ones. The Jews, who held a vulnerable position in medieval England, protected
only by the king, were subject to huge taxes; £44,000 was extracted from the community by the tallage of 1210; much
of it was passed on to the Christian debtors of Jewish moneylenders.[nb 12] John created a new tax on income and
movable goods in 1207 – effectively a version of a modern income tax – that produced £60,000; he created a new set
of import and export duties payable directly to the crown. John found that these measures enabled him to raise further
resources through the confiscation of the lands of barons who could not pay or refused to pay. John's personal life
greatly affected his reign. Contemporary chroniclers state that John was sinfully lustful and lacking in piety. It
was common for kings and nobles of the period to keep mistresses, but chroniclers complained that John's mistresses
were married noblewomen, which was considered unacceptable. John had at least five children with mistresses during
his first marriage to Isabelle of Gloucester, and two of those mistresses are known to have been noblewomen. John's
behaviour after his second marriage to Isabella of Angoulême is less clear, however. None of John's known illegitimate
children were born after he remarried, and there is no actual documentary proof of adultery after that point, although
John certainly had female friends amongst the court throughout the period. The specific accusations made against
John during the baronial revolts are now generally considered to have been invented for the purposes of justifying
the revolt; nonetheless, most of John's contemporaries seem to have held a poor opinion of his sexual behaviour.[nb
14] John had already begun to improve his Channel forces before the loss of Normandy and he rapidly built up further
maritime capabilities after its collapse. Most of these ships were placed along the Cinque Ports, but Portsmouth
was also enlarged. By the end of 1204 he had around 50 large galleys available; another 54 vessels were built between
1209 and 1212. William of Wrotham was appointed "keeper of the galleys", effectively John's chief admiral. Wrotham
was responsible for fusing John's galleys, the ships of the Cinque Ports and pressed merchant vessels into a single
operational fleet. John adopted recent improvements in ship design, including new large transport ships called buisses
and removable forecastles for use in combat. Royal power in Wales was unevenly applied, with the country divided
between the marcher lords along the borders, royal territories in Pembrokeshire and the more independent native Welsh
lords of North Wales. John took a close interest in Wales and knew the country well, visiting every year between
1204 and 1211 and marrying his illegitimate daughter, Joan, to the Welsh prince Llywelyn the Great. The king used
the marcher lords and the native Welsh to increase his own territory and power, striking a sequence of increasingly
precise deals backed by royal military power with the Welsh rulers. A major royal expedition to enforce these agreements
occurred in 1211, after Llywelyn attempted to exploit the instability caused by the removal of William de Braose,
through the Welsh uprising of 1211. John's invasion, striking into the Welsh heartlands, was a military success.
Llywelyn came to terms that included an expansion of John's power across much of Wales, albeit only temporarily.
Innocent gave some dispensations as the crisis progressed. Monastic communities were allowed to celebrate Mass in
private from 1209 onwards, and late in 1212 the Holy Viaticum for the dying was authorised. The rules on burials
and lay access to churches appear to have been steadily circumvented, at least unofficially. Although the interdict
was a burden to much of the population, it did not result in rebellion against John. By 1213, though, John was increasingly
worried about the threat of French invasion. Some contemporary chroniclers suggested that in January Philip II of
France had been charged with deposing John on behalf of the papacy, although it appears that Innocent merely prepared
secret letters in case he needed to claim the credit if Philip did successfully invade England. Within a few
months of John's return, rebel barons in the north and east of England were organising resistance to his rule. John
held a council in London in January 1215 to discuss potential reforms and sponsored discussions in Oxford between
his agents and the rebels during the spring. John appears to have been playing for time until Pope Innocent III could
send letters giving him explicit papal support. This was particularly important for John, as a way of pressuring
the barons but also as a way of controlling Stephen Langton, the Archbishop of Canterbury. In the meantime, John
began to recruit fresh mercenary forces from Poitou, although some were later sent back to avoid giving the impression
that the king was escalating the conflict. John announced his intent to become a crusader, a move which gave him
additional political protection under church law. John's campaign started well. In November John retook Rochester
Castle from rebel baron William d'Aubigny in a sophisticated assault. One chronicler had not seen "a siege so hard
pressed or so strongly resisted", whilst historian Reginald Brown describes it as "one of the greatest [siege] operations
in England up to that time". Having regained the south-east John split his forces, sending William Longespée to retake
the north side of London and East Anglia, whilst John himself headed north via Nottingham to attack the estates of
the northern barons. Both operations were successful and the majority of the remaining rebels were pinned down in
London. In January 1216 John marched against Alexander II of Scotland, who had allied himself with the rebel cause.
John took back Alexander's possessions in northern England in a rapid campaign and pushed up towards Edinburgh over
a ten-day period. John's illness grew worse and by the time he reached Newark Castle he was unable to travel any
farther; John died on the night of 18 October. Numerous – probably fictitious – accounts circulated soon after his
death that he had been killed by poisoned ale, poisoned plums or a "surfeit of peaches". His body was escorted south
by a company of mercenaries and he was buried in Worcester Cathedral in front of the altar of St Wulfstan. A new
sarcophagus with an effigy was made for him in 1232, in which his remains now rest. By the Victorian period in the
19th century, historians were more inclined to draw on the judgements of the chroniclers and to focus on John's moral
personality. Kate Norgate, for example, argued that John's downfall had been due not to his failure in war or strategy,
but due to his "almost superhuman wickedness", whilst James Ramsay blamed John's family background and his cruel
personality for his downfall. Historians in the "Whiggish" tradition, focusing on documents such as the Domesday
Book and Magna Carta, trace a progressive and universalist course of political and economic development in England
over the medieval period. These historians were often inclined to see John's reign, and his signing of Magna Carta
in particular, as a positive step in the constitutional development of England, despite the flaws of the king himself.
Winston Churchill, for example, argued that "[w]hen the long tally is added, it will be seen that the British nation
and the English-speaking world owe far more to the vices of John than to the labours of virtuous sovereigns". John
(24 December 1166 – 19 October 1216), also known as John Lackland (Norman French: Johan sanz Terre), was King of
England from 6 April 1199 until his death in 1216. John lost the duchy of Normandy to King Philip II of France, which
resulted in the collapse of most of the Angevin Empire and contributed to the subsequent growth in power of the Capetian
dynasty during the 13th century. The baronial revolt at the end of John's reign led to the sealing of the Magna Carta,
a document sometimes considered to be an early step in the evolution of the constitution of the United Kingdom. Shortly
after his birth, John was passed from Eleanor into the care of a wet nurse, a traditional practice for medieval noble
families. Eleanor then left for Poitiers, the capital of Aquitaine, and sent John and his sister Joan north to Fontevrault
Abbey. This may have been done with the aim of steering her youngest son, with no obvious inheritance, towards a
future ecclesiastical career. Eleanor spent the next few years conspiring against her husband Henry and neither parent
played a part in John's very early life. John was probably, like his brothers, assigned a magister whilst he was
at Fontevrault, a teacher charged with his early education and with managing the servants of his immediate household;
John was later taught by Ranulph Glanville, a leading English administrator. John spent some time as a member of
the household of his eldest living brother Henry the Young King, where he probably received instruction in hunting
and military skills. John had spent the conflict travelling alongside his father, and was given widespread possessions
across the Angevin empire as part of the Montlouis settlement; from then onwards, most observers regarded John as
Henry II's favourite child, although he was the furthest removed in terms of the royal succession. Henry II began
to find more lands for John, mostly at various nobles' expense. In 1175 he appropriated the estates of the late Earl
of Cornwall and gave them to John. The following year, Henry disinherited the sisters of Isabelle of Gloucester,
contrary to legal custom, and betrothed John to the now extremely wealthy Isabelle. In 1177, at the Council of Oxford,
Henry dismissed William FitzAldelm as the Lord of Ireland and replaced him with the ten-year-old John. When John's
elder brother Richard became king in September 1189, he had already declared his intention of joining the Third Crusade.
Richard set about raising the huge sums of money required for this expedition through the sale of lands, titles and
appointments, and attempted to ensure that he would not face a revolt while away from his empire. John was made Count
of Mortain, was married to the wealthy Isabelle of Gloucester, and was given valuable lands in Lancaster and the counties
of Cornwall, Derby, Devon, Dorset, Nottingham and Somerset, all with the aim of buying his loyalty to Richard whilst
the king was on crusade. Richard retained royal control of key castles in these counties, thereby preventing John
from accumulating too much military and political power, and, for the time being, the king named the four-year-old
Arthur of Brittany as the heir to the throne. In return, John promised not to visit England for the next three years,
thereby in theory giving Richard adequate time to conduct a successful crusade and return from the Levant without
fear of John seizing power. Richard left political authority in England – the post of justiciar – jointly in the
hands of Bishop Hugh de Puiset and William Mandeville, and made William Longchamp, the Bishop of Ely, his chancellor.
Mandeville died almost immediately, and Longchamp took over as joint justiciar with Puiset, which would prove to be a less
than satisfactory partnership. Eleanor, the queen mother, convinced Richard to allow John into England in his absence.
Warfare in Normandy at the time was shaped by the defensive potential of castles and the increasing costs of conducting
campaigns. The Norman frontiers had limited natural defences but were heavily reinforced with castles, such as Château
Gaillard, at strategic points, built and maintained at considerable expense. It was difficult for a commander to
advance far into fresh territory without having secured his lines of communication by capturing these fortifications,
which slowed the progress of any attack. Armies of the period could be formed from either feudal or mercenary forces.
Feudal levies could only be raised for a fixed length of time before they returned home, forcing an end to a campaign;
mercenary forces, often called Brabançons after the Duchy of Brabant but actually recruited from across northern
Europe, could operate all year long and provide a commander with more strategic options to pursue a campaign, but
cost much more than equivalent feudal forces. As a result, commanders of the period were increasingly drawing on
larger numbers of mercenaries. John initially adopted a defensive posture similar to that of 1199: avoiding open
battle and carefully defending his key castles. John's operations became more chaotic as the campaign progressed,
and Philip began to make steady progress in the east. John became aware in July that Arthur's forces were threatening
his mother, Eleanor, at Mirebeau Castle. Accompanied by William de Roches, his seneschal in Anjou, he swung his mercenary
army rapidly south to protect her. His forces caught Arthur by surprise and captured the entire rebel leadership
at the battle of Mirebeau. With his southern flank weakening, Philip was forced to withdraw in the east and turn
south himself to contain John's army. John inherited a sophisticated system of administration in England, with a
range of royal agents answering to the Royal Household: the Chancery kept written records and communications; the
Treasury and the Exchequer dealt with income and expenditure respectively; and various judges were deployed to deliver
justice around the kingdom. Thanks to the efforts of men like Hubert Walter, this trend towards improved record keeping
continued into his reign. Like previous kings, John managed a peripatetic court that travelled around the kingdom,
dealing with both local and national matters as he went. John was very active in the administration of England and
was involved in every aspect of government. In part he was following in the tradition of Henry I and Henry II, but
by the 13th century the volume of administrative work had greatly increased, which put much more pressure on a king
who wished to rule in this style. John was in England for much longer periods than his predecessors, which made his
rule more personal than that of previous kings, particularly in previously ignored areas such as the north. John's
royal household was based around several groups of followers. One group was the familiares regis, John's immediate
friends and knights who travelled around the country with him. They also played an important role in organising and
leading military campaigns. Another section of royal followers were the curia regis; these curiales were the senior
officials and agents of the king and were essential to his day-to-day rule. Being a member of these inner circles
brought huge advantages, as it was easier to gain favours from the king, file lawsuits, marry a wealthy heiress or
have one's debts remitted. By the time of Henry II, these posts were increasingly being filled by "new men" from
outside the normal ranks of the barons. This intensified under John's rule, with many lesser nobles arriving from
the continent to take up positions at court; many were mercenary leaders from Poitou. These men included soldiers
who would become infamous in England for their uncivilised behaviour, including Falkes de Breauté, Gerard d'Athies,
Engelard de Cigongé and Philip Marc. Many barons perceived the king's household as what Ralph Turner has characterised
as a "narrow clique enjoying royal favour at barons' expense" staffed by men of lesser status. John's lack of religious
conviction has been noted by contemporary chroniclers and later historians, with some suspecting that John was at
best impious, or even atheistic, a very serious issue at the time. Contemporary chroniclers catalogued his various
anti-religious habits at length, including his failure to take communion, his blasphemous remarks, and his witty
but scandalous jokes about church doctrine, including jokes about the implausibility of the Resurrection. They commented
on the paucity of John's charitable donations to the church. Historian Frank McLynn argues that John's early years
at Fontevrault, combined with his relatively advanced education, may have turned him against the church. Other historians
have been more cautious in interpreting this material, noting that chroniclers also reported John's personal interest
in the life of St Wulfstan of Worcester and his friendships with several senior clerics, most especially with Hugh
of Lincoln, who was later declared a saint. Financial records show a normal royal household engaged in the usual
feasts and pious observances – albeit with many records showing John's offerings to the poor to atone for routinely
breaking church rules and guidance. The historian Lewis Warren has argued that the chronicler accounts were subject
to considerable bias and the King was "at least conventionally devout," citing his pilgrimages and interest in religious
scripture and commentaries. During the truce of 1206–1208, John focused on building up his financial and military
resources in preparation for another attempt to recapture Normandy. John used some of this money to pay for new alliances
on Philip's eastern frontiers, where the growth in Capetian power was beginning to concern France's neighbours. By
1212 John had successfully concluded alliances with his nephew Otto IV, a contender for the crown of Holy Roman Emperor
in Germany, as well as with the counts Renaud of Boulogne and Ferdinand of Flanders. The invasion plans for 1212
were postponed because of fresh English baronial unrest about service in Poitou. Philip seized the initiative in
1213, sending his elder son, Louis, to invade Flanders with the intention of next launching an invasion of England.
John was forced to postpone his own invasion plans to counter this threat. He launched his new fleet to attack the
French at the harbour of Damme. The attack was a success, destroying Philip's vessels and any chances of an invasion
of England that year. John hoped to exploit this advantage by invading himself late in 1213, but baronial discontent
again delayed his invasion plans until early 1214, in what would prove to be his final Continental campaign. John
wanted John de Gray, the Bishop of Norwich and one of his own supporters, to be appointed Archbishop of Canterbury
after the death of Walter, but the cathedral chapter for Canterbury Cathedral claimed the exclusive right to elect
Walter's successor. They favoured Reginald, the chapter's sub-prior. To complicate matters, the bishops of the province
of Canterbury also claimed the right to appoint the next archbishop. The chapter secretly elected Reginald and he
travelled to Rome to be confirmed; the bishops challenged the appointment and the matter was taken before Innocent.
John forced the Canterbury chapter to change their support to John de Gray, and a messenger was sent to Rome to inform
the papacy of the new decision. Innocent disavowed both Reginald and John de Gray, and instead appointed his own
candidate, Stephen Langton. John refused Innocent's request that he consent to Langton's appointment, but the pope
consecrated Langton anyway in June 1207. Tensions between John and the barons had been growing for several years,
as demonstrated by the 1212 plot against the king. Many of the disaffected barons came from the north of England;
that faction was often labelled by contemporaries and historians as "the Northerners". The northern barons rarely
had any personal stake in the conflict in France, and many of them owed large sums of money to John; the revolt has
been characterised as "a rebellion of the king's debtors". Many of John's military household joined the rebels, particularly
amongst those that John had appointed to administrative roles across England; their local links and loyalties outweighed
their personal loyalty to John. Tension also grew across North Wales, where opposition to the 1211 treaty between
John and Llywelyn was turning into open conflict. For some the appointment of Peter des Roches as justiciar was an
important factor, as he was considered an "abrasive foreigner" by many of the barons. The failure of John's French
military campaign in 1214 was probably the final straw that precipitated the baronial uprising during John's final
years as king; James Holt describes the path to civil war as "direct, short and unavoidable" following the defeat
at Bouvines. John met the rebel leaders at Runnymede, near Windsor Castle, on 15 June 1215. Langton's efforts at
mediation created a charter capturing the proposed peace agreement; it was later renamed Magna Carta, or "Great Charter".
The charter went beyond simply addressing specific baronial complaints, and formed a wider proposal for political
reform, albeit one focusing on the rights of free men, not serfs and unfree labour. It promised the protection of
church rights, protection from illegal imprisonment, access to swift justice, new taxation only with baronial consent
and limitations on scutage and other feudal payments. A council of twenty-five barons would be created to monitor
and ensure John's future adherence to the charter, whilst the rebel army would stand down and London would be surrendered
to the king. Prince Louis intended to land in the south of England in May 1216, and John assembled a naval force
to intercept him. Unfortunately for John, his fleet was dispersed by bad storms and Louis landed unopposed in Kent.
John hesitated and decided not to attack Louis immediately, either due to the risks of open battle or over concerns
about the loyalty of his own men. Louis and the rebel barons advanced west and John retreated, spending the summer
reorganising his defences across the rest of the kingdom. John saw several of his military household desert to the
rebels, including his half-brother, William Longespée. By the end of the summer the rebels had regained the south-east
of England and parts of the north. John's first wife, Isabel, Countess of Gloucester, was released from imprisonment
in 1214; she remarried twice, and died in 1217. John's second wife, Isabella of Angoulême, left England for Angoulême
soon after the king's death; she became a powerful regional leader, but largely abandoned the children she had had
by John. John had five legitimate children, all by Isabella. His eldest son, Henry III, ruled as king for the majority
of the 13th century. Richard became a noted European leader and ultimately the King of the Romans in the Holy Roman
Empire. Joan married Alexander II of Scotland to become his queen consort. Isabella married the Holy Roman Emperor
Frederick II. His youngest daughter, Eleanor, married William Marshal's son, also called William, and later the famous
English rebel Simon de Montfort. John had a number of illegitimate children by various mistresses, including nine
sons – Richard, Oliver, John, Geoffrey, Henry, Osbert Gifford, Eudes, Bartholomew and probably Philip – and three
daughters – Joan, Maud and probably Isabel. Of these, Joan became the most famous, marrying Prince Llywelyn the Great
of Wales. Most historians today, including John's recent biographers Ralph Turner and Lewis Warren, argue that John
was an unsuccessful monarch, but note that his failings were exaggerated by 12th- and 13th-century chroniclers. Jim
Bradbury notes the current consensus that John was a "hard-working administrator, an able man, an able general",
albeit, as Turner suggests, with "distasteful, even dangerous personality traits", including pettiness, spitefulness
and cruelty. John Gillingham, author of a major biography of Richard I, follows this line too, although he considers
John a less effective general than do Turner or Warren, and describes him as "one of the worst kings ever to rule England".
Bradbury takes a moderate line, but suggests that in recent years modern historians have been overly lenient towards
John's numerous faults. Popular historian Frank McLynn maintains a counter-revisionist perspective on John, arguing
that the king's modern reputation amongst historians is "bizarre", and that as a monarch John "fails almost all those
[tests] that can be legitimately set".
The Macintosh, however, was expensive, which hindered its ability to be competitive in a market already dominated by the
Commodore 64 for consumers, as well as the IBM Personal Computer and its accompanying clone market for businesses.
Macintosh systems still found success in education and desktop publishing and kept Apple as the second-largest PC
manufacturer for the next decade. In the 1990s, improvements in the rival Wintel platform, notably with the introduction
of Windows 3.0, then Windows 95, gradually took market share from the more expensive Macintosh systems. The performance
advantage of 68000-based Macintosh systems was eroded by Intel's Pentium, and in 1994 Apple was relegated to third
place as Compaq became the top PC manufacturer. Even after a transition to the superior PowerPC-based Power Macintosh
(later renamed the PowerMac, in line with the PowerBook series) line in 1994, the falling prices of commodity PC
components and the release of Windows 95 saw the Macintosh user base decline. Smith's first Macintosh board was built
to Raskin's design specifications: it had 64 kilobytes (kB) of RAM, used the Motorola 6809E microprocessor, and was
capable of supporting a 256×256-pixel black-and-white bitmap display. Bud Tribble, a member of the Mac team, was
interested in running the Apple Lisa's graphical programs on the Macintosh, and asked Smith whether he could incorporate
the Lisa's Motorola 68000 microprocessor into the Mac while still keeping the production cost down. By December 1980,
Smith had succeeded in designing a board that not only used the 68000, but increased its speed from 5 MHz to 8 MHz;
this board also had the capacity to support a 384×256-pixel display. Smith's design used fewer RAM chips than the
Lisa, which made production of the board significantly more cost-efficient. The final Mac design was self-contained
and had the complete QuickDraw picture language and interpreter in 64 kB of ROM – far more than most other computers;
it had 128 kB of RAM, in the form of sixteen 64 kilobit (kb) RAM chips soldered to the logic board. Though there were
no memory slots, its RAM was expandable to 512 kB by means of soldering sixteen IC sockets to accept 256 kb RAM chips
in place of the factory-installed chips. The final product's screen was a 9-inch, 512×342 pixel monochrome display,
exceeding the size of the planned screen. Apple spent $2.5 million purchasing all 39 advertising pages in a special,
post-election issue of Newsweek, and ran a "Test Drive a Macintosh" promotion, in which potential buyers with a credit
card could take home a Macintosh for 24 hours and return it to a dealer afterwards. While 200,000 people participated,
dealers disliked the promotion, the supply of computers was insufficient for demand, and many were returned in such
a bad condition that they could no longer be sold. This marketing campaign caused CEO John Sculley to raise the price
from US$1,995 to US$2,495 (about $5,200 when adjusted for inflation in 2010). The computer sold well, nonetheless,
reportedly outselling the IBM PCjr which also began shipping early that year. By April 1984 the company sold 50,000
Macintoshes, and hoped for 70,000 by early May and almost 250,000 by the end of the year. Apple released the Macintosh
Plus on January 10, 1986, for a price of US$2,600. It offered one megabyte of RAM, easily expandable to four megabytes
by the use of socketed RAM boards. It also featured a SCSI parallel interface, allowing up to seven peripherals—such
as hard drives and scanners—to be attached to the machine. Its floppy drive was increased to an 800 kB capacity.
The Mac Plus was an immediate success and remained in production, unchanged, until October 15, 1990; on sale for
just over four years and ten months, it was the longest-lived Macintosh in Apple's history. In September 1986, Apple
introduced the Macintosh Programmer's Workshop, or MPW, an application that allowed software developers to create
software for Macintosh on Macintosh, rather than cross compiling from a Lisa. In August 1987, Apple unveiled HyperCard
and MultiFinder, which added cooperative multitasking to the Macintosh. Apple began bundling both with every Macintosh.
With the new Motorola 68030 processor came the Macintosh IIx in 1988, which had benefited from internal improvements,
including an on-board MMU. It was followed in 1989 by the Macintosh IIcx, a more compact version with fewer slots
and a version of the Mac SE powered by the 16 MHz 68030, the Macintosh SE/30. Later that year, the Macintosh IIci,
running at 25 MHz, was the first Mac to be "32-bit clean." This allowed it to natively support more than 8 MB of
RAM, unlike its predecessors, which had "32-bit dirty" ROMs (8 of the 32 bits available for addressing were used
for OS-level flags). System 7 was the first Macintosh operating system to support 32-bit addressing. The following
year, the Macintosh IIfx, starting at US$9,900, was unveiled. Apart from its fast 40 MHz 68030 processor, it had
significant internal architectural improvements, including faster memory and two Apple II CPUs (6502s) dedicated
to I/O processing. As for Mac OS, System 7 was a 32-bit rewrite from Pascal to C++ that introduced virtual memory
and improved the handling of color graphics, as well as memory addressing, networking, and co-operative multitasking.
Also during this time, the Macintosh began to shed the "Snow White" design language, along with the expensive consulting
fees they were paying to Frogdesign. Apple instead brought the design work in-house by establishing the Apple Industrial
Design Group, which became responsible for crafting a new look for all Apple products. When Steve Jobs returned to Apple
in 1997 following the company's purchase of NeXT, he ordered that the OS that had been previewed as version 7.7 be
branded Mac OS 8 (in place of the never-to-appear Copland OS). Since Apple had licensed only System 7 to third parties,
this move effectively ended the clone line. The decision caused significant financial losses for companies like Motorola,
which produced the StarMax; Umax, which produced the SuperMac; and Power Computing, which offered several lines of Mac
clones, including the PowerWave, PowerTower, and PowerTower Pro. These companies had invested substantial resources
in creating their own Mac-compatible hardware. Apple bought out Power Computing's license, but allowed Umax to continue
selling Mac clones until their license expired, as they had a sizeable presence in the lower-end segment that Apple
did not. In September 1997, Apple extended Umax's license, allowing it to sell clones with Mac OS 8 (the only clone
maker permitted to do so), but with the restriction that it sell only low-end systems. Without the higher profit margins of
high-end systems, however, Umax judged this would not be profitable and exited the Mac clone market in May 1998,
having lost US$36 million on the program. Mac OS continued to evolve up to version 9.2.2, including retrofits such
as the addition of a nanokernel and support for Multiprocessing Services 2.0 in Mac OS 8.6, though its dated architecture
made replacement necessary. Initially developed in the Pascal programming language, it was substantially rewritten
in C++ for System 7. From its beginnings on an 8 MHz machine with 128 KB of RAM, it had grown to support Apple's
latest 1 GHz G4-equipped Macs. In the years since its architecture was first laid down, features already common on
competing platforms, such as preemptive multitasking and protected memory, had become feasible on the kind of hardware Apple
manufactured. As such, Apple introduced Mac OS X, a fully overhauled Unix-based successor to Mac OS 9. OS X uses
Darwin, XNU, and Mach as foundations, and is based on NeXTSTEP. It was released to the public in September 2000,
as the Mac OS X Public Beta, featuring a revamped user interface called "Aqua". At US$29.99, it allowed adventurous
Mac users to sample Apple's new operating system and provide feedback for the actual release. The initial version
of Mac OS X, 10.0 "Cheetah", was released on March 24, 2001. Older Mac OS applications could still run under early
Mac OS X versions, using an environment called "Classic". Subsequent releases of Mac OS X included 10.1 "Puma" (2001),
10.2 "Jaguar" (2002), 10.3 "Panther" (2003) and 10.4 "Tiger" (2005). The current Mac product family uses Intel x86-64
processors. Apple introduced an emulator during the transition from PowerPC chips (called Rosetta), much as it did
during the transition from Motorola 68000 architecture a decade earlier. The Macintosh is the only mainstream computer
platform to have successfully transitioned to a new CPU architecture, and has done so twice. All current Mac models
ship with at least 8 GB of RAM as standard other than the 1.4 GHz Mac Mini, MacBook Pro (without Retina Display),
and MacBook Air. Current Mac computers use ATI Radeon or nVidia GeForce graphics cards as well as Intel graphics
built into the main CPU. Apart from the MacBook Pro without Retina Display, current Macs no longer ship with an
optical media drive; where fitted, the dual-function DVD/CD burner is what Apple calls a SuperDrive. Current Macs
include two standard data transfer ports, USB and Thunderbolt (except for the 2015 MacBook, which has only a USB-C
port and a headphone port). On the MacBook Pro, iMac, MacBook Air, and Mac Mini, Apple says the Thunderbolt
port can transfer data at speeds up to 10 gigabits per second. USB was introduced
in the 1998 iMac G3 and is ubiquitous today, while FireWire is mainly reserved for high-performance devices such
as hard drives or video cameras. Starting with the then-new iMac G5, released in October 2005, Apple started to include
built-in iSight cameras on appropriate models, and a media center interface called Front Row that can be operated
by an Apple Remote or keyboard for accessing media stored on the computer. Front Row has been discontinued as of
2011, however, and the Apple Remote is no longer bundled with new Macs. Originally, the hardware architecture was
so closely tied to the Mac OS operating system that it was impossible to boot an alternative operating system directly. The
most common workaround was to boot into Mac OS and then hand over control to a Mac OS-based bootloader application.
Used even by Apple for A/UX and MkLinux, this technique is no longer necessary since the introduction of Open Firmware-based
PCI Macs, though it was formerly used for convenience on many Old World ROM systems due to bugs in the firmware
implementation. Now, Mac hardware boots directly from Open Firmware in most PowerPC-based Macs or EFI in all Intel-based
Macs. In 1982, Regis McKenna was brought in to shape the marketing and launch of the Macintosh. Later the Regis McKenna
team grew to include Jane Anderson, Katie Cadigan and Andy Cunningham, who eventually led the Apple account for the
agency. Cunningham and Anderson were the primary authors of the Macintosh launch plan. The launch of the Macintosh
pioneered many different tactics that are used today in launching technology products, including the "multiple exclusive,"
event marketing (credited to John Sculley, who brought the concept over from Pepsi), creating a mystique around a
product and giving an inside look into a product's creation. Compaq, which had held the third-place spot
among PC manufacturers through the 1980s and early-to-mid 1990s, initiated a successful price war in 1994 that vaulted
it to first place by year's end, overtaking a struggling IBM and relegating Apple to third place. Apple's market
share further struggled due to the release of the Windows 95 operating system, which unified Microsoft's formerly
separate MS-DOS and Windows products. Windows 95 significantly enhanced the multimedia capability and performance
of IBM PC compatible computers, and brought the capabilities of Windows to parity with the Mac OS GUI. Statistics
from late 2003 indicate that Apple had 2.06 percent of the desktop share in the United States, which had increased
to 2.88 percent by Q4 2004. As of October 2006, research firms IDC and Gartner reported that Apple's market share
in the U.S. had increased to about 6 percent. Figures from December 2006, showing a market share of around 6 percent
(IDC) and 6.1 percent (Gartner), are based on a more than 30 percent increase in unit sales from 2005 to 2006. The
installed base of Mac computers is hard to determine, with numbers ranging from 5% (estimated in 2009) to 16% (estimated
in 2005). The Macintosh SE was released at the same time as the Macintosh II for $2900 (or $3900 with hard drive),
as the first compact Mac with a 20 MB internal hard drive and an expansion slot. The SE's expansion slot was located
inside the case along with the CRT, potentially exposing an upgrader to high voltage. For this reason, Apple recommended
users bring their SE to an authorized Apple dealer to have upgrades performed. The SE also updated Jerry Manock and
Terry Oyama's original design and shared the Macintosh II's Snow White design language, as well as the new Apple
Desktop Bus (ADB) mouse and keyboard that had first appeared on the Apple IIGS some months earlier. In recent years,
Apple has seen a significant boost in sales of Macs. This has been attributed, in part, to the success of the iPod
and the iPhone, a halo effect whereby satisfied iPod or iPhone owners purchase more Apple products, and Apple has
since capitalized on that with the iCloud cloud service that allows users to seamlessly sync data between these devices
and Macs. Nonetheless, like other personal computer manufacturers, the Macintosh lines have been hurt by consumer
trend towards smartphones and tablet computers (particularly Apple's own iPhone and iPad, respectively) as the computing
devices of choice among consumers. Earlier, Apple had introduced a range of relatively inexpensive Macs in October
1990. The Macintosh Classic, essentially a less expensive version of the Macintosh SE, was the least expensive Mac
offered until early 2001. The 68020-powered Macintosh LC, in its distinctive "pizza box" case, offered color graphics
and was accompanied by a new, low-cost 512×384 pixel monitor. The Macintosh IIsi was essentially a 20 MHz IIci with
only one expansion slot. All three machines sold well, although Apple's profit margin on them was considerably lower
than that on earlier models. Starting in 2006, Apple's industrial design shifted to favor aluminum, which was used
in the construction of the first MacBook Pro. Glass was added in 2008 with the introduction of the unibody MacBook
Pro. These materials are billed as environmentally friendly. The iMac, MacBook Pro, MacBook Air, and Mac Mini lines
currently all use aluminum enclosures, and are now made of a single unibody. Chief designer Jonathan Ive continues
to guide products towards a minimalist and simple feel, including the elimination of replaceable batteries in notebooks.
Multi-touch gestures from the iPhone's interface have been applied to the Mac line in the form of touch pads on notebooks
and the Magic Mouse and Magic Trackpad for desktops. The Macintosh project was begun in 1979 by Jef Raskin, an Apple
employee who envisioned an easy-to-use, low-cost computer for the average consumer. He wanted to name the computer
after his favorite type of apple, the McIntosh, but the spelling was changed to "Macintosh" for legal reasons as
the original was the same spelling as that used by McIntosh Laboratory, Inc., the audio equipment manufacturer. Steve
Jobs requested that McIntosh Laboratory give Apple a release for the name with its changed spelling so that Apple
could use it, but the request was denied, forcing Apple to eventually buy the rights to use the name. (A 1984 Byte
Magazine article suggested Apple changed the spelling only after "early users" misspelled "McIntosh". However, Jef
Raskin had adopted the Macintosh spelling by 1981, when the Macintosh computer was still a single prototype machine
in the lab. This explanation further clashes with the first explanation given above that the change was made for
"legal reasons.") After the Lisa's announcement, John Dvorak discussed rumors of a mysterious "MacIntosh" project
at Apple in February 1983. The company announced the Macintosh 128K—manufactured at an Apple factory in Fremont,
California—in October 1983, followed by an 18-page brochure included with various magazines in December. The Macintosh
was introduced by a US$1.5 million Ridley Scott television commercial, "1984". It most notably aired during the third
quarter of Super Bowl XVIII on January 22, 1984, and is now considered a "watershed event" and a "masterpiece." Regis
McKenna called the ad "more successful than the Mac itself." "1984" used an unnamed heroine to represent the coming
of the Macintosh (indicated by a Picasso-style picture of the computer on her white tank top) as a means of saving
humanity from the "conformity" of IBM's attempts to dominate the computer industry. The ad alludes to George Orwell's
novel, Nineteen Eighty-Four, which described a dystopian future ruled by a televised "Big Brother." The Macintosh
(/ˈmækᵻntɒʃ/ MAK-in-tosh; branded as Mac since 1997) is a series of personal computers (PCs) designed, developed,
and marketed by Apple Inc. Steve Jobs introduced the original Macintosh computer on January 24, 1984. This was the
first mass-market personal computer featuring an integral graphical user interface and mouse. This first model was
later renamed the "Macintosh 128K" to distinguish it within the growing family of subsequently updated models built
on the same proprietary Apple architecture. Since 1998, Apple has largely phased out the Macintosh name
in favor of "Mac", though the product family has been nicknamed "Mac" or "the Mac" since the development of the first
model. In 1985, the combination of the Mac, Apple's LaserWriter printer, and Mac-specific software like Boston Software's
MacPublisher and Aldus PageMaker enabled users to design, preview, and print page layouts complete with text and
graphics—an activity to become known as desktop publishing. Initially, desktop publishing was unique to the Macintosh,
but eventually became available for other platforms. Later, applications such as Macromedia FreeHand, QuarkXPress,
and Adobe's Photoshop and Illustrator strengthened the Mac's position as a graphics computer and helped to expand
the emerging desktop publishing market. Raskin was authorized to start hiring for the project in September 1979,
and he immediately asked his long-time colleague, Brian Howard, to join him. His initial team would eventually consist
of himself, Howard, Joanna Hoffman, Burrell Smith, and Bud Tribble. The rest of the original Mac team would include
Bill Atkinson, Bob Belleville, Steve Capps, George Crow, Donn Denman, Chris Espinosa, Andy Hertzfeld, Bruce Horn,
Susan Kare, Larry Kenyon, and Caroline Rose with Steve Jobs leading the project. From 2001 to 2008, Mac sales increased
continuously on an annual basis. Apple reported worldwide sales of 3.36 million Macs during the 2009 holiday season.
As of mid-2011, the Macintosh continued to enjoy rapid market-share growth in the US, rising from 7.3% of all
computer shipments in 2010 to 9.3% in 2011. According to IDC's quarterly PC tracker, Apple's global PC market share
in the third quarter of 2014 increased 5.7 percent year over year, with record sales of 5.5 million units. Apple
now sits in the number five spot, with a global market share of about 6% during 2014, behind Lenovo, HP, Dell and
Acer. In 1987, Apple spun off its software business as Claris. It was given the code and rights to several applications,
most notably MacWrite, MacPaint, and MacProject. In the late 1980s, Claris released a number of revamped software
titles; the result was the "Pro" series, including MacDraw Pro, MacWrite Pro, and FileMaker Pro. To provide a complete
office suite, Claris purchased the rights to the Informix Wingz spreadsheet program on the Mac, renaming it Claris
Resolve, and added the new presentation software Claris Impact. By the early 1990s, Claris applications were shipping
with the majority of consumer-level Macintoshes and were extremely popular. In 1991, Claris released ClarisWorks,
which soon became their second best-selling application. When Claris was reincorporated back into Apple in 1998,
ClarisWorks was renamed AppleWorks beginning with version 5.0. Two days after "1984" aired, the Macintosh went on
sale, and came bundled with two applications designed to show off its interface: MacWrite and MacPaint. It was first
demonstrated by Steve Jobs in the first of his famous Mac keynote speeches, and though the Mac garnered an immediate,
enthusiastic following, some labeled it a mere "toy." Because the operating system was designed largely around the
GUI, existing text-mode and command-driven applications had to be redesigned and the programming code rewritten.
This was a time-consuming task that many software developers chose not to undertake, and could be regarded as a reason
for an initial lack of software for the new system. In April 1984, Microsoft's MultiPlan migrated over from MS-DOS,
with Microsoft Word following in January 1985. In 1985, Lotus Software introduced Lotus Jazz for the Macintosh platform
after the success of Lotus 1-2-3 for the IBM PC, although it was largely a flop. Apple introduced the Macintosh Office
suite the same year with the "Lemmings" ad. Infamous for insulting its own potential customers, the ad was not successful.
Apple has generally dominated the premium PC market, having a 91 percent market share for PCs priced at more than
$1,000 in 2009, according to NPD. The Macintosh took 45 percent of operating profits in the PC industry during Q4
2012, compared to 13 percent for Dell, seven percent for Hewlett Packard, six percent for Lenovo and Asus, and one
percent for Acer. While Macintosh sales have largely held steady, in comparison to sales of the iPhone
and iPad, which increased significantly during the 2010s, Macintosh computers still enjoy high margins on a per-unit
basis, the majority being MacBooks focused on the ultraportable niche, the most profitable
and only growing segment of PCs. It also helped that the Macintosh lineup is simple, updated on a yearly schedule,
and consistent across both Apple retail stores and authorized resellers, where a special "store within
a store" section distinguishes them from Windows PCs. In contrast, Windows PC manufacturers generally have a wide
range of offerings, selling only a portion through retail with a full selection on the web, and often with limited-time
or region-specific models. The Macintosh ranked third on the "list of intended brands for desktop purchases" for
the 2011 holiday season, then moved up to second in 2012 by displacing Hewlett Packard, and in 2013 took the top
spot ahead of Dell. The Macintosh's minimal memory became apparent, even compared with other personal computers in
1984, and could not be expanded easily. It also lacked a hard disk drive or the means to easily attach one. Many
small companies sprang up to address the memory issue. Suggestions revolved around either upgrading the memory to
512 KB or removing the computer's 16 memory chips and replacing them with larger-capacity chips, a tedious and difficult
operation. In October 1984, Apple introduced the Macintosh 512K, with quadruple the memory of the original, at a
price of US$3,195. It also offered an upgrade for 128k Macs that involved replacing the logic board. In 1988, Apple
sued Microsoft and Hewlett-Packard on the grounds that they infringed Apple's copyrighted GUI, citing (among other
things) the use of rectangular, overlapping, and resizable windows. After four years, the case was decided against
Apple, as were later appeals. Apple's actions were criticized by some in the software community, including the Free
Software Foundation (FSF), which felt Apple was trying to monopolize GUIs in general, and which boycotted GNU software
for the Macintosh platform for seven years. Furthermore, Apple had created too many similar models that confused
potential buyers. At one point, its product lineup was subdivided into Classic, LC, II, Quadra, Performa, and Centris
models, with essentially the same computer being sold under a number of different names. These models competed against
Macintosh clones, hardware manufactured by third parties that ran Apple's System 7. The clone program succeeded in increasing
the Macintosh's market share somewhat, and provided cheaper hardware for consumers, but hurt Apple financially as
existing Apple customers began to buy cheaper clones which cannibalized the sales of Apple's higher-margin Macintosh
systems, yet Apple still shouldered the burden of developing the Mac OS platform. In early 2001, Apple began shipping
computers with CD-RW drives and emphasized the Mac's ability to play DVDs by including DVD-ROM and DVD-RAM drives
as standard. Steve Jobs admitted that Apple had been "late to the party" on writable CD technology, but felt that
Macs could become a "digital hub" that linked and enabled an "emerging digital lifestyle". Apple would later introduce
an update to its iTunes music player software that enabled it to burn CDs, along with a controversial "Rip, Mix,
Burn" advertising campaign that some felt encouraged media piracy. This accompanied the release of the iPod, Apple's
first successful handheld device. Apple continued to launch products, such as the unsuccessful Power Mac G4 Cube,
the education-oriented eMac, and the titanium (and later aluminium) PowerBook G4 laptop for professionals. It was
not long until Apple released its first portable computer, the Macintosh Portable, in 1989. Owing to considerable
design issues, it was soon replaced in 1991 by the first of the PowerBook line: the PowerBook 100, a miniaturized
portable; the 16 MHz 68030 PowerBook 140; and the 25 MHz 68030 PowerBook 170. They were the first portable computers
with the keyboard behind a palm rest and a built-in pointing device (a trackball) in front of the keyboard. The 1993
PowerBook 165c was Apple's first portable computer to feature a color screen, displaying 256 colors with 640 x 400-pixel
resolution. The second generation of PowerBooks, the 68040-equipped 500 series, introduced trackpads, integrated
stereo speakers, and built-in Ethernet to the laptop form factor in 1994. In 2001, Apple introduced Mac OS X, based
on Darwin and NeXTSTEP; its new features included the Dock and the Aqua user interface. During the transition, Apple
included a virtual machine subsystem known as Classic, allowing users to run Mac OS 9 applications under Mac OS X
10.4 and earlier on PowerPC machines. Apple previewed Mac OS X 10.7 "Lion" in February 2011, and it was made available
in the summer of that year. Lion includes many new features, such as Mission Control, the Mac App Store (available to
Mac OS X v10.6.6 "Snow Leopard" users by software update), Launchpad, an application viewer and launcher akin to
the iOS Home Screen, and Resume, a feature similar to the hibernate function found in Microsoft Windows. The most
recent version is OS X El Capitan. In addition to the operating system, all new Macs are bundled with assorted Apple-produced
applications, including iLife, the Safari web browser and the iTunes media player. Apple introduced Mavericks at
WWDC 2013 in June, and released it on October 15 of that year. It is free of charge to everyone running Snow Leopard
or later and is compatible with most Macs from 2007 onward. Mavericks brought many iOS apps, functions, and design
cues to the Mac, as well as better multi-display support, iBooks, Maps, App Nap, and other upgrades to improve
performance and battery life. In 2000, Apple released the Power Mac G4 Cube, its first desktop since the discontinued
Power Macintosh G3, to slot between the iMac G3 and the Power Mac G4. Even with its innovative design, it was initially
priced US$200 higher than the comparably-equipped and more-expandable base Power Mac G4, while also not including
a monitor, making it too expensive and resulting in slow sales. Apple sold just 29,000 Cubes in Q4 of 2000 which
was one third of expectations, compared to 308,000 Macs during that same quarter, and Cube sales dropped to 12,000
units in Q1 of 2001. A price drop and hardware upgrades could not offset the earlier perception of the Cube's reduced
value compared to the iMac and Power Mac G4 lineup, and it was discontinued in July 2001. Historically, Mac OS X
enjoyed a near-absence of the types of malware and spyware that affect Microsoft Windows users. Mac OS X has a smaller
usage share compared to Microsoft Windows (roughly 5% and 92%, respectively), but it also has traditionally more
secure UNIX roots. Worms, as well as potential vulnerabilities, were noted in February 2006, which led some industry
analysts and anti-virus companies to issue warnings that Apple's Mac OS X is not immune to malware. Increasing market
share coincided with additional reports of a variety of attacks. Apple releases security updates for its software.
In early 2011, Mac OS X experienced a large increase in malware attacks, and malware such as Mac Defender, MacProtector,
and MacGuard were seen as an increasing problem for Mac users. At first, the malware installer required the user
to enter the administrative password, but later versions were able to install without user input. Initially, Apple
support staff were instructed not to assist in the removal of the malware or admit the existence of the malware issue,
but as the malware spread, a support document was issued. Apple announced an OS X update to fix the problem. An estimated
100,000 users were affected. By March 2011, the market share of OS X in North America had increased to slightly over
14%. Whether the size of the Mac's market share and installed base is relevant, and to whom, is a hotly debated issue.
Industry pundits have often called attention to the Mac's relatively small market share to predict Apple's impending
doom, particularly in the early and mid-1990s when the company's future seemed bleakest. Others argue that market
share is the wrong way to judge the Mac's success. Apple has positioned the Mac as a higher-end personal computer,
and so it may be misleading to compare it to a budget PC. Because the overall market for personal computers has grown
rapidly, the Mac's increasing sales numbers are effectively swamped by the industry's expanding sales volume as a
whole. Apple's small market share, then, gives the impression that fewer people are using Macs than did ten years
ago, when exactly the opposite is true. Soaring sales of the iPhone and iPad mean that the portion of Apple's profits
represented by the Macintosh has declined in 2010, dropping to 24% from 46% two years earlier. Others try to de-emphasize
market share, citing that it is rarely brought up in other industries. Regardless of the Mac's market share, Apple
has remained profitable since Steve Jobs' return and the company's subsequent reorganization. Notably, a report published
in the first quarter of 2008 found that Apple had a 14% market share in the personal computer market in the US, including
66% of all computers over $1,000. Market research indicates that Apple draws its customer base from a higher-income
demographic than the mainstream personal computer market. Notwithstanding these technical and commercial successes
on the Macintosh platform, their systems remained fairly expensive, making them less competitive in light of the
falling costs of components that made IBM PC compatibles cheaper and accelerated their adoption. Starting in 2002,
Apple moved to eliminate CRT displays from its product line as part of aesthetic design
and space-saving measures with the iMac G4. However, the new iMac with its flexible LCD flat-panel monitor was considerably
more expensive on its debut than the preceding iMac G3, largely due to the higher cost of the LCD technology at the
time. In order to keep the Macintosh affordable for the education market and due to obsolescence of the iMac G3,
Apple created the eMac in April 2002 as the intended successor; however the eMac's CRT made it relatively bulky and
somewhat outdated, while its all-in-one construction meant it could not be expanded to meet consumer demand for larger
monitors. The iMac G4's relatively high prices were approaching that of laptops which were portable and had higher
resolution LCD screens. Meanwhile, Windows PC manufacturers could offer desktop configurations with LCD flat panel
monitors at prices comparable to the eMac and at much lower cost than the iMac G4. The flop of the Power Mac G4 Cube,
along with the more expensive iMac G4 and heavy eMac, meant that Macintosh desktop sales never reached the market
share attained by the previous iMac G3. For the next half-decade while Macintosh sales held steady, it would instead
be the iPod portable music player and iTunes music download service that would drive Apple's sales growth. Within the
Macintosh sales breakdown, desktop sales have stayed mostly constant while being surpassed by Mac notebooks, whose
sales have grown considerably; seven out of ten Macs sold in 2009 were laptops, a ratio projected
to rise to three out of four by 2010. The shift in form factors reflects the desktop iMac moving from
affordable (iMac G3) to upscale (iMac G4), with subsequent releases positioned as premium all-in-ones. By contrast,
the MSRP of the MacBook laptop lines has dropped through successive generations, such that the MacBook Air and MacBook
Pro constitute the lowest price of entry to a Mac, apart from the even less expensive Mac Mini (the
only sub-$1,000 offering from Apple, albeit without a monitor and keyboard). Not surprisingly, the MacBooks are the
top-selling form factors of the Macintosh platform today. The use of Intel microprocessors has helped Macs more directly
compete with their Windows counterparts on price and performance, and by the 2010s Apple was receiving Intel's latest
CPUs first before other PC manufacturers. In 1998, after the return of Steve Jobs, Apple consolidated its multiple
consumer-level desktop models into the all-in-one iMac G3, which became a commercial success and revitalized the
brand. Since their transition to Intel processors in 2006, the complete lineup is entirely based on said processors
and associated systems. Its current lineup comprises three desktops (the all-in-one iMac, entry-level Mac mini, and
the Mac Pro tower graphics workstation), and four laptops (the MacBook, MacBook Air, MacBook Pro, and MacBook Pro
with Retina display). Its Xserve server was discontinued in 2011 in favor of the Mac Mini and Mac Pro. Burrell Smith's
innovative design, which combined the low production cost of an Apple II with the computing power of the Lisa's CPU, the Motorola
68K, received the attention of Steve Jobs, co-founder of Apple. Realizing that the Macintosh was more marketable
than the Lisa, he began to focus his attention on the project. Raskin left the team in 1981 over a personality conflict
with Jobs. Team member Andy Hertzfeld said that the final Macintosh design is closer to Jobs' ideas than Raskin's.
After hearing of the pioneering GUI technology being developed at Xerox PARC, Jobs had negotiated a visit to see
the Xerox Alto computer and its Smalltalk development tools in exchange for Apple stock options. The Lisa and Macintosh
user interfaces were influenced by technology seen at Xerox PARC and were combined with the Macintosh group's own
ideas. Jobs also commissioned industrial designer Hartmut Esslinger to work on the Macintosh line, resulting in the
"Snow White" design language; although it came too late for the earliest Macs, it was implemented in most other mid-
to late-1980s Apple computers. However, Jobs' leadership at the Macintosh project did not last; after an internal
power struggle with new CEO John Sculley, Jobs resigned from Apple in 1985. He went on to found NeXT, another computer
company targeting the education market, and did not return until 1997, when Apple acquired NeXT. Jobs stated during
the Macintosh's introduction "we expect Macintosh to become the third industry standard", after the Apple II and
IBM PC. Although outselling every other computer, it did not meet expectations during the first year, especially
among business customers. Only about ten applications including MacWrite and MacPaint were widely available, although
many non-Apple software developers participated in the introduction and Apple promised that 79 companies including
Lotus, Digital Research, and Ashton-Tate were creating products for the new computer. After one year, it had less
than one quarter of the software selection available compared to the IBM PC—including only one word processor, two
databases, and one spreadsheet—although Apple had sold 280,000 Macintoshes compared to IBM's first year sales of
fewer than 100,000 PCs. Updated Motorola CPUs made a faster machine possible, and in 1987 Apple took advantage of
the new Motorola technology and introduced the Macintosh II at $5500, powered by a 16 MHz Motorola 68020 processor.
The primary improvement in the Macintosh II was Color QuickDraw in ROM, a color version of the graphics language
which was the heart of the machine. Among the many innovations in Color QuickDraw were the ability to handle any
display size, any color depth, and multiple monitors. The Macintosh II marked the start of a new direction for the
Macintosh, as now for the first time it had an open architecture with several NuBus expansion slots, support for
color graphics and external monitors, and a modular design similar to that of the IBM PC. It had an internal hard
drive and a power supply with a fan, which was initially fairly loud. One third-party developer sold a device to
regulate fan speed based on a heat sensor, but it voided the warranty. Later Macintosh computers had quieter power
supplies and hard drives. Microsoft Windows 3.0 was released in May 1990, and according to a common saying at the
time "Windows was not as good as Macintosh, but it was good enough for the average user". Though still a graphical
wrapper that relied upon MS-DOS, 3.0 was the first iteration of Windows which had a feature set and performance comparable
to the much more expensive Macintosh platform. It also did not help matters that during the previous year Jean-Louis
Gassée had steadfastly refused to lower the profit margins on Mac computers. Finally, there was a component shortage
that rocked the exponentially expanding PC industry in 1989, forcing Apple USA head Allan Loren to cut prices, which
dropped Apple's margins. Intel had tried unsuccessfully to push Apple to migrate the Macintosh platform to Intel
chips. Apple concluded that Intel's CISC (Complex Instruction Set Computer) architecture ultimately would not be
able to compete against RISC (Reduced Instruction Set Computer) processors. While the Motorola 68040 offered the
same features as the Intel 80486 and could on a clock-for-clock basis significantly outperform the Intel chip, the
486 had the ability to be clocked significantly faster without suffering from overheating problems, especially the
clock-doubled i486DX2 which ran the CPU logic at twice the external bus speed, giving such equipped IBM compatible
systems a significant performance lead over their Macintosh equivalents. Apple's product design and engineering didn't
help matters as they restricted the use of the '040 to their expensive Quadras for a time while the 486 was readily
available to OEMs as well as enthusiasts who put together their own machines. Although the higher-end Macintosh desktop lineup transitioned to the '040 in late 1991, Apple could not offer the '040 in its top-of-the-line PowerBooks until early 1994 with the PowerBook 500 series, several years after the first 486-powered IBM-compatible laptops hit the market, which cost Apple considerable sales. In 1993 Intel rolled out the Pentium processors as the successor to the
486, while the Motorola 68050 was never released, leaving the Macintosh platform a generation behind IBM compatibles
in the latest CPU technology. In 1994, Apple abandoned Motorola CPUs for the RISC PowerPC architecture developed
by the AIM alliance of Apple Computer, IBM, and Motorola. The Power Macintosh line, the first to use the new chips,
proved to be highly successful, with over a million PowerPC units sold in nine months. However, in the long run,
spurning Intel for the PowerPC was a mistake as the commoditization of Intel-architecture chips meant Apple couldn't
compete on price against "the Dells of the world". In 1998, Apple introduced its new iMac which, like the original
128K Mac, was an all-in-one computer. Its translucent plastic case, originally Bondi blue and later various additional
colors, is considered an industrial design landmark of the late 1990s. The iMac did away with most of Apple's standard
(and usually proprietary) connections, such as SCSI and ADB, in favor of two USB ports. It replaced a floppy disk
drive with a CD-ROM drive for installing software, but was incapable of writing to CDs or other media without external
third-party hardware. The iMac proved to be phenomenally successful, with 800,000 units sold in 139 days. It made
the company an annual profit of US$309 million, Apple's first profitable year since Michael Spindler took over as
CEO in 1995. This aesthetic was applied to the Power Macintosh and later the iBook, Apple's first consumer-level
laptop computer, filling the missing quadrant of Apple's "four-square product matrix" (desktop and portable products
for both consumers and professionals). More than 140,000 pre-orders were placed before it started shipping in September,
and by October proved to be a large success. Apple discontinued the use of PowerPC microprocessors in 2006. At WWDC
2005, Steve Jobs announced this transition, revealing that Mac OS X was always developed to run on both the Intel
and PowerPC architectures. All new Macs now use x86 processors made by Intel, and some were renamed as a result.
Intel-based Macs running OS X 10.6 and below (support has been discontinued since 10.7) can run pre-existing software
developed for PowerPC using an emulator called Rosetta, although at noticeably slower speeds than native programs.
However, the Classic environment is unavailable on the Intel architecture. Intel chips introduced the potential to
run the Microsoft Windows operating system natively on Apple hardware, without emulation software such as Virtual
PC. In March 2006, a group of hackers announced that they were able to run Windows XP on an Intel-based Mac. The
group released their software as open source and has posted it for download on their website. On April 5, 2006, Apple
announced the availability of the public beta of Boot Camp, software that allows owners of Intel-based Macs to install
Windows XP on their machines; later versions added support for Windows Vista and Windows 7. Classic was discontinued
in Mac OS X 10.5, and Boot Camp became a standard feature on Intel-based Macs. Apple was initially reluctant to embrace
mice with multiple buttons and scroll wheels. Macs did not natively support pointing devices that featured multiple
buttons, even from third parties, until Mac OS X arrived in 2001. Apple continued to offer only single-button mice,
in both wired and Bluetooth wireless versions, until August 2005, when it introduced the Mighty Mouse. While it looked
like a traditional one-button mouse, it actually had four buttons and a scroll ball, capable of independent x- and
y-axis movement. A Bluetooth version followed in July 2006. In October 2009, Apple introduced the Magic Mouse, which
uses multi-touch gesture recognition (similar to that of the iPhone) instead of a physical scroll wheel or ball.
It is available only in a wireless configuration, but the wired Mighty Mouse (re-branded as "Apple Mouse") is still
available as an alternative. Since 2010, Apple has also offered the Magic Trackpad as a means to control Macintosh
desktop computers in a way similar to laptops. Following the release of Intel-based Macs, third-party platform virtualization
software such as Parallels Desktop, VMware Fusion, and VirtualBox began to emerge. These programs allow users to
run Microsoft Windows or previously Windows-only software on Macs at near native speed. Apple also released Boot
Camp and Mac-specific Windows drivers that help users to install Windows XP or Vista and natively dual boot between
Mac OS X and Windows. Though not condoned by Apple, it is possible to run the Linux operating system using Boot Camp or other virtualization workarounds. Unlike most PCs, however, Macs are unable to run many legacy PC operating systems. In particular, Intel-based Macs lack the A20 gate. Although the PC market declined, Apple still managed to ship 2.8 million MacBooks in Q2 2012 (the majority of which were the MacBook Air) compared to 500,000 total Ultrabooks, although
there were dozens of Ultrabooks from various manufacturers on the market while Apple only offered 11-inch and 13-inch
models of the MacBook Air. In certain countries, particularly the United States, the Air has outsold Windows Ultrabooks to become the best-selling ultra-portable. While several Ultrabooks were able to claim individual distinctions such as being
the lightest or thinnest, the Air was regarded by reviewers as the best all-around subnotebook/ultraportable in regard
to "OS X experience, full keyboard, superior trackpad, Thunderbolt connector and the higher-quality, all-aluminum
unibody construction". The Air was among the first to receive Intel's latest CPUs before other PC manufacturers,
and OS X has gained market share on Windows in recent years. Through July 1, 2013, the MacBook Air took in 56 percent of all Ultrabook sales in the United States, despite being one of the higher-priced competitors; several better-equipped Ultrabooks were more expensive still. The competitive pricing of MacBooks was particularly effective when rivals charged more for seemingly equivalent Ultrabooks, as this contradicted the established "elitist aura" perception that Apple products cost more but were of higher quality, making the most expensive Ultrabooks seem exorbitant however valid their higher prices were.
Anti-aircraft warfare or counter-air defence is defined by NATO as "all measures designed to nullify or reduce the effectiveness
of hostile air action." They include ground- and air-based weapon systems, associated sensor systems, command and
control arrangements and passive measures (e.g. barrage balloons). It may be used to protect naval, ground, and air
forces in any location. However, for most countries the main effort has tended to be 'homeland defence'. NATO refers
to airborne air defence as counter-air and naval air defence as anti-aircraft warfare. Missile defence is an extension
of air defence as are initiatives to adapt air defence to the task of intercepting any projectile in flight. Non-English
terms for air defence include the German Flak (Fliegerabwehrkanone, "aircraft defence cannon", also cited as Flugabwehrkanone),
whence English flak, and the Russian term Protivovozdushnaya oborona (Cyrillic: Противовозду́шная оборо́на), a literal
translation of "anti-air defence", abbreviated as PVO. In Russian the AA systems are called zenitnye (i.e. "pointing
to zenith") systems (guns, missiles etc.). In French, air defence is called DCA (Défense contre les aéronefs, "aéronef"
being the generic term for all kinds of airborne devices (airplane, airship, balloon, missile, rocket, etc.)). Initially
sensors were optical and acoustic devices developed during the First World War and continued into the 1930s, but
were quickly superseded by radar, which in turn was supplemented by optronics in the 1980s. Command and control remained
primitive until the late 1930s, when Britain created an integrated system for Air Defence of Great Britain (ADGB) that linked the ground-based air
defence of the army's AA Command, although field-deployed air defence relied on less sophisticated arrangements.
NATO later called these arrangements an "air defence ground environment", defined as "the network of ground radar
sites and command and control centres within a specific theatre of operations which are used for the tactical control
of air defence operations". The most extreme case was the Soviet Union, and this model may still be followed in some
countries: it was a separate service, on a par with the navy or ground force. In the Soviet Union this was called
Voyska PVO, and had both fighter aircraft and ground-based systems. This was divided into two arms, PVO Strany, the
Strategic Air defence Service responsible for Air Defence of the Homeland, created in 1941 and becoming an independent
service in 1954, and PVO SV, Air Defence of the Ground Forces. Subsequently these became part of the air force and ground forces respectively. On 30 September 1915, troops of the Serbian Army observed three enemy aircraft approaching
Kragujevac. Soldiers shot at them with shotguns and machine-guns but failed to prevent them from dropping 45 bombs
over the city, hitting military installations, the railway station and many other, mostly civilian, targets in the
city. During the bombing raid, private Radoje Ljutovac fired his cannon at the enemy aircraft and successfully shot
one down. It crashed in the city and both pilots died from their injuries. The cannon Ljutovac used was not designed as an anti-aircraft gun; it was a slightly modified Turkish cannon captured during the First Balkan War in 1912.
This was the first occasion in military history that a military aircraft was shot down with ground-to-air fire. AA
gunnery was a difficult business. The problem was of successfully aiming a shell to burst close to its target's future
position, with various factors affecting the shells' predicted trajectory. This was called deflection gun-laying: 'off-set' angles for range and elevation were set on the gunsight and updated as the target moved. In this method,
when the sights were on the target, the barrel was pointed at the target's future position. Range and height of the
target determined fuse length. The difficulties increased as aircraft performance improved. World War I demonstrated
that aircraft could be an important part of the battlefield, but in some nations it was the prospect of strategic
air attack that was the main issue, presenting both a threat and an opportunity. The experience of four years of
air attacks on London by Zeppelins and Gotha G.V bombers had particularly influenced the British and was a major driver, if not the main one, for forming an independent air force. As the capabilities of aircraft and their engines improved,
it was clear that their role in future war would be even more critical as their range and weapon load grew. However,
in the years immediately after World War I the prospect of another major war seemed remote, particularly in Europe
where the most militarily capable nations were, and little financing was available. From the early 1930s eight countries developed radar; these developments were sufficiently advanced by the late 1930s for development work on sound-locating
acoustic devices to be generally halted, although equipment was retained. Furthermore, in Britain the volunteer Observer
Corps formed in 1925 provided a network of observation posts to report hostile aircraft flying over Britain. Initially
radar was used for airspace surveillance to detect approaching hostile aircraft. However, the German Würzburg radar
was capable of providing data suitable for controlling AA guns and the British AA No 1 Mk 1 GL radar was designed
to be used on AA gun positions. Until this time the British, at RAF insistence, continued their World War I use of
machine guns, and introduced twin MG mountings for AAAD. The army was forbidden from considering anything larger
than .50-inch. However, in 1935 their trials showed that the minimum effective round was an impact-fused 2 lb HE
shell. The following year they decided to adopt the Bofors 40 mm and a twin barrel Vickers 2-pdr (40 mm) on a modified
naval mount. The air-cooled Bofors was vastly superior for land use, being much lighter than the water-cooled pom-pom,
and UK production of the Bofors 40 mm was licensed. The Predictor AA No 3, as the Kerrison Predictor was officially
known, was introduced with it. During the 1930s solid fuel rockets were under development in the Soviet Union and
Britain. In Britain the interest was in anti-aircraft fire; it quickly became clear that guidance would be required for precision. However, rockets, or 'unrotated projectiles' as they were called, could be used for anti-aircraft
barrages. A 2-inch rocket using HE or wire obstacle warheads was introduced first to deal with low-level or dive
bombing attacks on smaller targets such as airfields. The 3-inch was in development at the end of the inter-war period.
The British had already arranged licence building of the Bofors 40 mm, and introduced these into service. These had
the power to knock down aircraft of any size, yet were light enough to be mobile and easily swung. The gun became
so important to the British war effort that they even produced a movie, The Gun, that encouraged workers on the assembly
line to work harder. The Imperial measurement production drawings the British had developed were supplied to the
Americans, who produced their own (unlicensed) copy of the 40 mm at the start of the war, moving to licensed production
in mid-1941. The interceptor aircraft (or simply interceptor) is a type of fighter aircraft designed specifically
to intercept and destroy enemy aircraft, particularly bombers, usually relying on high speed and altitude capabilities.
A number of jet interceptors such as the F-102 Delta Dagger, the F-106 Delta Dart, and the MiG-25 were built in the
period starting after the end of World War II and ending in the late 1960s, when they became less important due to
the shifting of the strategic bombing role to ICBMs. Invariably the type is differentiated from other fighter aircraft
designs by higher speeds and shorter operating ranges, as well as much reduced ordnance payloads. Another potential
weapon system for anti-aircraft use is the laser. Although air planners have imagined lasers in combat since the
late 1960s, only the most modern laser systems are currently reaching what could be considered "experimental usefulness".
In particular the Tactical High Energy Laser can be used in the anti-aircraft and anti-missile role. If current developments
continue, some believe it is reasonable to suggest that lasers will play a major role in air defence starting
in the next ten years. Area air defence, the air defence of a specific area or location (as opposed to point defence), has historically been operated by both armies (Anti-Aircraft Command in the British Army, for instance) and Air
Forces (the United States Air Force's CIM-10 Bomarc). Area defence systems have medium to long range and can be made
up of various other systems and networked into an area defence system (in which case it may be made up of several
short range systems combined to effectively cover an area). An example of area defence is the defence of Saudi Arabia
and Israel by MIM-104 Patriot missile batteries during the first Gulf War, where the objective was to cover populated
areas. The term air defence was probably first used by Britain when Air Defence of Great Britain (ADGB) was created
as a Royal Air Force command in 1925. However, arrangements in the UK were also called 'anti-aircraft', abbreviated
as AA, a term that remained in general use into the 1950s. After the First World War it was sometimes prefixed by
'Light' or 'Heavy' (LAA or HAA) to classify a type of gun or unit. Nicknames for anti-aircraft guns include AA, AAA
or triple-A, an abbreviation of anti-aircraft artillery; "ack-ack" (from the spelling alphabet used by the British
for voice transmission of "AA"); and archie (a World War I British term probably coined by Amyas Borton and believed
to derive via the Royal Flying Corps from the music-hall comedian George Robey's line "Archibald, certainly not!").
The essence of air defence is to detect hostile aircraft and destroy them. The critical issue is to hit a target
moving in three-dimensional space; an attack must not only match these three coordinates, but must do so at the time
the target is at that position. This means that projectiles either have to be guided to hit the target, or aimed
at the predicted position of the target at the time the projectile reaches it, taking into account speed and direction
of both the target and the projectile. Passive air defence is defined by NATO as "Passive measures taken for the
physical defence and protection of personnel, essential installations and equipment in order to minimize the effectiveness
of air and/or missile attack". It remains a vital activity by ground forces and includes camouflage and concealment
to avoid detection by reconnaissance and attacking aircraft. Measures such as camouflaging important buildings were
common in the Second World War. During the Cold War the runways and taxiways of some airfields were painted green.
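The deflection gun-laying problem described earlier (pointing the barrel at the target's predicted future position so the shell and aircraft arrive there at the same time) can be sketched as a simple intercept calculation. This is an illustrative model only, under strong simplifying assumptions: a constant-speed shell on a straight line and a target flying a straight, constant-velocity course; the function name and all figures are hypothetical, not drawn from the article.

```python
import math

def intercept_point(target_pos, target_vel, shell_speed):
    """Deflection gun-laying as an intercept problem: find where a
    constant-speed shell meets a target flying a straight line at
    constant velocity.  Returns the aim point, or None if no
    intercept is possible.  (Sketch only; real AA prediction also
    had to account for ballistics, wind and fuse behaviour.)"""
    px, py, pz = target_pos          # target position relative to the gun (m)
    vx, vy, vz = target_vel          # target velocity (m/s)
    # Solve |p + v*t| = s*t  =>  (|v|^2 - s^2) t^2 + 2(p.v) t + |p|^2 = 0
    a = vx * vx + vy * vy + vz * vz - shell_speed ** 2
    b = 2.0 * (px * vx + py * vy + pz * vz)
    c = px * px + py * py + pz * pz
    if abs(a) < 1e-9:                # shell exactly as fast as the target
        t = -c / b if abs(b) > 1e-9 else -1.0
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None              # shell too slow to ever catch the target
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        valid = [r for r in roots if r > 0]
        t = min(valid) if valid else -1.0
    if t <= 0:
        return None
    # Aim at the target's predicted position at time of flight t.
    return (px + vx * t, py + vy * t, pz + vz * t)

# Example: target 3 km away, crossing at 100 m/s; shell speed 500 m/s.
aim = intercept_point((3000.0, 0.0, 0.0), (0.0, 100.0, 0.0), 500.0)
```

With these example numbers the time of flight is about 6.1 seconds, so the gun must lead the target by roughly 612 m along its track: exactly the kind of 'off-set' angle the gunsight encoded.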
The basic air defence unit is typically a battery with 2 to 12 guns or missile launchers and fire control elements.
These batteries, particularly with guns, usually deploy in a small area, although batteries may be split; this is
usual for some missile systems. SHORAD missile batteries often deploy across an area with individual launchers several
kilometres apart. When MANPADS is operated by specialists, batteries may have several dozen teams deploying separately
in small sections; self-propelled air defence guns may deploy in pairs. The first issue was ammunition. Before the
war it was recognised that ammunition needed to explode in the air. Both high explosive (HE) and shrapnel were used,
mostly the former. Airburst fuses were either igniferous (based on a burning fuse) or mechanical (clockwork). Igniferous
fuses were not well suited for anti-aircraft use. The fuse length was determined by time of flight, but the burning
rate of the gunpowder was affected by altitude. The British pom-poms had only contact-fused ammunition. Zeppelins,
being hydrogen filled balloons, were targets for incendiary shells and the British introduced these with airburst
fuses, both shrapnel-type (forward projection of an incendiary 'pot') and base ejection of an incendiary stream. The British also fitted tracers to their shells for use at night. Smoke shells were also available for some AA guns; these bursts were used as targets during training. Two assumptions underpinned the British approach to HAA fire: first, that aimed fire was the primary method, enabled by predicting gun data from visually tracking the target and knowing its height; second, that the target would maintain a steady course, speed and height. HAA was to engage targets up to 24,000 feet. Mechanical, as opposed to igniferous, time fuses were required because the speed of powder burning
varied with height so fuse length was not a simple function of time of flight. Automated fire ensured a constant
rate of fire that made it easier to predict where each shell should be individually aimed. The US ended World War
I with two 3-inch AA guns and improvements were developed throughout the inter-war period. However, in 1924 work
started on a new 105 mm static mounting AA gun, but only a few were produced by the mid-1930s because by this time
work had started on the 90 mm AA gun, with mobile carriages and static mountings able to engage air, sea and ground
targets. The M1 version was approved in 1940. During the 1920s there was some work on a 4.7-inch which lapsed but was revived in 1937, leading to a new gun in 1944. In some countries, such as Britain and Germany during the Second World
War, the Soviet Union and NATO's Allied Command Europe, ground based air defence and air defence aircraft have been
under integrated command and control. However, while overall air defence may be for homeland defence including military
facilities, forces in the field, wherever they are, invariably deploy their own air defence capability if there is
an air threat. A surface-based air defence capability can also be deployed offensively to deny the use of airspace
to an opponent. After World War I the US Army started developing a dual-role (AA/ground) automatic 37 mm cannon,
designed by John M. Browning. It was standardised in 1927 as the T9 AA cannon, but trials quickly revealed that it
was worthless in the ground role. However, while the shell was a bit light (well under 2 lbs) it had a good effective
ceiling and fired 125 rounds per minute; an AA carriage was developed and it entered service in 1939. The Browning
37mm proved prone to jamming, and was eventually replaced in AA units by the Bofors 40 mm. The Bofors had attracted
attention from the US Navy, but none were acquired before 1939. Also, in 1931 the US Army worked on a mobile anti-aircraft
machine mount on the back of a heavy truck having four .30 caliber water-cooled machine guns and an optical director.
It proved unsuccessful and was abandoned. Germany's high-altitude needs were originally going to be filled by a 75
mm gun from Krupp, designed in collaboration with their Swedish counterpart Bofors, but the specifications were later
amended to require much higher performance. In response Krupp's engineers presented a new 88 mm design, the FlaK
36. First used in Spain during the Spanish Civil War, the gun proved to be one of the best anti-aircraft guns in
the world, as well as particularly deadly against light, medium, and even early heavy tanks. A plethora of anti-aircraft
gun systems of smaller calibre were available to the German Wehrmacht combined forces, and among them the 1940-origin
Flakvierling quadruple-20 mm-gun antiaircraft weapon system was one of the most often-seen weapons, seeing service
on both land and sea. The similar Allied smaller-calibre air-defence weapons systems of the American forces were
also quite capable, although they received little attention. Beyond the usual singly-mounted M2 .50 caliber machine gun atop a tank's turret, four of the ground-used "heavy barrel" (M2HB) guns were mounted together on the American Maxson firm's M45 Quadmount weapons system (a direct answer to the Flakvierling), which was often mounted on the back of a half-track to form the Half Track, M16
GMC, Anti-Aircraft. Although of less power than Germany's 20 mm systems, the typical 4 or 5 combat batteries of an
Army AAA battalion were often spread many kilometers apart from each other, rapidly attaching and detaching to larger
ground combat units to provide welcome defence from enemy aircraft. Another aspect of anti-aircraft defence was the
use of barrage balloons to act as a physical obstacle, initially to bomber aircraft over cities and later to ground
attack aircraft over the Normandy invasion fleets. The balloon, a simple blimp tethered to the ground, worked in
two ways. Firstly, it and the steel cable were a danger to any aircraft that tried to fly among them. Secondly, to
avoid the balloons, bombers had to fly at a higher altitude, which was more favorable for the guns. Barrage balloons
were limited in application, and had minimal success at bringing down aircraft, being largely immobile and passive
defences. The maximum distance at which a gun or missile can engage an aircraft is an important figure. However, many different definitions are in use, and unless the same definition is applied, the performance of different guns or missiles cannot be compared. For AA guns, only the ascending part of the trajectory can be used effectively. One term is 'ceiling': maximum ceiling is the height a projectile would reach if fired vertically. This is not practically useful in itself, as few AA guns are able to fire vertically and maximum fuse duration may be too short, but it is potentially useful as a
standard to compare different weapons. As this process continued, the missile found itself being used for more and
more of the roles formerly filled by guns. First to go were the large weapons, replaced by equally large missile
systems of much higher performance. Smaller missiles soon followed, eventually becoming small enough to be mounted
on armored cars and tank chassis. These started replacing, or at least supplanting, similar gun-based SPAAG systems
in the 1960s, and by the 1990s had replaced almost all such systems in modern armies. Man-portable missiles, MANPADS as they are known today, were introduced in the 1960s and have supplanted or even replaced the smallest guns
in most advanced armies. Unlike the heavier guns, these smaller weapons are in widespread use due to their low cost
and ability to quickly follow the target. Classic examples of autocannons and large-caliber guns are the Bofors 40 mm autocannon and the 8.8 cm FlaK 18/36 gun, designed by Krupp in collaboration with Bofors of Sweden. Artillery weapons of this sort have for the most
part been superseded by the effective surface-to-air missile systems that were introduced in the 1950s, although
they were still retained by many nations. The development of surface-to-air missiles began in Nazi Germany late in World War II with missiles such as the Wasserfall, though no working system was deployed before the war's end; these represented new attempts to increase the effectiveness of anti-aircraft systems faced with the growing threat from bombers. Land-based SAMs can be deployed from fixed installations or mobile launchers, either wheeled or tracked.
The tracked vehicles are usually armoured vehicles specifically designed to carry SAMs. Smaller boats and ships typically
have machine-guns or fast cannons, which can often be deadly to low-flying aircraft if linked to a radar-directed fire-control system for point defence. Some vessels like Aegis cruisers are as much a threat
to aircraft as any land-based air defence system. In general, naval vessels should be treated with respect by aircraft; however, the reverse is equally true. Carrier battle groups are especially well defended, as not only do they typically
consist of many vessels with heavy air defence armament but they are also able to launch fighter jets for combat
air patrol overhead to intercept incoming airborne threats. Rocket-propelled grenades can be—and often are—used against
hovering helicopters (e.g., by Somali militiamen during the Battle of Mogadishu (1993)). Firing an RPG at steep angles
poses a danger to the user, because the backblast from firing reflects off the ground. In Somalia, militia members
sometimes welded a steel plate in the exhaust end of an RPG's tube to deflect pressure away from the shooter when
shooting up at US helicopters. RPGs are used in this role only when more effective weapons are not available. The
British adopted "effective ceiling", meaning the altitude at which a gun could deliver a series of shells against
a moving target; this could be constrained by maximum fuse running time as well as the gun's capability. By the late
1930s the British definition was "that height at which a directly approaching target at 400 mph (approximately 644 km/h) can
be engaged for 20 seconds before the gun reaches 70 degrees elevation". However, effective ceiling for heavy AA guns was also affected by non-ballistic factors. Until the 1950s guns firing ballistic munitions were the standard weapon;
guided missiles then became dominant, except at the very shortest ranges. However, the type of shell or warhead and
its fuzing and, with missiles the guidance arrangement, were and are varied. Targets are not always easy to destroy;
nonetheless, damaged aircraft may be forced to abort their mission and, even if they manage to return and land in
friendly territory, may be out of action for days or permanently. Ignoring small arms and smaller machine-guns, ground-based
air defence guns have varied in calibre from 20 mm to at least 150 mm. The British recognised the need for anti-aircraft
capability a few weeks before World War I broke out; on 8 July 1914, the New York Times reported that the British
government had decided to 'dot the coasts of the British Isles with a series of towers, each armed with two quick-firing
guns of special design,' while 'a complete circle of towers' was to be built around 'naval installations' and 'at
other especially vulnerable points.' By December 1914 the Royal Naval Volunteer Reserve (RNVR) was manning AA guns
and searchlights assembled from various sources at some nine ports. The Royal Garrison Artillery (RGA) was given
responsibility for AA defence in the field, using motorised two-gun sections. The first were formally formed in November
1914. Initially they used QF 1-pounder "pom-pom" (a 37 mm version of the Maxim Gun). In Britain and some other armies,
the single artillery branch has been responsible for both home and overseas ground-based air defence, although there
was divided responsibility with the Royal Navy for air defence of the British Isles in World War I. However, during
the Second World War the RAF Regiment was formed to protect airfields everywhere, and this included light air defences.
In the later decades of the Cold War this included the United States Air Force's operating bases in the UK. However,
all ground-based air defence was removed from Royal Air Force (RAF) jurisdiction in 2004. The British Army's Anti-Aircraft
Command was disbanded in March 1955, but during the 1960s and 1970s the RAF's Fighter Command operated long-range
air-defence missiles to protect key areas in the UK. During World War II the Royal Marines also provided air defence
units; formally part of the mobile naval base defence organisation, they were handled as an integral part of the
army-commanded ground-based air defences. The British dealt with range measurement first, when it was realised that
range was the key to producing a better fuse setting. This led to the Height/Range Finder (HRF), the first model
being the Barr & Stroud UB2, a 2-metre optical coincident rangefinder mounted on a tripod. It measured the distance
to the target and the elevation angle, which together gave the height of the aircraft. These were complex instruments
and various other methods were also used. The HRF was soon joined by the Height/Fuse Indicator (HFI), which was marked
with elevation angles and height lines overlaid with fuse length curves; using the height reported by the HRF operator,
the necessary fuse length could be read off. By the early 20th century, balloon (or airship) guns for land and naval
use were attracting attention. Various types of ammunition were proposed: high explosive, incendiary, bullet-chains,
rod bullets and shrapnel. The need for some form of tracer or smoke trail was articulated. Fuzing options were also
examined, both impact and time types. Mountings were generally pedestal type, but could be on field platforms. Trials
were underway in most countries in Europe but only Krupp, Erhardt, Vickers Maxim, and Schneider had published any
information by 1910. Krupp's designs included adaptations of their 65 mm 9-pounder, a 75 mm 12-pounder, and even
a 105 mm gun. Erhardt also had a 12-pounder, while Vickers Maxim offered a 3-pounder and Schneider a 47 mm. The French
balloon gun appeared in 1910; it was an 11-pounder mounted on a vehicle, with a total uncrewed weight of 2 tons.
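The height computation performed by instruments like the HRF is simple trigonometry: slant range and elevation angle together fix the target's height. A minimal sketch, using illustrative figures rather than values from the text:

```python
import math

def aircraft_height(slant_range_m, elevation_deg):
    """Target height from slant range and elevation angle, the two
    quantities a coincident rangefinder such as the Barr & Stroud UB2
    measured. Assumes level ground and ignores earth curvature."""
    return slant_range_m * math.sin(math.radians(elevation_deg))

# Illustrative figures: 6,000 m slant range at 30 degrees elevation.
print(round(aircraft_height(6000, 30)))  # -> 3000
```

With the height known, the HFI's overlaid fuse-length curves amounted to a lookup from elevation and height to the fuse setting.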
However, since balloons were slow-moving, sights were simple, but the challenges of faster-moving airplanes were
recognised. All armies soon deployed AA guns, often based on their smaller field pieces, notably the French 75 mm
and Russian 76.2 mm, typically simply propped up on some sort of embankment to get the muzzle pointed skyward. The
British Army adopted the 13-pounder, quickly producing new mountings suitable for AA use; the 13-pdr QF 6 cwt Mk III
was issued in 1915. It remained in service throughout the war, but 18-pdr guns were lined down to take the 13-pdr
shell with a larger cartridge, producing the 13-pdr QF 9 cwt, and these proved much more satisfactory. However, in general,
these ad-hoc solutions proved largely useless. With little experience in the role, no means of measuring target
range, height or speed, and the difficulty of observing their shell bursts relative to the target, gunners proved unable
to get their fuse settings correct, and most rounds burst well below their targets. The exception to this rule was
the guns protecting spotting balloons, in which case the altitude could be accurately measured from the length of
the cable holding the balloon. The Treaty of Versailles prevented Germany from having AA weapons; Krupp's designers,
for example, joined Bofors in Sweden. Some World War I guns were retained and some covert AA training started
in the late 1920s. Germany introduced the 8.8 cm FlaK 18 in 1933; the 36 and 37 models followed with various improvements,
but ballistic performance was unchanged. In the late 1930s the 10.5 cm FlaK 38 appeared, soon followed by the 39;
this was designed primarily for static sites but had a mobile mounting, and the unit had 220 V 24 kW generators. In
1938 design started on the 12.8 cm FlaK. However, the problem of deflection settings — 'aim-off' — required knowing
the rate of change in the target's position. Both France and UK introduced tachymetric devices to track targets and
produce vertical and horizontal deflection angles. The French Brocq system was electrical; the operator entered the
target range and had displays at guns; it was used with their 75 mm. The British Wilson-Dalby gun director used a
pair of trackers and mechanical tachymetry; the operator entered the fuse length, and deflection angles were read
from the instruments. Poland's AA defences were no match for the German attack and the situation was similar in other
European countries. Significant AA warfare started with the Battle of Britain in the summer of 1940. 3.7-inch HAA
were to provide the backbone of the ground-based AA defences, although initially significant numbers of 3-inch 20 cwt
were also used. The Army's Anti-Aircraft Command, which was under the command of the Air Defence UK organisation, grew
to 12 AA divisions in 3 AA corps. 40 mm Bofors guns entered service in increasing numbers. In addition, the RAF Regiment
was formed in 1941 with responsibility for airfield air defence, eventually with the 40 mm Bofors as their main armament.
Fixed AA defences, using HAA and LAA, were established by the Army in key overseas places, notably Malta, the Suez Canal
and Singapore. Britain had successfully tested a new HAA gun, the 3.6-inch, in 1918. In 1928 the 3.7-inch became the preferred
solution, but it took six years to gain funding. Production of the QF 3.7-inch (94 mm) began in 1937; this gun was
used both on mobile carriages with the field army and transportable guns on fixed mountings for static positions.
At the same time the Royal Navy adopted a new 4.5-inch (114 mm) gun in a twin turret, which the army adopted in simplified
single-gun mountings for static positions, mostly around ports where naval ammunition was available. However, the
performance of both 3.7 and 4.5-in guns was limited by their standard fuse No 199, with a 30-second running time,
although a new mechanical time fuse giving 43 seconds was nearing readiness. In 1939 a Machine Fuse Setter was introduced
to eliminate manual fuse setting. Service trials demonstrated another problem however: that ranging and tracking
the new high-speed targets was almost impossible. At short range, the apparent target area is relatively large, the
trajectory is flat and the time of flight is short, allowing gunners to correct the lead by watching the tracers. At long range,
the aircraft remains in firing range for a long time, so the necessary calculations can in theory be done by slide
rules, though, because small errors in distance cause large errors in shell fall height and detonation time, exact
ranging is crucial. For the ranges and speeds that the Bofors worked at, neither answer was good enough. Rheinmetall
in Germany developed an automatic 20 mm in the 1920s and Oerlikon in Switzerland had acquired the patent to an automatic
20 mm gun designed in Germany during World War I. Germany introduced the rapid-fire 2 cm FlaK 30 and later in the
decade it was redesigned by Mauser-Werke and became the 2 cm FlaK 38. Nevertheless, while the 20 mm was better than a
machine gun, and its mounting on a very small trailer made it easy to move, its effectiveness was limited. Germany therefore
added a 3.7 cm. The first, the 3.7 cm FlaK 18 developed by Rheinmetall in the early 1930s, was basically an enlarged
2 cm FlaK 30. It was introduced in 1935 and production stopped the following year. A redesigned gun, the 3.7 cm FlaK 36,
entered service in 1938; it too had a two-wheel carriage. However, by the mid-1930s the Luftwaffe realised that there
was still a coverage gap between 3.7 cm and 8.8 cm guns. They started development of a 5 cm gun on a four-wheel carriage.
The Germans developed massive reinforced concrete blockhouses, some more than six stories high, which were known
as Hochbunker ("high bunkers") or Flaktürme (flak towers), on which they placed anti-aircraft artillery. Those in cities
attacked by the Allied land forces became fortresses. Several in Berlin were some of the last buildings to fall to
the Soviets during the Battle of Berlin in 1945. The British built structures such as the Maunsell Forts in the North
Sea, the Thames Estuary and other tidal areas upon which they based guns. After the war most were left to rot. Some
were outside territorial waters, and had a second life in the 1960s as platforms for pirate radio stations. The developments
during World War II continued for a short time into the post-war period as well. In particular the U.S. Army set
up a huge air defence network around its larger cities based on radar-guided 90 mm and 120 mm guns. US efforts continued
into the 1950s with the 75 mm Skysweeper system, an almost fully automated system including the radar, computers,
power, and auto-loading gun on a single powered platform. The Skysweeper replaced all smaller guns then in use in
the Army, notably the 40 mm Bofors. In Europe NATO's Allied Command Europe developed an integrated air defence system,
NATO Air Defence Ground Environment (NADGE), which later became the NATO Integrated Air Defence System. For lighter guns,
the solution was automation, in the form of a mechanical computer, the Kerrison Predictor. Operators kept it pointed at the target,
and the Predictor then calculated the proper aim point automatically and displayed it as a pointer mounted on the
gun. The gun operators simply followed the pointer and loaded the shells. The Kerrison was fairly simple, but it
pointed the way to future generations that incorporated radar, first for ranging and later for tracking. Similar
predictor systems were introduced by Germany during the war, also adding radar ranging as the war progressed. Although
the firearms used by the infantry, particularly machine guns, can be used to engage low altitude air targets, on
occasion with notable success, their effectiveness is generally limited and the muzzle flashes reveal infantry positions.
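The deflection ('aim-off') problem that tachymetric directors and the Kerrison Predictor solved mechanically reduces, in the simplest constant-velocity case, to a triangle of velocities. A sketch under simplified assumptions, with hypothetical speeds:

```python
import math

def lead_angle_deg(target_speed_ms, crossing_angle_deg, shell_speed_ms):
    """Deflection angle for a constant-speed target: the target's velocity
    component across the line of sight must be matched by the shell's
    component along the lead direction. Assumes flat fire and constant
    shell speed -- a simplification; real predictors also handled
    ballistics and fuse setting."""
    across = target_speed_ms * math.sin(math.radians(crossing_angle_deg))
    return math.degrees(math.asin(across / shell_speed_ms))

# Hypothetical case: a 100 m/s aircraft crossing at right angles to the
# line of sight, engaged with an 850 m/s shell.
print(round(lead_angle_deg(100, 90, 850), 1))  # -> 6.8
```

Computing this continuously, by hand, against a fast target was exactly what unaided gunners could not do; the predictor turned the tracking rates into deflection angles automatically.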
Speed and altitude of modern jet aircraft limit target opportunities, and critical systems may be armored in aircraft
designed for the ground attack role. Adaptations of the standard autocannon, originally intended for air-to-ground
use, and heavier artillery systems were commonly used for most anti-aircraft gunnery, starting with standard pieces
on new mountings, and evolving to specially designed guns with much higher performance prior to World War II. Some
nations started rocket research before World War II, including for anti-aircraft use. Further research started during
the war. The first step was unguided missile systems like the British 2-inch RP and 3-inch rockets, which were fired in large
numbers from Z batteries and were also fitted to warships. The firing of one of these devices during an air raid
is suspected to have caused the Bethnal Green disaster in 1943. Facing the threat of Japanese kamikaze attacks, the
British and US developed surface-to-air rockets like the British Stooge and the American Lark as countermeasures, but
none of them was ready by the end of the war. German missile research was the most advanced of the war, as the
Germans put considerable effort into the research and development of rocket systems for all purposes. Among them were
several guided and unguided systems. Unguided systems included the Fliegerfaust (literally "aircraft fist"), the
first MANPADS. Guided systems included several sophisticated radio-, wire-, or radar-guided missiles like the Wasserfall
("waterfall") rocket. Owing to the severe war situation for Germany, all of those systems were produced only in small
numbers, and most of them were used only by training or trial units. The introduction of the guided missile resulted
in a significant shift in anti-aircraft strategy. Although Germany had been desperate to introduce anti-aircraft
missile systems, none became operational during World War II. Following several years of post-war development, however,
these systems began to mature into viable weapons systems. The US started an upgrade of their defences using the
Nike Ajax missile, and soon the larger anti-aircraft guns disappeared. The same thing occurred in the USSR after
the introduction of their SA-2 Guideline systems. The future of projectile-based weapons may be found in the railgun.
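A railgun round carries no explosive; its effect is purely kinetic, governed by E = 1/2 m v^2. Taking the 2008 US Navy test figures cited in this article (10 MJ of energy at 5,600 mph) at face value, the implied projectile mass can be back-computed; the mass itself is an inference, not a value given in the text:

```python
# Back-compute the projectile mass implied by the reported railgun test
# figures: 10 MJ muzzle energy at 5,600 mph. The mass is an inference
# from E = 1/2 m v^2, not a stated specification.
MPH_TO_MS = 0.44704

energy_j = 10e6                    # 10 megajoules
velocity_ms = 5600 * MPH_TO_MS     # about 2,500 m/s

implied_mass_kg = 2 * energy_j / velocity_ms ** 2
print(round(implied_mass_kg, 1))   # -> 3.2
```

A few kilograms at roughly 2.5 km/s is why such a projectile is compared to a cruise missile warhead despite carrying no explosive.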
Tests are currently underway on systems that could create as much damage as a Tomahawk missile, but
at a fraction of the cost. In February 2008 the US Navy tested a railgun; it fired a shell at 5,600 miles (9,000
km) per hour using 10 megajoules of energy. Its expected performance is a muzzle velocity over 13,000 miles (21,000 km)
per hour, accurate enough to hit a 5-metre target from 200 nautical miles (370 km) away while firing 10 shots
per minute. It is expected to be ready between 2020 and 2025. These systems, while currently designed
for static targets, would need only the ability to be retargeted to become the next generation of AA systems. The ammunition
and shells fired by these weapons are usually fitted with different types of fuses (barometric, time-delay, or proximity)
to explode close to the airborne target, releasing a shower of fast metal fragments. For shorter-range work, a lighter
weapon with a higher rate of fire is required to increase the hit probability against a fast airborne target. Weapons between
20 mm and 40 mm calibre have been widely used in this role. Smaller weapons, typically .50-calibre or even 8 mm rifle-calibre
guns, have been used in the smallest mounts. Air defence in naval tactics, especially within a carrier group,
is often built around a system of concentric layers with the aircraft carrier at the centre. The outer layer will
usually be provided by the carrier's aircraft, specifically its AEW&C aircraft combined with the CAP. If an attacker
is able to penetrate this layer, then the next layers would come from the surface-to-air missiles carried by the
carrier's escorts; the area-defence missiles, such as the RIM-67 Standard, with a range of up to 100 nmi, and the
point-defence missiles, like the RIM-162 ESSM, with a range of up to 30 nmi. Finally, virtually every modern warship
will be fitted with small-calibre guns, including a CIWS, which is usually a radar-controlled Gatling gun of between
20 mm and 30 mm calibre capable of firing several thousand rounds per minute. If current trends continue, missiles
will replace gun systems completely in "first line" service. Guns are being increasingly pushed
into specialist roles, such as the Dutch Goalkeeper CIWS, which uses the GAU-8 Avenger 30 mm seven-barrel Gatling
gun for last-ditch anti-missile and anti-aircraft defence. Even this formerly front-line weapon is currently being
replaced by new missile systems, such as the RIM-116 Rolling Airframe Missile, which is smaller, faster, and allows
for mid-flight course correction (guidance) to ensure a hit. To bridge the gap between guns and missiles, Russia
in particular produces the Kashtan CIWS, which uses both guns and missiles for final defence. Two six-barrelled 30
mm Gsh-6-30 Gatling guns and 9M311 surface-to-air missiles provide its defensive capability. Most modern air
defence systems are fairly mobile. Even the larger systems tend to be mounted on trailers and are designed to be
fairly quickly broken down or set up. In the past, this was not always the case. Early missile systems were cumbersome
and required much infrastructure; many could not be moved at all. With the diversification of air defence there has
been much more emphasis on mobility. Most modern systems are usually either self-propelled (i.e. guns or missiles
are mounted on a truck or tracked chassis) or easily towed. Even systems that consist of many components (transporter/erector/launchers,
radars, command posts etc.) benefit from being mounted on a fleet of vehicles. In general, a fixed system can be
identified, attacked and destroyed whereas a mobile system can show up in places where it is not expected. Soviet
systems especially concentrate on mobility, after the lessons learnt in the Vietnam War.
For more information on this part of the conflict, see SA-2 Guideline. Most Western and Commonwealth militaries integrate
air defence purely with the traditional services of the military (i.e. army, navy and air force), as a separate
arm or as part of artillery. In the United States Army for instance, air defence is part of the artillery arm, while
in the Pakistan Army, it was split off from Artillery to form a separate arm of its own in 1990. This is in contrast
to some (largely communist or ex-communist) countries where not only are there provisions for air defence in the
army, navy and air force but there are specific branches that deal only with the air defence of territory, for example,
the Soviet PVO Strany. The USSR also had a separate strategic rocket force in charge of nuclear intercontinental
ballistic missiles. Armies typically have air defence in depth, from integral MANPADS such as the RBS 70, Stinger
and Igla at smaller force levels up to army-level missile defence systems such as Angara and Patriot. Often, the
high-altitude long-range missile systems force aircraft to fly at low level, where anti-aircraft guns can bring them
down. As well as the small and large systems, for effective air defence there must be intermediate systems. These
may be deployed at regiment-level and consist of platoons of self-propelled anti-aircraft platforms, whether they
are self-propelled anti-aircraft guns (SPAAGs), integrated air-defence systems like Tunguska or all-in-one surface-to-air
missile platforms like Roland or SA-8 Gecko. Israel and the US Air Force, in conjunction with the members of NATO,
have developed significant tactics for air defence suppression. Dedicated weapons such as anti-radiation missiles
and advanced electronics intelligence and electronic countermeasures platforms seek to suppress or negate the effectiveness
of an opposing air-defence system. It is an arms race; as better jamming, countermeasures and anti-radiation weapons
are developed, so are better SAM systems with ECCM capabilities and the ability to shoot down anti-radiation missiles
and other munitions aimed at them or the targets they are defending. NATO defines anti-aircraft warfare (AAW) as
"measures taken to defend a maritime force against attacks by airborne weapons launched from aircraft, ships, submarines
and land-based sites." In some armies the term All-Arms Air Defence (AAAD) is used for air defence by non-specialist
troops. Other terms from the late 20th century include GBAD (Ground Based AD) with related terms SHORAD (Short Range
AD) and MANPADS ("Man Portable AD Systems": typically shoulder-launched missiles). Anti-aircraft missiles are variously
called surface-to-air missiles (abbreviated and pronounced "SAM") or surface-to-air guided weapons (SAGW). Throughout
the 20th century air defence was one of the fastest-evolving areas of military technology, responding to the evolution
of aircraft and exploiting various enabling technologies, particularly radar, guided missiles and computing (initially
electromechanical analog computing from the 1930s on, as with equipment described below). Air defence evolution covered
the areas of sensors and technical fire control, weapons, and command and control. At the start of the 20th century
these were either very primitive or non-existent. Batteries are usually grouped into battalions or equivalent. In
the field army a light gun or SHORAD battalion is often assigned to a manoeuvre division. Heavier guns and long-range
missiles may be in air-defence brigades and come under corps or higher command. Homeland air defence may have a full
military structure. For example, the UK's Anti-Aircraft Command, commanded by a full British Army general, was part
of ADGB. At its peak in 1941–42 it comprised three AA corps with 12 AA divisions between them. German air attacks
on the British Isles increased in 1915 and the AA efforts were deemed somewhat ineffective, so a Royal Navy gunnery
expert, Admiral Sir Percy Scott, was appointed to make improvements, particularly an integrated AA defence for London.
The air defences were expanded with more RNVR AA guns, 75 mm and 3-inch, the pom-poms being ineffective. The naval
3-inch was also adopted by the army as the QF 3-inch 20 cwt (76 mm); a new field mounting was introduced in 1916. Since
most attacks were at night, searchlights were soon used, and acoustic methods of detection and locating were developed.
By December 1916 there were 183 AA Sections defending Britain (most with the 3-inch), 74 with the BEF in France and
10 in the Middle East. As aircraft started to be used against ground targets on the battlefield, the AA guns could
not be traversed quickly enough at close targets and, being relatively few, were not always in the right place (and
were often unpopular with other troops), so they changed positions frequently. Soon the forces were adding various machine-gun
based weapons mounted on poles. These short-range weapons proved more deadly, and the "Red Baron" is believed to
have been shot down by an anti-aircraft Vickers machine gun. When the war ended, it was clear that the increasing
capabilities of aircraft would require better means of acquiring targets and aiming at them. Nevertheless, a pattern
had been set: anti-aircraft weapons would be based around heavy weapons attacking high-altitude targets and lighter
weapons for use when they came to lower altitudes. In 1925 the British adopted a new instrument developed by Vickers.
It was a mechanical analogue computer, the Predictor AA No 1. Given the target height, its operators tracked the target
and the predictor produced bearing, quadrant elevation and fuse setting. These were passed electrically to the guns
where they were displayed on repeater dials to the layers who 'matched pointers' (target data and the gun's actual
data) to lay the guns. This system of repeater electrical dials built on the arrangements introduced by British coast
artillery in the 1880s, and coast artillery was the background of many AA officers. Similar systems were adopted
in other countries; for example, the later Sperry device, designated M3A3 in the US, was also used by Britain as
the Predictor AA No 2. Height finders were also increasing in size: in Britain, the World War I Barr & Stroud
UB2 (7-foot optical base) was replaced by the UB7 (9-foot optical base) and the UB10 (18-foot optical base, only
used on static AA sites). Goertz in Germany and Levallois in France produced 5 metre instruments. However, in most
countries the main effort in HAA guns until the mid-1930s was improving existing ones, although various new designs
were on drawing boards. After the Dambusters raid in 1943 an entirely new system was developed that was required
to knock down any low-flying aircraft with a single hit. The first attempt to produce such a system used a 50 mm
gun, but this proved inaccurate and a new 55 mm gun replaced it. The system used a centralised control system including
both search and targeting radar, which calculated the aim point for the guns after considering windage and ballistics,
and then sent electrical commands to the guns, which used hydraulics to point themselves at high speeds. Operators
simply fed the guns and selected the targets. This system, modern even by today's standards, was in late development
when the war ended. AAA battalions were also used to help suppress ground targets. Their larger 90 mm M3 gun would
prove, as did the eighty-eight, to make an excellent anti-tank gun as well, and was widely used late in the war in
this role. Also available to the Americans at the start of the war was the 120 mm M1 "stratosphere gun", which
was the most powerful AA gun, with an impressive 60,000 ft (18 km) altitude capability. No 120 mm M1 was ever fired at
an enemy aircraft. The 90 mm and 120 mm guns would continue to be used into the 1950s. The Allies' most advanced
technologies were showcased by the anti-aircraft defence against the German V-1 cruise missiles (V stands for Vergeltungswaffe,
"retaliation weapon"). The 419th and 601st Antiaircraft Gun Battalions of the US Army were first allocated to the
Folkestone-Dover coast to defend London, and then moved to Belgium to become part of the "Antwerp X" project. With
the liberation of Antwerp, the port city immediately became the highest priority target, and received the largest
number of V-1 and V-2 missiles of any city. The smallest tactical unit of the operation was a gun battery consisting
of four 90 mm guns firing shells equipped with a radio proximity fuse. Incoming targets were acquired and automatically
tracked by SCR-584 radar, developed at the MIT Rad Lab. Output from the gun-laying radar was fed to the M-9 director,
an electronic analog computer developed at Bell Laboratories to calculate the lead and elevation corrections for
the guns. With the help of these three technologies, close to 90% of the V-1 missiles, on track to the defence zone
around the port, were destroyed. In the 1982 Falklands War, the Argentine armed forces deployed the newest West European
weapons, including the Oerlikon GDF-002 35 mm twin cannon and the Roland SAM. The Rapier missile system was the primary
GBAD system, used by both British artillery and the RAF Regiment, and a few brand-new FIM-92 Stingers were used by British
special forces. Both sides also used the Blowpipe missile. British naval missiles used included the longer-range Sea Dart
and older Sea Slug systems and the Sea Cat and new Sea Wolf short-range systems. Machine guns in AA mountings
were used both ashore and afloat. Larger SAMs may be deployed in fixed launchers, but can be towed/re-deployed at
will. SAMs launched by individuals are known in the United States as Man-Portable Air Defence Systems (MANPADS).
MANPADS of the former Soviet Union have been exported around the world, and can be found in use by many armed forces.
Targets for non-MANPADS SAMs will usually be acquired by air-search radar, then tracked before/while a SAM is "locked on"
and then fired. Potential targets, if they are military aircraft, will be identified as friend or foe before being
engaged. Developments in the latest, relatively cheap short-range missiles have begun to replace autocannons
in this role. However, as stealth technology grows, so does anti-stealth technology. Multiple transmitter radars
such as those from bistatic radars and low-frequency radars are said to have the capabilities to detect stealth aircraft.
Advanced forms of thermographic cameras, such as those that incorporate QWIPs, would be able to optically see a stealth
aircraft regardless of the aircraft's RCS. In addition, side-looking radars, high-powered optical satellites, and
sky-scanning, high-aperture, high-sensitivity radars such as radio telescopes would all be able to narrow down the
location of a stealth aircraft under certain parameters. The newest SAMs have a claimed ability to detect
and engage stealth targets, the most notable being the S-400, which is claimed to be able to detect a target
with a 0.05-square-metre RCS from 90 km away.
Sanskrit (/ˈsænskrɪt/; Sanskrit: saṃskṛtam [səmskr̩t̪əm] or saṃskṛta, originally saṃskṛtā vāk, "refined speech") is the primary
sacred language of Hinduism, a philosophical language in Buddhism, Hinduism, Sikhism and Jainism, and a literary
language that was in use as a lingua franca in Greater India. It is a standardised dialect of Old Indo-Aryan, originating
as Vedic Sanskrit and tracing its linguistic ancestry back to Proto-Indo-Iranian and Proto-Indo-European. Today it
is listed as one of the 22 scheduled languages of India and is an official language of the state of Uttarakhand.
As one of the oldest Indo-European languages for which substantial written documentation exists, Sanskrit holds a
prominent position in Indo-European studies. Over 90 weeklies, fortnightlies and quarterlies are published in Sanskrit.
Sudharma, a daily newspaper in Sanskrit, has been published out of Mysore, India, since 1970, while Sanskrit Vartman
Patram and Vishwasya Vrittantam started in Gujarat during the last five years. Since 1974, there has been a short
daily news broadcast on state-run All India Radio. These broadcasts are also made available on the internet on AIR's
website. Sanskrit news is broadcast on TV and on the internet through the DD National channel at 6:55 AM IST. Sanskrit
linguist Madhav Deshpande says that when the term "Sanskrit" arose it was not thought of as a specific language set
apart from other languages, but rather as a particularly refined or perfected manner of speaking. Knowledge of Sanskrit
was a marker of social class and educational attainment in ancient India, and the language was taught mainly to members
of the higher castes through the close analysis of grammarians (Vyākaraṇins) such as Pāṇini and Patanjali, who exhorted the use of proper
Sanskrit at all times, especially during ritual. Sanskrit, as the learned language of Ancient India, thus existed
alongside the vernacular Prakrits, which were Middle Indo-Aryan languages. However, linguistic change led to an eventual
loss of mutual intelligibility. Brahmi evolved into a multiplicity of Brahmic scripts, many of which were used to
write Sanskrit. Roughly contemporary with Brahmi, the Kharosthi script was used in the northwest of the subcontinent. Sometime
between the fourth and eighth centuries, the Gupta script, derived from Brahmi, became prevalent. Around the eighth
century, the Śāradā script evolved out of the Gupta script. The latter was displaced in its turn by Devanagari in
the 11th or 12th century, with intermediary stages such as the Siddhaṃ script. In East India, the Bengali alphabet,
and, later, the Odia alphabet, were used. Many Sanskrit loanwords are also found in Austronesian languages, such
as Javanese, particularly the older form in which nearly half the vocabulary is borrowed. Other Austronesian languages,
such as traditional Malay and modern Indonesian, also derive much of their vocabulary from Sanskrit, albeit to a
lesser extent, with a larger proportion derived from Arabic. Similarly, Philippine languages such as Tagalog have
some Sanskrit loanwords, although more are derived from Spanish. A Sanskrit loanword encountered in many Southeast
Asian languages is the word bhāṣā, or spoken language, which is used to refer to the names of many languages. For
nearly 2000 years, Sanskrit was the language of a cultural order that exerted influence across South Asia, Inner
Asia, Southeast Asia, and to a certain extent East Asia. A significant form of post-Vedic Sanskrit is found in the
Sanskrit of Indian epic poetry—the Ramayana and Mahabharata. The deviations from Pāṇini in the epics are generally
considered to be on account of interference from Prakrits, or innovations, and not because they are pre-Paninian.
Traditional Sanskrit scholars call such deviations ārṣa (आर्ष), meaning 'of the ṛṣis', the traditional title for
the ancient authors. In some contexts, there are also more "prakritisms" (borrowings from common speech) than in
Classical Sanskrit proper. Buddhist Hybrid Sanskrit is a literary language heavily influenced by the Middle Indo-Aryan
languages, based on early Buddhist Prakrit texts which subsequently assimilated to the Classical Sanskrit standard
in varying degrees. From the Rigveda until the time of Pāṇini (fourth century BCE) the development of the early Vedic
language can be observed in other Vedic texts: the Samaveda, Yajurveda, Atharvaveda, Brahmanas, and Upanishads. During
this time, the prestige of the language, its use for sacred purposes, and the importance attached to its correct
enunciation all served as powerful conservative forces resisting the normal processes of linguistic change. However,
there is a clear, five-level linguistic development of Vedic from the Rigveda to the language of the Upanishads and
the earliest sutras such as the Baudhayana sutras. Sheldon Pollock argues that "most observers would agree that,
in some crucial way, Sanskrit is dead". Pollock has further argued that, while Sanskrit continued to be used
in literary cultures in India, it was never adapted to express the changing forms of subjectivity and sociality as
embodied and conceptualised in the modern age. Instead, it was reduced to "reinscription and restatements" of
ideas already explored, and any creativity was restricted to hymns and verses. A notable exception is the military
references of Nīlakaṇṭha Caturdhara's 17th-century commentary on the Mahābhārata. The CBSE (Central Board of Secondary
Education) of India, along with several other state education boards, has made Sanskrit an alternative option to
the state's own official language as a second or third language choice in the schools it governs. In such schools,
learning Sanskrit is an option for grades 5 to 8 (Classes V to VIII). This is true of most schools affiliated with
the ICSE board, especially in those states where the official language is Hindi. Sanskrit is also taught in traditional
gurukulas throughout India. St James Junior School in London, England, offers Sanskrit as part of the curriculum.
In the United States, since September 2009, high school students have been able to receive credits as Independent
Study or toward Foreign Language requirements by studying Sanskrit, as part of the "SAFL: Samskritam as a Foreign
Language" program coordinated by Samskrita Bharati. In Australia, the Sydney private boys' high school Sydney Grammar
School offers Sanskrit from years 7 through to 12, including for the Higher School Certificate. Sanskrit originated
in an oral society, and the oral tradition was maintained through the development of early classical Sanskrit literature.
Writing was not introduced to India until after Sanskrit had evolved into the Prakrits; when it was written, the
choice of writing system was influenced by the regional scripts of the scribes. Therefore, Sanskrit has no native
script of its own. As such, virtually all the major writing systems of South Asia have been used for the production
of Sanskrit manuscripts. Since the late 18th century, Sanskrit has been transliterated using the Latin alphabet.
The system most commonly used today is the IAST (International Alphabet of Sanskrit Transliteration), which has been
the academic standard since 1888. ASCII-based transliteration schemes have also evolved because of difficulties representing
Sanskrit characters in computer systems. These include Harvard-Kyoto and ITRANS, a transliteration scheme that is
used widely on the Internet, especially in Usenet and in email, for considerations of speed of entry as well as rendering
issues. With the wide availability of Unicode-aware web browsers, IAST has become common online. It is also possible
to type using an alphanumeric keyboard and transliterate to Devanagari using software like Mac OS X's international
support. Sanskrit has also influenced Sino-Tibetan languages through the spread of Buddhist texts in translation.
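The ASCII-based schemes described above work by mapping plain ASCII characters onto IAST's diacritic letters. A minimal sketch of such a converter, assuming only the single-character substitutions of the Harvard-Kyoto scheme (real converters must also handle multi-character sequences and Devanagari output):

```python
# Sketch of a Harvard-Kyoto -> IAST converter, covering only the
# single-character substitutions; everything else passes through unchanged.
HK_TO_IAST = {
    "A": "\u0101",  # aa  -> ā
    "I": "\u012b",  # ii  -> ī
    "U": "\u016b",  # uu  -> ū
    "R": "\u1e5b",  # vocalic r -> ṛ
    "T": "\u1e6d",  # retroflex t -> ṭ
    "D": "\u1e0d",  # retroflex d -> ḍ
    "N": "\u1e47",  # retroflex n -> ṇ
    "G": "\u1e45",  # velar nasal -> ṅ
    "J": "\u00f1",  # palatal nasal -> ñ
    "z": "\u015b",  # palatal sibilant -> ś
    "S": "\u1e63",  # retroflex sibilant -> ṣ
    "M": "\u1e43",  # anusvara -> ṃ
    "H": "\u1e25",  # visarga -> ḥ
}

def hk_to_iast(text: str) -> str:
    """Convert a Harvard-Kyoto string to IAST, character by character."""
    return "".join(HK_TO_IAST.get(ch, ch) for ch in text)

print(hk_to_iast("saMskRta"))  # saṃskṛta
print(hk_to_iast("bhASA"))     # bhāṣā
```

Schemes like this were designed precisely so that each IAST letter has an unambiguous ASCII counterpart, which is why a plain character-by-character table suffices for most of the mapping.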
Buddhism was spread to China by Mahayana missionaries sent by Ashoka, mostly through translations of Buddhist Hybrid
Sanskrit. Many terms were transliterated directly and added to the Chinese vocabulary. Chinese words like 剎那 chànà
(Devanagari: क्षण kṣaṇa 'instantaneous period') were borrowed from Sanskrit. Many Sanskrit texts survive only in
Tibetan collections of commentaries to the Buddhist teachings, the Tengyur. Sanskrit has greatly influenced the languages
of India that grew from its vocabulary and grammatical base; for instance, Hindi is a "Sanskritised register" of
the Khariboli dialect. All modern Indo-Aryan languages, as well as Munda and Dravidian languages, have borrowed many
words either directly from Sanskrit (tatsama words), or indirectly via middle Indo-Aryan languages (tadbhava words).
Words originating in Sanskrit are estimated at roughly fifty percent of the vocabulary of modern Indo-Aryan languages,
as well as the literary forms of Malayalam and Kannada. Literary texts in Telugu are lexically Sanskrit or Sanskritised
to an enormous extent, perhaps seventy percent or more. The earliest known inscriptions in Sanskrit date to the first
century BCE. They are in the Brahmi script, which was originally used for Prakrit, not Sanskrit.
It has been described as a paradox that the first evidence of written Sanskrit occurs centuries later than that of
the Prakrit languages which are its linguistic descendants. In northern India, there are Brāhmī inscriptions dating
from the third century BCE onwards, the oldest appearing on the famous Prakrit pillar inscriptions of king Ashoka.
The earliest South Indian inscriptions in Tamil Brahmi, written in early Tamil, belong to the same period. When Sanskrit
was written down, it was first used for texts of an administrative, literary or scientific nature. The sacred texts
were preserved orally, and were set down in writing "reluctantly" (according to one commentator), and at a comparatively
late date. The Sanskrit grammatical tradition, Vyākaraṇa, one of the six Vedangas, began in the late Vedic period
and culminated in the Aṣṭādhyāyī of Pāṇini, which consists of 3990 sutras (ca. fifth century BCE). About a century
after Pāṇini (around 400 BCE), Kātyāyana composed Vārtikas on the Pāṇini sūtras. Patanjali, who lived three centuries
after Pāṇini, wrote the Mahābhāṣya, the "Great Commentary" on the Aṣṭādhyāyī and Vārtikas. Because of these three
ancient Vyākaraṇins (grammarians), this grammar is called Trimuni Vyākarana. To understand the meaning of the sutras,
Jayaditya and Vāmana wrote a commentary, the Kāsikā, in 600 CE. Pāṇinian grammar is based on 14 Shiva sutras (aphorisms),
where the whole mātrika (alphabet) is abbreviated. This abbreviation is called the Pratyāhara. Sanskrit, as defined
by Pāṇini, evolved out of the earlier Vedic form. The present form of Vedic Sanskrit can be traced back to as early
as the second millennium BCE (for Rig-vedic). Scholars often distinguish Vedic Sanskrit and Classical or "Pāṇinian"
Sanskrit as separate dialects. Though they are quite similar, they differ in a number of essential points of phonology,
vocabulary, grammar and syntax. Vedic Sanskrit is the language of the Vedas, a large collection of hymns, incantations
(Samhitas) and theological and religio-philosophical discussions in the Brahmanas and Upanishads. Modern linguists
consider the metrical hymns of the Rigveda Samhita to be the earliest, composed by many authors over several centuries
of oral tradition. The end of the Vedic period is marked by the composition of the Upanishads, which form the concluding
part of the traditional Vedic corpus; however, the early Sutras are Vedic, too, both in language and content. In
order to explain the common features shared by Sanskrit and other Indo-European languages, many scholars have proposed
the Indo-Aryan migration theory, asserting that the original speakers of what became Sanskrit arrived in what is
now India and Pakistan from the north-west some time during the early second millennium BCE. Evidence for such a
theory includes the close relationship between the Indo-Iranian tongues and the Baltic and Slavic languages, vocabulary
exchange with the non-Indo-European Uralic languages, and the nature of the attested Indo-European words for flora
and fauna. Many Sanskrit dramas also indicate that the language coexisted with Prakrits, spoken by multilingual speakers
with a more extensive education. Sanskrit speakers were almost always multilingual. In the medieval era, Sanskrit
continued to be spoken and written, particularly by learned Brahmins for scholarly communication. This was a thin
layer of Indian society, but covered a wide geography. Centres like Varanasi, Paithan, Pune and Kanchipuram had a
strong presence of teaching and debating institutions, and high classical Sanskrit was maintained until British times.
Samskrita Bharati is an organisation working for Sanskrit revival. The "All-India Sanskrit Festival" (since 2002)
holds composition contests. The 1991 Indian census reported 49,736 fluent speakers of Sanskrit. Sanskrit learning
programmes also feature on the lists of most AIR broadcasting centres. The Mattur village in central Karnataka claims
to have native speakers of Sanskrit among its population. Inhabitants of all castes learn Sanskrit starting in childhood
and converse in the language. Even the local Muslims converse in Sanskrit. Historically, the village was given by
king Krishnadevaraya of the Vijayanagara Empire to Vedic scholars and their families, while people in his kingdom
spoke Kannada and Telugu. Another effort concentrates on preserving and passing along the oral tradition of the Vedas;
www.shrivedabharathi.in is one such organisation, based in Hyderabad, that has been digitising the Vedas by recording
recitations of Vedic pandits. Orientalist scholars of the 18th century like Sir William Jones marked a wave of enthusiasm
for Indian culture and for Sanskrit. According to Thomas Trautmann, after this period of "Indomania", a certain hostility
to Sanskrit and to Indian culture in general began to assert itself in early 19th century Britain, manifested by
a neglect of Sanskrit in British academia. This was the beginning of a general push in favor of the idea that India
should be culturally, religiously and linguistically assimilated to Britain as far as possible. Trautmann considers
two separate and logically opposite sources for the growing hostility: one was "British Indophobia", which he calls
essentially a developmentalist, progressivist, liberal, and non-racial-essentialist critique of Hindu civilisation
as an aid for the improvement of India along European lines; the other was scientific racism, a theory of the English
"common-sense view" that Indians constituted a "separate, inferior and unimprovable race". Satyagraha, an opera by
Philip Glass, uses texts from the Bhagavad Gita, sung in Sanskrit. The closing credits of The Matrix Revolutions
has a prayer from the Brihadaranyaka Upanishad. The song "Cyber-raga" from Madonna's album Music includes Sanskrit
chants, and "Shanti/Ashtangi" from her Grammy-winning 1998 album Ray of Light is the ashtanga vinyasa yoga chant.
The lyrics include the mantra Om shanti. Composer John Williams featured choirs singing in Sanskrit for Indiana Jones
and the Temple of Doom and in Star Wars: Episode I – The Phantom Menace. The theme song of Battlestar Galactica 2004
is the Gayatri Mantra, taken from the Rigveda. The lyrics of "The Child In Us" by Enigma also contain Sanskrit verses.
Valencia (/vəˈlɛnsiə/; Spanish: [baˈlenθja]), or València (Valencian: [vaˈlensia]), is the capital of the autonomous community
of Valencia and the third largest city in Spain after Madrid and Barcelona, with around 800,000 inhabitants in the
administrative centre. Its urban area extends beyond the administrative city limits with a population of around 1.5
million people. Valencia is Spain's third largest metropolitan area, with a population ranging from 1.7 to 2.5 million.
The city has global city status. The Port of Valencia is the 5th busiest container port in Europe and the busiest
container port on the Mediterranean Sea. Valencia enjoyed strong economic growth over the last decade, much of it
spurred by tourism and the construction industry, with concurrent development and expansion of telecommunications
and transport. The city's economy is service-oriented, as nearly 84% of the working population is employed in service
sector occupations. However, the city still maintains an important industrial base, with 5.5% of
the population employed in this sector. Agricultural activities are still carried on in the municipality, though they
are of relatively minor importance, employing only 1.9% of the working population, with 3,973 hectares planted mostly in orchards
and citrus groves. Public transport is provided by the Ferrocarrils de la Generalitat Valenciana (FGV), which operates
the Metrovalencia and other rail and bus services. The Estació del Nord (North Station) is the main railway terminus
in Valencia. A new temporary station, Estación de València-Joaquín Sorolla, has been built on land adjacent to this
terminus to accommodate high speed AVE trains to and from Madrid, Barcelona, Seville and Alicante. Valencia Airport
is situated 9 km (5.6 mi) west of Valencia city centre. Alicante Airport is situated about 170 km (110 mi) south
of Valencia. Starting in the mid-1990s, Valencia, formerly an industrial centre, saw rapid development that expanded
its cultural and touristic possibilities, and transformed it into a newly vibrant city. Many local landmarks were
restored, including the ancient Towers of the medieval city (Serrano Towers and Quart Towers), and the San Miguel
de los Reyes monastery, which now holds a conservation library. Whole sections of the old city, for example the Carmen
Quarter, have been extensively renovated. The Paseo Marítimo, a 4 km (2 mi) long palm tree-lined promenade was constructed
along the beaches of the north side of the port (Playa Las Arenas, Playa Cabañal and Playa de la Malvarrosa). The
English held the city for 16 months and defeated several attempts to expel them. English soldiers advanced as far
as Requena on the road to Madrid. After the victory of the Bourbons at the Battle of Almansa on 25 April 1707, the
English army evacuated Valencia and Philip V ordered the repeal of the privileges of Valencia as punishment for the
kingdom's support of Charles of Austria. By the Nueva Planta decrees (Decretos de Nueva Planta) the ancient Charters
of Valencia were abolished and the city was governed by the Castilian Charter. The Bourbon forces burned important
cities like Xativa, where pictures of the Spanish Bourbons in public places are hung upside down as a protest to
this day. The capital of the Kingdom of Valencia was moved to Orihuela, an outrage to the citizens of Valencia. Philip
ordered the Cortes to meet with the Viceroy of Valencia, Cardinal Luis de Belluga, who opposed the change of capital
because of the proximity of Orihuela, a religious, cultural and now political centre, to Murcia (capital of another
viceroyalty and his diocese). Because of his hatred of the city of Orihuela, which had bombarded and looted Valencia
during the War of Succession, the cardinal resigned the viceroyalty in protest against the actions of Philip, who
finally relented and returned the capital to Valencia. The city remained in the hands of Christian troops until 1102,
when the Almoravids retook the city and restored the Muslim religion. Although the self-styled 'Emperor of All Spain',
Alfonso VI of León and Castile, drove them from the city, he was not strong enough to hold it. The Christians set
it afire before abandoning it, and the Almoravid Masdali took possession on 5 May 1109. The event was commemorated
in a poem by Ibn Khafaja in which he thanked Yusuf ibn Tashfin for the city's liberation. The declining power of the
Almoravids coincided with the rise of a new dynasty in North Africa, the Almohads, who seized control of the peninsula
from the year 1145, although their entry into Valencia was deterred by Ibn Mardanis, King of Valencia and Murcia
until 1171, at which time the city finally fell to the North Africans. The two Muslim dynasties would rule Valencia
for more than a century. The 15th century was a time of economic expansion, known as the Valencian Golden
Age, in which culture and the arts flourished. Concurrent population growth made Valencia the most populous city
in the Crown of Aragon. Local industry, led by textile production, reached a great development, and a financial institution,
the Canvi de Taula, was created to support municipal banking operations; Valencian bankers lent funds to Queen Isabella
I of Castile for Columbus's voyage in 1492. At the end of the century the Silk Exchange (Llotja de la Seda) building
was erected as the city became a commercial emporium that attracted merchants from all over Europe. The vicereine
Germaine of Foix brutally repressed the uprising and its leaders, and this accelerated the authoritarian centralisation
of the government of Charles I. Queen Germaine favoured harsh treatment of the agermanats. She is thought to have
signed the death warrants of 100 former rebels personally, and sources indicate that as many as 800 executions may
have occurred. The agermanats are comparable to the comuneros of neighbouring Castile, who fought a similar revolt
against Charles from 1520 to 1522. In the early 20th century Valencia was an industrialised city. The silk industry
had disappeared, but there was a large production of hides and skins, wood, metals and foodstuffs, the last of these
with substantial exports, particularly of wine and citrus. Small businesses predominated, but with the rapid mechanisation
of industry larger companies were being formed. The best expression of this dynamic was in the regional exhibitions,
including that of 1909 held next to the pedestrian avenue L'Albereda (Paseo de la Alameda), which depicted the progress
of agriculture and industry. Among the most architecturally successful buildings of the era were those designed in
the Art Nouveau style, such as the North Station (Gare du Nord) and the Central and Columbus markets. Its average
annual temperature is 18.4 °C (65.1 °F): 22.8 °C (73.0 °F) during the day and 13.8 °C (56.8 °F) at night. In the
coldest month, January, the daytime maximum typically ranges from 13 to 21 °C (55 to 70 °F) and the night-time minimum
from 4 to 12 °C (39 to 54 °F). In the warmest month, August, the daytime maximum typically ranges from 28 to 34 °C
(82 to 93 °F), with around 23 °C (73 °F) at night. Generally,
temperatures similar to those experienced in the northern part of Europe in summer last about 8 months, from April
to November. March is transitional, the temperature often exceeds 20 °C (68 °F), with an average temperature of 19.0
°C (66 °F) during the day and 10.0 °C (50 °F) at night. December, January and February are the coldest months, with
average temperatures around 17 °C (63 °F) during the day and 7 °C (45 °F) at night. Valencia has one of the mildest
winters in Europe, owing to its southern location on the Mediterranean Sea and the Foehn phenomenon. The January
average is comparable to temperatures expected for May and September in the major cities of northern Europe. The
Valencian economy recovered during the 18th century with the rising manufacture of woven silk and ceramic tiles.
The Palau de Justícia is an example of the affluence manifested in the most prosperous times of Bourbon rule (1758–1802)
during the rule of Charles III. The 18th century was the age of the Enlightenment in Europe, and its humanistic ideals
influenced such men as Gregory Maians and Perez Bayer in Valencia, who maintained correspondence with the leading
French and German thinkers of the time. In this atmosphere of the exaltation of ideas the Economic Society of Friends
of the Country (Societat Econòmica d'Amics del País) was founded in 1776; it introduced numerous improvements in
agriculture and industry and promoted various cultural, civic, and economic institutions in Valencia. The dictatorship
of Franco forbade political parties and began a harsh ideological and cultural repression countenanced and sometimes
even led by the Church. The financial markets were destabilised, causing a severe economic crisis that led to rationing.
A black market in rationed goods existed for over a decade. The Francoist administrations of Valencia silenced publicity
of the catastrophic consequences of the floods of 1949 with the attendant dozens of deaths, but could not do the
same after the even more tragic flood of 1957 when the river Turia overflowed its banks again, killing many Valencians
(officially, eighty-one died; the actual figure is not known). To prevent further disasters, the river was eventually
diverted to a new course. The old river bed was abandoned for years, and successive Francoist mayors proposed making
it a motorway, but that option was finally rejected with the advent of democracy and fervent neighbourhood protests.
The river was divided in two at the western city limits (Plan Sur de Valencia), and diverted southwards along a new
course that skirts the city, before meeting the Mediterranean. The old course of the river continues, dry, through
the city centre, almost to the sea. The old riverbed is now a verdant sunken park called the 'Garden of the Turia'
(Jardí del Túria or Jardín del Turia) that allows cyclists and pedestrians to traverse much of the city without the
use of roads; overhead bridges carry motor traffic across the park. Valencia's port is the biggest on the Mediterranean
western coast, the largest in Spain in container traffic as of 2008 and the second largest in total traffic, handling
20% of Spain's exports. The main exports are foodstuffs and beverages. Other exports include oranges, furniture,
ceramic tiles, fans, textiles and iron products. Valencia's manufacturing sector focuses on metallurgy, chemicals,
textiles, shipbuilding and brewing. Small and medium-sized industries are an important part of the local economy,
and before the current crisis unemployment was lower than the Spanish average. A fervent follower of the absolutist
cause, Elío had played an important role in the repression of the supporters of the Constitution of 1812. For this,
he was arrested in 1820 and executed in 1822 by garroting. Conflict between absolutists and liberals continued, and
in the period of conservative rule called the Ominous Decade (1823–1833), which followed the Trienio Liberal, there
was ruthless repression by government forces and the Catholic Inquisition. The last victim of the Inquisition was
Gaietà Ripoll, a teacher accused of being a deist and a Mason, who was hanged in Valencia in 1826. On 9 July 2006,
during Mass at Valencia's Cathedral, Our Lady of the Forsaken Basilica, Pope Benedict XVI used, at the World Day
of Families, the Santo Caliz, a 1st-century Middle-Eastern artifact that some Catholics believe is the Holy Grail.
It was supposedly brought to that church by Emperor Valerian in the 3rd century, after having been brought by St.
Peter to Rome from Jerusalem. The Santo Caliz (Holy Chalice) is a simple, small stone cup. Its base was added in
medieval times and consists of fine gold, alabaster and gemstones. During the Cantonal Revolution of 1873, a cantonalist
uprising that took place during the First Spanish Republic, the city was consolidated with most of the nearby cities
in the Federal Canton of Valencia (proclaimed on 19 July and dissolved on 7 August). It did not have the revolutionary
fervor of the movement in cities like Alcoy, as it was initiated by the bourgeoisie, but the Madrid government sent
General Martinez-Campos to stifle the rebellion by force of arms and subjected Valencia to an intense bombardment.
The city surrendered on 7 August; Alfonso XII was proclaimed king on 29 December 1874, and arrived in Valencia on
11 January 1875 on his way to Madrid, marking the end of the first republic. Despite the Bourbon restoration, the
roughly even balance between conservatives and liberals in the government was sustained in Valencia until the granting
of universal male suffrage in 1890, after which the Republicans, led by Vicente Blasco Ibáñez, gained considerably
more of the popular vote. World-renowned (and city-born) architect Santiago Calatrava produced the futuristic City
of Arts and Sciences (Ciutat de les Arts i les Ciències), which contains an opera house/performing arts centre, a
science museum, an IMAX cinema/planetarium, an oceanographic park and other structures such as a long covered walkway
and restaurants. Calatrava is also responsible for the bridge named after him in the centre of the city. The Music
Palace (Palau De La Música) is another noteworthy example of modern architecture in Valencia. Valencia is a bilingual
city: Valencian and Spanish are the two official languages. Spanish is official in all of Spain, whereas Valencian
is official in the Valencian Country, as well as in Catalonia and the Balearic Islands, where it receives the name
of Catalan. Despite the differentiated denomination, the distinct dialectal traits and political tension between
Catalonia and the Valencian Country, Catalan and Valencian are mutually intelligible and are considered two varieties
of the same language. Valencian has been historically repressed in favour of Spanish. The effects have been more
noticeable in the city proper, whereas the language has remained active in the rural and metropolitan areas. After
the Castile-Aragon unification, a Spanish-speaking elite established itself in the city. In more recent history,
the establishment of Franco's military and administrative apparatus in Valencia further excluded Valencian from public
life. Valencian recovered its official status, prestige and use in education after the transition to democracy in
1978. However, due to industrialisation in recent decades, Valencia has attracted immigration from other regions
in Spain, and hence there is also a demographic factor for its declining social use. Due to a combination of these
reasons, Valencia has become the bastion of anti-Catalan blaverism, which celebrates Valencian as merely folkloric,
but rejects the existing standard which was adapted from Catalan orthography. Spanish is currently the predominant
language in the city proper but, thanks to the education system, most Valencians have basic knowledge of both Spanish
and Valencian, and either can be used in the city. Valencia is therefore the second biggest Catalan-speaking city
after Barcelona. Institutional buildings and streets are named in Valencian. The city is also home to many pro-Valencian
political and civil organisations. Furthermore, education entirely in Valencian is offered in more than 70 state-owned
schools in the city, as well as by the University of Valencia across all disciplines. Valencia has experienced a
surge in its cultural development during the last thirty years, exemplified by exhibitions and performances at such
iconic institutions as the Palau de la Música, the Palacio de Congresos, the Metro, the City of Arts and Sciences
(Ciutat de les Arts i les Ciències), the Valencian Museum of Enlightenment and Modernity (Museo Valenciano de la
Ilustracion y la Modernidad), and the Institute of Modern Art (Instituto Valenciano de Arte Moderno). The various
productions of Santiago Calatrava, a renowned structural engineer, architect, and sculptor and of the architect Félix
Candela have contributed to Valencia's international reputation. These public works and the ongoing rehabilitation
of the Old City (Ciutat Vella) have helped improve the city's livability and tourism is continually increasing. Among
the parish churches are Saints John (Baptist and Evangelist), rebuilt in 1368, whose dome, decorated by Palomino,
contains some of the best frescoes in Spain; El Templo (the Temple), the ancient church of the Knights Templar, which
passed into the hands of the Order of Montesa and was rebuilt in the reigns of Ferdinand VI and Charles III; the
former convent of the Dominicans, at one time the headquarters of the Captain General, the cloister of which has
a beautiful Gothic wing and a chapter room with large columns imitating palm trees; the Colegio del Corpus Christi,
which is devoted to the Blessed Sacrament, and in which perpetual adoration is carried on; the Jesuit college, which
was destroyed in 1868 by the revolutionary Committee of the Popular Front, but later rebuilt; and the Colegio de
San Juan (also of the Society), the former college of the nobles, now a provincial institute for secondary instruction.
A few centuries later, coinciding with the first waves of the invading Germanic peoples (Suevi, Vandals and Alans,
and later the Visigoths) and the power vacuum left by the demise of the Roman imperial administration, the church
assumed the reins of power in the city and replaced the old Roman temples with religious buildings. With the Byzantine
invasion of the southwestern Iberian peninsula in 554 the city acquired strategic importance. After the expulsion
of the Byzantines in 625, Visigothic military contingents were posted there and the ancient Roman amphitheatre was
fortified. Little is known of its history for nearly a hundred years; although this period is only scarcely documented
by archeology, excavations suggest that there was little development of the city. During Visigothic times Valencia
was an episcopal See of the Catholic Church, albeit a suffragan diocese subordinate to the archdiocese of Toledo,
comprising the ancient Roman province of Carthaginensis in Hispania. In the 15th century the dome was added and the
naves extended behind the choir, uniting the building to the tower and forming a main entrance. Archbishop Luis
Alfonso de los Cameros began the building of the main chapel in 1674; the walls were decorated with marbles and bronzes
in the Baroque style of that period. At the beginning of the 18th century the German Conrad Rudolphus built the façade
of the main entrance. The other two doors lead into the transept; one, that of the Apostles in pure pointed Gothic,
dates from the 14th century; the other is that of the Palau. The additions made to the back of the cathedral detract
from its height. The 18th-century restoration rounded the pointed arches, covered the Gothic columns with Corinthian
pillars, and redecorated the walls. The dome has no lantern, its plain ceiling being pierced by two large side windows.
There are four chapels on either side, besides that at the end and those that open into the choir, the transept,
and the sanctuary. It contains many paintings by eminent artists. A silver reredos, which was behind the altar, was
carried away in the war of 1808, and converted into coin to meet the expenses of the campaign. There are two paintings
by Francisco Goya in the San Francesco chapel. Behind the Chapel of the Blessed Sacrament is a small Renaissance
chapel built by Calixtus III. Beside the cathedral is the chapel dedicated to the Our Lady of the Forsaken (Virgen
de los desamparados or Mare de Déu dels Desamparats). In 1238, King James I of Aragon, with an army composed of Aragonese,
Catalans, Navarrese and crusaders from the Order of Calatrava, laid siege to Valencia and on 28 September obtained
a surrender. Fifty thousand Moors were forced to leave. Poets such as Ibn al-Abbar and Ibn Amira mourned this exile
from their beloved Valencia. After the Christian victory and the expulsion of the Muslim population the city was
divided between those who had participated in the conquest, according to the testimony in the Llibre del Repartiment
(Book of Distribution). James I granted the city new charters of law, the Furs of Valencia, which later were extended
to the whole kingdom of Valencia. Thenceforth the city entered a new historical stage in which a new society and
a new language developed, forming the basis of the character of the Valencian people as they are known today. In
its long history, Valencia has acquired many local traditions and festivals, among them the Falles, which were declared
Celebrations of International Touristic Interest (Fiestas de Interés Turístico Internacional) on 25 January 1965,
and the Water Tribunal of Valencia (Tribunal de las Aguas de Valencia), which was declared an intangible cultural
heritage of humanity (Patrimonio Cultural Inmaterial de la Humanidad) in 2009. In addition to these Valencia has
hosted world-class events that helped shape the city's reputation and put it in the international spotlight, e.g.,
the Regional Exhibition of 1909, the 32nd and the 33rd America's Cup competitions, the European Grand Prix of Formula
One auto racing, the Valencia Open 500 tennis tournament, and the Global Champions Tour of equestrian sports. The
city had surrendered without a fight to the invading Moors (Berbers and Arabs) by 714 AD, and the cathedral of Saint
Vincent was turned into a mosque. Abd al-Rahman I, the first emir of Cordoba, ordered the city destroyed in 755 during
his wars against other nobility, but several years later his son, Abdullah, had a form of autonomous rule over the
province of Valencia. Among his administrative acts he ordered the building of a luxurious palace, the Russafa, on
the outskirts of the city in the neighbourhood of the same name. So far no remains have been found. Also at this
time Valencia received the name Medina al-Turab (City of Sand). When Islamic culture settled in, Valencia, then called
Balansiyya, prospered from the 10th century, due to a booming trade in paper, silk, leather, ceramics, glass and
silver-work. The architectural legacy of this period is abundant in Valencia and can still be appreciated today in
the remnants of the old walls, the Baños del Almirante bath house, Portal de Valldigna street and even the Cathedral
and the tower, El Micalet (El Miguelete), which was the minaret of the old mosque. This boom was reflected in the
growth of artistic and cultural pursuits. Some of the most emblematic buildings of the city were built during this
period, including the Serranos Towers (1392), the Lonja (1482), the Miguelete and the Chapel of the Kings of the
Convent of Santo Domingo. In painting and sculpture, Flemish and Italian trends had an influence on artists such
as Lluís Dalmau, Gonçal Peris and Damià Forment. Literature flourished with the patronage of the court of Alfonso
the Magnanimous, supporting authors like Ausiàs March, Roiç de Corella, and Isabel de Villena. By 1460 Joanot Martorell
wrote Tirant lo Blanch, an innovative novel of chivalry that influenced many later writers, from Cervantes to Shakespeare.
Ausiàs March was one of the first poets to use the everyday language Valencian, instead of the troubadour language,
Occitan. Also around this time, between 1499 and 1502, the University of Valencia was founded under the
name of Estudio General ("studium generale", place of general studies). The decline of the city reached its nadir
with the War of Spanish Succession (1702–1709) that marked the end of the political and legal independence of the
Kingdom of Valencia. During the War of the Spanish Succession, Valencia sided with Charles of Austria. On 24 January
1706, Charles Mordaunt, 3rd Earl of Peterborough, 1st Earl of Monmouth, led a handful of English cavalrymen into
the city after riding south from Barcelona, capturing the nearby fortress at Sagunt, and bluffing the Spanish Bourbon
army into withdrawal. The mutineers seized the Citadel, a Supreme Junta government took over, and on 26–28 June,
Napoleon's Marshal Moncey attacked the city with a column of 9,000 French imperial troops in the First Battle of
Valencia. He failed to take the city in two assaults and retreated to Madrid. Marshal Suchet began a long siege of
the city in October 1811, and after intense bombardment forced it to surrender on 8 January 1812. After the capitulation,
the French instituted reforms in Valencia, which became the capital of Spain when the Bonapartist pretender to the
throne, José I (Joseph Bonaparte, Napoleon's elder brother), moved the Court there in the summer of 1812. The disaster
of the Battle of Vitoria on 21 June 1813 obliged Suchet to quit Valencia, and the French troops withdrew in July.
The crisis deepened during the 17th century with the expulsion in 1609 of the Moriscos, descendants
of the Muslim population that had converted to Christianity under threat of exile from Ferdinand and Isabella in 1502.
From 1609 through 1614, the Spanish government systematically forced Moriscos to leave the kingdom for Muslim North
Africa. They were concentrated in the former Kingdom of Aragon, where they constituted a fifth of the population,
and the Valencia area specifically, where they were roughly a third of the total population. The expulsion caused
the financial ruin of some of the nobility and the bankruptcy of the Taula de Canvi in 1613. The Crown endeavoured
to compensate the nobles, who had lost much of their agricultural labour force; this harmed the economy of the city
for generations to come. Later, during the so-called Catalan Revolt (1640–1652), Valencia contributed to the cause
of Philip IV with militias and money, resulting in a period of further economic hardship exacerbated by the arrival
of troops from other parts of Spain. During the second half of the 19th century the bourgeoisie encouraged the development
of the city and its environs; land-owners were enriched by the introduction of the orange crop and the expansion
of vineyards and other crops. This economic boom corresponded with a revival of local traditions and of the Valencian
language, which had been ruthlessly suppressed from the time of Philip V. Around 1870, the Valencian Renaissance,
a movement committed to the revival of the Valencian language and traditions, began to gain ascendancy. In its early
stages the movement inclined to the romanticism of the poet Teodor Llorente, and resisted the more assertive remonstrances
of Constantine Llombart, founder of the still extant cultural society, Lo Rat Penat, which is dedicated to the promotion
and dissemination of the Valencian language and culture. During the regency of Maria Cristina, Espartero ruled Spain
for two years as its 18th Prime Minister from 16 September 1840 to 21 May 1841. Under his progressive government
the old regime was tenuously reconciled to his liberal policies. During this period of upheaval in the provinces
he declared that all the estates of the Church, its congregations, and its religious orders were national property—though
in Valencia, most of this property was subsequently acquired by the local bourgeoisie. City life in Valencia carried
on in a revolutionary climate, with frequent clashes between liberals and republicans, and the constant threat of
reprisals by the Carlist troops of General Cabrera. The Valencia Metro derailment occurred on 3 July 2006 at 1 p.m.
CEST (11:00 UTC) between Jesús and Plaça d'Espanya stations on Line 1 of the Metrovalencia mass transit system. Forty-three
people were killed and more than ten were seriously injured. It was not immediately clear what caused the crash.
Both the Valencian government spokesman Vicente Rambla and Mayor Rita Barberá called the accident a "fortuitous"
event. However, the trade union CC.OO. accused the authorities of "rushing" to say anything but admit that Line 1
is in a state of "constant deterioration" with a "failure to carry out maintenance". During the 20th century Valencia
remained the third most populous city of Spain as its population more than tripled, rising from 213,550 inhabitants in 1900
to 739,014 in 2000. Valencia was also third in industrial and economic development; notable milestones include urban
expansion of the city in the latter 1800s, the creation of the Banco de Valencia in 1900, construction of the Central
and Columbus markets, and the construction of the Estación del Norte (North Station) railway terminus, completed in 1921. The new century
was marked in Valencia with a major event, the Valencian regional exhibition of 1909 (La Exposición Regional Valenciana
de 1909), which emulated the national and universal expositions held in other cities. This production was promoted
by the Ateneo Mercantil de Valencia (Mercantile Athenaeum of Valencia), especially by its chairman, Tomás Trénor
y Palavicino, and had the support of the Government and the Crown; it was officially inaugurated by King Alfonso
XIII himself. The inevitable march to civil war and the combat in Madrid resulted in the removal of the capital of
the Republic to Valencia. On 6 November 1936 the city became the capital of Republican Spain under the control of
President Manuel Azaña; the government moved to the Palau de Benicarló, its ministries occupying various
other buildings. The city was heavily bombarded by air and sea, necessitating the construction of over two hundred
bomb shelters to protect the population. On 13 January 1937 the city was first shelled by a vessel of the Fascist
Italian Navy, which was blockading the port by the order of Benito Mussolini. The bombardment intensified and inflicted
massive destruction on several occasions; by the end of the war the city had survived 442 bombardments, leaving 2,831
dead and 847 wounded, although it is estimated that the death toll was higher, as the data given are those recognised
by Francisco Franco's government. The Republican government passed to Juan Negrín on 17 May 1937 and on 31 October
of that year moved to Barcelona. On 30 March 1939 Valencia surrendered and the Nationalist troops entered the city.
The postwar years were a time of hardship for Valencians. During Franco's regime, speaking or teaching Valencian was
prohibited; in a significant reversal, it is now compulsory for every schoolchild in Valencia. In March 2012, the
newspaper El Mundo published a story alleging that FGV had coached the employees who were to testify before the crash
investigation commission, providing them with a set of possible questions and guidelines for preparing their answers. In April 2013,
the television program Salvados questioned the official version of the incident as there were indications that the
Valencian Government had tried to downplay the accident, which coincided with the visit of the pope to Valencia,
or even to hide evidence, as the book of train breakdowns was never found. The day after the broadcast of this report,
which received extensive media coverage, several voices called for the reopening of the investigation. The investigation
was effectively reopened and the accident is currently under re-examination. In 1409, a hospital was founded and
placed under the patronage of Santa María de los Inocentes; to this was attached a confraternity devoted to recovering
the bodies of the unfriended dead in the city and within a radius of three miles (4.8 km) around it. At the end of
the 15th century this confraternity separated from the hospital, and continued its work under the name of "Cofradia
para el ámparo de los desamparados". King Philip IV of Spain and the Duke of Arcos suggested the building of the
new chapel, and in 1647 the Viceroy, Conde de Oropesa, who had been preserved from the bubonic plague, insisted on
carrying out their project. The Blessed Virgin was proclaimed patroness of the city under the title of Virgen de
los desamparados (Virgin of the Forsaken), and Archbishop Pedro de Urbina, in June 1652, laid the cornerstone
of the new chapel of this name. The archiepiscopal palace, a grain market in the time of the Moors, is simple in
design, with an inside cloister and a handsome chapel. In 1357, the arch that connects it with the cathedral was
built. In the council chamber are preserved the portraits of all the prelates of Valencia. Valencia is also internationally
famous for its football club, Valencia C.F., which won the Spanish league in 2002 and 2004 (the year it also won
the UEFA Cup), for a total of six times, and was a UEFA Champions League runner-up in 2000 and 2001. The team's stadium
is the Mestalla; its city rival Levante UD also plays in the highest division after gaining promotion in 2010, and its
stadium is the Estadi Ciutat de València. Since 2011 there has been a third team in the city, Huracán Valencia,
which plays its games at the Municipal de Manises in the Segunda División B. Valencia was founded as a Roman colony in
138 BC. The city is situated on the banks of the Turia, on the east coast of the Iberian Peninsula, fronting the
Gulf of Valencia on the Mediterranean Sea. Its historic centre is one of the largest in Spain, with approximately
169 hectares; this heritage of ancient monuments, views and cultural attractions makes Valencia one of the country's
most popular tourist destinations. Major monuments include Valencia Cathedral, the Torres de Serrans, the Torres
de Quart, the Llotja de la Seda (declared a World Heritage Site by UNESCO in 1996), and the Ciutat de les Arts i
les Ciències (City of Arts and Sciences), an entertainment-based cultural and architectural complex designed by Santiago
Calatrava and Félix Candela. The Museu de Belles Arts de València houses a large collection of paintings from the
14th to the 18th centuries, including works by Velázquez, El Greco, and Goya, as well as an important series of engravings
by Piranesi. The Institut Valencià d'Art Modern (Valencian Institute of Modern Art) houses both permanent collections
and temporary exhibitions of contemporary art and photography. Valencia stands on the banks of the Turia River, located
on the eastern coast of the Iberian Peninsula and the western part of the Mediterranean Sea, fronting the Gulf of
Valencia. At its founding by the Romans, it stood on a river island in the Turia, 6.4 km (4 mi) from the sea. The
Albufera, a freshwater lagoon and estuary about 11 km (7 mi) south of the city, is one of the largest lakes in Spain.
The City Council bought the lake from the Crown of Spain for 1,072,980 pesetas in 1911, and today it forms the main
portion of the Parc Natural de l'Albufera (Albufera Nature Reserve), with a surface area of 21,120 hectares (52,200
acres). In 1986, because of its cultural, historical, and ecological value, the Generalitat Valenciana declared it
a natural park. The third largest city in Spain and the 24th most populous municipality in the European Union, Valencia
has a population of 809,267 within its administrative limits on a land area of 134.6 km2 (52 sq mi). The urban area
of Valencia, extending beyond the administrative city limits, has a population of between 1,561,000 and 1,564,145.
Estimates of the wider metropolitan area's population range from 1,705,742 to 2,516,818. Between 2007 and 2008 there was
a 14% increase in the foreign born population with the largest numeric increases by country being from Bolivia, Romania
and Italy. About two thousand Roman colonists were settled there in 138 BC during the rule of consul Decimus Junius
Brutus Galaico. The Roman historian Florus says that Brutus transferred the soldiers who had fought under him to
that province. This was a typical Roman city in its conception, as it was located in a strategic location near the
sea on a river island crossed by the Via Augusta, the imperial road that connected the province to Rome, the capital
of the empire. The centre of the city was located in the present-day neighbourhood of the Plaza de la Virgen. Here
was the forum and the crossing of the Cardo Maximus and the Decumanus Maximus, which remain the two main axes of
the city. The Cardo corresponds to the existing Calle de Salvador, Almoina, and the Decumanus corresponds to Calle
de los Caballeros. Balansiyya had a rebirth of sorts with the beginning of the Taifa of Valencia kingdom in the 11th
century. The town grew, and during the reign of Abd al-Aziz a new city wall was built, remains of which are preserved
throughout the Old City (Ciutat Vella) today. The Castilian nobleman Rodrigo Diaz de Vivar, known as El Cid, who
was intent on possessing his own principality on the Mediterranean, entered the province in command of a combined
Christian and Moorish army and besieged the city beginning in 1092. By the time the siege ended in May 1094, he had
carved out his own fiefdom—which he ruled from 15 June 1094 to July 1099. This victory was immortalised in the Lay
of the Cid. During his rule, he converted nine mosques into churches and installed the French monk Jérôme as bishop
of the See of Valencia. El Cid was killed in July 1099 while defending the city from an Almoravid siege, whereupon
his wife Ximena Díaz ruled in his place for two years. The city went through serious troubles in the mid-fourteenth
century. On the one hand were the decimation of the population by the Black Death of 1348 and subsequent years of
epidemics — and on the other, the series of wars and riots that followed. Among these were the War of the Union,
a citizen revolt against the excesses of the monarchy, led by Valencia as the capital of the kingdom — and the war
with Castile, which forced the hurried raising of a new wall to resist Castilian attacks in 1363 and 1364. In these
years the coexistence of the three communities that occupied the city—Christian, Jewish and Muslim — was quite contentious.
The Jews who occupied the area around the waterfront had progressed economically and socially, and their quarter
gradually expanded its boundaries at the expense of neighbouring parishes. Meanwhile, Muslims who remained in the
city after the conquest were entrenched in a Moorish neighbourhood next to the present-day market Mosen Sorel. In
1391 an uncontrolled mob attacked the Jewish quarter, causing its virtual disappearance and leading to the forced
conversion of its surviving members to Christianity. The Muslim quarter was attacked during a similar tumult among
the populace in 1456, but the consequences were minor. Faced with this loss of business, Valencia suffered a severe
economic crisis. This manifested early in 1519–1523 when the artisan guilds known as the Germanies revolted against
the government of the Habsburg king Charles I in Valencia, now part of the Crown of Aragon, with most of the fighting
done in 1521. The revolt was an anti-monarchist, anti-feudal autonomist movement inspired by the Italian republics,
and a social revolt against the nobility who had fled the city before an epidemic of plague in 1519. It also bore
a strong anti-Islamic aspect, as rebels rioted against Aragon's population of mudéjars and imposed forced conversions
to Christianity. With the abolition of the charters of Valencia and most of its institutions, and the conformation
of the kingdom and its capital to the laws and customs of Castile, top civil officials were no longer elected, but
instead were appointed directly from Madrid, the king's court city, the offices often filled by foreign aristocrats.
Valencia had to become accustomed to being an occupied city, living with the presence of troops quartered in the
Citadel near the convent of Santo Domingo and in other buildings such as the Lonja, which served as a barracks until
1762. Ferdinand refused and went to Valencia instead of Madrid. Here, on 17 April, General Elio invited the King
to reclaim his absolute rights and put his troops at the King's disposition. The king abolished the Constitution
of 1812. He followed this act by dissolving the two chambers of the Spanish Parliament on 10 May. Thus began six
years (1814–1820) of absolutist rule, but the constitution was reinstated during the Trienio Liberal, a period of
three years of liberal government in Spain from 1820 to 1823. The public water supply network was completed in 1850,
and in 1858 the architects Sebastián Monleón Estellés, Antonino Sancho, and Timoteo Calvo drafted a general expansion
project for the city that included demolishing its ancient walls (a second version was printed in 1868). Neither
proposed project received final approval, but they did serve as a guide, though not closely followed, for future
growth. By 1860 the municipality had 140,416 inhabitants, and beginning in 1866 the ancient city walls were almost
entirely demolished to facilitate urban expansion. Electricity was introduced to Valencia in 1882. The economy began
to recover in the early 1960s, and the city experienced explosive population growth through immigration spurred by
the jobs created with the implementation of major urban projects and infrastructure improvements. With the advent
of democracy in Spain, the ancient kingdom of Valencia was established as a new autonomous entity, the Valencian
Community, the Statute of Autonomy of 1982 designating Valencia as its capital. On the night of 23 February 1981,
shortly after Antonio Tejero had stormed Congress, the Captain General of the Third Military Region, Jaime Milans
del Bosch, rose up in Valencia, put tanks on the streets, declared a state of emergency and tried to convince other
senior military figures to support the coup. After the televised message of King Juan Carlos I, those in the military
who had not yet aligned themselves decided to remain loyal to the government, and the coup failed. Despite this lack
of support, Milans del Bosch only surrendered at 5 a.m. on the next day, 24 February. The largest plaza in Valencia
is the Plaza del Ayuntamiento; it is home to the City Hall (Ayuntamiento) on its western side and the central post
office (Edificio de Correos) on its eastern side, a cinema that shows classic movies, and many restaurants and bars.
The plaza is triangular in shape, with a large cement lot at the southern end, normally surrounded by flower vendors.
It serves as ground zero during Les Falles, when the fireworks of the mascletà can be heard every afternoon. There
is a large fountain at the northern end. The Valencia Cathedral was called Iglesia Mayor in the early days of the
Reconquista, then Iglesia de la Seo (Seo is from the Latin sedes, i.e., (archiepiscopal) See), and by virtue of the
papal concession of 16 October 1866, it was called the Basilica Metropolitana. It is situated in the centre of the
ancient Roman city where some believe the temple of Diana stood. In Gothic times, it seems to have been dedicated
to the Holy Saviour; the Cid dedicated it to the Blessed Virgin; King James I of Aragon did likewise, leaving in
the main chapel the image of the Blessed Virgin, which he carried with him and is reputed to be the one now preserved
in the sacristy. The Moorish mosque, which had been converted into a Christian Church by the conqueror, was deemed
unworthy of the title of the cathedral of Valencia, and in 1262 Bishop Andrés de Albalat laid the cornerstone of
the new Gothic building, with three naves; these reach only to the choir of the present building. Bishop Vidal de
Blanes built the chapter hall, and James I added the tower, called El Miguelete because it was blessed on St. Michael's
day in 1418. The tower is about 58 m high and topped with a belfry (1660–1736). Once a year between 2008 and 2012, the
European Formula One Grand Prix took place on the Valencia Street Circuit. Along with Barcelona, Porto
and Monte Carlo, Valencia is one of the only European cities ever to have hosted Formula One World Championship Grands Prix
on public roads in the middle of a city. The final race, the 2012 European Grand Prix, saw an extremely popular winner:
home driver Fernando Alonso won for Ferrari in spite of starting halfway down the field. The Valencian Community motorcycle Grand
Prix (Gran Premi de la Comunitat Valenciana de motociclisme) is part of the Grand Prix motorcycle racing season at
the Circuit Ricardo Tormo (also known as Circuit de Valencia). Periodically the Spanish round of the Deutsche Tourenwagen
Masters touring car racing Championship (DTM) is held in Valencia.
It has been said that GE got into computer manufacturing because in the 1950s it was the largest user of computers outside
the United States federal government; it was also the first business in the world to own a computer, and its major
appliance manufacturing plant "Appliance Park" was the first non-governmental site to host one. However, in 1970,
GE sold its computer division to Honeywell, exiting the computer manufacturing industry, though it retained its timesharing
operations for some years afterwards. GE was a major provider of computer timesharing services, through General Electric
Information Services (GEIS, now GXS), offering online computing services that included GEnie. During 1889, Thomas
Edison had business interests in many electricity-related companies: Edison Lamp Company, a lamp manufacturer in
East Newark, New Jersey; Edison Machine Works, a manufacturer of dynamos and large electric motors in Schenectady,
New York; Bergmann & Company, a manufacturer of electric lighting fixtures, sockets, and other electric lighting
devices; and Edison Electric Light Company, the patent-holding company and the financial arm backed by J.P. Morgan
and the Vanderbilt family for Edison's lighting experiments. In 1889, Drexel, Morgan & Co., a company founded by
J.P. Morgan and Anthony J. Drexel, financed Edison's research and helped merge those companies under one corporation
to form Edison General Electric Company which was incorporated in New York on April 24, 1889. The new company also
acquired Sprague Electric Railway & Motor Company in the same year. Since over half of GE's revenue is derived from
financial services, it is arguably a financial company with a manufacturing arm. It is also one of the largest lenders
in countries other than the United States, such as Japan. Even though the first wave of conglomerates (such as ITT
Corporation, Ling-Temco-Vought, Tenneco, etc.) fell by the wayside by the mid-1980s, in the late 1990s, another wave
(consisting of Westinghouse, Tyco, and others) tried and failed to emulate GE's success. The changes
included a new corporate color palette, small modifications to the GE logo, a new customized font (GE Inspira) and
a new slogan, "Imagination at work", composed by David Lucas, to replace the slogan "We Bring Good Things to Life"
used since 1979. The standard requires many headlines to be lowercased and adds visual "white space" to documents
and advertising. The changes were designed by Wolff Olins and are used on GE's marketing, literature and website.
In 2014, a second typeface family was introduced: GE Sans and Serif by Bold Monday created under art direction by
Wolff Olins. GE has a history of some of its activities giving rise to large-scale air and water pollution. Based
on year 2000 data, researchers at the Political Economy Research Institute listed the corporation as the fourth-largest
corporate producer of air pollution in the United States, with more than 4.4 million pounds per year (2,000 tons)
of toxic chemicals released into the air. GE has also been implicated in the creation of toxic waste. According to
EPA documents, only the United States Government, Honeywell, and Chevron Corporation are responsible for producing
more Superfund toxic waste sites. At about the same time, Charles Coffin, leading the Thomson-Houston Electric Company,
acquired a number of competitors and gained access to their key patents. General Electric was formed through the
1892 merger of Edison General Electric Company of Schenectady, New York, and Thomson-Houston Electric Company of
Lynn, Massachusetts, with the support of Drexel, Morgan & Co. Both plants continue to operate under the GE banner
to this day. The company was incorporated in New York, with the Schenectady plant used as headquarters for many years
thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed.
From circa 1932 until 1977, General Electric polluted the Housatonic River with PCB discharges from its
plant at Pittsfield, Massachusetts. Aroclor 1254 and Aroclor 1260, made by Monsanto, were the primary contaminants.
The highest concentrations of PCBs in the Housatonic River are found in Woods Pond in Lenox, Massachusetts,
just south of Pittsfield, where they have been measured up to 110 mg/kg in the sediment. About 50% of all the PCBs
currently in the river are estimated to be retained in the sediment behind Woods Pond dam. This is estimated to be
about 11,000 pounds of PCBs. Former filled oxbows are also polluted. Waterfowl and fish who live in and around the
river contain significant levels of PCBs and can present health risks if consumed. GE (General Electric) Energy's
renewable energy business has expanded greatly, to keep up with growing U.S. and global demand for clean energy.
Since entering the renewable energy industry in 2002, GE has invested more than $850 million in renewable energy
commercialization. In August 2008 it acquired Kelman Ltd, a Northern Ireland company specializing in advanced monitoring
and diagnostics technologies for transformers used in renewable energy generation, and announced an expansion of
its business in Northern Ireland in May 2010. As of 2009, GE's renewable energy initiatives, which include solar power,
wind power and GE Jenbacher gas engines using renewable and non-renewable methane-based gases, employed more than 4,900
people globally and had created more than 10,000 supporting jobs. In May 2005, GE announced the launch of a program
called "Ecomagination," intended, in the words of CEO Jeff Immelt, "to develop tomorrow's solutions such as solar
energy, hybrid locomotives, fuel cells, lower-emission aircraft engines, lighter and stronger durable materials,
efficient lighting, and water purification technology". The announcement prompted an op-ed piece in The New York
Times to observe that, "while General Electric's increased emphasis on clean technology will probably result in improved
products and benefit its bottom line, Mr. Immelt's credibility as a spokesman on national environmental policy is
fatally flawed because of his company's intransigence in cleaning up its own toxic legacy." GE's history of working
with turbines in the power-generation field gave them the engineering know-how to move into the new field of aircraft
turbosuperchargers. Led by Sanford Alexander Moss, GE introduced the first superchargers during
World War I, and continued to develop them during the Interwar period. Superchargers became indispensable in the
years immediately prior to World War II, and GE was the world leader in exhaust-driven supercharging when the war
started. This experience, in turn, made GE a natural selection to develop the Whittle W.1 jet engine that was demonstrated
in the United States in 1941. GE ranked ninth among United States corporations in the value of wartime production
contracts. Although their early work with Whittle's designs was later handed to Allison Engine Company, GE Aviation
emerged as one of the world's largest engine manufacturers, second only to the British company, Rolls-Royce plc.
General Electric heavily contaminated the Hudson River with polychlorinated biphenyls (PCBs) between 1947 and 1977. This
pollution caused a range of harmful effects to wildlife and people who eat fish from the river or drink the water.
In response to this contamination, activists protested in various ways. Musician Pete Seeger founded the Hudson River
Sloop Clearwater and the Clearwater Festival to draw attention to the problem. The activism led to the site being
designated by the EPA as one of the Superfund sites requiring extensive cleanup. Other sources of pollution, including
mercury contamination and sewage dumping, have also contributed to problems in the Hudson River watershed. GE has
said that it will invest $1.4 billion in clean technology research and development in 2008 as part of its Ecomagination
initiative. As of October 2008, the scheme had resulted in 70 green products being brought to market, ranging from
halogen lamps to biogas engines. In 2007, GE raised the annual revenue target for its Ecomagination initiative from
$20 billion in 2010 to $25 billion following positive market response to its new product lines. In 2010, GE continued
to raise its investment by adding $10 billion into Ecomagination over the next five years. Short Films, Big Ideas
was launched at the 2011 Toronto International Film Festival in partnership with cinelan. Stories included breakthroughs
in Slingshot (water vapor distillation system), cancer research, energy production, pain management and food access.
Each of the 30 films received world premiere screenings at a major international film festival, including the Sundance
Film Festival and the Tribeca Film Festival. The winning amateur director film, The Cyborg Foundation, was awarded
a US$100,000 prize at the 2013 Sundance Film Festival.[citation needed] According to GE, the campaign garnered
more than 1.5 billion total media impressions, 14 million online views, and was seen in 156 countries.[citation needed]
In April 2014, it was announced that GE was in talks to acquire the global power division of French engineering group
Alstom for a figure of around $13 billion. A rival joint bid was submitted in June 2014 by Siemens and Mitsubishi
Heavy Industries (MHI) with Siemens seeking to acquire Alstom's gas turbine business for €3.9 billion, and MHI proposing
a joint venture in steam turbines, plus a €3.1 billion cash investment. In June 2014 a formal offer from GE worth
$17 billion was agreed by the Alstom board. Part of the transaction involved the French government taking a 20% stake
in Alstom to help secure France's energy and transport interests, and French jobs. A rival offer from Siemens-Mitsubishi
Heavy Industries was rejected. The acquisition was expected to be completed in 2015. GE is a multinational conglomerate
headquartered in Fairfield, Connecticut. Its main offices are located at 30 Rockefeller Plaza at Rockefeller Center
in New York City, known now as the Comcast Building. It was formerly known as the GE Building for the prominent GE
logo on the roof; NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary,
it has been associated with the center since its construction in the 1930s. GE moved its corporate headquarters from
the GE Building on Lexington Avenue to Fairfield in 1974. In 1983, New York State Attorney General Robert Abrams
filed suit in the United States District Court for the Northern District of New York to compel GE to pay for the
cleanup of what was claimed to be more than 100,000 tons of chemicals dumped from their plant in Waterford, New York.
In 1999, the company agreed to pay a $250 million settlement in connection with claims it polluted the Housatonic
River (Pittsfield, Massachusetts) and other sites with polychlorinated biphenyls (PCBs) and other hazardous substances.
The Continental Army was created on 14 June 1775 by the Continental Congress as a unified army for the colonies to fight
Great Britain, with George Washington appointed as its commander. The army was initially led by men who had served
in the British Army or colonial militias and who brought much of British military heritage with them. As the Revolutionary
War progressed, French aid, resources, and military thinking influenced the new army. A number of European soldiers
came on their own to help, such as Friedrich Wilhelm von Steuben, who taught the army Prussian tactics and organizational
skills. The Vietnam War is often regarded as a low point for the U.S. Army due to the use of drafted personnel, the
unpopularity of the war with the American public, and frustrating restrictions placed on the military by American
political leaders. While American forces had been stationed in the Republic of Vietnam since 1959, in intelligence
and advisory/training roles, they did not deploy in large numbers until 1965, after the Gulf of Tonkin Incident. American
forces effectively established and maintained control of the "traditional" battlefield; however, they struggled to
counter the guerrilla hit and run tactics of the communist Viet Cong and the North Vietnamese Army. On a tactical
level, American soldiers (and the U.S. military as a whole) did not lose a sizable battle. By the twentieth century,
the U.S. Army had mobilized the U.S. Volunteers on four separate occasions, once during each of the major wars of
the nineteenth century. During World War I, the "National Army" was organized to fight the conflict, replacing the concept of U.S.
Volunteers. It was demobilized at the end of World War I, and was replaced by the Regular Army, the Organized Reserve
Corps, and the State Militias. In the 1920s and 1930s, the "career" soldiers were known as the "Regular Army" with
the "Enlisted Reserve Corps" and "Officer Reserve Corps" augmented to fill vacancies when needed. The army is led
by a civilian Secretary of the Army, who has the statutory authority to conduct all the affairs of the army under
the authority, direction and control of the Secretary of Defense. The Chief of Staff of the Army, who is the highest-ranked
military officer in the army, serves as the principal military adviser and executive agent for the Secretary of the
Army, i.e., its service chief; and as a member of the Joint Chiefs of Staff, a body composed of the service chiefs
from each of the four military services belonging to the Department of Defense who advise the President of the United
States, the Secretary of Defense, and the National Security Council on operational military matters, under the guidance
of the Chairman and Vice Chairman of the Joint Chiefs of Staff. In 1986, the Goldwater–Nichols Act mandated that
operational control of the services follows a chain of command from the President to the Secretary of Defense directly
to the unified combatant commanders, who have control of all armed forces units in their geographic or function area
of responsibility. Thus, the secretaries of the military departments (and their respective service chiefs underneath
them) only have the responsibility to organize, train and equip their service components. The army provides trained
forces to the combatant commanders for use as directed by the Secretary of Defense. The United States Army (USA)
is the largest branch of the United States Armed Forces and performs land-based military operations. It is one of
the seven uniformed services of the United States and is designated as the Army of the United States in the United
States Constitution, Article 2, Section 2, Clause 1 and United States Code, Title 10, Subtitle B, Chapter 301, Section
3001. As the largest and senior branch of the U.S. military, the modern U.S. Army has its roots in the Continental
Army, which was formed (14 June 1775) to fight the American Revolutionary War (1775–83)—before the U.S. was established
as a country. After the Revolutionary War, the Congress of the Confederation created the United States Army on 3
June 1784, to replace the disbanded Continental Army. The United States Army considers itself descended from the
Continental Army, and dates its institutional inception from the origin of that armed force in 1775. The War of 1812,
the second and last American war against the United Kingdom, was less successful for the U.S. than the Revolution
and Northwest Indian War against natives had been, though it ended on a high note for Americans as well. After
taking control of Lake Erie in 1813, the Americans were able to seize parts of western Upper Canada, burn York and
defeat Tecumseh, which caused his Indian Confederacy to collapse. Despite hard-won victories in the province of Upper
Canada, which earned the U.S. Army the nickname "Regulars, by God!", British troops were able to capture and burn Washington.
The regular army, however, proved they were professional and capable of defeating the British army during the invasions
of Plattsburgh and Baltimore, prompting British agreement on the previously rejected terms of a status quo ante bellum.
Two weeks after a treaty was signed (but not ratified), Andrew Jackson defeated the British in the Battle of New
Orleans and became a national hero. Per the treaty both sides returned to the status quo with no victor. After the
Revolutionary War, though, the Continental Army was quickly given land certificates and disbanded in a reflection of the republican
distrust of standing armies. State militias became the new nation's sole ground army, with the exception of a regiment
to guard the Western Frontier and one battery of artillery guarding West Point's arsenal. However, because of continuing
conflict with Native Americans, it was soon realized that it was necessary to field a trained standing army. The
Regular Army was at first very small, and after General St. Clair's defeat at the Battle of the Wabash, the Regular
Army was reorganized as the Legion of the United States, which was established in 1791 and renamed the "United States
Army" in 1796. Collective training at the unit level takes place at the unit's assigned station, but the most intensive
training at higher echelons is conducted at the three combat training centers (CTC); the National Training Center
(NTC) at Fort Irwin, California, the Joint Readiness Training Center (JRTC) at Fort Polk, Louisiana, and the Joint
Multinational Readiness Center (JMRC) at the Hohenfels Training Area in Hohenfels, Germany. ARFORGEN is the Army Force
Generation process approved in 2006 to meet the need to continuously replenish forces for deployment, at unit level,
and for other echelons as required by the mission. Individual-level replenishment still requires training at a unit
level, which is conducted at the continental US (CONUS) replacement center at Fort Bliss, in New Mexico and Texas,
before soldiers' individual deployment. For the first two years Confederate forces did well in set battles but lost control
of the border states. The Confederates had the advantage of defending a very large country in an area where disease
caused twice as many deaths as combat. The Union pursued a strategy of seizing the coastline, blockading the ports,
and taking control of the river systems. By 1863 the Confederacy was being strangled. Its eastern armies fought well,
but the western armies were defeated one after another until the Union forces captured New Orleans in 1862 along
with the Tennessee River. In the famous Vicksburg Campaign of 1862–63, Ulysses Grant seized the Mississippi River
and cut off the Southwest. Grant took command of Union forces in 1864 and after a series of battles with very heavy
casualties, he had Lee under siege in Richmond as William T. Sherman captured Atlanta and marched through Georgia
and the Carolinas. The Confederate capital was abandoned in April 1865 and Lee subsequently surrendered his army
at Appomattox Court House; all other Confederate armies surrendered within a few months. The end of World War II
set the stage for the East–West confrontation known as the Cold War. With the outbreak of the Korean War, concerns
over the defense of Western Europe rose. Two corps, V and VII, were reactivated under Seventh United States Army
in 1950 and American strength in Europe rose from one division to four. Hundreds of thousands of U.S. troops remained
stationed in West Germany, with others in Belgium, the Netherlands and the United Kingdom, until the 1990s in anticipation
of a possible Soviet attack. The United States joined World War II in December 1941 after the Japanese attack on
Pearl Harbor. On the European front, U.S. Army troops formed a significant portion of the forces that captured North
Africa and Sicily, and later fought in Italy. On D-Day, June 6, 1944, and in the subsequent liberation of Europe
and defeat of Nazi Germany, millions of U.S. Army troops played a central role. In the Pacific War, U.S. Army soldiers
participated alongside the United States Marine Corps in capturing the Pacific Islands from Japanese control. Following
the Axis surrenders in May (Germany) and August (Japan) of 1945, army troops were deployed to Japan and Germany to
occupy the two defeated nations. Two years after World War II, the Army Air Forces separated from the army to become
the United States Air Force in September 1947 after decades of attempting to separate. Also, in 1948, the army was
desegregated by order of President Harry S. Truman. The Total Force Policy was adopted by Chief of Staff of the Army
General Creighton Abrams in the aftermath of the Vietnam War and involves treating the three components of the army
– the Regular Army, the Army National Guard and the Army Reserve as a single force. Believing that no U.S. president
should be able to take the United States (and more specifically the U.S. Army) to war without the support of the
American people, General Abrams intertwined the structure of the three components of the army in such a way as to
make extended operations impossible, without the involvement of both the Army National Guard and the Army Reserve.
In response to the September 11 attacks, and as part of the Global War on Terror, U.S. and NATO forces invaded Afghanistan
in October 2001, displacing the Taliban government. The U.S. Army also led the combined U.S. and allied invasion
of Iraq in 2003. It served as the primary source for ground forces with its ability to sustain short and long-term
deployment operations. In the following years the mission changed from conflict between regular militaries to counterinsurgency,
resulting in the deaths of more than 4,000 U.S service members (as of March 2008) and injuries to thousands more.
23,813 insurgents were killed in Iraq between 2003 and 2011. Currently, the army is divided into the Regular Army, the
Army Reserve, and the Army National Guard. The army is also divided into major branches such as Air Defense Artillery,
Infantry, Aviation, Signal Corps, Corps of Engineers, and Armor. Before 1903 members of the National Guard were considered
state soldiers unless federalized (i.e., activated) by the President. Since the Militia Act of 1903 all National
Guard soldiers have held dual status: as National Guardsmen under the authority of the governor of their state or
territory and, when activated, as a reserve of the U.S. Army under the authority of the President. The U.S. Army
currently consists of 10 active divisions as well as several independent units. The force is in the process of contracting
after several years of growth. In June 2013, the Army announced plans to downsize to 32 active brigade combat teams
by 2015 to match a reduction in active duty strength to 490,000 soldiers. Army Chief of Staff Raymond Odierno has
projected that by 2018 the Army will eventually shrink to "450,000 in the active component, 335,000 in the National
Guard and 195,000 in U.S. Army Reserve." Training in the U.S. Army is generally divided into two categories – individual
and collective. Basic training consists of 10 weeks for most recruits followed by Advanced Individualized Training
(AIT), where they receive training for their military occupational specialties (MOS). Some individual MOSs require
anywhere from 14 to 20 weeks of One Station Unit Training (OSUT), which combines Basic Training and AIT. The length
of AIT varies by the MOS of the soldier, and some highly
technical MOS training may require many months (e.g., foreign language translators). Depending on the needs of the
army, Basic Combat Training for combat arms soldiers is conducted at a number of locations, but two of the longest-running
are the Armor School and the Infantry School, both at Fort Benning, Georgia. Many units are supplemented with a variety
of specialized weapons, including the M249 SAW (Squad Automatic Weapon), to provide suppressive fire at the fire-team
level. Indirect fire is provided by the M203 grenade launcher. The M1014 Joint Service Combat Shotgun or the Mossberg
590 Shotgun are used for door breaching and close-quarters combat. The M14EBR is used by designated marksmen. Snipers
use the M107 Long Range Sniper Rifle, the M2010 Enhanced Sniper Rifle, and the M110 Semi-Automatic Sniper Rifle.
The Pentagon has bought 25,000 MRAP vehicles in 25 variants since 2007 through rapid acquisition with no long-term plans
for the platforms. The Army plans to divest 7,456 vehicles and retain 8,585. Of the total number of vehicles the
Army will keep, 5,036 will be put in storage, 1,073 will be used for training, and the remainder will be spread across
the active force. The Oshkosh M-ATV will be kept the most at 5,681 vehicles, as it is smaller and lighter than other
MRAPs for off-road mobility. The other most retained vehicle will be the Navistar MaxxPro Dash with 2,633 vehicles,
plus 301 MaxxPro ambulances. Thousands of other MRAPs like the Cougar, BAE Caiman, and larger MaxxPros will be disposed
of. The U.S. Army black beret, having been permanently replaced with the patrol cap, is no longer worn with the new
ACU for garrison duty. After years of complaints that it was not well suited for most work conditions, Army Chief
of Staff General Martin Dempsey eliminated it for wear with the ACU in June 2011. Berets are still worn by soldiers
currently in a unit in jump status, whether or not the wearer is parachute-qualified (maroon beret), by members
of the 75th Ranger Regiment and the Airborne and Ranger Training Brigade (tan beret), and by Special Forces (rifle green
beret); these soldiers may wear the beret with the Army Service Uniform for non-ceremonial functions. Unit commanders may still direct
the wear of patrol caps in these units in training environments or motor pools. As a uniformed military service,
the Army is part of the Department of the Army, which is one of the three military departments of the Department
of Defense. The U.S. Army is headed by an appointed civilian, the Secretary of the Army (SECARMY),
and by a chief military officer, the Chief of Staff of the Army (CSA) who is also a member of the Joint Chiefs of
Staff. In the fiscal year 2016, the projected end strength for the Regular Army (USA) was 475,000 soldiers; the Army
National Guard (ARNG) had 342,000 soldiers, and the United States Army Reserve (USAR) had 198,000 soldiers; the combined-component
strength of the U.S. Army was 1,015,000 soldiers. As a branch of the armed forces, the mission of the U.S. Army is
"to fight and win our Nation's wars, by providing prompt, sustained, land dominance, across the full range of military
operations and the spectrum of conflict, in support of combatant commanders." The service participates in conflicts
worldwide and is the major ground-based offensive and defensive force. The army's major campaign against the Indians
was fought in Florida against the Seminoles. It took long wars (1818–58) to finally defeat the Seminoles and move them
to Oklahoma. The usual strategy in Indian wars was to seize control of the Indians' winter food supply, but that was
of no use in Florida, where there was no winter. The second strategy was to form alliances with other Indian tribes,
but that too was useless because the Seminoles had destroyed all the other Indians when they entered Florida in the
late eighteenth century. During the Cold War, American troops and their allies fought Communist forces in Korea and
Vietnam. The Korean War began in 1950, when the Soviets walked out of a U.N. Security Council meeting, removing their possible
veto. Under a United Nations umbrella, hundreds of thousands of U.S. troops fought to prevent the takeover of South
Korea by North Korea, and later, to invade the northern nation. After repeated advances and retreats by both sides,
and the PRC People's Volunteer Army's entry into the war, the Korean Armistice Agreement returned the peninsula to
the status quo in 1953. By 1989 Germany was nearing reunification and the Cold War was coming to a close. Army leadership
reacted by starting to plan for a reduction in strength. By November 1989 Pentagon briefers were laying out plans
to reduce army end strength by 23%, from 750,000 to 580,000. A number of incentives such as early retirement were
used. In 1990 Iraq invaded its smaller neighbor, Kuwait, and U.S. land forces quickly deployed to ensure the protection
of Saudi Arabia. In January 1991 Operation Desert Storm commenced, a U.S.-led coalition which deployed over 500,000
troops, the bulk of them from U.S. Army formations, to drive out Iraqi forces. The campaign ended in total victory,
as Western coalition forces routed the Iraqi Army, organized along Soviet lines, in just one hundred hours. The task
of organizing the U.S. Army commenced in 1775. In the first one hundred years of its existence, the United States
Army was maintained as a small peacetime force to man permanent forts and perform other non-wartime duties such as
engineering and construction works. During times of war, the U.S. Army was augmented by the much larger United States
Volunteers which were raised independently by various state governments. States also maintained full-time militias
which could also be called into the service of the army. The United States Army is made up of three components: the
active component, the Regular Army; and two reserve components, the Army National Guard and the Army Reserve. Both
reserve components are primarily composed of part-time soldiers who train once a month, known as battle assemblies
or unit training assemblies (UTAs), and conduct two to three weeks of annual training each year. Both the Regular
Army and the Army Reserve are organized under Title 10 of the United States Code, while the National Guard is organized
under Title 32. While the Army National Guard is organized, trained and equipped as a component of the U.S. Army,
when it is not in federal service it is under the command of individual state and territorial governors; the District
of Columbia National Guard, however, reports to the U.S. President, not the district's mayor, even when not federalized.
Any or all of the National Guard can be federalized by presidential order and against the governor's wishes. Following
their basic and advanced training at the individual-level, soldiers may choose to continue their training and apply
for an "additional skill identifier" (ASI). The ASI allows the army to take a wide-ranging MOS and focus it into
a more specific MOS. For example, a combat medic, whose duties are to provide pre-hospital emergency treatment, may
receive ASI training to become a cardiovascular specialist, a dialysis specialist, or even a licensed practical nurse.
For commissioned officers, ASI training includes pre-commissioning training either at USMA, or via ROTC, or by completing
OCS. After commissioning, officers undergo branch specific training at the Basic Officer Leaders Course, (formerly
called Officer Basic Course), which varies in time and location according to their future assignments. Further career
development is available through the Army Correspondence Course Program. The army has relied heavily on tents to
provide the various facilities needed while on deployment. The most common tent uses for the military are as temporary
barracks (sleeping quarters), DFAC buildings (dining facilities), forward operating bases (FOBs), after action review
(AAR), tactical operations center (TOC), morale, welfare, and recreation (MWR) facilities, and security checkpoints.
Furthermore, most of these tents are set up and operated through the support of Natick Soldier Systems Center. The
American Civil War was the costliest war for the U.S. in terms of casualties. After most slave states, located in
the southern U.S., formed the Confederate States, C.S. troops, led by former U.S. Army officers, mobilized a very
large fraction of Southern white manpower. Forces of the United States (the "Union" or "the North") formed the Union
Army consisting of a small body of regular army units and a large body of volunteer units raised from every state,
north and south, except South Carolina.[citation needed] Starting in 1910, the army began acquiring fixed-wing aircraft.
In 1910, Mexico was in the midst of a civil war, with peasant rebels fighting government soldiers. The army was deployed to American
towns near the border to ensure the safety of lives and property. In 1916, Pancho Villa, a major rebel leader, attacked
Columbus, New Mexico, prompting a U.S. intervention in Mexico until 7 February 1917. They fought the rebels and the
Mexican federal troops until 1918. The United States joined World War I in 1917 on the side of Britain, France, Russia,
Italy and other allies. U.S. troops were sent to the Western Front and were involved in the last offensives that
ended the war. With the armistice in November 1918, the army once again decreased its forces. During the 1960s the
Department of Defense continued to scrutinize the reserve forces and to question the number of divisions and brigades
as well as the redundancy of maintaining two reserve components, the Army National Guard and the Army Reserve. In
1967 Secretary of Defense Robert McNamara decided that 15 combat divisions in the Army National Guard were unnecessary
and cut the number to 8 divisions (1 mechanized infantry, 2 armored, and 5 infantry), but increased the number of
brigades from 7 to 18 (1 airborne, 1 armored, 2 mechanized infantry, and 14 infantry). The loss of the divisions
did not sit well with the states. Their objections included the inadequate maneuver element mix for those that remained
and the end to the practice of rotating divisional commands among the states that supported them. Under the proposal,
the remaining division commanders were to reside in the state of the division base. No reduction, however, in total
Army National Guard strength was to take place, which convinced the governors to accept the plan. The states reorganized
their forces accordingly between 1 December 1967 and 1 May 1968. On September 11, 2001, 53 Army civilians (47 employees
and six contractors) and 22 soldiers were among the 125 victims killed in the Pentagon in a terrorist attack when
American Airlines Flight 77, commandeered by five al-Qaeda hijackers, slammed into the western side of the building,
as part of the September 11 attacks. Lieutenant General Timothy Maude was the highest-ranking military official killed
at the Pentagon, and the most senior U.S. Army officer killed by foreign action since the death of Lieutenant General
Simon B. Buckner, Jr. on June 18, 1945, in the Battle of Okinawa during World War II. The army is also changing its
base unit from divisions to brigades. Division lineage will be retained, but the divisional headquarters will be
able to command any brigade, not just brigades that carry their divisional lineage. The central part of this plan
is that each brigade will be modular, i.e., all brigades of the same type will be exactly the same, and thus any
brigade can be commanded by any division. As specified before the 2013 end-strength re-definitions, the three major
types of ground combat brigades are: The army employs various individual weapons to provide light firepower at short
ranges. The most common weapon used by the army is the M4 carbine, a compact variant of the M16 rifle, along with
the 7.62×51mm variant of the FN SCAR used by Army Rangers. The primary sidearm in the U.S. Army is the 9 mm M9 pistol;
the M11 pistol is also used. Both handguns are to be replaced through the Modular Handgun System program. Soldiers
are also equipped with various hand grenades, such as the M67 fragmentation grenade and M18 smoke grenade. The army's
most common vehicle is the High Mobility Multipurpose Wheeled Vehicle (HMMWV), commonly called the Humvee, which
is capable of serving as a cargo/troop carrier, weapons platform, and ambulance, among many other roles. While they
operate a wide variety of combat support vehicles, one of the most common types centers on the family of HEMTT vehicles.
The M1A2 Abrams is the army's main battle tank, while the M2A3 Bradley is the standard infantry fighting vehicle.
Other vehicles include the Stryker, the M113 armored personnel carrier, and multiple types of Mine Resistant
Ambush Protected (MRAP) vehicles.
The German states proclaimed their union as the German Empire under the Prussian king, Wilhelm I, uniting Germany as a nation-state.
The Treaty of Frankfurt of 10 May 1871 gave Germany most of Alsace and some parts of Lorraine, which became the Imperial
territory of Alsace-Lorraine (Reichsland Elsaß-Lothringen). The German conquest of France and the unification of Germany
upset the European balance of power that had existed since the Congress of Vienna in 1815, and Otto von Bismarck
maintained great authority in international affairs for two decades. French determination to regain Alsace-Lorraine
and fear of another Franco-German war, along with British apprehension about the balance of power, became factors
in the causes of World War I. The Ems telegram had exactly the effect on French public opinion that Bismarck had
intended. "This text produced the effect of a red flag on the Gallic bull", Bismarck later wrote. Gramont, the French
foreign minister, declared that he felt "he had just received a slap". The leader of the monarchists in Parliament,
Adolphe Thiers, spoke for moderation, arguing that France had won the diplomatic battle and there was no reason for
war, but he was drowned out by cries that he was a traitor and a Prussian. Napoleon's new prime minister, Emile Ollivier,
declared that France had done all that it could humanly and honorably do to prevent the war, and that he accepted
the responsibility "with a light heart." A crowd of 15–20,000 people, carrying flags and patriotic banners, marched
through the streets of Paris, demanding war. On 19 July 1870 a declaration of war was sent to the Prussian government.
The southern German states immediately sided with Prussia. The army was still equipped with the Dreyse needle gun
of Battle of Königgrätz fame, which was by this time showing the age of its 25-year-old design. The rifle had a range
of only 600 m (2,000 ft) and lacked the rubber breech seal of the French Chassepot that permitted aimed shots at longer range. The deficiencies of the needle
gun were more than compensated for by the famous Krupp 6-pounder (3 kg) steel breech-loading cannons being issued
to Prussian artillery batteries. Firing a contact-detonated shell, the Krupp gun had a longer range and a higher
rate of fire than the French bronze muzzle loading cannon, which relied on faulty time fuses. The first action of
the Franco-Prussian War took place on 4 August 1870. This battle saw the unsupported division of General Douay of
I Corps, with some attached cavalry, which was posted to watch the border, attacked in overwhelming but uncoordinated
fashion by the German 3rd Army. During the day, elements of a Bavarian and two Prussian corps became engaged and
were aided by Prussian artillery, which blasted holes in the defenses of the town. Douay held a very strong position
initially, thanks to the accurate long-range fire of the Chassepots but his force was too thinly stretched to hold
it. Douay was killed in the late morning when a caisson of the divisional mitrailleuse battery exploded near him;
the encirclement of the town by the Prussians threatened the French avenue of retreat. The French were unaware of
German numerical superiority at the beginning of the battle as the German 2nd Army did not attack all at once. Treating
the oncoming attacks as merely skirmishes, Frossard did not request additional support from other units. By the time
he realized what kind of a force he was opposing, it was too late. Seriously flawed communications between Frossard
and those in reserve under Bazaine slowed matters down so much that by the time the reserves received orders to move out
to Spicheren, German soldiers from the 1st and 2nd armies had charged up the heights. Because the reserves had not
arrived, Frossard erroneously believed that he was in grave danger of being outflanked as German soldiers under General
von Glume were spotted in Forbach. Instead of continuing to defend the heights, by the close of battle after dusk
he retreated to the south. The German casualties were relatively high due to the advance and the effectiveness of
the Chassepot rifle. They were quite startled in the morning when they found out that their efforts were not
in vain—Frossard had abandoned his position on the heights. The Battle of Gravelotte, or Gravelotte–St. Privat (18
August), was the largest battle during the Franco-Prussian War. It was fought about 6 miles (9.7 km) west of Metz,
where on the previous day, having intercepted the French army's retreat to the west at the Battle of Mars-La-Tour,
the Prussians were now closing in to complete the destruction of the French forces. The combined German forces, under
Field Marshal Count Helmuth von Moltke, were the Prussian First and Second Armies of the North German Confederation
numbering about 210 infantry battalions, 133 cavalry squadrons, and 732 heavy cannons, totaling 188,332 officers and
men. The French Army of the Rhine, commanded by Marshal François-Achille Bazaine, numbering about 183 infantry battalions,
104 cavalry squadrons, backed by 520 heavy cannons, totaling 112,800 officers and men, dug in along high ground with
their southern left flank at the town of Rozerieulles, and their northern right flank at St. Privat. With the defeat
of Marshal Bazaine's Army of the Rhine at Gravelotte, the French were forced to retire to Metz, where they were besieged
by over 150,000 Prussian troops of the First and Second Armies. Napoleon III and MacMahon formed the new French Army
of Châlons, to march on to Metz to rescue Bazaine. Napoleon III personally led the army with Marshal MacMahon in
attendance. The Army of Châlons marched northeast towards the Belgian border to avoid the Prussians before striking
south to link up with Bazaine. The Prussians, under the command of Field Marshal Count Helmuth von Moltke, took advantage
of this maneuver to catch the French in a pincer grip. He left the Prussian First and Second Armies besieging Metz,
except three corps detached to form the Army of the Meuse under the Crown Prince of Saxony. With this army and the
Prussian Third Army, Moltke marched northward and caught up with the French at Beaumont on 30 August. After a sharp
fight in which they lost 5,000 men and 40 cannons, the French withdrew toward Sedan. Having reformed in the town,
the Army of Châlons was immediately isolated by the converging Prussian armies. Napoleon III ordered the army to
break out of the encirclement immediately. With MacMahon wounded on the previous day, General Auguste Ducrot took
command of the French troops in the field. When the war had begun, European public opinion heavily favored the Germans;
many Italians attempted to sign up as volunteers at the Prussian embassy in Florence and a Prussian diplomat visited
Giuseppe Garibaldi in Caprera. Bismarck's demand for the return of Alsace caused a dramatic shift in that sentiment
in Italy, which was best exemplified by the reaction of Garibaldi soon after the revolution in Paris, who told the
Movimento of Genoa on 7 September 1870 that "Yesterday I said to you: war to the death to Bonaparte. Today I say
to you: rescue the French Republic by every means." Garibaldi went to France and assumed command of the Army of the
Vosges, with which he operated around Dijon till the end of the war. On 10 October, hostilities began between German
and French republican forces near Orléans. At first, the Germans were victorious but the French drew reinforcements
and defeated the Germans at the Battle of Coulmiers on 9 November. After the surrender of Metz, more than 100,000
well-trained and experienced German troops joined the German 'Southern Army'. The French were forced to abandon Orléans
on 4 December, and were finally defeated at the Battle of Le Mans (10–12 January). A second French army which operated
north of Paris was turned back at the Battle of Amiens (27 November), the Battle of Bapaume (3 January 1871) and
the Battle of St. Quentin (13 January). Although public opinion in Paris was strongly against any form of surrender
or concession to the Prussians, the Government realised that it could not hold the city for much longer, and that
Gambetta's provincial armies would probably never break through to relieve Paris. President Trochu resigned on 25
January and was replaced by Favre, who signed the surrender two days later at Versailles, with the armistice coming
into effect at midnight. Several sources claim that in his carriage on the way back to Paris, Favre broke into tears,
and collapsed into his daughter's arms as the guns around Paris fell silent at midnight. At Tours, Gambetta received
word from Paris on 30 January that the Government had surrendered. Furious, he refused to surrender and launched
an immediate attack on German forces at Orléans which, predictably, failed. A delegation of Parisian diplomats arrived
in Tours by train on 5 February to negotiate with Gambetta, and the following day Gambetta stepped down and surrendered
control of the provincial armies to the Government of National Defence, which promptly ordered a cease-fire across
France. The quick German victory over the French stunned neutral observers, many of whom had expected a French victory
and most of whom had expected a long war. The strategic advantages possessed by the Germans were not appreciated
outside Germany until after hostilities had ceased. Other countries quickly discerned the advantages given to the
Germans by their military system, and adopted many of their innovations, particularly the General Staff, universal
conscription and highly detailed mobilization systems. The effect of these differences was accentuated by the pre-war
preparations. The Prussian General Staff had drawn up minutely detailed mobilization plans using the railway system,
which in turn had been partly laid out in response to recommendations of a Railway Section within the General Staff.
The French railway system, with multiple competing companies, had developed purely from commercial pressures and
many journeys to the front in Alsace and Lorraine involved long diversions and frequent changes between trains. Furthermore,
no system had been put in place for military control of the railways, and officers simply commandeered trains as
they saw fit. Rail sidings and marshalling yards became choked with loaded wagons, with nobody responsible for unloading
them or directing them to the destination. At the Battle of Mars-la-Tour, the Prussian 12th Cavalry Brigade, commanded
by General Adalbert von Bredow, conducted a charge against a French artillery battery. The attack was a costly success
and came to be known as "von Bredow's Death Ride", which was held to prove that cavalry charges could still prevail
on the battlefield. Use of traditional cavalry on the battlefields of 1914 proved to be disastrous, due to accurate,
long-range rifle fire, machine-guns and artillery. Von Bredow's attack had succeeded only because of an unusually
effective artillery bombardment just before the charge, along with favorable terrain that masked his approach. In
the Prussian province of Posen, with a large Polish population, there was strong support for the French and angry
demonstrations at news of Prussian-German victories—a clear manifestation of Polish nationalist feeling. Calls were
also made for Polish recruits to desert from the Prussian Army—though these went mainly unheeded. An alarming report
on the Posen situation, sent to Bismarck on 16 August 1870, led to the quartering of reserve troop contingents in
the restive province. The Franco-Prussian War thus turned out to be a significant event also in German–Polish relations,
marking the beginning of a prolonged period of repressive measures by the authorities and efforts at Germanisation.
The causes of the Franco-Prussian War are deeply rooted in the events surrounding the unification of Germany. In
the aftermath of the Austro–Prussian War of 1866, Prussia had annexed numerous territories and formed the North German
Confederation. This new power destabilized the European balance of power established by the Congress of Vienna in
1815 after the Napoleonic Wars. Napoleon III, then the emperor of France, demanded compensations in Belgium and on
the left bank of the Rhine to secure France's strategic position, which the Prussian chancellor, Otto von Bismarck,
flatly refused. Prussia then turned its attention towards the south of Germany, where it sought to incorporate the
southern German kingdoms, Bavaria, Württemberg, Baden and Hesse-Darmstadt, into a unified Prussia-dominated Germany.
France was strongly opposed to any further alliance of German states, which would have significantly strengthened
the Prussian military. The French Army consisted in peacetime of approximately 400,000 soldiers, some of them regulars,
others conscripts who until 1869 served the comparatively long period of seven years with the colours. Some of them
were veterans of previous French campaigns in the Crimean War, Algeria, the Franco-Austrian War in Italy, and in
the Franco-Mexican War. However, following the "Seven Weeks War" between Prussia and Austria four years earlier,
it had been calculated that the French Army could field only 288,000 men to face the Prussian Army when perhaps 1,000,000
would be required. Under Marshal Adolphe Niel, urgent reforms were made. Universal conscription (rather than by ballot,
as previously) and a shorter period of service gave increased numbers of reservists, who would swell the army to
a planned strength of 800,000 on mobilisation. Those who for any reason were not conscripted were to be enrolled
in the Garde Mobile, a militia with a nominal strength of 400,000. However, the Franco-Prussian War broke out before
these reforms could be completely implemented. The mobilisation of reservists was chaotic and resulted in large numbers
of stragglers, while the Garde Mobile were generally untrained and often mutinous. The Prussian army was controlled
by the General Staff, under Field Marshal Helmuth von Moltke. The Prussian army was unique in Europe for having the
only such organisation in existence, whose purpose in peacetime was to prepare the overall war strategy, and in wartime
to direct operational movement and organise logistics and communications. The officers of the General Staff were
hand-picked from the Prussian Kriegsakademie (War Academy). Moltke embraced new technology, particularly the railroad
and telegraph, to coordinate and accelerate mobilisation of large forces. General Frossard's II Corps and Marshal
Bazaine's III Corps crossed the German border on 2 August, and began to force the Prussian 40th Regiment of the 16th
Infantry Division from the town of Saarbrücken with a series of direct attacks. The Chassepot rifle proved its worth
against the Dreyse rifle, with French riflemen regularly outdistancing their Prussian counterparts in the skirmishing
around Saarbrücken. However, the Prussians resisted strongly, and the French suffered 86 casualties to the Prussians' 83. Saarbrücken also proved to be a major obstacle in terms of logistics. Only one railway there led to
the German hinterland but could be easily defended by a single force, and the only river systems in the region ran
along the border instead of inland. While the French hailed the invasion as the first step towards the Rhineland
and later Berlin, General Le Bœuf and Napoleon III were receiving alarming reports from foreign news sources of Prussian
and Bavarian armies massing to the southeast in addition to the forces to the north and northeast. According to some
historians, Bismarck adroitly created a diplomatic crisis over the succession to the Spanish throne, then edited
a dispatch about a meeting between King William of Prussia and the French ambassador, to make it appear that the
French had been insulted. The French press and parliament demanded a war, which the generals of Napoleon III assured him France would win. Napoleon and his Prime Minister, Émile Ollivier, for their part sought war to solve their
problems with political disunity in France. On 16 July 1870, the French parliament voted to declare war on the German
Kingdom of Prussia and hostilities began three days later. The German coalition mobilised its troops much more quickly
than the French and rapidly invaded northeastern France. The German forces were superior in numbers, had better training
and leadership and made more effective use of modern technology, particularly railroads and artillery. The fighting
within the town had become extremely intense, a door-to-door battle of survival. Despite incessant attacks by Prussian infantry, the soldiers of the 2nd Division kept to their positions. The people of the town of
Wissembourg finally surrendered to the Germans. The French troops who did not surrender retreated westward, leaving
behind 1,000 dead and wounded and another 1,000 prisoners and all of their remaining ammunition. The final attack
by the Prussian troops also cost c. 1,000 casualties. The German cavalry then failed to pursue the French and lost
touch with them. The attackers had an initial superiority of numbers and a broad deployment which made envelopment highly likely, but the effectiveness of French Chassepot rifle-fire inflicted costly repulses on infantry attacks until
the French infantry had been extensively bombarded by the Prussian artillery. The Franco-Prussian War or Franco-German
War (German: Deutsch-Französischer Krieg, lit. German-French War, French: Guerre franco-allemande, lit. Franco-German
War), often referred to in France as the War of 1870 (19 July 1870 – 10 May 1871), was a conflict between the Second
French Empire and the German states of the North German Confederation led by the Kingdom of Prussia. The conflict
was caused by Prussian ambitions to extend German unification. Some historians argue that the Prussian chancellor
Otto von Bismarck planned to provoke a French attack in order to draw the southern German states—Baden, Württemberg,
Bavaria and Hesse-Darmstadt—into an alliance with the North German Confederation dominated by Prussia, while others
contend that Bismarck did not plan anything and merely exploited the circumstances as they unfolded. The immediate
cause of the war resided in the candidacy of Leopold of Hohenzollern-Sigmaringen, a Prussian prince, to the throne
of Spain. France feared encirclement by an alliance between Prussia and Spain. The Hohenzollern prince's candidacy
was withdrawn under French diplomatic pressure, but Otto von Bismarck goaded the French into declaring war by altering
a telegram sent by William I. Releasing the Ems Dispatch to the public, Bismarck made it sound as if the king had
treated the French envoy in a demeaning fashion, which inflamed public opinion in France. The Battle of Wörth (also
known as Fröschwiller or Reichshoffen) began when the two armies clashed again on 6 August near Wörth in the town
of Fröschwiller, about 10 miles (16 km) from Wissembourg. The Crown Prince of Prussia's 3rd army had, thanks to the quick reaction of his Chief of Staff, General von Blumenthal, drawn reinforcements which brought its strength up to 140,000
troops. The French had been slowly reinforced and their force numbered only 35,000. Although badly outnumbered, the
French defended their position just outside Fröschwiller. By afternoon, the Germans had suffered c. 10,500 killed
or wounded and the French had suffered a similar number of casualties, with another c. 9,200 men taken prisoner, a loss
of about 50%. The Germans captured Fröschwiller which sat on a hilltop in the centre of the French line. Having lost
any hope for victory and facing a massacre, the French army disengaged and retreated in a westerly direction towards
Bitche and Saverne, hoping to join French forces on the other side of the Vosges mountains. The German 3rd army did
not pursue the French but remained in Alsace and moved slowly south, attacking and destroying the French garrisons
in the vicinity. In Prussia, some officials considered a war against France both inevitable and necessary to arouse
German nationalism in those states that would allow the unification of a great German empire. This aim was epitomized
by Prussian Chancellor Otto von Bismarck's later statement: "I did not doubt that a Franco-German war must take place
before the construction of a United Germany could be realised." Bismarck also knew that France should be the aggressor
in the conflict to bring the southern German states to side with Prussia, hence giving Germans numerical superiority.
Many Germans also viewed the French as the traditional destabilizer of Europe, and sought to weaken France to prevent
further breaches of the peace. On 18 August, the battle began when at 08:00 Moltke ordered the First and Second Armies
to advance against the French positions. By 12:00, General Manstein opened up the battle before the village of Amanvillers
with artillery from the 25th Infantry Division. But the French had spent the night and early morning digging trenches
and rifle pits while placing their artillery and their mitrailleuses in concealed positions. Finally aware of the
Prussian advance, the French opened up a massive return fire against the mass of advancing Germans. The battle at
first appeared to favor the French with their superior Chassepot rifle. However, the Prussian artillery was superior
with the all-steel Krupp breech-loading gun. By 14:30, General Steinmetz, the commander of the First Army, unilaterally
launched his VIII Corps across the Mance Ravine in which the Prussian infantry were soon pinned down by murderous
rifle and mitrailleuse fire from the French positions. At 15:00, the massed guns of the VII and VIII Corps opened
fire to support the attack. But by 16:00, with the attack in danger of stalling, Steinmetz ordered the VII Corps
forward, followed by the 1st Cavalry Division. French infantry were equipped with the breech-loading Chassepot rifle,
one of the most modern mass-produced firearms in the world at the time. With a rubber ring seal and a smaller bullet,
the Chassepot had a maximum effective range of some 1,500 metres (4,900 ft) with a short reloading time. French tactics
emphasised the defensive use of the Chassepot rifle in trench-warfare style fighting—the so-called feu de bataillon.
The artillery was equipped with rifled, muzzle-loaded La Hitte guns. The army also possessed a precursor to the machine-gun:
the mitrailleuse, which could unleash significant, concentrated firepower but nevertheless lacked range and was comparatively
immobile, and thus prone to being easily overrun. The mitrailleuse was mounted on an artillery gun carriage and grouped
in batteries in a similar fashion to cannon. On 1 September 1870, the battle opened with the Army of Châlons, with
202 infantry battalions, 80 cavalry squadrons and 564 guns, attacking the surrounding Prussian Third and Meuse Armies
totaling 222 infantry battalions, 186 cavalry squadrons and 774 guns. General De Wimpffen, the commander of the French
V Corps in reserve, hoped to launch a combined infantry and cavalry attack against the Prussian XI Corps. But by
11:00, Prussian artillery took a toll on the French while more Prussian troops arrived on the battlefield. The French
cavalry, commanded by General Marguerite, launched three desperate attacks on the nearby village of Floing where
the Prussian XI Corps was concentrated. Marguerite was killed leading the very first charge and the two additional
charges led to nothing but heavy losses. By the end of the day, with no hope of breaking out, Napoleon III called
off the attacks. The French lost over 17,000 men, killed or wounded, with 21,000 captured. The Prussians reported
their losses at 2,320 killed, 5,980 wounded and 700 captured or missing. By the next day, on 2 September, Napoleon
III surrendered and was taken prisoner with 104,000 of his soldiers. It was an overwhelming victory for the Prussians,
for they not only captured an entire French army, but the leader of France as well. The defeat of the French at Sedan
had decided the war in Prussia's favour. One French army was now immobilised and besieged in the city of Metz, and
no other forces stood on French ground to prevent a German invasion. Nevertheless, the war would continue. A pre-war
plan laid out by the late Marshal Niel called for a strong French offensive from Thionville towards Trier and into
the Prussian Rhineland. This plan was discarded in favour of a defensive plan by Generals Charles Frossard and Barthélemy
Lebrun, which called for the Army of the Rhine to remain in a defensive posture near the German border and repel
any Prussian offensive. As Austria, along with Bavaria, Württemberg and Baden, was expected to join in a revenge war
against Prussia, I Corps would invade the Bavarian Palatinate and proceed to "free" the South German states in concert
with Austro-Hungarian forces. VI Corps would reinforce either army as needed. On the French side, planning after
the disaster at Wissembourg had become essential. General Le Bœuf, flushed with anger, was intent upon going on the
offensive over the Saar and countering their loss. However, planning for the next encounter was more based upon the
reality of unfolding events rather than emotion or pride, as Intendant General Wolff told him and his staff that
supply beyond the Saar would be impossible. Therefore, the armies of France would take up a defensive position that
would protect against every possible attack point but also leave the armies unable to support each other. Following
the Army of the Loire's defeats, Gambetta turned to General Faidherbe's Army of the North. The army had achieved
several small victories at towns such as Ham, La Hallue, and Amiens and was protected by the belt of fortresses in
northern France, allowing Faidherbe's men to launch quick attacks against isolated Prussian units, then retreat behind
the fortresses. Despite access to the armaments factories of Lille, the Army of the North suffered from severe supply
difficulties, which depressed morale. In January 1871, Gambetta forced Faidherbe to march his army beyond the fortresses
and engage the Prussians in open battle. The army was severely weakened by low morale, supply problems, the terrible
winter weather and low troop quality, whilst General Faidherbe was unable to command due to his poor health, the
result of decades of campaigning in West Africa. At the Battle of St. Quentin, the Army of the North suffered a crushing
defeat and was scattered, releasing thousands of Prussian soldiers to be relocated to the East. Despite odds of four
to one, the Prussian III Corps launched a risky attack. The French were routed and the III Corps captured Vionville, blocking
any further escape attempts to the west. Once blocked from retreat, the French in the fortress of Metz had no choice
but to engage in a fight that would see the last major cavalry engagement in Western Europe. The battle soon erupted,
and III Corps was shattered by incessant cavalry charges, losing over half its soldiers. The German Official History
recorded 15,780 casualties and French casualties of 13,761 men. With the defeat of the First Army, Prince Frederick
Charles ordered a massed artillery attack against Canrobert's position at St. Privat to prevent the Guards' attack from failing as well. At 19:00 the 3rd Division of Fransecky's II Corps of the Second Army advanced across the ravine while
the XII Corps cleared out the nearby town of Roncourt and with the survivors of the 1st Guards Infantry Division
launched a fresh attack against the ruins of St. Privat. At 20:00, with the arrival of the Prussian 4th Infantry Division of the II Corps and the Prussian right flank anchored on the Mance Ravine, the line stabilised. By then, the Prussians of the 1st Guards Infantry Division and the XII and II Corps had captured St. Privat, forcing the decimated French forces
to withdraw. With the Prussians exhausted from the fighting, the French were now able to mount a counter-attack.
General Bourbaki, however, refused to commit the reserves of the French Old Guard to the battle because, by that
time, he considered the overall situation a 'defeat'. By 22:00, firing largely died down across the battlefield for
the night. The next morning, the French Army of the Rhine, rather than resume the battle with an attack of its own
against the battle-weary German armies, retreated to Metz where they were besieged and forced to surrender two months
later. When the war began, the French government ordered a blockade of the North German coasts, which the small North
German navy (Norddeutsche Bundesmarine) with only five ironclads could do little to oppose. For most of the war,
the three largest German ironclads were out of service with engine troubles; only the turret ship SMS Arminius was
available to conduct operations. By the time engine repairs had been completed, the French fleet had already departed.
The blockade proved only partially successful due to crucial oversights by the planners in Paris. Reservists who were supposed to be at the ready in case of war were instead working in the Newfoundland fisheries or in Scotland. Only
part of the 470-ship French Navy put to sea on 24 July. Before long, the French navy ran short of coal, needing 200
short tons (180 t) per day and having a bunker capacity in the fleet of only 250 short tons (230 t). A blockade of
Wilhelmshaven failed and conflicting orders about operations in the Baltic Sea or a return to France, made the French
naval efforts futile. Spotting a blockade-runner became unwelcome because of the question du charbon (the coal question); pursuit of
Prussian ships quickly depleted the coal reserves of the French ships. The Battle of Spicheren, on 5 August, was
the second of three critical French defeats. Moltke had originally planned to keep Bazaine's army on the Saar River
until he could attack it with the 2nd Army in front and the 1st Army on its left flank, while the 3rd Army closed
towards the rear. The aging General von Steinmetz made an overzealous, unplanned move, leading the 1st Army south
from his position on the Moselle. He moved straight toward the town of Spicheren, cutting off Prince Frederick Charles
from his forward cavalry units in the process. The Prussian General Staff developed by Moltke proved to be extremely
effective, in contrast to the traditional French school. This was in large part due to the fact that the Prussian
General Staff was created to study previous Prussian operations and learn to avoid mistakes. The structure also greatly
strengthened Moltke's ability to control large formations spread out over significant distances. The Chief of the
General Staff, effectively the commander in chief of the Prussian army, was independent of the minister of war and
answered only to the monarch. The French General Staff—along with those of every other European military—was little
better than a collection of assistants for the line commanders. This disorganization hampered the French commanders'
ability to exercise control of their forces. The Germans expected to negotiate an end to the war but immediately
ordered an advance on Paris; by 15 September Moltke issued the orders for an investment of Paris and on 20 September
the encirclement was complete. Bismarck met Favre on 18 September at the Château de Ferrières and demanded a frontier
immune to a French war of revenge, which included Strasbourg, Alsace and most of the Moselle department in Lorraine
of which Metz was the capital. In return for an armistice for the French to elect a National Assembly, Bismarck demanded
the surrender of Strasbourg and the fortress city of Toul. To allow supplies into Paris, one of the perimeter forts
had to be handed over. Favre was unaware that the real aim of Bismarck in making such extortionate demands was to
establish a durable peace on the new western frontier of Germany, preferably by a peace with a friendly government,
on terms acceptable to French public opinion. An impregnable military frontier was an inferior alternative to him,
favoured only by the militant nationalists on the German side. By 16:50, with the Prussian southern attacks in danger
of breaking up, the Prussian 3rd Guards Infantry Brigade of the Second Army opened an attack against the French positions
at St. Privat which were commanded by General Canrobert. At 17:15, the Prussian 4th Guards Infantry Brigade joined
the advance followed at 17:45 by the Prussian 1st Guards Infantry Brigade. All of the Prussian Guard attacks were
pinned down by lethal French gunfire from the rifle pits and trenches. At 18:15 the Prussian 2nd Guards Infantry
Brigade, the last of the 1st Guards Infantry Division, was committed to the attack on St. Privat while Steinmetz
committed the last of the reserves of the First Army across the Mance Ravine. By 18:30, a considerable portion of
the VII and VIII Corps disengaged from the fighting and withdrew towards the Prussian positions at Rezonville. The
Prussian Army, under the terms of the armistice, held a brief victory parade in Paris on 17 February; the city was
silent and draped with black and the Germans quickly withdrew. Bismarck honoured the armistice by allowing train
loads of food into Paris and withdrawing Prussian forces to the east of the city, prior to a full withdrawal once
France agreed to pay a five billion franc war indemnity. At the same time, Prussian forces were concentrated in the
provinces of Alsace and Lorraine. An exodus occurred from Paris as some 200,000 people, predominantly middle-class,
went to the countryside. When the news arrived at Paris of the surrender at Sedan of Napoleon III and 80,000 men,
the Second Empire was overthrown by a popular uprising in Paris, which forced the proclamation of a Provisional Government
and a Third Republic by General Trochu, Favre and Gambetta at Paris on 4 September, the new government calling itself
the Government of National Defence. After the German victory at Sedan, most of the French standing army was either
besieged in Metz or prisoner of the Germans, who hoped for an armistice and an end to the war. Bismarck wanted an
early peace but had difficulty in finding a legitimate French authority with which to negotiate. The Government of
National Defence had no electoral mandate, the Emperor was a captive and the Empress in exile but there had been
no abdication de jure and the army was still bound by an oath of allegiance to the defunct imperial régime. A series
of swift Prussian and German victories in eastern France, culminating in the Siege of Metz and the Battle of Sedan,
saw the army of the Second Empire decisively defeated (Napoleon III had been captured at Sedan on 2 September). A
Government of National Defence declared the Third Republic in Paris on 4 September and continued the war; for
another five months, the German forces fought and defeated new French armies in northern France. Following the Siege
of Paris, the capital fell on 28 January 1871 and then a revolutionary uprising called the Paris Commune seized power
in the capital and held it for two months, until it was bloodily suppressed by the regular French army at the end
of May 1871. Some historians argue that Napoleon III also sought war, particularly after his diplomatic defeat of 1866, when he had failed to leverage any benefits from the Austro-Prussian War, and that he believed he would win a conflict with Prussia. They
also argue that he wanted a war to resolve growing domestic political problems. Other historians, notably French
historian Pierre Milza, dispute this. On 8 May 1870, shortly before the war, French voters had overwhelmingly supported
Napoleon III's program in a national plebiscite, with 7,358,000 votes yes against 1,582,000 votes no, an increase
of support of two million votes since the legislative elections in 1869. According to Milza, the Emperor had no need
for a war to increase his popularity. To relieve pressure from the expected German attack into Alsace-Lorraine, Napoleon
III and the French high command planned a seaborne invasion of northern Germany as soon as war began. The French
expected the invasion to divert German troops and to encourage Denmark to join in the war, with its 50,000-strong
army and the Royal Danish Navy. It was discovered that Prussia had recently built defences around the big North German
ports, including coastal artillery batteries with Krupp heavy artillery, which, with a range of 4,000 yards (3,700
m), had double the range of French naval guns. The French Navy lacked the heavy guns to engage the coastal defences
and the topography of the Prussian coast made a seaborne invasion of northern Germany impossible. The Prussian Army
was composed not of regulars but conscripts. Service was compulsory for all men of military age, and thus Prussia
and its North and South German allies could mobilise and field some 1,000,000 soldiers in time of war. German tactics
emphasised encirclement battles like Cannae and using artillery offensively whenever possible. Rather than advancing
in a column or line formation, Prussian infantry moved in small groups that were harder to target by artillery or
French defensive fire. The sheer number of soldiers available made encirclement en masse and destruction of French
formations relatively easy. In addition, the Prussian military education system was superior to the French model;
Prussian staff officers were trained to exhibit initiative and independent thinking. Indeed, this was Moltke's expectation.
The French, meanwhile, suffered from an education and promotion system that stifled intellectual development. According
to the military historian Dallas Irvine, the system "was almost completely effective in excluding the army's brain
power from the staff and high command. To the resulting lack of intelligence at the top can be ascribed all the inexcusable
defects of French military policy." The French breech-loading rifle, the Chassepot, had a far longer range than the
German needle gun; 1,500 yards (1,400 m) compared to 600 yd (550 m). The French also had an early machine-gun type
weapon, the mitrailleuse, which could fire its thirty-seven barrels at a range of around 1,200 yd (1,100 m). It was
developed in such secrecy that little training with the weapon had occurred, leaving French gunners with no experience;
the gun was treated like artillery and in this role it was ineffective. Worse still, once the small number of soldiers
who had been trained how to use the new weapon became casualties, there were no replacements who knew how to operate
the mitrailleuse. The French Marines and naval infantry intended for the invasion of northern Germany were dispatched
to reinforce the French Army of Châlons and fell into captivity at Sedan along with Napoleon III. A shortage of officers,
following the capture of most of the professional French army at the Siege of Metz and at the Battle of Sedan, led
naval officers to be sent from their ships to command hastily assembled reservists of the Garde Mobile. As the autumn
storms of the North Sea forced the return of more of the French ships, the blockade of the north German ports diminished
and in September 1870 the French navy abandoned the blockade for the winter. The rest of the navy retired to ports
along the English Channel and remained in port for the rest of the war. Marshal MacMahon, now closest to Wissembourg,
spread his four divisions over 20 miles (32 km) to react to any Prussian invasion. This organization of forces was
due to a lack of supplies, forcing each division to seek out basic provisions along with the representatives of the
army supply arm that was supposed to aid them. What made a bad situation much worse was the conduct of General Auguste-Alexandre
Ducrot, commander of the 1st Division. He told General Abel Douay, commander of the 2nd Division, on 1 August that
"The information I have received makes me suppose that the enemy has no considerable forces very near his advance
posts, and has no desire to take the offensive". Two days later, he told MacMahon that he had not found "a single
enemy post ... it looks to me as if the menace of the Bavarians is simply bluff". Even though Ducrot shrugged off
the possibility of an attack by the Germans, MacMahon tried to warn the other divisions of his army, without success.
During the war, the Paris National Guard, particularly in the working-class neighbourhoods of Paris, had become highly
politicised and units elected officers; many refused to wear uniforms or obey commands from the national government.
National guard units tried to seize power in Paris on 31 October 1870 and 22 January 1871. On 18 March 1871, when
the regular army tried to remove cannons from an artillery park on Montmartre, National Guard units resisted and
killed two army generals. The national government and regular army forces retreated to Versailles and a revolutionary
government was proclaimed in Paris. A Commune was elected, which was dominated by socialists, anarchists and revolutionaries.
The red flag replaced the French tricolour and a civil war began between the Commune and the regular army, which
attacked and recaptured Paris from 21 to 28 May in La Semaine Sanglante (Bloody Week). While the French army under General
MacMahon engaged the German 3rd Army at the Battle of Wörth, the German 1st Army under Steinmetz finished their advance
west from Saarbrücken. A patrol from the German 2nd Army under Prince Friedrich Karl of Prussia spotted decoy fires
close by and Frossard's army farther off on a distant plateau south of the town of Spicheren, and took this as a sign
of Frossard's retreat. Ignoring Moltke's plan again, both German armies attacked Frossard's French 2nd Corps, fortified
between Spicheren and Forbach. The casualties were severe, especially for the attacking Prussian forces. A grand
total of 20,163 German troops were killed, wounded or missing in action during the August 18 battle. The French losses
were 7,855 killed and wounded along with 4,420 prisoners of war (half of them were wounded) for a total of 12,275.
While most of the Prussians fell under the French Chassepot rifles, most French fell under the Prussian Krupp shells.
In a breakdown of the casualties, Frossard's II Corps of the Army of the Rhine suffered 621 casualties while inflicting
4,300 casualties on the Prussian First Army under Steinmetz before the Pointe du Jour. The Prussian Guards Infantry
Divisions losses were even more staggering with 8,000 casualties out of 18,000 men. The Special Guards Jäger lost
19 officers, a surgeon and 431 men out of a total of 700. The 2nd Guards Infantry Brigade lost 39 officers and 1,076
men. The 3rd Guards Infantry Brigade lost 36 officers and 1,060 men. On the French side, the units holding St. Privat
lost more than half their number in the village. While the republican government was amenable to war reparations
or ceding colonial territories in Africa or in South East Asia to Prussia, Favre, on behalf of the Government of National
Defense, declared on 6 September that France would not "yield an inch of its territory nor a stone of its fortresses."
The republic then renewed the declaration of war, called for recruits in all parts of the country and pledged to
drive the German troops out of France by a guerre à outrance. Under these circumstances, the Germans had to continue
the war, yet could not pin down any proper military opposition in their vicinity. As the bulk of the remaining French
armies were digging in near Paris, the German leaders decided to put pressure on the enemy by attacking Paris.
By September 15, German troops reached the outskirts of the fortified city. On September 19, the Germans surrounded
it and erected a blockade, as already established at Metz. Albrecht von Roon, the Prussian Minister of War from 1859
to 1873, put into effect a series of reforms of the Prussian military system in the 1860s. Among these were two major
reforms that substantially increased the military power of Germany. The first was a reorganization of the army that
integrated the regular army and the Landwehr reserves. The second was the provision for the conscription of every
male Prussian of military age in the event of mobilization. Thus, despite the population of France being greater
than the population of all of the German states that participated in the war, the Germans mobilized more soldiers
for battle. The French were equipped with bronze, rifled muzzle-loading artillery, while the Prussians used new steel
breech-loading guns, which had a far longer range and a faster rate of fire. Prussian gunners strove for a high rate
of fire, which was discouraged in the French army in the belief that it wasted ammunition. In addition, the Prussian
artillery batteries had 30% more guns than their French counterparts. The Prussian guns typically opened fire at
a range of 2–3 kilometres (1.2–1.9 mi), beyond the range of French artillery or the Chassepot rifle. The Prussian
batteries could thus destroy French artillery with impunity, before being moved forward to directly support infantry
attacks. On 28 January 1871 the Government of National Defence based in Paris negotiated an armistice with the Prussians.
With Paris starving, and Gambetta's provincial armies reeling from one disaster after another, French foreign minister
Favre went to Versailles on 24 January to discuss peace terms with Bismarck. Bismarck agreed to end the siege and
allow food convoys to immediately enter Paris (including trains carrying millions of German army rations), on condition
that the Government of National Defence surrender several key fortresses outside Paris to the Prussians. Without
the forts, the French Army would no longer be able to defend Paris. During the fighting, the Communards killed c. 500
people, including the Archbishop of Paris, and burned down many government buildings, including the Tuileries Palace
and the Hotel de Ville. Communards captured with weapons were routinely shot by the army and Government troops killed
from 7,000–30,000 Communards in the fighting and in massacres of men, women, and children during and after the Commune.
More recent histories, based on studies of the number buried in Paris cemeteries and in mass graves after the fall
of the Commune, put the number killed at between 6,000 and 10,000. Twenty-six courts were established to try more
than 40,000 people who had been arrested, which took until 1875 and imposed 95 death sentences, of which 23 were
inflicted. Forced labour for life was imposed on 251 people, 1,160 people were transported to "a fortified place"
and 3,417 people were transported. About 20,000 Communards were held in prison hulks until released in 1872 and a
great many Communards fled abroad to England, Switzerland, Belgium or the United States. The survivors were amnestied
by a bill introduced by Gambetta in 1880 and allowed to return. At the outset of the Franco-Prussian War, 462,000
German soldiers concentrated on the French frontier while only 270,000 French soldiers could be moved to face them,
the French army having lost 100,000 stragglers to poor planning and administration before a shot was fired.
This was partly due to the peacetime organisations of the armies. Each Prussian Corps was based within a Kreis (literally
"circle") around the chief city in an area. Reservists rarely lived more than a day's travel from their regiment's
depot. By contrast, French regiments generally served far from their depots, which in turn were not in the areas
of France from which their soldiers were drawn. Reservists often faced several days' journey to report to their depots,
and then another long journey to join their regiments. Large numbers of reservists choked railway stations, vainly
seeking rations and orders. The events of the Franco-Prussian War had great influence on military thinking over the
next forty years. Lessons drawn from the war included the need for a general staff system, the scale and duration
of future wars and the tactical use of artillery and cavalry. The bold use of artillery by the Prussians, to silence
French guns at long range and then to directly support infantry attacks at close range, proved to be superior to
the defensive doctrine employed by French gunners. The Prussian tactics were adopted by European armies by 1914,
exemplified in the French 75, an artillery piece optimised to provide direct fire support to advancing infantry.
Most European armies ignored the evidence of the Russo-Japanese War of 1904–05 which suggested that infantry armed
with new smokeless-powder rifles could engage gun crews effectively. This forced gunners to fire at longer range
using indirect fire, usually from a position of cover. The creation of a unified German Empire ended the balance
of power that had been created with the Congress of Vienna after the end of the Napoleonic Wars. Germany had established
itself as the main power in continental Europe with the most powerful and professional army in the world.[citation
needed] Although Great Britain remained the dominant world power, British involvement in European affairs during
the late 19th century was very limited, allowing Germany to exercise great influence over the European mainland.[citation
needed] Moreover, the Crown Prince's marriage to the daughter of Queen Victoria was only the most prominent of several
German–British relationships.
Puberty occurs through a long process and begins with a surge in hormone production, which in turn causes a number of physical
changes. It is the stage of life characterized by the appearance and development of secondary sex characteristics
(for example, a deeper voice and larger Adam's apple in boys, and development of breasts and more curved and prominent
hips in girls) and a strong shift in hormonal balance towards an adult state. This is triggered by the pituitary
gland, which secretes a surge of hormonal agents into the bloodstream, initiating a chain reaction. The
male and female gonads are subsequently activated, which puts them into a state of rapid growth and development;
the triggered gonads now commence the mass production of the necessary chemicals. The testes primarily release testosterone,
and the ovaries predominantly dispense estrogen. The production of these hormones increases gradually until sexual
maturation is met. Some boys may develop gynecomastia due to an imbalance of sex hormones, tissue responsiveness
or obesity. A thorough understanding of adolescence in society depends on information from various perspectives,
including psychology, biology, history, sociology, education, and anthropology. Within all of these perspectives,
adolescence is viewed as a transitional period between childhood and adulthood, whose cultural purpose is the preparation
of children for adult roles. It is a period of multiple transitions involving education, training, employment and
unemployment, as well as transitions from one living circumstance to another. Pubertal development also affects circulatory
and respiratory systems as an adolescent's heart and lungs increase in both size and capacity. These changes lead
to increased strength and tolerance for exercise. Sex differences are apparent as males tend to develop "larger hearts
and lungs, higher systolic blood pressure, a lower resting heart rate, a greater capacity for carrying oxygen to
the blood, a greater power for neutralizing the chemical products of muscular exercise, higher blood hemoglobin and
more red blood cells". The human brain is not fully developed by the time a person reaches puberty. Between the ages
of 10 and 25, the brain undergoes changes that have important implications for behavior (see Cognitive development
below). The brain reaches 90% of its adult size by the time a person is six years of age. Thus, the brain does not
grow in size much during adolescence. However, the creases in the brain continue to become more complex until the
late teens. The biggest changes in the folds of the brain during this time occur in the parts of the cortex that
process cognitive and emotional information. Serotonin is a neuromodulator involved in regulation of mood and behavior.
Development in the limbic system plays an important role in determining rewards and punishments and processing emotional
experience and social information. Changes in the levels of the neurotransmitters dopamine and serotonin in the limbic
system make adolescents more emotional and more responsive to rewards and stress. The corresponding increase in emotional
variability also can increase adolescents' vulnerability. The effect of serotonin is not limited to the limbic system:
Several serotonin receptors have their gene expression change dramatically during adolescence, particularly in the
human frontal and prefrontal cortex. Adolescents' thinking is less bound to concrete events than that of children:
they can contemplate possibilities outside the realm of what currently exists. One manifestation of the adolescent's
increased facility with thinking about possibilities is the improvement of skill in deductive reasoning, which leads
to the development of hypothetical thinking. This provides the ability to plan ahead, see the future consequences
of an action and to provide alternative explanations of events. It also makes adolescents more skilled debaters,
as they can reason against a friend's or parent's assumptions. Adolescents also develop a more sophisticated understanding
of probability. The timing of puberty can have important psychological and social consequences. Early maturing boys
are usually taller and stronger than their friends. They have the advantage in capturing the attention of potential
partners and in becoming hand-picked for sports. Pubescent boys often tend to have a good body image, are more confident,
secure, and more independent. Late maturing boys can be less confident because of poor body image when comparing
themselves to already developed friends and peers. However, early puberty is not always positive for boys; early
sexual maturation in boys can be accompanied by increased aggressiveness due to the surge of hormones that affect
them. Because they appear older than their peers, pubescent boys may face increased social pressure to conform to
adult norms; society may view them as more emotionally advanced, despite the fact that their cognitive and social
development may lag behind their appearance. Studies have shown that early maturing boys are more likely to be sexually
active and are more likely to participate in risky behaviors. Wisdom, or the capacity for insight and judgment that
is developed through experience, increases between the ages of fourteen and twenty-five, then levels off. Thus, it
is during the adolescence-adulthood transition that individuals acquire the type of wisdom that is associated with
age. Wisdom is not the same as intelligence: adolescents do not improve substantially on IQ tests since their scores
are relative to others in their same age group, and relative standing usually does not change—everyone matures at
approximately the same rate in this way. In studying adolescent development, adolescence can be defined biologically,
as the physical transition marked by the onset of puberty and the termination of physical growth; cognitively, as
changes in the ability to think abstractly and multi-dimensionally; or socially, as a period of preparation for adult
roles. Major pubertal and biological changes include changes to the sex organs, height, weight, and muscle mass,
as well as major changes in brain structure and organization. Cognitive advances encompass both increases in knowledge
and in the ability to think abstractly and to reason more effectively. The study of adolescent development often
involves interdisciplinary collaborations. For example, researchers in neuroscience or bio-behavioral health might
focus on pubertal changes in brain structure and its effects on cognition or social relations. Sociologists interested
in adolescence might focus on the acquisition of social roles (e.g., worker or romantic partner) and how this varies
across cultures or social conditions. Developmental psychologists might focus on changes in relations with parents
and peers as a function of school structure and pubertal status. Jean Macfarlane founded the University of California,
Berkeley's Institute of Human Development, formerly called the Institute of Child Welfare, in 1927. The Institute
was instrumental in initiating studies of healthy development, in contrast to previous work that had been dominated
by theories based on pathological personalities. The studies looked at human development during the Great Depression
and World War II, unique historical circumstances under which a generation of children grew up. The Oakland Growth
Study, initiated by Harold Jones and Herbert Stolz in 1931, aimed to study the physical, intellectual, and social
development of children in the Oakland area. Data collection began in 1932 and continued until 1981, allowing the
researchers to gather longitudinal data on the individuals that extended past adolescence into adulthood. Jean Macfarlane
launched the Berkeley Guidance Study, which examined the development of children in terms of their socioeconomic
and family backgrounds. These studies provided the background for Glen Elder in the 1960s, to propose a life-course
perspective of adolescent development. Elder formulated several descriptive principles of adolescent development.
The principle of historical time and place states that an individual's development is shaped by the period and location
in which they grow up. The principle of the importance of timing in one's life refers to the different impact that
life events have on development based on when in one's life they occur. The idea of linked lives states that one's
development is shaped by the interconnected network of relationships of which one is a part; and the principle of
human agency asserts that one's life course is constructed via the choices and actions of an individual within the
context of their historical period and social network. The major landmark of puberty for males is the first ejaculation,
which occurs, on average, at age 13. For females, it is menarche, the onset of menstruation, which occurs, on average,
between ages 12 and 13. The age of menarche is influenced by heredity, but a girl's diet and lifestyle contribute
as well. Regardless of genes, a girl must have a certain proportion of body fat to attain menarche. Consequently,
girls who have a high-fat diet and who are not physically active begin menstruating earlier, on average, than girls
whose diet contains less fat and whose activities involve fat-reducing exercise (e.g. ballet and gymnastics). Girls
who experience malnutrition or are in societies in which children are expected to perform physical labor also begin
menstruating at later ages. Adolescents can conceptualize multiple "possible selves" that they could become and long-term
possibilities and consequences of their choices. Exploring these possibilities may result in abrupt changes in self-presentation
as the adolescent chooses or rejects qualities and behaviors, trying to guide the actual self toward the ideal self
(who the adolescent wishes to be) and away from the feared self (who the adolescent does not want to be). For many,
these distinctions are uncomfortable, but they also appear to motivate achievement through behavior consistent with
the ideal and distinct from the feared possible selves. In females, changes in the primary sex characteristics involve
growth of the uterus, vagina, and other aspects of the reproductive system. Menarche, the beginning of menstruation,
is a relatively late development which follows a long series of hormonal changes. Generally, a girl is not fully
fertile until several years after menarche, as regular ovulation follows menarche by about two years. Unlike males,
therefore, females usually appear physically mature before they are capable of becoming pregnant. Primary sex characteristics
are those directly related to the sex organs. In males, the first stages of puberty involve growth of the testes
and scrotum, followed by growth of the penis. At the time that the penis develops, the seminal vesicles, the prostate,
and the bulbourethral gland also enlarge and develop. The first ejaculation of seminal fluid generally occurs about
one year after the beginning of accelerated penis growth, although this is often determined culturally rather than
biologically, since for many boys first ejaculation occurs as a result of masturbation. Boys are generally fertile
before they have an adult appearance. Adolescence marks a rapid change in one's role within a family. Young children
tend to assert themselves forcefully, but are unable to demonstrate much influence over family decisions until early
adolescence, when they are increasingly viewed by parents as equals. The adolescent faces the task of increasing
independence while preserving a caring relationship with his or her parents. When children go through puberty, there
is often a significant increase in parent–child conflict and a less cohesive familial bond. Arguments often concern
minor issues of control, such as curfew, acceptable clothing, and the adolescent's right to privacy, which adolescents
may have previously viewed as issues over which their parents had complete authority. Parent-adolescent disagreement
also increases as friends demonstrate a greater impact on one another, new influences on the adolescent that may
be in opposition to parents' values. Social media has also played an increasing role in adolescent and parent disagreements.
Social media poses risks that earlier generations of parents did not face, including the growing presence of online
predators. While adolescents strive for their freedoms, parents' uncertainty about what their child is doing on social
media sites is a persistent source of tension. Many parents also have very little knowledge of social networking sites
in the first place, which further increases their mistrust.
An important challenge for the parent–adolescent relationship is to understand how to enhance the opportunities of
online communication while managing its risks. Although conflicts between children and parents increase during adolescence,
these are just relatively minor issues. Regarding their important life issues, most adolescents still share the same
attitudes and values as their parents. The first areas of the brain to be pruned are those involving primary functions,
such as motor and sensory areas. The areas of the brain involved in more complex processes lose matter later in development.
These include the lateral and prefrontal cortices, among other regions. Some of the most developmentally significant
changes in the brain occur in the prefrontal cortex, which is involved in decision making and cognitive control,
as well as other higher cognitive functions. During adolescence, myelination and synaptic pruning in the prefrontal
cortex increases, improving the efficiency of information processing, and neural connections between the prefrontal
cortex and other regions of the brain are strengthened. This leads to better evaluation of risks and rewards, as
well as improved control over impulses. Specifically, developments in the dorsolateral prefrontal cortex are important
for controlling impulses and planning ahead, while development in the ventromedial prefrontal cortex is important
for decision making. Changes in the orbitofrontal cortex are important for evaluating rewards and risks. Communication
within peer groups allows adolescents to explore their feelings and identity as well as develop and evaluate their
social skills. Peer groups offer members the opportunity to develop social skills such as empathy, sharing, and leadership.
Adolescents choose peer groups based on characteristics similarly found in themselves. By utilizing these relationships,
adolescents become more accepting of who they are becoming. Group norms and values are incorporated into an adolescent’s
own self-concept. Through developing new communication skills and reflecting upon those of their peers, as well as
self-opinions and values, an adolescent can share and express emotions and other concerns without fear of rejection
or judgment. Peer groups can have positive influences on an individual, such as on academic motivation and performance.
However, while peers may facilitate social development for one another they may also hinder it. Peers can have negative
influences, such as encouraging experimentation with drugs, drinking, vandalism, and stealing through peer pressure.
Susceptibility to peer pressure increases during early adolescence, peaks around age 14, and declines thereafter.
Further evidence of peers hindering social development has been found in Spanish teenagers, where emotional (rather
than solution-based) reactions to problems and emotional instability have been linked with physical aggression against
peers. Both physical and relational aggression are linked to a vast number of enduring psychological difficulties,
especially depression, as is social rejection. Because of this, bullied adolescents often develop problems that lead
to further victimization. Bullied adolescents are both more likely to continue to be bullied and to bully others
in the future. However, this relationship is less stable in cases of cyberbullying, a relatively new issue among
adolescents. There are at least two major approaches to understanding cognitive change during adolescence. One is
the constructivist view of cognitive development. Based on the work of Piaget, it takes a quantitative, state-theory
approach, hypothesizing that adolescents' cognitive improvement is relatively sudden and drastic. The second is the
information-processing perspective, which derives from the study of artificial intelligence and attempts to explain
cognitive development in terms of the growth of specific components of the thinking process. Adolescence marks a
time of sexual maturation, which manifests in social interactions as well. While adolescents may engage in casual
sexual encounters (often referred to as hookups), most sexual experience during this period of development takes
place within romantic relationships. Adolescents can use technologies and social media to seek out romantic relationships
as they feel it is a safe place to try out dating and identity exploration. From these social media encounters, a
further relationship may begin. Kissing, hand holding, and hugging signify satisfaction and commitment. Among young
adolescents, "heavy" sexual activity, marked by genital stimulation, is often associated with violence, depression,
and poor relationship quality. This effect does not hold true for sexual activity in late adolescence that takes
place within a romantic relationship. Some research suggests that there are genetic causes of early sexual activity
that are also risk factors for delinquency, suggesting that there is a group who are at risk for both early sexual
activity and emotional distress. For older adolescents, though, sexual activity in the context of romantic relationships
was actually correlated with lower levels of deviant behavior after controlling for genetic risks, as opposed to
sex outside of a relationship (hook-ups). A third gain in cognitive ability involves thinking about thinking itself,
a process referred to as metacognition. It often involves monitoring one's own cognitive activity during the thinking
process. Adolescents' improvements in knowledge of their own thinking patterns lead to better self-control and more
effective studying. It is also relevant in social cognition, resulting in increased introspection, self-consciousness,
and intellectualization (in the sense of thought about one's own thoughts, rather than the Freudian definition as
a defense mechanism). Adolescents are much better able than children to understand that people do not have complete
control over their mental activity. Being able to introspect may lead to two forms of adolescent egocentrism, which
results in two distinct problems in thinking: the imaginary audience and the personal fable. These likely peak at
age fifteen, along with self-consciousness in general. A questionnaire called the teen timetable has been used to
measure the age at which individuals believe adolescents should be able to engage in behaviors associated with autonomy.
This questionnaire has been used to gauge differences in cultural perceptions of adolescent autonomy, finding, for
instance, that White parents and adolescents tend to expect autonomy earlier than those of Asian descent. It is,
therefore, clear that cultural differences exist in perceptions of adolescent autonomy, and such differences have
implications for the lifestyles and development of adolescents. In sub-Saharan African youth, the notions of individuality
and freedom may not be useful in understanding adolescent development. Rather, African notions of childhood and adolescent
development are relational and interdependent. Given the potential consequences, engaging in sexual behavior is somewhat
risky, particularly for adolescents. Having unprotected sex, using poor birth control methods (e.g. withdrawal),
having multiple sexual partners, and poor communication are some aspects of sexual behavior that increase individual
and/or social risk. Some qualities of adolescents' lives that are often correlated with risky sexual behavior include
higher rates of experienced abuse and lower rates of parental support and monitoring. Adolescence is also commonly a
time of questioning sexuality and gender. This may involve intimate experimentation with people identifying as the
same gender as well as with people of differing genders. Such exploratory sexual behavior can be seen as similar
to other aspects of identity, including the exploration of vocational, social, and leisure identity, all of which
involve some risk. Research seems to favor the hypothesis that adolescents and adults think about risk in similar
ways, but hold different values and thus come to different conclusions. Some have argued that there may be evolutionary
benefits to an increased propensity for risk-taking in adolescence. For example, without a willingness to take risks,
teenagers would not have the motivation or confidence necessary to leave their family of origin. In addition, from
a population perspective, there is an advantage to having a group of individuals willing to take more risks and try
new methods, counterbalancing the more conservative elements more typical of the received knowledge held by older
adults. Risk-taking may also have reproductive advantages: adolescents have a newfound priority in sexual attraction
and dating, and risk-taking is required to impress potential mates. Research also indicates that baseline sensation
seeking may affect risk-taking behavior throughout the lifespan. Identity development is a stage in the adolescent
life cycle. For most, the search for identity begins in the adolescent years. During these years, adolescents are
more open to 'trying on' different behaviours and appearances to discover who they are. In other words, in an attempt
to find their identity and discover who they are, adolescents are likely to cycle through a number of identities to
find one that suits them best. But, developing and maintaining identity (in adolescent years) is a difficult task
due to multiple factors such as family life, environment, and social status. Empirical studies suggest that this
process might be more accurately described as identity development rather than formation, but they confirm a normative
process of change in both content and structure of one's thoughts about the self. The two main aspects of identity
development are self-clarity and self-esteem. Since choices made during adolescent years can influence later life,
high levels of self-awareness and self-control during mid-adolescence will lead to better decisions during the transition
to adulthood. Researchers have used three general approaches to understanding identity development:
self-concept, sense of identity, and self-esteem. The years of adolescence create a more conscientious group of young
adults. Adolescents pay close attention and give more time and effort to their appearance as their body goes through
changes. Unlike children, teens put forth an effort to look presentable. The environment in which an adolescent
grows up also plays an important role in their identity development. Studies done by the American Psychological Association
have shown that adolescents with a less privileged upbringing have a more difficult time developing their identity.
Research since reveals self-examination beginning early in adolescence, but identity achievement rarely occurring
before age 18. The freshman year of college influences identity development significantly, but may actually prolong
psychosocial moratorium by encouraging reexamination of previous commitments and further exploration of alternate
possibilities without encouraging resolution. For the most part, evidence has supported Erikson's stages: each correlates
with the personality traits he originally predicted. Studies also confirm the impermanence of the stages; there is
no final endpoint in identity development. Egocentrism in adolescents forms a self-conscious desire to feel important
in their peer groups and enjoy social acceptance. Unlike the conflicting aspects of self-concept, identity represents
a coherent sense of self stable across circumstances and including past experiences and future goals. Everyone has
a self-concept, whereas Erik Erikson argued that not everyone fully achieves identity. Erikson's theory of stages
of development includes the identity crisis in which adolescents must explore different possibilities and integrate
different parts of themselves before committing to their beliefs. He described the resolution of this process as
a stage of "identity achievement" but also stressed that the identity challenge "is never fully resolved once and
for all at one point in time". Adolescents begin by defining themselves based on their crowd membership. "Clothes
help teens explore new identities, separate from parents, and bond with peers." Fashion has played a major role when
it comes to teenagers "finding their selves"; fashion is always evolving, which corresponds with the evolution of
change in the personality of teenagers. Adolescents attempt to define their identity by consciously styling themselves
in different manners to find what best suits them. Trial and error in matching both their perceived image and the
image others respond to and see allows the adolescent to grasp an understanding of who they are. Just as fashion is evolving to influence adolescents, so is the media. "Modern life takes place amidst a never-ending barrage of flesh on screens, pages, and billboards." This barrage consciously or subconsciously registers in the mind, causing issues with self-image, a factor that contributes to an adolescent's sense of identity. Researcher James Marcia developed
the current method for testing an individual's progress along these stages. His questions are divided into three
categories: occupation, ideology, and interpersonal relationships. Answers are scored based on the extent to which the
individual has explored and the degree to which he has made commitments. The result is classification of the individual
into (a) identity diffusion, in which all children begin; (b) identity foreclosure, in which commitments are made without the exploration of alternatives; (c) moratorium, the process of exploration itself; or (d) identity achievement, in which moratorium has occurred and resulted in commitments. The final major aspect of identity formation is self-esteem.
Self-esteem is defined as one's thoughts and feelings about one's self-concept and identity. Most theories on self-esteem
state that there is a strong desire, across all genders and ages, to maintain, protect, and enhance one's self-esteem.
Contrary to popular belief, there is no empirical evidence for a significant drop in self-esteem over the course
of adolescence. "Barometric self-esteem" fluctuates rapidly and can cause severe distress and anxiety, but baseline
self-esteem remains highly stable across adolescence. The validity of global self-esteem scales has been questioned,
and many suggest that more specific scales might reveal more about the adolescent experience. Girls are most likely
to enjoy high self-esteem when engaged in supportive relationships with friends; the most important function of friendship to them is having someone who can provide social and moral support. When girls fail to win friends' approval or cannot find someone with whom to share common activities and interests, they suffer from low self-esteem.
In contrast, boys are more concerned with establishing and asserting their independence and defining their relation
to authority. As such, they are more likely to derive high self-esteem from their ability to successfully influence
their friends; on the other hand, the lack of romantic competence, for example, failure to win or maintain the affection
of the opposite or same sex (depending on sexual orientation), is the major contributor to low self-esteem in adolescent boys. Because both men and women tend to have low self-esteem after ending a romantic relationship, they are prone to other symptoms caused by this state. Depression and hopelessness are only two of the various
symptoms, and it is said that women are twice as likely to experience depression and men are three to four times more
likely to commit suicide (Mearns, 1991; Ustun & Sartorius, 1995). In terms of sexual identity, adolescence is when
most gay/lesbian and transgender adolescents begin to recognize and make sense of their feelings. Many adolescents
may choose to come out during this period of their life once an identity has been formed; many others may go through
a period of questioning or denial, which can include experimentation with both homosexual and heterosexual experiences.
A study of 194 lesbian, gay, and bisexual youths under the age of 21 found that having an awareness of one's sexual
orientation occurred, on average, around age 10, but the process of coming out to peers and adults occurred around
ages 16 and 17, respectively. Coming to terms with and creating a positive LGBT identity can be difficult for some
youth for a variety of reasons. Peer pressure is a large factor when youth who are questioning their sexuality or
gender identity are surrounded by heteronormative peers and can cause great distress due to a feeling of being different
from everyone else. While coming out can also foster better psychological adjustment, the risks associated are real.
Indeed, coming out in the midst of a heteronormative peer environment often comes with the risk of ostracism, hurtful
jokes, and even violence. Partly as a result of bullying and rejection from peers or family members, the suicide rate among LGBT adolescents is statistically up to four times higher than that of their heterosexual peers. Despite
changing family roles during adolescence, the home environment and parents are still important for the behaviors
and choices of adolescents. Adolescents who have a good relationship with their parents are less likely to engage
in various risk behaviors, such as smoking, drinking, fighting, and/or unprotected sexual intercourse. In addition,
parents influence the education of adolescents. A study conducted by Adalbjarnardottir and Blondal (2009) showed
that adolescents at the age of 14 who identify their parents as authoritative figures are more likely to complete
secondary education by the age of 22, as support and encouragement from an authoritative parent motivates the adolescent to complete schooling to avoid disappointing that parent. Much research has been conducted on the psychological ramifications
of body image on adolescents. Modern day teenagers are exposed to more media on a daily basis than any generation
before them. Recent studies have indicated that the average teenager watches roughly 1500 hours of television per
year. As such, modern day adolescents are exposed to many representations of ideal, societal beauty. The concept
of a person being unhappy with their own image or appearance has been defined as "body dissatisfaction". In teenagers,
body dissatisfaction is often associated with body mass, low self-esteem, and atypical eating patterns. Scholars
continue to debate the effects of media on body dissatisfaction in teens. A potential important influence on adolescence
is change of the family dynamic, specifically divorce. With the divorce rate up to about 50%, divorce is common and
adds to the already great amount of change in adolescence. Custody disputes soon after a divorce often reflect a
playing out of control battles and ambivalence between parents. Divorce usually results in less contact between the
adolescent and their noncustodial parent. In extreme cases of instability and abuse in homes, divorce can have a
positive effect on families due to less conflict in the home. However, most research suggests a negative effect on
adolescence as well as later development. A recent study found that, compared with peers who grow up in stable post-divorce
families, children of divorce who experience additional family transitions during late adolescence make less progress
in their math and social studies performance over time. Another recent study put forth a new theory entitled the
adolescent epistemological trauma theory, which posited that traumatic life events such as parental divorce during
the formative period of late adolescence portend lifelong effects on adult conflict behavior that can be mitigated
by effective behavioral assessment and training. A parental divorce during childhood or adolescence continues to
have a negative effect when a person is in his or her twenties and early thirties. These negative effects involve romantic relationships and conflict style, meaning that, as adults, children of divorce are more likely to use the styles of avoidance
and competing in conflict management. Some examples of social and religious transition ceremonies that can be found
in the U.S., as well as in other cultures around the world, are Confirmation, Bar and Bat Mitzvahs, Quinceañeras,
sweet sixteens, cotillions, and débutante balls. In other countries, initiation ceremonies play an important role,
marking the transition into adulthood or the entrance into adolescence. This transition may be accompanied by obvious
physical changes, which can vary from a change in clothing to tattoos and scarification. Furthermore, transitions
into adulthood may also vary by gender, and specific rituals may be more common for males or for females. This illuminates
the extent to which adolescence is, at least in part, a social construction; it takes shape differently depending
on the cultural context, and may be enforced more by cultural practices or transitions than by universal chemical
or biological physical changes. Romantic relationships tend to increase in prevalence throughout adolescence. By
age 15, 53% of adolescents have had a romantic relationship that lasted at least one month over the course of the
previous 18 months. In a 2008 study conducted by YouGov for Channel 4, 20% of 14–17-year-olds surveyed in the United Kingdom revealed that they had their first sexual experience at 13 or under. A 2002 American study found that those
aged 15–44 reported that the average age of first sexual intercourse was 17.0 for males and 17.3 for females. The
typical duration of relationships increases throughout the teenage years as well. This constant increase in the likelihood
of a long-term relationship can be explained by sexual maturation and the development of cognitive skills necessary
to maintain a romantic bond (e.g. caregiving, appropriate attachment), although these skills are not strongly developed
until late adolescence. Long-term relationships allow adolescents to gain the skills necessary for high-quality relationships
later in life and develop feelings of self-worth. Overall, positive romantic relationships among adolescents can
result in long-term benefits. High-quality romantic relationships are associated with higher commitment in early
adulthood and are positively associated with self-esteem, self-confidence, and social competence. For example, an
adolescent with positive self-confidence is likely to consider themselves a more successful partner, whereas negative
experiences may lead to low confidence as a romantic partner. Adolescents often date within their demographic in
regards to race, ethnicity, popularity, and physical attractiveness. However, there are traits in which certain individuals,
particularly adolescent girls, seek diversity. While most adolescents date people approximately their own age, boys
typically date partners the same age or younger; girls typically date partners the same age or older. There are certain
characteristics of adolescent development that are more rooted in culture than in human biology or cognitive structures.
Culture has been defined as the "symbolic and behavioral inheritance received from the past that provides a community
framework for what is valued". Culture is learned and socially shared, and it affects all aspects of an individual's
life. Social responsibilities, sexual expression, and belief system development, for instance, are all things that
are likely to vary by culture. Furthermore, distinguishing characteristics of youth, including dress, music and other
uses of media, employment, art, food and beverage choices, recreation, and language, all constitute a youth culture.
For these reasons, culture is a prevalent and powerful presence in the lives of adolescents, and therefore we cannot
fully understand today's adolescents without studying and understanding their culture. However, "culture" should
not be seen as synonymous with nation or ethnicity. Many cultures are present within any given country and racial
or socioeconomic group. Furthermore, to avoid ethnocentrism, researchers must be careful not to define the culture's
role in adolescence in terms of their own cultural beliefs. When discussing peer relationships among adolescents
it is also important to consider how they communicate with one another. An important aspect
of communication is the channel used. Channel, in this respect, refers to the form of communication, be it face-to-face,
email, text message, phone or other. Teens are heavy users of newer forms of communication such as text message and
social-networking websites such as Facebook, especially when communicating with peers. Adolescents use online technology
to experiment with emerging identities and to broaden their peer groups, such as increasing the number of friends
acquired on Facebook and other social media sites. Some adolescents use these newer channels to enhance relationships
with peers; however, there can be negative uses as well, such as cyberbullying, as mentioned previously, and negative
impacts on the family. In contemporary society, adolescents also face some risks as their sexuality begins to transform.
While some of these, such as emotional distress (fear of abuse or exploitation) and sexually transmitted infections/diseases
(STIs/STDs), including HIV/AIDS, are not necessarily inherent to adolescence, others such as teenage pregnancy (through
non-use or failure of contraceptives) are seen as social problems in most western societies. One in four sexually
active teenagers will contract an STI. Adolescents in the United States often choose "anything but intercourse" for sexual activity because they mistakenly believe it reduces the risk of STIs. Across the country, clinicians report rising diagnoses of herpes and of human papillomavirus (HPV), which can cause genital warts and is now thought to affect
15 percent of the teen population. Girls 15 to 19 have higher rates of gonorrhea than any other age group. One-quarter
of all new HIV cases occur in those under the age of 21. Multrine also states in her article that according to a
March survey by the Kaiser Family Foundation, eighty-one percent of parents want schools to discuss the use of condoms
and contraception with their children. They also believe students should be able to be tested for STIs. Furthermore,
teachers want to address such topics with their students. But although 9 in 10 sex education instructors across
the country believe that students should be taught about contraceptives in school, over one quarter report receiving
explicit instructions from school boards and administrators not to do so. According to anthropologist Margaret Mead,
the turmoil found in adolescence in Western society has a cultural rather than a physical cause; she reported that
societies where young women engaged in free sexual activity had no such adolescent turmoil. Adolescence is a period
frequently marked by increased rights and privileges for individuals. While cultural variation exists for legal rights
and their corresponding ages, considerable consistency is found across cultures. Furthermore, since the advent of
the Convention on the Rights of the Child in 1989 (children here defined as under 18), almost every country in the
world (except the U.S. and South Sudan) has legally committed to advancing an anti-discriminatory stance towards
young people of all ages. This includes protecting children against unchecked child labor, enrollment in the military,
prostitution, and pornography. In many societies, those who reach a certain age (often 18, though this varies) are
considered to have reached the age of majority and are legally regarded as adults who are responsible for their actions.
People below this age are considered minors or children. A person below the age of majority may gain adult rights
through legal emancipation. In addition to the sharing of household chores, certain cultures expect adolescents to
share in their family's financial responsibilities. According to family economic and financial education specialists,
adolescents develop sound money management skills through the practices of saving and spending money, as well as
through planning ahead for future economic goals. Differences between families in the distribution of financial responsibilities
or provision of allowance may reflect various social background circumstances and intrafamilial processes, which
are further influenced by cultural norms and values, as well as by the business sector and market economy of a given
society. For instance, in many developing countries it is common for children to attend fewer years of formal schooling
so that, when they reach adolescence, they can begin working. Many cultures define the transition into adultlike
sexuality by specific biological or social milestones in an adolescent's life. For example, menarche (the first menstrual
period of a female), or semenarche (the first ejaculation of a male) are frequent sexual defining points for many
cultures. In addition to biological factors, an adolescent's sexual socialization is highly dependent upon whether
their culture takes a restrictive or permissive attitude toward teen or premarital sexual activity. Restrictive cultures
overtly discourage sexual activity in unmarried adolescents or until an adolescent undergoes a formal rite of passage.
These cultures may attempt to restrict sexual activity by separating males and females throughout their development,
or through public shaming and physical punishment when sexual activity does occur. In less restrictive cultures,
there is more tolerance for displays of adolescent sexuality, or of the interaction between males and females in
public and private spaces. Less restrictive cultures may tolerate some aspects of adolescent sexuality, while objecting
to other aspects. For instance, some cultures find teenage sexual activity acceptable but teenage pregnancy highly
undesirable. Other cultures do not object to teenage sexual activity or teenage pregnancy, as long as they occur
after marriage. In permissive societies, overt sexual behavior among unmarried teens is perceived as acceptable,
and is sometimes even encouraged. Regardless of whether a culture is restrictive or permissive, there are likely
to be discrepancies in how females versus males are expected to express their sexuality. Cultures vary in how overt
this double standard is—in some it is legally inscribed, while in others it is communicated through social convention.
Lesbian, gay, bisexual, and transgender youth face much discrimination through bullying from those unlike them and
may find telling others that they are gay to be a traumatic experience. The range of sexual attitudes that a culture
embraces could thus be seen to affect the beliefs, lifestyles, and societal perceptions of its adolescents. Drinking
habits and the motives behind them often reflect certain aspects of an individual's personality; in fact, four dimensions
of the Five-Factor Model of personality demonstrate associations with drinking motives (all but 'Openness'). Greater
enhancement motives for alcohol consumption tend to reflect high levels of extraversion and sensation-seeking in
individuals; such enjoyment motivation often also indicates low conscientiousness, manifesting in lowered inhibition
and a greater tendency towards aggression. On the other hand, drinking to cope with negative emotional states correlates
strongly with high neuroticism and low agreeableness. Alcohol use as a negative emotion control mechanism often links
with many other behavioral and emotional impairments, such as anxiety, depression, and low self-esteem. Although
research has been inconclusive, some findings have indicated that electronic communication negatively affects adolescents'
social development, replaces face-to-face communication, impairs their social skills, and can sometimes lead to unsafe
interaction with strangers. A 2015 review reported that "adolescents lack awareness of strategies to cope with cyberbullying, which has been consistently associated with an increased likelihood of depression." Studies have shown differences
in the ways the internet negatively impacts the adolescents' social functioning. Online socializing tends to make
girls particularly vulnerable, while socializing in Internet cafés seems only to affect boys' academic achievement.
However, other research suggests that Internet communication brings friends closer and is beneficial for socially
anxious teens, who find it easier to interact socially online. The more conclusive finding has been that Internet
use has a negative effect on the physical health of adolescents, as time spent using the Internet replaces time doing
physical activities. However, the Internet can be significantly useful in educating teens because of the access they
have to information on many topics. Following a steady decline that began in the late 1990s and lasted through the mid-2000s, illicit drug use among adolescents has been on the rise in the U.S. Aside from alcohol, marijuana is the most commonly used drug during the adolescent years. Data collected by the National Institute on Drug Abuse
shows that between the years of 2007 and 2011, marijuana use grew from 5.7% to 7.2% among 8th grade students; among
10th grade students, from 14.2% to 17.6%; and among 12th graders, from 18.8% to 22.6%. Additionally, recent years have
seen a surge in popularity of MDMA; between 2010 and 2011, the use of MDMA increased from 1.4% to 2.3% among high
school seniors. The heightened usage of ecstasy most likely ties in at least to some degree with the rising popularity
of rave culture. Until mid-to-late adolescence, boys and girls show relatively little difference in drinking motives.
Distinctions between the reasons for alcohol consumption of males and females begin to emerge around ages 14–15;
overall, boys tend to view drinking in a more social light than girls, who report on average a more frequent use
of alcohol as a coping mechanism. The latter effect appears to shift in late adolescence and onset of early adulthood
(18–19 years of age); however, despite this trend, age tends to bring a greater desire to drink for pleasure rather
than coping in both boys and girls. Adolescence (from Latin adolescere, meaning "to grow up") is a transitional stage
of physical and psychological human development that generally occurs during the period from puberty to legal adulthood
(age of majority). The period of adolescence is most closely associated with the teenage years, though its physical,
psychological and cultural expressions may begin earlier and end later. For example, although puberty has been historically
associated with the onset of adolescent development, it now typically begins prior to the teenage years and there
has been a normative shift of it occurring in preadolescence, particularly in females (see precocious puberty). Physical
growth, as distinct from puberty (particularly in males), and cognitive development generally seen in adolescence,
can also extend into the early twenties. Thus chronological age provides only a rough marker of adolescence, and
scholars have found it difficult to agree upon a precise definition of adolescence. Within the past ten years, the
amount of social networking sites available to the public has greatly increased as well as the number of adolescents
using them. Several sources report a high proportion of adolescents who use social media: 73% of 12–17 year olds
reported having at least one social networking profile; two-thirds (68%) of teens text every day, half (51%) visit
social networking sites daily, and 11% send or receive tweets at least once every day. In fact, more than a third
(34%) of teens visit their main social networking site several times a day. One in four (23%) teens are "heavy" social
media users, meaning they use at least two different types of social media each and every day. For girls, early maturation
can sometimes lead to increased self-consciousness, though this is a typical aspect of maturing females. Because their bodies develop early, pubescent girls can become more insecure and dependent. Consequently, girls that reach
sexual maturation early are more likely than their peers to develop eating disorders (such as anorexia nervosa).
Nearly half of all American high school girls diet to lose weight. In addition, girls may have to deal with
sexual advances from older boys before they are emotionally and mentally mature. In addition to having earlier sexual
experiences and more unwanted pregnancies than late maturing girls, early maturing girls are more exposed to alcohol
and drug abuse. Those who have had such experiences tend not to perform as well in school as their "inexperienced"
peers. Another set of significant physical changes during puberty happen in bodily distribution of fat and muscle.
This process is different for females and males. Before puberty, there are nearly no sex differences in fat and muscle
distribution; during puberty, boys grow muscle much faster than girls, although both sexes experience rapid muscle
development. In contrast, though both sexes experience an increase in body fat, the increase is much more significant
for girls. Frequently, the increase in fat for girls happens in their years just before puberty. The ratio between
muscle and fat among post-pubertal boys is around three to one, while for girls it is about five to four. This may
help explain sex differences in athletic performance. Changes in secondary sex characteristics include every change
that is not directly related to sexual reproduction. In males, these changes involve appearance of pubic, facial,
and body hair, deepening of the voice, roughening of the skin around the upper arms and thighs, and increased development
of the sweat glands. In females, secondary sex changes involve elevation of the breasts, widening of the hips, development
of pubic and underarm hair, widening of the areolae, and elevation of the nipples. The changes in secondary sex characteristics
that take place during puberty are often referred to in terms of five Tanner stages, named after the British pediatrician
who devised the categorization system. Compared to children, adolescents are more likely to question others' assertions,
and less likely to accept facts as absolute truths. Through experience outside the family circle, they learn that
rules they were taught as absolute are in fact relativistic. They begin to differentiate between rules instituted
out of common sense—not touching a hot stove—and those that are based on culturally-relative standards (codes of
etiquette, not dating until a certain age), a delineation that younger children do not make. This can lead to a period
of questioning authority in all domains. The formal study of adolescent psychology began with the publication of
G. Stanley Hall's "Adolescence" in 1904. Hall, who was the first president of the American Psychological Association,
viewed adolescence primarily as a time of internal turmoil and upheaval (Sturm und Drang). This understanding of
youth was based on two then new ways of understanding human behavior: Darwin's evolutionary theory and Freud's psychodynamic
theory. He believed that adolescence was a representation of our human ancestors' phylogenetic shift from being primitive
to being civilized. Hall's assertions stood relatively uncontested until the 1950s when psychologists such as Erik
Erikson and Anna Freud started to formulate their theories about adolescence. Freud believed that the psychological
disturbances associated with youth were biologically based and culturally universal while Erikson focused on the
dichotomy between identity formation and role fulfillment. Even with their different theories, these three psychologists
agreed that adolescence was inherently a time of disturbance and psychological confusion. The less turbulent aspects
of adolescence, such as peer relations and cultural influence, were left largely ignored until the 1980s. From the
'50s until the '80s, the focus of the field was mainly on describing patterns of behavior as opposed to explaining
them. Self-concept refers to a person's ability to hold opinions and beliefs that are defined confidently and are
consistent and stable. Early in adolescence, cognitive developments result in greater self-awareness,
greater awareness of others and their thoughts and judgments, the ability to think about abstract, future possibilities,
and the ability to consider multiple possibilities at once. As a result, adolescents experience a significant shift
from the simple, concrete, and global self-descriptions typical of young children; as children, they defined themselves
by physical traits whereas as adolescents, they define themselves based on their values, thoughts, and opinions.
An adolescent's environment plays a huge role in their identity development. Although most adolescent studies are conducted
on white, middle-class children, research shows that the more privileged an upbringing people have, the more successfully
they develop their identity. The formation of identity is a crucial period in an adolescent's life. Demographic patterns
suggest that the transition to adulthood is now occurring over a longer span of years than was the case during the
middle of the 20th century. Accordingly, youth, a period that spans late
adolescence and early adulthood, has become a more prominent stage of the life course. Various factors have consequently
become important during this development. Many factors contribute to an adolescent's developing social identity, from
commitment to coping devices to social media, and all of them are affected by the environment the adolescent grows
up in. A child from a more privileged upbringing is exposed to more opportunities
and better situations in general. An adolescent from an inner city or a crime-driven neighborhood is more likely
to be exposed to an environment that can be detrimental to their development. Adolescence is a sensitive period in
the development process, and exposure to the wrong things at that time can have a major effect on future decisions.
Children who grow up in comfortable suburban communities are not exposed to harmful environments and are more likely
to participate in activities that can benefit their identity and contribute to a more successful identity development.
The relationships adolescents have with their peers, family, and members of their social sphere play a vital role
in the social development of an adolescent. As an adolescent's social sphere develops rapidly as they distinguish
the differences between friends and acquaintances, they often become heavily emotionally invested in friends. This
is not harmful; however, if these friends expose an individual to potentially harmful situations, this is an aspect
of peer pressure. Adolescence is a critical period in social development because adolescents can be easily influenced
by the people they develop close relationships with. This is the first time individuals can truly make their own
decisions, which also makes this a sensitive period. Relationships are vital in the social development of an adolescent
due to the extreme influence peers can have over an individual. These relationships become significant because they
begin to help the adolescent understand the concept of personalities, how they form and why a person has that specific
type of personality. "The use of psychological comparisons could serve both as an index of the growth of an implicit
personality theory and as a component process accounting for its creation. In other words, by comparing one person's
personality characteristics to another's, we would be setting up the framework for creating a general theory of personality
(and, ... such a theory would serve as a useful framework for coming to understand specific persons)." This can be
likened to the use of social comparison in developing one’s identity and self-concept, which includes one’s personality,
and underscores the importance of communication, and thus relationships, in one’s development. In social comparison
we use reference groups, with respect to both psychological and identity development. These reference groups are
the peers of adolescents. This means that who the teen chooses/accepts as their friends and who they communicate
with on a frequent basis often makes up their reference groups and can therefore have a huge impact on who they become.
Research shows that relationships have the largest effect on the social development of an individual. Peer groups
are essential to social and general development. Communication with peers increases significantly during adolescence
and peer relationships become more intense than in other stages and more influential to the teen, affecting both
the decisions and choices being made. High-quality friendships may enhance children's development regardless of the
characteristics of those friends. As children begin to bond with various people and create friendships, it later
helps them when they are adolescent and sets up the framework for adolescence and peer groups. Peer groups are especially
important during adolescence, a period of development characterized by a dramatic increase in time spent with peers
and a decrease in adult supervision. Adolescents also associate with friends of the opposite sex much more than in
childhood and tend to identify with larger groups of peers based on shared characteristics. It is also common for
adolescents to use friends as coping devices in different situations. A three-factor structure of dealing with friends
including avoidance, mastery, and nonchalance has shown that adolescents use friends as coping devices with social
stresses. Some researchers are now focusing on learning about how adolescents view their own relationships and sexuality;
they want to move away from a research point of view that focuses on the problems associated with adolescent sexuality.
College Professor Lucia O'Sullivan and her colleagues found that there weren't any significant gender differences
in the relationship events adolescent boys and girls from grades 7–12 reported. Most teens said they had kissed their
partners, held hands with them, thought of themselves as being a couple and told people they were in a relationship.
This means that private thoughts about the relationship as well as public recognition of the relationship were both
important to the adolescents in the sample. Sexual events (such as sexual touching, sexual intercourse) were less
common than romantic events (holding hands) and social events (being with one's partner in a group setting). The
researchers state that these results are important because the results focus on the more positive aspects of adolescents
and their social and romantic interactions rather than focusing on sexual behavior and its consequences. The degree
to which adolescents are perceived as autonomous beings varies widely by culture, as do the behaviors that represent
this emerging autonomy. Psychologists have identified three main types of autonomy: emotional independence, behavioral
autonomy, and cognitive autonomy. Emotional autonomy is defined in terms of an adolescent's relationships with others,
and often includes the development of more mature emotional connections with adults and peers. Behavioral autonomy
encompasses an adolescent's developing ability to regulate his or her own behavior, to act on personal decisions,
and to self-govern. Cultural differences are especially visible in this category because it concerns issues of dating,
social time with peers, and time-management decisions. Cognitive autonomy describes the capacity for an adolescent
to partake in processes of independent reasoning and decision-making without excessive reliance on social validation.
Converging influences from adolescent cognitive development, expanding social relationships, an increasingly adultlike
appearance, and the acceptance of more rights and responsibilities enhance feelings of autonomy for adolescents.
Proper development of autonomy has been tied to good mental health, high self-esteem, self-motivated tendencies,
positive self-concepts, and self-initiating and regulating behaviors. Furthermore, it has been found that adolescents'
mental health is best when their feelings about autonomy match closely with those of their parents. Meanwhile,
the amount of time adolescents spend on work and leisure activities varies greatly by culture as a result of cultural
norms and expectations, as well as various socioeconomic factors. American teenagers spend less time in school or
working and more time on leisure activities—which include playing sports, socializing, and caring for their appearance—than
do adolescents in many other countries. These differences may be influenced by cultural values of education and the
amount of responsibility adolescents are expected to assume in their family or community. Teenage alcohol and drug use
is currently at an all-time low. Out of a polled body of students, 4.4% of 8th graders reported having been drunk
on at least one occasion within the previous month; for 10th graders, the number was 13.7%, and for 12th graders,
25%. More drastically, cigarette smoking has become a far less prevalent activity among American middle- and high-school
students; in fact, a greater number of teens now smoke marijuana than smoke cigarettes, with one recent study showing
a respective 15.2% versus 11.7% of surveyed students. Recent studies have shown that male late adolescents are far
more likely to smoke cigarettes than females. The study indicated a discernible gender difference
in the prevalence of smoking among the students. The findings of the study show that more males than females began
smoking when they were in primary and high school, whereas most females started smoking after high school. The shift
from cigarettes to marijuana may be attributed to recent changing social and political views towards marijuana; issues
such as medicinal use and legalization
have tended towards painting the drug in a more positive light than historically, while cigarettes continue to be
vilified due to associated health risks. Research has generally shown striking uniformity across different cultures
in the motives behind teen alcohol use. Social engagement and personal enjoyment appear to play a fairly universal
role in adolescents' decision to drink across separate cultural contexts. Surveys conducted in Argentina, Hong
Kong, and Canada have each indicated that the most common reasons for drinking among adolescents relate to pleasure
and recreation; 80% of Argentinian teens reported drinking for enjoyment, while only 7% drank to improve a bad mood.
The most prevalent answers among Canadian adolescents were to "get in a party mood," 18%; "because I enjoy it," 16%;
and "to get drunk," 10%. In Hong Kong, female participants most frequently reported drinking for social enjoyment,
while males most frequently reported drinking to feel the effects of alcohol. A broad way of defining adolescence
is as the transition from childhood to adulthood. According to Hogan & Astone (1986), this transition can include markers
such as leaving school, starting a full-time job, leaving the home of origin, getting married, and becoming a parent
for the first time. However, the time frame of this transition varies drastically by culture. In some countries,
such as the United States, adolescence can last nearly a decade, but in others, the transition—often in the form
of a ceremony—can last for only a few days. The end of adolescence and the beginning of adulthood varies by country
and by function. Furthermore, even within a single nation state or culture there can be different ages at which an
individual is considered (chronologically and legally) mature enough for society to entrust them with certain privileges
and responsibilities. Such milestones include driving a vehicle, having legal sexual relations, serving in the armed
forces or on a jury, purchasing and drinking alcohol, voting, entering into contracts, finishing certain levels of
education, and marriage. Adolescence is usually accompanied by an increased independence allowed by the parents or
legal guardians, including less supervision as compared to preadolescence. Facial hair in males normally appears
in a specific order during puberty: The first facial hair to appear tends to grow at the corners of the upper lip,
typically between 14 to 17 years of age. It then spreads to form a moustache over the entire upper lip. This is followed
by the appearance of hair on the upper part of the cheeks, and the area under the lower lip. The hair eventually
spreads to the sides and lower border of the chin, and the rest of the lower face to form a full beard. As with most
human biological processes, this specific order may vary among some individuals. Facial hair is often present in
late adolescence, around ages 17 and 18, but may not appear until significantly later. Some men do not develop full
facial hair for 10 years after puberty. Facial hair continues to get coarser, darker and thicker for another 2–4
years after puberty. The adolescent growth spurt is a rapid increase in the individual's height and weight during
puberty resulting from the simultaneous release of growth hormones, thyroid hormones, and androgens. Males experience
their growth spurt about two years later, on average, than females. During their peak height velocity (the time of
most rapid growth), adolescents grow at a rate nearly identical to that of a toddler: about 4 inches (10.2
cm) a year for males and 3.5 inches (8.9 cm) for females. In addition to changes in height, adolescents also experience
a significant increase in weight (Marshall, 1978). The weight gained during adolescence constitutes nearly half of
one's adult body weight. Teenage and early adult males may continue to gain natural muscle growth even after puberty.
Over the course of adolescence, the amount of white matter in the brain increases linearly, while the amount of grey
matter in the brain follows an inverted-U pattern. Through a process called synaptic pruning, unnecessary neuronal
connections in the brain are eliminated and the amount of grey matter is pared down. However, this does not mean
that the brain loses functionality; rather, it becomes more efficient due to increased myelination (insulation of
axons) and the reduction of unused pathways. Adolescence is also a time for rapid cognitive development. Piaget describes
adolescence as the stage of life in which the individual's thoughts start taking more of an abstract form and the
egocentric thoughts decrease. This allows the individual to think and reason in a wider perspective. A combination
of behavioural and fMRI studies have demonstrated development of executive functions, that is, cognitive skills that
enable the control and coordination of thoughts and behaviour, which are generally associated with the prefrontal
cortex. The thoughts, ideas and concepts developed at this period of life greatly influence one's future life, playing
a major role in character and personality formation. The appearance of more systematic, abstract thinking is another
notable aspect of cognitive development during adolescence. For example, adolescents find it easier than children
to comprehend the sorts of higher-order abstract logic inherent in puns, proverbs, metaphors, and analogies. Their
increased facility permits them to appreciate the ways in which language can be used to convey multiple messages,
such as satire, metaphor, and sarcasm. (Children younger than age nine often cannot comprehend sarcasm at all.) This
also permits the application of advanced reasoning and logical processes to social and ideological matters such as
interpersonal relationships, politics, philosophy, religion, morality, friendship, faith, democracy, fairness, and
honesty. Because most injuries sustained by adolescents are related to risky behavior (car crashes, alcohol, unprotected
sex), a great deal of research has been done on the cognitive and emotional processes underlying adolescent risk-taking.
In addressing this question, it is important to distinguish whether adolescents are more likely to engage in risky
behaviors (prevalence), whether they make risk-related decisions similarly or differently than adults (cognitive
processing perspective), or whether they use the same processes but value different things and thus arrive at different
conclusions. The behavioral decision-making theory proposes that adolescents and adults both weigh the potential
rewards and consequences of an action. However, research has shown that adolescents seem to give more weight to rewards,
particularly social rewards, than do adults. Further distinctions in self-concept, called "differentiation," occur
as the adolescent recognizes the contextual influences on their own behavior and the perceptions of others, and begins
to qualify their traits when asked to describe themselves. Differentiation appears fully developed by mid-adolescence.
Peaking in the 7th–9th grades, the personality traits adolescents use to describe themselves refer to specific contexts,
and therefore may contradict one another. The recognition of inconsistent content in the self-concept is a common
source of distress in these years (see Cognitive dissonance), but this distress may benefit adolescents by encouraging
structural development. In 1989, Troiden proposed a four-stage model for the development of homosexual sexual identity.
The first stage, known as sensitization, usually starts in childhood, and is marked by the child's becoming aware
of same-sex attractions. The second stage, identity confusion, tends to occur a few years later. In this stage, the
youth is overwhelmed by feelings of inner turmoil regarding their sexual orientation, and begins to engage in sexual
experiences with same-sex partners. In the third stage of identity assumption, which usually takes place a few years
after the adolescent has left home, adolescents begin to come out to their family and close friends, and assume
a self-definition as gay, lesbian, or bisexual. In the final stage, known as commitment, the young adult adopts their
sexual identity as a lifestyle. Therefore, this model estimates that the process of coming out begins in childhood,
and continues through the early to mid 20s. This model has been contested, and alternate ideas have been explored
in recent years. During childhood, siblings are a source of conflict and frustration as well as a support system.
Adolescence may affect this relationship differently, depending on sibling gender. In same-sex sibling pairs, intimacy
increases during early adolescence, then remains stable. Mixed-sex sibling pairs act differently; siblings drift
apart during early adolescent years, but experience an increase in intimacy starting at middle adolescence. Sibling
interactions are children's first relational experiences, the ones that shape their social and self-understanding
for life. Sustaining positive sibling relations can assist adolescents in a number of ways. Siblings are able to
act as peers, and may increase one another's sociability and feelings of self-worth. Older siblings can give guidance
to younger siblings, although the impact of this can be either positive or negative depending on the activity of
the older sibling. Adolescents tend to associate with "cliques" on a small scale and "crowds" on a larger scale.
During early adolescence, adolescents often associate in cliques, exclusive, single-sex groups of peers with whom
they are particularly close. Despite the common notion that cliques are an inherently negative influence, they may
help adolescents become socially acclimated and form a stronger sense of identity. Within a clique of highly athletic
male peers, for example, the clique may create a stronger sense of fidelity and competition. Cliques have also become
somewhat of a "collective parent," i.e., telling adolescents what to do and not to do. Towards late adolescence,
cliques often merge into mixed-sex groups as teenagers begin romantically engaging with one another. These small
friend groups then break down further as socialization becomes more couple-oriented. On a larger scale, adolescents
often associate with crowds, groups of individuals who share a common interest or activity. Often, crowd identities
may be the basis for stereotyping young people, such as jocks or nerds. In large, multi-ethnic high schools, there
are often ethnically-determined crowds. While crowds are very influential during early and middle adolescence, they
lose salience during high school as students identify more individually. Dating violence is fairly prevalent within
adolescent relationships. When surveyed, 10–45% of adolescents reported having experienced physical violence in the
context of a relationship, while a quarter to a third of adolescents reported having experienced psychological aggression.
This reported aggression includes hitting, throwing things, or slapping, although most of this physical aggression does
not result in a medical visit. Physical aggression in relationships tends to decline from high school through college
and young adulthood. In heterosexual couples, there is no significant difference between the rates of male and female
aggressors, unlike in adult relationships. The lifestyle of an adolescent in a given culture is profoundly shaped
by the roles and responsibilities he or she is expected to assume. The extent to which an adolescent is expected
to share family responsibilities is one large determining factor in normative adolescent behavior. For instance,
adolescents in certain cultures are expected to contribute significantly to household chores and responsibilities.
Household chores are frequently divided into self-care tasks and family-care tasks. However, specific household responsibilities
for adolescents may vary by culture, family type, and adolescent age. Some research has shown that adolescent participation
in family work and routines has a positive influence on the development of an adolescent's feelings of self-worth,
care, and concern for others. Adolescence is frequently characterized by a transformation of an adolescent's understanding
of the world, the rational direction towards a life course, and the active seeking of new ideas rather than the unquestioning
acceptance of adult authority. An adolescent begins to develop a unique belief system through his or her interaction
with social, familial, and cultural environments. While organized religion is not necessarily a part of every adolescent's
life experience, youth are still held responsible for forming a set of beliefs about themselves, the world around
them, and whatever higher powers they may or may not believe in. This process is often accompanied or aided by cultural
traditions that intend to provide a meaningful transition to adulthood through a ceremony, ritual, confirmation,
or rite of passage. Because exposure to media has increased over the past decade, adolescents' utilization of computers,
cell phones, stereos, and televisions to gain access to various media of popular culture has also increased. Almost
all American households have at least one television, more than three-quarters of all adolescents' homes have access
to the Internet, and more than 90% of American adolescents use the Internet at least occasionally. As a result of
the amount of time adolescents spend using these devices, their total media exposure is high. In the last decade,
the amount of time that adolescents spend on the computer has greatly increased. Online activities with the highest
rates of use among adolescents are video games (78% of adolescents), email (73%), instant messaging (68%), social
networking sites (65%), news sources (63%), music (59%), and videos (57%). At the decision-making point of their
lives, youth are susceptible to drug addiction, sexual abuse, peer pressure, violent crime, and other illegal activities.
Developmental Intervention Science (DIS) is a fusion of the literature of both developmental and intervention sciences.
This association conducts youth interventions that mutually assist both the needs of the community as well as psychologically
stranded youth by focusing on risky and inappropriate behaviors while promoting positive self-development along with
self-esteem among adolescents. Peer acceptance and social norms play a significantly greater role in directing behavior
at the onset of adolescence; as such, the alcohol and illegal drug habits of teens tend to be shaped largely by the
substance use of friends and other classmates. In fact, studies suggest that more significantly than actual drug
norms, an individual's perception of the illicit drug use by friends and peers is highly associated with his or her
own habits in substance use during both middle and high school, a relationship that increases in strength over time.
Whereas social influences on alcohol use and marijuana use tend to work directly in the short term, peer and friend
norms on smoking cigarettes in middle school have a profound effect on one's own likelihood to smoke cigarettes well
into high school. Perhaps the strong correlation between peer influence in middle school and cigarette smoking in
high school may be explained by the addictive nature of cigarettes, which could lead many students to continue their
smoking habits from middle school into late adolescence. The age of consent to sexual activity varies widely between
jurisdictions, ranging from 12 to 20 years, as does the age at which people are allowed to marry. Other legal ages
that vary by culture include those for enlisting in the military, gambling, and purchasing alcohol, cigarettes, or
items with parental advisory labels. The legal coming of age often does not
correspond with the sudden realization of autonomy; many adolescents who have legally reached adult age are still
dependent on their guardians or peers for emotional and financial support. Nonetheless, new legal privileges converge
with shifting social expectations to usher in a phase of heightened independence or social responsibility for most
legal adolescents.
Antarctica, on average, is the coldest, driest, and windiest continent, and has the highest average elevation of all the
continents. Antarctica is considered a desert, with annual precipitation of only 200 mm (8 in) along the coast and
far less inland. The temperature in Antarctica has reached −89.2 °C (−128.6 °F), though the average for the third
quarter (the coldest part of the year) is −63 °C (−81 °F). There are no permanent human residents, but anywhere from
1,000 to 5,000 people reside throughout the year at the research stations scattered across the continent. Organisms
native to Antarctica include many types of algae, bacteria, fungi, plants, protista, and certain animals, such as
mites, nematodes, penguins, seals and tardigrades. Vegetation, where it occurs, is tundra. Geologically, West Antarctica
closely resembles the Andes mountain range of South America. The Antarctic Peninsula was formed by uplift and metamorphism
of sea bed sediments during the late Paleozoic and the early Mesozoic eras. This sediment uplift was accompanied
by igneous intrusions and volcanism. The most common rocks in West Antarctica are andesite and rhyolite volcanics
formed during the Jurassic period. There is also evidence of volcanic activity, even after the ice sheet had formed,
in Marie Byrd Land and Alexander Island. The only anomalous area of West Antarctica is the Ellsworth Mountains region,
where the stratigraphy is more similar to East Antarctica. Integral to the story of the origin of the name "Antarctica"
is that it was not named Terra Australis: that name was given to Australia instead, owing to the misconception that
no significant landmass would be found farther south than Australia. Explorer
Matthew Flinders, in particular, has been credited with popularizing the transfer of the name Terra Australis to
Australia. He justified the titling of his book A Voyage to Terra Australis (1814) in its introduction.
A census of sea life carried out during the International Polar Year and which involved some 500 researchers was
released in 2010. The research is part of the global Census of Marine Life (CoML) and has disclosed some remarkable
findings. More than 235 marine organisms live in both polar regions, having bridged the gap of 12,000 km (7,456 mi).
Large animals such as some cetaceans and birds make the round trip annually. More surprising are small forms of life
such as mudworms, sea cucumbers and free-swimming snails found in both polar oceans. Various factors may aid in their
distribution – fairly uniform temperatures of the deep ocean at the poles and the equator which differ by no more
than 5 °C, and the major current systems or marine conveyor belt which transport eggs and larval stages. During the
Nimrod Expedition led by Ernest Shackleton in 1907, parties led by Edgeworth David became the first to climb Mount
Erebus and to reach the South Magnetic Pole. Douglas Mawson, who assumed the leadership of the Magnetic Pole party
on their perilous return, went on to lead several expeditions until retiring in 1931. In addition, Shackleton himself
and three other members of his expedition made several firsts in December 1908 – February 1909: they were the first
humans to traverse the Ross Ice Shelf, the first to traverse the Transantarctic Mountains (via the Beardmore Glacier),
and the first to set foot on the South Polar Plateau. An expedition led by Norwegian polar explorer Roald Amundsen
from the ship Fram became the first to reach the geographic South Pole on 14 December 1911, using a route from the
Bay of Whales and up the Axel Heiberg Glacier. One month later, the doomed Scott Expedition reached the pole. The
passing of the Antarctic Conservation Act (1978) in the U.S. brought several restrictions to U.S. activity on Antarctica.
The introduction of alien plants or animals can bring a criminal penalty, as can the extraction of any indigenous
species. The overfishing of krill, which plays a large role in the Antarctic ecosystem, led officials to enact regulations
on fishing. The Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR), a treaty that came
into force in 1980, requires that regulations managing all Southern Ocean fisheries consider potential effects on
the entire Antarctic ecosystem. Despite these new acts, unregulated and illegal fishing, particularly of Patagonian
toothfish (marketed as Chilean Sea Bass in the U.S.), remains a serious problem. The illegal fishing of toothfish
has been increasing, with estimates of 32,000 tonnes (35,300 short tons) in 2000. Small-scale "expedition tourism"
has existed since 1957 and is currently subject to Antarctic Treaty and Environmental Protocol provisions, but in
effect self-regulated by the International Association of Antarctica Tour Operators (IAATO). Not all vessels associated
with Antarctic tourism are members of IAATO, but IAATO members account for 95% of the tourist activity. Travel is
largely by small or medium ship, focusing on specific scenic locations with accessible concentrations of iconic wildlife.
A total of 37,506 tourists visited during the 2006–07 Austral summer with nearly all of them coming from commercial
ships. The number was predicted to increase to over 80,000 by 2010. The main mineral resource known on the continent
is coal. It was first recorded near the Beardmore Glacier by Frank Wild on the Nimrod Expedition, and now low-grade
coal is known across many parts of the Transantarctic Mountains. The Prince Charles Mountains contain significant
deposits of iron ore. The most valuable resources of Antarctica lie offshore, namely the oil and natural gas fields
found in the Ross Sea in 1973. Exploitation of all mineral resources is banned until 2048 by the Protocol on Environmental
Protection to the Antarctic Treaty. On 6 September 2007, the Belgian-based International Polar Foundation unveiled the
Princess Elisabeth station, the world's first zero-emissions polar science station in Antarctica, built to research climate
change. Costing $16.3 million, the prefabricated station, which is part of the International Polar Year, was shipped
to Antarctica from Belgium by the end of 2008 to monitor the health of the polar regions. Belgian polar explorer
Alain Hubert stated: "This base will be the first of its kind to produce zero emissions, making it a unique model
of how energy should be used in the Antarctic." Johan Berte is the leader of the station design team and manager
of the project which conducts research in climatology, glaciology and microbiology. Melting of floating ice shelves
(ice that originated on the land) does not in itself contribute much to sea-level rise (since the ice displaces only
its own mass of water). However, it is the outflow of the ice from the land to form the ice shelf which causes a rise
in global sea level. This effect is offset by snow falling back onto the continent. Recent decades have witnessed
several dramatic collapses of large ice shelves around the coast of Antarctica, especially along the Antarctic Peninsula.
Concerns have been raised that disruption of ice shelves may result in increased glacial outflow from the continental
ice mass. The climate of Antarctica does not allow extensive vegetation to form. A combination of freezing temperatures,
poor soil quality, lack of moisture, and lack of sunlight inhibit plant growth. As a result, the diversity of plant
life is very low and limited in distribution. The flora of the continent largely consists of bryophytes. There are
about 100 species of mosses and 25 species of liverworts, but only three species of flowering plants, all of which
are found in the Antarctic Peninsula: Deschampsia antarctica (Antarctic hair grass), Colobanthus quitensis (Antarctic
pearlwort) and the non-native Poa annua (annual bluegrass). Growth is restricted to a few weeks in the summer. New
claims on Antarctica have been suspended since 1959 although Norway in 2015 formally defined Queen Maud Land as including
the unclaimed area between it and the South Pole. Antarctica's status is regulated by the 1959 Antarctic Treaty and
other related agreements, collectively called the Antarctic Treaty System. Antarctica is defined as all land and
ice shelves south of 60° S for the purposes of the Treaty System. The treaty was signed by twelve countries including
the Soviet Union (and later Russia), the United Kingdom, Argentina, Chile, Australia, and the United States. It set
aside Antarctica as a scientific preserve, established freedom of scientific investigation and environmental protection,
and banned military activity on Antarctica. This was the first arms control agreement established during the Cold
War. European maps continued to show this hypothesized land until Captain James Cook's ships, HMS Resolution and
Adventure, crossed the Antarctic Circle on 17 January 1773, in December 1773 and again in January 1774. Cook came
within about 120 km (75 mi) of the Antarctic coast before retreating in the face of field ice in January 1773. The
first confirmed sighting of Antarctica can be narrowed down to the crews of ships captained by three individuals.
According to various organizations (the National Science Foundation, NASA, the University of California, San Diego,
and other sources), ships captained by three men sighted Antarctica or its ice shelf in 1820: von Bellingshausen
(a captain in the Imperial Russian Navy), Edward Bransfield (a captain in the Royal Navy), and Nathaniel Palmer (a
sealer out of Stonington, Connecticut). The expedition led by von Bellingshausen and Lazarev on the ships Vostok
and Mirny reached a point within 32 km (20 mi) of Queen Maud Land and recorded the sight of an ice shelf at 69°21′28″S
2°14′50″W, which became known as the Fimbul ice shelf. This happened
three days before Bransfield sighted land, and ten months before Palmer did so in November 1820. The first documented
landing on Antarctica was by the American sealer John Davis, apparently at Hughes Bay, near Cape Charles, in West
Antarctica on 7 February 1821, although some historians dispute this claim. The first recorded and confirmed landing
was at Cape Adare in 1895. Since the 1970s, an important focus of study has been the ozone layer in the atmosphere
above Antarctica. In 1985, three British scientists working on data they had gathered at Halley Station on the Brunt
Ice Shelf discovered the existence of a hole in this layer. It was eventually determined that the destruction of
the ozone was caused by chlorofluorocarbons (CFCs) emitted by human products. With the ban of CFCs under the Montreal
Protocol, which entered into force in 1989, climate projections indicate that the ozone layer will return to 1980 levels between 2050 and 2070.
During the Cambrian period, Gondwana had a mild climate. West Antarctica was partially in the Northern Hemisphere,
and during this period large amounts of sandstones, limestones and shales were deposited. East Antarctica was at
the equator, where sea floor invertebrates and trilobites flourished in the tropical seas. By the start of the Devonian
period (416 Ma), Gondwana was in more southern latitudes and the climate was cooler, though fossils of land plants
are known from this time. Sand and silts were laid down in what is now the Ellsworth, Horlick and Pensacola Mountains.
Glaciation began at the end of the Devonian period (360 Ma), as Gondwana became centered on the South Pole and the
climate cooled, though flora remained. During the Permian period, the land became dominated by seed plants such as
Glossopteris, a pteridosperm which grew in swamps. Over time these swamps became deposits of coal in the Transantarctic
Mountains. Towards the end of the Permian period, continued warming led to a dry, hot climate over much of Gondwana.
Antarctica (US English /æntˈɑːrktɪkə/, UK English /ænˈtɑːktɪkə/, /ænˈtɑːtɪkə/ or /ænˈɑːtɪkə/) is Earth's
southernmost continent, containing the geographic South Pole. It is situated in the Antarctic region of the Southern
Hemisphere, almost entirely south of the Antarctic Circle, and is surrounded by the Southern Ocean. At 14,000,000
square kilometres (5,400,000 square miles), it is the fifth-largest continent in area after Asia, Africa, North America,
and South America. For comparison, Antarctica is nearly twice the size of Australia. About 98% of Antarctica is covered
by ice that averages 1.9 km (1.2 mi; 6,200 ft) in thickness, which extends to all but the northernmost reaches of
the Antarctic Peninsula. Antarctica is the coldest of Earth's continents. The coldest natural temperature ever recorded
on Earth was −89.2 °C (−128.6 °F) at the Soviet (now Russian) Vostok Station in Antarctica on 21 July 1983. For comparison,
this is 10.7 °C (20 °F) colder than subliming dry ice at one atmosphere of partial pressure, but since CO2 only makes
up 0.039% of air, temperatures of less than −150 °C (−238 °F) would be needed to produce dry ice snow in Antarctica.
Antarctica is a frozen desert with little precipitation; the South Pole itself receives less than 10 cm (4 in) per
year, on average. Temperatures reach a minimum of between −80 °C (−112 °F) and −89.2 °C (−128.6 °F) in the interior
in winter and reach a maximum of between 5 °C (41 °F) and 15 °C (59 °F) near the coast in summer. Sunburn is often
a health issue as the snow surface reflects almost all of the ultraviolet light falling on it. Given the latitude,
long periods of constant darkness or constant sunlight create climates unfamiliar to human beings in much of the
rest of the world. Aristotle wrote in his book Meteorology about an Antarctic region in c. 350 B.C. Marinus of Tyre
reportedly used the name in his unpreserved world map from the 2nd century A.D. The Roman authors Hyginus and Apuleius
(1–2 centuries A.D.) used for the South Pole the romanized Greek name polus antarcticus, from which derived the Old
French pole antartike (modern pôle antarctique) attested in 1270, and from there the Middle English pol antartik
in a 1391 technical treatise by Geoffrey Chaucer (modern Antarctic Pole). Some scientific studies suggest that ozone
depletion may have a dominant role in governing climatic change in Antarctica (and a wider area of the Southern Hemisphere).
Ozone absorbs large amounts of ultraviolet radiation in the stratosphere. Ozone depletion over Antarctica can cause
a cooling of around 6 °C in the local stratosphere. This cooling has the effect of intensifying the westerly winds
which flow around the continent (the polar vortex) and thus prevents outflow of the cold air near the South Pole.
As a result, the continental mass of the East Antarctic ice sheet is held at lower temperatures, and the peripheral
areas of Antarctica, especially the Antarctic Peninsula, are subject to higher temperatures, which promote accelerated
melting. Models also suggest that the ozone depletion/enhanced polar vortex effect accounts for the recent increase
in sea ice just offshore of the continent. Several governments maintain permanent manned research stations on the
continent. The number of people conducting and supporting scientific research and other work on the continent and
its nearby islands varies from about 1,000 in winter to about 5,000 in the summer, giving it a population density
between 70 and 350 inhabitants per million square kilometres (180 and 900 per million square miles) at these times.
Many of the stations are staffed year-round, the winter-over personnel typically arriving from their home countries
for a one-year assignment. An Orthodox church—Trinity Church, opened in 2004 at the Russian Bellingshausen Station—is
manned year-round by one or two priests, who are similarly rotated every year. Positioned asymmetrically around the
South Pole and largely south of the Antarctic Circle, Antarctica is the southernmost continent and is surrounded
by the Southern Ocean; alternatively, it may be considered to be surrounded by the southern Pacific, Atlantic, and
Indian Oceans, or by the southern waters of the World Ocean. It covers more than 14,000,000 km2 (5,400,000 sq mi),
making it the fifth-largest continent, about 1.3 times as large as Europe. The coastline measures 17,968 km (11,165
mi) and is mostly characterized by ice formations, as the following table shows: Some species of marine animals exist
and rely, directly or indirectly, on the phytoplankton. Antarctic sea life includes penguins, blue whales, orcas,
colossal squids and fur seals. The emperor penguin is the only penguin that breeds during the winter in Antarctica,
while the Adélie penguin breeds farther south than any other penguin. The rockhopper penguin has distinctive feathers
around the eyes, giving the appearance of elaborate eyelashes. King penguins, chinstrap penguins, and gentoo penguins
also breed in the Antarctic. Vinson Massif, the highest peak in Antarctica at 4,892 m (16,050 ft), is located in
the Ellsworth Mountains. Antarctica contains many other mountains, on both the main continent and the surrounding
islands. Mount Erebus on Ross Island is the world's southernmost active volcano. Another well-known volcano is found
on Deception Island, which is famous for a giant eruption in 1970. Minor eruptions are frequent and lava flow has
been observed in recent years. Other dormant volcanoes may potentially be active. In 2004, a potentially active underwater
volcano was found in the Antarctic Peninsula by American and Canadian researchers. In 1983, the Antarctic Treaty
Parties began negotiations on a convention to regulate mining in Antarctica. A coalition of international organizations
launched a public pressure campaign to prevent any minerals development in the region, led largely by Greenpeace
International, which established its own scientific station—World Park Base—in the Ross Sea region and conducted
annual expeditions to document environmental effects of humans on Antarctica. In 1988, the Convention on the Regulation
of Antarctic Mineral Resources (CRAMRA) was adopted. The following year, however, Australia and France announced
that they would not ratify the convention, rendering it dead for all intents and purposes. They proposed instead
that a comprehensive regime to protect the Antarctic environment be negotiated in its place. The Protocol on Environmental
Protection to the Antarctic Treaty (the "Madrid Protocol") was negotiated as other countries followed suit and on
14 January 1998 it entered into force. The Madrid Protocol bans all mining in Antarctica, designating Antarctica
a "natural reserve devoted to peace and science". As a result of continued warming, the polar ice caps melted and
much of Gondwana became a desert. In Eastern Antarctica, seed ferns or pteridosperms became abundant and large amounts
of sandstone and shale were laid down at this time. Synapsids, commonly known as "mammal-like reptiles", were common
in Antarctica during the Early Triassic and included forms such as Lystrosaurus. The Antarctic Peninsula began to
form during the Jurassic period (206–146 Ma), and islands gradually rose out of the ocean. Ginkgo trees, conifers,
bennettites, horsetails, ferns and cycads were plentiful during this period. In West Antarctica, coniferous forests
dominated through the entire Cretaceous period (146–66 Ma), though southern beech became more prominent towards the
end of this period. Ammonites were common in the seas around Antarctica, and dinosaurs were also present, though
only three Antarctic dinosaur genera (Cryolophosaurus and Glacialisaurus, from the Hanson Formation, and Antarctopelta)
have been described to date. It was during this era that Gondwana began to break up. Meteorites from Antarctica are
an important area of study of material formed early in the solar system; most are thought to come from asteroids,
but some may have originated on larger planets. The first meteorite was found in 1912, and named the Adelie Land
meteorite. In 1969, a Japanese expedition discovered nine meteorites. Most of these meteorites have fallen onto the
ice sheet in the last million years. Motion of the ice sheet tends to concentrate the meteorites at blocking locations
such as mountain ranges, with wind erosion bringing them to the surface after centuries beneath accumulated snowfall.
Compared with meteorites collected in more temperate regions on Earth, the Antarctic meteorites are well-preserved.
Due to its location at the South Pole, Antarctica receives relatively little solar radiation. This means that it
is a very cold continent where water is mostly in the form of ice. Precipitation is low (most of Antarctica is a
desert) and almost always in the form of snow, which accumulates and forms a giant ice sheet which covers the land.
Parts of this ice sheet form moving glaciers known as ice streams, which flow towards the edges of the continent.
Next to the continental shore are many ice shelves. These are floating extensions of outflowing glaciers from the
continental ice mass. Offshore, temperatures are also low enough that ice is formed from seawater through most of
the year. It is important to understand the various types of Antarctic ice to understand possible effects on sea
levels and the implications of global warming. The first semi-permanent inhabitants of regions near Antarctica (areas
situated south of the Antarctic Convergence) were British and American sealers who used to spend a year or more on
South Georgia, from 1786 onward. During the whaling era, which lasted until 1966, the population of that island varied
from over 1,000 in the summer (over 2,000 in some years) to some 200 in the winter. Most of the whalers were Norwegian,
with an increasing proportion of Britons. The settlements included Grytviken, Leith Harbour, King Edward Point, Stromness,
Husvik, Prince Olav Harbour, Ocean Harbour and Godthul. Managers and other senior officers of the whaling stations
often lived together with their families. Among them was the founder of Grytviken, Captain Carl Anton Larsen, a prominent
Norwegian whaler and explorer who, along with his family, adopted British citizenship in 1910. Some of Antarctica
has been warming up; particularly strong warming has been noted on the Antarctic Peninsula. A study by Eric Steig
published in 2009 noted for the first time that the continent-wide average surface temperature trend of Antarctica
is slightly positive at >0.05 °C (0.09 °F) per decade from 1957 to 2006. This study also noted that West Antarctica
has warmed by more than 0.1 °C (0.2 °F) per decade in the last 50 years, and this warming is strongest in winter
and spring. This is partly offset by autumn cooling in East Antarctica. There is evidence from one study that Antarctica
is warming as a result of human carbon dioxide emissions, but this remains ambiguous. The amount of surface warming
in West Antarctica, while large, has not led to appreciable melting at the surface, and is not directly affecting
the West Antarctic Ice Sheet's contribution to sea level. Instead the recent increases in glacier outflow are believed
to be due to an inflow of warm water from the deep ocean, just off the continental shelf. The net contribution to
sea level from the Antarctic Peninsula is more likely to be a direct result of the much greater atmospheric warming
there. The Antarctic fur seal was very heavily hunted in the 18th and 19th centuries for its pelt by sealers from
the United States and the United Kingdom. The Weddell seal, a "true seal", is named after Sir James Weddell, commander
of British sealing expeditions in the Weddell Sea. Antarctic krill, which congregate in large schools, is the keystone
species of the ecosystem of the Southern Ocean, and is an important food organism for whales, seals, leopard seals,
fur seals, squid, icefish, penguins, albatrosses and many other birds. The Protocol on Environmental Protection to
the Antarctic Treaty (also known as the Environmental Protocol or Madrid Protocol) came into force in 1998, and is
the main instrument concerned with conservation and management of biodiversity in Antarctica. The Antarctic Treaty
Consultative Meeting is advised on environmental and conservation issues in Antarctica by the Committee for Environmental
Protection. A major concern within this committee is the risk to Antarctica from unintentional introduction of non-native
species from outside the region. Although coal, hydrocarbons, iron ore, platinum, copper, chromium, nickel, gold
and other minerals have been found, they have not been in large enough quantities to exploit. The 1991 Protocol on
Environmental Protection to the Antarctic Treaty also restricts a struggle for resources. In 1998, a compromise agreement
was reached to place an indefinite ban on mining, to be reviewed in 2048, further limiting economic development and
exploitation. The primary economic activity is the capture and offshore trading of fish. Antarctic fisheries in 2000–01
reported landing 112,934 tonnes. This large collection of meteorites allows a better understanding of the abundance
of meteorite types in the solar system and how meteorites relate to asteroids and comets. New types of meteorites
and rare meteorites have been found. Among these are pieces blasted off the Moon, and probably Mars, by impacts.
These specimens, particularly ALH84001 discovered by ANSMET, are at the center of the controversy about possible
evidence of microbial life on Mars. Because meteorites in space absorb and record cosmic radiation, the time elapsed
since the meteorite hit the Earth can be determined from laboratory studies. The elapsed time since fall, or terrestrial
residence age, of a meteorite represents more information that might be useful in environmental studies of Antarctic
ice sheets. In 2002 the Antarctic Peninsula's Larsen-B ice shelf collapsed. Between 28 February and 8 March 2008,
about 570 km2 (220 sq mi) of ice from the Wilkins Ice Shelf on the southwest part of the peninsula collapsed, putting
the remaining 15,000 km2 (5,800 sq mi) of the ice shelf at risk. The ice was being held back by a "thread" of ice
about 6 km (4 mi) wide, prior to its collapse on 5 April 2009. According to NASA, the most widespread Antarctic surface
melting of the past 30 years occurred in 2005, when an area of ice comparable in size to California briefly melted
and refroze; this may have resulted from temperatures rising to as high as 5 °C (41 °F). Antarctica has no indigenous
population and there is no evidence that it was seen by humans until the 19th century. However, belief in the existence
of a Terra Australis—a vast continent in the far south of the globe to "balance" the northern lands of Europe, Asia
and North Africa—had existed since the times of Ptolemy (2nd century AD), who suggested the idea to preserve the
symmetry of all known landmasses in the world. Even in the late 17th century, after explorers had found that South
America and Australia were not part of the fabled "Antarctica", geographers believed that the continent was much
larger than its actual size. About 98% of Antarctica is covered by the Antarctic ice sheet, a sheet of ice averaging
at least 1.6 km (1.0 mi) thick. The continent has about 90% of the world's ice (and thereby about 70% of the world's
fresh water). If all of this ice were melted, sea levels would rise about 60 m (200 ft). In most of the interior
of the continent, precipitation is very low, down to 20 mm (0.8 in) per year; in a few "blue ice" areas precipitation
is lower than mass loss by sublimation and so the local mass balance is negative. In the dry valleys, the same effect
occurs over a rock base, leading to a desiccated landscape. There is some evidence, in the form of ice cores drilled
to about 400 m (1,300 ft) above the water line, that Lake Vostok's waters may contain microbial life. The frozen
surface of the lake shares similarities with Jupiter's moon, Europa. If life is discovered in Lake Vostok, it would
strengthen the argument for the possibility of life on Europa. On 7 February 2008, a NASA team embarked on a mission
to Lake Untersee, searching for extremophiles in its highly alkaline waters. If found, these resilient creatures
could further bolster the argument for extraterrestrial life in extremely cold, methane-rich environments. Africa
separated from Antarctica in the Jurassic, around 160 Ma, followed by the Indian subcontinent in the early Cretaceous
(about 125 Ma). By the end of the Cretaceous, about 66 Ma, Antarctica (then connected to Australia) still had a subtropical
climate and flora, complete with a marsupial fauna. In the Eocene epoch, about 40 Ma Australia-New Guinea separated
from Antarctica, so that latitudinal currents could isolate Antarctica from Australia, and the first ice began to
appear. During the Eocene–Oligocene extinction event about 34 million years ago, CO2 levels were
about 760 ppm, having decreased from earlier levels in the thousands of ppm. Antarctica is colder than the
Arctic for three reasons. First, much of the continent is more than 3,000 m (9,800 ft) above sea level, and temperature
decreases with elevation in the troposphere. Second, the Arctic Ocean covers the north polar zone: the ocean's relative
warmth is transferred through the icepack and prevents temperatures in the Arctic regions from reaching the extremes
typical of the land surface of Antarctica. Third, the Earth is at aphelion in July (i.e., the Earth is farthest from
the Sun in the Antarctic winter), and the Earth is at perihelion in January (i.e., the Earth is closest to the Sun
in the Antarctic summer). The orbital distance contributes to a colder Antarctic winter (and a warmer Antarctic summer)
but the first two effects have more impact. Emilio Marcos Palma was the first person born south of the 60th parallel
south (the continental limit according to the Antarctic Treaty), as well as the first one born on the Antarctic mainland,
in 1978 at Base Esperanza, on the tip of the Antarctic Peninsula; his parents were sent there along with seven other
families by the Argentine government to determine if the continent was suitable for family life. In 1984, Juan Pablo
Camacho was born at the Frei Montalva Station, becoming the first Chilean born in Antarctica. Several bases are now
home to families with children attending schools at the station. As of 2009, eleven children were born in Antarctica
(south of the 60th parallel south): eight at the Argentine Esperanza Base and three at the Chilean Frei Montalva
Station. About 1150 species of fungi have been recorded from Antarctica, of which about 750 are non-lichen-forming
and 400 are lichen-forming. Some of these species are cryptoendoliths as a result of evolution under extreme conditions,
and have significantly contributed to shaping the impressive rock formations of the McMurdo Dry Valleys and surrounding
mountain ridges. The apparently simple morphology, scarcely differentiated structures, metabolic systems and enzymes
still active at very low temperatures, and reduced life cycles shown by such fungi make them particularly suited
to harsh environments such as the McMurdo Dry Valleys. In particular, their thick-walled and strongly melanized cells
make them resistant to UV light. Those features can also be observed in algae and cyanobacteria, suggesting that
these are adaptations to the conditions prevailing in Antarctica. This has led to speculation that, if life ever
occurred on Mars, it might have looked similar to Antarctic fungi such as Cryomyces minteri. Some of these fungi
are also apparently endemic to Antarctica. Endemic Antarctic fungi also include certain dung-inhabiting species which
have had to evolve in response to the double challenge of extreme cold while growing on dung, and the need to survive
passage through the gut of warm-blooded animals. The Argentine, British and Chilean claims all overlap, and have
caused friction. On 18 December 2012, the British Foreign and Commonwealth Office named a previously unnamed area
Queen Elizabeth Land in tribute to Queen Elizabeth II's Diamond Jubilee. On 22 December 2012, the UK ambassador to
Argentina, John Freeman, was summoned to the Argentine government as protest against the claim. Argentine–UK relations
had previously been damaged throughout 2012 due to disputes over the sovereignty of the nearby Falkland Islands,
and the 30th anniversary of the Falklands War. There has been some concern over the potential adverse environmental
and ecosystem effects caused by the influx of visitors. Some environmentalists and scientists have made a call for
stricter regulations for ships and a tourism quota. The primary response by Antarctic Treaty Parties has been to
develop, through their Committee for Environmental Protection and in partnership with IAATO, "site use guidelines"
setting landing limits and closed or restricted zones on the more frequently visited sites. Antarctic sightseeing
flights (which did not land) operated out of Australia and New Zealand until the fatal crash of Air New Zealand Flight
901 in 1979 on Mount Erebus, which killed all 257 aboard. Qantas resumed commercial overflights to Antarctica from
Australia in the mid-1990s. Researchers include biologists, geologists, oceanographers, physicists, astronomers,
glaciologists, and meteorologists. Geologists tend to study plate tectonics, meteorites from outer space, and resources
from the breakup of the supercontinent Gondwana. Glaciologists in Antarctica are concerned with the study of the
history and dynamics of floating ice, seasonal snow, glaciers, and ice sheets. Biologists, in addition to examining
the wildlife, are interested in how harsh temperatures and the presence of people affect adaptation and survival
strategies in a wide variety of organisms. Medical physicians have made discoveries concerning the spreading of viruses
and the body's response to extreme seasonal temperatures. Astrophysicists at Amundsen–Scott South Pole Station study
the celestial dome and cosmic microwave background radiation. Many astronomical observations are better made from
the interior of Antarctica than from most surface locations because of the high elevation, which results in a thin
atmosphere; low temperature, which minimizes the amount of water vapour in the atmosphere; and absence of light pollution,
thus allowing for a view of space clearer than anywhere else on Earth. Antarctic ice serves as both the shield and
the detection medium for the largest neutrino telescope in the world, built 2 km (1.2 mi) below Amundsen–Scott station.
At Buya in Eritrea, one of the oldest hominids representing a possible link between Homo erectus and an archaic Homo sapiens
was found by Italian scientists. Dated to over 1 million years old, it is the oldest skeletal find of its kind and
provides a link between hominids and the earliest anatomically modern humans. It is believed that the section of
the Danakil Depression in Eritrea was also a major player in terms of human evolution, and may contain other traces
of evolution from Homo erectus hominids to anatomically modern humans. The Scottish traveler James Bruce reported
in 1770 that Medri Bahri was a distinct political entity from Abyssinia, noting that the two territories were frequently
in conflict. The Bahre-Nagassi ("Kings of the Sea") alternately fought with or against the Abyssinians and the neighbouring
Muslim Adal Sultanate depending on the geopolitical circumstances. Medri Bahri was thus part of the Christian resistance
against Imam Ahmad ibn Ibrahim al-Ghazi of Adal's forces, but later joined the Adalite states and the Ottoman Empire
front against Abyssinia in 1572. The 16th century also marked the arrival of the Ottomans, who began making inroads
in the Red Sea area. The creation of modern-day Eritrea is a result of the incorporation of independent, distinct
kingdoms and sultanates (for example, Medri Bahri and the Sultanate of Aussa) eventually resulting in the formation
of Italian Eritrea. In 1952 Eritrea became part of a federation with Ethiopia, the Federation of Ethiopia and Eritrea.
Subsequent annexation into Ethiopia led to the Eritrean War of Independence, ending with Eritrean independence following
a referendum in April 1993. Hostilities between Eritrea and Ethiopia persisted, leading to the Eritrean–Ethiopian
War of 1998–2000 and further skirmishes with both Djibouti and Ethiopia. Excavations in and near Agordat in central
Eritrea yielded the remains of an ancient pre-Aksumite civilization known as the Gash Group. Ceramics were discovered
that were related to those of the C-Group (Temehu) pastoral culture, which inhabited the Nile Valley between 2500–1500
BC. Some finds date as far back as 3500 BC. Shards akin to those of the Kerma culture, another community that flourished
in the Nile Valley around the same period, were also found at other local archaeological sites in the Barka valley
belonging to the Gash Group. According to Peter Behrens (1981) and Marianne Bechaus-Gerst (2000), linguistic evidence
indicates that the C-Group and Kerma peoples spoke Afroasiatic languages of the Berber and Cushitic branches, respectively.
The Aksumites erected a number of large stelae, which served a religious purpose in pre-Christian times. One of these
granite columns, the obelisk of Aksum, is the largest such structure in the world, standing at 90 feet (27 m). Under Ezana
(fl. 320–360), Aksum later adopted Christianity. In the 7th century, early Muslims from Mecca also sought refuge
from Quraysh persecution by travelling to the kingdom, a journey known in Islamic history as the First Hijra. It
is also the alleged resting place of the Ark of the Covenant and the purported home of the Queen of Sheba. Eritrea
is a member of the United Nations and the African Union, and an observer member of the Arab League. The nation holds
a seat on the United Nations' Advisory Committee on Administrative and Budgetary Questions (ACABQ). Eritrea also
holds memberships in the International Bank for Reconstruction and Development, International Finance Corporation,
International Criminal Police Organization (INTERPOL), Non-Aligned Movement, Organization for the Prohibition of
Chemical Weapons, Permanent Court of Arbitration, and the World Customs Organization. The kingdom is mentioned in
the Periplus of the Erythraean Sea as an important market place for ivory, which was exported throughout the ancient
world. Aksum was at the time ruled by Zoskales, who also governed the port of Adulis. The Aksumite rulers facilitated
trade by minting their own Aksumite currency. The state also established its hegemony over the declining Kingdom
of Kush and regularly entered the politics of the kingdoms on the Arabian peninsula, eventually extending its rule
over the region with the conquest of the Himyarite Kingdom. In 1888, the Italian administration launched its first
development projects in the new colony. The Eritrean Railway was completed to Saati in 1888, and reached Asmara in
the highlands in 1911. The Asmara–Massawa Cableway was the longest line in the world during its time, but was later
dismantled by the British in World War II. Besides major infrastructural projects, the colonial authorities invested
significantly in the agricultural sector. It also oversaw the provision of urban amenities in Asmara and Massawa,
and employed many Eritreans in public service, particularly in the police and public works departments. Thousands
of Eritreans were concurrently enlisted in the army, serving during the Italo-Turkish War in Libya as well as the
First and Second Italo-Abyssinian Wars. Following the adoption of UN Resolution 390A(V) in December 1950, Eritrea
was federated with Ethiopia under the prompting of the United States. The resolution called for Eritrea and Ethiopia
to be linked through a loose federal structure under the sovereignty of the Emperor. Eritrea was to have its own
administrative and judicial structure, its own flag, and control over its domestic affairs, including police, local
administration, and taxation. The federal government, which for all intents and purposes was the existing imperial
government, was to control foreign affairs (including commerce), defense, finance, and transportation. The resolution
ignored the wishes of Eritreans for independence, but guaranteed the population democratic rights and a measure of
autonomy. Additionally, the Italian Eritrean administration opened a number of new factories, which produced buttons, cooking oil, pasta, construction materials, packed meat, tobacco, hides, and other household commodities. In 1939, there were around 2,198 factories, and most of the employees were Eritrean citizens. The establishment of industries also increased the number of both Italians and Eritreans residing in the cities. The number of Italians
residing in the territory increased from 4,600 to 75,000 in five years, and with the involvement of Eritreans in industry, trade and fruit plantations expanded across the nation, with some of the plantations owned
by Eritreans. Lions are said to inhabit the mountains of the Gash-Barka Region. There is also a small population
of elephants that roam in some parts of the country. Dik-diks can also be found in many areas. The endangered African
wild ass can be seen in the Denakalia Region. Other local wildlife includes bushbucks, duikers, greater kudus, klipspringers, African leopards, oryxes, and crocodiles. The spotted hyena is widespread and fairly common. Between 1955 and 2001
there were no reported sightings of elephant herds, and they are thought to have fallen victim to the war of independence.
In December 2001 a herd of about 30, including 10 juveniles, was observed in the vicinity of the Gash River. The
elephants seemed to have formed a symbiotic relationship with olive baboons, with the baboons using the water holes
dug by the elephants, while the elephants use the tree-top baboons as an early warning system. Eritrea can be split
into three ecoregions. To the east of the highlands are the hot, arid coastal plains stretching down to the southeast
of the country. The cooler, more fertile highlands, reaching up to 3,000 m, have a different habitat. Habitats here vary
from the sub-tropical rainforest at Filfil Solomona to the precipitous cliffs and canyons of the southern highlands.
The Afar Triangle or Danakil Depression of Eritrea is the probable location of a triple junction where three tectonic
plates are pulling away from one another. The highest point of the country, Emba Soira, is located in the center of
Eritrea, at 3,018 meters (9,902 ft) above sea level. It is estimated that there are around 100 elephants left in
Eritrea, the most northerly of East Africa's elephants. The endangered African wild dog (Lycaon pictus) was previously
found in Eritrea, but is now deemed extirpated from the entire country. In Gash-Barka, deadly snakes like the saw-scaled viper are common. The puff adder and red spitting cobra are widespread and can be found even in the highlands. In the
coastal areas marine species that are common include dolphin, dugong, whale shark, turtles, marlin/swordfish, and
manta ray. Eritrea is a one-party state in which national legislative elections have been repeatedly postponed. According
to Human Rights Watch, the government's human rights record is considered among the worst in the world. Most Western
countries have accused the Eritrean authorities of arbitrary arrest and detentions, and of detaining an unknown number
of people without charge for their political activism. However, the Eritrean government has continually dismissed
the accusations as politically motivated. In June 2015, a 500-page United Nations Human Rights Council report accused
Eritrea's government of extrajudicial executions, torture, indefinitely prolonged national service and forced labour,
and indicated that sexual harassment, rape and sexual servitude by state officials are also widespread. In an attempt
at reform, Eritrean government officials and NGO representatives have participated in numerous public meetings and
dialogues. In these sessions they have answered questions as fundamental as, "What are human rights?", "Who determines
what are human rights?", and "What should take precedence, human or communal rights?" In 2007, the Eritrean government
also banned female genital mutilation. In regional assemblies and religious circles, Eritreans themselves continually speak out against female circumcision, citing health concerns and individual freedom, and implore rural communities to abandon this ancient cultural practice.
Additionally, a new movement called Citizens for Democratic Rights in Eritrea aimed at bringing about dialogue between
the government and opposition was formed in early 2009. The group consists of ordinary citizens and some people close
to the government. In its 2014 Press Freedom Index, Reporters Without Borders ranked the media environment in Eritrea
at the very bottom of a list of 178 countries, just below totalitarian North Korea. According to the BBC, "Eritrea
is the only African country to have no privately owned news media", and Reporters Without Borders said of the public
media, "[they] do nothing but relay the regime's belligerent and ultra-nationalist discourse. ... Not a single [foreign
correspondent] now lives in Asmara." The state-owned news agency censors news about external events. Independent
media have been banned since 2001. In 2015, The Guardian published an opinion piece on Eritrea's media environment. Even during the war,
Eritrea developed its transportation infrastructure by asphalting new roads, improving its ports, and repairing war-damaged
roads and bridges as a part of the Warsay Yika'alo Program. The most significant of these projects was the construction
of a coastal highway of more than 500 km connecting Massawa with Asseb, as well as the rehabilitation of the Eritrean
Railway. The rail line has been restored between the port of Massawa and the capital Asmara, although services are
sporadic. Steam locomotives are sometimes used for groups of enthusiasts. Eritrea is a multilingual country. The
nation has no official language, as the Constitution establishes the "equality of all Eritrean languages". However,
Tigrinya serves as the de facto language of national identity. With 2,540,000 total speakers out of a population of 5,254,000
in 2006, Tigrinya is the most widely spoken language, particularly in the southern and central parts of Eritrea.
Modern Standard Arabic and English serve as de facto working languages, with the latter used in university education
and many technical fields. Italian, the former colonial language, is widely used in commerce and is taught as a second
language in schools, and a few elderly Italian monolinguals remain. Eritrea has achieved significant improvements in health care
and is one of the few countries to be on target to meet its Millennium Development Goal (MDG) targets in health,
in particular child health. Life expectancy at birth has increased from 39.1 in 1960 to 59.5 years in 2008, maternal
and child mortality rates have dropped dramatically and the health infrastructure has been expanded. Due to Eritrea's
relative isolation, information and resources are extremely limited; the World Health Organisation (WHO) found in 2008 that average life expectancy was slightly less than 63 years. Immunisation and child nutrition have been tackled by working closely with schools in a multi-sectoral approach; the number of children vaccinated against
measles almost doubled in seven years, from 40.7% to 78.5% and the underweight prevalence among children decreased
by 12% in 1995–2002 (severe underweight prevalence by 28%). The National Malaria Protection Unit of the Ministry
of Health has registered tremendous improvements in reducing malarial mortality by as much as 85% and the number
of cases by 92% between 1998 and 2006. The Eritrean government has banned female genital mutilation (FGM), saying
the practice was painful and put women at risk of life-threatening health problems. Additionally, owing to its colonial
history, cuisine in Eritrea features more Italian influences than are present in Ethiopian cooking, including more
pasta and greater use of curry powders and cumin. Italian Eritrean cuisine began to be practiced during the colonial era of the Kingdom of Italy, when a large number of Italians moved to Eritrea. They brought the use of pasta to Italian Eritrea, and it is one of the main foods eaten in present-day Asmara. An Italian Eritrean cuisine emerged, and common dishes include "Pasta al Sugo e Berbere", which means "pasta with tomato sauce and berbere" (spice), though there are many more, such as "lasagna" and "cotoletta alla milanese" (Milanese cutlet). Alongside sowa, people
in Eritrea also tend to drink coffee. Mies is another popular local alcoholic beverage, made out of honey. The culture
of Eritrea has been largely shaped by the country's location on the Red Sea coast. One of the most recognizable parts
of Eritrean culture is the coffee ceremony. Coffee (Ge'ez ቡን būn) is offered when visiting friends, during festivities,
or as a daily staple of life. During the coffee ceremony, there are traditions that are upheld. The coffee is served
in three rounds: the first brew or round is called awel in Tigrinya meaning first, the second round is called kalaay
meaning second, and the third round is called bereka meaning "to be blessed". If coffee is politely declined, then
most likely tea ("shai" ሻሂ shahee) will instead be served. Eritrea (/ˌɛrᵻˈtreɪ.ə/ or /ˌɛrᵻˈtriːə/), officially the
State of Eritrea, is a country in East Africa. With its capital at Asmara, it is bordered by Sudan in the west, Ethiopia
in the south, and Djibouti in the southeast. The northeastern and eastern parts of Eritrea have an extensive coastline
along the Red Sea. The nation has a total area of approximately 117,600 km2 (45,406 sq mi), and includes the Dahlak
Archipelago and several of the Hanish Islands. Its name Eritrea is based on the Greek name for the Red Sea (Ἐρυθρὰ
Θάλασσα Erythra Thalassa), which was first adopted for Italian Eritrea in 1890. Eritrea's ethnic groups each have
their own styles of music and accompanying dances. Amongst the Tigrinya, the best known traditional musical genre
is the guaila. Traditional instruments of Eritrean folk music include the stringed krar, begena, masenqo, and wata (a distant, rudimentary cousin of the violin), as well as the kebero drum. The most popular Eritrean artist is the Tigrinya singer
Helen Meles, who is noted for her powerful voice and wide singing range. Other prominent local musicians include
the Kunama singer Dehab Faytinga, Ruth Abraha, Bereket Mengisteab, Yemane Baria, and the late Abraham Afewerki. During
the Middle Ages, the Eritrea region was known as Medri Bahri ("sea-land"). The name Eritrea is derived from the ancient
Greek name for Red Sea (Ἐρυθρὰ Θάλασσα Erythra Thalassa, based on the adjective ἐρυθρός erythros "red"). It was first
formally adopted in 1890, with the formation of Italian Eritrea (Colonia Eritrea). The territory became the Eritrea
Governorate within Italian East Africa in 1936. Eritrea was annexed by Ethiopia in 1953 (nominally within a federation
until 1962) and an Eritrean Liberation Front formed in 1960. Eritrea gained independence following the 1993 referendum,
and the name of the new state was defined as State of Eritrea in the 1997 constitution. In 2010,
a genetic study was conducted on the mummified remains of baboons that were brought back as gifts from Punt by the
ancient Egyptians. Led by a research team from the Egyptian Museum and the University of California, the scientists
used oxygen isotope analysis to examine hairs from two baboon mummies that had been preserved in the British Museum.
One of the baboons had distorted isotopic data, so the other's oxygen isotope values were compared to those of present-day
baboon specimens from regions of interest. The researchers found that the mummies most closely matched modern baboon
specimens in Eritrea and Ethiopia, which they suggested implied that Punt was likely a narrow region that included
eastern Ethiopia and all of Eritrea. After the decline of Aksum, the Eritrean highlands were under the domain of
Bahr Negash ruled by the Bahr Negus. The area was then known as Ma'ikele Bahr ("between the seas/rivers," i.e. the
land between the Red Sea and the Mereb river). It was later renamed under Emperor Zara Yaqob as the domain of the
Bahr Negash, the Medri Bahri ("Sea land" in Tigrinya, although it included some areas like Shire on the other side
of the Mereb, today in Ethiopia). With its capital at Debarwa, the state's main provinces were Hamasien, Serae and
Akele Guzai. In 1922, Benito Mussolini's rise to power in Italy brought profound changes to the colonial government
in Italian Eritrea. After il Duce declared the birth of the Italian Empire in May 1936, Italian Eritrea (enlarged
with northern Ethiopia's regions) and Italian Somaliland were merged with the just conquered Ethiopia in the new
Italian East Africa (Africa Orientale Italiana) administrative territory. This Fascist period was characterized by
imperial expansion in the name of a "new Roman Empire". Eritrea was chosen by the Italian government to be the industrial
center of Italian East Africa. When Emperor Haile Selassie unilaterally dissolved the Eritrean parliament and annexed
the country in 1962, the Eritrean Liberation Front (ELF) waged an armed struggle for independence. The ensuing Eritrean
War for Independence went on for 30 years against successive Ethiopian governments until 1991, when the Eritrean
People's Liberation Front (EPLF), a successor of the ELF, defeated the Ethiopian forces in Eritrea and helped a coalition
of Ethiopian rebel forces take control of the Ethiopian capital, Addis Ababa. Disagreements following the war have
resulted in stalemate punctuated by periods of elevated tension and renewed threats of war. The stalemate led the
President of Eritrea to urge the UN to take action on Ethiopia with the Eleven Letters penned by the President to
the United Nations Security Council. The situation has been further escalated by the continued efforts of the Eritrean
and Ethiopian leaders in supporting opposition in one another's countries. In 2011, Ethiopia accused
Eritrea of planting bombs at an African Union summit in Addis Ababa, which was later supported by a UN report. Eritrea
denied the claims. However, Eritrea still faces many challenges. Although the number of physicians per 1,000 population increased from only 0.2 in 1993 to 0.5 in 2004, this figure is still very low. Malaria and tuberculosis are common in Eritrea.
HIV prevalence among the 15–49 age group exceeds 2%. The fertility rate is about 5 births per woman. Maternal mortality
dropped by more than half from 1995 to 2002, although the figure is still high. Similarly, between 1995 and 2002,
the number of births attended by skilled health personnel has doubled but still is only 28.3%. A major cause of death
in neonates is severe infection. Per capita expenditure on health is low in Eritrea. Traditional Eritrean attire
is quite varied among the ethnic groups of Eritrea. In the larger cities, most people dress in Western casual dress
such as jeans and shirts. In offices, both men and women often dress in suits. Traditional clothing for Christian
Tigrinya-speaking highlanders consists of bright white gowns called zurias for the women, and long white shirts accompanied
by white pants for the men. In Muslim communities in the Eritrean lowland, the women traditionally dress in brightly
colored clothes. Only Rashaida women maintain a tradition of covering half of their faces, though they do not cover
their hair. Football and cycling are the most popular sports in Eritrea. In recent years, Eritrean athletes have
also seen increasing success in the international arena. Zersenay Tadese, an Eritrean athlete, currently holds the
world record in half marathon distance running. The Tour of Eritrea, a multi-stage international cycling event, is
held annually throughout the country. The Eritrea national cycling team has experienced a lot of success, winning
the continental cycling championship several years in a row. Six Eritrean riders have been signed to international
cycling teams, including Natnael Berhane and Daniel Teklehaimanot. Berhane was named African Sportsman of the Year
in 2013, ahead of footballers Yaya Touré and Didier Drogba, while Teklehaimanot became the first Eritrean to ride
the Vuelta a España in 2012. In 2015 Teklehaimanot won the King of the Mountains classification in the Critérium
du Dauphiné. Teklehaimanot and fellow Eritrean Merhawi Kudus became the first black African riders to compete in
the Tour de France when they were selected by the MTN–Qhubeka team for the 2015 edition of the race, where, on 9
July, Teklehaimanot became the first African rider to wear the polka dot jersey. During the last interglacial period,
the Red Sea coast of Eritrea was occupied by early anatomically modern humans. It is believed that the area was on
the route out of Africa that some scholars suggest was used by early humans to colonize the rest of the Old World.
In 1999, the Eritrean Research Project Team composed of Eritrean, Canadian, American, Dutch and French scientists
discovered a Paleolithic site with stone and obsidian tools dated to over 125,000 years old near the Bay of Zula
south of Massawa, along the Red Sea littoral. The tools are believed to have been used by early humans to harvest
marine resources like clams and oysters. At the end of the 16th century, the Aussa Sultanate was established in the
Denkel lowlands of Eritrea. The polity had come into existence in 1577, when Muhammed Jasa moved his capital from
Harar to Aussa (Asaita) with the split of the Adal Sultanate into Aussa and the Sultanate of Harar. At some point
after 1672, Aussa declined in conjunction with Imam Umar Din bin Adam's recorded ascension to the throne. In 1734,
the Afar leader Kedafu, head of the Mudaito clan, seized power and established the Mudaito Dynasty. This marked the
start of a new and more sophisticated polity that would last into the colonial period. In the vacuum that followed
the 1889 death of Emperor Yohannes IV, Gen. Oreste Baratieri occupied the highlands along the Eritrean coast and
Italy proclaimed the establishment of the new colony of Italian Eritrea, a colony of the Kingdom of Italy. In the
Treaty of Wuchale (It. Uccialli) signed the same year, King Menelik of Shewa, a southern Ethiopian kingdom, recognized
the Italian occupation of his rivals' lands of Bogos, Hamasien, Akkele Guzay, and Serae in exchange for guarantees
of financial assistance and continuing access to European arms and ammunition. His subsequent victory over his rival
kings and enthronement as Emperor Menelik II (r. 1889–1913) made the treaty formally binding upon the entire territory.
In the 1950s, the Ethiopian feudal administration under Emperor Haile Selassie sought to annex Eritrea and Italian
Somaliland. He laid claim to both territories in a letter to Franklin D. Roosevelt at the Paris Peace Conference
and at the First Session of the United Nations. In the United Nations, the debate over the fate of the former Italian
colonies continued. The British and Americans preferred to cede all of Eritrea except the Western province to the
Ethiopians as a reward for their support during World War II. The Independence Bloc of Eritrean parties consistently
requested from the UN General Assembly that a referendum be held immediately to settle the Eritrean question of sovereignty.
The Eritrean highway system is named according to the road classification. The three levels of classification are:
primary (P), secondary (S), and tertiary (T). The lowest level road is tertiary and serves local interests. Typically
they are improved earth roads which are occasionally paved. During the wet seasons these roads typically become impassable.
The next higher level road is a secondary road and typically is a single-layered asphalt road that connects district
capitals together and those to the regional capitals. Roads that are considered primary roads are those that are
fully asphalted (throughout their entire length) and in general they carry traffic between all the major towns in
Eritrea. According to recent estimates, 50% of the population adheres to Christianity and 48% to Islam, while 2% of the population follows other religions, including traditional African religion and animism. According to a study by the Pew Research Center, 63% adheres to Christianity and 36% to Islam. Since May 2002, the government of Eritrea
has officially recognized the Eritrean Orthodox Tewahedo Church (Oriental Orthodox), Sunni Islam, the Eritrean Catholic
Church (a Metropolitanate sui juris) and the Evangelical Lutheran church. All other faiths and denominations are
required to undergo a registration process. Among other things, the government's registration system requires religious
groups to submit personal information on their membership to be allowed to worship. Education in Eritrea is officially
compulsory between seven and 13 years of age. However, the education infrastructure is inadequate to meet current
needs. Statistics vary at the elementary level, suggesting that between 65 and 70% of school-aged children attend
primary school; approximately 61% attend secondary school. Student-teacher ratios are high: 45 to 1 at the elementary
level and 54 to 1 at the secondary level. There are an average 63 students per classroom at the elementary level
and 97 per classroom at the secondary level. Learning hours at school are often less than six hours per day. Skill
shortages are present at all levels of the education system, and funding for and access to education vary significantly
by gender and location. Illiteracy estimates for Eritrea range from around 40% to as high as 70%. A typical traditional
Eritrean dish consists of injera accompanied by a spicy stew, which frequently includes beef, kid, lamb or fish.
Overall, Eritrean cuisine strongly resembles that of neighboring Ethiopia, although Eritrean cooking tends to feature more seafood than Ethiopian cuisine on account of the country's coastal location. Eritrean dishes are also frequently "lighter"
in texture than Ethiopian meals. They likewise tend to employ less seasoned butter and spices and more tomatoes,
as in the tsebhi dorho delicacy.
Depleted uranium is also used as a shielding material in some containers used to store and transport radioactive materials.
While the metal itself is radioactive, its high density makes it more effective than lead in halting radiation from
strong sources such as radium. Other uses of depleted uranium include counterweights for aircraft control surfaces,
as ballast for missile re-entry vehicles and as a shielding material. Due to its high density, this material is found
in inertial guidance systems and in gyroscopic compasses. Depleted uranium is preferred over similarly dense metals
due to its ability to be easily machined and cast as well as its relatively low cost. The main risk of exposure to
depleted uranium is chemical poisoning by uranium oxide rather than radioactivity (uranium being only a weak alpha
emitter). The discovery and isolation of radium in uranium ore (pitchblende) by Marie Curie sparked the development
of uranium mining to extract the radium, which was used to make glow-in-the-dark paints for clock and aircraft dials.
This left a prodigious quantity of uranium as a waste product, since it takes three tonnes of uranium to extract
one gram of radium. This waste product was diverted to the glazing industry, making uranium glazes very inexpensive
and abundant. Besides the pottery glazes, uranium tile glazes accounted for the bulk of the use, including common
bathroom and kitchen tiles which can be produced in green, yellow, mauve, black, blue, red and other colors. The
X-10 Graphite Reactor at Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee, formerly known as the Clinton
Pile and X-10 Pile, was the world's second artificial nuclear reactor (after Enrico Fermi's Chicago Pile) and was
the first reactor designed and built for continuous operation. Argonne National Laboratory's Experimental Breeder
Reactor I, located at the Atomic Energy Commission's National Reactor Testing Station near Arco, Idaho, became the
first nuclear reactor to create electricity on 20 December 1951. Initially, four 150-watt light bulbs were lit by
the reactor, but improvements eventually enabled it to power the whole facility (later, the town of Arco became the
first in the world to have all its electricity come from nuclear power generated by BORAX-III, another reactor designed
and operated by Argonne National Laboratory). The world's first commercial scale nuclear power station, Obninsk in
the Soviet Union, began generation with its reactor AM-1 on 27 June 1954. Other early nuclear power plants were Calder
Hall in England, which began generation on 17 October 1956, and the Shippingport Atomic Power Station in Pennsylvania,
which began on 26 May 1958. Nuclear power was used for the first time for propulsion by a submarine, the USS Nautilus,
in 1954. Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water.
Uranium is the 51st element in order of abundance in the Earth's crust. Uranium is also the highest-numbered element
to be found naturally in significant quantities on Earth and is almost always found combined with other elements.
Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae.
The decay of uranium, thorium, and potassium-40 in the Earth's mantle is thought to be the main source of heat that
keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. Uranium-235 was the
first isotope that was found to be fissile. Other naturally occurring isotopes are fissionable, but not fissile.
On bombardment with slow neutrons, a uranium-235 nucleus will most of the time divide into two smaller nuclei, releasing nuclear binding energy and more neutrons. If enough of these neutrons are absorbed by other uranium-235 nuclei, a nuclear chain reaction occurs that results in a burst of heat or (in special circumstances) an explosion.
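The criticality logic described above can be sketched as a toy calculation. This is an illustrative model only, not real reactor physics: the function name, the fraction p, and the generation count are all assumptions made for the example, while the figure of roughly 2.5 neutrons per fission is the average the text itself reports for uranium-235.

```python
# Toy model of a neutron chain reaction (illustrative sketch, not reactor physics).
# Each U-235 fission releases ~2.5 neutrons on average; suppose a fraction p of
# them goes on to cause another fission. The multiplication factor k = 2.5 * p
# decides whether the reaction dies out (k < 1), holds steady (k = 1), or grows (k > 1).

NU = 2.5  # average neutrons released per fission of uranium-235

def neutron_population(p: float, generations: int, start: float = 1.0) -> float:
    """Neutron count after a number of fission generations, where a
    fraction p of released neutrons induces a further fission."""
    k = NU * p
    return start * k ** generations

# Subcritical: absorbers (e.g. control rods) soak up most neutrons (k = 0.75).
print(neutron_population(0.3, 10))
# Critical: production exactly balances losses (k = 1.0).
print(neutron_population(0.4, 10))
# Supercritical: exponential growth, the "burst of heat" case (k = 1.25).
print(neutron_population(0.5, 10))
```

This is exactly the role the neutron poisons mentioned next play in a reactor: by absorbing free neutrons they lower p, pulling k back toward 1 or below.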
In a nuclear reactor, such a chain reaction is slowed and controlled by a neutron poison, absorbing some of the free
neutrons. Such neutron absorbent materials are often part of reactor control rods (see nuclear reactor physics for
a description of this process of reactor control). In nature, uranium(VI) forms highly soluble carbonate complexes
at alkaline pH. This increases the mobility and availability of uranium from nuclear wastes to groundwater and soil, which poses health hazards. However, it is difficult to precipitate uranium as phosphate in the presence
of excess carbonate at alkaline pH. A Sphingomonas sp. strain BSAR-1 has been found to express a high activity alkaline
phosphatase (PhoK) that has been applied for bioprecipitation of uranium as uranyl phosphate species from alkaline
solutions. The precipitation ability was enhanced by overexpressing PhoK protein in E. coli. It is estimated that
5.5 million tonnes of uranium exists in ore reserves that are economically viable at US$59 per lb of uranium, while
35 million tonnes are classed as mineral resources (reasonable prospects for eventual economic extraction). Prices
went from about $10/lb in May 2003 to $138/lb in July 2007. This has caused a big increase in spending on exploration,
with US$200 million being spent worldwide in 2005, a 54% increase on the previous year. This trend continued through
2006, when expenditure on exploration rocketed to over $774 million, an increase of over 250% compared to 2004. The
OECD Nuclear Energy Agency said exploration figures for 2007 would likely match those for 2006. A team led by Enrico
Fermi in 1934 observed that bombarding uranium with neutrons produces the emission of beta rays (electrons or positrons
from the elements produced; see beta particle). The fission products were at first mistaken for new elements of atomic
numbers 93 and 94, which the Dean of the Faculty of Rome, Orso Mario Corbino, christened ausonium and hesperium,
respectively. The experiments leading to the discovery of uranium's ability to fission (break apart) into lighter
elements and release binding energy were conducted by Otto Hahn and Fritz Strassmann in Hahn's laboratory in Berlin.
Lise Meitner and her nephew, the physicist Otto Robert Frisch, published the physical explanation in February 1939
and named the process "nuclear fission". Soon after, Fermi hypothesized that the fission of uranium might release
enough neutrons to sustain a fission reaction. Confirmation of this hypothesis came in 1939, and later work found
that on average about 2.5 neutrons are released by each fission of the rare uranium isotope uranium-235. Further
work found that the far more common uranium-238 isotope can be transmuted into plutonium, which, like uranium-235,
is also fissile by thermal neutrons. These discoveries led numerous countries to begin working on the development
of nuclear weapons and nuclear power. The interactions of carbonate anions with uranium(VI) cause the Pourbaix diagram
to change greatly when the medium is changed from water to a carbonate-containing solution. While the vast majority
of carbonates are insoluble in water (students are often taught that all carbonates other than those of alkali metals
are insoluble in water), uranium carbonates are often soluble in water. This is because a U(VI) cation is able to
bind two terminal oxides and three or more carbonates to form anionic complexes. Uranium is more plentiful than antimony,
tin, cadmium, mercury, or silver, and it is about as abundant as arsenic or molybdenum. Uranium is found in hundreds
of minerals, including uraninite (the most common uranium ore), carnotite, autunite, uranophane, torbernite, and
coffinite. Significant concentrations of uranium occur in some substances such as phosphate rock deposits, and minerals
such as lignite, and monazite sands in uranium-rich ores (it is recovered commercially from sources with as little
as 0.1% uranium). Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235
(0.71%), and uranium-234 (0.0054%). All three are radioactive and decay mainly by alpha emission, though each also
has a small probability of undergoing spontaneous fission instead.
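The abundances just quoted, combined with the isotopes' half-lives (values given later in this article: about 4.468×10⁹ years for uranium-238, 7.13×10⁸ years for uranium-235, and 2.48×10⁵ years for uranium-234), determine how the alpha activity of natural uranium is shared among the three. Activity is proportional to abundance divided by half-life, so a quick check is possible:

```python
# Share of natural uranium's alpha activity contributed by each isotope.
# Activity ~ N * ln(2) / half_life, so relative activity ~ abundance / half_life.

abundance = {"U-238": 0.9928, "U-235": 0.0071, "U-234": 0.000054}
half_life_years = {"U-238": 4.468e9, "U-235": 7.13e8, "U-234": 2.48e5}

relative = {iso: abundance[iso] / half_life_years[iso] for iso in abundance}
total = sum(relative.values())
shares = {iso: relative[iso] / total for iso in relative}

for iso, share in shares.items():
    print(f"{iso}: {share:.1%}")
# U-238 and U-234 each contribute roughly 49%; U-235 about 2%.
```

This reproduces the split quoted later in the article: rare but short-lived uranium-234 contributes about as much activity as the dominant uranium-238.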
There are also five other trace isotopes: uranium-239, which is formed when 238U undergoes spontaneous fission, releasing
neutrons that are captured by another 238U atom; uranium-237, which is formed when 238U captures a neutron but emits
two more, which then decays to neptunium-237; uranium-233, which is formed in the decay chain of that neptunium-237;
and finally, uranium-236 and -240, which appear in the decay chain of primordial plutonium-244. It is also expected
that thorium-232 should be able to undergo double beta decay, which would produce uranium-232, but this has not yet
been observed experimentally. An additional 4.6 billion tonnes of uranium are estimated to be in sea water (Japanese
scientists in the 1980s showed that extraction of uranium from sea water using ion exchangers was technically feasible).
There have been experiments to extract uranium from sea water, but the yield has been low due to the carbonate present
in the water. In 2012, ORNL researchers announced the successful development of a new absorbent material dubbed HiCap
which performs surface retention of solid or gas molecules, atoms or ions and also effectively removes toxic metals
from water, according to results verified by researchers at Pacific Northwest National Laboratory. Many contemporary
uses of uranium exploit its unique nuclear properties. Uranium-235 has the distinction of being the only naturally
occurring fissile isotope. Uranium-238 is fissionable by fast neutrons, and is fertile, meaning it can be transmuted
to fissile plutonium-239 in a nuclear reactor. Another fissile isotope, uranium-233, can be produced from natural
thorium and is also important in nuclear technology. While uranium-238 has a small probability for spontaneous fission
or even induced fission with fast neutrons, uranium-235 and to a lesser degree uranium-233 have a much higher fission
cross-section for slow neutrons. In sufficient concentration, these isotopes maintain a sustained nuclear chain reaction.
This generates the heat in nuclear power reactors, and produces the fissile material for nuclear weapons. Depleted
uranium (238U) is used in kinetic energy penetrators and armor plating. A person can be exposed to uranium (or its
radioactive daughters, such as radon) by inhaling dust in air or by ingesting contaminated water and food. The amount
of uranium in air is usually very small; however, people who work in factories that process phosphate fertilizers,
live near government facilities that made or tested nuclear weapons, live or work near a modern battlefield where
depleted uranium weapons have been used, or live or work near a coal-fired power plant, facilities that mine or process
uranium ore, or enrich uranium for reactor fuel, may have increased exposure to uranium. Houses or structures that
are over uranium deposits (either natural or man-made slag deposits) may have an increased incidence of exposure
to radon gas. The Occupational Safety and Health Administration (OSHA) has set the permissible exposure limit for
uranium exposure in the workplace as 0.25 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety
and Health (NIOSH) has set a recommended exposure limit (REL) of 0.2 mg/m3 over an 8-hour workday and a short-term
limit of 0.6 mg/m3. At levels of 10 mg/m3, uranium is immediately dangerous to life and health. The most common forms
of uranium oxide are triuranium octoxide (U3O8) and uranium dioxide (UO2). Both oxide forms are solids that have low
solubility in water and are relatively stable over a wide range of environmental conditions. Triuranium octoxide is
(depending on conditions) the most stable compound of uranium and is the form most commonly found in nature. Uranium
dioxide is the form in which uranium is most commonly used as a nuclear reactor fuel. At ambient temperatures, UO2
will gradually convert to U3O8. Because of their stability, uranium oxides are generally considered the preferred chemical
form for storage or disposal. The use of uranium in its natural oxide form dates back to at least the year 79 CE,
when it was used to add a yellow color to ceramic glazes. Yellow glass with 1% uranium oxide was found in a Roman
villa on Cape Posillipo in the Bay of Naples, Italy, by R. T. Gunther of the University of Oxford in 1912. Starting
in the late Middle Ages, pitchblende was extracted from the Habsburg silver mines in Joachimsthal, Bohemia (now Jáchymov
in the Czech Republic), and was used as a coloring agent in the local glassmaking industry. In the early 19th century,
the world's only known sources of uranium ore were these mines. On 2 December 1942, as part of the Manhattan Project,
another team led by Enrico Fermi was able to initiate the first artificial self-sustained nuclear chain reaction,
Chicago Pile-1. Working in a lab below the stands of Stagg Field at the University of Chicago, the team created the
conditions needed for such a reaction by piling together 400 short tons (360 metric tons) of graphite, 58 short tons
(53 metric tons) of uranium oxide, and six short tons (5.5 metric tons) of uranium metal, a majority of which was
supplied by Westinghouse Lamp Plant in a makeshift production process. Uranium is used as a colorant in uranium glass
producing orange-red to lemon yellow hues. It was also used for tinting and shading in early photography. The 1789
discovery of uranium in the mineral pitchblende is credited to Martin Heinrich Klaproth, who named the new element
after the planet Uranus. Eugène-Melchior Péligot was the first person to isolate the metal and its radioactive properties
were discovered in 1896 by Henri Becquerel. Research by Otto Hahn, Lise Meitner, Enrico Fermi and others, such as
J. Robert Oppenheimer starting in 1934 led to its use as a fuel in the nuclear power industry and in Little Boy,
the first nuclear weapon used in war. An ensuing arms race during the Cold War between the United States and the
Soviet Union produced tens of thousands of nuclear weapons that used uranium metal and uranium-derived plutonium-239.
The security of those weapons and their fissile material following the breakup of the Soviet Union in 1991 is an
ongoing concern for public health and safety. See Nuclear proliferation. In nature, uranium is found as uranium-238
(99.2742%) and uranium-235 (0.7204%). Isotope separation concentrates (enriches) the fissionable uranium-235 for
nuclear weapons and most nuclear power plants, except for gas cooled reactors and pressurised heavy water reactors.
Most neutrons released by a fissioning atom of uranium-235 must impact other uranium-235 atoms to sustain the nuclear
chain reaction. The concentration and amount of uranium-235 needed to achieve this is called a 'critical mass'. The
major application of uranium in the military sector is in high-density penetrators. This ammunition consists of depleted
uranium (DU) alloyed with 1–2% other elements, such as titanium or molybdenum. At high impact speed, the density,
hardness, and pyrophoricity of the projectile enable the destruction of heavily armored targets. Tank armor and other
removable vehicle armor can also be hardened with depleted uranium plates. The use of depleted uranium became politically
and environmentally contentious after the use of such munitions by the US, UK and other countries during wars in
the Persian Gulf and the Balkans raised questions concerning uranium compounds left in the soil (see Gulf War Syndrome).
The discovery of the element is credited to the German chemist Martin Heinrich Klaproth. While he was working in
his experimental laboratory in Berlin in 1789, Klaproth was able to precipitate a yellow compound (likely sodium
diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. Klaproth
assumed the yellow substance was the oxide of a yet-undiscovered element and heated it with charcoal to obtain a
black powder, which he thought was the newly discovered metal itself (in fact, that powder was an oxide of uranium).
He named the newly discovered element after the planet Uranus, (named after the primordial Greek god of the sky),
which had been discovered eight years earlier by William Herschel. Uranium ore is mined in several ways: by open
pit, underground, in-situ leaching, and borehole mining (see uranium mining). Low-grade uranium ore mined typically
contains 0.01 to 0.25% uranium oxides. Extensive measures must be employed to extract the metal from its ore. High-grade
ores found in Athabasca Basin deposits in Saskatchewan, Canada can contain up to 23% uranium oxides on average. Uranium
ore is crushed and rendered into a fine powder and then leached with either an acid or alkali. The leachate is subjected
to one of several sequences of precipitation, solvent extraction, and ion exchange. The resulting mixture, called
yellowcake, contains at least 75% uranium oxides U3O8. Yellowcake is then calcined to remove impurities from the
milling process before refining and conversion. Two major types of atomic bombs were developed by the United States
during World War II: a uranium-based device (codenamed "Little Boy") whose fissile material was highly enriched uranium,
and a plutonium-based device (see Trinity test and "Fat Man") whose plutonium was derived from uranium-238. The uranium-based
Little Boy device became the first nuclear weapon used in war when it was detonated over the Japanese city of Hiroshima
on 6 August 1945. Exploding with a yield equivalent to 12,500 tonnes of TNT, the blast and thermal wave of the bomb
destroyed nearly 50,000 buildings and killed approximately 75,000 people (see Atomic bombings of Hiroshima and Nagasaki).
Initially it was believed that uranium was relatively rare, and that nuclear proliferation could be avoided by simply
buying up all known uranium stocks, but within a decade large deposits of it were discovered in many places around
the world. In 2005, seventeen countries produced concentrated uranium oxides, with Canada (27.9% of world production)
and Australia (22.8%) being the largest producers and Kazakhstan (10.5%), Russia (8.0%), Namibia (7.5%), Niger (7.4%),
Uzbekistan (5.5%), the United States (2.5%), Argentina (2.1%), Ukraine (1.9%) and China (1.7%) also producing significant
amounts. Kazakhstan continues to increase production and may have become the world's largest producer of uranium
by 2009 with an expected production of 12,826 tonnes, compared to Canada with 11,100 t and Australia with 9,430 t.
In the late 1960s, UN geologists also discovered major uranium deposits and other rare mineral reserves in Somalia.
The find was the largest of its kind, with industry experts estimating the deposits at over 25% of the world's then
known uranium reserves of 800,000 tons. During the Cold War between the Soviet Union and the United States, huge
stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and
plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric
tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often
inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia,
Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade
uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control,
and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million
to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements
at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities
security consisted of chain link fences which were in severe states of disrepair. According to an interview from
the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the
improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a
shoe box. Salts of many oxidation states of uranium are water-soluble and may be studied in aqueous solutions. The
most common ionic forms are U³⁺ (brown-red), U⁴⁺ (green), UO₂⁺ (unstable), and UO₂²⁺ (yellow), for U(III), U(IV),
U(V), and U(VI), respectively. A few solid and semi-metallic compounds such as UO and US exist for the formal oxidation
state uranium(II), but no simple ions are known to exist in solution for that state. Ions of U³⁺ liberate hydrogen
from water and are therefore considered to be highly unstable. The UO₂²⁺ ion represents the uranium(VI) state and
is known to form compounds such as uranyl carbonate, uranyl chloride and uranyl sulfate. UO₂²⁺ also forms complexes
with various organic chelating agents, the most commonly encountered of which is uranyl acetate. Some organisms,
such as the lichen Trapelia involuta or microorganisms such as the bacterium Citrobacter, can absorb concentrations
of uranium that are up to 300 times the level of their environment. Citrobacter species absorb uranyl ions when given
glycerol phosphate (or other similar organic phosphates). After one day, one gram of bacteria can encrust themselves
with nine grams of uranyl phosphate crystals; this creates the possibility that these organisms could be used in
bioremediation to decontaminate uranium-polluted water. The proteobacterium Geobacter has also been shown to bioremediate
uranium in ground water. The mycorrhizal fungus Glomus intraradices increases uranium content in the roots of its
symbiotic plant. Uranium metal heated to 250 to 300 °C (482 to 572 °F) reacts with hydrogen to form uranium hydride.
Even higher temperatures will reversibly remove the hydrogen. This property makes uranium hydrides convenient starting
materials to create reactive uranium powder along with various uranium carbide, nitride, and halide compounds. Two
crystal modifications of uranium hydride exist: an α form that is obtained at low temperatures and a β form that
is created when the formation temperature is above 250 °C. Uranium carbides and uranium nitrides are both relatively
inert semimetallic compounds that are minimally soluble in acids, react with water, and can ignite in air to form
U3O8. Carbides of uranium include uranium monocarbide (UC), uranium dicarbide (UC2), and diuranium tricarbide (U2C3).
Both UC and UC2 are formed by adding carbon to molten uranium or by exposing the metal to carbon monoxide at high
temperatures. Stable below 1800 °C, U2C3 is prepared by subjecting a heated mixture of UC and UC2 to mechanical stress.
Uranium nitrides obtained by direct exposure of the metal to nitrogen include uranium mononitride (UN), uranium
dinitride (UN2), and diuranium trinitride (U2N3). To be considered 'enriched', the uranium-235 fraction should
be between 3% and 5%. This process produces huge quantities of uranium that is depleted of uranium-235 and with a
correspondingly increased fraction of uranium-238, called depleted uranium or 'DU'. To be considered 'depleted',
the uranium-235 isotope concentration should be no more than 0.3%. The price of uranium has risen since 2001, so
enrichment tailings containing more than 0.35% uranium-235 are being considered for re-enrichment, driving the price
of depleted uranium hexafluoride above $130 per kilogram in July 2007 from $5 in 2001. Normal functioning of the
kidney, brain, liver, heart, and other systems can be affected by uranium exposure, because, besides being weakly
radioactive, uranium is a toxic metal. Uranium is also a reproductive toxicant. Radiological effects are generally
local because alpha radiation, the primary form of 238U decay, has a very short range, and will not penetrate skin.
Uranyl (UO₂²⁺) ions, such as from uranium trioxide or uranyl nitrate and other hexavalent uranium compounds, have
been shown to cause birth defects and immune system damage in laboratory animals. While the CDC has published one
study finding that no human cancer has been seen as a result of exposure to natural or depleted uranium, exposure
to uranium and its decay products, especially radon, is a widely recognized and significant health threat. Exposure to strontium-90,
iodine-131, and other fission products is unrelated to uranium exposure, but may result from medical procedures or
exposure to spent reactor fuel or fallout from nuclear weapons. Although accidental inhalation exposure to a high
concentration of uranium hexafluoride has resulted in human fatalities, those deaths were associated with the generation
of highly toxic hydrofluoric acid and uranyl fluoride rather than with uranium itself. Finely divided uranium metal
presents a fire hazard because uranium is pyrophoric; small grains will ignite spontaneously in air at room temperature.
The gas centrifuge process, where gaseous uranium hexafluoride (UF 6) is separated by the difference in molecular
weight between 235UF6 and 238UF6 using high-speed centrifuges, is the cheapest and leading enrichment process. The
gaseous diffusion process had been the leading method for enrichment and was used in the Manhattan Project. In this
process, uranium hexafluoride is repeatedly diffused through a silver-zinc membrane, and the different isotopes of
uranium are separated by diffusion rate (since uranium-238 is heavier it diffuses slightly slower than uranium-235).
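Both the centrifuge and diffusion processes exploit the small mass difference between 235UF6 (molecular weight about 349) and 238UF6 (about 352). For gaseous diffusion, Graham's law gives an ideal single-stage separation factor of √(352/349) ≈ 1.0043, which is why enrichment requires a long cascade of stages. The stage-count estimate below is an idealized sketch under textbook assumptions, not a description of any actual plant:

```python
import math

# Ideal single-stage separation factor for gaseous diffusion (Graham's law):
# the lighter 235UF6 (M ~ 349) effuses slightly faster than 238UF6 (M ~ 352).
M_235UF6, M_238UF6 = 349.03, 352.04
alpha = math.sqrt(M_238UF6 / M_235UF6)
print(f"single-stage factor: {alpha:.4f}")   # ~1.0043

# Idealized stage count to enrich from natural 0.711% to 3.5% U-235,
# working in isotope-ratio space R = x / (1 - x).
def ratio(x: float) -> float:
    return x / (1.0 - x)

stages = math.log(ratio(0.035) / ratio(0.00711)) / math.log(alpha)
print(f"ideal stages: {stages:.0f}")         # several hundred
```

The near-unity stage factor is also why the centrifuge, whose per-stage factor is much larger, displaced diffusion as the leading process.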
The molecular laser isotope separation method employs a laser beam of precise energy to sever the bond between uranium-235
and fluorine. This leaves uranium-238 bonded to fluorine and allows uranium-235 metal to precipitate from the solution.
An alternative laser method of enrichment is known as atomic vapor laser isotope separation (AVLIS) and employs visible
tunable lasers such as dye lasers. Another method used is liquid thermal diffusion. Uranium is a chemical element
with symbol U and atomic number 92. It is a silvery-white metal in the actinide series of the periodic table. A uranium
atom has 92 protons and 92 electrons, of which 6 are valence electrons. Uranium is weakly radioactive because all
its isotopes are unstable (with half-lives of the six naturally known isotopes, uranium-233 to uranium-238, varying
between 69 years and 4.5 billion years). The most common isotopes of uranium are uranium-238 (which has 146 neutrons
and accounts for almost 99.3% of the uranium found in nature) and uranium-235 (which has 143 neutrons, accounting
for 0.7% of the element found naturally). Uranium has the second highest atomic weight of the primordially occurring
elements, lighter only than plutonium. Its density is about 70% higher than that of lead, but slightly lower than
that of gold or tungsten. It occurs naturally in low concentrations of a few parts per million in soil, rock and
water, and is commercially extracted from uranium-bearing minerals such as uraninite. Uranium metal reacts with almost
all non-metal elements (with an exception of the noble gases) and their compounds, with reactivity increasing with
temperature. Hydrochloric and nitric acids dissolve uranium, but non-oxidizing acids other than hydrochloric acid
attack the element very slowly. When finely divided, it can react with cold water; in air, uranium metal becomes
coated with a dark layer of uranium oxide. Uranium in ores is extracted chemically and converted into uranium dioxide
or other chemical forms usable in industry. During the later stages of World War II, the entire Cold War, and to
a lesser extent afterwards, uranium-235 has been used as the fissile explosive material to produce nuclear weapons.
Initially, two major types of fission bombs were built: a relatively simple device that uses uranium-235 and a more
complicated mechanism that uses plutonium-239 derived from uranium-238. Later, a much more complicated and far more
powerful type of fission/fusion bomb (thermonuclear weapon) was built, that uses a plutonium-based device to cause
a mixture of tritium and deuterium to undergo nuclear fusion. Such bombs are jacketed in a non-fissile (unenriched)
uranium case, and they derive more than half their power from the fission of this material by fast neutrons from
the nuclear fusion process. Uranium was also used in photographic chemicals (especially uranium nitrate as a toner),
in lamp filaments for stage lighting bulbs, to improve the appearance of dentures, and in the leather and wood industries
for stains and dyes. Uranium salts are mordants of silk or wool. Uranyl acetate and uranyl formate are used as electron-dense
"stains" in transmission electron microscopy, to increase the contrast of biological specimens in ultrathin sections
and in negative staining of viruses, isolated cell organelles and macromolecules. In 1972, the French physicist Francis
Perrin discovered fifteen ancient and no longer active natural nuclear fission reactors in three separate ore deposits
at the Oklo mine in Gabon, West Africa, collectively known as the Oklo Fossil Reactors. The ore deposit is 1.7 billion
years old; at that time, uranium-235 constituted about 3% of the total uranium on Earth. This is high enough to permit a
sustained nuclear fission chain reaction to occur, provided other supporting conditions exist. The capacity of the
surrounding sediment to contain the nuclear waste products has been cited by the U.S. federal government as supporting
evidence for the feasibility to store spent nuclear fuel at the Yucca Mountain nuclear waste repository. Uranium's
average concentration in the Earth's crust is (depending on the reference) 2 to 4 parts per million, or about 40
times as abundant as silver. The Earth's crust from the surface to 25 km (15 mi) down is calculated to contain 10¹⁷
kg (2×10¹⁷ lb) of uranium, while the oceans may contain 10¹³ kg (2×10¹³ lb). The concentration of uranium in soil
ranges from 0.7 to 11 parts per million (up to 15 parts per million in farmland soil due to use of phosphate fertilizers),
and its concentration in sea water is 3 parts per billion. Uranium-238 is the most stable isotope of uranium, with
a half-life of about 4.468×10⁹ years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×10⁸
years, and uranium-234 has a half-life of about 2.48×10⁵ years. For natural uranium, about 49% of its alpha decays
come from 238U, about 49% from 234U (since the latter is formed from the former), and about 2.0% from 235U. When
the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage
of 234U was probably much lower than this. Most ingested uranium is excreted during digestion. Only 0.5% is absorbed
when insoluble forms of uranium, such as its oxide, are ingested, whereas absorption of the more soluble uranyl ion
can be up to 5%. However, soluble uranium compounds tend to quickly pass through the body, whereas insoluble uranium
compounds, especially when inhaled by way of dust into the lungs, pose a more serious exposure hazard. After entering
the bloodstream, the absorbed uranium tends to bioaccumulate and stay for many years in bone tissue because of uranium's
affinity for phosphates. Uranium is not absorbed through the skin, and alpha particles released by uranium cannot
penetrate the skin. Calcined uranium yellowcake, as produced in many large mills, contains a distribution of uranium
oxidation species in various forms ranging from most oxidized to least oxidized. Particles with short residence times
in a calciner will generally be less oxidized than those with long retention times or particles recovered in the
stack scrubber. Uranium content is usually referenced to U3O8, which dates to the days of the Manhattan Project,
when U3O8 was used as an analytical chemistry reporting standard.
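The roughly 3% uranium-235 fraction at the time of the Oklo reactors, mentioned earlier, follows directly from the two half-lives: uranium-235 decays faster than uranium-238, so the 235U/238U ratio was higher in the past. A quick back-calculation, using the half-life and abundance values quoted in this article:

```python
import math

# Back-calculate the U-235 fraction 1.7 billion years ago from today's
# isotopic ratio and the two half-lives (values as quoted in the article).
t_half_235 = 7.13e8      # years
t_half_238 = 4.468e9     # years
lam_235 = math.log(2) / t_half_235
lam_238 = math.log(2) / t_half_238

ratio_today = 0.7204 / 99.2742             # 235U/238U atom ratio today
t = 1.7e9                                  # age of the Oklo deposit, years
ratio_then = ratio_today * math.exp((lam_235 - lam_238) * t)
fraction_then = ratio_then / (1.0 + ratio_then)
print(f"U-235 fraction 1.7 Gyr ago: {fraction_then:.1%}")  # ~3%
```

The result, close to 3%, is comparable to the enrichment of fuel in modern light-water reactors, which is what made a natural chain reaction possible at Oklo.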
Appointments to the Order of the British Empire were at first made on the nomination of the self-governing Dominions of the
Empire, the Viceroy of India, and the colonial governors, as well as on nominations from within the United Kingdom.
As the Empire evolved into the Commonwealth, nominations continued to come from the Commonwealth realms, in which
the monarch remained head of state. These overseas nominations have been discontinued in realms that have established
their own Orders—such as the Order of Australia, the Order of Canada, and the New Zealand Order of Merit—but members
of the Order are still appointed in the British Overseas Territories. Any individual made a member of the Order for
gallantry could wear an emblem of two crossed silver oak leaves on the same riband, ribbon or bow as the badge. It
could not be awarded posthumously and was effectively replaced in 1974 with the Queen's Gallantry Medal. If recipients
of the Order of the British Empire for Gallantry received promotion within the Order, whether for gallantry or otherwise,
they continued to wear also the insignia of the lower grade with the oak leaves. However, they only used the post-nominal
letters of the higher grade. Honorary knighthoods are conferred on citizens of nations where Queen Elizabeth II is
not Head of State, and may permit use of post-nominal letters but not the title of Sir or Dame. Occasionally honorary
appointees are, incorrectly, referred to as Sir or Dame – Bill Gates or Bob Geldof, for example. Honorary appointees
who later become a citizen of a Commonwealth realm can convert their appointment from honorary to substantive, then
enjoy all privileges of membership of the order including use of the title of Sir and Dame for the senior two ranks
of the Order. An example is Irish broadcaster Terry Wogan, who was appointed an honorary Knight Commander of the
Order in 2005 and on successful application for dual British and Irish citizenship was made a substantive member
and subsequently styled as "Sir Terry Wogan KBE". The Order has six officials: the Prelate; the Dean; the Secretary;
the Registrar; the King of Arms; and the Usher. The Bishop of London, a senior bishop in the Church of England, serves
as the Order's Prelate. The Dean of St Paul's is ex officio the Dean of the Order. The Order's King of Arms is not
a member of the College of Arms, unlike many other heraldic officers. The Usher of the Order is known as the Gentleman
Usher of the Purple Rod; he does not – unlike his Order of the Garter equivalent, the Gentleman Usher of the Black
Rod – perform any duties related to the House of Lords. Appointments to the Order of the British Empire were discontinued
in those Commonwealth realms that established a national system of honours and awards such as the Order of Australia,
the Order of Canada, and the New Zealand Order of Merit. In many of these systems, the different levels of award
and honour reflect the Imperial system they replaced. Canada, Australia, and New Zealand all have (in increasing
level of precedence) Members of, Officers of, and Companions of (rather than Commanders of) their respective orders,
with both Australia and New Zealand having Knights and Dames as their highest classes. The members of The Beatles
were made MBEs in 1965. John Lennon defended his investiture by comparing it with military appointments to the Order:
"Lots of people who complained about us receiving the MBE [status] received theirs for heroism in the
war – for killing people… We received ours for entertaining other people. I'd say we deserve ours more." Lennon later
returned his MBE insignia on 25 November 1969 as part of his ongoing peace protests. Other criticism centres on the
claim that many recipients of the Order are being rewarded with honours for simply doing their jobs; critics claim
that the civil service and judiciary receive far more orders and honours than leaders of other professions. The Most
Excellent Order of the British Empire is the "order of chivalry of British constitutional monarchy", rewarding contributions
to the arts and sciences, work with charitable and welfare organisations and public service outside the Civil Service.
It was established on 4 June 1917 by King George V, and comprises five classes, in civil and military divisions,
the most senior two of which make the recipient either a knight if male, or dame if female. There is also the related
British Empire Medal, whose recipients are affiliated with, but not members of, the order. At the foundation of the
Order, the "Medal of the Order of the British Empire" was instituted, to serve as a lower award granting recipients
affiliation but not membership. In 1922, this was renamed the "British Empire Medal". It stopped being awarded by
the United Kingdom as part of the 1993 reforms to the honours system, but was again awarded beginning in 2012, starting
with 293 BEMs awarded for the Queen's Diamond Jubilee. In addition, the BEM is awarded by the Cook Islands and by
some other Commonwealth nations. In 2004, a report entitled "A Matter of Honour: Reforming Our Honours System" by
a Commons committee recommended to phase out the Order of the British Empire, as its title was "now considered to
be unacceptable, being thought to embody values that are no longer shared by many of the country’s population". From
1940, the Sovereign could appoint a person as a Commander, Officer or Member of the Order of the British Empire for
gallantry for acts of bravery (not in the face of the enemy) below the level required for the George Medal. The grade
was determined by the same criteria as usual, and not by the level of gallantry (and with more junior people instead
receiving the British Empire Medal). Oddly, this meant that it was awarded for lesser acts of gallantry than the
George Medal, but, as an Order, was worn before it and listed before it in post-nominal initials. From 14 January
1958, these awards were designated the Order of the British Empire for Gallantry. Knights Grand Cross and Knights
Commander prefix Sir, and Dames Grand Cross and Dames Commander prefix Dame, to their forenames. Wives of Knights
may prefix Lady to their surnames, but no equivalent privilege exists for husbands of Knights or spouses of Dames.
Such forms are not used by peers and princes, except when the names of the former are written out in their fullest
forms. Clergy of the Church of England or the Church of Scotland do not use the title Sir or Dame as they do not
receive the accolade (i.e., they are not dubbed "knight" with a sword), although they do append the post-nominal
letters. India, while remaining an active member of the Commonwealth, chose as a republic to institute its own set
of honours awarded by the President of India who holds a republican position some consider similar to that of the
monarch in Britain. These are commonly referred to as the Padma Awards and consist of Padma Vibhushan, Padma Bhushan
and Padma Shri in descending order. These do not carry any decoration or insignia that can be worn on the person
and may not be used as titles along with individuals' names. The Order is limited to 300 Knights and Dames Grand
Cross, 845 Knights and Dames Commander, and 8,960 Commanders. There are no limits applied to the total number of
members of the fourth and fifth classes, but no more than 858 Officers and 1,464 Members may be appointed per year.
Foreign recipients, as honorary members, do not contribute to the numbers restricted to the Order as full members
do. Although the Order of the British Empire has by far the highest number of members of the British Orders of Chivalry,
with over 100,000 living members worldwide, there are fewer appointments to knighthoods than in other orders. Members
of all classes of the Order are assigned positions in the order of precedence. Wives of male members of all classes
also feature on the order of precedence, as do sons, daughters and daughters-in-law of Knights Grand Cross and Knights
Commander; relatives of Ladies of the Order, however, are not assigned any special precedence. As a general rule,
individuals can derive precedence from their fathers or husbands, but not from their mothers or wives (see order
of precedence in England and Wales for the exact positions).
Circadian rhythms allow organisms to anticipate and prepare for precise and regular environmental changes. They thus enable
organisms to best capitalize on environmental resources (e.g. light and food) compared to those that cannot predict
such availability. It has therefore been suggested that circadian rhythms put organisms at a selective advantage
in evolutionary terms. However, rhythmicity appears to be as important in regulating and coordinating internal metabolic
processes, as in coordinating with the environment. This is suggested by the maintenance (heritability) of circadian
rhythms in fruit flies after several hundred generations in constant laboratory conditions, as well as in creatures
in constant darkness in the wild, and by the experimental elimination of behavioral, but not physiological, circadian
rhythms in quail. Norwegian researchers at the University of Tromsø have shown that some Arctic animals (ptarmigan,
reindeer) show circadian rhythms only in the parts of the year that have daily sunrises and sunsets. In one study
of reindeer, animals at 70 degrees North showed circadian rhythms in the autumn, winter and spring, but not in the
summer. Reindeer on Svalbard at 78 degrees North showed such rhythms only in autumn and spring. The researchers suspect
that other Arctic animals as well may not show circadian rhythms in the constant light of summer and the constant
dark of winter. The central oscillator generates a self-sustaining rhythm and is driven by two interacting feedback
loops that are active at different times of day. The morning loop consists of CCA1 (Circadian and Clock-Associated
1) and LHY (Late Elongated Hypocotyl), which encode closely related MYB transcription factors that regulate circadian
rhythms in Arabidopsis, as well as PRR7 and PRR9 (Pseudo-Response Regulators). The evening loop consists of GI (Gigantea)
and ELF4, both involved in regulation of flowering time genes. When CCA1 and LHY are overexpressed (under constant
light or dark conditions), plants become arrhythmic, and mRNA signals reduce, contributing to a negative feedback
loop. Gene expression of CCA1 and LHY oscillates and peaks in the early morning, whereas TOC1 gene expression oscillates
and peaks in the early evening. While it was previously hypothesised that these three genes model a negative feedback
loop in which over-expressed CCA1 and LHY repress TOC1 and over-expressed TOC1 is a positive regulator of CCA1 and
LHY, it was shown in 2012 by Andrew Millar and others that TOC1 in fact serves as a repressor not only of CCA1, LHY,
and PRR7 and 9 in the morning loop but also of GI and ELF4 in the evening loop. This finding and further computational
modeling of TOC1 gene functions and interactions suggest a reframing of the plant circadian clock as a triple negative-component
repressilator model rather than the positive/negative-element feedback loop characterizing the clock in mammals.
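The triple negative-component ("repressilator") architecture described above can be sketched numerically. The snippet below is a generic three-component repression ring, not a model of measured Arabidopsis kinetics: the parameters (alpha, Hill coefficient n, unit degradation rate) are illustrative choices that place the ring in its oscillatory regime, and the function name is ours.

```python
# Minimal sketch of a three-gene ring in which each component represses the
# next, as in the triple negative-component repressilator model. Parameters
# are illustrative, not measured plant values.

def repressilator(alpha=20.0, n=3, dt=0.01, steps=20000):
    """Euler-integrate dx/dt = alpha/(1 + z**n) - x (cyclically for y, z).

    Returns the trajectory of the first component.
    """
    x, y, z = 1.0, 2.0, 4.0  # asymmetric start, away from the fixed point
    xs = []
    for _ in range(steps):
        dx = alpha / (1.0 + z**n) - x
        dy = alpha / (1.0 + x**n) - y
        dz = alpha / (1.0 + y**n) - z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
    return xs

xs = repressilator()
tail = xs[len(xs) // 2:]  # discard the initial transient
print(max(tail) - min(tail))  # a sustained swing indicates rhythmic expression
```

With these parameters the symmetric fixed point is unstable, so expression levels keep cycling rather than settling, which is the qualitative behavior the repressilator model is invoked to explain.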
A defect in the human homologue of the Drosophila "period" gene was identified as a cause of the sleep disorder FASPS
(Familial advanced sleep phase syndrome), underscoring the conserved nature of the molecular circadian clock through
evolution. Many more genetic components of the biological clock are now known. Their interactions result in an interlocked
feedback loop of gene products resulting in periodic fluctuations that the cells of the body interpret as a specific
time of the day. Because of the nature of their work, airline pilots, who often cross several time zones and
regions of sunlight and darkness in one day, and spend many hours awake both day and night, they are often unable
to maintain sleep patterns that correspond to the natural human circadian rhythm; this situation can easily lead
to fatigue. The NTSB cites this as contributing to many accidents and has conducted several
research studies in order to find methods of combating fatigue in pilots. The earliest recorded account of a circadian
process dates from the 4th century B.C.E., when Androsthenes, a ship captain serving under Alexander the Great, described
diurnal leaf movements of the tamarind tree. The observation of a circadian or diurnal process in humans is mentioned
in Chinese medical texts dated to around the 13th century, including the Noon and Midnight Manual and the Mnemonic
Rhyme to Aid in the Selection of Acu-points According to the Diurnal Cycle, the Day of the Month and the Season of
the Year. Plant circadian rhythms tell the plant what season it is and when to flower for the best chance of attracting
pollinators. Behaviors showing rhythms include leaf movement, growth, germination, stomatal/gas exchange, enzyme
activity, photosynthetic activity, and fragrance emission, among others. Circadian rhythms occur as a plant entrains
to synchronize with the light cycle of its surrounding environment. These rhythms are endogenously generated and
self-sustaining and are relatively constant over a range of ambient temperatures. Important features include two
interacting transcription-translation feedback loops: proteins containing PAS domains, which facilitate protein-protein
interactions; and several photoreceptors that fine-tune the clock to different light conditions. Anticipation of
changes in the environment allows appropriate changes in a plant's physiological state, conferring an adaptive advantage.
A better understanding of plant circadian rhythms has applications in agriculture, such as helping farmers stagger
crop harvests to extend crop availability and securing against massive losses due to weather. The simplest known
circadian clock is that of the prokaryotic cyanobacteria. Recent research has demonstrated that the circadian clock
of Synechococcus elongatus can be reconstituted in vitro with just the three proteins (KaiA, KaiB, KaiC) of their
central oscillator. This clock has been shown to sustain a 22-hour rhythm over several days upon the addition of
ATP. Previous explanations of the prokaryotic circadian timekeeper were dependent upon a DNA transcription/translation
feedback mechanism. The rhythm is linked to the light–dark cycle. Animals, including humans, kept
in total darkness for extended periods eventually function with a free-running rhythm. Their sleep cycle is pushed
back or forward each "day", depending on whether their "day", their endogenous period, is shorter or longer than
24 hours. The environmental cues that reset the rhythms each day are called zeitgebers (from the German, "time-givers").
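The drift described above is just the difference between the endogenous period and 24 hours accumulating day after day. A back-of-envelope sketch, using the 24.18-hour endogenous period quoted later in this article (the 30-day horizon is an arbitrary choice for illustration):

```python
# Free-running drift: with no zeitgebers, each subjective "day" lasts the
# endogenous period tau rather than 24 h, so the sleep cycle slides by
# (tau - 24) hours per day; negative values mean it shifts earlier.

def free_running_drift(tau_hours, days):
    """Cumulative shift (in hours) of sleep onset relative to clock time."""
    return (tau_hours - 24.0) * days

drift = free_running_drift(24.18, 30)
print(round(drift * 60))  # total drift in minutes after 30 days -> 324
```

So a period only about 11 minutes longer than the solar day accumulates to a shift of over five hours within a month, which is why daily resetting by zeitgebers matters.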
Totally blind subterranean mammals (e.g., blind mole rat Spalax sp.) are able to maintain their endogenous clocks
in the apparent absence of external stimuli. Although they lack image-forming eyes, their photoreceptors (which detect
light) are still functional; they do surface periodically as well. Melatonin is absent from the system
or undetectably low during daytime. Its onset in dim light, dim-light melatonin onset (DLMO), at roughly 21:00 (9
p.m.) can be measured in the blood or the saliva. Its major metabolite can also be measured in morning urine. Both
DLMO and the midpoint (in time) of the presence of the hormone in the blood or saliva have been used as circadian
markers. However, newer research indicates that the melatonin offset may be the more reliable marker. Benloucif et
al. found that melatonin phase markers were more stable and more highly correlated with the timing of sleep than
the core temperature minimum. They found that both sleep offset and melatonin offset are more strongly correlated
with phase markers than the onset of sleep. In addition, the declining phase of the melatonin levels is more reliable
and stable than the termination of melatonin synthesis. For temperature studies, subjects must remain awake but calm
and semi-reclined in near darkness while their rectal temperatures are taken continuously. Though variation is great
among normal chronotypes, the average human adult's temperature reaches its minimum at about 05:00 (5 a.m.), about
two hours before habitual wake time. Baehr et al. found that, in young adults, the daily body temperature minimum
occurred at about 04:00 (4 a.m.) for morning types but at about 06:00 (6 a.m.) for evening types. This minimum occurred
at approximately the middle of the eight hour sleep period for morning types, but closer to waking in evening types.
In 1896, Patrick and Gilbert observed that during a prolonged period of sleep deprivation, sleepiness increases and
decreases with a period of approximately 24 hours. In 1918, J.S. Szymanski showed that animals are capable of maintaining
24-hour activity patterns in the absence of external cues such as light and changes in temperature. In the early
20th century, circadian rhythms were noticed in the rhythmic feeding times of bees. Extensive experiments were done
by Auguste Forel, Ingeborg Beling, and Oskar Wahl to see whether this rhythm was due to an endogenous clock.
Ron Konopka and Seymour Benzer isolated the first clock mutant in Drosophila in the early 1970s and mapped
the "period" gene, the first discovered genetic determinant of behavioral rhythmicity. Joseph Takahashi discovered
the first mammalian circadian clock mutation (clockΔ19) using mice in 1994. However, recent studies show that deletion
of clock does not lead to a behavioral phenotype (the animals still have normal circadian rhythms), which questions
its importance in rhythm generation. It is now known that the molecular circadian clock can function within a single
cell; i.e., it is cell-autonomous. This was shown by Gene Block in isolated mollusk basal retinal neurons (BRNs). At
the same time, different cells may communicate with each other resulting in a synchronised output of electrical signaling.
These may interface with endocrine glands of the brain to result in periodic release of hormones. The receptors for
these hormones may be located far across the body and synchronise the peripheral clocks of various organs. Thus,
the information of the time of the day as relayed by the eyes travels to the clock in the brain, and, through that,
clocks in the rest of the body may be synchronised. This is how the timing of, for example, sleep/wake, body temperature,
thirst, and appetite is coordinately controlled by the biological clock. Light is the signal by
which plants synchronize their internal clocks to their environment and is sensed by a wide variety of photoreceptors.
Red and blue light are absorbed through several phytochromes and cryptochromes. One phytochrome, phyA, is the main
phytochrome in seedlings grown in the dark but rapidly degrades in light to produce Cry1. Phytochromes B–E are more
stable with phyB, the main phytochrome in seedlings grown in the light. The cryptochrome (cry) gene is also a light-sensitive
component of the circadian clock and is thought to be involved both as a photoreceptor and as part of the clock's
endogenous pacemaker mechanism. Cryptochromes 1–2 (involved in blue–UVA) help to maintain the period length in the
clock through a whole range of light conditions. Studies by Nathaniel Kleitman in 1938 and by Derk-Jan Dijk and Charles
Czeisler in the 1990s put human subjects on enforced 28-hour sleep–wake cycles, in constant dim light and with other
time cues suppressed, for over a month. Because people cannot entrain to a 28-hour day in dim light,
this is referred to as a forced desynchrony protocol. Sleep and wake episodes are uncoupled
from the endogenous circadian period of about 24.18 hours and researchers are allowed to assess the effects of circadian
phase on aspects of sleep and wakefulness including sleep latency and other functions. Shift-work or
chronic jet-lag have profound consequences on circadian and metabolic events in the body. Animals that are forced
to eat during their resting period show increased body mass and altered expression of clock and metabolic genes.
In humans, shift-work that favors irregular eating times is associated with altered insulin sensitivity
and higher body mass. Shift-work also leads to increased metabolic risks for cardio-metabolic syndrome, hypertension,
and inflammation. Studies conducted on both animals and humans show major bidirectional relationships between the circadian
system and drugs of abuse: these drugs appear to act on the central circadian pacemaker, and individuals
suffering from substance abuse display disrupted rhythms. These disrupted rhythms can increase the risk for substance
abuse and relapse. It is possible that genetic and/or environmental disturbances to the normal sleep and wake cycle
can increase the susceptibility to addiction. What drove circadian rhythms to evolve has been an enigmatic question.
Previous hypotheses emphasized that photosensitive proteins and circadian rhythms may have originated together in
the earliest cells, with the purpose of protecting replicating DNA from high levels of damaging ultraviolet radiation
during the daytime. As a result, replication was relegated to the dark. However, evidence for this is lacking, since
the simplest organisms with a circadian rhythm, the cyanobacteria, do the opposite: they divide more in
the daytime. Recent studies instead highlight the importance of co-evolution of redox proteins with circadian oscillators
in all three kingdoms of life following the Great Oxidation Event approximately 2.3 billion years ago. The current
view is that circadian changes in environmental oxygen levels and the production of reactive oxygen species (ROS)
in the presence of daylight are likely to have driven a need to evolve circadian rhythms to preempt, and therefore
counteract, damaging redox reactions on a daily basis. Mutations or deletions of clock gene in mice have demonstrated
the importance of body clocks to ensure the proper timing of cellular/metabolic events; clock-mutant mice are hyperphagic
and obese, and have altered glucose metabolism. In mice, deletion of the Rev-ErbA alpha clock gene facilitates diet-induced
obesity and changes the balance between glucose and lipid utilization predisposing to diabetes. However, it is not
clear whether there is a strong association between clock gene polymorphisms in humans and the susceptibility to
develop the metabolic syndrome. The primary circadian "clock" in mammals is located in the suprachiasmatic nucleus
(or nuclei) (SCN), a pair of distinct groups of cells located in the hypothalamus. Destruction of the SCN results
in the complete absence of a regular sleep–wake rhythm. The SCN receives information about illumination through the
eyes. The retina of the eye contains "classical" photoreceptors ("rods" and "cones"), which are used for conventional
vision. But the retina also contains specialized ganglion cells that are directly photosensitive, and project directly
to the SCN, where they help in the entrainment (synchronization) of this master circadian clock. Early research into
circadian rhythms suggested that most people preferred a day closer to 25 hours when isolated from external stimuli
like daylight and timekeeping. However, this research was faulty because it failed to shield the participants from
artificial light. Although subjects were shielded from time cues (like clocks) and daylight, the researchers were
not aware of the phase-delaying effects of indoor electric lights. The subjects were allowed to
turn on light when they were awake and to turn it off when they wanted to sleep. Electric light in the evening delayed
their circadian phase. A more stringent study conducted in 1999 by Harvard University estimated
the natural human rhythm to be closer to 24 hours, 11 minutes: much closer to the solar day but still not perfectly
in sync. More-or-less independent circadian rhythms are found in many organs and cells in the body outside the suprachiasmatic
nuclei (SCN), the "master clock". These clocks, called peripheral oscillators, are found in the adrenal gland,
oesophagus, lungs, liver, pancreas, spleen, thymus, and skin. Though oscillators in the
skin respond to light, a systemic influence has not been proven. There is also some evidence that the olfactory bulb
and prostate may experience oscillations when cultured, suggesting that these structures may also be weak oscillators.
Elizabeth's many historic visits and meetings include a state visit to the Republic of Ireland and reciprocal visits to and
from the Pope. She has seen major constitutional changes, such as devolution in the United Kingdom, Canadian patriation,
and the decolonisation of Africa. She has also reigned through various wars and conflicts involving many of her realms.
She is the world's oldest reigning monarch as well as Britain's longest-lived. In 2015, she surpassed the reign of
her great-great-grandmother, Queen Victoria, to become the longest-reigning British head of state and the longest-reigning
queen regnant in world history. During the war, plans were drawn up to quell Welsh nationalism by affiliating Elizabeth
more closely with Wales. Proposals, such as appointing her Constable of Caernarfon Castle or a patron of Urdd Gobaith
Cymru (the Welsh League of Youth), were abandoned for various reasons, which included a fear of associating Elizabeth
with conscientious objectors in the Urdd, at a time when Britain was at war. Welsh politicians suggested that she
be made Princess of Wales on her 18th birthday. The Home Secretary, Herbert Morrison, supported the idea, but the King
rejected it because he felt such a title belonged solely to the wife of a Prince of Wales and the Prince of Wales
had always been the heir apparent. In 1946, she was inducted into the Welsh Gorsedd of Bards at the National Eisteddfod
of Wales. Elizabeth and Philip were married on 20 November 1947 at Westminster Abbey. They received 2,500 wedding
gifts from around the world. Because Britain had not yet completely recovered from the devastation of the war, Elizabeth
required ration coupons to buy the material for her gown, which was designed by Norman Hartnell. In post-war Britain,
it was not acceptable for the Duke of Edinburgh's German relations, including his three surviving sisters, to be
invited to the wedding. The Duke of Windsor, formerly King Edward VIII, was not invited either. Amid preparations
for the coronation, Princess Margaret informed her sister that she wished to marry Peter Townsend, a divorcé, 16
years Margaret's senior, with two sons from his previous marriage. The Queen asked them to wait for a year; in the
words of Martin Charteris, "the Queen was naturally sympathetic towards the Princess, but I think she thought—she
hoped—given time, the affair would peter out." Senior politicians were against the match and the Church of England
did not permit remarriage after divorce. If Margaret had contracted a civil marriage, she would have been expected
to renounce her right of succession. Eventually, she decided to abandon her plans with Townsend. In 1960, she married
Antony Armstrong-Jones, who was created Earl of Snowdon the following year. They divorced in 1978; she did not remarry.
The Suez crisis and the choice of Eden's successor led in 1957 to the first major personal criticism of the Queen.
In a magazine, which he owned and edited, Lord Altrincham accused her of being "out of touch". Altrincham was denounced
by public figures and slapped by a member of the public appalled by his comments. Six years later, in 1963, Macmillan
resigned and advised the Queen to appoint the Earl of Home as prime minister, advice that she followed. The Queen
again came under criticism for appointing the prime minister on the advice of a small number of ministers or a single
minister. In 1965, the Conservatives adopted a formal mechanism for electing a leader, thus relieving her of involvement.
In 1975, at the height of the Australian constitutional crisis, the Australian Prime Minister, Gough Whitlam,
was dismissed from his post by Governor-General Sir John Kerr, after the Opposition-controlled Senate rejected Whitlam's
budget proposals. As Whitlam had a majority in the House of Representatives, Speaker Gordon Scholes appealed to the
Queen to reverse Kerr's decision. She declined, stating that she would not interfere in decisions reserved by the
Constitution of Australia for the governor-general. The crisis fuelled Australian republicanism. In 1987, in Canada,
Elizabeth publicly supported politically divisive constitutional amendments, prompting criticism from opponents of
the proposed changes, including Pierre Trudeau. The same year, the elected Fijian government was deposed in a military
coup. Elizabeth, as monarch of Fiji, supported the attempts of the Governor-General, Ratu Sir Penaia Ganilau, to
assert executive power and negotiate a settlement. Coup leader Sitiveni Rabuka deposed Ganilau and declared Fiji
a republic. By the start of 1991, republican feeling in Britain had risen because of press estimates of the Queen's
private wealth—which were contradicted by the Palace—and reports of affairs and strained marriages among her extended
family. The involvement of the younger royals in the charity game show It's a Royal Knockout was ridiculed and the
Queen was the target of satire. In 2002, Elizabeth marked her Golden Jubilee. Her sister and mother died in February
and March respectively, and the media speculated whether the Jubilee would be a success or a failure. She again undertook
an extensive tour of her realms, which began in Jamaica in February, where she called the farewell banquet "memorable"
after a power cut plunged the King's House, the official residence of the governor-general, into darkness. As in
1977, there were street parties and commemorative events, and monuments were named to honour the occasion. A million
people attended each day of the three-day main Jubilee celebration in London, and the enthusiasm shown by the public
for the Queen was greater than many journalists had expected. The Queen, who opened the 1976 Summer Olympics in Montreal,
also opened the 2012 Summer Olympics and Paralympics in London, making her the first head of state to open two Olympic
Games in two different countries. For the London Olympics, she played herself in a short film as part of the opening
ceremony, alongside Daniel Craig as James Bond. On 4 April 2013, she received an honorary BAFTA for her patronage
of the film industry and was called "the most memorable Bond girl yet" at the award ceremony. Elizabeth's personal
fortune has been the subject of speculation for many years. Jock Colville, who was her former private secretary and
a director of her bank, Coutts, estimated her wealth in 1971 at £2 million (equivalent to about £25 million today).
In 1993, Buckingham Palace called estimates of £100 million "grossly overstated". She inherited an estimated £70
million estate from her mother in 2002. The Sunday Times Rich List 2015 estimated her private wealth at £340 million,
making her the 302nd richest person in the UK. Times of personal significance have included the births and marriages
of her children, grandchildren and great-grandchildren, her coronation in 1953, and the celebration of milestones
such as her Silver, Golden and Diamond Jubilees in 1977, 2002, and 2012, respectively. Moments of sadness for her
include the death of her father, aged 56; the assassination of Prince Philip's uncle, Lord Mountbatten; the breakdown
of her children's marriages in 1992 (her annus horribilis); the death in 1997 of her son's former wife, Diana, Princess
of Wales; and the deaths of her mother and sister in 2002. Elizabeth has occasionally faced republican sentiments
and severe press criticism of the royal family, but support for the monarchy and her personal popularity remain high.
Despite the death of Queen Mary on 24 March, the coronation on 2 June 1953 went ahead as planned, as Mary had asked
before she died. The ceremony in Westminster Abbey, with the exception of the anointing and communion, was televised
for the first time. Elizabeth's coronation gown was embroidered on her instructions with the floral emblems of
Commonwealth countries: English Tudor rose; Scots thistle; Welsh leek; Irish shamrock; Australian wattle; Canadian
maple leaf; New Zealand silver fern; South African protea; lotus flowers for India and Ceylon; and Pakistan's wheat,
cotton, and jute. In 1957, she made a state visit to the United States, where she addressed the United Nations General
Assembly on behalf of the Commonwealth. On the same tour, she opened the 23rd Canadian Parliament, becoming the first
monarch of Canada to open a parliamentary session. Two years later, solely in her capacity as Queen of Canada, she
revisited the United States and toured Canada. In 1961, she toured Cyprus, India, Pakistan, Nepal, and Iran. On a
visit to Ghana the same year, she dismissed fears for her safety, even though her host, President Kwame Nkrumah,
who had replaced her as head of state, was a target for assassins. Harold Macmillan wrote, "The Queen has been absolutely
determined all through ... She is impatient of the attitude towards her to treat her as ... a film star ... She has
indeed 'the heart and stomach of a man' ... She loves her duty and means to be a Queen." Before her tour through
parts of Quebec in 1964, the press reported that extremists within the Quebec separatist movement were plotting Elizabeth's
assassination. No attempt was made, but a riot did break out while she was in Montreal; the Queen's "calmness and
courage in the face of the violence" was noted. In 1977, Elizabeth marked the Silver Jubilee of her accession. Parties
and events took place throughout the Commonwealth, many coinciding with her associated national and Commonwealth
tours. The celebrations re-affirmed the Queen's popularity, despite virtually coincident negative press coverage
of Princess Margaret's separation from her husband. In 1978, the Queen endured a state visit to the United Kingdom
by Romania's communist dictator, Nicolae Ceaușescu, and his wife, Elena, though privately she thought they had "blood
on their hands". The following year brought two blows: one was the unmasking of Anthony Blunt, former Surveyor of
the Queen's Pictures, as a communist spy; the other was the assassination of her relative and in-law Lord Mountbatten
by the Provisional Irish Republican Army. In the 1950s, as a young woman at the start of her reign, Elizabeth was
depicted as a glamorous "fairytale Queen". After the trauma of the Second World War, it was a time of hope, a period
of progress and achievement heralding a "new Elizabethan age". Lord Altrincham's accusation in 1957 that her speeches
sounded like those of a "priggish schoolgirl" was an extremely rare criticism. In the late 1960s, attempts to portray
a more modern image of the monarchy were made in the television documentary Royal Family and by televising Prince
Charles's investiture as Prince of Wales. In public, she took to wearing mostly solid-colour overcoats and decorative
hats, which allow her to be seen easily in a crowd. The Royal Collection, which includes thousands of historic works
of art and the Crown Jewels, is not owned by the Queen personally but is held in trust, as are her official residences,
such as Buckingham Palace and Windsor Castle, and the Duchy of Lancaster, a property portfolio valued in 2014 at
£442 million. Sandringham House and Balmoral Castle are privately owned by the Queen. The British Crown Estate –
with holdings of £9.4 billion in 2014 – is held in trust by the sovereign and cannot be sold or owned by Elizabeth
in a private capacity. Elizabeth's only sibling, Princess Margaret, was born in 1930. The two princesses were educated
at home under the supervision of their mother and their governess, Marion Crawford, who was casually known as "Crawfie".
Lessons concentrated on history, language, literature and music. Crawford published a biography of Elizabeth and
Margaret's childhood years entitled The Little Princesses in 1950, much to the dismay of the royal family. The book
describes Elizabeth's love of horses and dogs, her orderliness, and her attitude of responsibility. Others echoed
such observations: Winston Churchill described Elizabeth when she was two as "a character. She has an air of authority
and reflectiveness astonishing in an infant." Her cousin Margaret Rhodes described her as "a jolly little girl, but
fundamentally sensible and well-behaved". In 1943, at the age of 16, Elizabeth undertook her first solo public appearance
on a visit to the Grenadier Guards, of which she had been appointed colonel the previous year. As she approached
her 18th birthday, parliament changed the law so that she could act as one of five Counsellors of State in the event
of her father's incapacity or absence abroad, such as his visit to Italy in July 1944. In February 1945, she joined
the Women's Auxiliary Territorial Service as an honorary second subaltern with the service number of 230873. She
trained as a driver and mechanic and was promoted to honorary junior commander five months later. The engagement
was not without controversy: Philip had no financial standing, was foreign-born (though a British subject who had
served in the Royal Navy throughout the Second World War), and had sisters who had married German noblemen with Nazi
links. Marion Crawford wrote, "Some of the King's advisors did not think him good enough for her. He was a prince
without a home or kingdom. Some of the papers played long and loud tunes on the string of Philip's foreign origin."
Elizabeth's mother was reported, in later biographies, to have opposed the union initially, even dubbing Philip "The
Hun". In later life, however, she told biographer Tim Heald that Philip was "an English gentleman". During 1951,
George VI's health declined and Elizabeth frequently stood in for him at public events. When she toured Canada and
visited President Harry S. Truman in Washington, D.C., in October 1951, her private secretary, Martin Charteris,
carried a draft accession declaration in case the King died while she was on tour. In early 1952, Elizabeth and Philip
set out for a tour of Australia and New Zealand by way of Kenya. On 6 February 1952, they had just returned to their
Kenyan home, Sagana Lodge, after a night spent at Treetops Hotel, when word arrived of the death of the King and
consequently Elizabeth's immediate accession to the throne. Philip broke the news to the new Queen. Martin Charteris
asked her to choose a regnal name; she chose to remain Elizabeth, "of course". She was proclaimed queen throughout
her realms and the royal party hastily returned to the United Kingdom. She and the Duke of Edinburgh moved into Buckingham
Palace. In 1956, the British and French prime ministers, Sir Anthony Eden and Guy Mollet, discussed the possibility
of France joining the Commonwealth. The proposal was never accepted and the following year France signed the Treaty
of Rome, which established the European Economic Community, the precursor to the European Union. In November 1956,
Britain and France invaded Egypt in an ultimately unsuccessful attempt to capture the Suez Canal. Lord Mountbatten
claimed the Queen was opposed to the invasion, though Eden denied it. Eden resigned two months later. The 1960s and
1970s saw an acceleration in the decolonisation of Africa and the Caribbean. Over 20 countries gained independence
from Britain as part of a planned transition to self-government. In 1965, however, the Rhodesian Prime Minister,
Ian Smith, in opposition to moves toward majority rule, declared unilateral independence from Britain while still
expressing "loyalty and devotion" to Elizabeth. Although the Queen dismissed him in a formal declaration, and the
international community applied sanctions against Rhodesia, his regime survived for over a decade. As Britain's ties
to its former empire weakened, the British government sought entry to the European Community, a goal it achieved
in 1973. During the 1981 Trooping the Colour ceremony and only six weeks before the wedding of Charles, Prince of
Wales, and Lady Diana Spencer, six shots were fired at the Queen from close range as she rode down The Mall on her
horse, Burmese. Police later discovered that the shots were blanks. The 17-year-old assailant, Marcus Sarjeant, was
sentenced to five years in prison and released after three. The Queen's composure and skill in controlling her mount
were widely praised. From April to September 1982, the Queen remained anxious but proud of her son, Prince Andrew,
who was serving with British forces during the Falklands War. On 9 July, the Queen awoke in her bedroom at Buckingham
Palace to find an intruder, Michael Fagan, in the room with her. Remaining calm and through two calls to the Palace
police switchboard, she spoke to Fagan while he sat at the foot of her bed until assistance arrived seven minutes
later. Though she hosted US President Ronald Reagan at Windsor Castle in 1982 and visited his Californian ranch in
1983, she was angered when his administration ordered the invasion of Grenada, one of her Caribbean realms, without
informing her. In the years to follow, public revelations on the state of Charles and Diana's marriage continued.
Even though support for republicanism in Britain seemed higher than at any time in living memory, republicanism was
still a minority viewpoint, and the Queen herself had high approval ratings. Criticism was focused on the institution
of the monarchy itself and the Queen's wider family rather than her own behaviour and actions. In consultation with
her husband and the Prime Minister, John Major, as well as the Archbishop of Canterbury, George Carey, and her private
secretary, Robert Fellowes, she wrote to Charles and Diana at the end of December 1995, saying that a divorce was
desirable. The Queen addressed the United Nations for a second time in 2010, again in her capacity as Queen of all
Commonwealth realms and Head of the Commonwealth. The UN Secretary General, Ban Ki-moon, introduced her as "an anchor
for our age". During her visit to New York, which followed a tour of Canada, she officially opened a memorial garden
for the British victims of the September 11 attacks. The Queen's visit to Australia in October 2011 – her sixteenth
visit since 1954 – was called her "farewell tour" in the press because of her age. From 21 April 1944 until her accession,
Elizabeth's arms consisted of a lozenge bearing the royal coat of arms of the United Kingdom differenced with a label
of three points argent, the centre point bearing a Tudor rose and the first and third a cross of St George. Upon
her accession, she inherited the various arms her father held as sovereign. The Queen also possesses royal standards
and personal flags for use in the United Kingdom, Canada, Australia, New Zealand, Jamaica, Barbados, and elsewhere.
Elizabeth was born in London to the Duke and Duchess of York, later King George VI and Queen Elizabeth, and was the
elder of their two daughters. She was educated privately at home. Her father acceded to the throne on the abdication
of his brother Edward VIII in 1936, from which time she was the heir presumptive. She began to undertake public duties
during World War II, serving in the Auxiliary Territorial Service. In 1947, she married Philip, Duke of Edinburgh,
with whom she has four children: Charles, Anne, Andrew, and Edward. During her grandfather's reign, Elizabeth was
third in the line of succession to the throne, behind her uncle Edward, Prince of Wales, and her father, the Duke
of York. Although her birth generated public interest, she was not expected to become queen, as the Prince of Wales
was still young, and many assumed that he would marry and have children of his own. When her grandfather died in
1936 and her uncle succeeded as Edward VIII, she became second-in-line to the throne, after her father. Later that
year Edward abdicated, after his proposed marriage to divorced socialite Wallis Simpson provoked a constitutional
crisis. Consequently, Elizabeth's father became king, and she became heir presumptive. If her parents had had a later
son, she would have lost her position as first-in-line, as her brother would have been heir apparent and above her
in the line of succession. With Elizabeth's accession, it seemed probable that the royal house would bear her husband's
name, becoming the House of Mountbatten, in line with the custom of a wife taking her husband's surname on marriage.
The British Prime Minister, Winston Churchill, and Elizabeth's grandmother, Queen Mary, favoured the retention of
the House of Windsor, and so on 9 April 1952 Elizabeth issued a declaration that Windsor would continue to be the
name of the royal house. The Duke complained, "I am the only man in the country not allowed to give his name to his
own children." In 1960, after the death of Queen Mary in 1953 and the resignation of Churchill in 1955, the surname
Mountbatten-Windsor was adopted for Philip and Elizabeth's male-line descendants who do not carry royal titles. The
absence of a formal mechanism within the Conservative Party for choosing a leader meant that, following Eden's resignation,
it fell to the Queen to decide whom to commission to form a government. Eden recommended that she consult Lord Salisbury,
the Lord President of the Council. Lord Salisbury and Lord Kilmuir, the Lord Chancellor, consulted the British Cabinet,
Winston Churchill, and the Chairman of the backbench 1922 Committee, resulting in the Queen appointing their recommended
candidate: Harold Macmillan. In February 1974, the British Prime Minister, Edward Heath, advised the Queen to call
a general election in the middle of her tour of the Austronesian Pacific Rim, requiring her to fly back to Britain.
The election resulted in a hung parliament; Heath's Conservatives were not the largest party, but could stay in office
if they formed a coalition with the Liberals. Heath only resigned when discussions on forming a coalition foundered,
after which the Queen asked the Leader of the Opposition, Labour's Harold Wilson, to form a government. Intense media
interest in the opinions and private lives of the royal family during the 1980s led to a series of sensational stories
in the press, not all of which were entirely true. As Kelvin MacKenzie, editor of The Sun, told his staff: "Give
me a Sunday for Monday splash on the Royals. Don't worry if it's not true—so long as there's not too much of a fuss
about it afterwards." Newspaper editor Donald Trelford wrote in The Observer of 21 September 1986: "The royal soap
opera has now reached such a pitch of public interest that the boundary between fact and fiction has been lost sight
of ... it is not just that some papers don't check their facts or accept denials: they don't care if the stories
are true or not." It was reported, most notably in The Sunday Times of 20 July 1986, that the Queen was worried that
Margaret Thatcher's economic policies fostered social divisions and was alarmed by high unemployment, a series of
riots, the violence of a miners' strike, and Thatcher's refusal to apply sanctions against the apartheid regime in
South Africa. The sources of the rumours included royal aide Michael Shea and Commonwealth Secretary-General Shridath
Ramphal, but Shea claimed his remarks were taken out of context and embellished by speculation. Thatcher reputedly
said the Queen would vote for the Social Democratic Party—Thatcher's political opponents. Thatcher's biographer John
Campbell claimed "the report was a piece of journalistic mischief-making". Belying reports of acrimony between them,
Thatcher later conveyed her personal admiration for the Queen, and the Queen gave two honours in her personal gift—membership
in the Order of Merit and the Order of the Garter—to Thatcher after her replacement as prime minister by John Major.
Former Canadian Prime Minister Brian Mulroney said Elizabeth was a "behind the scenes force" in ending apartheid.
In 1997, a year after the divorce, Diana was killed in a car crash in Paris. The Queen was on holiday with her extended
family at Balmoral. Diana's two sons by Charles—Princes William and Harry—wanted to attend church and so the Queen
and Prince Philip took them that morning. After that single public appearance, for five days the Queen and the Duke
shielded their grandsons from the intense press interest by keeping them at Balmoral where they could grieve in private,
but the royal family's seclusion and the failure to fly a flag at half-mast over Buckingham Palace caused public
dismay. Pressured by the hostile reaction, the Queen agreed to return to London and do a live television broadcast
on 5 September, the day before Diana's funeral. In the broadcast, she expressed admiration for Diana and her feelings
"as a grandmother" for the two princes. As a result, much of the public hostility evaporated. Her Diamond Jubilee
in 2012 marked 60 years on the throne, and celebrations were held throughout her realms, the wider Commonwealth,
and beyond. In a message released on Accession Day, she stated: "In this special year, as I dedicate myself anew
to your service, I hope we will all be reminded of the power of togetherness and the convening strength of family,
friendship and good neighbourliness ... I hope also that this Jubilee year will be a time to give thanks for the
great advances that have been made since 1952 and to look forward to the future with clear head and warm heart".
She and her husband undertook an extensive tour of the United Kingdom, while her children and grandchildren embarked
on royal tours of other Commonwealth states on her behalf. On 4 June, Jubilee beacons were lit around the world.
On 18 December, she became the first British sovereign to attend a peacetime Cabinet meeting since George III in
1781. Since Elizabeth rarely gives interviews, little is known of her personal feelings. As a constitutional monarch,
she has not expressed her own political opinions in a public forum. She does have a deep sense of religious and civic
duty, and takes her coronation oath seriously. Aside from her official religious role as Supreme Governor of the
established Church of England, she is personally a member of that church and the national Church of Scotland. She
has demonstrated support for inter-faith relations and has met with leaders of other churches and religions, including
five popes: Pius XII, John XXIII, John Paul II, Benedict XVI and Francis. A personal note about her faith often features
in her annual Christmas message broadcast to the Commonwealth. In 2000, she spoke about the theological significance
of the millennium marking the 2000th anniversary of the birth of Jesus. Elizabeth was born at 02:40 (GMT) on 21 April
1926, during the reign of her paternal grandfather, King George V. Her father, Prince Albert, Duke of York (later
King George VI), was the second son of the King. Her mother, Elizabeth, Duchess of York (later Queen Elizabeth),
was the youngest daughter of Scottish aristocrat Claude Bowes-Lyon, 14th Earl of Strathmore and Kinghorne. She was
delivered by Caesarean section at her maternal grandfather's London house: 17 Bruton Street, Mayfair. She was baptised
by the Anglican Archbishop of York, Cosmo Gordon Lang, in the private chapel of Buckingham Palace on 29 May,[c] and
named Elizabeth after her mother, Alexandra after George V's mother, who had died six months earlier, and Mary after
her paternal grandmother. Called "Lilibet" by her close family, based on what she called herself at first, she was
cherished by her grandfather George V, and during his serious illness in 1929 her regular visits were credited in
the popular press and by later biographers with raising his spirits and aiding his recovery. In September 1939, Britain
entered the Second World War, which lasted until 1945. During the war, many of London's children were evacuated to
avoid the frequent aerial bombing. The suggestion by senior politician Lord Hailsham that the two princesses should
be evacuated to Canada was rejected by Elizabeth's mother, who declared, "The children won't go without me. I won't
leave without the King. And the King will never leave." Princesses Elizabeth and Margaret stayed at Balmoral Castle,
Scotland, until Christmas 1939, when they moved to Sandringham House, Norfolk. From February to May 1940, they lived
at Royal Lodge, Windsor, until moving to Windsor Castle, where they lived for most of the next five years. At Windsor,
the princesses staged pantomimes at Christmas in aid of the Queen's Wool Fund, which bought yarn to knit into military
garments. In 1940, the 14-year-old Elizabeth made her first radio broadcast during the BBC's Children's Hour, addressing
other children who had been evacuated from the cities. She stated: "We are trying to do all we can to help our gallant
sailors, soldiers and airmen, and we are trying, too, to bear our share of the danger and sadness of war. We know,
every one of us, that in the end all will be well." Following their wedding, the couple leased Windlesham Moor, near
Windsor Castle, until 4 July 1949, when they took up residence at Clarence House in London. At various times between
1949 and 1951, the Duke of Edinburgh was stationed in the British Crown Colony of Malta as a serving Royal Navy officer.
He and Elizabeth lived intermittently, for several months at a time, in the hamlet of Gwardamanġa, at Villa Guardamangia,
the rented home of Philip's uncle, Lord Mountbatten. The children remained in Britain. From Elizabeth's birth onwards,
the British Empire continued its transformation into the Commonwealth of Nations. By the time of her accession in
1952, her role as head of multiple independent states was already established. In 1953, the Queen and her husband
embarked on a seven-month round-the-world tour, visiting 13 countries and covering more than 40,000 miles by land,
sea and air. She became the first reigning monarch of Australia and New Zealand to visit those nations. During the
tour, crowds were immense; three-quarters of the population of Australia were estimated to have seen her. Throughout
her reign, the Queen has made hundreds of state visits to other countries and tours of the Commonwealth; she is the
most widely travelled head of state. According to Paul Martin, Sr., by the end of the 1970s the Queen was worried
that the Crown "had little meaning for" Pierre Trudeau, the Canadian Prime Minister. Tony Benn said that the Queen
found Trudeau "rather disappointing". Trudeau's supposed republicanism seemed to be confirmed by his antics, such
as sliding down banisters at Buckingham Palace and pirouetting behind the Queen's back in 1977, and the removal of
various Canadian royal symbols during his term of office. In 1980, Canadian politicians sent to London to discuss
the patriation of the Canadian constitution found the Queen "better informed ... than any of the British politicians
or bureaucrats". She was particularly interested after the failure of Bill C-60, which would have affected her role
as head of state. Patriation removed the role of the British parliament from the Canadian constitution, but the monarchy
was retained. Trudeau said in his memoirs that the Queen favoured his attempt to reform the constitution and that
he was impressed by "the grace she displayed in public" and "the wisdom she showed in private". In a speech on 24
November 1992, to mark the 40th anniversary of her accession, Elizabeth called 1992 her annus horribilis, meaning
horrible year. In March, her second son, Prince Andrew, Duke of York, and his wife, Sarah, separated; in April, her
daughter, Princess Anne, divorced Captain Mark Phillips; during a state visit to Germany in October, angry demonstrators
in Dresden threw eggs at her; and, in November, a large fire broke out at Windsor Castle, one of her official residences.
The monarchy came under increased criticism and public scrutiny. In an unusually personal speech, the Queen said
that any institution must expect criticism, but suggested it be done with "a touch of humour, gentleness and understanding".
Two days later, the Prime Minister, John Major, announced reforms to the royal finances planned since the previous
year, including the Queen paying income tax from 1993 onwards, and a reduction in the civil list. In December, Prince
Charles and his wife, Diana, formally separated. The year ended with a lawsuit as the Queen sued The Sun newspaper
for breach of copyright when it published the text of her annual Christmas message two days before it was broadcast.
The newspaper was forced to pay her legal fees and donated £200,000 to charity. In May 2007, The Daily Telegraph,
citing unnamed sources, reported that the Queen was "exasperated and frustrated" by the policies of the British Prime
Minister, Tony Blair, that she was concerned the British Armed Forces were overstretched in Iraq and Afghanistan,
and that she had raised concerns over rural and countryside issues with Blair. She was, however, said to admire Blair's
efforts to achieve peace in Northern Ireland. On 20 March 2008, at the Church of Ireland St Patrick's Cathedral,
Armagh, the Queen attended the first Maundy service held outside England and Wales. At the invitation of the Irish
President, Mary McAleese, the Queen made the first state visit to the Republic of Ireland by a British monarch in
May 2011. The Queen surpassed her great-great-grandmother, Queen Victoria, to become the longest-lived British monarch
in December 2007, and the longest-reigning British monarch on 9 September 2015. She was celebrated in Canada as the
"longest-reigning sovereign in Canada's modern era". (King Louis XIV of France reigned over part of Canada for longer.)
She is the longest-reigning queen regnant in history, the world's oldest reigning monarch and second-longest-serving
current head of state after King Bhumibol Adulyadej of Thailand. At her Silver Jubilee in 1977, the crowds and celebrations
were genuinely enthusiastic, but in the 1980s, public criticism of the royal family increased, as the personal and
working lives of Elizabeth's children came under media scrutiny. Elizabeth's popularity sank to a low point in the
1990s. Under pressure from public opinion, she began to pay income tax for the first time, and Buckingham Palace
was opened to the public. Discontent with the monarchy reached its peak on the death of Diana, Princess of Wales,
though Elizabeth's personal popularity and support for the monarchy rebounded after her live television broadcast
to the world five days after Diana's death. Elizabeth has held many titles and honorary military positions throughout
the Commonwealth, is Sovereign of many orders in her own countries, and has received honours and awards from around
the world. In each of her realms she has a distinct title that follows a similar formula: Queen of Jamaica and her
other realms and territories in Jamaica, Queen of Australia and her other realms and territories in Australia, etc.
In the Channel Islands and Isle of Man, which are Crown dependencies rather than separate realms, she is known as
Duke of Normandy and Lord of Mann, respectively. Additional styles include Defender of the Faith and Duke of Lancaster.
When in conversation with the Queen, the practice is to initially address her as Your Majesty and thereafter as Ma'am.
Scientists do not know the exact cause of sexual orientation, but they believe that it is caused by a complex interplay of
genetic, hormonal, and environmental influences. They favor biologically-based theories, which point to genetic factors,
the early uterine environment, both, or a combination of genetic and social factors. There is no substantive evidence
which suggests parenting or early childhood experiences play a role when it comes to sexual orientation. Research
over several decades has demonstrated that sexual orientation ranges along a continuum, from exclusive attraction
to the opposite sex to exclusive attraction to the same sex. Sexual identity and sexual behavior are closely related
to sexual orientation, but they are distinguished, with sexual identity referring to an individual's conception of
themselves, behavior referring to actual sexual acts performed by the individual, and orientation referring to "fantasies,
attachments and longings." Individuals may or may not express their sexual orientation in their behaviors. People
who have a homosexual sexual orientation that does not align with their sexual identity are sometimes referred to
as 'closeted'. The term may, however, reflect a certain cultural context and particular stage of transition in societies
which are gradually dealing with integrating sexual minorities. In studies related to sexual orientation, when dealing
with the degree to which a person's sexual attractions, behaviors and identity match, scientists usually use the
terms concordance or discordance. Thus, a woman who is attracted to other women, but calls herself heterosexual and
only has sexual relations with men, can be said to experience discordance between her sexual orientation (homosexual
or lesbian) and her sexual identity and behaviors (heterosexual). Gay and lesbian people can have sexual relationships
with someone of the opposite sex for a variety of reasons, including the desire for a perceived traditional family
and concerns of discrimination and religious ostracism. While some LGBT people hide their respective orientations
from their spouses, others develop positive gay and lesbian identities while maintaining successful heterosexual
marriages. Coming out of the closet to oneself, a spouse of the opposite sex, and children can present challenges
that are not faced by gay and lesbian people who are not married to people of the opposite sex or do not have children.
No major mental health professional organization has sanctioned efforts to change sexual orientation and virtually
all of them have adopted policy statements cautioning the profession and the public about treatments that purport
to change sexual orientation. These include the American Psychiatric Association, American Psychological Association,
American Counseling Association, National Association of Social Workers in the USA, the Royal College of Psychiatrists,
and the Australian Psychological Society. From at least the late nineteenth century in Europe, there was speculation
that the range of human sexual response looked more like a continuum than two or three discrete categories. Berlin
sexologist Magnus Hirschfeld published a scheme in 1896 that measured the strength of an individual's sexual desire
on two independent 10-point scales, A (homosexual) and B (heterosexual). A heterosexual individual may be A0, B5;
a homosexual individual may be A5, B0; an asexual would be A0, B0; and someone with an intense attraction to both
sexes would be A9, B9. A third concern with the Kinsey scale is that it inappropriately measures heterosexuality
and homosexuality on the same scale, making one a tradeoff of the other. Research in the 1970s on masculinity and
femininity found that concepts of masculinity and femininity are more appropriately measured as independent concepts
on a separate scale rather than as a single continuum, with each end representing opposite extremes. When compared
on the same scale, they act as tradeoffs: to be more feminine, one must be less masculine, and vice versa. However, if they are considered as separate dimensions, one can be simultaneously very masculine and very feminine.
Similarly, considering heterosexuality and homosexuality on separate scales would allow one to be both very heterosexual
and very homosexual or not very much of either. When they are measured independently, the degree of heterosexual
and homosexual can be independently determined, rather than the balance between heterosexual and homosexual as determined
using the Kinsey Scale. Research focusing on sexual orientation uses scales of assessment to identify who belongs
in which sexual population group. It is assumed that these scales will be able to reliably identify and categorize
people by their sexual orientation. However, it is difficult to determine an individual's sexual orientation through
scales of assessment, due to ambiguity regarding the definition of sexual orientation. Generally, there are three
components of sexual orientation used in assessment. Their definitions and examples of how they may be assessed are
as follows: Depending on which component of sexual orientation is being assessed and referenced, different conclusions
can be drawn about the prevalence rate of homosexuality, which has real-world consequences. Knowing how much of the
population is made up of homosexual individuals influences how this population may be seen or treated by the public
and government bodies. For example, if homosexual individuals constitute only 1% of the general population, they are politically easier to ignore than if they are known to be a constituency that surpasses most ethnic and other minority groups. If the number is relatively small, then it is difficult to argue for community-based same-sex programs and services, mass-media inclusion of gay role models, or Gay/Straight Alliances in schools. For this reason, in the 1970s Bruce Voeller, the chair of the National Gay and Lesbian Task Force, perpetuated a common myth that the prevalence of homosexuality is 10% for the whole population by averaging a 13% figure for men and a 7% figure for women. Voeller
generalized this finding and used it as part of the modern gay rights movement to convince politicians and the public
that "we [gays and lesbians] are everywhere". Another study on men and women's patterns of sexual arousal confirmed
that men and women have different patterns of arousal, independent of their sexual orientations. The study found
that women's genitals become aroused to both human and nonhuman stimuli from movies showing humans of both genders
having sex (heterosexual and homosexual) and from videos showing non-human primates (bonobos) having sex. Men did
not show any sexual arousal to non-human visual stimuli, their arousal patterns being in line with their specific
sexual interest (women for heterosexual men and men for homosexual men). Androphilia and gynephilia (or gynecophilia)
are terms used in behavioral science to describe sexual attraction, as an alternative to a homosexual and heterosexual
conceptualization. They are used for identifying a subject's object of attraction without attributing a sex assignment
or gender identity to the subject. Related terms such as pansexual and polysexual do not make any such assignations
to the subject. People may also use terms such as queer, pansensual, polyfidelitous, ambisexual, or personalized
identities such as byke or biphilic. Translation is a major obstacle when comparing different cultures. Many English
terms lack equivalents in other languages, while concepts and words from other languages fail to be reflected in
the English language. Translation and vocabulary obstacles are not limited to the English language. Language can
force individuals to identify with a label that may or may not accurately reflect their true sexual orientation.
Language can also be used to signal sexual orientation to others. The meaning of words referencing categories of
sexual orientation are negotiated in the mass media in relation to social organization. New words may be brought
into use to describe new terms or better describe complex interpretations of sexual orientation. Other words may
pick up new layers of meaning. For example, the heterosexual Spanish terms marido and mujer for "husband" and "wife",
respectively, have recently been replaced in Spain by the gender-neutral terms cónyuges or consortes meaning "spouses".
The earliest writers on sexual orientation usually understood it to be intrinsically linked to the subject's own
sex. For example, it was thought that a typical female-bodied person who is attracted to female-bodied persons would
have masculine attributes, and vice versa. This understanding was shared by most of the significant theorists of
sexual orientation from the mid nineteenth to early twentieth century, such as Karl Heinrich Ulrichs, Richard von
Krafft-Ebing, Magnus Hirschfeld, Havelock Ellis, Carl Jung, and Sigmund Freud, as well as many gender-variant homosexual
people themselves. However, this understanding of homosexuality as sexual inversion was disputed at the time, and,
through the second half of the twentieth century, gender identity came to be increasingly seen as a phenomenon distinct
from sexual orientation. Transgender and cisgender people may be attracted to men, women, or both, although the prevalence
of different sexual orientations is quite different in these two populations. An individual homosexual, heterosexual
or bisexual person may be masculine, feminine, or androgynous, and in addition, many members and supporters of lesbian
and gay communities now see the "gender-conforming heterosexual" and the "gender-nonconforming homosexual" as negative
stereotypes. Nevertheless, studies by J. Michael Bailey and Kenneth Zucker found a majority of the gay men and lesbians
sampled reporting various degrees of gender-nonconformity during their childhood years. In the United States, non-Caucasian
LGBT individuals may find themselves in a double minority, where they are neither fully accepted nor understood by mainly Caucasian LGBT communities, nor accepted by their own ethnic group. Many people experience racism
in the dominant LGBT community where racial stereotypes merge with gender stereotypes, such that Asian-American LGBTs
are viewed as more passive and feminine, while African-American LGBTs are viewed as more masculine and aggressive.
There are a number of culturally specific support networks for LGBT individuals active in the United States, such as "Ô-Môi", a network for Vietnamese American queer females. Some research suggests that "[f]or some [people] the focus
of sexual interest will shift at various points through the life span..." "There... [was, as of 1995,] essentially
no research on the longitudinal stability of sexual orientation over the adult life span... It [was]... still an
unanswered question whether... [the] measure [of 'the complex components of sexual orientation as differentiated
from other aspects of sexual identity at one point in time'] will predict future behavior or orientation. Certainly,
it is... not a good predictor of past behavior and self-identity, given the developmental process common to most
gay men and lesbians (i.e., denial of homosexual interests and heterosexual experimentation prior to the coming-out
process)." Some studies report that "[a number of] lesbian women, and some heterosexual women as well, perceive choice
as an important element in their sexual orientations." In 2012, the Pan American Health Organization (the North and
South American branch of the World Health Organization) released a statement cautioning against services that purport
to "cure" people with non-heterosexual sexual orientations as they lack medical justification and represent a serious
threat to the health and well-being of affected people, and noted that the global scientific and professional consensus
is that homosexuality is a normal and natural variation of human sexuality and cannot be regarded as a pathological
condition. The Pan American Health Organization further called on governments, academic institutions, professional
associations and the media to expose these practices and to promote respect for diversity. The World Health Organization
affiliate further noted that gay minors have sometimes been forced to attend these "therapies" involuntarily, being
deprived of their liberty and sometimes kept in isolation for several months, and that these findings were reported
by several United Nations bodies. Additionally, the Pan American Health Organization recommended that such malpractices
be denounced and subject to sanctions and penalties under national legislation, as they constitute a violation of
the ethical principles of health care and violate human rights that are protected by international and regional agreements.
The Kinsey scale provides a classification of sexual orientation based on the relative amounts of heterosexual and
homosexual experience or psychic response in one's history at a given time. The classification scheme works such
that individuals in the same category show the same balance between the heterosexual and homosexual elements in their
histories. The position on the scale is based on the relation of heterosexuality to homosexuality in one's history,
rather than the actual amount of overt experience or psychic response. An individual can be assigned a position on
the scale in accordance with the following definitions of the points of the scale: The Sell Assessment of Sexual
Orientation (SASO) was developed to address the major concerns with the Kinsey Scale and Klein Sexual Orientation
Grid and as such, measures sexual orientation on a continuum, considers various dimensions of sexual orientation,
and considers homosexuality and heterosexuality separately. Rather than providing a final solution to the question
of how to best measure sexual orientation, the SASO is meant to provoke discussion and debate about measurements
of sexual orientation. As there is no research indicating which of the three components is essential in defining
sexual orientation, all three are used independently and provide different conclusions regarding sexual orientation.
Savin-Williams (2006) discusses this issue and notes that by basing findings regarding sexual orientation on a single
component, researchers may not actually capture the intended population. For example, if homosexual is defined by
same-sex behavior, gay virgins are omitted, heterosexuals engaging in same-sex behavior for reasons other than preferred sexual arousal are miscounted, and those with same-sex attraction who have only opposite-sex relations are excluded.
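Savin-Williams's point can be made concrete with a small sketch. The records and names below are hypothetical, invented purely to illustrate how a definition based on a single component captures a different set of people than one based on another component:

```python
# Hypothetical records: whether a person reports same-sex attraction,
# same-sex behavior, and a gay/lesbian/bisexual identity are three
# separate components of sexual orientation.
people = [
    {"name": "A", "attraction": True,  "behavior": False, "identity": True},   # "gay virgin"
    {"name": "B", "attraction": False, "behavior": True,  "identity": False},  # behavior for other reasons
    {"name": "C", "attraction": True,  "behavior": False, "identity": False},  # opposite-sex relations only
    {"name": "D", "attraction": True,  "behavior": True,  "identity": True},
]

def classified_homosexual(component):
    """People whom a definition based on one component would count."""
    return sorted(p["name"] for p in people if p[component])

# A behavior-only definition omits the "gay virgin" A, counts B despite
# the behavior not reflecting preferred arousal, and excludes C.
print(classified_homosexual("behavior"))    # ['B', 'D']
print(classified_homosexual("attraction"))  # ['A', 'C', 'D']
print(classified_homosexual("identity"))    # ['A', 'D']
```

Each definition yields a different study population, which is why Savin-Williams cautions against generalizing findings based on any one component.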
Because of the limited populations that each component captures, consumers of research should be cautious in generalizing
these findings. More recently, scientists have started to focus on measuring changes in brain activity related to
sexual arousal, by using brain-scanning techniques. A study on how heterosexual and homosexual men's brains react
to seeing pictures of naked men and women has found that both hetero- and homosexual men react positively to seeing
their preferred sex, using the same brain regions. The only significant group difference between these orientations
was found in the amygdala, a brain region known to be involved in regulating fear. Perceived sexual orientation may
affect how a person is treated. For instance, in the United States, the FBI reported that 15.6% of hate crimes reported
to police in 2004 were "because of a sexual-orientation bias". Under the UK Employment Equality (Sexual Orientation)
Regulations 2003, as explained by Advisory, Conciliation and Arbitration Service, "workers or job applicants must
not be treated less favourably because of their sexual orientation, their perceived sexual orientation or because
they associate with someone of a particular sexual orientation". Most definitions of sexual orientation include a
psychological component, such as the direction of an individual's erotic desires, or a behavioral component, which
focuses on the sex of the individual's sexual partner/s. Some people prefer simply to follow an individual's self-definition
or identity. Scientific and professional understanding is that "the core attractions that form the basis for adult
sexual orientation typically emerge between middle childhood and early adolescence". Sexual orientation differs from
sexual identity in that it encompasses relationships with others, while sexual identity is a concept of self. In
the oft-cited and oft-criticized Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female
(1953), by Alfred C. Kinsey et al., people were asked to rate themselves on a scale from completely heterosexual
to completely homosexual. Kinsey reported that when the individuals' behavior as well as their identity are analyzed,
most people appeared to be at least somewhat bisexual — i.e., most people have some attraction to either sex, although
usually one sex is preferred. According to Kinsey, only a minority (5–10%) can be considered fully heterosexual or
homosexual. Conversely, only an even smaller minority can be considered fully bisexual (with an
equal attraction to both sexes). Kinsey's methods have been criticized as flawed, particularly with regard to the
randomness of his sample population, which included prison inmates, male prostitutes and those who willingly participated
in discussion of previously taboo sexual topics. Nevertheless, Paul Gebhard, subsequent director of the Kinsey Institute
for Sex Research, reexamined the data in the Kinsey Reports and concluded that removing the prison inmates and prostitutes
barely affected the results. Innate bisexuality is an idea introduced by Sigmund Freud. According to this theory,
all humans are born bisexual in a very broad sense of the term, that of incorporating general aspects of both sexes.
In Freud's view, this was true anatomically and therefore also psychologically, with sexual attraction to both sexes
being one part of this psychological bisexuality. Freud believed that in the course of sexual development the masculine
side would normally become dominant in men and the feminine side in women, but that as adults everyone still has
desires derived from both the masculine and the feminine sides of their natures. Freud did not claim that everyone
is bisexual in the sense of feeling the same level of sexual attraction to both genders. These categories are aspects
of the more nuanced nature of sexual identity and terminology. For example, people may use other labels, such as
pansexual or polysexual, or none at all. According to the American Psychological Association, sexual orientation
"also refers to a person's sense of identity based on those attractions, related behaviors, and membership in a community
of others who share those attractions". Androphilia and gynephilia are terms used in behavioral science to describe
sexual orientation as an alternative to a gender binary conceptualization. Androphilia describes sexual attraction
to masculinity; gynephilia describes the sexual attraction to femininity. The term sexual preference largely overlaps
with sexual orientation, but is generally distinguished in psychological research. A person who identifies as bisexual,
for example, may sexually prefer one sex over the other. Sexual preference may also suggest a degree of voluntary
choice, whereas the scientific consensus is that sexual orientation is not a choice. The American Psychological Association
states that "[s]exual orientation refers to an enduring pattern of emotional, romantic, and/or sexual attractions
to men, women, or both sexes" and that "[t]his range of behaviors and attractions has been described in various cultures
and nations throughout the world. Many cultures use identity labels to describe people who express these attractions.
In the United States, the most frequent labels are lesbians (women attracted to women), gay men (men attracted to
men), and bisexual people (men or women attracted to both sexes). However, some people may use different labels or
none at all". They additionally state that sexual orientation "is distinct from other components of sex and gender,
including biological sex (the anatomical, physiological, and genetic characteristics associated with being male or
female), gender identity (the psychological sense of being male or female), and social gender role (the cultural
norms that define feminine and masculine behavior)". According to psychologists, sexual orientation also refers to
a person’s choice of sexual partners, who may be homosexual, heterosexual, or bisexual. The National Association
for Research & Therapy of Homosexuality (NARTH), which describes itself as a "professional, scientific organization
that offers hope to those who struggle with unwanted homosexuality," disagrees with the mainstream mental health
community's position on conversion therapy, both on its effectiveness and by describing sexual orientation not as
a binary immutable quality, or as a disease, but as a continuum of intensities of sexual attractions and emotional
affect. The American Psychological Association and the Royal College of Psychiatrists expressed concerns that the
positions espoused by NARTH are not supported by the science and create an environment in which prejudice and discrimination
can flourish. Using androphilia and gynephilia can avoid confusion and offense when describing people in non-western
cultures, as well as when describing intersex and transgender people. Psychiatrist Anil Aggrawal explains that androphilia,
along with gynephilia, "is needed to overcome immense difficulties in characterizing the sexual orientation of trans
men and trans women. For instance, it is difficult to decide whether a trans man erotically attracted to males is
a heterosexual female or a homosexual male; or a trans woman erotically attracted to females is a heterosexual male
or a lesbian female. Any attempt to classify them may not only cause confusion but arouse offense among the affected
subjects. In such cases, while defining sexual attraction, it is best to focus on the object of their attraction
rather than on the sex or gender of the subject." Sexologist Milton Diamond writes, "The terms heterosexual, homosexual,
and bisexual are better used as adjectives, not nouns, and are better applied to behaviors, not people. This usage
is particularly advantageous when discussing the partners of transsexual or intersexed individuals. These newer terms
also do not carry the social weight of the former ones." The Kinsey scale has been praised for dismissing the dichotomous
classification of sexual orientation and allowing for a new perspective on human sexuality. However, the scale has
been criticized because it is still not a true continuum. Although its seven categories provide a more accurate description of sexual orientation than a dichotomous scale, it is still difficult to determine which category individuals should be assigned to. In a major study comparing sexual response in homosexual males and females, Masters and Johnson
discuss the difficulty of assigning the Kinsey ratings to participants. Particularly, they found it difficult to
determine the relative amounts of heterosexual and homosexual experience and response in a person's history when using the scale. They report finding it difficult to assign ratings 2-4 for individuals with a large number of heterosexual and homosexual experiences. When there are many heterosexual and homosexual experiences in one's history, it becomes difficult for that individual to be fully objective in assessing the relative amount of each. The SASO consists of
12 questions. Six of these questions assess sexual attraction, four assess sexual behavior, and two assess sexual
orientation identity. For each question on the scale that measures homosexuality there is a corresponding question that measures heterosexuality, giving six matching pairs of questions. Taken together, the six pairs of questions
and responses provide a profile of an individual's sexual orientation. However, results can be further simplified
into four summaries that look specifically at responses that correspond to either homosexuality, heterosexuality,
bisexuality or asexuality. The exact causes for the development of a particular sexual orientation have yet to be
established. To date, much research has been conducted to determine the influence of genetics, hormonal action, developmental dynamics, and social and cultural influences, which has led many to think that biological and environmental factors play a complex role in forming sexual orientation. It was once thought that homosexuality was the result of faulty psychological development, stemming from childhood experiences and troubled relationships, including childhood sexual abuse. This view has since been found to be based on prejudice and misinformation. One of the uses for scales that assess sexual
orientation is determining the prevalence of different sexual orientations within a population. The prevalence rates of homosexuality vary with subjects' age, culture, and sex, and with which component of sexual orientation is being assessed: sexual attraction, sexual behavior, or sexual identity. Assessing sexual attraction will yield the greatest prevalence of homosexuality in a population, whereby the proportion of individuals indicating they are same-sex attracted is two to three times greater than the proportion reporting same-sex behavior or identifying as gay, lesbian, or bisexual. Furthermore, reports of same-sex behavior usually exceed those of gay, lesbian, or
bisexual identification. The following chart demonstrates how widely the prevalence of homosexuality can vary depending on the age, location, and component of sexual orientation being assessed: As female fetuses have two X chromosomes and male fetuses an XY pair, the Y chromosome is responsible for producing male differentiation from the default female developmental pathway. The differentiation process is driven by androgen hormones, mainly testosterone and dihydrotestosterone (DHT). The newly formed testicles in the fetus are responsible for the secretion of androgens, which cooperate in driving the sexual differentiation of the developing fetus, including its brain. This results in sexual differences between males and females. This has led some scientists to test, in various ways, the results of modifying androgen exposure levels in mammals during fetal and early life. One of the earliest sexual orientation classification schemes
was proposed in the 1860s by Karl Heinrich Ulrichs in a series of pamphlets he published privately. The classification
scheme, which was meant only to describe males, separated them into three basic categories: dionings, urnings and
uranodionings. An urning can be further categorized by degree of effeminacy. These categories directly correspond
with the categories of sexual orientation used today: heterosexual, homosexual, and bisexual. In the series of pamphlets,
Ulrichs outlined a set of questions to determine if a man was an urning. The definitions of each category of Ulrichs'
classification scheme are as follows: Weinrich et al. (1993) and Weinberg et al. (1994) criticized the scale for
lumping individuals who are different based on different dimensions of sexuality into the same categories. When applying
the scale, Kinsey considered two dimensions of sexual orientation: overt sexual experience and psychosexual reactions.
Valuable information was lost by collapsing the two values into one final score. A person with predominantly same-sex psychosexual reactions differs from someone with relatively little reaction but extensive same-sex experience. It
would have been quite simple for Kinsey to have measured the two dimensions separately and report scores independently
to avoid loss of information. Furthermore, there are more than two dimensions of sexuality to be considered. Beyond
behavior and reactions, one could also assess attraction, identification, lifestyle etc. This is addressed by the
Klein Sexual Orientation Grid. Of all the questions on the scale, Sell considered those assessing sexual attraction
to be the most important as sexual attraction is a better reflection of the concept of sexual orientation which he
defined as "extent of sexual attractions toward members of the other, same, both sexes or neither" than either sexual
identity or sexual behavior. Identity and behavior are measured as supplemental information because they are both
closely tied to sexual attraction and sexual orientation. Major criticisms of the SASO have not been established,
but a concern is that the reliability and validity remains largely unexamined. The variance in prevalence rates is
reflected in people's inconsistent responses to the different components of sexual orientation within a study and
the instability of their responses over time. Laumann et al. (1994) found that among U.S. adults, 20% of those who
would be considered homosexual on one component of orientation were homosexual on the other two dimensions and 70%
responded in a way that was consistent with homosexuality on only one of the three dimensions. Furthermore, sexuality
is fluid such that one's sexual orientation is not necessarily stable or consistent over time but is subject to change
throughout life. Diamond (2003) found that, over 7 years, two-thirds of the women studied changed their sexual identity at least once,
with many reporting that the label was not adequate in capturing the diversity of their sexual or romantic feelings.
Furthermore, women who relinquished bisexual and lesbian identification did not relinquish same sex sexuality and
acknowledged the possibility for future same sex attractions and/or behaviour. One woman stated "I'm mainly straight
but I'm one of those people who, if the right circumstance came along, would change my viewpoint". Therefore, individuals classified as homosexual in one study might not be identified the same way in another, depending on which components are assessed and when the assessment is made, making it difficult to pinpoint who is homosexual and who is not, and what the overall prevalence within a population may be. Because sexual orientation is complex and multi-dimensional,
some academics and researchers, especially in queer studies, have argued that it is a historical and social construction.
In 1976, philosopher and historian Michel Foucault argued in The History of Sexuality that homosexuality as an identity
did not exist in the eighteenth century; that people instead spoke of "sodomy," which referred to sexual acts. Sodomy
was a crime that was often ignored, but sometimes punished severely (see sodomy law). He wrote, "'Sexuality' is an
invention of the modern state, the industrial revolution, and capitalism." Some researchers who study sexual orientation
argue that the concept may not apply similarly to men and women. A study of sexual arousal patterns found that women,
when viewing erotic films which show female-female, male-male and male-female sexual activity (oral sex or penetration),
have patterns of arousal which do not match their declared sexual orientations as well as men's. That is, heterosexual
and lesbian women's sexual arousal to erotic films do not differ significantly by the genders of the participants
(male or female) or by the type of sexual activity (heterosexual or homosexual). On the contrary, men's sexual arousal
patterns tend to be more in line with their stated orientations, with heterosexual men showing more penis arousal
to female-female sexual activity and less arousal to female-male and male-male sexual stimuli, and homosexual and
bisexual men being more aroused by films depicting male-male intercourse and less aroused by other stimuli. Research
suggests that sexual orientation is independent of cultural and other social influences, but that open identification
of one's sexual orientation may be hindered by homophobic or heterosexist settings. Social systems such as religion,
language and ethnic traditions can have a powerful impact on realization of sexual orientation. Influences of culture
may complicate the process of measuring sexual orientation. The majority of empirical and clinical research on LGBT populations is done with largely white, middle-class, well-educated samples; however, there are pockets of research that document various other cultural groups, although these are frequently limited in the diversity of gender and sexual
orientation of the subjects. Integration of sexual orientation with sociocultural identity may be a challenge for
LGBT individuals. Individuals may or may not consider their sexual orientation to define their sexual identity, as
they may experience various degrees of fluidity of sexuality, or may simply identify more strongly with another aspect
of their identity such as family role. American culture puts a great emphasis on individual attributes, and views
the self as unchangeable and constant. In contrast, East Asian cultures put a great emphasis on a person's social
role within social hierarchies, and view the self as fluid and malleable. These differing cultural perspectives have
many implications on cognitions of the self, including perception of sexual orientation. Some other cultures do not
recognize a homosexual/heterosexual/bisexual distinction. It is common to distinguish a person's sexuality according
to their sexual role (active/passive; insertive/penetrated). In this distinction, the passive role is typically associated
with femininity and/or inferiority, while the active role is typically associated with masculinity and/or superiority.
For example, an investigation of a small Brazilian fishing village revealed three sexual categories for men: men
who have sex only with men (consistently in a passive role), men who have sex only with women, and men who have sex
with women and men (consistently in an active role). While men who consistently occupied the passive role were recognized
as a distinct group by locals, men who have sex with only women, and men who have sex with women and men, were not
differentiated. Little is known about same-sex attracted females, or sexual behavior between females in these cultures.
Though researchers generally believe that sexual orientation is not determined by any one factor but by a combination
of genetic, hormonal, and environmental influences, with biological factors involving a complex interplay of genetic
factors and the early uterine environment, they favor biological models for the cause. They believe that sexual orientation
is not a choice, and some of them believe that it is established at conception. That is, individuals do not choose
to be homosexual, heterosexual, bisexual, or asexual. While current scientific investigation usually seeks to find
biological explanations for the adoption of a particular sexual orientation, there are yet no replicated scientific
studies supporting any specific biological etiology for sexual orientation. However, scientific studies have found
a number of statistical biological differences between gay people and heterosexuals, which may result from the same
underlying cause as sexual orientation itself. Sexual orientation has been argued to be a concept that evolved in the industrialized West, and there is controversy as to the universality of its application in other societies or cultures. Non-westernized
concepts of male sexuality differ essentially from the way sexuality is seen and classified under the Western system
of sexual orientation. The validity of the notion of sexual orientation as defined in the West, as a biological phenomenon rather than a social construction specific to a region and period, has also been questioned within industrialized Western society. Some researchers, such as Bruce Bagemihl, have criticized the labels
"heterosexual" and "homosexual" as confusing and degrading. Bagemihl writes, "...the point of reference for 'heterosexual'
or 'homosexual' orientation in this nomenclature is solely the individual's genetic sex prior to reassignment (see
for example, Blanchard et al. 1987, Coleman and Bockting, 1988, Blanchard, 1989). These labels thereby ignore the
individual's personal sense of gender identity taking precedence over biological sex, rather than the other way around."
Bagemihl goes on to take issue with the way this terminology makes it easy to claim transsexuals are really homosexual
males seeking to escape from stigma. Often, sexual orientation and sexual orientation identity are not distinguished,
which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual
orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual
behavior or actual sexual orientation. While the Centre for Addiction and Mental Health and the American Psychiatric Association state that sexual orientation is innate and is continuous or fixed throughout life for some people but fluid or changing over time for others, the American Psychological Association distinguishes between sexual
orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life).
The fraternal birth order (FBO) effect, the observation that the probability of homosexuality in men increases with each older biological brother, has been backed up by strong evidence of its prenatal origin, although no evidence thus far has linked it to an exact prenatal mechanism. However, research suggests that
this may be of immunological origin, caused by a maternal immune reaction against a substance crucial to male fetal
development during pregnancy, which becomes increasingly likely after every male gestation. As a result of this immune
effect, alterations in later-born males' prenatal development have been thought to occur. This process, known as
the maternal immunization hypothesis (MIH), would begin when cells from a male fetus enter the mother's circulation
during pregnancy or while giving birth. These Y-linked proteins would not be recognized by the mother's immune system because she is female, causing her to develop antibodies that would travel through the placental barrier into the fetal compartment. From there, the anti-male antibodies would then cross the blood-brain barrier (BBB) of the developing
fetal brain, altering sex-dimorphic brain structures relevant to sexual orientation, causing the exposed son to be more attracted to men than to women. The Kinsey scale, also called the Heterosexual-Homosexual Rating Scale, was first
published in Sexual Behavior in the Human Male (1948) by Alfred Kinsey, Wardell Pomeroy, and Clyde Martin and also
featured in Sexual Behavior in the Human Female (1953). The scale was developed to combat the assumption at the time
that people are either heterosexual or homosexual and that these two types represent antitheses in the sexual world.
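The scale's points run from 0 to 6, with an additional category X. As a sketch, they can be represented as a simple lookup table; the descriptions below are paraphrased from common summaries of the scale, not quoted from the report:

```python
# The Kinsey Heterosexual-Homosexual Rating Scale as a lookup table.
# Descriptions are paraphrased summaries, not quotations from the
# 1948 report.
KINSEY_SCALE = {
    0: "Exclusively heterosexual",
    1: "Predominantly heterosexual, only incidentally homosexual",
    2: "Predominantly heterosexual, but more than incidentally homosexual",
    3: "Equally heterosexual and homosexual",
    4: "Predominantly homosexual, but more than incidentally heterosexual",
    5: "Predominantly homosexual, only incidentally heterosexual",
    6: "Exclusively homosexual",
    "X": "No socio-sexual contacts or reactions",
}

print(KINSEY_SCALE[3])  # Equally heterosexual and homosexual
```

The single scalar value is exactly what later critics objected to: it collapses overt experience and psychic response into one number.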
Recognizing that a large portion of the population is not completely heterosexual or homosexual and that people can experience both heterosexual and homosexual behavior and psychic responses, Kinsey et al. stated: In response to the criticism
of the Kinsey scale only measuring two dimensions of sexual orientation, Fritz Klein developed the Klein sexual orientation
grid (KSOG), a multidimensional scale for describing sexual orientation. Introduced in Klein's book The Bisexual
Option, the KSOG uses a 7-point scale to assess seven different dimensions of sexuality at three different points
in an individual's life: past (from early adolescence up to one year ago), present (within the last 12 months), and
ideal (what would you choose if it were completely your choice). Though sexual attraction, behavior, and identity are all components of sexual orientation, if the people identified by one of these dimensions were congruent with those identified by another, it would not matter which dimension was used in assessing orientation; but this is not the case.
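The KSOG's structure, seven dimensions each rated on a 7-point scale at three time points, can be sketched as a small table. The dimension names follow common presentations of Klein's grid, and the sample rating is purely illustrative:

```python
# Sketch of the Klein Sexual Orientation Grid (KSOG): 7 dimensions x
# 3 time points, each cell holding a 1-7 rating. Dimension names follow
# common presentations of the grid.
DIMENSIONS = [
    "sexual attraction", "sexual behavior", "sexual fantasies",
    "emotional preference", "social preference",
    "lifestyle preference", "self-identification",
]
TIME_POINTS = ["past", "present", "ideal"]

def empty_grid():
    """A fresh grid: every (dimension, time point) cell starts unrated."""
    return {(d, t): None for d in DIMENSIONS for t in TIME_POINTS}

grid = empty_grid()
grid[("sexual attraction", "present")] = 2  # illustrative rating only

print(len(grid))  # 21 cells: 7 dimensions x 3 time points
```

Keeping the 21 cells separate, rather than collapsing them into one score as the Kinsey scale does, is precisely what makes the grid multidimensional.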
There is "little coherent relationship between the amount and mix of homosexual and heterosexual behavior in a person's
biography and that person's choice to label himself or herself as bisexual, homosexual, or heterosexual". Individuals
typically experience diverse attractions and behaviors that may reflect curiosity, experimentation, or social pressure, and that are not necessarily indicative of an underlying sexual orientation. For example, a woman may have fantasies or
thoughts about sex with other women but never act on these thoughts and only have sex with opposite gender partners.
If sexual orientation were assessed based on sexual attraction alone, this individual would be considered homosexual, but her behavior indicates heterosexuality. In the paper "Who's Gay? Does It Matter?", Ritch Savin-Williams
proposes two different approaches to assessing sexual orientation until well-positioned, psychometrically sound and tested definitions are developed that would allow research to reliably identify the prevalence, causes, and consequences
of homosexuality. He first suggests that greater priority should be given to sexual arousal and attraction over behavior
and identity because they are less prone to self- and other-deception, social conditions, and variable meanings. To measure
attraction and arousal he proposed that biological measures should be developed and used. There are numerous biological/physiological
measures that exist that can measure sexual orientation such as sexual arousal, brain scans, eye tracking, body odour
preference, and anatomical variations such as digit-length ratio and right or left handedness. Secondly, Savin-Williams
suggests that researchers should forsake the general notion of sexual orientation altogether and assess only those
components that are relevant for the research question being investigated. For example: These studies suggest that
men and women are different in terms of sexual arousal patterns and that this is also reflected in how their genitals
react to sexual stimuli of both genders or even to non-human stimuli. Sexual orientation has many dimensions (attractions,
behavior, identity), of which sexual arousal is the only product of sexual attractions which can be measured at present
with some degree of physical precision. Thus, the fact that women are aroused by seeing non-human primates having
sex does not mean that women's sexual orientation includes this type of sexual interest. Some researchers argue that
women's sexual orientation depends less on their patterns of sexual arousal than men's and that other components
of sexual orientation (like emotional attachment) must be taken into account when describing women's sexual orientations.
In contrast, men's sexual orientations tend to be primarily focused on the physical component of attractions and,
thus, their sexual feelings are more exclusively oriented according to sex. One person may presume knowledge of another
person's sexual orientation based upon perceived characteristics, such as appearance, clothing, tone of voice, and
accompaniment by and behavior with other people. The attempt to detect sexual orientation in social situations is
known as gaydar; some studies have found that guesses based on face photos perform better than chance. Research from 2015
suggests that "gaydar" is an alternate label for using LGBT stereotypes to infer orientation, and that face-shape
is not an accurate indication of orientation. Estimates for the percentage of the population that are bisexual vary
widely, at least in part due to differing definitions of bisexuality. Some studies only consider a person bisexual
if they are nearly equally attracted to both sexes, and others consider a person bisexual if they are at all attracted
to the same sex (for otherwise mostly heterosexual persons) or to the opposite sex (for otherwise mostly homosexual
persons). A small percentage of people are not sexually attracted to anyone (asexuality). A study in 2004 placed
the prevalence of asexuality at 1%. Some historians and researchers argue that the emotional and affectionate activities
associated with sexual-orientation terms such as "gay" and "heterosexual" change significantly over time and across
cultural boundaries. For example, in many English-speaking nations, it is assumed that same-sex kissing, particularly
between men, is a sign of homosexuality, whereas various types of same-sex kissing are common expressions of friendship
in other nations. Also, many modern and historic cultures have formal ceremonies expressing long-term commitment
between same-sex friends, even though homosexuality itself is taboo within the cultures.
Dell was listed at number 51 on the Fortune 500 until 2014. After the company went private in 2013, the newly confidential
nature of its financial information prevented it from being ranked by Fortune. In 2014 it was the third largest
PC vendor in the world after Lenovo and HP. Dell is currently the largest shipper of PC monitors in the world. Dell is
the sixth largest company in Texas by total revenue, according to Fortune magazine. It is the second largest non-oil
company in Texas – behind AT&T – and the largest company in the Greater Austin area. It was a publicly traded company
(NASDAQ: DELL), as well as a component of the NASDAQ-100 and S&P 500, until it was taken private in a leveraged buyout
which closed on October 30, 2013. Originally, Dell did not emphasize the consumer market, due to the higher costs
and unacceptably low profit margins in selling to individuals and households; this changed when the company’s Internet
site took off in 1996 and 1997. While the industry’s average selling price to individuals was going down, Dell's
was going up, as second- and third-time computer buyers who wanted powerful computers with multiple features and
did not need much technical support were choosing Dell. Dell found an opportunity among PC-savvy individuals who
liked the convenience of buying direct, customizing their PC to their means, and having it delivered in days. In
early 1997, Dell created an internal sales and marketing group dedicated to serving the home market and introduced
a product line designed especially for individual users. Dell had long stuck by its direct sales model. Consumers
had become the main drivers of PC sales in recent years, yet there had been a decline in consumers purchasing PCs through
the Web or on the phone, as increasing numbers were visiting consumer electronics retail stores to try out the devices
first. Dell's rivals in the PC industry, HP, Gateway and Acer, had a long retail presence and so were well poised
to take advantage of the consumer shift. The lack of a retail presence stymied Dell's attempts to offer consumer
electronics such as flat-panel TVs and MP3 players. Dell responded by experimenting with mall kiosks, plus quasi-retail
stores in Texas and New York. In the shrinking PC industry, Dell continued to lose market share, as it dropped below
Lenovo in 2011 to fall to number three in the world. Dell and fellow American contemporary Hewlett Packard came under
pressure from Asian PC manufacturers Lenovo, Asus, and Acer, all of which had lower production costs and were willing
to accept lower profit margins. In addition, while the Asian PC vendors had been improving their quality and design
(for instance, Lenovo's ThinkPad series was winning corporate customers away from Dell's laptops), Dell's customer service
and reputation had been slipping. Dell remained the second-most profitable PC vendor, as it took 13 percent of operating
profits in the PC industry during Q4 2012, behind Apple Inc.'s Macintosh, which took 45 percent; seven percent went to
Hewlett Packard, six percent to Lenovo and Asus, and one percent to Acer. Dell's manufacturing process covers assembly,
software installation, functional testing (including "burn-in"), and quality control. Throughout most of the company's
history, Dell manufactured desktop machines in-house and contracted out manufacturing of base notebooks for configuration
in-house. The company's approach has changed, as cited in the 2006 Annual Report, which states, "We are continuing
to expand our use of original design manufacturing partnerships and manufacturing outsourcing relationships." The
Wall Street Journal reported in September 2008 that "Dell has approached contract computer manufacturers with offers
to sell" their plants. By the late 2000s, Dell's "configure to order" approach to manufacturing (delivering individual
PCs configured to customer specifications from its US facilities) was no longer as efficient or competitive with high-volume
Asian contract manufacturers as PCs became powerful low-cost commodities. Dell advertisements have appeared in several
types of media including television, the Internet, magazines, catalogs and newspapers. Some of Dell Inc's marketing
strategies include lowering prices at all times of the year, free bonus products (such as Dell printers), and free
shipping to encourage more sales and stave off competitors. In 2006, Dell cut its prices in an effort to maintain
its 19.2% market share. This also cut profit-margins by more than half, from 8.7 to 4.3 percent. To maintain its
low prices, Dell continues to accept most purchases of its products via the Internet and through the telephone network,
and to move its customer-care division to India and El Salvador. In May 2008, Dell reached an agreement with office
supply chain, Officeworks (part of Coles Group), to stock a few modified models in the Inspiron desktop and notebook
range. These models have slightly different model numbers, but almost replicate the ones available from the Dell
Store. Dell continued its retail push in the Australian market with its partnership with Harris Technology (another
part of Coles Group) in November of the same year. In addition, Dell expanded its retail distributions in Australia
through an agreement with discount electrical retailer, The Good Guys, known for "Slashing Prices". Dell agreed to
distribute a variety of makes of both desktops and notebooks, including Studio and XPS systems in late 2008. Dell
and Dick Smith Electronics (owned by Woolworths Limited) reached an agreement to expand within Dick Smith's 400 stores
throughout Australia and New Zealand in May 2009 (one year after Officeworks, also owned by Coles Group, reached its deal).
The retailer has agreed to distribute a variety of Inspiron and Studio notebooks, with minimal Studio desktops from
the Dell range. As of 2009, Dell continues to run and operate its various kiosks in 18 shopping centres throughout
Australia. On March 31, 2010 Dell announced to Australian Kiosk employees that they were shutting down the Australian/New
Zealand Dell kiosk program. On August 17, 2007, Dell Inc. announced that after an internal investigation into its
accounting practices it would restate and reduce earnings from 2003 through to the first quarter of 2007 by a total
amount of between $50 million and $150 million, or 2 cents to 7 cents per share. The investigation, begun in November
2006, resulted from concerns raised by the U.S. Securities and Exchange Commission over some documents and information
that Dell Inc. had submitted. It was alleged that Dell had not disclosed large exclusivity payments received from
Intel for agreeing not to buy processors from rival manufacturer AMD. In 2010 Dell finally paid $100 million to settle
the SEC's charges of fraud. Michael Dell and other executives also paid penalties and suffered other sanctions, without
admitting or denying the charges. In the mid-1990s, Dell expanded beyond desktop computers and laptops by selling
servers, starting with low-end servers. The three major providers of servers at the time were IBM, Hewlett Packard,
and Compaq, whose machines were often based on proprietary technology, such as IBM's Power4 microprocessors or various
proprietary versions of the Unix operating system. Dell's new PowerEdge servers did not require a major investment in proprietary
technologies, as they ran Microsoft Windows NT on Intel chips, and could be built more cheaply than competitors' machines. Consequently,
Dell's enterprise revenues, almost nonexistent in 1994, accounted for 13 percent of the company's total intake by
1998. Three years later, Dell passed Compaq as the top provider of Intel-based servers, with 31 percent of the market.
Dell's first acquisition occurred in 1999 with the purchase of ConvergeNet Technologies for $332 million, after Dell
had failed to develop an enterprise storage system in-house; ConvergeNet's elegant but complex technology did not
fit in with Dell's commodity-producer business model, forcing Dell to write down the entire value of the acquisition.
The slowing sales growth has been attributed to the maturing PC market, which constituted 66% of Dell's sales, and
analysts suggested that Dell needed to make inroads into non-PC business segments such as storage, services and
servers. Dell's price advantage was tied to its ultra-lean manufacturing for desktop PCs, but this became less important
as savings became harder to find inside the company's supply chain, and as competitors such as Hewlett-Packard and
Acer made their PC manufacturing operations more efficient to match Dell, weakening Dell's traditional price differentiation.
Throughout the entire PC industry, declines in prices along with commensurate increases in performance meant that
Dell had fewer opportunities to upsell to their customers (a lucrative strategy of encouraging buyers to upgrade
the processor or memory). As a result, the company was selling a greater proportion of inexpensive PCs than before,
which eroded profit margins. The laptop segment had become the fastest-growing of the PC market, but Dell produced
low-cost notebooks in China like other PC manufacturers which eliminated Dell's manufacturing cost advantages, plus
Dell's reliance on Internet sales meant that it missed out on growing notebook sales in big box stores. CNET has
suggested that Dell was getting trapped in the increasing commoditization of high volume low margin computers, which
prevented it from offering more exciting devices that consumers demanded. Dell closed plants that produced desktop
computers for the North American market, including the Mort
Topfer Manufacturing Center in Austin, Texas (original location) and Lebanon, Tennessee (opened in 1999) in 2008
and early 2009, respectively. The desktop production plant in Winston-Salem, North Carolina, received US$280 million
in incentives from the state and opened in 2005, but ceased operations in November 2010. Dell's contract with the
state required it to repay the incentives for failing to meet the conditions, and it sold the North Carolina
plant to Herbalife. Most of the work that used to take place in Dell's U.S. plants was transferred to contract manufacturers
in Asia and Mexico, or some of Dell's own factories overseas. The Miami, Florida, facility of its Alienware subsidiary
remains in operation, while Dell continues to produce its servers (its most profitable products) in Austin, Texas.
On January 8, 2009, Dell announced the closure of its manufacturing plant in Limerick, Ireland, with the loss of
1,900 jobs and the transfer of production to its plant in Łódź, Poland. The announcement of the acquisition of EMC
came two years after Dell Inc. returned to private ownership, having claimed that it faced bleak prospects and would
need several years out of the public eye to rebuild its business. It is thought that the company's value has roughly
doubled since then. EMC was
being pressured by Elliott Management, a hedge fund holding 2.2% of EMC's stock, to reorganize their unusual "Federation"
structure, in which EMC's divisions were effectively being run as independent companies. Elliott argued this structure
deeply undervalued EMC's core "EMC II" data storage business, and that increasing competition between EMC II and
VMware products was confusing the market and hindering both companies. The Wall Street Journal estimated that in
2014 Dell had revenue of $27.3 billion from personal computers and $8.9bn from servers, while EMC had $16.5bn from
EMC II, $1bn from RSA Security, $6bn from VMware, and $230 million from Pivotal Software. EMC owns around 80 percent
of the stock of VMware. The proposed acquisition will maintain VMware as a separate company, held via a new tracking
stock, while the other parts of EMC will be rolled into Dell. Once the acquisition closes Dell will again publish
quarterly financial results, having ceased these on going private in 2013. Dell opened plants in Penang, Malaysia
in 1995, and in Xiamen, China in 1999. These facilities serve the Asian market and assemble 95% of Dell notebooks.
Dell Inc. has invested an estimated $60 million in a new manufacturing unit in Chennai, India, to support
the sales of its products in the Indian subcontinent. Indian-made products bear the "Made in India" mark. In 2007
the Chennai facility had the target of producing 400,000 desktop PCs, and plans envisaged it starting to produce
notebook PCs and other products in the second half of 2007. The board consists of nine directors.
Michael Dell, the founder of the company, serves as chairman of the board and chief executive officer. Other board
members include Don Carty, William Gray, Judy Lewent, Klaus Luft, Alex Mandl, Michael A. Miles, and Sam Nunn. Shareholders
elect the nine board members at meetings, and those board members who do not get a majority of votes must submit
a resignation to the board, which will subsequently choose whether or not to accept the resignation. The board of
directors usually sets up five committees with oversight of specific matters. These committees include the Audit
Committee, which handles accounting issues, including auditing and reporting; the Compensation Committee, which approves
compensation for the CEO and other employees of the company; the Finance Committee, which handles financial matters
such as proposed mergers and acquisitions; the Governance and Nominating Committee, which handles various corporate
matters (including nomination of the board); and the Antitrust Compliance Committee, which attempts to prevent company
practices from violating antitrust laws. In the early 1990s, Dell sold its products through Best
Buy, Costco and Sam's Club stores in the United States. Dell stopped this practice in 1994, citing low profit-margins
on the business, exclusively distributing through a direct-sales model for the next decade. In 2003, Dell briefly
sold products in Sears stores in the U.S. In 2007, Dell started shipping its products to major retailers in the U.S.
once again, starting with Sam's Club and Wal-Mart. Staples, the largest office-supply retailer in the U.S., and Best
Buy, the largest electronics retailer in the U.S., became Dell retail partners later that same year. Dell became
the first company in the information technology industry to establish a product-recycling goal (in 2004) and completed
the implementation of its global consumer recycling-program in 2006. On February 6, 2007, the National Recycling
Coalition awarded Dell its "Recycling Works" award for efforts to promote producer responsibility. On July 19, 2007,
Dell announced that it had exceeded targets in working to achieve a multi-year goal of recovering 275 million pounds
of computer equipment by 2009. The company reported the recovery of 78 million pounds (nearly 40,000 tons) of IT
equipment from customers in 2006, a 93-percent increase over 2005 and equivalent to 12.4% of the equipment Dell had
sold seven years earlier. In the 1990s, Dell switched from using primarily ATX motherboards and PSUs to using boards and power supplies
with mechanically identical but differently wired connectors. This meant customers wishing to upgrade their hardware
would have to replace parts with scarce Dell-compatible parts instead of commonly available parts. While motherboard
power connections reverted to the industry standard in 2003, Dell remains secretive about its motherboard
pin-outs for peripherals (such as MMC readers and power on/off switches and LEDs). Dell traces its origins to 1984,
when Michael Dell created Dell Computer Corporation, which at the time did business as PC's Limited, while a student
at the University of Texas at Austin. The company, headquartered in his dorm room, sold IBM PC-compatible computers built
from stock components. Dell dropped out of school to focus full-time on his fledgling business, after getting $1,000
in expansion-capital from his family. In 1985, the company produced the first computer of its own design, the Turbo
PC, which sold for $795. PC's Limited advertised its systems in national computer magazines for sale directly to
consumers and custom assembled each ordered unit according to a selection of options. The company grossed more than
$73 million in its first year of operation. From 1997 to 2004, Dell enjoyed steady growth and it gained market share
from competitors even during industry slumps. During the same period, rival PC vendors such as Compaq, Gateway, IBM,
Packard Bell, and AST Research struggled and eventually left the market or were bought out. Dell surpassed Compaq
to become the largest PC manufacturer in 1999. Operating costs made up only 10 percent of Dell's $35 billion in revenue
in 2002, compared with 21 percent of revenue at Hewlett-Packard, 25 percent at Gateway, and 46 percent at Cisco.
In 2002, when Compaq merged with Hewlett Packard (the fourth-place PC maker), the newly combined Hewlett Packard
took the top spot but struggled and Dell soon regained its lead. Dell grew the fastest in the early 2000s. Dell had
a reputation as a company that relied upon supply chain efficiencies to sell established technologies at low prices,
instead of being an innovator. By the mid-2000s many analysts were looking to innovating companies as the next source
of growth in the technology sector. Dell's low spending on R&D relative to its revenue (compared to IBM, Hewlett
Packard, and Apple Inc.)—which worked well in the commoditized PC market—prevented it from making inroads into more
lucrative segments, such as MP3 players and later mobile devices. Increasing spending on R&D would have cut into
the operating margins that the company emphasized. Dell had done well with a horizontal organization that focused
on PCs when the computing industry moved to horizontal mix-and-match layers in the 1980s, but by the mid-2000s the
industry shifted to vertically integrated stacks to deliver complete IT solutions and Dell lagged far behind competitors
like Hewlett Packard and Oracle. Dell announced a change campaign called "Dell 2.0," reducing the number of employees
and diversifying the company's products. While chairman of the board after relinquishing his CEO position, Michael
Dell still had significant input in the company during Rollins' years as CEO. With the return of Michael Dell as
CEO, the company saw immediate changes in operations, the exodus of many senior vice-presidents and new personnel
brought in from outside the company. Michael Dell announced a number of initiatives and plans (part of the "Dell
2.0" initiative) to improve the company's financial performance. These include elimination of 2006 bonuses for employees
with some discretionary awards, reduction in the number of managers reporting directly to Michael Dell from 20 to
12, and reduction of "bureaucracy". Jim Schneider retired as CFO and was replaced by Donald Carty, as the company
came under an SEC probe for its accounting practices. Dell has been attempting to offset its declining PC business,
which still accounts for half of its revenue and generates steady cash flow, by expanding into the enterprise market
with servers, networking, software, and services. It avoided many of the acquisition writedowns and management turnover
that plagued its chief rival Hewlett Packard. Dell also managed some success in taking advantage of its high-touch
direct sales heritage to establish close relationships and design solutions for clients. Despite spending $13 billion
on acquisitions to diversify its portfolio beyond hardware, the company was unable to convince the market that it
could thrive in, or had made the transformation to, the post-PC world, as it suffered continued declines in revenue and share
price. Dell's market share in the corporate segment was previously a "moat" against rivals, but this is no longer
the case, as sales and profits have fallen precipitously. Dell facilities in the United States are located in
Austin, Texas; Plano, Texas; Nashua, New Hampshire; Nashville, Tennessee; Oklahoma City, Oklahoma; Peoria, Illinois;
Hillsboro, Oregon (Portland area); Winston-Salem, North Carolina; Eden Prairie, Minnesota (Dell Compellent); Bowling
Green, Kentucky; Lincoln, Nebraska; and Miami, Florida. Facilities located abroad include Penang, Malaysia; Xiamen,
China; Bracknell, UK; Manila, Philippines; Chennai, India; Hyderabad, India; Noida, India; Hortolandia and Porto Alegre,
Brazil; Bratislava, Slovakia; Łódź, Poland; Panama City, Panama; Dublin and Limerick, Ireland; and Casablanca, Morocco.
Dell committed to reduce greenhouse gas emissions from its global activities by 40% by
2015, with fiscal year 2008 as the baseline. It is listed in Greenpeace’s Guide to Greener Electronics, which scores
leading electronics manufacturers on their policies on sustainability, climate and energy, and how
green their products are. In November 2011, Dell ranked 2nd out of 15 listed electronics makers (increasing its score
to 5.1 from 4.9, which it gained in the previous ranking from October 2010). In July 2009, Dell apologized after
drawing the ire of the Taiwanese Consumer Protection Commission for twice refusing to honour a flood of orders placed
at unusually low prices offered on its Taiwanese website. In the first instance, Dell offered a 19" LCD panel for $15.
In the second instance, Dell offered its Latitude E4300 notebook at NT$18,558 (US$580), 70% lower than the usual price
of NT$60,900 (US$1,900). Concerning the E4300, rather than honour the discount and take a significant loss, the firm
withdrew the orders and offered a voucher of up to NT$20,000 (US$625) per customer in compensation. The consumer rights
authorities in Taiwan fined Dell NT$1 million (US$31,250) for customer rights infringements. Many consumers sued the
firm for the unfair compensation. A court in southern Taiwan ordered the firm to deliver 18 laptops and 76 flat-panel
monitors to 31 consumers for NT$490,000 (US$15,120), less than a third of the normal price. The court said the events
could hardly be regarded as mistakes, as the firm had mispriced its products twice on its Taiwanese website within
three weeks. Dell sells personal computers (PCs), servers, data storage devices, network switches, software,
computer peripherals, HDTVs, cameras, printers, MP3 players, and electronics built by other manufacturers. The company
is well known for its innovations in supply chain management and electronic commerce, particularly its direct-sales
model and its "build-to-order" or "configure to order" approach to manufacturing—delivering individual PCs configured
to customer specifications. Dell was a pure hardware vendor for much of its existence, but with the acquisition in
2009 of Perot Systems, Dell entered the market for IT services. The company has since made additional acquisitions
in storage and networking systems, with the aim of expanding their portfolio from offering computers only to delivering
complete solutions for enterprise customers. In 1993, to complement its own direct sales channel, Dell planned to
sell PCs at big-box retail outlets such as Wal-Mart, which would have brought in an additional $125 million in annual
revenue. Bain consultant Kevin Rollins persuaded Michael Dell to pull out of these deals, believing they would be
money losers in the long run. Margins at retail were thin at best and Dell left the reseller channel in 1994. Rollins
would soon join Dell full-time and eventually become the company President and CEO. In 1986, Michael Dell brought
in Lee Walker, a 51-year-old venture capitalist, as president and chief operating officer, to serve as Michael's
mentor and implement Michael's ideas for growing the company. Walker was also instrumental in recruiting members
to the board of directors when the company went public in 1988. Walker retired in 1990 for health reasons, and Michael
Dell hired Morton Meyerson, former CEO and president of Electronic Data Systems to transform the company from a fast-growing
medium-sized firm into a billion-dollar enterprise. Despite plans of expanding into other global regions and product
segments, Dell was heavily dependent on the U.S. corporate PC market: desktop PCs sold to both commercial and corporate
customers accounted for 32 percent of its revenue, 85 percent of its revenue came from businesses, and 64 percent
came from North and South America, according to its 2006 third-quarter results. U.S. shipments
of desktop PCs were shrinking, and the corporate PC market which purchases PCs in upgrade cycles had largely decided
to take a break from buying new systems. The last cycle started around 2002, three or so years after companies started
buying PCs ahead of the perceived Y2K problems, and corporate clients were not expected to upgrade again until after
extensive testing of Microsoft's Windows Vista (expected in early 2007), putting the next upgrade cycle around 2008. Heavily
depending on PCs, Dell had to slash prices to boost sales volumes, while demanding deep cuts from suppliers. The
release of Apple's iPad tablet computer had a negative impact on Dell and other major PC vendors, as consumers switched
away from desktop and laptop PCs. Dell's own mobility division did not manage to develop successful smartphones
or tablets, whether running Windows or Google Android. The Dell Streak was a failure commercially and critically
due to its outdated OS, numerous bugs, and low resolution screen. InfoWorld suggested that Dell and other OEMs saw
tablets as a short-term, low-investment opportunity running Google Android, an approach that neglected user interface
and failed to gain long term market traction with consumers. Dell has responded by pushing higher-end PCs, such as
the XPS line of notebooks, which do not compete with the Apple iPad and Kindle Fire tablets. The growing popularity
of smartphones and tablet computers instead of PCs drove Dell's consumer segment to an operating loss in Q3 2012.
In December 2012, Dell suffered its first decline in holiday sales in five years, despite the introduction of Windows
8. Dell's reputation for poor customer service, dating from 2002 and exacerbated as it moved call centres offshore
and as its growth outstripped its technical support infrastructure, came under increasing scrutiny on the Web. The
original Dell model was known for high customer satisfaction when PCs sold for thousands but by the 2000s, the company
could not justify that level of service when computers in the same lineup sold for hundreds. Rollins responded by
shifting Dick Hunter from head of manufacturing to head of customer service. Hunter, who noted that Dell's DNA of
cost-cutting "got in the way," aimed to reduce call transfer times and have call center representatives resolve inquiries
in one call. By 2006, Dell had spent $100 million in just a few months to improve on this, and rolled out DellConnect
to answer customer inquiries more quickly. In July 2006, the company started its Direct2Dell blog, and then in February
2007, Michael Dell launched IdeaStorm.com, asking customers for advice including selling Linux computers and reducing
the promotional "bloatware" on PCs. These initiatives did manage to cut the negative blog posts from 49% to 22%,
as well as reduce the "Dell Hell" prominent on Internet search engines. In March 2013, the Blackstone Group and Carl
Icahn expressed interest in purchasing Dell. In April 2013, Blackstone withdrew their offer, citing deteriorating
business. Other private equity firms such as KKR & Co. and TPG Capital declined to submit alternative bids for Dell,
citing the uncertain market for personal computers and competitive pressures, so the "wide-open bidding war" never
materialized. Analysts said that the biggest challenge facing Silver Lake would be to find an "exit strategy" to
profit from its investment, which would come when the company held an IPO to go public again, and one warned:
"But even if you can get a $25bn enterprise value for Dell, it will take years to get out." On April 23, 2008, Dell
announced the closure of one of its biggest Canadian call-centers in Kanata, Ontario, terminating approximately 1100
employees, with 500 of those redundancies effective on the spot, and with the official closure of the center scheduled
for the summer. The call-center had opened in 2006 after the city of Ottawa won a bid to host it. Less than a year
later, Dell had planned to double its workforce to nearly 3,000 workers and add a new building. These plans were reversed,
due to a high Canadian dollar that made the Ottawa staff relatively expensive, and also as part of Dell's turnaround,
which involved moving these call-center jobs offshore to cut costs. The company had also announced the shutdown of
its Edmonton, Alberta office, at a cost of 900 jobs. In total, Dell announced the elimination of about 8,800 jobs in 2007–2008
— 10% of its workforce. The combined business is expected to address the markets for scale-out architecture, converged
infrastructure and private cloud computing, playing to the strengths of both EMC and Dell. Commentators have questioned
the deal, with FBR Capital Markets saying that though it makes a "ton of sense" for Dell, it's a "nightmare scenario
that would lack strategic synergies" for EMC. Fortune said there was a lot for Dell to like in EMC's portfolio, but
"does it all add up enough to justify tens of billions of dollars for the entire package? Probably not." The Register
reported the view of William Blair & Company that the merger would "blow up the current IT chess board", forcing
other IT infrastructure vendors to restructure to achieve scale and vertical integration. The value of VMware stock
fell 10% after the announcement, valuing the deal at around $63–64bn rather than the $67bn originally reported. After
several weeks of rumors, which started around January 11, 2013, Dell announced on February 5, 2013 that it had struck
a $24.4 billion leveraged buyout deal, that would have delisted its shares from the NASDAQ and Hong Kong Stock Exchange
and taken it private. Reuters reported that Michael Dell and Silver Lake Partners, aided by a $2 billion loan from
Microsoft, would acquire the public shares at $13.65 apiece. The $24.4 billion buyout was projected to be the largest
leveraged buyout backed by private equity since the 2007 financial crisis. It is also the largest technology buyout
ever, surpassing the 2006 buyout of Freescale Semiconductor for $17.5 billion. In 2000, Dell announced that it would
lease 80,000 square feet (7,400 m2) of space in the Las Cimas office complex in unincorporated Travis County, Texas,
between Austin and West Lake Hills, to house the company's executive offices and corporate headquarters. 100 senior
executives were scheduled to work in the building by the end of 2000. In January 2001, the company leased the space
in Las Cimas 2, located along Loop 360. Las Cimas 2 housed Dell's executives, the investment operations, and some
corporate functions. Dell also had an option for 138,000 square feet (12,800 m2) of space in Las Cimas 3. After a
slowdown in business required reducing employees and production capacity, Dell decided to sublease its offices in
two buildings in the Las Cimas office complex. In 2002 Dell announced that it planned to sublease its space to another
tenant; the company planned to move its headquarters back to Round Rock once a tenant was secured. By 2003, Dell
moved its headquarters back to Round Rock. It leased all of Las Cimas I and II, with a total of 312,000 square feet
(29,000 m2), for about a seven-year period after 2003. By that year roughly 100,000 square feet (9,300 m2) of that
space was absorbed by new subtenants. Dell previously had its headquarters in the Arboretum complex in northern Austin,
Texas. In 1989 Dell occupied 127,000 square feet (11,800 m2) in the Arboretum complex. In 1990, Dell had 1,200 employees
in its headquarters. In 1993, Dell submitted a document to Round Rock officials, titled "Dell Computer Corporate
Headquarters, Round Rock, Texas, May 1993 Schematic Design." Despite the filing, during that year the company said
that it was not going to move its headquarters. In 1994, Dell announced that it was moving most of its employees
out of the Arboretum, but that it was going to continue to occupy the top floor of the Arboretum and that the company's
official headquarters address would continue to be the Arboretum. The top floor continued to hold Dell's board room,
demonstration center, and visitor meeting room. Less than one month prior to August 29, 1994, Dell moved 1,100 customer
support and telephone sales employees to Round Rock. Dell's lease in the Arboretum had been scheduled to expire in
1994. Dell assembled computers for the EMEA market at the Limerick facility in the Republic of Ireland, and once
employed about 4,500 people in that country. Dell began manufacturing in Limerick in 1991 and went on to become Ireland's
largest exporter of goods and its second-largest company and foreign investor. On January 8, 2009, Dell announced
that it would move all Dell manufacturing in Limerick to Dell's new plant in the Polish city of Łódź by January 2010.
European Union officials said they would investigate a €52.7 million aid package the Polish government used to attract
Dell away from Ireland. European Manufacturing Facility 1 (EMF1, opened in 1990) and EMF3 form part of the Raheen
Industrial Estate near Limerick. EMF2 (previously a Wang facility, later occupied by Flextronics, situated in Castletroy)
closed in 2002, and Dell Inc. has consolidated production into EMF3 (EMF1 now contains only
offices). Subsidies from the Polish government kept Dell in Poland for a long time. After ending assembly in the Limerick
plant, the Cherrywood Technology Campus in Dublin was the largest Dell office in the republic, with over 1,200 people
in sales (mainly UK & Ireland), support (enterprise support for EMEA) and research and development for cloud computing,
but no more manufacturing except Dell's Alienware subsidiary, which manufactures PCs in an Athlone, Ireland plant.
Whether this facility will remain in Ireland is not certain. Construction of EMF4 in Łódź, Poland has started:
Dell started production there in autumn 2007. In addition, the company provides protection services, advisory services,
multivendor hardware support, "how-to" support for software applications, collaborative support with many third-party
vendors, and online parts and labor dispatching for customers who diagnose and troubleshoot their hardware. Dell
also provides Dell ProSupport customers access to a crisis-center to handle major outages, or problems caused by
natural disasters. Dell also provides online support keyed to the computer's service tag, which gives a full list
of the hardware originally installed, the purchase date, and the latest updates for the original hardware
drivers. Dell service and support brands include the Dell Solution Station (extended domestic support services, previously
"Dell on Call"), Dell Support Center (extended support services abroad), Dell Business Support (a commercial service-contract
that provides an industry-certified technician with a lower call-volume than in normal queues), Dell Everdream Desktop
Management ("Software as a Service" remote-desktop management, originally a SaaS company founded by Elon Musk's cousin,
Lyndon Rive, which Dell bought in 2007), and Your Tech Team (a support-queue available to home users who purchased
their systems either through Dell's website or through Dell phone-centers). In late 2006, Dell lost its lead
in the PC-business to Hewlett-Packard. Both Gartner and IDC estimated that in the third quarter of 2006, HP shipped
more units worldwide than Dell did. Dell's 3.6% growth paled in comparison to HP's 15% growth during the same period.
The problem got worse in the fourth quarter, when Gartner estimated that Dell PC shipments declined 8.9% (versus
HP's 23.9% growth). As a result, at the end of 2006 Dell's overall PC market-share stood at 13.9% (versus HP's 17.4%).
Dell was the first company to publicly state a timeline for the elimination of toxic polyvinyl chloride (PVC) and
brominated flame retardants (BFRs), which it planned to phase out by the end of 2009. It later revised this commitment,
aiming to remove these substances by the end of 2011, but only in its computing products. In March 2010, Greenpeace
activists protested at Dell offices in Bangalore, Amsterdam and Copenhagen, calling for Dell's founder and CEO Michael
Dell to "drop the toxics" and claiming that Dell's aspiration to be "the greenest technology company on the planet"
was "hypocritical". Dell launched its first products completely free of PVC and BFRs with the G-Series monitors
(G2210 and G2410) in 2009. In 2006, Dell acknowledged that it had problems with customer service. Issues included
the transfer of more than 45% of calls and long wait times. Dell's blog detailed the response: "We're spending
more than a $100 million — and a lot of blood, sweat and tears of talented people — to fix this." Later in the year,
the company increased its spending on customer service to $150 million. Despite significant investment in this space,
Dell continues to face public scrutiny with even the company's own website littered with complaints regarding the
issue escalation process. The company aims to reduce its external environmental impact through
energy-efficient evolution of products, and also reduce its direct operational impact through energy-efficiency programs.
Internal energy-efficiency programs reportedly save the company more than $3 million annually in energy-cost savings.
The largest component of the company's internal energy-efficiency savings comes through PC power management: the
company expects to save $1.8 million in energy costs through using specialized energy-management software on a network
of 50,000 PCs.
If a defendant is sentenced to death at the trial level, the case then goes into a direct review. The direct review process
is a typical legal appeal. An appellate court examines the record of evidence presented in the trial court and the
law that the lower court applied and decides whether the decision was legally sound or not. Direct review of a capital
sentencing hearing will result in one of three outcomes. If the appellate court finds that no significant legal errors
occurred in the capital sentencing hearing, the appellate court will affirm the judgment, or let the sentence stand.
If the appellate court finds that significant legal errors did occur, then it will reverse the judgment, or nullify
the sentence and order a new capital sentencing hearing. Lastly, if the appellate court finds that no reasonable
juror could find the defendant eligible for the death penalty, a rarity, then it will order the defendant acquitted,
or not guilty, of the crime for which the death penalty was imposed, and order the defendant sentenced to the next most
severe punishment for which the offense is eligible. About 60 percent of death sentences survive direct review intact.
Under the Antiterrorism and Effective Death Penalty Act of 1996, a state prisoner is ordinarily only allowed one
suit for habeas corpus in federal court. If the federal courts refuse to issue a writ of habeas corpus, an execution
date may be set. In recent times, however, prisoners have postponed execution through a final round of federal litigation
using the Civil Rights Act of 1871 — codified at 42 U.S.C. § 1983 — which allows people to bring lawsuits against
state actors to protect their federal constitutional and statutory rights. The moratorium ended on January 17, 1977
with the shooting of Gary Gilmore by firing squad in Utah. The first use of the electric chair after the moratorium
was the electrocution of John Spenkelink in Florida on May 25, 1979. The first use of the gas chamber after the moratorium
was the gassing of Jesse Bishop in Nevada on October 22, 1979. The first use of the gallows after the moratorium
was the hanging of Westley Allan Dodd in Washington on January 5, 1993. The first use of lethal injection was on
December 7, 1982, when Charles Brooks, Jr., was executed in Texas. Electrocution was the preferred method of execution
during the 20th century. Electric chairs have commonly been nicknamed Old Sparky; however, Alabama's electric chair
became known as the "Yellow Mama" due to its unique color. Some, particularly in Florida, were noted for malfunctions,
which caused discussion of their cruelty and resulted in a shift to lethal injection as the preferred method of execution.
Although lethal injection dominates as a method of execution, some states allow prisoners on death row to choose
the method used to execute them. Other states with long histories of no death penalty include Wisconsin (the only
state with only one execution), Rhode Island (although later reintroduced, it was unused and abolished again), Maine,
North Dakota, Minnesota, West Virginia, Iowa, and Vermont. The District of Columbia has also abolished the death
penalty; it was last used in 1957. Oregon abolished the death penalty by an overwhelming majority in a 1964
public referendum. It reinstated it, by an even higher margin, in a 1984 joint death penalty/life imprisonment referendum,
after a similar 1978 referendum had succeeded but was not implemented due to judicial rulings. Within the context
of the overall murder rate, the death penalty cannot be said to be widely or routinely used in the United States;
in recent years the average has been about one execution for about every 700 murders committed, or 1 execution for
about every 325 murder convictions. However, 32 of the 50 states still execute people. Among them, Alabama has the
highest per capita rate of death sentences. This is due to judges overriding life imprisonment sentences and imposing
the death penalty. No other states allow this. Puerto Rico's constitution expressly forbids capital punishment, stating
"The death penalty shall not exist", setting it apart from all U.S. states and territories other than Michigan, which
also has a constitutional prohibition (eleven other states and the District of Columbia have abolished capital punishment
through statutory law). However, capital punishment is still applicable to offenses committed in Puerto Rico, if
they fall under the jurisdiction of the federal government, though federal death penalty prosecutions there have
generated significant controversy. Capital punishment was suspended in the United States from 1972 through 1976 primarily
as a result of the Supreme Court's decision in Furman v. Georgia. The last pre-Furman execution was that of Luis
Monge on June 2, 1967. In this case, the court found that the death penalty was being imposed in an unconstitutional
manner, on the grounds of cruel and unusual punishment in violation of the Eighth Amendment to the United States
Constitution. The Supreme Court has never ruled the death penalty to be per se unconstitutional. Present-day statutes
from across the nation use the same words and phrases, requiring modern executions to take place within a wall or
enclosure to exclude public view. Connecticut General Statute § 54–100 requires death sentences to be conducted in
an "enclosure" which "shall be so constructed as to exclude public view." Kentucky Revised Statute 431.220 and Missouri
Revised Statute § 546.730 contain substantially identical language. New Mexico's former death penalty, since repealed,
see N.M. Stat. § 31-14-12, required executions be conducted in a "room or place enclosed from public view." Similarly,
a dormant Massachusetts law, see Mass. Gen. Law ch. 279 § 60, required executions to take place "within an enclosure
or building." North Carolina General Statute § 15-188 requires death sentences to be executed "within the walls"
of the penitentiary, as do Oklahoma Statute Title 22 § 1015 and Montana Code § 46-19-103. Ohio Revised Code § 2949.22
requires that "[t]he enclosure shall exclude public view." Similarly, Tennessee Code § 40-23-116 requires "an enclosure"
for "strict seclusion and privacy." United States Code Title 18 § 3596 and the Code of Federal Regulations 28 CFR
26.4 limit the witnesses permitted at federal executions. In 1976, contemporaneously with Woodson and Roberts, the
Court decided Gregg v. Georgia and upheld a procedure in which the trial of capital crimes was bifurcated into guilt-innocence
and sentencing phases. At the first proceeding, the jury decides the defendant's guilt; if the defendant is found not guilty
or otherwise not convicted of first-degree murder, the death penalty will not be imposed. At the second hearing,
the jury determines whether certain statutory aggravating factors exist, whether any mitigating factors exist, and,
in many jurisdictions, weighs the aggravating and mitigating factors in assessing the ultimate penalty – either death
or life in prison, either with or without parole. Possibly in part due to expedited federal habeas corpus procedures
embodied in the Antiterrorism and Effective Death Penalty Act of 1996, the pace of executions picked up, reaching
a peak of 98 in 1999 before declining gradually to 28 in 2015. Since the death penalty was reauthorized in
1976, 1,411 people have been executed, almost exclusively by the states, with most occurring after 1990. Texas has
accounted for over one-third of modern executions (although only two death sentences were imposed in Texas during
2015, with the courts preferring to issue sentences of life without parole instead) and over four times as many as
Oklahoma, the state with the second-highest number. California has the greatest number of prisoners on death row,
and has issued the highest number of death sentences, but has held relatively few executions. As of November 2008, there
is only one person on death row who has not been convicted of murder. Demarcus Sears remains
under a death sentence in Georgia for the crime of "kidnapping with bodily injury." Sears was convicted in 1986 for
the kidnapping and bodily injury of victim Gloria Ann Wilbur. Wilbur was kidnapped and beaten in Georgia, raped in
Tennessee, and murdered in Kentucky. Sears was never charged with the murder of Wilbur in Kentucky, but was sentenced
to death by a jury in Georgia for "kidnapping with bodily injury." At times when a death sentence is affirmed on
direct review, it is considered final. Yet, supplemental methods to attack the judgment, though less familiar than
a typical appeal, do remain. These supplemental remedies are considered collateral review, that is, an avenue for
upsetting judgments that have become otherwise final. Where the prisoner received his death sentence in a state-level
trial, as is usually the case, the first step in collateral review is state collateral review. (If the case is a
federal death penalty case, it proceeds immediately from direct review to federal habeas corpus.) Although all states
have some type of collateral review, the process varies widely from state to state. Generally, the purpose of these
collateral proceedings is to permit the prisoner to challenge his sentence on grounds that could not have been raised
reasonably at trial or on direct review. Most often these are claims, such as ineffective assistance of counsel,
which require the court to consider new evidence outside the original trial record, something courts may not do
in an ordinary appeal. State collateral review, though an important step in that it helps define the scope of subsequent
review through federal habeas corpus, is rarely successful in and of itself. Only around 6 percent of death sentences
are overturned on state collateral review. In 2010, the death sentences of 53 inmates were overturned as a result
of legal appeals or high court reversals. Traditionally, Section 1983 was of limited use for a state prisoner under
sentence of death because the Supreme Court has held that habeas corpus, not Section 1983, is the only vehicle by
which a state prisoner can challenge his judgment of death. In the 2006 Hill v. McDonough case, however, the United
States Supreme Court approved the use of Section 1983 as a vehicle for challenging a state's method of execution
as cruel and unusual punishment in violation of the Eighth Amendment. The theory is that a prisoner bringing such
a challenge is not attacking directly his judgment of death, but rather the means by which the judgment will
be carried out. Therefore, the Supreme Court held in the Hill case that a prisoner can use Section 1983 rather than
habeas corpus to bring the lawsuit. Yet, as Clarence Hill's own case shows, lower federal courts have often refused
to hear suits challenging methods of execution on the ground that the prisoner brought the claim too late and only
for the purposes of delay. Further, the Court's decision in Baze v. Rees, upholding a lethal injection method used
by many states, has drastically narrowed the opportunity for relief through Section 1983. The largest single execution
in United States history was the hanging of 38 American Indians convicted of murder and rape during the Dakota War
of 1862. They were executed simultaneously on December 26, 1862, in Mankato, Minnesota. A single blow from an axe
cut the rope that held the large four-sided platform, and the prisoners (except for one whose rope had broken and
who had to be re-hanged) fell to their deaths. The second-largest mass execution was also a hanging: the execution
of 13 African-American soldiers for taking part in the Houston Riot of 1917. The largest non-military mass execution
occurred in one of the original thirteen colonies in 1723, when 26 convicted pirates were hanged in Newport, Rhode
Island by order of the Admiralty Court. After a death sentence is affirmed in state collateral review, the prisoner
may file for federal habeas corpus, which is a unique type of lawsuit that can be brought in federal courts. Federal
habeas corpus is a species of collateral review, and it is the only way that state prisoners may attack a death sentence
in federal court (other than petitions for certiorari to the United States Supreme Court after both direct review
and state collateral review). The scope of federal habeas corpus is governed by the Antiterrorism and Effective Death
Penalty Act of 1996, which restricted significantly its previous scope. The purpose of federal habeas corpus is to
ensure that state courts, through the process of direct review and state collateral review, have done at least a
reasonable job in protecting the prisoner's federal constitutional rights. Prisoners may also use federal habeas
corpus suits to bring forth new evidence that they are innocent of the crime, though to be a valid defense at this
late stage in the process, evidence of innocence must be truly compelling. In New Jersey and Illinois, all death
row inmates had their sentences commuted to life in prison without parole when the death penalty repeal bills were
signed into law. In Maryland, Governor Martin O'Malley commuted the state's four remaining death sentences to life
in prison without parole in January 2015. While the bill repealing capital punishment in Connecticut was not retroactive,
the Connecticut Supreme Court ruled in 2015 in State v. Santiago that the legislature's decision to prospectively
abolish capital punishment rendered it offensive to "evolving standards of decency," thus commuting the sentences
of the 11 men remaining on death row to life in prison without parole. New Mexico may yet execute two condemned inmates
sentenced prior to abolition, and Nebraska has ten death row inmates who may still be executed despite abolition.
The United States Supreme Court in Penry v. Lynaugh and the United States Court of Appeals for the Fifth Circuit
in Bigby v. Dretke have been clear in their decisions that jury instructions in death penalty cases that do not ask
about mitigating factors regarding the defendant's mental health violate the defendant's Eighth Amendment rights,
saying that the jury is to be instructed to consider mitigating factors even when answering unrelated questions. This
ruling suggests that specific explanations to the jury are necessary to weigh mitigating factors. Several states
have never had capital punishment, the first being Michigan, which abolished it shortly after entering the Union.
(However, the United States government executed Tony Chebatoris at the Federal Correctional Institution in Milan,
Michigan in 1938.) Article 4, Section 46 of Michigan's fourth Constitution (ratified in 1963; effective in 1964)
prohibits any law providing for the penalty of death. Attempts to change the provision have failed. In 2004, a constitutional
amendment proposed to allow capital punishment in some circumstances failed to make it on the November ballot after
a resolution failed in the legislature and a public initiative failed to gather enough signatures. The method of
execution of federal prisoners for offenses under the Violent Crime Control and Law Enforcement Act of 1994 is that
of the state in which the conviction took place. If the state has no death penalty, the judge must choose a state
with the death penalty for carrying out the execution. For offenses under the Drug Kingpin Act of 1988, the method
of execution is lethal injection. The Federal Correctional Complex in Terre Haute, Indiana is currently the home
of the only death chamber for federal death penalty recipients in the United States, where inmates are put to death
by lethal injection. The complex has so far been the only location used for federal executions post-Gregg. Timothy
McVeigh and Juan Garza were put to death in June 2001, and Louis Jones, Jr. was put to death on March 18, 2003. In
October 2009, the American Law Institute voted to disavow the framework for capital punishment that it had created
in 1962, as part of the Model Penal Code, "in light of the current intractable institutional and structural obstacles
to ensuring a minimally adequate system for administering capital punishment." A study commissioned by the institute
had said that experience had proved that the goal of individualized decisions about who should be executed and the
goal of systemic fairness for minorities and others could not be reconciled. In 1977, the Supreme Court's Coker v.
Georgia decision barred the death penalty for rape of an adult woman, and implied that the death penalty was inappropriate
for any offense against another person other than murder. Prior to the decision, the death penalty for rape of an
adult had been gradually phased out in the United States, and at the time of the decision, the State of Georgia and
the U.S. Federal government were the only two jurisdictions to still retain the death penalty for that offense. However,
three states maintained the death penalty for child rape, as the Coker decision only imposed a ban on executions
for the rape of an adult woman. In 2008, the Kennedy v. Louisiana decision barred the death penalty for child rape.
The result of these two decisions means that the death penalty in the United States is largely restricted to cases
where the defendant took the life of another human being. The current federal kidnapping statute, however, may be
exempt, because the death penalty applies only if the victim dies in the perpetrator's custody, though not necessarily by his
hand; the statute thus stipulates a resulting death, which was the basis of the objection. In addition, the Federal government
retains the death penalty for non-murder offenses that are considered crimes against the state, including treason,
espionage, and crimes under military jurisdiction. Four states in the modern era, Nebraska in 2008, New York and
Kansas in 2004, and Massachusetts in 1984, had their statutes ruled unconstitutional by state courts. The death rows
of New York and Massachusetts were disestablished, and attempts to restore the death penalty were unsuccessful. Kansas
successfully appealed State v. Kleypas, the Kansas Supreme Court decision that declared the state's death penalty
statute unconstitutional, to the United States Supreme Court. Nebraska's death penalty statute was rendered ineffective
on February 8, 2008 when the required method, electrocution, was ruled unconstitutional by the Nebraska Supreme Court.
In 2009, Nebraska enacted a bill that changed its method of execution to lethal injection. In the 2010s, American
jurisdictions have experienced a shortage of lethal injection drugs, due to anti-death penalty advocacy and low production
volume. Hospira, the only U.S. manufacturer of sodium thiopental, stopped making the drug in 2011. The European Union
has outlawed the export of any product that could be used in an execution; this has prevented executioners from using
EU-manufactured anesthetics like propofol which are needed for general medical purposes. Another alternative, pentobarbital,
is also only manufactured in the European Union, which has caused the Danish producer to restrict distribution to
U.S. government customers. As noted in the introduction to this article, the American public has maintained its position
of support for capital punishment for murder. However, when given a choice between the death penalty and life imprisonment
without parole, support has traditionally been significantly lower than polling which has only mentioned the death
penalty as a punishment. In 2010, for instance, one poll showed 49 percent favoring the death penalty and 46 percent
favoring life imprisonment while in another 61% said they preferred another punishment to the death penalty. The
highest level of support for the death penalty recorded overall was 80 percent in 1994 (16 percent opposed), and
the lowest recorded was 42 percent in 1966 (47 percent opposed). On the question of the death penalty vs. life without
parole, the strongest preference for the death penalty was 61 percent in 1997 (29 percent favoring life), and the
lowest preference for the death penalty was 47 percent in 2006 (48 percent favoring life). Other capital crimes include:
the use of a weapon of mass destruction resulting in death, espionage, terrorism, certain violations of the Geneva
Conventions that result in the death of one or more persons, and treason at the federal level; aggravated rape in
Louisiana, Florida, and Oklahoma; extortionate kidnapping in Oklahoma; aggravated kidnapping in Georgia, Idaho, Kentucky
and South Carolina; aircraft hijacking in Alabama and Mississippi; assault by an escaping capital felon in Colorado;
armed robbery in Georgia; drug trafficking resulting in a person's death in Florida; train wrecking which leads to
a person's death, and perjury which leads to a person's death in California, Colorado, Idaho and Nebraska. In a five-to-four
decision, the Supreme Court struck down the impositions of the death penalty in each of the consolidated cases as
unconstitutional. The five justices in the majority did not produce a common opinion or rationale for their decision,
however, and agreed only on a short statement announcing the result. The narrowest opinions, those of Byron White
and Potter Stewart, expressed generalized concerns about the inconsistent application of the death penalty across
a variety of cases but did not exclude the possibility of a constitutional death penalty law. Stewart and William
O. Douglas worried explicitly about racial discrimination in enforcement of the death penalty. Thurgood Marshall
and William J. Brennan, Jr. expressed the opinion that the death penalty was proscribed absolutely by the Eighth
Amendment as "cruel and unusual" punishment. In total, between 1973 and 2015, 156 prisoners were either acquitted or received pardons or commutations on the basis of possible innocence. Death penalty opponents often argue that
this statistic shows how perilously close states have come to undertaking wrongful executions; proponents point out
that the statistic refers only to those exonerated in law, and that the truly innocent may be a smaller number. Statistics
likely understate the actual problem of wrongful convictions because once an execution has occurred there is often
insufficient motivation and finance to keep a case open, and it becomes unlikely at that point that the miscarriage
of justice will ever be exposed. The death penalty is sought and applied more often in some jurisdictions, not only
between states but within states. A 2004 Cornell University study showed that while 2.5 percent of murderers convicted
nationwide were sentenced to the death penalty, in Nevada 6 percent were given the death penalty. Texas gave 2 percent
of murderers a death sentence, less than the national average. Texas, however, executed 40 percent of those sentenced,
which was about four times higher than the national average. California had executed only 1 percent of those sentenced.
Congress acted defiantly toward the Supreme Court by passing the Drug Kingpin Act of 1988 and the Federal Death Penalty
Act of 1994 that made roughly fifty crimes punishable by death, including crimes that do not always involve the death
of someone. Such non-death capital offenses include treason, espionage (spying for another country), and high-level
drug trafficking. Since no one has yet been sentenced to death for such non-death capital offenses, the Supreme Court
has not ruled on their constitutionality. Executions resumed on January 17, 1977, when Gary Gilmore went before a
firing squad in Utah. But the pace was quite slow due to the use of litigation tactics which involved filing repeated
writs of habeas corpus, which succeeded for many in delaying their actual execution for many years. Although hundreds
of individuals were sentenced to death in the United States during the 1970s and early 1980s, only ten people besides
Gilmore (who had waived all of his appeal rights) were actually executed prior to 1984. After the September 2011
execution of Troy Davis, believed by many to be innocent, Richard Dieter, the director of the Death Penalty Information
Center, said this case was a clear wake-up call to politicians across the United States. He said: "They weren't expecting
such passion from people in opposition to the death penalty. There's a widely held perception that all Americans
are united in favor of executions, but this message came across loud and clear that many people are not happy with
it." Brian Evans of Amnesty International, which led the campaign to spare Davis's life, said that there was a groundswell
in America of people "who are tired of a justice system that is inhumane and inflexible and allows executions where
there is clear doubts about guilt". He predicted the debate would now be conducted with renewed energy. Various methods
have been used in the history of the American colonies and the United States but only five methods are currently
used. Historically, burning, crushing, breaking on the wheel, and bludgeoning were used for a small number of executions,
while hanging was the most common method. The last person burned at the stake was a black slave in South Carolina
in August 1825. The last person to be hanged in chains was a murderer named John Marshall in West Virginia on April
4, 1913. Although beheading was a legal method in Utah from 1851 to 1888, it was never used. African Americans made
up 41 percent of death row inmates while making up only 12.6 percent of the general population. (They have made up
34 percent of those actually executed since 1976.) However, that number is lower than that of prison inmates, which
is 47 percent. According to the US Department of Justice, African Americans accounted for 52.5% of homicide offenders
from 1980 to 2008, with whites 45.3% and Native Americans and Asians 2.2%. This means African Americans are less
likely to be executed on a per capita basis. However, according to a 2003 Amnesty International report, blacks and
whites were the victims of murder in almost equal numbers, yet 80 percent of the people executed since 1977 were
convicted of murders involving white victims. 13.5% of death row inmates are of Hispanic or Latino descent, while
they make up 17.4% of the general population. The legal administration of the death penalty in the United States
is complex. Typically, it involves four critical steps: (1) sentencing, (2) direct review, (3) state collateral review,
and (4) federal habeas corpus. Recently, a narrow and final fifth level of process – (5) the Section 1983 challenge
– has become increasingly important. (Clemency or pardon, through which the Governor or President of the jurisdiction
can unilaterally reduce or abrogate a death sentence, is an executive rather than judicial process.) The number of
new death sentences handed down peaked in 1995–1996 (309). There were 73 new death sentences handed down in 2014,
the lowest number since 1973 (44). All of the executions which have taken place since the 1936 hanging of Bethea
in Owensboro have been conducted within a wall or enclosure. For example, Fred Adams was legally hanged in Kennett,
Missouri, on April 2, 1937, within a 10-foot (3 m) wooden stockade. Roscoe "Red" Jackson was hanged within a stockade
in Galena, Missouri, on May 26, 1937. Two Kentucky hangings were conducted after Galena in which numerous persons
were present within a wooden stockade, that of John "Peter" Montjoy in Covington, Kentucky on December 17, 1937,
and that of Harold Van Venison in Covington on June 3, 1938. An estimated 400 witnesses were present for the hanging
of Lee Simpson in Ryegate, Montana, on December 30, 1939. The execution of Timothy McVeigh on June 11, 2001 was witnessed
by some 300 people, some by closed-circuit television. James Liebman, a professor of law at Columbia Law School,
stated in 1996 that his study found that when habeas corpus petitions in death penalty cases were traced from conviction
to completion of the case that there was "a 40 percent success rate in all capital cases from 1978 to 1995." Similarly,
a study by Ronald Tabak in a law review article puts the success rate in habeas corpus cases involving death row
inmates even higher, finding that between "1976 and 1991, approximately 47 percent of the habeas petitions filed
by death row inmates were granted." The different numbers are largely definitional rather than substantive: Liebman's statistics look at the percentage of all death penalty cases reversed, while the others look only at cases not reversed
prior to habeas corpus review. The last use of the firing squad between 1608 and the moratorium on judicial executions
between 1967 and 1977 was when Utah shot James W. Rodgers on March 30, 1960. The last use of the gallows between
1608 and the moratorium was when Kansas hanged George York on June 22, 1965. The last use of the electric chair between
the first electrocution on August 6, 1890 and the moratorium was when Oklahoma electrocuted James French on August
10, 1966. The last use of the gas chamber between the first gassing on February 8, 1924 and the moratorium was when
Colorado gassed Luis Monge on June 2, 1967. Since 1642 (in the 13 colonies, the United States under the Articles
of Confederation, and the current United States) an estimated 364 juvenile offenders have been put to death by the
states and the federal government. The earliest known execution of a prisoner for crimes committed as a juvenile
was Thomas Graunger in 1642. Twenty-two of the executions occurred after 1976, in seven states. Due to the slow process
of appeals, it was highly unusual for a condemned person to be under 18 at the time of execution. The youngest person
to be executed in the 20th century was George Stinney, who was electrocuted in South Carolina at the age of 14 on
June 16, 1944. The last execution of a juvenile may have been Leonard Shockley, who died in the Maryland gas chamber
on April 10, 1959, at the age of 17. No one has been under age 19 at time of execution since at least 1964. Since
the reinstatement of the death penalty in 1976, 22 people have been executed for crimes committed under the age of
18. Twenty-one were 17 at the time of the crime. The last person to be executed for a crime committed as a juvenile
was Scott Hain on April 3, 2003 in Oklahoma. In May 2014, Oklahoma Director of Corrections, Robert Patton, recommended
an indefinite hold on executions in the state after the botched execution of African-American Clayton Lockett. The
prisoner had to be tasered to restrain him prior to the execution, and the lethal injection missed a vein in his
groin, resulting in Lockett regaining consciousness, trying to get up, and attempting to speak before dying of a heart attack
43 minutes later, after the attempted execution had been called off. In 2015, the state approved nitrogen asphyxiation
as a method of execution. Opponents argue that the death penalty is not an effective means of deterring crime, risks
the execution of the innocent, is unnecessarily barbaric in nature, cheapens human life, and puts a government on
the same base moral level as those criminals involved in murder. Furthermore, some opponents argue that the arbitrariness
with which it is administered and the systemic influence of racial, socio-economic, geographic, and gender bias on
determinations of desert make the current practice of capital punishment immoral and illegitimate. Sixteen was held
to be the minimum permissible age in the 1988 Supreme Court decision of Thompson v. Oklahoma. The Court, considering
the case Roper v. Simmons in March 2005, found the execution of juvenile offenders unconstitutional by a 5–4 margin,
effectively raising the minimum permissible age to 18. State laws have not been updated to conform with this decision.
In the American legal system, unconstitutional laws do not need to be repealed; instead, they are held to be unenforceable.
(See also List of juvenile offenders executed in the United States) Around 1890, a political movement developed in
the United States to mandate private executions. Several states enacted laws which required executions to be conducted
within a "wall" or "enclosure" to "exclude public view." For example, in 1919, the Missouri legislature adopted a
statute (L.1919, p. 781) which required, "the sentence of death should be executed within the county jail, if convenient,
and otherwise within an enclosure near the jail." The Missouri law permitted the local sheriff to distribute passes
to individuals (usually local citizens) whom he believed should witness the hanging, but the sheriffs – for various
reasons – sometimes denied passes to individuals who wanted to watch. Missouri executions conducted after 1919 were
not "public" because they were conducted behind closed walls, and the general public was not permitted to attend.
Previous post-Furman mass clemencies took place in 1986 in New Mexico, when Governor Toney Anaya commuted all death
sentences because of his personal opposition to the death penalty. In 1991, outgoing Ohio Governor Dick Celeste commuted
the sentences of eight prisoners, among them all four women on the state's death row. And during his two terms (1979–1987)
as Florida's Governor, Bob Graham, although a strong death penalty supporter who had overseen the first post-Furman
involuntary execution as well as 15 others, agreed to commute the sentences of six people on the grounds of "possible
innocence" or "disproportionality." In 2010, bills to abolish the death penalty in Kansas and in South Dakota (which
had a de facto moratorium at the time) were rejected. Idaho ended its de facto moratorium, during which only one
volunteer had been executed, on November 18, 2011 by executing Paul Ezra Rhoades; South Dakota executed Donald Moeller
on October 30, 2012, ending a de facto moratorium during which only two volunteers had been executed. Of the 12 prisoners
whom Nevada has executed since 1976, 11 waived their rights to appeal. Kentucky and Montana have executed two prisoners
against their will (KY: 1997 and 1999, MT: 1995 and 1998) and one volunteer each (KY: 2008, MT: 2006). Colorado (in 1997) and Wyoming (in 1992) have each executed only one prisoner. Pharmaceutical companies whose products
are used in the three-drug cocktails for lethal injections are predominantly European, and they have strenuously
objected to the use of their drugs for executions and taken steps to prevent their use. For example, Hospira, the
sole American manufacturer of sodium thiopental, the critical anesthetic in the three-drug cocktail, announced in
2011 that it would no longer manufacture the drug for the American market, in part for ethical reasons and in part
because its transfer of sodium thiopental manufacturing to Italy would subject it to the European Union's Torture
Regulation, which forbids the use of any product manufactured within the Union for torture (as execution by lethal
injection is considered by the Regulation). Since the drug manufacturers began taking these steps and the EU regulation
ended the importation of drugs produced in Europe, the resulting shortage of execution drugs has led to or influenced
decisions to impose moratoria in Arkansas, California, Kentucky, Louisiana, Mississippi, Montana, Nevada, North Carolina,
and Tennessee. In the decades since Furman, new questions have emerged about whether or not prosecutorial arbitrariness
has replaced sentencing arbitrariness. A study by Pepperdine University School of Law published in Temple Law Review,
"Unpredictable Doom and Lethal Injustice: An Argument for Greater Transparency in Death Penalty Decisions," surveyed
the decision-making process among prosecutors in various states. The authors found that prosecutors' capital punishment
filing decisions remain marked by local "idiosyncrasies," suggesting they are not in keeping with the spirit of the
Supreme Court's directive. This means that "the very types of unfairness that the Supreme Court sought to eliminate"
may still "infect capital cases." Wide prosecutorial discretion remains because of overly broad criteria. California
law, for example, has 22 "special circumstances," making nearly all premeditated murders potential capital cases.
The 32 death penalty states have varying numbers and types of "death qualifiers" – circumstances that allow for capital
charges. The number varies from a high of 34 in California to 22 in Colorado and Delaware to 12 in Texas, Nebraska,
Georgia and Montana. The study's authors call for reform of state procedures along the lines of reforms in the federal
system, which the U.S. Department of Justice initiated with a 1995 protocol. Crimes subject to the death penalty
vary by jurisdiction. All jurisdictions that use capital punishment designate the highest grade of murder a capital
crime, although most jurisdictions require aggravating circumstances. Treason against the United States, as well
as treason against the states of Arkansas, California, Georgia, Louisiana, Mississippi, and Missouri are capital
offenses.
French historians traditionally place the Enlightenment between 1715, the year that Louis XIV died, and 1789, the beginning
of the French Revolution. Some recent historians begin the period in the 1620s, with the start of the scientific
revolution. The Philosophes, the French term for the philosophers of the period, widely circulated their ideas through
meetings at scientific academies, Masonic lodges, literary salons and coffee houses, and through printed books and
pamphlets. The ideas of the Enlightenment undermined the authority of the monarchy and the church, and paved the
way for the revolutions of the 18th and 19th centuries. A variety of 19th-century movements, including liberalism
and neo-classicism, trace their intellectual heritage back to the Enlightenment. Francis Hutcheson, a moral philosopher,
described the utilitarian and consequentialist principle that virtue is that which provides, in his words, "the greatest
happiness for the greatest numbers". Much of what is incorporated in the scientific method (the nature of knowledge,
evidence, experience, and causation) and some modern attitudes towards the relationship between science and religion
were developed by his protégés David Hume and Adam Smith. Hume became a major figure in the skeptical philosophical
and empiricist traditions of philosophy. The influence of science also began appearing more commonly in poetry and
literature during the Enlightenment. Some poetry became infused with scientific metaphor and imagery, while other
poems were written directly about scientific topics. Sir Richard Blackmore committed the Newtonian system to verse
in Creation, a Philosophical Poem in Seven Books (1712). After Newton's death in 1727, poems were composed in his
honour for decades. James Thomson (1700–1748) penned his "Poem to the Memory of Newton," which mourned the loss of
Newton, but also praised his science and legacy. John Locke, one of the most influential Enlightenment thinkers,
based his governance philosophy in social contract theory, a subject that permeated Enlightenment political thought.
The English philosopher Thomas Hobbes ushered in this new debate with his work Leviathan in 1651. Hobbes also developed
some of the fundamentals of European liberal thought: the right of the individual; the natural equality of all men;
the artificial character of the political order (which led to the later distinction between civil society and the
state); the view that all legitimate political power must be "representative" and based on the consent of the people;
and a liberal interpretation of law which leaves people free to do whatever the law does not explicitly forbid. Both
Rousseau and Locke's social contract theories rest on the presupposition of natural rights, which are not a result
of law or custom, but are things that all men have in pre-political societies, and are therefore universal and inalienable.
The most famous natural right formulation comes from John Locke in his Second Treatise, when he introduces the state
of nature. For Locke the law of nature is grounded on mutual security, or the idea that one cannot infringe on another's
natural rights, as every man is equal and has the same inalienable rights. These natural rights include perfect equality
and freedom, and the right to preserve life and property. Locke also argued against slavery on the basis that enslaving
yourself goes against the law of nature; you cannot surrender your own rights, your freedom is absolute and no one
can take it from you. Additionally, Locke argues that one person cannot enslave another because it is morally reprehensible,
although he introduces a caveat by saying that enslavement of a lawful captive in time of war would not go against
one's natural rights. Enlightenment era religious commentary was a response to the preceding century of religious
conflict in Europe, especially the Thirty Years' War. Theologians of the Enlightenment wanted to reform their faith
to its generally non-confrontational roots and to limit the capacity for religious controversy to spill over into
politics and warfare while still maintaining a true faith in God. For moderate Christians, this meant a return to
simple Scripture. John Locke abandoned the corpus of theological commentary in favor of an "unprejudiced examination"
of the Word of God alone. He determined the essence of Christianity to be a belief in Christ the redeemer and recommended
avoiding more detailed debate. Thomas Jefferson in the Jefferson Bible went further; he dropped any passages dealing
with miracles, visitations of angels, and the resurrection of Jesus after his death. He tried to extract the practical
Christian moral code of the New Testament. The Enlightenment took hold in most European countries, often with a specific
local emphasis. For example, in France it became associated with anti-government and anti-Church radicalism while
in Germany it reached deep into the middle classes, where it expressed a spiritualistic and nationalistic tone
without threatening governments or established churches. Government responses varied widely. In France, the government
was hostile, and the philosophes fought against its censorship, sometimes being imprisoned or hounded into exile.
The British government for the most part ignored the Enlightenment's leaders in England and Scotland, although it
did give Isaac Newton a knighthood and a very lucrative government office. The Enlightenment has always been contested
territory. Its supporters "hail it as the source of everything that is progressive about the modern world. For them,
it stands for freedom of thought, rational inquiry, critical thinking, religious tolerance, political liberty, scientific
achievement, the pursuit of happiness, and hope for the future." However, its detractors accuse it of 'shallow' rationalism,
naïve optimism, unrealistic universalism, and moral darkness. From the start there was a Counter-Enlightenment in
which conservative and clerical defenders of traditional religion attacked materialism and skepticism as evil forces
that encouraged immorality. By 1794, they pointed to the Terror during the French Revolution as confirmation of their
predictions. As the Enlightenment was ending, Romantic philosophers argued that excessive dependence on reason was
a mistake perpetuated by the Enlightenment, because it disregarded the bonds of history, myth, faith and tradition
that were necessary to hold society together. There is little consensus on the precise beginning of the Age of Enlightenment;
the beginning of the 18th century (1701) or the middle of the 17th century (1650) are often used as epochs. French
historians usually place the period, called the Siècle des Lumières (Century of Lights), between 1715 and
1789, from the beginning of the reign of Louis XV until the French Revolution. If taken back to the mid-17th century,
the Enlightenment would trace its origins to Descartes' Discourse on Method, published in 1637. In France, many cited
the publication of Isaac Newton's Principia Mathematica in 1687. It is argued by several historians and philosophers
that the beginning of the Enlightenment is when Descartes shifted the epistemological basis from external authority
to internal certainty by his cogito ergo sum published in 1637. As to its end, most scholars use the last years of
the century, often choosing the French Revolution of 1789 or the beginning of the Napoleonic Wars (1804–15) as a
convenient point in time with which to date the end of the Enlightenment. The creation of the public sphere has been
associated with two long-term historical trends: the rise of the modern nation state and the rise of capitalism.
The modern nation state, in its consolidation of public power, created by counterpoint a private realm of society
independent of the state, which allowed for the public sphere. Capitalism also increased society's autonomy and self-awareness,
and an increasing need for the exchange of information. As the nascent public sphere expanded, it embraced a large
variety of institutions; the most commonly cited were coffee houses and cafés, salons and the literary public sphere,
figuratively localized in the Republic of Letters. In France, the creation of the public sphere was helped by the
aristocracy's move from the King's palace at Versailles to Paris in about 1720, since their rich spending stimulated
the trade in luxuries and artistic creations, especially fine paintings. The desire to explore, record and systematize
knowledge had a meaningful impact on music publications. Jean-Jacques Rousseau's Dictionnaire de musique (published
1767 in Geneva and 1768 in Paris) was a leading text in the late 18th century. This widely available dictionary gave
short definitions of words like genius and taste, and was clearly influenced by the Enlightenment movement. Another
text influenced by Enlightenment values was Charles Burney's A General History of Music: From the Earliest Ages to
the Present Period (1776), which was a historical survey and an attempt to rationalize elements in music systematically
over time. Recently, musicologists have shown renewed interest in the ideas and consequences of the Enlightenment.
For example, Rose Rosengard Subotnik's Deconstructive Variations (subtitled Music and Reason in Western Society)
examines Mozart's Die Zauberflöte (1791) from both Enlightenment and Romantic perspectives, and concludes that the
work is "an ideal musical representation of the Enlightenment". Many women played an essential part in the French
Enlightenment through the role they played as salonnières in Parisian salons, in contrast to the male philosophes.
The salon was the principal social institution of the republic, and "became the civil working spaces of the project
of Enlightenment." Women, as salonnières, were "the legitimate governors of [the] potentially unruly discourse" that
took place within. While women were marginalized in the public culture of the Ancien Régime, the French Revolution
destroyed the old cultural and economic restraints of patronage and corporatism (guilds), opening French society
to female participation, particularly in the literary sphere. The vast majority of the reading public could not afford
to own a private library, and while most of the state-run "universal libraries" set up in the 17th and 18th centuries
were open to the public, they were not the only sources of reading material. On one end of the spectrum was the Bibliothèque
Bleue, a collection of cheaply produced books published in Troyes, France. Intended for a largely rural and semi-literate
audience, these books included almanacs, retellings of medieval romances, and condensed versions of popular novels,
among other things. While some historians have argued against the Enlightenment's penetration into the lower classes,
the Bibliothèque Bleue represents at least a desire to participate in Enlightenment sociability. Moving up the classes,
a variety of institutions offered readers access to material without needing to buy anything. Libraries that lent
out their material for a small price started to appear, and occasionally bookstores would offer a small lending library
to their patrons. Coffee houses commonly offered books, journals and sometimes even popular novels to their customers.
The Tatler and The Spectator, two influential periodicals sold from 1709 to 1714, were closely associated with coffee
house culture in London, being both read and produced in various establishments in the city. This is an example of
the triple or even quadruple function of the coffee house: reading material was often obtained, read, discussed and
even produced on the premises. The target audience of natural history was French polite society, evidenced more by
the specific discourse of the genre than by the generally high prices of its works. Naturalists catered to polite
society's desire for erudition – many texts had an explicit instructive purpose. However, natural history was often
a political affair. As E. C. Spary writes, the classifications used by naturalists "slipped between the natural world
and the social ... to establish not only the expertise of the naturalists over the natural, but also the dominance
of the natural over the social". The idea of taste (le goût) was a social indicator: to truly be able to categorize
nature, one had to have the proper taste, an ability of discretion shared by all members of polite society. In this
way natural history spread many of the scientific developments of the time, but also provided a new source of legitimacy
for the dominant class. From this basis, naturalists could then develop their own social ideals based on their scientific
works. The first technical dictionary was drafted by John Harris and entitled Lexicon Technicum: Or, An Universal
English Dictionary of Arts and Sciences. Harris' book avoided theological and biographical entries; instead it concentrated
on science and technology. Published in 1704, the Lexicon technicum was the first book to be written in English that
took a methodical approach to describing mathematics and commercial arithmetic along with the physical sciences and
navigation. Other technical dictionaries followed Harris' model, including Ephraim Chambers' Cyclopaedia (1728),
which included five editions, and was a substantially larger work than Harris'. The folio edition of the work even
included foldout engravings. The Cyclopaedia emphasized Newtonian theories, Lockean philosophy, and contained thorough
examinations of technologies, such as engraving, brewing, and dyeing. The first significant work that expressed scientific
theory and knowledge expressly for the laity, in the vernacular, and with the entertainment of readers in mind, was
Bernard de Fontenelle's Conversations on the Plurality of Worlds (1686). The book was produced specifically for women
with an interest in scientific writing and inspired a variety of similar works. These popular works were written
in a discursive style, which was laid out much more clearly for the reader than the complicated articles, treatises,
and books published by the academies and scientists. Charles Leadbetter's Astronomy (1727) was advertised as "a Work
entirely New" that would include "short and easie [sic] Rules and Astronomical Tables." The first French introduction
to Newtonianism and the Principia was Eléments de la philosophie de Newton, published by Voltaire in 1738. Émilie
du Châtelet's translation of the Principia, published after her death in 1756, also helped to spread Newton's theories
beyond scientific academies and the university. Francesco Algarotti, writing for a growing female audience, published
Il Newtonianismo per le dame, which was a tremendously popular work and was translated from Italian into English by
Elizabeth Carter. A similar introduction to Newtonianism for women was produced by Henry Pemberton. His A View of
Sir Isaac Newton's Philosophy was published by subscription. Extant records of subscribers show that women from a
wide range of social standings purchased the book, indicating the growing number of scientifically inclined female
readers among the middling class. During the Enlightenment, women also began producing popular scientific works themselves.
Sarah Trimmer wrote a successful natural history textbook for children titled The Easy Introduction to the Knowledge
of Nature (1782), which remained in print for many years thereafter, running to eleven editions. The strongest contribution of the
French Academies to the public sphere comes from the concours académiques (roughly translated as 'academic contests')
they sponsored throughout France. These academic contests were perhaps the most public of any institution during
the Enlightenment. The practice of contests dated back to the Middle Ages, and was revived in the mid-17th century.
The subject matter had previously been generally religious and/or monarchical, featuring essays, poetry, and painting.
By roughly 1725, however, this subject matter had radically expanded and diversified, including "royal propaganda,
philosophical battles, and critical ruminations on the social and political institutions of the Old Regime." Topics
of public controversy were also discussed such as the theories of Newton and Descartes, the slave trade, women's
education, and justice in France. The first English coffeehouse opened in Oxford in 1650. Brian Cowan said that Oxford
coffeehouses developed into "penny universities", offering a locus of learning that was less formal than structured
institutions. These penny universities occupied a significant position in Oxford academic life, as they were frequented
by those consequently referred to as the "virtuosi", who conducted their research on some of the resulting premises.
According to Cowan, "the coffeehouse was a place for like-minded scholars to congregate, to read, as well as learn
from and to debate with each other, but was emphatically not a university institution, and the discourse there was
of a far different order than any university tutorial." Historians have long debated the extent to which the secret
network of Freemasonry was a main factor in the Enlightenment. The leaders of the Enlightenment included Freemasons
such as Diderot, Montesquieu, Voltaire, Pope, Horace Walpole, Sir Robert Walpole, Mozart, Goethe, Frederick the Great,
Benjamin Franklin, and George Washington. Norman Davies said that Freemasonry was a powerful force on behalf of Liberalism
in Europe, from about 1700 to the twentieth century. It expanded rapidly during the Age of Enlightenment, reaching
practically every country in Europe. It was especially attractive to powerful aristocrats and politicians as well
as intellectuals, artists and political activists. The Age of Enlightenment was preceded by and closely associated
with the scientific revolution. Earlier philosophers whose work influenced the Enlightenment included Francis Bacon,
Descartes, Locke, and Spinoza. The major figures of the Enlightenment included Cesare Beccaria, Voltaire, Denis Diderot,
Jean-Jacques Rousseau, David Hume, Adam Smith, and Immanuel Kant. Some European rulers, including Catherine II of
Russia, Joseph II of Austria and Frederick II of Prussia, tried to apply Enlightenment thought on religious and political
tolerance, which became known as enlightened absolutism. The Americans Benjamin Franklin and Thomas Jefferson came
to Europe during the period and contributed actively to the scientific and political debate, and the ideals of the
Enlightenment were incorporated into the United States Declaration of Independence and the Constitution of the United
States. Immanuel Kant (1724–1804) tried to reconcile rationalism and religious belief, individual freedom and political
authority, as well as map out a view of the public sphere through private and public reason. Kant's work continued
to shape German thought, and indeed all of European philosophy, well into the 20th century. Mary Wollstonecraft was
one of England's earliest feminist philosophers. She argued for a society based on reason, and that women, as well
as men, should be treated as rational beings. She is best known for her work A Vindication of the Rights of Woman
(1791). Hume and other Scottish Enlightenment thinkers developed a 'science of man', which was expressed historically
in works by authors including James Burnett, Adam Ferguson, John Millar, and William Robertson, all of whom merged
a scientific study of how humans behaved in ancient and primitive cultures with a strong awareness of the determining
forces of modernity. Modern sociology largely originated from this movement, and Hume's philosophical concepts, which directly influenced James Madison (and thus the U.S. Constitution) and were popularised by Dugald Stewart, became the basis of classical liberalism. Both Locke and Rousseau developed social contract theories in Two Treatises of
Government and Discourse on Inequality, respectively. Though the works differ considerably, Locke and Rousseau, like Hobbes, agreed that a social contract, in which the government's authority lies in the consent of the governed, is necessary for
man to live in civil society. Locke defines the state of nature as a condition in which humans are rational and follow
natural law; in which all men are born equal and with the right to life, liberty and property. However, when one
citizen breaks the Law of Nature, both the transgressor and the victim enter into a state of war, from which it is
virtually impossible to break free. Therefore, Locke said that individuals enter into civil society to protect their
natural rights via an "unbiased judge" or common authority, such as courts, to appeal to. Contrastingly, Rousseau's
conception relies on the supposition that "civil man" is corrupted, while "natural man" has no want he cannot fulfill
himself. Natural man is only taken out of the state of nature when the inequality associated with private property
is established. Rousseau said that people join into civil society via the social contract to achieve unity while
preserving individual freedom. This is embodied in the sovereignty of the general will, the moral and collective
legislative body constituted by citizens. In several nations, rulers welcomed leaders of the Enlightenment at court
and asked them to help design laws and programs to reform the system, typically to build stronger national states.
These rulers are called "enlightened despots" by historians. They included Frederick the Great of Prussia, Catherine
the Great of Russia, Leopold II of Tuscany, and Joseph II of Austria. Joseph was over-enthusiastic, announcing so many reforms with so little support that revolts broke out; his regime became a comedy of errors, and nearly all his programs were reversed. Senior ministers Pombal in Portugal and Struensee in Denmark also governed according
to Enlightenment ideals. In Poland, the model constitution of 1791 expressed Enlightenment ideals, but was in effect
for only one year as the nation was partitioned among its neighbors. More enduring were the cultural achievements,
which created a nationalist spirit in Poland. Enlightenment scholars sought to curtail the political power of organized
religion and thereby prevent another age of intolerant religious war. Spinoza determined to remove politics from
contemporary and historical theology (e.g. disregarding Judaic law). Moses Mendelssohn advised affording no political
weight to any organized religion, but instead recommended that each person follow what they found most convincing.
A good religion based in instinctive morals and a belief in God should not theoretically need force to maintain order
in its believers, and both Mendelssohn and Spinoza judged religion on its moral fruits, not the logic of its theology.
In the Scottish Enlightenment, Scotland's major cities created an intellectual infrastructure of mutually supporting
institutions such as universities, reading societies, libraries, periodicals, museums and masonic lodges. The Scottish
network was "predominantly liberal Calvinist, Newtonian, and 'design' oriented in character which played a major
role in the further development of the transatlantic Enlightenment". In France, Voltaire said "we look to Scotland
for all our ideas of civilization." The focus of the Scottish Enlightenment ranged from intellectual and economic
matters to the specifically scientific as in the work of William Cullen, physician and chemist; James Anderson, an
agronomist; Joseph Black, physicist and chemist; and James Hutton, the first modern geologist. The term "Enlightenment"
emerged in English in the later part of the 19th century, with particular reference to French philosophy, as the
equivalent of the French term 'Lumières' (used first by Dubos in 1733 and already well established by 1751). From
Immanuel Kant's 1784 essay "Beantwortung der Frage: Was ist Aufklärung?" ("Answering the Question: What is Enlightenment?")
the German term became 'Aufklärung' (aufklären = to illuminate; sich aufklären = to clear up). However, scholars
have never agreed on a definition of the Enlightenment, or on its chronological or geographical extent. Terms like
"les Lumières" (French), "illuminismo" (Italian), "ilustración" (Spanish) and "Aufklärung" (German) referred to partly
overlapping movements. Not until the late nineteenth century did English scholars agree they were talking about "the
Enlightenment." The context for the rise of the public sphere was the economic and social change commonly associated
with the Industrial Revolution: "economic expansion, increasing urbanization, rising population and improving communications
in comparison to the stagnation of the previous century". Rising efficiency in production techniques and communication
lowered the prices of consumer goods and increased the amount and variety of goods available to consumers (including
the literature essential to the public sphere). Meanwhile, the colonial experience (most European states had colonial
empires in the 18th century) began to expose European society to extremely heterogeneous cultures, leading to the
breaking down of "barriers between cultural systems, religious divides, gender differences and geographical areas".
As the economy and the middle class expanded, there was an increasing number of amateur musicians. One manifestation
of this involved women, who became more involved with music on a social level. Women were already engaged in professional
roles as singers, and increased their presence in the amateur performers' scene, especially with keyboard music.
Music publishers began to print music that amateurs could understand and play. The majority of the works that were
published were for keyboard, voice and keyboard, and chamber ensemble. After these initial genres were popularized,
from the mid-century on, amateur groups sang choral music, which then became a new trend for publishers to capitalize
on. The increasing study of the fine arts, as well as access to amateur-friendly published works, led to more people
becoming interested in reading and discussing music. Music magazines, reviews, and critical works which suited amateurs
as well as connoisseurs began to surface. In France, the established men of letters (gens de lettres) had fused with
the elites (les grands) of French society by the mid-18th century. This led to the creation of an oppositional literary
sphere, Grub Street, the domain of a "multitude of versifiers and would-be authors". These men came to London to
become authors, only to discover that the literary market simply could not support large numbers of writers, who,
in any case, were very poorly remunerated by the publishing-bookselling guilds. The first scientific and literary
journals were established during the Enlightenment. The first journal, the Parisian Journal des Sçavans, appeared
in 1665. However, it was not until 1682 that periodicals began to be more widely produced. French and Latin were
the dominant languages of publication, but there was also a steady demand for material in German and Dutch. There
was generally low demand for English publications on the Continent, which was echoed by England's similar lack of
desire for French works. Languages commanding less of an international market – such as Danish, Spanish and Portuguese
– found journal success more difficult, and more often than not, a more international language was used instead.
French slowly took over Latin's status as the lingua franca of learned circles. This in turn gave precedence to the
publishing industry in Holland, where the vast majority of these French language periodicals were produced. In Germany,
practical reference works intended for the uneducated majority became popular in the 18th century. The Marperger
Curieuses Natur-, Kunst-, Berg-, Gewerck- und Handlungs-Lexicon (1712) explained terms that usefully described the trades and scientific and commercial education. Jablonski's Allgemeines Lexicon (1721) was better known than the Handlungs-Lexicon,
and underscored technical subjects rather than scientific theory. For example, over five columns of text were dedicated
to wine, while geometry and logic were allocated only twenty-two and seventeen lines, respectively. The first edition
of the Encyclopædia Britannica (1771) was modelled along the same lines as the German lexicons. More importantly,
the contests were open to all, and the enforced anonymity of each submission guaranteed that neither gender nor social
rank would determine the judging. Indeed, although the "vast majority" of participants belonged to the wealthier
strata of society ("the liberal arts, the clergy, the judiciary, and the medical profession"), there were some cases
of the popular classes submitting essays, and even winning. Similarly, a significant number of women participated
– and won – the competitions. Of a total of 2300 prize competitions offered in France, women won 49 – perhaps a small
number by modern standards, but very significant in an age in which most women did not have any academic training.
Indeed, the majority of the winning entries were for poetry competitions, a genre commonly stressed in women's education.
The Café Procope was established in Paris in 1686; by the 1720s there were around 400 cafés in the city. The Café
Procope in particular became a center of Enlightenment, welcoming such celebrities as Voltaire and Rousseau. The
Café Procope was where Diderot and D'Alembert decided to create the Encyclopédie. The cafés were one of the various
"nerve centers" for bruits publics, public noise or rumour. These bruits were allegedly a much better source of information
than were the actual newspapers available at the time. During the Age of Enlightenment, Freemasons comprised an international
network of like-minded men, often meeting in secret for ritualistic programs at their lodges. They promoted the ideals of the Enlightenment and helped diffuse these values across Britain, France, and other countries. Freemasonry as
a systematic creed with its own myths, values and set of rituals originated in Scotland around 1600 and spread first
to England and then across the Continent in the eighteenth century. They fostered new codes of conduct – including
a communal understanding of liberty and equality inherited from guild sociability – "liberty, fraternity, and equality".
Scottish soldiers and Jacobite Scots brought to the Continent ideals of fraternity which reflected not the local
system of Scottish customs but the institutions and ideals originating in the English Revolution against royal absolutism.
Freemasonry was particularly prevalent in France – by 1789, there were perhaps as many as 100,000 French Masons,
making Freemasonry the most popular of all Enlightenment associations. The Freemasons displayed a passion for secrecy
and created new degrees and ceremonies. Similar societies, partially imitating Freemasonry, emerged in France, Germany,
Sweden and Russia. One example was the "Illuminati" founded in Bavaria in 1776, which was modeled on the Freemasons
but was never part of the movement. The Illuminati was an overtly political group, which most Masonic lodges decidedly
were not. Locke is known for his statement that individuals have a right to "Life, Liberty and Property", and his
belief that the natural right to property is derived from labor. Tutored by Locke, Anthony Ashley-Cooper, 3rd Earl
of Shaftesbury wrote in 1706: "There is a mighty Light which spreads its self over the world especially in those
two free Nations of England and Holland; on whom the Affairs of Europe now turn". Locke's theory of natural rights
has influenced many political documents, including the United States Declaration of Independence and the French National
Constituent Assembly's Declaration of the Rights of Man and of the Citizen. Frederick the Great, the king of Prussia
from 1740 to 1786, saw himself as a leader of the Enlightenment and patronized philosophers and scientists at his
court in Berlin. Voltaire, who had been imprisoned and maltreated by the French government, was eager to accept Frederick's
invitation to live at his palace. Frederick explained, "My principal occupation is to combat ignorance and prejudice
... to enlighten minds, cultivate morality, and to make people as happy as it suits human nature, and as the means
at my disposal permit." Several Americans, especially Benjamin Franklin and Thomas Jefferson, played a major role
in bringing Enlightenment ideas to the New World and in influencing British and French thinkers. Franklin was influential
for his political activism and for his advances in physics. The cultural exchange during the Age of Enlightenment
ran in both directions across the Atlantic. Thinkers such as Paine, Locke, and Rousseau all took Native American
cultural practices as examples of natural freedom. The Americans closely followed English and Scottish political
ideas, as well as some French thinkers such as Montesquieu. As deists, they were influenced by ideas of John Toland
(1670–1722) and Matthew Tindal (1656–1733). During the Enlightenment there was a great emphasis upon liberty, democracy,
republicanism and religious tolerance. Attempts to reconcile science and religion resulted in a widespread rejection
of prophecy, miracle and revealed religion in preference for Deism – especially by Thomas Paine in The Age of Reason
and by Thomas Jefferson in his short Jefferson Bible – from which all supernatural aspects were removed. The writers
of Grub Street, the Grub Street Hacks, were left feeling bitter about the relative success of the men of letters,
and found an outlet for their literature which was typified by the libelle. Written mostly in the form of pamphlets,
the libelles "slandered the court, the Church, the aristocracy, the academies, the salons, everything elevated and
respectable, including the monarchy itself". Le Gazetier cuirassé by Charles Théveneau de Morande was a prototype
of the genre. It was Grub Street literature that was most read by the public during the Enlightenment. More importantly,
according to Darnton, the Grub Street hacks inherited the "revolutionary spirit" once displayed by the philosophes,
and paved the way for the French Revolution by desacralizing figures of political, moral and religious authority
in France. Coffeehouses were especially important to the spread of knowledge during the Enlightenment because they
created a unique environment in which people from many different walks of life gathered and shared ideas. They were
frequently criticized by nobles who feared the possibility of an environment in which class and its accompanying
titles and privileges were disregarded. Such an environment was especially intimidating to monarchs who derived much
of their power from the disparity between classes of people. If classes were to join together under the influence
of Enlightenment thinking, they might recognize the all-encompassing oppression and abuses of their monarchs and,
because of their size, might be able to carry out successful revolts. Monarchs also resented the idea of their subjects
convening as one to discuss political matters, especially those concerning foreign affairs – rulers thought political
affairs to be their business only, a result of their supposed divine right to rule. A genre that greatly rose in
importance was that of scientific literature. Natural history in particular became increasingly popular among the
upper classes. Works of natural history include René-Antoine Ferchault de Réaumur's Histoire naturelle des insectes
and Jacques Gautier d'Agoty's La Myologie complète, ou description de tous les muscles du corps humain (1746). Outside
ancien régime France, natural history was an important part of medicine and industry, encompassing the fields of
botany, zoology, meteorology, hydrology and mineralogy. Students in Enlightenment universities and academies were
taught these subjects to prepare them for careers as diverse as medicine and theology. As shown by M D Eddy, natural
history in this context was a very middle class pursuit and operated as a fertile trading zone for the interdisciplinary
exchange of diverse scientific ideas. Masonic lodges created a private model for public affairs. They "reconstituted
the polity and established a constitutional form of self-government, complete with constitutions and laws, elections
and representatives." In other words, the micro-society set up within the lodges constituted a normative model for
society as a whole. This was especially true on the Continent: when the first lodges began to appear in the 1730s,
their embodiment of British values was often seen as threatening by state authorities. For example, the Parisian
lodge that met in the mid 1720s was composed of English Jacobite exiles. Furthermore, freemasons all across Europe
explicitly linked themselves to the Enlightenment as a whole. In French lodges, for example, the line "As the means
to be enlightened I search for the enlightened" was a part of their initiation rites. British lodges assigned themselves
the duty to "initiate the unenlightened". This did not necessarily link lodges to the irreligious, but neither did
this exclude them from the occasional heresy. In fact, many lodges praised the Grand Architect, the masonic terminology
for the deistic divine being who created a scientifically ordered universe. Coffeehouses represent a turning point
in history during which people discovered that they could have enjoyable social lives within their communities. Coffeeshops
became homes away from home for many who sought, for the first time, to engage in discourse with their neighbors
and discuss intriguing and thought-provoking matters, ranging from philosophy to politics. Coffeehouses
were essential to the Enlightenment, for they were centers of free-thinking and self-discovery. Although many coffeehouse
patrons were scholars, a great many were not. Coffeehouses attracted a diverse set of people, including not only the educated wealthy but also members of the bourgeoisie and the lower class. While it may seem positive that patrons – doctors, lawyers, merchants, and so on – represented almost all classes, the coffeehouse environment sparked fear in
those who sought to preserve class distinction. One of the most popular critiques of the coffeehouse claimed that
it "allowed promiscuous association among people from different rungs of the social ladder, from the artisan to the
aristocrat" and was therefore compared to Noah's Ark, receiving all types of animals, clean or unclean. This unique
culture served as a catalyst for journalism when Joseph Addison and Richard Steele recognized its potential as an
audience. Together, Steele and Addison published The Spectator (1711), a daily publication which aimed, through fictional
narrator Mr. Spectator, both to entertain and to provoke discussion regarding serious philosophical matters. The
major opponent of Freemasonry was the Roman Catholic Church, so that in countries with a large Catholic element,
such as France, Italy, Spain, and Mexico, much of the ferocity of the political battles involved the confrontation
between what Davies calls the reactionary Church and enlightened Freemasonry. Even in France, Masons did not act
as a group. American historians, while noting that Benjamin Franklin and George Washington were indeed active Masons,
have downplayed the importance of Freemasonry in causing the American Revolution because the Masonic order was non-political
and included both Patriots and their enemy the Loyalists. The Enlightenment – known in French as the Siècle des Lumières,
the Century of Enlightenment, and in German as the Aufklärung – was a philosophical movement which dominated the
world of ideas in Europe in the 18th century. The Enlightenment included a range of ideas centered on reason as the
primary source of authority and legitimacy, and came to advance ideals such as liberty, progress, tolerance, fraternity,
constitutional government and ending the abuses of the church and state. In France, the central doctrines of the
Lumières were individual liberty and religious tolerance, in opposition to the principle of absolute monarchy and
the fixed dogmas of the Roman Catholic Church. The Enlightenment was marked by increasing empiricism, scientific
rigor, and reductionism, along with increased questioning of religious orthodoxy. In the mid-18th century, Paris
became the center of an explosion of philosophic and scientific activity challenging traditional doctrines and dogmas.
The philosophic movement was led by Voltaire and Jean-Jacques Rousseau, who argued for a society based upon reason
rather than faith and Catholic doctrine, for a new civil order based on natural law, and for science based on experiments
and observation. The political philosopher Montesquieu introduced the idea of a separation of powers in a government,
a concept which was enthusiastically adopted by the authors of the United States Constitution. While the Philosophes
of the French Enlightenment were not revolutionaries, and many were members of the nobility, their ideas played an
important part in undermining the legitimacy of the Old Regime and shaping the French Revolution. The most influential
publication of the Enlightenment was the Encyclopédie, compiled by Denis Diderot and (until 1759) by Jean le Rond
d'Alembert and a team of 150 scientists and philosophers. It was published between 1751 and 1772 in thirty-five volumes,
and spread the ideas of the Enlightenment across Europe and beyond. Other landmark publications were the Dictionnaire
philosophique (Philosophical Dictionary, 1764) and Letters on the English (1733) written by Voltaire; Rousseau's
Discourse on Inequality (1754) and The Social Contract (1762); and Montesquieu's Spirit of the Laws (1748). The ideas
of the Enlightenment played a major role in inspiring the French Revolution, which began in 1789. After the Revolution,
the Enlightenment was followed by an opposing intellectual movement known as Romanticism. There were two distinct
lines of Enlightenment thought: the radical enlightenment, inspired by the philosophy of Spinoza, advocating democracy,
individual liberty, freedom of expression, and eradication of religious authority; and a second, more moderate variety,
supported by René Descartes, John Locke, Christian Wolff, Isaac Newton and others, which sought accommodation between
reform and the traditional systems of power and faith. Both lines of thought were opposed by the conservative Counter-Enlightenment.
The "Radical Enlightenment" promoted the concept of separating church and state, an idea often credited to English
philosopher John Locke (1632–1704). According to his principle of the social contract, Locke said that the government
lacked authority in the realm of individual conscience, as this was something rational people could not cede to the
government for it or others to control. For Locke, this created a natural right in the liberty of conscience, which
he said must therefore remain protected from any government authority. A number of novel ideas about religion developed
with the Enlightenment, including Deism and talk of atheism. Deism, according to Thomas Paine, is the simple belief
in God the Creator, with no reference to the Bible or any other miraculous source. Instead, the Deist relies solely
on personal reason to guide his creed, which was eminently agreeable to many thinkers of the time. Atheism was much
discussed, but there were few proponents. Wilson and Reill note that, "In fact, very few enlightened intellectuals,
even when they were vocal critics of Christianity, were true atheists. Rather, they were critics of orthodox belief,
wedded rather to skepticism, deism, vitalism, or perhaps pantheism." Some followed Pierre Bayle and argued that atheists
could indeed be moral men. Many others like Voltaire held that without belief in a God who punishes evil, the moral
order of society was undermined. That is, since atheists gave themselves to no Supreme Authority and no law, and
had no fear of eternal consequences, they were far more likely to disrupt society. Bayle (1647–1706) observed that
in his day, "prudent persons will always maintain an appearance of [religion]". He believed that even atheists could
hold concepts of honor and go beyond their own self-interest to create and interact in society. Locke said that if
there were no God and no divine law, the result would be moral anarchy: every individual "could have no law but his
own will, no end but himself. He would be a god to himself, and the satisfaction of his own will the sole measure
and end of all his actions". The word "public" implies the highest level of inclusivity – the public sphere by definition
should be open to all. However, this sphere was only public to relative degrees. Enlightenment thinkers frequently
contrasted their conception of the "public" with that of the people: Condorcet contrasted "opinion" with populace,
Marmontel "the opinion of men of letters" with "the opinion of the multitude," and d'Alembert the "truly enlightened
public" with "the blind and noisy multitude". Additionally, most institutions of the public sphere excluded both
women and the lower classes. Cross-class influences occurred through noble and lower class participation in areas
such as the coffeehouses and the Masonic lodges. The increased consumption of reading materials of all sorts was
one of the key features of the "social" Enlightenment. Developments in the Industrial Revolution allowed consumer
goods to be produced in greater quantities at lower prices, encouraging the spread of books, pamphlets, newspapers
and journals – "media of the transmission of ideas and attitudes". Commercial development likewise increased the
demand for information, along with rising populations and increased urbanisation. However, demand for reading material
extended outside of the realm of the commercial, and outside the realm of the upper and middle classes, as evidenced
by the Bibliothèque Bleue. Literacy rates are difficult to gauge, but in France at least, the rates doubled over
the course of the 18th century. Reflecting the decreasing influence of religion, the number of books about science
and art published in Paris doubled from 1720 to 1780, while the number of books about religion dropped to just one-tenth
of the total. As musicians depended more and more on public support, public concerts became increasingly popular
and helped supplement performers' and composers' incomes. The concerts also helped them to reach a wider audience.
Handel, for example, epitomized this with his highly public musical activities in London. He gained considerable
fame there with performances of his operas and oratorios. The music of Haydn and Mozart, with their Viennese Classical
styles, is usually regarded as being the most in line with the Enlightenment ideals. The history of Academies in
France during the Enlightenment begins with the Academy of Science, founded in 1666 in Paris. It was closely tied
to the French state, acting as an extension of a government seriously lacking in scientists. It helped promote and
organize new disciplines, and it trained new scientists. It also contributed to the enhancement of scientists' social
status, considering them to be the "most useful of all citizens". Academies demonstrate the rising interest in science
along with its increasing secularization, as evidenced by the small number of clerics who were members (13 percent).
The presence of the French academies in the public sphere cannot be attributed to their membership; although the
majority of their members were bourgeois, the exclusive institution was only open to elite Parisian scholars. They
perceived themselves as "interpreters of the sciences for the people". For example, it was with this in mind that
academicians took it upon themselves to disprove the popular pseudo-science of mesmerism. Many of the leading universities
associated with Enlightenment progressive principles were located in northern Europe, with the most renowned being
the universities of Leiden, Göttingen, Halle, Montpellier, Uppsala and Edinburgh. These universities, especially
Edinburgh, produced professors whose ideas had a significant impact on Britain's North American colonies and, later,
the American Republic. Within the natural sciences, Edinburgh's medical school also led the way in chemistry, anatomy and pharmacology. Elsewhere in Europe, the universities and schools of France and most of the continent were bastions of traditionalism and were not hospitable to the Enlightenment. In France, the major exception was the medical university
at Montpellier. The predominant educational psychology from the 1750s onward, especially in northern European countries, was associationism, the notion that the mind associates or dissociates ideas through repeated routines. In addition
to being conducive to Enlightenment ideologies of liberty, self-determination and personal responsibility, it offered
a practical theory of the mind that allowed teachers to transform longstanding forms of print and manuscript culture
into effective graphic tools of learning for the lower and middle orders of society. Children were taught to memorize
facts through oral and graphic methods that originated during the Renaissance. The prime example of reference works that systematized scientific knowledge in the age of Enlightenment was the universal encyclopedia rather than the technical dictionary. It was the goal of universal encyclopedias to record all human knowledge in a comprehensive
reference work. The most well-known of these works is Denis Diderot and Jean le Rond d'Alembert's Encyclopédie, ou
dictionnaire raisonné des sciences, des arts et des métiers. The work, which began publication in 1751, was composed
of thirty-five volumes and over 71 000 separate entries. A great number of the entries were dedicated to describing
the sciences and crafts in detail, and provided intellectuals across Europe with a high-quality survey of human knowledge.
In d'Alembert's Preliminary Discourse to the Encyclopedia of Diderot, he outlines the work's goal: to record the extent of human knowledge in the arts and sciences. Broadly speaking, Enlightenment science greatly valued empiricism
and rational thought, and was embedded with the Enlightenment ideal of advancement and progress. The study of science,
under the heading of natural philosophy, was divided into physics and a conglomerate grouping of chemistry and natural
history, which included anatomy, biology, geology, mineralogy, and zoology. As with most Enlightenment views, the
benefits of science were not seen universally; Rousseau criticized the sciences for distancing man from nature and
not operating to make people happier. Science during the Enlightenment was dominated by scientific societies and
academies, which had largely replaced universities as centres of scientific research and development. Societies and
academies were also the backbone of the maturation of the scientific profession. Another important development was
the popularization of science among an increasingly literate population. Philosophes introduced the public to many
scientific theories, most notably through the Encyclopédie and the popularization of Newtonianism by Voltaire and
Émilie du Châtelet. Some historians have marked the 18th century as a drab period in the history of science; however,
the century saw significant advancements in the practice of medicine, mathematics, and physics; the development of
biological taxonomy; a new understanding of magnetism and electricity; and the maturation of chemistry as a discipline,
which established the foundations of modern chemistry. Science came to play a leading role in Enlightenment discourse
and thought. Many Enlightenment writers and thinkers had backgrounds in the sciences and associated scientific advancement
with the overthrow of religion and traditional authority in favour of the development of free speech and thought.
Scientific progress during the Enlightenment included the discovery of carbon dioxide (fixed air) by the chemist
Joseph Black, the argument for deep time by the geologist James Hutton, and the invention of the steam engine by
James Watt. The experiments of Lavoisier were used to create the first modern chemical plants in Paris, and the experiments
of the Montgolfier Brothers enabled them to launch the first manned flight in a hot-air balloon on 21 November 1783,
from the Château de la Muette, near the Bois de Boulogne. Cesare Beccaria, a jurist and one of the great Enlightenment
writers, became famous for his masterpiece Of Crimes and Punishments (1764), which was later translated into 22 languages.
Another prominent intellectual was Francesco Mario Pagano, who wrote important studies such as Saggi Politici (Political
Essays, 1783), one of the major works of the Enlightenment in Naples, and Considerazioni sul processo criminale (Considerations
on the criminal trial, 1787), which established him as an international authority on criminal law. Scientific academies
and societies grew out of the Scientific Revolution as the creators of scientific knowledge in contrast to the scholasticism
of the university. During the Enlightenment, some societies created or retained links to universities. However, contemporary
sources distinguished universities from scientific societies by claiming that the university's utility was in the
transmission of knowledge, while societies functioned to create knowledge. As the role of universities in institutionalized
science began to diminish, learned societies became the cornerstone of organized science. Official scientific societies
were chartered by the state in order to provide technical expertise. Most societies were granted permission to oversee their own publications, control the election of new members, and administer the society. After 1700, a
tremendous number of official academies and societies were founded in Europe, and by 1789 there were over seventy
official scientific societies. In reference to this growth, Bernard de Fontenelle coined the term "the Age of Academies"
to describe the 18th century. The Enlightenment has been frequently linked to the French Revolution of 1789. One
view of the political changes that occurred during the Enlightenment is that the "consent of the governed" philosophy
as delineated by Locke in Two Treatises of Government (1689) represented a paradigm shift from the old governance
paradigm under feudalism known as the "divine right of kings". In this view, the revolutions of the late 1700s and early 1800s occurred because this shift in governing paradigm often could not be resolved peacefully, and therefore violent revolution was the result. Clearly, a governance philosophy in which the king was never wrong was in
direct conflict with one whereby citizens by natural law had to consent to the acts and rulings of their government.
Though much of Enlightenment political thought was dominated by social contract theorists, both David Hume and Adam
Ferguson criticized this camp. Hume's essay Of the Original Contract argues that governments derived from consent
are rarely seen, and civil government is grounded in a ruler's habitual authority and force. It is precisely because of the ruler's authority over and against the subject that the subject tacitly consents; Hume says that subjects would "never imagine that their consent made him sovereign"; rather, the authority did so. Similarly, Ferguson did
not believe citizens built the state, rather polities grew out of social development. In his 1767 An Essay on the
History of Civil Society, Ferguson uses the four stages of progress, a theory that was very popular in Scotland at
the time, to explain how humans advance from a hunting and gathering society to a commercial and civil society without
"signing" a social contract. In Russia, the government began to actively encourage the proliferation of arts and
sciences in the mid-18th century. This era produced the first Russian university, library, theatre, public museum,
and independent press. Like other enlightened despots, Catherine the Great played a key role in fostering the arts,
sciences, and education. She used her own interpretation of Enlightenment ideals, assisted by notable international
experts such as Voltaire (by correspondence) and, in residence, world class scientists such as Leonhard Euler and
Peter Simon Pallas. The Russian Enlightenment differed from its Western European counterpart in that it promoted
further modernization of all aspects of Russian life and was concerned with attacking the institution of serfdom
in Russia. The Russian Enlightenment centered on the individual instead of societal enlightenment and encouraged
the living of an enlightened life. Bertrand Russell saw the Enlightenment as a phase in a progressive development,
which began in antiquity, and that reason and challenges to the established order were constant ideals throughout
that time. Russell said that the Enlightenment was ultimately born out of the Protestant reaction against the Catholic
counter-reformation, and that philosophical views such as affinity for democracy against monarchy originated among
16th-century Protestants to justify their desire to break away from the Catholic Church. Though many of these philosophical
ideals were picked up by Catholics, Russell argues, by the 18th century the Enlightenment was the principal manifestation
of the schism that began with Martin Luther. Alexis de Tocqueville described the French Revolution as the inevitable
result of the radical opposition created in the 18th century between the monarchy and the men of letters of the Enlightenment.
These men of letters constituted a sort of "substitute aristocracy that was both all-powerful and without real power".
This illusory power came from the rise of "public opinion", born when absolutist centralization removed the nobility
and the bourgeoisie from the political sphere. The "literary politics" that resulted promoted a discourse of equality
and was hence in fundamental opposition to the monarchical regime. De Tocqueville "clearly designates ... the cultural
effects of transformation in the forms of the exercise of power". Nevertheless, it took another century before the cultural approach became central to the historiography, as typified by Robert Darnton's The Business of Enlightenment: A Publishing
History of the Encyclopédie, 1775–1800 (1979). Enlightenment historiography began in the period itself, from what
Enlightenment figures said about their work. A dominant element was the intellectual angle they took. D'Alembert's
Preliminary Discourse of l'Encyclopédie provides a history of the Enlightenment which comprises a chronological list
of developments in the realm of knowledge – of which the Encyclopédie forms the pinnacle. In 1783, Jewish philosopher
Moses Mendelssohn referred to Enlightenment as a process by which man was educated in the use of reason. Immanuel
Kant called Enlightenment "man's release from his self-incurred tutelage", tutelage being "man's inability to make
use of his understanding without direction from another". "For Kant, Enlightenment was mankind's final coming of
age, the emancipation of the human consciousness from an immature state of ignorance." The German scholar Ernst Cassirer
called the Enlightenment "a part and a special phase of that whole intellectual development through which modern
philosophic thought gained its characteristic self-confidence and self-consciousness". According to historian Roy
Porter, the liberation of the human mind from a dogmatic state of ignorance is the epitome of what the Age of Enlightenment
was trying to capture. These views on religious tolerance and the importance of individual conscience, along with
the social contract, became particularly influential in the American colonies and the drafting of the United States
Constitution. Thomas Jefferson called for a "wall of separation between church and state" at the federal level. He
previously had supported successful efforts to disestablish the Church of England in Virginia, and authored the Virginia
Statute for Religious Freedom. Jefferson's political ideals were greatly influenced by the writings of John Locke,
Francis Bacon, and Isaac Newton whom he considered the three greatest men that ever lived. In addition to debates
on religion, societies discussed issues such as politics and the role of women. It is important to note, however,
that the critical subject matter of these debates did not necessarily translate into opposition to the government.
In other words, the results of the debate quite frequently upheld the status quo. From a historical standpoint, one of the most important features of the debating societies was their openness to the public; women attended and even participated in almost every debating society, which were likewise open to all classes provided they could pay the
entrance fee. Once inside, spectators were able to participate in a largely egalitarian form of sociability that
helped spread Enlightenment ideas. Intellectuals such as Robert Darnton and Jürgen Habermas have focused on the social
conditions of the Enlightenment. Habermas described the creation of the "bourgeois public sphere" in 18th-century
Europe, containing the new venues and modes of communication allowing for rational exchange. Habermas said that the
public sphere was bourgeois, egalitarian, rational, and independent from the state, making it the ideal venue for
intellectuals to critically examine contemporary politics and society, away from the interference of established
authority. While the public sphere is generally an integral component of the social study of the Enlightenment, other
historians have questioned whether the public sphere had these characteristics. Jonathan Israel rejects the attempts
of postmodern and Marxian historians to understand the revolutionary ideas of the period purely as by-products of
social and economic transformations. He instead focuses on the history of ideas in the period from 1650 to the end
of the 18th century, and claims that it was the ideas themselves that caused the change that eventually led to the
revolutions of the latter half of the 18th century and the early 19th century. Israel argues that until the 1650s
Western civilization "was based on a largely shared core of faith, tradition and authority". One of the primary elements
of the culture of the Enlightenment was the rise of the public sphere, a "realm of communication marked by new arenas
of debate, more open and accessible forms of urban public space and sociability, and an explosion of print culture,"
in the late 17th century and 18th century. Elements of the public sphere included: it was egalitarian, it discussed
the domain of "common concern," and argument was founded on reason. Habermas uses the term "common concern" to describe
those areas of political/social knowledge and discussion that were previously the exclusive territory of the state
and religious authorities, now open to critical examination by the public sphere. The values of this bourgeois public
sphere included holding reason to be supreme, considering everything to be open to criticism (the public sphere is
critical), and the opposition of secrecy of all sorts. Across continental Europe, but in France especially, booksellers
and publishers had to negotiate censorship laws of varying strictness. The Encyclopédie, for example, narrowly escaped
seizure and had to be saved by Malesherbes, the man in charge of French censorship. Indeed, many publishing companies
were conveniently located outside France so as to avoid overzealous French censors. They would smuggle their merchandise
across the border, where it would then be transported to clandestine booksellers or small-time peddlers. The records
of clandestine booksellers may give a better representation of what literate Frenchmen might have truly read, since
their clandestine nature provided a less restrictive product choice. In one case, political books were the most popular
category, primarily libels and pamphlets. Readers were more interested in sensationalist stories about criminals
and political corruption than they were in political theory itself. The second most popular category, "general works"
(those books "that did not have a dominant motif and that contained something to offend almost everyone in authority")
demonstrated a high demand for generally low-brow subversive literature. However, these works never became part of the literary canon and are largely forgotten today as a result. A healthy, and legal, publishing industry existed throughout
Europe, although established publishers and booksellers occasionally ran afoul of the law. The Encyclopédie, for example, condemned not only by the King but also by Pope Clement XIII, nevertheless found its way into print with the help
of the aforementioned Malesherbes and creative use of French censorship law. But many works were sold without running
into any legal trouble at all. Borrowing records from libraries in England, Germany and North America indicate that
more than 70 percent of books borrowed were novels. Less than 1 percent of the books were of a religious nature,
indicating the general trend of declining religiosity. The Republic of Letters was the sum of a number of Enlightenment
ideals: an egalitarian realm governed by knowledge that could act across political boundaries and rival state power.
It was a forum that supported "free public examination of questions regarding religion or legislation". Immanuel
Kant considered written communication essential to his conception of the public sphere; once everyone was a part
of the "reading public", then society could be said to be enlightened. The people who participated in the Republic
of Letters, such as Diderot and Voltaire, are frequently known today as important Enlightenment figures. Indeed,
the men who wrote Diderot's Encyclopédie arguably formed a microcosm of the larger "republic". Jonathan Israel called
the journals the most influential cultural innovation of European intellectual culture. They shifted the attention
of the "cultivated public" away from established authorities to novelty and innovation, and promoted the "enlightened"
ideals of toleration and intellectual objectivity. Being a source of knowledge derived from science and reason, they
were an implicit critique of existing notions of universal truth monopolized by monarchies, parliaments, and religious
authorities. They also advanced a Christian enlightenment that upheld "the legitimacy of God-ordained authority"—the
Bible—in which there had to be agreement between the biblical and natural theories. Although dictionaries and encyclopedias date back to ancient times, 18th-century encyclopedic dictionaries moved from simply defining words in a long running list to far more detailed discussions of those words. The works were part of
an Enlightenment movement to systematize knowledge and provide education to a wider audience than the elite. As the
18th century progressed, the content of encyclopedias also changed according to readers' tastes. Volumes tended to
focus more strongly on secular affairs, particularly science and technology, rather than matters of theology. The massive Encyclopédie was arranged according to a "tree of knowledge." The tree reflected the marked division between the
arts and sciences, which was largely a result of the rise of empiricism. Both areas of knowledge were united by philosophy,
or the trunk of the tree of knowledge. The Enlightenment's desacralization of religion was pronounced in the tree's
design, particularly where theology accounted for a peripheral branch, with black magic as a close neighbour. As
the Encyclopédie gained popularity, it was published in quarto and octavo editions after 1777. The quarto and octavo
editions were much less expensive than previous editions, making the Encyclopédie more accessible to the non-elite.
Robert Darnton estimates that there were approximately 25 000 copies of the Encyclopédie in circulation throughout
France and Europe before the French Revolution. The extensive, yet affordable encyclopedia came to represent the
transmission of Enlightenment and scientific education to an expanding audience. In England, the Royal Society of
London also played a significant role in the public sphere and the spread of Enlightenment ideas. It was founded
by a group of independent scientists and given a royal charter in 1662. The Society played a large role in spreading
Robert Boyle's experimental philosophy around Europe, and acted as a clearinghouse for intellectual correspondence
and exchange. Boyle was "a founder of the experimental world in which scientists now live and operate," and his method
based knowledge on experimentation, which had to be witnessed to provide proper empirical legitimacy. This is where
the Royal Society came into play: witnessing had to be a "collective act", and the Royal Society's assembly rooms
were ideal locations for relatively public demonstrations. However, not just any witness was considered to be credible;
"Oxford professors were accounted more reliable witnesses than Oxfordshire peasants." Two factors were taken into
account: a witness's knowledge in the area and a witness's "moral constitution". In other words, only members of civil society were considered for Boyle's public. The debating societies discussed an extremely wide range of topics. Before the
Enlightenment, most intellectual debates revolved around "confessional" – that is, Catholic, Lutheran, Reformed (Calvinist),
or Anglican issues, and the main aim of these debates was to establish which bloc of faith ought to have the "monopoly
of truth and a God-given title to authority". With the Enlightenment, everything previously rooted in tradition was questioned and often replaced by new concepts in the light of philosophical reason. In the second half of the 17th century and during the 18th century, a "general process of rationalization and secularization set in," and confessional
disputes were reduced to a secondary status in favor of the "escalating contest between faith and incredulity". German
historian Reinhart Koselleck claimed that "On the Continent there were two social structures that left a decisive
imprint on the Age of Enlightenment: the Republic of Letters and the Masonic lodges." Scottish professor Thomas Munck
argues that "although the Masons did promote international and cross-social contacts which were essentially non-religious
and broadly in agreement with enlightened values, they can hardly be described as a major radical or reformist network
in their own right." Many of the Masons' values nevertheless greatly appealed to Enlightenment thinkers. Diderot discusses the link between Freemason ideals and the Enlightenment in D'Alembert's Dream, exploring masonry as a way of spreading Enlightenment beliefs. Historian Margaret Jacob stresses the importance of the Masons in indirectly
inspiring enlightened political thought. On the negative side, Daniel Roche contests claims that Masonry promoted
egalitarianism. He argues that the lodges only attracted men of similar social backgrounds. The presence of noble
women in the French "lodges of adoption" that formed in the 1780s was largely due to the close ties shared between
these lodges and aristocratic society. Along with secular matters, readers also favoured an alphabetical ordering
scheme over cumbersome works arranged along thematic lines. The historian Charles Porset, commenting on alphabetization,
has said that "as the zero degree of taxonomy, alphabetical order authorizes all reading strategies; in this respect
it could be considered an emblem of the Enlightenment." For Porset, the avoidance of thematic and hierarchical systems
thus allows free interpretation of the works and becomes an example of egalitarianism. Encyclopedias and dictionaries
also became more popular during the Age of Reason as the number of educated consumers who could afford such texts
began to multiply. In the latter half of the 18th century, the number of dictionaries and encyclopedias published per decade increased from 63 between 1760 and 1769 to approximately 148 in the decade preceding the French Revolution (1780–1789). Along with this growth in numbers, dictionaries and encyclopedias also grew in length, often running to multiple print editions that sometimes included supplements. One of the most important developments that the Enlightenment
era brought to the discipline of science was its popularization. An increasingly literate population seeking knowledge
and education in both the arts and the sciences drove the expansion of print culture and the dissemination of scientific
learning. This newly literate population owed much to a sharp rise in the availability of food, which enabled many people to rise out of poverty; instead of spending more on food, they had money for education. Popularization was generally
part of an overarching Enlightenment ideal that endeavoured "to make information available to the greatest number
of people." As public interest in natural philosophy grew during the 18th century, public lecture courses and the
publication of popular texts opened up new roads to money and fame for amateurs and scientists who remained on the
periphery of universities and academies. More formal works included explanations of scientific theories for individuals
lacking the educational background to comprehend the original scientific text. Sir Isaac Newton's celebrated Philosophiae
Naturalis Principia Mathematica was published in Latin and remained inaccessible to readers without education in
the classics until Enlightenment writers began to translate and analyze the text in the vernacular.
The game pad controllers were more or less copied directly from the Game & Watch machines, although the Famicom design team
originally wanted to use arcade-style joysticks, even taking apart ones from American game consoles to see how they
worked. However, it was eventually decided that children might step on joysticks left on the floor and their durability
was also questioned. Katsuya Nakawaka attached a Game & Watch D-pad to the Famicom prototype and found that it was easy to use and caused no discomfort. Ultimately, though, they did install a 15-pin expansion port on the front of the
console so that an arcade-style joystick could be used optionally. The controllers were hard-wired to the console
with no connectors for cost reasons. At June 1985's Consumer Electronics Show (CES), Nintendo unveiled the American
version of its Famicom. This was the system that would eventually be officially released as the Nintendo Entertainment System, colloquially known as the "NES". Nintendo seeded these first systems to limited American test markets starting in
New York City on October 18, 1985, following up with a full-fledged North American release of the console in February
of the following year. Nintendo released 17 launch titles: 10-Yard Fight, Baseball, Clu Clu Land, Duck Hunt, Excitebike,
Golf, Gyromite, Hogan’s Alley, Ice Climber, Kung Fu, Pinball, Soccer, Stack-Up, Tennis, Wild Gunman, Wrecking Crew,
and Super Mario Bros. Some variants of these launch games contained Famicom chips with an adapter inside the
cartridge so they would play on North American consoles, which is why the title screen of Gyromite has the Famicom
title "Robot Gyro" and the title screen of Stack-Up has the Famicom title "Robot Block". In June 1989, Nintendo of America's vice president of marketing, Peter Main, said that the Famicom was present in 37% of Japan's households.
By 1990, 30% of American households owned the NES, compared to 23% for all personal computers. By 1990, the NES had
outsold all previously released consoles worldwide. The slogan for this brand was "It can't be beaten". In Europe and South America, however, the NES was outsold by Sega's Master System, while the Nintendo
Entertainment System was not available in the Soviet Union. Japanese (Famicom) cartridges are shaped slightly differently.
While the NES used a 72-pin interface, the Famicom system used a 60-pin design. Unlike NES games, official Famicom
cartridges were produced in many colors of plastic. Adapters, similar in design to the popular accessory Game Genie,
are available that allow Famicom games to be played on an NES. In Japan, several companies manufactured the cartridges
for the Famicom. This allowed these companies to develop their own customized chips designed for specific purposes,
such as chips that increased the quality of sound in their games. In the longer run, however, with the NES near the end of its life, many third-party publishers such as Electronic Arts supported upstart competing consoles with less
strict licensing terms such as the Sega Genesis and then the PlayStation, which eroded and then took over Nintendo's
dominance in the home console market, respectively. Consoles from Nintendo's rivals in the post-SNES era had always
enjoyed much stronger third-party support than Nintendo, which relied more heavily on first-party games. As the Nintendo
Entertainment System grew in popularity and entered millions of American homes, some small video rental shops began
buying their own copies of NES games, and renting them out to customers for around the same price as a video cassette
rental for a few days. Nintendo received no profit from the practice beyond the initial sale of the game, and unlike
movie rentals, a newly released game could hit store shelves and be available for rent on the same day. Nintendo
took steps to stop game rentals, but did not take any formal legal action until Blockbuster Video began to make game
rentals a large-scale service. Nintendo claimed that allowing customers to rent games would significantly hurt sales
and drive up the cost of games. Nintendo lost the lawsuit, but did win on a claim of copyright infringement. Blockbuster
was banned from including original, copyrighted instruction booklets with their rented games. In compliance with
the ruling, Blockbuster produced their own short instructions—usually in the form of a small booklet, card, or label
stuck on the back of the rental box—that explained the game's basic premise and controls. Video rental shops continued
the practice of renting video games and still do today. In the UK, Italy and Australia, which share the PAL A region, two versions of the NES were released: the "Mattel Version" and the "NES Version". When the NES was first released in
those countries, it was distributed by Mattel and Nintendo decided to use a lockout chip specific to those countries,
different from the chip used in other European countries. When Nintendo took over European distribution in 1990,
they produced consoles that were then labelled "NES Version"; therefore, the only differences between the two are
the text on the front flap and texture on the top/bottom of the casing. Nintendo also made two turbo controllers
for the NES, called the NES Advantage and the NES Max. Both controllers had a turbo feature, in which one tap of a button registered as multiple taps, allowing players to shoot much faster in shooter games. The NES Advantage had two knobs that adjusted the firing rate of the turbo button from quick to turbo, as well as a "Slow" button that slowed down the game by rapidly pausing it. The "Slow" button did not work with games that had a pause menu or pause screen and could interfere with jumping and shooting. The NES Max also had the turbo feature,
but it was not adjustable, in contrast with the Advantage. It also did not have the "Slow" button. Its wing-like
shape made it easier to hold than the Advantage, and it also improved on the joystick. Turbo features were also included on the NES Satellite, the NES Four Score, and the U-Force. Other accessories include the Power Pad and the Power
Glove, which was featured in the movie The Wizard. In 1986, Nintendo released the Famicom Disk System (FDS) in Japan,
a type of floppy drive that uses a single-sided, proprietary 5 cm (2") disk and plugs into the cartridge port. It
contains RAM for the game to load into and an extra single-cycle wavetable-lookup sound chip. The disks were originally
obtained from kiosks in malls and other public places where buyers could select a title and have it written to the
disk. This process would cost less than cartridges and users could take the disk back to a vending booth and have
it rewritten with a new game. The disks were used both for storing the game and saving progress and total capacity
was 128k (64k per side). The system was originally targeted for release in the spring of 1985, but the release date
was pushed back. Accounts of the late-fall test marketing in the New York City area conflicted: some retailers reportedly stated that the system "failed miserably", while others described an excellent nine-week market test. On the strength of the New York City launch, Nintendo pressed ahead; the system was test-marketed further beginning in February 1986, with the nationwide release occurring in September 1986. In
Europe and Australia, the system was released to two separate marketing regions. One region consisted of most of
mainland Europe (excluding Italy), and distribution there was handled by a number of different companies, with Nintendo
responsible for most cartridge releases. Most of this region saw a 1986 release. Mattel handled distribution for
the other region, consisting of the United Kingdom, Ireland, Canada, Italy, Australia and New Zealand, starting the
following year. Not until the 1990s did Nintendo's newly created European branch direct distribution throughout Europe.
As the 1990s dawned, gamers predicted that competition from technologically superior systems such as the 16-bit Sega
Mega Drive/Genesis would mean the immediate end of the NES’s dominance. Instead, during the first year of Nintendo's
successor console the Super Famicom (named Super Nintendo Entertainment System outside Japan), the Famicom remained
the second highest-selling video game console in Japan, outselling the newer and more powerful NEC PC Engine and
Sega Mega Drive by a wide margin. The console remained popular in Japan and North America until late 1993, when the
demand for new NES software abruptly plummeted. The final Famicom game released in Japan is Takahashi Meijin no Bōken
Jima IV (Adventure Island IV), while in North America, Wario's Woods is the final licensed game. In the wake of ever
decreasing sales and the lack of new software titles, Nintendo of America officially discontinued the NES by 1995.
However, Nintendo kept producing new Famicom units in Japan until September 25, 2003, and continued to repair Famicom
consoles until October 31, 2007, attributing the discontinuation of support to insufficient supplies of parts. Nintendo's
near monopoly on the home video game market left it with a degree of influence over the industry. Unlike Atari, which
never actively courted third-party developers (and even went to court in an attempt to force Activision to cease
production of Atari 2600 games), Nintendo had anticipated and encouraged the involvement of third-party software
developers, albeit strictly on Nintendo's terms. Some of Nintendo's platform-control measures were adopted by later console manufacturers such as Sega, Sony, and Microsoft, although in less stringent form. Several companies, refusing
to pay the licensing fee or having been rejected by Nintendo, found ways to circumvent the console's authentication
system. Most of these companies created circuits that used a voltage spike to temporarily disable the 10NES chip
in the NES. A few unlicensed games released in Europe and Australia came in the form of a dongle that would be connected
to a licensed game, in order to use the licensed game's 10NES chip for authentication. In order to combat unlicensed
games, Nintendo of America threatened retailers who sold them with losing their supply of licensed titles. In addition,
multiple revisions were made to the NES PCBs to prevent these games from working. The Nintendo Entertainment System
(also abbreviated as NES) is an 8-bit home video game console that was developed and manufactured by Nintendo. It
was initially released in Japan as the Family Computer (Japanese: ファミリーコンピュータ, Hepburn: Famirī Konpyūta) (also known by the portmanteau abbreviation Famicom (ファミコン, Famikon) and abbreviated as FC) on July 15, 1983, and was later released in North America during 1985, in Europe during 1986, and in Australia in 1987. In South Korea, it was known as the Hyundai Comboy (현대 컴보이 Hyeondae Keomboi) and was distributed by Hyundai Electronics, now known as SK Hynix.
It was succeeded by the Super Nintendo Entertainment System. Problems with the 10NES lockout chip frequently resulted
in the console's most infamous problem: the blinking red power light, in which the system appears to turn itself
on and off repeatedly because the 10NES would reset the console once per second. The lockout chip required constant
communication with the chip in the game to work. Dirty, aging and bent connectors would often disrupt the communication,
resulting in the blink effect. Alternatively, the console would turn on but only show a solid white, gray, or green
screen. Users attempted to solve this problem by blowing air onto the cartridge connectors, inserting the cartridge
just far enough to get the ZIF to lower, licking the edge connector, slapping the side of the system after inserting
a cartridge, shifting the cartridge from side to side after insertion, pushing the ZIF up and down repeatedly, holding
the ZIF down lower than it should have been, and cleaning the connectors with alcohol. These attempted solutions
often became notable in their own right and are often remembered alongside the NES. Many of the most frequent attempts
to fix this problem instead ran the risk of damaging the cartridge and/or system. In 1989, Nintendo
released an official NES Cleaning Kit to help users clean malfunctioning cartridges and consoles. Subsequent plans
to market a Famicom console in North America featuring a keyboard, cassette data recorder, wireless joystick controller
and a special BASIC cartridge under the name "Nintendo Advanced Video System" likewise never materialized. By the
beginning of 1985, the Famicom had sold more than 2.5 million units in Japan and Nintendo soon announced plans to
release it in North America as the Advanced Video Entertainment System (AVS) that same year. The American video game
press was skeptical that the console could have any success in the region, with the March 1985 issue of Electronic
Games magazine stating that "the videogame market in America has virtually disappeared" and that "this could be a
miscalculation on Nintendo's part." Near the end of the NES's lifespan, upon the release of the AV Famicom and the
top-loading NES 2, the design of the game controllers was modified slightly. Though the original button layout was
retained, the redesigned device abandoned the brick shell in favor of a dog bone shape. In addition, the AV Famicom
joined its international counterpart and dropped the hardwired controllers in favor of detachable controller ports.
However, the controllers included with the Famicom AV had cables which were 90 cm (3 feet) long, as opposed to the
standard 180 cm (6 feet) of NES controllers. By 1988, industry observers stated that the NES's popularity had grown
so quickly that the market for Nintendo cartridges was larger than that for all home computer software. Compute!
reported in 1989 that Nintendo had sold seven million NES systems in 1988, almost as many as the number of Commodore
64s sold in its first five years. "Computer game makers [are] scared stiff", the magazine said, stating that Nintendo's
popularity caused most competitors to have poor sales during the previous Christmas and resulted in serious financial
problems for some. A variety of games for the FDS were released by Nintendo (including some like Super Mario Bros.
which had already been released on cartridge) and third party companies such as Konami and Taito. A few unlicensed
titles were made as well. However, its limitations became quickly apparent as larger ROM chips were introduced, allowing
cartridges with greater than 128k of space. More advanced memory management chips (MMC) soon appeared and the FDS
quickly became obsolete. Nintendo also charged developers considerable amounts of money to produce FDS games, and
many refused to develop for it, instead continuing to make cartridge titles. Many FDS disks have no dust covers (except
in some unlicensed and bootleg variants) and are easily prone to getting dirt on the media. In addition, the drive uses a belt that breaks frequently and requires invasive replacement. After only two years, the FDS was discontinued,
although vending booths remained in place until 1993 and Nintendo continued to service drives, and to rewrite and
offer replacement disks until 2003. Although most hardware clones were not produced under license by Nintendo, certain
companies were granted licenses to produce NES-compatible devices. The Sharp Corporation produced at least two such
clones: the Twin Famicom and the SHARP 19SC111 television. The Twin Famicom was compatible with both Famicom cartridges
and Famicom Disk System disks. It was available in two colors (red and black) and used hardwired controllers (as
did the original Famicom), but it featured a different case design. The SHARP 19SC111 television was a television
which included a built-in Famicom. A similar licensing deal was reached with Hyundai Electronics, who licensed the
system under the name Comboy in the South Korean market. This deal with Hyundai was made necessary because of the
South Korean government's wide ban on all Japanese "cultural products", which remained in effect until 1998 and ensured
that the only way Japanese products could legally enter the South Korean market was through licensing to a third-party
(non-Japanese) distributor (see also Japan–Korea disputes). The back of the cartridge bears a label with instructions
on handling. Production and software revision codes were imprinted as stamps on the back label to correspond with
the software version and producer. With the exception of The Legend of Zelda and Zelda II: The Adventure of Link,
manufactured in gold-plastic carts, all licensed NTSC and PAL cartridges are a standard shade of gray plastic. Unlicensed
carts were produced in black, robin egg blue, and gold, and are all slightly different shapes than standard NES cartridges.
Nintendo also produced yellow-plastic carts for internal use at Nintendo Service Centers, although these "test carts"
were never made available for purchase. All licensed US cartridges were made by Nintendo, Konami and Acclaim. For
promotion of DuckTales: Remastered, Capcom sent 150 limited-edition gold NES cartridges with the original game, featuring
the Remastered art as the sticker, to different gaming news agencies. The instruction label on the back included
the opening lyric from the show's theme song, "Life is like a hurricane". Nintendo was accused of antitrust behavior
because of the strict licensing requirements. The United States Department of Justice and several states began probing
Nintendo's business practices, leading to the involvement of Congress and the Federal Trade Commission (FTC). The
FTC conducted an extensive investigation which included interviewing hundreds of retailers. During the FTC probe,
Nintendo changed the terms of its publisher licensing agreements to eliminate the two-year rule and other restrictive
terms. Nintendo and the FTC settled the case in April 1991, with Nintendo required to send vouchers giving a $5 discount off a new game to every person who had purchased an NES title between June 1988 and December 1990. GameSpy remarked that Nintendo's punishment was particularly weak given the case's findings, although it has been speculated that
the FTC did not want to damage the video game industry in the United States. The NES can be emulated on many other
systems, most notably the PC. The first emulator was the Japanese-only Pasofami. It was soon followed in 1996 by iNES, which was available in English and was cross-platform. It was described as the first NES emulation software
that could be used by a non-expert. NESticle, a popular MS-DOS emulator, was released on April 3, 1997. There have
since been many other emulators. The Virtual Console for the Wii, Nintendo 3DS and Wii U also offers emulation of
many NES games. The Famicom contained no lockout hardware and, as a result, unlicensed cartridges (both legitimate
and bootleg) were extremely common throughout Japan and the Far East. The original NES (but not the top-loading NES-101)
contained the 10NES lockout chip, which significantly increased the challenges faced by unlicensed developers. Tinkerers
at home in later years discovered that disassembling the NES and cutting the fourth pin of the lockout chip would
change the chip’s mode of operation from "lock" to "key", removing all effects and greatly improving the console’s
ability to play legal games, as well as bootlegs and converted imports. NES consoles sold in different regions had
different lockout chips, so games marketed in one region would not work on consoles from another region. Known regions
are: USA/Canada (3193 lockout chip), most of Europe (3195), Asia (3196) and UK, Italy and Australia (3197). Since
two types of lockout chip were used in Europe, European NES game boxes often had an "A" or "B" letter on the front,
indicating whether the game is compatible with UK/Italian/Australian consoles (A), or the rest of Europe (B). Rest-of-Europe
games typically had text on the box stating "This game is not compatible with the Mattel or NES versions of the Nintendo
Entertainment System". Similarly, UK/Italy/Australia games stated "This game is only compatible with the Mattel or
NES versions of the Nintendo Entertainment System". Following a series of arcade game successes in the early 1980s,
Nintendo made plans to create a cartridge-based console called the Famicom. Masayuki Uemura designed the system.
Original plans called for an advanced 16-bit system which would function as a full-fledged computer with a keyboard
and floppy disk drive, but Nintendo president Hiroshi Yamauchi rejected this and instead decided to go for a cheaper,
more conventional cartridge-based game console as he felt that features such as keyboards and disks were intimidating
to non-technophiles. A test model was constructed in October 1982 to verify the functionality of the hardware, after
which work began on programming tools. Because 65xx CPUs had not been manufactured or sold in Japan up to that time,
no cross-development software was available and it had to be produced from scratch. Early Famicom games were written
on a system that ran on an NEC PC-8001 computer and LEDs on a grid were used with a digitizer to design graphics
as no software design tools for this purpose existed at that time. A number of special controllers
designed for use with specific games were released for the system, though very few such devices proved particularly
popular. Such devices included, but were not limited to, the Zapper (a light gun), the R.O.B., and the Power Pad.
The original Famicom featured a deepened DA-15 expansion port on the front of the unit, which was used to connect
most auxiliary devices. On the NES, these special controllers were generally connected to one of the two control
ports on the front of the console. The system's launch represented not only a new product, but also a reframing of
the severely damaged home video game market segment as a whole. The video game market crash of 1983 had occurred
in significant part due to a lack of consumer and retailer confidence in video games, which had in turn been due
partially to confusion and misrepresentation in the marketing of video games. Prior to the NES, the packaging of
many video games presented bombastic artwork which exaggerated the graphics of the actual game. In terms of product
identity, a single game such as Pac-Man would appear in many versions on many different game consoles and computers,
with large variations in graphics, sound, and general quality between the versions. By stark contrast, Nintendo's
marketing strategy aimed to regain consumer and retailer confidence, by delivering a singular platform whose technology
was not in need of heavy exaggeration and whose qualities were clearly defined. For its complete North American release,
the Nintendo Entertainment System was progressively released over the ensuing years in four different bundles: the
Deluxe Set, the Control Deck, the Action Set and the Power Set. The Deluxe Set, retailing at US$199.99 (equivalent
to $475 in 2016), included R.O.B., a light gun called the NES Zapper, two controllers, and two Game Paks: Gyromite,
and Duck Hunt. The Basic Set retailed at US$89.99 with no game, or at US$99.99 bundled with Super Mario Bros. The
Action Set, retailing in 1988 for US$149.99, came with the Control Deck, two game controllers, an NES Zapper, and
a dual Game Pak containing both Super Mario Bros. and Duck Hunt. In 1989, the Power Set included the console, two
game controllers, a NES Zapper, a Power Pad, and a triple Game Pak containing Super Mario Bros, Duck Hunt, and World
Class Track Meet. In 1990, a Sports Set bundle was released, including the console, an NES Satellite infrared wireless
multitap adapter, four game controllers, and a dual Game Pak containing Super Spike V'Ball and Nintendo World Cup.
Two more bundle packages were later released using the original model NES console. The Challenge Set of 1992 included
the console, two controllers, and a Super Mario Bros. 3 Game Pak for a retail price of US$89.99. The Basic Set, first released in 1987, was repackaged at a retail price of US$89.99. It included only the console and two controllers and was no longer bundled with a cartridge. Instead, it contained a book called the Official Nintendo Player's Guide, which
contained detailed information for every NES game made up to that point. The unlicensed clone market has flourished
following Nintendo's discontinuation of the NES. Some of the more exotic of these resulting systems have gone beyond
the functionality of the original hardware and have included variations such as a portable system with a color LCD
(e.g. PocketFami). Others have been produced with certain specialized markets in mind, such as an NES clone that
functions as a rather primitive personal computer, which includes a keyboard and basic word processing software.
These unauthorized clones have been helped by the invention of the so-called NES-on-a-chip. The NES was released
after the "video game crash" of the early 1980s, when many retailers and adults regarded electronic games as merely a passing fad, and many believed at first that the NES was another fad. Before the NES/Famicom, Nintendo
was known as a moderately successful Japanese toy and playing card manufacturer, and the popularity of the NES/Famicom
helped the company grow into an internationally recognized name almost synonymous with video games as Atari had been
during the 2600 era and set the stage for Japanese dominance of the video game industry. With the NES, Nintendo also
changed the relationship of console manufacturers and third-party software developers by restricting developers from
publishing and distributing software without licensed approval. This led to higher quality software titles, which
helped to change the attitude of a public that had grown weary from poorly produced titles for other game systems
of the day. Atari Games created a line of NES products under the name Tengen and took a different approach. The company
attempted to reverse engineer the lockout chip to develop its own "Rabbit" chip. However, Tengen also obtained a
description of the lockout chip from the United States Patent and Trademark Office by falsely claiming that it was
required to defend against present infringement claims in a legal case. Nintendo sued Tengen for copyright infringement,
which Tengen lost as it could not prove that the legally obtained patent documents had not been used by the reverse
engineering team. Tengen's antitrust claims against Nintendo were never finally decided. In December 1993, the Famicom
received a similar redesign. It also loads cartridges through a covered slot on the top of the unit and uses non-hardwired
controllers. Because HVC-101 used composite video output instead of being RF only like the HVC-001, Nintendo marketed
the newer model as the AV Famicom (AV仕様ファミコン, Eibui Shiyō Famikon). Since the new controllers do not have microphones
on them like the second controller on the original console, certain games such as the Disk System version of The
Legend of Zelda and Raid on Bungeling Bay will have certain tricks that cannot be replicated when played on an HVC-101
Famicom without a modded controller. However, the HVC-101 Famicom is compatible with most NES controllers due to
having the same controller port. Nintendo had also released a 3D-capable graphics headset; however, this peripheral was never released outside Japan. The NES uses a custom-made Picture Processing Unit (PPU) developed
by Ricoh. All variations of the PPU feature 2 kB of video RAM, 256 bytes of on-die "object attribute memory" (OAM)
to store the positions, colors, and tile indices of up to 64 sprites on the screen, and 28 bytes of on-die palette
RAM to allow selection of background and sprite colors. The console's 2 kB of onboard RAM may be used for tile maps
and attributes on the NES board and 8 kB of tile pattern ROM or RAM may be included on a cartridge. The system has
an available color palette of 48 colors and 6 grays. Up to 25 simultaneous colors may be used without writing new
values mid-frame: a background color, four sets of three tile colors and four sets of three sprite colors. The NES
palette is based on NTSC rather than RGB values. A total of 64 sprites may be displayed onscreen at a given time
without reloading sprites mid-screen. The standard display resolution of the NES is 256 horizontal pixels by 240
vertical pixels. Encouraged by these successes, Nintendo soon turned its attention to the North American market.
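The palette, sprite, and resolution figures quoted above for the PPU can be sanity-checked with simple arithmetic. The following is an illustrative sketch, not official Nintendo documentation; the variable names are ours:

```python
# Illustrative check of the NES PPU limits described in the text
# (names are ours, not from any official document).

# Simultaneous colors: one shared background color, plus four background
# palettes and four sprite palettes of three colors each.
shared_background = 1
palettes = 4 + 4              # four background + four sprite palettes
colors_per_palette = 3
simultaneous_colors = shared_background + palettes * colors_per_palette
assert simultaneous_colors == 25

# Object attribute memory: 64 sprites at 4 bytes each (y position, tile
# index, attributes, x position) account for the 256 bytes of on-die OAM.
oam_bytes = 64 * 4
assert oam_bytes == 256

# The 256x240-pixel display corresponds to a 32x30 grid of 8x8-pixel tiles.
assert (256 // 8, 240 // 8) == (32, 30)
```

This also illustrates why the limit is "up to 25" colors without mid-frame tricks: games could exceed it only by rewriting palette values while the frame was being drawn.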
Nintendo entered into negotiations with Atari to release the Famicom under Atari's name as the Nintendo Advanced Video Gaming System. The deal was set to be finalized and signed at the Summer Consumer Electronics Show in June
1983. However, Atari discovered at that show that its competitor Coleco was illegally demonstrating its Coleco Adam
computer with Nintendo's Donkey Kong game. This violation of Atari's exclusive license with Nintendo to publish the
game for its own computer systems delayed the implementation of Nintendo's game console marketing contract with Atari.
Atari's CEO Ray Kassar was fired the next month, so the deal went nowhere, and Nintendo decided to market its system
on its own. To differentiate Nintendo's new home platform from the early 1980s' common perception of a troubled
and shallow video game market, the company freshened its product nomenclature and positioning, and it established
a strict product approval and licensing policy. The overall system was referred to as an "Entertainment System" instead
of a "video game system", which was centered upon a machine called a "Control Deck" instead of a "console", and which
featured software cartridges called "Game Paks" instead of "video games". The 10NES lockout chip system acted as
a lock-and-key coupling of each Game Pak and Control Deck, deterring the copying or production of NES games which
had not first achieved Nintendo's licensed approval. The packaging of the launch lineup of NES games bore pictures
of a very close representation of the actual onscreen graphics of the game, which were of sufficiently recognizable
quality on their own. Symbols on the launch games' packaging clearly indicated the genre of the game, in order to
reduce consumer confusion. A 'seal of quality' was printed on all appropriately licensed game and accessory packaging.
The initial seal stated, "This seal is your assurance that Nintendo has approved and guaranteed the quality of this
product". This text was later changed to "Official Nintendo Seal of Quality". The original model Famicom featured
two game controllers, both of which were hardwired to the back of the console. The second controller lacked the START
and SELECT buttons, but featured a small microphone. Relatively few games made use of this feature. The earliest
produced Famicom units initially had square A and B buttons. This was changed to the circular designs because of
the square buttons being caught in the controller casing when pressed down and glitches within the hardware causing
the system to freeze occasionally while playing a game. Nintendo was not as restrictive as Sega, which did not permit
third-party publishing until Mediagenic in late summer 1988. Nintendo's intention, however, was to reserve a large
part of NES game revenue for itself. Nintendo required that they be the sole manufacturer of all cartridges, and
that the publisher had to pay in full before the cartridges for that game were produced. Cartridges could not be returned
to Nintendo, so publishers assumed all the risk. As a result, some publishers lost more money due to distress sales
of remaining inventory at the end of the NES era than they ever earned in profits from sales of the games. Because
Nintendo controlled the production of all cartridges, it was able to enforce strict rules on its third-party developers,
which were required to sign a contract with Nintendo obligating them to develop exclusively for the system, order at least 10,000 cartridges, and make only five games per year. A 1988 shortage of DRAM and ROM chips
also reportedly caused Nintendo to only permit 25% of publishers' requests for cartridges. This was an average figure,
with some publishers receiving much higher amounts and others almost none. GameSpy noted that Nintendo's "iron-clad
terms" made the company many enemies during the 1980s. Some developers tried to circumvent the five game limit by
creating additional company brands like Konami's Ultra Games label; others tried circumventing the 10NES chip. When
Nintendo released the NES in the US, the design styling was deliberately different from that of other game consoles.
Nintendo wanted to distinguish its product from those of competitors and to avoid the generally poor reputation that
game consoles had acquired following the video game crash of 1983. One result of this philosophy was to disguise
the cartridge slot design as a front-loading zero insertion force (ZIF) cartridge socket, designed to resemble the
front-loading mechanism of a VCR. The newly designed connector worked quite well when both the connector and the
cartridges were clean and the pins on the connector were new. Unfortunately, the ZIF connector was not truly zero
insertion force. When a user inserted the cartridge into the NES, the force of pressing the cartridge down and into
place bent the contact pins slightly, as well as pressing the cartridge’s ROM board back into the cartridge itself.
Frequent insertion and removal of cartridges caused the pins to wear out from repeated usage over the years and the
ZIF design proved more prone to interference by dirt and dust than an industry-standard card edge connector. These
design issues were not alleviated by Nintendo’s choice of materials; the console slot nickel connector springs would
wear due to design and the game cartridge copper connectors were also prone to tarnishing. Many players would try
to alleviate issues in the game caused by this corrosion by blowing into the cartridges, then reinserting them, which
actually hurt the copper connectors by speeding up the tarnishing. In response to these hardware flaws, "Nintendo
Authorized Repair Centers" sprang up across the U.S. According to Nintendo, the authorization program was designed
to ensure that the machines were properly repaired. Nintendo would ship the necessary replacement parts only to shops
that had enrolled in the authorization program. In practice, the authorization process consisted of nothing more
than paying a fee to Nintendo for the privilege. In a recent trend, many sites have sprung up to offer Nintendo repair
parts, guides, and services that replace those formerly offered by the authorized repair centers. Video output connections
varied from one model of the console to the next. The original HVC-001 model of the Family Computer featured only
radio frequency (RF) modulator output. When the console was released in North America and Europe, support for composite
video through RCA connectors was added in addition to the RF modulator. The HVC-101 model of the Famicom dropped
the RF modulator entirely and adopted composite video output via a proprietary 12-pin "multi-out" connector first
introduced for the Super Famicom/Super Nintendo Entertainment System. Conversely, the North American re-released
NES-101 model most closely resembled the original HVC-001 model Famicom, in that it featured RF modulator output
only. Finally, the PlayChoice-10 utilized an inverted RGB video output. The NES dropped the hardwired controllers,
instead featuring two custom 7-pin ports on the front of the console. Also in contrast to the Famicom, the controllers
included with the NES were identical to each other—the second controller lacked the microphone that was present on
the Famicom model and possessed the same START and SELECT buttons as the primary controller. Some NES localizations
of games, such as The Legend of Zelda, which required the use of the Famicom microphone in order to kill certain
enemies, suffered from the lack of hardware to do so. A thriving market of unlicensed NES hardware clones emerged
during the climax of the console's popularity. Initially, such clones were popular in markets where Nintendo never
issued a legitimate version of the console. In particular, the Dendy (Russian: Де́нди), an unlicensed hardware clone
produced in Taiwan and sold in the former Soviet Union, emerged as the most popular video game console of its time
in that setting and it enjoyed a degree of fame roughly equivalent to that experienced by the NES/Famicom in North
America and Japan. A Famicom clone was marketed in Argentina under the name of "Family Game", resembling the original
hardware design. The Micro Genius (Simplified Chinese: 小天才) was marketed in Southeast Asia as an alternative to the
Famicom; Samurai was the popular PAL alternative to the NES; and in Central Europe, especially Poland, the Pegasus
was available. Samurai was also available in India in the early 1990s, marking the first instance of console gaming in
India. The NES Test Station's front features a Game Pak slot and connectors for testing various components (AC adapter,
RF switch, Audio/Video cable, NES Control Deck, accessories and games), with a centrally-located selector knob to
choose which component to test. The unit itself weighs approximately 11.7 pounds without a TV. It connects to a television
via a combined A/V and RF Switch cable. By actuating the green button, a user can toggle between an A/V Cable or
RF Switch connection. The television it is connected to (typically 11" to 14") is meant to be placed atop it.
Saint Athanasius of Alexandria (/ˌæθəˈneɪʃəs/; Greek: Ἀθανάσιος Ἀλεξανδρείας, Athanásios Alexandrías; c. 296–298 – 2 May
373), also called Athanasius the Great, Athanasius the Confessor or, primarily in the Coptic Orthodox Church, Athanasius
the Apostolic, was the twentieth bishop of Alexandria (as Athanasius I). His episcopate lasted 45 years (c. 8 June
328 – 2 May 373), of which over 17 were spent in five exiles ordered by four different Roman emperors. Athanasius
is a renowned Christian theologian, a Church Father, the chief defender of Trinitarianism against Arianism, and a
noted Egyptian leader of the fourth century. T. Gilmartin (Professor of History, Maynooth, 1890) writes in Church
History, Vol. 1, Ch XVII: On the death of Alexander, five months after the termination of the Council of Nice, Athanasius
was unanimously elected to fill the vacant see. He was most unwilling to accept the dignity, for he clearly foresaw
the difficulties in which it would involve him. The clergy and people were determined to have him as their bishop,
Patriarch of Alexandria, and refused to accept any excuses. He at length consented to accept a responsibility that
he sought in vain to escape, and was consecrated in 326, when he was about thirty years of age. Through the influence
of the Eusebian faction at Constantinople, an Arian bishop, George of Cappadocia, was now appointed to rule the see
of Alexandria. Athanasius, after remaining some days in the neighbourhood of the city, finally withdrew into the
desert of Upper Egypt, where he remained for a period of six years, living the life of the monks, devoting himself
to the composition of a group of writings; "Apology to Constantius", the "Apology for his Flight", the "Letter to
the Monks", and the "History of the Arians". The Arians no longer presented an unbroken front to their orthodox opponents.
The Emperor Constantius, who had been the cause of so much trouble, died 4 November, 361 and was succeeded by Julian.
The proclamation of the new prince's accession was the signal for a pagan outbreak against the still dominant Arian
faction in Alexandria. George, the usurping Bishop, was flung into prison and murdered. An obscure presbyter of the
name of Pistus was immediately chosen by the Arians to succeed him, when fresh news arrived that filled the orthodox
party with hope. An edict had been put forth by Julian permitting the exiled bishops of the "Galileans" to return
to their "towns and provinces". Athanasius received a summons from his own flock, and he accordingly re-entered his
episcopal capital on 22 February 362. The accession of Valens gave a fresh lease of life to the Arian party. He
issued a decree banishing the bishops who had been deposed by Constantius, but who had been permitted by Jovian to
return to their sees. The news created the greatest consternation in the city of Alexandria itself, and the prefect,
in order to prevent a serious outbreak, gave public assurance that the very special case of Athanasius would be laid
before the emperor. But the saint seems to have divined what was preparing in secret against him. He quietly withdrew
from Alexandria, 5 October, and took up his abode in a country house outside the city. Valens, who seems to have
sincerely dreaded the possible consequences of another popular outbreak, within a few weeks issued orders allowing
Athanasius to return to his episcopal see. His biography of Anthony the Great, entitled Life of Antony (Βίος καὶ Πολιτεία Πατρὸς Ἀντωνίου, Vita Antonii), became his most widely read work. Translated into several languages, it played an
important role in the spreading of the ascetic ideal in Eastern and Western Christianity. Depicting Anthony as an
illiterate and holy man who through his existence in a primordial landscape has an absolute connection to the divine
truth, the biography also resembles the life of his biographer Athanasius. It later served as an inspiration to Christian
monastics in both the East and the West. The so-called Athanasian Creed dates from well after Athanasius's death
and draws upon the phraseology of Augustine's De trinitate. Conflict with Arius and Arianism as well as successive
Roman emperors shaped Athanasius's career. In 325, at the age of 27, Athanasius began his leading role against the
Arians as his bishop's assistant during the First Council of Nicaea. Roman emperor Constantine the Great had convened
the council in May–August 325 to address the Arian position that the Son of God, Jesus of Nazareth, is of a distinct
substance from the Father. Three years after that council, Athanasius succeeded his mentor as archbishop of Alexandria.
In addition to the conflict with the Arians (including powerful and influential Arian churchmen led by Eusebius of
Nicomedia), he struggled against the Emperors Constantine, Constantius II, Julian the Apostate and Valens. He was
known as "Athanasius Contra Mundum" (Latin for Athanasius Against the World). Rufinus relates a story that as Bishop
Alexander stood by a window, he watched boys playing on the seashore below, imitating the ritual of Christian baptism.
He sent for the children and discovered that one of the boys (Athanasius) had acted as bishop. After questioning
Athanasius, Bishop Alexander informed him that the baptisms were genuine, as both the form and matter of the sacrament
had been performed through the recitation of the correct words and the administration of water, and that he must
not continue to do this as those baptized had not been properly catechized. He invited Athanasius and his playfellows
to prepare for clerical careers. Athanasius recounts being a student, as well as being educated by the Martyrs of
the Great (tenth) and last persecution of Christianity by pagan Rome.[citation needed] This persecution was most
severe in the East, particularly in Egypt and Palestine. Peter of Alexandria, the 17th archbishop of Alexandria,
was martyred in 311 in the closing days of that persecution, and may have been one of those teachers. His successor
as bishop of Alexandria, Alexander of Alexandria (312–328) was an Origenist as well as a documented mentor of Athanasius.
According to Sozomen, Bishop Alexander "invited Athanasius to be his commensal and secretary. He had been well educated,
and was versed in grammar and rhetoric, and had already, while still a young man, and before reaching the episcopate,
given proof to those who dwelt with him of his wisdom and acumen". Athanasius's earliest work, Against the Heathen
– On the Incarnation (written before 319), bears traces of Origenist Alexandrian thought (such as repeatedly quoting Plato and using a definition from Aristotle's Organon) but in an orthodox way. Athanasius was also familiar with the
theories of various philosophical schools, and in particular with the developments of Neo-Platonism. Ultimately,
Athanasius would modify the philosophical thought of the School of Alexandria away from the Origenist principles
such as the "entirely allegorical interpretation of the text". Still, in later works, Athanasius quotes Homer more
than once (Hist. Ar. 68, Orat. iv. 29). In his letter to Emperor Constantius, he presents a defense of himself bearing
unmistakable traces of a study of Demosthenes de Corona. In about 319, when Athanasius was a deacon, a presbyter
named Arius came into a direct conflict with Alexander of Alexandria. It appears that Arius reproached Alexander
for what he felt were misguided or heretical teachings being taught by the bishop. Arius's theological views appear
to have been firmly rooted in Alexandrian Christianity, and his Christological views were certainly not radical at
all. He embraced a subordinationist Christology which taught that Christ was the divine Son (Logos) of God, made,
not begotten, heavily influenced by Alexandrian thinkers like Origen, and which was a common Christological view
in Alexandria at the time. Support for Arius from powerful bishops like Eusebius of Caesarea and Eusebius of Nicomedia further illustrates how Arius's subordinationist Christology was shared by other Christians in the Empire. Arius was
subsequently excommunicated by Alexander, and he would begin to elicit the support of many bishops who agreed with
his position. Athanasius's episcopate began on 9 May 328 as the Alexandrian Council elected Athanasius to succeed
the aged Alexander. That council also denounced various heresies and schisms, many of which continued to preoccupy
his 45-year-long episcopate (c. 8 June 328 – 2 May 373). Patriarch Athanasius spent over 17 years in five exiles
ordered by four different Roman Emperors, not counting approximately six more incidents in which Athanasius fled
Alexandria to escape people seeking to take his life. This gave rise to the expression "Athanasius contra mundum"
or "Athanasius against the world". However, during his first years as bishop, Athanasius visited the churches of
his territory, which at that time included all of Egypt and Libya. He established contacts with the hermits and monks
of the desert, including Pachomius, which proved very valuable to him over the years. Shortly thereafter, Athanasius
became occupied with the theological disputes against Arians within the Byzantine Empire that would occupy much of
his life. Early in the year 343 Athanasius travelled, via Rome, from Alexandria to Gaul (nowadays Belgium, Holland and surrounding areas), where Hosius of Cordoba, the great champion of orthodoxy in the West, was Bishop. The two set out together for Sardica, where a full Council of the Church had been convened in deference to the Roman pontiff's wishes; the travel was a mammoth task in itself. At this great gathering of prelates, the case of Athanasius was taken up once more: he was formally questioned over misdemeanours and even the murder of a man called Arsenius, whose body he was alleged to have used for magic, an absurd charge. [The
Council was convoked for the purpose of inquiring into the charges against Athanasius and other bishops, on account
of which they were deposed from their sees by the Semi-Arian Synod of Antioch (341), and went into exile. It was
called, according to Socrates (E. H. ii. 20), by the two Emperors, Constans and Constantius, but according to Baronius by Pope Julius (337–352) (Ad an. 343). One hundred and seventy-six attended. Eusebian bishops objected to the admission of Athanasius and other deposed bishops to the Council, except as accused persons to answer the charges brought against them. Their objections were overridden by the orthodox bishops, about a hundred in number, who were the majority. The Eusebians, seeing they had no chance of having their views carried, retired to Philippopolis in Thrace (Thracia), where they held an opposition council under the presidency of the Patriarch of Antioch, and confirmed the decrees of the Synod of Antioch.] Once more, at the Council of Sardica, his innocence was reaffirmed. Two conciliar
letters were prepared, one to the clergy and faithful of Alexandria, the other to the bishops of Egypt and Libya,
in which the will of the Council was made known. Meanwhile, the Eusebian party had gone to Philippopolis, where they
issued an anathema against Athanasius and his supporters. The persecution against the orthodox party broke out with
renewed vigour, and Constantius was induced to prepare drastic measures against Athanasius and the priests who were
devoted to him. Orders were given that if the Saint attempted to re-enter his see, he should be put to death. Athanasius,
accordingly, withdrew from Sardica to Naissus in Mysia, where he celebrated the Easter festival of the year 344.
It was Hosius who presided over the Council of Sardica, as he did for the First Council of Nicaea, which like the
341 synod, found Athanasius innocent. He celebrated his last Easter in exile in Aquileia in April 345, received
by bishop Fortunatianus. Pope Julius had died in April, 352, and was succeeded by Liberius. For two years Liberius
had been favourable to the cause of Athanasius; but driven at last into exile, he was induced to sign an ambiguous
formula, from which the great Nicene text, the "homoousion", had been studiously omitted. In 355 a council was held
at Milan, where in spite of the vigorous opposition of a handful of loyal prelates among the Western bishops, a fourth
condemnation of Athanasius was announced to the world. With his friends scattered, the saintly Hosius in exile, and
Pope Liberius denounced as acquiescing in Arian formularies, Athanasius could hardly hope to escape. On the night
of 8 February, 356, while engaged in services in the Church of St. Thomas, a band of armed men burst in to secure
his arrest. It was the beginning of his third exile. Constantius, renewing his previous policies favoring the Arians,
banished Athanasius from Alexandria once again. This was followed, in 356, by an attempt to arrest Athanasius during
a vigil service. Athanasius fled to Upper Egypt, where he stayed in several monasteries and other houses. During
this period, Athanasius completed his work Four Orations against the Arians and defended his own recent conduct in
the Apology to Constantius and Apology for His Flight. Constantius's persistence in his opposition to Athanasius,
combined with reports Athanasius received about the persecution of non-Arians by the new Arian bishop George of Laodicea,
prompted Athanasius to write his more emotional History of the Arians, in which he described Constantius as a precursor
of the Antichrist. T. Gilmartin (Professor of History, Maynooth, 1890) writes in Church History, Vol. 1, Ch XVII:
The Arians sought the approval of an Ecumenical Council, and to this end sought to hold two councils. Constantius summoned the bishops of the East to meet at Seleucia in Isauria, and those of the West at Rimini in Italy. A preliminary conference
was held by the Arians at Sirmium, to agree a formula of faith. A "Homoean" creed was adopted, declaring the Son to be "like the Father". The two councils met in the autumn of 359. At Seleucia, one hundred and fifty bishops attended, of whom one hundred and five were semi-Arian. The semi-Arians refused to accept anything less than the "Homoiousion" (see: Homoiousian) formulary of faith, and the Imperial Prefect was obliged to disband the council without agreement on any creed. Acacius, the leader of the "Homoean" party, went to Constantinople, where the Sirmian formulary of faith was approved by the "Home Synod" (consisting of those bishops who happened to be present at the Court at the time), and a decree of deposition was issued against the leaders of the semi-Arians. At Rimini, over four hundred bishops attended, of whom eighty were Arian; the rest were orthodox. The orthodox fathers refused to accept any creed but the Nicene, while the others were equally in favour of the Sirmian. Each party sent a deputation to the Emperor to say there was no probability of agreement, and asked for the bishops to be allowed to return to their dioceses. In order to wear down the orthodox bishops (Sulpicius Severus says), Constantius delayed his answer for several months, and finally prevailed on them to accept the Sirmian creed.
It was after this Council that Jerome said: "...the whole world groaned in astonishment to find itself Arian." With
characteristic energy he set to work to re-establish the somewhat shattered fortunes of the orthodox party and to
purge the theological atmosphere of uncertainty. To clear up the misunderstandings that had arisen in the course
of the previous years, an attempt was made to determine still further the significance of the Nicene formularies.
In the meanwhile, Julian, who seems to have become suddenly jealous of the influence that Athanasius was exercising
at Alexandria, addressed an order to Ecdicius, the Prefect of Egypt, peremptorily commanding the expulsion of the
restored primate, on the ground that he had never been included in the imperial act of clemency. The edict was communicated
to the bishop by Pythicodorus Trico, who, though described in the "Chronicon Athanasianum" (XXXV) as a "philosopher",
seems to have behaved with brutal insolence. On 23 October the people gathered about the proscribed bishop to protest
against the emperor's decree; but Athanasius urged them to submit, consoling them with the promise that his absence
would be of short duration. Athanasius also wrote a two-part Against the Heathen and The Incarnation of the Word
of God. Completed probably early in his life, before the Arian controversy, they constitute the first classic work
of developed Orthodox theology. In the first part, Athanasius attacks several pagan practices and beliefs. The second
part presents teachings on the redemption. Also in these books, Athanasius put forward the belief that the Son of
God, the eternal Word through whom God created the world, entered that world in human form to lead men back into
the harmony from which they had earlier fallen away. In the light of Mother F. A. Forbes's research and her reference to Pope Saint Gregory's writings, it would appear that Athanasius was constrained to be Bishop. She writes that when
the Patriarch Alexander was on his death-bed he called Athanasius, who fled fearing he would be constrained to be
made Bishop. "When the Bishops of the Church assembled to elect their new Patriarch, the whole Catholic population
surrounded the church, holding up their hands to Heaven and crying; "Give us Athanasius!" The Bishops had nothing
better. Athanasius was thus elected, as Gregory tells us..." (Pope Gregory I would have had full access to the Vatican Archives). Throughout most of his career, Athanasius had many detractors. Classical scholar Timothy Barnes relates
contemporary allegations against Athanasius: from defiling an altar, to selling Church grain that had been meant
to feed the poor for his own personal gain, and even violence and murder to suppress dissent. Athanasius used "Arian"
to describe both followers of Arius, and as a derogatory polemical term for Christians who disagreed with his formulation
of the Trinity. Athanasius called many of his opponents "Arian", except for Meletius. Nonetheless, within a few years
of his death, Gregory of Nazianzus called him the "Pillar of the Church". His writings were well regarded by all
Church fathers who followed, in both the West and the East, who noted their rich devotion to the Word-become-man,
great pastoral concern, and profound interest in monasticism. Athanasius is counted as one of the four great Eastern
Doctors of the Church in the Roman Catholic Church. In the Eastern Orthodox Church, he is labeled the "Father of
Orthodoxy". Some Protestants label him "Father of the Canon". Athanasius is venerated as a Christian saint, whose
feast day is 2 May in Western Christianity, 15 May in the Coptic Orthodox Church, and 18 January in the other Eastern
Orthodox Churches. He is venerated by the Oriental and Eastern Orthodox Churches, the Roman Catholic Church, the
Lutherans, and the Anglican Communion. Alexandria was the most important trade center in the whole empire during
Athanasius's boyhood. Intellectually, morally, and politically, it epitomized the ethnically diverse Graeco-Roman world, even more than Rome or Constantinople, Antioch or Marseilles. Its famous catechetical school, while sacrificing
none of its famous passion for orthodoxy since the days of Pantaenus, Clement of Alexandria, Origen of Alexandria,
Dionysius and Theognostus, had begun to take on an almost secular character in the comprehensiveness of its interests,
and had counted influential pagans among its serious auditors. However, Cornelius Clifford places his birth no earlier than 296 and no later than 298, based on the fact that Athanasius indicates no first-hand recollection of the Maximian
persecution of 303, which he suggests Athanasius would have remembered if he had been ten years old at the time.
Secondly, the Festal Epistles state that the Arians had accused Athanasius, among other charges, of not having yet
attained the canonical age (30) and thus could not have been properly ordained as Patriarch of Alexandria in 328.
The accusation must have seemed plausible. The Orthodox Church places his year of birth around 297. Athanasius's
first problem lay with Meletius of Lycopolis and his followers, who had failed to abide by the First Council of Nicaea.
That council also anathematized Arius. Accused of mistreating Arians and Meletians, Athanasius answered those charges
at a gathering of bishops in Tyre, the First Synod of Tyre, in 335. There, Eusebius of Nicomedia and other supporters
of Arius deposed Athanasius. On 6 November, both sides of the dispute met with Emperor Constantine I in Constantinople.
At that meeting, the Arians claimed Athanasius would try to cut off essential Egyptian grain supplies to Constantinople.
He was found guilty, and sent into exile to Augusta Treverorum in Gaul (now Trier in Germany). Athanasius knew Greek
and admitted not knowing Hebrew [see, e.g., the 39th Festal Letter of St. Athan.]. The Old Testament passages he
quotes frequently come from the Septuagint Greek translation. Only rarely did he use other Greek versions (to Aquila
once in the Ecthesis, to other versions once or twice on the Psalms), and his knowledge of the Old Testament was
limited to the Septuagint. Nonetheless, during his later exile, with no access to a copy of the Scriptures, Athanasius
could quote from memory every verse in the Old Testament with a supposed reference to the Trinity without missing
any.[citation needed] The combination of Scriptural study and of Greek learning was characteristic of the famous
Alexandrian School. After the death of the replacement bishop Gregory in 345, Constans used his influence to allow
Athanasius to return to Alexandria in October 345, amidst the enthusiastic demonstrations of the populace. This began
a "golden decade" of peace and prosperity, during which time Athanasius assembled several documents relating to his
exiles and returns from exile in the Apology Against the Arians. However, upon Constans's death in 350, another civil
war broke out, which left pro-Arian Constantius as sole emperor. An Alexandria local council in 350 replaced (or
reaffirmed) Athanasius in his see. Constantius ordered Liberius into exile in 356, giving him three days to comply; he was banished to Beroea in Thrace. Constantius also sent expensive presents if Liberius would accept the Arian position, but they were refused. He sent him,
indeed, five hundred pieces of gold "to bear his charges", but Liberius refused them, saying he might bestow them
on his flatterers; as he did also a like present from the empress, bidding the messenger learn to believe in Christ,
and not to persecute the Church of God. Attempts were made to leave the presents in the church, but Liberius threw
them out. Constantius hereupon sent for him under a strict guard to Milan, where, in a conference recorded by Theodoret,
he boldly told Constantius that Athanasius had been acquitted at Sardica, and his enemies proved calumniators (see:
"calumny") and impostors, and that it was unjust to condemn a person who could not be legally convicted of any crime.
The emperor was reduced to silence on every article, but being the more out of patience, ordered him into banishment.
Liberius went into exile. Constantius, after two years went to Rome to celebrate the twentieth year of his reign.
The ladies joined in a petition to him that he would restore Liberius. He assented, upon condition that he should
comply with the bishops, then, at court. He subscribed the condemnation of Athanasius, and a confession or creed
which had been framed by the Arians at Sirmium. Yet no sooner had he recovered his see than he declared himself for the Creed of Nicaea, as Theodoret testifies (Theodoret, Hist. lib. ii. c. 17). The Emperor knew what he wanted people to believe. So did the bishops at his court. Athanasius stuck by the orthodox creed. Constantius, an avowed Arian, had become sole ruler in 350 at the death of his brother Constans. When Emperor Constantine I died, Athanasius
was allowed to return to his See of Alexandria. Shortly thereafter, however, Constantine's son, the new Roman Emperor
Constantius II, renewed the order for Athanasius's banishment in 338. Athanasius went to Rome, where he was under
the protection of Constans, the Emperor of the West. During this time, Gregory of Cappadocia was installed as the
Patriarch of Alexandria, usurping the absent Athanasius. Athanasius did, however, remain in contact with his people
through his annual Festal Letters, in which he also announced on which date Easter would be celebrated that year.
T. Gilmartin (Professor of History, Maynooth, 1890) writes in Church History, Vol. 1, Ch XVII: By order of Constantius, sole ruler of the Roman Empire since the death of his brother Constans, the Council of Arles was held in 353, presided over by Vincent, Bishop of Capua, in the name of Pope Liberius. The fathers, terrified by the threats of the Emperor, an avowed Arian, consented to the condemnation of Athanasius. The Pope refused to
accept their decision, and requested the Emperor to hold another Council, in which the charges against Athanasius
could be freely investigated. To this Constantius consented, for he felt able to control it at Milan, which was named as the place. There, in 355, three hundred bishops assembled, most from the West and only a few from the East. They met in the Church of Milan, but the Emperor soon ordered them to a hall in the Imperial Palace, thus ending any free
debate. He presented an Arian formula of faith for their acceptance. He threatened any who refused with exile and
death. All, with the exception of Dionysius (bishop of Milan), and the two Papal Legates, viz., Eusebius of Vercelli
and Lucifer of Cagliari, consented to the Arian Creed and the condemnation of Athanasius. Those who refused were
sent into exile. The decrees were forwarded to the Pope for approval, but were rejected, because of the violence
to which the bishops were subjected. In 361, after the death of Emperor Constantius, shortly followed by the murder
of the very unpopular Bishop George, Athanasius returned to his patriarchate. The following year he convened a council
at Alexandria, and presided over it with Eusebius of Vercelli. Athanasius appealed for unity among all those who
had faith in Christianity, even if they differed on matters of terminology. This prepared the groundwork for his
definition of the orthodox doctrine of the Trinity. However, the council also was directed against those who denied
the divinity of the Holy Spirit, the human soul of Christ, and Christ's divinity. Mild measures were agreed on for
those heretic bishops who repented, but severe penance was decreed for the chief leaders of the major heresies. Two
years later, the Emperor Valens, who favored the Arian position, in his turn exiled Athanasius. This time however,
Athanasius simply left for the outskirts of Alexandria, where he stayed for only a few months before the local authorities
convinced Valens to retract his order of exile. Some early reports state that Athanasius spent this period of exile
at his family's ancestral tomb in a Christian cemetery. It was during this period, the final exile, that he is said
to have spent four months in hiding in his father's tomb. (Soz., "Hist. Eccl.", VI, xii; Soc., "Hist. Eccl.", IV,
xii). Athanasius is the first person to identify the same 27 books of the New Testament that are in use today. Up
until then, various similar lists of works to be read in churches were in use. Athanasius compiled the list to resolve
questions about such texts as The Epistle of Barnabas. Athanasius includes the Book of Baruch and the Letter of Jeremiah
and places the Book of Esther among the "7 books not in the canon but to be read" along with the Wisdom of Solomon,
Sirach (Ecclesiasticus), Judith, Tobit, the Didache, and the Shepherd of Hermas. Perhaps, his most notable letter
was his Festal Letter, written to his Church in Alexandria when he was in exile, as he could not be in their presence.
This letter shows clearly his stand that accepting Jesus as the Divine Son of God is not optional but necessary:
"I know moreover that not only this thing saddens you, but also the fact that while others have obtained the churches
by violence, you are meanwhile cast out from your places. For they hold the places, but you the Apostolic Faith.
They are, it is true, in the places, but outside of the true Faith; while you are outside the places indeed, but
the Faith, within you. Let us consider whether is the greater, the place or the Faith. Clearly the true Faith. Who
then has lost more, or who possesses more? He who holds the place, or he who holds the Faith?" Alban Butler writes on the subject: "Five months after this great Council of Nicaea, St Alexander, lying on his death-bed, recommended to
his clergy and people the choice of Athanasius for his successor, thrice repeating his name. In consequence of his
recommendation, the bishops of all Egypt assembled at Alexandria, and finding the people and clergy unanimous in
their choice of Athanasius for patriarch, they confirmed the election about the middle of year 326. He seems, then,
to have been about thirty years of age." The Gospel of John, and particularly its first chapter, demonstrates the Divinity of Jesus; this Gospel is in itself the greatest support of Athanasius's stand. The first chapter of John's Gospel came to be said quietly at the end of Mass, it is believed as a result of Athanasius and his life's stand. The Last Gospel of the Mass (the Eucharist), St John [1:1–14], together with the prayer "Placeat tibi" and the Blessing, are all private devotions that were gradually absorbed by the liturgical service. The beginning of John's Gospel was much used as an object of special devotion throughout the Middle Ages. Nevertheless, the practice of saying it at the altar grew, and eventually Pius V made the practice universal for the Roman Rite in his edition of the Missal (1570). From 1920 the custom admitted exceptions allowing another Gospel, so the Missals showed a different Last Gospel for certain feast days. Scholars now believe
that the Arian Party was not monolithic, but held drastically different theological views that spanned the early
Christian theological spectrum. They supported the tenets of Origenist thought and theology, but had little else
in common. Moreover, many labelled "Arian" did not consider themselves followers of Arius. In addition, non-Homoousian
bishops disagreed with being labeled as followers of Arius, since Arius was merely a presbyter, while they were fully
ordained bishops. However, others point to the Council of Nicaea as proof in and of itself that Arianism was a real
theological ideology.[citation needed]
Logging was Seattle's first major industry, but by the late 19th century the city had become a commercial and shipbuilding
center as a gateway to Alaska during the Klondike Gold Rush. By 1910, Seattle was one of the 25 largest cities in
the country. However, the Great Depression severely damaged the city's economy. Growth returned during and after
World War II, due partially to the local Boeing company, which established Seattle as a center for aircraft manufacturing.
The Seattle area developed as a technology center beginning in the 1980s, with companies like Microsoft becoming
established in the region. In 1994 the Internet retail giant Amazon was founded in Seattle. The stream of new software,
biotechnology, and Internet companies led to an economic revival, which increased the city's population by almost
50,000 between 1990 and 2000. Seattle (i/siˈætəl/) is a West Coast seaport city and the seat of King County. With
an estimated 662,400 residents as of 2015[update], Seattle is the largest city in both the state of Washington and
the Pacific Northwest region of North America. In July 2013 it was the fastest-growing major city in the United States,
and remained in the top five in May 2015 with an annual growth rate of 2.1%. The Seattle metropolitan area of around
3.6 million inhabitants is the 15th largest metropolitan area in the United States. The city is situated on an isthmus
between Puget Sound (an inlet of the Pacific Ocean) and Lake Washington, about 100 miles (160 km) south of the Canada–United
States border. A major gateway for trade with Asia, Seattle is the third largest port in North America in terms of
container handling as of 2015. After a difficult winter, most of the Denny Party relocated across Elliott Bay and
claimed land a second time at the site of present-day Pioneer Square. Charles Terry and John Low remained at the
original landing location and reestablished their old land claim, calling it "New York"; it was renamed "New York
Alki" in April 1853, after a Chinook word meaning, roughly, "by and by" or "someday". For the next few years, New
York Alki and Duwamps competed for dominance, but in time Alki was abandoned and its residents moved across the bay
to join the rest of the settlers. The name "Seattle" appears on official Washington Territory papers dated May 23,
1853, when the first plats for the village were filed. In 1855, nominal land settlements were established. On January
14, 1865, the Legislature of Territorial Washington incorporated the Town of Seattle with a board of trustees managing
the city. The Town of Seattle was disincorporated on January 18, 1867, and remained a mere precinct of King County until
late 1869, when a new petition was filed and the city was re-incorporated on December 2, 1869, with a mayor–council government.
The corporate seal of the City of Seattle carries the date "1869" and a likeness of Chief Sealth in left profile.
The first such boom, covering the early years of the city, rode on the lumber industry. (During this period the road
now known as Yesler Way won the nickname "Skid Road", supposedly after the timber skidding down the hill to Henry
Yesler's sawmill. The later dereliction of the area may be a possible origin for the term which later entered the
wider American lexicon as Skid Row.) Like much of the American West, Seattle saw numerous conflicts between labor
and management, as well as ethnic tensions that culminated in the anti-Chinese riots of 1885–1886. This violence
originated with unemployed whites who were determined to drive the Chinese from Seattle (anti-Chinese riots also
occurred in Tacoma). Authorities declared martial law, and federal troops arrived to put down the disorder. By 1900,
Asians made up 4.2% of the population. The second and most dramatic boom and bust resulted from the Klondike Gold Rush,
which ended the depression that had begun with the Panic of 1893; in a short time, Seattle became a major transportation
center. On July 14, 1897, the S.S. Portland docked with its famed "ton of gold", and Seattle became the main transport
and supply point for the miners in Alaska and the Yukon. Few of those working men found lasting wealth, however;
it was Seattle's business of clothing the miners and feeding them salmon that panned out in the long run. Along with
Seattle, other cities like Everett, Tacoma, Port Townsend, Bremerton, and Olympia, all in the Puget Sound region,
became competitors for exchange, rather than mother lodes for extraction, of precious metals. The boom lasted well
into the early part of the 20th century and funded many new Seattle companies and products. In 1907, 19-year-old
James E. Casey borrowed $100 from a friend and founded the American Messenger Company (later UPS). Other Seattle
companies founded during this period include Nordstrom and Eddie Bauer. Seattle brought in the Olmsted Brothers landscape
architecture firm to design a system of parks and boulevards. War work again brought local prosperity during World
War II, this time centered on Boeing aircraft. The war dispersed the city's numerous Japanese-American businessmen
due to the Japanese American internment. After the war, the local economy dipped. It rose again with Boeing's growing
dominance in the commercial airliner market. Seattle celebrated its restored prosperity and made a bid for world
recognition with the Century 21 Exposition, the 1962 World's Fair. Another major local economic downturn was in the
late 1960s and early 1970s, at a time when Boeing was heavily affected by the oil crises, loss of government contracts,
and costs and delays associated with the Boeing 747. Many people left the area to look for work elsewhere, and two
local real estate agents put up a billboard reading "Will the last person leaving Seattle – Turn out the lights."
Seattle was also the home base of impresario Alexander Pantages who, starting in 1902, opened a number of theaters
in the city exhibiting vaudeville acts and silent movies. His activities soon expanded, and the thrifty Greek went
on to become one of America's greatest theater and movie tycoons. Between Pantages and his rival John Considine,
Seattle was for a while the western United States' vaudeville mecca. B. Marcus Priteca, the Scottish-born and Seattle-based
architect, built several theaters for Pantages, including some in Seattle. The theaters he built for Pantages in
Seattle have been either demolished or converted to other uses, but many other theaters survive in other cities of
the U.S., often retaining the Pantages name; Seattle's surviving Paramount Theatre, on which he collaborated, was
not a Pantages theater. Seattle remained the corporate headquarters of Boeing until 2001, when the company separated
its headquarters from its major production facilities; the headquarters were moved to Chicago. The Seattle area is
still home to Boeing's Renton narrow-body plant (where the 707, 720, 727, and 757 were assembled, and the 737 is
assembled today) and Everett wide-body plant (assembly plant for the 747, 767, 777, and 787). The company's credit
union for employees, BECU, remains based in the Seattle area, though it is now open to all residents of Washington.
A shipbuilding boom in the early part of the 20th century became massive during World War I, making Seattle somewhat
of a company town; the subsequent retrenchment led to the Seattle General Strike of 1919, the first general strike
in the country. A 1912 city development plan by Virgil Bogue went largely unused. Seattle was mildly prosperous in
the 1920s but was particularly hard hit in the Great Depression, experiencing some of the country's harshest labor
strife in that era. Violence during the Maritime Strike of 1934 cost Seattle much of its maritime traffic, which
was rerouted to the Port of Los Angeles. As prosperity began to return in the 1980s, the city was stunned by the
Wah Mee massacre in 1983, when 13 people were killed in an illegal gambling club in the International District, Seattle's
Chinatown. Beginning with Microsoft's 1979 move from Albuquerque, New Mexico to nearby Bellevue, Washington, Seattle
and its suburbs became home to a number of technology companies including Amazon.com, RealNetworks, Nintendo of America,
McCaw Cellular (now part of AT&T Mobility), VoiceStream (now T-Mobile), and biomedical corporations such as HeartStream
(later purchased by Philips), Heart Technologies (later purchased by Boston Scientific), Physio-Control (later purchased
by Medtronic), ZymoGenetics, ICOS (later purchased by Eli Lilly and Company) and Immunex (later purchased by Amgen).
This success brought an influx of new residents with a population increase within city limits of almost 50,000 between
1990 and 2000, and saw Seattle's real estate become some of the most expensive in the country. In 1993, the movie
Sleepless in Seattle brought the city further national attention. Many of the Seattle area's tech companies remained
relatively strong, but the frenzied dot-com boom years ended in early 2001. Seattle in this period attracted widespread
attention as home to these many companies, but also by hosting the 1990 Goodwill Games and the APEC leaders conference
in 1993, as well as through the worldwide popularity of grunge, a sound that had developed in Seattle's independent
music scene. Another bid for worldwide attention—hosting the World Trade Organization Ministerial Conference of 1999—garnered
visibility, but not in the way its sponsors desired, as related protest activity and police reactions to those protests
overshadowed the conference itself. The city was further shaken by the Mardi Gras Riots in 2001, and then literally
shaken the following day by the Nisqually earthquake. Seattle is located between the saltwater Puget Sound (an arm
of the Pacific Ocean) to the west and Lake Washington to the east. The city's chief harbor, Elliott Bay, is part
of Puget Sound, which makes the city an oceanic port. To the west, beyond Puget Sound, are the Kitsap Peninsula and
Olympic Mountains on the Olympic Peninsula; to the east, beyond Lake Washington and the eastside suburbs, are Lake
Sammamish and the Cascade Range. Lake Washington's waters flow to Puget Sound through the Lake Washington Ship Canal
(consisting of two man-made canals, Lake Union, and the Hiram M. Chittenden Locks at Salmon Bay, ending in Shilshole
Bay on Puget Sound). Yet another boom began as the city emerged from the Great Recession. Amazon.com moved its headquarters
from North Beacon Hill to South Lake Union and began a rapid expansion. For the five years beginning in 2010, Seattle
gained an average of 14,511 residents per year, with the growth strongly skewed toward the center of the city, as
unemployment dropped from roughly 9 percent to 3.6 percent. The city has found itself "bursting at the seams," with
over 45,000 households spending more than half their income on housing and at least 2,800 people homeless, and with
the country's sixth-worst rush hour traffic. Due to its location in the Pacific Ring of Fire, Seattle is in a major
earthquake zone. On February 28, 2001, the magnitude 6.8 Nisqually earthquake did significant architectural damage,
especially in the Pioneer Square area (built on reclaimed land, as are the Industrial District and part of the city
center), but caused only one fatality. Other strong quakes occurred on January 26, 1700 (estimated at 9 magnitude),
December 14, 1872 (7.3 or 7.4), April 13, 1949 (7.1), and April 29, 1965 (6.5). The 1965 quake caused three deaths
in Seattle directly, and one more by heart failure. Although the Seattle Fault passes just south of the city center,
neither it nor the Cascadia subduction zone has caused an earthquake since the city's founding. The Cascadia subduction
zone poses the threat of an earthquake of magnitude 9.0 or greater, capable of seriously damaging the city and collapsing
many buildings, especially in zones built on fill. The city itself is hilly, though not uniformly so. Like Rome,
the city is said to lie on seven hills; the lists vary, but typically include Capitol Hill, First Hill, West Seattle,
Beacon Hill, Queen Anne, Magnolia, and the former Denny Hill. The Wallingford, Mount Baker, and Crown Hill neighborhoods
are technically located on hills as well. Many of the hilliest areas are near the city center, with Capitol Hill,
First Hill, and Beacon Hill collectively constituting something of a ridge along an isthmus between Elliott Bay and
Lake Washington. The break in the ridge between First Hill and Beacon Hill is man-made, the result of two of the
many regrading projects that reshaped the topography of the city center. The topography of the city center was also
changed by the construction of a seawall and the artificial Harbor Island (completed 1909) at the mouth of the city's
industrial Duwamish Waterway, the terminus of the Green River. The highest point within city limits is at High Point
in West Seattle, located near 35th Ave SW and SW Myrtle St. Other notable hills include Crown Hill,
View Ridge/Wedgwood/Bryant, Maple Leaf, Phinney Ridge, Mt. Baker Ridge and Highlands/Carkeek/Bitterlake. From 1981
to 2010, the average annual precipitation measured at Seattle–Tacoma International Airport was 37.49 inches (952
mm). Annual precipitation has ranged from 23.78 in (604 mm) in 1952 to 55.14 in (1,401 mm) in 1950; for water year
(October 1 – September 30) precipitation, the range is 23.16 in (588 mm) in 1976–77 to 51.82 in (1,316 mm) in 1996–97.
Due to local variations in microclimate, Seattle also receives significantly lower precipitation than some other
locations west of the Cascades. Around 80 mi (129 km) to the west, the Hoh Rain Forest in Olympic National Park on
the western flank of the Olympic Mountains receives an annual average precipitation of 142 in (3.61 m). Sixty miles
to the south of Seattle, the state capital Olympia, which is out of the Olympic Mountains' rain shadow, receives
an annual average precipitation of 50 in (1,270 mm). The city of Bremerton, about 15 mi (24 km) west of downtown
Seattle, receives 56.4 in (1,430 mm) of precipitation annually. In November, Seattle averages more rainfall than
any other U.S. city of more than 250,000 people; it also ranks highly in winter precipitation. Conversely, the city
receives some of the lowest precipitation amounts of any large city from June to September. Seattle is one of the
five rainiest major U.S. cities as measured by the number of days with precipitation, and it receives some of the
lowest amounts of annual sunshine among major cities in the lower 48 states, along with some cities in the Northeast,
Ohio and Michigan. Thunderstorms are rare, as the city reports thunder on just seven days per year. By comparison,
Fort Myers, Florida reports thunder on 93 days per year, Kansas City on 52, and New York City on 25. Temperature
extremes are moderated by the adjacent Puget Sound, greater Pacific Ocean, and Lake Washington. The region is largely
shielded from Pacific storms by the Olympic Mountains and from Arctic air by the Cascade Range. Despite being on
the margin of the rain shadow of the Olympic Mountains, the city has a reputation for frequent rain. This reputation
stems from the frequency of light precipitation in the fall, winter, and spring. In an average year, at least 0.01
inches (0.25 mm) of precipitation falls on 150 days, more days than in nearly any U.S. city east of the Rocky Mountains.
It is cloudy 201 days out of the year and partly cloudy 93 days. Official weather and climatic data is collected
at Seattle–Tacoma International Airport, located about 12 mi (19 km) south of downtown in the city of SeaTac, which
is at a higher elevation, and records more cloudy days and fewer partly cloudy days per year. The Puget Sound Convergence
Zone is an important feature of Seattle's weather. In the convergence zone, air arriving from the north meets air
flowing in from the south. Both streams of air originate over the Pacific Ocean; airflow is split by the Olympic
Mountains to Seattle's west, then reunited to the east. When the air currents meet, they are forced upward, resulting
in convection. Thunderstorms caused by this activity are usually weak and can occur north and south of town, but
Seattle itself rarely receives more than occasional thunder and small hail showers. The Hanukkah Eve Wind Storm in
December 2006 is an exception that brought heavy rain and winds gusting up to 69 mph (111 km/h), an event that was
not caused by the Puget Sound Convergence Zone and was widespread across the Pacific Northwest. Seattle typically
receives some snowfall on an annual basis but heavy snow is rare. Average annual snowfall, as measured at Sea-Tac
Airport, is 6.8 inches (17.3 cm). Single calendar-day snowfall of six inches or greater has occurred on only 15 days
since 1948, and only once since February 17, 1990: on January 18, 2012, when 6.8 in (17.3 cm) of snow officially fell
at Sea-Tac Airport. This moderate snow event was officially the 12th-snowiest calendar day at the airport since
1948 and snowiest since November 1985. Much of the city of Seattle proper received somewhat lesser snowfall accumulations.
Locations to the south of Seattle received more, with Olympia and Chehalis receiving 14 to 18 in (36 to 46 cm). Another
moderate snow event occurred from December 12–25, 2008, when over one foot (30 cm) of snow fell and stuck on much
of the roads over those two weeks, when temperatures remained below 32 °F (0 °C), causing widespread difficulties
in a city not equipped for clearing snow. The largest documented snowstorm occurred from January 5–9, 1880, with
snow drifting to 6 feet (1.8 m) in places at the end of the snow event. From January 31 to February 2, 1916, another
heavy snow event occurred with 29 in (74 cm) of snow on the ground by the time the event was over. With official
records dating to 1948, the largest single-day snowfall is 20.0 in (51 cm) on January 13, 1950. Seasonal snowfall
has ranged from zero in 1991–92 to 67.5 in (171 cm) in 1968–69, with trace amounts having occurred as recently as
2009–10. The month of January 1950 was particularly severe, bringing 57.2 in (145 cm) of snow, the most of any month
on record, along with record cold. Autumn, winter, and early spring are frequently characterized by rain.
Winters are cool and wet with December, the coolest month, averaging 40.6 °F (4.8 °C), with 28 annual days with lows
that reach the freezing mark, and 2.0 days where the temperature stays at or below freezing all day; the temperature
rarely lowers to 20 °F (−7 °C). Summers are sunny, dry and warm, with August, the warmest month, averaging 66.1 °F
(18.9 °C), and with temperatures reaching 90 °F (32 °C) on 3.1 days per year, although 2011 is the most recent year
to not reach 90 °F. The hottest officially recorded temperature was 103 °F (39 °C) on July 29, 2009; the coldest
recorded temperature was 0 °F (−18 °C) on January 31, 1950; the record cold daily maximum is 16 °F (−9 °C) on January
14, 1950, while, conversely, the record warm daily minimum is 71 °F (22 °C) the day the official record high was
set. The average window for freezing temperatures is November 16 through March 10, allowing a growing season of 250
days. Seattle experiences its heaviest rainfall during the months of November, December and January, receiving roughly
half of its annual rainfall (by volume) during this period. In late fall and early winter, atmospheric rivers (also
known as "Pineapple Express" systems), strong frontal systems, and Pacific low pressure systems are common. Light
rain and drizzle are the predominant forms of precipitation during the remainder of the year; for instance, on average,
less than 1.6 in (41 mm) of rain falls in July and August combined, months when rain is rare. On occasion, Seattle experiences
somewhat more significant weather events. One such event occurred on December 2–4, 2007, when sustained hurricane-force
winds and widespread heavy rainfall associated with a strong Pineapple Express event occurred in the greater Puget
Sound area and the western parts of Washington and Oregon. Precipitation totals exceeded 13.8 in (350 mm) in some
areas with winds topping out at 130 mph (209 km/h) along coastal Oregon. It became the second-wettest event in Seattle
history when a little over 5.1 in (130 mm) of rain fell on Seattle in a 24-hour period. Lack of adaptation to the
heavy rain contributed to five deaths and widespread flooding and damage. In recent years, the city has experienced
steady population growth, and has been faced with the issue of accommodating more residents. In 2006, after growing
by 4,000 citizens per year for the previous 16 years, regional planners expected the population of Seattle to grow
by 200,000 people by 2040. However, former mayor Greg Nickels supported plans that would increase the population
by 60%, or 350,000 people, by 2040 and worked on ways to accommodate this growth while keeping Seattle's single-family
housing zoning laws. The Seattle City Council later voted to relax height limits on buildings in the greater part
of Downtown, partly with the aim of increasing residential density in the city center. As a sign of increasing inner-city
growth, the downtown population crested to over 60,000 in 2009, up 77% since 1990. Seattle's foreign-born population
grew 40% between the 1990 and 2000 censuses. The Chinese population in the Seattle area has origins in mainland China,
Hong Kong, Southeast Asia, and Taiwan. The earliest Chinese Americans, who came in the late 19th and early 20th centuries,
were almost entirely from Guangdong province. The Seattle area is also home to a large Vietnamese population of more
than 55,000 residents, as well as over 30,000 Somali immigrants. The Seattle-Tacoma area is also home to one of the
largest Cambodian communities in the United States, numbering about 19,000 Cambodian Americans, and one of the largest
Samoan communities in the mainland U.S., with over 15,000 people having Samoan ancestry. Additionally, the Seattle
area had the highest percentage of self-identified mixed-race people of any large metropolitan area in the United
States, according to the 2000 United States Census. According to a 2012 HistoryLink study, Seattle's 98118
ZIP code (in the Columbia City neighborhood) was one of the most diverse ZIP Code Tabulation Areas in the United
States. Seattle's population historically has been predominantly white. The 2010 census showed that Seattle was one
of the whitest big cities in the country, although its proportion of white residents has been gradually declining.
In 1960, whites comprised 91.6% of the city's population, while in 2010 they comprised 69.5%. According to the 2006–2008
American Community Survey, approximately 78.9% of residents over the age of five spoke only English at home. Those
who spoke Asian languages other than Indo-European languages made up 10.2% of the population, Spanish was spoken
by 4.5% of the population, speakers of other Indo-European languages made up 3.9%, and speakers of other languages
made up 2.5%. Still, very large companies dominate the business landscape. Four companies on the 2013 Fortune 500
list of the United States' largest companies, based on total revenue, are headquartered in Seattle: Internet retailer
Amazon.com (#49), coffee chain Starbucks (#208), department store Nordstrom (#227), and freight forwarder Expeditors
International of Washington (#428). Other Fortune 500 companies popularly associated with Seattle are based in nearby
Puget Sound cities. Warehouse club chain Costco (#22), the largest retail company in Washington, is based in Issaquah.
Microsoft (#35) is located in Redmond. Weyerhaeuser, the forest products company (#363), is based in Federal Way.
Finally, Bellevue is home to truck manufacturer Paccar (#168). Other major companies in the area include Nintendo
of America in Redmond, T-Mobile US in Bellevue, Expedia Inc. in Bellevue and Providence Health & Services — the state's
largest health care system and fifth largest employer — in Renton. The city has a reputation for heavy coffee consumption;
coffee companies founded or based in Seattle include Starbucks, Seattle's Best Coffee, and Tully's. There are also
many successful independent artisanal espresso roasters and cafés. Prior to moving its headquarters to Chicago, aerospace
manufacturer Boeing (#30) was the largest company based in Seattle. Its largest division is still headquartered in
nearby Renton, and the company has large aircraft manufacturing plants in Everett and Renton, so it remains the largest
private employer in the Seattle metropolitan area. Former Seattle Mayor Greg Nickels announced a desire to spark
a new economic boom driven by the biotechnology industry in 2006. Major redevelopment of the South Lake Union neighborhood
is underway, in an effort to attract new and established biotech companies to the city, joining biotech companies
Corixa (acquired by GlaxoSmithKline), Immunex (now part of Amgen), Trubion, and ZymoGenetics. Vulcan Inc., the holding
company of billionaire Paul Allen, is behind most of the development projects in the region. While some see the new
development as an economic boon, others have criticized Nickels and the Seattle City Council for pandering to Allen's
interests at taxpayers' expense. Also in 2006, Expansion Magazine ranked Seattle among the top 10 metropolitan areas
in the nation for climates favorable to business expansion. In 2005, Forbes ranked Seattle as the most expensive
American city for buying a house based on the local income levels. In 2013, however, the magazine ranked Seattle
No. 9 on its list of the Best Places for Business and Careers. Seattle's economy is driven by a mix of older industrial
companies, and "new economy" Internet and technology companies, service, design and clean technology companies. The
city's gross metropolitan product was $231 billion in 2010, making it the 11th largest metropolitan economy in the
United States. The Port of Seattle, which also operates Seattle–Tacoma International Airport, is a major gateway
for trade with Asia and cruises to Alaska, and is the 8th largest port in the United States in terms of container
capacity. Though it was affected by the Great Recession, Seattle has retained a comparatively strong economy, and
remains a hotbed for start-up businesses, especially in green building and clean technologies: it was ranked as America's
No. 1 "smarter city" based on its government policies and green economy. In February 2010, the city government committed
Seattle to becoming North America's first "climate neutral" city, with a goal of reaching zero net per capita greenhouse
gas emissions by 2030. Seattle also has large lesbian, gay, bisexual, and transgender populations. According to a
2006 study by UCLA, 12.9% of city residents polled identified as gay, lesbian, or bisexual. This was the second-highest
proportion of any major U.S. city, behind San Francisco. Greater Seattle also ranked second among major U.S. metropolitan
areas, with 6.5% of the population identifying as gay, lesbian, or bisexual. According to 2012 estimates from the
United States Census Bureau, Seattle has the highest percentage of same-sex households in the United States, at 2.6
percent, surpassing San Francisco. The 5th Avenue Theatre, built in 1926, stages Broadway-style musical shows featuring
both local talent and international stars. Seattle has "around 100" theatrical production companies and over two
dozen live theatre venues, many of them associated with fringe theatre; Seattle is probably second only to New York
for number of equity theaters (28 Seattle theater companies have some sort of Actors' Equity contract). In addition,
the 900-seat Romanesque Revival Town Hall on First Hill hosts numerous cultural events, especially lectures and recitals.
Seattle has been a regional center for the performing arts for many years. The century-old Seattle Symphony Orchestra
is among the world's most recorded and performs primarily at Benaroya Hall. The Seattle Opera and Pacific Northwest
Ballet, which perform at McCaw Hall (opened 2003 on the site of the former Seattle Opera House at Seattle Center),
are comparably distinguished, with the Opera being particularly known for its performances of the works of Richard
Wagner and the PNB School (founded in 1974) ranking as one of the top three ballet training institutions in the United
States. The Seattle Youth Symphony Orchestras (SYSO) is the largest symphonic youth organization in the United States.
The city also boasts lauded summer and winter chamber music festivals organized by the Seattle Chamber Music Society.
From 1869 until 1982, Seattle was known as the "Queen City". Seattle's current official nickname is the "Emerald
City", the result of a contest held in 1981; the reference is to the lush evergreen forests of the area. Seattle
is also referred to informally as the "Gateway to Alaska" for being the nearest major city in the contiguous US to
Alaska, "Rain City" for its frequent cloudy and rainy weather, and "Jet City" from the local influence of Boeing.
The city has two official slogans or mottos: "The City of Flowers", meant to encourage the planting of flowers to
beautify the city, and "The City of Goodwill", adopted prior to the 1990 Goodwill Games. Seattle residents are known
as Seattleites. Among Seattle's prominent annual fairs and festivals are the 24-day Seattle International Film Festival,
Northwest Folklife over the Memorial Day weekend, numerous Seafair events throughout July and August (ranging from
a Bon Odori celebration to the Seafair Cup hydroplane races), the Bite of Seattle, one of the largest Gay Pride festivals
in the United States, and the art and music festival Bumbershoot, which programs music as well as other art and entertainment
over the Labor Day weekend. All are typically attended by 100,000 people annually, as are the Seattle Hempfest and
two separate Independence Day celebrations. Seattle annually sends a team of spoken word slammers to the National
Poetry Slam and considers itself home to such performance poets as Buddy Wakefield, two-time Individual World Poetry
Slam Champ; Anis Mojgani, two-time National Poetry Slam Champ; and Danny Sherrard, 2007 National Poetry Slam Champ
and 2008 Individual World Poetry Slam Champ. Seattle also hosted the 2001 national Poetry Slam Tournament. The Seattle
Poetry Festival is a biennial poetry festival that (launched first as the Poetry Circus in 1997) has featured local,
regional, national, and international names in poetry. Seattle is considered the home of grunge music, having produced
artists such as Nirvana, Soundgarden, Alice in Chains, Pearl Jam, and Mudhoney, all of whom reached international
audiences in the early 1990s. The city is also home to such varied artists as avant-garde jazz musicians Bill Frisell
and Wayne Horvitz, hot jazz musician Glenn Crytzer, hip hop artists Sir Mix-a-Lot, Macklemore, Blue Scholars, and
Shabazz Palaces, smooth jazz saxophonist Kenny G, classic rock staples Heart and Queensrÿche, and alternative rock
bands such as Foo Fighters, Harvey Danger, The Presidents of the United States of America, The Posies, Modest Mouse,
Band of Horses, Death Cab for Cutie, and Fleet Foxes. Rock musicians such as Jimi Hendrix, Duff McKagan, and Nikki
Sixx spent their formative years in Seattle. The Henry Art Gallery opened in 1927, the first public art museum in
Washington. The Seattle Art Museum (SAM) opened in 1933; SAM opened a museum downtown in 1991 (expanded and reopened
2007); since 1991, the 1933 building has been SAM's Seattle Asian Art Museum (SAAM). SAM also operates the Olympic
Sculpture Park (opened 2007) on the waterfront north of the downtown piers. The Frye Art Museum is a free museum
on First Hill. Regional history collections are at the Loghouse Museum in Alki, Klondike Gold Rush National Historical
Park, the Museum of History and Industry and the Burke Museum of Natural History and Culture. Industry collections
are at the Center for Wooden Boats and the adjacent Northwest Seaport, the Seattle Metropolitan Police Museum, and
the Museum of Flight. Regional ethnic collections include the Nordic Heritage Museum, the Wing Luke Asian Museum
and the Northwest African American Museum. Seattle has artist-run galleries, including 10-year veteran Soil Art Gallery,
and the newer Crawl Space Gallery. There are other annual events, ranging from the Seattle Antiquarian Book Fair
& Book Arts Show; an anime convention, Sakura-Con; Penny Arcade Expo, a gaming convention; a two-day, 9,000-rider
Seattle to Portland Bicycle Classic; and specialized film festivals, such as the Maelstrom International Fantastic
Film Festival, the Seattle Asian American Film Festival (formerly known as the Northwest Asian American Film Festival),
Children's Film Festival Seattle, Translation: the Seattle Transgender Film Festival, the Seattle Gay and Lesbian
Film Festival, and the Seattle Polish Film Festival. Seattle's professional sports history began at the start of
the 20th century with the PCHA's Seattle Metropolitans, which in 1917 became the first American hockey team to win
the Stanley Cup. Seattle was also briefly home to a Major League Baseball franchise in 1969: the Seattle Pilots.
The Pilots relocated to Milwaukee, Wisconsin and became the Milwaukee Brewers for the 1970 season. From 1967 to 2008
Seattle was also home to a National Basketball Association (NBA) franchise: the Seattle SuperSonics, who were the
1978–79 NBA champions. The SuperSonics relocated to Oklahoma City, Oklahoma and became the Oklahoma City Thunder
for the 2008–09 season. The Seahawks' CenturyLink Field has hosted NFL playoff games in 2006, 2008, 2011, 2014 and
2015. The Seahawks have advanced to the Super Bowl three times: 2005, 2013 and 2014. They defeated the Denver Broncos
43–8 to win their first Super Bowl championship in Super Bowl XLVIII, but lost 28–24 to the New England Patriots
in Super Bowl XLIX. Seattle Sounders FC has played in Major League Soccer since 2009, sharing CenturyLink Field with
the Seahawks, as a continuation of earlier teams in the lower divisions of American soccer. The Sounders have not won the MLS Cup but have won the MLS Supporters' Shield in 2014 and the Lamar Hunt U.S. Open Cup on four
occasions: 2009, 2010, 2011, and 2014. Seattle is widely considered one of the most liberal cities in the United
States, even surpassing its neighbor, Portland, Oregon. Support for issues such as same-sex marriage and reproductive rights is largely taken for granted in local politics. In the 2012 U.S. general election, an overwhelming majority
of Seattleites voted to approve Referendum 74 and legalize gay marriage in Washington state. In the same election,
an overwhelming majority of Seattleites also voted to approve the legalization of the recreational use of cannabis
in the state. As in much of the Pacific Northwest (which has the lowest rate of church attendance in the United States
and consistently reports the highest percentage of atheism), church attendance, religious belief, and political influence
of religious leaders are much lower than in other parts of America. Seattle's political culture is very liberal and
progressive for the United States, with over 80% of the population voting for the Democratic Party. All precincts
in Seattle voted for Democratic Party candidate Barack Obama in the 2012 presidential election. In partisan elections
for the Washington State Legislature and United States Congress, nearly all elections are won by Democrats. Seattle
is considered the first major American city to elect a female mayor, Bertha Knight Landes. It has also elected an
openly gay mayor, Ed Murray, and a socialist councillor, Kshama Sawant. For the first time in United States history,
an openly gay black woman was elected to public office when Sherry Harris was elected as a Seattle city councillor
in 1991. The majority of the current city council is female, while white men comprise a minority. As in most of the United States, government and lawmaking also rely on ballot initiatives (allowing citizens to pass
or reject laws), referenda (allowing citizens to approve or reject legislation already passed), and propositions
(allowing specific government agencies to propose new laws/tax increases directly to the people). Federally, Seattle
is part of Washington's 7th congressional district, represented by Democrat Jim McDermott, elected in 1988 and one
of Congress's most liberal members. Ed Murray is currently serving as mayor. Of the city's population over the age of
25, 53.8% (vs. a national average of 27.4%) hold a bachelor's degree or higher, and 91.9% (vs. 84.5% nationally)
have a high school diploma or equivalent. A 2008 United States Census Bureau survey showed that Seattle had the highest
percentage of college and university graduates of any major U.S. city. The city was listed as the most literate of
the country's 69 largest cities in 2005 and 2006, the second most literate in 2007 and the most literate in 2008
in studies conducted by Central Connecticut State University. Non-commercial radio stations include NPR affiliates
KUOW-FM 94.9 and KPLU-FM 88.5 (Tacoma), as well as classical music station KING-FM 98.1. Other stations include KEXP-FM
90.3 (affiliated with the UW), community radio KBCS-FM 91.3 (affiliated with Bellevue College), and high school radio
KNHC-FM 89.5, which broadcasts an electronic dance music radio format and is owned by the public school system and
operated by students of Nathan Hale High School. Many Seattle radio stations are also available through Internet
radio, with KEXP in particular being a pioneer of Internet radio. Seattle also has numerous commercial radio stations.
In a March 2012 report by the consumer research firm Arbitron, the top FM stations were KRWM (adult contemporary
format), KIRO-FM (news/talk), and KISW (active rock) while the top AM stations were KOMO (AM) (all news), KJR (AM)
(all sports), KIRO (AM) (all sports). As of 2010, Seattle has one major daily newspaper, The Seattle Times.
The Seattle Post-Intelligencer, known as the P-I, published a daily newspaper from 1863 to March 17, 2009, before
switching to an online-only publication. There is also the Seattle Daily Journal of Commerce, and the University
of Washington publishes The Daily, a student-run publication, when school is in session. The most prominent weeklies
are the Seattle Weekly and The Stranger; both consider themselves "alternative" papers. The weekly LGBT newspaper
is the Seattle Gay News. Real Change is a weekly street newspaper that is sold mainly by homeless persons as an alternative
to panhandling. There are also several ethnic newspapers, including The Facts, Northwest Asian Weekly and the
International Examiner, and numerous neighborhood newspapers. King County Metro provides frequent stop bus service
within the city and surrounding county, as well as a South Lake Union Streetcar line between the South Lake Union
neighborhood and Westlake Center in downtown. Seattle is one of the few cities in North America whose bus fleet includes
electric trolleybuses. Sound Transit currently provides an express bus service within the metropolitan area; two Sounder commuter rail lines between the suburbs and downtown; and the Central Link light rail line between downtown and Sea-Tac Airport, which opened in 2009 and gave the city its first rapid transit line with intermediate stops within
the city limits. Washington State Ferries, which manages the largest network of ferries in the United States and
third largest in the world, connects Seattle to Bainbridge and Vashon Islands in Puget Sound and to Bremerton and
Southworth on the Kitsap Peninsula. Located in the Laurelhurst neighborhood, Seattle Children's, formerly Children's
Hospital and Regional Medical Center, is the pediatric referral center for Washington, Alaska, Montana, and Idaho.
The Fred Hutchinson Cancer Research Center has a campus in the Eastlake neighborhood. The University District is
home to the University of Washington Medical Center which, along with Harborview, is operated by the University of
Washington. Seattle is also served by a Veterans Affairs hospital on Beacon Hill, a third campus of Swedish in Ballard,
and Northwest Hospital and Medical Center near Northgate Mall. The first streetcars appeared in 1889 and were instrumental
in the creation of a relatively well-defined downtown and strong neighborhoods at the end of their lines. The advent
of the automobile sounded the death knell for rail in Seattle. Tacoma–Seattle railway service ended in 1929 and the
Everett–Seattle service came to an end in 1939, replaced by inexpensive automobiles running on the recently developed
highway system. Rails on city streets were paved over or removed, and the opening of the Seattle trolleybus system
brought the end of streetcars in Seattle in 1941. This left an extensive network of privately owned buses (later
public) as the only mass transit within the city and throughout the region. Most travel, however, relies on Seattle's streets, which are laid out in a grid oriented to the cardinal directions, except in the central business district, where early city leaders Arthur Denny and Carson Boren insisted on orienting their plats relative to the shoreline rather than to true north. Only two roads, Interstate 5 and State Route 99 (both limited-access highways),
run uninterrupted through the city from north to south. State Route 99 runs through downtown Seattle on the Alaskan
Way Viaduct, which was built in 1953. However, due to damage sustained during the 2001 Nisqually earthquake, the viaduct will be replaced by a tunnel. The 2-mile (3.2 km) Alaskan Way Viaduct replacement tunnel was originally scheduled to be completed in December 2015 at a cost of US$4.25 billion. Due to issues with the world's largest tunnel boring machine (TBM), nicknamed "Bertha" and 57 feet (17 m) in diameter, the projected date of
completion has been pushed back to 2017. Seattle has the 8th worst traffic congestion of all American cities, and
is 10th among all North American cities. Seattle is home to the University of Washington, as well as the institution's
professional and continuing education unit, the University of Washington Educational Outreach. A study by Newsweek
International in 2006 cited the University of Washington as the twenty-second best university in the world. Seattle
also has a number of smaller private universities including Seattle University and Seattle Pacific University, the
former a Jesuit Catholic institution, the latter Free Methodist; universities aimed at the working adult, like City
University and Antioch University; colleges within the Seattle Colleges District system, comprising North, Central,
and South; seminaries, including Western Seminary; and a number of arts colleges, such as Cornish College of the Arts,
Pratt Fine Arts Center, and The Art Institute of Seattle. In 2001, Time magazine selected Seattle Central Community
College as community college of the year, stating the school "pushes diverse students to work together in small teams".
The city has started moving away from the automobile and towards mass transit. From 2004 to 2009, the annual number
of unlinked public transportation trips increased by approximately 21%. In 2006, voters in King County passed Proposition 2 (Transit Now), which increased bus service hours on high-ridership routes and paid for five bus rapid transit lines
called RapidRide. After rejecting a roads-and-transit measure in 2007, Seattle-area voters passed a transit-only
measure in 2008 to increase ST Express bus service, extend the Link Light Rail system, and expand and improve Sounder
commuter rail service. A light rail line from downtown heading south to Sea-Tac Airport began service on December
19, 2009, giving the city its first rapid transit line with intermediate stations within the city limits. An extension
north to the University of Washington is scheduled to open in 2016, and further extensions are planned to reach Lynnwood
to the north, Des Moines to the south, and Bellevue and Redmond to the east by 2023. Former mayor Michael McGinn
has supported building light rail from downtown to Ballard and West Seattle.
In psychology, memory is the process in which information is encoded, stored, and retrieved. Encoding allows information
from the outside world to be sensed in the form of chemical and physical stimuli. In this first stage, information must be converted into a form the memory system can use. Storage, the second stage, entails maintaining the encoded information over periods of time. Finally, the third process is the retrieval of stored information, which must be located and returned to consciousness. Some retrieval
attempts may be effortless due to the type of information, and other attempts to remember stored information may
be more demanding for various reasons. Short-term memory is believed to rely mostly on an acoustic code for storing
information, and to a lesser extent a visual code. Conrad (1964) found that test subjects had more difficulty recalling
collections of letters that were acoustically similar (e.g. E, P, D). Confusion with recalling acoustically similar
letters rather than visually similar letters implies that the letters were encoded acoustically. Conrad's (1964)
study, however, deals with the encoding of written text; thus, while memory of written language may rely on acoustic
components, generalisations to all forms of memory cannot be made. Short-term memory is also known as working memory.
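The capacity limit of short-term memory and the chunking strategy that works around it, discussed in this section, can be sketched with a short, purely illustrative Python snippet; the function name and the specific area-code/3-digit/4-digit grouping are assumptions mirroring the ten-digit telephone-number example:

```python
# Illustrative sketch: "chunking" regroups many small items into a few
# larger, meaningful units so they fit within short-term memory's limit
# (classically 7±2 items; modern estimates are closer to 4-5).

def chunk_phone_number(digits: str) -> list[str]:
    """Split a 10-digit string into area code + 3-digit + 4-digit chunks."""
    if len(digits) != 10 or not digits.isdigit():
        raise ValueError("expected exactly 10 digits")
    return [digits[:3], digits[3:6], digits[6:]]

# Ten separate digits exceed the capacity limit; three chunks do not.
print(chunk_phone_number("1234567890"))  # ['123', '456', '7890']
```

The same idea explains why many countries conventionally print telephone numbers as groups of two to four digits rather than as one unbroken string.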
Short-term memory allows recall for a period of several seconds to a minute without rehearsal. Its capacity is also
very limited: George A. Miller (1956), when working at Bell Laboratories, conducted experiments showing that the
store of short-term memory was 7±2 items (hence the title of his famous paper, "The Magical Number Seven, Plus or Minus Two"). Modern estimates
of the capacity of short-term memory are lower, typically of the order of 4–5 items; however, memory capacity can
be increased through a process called chunking. For example, in recalling a ten-digit telephone number, a person
could chunk the digits into three groups: first, the area code (such as 123), then a three-digit chunk (456) and
lastly a four-digit chunk (7890). This method of remembering telephone numbers is far more effective than attempting
to remember a string of 10 digits; this is because we are able to chunk the information into meaningful groups of
numbers. This may be reflected in some countries in the tendency to display telephone numbers as several chunks of
two to four numbers. The storage in sensory memory and short-term memory generally has a strictly limited capacity
and duration, which means that information is not retained indefinitely. By contrast, long-term memory can store
much larger quantities of information for potentially unlimited duration (sometimes a whole life span). Its capacity
is immeasurable. For example, given a random seven-digit number we may remember it for only a few seconds before
forgetting, suggesting it was stored in our short-term memory. On the other hand, we can remember telephone numbers
for many years through repetition; this information is said to be stored in long-term memory. The Atkinson–Shiffrin model also depicts each memory store as a single unit, whereas research suggests otherwise. For example, short-term
memory can be broken up into different units such as visual information and acoustic information. In a study by Zlonoga
and Gerber (1986), patient 'KF' demonstrated certain deviations from the Atkinson–Shiffrin model. Patient KF was
brain damaged, displaying difficulties regarding short-term memory. Recognition of sounds such as spoken numbers,
letters, words and easily identifiable noises (such as doorbells and cats meowing) was impaired. Interestingly,
visual short-term memory was unaffected, suggesting a dichotomy between visual and auditory memory. Short-term memory
is supported by transient patterns of neuronal communication, dependent on regions of the frontal lobe (especially
dorsolateral prefrontal cortex) and the parietal lobe. Long-term memory, on the other hand, is maintained by more
stable and permanent changes in neural connections widely spread throughout the brain. The hippocampus is essential for the consolidation of newly learned information from short-term to long-term memory, although
it does not seem to store information itself. Without the hippocampus, new memories cannot be stored into long-term memory, and attention span becomes very short, as was learned from patient Henry Molaison after the removal of both his hippocampi. Furthermore, the hippocampus may be involved in changing neural connections for a period of three
months or more after the initial learning. Sensory memory holds sensory information less than one second after an
item is perceived. The ability to look at an item and remember what it looked like with just a split second of observation,
or memorization, is an example of sensory memory. It is outside of cognitive control and is an automatic response. With
very short presentations, participants often report that they seem to "see" more than they can actually report. The
first experiments exploring this form of sensory memory were conducted by George Sperling (1963) using the "partial
report paradigm". Subjects were presented with a grid of 12 letters, arranged into three rows of four. After a brief
presentation, subjects were then played either a high, medium or low tone, cuing them which of the rows to report.
Based on these partial report experiments, Sperling was able to show that the capacity of sensory memory was approximately
12 items, but that it degraded very quickly (within a few hundred milliseconds). Because this form of memory degrades
so quickly, participants would see the display but be unable to report all of the items (12 in the "whole report"
procedure) before they decayed. This type of memory cannot be prolonged via rehearsal. While short-term memory encodes
information acoustically, long-term memory encodes it semantically: Baddeley (1966) discovered that, after 20 minutes,
test subjects had the most difficulty recalling a collection of words that had similar meanings (e.g. big, large,
great, huge) over the long term. Another part of long-term memory is episodic memory, "which attempts to capture information
such as 'what', 'when' and 'where'". With episodic memory, individuals are able to recall specific events such as
birthday parties and weddings. Infants do not have the language ability to report on their memories and so verbal
reports cannot be used to assess very young children’s memory. Throughout the years, however, researchers have adapted
and developed a number of measures for assessing both infants’ recognition memory and their recall memory. Habituation
and operant conditioning techniques have been used to assess infants’ recognition memory and the deferred and elicited
imitation techniques have been used to assess infants’ recall memory. Another major way to distinguish different
memory functions is whether the content to be remembered is in the past, retrospective memory, or in the future,
prospective memory. Thus, retrospective memory as a category includes semantic, episodic and autobiographical memory.
In contrast, prospective memory is memory for future intentions, or remembering to remember (Winograd, 1988). Prospective
memory can be further broken down into event- and time-based prospective remembering. Time-based prospective memories
are triggered by a time-cue, such as going to the doctor (action) at 4pm (cue). Event-based prospective memories
are intentions triggered by cues, such as remembering to post a letter (action) after seeing a mailbox (cue). Cues
do not need to be related to the action (as the mailbox/letter example), and lists, sticky-notes, knotted handkerchiefs,
or string around the finger all exemplify cues that people use as strategies to enhance prospective memory. Hebb
distinguished between short-term and long-term memory. He postulated that any memory that stayed in short-term storage
for a long enough time would be consolidated into a long-term memory. Later research showed this to be false. Research
has shown that direct injections of cortisol or epinephrine help the storage of recent experiences. This is also
true for stimulation of the amygdala. This suggests that emotional arousal enhances memory via hormones
that affect the amygdala. Excessive or prolonged stress (with prolonged cortisol) may hurt memory storage. Patients
with amygdalar damage are no more likely to remember emotionally charged words than nonemotionally charged ones.
The hippocampus is important for explicit memory. The hippocampus is also important for memory consolidation. The
hippocampus receives input from different parts of the cortex and sends output to different parts of the brain. The input comes from secondary and tertiary sensory areas that have already processed the information extensively.
Hippocampal damage may also cause memory loss and problems with memory storage. This memory loss includes retrograde amnesia, the loss of memory for events that occurred shortly before the time of the brain damage. One question
that is crucial in cognitive neuroscience is how information and mental experiences are coded and represented in
the brain. Scientists have gained much knowledge about the neuronal codes from the studies of plasticity, but most
of such research has been focused on simple learning in simple neuronal circuits; considerably less is known about
the neuronal changes involved in more complex examples of memory, particularly declarative memory that requires the
storage of facts and events (Byrne 2007). Convergence-divergence zones might be the neural networks where memories
are stored and retrieved. Cognitive neuroscientists consider memory as the retention, reactivation, and reconstruction
of the experience-independent internal representation. The term internal representation implies that such a definition
of memory contains two components: the expression of memory at the behavioral or conscious level, and the underpinning
physical neural changes (Dudai 2007). The latter component is also called engram or memory traces (Semon 1904). Some
neuroscientists and psychologists mistakenly equate the concept of engram and memory, broadly conceiving all persisting
after-effects of experiences as memory; others argue against this notion, holding that memory does not exist until it is revealed
in behavior or thought (Moscovitch 2007). In contrast, procedural memory (or implicit memory) is not based on the
conscious recall of information, but on implicit learning. It can best be summarized as remembering how to do something.
Procedural memory is primarily employed in learning motor skills and should be considered a subset of implicit memory.
It is revealed when one does better in a given task due only to repetition: no new explicit memories have been formed,
but one is unconsciously accessing aspects of those previous experiences. Procedural memory involved in motor learning
depends on the cerebellum and basal ganglia. The working memory model explains many practical observations, such
as why it is easier to do two different tasks (one verbal and one visual) than two similar tasks (e.g., two visual),
and the aforementioned word-length effect. However, the concept of a central executive as noted here has been criticised
as inadequate and vague. Working memory is also the premise for what allows us to do everyday activities
involving thought. It is the section of memory where we carry out thought processes and use them to learn and reason
about topics. One of the key concerns of older adults is the experience of memory loss, especially as it is one of
the hallmark symptoms of Alzheimer's disease. However, memory loss is qualitatively different in normal aging from
the kind of memory loss associated with a diagnosis of Alzheimer's (Budson & Price, 2005). Research has revealed
that individuals’ performance on memory tasks that rely on frontal regions declines with age. Older adults tend to
exhibit deficits on tasks that involve knowing the temporal order in which they learned information; source memory
tasks that require them to remember the specific circumstances or context in which they learned information; and
prospective memory tasks that involve remembering to perform an act at a future time. Older adults can manage their
problems with prospective memory by using appointment books, for example. Although 6-month-olds
can recall information over the short-term, they have difficulty recalling the temporal order of information. It
is only by 9 months of age that infants can recall the actions of a two-step sequence in the correct temporal order
- that is, recalling step 1 and then step 2. In other words, when asked to imitate a two-step action sequence (such
as putting a toy car in the base and pushing in the plunger to make the toy roll to the other end), 9-month-olds
tend to imitate the actions of the sequence in the correct order (step 1 and then step 2). Younger infants (6-month-olds)
can only recall one step of a two-step sequence. Researchers have suggested that these age differences are probably
due to the fact that the dentate gyrus of the hippocampus and the frontal components of the neural network are not
fully developed at the age of 6 months. Declarative memory can be further sub-divided into semantic memory, concerning
principles and facts taken independent of context; and episodic memory, concerning information specific to a particular
context, such as a time and place. Semantic memory allows the encoding of abstract knowledge about the world, such
as "Paris is the capital of France". Episodic memory, on the other hand, is used for more personal memories, such
as the sensations, emotions, and personal associations of a particular place or time. Episodic memories often reflect
the "firsts" in life such as a first kiss, first day of school or first time winning a championship. These are key
events in one's life that can be remembered clearly. Autobiographical memory - memory for particular events within
one's own life - is generally viewed as either equivalent to, or a subset of, episodic memory. Visual memory is part
of memory preserving some characteristics of our senses pertaining to visual experience. One is able to place in
memory information that resembles objects, places, animals or people in the form of a mental image. Visual memory can
result in priming and it is assumed some kind of perceptual representational system underlies this phenomenon. Stress has a significant effect on memory formation and learning. In response to stressful situations, the
brain releases hormones and neurotransmitters (e.g., glucocorticoids and catecholamines) which affect memory encoding
processes in the hippocampus. Behavioural research on animals shows that chronic stress produces adrenal hormones
which impact the hippocampal structure in the brains of rats. An experimental study by German cognitive psychologists
L. Schwabe and O. Wolf demonstrates how learning under stress also decreases memory recall in humans. In this study,
48 healthy female and male university students participated in either a stress test or a control group. Those randomly
assigned to the stress test group had a hand immersed in ice cold water (the so-called SECPT or ‘Socially Evaluated
Cold Pressor Test’) for up to three minutes, while being monitored and videotaped. Both the stress and control groups
were then presented with 32 words to memorize. Twenty-four hours later, both groups were tested to see how many words
they could remember (free recall) as well as how many they could recognize from a larger list of words (recognition
performance). The results showed a clear impairment of memory performance in the stress test group, who recalled
30% fewer words than the control group. The researchers suggest that stress experienced during learning distracts
people by diverting their attention during the memory encoding process. Interference can hamper memorization and
retrieval. There is retroactive interference, in which learning new information makes it harder to recall old information, and proactive interference, in which prior learning disrupts recall of new information. Although interference can lead
to forgetting, it is important to keep in mind that there are situations when old information can facilitate learning
of new information. Knowing Latin, for instance, can help an individual learn a related language such as French –
this phenomenon is known as positive transfer. Up until the middle of the 1980s it was assumed that infants could
not encode, retain, and retrieve information. A growing body of research now indicates that infants as young as 6 months
can recall information after a 24-hour delay. Furthermore, research has revealed that as infants grow older they
can store information for longer periods of time; 6-month-olds can recall information after a 24-hour period, 9-month-olds
after up to five weeks, and 20-month-olds after as long as twelve months. In addition, studies have shown that with
age, infants can store information faster. Whereas 14-month-olds can recall a three-step sequence after being exposed
to it once, 6-month-olds need approximately six exposures in order to be able to remember it. Brain areas involved
in the neuroanatomy of memory such as the hippocampus, the amygdala, the striatum, or the mammillary bodies are thought
to be involved in specific types of memory. For example, the hippocampus is believed to be involved in spatial learning
and declarative learning, while the amygdala is thought to be involved in emotional memory. Damage to certain areas
in patients and animal models and subsequent memory deficits is a primary source of information. However, rather
than implicating a specific area, it could be that damage to adjacent areas, or to a pathway traveling through the
area is actually responsible for the observed deficit. Further, it is not sufficient to describe memory, and its
counterpart, learning, as solely dependent on specific brain regions. Learning and memory are attributed to changes
in neuronal synapses, thought to be mediated by long-term potentiation and long-term depression. The longer the exposure to stress, the more impact it may have. However, short-term exposure to stress also causes impairment
in memory by interfering with the function of the hippocampus. Research shows that subjects placed in a stressful
situation for a short amount of time still have blood glucocorticoid levels that have increased drastically when
measured after the exposure is completed. When subjects are asked to complete a learning task after short term exposure
they often have difficulties. Prenatal stress also hinders the ability to learn and memorize by disrupting the development
of the hippocampus, and can prevent long-term potentiation from becoming established in the offspring of severely stressed parents.
Although the stress is applied prenatally, the offspring show increased levels of glucocorticoids when they are subjected
to stress later on in life. Stressful life experiences may be a cause of memory loss as a person ages. Glucocorticoids
that are released during stress damage neurons that are located in the hippocampal region of the brain. Therefore,
the more stressful situations that someone encounters, the more susceptible they are to memory loss later on. The
CA1 neurons found in the hippocampus are destroyed due to glucocorticoids decreasing the release of glucose and the
reuptake of glutamate. This high level of extracellular glutamate allows calcium to enter NMDA receptors, which in turn kills neurons. Stressful life experiences can also cause repression of memories, where a person moves an unbearable
memory to the unconscious mind. This directly relates to traumatic events in one's past such as kidnappings, being
prisoners of war or sexual abuse as a child. Sleep affects neither acquisition nor recall while one is awake; its greatest effect is instead on memory consolidation. During sleep, the neural connections in the brain are strengthened.
This enhances the brain’s abilities to stabilize and retain memories. There have been several studies which show
that sleep improves the retention of memory, as memories are enhanced through active consolidation. System consolidation
takes place during slow-wave sleep (SWS). This process implies that memories are reactivated during sleep, but that the process does not enhance every memory. It also implies that qualitative changes are made to the memories
when they are transferred to long-term store during sleep. During sleep, the hippocampus replays the events of the day for the neocortex, which then reviews and processes the memories, moving them into long-term memory. Without enough sleep, these neural connections are not as strong, making it more difficult to learn and resulting in a lower retention rate of memories. Sleep deprivation makes it harder to focus, resulting
in inefficient learning. Furthermore, some studies have shown that sleep deprivation can lead to false memories as
the memories are not properly transferred to long-term memory. Therefore, it is important to get the proper amount
of sleep so that memory can function at the highest level. One of the primary functions of sleep is thought to be
the improvement of the consolidation of information, as several studies have demonstrated that memory depends on
getting sufficient sleep between training and test. Additionally, data obtained from neuroimaging studies have shown
activation patterns in the sleeping brain that mirror those recorded during the learning of tasks from the previous
day, suggesting that new memories may be solidified through such rehearsal. A UCLA research study published in the
June 2006 issue of the American Journal of Geriatric Psychiatry found that people can improve cognitive function
and brain efficiency through simple lifestyle changes such as incorporating memory exercises, healthy eating, physical
fitness and stress reduction into their daily lives. This study examined 17 subjects (average age 53) with normal
memory performance. Eight subjects were asked to follow a "brain healthy" diet, relaxation, physical, and mental
exercise (brain teasers and verbal memory training techniques). After 14 days, they showed greater word fluency (not
memory) compared to their baseline performance. No long-term follow-up was conducted, so it is unclear whether
this intervention has lasting effects on memory. Much of the current knowledge of memory has come from studying memory
disorders, particularly amnesia. Loss of memory is known as amnesia. Amnesia can result from extensive damage to:
(a) the regions of the medial temporal lobe, such as the hippocampus, dentate gyrus, subiculum, amygdala, the parahippocampal,
entorhinal, and perirhinal cortices or the (b) midline diencephalic region, specifically the dorsomedial nucleus
of the thalamus and the mammillary bodies of the hypothalamus. There are many sorts of amnesia, and by studying their
different forms, it has become possible to observe apparent defects in individual sub-systems of the brain's memory
systems, and thus hypothesize their function in the normally working brain. Other neurological disorders such as
Alzheimer's disease and Parkinson's disease can also affect memory and cognition. Hyperthymesia, or hyperthymesic
syndrome, is a disorder that affects an individual's autobiographical memory, essentially meaning that they cannot
forget small details that otherwise would not be stored. Korsakoff's syndrome, also known as Korsakoff's psychosis or amnesic-confabulatory syndrome, is an organic brain disease that adversely affects memory by widespread loss or shrinkage
of neurons within the prefrontal cortex. Physical exercise, particularly continuous aerobic exercises such as running,
cycling and swimming, has many cognitive benefits and effects on the brain. Influences on the brain include increases
in neurotransmitter levels, improved oxygen and nutrient delivery, and increased neurogenesis in the hippocampus.
The effects of exercise on memory have important implications for improving children's academic performance, maintaining
mental abilities in old age, and the prevention and potential cure of neurological diseases. However, memory performance
can be enhanced when material is linked to the learning context, even when learning occurs under stress. A separate
study by cognitive psychologists Schwabe and Wolf shows that when retention testing is done in a context similar
to or congruent with the original learning task (i.e., in the same room), memory impairment and the detrimental effects
of stress on learning can be attenuated. Seventy-two healthy female and male university students, randomly assigned
to the SECPT stress test or to a control group, were asked to remember the locations of 15 pairs of picture cards
– a computerized version of the card game "Concentration" or "Memory". The room in which the experiment took place
was infused with the scent of vanilla, as odour is a strong cue for memory. Retention testing took place the following
day, either in the same room with the vanilla scent again present, or in a different room without the fragrance.
The memory performance of subjects who experienced stress during the object-location task decreased significantly
when they were tested in an unfamiliar room without the vanilla scent (an incongruent context); however, the memory
performance of stressed subjects showed no impairment when they were tested in the original room with the vanilla
scent (a congruent context). All participants in the experiment, both stressed and unstressed, performed faster when
the learning and retrieval contexts were similar. Interestingly, research has revealed that asking individuals to
repeatedly imagine actions that they have never performed or events that they have never experienced could result
in false memories. For instance, Goff and Roediger (1998) asked participants to imagine that they performed an act
(e.g., break a toothpick) and then later asked them whether they had done such a thing. Findings revealed that those
participants who repeatedly imagined performing such an act were more likely to think that they had actually performed
that act during the first session of the experiment. Similarly, Garry and her colleagues (1996) asked college students
to report how certain they were that they experienced a number of events as children (e.g., broke a window with their
hand) and then two weeks later asked them to imagine four of those events. The researchers found that one-fourth
of the students asked to imagine the four events reported that they had actually experienced such events as children.
That is, when asked to imagine the events they were more confident that they experienced the events. Although people
often think that memory operates like recording equipment, this is not the case. The molecular mechanisms underlying
the induction and maintenance of memory are very dynamic and comprise distinct phases covering a time window from
seconds to even a lifetime. In fact, research has revealed that our memories are constructed. People can construct
their memories when they encode them and/or when they recall them. To illustrate, consider a classic study conducted
by Elizabeth Loftus and John Palmer (1974) in which people were instructed to watch a film of a traffic accident
and then asked about what they saw. The researchers found that the people who were asked, "How fast were the cars
going when they smashed into each other?" gave higher estimates than those who were asked, "How fast were the cars
going when they hit each other?" Furthermore, when asked a week later whether they had seen broken glass in the film, those who had been asked the question with smashed were twice as likely to report that they had seen broken glass as those who had been asked the question with hit. There was no broken glass depicted in the film. Thus,
the wording of the questions distorted viewers’ memories of the event. Importantly, the wording of the question led
people to construct different memories of the event – those who were asked the question with smashed recalled a more
serious car accident than they had actually seen. The findings of this experiment were replicated around the world,
and researchers consistently demonstrated that when people were provided with misleading information they tended
to misremember, a phenomenon known as the misinformation effect. Memorization is a method of learning that allows
an individual to recall information verbatim. Rote learning is the method most often used. Methods of memorizing
things have been the subject of much discussion over the years, with some writers, such as Cosmos Rossellius, using
visual alphabets. The spacing effect shows that an individual is more likely to remember a list of items when rehearsal
is spaced over an extended period of time. In contrast to this is cramming: an intensive memorization in a short
period of time. Also relevant is the Zeigarnik effect which states that people remember uncompleted or interrupted
tasks better than completed ones. The so-called Method of loci uses spatial memory to memorize non-spatial information.
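The spacing effect described above has a simple algorithmic interpretation: rehearsals spread over progressively longer gaps rather than massed into one session. As a minimal sketch (the function name, the one-day starting gap, and the doubling factor are illustrative assumptions, not a claim about any particular memorization system):

```python
from datetime import date, timedelta

def review_schedule(start, reviews, first_gap_days=1, factor=2):
    """Return the dates on which an item should be rehearsed.

    Each successive gap grows by `factor`, spacing rehearsal over an
    extended period instead of cramming it into a short one.
    """
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= factor
    return dates

# Starting 2024-01-01, four rehearsals fall on Jan 2, 4, 8, and 16.
schedule = review_schedule(date(2024, 1, 1), reviews=4)
```

Real spaced-repetition systems also adjust the gaps based on recall success, but the widening schedule is the core of the effect.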
In 1790, the first federal population census was taken in the United States. Enumerators were instructed to classify free
residents as white or "other." Only the heads of households were identified by name in the federal census until 1850.
Native Americans were included among "Other;" in later censuses, they were included as "Free people of color" if
they were not living on Indian reservations. Slaves were counted separately from free persons in all the censuses
until the Civil War and end of slavery. In later censuses, people of African descent were classified by appearance
as mulatto (which recognized visible European ancestry in addition to African) or black. By 1990, the Census Bureau
included more than a dozen ethnic/racial categories on the census, reflecting not only changing social ideas about
ethnicity, but the wide variety of immigrants who had come to reside in the United States due to changing historical
forces and new immigration laws in the 1960s. With a changing society, more citizens have begun to press for acknowledging
multiracial ancestry. The Census Bureau changed its data collection by allowing people to self-identify as more than
one ethnicity. Some ethnic groups are concerned about the potential political and economic effects, as federal assistance
to historically underserved groups has depended on Census data. According to the Census Bureau, as of 2002, over
75% of all African Americans had multiracial ancestries. Americans with Sub-Saharan African ancestry for historical
reasons: slavery, partus sequitur ventrem, one-eighth law, the one-drop rule of 20th-century legislation, have frequently
been classified as black (historically) or African American, even if they have significant European American or Native
American ancestry. As slavery became a racial caste, those who were enslaved and others of any African ancestry were
classified by what is termed "hypodescent" according to the lower status ethnic group. Many of majority European
ancestry and appearance "married white" and assimilated into white society for its social and economic advantages,
such as generations of families identified as Melungeons, now generally classified as white but demonstrated genetically
to be of European and sub-Saharan African ancestry. After a lengthy period of formal racial segregation in the former
Confederacy following the Reconstruction Era, and bans on interracial marriage in various parts of the country, more
people are openly forming interracial unions. In addition, social conditions have changed and many multiracial people
do not believe it is socially advantageous to try to "pass" as white. Diverse immigration has brought more mixed-race
people into the United States, such as the large population of Hispanics identifying as mestizos. Since the 1980s,
the United States has had a growing multiracial identity movement (cf. Loving Day). Because more Americans have insisted
on being allowed to acknowledge their mixed racial origins, the 2000 census for the first time allowed residents
to check more than one ethno-racial identity and thereby identify as multiracial. In 2008 Barack Obama was elected
as the first multiracial President of the United States; he acknowledges both sides of his family and identifies
as African American. Anti-miscegenation laws were passed in most states during the 18th, 19th and early 20th centuries,
but this did not prevent white slaveholders, their sons, or other powerful white men from taking slave women as concubines
and having multiracial children with them. In California and the western US, there were greater numbers of Latino
and Asian residents. They were prohibited from official relationships with whites. White legislators passed laws
prohibiting marriage between European and Asian Americans until the 1950s. After the American Revolutionary War,
the number and proportion of free people of color increased markedly in the North and the South as slaves were freed.
Most northern states abolished slavery, sometimes, like New York, in programs of gradual emancipation that took more
than two decades to be completed. The last slaves in New York were not freed until 1827. In connection with the Second
Great Awakening, Quaker and Methodist preachers in the South urged slaveholders to free their slaves. Revolutionary
ideals led many men to free their slaves, some by deed and others by will, so that from 1782 to 1810, the percentage
of free people of color rose from less than one percent to nearly 10 percent of blacks in the South. In their attempt
to ensure white supremacy decades after emancipation, in the early 20th century, most southern states created laws
based on the one-drop rule, defining as black any person with known African ancestry. This was a stricter interpretation
than what had prevailed in the 19th century; it ignored the many mixed families in the state and went against commonly
accepted social rules of judging a person by appearance and association. Some courts called it "the traceable amount
rule." Anthropologists called it an example of a hypodescent rule, meaning that racially mixed persons were assigned
the status of the socially subordinate group. Multiracial Americans are Americans who have mixed ancestry of "two
or more races". The term may also include Americans of mixed-race ancestry who self-identify with just one group
culturally and socially (cf. the one-drop rule). In the 2010 US census, approximately 9 million individuals, or 2.9%
of the population, self-identified as multiracial. There is evidence that an accounting by genetic ancestry would
produce a higher number, but people live according to social and cultural identities, not DNA. Historical reasons,
including slavery creating a racial caste and the European-American suppression of Native Americans, often led people
to identify or be classified by only one ethnicity, generally that of the culture in which they were raised. Prior
to the mid-20th century, many people hid their multiracial heritage because of racial discrimination against minorities.
While many Americans may be biologically multiracial, they often do not know it or do not identify so culturally,
any more than they maintain all the differing traditions of a variety of national ancestries. The American people
are mostly multi-ethnic descendants of various culturally distinct immigrant groups, many of which have now developed
nations. Some consider themselves multiracial, while acknowledging race as a social construct. Creolization, assimilation
and integration have been continuing processes. The African-American Civil Rights Movement (1955–1968) and other
social movements since the mid-twentieth century worked to achieve social justice and equal enforcement of civil
rights under the constitution for all ethnicities. In the 2000s, less than 5% of the population identified as multiracial.
In many instances, mixed racial ancestry is so far back in an individual's family history (for instance, before the
Civil War or earlier), that it does not affect more recent ethnic and cultural identification. Some Europeans living
among Indigenous Americans were called "white Indians". They "lived in native communities for years, learned native
languages fluently, attended native councils, and often fought alongside their native companions." More numerous
and typical were traders and trappers, who married Indigenous American women from tribes on the frontier and had
families with them. Some traders, who kept bases in the cities, had what were called "country wives" among Indigenous
Americans, with legal European-American wives and children at home in the city. Not all abandoned their "natural"
mixed-race children. Some arranged for sons to be sent to European-American schools for their education. In the colonial
years, while conditions were more fluid, white women, indentured servant or free, and African men, servant, slave
or free, made unions. Because the women were free, their mixed-race children were born free; they and their descendants
formed most of the families of free people of color during the colonial period in Virginia. The scholar Paul Heinegg
found that eighty percent of the free people of color in North Carolina in censuses from 1790–1810 could be traced
to families free in Virginia in colonial years. Sometimes people of mixed African-American and Native American descent
report having had elder family members withholding pertinent genealogical information. Tracing the genealogy of African
Americans can be a very difficult process, as censuses did not identify slaves by name before the American Civil
War, meaning that most African Americans did not appear by name in those records. In addition, many white fathers
who used slave women sexually, even those in long-term relationships like Thomas Jefferson's with Sally Hemings,
did not acknowledge their mixed-race slave children in records, so paternity was lost. During the 1800s Christian
missionaries from Great Britain and the United States followed traders to the Hawaiian islands. Over the long term, the
Anglo-Saxon presence negatively impacted the level of regard Hawaiian royal women held for their own indigenous looks.
For centuries prior to the arrival of Christians, first nation Hawaiian aesthetics, such as dark skin and ample bodies,
had been considered signs of nobility. No matter how much they adapted their mannerisms to Western standard, some
of the Anglo-Saxon missionaries were relentless in referring to the indigenous women as "Hawaiian squaws." By the
last half of the 19th century, some Hawaiian women began marrying European men who found them exotic. The men, however,
selected Hawaiian women who were thinner and paler in complexion. Racial discrimination continued to be enacted in
new laws in the 20th century, for instance the one-drop rule was enacted in Virginia's 1924 Racial Integrity Law
and in other southern states, in part influenced by the popularity of eugenics and ideas of racial purity. People
buried fading memories of the fact that many whites had multiracial ancestry, although many families were multiracial. Similar laws had
been proposed but not passed in the late nineteenth century in South Carolina and Virginia, for instance. After regaining
political power in Southern states by disenfranchising blacks, white Democrats passed laws to impose Jim Crow and
racial segregation to restore white supremacy. They maintained these until forced to change in the 1960s and after
by enforcement of federal legislation authorizing oversight of practices to protect the constitutional rights of
African Americans and other minority citizens. The phenomenon known as "passing as white" is difficult to explain
in other countries or to foreign students. Typical questions are: "Shouldn't Americans say that a person who is passing
as white is white, or nearly all white, and has previously been passing as black?" or "To be consistent, shouldn't
you say that someone who is one-eighth white is passing as black?" ... A person who is one-fourth or less American
Indian or Korean or Filipino is not regarded as passing if he or she intermarries with and joins fully the life of
the dominant community, so the minority ancestry need not be hidden. ... It is often suggested that the key reason
for this is that the physical differences between these other groups and whites are less pronounced than the physical
differences between African blacks and whites, and therefore are less threatening to whites. ... [W]hen ancestry
in one of these racial minority groups does not exceed one-fourth, a person is not defined solely as a member of
that group. Population testing is still being done. Some Native American groups that have been sampled may not have
shared the pattern of markers being searched for. Geneticists acknowledge that DNA testing cannot yet distinguish
among members of differing cultural Native American nations. There is genetic evidence for three major migrations
into North America, but not for more recent historic differentiation. In addition, not all Native Americans have
been tested, so scientists do not know for sure that Native Americans have only the genetic markers they have identified.
Some multiracial individuals feel marginalized by U.S. society. For example, when applying to schools or for a job,
or when taking standardized tests, Americans are sometimes asked to check boxes corresponding to race or ethnicity.
Typically, about five race choices are given, with the instruction to "check only one." While some surveys offer
an "other" box, this choice groups together individuals of many different multiracial types (e.g., European American/African-American individuals are grouped with Asian/Native American individuals). Prior to the one-drop rule, different states had different laws
regarding color. More importantly, social acceptance often played a bigger role in how a person was perceived and
how identity was construed than any law. In frontier areas, there were fewer questions about origins. The community
looked at how people performed, whether they served in the militia and voted, which were the responsibilities and
signs of free citizens. When questions about racial identity arose because of inheritance issues, for instance, litigation
outcomes often were based on how people were accepted by neighbors. Since the late twentieth century, the number
of ethnic African immigrants from Africa and the Caribbean has increased in the United States. Together with publicity about
the ancestry of President Barack Obama, whose father was from Kenya, some black writers have argued that new terms
are needed for recent immigrants. They suggest that the term "African-American" should refer strictly to the descendants
of African slaves and free people of color who survived the slavery era in the United States. They argue that grouping
together all ethnic Africans regardless of their unique ancestral circumstances would deny the lingering effects
of slavery within the American slave descendant community. They say recent ethnic African immigrants need to recognize
their own unique ancestral backgrounds. In the 1980s, parents of mixed-race children began to organize and lobby
for the addition of a more inclusive term of racial designation that would reflect the heritage of their children.
When the U.S. government proposed the addition of the category of "bi-racial" or "multiracial" in 1988, the response
from the public was mostly negative. Some African-American organizations, and African-American political leaders,
such as Congresswoman Diane Watson and Congressman Augustus Hawkins, were particularly vocal in their rejection of
the category, as they feared the loss of political and economic power if African Americans reduced their numbers
by self-identification. The social identity of the children was strongly determined by the tribe's kinship system.
Among the matrilineal tribes of the Southeast, the mixed-race children generally were accepted as and identified
as Indian, as they gained their social status from their mother's clans and tribes, and often grew up with their
mothers and their male relatives. By contrast, among the patrilineal Omaha, for example, the child of a white man
and Omaha woman was considered "white"; such mixed-race children and their mothers would be protected, but the children
could formally belong to the tribe as members only if adopted by a man. In the late 19th century, three European-American
middle-class female teachers married Indigenous American men they had met at Hampton Institute during the years when
it ran its Indian program. In the late nineteenth century, Charles Eastman, a physician of European and Sioux ancestry
who trained at Boston University, married Elaine Goodale, a European-American woman from New England. They met and
worked together in Dakota Territory when she was Superintendent of Indian Education and he was a doctor for the reservations.
His maternal grandfather was Seth Eastman, an artist and Army officer from New England, who had married a Sioux woman
and had a daughter with her while stationed at Fort Snelling in Minnesota. The writer Sherrel W. Stewart's assertion
that "most" African Americans have significant Native American heritage is not supported by genetic researchers
who have done extensive population mapping studies. The TV series on African-American ancestry, hosted by the scholar
Henry Louis Gates, Jr., had genetics scholars who discussed in detail the variety of ancestries among African Americans.
They noted there is popular belief in a high rate of Native American admixture that is not supported by the data
that has been collected. Interracial relationships have had a long history in North America
and the United States, beginning with the intermixing of European explorers and soldiers, who took native women as
companions. After European settlement increased, traders and fur trappers often married or had unions with women
of native tribes. In the 17th century, faced with a continuing, critical labor shortage, colonists primarily in the
Chesapeake Bay Colony, imported Africans as laborers, sometimes as indentured servants and, increasingly, as slaves.
African slaves were also imported into New York and other northern ports by the Dutch and later English. Some African
slaves were freed by their masters during these early years. Of numerous relationships between male slaveholders,
overseers, or master's sons and women slaves, the most notable is likely that of President Thomas Jefferson with
his slave Sally Hemings. As noted in the 2012 collaborative Smithsonian-Monticello exhibit, Slavery at Monticello:
The Paradox of Liberty, Jefferson, then a widower, took Hemings as his concubine for nearly 40 years. They had six
children of record; four Hemings children survived into adulthood, and he freed them all, among the very few slaves
he freed. Two were allowed to "escape" to the North in 1822, and two were granted freedom by his will upon his death
in 1826. Seven-eighths white by ancestry, all four of his Hemings children moved to northern states as adults; three
of the four entered the white community, and all their descendants identified as white. Of the descendants of Madison
Hemings, who continued to identify as black, some in future generations eventually identified as white and "married
out", while others continued to identify as African American. It was socially advantageous for the Hemings children
to identify as white, in keeping with their appearance and the majority proportion of their ancestry. Although born
into slavery, the Hemings children were legally white under Virginia law of the time. After the Civil War, racial
segregation forced African Americans to share more of a common lot in society than they might have otherwise, given their widely varying ancestry and educational and economic levels. The binary division altered the separate status of the traditionally free
people of color in Louisiana, for instance, although they maintained a strong Louisiana Créole culture related to
French culture and language, and practice of Catholicism. African Americans began to create common cause—regardless
of their multiracial admixture or social and economic stratification. In 20th-century changes, during the rise of
the Civil Rights and Black Power movements, the African-American community increased its own pressure for people
of any portion of African descent to be claimed by the black community to add to its power. Chinese men entered the
United States as laborers, primarily on the West Coast and in western territories. Following the Reconstruction era,
as blacks set up independent farms, white planters imported Chinese laborers to satisfy their need for labor. In
1882, the Chinese Exclusion Act was passed, and Chinese workers who chose to stay in the U.S. were unable to have
their wives join them. In the South, some Chinese married into the black and mulatto communities, as generally discrimination
meant they did not take white spouses. They rapidly left working as laborers, and set up groceries in small towns
throughout the South. They worked to get their children educated and socially mobile. Multiracial people who wanted
to acknowledge their full heritage won a victory of sorts in 1997, when the Office of Management and Budget (OMB)
changed the federal regulation of racial categories to permit multiple responses. This resulted in a change to the
2000 United States Census, which allowed participants to select more than one of the six available categories, which
were, in brief: "White," "Black or African American," "Asian," "American Indian or Alaska Native," "Native Hawaiian
or other Pacific Islander," and "Other." Further details are given in the article: Race (U.S. census). The OMB made
its directive mandatory for all government forms by 2003. Laws dating from 17th-century colonial America defined
children of African slave mothers as taking the status of their mothers, and born into slavery regardless of the
race or status of the father, under partus sequitur ventrem. The association of slavery with a "race" led to slavery
as a racial caste. But, most families of free people of color formed in Virginia before the American Revolution were
the descendants of unions between white women and African men, who frequently worked and lived together in the looser
conditions of the early colonial period. While interracial marriage was later prohibited, white men frequently took
sexual advantage of slave women, and numerous generations of multiracial children were born. By the late 1800s it
had become common among African Americans to use passing to gain educational opportunities as did the first African-American
graduate of Vassar College Anita Florence Hemmings. Some 19th-century categorization schemes defined people by proportion
of African ancestry: a person whose parents were black and white was classified as mulatto, with one black grandparent
and three white as quadroon, and with one black great-grandparent and the remainder white as octoroon. The latter
categories remained within an overall black or colored category, but before the Civil War, in Virginia and some other
states, a person of one-eighth or less black ancestry was legally white. Some members of these categories passed
temporarily or permanently as white. Reacting to media criticism of Michelle Obama during the 2008 presidential election,
Charles Kenzie Steele, Jr., CEO of the Southern Christian Leadership Conference said, "Why are they attacking Michelle
Obama, and not really attacking, to that degree, her husband? Because he has no slave blood in him." He later claimed
his comment was intended to be "provocative" but declined to expand on the subject. Former Secretary of State Condoleezza
Rice (who was famously mistaken for a "recent American immigrant" by French President Nicolas Sarkozy), said "descendants
of slaves did not get much of a head start, and I think you continue to see some of the effects of that." She has
also rejected an immigrant designation for African Americans and instead prefers the term "black" or "white". Some
early male settlers married Indigenous American women and had informal unions with them. Early contact between Indigenous
Americans and Europeans was often charged with tension, but also had moments of friendship, cooperation, and intimacy.
Marriages took place in both English and Latin colonies between European men and Native women. For instance, on April
5, 1614, Pocahontas, a Powhatan woman in present-day Virginia, married the Englishman John Rolfe of Jamestown. Their
son Thomas Rolfe was an ancestor to many descendants in First Families of Virginia. As a result, English laws did
not exclude people with some Indigenous American ancestry from being considered English or white. Colonial records
of French and Spanish slave ships and sales, and plantation records in all the former colonies, often have much more
information about slaves, from which researchers are reconstructing slave family histories. Genealogists have begun
to find plantation records, court records, land deeds and other sources to trace African-American families and individuals
before 1870. As slaves were generally forbidden to learn to read and write, black families passed along oral histories,
which have had great persistence. Similarly, Native Americans did not generally learn to read and write English,
although some did in the nineteenth century. Until 1930, census enumerators used the terms free people of color and
mulatto to classify people of apparent mixed race. When those terms were dropped, as a result of the lobbying by
the Southern Congressional bloc, the Census Bureau used only the binary classifications of black or white, as was
typical in segregated southern states. European colonists created treaties with Indigenous American tribes requesting
the return of any runaway slaves. For example, in 1726, the British governor of New York exacted a promise from the
Iroquois to return all runaway slaves who had joined them. This same promise was extracted from the Huron Nation
in 1764, and from the Delaware Nation in 1765, though there is no record of slaves ever being returned. Numerous
advertisements requested the return of African Americans who had married Indigenous Americans or who spoke an Indigenous
American language. The primary exposure that Africans and Indigenous Americans had to each other came through the
institution of slavery. Indigenous Americans learned that Africans had what Indigenous Americans considered 'Great
Medicine' in their bodies because Africans were virtually immune to the Old-World diseases that were decimating most
native populations. Because of this, many tribes encouraged marriage between the two groups, to create stronger,
healthier children from the unions. Interracial relationships, common-law marriages, and marriages occurred since
the earliest colonial years, especially before slavery hardened as a racial caste associated with people of African
descent in the British colonies. Virginia and other English colonies passed laws in the 17th century that gave children
the social status of their mother, according to the principle of partus sequitur ventrem, regardless of the father's
race or citizenship. This overturned the principle in English common law by which a man gave his status to his children
– this had enabled communities to demand that fathers support their children, whether legitimate or not. The change
increased white men's ability to use slave women sexually, as they had no responsibility for the children. As master
as well as father of mixed-race children born into slavery, the men could use these people as servants or laborers
or sell them as slaves. In some cases, white fathers provided for their multiracial children, paying or arranging
for education or apprenticeships and freeing them, particularly during the two decades following the American Revolution.
(The practice of providing for the children was more common in French and Spanish colonies, where a class of free
people of color developed who became educated and property owners.) Many other white fathers abandoned the mixed-race
children and their mothers to slavery. Many Latin American migrants have been mestizo, Amerindian, or other mixed
race. Multiracial Latinos have limited media appearance; critics have accused the U.S. Hispanic media of overlooking
the brown-skinned indigenous and multiracial Hispanic and black Hispanic populations by over-representation of blond
and blue/green-eyed white Hispanic and Latino Americans (who resemble Scandinavians and other Northern Europeans more than they resemble typical white Hispanic and Latino Americans, most of whom have Southern European features), and also of light-skinned mulatto and mestizo Hispanic and Latino Americans (often regarded as white within U.S. Hispanic and Latino populations if they achieve middle-class or higher social status), especially some of the actors on the
telenovelas. In Virginia prior to 1920, for example, a person was legally white if he or she had seven-eighths or more white ancestry. The one-drop rule originated in some Southern states of the United States in the late 19th century, likely in response
to whites' attempt to maintain white supremacy and limit black political power following the Democrats' regaining
control of state legislatures in the late 1870s. The first year in which the U.S. Census dropped the mulatto category
was 1920; that year enumerators were instructed to classify people in a binary way as white or black. This was a
result of the Southern-dominated Congress convincing the Census Bureau to change its rules. Stanley Crouch wrote
in a New York Daily News piece "Obama's mother is of white U.S. stock. His father is a black Kenyan," in a column
entitled "What Obama Isn't: Black Like Me." During the 2008 campaign, the African-American columnist David Ehrenstein
of the LA Times accused white liberals of flocking to Obama because he was a "Magic Negro", a term that refers to
a black person with no past who simply appears to assist the mainstream white agenda (whites as the cultural protagonists/drivers). Ehrenstein went on to say "He's there to assuage white 'guilt' they feel over the role of slavery and racial
segregation in American history." For the write-in response category, the 2000 U.S. Census had a code listing which
standardizes the placement of various write-in responses for automatic placement within the framework of the U.S.
Census's enumerated races. Whereas most responses can be distinguished as falling into one of the five enumerated
races, there remain some write-in responses which fall under the "Mixture" heading and cannot be racially categorized.
These include "Bi Racial, Combination, Everything, Many, Mixed, Multi National, Multiple, Several and Various". Interracial
relations between Indigenous Americans and African Americans are a part of American history that has been neglected.
The earliest record of African and Indigenous American relations in the Americas occurred in April 1502, when the
first Africans kidnapped were brought to Hispaniola to serve as slaves. Some escaped, and somewhere inland on Santo
Domingo, the first Black Indians were born. In addition, an example of African slaves' escaping from European colonists
and being absorbed by Indigenous Americans occurred as far back as 1526. In June of that year, Lucas Vasquez de Ayllon
established a Spanish colony near the mouth of the Pee Dee River in what is now eastern South Carolina. The Spanish
settlement was named San Miguel de Gualdape. Among the settlers were 100 enslaved Africans. That same year, the first
African slaves fled the colony and took refuge with local Indigenous Americans. Some biographical accounts include
the autobiography Life on the Color Line: The True Story of a White Boy Who Discovered He Was Black by Gregory Howard
Williams; One Drop: My Father's Hidden Life—A Story of Race and Family Secrets written by Bliss Broyard about her
father Anatole Broyard; the documentary Colored White Boy about a white man in North Carolina who discovers that
he is the descendant of a white plantation owner and a raped African slave; and the documentary on The Sanders Women
of Shreveport, Louisiana. By the 1980s, parents of mixed-race children (and adults of mixed-race ancestry) began
to organize and lobby for the ability to show more than one ethnic category on Census and other legal forms. They
refused to be put into just one category. When the U.S. government proposed the addition of the category of "bi-racial"
or "multiracial" in 1988, the response from the general public was mostly negative. Some African-American organizations
and political leaders, such as Senator Diane Watson and Representative Augustus Hawkins, were particularly vocal
in their rejection of the category. They feared a loss in political and economic power if African Americans abandoned
their one category. In the early 19th century, the Indigenous American woman Sacagawea, who would help translate
for and guide the Lewis and Clark Expedition in the West, married the French trapper Toussaint Charbonneau. Most
marriages between Europeans and Indigenous Americans were between European men and Indigenous American women. Depending
on the kinship system of the woman's tribe, their children would be more or less easily assimilated into the tribe.
Nations that had matrilineal systems, such as the Creek and Cherokee in the Southeast, gave the mixed-race children
status in their mother's clans and tribes. If the tribe had a patrilineal system, like the Omaha, the children of
white fathers were considered white. Unless they were specifically adopted into the tribe by an adult male, they
could have no social status in it. For African Americans, the one-drop rule was a significant factor in ethnic solidarity.
African Americans generally shared a common cause in society regardless of their multiracial admixture, or social/economic
stratification. Additionally, African Americans found it nearly impossible to learn about their Indigenous American
heritage as many family elders withheld pertinent genealogical information. Tracing the genealogy of African Americans
can be a very difficult process, especially for descendants of Indigenous Americans, because African Americans who
were slaves were forbidden to learn to read and write, and a majority of Indigenous Americans neither spoke English,
nor read or wrote it. The figure of the "tragic octoroon" was a stock character of abolitionist literature: a mixed-race
woman raised as if a white woman in her white father's household, until his bankruptcy or death reduces her to a menial position. She may even be unaware of her status before being reduced to victimization. The first character
of this type was the heroine of Lydia Maria Child's "The Quadroons" (1842), a short story. This character allowed
abolitionists to draw attention to the sexual exploitation in slavery and, unlike portrayals of the suffering of
the field hands, did not allow slaveholders to retort that the sufferings of Northern mill hands were no easier.
The Northern mill owner would not sell his own children into slavery.
It is estimated that in the 11th century Ashkenazi Jews composed only three percent of the world's Jewish population, while
at their peak in 1931 they accounted for 92 percent of the world's Jews. Immediately prior to the Holocaust, the
number of Jews in the world stood at approximately 16.7 million. Statistical figures vary for the contemporary demography
of Ashkenazi Jews, oscillating between 10 million and 11.2 million. Sergio DellaPergola, in a rough calculation of Sephardic and Mizrahi Jews, implies that Ashkenazim make up less than 74% of Jews worldwide. Other estimates place
Ashkenazi Jews as making up about 75% of Jews worldwide. In the Yoma tractate of the Babylonian Talmud the name Gomer
is rendered as Germania, which elsewhere in rabbinical literature was identified with Germanikia in northwestern
Syria, but later became associated with Germania. Ashkenaz is linked to Scandza/Scanzia, viewed as the cradle of
Germanic tribes, as early as a 6th-century gloss to the Historia Ecclesiastica of Eusebius. In the 10th-century History
of Armenia of Yovhannes Drasxanakertc'i (1.15) Ashkenaz was associated with Armenia, as it was occasionally in Jewish
usage, where its denotation extended at times to Adiabene, Khazaria, Crimea and areas to the east. His contemporary
Saadia Gaon identified Ashkenaz with the Saquliba or Slavic territories, and such usage covered also the lands of
tribes neighboring the Slavs, and Eastern and Central Europe. In modern times, Samuel Krauss identified the Biblical
"Ashkenaz" with Khazaria. No evidence has yet been found of a Jewish presence in antiquity in Germany beyond its
Roman border, nor in Eastern Europe. In Gaul and Germany itself, with the possible exception of Trier and Cologne,
the archeological evidence suggests at most a fleeting presence of very few Jews, primarily itinerant traders or
artisans. A substantial Jewish population emerged in northern Gaul by the Middle Ages, but Jewish communities existed
in 465 CE in Brittany, in 524 CE in Valence, and in 533 CE in Orleans. Throughout this period and into the early
Middle Ages, some Jews assimilated into the dominant Greek and Latin cultures, mostly through conversion to Christianity. King Dagobert I of the Franks expelled the Jews from his Merovingian kingdom in 629. Jews in former
Roman territories faced new challenges as harsher anti-Jewish Church rulings were enforced. People of Ashkenazi descent
constitute around 47.5% of Israeli Jews (and therefore 35–36% of Israelis). They have played a prominent role in
the economy, media, and politics of Israel since its founding. During the first decades of Israel as a state, strong
cultural conflict occurred between Sephardic and Ashkenazi Jews (mainly east European Ashkenazim). The roots of this
conflict, which still exists to a much smaller extent in present-day Israeli society, are chiefly attributed to the
concept of the "melting pot". That is to say, all Jewish immigrants who arrived in Israel were strongly encouraged
to "melt down" their own particular exilic identities within the general social "pot" in order to become Israeli.
Culturally, an Ashkenazi Jew can be identified by the concept of Yiddishkeit, which means "Jewishness" in the Yiddish
language. Yiddishkeit is specifically the Jewishness of Ashkenazi Jews. Before the Haskalah and the emancipation
of Jews in Europe, this meant the study of Torah and Talmud for men, and a family and communal life governed by the
observance of Jewish Law for men and women. From the Rhineland to Riga to Romania, most Jews prayed in liturgical
Ashkenazi Hebrew, and spoke Yiddish in their secular lives. But with modernization, Yiddishkeit now encompasses not
just Orthodoxy and Hasidism, but a broad range of movements, ideologies, practices, and traditions in which Ashkenazi
Jews have participated and somehow retained a sense of Jewishness. Although a far smaller number of Jews still speak
Yiddish, Yiddishkeit can be identified in manners of speech, in styles of humor, in patterns of association. Broadly
speaking, a Jew is one who associates culturally with Jews, supports Jewish institutions, reads Jewish books and
periodicals, attends Jewish movies and theater, travels to Israel, visits historical synagogues, and so forth. It
is a definition that applies to Jewish culture in general, and to Ashkenazi Yiddishkeit in particular. In an ethnic
sense, an Ashkenazi Jew is one whose ancestry can be traced to the Jews who settled in Central Europe. For roughly
a thousand years, the Ashkenazim were a reproductively isolated population in Europe, despite living in many countries,
with little inflow or outflow from migration, conversion, or intermarriage with other groups, including other Jews.
Human geneticists have argued that genetic variations have been identified that show high frequencies among Ashkenazi
Jews, but not in the general European population, both for patrilineal markers (Y-chromosome haplotypes) and for
matrilineal markers (mitotypes). However, a 2013 study of Ashkenazi mitochondrial DNA, from the University of Huddersfield
in England, suggests that at least 80 percent of the Ashkenazi maternal lineages derive from the assimilation of
mtDNAs indigenous to Europe, probably as a consequence of conversion. Since the middle of the 20th century, many
Ashkenazi Jews have intermarried, both with members of other Jewish communities and with people of other nations
and faiths. Relations between Ashkenazim and Sephardim have not always been warm. North African Sephardim and Berber Jews were often looked upon by Ashkenazim as second-class citizens during the first decade after the creation of Israel. This led to protest movements such as the Israeli Black Panthers, led by Saadia Marciano, a Moroccan Jew.
Nowadays, relations are getting better. In some instances, Ashkenazi communities have accepted significant numbers
of Sephardi newcomers, sometimes resulting in intermarriage. A 2010 study by Bray et al., using SNP microarray techniques
and linkage analysis, found that when Druze and Palestinian Arab populations are assumed to represent the reference world Jewry ancestor genome, between 35 and 55 percent of the modern Ashkenazi genome can possibly be of European
origin, and that European "admixture is considerably higher than previous estimates by studies that used the Y chromosome"
with this reference point. Assuming this reference point, the linkage disequilibrium in the Ashkenazi Jewish population
was interpreted as "matches signs of interbreeding or 'admixture' between Middle Eastern and European populations".
On the Bray et al. tree, Ashkenazi Jews were found to be a genetically more divergent population than Russians, Orcadians,
French, Basques, Italians, Sardinians and Tuscans. The study also observed that Ashkenazim are more diverse than
their Middle Eastern relatives, which was counterintuitive because Ashkenazim are supposed to be a subset, not a
superset, of their assumed geographical source population. Bray et al. therefore postulate that these results reflect
not the population antiquity but a history of mixing between genetically distinct populations in Europe. However,
it is possible that it was the relaxation of marriage prescriptions among the ancestors of Ashkenazim that drove their heterozygosity up, while the maintenance of the FBD (father's brother's daughter) rule among native Middle Easterners has been keeping their heterozygosity values
in check. Ashkenazim distinctiveness as found in the Bray et al. study, therefore, may come from their ethnic endogamy
(ethnic inbreeding), which allowed them to "mine" their ancestral gene pool in the context of relative reproductive
isolation from European neighbors, and not from clan endogamy (clan inbreeding). Consequently, their higher diversity
compared to Middle Easterners stems from the latter's marriage practices, not necessarily from the former's admixture
with Europeans. Genetic studies on Ashkenazim have been conducted to determine how much of their ancestry comes from
the Levant, and how much derives from European populations. These studies—researching both their paternal and maternal
lineages—point to a significant prevalence of ancient Levantine origins. But they have arrived at diverging conclusions
regarding both the degree and the sources of their European ancestry. These diverging conclusions focus particularly
on the extent of the European genetic origin observed in Ashkenazi maternal lineages. Sometime in the early medieval
period, the Jews of central and eastern Europe came to be called by this term. In conformity with the custom of designating
areas of Jewish settlement with biblical names, Spain was denominated Sefarad (Obadiah 20), France was called Tsarefat
(1 Kings 17:9), and Bohemia was called the Land of Canaan. By the high medieval period, Talmudic commentators like
Rashi began to use Ashkenaz/Eretz Ashkenaz to designate Germany, earlier known as Loter, where, especially in the
Rhineland communities of Speyer, Worms and Mainz, the most important Jewish communities arose. Rashi uses leshon
Ashkenaz (Ashkenazi language) to describe German speech, and Byzantine and Syrian Jewish letters referred to the
Crusaders as Ashkenazim. Given the close links between the Jewish communities of France and Germany following the
Carolingian unification, the term Ashkenazi came to refer to both the Jews of medieval Germany and France. Charlemagne's
expansion of the Frankish empire around 800, including northern Italy and Rome, brought on a brief period of stability
and unity in Francia. This created opportunities for Jewish merchants to settle again north of the Alps. Charlemagne
granted the Jews freedoms similar to those once enjoyed under the Roman Empire. In addition, Jews from southern Italy,
fleeing religious persecution, began to move into central Europe. Returning to Frankish lands, many
Jewish merchants took up occupations in finance and commerce, including money lending, or usury. (Church legislation
banned Christians from lending money in exchange for interest.) From Charlemagne's time to the present, Jewish life
in northern Europe is well documented. By the 11th century, when Rashi of Troyes wrote his commentaries, Jews in
what came to be known as "Ashkenaz" were known for their halakhic learning, and Talmudic studies. They were criticized
by Sephardim and other Jewish scholars in Islamic lands for their lack of expertise in Jewish jurisprudence (dinim)
and general ignorance of Hebrew linguistics and literature. Yiddish emerged as a result of language contact with
various High German vernaculars in the medieval period. It was written with Hebrew letters, and heavily influenced
by Hebrew and Aramaic. The answer to why there was so little assimilation of Jews in central and eastern Europe for
so long would seem to lie in part in the probability that the alien surroundings in central and eastern Europe were
not conducive to it, though contempt did not prevent some assimilation. Furthermore, Jews lived almost exclusively in shtetls,
maintained a strong system of education for males, heeded rabbinic leadership, and scorned the life-style of their
neighbors; and all of these tendencies increased with every outbreak of antisemitism. According to 16th-century mystic
Rabbi Elijah of Chelm, Ashkenazi Jews lived in Jerusalem during the 11th century. The story is told that a German-speaking
Palestinian Jew saved the life of a young German man surnamed Dolberger. So when the knights of the First Crusade
came to lay siege to Jerusalem, one of Dolberger's family members who was among them rescued Jews in Palestine and carried
them back to Worms to repay the favor. Further evidence of German communities in the holy city comes in the form
of halakhic questions sent from Germany to Jerusalem during the second half of the 11th century. Of the estimated
8.8 million Jews living in Europe at the beginning of World War II, the majority of whom were Ashkenazi, about 6
million – more than two-thirds – were systematically murdered in the Holocaust. These included 3 million of 3.3 million
Polish Jews (91%); 900,000 of 1.5 million in Ukraine (60%); and 50–90% of the Jews of other Slavic nations, Germany,
Hungary, and the Baltic states, and over 25% of the Jews in France. Sephardi communities suffered similar depletions
in a few countries, including Greece, the Netherlands and the former Yugoslavia. As the large majority of the victims
were Ashkenazi Jews, their percentage dropped from nearly 92% of world Jewry in 1931 to nearly 80% of world Jewry
today. The Holocaust also effectively put an end to the dynamic development of the Yiddish language in the previous
decades, as the vast majority of the Jewish victims of the Holocaust, around 5 million, were Yiddish speakers. Many
of the surviving Ashkenazi Jews emigrated to countries such as Israel, Canada, Argentina, Australia, and the United
States after the war. Religious Jews have Minhagim, customs, in addition to Halakha, or religious law, and different
interpretations of law. Different groups of religious Jews in different geographic areas historically adopted different
customs and interpretations. On certain issues, Orthodox Jews are required to follow the customs of their ancestors,
and do not believe they have the option of picking and choosing. For this reason, observant Jews at times find it
important for religious reasons to ascertain who their household's religious ancestors are in order to know what
customs their household should follow. These times include, for example, when two Jews of different ethnic background
marry, when a non-Jew converts to Judaism and determines what customs to follow for the first time, or when a lapsed
or less observant Jew returns to traditional Judaism and must determine what was done in his or her family's past.
In this sense, "Ashkenazic" refers both to a family ancestry and to a body of customs binding on Jews of that ancestry.
Reform Judaism, which does not necessarily follow those minhagim, did nonetheless originate among Ashkenazi Jews.
As Ashkenazi Jews moved away from Europe, mostly in the form of aliyah to Israel, or immigration to North America,
and other English-speaking areas; and Europe (particularly France) and Latin America, the geographic isolation that
gave rise to Ashkenazim has given way to mixing with other cultures, and with non-Ashkenazi Jews who, similarly,
are no longer isolated in distinct geographic locales. Hebrew has replaced Yiddish as the primary Jewish language
for many Ashkenazi Jews, although many Hasidic and Haredi groups continue to use Yiddish in daily life. (There are
numerous Ashkenazi Jewish anglophones and Russian-speakers as well, although English and Russian are not originally
Jewish languages.) A 2006 study found Ashkenazi Jews to be a clear, homogeneous genetic subgroup. Strikingly, regardless
of the place of origin, Ashkenazi Jews can be grouped in the same genetic cohort – that is, regardless of whether
an Ashkenazi Jew's ancestors came from Poland, Russia, Hungary, Lithuania, or any other place with a historical Jewish
population, they belong to the same ethnic group. The research demonstrates the endogamy of the Jewish population
in Europe and lends further credence to the idea of Ashkenazi Jews as an ethnic group. Moreover, though intermarriage
among Jews of Ashkenazi descent has become increasingly common, many Haredi Jews, particularly members of Hasidic
or Haredi sects, continue to marry exclusively fellow Ashkenazi Jews. This trend keeps Ashkenazi genes prevalent
and also helps researchers further study the genes of Ashkenazi Jews with relative ease. It is noteworthy that these
Haredi Jews often have extremely large families. Ashkenazi Jews have a noted history of achievement in Western societies
in the fields of exact and social sciences, literature, finance, politics, media, and others. In those societies
where they have been free to enter any profession, they have a record of high occupational achievement, entering
professions and fields of commerce where higher education is required. Ashkenazi Jews have won a large number of
the Nobel awards. While they make up about 2% of the U.S. population, 27% of United States Nobel prize winners in
the 20th century, a quarter of Fields Medal winners, 25% of ACM Turing Award winners, half the world's chess champions,
as well as 8% of the top 100 world chess players, and a quarter of Westinghouse Science Talent Search winners have
Ashkenazi Jewish ancestry. Although the Jewish people in general were present across a wide geographical area as
described, genetic research done by Gil Atzmon of the Longevity Genes Project at Albert Einstein College of Medicine
suggests "that Ashkenazim branched off from other Jews around the time of the destruction of the First Temple, 2,500
years ago ... flourished during the Roman Empire but then went through a 'severe bottleneck' as they dispersed, reducing
a population of several million to just 400 families who left Northern Italy around the year 1000 for Central and
eventually Eastern Europe." A 2001 study by Nebel et al. showed that both Ashkenazi and Sephardic Jewish populations
share the same overall paternal Near Eastern ancestries. In comparison with data available from other relevant populations
in the region, Jews were found to be more closely related to groups in the north of the Fertile Crescent. The authors
also report on Eu 19 (R1a) chromosomes, which are very frequent in Central and Eastern Europeans (54%–60%) and present at elevated frequency (12.7%) in Ashkenazi Jews. They hypothesized that the differences among Ashkenazi Jews could reflect low-level
gene flow from surrounding European populations and/or genetic drift during isolation. A later 2005 study by Nebel
et al., found a similar level of 11.5% of male Ashkenazim belonging to R1a1a (M17+), the dominant Y-chromosome haplogroup
in Central and Eastern Europeans. They established communities throughout Central and Eastern Europe, which had been
their primary region of concentration and residence until recent times, evolving their own distinctive characteristics
and diasporic identities. In the late Middle Ages the center of gravity of the Ashkenazi population, and its traditional
cultural life, shifted steadily eastward, out of the German lands into Poland and Lithuania (including present-day
Belarus and Ukraine). In the course of the late 18th and 19th centuries, those Jews who remained in or returned to
the German lands experienced a cultural reorientation; under the influence of the Haskalah and the struggle for emancipation,
as well the intellectual and cultural ferment in urban centers, they gradually abandoned the use of Yiddish, while
developing new forms of Jewish religious life and cultural identity. The name Ashkenazi derives from the biblical
figure of Ashkenaz, the first son of Gomer, son of Japheth, son of Noah, and a Japhetic patriarch in the Table of
Nations (Genesis 10). The name of Gomer has often been linked to the ethnonym Cimmerians. Biblical Ashkenaz is usually
derived from Assyrian Aškūza (cuneiform Aškuzai/Iškuzai), a people who expelled the Cimmerians from the Armenian
area of the Upper Euphrates, whose name is usually associated with the name of the Scythians. The intrusive n in
the Biblical name is likely due to a scribal error confusing a waw ו with a nun נ. The genome-wide genetic study
carried out in 2010 by Behar et al. examined the genetic relationships among all major Jewish groups, including Ashkenazim,
as well as the genetic relationship between these Jewish groups and non-Jewish ethnic populations. The study found
that contemporary Jews (excluding Indian and Ethiopian Jews) have a close genetic relationship with people from the
Levant. The authors explained that "the most parsimonious explanation for these observations is a common genetic
origin, which is consistent with an historical formulation of the Jewish people as descending from ancient Hebrew
and Israelite residents of the Levant". The history of Jews in Greece goes back to at least the Archaic Era of Greece,
when the classical culture of Greece was undergoing a process of formalization after the Greek Dark Age. The Greek
historian Herodotus knew of the Jews, whom he called "Palestinian Syrians", and listed them among the levied naval
forces in service of the invading Persians. While Jewish monotheism was not deeply affected by Greek Polytheism,
the Greek way of living was attractive for many wealthier Jews. The Synagogue in the Agora of Athens is dated to
the period between 267 and 396 CE. The Stobi Synagogue in Macedonia was built in the 4th century on the ruins of a more ancient synagogue; in the 5th century it was transformed into a Christian basilica. In an
essay on Sephardi Jewry, Daniel Elazar at the Jerusalem Center for Public Affairs summarized the demographic history
of Ashkenazi Jews in the last thousand years, noting that at the end of the 11th century, 97% of world Jewry was
Sephardic and 3% Ashkenazi. (By the end of the 16th century, the Treatise on the Redemption of Captives by Gracián de la Madre de Dios, a Mercedarian priest imprisoned by the Turks, cites a Tunisian Jew named Simon Escanasi, taken captive on arrival at Gaeta, who aided others with money.) In the mid-17th century, "Sephardim still outnumbered
Ashkenazim three to two", but by the end of the 18th century, "Ashkenazim outnumbered Sephardim three to two, the
result of improved living conditions in Christian Europe versus the Ottoman Muslim world." By 1931, Ashkenazi Jews
accounted for nearly 92% of world Jewry. These figures reflect sheer demography: the migration of Jews from Southern and Western Europe to Central and Eastern Europe. In Israel, the term Ashkenazi is now used in a manner
unrelated to its original meaning, often applied to all Jews who settled in Europe and sometimes including those
whose ethnic background is actually Sephardic. Jews of any non-Ashkenazi background, including Mizrahi, Yemenite,
Kurdish and others who have no connection with the Iberian Peninsula, have similarly come to be lumped together as
Sephardic. Jews of mixed background are increasingly common, partly because of intermarriage between Ashkenazi and
non-Ashkenazi, and partly because many do not see such historic markers as relevant to their life experiences as
Jews. In this respect, the counterpart of Ashkenazi is Sephardic, since most non-Ashkenazi Orthodox Jews follow Sephardic
rabbinical authorities, whether or not they are ethnically Sephardic. By tradition, a Sephardic or Mizrahi woman
who marries into an Orthodox or Haredi Ashkenazi Jewish family raises her children to be Ashkenazi Jews; conversely,
an Ashkenazi woman who marries a Sephardi or Mizrahi man is expected to take on Sephardic practice and the children
inherit a Sephardic identity, though in practice many families compromise. A convert generally follows the practice
of the beth din that converted him or her. With the integration of Jews from around the world in Israel, North America,
and other places, the religious definition of an Ashkenazi Jew is blurring, especially outside Orthodox Judaism.
But after emancipation, a sense of a unified French Jewry emerged, especially when France was wracked by the Dreyfus
affair in the 1890s. In the 1920s and 1930s, Ashkenazi Jews from Europe arrived in large numbers as refugees from
antisemitism, the Russian revolution, and the economic turmoil of the Great Depression. By the 1930s, Paris had a
vibrant Yiddish culture, and many Jews were involved in diverse political movements. After the Vichy years and the
Holocaust, the French Jewish population was augmented once again, first by Ashkenazi refugees from Central Europe,
and later by Sephardi immigrants and refugees from North Africa, many of them francophone. The term Ashkenazi also
refers to the nusach Ashkenaz (Hebrew, "liturgical tradition", or rite) used by Ashkenazi Jews in their Siddur (prayer
book). A nusach is defined by a liturgical tradition's choice of prayers, order of prayers, text of prayers and melodies
used in the singing of prayers. Two other major forms of nusach among Ashkenazic Jews are Nusach Sefard (not to be
confused with the Sephardic ritual), which is the general Polish Hasidic nusach, and Nusach Ari, as used by Lubavitch
Hasidim. Efforts to identify the origins of Ashkenazi Jews through DNA analysis began in the 1990s. Currently, there
are three types of genetic origin testing, autosomal DNA (atDNA), mitochondrial DNA (mtDNA), and Y-chromosomal DNA
(Y-DNA). Autosomal DNA is a mixture from an individual's entire ancestry; Y-DNA shows a male's lineage only along his strict paternal line; and mtDNA shows any person's lineage only along the strict maternal line. Genome-wide association
studies have also been employed to yield findings relevant to genetic origins. In 2006, a study by Behar et al.,
based on what was at that time high-resolution analysis of haplogroup K (mtDNA), suggested that about 40% of the
current Ashkenazi population is descended matrilineally from just four women, or "founder lineages", that were "likely
from a Hebrew/Levantine mtDNA pool" originating in the Middle East in the 1st and 2nd centuries CE. Additionally,
Behar et al. suggested that the rest of Ashkenazi mtDNA originated from about 150 women, most of whom were
also likely of Middle Eastern origin. In reference specifically to Haplogroup K, they suggested that although it
is common throughout western Eurasia, "the observed global pattern of distribution renders very unlikely the possibility
that the four aforementioned founder lineages entered the Ashkenazi mtDNA pool via gene flow from a European host
population". Speculation that the Ashkenazi arose from Khazar stock surfaced in the later 19th century and has met
with mixed fortunes in the scholarly literature. In late 2012 Eran Elhaik, a research associate studying genetics
at the Johns Hopkins University School of Public Health, argued for Khazar descent in his paper The Missing Link
of Jewish European Ancestry: Contrasting the Rhineland and the Khazarian Hypotheses. A 2013 study of Ashkenazi mitochondrial
DNA found no significant evidence of Khazar contribution to the Ashkenazi Jewish DNA, as would be predicted by the
Khazar hypothesis. Sporadic epigraphic evidence in grave site excavations, particularly in Brigetio (Szőny), Aquincum
(Óbuda), Intercisa (Dunaújváros), Triccinae (Sárvár), Savaria (Szombathely), Sopianae (Pécs), and Osijek in Croatia,
attests to the presence of Jews after the 2nd and 3rd centuries where Roman garrisons were established. There was
a sufficient number of Jews in Pannonia to form communities and build a synagogue. Jewish troops were among the Syrian
soldiers transferred there, and replenished from the Middle East, after 175 CE. Jews and especially Syrians came
from Antioch, Tarsus and Cappadocia. Others came from Italy and the Hellenized parts of the Roman empire. The excavations
suggest they first lived in isolated enclaves attached to Roman legion camps, and intermarried among other similar
oriental families within the military orders of the region. Raphael Patai states that later Roman writers remarked
that they differed little in either customs, manner of writing, or names from the people among whom they dwelt; and
it was especially difficult to differentiate Jews from the Syrians. After Pannonia was ceded to the Huns in 433,
the garrison populations were withdrawn to Italy, and only a few enigmatic traces remain of a possible Jewish presence
in the area some centuries later. With the onset of the Crusades in 1095, and the expulsions from England (1290),
France (1394), and parts of Germany (15th century), Jewish migration pushed eastward into Poland (10th century),
Lithuania (10th century), and Russia (12th century). Over this period of several hundred years, some have suggested,
Jewish economic activity was focused on trade, business management, and financial services, due to several presumed
factors: Christian European prohibitions restricting certain activities by Jews; prohibitions on certain financial activities (such as "usurious" loans) between Christians; high rates of literacy; near-universal male education; and the ability of merchants to rely upon and trust family members living in different regions and countries. In the Midrash compilation,
Genesis Rabbah, Rabbi Berechiah mentions Ashkenaz, Riphath, and Togarmah as German tribes or as German lands. It
may correspond to a Greek word that may have existed in the Greek dialect of the Palestinian Jews, or the text is
corrupted from "Germanica." This view of Berechiah is based on the Talmud (Yoma 10a; Jerusalem Talmud Megillah 71b),
where Gomer, the father of Ashkenaz, is translated by Germamia, which evidently stands for Germany, and which was
suggested by the similarity of the sound. In the generations after emigration from the west, Jewish communities in
places like Poland, Russia, and Belarus enjoyed a comparatively stable socio-political environment. A thriving publishing
industry and the printing of hundreds of biblical commentaries precipitated the development of the Hasidic movement
as well as major Jewish academic centers. After two centuries of comparative tolerance in the new nations, massive
westward emigration occurred in the 19th and 20th centuries in response to pogroms in the east and the economic opportunities
offered in other parts of the world. Ashkenazi Jews have made up the majority of the American Jewish community since
1750. Religious Ashkenazi Jews living in Israel are obliged to follow the authority of the chief Ashkenazi rabbi
in halakhic matters. In this respect, a religiously Ashkenazi Jew is an Israeli who is more likely to support certain
religious interests in Israel, including certain political parties. These political parties result from the fact
that a portion of the Israeli electorate votes for Jewish religious parties; although the electoral map changes from
one election to another, there are generally several small parties associated with the interests of religious Ashkenazi
Jews. The role of religious parties, including small religious parties that play important roles as coalition members,
results in turn from Israel's composition as a complex society in which competing social, economic, and religious
interests stand for election to the Knesset, a unicameral legislature with 120 seats. New developments in Judaism
often transcend differences in religious practice between Ashkenazi and Sephardic Jews. In North American cities,
social trends such as the chavurah movement, and the emergence of "post-denominational Judaism" often bring together
younger Jews of diverse ethnic backgrounds. In recent years, there has been increased interest in Kabbalah, which
many Ashkenazi Jews study outside of the Yeshiva framework. Another trend is the new popularity of ecstatic worship
in the Jewish Renewal movement and the Carlebach style minyan, both of which are nominally of Ashkenazi origin. Several
famous people have Ashkenazi as a surname, such as Vladimir Ashkenazy. However, most people with this surname hail
from within Sephardic communities, particularly from the Syrian Jewish community. Sephardic carriers of the surname typically have some Ashkenazi ancestors, since the surname was adopted by families of Ashkenazic origin who moved to Sephardi countries and joined those communities. It began as a nickname imposed by their adopted communities and was later formally adopted as the family surname. Some have shortened the name to Ash. A study
of haplotypes of the Y-chromosome, published in 2000, addressed the paternal origins of Ashkenazi Jews. Hammer et
al. found that the Y-chromosome of Ashkenazi and Sephardic Jews contained mutations that are also common among Middle
Eastern peoples, but uncommon in the general European population. This suggested that the male ancestors of the Ashkenazi
Jews could be traced mostly to the Middle East. The proportion of male genetic admixture in Ashkenazi Jews amounts
to less than 0.5% per generation over an estimated 80 generations, with "relatively minor contribution of European
Y chromosomes to the Ashkenazim," and a total admixture estimate "very similar to Motulsky's average estimate of
12.5%." This supported the finding that "Diaspora Jews from Europe, Northwest Africa, and the Near East resemble
each other more closely than they resemble their non-Jewish neighbors." "Past research found that 50–80 percent of DNA from the Ashkenazi Y chromosome, which is used to trace the male lineage, originated in the Near East," according to Martin B. Richards of the University of Huddersfield in England. In 2013, however, a study of Ashkenazi mitochondrial DNA by a team led by Richards reached different conclusions, again corroborating the pre-2006 origin hypothesis. Testing
was performed on the full 16,600 DNA units composing mitochondrial DNA (the 2006 Behar study had only tested 1,000
units) in all their subjects, and the study found that the four main female Ashkenazi founders had descent lines
that were established in Europe 10,000 to 20,000 years in the past while most of the remaining minor founders also
have a deep European ancestry. The study states that the great majority of Ashkenazi maternal lineages were not brought
from the Near East (i.e., they were non-Israelite), nor were they recruited in the Caucasus (i.e., they were non-Khazar),
but instead they were assimilated within Europe, primarily of Italian and Old French origins. Richards summarized
the findings on the female line as such: "[N]one [of the mtDNA] came from the North Caucasus, located along the border
between Europe and Asia between the Black and Caspian seas. All of our presently available studies including my own,
should thoroughly debunk one of the most questionable, but still tenacious, hypotheses: that most Ashkenazi Jews
can trace their roots to the mysterious Khazar Kingdom that flourished during the ninth century in the region between
the Byzantine Empire and the Persian Empire." The 2013 study estimated that 80 percent of Ashkenazi maternal ancestry
comes from women indigenous to Europe, and only 8 percent from the Near East, while the origin of the remainder is
undetermined. According to the study these findings "point to a significant role for the conversion of women in the
formation of Ashkenazi communities." A 2010 study on Jewish ancestry by Atzmon-Ostrer et al. stated "Two major groups
were identified by principal component, phylogenetic, and identity by descent (IBD) analysis: Middle Eastern Jews
and European/Syrian Jews. The IBD segment sharing and the proximity of European Jews to each other and to southern
European populations suggested similar origins for European Jewry and refuted large-scale genetic contributions of
Central and Eastern European and Slavic populations to the formation of Ashkenazi Jewry", as both groups – the Middle
Eastern Jews and European/Syrian Jews – shared common ancestors in the Middle East about 2500 years ago. The study
examines genetic markers spread across the entire genome and shows that the Jewish groups (Ashkenazi and non-Ashkenazi)
share large swaths of DNA, indicating close relationships and that each of the Jewish groups in the study (Iranian,
Iraqi, Syrian, Italian, Turkish, Greek and Ashkenazi) has its own genetic signature but is more closely related to
the other Jewish groups than to their fellow non-Jewish countrymen. Atzmon's team found that the SNP markers in genetic
segments of 3 million DNA letters or longer were 10 times more likely to be identical among Jews than non-Jews. Results
of the analysis also tally with biblical accounts of the fate of the Jews. The study also found that with respect
to non-Jewish European groups, the population most closely related to Ashkenazi Jews is modern-day Italians. The study speculated that the genetic similarity between Ashkenazi Jews and Italians may be due to intermarriage and
conversions in the time of the Roman Empire. It was also found that any two Ashkenazi Jewish participants in the
study shared about as much DNA as fourth or fifth cousins. A 2013 trans-genome study carried out by 30 geneticists from 13 universities and academies in 9 countries, assembling the largest data set available to date for the assessment of Ashkenazi Jewish genetic origins, found no evidence of Khazar origin among Ashkenazi Jews. "Thus, analysis of Ashkenazi
Jews together with a large sample from the region of the Khazar Khaganate corroborates the earlier results that Ashkenazi
Jews derive their ancestry primarily from populations of the Middle East and Europe, that they possess considerable
shared ancestry with other Jewish populations, and that there is no indication of a significant genetic contribution
either from within or from north of the Caucasus region", the authors concluded. The origins of the Ashkenazim are
obscure, and many theories have arisen speculating about their ultimate provenance. The best-supported theory details a Jewish migration through what is now Italy and other parts of southern Europe. The historical
record attests to Jewish communities in southern Europe since pre-Christian times. Many Jews were denied full Roman
citizenship until 212 CE, when Emperor Caracalla granted all free peoples this privilege. Jews were required to pay
a poll tax until the reign of Emperor Julian in 363. In the late Roman Empire, Jews were free to form networks of
cultural and religious ties and enter into various local occupations. But, after Christianity became the official
religion of Rome and Constantinople in 380, Jews were increasingly marginalized. Historical records show evidence
of Jewish communities north of the Alps and Pyrenees as early as the 8th and 9th centuries. By the 11th century Jewish
settlers, moving from southern European and Middle Eastern centers, appear to have begun to settle in the north,
especially along the Rhine, often in response to new economic opportunities and at the invitation of local Christian
rulers. Thus Baldwin V, Count of Flanders, invited Jacob ben Yekutiel and his fellow Jews to settle in his lands;
and soon after the Norman Conquest of England, William the Conqueror likewise extended a welcome to continental Jews
to take up residence there. Bishop Rüdiger Huzmann called on the Jews of Mainz to relocate to Speyer. In all of these
decisions, the idea that Jews had the know-how and capacity to jump-start the economy, improve revenues, and enlarge
trade seems to have played a prominent role. Typically Jews relocated close to the markets and churches in town centres,
where, though they came under the authority of both royal and ecclesiastical powers, they were accorded administrative
autonomy. In the first half of the 11th century, Hai Gaon refers to questions that had been addressed to him from
Ashkenaz, by which he undoubtedly means Germany. Rashi in the latter half of the 11th century refers to both the
language of Ashkenaz and the country of Ashkenaz. During the 12th century, the word appears quite frequently. In
the Mahzor Vitry, the kingdom of Ashkenaz is referred to chiefly in regard to the ritual of the synagogue there,
but occasionally also with regard to certain other observances. France's blended Jewish community is typical of the
cultural recombination that is going on among Jews throughout the world. Although France expelled its original Jewish
population in the Middle Ages, by the time of the French Revolution, there were two distinct Jewish populations.
One consisted of Sephardic Jews, originally refugees from the Inquisition and concentrated in the southwest, while
the other community was Ashkenazi, concentrated in formerly German Alsace, and speaking mainly Yiddish. The two communities
were so separate and different that the National Assembly emancipated them separately in 1790 and 1791. Various studies
have arrived at diverging conclusions regarding both the degree and the sources of the non-Levantine admixture in
Ashkenazim, particularly in respect to the extent of the non-Levantine genetic origin observed in Ashkenazi maternal
lineages, which is in contrast to the predominant Levantine genetic origin observed in Ashkenazi paternal lineages.
All studies nevertheless agree that genetic overlap with the Fertile Crescent exists in both lineages, albeit at
differing rates. Collectively, Ashkenazi Jews are less genetically diverse than other Jewish ethnic divisions. Before
2006, geneticists had largely attributed the ethnogenesis of most of the world's Jewish populations, including Ashkenazi
Jews, to Israelite Jewish male migrants from the Middle East and "the women from each local population whom they
took as wives and converted to Judaism." Thus, in 2002, in line with this model of origin, David Goldstein, now of
Duke University, reported that unlike male Ashkenazi lineages, the female lineages in Ashkenazi Jewish communities
"did not seem to be Middle Eastern", and that each community had its own genetic pattern and even that "in some cases
the mitochondrial DNA was closely related to that of the host community." In his view this suggested "that Jewish
men had arrived from the Middle East, taken wives from the host population and converted them to Judaism, after which
there was no further intermarriage with non-Jews." A 2006 study by Seldin et al. used over five thousand autosomal
SNPs to demonstrate European genetic substructure. The results showed "a consistent and reproducible distinction
between 'northern' and 'southern' European population groups". Most northern, central, and eastern Europeans (Finns,
Swedes, English, Irish, Germans, and Ukrainians) showed >90% in the "northern" population group, while most individual
participants with southern European ancestry (Italians, Greeks, Portuguese, Spaniards) showed >85% in the "southern"
group. Both Ashkenazi and Sephardic Jews showed >85% membership in the "southern" group. Referring to
the Jews clustering with southern Europeans, the authors state the results were "consistent with a later Mediterranean
origin of these ethnic groups".
By the 1890s the profound effect of adrenal extracts on many different tissue types had been discovered, setting off a search
both for the mechanism of chemical signalling and efforts to exploit these observations for the development of new
drugs. The blood pressure raising and vasoconstrictive effects of adrenal extracts were of particular interest to
surgeons as hemostatic agents and as treatment for shock, and a number of companies developed products based on adrenal
extracts containing varying purities of the active substance. In 1897 John Abel of Johns Hopkins University identified
the active principle as epinephrine, which he isolated in an impure state as the sulfate salt. Industrial chemist
Jokichi Takamine later developed a method for obtaining epinephrine in a pure state, and licensed the technology
to Parke Davis. Parke Davis marketed epinephrine under the trade name Adrenalin. Injected epinephrine proved to be
especially efficacious for the acute treatment of asthma attacks, and an inhaled version was sold in the United States
until 2011 (Primatene Mist). By 1929 epinephrine had been formulated into an inhaler for use in the treatment of
nasal congestion. While highly effective, the requirement for injection limited the use of epinephrine, and orally active derivatives were sought. A structurally similar compound, ephedrine, was identified by
Japanese chemists in the Ma Huang plant and marketed by Eli Lilly as an oral treatment for asthma. Following the
work of Henry Dale and George Barger at Burroughs-Wellcome, academic chemist Gordon Alles synthesized amphetamine
and tested it in asthma patients in 1929. The drug proved to have only modest anti-asthma effects, but produced sensations
of exhilaration and palpitations. Amphetamine was developed by Smith, Kline and French as a nasal decongestant under
the trade name Benzedrine Inhaler. Amphetamine was eventually developed for the treatment of narcolepsy, post-encephalitic parkinsonism, and mood elevation in depression and other psychiatric indications. It received approval as a New and
Nonofficial Remedy from the American Medical Association for these uses in 1937 and remained in common use for depression
until the development of tricyclic antidepressants in the 1960s. A series of experiments performed from the late
1800s to the early 1900s revealed that diabetes is caused by the absence of a substance normally produced by the
pancreas. In 1889, Oskar Minkowski and Joseph von Mering found that diabetes could be induced in dogs by surgical
removal of the pancreas. In 1921, Canadian professor Frederick Banting and his student Charles Best repeated this
study, and found that injections of pancreatic extract reversed the symptoms produced by pancreas removal. Soon,
the extract was demonstrated to work in people, but development of insulin therapy as a routine medical procedure
was delayed by difficulties in producing the material in sufficient quantity and with reproducible purity. The researchers
sought assistance from industrial collaborators at Eli Lilly and Co. based on the company's experience with large
scale purification of biological materials. Chemist George Walden of Eli Lilly and Company found that careful adjustment
of the pH of the extract allowed a relatively pure grade of insulin to be produced. Under pressure from the University of Toronto
and a potential patent challenge by academic scientists who had independently developed a similar purification method,
an agreement was reached for non-exclusive production of insulin by multiple companies. Prior to the discovery and
widespread availability of insulin therapy the life expectancy of diabetics was only a few months. In 1903 Hermann
Emil Fischer and Joseph von Mering disclosed their discovery that diethylbarbituric acid, formed from the reaction
of diethylmalonic acid, phosphorus oxychloride and urea, induces sleep in dogs. The discovery was patented and licensed
to Bayer pharmaceuticals, which marketed the compound under the trade name Veronal as a sleep aid beginning in 1904.
Systematic investigations of the effect of structural changes on potency and duration of action led to the discovery
of phenobarbital at Bayer in 1911 and the discovery of its potent anti-epileptic activity in 1912. Phenobarbital
was among the most widely used drugs for the treatment of epilepsy through the 1970s, and as of 2014, remains on
the World Health Organization's List of Essential Medicines. The 1950s and 1960s saw increased awareness of the
addictive properties and abuse potential of barbiturates and amphetamines and led to increasing restrictions on their
use and growing government oversight of prescribers. Today, amphetamine is largely restricted to use in the treatment
of attention deficit disorder and phenobarbital in the treatment of epilepsy. In 1911 arsphenamine, the first synthetic
anti-infective drug, was developed by Paul Ehrlich and chemist Alfred Bertheim of the Institute of Experimental Therapy
in Berlin. The drug was given the commercial name Salvarsan. Ehrlich, noting both the general toxicity of arsenic
and the selective absorption of certain dyes by bacteria, hypothesized that an arsenic-containing dye with similar
selective absorption properties could be used to treat bacterial infections. Arsphenamine was prepared as part of
a campaign to synthesize a series of such compounds, and found to exhibit partially selective toxicity. Arsphenamine
proved to be the first effective treatment for syphilis, a disease which prior to that time was incurable and led
inexorably to severe skin ulceration, neurological damage, and death. The modern pharmaceutical
industry traces its roots to two sources. The first of these were local apothecaries that expanded from their traditional
role distributing botanical drugs such as morphine and quinine to wholesale manufacture in the mid 1800s. Rational
drug discovery from plants started particularly with the isolation of morphine, an analgesic and sleep-inducing agent,
from opium, by the German apothecary assistant Friedrich Sertürner, who named the compound after the Greek god of
dreams, Morpheus. Multinational corporations including Merck, Hoffmann-La Roche, Burroughs-Wellcome (now part of GlaxoSmithKline), Abbott Laboratories, Eli Lilly, and Upjohn (now part of Pfizer) began as local apothecary shops in the
mid-1800s. By the late 1880s, German dye manufacturers had perfected the purification of individual organic compounds
from coal tar and other mineral sources and had also established rudimentary methods in organic chemical synthesis.
The development of synthetic chemical methods allowed scientists to systematically vary the structure of chemical
substances, and growth in the emerging science of pharmacology expanded their ability to evaluate the biological
effects of these structural changes. Ehrlich’s approach of systematically varying the chemical structure of synthetic
compounds and measuring the effects of these changes on biological activity was pursued broadly by industrial scientists,
including Bayer scientists Josef Klarer, Fritz Mietzsch, and Gerhard Domagk. This work, also based in the testing
of compounds available from the German dye industry, led to the development of Prontosil, the first representative
of the sulfonamide class of antibiotics. Compared to arsphenamine, the sulfonamides had a broader spectrum of activity
and were far less toxic, rendering them useful for infections caused by pathogens such as streptococci. In 1939,
Domagk received the Nobel Prize in Medicine for this discovery. Nonetheless, the dramatic decrease in deaths from
infectious diseases that occurred prior to World War II was primarily the result of improved public health measures
such as clean water and less crowded housing, and the impact of anti-infective drugs and vaccines was significant
mainly after World War II. Early progress toward the development of vaccines occurred throughout this period, primarily
in the form of academic and government-funded basic research directed toward the identification of the pathogens
responsible for common communicable diseases. In 1885 Louis Pasteur and Pierre Paul Émile Roux created the first
rabies vaccine. The first diphtheria vaccines were produced in 1914 from a mixture of diphtheria toxin and antitoxin
(produced from the serum of an inoculated animal), but the safety of the inoculation was marginal and it was not
widely used. The United States recorded 206,000 cases of diphtheria in 1921 resulting in 15,520 deaths. In 1923 parallel
efforts by Gaston Ramon at the Pasteur Institute and Alexander Glenny at the Wellcome Research Laboratories (later
part of GlaxoSmithKline) led to the discovery that a safer vaccine could be produced by treating diphtheria toxin
with formaldehyde. In 1944, Maurice Hilleman of Squibb Pharmaceuticals developed the first vaccine against Japanese
encephalitis. Hilleman would later move to Merck where he would play a key role in the development of vaccines against
measles, mumps, chickenpox, rubella, hepatitis A, hepatitis B, and meningitis. In 1937 over 100 people died after
ingesting "Elixir Sulfanilamide" manufactured by S.E. Massengill Company of Tennessee. The product was formulated
in diethylene glycol, a highly toxic solvent that is now widely used as antifreeze. Under the laws extant at that
time, prosecution of the manufacturer was possible only under the technicality that the product had been called an
"elixir", which literally implied a solution in ethanol. In response to this episode, the U.S. Congress passed the
Federal Food, Drug, and Cosmetic Act of 1938, which for the first time required pre-market demonstration of safety
before a drug could be sold, and explicitly prohibited false therapeutic claims. The aftermath of World War II saw
an explosion in the discovery of new classes of antibacterial drugs including the cephalosporins (developed by Eli
Lilly based on the seminal work of Giuseppe Brotzu and Edward Abraham), streptomycin, the tetracyclines (discovered at Lederle Laboratories, now a part
of Pfizer), erythromycin (discovered at Eli Lilly and Co.) and their extension to an increasingly wide range of bacterial
pathogens. Streptomycin, discovered during a Merck-funded research program in Selman Waksman's laboratory at Rutgers
in 1943, became the first effective treatment for tuberculosis. At the time of its discovery, sanitoriums for the
isolation of tuberculosis-infected people were an ubiquitous feature of cities in developed countries, with 50% dying
within 5 years of admission. During the years 1940-1955, the rate of decline in the U.S. death rate accelerated from
2% per year to 8% per year, then returned to the historical rate of 2% per year. The dramatic decline in the immediate
post-war years has been attributed to the rapid development of new treatments and vaccines for infectious disease
that occurred during these years. Vaccine development continued to accelerate, with the most notable achievement
of the period being Jonas Salk's 1954 development of the polio vaccine under the funding of the non-profit National
Foundation for Infantile Paralysis. The vaccine process was never patented, but was instead given to pharmaceutical
companies to manufacture as a low-cost generic. In 1960 Maurice Hilleman of Merck Sharp & Dohme identified the SV40
virus, which was later shown to cause tumors in many mammalian species. It was later determined that SV40 was present
as a contaminant in polio vaccine lots that had been administered to 90% of the children in the United States. The
contamination appears to have originated both in the original cell stock and in monkey tissue used for production.
In 2004 the United States Cancer Institute announced that it had concluded that SV40 is not associated with cancer
in people. On 2 July 2012, GlaxoSmithKline pleaded guilty to criminal charges and agreed to a $3 billion settlement
of the largest health-care fraud case in the U.S. and the largest payment by a drug company. The settlement is related
to the company's illegal promotion of prescription drugs, its failure to report safety data, bribing doctors, and
promoting medicines for uses for which they were not licensed. The drugs involved were Paxil, Wellbutrin, Advair,
Lamictal, and Zofran for off-label, non-covered uses. Those and the drugs Imitrex, Lotronex, Flovent, and Valtrex
were involved in the kickback scheme. In the US, starting in 2013, under the Physician Financial Transparency Reports
(part of the Sunshine Act), the Centers for Medicare & Medicaid Services has to collect information from applicable
manufacturers and group purchasing organizations in order to report information about their financial relationships
with physicians and hospitals. Data are made public on the Centers for Medicare & Medicaid Services website. The expectation is that relationships between doctors and the pharmaceutical industry will become fully transparent. A Federal
Trade Commission report issued in 1958 attempted to quantify the effect of antibiotic development on American public
health. The report found that over the period 1946-1955, there was a 42% drop in the incidence of diseases for which
antibiotics were effective and only a 20% drop in those for which antibiotics were not effective. The report concluded
that "it appears that the use of antibiotics, early diagnosis, and other factors have limited the epidemic spread
and thus the number of these diseases which have occurred". The study further examined mortality rates for eight
common diseases for which antibiotics offered effective therapy (syphilis, tuberculosis, dysentery, scarlet fever,
whooping cough, meningococcal infections, and pneumonia), and found a 56% decline over the same period. Notable among
these was a 75% decline in deaths due to tuberculosis. In March 2001, 40 multi-national pharmaceutical companies
brought litigation against South Africa for its Medicines Act, which allowed the generic production of antiretroviral
drugs (ARVs) for treating HIV, despite the fact that these drugs were on-patent. HIV was and is an epidemic in South
Africa, and ARVs at the time cost between 10,000 and 15,000 USD per patient per year. This was unaffordable for most
South African citizens, and so the South African government committed to providing ARVs at prices closer to what
people could afford. To do so, they would need to ignore the patents on drugs and produce generics within the country
(using a compulsory license), or import them from abroad. After international protest in favour of public health
rights (including the collection of 250,000 signatures by MSF), the governments of several developed countries (including
The Netherlands, Germany, France, and later the US) backed the South African government, and the case was dropped
in April of that year. Prior to the 20th century drugs were generally produced by small scale manufacturers with
little regulatory control over manufacturing or claims of safety and efficacy. To the extent that such laws did exist,
enforcement was lax. In the United States, increased regulation of vaccines and other biological drugs was spurred
by tetanus outbreaks and deaths caused by the distribution of contaminated smallpox vaccine and diphtheria antitoxin.
The Biologics Control Act of 1902 required that the federal government grant premarket approval for every biological
drug and for the process and facility producing such drugs. This was followed in 1906 by the Pure Food and Drugs
Act, which forbade the interstate distribution of adulterated or misbranded foods and drugs. A drug was considered
misbranded if it contained alcohol, morphine, opium, cocaine, or any of several other potentially dangerous or addictive
drugs, and if its label failed to indicate the quantity or proportion of such drugs. The government's attempts to
use the law to prosecute manufacturers for making unsupported claims of efficacy were undercut by a Supreme Court
ruling restricting the federal government's enforcement powers to cases of incorrect specification of the drug's
ingredients. Patents have been criticized in the developing world, as they are thought to reduce access to existing
medicines. Reconciling patents and universal access to medicine would require an efficient international policy of
price discrimination. Moreover, under the TRIPS agreement of the World Trade Organization, countries must allow pharmaceutical
products to be patented. In 2001, the WTO adopted the Doha Declaration, which indicates that the TRIPS agreement
should be read with the goals of public health in mind, and allows some methods for circumventing pharmaceutical
monopolies: via compulsory licensing or parallel imports, even before patent expiration. Pharmaceutical fraud involves
deceptions which bring financial gain to a pharmaceutical company. It affects individuals and public and private
insurers. There are several different schemes used to defraud the health care system which are particular to the
pharmaceutical industry. These include: Good Manufacturing Practice (GMP) Violations, Off Label Marketing, Best Price
Fraud, CME Fraud, Medicaid Price Reporting, and Manufactured Compound Drugs. In FY 2010 alone, $2.5 billion was recovered through False Claims Act cases. Examples of fraud cases include the GlaxoSmithKline $3 billion settlement,
Pfizer $2.3 billion settlement and Merck & Co. $650 million settlement. Damages from fraud can be recovered by use
of the False Claims Act, most commonly under the qui tam provisions, which reward an individual for acting as a "whistleblower", or relator. A 2009 Cochrane review concluded that thiazide antihypertensive drugs reduce the risk of death
(RR 0.89), stroke (RR 0.63), coronary heart disease (RR 0.84), and cardiovascular events (RR 0.70) in people with
high blood pressure. In the ensuing years other classes of antihypertensive drug were developed and found wide acceptance in combination therapy, including loop diuretics (Lasix/furosemide, Hoechst Pharmaceuticals, 1963), beta blockers (ICI Pharmaceuticals, 1964), ACE inhibitors, and angiotensin receptor blockers. ACE inhibitors reduce the risk of
new onset kidney disease [RR 0.71] and death [RR 0.84] in diabetic patients, irrespective of whether they have hypertension.
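A relative risk (RR) such as those quoted above is simply the incidence of an outcome in the treated group divided by its incidence in the control group. A minimal sketch of the calculation; the event counts below are invented for illustration and are not the Cochrane data:

```python
def relative_risk(events_treated, n_treated, events_control, n_control):
    """Relative risk = incidence in treated group / incidence in control group."""
    risk_treated = events_treated / n_treated
    risk_control = events_control / n_control
    return risk_treated / risk_control

# Hypothetical numbers: 63 strokes per 1,000 treated patients versus
# 100 per 1,000 untreated corresponds to RR 0.63, i.e. a 37% relative
# risk reduction, matching the figure reported for stroke.
rr = relative_risk(63, 1000, 100, 1000)
print(round(rr, 2))                                # 0.63
print(f"{(1 - rr):.0%} relative risk reduction")   # 37% relative risk reduction
```

An RR below 1 favours the treatment; an RR of 1 means no difference between the groups.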
Others have argued that excessive regulation suppresses therapeutic innovation, and that the current cost of regulator-required
clinical trials prevents the full exploitation of new genetic and biological knowledge for the treatment of human
disease. A 2012 report by the President's Council of Advisors on Science and Technology made several key recommendations
to reduce regulatory burdens to new drug development, including 1) expanding the FDA's use of accelerated approval
processes, 2) creating an expedited approval pathway for drugs intended for use in narrowly defined populations,
and 3) undertaking pilot projects designed to evaluate the feasibility of a new, adaptive drug approval process.
In 1952 researchers at Ciba discovered the first orally available vasodilator, hydralazine. A major shortcoming of
hydralazine monotherapy was that it lost its effectiveness over time (tachyphylaxis). In the mid-1950s Karl H. Beyer,
James M. Sprague, John E. Baer, and Frederick C. Novello of Merck and Co. discovered and developed chlorothiazide,
which remains the most widely used antihypertensive drug today. This development was associated with a substantial
decline in the mortality rate among people with hypertension. The inventors were recognized by a Public Health Lasker
Award in 1975 for "the saving of untold thousands of lives and the alleviation of the suffering of millions of victims
of hypertension". In the U.S., a push for revisions of the FD&C Act emerged from Congressional hearings led by Senator
Estes Kefauver of Tennessee in 1959. The hearings covered a wide range of policy issues, including advertising abuses,
questionable efficacy of drugs, and the need for greater regulation of the industry. While momentum for new legislation
temporarily flagged under extended debate, a new tragedy emerged that underscored the need for more comprehensive
regulation and provided the driving force for the passage of new laws. Other notable new vaccines of the period include
those for measles (1962, John Franklin Enders of Children's Medical Center Boston, later refined by Maurice Hilleman
at Merck), rubella (1969, Hilleman, Merck), and mumps (1967, Hilleman, Merck). The United States incidence of rubella, congenital rubella syndrome, measles, and mumps all fell by >95% in the immediate aftermath of widespread vaccination.
The first 20 years of licensed measles vaccination in the U.S. prevented an estimated 52 million cases of the disease,
17,400 cases of mental retardation, and 5,200 deaths. The thalidomide tragedy resurrected Kefauver's bill to enhance
drug regulation that had stalled in Congress, and the Kefauver-Harris Amendment became law on 10 October 1962. Manufacturers
henceforth had to prove to FDA that their drugs were effective as well as safe before they could go on the US market.
The FDA received authority to regulate advertising of prescription drugs and to establish good manufacturing practices.
The law required that all drugs introduced between 1938 and 1962 be shown to be effective. An FDA–National Academy of Sciences collaborative study showed that nearly 40 percent of these products were not effective. A similarly comprehensive
study of over-the-counter products began ten years later. Thalidomide's U.S. sponsor, Richardson-Merrell, continued to pressure FDA reviewer Frances Kelsey and the agency to approve the application—until November 1961, when the drug was pulled off the German market because of its association
with grave congenital abnormalities. Several thousand newborns in Europe and elsewhere suffered the teratogenic effects
of thalidomide. Though the drug was never approved in the USA, the firm distributed Kevadon to over 1,000 physicians
there under the guise of investigational use. Over 20,000 Americans received thalidomide in this "study," including 624 pregnant patients, and about 17 newborns are known to have suffered the effects of the drug. Prior to the Second World War, birth control was prohibited in many countries, and in the United States even the discussion of
contraceptive methods sometimes led to prosecution under Comstock laws. The history of the development of oral contraceptives
is thus closely tied to the birth control movement and the efforts of activists Margaret Sanger, Mary Dennett, and
Emma Goldman. Based on fundamental research performed by Gregory Pincus and synthetic methods for progesterone developed
by Carl Djerassi at Syntex and by Frank Colton at G.D. Searle & Co., the first oral contraceptive, Enovid, was developed
by G.D. Searle and Co. and approved by the FDA in 1960. The original formulation incorporated vastly excessive doses
of hormones, and caused severe side effects. Nonetheless, by 1962, 1.2 million American women were on the pill, and
by 1965 the number had increased to 6.5 million. The availability of a convenient form of temporary contraceptive
led to dramatic changes in social mores including expanding the range of lifestyle options available to women, reducing
the reliance of women on men for contraceptive practice, encouraging the delay of marriage, and increasing pre-marital
co-habitation. In April 1994, the results of a Merck-sponsored study, the Scandinavian Simvastatin Survival Study,
were announced. Researchers tested simvastatin, later sold by Merck as Zocor, on 4,444 patients with high cholesterol
and heart disease. After five years, the study concluded the patients saw a 35% reduction in their cholesterol, and
their chances of dying of a heart attack were reduced by 42%. In 1995, Zocor and Mevacor both made Merck over US$1
billion. Endo was awarded the 2006 Japan Prize, and the Lasker-DeBakey Clinical Medical Research Award in 2008, for his "pioneering research into a new class of molecules" for lowering cholesterol. Drug discovery
is the process by which potential drugs are discovered or designed. In the past most drugs have been discovered either
by isolating the active ingredient from traditional remedies or by serendipitous discovery. Modern biotechnology
often focuses on understanding the metabolic pathways related to a disease state or pathogen, and manipulating these
pathways using molecular biology or biochemistry. A great deal of early-stage drug discovery has traditionally been
carried out by universities and research institutions. Drug discovery and development is very expensive; of all compounds
investigated for use in humans only a small fraction are eventually approved in most nations by government appointed
medical institutions or boards, who have to approve new drugs before they can be marketed in those countries. In
2010 the FDA approved 18 New Molecular Entities (NMEs) and three biologics, 21 in total, down from 26 in 2009 and 24 in 2008. On the other hand, there were only 18 approvals in total in 2007 and 22 in 2006.
Since 2001, the Center for Drug Evaluation and Research has averaged 22.9 approvals a year. This approval comes only
after heavy investment in pre-clinical development and clinical trials, as well as a commitment to ongoing safety
monitoring. Drugs which fail part-way through this process often incur large costs, while generating no revenue in
return. If the cost of these failed drugs is taken into account, the cost of developing a successful new drug (new chemical entity, or NCE) has been estimated at about US$1.3 billion (not including marketing expenses). Professors
Light and Lexchin reported in 2012, however, that the rate of approval for new drugs has been a relatively stable
average rate of 15 to 25 for decades. Some of these estimates also take into account the opportunity cost of investing
capital many years before revenues are realized (see Time-value of money). Because of the very long time needed for
discovery, development, and approval of pharmaceuticals, these costs can accumulate to nearly half the total expense.
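The effect of that opportunity cost can be sketched by compounding each year's R&D outlay forward to the launch year at the firm's cost of capital. Every figure below (a flat $50M per year over 12 years, an 11% cost of capital) is an assumption chosen purely for illustration, not actual industry data:

```python
def capitalized_cost(annual_spend, years, cost_of_capital):
    """Compound each year's outlay forward to the launch year."""
    total = 0.0
    for year in range(years):
        years_to_launch = years - year  # money spent earliest compounds longest
        total += annual_spend * (1 + cost_of_capital) ** years_to_launch
    return total

out_of_pocket = 50e6 * 12                        # $600M cash outlay
capitalized = capitalized_cost(50e6, 12, 0.11)   # roughly double the cash figure
print(f"cash outlay: ${out_of_pocket/1e6:.0f}M")
print(f"capitalized: ${capitalized/1e6:.0f}M")
```

With these assumed inputs the capitalized total comes to roughly $1.26 billion, so the forgone returns on capital account for about half of it, consistent with the "nearly half the total expense" figure in the text.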
A direct consequence within the pharmaceutical industry value chain is that major pharmaceutical multinationals tend
to increasingly outsource risks related to fundamental research, which somewhat reshapes the industry ecosystem with
biotechnology companies playing an increasingly important role, and overall strategies being redefined accordingly.
Some approved drugs, such as those based on re-formulation of an existing active ingredient (also referred to as
Line-extensions) are much less expensive to develop. In 1971, Akira Endo, a Japanese biochemist working for the pharmaceutical
company Sankyo, identified mevastatin (ML-236B), a molecule produced by the fungus Penicillium citrinum, as an inhibitor
of HMG-CoA reductase, a critical enzyme used by the body to produce cholesterol. Animal trials showed very good inhibitory
effects, as did clinical trials; however, a long-term study in dogs found toxic effects at higher doses, and as a result mevastatin was believed to be too toxic for human use. It was never marketed, because of its adverse effects
of tumors, muscle deterioration, and sometimes death in laboratory dogs. Every major company selling the antipsychotics
— Bristol-Myers Squibb, Eli Lilly, Pfizer, AstraZeneca and Johnson & Johnson — has either settled recent government
cases, under the False Claims Act, for hundreds of millions of dollars or is currently under investigation for possible
health care fraud. Following charges of illegal marketing, two of the settlements set records last year for the largest
criminal fines ever imposed on corporations. One involved Eli Lilly's antipsychotic Zyprexa, and the other involved
Bextra. In the Bextra case, the government also charged Pfizer with illegally marketing another antipsychotic, Geodon;
Pfizer settled that part of the claim for $301 million, without admitting any wrongdoing. In contrast to this viewpoint,
an article and associated editorial in the New England Journal of Medicine in May 2015 emphasized the importance
of pharmaceutical industry-physician interactions for the development of novel treatments, and argued that moral
outrage over industry malfeasance had unjustifiably led many to overemphasize the problems created by financial conflicts
of interest. The article noted that major healthcare organizations such as National Center for Advancing Translational
Sciences of the National Institutes of Health, the President’s Council of Advisors on Science and Technology, the
World Economic Forum, the Gates Foundation, the Wellcome Trust, and the Food and Drug Administration had encouraged
greater interactions between physicians and industry in order to bring greater benefits to patients. An investigation
by ProPublica found that at least 21 doctors have been paid more than $500,000 for speeches and consulting by drug manufacturers since 2009, with half of the top earners working in psychiatry, and about $2 billion in total paid
to doctors for such services. AstraZeneca, Johnson & Johnson and Eli Lilly have paid billions of dollars in federal
settlements over allegations that they paid doctors to promote drugs for unapproved uses. Some prominent medical
schools have since tightened rules on faculty acceptance of such payments by drug companies. Often, large multinational
corporations exhibit vertical integration, participating in a broad range of drug discovery and development, manufacturing
and quality control, marketing, sales, and distribution. Smaller organizations, on the other hand, often focus on
a specific aspect such as discovering drug candidates or developing formulations. Often, collaborative agreements
between research organizations and large pharmaceutical companies are formed to explore the potential of new drug
substances. More recently, multi-nationals are increasingly relying on contract research organizations to manage
drug development. In the UK, the Medicines and Healthcare Products Regulatory Agency approves drugs for use, though
the evaluation is done by the European Medicines Agency, an agency of the European Union based in London. Normally
an approval in the UK and other European countries comes later than one in the USA. It then falls to the National Institute for Health and Care Excellence (NICE), for England and Wales, to decide whether and how the National Health Service (NHS) will pay for their use. The British National Formulary is the core guide for pharmacists
and clinicians. There are special rules for certain rare diseases ("orphan diseases") in several major drug regulatory
territories. For example, diseases involving fewer than 200,000 patients in the United States, or larger populations in certain circumstances, are subject to the Orphan Drug Act. Because medical research and development of drugs to
treat such diseases is financially disadvantageous, companies that do so are rewarded with tax reductions, fee waivers,
and market exclusivity on that drug for a limited time (seven years), regardless of whether the drug is protected
by patents. Ben Goldacre has argued that regulators – such as the Medicines and Healthcare products Regulatory Agency
(MHRA) in the UK, or the Food and Drug Administration (FDA) in the United States – advance the interests of the drug
companies rather than the interests of the public, due to the revolving-door exchange of employees between the regulator and the companies and the friendships that develop between regulator and company employees. He argues that regulators do not
require that new drugs offer an improvement over what is already available, or even that they be particularly effective.
In many non-US western countries a 'fourth hurdle' of cost effectiveness analysis has developed before new technologies
can be provided. This focuses on the efficiency (in terms of the cost per QALY) of the technologies in question rather
than their efficacy. In England and Wales NICE decides whether and in what circumstances drugs and technologies will
be made available by the NHS, whilst similar arrangements exist with the Scottish Medicines Consortium in Scotland,
and the Pharmaceutical Benefits Advisory Committee in Australia. A product must pass the threshold for cost-effectiveness
if it is to be approved. Treatments must represent 'value for money' and a net benefit to society. The top ten best-selling
drugs of 2013 totaled $75.6 billion in sales, with the anti-inflammatory drug Humira being the best-selling drug
worldwide at $10.7 billion in sales. The second and third best selling were Enbrel and Remicade, respectively. The
top three best-selling drugs in the United States in 2013 were Abilify ($6.3 billion), Nexium ($6 billion) and Humira
($5.4 billion). The best-selling drug ever, Lipitor, averaged $13 billion annually and netted $141 billion total
over its lifetime before Pfizer's patent expired in November 2011. Depending on a number of considerations, a company
may apply for and be granted a patent for the drug, or the process of producing the drug, granting exclusivity rights
typically for about 20 years. However, only after rigorous study and testing, which takes 10 to 15 years on average,
will governmental authorities grant permission for the company to market and sell the drug. Patent protection enables
the owner of the patent to recover the costs of research and development through high profit margins for the branded
drug. When the patent protection for the drug expires, a generic drug is usually developed and sold by a competing
company. The development and approval of generics is less expensive, allowing them to be sold at a lower price. Often
the owner of the branded drug will introduce a generic version before the patent expires in order to get a head start
in the generic market. Restructuring has therefore become routine, driven by the patent expiration of products launched
during the industry's "golden era" in the 1990s and companies' failure to develop sufficient new blockbuster products
to replace lost revenues. In the United States, new pharmaceutical products must be approved by the Food and Drug
Administration (FDA) as being both safe and effective. This process generally involves submission of an Investigational
New Drug filing with sufficient pre-clinical data to support proceeding with human trials. Following IND approval,
three phases of progressively larger human clinical trials may be conducted. Phase I generally studies toxicity using
healthy volunteers. Phase II can include pharmacokinetics and dosing in patients, and Phase III is a very large study
of efficacy in the intended patient population. Following the successful completion of phase III testing, a New Drug
Application is submitted to the FDA. The FDA reviews the data, and if the product is seen as having a positive benefit-risk
assessment, approval to market the product in the US is granted. Advertising is common in healthcare journals as
well as through more mainstream media routes. In some countries, notably the US, they are allowed to advertise directly
to the general public. Pharmaceutical companies generally employ sales people (often called 'drug reps' or, an older
term, 'detail men') to market directly and personally to physicians and other healthcare providers. In some countries,
notably the US, pharmaceutical companies also employ lobbyists to influence politicians. Marketing of prescription
drugs in the US is regulated by the federal Prescription Drug Marketing Act of 1987. There has been increasing controversy
surrounding pharmaceutical marketing and influence. There have been accusations and findings of influence on doctors
and other health professionals through drug reps, including the constant provision of marketing 'gifts' and biased
information to health professionals; highly prevalent advertising in journals and conferences; funding independent
healthcare organizations and health promotion campaigns; lobbying physicians and politicians (more than any other
industry in the US); sponsorship of medical schools or nurse training; sponsorship of continuing educational events,
with influence on the curriculum; and hiring physicians as paid consultants on medical advisory boards.
The rivalries between the Arab tribes had caused unrest in the provinces outside Syria, most notably in the Second Muslim
Civil War of 680–692 CE and the Berber Revolt of 740–743 CE. During the Second Civil War, leadership of the Umayyad
clan shifted from the Sufyanid branch of the family to the Marwanid branch. As the constant campaigning exhausted
the resources and manpower of the state, the Umayyads, weakened by the Third Muslim Civil War of 744–747 CE, were
finally toppled by the Abbasid Revolution in 750 CE/132 AH. A branch of the family fled across North Africa to Al-Andalus,
where they established the Caliphate of Córdoba, which lasted until 1031 before falling due to the Fitna of al-Andalus.
Ali was assassinated in 661 by a Kharijite partisan. Six months later in the same year, in the interest of peace,
Hasan ibn Ali, highly regarded for his wisdom and as a peacemaker, and the Second Imam for the Shias, and the grandson
of Muhammad, made a peace treaty with Muawiyah I. In the Hasan-Muawiya treaty, Hasan ibn Ali handed over power to
Muawiya on the condition that he be just to the people and keep them safe and secure, and after his death he not
establish a dynasty. This brought to an end the era of the Rightly Guided Caliphs for the Sunnis, and Hasan ibn Ali
was also the last Imam for the Shias to be a Caliph. Following this, Mu'awiyah broke the conditions of the agreement
and began the Umayyad dynasty, with its capital in Damascus. At the time, the Umayyad taxation and administrative
practice were perceived as unjust by some Muslims. The Christian and Jewish population still had autonomy; their
judicial matters were dealt with in accordance with their own laws and by their own religious heads or their appointees,
although they did pay a poll tax for policing to the central state. Muhammad had stated explicitly during his lifetime
that Abrahamic religious groups (still a majority in the times of the Umayyad Caliphate) should be allowed to practice
their own religion, provided that they paid the jizya taxation. The welfare state of both the Muslim and the non-Muslim
poor started by Umar ibn al-Khattab had also continued. Muawiya's wife Maysun (Yazid's mother) was also a Christian.
The relations between the Muslims and the Christians in the state were stable in this time. The Umayyads were involved
in frequent battles with the Christian Byzantines without being concerned with protecting themselves in Syria, which
had remained largely Christian like many other parts of the empire. Prominent positions were held by Christians,
some of whom belonged to families that had served in Byzantine governments. The employment of Christians was part
of a broader policy of religious assimilation that was necessitated by the presence of large Christian populations
in the conquered provinces, as in Syria. This policy also boosted Muawiya's popularity and solidified Syria as his
power base. The Umayyad Caliphate (Arabic: الخلافة الأموية, trans. Al-Khilāfat al-ʾumawiyya) was the second of the
four major Islamic caliphates established after the death of Muhammad. This caliphate was centered on the Umayyad
dynasty (Arabic: الأمويون, al-ʾUmawiyyūn, or بنو أمية, Banū ʾUmayya, "Sons of Umayya"), hailing from Mecca. The
Umayyad family had first come to power under the third caliph, Uthman ibn Affan (r. 644–656), but the Umayyad regime
was founded by Muawiya ibn Abi Sufyan, long-time governor of Syria, after the end of the First Muslim Civil War in
661 CE/41 AH. Syria remained the Umayyads' main power base thereafter, and Damascus was their capital. The Umayyads
continued the Muslim conquests, incorporating the Caucasus, Transoxiana, Sindh, the Maghreb and the Iberian Peninsula
(Al-Andalus) into the Muslim world. At its greatest extent, the Umayyad Caliphate covered 15 million km2 (5.79 million
square miles), making it the largest empire (in terms of area, not population) the world had yet seen,
and the fifth largest ever to exist. Most historians consider Caliph Muawiyah (661–80) to have been the second
ruler of the Umayyad dynasty, even though he was the first to assert the Umayyads' right to rule on a dynastic principle.
It was really the caliphate of Uthman ibn Affan (644–656), a member of the Umayyad clan himself, that witnessed the revival
and then the ascendancy of the Umayyad clan to the corridors of power. Uthman placed some of the trusted members
of his clan at prominent and strong positions throughout the state. Most notable was the appointment of Marwan ibn
al-Hakam, Uthman's first cousin, as his top advisor, which created a stir among the Hashimite companions of Muhammad,
as Marwan along with his father Al-Hakam ibn Abi al-'As had been permanently exiled from Medina by Muhammad during
his lifetime. Uthman also appointed as governor of Kufa his half-brother, Walid ibn Uqba, who was accused by Hashmites
of leading prayer while under the influence of alcohol. Uthman also consolidated Muawiyah's governorship of Syria
by granting him control over a larger area and appointed his foster brother Abdullah ibn Saad as the Governor of
Egypt. However, since Uthman never named an heir, he cannot be considered the founder of a dynasty. Following the
death of Husayn, Ibn al-Zubayr, although remaining in Mecca, was associated with two opposition movements, one centered
in Medina and the other around Kharijites in Basra and Arabia. Because Medina had been home to Muhammad and his family,
including Husayn, word of his death and the imprisonment of his family led to a large opposition movement. In 683,
Yazid dispatched an army to subdue both movements. The army suppressed the Medinese opposition at the Battle of al-Harrah.
The Grand Mosque in Medina was severely damaged and widespread pillaging caused deep-seated dissent. Yazid's army
continued on and laid siege to Mecca. At some point during the siege, the Kaaba was badly damaged in a fire. The
destruction of the Kaaba and Grand Mosque became a major cause for censure of the Umayyads in later histories of
the period. According to tradition, the Umayyad family (also known as the Banu Abd-Shams) and Muhammad both descended
from a common ancestor, Abd Manaf ibn Qusai, and they originally came from the city of Mecca. Muhammad descended
from Abd Manāf via his son Hashim, while the Umayyads descended from Abd Manaf via a different son, Abd-Shams, whose
son was Umayya. The two families are therefore considered to be different clans (those of Hashim and of Umayya, respectively)
of the same tribe (that of the Quraish). However, Shia Muslim historians suspect that Umayya was an adopted son of Abd Shams, and therefore not a blood relative of Abd Manaf ibn Qusai, and that he was later cast out of the noble family. Sunni historians disagree, viewing these claims as outright polemics born of Shia hostility toward the Umayyad family in general. They point out that Uthman's grandsons, Zaid bin Amr bin Uthman bin Affan and Abdullah bin Amr bin Uthman, married Sukaina and Fatima, the daughters of Husayn son of Ali, as evidence of the closeness of Banu Hashim and Banu Umayya. Following the Battle of the Camel, Ali fought a battle against Muawiyah, known
as the Battle of Siffin. The battle was stopped before either side had achieved victory, and the two parties agreed
to arbitrate their dispute. After the battle Amr ibn al-As was appointed by Muawiyah as an arbitrator, and Ali appointed
Abu Musa Ashaari. Seven months later, in February 658, the two arbitrators met at Adhruh, about 10 miles northwest
of Ma'an in Jordan. Amr ibn al-As convinced Abu Musa Ashaari that both Ali and Muawiyah should step down and a new
Caliph be elected. Ali and his supporters were stunned by the decision which had lowered the Caliph to the status
of the rebellious Muawiyah I. Ali was therefore outwitted by Muawiyah and Amr. Ali refused to accept the verdict
and found himself technically in breach of his pledge to abide by the arbitration. This put Ali in a weak position
even amongst his own supporters. The most vociferous opponents in Ali's camp were the very same people who had forced
Ali into the ceasefire. They broke away from Ali's force, rallying under the slogan, "arbitration belongs to God
alone." This group came to be known as the Kharijites ("those who leave"). In 659 Ali's forces and the Kharijites
met in the Battle of Nahrawan. Although Ali won the battle, the constant conflict had begun to affect his standing,
and in the following years some Syrians seem to have acclaimed Muawiyah as a rival caliph. The Quran and Muhammad
talked about racial equality and justice as in The Farewell Sermon. Tribal and nationalistic differences were discouraged.
But after Muhammad's death, the old tribal differences between the Arabs started to resurface. Following the Roman–Persian
and Byzantine–Sassanid Wars, deep-rooted differences also existed between Iraq, formerly under the Persian Sassanid Empire,
and Syria, formerly under the Byzantine Empire. Each wanted the capital of the newly established Islamic
State to be in their area. Previously, the second caliph, Umar, had been very firm with his governors, and his spies kept an
eye on them. If he felt that a governor or a commander was becoming attracted to wealth, he had him removed from
his position. While the Umayyads and the Hashimites may have harbored bitterness toward each other before Muhammad,
the rivalry turned into a severe case of tribal animosity after the Battle of Badr. The battle saw three top leaders
of the Umayyad clan (Utba ibn Rabi'ah, Walid ibn Utbah and Shaybah) killed by Hashimites (Ali, Hamza ibn ‘Abd al-Muttalib
and Ubaydah ibn al-Harith) in a three-on-three melee. This fueled the opposition of Abu Sufyan ibn Harb, the grandson
of Umayya, to Muhammad and to Islam. Seeking to avenge the defeat at Badr and to exterminate the adherents of the new religion, Abu Sufyan waged another battle against the Muslims based in Medina only a year later. The Battle of Uhud is generally regarded by scholars as the first defeat for the Muslims, as they had
incurred greater losses than the Meccans. After the battle, Abu Sufyan's wife Hind, who was also the daughter of
Utba ibn Rabi'ah, is reported to have cut open the corpse of Hamza, taking out his liver which she then attempted
to eat. Within five years after his defeat in the Battle of Uhud, however, Muhammad took control of Mecca and announced
a general amnesty for all. Abu Sufyan and his wife Hind embraced Islam on the eve of the conquest of Mecca, as did
their son (the future caliph Muawiyah I). Umar is honored for his attempt to resolve the fiscal problems attendant
upon conversion to Islam. During the Umayyad period, the majority of people living within the caliphate were not
Muslim, but Christian, Jewish, Zoroastrian, or members of other small groups. These religious communities were not
forced to convert to Islam, but were subject to a tax (jizyah) which was not imposed upon Muslims. This situation
may actually have made widespread conversion to Islam undesirable from the point of view of state revenue, and there
are reports that provincial governors actively discouraged such conversions. It is not clear how Umar attempted to
resolve this situation, but the sources portray him as having insisted on like treatment of Arab and non-Arab (mawali)
Muslims, and on the removal of obstacles to the conversion of non-Arabs to Islam. After the assassination of Uthman
in 656, Ali, a member of the Quraysh tribe and the cousin and son-in-law of Muhammad, was elected as the caliph.
He soon met with resistance from several factions, owing to his relative political inexperience. Ali moved his capital
from Medina to Kufa. The resulting conflict, which lasted from 656 until 661, is known as the First Fitna ("civil
war"). Muawiyah I, the governor of Syria, a relative of Uthman ibn al-Affan and Marwan I, wanted the culprits arrested.
Marwan I manipulated everyone and created conflict. Aisha, the wife of Muhammad, and Talhah and Al-Zubayr, two of
the companions of Muhammad, went to Basra to tell Ali to arrest the culprits who murdered Uthman. Marwan I and other
people who wanted conflict manipulated everyone to fight. The two sides clashed at the Battle of the Camel in 656,
where Ali won a decisive victory. Early Muslim armies stayed in encampments away from cities because Umar feared
that they might get attracted to wealth and luxury. In the process, they might turn away from the worship of God
and start accumulating wealth and establishing dynasties. When Uthman ibn al-Affan became very old, Marwan I, a relative
of Muawiyah I, slipped into the vacuum, became his secretary, slowly assumed more control and relaxed some of these
restrictions. Marwan I had previously been excluded from positions of responsibility. In 656, Muhammad ibn Abi Bakr,
the son of Abu Bakr, the adopted son of Ali ibn Abi Talib, and the great-grandfather of Ja'far al-Sadiq, showed some
Egyptians the house of Uthman ibn al-Affan. These Egyptians later killed Uthman. In 680 Ibn
al-Zubayr fled Medina for Mecca. Hearing about Husayn's opposition to Yazid I, the people of Kufa sent word to Husayn,
asking him to take over with their support. Al-Husayn sent his cousin Muslim bin Agail to verify whether they would rally
behind him. When the news reached Yazid I, he sent Ubayd-Allah bin Ziyad, ruler of Basrah, with the instruction to
prevent the people of Kufa rallying behind Al-Husayn. Ubayd-Allah bin Ziyad managed to disperse the crowd that gathered
around Muslim bin Agail and captured him. Realizing that Ubayd-Allah bin Ziyad had been instructed to prevent Husayn from
establishing support in Kufa, Muslim bin Agail asked that a message be sent to Husayn warning him not to travel
to Kufa. The request was denied, and Ubayd-Allah bin Ziyad killed Muslim bin Agail. While Ibn al-Zubayr would stay
in Mecca until his death, Husayn decided to travel on to Kufa with his family, unaware of the lack of support there.
Husayn and his family were intercepted by Yazid I's forces led by Amru bin Saad, Shamar bin Thi Al-Joshan, and Hussain
bin Tamim, who fought Al-Husayn and his male family members until they were killed. There were 200 people in Husayn's
caravan, many of whom were women, including his sisters, wives, daughters and their children. The women and children
from Husayn's camp were taken as prisoners of war and led back to Damascus to be presented to Yazid I. They remained
imprisoned until public opinion turned against him as word of Husayn's death and his family's capture spread. They
were then granted passage back to Medina. The sole adult male survivor from the caravan was Ali ibn Husayn, who was
too ill with fever to fight when the caravan was attacked. In the year 712, Muhammad bin Qasim, an Umayyad general,
sailed from the Persian Gulf into Sindh, in modern-day Pakistan, and conquered both the Sindh and the Punjab regions along the
Indus river. The conquest of Sindh and Punjab, although costly, was a major gain for the
Umayyad Caliphate. However, further gains were halted by Hindu kingdoms in India at the Battle of Rajasthan. The
Arabs tried to invade India but they were defeated by the north Indian king Nagabhata of the Pratihara Dynasty and
by the south Indian Emperor Vikramaditya II of the Chalukya dynasty in the early 8th century. After this the Arab
chroniclers admit that the Caliph Mahdi "gave up the project of conquering any part of India." The second major event
of the early reign of Abd al-Malik was the construction of the Dome of the Rock in Jerusalem. Although the chronology
remains somewhat uncertain, the building seems to have been completed in 692, which means that it was under construction
during the conflict with Ibn al-Zubayr. This has led some historians, both medieval and modern, to suggest that the
Dome of the Rock was built as a destination for pilgrimage to rival the Kaaba, which was under the control of Ibn
al-Zubayr. Muawiyah also encouraged peaceful coexistence with the Christian communities of Syria, and his reign was credited
with "peace and prosperity for Christians and Arabs alike"; one of his closest advisers was Sarjun, the father
of John of Damascus. At the same time, he waged unceasing war against the Byzantine Roman Empire. During his reign,
Rhodes and Crete were occupied, and several assaults were launched against Constantinople. After their failure, and
faced with a large-scale Christian uprising in the form of the Mardaites, Muawiyah concluded a peace with Byzantium.
Muawiyah also oversaw military expansion in North Africa (the foundation of Kairouan) and in Central Asia (the conquest
of Kabul, Bukhara, and Samarkand). Yazid died while the siege was still in progress, and the Umayyad army returned
to Damascus, leaving Ibn al-Zubayr in control of Mecca. Yazid's son Muawiya II (683–84) initially succeeded him but
seems to have never been recognized as caliph outside of Syria. Two factions developed within Syria: the Confederation
of Qays, who supported Ibn al-Zubayr, and the Quda'a, who supported Marwan, a descendant of Umayya via Wa'il ibn
Umayyah. The partisans of Marwan triumphed at a battle at Marj Rahit, near Damascus, in 684, and Marwan became caliph
shortly thereafter. Marwan was succeeded by his son, Abd al-Malik (685–705), who reconsolidated Umayyad control of
the caliphate. The early reign of Abd al-Malik was marked by the revolt of Al-Mukhtar, which was based in Kufa. Al-Mukhtar
hoped to elevate Muhammad ibn al-Hanafiyyah, another son of Ali, to the caliphate, although Ibn al-Hanafiyyah himself
may have had no connection to the revolt. The troops of al-Mukhtar engaged in battles both with the Umayyads in 686,
defeating them at the river Khazir near Mosul, and with Ibn al-Zubayr in 687, at which time the revolt of al-Mukhtar
was crushed. In 691, Umayyad troops reconquered Iraq, and in 692 the same army captured Mecca. Ibn al-Zubayr was
killed in the attack. Geographically, the empire was divided into several provinces, the borders of which changed
numerous times during the Umayyad reign. Each province had a governor appointed by the khalifah. The governor was
in charge of the religious officials, army leaders, police, and civil administrators in his province. Local expenses
were paid for by taxes coming from that province, with the remainder each year being sent to the central government
in Damascus. As the central power of the Umayyad rulers waned in the later years of the dynasty, some governors neglected
to send the extra tax revenue to Damascus and created great personal fortunes. Hisham suffered still worse defeats
in the east, where his armies attempted to subdue both Tokharistan, with its center at Balkh, and Transoxiana, with
its center at Samarkand. Both areas had already been partially conquered, but remained difficult to govern. Once
again, a particular difficulty concerned the question of the conversion of non-Arabs, especially the Sogdians of
Transoxiana. Following the Umayyad defeat in the "Day of Thirst" in 724, Ashras ibn 'Abd Allah al-Sulami, governor
of Khurasan, promised tax relief to those Sogdians who converted to Islam, but went back on his offer when it proved
too popular and threatened to reduce tax revenues. Discontent among the Khurasani Arabs rose sharply after the losses
suffered in the Battle of the Defile in 731, and in 734, al-Harith ibn Surayj led a revolt that received broad backing
from Arabs and natives alike, capturing Balkh but failing to take Merv. After this defeat, al-Harith's movement seems
to have been dissolved, but the problem of the rights of non-Arab Muslims would continue to plague the Umayyads.
The Hashimiyya movement (a sub-sect of the Kaysanites Shia), led by the Abbasid family, overthrew the Umayyad caliphate.
The Abbasids were members of the Hashim clan, rivals of the Umayyads, but the word "Hashimiyya" seems to refer specifically
to Abu Hashim, a grandson of Ali and son of Muhammad ibn al-Hanafiyya. According to certain traditions, Abu Hashim
died in 717 in Humeima in the house of Muhammad ibn Ali, the head of the Abbasid family, and before dying named Muhammad
ibn Ali as his successor. This tradition allowed the Abbasids to rally the supporters of the failed revolt of Mukhtar,
who had represented themselves as the supporters of Muhammad ibn al-Hanafiyya. From the caliphate's north-western
African bases, a series of raids on coastal areas of the Visigothic Kingdom paved the way to the permanent occupation
of most of Iberia by the Umayyads (starting in 711), and on into south-eastern Gaul (last stronghold at Narbonne
in 759). Hisham's reign witnessed the end of expansion in the west, following the defeat of the Arab army by the
Franks at the Battle of Tours in 732. In 739 a major Berber Revolt broke out in North Africa, which was subdued only
with difficulty, but it was followed by the collapse of Umayyad authority in al-Andalus. In India, the Arab armies
were defeated by the south Indian Chalukya dynasty and by the north Indian Pratihara dynasty in the 8th century,
and the Arabs were driven out of India. In the Caucasus, the confrontation with the Khazars peaked under Hisham:
the Arabs established Derbent as a major military base and launched several invasions of the northern Caucasus, but
failed to subdue the nomadic Khazars. The conflict was arduous and bloody, and the Arab army even suffered a major
defeat at the Battle of Marj Ardabil in 730. Marwan ibn Muhammad, the future Marwan II, finally ended the war in
737 with a massive invasion that is reported to have reached as far as the Volga, but the Khazars remained unsubdued.
The final son of Abd al-Malik to become caliph was Hisham (724–43), whose long and eventful reign was above all marked
by the curtailment of military expansion. Hisham established his court at Resafa in northern Syria, which was closer
to the Byzantine border than Damascus, and resumed hostilities against the Byzantines, which had lapsed following
the failure of the last siege of Constantinople. The new campaigns resulted in a number of successful raids into
Anatolia, but also in a major defeat (the Battle of Akroinon), and did not lead to any significant territorial expansion.
With limited resources, Muawiyah set about creating allies. He married Maysum, the daughter of the chief of
the Kalb tribe, a large Jacobite Christian Arab tribe in Syria. The marriage was politically motivated: the Kalb
tribe had remained largely neutral when the Muslims first went into Syria, and after the plague that killed
much of the Muslim army there, marrying Maysum (Yazid's mother, herself a Jacobite Christian) allowed Muawiyah
to use the Jacobite Christians against the Romans. With limited resources and the Byzantines just over the border,
Muawiyah worked in cooperation with the local Christian population. To stop Byzantine harassment from the sea
during the Arab–Byzantine Wars, in 649 Muawiyah set up a navy, manned by Monophysite Christian, Coptic, and
Jacobite Syrian Christian sailors alongside Muslim troops. Umar ibn Abd al-Aziz is the only Umayyad ruler (of the
Caliphs of Damascus) to be unanimously praised by Sunni sources for his devout piety and justice. In his efforts to spread Islam
he established liberties for the Mawali by abolishing the jizya tax for converts to Islam. Imam Abu Muhammad Abdullah
ibn Abdul Hakam stated that Umar ibn Abd al-Aziz also stopped the personal allowance offered to his relatives, saying
that he could only give them an allowance if he gave one to everyone else in the empire. Umar ibn Abd al-Aziz
was poisoned in the year 720. When successive governments tried to reverse his tax policies,
rebellions broke out. Around 746, Abu Muslim assumed leadership of the Hashimiyya in Khurasan. In 747, he successfully
initiated an open revolt against Umayyad rule, which was carried out under the sign of the black flag. He soon established
control of Khurasan, expelling its Umayyad governor, Nasr ibn Sayyar, and dispatched an army westwards. Kufa fell
to the Hashimiyya in 749, the last Umayyad stronghold in Iraq, Wasit, was placed under siege, and in November of
the same year Abu al-Abbas was recognized as the new caliph in the mosque at Kufa. At this point
Marwan mobilized his troops from Harran and advanced toward Iraq. In January 750 the two forces met in the Battle
of the Zab, and the Umayyads were defeated. Damascus fell to the Abbasids in April, and in August, Marwan was killed
in Egypt. The books written later in the Abbasid period in Iran are more anti-Umayyad. Iran was Sunni at the time,
but there was much anti-Arab feeling there after the fall of the Persian empire, and this feeling also influenced
the books on Islamic history. The history of al-Tabari was also written in Iran during that period. It was a huge
collection including all the texts its author could find from all sources, preserving everything for future
generations to codify and to judge as true or false. The Diwan of Umar, assigning annuities
to all Arabs and to the Muslim soldiers of other races, underwent a change in the hands of the Umayyads. The Umayyads
meddled with the register, and recipients came to regard their pensions as a subsistence allowance even when not in
active service. Hisham reformed it, paying only those who participated in battle. On the pattern of the Byzantine
system the Umayyads reformed their army organization in general and divided it into five corps: the centre, two wings,
vanguards and rearguards, following the same formation while on march or on a battle field. Marwan II (740–50) abandoned
the old division and introduced Kurdus (cohort), a small compact body. The Umayyad troops were divided into three
divisions: infantry, cavalry and artillery. Arab troops were dressed and armed in Greek fashion. The Umayyad cavalry
used plain and round saddles. The artillery used arradah (ballista), manjaniq (the mangonel) and dabbabah or kabsh
(the battering ram). The heavy engines, siege machines and baggage were carried on camels behind the army. Mu'awiyah
introduced a postal service, Abd al-Malik extended it throughout his empire, and Walid made full use of it. The Umayyad
Caliph Abd al-Malik developed a regular postal service. Umar bin Abdul-Aziz developed it further by building caravanserais
at stages along the Khurasan highway. Relays of horses were used for the conveyance of dispatches between the caliph
and his agents and officials posted in the provinces. The main highways were divided into stages of 12 miles (19
km) each, and each stage had horses, donkeys or camels ready to carry the post. The service primarily met the needs
of government officials, but travellers and their important dispatches also benefited from the system. The postal
carriages were also used for the swift transport of troops, carrying fifty to a hundred men at a time.
Under Governor Yusuf bin Umar, the postal department of Iraq cost 4,000,000 dirhams a year. However, many early history
books, like the Islamic Conquest of Syria (Fatuhusham) by al-Imam al-Waqidi, state that after their conversion to Islam,
Muawiyah's father Abu Sufyan ibn Harb and his brother Yazid ibn Abi Sufyan were appointed as commanders in the Muslim
armies by Muhammad. Muawiyah, Abu Sufyan ibn Harb, Yazid ibn Abi Sufyan and Hind bint Utbah fought in the Battle
of Yarmouk. The defeat of the Byzantine Emperor Heraclius at the Battle of Yarmouk opened the way for the Muslim
expansion into Jerusalem and Syria. Non-Muslim groups in the Umayyad Caliphate, which included Christians, Jews,
Zoroastrians, and pagan Berbers, were called dhimmis. They were given a legally protected status as second-class
citizens as long as they accepted and acknowledged the political supremacy of the ruling Muslims. They were allowed
to have their own courts, and were given freedom of their religion within the empire. Although they
could not hold the highest public offices in the empire, they had many bureaucratic positions within the government.
Christians and Jews still continued to produce great theological thinkers within their communities, but as time wore
on, many of the intellectuals converted to Islam, leading to a lack of great thinkers in the non-Muslim communities.
The Umayyad caliphate was marked both by territorial expansion and by the administrative and cultural problems that
such expansion created. Despite some notable exceptions, the Umayyads tended to favor the rights of the old Arab
families, and in particular their own, over those of newly converted Muslims (mawali). Therefore, they held to a
less universalist conception of Islam than did many of their rivals. As G.R. Hawting has written, "Islam was in fact
regarded as the property of the conquering aristocracy." Many Muslims criticized the Umayyads for having too many
non-Muslim, former Roman administrators in their government. St John of Damascus was also a high administrator in
the Umayyad administration. As the Muslims took over cities, they left the people's political representatives,
the Roman tax collectors, and the administrators in place. The taxes owed to the central government were calculated
and negotiated by the people's political representatives. The central government was paid for the services it
provided, and the local government kept the money for the services it provided. Many Christian cities also used
some of the taxes to maintain their churches and run their own organizations. The Umayyads were later criticized
by some Muslims for not reducing the taxes of the people who converted to Islam; these new converts continued
to pay the same taxes that had previously been negotiated. The Umayyads have met with a largely negative reception
from later Islamic historians, who have accused
them of promoting a kingship (mulk, a term with connotations of tyranny) instead of a true caliphate (khilafa). In
this respect it is notable that the Umayyad caliphs referred to themselves not as khalifat rasul Allah ("successor
of the messenger of God", the title preferred by the tradition), but rather as khalifat Allah ("deputy of God").
The distinction seems to indicate that the Umayyads "regarded themselves as God's representatives at the head of
the community and saw no need to share their religious power with, or delegate it to, the emergent class of religious
scholars." In fact, it was precisely this class of scholars, based largely in Iraq, that was responsible for collecting
and recording the traditions that form the primary source material for the history of the Umayyad period. In reconstructing
this history, therefore, it is necessary to rely mainly on sources, such as the histories of Tabari and Baladhuri,
that were written in the Abbasid court at Baghdad.
Asphalt/bitumen also occurs in unconsolidated sandstones known as "oil sands" in Alberta, Canada, and the similar "tar sands"
in Utah, US. The Canadian province of Alberta has most of the world's reserves of natural bitumen, in three huge
deposits covering 142,000 square kilometres (55,000 sq mi), an area larger than England or New York state. These
bituminous sands contain 166 billion barrels (26.4×10^9 m3) of commercially established oil reserves, giving Canada
the third-largest oil reserves in the world, and they produce over 2.3 million barrels per day (370×10^3 m3/d) of heavy
crude oil and synthetic crude oil. Although historically it was used without refining to pave roads, nearly all of
the bitumen is now used as raw material for oil refineries in Canada and the United States. The first use of asphalt/bitumen
in the New World was by indigenous peoples. On the west coast, as early as the 13th century, the Tongva, Luiseño
and Chumash peoples collected the naturally occurring asphalt/bitumen that seeped to the surface above underlying
petroleum deposits. All three used the substance as an adhesive. It is found on many different artifacts of tools
and ceremonial items. For example, it was used on rattles to adhere gourds or turtle shells to rattle handles. It
was also used in decorations. Small round shell beads were often set in asphaltum to provide decorations. It was
used as a sealant on baskets to make them watertight for carrying water. Asphaltum was used also to seal the planks
on ocean-going canoes. When maintenance is performed on asphalt pavements, such as milling to remove a worn or damaged
surface, the removed material can be returned to a facility for processing into new pavement mixtures. The asphalt/bitumen
in the removed material can be reactivated and put back to use in new pavement mixes. With some 95% of paved roads
being constructed of or surfaced with asphalt, a substantial amount of asphalt pavement material is reclaimed each
year. According to industry surveys conducted annually by the Federal Highway Administration and the National Asphalt
Pavement Association, more than 99% of the asphalt removed each year from road surfaces during widening and resurfacing
projects is reused as part of new pavements, roadbeds, shoulders and embankments. In Alberta, five bitumen upgraders
produce synthetic crude oil and a variety of other products: The Suncor Energy upgrader near Fort McMurray, Alberta
produces synthetic crude oil plus diesel fuel; the Syncrude Canada, Canadian Natural Resources, and Nexen upgraders
near Fort McMurray produce synthetic crude oil; and the Shell Scotford Upgrader near Edmonton produces synthetic
crude oil plus an intermediate feedstock for the nearby Shell Oil Refinery. A sixth upgrader, under construction
in 2015 near Redwater, Alberta, will upgrade half of its crude bitumen directly to diesel fuel, with the remainder
of the output being sold as feedstock to nearby oil refineries and petrochemical plants. Asphalt/bitumen is typically
stored and transported at temperatures around 150 °C (302 °F). Sometimes diesel oil or kerosene is mixed in before
shipping to retain liquidity; upon delivery, these lighter materials are separated out of the mixture. This mixture
is often called "bitumen feedstock", or BFS. Some dump trucks route the hot engine exhaust through pipes in the dump
body to keep the material warm. The backs of tippers carrying asphalt/bitumen, as well as some handling equipment,
are also commonly sprayed with a releasing agent before filling to aid release. Diesel oil is no longer used as a
release agent due to environmental concerns. The Albanian bitumen extraction has a long history and was practiced
in an organized way by the Romans. After centuries of silence, the first mentions of Albanian bitumen appeared only
in 1868, when the Frenchman Coquand published the first geological description of the deposits of Albanian bitumen.
In 1875, the exploitation rights were granted to the Ottoman government and in 1912, they were transferred to the
Italian company Simsa. From 1945 the mine was exploited by the Albanian government, and since 2001 management has
passed to a French company, which organized the mining process for the manufacture of natural bitumen on an industrial
scale. The word asphalt is derived from late Middle English, in turn from French asphalte, based on Late Latin
asphalton, asphaltum, which is the latinisation of the Greek ἄσφαλτος (ásphaltos, ásphalton), a word meaning "asphalt/bitumen/pitch",
which perhaps derives from ἀ-, "without" and σφάλλω (sfallō), "make fall". Note that in French, the term asphalte
is used for naturally occurring bitumen-soaked limestone deposits, and for specialised manufactured products with
fewer voids or greater bitumen content than the "asphaltic concrete" used to pave roads. It is a significant fact
that the first use of asphalt by the ancients was in the nature of a cement for securing or joining together various
objects, and it thus seems likely that the name itself was expressive of this application. Specifically Herodotus
mentioned that bitumen was brought to Babylon to build its gigantic fortification wall. From the Greek, the word
passed into late Latin, and thence into French (asphalte) and English ("asphaltum" and "asphalt"). The terms asphalt
and bitumen are often used interchangeably to mean both natural and manufactured forms of the substance. In American
English, asphalt (or asphalt cement) is the carefully refined residue from the distillation process of selected crude
oils. Outside the United States, the product is often called bitumen. Geologists often prefer the term bitumen. Common
usage often refers to various forms of asphalt/bitumen as "tar", such as at the La Brea Tar Pits. Another archaic
term for asphalt/bitumen is "pitch". The great majority of asphalt used commercially is obtained from petroleum.
Nonetheless, large amounts of asphalt occur in concentrated form in nature. Naturally occurring deposits of asphalt/bitumen
are formed from the remains of ancient, microscopic algae (diatoms) and other once-living things. These remains were
deposited in the mud on the bottom of the ocean or lake where the organisms lived. Under the heat (above 50 °C) and
pressure of burial deep in the earth, the remains were transformed into materials such as asphalt/bitumen, kerogen,
or petroleum. The world's largest deposit of natural bitumen, known as the Athabasca oil sands is located in the
McMurray Formation of Northern Alberta. This formation is from the early Cretaceous, and is composed of numerous
lenses of oil-bearing sand with up to 20% oil. Isotopic studies attribute the oil deposits to be about 110 million
years old. Two smaller but still very large formations occur in the Peace River oil sands and the Cold Lake oil sands,
to the west and southeast of the Athabasca oil sands, respectively. Of the Alberta bitumen deposits, only parts of
the Athabasca oil sands are shallow enough to be suitable for surface mining. The other 80% has to be produced by
oil wells using enhanced oil recovery techniques like steam-assisted gravity drainage. Bitumen was used in early
photographic technology. In 1826 or 1827, it was used by French scientist Joseph Nicéphore Niépce to make the oldest
surviving photograph from nature. The bitumen was thinly coated onto a pewter plate which was then exposed in a camera.
Exposure to light hardened the bitumen and made it insoluble, so that when it was subsequently rinsed with a solvent
only the sufficiently light-struck areas remained. Many hours of exposure in the camera were required, making bitumen
impractical for ordinary photography, but from the 1850s to the 1920s it was in common use as a photoresist in the
production of printing plates for various photomechanical printing processes. The first British
patent for the use of asphalt/bitumen was 'Cassell's patent asphalte or bitumen' in 1834. Then on 25 November 1837,
Richard Tappin Claridge patented the use of Seyssel asphalt (patent #7849), for use in asphalte pavement, having
seen it employed in France and Belgium when visiting with Frederick Walter Simms, who worked with him on the introduction
of asphalt to Britain. Dr T. Lamb Phipson writes that his father, Samuel Ryland Phipson, a friend of Claridge, was
also "instrumental in introducing the asphalte pavement (in 1836)". Indeed, mastic pavements had been previously
employed at Vauxhall by a competitor of Claridge, but without success. Roads in the US have been paved with materials
that include asphalt/bitumen since at least 1870, when a street in front of the Newark, NJ City Hall was paved. In
many cases, these early pavings were made from naturally occurring "bituminous rock", such as at Ritchie Mines in
Macfarlan in Ritchie County, West Virginia from 1852 to 1873. In 1876, asphalt-based paving was used to pave Pennsylvania
Avenue in Washington, DC, in time for the celebration of the national centennial. Asphalt/bitumen was also used for
flooring, paving and waterproofing of baths and swimming pools during the early 20th century, following similar trends
in Europe. In 1838, there was a flurry of entrepreneurial activity involving asphalt/bitumen, which had uses beyond
paving. For example, asphalt could also be used for flooring, damp-proofing in buildings, and for waterproofing of various
types of pools and baths, with these latter themselves proliferating in the 19th century. On the London stockmarket,
there were various claims as to the exclusivity of asphalt quality from France, Germany and England. And numerous
patents were granted in France, with similar numbers of patent applications being denied in England due to their
similarity to each other. In England, "Claridge's was the type most used in the 1840s and 50s". The value of the Athabasca deposit
was obvious from the start, but the means of extracting the bitumen were not. The nearest town, Fort McMurray, Alberta
was a small fur trading post, other markets were far away, and transportation costs were too high to ship the raw
bituminous sand for paving. In 1915, Sidney Ells of the Federal Mines Branch experimented with separation techniques
and used the bitumen to pave 600 feet of road in Edmonton, Alberta. Other roads in Alberta were paved with oil sands,
but it was generally not economic. During the 1920s Dr. Karl A. Clark of the Alberta Research Council patented a
hot water oil separation process and entrepreneur Robert C. Fitzsimmons built the Bitumount oil separation plant,
which between 1925 and 1958 produced up to 300 barrels (50 m3) per day of bitumen using Dr. Clark's method. Most
of the bitumen was used for waterproofing roofs, but other uses included fuels, lubrication oils, printers' ink, medicines,
rust and acid-proof paints, fireproof roofing, street paving, patent leather, and fence post preservatives. Eventually
Fitzsimmons ran out of money and the plant was taken over by the Alberta government. Today the Bitumount plant is
a Provincial Historic Site. Canadian bitumen does not differ substantially from oils such as Venezuelan extra-heavy
and Mexican heavy oil in chemical composition, and the real difficulty is moving the extremely viscous bitumen through
oil pipelines to the refinery. Many modern oil refineries are extremely sophisticated and can process non-upgraded
bitumen directly into products such as gasoline, diesel fuel, and refined asphalt without any preprocessing. This
is particularly common in areas such as the US Gulf coast, where refineries were designed to process Venezuelan and
Mexican oil, and in areas such as the US Midwest where refineries were rebuilt to process heavy oil as domestic light
oil production declined. Given the choice, such heavy oil refineries usually prefer to buy bitumen rather than synthetic
oil because the cost is lower, and in some cases because they prefer to produce more diesel fuel and less gasoline.
By 2015 Canadian production and exports of non-upgraded bitumen exceeded that of synthetic crude oil at over 1.3
million barrels (210×10^3 m3) per day, of which about 65% was exported to the United States. A number of technologies
allow asphalt/bitumen to be mixed at much lower temperatures. These involve mixing with petroleum solvents to form
"cutbacks" with reduced melting point, or mixtures with water to turn the asphalt/bitumen into an emulsion. Asphalt
emulsions contain up to 70% asphalt/bitumen and typically less than 1.5% chemical additives. There are two main types
of emulsions with different affinity for aggregates, cationic and anionic. Asphalt emulsions are used in a wide variety
of applications. Chipseal involves spraying the road surface with asphalt emulsion followed by a layer of crushed
rock, gravel or crushed slag. Slurry seal involves the creation of a mixture of asphalt emulsion and fine crushed
aggregate that is spread on the surface of a road. Cold-mixed asphalt can also be made from asphalt emulsion to create
pavements similar to hot-mixed asphalt, several inches in depth, and asphalt emulsions are also blended into recycled
hot-mix asphalt to create low-cost pavements. Naturally occurring crude asphalt/bitumen impregnated in sedimentary
rock is the prime feedstock for petroleum production from "Oil sands", currently under development in Alberta, Canada.
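The emulsion composition figures above (up to 70% asphalt/bitumen, typically under 1.5% chemical additives, with water making up the balance) lend themselves to a quick batch calculation. The sketch below is illustrative only; the 65% bitumen and 1% additive split is an assumed example within the stated ranges, not a figure from the text:

```python
def emulsion_batch(total_kg, bitumen_frac=0.65, additive_frac=0.01):
    """Split an asphalt-emulsion batch into its three components.

    The article states emulsions contain up to 70% bitumen and
    typically less than 1.5% chemical additives; the remainder is
    water. The default fractions are illustrative assumptions.
    """
    assert bitumen_frac <= 0.70 and additive_frac < 0.015
    bitumen = total_kg * bitumen_frac
    additives = total_kg * additive_frac
    water = total_kg - bitumen - additives
    # Round to avoid floating-point noise in the reported masses.
    return round(bitumen, 1), round(additives, 1), round(water, 1)

# A hypothetical 10-tonne batch:
b, a, w = emulsion_batch(10_000)
print(b, a, w)  # 6500.0 100.0 3400.0
```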
Canada has most of the world's supply of natural asphalt/bitumen, covering 140,000 square kilometres (an area larger
than England), giving it the second-largest proven oil reserves in the world. The Athabasca oil sands is the largest
asphalt/bitumen deposit in Canada and the only one accessible to surface mining, although recent technological breakthroughs
have resulted in deeper deposits becoming producible by in situ methods. Because of oil price increases after 2003,
producing bitumen became highly profitable, but as a result of the decline after 2014 it became uneconomic to build
new plants. By 2014, Canadian crude asphalt/bitumen production averaged about 2.3 million barrels (370,000
m3) per day and was projected to rise to 4.4 million barrels (700,000 m3) per day by 2020. The total amount of crude
asphalt/bitumen in Alberta which could be extracted is estimated to be about 310 billion barrels (50×10^9 m3), which
at a rate of 4,400,000 barrels per day (700,000 m3/d) would last about 200 years. Roofing shingles account for most
of the remaining asphalt/bitumen consumption. Other uses include cattle sprays, fence-post treatments, and waterproofing
for fabrics. Asphalt/bitumen is used to make Japan black, a lacquer known especially for its use on iron and steel,
and it is also used in paint and marker inks by some graffiti supply companies to increase the weather resistance
and permanence of the paint or ink, and to make the color much darker. Asphalt/bitumen is also used
to seal some alkaline batteries during the manufacturing process. The expression "bitumen" originated in Sanskrit,
where we find the words jatu, meaning "pitch," and jatu-krit, meaning "pitch creating", "pitch producing" (referring
to coniferous or resinous trees). The Latin equivalent is claimed by some to be originally gwitu-men (pertaining
to pitch), and by others, pixtumens (exuding or bubbling pitch), which was subsequently shortened to bitumen, thence
passing via French into English. From the same root is derived the Anglo-Saxon word cwidu (mastix), the German word
Kitt (cement or mastic) and the old Norse word kvada. Asphalt/bitumen can sometimes be confused with "coal tar",
which is a visually similar black, thermoplastic material produced by the destructive distillation of coal. During
the early and mid-20th century when town gas was produced, coal tar was a readily available byproduct and extensively
used as the binder for road aggregates. The addition of tar to macadam roads led to the word tarmac, which is now
used in common parlance to refer to road-making materials. However, since the 1970s, when natural gas succeeded town
gas, asphalt/bitumen has completely overtaken the use of coal tar in these applications. Other examples of this confusion
include the La Brea Tar Pits and the Canadian oil sands, both of which actually contain natural bitumen rather than
tar. Pitch is another term sometimes used to refer to asphalt/bitumen, as in Pitch Lake. One hundred years
after the fall of Constantinople in 1453, Pierre Belon described in his work Observations in 1553 that pissasphalto,
a mixture of pitch and bitumen, was used in Dubrovnik for tarring of ships from where it was exported to a market
place in Venice where it could be bought by anyone. An 1838 edition of Mechanics Magazine cites an early use of asphalt
in France. A pamphlet dated 1621, by "a certain Monsieur d'Eyrinys, states that he had discovered the existence (of
asphaltum) in large quantities in the vicinity of Neufchatel", and that he proposed to use it in a variety of ways
– "principally in the construction of air-proof granaries, and in protecting, by means of the arches, the water-courses
in the city of Paris from the intrusion of dirt and filth", which at that time made the water unusable. "He expatiates
also on the excellence of this material for forming level and durable terraces" in palaces, "the notion of forming
such terraces in the streets not one likely to cross the brain of a Parisian of that generation". But it was generally
neglected in France until the revolution of 1830. Then, in the 1830s, there was a surge of interest, and asphalt
became widely used "for pavements, flat roofs, and the lining of cisterns, and in England, some use had been
made of it for similar purposes". Its rise in Europe was "a sudden phenomenon", after natural deposits were found
"in France at Osbann (Bas-Rhin), the Parc (l'Ain) and the Puy-de-la-Poix (Puy-de-Dome)", although it could also be
made artificially. One of the earliest uses in France was the laying of about 24,000 square yards of Seyssel asphalt
at the Place de la Concorde in 1835. In 1838, Claridge obtained patents in Scotland on 27 March, and Ireland on 23
April, and in 1851 extensions were sought for all three patents, by the trustees of a company previously formed by
Claridge. This was Claridge's Patent Asphalte Company, formed in 1838 for the purpose of introducing to Britain "Asphalte
in its natural state from the mine at Pyrimont Seysell in France", and "laid one of the first asphalt pavements in
Whitehall". Trials were made of the pavement in 1838 on the footway in Whitehall, the stable at Knightsbridge Barracks,
"and subsequently on the space at the bottom of the steps leading from Waterloo Place to St. James Park". "The formation
in 1838 of Claridge's Patent Asphalte Company (with a distinguished list of aristocratic patrons, and Marc and Isambard
Brunel as, respectively, a trustee and consulting engineer), gave an enormous impetus to the development of a British
asphalt industry". "By the end of 1838, at least two other companies, Robinson's and the Bastenne company, were in
production", with asphalt being laid as paving at Brighton, Herne Bay, Canterbury, Kensington, the Strand, and a
large floor area in Bunhill-row, while meantime Claridge's Whitehall paving "continue(d) in good order". Canada has
the world's largest deposit of natural bitumen in the Athabasca oil sands and Canadian First Nations along the Athabasca
River had long used it to waterproof their canoes. In 1719, a Cree Indian named Wa-Pa-Su brought a sample for trade
to Henry Kelsey of the Hudson’s Bay Company, who was the first recorded European to see it. However, it wasn't until
1787 that fur trader and explorer Alexander MacKenzie saw the Athabasca oil sands and said, "At about 24 miles from
the fork (of the Athabasca and Clearwater Rivers) are some bituminous fountains into which a pole of 20 feet long
may be inserted without the least resistance." Mastic asphalt is a type of asphalt which differs from dense graded
asphalt (asphalt concrete) in that it has a higher asphalt/bitumen (binder) content, usually around 7–10% of the
whole aggregate mix, as opposed to rolled asphalt concrete, which has only around 5% added asphalt/bitumen. This
thermoplastic substance is widely used in the building industry for waterproofing flat roofs and tanking underground.
Mastic asphalt is heated to a temperature of 210 °C (410 °F) and is spread in layers to form an impervious barrier
about 20 millimeters (0.79 inches) thick. Because of the difficulty of moving crude bitumen through pipelines, non-upgraded
bitumen is usually diluted with natural-gas condensate in a form called dilbit or with synthetic crude oil, called
synbit. However, to meet international competition, much non-upgraded bitumen is now sold as a blend of multiple
grades of bitumen, conventional crude oil, synthetic crude oil, and condensate in a standardized benchmark product
such as Western Canadian Select. This sour, heavy crude oil blend is designed to have uniform refining characteristics
to compete with internationally marketed heavy oils such as Mexican Mayan or Arabian Dubai Crude. Asphalt/bitumen
is similar to the organic matter in carbonaceous meteorites. However, detailed studies have shown these materials
to be distinct. The vast Alberta bitumen resources are believed to have started out as living material from marine
plants and animals, mainly algae, that died millions of years ago when an ancient ocean covered Alberta. They were
covered by mud, buried deeply over the eons, and gently cooked into oil by geothermal heat at a temperature of 50
to 150 °C (120 to 300 °F). Due to pressure from the rising of the Rocky Mountains in southwestern Alberta, 80 to
55 million years ago, the oil was driven northeast hundreds of kilometres into underground sand deposits left behind
by ancient river beds and ocean beaches, thus forming the oil sands. Selenizza is mainly used as an additive in the
road construction sector. It is mixed with traditional bitumen to improve both the viscoelastic properties and the
resistance to ageing. It may be blended with the hot bitumen in tanks, but its granular form allows it to be fed
in the mixer or in the recycling ring of normal asphalt plants. Other typical applications include the production
of mastic asphalts for sidewalks, bridges, car-parks and urban roads as well as drilling fluid additives for the
oil and gas industry. Selenizza is available in powder or in granular material of various particle sizes and is packaged
in big bags or in thermal fusible polyethylene bags. In British English, the word 'asphalt' is used to refer to a
mixture of mineral aggregate and asphalt/bitumen (also called tarmac in common parlance). When bitumen is mixed with
clay it is usually called asphaltum. The earlier word 'asphaltum' is now archaic and not commonly used.
In American English, 'asphalt' is equivalent to the British 'bitumen'. However, 'asphalt' is also commonly
used as a shortened form of 'asphalt concrete' (therefore equivalent to the British 'asphalt' or 'tarmac'). In Australian
English, bitumen is often used as the generic term for road surfaces. In Canadian English, the word bitumen is used
to refer to the vast Canadian deposits of extremely heavy crude oil, while asphalt is used for the oil refinery product
used to pave roads and manufacture roof shingles and various waterproofing products. Diluted bitumen (diluted with
naphtha to make it flow in pipelines) is known as dilbit in the Canadian petroleum industry, while bitumen "upgraded"
to synthetic crude oil is known as syncrude and syncrude blended with bitumen as synbit. Bitumen is still the preferred
geological term for naturally occurring deposits of the solid or semi-solid form of petroleum. Bituminous rock is
a form of sandstone impregnated with bitumen. The tar sands of Alberta, Canada are a similar material. Bitumen was
the nemesis of many artists during the 19th century. Although widely used for a time, it ultimately proved unstable
for use in oil painting, especially when mixed with the most common diluents, such as linseed oil, varnish and turpentine.
Unless thoroughly diluted, bitumen never fully solidifies and will in time corrupt the other pigments with which
it comes into contact. The use of bitumen as a glaze to set in shadow or mixed with other colors to render a darker
tone resulted in the eventual deterioration of many paintings, for instance those of Delacroix. Perhaps the most
famous example of the destructiveness of bitumen is Théodore Géricault's Raft of the Medusa (1818–1819), where his
use of bitumen caused the brilliant colors to degenerate into dark greens and blacks and the paint and canvas to
buckle. In 1914, Claridge's Company entered into a joint venture to produce tar-bound macadam, with materials manufactured
through a subsidiary company called Clarmac Roads Ltd. Two products resulted, namely Clarmac, and Clarphalte, with
the former being manufactured by Clarmac Roads and the latter by Claridge's Patent Asphalte Co., although Clarmac
was more widely used. However, the First World War had a severe financial impact on the Clarmac Company, which entered
into liquidation in 1915. The failure of Clarmac Roads Ltd had a flow-on effect to Claridge's Company, which was
itself compulsorily wound up, ceasing operations in 1917, having invested a substantial amount of funds into the
new venture, both at the outset, and in a subsequent attempt to save the Clarmac Company. The largest use of asphalt/bitumen
is for making asphalt concrete for road surfaces and accounts for approximately 85% of the asphalt consumed in the
United States. Asphalt concrete pavement mixes are typically composed of 5% asphalt/bitumen cement and 95% aggregates
(stone, sand, and gravel). Due to its highly viscous nature, asphalt/bitumen cement must be heated so it can be mixed
with the aggregates at the asphalt mixing facility. The temperature required varies depending upon characteristics
of the asphalt/bitumen and the aggregates, but warm-mix asphalt technologies allow producers to reduce the temperature
required. There are about 4,000 asphalt concrete mixing plants in the U.S., and a similar number in Europe. Synthetic
crude oil, also known as syncrude, is the output from a bitumen upgrader facility used in connection with oil sand
production in Canada. Bituminous sands are mined using enormous (100 ton capacity) power shovels and loaded into
even larger (400 ton capacity) dump trucks for movement to an upgrading facility. The process used to extract the
bitumen from the sand is a hot water process originally developed by Dr. Karl Clark of the University of Alberta
during the 1920s. After extraction from the sand, the bitumen is fed into a bitumen upgrader which converts it into
a light crude oil equivalent. This synthetic substance is fluid enough to be transferred through conventional oil
pipelines and can be fed into conventional oil refineries without any further treatment. By 2015 Canadian bitumen
upgraders were producing over 1 million barrels (160×10^3 m3) per day of synthetic crude oil, of which 75% was exported
to oil refineries in the United States. About 40,000,000 tons were produced in 1984. It is obtained
as the "heavy" (i.e., difficult to distill) fraction. Material with a boiling point greater than around 500 °C is
considered asphalt. Vacuum distillation separates it from the other components in crude oil (such as naphtha, gasoline
and diesel). The resulting material is typically further treated to extract small but valuable amounts of lubricants
and to adjust the properties of the material to suit applications. In a de-asphalting unit, the crude asphalt is
treated with either propane or butane in a supercritical phase to extract the lighter molecules, which are then separated.
Further processing is possible by "blowing" the product: namely reacting it with oxygen. This step makes the product
harder and more viscous. Selenizza is a naturally occurring solid hydrocarbon bitumen found in the native asphalt
deposit of Selenice, in Albania, the only European asphalt mine still in use. The rock asphalt is found in the form
of veins, filling cracks in a more or less horizontal direction. The bitumen content varies from 83% to 92% (soluble
in carbon disulphide), with a penetration value near to zero and a softening point (ring & ball) around 120 °C. The
insoluble matter, consisting mainly of silica ore, ranges from 8% to 17%. People can be exposed to asphalt in the
workplace by breathing in fumes or through skin absorption. The National Institute for Occupational Safety and Health (NIOSH)
has set a Recommended exposure limit (REL) of 5 mg/m3 over a 15-minute period. Asphalt is basically an inert material
that must be heated or diluted to a point where it becomes workable for the production of materials for paving, roofing,
and other applications. In examining the potential health hazards associated with asphalt, the International Agency
for Research on Cancer (IARC) determined that it is the application parameters, predominantly temperature, that affect
occupational exposure and the potential bioavailable carcinogenic hazard/risk of the asphalt emissions. In particular,
temperatures greater than 199 °C (390 °F) were shown to produce a greater exposure risk than when asphalt was heated
to lower temperatures, such as those typically used in asphalt pavement mix production and placement.
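The roughly 200-year reserve life quoted earlier for Alberta's extractable crude bitumen (about 310 billion barrels at a projected 4,400,000 barrels per day) follows from simple arithmetic. A minimal check, using only figures stated in the text:

```python
# Figures as quoted in the article.
recoverable_bbl = 310e9        # extractable crude bitumen, barrels
rate_bbl_per_day = 4.4e6       # projected production, barrels/day

years = recoverable_bbl / rate_bbl_per_day / 365.25
print(round(years))  # 193 -- consistent with "about 200 years"
```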
Victoria married her first cousin, Prince Albert of Saxe-Coburg and Gotha, in 1840. Their nine children married into royal
and noble families across the continent, tying them together and earning her the sobriquet "the grandmother of Europe".
After Albert's death in 1861, Victoria plunged into deep mourning and avoided public appearances. As a result of
her seclusion, republicanism temporarily gained strength, but in the latter half of her reign her popularity recovered.
Her Golden and Diamond Jubilees were times of public celebration. Victoria later described her childhood as "rather
melancholy". Her mother was extremely protective, and Victoria was raised largely isolated from other children under
the so-called "Kensington System", an elaborate set of rules and protocols devised by the Duchess and her ambitious
and domineering comptroller, Sir John Conroy, who was rumoured to be the Duchess's lover. The system prevented the
princess from meeting people whom her mother and Conroy deemed undesirable (including most of her father's family),
and was designed to render her weak and dependent upon them. The Duchess avoided the court because she was scandalised
by the presence of King William's bastard children, and perhaps prompted the emergence of Victorian morality by insisting
that her daughter avoid any appearance of sexual impropriety. Victoria shared a bedroom with her mother every night,
studied with private tutors to a regular timetable, and spent her play-hours with her dolls and her King Charles
spaniel, Dash. Her lessons included French, German, Italian, and Latin, but she spoke only English at home. In 1853,
Victoria gave birth to her eighth child, Leopold, with the aid of the new anaesthetic, chloroform. Victoria was so
impressed by the relief it gave from the pain of childbirth that she used it again in 1857 at the birth of her ninth
and final child, Beatrice, despite opposition from members of the clergy, who considered it against biblical teaching,
and members of the medical profession, who thought it dangerous. Victoria may have suffered from post-natal depression
after many of her pregnancies. Letters from Albert to Victoria intermittently complain of her loss of self-control.
For example, about a month after Leopold's birth Albert complained in a letter to Victoria about her "continuance
of hysterics" over a "miserable trifle". In March 1861, Victoria's mother died, with Victoria at her side. Through
reading her mother's papers, Victoria discovered that her mother had loved her deeply; she was heart-broken, and
blamed Conroy and Lehzen for "wickedly" estranging her from her mother. To relieve his wife during her intense and
deep grief, Albert took on most of her duties, despite being ill himself with chronic stomach trouble. In August,
Victoria and Albert visited their son, the Prince of Wales, who was attending army manoeuvres near Dublin, and spent
a few days holidaying in Killarney. In November, Albert was made aware of gossip that his son had slept with an actress
in Ireland. Appalled, Albert travelled to Cambridge, where his son was studying, to confront him. By the beginning
of December, Albert was very unwell. He was diagnosed with typhoid fever by William Jenner, and died on 14 December
1861. Victoria was devastated. She blamed her husband's death on worry over the Prince of Wales's philandering. He
had been "killed by that dreadful business", she said. She entered a state of mourning and wore black for the remainder
of her life. She avoided public appearances, and rarely set foot in London in the following years. Her seclusion
earned her the nickname "widow of Windsor". On 2 March 1882, Roderick Maclean, a disgruntled poet apparently offended
by Victoria's refusal to accept one of his poems, shot at the Queen as her carriage left Windsor railway station.
Two schoolboys from Eton College struck him with their umbrellas, until he was hustled away by a policeman. Victoria
was outraged when he was found not guilty by reason of insanity, but was so pleased by the many expressions of loyalty
after the attack that she said it was "worth being shot at—to see how much one is loved". Gladstone returned to power
after the 1892 general election; he was 82 years old. Victoria objected when Gladstone proposed appointing the Radical
MP Henry Labouchere to the Cabinet, so Gladstone agreed not to appoint him. In 1894, Gladstone retired and, without
consulting the outgoing prime minister, Victoria appointed Lord Rosebery as prime minister. His government was weak,
and the following year Lord Salisbury replaced him. Salisbury remained prime minister for the remainder of Victoria's
reign. In 1897, Victoria had written instructions for her funeral, which was to be military as befitting a soldier's
daughter and the head of the army, and white instead of black. On 25 January, Edward VII, the Kaiser and Prince Arthur,
Duke of Connaught, helped lift her body into the coffin. She was dressed in a white dress and her wedding veil. An
array of mementos commemorating her extended family, friends and servants were laid in the coffin with her, at her
request, by her doctor and dressers. One of Albert's dressing gowns was placed by her side, with a plaster cast of
his hand, while a lock of John Brown's hair, along with a picture of him, was placed in her left hand concealed from
the view of the family by a carefully positioned bunch of flowers. Items of jewellery placed on Victoria included
the wedding ring of John Brown's mother, given to her by Brown in 1883. Her funeral was held on Saturday, 2 February,
in St George's Chapel, Windsor Castle, and after two days of lying-in-state, she was interred beside Prince Albert
in Frogmore Mausoleum at Windsor Great Park. As she was laid to rest at the mausoleum, it began to snow. Victoria
wrote to her uncle Leopold, whom Victoria considered her "best and kindest adviser", to thank him "for the prospect
of great happiness you have contributed to give me, in the person of dear Albert ... He possesses every quality that
could be desired to render me perfectly happy. He is so sensible, so kind, and so good, and so amiable too. He has
besides the most pleasing and delightful exterior and appearance you can possibly see." However, at 17, Victoria,
though interested in Albert, was not yet ready to marry. The parties did not undertake a formal engagement, but assumed
that the match would take place in due time. In 1839, Melbourne resigned after Radicals and Tories (both of whom
Victoria detested) voted against a bill to suspend the constitution of Jamaica. The bill removed political power
from plantation owners who were resisting measures associated with the abolition of slavery. The Queen commissioned
a Tory, Sir Robert Peel, to form a new ministry. At the time, it was customary for the prime minister to appoint
members of the Royal Household, who were usually his political allies and their spouses. Many of the Queen's ladies
of the bedchamber were wives of Whigs, and Peel expected to replace them with wives of Tories. In what became known
as the bedchamber crisis, Victoria, advised by Melbourne, objected to their removal. Peel refused to govern under
the restrictions imposed by the Queen, and consequently resigned his commission, allowing Melbourne to return to
office. Internationally, Victoria took a keen interest in the improvement of relations between France and Britain.
She made and hosted several visits between the British royal family and the House of Orleans, who were related by
marriage through the Coburgs. In 1843 and 1845, she and Albert stayed with King Louis Philippe I at château d'Eu
in Normandy; she was the first British or English monarch to visit a French one since the meeting of Henry VIII of
England and Francis I of France on the Field of the Cloth of Gold in 1520. When Louis Philippe made a reciprocal
trip in 1844, he became the first French king to visit a British sovereign. Louis Philippe was deposed in the revolutions
of 1848, and fled to exile in England. At the height of a revolutionary scare in the United Kingdom in April 1848,
Victoria and her family left London for the greater safety of Osborne House, a private estate on the Isle of Wight
that they had purchased in 1845 and redeveloped. Demonstrations by Chartists and Irish nationalists failed to attract
widespread support, and the scare died down without any major disturbances. Victoria's first visit to Ireland in
1849 was a public relations success, but it had no lasting impact or effect on the growth of Irish nationalism. On
14 January 1858, an Italian refugee from Britain called Orsini attempted to assassinate Napoleon III with a bomb
made in England. The ensuing diplomatic crisis destabilised the government, and Palmerston resigned. Derby was reinstated
as prime minister. Victoria and Albert attended the opening of a new basin at the French military port of Cherbourg
on 5 August 1858, in an attempt by Napoleon III to reassure Britain that his military preparations were directed
elsewhere. On her return Victoria wrote to Derby reprimanding him for the poor state of the Royal Navy in comparison
to the French one. Derby's ministry did not last long, and in June 1859 Victoria recalled Palmerston to office. Palmerston
died in 1865, and after a brief ministry led by Russell, Derby returned to power. In 1866, Victoria attended the
State Opening of Parliament for the first time since Albert's death. The following year she supported the passing
of the Reform Act 1867 which doubled the electorate by extending the franchise to many urban working men, though
she was not in favour of votes for women. Derby resigned in 1868, to be replaced by Benjamin Disraeli, who charmed
Victoria. "Everyone likes flattery," he said, "and when you come to royalty you should lay it on with a trowel."
With the phrase "we authors, Ma'am", he complimented her. Disraeli's ministry only lasted a matter of months, and
at the end of the year his Liberal rival, William Ewart Gladstone, was appointed prime minister. Victoria found Gladstone's
demeanour far less appealing; he spoke to her, she is thought to have complained, as though she were "a public meeting
rather than a woman". In 1887, the British Empire celebrated Victoria's Golden Jubilee. Victoria marked the fiftieth
anniversary of her accession on 20 June with a banquet to which 50 kings and princes were invited. The following
day, she participated in a procession and attended a thanksgiving service in Westminster Abbey. By this time, Victoria
was once again extremely popular. Two days later, on 23 June, she engaged two Indian Muslims as waiters, one of whom
was Abdul Karim. He was soon promoted to "Munshi": teaching her Hindustani, and acting as a clerk. Her family and
retainers were appalled, and accused Abdul Karim of spying for the Muslim Patriotic League, and biasing the Queen
against the Hindus. Equerry Frederick Ponsonby (the son of Sir Henry) discovered that the Munshi had lied about his
parentage, and reported to Lord Elgin, Viceroy of India, "the Munshi occupies very much the same position as John
Brown used to do." Victoria dismissed their complaints as racial prejudice. Abdul Karim remained in her service until
he returned to India with a pension on her death. Victoria visited mainland Europe regularly for holidays. In 1889,
during a stay in Biarritz, she became the first reigning monarch from Britain to set foot in Spain when she crossed
the border for a brief visit. By April 1900, the Boer War was so unpopular in mainland Europe that her annual trip
to France seemed inadvisable. Instead, the Queen went to Ireland for the first time since 1861, in part to acknowledge
the contribution of Irish regiments to the South African war. In July, her second son Alfred ("Affie") died; "Oh,
God! My poor darling Affie gone too", she wrote in her journal. "It is a horrible year, nothing but sadness & horrors
of one kind & another." Victoria was physically unprepossessing—she was stout, dowdy and no more than five feet tall—but
she succeeded in projecting a grand image. She experienced unpopularity during the first years of her widowhood,
but was well liked during the 1880s and 1890s, when she embodied the empire as a benevolent matriarchal figure. Only
after the release of her diary and letters did the extent of her political influence become known to the wider public.
Biographies of Victoria written before much of the primary material became available, such as Lytton Strachey's Queen
Victoria of 1921, are now considered out of date. The biographies written by Elizabeth Longford and Cecil Woodham-Smith,
in 1964 and 1972 respectively, are still widely admired. They, and others, conclude that as a person Victoria was
emotional, obstinate, honest, and straight-talking. In 1830, the Duchess of Kent and Conroy took Victoria across
the centre of England to visit the Malvern Hills, stopping at towns and great country houses along the way. Similar
journeys to other parts of England and Wales were taken in 1832, 1833, 1834 and 1835. To the King's annoyance, Victoria
was enthusiastically welcomed at each stop. William compared the journeys to royal progresses and was concerned
that they portrayed Victoria as his rival rather than his heiress presumptive. Victoria disliked the trips; the constant
round of public appearances made her tired and ill, and there was little time for her to rest. She objected on the
grounds of the King's disapproval, but her mother dismissed his complaints as motivated by jealousy, and forced Victoria
to continue the tours. At Ramsgate in October 1835, Victoria contracted a severe fever, which Conroy initially dismissed
as a childish pretence. While Victoria was ill, Conroy and the Duchess unsuccessfully badgered her to make Conroy
her private secretary. As a teenager, Victoria resisted persistent attempts by her mother and Conroy to appoint him
to her staff. Once queen, she banned him from her presence, but he remained in her mother's household. At the time
of her accession, the government was led by the Whig prime minister Lord Melbourne, who at once became a powerful
influence on the politically inexperienced Queen, who relied on him for advice. Charles Greville supposed that the
widowed and childless Melbourne was "passionately fond of her as he might be of his daughter if he had one", and
Victoria probably saw him as a father figure. Her coronation took place on 28 June 1838 at Westminster Abbey. Over
400,000 visitors came to London for the celebrations. She became the first sovereign to take up residence at Buckingham
Palace and inherited the revenues of the duchies of Lancaster and Cornwall as well as being granted a civil list
allowance of £385,000 per year. Financially prudent, she paid off her father's debts. In 1845, Ireland was hit by
a potato blight. In the next four years over a million Irish people died and another million emigrated in what became
known as the Great Famine. In Ireland, Victoria was labelled "The Famine Queen". She personally donated £2,000 to
famine relief, more than any other individual donor, and also supported the Maynooth Grant to a Roman Catholic seminary
in Ireland, despite Protestant opposition. The story that she donated only £5 in aid to the Irish, and on the same
day gave the same amount to Battersea Dogs Home, was a myth generated towards the end of the 19th century. Victoria's
self-imposed isolation from the public diminished the popularity of the monarchy, and encouraged the growth of the
republican movement. She did undertake her official government duties, yet chose to remain secluded in her royal
residences—Windsor Castle, Osborne House, and the private estate in Scotland that she and Albert had acquired in
1847, Balmoral Castle. In March 1864, a protester stuck a notice on the railings of Buckingham Palace that announced
"these commanding premises to be let or sold in consequence of the late occupant's declining business". Her uncle
Leopold wrote to her advising her to appear in public. She agreed to visit the gardens of the Royal Horticultural
Society at Kensington and take a drive through London in an open carriage. After the Indian Rebellion of 1857, the
British East India Company, which had ruled much of India, was dissolved, and Britain's possessions and protectorates
on the Indian subcontinent were formally incorporated into the British Empire. The Queen had a relatively balanced
view of the conflict, and condemned atrocities on both sides. She wrote of "her feelings of horror and regret at
the result of this bloody civil war", and insisted, urged on by Albert, that an official proclamation announcing
the transfer of power from the company to the state "should breathe feelings of generosity, benevolence and religious
toleration". At her behest, a reference threatening the "undermining of native religions and customs" was replaced
by a passage guaranteeing religious freedom. On 17 March 1883, she fell down some stairs at Windsor, which left her
lame until July; she never fully recovered and was plagued with rheumatism thereafter. Brown died 10 days after her
accident, and to the consternation of her private secretary, Sir Henry Ponsonby, Victoria began work on a eulogistic
biography of Brown. Ponsonby and Randall Davidson, Dean of Windsor, who had both seen early drafts, advised Victoria
against publication, on the grounds that it would stoke the rumours of a love affair. The manuscript was destroyed.
In early 1884, Victoria did publish More Leaves from a Journal of a Life in the Highlands, a sequel to her earlier
book, which she dedicated to her "devoted personal attendant and faithful friend John Brown". On the day after the
first anniversary of Brown's death, Victoria was informed by telegram that her youngest son, Leopold, had died in
Cannes. He was "the dearest of my dear sons", she lamented. The following month, Victoria's youngest child, Beatrice,
met and fell in love with Prince Henry of Battenberg at the wedding of Victoria's granddaughter Princess Victoria
of Hesse and by Rhine to Henry's brother Prince Louis of Battenberg. Beatrice and Henry planned to marry, but Victoria
opposed the match at first, wishing to keep Beatrice at home to act as her companion. After a year, she was won around
to the marriage by Henry and Beatrice's promise to remain living with and attending her. Victoria's youngest son,
Leopold, was affected by the blood-clotting disease haemophilia B and two of her five daughters, Alice and Beatrice,
were carriers. Royal haemophiliacs descended from Victoria included her great-grandsons, Tsarevich Alexei of Russia,
Alfonso, Prince of Asturias, and Infante Gonzalo of Spain. The presence of the disease in Victoria's descendants,
but not in her ancestors, led to modern speculation that her true father was not the Duke of Kent but a haemophiliac.
There is no documentary evidence of a haemophiliac in connection with Victoria's mother, and as male carriers always
suffer the disease, even if such a man had existed, he would have been seriously ill. It is more likely that the mutation
arose spontaneously because Victoria's father was over 50 at the time of her conception and haemophilia arises more
frequently in the children of older fathers. Spontaneous mutations account for about a third of cases. Victoria was
the daughter of Prince Edward, Duke of Kent and Strathearn, the fourth son of King George III. Both the Duke of Kent
and King George III died in 1820, and Victoria was raised under close supervision by her German-born mother Princess
Victoria of Saxe-Coburg-Saalfeld. She inherited the throne aged 18, after her father's three elder brothers had all
died, leaving no surviving legitimate children. The United Kingdom was already an established constitutional monarchy,
in which the sovereign held relatively little direct political power. Privately, Victoria attempted to influence
government policy and ministerial appointments; publicly, she became a national icon who was identified with strict
standards of personal morality. At birth, Victoria was fifth in the line of succession after her father and his three
older brothers: the Prince Regent, the Duke of York, and the Duke of Clarence (later William IV). The Prince Regent
and the Duke of York were estranged from their wives, who were both past child-bearing age, so the two eldest brothers
were unlikely to have any further children. The Dukes of Kent and Clarence married on the same day 12 months before
Victoria's birth, but both of Clarence's daughters (born in 1819 and 1820 respectively) died as infants. Victoria's
grandfather and father died in 1820, within a week of each other, and the Duke of York died in 1827. On the death
of her uncle George IV in 1830, Victoria became heiress presumptive to her next surviving uncle, William IV. The
Regency Act 1830 made special provision for the Duchess of Kent to act as regent in case William died while Victoria
was still a minor. King William distrusted the Duchess's capacity to be regent, and in 1836 declared in her presence
that he wanted to live until Victoria's 18th birthday, so that a regency could be avoided. Victoria turned 18 on
24 May 1837, and a regency was avoided. On 20 June 1837, William IV died at the age of 71, and Victoria became Queen
of the United Kingdom. In her diary she wrote, "I was awoke at 6 o'clock by Mamma, who told me the Archbishop of
Canterbury and Lord Conyngham were here and wished to see me. I got out of bed and went into my sitting-room (only
in my dressing gown) and alone, and saw them. Lord Conyngham then acquainted me that my poor Uncle, the King, was
no more, and had expired at 12 minutes past 2 this morning, and consequently that I am Queen." Official documents
prepared on the first day of her reign described her as Alexandrina Victoria, but the first name was withdrawn at
her own wish and not used again. Though queen, as an unmarried young woman Victoria was required by social convention
to live with her mother, despite their differences over the Kensington System and her mother's continued reliance
on Conroy. Her mother was consigned to a remote apartment in Buckingham Palace, and Victoria often refused to see
her. When Victoria complained to Melbourne that her mother's close proximity promised "torment for many years", Melbourne
sympathised but said it could be avoided by marriage, which Victoria called a "schocking [sic] alternative". She
showed interest in Albert's education for the future role he would have to play as her husband, but she resisted
attempts to rush her into wedlock. On 29 May 1842, Victoria was riding in a carriage along The Mall, London, when
John Francis aimed a pistol at her but the gun did not fire; he escaped. The following day, Victoria drove the same
route, though faster and with a greater escort, in a deliberate attempt to provoke Francis to take a second aim and
catch him in the act. As expected, Francis shot at her, but he was seized by plain-clothes policemen, and convicted
of high treason. On 3 July, two days after Francis's death sentence was commuted to transportation for life, John
William Bean also tried to fire a pistol at the Queen, but it was loaded only with paper and tobacco and had too
little charge. Edward Oxford felt that the attempts were encouraged by his acquittal in 1840. Bean was sentenced
to 18 months in jail. In a similar attack in 1849, unemployed Irishman William Hamilton fired a powder-filled pistol
at Victoria's carriage as it passed along Constitution Hill, London. In 1850, the Queen did sustain injury when she
was assaulted by a possibly insane ex-army officer, Robert Pate. As Victoria was riding in a carriage, Pate struck
her with his cane, crushing her bonnet and bruising her forehead. Both Hamilton and Pate were sentenced to seven
years' transportation. Russell's ministry, though Whig, was not favoured by the Queen. She found particularly offensive
the Foreign Secretary, Lord Palmerston, who often acted without consulting the Cabinet, the Prime Minister, or the
Queen. Victoria complained to Russell that Palmerston sent official dispatches to foreign leaders without her knowledge,
but Palmerston was retained in office and continued to act on his own initiative, despite her repeated remonstrances.
It was only in 1851 that Palmerston was removed after he announced the British government's approval of President
Louis-Napoleon Bonaparte's coup in France without consulting the Prime Minister. The following year, President Bonaparte
was declared Emperor Napoleon III, by which time Russell's administration had been replaced by a short-lived minority
government led by Lord Derby. Eleven days after Orsini's assassination attempt in France, Victoria's eldest daughter
married Prince Frederick William of Prussia in London. They had been betrothed since September 1855, when Princess
Victoria was 14 years old; the marriage was delayed by the Queen and Prince Albert until the bride was 17. The Queen
and Albert hoped that their daughter and son-in-law would be a liberalising influence in the enlarging Prussian state.
Victoria felt "sick at heart" to see her daughter leave England for Germany; "It really makes me shudder", she wrote
to Princess Victoria in one of her frequent letters, "when I look round to all your sweet, happy, unconscious sisters,
and think I must give them up too – one by one." Almost exactly a year later, Princess Victoria gave birth to the
Queen's first grandchild, Wilhelm, who would become the last German Kaiser. Victoria's father was Prince Edward,
Duke of Kent and Strathearn, the fourth son of the reigning King of the United Kingdom, George III. Until 1817, Edward's
niece, Princess Charlotte of Wales, was the only legitimate grandchild of George III. Her death in 1817 precipitated
a succession crisis that brought pressure on the Duke of Kent and his unmarried brothers to marry and have children.
In 1818 he married Princess Victoria of Saxe-Coburg-Saalfeld, a widowed German princess with two children—Carl (1804–1856)
and Feodora (1807–1872)—by her first marriage to the Prince of Leiningen. Her brother Leopold was Princess Charlotte's
widower. The Duke and Duchess of Kent's only child, Victoria, was born at 4.15 a.m. on 24 May 1819 at Kensington
Palace in London. By 1836, the Duchess's brother, Leopold, who had been King of the Belgians since 1831, hoped to
marry his niece to his nephew, Prince Albert of Saxe-Coburg and Gotha. Leopold, Victoria's mother, and Albert's father
(Ernest I, Duke of Saxe-Coburg and Gotha) were siblings. Leopold arranged for Victoria's mother to invite her Coburg
relatives to visit her in May 1836, with the purpose of introducing Victoria to Albert. William IV, however, disapproved
of any match with the Coburgs, and instead favoured the suit of Prince Alexander of the Netherlands, second son of
the Prince of Orange. Victoria was aware of the various matrimonial plans and critically appraised a parade of eligible
princes. According to her diary, she enjoyed Albert's company from the beginning. After the visit she wrote, "[Albert]
is extremely handsome; his hair is about the same colour as mine; his eyes are large and blue, and he has a beautiful
nose and a very sweet mouth with fine teeth; but the charm of his countenance is his expression, which is most delightful."
Alexander, on the other hand, was "very plain". At the start of her reign Victoria was popular, but her reputation
suffered in an 1839 court intrigue when one of her mother's ladies-in-waiting, Lady Flora Hastings, developed an
abdominal growth that was widely rumoured to be an out-of-wedlock pregnancy by Sir John Conroy. Victoria believed
the rumours. She hated Conroy, and despised "that odious Lady Flora", because she had conspired with Conroy and the
Duchess of Kent in the Kensington System. At first, Lady Flora refused to submit to a naked medical examination,
until in mid-February she eventually agreed, and was found to be a virgin. Conroy, the Hastings family and the opposition
Tories organised a press campaign implicating the Queen in the spreading of false rumours about Lady Flora. When
Lady Flora died in July, the post-mortem revealed a large tumour on her liver that had distended her abdomen. At
public appearances, Victoria was hissed and jeered as "Mrs. Melbourne". In 1870, republican sentiment in Britain,
fed by the Queen's seclusion, was boosted after the establishment of the Third French Republic. A republican rally
in Trafalgar Square demanded Victoria's removal, and Radical MPs spoke against her. In August and September 1871,
she was seriously ill with an abscess in her arm, which Joseph Lister successfully lanced and treated with his new
antiseptic carbolic acid spray. In late November 1871, at the height of the republican movement, the Prince of Wales
contracted typhoid fever, the disease that was believed to have killed his father, and Victoria was fearful her son
would die. As the tenth anniversary of her husband's death approached, her son's condition grew no better, and Victoria's
distress continued. To general rejoicing, he pulled through. Mother and son attended a public parade through London
and a grand service of thanksgiving in St Paul's Cathedral on 27 February 1872, and republican feeling subsided.
During Victoria's first pregnancy in 1840, in the first few months of the marriage, 18-year-old Edward Oxford attempted
to assassinate her while she was riding in a carriage with Prince Albert on her way to visit her mother. Oxford fired
twice, but either both bullets missed or, as he later claimed, the guns had no shot. He was tried for high treason but acquitted on the grounds of insanity. In the immediate aftermath of the attack, Victoria's
popularity soared, mitigating residual discontent over the Hastings affair and the bedchamber crisis. Her daughter,
also named Victoria, was born on 21 November 1840. The Queen hated being pregnant, viewed breast-feeding with disgust,
and thought newborn babies were ugly. Nevertheless, over the following seventeen years, she and Albert had a further
eight children: Albert Edward, Prince of Wales (b. 1841), Alice (b. 1843), Alfred (b. 1844), Helena (b. 1846), Louise
(b. 1848), Arthur (b. 1850), Leopold (b. 1853) and Beatrice (b. 1857). Between April 1877 and February 1878, she
threatened five times to abdicate while pressuring Disraeli to act against Russia during the Russo-Turkish War, but
her threats had no impact on the events or their conclusion with the Congress of Berlin. Disraeli's expansionist
foreign policy, which Victoria endorsed, led to conflicts such as the Anglo-Zulu War and the Second Anglo-Afghan
War. "If we are to maintain our position as a first-rate Power", she wrote, "we must ... be Prepared for attacks
and wars, somewhere or other, CONTINUALLY." Victoria saw the expansion of the British Empire as civilising and benign,
protecting native peoples from more aggressive powers or cruel rulers: "It is not in our custom to annexe countries",
she said, "unless we are obliged & forced to do so." To Victoria's dismay, Disraeli lost the 1880 general election,
and Gladstone returned as prime minister. When Disraeli died the following year, she was blinded by "fast falling
tears", and erected a memorial tablet "placed by his grateful Sovereign and Friend, Victoria R.I." Napoleon III,
since the Crimean War Britain's closest ally, visited London in April 1855, and from 17 to 28 August the same year
Victoria and Albert returned the visit. Napoleon III met the couple at Dunkirk and accompanied them to Paris. They
visited the Exposition Universelle (a successor to Albert's 1851 brainchild the Great Exhibition) and Napoleon I's
tomb at Les Invalides (to which his remains had only been returned in 1840), and were guests of honour at a 1,200-guest
ball at the Palace of Versailles. Through the 1860s, Victoria relied increasingly on a manservant from Scotland,
John Brown. Slanderous rumours of a romantic connection and even a secret marriage appeared in print, and the Queen
was referred to as "Mrs. Brown". The story of their relationship was the subject of the 1997 movie Mrs. Brown. A
painting by Sir Edwin Henry Landseer depicting the Queen with Brown was exhibited at the Royal Academy, and Victoria
published a book, Leaves from the Journal of Our Life in the Highlands, which featured Brown prominently and in which
the Queen praised him highly. Following a custom she maintained throughout her widowhood, Victoria spent the Christmas
of 1900 at Osborne House on the Isle of Wight. Rheumatism in her legs had rendered her lame, and her eyesight was
clouded by cataracts. Through early January, she felt "weak and unwell", and by mid-January she was "drowsy ... dazed,
[and] confused". She died on Tuesday, 22 January 1901, at half past six in the evening, at the age of 81. Her son
and successor King Edward VII, and her eldest grandson, Emperor Wilhelm II of Germany, were at her deathbed. Her
favourite pet Pomeranian, Turri, was laid upon her deathbed as a last request. In the 1874 general election, Disraeli
was returned to power. He passed the Public Worship Regulation Act 1874, which removed Catholic rituals from the
Anglican liturgy and which Victoria strongly supported. She preferred short, simple services, and personally considered
herself more aligned with the presbyterian Church of Scotland than the episcopal Church of England. He also pushed
the Royal Titles Act 1876 through Parliament, so that Victoria took the title "Empress of India" from 1 May 1876.
The new title was proclaimed at the Delhi Durbar of 1 January 1877. Through Victoria's reign, the gradual establishment
of a modern constitutional monarchy in Britain continued. Reforms of the voting system increased the power of the
House of Commons at the expense of the House of Lords and the monarch. In 1867, Walter Bagehot wrote that the monarch
only retained "the right to be consulted, the right to encourage, and the right to warn". As Victoria's monarchy
became more symbolic than political, it placed a strong emphasis on morality and family values, in contrast to the
sexual, financial and personal scandals that had been associated with previous members of the House of Hanover and
which had discredited the monarchy. The concept of the "family monarchy", with which the burgeoning middle classes
could identify, was solidified. Victoria was pleased when Gladstone resigned in 1885 after his budget was defeated.
She thought his government was "the worst I have ever had", and blamed him for the death of General Gordon at Khartoum.
Gladstone was replaced by Lord Salisbury. Salisbury's government only lasted a few months, however, and Victoria
was forced to recall Gladstone, whom she referred to as a "half crazy & really in many ways ridiculous old man".
Gladstone attempted to pass a bill granting Ireland home rule, but to Victoria's glee it was defeated. In the ensuing
election, Gladstone's party lost to Salisbury's and the government switched hands again. According to one of her
biographers, Giles St Aubyn, Victoria wrote an average of 2,500 words a day during her adult life. From July 1832
until just before her death, she kept a detailed journal, which eventually encompassed 122 volumes. After Victoria's
death, her youngest daughter, Princess Beatrice, was appointed her literary executor. Beatrice transcribed and edited
the diaries covering Victoria's accession onwards, and burned the originals in the process. Despite this destruction,
much of the diaries' content survives. In addition to Beatrice's edited copy, Lord Esher transcribed the volumes from 1832
to 1861 before Beatrice destroyed them. Part of Victoria's extensive correspondence has been published in volumes
edited by A. C. Benson, Hector Bolitho, George Earle Buckle, Lord Esher, Roger Fulford, and Richard Hough among others.
Relations between Grand Lodges are determined by the concept of Recognition. Each Grand Lodge maintains a list of other Grand
Lodges that it recognises. When two Grand Lodges recognise and are in Masonic communication with each other, they
are said to be in amity, and the brethren of each may visit each other's Lodges and interact Masonically. When two
Grand Lodges are not in amity, inter-visitation is not allowed. There are many reasons why one Grand Lodge will withhold
or withdraw recognition from another, but the two most common are Exclusive Jurisdiction and Regularity. Since the
middle of the 19th century, Masonic historians have sought the origins of the movement in a series of similar documents
known as the Old Charges, dating from the Regius Poem in about 1425 to the beginning of the 18th century. Alluding
to the membership of a lodge of operative masons, they relate a mythologised history of the craft, the duties of
its grades, and the manner in which oaths of fidelity are to be taken on joining. The fifteenth century also saw
the first evidence of ceremonial regalia. A dispute during the Lausanne Congress of Supreme Councils of 1875 prompted
the Grand Orient de France to commission a report by a Protestant pastor which concluded that, as Freemasonry was
not a religion, it should not require a religious belief. The new constitutions read, "Its principles are absolute
liberty of conscience and human solidarity", the existence of God and the immortality of the soul being struck out.
It is possible that the immediate objections of the United Grand Lodge of England were at least partly motivated
by the political tension between France and Britain at the time. The result was the withdrawal of recognition of
the Grand Orient of France by the United Grand Lodge of England, a situation that continues today. At the dawn of
the Grand Lodge era, during the 1720s, James Anderson composed the first printed constitutions for Freemasons, the
basis for most subsequent constitutions, which specifically excluded women from Freemasonry. As Freemasonry spread,
continental masons began to include their ladies in Lodges of Adoption, which worked three degrees with the same
names as the men's but different content. The French officially abandoned the experiment in the early 19th century.
Later organisations with a similar aim emerged in the United States, but distinguished the names of the degrees from
those of male masonry. In contrast to Catholic allegations of rationalism and naturalism, Protestant objections are
more likely to be based on allegations of mysticism, occultism, and even Satanism. Masonic scholar Albert Pike is
often quoted (in some cases misquoted) by Protestant anti-Masons as an authority for the position of Masonry on these
issues. However, Pike, although undoubtedly learned, was not a spokesman for Freemasonry and was also controversial
among Freemasons in general. His writings represented his personal opinion only, and furthermore an opinion grounded
in the attitudes and understandings of late 19th century Southern Freemasonry of the USA. Notably, his book carries
in the preface a form of disclaimer from his own Grand Lodge. No one voice has ever spoken for the whole of Freemasonry.
In 1799, English Freemasonry almost came to a halt due to Parliamentary proclamation. In the wake of the French Revolution,
the Unlawful Societies Act 1799 banned any meetings of groups that required their members to take an oath or obligation.
The Grand Masters of both the Moderns and the Antients Grand Lodges called on Prime Minister William Pitt (who was
not a Freemason) and explained to him that Freemasonry was a supporter of the law and lawfully constituted authority
and was much involved in charitable work. As a result, Freemasonry was specifically exempted from the terms of the
Act, provided that each private lodge's Secretary placed with the local "Clerk of the Peace" a list of the members
of his lodge once a year. This continued until 1967 when the obligation of the provision was rescinded by Parliament.
In some countries, anti-Masonry is often related to antisemitism and anti-Zionism. For example, in 1980, the Iraqi
legal and penal code was changed by Saddam Hussein's ruling Ba'ath Party, making it a felony to "promote or acclaim
Zionist principles, including Freemasonry, or who associate [themselves] with Zionist organisations". Professor Andrew
Prescott of the University of Sheffield writes: "Since at least the time of the Protocols of the Elders of Zion,
antisemitism has gone hand in hand with anti-masonry, so it is not surprising that allegations that 11 September
was a Zionist plot have been accompanied by suggestions that the attacks were inspired by a masonic world order".
The bulk of Masonic ritual consists of degree ceremonies. Candidates for Freemasonry are progressively initiated
into Freemasonry, first in the degree of Entered Apprentice. Some time later, in a separate ceremony, they will be
passed to the degree of Fellowcraft, and finally they will be raised to the degree of Master Mason. In all of these
ceremonies, the candidate is entrusted with passwords, signs and grips peculiar to his new rank. Another ceremony
is the annual installation of the Master and officers of the Lodge. In some jurisdictions Installed Master is valued
as a separate rank, with its own secrets to distinguish its members. In other jurisdictions, the grade is not recognised,
and no inner ceremony conveys new secrets during the installation of a new Master of the Lodge. English Freemasonry
spread to France in the 1720s, first as lodges of expatriates and exiled Jacobites, and then as distinctively French
lodges which still follow the ritual of the Moderns. From France and England, Freemasonry spread to most of Continental
Europe during the course of the 18th century. The Grande Loge de France formed under the Grand Mastership of the
Duke of Clermont, who exercised only nominal authority. His successor, the Duke of Orléans, reconstituted the central
body as the Grand Orient de France in 1773. Briefly eclipsed during the French Revolution, French Freemasonry continued
to grow in the next century. The majority of Freemasonry considers the Liberal (Continental) strand to be Irregular,
and thus withhold recognition. For the Continental lodges, however, having a different approach to Freemasonry was
not a reason for severing masonic ties. In 1961, an umbrella organisation, Centre de Liaison et d'Information des
Puissances maçonniques Signataires de l'Appel de Strasbourg (CLIPSAS) was set up, which today provides a forum for
most of these Grand Lodges and Grand Orients worldwide. Included in the list of over 70 Grand Lodges and Grand Orients
are representatives of all three of the above categories, including mixed and women's organisations. The United Grand
Lodge of England does not communicate with any of these jurisdictions, and expects its allies to follow suit. This
creates the distinction between Anglo-American and Continental Freemasonry. The denomination with the longest history
of objection to Freemasonry is the Roman Catholic Church. The objections raised by the Roman Catholic Church are
based on the allegation that Masonry teaches a naturalistic deistic religion which is in conflict with Church doctrine.
A number of Papal pronouncements have been issued against Freemasonry. The first was Pope Clement XII's In eminenti
apostolatus, 28 April 1738; the most recent was Pope Leo XIII's Ab apostolici, 15 October 1890. The 1917 Code of
Canon Law explicitly declared that joining Freemasonry entailed automatic excommunication, and banned books favouring
Freemasonry. In 1933, the Orthodox Church of Greece officially declared that being a Freemason constitutes an act
of apostasy and thus, until he repents, the person involved with Freemasonry cannot partake of the Eucharist. This
has been generally affirmed throughout the whole Eastern Orthodox Church. The Orthodox critique of Freemasonry agrees
with both the Roman Catholic and Protestant versions: "Freemasonry cannot be at all compatible with Christianity
as far as it is a secret organisation, acting and teaching in mystery and secret and deifying rationalism." In addition,
most Grand Lodges require the candidate to declare a belief in a Supreme Being. In a few cases, the candidate may
be required to be of a specific religion. The form of Freemasonry most common in Scandinavia (known as the Swedish
Rite), for example, accepts only Christians. At the other end of the spectrum, "Liberal" or Continental Freemasonry,
exemplified by the Grand Orient de France, does not require a declaration of belief in any deity, and accepts atheists
(a cause of discord with the rest of Freemasonry). Exclusive Jurisdiction is a concept whereby only one Grand Lodge
will be recognised in any geographical area. If two Grand Lodges claim jurisdiction over the same area, the other
Grand Lodges will have to choose between them, and they may not all decide to recognise the same one. (In 1849, for
example, the Grand Lodge of New York split into two rival factions, each claiming to be the legitimate Grand Lodge.
Other Grand Lodges had to choose between them until the schism was healed.) Exclusive Jurisdiction can be waived
when the two over-lapping Grand Lodges are themselves in Amity and agree to share jurisdiction (for example, since
the Grand Lodge of Connecticut is in Amity with the Prince Hall Grand Lodge of Connecticut, the principle of Exclusive
Jurisdiction does not apply, and other Grand Lodges may recognise both). There is no clear mechanism by which these
local trade organisations became today's Masonic Lodges, but the earliest rituals and passwords known, from operative
lodges around the turn of the 17th–18th centuries, show continuity with the rituals developed in the later 18th century
by accepted or speculative Masons, as those members who did not practice the physical craft came to be known. The
minutes of the Lodge of Edinburgh (Mary's Chapel) No. 1 in Scotland show a continuity from an operative lodge in
1598 to a modern speculative Lodge. It is reputed to be the oldest Masonic Lodge in the world. Prince Hall Freemasonry
exists because of the refusal of early American lodges to admit African-Americans. In 1775, an African-American named
Prince Hall, along with fourteen other African-Americans, was initiated into a British military lodge with a warrant
from the Grand Lodge of Ireland, having failed to obtain admission from the other lodges in Boston. When the military
Lodge left North America, those fifteen men were given the authority to meet as a Lodge, but not to initiate Masons.
In 1784, these individuals obtained a Warrant from the Premier Grand Lodge of England (GLE) and formed African Lodge,
Number 459. When the UGLE was formed in 1813, all U.S.-based Lodges were stricken from their rolls – due largely
to the War of 1812. Thus, separated from both UGLE and any concordantly recognised U.S. Grand Lodge, African Lodge
re-titled itself as the African Lodge, Number 1 – and became a de facto "Grand Lodge" (this Lodge is not to be confused
with the various Grand Lodges on the Continent of Africa). As with the rest of U.S. Freemasonry, Prince Hall Freemasonry
soon grew and organised on a Grand Lodge system for each state. Maria Deraismes was initiated into Freemasonry in
1882, then resigned to allow her lodge to rejoin their Grand Lodge. Having failed to achieve acceptance from any
masonic governing body, she and Georges Martin started a mixed masonic lodge that actually worked masonic ritual.
Annie Besant spread the phenomenon to the English speaking world. Disagreements over ritual led to the formation
of exclusively female bodies of Freemasons in England, which spread to other countries. Meanwhile, the French had
re-invented Adoption as an all-female lodge in 1901, only to cast it aside again in 1935. The lodges, however, continued
to meet, which gave rise, in 1959, to a body of women practising continental Freemasonry. Many Islamic anti-Masonic
arguments are closely tied to both antisemitism and Anti-Zionism, though other criticisms are made such as linking
Freemasonry to al-Masih ad-Dajjal (the false Messiah). Some Muslim anti-Masons argue that Freemasonry promotes the
interests of the Jews around the world and that one of its aims is to destroy the Al-Aqsa Mosque in order to rebuild
the Temple of Solomon in Jerusalem. In article 28 of its Covenant, Hamas states that Freemasonry, Rotary, and other
similar groups "work in the interest of Zionism and according to its instructions ..." The preserved records of the
Reichssicherheitshauptamt (the Reich Security Main Office) show the persecution of Freemasons during the Holocaust.
RSHA Amt VII (Written Records) was overseen by Professor Franz Six and was responsible for "ideological" tasks, by
which was meant the creation of antisemitic and anti-Masonic propaganda. While the number is not accurately known,
it is estimated that between 80,000 and 200,000 Freemasons were killed under the Nazi regime. Masonic concentration
camp inmates were graded as political prisoners and wore an inverted red triangle. Freemasonry consists of fraternal
organisations that trace their origins to the local fraternities of stonemasons, which from the end of the fourteenth
century regulated the qualifications of stonemasons and their interaction with authorities and clients. The degrees
of freemasonry retain the three grades of medieval craft guilds, those of Apprentice, Journeyman or fellow (now called
Fellowcraft), and Master Mason. These are the degrees offered by Craft (or Blue Lodge) Freemasonry. Members of these
organisations are known as Freemasons or Masons. There are additional degrees, which vary with locality and jurisdiction,
and are usually administered by different bodies than the craft degrees. Candidates for Freemasonry will have met
most active members of the Lodge they are joining before they are initiated. The process varies between jurisdictions,
but the candidate will typically have been introduced by a friend at a Lodge social function, or at some form of
open evening in the Lodge. In modern times, interested people often track down a local Lodge through the Internet.
The onus is on candidates to ask to join; while candidates may be encouraged to ask, they are never invited. Once
the initial inquiry is made, an interview usually follows to determine the candidate's suitability. If the candidate
decides to proceed from here, the Lodge ballots on the application before he (or she, depending on the Masonic Jurisdiction)
can be accepted. Freemasonry, as it exists in various forms all over the world, has a membership estimated by the
United Grand Lodge of England at around six million worldwide. The fraternity is administratively organised into
independent Grand Lodges (or sometimes Grand Orients), each of which governs its own Masonic jurisdiction, which
consists of subordinate (or constituent) Lodges. The largest single jurisdiction, in terms of membership, is the
United Grand Lodge of England (with a membership estimated at around a quarter million). The Grand Lodge of Scotland
and Grand Lodge of Ireland (taken together) have approximately 150,000 members. In the United States total membership
is just under two million. The idea of Masonic brotherhood probably descends from a 16th-century legal definition
of a brother as one who has taken an oath of mutual support to another. Accordingly, Masons swear at each degree
to keep the contents of that degree secret, and to support and protect their brethren unless they have broken the
law. In most Lodges the oath or obligation is taken on a Volume of Sacred Law, whichever book of divine revelation
is appropriate to the religious beliefs of the individual brother (usually the Bible in the Anglo-American tradition).
In Progressive continental Freemasonry, books other than scripture are permissible, a cause of rupture between Grand
Lodges. The earliest known American lodges were in Pennsylvania. The Collector for the port of Pennsylvania, John
Moore, wrote of attending lodges there in 1715, two years before the formation of the first Grand Lodge in London.
The Premier Grand Lodge of England appointed a Provincial Grand Master for North America in 1731, based in Pennsylvania.
Other lodges in the colony obtained authorisations from the later Antient Grand Lodge of England, the Grand Lodge
of Scotland, and the Grand Lodge of Ireland, which was particularly well represented in the travelling lodges of
the British Army. Many lodges came into existence with no warrant from any Grand Lodge, applying and paying for their
authorisation only after they were confident of their own survival. Masonic lodges existed in Iraq as early as 1917,
when the first lodge under the United Grand Lodge of England (UGLE) was opened. Nine lodges under UGLE existed by
the 1950s, and a Scottish lodge was formed in 1923. However, the position changed following the revolution, and all
lodges were forced to close in 1965. This position was later reinforced under Saddam Hussein; the death penalty was
"prescribed" for those who "promote or acclaim Zionist principles, including freemasonry, or who associate [themselves]
with Zionist organisations." The ritual form on which the Grand Orient of France was based was abolished in England
in the events leading to the formation of the United Grand Lodge of England in 1813. However, the two jurisdictions
continued in amity (mutual recognition) until events of the 1860s and 1870s drove a seemingly permanent wedge between
them. In 1868 the Supreme Council of the Ancient and Accepted Scottish Rite of the State of Louisiana appeared in
the jurisdiction of the Grand Lodge of Louisiana, recognised by the Grand Orient de France, but regarded by the older
body as an invasion of their jurisdiction. The new Scottish rite body admitted blacks, and the resolution of the
Grand Orient the following year that neither colour, race, nor religion could disqualify a man from Masonry prompted
the Grand Lodge to withdraw recognition, and it persuaded other American Grand Lodges to do the same. In 1983, the
Church issued a new code of canon law. Unlike its predecessor, the 1983 Code of Canon Law did not explicitly name
Masonic orders among the secret societies it condemns. It states: "A person who joins an association which plots
against the Church is to be punished with a just penalty; one who promotes or takes office in such an association
is to be punished with an interdict." This omission of Masonic orders caused both Catholics and Freemasons
to believe that the ban on Catholics becoming Freemasons may have been lifted, especially after the perceived liberalisation
of Vatican II. However, the matter was clarified when Cardinal Joseph Ratzinger (later Pope Benedict XVI), as the
Prefect of the Congregation for the Doctrine of the Faith, issued a Declaration on Masonic Associations, which states:
"... the Church's negative judgment in regard to Masonic association remains unchanged since their principles have
always been considered irreconcilable with the doctrine of the Church and therefore membership in them remains forbidden.
The faithful who enroll in Masonic associations are in a state of grave sin and may not receive Holy Communion."
For its part, Freemasonry has never objected to Catholics joining their fraternity. Those Grand Lodges in amity with
UGLE deny the Church's claims. The UGLE now states that "Freemasonry does not seek to replace a Mason's religion
or provide a substitute for it." Even in modern democracies, Freemasonry is sometimes viewed with distrust. In the
UK, Masons working in the justice system, such as judges and police officers, were from 1999 to 2009 required to
disclose their membership. While a parliamentary inquiry found no evidence of wrongdoing, it
was felt that any potential loyalties Masons might have, based on their vows to support fellow Masons, should be
transparent to the public. The policy of requiring a declaration of masonic membership of applicants for judicial
office (judges and magistrates) was ended in 2009 by Justice Secretary Jack Straw (who had initiated the requirement
in the 1990s). Straw stated that the rule was considered disproportionate, since no impropriety or malpractice had
been shown as a result of judges being Freemasons. The Masonic Lodge is the basic organisational unit of Freemasonry.
The Lodge meets regularly to conduct the usual formal business of any small organisation (pay bills, organise social
and charitable events, elect new members, etc.). In addition to business, the meeting may perform a ceremony to confer
a Masonic degree or receive a lecture, which is usually on some aspect of Masonic history or ritual. At the conclusion
of the meeting, the Lodge might adjourn for a formal dinner, or festive board, sometimes involving toasting and song.
During the ceremony of initiation, the candidate is expected to swear (usually on a volume of sacred text appropriate
to his personal religious faith) to fulfil certain obligations as a Mason. In the course of three degrees, new masons
will promise to keep the secrets of their degree from lower degrees and outsiders, and to support a fellow Mason
in distress (as far as practicality and the law permit). There is instruction as to the duties of a Freemason, but
on the whole, Freemasons are left to explore the craft in the manner they find most satisfying. Some will further
explore the ritual and symbolism of the craft, others will focus their involvement on the social side of the Lodge,
while still others will concentrate on the charitable functions of the lodge. Regularity is a concept based on adherence
to Masonic Landmarks, the basic membership requirements, tenets and rituals of the craft. Each Grand Lodge sets its
own definition of what these landmarks are, and thus what is Regular and what is Irregular (and the definitions do
not necessarily agree between Grand Lodges). Essentially, every Grand Lodge will hold that its landmarks (its requirements,
tenets and rituals) are Regular, and judge other Grand Lodges based on those. If the differences are significant,
one Grand Lodge may declare the other "Irregular" and withdraw or withhold recognition. All Freemasons begin their
journey in the "craft" by being progressively initiated, passed and raised into the three degrees of Craft, or Blue
Lodge Masonry. During these three rituals, the candidate is progressively taught the meanings of the Lodge symbols,
and entrusted with grips, signs and words to signify to other Masons that he has been so initiated. The initiations
are part allegory and part lecture, and revolve around the construction of the Temple of Solomon, and the artistry
and death of his chief architect, Hiram Abiff. The degrees are those of Entered apprentice, Fellowcraft and Master
Mason. While many different versions of these rituals exist, with at least two different lodge layouts and versions
of the Hiram myth, each version is recognisable to any Freemason from any jurisdiction. The first Grand Lodge, the
Grand Lodge of London and Westminster (later called the Grand Lodge of England (GLE)), was founded on 24 June 1717,
when four existing London Lodges met for a joint dinner. Many English Lodges joined the new regulatory body, which
itself entered a period of self-publicity and expansion. However, many Lodges could not endorse changes which some
Lodges of the GLE made to the ritual (they came to be known as the Moderns), and a few of these formed a rival Grand
Lodge on 17 July 1751, which they called the "Antient Grand Lodge of England." These two Grand Lodges vied for supremacy
until the Moderns promised to return to the ancient ritual. They united on 27 December 1813 to form the United Grand
Lodge of England (UGLE). Widespread segregation in 19th- and early 20th-century North America made it difficult for
African-Americans to join Lodges outside of Prince Hall jurisdictions – and impossible for inter-jurisdiction recognition
between the parallel U.S. Masonic authorities. By the 1980s, such discrimination was a thing of the past, and today
most U.S. Grand Lodges recognise their Prince Hall counterparts, and the authorities of both traditions are working
towards full recognition. The United Grand Lodge of England has no problem with recognising Prince Hall Grand Lodges.
While celebrating their heritage as lodges of black Americans, Prince Hall is open to all men regardless of race
or religion. In general, Continental Freemasonry is sympathetic to Freemasonry amongst women, dating from the 1890s
when French lodges assisted the emergent co-masonic movement by promoting enough of their members to the 33rd degree
of the Ancient and Accepted Scottish Rite to allow them, in 1899, to form their own grand council, recognised by
the other Continental Grand Councils of that Rite. The United Grand Lodge of England issued a statement in 1999 recognising
the two women's grand lodges there to be regular in all but the participants. While they were not, therefore, recognised
as regular, they were part of Freemasonry "in general". The attitude of most regular Anglo-American grand lodges
remains that women Freemasons are not legitimate Masons. Since the founding of Freemasonry, many Bishops of the Church
of England have been Freemasons, such as Archbishop Geoffrey Fisher. In the past, few members of the Church of England
would have seen any incongruity in concurrently adhering to Anglican Christianity and practising Freemasonry. In
recent decades, however, reservations about Freemasonry have increased within Anglicanism, perhaps due to the increasing
prominence of the evangelical wing of the church. The former Archbishop of Canterbury, Dr Rowan Williams, appeared
to harbour some reservations about Masonic ritual, whilst being anxious to avoid causing offence to Freemasons inside
and outside the Church of England. In 2003 he felt it necessary to apologise to British Freemasons after he said
that their beliefs were incompatible with Christianity and that he had barred the appointment of Freemasons to senior
posts in his diocese when he was Bishop of Monmouth. In Italy, Freemasonry has become linked to a scandal concerning
the Propaganda Due lodge (a.k.a. P2). This lodge was chartered by the Grande Oriente d'Italia in 1877, as a lodge
for visiting Masons unable to attend their own lodges. Under Licio Gelli's leadership, in the late 1970s, P2 became
involved in the financial scandals that nearly bankrupted the Vatican Bank. However, by this time the lodge was operating
independently and irregularly, as the Grand Orient had revoked its charter and expelled Gelli in 1976.

On 29 November 1947, the United Nations General Assembly recommended the adoption and implementation of the Partition Plan
for Mandatory Palestine. This UN plan specified borders for new Arab and Jewish states and also specified an area
of Jerusalem and its environs which was to be administered by the UN under an international regime. The end of the
British Mandate for Palestine was set for midnight on 14 May 1948. That day, David Ben-Gurion, the executive head
of the Zionist Organization and president of the Jewish Agency for Palestine, declared "the establishment of a Jewish
state in Eretz Israel, to be known as the State of Israel," which would start to function from the termination of
the mandate. The borders of the new state were not specified in the declaration. Neighboring Arab armies invaded
the former Palestinian mandate on the next day and fought the Israeli forces. Israel has since fought several wars
with neighboring Arab states, in the course of which it has occupied the West Bank, Sinai Peninsula (1956–57, 1967–82),
part of Southern Lebanon (1982–2000), Gaza Strip (1967–2005; still considered occupied after 2005 disengagement)
and the Golan Heights. It extended its laws to the Golan Heights and East Jerusalem, but not the West Bank. Efforts
to resolve the Israeli–Palestinian conflict have not resulted in peace. However, peace treaties between Israel and
both Egypt and Jordan have successfully been signed. Israel's occupation of Gaza, the West Bank and East Jerusalem
is the world's longest military occupation in modern times. Israel (/ˈɪzreɪəl/ or /ˈɪzriːəl/; Hebrew: יִשְׂרָאֵל
Yisrā'el; Arabic: إِسْرَائِيل Isrāʼīl), officially the State of Israel (Hebrew: מְדִינַת יִשְׂרָאֵל Medīnat Yisrā'el
[mediˈnat jisʁaˈʔel] ( listen); Arabic: دولة إِسْرَائِيل Dawlat Isrāʼīl [dawlat ʔisraːˈʔiːl]), is a sovereign state
in Western Asia. The country is situated in the Middle East at the southeastern shore of the Mediterranean Sea and
the northern shore of the Gulf of Aqaba in the Red Sea. It shares land borders with Lebanon to the north, Syria to
the northeast, Jordan to the east, the Palestinian territories (which are claimed by the State of Palestine and are
partially controlled by Israel) comprising the West Bank and Gaza Strip to the east and west, respectively, and Egypt
to the southwest. It contains geographically diverse features within its relatively small area. Israel's financial
and technology center is Tel Aviv while Jerusalem is both the self-designated capital and most populous individual
city under the country's governmental administration. Israeli sovereignty over Jerusalem is internationally unrecognized.
The population of Israel, as defined by the Israel Central Bureau of Statistics, was estimated in 2016 to be 8,476,600
people. It is the world's only Jewish-majority state, with 6,345,400 citizens, or 74.9%, being designated as Jewish.
The country's second-largest group of citizens is denoted as Arabs, numbering 1,760,400 people (including the Druze
and most East Jerusalem Arabs). The great majority of Israeli Arabs are Sunni Muslims, with smaller but significant
numbers of semi-settled Negev Bedouins; the rest are Christians and Druze. Other far smaller minorities include Maronites,
Samaritans, Dom people and Roma, Black Hebrew Israelites, other Sub-Saharan Africans, Armenians, Circassians, Vietnamese
boat people, and others. Israel also hosts a significant population of non-citizen foreign workers and asylum seekers
from Africa and Asia. In its Basic Laws, Israel defines itself as a Jewish and democratic state. Israel is a representative
democracy with a parliamentary system, proportional representation and universal suffrage. The prime minister serves
as head of government and the Knesset serves as the legislature. Israel is a developed country and an OECD member,
with the 35th-largest economy in the world by nominal gross domestic product as of 2015. The country benefits
from a highly skilled workforce and is among the most educated countries in the world, with one of the highest
percentages of citizens holding a tertiary education degree. The country has the highest standard of living in
the Middle East and the fourth highest in Asia, and has one of the highest life expectancies in the world. The names
Land of Israel and Children of Israel have historically been used to refer to the biblical Kingdom of Israel and
the entire Jewish people respectively. The name "Israel" (Standard Yisraʾel, Isrāʾīl; Septuagint Greek: Ἰσραήλ Israēl;
'El (God) persists/rules', though after Hosea 12:4 it is often interpreted as "struggle with God") in these phrases refers
to the patriarch Jacob who, according to the Hebrew Bible, was given the name after he successfully wrestled with
the angel of the Lord. Jacob's twelve sons became the ancestors of the Israelites, also known as the Twelve Tribes
of Israel or Children of Israel. Jacob and his sons had lived in Canaan but were forced by famine to go into Egypt
for four generations, lasting 430 years, until Moses, a great-great grandson of Jacob, led the Israelites back into
Canaan during the "Exodus". The earliest known archaeological artifact to mention the word "Israel" is the Merneptah
Stele of ancient Egypt (dated to the late 13th century BCE). The notion of the "Land of Israel", known in Hebrew
as Eretz Yisrael, has been important and sacred to the Jewish people since Biblical times. According to the Torah,
God promised the land to the three Patriarchs of the Jewish people. On the basis of scripture, the period of the
three Patriarchs has been placed somewhere in the early 2nd millennium BCE, and the first Kingdom of Israel was established
around the 11th century BCE. Subsequent Israelite kingdoms and states ruled intermittently over the next four hundred
years, and are known from various extra-biblical sources. The first record of the name Israel (as ysrỉꜣr) occurs
in the Merneptah stele, erected for Egyptian Pharaoh Merneptah c. 1209 BCE, "Israel is laid waste and his seed is
not." This "Israel" was a cultural and probably political entity of the central highlands, well enough established
to be perceived by the Egyptians as a possible challenge to their hegemony, but an ethnic group rather than an organised
state; Ancestors of the Israelites may have included Semites native to Canaan and the Sea Peoples. McNutt says, "It
is probably safe to assume that sometime during Iron Age I a population began to identify itself as 'Israelite'", differentiating
itself from the Canaanites through such markers as the prohibition of intermarriage, an emphasis on family history
and genealogy, and religion. Around 930 BCE, the kingdom split into a southern Kingdom of Judah and a northern Kingdom
of Israel. From the middle of the 8th century BCE Israel came into increasing conflict with the expanding neo-Assyrian
empire, which under Tiglath-Pileser III first split Israel's territory into several smaller units and then destroyed
its capital, Samaria (722 BCE). An Israelite revolt (724–722 BCE) was crushed after the siege and capture of Samaria
by the Assyrian king Sargon II. Sargon's son, Sennacherib, tried and failed to conquer Judah. Assyrian records say
he leveled 46 walled cities and besieged Jerusalem, leaving after receiving extensive tribute. In 586 BCE King Nebuchadnezzar
II of Babylon conquered Judah. According to the Hebrew Bible, he destroyed Solomon's Temple and exiled the Jews to
Babylon. The defeat was also recorded by the Babylonians (see the Babylonian Chronicles). In 538 BCE, Cyrus the Great
of Persia conquered Babylon and took over its empire. Cyrus issued a proclamation granting subjugated nations (including
the people of Judah) religious freedom (for the original text, which corroborates the biblical narrative only in
very broad terms, see the Cyrus Cylinder). According to the Hebrew Bible, 50,000 Judeans, led by Zerubabel, returned
to Judah and rebuilt the temple. A second group of 5,000, led by Ezra and Nehemiah, returned to Judah in 456 BCE
although non-Jews wrote to Cyrus to try to prevent their return. Under successive Persian rule, the region, divided
between Syria-Coele province and later the autonomous Yehud Medinata, gradually developed back into an urban society,
largely dominated by Judeans. The Greek conquests largely bypassed the region, meeting no resistance and taking little interest in it.
Incorporated into the Ptolemaic and finally the Seleucid Empire, the southern Levant was heavily hellenized, fuelling
tensions between Judeans and Greeks. The conflict erupted in 167 BCE with the Maccabean Revolt, which succeeded in
establishing an independent Hasmonean Kingdom in Judah, which later expanded over much of modern Israel, as the Seleucids
gradually lost control of the region. With the decline of the Herodians, Judea, transformed into a Roman province, became
the site of a violent struggle of Jews against Greco-Romans, culminating in the Jewish-Roman Wars, ending in wide-scale
destruction, expulsions, and genocide. Jewish presence in the region significantly dwindled after the failure of
the Bar Kokhba revolt against the Roman Empire in 132 CE. Nevertheless, there was a continuous small Jewish presence
and Galilee became its religious center. The Mishnah and part of the Talmud, central Jewish texts, were composed
during the 2nd to 4th centuries CE in Tiberias and Jerusalem. The region came to be populated predominantly by Greco-Romans
on the coast and Samaritans in the hill-country. Christianity gradually supplanted Roman paganism while the
area stood under Byzantine rule. Through the 5th and 6th centuries, the dramatic events of the repeated Samaritan
revolts reshaped the land, with massive destruction to Byzantine Christian and Samaritan societies and a resulting
decrease of the population. After the Persian conquest and the installation of a short-lived Jewish Commonwealth
in 614 CE, the Byzantine Empire reconquered the country in 628. During the siege of Jerusalem by the First Crusade
in 1099, the Jewish inhabitants of the city fought side by side with the Fatimid garrison and the Muslim population
who tried in vain to defend the city against the Crusaders. When the city fell, about 60,000 people were massacred,
including 6,000 Jews seeking refuge in a synagogue. At this time, a full thousand years after the fall of the Jewish
state, there were Jewish communities all over the country. Fifty of them are known and include Jerusalem, Tiberias,
Ramleh, Ashkelon, Caesarea, and Gaza. According to Albert of Aachen, the Jewish residents of Haifa were the main
fighting force of the city, and "mixed with Saracen [Fatimid] troops", they fought bravely for close to a month until
forced into retreat by the Crusader fleet and land army. However, Joshua Prawer expressed doubt over the story, noting
that Albert did not attend the Crusades and that such a prominent role for the Jews is not mentioned by any other
source. In 1165 Maimonides visited Jerusalem and prayed on the Temple Mount, in the "great,
holy house". In 1141 Spanish-Jewish poet, Yehuda Halevi, issued a call to the Jews to emigrate to the Land of Israel,
a journey he undertook himself. In 1187 Sultan Saladin, founder of the Ayyubid dynasty, defeated the Crusaders in
the Battle of Hattin and subsequently captured Jerusalem and almost all of Palestine. In time, Saladin issued a proclamation
inviting Jews to return and settle in Jerusalem, and according to Judah al-Harizi, they did: "From the day the Arabs
took Jerusalem, the Israelites inhabited it." Al-Harizi compared Saladin's decree allowing Jews to re-establish themselves
in Jerusalem to the one issued by the Persian king Cyrus the Great over 1,600 years earlier. In 1211, the Jewish
community in the country was strengthened by the arrival of a group headed by over 300 rabbis from France and England,
among them Rabbi Samson ben Abraham of Sens. Nachmanides, the 13th-century Spanish rabbi and recognised leader of
Jewry, greatly praised the land of Israel and viewed its settlement as a positive commandment incumbent on all Jews.
He wrote "If the gentiles wish to make peace, we shall make peace and leave them on clear terms; but as for the land,
we shall not leave it in their hands, nor in the hands of any nation, not in any generation." In 1260, control passed
to the Mamluk sultans of Egypt. The country was located between the two centres of Mamluk power, Cairo and Damascus,
and only saw some development along the postal road connecting the two cities. Jerusalem, although left without the
protection of any city walls since 1219, also saw a flurry of new construction projects centred around the Al-Aqsa
Mosque compound (the Temple Mount). In 1266 the Mamluk Sultan Baybars converted the Cave of the Patriarchs in Hebron
into an exclusive Islamic sanctuary and banned Christians and Jews, who had previously been able to enter for a fee. The ban remained in place until Israel took control of the building in 1967. Since the existence
of the earliest Jewish diaspora, many Jews have aspired to return to "Zion" and the "Land of Israel", though the
amount of effort that should be spent towards such an aim was a matter of dispute. The hopes and yearnings of Jews
living in exile are an important theme of the Jewish belief system. After the Jews were expelled from Spain in 1492,
some communities settled in Palestine. During the 16th century, Jewish communities struck roots in the Four Holy
Cities—Jerusalem, Tiberias, Hebron, and Safed—and in 1697, Rabbi Yehuda Hachasid led a group of 1,500 Jews to Jerusalem.
In the second half of the 18th century, Eastern European opponents of Hasidism, known as the Perushim, settled in
Palestine. The first wave of modern Jewish migration to Ottoman-ruled Palestine, known as the First Aliyah, began
in 1881, as Jews fled pogroms in Eastern Europe. Although the Zionist movement already existed in practice, Austro-Hungarian
journalist Theodor Herzl is credited with founding political Zionism, a movement which sought to establish a Jewish
state in the Land of Israel, thus offering a solution to the so-called Jewish Question of the European states, in
conformity with the goals and achievements of other national projects of the time. In 1896, Herzl published Der Judenstaat
(The State of the Jews), offering his vision of a future Jewish state; the following year he presided over the first
Zionist Congress. The Second Aliyah (1904–14) began after the Kishinev pogrom; some 40,000 Jews settled in Palestine,
although nearly half of them left eventually. Both the first and second waves of migrants were mainly Orthodox Jews,
although the Second Aliyah included socialist groups who established the kibbutz movement. During World War I, British
Foreign Secretary Arthur Balfour sent the Balfour Declaration of 1917 to Baron Rothschild (Walter Rothschild, 2nd
Baron Rothschild), a leader of the British Jewish community, which stated that Britain intended for the creation of a Jewish "national home" in Palestine. The Jewish Legion, a group primarily of Zionist volunteers,
assisted in the British conquest of Palestine in 1918. Arab opposition to British rule and Jewish immigration led
to the 1920 Palestine riots and the formation of a Jewish militia known as the Haganah (meaning "The Defense" in
Hebrew), from which the Irgun and Lehi, or Stern Gang, paramilitary groups later split off. In 1922, the League of
Nations granted Britain a mandate over Palestine under terms which included the Balfour Declaration with its promise
to the Jews, and with similar provisions regarding the Arab Palestinians. The population of the area at this time
was predominantly Arab and Muslim, with Jews accounting for about 11%, and Arab Christians at about 9.5% of the population.
The Third (1919–23) and Fourth Aliyahs (1924–29) brought an additional 100,000 Jews to Palestine. Finally, the rise
of Nazism and the increasing persecution of Jews in 1930s Europe led to the Fifth Aliyah, with an influx of a quarter
of a million Jews. This was a major cause of the Arab revolt of 1936–39 during which the British Mandate authorities
alongside the Zionist militias of Haganah and Irgun killed 5,032 Arabs and wounded 14,760, resulting in over ten
percent of the adult male Palestinian Arab population killed, wounded, imprisoned or exiled. The British introduced
restrictions on Jewish immigration to Palestine with the White Paper of 1939. With countries around the world turning
away Jewish refugees fleeing the Holocaust, a clandestine movement known as Aliyah Bet was organized to bring Jews
to Palestine. By the end of World War II, the Jewish population of Palestine had increased to 33% of the total population.
On 22 July 1946, the Irgun attacked the British administrative headquarters for Palestine, housed in the southern wing of the King David Hotel in Jerusalem. Ninety-one people of various nationalities were killed and 46 were injured. The
hotel was the site of the central offices of the British Mandatory authorities of Palestine, principally the Secretariat
of the Government of Palestine and the Headquarters of the British Armed Forces in Palestine and Transjordan. The
attack initially had the approval of the Haganah (the principal Jewish paramilitary group in Palestine). It was conceived
as a response to Operation Agatha (a series of widespread raids, including one on the Jewish Agency, conducted by
the British authorities) and was the deadliest directed at the British during the Mandate era (1920–1948). After
World War II, Britain found itself in intense conflict with the Jewish community over Jewish immigration limits,
as well as continued conflict with the Arab community over limit levels. The Haganah joined Irgun and Lehi in an
armed struggle against British rule. At the same time, hundreds of thousands of Jewish Holocaust survivors and refugees
sought a new life far from their destroyed communities in Europe. The Yishuv attempted to bring these refugees to
Palestine but many were turned away or rounded up and placed in detention camps in Atlit and Cyprus by the British.
Escalating violence culminated with the 1946 King David Hotel bombing which Bruce Hoffman characterized as one of
the "most lethal terrorist incidents of the twentieth century". In 1947, the British government announced it would
withdraw from Mandatory Palestine, stating it was unable to arrive at a solution acceptable to both Arabs and Jews.
On 15 May 1947, the General Assembly of the newly formed United Nations resolved that a committee, the United Nations
Special Committee on Palestine (UNSCOP), be created "to prepare for consideration at the next regular session of
the Assembly a report on the question of Palestine". In the Report of the Committee dated 3 September 1947 to the
UN General Assembly, the majority of the Committee in Chapter VI proposed a plan to replace the British Mandate with
"an independent Arab State, an independent Jewish State, and the City of Jerusalem ... the last to be under an International
Trusteeship System". On 29 November 1947, the General Assembly adopted a resolution recommending the adoption and
implementation of the Plan of Partition with Economic Union as Resolution 181 (II). The Plan attached to the resolution
was essentially that proposed by the majority of the Committee in the Report of 3 September 1947. On 14 May 1948, the day before the British Mandate expired, David Ben-Gurion declared the establishment of the State of Israel. The following day, the armies of four Arab countries—Egypt, Syria, Transjordan and Iraq—entered what had been British Mandatory Palestine, launching the 1948 Arab–Israeli War; contingents from Yemen, Morocco, Saudi Arabia and Sudan joined the war. The
apparent purpose of the invasion was to prevent the establishment of the Jewish state at inception, and some Arab
leaders talked about driving the Jews into the sea. According to Benny Morris, Jews felt that the invading Arab armies
aimed to slaughter the Jews. The Arab League stated that the invasion was to restore law and order and to prevent
further bloodshed. Immigration to Israel during the late 1940s and early 1950s was aided by the Israeli Immigration
Department and the non-government sponsored Mossad LeAliyah Bet ("Institution for Illegal Immigration"). Both groups
facilitated regular immigration logistics like arranging transportation, but the latter also engaged in clandestine
operations in countries, particularly in the Middle East and Eastern Europe, where the lives of Jews were believed
to be in danger and exit from those places was difficult. Mossad LeAliyah Bet continued to take part in immigration
efforts until its disbanding in 1953. An influx of Holocaust survivors and Jews from Arab and Muslim lands immigrated to Israel during the state's first three years, and the number of Jews increased from 700,000 to 1,400,000; many of them had faced persecution in their countries of origin. The immigration was in accordance with the One Million Plan. Consequently,
the population of Israel rose from 800,000 to two million between 1948 and 1958. Between 1948 and 1970, approximately
1,150,000 Jewish refugees relocated to Israel. The immigrants came to Israel for differing reasons. Some believed
in a Zionist ideology, while others moved to escape persecution. Others came for the promise of a better life in Israel, and a small number were expelled from their homelands, such as British and French Jews
in Egypt after the Suez Crisis. Some new immigrants arrived as refugees with no possessions and were housed in temporary
camps known as ma'abarot; by 1952, over 200,000 immigrants were living in these tent cities. During this period,
food, clothes and furniture had to be rationed in what became known as the Austerity Period. The need to solve the
crisis led Ben-Gurion to sign a reparations agreement with West Germany that triggered mass protests by Jews angered
at the idea that Israel could accept monetary compensation for the Holocaust. In 1950 Egypt closed the Suez Canal
to Israeli shipping and tensions mounted as armed clashes took place along Israel's borders. During the 1950s, Israel
was frequently attacked by Palestinian fedayeen, nearly always against civilians, mainly from the Egyptian-occupied
Gaza Strip, leading to several Israeli counter-raids. In 1956, Great Britain and France aimed at regaining control
of the Suez Canal, which the Egyptians had nationalized (see the Suez Crisis). The continued blockade of the Suez
Canal and Straits of Tiran to Israeli shipping, together with the growing number of fedayeen attacks against Israel's southern population and recent grave and threatening statements by Arab leaders, prompted Israel to attack Egypt. Israel joined
a secret alliance with Great Britain and France and overran the Sinai Peninsula but was pressured to withdraw by
the United Nations in return for guarantees of Israeli shipping rights in the Red Sea via the Straits of Tiran and
the Canal. The war resulted in a significant reduction of Israeli border infiltration. Arab nationalists
led by Egyptian President Gamal Abdel Nasser refused to recognize Israel, and called for its destruction. By 1966,
Israeli-Arab relations had deteriorated to the point of actual battles taking place between Israeli and Arab forces.
In May 1967, Egypt massed its army near the border with Israel, expelled UN peacekeepers, stationed in the Sinai
Peninsula since 1957, and blocked Israel's access to the Red Sea. Other Arab states mobilized their
forces. Israel reiterated that these actions were a casus belli. On 5 June 1967, Israel launched a pre-emptive strike
against Egypt. Jordan, Syria and Iraq responded and attacked Israel. In the ensuing Six-Day War, Israel defeated Jordan and
captured the West Bank, defeated Egypt and captured the Gaza Strip and Sinai Peninsula, and defeated Syria and captured
the Golan Heights. Jerusalem's boundaries were enlarged, incorporating East Jerusalem, and the 1949 Green Line became
the administrative boundary between Israel and the occupied territories. Following the 1967 war and the "three nos"
resolution of the Arab League, during the 1967–1970 War of Attrition Israel faced attacks from the Egyptians in the
Sinai, and from Palestinian groups targeting Israelis in the occupied territories, in Israel proper, and around the
world. Most important among the various Palestinian and Arab groups was the Palestine Liberation Organization (PLO),
established in 1964, which initially committed itself to "armed struggle as the only way to liberate the homeland".
In the late 1960s and early 1970s, Palestinian groups launched a wave of attacks against Israeli and Jewish targets
around the world, including a massacre of Israeli athletes at the 1972 Summer Olympics in Munich. The Israeli government
responded with an assassination campaign against the organizers of the massacre, a bombing and a raid on the PLO
headquarters in Lebanon. On 6 October 1973, as Jews were observing Yom Kippur, the Egyptian and Syrian armies launched
a surprise attack against Israeli forces in the Sinai Peninsula and Golan Heights, that opened the Yom Kippur War.
The war ended on 26 October with Israel successfully repelling Egyptian and Syrian forces but having suffered over
2,500 soldiers killed in a war which collectively took 10,000–35,000 lives in just 20 days. An internal inquiry exonerated
the government of responsibility for failures before and during the war, but public anger forced Prime Minister Golda
Meir to resign. The 1977 Knesset elections marked a major turning point in Israeli political history as Menachem
Begin's Likud party took control from the Labor Party. Later that year, Egyptian President Anwar El Sadat made a
trip to Israel and spoke before the Knesset in what was the first recognition of Israel by an Arab head of state.
In the two years that followed, Sadat and Begin signed the Camp David Accords (1978) and the Israel–Egypt Peace Treaty
(1979). In return, Israel withdrew from the Sinai Peninsula, which Israel had captured during the Six-Day War in
1967, and agreed to enter negotiations over an autonomy for Palestinians in the West Bank and the Gaza Strip. On
11 March 1978, a PLO guerrilla raid from Lebanon led to the Coastal Road Massacre. Israel responded by launching an
invasion of southern Lebanon to destroy the PLO bases south of the Litani River. Most PLO fighters withdrew, but
Israel was able to secure southern Lebanon until a UN force and the Lebanese army could take over. The PLO soon resumed
its policy of attacks against Israel. In the next few years, the PLO infiltrated the south and kept up a sporadic
shelling across the border. Israel carried out numerous retaliatory attacks by air and on the ground. Meanwhile,
Begin's government provided incentives for Israelis to settle in the occupied West Bank, increasing friction with
the Palestinians in that area. The Basic Law: Jerusalem, the Capital of Israel, passed in 1980, was believed by some
to reaffirm Israel's 1967 annexation of Jerusalem by government decree, and reignited international controversy over
the status of the city. No Israeli legislation has defined the territory of Israel and no act specifically included
East Jerusalem therein. The position of the majority of UN member states is reflected in numerous resolutions declaring
that actions taken by Israel to settle its citizens in the West Bank, and impose its laws and administration on East
Jerusalem, are illegal and have no validity. In 1981 Israel annexed the Golan Heights, although annexation was not
recognized internationally. On 7 June 1981, the Israeli air force destroyed Iraq's sole nuclear reactor, in order
to impede Iraq's nuclear weapons program. The reactor was under construction just outside Baghdad. Following a series
of PLO attacks in 1982, Israel invaded Lebanon that year to destroy the bases from which the PLO launched attacks
and missiles into northern Israel. In the first six days of fighting, the Israelis destroyed the military forces
of the PLO in Lebanon and decisively defeated the Syrians. An Israeli government inquiry – the Kahan Commission –
would later hold Begin, Sharon and several Israeli generals indirectly responsible for the Sabra and Shatila massacre.
In 1985, Israel responded to a Palestinian terrorist attack in Cyprus by bombing the PLO headquarters in Tunis. Israel
withdrew from most of Lebanon in 1986, but maintained a borderland buffer zone in southern Lebanon until 2000, from
where Israeli forces engaged in conflict with Hezbollah. The First Intifada, a Palestinian uprising against Israeli
rule, broke out in 1987, with waves of uncoordinated demonstrations and violence occurring in the occupied West Bank
and Gaza. Over the following six years, the Intifada became more organised and included economic and cultural measures
aimed at disrupting the Israeli occupation. More than a thousand people were killed in the violence. During the 1991
Gulf War, the PLO supported Saddam Hussein and Iraqi Scud missile attacks against Israel. Despite public outrage,
Israel heeded US calls to refrain from hitting back and did not participate in that war. In 1992, Yitzhak Rabin became
Prime Minister following an election in which his party called for compromise with Israel's neighbors. The following
year, Shimon Peres, on behalf of Israel, and Mahmoud Abbas, for the PLO, signed the Oslo Accords, which gave the Palestinian
National Authority the right to govern parts of the West Bank and the Gaza Strip. The PLO also recognized Israel's
right to exist and pledged an end to terrorism. In 1994, the Israel–Jordan Treaty of Peace was signed, making Jordan
the second Arab country to normalize relations with Israel. Arab public support for the Accords was damaged by the
continuation of Israeli settlements and checkpoints, and the deterioration of economic conditions. Israeli public
support for the Accords waned as Israel was struck by Palestinian suicide attacks. Finally, while leaving a peace
rally in November 1995, Yitzhak Rabin was assassinated by a far-right-wing Jew who opposed the Accords. At the end
of the 1990s, Israel, under the leadership of Benjamin Netanyahu, withdrew from Hebron, and signed the Wye River
Memorandum, giving greater control to the Palestinian National Authority. Ehud Barak, elected Prime Minister in 1999,
began the new millennium by withdrawing forces from Southern Lebanon and conducting negotiations with Palestinian
Authority Chairman Yasser Arafat and U.S. President Bill Clinton at the 2000 Camp David Summit. During the summit,
Barak offered a plan for the establishment of a Palestinian state. The proposed state included the entirety of the
Gaza Strip and over 90% of the West Bank with Jerusalem as a shared capital, although some argue that the plan was
to annex areas which would lead to a cantonization of the West Bank into three blocs, which the Palestinian delegation
likened to South African "bantustans", a characterization disputed by the Israeli and American negotiators. Each
side blamed the other for the failure of the talks. After the collapse of the talks and a controversial visit by
Likud leader Ariel Sharon to the Temple Mount, the Second Intifada began. Some commentators contend that the uprising
was pre-planned by Yasser Arafat due to the collapse of peace talks. Sharon became prime minister in a 2001 special
election. During his tenure, Sharon carried out his plan to unilaterally withdraw from the Gaza Strip and also spearheaded
the construction of the Israeli West Bank barrier, ending the Intifada. By this time 1,100 Israelis had been killed,
mostly in suicide bombings. The Palestinian fatalities, by 30 April 2008, reached 4,745 killed by Israeli security
forces, 44 killed by Israeli civilians, and 577 killed by Palestinians. In July 2006, a Hezbollah artillery assault
on Israel's northern border communities and a cross-border abduction of two Israeli soldiers precipitated the month-long
Second Lebanon War. On 6 September 2007, the Israeli Air Force destroyed a nuclear reactor in Syria. In May 2008,
Israel confirmed it had been discussing a peace treaty with Syria for a year, with Turkey as a go-between. However,
at the end of the year, Israel entered another conflict as a ceasefire between Hamas and Israel collapsed. The Gaza
War lasted three weeks and ended after Israel announced a unilateral ceasefire. Hamas announced its own ceasefire,
with its own conditions of complete withdrawal and opening of border crossings. Although neither the rocket launchings nor the Israeli retaliatory strikes completely stopped, the fragile ceasefire remained in force. In what Israel
described as a response to more than a hundred Palestinian rocket attacks on southern Israeli cities, Israel began
an operation in Gaza on 14 November 2012, lasting eight days. Israel started another operation in Gaza following
an escalation of rocket attacks by Hamas in July 2014. The sovereign territory of Israel (according to the demarcation
lines of the 1949 Armistice Agreements and excluding all territories captured by Israel during the 1967 Six-Day War)
is approximately 20,770 square kilometers (8,019 sq mi) in area, of which two percent is water. However, Israel is so narrow that its exclusive economic zone in the Mediterranean is double the land area of the country. The total
area under Israeli law, including East Jerusalem and the Golan Heights, is 22,072 square kilometers (8,522 sq mi),
and the total area under Israeli control, including the military-controlled and partially Palestinian-governed territory
of the West Bank, is 27,799 square kilometers (10,733 sq mi). Despite its small size, Israel is home to a variety
of geographic features, from the Negev desert in the south to the inland fertile Jezreel Valley, mountain ranges
of the Galilee, Carmel and toward the Golan in the north. The Israeli Coastal Plain on the shores of the Mediterranean
is home to 57 percent of the nation's population. East of the central highlands lies the Jordan Rift Valley, which
forms a small part of the 6,500-kilometer (4,039 mi) Great Rift Valley. The Jordan River runs along the Jordan Rift
Valley, from Mount Hermon through the Hulah Valley and the Sea of Galilee to the Dead Sea, the lowest point on the
surface of the Earth. Further south is the Arabah, ending with the Gulf of Eilat, part of the Red Sea. Unique to
Israel and the Sinai Peninsula are makhteshim, or erosion cirques. The largest makhtesh in the world is Ramon Crater
in the Negev, which measures 40 by 8 kilometers (25 by 5 mi). A report on the environmental status of the Mediterranean
basin states that Israel has the largest number of plant species per square meter of all the countries in the basin.
The Jordan Rift Valley is the result of tectonic movements within the Dead Sea Transform (DSF) fault system. The
DSF forms the transform boundary between the African Plate to the west and the Arabian Plate to the east. The Golan
Heights and all of Jordan are part of the Arabian Plate, while the Galilee, West Bank, Coastal Plain, and Negev along
with the Sinai Peninsula are on the African Plate. This tectonic disposition leads to a relatively high seismic activity
in the region. The entire Jordan Valley segment is thought to have ruptured repeatedly, for instance during the last
two major earthquakes along this structure in 749 and 1033. The deficit in slip that has built up since the 1033
event is sufficient to cause an earthquake of Mw~7.4. The most catastrophic earthquakes we know of occurred in 31
BCE, 363, 749, and 1033 CE, that is, about every 400 years on average. Destructive earthquakes leading to serious loss
of life strike about every 80 years. While stringent construction regulations are currently in place and recently
built structures are earthquake-safe, as of 2007 the majority of the buildings in Israel were older than
these regulations and many public buildings as well as 50,000 residential buildings did not meet the new standards
and were "expected to collapse" if exposed to a strong quake. Given the fragile political situation of the Middle
East region and the presence there of major holy sites, a quake reaching magnitude 7 on the Richter scale could have
dire consequences for world peace. Temperatures in Israel vary widely, especially during the winter. Coastal areas,
such as those of Tel Aviv and Haifa, have a typical Mediterranean climate with cool, rainy winters and long, hot
summers. The area of Beersheba and the Northern Negev has a semi-arid climate with hot summers, cool winters and
fewer rainy days than the Mediterranean climate. The Southern Negev and the Arava areas have desert climate with
very hot and dry summers, and mild winters with few days of rain. The highest temperature in the continent of Asia
(54.0 °C or 129.2 °F) was recorded in 1942 at Tirat Zvi kibbutz in the northern Jordan river valley. At the other
extreme, mountainous regions can be windy and cold, and areas at an elevation of 750 meters or more (the elevation of Jerusalem) will usually receive at least one snowfall each year. From May to September, rain in Israel is rare. With scarce
water resources, Israel has developed various water-saving technologies, including drip irrigation. Israelis also
take advantage of the considerable sunlight available for solar energy, making Israel the leading nation in solar
energy use per capita (practically every house uses solar panels for water heating). In 2016, Israel's population
was an estimated 8,476,600 people, of whom 6,345,400 (74.9%) were recorded by the civil government as Jews.
1,760,400 Arabs comprised 20.7% of the population, while non-Arab Christians and people who have no religion listed
in the civil registry made up 4.4%. Over the last decade, large numbers of migrant workers from Romania, Thailand,
China, Africa, and South America have settled in Israel. Exact figures are unknown, as many of them are living in
the country illegally, but estimates run in the region of 203,000. By June 2012, approximately 60,000 African migrants
had entered Israel. About 92% of Israelis live in urban areas. As of 2009, over 300,000 Israeli citizens lived
in West Bank settlements such as Ma'ale Adumim and Ariel, including settlements that predated the establishment of
the State of Israel and which were re-established after the Six-Day War, in cities such as Hebron and Gush Etzion.
In 2011, there were 250,000 Jews living in East Jerusalem, and some 20,000 Israelis live in Golan Heights settlements. The
total number of Israeli settlers is over 500,000 (6.5% of the Israeli population). Approximately 7,800 Israelis lived
in settlements in the Gaza Strip, until they were evacuated by the government as part of its 2005 disengagement plan.
Israel was established as a homeland for the Jewish people and is often referred to as a Jewish state. The country's
Law of Return grants all Jews and those of Jewish ancestry the right to Israeli citizenship. Over three quarters,
or 75.5%, of the population are Jews from a diversity of Jewish backgrounds. Around 4% of Israelis (300,000), ethnically
defined as "others", are Russians of Jewish descent or family who are not Jewish according to rabbinical
law, but were eligible for Israeli citizenship under the Law of Return. Approximately 75% of Israeli Jews are born
in Israel, 17% are immigrants from Europe and the Americas, and 8% are immigrants from Asia and Africa (including
the Arab World). Jews from Europe and the former Soviet Union and their descendants born in Israel, including Ashkenazi
Jews, constitute approximately 50% of Jewish Israelis. Jews who left or fled Arab and Muslim countries and their
descendants, including both Mizrahi and Sephardi Jews, form most of the rest of the Jewish population. Jewish intermarriage
rates run at over 35% and recent studies suggest that the percentage of Israelis descended from both Sephardi and
Ashkenazi Jews increases by 0.5 percent every year, with over 25% of school children now originating from both communities.
Israel has two official languages, Hebrew and Arabic. Hebrew is the primary language of the state and is spoken every day by the majority of the population. Arabic is spoken by the Arab minority, and Hebrew is taught in Arab schools.
English was an official language during the Mandate period; it lost this status after the creation of Israel, but
retains a role comparable to that of an official language, as may be seen in road signs and official documents. Many
Israelis communicate reasonably well in English, as many television programs are broadcast in English with subtitles
and the language is taught from the early grades in elementary school. In addition, Israeli universities offer courses
in the English language on various subjects. Because Israel is a country of immigrants, many languages can be heard on its streets.
Due to mass immigration from the former Soviet Union and Ethiopia (some 130,000 Ethiopian Jews live in Israel), Russian
and Amharic are widely spoken. More than one million Russian-speaking immigrants arrived in Israel from the former
Soviet Union states between 1990 and 2004. French is spoken by around 700,000 Israelis, mostly originating from France
and North Africa (see Maghrebi Jews). Making up 16% of the population, Muslims constitute Israel's largest religious
minority. About 2% of the population is Christian and 1.5% is Druze. The Christian population primarily comprises
Arab Christians, but also includes post-Soviet immigrants, the foreign laborers of multinational origins, and followers
of Messianic Judaism, considered by most Christians and Jews to be a form of Christianity. Members of many other
religious groups, including Buddhists and Hindus, maintain a presence in Israel, albeit in small numbers. Out of
more than one million immigrants from the former Soviet Union in Israel, about 300,000 are considered not Jewish
by the Orthodox rabbinate. The city of Jerusalem is of special importance to Jews, Muslims and Christians as it is
the home of sites that are pivotal to their religious beliefs, such as the Old City that incorporates the Western
Wall and the Temple Mount, the Al-Aqsa Mosque and the Church of the Holy Sepulchre. Other locations of religious
importance in Israel are Nazareth (holy in Christianity as the site of the Annunciation of Mary), Tiberias and Safed
(two of the Four Holy Cities in Judaism), the White Mosque in Ramla (holy in Islam as the shrine of the prophet Saleh),
and the Church of Saint George in Lod (holy in Christianity and Islam as the tomb of Saint George or Al Khidr). A
number of other religious landmarks are located in the West Bank, among them Joseph's Tomb in Nablus, the birthplace
of Jesus and Rachel's Tomb in Bethlehem, and the Cave of the Patriarchs in Hebron. The administrative center of the
Bahá'í Faith and the Shrine of the Báb are located at the Bahá'í World Centre in Haifa; the leader of the faith is
buried in Acre. Apart from maintenance staff, there is no Bahá'í community in Israel, although it is a destination
for pilgrimages. Bahá'í staff in Israel do not teach their faith to Israelis, in keeping with a strict policy. A few miles
south of the Bahá'í World Centre is the Middle East centre of the reformist Ahmadiyya movement. Its mixed neighbourhood
of Jews and Ahmadi Arabs is the only one of its kind in the country. Education is highly valued in Israel's national culture, with historical roots dating back to Ancient Israel, where it was regarded as one of the fundamental building blocks of Israelite life. Israeli culture views higher education as the key to upward mobility and socioeconomic status in Israeli society. This emphasis on education traces back through the Jewish diaspora, from the Renaissance and the Enlightenment to the roots of Zionism in the 1880s. Jewish communities in the Levant were the first to introduce compulsory education, for which the organized community, no less than the parents, was responsible. Contemporary Jewish culture's strong emphasis on scholarship and learning, its propensity to cultivate intellectual pursuits, and the nation's high rate of university attainment all exemplify how highly Israeli society values higher education. The Israeli education system has been praised for various reasons, including its high quality
and its major role in spurring Israel's economic development and technological boom. Many international business
leaders and organizations such as Microsoft founder Bill Gates have praised Israel for its high quality of education
in helping spur Israel's economic development. In 2012, the country ranked second among OECD countries (tied with
Japan and after Canada) for the percentage of 25- to 64-year-olds that have attained tertiary education with 46 percent
compared with the OECD average of 32 percent. In addition, nearly twice as many Israelis aged 55–64 held a higher
education degree compared to other OECD countries, with 47 percent holding an academic degree compared with the OECD
average of 25%. In 2012, the country ranked third in the world in the number of academic degrees per capita (20 percent
of the population). Israel has a school life expectancy of 15.5 years and a literacy rate of 97.1% according to the
United Nations. The State Education Law, passed in 1953, established five types of schools: state secular, state
religious, ultra-Orthodox, communal settlement schools, and Arab schools. State secular schools form the largest group and are attended by the majority of Jewish and non-Arab pupils in Israel. Most Arabs send their children to
schools where Arabic is the language of instruction. Education is compulsory in Israel for children between the ages
of three and eighteen. Schooling is divided into three tiers – primary school (grades 1–6), middle school (grades
7–9), and high school (grades 10–12) – culminating with Bagrut matriculation exams. Proficiency in core subjects
such as mathematics, the Hebrew language, Hebrew and general literature, the English language, history, Biblical
scripture and civics is necessary to receive a Bagrut certificate. In Arab, Christian and Druze schools, the exam
on Biblical studies is replaced by an exam on Muslim, Christian or Druze heritage. Christian Arabs are one of the
most educated groups in Israel. Maariv has described the Christian Arab sector as "the most successful in the education system", since Christian Arabs have fared best in education compared with any other group receiving an education in Israel. Israeli children from Russian-speaking families have a higher bagrut pass rate at high-school level. Among immigrant children born in the former Soviet Union, the bagrut pass rate is highest among families from the western FSU states of Russia, Ukraine, Belarus and Moldova (62.6%) and lower among those from the Central Asian and Caucasian FSU states. In 2003, over half of all Israeli twelfth graders earned a matriculation certificate.
Israel has nine public universities that are subsidized by the state and 49 private colleges. The Hebrew University
of Jerusalem, Israel's second-oldest university after the Technion, houses the National Library of Israel, the world's
largest repository of Judaica and Hebraica. The Technion, the Hebrew University, and the Weizmann Institute are consistently ranked among the world's top 100 universities by the prestigious ARWU academic ranking. The Hebrew University of Jerusalem
and Tel Aviv University are ranked among the world's top 100 universities by Times Higher Education magazine. Other
major universities in the country include Bar-Ilan University, the University of Haifa, The Open University, and
Ben-Gurion University of the Negev. Ariel University, in the West Bank, is the newest university, having been upgraded from college status, and the first new university established in over thirty years. Israel's seven research universities (excluding the Open
University) are consistently ranked among the top 500 in the world. Israel operates under a parliamentary system as a
democratic republic with universal suffrage. A member of parliament supported by a parliamentary majority becomes
the prime minister—usually this is the chair of the largest party. The prime minister is the head of government and
head of the cabinet. Israel is governed by a 120-member parliament, known as the Knesset. Membership of the Knesset
is based on proportional representation of political parties, with a 3.25% electoral threshold, which in practice
has resulted in coalition governments. Israel has a three-tier court system. At the lowest level are magistrate courts,
situated in most cities across the country. Above them are district courts, serving as both appellate courts and
courts of first instance; they are situated in five of Israel's six districts. The third and highest tier is the
Supreme Court, located in Jerusalem; it serves a dual role as the highest court of appeals and the High Court of
Justice. In the latter role, the Supreme Court rules as a court of first instance, allowing individuals, both citizens
and non-citizens, to petition against the decisions of state authorities. Although Israel supports the goals of the
International Criminal Court, it has not ratified the Rome Statute, citing concerns about the ability of the court
to remain free from political bias. Israel's legal system combines three legal traditions: English common
law, civil law, and Jewish law. It is based on the principle of stare decisis (precedent) and is an adversarial system,
where the parties in the suit bring evidence before the court. Court cases are decided by professional judges rather
than juries. Marriage and divorce are under the jurisdiction of the religious courts: Jewish, Muslim, Druze, and
Christian. A committee of Knesset members, Supreme Court justices, and Israeli Bar members carries out the election
of judges. Administration of Israel's courts (both the "General" courts and the Labor Courts) is carried out by the Administration of Courts, situated in Jerusalem. Both General and Labor courts are paperless: court files and decisions are stored electronically. Israel's Basic Law: Human Dignity and Liberty seeks to
defend human rights and liberties in Israel. The State of Israel is divided into six main administrative districts,
known as mehozot (מחוזות; singular: mahoz) – Center, Haifa, Jerusalem, North, Southern, and Tel Aviv Districts, as
well as the Judea and Samaria Area in the West Bank. All of the Judea and Samaria Area and parts of the Jerusalem
and North districts are not recognized internationally as part of Israel. Districts are further divided into fifteen
sub-districts known as nafot (נפות; singular: nafa), which are themselves partitioned into fifty natural regions.
For statistical purposes, the country is divided into three metropolitan areas: Tel Aviv metropolitan area (population
3,206,400), Haifa metropolitan area (population 1,021,000), and Beer Sheva metropolitan area (population 559,700).
Israel's largest municipality, in population and area, is Jerusalem with 773,800 residents in an area of 126 square
kilometres (49 sq mi) (in 2009). Israeli government statistics on Jerusalem include the population and area of East
Jerusalem, which is widely recognized as part of the Palestinian territories under Israeli occupation. Tel Aviv,
Haifa, and Rishon LeZion rank as Israel's next most populous cities, with populations of 393,900, 265,600, and 227,600
respectively. Since Israel's capture of the West Bank, Gaza Strip, East Jerusalem, and Golan Heights in 1967, Israeli settlements and military installations have been built within each of these territories. Israel has applied civilian law to the Golan Heights and East Jerusalem and granted their
inhabitants permanent residency status and the ability to apply for citizenship. The West Bank, outside of the Israeli
settlements within the territory, has remained under direct military rule, and Palestinians in this area cannot become
Israeli citizens. Israel withdrew its military forces and dismantled the Israeli settlements in the Gaza Strip as
part of its disengagement from Gaza though it continues to maintain control of its airspace and waters. The UN Security
Council has declared the annexation of the Golan Heights and East Jerusalem to be "null and void" and continues to
view the territories as occupied. The International Court of Justice, principal judicial organ of the United Nations,
asserted, in its 2004 advisory opinion on the legality of the construction of the Israeli West Bank barrier, that
the lands captured by Israel in the Six-Day War, including East Jerusalem, are occupied territory. The status of
East Jerusalem in any future peace settlement has at times been a difficult issue in negotiations between Israeli
governments and representatives of the Palestinians, as Israel views it as its sovereign territory, as well as part
of its capital. Most negotiations relating to the territories have been on the basis of United Nations Security Council
Resolution 242, which emphasises "the inadmissibility of the acquisition of territory by war", and calls on Israel
to withdraw from occupied territories in return for normalization of relations with Arab states, a principle known
as "Land for peace". The West Bank was annexed by Jordan in 1950, following the Arab rejection of the UN decision
to create two states in Palestine. Only Britain recognized this annexation and Jordan has since ceded its claim to
the territory to the PLO. The West Bank was occupied by Israel in 1967 during the Six-Day War. The population is mainly Palestinian, including refugees of the 1948 Arab–Israeli War. From the occupation in 1967 until 1993, the
Palestinians living in these territories were under Israeli military administration. Since the Israel–PLO letters
of recognition, most of the Palestinian population and cities have been under the internal jurisdiction of the Palestinian
Authority, and only partial Israeli military control, although Israel has on several occasions redeployed its troops
and reinstated full military administration during periods of unrest. In response to increasing attacks as part of
the Second Intifada, the Israeli government started to construct the Israeli West Bank barrier. When completed, approximately
13% of the Barrier will be constructed on the Green Line or in Israel with 87% inside the West Bank. The Gaza Strip
was occupied by Egypt from 1948 to 1967 and then by Israel after 1967. In 2005, as part of Israel's unilateral disengagement
plan, Israel removed all of its settlers and forces from the territory. Israel does not consider the Gaza Strip to
be occupied territory and declared it a "foreign territory". That view has been disputed by numerous international
humanitarian organizations and various bodies of the United Nations. Following June 2007, when Hamas assumed power
in the Gaza Strip, Israel tightened its control of the Gaza crossings along its border, as well as by sea and air,
and prevented persons from entering and exiting the area except for isolated cases it deemed humanitarian. Gaza has
a border with Egypt and an agreement between Israel, the European Union and the PA governed how border crossing would
take place (it was monitored by European observers). Egypt adhered to this agreement under Mubarak and prevented
access to Gaza until April 2011 when it announced it was opening its border with Gaza. Israel maintains diplomatic
relations with 158 countries and has 107 diplomatic missions around the world; the countries with which it has no diplomatic relations include most Muslim countries. Only three members of the Arab League have normalized relations with Israel:
Egypt and Jordan signed peace treaties in 1979 and 1994, respectively, and Mauritania opted for full diplomatic relations
with Israel in 1999. Despite the peace treaty between Israel and Egypt, Israel is still widely considered an enemy
country among Egyptians. Under Israeli law, Lebanon, Syria, Saudi Arabia, Iraq, Iran, Sudan, and Yemen are enemy
countries, and Israeli citizens may not visit them without permission from the Ministry of the Interior. Iran had
diplomatic relations with Israel under the Pahlavi dynasty but withdrew its recognition of Israel during the Islamic
Revolution. As a result of the 2008–09 Gaza War, Mauritania, Qatar, Bolivia, and Venezuela suspended political and
economic ties with Israel. The United States and the Soviet Union were the first two countries to recognize the State
of Israel, having declared recognition roughly simultaneously. The United States regards Israel as its "most reliable
partner in the Middle East," based on "common democratic values, religious affinities, and security interests". Their
bilateral relations are multidimensional and the United States is the principal proponent of the Arab-Israeli peace
process. The United States and Israeli views differ on some issues, such as the Golan Heights, Jerusalem, and settlements.
The United States has provided $68 billion in military assistance and $32 billion in grants to Israel since 1967,
under the Foreign Assistance Act (period beginning 1962), more than any other country for that period until 2003.
Germany's strong ties with Israel include cooperation on scientific and educational endeavors and the two states
remain strong economic and military partners. Under the reparations agreement, by 2007 Germany had paid 25
billion euros in reparations to the Israeli state and individual Israeli Holocaust survivors. The UK has kept full
diplomatic relations with Israel since its formation, and had two visits from heads of state in 2007. The UK is seen as having a "natural" relationship with Israel on account of the British Mandate for Palestine. Relations between the two countries were also strengthened by former prime minister Tony Blair's efforts toward a two-state solution.
Israel is included in the European Union's European Neighbourhood Policy (ENP), which aims at bringing the EU and
its neighbours closer. Although Turkey and Israel did not establish full diplomatic relations until 1991, Turkey
has cooperated with the State since its recognition of Israel in 1949. Turkey's ties to the other Muslim-majority
nations in the region have at times resulted in pressure from Arab and Muslim states to temper its relationship with
Israel. Relations between Turkey and Israel took a downturn after the 2008–09 Gaza War and Israel's raid of the Gaza
flotilla. IHH, which organized the flotilla, is a Turkish charity that has been accused of ties to Hamas and Al-Qaeda.
Relations between Israel and Greece have improved since 1995 due to the decline of Israeli-Turkish relations. The
two countries have a defense cooperation agreement and in 2010, the Israeli Air Force hosted Greece’s Hellenic Air
Force in a joint exercise at the Uvda base. Israel is the second largest importer of Greek products in the Middle
East. The joint Cyprus-Israel oil and gas explorations centered on the Leviathan gas field are an important factor
for Greece, given its strong links with Cyprus. Cooperation in the world's longest sub-sea electric power cable,
the EuroAsia Interconnector, has strengthened relations between Cyprus and Israel. India established full diplomatic
ties with Israel in 1992 and has fostered a strong military, technological and cultural partnership with the country
since then. According to an international opinion survey conducted in 2009 on behalf of the Israeli Foreign Ministry,
India is the most pro-Israel country in the world. India is the largest customer of Israeli military equipment and
Israel is the second-largest military partner of India after the Russian Federation. India is also the third-largest
Asian economic partner of Israel and the two countries have military as well as extensive space technology ties.
India became the top source market for Israel from Asia in 2010 with 41,000 tourist arrivals in that year. Azerbaijan
is one of the few majority Muslim countries to develop bilateral strategic and economic relations with Israel. Azerbaijan
supplies Israel with a substantial amount of its oil needs, and Israel has helped modernize the Armed Forces of Azerbaijan.
In Africa, Ethiopia is Israel's main and closest ally on the continent, due to common political, religious and security
interests. Israel provides expertise to Ethiopia on irrigation projects and thousands of Ethiopian Jews (Beta Israel)
live in Israel. Israeli foreign aid ranks very low among OECD nations, spending less than 0.1% of its GNI on foreign
aid, as opposed to the recommended 0.7%. Individual international charitable donations are also very low, with only
0.1% of charitable donations being sent to foreign causes. However, Israel has a history of providing emergency aid
and humanitarian response teams to disasters across the world. Israel's humanitarian efforts officially began in
1958, with the establishment of MASHAV, the Israeli Ministry of Foreign Affairs Agency for International Development
Cooperation. Between 1985 and 2015, Israel sent 24 delegations of the IDF's search and rescue unit to 22 countries. In
Haiti, immediately following the 2010 earthquake, Israel was the first country to set up a field hospital capable
of performing surgical operations. Israel sent over 200 medical doctors and personnel to start treating injured Haitians
at the scene. At the conclusion of its humanitarian mission 11 days later, the Israeli delegation had treated more
than 1,110 patients, conducted 319 successful surgeries, delivered 16 babies, and rescued or assisted in the rescue
of four individuals. Despite radiation concerns, Israel was one of the first countries to send a medical delegation
to Japan following the earthquake and tsunami disaster. Israel dispatched a medical team to the tsunami-stricken
city of Kurihara in 2011. A medical clinic run by an IDF team of some 50 members featured pediatric, surgical, maternity
and gynecological, and otolaryngology wards, together with an optometry department, a laboratory, a pharmacy and
an intensive care unit. After treating 200 patients in two weeks, the departing emergency team donated its equipment
to the Japanese. The Israel Defense Forces is the sole military wing of the Israeli security forces, and is headed
by its Chief of General Staff, the Ramatkal, subordinate to the Cabinet. The IDF consists of the army, air force and
navy. It was founded during the 1948 Arab–Israeli War by consolidating paramilitary organizations—chiefly the Haganah—that
preceded the establishment of the state. The IDF also draws upon the resources of the Military Intelligence Directorate
(Aman), which works with Mossad and Shabak. The Israel Defense Forces have been involved in several major wars and
border conflicts in its short history, making it one of the most battle-trained armed forces in the world. Most Israelis
are drafted into the military at the age of 18. Men serve two years and eight months and women two years. Following
mandatory service, Israeli men join the reserve forces and usually do up to several weeks of reserve duty every year
until their forties. Most women are exempt from reserve duty. Arab citizens of Israel (except the Druze) and those
engaged in full-time religious studies are exempt from military service, although the exemption of yeshiva students
has been a source of contention in Israeli society for many years. An alternative for those who receive exemptions
on various grounds is Sherut Leumi, or national service, which involves a program of service in hospitals, schools
and other social welfare frameworks. As a result of its conscription program, the IDF maintains approximately 176,500
active troops and an additional 445,000 reservists. The nation's military relies heavily on high-tech weapons systems
designed and manufactured in Israel as well as some foreign imports. The Arrow missile is one of the world's few
operational anti-ballistic missile systems. The Python air-to-air missile series is often considered one of the most
crucial weapons in its military history. Israel's Spike missile is one of the most widely exported ATGMs in the world.
Israel's Iron Dome anti-missile air defense system gained worldwide acclaim after intercepting hundreds of Qassam,
122 mm Grad and Fajr-5 artillery rockets fired by Palestinian militants from the Gaza Strip. Since the Yom Kippur
War, Israel has developed a network of reconnaissance satellites. The success of the Ofeq program has made Israel
one of seven countries capable of launching such satellites. Israel is widely believed to possess nuclear weapons
as well as chemical and biological weapons of mass destruction. Israel has not signed the Treaty on the Non-Proliferation
of Nuclear Weapons and maintains a policy of deliberate ambiguity toward its nuclear capabilities. The Israeli Navy's
Dolphin submarines are believed to be armed with nuclear Popeye Turbo missiles, offering nuclear second strike capability.
Since the Gulf War in 1991, when Israel was attacked by Iraqi Scud missiles, all homes in Israel are required to
have a reinforced security room, known as a Merkhav Mugan, impermeable to chemical and biological substances. Israel has one
of the highest ratios of defense spending to GDP of all developed countries, only topped by Oman and Saudi Arabia.
In 1984, for example, the country spent 24% of its GDP on defense. By 2006, that figure had dropped to 7.3%. Israel
is one of the world's largest arms exporters, and was ranked fourth in the world for weapons exports in 2007. The
majority of Israel's arms exports are unreported for security reasons. Since 1967, the United States has been a particularly
notable foreign contributor of military aid to Israel: the US is expected to provide the country with $3.15 billion
per year from 2013 to 2018. Israel is consistently rated low in the Global Peace Index, ranking 148th out of 162
nations for peacefulness in 2015. Israel is considered the most advanced country in Southwest Asia and the Middle
East in economic and industrial development. Israel's quality university education and the establishment of a highly
motivated and educated populace is largely responsible for spurring the country's high technology boom and rapid
economic development. In 2010, it joined the OECD. The country is ranked 3rd in the region and 38th worldwide on
the World Bank's Ease of Doing Business Index as well as in the World Economic Forum's Global Competitiveness Report.
It has the second-largest number of startup companies in the world (after the United States) and the largest number
of NASDAQ-listed companies outside North America. Despite limited natural resources, intensive development of the
agricultural and industrial sectors over the past decades has made Israel largely self-sufficient in food production,
apart from grains and beef. Imports to Israel, totaling $77.59 billion in 2012, include raw materials, military equipment,
investment goods, rough diamonds, fuels, grain, and consumer goods. Leading exports include electronics, software, computerized
systems, communications technology, medical equipment, pharmaceuticals, fruits, chemicals, military technology, and
cut diamonds; in 2012, Israeli exports reached $64.74 billion. Israel is a leading country in the development of
solar energy. Israel is a global leader in water conservation and geothermal energy, and its development of cutting-edge
technologies in software, communications and the life sciences has evoked comparisons with Silicon Valley. According
to the OECD, Israel is also ranked 1st in the world in expenditure on Research and Development (R&D) as a percentage
of GDP. Intel and Microsoft built their first overseas research and development centers in Israel, and other high-tech
multi-national corporations, such as IBM, Google, Apple, HP, Cisco Systems, and Motorola, have opened R&D facilities
in the country. In July 2007, American business magnate and investor Warren Buffett's holding company Berkshire Hathaway
bought an Israeli company, Iscar, its first non-U.S. acquisition, for $4 billion. Since the 1970s, Israel has received
military aid from the United States, as well as economic assistance in the form of loan guarantees, which now account
for roughly half of Israel's external debt. Israel has one of the lowest external debts in the developed world, and
is a net lender in terms of net external debt (the total value of assets vs. liabilities in debt instruments owed
abroad), which in December 2015 stood at a surplus of US$118 billion. Days of working time in Israel are
Sunday through Thursday (for a five-day workweek), or Friday (for a six-day workweek). In observance of Shabbat,
in places where Friday is a work day and the majority of population is Jewish, Friday is a "short day", usually lasting
till 14:00 in the winter, or 16:00 in the summer. Several proposals have been raised to align the work week with that of the majority of the world by making Sunday a non-working day, while extending the working hours of other days or replacing Friday with Sunday as a work day. Israeli universities are among the world's top 100 universities in mathematics (Hebrew
University, TAU and Technion), physics (TAU, Hebrew University and Weizmann Institute of Science), chemistry (Technion
and Weizmann Institute of Science), computer science (Weizmann Institute of Science, Technion, Hebrew University,
TAU and BIU) and economics (Hebrew University and TAU). Israel has produced six Nobel Prize-winning scientists since
2002 and has been frequently ranked as one of the countries with the highest ratios of scientific papers per capita
in the world. Israel has led the world in stem-cell research papers per capita since 2000. Israel is one of the world's
technological leaders in water technology. In 2011, its water technology industry was worth around $2 billion a year
with annual exports of products and services in the tens of millions of dollars. The ongoing shortage of water in
the country has spurred innovation in water conservation techniques, and drip irrigation, a major agricultural modernization, was invented in Israel. Israel is also at the technological forefront of desalination and water
recycling. The Ashkelon seawater reverse osmosis (SWRO) plant, the largest in the world, was voted 'Desalination
Plant of the Year' in the Global Water Awards in 2006. Israel hosts an annual Water Technology Exhibition and Conference
(WaTec) that attracts thousands of people from across the world. By 2014, Israel's desalination programs provided
roughly 35% of Israel's drinking water and were expected to supply 40% by 2015 and 70% by 2050. As of May 29, 2015,
more than 50 percent of the water for Israeli households, agriculture and industry is artificially produced. As a
result of innovations in reverse osmosis technology, Israel is set to become a net exporter of water in the coming
years. Israel has embraced solar energy; its engineers are on the cutting edge of solar energy technology and its
solar companies work on projects around the world. Over 90% of Israeli homes use solar energy for hot water, the highest rate in the world. According to government figures, the country saves 8% of its electricity consumption
per year because of its solar energy use in heating. The high annual incident solar irradiance at its geographic
latitude creates ideal conditions for what is an internationally renowned solar research and development industry
in the Negev Desert. Israel built a modern electric car infrastructure involving a countrywide network of recharging stations to facilitate the charging and exchange of car batteries. It was thought that this would lower Israel's oil dependency and the fuel costs of the hundreds of Israeli motorists driving battery-electric cars. The Israeli model was studied by several countries and was being implemented in Denmark and Australia. However, Israel's trailblazing electric car company, Better Place, shut down in 2013. The Israeli Space Agency coordinates
all Israeli space research programs with scientific and commercial goals. In 2012, Israel was ranked ninth in the world by Futron's Space Competitiveness Index. Israel is one of only seven countries that both build their own satellites and launch them with their own launch vehicles. The Shavit is a space launch vehicle produced by Israel to launch small
satellites into low earth orbit. It was first launched in 1988, making Israel the eighth nation to have a space launch
capability. Shavit rockets are launched from the spaceport at the Palmachim Airbase by the Israeli Space Agency.
Since 1988, Israel Aerospace Industries has indigenously designed and built at least 13 commercial, research and
spy satellites. Some of Israel's satellites are ranked among the world's most advanced space systems. In 2003, Ilan
Ramon became Israel's first astronaut, serving as payload specialist of STS-107, the fatal mission of the Space Shuttle
Columbia. Israel has 18,096 kilometers (11,244 mi) of paved roads, and 2.4 million motor vehicles. The number of
motor vehicles per 1,000 persons was 324, relatively low compared with other developed countries. Israel has 5,715 buses
on scheduled routes, operated by several carriers, the largest of which is Egged, serving most of the country. Railways
stretch across 949 kilometers (590 mi) and are operated solely by government-owned Israel Railways (All figures are
for 2008). Following major investments beginning in the early to mid-1990s, the number of train passengers per year
has grown from 2.5 million in 1990, to 35 million in 2008; railways are also used to transport 6.8 million tons of
cargo per year. Israel is served by two international airports: Ben Gurion International Airport, the country's main hub for international air travel near Tel Aviv-Yafo, and Ovda Airport in the south, as well as several small domestic airports. Ben Gurion, Israel's largest airport, handled over 12.1 million passengers in 2010. On the Mediterranean
coast, Haifa Port is the country's oldest and largest port, while Ashdod Port is one of the few deep water ports
in the world built on the open sea. In addition to these, the smaller Port of Eilat is situated on the Red Sea, and
is used mainly for trading with Far East countries. Tourism, especially religious tourism, is an important industry
in Israel, with the country's temperate climate, beaches, archaeological, other historical and biblical sites, and
unique geography also drawing tourists. Israel's security problems have taken their toll on the industry, but the
number of incoming tourists is on the rebound. In 2013, a record 3.54 million tourists visited Israel; the most popular attraction was the Western Wall, visited by 68% of them. Israel has the highest
number of museums per capita in the world. Israel's diverse culture stems from the diversity of its population: Jews
from diaspora communities around the world have brought their cultural and religious traditions back with them, creating
a melting pot of Jewish customs and beliefs. Israel is the only country in the world where life revolves around the
Hebrew calendar. Work and school holidays are determined by the Jewish holidays, and the official day of rest is
Saturday, the Jewish Sabbath. Israel's substantial Arab minority has also left its imprint on Israeli culture in
such spheres as architecture, music, and cuisine. Israeli literature is primarily poetry and prose written in Hebrew,
as part of the renaissance of Hebrew as a spoken language since the mid-19th century, although a small body of literature
is published in other languages, such as English. By law, two copies of all printed matter published in Israel must
be deposited in the National Library of Israel at the Hebrew University of Jerusalem. In 2001, the law was amended
to include audio and video recordings, and other non-print media. In 2013, 91 percent of the 7,863 books transferred
to the library were in Hebrew. The Hebrew Book Week is held each June and features book fairs, public readings, and
appearances by Israeli authors around the country. During the week, Israel's top literary award, the Sapir Prize,
is presented. In 1966, Shmuel Yosef Agnon shared the Nobel Prize in Literature with German Jewish
author Nelly Sachs. Leading Israeli poets have been Yehuda Amichai, Nathan Alterman and Rachel Bluwstein. Internationally
famous contemporary Israeli novelists include Amos Oz, Etgar Keret and David Grossman. The Israeli-Arab satirist
Sayed Kashua (who writes in Hebrew) is also internationally known. Israel has also been the home
of two leading Palestinian poets and writers: Emile Habibi, whose novel The Secret Life of Saeed the Pessoptimist and other writings won him the Israel Prize for Arabic literature; and Mahmoud Darwish, considered by many to be
"the Palestinian national poet." Darwish was born and raised in northern Israel, but lived his adult life abroad
after joining the Palestine Liberation Organization. Israeli music contains musical influences from
all over the world; Sephardic music, Hasidic melodies, Belly dancing music, Greek music, jazz, and pop rock are all
part of the music scene. Among Israel's world-renowned orchestras is the Israel Philharmonic Orchestra, which has
been in operation for over seventy years and today performs more than two hundred concerts each year. Israel has
also produced many musicians of note, some achieving international stardom. Itzhak Perlman, Pinchas Zukerman and
Ofra Haza are among the internationally acclaimed musicians born in Israel. Israel has participated
in the Eurovision Song Contest nearly every year since 1973, winning the competition three times and hosting it twice.
Eilat has hosted its own international music festival, the Red Sea Jazz Festival, every summer since 1987. The nation's
canonical folk songs, known as "Songs of the Land of Israel," deal with the experiences of the pioneers in building
the Jewish homeland. The Hora circle dance introduced by early Jewish settlers was originally popular in the Kibbutzim
and outlying communities. It became a symbol of the Zionist reconstruction and of the ability to experience joy amidst
austerity. It now plays a significant role in modern Israeli folk dancing and is regularly performed at weddings
and other celebrations, and in group dances throughout Israel. Modern dance in Israel is a flourishing
field, and several Israeli choreographers, such as Ohad Naharin, Rami Beer, and Barak Marshall, are considered to be among the most versatile and original international creators working today. Famous Israeli companies include the Batsheva Dance Company and the Kibbutz Contemporary Dance Company. Israel is home to
many Palestinian musicians, including internationally acclaimed oud and violin virtuoso Taiseer Elias, singer Amal
Murkus, and brothers Samir and Wissam Joubran. Israeli Arab musicians have achieved fame beyond Israel's borders:
Elias and Murkus frequently play to audiences in Europe and America, and oud player Darwish Darwish (Prof. Elias's
student) was awarded first prize in the all-Arab oud contest in Egypt in 2003. The Jerusalem Academy of Music and
Dance has an advanced degree program, headed by Taiseer Elias, in Arabic music. The Israel Museum
in Jerusalem is one of Israel's most important cultural institutions and houses the Dead Sea scrolls, along with
an extensive collection of Judaica and European art. Israel's national Holocaust museum, Yad Vashem, is the world's central archive of Holocaust-related information. Beth Hatefutsoth (the Diaspora Museum), on the campus of Tel Aviv
University, is an interactive museum devoted to the history of Jewish communities around the world. Apart from the
major museums in large cities, there are high-quality artspaces in many towns and kibbutzim. Mishkan Le'Omanut on
Kibbutz Ein Harod Meuhad is the largest art museum in the north of the country. Israeli cuisine includes local dishes
as well as dishes brought to the country by Jewish immigrants from the diaspora. Since the establishment of the State
in 1948, and particularly since the late 1970s, an Israeli fusion cuisine has developed. Roughly half of the Israeli-Jewish
population attests to keeping kosher at home. Kosher restaurants, though rare in the 1960s, make up around 25% of
the total as of 2015, perhaps reflecting the largely secular values of those who dine out. Hotel restaurants
are much more likely to serve kosher food. The non-kosher retail market was traditionally sparse, but grew rapidly
and considerably following the influx of immigrants from Eastern Europe and Russia during the 1990s. Together with
non-kosher fish, rabbits and ostriches, pork—often called "white meat" in Israel—is produced and consumed, though
it is forbidden by both Judaism and Islam. Israeli cuisine has adopted, and continues to adapt, elements of various
styles of Jewish cuisine, particularly the Mizrahi, Sephardic, and Ashkenazi styles of cooking, along with Moroccan
Jewish, Iraqi Jewish, Ethiopian Jewish, Indian Jewish, Iranian Jewish and Yemeni Jewish influences. It incorporates
many foods traditionally eaten in the Arab, Middle Eastern and Mediterranean cuisines, such as falafel, hummus, shakshouka,
couscous, and za'atar, which have become common ingredients in Israeli cuisine. Schnitzel, pizza, hamburgers, French
fries, rice and salad are also very common in Israel. The most popular spectator sports in Israel
are association football and basketball. The Israeli Premier League is the country's premier football league, and
the Israeli Basketball Super League is the premier basketball league. Maccabi Haifa, Maccabi Tel Aviv, Hapoel Tel
Aviv and Beitar Jerusalem are the largest sports clubs. Maccabi Tel Aviv, Maccabi Haifa and Hapoel Tel Aviv have
competed in the UEFA Champions League and Hapoel Tel Aviv reached the UEFA Cup quarter-finals. Maccabi Tel Aviv B.C.
has won the European championship in basketball six times. In 1964 Israel hosted and won the Asian Nations Cup; in 1970 the Israel national football team qualified for the FIFA World Cup, which is still considered the biggest achievement of Israeli football. The 1974 Asian Games, held in Tehran, were the last in which Israel participated; they were marred by Arab countries that refused to compete against Israel, and Israel has since ceased competing in Asian competitions. Israel was excluded from the 1978 Asian Games because of the security costs and expense its participation would have involved. In 1994, UEFA agreed to admit Israel, and all Israeli sporting organizations
now compete in Europe. Chess is a leading sport in Israel and is enjoyed by people of all ages.
There are many Israeli grandmasters and Israeli chess players have won a number of youth world championships. Israel
stages an annual international championship and hosted the World Team Chess Championship in 2005. The Ministry of
Education and the World Chess Federation agreed upon a project of teaching chess within Israeli schools, and it has
been introduced into the curriculum of some schools. The city of Beersheba has become a national chess center, with
the game being taught in the city's kindergartens. Owing partly to Soviet immigration, it is home to the largest
number of chess grandmasters of any city in the world. The Israeli chess team won the silver medal at the 2008 Chess
Olympiad and the bronze, coming in third among 148 teams, at the 2010 Olympiad. Israeli grandmaster Boris Gelfand
won the Chess World Cup in 2009 and the 2011 Candidates Tournament for the right to challenge the world champion.
He lost the World Chess Championship 2012 to reigning world champion Viswanathan Anand only after a speed-chess tiebreaker.
The Hellenistic period covers the period of ancient Greek (Hellenic) history and Mediterranean history between the death
of Alexander the Great in 323 BC and the emergence of the Roman Empire as signified by the Battle of Actium in 31
BC and the subsequent conquest of Ptolemaic Egypt the following year. At this time, Greek cultural influence and
power was at its peak in Europe, Africa and Asia, experiencing prosperity and progress in the arts, exploration,
literature, theatre, architecture, music, mathematics, philosophy, and science. For example, competitive public games took place, new ideas in biology were advanced, and popular entertainment flourished in theaters. It is often considered a period of transition,
sometimes even of decadence or degeneration, compared to the enlightenment of the Greek Classical era. The Hellenistic
period saw the rise of New Comedy, Alexandrian poetry, the Septuagint and the philosophies of Stoicism and Epicureanism.
Greek science was advanced by the works of the mathematician Euclid and the polymath Archimedes. The religious sphere
expanded to include new gods such as the Greco-Egyptian Serapis, eastern deities such as Attis and Cybele and the
Greek adoption of Buddhism. After Alexander the Great's ventures in the Persian Empire, Hellenistic kingdoms were
established throughout south-west Asia (Seleucid Empire, Kingdom of Pergamon), north-east Africa (Ptolemaic Kingdom)
and South Asia (Greco-Bactrian Kingdom, Indo-Greek Kingdom). This resulted in the export of Greek culture and language
to these new realms through Greco-Macedonian colonization, spanning as far as modern-day Pakistan. Equally, however,
these new kingdoms were influenced by the indigenous cultures, adopting local practices where beneficial, necessary,
or convenient. Hellenistic culture thus represents a fusion of the Ancient Greek world with that of the Near East,
Middle East, and Southwest Asia, and a departure from earlier Greek attitudes towards "barbarian" cultures. The Hellenistic
period was characterized by a new wave of Greek colonization (as distinguished from that occurring in the 8th–6th
centuries BC) which established Greek cities and kingdoms in Asia and Africa. Those new cities were composed of Greek
colonists who came from different parts of the Greek world, and not, as before, from a specific "mother city". The
main cultural centers expanded from mainland Greece to Pergamon, Rhodes, and new Greek colonies such as Seleucia,
Antioch, Alexandria and Ai-Khanoum. This mixture of Greek-speakers gave birth to a common Attic-based dialect, known
as Koine Greek, which became the lingua franca throughout the Hellenistic world. Scholars and historians are divided
as to what event signals the end of the Hellenistic era. The Hellenistic period may be seen to end either with the
final conquest of the Greek heartlands by Rome in 146 BC following the Achaean War, with the final defeat of the Ptolemaic
Kingdom at the Battle of Actium in 31 BC, or even the move by Roman emperor Constantine the Great of the capital
of the Roman Empire to Constantinople in 330 AD. "Hellenistic" is distinguished from "Hellenic" in that the former
encompasses the entire sphere of direct ancient Greek influence, while the latter refers to Greece itself. "Hellenistic"
is a modern word and a 19th-century concept; the idea of a Hellenistic period did not exist in Ancient Greece. Although
words related in form or meaning, e.g. Hellenist (Ancient Greek: Ἑλληνιστής, Hellēnistēs), have been attested since
ancient times, it was J. G. Droysen in the mid-19th century, who in his classic work Geschichte des Hellenismus,
i.e. History of Hellenism, coined the term Hellenistic to refer to and define the period when Greek culture spread
in the non-Greek world after Alexander’s conquest. Following Droysen, Hellenistic and related terms, e.g. Hellenism,
have been widely used in various contexts; a notable such use is in Culture and Anarchy by Matthew Arnold, where
Hellenism is used in contrast with Hebraism. The major issue with the term Hellenistic lies in its convenience, as
the spread of Greek culture was not the generalized phenomenon that the term implies. Some areas of the conquered
world were more affected by Greek influences than others. The term Hellenistic also implies that the Greek populations
were in the majority in the areas in which they settled, while in many cases the Greek settlers were actually a minority
among the native populations. The Greek population and the native population did not always mix; the Greeks moved
and brought their own culture, but interaction did not always occur. While a few fragments exist, there is no surviving
historical work which dates to the hundred years following Alexander's death. The works of the major Hellenistic
historians Hieronymus of Cardia (who worked under Alexander, Antigonus I and other successors), Duris of Samos and
Phylarchus which were used by surviving sources are all lost. The earliest and most credible surviving source for
the Hellenistic period is Polybius of Megalopolis (c. 200–118 BCE), a statesman of the Achaean League until 168 BCE when
he was forced to go to Rome as a hostage. His Histories eventually grew to a length of forty books, covering the
years 220 to 167 BCE. The most important source after Polybius is Diodorus Siculus who wrote his Bibliotheca historica
between 60 and 30 BCE and reproduced some important earlier sources such as Hieronymus, but his account of the Hellenistic
period breaks off after the Battle of Ipsus (301 BCE). Another important source, Plutarch's (c. 50 – c. 120 CE) Parallel Lives, though more preoccupied with issues of personal character and morality, outlines the history of important Hellenistic figures. Appian of Alexandria (late first century CE to before 165 CE) wrote a history of the Roman Empire that includes information on some Hellenistic kingdoms. Ancient Greece had traditionally been a fractious collection of fiercely
independent city-states. After the Peloponnesian War (431–404 BC), Greece had fallen under a Spartan hegemony, in
which Sparta was pre-eminent but not all-powerful. Spartan hegemony was succeeded by a Theban one after the Battle
of Leuctra (371 BC), but after the Battle of Mantinea (362 BC), all of Greece was so weakened that no one state could
claim pre-eminence. It was against this backdrop, that the ascendancy of Macedon began, under king Philip II. Macedon
was located at the periphery of the Greek world, and although its royal family claimed Greek descent, the Macedonians
themselves were looked down upon as semi-barbaric by the rest of the Greeks. However, Macedon had a relatively strong
and centralised government, and compared to most Greek states, directly controlled a large area. Philip II was a
strong and expansionist king and he took every opportunity to expand Macedonian territory. In 352 BC he annexed Thessaly
and Magnesia. In 338 BC, Philip defeated a combined Theban and Athenian army at the Battle of Chaeronea after a decade
of desultory conflict. In the aftermath, Philip formed the League of Corinth, effectively bringing the majority of
Greece under his direct sway. He was elected Hegemon of the league, and a campaign against the Achaemenid Empire
of Persia was planned. However, while this campaign was in its early stages, he was assassinated. His son Alexander the Great carried the campaign into Persia, but died in Babylon in 323 BC without a clear successor. Meleager and the
infantry supported the candidacy of Alexander's half-brother, Philip Arrhidaeus, while Perdiccas, the leading cavalry
commander, supported waiting until the birth of Alexander's unborn child by Roxana. After the infantry stormed the
palace of Babylon, a compromise was arranged – Arrhidaeus (as Philip III) should become king, and should rule jointly
with Roxana's child, assuming that it was a boy (as it was, becoming Alexander IV). Perdiccas himself would become
regent (epimeletes) of the empire, and Meleager his lieutenant. Soon, however, Perdiccas had Meleager and the other
infantry leaders murdered, and assumed full control. The generals who had supported Perdiccas were rewarded in the
partition of Babylon by becoming satraps of the various parts of the empire, but Perdiccas' position was shaky, because,
as Arrian writes, "everyone was suspicious of him, and he of them". The first of the Diadochi wars broke out when
Perdiccas planned to marry Alexander's sister Cleopatra and began to question Antigonus I Monophthalmus' leadership
in Asia Minor. Antigonus fled for Greece, and then, together with Antipater and Craterus (the satrap of Cilicia who
had been in Greece fighting the Lamian war) invaded Anatolia. The rebels were supported by Lysimachus, the satrap
of Thrace and Ptolemy, the satrap of Egypt. Although Eumenes, satrap of Cappadocia, defeated the rebels in Asia Minor,
Perdiccas himself was murdered by his own generals Peithon, Seleucus, and Antigenes (possibly with Ptolemy's aid)
during his invasion of Egypt (c. 21 May to 19 June, 320). Ptolemy came to terms with Perdiccas's murderers, making
Peithon and Arrhidaeus regents in his place, but soon these came to a new agreement with Antipater at the Treaty
of Triparadisus. Antipater was made regent of the Empire, and the two kings were moved to Macedon. Antigonus remained
in charge of Asia minor, Ptolemy retained Egypt, Lysimachus retained Thrace and Seleucus I controlled Babylon. The
second Diadochi war began following the death of Antipater in 319 BC. Passing over his own son, Cassander, Antipater
had declared Polyperchon his successor as Regent. Cassander rose in revolt against Polyperchon (who was joined by
Eumenes) and was supported by Antigonus, Lysimachus and Ptolemy. In 317, Cassander invaded Macedonia, attaining control
of Macedon, sentencing Olympias to death and capturing the boy king Alexander IV, and his mother. In Asia, Eumenes
was betrayed by his own men after years of campaign and was given up to Antigonus who had him executed. The third
war of the Diadochi broke out because of the growing power and ambition of Antigonus. He began removing and appointing
satraps as if he were king and also raided the royal treasuries in Ecbatana, Persepolis and Susa, making off with
25,000 talents. Seleucus was forced to flee to Egypt and Antigonus was soon at war with Ptolemy, Lysimachus, and
Cassander. He then invaded Phoenicia, laid siege to Tyre, stormed Gaza and began building a fleet. Ptolemy invaded
Syria and defeated Antigonus' son, Demetrius Poliorcetes, in the Battle of Gaza of 312 BC which allowed Seleucus
to secure control of Babylonia, and the eastern satrapies. In 310, Cassander had young King Alexander IV and his
mother Roxane murdered, ending the Argead Dynasty which had ruled Macedon for several centuries. Antigonus then sent
his son Demetrius to regain control of Greece. In 307 he took Athens, expelling Demetrius of Phaleron, Cassander's
governor, and proclaiming the city free again. Demetrius now turned his attention to Ptolemy, defeating his fleet
at the Battle of Salamis and taking control of Cyprus. In the aftermath of this victory, Antigonus took the title
of king (basileus) and bestowed it on his son Demetrius Poliorcetes; the rest of the Diadochi soon followed suit.
Demetrius continued his campaigns by laying siege to Rhodes and conquering most of Greece in 302, creating a league
against Cassander's Macedon. The decisive engagement of the war came when Lysimachus invaded and overran much of
western Anatolia, but was soon isolated by Antigonus and Demetrius near Ipsus in Phrygia. Seleucus arrived in time
to save Lysimachus and utterly crushed Antigonus at the Battle of Ipsus in 301 BCE. Seleucus' war elephants proved
decisive, Antigonus was killed, and Demetrius fled back to Greece to attempt to preserve the remnants of his rule
there by recapturing a rebellious Athens. Meanwhile, Lysimachus took over Ionia, Seleucus took Cilicia, and Ptolemy
captured Cyprus. After Cassander's death in 298 BCE, however, Demetrius, who still maintained a sizable loyal army
and fleet, invaded Macedon, seized the Macedonian throne (294) and conquered Thessaly and most of central Greece
(293-291). He was defeated in 288 BC when Lysimachus of Thrace and Pyrrhus of Epirus invaded Macedon on two fronts,
and quickly carved up the kingdom for themselves. Demetrius fled to central Greece with his mercenaries and began
to build support there and in the northern Peloponnese. He once again laid siege to Athens after they turned on him,
but then struck a treaty with the Athenians and Ptolemy, which allowed him to cross over to Asia minor and wage war
on Lysimachus' holdings in Ionia, leaving his son Antigonus Gonatas in Greece. After initial successes, he was forced
to surrender to Seleucus in 285 and later died in captivity. Lysimachus, who had seized Macedon and Thessaly for
himself, was forced into war when Seleucus invaded his territories in Asia minor and was defeated and killed in 281
BCE at the Battle of Corupedium, near Sardis. Seleucus then attempted to conquer Lysimachus' European territories
in Thrace and Macedon, but he was assassinated by Ptolemy Ceraunus ("the thunderbolt"), who had taken refuge at the
Seleucid court and then had himself acclaimed as king of Macedon. Ptolemy was killed when Macedon was invaded by
Gauls in 279, his head stuck on a spear and the country fell into anarchy. Antigonus II Gonatas invaded Thrace in
the summer of 277 and defeated a large force of 18,000 Gauls. He was quickly hailed as king of Macedon and went on
to rule for 35 years. During the Hellenistic period the importance of Greece proper within the Greek-speaking world
declined sharply. The great centers of Hellenistic culture were Alexandria and Antioch, capitals of Ptolemaic Egypt
and Seleucid Syria respectively. The conquests of Alexander greatly widened the horizons of the Greek world, making
the endless conflicts between the cities which had marked the 5th and 4th centuries BC seem petty and unimportant.
It led to a steady emigration, particularly of the young and ambitious, to the new Greek empires in the east. Many
Greeks migrated to Alexandria, Antioch and the many other new Hellenistic cities founded in Alexander's wake, as
far away as modern Afghanistan and Pakistan. Independent city states were unable to compete with Hellenistic kingdoms
and were usually forced to ally themselves to one of them for defense, giving honors to Hellenistic rulers in return
for protection. One example is Athens, which had been decisively defeated by Antipater in the Lamian war (323-322)
and had its port in the Piraeus garrisoned by Macedonian troops who supported a conservative oligarchy. After Demetrius
Poliorcetes captured Athens in 307 and restored the democracy, the Athenians honored him and his father Antigonus
by placing gold statues of them on the agora and granting them the title of king. Athens later allied itself to Ptolemaic
Egypt to throw off Macedonian rule, eventually setting up a religious cult for the Ptolemaic kings and naming one
of the city phyles in honor of Ptolemy for his aid against Macedon. In spite of the Ptolemaic monies and fleets backing
their endeavors, Athens and Sparta were defeated by Antigonus II during the Chremonidean War (267–261 BCE). Athens was
then occupied by Macedonian troops, and run by Macedonian officials. Sparta remained independent, but it was no longer
the leading military power in the Peloponnese. The Spartan king Cleomenes III (235–222 BCE) staged a military coup
against the conservative ephors and pushed through radical social and land reforms in order to increase the size
of the shrinking Spartan citizenry able to provide military service and restore Spartan power. Sparta's bid for supremacy
was crushed at the Battle of Sellasia (222) by the Achaean league and Macedon, who restored the power of the ephors.
Other city states formed federated states in self-defense, such as the Aetolian League (est. 370 BCE), the Achaean
League (est. 280 BCE), the Boeotian league, the "Northern League" (Byzantium, Chalcedon, Heraclea Pontica and Tium)
and the "Nesiotic League" of the Cyclades. These federations involved a central government which controlled foreign
policy and military affairs, while leaving most of the local governing to the city states, a system termed sympoliteia.
In states such as the Achaean league, this also involved the admission of other ethnic groups into the federation
with equal rights, in this case, non-Achaeans. The Achaean League was able to drive out the Macedonians from the Peloponnese
and free Corinth, which duly joined the league. One of the few city states who managed to maintain full independence
from the control of any Hellenistic kingdom was Rhodes. With a skilled navy to protect its trade fleets from pirates
and an ideal strategic position covering the routes from the east into the Aegean, Rhodes prospered during the Hellenistic
period. It became a center of culture and commerce, its coins were widely circulated and its philosophical schools
were among the best in the Mediterranean. After holding out for one year under siege by Demetrius Poliorcetes (305–304 BCE), the Rhodians built the Colossus of Rhodes to commemorate their victory. They retained their independence
by the maintenance of a powerful navy, by maintaining a carefully neutral posture and acting to preserve the balance
of power between the major Hellenistic kingdoms. Antigonus II, a student of Zeno of Citium, spent most of his rule
defending Macedon against Epirus and cementing Macedonian power in Greece, first against the Athenians in the Chremonidean
War, and then against the Achaean League of Aratus of Sicyon. Under the Antigonids, Macedonia was often short on
funds, the Pangaeum mines were no longer as productive as under Philip II, the wealth from Alexander's campaigns
had been used up and the countryside pillaged by the Gallic invasion. A large number of the Macedonian population
had also been resettled abroad by Alexander or had chosen to emigrate to the new eastern Greek cities. Up to two
thirds of the population emigrated, and the Macedonian army could only count on a levy of 25,000 men, a significantly
smaller force than under Philip II. Philip V, who came to power when Antigonus III Doson died in 221 BC, was the last Macedonian
ruler with both the talent and the opportunity to unite Greece and preserve its independence against the "cloud rising
in the west": the ever-increasing power of Rome. He was known as "the darling of Hellas". Under his auspices the
Peace of Naupactus (217 BC) brought the latest war between Macedon and the Greek leagues (the Social War, 220–217 BC)
to an end, and at this time he controlled all of Greece except Athens, Rhodes and Pergamum. In 215 BC Philip, with
his eye on Illyria, formed an alliance with Rome's enemy Hannibal of Carthage, which led to Roman alliances with
the Achaean League, Rhodes and Pergamum. The First Macedonian War broke out in 212 BC, and ended inconclusively in
205 BC. Philip continued to wage war against Pergamon and Rhodes for control of the Aegean (204-200 BCE) and ignored
Roman demands for non-intervention in Greece by invading Attica. In 198 BC, during the Second Macedonian War Philip
was decisively defeated at Cynoscephalae by the Roman proconsul Titus Quinctius Flamininus and Macedon lost all its
territories in Greece proper. Greece was now thoroughly brought into the Roman sphere of influence, though it retained
nominal autonomy. The end of Antigonid Macedon came when Philip V's son, Perseus, was defeated and captured by the
Romans in the Third Macedonian War (171–168 BCE). The west Balkan coast was inhabited by various Illyrian tribes
and kingdoms such as the kingdom of the Dalmatae and of the Ardiaei, who often engaged in piracy under Queen Teuta
(reigned 231–227 BC). Further inland were the Illyrian Paeonian Kingdom and the tribe of the Agrianes, whose territory covered much of the modern Republic of Macedonia. Illyrians on the coast of the Adriatic were under the effects and
influence of Hellenisation and some tribes adopted Greek, becoming bilingual due to their proximity to the Greek
colonies in Illyria. Illyrians imported weapons and armor from the Ancient Greeks (such as the Illyrian type helmet,
originally a Greek type) and also adopted the ornamentation of Ancient Macedon on their shields and their war belts
(a single example has been found, dated to the 3rd century BC, at modern Selcë e Poshtme, at the time part of Macedon under Philip V). The Odrysian Kingdom was a union of Thracian tribes under the kings of the powerful Odrysian tribe
centered around the region of Thrace. Various parts of Thrace were under Macedonian rule under Philip II of Macedon,
Alexander the Great, Lysimachus, Ptolemy II, and Philip V but were also often ruled by their own kings. The Thracians
and Agrianes were widely used by Alexander as peltasts and light cavalry, forming about one fifth of his army. The
Diadochi also used Thracian mercenaries in their armies and they were also used as colonists. The Odrysians used
Greek as the language of administration and of the nobility. The nobility also adopted Greek fashions in dress, ornament
and military equipment, spreading it to the other tribes. Thracian kings were among the first to be Hellenized. Southern
Italy (Magna Graecia) and south-eastern Sicily had been colonized by the Greeks during the 8th century BC. In 4th-century BC Sicily, the leading Greek city and hegemon was Syracuse. During the Hellenistic period the leading figure in Sicily
was Agathocles of Syracuse (361 – 289 BCE) who seized the city with an army of mercenaries in 317 BCE. Agathocles
extended his power throughout most of the Greek cities in Sicily, fought a long war with the Carthaginians, at one
point invading Tunisia in 310 and defeating a Carthaginian army there. This was the first time a European force had
invaded the region. After this war he controlled most of south-east Sicily and had himself proclaimed king, in imitation
of the Hellenistic monarchs of the east. Agathocles then invaded Italy (c. 300 BCE) in defense of Tarentum against
the Bruttians and Romans, but was unsuccessful. Greeks in pre-Roman Gaul were mostly limited to the Mediterranean
coast of Provence. The first Greek colony in the region was Massalia, which became one of the largest trading ports
of the Mediterranean by the 4th century BCE, with 6,000 inhabitants. Massalia was also the local hegemon, controlling
various coastal Greek cities like Nice and Agde. The coins minted in Massalia have been found in all parts of Ligurian-Celtic
Gaul. Celtic coinage was influenced by Greek designs, and Greek letters can be found on various Celtic coins, especially
those of Southern France. Traders from Massalia ventured inland deep into France on the Rivers Durance and Rhône,
and established overland trade routes deep into Gaul, and to Switzerland and Burgundy. The Hellenistic period saw
the Greek alphabet spread into southern Gaul from Massalia (3rd and 2nd centuries BCE) and according to Strabo, Massalia
was also a center of education, where Celts went to learn Greek. A staunch ally of Rome, Massalia retained its independence
until it sided with Pompey in 49 BCE and was then taken by Caesar's forces. The Hellenistic states of Asia and Egypt
were run by an occupying imperial elite of Greco-Macedonian administrators and governors propped up by a standing
army of mercenaries and a small core of Greco-Macedonian settlers. Promotion of immigration from Greece was important
in the establishment of this system. Hellenistic monarchs ran their kingdoms as royal estates and most of the heavy
tax revenues went into the military and paramilitary forces which preserved their rule from any kind of revolution.
Macedonian and Hellenistic monarchs were expected to lead their armies on the field, along with a group of privileged
aristocratic companions or friends (hetairoi, philoi) which dined and drank with the king and acted as his advisory
council. Another role the monarch was expected to fill was that of charitable patron of his people; this public
philanthropy could mean building projects and handing out gifts, but also the promotion of Greek culture and religion.
Ptolemy, a somatophylax, one of the seven bodyguards who served as Alexander the Great's generals and deputies, was
appointed satrap of Egypt after Alexander's death in 323 BC. In 305 BC, he declared himself King Ptolemy I, later
known as "Soter" (saviour) for his role in helping the Rhodians during the siege of Rhodes. Ptolemy built new cities
such as Ptolemais Hermiou in upper Egypt and settled his veterans throughout the country, especially in the region
of the Faiyum. Alexandria, a major center of Greek culture and trade, became his capital city. As Egypt's first port
city, it was the main grain exporter in the Mediterranean. The Egyptians begrudgingly accepted the Ptolemies as the
successors to the pharaohs of independent Egypt, though the kingdom went through several native revolts. The Ptolemies
took on the traditions of the Egyptian Pharaohs, such as marrying their siblings (Ptolemy II was the first to adopt
this custom), having themselves portrayed on public monuments in Egyptian style and dress, and participating in Egyptian
religious life. The Ptolemaic ruler cult portrayed the Ptolemies as gods, and temples to the Ptolemies were erected
throughout the kingdom. Ptolemy I even created a new god, Serapis, who was a combination of two Egyptian gods: Apis
and Osiris, with attributes of Greek gods. Ptolemaic administration was, like the Ancient Egyptian bureaucracy, highly
centralized and focused on squeezing as much revenue out of the population as possible through tariffs, excise duties,
fines, taxes and so forth. A whole class of petty officials, tax farmers, clerks and overseers made this possible.
The Egyptian countryside was directly administered by this royal bureaucracy. External possessions such as Cyprus
and Cyrene were run by strategoi, military commanders appointed by the crown. Under Ptolemy II, Callimachus, Apollonius
of Rhodes, Theocritus and a host of other poets made the city a center of Hellenistic literature. Ptolemy himself
was eager to patronise the library, scientific research and individual scholars who lived on the grounds of the library.
He and his successors also fought a series of wars with the Seleucids, known as the Syrian wars, over the region
of Coele-Syria. Ptolemy IV won the great battle of Raphia (217 BCE) against the Seleucids, using native Egyptians
trained as phalangites. However these Egyptian soldiers revolted, eventually setting up a native breakaway Egyptian
state in the Thebaid between 205-186/5 BCE, severely weakening the Ptolemaic state. Ptolemy's family ruled Egypt
until the Roman conquest of 30 BC. All the male rulers of the dynasty took the name Ptolemy. Ptolemaic queens, some
of whom were the sisters of their husbands, were usually called Cleopatra, Arsinoe or Berenice. The most famous member
of the line was the last queen, Cleopatra VII, known for her role in the Roman political battles between Julius Caesar
and Pompey, and later between Octavian and Mark Antony. Her suicide at the conquest by Rome marked the end of Ptolemaic
rule in Egypt though Hellenistic culture continued to thrive in Egypt throughout the Roman and Byzantine periods
until the Muslim conquest. Following division of Alexander's empire, Seleucus I Nicator received Babylonia. From
there, he created a new empire which expanded to include much of Alexander's near eastern territories. At the height
of its power, it included central Anatolia, the Levant, Mesopotamia, Persia, today's Turkmenistan, Pamir, and parts
of Pakistan. It included a diverse population estimated at fifty to sixty million people. Under Antiochus I (c. 324/3
– 261 BC), however, the unwieldy empire was already beginning to shed territories. Pergamum broke away under Eumenes
I who defeated a Seleucid army sent against him. The kingdoms of Cappadocia, Bithynia and Pontus were all practically
independent by this time as well. Like the Ptolemies, Antiochus I established a dynastic religious cult, deifying
his father Seleucus I. Seleucus, officially said to be descended from Apollo, had his own priests and monthly sacrifices.
The erosion of the empire continued under Seleucus II, who was forced to fight a civil war (239-236) against his
brother Antiochus Hierax and was unable to keep Bactria, Sogdiana and Parthia from breaking away. Hierax carved off
most of Seleucid Anatolia for himself, but was defeated, along with his Galatian allies, by Attalus I of Pergamon
who now also claimed kingship. The vast Seleucid Empire was, like Egypt, mostly dominated by a Greco-Macedonian political
elite. The Greek population of the cities who formed the dominant elite were reinforced by emigration from Greece.
These cities included newly founded colonies such as Antioch, the other cities of the Syrian tetrapolis, Seleucia
(north of Babylon) and Dura-Europos on the Euphrates. These cities retained traditional Greek city state institutions
such as assemblies, councils and elected magistrates, but this was a facade, for they were always controlled by the
royal Seleucid officials. Apart from these cities, there were also a large number of Seleucid garrisons (choria),
military colonies (katoikiai) and Greek villages (komai) which the Seleucids planted throughout the empire to cement
their rule. This 'Greco-Macedonian' population (which also included the sons of settlers who had married local women)
could make up a phalanx of 35,000 men (out of a total Seleucid army of 80,000) during the reign of Antiochos III.
The rest of the army was made up of native troops. Antiochus III the Great conducted several vigorous campaigns to
retake all the provinces the empire had lost since the death of Seleucus I. After being defeated by Ptolemy IV's forces
at Raphia (217), Antiochus III led a long campaign to the east to subdue the far eastern breakaway provinces (212-205)
including Bactria, Parthia, Ariana, Sogdiana, Gedrosia and Drangiana. He was successful, bringing back most of these
provinces into at least nominal vassalage and receiving tribute from their rulers. After the death of Ptolemy IV
(204), Antiochus took advantage of the weakness of Egypt to conquer Coele-Syria in the fifth Syrian war (202-195).
He then began expanding his influence into Pergamene territory in Asia and crossed into Europe, fortifying Lysimachia
on the Hellespont, but his expansion into Anatolia and Greece was abruptly halted after a decisive defeat at the
Battle of Magnesia (190 BCE). In the Treaty of Apamea which ended the war, Antiochus lost all of his territories
in Anatolia west of the Taurus and was forced to pay a large indemnity of 15,000 talents. After the death of Lysimachus,
one of his officers, Philetaerus, took control of the city of Pergamum in 282 BC along with Lysimachus' war chest
of 9,000 talents and declared himself loyal to Seleucus I while remaining de facto independent. His descendant, Attalus
I, defeated the invading Galatians and proclaimed himself an independent king. Attalus I (241–197 BC) was a staunch
ally of Rome against Philip V of Macedon during the first and second Macedonian Wars. For his support against the
Seleucids in 190 BCE, Eumenes II was rewarded with all the former Seleucid domains in Asia Minor. Eumenes II turned
Pergamon into a centre of culture and science by establishing the library of Pergamum which was said to be second
only to the library of Alexandria with 200,000 volumes according to Plutarch. It included a reading room and a collection
of paintings. Eumenes II also constructed the Pergamum Altar with friezes depicting the Gigantomachy on the acropolis
of the city. Pergamum was also a center of parchment (charta pergamena) production. The Attalids ruled Pergamon until
Attalus III bequeathed the kingdom to the Roman Republic in 133 BC to avoid a likely succession crisis. The Celts
who settled in Galatia came through Thrace under the leadership of Leotarios and Leonnorios circa 270 BC. They were
defeated by Seleucus I in the 'battle of the Elephants', but were still able to establish a Celtic territory in central
Anatolia. The Galatians were well respected as warriors and were widely used as mercenaries in the armies of the
successor states. They continued to attack neighboring kingdoms such as Bithynia and Pergamon, plundering and extracting
tribute. This came to an end when they sided with the renegade Seleucid prince Antiochus Hierax who tried to defeat
Attalus, the ruler of Pergamon (241–197 BC). Attalus severely defeated the Gauls, forcing them to confine themselves
to Galatia. The theme of the Dying Gaul (a famous statue displayed in Pergamon) remained a favorite in Hellenistic
art for a generation, signifying the victory of the Greeks over a noble enemy. In the early 2nd century BC, the Galatians
became allies of Antiochus the Great, the last Seleucid king trying to regain suzerainty over Asia Minor. In 189
BC, Rome sent Gnaeus Manlius Vulso on an expedition against the Galatians. Galatia was henceforth dominated by Rome
through regional rulers from 189 BC onward. The Bithynians were a Thracian people living in northwest Anatolia. After
Alexander's conquests the region of Bithynia came under the rule of the native king Bas, who defeated Calas, a general
of Alexander the Great, and maintained the independence of Bithynia. His son, Zipoetes I of Bithynia maintained this
autonomy against Lysimachus and Seleucus I, and assumed the title of king (basileus) in 297 BCE. His son and successor,
Nicomedes I, founded Nicomedia, which soon rose to great prosperity, and during his long reign (c. 278 – c. 255 BCE),
as well as those of his successors, the kingdom of Bithynia held a considerable place among the minor monarchies
of Anatolia. Nicomedes also invited the Celtic Galatians into Anatolia as mercenaries, and they later turned on his
son Prusias I, who defeated them in battle. Their last king, Nicomedes IV, was unable to maintain himself against
Mithridates VI of Pontus, and, after being restored to his throne by the Roman Senate, he bequeathed his kingdom
by will to the Roman republic (74 BCE). In Cappadocia, Ariarathes III took the title of king in 255 BCE and married Stratonice,
a daughter of Antiochus II, remaining an ally of the Seleucid kingdom. Under Ariarathes IV, Cappadocia came into
relations with Rome, first as a foe espousing the cause of Antiochus the Great, then as an ally against Perseus of
Macedon and finally in a war against the Seleucids. Ariarathes V also waged war with Rome against Aristonicus, a
claimant to the throne of Pergamon, and their forces were annihilated in 130 BCE. This defeat allowed Pontus to invade
and conquer the kingdom. The Kingdom of Pontus was a Hellenistic kingdom on the southern coast of the Black Sea.
It was founded by Mithridates I in 291 BC and lasted until its conquest by the Roman Republic in 63 BC. Despite being
ruled by a dynasty descended from the Persian Achaemenid Empire, it became hellenized due to the influence
of the Greek cities on the Black Sea and its neighboring kingdoms. Pontic culture was a mix of Greek and Iranian
elements; the most hellenized parts of the kingdom were on the coast, populated by Greek colonies such as Trapezus
and Sinope, which became the capital of the kingdom. Epigraphic evidence also shows extensive Hellenistic influence
in the interior. During the reign of Mithridates II, Pontus was allied with the Seleucids through dynastic marriages.
By the time of Mithridates VI Eupator, Greek was the official language of the kingdom though Anatolian languages
continued to be spoken. The kingdom grew to its largest extent under Mithridates VI, who conquered Colchis, Cappadocia,
Paphlagonia, Bithynia, Lesser Armenia, the Bosporan Kingdom, the Greek colonies of the Tauric Chersonesos and for
a brief time the Roman province of Asia. Mithridates VI, himself of mixed Persian and Greek ancestry, presented himself
as the protector of the Greeks against the 'barbarians' of Rome, styling himself "King Mithridates Eupator Dionysus"
and "the great liberator". Mithridates also depicted himself with the anastole hairstyle of Alexander and used
the symbolism of Herakles, from whom the Macedonian kings claimed descent. After a long struggle with Rome in the
Mithridatic wars, Pontus was defeated, part of it was incorporated into the Roman Republic as the province Bithynia
and Pontus, and the eastern half survived as a client kingdom. Orontid Armenia formally passed to the empire of Alexander
the Great following his conquest of Persia. Alexander appointed an Orontid named Mithranes to govern Armenia. Armenia
later became a vassal state of the Seleucid Empire, but it maintained a considerable degree of autonomy, retaining
its native rulers. Around 212 BC the country was divided into two kingdoms, Greater Armenia and Armenia
Sophene including Commagene or Armenia Minor. The kingdoms became so independent from Seleucid control that Antiochus
III the Great waged war on them during his reign and replaced their rulers. After the Seleucid defeat at the Battle
of Magnesia in 190 BC, the kings of Sophene and Greater Armenia revolted and declared their independence, with Artaxias
becoming the first king of the Artaxiad dynasty of Armenia in 188. During the reign of the Artaxiads, Armenia went
through a period of hellenization. Numismatic evidence shows Greek artistic styles and the use of the Greek language.
Some coins describe the Armenian kings as "Philhellenes". During the reign of Tigranes the Great (95–55 BC), the
kingdom of Armenia reached its greatest extent, containing many Greek cities including the entire Syrian tetrapolis.
Cleopatra, the wife of Tigranes the Great, invited Greeks such as the rhetor Amphicrates and the historian Metrodorus
of Scepsis to the Armenian court and, according to Plutarch, when the Roman general Lucullus seized the Armenian
capital Tigranocerta, he found a troupe of Greek actors who had arrived to perform plays for Tigranes. Tigranes'
successor Artavasdes II even composed Greek tragedies himself. Parthia was a north-eastern Iranian satrapy of the
Achaemenid empire which later passed on to Alexander's empire. Under the Seleucids, Parthia was governed by various
Greek satraps such as Nicanor and Philip. In 247 BC, following the death of Antiochus II Theos, Andragoras,
the Seleucid governor of Parthia, proclaimed his independence and began minting coins showing himself wearing a royal
diadem and claiming kingship. He ruled until 238 BCE when Arsaces, the leader of the Parni tribe conquered Parthia,
killing Andragoras and inaugurating the Arsacid Dynasty. Antiochus III recaptured Arsacid controlled territory in
209 BC from Arsaces II. Arsaces II sued for peace and became a vassal of the Seleucids, and it was not until the reign
of Phraates I (168–165 BCE) that the Arsacids would again begin to assert their independence. During the reign of
Mithridates I of Parthia, Arsacid control expanded to include Herat (in 167 BC), Babylonia (in 144 BC), Media (in
141 BC), Persia (in 139 BC), and large parts of Syria (in the 110s BC). The Seleucid–Parthian wars continued as the
Seleucids invaded Mesopotamia under Antiochus VII Sidetes (r. 138–129 BC), but he was eventually killed by a Parthian
counterattack. After the fall of the Seleucid dynasty, the Parthians fought frequently against neighbouring Rome
in the Roman–Parthian Wars (66 BC – 217 AD). Abundant traces of Hellenism continued under the Parthian empire. The
Parthians used Greek as well as their own Parthian language (though to a lesser extent than Greek) as languages of administration
and also used Greek drachmas as coinage. They enjoyed Greek theater, and Greek art influenced Parthian art. The Parthians
continued worshipping Greek gods syncretized with Iranian deities. Their rulers established ruler cults in
the manner of Hellenistic kings and often used Hellenistic royal epithets. The Nabatean Kingdom was an Arab state
located between the Sinai Peninsula and the Arabian Peninsula. Its capital was the city of Petra, an important trading
city on the incense route. The Nabateans resisted the attacks of Antigonus and were allies of the Hasmoneans in
their struggle against the Seleucids, but later fought against Herod the Great. The hellenization of the Nabateans
occurred relatively late in comparison to the surrounding regions. Nabatean material culture does not show any Greek
influence until the reign of Aretas III Philhellene in the 1st century BCE. Aretas captured Damascus and built the
Petra pool complex and gardens in the Hellenistic style. Though the Nabateans originally worshipped their traditional
gods in symbolic form such as stone blocks or pillars, during the Hellenistic period they began to identify their
gods with Greek gods and depict them in figurative forms influenced by Greek sculpture. Nabatean art shows Greek
influences and paintings have been found depicting Dionysian scenes. They also slowly adopted Greek as a language
of commerce along with Aramaic and Arabic. During the Hellenistic period, Judea became a frontier region between
the Seleucid Empire and Ptolemaic Egypt and therefore was often the frontline of the Syrian wars, changing hands
several times during these conflicts. Under the Hellenistic kingdoms, Judea was ruled by the hereditary office of
the High Priest of Israel as a Hellenistic vassal. This period also saw the rise of a Hellenistic Judaism, which
first developed in the Jewish diaspora of Alexandria and Antioch, and then spread to Judea. The major literary product
of this cultural syncretism is the Septuagint translation of the Hebrew Bible from Biblical Hebrew and Biblical Aramaic
to Koiné Greek. The reason for the production of this translation seems to be that many of the Alexandrian Jews had
lost the ability to speak Hebrew and Aramaic. Between 301 and 219 BCE the Ptolemies ruled Judea in relative peace,
and Jews often found themselves working in the Ptolemaic administration and army, which led to the rise of a Hellenized
Jewish elite class (e.g. the Tobiads). The wars of Antiochus III brought the region into the Seleucid empire; Jerusalem
fell to his control in 198 and the Temple was repaired and provided with money and tribute. Antiochus IV Epiphanes
sacked Jerusalem and looted the Temple in 169 BCE after disturbances in Judea during his abortive invasion of Egypt.
Antiochus then banned key Jewish religious rites and traditions in Judea. He may have been attempting to Hellenize
the region and unify his empire, and the Jewish resistance to this eventually led to an escalation of violence. Whatever
the case, tensions between pro and anti-Seleucid Jewish factions led to the 174–135 BCE Maccabean Revolt of Judas
Maccabeus (whose victory is celebrated in the Jewish festival of Hanukkah). Modern interpretations see this period
as a civil war between Hellenized and orthodox forms of Judaism. Out of this revolt was formed an independent Jewish
kingdom known as the Hasmonaean Dynasty, which lasted from 165 BCE to 63 BCE. The Hasmonean Dynasty eventually disintegrated
in a civil war, which coincided with civil wars in Rome. The last Hasmonean ruler, Antigonus II Mattathias, was captured
by Herod and executed in 37 BCE. In spite of originally being a revolt against Greek overlordship, the Hasmonean
kingdom and also the Herodian kingdom which followed gradually became more and more hellenized. From 37 BCE to 6
CE, the Herodian dynasty, Jewish-Roman client kings, ruled Judea. Herod the Great considerably enlarged the Temple
(see Herod's Temple), making it one of the largest religious structures in the world. The style of the enlarged temple
and other Herodian architecture shows significant Hellenistic architectural influence. The Greek kingdom of Bactria
began as a breakaway satrapy of the Seleucid empire, which, because of the size of the empire, had significant freedom
from central control. Between 255 and 246 BCE, the governor of Bactria, Sogdiana and Margiana (most of present-day Afghanistan),
one Diodotus, took this process to its logical extreme and declared himself king. Diodotus II, son of Diodotus, was
overthrown in about 230 BC by Euthydemus, possibly the satrap of Sogdiana, who then started his own dynasty. In c.
210 BC, the Greco-Bactrian kingdom was invaded by a resurgent Seleucid empire under Antiochus III. While victorious
in the field, it seems Antiochus came to realise that there were advantages in the status quo (perhaps sensing that
Bactria could not be governed from Syria), and married one of his daughters to Euthydemus's son, thus legitimising
the Greco-Bactria dynasty. Soon afterwards the Greco-Bactrian kingdom seems to have expanded, possibly taking advantage
of the defeat of the Parthian king Arsaces II by Antiochus. According to Strabo, the Greco-Bactrians seem to have
had contacts with China through the silk road trade routes (Strabo, XI.XI.I). Indian sources also maintain religious
contact between Buddhist monks and the Greeks, and some Greco-Bactrians did convert to Buddhism. Demetrius, son and
successor of Euthydemus, invaded north-western India in 180 BC, after the destruction of the Mauryan empire there;
the Mauryans were probably allies of the Bactrians (and Seleucids). The exact justification for the invasion remains
unclear, but by about 175 BC, the Greeks ruled over parts of north-western India. This period also marks the beginning
of the obfuscation of Greco-Bactrian history. Demetrius possibly died about 180 BC; numismatic evidence suggests the
existence of several other kings shortly thereafter. It is probable that at this point the Greco-Bactrian kingdom
split into several semi-independent regions for some years, often warring amongst themselves. Heliocles was the last
Greek to clearly rule Bactria, his power collapsing in the face of central Asian tribal invasions (Scythian and Yuezhi),
by about 130 BCE. However, Greek urban civilisation seems to have continued in Bactria after the fall of the kingdom,
having a hellenising effect on the tribes which had displaced Greek rule. The Kushan empire which followed continued
to use Greek on its coinage, and Greeks continued to be influential in the empire. After Demetrius' death, civil
wars between Bactrian kings in India allowed Apollodotus I (from c. 180/175 BCE) to make himself independent as the
first proper Indo-Greek king (who did not rule from Bactria). Large numbers of his coins have been found in India,
and he seems to have reigned in Gandhara as well as western Punjab. Apollodotus I was succeeded by or ruled alongside
Antimachus II, likely the son of the Bactrian king Antimachus I. In about 155 (or 165) BC he seems to have been succeeded
by the most successful of the Indo-Greek kings, Menander I. Menander converted to Buddhism, and seems to have been
a great patron of the religion; he is remembered in some Buddhist texts as 'Milinda'. He also expanded the kingdom
further east into Punjab, though these conquests were rather ephemeral. After the death of Menander (c. 130 BC),
the Kingdom appears to have fragmented, with several 'kings' attested contemporaneously in different regions. This
inevitably weakened the Greek position, and territory seems to have been lost progressively. Around 70 BC, the western
regions of Arachosia and Paropamisadae were lost to tribal invasions, presumably by those tribes responsible for
the end of the Bactrian kingdom. The resulting Indo-Scythian kingdom seems to have gradually pushed the remaining
Indo-Greek kingdom towards the east. The Indo-Greek kingdom appears to have lingered on in western Punjab until about
10 AD, when it was finally ended by the Indo-Scythians. Several references in Indian literature praise the knowledge of the
Yavanas or the Greeks. The Mahabharata compliments them as "the all-knowing Yavanas" (sarvajnaa yavanaa) i.e. "The
Yavanas, O king, are all-knowing; the Suras are particularly so. The mlecchas are wedded to the creations of their
own fancy." and the creators of flying machines that are generally called vimanas. The "Brihat-Samhita" of the mathematician
Varahamihira says: "The Greeks, though impure, must be honored since they were trained in sciences and therein, excelled
others...". Hellenistic culture was at its height of world influence during this period. Hellenism or at
least Philhellenism reached most regions on the frontiers of the Hellenistic kingdoms. Though some of these regions
were not ruled by Greeks or even Greek speaking elites, certain Hellenistic influences can be seen in the historical
record and material culture of these regions. Other regions had established contact with Greek colonies before this
period, and simply saw a continued process of Hellenization and intermixing. Before the Hellenistic period, Greek
colonies had been established on the coast of the Crimean and Taman peninsulas. The Bosporan Kingdom was a multi-ethnic
kingdom of Greek city states and local tribal peoples such as the Maeotians, Thracians, Crimean Scythians and Cimmerians
under the Spartocid dynasty (438–110 BCE). The Spartocids were a hellenized Thracian family from Panticapaeum. The
Bosporans had long lasting trade contacts with the Scythian peoples of the Pontic-Caspian steppe, and Hellenistic
influence can be seen in the Scythian settlements of the Crimea, such as in the Scythian Neapolis. Scythian pressure
on the Bosporan kingdom under Paerisades V led to its eventual vassalage under the Pontic king Mithradates VI for
protection, circa 107 BCE. It later became a Roman client state. Other Scythians on the steppes of Central Asia came
into contact with Hellenistic culture through the Greeks of Bactria. Many Scythian elites purchased Greek products
and some Scythian art shows Greek influences. At least some Scythians seem to have become Hellenized, because we
know of conflicts between the elites of the Scythian kingdom over the adoption of Greek ways. These Hellenized Scythians
were known as the "young Scythians". The peoples around Pontic Olbia, known as the Callipidae, were intermixed and
Hellenized Greco-Scythians. In Arabia, Bahrain, which was referred to by the Greeks as Tylos, was a centre of pearl
trading when Nearchus came to discover it while serving under Alexander the Great. The Greek admiral Nearchus is believed
to have been the first of Alexander's commanders to visit these islands. It is not known whether Bahrain was part
of the Seleucid Empire, although the archaeological site at Qalat Al Bahrain has been proposed as a Seleucid base
in the Persian Gulf. Alexander had planned to settle the eastern shores of the Persian Gulf with Greek colonists,
and although it is not clear that this happened on the scale he envisaged, Tylos was very much part of the Hellenised
world: the language of the upper classes was Greek (although Aramaic was in everyday use), while Zeus was worshipped
in the form of the Arabian sun-god Shams. Tylos even became the site of Greek athletic contests. Carthage was a Phoenician
colony on the coast of Tunisia. Carthaginian culture came into contact with the Greeks through Punic colonies in
Sicily and through their widespread Mediterranean trade network. While the Carthaginians retained their Punic culture
and language, they did adopt some Hellenistic ways, one of the most prominent of which was their military practices.
In 550 BCE, Mago I of Carthage began a series of military reforms which included copying the army of Timoleon, Tyrant
of Syracuse. The core of Carthage's military was the Greek-style phalanx formed by citizen hoplite spearmen who had
been conscripted into service, though their armies also included large numbers of mercenaries. After their defeat
in the First Punic War, Carthage hired a Spartan mercenary captain, Xanthippus, to reform their military
forces. Xanthippus reformed the Carthaginian military along Macedonian army lines. Widespread Roman interference
in the Greek world was probably inevitable given the general manner of the ascendency of the Roman Republic. This
Roman-Greek interaction began as a consequence of the Greek city-states located along the coast of southern Italy.
Rome had come to dominate the Italian peninsula, and desired the submission of the Greek cities to its rule. Although
they initially resisted, allying themselves with Pyrrhus of Epirus, and defeating the Romans at several battles,
the Greek cities were unable to maintain this position and were absorbed by the Roman republic. Shortly afterwards,
Rome became involved in Sicily, fighting against the Carthaginians in the First Punic War. The end result was the
complete conquest of Sicily, including its previously powerful Greek cities, by the Romans. Roman entanglement in
the Balkans began when Illyrian piratical raids on Roman merchants led to invasions of Illyria (the First and Second
Illyrian Wars). Tension between Macedon and Rome increased when the young king of Macedon, Philip V, harbored one
of the chief pirates, Demetrius of Pharos (a former client of Rome). As a result, in an attempt to reduce Roman influence
in the Balkans, Philip allied himself with Carthage after Hannibal had dealt the Romans a massive defeat at the Battle
of Cannae (216 BC) during the Second Punic War. Forcing the Romans to fight on another front when they were at a
nadir of manpower gained Philip the lasting enmity of the Romans; this was the only real result of the somewhat insubstantial
First Macedonian War (215–202 BC). Once the Second Punic War had been resolved, and the Romans had begun to regather
their strength, they looked to re-assert their influence in the Balkans, and to curb the expansion of Philip. A pretext
for war was provided by Philip's refusal to end his war with Attalid Pergamum, and Rhodes, both Roman allies. The
Romans, also allied with the Aetolian League of Greek city-states (which resented Philip's power), thus declared
war on Macedon in 200 BC, starting the Second Macedonian War. This ended with a decisive Roman victory at the Battle
of Cynoscephalae (197 BC). Like most Roman peace treaties of the period, the resultant 'Peace of Flamininus' was designed
utterly to crush the power of the defeated party; a massive indemnity was levied, Philip's fleet was surrendered
to Rome, and Macedon was effectively returned to its ancient boundaries, losing influence over the city-states of
southern Greece, and land in Thrace and Asia Minor. The result was the end of Macedon as a major power in the Mediterranean.
As a result of the confusion in Greece at the end of the Second Macedonian War, the Seleucid Empire also became entangled
with the Romans. The Seleucid Antiochus III had allied with Philip V of Macedon in 203 BC, agreeing that they should
jointly conquer the lands of the boy-king of Egypt, Ptolemy V. After defeating Ptolemy in the Fifth Syrian War, Antiochus
concentrated on occupying the Ptolemaic possessions in Asia Minor. However, this brought Antiochus into conflict
with Rhodes and Pergamum, two important Roman allies, and began a 'cold war' between Rome and Antiochus (not helped
by the presence of Hannibal at the Seleucid court). Meanwhile, in mainland Greece, the Aetolian League, which had
sided with Rome against Macedon, now grew to resent the Roman presence in Greece. This presented Antiochus III with
a pretext to invade Greece and 'liberate' it from Roman influence, thus starting the Roman-Syrian War (192–188 BC).
In 191 BC, the Romans under Manius Acilius Glabrio routed him at Thermopylae and obliged him to withdraw to Asia.
During the course of this war Roman troops moved into Asia for the first time, where they defeated Antiochus again
at the Battle of Magnesia (190 BC). A crippling treaty was imposed on Antiochus, with Seleucid possessions in Asia
Minor removed and given to Rhodes and Pergamum, the size of the Seleucid navy reduced, and a massive war indemnity
invoked. Thus, in less than twenty years, Rome had destroyed the power of one of the successor states, crippled another,
and firmly entrenched its influence over Greece. This was primarily a result of the over-ambition of the Macedonian
kings, and their unintended provocation of Rome; though Rome was quick to exploit the situation. In another twenty
years, the Macedonian kingdom was no more. Seeking to re-assert Macedonian power and Greek independence, Philip V's
son Perseus incurred the wrath of the Romans, resulting in the Third Macedonian War (171–168 BC). Victorious, the
Romans abolished the Macedonian kingdom, replacing it with four puppet republics; these lasted a further twenty years
before Macedon was formally annexed as a Roman province (146 BC) after yet another rebellion under Andriscus. Rome
now demanded that the Achaean League, the last stronghold of Greek independence, be dissolved. The Achaeans refused
and declared war on Rome. Most of the Greek cities rallied to the Achaeans' side, even slaves were freed to fight
for Greek independence. The Roman consul Lucius Mummius advanced from Macedonia and defeated the Greeks at Corinth,
which was razed to the ground. In 146 BC, the Greek peninsula, though not the islands, became a Roman protectorate.
Roman taxes were imposed, except in Athens and Sparta, and all the cities had to accept rule by Rome's local allies.
The Attalid dynasty of Pergamum lasted little longer; a Roman ally until the end, its final king Attalus III died
in 133 BC without an heir, and taking the alliance to its natural conclusion, willed Pergamum to the Roman Republic.
The final Greek resistance came in 88 BC, when King Mithridates of Pontus rebelled against Rome, captured Roman-held
Anatolia, and massacred up to 100,000 Romans and Roman allies across Asia Minor. Many Greek cities, including Athens,
overthrew their Roman puppet rulers and joined him in the Mithridatic Wars. He was driven out of Greece by the Roman general Lucius Cornelius Sulla, who laid siege to Athens and razed the city. Mithridates was finally defeated
by Gnaeus Pompeius Magnus (Pompey the Great) in 65 BC. Further ruin was brought to Greece by the Roman civil wars,
which were partly fought in Greece. Finally, in 27 BC, Augustus directly annexed Greece to the new Roman Empire as
the province of Achaea. The struggles with Rome had left Greece depopulated and demoralised. Nevertheless, Roman
rule at least brought an end to warfare, and cities such as Athens, Corinth, Thessaloniki and Patras soon recovered
their prosperity. Conversely, having so firmly entrenched themselves in Greek affairs, the Romans now completely ignored the rapidly disintegrating Seleucid empire (perhaps because it posed no threat) and left the Ptolemaic kingdom to decline quietly, while acting as a protector of sorts, at least insofar as it stopped other powers from taking Egypt over (including the famous line-in-the-sand incident when the Seleucid Antiochus IV Epiphanes tried to invade Egypt). Eventually,
instability in the near east resulting from the power vacuum left by the collapse of the Seleucid empire caused the
Roman proconsul Pompey the Great to abolish the Seleucid rump state, absorbing much of Syria into the Roman republic.
Famously, the end of Ptolemaic Egypt came as the final act in the republican civil war between the Roman triumvirs Mark Antony and Augustus Caesar. After the defeat of Antony and his lover, the last Ptolemaic monarch, Cleopatra VII, at the Battle of Actium, Augustus invaded Egypt and took it as his own personal fiefdom. He thereby completed
both the destruction of the Hellenistic kingdoms and the Roman republic, and ended (in hindsight) the Hellenistic
era. In some fields Hellenistic culture thrived, particularly in its preservation of the past. The states of the
Hellenistic period were deeply fixated on the past and its seemingly lost glories. The preservation of many classical and archaic works of art and literature (including the works of the three great classical tragedians, Aeschylus, Sophocles, and Euripides) is due to the efforts of the Hellenistic Greeks. The Museum and Library of Alexandria
was the center of this conservationist activity. With the support of royal stipends, Alexandrian scholars collected,
translated, copied, classified and critiqued every book they could find. Most of the great literary figures of the
Hellenistic period studied at Alexandria and conducted research there. They were scholar poets, writing not only
poetry but treatises on Homer and other archaic and classical Greek literature. Athens retained its position as the
most prestigious seat of higher education, especially in the domains of philosophy and rhetoric, with considerable
libraries and philosophical schools. Alexandria had the monumental Museum (i.e. research center) and Library of Alexandria
which was estimated to have had 700,000 volumes. The city of Pergamon also had a large library and became a major
center of book production. The island of Rhodes had a library and also boasted a famous finishing school for politics
and diplomacy. Libraries were also present in Antioch, Pella, and Kos. Cicero was educated in Athens and Mark Antony
in Rhodes. Antioch was founded as a metropolis and center of Greek learning which retained its status into the era
of Christianity. Seleucia replaced Babylon as the metropolis of the lower Tigris. The spread of Greek culture and
language throughout the Near East and Asia owed much to the development of newly founded cities and deliberate colonization
policies by the successor states, which in turn was necessary for maintaining their military forces. Settlements
such as Ai-Khanoum, situated on trade routes, allowed Greek culture to mix and spread. The language of Philip II's
and Alexander's court and army (which was made up of various Greek and non-Greek speaking peoples) was a version
of Attic Greek, and over time this language developed into Koine, the lingua franca of the successor states. The
identification of local gods with similar Greek deities, a practice termed 'Interpretatio graeca', facilitated the
building of Greek-style temples, and the Greek culture in the cities also meant that buildings such as gymnasia and
theaters became common. Many cities maintained nominal autonomy while under the rule of the local king or satrap,
and often had Greek-style institutions. Greek dedications, statues, architecture and inscriptions have all been found.
However, local cultures were not replaced, and mostly went on as before, but now with a new Greco-Macedonian or otherwise
Hellenized elite. An example that shows the spread of Greek theater is Plutarch's story of the death of Crassus,
in which his head was taken to the Parthian court and used as a prop in a performance of The Bacchae. Theaters have
also been found: for example, in Ai-Khanoum on the edge of Bactria, the theater has 35 rows – larger than the theater
in Babylon. It seems likely that Alexander himself pursued policies which led to Hellenization, such as the foundations
of new cities and Greek colonies. While it may have been a deliberate attempt to spread Greek culture (or as Arrian
says, "to civilise the natives"), it is more likely that it was a series of pragmatic measures designed to aid in
the rule of his enormous empire. Cities and colonies were centers of administrative control and Macedonian power
in a newly conquered region. Alexander also seems to have attempted to create a mixed Greco-Persian elite class as
shown by the Susa weddings and his adoption of some forms of Persian dress and court culture. He also brought in
Persian and other non-Greek peoples into his military and even the elite cavalry units of the companion cavalry.
Again, it is probably better to see these policies as a pragmatic response to the demands of ruling a large empire
than as any idealized attempt to bring Greek culture to the 'barbarians'. This approach was bitterly resented
by the Macedonians and discarded by most of the Diadochi after Alexander's death. These policies can also be interpreted
as the result of Alexander's possible megalomania during his later years. Throughout the Hellenistic world, these
Greco-Macedonian colonists considered themselves by and large superior to the native "barbarians" and excluded most
non-Greeks from the upper echelons of courtly and government life. Most of the native population was not Hellenized,
had little access to Greek culture and often found themselves discriminated against by their Hellenic overlords.
Gymnasiums and their Greek education, for example, were for Greeks only. Greek cities and colonies may have exported
Greek art and architecture as far as the Indus, but these were mostly enclaves of Greek culture for the transplanted
Greek elite. The degree of influence that Greek culture had throughout the Hellenistic kingdoms was therefore highly
localized and based mostly on a few great cities like Alexandria and Antioch. Some natives did learn Greek and adopt
Greek ways, but this was mostly limited to a few local elites who were allowed to retain their posts by the Diadochi
and also to a small number of mid-level administrators who acted as intermediaries between the Greek speaking upper
class and their subjects. In the Seleucid empire for example, this group amounted to only 2.5 percent of the official
class. Despite their initial reluctance, the Successors seem to have later deliberately naturalized themselves to
their different regions, presumably in order to help maintain control of the population. In the Ptolemaic kingdom, we find some Egyptianized Greeks from the 2nd century onwards. In the Indo-Greek kingdom, we find kings who were converts to Buddhism (e.g. Menander). The Greeks in these regions therefore gradually became 'localized', adopting local customs
as appropriate. In this way, hybrid 'Hellenistic' cultures naturally emerged, at least among the upper echelons of
society. The trends of Hellenization were therefore accompanied by Greeks adopting native ways over time, but this varied widely by place and by social class. The farther from the Mediterranean and the lower the social status, the more likely a colonist was to adopt local ways, while the Greco-Macedonian elites and royal families usually remained thoroughly Greek and viewed most non-Greeks with disdain. It was not until Cleopatra VII that a Ptolemaic ruler bothered to learn the Egyptian language of her subjects. In the Hellenistic period, there was much continuity
in Greek religion: the Greek gods continued to be worshiped, and the same rites were practiced as before. However,
the socio-political changes brought on by the conquest of the Persian empire and Greek emigration abroad meant that
change also came to religious practices. This varied greatly by location: Athens, Sparta, and most cities on the Greek
mainland did not see much religious change or new gods (with the exception of the Egyptian Isis in Athens), while
the multi-ethnic Alexandria had a very varied group of gods and religious practices, including Egyptian, Jewish and
Greek. Greek emigres brought their Greek religion everywhere they went, even as far as India and Afghanistan. Non-Greeks
also had more freedom to travel and trade throughout the Mediterranean and in this period we can see Egyptian gods
such as Serapis, and the Syrian gods Atargatis and Hadad, as well as a Jewish synagogue, all coexisting on the island
of Delos alongside classical Greek deities. A common practice was to identify Greek gods with native gods that had
similar characteristics and this created new fusions like Zeus-Ammon, Aphrodite Hagne (a Hellenized Atargatis) and
Isis-Demeter. Greek emigres faced individual religious choices they had not faced in their home cities, where the
gods they worshiped were dictated by tradition. The worship of dynastic ruler cults was also a feature of this period,
most notably in Egypt, where the Ptolemies adopted earlier Pharaonic practice, and established themselves as god-kings.
These cults were usually associated with a specific temple in honor of the ruler such as the Ptolemaieia at Alexandria
and had their own festivals and theatrical performances. The setting up of ruler cults was based more on the systematized honors offered to the kings (sacrifice, proskynesis, statues, altars, hymns), which put them on a par with the gods (isotheism), than on actual belief in their divine nature. According to Peter Green, these cults did not produce genuine belief in the divinity of rulers among the Greeks and Macedonians. The worship of Alexander was also popular, as in the long-lived cult at Erythrae and, of course, at Alexandria, where his tomb was located. The Hellenistic age also saw a rise in disillusionment with traditional religion. The rise of philosophy and the sciences had removed
the gods from many of their traditional domains such as their role in the movement of the heavenly bodies and natural
disasters. The Sophists proclaimed the centrality of humanity and agnosticism; the belief in Euhemerism (the view
that the gods were simply ancient kings and heroes), became popular. The popular philosopher Epicurus promoted a
view of disinterested gods living far away from the human realm in metakosmia. The apotheosis of rulers also brought
the idea of divinity down to earth. While there does seem to have been a substantial decline in religiosity, this
was mostly confined to the educated classes. Magic was practiced widely, and such practices, too, were a continuation from
earlier times. Throughout the Hellenistic world, people would consult oracles, and use charms and figurines to deter
misfortune or to cast spells. Also developed in this era was the complex system of astrology, which sought to determine
a person's character and future in the movements of the sun, moon, and planets. Astrology was widely associated with
the cult of Tyche (luck, fortune), which grew in popularity during this period. The Hellenistic period saw the rise
of New Comedy, whose only surviving representative texts are those of Menander (born 342/1 BCE). Only one play,
Dyskolos, survives in its entirety. The plots of this new Hellenistic comedy of manners were more domestic and formulaic; stereotypical low-born characters such as slaves became more important; the language was colloquial; and major motifs
included escapism, marriage, romance and luck (Tyche). Though no Hellenistic tragedy remains intact, tragedies were still widely produced during the period; yet it seems that there was no major breakthrough in style, which remained within the
classical model. The Supplementum Hellenisticum, a modern collection of extant fragments, contains the fragments
of 150 authors. Hellenistic poets now sought patronage from kings, and wrote works in their honor. The scholars at
the libraries in Alexandria and Pergamon focused on the collection, cataloging, and literary criticism of classical
Athenian works and ancient Greek myths. The poet-critic Callimachus, a staunch elitist, wrote hymns equating Ptolemy
II to Zeus and Apollo. He promoted short poetic forms such as the epigram, epyllion and the iambic and attacked epic
as base and common ("big book, big evil" was his doctrine). He also wrote a massive catalog of the holdings of the
library of Alexandria, the famous Pinakes. Callimachus was extremely influential in his time and also for the development
of Augustan poetry. Another poet, Apollonius of Rhodes, attempted to revive the epic for the Hellenistic world with
his Argonautica. He had been a student of Callimachus and later became chief librarian (prostates) of the library
of Alexandria; Apollonius and Callimachus spent much of their careers feuding with each other. Pastoral poetry also thrived during the Hellenistic era; Theocritus was a major poet who popularized the genre. During the Hellenistic
period, many different schools of thought developed. Athens, with its multiple philosophical schools, remained the center of philosophical thought. However, Athens had now lost her political freedom, and Hellenistic philosophy
is a reflection of this new difficult period. In this political climate, Hellenistic philosophers went in search
of goals such as ataraxia (un-disturbedness), autarky (self-sufficiency) and apatheia (freedom from suffering), which
would allow them to wrest well-being or eudaimonia out of the most difficult turns of fortune. This occupation with
the inner life, with personal inner liberty and with the pursuit of eudaimonia is what all Hellenistic philosophical
schools have in common. The Epicureans and the Cynics rejected public offices and civic service, which amounted to
a rejection of the polis itself, the defining institution of the Greek world. Epicurus promoted atomism and an asceticism
based on freedom from pain as its ultimate goal. Cynics such as Diogenes of Sinope rejected all material possessions
and social conventions (nomos) as unnatural and useless. The Cyrenaics meanwhile, embraced hedonism, arguing that
pleasure was the only true good. Stoicism, founded by Zeno of Citium, taught that virtue was sufficient for eudaimonia
as it would allow one to live in accordance with Nature or Logos. Zeno became extremely popular; the Athenians set
up a gold statue of him and Antigonus II Gonatas invited him to the Macedonian court. The philosophical schools of
Aristotle (the Peripatetics of the Lyceum) and Plato (Platonism at the Academy) also remained influential. The academy
would eventually turn to Academic Skepticism under Arcesilaus, until Antiochus of Ascalon (c. 90 BCE) rejected skepticism in favor of a more dogmatic Platonism. Hellenistic philosophy had a significant influence on the Greek ruling elite. Examples
include the Athenian statesman Demetrius of Phaleron, who had studied in the Lyceum; the Spartan king Cleomenes III, who was a student of the Stoic Sphairos of Borysthenes; and Antigonus II, who was also a well-known Stoic. The same can be said of the Roman upper classes, where Stoicism was dominant, as seen in the Meditations of the Roman emperor Marcus
Aurelius and the works of Cicero. Hellenistic culture produced seats of learning throughout the Mediterranean. Hellenistic
science differed from Greek science in at least two ways: first, it benefited from the cross-fertilization of Greek
ideas with those that had developed in the larger Hellenistic world; secondly, to some extent, it was supported by
royal patrons in the kingdoms founded by Alexander's successors. Especially important to Hellenistic science was
the city of Alexandria in Egypt, which became a major center of scientific research in the 3rd century BC. Hellenistic
scholars frequently employed the principles developed in earlier Greek thought: the application of mathematics and deliberate empirical research in their scientific investigations. Hellenistic geometers such as Archimedes (c. 287
– 212 BC), Apollonius of Perga (c. 262 – c. 190 BC), and Euclid (c. 325 – 265 BC), whose Elements became the most
important textbook in mathematics until the 19th century, built upon the work of the Hellenic era Pythagoreans. Euclid
developed proofs for the Pythagorean Theorem, for the infinitude of primes, and worked on the five Platonic solids.
Eratosthenes used his knowledge of geometry to measure the circumference of the Earth. His calculation was remarkably
accurate. He was also the first to calculate the tilt of the Earth's axis (again with remarkable accuracy). Additionally,
he may have accurately calculated the distance from the Earth to the Sun and invented the leap day. Known as the
"Father of Geography", Eratosthenes also created the first map of the world incorporating parallels and meridians,
based on the available geographical knowledge of the era. Astronomers like Hipparchus (c. 190 – c. 120 BC) built
upon the measurements of the Babylonian astronomers before him to measure the precession of the equinoxes. Pliny reports
that Hipparchus produced the first systematic star catalog after he observed a new star (it is uncertain whether
this was a nova or a comet) and wished to preserve an astronomical record of the stars, so that other new stars could
be discovered. It has recently been claimed that a celestial globe based on Hipparchus's star catalog sits atop the
broad shoulders of a large 2nd-century Roman statue known as the Farnese Atlas. Another astronomer, Aristarchos of
Samos developed a heliocentric system. The level of Hellenistic achievement in astronomy and engineering is impressively
shown by the Antikythera mechanism (150–100 BC). It is a 37-gear mechanical computer which computed the motions of
the Sun and Moon, including lunar and solar eclipses predicted on the basis of astronomical periods believed to have
been learned from the Babylonians. Devices of this sort are not found again until the 10th century, when a simpler
eight-geared luni-solar calculator incorporated into an astrolabe was described by the Persian scholar Al-Biruni. Similarly complex devices were also developed by other Muslim engineers and astronomers during the Middle Ages. Medicine, which was dominated by the Hippocratic tradition, saw new advances
under Praxagoras of Kos, who theorized that blood traveled through the veins. Herophilos (335–280 BC) was the first
to base his conclusions on dissection of the human body, animal vivisection and to provide accurate descriptions
of the nervous system, liver and other key organs. Influenced by Philinus of Cos (fl. 250), a student of Herophilos,
a new medical sect emerged, the Empiric school, which was based on strict observation and rejected the unseen causes
of the Dogmatic school. Hellenistic warfare was a continuation of the military developments of Iphicrates and Philip
II of Macedon, particularly his use of the Macedonian Phalanx, a dense formation of pikemen, in conjunction with
heavy companion cavalry. Armies of the Hellenistic period differed from those of the classical period in being largely
made up of professional soldiers and also in their greater specialization and technical proficiency in siege warfare.
Hellenistic armies were significantly larger than those of classical Greece relying increasingly on Greek mercenaries
(misthophoroi; men-for-pay) and also on non-Greek soldiery such as Thracians, Galatians, Egyptians and Iranians.
Some ethnic groups were known for their martial skill in a particular mode of combat and were highly sought after,
including Tarantine cavalry, Cretan archers, Rhodian slingers and Thracian peltasts. This period also saw the adoption
of new weapons and troop types such as Thureophoroi and the Thorakitai who used the oval Thureos shield and fought
with javelins and the machaira sword. The use of heavily armored cataphracts and also horse archers was adopted by
the Seleucids, Greco-Bactrians, Armenians and Pontus. The use of war elephants also became common. Seleucus received
Indian war elephants from the Mauryan empire, and used them to good effect at the battle of Ipsus. He kept a core
of 500 of them at Apameia. The Ptolemies used the smaller African elephant. Hellenistic military equipment was generally
characterized by an increase in size. Hellenistic-era warships grew from the trireme to include more banks of oars
and larger numbers of rowers and soldiers as in the Quadrireme and Quinquereme. The Ptolemaic Tessarakonteres was
the largest ship constructed in Antiquity. New siege engines were developed during this period. An unknown engineer
developed the torsion-spring catapult (c. 360 BC), and Dionysios of Alexandria designed a repeating ballista, the Polybolos.
Preserved examples of ball projectiles range from 4.4 kg to 78 kg (or over 170 lbs). Demetrius Poliorcetes was notorious
for the large siege engines employed in his campaigns, especially during the 12-month siege of Rhodes when he had
Epimachos of Athens build a massive 160-ton siege tower named Helepolis, filled with artillery. Hellenistic art saw
a turn from the idealistic, perfected, calm and composed figures of classical Greek art to a style dominated by realism
and the depiction of emotion (pathos) and character (ethos). The motif of deceptively realistic naturalism in art
(aletheia) is reflected in stories such as that of the painter Zeuxis, who was said to have painted grapes that seemed
so real that birds came and pecked at them. The female nude also became more popular, as epitomized by Praxiteles' Aphrodite of Cnidos, and art in general became more erotic (e.g. Leda and the Swan and Scopas' Pothos). The dominant
ideals of Hellenistic art were those of sensuality and passion. People of all ages and social statuses were depicted
in the art of the Hellenistic age. Artists such as Peiraikos chose mundane and lower class subjects for his paintings.
According to Pliny, "He painted barbers' shops, cobblers' stalls, asses, eatables and similar subjects, earning for
himself the name of rhyparographos [painter of dirt/low things]. In these subjects he could give consummate pleasure,
selling them for more than other artists received for their large pictures" (Natural History, Book XXXV.112). Even
barbarians, such as the Galatians, were depicted in heroic form, prefiguring the artistic theme of the noble savage.
The image of Alexander the Great was also an important artistic theme, and all of the diadochi had themselves depicted
imitating Alexander's youthful look. A number of the best-known works of Greek sculpture belong to the Hellenistic
period, including Laocoön and his Sons, Venus de Milo, and the Winged Victory of Samothrace. Developments in painting
included experiments in chiaroscuro by Zeuxis and the development of landscape painting and still life painting.
Greek temples built during the Hellenistic period were generally larger than classical ones, such as the temple of
Artemis at Ephesus, the temple of Artemis at Sardis, and the temple of Apollo at Didyma (rebuilt by Seleucus in 300
BCE). The royal palace (basileion) also came into its own during the Hellenistic period, the first extant example
being the massive fourth-century villa of Cassander at Vergina. There has been a trend in writing the history of
this period to depict Hellenistic art as a decadent style following the Golden Age of Classical Athens. Pliny the Elder, after having described the sculpture of the classical period, says: Cessavit deinde ars ("then art disappeared"). The 18th-century terms Baroque and Rococo have sometimes been applied to the art of this complex and individual
period. The renewal of the historiographical approach as well as some recent discoveries, such as the tombs of Vergina,
allow a better appreciation of this period's artistic richness. The focus on the Hellenistic period over the course
of the 19th century by scholars and historians has led to an issue common to the study of historical periods; historians
see the period of focus as a mirror of the period in which they are living. Many 19th century scholars contended
that the Hellenistic period represented a cultural decline from the brilliance of classical Greece. Though this comparison
is now seen as unfair and meaningless, it has been noted that even commentators of the time saw the end of a cultural
era which could not be matched again. This may be inextricably linked with the nature of government. It has been
noted by Herodotus that Athens prospered after the establishment of the Athenian democracy. William Woodthorpe Tarn, writing between World War I and World War II, in the heyday of the League of Nations, focused on the issues of racial and cultural
confrontation and the nature of colonial rule. Michael Rostovtzeff, who fled the Russian Revolution, concentrated
predominantly on the rise of the capitalist bourgeoisie in areas of Greek rule. Arnaldo Momigliano, an Italian Jew
who wrote before and after the Second World War, studied the problem of mutual understanding between races in the
conquered areas. Moses Hadas portrayed an optimistic picture of synthesis of culture from the perspective of the
1950s, while Frank William Walbank in the 1960s and 1970s had a materialistic approach to the Hellenistic period,
focusing mainly on class relations. Recently, however, papyrologist C. Préaux has concentrated predominantly on the
economic system, interactions between kings and cities and provides a generally pessimistic view on the period. Peter
Green, on the other hand, writes from the point of view of late 20th century liberalism, his focus being on individualism,
the breakdown of convention, experiments and a postmodern disillusionment with all institutions and political processes.

Bill & Melinda Gates Foundation (or the Gates Foundation, abbreviated as BMGF) is the largest private foundation in the world,
founded by Bill and Melinda Gates. It was launched in 2000 and is said to be the largest transparently operated private
foundation in the world. The primary aims of the foundation are, globally, to enhance healthcare and reduce extreme
poverty, and in America, to expand educational opportunities and access to information technology. The foundation,
based in Seattle, Washington, is controlled by its three trustees: Bill Gates, Melinda Gates and Warren Buffett.
Other principal officers include Co-Chair William H. Gates, Sr. and Chief Executive Officer Susan Desmond-Hellmann.
On June 25, 2006, Warren Buffett (then the world's richest person, with an estimated worth of US$62 billion as of April 16,
2008) pledged to give the foundation approximately 10 million Berkshire Hathaway Class B shares spread over multiple
years through annual contributions, with the first year's donation of 500,000 shares being worth approximately US$1.5
billion. Buffett set conditions so that these contributions do not simply increase the foundation's endowment, but
effectively work as a matching contribution, doubling the Foundation's annual giving: "Buffett's gift came with three
conditions for the Gates foundation: Bill or Melinda Gates must be alive and active in its administration; it must
continue to qualify as a charity; and each year it must give away an amount equal to the previous year's Berkshire
gift, plus an additional amount equal to 5 percent of net assets. Buffett gave the foundation two years to abide
by the third requirement." The Gates Foundation received 5% (500,000) of the shares in July 2006 and will receive
5% of the remaining earmarked shares in July of each following year (475,000 in 2007, 451,250 in 2008). In July 2013, Buffett announced another donation of his company's Class B shares, this time worth $2 billion, to the Bill and Melinda Gates Foundation. In November 2014, the Bill and Melinda Gates Foundation announced that it was adopting an open access (OA) policy for publications and data, "to enable the unrestricted access and reuse
of all peer-reviewed published research funded by the foundation, including any underlying data sets". This move
has been widely applauded by those who are working in the area of capacity building and knowledge sharing.[citation
needed] Its terms have been called the most stringent among similar OA policies. As of January 1, 2015, the Open
Access policy is effective for all new agreements. The foundation explains on its website that its trustees divided
the organization into two entities: the Bill & Melinda Gates Foundation (foundation) and the Bill & Melinda Gates
Foundation Trust (trust). The foundation section, based in Seattle, US, "focuses on improving health and alleviating
extreme poverty," and its trustees are Bill and Melinda Gates and Warren Buffett. The trust section manages "the
investment assets and transfer proceeds to the foundation as necessary to achieve the foundation's charitable goals"—it
holds the assets of Bill and Melinda Gates, who are the sole trustees, and receives contributions from Buffett. The
foundation trust invests undistributed assets, with the exclusive goal of maximizing the return on investment. As
a result, its investments include companies that have been criticized for worsening poverty in the same developing
countries where the foundation is attempting to relieve poverty. These include companies that pollute heavily and
pharmaceutical companies that do not sell into the developing world. In response to press criticism, the foundation
announced in 2007 a review of its investments to assess social responsibility. It subsequently cancelled the review
and stood by its policy of investing for maximum return, while using voting rights to influence company practices.
In March 2006, the foundation announced a US$5 million grant for the International Justice Mission (IJM), a human
rights organization based in Washington, D.C., US to work in the area of sex trafficking. The official announcement
explained that the grant would allow the IJM to "create a replicable model for combating sex trafficking and slavery"
that would involve the opening of an office in a region with high rates of sex trafficking, following research. The
office was opened for three years for the following purposes: "conducting undercover investigations, training law
enforcement, rescuing victims, ensuring appropriate aftercare, and seeking perpetrator accountability". The IJM used
the grant money to found "Project Lantern" and established an office in the Philippine city of Cebu. In 2010 the
results of the project were published, in which the IJM stated that Project Lantern had led to "an increase in law
enforcement activity in sex trafficking cases, an increase in commitment to resolving sex trafficking cases among
law enforcement officers trained through the project, and an increase in services – like shelter, counseling and
career training – provided to trafficking survivors". At the time that the results were released, the IJM was exploring
opportunities to replicate the model in other regions. The Water, Sanitation and Hygiene (WSH) program of the Gates
Foundation was launched in mid-2005 as a "Learning Initiative," and became a full-fledged program under the Global
Development Division in early 2010. Since 2005, the Foundation has undertaken a wide range of efforts in the WASH
sector involving research, experimentation, reflection, advocacy, and field implementation. In 2009, the Foundation
decided to refocus its WASH effort mainly on sustainable sanitation services for the poor, using non-piped sanitation
services (i.e. without the use of sewers), and less on water supply. This was because the sanitation sector was generally
receiving less attention from other donors and from governments, and because the Foundation believed it had the potential
to make a real difference through strategic investments. In mid-2011, the Foundation announced in its new "Water,
Sanitation, Hygiene Strategy Overview" that its funding now focuses primarily on sanitation, particularly in sub-Saharan
Africa and South Asia, because access to improved sanitation is lowest in those regions. Their grant-making focus
has been since 2011 on sanitation science and technology ("transformative technologies"), delivery models at scale,
urban sanitation markets, building demand for sanitation, measurement and evaluation as well as policy, advocacy
and communications. Improved sanitation in the developing world is a global need, but a neglected priority as shown
by the data collected by the Joint Monitoring Programme for Water Supply and Sanitation (JMP) of UNICEF and WHO.
This program is tasked with monitoring progress towards the Millennium Development Goal (MDG) relating to drinking water
and sanitation. About one billion people have no sanitation facility whatsoever and continue to defecate in gutters,
behind bushes or in open water bodies, with no dignity or privacy - which is called open defecation and which poses
significant health risks. India is the country with the highest number of people practicing open defecation: around
600 million people. India has also become a focus country for the foundation's sanitation activities, as has been
evident since the "Reinvent the Toilet Fair" held in Delhi in March 2014. In 2011, the foundation launched a program
called "Reinvent the Toilet Challenge" with the aim to promote the development of innovations in toilet design to
benefit the 2.5 billion people who do not have access to safe and effective sanitation. This program has generated
significant interest from the mainstream media. It was complemented by a program called "Grand Challenges Explorations"
(2011 to 2013 with some follow-up grants reaching until 2015) which involved grants of US$100,000 each in the first
round. Both funding schemes explicitly excluded project ideas that relied on centralized sewerage systems or were
not compatible with developing-country contexts. Since the launch of the "Reinvent the Toilet Challenge", more than
a dozen research teams, mainly at universities in the U.S., Europe, India, China and South Africa, have received
grants to develop innovative on-site and off-site waste treatment solutions for the urban poor. The grants were on
the order of US$400,000 for the first phase, typically followed by US$1–3 million for the second phase; many
of them investigated resource recovery or processing technologies for excreta or fecal sludge. The Reinvent the Toilet
Challenge is a long-term research and development effort to develop a hygienic, stand-alone toilet. This challenge
is being complemented by another investment program to develop new technologies for improved pit latrine emptying
(called by the foundation the "Omni-Ingestor") and fecal sludge processing (called "Omni-Processor"). The aim of
the "Omni Processor" is to convert excreta (for example fecal sludge) into beneficial products such as energy and
soil nutrients with the potential to develop local business and revenue. The foundation has donated billions of dollars
to help sufferers of AIDS, tuberculosis and malaria, protecting millions of children from death at the hands of preventable
diseases. However, a 2007 investigation by The Los Angeles Times claimed there are three major unintended consequences
with the foundation's allocation of aid. First, sub-Saharan Africa already suffered from a shortage of primary doctors
before the arrival of the Gates Foundation, but "by pouring most contributions into the fight against such high-profile
killers as AIDS, Gates grantees have increased the demand for specially trained, higher-paid clinicians, diverting
staff from basic care" in sub-Saharan Africa. This "brain drain" adds to the existing doctor shortage and pulls away
additional trained staff from children and those suffering from other common killers. Second, "the focus on a few
diseases has shortchanged basic needs such as nutrition and transportation". Third, "Gates-funded vaccination programs
have instructed caregivers to ignore – even discourage patients from discussing – ailments that the vaccinations
cannot prevent". Melinda Gates has stated that the foundation "has decided not to fund abortion". In response to
questions about this decision, Gates stated in a June 2014 blog post that she "struggle[s] with the issue" and that
"the emotional and personal debate about abortion is threatening to get in the way of the lifesaving consensus regarding
basic family planning". Up to 2013, the Bill & Melinda Gates Foundation provided $71 million to Planned Parenthood,
the primary U.S. abortion provider, and affiliated organizations. In 1997, the charity introduced a U.S. Libraries
initiative with a goal of "ensuring that if you can get to a public library, you can reach the internet". Only 35%
of the world's population has access to the Internet. The foundation has given grants, installed computers and software,
and provided training and technical support in partnership with public libraries nationwide in an effort to increase
access and knowledge. By providing access to and training for these resources, the foundation helps move public
libraries into the digital age. A key aspect of the Gates Foundation's U.S. efforts involves an overhaul of the country's
education policies at both the K-12 and college levels, including support for teacher evaluations and charter schools
and opposition to seniority-based layoffs and other aspects of the education system that are typically backed by
teachers' unions. It spent $373 million on education in 2009. It has also donated to the two largest national teachers'
unions. The foundation was the biggest early backer of the Common Core State Standards Initiative. One of the foundation's
goals is to lower poverty by increasing the number of college graduates in the United States, and the organization
has funded "Reimagining Aid Design and Delivery" grants to think tanks and advocacy organizations to produce white
papers on ideas for changing the current system of federal financial aid for college students, with a goal of increasing
graduation rates. One of the ways the foundation has sought to increase the number of college graduates is to get
them through college faster, but that idea has received some pushback from organizations of universities and colleges.
As part of its education-related initiatives, the foundation has funded journalists, think tanks, lobbying organizations
and governments. Millions of dollars of grants to news organizations have funded reporting on education and higher
education, including more than $1.4 million to the Education Writers Association to fund training for journalists
who cover education. While some critics have worried that the foundation is directing the conversation on education or
pushing its point of view through news coverage, the foundation has said it lists all its grants publicly and does
not enforce any rules for content among its grantees, who have editorial independence. Union activists in Chicago
have accused Gates Foundation grantee Teach Plus, which was founded by new teachers and advocates against seniority-based
layoffs, of "astroturfing". The K-12 and higher education reform programs of the Gates Foundation have been criticized
by some education professionals, parents, and researchers because they have driven the conversation on education
reform to such an extent that they may marginalize researchers who do not support Gates' predetermined policy preferences.
Several Gates-backed policies such as small schools, charter schools, and increasing class sizes have been expensive
and disruptive, but some studies indicate they have not improved educational outcomes and may have caused harm.
Peer-reviewed studies at Stanford have found that charter schools do not systematically improve student performance.
In October 2006, the Bill & Melinda Gates Foundation was split into two entities: the Bill & Melinda Gates Foundation
Trust, which manages the endowment assets and the Bill & Melinda Gates Foundation, which "... conducts all operations
and grantmaking work, and it is the entity from which all grants are made". Also announced was the decision to "...
spend all of [the Trust's] resources within 20 years after Bill's and Melinda's deaths". This would close the Bill
& Melinda Gates Foundation Trust and effectively end the Bill & Melinda Gates Foundation. In the same announcement
it was reiterated that Warren Buffett "... has stipulated that the proceeds from the Berkshire Hathaway shares he
still owns at death are to be used for philanthropic purposes within 10 years after his estate has been settled".
Montevideo is classified as a Beta World City, ranking seventh in Latin America and 73rd in the world. Described as a "vibrant, eclectic
place with a rich cultural life", and "a thriving tech center and entrepreneurial culture", Montevideo ranks 8th
in Latin America on the 2013 MasterCard Global Destination Cities Index. By 2014, it was also regarded as the fifth most
gay-friendly major city in the world, first in Latin America. It is the hub of commerce and higher education in Uruguay
as well as its chief port. The city is also the financial and cultural hub of a larger metropolitan area, with a
population of around 2 million. A Spanish expedition was sent from Buenos Aires, organized by the Spanish governor
of that city, Bruno Mauricio de Zabala. On 22 January 1724, the Spanish forced the Portuguese to abandon the location
and started populating the city, initially with six families moving in from Buenos Aires and soon thereafter by families
arriving from the Canary Islands who were called by the locals "guanches", "guanchos" or "canarios". There was also
one significant early Italian resident by the name of Jorge Burgues. A few years after its foundation, Montevideo
became the main city of the region north of the Río de la Plata and east of the Uruguay River, competing with Buenos
Aires for dominance in maritime commerce. The importance of Montevideo as the main port of the Viceroyalty of the
Río de la Plata brought it into confrontation with the city of Buenos Aires on various occasions, including several
times when it was taken over to be used as a base to defend the eastern province of the Viceroyalty from Portuguese
incursions. On 3 February 1807, British troops under the command of General Samuel Auchmuty and Admiral Charles Stirling
occupied the city during the Battle of Montevideo (1807), but it was recaptured by the Spanish in the same year on
2 September when John Whitelocke was forced to surrender to troops formed by forces of the Banda Oriental—roughly
the same area as modern Uruguay—and of Buenos Aires. After this conflict, the governor of Montevideo Francisco Javier
de Elío opposed the new viceroy Santiago de Liniers, and created a government Junta when the Peninsular War started
in Spain, in defiance of Liniers. Elío disestablished the Junta when Liniers was replaced by Baltasar Hidalgo de
Cisneros. During the May Revolution of 1810 and the subsequent uprising of the provinces of Rio de la Plata, the
Spanish colonial government moved to Montevideo. During that year and the next, Uruguayan revolutionary José Gervasio
Artigas united with others from Buenos Aires against Spain. In 1811, the forces deployed by the Junta Grande of Buenos
Aires and the gaucho forces led by Artigas started a siege of Montevideo, which had refused to obey the directives
of the new authorities of the May Revolution. The siege was lifted at the end of that year, when the military situation
started deteriorating in the Upper Peru region. The Spanish governor was expelled in 1814. In 1816, Portugal invaded
the recently liberated territory and in 1821, it was annexed to the Banda Oriental of Brazil. Juan Antonio Lavalleja
and his band called the Treinta y Tres Orientales ("Thirty-Three Orientals") re-established the independence of the
region in 1825. Uruguay was consolidated as an independent state in 1828, with Montevideo as the nation's capital.
In 1829, the demolition of the city's fortifications began and plans were made for an extension beyond the Ciudad
Vieja, referred to as the "Ciudad Nueva" ("new city"). Urban expansion, however, moved very slowly because of the
events that followed. Uruguay's 1830s were dominated by the confrontation between Manuel Oribe and Fructuoso Rivera,
the two revolutionary leaders who had fought against the Empire of Brazil under the command of Lavalleja, each of
whom had become the caudillo of their respective faction. Politics were divided between Oribe's Blancos ("whites"),
represented by the National Party, and Rivera's Colorados ("reds"), represented by the Colorado Party, with each
party's name taken from the colour of its emblems. In 1838, Oribe was forced to resign the presidency; he established
a rebel army and began a long civil war, the Guerra Grande, which lasted until 1851. The city of Montevideo suffered
a siege of eight years between 1843 and 1851, during which it was supplied by sea with British and French support.
Oribe, with the support of the then conservative Governor of Buenos Aires Province Juan Manuel de Rosas, besieged
the Colorados in Montevideo, where the latter were supported by the French Legion, the Italian Legion, the Basque
Legion and battalions from Brazil. Finally, in 1851, with the additional support of Argentine rebels who opposed
Rosas, the Colorados defeated Oribe. The fighting, however, resumed in 1855, when the Blancos came to power, which
they maintained until 1865. Thereafter, the Colorado Party regained power, which they retained until past the middle
of the 20th century. After the end of hostilities, a period of growth and expansion started for the city. In 1853
a stagecoach bus line was established joining Montevideo with the newly formed settlement of Unión and the first
natural gas street lights were inaugurated.[citation needed] From 1854 to 1861 the first public sanitation facilities
were constructed. In 1856 the Teatro Solís was inaugurated, 15 years after the beginning of its construction. By
decree, in December 1861, the areas of Aguada and Cordón were incorporated into the growing Ciudad Nueva (New City).
In 1866, an underwater telegraph line connected the city with Buenos Aires. The statue of Peace, La Paz, was erected
on a column in Plaza Cagancha and the building of the Postal Service as well as the bridge of Paso Molino were inaugurated
in 1867. In 1868, the horse-drawn tram company Compañía de Tranvías al Paso del Molino y Cerro created the first
lines connecting Montevideo with Unión, the beach resort of Capurro and the industrialized and economically independent
Villa del Cerro, at the time called Cosmopolis. In the same year, the Mercado del Puerto was inaugurated. In 1869,
the first railway line of the company Ferrocarril Central del Uruguay was inaugurated connecting Bella Vista with
the town of Las Piedras. During the same year and the next, the neighbourhoods Colón, Nuevo París and La Comercial
were founded. The Sunday market of Tristán Narvaja Street, famous to this day, was established in Cordón in 1870. Public
water supply was established in 1871. In 1878, Bulevar Circunvalación was constructed, a boulevard starting from
Punta Carretas, going up to the north end of the city and then turning west to end at the beach of Capurro. It was
renamed Artigas Boulevard (its current name) in 1885. By decree, on 8 January 1881, the area of Los Pocitos was incorporated
into the Novísima Ciudad (Newest City). The first telephone lines were installed in 1882 and electric street lights
took the place of the gas operated ones in 1886. The Hipódromo de Maroñas started operating in 1888, and the neighbourhoods
of Reus del Sur, Reus del Norte and Conciliación were inaugurated in 1889. The new building of the School of Arts
and Trades, as well as Zabala Square in Ciudad Vieja were inaugurated in 1890, followed by the Italian Hospital in
1891. In the same year, the village of Peñarol was founded. Other neighbourhoods that were founded were Belgrano
and Belvedere in 1892, Jacinto Vera in 1895 and Trouville in 1897. In 1894 the new port was constructed, and in 1897,
the Central Railway Station of Montevideo was inaugurated. In the early 20th century, many Europeans (particularly
Spaniards and Italians but also thousands from Central Europe) immigrated to the city. In 1908, 30% of the city's
population of 300,000 was foreign-born. In that decade the city expanded quickly: new neighbourhoods were created
and many separate settlements were annexed to the city, among which were the Villa del Cerro, Pocitos, the Prado
and Villa Colón. The Rodó Park and the Estadio Gran Parque Central were also established, which served as poles of
urban development. During World War II, a famous incident involving the German pocket battleship Admiral Graf Spee
took place in Punta del Este, 200 kilometers (120 mi) from Montevideo. After the Battle of the River Plate with the
Royal Navy and Royal New Zealand Navy on 13 December 1939, the Graf Spee retreated to Montevideo's port, which was
considered neutral at the time. To avoid risking the crew in what he thought would be a losing battle, Captain Hans
Langsdorff scuttled the ship on 17 December. Langsdorff committed suicide two days later.[citation needed] The eagle
figurehead of the Graf Spee was salvaged on 10 February 2006; to protect the feelings of those still sensitive to
Nazi Germany, the swastika on the figurehead was covered as it was pulled from the water. Montevideo is situated
on the north shore of the Río de la Plata, the arm of the Atlantic Ocean that separates the south coast of Uruguay
from the north coast of Argentina; Buenos Aires lies 230 kilometres (140 mi) west on the Argentine side. The Santa
Lucía River forms a natural border between Montevideo and San José Department to its west. To the city's north and
east is Canelones Department, with the stream of Carrasco forming the eastern natural border. The coastline forming
the city's southern border is interspersed with rocky protrusions and sandy beaches. The Bay of Montevideo forms
a natural harbour, the nation's largest and one of the largest in the Southern Cone, and the finest natural port
in the region, functioning as a crucial component of the Uruguayan economy and foreign trade. Various streams criss-cross
the town and empty into the Bay of Montevideo. The coastline and rivers are heavily polluted and of high salinity.
The city has an average elevation of 43 metres (141 ft). Its highest elevations are two hills: the Cerro de Montevideo
and the Cerro de la Victoria, with the highest point, the peak of Cerro de Montevideo, crowned by a fortress, the
Fortaleza del Cerro at a height of 134 metres (440 ft). Closest cities by road are Las Piedras to the north and the
so-called Ciudad de la Costa (a conglomeration of coastal towns) to the east, both in the range of 20 to 25 kilometres
(12 to 16 mi) from the city center. The approximate distances to the neighbouring department capitals by road are 90 kilometres
(56 mi) to San Jose de Mayo (San Jose Department) and 46 kilometres (29 mi) to Canelones (Canelones Department).
The Municipality of Montevideo was first created by a legal act of 18 December 1908. The municipality's first mayor
(1909–1911) was Daniel Muñoz. Municipalities were abolished by the Uruguayan Constitution of 1918, effectively restored
during the 1933 military coup of Gabriel Terra, and formally restored by the 1934 Constitution. The 1952 Constitution
again decided to abolish the municipalities; it came into effect in February 1955. Municipalities were replaced by
departmental councils, which consisted of a collegiate executive board with 7 members from Montevideo and 5 from
the interior region. However, municipalities were revived under the 1967 Constitution and have operated continuously
since that time. As of 2010, the city of Montevideo has been divided into 8 political municipalities (Municipios),
referred to by the letters A through G, plus CH, each presided over by a mayor elected by the citizens registered
in the constituency. This division, according to the Municipality of Montevideo, "aims to advance political and administrative
decentralization in the department of Montevideo, with the aim of deepening the democratic participation of citizens
in governance." The head of each Municipio is called an alcalde or (if female) alcaldesa. Of much greater importance
is the division of the city into 62 barrios: neighbourhoods or wards. Many of the city's barrios—such as Sayago,
Ituzaingó and Pocitos—were previously geographically separate settlements, later absorbed by the growth of the city.
Others grew up around certain industrial sites, including the salt-curing works of Villa del Cerro and the tanneries
in Nuevo París. Each barrio has its own identity, geographic location and socio-cultural activities. A neighbourhood
of great significance is Ciudad Vieja, which was surrounded by a protective wall until 1829. This area contains most
of the important buildings of the colonial era and early decades of independence. In 1860, Montevideo had 57,913 inhabitants
including a number of people of African origin who had been brought as slaves and had gained their freedom around
the middle of the century. By 1880, the population had quadrupled, mainly because of the great European immigration.
In 1908, its population had grown massively to 309,331 inhabitants. In the course of the 20th century the city continued
to receive large numbers of European immigrants, especially Spaniards and Italians, followed by French, Germans, Dutch,
English, Irish, Poles, Greeks, Hungarians, Russians, Croats, Lebanese, Armenians, and Jews of various origins.
The last wave of immigrants occurred between 1945 and 1955. According to the census survey carried out between 15
June and 31 July 2004, Montevideo had a population of 1,325,968 persons, compared to Uruguay's total population of
3,241,003. The female population was 707,697 (53.4%) while the male population accounted for 618,271 (46.6%). The
population had declined since the previous census carried out in 1996, with an average annual growth rate of −1.5
per thousand. Continual decline has been documented since the census period of 1975–1985, which showed a rate of
−5.6 per thousand. The decrease is due in large part to lowered fertility, partly offset by mortality, and to a smaller
degree in migration. The birth rate declined by 19% from 1996 (17 per thousand) to 2004 (13.8 per thousand). Similarly,
the total fertility rate (TFR) declined from 2.24 in 1996 to 1.79 in 2004. However, mortality continued to fall with
life expectancy at birth for both sexes increasing by 1.73 years. As the capital of Uruguay, Montevideo is the economic
and political centre of the country. Most of the largest and wealthiest businesses in Uruguay have their headquarters
in the city. Since the 1990s the city has undergone rapid economic development and modernization, including the
construction of two of Uruguay's most important buildings, the World Trade Center Montevideo (1998) and the
Telecommunications Tower (2000), headquarters of Uruguay's government-owned telecommunications company ANTEL,
which have increased the city's integration
into the global marketplace. The most important state-owned companies headquartered in Montevideo are: AFE (railways),
ANCAP (energy), Administración Nacional de Puertos (ports), ANTEL (telecommunications), BHU (savings and loan), BROU
(bank), BSE (insurance), OSE (water & sewage), UTE (electricity). These companies operate under public law, using
a legal entity defined in the Uruguayan Constitution called Ente Autonomo ("autonomous entity"). The government also
owns part of other companies operating under private law, such as those owned wholly or partially by the CND (National
Development Corporation). Banking has traditionally been one of the strongest service export sectors in Uruguay:
the country was once dubbed "the Switzerland of America", mainly for its banking sector and stability, although that
stability has been threatened in the 21st century by the recent global economic climate. The largest bank in Uruguay
is Banco Republica (BROU), based in Montevideo. Almost 20 private banks, most of them branches of international banks,
operate in the country (Banco Santander, ABN AMRO, Citibank, Lloyds TSB, among others). There is also a myriad of
brokers and financial-services bureaus, among them Ficus Capital, Galfin Sociedad de Bolsa, Europa Sociedad de Bolsa,
Darío Cukier, GBU, Hordeñana & Asociados Sociedad de Bolsa, etc. Tourism accounts for much of Uruguay's economy.
Tourism in Montevideo is centered in the Ciudad Vieja area, which includes the city's oldest buildings, several museums,
art galleries, and nightclubs, with Sarandí Street and the Mercado del Puerto being the most frequented venues of
the old city. On the edge of Ciudad Vieja, Plaza Independencia is surrounded by many sights, including the Solís
Theatre and the Palacio Salvo; the plaza also constitutes one end of 18 de Julio Avenue, the city's most important
tourist destination outside of Ciudad Vieja. Apart from being a shopping street, the avenue is noted for its Art
Deco buildings, three important public squares, the Gaucho Museum, the Palacio Municipal and many other sights. The
avenue leads to the Obelisk of Montevideo; beyond that is Parque Batlle, which along with the Parque Prado is another
important tourist destination. Along the coast, the Fortaleza del Cerro, the Rambla (the coastal avenue), 13 kilometres
(8.1 mi) of sandy beaches, and Punta Gorda attract many tourists, as do the Barrio Sur and Palermo barrios. Montevideo
has over 50 hotels, mostly located within the downtown area or along the beachfront of the Rambla de Montevideo.
Many of the hotels are in the modern, western style, such as the Sheraton Montevideo, the Radisson Montevideo Victoria
Plaza Hotel located on the central Plaza Independencia, and the Plaza Fuerte Hotel on the waterfront. The Sheraton
has 207 guest rooms and 10 suites and is luxuriously furnished with imported furniture. The Radisson Montevideo has
232 rooms and contains a casino and is served by the Restaurante Arcadia. Montevideo is the heartland of retailing
in Uruguay. The city has become the principal centre of business and real estate, including many expensive buildings
and modern towers for residences and offices, surrounded by extensive green spaces. In 1985, the first shopping centre
in the Río de la Plata region, Montevideo Shopping, was built. In 1994, with the building of three more shopping complexes,
Shopping Tres Cruces, Portones Shopping, and Punta Carretas Shopping, the business map of the city changed dramatically.
The creation of shopping complexes brought a major change in the habits of the people of Montevideo. Global firms
such as McDonald's and Burger King are firmly established in Montevideo. The architecture of Montevideo ranges
from Neoclassical buildings such as the Montevideo Metropolitan Cathedral to the Postmodern style of the World Trade
Center Montevideo or the 158-metre (518 ft) ANTEL Telecommunication Tower, the tallest skyscraper in the country.
Along with the Telecommunications Tower, the Palacio Salvo dominates the skyline of the Bay of Montevideo. The
building façades in the Old Town reflect the city's extensive European immigration, displaying the influence of old
European architecture. Notable government buildings include the Legislative Palace, the City Hall, Estévez Palace
and the Executive Tower. The most notable sports stadium is the Estadio Centenario within Parque Batlle. Parque Batlle,
Parque Rodó and Parque Prado are Montevideo's three great parks. The Pocitos district, near the beach of the same
name, has many homes built by Bello and Reboratti between 1920 and 1940, with a mixture of styles. Other landmarks
in Pocitos are the "Edificio Panamericano" designed by Raul Sichero, and the "Positano" and "El Pilar" designed by
Adolfo Sommer Smith and Luis García Pardo in the 1950s and 1960s. However, the construction boom of the 1970s and
1980s transformed the face of this neighbourhood, with a cluster of modern apartment buildings for upper and upper
middle class residents. World Trade Center Montevideo officially opened in 1998, although work is still ongoing as of 2010. The complex is composed of three towers, two three-story buildings called World
Trade Center Plaza and World Trade Center Avenue and a large central square called Towers Square. World Trade Center
1 was the first building to be inaugurated, in 1998. It has 22 floors and 17,100 square metres of
space. That same year the avenue and the auditorium were raised. World Trade Center 2, a twin tower of World Trade Center 1, was inaugurated in 2002. Finally, in 2009, World Trade Center 3, the World Trade Center Plaza and Towers Square were inaugurated. World Trade Center 3 is located between the avenues Luis Alberto de Herrera and 26 de Marzo and has
19 floors and 27,000 square metres (290,000 sq ft) of space. The 6,300-square-metre (68,000 sq ft)
World Trade Center Plaza is designed to be a centre of gastronomy opposite Towers Square and Bonavita St. Among the
establishments on the plaza are Burger King, Walrus, Bamboo, Asia de Cuba, Gardenia Mvd, and La Claraboya Cafe. The
Towers Square is an area of remarkable aesthetic design, intended as a platform for business activities, art exhibitions, dance and music performances and social gatherings. This square connects the different buildings
and towers which comprise the WTC Complex and it is the main access to the complex. The square contains various works
of art, notably a sculpture by renowned Uruguayan sculptor Pablo Atchugarry. World Trade Center 4, with 40 floors
and 53,500 square metres (576,000 sq ft) of space is under construction as of 2010. The
Solís Theatre is Uruguay's oldest theatre. It was built in 1856 and is currently owned by the government of Montevideo.
In 1998, the government of Montevideo started a major reconstruction of the theatre, which included two US$110,000
columns designed by Philippe Starck. The reconstruction was completed in 2004, and the theatre reopened in August
of that year. Plaza Independencia is also the site of the offices of the President of Uruguay (both the Estévez Palace and
the Executive Tower). The Artigas Mausoleum is located at the centre of the plaza. Statues include that of José Gervasio
Artigas, hero of Uruguay's independence movement; an honour guard keeps vigil at the Mausoleum. Palacio Salvo, at
the intersection of 18 de Julio Avenue and Plaza Independencia, was designed by the architect Mario Palanti and completed
in 1925. Palanti, an Italian immigrant living in Buenos Aires, used a similar design for his Palacio Barolo in Buenos
Aires, Argentina. Palacio Salvo stands 100 metres (330 ft) high, including its antenna. It is built on the former
site of the Confitería La Giralda, renowned for being where Gerardo Matos Rodríguez wrote his tango "La Cumparsita"
(1917). Palacio Salvo was originally intended to function as a hotel but is now a mixture of offices and private
residences. Also of major note in Ciudad Vieja is the Plaza de la Constitución (or Plaza Matriz). During the first
decades of Uruguayan independence this square was the main hub of city life. On the square are the Cabildo—the seat
of colonial government—and the Montevideo Metropolitan Cathedral. The cathedral is the burial place of Fructuoso
Rivera, Juan Antonio Lavalleja and Venancio Flores. Another notable square is Plaza Zabala with the equestrian statue
of Bruno Mauricio de Zabala. On its south side, Palacio Taranco, once residence of the Ortiz Taranco brothers, is
now the Museum of Decorative Arts. A few blocks northwest of Plaza Zabala is the Mercado del Puerto, another major
tourist destination. Parque Batlle (formerly: Parque de los Aliados, translation: "Park of the Allies") is a major
public central park, located south of Avenida Italia and north of Avenue Rivera. Along with Parque Prado and Parque
Rodó it is one of three large parks that dominate Montevideo. The park and surrounding area constitute one of the
62 neighbourhoods (barrios) of the city. The barrio of Parque Batlle is one of seven coastal barrios, the others
being Buceo, Carrasco, Malvin, Pocitos, Punta Carretas, and Punta Gorda. The current barrio of Parque Batlle includes
four former districts: Belgrano, Italiano, Villa Dolores and Batlle Park itself and borders the neighbourhoods of
La Blanqueada, Tres Cruces, Pocitos and Buceo. It has a high population density and most of its households are of
medium-high- or high-income. Villa Dolores, a subdistrict of Parque Batlle, took its name from the original villa
of Don Alejo Rossell y Rius and of Doña Dolores Pereira de Rossel. On their grounds, they started a private collection
of animals that became a zoological garden and was passed to the city in 1919; in 1955 the Planetarium of Montevideo
was built within its premises. Parque Batlle is named in honour of José Batlle y Ordóñez, President of Uruguay from
1911 to 1915. The park was originally proposed by an Act of March 1907, which also projected wide boulevards and
avenues. French landscape architect, Carlos Thays, began the plantings in 1911. In 1918, the park was named Parque
de los Aliados, following the victory of the Allies of World War I. On 5 May 1930, after significant expansion, it
was again renamed as Parque Batlle y Ordóñez, in memory of the prominent politician and president, who had died in
1929. The park was designated a National Historic Monument Park in 1975. As of 2010, the park covers an area
of 60 hectares (150 acres) and is considered the "lung" of Montevideo due to the large variety of trees
planted here. Parque Rodó is both a barrio (neighbourhood) of Montevideo and a park which lies mostly outside the
limits of the neighbourhood itself and belongs to Punta Carretas. The name "Rodó" commemorates José Enrique Rodó,
an important Uruguayan writer whose monument is in the southern side of the main park. The park was conceived as
a French-style city park. Apart from the main park area which is delimited by Sarmiento Avenue to the south, Parque
Rodó includes an amusement park; the Estadio Luis Franzini, belonging to Defensor Sporting; the front lawn of the
Faculty of Engineering and a strip west of the Club de Golf de Punta Carretas that includes the Canteras ("quarry")
del Parque Rodó, the Teatro de Verano ("summer theatre") and the Lago ("lake") del Parque Rodó. On the east side
of the main park area is the National Museum of Visual Arts. On this side, a very popular street market takes place
every Sunday. On the north side is an artificial lake with a little castle housing a municipal library for children.
An area to its west is used as an open-air exhibition of photography. West of the park, across the coastal avenue
Rambla Presidente Wilson, stretches Ramirez Beach. Directly west of the main park, and belonging to the Parque Rodó barrio, is the former Parque Hotel, now called Edificio Mercosur, seat of the parliament of the member countries of Mercosur. During the guerrilla war the Tupamaros frequently attacked buildings in this area, including the
old hotel. The first set of subsidiary forts were planned by the Portuguese at Montevideo in 1701 to establish a
front line base to stop frequent insurrections by the Spaniards emanating from Buenos Aires. These fortifications
were planned within the River Plate estuary at Colonia del Sacramento. However, this plan came to fruition only in
November 1723, when Captain Manuel Henriques de Noronha reached the shores of Montevideo with soldiers, guns and
colonists on his warship Nossa Senhora de Oliveara. They built a small square fortification. However, under siege
from forces from Buenos Aires, the Portuguese withdrew from Montevideo Bay in January 1724, after signing an agreement
with the Spaniards. Fortaleza del Cerro overlooks the bay of Montevideo. An observation post at this location was
first built by the Spanish in the late 18th century. In 1802, a beacon replaced the observation post; construction
of the fortress began in 1809 and was completed in 1839. It has been involved in many historical developments and
has been repeatedly taken over by various sides. In 1907, the old beacon was replaced with a stronger electric one.
It has been a National Monument since 1931 and has housed a military museum since 1916. Today it is one of the tourist
attractions of Montevideo. The Rambla is an avenue that goes along the entire coastline of Montevideo. The literal
meaning of the Spanish word rambla is "avenue" or "watercourse", but in the Americas it is mostly used to mean "coastal"
avenue", and since all the southern departments of Uruguay border either the Río de la Plata or the Atlantic Ocean,
they all have ramblas as well. As an integral part of Montevidean identity, the Rambla has been included by Uruguay
in the Indicative List of World Heritage sites, though it has not received this status. Previously, the entire Rambla
was called Rambla Naciones Unidas ("United Nations"), but in recent times different names have been given to specific
parts of it. The largest cemetery is the Cementerio del Norte, located in the northern-central part of the city.
The Central Cemetery (Spanish: Cementerio central), located in Barrio Sur in the southern area of the city, is one
of Uruguay's main cemeteries. It was one of the first cemeteries (in contrast to church graveyards) in the country,
founded in 1835 in a time where burials were still carried out by the Catholic Church. It is the burial place of
many of the most famous Uruguayans, such as Eduardo Acevedo, Delmira Agustini, Luis Batlle Berres, José Batlle y
Ordóñez, Juan Manuel Blanes, François Ducasse, father of Comte de Lautréamont (Isidore Ducasse), Luis Alberto de
Herrera, Benito Nardone, José Enrique Rodó, and Juan Zorrilla de San Martín. The other large cemeteries are the Cementerio
del Buceo, Cementerio del Cerro, and Cementerio Paso Molino. The British Cemetery Montevideo (Cementerio Británico)
is another of the oldest cemeteries in Uruguay, located in the Buceo neighborhood. Many noblemen and eminent persons
are buried there. The cemetery originated when the Englishman Mr. Thomas Samuel Hood purchased a plot of land in
the name of the English residents in 1828. However, in 1884 the government compensated the British by moving the
cemetery to Buceo to accommodate city growth. A section of the cemetery, known as British Cemetery Montevideo Soldiers
and Sailors, contains the graves of many sailors of different nationalities, although the majority are
of British descent. One United States Marine, Henry de Costa, is buried here. Montevideo has a very rich architectural
heritage and an impressive number of writers, artists, and musicians. Uruguayan tango is a unique form of dance that
originated in the neighbourhoods of Montevideo towards the end of the 1800s. Tango, candombe and murga are the three
main styles of music in this city. The city is also the centre of the cinema of Uruguay, which includes commercial,
documentary and experimental films. There are two movie theatre companies running seven cinemas, around ten independent
ones and four art film cinemas in the city. The theatre of Uruguay is admired inside and outside Uruguayan borders.
The Solís Theatre is the most prominent theatre in Uruguay and the oldest in South America. There are several notable
theatrical companies and thousands of professional actors and amateurs. Montevideo playwrights produce dozens of
works each year; of major note are Mauricio Rosencof, Ana Magnabosco and Ricardo Prieto. The first public library
in Montevideo was formed by the initial donation of the private library of Father José Manuel Pérez Castellano, who
died in 1815. Its promoter, director and organizer was Father Dámaso Antonio Larrañaga, who also made a considerable
donation along with donations from José Raimundo Guerra, as well as others from the Convent of San Francisco in Salta.
In 1816 its stock was 5,000 volumes. The current building of the National Library of Uruguay (Biblioteca
Pública de Uruguay) was designed by Luis Crespi in the Neoclassical style and occupies an area of 4,000 square metres
(43,000 sq ft). Construction began in 1926 and it was finally inaugurated in 1964. Its current collection amounts
to roughly 900,000 volumes. In Montevideo, as throughout the Río de la Plata region, the most popular forms of music are tango, milonga and vals criollo. Many notable songs originated in Montevideo, including "El Tango supremo", "La Cumparsita", "La Milonga", "La Puñalada" and "Desde el Alma", composed by notable Montevideo musicians such as Gerardo
Matos Rodríguez, Pintín Castellanos and Rosita Melo. Tango is deeply ingrained in the cultural life of the city and
is the theme for many of the bars and restaurants in the city. Fun Fun Bar, established in 1935, is one of the most important places for tango in Uruguay, as are El Farolito, located in the old part of the city, Joventango, Café Las Musas, Garufa and Vieja Viola. The city is also home to the Montevideo Jazz Festival and has the Bancaria Jazz
Club bar catering for jazz enthusiasts. In 1973, when the military junta took power in Uruguay, art suffered in Montevideo. The art studios went into protest mode, with Rimer Cardillo, one of the country's leading artists, making the National Institute of Fine Arts, Montevideo a "hotbed of resistance". The military junta responded by coming down heavily on artists, closing the Fine Art Institute and carting away all the presses and other studio equipment. Consequently, instruction in the fine arts, in printmaking and works on paper as well as painting and sculpture, continued only in private studios run by people who had been released from jail; formal teaching resumed much later. The Montevideo Cabildo was the seat of government during the colonial times of the Viceroyalty
of the Río de la Plata. It is located in front of Constitution Square, in Ciudad Vieja. Built between 1804 and 1869
in Neoclassical style, with a series of Doric and Ionic columns, it became a National Heritage Site in 1975. In 1958,
the Municipal Historic Museum and Archive was inaugurated here. It features three permanent city museum exhibitions,
as well as temporary art exhibitions, cultural events, seminars, symposiums and forums. The Palacio Taranco is located
in front of the Plaza Zabala, in the heart of Ciudad Vieja. It was erected in the early 20th century as the residence
of the Ortiz Taranco brothers on the ruins of Montevideo's first theatre (of 1793), during a period in which the
architectural style was influenced by French architecture. The palace was designed by French architects Charles Louis
Girault and Jules Chifflot León who also designed the Petit Palais and the Arc de Triomphe in Paris. It passed to
the city from the heirs of the Tarancos in 1943, along with its precious collection of Uruguayan furniture and draperies
and was deemed by the city as an ideal place for a museum; in 1972 it became the Museum of Decorative Arts of Montevideo
and in 1975 it became a National Heritage Site. The Decorative Arts Museum has an important collection of European
paintings and decorative arts, ancient Greek and Roman art and Islamic ceramics of the 10th–18th century from the
area of present-day Iran. The palace is often used as a meeting place by the Uruguayan government. The National History
Museum of Montevideo is located in the historical residence of General Fructuoso Rivera. It exhibits artifacts related
to the history of Uruguay. In a process begun in 1998, the National Museum of Natural History (1837) and the National
Museum of Anthropology (1981), merged in 2001, becoming the National Museum of Natural History and Anthropology.
In July 2009, the two institutions again became independent. The Historical Museum has annexed eight historical houses
in the city, five of which are located in the Ciudad Vieja. One of them, on the same block with the main building,
is the historic residence of Antonio Montero, which houses the Museo Romantico. The Museo Torres García is located
in the Old Town, and exhibits Joaquín Torres García's unusual portraits of historical icons and cubist paintings
akin to those of Picasso and Braque. The museum was established by Manolita Piña Torres, the widow of Torres Garcia,
after his death in 1949. She also set up the Torres García Foundation, a private non-profit organization that organizes
the paintings, drawings, original writings, archives, objects and furniture designed by the painter as well as the
photographs, magazines and publications related to him. There are several other important art museums in Montevideo.
The National Museum of Visual Arts in Parque Rodó has Uruguay's largest collection of paintings. The Juan Manuel
Blanes Museum was founded in 1930, the 100th anniversary of the first Constitution of Uruguay, significant with regard
to the fact that Juan Manuel Blanes painted Uruguayan patriotic themes. In back of the museum is a beautiful Japanese
Garden with a pond where there are over a hundred carp. The Museo de Historia del Arte, located in the Palacio Municipal,
features replicas of ancient monuments and exhibits a varied collection of artifacts from Egypt, Mesopotamia, Persia,
Greece, Rome and Native American cultures including local finds of the pre-Columbian period. The Museo Municipal
Precolombino y Colonial, in the Ciudad Vieja, has preserved collections of the archaeological finds from excavations
carried out by Uruguayan archaeologist Antonio Taddei. These antiquaries are exhibits of pre-Columbian art of Latin
America, painting and sculpture from the 17th and 18th century mostly from Mexico, Peru and Brazil. The Museo de
Arte Contemporáneo has small but impressive exhibits of modern Uruguayan painting and sculpture. There are also other
types of museums in the city. The Museo del Gaucho y de la Moneda, located in the Centro, has distinctive displays
of the historical culture of Uruguay's gauchos, their horse gear, silver work and mate (tea), gourds, and bombillas
(drinking straws) in odd designs. The Museo Naval, is located on the eastern waterfront in Buceo and offers exhibits
depicting the maritime history of Uruguay. The Museo del Automóvil, belonging to the Automobile Club of Uruguay,
has a rich collection of vintage cars which includes a 1910 Hupmobile. The Museo y Parque Fernando García in Carrasco,
a transport and automobile museum, includes old horse carriages and some early automobiles. The Castillo Pittamiglio,
with an unusual façade, highlights the eccentric legacy of Humberto Pittamiglio, local alchemist and architect. The
center of traditional Uruguayan food and beverage in Montevideo is the Mercado del Puerto ("Port Market"). A torta
frita is a pan-fried cake consumed in Montevideo and throughout Uruguay. It is generally circular, with a small cut
in the centre for cooking, and is made from wheat flour, yeast, water and sugar or salt. Beef is very important in
Uruguayan cuisine and an essential part of many dishes. Montevideo has a variety of restaurants, from traditional
Uruguayan cuisine to Japanese cuisine such as sushi. Notable restaurants in Montevideo include Arcadia atop the Plaza
Victoria, widely regarded as the finest restaurant in the city. Arcadia is set in a classic Italian-inspired dining
room and serves lavish dishes such as terrine of pheasant marinated in cognac, grilled lamb glazed with mint and
garlic, and duck confit on thin strudel pastry with red cabbage. El Fogon is more popular with the late-night diners
of the city. Its interior is brightly lit and the walls covered with big mirrors. Officially a barbecue and seafood
restaurant, it serves grilled meat dishes, as well as salmon, shrimp and calamari. Also of note is the Cru. Numerous
restaurants are located along the Rambla of Montevideo. There is an Irish pub in the eastern part of the Old District
named Shannon Irish pub, another testament to the European heritage of the city. As the capital of Uruguay, Montevideo
is home to a number of festivals and carnivals including a Gaucho festival when people ride through the streets on
horseback in traditional gaucho gear. The major annual festival is the Montevideo Carnaval, which is part of
the national festival of Carnival Week, celebrated throughout Uruguay, with central activities in the capital, Montevideo.
Officially, the public holiday lasts for two days on Carnival Monday and Shrove Tuesday preceding Ash Wednesday,
but due to the prominence of the festival, most shops and businesses close for the entire week. During carnival there
are many open-air stage performances and competitions and the streets and houses are vibrantly decorated. "Tablados"
or popular stages, both fixed and movable, are erected throughout the city. Notable displays include the "Desfile de las Llamadas" ("Parade of the Calls"), a grand united parade held in the southern part of downtown, where it was a common ritual in the early 20th century. Due to the scale of the festival, preparation begins as early
as December with the election of the "zonal beauty queens" who appear in the carnival. Church and state have been officially separated in Uruguay since 1916. The religion with the most followers in Montevideo is Roman Catholicism, and it has been
so since the foundation of the city. The Roman Catholic Archdiocese of Montevideo was created as the Apostolic Vicariate
of Montevideo in 1830. The vicariate was promoted to the Diocese of Montevideo on 13 July 1878. Pope Leo XIII elevated
it to the rank of a metropolitan archdiocese on 14 April 1897. The new archdiocese became the Metropolitan of the
suffragan sees of Canelones, Florida, Maldonado–Punta del Este, Melo, Mercedes, Minas, Salto, San José de Mayo and Tacuarembó.
Nuestra Señora del Sagrado Corazón ("Our Lady of the Sacred Heart"), also known as Iglesia Punta Carretas ("Punta
Carretas Church"), was built between 1917 and 1927 in the Romanesque Revival style. The church was originally part
of the Order of Friars Minor Capuchin, but is presently in the parish of the Ecclesiastic Curia. Its location is
at the corner of Solano García and José Ellauri. It has a nave and aisles. The roof has many vaults. During the construction
of the Punta Carretas Shopping complex, major cracks developed in the structure of the church as a result of differential
foundation settlement. The University of the Republic is the country's largest and most important university, with
a student body of 81,774, according to the census of 2007. It was founded on 18 July 1849 in Montevideo, where most
of its buildings and facilities are still located. Its current Rector is Dr. Rodrigo Arocena. The university houses
14 faculties (departments) and various institutes and schools. Many eminent Uruguayans have graduated from this university,
including Carlos Vaz Ferreira, José Luis Massera, Gabriel Paternain, Mario Wschebor, Roman Fresnedo Siri, Carlos
Ott and Eladio Dieste. The process of founding the country's public university began on 11 June 1833 with the passage
of a law proposed by Senator Dámaso Antonio Larrañaga. It called for the creation of nine academic departments; the
President of the Republic would pass a decree formally creating the departments once the majority of them were in
operation. In 1836, the House of General Studies was formed, housing the departments of Latin, philosophy, mathematics,
theology and jurisprudence. On 27 May 1838, Manuel Oribe passed a decree establishing the Greater University of the
Republic. That decree had few practical effects, given the institutional instability of the Oriental Republic of Uruguay at that time. ORT Uruguay, the largest private university in Uruguay, is also located in Montevideo. It was
first established as a non-profit organization in 1942, and was officially certified as a private university in September
1996, becoming the second private educational institution in the country to achieve that status.
It is a member of World ORT, an international educational network founded in 1880 by the Jewish community in Saint
Petersburg, Russia. The university has about 8,000 students, distributed among 5 faculties and institutes, mainly
geared towards the sciences and technology/engineering. Its current rector as of 2010 is Dr. Jorge A. Grünberg.
The Montevideo Crandon Institute is an American School of missionary origin and the main Methodist educational institution
in Uruguay. Founded in 1879 and supported by the Women's Society of the Methodist Church of the United States, it
is one of the most traditional and emblematic institutions in the city inculcating John Wesley's values. Its alumni
include presidents, senators, ambassadors and Nobel Prize winners, along with musicians, scientists, and others.
The Montevideo Crandon Institute boasts of being the first academic institution in South America where a home economics
course was taught. The Christian Brothers of Ireland Stella Maris College is a private, co-educational, not-for-profit
Catholic school located in the wealthy residential southeastern neighbourhood of Carrasco. Established in 1955, it
is regarded as one of the best high schools in the country, blending a rigorous curriculum with strong extracurricular
activities. The school's headmaster, history professor Juan Pedro Toni, is a member of the Stella Maris Board of
Governors and the school is a member of the International Baccalaureate Organization (IBO). Its long list of distinguished
former pupils includes economists, engineers, architects, lawyers, politicians and even F1 champions. The school
has also played an important part in the development of rugby union in Uruguay, with the creation of Old Christians
Club, the school's alumni club. Also in Carrasco is The British Schools of Montevideo, one of the oldest educational
institutions in the country, established in 1908. Its original purpose was to give Uruguayan children
a complete education, on par with the best schools of the United Kingdom and to establish strong bonds between the
British and Uruguayan children living in the country. The School is governed by the Board of Governors, elected by
the British Schools Society in Uruguay, whose honorary president is the British Ambassador to Uruguay. Prominent
alumni include former government ministers Pedro Bordaberry Herrán and Gabriel Gurméndez Armand-Ugon. Located in
Cordón, St. Brendan's School, formerly named St. Catherine's, is a non-profit civil association with a solid institutional culture and a clear vision of the future. It is known as one of the best schools in the country, drawing students from the wealthiest parts of Montevideo, such as Punta Carretas, Pocitos, Malvín and Carrasco. St. Brendan's
School is a bilingual, non-denominational school that promotes a pedagogical constructivist approach focused on the
child as a whole. In this approach, understanding is built from the connections children make between their own prior
knowledge and the learning experiences, thus developing critical thinking skills. It is also the only school in the
country implementing all three International Baccalaureate programmes. Estadio Centenario, the national
football stadium in Parque Batlle, was opened in 1930 for the first World Cup, as well as to commemorate the centennial
of Uruguay's first constitution. In this World Cup, Uruguay won the title game against Argentina by 4 goals to 2.
The stadium has 70,000 seats. It is listed by FIFA as one of the football world's classic stadiums, along with Maracanã,
Wembley Stadium, San Siro, Estadio Azteca, and Santiago Bernabéu Stadium. A museum located within the football stadium
has exhibits of memorabilia from Uruguay's 1930 and 1950 World Cup championships. Museum tickets give access to the
stadium, stands, locker rooms and playing field. The Uruguayan Basketball League is headquartered in Montevideo and
most of its teams are from the city, including Defensor Sporting, Biguá, Aguada, Goes, Malvín, Unión Atlética, and
Trouville. Montevideo is also a centre of rugby; equestrianism, which regained importance in Montevideo after the
Maroñas Racecourse reopened; golf, with the Club de Punta Carretas; and yachting, with the Puerto del Buceo, an ideal
place to moor yachts. The Golf Club of Punta Carretas, founded in 1894, covers the area encircled by the west
side of Bulevar Artigas, the Rambla (Montevideo's promenade) and the Parque Rodó (Fun Fair). The Dirección Nacional
de Transporte (DNT), part of the national Ministry of Transport and Public Works, is responsible for the organization
and development of Montevideo's transport infrastructure. A bus service network covers the entire city. An international
bus station, the Tres Cruces Bus Terminal, is located on the lower level of the Tres Cruces Shopping Center, on the
side of Artigas Boulevard. This terminal, along with the Baltazar Brum Bus Terminal (or Rio Branco Terminal) by the
Port of Montevideo, handles the long distance and intercity bus routes connecting to destinations within Uruguay.
The State Railways Administration of Uruguay (AFE) operates three commuter rail lines, namely the Empalme Olmos,
San Jose and Florida. These lines operate to major suburban areas of Canelones, San José and Florida. Within the
Montevideo city limits, local trains stop at Lorenzo Carnelli, Yatai (Step Mill), Sayago, Columbus (line to San Jose
and Florida), Peñarol and Manga (line Empalme Olmos) stations. The historic 19th century General Artigas Central
Station located in the neighbourhood of Aguada, six blocks from the central business district, was abandoned on 1 March
2003 and remains closed. A new station, 500 metres (1,600 ft) north of the old one and part of the Tower of Communications
modern complex, has taken over the rail traffic. The port on Montevideo Bay is one of the reasons the city was founded.
It gives natural protection to ships, although two jetties now further protect the harbour entrance from waves. This
natural port is competitive with the other great port of Río de la Plata, Buenos Aires. The main engineering work
on the port occurred between the years 1870 and 1930. These six decades saw the construction of the port's first
wooden pier, several warehouses in La Aguada, the north and south Rambla, a river port, a new pier, the dredged river
basin and the La Teja refinery. A major storm in 1923 necessitated repairs to many of the city's engineering works.
Since the second half of the 20th century, physical changes have ceased, and since that time the area has degraded
due to national economic stagnation. Hospital Maciel is one of the oldest hospitals in Uruguay and stands on the
block bounded by the streets Maciel, 25 de Mayo, Guaraní and Washington, with the main entrance at 25 de Mayo, 172.
The land was originally donated in Spanish colonial times by philanthropist Francisco Antonio Maciel, who teamed
up with Mateo Vidal to establish a hospital and charity. The first building was constructed between 1781 and 1788
and later expanded upon. The present building stems from the 1825 plans of José Toribio (son of Tomás Toribio) and
later Bernardo Poncini (wing on the Guaraní street, 1859), Eduardo Canstatt (corner of Guaraní and 25 de Mayo) and
Julián Masquelez (1889). The hospital has a chapel built in Greek style by Miguel Estévez in 1798. Hospital Vilardebó
is the only psychiatric hospital in Montevideo. Named after the physician and naturalist Teodoro Vilardebó Matuliche,
it opened 21 May 1880. The hospital was originally one of the best of Latin America and in 1915 grew to 1,500 inpatients.
Today the hospital is badly deteriorated, with broken walls and floors and a lack of medicines, beds, and rooms for the
personnel. It has an emergency service, an outpatient clinic and inpatient rooms, and employs approximately 610 staff,
including psychologists, psychiatrists, social workers, administrators and guards, among others. The average patient age
is 30 years; more than half of the patients arrive by court order; 42% suffer from schizophrenia, 18% from depression and
mania, and there is also a high percentage of drug-addicted patients.
Poultry (/ˌpoʊltriː/) are domesticated birds kept by humans for the eggs they produce, their meat, their feathers, or sometimes
as pets. These birds are most typically members of the superorder Galloanserae (fowl), especially the order Galliformes
(which includes chickens, quails and turkeys) and the family Anatidae, in order Anseriformes, commonly known as "waterfowl"
and including domestic ducks and domestic geese. Poultry also includes other birds that are killed for their meat,
such as the young of pigeons (known as squabs) but does not include similar wild birds hunted for sport or food and
known as game. The word "poultry" comes from the French/Norman word poule, itself derived from the Latin word pullus,
which means small animal. The domestication of poultry took place several thousand years ago. This may have originally
been as a result of people hatching and rearing young birds from eggs collected from the wild, but later involved
keeping the birds permanently in captivity. Domesticated chickens may have been used for cockfighting at first and
quail kept for their songs, but soon it was realised how useful it was having a captive-bred source of food. Selective
breeding for fast growth, egg-laying ability, conformation, plumage and docility took place over the centuries, and
modern breeds often look very different from their wild ancestors. Although some birds are still kept in small flocks
in extensive systems, most birds available in the market today are reared in intensive commercial enterprises. Poultry
is the second most widely eaten type of meat globally and, along with eggs, provides nutritionally beneficial food
containing high-quality protein accompanied by a low proportion of fat. All poultry meat should be properly handled
and sufficiently cooked in order to reduce the risk of food poisoning. "Poultry" is a term used for any kind of domesticated
bird, captive-raised for its utility, and traditionally the word has been used to refer to wildfowl (Galliformes)
and waterfowl (Anseriformes). "Poultry" can be defined as domestic fowls, including chickens, turkeys, geese and
ducks, raised for the production of meat or eggs and the word is also used for the flesh of these birds used as food.
The Encyclopædia Britannica lists the same bird groups but also includes guinea fowl and squabs (young pigeons).
In R. D. Crawford's Poultry breeding and genetics, squabs are omitted but Japanese quail and common pheasant are
added to the list, the latter frequently being bred in captivity and released into the wild. In his 1848 classic
book on poultry, Ornamental and Domestic Poultry: Their History, and Management, Edmund Dixon included chapters on
the peafowl, guinea fowl, mute swan, turkey, various types of geese, the muscovy duck, other ducks and all types
of chickens including bantams. In colloquial speech, the term "fowl" is often used near-synonymously with "domesticated
chicken" (Gallus gallus), or with "poultry" or even just "bird", and many languages do not distinguish between "poultry"
and "fowl". Both words are also used for the flesh of these birds. Poultry can be distinguished from "game", defined
as wild birds or mammals hunted for food or sport, a word also used to describe the flesh of these when eaten. Chickens
are medium-sized, chunky birds with an upright stance and characterised by fleshy red combs and wattles on their
heads. Males, known as cocks, are usually larger, more boldly coloured, and have more exaggerated plumage than females
(hens). Chickens are gregarious, omnivorous, ground-dwelling birds that in their natural surroundings search among
the leaf litter for seeds, invertebrates, and other small animals. They seldom fly except as a result of perceived
danger, preferring to run into the undergrowth if approached. Today's domestic chicken (Gallus gallus domesticus)
is mainly descended from the wild red junglefowl of Asia, with some additional input from grey junglefowl. Domestication
is believed to have taken place between 7,000 and 10,000 years ago, and what are thought to be fossilized chicken
bones have been found in northeastern China dated to around 5,400 BC. Archaeologists believe domestication was originally
for the purpose of cockfighting, the male bird being a doughty fighter. By 4,000 years ago, chickens seem to have
reached the Indus Valley and 250 years later, they arrived in Egypt. They were still used for fighting and were regarded
as symbols of fertility. The Romans used them in divination, and the Egyptians made a breakthrough when they learned
the difficult technique of artificial incubation. Since then, the keeping of chickens has spread around the world
for the production of food with the domestic fowl being a valuable source of both eggs and meat. Since their domestication,
a large number of breeds of chickens have been established, but with the exception of the white Leghorn, most commercial
birds are of hybrid origin. In about 1800, chickens began to be kept on a larger scale, and modern high-output poultry
farms were present in the United Kingdom from around 1920 and became established in the United States soon after
the Second World War. By the mid-20th century, the poultry meat-producing industry was of greater importance than
the egg-laying industry. Poultry breeding has produced breeds and strains to fulfil different needs; light-framed,
egg-laying birds that can produce 300 eggs a year; fast-growing, fleshy birds destined for consumption at a young
age, and utility birds which produce both an acceptable number of eggs and a well-fleshed carcase. Male birds are
unwanted in the egg-laying industry and can often be identified as soon as they hatch for subsequent culling. In
meat breeds, these birds are sometimes castrated (often chemically) to prevent aggression; the resulting bird,
called a capon, has more tender and flavourful meat. A bantam is a small variety of domestic chicken, either
a miniature version of a member of a standard breed, or a "true bantam" with no larger counterpart. The name derives
from the town of Bantam in Java where European sailors bought the local small chickens for their shipboard supplies.
Bantams may be a quarter to a third of the size of standard birds and lay similarly small eggs. They are kept by
small-holders and hobbyists for egg production, use as broody hens, ornamental purposes, and showing. Cockfighting
is said to be the world's oldest spectator sport and may have originated in Persia 6,000 years ago. Two mature males
(cocks or roosters) are set to fight each other, and will do so with great vigour until one is critically injured
or killed. Breeds such as the Aseel were developed in the Indian subcontinent for their aggressive behaviour. The
sport formed part of the culture of the ancient Indians, Chinese, Greeks, and Romans, and large sums were won or
lost depending on the outcome of an encounter. Cockfighting has been banned in many countries during the last century
on the grounds of cruelty to animals. Ducks are medium-sized aquatic birds with broad bills, eyes on the side of
the head, fairly long necks, short legs set far back on the body, and webbed feet. Males, known as drakes, are often
larger than females (simply known as ducks) and are differently coloured in some breeds. Domestic ducks are omnivores,
eating a variety of animal and plant materials such as aquatic insects, molluscs, worms, small amphibians, waterweeds,
and grasses. They feed in shallow water by dabbling, with their heads underwater and their tails upended. Most domestic
ducks are too heavy to fly, and they are social birds, preferring to live and move around together in groups. They
keep their plumage waterproof by preening, a process that spreads the secretions of the preen gland over their feathers.
Clay models of ducks found in China dating back to 4000 BC may indicate the domestication of ducks took place there
during the Yangshao culture. Even if this is not the case, domestication of the duck took place in the Far East at
least 1500 years earlier than in the West. Lucius Columella, writing in the first century AD, advised those who sought
to rear ducks to collect wildfowl eggs and put them under a broody hen, because when raised in this way, the ducks
"lay aside their wild nature and without hesitation breed when shut up in the bird pen". Despite this, ducks did
not appear in agricultural texts in Western Europe until about 810 AD, when they began to be mentioned alongside
geese, chickens, and peafowl as being used for rental payments made by tenants to landowners. It is widely agreed
that the mallard (Anas platyrhynchos) is the ancestor of all breeds of domestic duck (with the exception of the Muscovy
duck (Cairina moschata), which is not closely related to other ducks). Ducks are farmed mainly for their meat, eggs,
and down. As is the case with chickens, various breeds have been developed, selected for egg-laying ability, fast
growth, and a well-covered carcase. The most common commercial breed in the United Kingdom and the United States
is the Pekin duck, which can lay 200 eggs a year and can reach a weight of 3.5 kg (7.7 lb) in 44 days. In the Western
world, ducks are not as popular as chickens, because the latter produce larger quantities of white, lean meat and
are easier to keep intensively, making the price of chicken meat lower than that of duck meat. While popular in haute
cuisine, duck appears less frequently in the mass-market food industry. However, things are different in the East.
Ducks are more popular there than chickens and are mostly still herded in the traditional way and selected for their
ability to find sufficient food in harvested rice fields and other wet environments. The greylag goose (Anser anser)
was domesticated by the Egyptians at least 3000 years ago, and a different wild species, the swan goose (Anser cygnoides),
domesticated in Siberia about a thousand years later, is known as a Chinese goose. The two hybridise with each other
and the large knob at the base of the beak, a noticeable feature of the Chinese goose, is present to a varying extent
in these hybrids. The hybrids are fertile and have resulted in several of the modern breeds. Despite their early
domestication, geese have never gained the commercial importance of chickens and ducks. Domestic geese are much larger
than their wild counterparts and tend to have thick necks, an upright posture, and large bodies with broad rear ends.
The greylag-derived birds are large and fleshy and used for meat, while the Chinese geese have smaller frames and
are mainly used for egg production. The fine down of both is valued for use in pillows and padded garments. They
forage on grass and weeds, supplementing this with small invertebrates, and one of the attractions of rearing geese
is their ability to grow and thrive on a grass-based system. They are very gregarious and have good memories and
can be allowed to roam widely in the knowledge that they will return home by dusk. The Chinese goose is more aggressive
and noisy than other geese and can be used as a guard animal to warn of intruders. The flesh of meat geese is dark-coloured
and high in protein, but they deposit fat subcutaneously, although this fat contains mostly monounsaturated fatty
acids. The birds are killed either at around 10 or at about 24 weeks of age. Between these ages, problems with dressing the carcase
occur because of the presence of developing pin feathers. The modern domesticated turkey is descended from one of
six subspecies of wild turkey (Meleagris gallopavo) found in the present Mexican states of Jalisco, Guerrero and
Veracruz. Pre-Aztec tribes in south-central Mexico first domesticated the bird around 800 BC, and Pueblo Indians
inhabiting the Colorado Plateau in the United States did likewise around 200 BC. They used the feathers for robes,
blankets, and ceremonial purposes. More than 1,000 years later, they became an important food source. The first Europeans
to encounter the bird misidentified it as a guineafowl, a bird known as a "turkey fowl" at that time because it had
been introduced into Europe via Turkey. Commercial turkeys are usually reared indoors under controlled conditions.
These are often large buildings, purpose-built to provide ventilation and low light intensities (this reduces the
birds' activity and thereby increases the rate of weight gain). The lights can be switched on for 24 hours a day, or
a range of step-wise light regimens can be used to encourage the birds to feed often and therefore grow rapidly. Females achieve
slaughter weight at about 15 weeks of age and males at about 19. Mature commercial birds may be twice as heavy as
their wild counterparts. Many different breeds have been developed, but the majority of commercial birds are white,
as this improves the appearance of the dressed carcass, the pin feathers being less visible. Turkeys were at one
time mainly consumed on special occasions such as Christmas (10 million birds in the United Kingdom) or Thanksgiving
(60 million birds in the United States). However, they are increasingly becoming part of the everyday diet in many
parts of the world. The quail is a small to medium-sized, cryptically coloured bird. In its natural environment,
it is found in bushy places, in rough grassland, among agricultural crops, and in other places with dense cover.
It feeds on seeds, insects, and other small invertebrates. Being a largely ground-dwelling, gregarious bird, domestication
of the quail was not difficult, although many of its wild instincts are retained in captivity. It was known to the
Egyptians long before the arrival of chickens and was depicted in hieroglyphs from 2575 BC. It migrated across Egypt
in vast flocks and the birds could sometimes be picked up off the ground by hand. These were the common quail (Coturnix
coturnix), but modern domesticated flocks are mostly of Japanese quail (Coturnix japonica) which was probably domesticated
as early as the 11th century AD in Japan. They were originally kept as songbirds, and they are thought to have been
regularly used in song contests. In the early 20th century, Japanese breeders began to selectively breed for increased
egg production. By 1940, the quail egg industry was flourishing, but the events of World War II led to the complete
loss of quail lines bred for their song type, as well as almost all of those bred for egg production. After the war,
the few surviving domesticated quail were used to rebuild the industry, and all current commercial and laboratory
lines are considered to have originated from this population. Modern birds can lay upward of 300 eggs a year and
countries such as Japan, India, China, Italy, Russia, and the United States have established commercial Japanese
quail farming industries. Japanese quail are also used in biomedical research in fields such as genetics, embryology,
nutrition, physiology, pathology, and toxicity studies. These quail are closely related to the common quail, and
many young hybrid birds are released into the wild each year to replenish dwindling wild populations. Guinea fowl
originated in southern Africa, and the species most often kept as poultry is the helmeted guineafowl (Numida meleagris).
It is a medium-sized grey or speckled bird with a small naked head with colourful wattles and a knob on top, and
was domesticated by the time of the ancient Greeks and Romans. Guinea fowl are hardy, sociable birds that subsist
mainly on insects, but also consume grasses and seeds. They will keep a vegetable garden clear of pests and will
eat the ticks that carry Lyme disease. They happily roost in trees and give a loud vocal warning of the approach
of predators. Their flesh and eggs can be eaten in the same way as chickens, young birds being ready for the table
at the age of about four months. A squab is the name given to the young of domestic pigeons that are destined for
the table. Like other domesticated pigeons, birds used for this purpose are descended from the rock pigeon (Columba
livia). Special utility breeds with desirable characteristics are used. Two eggs are laid and incubated for about
17 days. When they hatch, the squabs are fed by both parents on "pigeon's milk", a thick secretion high in protein
produced by the crop. Squabs grow rapidly, but are slow to fledge and are ready to leave the nest at 26 to 30 days
weighing about 500 g (18 oz). By this time, the adult pigeons will have laid and be incubating another pair of eggs
and a prolific pair should produce two squabs every four weeks during a breeding season lasting several months. Worldwide,
more chickens are kept than any other type of poultry, with over 50 billion birds being raised each year as a source
of meat and eggs. Traditionally, such birds would have been kept extensively in small flocks, foraging during the
day and housed at night. This is still the case in developing countries, where the women often make important contributions
to family livelihoods through keeping poultry. However, rising world populations and urbanization have led to the
bulk of production being in larger, more intensive specialist units. These are often situated close to where the
feed is grown or near to where the meat is needed, and result in cheap, safe food being made available for urban
communities. Profitability of production depends very much on the price of feed, which has been rising. High feed
costs could limit further development of poultry production. In free-range husbandry, the birds can roam freely outdoors
for at least part of the day. Often, this is in large enclosures, but the birds have access to natural conditions
and can exhibit their normal behaviours. A more intensive system is yarding, in which the birds have access to a
fenced yard and poultry house at a higher stocking rate. Poultry can also be kept in a barn system, with no access
to the open air, but with the ability to move around freely inside the building. The most intensive system for egg-laying
chickens is battery cages, often set in multiple tiers. In these, several birds share a small cage which restricts
their ability to move around and behave in a normal manner. The eggs are laid on the floor of the cage and roll into
troughs outside for ease of collection. Battery cages for hens have been illegal in the EU since 1 January 2012.
Chickens raised intensively for their meat are known as "broilers". Breeds have been developed that can grow to an
acceptable carcass size (2 kg (4.4 lb)) in six weeks or less. Broilers grow so fast, their legs cannot always support
their weight and their hearts and respiratory systems may not be able to supply enough oxygen to their developing
muscles. Mortality rates at 1% are much higher than for less-intensively reared laying birds which take 18 weeks
to reach similar weights. Processing the birds is done automatically with conveyor-belt efficiency. They are hung
by their feet, stunned, killed, bled, scalded, plucked, have their heads and feet removed, eviscerated, washed, chilled,
drained, weighed, and packed, all within the course of little over two hours. Both intensive and free-range farming
have animal welfare concerns. In intensive systems, cannibalism, feather pecking and vent pecking can be common,
with some farmers using beak trimming as a preventative measure. Diseases can also be common and spread rapidly through
the flock. In extensive systems, the birds are exposed to adverse weather conditions and are vulnerable to predators
and disease-carrying wild birds. Barn systems have been found to have the worst bird welfare. In Southeast Asia,
a lack of disease control in free-range farming has been associated with outbreaks of avian influenza. In many countries,
national and regional poultry shows are held where enthusiasts exhibit their birds which are judged on certain phenotypical
breed traits as specified by their respective breed standards. The idea of poultry exhibition may have originated
after cockfighting was made illegal, as a way of maintaining a competitive element in poultry husbandry. Breed standards
were drawn up for egg-laying, meat-type, and purely ornamental birds, aiming for uniformity. Sometimes, poultry shows
are part of general livestock shows, and sometimes they are separate events such as the annual "National Championship
Show" in the United Kingdom organised by the Poultry Club of Great Britain. Poultry is the second most widely eaten
type of meat in the world, accounting for about 30% of total meat production worldwide compared to pork at 38%. Sixteen
billion birds are raised annually for consumption, more than half of these in industrialised, factory-like production
units. Global broiler meat production rose to 84.6 million tonnes in 2013. The largest producers were the United
States (20%), China (16.6%), Brazil (15.1%) and the European Union (11.3%). There are two distinct models of production;
the European Union supply chain model seeks to supply products which can be traced back to the farm of origin. This
model faces the increasing costs of implementing additional food safety requirements, welfare issues and environmental
regulations. In contrast, the United States model turns the product into a commodity. World production of duck meat
was about 4.2 million tonnes in 2011 with China producing two thirds of the total, some 1.7 billion birds. Other
notable duck-producing countries in the Far East include Vietnam, Thailand, Malaysia, Myanmar, Indonesia and South
Korea (12% in total). France (3.5%) is the largest producer in the West, followed by other EU nations (3%) and North
America (1.7%). China was also by far the largest producer of goose and guinea fowl meat, with a 94% share of the
2.6 million tonne global market. Poultry is available fresh or frozen, as whole birds or as joints (cuts), bone-in
or deboned, seasoned in various ways, raw or ready cooked. The meatiest parts of a bird are the flight muscles on
its chest, called "breast" meat, and the walking muscles on the legs, called the "thigh" and "drumstick". The wings
are also eaten (Buffalo wings are a popular example in the United States) and may be split into three segments, the
meatier "drumette", the "wingette" (also called the "flat"), and the wing tip (also called the "flapper"). In Japan,
the wing is frequently separated, and these parts are referred to as 手羽元 (teba-moto "wing base") and 手羽先 (teba-saki
"wing tip"). Dark meat, which avian myologists refer to as "red muscle", is used for sustained activity—chiefly walking,
in the case of a chicken. The dark colour comes from the protein myoglobin, which plays a key role in oxygen uptake
and storage within cells. White muscle, in contrast, is suitable only for short bursts of activity such as, for chickens,
flying. Thus, the chicken's leg and thigh meat are dark, while its breast meat (which makes up the primary flight
muscles) is white. Other birds with breast muscle more suitable for sustained flight, such as ducks and geese, have
red muscle (and therefore dark meat) throughout. Some cuts of meat, including poultry, expose the microscopic regular
structure of intracellular muscle fibrils which can diffract light and produce iridescent colours, an optical phenomenon
sometimes called structural colouration. A 2011 study by the Translational Genomics Research Institute showed that
47% of the meat and poultry sold in United States grocery stores was contaminated with Staphylococcus aureus, and
52% of the bacteria concerned showed resistance to at least three groups of antibiotics. Thorough cooking of the
product would kill these bacteria, but a risk of cross-contamination from improper handling of the raw product is
still present. Also, some risk is present for consumers of poultry meat and eggs to bacterial infections such as
Salmonella and Campylobacter. Poultry products may become contaminated by these bacteria during handling, processing,
marketing, or storage, resulting in food-borne illness if the product is improperly cooked or handled. In general,
avian influenza is a disease of birds caused by bird-specific influenza A virus that is not normally transferred
to people; however, people in contact with live poultry are at the greatest risk of becoming infected with the virus
and this is of particular concern in areas such as Southeast Asia, where the disease is endemic in the wild bird
population and domestic poultry can become infected. The virus possibly could mutate to become highly virulent and
infectious in humans and cause an influenza pandemic. Bacteria can be grown in the laboratory on nutrient culture
media, but viruses need living cells in which to replicate. Many vaccines to infectious diseases can be grown in
fertilised chicken eggs. Millions of eggs are used each year to generate the annual flu vaccine requirements, a complex
process that takes about six months after the decision is made as to what strains of virus to include in the new
vaccine. A problem with using eggs for this purpose is that people with egg allergies are unable to be immunised,
but this disadvantage may be overcome as new techniques for cell-based rather than egg-based culture become available.
Cell-based culture will also be useful in a pandemic when it may be difficult to acquire a sufficiently large quantity
of suitable sterile, fertile eggs. Poultry meat and eggs provide nutritionally beneficial food containing protein
of high quality. This is accompanied by low levels of fat which have a favourable mix of fatty acids. Chicken meat
contains about two to three times as much polyunsaturated fat as most types of red meat when measured by weight.
However, for boneless, skinless chicken breast, the amount is much lower. A 100-g serving of baked chicken breast
contains 4 g of fat and 31 g of protein, compared to 10 g of fat and 27 g of protein for the same portion of broiled,
lean skirt steak.
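The comparison above can be restated as a short worked calculation. As a sketch, the figures below are simply the per-100-g values quoted in the text (chicken breast: 4 g fat, 31 g protein; lean skirt steak: 10 g fat, 27 g protein); actual products vary.

```python
# Protein-to-fat comparison per 100 g serving, using the figures quoted above.
servings = {
    "baked chicken breast":     {"fat_g": 4,  "protein_g": 31},
    "broiled lean skirt steak": {"fat_g": 10, "protein_g": 27},
}

for name, n in servings.items():
    # grams of protein delivered per gram of fat
    ratio = n["protein_g"] / n["fat_g"]
    print(f"{name}: {n['protein_g']} g protein, {n['fat_g']} g fat "
          f"-> {ratio:.1f} g protein per g fat")
```

On these figures, chicken breast delivers about 7.8 g of protein per gram of fat versus roughly 2.7 g for the skirt steak, which illustrates the favourable protein-to-fat profile of lean poultry meat described above.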
Outside of the Low Countries, it is the native language of the majority of the population of Suriname, and also holds official
status in the Caribbean island nations of Aruba, Curaçao and Sint Maarten. Historical minorities on the verge of
extinction remain in parts of France and Germany, and in Indonesia,[n 1] while up to half a million native speakers
may reside in the United States, Canada and Australia combined.[n 2] The Cape Dutch dialects of Southern Africa have
evolved into Afrikaans, a mutually intelligible daughter language[n 3] which is spoken to some degree by at least
16 million people, mainly in South Africa and Namibia.[n 4] Dutch is one of the closest relatives of both German
and English[n 5] and is said to be roughly in between them.[n 6] Dutch, like English, has not undergone the High
German consonant shift, does not use Germanic umlaut as a grammatical marker, has largely abandoned the use of the
subjunctive, and has levelled much of its morphology, including the case system.[n 7] Features shared with German
include the survival of three grammatical genders—albeit with few grammatical consequences[n 8]—as well as the use
of modal particles, final-obstruent devoicing, and a similar word order.[n 9] Dutch vocabulary is mostly Germanic
and incorporates more Romance loans than German but fewer than English.[n 10] While Dutch generally refers to the
language as a whole, Belgian varieties are sometimes collectively referred to as Flemish. In both Belgium and the
Netherlands, the native official name for Dutch is Nederlands, and its dialects have their own names, e.g. Hollands
"Hollandish", West-Vlaams "Western Flemish", Brabants "Brabantian". The use of the word Vlaams ("Flemish") to describe
the varieties of Standard Dutch prevalent in Flanders, however, is common in both the Netherlands and
Belgium. The Dutch language has been known under a variety of names. In Middle Dutch, which was a collection of dialects,
dietsc was used in Flanders and Brabant, while diets or duutsc was in use in the Northern Netherlands. It derived
from the Old Germanic word theudisk, one of the first names ever used for the non-Romance languages of Western Europe,
meaning (pertaining to the language) of the people, that is, the native Germanic language. The term was used as opposed
to Latin, the non-native language of writing and the Catholic Church. In the first text in which it is found, dating
from 784, it refers to the Germanic dialects of Britain. In the Oaths of Strasbourg (842) it appeared as teudisca
to refer to the Germanic (Rhenish Franconian) portion of the oath. Until roughly the 16th century, speakers of all
the varieties of the West Germanic languages from the mouth of the Rhine to the Alps had been accustomed to refer
to their native speech as Dietsch, (Neder)duyts or some other cognate of theudisk. This inevitably led to confusion,
since similar terms referred to different languages. Therefore, in the 16th century, a differentiation took place.
Owing to Dutch commercial and colonial rivalry in the 16th and 17th centuries, the English term came to refer exclusively
to the Dutch. A notable exception is Pennsylvania Dutch, which is a West Central German variety called Deitsch by
its speakers. Jersey Dutch, on the other hand, as spoken until the 1950s in New Jersey, is a Dutch-based creole.
In Dutch itself, Diets went out of common use, although Platdiets is still used for the transitional Limburgish-Ripuarian
dialects in the north-east of Belgium. Nederlands, the official Dutch word for "Dutch", did not become firmly established
until the 19th century. This designation had been in use as far back as the end of the 15th century, but received
competition from the more popular terminology Nederduits, "Low Dutch", for several reasons. One of them was that it
reflected a distinction with Hoogduits, "High Dutch", meaning the language spoken in Germany. The Hoog was later dropped,
and thus Duits narrowed in meaning to refer to the German language. The term Nederduits, however, introduced new
confusion, since the non-standardised dialects spoken in the north of Germany also came to be known as Niederdeutsch,
and thus the Duits reference in the name was dropped, leading to Nederlands as the designation for the
Dutch language. The repeated use of Neder (or "low") to refer to the Dutch language is a reference to the Netherlands'
downriver location at the Rhine–Meuse–Scheldt delta near the North Sea, harking back to Latin nomenclature, e.g.
Germania Inferior. See also: Netherlands (toponymy). Three Germanic dialects were originally spoken in the Low Countries:
Frisian in the north and along the western coast; Saxon in the east (contiguous with the Low German area); and Franconian
in the centre and south. It is the Franconian dialects that are designated as Old Dutch, and that would develop into
Middle Dutch and later Modern Dutch. The division into these development phases is mostly conventional, since the transition
between them was very gradual. One of the few moments at which linguists can detect something of a revolution is when the Dutch
standard language emerged and quickly established itself. The development of the Dutch language is illustrated by
the following sentence in Old, Middle and Modern Dutch: Within the Indo-European language tree, Dutch is grouped
within the Germanic languages, which means it shares a common ancestor with languages such as English, German, and
Scandinavian languages. All Germanic languages are subject to the sound shifts of Grimm's law and Verner's law, which originated in the Proto-Germanic language and define the basic features differentiating them from other Indo-European languages. Proto-Germanic is assumed to have originated in approximately the mid-first millennium BCE in Iron Age northern Europe.
The Germanic languages are traditionally divided into three groups: West, East and North Germanic. They remained
mutually intelligible throughout the Migration Period. Dutch, together with English and German, is part of the West Germanic group, which is characterized by a number of phonological and morphological innovations not found in North
and East Germanic. The West Germanic varieties of the time are generally split into three dialect groups: Ingvaeonic
(North Sea Germanic), Istvaeonic (Weser-Rhine Germanic) and Irminonic (Elbe Germanic). It appears that the Frankish
tribes fit primarily into the Istvaeonic dialect group with certain Ingvaeonic influences towards the northwest,
still seen in modern Dutch. A Frankish identity emerged and so did their Frankish or Franconian language. The language
itself is poorly attested. A notable exception is the Bergakker inscription, found near the Dutch city of Tiel, which
may represent a primary record of 5th-century Frankish. Although some placenames recorded in Roman texts could arguably
be considered the oldest "Dutch" single words, like vadam (modern Dutch: wad, English: "mudflat"), the Bergakker
inscription yields the oldest evidence of Dutch morphology, but there is no consensus on the interpretation of the
rest of the text. Old Low Franconian or Old Dutch is regarded as the primary stage in the development of a separate
Dutch language. The "Low" in Old Low Franconian refers to the Frankish spoken in the Low Countries where it was not
influenced by the High German consonant shift, as opposed to Central and High Franconian in Germany. The latter would as a consequence evolve with Alemannic into Old High German. At more or less the same time the Ingvaeonic nasal spirant
law led to the development of Old Saxon, Old Frisian (Anglo-Frisian) and Old English (Anglo-Saxon). Hardly influenced
by either development, Old Dutch remained close to the original language of the Franks, the people that would rule
Europe for centuries. The language did, however, experience developments of its own, such as final-obstruent devoicing at a very early stage. In fact, judging from the find at Bergakker, the language may already have shown this characteristic during the Old Frankish period. Attestations of Old Dutch sentences are extremely rare. The oldest recorded one has been found in the Salic law, a Frankish document written around 510: Maltho thi afrio lito ("I say to you, I free you, serf"), used to free a serf. Another
old fragment of Dutch is Visc flot aftar themo uuatare (A fish was swimming in the water). The oldest conserved larger
Dutch text is the Utrecht baptismal vow (776-800) starting with Forsachistu diobolae [...] ec forsacho diabolae (Do
you forsake the devil? [...] I forsake the devil). Probably the most famous sentence Hebban olla vogala nestas hagunnan,
hinase hic enda tu, wat unbidan we nu (All birds have started making nests, except me and you, what are we waiting
for), is dated around the year 1100, written by a Flemish monk in a convent in Rochester, England. Old Dutch naturally
evolved into Middle Dutch. The year 1150 is often cited as the time of the discontinuity, but it actually marks a time of profuse Dutch writing, during which a rich Medieval Dutch literature developed. There was at that time no overarching standard language; Middle Dutch is rather a collective name for a number of closely related, mutually intelligible dialects whose ancestor was Old Dutch. In fact, since Dutch is a rather conservative
language, the various literary works of that time today are often very readable for modern-day speakers. A process
of standardisation started in the Middle Ages, especially under the influence of the Burgundian Ducal Court in Dijon
(Brussels after 1477). The dialects of Flanders and Brabant were the most influential around this time. The process
of standardisation became much stronger at the start of the 16th century, mainly based on the urban dialect of Antwerp.
In 1585 Antwerp fell to the Spanish army: many fled to the Northern Netherlands, where the Dutch Republic declared
its independence from Spain. They particularly influenced the urban dialects of the province of Holland. In 1637,
a further important step was made towards a unified language, when the Statenvertaling, the first major Bible translation
into Dutch, was created that people from all over the new republic could understand. It used elements from various,
even Dutch Low Saxon, dialects but was predominantly based on the post-16th-century urban dialects of Holland.
In the Southern Netherlands (now Belgium and Luxembourg) developments were different. Under Spanish, then Austrian,
and then French rule, standardisation of the Dutch language came to a standstill. The state, law, and increasingly education used French, yet more than half the Belgian population spoke a Dutch dialect. In the course of the nineteenth century the Flemish movement stood up for the rights of Dutch, there mostly called Flemish. In competition with French, however, the variation in dialects was a serious disadvantage. Since standardisation is a lengthy process, Dutch-speaking
Belgium associated itself with the standard language that had already developed in the Netherlands over the centuries.
Therefore, the situation in Belgium is essentially no different from that in the Netherlands, although there are
recognisable differences in pronunciation, comparable to the pronunciation differences between standard British and
standard American English. In 1980 the Netherlands and Belgium concluded the Language Union Treaty. This treaty lays
down the principle that the two countries must gear their language policy to each other, among other things through a common system of spelling. Dutch belongs to its own West Germanic sub-group, West Low Franconian, paired with its sister language Limburgish, or East Low Franconian. Its closest relative is the mutually intelligible daughter language Afrikaans. Other West Germanic languages related to Dutch are German, English and the Frisian languages, and the non-standardised languages Low German and Yiddish. Dutch stands out in combining a small degree of Ingvaeonic characteristics
(occurring consistently in English and Frisian and reduced in intensity from 'west to east' over the continental
West Germanic plane) with mostly Istvaeonic characteristics, some of which are also incorporated in German. Unlike German, Dutch (apart from Limburgish) has not been influenced at all by the 'south to north' movement of the High German sound shift, and underwent some changes of its own. The accumulation of these changes resulted over time in separate,
but related standard languages with various degrees of similarities and differences between them. For a comparison
between the West Germanic languages, see the sections Morphology, Grammar and Vocabulary. In the east there is a
Dutch Low Saxon dialect area, comprising the provinces of Groningen, Drenthe and Overijssel, and parts of the province
of Gelderland as well. The IJssel river roughly forms the linguistic watershed here. This group, though not Low Franconian and close to neighbouring Low German, is regarded as Dutch for a number of reasons. From the 14th to 15th century onward, its urban centres (Deventer, Zwolle and Kampen, as well as Zutphen and Doesburg) were increasingly influenced by western written Dutch and became a linguistically mixed area. From the 17th century onward, it was gradually integrated into the Dutch language area. In other words, this group is Dutch synchronically but not diachronically. Dutch dialects and regional languages are not spoken as often as they used
to be. Recent research by Geert Driessen shows that the use of dialects and regional languages among both Dutch adults
and youth is in heavy decline. In 1995, 27 percent of the Dutch adult population spoke a dialect or regional language
on a regular basis, while in 2011 this was no more than 11 percent. In 1995, 12 percent of the primary school aged
children spoke a dialect or regional language, while in 2011 this had declined to 4 percent. Of the three officially
recognized regional languages Limburgish is spoken most (in 2011 among adults 54%, among children 31%) and Dutch
Low Saxon least (adults 15%, children 1%); Frisian occupies a middle position (adults 44%, children 22%). The different
dialects show many sound shifts in different vowels (even shifting between diphthongs and monophthongs), and in some
cases consonants also shift pronunciation. For example, an oddity of West Flemish (and to a lesser extent, East Flemish) is that the voiced velar fricative (written as "g" in Dutch) shifts to a voiced glottal fricative (written as "h" in Dutch), while the letter "h" in West Flemish becomes mute (just as in French). As a result, when West Flemings try to speak Standard Dutch, they are often unable to pronounce the g-sound and pronounce it similarly to the h-sound. This leaves, for example, no difference between "held" (hero) and "geld" (money). In some cases, speakers aware of the problem hypercorrect the "h" into a voiced velar fricative or g-sound, again leaving no difference.
Besides sound shifts, there are ample examples of suffix differences: often simple suffix shifts (like switching between -the, -ske, -ke, -je, ...), but sometimes the suffixes depend on quite specific grammar rules of a certain dialect. Taking West Flemish as an example again: in that dialect, the words "ja" (yes) and "nee" (no) are also conjugated to the (often implicit) subject of the sentence. These separate grammar rules are a lot more difficult
to imitate correctly than simple sound shifts, making it easy to recognise people who didn't grow up in a certain
region, even decades after they moved. Some Flemish dialects are so distinct that they might be considered as separate
language variants, although the strong significance of language in Belgian politics would prevent the government
from classifying them as such. West Flemish in particular has sometimes been considered a distinct variety. Dialect
borders of these dialects do not correspond to present political boundaries, but reflect older, medieval divisions.
The Brabantian dialect group, for instance, also extends to much of the south of the Netherlands, and so does Limburgish.
West Flemish is also spoken in Zeelandic Flanders (part of the Dutch province of Zeeland), and by older people in
French Flanders (a small area that borders Belgium). Many native speakers of Dutch, both in Belgium and the Netherlands,
assume that Afrikaans and West Frisian are dialects of Dutch; in fact, they are considered separate languages: a daughter language and a sister language, respectively. Afrikaans evolved mainly from 17th-century Dutch dialects,
but had influences from various other languages in South Africa. However, it is still largely mutually intelligible
with Dutch. (West) Frisian evolved from the same West Germanic branch as Old English and is less akin to Dutch. In
Europe, Dutch is the majority language in the Netherlands (96%) and Belgium (59%) as well as a minority language
in Germany and northern France's French Flanders, where it is in the ultimate stage of language death. Though Belgium
as a whole is multilingual, the two regions into which the country is divided (Flanders, francophone Wallonia, bilingual
Brussels and small 'facility' zones) are largely monolingual. The Netherlands and Belgium produce the vast majority
of music, films, books and other media written or spoken in Dutch. Dutch is a monocentric language, with all speakers
using the same standard form (authorized by the Dutch Language Union) based on a Dutch orthography employing the
Latin alphabet when writing. In stark contrast to its written uniformity, Dutch lacks a prestige dialect and has
a large dialectal continuum consisting of 28 main dialects, which can themselves be further divided into at least
600 distinguishable varieties. Outside of the Netherlands and Belgium, the dialect around the German town of Kleve
(South Guelderish) both historically and genetically belongs to the Dutch language. In Northeastern France, the area
around Calais was historically Dutch-speaking (West Flemish), of which there remain an estimated 20,000 daily speakers. The cities
of Dunkirk, Gravelines and Bourbourg only became predominantly French-speaking by the end of the 19th century. In
the countryside, until World War I, many elementary schools continued to teach in Dutch, and the Catholic Church
continued to preach and teach the catechism in Flemish in many parishes. During the second half of the 19th century
Dutch was banned from all levels of education by both Prussia and France and lost most of its functions as a cultural
language. In both Germany and France the Dutch standard language is largely absent and speakers of these Dutch dialects
will use German or French in everyday speech. Dutch is not afforded legal status in France or Germany, either by
the central or regional public authorities and knowledge of the language is declining among younger generations.
As a foreign language, Dutch is mainly taught in primary and secondary schools in areas adjacent to the Netherlands
and Flanders. In French-speaking Belgium, over 300,000 pupils are enrolled in Dutch courses, followed by over 23,000
in the German states of Lower Saxony and North Rhine-Westphalia, and about 7,000 in the French region of Nord-Pas-de-Calais
(of which 4,550 are in primary school). At an academic level, the largest number of faculties of neerlandistiek can
be found in Germany (30 universities), followed by France (20 universities) and the United Kingdom (5 universities).
Despite the Dutch presence in Indonesia for almost 350 years, as the Asian bulk of the Dutch East Indies, the Dutch
language has no official status there and the small minority that can speak the language fluently are either educated
members of the oldest generation, or employed in the legal profession, as some legal codes are still only available
in Dutch. Dutch is taught in various educational centres in Indonesia, the most important of which is the Erasmus
Language Centre (ETC) in Jakarta. Each year, some 1,500 to 2,000 students take Dutch courses there. In total, several
thousand Indonesians study Dutch as a foreign language. Owing to centuries of Dutch rule in Indonesia, many old documents
are written in Dutch. Many universities therefore include Dutch as a source language, mainly for law and history
students. In Indonesia this involves about 35,000 students. Unlike other European nations, the Dutch chose not to
follow a policy of language expansion amongst the indigenous peoples of their colonies. In the last quarter of the
19th century, however, a local elite gained proficiency in Dutch so as to meet the needs of expanding bureaucracy
and business. Nevertheless, the Dutch government remained reluctant to teach Dutch on a large scale for fear of destabilising
the colony. Dutch, the language of power, was supposed to remain in the hands of the leading elite. After independence,
Dutch was dropped as an official language and replaced by Malay. Yet the Indonesian language inherited many words
from Dutch: words for everyday life as well as scientific and technological terms. One scholar argues that 20% of
Indonesian words can be traced back to Dutch words, many of which are transliterated to reflect phonetic pronunciation
e.g. kantoor (Dutch for "office") in Indonesian is kantor, while bus ("bus") becomes bis. In addition, many Indonesian
words are calques on Dutch, for example, rumah sakit (Indonesian for "hospital") is calqued on the Dutch ziekenhuis
(literally "house of the sick"), kebun binatang ("zoo") on dierentuin (literally "animal garden"), undang-undang
dasar ("constitution") from grondwet (literally "ground law"). These account for some of the differences in vocabulary
between Indonesian and Malay. In Suriname today, Dutch is the sole official language, and over 60 percent of the
population speaks it as a mother tongue. Dutch is the obligatory medium of instruction in schools in Suriname, even
for non-native speakers. A further twenty-four percent of the population speaks Dutch as a second language. Suriname
gained its independence from the Netherlands in 1975 and has been an associate member of the Dutch Language Union
since 2004. The lingua franca of Suriname, however, is Sranan Tongo, spoken natively by about a fifth of the population.
In the United States, an almost extinct dialect of Dutch, Jersey Dutch, spoken by descendants of 17th-century Dutch
settlers in Bergen and Passaic counties, was still spoken as late as 1921. Other Dutch-based creole languages once
spoken in the Americas include Mohawk Dutch (in Albany, New York), Berbice (in Guyana), Skepi (in Essequibo, Guyana)
and Negerhollands (in the United States Virgin Islands). Pennsylvania Dutch is not a member of the set of Dutch dialects
and is less misleadingly called Pennsylvania German. In South Africa, European Dutch remained the literary language until the start
of the 1920s, when under pressure of Afrikaner nationalism the local "African" Dutch was preferred over the written,
European-based standard. In 1925, section 137 of the 1909 constitution of the Union of South Africa was amended by
Act 8 of 1925, stating "the word Dutch in article 137 [...] is hereby declared to include Afrikaans". The constitution
of 1983 only listed English and Afrikaans as official languages. It is estimated that between 90% and 95% of Afrikaans
vocabulary is ultimately of Dutch origin. Both languages are still largely mutually intelligible, although this relation
can in some fields (such as lexicon, spelling and grammar) be asymmetric, as it is easier for Dutch speakers to understand
written Afrikaans than it is for Afrikaans speakers to understand written Dutch. Afrikaans is grammatically far less
complex than Dutch, and vocabulary items are generally altered in a clearly patterned manner, e.g. vogel becomes
voël ("bird") and regen becomes reën ("rain"). In South Africa, the number of students following Dutch at university is difficult to estimate, since the academic study of Afrikaans inevitably includes the study of Dutch. Elsewhere in the world, the number of people learning Dutch is relatively small. Afrikaans is the third language of South Africa in terms of native speakers (~13.5%), of whom 53 percent are Coloureds and 42.4 percent Whites. In 1996, 40 percent of South Africans reported knowing Afrikaans at least at a very basic level of communication. It is the lingua franca
in Namibia, where it is spoken natively in 11 percent of households. In total, Afrikaans is the first language in
South Africa alone of about 6.8 million people and is estimated to be a second language for at least 10 million people
worldwide, compared to over 23 million and 5 million, respectively, for Dutch. Unlike other Germanic languages, Dutch does not have phonological aspiration of consonants. Like English and most other Germanic languages, Dutch did not undergo the High German consonant shift (the second consonant shift) and
has a syllable structure that allows fairly complex consonant clusters. Dutch also retains full use of the velar
fricatives that were present in Proto-Germanic, but lost or modified in many other Germanic languages. Dutch has
final-obstruent devoicing: at the end of a word, voicing distinction is neutralised and all obstruents are pronounced
voiceless. For example, goede ("good") is /ˈɣudə/ but the related form goed is /ɣut/. Dutch shares final-obstruent devoicing with German (Du brood [broːt] and German Brot vs. Eng bread). Voicing of pre-vocalic initial voiceless alveolar fricatives
occurs, although less in Dutch than in German (Du zeven, Germ sieben [z] vs. Eng seven and LG seven [s]), and also
the shift in /θ/ > /d/. Dutch shares only with Low German the development of /xs/ > /ss/ (Du vossen, ossen and LG
Vösse, Ossen vs. Germ Füchse, Ochsen and Eng foxes, oxen), and also the development of /ft/ → /xt/ though it is far
more common in Dutch (Du zacht and LG sacht vs. Germ sanft and Eng soft, but Du kracht vs. LG/Germ kraft and Eng
cognate craft). Vowel length is not always considered a distinctive feature in Dutch phonology, because it normally
co-occurs with changes in vowel quality. One feature or the other may be considered redundant, and some phonemic
analyses prefer to treat it as an opposition of tenseness. However, even if not considered part of the phonemic opposition,
the long/tense vowels are still realised as phonetically longer than their short counterparts. The changes in vowel
quality are also not always the same in all dialects, and in some there may be little difference at all, with length
remaining the primary distinguishing feature. And while it is true that older words always pair vowel length with
a change in vowel quality, new loanwords have reintroduced phonemic oppositions of length. Compare zonne(n) [ˈzɔnə]
("suns") versus zone [ˈzɔːnə] ("zone") versus zonen [ˈzoːnə(n)] ("sons"), or kroes [krus] ("mug") versus cruise [kruːs]
("cruise"). Unique to the development of Dutch is the collapse of older ol/ul/al + dental into ol + dental, followed by vocalisation of pre-consonantal /l/ after a short vowel, creating the diphthong /ɑu/: e.g., Dutch goud, zout and bout correspond with Low German Gold, Solt, Bolt; German Gold, Salz, Balt and English gold, salt, bolt. This
is the most common diphthong along with /ɛi œy/. All three are commonly the only ones considered unique phonemes
in Dutch. The tendency for native English speakers is to pronounce Dutch names with /ɛi/ (written as ij or ei) as
/aɪ/ (like the English vowel y), which does not normally lead to confusion among native listeners, since in a number
of dialects (e.g. in Amsterdam) the same pronunciation is heard. This change is interesting from a sociolinguistic
point of view because it has apparently happened relatively recently, in the 1970s, and was pioneered by older well-educated
women from the upper middle classes. The lowering of the diphthongs has long been current in many Dutch dialects,
and is comparable to the English Great Vowel Shift, and the diphthongisation of long high vowels in Modern High German,
which centuries earlier reached the state now found in Polder Dutch. Stroop theorizes that the lowering of open-mid
to open diphthongs is a phonetically "natural" and inevitable development and that Dutch, after having diphthongised
the long high vowels like German and English, "should" have lowered the diphthongs like German and English as well.
Instead, he argues, this development has been artificially frozen in an "intermediate" state by the standardisation
of Dutch pronunciation in the 16th century, where lowered diphthongs found in rural dialects were perceived as ugly
by the educated classes and accordingly declared substandard. Now, however, in his opinion, the newly affluent and
independent women can afford to let that natural development take place in their speech. Stroop compares the role
of Polder Dutch with the urban variety of British English pronunciation called Estuary English. Standard Dutch uses three genders: masculine, feminine, and neuter. But for most
non-Belgian speakers, the masculine and feminine genders have merged to form the common gender (de), while the neuter
(het) remains distinct as before. This gender system is similar to those of most Continental Scandinavian languages.
As in English, but to a lesser degree, the inflectional grammar of the language (e.g., adjective and noun endings)
has simplified over time. The Dutch written grammar has simplified over the past 100 years: cases are now mainly
used for the pronouns, such as ik (I), mij, me (me), mijn (my), wie (who), wiens (whose: masculine or neuter singular),
wier (whose: feminine singular; masculine, feminine or neuter plural). Nouns and adjectives are not case inflected
(except for the genitive of proper nouns (names): -s, -'s or -'). In the spoken language cases and case inflections
had already gradually disappeared from a much earlier date on (probably the 15th century) as in many continental
West Germanic dialects. More complex inflection is still found in certain lexicalized expressions like de heer des
huizes (literally, the man of the house), etc. These are usually remnants of cases (in this instance, the genitive
case which is still used in German, cf. Der Herr des Hauses) and other inflections no longer in general use today.
In such lexicalized expressions remnants of strong and weak nouns can be found too, e.g. in het jaar des Heren (Anno
Domini), where "-en" is actually the genitive ending of the weak noun. In this case too, German retains the feature, though the genitive is widely avoided in speech. In an interrogative main clause the usual word order is: conjugated
verb followed by subject; other verbs in final position: "Kun jij je pen niet vinden?" (literally "Can you your pen
not find?") "Can't you find your pen?" In the Dutch equivalent of a wh-question the word order is: interrogative
pronoun (or expression) + conjugated verb + subject; other verbs in final position: "Waarom kun jij je pen niet vinden?"
("Why can you your pen not find?") "Why can't you find your pen?" In a tag question the word order is the same as in a declarative clause: "Jij kunt je pen niet vinden?" ("You can your pen not find?") "You can't find your pen?" A subordinate clause does not change its word order: "Kun jij je pen niet vinden omdat het veel te donker is?" ("Can you your pen not find because it far too dark is?") "Can you not find your pen because it's too dark?" In Dutch,
the diminutive is not merely restricted to nouns but also exists in numerals (met z'n tweetjes, "the two of us"), pronouns
(onderonsje, "tête-à-tête"), verbal particles (moetje, "shotgun marriage"), and even prepositions (toetje, "dessert").
Most notable, however, are the diminutive forms of adjectives and adverbs. The former take a diminutive ending and thus function as nouns; the latter remain adverbs and always have the diminutive ending with -s appended, e.g. adjective:
groen ("green") → noun: groentje ("rookie"); adverb: even ("just") → adverb: eventjes ("just a minute"). Some nouns
have two different diminutives, each with a different meaning: bloem (flower) → bloempje (lit. "small flower"), but
bloemetje (lit. also "small flower", meaning bouquet). A few nouns exist solely in a diminutive form, e.g. zeepaardje
(seahorse), while many, e.g. meisje (girl), originally a diminutive of meid (maid), have acquired a meaning independent
of their non-diminutive forms. A diminutive can sometimes be added to an uncountable noun to refer to a single portion:
ijs (ice, ice cream) → ijsje (ice cream treat, cone of ice cream), bier (beer) → biertje. Some diminutive forms only
exist in the plural, e.g. kleertjes (clothing). As in English, Dutch has generalised the dative over the accusative case for all pronouns, e.g. Du me, je, Eng me, you, vs. Germ mich/mir, dich/dir. There is one exception: the standard
language prescribes that in the third person plural, hen is to be used for the direct object, and hun for the indirect
object. This distinction was artificially introduced in the 17th century by grammarians, and is largely ignored in
spoken language and not well understood by Dutch speakers. Consequently, the third person plural forms hun and hen
are interchangeable in normal usage, with hun being more common. The shared unstressed form ze is also often used
as both direct and indirect objects and is a useful avoidance strategy when people are unsure which form to use.
Like most Germanic languages, Dutch forms noun compounds, where the first noun modifies the category given by the
second (hondenhok = doghouse). Unlike English, where newer compounds or combinations of longer nouns are often written
in open form with separating spaces, Dutch (like the other Germanic languages) either uses the closed form without
spaces (boomhuis = tree house) or inserts a hyphen (VVD-coryfee = outstanding member of the VVD, a political party).
Like German, Dutch allows arbitrarily long compounds, but the longer they get, the less frequent they tend to be.
Dutch vocabulary is predominantly Germanic in origin, with loanwords accounting for an additional 20%. The main foreign influence on Dutch vocabulary since the 12th century, culminating in the French period, has been French and (northern) Oïl languages, accounting for an estimated 6.8% of all words, or more than a third of all loanwords. Latin, which was spoken in the south of the Low Countries for centuries and then long played a major role as the language of science and religion, follows with 6.1%. High German and Low German, influential until the mid-19th century, account for 2.7%, but are mostly unrecognizable since many German loanwords have been "Dutchified", e.g. German "Fremdling" became Dutch "vreemdeling". Dutch has borrowed words from English since the middle of the 19th century, as a consequence of the growing power of Britain and the United States. The share of English loanwords is about 1.5%, but this number
is still on the increase. Conversely, Dutch contributed many loanwords to English, accounting for 1.3%. Dutch is
written using the Latin script. Dutch uses one additional character beyond the standard alphabet, the digraph IJ.
It has a relatively high proportion of doubled letters, both vowels and consonants, due to the formation of compound
words and also to the spelling devices for distinguishing the many vowel sounds in the Dutch language. An example
of five consecutive doubled letters is the word voorraaddoos (food storage container). The diaeresis (Dutch: trema)
is used to mark vowels that are pronounced separately when a prefix or suffix is involved, whereas a hyphen is used when the problem occurs in compound words. For example: "beïnvloed" (influenced), but "zee-eend" (sea duck). Generally,
other diacritical marks only occur in loanwords, though the acute accent can also be used for emphasis or to differentiate
between two forms. Its most common use is to differentiate between the indefinite article 'een' (a, an) and the numeral
'één' (one).
Originally known as Buckingham House, the building at the core of today's palace was a large townhouse built for the Duke
of Buckingham in 1703 on a site that had been in private ownership for at least 150 years. It was acquired by King
George III in 1761 as a private residence for Queen Charlotte and became known as "The Queen's House". During the
19th century it was enlarged, principally by architects John Nash and Edward Blore, who constructed three wings around
a central courtyard. Buckingham Palace became the London residence of the British monarch on the accession of Queen
Victoria in 1837. The original early 19th-century interior designs, many of which survive, include widespread use
of brightly coloured scagliola and blue and pink lapis, on the advice of Sir Charles Long. King Edward VII oversaw
a partial redecoration in a Belle Époque cream and gold colour scheme. Many smaller reception rooms are furnished
in the Chinese regency style with furniture and fittings brought from the Royal Pavilion at Brighton and from Carlton
House. The palace has 775 rooms, and the garden is the largest private garden in London. The state rooms, used for
official and state entertaining, are open to the public each year for most of August and September, and on selected
days in winter and spring. In the Middle Ages, the site of the future palace formed part of the Manor of Ebury (also
called Eia). The marshy ground was watered by the river Tyburn, which still flows below the courtyard and south wing
of the palace. Where the river was fordable (at Cow Ford), the village of Eye Cross grew. Ownership of the site changed
hands many times; owners included Edward the Confessor and his queen consort Edith of Wessex in late Saxon times,
and, after the Norman Conquest, William the Conqueror. William gave the site to Geoffrey de Mandeville, who bequeathed
it to the monks of Westminster Abbey. Various owners leased it from royal landlords and the freehold was the subject
of frenzied speculation during the 17th century. By then, the old village of Eye Cross had long since fallen into
decay, and the area was mostly wasteland. Needing money, James I sold off part of the Crown freehold but retained
part of the site on which he established a 4-acre (16,000 m2) mulberry garden for the production of silk. (This is
at the northwest corner of today's palace.) Clement Walker in Anarchia Anglicana (1649) refers to "new-erected sodoms
and spintries at the Mulberry Garden at S. James's"; this suggests it may have been a place of debauchery. Eventually,
in the late 17th century, the freehold was inherited from the property tycoon Sir Hugh Audley by the great heiress
Mary Davies. Possibly the first house erected within the site was that of a Sir William Blake, around 1624. The next
owner was Lord Goring, who from 1633 extended Blake's house and developed much of today's garden, then known as Goring
Great Garden. He did not, however, obtain the freehold interest in the mulberry garden. Unbeknown to Goring, in 1640
the document "failed to pass the Great Seal before King Charles I fled London, which it needed to do for legal execution".
It was this critical omission that helped the British royal family regain the freehold under King George III. The
house which forms the architectural core of the palace was built for the first Duke of Buckingham and Normanby in
1703 to the design of William Winde. The style chosen was of a large, three-floored central block with two smaller
flanking service wings. Buckingham House was eventually sold by Buckingham's descendant, Sir Charles Sheffield, in
1761 to George III for £21,000. Sheffield's leasehold on the mulberry garden site, the freehold of which was still
owned by the royal family, was due to expire in 1774. Remodelling of the structure began in 1762. After his accession
to the throne in 1820, King George IV continued the renovation with the idea of a small, comfortable home in mind.
While the work was in progress, in 1826, the King decided to modify the house into a palace with the help of his
architect John Nash. Some furnishings were transferred from Carlton House, and others had been bought in France after
the French Revolution. The external façade was designed in the French neo-classical style preferred
by George IV. The cost of the renovations grew dramatically, and by 1829 the extravagance of Nash's designs resulted
in his removal as architect. On the death of George IV in 1830, his younger brother King William IV hired Edward
Blore to finish the work. At one stage, William considered converting the palace into the new Houses of Parliament,
after the destruction of the Palace of Westminster by fire in 1834. Buckingham Palace finally became the principal
royal residence in 1837, on the accession of Queen Victoria, who was the first monarch to reside there; her predecessor
William IV had died before its completion. While the state rooms were a riot of gilt and colour, the necessities
of the new palace were somewhat less luxurious. For one thing, it was reported the chimneys smoked so much that the
fires had to be allowed to die down, and consequently the court shivered in icy magnificence. Ventilation was so
bad that the interior smelled, and when a decision was taken to install gas lamps, there was a serious worry about
the build-up of gas on the lower floors. It was also said that staff were lax and lazy and the palace was dirty.
Following the queen's marriage in 1840, her husband, Prince Albert, concerned himself with a reorganisation of the
household offices and staff, and with the design faults of the palace. The problems were all rectified by the close
of 1840. However, the builders were to return within the decade. By 1847, the couple had found the palace too small
for court life and their growing family, and consequently the new wing, designed by Edward Blore, was built by Thomas
Cubitt, enclosing the central quadrangle. The large East Front, facing The Mall, is today the "public face" of Buckingham
Palace, and contains the balcony from which the royal family acknowledge the crowds on momentous occasions and after
the annual Trooping the Colour. The ballroom wing and a further suite of state rooms were also built in this period,
designed by Nash's student Sir James Pennethorne. Before Prince Albert's death, the palace was frequently the scene
of musical entertainments, and the greatest contemporary musicians entertained at Buckingham Palace. The composer
Felix Mendelssohn is known to have played there on three occasions. Johann Strauss II and his orchestra played there
when in England. Strauss's "Alice Polka" was first performed at the palace in 1849 in honour of the queen's daughter,
Princess Alice. Under Victoria, Buckingham Palace was frequently the scene of lavish costume balls, in addition to
the usual royal ceremonies, investitures and presentations. Widowed in 1861, the grief-stricken Queen withdrew from
public life and left Buckingham Palace to live at Windsor Castle, Balmoral Castle and Osborne House. For many years
the palace was seldom used, even neglected. In 1864, a note was found pinned to the fence of Buckingham Palace, saying:
"These commanding premises to be let or sold, in consequence of the late occupant's declining business." Eventually,
public opinion forced the Queen to return to London, though even then she preferred to live elsewhere whenever possible.
Court functions were still held at Windsor Castle, presided over by the sombre Queen habitually dressed in mourning
black, while Buckingham Palace remained shuttered for most of the year. The palace measures 108 metres (354 ft) by
120 metres (390 ft), is 24 metres (79 ft) high and contains over 77,000 m2 (830,000 sq ft) of floorspace. The floor
area is smaller than the Royal Palace of Madrid, the Papal Palace in Rome, the Louvre in Paris, the Hofburg Palace
in Vienna, or the Forbidden City. There are 775 rooms, including 19 state rooms, 52 principal bedrooms, 188 staff
bedrooms, 92 offices, and 78 bathrooms. The principal rooms are contained on the piano nobile behind the west-facing
garden façade at the rear of the palace. The centre of this ornate suite of state rooms is the Music Room, its large
bow the dominant feature of the façade. Flanking the Music Room are the Blue and the White Drawing Rooms. At the
centre of the suite, serving as a corridor to link the state rooms, is the Picture Gallery, which is top-lit and
55 yards (50 m) long. The Gallery is hung with numerous works including some by Rembrandt, van Dyck, Rubens and Vermeer;
other rooms leading from the Picture Gallery are the Throne Room and the Green Drawing Room. The Green Drawing Room
serves as a huge anteroom to the Throne Room, and is part of the ceremonial route to the throne from the Guard Room
at the top of the Grand Staircase. The Guard Room contains white marble statues of Queen Victoria and Prince Albert,
in Roman costume, set in a tribune lined with tapestries. These very formal rooms are used only for ceremonial and
official entertaining, but are open to the public every summer. Directly underneath the State Apartments is a suite
of slightly less grand rooms known as the semi-state apartments. Opening from the Marble Hall, these rooms are used
for less formal entertaining, such as luncheon parties and private audiences. Some of the rooms are named and decorated
for particular visitors, such as the 1844 Room, decorated in that year for the State visit of Tsar Nicholas I of
Russia, and, on the other side of the Bow Room, the 1855 Room, in honour of the visit of Emperor Napoleon III of
France. At the centre of this suite is the Bow Room, through which thousands of guests pass annually to the Queen's
Garden Parties in the Gardens. The Queen and Prince Philip use a smaller suite of rooms in the north wing. Between
1847 and 1850, when Blore was building the new east wing, the Brighton Pavilion was once again plundered of its fittings.
As a result, many of the rooms in the new wing have a distinctly oriental atmosphere. The red and blue Chinese Luncheon
Room is made up from parts of the Brighton Banqueting and Music Rooms with a large oriental chimney piece sculpted
by Richard Westmacott. The Yellow Drawing Room has wallpaper supplied in 1817 for the Brighton Saloon, and a chimney
piece which is a European vision of how the Chinese chimney piece may appear. It has nodding mandarins in niches
and fearsome winged dragons, designed by Robert Jones. At the centre of this wing is the famous balcony with the
Centre Room behind its glass doors. This is a Chinese-style saloon enhanced by Queen Mary, who, working with the
designer Sir Charles Allom, created a more "binding" Chinese theme in the late 1920s, although the lacquer doors
were brought from Brighton in 1873. Running the length of the piano nobile of the east wing is the great gallery,
modestly known as the Principal Corridor, along the eastern side of the quadrangle. It has mirrored
doors, and mirrored cross walls reflecting porcelain pagodas and other oriental furniture from Brighton. The Chinese
Luncheon Room and Yellow Drawing Room are situated at each end of this gallery, with the Centre Room obviously placed
in the centre. When paying a state visit to Britain, foreign heads of state are usually entertained by the Queen
at Buckingham Palace. They are allocated a large suite of rooms known as the Belgian Suite, situated at the foot
of the Minister's Staircase, on the ground floor of the north-facing Garden Wing. The rooms of the suite are linked
by narrow corridors; one of them is given extra height and perspective by saucer domes designed by Nash in the style
of Soane. A second corridor in the suite has Gothic-influenced crossover vaulting. The Belgian Rooms themselves
were decorated in their present style and named after Prince Albert's uncle Léopold I, first King of the Belgians.
In 1936, the suite briefly became the private apartments of the palace when they were occupied by King Edward VIII.
Thus, Buckingham Palace is a symbol and home of the British monarchy, an art gallery and a tourist attraction. Behind
the gilded railings and gates which were completed by the Bromsgrove Guild in 1911 and Webb's famous façade, which
has been described in a book published by the Royal Collection as looking "like everybody's idea of a palace", is
not only a weekday home of the Queen and Prince Philip but also the London residence of the Duke of York and the
Earl and Countess of Wessex. The palace also houses the offices of the Queen, Prince Philip, Duke of York, Earl and
Countess of Wessex, Princess Royal, and Princess Alexandra, and is the workplace of more than 800 people. The palace,
like Windsor Castle, is owned by the Crown Estate. It is not the monarch's personal property, unlike Sandringham
House and Balmoral Castle. Many of the contents from Buckingham Palace, Windsor Castle, Kensington Palace, and St
James's Palace are part of the Royal Collection, held in trust by the Sovereign; they can, on occasion, be viewed
by the public at the Queen's Gallery, near the Royal Mews. Unlike the palace and the castle, the purpose-built gallery
is open continually and displays a changing selection of items from the collection. It occupies the site of the chapel
destroyed by an air raid in World War II. The palace's state rooms have been open to the public during August and
September and on selected dates throughout the year since 1993. The money raised in entry fees was originally put
towards the rebuilding of Windsor Castle after the 1992 fire devastated many of its state rooms. 476,000 people visited
the palace in the 2014–15 financial year. Formerly, men not wearing military uniform wore knee breeches of an 18th-century
design. Women's evening dress included obligatory trains and tiaras or feathers in their hair (or both). The dress
code governing formal court uniform and dress has progressively relaxed. After World War I, when Queen Mary wished
to follow fashion by raising her skirts a few inches from the ground, she requested a lady-in-waiting to shorten
her own skirt first to gauge the king's reaction. King George V was horrified, so the queen kept her hemline unfashionably
low. Following their accession in 1936, King George VI and his consort, Queen Elizabeth, allowed the hemline of daytime
skirts to rise. Today, there is no official dress code. Most men invited to Buckingham Palace in the daytime choose
to wear service uniform or lounge suits; a minority wear morning coats, and in the evening, depending on the formality
of the occasion, black tie or white tie. Court presentations of aristocratic young ladies to the monarch took place
at the palace from the reign of Edward VII. These young women were known as débutantes, and the occasion—termed their
"coming out"—represented their first entrée into society. Débutantes wore full court dress, with three tall ostrich
feathers in their hair. They entered, curtsied, and performed a choreographed backwards walk and a further curtsy,
while manoeuvring a dress train of prescribed length. (The ceremony, known as an evening court, corresponded to the
"court drawing rooms" of Victoria's reign.) After World War II, the ceremony was replaced by less formal afternoon
receptions, usually without choreographed curtsies and court dress. Investitures, which include the conferring of
knighthoods by dubbing with a sword, and other awards take place in the palace's Ballroom, built in 1854. At 36.6
m (120 ft) long, 18 m (59 ft) wide and 13.5 m (44 ft) high, it is the largest room in the palace. It has replaced
the throne room in importance and use. During investitures, the Queen stands on the throne dais beneath a giant,
domed velvet canopy, known as a shamiana or a baldachin, that was used at the Delhi Durbar in 1911. A military band
plays in the musicians' gallery as award recipients approach the Queen and receive their honours, watched by their
families and friends. State banquets also take place in the Ballroom; these formal dinners are held on the first
evening of a state visit by a foreign head of state. On these occasions, for up to 170 guests in formal "white tie
and decorations", including tiaras, the dining table is laid with the Grand Service, a collection of silver-gilt
plate made in 1811 for the Prince of Wales, later George IV. The largest and most formal reception at Buckingham
Palace takes place every November when the Queen entertains members of the diplomatic corps. On this grand occasion,
all the state rooms are in use, as the royal family proceed through them, beginning at the great north doors of the
Picture Gallery. As Nash had envisaged, all the large, double-mirrored doors stand open, reflecting the numerous
crystal chandeliers and sconces, creating a deliberate optical illusion of space and light. Adjacent to the palace
is the Royal Mews, also designed by Nash, where the royal carriages, including the Gold State Coach, are housed.
This rococo gilt coach, designed by Sir William Chambers in 1760, has painted panels by G. B. Cipriani. It was first
used for the State Opening of Parliament by George III in 1762 and has been used by the monarch for every coronation
since George IV. It was last used for the Golden Jubilee of Elizabeth II. Also housed in the mews are the coach horses
used at royal ceremonial processions. In 1901 the accession of Edward VII saw new life breathed into the palace.
The new King and his wife Queen Alexandra had always been at the forefront of London high society, and their friends,
known as "the Marlborough House Set", were considered to be the most eminent and fashionable of the age. Buckingham
Palace—the Ballroom, Grand Entrance, Marble Hall, Grand Staircase, vestibules and galleries redecorated in the Belle
Époque cream and gold colour scheme they retain today—once again became a setting for entertaining on a majestic
scale, though some felt that King Edward's heavy redecorations were at odds with Nash's original work. The last
major building work took place during the reign of King George V when, in 1913, Sir Aston Webb redesigned Blore's
1850 East Front to resemble in part Giacomo Leoni's Lyme Park in Cheshire. This new, refaced principal façade (of
Portland stone) was designed to be the backdrop to the Victoria Memorial, a large memorial statue of Queen Victoria,
placed outside the main gates. George V, who had succeeded Edward VII in 1910, had a more serious personality than
his father; greater emphasis was now placed on official entertaining and royal duties than on lavish parties. He
arranged a series of command performances featuring jazz musicians such as the Original Dixieland Jazz Band (1919)
– the first jazz performance for a head of state – Sidney Bechet, and Louis Armstrong (1932), which earned the palace
a nomination in 2009 for a (Kind of) Blue Plaque by the Brecon Jazz Festival as one of the venues making the greatest
contribution to jazz music in the United Kingdom. George V's wife Queen Mary was a connoisseur of the arts, and took
a keen interest in the Royal Collection of furniture and art, both restoring and adding to it. Queen Mary also had
many new fixtures and fittings installed, such as the pair of marble Empire-style chimneypieces by Benjamin Vulliamy,
dating from 1810, which the Queen had installed in the ground floor Bow Room, the huge low room at the centre of
the garden façade. Queen Mary was also responsible for the decoration of the Blue Drawing Room. This room, 69 feet
(21 metres) long, previously known as the South Drawing Room, has a ceiling designed specially by Nash, coffered
with huge gilt console brackets. During World War I, the palace, then the home of King George V and Queen Mary, escaped
unscathed. Its more valuable contents were evacuated to Windsor but the royal family remained in situ. The King imposed
rationing at the palace, much to the dismay of his guests and household. To the King's later regret, David Lloyd
George persuaded him to go further by ostentatiously locking the wine cellars and refraining from alcohol, to set
a good example to the supposedly inebriated working class. The workers continued to imbibe and the King was left
unhappy at his enforced abstinence. In 1938, the north-west pavilion, designed by Nash as a conservatory, was converted
into a swimming pool. During World War II, the palace was bombed nine times, the most serious and publicised of which
resulted in the destruction of the palace chapel in 1940. Coverage of this event was played in cinemas all over the
UK to show the common suffering of rich and poor. One bomb fell in the palace quadrangle while King George VI and
Queen Elizabeth were in residence, and many windows were blown in and the chapel destroyed. War-time coverage of
such incidents was severely restricted, however. The King and Queen were filmed inspecting their bombed home, the
smiling Queen, as always immaculately dressed in a hat and matching coat, seemingly unbothered by the damage around
her. It was at this time the Queen famously declared: "I'm glad we have been bombed. Now I can look the East End
in the face". The royal family were seen as sharing their subjects' hardship, as The Sunday Graphic reported. On
15 September 1940, known as Battle of Britain Day, an RAF pilot, Ray Holmes of No. 504 Squadron RAF, rammed a
German bomber he believed was going to bomb the Palace. Holmes had run out of ammunition and made the quick decision
to ram it, then bailed out; both aircraft crashed. In fact, the Dornier Do 17 bomber was empty. It had already been
damaged, two of its crew had been killed and the remainder bailed out. Its pilot, Feldwebel Robert Zehbe, landed,
only to die later of wounds suffered during the attack. During the Dornier's descent, it somehow unloaded its bombs,
one of which hit the Palace. It then crashed into the forecourt of London Victoria station. The bomber's engine was
later exhibited at the Imperial War Museum in London. The British pilot became a King's Messenger after the war,
and died at the age of 90 in 2005.
An incandescent light bulb, incandescent lamp or incandescent light globe is an electric light with a wire filament heated
to a high temperature, by passing an electric current through it, until it glows with visible light (incandescence).
The hot filament is protected from oxidation with a glass or quartz bulb that is filled with inert gas or evacuated.
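The physics of incandescence can be sketched numerically. The short Python calculation below is only a sketch: the 2700 K filament temperature is a typical household value assumed here (not stated in this article), while Wien's displacement law (peak wavelength = b/T, with b ≈ 2.898×10⁻³ m·K) and the 683 lm/W maximum efficacy are standard physical constants. It shows why a filament radiates mostly infrared, which underlies the low efficiency figures quoted later in this section.

```python
# Sketch: why incandescent bulbs are inefficient, via Wien's displacement law.
# Assumptions: a typical household filament runs near 2700 K; visible light
# spans roughly 380-750 nm; 683 lm/W is the maximum possible luminous efficacy.

WIEN_B = 2.898e-3          # Wien's displacement constant, in m*K
MAX_EFFICACY = 683.0       # lm/W, monochromatic green light at 555 nm

def peak_wavelength_nm(temp_kelvin: float) -> float:
    """Peak of the blackbody spectrum at the given temperature, in nanometres."""
    return WIEN_B / temp_kelvin * 1e9

filament = peak_wavelength_nm(2700)   # ~1073 nm: well into the infrared
ideal = peak_wavelength_nm(6600)      # ~439 nm: visible, near the ideal radiator temperature
print(f"2700 K filament peaks at {filament:.0f} nm (visible is ~380-750 nm)")
print(f"6600 K ideal radiator peaks at {ideal:.0f} nm")

# Overall luminous efficiency of a 16 lm/W bulb relative to the 683 lm/W cap:
print(f"16 lm/W is {16 / MAX_EFFICACY:.1%} of the theoretical maximum")
```

The 16/683 ratio (about 2.3%) is consistent with the roughly 2% visible-light conversion figure cited for standard bulbs.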
In a halogen lamp, filament evaporation is prevented by a chemical process that redeposits metal vapor onto the filament,
extending its life. The light bulb is supplied with electric current by feed-through terminals or wires embedded
in the glass. Most bulbs are used in a socket which provides mechanical support and electrical connections. Incandescent
bulbs are much less efficient than most other types of electric lighting; incandescent bulbs convert less than 5%
of the energy they use into visible light, with standard light bulbs averaging about 2.2%. The remaining energy is
converted into heat. The luminous efficacy of a typical incandescent bulb is 16 lumens per watt, compared with 60
lm/W for a compact fluorescent bulb or 150 lm/W for some white LED lamps. Some applications of the incandescent bulb
deliberately use the heat generated by the filament. Such applications include incubators, brooding boxes for poultry,
heat lights for reptile tanks, infrared heating for industrial heating and drying processes, lava lamps, and the
Easy-Bake Oven toy. Incandescent bulbs typically have short lifetimes compared with other types of lighting; around
1,000 hours for home light bulbs versus typically 10,000 hours for compact fluorescents and 30,000 hours for lighting
LEDs. Incandescent bulbs have been replaced in many applications by other types of electric light, such as fluorescent
lamps, compact fluorescent lamps (CFL), cold cathode fluorescent lamps (CCFL), high-intensity discharge lamps, and
light-emitting diode lamps (LED). Some jurisdictions, such as the European Union, China, Canada and the United States,
are in the process of phasing out the use of incandescent light bulbs while others, including Colombia, Mexico, Cuba,
Argentina, Brazil and Australia, have prohibited them already. In 1872, Russian Alexander Lodygin invented an incandescent
light bulb and obtained a Russian patent in 1874. He used as a burner two carbon rods of diminished section in a
glass receiver, hermetically sealed, and filled with nitrogen, electrically arranged so that the current could be
passed to the second carbon when the first had been consumed. Later he lived in the USA, changed his name to Alexander
de Lodyguine and applied for and obtained patents for incandescent lamps having chromium, iridium, rhodium, ruthenium,
osmium, molybdenum and tungsten filaments, and a bulb using a molybdenum filament was demonstrated at the world fair
of 1900 in Paris. With the help of Charles Stearn, an expert on vacuum pumps, in 1878, Joseph Swan developed a method of
processing that avoided the early bulb blackening. This received a British Patent in 1880. On
18 December 1878, a lamp using a slender carbon rod was shown at a meeting of the Newcastle Chemical Society, and
Swan gave a working demonstration at their meeting on 17 January 1879. It was also shown to 700 people who attended a meeting
of the Literary and Philosophical Society of Newcastle upon Tyne on 3 February 1879. These lamps used a carbon rod
from an arc lamp rather than a slender filament. Thus they had low resistance and required very large conductors
to supply the necessary current, so they were not commercially practical, although they did furnish a demonstration
of the possibilities of incandescent lighting with relatively high vacuum, a carbon conductor, and platinum lead-in
wires. Besides requiring too much current for a central station electric system to be practical, they had a very
short lifetime. Swan turned his attention to producing a better carbon filament and the means of attaching its ends.
He devised a method of treating cotton to produce 'parchmentised thread' and obtained British Patent 4933 in 1880.
From that year he began installing light bulbs in homes and landmarks in England. His house was the first in the
world to be lit by a lightbulb, and also the first to be lit by hydroelectric power. In 1878 the
home of Lord Armstrong at Cragside was also among the first houses to be lit by electricity. In the early 1880s he
had started his company. In 1881, the Savoy Theatre in the City of Westminster, London was lit by Swan incandescent
lightbulbs, which was the first theatre, and the first public building in the world, to be lit entirely by electricity.
Thomas Edison began serious research into developing a practical incandescent lamp in 1878. Edison filed his first
patent application for "Improvement In Electric Lights" on 14 October 1878. After many experiments, first with carbon
in the early 1880s and then with platinum and other metals, in the end Edison returned to a carbon filament. The
first successful test was on 22 October 1879, and lasted 13.5 hours. Edison continued to improve this design and
by 4 November 1879, filed for a US patent for an electric lamp using "a carbon filament or strip coiled and connected
... to platina contact wires." Although the patent described several ways of creating the carbon filament including
using "cotton and linen thread, wood splints, papers coiled in various ways," Edison and his team later discovered
that a carbonized bamboo filament could last more than 1200 hours. In 1880, the Oregon Railroad and Navigation Company
steamer, Columbia, became the first application for Edison's incandescent electric lamps (it was also the first ship
to make use of a dynamo). Albon Man, a New York lawyer, started the Electro-Dynamic Light Company in 1878 to exploit
his patents and those of William Sawyer. Weeks later the United States Electric Lighting Company was organized. This
company did not make its first commercial installation of incandescent lamps until the fall of 1880, at the Mercantile
Safe Deposit Company in New York City, about six months after the Edison incandescent lamps had been installed on
the Columbia. Hiram S. Maxim was the chief engineer at the United States Electric Lighting Company. Lewis Latimer,
employed at the time by Edison, developed an improved method of heat-treating carbon filaments which reduced breakage
and allowed them to be molded into novel shapes, such as the characteristic "M" shape of Maxim filaments. On 17 January
1882, Latimer received a patent for the "Process of Manufacturing Carbons", an improved method for the production
of light bulb filaments, which was purchased by the United States Electric Light Company. Latimer patented other
improvements such as a better way of attaching filaments to their wire supports. On 13 December 1904, Hungarian Sándor
Just and Croatian Franjo Hanaman were granted a Hungarian patent (No. 34541) for a tungsten filament lamp that lasted
longer and gave brighter light than the carbon filament. Tungsten filament lamps were first marketed by the Hungarian
company Tungsram in 1904. These are often called Tungsram bulbs in many European countries. Filling a bulb with
an inert gas such as argon or nitrogen retards the evaporation of the tungsten filament compared to operating it
in a vacuum. This allows for greater temperatures and therefore greater efficacy with less reduction in filament
life. Luminous efficacy of a light source may be defined in two ways. The radiant luminous efficacy (LER) is the
ratio of the visible light flux emitted (the luminous flux) to the total power radiated over all wavelengths. The
source luminous efficacy (LES) is the ratio of the visible light flux emitted (the luminous flux) to the total power
input to the source, such as a lamp. Visible light is measured in lumens, a unit which is defined in part by the
differing sensitivity of the human eye to different wavelengths of light. Not all wavelengths of visible electromagnetic
energy are equally effective at stimulating the human eye; the luminous efficacy of radiant energy (LER) is a measure
of how well the distribution of energy matches the perception of the eye. The units of luminous efficacy are "lumens
per watt" (lpw). The maximum LER possible is 683 lm/W for monochromatic green light at 555 nanometers wavelength,
the peak sensitivity of the human eye. The spectrum emitted by a blackbody radiator at temperatures of incandescent
bulbs does not match the sensitivity characteristics of the human eye; the light emitted does not appear white, and
most is not in the range of wavelengths at which the eye is most sensitive. Tungsten filaments radiate mostly infrared
radiation at temperatures where they remain solid – below 3,695 K (3,422 °C; 6,191 °F). Donald L. Klipstein explains
it this way: "An ideal thermal radiator produces visible light most efficiently at temperatures around 6,300 °C (6,600
K; 11,400 °F). Even at this high temperature, a lot of the radiation is either infrared or ultraviolet, and the theoretical
luminous efficacy (LER) is 95 lumens per watt." No known material can be used as a filament at this ideal temperature,
which is hotter than the sun's surface. An upper limit for incandescent lamp luminous efficacy (LER) is around 52
lumens per watt, the theoretical value emitted by tungsten at its melting point. Although inefficient, incandescent
light bulbs have an advantage in applications where accurate color reproduction is important, since the continuous
blackbody spectrum emitted from an incandescent light-bulb filament yields near-perfect color rendition, with a color
rendering index of 100 (the best possible). White-balancing is still required to avoid too "warm" or "cool" colors,
but for modern digital visual reproduction equipment such as video or still cameras this is a simple process, requiring
only the color temperature in kelvins as input, unless it is completely automated. The color-rendering performance
of incandescent lights cannot be matched by LEDs or fluorescent lights, although they can offer satisfactory performance
for non-critical applications such as home lighting. White-balancing such lights is therefore more complicated, requiring
additional adjustments to reduce, for example, green-magenta color casts, and even when properly white-balanced, the
color reproduction will not be perfect. There are many non-incandescent light sources, such as the fluorescent lamp,
high-intensity discharge lamps and LED lamps, which have higher luminous efficiency, and some have been designed
to be retrofitted in fixtures for incandescent lights. These devices produce light by luminescence. These lamps produce
discrete spectral lines and do not have the broad "tail" of invisible infrared emissions. By careful selection of
which electron energy level transitions are used, and fluorescent coatings which modify the spectral distribution,
the spectrum emitted can be tuned to mimic the appearance of incandescent sources, or other different color temperatures
of white light. Due to the discrete spectral lines rather than a continuous spectrum, the light is not ideal for
applications such as photography and cinematography. The initial cost of an incandescent bulb is small compared to
the cost of the energy it uses over its lifetime. Incandescent bulbs have a shorter life than most other lighting,
an important factor if replacement is inconvenient or expensive. Some types of lamp, including incandescent and fluorescent,
emit less light as they age; this may be an inconvenience, or may reduce effective lifetime due to lamp replacement
before total failure. A comparison of incandescent lamp operating cost with other light sources must include illumination
requirements, cost of the lamp and labor cost to replace lamps (taking into account effective lamp lifetime), cost
of electricity used, effect of lamp operation on heating and air conditioning systems. When used for lighting in
houses and commercial buildings, the energy lost to heat can significantly increase the energy required by a building's
air conditioning system. During the heating season heat produced by the bulbs is not wasted, although in most cases
it is more cost effective to obtain heat from the heating system. Regardless, over the course of a year a more efficient
lighting system saves energy in nearly all climates. Since incandescent light bulbs use more energy than alternatives
such as CFLs and LED lamps, many governments have introduced measures to ban their use, by setting minimum efficacy
standards higher than can be achieved by incandescent lamps. Measures to ban light bulbs have been implemented in
the European Union, the United States, Russia, Brazil, Argentina, Canada and Australia, among others. In Europe, the European Commission has calculated that the ban contributes 5 to 10 billion euros to the economy and saves 40 TWh of electricity every year, translating into CO2 emission reductions of 15 million tonnes. Objections to banning the use of incandescent
light bulbs include the higher initial cost of alternatives and lower quality of light of fluorescent lamps. Some
people have concerns about the health effects of fluorescent lamps. However, even though they contain mercury, the environmental performance of CFLs is much better than that of incandescent bulbs, mostly because they consume much less
energy and therefore strongly reduce the environmental impact of power production. LED lamps are even more efficient,
and are free of mercury. They are regarded as the best solution in terms of cost effectiveness and robustness. Prompted
by legislation in various countries mandating increased bulb efficiency, new "hybrid" incandescent bulbs have been
introduced by Philips. The "Halogena Energy Saver" incandescents can produce about 23 lm/W, about 30 percent more
efficient than traditional incandescents, by using a reflective capsule to reflect formerly wasted infrared radiation
back to the filament from which it can be re-emitted as visible light. This concept was pioneered by Duro-Test in
1980 with a commercial product that produced 29.8 lm/W. More advanced reflectors based on interference filters or
photonic crystals can theoretically result in higher efficiency, up to a limit of about 270 lm/W (40% of the maximum
efficacy possible). Laboratory proof-of-concept experiments have produced as much as 45 lm/W, approaching the efficacy
of compact fluorescent bulbs. Incandescent light bulbs consist of an air-tight glass enclosure (the envelope, or
bulb) with a filament of tungsten wire inside the bulb, through which an electric current is passed. Contact wires
and a base with two (or more) conductors provide electrical connections to the filament. Incandescent light bulbs
usually contain a stem or glass mount anchored to the bulb's base that allows the electrical contacts to run through
the envelope without air or gas leaks. Small wires embedded in the stem in turn support the filament and its lead
wires. Most light bulbs have either clear or coated glass. The coated glass bulbs have a white powdery substance
on the inside called kaolin. Kaolin, or kaolinite, is a white, chalky clay in a very fine powder form, that is blown
in and electrostatically deposited on the interior of the bulb. It diffuses the light emitted from the filament,
producing a more gentle and evenly distributed light. Manufacturers may add pigments to the kaolin to adjust the
characteristics of the final light emitted from the bulb. Kaolin diffused bulbs are used extensively in interior
lighting because of their comparatively gentle light. Other kinds of colored bulbs are also made, including the various
colors used for "party bulbs", Christmas tree lights and other decorative lighting. These are created by coloring
the glass with a dopant, which is often a metal like cobalt (blue) or chromium (green). Neodymium-containing glass
is sometimes used to provide a more natural-appearing light. Many arrangements of electrical contacts are used. Large
lamps may have a screw base (one or more contacts at the tip, one at the shell) or a bayonet base (one or more contacts
on the base, shell used as a contact or used only as a mechanical support). Some tubular lamps have an electrical
contact at either end. Miniature lamps may have a wedge base and wire contacts, and some automotive and special purpose
lamps have screw terminals for connection to wires. Contacts in the lamp socket allow the electric current to pass
through the base to the filament. Power ratings for incandescent light bulbs range from about 0.1 watt to about 10,000
watts. The role of the gas is to prevent evaporation of the filament, without introducing significant heat losses.
For these properties, chemical inertness and high atomic or molecular weight are desirable. The presence of gas molecules knocks the liberated tungsten atoms back to the filament, reducing its evaporation and allowing it to be operated at a higher temperature without reducing its life (or, at the same temperature, prolonging the filament's life). However, the gas also introduces heat losses (and therefore efficiency loss) from the filament, by heat conduction and
heat convection. In manufacturing the glass bulb, a type of "ribbon machine" is used. A continuous ribbon of glass
is passed along a conveyor belt, heated in a furnace, and then blown by precisely aligned air nozzles through holes in the conveyor belt into molds, forming the glass bulbs. After the bulbs are blown and cooled, they are cut off the ribbon; a typical machine of this sort produces 50,000 bulbs per hour. The filament and its supports
are assembled on a glass stem, which is fused to the bulb. The air is pumped out of the bulb, and the evacuation
tube in the stem press is sealed by a flame. The bulb is then inserted into the lamp base, and the whole assembly
tested. The first successful light bulb filaments were made of carbon (from carbonized paper or bamboo). Early carbon
filaments had a negative temperature coefficient of resistance — as they got hotter, their electrical resistance
decreased. This made the lamp sensitive to fluctuations in the power supply, since a small increase of voltage would
cause the filament to heat up, reducing its resistance and causing it to draw even more power and heat even further.
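This feedback can be sketched numerically. In the toy model below, a filament's resistance varies linearly with temperature and the temperature settles in proportion to the power drawn; every constant is hypothetical, chosen only to show why a negative coefficient runs away at fixed voltage while a positive one (as in metal filaments, discussed later) self-stabilises.

```python
# Sketch of why a negative temperature coefficient (NTC) of resistance makes
# a constant-voltage lamp unstable. All constants are hypothetical, chosen
# only to illustrate the feedback loop, not measured filament data.

def simulate(alpha, steps=6):
    """Iterate a crude electro-thermal balance; returns power drawn per step."""
    V, R0, T_amb, k_loss = 100.0, 200.0, 300.0, 0.05  # hypothetical values
    T = T_amb
    powers = []
    for _ in range(steps):
        R = R0 * (1 + alpha * (T - T_amb))   # linear resistance model
        if R < 1.0:                          # resistance has collapsed: runaway
            powers.append(float("inf"))
            break
        P = V * V / R                        # power drawn at fixed voltage
        T = T_amb + P / k_loss               # crude steady temperature for P
        powers.append(P)
    return powers

ntc = simulate(alpha=-5e-4)  # carbon-like: hotter -> lower R -> more power
ptc = simulate(alpha=+5e-4)  # metal-like: hotter -> higher R -> self-limiting

print("NTC power per step:", ntc)  # grows without bound (runaway)
print("PTC power per step:", ptc)  # settles toward a stable operating point
```

With the negative coefficient the power drawn climbs each step until the model breaks down, while the positive coefficient converges to a steady operating point, which is the stabilising behaviour metallised filaments later provided.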
In the "flashing" process, carbon filaments were heated by current passing through them while in an evacuated vessel
containing hydrocarbon vapor (usually gasoline). The carbon deposited on the filament by this treatment improved
the uniformity and strength of filaments as well as their efficiency. A metallized or "graphitized" filament was
first heated in a high-temperature oven before flashing and lamp assembly. This transformed the carbon into graphite
which further strengthened and smoothed the filament. This also changed the filament to have a positive temperature
coefficient, like a metallic conductor, and helped stabilize the lamp's power consumption, temperature and light
output against minor variations in supply voltage. In 1902, the Siemens company developed a tantalum lamp filament.
These lamps were more efficient than even graphitized carbon filaments and could operate at higher temperatures.
Since tantalum metal has a lower resistivity than carbon, the tantalum lamp filament was quite long and required
multiple internal supports. The metal filament had the property of gradually shortening in use; the filaments were
installed with large loops that tightened in use. This made lamps in use for several hundred hours quite fragile.
Metal filaments had the property of breaking and re-welding, though this would usually decrease resistance and shorten
the life of the filament. General Electric bought the rights to use tantalum filaments and produced them in the US
until 1913. In 1906, the tungsten filament was introduced. Tungsten metal was initially not available in a form that
allowed it to be drawn into fine wires. Filaments made from sintered tungsten powder were quite fragile. By 1910,
a process was developed by William D. Coolidge at General Electric for production of a ductile form of tungsten.
The process required pressing tungsten powder into bars, then several steps of sintering, swaging, and then wire
drawing. It was found that very pure tungsten formed filaments that sagged in use, and that a very small "doping"
treatment with potassium, silicon, and aluminium oxides at the level of a few hundred parts per million greatly improved
the life and durability of the tungsten filaments. To improve the efficiency of the lamp, the filament usually consists
of multiple coils of coiled fine wire, also known as a 'coiled coil'. For a 60-watt 120-volt lamp, the uncoiled length
of the tungsten filament is usually 22.8 inches (580 mm), and the filament diameter is 0.0018 inches (0.046 mm).
The advantage of the coiled coil is that evaporation of the tungsten filament is at the rate of a tungsten cylinder
having a diameter equal to that of the coiled coil. The coiled-coil filament evaporates more slowly than a straight
filament of the same surface area and light-emitting power. As a result, the filament can then run hotter, which
results in a more efficient light source, while reducing the evaporation so that the filament will last longer than
a straight filament at the same temperature. One of the problems of the standard electric light bulb is filament
notching due to evaporation of the filament. Small variations in resistivity along the filament cause "hot spots"
to form at points of higher resistivity; a variation of diameter of only 1% will cause a 25% reduction in service
life. These hot spots evaporate faster than the rest of the filament, which increases the resistance at that point—this
creates a positive feedback that ends in the familiar tiny gap in an otherwise healthy-looking filament. Irving Langmuir
found that an inert gas, instead of vacuum, would retard evaporation. General service incandescent light bulbs over
about 25 watts in rating are now filled with a mixture of mostly argon and some nitrogen, or sometimes krypton. Lamps
operated on direct current develop random stairstep irregularities on the filament surface which may cut lifespan
in half compared to AC operation; different alloys of tungsten and rhenium can be used to counteract the effect.
While inert gas reduces filament evaporation, it also conducts heat from the filament, thereby cooling the filament
and reducing efficiency. At constant pressure and temperature, the thermal conductivity of a gas depends upon the
molecular weight of the gas and the cross-sectional area of the gas molecules. Gases of higher molecular weight have lower thermal conductivity, because both their molecular weight and their cross-sectional area are higher.
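The trend is visible in rounded handbook figures for the common fill gases; the conductivity values below are approximate room-temperature numbers, quoted only to illustrate the ordering.

```python
# Approximate properties of common lamp fill gases near room temperature.
# Molar masses are standard; conductivities are rounded handbook values.
fill_gases = {
    # name: (molar mass in g/mol, thermal conductivity in W/(m*K))
    "nitrogen": (28.0, 0.026),
    "argon":    (40.0, 0.018),
    "krypton":  (83.8, 0.0095),
    "xenon":    (131.3, 0.0055),
}

ordered = sorted(fill_gases.items(), key=lambda kv: kv[1][0])  # by molar mass
conductivities = [props[1] for _, props in ordered]

# Heavier fill gas -> lower conductive heat loss from the filament, which is
# why krypton and xenon fills improve efficiency (at higher cost).
assert conductivities == sorted(conductivities, reverse=True)
for name, (molar_mass, k) in ordered:
    print(f"{name:8s}  M = {molar_mass:6.1f} g/mol   k ~ {k:.4f} W/(m*K)")
```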
Xenon gas improves efficiency because of its high molecular weight, but is also more expensive, so its use is limited
to smaller lamps. During ordinary operation, the tungsten of the filament evaporates; hotter, more-efficient filaments
evaporate faster. Because of this, the lifetime of a filament lamp is a trade-off between efficiency and longevity.
The trade-off is typically set to provide a lifetime of several hundred to 2,000 hours for lamps used for general
illumination. Theatrical, photographic, and projection lamps may have a useful life of only a few hours, trading
life expectancy for high output in a compact form. Long-life general service lamps have lower efficiency but are
used where the cost of changing the lamp is high compared to the value of energy used. In a conventional lamp, the
evaporated tungsten eventually condenses on the inner surface of the glass envelope, darkening it. For bulbs that
contain a vacuum, the darkening is uniform across the entire surface of the envelope. When a filling of inert gas
is used, the evaporated tungsten is carried in the thermal convection currents of the gas, depositing preferentially
on the uppermost part of the envelope and blackening just that portion of the envelope. An incandescent lamp that
gives 93% or less of its initial light output at 75% of its rated life is regarded as unsatisfactory, when tested
according to IEC Publication 60064. Light loss is due to filament evaporation and bulb blackening. Study of the problem
of bulb blackening led to the discovery of the Edison effect, thermionic emission and invention of the vacuum tube.
A very small amount of water vapor inside a light bulb can significantly affect lamp darkening. Water vapor dissociates
into hydrogen and oxygen at the hot filament. The oxygen attacks the tungsten metal, and the resulting tungsten oxide
particles travel to cooler parts of the lamp. Hydrogen from water vapor reduces the oxide, reforming water vapor
and continuing this water cycle. The equivalent of a drop of water distributed over 500,000 lamps will significantly
increase darkening. Small amounts of substances such as zirconium are placed within the lamp as a getter to react
with any oxygen that may bake out of the lamp components during operation. The halogen lamp reduces uneven evaporation
of the filament and eliminates darkening of the envelope by filling the lamp with a halogen gas at low pressure,
rather than an inert gas. The halogen cycle increases the lifetime of the bulb and prevents its darkening by redepositing
tungsten from the inside of the bulb back onto the filament. The halogen lamp can operate its filament at a higher
temperature than a standard gas filled lamp of similar power without loss of operating life. Such bulbs are much
smaller than normal incandescent bulbs, and are widely used where intense illumination is needed in a limited space.
Fiber-optic lamps for optical microscopy are one typical application. A variation of the incandescent lamp did not
use a hot wire filament, but instead used an arc struck on a spherical bead electrode to produce heat. The electrode
then became incandescent, with the arc contributing little to the light produced. Such lamps were used for projection
or illumination for scientific instruments such as microscopes. These arc lamps ran on relatively low voltages and
incorporated tungsten filaments to start ionization within the envelope. They provided the intense concentrated light
of an arc lamp but were easier to operate. Developed around 1915, these lamps were displaced by mercury and xenon
arc lamps. Incandescent lamps are nearly pure resistive loads with a power factor of 1. This means the actual power
consumed (in watts) and the apparent power (in volt-amperes) are equal. Incandescent light bulbs are usually marketed
according to the electrical power consumed. This is measured in watts and depends mainly on the resistance of the
filament, which in turn depends mainly on the filament's length, thickness, and material. For two bulbs of the same
voltage, type, color, and clarity, the higher-powered bulb gives more light. The actual resistance of the filament
is temperature dependent. The cold resistance of tungsten-filament lamps is about 1/15 the hot-filament resistance
when the lamp is operating. For example, a 100-watt, 120-volt lamp has a resistance of 144 ohms when lit, but the
cold resistance is much lower (about 9.5 ohms). Since incandescent lamps are resistive loads, simple phase-control
TRIAC dimmers can be used to control brightness. Electrical contacts may carry a "T" rating symbol indicating that
they are designed to control circuits with the high inrush current characteristic of tungsten lamps. For a 100-watt,
120-volt general-service lamp, the current stabilizes in about 0.10 seconds, and the lamp reaches 90% of its full
brightness after about 0.13 seconds. Incandescent light bulbs come in a range of shapes and sizes. The names of the
shapes may be slightly different in some regions. Many of these shapes have a designation consisting of one or more
letters followed by one or more numbers, e.g. A55 or PAR38. The letters represent the shape of the bulb. The numbers
represent the maximum diameter, either in 1⁄8 of an inch, or in millimeters, depending on the shape and the region.
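A small helper can decode the US convention (shape letters followed by the maximum diameter in eighths of an inch); this parser is an illustrative sketch only and ignores the metric convention used in other regions.

```python
import re

def decode_us_bulb_code(code: str):
    """Split a US designation like 'A19' or 'PAR38' into shape and diameter.

    Assumes the US convention from the text: the number is the maximum
    diameter in eighths of an inch. Illustrative helper, not a standard API.
    """
    m = re.fullmatch(r"([A-Z]+)(\d+)", code)
    if m is None:
        raise ValueError(f"unrecognised designation: {code!r}")
    shape, eighths = m.group(1), int(m.group(2))
    inches = eighths / 8.0
    return shape, inches, inches * 25.4   # shape, inches, millimetres

shape, inches, mm = decode_us_bulb_code("PAR38")
print(shape, inches, f"{mm:.1f}")   # PAR 4.75 120.6  (38/8 = 4.75 in)
```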
For example, 63 mm reflectors are designated R63, but in the US, they are known as R20 (2.5 in). However, in both
regions, a PAR38 reflector is known as PAR38. Very small lamps may have the filament support wires extended through
the base of the lamp, and can be directly soldered to a printed circuit board for connections. Some reflector-type
lamps include screw terminals for connection of wires. Most lamps have metal bases that fit in a socket to support
the lamp and conduct current to the filament wires. In the late 19th century, manufacturers introduced a multitude
of incompatible lamp bases. General Electric introduced standard base sizes for tungsten incandescent lamps under
the Mazda trademark in 1909. This standard was soon adopted across the US, and the Mazda name was used by many manufacturers
under license through 1945. Today most incandescent lamps for general lighting service use an Edison screw in candelabra,
intermediate, or standard or mogul sizes, or double contact bayonet base. Technical standards for lamp bases include
ANSI standard C81.67 and IEC standard 60061-1 for common commercial lamp sizes, to ensure interchangeability between different manufacturers' products. Bayonet base lamps are frequently used in automotive lamps to resist loosening
due to vibration. A bipin base is often used for halogen or reflector lamps. Lamp life is highly sensitive to operating voltage: a 5% reduction in operating voltage will more than double the life of the bulb, at the expense of reducing its light output by about 16%. This
may be a very acceptable trade off for a light bulb that is in a difficult-to-access location (for example, traffic
lights or fixtures hung from high ceilings). Long-life bulbs take advantage of this trade-off. Since the value of
the electric power they consume is much more than the value of the lamp, general service lamps emphasize efficiency
over long operating life. The objective is to minimize the cost of light, not the cost of lamps. Early bulbs had
a life of up to 2500 hours, but in 1924 a cartel agreed to limit life to 1000 hours. When this was exposed in 1953,
General Electric and other leading American manufacturers were banned from limiting the life. These voltage-life relationships are valid for only a few percent change of voltage around rated conditions, but they do indicate that a lamp operated
at much lower than rated voltage could last for hundreds of times longer than at rated conditions, albeit with greatly
reduced light output. The "Centennial Light" is a light bulb that is accepted by the Guinness Book of World Records
as having been burning almost continuously at a fire station in Livermore, California, since 1901. However, the bulb
emits the equivalent light of a four watt bulb. A similar story can be told of a 40-watt bulb in Texas that has been
illuminated since 21 September 1908. It once resided in an opera house where notable celebrities stopped to take
in its glow, and was moved to an area museum in 1977. In flood lamps used for photographic lighting, the tradeoff
is made in the other direction. Compared to general-service bulbs, for the same power, these bulbs produce far more
light, and (more importantly) light at a higher color temperature, at the expense of greatly reduced life (which
may be as short as two hours for a type P1 lamp). The upper temperature limit for the filament is the melting point
of the metal. Tungsten is the metal with the highest melting point, 3,695 K (6,191 °F). A 50-hour-life projection
bulb, for instance, is designed to operate only 50 °C (90 °F) below that melting point. Such a lamp may achieve
up to 22 lumens per watt, compared with 17.5 for a 750-hour general service lamp. Lamps designed for different voltages
have different luminous efficacy. For example, a 100-watt, 120-volt lamp will produce about 17.1 lumens per watt.
A lamp with the same rated lifetime but designed for 230 V would produce only around 12.8 lumens per watt, and a
similar lamp designed for 30 volts (train lighting) would produce as much as 19.8 lumens per watt. Lower voltage
lamps have a thicker filament, for the same power rating. They can run hotter for the same lifetime before the filament
evaporates. In addressing the question of who invented the incandescent lamp, historians Robert Friedel and Paul
Israel list 22 inventors of incandescent lamps prior to Joseph Swan and Thomas Edison. They conclude that Edison's
version was able to outstrip the others because of a combination of three factors: an effective incandescent material,
a higher vacuum than others were able to achieve (by use of the Sprengel pump) and a high resistance that made power
distribution from a centralized source economically viable.
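The voltage sensitivities described earlier (a 5% undervoltage more than doubling life while cutting light output about 16%) follow approximate power laws. As a hedged sketch: light output is commonly modelled as scaling with roughly the 3.4th power of voltage, and lamp life with roughly the inverse 12th-to-16th power; the exponents below are rule-of-thumb values, not properties of any particular lamp.

```python
# Rule-of-thumb power-law scaling for incandescent lamps near rated voltage.
# Exponents are commonly quoted approximations (life exponents of 12-16 are
# cited; 16 is used here) and hold only for small deviations from rating.
LIGHT_EXP = 3.4
LIFE_EXP = -16.0

def relative_light(v_ratio):
    """Light output relative to rated, for voltage ratio V / V_rated."""
    return v_ratio ** LIGHT_EXP

def relative_life(v_ratio):
    """Lamp life relative to rated, for voltage ratio V / V_rated."""
    return v_ratio ** LIFE_EXP

v = 0.95  # running a lamp at 95% of rated voltage
print(f"light: {relative_light(v):.2f} x rated")  # ~0.84, about 16% less
print(f"life:  {relative_life(v):.2f} x rated")   # ~2.3, more than double
```

With these exponents the figures quoted above for a 5% undervoltage fall out directly; the caveat that such relationships hold only within a few percent of rated voltage applies equally to this sketch.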
Arsenal was the first club from the south of England to join The Football League, in 1893. They entered the First Division
in 1904, and have since accumulated the second-most points. Relegated only once, in 1913, they hold the longest unbroken streak in the top division. In the 1930s, Arsenal won five League Championships and two FA Cups, and another FA Cup
and two Championships after the war. In 1970–71, they won their first League and FA Cup Double. Between 1988 and
2005, they won five League titles and five FA Cups, including two more Doubles. They completed the 20th century with
the highest average league position. In 1886, Woolwich munitions workers founded the club as Dial Square. In 1913,
the club crossed the city to Arsenal Stadium in Highbury. They became Tottenham Hotspur's nearest neighbouring club, commencing
the North London derby. In 2006, they moved to the Emirates Stadium in nearby Holloway. Arsenal earned €435.5m in
2014–15, with the Emirates Stadium generating the highest revenue in world football. Based on social media activity
from 2014–15, Arsenal's fanbase is the fifth largest in the world. Forbes estimates the club was worth $1.3 billion
in 2015. Arsenal Football Club was formed as Dial Square in 1886 by workers at the Royal Arsenal in Woolwich, south-east
London, and were renamed Royal Arsenal shortly afterwards. The club was renamed again to Woolwich Arsenal after becoming
a limited company in 1893. The club became the first southern member of the Football League in 1893, starting out
in the Second Division, and won promotion to the First Division in 1904. The club's relative geographic isolation
resulted in lower attendances than those of other clubs, which led to the club becoming mired in financial problems
and effectively bankrupt by 1910, when they were taken over by businessmen Henry Norris and William Hall. Norris
sought to move the club elsewhere, and in 1913, soon after relegation back to the Second Division, Arsenal moved
to the new Arsenal Stadium in Highbury, north London; they dropped "Woolwich" from their name the following year.
Arsenal only finished in fifth place in the second division during the last pre-war competitive season of 1914–15,
but were nevertheless elected to rejoin the First Division when competitive football resumed in 1919–20, at the expense
of local rivals Tottenham Hotspur. Some books have reported that this election to the First Division was achieved by dubious means. Arsenal appointed Herbert Chapman as manager in 1925. Having already won the league twice with Huddersfield
Town in 1923–24 and 1924–25 (see Seasons in English football), Chapman brought Arsenal their first period of major
success. His revolutionary tactics and training, along with the signings of star players such as Alex James and Cliff
Bastin, laid the foundations of the club's domination of English football in the 1930s. Under his guidance Arsenal
won their first major trophies – victory in the 1930 FA Cup Final preceded two League Championships, in 1930–31 and
1932–33. In addition, Chapman was behind the 1932 renaming of the local London Underground station from "Gillespie
Road" to "Arsenal", making it the only Tube station to be named specifically after a football club. Arsenal began
winning silverware again with the surprise appointment of club physiotherapist Bertie Mee as manager in 1966. After
losing two League Cup finals, they won their first European trophy, the 1969–70 Inter-Cities Fairs Cup. This was
followed by an even greater triumph: their first League and FA Cup double in 1970–71. This marked a premature high
point of the decade; the Double-winning side was soon broken up and the following decade was characterised by a series
of near misses, starting with Arsenal finishing as FA Cup runners up in 1972, and First Division runners-up in 1972–73.
Terry Neill was recruited by the Arsenal board to replace Bertie Mee on 9 July 1976 and at the age of 34 he became
the youngest Arsenal manager to date. With new signings like Malcolm Macdonald and Pat Jennings, and a crop of talent
in the side such as Liam Brady and Frank Stapleton, the club enjoyed their best form since the 1971 double, reaching
a trio of FA Cup finals (1978, 1979 and 1980), and losing the 1980 European Cup Winners' Cup Final on penalties.
The club's only success during this time was a last-minute 3–2 victory over Manchester United in the 1979 FA Cup
Final, widely regarded as a classic. The return of former player George Graham as manager in 1986 brought a third
period of glory. Arsenal won the League Cup in 1987, Graham's first season in charge. This was followed by a League
title win in 1988–89, won with a last-minute goal in the final game of the season against fellow title challengers
Liverpool. Graham's Arsenal won another title in 1990–91, losing only one match, won the FA Cup and League Cup double
in 1993, and a second European trophy, the European Cup Winners' Cup, in 1994. Graham's reputation was tarnished
when he was found to have taken kickbacks from agent Rune Hauge for signing certain players, and he was dismissed
in 1995. His replacement, Bruce Rioch, lasted for only one season, leaving the club after a dispute with the board
of directors. The club's success in the late 1990s and first decade of the 21st century owed a great deal to the
1996 appointment of Arsène Wenger as manager. Wenger brought new tactics, a new training regime and several foreign
players who complemented the existing English talent. Arsenal won a second League and Cup double in 1997–98 and a
third in 2001–02. In addition, the club reached the final of the 1999–2000 UEFA Cup (losing on penalties to Galatasaray),
were victorious in the 2003 and 2005 FA Cups, and won the Premier League in 2003–04 without losing a single match,
an achievement which earned the side the nickname "The Invincibles". The feat came within a run of 49 league matches
unbeaten from 7 May 2003 to 24 October 2004, a national record. Arsenal finished in either first or second place
in the league in eight of Wenger's first eleven seasons at the club, although on no occasion were they able to retain
the title. As of July 2013, they were one of only five teams, the others being Manchester United, Blackburn Rovers,
Chelsea, and Manchester City, to have won the Premier League since its formation in 1992. Arsenal had never progressed
beyond the quarter-finals of the Champions League until 2005–06; in that season they became the first club from London
in the competition's fifty-year history to reach the final, in which they were beaten 2–1 by Barcelona. In July 2006,
they moved into the Emirates Stadium, after 93 years at Highbury. Arsenal reached the final of the 2007 and 2011
League Cups, losing 2–1 to Chelsea and Birmingham City respectively. The club had not gained a major trophy since
the 2005 FA Cup until 17 May 2014, when Arsenal beat Hull City in the 2014 FA Cup Final, coming back from a 2–0 deficit
to win the match 3–2. This qualified them for the 2014 FA Community Shield where they would play Premier League champions
Manchester City. They recorded a resounding 3–0 win in the game, winning their second trophy in three months. Nine
months after their Community Shield triumph, Arsenal appeared in the FA Cup final for the second year in a row, thrashing
Aston Villa 4–0 in the final and becoming the most successful club in the tournament's history with 12 titles. On
2 August 2015, Arsenal beat Chelsea 1–0 at Wembley Stadium to retain the Community Shield and earn their 14th Community
Shield title. Unveiled in 1888, Royal Arsenal's first crest featured three cannon viewed from above, pointing northwards,
similar to the coat of arms of the Metropolitan Borough of Woolwich (nowadays transferred to the coat of arms of
the Royal Borough of Greenwich). These can sometimes be mistaken for chimneys, but the presence of a carved lion's
head and a cascabel on each are clear indicators that they are cannon. This was dropped after the move to Highbury
in 1913, only to be reinstated in 1922, when the club adopted a crest featuring a single cannon, pointing eastwards,
with the club's nickname, The Gunners, inscribed alongside it; this crest only lasted until 1925, when the cannon
was reversed to point westward and its barrel slimmed down. In 1949, the club unveiled a modernised crest featuring
the same style of cannon below the club's name, set in blackletter, and above the coat of arms of the Metropolitan
Borough of Islington and a scroll inscribed with the club's newly adopted Latin motto, Victoria Concordia Crescit ("victory comes from harmony"), coined by the club's programme editor Harry Homer. For the first time, the crest was
rendered in colour, which varied slightly over the crest's lifespan, finally becoming red, gold and green. Because
of the numerous revisions of the crest, Arsenal were unable to copyright it. Although the club had managed to register
the crest as a trademark, and had fought (and eventually won) a long legal battle with a local street trader who
sold "unofficial" Arsenal merchandise, Arsenal eventually sought a more comprehensive legal protection. Therefore,
in 2002 they introduced a new crest featuring more modern curved lines and a simplified style, which was copyrightable.
The cannon once again faces east and the club's name is written in a sans-serif typeface above the cannon. Green
was replaced by dark blue. The new crest was criticised by some supporters; the Arsenal Independent Supporters' Association
claimed that the club had ignored much of Arsenal's history and tradition with such a radical modern design, and
that fans had not been properly consulted on the issue. An earlier monogram theme was developed into an Art Deco-style badge
on which the letters A and C framed a football rather than the letter F, the whole set within a hexagonal border.
This early example of a corporate logo, introduced as part of Herbert Chapman's rebranding of the club in the 1930s,
was used not only on Cup Final shirts but as a design feature throughout Highbury Stadium, including above the main
entrance and inlaid in the floors. From 1967, a white cannon was regularly worn on the shirts, until replaced by
the club crest, sometimes with the addition of the nickname "The Gunners", in the 1990s. In the 2011–12 season, Arsenal
celebrated their 125th anniversary. The celebrations included a modified version of the current crest worn on
their jerseys for the season. The crest was all white, surrounded by 15 oak leaves to the right and 15 laurel leaves
to the left. The oak leaves represent the 15 founding members of the club who met at the Royal Oak pub. The 15 laurel
leaves represent the design detail on the six pence pieces paid by the founding fathers to establish the club. The
laurel leaves also represent strength. To complete the crest, 1886 and 2011 are shown on either side of the motto
"Forward" at the bottom of the crest. For much of Arsenal's history, their home colours have been bright red shirts
with white sleeves and white shorts, though this has not always been the case. The choice of red is in recognition
of a charitable donation from Nottingham Forest, soon after Arsenal's foundation in 1886. Two of Dial Square's founding
members, Fred Beardsley and Morris Bates, were former Forest players who had moved to Woolwich for work. As they
put together the first team in the area, no kit could be found, so Beardsley and Bates wrote home for help and received
a set of kit and a ball. The shirt was redcurrant, a dark shade of red, and was worn with white shorts and socks
with blue and white hoops. In 1933, Herbert Chapman, wanting his players to be more distinctly dressed, updated the
kit, adding white sleeves and changing the shade to a brighter pillar box red. Two possibilities have been suggested
for the origin of the white sleeves. One story reports that Chapman noticed a supporter in the stands wearing a red
sleeveless sweater over a white shirt; another was that he was inspired by a similar outfit worn by the cartoonist
Tom Webster, with whom Chapman played golf. Regardless of which story is true, the red and white shirts have come
to define Arsenal and the team have worn the combination ever since, aside from two seasons. The first was 1966–67,
when Arsenal wore all-red shirts; this proved unpopular and the white sleeves returned the following season. The
second was 2005–06, the last season that Arsenal played at Highbury, when the team wore commemorative redcurrant
shirts similar to those worn in 1913, their first season in the stadium; the club reverted to their normal colours
at the start of the next season. In the 2008–09 season, Arsenal replaced the traditional all-white sleeves with red
sleeves with a broad white stripe. Arsenal's home colours have been the inspiration for at least three other clubs.
In 1909, Sparta Prague adopted a dark red kit like the one Arsenal wore at the time; in 1938, Hibernian adopted the
design of the Arsenal shirt sleeves in their own green and white strip. In 1920, Sporting Clube de Braga's manager
returned from a game at Highbury and changed his team's green kit to a duplicate of Arsenal's red with white sleeves
and shorts, giving rise to the team's nickname of Os Arsenalistas. These teams still wear those designs to this day.
For many years Arsenal's away colours were white shirts and either black or white shorts. In the 1969–70 season,
Arsenal introduced an away kit of yellow shirts with blue shorts. This kit was worn in the 1971 FA Cup Final as Arsenal
beat Liverpool to secure the double for the first time in their history. Arsenal reached the FA Cup final again the
following year wearing the red and white home strip and were beaten by Leeds United. Arsenal then competed in three
consecutive FA Cup finals between 1978 and 1980 wearing their "lucky" yellow and blue strip, which remained the club's
away strip until the release of a green and navy away kit in 1982–83. The following season, Arsenal returned to the
yellow and blue scheme, albeit with a darker shade of blue than before. When Nike took over from Adidas as Arsenal's
kit provider in 1994, Arsenal's away colours were again changed to two-tone blue shirts and shorts. Since the advent
of the lucrative replica kit market, the away kits have been changed regularly, with Arsenal usually releasing both
away and third choice kits. During this period the designs have been either all blue designs, or variations on the
traditional yellow and blue, such as the metallic gold and navy strip used in the 2001–02 season, the yellow and
dark grey used from 2005 to 2007, and the yellow and maroon of 2010 to 2013. As of 2009, the away kit is changed
every season, and the outgoing away kit becomes the third-choice kit if a new home kit is being introduced in the
same year. Widely referred to as Highbury, Arsenal Stadium was the club's home from September 1913 until May 2006.
The original stadium was designed by the renowned football architect Archibald Leitch, and had a design common to
many football grounds in the UK at the time, with a single covered stand and three open-air banks of terracing. The
entire stadium was given a massive overhaul in the 1930s: new Art Deco West and East stands were constructed, opening
in 1932 and 1936 respectively, and a roof was added to the North Bank terrace, which was bombed during the Second
World War and not restored until 1954. Highbury could hold more than 60,000 spectators at its peak, and had a capacity
of 57,000 until the early 1990s. The Taylor Report and Premier League regulations obliged Arsenal to convert Highbury
to an all-seater stadium in time for the 1993–94 season, thus reducing the capacity to 38,419 seated spectators.
This capacity had to be reduced further during Champions League matches to accommodate additional advertising boards,
so much so that for two seasons, from 1998 to 2000, Arsenal played Champions League home matches at Wembley, which
could house more than 70,000 spectators. Expansion of Highbury was restricted because the East Stand had been designated
as a Grade II listed building and the other three stands were close to residential properties. These limitations
prevented the club from maximising matchday revenue during the 1990s and first decade of the 21st century, putting
them in danger of being left behind in the football boom of that time. After considering various options, in 2000
Arsenal proposed building a new 60,361-capacity stadium at Ashburton Grove, since named the Emirates Stadium, about
500 metres south-west of Highbury. The project was initially delayed by red tape and rising costs, and construction
was completed in July 2006, in time for the start of the 2006–07 season. The stadium was named after its sponsors,
the airline company Emirates, with whom the club signed the largest sponsorship deal in English football history,
worth around £100 million; some fans referred to the ground as Ashburton Grove, or the Grove, as they did not agree
with corporate sponsorship of stadium names. The stadium will be officially known as Emirates Stadium until at least
2028, and the airline will be the club's shirt sponsor until the end of the 2018–19 season. Since the start of the
2010–11 season, the stands of the stadium have been officially known as the North Bank, East Stand, West Stand and
Clock End. Arsenal fans often refer to themselves as "Gooners", a name derived from the team's nickname, "The Gunners".
The fanbase is large and generally loyal, and virtually all home matches sell out; in 2007–08 Arsenal had the second-highest
average League attendance for an English club (60,070, which was 99.5% of available capacity), and, as of 2015, the
third-highest all-time average attendance. Arsenal have the seventh-highest average attendance among European football
clubs, behind only Borussia Dortmund, FC Barcelona, Manchester United, Real Madrid, Bayern Munich, and Schalke. The
club's location, adjoining wealthy areas such as Canonbury and Barnsbury, mixed areas such as Islington, Holloway,
Highbury, and the adjacent London Borough of Camden, and largely working-class areas such as Finsbury Park and Stoke
Newington, has meant that Arsenal's supporters have come from a variety of social classes. Like all major English
football clubs, Arsenal have a number of domestic supporters' clubs, including the Arsenal Football Supporters' Club,
which works closely with the club, and the Arsenal Independent Supporters' Association, which maintains a more independent
line. The Arsenal Supporters' Trust promotes greater participation in ownership of the club by fans. The club's supporters
also publish fanzines such as The Gooner, Gunflash and the satirical Up The Arse!. In addition to the usual English
football chants, supporters sing "One-Nil to the Arsenal" (to the tune of "Go West"). There have always been Arsenal
supporters outside London, and since the advent of satellite television, a supporter's attachment to a football club
has become less dependent on geography. Consequently, Arsenal have a significant number of fans from beyond London
and all over the world; in 2007, 24 UK, 37 Irish and 49 other overseas supporters clubs were affiliated with the
club. A 2011 report by SPORT+MARKT estimated Arsenal's global fanbase at 113 million. The club's social media activity
was the fifth highest in world football during the 2014–15 season. Arsenal's longest-running and deepest rivalry
is with their nearest major neighbours, Tottenham Hotspur; matches between the two are referred to as North London
derbies. Other rivalries within London include those with Chelsea, Fulham and West Ham United. In addition, Arsenal
and Manchester United developed a strong on-pitch rivalry in the late 1980s, which intensified in recent years when
both clubs were competing for the Premier League title – so much so that a 2003 online poll by the Football Fans
Census listed Manchester United as Arsenal's biggest rivals, followed by Tottenham and Chelsea. A 2008 poll listed
the Tottenham rivalry as more important. The largest shareholder on the Arsenal board is American sports tycoon Stan
Kroenke. Kroenke first launched a bid for the club in April 2007, and faced competition for shares from Red and White
Securities, which acquired its first shares off David Dein in August 2007. Red & White Securities was co-owned by
Russian billionaire Alisher Usmanov and Iranian London-based financier Farhad Moshiri, though Usmanov bought Moshiri's
stake in 2016. Kroenke came close to the 30% takeover threshold in November 2009, when he increased his holding to
18,594 shares (29.9%). In April 2011, Kroenke achieved a full takeover by purchasing the shareholdings of Nina Bracewell-Smith
and Danny Fiszman, taking his shareholding to 62.89%. As of June 2015, Kroenke owns 41,698 shares (67.02%) and Red
& White Securities own 18,695 shares (30.04%). Ivan Gazidis has been the club's Chief Executive since 2009. Arsenal's
parent company, Arsenal Holdings plc, operates as a non-quoted public limited company, whose ownership is considerably
different from that of other football clubs. Only 62,217 shares in Arsenal have been issued, and they are not traded
on a public exchange such as the FTSE or AIM; instead, they are traded relatively infrequently on the ICAP Securities
and Derivatives Exchange, a specialist market. On 10 March 2016, a single share in Arsenal had a mid price of £15,670,
which puts the club's market capitalisation at approximately £975m. Most football clubs are not listed on an
exchange, which makes direct comparisons of their values difficult. Business magazine Forbes valued Arsenal as a
whole at $1.3 billion in 2015. Consultants Brand Finance valued the club's brand and intangible assets at $703m in
2015, and consider Arsenal an AAA global brand. Research by the Henley Business School modelled the club's value
at £1.118 billion in 2015, the second highest in the Premier League. Arsenal's financial results for the 2014–15
season show group revenue of £344.5m, with a profit before tax of £24.7m. The footballing core of the business showed
a revenue of £329.3m. The Deloitte Football Money League is a publication that compares clubs' annual
revenue on a consistent basis. It put Arsenal's footballing revenue at £331.3m (€435.5m), ranking Arsenal seventh among world football
clubs. Arsenal and Deloitte both list the match day revenue generated by the Emirates Stadium as £100.4m, more than
any other football stadium in the world. Arsenal have appeared in a number of media "firsts". On 22 January 1927,
their match at Highbury against Sheffield United was the first English League match to be broadcast live on radio.
A decade later, on 16 September 1937, an exhibition match between Arsenal's first team and the reserves was the first
football match in the world to be televised live. Arsenal also featured in the first edition of the BBC's Match of
the Day, which screened highlights of their match against Liverpool at Anfield on 22 August 1964. BSkyB's coverage
of Arsenal's January 2010 match against Manchester United was the first live public broadcast of a sports event on
3D television. As one of the most successful teams in the country, Arsenal have often featured when football is depicted
in the arts in Britain. They formed the backdrop to one of the earliest football-related films, The Arsenal Stadium
Mystery (1939). The film centres on a friendly match between Arsenal and an amateur side, one of whose players is
poisoned while playing. Many Arsenal players appeared as themselves and manager George Allison was given a speaking
part. More recently, the book Fever Pitch by Nick Hornby was an autobiographical account of Hornby's life and relationship
with football and Arsenal in particular. Published in 1992, it formed part of the revival and rehabilitation of football
in British society during the 1990s. The book was twice adapted for the cinema – the 1997 British film focuses on
Arsenal's 1988–89 title win, and a 2005 American version features a fan of baseball's Boston Red Sox. Arsenal have
often been stereotyped as a defensive and "boring" side, especially during the 1970s and 1980s; many comedians, such
as Eric Morecambe, made jokes about this at the team's expense. The theme was repeated in the 1997 film The Full
Monty, in a scene where the lead actors move in a line and raise their hands, deliberately mimicking the Arsenal
defence's offside trap, in an attempt to co-ordinate their striptease routine. Another film reference to the club's
defence comes in the film Plunkett & Macleane, in which two characters are named Dixon and Winterburn after Arsenal's
long-serving full backs – the right-sided Lee Dixon and the left-sided Nigel Winterburn. Arsenal's tally of 13 League
Championships is the third highest in English football, after Manchester United (20) and Liverpool (18), and they
were the first club to reach 8 League Championships. They hold the highest number of FA Cup trophies, 12. The club
is one of only six clubs to have won the FA Cup twice in succession, in 2002 and 2003, and 2014 and 2015. Arsenal
have achieved three League and FA Cup "Doubles" (in 1971, 1998 and 2002), a feat only previously achieved by Manchester
United (in 1994, 1996 and 1999). They were the first side in English football to complete the FA Cup and League Cup
double, in 1993. Arsenal were also the first London club to reach the final of the UEFA Champions League, in 2006,
losing the final 2–1 to Barcelona. Arsenal Ladies are the women's football club affiliated to Arsenal. Founded in
1987, they turned semi-professional in 2002 and are managed by Clare Wheatley. Arsenal Ladies are the most successful
team in English women's football. In the 2008–09 season, they won all three major English trophies – the FA Women's
Premier League, FA Women's Cup and FA Women's Premier League Cup, and, as of 2009, were the only English side to
have won the UEFA Women's Cup, having done so in the 2006–07 season as part of a unique quadruple. The men's and
women's clubs are formally separate entities but have quite close ties; Arsenal Ladies are entitled to play once
a season at the Emirates Stadium, though they usually play their home matches at Boreham Wood.
Physically, clothing serves many purposes: it can serve as protection from the elements, and can enhance safety during hazardous
activities such as hiking and cooking. It protects the wearer from rough surfaces, rash-causing plants, insect bites,
splinters, thorns and prickles by providing a barrier between the skin and the environment. Clothes can insulate
against cold or hot conditions. Further, they can provide a hygienic barrier, keeping infectious and toxic materials
away from the body. Clothing also provides protection from harmful UV radiation. There is no easy way to determine
when clothing was first developed, but some information has been inferred by studying lice. The body louse specifically
lives in clothing, and diverged from head lice about 107,000 years ago, suggesting that clothing existed at that time.
Another theory is that modern humans are the only survivors of several species of primates who may have worn clothes,
and that clothing may have been used as long as 650,000 years ago. Other louse-based estimates put the introduction
of clothing at around 42,000–72,000 BP. The most obvious function of clothing is to improve the comfort of the wearer,
by protecting the wearer from the elements. In hot climates, clothing provides protection from sunburn or wind damage,
while in cold climates its thermal insulation properties are generally more important. Shelter usually reduces the
functional need for clothing. For example, coats, hats, gloves, and other superficial layers are normally removed
when entering a warm home, particularly if one is residing or sleeping there. Similarly, clothing has seasonal and
regional aspects, so that thinner materials and fewer layers of clothing are generally worn in warmer seasons and
regions than in colder ones. Clothing can be, and historically has been, made from a very wide variety of materials. Materials
have ranged from leather and furs, to woven materials, to elaborate and exotic natural and synthetic fabrics. Not
all body coverings are regarded as clothing. Articles carried rather than worn (such as purses), worn on a single
part of the body and easily removed (scarves), worn purely for adornment (jewelry), or those that serve a function
other than protection (eyeglasses), are normally considered accessories rather than clothing, as are footwear and
hats. Clothing protects against many things that might injure the uncovered human body. Clothes protect people from
the elements, including rain, snow, wind, and other weather, as well as from the sun. However, clothing that is too
sheer, thin, small, tight, etc., offers less protection. Clothes also reduce risk during activities such as work
or sport. Some clothing protects from specific environmental hazards, such as insects, noxious chemicals, weather,
weapons, and contact with abrasive substances. Conversely, clothing may protect the environment from the clothing
wearer, as with doctors wearing medical scrubs. Humans have shown extreme inventiveness in devising clothing solutions
to environmental hazards. Examples include: space suits, air conditioned clothing, armor, diving suits, swimsuits,
bee-keeper gear, motorcycle leathers, high-visibility clothing, and other pieces of protective clothing. Meanwhile,
the distinction between clothing and protective equipment is not always clear-cut, since clothes designed to be fashionable
often have protective value and clothes designed for function often consider fashion in their design. Wearing clothes
also has social implications. They cover parts of the body that social norms require to be covered, act as a form
of adornment, and serve other social purposes. Although dissertations on clothing and its function appear from the
19th century as colonising countries dealt with new environments, concerted scientific research into psycho-social,
physiological and other functions of clothing (e.g. protective, cartage) occurred in the first half of the 20th century,
with publications such as J. C. Flügel's Psychology of Clothes in 1930, and Newburgh's seminal Physiology of Heat
Regulation and The Science of Clothing in 1949. By 1968, the field of environmental physiology had advanced and expanded
significantly, but the science of clothing in relation to environmental physiology had changed little. While considerable
research has since occurred and the knowledge-base has grown significantly, the main concepts remain unchanged, and
indeed Newburgh's book is still cited by contemporary authors, including those attempting to develop thermoregulatory
models of clothing development. In Western societies, skirts, dresses and high-heeled shoes are usually seen as women's
clothing, while neckties are usually seen as men's clothing. Trousers were once seen as exclusively male clothing,
but are nowadays worn by both genders. Male clothes are often more practical (that is, they can function well under
a wide variety of situations), but a wider range of clothing styles is available for females. Males are typically
allowed to bare their chests in a greater variety of public places. It is generally acceptable for a woman to wear
traditionally male clothing, while the converse is unusual. In some societies, clothing may be used to indicate rank
or status. In ancient Rome, for example, only senators could wear garments dyed with Tyrian purple. In traditional
Hawaiian society, only high-ranking chiefs could wear feather cloaks and palaoa, or carved whale teeth. Under the
Travancore Kingdom of Kerala (India), lower-caste women had to pay a tax for the right to cover their upper body.
In China, before establishment of the republic, only the emperor could wear yellow. History provides many examples
of elaborate sumptuary laws that regulated what people could wear. In societies without such laws, which includes
most modern societies, social status is instead signaled by the purchase of rare or luxury items that are limited
by cost to those with wealth or status. In addition, peer pressure influences clothing choice. According to archaeologists
and anthropologists, the earliest clothing likely consisted of fur, leather, leaves, or grass that were draped, wrapped,
or tied around the body. Knowledge of such clothing remains inferential, since clothing materials deteriorate quickly
compared to stone, bone, shell and metal artifacts. Archeologists have identified very early sewing needles of bone
and ivory from about 30,000 BC, found near Kostenki, Russia in 1988. Dyed flax fibers that could have been used in
clothing have been found in a prehistoric cave in the Republic of Georgia; they date back to 36,000 BP. Scientists
are still debating when people started wearing clothes. Ralf Kittler, Manfred Kayser and Mark Stoneking, anthropologists
at the Max Planck Institute for Evolutionary Anthropology, have conducted a genetic analysis of human body lice that
suggests clothing originated quite recently, around 170,000 years ago. Body lice are an indicator of clothes-wearing,
since most humans have sparse body hair, and body lice thus require clothing to survive. Their research suggests
the invention of clothing may have coincided with the northward migration of modern Homo sapiens away from the warm
climate of Africa, thought to have begun between 50,000 and 100,000 years ago. However, a second group of researchers
using similar genetic methods estimate that clothing originated around 540,000 years ago (Reed et al. 2004. PLoS
Biology 2(11): e340). For now, the date of the origin of clothing remains unresolved.[citation needed] Different
cultures have evolved various ways of creating clothes out of cloth. One approach simply involves draping the cloth.
Many people wore, and still wear, garments consisting of rectangles of cloth wrapped to fit – for example, the dhoti
for men and the sari for women in the Indian subcontinent, the Scottish kilt or the Javanese sarong. The clothes
may simply be tied up, as is the case of the first two garments; or pins or belts hold the garments in place, as
in the case of the latter two. The precious cloth remains uncut, and people of various sizes or the same person at
different sizes can wear the garment. By the early years of the 21st century, western clothing styles had, to some
extent, become international styles. This process began hundreds of years earlier, during the periods of European
colonialism. The process of cultural dissemination has continued over the centuries as Western media corporations
have penetrated markets throughout the world, spreading Western culture and styles. Fast fashion clothing has also
become a global phenomenon. These garments are less expensive, mass-produced Western clothing. Donated used clothing
from Western countries is also delivered to people in poor countries by charity organizations. Most sports and physical
activities are practiced wearing special clothing, for practical, comfort or safety reasons. Common sportswear garments
include shorts, T-shirts, tennis shirts, leotards, tracksuits, and trainers. Specialized garments include wet suits
(for swimming, diving or surfing), salopettes (for skiing) and leotards (for gymnastics). Also, spandex materials
are often used as base layers to soak up sweat. Spandex is also preferable for active sports that require form fitting
garments, such as volleyball, wrestling, track & field, dance, gymnastics and swimming. The world of clothing is
always changing, as new cultural influences meet technological innovations. Researchers in scientific labs have been
developing prototypes for fabrics that can serve functional purposes well beyond their traditional roles, for example,
clothes that can automatically adjust their temperature, repel bullets, project images, and generate electricity.
Some practical advances already available to consumers are bullet-resistant garments made with Kevlar and stain-resistant
fabrics that are coated with chemical mixtures that reduce the absorption of liquids. Though mechanization transformed
most aspects of human industry by the mid-20th century, garment workers have continued to labor under challenging
conditions that demand repetitive manual labor. Mass-produced clothing is often made in what are considered by some
to be sweatshops, typified by long work hours, lack of benefits, and lack of worker representation. While most examples
of such conditions are found in developing countries, clothes made in industrialized nations may also be manufactured
similarly, often staffed by undocumented immigrants.[citation needed] Outsourcing production to low wage countries
like Bangladesh, China, India and Sri Lanka became possible when the Multi Fibre Agreement (MFA) was abolished. The
MFA, which placed quotas on textiles imports, was deemed a protectionist measure.[citation needed] Globalization
is often cited as the single largest contributing factor to the poor working conditions of garment workers. Although
many countries recognize treaties such as the conventions of the International Labor Organization, which attempt to set standards for worker
safety and rights, many have made exceptions to certain parts of the treaties or failed to thoroughly enforce
them. India for example has not ratified sections 87 and 92 of the treaty.[citation needed] The use of animal fur
in clothing dates to prehistoric times. It is currently associated in developed countries with expensive, designer
clothing, although fur is still used by indigenous people in arctic zones and higher elevations for its warmth and
protection. Once uncontroversial, it has recently been the focus of campaigns by groups that consider
it cruel and unnecessary. PETA, along with other animal rights and animal liberation groups, has called attention
to fur farming and other practices they consider cruel. Many kinds of clothing are designed to be ironed before they
are worn to remove wrinkles. Most modern formal and semi-formal clothing is in this category (for example, dress
shirts and suits). Ironed clothes are believed to look clean, fresh, and neat. Much contemporary casual clothing
is made of knit materials that do not readily wrinkle, and do not require ironing. Some clothing is permanent press,
having been treated with a coating (such as polytetrafluoroethylene) that suppresses wrinkles and creates a smooth
appearance without ironing. A resin used for making non-wrinkle shirts releases formaldehyde, which could cause contact
dermatitis for some people; no disclosure requirements exist, and in 2008 the U.S. Government Accountability Office
tested formaldehyde in clothing and found that generally the highest levels were in non-wrinkle shirts and pants.
In 1999, a study of the effect of washing on formaldehyde levels found that after six months of washing, 7 of
27 shirts still had levels in excess of 75 ppm, the safe limit for direct skin exposure. In past times, mending
was an art. A meticulous tailor or seamstress could mend rips with thread raveled from hems and seam edges so skillfully
that the tear was practically invisible. When the raw material – cloth – was worth more than labor, it made sense
to expend labor in saving it. Today clothing is considered a consumable item. Mass-manufactured clothing is less
expensive than the labor required to repair it. Many people buy a new piece of clothing rather than spend time mending.
The thrifty still replace zippers and buttons and sew up ripped hems.
The Chicago Cubs are an American professional baseball team located on the North Side of Chicago, Illinois. The Cubs compete
in Major League Baseball (MLB) as a member of the National League (NL) Central division; the team plays its home
baseball games at Wrigley Field. The Cubs are also one of two active major league teams based in Chicago; the other
is the Chicago White Sox, who are a member of the American League (AL) Central division. The team is currently owned
by Thomas S. Ricketts, son of TD Ameritrade founder Joe Ricketts. The team played its first games in 1876 as a founding
member of the National League (NL), eventually becoming known officially as the Chicago Cubs for the 1903 season.
Officially, the Cubs are tied for the distinction of being the oldest currently active U.S. professional sports club,
along with the Atlanta Braves, which also began play in the NL in 1876 as the Boston Red Stockings (Major League
Baseball does not officially recognize the National Association of Professional Base Ball Players as a major league).
In 1906, the franchise recorded a Major League record 116 wins (tied by the 2001 Seattle Mariners) and posted a modern-era
record winning percentage of .763, which still stands today. They appeared in their first World Series the same year,
falling to their crosstown rivals, the Chicago White Sox, four games to two. The Cubs won back-to-back World Series
championships in 1907 and 1908, becoming the first Major League team to play in three consecutive Fall Classics,
and the first to win it twice. The team has appeared in seven World Series following their 1908 title, most recently
in 1945. The Cubs have not won the World Series in 107 years, the longest championship drought of any major North
American professional sports team, and are often referred to as the "Lovable Losers" because of this distinction.
They are also known as "The North Siders" because Wrigley Field, their home park since 1916, is located in Chicago's
North Side Lake View community at 1060 West Addison Street. The Cubs have a major rivalry with the St. Louis Cardinals.
The Cubs began play as the Chicago White Stockings, joining the National League (NL) as a charter member. Owner William
Hulbert signed multiple star players, such as pitcher Albert Spalding and infielders Ross Barnes, Deacon White, and
Adrian "Cap" Anson, to join the team prior to the N.L.'s first season. The White Stockings played their home games
at West Side Grounds and quickly established themselves as one of the new league's top teams.
Spalding won forty-seven games and Barnes led the league in hitting at .429 as Chicago won the first ever National
League pennant, which at the time was the game's top prize. After back-to-back pennants in 1880 and 1881, Hulbert
died, and Spalding, who had retired to start Spalding sporting goods, assumed ownership of the club. The White Stockings,
with Anson acting as player/manager, captured their third consecutive pennant in 1882, and Anson established himself
as the game's first true superstar. In 1885 and '86, after winning N.L. pennants, the White Stockings met the short-lived
American Association champion in that era's version of a World Series. Both seasons resulted in match ups with the
St. Louis Brown Stockings, with the clubs tying in 1885 and with St. Louis winning in 1886. This was the genesis
of what would eventually become one of the greatest rivalries in sports. In all, the Anson-led Chicago Base Ball
Club won six National League pennants between 1876 and 1886. As a result, Chicago's club nickname transitioned, and
by 1890 they had become known as the Chicago Colts, or sometimes "Anson's Colts", referring to Cap's influence within
the club. Anson was the first player in history credited with collecting 3,000 career hits. After a disappointing
record of 59-73 and a 9th-place finish in 1897, Anson was released by the Cubs as both a player and manager. Due
to Anson's absence from the club after 22 years, local newspaper reporters started to refer to the Cubs as the "Orphans".
In 1902, Spalding, who by this time had revamped the roster to boast what would soon be one of the best teams of
the early century, sold the club to Jim Hart. The franchise was nicknamed the Cubs by the Chicago Daily News in 1902,
although not officially becoming the Chicago Cubs until the 1907 season. During this period, which has become known
as baseball's dead-ball era, Cub infielders Joe Tinker, Johnny Evers, and Frank Chance were made famous as a double-play
combination by Franklin P. Adams' poem Baseball's Sad Lexicon. The poem first appeared in the July 18, 1910 edition
of the New York Evening Mail. Mordecai "Three-Finger" Brown, Jack Taylor, Ed Reulbach, Jack Pfiester, and Orval Overall
were several key pitchers for the Cubs during this time period. With Chance acting as player-manager from 1905 to
1912, the Cubs won four pennants and two World Series titles over a five-year span. Although they fell to the "Hitless
Wonders" White Sox in the 1906 World Series, the Cubs posted a record 116 victories and the best winning percentage
(.763) in Major League history. With mostly the same roster, Chicago won back-to-back World Series championships
in 1907 and 1908, becoming the first Major League club to play three times in the Fall Classic and the first to win
it twice. However, the Cubs have not won a World Series since; this remains the longest championship drought in North
American professional sports. In 1914, advertising executive Albert Lasker obtained a large block of the club's shares
and before the 1916 season assumed majority ownership of the franchise. Lasker brought in a wealthy partner, Charles
Weeghman, the proprietor of a popular chain of lunch counters who had previously owned the Chicago Whales of the
short-lived Federal League. As principal owners, the pair moved the club from the West Side Grounds to the much newer
Weeghman Park, which had been constructed for the Whales only two years earlier, where they remain to this day. The
Cubs responded by winning a pennant in the war-shortened season of 1918, where they played a part in another team's
curse: the Boston Red Sox defeated Grover Cleveland Alexander's Cubs four games to two in the 1918 World Series,
Boston's last Series championship until 2004. Near the end of the first decade under the "double-Bills" (owner William Wrigley Jr. and club president Bill Veeck Sr.), the Cubs won the NL pennant in 1929 and then achieved the unusual feat of winning a pennant every three years, following
up the 1929 flag with league titles in 1932, 1935, and 1938. Unfortunately, their success did not extend to the Fall
Classic, as they fell to their AL rivals each time. The '32 series against the Yankees featured Babe Ruth's "called
shot" at Wrigley Field in Game 3. There were some historic moments for the Cubs as well: in 1930, Hack Wilson, one
of the top home run hitters in the game, had one of the most impressive seasons in MLB history, hitting 56 home runs
and establishing the current runs-batted-in record of 191. That 1930 club, which boasted six eventual Hall of Famers
(Wilson, Gabby Hartnett, Rogers Hornsby, George "High Pockets" Kelly, Kiki Cuyler and manager Joe McCarthy) established
the current team batting average record of .309. In 1935 the Cubs claimed the pennant in thrilling fashion, winning
a record 21 games in a row in September. The '38 club, with Dizzy Dean leading the pitching staff, provided a historic moment when it won a crucial late-season game at Wrigley Field over the Pittsburgh Pirates on a walk-off home run by Gabby Hartnett, which became known in baseball lore as "The Homer in the Gloamin'". The Cubs enjoyed
one more pennant at the close of World War II, finishing 98–56. Due to the wartime travel restrictions, the first
three games of the 1945 World Series were played in Detroit, where the Cubs won two games, including a one-hitter
by Claude Passeau, and the final four were played at Wrigley. In Game 4 of the Series, the Curse of the Billy Goat
was allegedly laid upon the Cubs when P.K. Wrigley ejected Billy Sianis, who had come to Game 4 with two box seat
tickets, one for him and one for his goat. They paraded around for a few innings, but Wrigley demanded the goat leave
the park due to its unpleasant odor. Upon his ejection, Mr. Sianis uttered, "The Cubs, they ain't gonna win no more."
The Cubs lost Game 4, lost the Series, and have not been back since; many have said that Sianis's "curse" is what has kept the team out of the World Series. After losing the 1945 World
Series to the Detroit Tigers, the Cubs finished with winning seasons the next two years, but those teams did not
enter post-season play. In the following two decades after Sianis' ill will, the Cubs played mostly forgettable baseball,
finishing among the worst teams in the National League on an almost annual basis. Longtime infielder/manager Phil
Cavarretta, who had been a key player during the '45 season, was fired during spring training in 1954 after admitting
the team was unlikely to finish above fifth place. Although shortstop Ernie Banks would become one of the star players
in the league during the next decade, finding help for him proved a difficult task, as quality players such as Hank
Sauer were few and far between. This, combined with poor ownership decisions such as the College of Coaches, and
the ill-fated trade of future Hall of Famer Lou Brock to the Cardinals for pitcher Ernie Broglio (who won only 7
games over the next three seasons), hampered on-field performance. In 1969 the Cubs, managed by Leo Durocher, built
a substantial lead in the newly created National League Eastern Division by mid-August. Ken Holtzman pitched a no-hitter
on August 19, and the division lead grew to 8 1⁄2 games over the St. Louis Cardinals and to 9 1⁄2 games over the
New York Mets. After the game of September 2, the Cubs' record was 84-52 with the Mets in second place at 77-55. But
then a losing streak began just as a Mets winning streak was beginning. The Cubs lost the final game of a series
at Cincinnati, then came home to play the resurgent Pittsburgh Pirates (who would finish in third place). After losing
the first two games by scores of 9-2 and 13-4, the Cubs led the third game going into the ninth inning. A win would be a positive
springboard since the Cubs were to play a crucial series with the Mets the very next day. But Willie Stargell drilled
a 2-out, 2-strike pitch from the Cubs' ace reliever, Phil Regan, onto Sheffield Avenue to tie the score in the top
of the ninth. The Cubs would lose 7-5 in extra innings. Burdened by a four-game losing streak, the Cubs traveled
to Shea Stadium for a short two-game set. The Mets won both games, and the Cubs left New York with a record of 84-58
just 1⁄2 game in front. Disaster followed in Philadelphia, as a 99-loss Phillies team nonetheless defeated the Cubs
twice, to extend Chicago's losing streak to eight games. In a key play in the second game, on September 11, Cubs
starter Dick Selma threw a surprise pickoff attempt to third baseman Ron Santo, who was nowhere near the bag or the
ball. Selma's throwing error opened the gates to a Phillies rally. After that second Philly loss, the Cubs were 84-60
and the Mets had pulled ahead at 85-57. The Mets would not look back. The Cubs' eight-game losing streak finally
ended the next day in St. Louis, but the Mets were in the midst of a ten-game winning streak, and the Cubs, wilting
from team fatigue, generally deteriorated in all phases of the game. The Mets (who had lost a record 120 games seven years earlier) would go on to win the World Series. The Cubs, despite a respectable 92-70 record, would be remembered
for having lost a remarkable 17½ games in the standings to the Mets in the last quarter of the season. Following
the '69 season, the club posted winning records for the next few seasons, but no playoff action. After the core players
of those teams started to move on, the 1970s got worse for the team, and they became known as "The Lovable Losers."
In 1977, the team found some life, but ultimately experienced one of its biggest collapses. The Cubs hit a high-water
mark on June 28 at 47–22, boasting an 8 1⁄2 game NL East lead, led by Bobby Murcer (27 HR, 89 RBI) and Rick Reuschel (20–10). However, the Philadelphia Phillies cut the lead to two by the All-Star break, when the Cubs sat 19 games over .500, and the club swooned late in the season, going 20–40 after July 31. The Cubs finished in 4th
place at 81–81, while Philadelphia surged, finishing with 101 wins. The following two seasons also saw the Cubs get
off to a fast start, as the team rallied to over 10 games above .500 well into both seasons, only to again wear down
and play poorly later on, and ultimately settling back to mediocrity. This trait became known as the "June Swoon."
The Cubs' unusually high number of day games is often cited as one reason for the team's inconsistent late-season play. After several more subpar seasons, in 1981 the Cubs hired GM Dallas Green from Philadelphia
to turn around the franchise. Green had managed the 1980 Phillies to the World Series title. One of his early GM
moves brought in a young Phillies minor-league 3rd baseman named Ryne Sandberg, along with Larry Bowa for Iván DeJesús.
The 1983 Cubs had finished 71–91 under Lee Elia, who was fired by Green before the season ended. Green continued
the culture of change and overhauled the Cubs roster, front-office and coaching staff prior to 1984. Jim Frey was
hired to manage the 1984 Cubs, with Don Zimmer coaching 3rd base and Billy Connors serving as pitching coach. Green
shored up the 1984 roster with a series of transactions. In December 1983, Scott Sanderson was acquired from Montreal
in a three-team deal with San Diego for Carmelo Martínez. Pinch hitter Richie Hebner (.333 BA in 1984) was signed
as a free-agent. In spring training, moves continued: LF Gary Matthews and CF Bobby Dernier came from Philadelphia
on March 26 for Bill Campbell and a minor leaguer. Reliever Tim Stoddard (10–6, 3.82, 7 saves) was acquired the same
day for a minor leaguer; veteran pitcher Ferguson Jenkins was released. The team's commitment to contend was complete
when Green made a midseason deal on June 15 to shore up the starting rotation due to injuries to Rick Reuschel (5–5)
and Sanderson. The deal brought 1979 NL Rookie of the Year pitcher Rick Sutcliffe from the Cleveland Indians. Joe
Carter (who was with the Triple-A Iowa Cubs at the time) and center fielder Mel Hall were sent to Cleveland for Sutcliffe
and back-up catcher Ron Hassey (.333 with the Cubs in 1984). Sutcliffe (5–5 with the Indians) immediately joined Sanderson (8–5, 3.14), Dennis Eckersley (10–8, 3.03), Steve Trout (13–7, 3.41) and Dick Ruthven (6–10, 5.04) in the starting rotation. Sutcliffe proceeded to go 16–1 for the Cubs and capture the Cy Young Award. The shift in the Cubs' fortunes was exemplified by the June 23 "NBC Saturday Game of the Week" contest against the St. Louis Cardinals, which has since been dubbed
simply "The Sandberg Game." With the nation watching and Wrigley Field packed, Sandberg emerged as a superstar with
not one, but two game-tying home runs against Cardinals closer Bruce Sutter. His shots in the 9th and 10th innings brought Wrigley Field to its feet and set the stage for a comeback win that cemented the Cubs as the team to beat in
the East. No one would catch them, except the Padres in the playoffs. In 1984, each league had two divisions, East
and West. The divisional winners met in a best-of-5 series to advance to the World Series, played in a "2–3" format: the first two games were played at the home of the team without home field advantage, and the last three at the home of the team with it. Thus the first two games were played at Wrigley Field
and the next three at the home of their opponents, San Diego. A common and unfounded myth is that since Wrigley Field
did not have lights at that time the National League decided to give the home field advantage to the winner of the
NL West. In fact, home field advantage had rotated between the winners of the East and West since 1969 when the league
expanded. In even numbered years, the NL West had home field advantage. In odd numbered years, the NL East had home
field advantage. Since the NL East winners had had home field advantage in 1983, the NL West winners were entitled
to it. The confusion may stem from the fact that Major League Baseball did decide that, should the Cubs make it to
the World Series, the American League winner would have home field advantage unless the Cubs hosted home games at
an alternate site since the Cubs home field of Wrigley Field did not yet have lights. Rumor was the Cubs could hold
home games across town at Comiskey Park, home of the American League's Chicago White Sox. Rather than hold any games
in the crosstown rival Sox Park, the Cubs made arrangements with August A. Busch, owner of the St. Louis Cardinals,
to use Busch Stadium in St. Louis as the Cubs "home field" for the World Series. This was approved by Major League
Baseball and would have enabled the Cubs to host games 1 and 2, along with games 6 and 7 if necessary. At the time
World Series home field advantage also rotated between the leagues: the AL had it in odd-numbered years and the NL in even-numbered years (the NL's St. Louis Cardinals had it in 1982, the AL's Baltimore Orioles in 1983). In the NLCS, the Cubs
easily won the first two games at Wrigley Field against the San Diego Padres. The Padres were the winners of the
Western Division with Steve Garvey, Tony Gwynn, Eric Show, Goose Gossage and Alan Wiggins. With wins of 13–0 and
4–2, the Cubs needed to win only one game of the next three in San Diego to make it to the World Series. After being
beaten in Game 3, 7–1, the Cubs lost Game 4 when closer Lee Smith, with the game tied 5–5, allowed a game-winning home run to
Garvey in the bottom of the ninth inning. In Game 5 the Cubs took a 3–0 lead into the 6th inning, and a 3–2 lead
into the seventh with Sutcliffe (who won the Cy Young Award that year) still on the mound. Then, Leon Durham had
a sharp grounder go under his glove. This critical error helped the Padres win the game 6–3, with a 4-run 7th inning
and keep Chicago out of the 1984 World Series against the Detroit Tigers. The loss ended a spectacular season for
the Cubs, one that brought alive a slumbering franchise and made the Cubs relevant for a whole new generation of
Cubs fans. In 1989, the first full season with night baseball at Wrigley Field, Don Zimmer's Cubs were led by a core
group of veterans in Ryne Sandberg, Rick Sutcliffe and Andre Dawson, who were boosted by a crop of youngsters such
as Mark Grace, Shawon Dunston, Greg Maddux, Rookie of the Year Jerome Walton, and Rookie of the Year Runner-Up Dwight
Smith. The Cubs won the NL East once again that season winning 93 games. This time the Cubs met the San Francisco
Giants in the NLCS. After splitting the first two games at home, the Cubs headed to the Bay Area, where despite holding
a lead at some point in each of the next three games, bullpen meltdowns and managerial blunders ultimately led to
three straight losses. The Cubs couldn't overcome the efforts of Will Clark, whose home run off Maddux, just after
a managerial visit to the mound, led Maddux to think Clark knew what pitch was coming. Afterward, Maddux would speak
into his glove during any mound conversation, beginning what is a norm today. Mark Grace was 11-for-17 in the series
with 8 RBI. Eventually, the Giants lost to the "Bash Brothers" and the Oakland A's in the famous "Earthquake Series."
The '98 season would begin on a somber note with the death of legendary broadcaster Harry Caray. After the retirement
of Sandberg and the trade of Dunston, the Cubs had holes to fill and the signing of Henry Rodríguez, known affectionately
as "H-Rod" to bat cleanup provided protection for Sammy Sosa in the lineup, as Rodriguez slugged 31 round-trippers
in his first season in Chicago. Kevin Tapani led the club with a career high 19 wins, Rod Beck anchored a strong
bullpen and Mark Grace turned in one of his best seasons. The Cubs were swamped by media attention in 1998, and the
team's two biggest headliners were Sosa and rookie flamethrower Kerry Wood. Wood's signature performance was one-hitting
the Houston Astros, a game in which he tied the major league record of 20 strikeouts in nine innings. His torrid
strikeout numbers earned Wood the nickname "Kid K," and ultimately earned him the 1998 NL Rookie of the Year award.
Sosa caught fire in June, hitting a major league record 20 home runs in the month, and his home run race with Cardinals
slugger Mark McGwire transformed the pair into international superstars in a matter of weeks. McGwire finished the
season with a new major league record of 70 home runs, but Sosa's .308 average and 66 homers earned him the National
League MVP Award. After a down-to-the-wire Wild Card chase with the San Francisco Giants, Chicago and San Francisco
ended the regular season tied, and thus squared off in a one-game playoff at Wrigley Field in which third baseman
Gary Gaetti hit the eventual game-winning homer. The win propelled the Cubs into the postseason once again with a
90–73 regular season tally. Unfortunately, the bats went cold in October, as manager Jim Riggleman's club batted
.183 and scored only four runs en route to being swept by Atlanta. On a positive note, the home run chase between
Sosa, McGwire and Ken Griffey, Jr. helped professional baseball to bring in a new crop of fans as well as bringing
back some fans who had been disillusioned by the 1994 strike. The Cubs retained many players who experienced career
years in '98, and after a fast start in 1999, they collapsed again (starting with being swept at the hands of the
cross-town White Sox in mid-June) and finished in the bottom of the division for the next two seasons. Despite losing
fan favorite Grace to free agency, and the lack of production from newcomer Todd Hundley, skipper Don Baylor's Cubs
put together a good season in 2001. The season started with Mack Newton being brought in to preach "positive thinking."
One of the biggest stories of the season transpired as the club made a midseason deal for Fred McGriff, which was
drawn out for nearly a month as McGriff debated waiving his no-trade clause, as the Cubs led the wild card race by
2.5 games in early September. That run died when Preston Wilson hit a three-run walk-off homer off closer Tom
"Flash" Gordon, which halted the team's momentum. The team was unable to make another serious charge, and finished
at 88–74, five games behind both Houston and St. Louis, who tied for first. Sosa had perhaps his finest season and
Jon Lieber led the staff with a 20 win season. The Cubs had high expectations in 2002, but the squad played poorly.
On July 5, 2002 the Cubs promoted assistant general manager and player personnel director Jim Hendry to the General
Manager position. The club responded by hiring Dusty Baker and by making some major moves in '03. Most notably, they
traded with the Pittsburgh Pirates for outfielder Kenny Lofton and third baseman Aramis Ramírez, and rode dominant
pitching, led by Kerry Wood and Mark Prior, as the Cubs led the division down the stretch. After losing an extra-inning
game in Game 1, the Cubs rallied and took a 3 games to 1 lead over the Wild Card Florida Marlins in the NLCS. Florida
shut the Cubs out in Game 5, but young pitcher Mark Prior led the Cubs in Game 6 as they took a 3–0 lead into the
8th inning, at which point a now-infamous incident took place. Several spectators attempted to catch
a foul ball off the bat of Luis Castillo. A Chicago Cubs fan by the name of Steve Bartman, of Northbrook, Illinois,
reached for the ball and deflected it away from the glove of Moisés Alou for the second out of the 8th inning. Alou
reacted angrily toward the stands, and after the game stated that he would have caught the ball. Alou at one point
recanted, saying he would not have been able to make the play, but later said this was just an attempt to make Bartman
feel better and believing the whole incident should be forgotten. Interference was not called on the play, as the
ball was ruled to be on the spectator side of the wall. Castillo was eventually walked by Prior. Two batters later,
and to the chagrin of the packed stadium, Cubs shortstop Alex Gonzalez misplayed an inning-ending double play, loading
the bases and leading to eight Florida runs and a Marlin victory. Despite sending Kerry Wood to the mound and holding
a lead twice, the Cubs ultimately dropped Game 7, and failed to reach the World Series. In 2004, the Cubs were a
consensus pick by most media outlets to win the World Series. The offseason acquisition of Derrek Lee (who was acquired in a trade with Florida for Hee-seop Choi) and the return of Greg Maddux only bolstered these expectations. Despite
a mid-season deal for Nomar Garciaparra, misfortune struck the Cubs again. They led the Wild Card by 1.5 games over
San Francisco and Houston on September 25, and both of those teams lost that day, giving the Cubs a chance at increasing
the lead to a commanding 2.5 games with only eight games remaining in the season, but reliever LaTroy Hawkins blew
a save to the Mets, and the Cubs lost the game in extra innings, a defeat that seemingly deflated the team, as they
proceeded to drop 6 of their last 8 games as the Astros won the Wild Card. Despite the fact that the Cubs had won
89 games, this fallout was decidedly unlovable, as the Cubs traded superstar Sammy Sosa after he had left the season's
final game early and then lied about it publicly. Already a controversial figure in the clubhouse after his corked-bat
incident, Sammy's actions alienated much of his once strong fan base as well as the few teammates still on good terms
with him, (many teammates grew tired of Sosa playing loud salsa music in the locker room) and possibly tarnished
his place in Cubs' lore for years to come. The disappointing season also saw fans start to become frustrated with
the constant injuries to ace pitchers Mark Prior and Kerry Wood. Additionally, the '04 season led to the departure
of popular commentator Steve Stone, who had become increasingly critical of management during broadcasts and was
verbally attacked by reliever Kent Mercker. Things were no better in 2005, despite a career year from first baseman
Derrek Lee and the emergence of closer Ryan Dempster. The club struggled and suffered more key injuries, only managing
to win 79 games after being picked by many to be a serious contender for the N.L. pennant. In 2006, the bottom fell out as the Cubs finished 66–96, last in the NL Central. The club then re-tooled and went from "worst to first" in 2007. In the offseason they signed Alfonso Soriano to an eight-year, $136 million contract and replaced manager Dusty Baker with fiery veteran manager Lou Piniella. After a rough
start, which included a brawl between Michael Barrett and Carlos Zambrano, the Cubs overcame the Milwaukee Brewers,
who had led the division for most of the season, with winning streaks in June and July, coupled with a pair of dramatic,
late-inning wins against the Reds, and ultimately clinched the NL Central with a record of 85–77. The Cubs traded
Barrett to the Padres, and later acquired Jason Kendall from Oakland. Kendall was highly successful with his management
of the pitching rotation and helped at the plate as well. By September, Geovany Soto became the full-time starter
behind the plate, replacing the veteran Kendall. They met Arizona in the NLDS, but controversy followed as Piniella,
in a move that has since come under scrutiny, pulled Carlos Zambrano after the sixth inning of a pitcher's duel with
D-Backs ace Brandon Webb, to "...save Zambrano for (a potential) Game 4." The Cubs, however, were unable to come
through, losing the first game and eventually stranding over 30 baserunners in a 3-game Arizona sweep. The Cubs successfully
defended their National League Central title in 2008, going to the postseason in consecutive years for the first
time since 1906–08. The offseason was dominated by three months of unsuccessful trade talks with the Orioles involving
2B Brian Roberts, as well as the signing of Chunichi Dragons star Kosuke Fukudome. The team recorded their 10,000th
win in April, while establishing an early division lead. Reed Johnson and Jim Edmonds were added early on and Rich
Harden was acquired from the Oakland Athletics in early July. The Cubs headed into the All-Star break with the N.L.'s
best record, and tied the league record with eight representatives to the All-Star game, including catcher Geovany
Soto, who was named Rookie of the Year. The Cubs took control of the division by sweeping a four-game series in Milwaukee.
On September 14, in a game moved to Miller Park due to Hurricane Ike, Zambrano pitched a no-hitter against the Astros,
and six days later the team clinched by beating St. Louis at Wrigley. The club ended the season with a 97–64 record
and met Los Angeles in the NLDS. The heavily favored Cubs took an early lead in Game 1, but James Loney's grand slam
off Ryan Dempster changed the series' momentum. Chicago committed numerous critical errors and were outscored 20–6
in a Dodger sweep, which provided yet another sudden ending. The Ricketts family acquired a majority interest in
the Cubs in 2009, ending the Tribune years. Apparently handcuffed by the Tribune's bankruptcy and the sale of the
club to the Ricketts family, the Cubs' quest for an NL Central three-peat started with notice that there would be less
invested into contracts than in previous years. Chicago engaged St. Louis in a see-saw battle for first place into
August 2009, but the Cardinals played to a torrid 20–6 pace that month, relegating their rivals to the Wild Card race, from which the Cubs were eliminated in the season's final week. The Cubs were plagued by injuries in
2009, and were only able to field their Opening Day starting lineup three times the entire season. Third baseman
Aramis Ramírez injured his throwing shoulder in an early May game against the Milwaukee Brewers, sidelining him until
early July and forcing journeyman players like Mike Fontenot and Aaron Miles into more prominent roles. Additionally,
key players like Derrek Lee (who still managed to hit .306 with 35 HR and 111 RBI that season), Alfonso Soriano and
Geovany Soto also nursed nagging injuries. The Cubs posted a winning record (83–78) for the third consecutive season,
the first time the club had done so since 1972, and a new era of ownership under the Ricketts family was approved by MLB owners in early October. Rookie Starlin Castro debuted in early May 2010 as the starting shortstop. However,
the club played poorly in the early season, finding themselves 10 games under .500 at the end of June. In addition,
long-time ace Carlos Zambrano was pulled from a game against the White Sox on June 25 after a tirade and shoving
match with Derrek Lee, and was suspended indefinitely by Jim Hendry, who called the conduct "unacceptable." On August
22, Lou Piniella, who had already announced his retirement at the end of the season, announced that he would leave
the Cubs prematurely to take care of his sick mother. Mike Quade took over as the interim manager for the final 37
games of the year. Despite being well out of playoff contention, the Cubs went 24–13 under Quade, the best record in baseball during that 37-game stretch, and Quade had the interim tag removed on October 19. Despite trading
for pitcher Matt Garza and signing free-agent slugger Carlos Peña, the Cubs finished the 2011 season 20 games under
.500 with a record of 71-91. Weeks after the season came to an end, the club was rejuvenated in the form of a new
philosophy, as new owner Tom Ricketts signed Theo Epstein away from the Boston Red Sox, naming him club President
and giving him a five-year contract worth over $18 million, and subsequently discharged manager Mike Quade. Epstein,
a proponent of sabermetrics and one of the architects of two World Series titles in Boston, brought along Jed Hoyer to fill the role of GM and hired Dale Sveum as manager. Although the team had a dismal 2012 season, losing 101 games (the worst record since 1966), it was largely expected. The youth movement ushered in by Epstein and Hoyer began as
longtime fan favorite Kerry Wood retired in May, followed by Ryan Dempster and Geovany Soto being traded to Texas
at the All-Star break for a group of minor league prospects headlined by Christian Villanueva. The development of
Castro, Anthony Rizzo, Darwin Barney, Brett Jackson and pitcher Jeff Samardzija as well as the replenishing of the
minor-league system with prospects such as Javier Baez, Albert Almora, and Jorge Soler became the primary focus of
the season, a philosophy which the new management said would carry over at least through the 2013 season. The 2013 season played out much the same as the year before. Shortly before the trade deadline, the Cubs traded Matt Garza
to the Texas Rangers for Mike Olt, C. J. Edwards, Neil Ramirez, and Justin Grimm. Three days later, the Cubs sent
Alfonso Soriano to the New York Yankees for minor leaguer Corey Black. The midseason fire sale led to another last
place finish in the NL Central, finishing with a record of 66-96. Although there was a five-game improvement in the
record from the year before, Anthony Rizzo and Starlin Castro seemed to take steps backward in their development.
On September 30, 2013, Theo Epstein made the decision to fire manager Dale Sveum after just two seasons at the helm
of the Cubs. The regression of several young players was thought to be the main reason, as the front office
said Dale would not be judged based on wins and losses. In two seasons as skipper, Sveum finished with a record of
127-197. On November 2, 2014, the Cubs announced that Joe Maddon had signed a five-year contract to be the 54th manager
in team history. On December 10, 2014, Maddon announced that the team had signed free agent Jon Lester to a six-year,
$155 million contract. Many other trades and acquisitions occurred during the off season. The opening day lineup
for the Cubs contained five new players including rookie right fielder Jorge Soler. Rookies Kris Bryant and Addison
Russell were in the starting lineup by mid-April, and rookie Kyle Schwarber was added in mid-June. The Cubs finished
the 2015 season with a record of 97–65, third best in the majors. On October 7, in the 2015 National League Wild
Card Game, Jake Arrieta pitched a complete game shutout and the Cubs defeated the Pittsburgh Pirates 4–0. On September
23, 1908, the Cubs and New York Giants were involved in a tight pennant race. The two clubs were tied in the bottom
of the ninth inning at the Polo Grounds, and N.Y. had runners on first and third and two outs when Al Bridwell singled,
scoring Moose McCormick from third with the Giants' apparent winning run, but the runner on first base, rookie Fred
Merkle, left the field without touching second base. As fans swarmed the field, Cub infielder Johnny Evers retrieved
the ball and touched second. Since there were two outs, a forceout was called at second base, ending the inning and
the game. Because the contest ended in a tie, the Giants and Cubs finished the season tied for first place. The Giants lost the ensuing one-game
playoff and the Cubs went on to the World Series. On October 1, 1932, in game three of the World Series between the
Cubs and the New York Yankees, Babe Ruth allegedly stepped to the plate, pointed his finger to Wrigley Field's center
field bleachers and hit a long home run to center. There is speculation as to whether the "facts" surrounding the
story are true or not, but nevertheless Ruth did help the Yankees secure a World Series win that year and the home
run accounted for his 15th and last home run in the postseason before he retired in 1935. Hack Wilson set a record
of 56 home runs and 190 runs batted in during the 1930 season, breaking Lou Gehrig's MLB record of 176 RBI. (In 1999, a long-lost
extra RBI mistakenly credited to Charlie Grimm was found by Cooperstown researcher Cliff Kachline and verified
by historian Jerome Holtzman, increasing the record number to 191.) As of 2014 the record still stands, with no serious
threats coming since Gehrig (184) and Hank Greenberg (183) in the same era. The closest anyone has come to the mark
in the last 75 years was Manny Ramirez's 165 RBI in 1999. In addition to the RBI record, Wilson's 56 home runs stood
as the National League record until 1998, when Sammy Sosa and Mark McGwire hit 66 and 70, respectively. Wilson was
named "Most Useful" player that year by the Baseball Writers' Association of America, as the official N.L. Most Valuable
Player Award was not awarded until the next season. On April 25, 1976, at Dodger Stadium, father-and-son protestors
ran into the outfield and tried to set fire to a U.S. flag. When Cubs outfielder Rick Monday noticed the flag on
the ground and the man and boy fumbling with matches and lighter fluid, he dashed over and snatched the flag to thunderous
applause. When he came up to bat in the next half-inning, he got a standing ovation from the crowd, and the stadium
scoreboard flashed the message, "RICK MONDAY... YOU MADE A GREAT PLAY..." Monday later said, "If you're going to burn
the flag, don't do it around me. I've been to too many veterans' hospitals and seen too many broken bodies of guys
who tried to protect it." In June 1998, Sammy Sosa exploded into the pursuit of Roger Maris' home run record. Sosa
had 13 home runs entering the month, representing less than half of Mark McGwire's total. Sosa had his first of four
multi-home run games that month on June 1, and went on to break Rudy York's record with 20 home runs in the month,
a record that still stands. By the end of his historic month, the outfielder's 33 home runs tied him with Ken Griffey,
Jr. and left him only four behind McGwire's 37. Sosa finished with 66 and won the NL MVP Award. On April 23, 2008,
against the Colorado Rockies, the Cubs recorded the 10,000th regular-season win in their franchise's history dating
back to the beginning of the National League in 1876. The Cubs reached the milestone with an overall National League
record of 10,000-9,465. Chicago was only the second club in Major League Baseball history to attain this milestone,
the first having been the San Francisco Giants in mid-season 2005. The Cubs, however, hold the mark for victories
for a team in a single city. The Chicago club's 77–77 record in the National Association (1871, 1874–1875) is not
included in MLB record keeping. Post-season series are also not included in the totals. To honor the milestone, the
Cubs flew an extra white flag displaying "10,000" in blue, along with the customary "W" flag. In only his third career
start, Kerry Wood struck out 20 batters against Houston on May 6, 1998. This is the franchise record and tied for
the Major League record for the most strikeouts in one game by one pitcher (the only other pitcher to strike out
20 batters in a nine-inning game was Roger Clemens, who achieved it twice). The game is often considered among the
most dominant pitching performances of all time. Wood's first pitch struck home plate umpire Jerry Meals
in the facemask. Wood then struck out the first five batters he faced. Wood hit one batter, Craig Biggio, and allowed
one hit, a scratch single by Ricky Gutiérrez off third baseman Kevin Orie's glove. The play was nearly scored an
error, which would have given Wood a no-hitter. The Chicago Cubs have not won a World Series championship since 1908,
and have not appeared in the Fall Classic since 1945, although between their postseason appearance in 1984 and their
most recent in 2015, they have made the postseason seven times. At 107 seasons, theirs is the longest championship drought in
all four of the major North American professional sports leagues, which also include the National Football League
(NFL), the National Basketball Association (NBA), and the National Hockey League (NHL). In fact, the Cubs' last World
Series title occurred before those other three leagues even existed, and even the Cubs' last World Series appearance
predates the founding of the NBA. The much publicized drought was concurrent to championship droughts by the Boston
Red Sox and the Chicago White Sox, who both had over 80 years between championships. It is this unfortunate distinction
that has led to the club often being known as "The Lovable Losers." The team was one win away from breaking what
is often called the "Curse of the Billy Goat" in 1984 and 2003 (the Steve Bartman incident), but was unable to get the victory
that would send it to the World Series. On May 11, 2000, Glenallen Hill, facing Brewers starter Steve Woodard, became
the first, and thus far only, player to hit a pitched ball onto the roof of a five-story residential building across
Waveland Avenue, beyond Wrigley Field's left field wall. The shot was estimated at well over 500 feet (152 m), but the
Cubs fell to Milwaukee 12–8. No batted ball has ever hit the center field scoreboard, although the original "Slammin'
Sammy", golfer Sam Snead, hit it with a golf ball in an exhibition in the 1950s. In 1948, Bill Nicholson barely missed
the scoreboard when he launched a home run ball onto Sheffield Avenue and in 1959, Roberto Clemente came even closer
with a home run ball hit onto Waveland Avenue. In 2001, a Sammy Sosa shot landed across Waveland and bounced a block
down Kenmore Avenue. Dave Kingman hit a shot in 1979 that hit the third porch roof on the east side of Kenmore, estimated
at 555 feet (169 m), and is regarded as the longest home run in Wrigley Field history. On May 26, 2015, the Cubs
rookie third baseman, Kris Bryant, hit a home run that traveled an estimated 477 feet (145 m) off the park's new video
board in left field. Later the same year, he hit a homer that traveled 495 feet (151 m) and also ricocheted off the
video board. On October 13, 2015, Kyle Schwarber's 438-foot home run landed on the equally new right field video board.
Before signing a developmental agreement with the Kane County Cougars in 2012, the Cubs had a Class A minor league
affiliation on two occasions with the Peoria Chiefs (1985–1995 and 2004–2012). Ryne Sandberg managed the Chiefs from
2006 to 2010. In the period between those associations with the Chiefs the club had affiliations with the Dayton
Dragons and Lansing Lugnuts. The Lugnuts were often affectionately referred to by Chip Caray as "Steve Stone's favorite
team." The 2007 developmental contract with the Tennessee Smokies was preceded by Double A affiliations with the
Orlando Cubs and West Tenn Diamond Jaxx. On September 16, 2014 the Cubs announced a move of their top Class A affiliate
from Daytona in the Florida State League to Myrtle Beach in the Carolina League for the 2015 season. Two days later,
on the 18th, the Cubs signed a four-year player development contract with the South Bend Silver Hawks of the Midwest
League, ending their brief relationship with the Kane County Cougars and shortly thereafter renaming the Silver Hawks
the South Bend Cubs. The Chicago White Stockings (today's Chicago Cubs) began spring training in Hot Springs, Arkansas,
in 1886. President Albert Spalding (founder of Spalding Sporting Goods) and player/manager Cap Anson brought their
players to Hot Springs and played at the Hot Springs Baseball Grounds. The concept was for the players to have training
and fitness before the start of the regular season. After the White Stockings had a successful season in 1886, winning
the National League Pennant, other teams began bringing their players to "spring training". The Chicago Cubs, St.
Louis Browns, New York Yankees, St. Louis Cardinals, Cleveland Spiders, Detroit Tigers, Pittsburgh Pirates, Cincinnati
Reds, New York Highlanders, Brooklyn Dodgers and Boston Red Sox were among the early squads to arrive. Whittington
Park (1894) and later Majestic Park (1909) and Fogel Field (1912) were all built in Hot Springs specifically to host
Major League teams. The Cubs' current spring training facility is Sloan Park in Mesa, Arizona, where
they play in the Cactus League. The park seats 15,000, making it Major League Baseball's largest spring training
facility by capacity. The Cubs annually sell out most of their games both at home and on the road. Before Sloan Park
opened in 2014, the team had played its games at HoHoKam Park - Dwight Patterson Field since 1979. "HoHoKam" is a Native
American word often translated as "those who vanished." The North Siders have called Mesa their spring home for
most seasons since 1952. In addition to Mesa, the club has held spring training in Hot Springs, Arkansas (1886, 1896–1900,
1909–1910); New Orleans (1870, 1907, 1911–1912); Champaign, Illinois (1901–02, 1906); Los Angeles (1903–04, 1948–1949);
Santa Monica, California (1905); French Lick, Indiana (1908, 1943–1945); Tampa, Florida (1913–1916); Pasadena, California
(1917–1921); Santa Catalina Island, California (1922–1942, 1946–1947, 1950–1951); Rendezvous Park in Mesa (1952–1965);
Blair Field in Long Beach, California (1966); and Scottsdale, Arizona (1967–1978). The curious location on Catalina
Island stemmed from Cubs owner William Wrigley Jr.'s then-majority interest in the island in 1919. Wrigley constructed
a ballpark on the island to house the Cubs in spring training: it was built to the same dimensions as Wrigley Field.
(The ballpark is long gone, but a clubhouse built by Wrigley to house the Cubs exists as the Catalina Country Club.)
However, by 1951 the team chose to leave Catalina Island and spring training was shifted to Mesa, Arizona. The Cubs'
30-year association with Catalina is chronicled in the book The Cubs on Catalina by Jim Vitti, which was named
International 'Book of the Year' by The Sporting News. The former location in Mesa is actually the second HoHoKam
Park; the first was built in 1976 as the spring-training home of the Oakland Athletics who left the park in 1979.
Apart from HoHoKam Park and Sloan Park, the Cubs also have another Mesa training facility called Fitch Park. This
complex provides 25,000 square feet (2,300 m2) of team facilities, including a major league clubhouse, four practice
fields, one practice infield, enclosed batting tunnels, batting cages, a maintenance facility, and administrative
offices for the Cubs. Jack Brickhouse manned the Cubs radio and especially the TV booth for parts of five decades,
the 34-season span from 1948 to 1981. He covered the games with a level of enthusiasm that often seemed unjustified
by the team's poor performance on the field for many of those years. His trademark call "Hey Hey!" always followed
a home run. That expression is spelled out in large letters vertically on both foul pole screens at Wrigley Field.
"Whoo-boy!" and "Wheeee!" and "Oh, brother!" were among his other pet expressions. When he approached retirement
age, he personally recommended his successor. Harry Caray's stamp on the team is perhaps even deeper than that of
Brickhouse, although his 16-year tenure, from 1982 to 1997, was roughly half as long. First, Caray had already become a well-known
Chicago figure by broadcasting White Sox games for a decade, after having been a St. Louis Cardinals icon for 25 years.
Caray also had the benefit of being in the booth during the NL East title run in 1984, which was widely seen due
to WGN's status as a cable-TV superstation. His trademark call of "Holy Cow!" and his enthusiastic singing of "Take
Me Out to the Ball Game" during the 7th-inning stretch (as he had done with the White Sox) made Caray a fan favorite
both locally and nationally. Caray had lively discussions with commentator Steve Stone, who was hand-picked by Harry
himself, and producer Arne Harris. Caray often playfully quarreled with Stone over Stone's cigar and why Stone was
single, while Stone would counter with poking fun at Harry being "under the influence." Stone disclosed in his book
"Where's Harry" that most of this "arguing" was staged, and usually a ploy developed by Harry himself to add flavor
to the broadcast. The Cubs still have a "guest conductor", usually a celebrity, lead the crowd in singing "Take Me
Out to the Ball Game" during the 7th-inning stretch to honor Caray's memory. In 1981, after six decades under the Wrigley
family, the Cubs were purchased by Tribune Company for $20,500,000. Tribune, owners of the Chicago Tribune, Los Angeles
Times, WGN Television, WGN Radio and many other media outlets, controlled the club until December 2007, when Sam
Zell completed his purchase of the entire Tribune organization and announced his intention to sell the baseball team.
After a nearly two-year process which involved potential buyers such as Mark Cuban and a group led by Hank Aaron,
a family trust of TD Ameritrade founder Joe Ricketts won the bidding process as the 2009 season came to a close.
Ultimately, the sale was unanimously approved by MLB owners and the Ricketts family took control on October 27, 2009.
"Baseball's Sad Lexicon," also known as "Tinker to Evers to Chance" after its refrain, is a 1910 baseball poem by
Franklin Pierce Adams. The poem is presented as a single, rueful stanza from the point of view of a New York Giants
fan seeing the talented Chicago Cubs infield of shortstop Joe Tinker, second baseman Johnny Evers, and first baseman
Frank Chance complete a double play. The trio began playing together with the Cubs in 1902, and formed a double play
combination that lasted through April 1912. The Cubs won the pennant four times between 1906 and 1910, often defeating
the Giants en route to the World Series. The official Cubs team mascot is a young bear cub, named Clark, described
by the team's press release as a young and friendly Cub. Clark made his debut at Advocate Health Care on January
13, 2014, the same day as the press release announcing his installation as the club's first ever official physical
mascot. A bear cub had been used in the club's imagery since the early 1900s and was the inspiration for the Chicago Staleys
changing their team's name to the Chicago Bears, due to the Cubs allowing the football team to play at Wrigley Field
in the 1930s. The Cubs had no official physical mascot prior to Clark, though a man in a polar bear-like outfit,
called "The Bear-man" (or Beeman), which was mildly popular with the fans, paraded the stands briefly in the early
1990s. There is no record of whether he was just a fan in a costume or was employed by the club. Through the 2013
season, there were "Cubbie-bear" mascots outside of Wrigley on game days, but none were employed by the team. They
pose for pictures with fans for tips. The most notable of these was "Billy Cub", who worked outside the stadium
for over six years until July 2013, when the club asked him to stop. Billy Cub, who is played by fan John Paul
Weier, had unsuccessfully petitioned the team to become the official mascot. Another unofficial but much better-known
mascot is Ronnie "Woo Woo" Wickers who is a longtime fan and local celebrity in the Chicago area. He is known to
Wrigley Field visitors for his idiosyncratic cheers at baseball games, generally punctuated with an exclamatory "Woo!"
(e.g., "Cubs, woo! Cubs, woo! Big-Z, woo! Zambrano, woo! Cubs, woo!") Longtime Cubs announcer Harry Caray dubbed
Wickers "Leather Lungs" for his ability to shout for hours at a time. He is not employed by the team, although the
club has on two separate occasions allowed him into the broadcast booth, and he is granted some degree of freedom once
he purchases, or is given by fans, a ticket to get into the games. He is largely allowed to roam the park and interact
with fans by Wrigley Field security. Located in Chicago's Lake View neighborhood, Wrigley Field sits on an irregular
block bounded by Clark and Addison Streets and Waveland and Sheffield Avenues. The area surrounding the ballpark
is typically referred to as Wrigleyville. There is a dense collection of sports bars and restaurants in the area,
most with baseball inspired themes, including Sluggers, Murphy's Bleachers and The Cubby Bear. Many of the apartment
buildings surrounding Wrigley Field on Waveland and Sheffield Avenues have built bleachers on their rooftops for
fans to view games, and others sell space for advertisements. One building on Sheffield Avenue has a sign atop its roof
which says "Eamus Catuli!" which is Latin for "Let's Go Cubs!" and another chronicles the time since the last Division
title, pennant, and World Series championship. In 2010, for example, its "02" denoted two years since the 2008 NL Central title, 65 years
since the 1945 pennant, and 102 years since the 1908 World Series championship. On game days, many residents rent
out their yards and driveways to people looking for parking spots. The unique character of the neighborhood has become
ingrained in the culture of the Chicago Cubs and the Wrigleyville area, and the ballpark has also been
used for concerts and other sporting events, such as the 2009 NHL Winter Classic between the Chicago Blackhawks and
Detroit Red Wings, as well as a 2010 NCAA men's football game between the Northwestern Wildcats and Illinois Fighting
Illini. In 2013, Tom Ricketts and team president Crane Kenney unveiled plans for a five-year, $575 million privately
funded renovation of Wrigley Field. Called the 1060 Project, the proposed plans included vast improvements to the
stadium's facade, infrastructure, restrooms, concourses, suites, press box, bullpens, and clubhouses, as well as
a 6,000-square foot jumbotron to be added in the left field bleachers, batting tunnels, a 3,000-square-foot video
board in right field, and, eventually, an adjacent hotel, plaza, and office-retail complex. In previous years, most
efforts to conduct large-scale renovations to the field had been opposed by the city, former mayor Richard
M. Daley (a staunch White Sox fan), and especially the rooftop owners. The "Bleacher Bums" is a name given to fans,
many of whom spend much of the day heckling, who sit in the bleacher section at Wrigley Field. Initially, the group
was called "bums" because it referred to a group of fans who were at most games, and since those games were all day
games, it was assumed they did not work. Many of those fans were, and are still, students at Chicago area colleges,
such as DePaul University, Loyola, Northwestern University, and Illinois-Chicago. A Broadway play starring Joe Mantegna,
Dennis Farina, Dennis Franz, and James Belushi ran for years and was based on a group of Cubs fans who frequented
the club's games. The group was started in 1967 by dedicated fans Ron Grousl, Tom Nall and "mad bugler" Mike Murphy,
who was a midday sports radio host on Chicago-based WSCR AM 670 "The Score". Murphy alleges that Grousl
started the Wrigley tradition of throwing back opposing teams' home run balls. The current group is headed by Derek
Schaul (Derek the Five Dollar Kid). Prior to the 2006 season, the bleachers were updated, with new shops and a private bar (The
Batter's Eye) being added, and Bud Light bought naming rights to the bleacher section, dubbing them the Bud Light
Bleachers. Bleachers at Wrigley are general admission, except during the playoffs. The bleachers have been referred
to as the "World's Largest Beer Garden." A popular T-shirt (sold inside the park and licensed by the club) which
says "Wrigley Bleachers" on the front and the phrase "Shut Up and Drink Your Beer" on the reverse fuels this stereotype.
In 1975, a group of Chicago Cubs fans based in Washington, D.C. formed the Emil Verban Society. The society is a
select club of high-profile Cubs fans, currently headed by Illinois Senator Dick Durbin, and is named for Emil Verban,
who in three seasons with the Cubs in the 1940s batted .280 with 39 runs batted in and one home run. Verban was picked
as the epitome of a Cub player, explains columnist George Will, because "He exemplified mediocrity under pressure,
he was competent but obscure and typifying of the work ethics." Verban initially believed he was being ridiculed,
but his ill feeling disappeared several years later when he was flown to Washington to meet President Ronald Reagan,
also a society member, at the White House. Hillary Clinton, Jim Belushi, Joe Mantegna, Rahm Emanuel, Dick Cheney
and many others have been included among its membership. During the summer of 1969, a Chicago studio group produced
a single record called "Hey Hey! Holy Mackerel! (The Cubs Song)" whose title and lyrics incorporated the catch-phrases
of the respective TV and radio announcers for the Cubs, Jack Brickhouse and Vince Lloyd. Several members of the Cubs
recorded an album called Cub Power which contained a cover of the song. The song received a good deal of local airplay
that summer, associating it very strongly with that bittersweet season. It was played much less frequently thereafter,
although it remained an unofficial Cubs theme song for some years after. An album entitled Take Me Out to a Cubs
Game was released in 2008. It is a collection of 17 songs and other recordings related to the team, including Harry
Caray's final performance of "Take Me Out to the Ball Game" on September 21, 1997, the Steve Goodman song mentioned
above, and a newly recorded rendition of "Talkin' Baseball" (subtitled "Baseball and the Cubs") by Terry Cashman.
The album was produced in celebration of the 100th anniversary of the Cubs' 1908 World Series victory and contains
sounds and songs of the Cubs and Wrigley Field. The 1989 film Back to the Future Part II depicts the Chicago Cubs
defeating a baseball team from Miami in the 2015 World Series, ending the longest championship drought in all four
of the major North American professional sports leagues. In 2015, the Miami Marlins failed to make the playoffs, but
the Cubs won the 2015 National League Wild Card Game and advanced to the 2015 National League Championship
Series by October 21, 2015, the date to which protagonist Marty McFly traveled in the film. However, it
was on October 21 that the Cubs were swept by the New York Mets in the NLCS.
The Korean War (in South Korean Hangul: 한국전쟁, Hanja: 韓國戰爭, Hanguk Jeonjaeng, "Korean War"; in North Korean Chosungul: 조국해방전쟁,
Joguk Haebang Jeonjaeng, "Fatherland Liberation War"; 25 June 1950 – 27 July 1953) began when North Korea
invaded South Korea. The United Nations, with the United States as the principal force, came to the aid of South Korea. China,
along with assistance from the Soviet Union, came to the aid of North Korea. The war arose from the division of Korea at
the end of World War II and from the global tensions of the Cold War that developed immediately afterwards. Korea
was ruled by Japan from 1910 until the closing days of World War II. In August 1945, the Soviet Union declared war
on Japan and—by agreement with the United States—occupied Korea north of the 38th parallel. U.S. forces subsequently
occupied the south and Japan surrendered. By 1948, two separate governments had been set up. Both governments claimed
to be the legitimate government of Korea, and neither side accepted the border as permanent. The conflict escalated
into open warfare when North Korean forces—supported by the Soviet Union and China—invaded South Korea on 25 June
1950. On that day, the United Nations Security Council recognized this North Korean act as an invasion and called for
an immediate ceasefire. On 27 June, the Security Council adopted S/RES/83: Complaint of aggression upon the Republic
of Korea, and authorized the formation and dispatch of UN forces to Korea. Twenty-one countries of the United Nations
eventually contributed to the defense of South Korea, with the United States providing 88% of the UN's military personnel.
After the first two months of the conflict, South Korean forces were on the point of defeat, forced back to the Pusan
Perimeter. In September 1950, an amphibious UN counter-offensive was launched at Inchon, and cut off many of the
North Korean attackers. Those that escaped envelopment and capture were rapidly forced back north all the way to
the border with China at the Yalu River, or into the mountainous interior. At this point, in October 1950, Chinese
forces crossed the Yalu and entered the war. Chinese intervention triggered a retreat of UN forces which continued
until mid-1951. After these dramatic reversals of fortune, which saw Seoul change hands four times, the last two
years of conflict became a war of attrition, with the front line close to the 38th parallel. The war in the air,
however, was never a stalemate. North Korea was subject to a massive bombing campaign. Jet fighters confronted each
other in air-to-air combat for the first time in history, and Soviet pilots covertly flew in defense of their Communist
allies. In China, the war is officially called the "War to Resist U.S. Aggression and Aid Korea" (simplified Chinese:
抗美援朝战争; traditional Chinese: 抗美援朝戰爭; pinyin: Kàngměiyuáncháo zhànzhēng), although the term "Chaoxian (Korean) War"
(simplified Chinese: 朝鲜战争; traditional Chinese: 朝鮮戰爭; pinyin: Cháoxiǎn zhànzhēng) is also used in unofficial contexts,
along with the term "Korean Conflict" (simplified Chinese: 韩战; traditional Chinese: 韓戰; pinyin: Hán Zhàn) more commonly
used in regions such as Hong Kong and Macau. Korea was considered to be part of the Empire of Japan as an industrialized
colony along with Taiwan, and both were part of the Greater East Asia Co-Prosperity Sphere. In 1937, the colonial
Governor-General, General Jirō Minami, commanded the attempted cultural assimilation of Korea's 23.5 million people
by banning the use and study of the Korean language, literature, and culture, and mandating the use
and study of their Japanese counterparts. Starting in 1939, the populace was required to use Japanese names under
the Sōshi-kaimei policy. Conscription of Koreans for labor in war industries began in 1939, with as many as 2 million
Koreans conscripted into either the Japanese Army or into the Japanese labor force. During World War II, Japan used
Korea's food, livestock, and metals for their war effort. Japanese forces in Korea increased from 46,000 soldiers
in 1941 to 300,000 in 1945. Japan conscripted 2.6 million forced laborers in Korea, controlled by a collaborationist
Korean police force; some 723,000 people were sent to work in the overseas empire and in metropolitan Japan. By 1942,
Korean men were being conscripted into the Imperial Japanese Army. By January 1945, Koreans made up 32% of Japan's
labor force. At the end of the war, other world powers did not recognize Japanese rule in Korea and Taiwan. On the
night of 10 August 1945 in Washington, American Colonels Dean Rusk and Charles H. Bonesteel III were tasked with dividing
the Korean Peninsula into Soviet and U.S. occupation zones and proposed the 38th parallel. This was incorporated
into America's General Order No. 1 which responded to the Japanese surrender on 15 August. Explaining the choice
of the 38th parallel, Rusk observed, "even though it was further north than could be realistically reached by U.S.
forces, in the event of Soviet disagreement...we felt it important to include the capital of Korea in the area of
responsibility of American troops". He noted that he was "faced with the scarcity of US forces immediately available,
and time and space factors, which would make it difficult to reach very far north, before Soviet troops could enter
the area". As Rusk's comments indicate, the Americans doubted whether the Soviet government would agree to this.
Stalin, however, maintained his wartime policy of co-operation, and on 16 August the Red Army halted at the 38th
parallel for three weeks to await the arrival of U.S. forces in the south. On 8 September 1945, U.S. Lt. Gen. John
R. Hodge arrived in Incheon to accept the Japanese surrender south of the 38th parallel. Appointed as military governor,
General Hodge directly controlled South Korea as head of the United States Army Military Government in Korea (USAMGIK
1945–48). He established control by restoring to power the key Japanese colonial administrators, but in the face
of Korean protests he quickly reversed this decision. The USAMGIK refused to recognize the provisional government
of the short-lived People's Republic of Korea (PRK) because it suspected it was communist. On 23 September 1946,
an 8,000-strong railroad worker strike began in Pusan. Civil disorder spread throughout the country in what became
known as the Autumn uprising. On 1 October 1946, Korean police killed three students in the Daegu Uprising; protesters
counter-attacked, killing 38 policemen. On 3 October, some 10,000 people attacked the Yeongcheon police station,
killing three policemen and injuring some 40 more; elsewhere, some 20 landlords and pro-Japanese South Korean officials
were killed. The USAMGIK declared martial law. Citing the inability of the Joint Commission to make progress, the
U.S. government decided to hold an election under United Nations auspices with the aim of creating an independent
Korea. The Soviet authorities and the Korean Communists refused to co-operate on the grounds it would not be fair,
and many South Korean politicians also boycotted it. A general election was held in the South on 10 May 1948. It
was marred by terrorism and sabotage resulting in 600 deaths. North Korea held parliamentary elections three months
later on 25 August. The resultant South Korean government promulgated a national political constitution on 17 July
1948, and elected Syngman Rhee as President on 20 July 1948. The Republic of Korea (South Korea) was established
on 15 August 1948. In the Russian Korean Zone of Occupation, the Soviet Union established a Communist North Korean
government led by Kim Il-sung. President Rhee's régime excluded communists and leftists from southern politics. Disenfranchised,
they headed for the hills, to prepare for guerrilla war against the US-sponsored ROK Government. With the end of
the war with Japan, the Chinese Civil War resumed between the Chinese Communists and the Chinese Nationalists. While
the Communists were struggling for supremacy in Manchuria, they were supported by the North Korean government with
matériel and manpower. According to Chinese sources, the North Koreans donated 2,000 railway cars worth of matériel
while thousands of Koreans served in the Chinese People's Liberation Army (PLA) during the war. North Korea also
provided the Chinese Communists in Manchuria with a safe refuge for non-combatants and communications with the rest
of China. The North Korean contributions to the Chinese Communist victory were not forgotten after the creation of
the People's Republic of China in 1949. As a token of gratitude, between 50,000 and 70,000 Korean veterans that served
in the PLA were sent back along with their weapons, and they later played a significant role in the initial invasion
of South Korea. China promised to support the North Koreans in the event of a war against South Korea. The Chinese
support created a deep division among the Korean Communists, and Kim Il-sung's authority within the Communist party
was challenged by the Chinese faction led by Pak Il-yu, who was later purged by Kim. After the formation of the People's
Republic of China in 1949, the Chinese government named the Western nations, led by the United States, as the biggest
threat to its national security. Basing this judgment on China's century of humiliation beginning in the early 19th
century, American support for the Nationalists during the Chinese Civil War, and the ideological struggles between
revolutionaries and reactionaries, the Chinese leadership believed that China would become a critical battleground
in the United States' crusade against Communism. As a countermeasure and to elevate China's standing among the worldwide
Communist movements, the Chinese leadership adopted a foreign policy that actively promoted Communist revolutions
throughout territories on China's periphery. By spring 1950, Stalin believed the strategic situation had changed.
The Soviets had detonated their first nuclear bomb in September 1949; American soldiers had fully withdrawn from
Korea; the Americans had not intervened to stop the communist victory in China, and Stalin calculated that the Americans
would be even less willing to fight in Korea—which had seemingly much less strategic significance. The Soviets had
also cracked the codes used by the US to communicate with the US embassy in Moscow, and reading these dispatches
convinced Stalin that Korea did not have the importance to the US that would warrant a nuclear confrontation. Stalin
began a more aggressive strategy in Asia based on these developments, including promising economic and military aid
to China through the Sino–Soviet Friendship, Alliance, and Mutual Assistance Treaty. In April 1950, Stalin gave Kim
permission to invade the South under the condition that Mao would agree to send reinforcements if they became needed.
Stalin made it clear that Soviet forces would not openly engage in combat, to avoid a direct war with the Americans.
Kim met with Mao in May 1950. Mao was concerned that the Americans would intervene but agreed to support the North
Korean invasion. China desperately needed the economic and military aid promised by the Soviets. At that time, the
Chinese were in the process of demobilizing half of the PLA's 5.6 million soldiers. However, Mao sent more ethnic
Korean PLA veterans to Korea and promised to move an army closer to the Korean border. Once Mao's commitment was
secured, preparations for war accelerated. Soviet generals with extensive combat experience from the Second World
War were sent to North Korea as the Soviet Advisory Group. These generals completed the plans for the attack by May.
The original plans called for a skirmish to be initiated in the Ongjin Peninsula on the west coast of Korea. The
North Koreans would then launch a "counterattack" that would capture Seoul and encircle and destroy the South Korean
army. The final stage would involve destroying the remnants of the South Korean government and capturing the rest of South Korea, including its ports. On 7 June 1950, Kim Il-sung called for a Korea-wide election on 5–8 August 1950 and a consultative
conference in Haeju on 15–17 June 1950. On 11 June, the North sent three diplomats to the South, as a peace overture
that Rhee rejected. On 21 June, Kim Il-sung revised his war plan to involve a general attack across the 38th parallel,
rather than a limited operation in the Ongjin peninsula. Kim was concerned that South Korean agents had learned about
the plans and South Korean forces were strengthening their defenses. Stalin agreed to this change of plan. While
these preparations were underway in the North, there were frequent clashes along the 38th parallel, especially at
Kaesong and Ongjin, many initiated by the South. The Republic of Korea Army (ROK Army) was being trained by the U.S.
Korean Military Advisory Group (KMAG). On the eve of war, KMAG's commander General William Lynn Roberts voiced utmost
confidence in the ROK Army and boasted that any North Korean invasion would merely provide "target practice". For
his part, Syngman Rhee repeatedly expressed his desire to conquer the North, including when American diplomat John
Foster Dulles visited Korea on 18 June. At dawn on Sunday, 25 June 1950, the Korean People's Army crossed the 38th
parallel behind artillery fire. The KPA justified its assault with the claim that ROK troops had attacked first,
and that they were aiming to arrest and execute the "bandit traitor Syngman Rhee". Fighting began on the strategic
Ongjin peninsula in the west. There were initial South Korean claims that they had captured the city of Haeju, and
this sequence of events has led some scholars to argue that the South Koreans actually fired first. On 27 June, Rhee
evacuated from Seoul with some of the government. On 28 June, at 2 am, the South Korean Army blew up the highway
bridge across the Han River in an attempt to stop the North Korean army. The bridge was detonated while 4,000 refugees were crossing it, and hundreds were killed. The demolition also trapped many South Korean military
units north of the Han River. In spite of such desperate measures, Seoul fell that same day. A number of South Korean
National Assemblymen remained in Seoul when it fell, and forty-eight subsequently pledged allegiance to the North.
One facet of the changing attitude toward Korea and whether to get involved was Japan. Especially after the fall
of China to the Communists, U.S. East Asian experts saw Japan as the critical counterweight to the Soviet Union and
China in the region. While there was no United States policy that dealt with South Korea directly as a national interest,
its proximity to Japan increased the importance of South Korea. Said Kim: "The recognition that the security of Japan
required a non-hostile Korea led directly to President Truman's decision to intervene... The essential point... is
that the American response to the North Korean attack stemmed from considerations of US policy toward Japan." A major
consideration was the possible Soviet reaction in the event that the US intervened. The Truman administration was
fretful that a war in Korea was a diversionary assault that would escalate to a general war in Europe once the United
States committed in Korea. At the same time, "[t]here was no suggestion from anyone that the United Nations or the
United States could back away from [the conflict]". Yugoslavia, a possible Soviet target because of the Tito–Stalin split, was vital to the defense of Italy and Greece, and the country was first on the National Security Council's post-invasion list of "chief danger spots". Truman believed that if aggression went unchecked, a
chain reaction would be initiated that would marginalize the United Nations and encourage Communist aggression elsewhere.
The UN Security Council approved the use of force to help the South Koreans, and the US immediately began using what air and naval forces were in the area to that end. The administration still refrained from committing on the
ground because some advisers believed the North Koreans could be stopped by air and naval power alone. On 25 June
1950, the United Nations Security Council unanimously condemned the North Korean invasion of the Republic of Korea,
with UN Security Council Resolution 82. The Soviet Union, a veto-wielding power, had boycotted the Council meetings
since January 1950, protesting that the Republic of China (Taiwan), not the People's Republic of China, held a permanent
seat in the UN Security Council. After debating the matter, the Security Council, on 27 June 1950, published Resolution
83 recommending member states provide military assistance to the Republic of Korea. On 27 June President Truman ordered
U.S. air and sea forces to help the South Korean regime. On 4 July the Soviet Deputy Foreign Minister accused the
United States of starting armed intervention on behalf of South Korea. The Soviet Union challenged the legitimacy
of the war for several reasons. The ROK Army intelligence upon which Resolution 83 was based came from U.S. Intelligence;
North Korea was not invited as a sitting temporary member of the UN, which violated UN Charter Article 32; and the
Korean conflict was beyond the UN Charter's scope, because the initial north–south border fighting was classed as
a civil war. Because the Soviet Union was boycotting the Security Council at the time, legal scholars posited that
deciding upon an action of this type required the unanimous vote of the five permanent members. By mid-1950, North
Korean forces numbered between 150,000 and 200,000 troops, organized into 10 infantry divisions, one tank division, and one air force division with 210 fighter planes and 280 tanks; these forces captured their scheduled objectives and territory, among them Kaesong, Chuncheon, Uijeongbu, and Ongjin. Their forces included 274 T-34-85 tanks, 200 artillery pieces,
110 attack bombers, some 150 Yak fighter planes, 78 Yak trainers, and 35 reconnaissance aircraft. In addition to
the invasion force, the North KPA had 114 fighters, 78 bombers, 105 T-34-85 tanks, and some 30,000 soldiers stationed
in reserve in North Korea. Although each navy consisted of only several small warships, the North and South Korean
navies fought in the war as sea-borne artillery for their in-country armies. In contrast, the ROK Army defenders
were relatively unprepared and ill-equipped. In South to the Naktong, North to the Yalu (1961), R.E. Appleman reports
the ROK forces' low combat readiness as of 25 June 1950. The ROK Army had 98,000 soldiers (65,000 combat, 33,000
support), no tanks (they had been requested from the U.S. military, but requests were denied), and a 22-piece air
force comprising 12 liaison-type and 10 AT6 advanced-trainer airplanes. There were no large foreign military garrisons
in Korea at the time of the invasion, but there were large U.S. garrisons and air forces in Japan. On Saturday, 24
June 1950, U.S. Secretary of State Dean Acheson informed President Truman that the North Koreans had invaded South
Korea. Truman and Acheson discussed a U.S. response to the invasion and agreed that the United States was obligated to act, comparing the North Korean invasion with Adolf Hitler's aggressions in the 1930s and concluding that the mistake of appeasement must not be repeated. Several U.S. industries were mobilized to supply materials, labor,
capital, production facilities, and other services necessary to support the military objectives of the Korean War.
However, President Truman later acknowledged that he believed fighting the invasion was essential to the American
goal of the global containment of communism as outlined in the National Security Council Report 68 (NSC-68), declassified in 1975. As an initial response, Truman called for a naval blockade of North Korea, and was shocked to learn that
such a blockade could be imposed only 'on paper', since the U.S. Navy no longer had the warships with which to carry
out his request. In fact, because of the extensive defense cuts and the emphasis placed on building a nuclear bomber
force, none of the services were in a position to make a robust response with conventional military strength. General
Omar Bradley, Chairman of the Joint Chiefs of Staff, was faced with re-organizing and deploying an American military
force that was a shadow of its World War II counterpart. The impact of the Truman administration's defense budget
cutbacks was now keenly felt, as American troops fought a series of costly rearguard actions. Lacking sufficient
anti-tank weapons, artillery or armor, they were driven back down the Korean peninsula to Pusan. In a postwar analysis
of the unpreparedness of U.S. Army forces deployed to Korea during the summer and fall of 1950, Army Major General
Floyd L. Parks stated that "Many who never lived to tell the tale had to fight the full range of ground warfare from
offensive to delaying action, unit by unit, man by man ... [T]hat we were able to snatch victory from the jaws of
defeat ... does not relieve us from the blame of having placed our own flesh and blood in such a predicament." Acting
on Secretary of State Acheson's recommendation, President Truman ordered General MacArthur to transfer matériel to the
Army of the Republic of Korea while giving air cover to the evacuation of U.S. nationals. The President disagreed
with advisers who recommended unilateral U.S. bombing of the North Korean forces, and ordered the US Seventh Fleet
to protect the Republic of China (Taiwan), whose government asked to fight in Korea. The United States denied the ROC's
request for combat, lest it provoke a communist Chinese retaliation. Because the United States had sent the Seventh
Fleet to "neutralize" the Taiwan Strait, Chinese premier Zhou Enlai criticized both the UN and U.S. initiatives as
"armed aggression on Chinese territory." The Battle of Osan, the first significant American engagement of the Korean
War, involved the 540-soldier Task Force Smith, which was a small forward element of the 24th Infantry Division which
had been flown in from Japan. On 5 July 1950, Task Force Smith attacked the North Koreans at Osan but lacked weapons capable of destroying their tanks. The task force was unsuccessful, suffering 180 dead, wounded, or taken
prisoner. The KPA progressed southwards, pushing back the US force at Pyongtaek, Chonan, and Chochiwon, forcing the
24th Division's retreat to Taejeon, which the KPA captured in the Battle of Taejon; the 24th Division suffered 3,602
dead and wounded and 2,962 captured, including the Division's Commander, Major General William F. Dean. By August,
the KPA had pushed back the ROK Army and the Eighth United States Army to the vicinity of Pusan in southeast Korea.
In their southward advance, the KPA purged the Republic of Korea's intelligentsia by killing civil servants and intellectuals.
On 20 August, General MacArthur warned North Korean leader Kim Il-sung that he was responsible for the KPA's atrocities.
By September, the UN Command controlled the Pusan perimeter, enclosing about 10% of Korea, in a line partially defined
by the Nakdong River. Although Kim's early successes had led him to predict that he would end the war by the end
of August, Chinese leaders were more pessimistic. To counter a possible U.S. deployment, Zhou Enlai secured a Soviet
commitment to have the Soviet Union support Chinese forces with air cover, and deployed 260,000 soldiers along the
Korean border, under the command of Gao Gang. Zhou commanded Chai Chengwen to conduct a topographical survey of Korea,
and directed Lei Yingfu, Zhou's military advisor in Korea, to analyze the military situation in Korea. Lei concluded
that MacArthur would most likely attempt a landing at Incheon. After conferring with Mao that this would be MacArthur's
most likely strategy, Zhou briefed Soviet and North Korean advisers of Lei's findings, and issued orders to Chinese
army commanders deployed on the Korean border to prepare for American naval activity in the Korea Strait. In the
resulting Battle of Pusan Perimeter (August–September 1950), the U.S. Army withstood KPA attacks meant to capture
the city at the Naktong Bulge, P'ohang-dong, and Taegu. The United States Air Force (USAF) interrupted KPA logistics
with 40 daily ground support sorties that destroyed 32 bridges, halting most daytime road and rail traffic. KPA forces
were forced to hide in tunnels by day and move only at night. To deny matériel to the KPA, the USAF destroyed logistics
depots, petroleum refineries, and harbors, while the U.S. Navy air forces attacked transport hubs. Consequently,
the over-extended KPA could not be supplied throughout the south. On 27 August, 67th Fighter Squadron aircraft mistakenly
attacked facilities in Chinese territory and the Soviet Union called the UN Security Council's attention to China's
complaint about the incident. The US proposed that a commission of India and Sweden determine what the US should
pay in compensation but the Soviets vetoed the US proposal. Meanwhile, U.S. garrisons in Japan continually dispatched
soldiers and matériel to reinforce defenders in the Pusan Perimeter. Tank battalions deployed to Korea directly from the U.S. mainland, shipping from the port of San Francisco to the port of Pusan, the largest Korean port. By late August, the
Pusan Perimeter had some 500 medium tanks battle-ready. In early September 1950, ROK Army and UN Command forces outnumbered
the KPA 180,000 to 100,000 soldiers. The UN forces, once prepared, counterattacked and broke out of the Pusan Perimeter.
Against the rested and re-armed Pusan Perimeter defenders and their reinforcements, the KPA were undermanned and
poorly supplied; unlike the UN Command, they lacked naval and air support. To relieve the Pusan Perimeter, General
MacArthur recommended an amphibious landing at Inchon (now known as Incheon), near Seoul and well over 100 miles
(160 km) behind the KPA lines. On 6 July, he ordered Major General Hobart R. Gay, Commander, 1st Cavalry Division,
to plan the division's amphibious landing at Incheon; on 12–14 July, the 1st Cavalry Division embarked from Yokohama,
Japan to reinforce the 24th Infantry Division inside the Pusan Perimeter. Soon after the war began, General MacArthur
had begun planning a landing at Incheon, but the Pentagon opposed him. When authorized, he activated a combined U.S. Army, Marine Corps, and ROK Army force. The X Corps, led by General Edward Almond, consisted of 40,000
men of the 1st Marine Division, the 7th Infantry Division and around 8,600 ROK Army soldiers. By 15 September, the
amphibious assault force faced few KPA defenders at Incheon: military intelligence, psychological warfare, guerrilla
reconnaissance, and protracted bombardment facilitated a relatively light battle. However, the bombardment destroyed
most of the city of Incheon. After the Incheon landing, the 1st Cavalry Division began its northward advance from
the Pusan Perimeter. "Task Force Lynch" (after Lieutenant Colonel James H. Lynch), 3rd Battalion, 7th Cavalry Regiment,
and two 70th Tank Battalion units (Charlie Company and the Intelligence–Reconnaissance Platoon) effected the "Pusan
Perimeter Breakout" through 106.4 miles (171.2 km) of enemy territory to join the 7th Infantry Division at Osan.
The X Corps rapidly defeated the KPA defenders around Seoul, thus threatening to trap the main KPA force in Southern
Korea. On 18 September, Stalin dispatched General H. M. Zakharov to Korea to advise Kim Il-sung to halt his offensive
around the Pusan perimeter and to redeploy his forces to defend Seoul. Chinese commanders were not briefed on North
Korean troop numbers or operational plans. As the overall commander of Chinese forces, Zhou Enlai suggested that
the North Koreans should attempt to eliminate the enemy forces at Inchon only if they had reserves of at least 100,000
men; otherwise, he advised the North Koreans to withdraw their forces north. On 25 September, Seoul was recaptured
by South Korean forces. American air raids caused heavy damage to the KPA, destroying most of its tanks and much
of its artillery. North Korean troops in the south, instead of effectively withdrawing north, rapidly disintegrated,
leaving Pyongyang vulnerable. During the general retreat only 25,000 to 30,000 soldiers managed to rejoin the Northern
KPA lines. On 27 September, Stalin convened an emergency session of the Politburo, in which he condemned the incompetence
of the KPA command and held Soviet military advisers responsible for the defeat. On 27 September, MacArthur received
the top secret National Security Council Memorandum 81/1 from Truman reminding him that operations north of the 38th
parallel were authorized only if "at the time of such operation there was no entry into North Korea by major Soviet
or Chinese Communist forces, no announcements of intended entry, nor a threat to counter our operations militarily..."
On 29 September MacArthur restored the government of the Republic of Korea under Syngman Rhee. On 30 September, Defense
Secretary George Marshall sent an eyes-only message to MacArthur: "We want you to feel unhampered tactically and
strategically to proceed north of the 38th parallel." During October, the ROK police executed people who were suspected
to be sympathetic to North Korea, and similar massacres were carried out until early 1951. On 30 September, Zhou
Enlai warned the United States that China was prepared to intervene in Korea if the United States crossed the 38th
parallel. Zhou attempted to advise North Korean commanders on how to conduct a general withdrawal by using the same
tactics which had allowed Chinese communist forces to successfully escape Chiang Kai-shek's Encirclement Campaigns
in the 1930s, but by some accounts North Korean commanders did not utilize these tactics effectively. Historian Bruce
Cumings argues, however, that the KPA's rapid withdrawal was strategic, with troops melting into the mountains from where
they could launch guerrilla raids on the UN forces spread out on the coasts. By 1 October 1950, the UN Command repelled
the KPA northwards past the 38th parallel; the ROK Army crossed after them, into North Korea. MacArthur made a statement
demanding the KPA's unconditional surrender. Six days later, on 7 October, with UN authorization, the UN Command
forces followed the ROK forces northwards. The X Corps landed at Wonsan (in southeastern North Korea) and Riwon (in
northeastern North Korea), already captured by ROK forces. The Eighth U.S. Army and the ROK Army drove up western
Korea and captured Pyongyang city, the North Korean capital, on 19 October 1950. The 187th Airborne Regimental Combat
Team ("Rakkasans") made their first of two combat jumps during the Korean War on 20 October 1950 at Sunchon and Sukchon.
The missions of the 187th were to cut the road north going to China, preventing North Korean leaders from escaping
from Pyongyang; and to rescue American prisoners of war. At month's end, UN forces held 135,000 KPA prisoners of
war. As they neared the Sino-Korean border, the UN forces in the west were divided from those in the east by 50–100
miles of mountainous terrain. On 27 June 1950, two days after the KPA invaded and three months before the Chinese
entered the war, President Truman dispatched the United States Seventh Fleet to the Taiwan Strait, to prevent hostilities
between the Nationalist Republic of China (Taiwan) and the People's Republic of China (PRC). On 4 August 1950, with
the PRC invasion of Taiwan aborted, Mao Zedong reported to the Politburo that he would intervene in Korea when the
People's Liberation Army's (PLA) Taiwan invasion force was reorganized into the PLA North East Frontier Force. China
justified its entry into the war as a response to "American aggression in the guise of the UN". In a series of emergency
meetings that lasted from 2–5 October, Chinese leaders debated whether to send Chinese troops into Korea. There was
considerable resistance among many leaders, including senior military leaders, to confronting the U.S. in Korea.
Mao strongly supported intervention, and Zhou was one of the few Chinese leaders who firmly supported him. After
Lin Biao politely refused Mao's offer to command Chinese forces in Korea (citing his upcoming medical treatment),
Mao decided that Peng Dehuai would be the commander of the Chinese forces in Korea after Peng agreed to support Mao's
position. Mao then asked Peng to speak in favor of intervention to the rest of the Chinese leaders. After Peng made
the case that if U.S. troops conquered Korea and reached the Yalu they might cross it and invade China, the Politburo
agreed to intervene in Korea. Later, the Chinese claimed that US bombers had violated PRC national airspace on three
separate occasions and attacked Chinese targets before China intervened. On 8 October 1950, Mao Zedong redesignated
the PLA North East Frontier Force as the Chinese People's Volunteer Army (PVA). In order to enlist Stalin's support,
Zhou and a Chinese delegation left for Moscow on 8 October, arriving there on 10 October, at which point they flew
to Stalin's home at the Black Sea. There they conferred with the top Soviet leadership which included Joseph Stalin
as well as Vyacheslav Molotov, Lavrentiy Beria and Georgi Malenkov. Stalin initially agreed to send military equipment
and ammunition, but warned Zhou that the Soviet Union's air force would need two or three months to prepare any operations.
In a subsequent meeting, Stalin told Zhou that he would only provide China with equipment on a credit basis, and
that the Soviet air force would only operate over Chinese airspace, and only after an undisclosed period of time.
Stalin did not agree to send either military equipment or air support until March 1951. Mao did not find Soviet air
support especially useful, as the fighting was going to take place on the south side of the Yalu. Soviet shipments
of matériel, when they did arrive, were limited to small quantities of trucks, grenades, machine guns, and the like.
UN aerial reconnaissance had difficulty sighting PVA units in daytime, because their march and bivouac discipline
minimized aerial detection. The PVA marched "dark-to-dark" (19:00–03:00), and aerial camouflage (concealing soldiers,
pack animals, and equipment) was deployed by 05:30. Meanwhile, daylight advance parties scouted for the next bivouac
site. During daylight activity or marching, soldiers were to remain motionless if an aircraft appeared, until it
flew away; PVA officers were under order to shoot security violators. Such battlefield discipline allowed a three-division
army to march the 286 miles (460 km) from An-tung, Manchuria, to the combat zone in some 19 days. Another division
night-marched a circuitous mountain route, averaging 18 miles (29 km) daily for 18 days. Meanwhile, on 10 October
1950, the 89th Tank Battalion was attached to the 1st Cavalry Division, increasing the armor available for the Northern
Offensive. On 15 October, after moderate KPA resistance, the 7th Cavalry Regiment and Charlie Company, 70th Tank
Battalion captured Namchonjam city. On 17 October, they flanked rightwards, away from the principal road (to Pyongyang),
to capture Hwangju. Two days later, on 19 October 1950, the 1st Cavalry Division captured Pyongyang, the North's capital city. Kim Il-sung and his government temporarily moved the capital to Sinuiju, although as UNC forces approached, the government moved again, this time to Kanggye. On 15 October 1950, President Truman and General MacArthur met
at Wake Island in the mid-Pacific Ocean. This meeting was much publicized because of the General's discourteous refusal
to meet the President on the continental United States. To President Truman, MacArthur speculated there was little
risk of Chinese intervention in Korea, and that the PRC's opportunity for aiding the KPA had lapsed. He believed
the PRC had some 300,000 soldiers in Manchuria, and some 100,000–125,000 soldiers at the Yalu River. He further concluded
that, although half of those forces might cross south, "if the Chinese tried to get down to Pyongyang, there would
be the greatest slaughter" without air force protection. After secretly crossing the Yalu River on 19 October, the
PVA 13th Army Group launched the First Phase Offensive on 25 October, attacking the advancing UN forces near the
Sino-Korean border. This military decision, made solely by China, changed the attitude of the Soviet Union. Twelve
days after Chinese troops entered the war, Stalin allowed the Soviet Air Force to provide air cover, and supported
more aid to China. After decimating the ROK II Corps at the Battle of Onjong, the first confrontation between Chinese
and U.S. military occurred on 1 November 1950; deep in North Korea, thousands of soldiers from the PVA 39th Army
encircled and attacked the U.S. 8th Cavalry Regiment with three-prong assaults—from the north, northwest, and west—and
overran the defensive position flanks in the Battle of Unsan. The surprise assault resulted in the UN forces retreating
back to the Ch'ongch'on River, while the Chinese unexpectedly disappeared into mountain hideouts following victory.
It is unclear why the Chinese did not press the attack and follow up their victory. On 25 November at the Korean
western front, the PVA 13th Army Group attacked and overran the ROK II Corps at the Battle of the Ch'ongch'on River,
and then decimated the US 2nd Infantry Division on the UN forces' right flank. The UN Command retreated; the U.S.
Eighth Army's retreat (the longest in US Army history) was made possible because of the Turkish Brigade's successful,
but very costly, rear-guard delaying action near Kunuri that slowed the PVA attack for two days (27–29 November).
On 27 November at the Korean eastern front, a U.S. 7th Infantry Division Regimental Combat Team (3,000 soldiers)
and the U.S. 1st Marine Division (12,000–15,000 marines) were unprepared for the PVA 9th Army Group's three-pronged
encirclement tactics at the Battle of Chosin Reservoir, but they managed to escape under Air Force and X Corps support
fire—albeit with some 15,000 collective casualties. By 30 November, the PVA 13th Army Group managed to expel the
U.S. Eighth Army from northwest Korea. Retreating from the north faster than they had counter-invaded, the Eighth
Army crossed the 38th parallel border in mid-December. UN morale hit rock bottom when commanding General Walton Walker
of the U.S. Eighth Army was killed on 23 December 1950 in an automobile accident. In northeast Korea by 11 December,
the U.S. X Corps managed to cripple the PVA 9th Army Group while establishing a defensive perimeter at the port city
of Hungnam. The X Corps were forced to evacuate by 24 December in order to reinforce the badly depleted U.S. Eighth
Army to the south. During the Hungnam evacuation, about 193 shiploads of UN Command forces and matériel (approximately
105,000 soldiers, 98,000 civilians, 17,500 vehicles, and 350,000 tons of supplies) were evacuated to Pusan. The SS
Meredith Victory was noted for evacuating 14,000 refugees, the largest rescue operation by a single ship, even though
it was designed to hold 12 passengers. Before escaping, the UN Command forces razed most of Hungnam city, especially
the port facilities; and on 16 December 1950, President Truman declared a national emergency with Presidential Proclamation
No. 2914, 3 C.F.R. 99 (1953), which remained in force until 14 September 1978.[b] The next day, 17 December 1950, China deprived Kim Il-sung of command of the KPA; thereafter, the Chinese army took the leading role in the war. On 1 February 1951, the United Nations General Assembly adopted a draft resolution
condemning China as an aggressor in the Korean War. With Lieutenant-General Matthew Ridgway assuming the command
of the U.S. Eighth Army on 26 December, the PVA and the KPA launched their Third Phase Offensive (also known as the
"Chinese New Year's Offensive") on New Year's Eve of 1950. The offensive relied on night attacks in which UN Command
fighting positions were encircled and then assaulted by numerically superior troops with the element of surprise.
The attacks were accompanied by loud trumpets and gongs, which fulfilled the double purpose of facilitating tactical communication
and mentally disorienting the enemy. UN forces initially had no familiarity with this tactic, and as a result some
soldiers panicked, abandoning their weapons and retreating to the south. The Chinese New Year's Offensive overwhelmed
UN forces, allowing the PVA and KPA to conquer Seoul for the second time on 4 January 1951. UN forces retreated to
Suwon in the west, Wonju in the center, and the territory north of Samcheok in the east, where the battlefront stabilized
and held. The PVA had outrun its logistics capability and thus was unable to press on beyond Seoul as food, ammunition,
and matériel were carried nightly, on foot and bicycle, from the border at the Yalu River to the three battle lines.
In late January, upon finding that the PVA had abandoned their battle lines, General Ridgway ordered a reconnaissance-in-force,
which became Operation Roundup (5 February 1951). A full-scale X Corps advance proceeded, which fully exploited the
UN Command's air superiority, concluding with the UN reaching the Han River and recapturing Wonju. In early February,
the South Korean 11th Division conducted an operation to destroy guerrillas and their civilian sympathizers in southern
Korea. During the operation, the division and police carried out the Geochang massacre and the Sancheong-Hamyang massacre.
In mid-February, the PVA counterattacked with the Fourth Phase Offensive and achieved initial victory at Hoengseong.
But the offensive was soon blunted by the IX Corps positions at Chipyong-ni in the center. Units of the U.S. 2nd
Infantry Division and the French Battalion fought a short but desperate battle that broke the attack's momentum.
The battle, sometimes known as the Gettysburg of the Korean War, saw 5,600 Korean, American and French
troops defeat a numerically superior Chinese force. The U.S. 2nd Infantry Division's 23rd Regimental Combat Team,
with an attached French Battalion, was surrounded on all sides and hemmed in by more than 25,000 Chinese
Communist forces. United Nations forces had previously retreated in the face of large Communist forces rather than
risk being cut off, but this time they stood and fought at odds of roughly 15 to 1. In the last two weeks of February
1951, Operation Roundup was followed by Operation Killer, carried out by the revitalized Eighth Army. It was a full-scale,
battlefront-length attack staged for maximum exploitation of firepower to kill as many KPA and PVA troops as possible.
Operation Killer concluded with I Corps re-occupying the territory south of the Han River, and IX Corps capturing
Hoengseong. On 7 March 1951, the Eighth Army attacked with Operation Ripper, expelling the PVA and the KPA from Seoul
on 14 March 1951. This was the city's fourth conquest in a year's time, leaving it a ruin; the 1.5 million pre-war
population was down to 200,000, and people were suffering from severe food shortages. On 1 March 1951 Mao sent a
cable to Stalin, in which he emphasized the difficulties faced by Chinese forces and the urgent need for air cover,
especially over supply lines. Apparently impressed by the Chinese war effort, Stalin finally agreed to supply two
air force divisions, three anti-aircraft divisions, and six thousand trucks. PVA troops in Korea continued to suffer
severe logistical problems throughout the war. In late April Peng Dehuai sent his deputy, Hong Xuezhi, to brief Zhou
Enlai in Beijing. What Chinese soldiers feared, Hong said, was not the enemy, but that they had nothing to eat, no
bullets to shoot, and no trucks to transport them to the rear when they were wounded. Zhou attempted to respond to
the PVA's logistical concerns by increasing Chinese production and improving methods of supply, but these efforts
were never completely sufficient. At the same time, large-scale air defense training programs were carried out, and
the Chinese Air Force began to participate in the war from September 1951 onward. On 11 April 1951, Commander-in-Chief
Truman relieved the controversial General MacArthur, the Supreme Commander in Korea. There were several reasons for
the dismissal. MacArthur had crossed the 38th parallel in the mistaken belief that the Chinese would not enter the
war, leading to major allied losses. He believed that whether or not to use nuclear weapons should be his own decision,
not the President's. MacArthur threatened to destroy China unless it surrendered. While MacArthur felt total victory
was the only honorable outcome, Truman was more pessimistic about his chances once involved in a land war in Asia,
and felt a truce and orderly withdrawal from Korea could be a valid solution. MacArthur was the subject of congressional
hearings in May and June 1951, which determined that he had defied the orders of the President and thus had violated
the U.S. Constitution. A popular criticism of MacArthur was that he never spent a night in Korea, and directed the
war from the safety of Tokyo. General Ridgway was appointed Supreme Commander, Korea; he regrouped the UN forces
for successful counterattacks, while General James Van Fleet assumed command of the U.S. Eighth Army. Further attacks
slowly depleted the PVA and KPA forces; Operations Courageous (23–28 March 1951) and Tomahawk (23 March 1951) formed
a joint ground and airborne infiltration meant to trap Chinese forces between Kaesong and Seoul. UN forces advanced
to "Line Kansas", north of the 38th parallel. The 187th Airborne Regimental Combat Team's ("Rakkasans") second of
two combat jumps was on Easter Sunday, 1951, at Munsan-ni, South Korea, codenamed Operation Tomahawk. The mission
was to get behind Chinese forces and block their movement north. The 60th Indian Parachute Field Ambulance provided
the medical cover for the operations, dropping an ADS and a surgical team and treating over 400 battle casualties
apart from the civilian casualties that formed the core of their objective as the unit was on a humanitarian mission.
The Chinese counterattacked in April 1951, with the Fifth Phase Offensive, also known as the Chinese Spring Offensive,
with three field armies (approximately 700,000 men). The offensive's first thrust fell upon I Corps, which fiercely
resisted in the Battle of the Imjin River (22–25 April 1951) and the Battle of Kapyong (22–25 April 1951), blunting
the impetus of the offensive, which was halted at the "No-name Line" north of Seoul. On 15 May 1951, the Chinese
commenced the second impulse of the Spring Offensive and attacked the ROK Army and the U.S. X Corps in the east at
the Soyang River. After initial success, they were halted by 20 May. At month's end, the U.S. Eighth Army counterattacked
and regained "Line Kansas", just north of the 38th parallel. The UN's "Line Kansas" halt and subsequent offensive
action stand-down began the stalemate that lasted until the armistice of 1953. For the remainder of the Korean War
the UN Command and the PVA fought, but exchanged little territory; the stalemate held. Large-scale bombing of North
Korea continued, and protracted armistice negotiations began 10 July 1951 at Kaesong. On the Chinese side, Zhou Enlai
directed peace talks, and Li Kenong and Qiao Guanghua headed the negotiation team. Combat continued while the belligerents
negotiated; the UN Command forces' goal was to recapture all of South Korea and to avoid losing territory. The PVA
and the KPA attempted similar operations, and later effected military and psychological operations in order to test
the UN Command's resolve to continue the war. The principal battles of the stalemate include the Battle of Bloody
Ridge (18 August–15 September 1951), the Battle of the Punchbowl (31 August–21 September 1951), the Battle of Heartbreak
Ridge (13 September–15 October 1951), the Battle of Old Baldy (26 June–4 August 1952), the Battle of White Horse
(6–15 October 1952), the Battle of Triangle Hill (14 October–25 November 1952), the Battle of Hill Eerie (21 March–21
June 1952), the sieges of Outpost Harry (10–18 June 1953), the Battle of the Hook (28–29 May 1953), the Battle of
Pork Chop Hill (23 March–16 July 1953), and the Battle of Kumsong (13–27 July 1953). Chinese troops suffered from
deficient military equipment, serious logistical problems, overextended communication and supply lines, and the constant
threat of UN bombers. All of these factors generally led to a rate of Chinese casualties that was far greater than
the casualties suffered by UN troops. The situation became so serious that, in November 1951, Zhou Enlai called a
conference in Shenyang to discuss the PVA's logistical problems. At the meeting it was decided to accelerate the
construction of railways and airfields in the area, to increase the number of trucks available to the army, and to
improve air defense by any means possible. These commitments did little to directly address the problems confronting
PVA troops. In the months after the Shenyang conference Peng Dehuai went to Beijing several times to brief Mao and
Zhou about the heavy casualties suffered by Chinese troops and the increasing difficulty of keeping the front lines
supplied with basic necessities. Peng was convinced that the war would be protracted, and that neither side would
be able to achieve victory in the near future. On 24 February 1952, the Military Commission, presided over by Zhou,
discussed the PVA's logistical problems with members of various government agencies involved in the war effort. After
the government representatives emphasized their inability to meet the demands of the war, Peng, in an angry outburst,
shouted: "You have this and that problem... You should go to the front and see with your own eyes what food and clothing
the soldiers have! Not to speak of the casualties! For what are they giving their lives? We have no aircraft. We
have only a few guns. Transports are not protected. More and more soldiers are dying of starvation. Can't you overcome
some of your difficulties?" The atmosphere became so tense that Zhou was forced to adjourn the conference. Zhou subsequently
called a series of meetings, where it was agreed that the PVA would be divided into three groups, to be dispatched
to Korea in shifts; to accelerate the training of Chinese pilots; to provide more anti-aircraft guns to the front
lines; to purchase more military equipment and ammunition from the Soviet Union; to provide the army with more food
and clothing; and to transfer the responsibility of logistics to the central government. The on-again, off-again
armistice negotiations continued for two years, first at Kaesong, on the border between North and South Korea, and
then at the neighbouring village of Panmunjom. A major, problematic negotiation point was prisoner of war (POW) repatriation.
The PVA, KPA, and UN Command could not agree on a system of repatriation because many PVA and KPA soldiers refused
to be repatriated to the north, which was unacceptable to the Chinese and North Koreans. In the final armistice
agreement, signed on 27 July 1953, a Neutral Nations Repatriation Commission, under the chairman Indian General K.
S. Thimayya, was set up to handle the matter. In 1952, the United States elected a new president, and on 29 November
1952, the president-elect, Dwight D. Eisenhower, went to Korea to learn what might end the Korean War. With the United
Nations' acceptance of India's proposed Korean War armistice, the KPA, the PVA, and the UN Command ceased fire with
the battle line approximately at the 38th parallel. Upon agreeing to the armistice, the belligerents established
the Korean Demilitarized Zone (DMZ), which has since been patrolled by the KPA and ROKA, United States, and Joint
UN Commands. The Demilitarized Zone runs northeast of the 38th parallel; to the south, it travels west. The old Korean
capital city of Kaesong, site of the armistice negotiations, originally was in pre-war South Korea, but now is part
of North Korea. The United Nations Command (supported by the United States), the North Korean People's Army, and the
Chinese People's Volunteers signed the Armistice Agreement on 27 July 1953 to end the fighting. The Armistice also
called upon the governments of South Korea, North Korea, China and the United States to participate in continued
peace talks. The war is considered to have ended at this point, even though there was no peace treaty. North Korea
nevertheless claims that it won the Korean War. After the war, Operation Glory was conducted from July to November
1954, to allow combatant countries to exchange their dead. The remains of 4,167 U.S. Army and U.S. Marine Corps dead
were exchanged for 13,528 KPA and PVA dead, and the remains of 546 civilians who died in UN prisoner-of-war camps were delivered to
the South Korean government. After Operation Glory, 416 Korean War unknown soldiers were buried in the National Memorial
Cemetery of the Pacific (The Punchbowl), on the island of Oahu, Hawaii. Defense Prisoner of War/Missing Personnel
Office (DPMO) records indicate that the PRC and the DPRK transmitted 1,394 names, of which 858 were correct. From
4,167 containers of returned remains, forensic examination identified 4,219 individuals. Of these, 2,944 were identified
as American, and all but 416 were identified by name. From 1996 to 2006, the DPRK recovered 220 remains near the
Sino-Korean border. After a new wave of UN sanctions, on 11 March 2013, North Korea claimed that it had invalidated
the 1953 armistice. On 13 March 2013, North Korea confirmed it ended the 1953 Armistice and declared North Korea
"is not restrained by the North-South declaration on non-aggression". On 30 March 2013, North Korea stated that it
had entered a "state of war" with South Korea and declared that "The long-standing situation of the Korean peninsula
being neither at peace nor at war is finally over". Speaking on 4 April 2013, the U.S. Secretary of Defense, Chuck
Hagel, informed the press that Pyongyang had "formally informed" the Pentagon that it had "ratified" the potential
usage of a nuclear weapon against South Korea, Japan and the United States of America, including Guam and Hawaii.
Hagel also stated that the United States would deploy the Terminal High Altitude Area Defense anti-ballistic missile
system to Guam, because of a credible and realistic nuclear threat from North Korea. The initial assault by North
Korean KPA forces was aided by the use of Soviet T-34-85 tanks. A North Korean tank corps equipped with about 120
T-34s spearheaded the invasion. These drove against a ROK Army with few anti-tank weapons adequate to deal with the
Soviet T-34s. Additional Soviet armor was added as the offensive progressed. The North Korean tanks had a good deal
of early success against South Korean infantry, elements of the U.S. 24th Infantry Division, and the United States-built
M24 Chaffee light tanks that they encountered. Interdiction by ground-attack aircraft was the only means of
slowing the advancing North Korean armor. The tide turned in favour of the United Nations forces in August 1950 when the
North Koreans suffered major tank losses during a series of battles in which the UN forces brought heavier equipment
to bear, including M4A3 Sherman medium tanks backed by U.S. M26 heavy tanks, along with the British Centurion, Churchill,
and Cromwell tanks. Because neither Korea had a significant navy, the Korean War featured few naval battles. A skirmish
between North Korea and the UN Command occurred on 2 July 1950; the U.S. Navy cruiser USS Juneau, the Royal Navy
cruiser HMS Jamaica, and the frigate HMS Black Swan fought four North Korean torpedo boats and two mortar gunboats,
and sank them. USS Juneau later sank several ammunition ships that had been present. The last sea battle of the Korean
War occurred at Inchon, days before the Battle of Incheon; the ROK ship PC-703 sank a North Korean minelayer in
the Battle of Haeju Island, near Inchon. Three other supply ships were sunk by PC-703 two days later in the Yellow
Sea. Thereafter, vessels from the UN nations held undisputed control of the sea around Korea. The gun ships were used
in shore bombardment, while the aircraft carriers provided air support to the ground forces. During most of the war,
the UN navies patrolled the west and east coasts of North Korea, sinking supply and ammunition ships and denying
the North Koreans the ability to resupply from the sea. Aside from very occasional gunfire from North Korean shore
batteries, the main threat to United States and UN navy ships was from magnetic mines. During the war, five U.S.
Navy ships were lost to mines: two minesweepers, two minesweeper escorts, and one ocean tug. Mines and gunfire from
North Korean coastal artillery damaged another 87 U.S. warships, resulting in slight to moderate damage. The Chinese
intervention in late October 1950 bolstered the Korean People's Air Force (KPAF) of North Korea with the MiG-15,
one of the world's most advanced jet fighters. The fast, heavily armed MiG outflew first-generation UN jets such
as the F-80 (United States Air Force) and Gloster Meteors (Royal Australian Air Force), posing a real threat to B-29
Superfortress bombers even under fighter escort. Fearful of confronting the United States directly, the Soviet Union
denied involvement of its personnel in anything other than an advisory role, but air combat quickly resulted in
Soviet pilots dropping their code signals and speaking over the wireless in Russian. This known direct Soviet participation
was a casus belli that the UN Command deliberately overlooked, lest the war for the Korean peninsula expand to include
the Soviet Union, and potentially escalate into atomic warfare. The USAF countered the MiG-15 by sending over three
squadrons of its most capable fighter, the F-86 Sabre. These arrived in December 1950. The MiG was designed as a
bomber interceptor. It had a very high service ceiling of 50,000 feet (15,000 m) and carried very heavy weaponry: one
37 mm cannon and two 23 mm cannons. They were fast enough to dive past the fighter escort of P-80 Shooting Stars
and F9F Panthers and could reach and destroy the U.S. heavy bombers. B-29 losses could not be avoided, and the Air
Force was forced to switch from a daylight bombing campaign to the necessarily less accurate nighttime bombing of
targets. The MiGs were countered by the F-86 Sabres. They had a ceiling of 42,000 feet (13,000 m) and were armed
with six .50 caliber (12.7 mm) machine guns, which were range-adjusted by radar gunsights. When approaching at higher
altitude, the MiG held the advantage of choosing whether or not to engage. Once in a level-flight dogfight, both swept-wing designs
attained comparable maximum speeds of around 660 mph (1,100 km/h). The MiG climbed faster, but the Sabre turned and
dived better.
Copyright infringement is the use of works protected by copyright law without permission, infringing certain exclusive rights
granted to the copyright holder, such as the right to reproduce, distribute, display or perform the protected work,
or to make derivative works. The copyright holder is typically the work's creator, or a publisher or other business
to whom copyright has been assigned. Copyright holders routinely invoke legal and technological measures to prevent
and penalize copyright infringement. Copyright infringement disputes are usually resolved through direct negotiation,
a notice and take down process, or litigation in civil court. Egregious or large-scale commercial infringement, especially
when it involves counterfeiting, is sometimes prosecuted via the criminal justice system. Shifting public expectations,
advances in digital technology, and the increasing reach of the Internet have led to such widespread, anonymous infringement
that copyright-dependent industries now focus less on pursuing individuals who seek and share copyright-protected
content online, and more on expanding copyright law to recognize and penalize – as "indirect" infringers – the service
providers and software distributors which are said to facilitate and encourage individual acts of infringement by
others. The terms piracy and theft are often associated with copyright infringement. The original meaning of piracy
is "robbery or illegal violence at sea", but the term has been in use for centuries as a synonym for acts of copyright
infringement. Theft, meanwhile, emphasizes the potential commercial harm of infringement to copyright holders. However,
copyright is a type of intellectual property, an area of law distinct from that which covers robbery or theft, offenses
related only to tangible property. Not all copyright infringement results in commercial loss, and the U.S. Supreme
Court ruled in 1985 that infringement does not easily equate with theft. The practice of labelling the infringement
of exclusive rights in creative works as "piracy" predates statutory copyright law. Prior to the Statute of Anne
in 1710, the Stationers' Company of London received a Royal Charter in 1557 giving the company a monopoly on publication
and tasking it with enforcing the charter. Those who violated the charter were labelled pirates as early as 1603.
The term "piracy" has been used to refer to the unauthorized copying, distribution and selling of works in copyright.
Article 12 of the 1886 Berne Convention for the Protection of Literary and Artistic Works uses the term "piracy"
in relation to copyright infringement, stating "Pirated works may be seized on importation into those countries of
the Union where the original work enjoys legal protection." Article 61 of the 1994 Agreement on Trade-Related Aspects
of Intellectual Property Rights (TRIPs) requires criminal procedures and penalties in cases of "willful trademark
counterfeiting or copyright piracy on a commercial scale." Piracy traditionally refers to acts of copyright infringement
intentionally committed for financial gain, though more recently, copyright holders have described online copyright
infringement, particularly in relation to peer-to-peer file sharing networks, as "piracy." Copyright holders frequently
refer to copyright infringement as theft. In copyright law, infringement does not refer to theft of physical objects
that take away the owner's possession, but an instance where a person exercises one of the exclusive rights of the
copyright holder without authorization. Courts have distinguished between copyright infringement and theft. For instance,
the United States Supreme Court held in Dowling v. United States (1985) that bootleg phonorecords did not constitute
stolen property. Instead, "interference with copyright does not easily equate with theft, conversion, or fraud. The
Copyright Act even employs a separate term of art to define one who misappropriates a copyright: '[...] an infringer
of the copyright.'" The court said that in the case of copyright infringement, the province guaranteed to the copyright
holder by copyright law – certain exclusive rights – is invaded, but no control, physical or otherwise, is taken
over the copyright, nor is the copyright holder wholly deprived of using the copyrighted work or exercising the exclusive
rights held. Sometimes infringement arises from only partial compliance with license agreements. For example, in 2013, the US
Army settled a lawsuit with Texas-based company Apptricity, which makes software that allows the army to track its
soldiers in real time. In 2004, the US Army had paid US$4.5 million for a license for 500 users, while allegedly installing
the software for more than 9000 users; the case was settled for US$50 million. Major anti-piracy organizations, like
the BSA, conduct software licensing audits regularly to ensure full compliance. Cara Cusumano, director of the Tribeca
Film Festival, stated in April 2014: "Piracy is less about people not wanting to pay and more about just wanting
the immediacy – people saying, 'I want to watch Spiderman right now' and downloading it". The statement occurred
during the third year that the festival used the Internet to present its content, while it was the first year that
it featured a showcase of content producers who work exclusively online. Cusumano further explained that downloading
behavior is not conducted merely by people who want to obtain content for free. In response to Cusumano's
perspective, Screen Producers Australia executive director Matt Deaner clarified the motivation of the film industry:
"Distributors are usually wanting to encourage cinema-going as part of this process [monetizing through returns]
and restrict the immediate access to online so as to encourage the maximum number of people to go to the cinema."
Deaner further explained the matter in terms of the Australian film industry, stating: "there are currently restrictions
on quantities of tax support that a film can receive unless the film has a traditional cinema release." In a study
published in the Journal of Behavioural and Experimental Economics, and reported on in early May 2014, researchers
from the University of Portsmouth in the UK discussed findings from examining the illegal downloading behavior of
6,000 Finnish people, aged seven to 84. The list of reasons for downloading given by the study respondents included
money saving; the ability to access material not on general release, or before it was released; and assisting artists
to avoid involvement with record companies and movie studios. According to the same study, even though digital piracy
inflicts additional costs on the production side of media, it also offers the main access to media goods in developing
countries. The strong tradeoffs that favor digital piracy in developing economies help explain the currently lax
law enforcement against it. In China, the issue of digital infringement is not merely legal, but social
– originating from the high demand for cheap and affordable goods as well as the governmental connections of the
businesses which produce such goods. There have been instances where a country's government bans a movie, resulting
in the spread of copied videos and DVDs. Romanian-born documentary maker Ilinca Calugareanu wrote a New York Times
article telling the story of Irina Margareta Nistor, a narrator for state TV under Nicolae Ceauşescu's regime. A
visitor from the west gave her bootlegged copies of American movies, which she dubbed for secret viewings throughout
Romania. According to the article, she dubbed more than 3,000 movies and became the country's second-most famous
voice after Ceauşescu, even though no one knew her name until many years later. In the U.S., copyright infringement
is sometimes confronted via lawsuits in civil court, against alleged infringers directly, or against providers of
services and software that support unauthorized copying. For example, major motion-picture corporation MGM Studios
filed suit against P2P file-sharing services Grokster and Streamcast for their contributory role in copyright infringement.
In 2005, the Supreme Court ruled in favor of MGM, holding that such services could be held liable for copyright infringement
since they functioned and, indeed, willfully marketed themselves as venues for acquiring copyrighted movies. The
MGM v. Grokster case did not overturn the earlier Sony decision, but rather clouded the legal waters; future designers
of software capable of being used for copyright infringement were warned. In some jurisdictions, copyright or the
right to enforce it can be contractually assigned to a third party which did not have a role in producing the work.
When this outsourced litigator appears to have no intention of taking any copyright infringement cases to trial,
but rather only takes them just far enough through the legal system to identify and exact settlements from suspected
infringers, critics commonly refer to the party as a "copyright troll." Such practices have had mixed results in
the U.S. The first criminal provision in U.S. copyright law was added in 1897, which established a misdemeanor penalty
for "unlawful performances and representations of copyrighted dramatic and musical compositions" if the violation
had been "willful and for profit." Criminal copyright infringement requires that the infringer acted "for the purpose
of commercial advantage or private financial gain." 17 U.S.C. § 506. To establish criminal liability, the prosecutor
must first show the basic elements of copyright infringement: ownership of a valid copyright, and the violation of
one or more of the copyright holder's exclusive rights. The government must then establish that defendant willfully
infringed or, in other words, possessed the necessary mens rea. Misdemeanor infringement has a very low threshold
in terms of number of copies and the value of the infringed works. United States v. LaMacchia 871 F.Supp. 535 (1994)
was a case decided by the United States District Court for the District of Massachusetts which ruled that, under
the copyright and cybercrime laws effective at the time, committing copyright infringement for non-commercial motives
could not be prosecuted under criminal copyright law. The ruling gave rise to what became known as the "LaMacchia
Loophole," wherein criminal charges of fraud or copyright infringement would be dismissed under current legal standards,
so long as there was no profit motive involved. The United States No Electronic Theft Act (NET Act), a federal law
passed in 1997, in response to LaMacchia, provides for criminal prosecution of individuals who engage in copyright
infringement under certain circumstances, even when there is no monetary profit or commercial benefit from the infringement.
Maximum penalties can be five years in prison and up to $250,000 in fines. The NET Act also raised statutory damages
by 50%. The LaMacchia ruling had explicitly drawn attention to the shortcomings of then-current law that allowed people to facilitate
mass copyright infringement while remaining immune to prosecution under the Copyright Act. The personal copying exemption
in the copyright law of EU member states stems from the EU Copyright Directive of 2001, which is generally devised
to allow EU members to enact laws sanctioning making copies without authorization, as long as they are for personal,
noncommercial use. The Copyright Directive was not intended to legitimize file-sharing, but rather the common practice
of space shifting copyright-protected content from a legally purchased CD (for example) to certain kinds of devices
and media, provided rights holders are compensated and no copy protection measures are circumvented. Rights-holder
compensation takes various forms, depending on the country, but is generally either a levy on "recording" devices
and media, or a tax on the content itself. In some countries, such as Canada, the applicability of such laws to copying
onto general-purpose storage devices like computer hard drives, portable media players, and phones, for which no
levies are collected, has been the subject of debate and further efforts to reform copyright law. In some countries,
the personal copying exemption explicitly requires that the content being copied was obtained legitimately – i.e.,
from authorized sources, not file-sharing networks. Other countries, such as the Netherlands, make no such distinction;
the exemption there had been assumed, even by the government, to apply to any such copying, even from file-sharing
networks. However, in April 2014, the Court of Justice of the European Union ruled that "national legislation which
makes no distinction between private copies made from lawful sources and those made from counterfeited or pirated
sources cannot be tolerated." Thus, in the Netherlands, for example, downloading from file-sharing networks is no
longer legal. Title I of the U.S. DMCA, the WIPO Copyright and Performances and Phonograms Treaties Implementation Act, has provisions that prevent persons from "circumvent[ing] a technological measure that effectively controls access to a work". Thus, if a distributor of copyrighted works has some kind of software, dongle or password access device installed in instances of the work, any attempt to bypass such a copy protection scheme may be actionable, though the US Copyright Office is currently reviewing anticircumvention rulemaking under the DMCA. Anticircumvention exemptions that have been in place under the DMCA include those for software designed to filter websites that is generally seen to be inefficient (child safety and public library website filtering software) and for the circumvention of copy protection mechanisms that have malfunctioned, have caused the instance of the work to become inoperable, or are no longer supported by their manufacturers. In addition, intermediaries are now also generally understood to include Internet
portals, software and games providers, those providing virtual information such as interactive forums and comment
facilities with or without a moderation system, aggregators of various kinds, such as news aggregators, universities,
libraries and archives, web search engines, chat rooms, web blogs, mailing lists, and any website which provides
access to third party content through, for example, hyperlinks, a crucial element of the World Wide Web. Early court
cases focused on the liability of Internet service providers (ISPs) for hosting, transmitting or publishing user-supplied
content that could be actioned under civil or criminal law, such as libel, defamation, or pornography. As different
content was considered in different legal systems, and in the absence of common definitions for "ISPs," "bulletin
boards" or "online publishers," early law on online intermediaries' liability varied widely from country to country.
The first laws on online intermediaries' liability were passed from the mid-1990s onwards. The U.S.
Digital Millennium Copyright Act (1998) and the European E-Commerce Directive (2000) provide online intermediaries
with limited statutory immunity from liability for copyright infringement. Online intermediaries hosting content
that infringes copyright are not liable, so long as they do not know about it and take actions once the infringing
content is brought to their attention. In U.S. law this is characterized as "safe harbor" provisions. Under European
law, the governing principles for Internet Service Providers are "mere conduit", meaning that they are neutral "pipes" with no knowledge of what they are carrying, and "no obligation to monitor", meaning that they cannot be given a general
mandate by governments to monitor content. These two principles are a barrier for certain forms of online copyright
enforcement and they were the reason behind an attempt to amend the European Telecoms Package in 2009 to support
new measures against copyright infringement. These types of intermediaries do not host or transmit infringing content themselves, but may be regarded in some courts as encouraging, enabling or facilitating infringement by users. These
intermediaries may include the authors, publishers and marketers of peer-to-peer networking software, and the websites
that allow users to download such software. In the case of the BitTorrent protocol, intermediaries may include the
torrent tracker and any websites or search engines which facilitate access to torrent files. Torrent files don't
contain copyrighted content, but they may make reference to files that do, and they may point to trackers which coordinate
the sharing of those files. Some torrent indexing and search sites, such as The Pirate Bay, now encourage the use
of magnet links, instead of direct links to torrent files, creating another layer of indirection; using such links,
torrent files are obtained from other peers, rather than from a particular website. Nevertheless, whether and to
what degree any of these types of intermediaries have secondary liability is the subject of ongoing litigation. The
decentralised structure of peer-to-peer networks, in particular, does not sit easily with existing laws on online
intermediaries' liability. The BitTorrent protocol established an entirely decentralised network architecture in
order to distribute large files effectively. Recent developments in peer-to-peer technology towards more complex
network configurations are said to have been driven by a desire to avoid liability as intermediaries under existing
laws. Article 10 of the Berne Convention mandates that national laws provide for limitations to copyright, so that
copyright protection does not extend to certain kinds of uses that fall under what the treaty calls "fair practice,"
including but not limited to minimal quotations used in journalism and education. The laws implementing these limitations
and exceptions for uses that would otherwise be infringing broadly fall into the categories of either fair use or
fair dealing. In common law systems, these fair practice statutes typically enshrine principles underlying many earlier
judicial precedents, and are considered essential to freedom of speech. Another example is the practice of compulsory
licensing, in which the law forbids copyright owners from denying a license for certain uses of certain kinds
of works, such as compilations and live performances of music. Compulsory licensing laws generally say that for certain
uses of certain works, no infringement occurs as long as a royalty, at a rate determined by law rather than private
negotiation, is paid to the copyright owner or representative copyright collective. Some fair dealing laws, such
as Canada's, include similar royalty requirements. In Europe, the copyright infringement case Public Relations Consultants
Association Ltd v Newspaper Licensing Agency Ltd had two prongs; one concerned whether a news aggregator service
infringed the copyright of the news generators; the other concerned whether the temporary web cache created by the
web browser of a consumer of the aggregator's service, also infringed the copyright of the news generators. The first
prong was decided in favor of the news generators; in June 2014 the second prong was decided by the Court of Justice
of the European Union (CJEU), which ruled that the temporary web cache of consumers of the aggregator did not infringe
the copyright of the news generators. In order to qualify for protection, a work must be an expression with a degree
of originality, and it must be in a fixed medium, such as written down on paper or recorded digitally. The idea itself
is not protected. That is, a copy of someone else's original idea is not infringing unless it copies that person's
unique, tangible expression of the idea. Some of these limitations, especially regarding what qualifies as original,
are embodied only in case law (judicial precedent), rather than in statutes. In the U.S., for example, copyright
case law contains a substantial similarity requirement to determine whether the work was copied. Likewise, courts
may require computer software to pass an Abstraction-Filtration-Comparison test (AFC Test) to determine if it is
too abstract to qualify for protection, or too dissimilar to an original work to be considered infringing. Software-related
case law has also clarified that the amount of R&D, effort and expense put into a work's creation doesn't affect
copyright protection. Corporations and legislatures take different types of preventative measures to deter copyright
infringement, with much of the focus since the early 1990s being on preventing or reducing digital methods of infringement.
Strategies include education, civil & criminal legislation, and international agreements, as well as publicizing
anti-piracy litigation successes and imposing forms of digital media copy protection, such as controversial DRM technology
and anti-circumvention laws, which limit the amount of control consumers have over the use of products and content
they have purchased. Legislatures have reduced infringement by narrowing the scope of what is considered infringing.
Aside from upholding international copyright treaty obligations to provide general limitations and exceptions, nations
have enacted compulsory licensing laws applying specifically to digital works and uses. For example, in the U.S.,
the DMCA, an implementation of the 1996 WIPO Copyright Treaty, considers digital transmissions of audio recordings
to be licensed as long as a designated copyright collective's royalty and reporting requirements are met. The DMCA
also provides safe harbor for digital service providers whose users are suspected of copyright infringement, thus
reducing the likelihood that the providers themselves will be considered directly infringing. Some copyright owners
voluntarily reduce the scope of what is considered infringement by employing relatively permissive, "open" licensing
strategies: rather than privately negotiating license terms with individual users who must first seek out the copyright
owner and ask for permission, the copyright owner publishes and distributes the work with a prepared license that
anyone can use, as long as they adhere to certain conditions. This has the effect of reducing infringement – and
the burden on courts – by simply permitting certain types of uses under terms that the copyright owner considers
reasonable. Examples include free software licenses, like the GNU General Public License (GPL), and the Creative
Commons licenses, which are predominantly applied to visual and literary works. To prevent piracy of films, the standard practice of film distribution is to have a movie first released through movie theaters (theatrical window), for on average
approximately 16 and a half weeks, before having it released to Blu-Ray and DVD (entering its video window). During
the theatrical window, digital versions of films are often transported in data storage devices by couriers rather
than by data transmission. The data can be encrypted, with the key being made to work only at specific times in order
to prevent leakage between screens. Coded Anti-Piracy marks can be added to films to identify the source of illegal
copies and shut them down. As a result of these measures, the only versions of films available for piracy during
the theatrical window are usually "cams" made by video recordings of the movie screens, which are of inferior quality
compared to the original film version. The U.S. GAO's 2010 findings regarding the great difficulty of accurately
gauging the economic impact of copyright infringement were reinforced within the same report by the body's research
into three commonly cited estimates that had previously been provided to U.S. agencies. The GAO report explained
that the sources – a Federal Bureau of Investigation (FBI) estimate, a Customs and Border Protection (CBP) press
release and a Motor and Equipment Manufacturers Association estimate – "cannot be substantiated or traced back to
an underlying data source or methodology." According to a 2007 BSA and International Data Corporation (IDC) study,
the five countries with the highest rates of software piracy were: 1. Armenia (93%); 2. Bangladesh (92%); 3. Azerbaijan
(92%); 4. Moldova (92%); and 5. Zimbabwe (91%). According to the study's results, the five countries with the lowest
piracy rates were: 1. U.S. (20%); 2. Luxembourg (21%); 3. New Zealand (22%); 4. Japan (23%); and 5. Austria (25%).
The 2007 report showed that the Asia-Pacific region was associated with the highest amount of loss, in terms of U.S.
dollars, with $14,090,000, followed by the European Union, with a loss of $12,383,000; the lowest amount of U.S.
dollars was lost in the Middle East/Africa region, where $2,446,000 was documented. In its 2011 report, conducted
in partnership with IDC and Ipsos Public Affairs, the BSA stated: "Over half of the world's personal computer users
– 57 percent – admit to pirating software." The ninth annual "BSA Global Software Piracy Study" claims that the "commercial
value of this shadow market of pirated software" was worth US$63.4 billion in 2011, with the highest commercial value
of pirated PC software existent in the U.S. during that time period (US$9,773,000). According to the 2011 study,
Zimbabwe was the nation with the highest piracy rate, at 92%, while the lowest piracy rate was present in the U.S.,
at 19%. In 2007, the Institute for Policy Innovation (IPI) reported that music piracy took $12.5 billion from the
U.S. economy. According to the study, musicians and those involved in the recording industry are not the only ones
who experience losses attributed to music piracy. Retailers have lost over a billion dollars, while piracy has resulted
in 46,000 fewer production-level jobs and almost 25,000 fewer retail jobs. The U.S. government was also reported to suffer
from music piracy, losing $422 million in tax revenue. Professor Aram Sinnreich, in his book The Piracy Crusade,
states that the connection between declining music sales and the creation of peer-to-peer file sharing sites such
as Napster is tenuous, based on correlation rather than causation. He argues that the industry at the time was undergoing
artificial expansion, what he describes as a "'perfect bubble'—a confluence of economic, political, and technological
forces that drove the aggregate value of music sales to unprecedented heights at the end of the twentieth century".
The 2011 Business Software Alliance Piracy Study Standard, estimates the total commercial value of illegally copied
software to be at $59 billion in 2010, with emerging markets accounting for $31.9 billion, over half of the total.
Furthermore, mature markets for the first time received fewer PC shipments than emerging economies in 2010, making emerging markets responsible for more than half of all computers in use worldwide. In addition, with a software infringement rate of 68 percent, compared with 24 percent in mature markets, emerging markets accounted for the majority of the global increase in the commercial value of counterfeit software. China continues to have the highest commercial
value of such software at $8.9 billion among developing countries and second in the world behind the US at $9.7 billion
in 2011. In 2011, the Business Software Alliance announced that 83 percent of software deployed on PCs in Africa
has been pirated (excluding South Africa).
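The magnet-link indirection described earlier, where an index site publishes only an identifier for a torrent rather than the .torrent file itself, rests on the magnet URI format: the link carries an "infohash" (a hash of the torrent's metadata) plus optional display-name and tracker hints, and peers use the infohash to fetch the metadata from one another. A minimal Python sketch of parsing such a link follows; the link and infohash below are made-up placeholders, not a real torrent.

```python
from urllib.parse import urlparse, parse_qs

def parse_magnet(link):
    """Extract the infohash, display name, and tracker hints from a
    magnet URI of the form magnet:?xt=urn:btih:<infohash>&dn=...&tr=..."""
    parsed = urlparse(link)
    assert parsed.scheme == "magnet"
    params = parse_qs(parsed.query)
    xt = params["xt"][0]                   # e.g. "urn:btih:<hex infohash>"
    return {
        "infohash": xt.split(":")[-1],     # identifies the torrent metadata
        "name": params.get("dn", [None])[0],
        "trackers": params.get("tr", []),  # optional tracker URLs
    }

# Hypothetical example link; the 40-hex-digit infohash is a placeholder.
link = ("magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567"
        "&dn=example&tr=udp%3A%2F%2Ftracker.example.org%3A1337")
info = parse_magnet(link)
print(info["infohash"])  # 0123456789abcdef0123456789abcdef01234567
print(info["name"])      # example
```

Note that nothing in the link points at a file on any particular website: a client resolves the infohash through trackers or the distributed hash table, which is precisely the extra layer of indirection the text describes.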
Greece is strategically located at the crossroads of Europe, Asia, and Africa. Situated on the southern tip of the Balkan
peninsula, it shares land borders with Albania to the northwest, the Republic of Macedonia and Bulgaria to the north
and Turkey to the northeast. Greece consists of nine geographic regions: Macedonia, Central Greece, the Peloponnese,
Thessaly, Epirus, the Aegean Islands (including the Dodecanese and Cyclades), Thrace, Crete, and the Ionian Islands.
The Aegean Sea lies to the east of the mainland, the Ionian Sea to the west, and the Mediterranean Sea to the south.
Greece has the longest coastline on the Mediterranean Basin and the 11th longest coastline in the world at 13,676
km (8,498 mi) in length, featuring a vast number of islands, of which 227 are inhabited. Eighty percent of Greece
is mountainous, with Mount Olympus being the highest peak at 2,918 metres (9,573 ft). Greece has one of the longest
histories of any country, and is considered the cradle of Western civilization, and as such, is the birthplace of
democracy, Western philosophy, the Olympic Games, Western literature, historiography, political science, major scientific
and mathematical principles, and Western drama, including both tragedy and comedy. Greece was first unified under
Philip of Macedon in the fourth century BC. His son Alexander the Great rapidly conquered much of the ancient world,
spreading Greek culture and science from the eastern Mediterranean to the Indus River. Annexed by Rome in the second
century BC, Greece became an integral part of the Roman Empire and its successor, the Byzantine Empire. The first
century AD saw the establishment of the Greek Orthodox Church, which shaped the modern Greek identity and transmitted
Greek traditions to the wider Orthodox World. Falling under Ottoman dominion in the mid-15th century, the modern
nation state of Greece emerged in 1830 following the war of independence. Greece's rich historical legacy is reflected
in large part by its 17 UNESCO World Heritage Sites, among the most in Europe and the world. Greece is a democratic
and developed country with an advanced high-income economy, a high quality of life and a very high standard of living.
A founding member of the United Nations, Greece was the tenth member to join the European Communities (precursor
to the European Union) and has been part of the Eurozone since 2001. It is also a member of numerous other international
institutions, including the Council of Europe, NATO, OECD, OIF, OSCE and the WTO. Greece, which is one of the
world's largest shipping powers, middle powers and top tourist destinations, has the largest economy in the Balkans,
where it is an important regional investor. The names for the nation of Greece and the Greek people differ from the
names used in other languages, locations and cultures. Although the Greeks call the country Hellas or Ellada (Greek:
Ἑλλάς or Ελλάδα) and its official name is the Hellenic Republic, in English it is referred to as Greece, which comes
from the Latin term Graecia as used by the Romans, which literally means 'the land of the Greeks', and derives from
the Greek name Γραικός. However, the name Hellas is sometimes used in English as well. The earliest evidence of the
presence of human ancestors in the southern Balkans, dated to 270,000 BC, is to be found in the Petralona cave, in
the Greek province of Macedonia. All three stages of the stone age (Paleolithic, Mesolithic, and Neolithic) are represented
in Greece, for example in the Franchthi Cave. Neolithic settlements in Greece, dating from the 7th millennium BC,
are the oldest in Europe by several centuries, as Greece lies on the route via which farming spread from the Near
East to Europe. Greece is home to the first advanced civilizations in Europe and is considered the birthplace of
Western civilization, beginning with the Cycladic civilization on the islands of the Aegean Sea
at around 3200 BC, the Minoan civilization in Crete (2700–1500 BC), and then the Mycenaean civilization on the mainland
(1900–1100 BC). These civilizations possessed writing, the Minoans writing in an undeciphered script known as Linear
A, and the Mycenaeans in Linear B, an early form of Greek. The Mycenaeans gradually absorbed the Minoans, but collapsed
violently around 1200 BC, during a time of regional upheaval known as the Bronze Age collapse. This ushered in a
period known as the Greek Dark Ages, from which written records are absent. The end of the Dark Ages is traditionally
dated to 776 BC, the year of the first Olympic Games. The Iliad and the Odyssey, the foundational texts of Western
literature, are believed to have been composed by Homer in the 8th or 7th centuries BC. With the end of the Dark Ages,
there emerged various kingdoms and city-states across the Greek peninsula, which spread to the shores of the Black
Sea, Southern Italy (Latin: Magna Graecia, or Greater Greece) and Asia Minor. These states and their colonies reached
great levels of prosperity that resulted in an unprecedented cultural boom, that of classical Greece, expressed in
architecture, drama, science, mathematics and philosophy. In 508 BC, Cleisthenes instituted the world's first democratic
system of government in Athens. By 500 BC, the Persian Empire controlled the Greek city states in Asia Minor and
had made territorial gains in the Balkans and Eastern Europe proper as well. Attempts by some of the Greek city-states
of Asia Minor to overthrow Persian rule failed, and Persia invaded the states of mainland Greece in 492 BC, but was
forced to withdraw after a defeat at the Battle of Marathon in 490 BC. A second invasion by the Persians followed
in 480 BC. Despite a heroic resistance at Thermopylae by Spartans and other Greeks led by King Leonidas, and a simultaneous
naval engagement at Artemisium, Persian forces occupied Athens, which had been evacuated in time, as
well as briefly overrunning half of Greece. Following decisive Greek victories in 480 and 479 BC at Salamis, Plataea,
and Mycale, the Persians were forced to withdraw for a second time, marking their eventual withdrawal from all of
their European territories. Led by Athens and Sparta, the Greek victories in the Greco-Persian Wars are considered
a pivotal moment in world history, as the 50 years of peace that followed are known as the Golden Age of Athens, the
seminal period of ancient Greece that laid many of the foundations of Western civilization. Lack of political unity
within Greece resulted in frequent conflict between Greek states. The most devastating intra-Greek war was the Peloponnesian
War (431–404 BC), won by Sparta and marking the demise of the Athenian Empire as the leading power in ancient Greece.
Both Athens and Sparta were later overshadowed by Thebes and eventually Macedon, with the latter uniting the Greek
world in the League of Corinth (also known as the Hellenic League or Greek League) under the guidance of Philip
II, who was elected leader of the first unified Greek state in history. After a period of confusion following Alexander's
death, the Antigonid dynasty, descended from one of Alexander's generals, established its control over Macedon and
most of the Greek city-states by 276 BC. From about 200 BC the Roman Republic became increasingly involved in Greek
affairs and engaged in a series of wars with Macedon. Macedon's defeat at the Battle of Pydna in 168 BC signalled
the end of Antigonid power in Greece. In 146 BC Macedonia was annexed as a province by Rome, and the rest of Greece
became a Roman protectorate. The process was completed in 27 BC when the Roman Emperor Augustus annexed the rest
of Greece and constituted it as the senatorial province of Achaea. Despite their military superiority, the Romans
admired and became heavily influenced by the achievements of Greek culture, hence Horace's famous statement: Graecia
capta ferum victorem cepit ("Greece, although captured, took its wild conqueror captive"). The epics of Homer inspired
the Aeneid of Virgil, and authors such as Seneca the Younger wrote using Greek styles. Roman heroes such as Scipio Africanus tended to study philosophy and regarded Greek culture and science as an example to be followed. Similarly,
most Roman emperors maintained an admiration for things Greek in nature. The Roman Emperor Nero visited Greece in
AD 66, and performed at the Ancient Olympic Games, despite the rules against non-Greek participation. Hadrian was
also particularly fond of the Greeks; before he became emperor he served as an eponymous archon of Athens. Greek-speaking
communities of the Hellenized East were instrumental in the spread of early Christianity in the 2nd and 3rd centuries,
and Christianity's early leaders and writers (notably St Paul) were mostly Greek-speaking, though generally not from
Greece itself. The New Testament was written in Greek, and some of its sections (Corinthians, Thessalonians, Philippians,
Revelation of St. John of Patmos) attest to the importance of churches in Greece in early Christianity. Nevertheless,
much of Greece clung tenaciously to paganism, and ancient Greek religious practices were still in vogue in the late
4th century AD, when they were outlawed by the Roman emperor Theodosius I in 391–392. The last recorded Olympic games
were held in 393, and many temples were destroyed or damaged in the century that followed. In Athens and rural areas,
paganism is attested well into the sixth century AD and even later. The closure of the Neoplatonic Academy of Athens
by the emperor Justinian in 529 is considered by many to mark the end of antiquity, although there is evidence that
the Academy continued its activities for some time after that. Some remote areas such as the southeastern Peloponnese
remained pagan until well into the 10th century AD. From the 4th century, the Empire's Balkan territories, including
Greece, suffered from the dislocation of the Barbarian Invasions. The raids and devastation of the Goths and Huns
in the 4th and 5th centuries and the Slavic invasion of Greece in the 7th century resulted in a dramatic collapse
in imperial authority in the Greek peninsula. Following the Slavic invasion, the imperial government retained formal
control of only the islands and coastal areas, particularly the densely populated walled cities such as Athens, Corinth
and Thessalonica, while some mountainous areas in the interior held out on their own and continued to recognize imperial
authority. Outside of these areas, a limited amount of Slavic settlement is generally thought to have occurred, although
on a much smaller scale than previously thought. The Byzantine recovery of lost provinces began toward the end of
the 8th century and most of the Greek peninsula came under imperial control again, in stages, during the 9th century.
This process was facilitated by a large influx of Greeks from Sicily and Asia Minor to the Greek peninsula, while
at the same time many Slavs were captured and re-settled in Asia Minor and those that remained were assimilated.
During the 11th and 12th centuries the return of stability resulted in the Greek peninsula benefiting from strong
economic growth – much stronger than that of the Anatolian territories of the Empire. Following the Fourth Crusade
and the fall of Constantinople to the "Latins" in 1204 mainland Greece was split between the Greek Despotate of Epirus
(a Byzantine successor state) and Frankish rule (known as the Frankokratia), while some islands came under Venetian
rule. The re-establishment of the Byzantine imperial capital in Constantinople in 1261 was accompanied by the empire's
recovery of much of the Greek peninsula, although the Frankish Principality of Achaea in the Peloponnese and the
rival Greek Despotate of Epirus in the north both remained important regional powers into the 14th century, while
the islands remained largely under Genoese and Venetian control. In the 14th century, much of the Greek peninsula
was lost by the Byzantine Empire at first to the Serbs and then to the Ottomans. By the beginning of the 15th century,
the Ottoman advance meant that Byzantine territory in Greece was limited mainly to its then-largest city, Thessaloniki,
and the Peloponnese (Despotate of the Morea). After the fall of Constantinople to the Ottomans in 1453, the Morea
was the last remnant of the Byzantine Empire to hold out against the Ottomans. However, this, too, fell to the Ottomans
in 1460, completing the Ottoman conquest of mainland Greece. With the Turkish conquest, many Byzantine Greek scholars,
who up until then were largely responsible for preserving Classical Greek knowledge, fled to the West, taking with
them a large body of literature and thereby significantly contributing to the Renaissance. While most of mainland
Greece and the Aegean islands were under Ottoman control by the end of the 15th century, Cyprus and Crete remained
Venetian territory and did not fall to the Ottomans until 1571 and 1670 respectively. The only part of the Greek-speaking
world that escaped long-term Ottoman rule was the Ionian Islands, which remained Venetian until their capture by
the First French Republic in 1797, then passed to the United Kingdom in 1809 until their unification with Greece
in 1864. The Greek Orthodox Church and the Ecumenical Patriarchate of Constantinople were considered
by the Ottoman governments as the ruling authorities of the entire Orthodox Christian population of the Ottoman Empire,
whether ethnically Greek or not. Although the Ottoman state did not force non-Muslims to convert to Islam, Christians
faced several types of discrimination intended to highlight their inferior status in the Ottoman Empire. Discrimination
against Christians, particularly when combined with harsh treatment by local Ottoman authorities, led to conversions
to Islam, if only superficially. In the 19th century, many "crypto-Christians" returned to their old religious allegiance. When military conflicts broke out between the Ottoman Empire and other states, Greeks usually took arms against
the Empire, with few exceptions. Prior to the Greek revolution, there had been a number of wars which saw Greeks
fight against the Ottomans, such as the Greek participation in the Battle of Lepanto in 1571, the Epirus peasants'
revolts of 1600–1601, the Morean War of 1684–1699, and the Russian-instigated Orlov Revolt in 1770, which aimed at
breaking up the Ottoman Empire in favor of Russian interests. These uprisings were put down by the Ottomans
with great bloodshed. The 16th and 17th centuries are regarded as something of a "dark age" in Greek history, with the prospect of overthrowing Ottoman rule appearing remote; only the Ionian islands remained free of Turkish domination. Corfu withstood three major sieges in 1537, 1571 and 1716, all of which resulted in the repulsion of the
Ottomans. However, in the 18th century, there arose through shipping a wealthy and dispersed Greek merchant class.
These merchants came to dominate trade within the Ottoman Empire, establishing communities throughout the Mediterranean,
the Balkans, and Western Europe. Though the Ottoman conquest had cut Greece off from significant European intellectual
movements such as the Reformation and the Enlightenment, these ideas together with the ideals of the French Revolution
and romantic nationalism began to penetrate the Greek world via the mercantile diaspora. In the late
18th century, Rigas Feraios, the first revolutionary to envision an independent Greek state, published a series of
documents relating to Greek independence, including but not limited to a national anthem and the first detailed map
of Greece, in Vienna, and was murdered by Ottoman agents in 1798. In 1814, a secret organization called
the Filiki Eteria (Society of Friends) was founded with the aim of liberating Greece. The Filiki Eteria planned to
launch revolution in the Peloponnese, the Danubian Principalities and Constantinople. The first of these revolts
began on 6 March 1821 in the Danubian Principalities under the leadership of Alexandros Ypsilantis, but it was soon
put down by the Ottomans. The events in the north spurred the Greeks of the Peloponnese into action and on 17 March
1821 the Maniots declared war on the Ottomans. By the end of the month, the Peloponnese was in open revolt against
the Ottomans and by October 1821 the Greeks under Theodoros Kolokotronis had captured Tripolitsa. The Peloponnesian
revolt was quickly followed by revolts in Crete, Macedonia and Central Greece, which would soon be suppressed. Meanwhile,
the makeshift Greek navy was achieving success against the Ottoman navy in the Aegean Sea and prevented Ottoman reinforcements
from arriving by sea. In 1822 and 1824 the Turks and Egyptians ravaged the islands, including Chios and Psara, committing
wholesale massacres of the population. This had the effect of galvanizing public opinion in western Europe in favor
of the Greek rebels. Tensions soon developed among different Greek factions, leading to two consecutive
civil wars. Meanwhile, the Ottoman Sultan negotiated with Mehmet Ali of Egypt, who agreed to send his son Ibrahim
Pasha to Greece with an army to suppress the revolt in return for territorial gain. Ibrahim landed in the Peloponnese
in February 1825 and had immediate success: by the end of 1825, most of the Peloponnese was under Egyptian control,
and the city of Missolonghi, under siege by the Turks since April 1825, fell in April 1826. Although Ibrahim was
defeated in Mani, he had succeeded in suppressing most of the revolt in the Peloponnese and Athens had been retaken.
After years of negotiation, three Great Powers, Russia, the United Kingdom and France, decided to intervene in the
conflict and each nation sent a navy to Greece. Following news that combined Ottoman–Egyptian fleets were going to
attack the Greek island of Hydra, the allied fleet intercepted the Ottoman–Egyptian fleet at Navarino. After a week-long
standoff, a battle began which resulted in the destruction of the Ottoman–Egyptian fleet. A French expeditionary
force was dispatched to supervise the evacuation of the Egyptian army from the Peloponnese, while the Greeks proceeded to recapture Central Greece by 1828. As a result of years of negotiation, the nascent Greek state was
finally recognized under the London Protocol in 1830. Later in the century, corruption and Prime Minister Charilaos Trikoupis's heavy spending on necessary infrastructure such as the Corinth Canal overtaxed the weak Greek economy, forcing Greece to declare public insolvency in 1893 and to accept the imposition of an International Financial Control authority to pay off the country's debtors.
Another political issue in 19th-century Greece was uniquely Greek: the language question. The Greek people spoke
a form of Greek called Demotic. Many of the educated elite saw this as a peasant dialect and were determined to restore
the glories of Ancient Greek. All Greeks were united, however, in their determination to liberate the Greek-speaking
provinces of the Ottoman Empire, regardless of the dialect they spoke. Especially in Crete, a prolonged revolt in
1866–1869 had raised nationalist fervour. When war broke out between Russia and the Ottomans in 1877, Greek popular
sentiment rallied to Russia's side, but Greece was too poor, and too wary of British intervention, to officially enter the war. Nevertheless, in 1881, Thessaly and small parts of Epirus were ceded to Greece under the Treaty of Berlin, though Greek hopes of receiving Crete were frustrated. At the end of the Balkan Wars, the extent of Greece's
territory and population had increased. In the following years, the struggle between King Constantine I and charismatic
Prime Minister Eleftherios Venizelos over the country's foreign policy on the eve of World War I dominated the country's
political scene, and divided the country into two opposing groups. During parts of the First World War, Greece had
two governments: a royalist pro-German government in Athens and a Venizelist pro-British one in Thessaloniki. The
two governments were united in 1917, when Greece officially entered the war on the side of the Triple Entente. In
the aftermath of the First World War, Greece attempted further expansion into Asia Minor, a region with a large native
Greek population at the time, but was defeated in the Greco-Turkish War of 1919–1922, contributing to a massive flight
of Asia Minor Greeks. These events overlapped with the Greek genocide (1914–1922), a period during which, according to various sources, Ottoman and Turkish officials contributed to the deaths of several hundred thousand Asia Minor Greeks. The resultant Greek exodus from Asia Minor was made permanent, and expanded, in an official population exchange between Greece and Turkey. The exchange was part of the terms of the Treaty of Lausanne, which
ended the war. The following era was marked by instability, as over 1.5 million propertyless Greek refugees from
Turkey had to be integrated into Greek society. Because the term "Greeks" in the exchange was based on religion, Cappadocian Greeks, Pontian Greeks, and non-Greek followers of Greek Orthodoxy were all subject to the exchange as well. Many of these refugees, particularly the Cappadocians and non-Greeks, could not speak Greek and came from environments alien to mainland Greek society. The refugees also produced a dramatic post-war population boost, as their number amounted to more than a quarter of Greece's prior population. Integration was pursued by settling the Pontians and Cappadocians in the Macedonian mountains, where it was thought they would adapt more easily, and the Demotic speakers and non-Greeks in the Greek islands and cities, environments to which they were already accustomed. Following the catastrophic events in Asia Minor, the
monarchy was abolished via a referendum in 1924 and the Second Hellenic Republic was declared. Premier Georgios Kondylis
took power in 1935 and effectively abolished the republic by restoring the monarchy via a referendum later that year.
A coup d'état followed in 1936 and installed Ioannis Metaxas as the head of a dictatorial regime known as the 4th
of August Regime. Although a dictatorship, Greece remained on good terms with Britain and was not allied with the
Axis. Greece was eventually occupied by Nazi Germany, which administered Athens and Thessaloniki, while other
regions of the country were given to Nazi Germany's partners, Fascist Italy and Bulgaria. The occupation brought
about terrible hardships for the Greek civilian population. Over 100,000 civilians died of starvation during the
winter of 1941–1942, tens of thousands more died because of reprisals by Nazis and collaborators, the country's economy
was ruined and the great majority of Greek Jews were deported and murdered in Nazi concentration camps. The Greek
Resistance, one of the most effective resistance movements in Europe fought vehemently against the Nazis and their
collaborators. The German occupiers committed numerous atrocities, mass executions, and wholesale slaughter of civilians
and destruction of towns and villages in reprisals. In the course of the concerted anti-guerrilla campaign, hundreds
of villages were systematically torched and almost 1,000,000 Greeks were left homeless. In total, the Germans executed
some 21,000 Greeks, the Bulgarians 40,000 and the Italians 9,000. In 1974, after the fall of the junta, Andreas Papandreou founded the Panhellenic
Socialist Movement (PASOK) in response to Karamanlis's conservative New Democracy party, with the two political formations
alternating in government ever since. Greece rejoined NATO in 1980. Greece became the tenth member of the European
Communities (subsequently subsumed by the European Union) on 1 January 1981, ushering in a period of sustained growth.
Widespread investments in industrial enterprises and heavy infrastructure, as well as funds from the European Union
and growing revenues from tourism, shipping and a fast-growing service sector raised the country's standard of living
to unprecedented levels. Traditionally strained relations with neighbouring Turkey improved when successive earthquakes
hit both nations in 1999, leading to the lifting of the Greek veto against Turkey's bid for EU membership. Located
in Southern Europe, Greece consists of a mountainous, peninsular mainland jutting out into the sea at the southern
end of the Balkans, ending at the Peloponnese peninsula (separated from the mainland by the canal of the Isthmus
of Corinth) and strategically located at the crossroads of Europe, Asia, and Africa. Due to its highly indented coastline
and numerous islands, Greece has the 11th longest coastline in the world with 13,676 km (8,498 mi); its land boundary
is 1,160 km (721 mi). The country lies approximately between latitudes 34° and 42° N, and longitudes 19° and 30° E. Eighty percent of Greece consists of mountains or hills, making the country one
of the most mountainous in Europe. Mount Olympus, the mythical abode of the Greek gods, culminates at Mytikas peak, which at 2,918 metres (9,573 ft) is the highest point in the country. Western Greece contains a number of lakes and wetlands and is
dominated by the Pindus mountain range. The Pindus, a continuation of the Dinaric Alps, reaches a maximum elevation
of 2,637 m (8,652 ft) at Mt. Smolikas (the second-highest in Greece) and historically has been a significant barrier
to east-west travel. The Pindus range continues through the central Peloponnese, crosses the islands of Kythera and
Antikythera and finds its way into the southwestern Aegean, ending in the island of Crete. The islands
of the Aegean are peaks of underwater mountains that once constituted an extension of the mainland. Pindus is characterized
by its high, steep peaks, often dissected by numerous canyons and a variety of other karstic landscapes. The spectacular
Vikos Gorge, part of the Vikos-Aoos National Park in the Pindus range, is listed by the Guinness Book of World Records as the deepest gorge in the world. Other notable formations are the Meteora rock pillars, atop which medieval Greek Orthodox monasteries have been built. The Greek islands are traditionally grouped into the following clusters: the
Argo-Saronic Islands in the Saronic Gulf near Athens; the Cyclades, a large but dense collection occupying the central part of the Aegean Sea; the North Aegean islands, a loose grouping off the west coast of Turkey; the Dodecanese, another loose collection in the southeast between Crete and Turkey; the Sporades, a small tight group off the coast of northeast Euboea; and the Ionian Islands, located to the west of the mainland in the Ionian Sea. The climate of
Greece is primarily Mediterranean, featuring mild, wet winters and hot, dry summers. This climate occurs at all coastal
locations, including Athens, the Cyclades, the Dodecanese, Crete, the Peloponnese, the Ionian Islands and parts of
the Central Continental Greece region. The Pindus mountain range strongly affects the climate of the country, as
areas to the west of the range are considerably wetter on average (due to greater exposure to south-westerly systems
bringing in moisture) than the areas lying to the east of the range (due to a rain shadow effect). The mountainous
areas of Northwestern Greece (parts of Epirus, Central Greece, Thessaly, Western Macedonia) as well as the mountainous central parts of the Peloponnese – including parts of the regional units of Achaea, Arcadia and Laconia – feature an
Alpine climate with heavy snowfalls. The inland parts of northern Greece, in Central Macedonia and East Macedonia
and Thrace feature a temperate climate with cold, damp winters and hot, dry summers with frequent thunderstorms.
Snowfalls occur every year in the mountains and northern areas, and brief snowfalls are not unknown even in low-lying
southern areas, such as Athens. Phytogeographically, Greece belongs to the Boreal Kingdom and is shared between the
East Mediterranean province of the Mediterranean Region and the Illyrian province of the Circumboreal Region. According
to the World Wide Fund for Nature and the European Environment Agency, the territory of Greece can be subdivided
into six ecoregions: the Illyrian deciduous forests, Pindus Mountains mixed forests, Balkan mixed forests, Rhodope
montane mixed forests, Aegean and Western Turkey sclerophyllous and mixed forests and Crete Mediterranean forests.
Greece is a unitary parliamentary republic. The nominal head of state is the President of the Republic, who is elected
by the Parliament for a five-year term. The current Constitution was drawn up and adopted by the Fifth Revisionary
Parliament of the Hellenes and entered into force in 1975 after the fall of the military junta of 1967–1974. It has
been revised three times since, in 1986, 2001 and 2008. The Constitution, which consists of 120 articles, provides
for a separation of powers into executive, legislative, and judicial branches, and grants extensive specific guarantees
(further reinforced in 2001) of civil liberties and social rights. Women's suffrage was guaranteed with an amendment
to the 1952 Constitution. According to the Constitution, executive power is exercised by the President of the Republic
and the Government. Since the constitutional amendment of 1986, the President's duties have been curtailed to a significant extent, and they are now largely ceremonial; most political power thus lies in the hands of the Prime Minister. The
position of Prime Minister, Greece's head of government, belongs to the current leader of the political party that
can obtain a vote of confidence by the Parliament. The President of the Republic formally appoints the Prime Minister
and, on his recommendation, appoints and dismisses the other members of the Cabinet. Legislative powers are exercised
by a 300-member elective unicameral Parliament. Statutes passed by the Parliament are promulgated by the President
of the Republic. Parliamentary elections are held every four years, but the President of the Republic is obliged to dissolve the Parliament earlier on the proposal of the Cabinet, in order to deal with a national issue of exceptional importance. The President is also obliged to dissolve the Parliament earlier if the opposition manages to pass a
motion of no confidence. The coalition government led the country to the parliamentary elections of May 2012. The
power of the traditional Greek political parties, PASOK and New Democracy, declined from 43% to 13% and from 33%
to 18%, respectively, due to their support for the Memorandum (Mnimonio) policies and the austerity measures. The leftist party
of SYRIZA became the second major party, with an increase from 4% to 16%. No party could form a sustainable government,
which led to the parliamentary elections of June 2012. The result of the second elections was the formation of a
coalition government composed of New Democracy (29%), PASOK (12%) and Democratic Left (6%) parties. Greece's foreign
policy is conducted through the Ministry for Foreign Affairs and its head, the Minister for Foreign Affairs. The
current minister is Nikos Kotzias. According to the official website, the main aims of the Ministry for Foreign Affairs
are to represent Greece before other states and international organizations; to safeguard the interests of the Greek state and of its citizens abroad; to promote Greek culture; to foster closer relations with the Greek diaspora; and to promote international cooperation. Additionally, due to its political and geographical proximity
to Europe, Asia, the Middle East and Africa, Greece is a country of significant geostrategic importance. It is considered a middle power and has developed a regional policy to help promote peace and stability in the Balkans, the Mediterranean, and the Middle East. Greece has universal compulsory military service for males, while females are
exempted from conscription but may otherwise serve in the military. As of 2009, mandatory military service
is nine months for male citizens between the ages of 19 and 45. Additionally, Greek males between the age of 18 and
60 who live in strategically sensitive areas may be required to serve part-time in the National Guard. However, as
the military has sought to become a completely professional force, the government has promised to reduce mandatory
military service or abolish it completely. Since the Kallikratis programme reform entered into effect on 1 January
2011, Greece has consisted of thirteen regions subdivided into a total of 325 municipalities. The 54 old prefectures
and prefecture-level administrations have been largely retained as sub-units of the regions. Seven decentralized
administrations group one to three regions for administrative purposes on a regional basis. There is also one autonomous
area, Mount Athos (Greek: Agio Oros, "Holy Mountain"), which borders the region of Central Macedonia. Greece is a
developed country with high standards of living and a high Human Development Index. Its economy mainly
comprises the service sector (85.0%) and industry (12.0%), while agriculture makes up 3.0% of the national economic
output. Important Greek industries include tourism (with 14.9 million international tourists in 2009, it is ranked
as the 7th most visited country in the European Union and 16th in the world by the United Nations World Tourism Organization)
and merchant shipping (at 16.2% of the world's total capacity, the Greek merchant marine is the largest in the world),
while the country is also a considerable agricultural producer (including fisheries) within the union. With an economy
larger than all the Balkan economies combined, Greece is the largest economy in the Balkans, and an important regional
investor. Greece is the number-two foreign investor of capital in Albania, the number-three foreign investor in Bulgaria, among the top three foreign investors in Romania and Serbia, and the most important trading partner and largest foreign
investor of the Republic of Macedonia. Greek banks open a new branch somewhere in the Balkans on an almost weekly
basis. The Greek telecommunications company OTE has become a strong investor in Yugoslavia and other Balkan countries.
The Greek economy is classified as advanced and high-income. Greece was a founding member of the Organisation for
Economic Co-operation and Development (OECD) and the Organization of the Black Sea Economic Cooperation (BSEC). In
1979 the country's accession to the European Communities and the single market was signed, and the process was
completed in 1982. Greece was accepted into the Economic and Monetary Union of the European Union on 19 June 2000,
and in January 2001 adopted the Euro as its currency, replacing the Greek drachma at an exchange rate of 340.75 drachma
to the Euro. Greece is also a member of the International Monetary Fund and the World Trade Organization, and is
ranked 24th on the KOF Globalization Index for 2013. According to Der Spiegel, credits given to European governments
were disguised as "swaps" and consequently did not get registered as debt. As Eurostat at the time ignored statistics
involving financial derivatives, a German derivatives dealer had commented to Der Spiegel that "The Maastricht rules
can be circumvented quite legally through swaps," and "In previous years, Italy used a similar trick to mask its
true debt with the help of a different US bank." These conditions had enabled Greek as well as many other European
governments to spend beyond their means, while meeting the deficit targets of the European Union. In terms of total
number of ships, the Greek Merchant Navy stands at 4th worldwide, with 3,150 ships (741 of which are registered in
Greece, while the remaining 2,409 are registered in other ports). In terms of ship categories, Greece ranks first in both tankers and dry bulk carriers, fourth in container ships, and fifth in other ships. However, today's fleet roster is
smaller than an all-time high of 5,000 ships in the late 1970s. Additionally, the total number of ships flying a
Greek flag (including non-Greek fleets) is 1,517, or 5.3% of the world's dwt (ranked 5th). The vast majority of visitors
in Greece in 2007 came from the European continent, numbering 12.7 million, while the most visitors from a single
nationality were those from the United Kingdom, (2.6 million), followed closely by those from Germany (2.3 million).
In 2010, the most visited region of Greece was that of Central Macedonia, with 18% of the country's total tourist
flow (amounting to 3.6 million tourists), followed by Attica with 2.6 million and the Peloponnese with 1.8 million.
Northern Greece is the country's most-visited geographical region, with 6.5 million tourists, while Central Greece
is second with 6.3 million. Railway connections play a somewhat lesser role in Greece than in many other European
countries, but they too have been expanded, with new suburban/commuter rail connections, serviced by Proastiakos
around Athens, towards its airport, Kiato and Chalkida; around Thessaloniki, towards the cities of Larissa and Edessa;
and around Patras. A modern intercity rail connection between Athens and Thessaloniki has also been established,
while an upgrade to double lines in many parts of the 2,500 km (1,600 mi) network is underway. International railway
lines connect Greek cities with the rest of Europe, the Balkans and Turkey. Internet cafés that provide net access,
office applications and multiplayer gaming are also a common sight in the country, while mobile internet on 3G and
4G LTE cellphone networks and Wi-Fi connections can be found almost everywhere. 3G/4G mobile internet usage has been on a sharp increase in recent years, with a 340% increase between August 2011 and August 2012. The International Telecommunication Union, a United Nations agency, ranks Greece among the top 30 countries with a highly developed information
and communications infrastructure. Greece's technology parks with incubator facilities include the Science and Technology
Park of Crete (Heraklion), the Thessaloniki Technology Park, the Lavrio Technology Park, the Patras Science Park, and the Science and Technology Park of Epirus (Ioannina). Greece has been a member of the European Space Agency (ESA)
since 2005. Cooperation between ESA and the Hellenic National Space Committee began in the early 1990s. In 1994 Greece
and ESA signed their first cooperation agreement. Having formally applied for full membership in 2003, Greece became
the ESA's sixteenth member on 16 March 2005. As member of the ESA, Greece participates in the agency's telecommunication
and technology activities, and the Global Monitoring for Environment and Security Initiative. Notable Greek scientists
of modern times include Dimitrios Galanos, Georgios Papanikolaou (inventor of the Pap test), Nicholas Negroponte,
Constantin Carathéodory (known for the Carathéodory theorems and Carathéodory conjecture), Manolis Andronikos (discovered
the tomb of Philip II of Macedon in Vergina), Michael Dertouzos, John Argyris, Panagiotis Kondylis, John Iliopoulos
(2007 Dirac Prize for his contributions on the physics of the charm quark, a major contribution to the birth of the
Standard Model, the modern theory of Elementary Particles), Joseph Sifakis (2007 Turing Award, the "Nobel Prize"
of Computer Science), Christos Papadimitriou (2002 Knuth Prize, 2012 Gödel Prize), Mihalis Yannakakis (2005 Knuth
Prize), Dimitri Nanopoulos and Helene Ahrweiler. Estimates of the recognized Greek Muslim minority, which is mostly
located in Thrace, range from 98,000 to 140,000 (about 1% of the population), while the immigrant Muslim community numbers between
200,000 and 300,000. Albanian immigrants to Greece are usually associated with the Muslim religion, although most
are secular in orientation. Following the 1919–1922 Greco-Turkish War and the 1923 Treaty of Lausanne, Greece and
Turkey agreed to a population transfer based on cultural and religious identity. About 500,000 Muslims from Greece,
predominantly those defined as Turks, but also Greek Muslims like the Vallahades of western Macedonia, were exchanged
with approximately 1,500,000 Greeks from Turkey. However, many of the refugees who settled in former Ottoman Muslim villages in Central Macedonia, and who were defined as Christian Orthodox Caucasus Greeks, had arrived from the former Russian Transcaucasus province of Kars Oblast in the few years between its retrocession to Turkey and the official population exchange. Greek citizens who are Roman Catholic are estimated at around 50,000, with the Roman Catholic immigrant community in the country numbering approximately 200,000. Old Calendarists account for 500,000 followers. Protestants, including
Greek Evangelical Church and Free Evangelical Churches, stand at about 30,000. Assemblies of God, International Church
of the Foursquare Gospel and other Pentecostal churches of the Greek Synod of Apostolic Church have 12,000 members.
The independent Free Apostolic Church of Pentecost is the biggest Protestant denomination in Greece, with 120 churches. There are no official statistics on the Free Apostolic Church of Pentecost, but the Orthodox Church estimates its followers at 20,000. The Jehovah's Witnesses report having 28,874 active members. In recent years there has been a small-scale revival of the ancient Greek religion, with an estimated 2,000 active practitioners and 100,000
"sympathisers". During the 19th and 20th centuries there was a major dispute known as the Greek language question,
on whether the official language of Greece should be the archaic Katharevousa, created in the 19th century and used
as the state and scholarly language, or the Dimotiki, the form of the Greek language which evolved naturally from
Byzantine Greek and was the language of the people. The dispute was finally resolved in 1976, when Dimotiki was made
the only official variety of the Greek language, and Katharevousa fell into disuse. Greece is today relatively homogeneous
in linguistic terms, with a large majority of the native population using Greek as their first or only language.
Among the Greek-speaking population, speakers of the distinctive Pontic dialect came to Greece from Asia Minor after
the Greek genocide and constitute a sizable group. Speakers of the Cappadocian dialect came to Greece for the same reason, but the dialect is endangered and barely spoken now. Indigenous Greek dialects include the archaic Greek spoken by the
Sarakatsani, traditionally transhumant mountain shepherds of Greek Macedonia and other parts of Northern Greece.
The Tsakonian language, a distinct Greek language deriving from Doric Greek instead of Ionic Greek, is still spoken
in some villages in the southeastern Peloponnese. The Muslim minority in Thrace, which amounts to approximately 0.95%
of the total population, consists of speakers of Turkish, Bulgarian (Pomaks) and Romani. Romani is also spoken by
Christian Roma in other parts of the country. Further minority languages have traditionally been spoken by regional
population groups in various parts of the country. Their use has decreased radically in the course of the 20th century
through assimilation with the Greek-speaking majority. Today they are only maintained by the older generations and
are on the verge of extinction. This goes for the Arvanites, an Albanian-speaking group mostly located in the rural
areas around the capital Athens, and for the Aromanians and Moglenites, also known as Vlachs, whose language is closely
related to Romanian and who used to live scattered across several areas of mountainous central Greece. Members of
these groups ethnically identify as Greeks and are today all at least bilingual in Greek. Near the northern Greek
borders there are also some Slavic-speaking groups, locally known as Slavomacedonian-speaking, most of whose members
identify ethnically as Greeks. Their dialects can be linguistically classified as forms of either Macedonian Slavic
or Bulgarian. It is estimated that after the population exchanges of 1923, Macedonia had 200,000 to 400,000 Slavic
speakers. The Jewish community in Greece traditionally spoke Ladino (Judeo-Spanish), today maintained only by a few
thousand speakers. Other notable minority languages include Armenian, Georgian, and the Greco-Turkic dialect spoken
by the Urums, a community of Caucasus Greeks from the Tsalka region of central Georgia and ethnic Greeks from southeastern
Ukraine who arrived in mainly Northern Greece as economic migrants in the 1990s. A study from the Mediterranean Migration
Observatory maintains that the 2001 census recorded 762,191 persons residing in Greece without Greek citizenship,
constituting around 7% of the total population. Of the non-citizen residents, 48,560 were EU or European Free Trade Association
nationals and 17,426 were Cypriots with privileged status. The majority come from Eastern European countries: Albania
(56%), Bulgaria (5%) and Romania (3%), while migrants from the former Soviet Union (Georgia, Russia, Ukraine, Moldova,
etc.) comprise 10% of the total. Some of the immigrants from Albania are from the Greek minority in Albania centred
on the region of Northern Epirus. In addition, the total Albanian national population, which includes temporary migrants and undocumented persons, is around 600,000. Greece, together with Italy and Spain, is a major entry point for illegal
immigrants trying to enter the EU. Illegal immigrants entering Greece mostly do so from the border with Turkey at
the Evros River and the islands of the eastern Aegean across from Turkey (mainly Lesbos, Chios, Kos, and Samos).
In 2012, the majority of illegal immigrants entering Greece came from Afghanistan, followed by Pakistanis and Bangladeshis.
In 2015, arrivals of refugees by sea increased dramatically, mainly due to the ongoing Syrian civil war. There were 856,723 arrivals by sea in Greece, an almost fivefold increase over the same period of 2014, of which Syrians represented almost 45%. An estimated 8% of the arrivals applied for asylum in Greece. Greeks have a long tradition
of valuing and investing in paideia (education). Paideia was one of the highest societal values in the Greek and
Hellenistic world while the first European institution described as a university was founded in 5th century Constantinople
and operated in various incarnations until the city's fall to the Ottomans in 1453. The University of Constantinople
was Christian Europe's first secular institution of higher learning since no theological subjects were taught, and, considering the original meaning of the word university as a corporation of students, the world's first university as well. Greece's post-compulsory secondary education consists of two school types: unified upper secondary schools
(Γενικό Λύκειο, Genikό Lykeiό) and technical–vocational educational schools (Τεχνικά και Επαγγελματικά Εκπαιδευτήρια,
"TEE"). Post-compulsory secondary education also includes vocational training institutes (Ινστιτούτα Επαγγελματικής
Κατάρτισης, "IEK") which provide a formal but unclassified level of education. As they can accept both Gymnasio (lower
secondary school) and Lykeio (upper secondary school) graduates, these institutes are not classified as offering
a particular level of education. According to the Framework Law (3549/2007), public higher education in "Highest Educational Institutions" (Ανώτατα Εκπαιδευτικά Ιδρύματα, Anótata Ekpaideytiká Idrýmata, "ΑΕΙ") consists of two parallel sectors: the
University sector (Universities, Polytechnics, Fine Arts Schools, the Open University) and the Technological sector
(Technological Education Institutions (TEI) and the School of Pedagogic and Technological Education). There are also
State Non-University Tertiary Institutes offering vocationally oriented courses of shorter duration (2 to 3 years)
which operate under the authority of other Ministries. Students are admitted to these Institutes according to their
performance at national level examinations taking place after completion of the third grade of Lykeio. Additionally,
students over twenty-two years old may be admitted to the Hellenic Open University through a form of lottery. The
Capodistrian University of Athens is the oldest university in the eastern Mediterranean. Greece has universal health
care. In a 2000 World Health Organization report, its health care system ranked 14th in overall performance of 191
countries surveyed. In a 2013 Save the Children report, Greece was ranked the 19th best country (out of 176 countries
surveyed) for the state of mothers and newborn babies. In 2010, there were 138 hospitals with 31,000 beds in the
country, but on 1 July 2011, the Ministry for Health and Social Solidarity announced its plans to decrease the number
to 77 hospitals with 36,035 beds, as a necessary reform to reduce expenses and further enhance healthcare standards. Greece's healthcare expenditures as a percentage of GDP were 9.6% in 2007 according to a 2011 OECD report,
just above the OECD average of 9.5%. The country has the highest doctor-to-population ratio of any OECD
country. The culture of Greece has evolved over thousands of years, beginning in Mycenaean Greece and continuing
most notably into Classical Greece, through the influence of the Roman Empire and its Greek Eastern continuation,
the Eastern Roman or Byzantine Empire. Other cultures and nations, such as the Latin and Frankish states, the Ottoman
Empire, the Venetian Republic, the Genoese Republic, and the British Empire have also left their influence on modern
Greek culture, although historians credit the Greek War of Independence with revitalising Greece and giving birth
to a single, cohesive entity of its multi-faceted culture. In ancient times, Greece was the birthplace of Western
culture. Modern democracies owe a debt to Greek beliefs in government by the people, trial by jury, and equality
under the law. The ancient Greeks pioneered in many fields that rely on systematic thought, including biology, geometry,
history, philosophy, physics and mathematics. They introduced such important literary forms as epic and lyric poetry,
history, tragedy, and comedy. In their pursuit of order and proportion, the Greeks created an ideal of beauty that
strongly influenced Western art. The modern Greek theatre was born after Greek independence, in the early 19th
century, and was initially influenced by Heptanesean theatre and melodrama, such as Italian opera. The Nobile
Teatro di San Giacomo di Corfù was the first theatre and opera house of modern Greece and the place where the first
Greek opera, Spyridon Xyndas' The Parliamentary Candidate (based on an exclusively Greek libretto) was performed.
During the late 19th and early 20th century, the Athenian theatre scene was dominated by revues, musical comedies,
operettas and nocturnes, and notable composers included Spyridon Samaras, Dionysios Lavrangas, Theophrastos Sakellaridis
and others. Aristotle of Stagira, the most important disciple of Plato, shared with his teacher the title of the
greatest philosopher of antiquity. But while Plato had sought to elucidate and explain things from the supra-sensual
standpoint of the forms, his pupil preferred to start from the facts given to us by experience. Apart from these
most significant Greek philosophers, other well-known schools of ancient Greek philosophy included
Stoicism, Epicureanism, Skepticism and Neoplatonism. At the beginning of Greek literature stand the two monumental
works of Homer: the Iliad and the Odyssey. Though dates of composition vary, these works were fixed around 800 BC
or after. In the classical period many of the genres of western literature became more prominent. Lyrical poetry,
odes, pastorals, elegies, epigrams; dramatic presentations of comedy and tragedy; historiography, rhetorical treatises,
philosophical dialectics, and philosophical treatises all arose in this period. The two major lyrical poets were
Sappho and Pindar. The Classical era also saw the dawn of drama. Cinema first appeared in Greece in 1896 but the
first actual cine-theatre was opened in 1907. In 1914 the Asty Films Company was founded and the production of long
films began. Golfo (Γκόλφω), a well-known traditional love story, is considered the first Greek feature film, although
there were several minor productions such as newscasts before this. In 1931 Orestis Laskos directed Daphnis and Chloe
(Δάφνις και Χλόη), containing the first nude scene in the history of European cinema; it was also the first Greek
film to be screened abroad. In 1944 Katina Paxinou was honoured with the Best Supporting Actress Academy Award
for For Whom the Bell Tolls. The 1950s and early 1960s are considered by many to be a golden age of Greek cinema.
Directors and actors of this era were recognized as important historical figures in Greece and some gained international
acclaim: Irene Papas, Melina Mercouri, Mihalis Kakogiannis, Alekos Sakellarios, Nikos Tsiforos, Iakovos Kambanelis,
Katina Paxinou, Nikos Koundouros, Ellie Lambeti, and others. More than sixty films per year were made, with the majority
having film noir elements. Notable films were Η κάλπικη λίρα (1955, directed by Giorgos Tzavellas), Πικρό Ψωμί (1951,
directed by Grigoris Grigoriou), O Drakos (1956, directed by Nikos Koundouros), and Stella (1955, directed by Cacoyannis
and written by Kampanellis). Cacoyannis also directed Zorba the Greek with Anthony Quinn, which received Best Director,
Best Adapted Screenplay and Best Film nominations. Finos Film also contributed to this period with movies such as
Λατέρνα, Φτώχεια και Φιλότιμο, Madalena, Η Θεία από το Σικάγο, Το ξύλο βγήκε από τον Παράδεισο and many more. During
the 1970s and 1980s Theo Angelopoulos directed a series of notable and appreciated movies. His film Eternity and
a Day won the Palme d'Or and the Prize of the Ecumenical Jury at the 1998 Cannes Film Festival. Greek cuisine is
characteristic of the healthy Mediterranean diet, which is epitomized by dishes of Crete. Greek cuisine incorporates
fresh ingredients into a variety of local dishes such as moussaka, stifado, Greek salad, fasolada, spanakopita and
souvlaki. Some dishes can be traced back to ancient Greece like skordalia (a thick purée of walnuts, almonds, crushed
garlic and olive oil), lentil soup, retsina (white or rosé wine sealed with pine resin) and pasteli (candy bar with
sesame seeds baked with honey). Throughout Greece people often enjoy eating from small dishes such as meze with various
dips such as tzatziki, grilled octopus and small fish, feta cheese, dolmades (rice, currants and pine kernels wrapped
in vine leaves), various pulses, olives and cheese. Olive oil is added to almost every dish. Sweet desserts include
galaktoboureko, and popular drinks include ouzo, metaxa and a variety of wines including retsina. Greek cuisine differs
widely between different parts of the mainland and from island to island. It uses some flavorings more often than other
Mediterranean cuisines: oregano, mint, garlic, onion, dill and bay laurel leaves. Other common herbs and spices include
basil, thyme and fennel seed. Many Greek recipes, especially in the northern parts of the country, use "sweet" spices
in combination with meat, for example cinnamon and cloves in stews. Greek vocal music extends far back into ancient
times where mixed-gender choruses performed for entertainment, celebration and spiritual reasons. Instruments during
that period included the double-reed aulos and the plucked string instrument, the lyre, especially the special kind
called a kithara. Music played an important role in the education system during ancient times. Boys were taught music
from the age of six. Later influences from the Roman Empire, the Middle East, and the Byzantine Empire also had an effect
on Greek music. While the new technique of polyphony was developing in the West, the Eastern Orthodox Church resisted
any type of change. Therefore, Byzantine music remained monophonic and without any form of instrumental accompaniment.
As a result, and despite attempts by certain Greek chanters (such as Manouel Gazis, Ioannis Plousiadinos
or the Cypriot Ieronimos o Tragoudistis), Byzantine music was deprived of the elements which in the West encouraged
an unimpeded development of art. However, this method, which kept music away from polyphony, along with centuries
of continuous culture, enabled monophonic music to develop to the greatest heights of perfection. Byzantium presented
the monophonic Byzantine chant; a melodic treasury of inestimable value for its rhythmical variety and expressive
power. Along with the Byzantine (Church) chant and music, the Greek people also cultivated the Greek folk song which
is divided into two cycles, the akritic and klephtic. The akritic was created between the 9th and 10th centuries
and expressed the life and struggles of the akrites (frontier guards) of the Byzantine empire, the most well known
being the stories associated with Digenes Akritas. The klephtic cycle came into being between the late Byzantine
period and the start of the Greek War of Independence. The klephtic cycle, together with historical songs, paraloghes
(narrative song or ballad), love songs, mantinades, wedding songs, songs of exile and dirges express the life of
the Greeks. There is a unity between the Greek people's struggles for freedom, their joys and sorrow and attitudes
towards love and death. The Heptanesean kantádhes (καντάδες 'serenades'; sing.: καντάδα) became the forerunners of
modern Greek song, influencing its development to a considerable degree. For the first part of the next century,
several Greek composers continued to borrow elements from the Heptanesean style. The most successful songs during
the period 1870–1930 were the so-called Athenian serenades, and the songs performed on stage (επιθεωρησιακά τραγούδια
'theatrical revue songs') in revue, operettas and nocturnes that were dominating Athens' theater scene. Rebetiko,
initially a music associated with the lower classes, later (and especially after the population exchange between
Greece and Turkey) reached greater general acceptance as the rough edges of its overt subcultural character were
softened and polished, sometimes to the point of unrecognizability. It was the basis of the later laïkó (song of the
people). The leading performers of the genre include Apostolos Kaldaras, Grigoris Bithikotsis, Stelios Kazantzidis,
George Dalaras, Haris Alexiou and Glykeria. Regarding classical music, it was through the Ionian Islands (which
were under western rule and influence) that all the major advances of western European classical music were introduced
to mainland Greeks. The region is notable for the birth of the first School of modern Greek classical music (Heptanesean
or Ionian School, Greek: Επτανησιακή Σχολή), established in 1815. Prominent representatives of this genre include
Nikolaos Mantzaros, Spyridon Xyndas, Spyridon Samaras and Pavlos Carrer. Manolis Kalomiris is considered the founder
of the Greek National School of Music. In the 20th century, Greek composers have had a significant impact on the
development of avant garde and modern classical music, with figures such as Iannis Xenakis, Nikos Skalkottas, and
Dimitri Mitropoulos achieving international prominence. At the same time, composers and musicians such as Mikis Theodorakis,
Manos Hatzidakis, Eleni Karaindrou, Vangelis and Demis Roussos garnered an international following for their music,
which includes famous film scores such as Zorba the Greek, Serpico, Never on Sunday, America America, Eternity and
a Day, Chariots of Fire, Blade Runner, among others. Greek American composers known for their film scores include
Yanni and Basil Poledouris. Notable Greek opera singers and classical musicians of the 20th and 21st century include
Maria Callas, Nana Mouskouri, Mario Frangoulis, Leonidas Kavakos, Dimitris Sgouros and others. Greece has participated
in the Eurovision Song Contest 35 times since its debut at the 1974 contest. In 2005, Greece won with the song "My
Number One", performed by Greek-Swedish singer Elena Paparizou. The song received 230 points with 10 sets of 12 points
from Belgium, Bulgaria, Hungary, the United Kingdom, Turkey, Albania, Cyprus, Serbia & Montenegro, Sweden and Germany
and also became a smash hit in several countries, especially in Greece. The 51st Eurovision Song Contest was
held in Athens at the Olympic Indoor Hall of the Athens Olympic Sports Complex in Maroussi, hosted by Maria
Menounos and Sakis Rouvas. Greece is the birthplace of the ancient Olympic Games, first recorded in 776 BC in Olympia,
and hosted the modern Olympic Games twice, the inaugural 1896 Summer Olympics and the 2004 Summer Olympics. During
the parade of nations, Greece is always called first, as the founding nation of the ancient precursor of the modern Olympics.
The nation has competed at every Summer Olympic Games, one of only four countries to have done so. Having won a total
of 110 medals (30 gold, 42 silver and 38 bronze), Greece is ranked 32nd by gold medals in the all-time Summer Olympic
medal count. Their best ever performance was in the 1896 Summer Olympics, when Greece finished second in the medal
table with 10 gold medals. The Greek national football team, ranking 12th in the world in 2014 (and having reached
a high of 8th in the world in 2008 and 2011), were crowned European Champions in Euro 2004 in one of the biggest
upsets in the history of the sport and became one of the most successful national teams in European football, being
one of only nine national teams to have won the UEFA European Championship. The Greek Super League is the highest
professional football league in the country comprising eighteen teams. The most successful are Olympiacos, Panathinaikos,
AEK Athens and PAOK. The Greek national basketball team has a decades-long tradition of excellence in the sport,
being considered among the world's top basketball powers. As of 2012, it ranked 4th in the world and 2nd in Europe.
They have won the European Championship twice in 1987 and 2005, and have reached the final four in two of the last
four FIBA World Championships, taking second place in the 2006 FIBA World Championship after a spectacular
101–95 win against Team USA in the tournament's semifinal. The domestic top basketball league, A1 Ethniki, is composed
of fourteen teams. The most successful Greek teams are Olympiacos, Panathinaikos, Aris Thessaloniki, AEK Athens and
P.A.O.K. Greek basketball teams have been the most successful in European basketball over the last 25 years, having won as many
as 9 Euroleagues since the establishment of the modern era Euroleague Final Four format in 1988, while no other nation
has won more than 4 Euroleague championships in this period. Besides the 9 Euroleagues, Greek basketball teams (Panathinaikos,
Olympiacos, Aris Thessaloniki, AEK Athens, P.A.O.K, Maroussi) have won 3 Triple Crowns, 5 Saporta Cups, 2 Korać Cups
and 1 FIBA Europe Champions Cup. After the 2005 European Championship triumph of the Greek national basketball team,
Greece became the reigning European Champion in both football and basketball. The Greece women's national water polo
team have emerged as one of the leading powers in the world, becoming World Champions after their gold medal win
against the hosts China at the 2011 World Championship. They have also won the silver medal at the 2004 Summer Olympics,
the gold medal at the 2005 World League and the silver medals at the 2010 and 2012 European Championships. The Greece
men's national water polo team became the third best water polo team in the world in 2005, after their win against
Croatia in the bronze medal game at the 2005 World Aquatics Championships in Canada. The domestic top water polo
leagues, Greek Men's Water Polo League and Greek Women's Water Polo League are considered amongst the top national
leagues in European water polo, as its clubs have made significant success in European competitions. In men's European
competitions, Olympiacos has won the Champions League, the European Super Cup and the Triple Crown in 2002 becoming
the first club in water polo history to win every title in which it has competed within a single year (National championship,
National cup, Champions League and European Super Cup), while NC Vouliagmeni has won the LEN Cup Winners' Cup in
1997. In women's European competitions, Greek water polo teams (NC Vouliagmeni, Glyfada NSC, Olympiacos, Ethnikos
Piraeus) are amongst the most successful in European water polo, having won as many as 4 LEN Champions Cups, 3 LEN
Trophies and 2 European Supercups. The Greek men's national volleyball team has won two bronze medals, one in the
European Volleyball Championship and another one in the European Volleyball League, a 5th place in the Olympic Games
and a 6th place in the FIVB Volleyball Men's World Championship. The Greek league, the A1 Ethniki, is considered
one of the top volleyball leagues in Europe and the Greek clubs have made significant success in European competitions.
Olympiacos is the most successful volleyball club in the country having won the most domestic titles and being the
only Greek club to have won European titles; they have won two CEV Cups, they have been CEV Champions League runners-up
twice and they have played in as many as 12 Final Fours in the European competitions, making them one of the most
traditional volleyball clubs in Europe. Iraklis have also seen significant success in European competitions, having
been three times runners-up of the CEV Champions League. The principal gods of the ancient Greek religion were the
Dodekatheon, or the Twelve Gods, who lived on the top of Mount Olympus. The most important of all ancient Greek gods
was Zeus, the king of the gods, who was married to Hera, who was also Zeus's sister. The other Greek gods that made
up the Twelve Olympians were Demeter, Ares, Poseidon, Athena, Dionysus, Apollo, Artemis, Aphrodite, Hephaestus and
Hermes. Apart from these twelve gods, Greeks also had a variety of other mystical beliefs, such as nymphs and other
magical creatures. According to Greek law, every Sunday of the year is a public holiday. In addition, there are four
mandatory official public holidays: 25 March (Greek Independence Day), Easter Monday, 15 August (Assumption or Dormition
of the Holy Virgin), and 25 December (Christmas). 1 May (Labour Day) and 28 October (Ohi Day) are regulated by law
as being optional but it is customary for employees to be given the day off. There are, however, more public holidays
celebrated in Greece than are announced by the Ministry of Labour each year as either obligatory or optional. The
list of these non-fixed national holidays rarely changes and has not changed in recent decades, giving a total of
eleven national holidays each year. In 2011, it became apparent that the bail-out would be insufficient and a second
bail-out amounting to €130 billion ($173 billion) was agreed in 2012, subject to strict conditions, including financial
reforms and further austerity measures. As part of the deal, there was to be a 53% reduction in the Greek debt burden
to private creditors and any profits made by Eurozone central banks on their holdings of Greek debt are to be repatriated
back to Greece. Greece achieved a primary government budget surplus in 2013. In April 2014, Greece returned to the
global bond market as it successfully sold €3 billion worth of five-year government bonds at a yield of 4.95%. Greece
returned to growth after six years of economic decline in the second quarter of 2014, and was the Eurozone's fastest-growing
economy in the third quarter. Following the assassination of Philip II, his son Alexander III ("The Great") assumed
the leadership of the League of Corinth and launched an invasion of the Persian Empire with the combined forces of
all Greek states in 334 BC. Undefeated in battle, Alexander had conquered the Persian Empire in its entirety by 330
BC. By the time of his death in 323 BC, he had created one of the largest empires in history, stretching from Greece
to India. His empire split into several kingdoms upon his death, the most famous of which were the Seleucid Empire,
Ptolemaic Egypt, the Greco-Bactrian Kingdom and the Indo-Greek Kingdom. Many Greeks migrated to Alexandria, Antioch,
Seleucia and the many other new Hellenistic cities in Asia and Africa. Although the political unity of Alexander's
empire could not be maintained, it resulted in the Hellenistic civilization and spread the Greek language and Greek
culture in the territories conquered by Alexander. Greek science, technology and mathematics are generally considered
to have reached their peak during the Hellenistic period.
Shell is vertically integrated and active in every area of the oil and gas industry, including exploration and production,
refining, distribution and marketing, petrochemicals, power generation and trading. It has minor renewable energy
activities in the form of biofuels and wind. It has operations in over 90 countries, produces around 3.1 million
barrels of oil equivalent per day and has 44,000 service stations worldwide. Shell Oil Company, its subsidiary in
the United States, is one of its largest businesses. In February 1907, the Royal Dutch Shell Group was created through
the amalgamation of two rival companies: the Royal Dutch Petroleum Company of the Netherlands and the "Shell" Transport
and Trading Company Ltd of the United Kingdom. It was a move largely driven by the need to compete globally with
Standard Oil. The Royal Dutch Petroleum Company was a Dutch company founded in 1890 to develop an oilfield in Sumatra,
and initially led by August Kessler, Hugo Loudon, and Henri Deterding. The "Shell" Transport and Trading Company
(the quotation marks were part of the legal name) was a British company, founded in 1897 by Marcus Samuel, 1st Viscount
Bearsted, and his brother Samuel Samuel. Their father had owned an antique company in Houndsditch, London, which
expanded in 1833 to import and sell sea-shells, from which the company "Shell" took its name. For various reasons,
the new firm operated as a dual-listed company, whereby the merging companies maintained their legal existence, but
operated as a single-unit partnership for business purposes. The terms of the merger gave 60 percent ownership of
the new group to the Dutch arm and 40 percent to the British. National patriotic sensibilities would not permit a
full-scale merger or takeover of either of the two companies. The Dutch company, Koninklijke Nederlandsche Petroleum
Maatschappij, was in charge at The Hague of production and manufacture. A British company was formed, called the
Anglo-Saxon Petroleum Company, based in London, to direct the transport and storage of the products. In November
2004, following a period of turmoil caused by the revelation that Shell had been overstating its oil reserves, it
was announced that the Shell Group would move to a single capital structure, creating a new parent company to be
named Royal Dutch Shell plc, with its primary listing on the London Stock Exchange, a secondary listing on the Amsterdam
Stock Exchange, its headquarters and tax residency in The Hague, Netherlands and its registered office in London.
The unification was completed on 20 July 2005 and the original owners delisted their companies from the respective
exchanges. On 20 July 2005, the Shell Transport & Trading Company plc was delisted from the LSE, while the Royal
Dutch Petroleum Company was delisted from the NYSE on 18 November 2005. The shares of the company were issued at a 60/40 advantage
for the shareholders of Royal Dutch in line with the original ownership of the Shell Group. In February 2010 Shell
and Cosan formed a 50:50 joint-venture, Raízen, comprising all of Cosan's Brazilian ethanol, energy generation, fuel
distribution and sugar activities, and all of Shell's Brazilian retail fuel and aviation distribution businesses.
In March 2010, Shell announced the sale of some of its assets, including its liquid petroleum gas (LPG) business,
to meet the cost of a planned $28bn capital spending programme. Shell invited buyers to submit indicative bids, due
by 22 March, with a plan to raise $2–3bn from the sale. In June 2010, Royal Dutch Shell agreed to acquire all the
business of East Resources for a cash consideration of $4.7 billion. The transaction included East Resources' tight
gas fields. Over the course of 2013, the corporation began the sale of its US shale gas assets and cancelled a US$20
billion gas project that was to be constructed in the US state of Louisiana. A new CEO Ben van Beurden was appointed
in January 2014, prior to the announcement that the corporation's overall performance in 2013 was 38 per cent lower
than 2012—the value of Shell's shares fell by 3 per cent as a result. Following the sale of the majority of its Australian
assets in February 2014, the corporation plans to sell a further US$15 billion worth of assets in the period leading
up to 2015, with deals announced in Australia, Brazil and Italy. The presence of companies like Shell in the Niger-Delta
has led to extreme environmental issues in the Niger Delta. Many pipelines in the Niger-Delta owned by Shell are
old and corroded. Shell has acknowledged its responsibility for maintaining the pipelines but has denied responsibility
for the environmental damage. This has led to mass protests against Shell by Niger-Delta inhabitants, Amnesty International
and Friends of the Earth Netherlands. It has also led to plans to boycott Shell by environmental
groups, and human rights groups. In January 2013, a Dutch court rejected four out of five allegations brought against
the firm over oil pollution in the Niger Delta but found a subsidiary guilty of one case of pollution, ordering compensation
to be paid to a Nigerian farmer. The name Shell is linked to The "Shell" Transport and Trading Company. In 1833,
the founder's father, Marcus Samuel, founded an import business to sell seashells to London collectors. When collecting
seashell specimens in the Caspian Sea area in 1892, the younger Samuel realised there was potential in exporting
lamp oil from the region and commissioned the world's first purpose-built oil tanker, the Murex (Latin for a type
of snail shell), to enter this market; by 1907 the company had a fleet. Although for several decades the company
had a refinery at Shell Haven on the Thames, there is no evidence of this having provided the name. Shell's primary
business is the management of a vertically integrated oil company. The development of technical and commercial expertise
in all stages of this vertical integration, from the initial search for oil (exploration) through its harvesting
(production), transportation, refining and finally trading and marketing established the core competencies on which
the company was founded. Similar competencies were required for natural gas, which has become one of the most important
businesses in which Shell is involved, and which contributes a significant proportion of the company's profits. While
the vertically integrated business model provided significant economies of scale and barriers to entry, each business
now seeks to be a self-supporting unit without subsidies from other parts of the company. Traditionally, Shell was
a heavily decentralised business worldwide (especially in the downstream) with companies in over 100 countries, each
of which operated with a high degree of independence. The upstream tended to be far more centralised with much of
the technical and financial direction coming from the central offices in The Hague. Nevertheless, there were very
large "exploration and production" companies in a few major oil and gas production centres such as the United Kingdom
(Shell Expro, a joint venture with Exxon), Nigeria, Brunei, and Oman. Downstream operations, which now also include
the chemicals business, generate a third of Shell's profits worldwide and are known for a global network of more
than 40,000 petrol stations and its 47 oil refineries. The downstream business, which in some countries also included
oil refining, generally included a retail petrol station network, lubricants manufacture and marketing, industrial
fuel and lubricants sales and a host of other product/market sectors such as LPG and bitumen. The practice in Shell
was that these businesses were essentially local and that they were best managed by local "operating companies" –
often with middle and senior management reinforced by expatriates. In the 1990s, this paradigm began to change, and
the independence of operating companies around the world was gradually reduced. Today, virtually all of Shell's operations
in various businesses are much more directly managed from London and The Hague. The autonomy of "operating companies"
has been largely removed, as more "global businesses" have been created. In April 2010, Shell announced its intention
to divest from downstream business of all African countries except South Africa and Egypt to Vitol and "Helios".
In several countries, such as Tunisia, protests and strikes broke out. Shell denied rumours of a sellout. Shell
nevertheless continues upstream activities (extracting crude oil) in the oil-rich Niger Delta, as well as downstream and commercial
activities in South Africa. In June 2013, the company announced a strategic review of its operations in Nigeria,
hinting that assets could be divested. In August 2014, the company disclosed it was in the process of finalizing
the sale of its interests in four Nigerian oil fields. On 27 August 2007, Royal Dutch Shell and Reitan Group, the
owner of the 7-Eleven brand in Scandinavia, announced an agreement to re-brand some 269 service stations across Norway,
Sweden, Finland and Denmark, subject to obtaining regulatory approvals under the different competition laws in each
country. In April 2010 Shell announced that it was seeking a potential buyer for
all of its operations in Finland and was conducting similar market research concerning its Swedish operations. In October 2010
Shell's gas stations and the heavy vehicle fuel supply networks in Finland and Sweden, along with a refinery located
in Gothenburg, Sweden were sold to St1, a Finnish energy company, more precisely to its major shareholding parent
company Keele Oy. Shell-branded gas stations will be rebranded within a maximum of five years from the acquisition,
and the number of gas stations is likely to be reduced. Until then, the stations will operate under a Shell brand licence.
Through most of Shell's early history, the Shell Oil Company business in the United States was substantially independent
with its stock being traded on the NYSE and with little direct involvement from the group's central offices in the
running of the American business. However, in 1984, Royal Dutch Shell made a bid to purchase those shares of Shell
Oil Company it did not own (around 30%) and despite opposition from some minority shareholders, which led to a court
case, Shell completed the buyout for a sum of $5.7 billion. On 20 May 2011, Royal Dutch Shell made its final investment
decision for the world's first floating liquefied natural gas (FLNG) facility, following the discovery
of the remote offshore Prelude field—located off Australia's northwestern coast and estimated to contain about 3
trillion cubic feet of natural gas equivalent reserves—in 2007. FLNG technology is based on liquefied natural gas
(LNG) developments that were pioneered in the mid-20th century and facilitates the exploitation of untapped natural
gas reserves located in remote areas, often too small to extract any other way. Shell sold 9.5% of its 23.1% stake
in Woodside Petroleum in June 2014 and advised that it had reached an agreement for Woodside to buy back 9.5% of
its shares at a later stage. Shell became a major shareholder in Woodside after a 2001 takeover attempt was blocked
by then federal Treasurer Peter Costello and the corporation has been open about its intention to sell its stake
in Woodside as part of its target to shed assets. At a general body meeting, held on 1 August 2014, 72 percent of
shareholders voted to approve the buy-back, short of the 75 percent vote that was required for approval. A statement
from Shell read: "Royal Dutch Shell acknowledges the outcome of Woodside Petroleum Limited's shareholders' negative
vote on the selective buy-back proposal. Shell is reviewing its options in relation to its remaining 13.6 percent
holding." Following the purchase of an offshore lease in 2005, Shell initiated its US$4.5 billion Arctic drilling
program in 2006, after the corporation purchased the "Kulluk" oil rig and leased the Noble Discoverer drillship.
At inception, the project was led by Pete Slaiby, a Shell executive who had previously worked in the North Sea. However,
after the purchase of a second offshore lease in 2008, Shell only commenced drilling work in 2012, due to the refurbishment
of rigs, permit delays from the relevant authorities and lawsuits. The plans to drill in the Arctic led to protests
from environmental groups, particularly Greenpeace; furthermore, analysts in the energy field, as well as related
industries, also expressed skepticism due to perceptions that drilling in the region is "too dangerous because of
harsh conditions and remote locations". Further problems hampered the Arctic project after the commencement of drilling
in 2012, as Shell dealt with a series of issues that involved air permits, Coast Guard certification of a marine
vessel and severe damage to essential oil-spill equipment. Additionally, difficult weather conditions resulted in
the delay of drilling during mid-2012 and the already dire situation was exacerbated by the "Kulluk" incident at
the end of the year. Royal Dutch Shell had invested nearly US$5 billion by this stage of the project. As the Kulluk
oil rig was being towed to the American state of Washington to be serviced in preparation for the 2013 drilling season,
a winter storm on 27 December 2012 caused the towing crews, as well as the rescue service, to lose control of the
situation. As of 1 January 2013, the Kulluk was grounded off the coast of Sitkalidak Island, near the eastern end of
Kodiak Island. Following the accident, a Fortune magazine reporter contacted Larry McKinney, the executive director at the
Harte Research Institute for Gulf of Mexico Studies at Texas A&M, and he explained that "A two-month delay in the
Arctic is not a two-month delay ... A two-month delay could wipe out the entire drilling season." It was unclear
if Shell would recommence drilling in mid-2013, following the "Kulluk" incident and, in February 2013, the corporation
stated that it would "pause" its closely watched drilling project off the Alaskan coast in 2013 and would instead
prepare for future exploration. In January 2014, the corporation announced the extension of the suspension of its
drilling program in the Arctic, with chief executive van Beurden explaining that the project is "under review" due
to both market and internal issues. In the 1990s, protesters criticised the company's environmental record, particularly
the possible pollution caused by the proposed disposal of the Brent Spar platform into the North Sea. Despite support
from the UK government, Shell reversed the decision under public pressure but maintained that sinking the platform
would have been environmentally better. Shell subsequently published an unequivocal commitment to sustainable development,
supported by executive speeches reinforcing this commitment. In the beginning of 1996, several human rights groups
brought cases to hold Shell accountable for alleged human rights violations in Nigeria, including summary execution,
crimes against humanity, torture, inhumane treatment and arbitrary arrest and detention. In particular, Shell stood
accused of collaborating in the execution of Ken Saro-Wiwa and eight other leaders of the Ogoni tribe of southern
Nigeria, who were hanged in 1995 by Nigeria's then military rulers. The lawsuits were brought against Royal Dutch
Shell and Brian Anderson, the head of its Nigerian operation. In 2009, Shell agreed to pay $15.5m in a legal settlement.
Shell has not accepted any liability over the allegations against it. In 2010, a leaked cable revealed that Shell
claimed to have inserted staff into all the main ministries of the Nigerian government and to know "everything that was
being done in those ministries", according to Shell's top executive in Nigeria. The same executive also boasted that
the Nigerian government had forgotten about the extent of Shell's infiltration. Documents released in 2009 (but not
used in the court case) reveal that Shell regularly made payments to the Nigerian military in order to prevent protests.
On 16 March 2012, 52 Greenpeace activists from five different countries boarded Fennica and Nordica, multipurpose
icebreakers chartered to support Shell's drilling rigs near Alaska. Around the same time period, a reporter for Fortune
magazine spoke with Edward Itta, an Inupiat Eskimo leader and the former mayor of the North Slope Borough, who expressed
that he was conflicted about Shell's plans in the Arctic, as he was very concerned that an oil spill could destroy
the Inupiat Eskimo's hunting-and-fishing culture, but his borough also received major tax revenue from oil and gas
production; additionally, further revenue from energy activity was considered crucial to the future of the living
standard in Itta's community. In response, Shell filed lawsuits to seek injunctions from possible protests, and Benjamin
Jealous of the NAACP and Radford argued that the legal action was "trampling Americans' rights." According to Greenpeace,
Shell lodged a request with Google to ban video footage of a Greenpeace protest action that occurred at the Shell-sponsored
Formula One (F1) Belgian Grand Prix on 25 August 2013, in which "SaveTheArctic.org" banners appear at the winners'
podium ceremony. In the video, the banners rise up automatically—activists controlled their appearance with the use
of four radio car antennas—revealing the website URL, alongside an image that consists of half of a polar bear's
head and half of the Shell logo.
Mammals include the largest animals on the planet, the rorquals and other large whales, as well as some of the most intelligent,
such as elephants, primates, including humans, and cetaceans. The basic body type is a four-legged land-borne animal,
but some mammals are adapted for life at sea, in the air, in trees, or on two legs. The largest group of mammals,
the placentals, have a placenta, which enables feeding the fetus during gestation. Mammals range in size from the
30–40 mm (1.2–1.6 in) bumblebee bat to the 33-meter (108 ft) blue whale. The word "mammal" is modern, from the scientific
name Mammalia coined by Carl Linnaeus in 1758, derived from the Latin mamma ("teat, pap"). All female mammals nurse
their young with milk, which is secreted from special glands, the mammary glands. According to Mammal Species of
the World, 5,416 species were known in 2006. These were grouped in 1,229 genera, 153 families and 29 orders. In 2008
the IUCN completed a five-year, 1,700-scientist Global Mammal Assessment for its IUCN Red List, which counted 5,488
accepted species. Except for the five species of monotremes (egg-laying mammals), all modern mammals give birth to
live young. Most mammals, including the six most species-rich orders, belong to the placental group. The three largest
orders by number of species are Rodentia: mice, rats, porcupines, beavers, capybaras, and other gnawing mammals; then
Chiroptera: bats; and then Soricomorpha: shrews, moles and solenodons. The next three orders, depending on the biological
classification scheme used, are the Primates including the humans; the Cetartiodactyla including the whales and the
even-toed hoofed mammals; and the Carnivora, that is, cats, dogs, weasels, bears, seals, and their relatives. The
early synapsid mammalian ancestors were sphenacodont pelycosaurs, a group that produced the non-mammalian Dimetrodon.
At the end of the Carboniferous period, this group diverged from the sauropsid line that led to today's reptiles
and birds. The line following the stem group Sphenacodontia split off several diverse groups of non-mammalian synapsids—sometimes
referred to as mammal-like reptiles—before giving rise to the proto-mammals (Therapsida) in the early Mesozoic era.
The modern mammalian orders arose in the Paleogene and Neogene periods of the Cenozoic era, after the extinction
of the non-avian dinosaurs 66 million years ago. In an influential 1988 paper, Timothy Rowe defined Mammalia phylogenetically
as the crown group mammals, the clade consisting of the most recent common ancestor of living monotremes (echidnas
and platypuses) and therian mammals (marsupials and placentals) and all descendants of that ancestor. Since this
ancestor lived in the Jurassic period, Rowe's definition excludes all animals from the earlier Triassic, despite
the fact that Triassic fossils in the Haramiyida have been referred to the Mammalia since the mid-19th century. If
Mammalia is considered as the crown group, its origin can be roughly dated as the first known appearance of animals
more closely related to some extant mammals than to others. Ambondro is more closely related to monotremes than to
therian mammals while Amphilestes and Amphitherium are more closely related to the therians; as fossils of all three
genera are dated about 167 million years ago in the Middle Jurassic, this is a reasonable estimate for the appearance
of the crown group. The earliest known synapsid satisfying Kemp's definitions is Tikitherium, dated 225 Ma, so the
appearance of mammals in this broader sense can be given this Late Triassic date. In any case, the temporal range
of the group extends to the present day. George Gaylord Simpson's "Principles of Classification and a Classification
of Mammals" (AMNH Bulletin v. 85, 1945) was the original source for the taxonomy listed here. Simpson laid out a
systematics of mammal origins and relationships that was universally taught until the end of the 20th century. Since
Simpson's classification, the paleontological record has been recalibrated, and the intervening years have seen much
debate and progress concerning the theoretical underpinnings of systematization itself, partly through the new concept
of cladistics. Though field work gradually made Simpson's classification outdated, it remained the closest thing
to an official classification of mammals. In 1997, the mammals were comprehensively revised by Malcolm C. McKenna
and Susan K. Bell, which has resulted in the McKenna/Bell classification. Their 1997 book, Classification of Mammals
above the Species Level, is the most comprehensive work to date on the systematics, relationships, and occurrences
of all mammal taxa, living and extinct, down through the rank of genus, though recent molecular genetic data challenge
several of the higher level groupings. The authors worked together as paleontologists at the American Museum of Natural
History, New York. McKenna inherited the project from Simpson and, with Bell, constructed a completely updated hierarchical
system, covering living and extinct taxa that reflects the historical genealogy of Mammalia. Molecular studies based
on DNA analysis have suggested new relationships among mammal families over the last few years. Most of these findings
have been independently validated by retrotransposon presence/absence data. Classification systems based on molecular
studies reveal three major groups or lineages of placental mammals: Afrotheria, Xenarthra, and Boreoeutheria, which
diverged from early common ancestors in the Cretaceous. The relationships between these three lineages are contentious,
and all three possible hypotheses have been proposed as to which group is basal with respect
to the other placentals. These hypotheses are Atlantogenata (basal Boreoeutheria), Epitheria (basal Xenarthra), and Exafroplacentalia
(basal Afrotheria). Boreoeutheria in turn contains two major lineages: Euarchontoglires and Laurasiatheria. The first
amniotes apparently arose in the Late Carboniferous. They descended from earlier reptiliomorph amphibious tetrapods,
which lived on land that was already inhabited by insects and other invertebrates as well as by ferns, mosses and
other plants. Within a few million years, two important amniote lineages became distinct: the synapsids, which would
later include the common ancestor of the mammals; and the sauropsids, which would eventually come to include turtles,
lizards, snakes, crocodilians, dinosaurs and birds. Synapsids have a single hole (temporal fenestra) low on each
side of the skull. Therapsids descended from pelycosaurs in the Middle Permian, about 265 million years ago, and
became the dominant land vertebrates. They differ from basal eupelycosaurs in several features of the skull and jaws,
including larger temporal fenestrae and incisors that are equal in size. The therapsid lineage leading to mammals
went through a series of stages, beginning with animals that were very like their pelycosaur ancestors and ending
with probainognathian cynodonts, some of which could easily be mistaken for mammals. The Permian–Triassic extinction
event, which was a prolonged event due to the accumulation of several extinction
pulses, ended the dominance of the carnivores among the therapsids. In the early Triassic, all the medium to large
land carnivore niches were taken over by archosaurs which, over an extended period of time (35 million years), came
to include the crocodylomorphs, the pterosaurs, and the dinosaurs. By the Jurassic, the dinosaurs had come to dominate
the large terrestrial herbivore niches as well. The oldest known fossil among the Eutheria ("true beasts") is the
small shrewlike Juramaia sinensis, or "Jurassic mother from China", dated to 160 million years ago in the Late Jurassic.
A later eutherian, Eomaia, dated to 125 million years ago in the Early Cretaceous, possessed some features in common
with the marsupials but not with the placentals, evidence that these features were present in the last common ancestor
of the two groups but were later lost in the placental lineage. Recent molecular phylogenetic studies
suggest that most placental orders diverged about 100 to 85 million years ago and that modern families appeared in
the period from the late Eocene through the Miocene. But paleontologists object that no placental fossils have been
found from before the end of the Cretaceous. The earliest undisputed fossils of placentals come from the early Paleocene,
after the extinction of the dinosaurs. In particular, scientists have recently identified an early Paleocene animal
named Protungulatum donnae as one of the first placental mammals. The earliest known ancestor of primates is Archicebus
achilles from around 55 million years ago. This tiny primate weighed 20–30 grams (0.7–1.1 ounce) and could fit within
a human palm. The earliest clear evidence of hair or fur is in fossils of Castorocauda, from 164 million years ago
in the Middle Jurassic. In the 1950s, it was suggested that the foramina (passages) in the maxillae and premaxillae
(bones in the front of the upper jaw) of cynodonts were channels which supplied blood vessels and nerves to vibrissae
(whiskers) and so were evidence of hair or fur; it was soon pointed out, however, that foramina do not necessarily
show that an animal had vibrissae, as the modern lizard Tupinambis has foramina that are almost identical to those
found in the nonmammalian cynodont Thrinaxodon. Popular sources, nevertheless, continue to attribute whiskers to
Thrinaxodon. When endothermy first appeared in the evolution of mammals is uncertain. Modern monotremes have lower
body temperatures and more variable metabolic rates than marsupials and placentals, but there is evidence that some
of their ancestors, perhaps including ancestors of the therians, may have had body temperatures like those of modern
therians. Some of the evidence found so far suggests that Triassic cynodonts had fairly high metabolic rates, but
it is not conclusive. For small animals, an insulative covering like fur is necessary for the maintenance of a high
and stable body temperature. Breathing is largely driven by the muscular diaphragm, which divides the thorax from
the abdominal cavity, forming a dome with its convexity towards the thorax. Contraction of the diaphragm flattens
the dome, increasing the volume of the cavity in which the lung is enclosed. Air enters through the oral and nasal
cavities; it flows through the larynx, trachea and bronchi and expands the alveoli. Relaxation of the diaphragm has
the opposite effect, passively recoiling during normal breathing. During exercise, the abdominal wall contracts,
increasing visceral pressure on the diaphragm, thus forcing the air out more quickly and forcefully. The rib cage
itself also is able to expand and contract the thoracic cavity to some degree, through the action of other respiratory
and accessory respiratory muscles. As a result, air is sucked into or expelled out of the lungs, always moving down
its pressure gradient. This type of lung is known as a bellows lung as it resembles a blacksmith's bellows. Mammals
take oxygen into their lungs, and discard carbon dioxide. The epidermis is typically 10 to 30 cells thick; its main
function is to provide a waterproof layer. Its outermost cells are constantly lost; its bottommost cells are constantly
dividing and pushing upward. The middle layer, the dermis, is 15 to 40 times thicker than the epidermis. The dermis
is made up of many components, such as bony structures and blood vessels. The hypodermis is made up of adipose tissue.
Its job is to store lipids, and to provide cushioning and insulation. The thickness of this layer varies widely from
species to species. Mammalian hair, also known as pelage, can vary in color between populations, organisms within
a population, and even on the individual organism. Light-dark color variation is common in mammalian taxa. Sometimes
this color variation is determined by age; in other cases it is determined by other factors.
Selective pressures, such as ecological interactions with other populations or environmental conditions, often lead
to the variation in mammalian coloration. These selective pressures favor certain colors in order to increase survival.
Camouflage is thought to be a major selection pressure shaping coloration in mammals, although there is also evidence
that sexual selection, communication, and physiological processes may influence the evolution of coloration as well.
Camouflage is the most predominant mechanism for color variation, as it aids in the concealment of the organisms
from predators or from their prey. Coat color can also be for intraspecies communication such as warning members
of their species about predators, indicating health for reproductive purposes, communicating between mother and young,
and intimidating predators. Studies have shown that in some cases, differences in female and male coat color could
convey information about nutrition and hormone levels, which are important in the mate selection process. One final mechanism
for coat color variation is physiological response purposes, such as temperature regulation in tropical or arctic
environments. Although much has been observed about color variation, much of the genetics linking coat color to
specific genes is still unknown. The genetic sites where pigmentation genes are found are known to affect phenotype by 1)
altering the spatial distribution of pigmentation of the hairs, and 2) altering the density and distribution of the
hairs. Quantitative trait mapping is being used to better understand the distribution of loci responsible for pigmentation
variation. However, although the genetic sites are known, there is still much to learn about how these genes are
expressed. Most mammals are viviparous, giving birth to live young. However, the five species of monotreme, the platypuses
and the echidnas, lay eggs. The monotremes have a sex determination system different from that of most other mammals.
In particular, the sex chromosomes of a platypus are more like those of a chicken than those of a therian mammal.
Like marsupials and most other mammals, monotreme young are larval and fetus-like, as the presence of epipubic bones
prevents the expansion of the torso, forcing them to produce small young. Viviparous mammals are in the subclass
Theria; those living today are in the marsupial and placental infraclasses. A marsupial has a short gestation period,
typically shorter than its estrous cycle, and gives birth to an undeveloped newborn that then undergoes further development;
in many species, this takes place within a pouch-like sac, the marsupium, located in the front of the mother's abdomen.
This is the plesiomorphic condition among viviparous mammals; the presence of epipubic bones in all non-placental
mammals prevents the expansion of the torso needed for full pregnancy. Even non-placental eutherians probably reproduced
this way. In intelligent mammals, such as primates, the cerebrum is larger relative to the rest of the brain. Intelligence
itself is not easy to define, but indications of intelligence include the ability to learn, matched with behavioral
flexibility. Rats, for example, are considered to be highly intelligent, as they can learn and perform new tasks,
an ability that may be important when they first colonize a fresh habitat. In some mammals, food gathering appears
to be related to intelligence: a deer feeding on plants has a smaller brain than that of a cat, which must think to outwit
its prey. To maintain a high constant body temperature is energy expensive – mammals therefore need a nutritious
and plentiful diet. While the earliest mammals were probably predators, different species have since adapted to meet
their dietary requirements in a variety of ways. Some eat other animals – this is a carnivorous diet (and includes
insectivorous diets). Other mammals, called herbivores, eat plants. A herbivorous diet includes subtypes such as
fruit-eating and grass-eating. An omnivore eats both prey and plants. Carnivorous mammals have a simple digestive
tract, because the proteins, lipids, and minerals found in meat require little in the way of specialized digestion.
Plants, on the other hand, contain complex carbohydrates, such as cellulose. The digestive tract of an herbivore
is therefore host to bacteria that ferment these substances, and make them available for digestion. The bacteria
are either housed in the multichambered stomach or in a large cecum. The size of an animal is also a factor in determining
diet type. Since small mammals have a high ratio of heat-losing surface area to heat-generating volume, they tend
to have high energy requirements and a high metabolic rate. Mammals that weigh less than about 18 oz (500 g) are
mostly insectivorous because they cannot tolerate the slow, complex digestive process of a herbivore. Larger animals,
on the other hand, generate more heat and less of this heat is lost. They can therefore tolerate either a slower
collection process (those that prey on larger vertebrates) or a slower digestive process (herbivores). Furthermore,
mammals that weigh more than 18 oz (500 g) usually cannot collect enough insects during their waking hours to sustain
themselves. The only large insectivorous mammals are those that feed on huge colonies of insects (ants or termites).
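The surface-area-to-volume reasoning above can be made concrete with a toy calculation. The spherical-body model and the specific radii below are illustrative assumptions of ours, not measurements from the text:

```python
import math

def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere of the given radius (units: 1/m)."""
    surface = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return surface / volume  # algebraically this simplifies to 3 / radius_m

# A shrew-sized body (~2 cm radius) versus a large herbivore (~50 cm radius):
small = surface_to_volume_ratio(0.02)  # heat-losing surface dominates
large = surface_to_volume_ratio(0.50)  # heat-generating volume dominates
print(small / large)  # the small body has ~25x more surface per unit volume
```

Because heat loss scales with surface while heat production scales with volume, the small body must sustain a far higher metabolic rate per gram, which is why mammals under about 500 g tend toward energy-dense insectivorous diets.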
The deliberate or accidental hybridising of two or more species of closely related animals through captive breeding
is a human activity which has been in existence for millennia and has grown in recent times for economic purposes.
The number of successful interspecific mammalian hybrids is relatively small, although it has come to be known that
there is a significant number of naturally occurring hybrids between forms or regional varieties of a single species.
These may form zones of gradation known as clines. Indeed, the distinction between some hitherto distinct
species can become clouded once it can be shown that they may not only breed but produce fertile offspring. Some
hybrid animals exhibit greater strength and resilience than either parent. This is known as hybrid vigor. The mule
(donkey sire, horse dam), used widely as a hardy draught animal throughout ancient and modern history, is testament
to this. Other well-known examples are the lion/tiger hybrid, the liger, which is by far the largest
big cat and sometimes used in circuses; and cattle hybrids such as between European and Indian domestic cattle or
between domestic cattle and American bison, which are used in the meat industry and marketed as Beefalo. There is
some speculation that the donkey itself may be the result of an ancient hybridisation between two wild ass species
or sub-species. Hybrid animals are normally infertile partly because their parents usually have slightly different
numbers of chromosomes, resulting in unpaired chromosomes in their cells, which prevents division of sex cells and
the gonads from operating correctly, particularly in males. There are exceptions to this rule, especially if the
speciation process was relatively recent or incomplete, as is the case with many cattle and dog species. Normally,
behavioral traits, natural hostility, natural ranges and breeding-cycle differences maintain the separateness of closely
related species and prevent natural hybridisation. However, the widespread disturbances to natural animal behaviours
and range caused by human activity, cities, dumping grounds with food, agriculture, fencing, roads and so on do force
animals together which would not normally breed. Clear examples exist between the various sub-species of grey wolf,
coyote and domestic dog in North America. As many birds and mammals imprint on their mother and immediate family
from infancy, animal hybridizers commonly foster an animal intended for a hybridization program with the species
it is planned to mate with.
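The chromosome mismatch behind hybrid infertility can be illustrated with the classic mule case. The horse (2n = 64) and donkey (2n = 62) counts are well-established figures; the sketch itself is only an illustration:

```python
# Each parent contributes half of its diploid chromosome count to the hybrid.
HORSE_DIPLOID = 64   # horse: 2n = 64
DONKEY_DIPLOID = 62  # donkey: 2n = 62

# The mule receives 32 chromosomes from the horse and 31 from the donkey.
mule_count = HORSE_DIPLOID // 2 + DONKEY_DIPLOID // 2

print(mule_count)           # 63: an odd total
print(mule_count % 2 == 0)  # False: one chromosome is always left unpaired,
                            # disrupting the pairing step of meiosis
```

The odd total means homologous chromosomes cannot all pair during the division of sex cells, which is why mules are almost always sterile even though they are robust, viable animals.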
Soon after the defeat of the Spanish Armada in 1588, London merchants presented a petition to Queen Elizabeth I for permission
to sail to the Indian Ocean. The permission was granted, and despite the defeat of the English Armada in 1589, on
10 April 1591 three ships sailed from Torbay around the Cape of Good Hope to the Arabian Sea on one of the earliest
English overseas Indian expeditions. One of them, the Edward Bonaventure, then sailed around Cape Comorin and on to the
Malay Peninsula and subsequently returned to England in 1594. A renewed petition succeeded, and on 31 December 1600,
the Queen granted a Royal Charter to "George, Earl of Cumberland, and 215 Knights, Aldermen, and Burgesses" under
the name, Governor and Company of Merchants of London trading with the East Indies. For a period of fifteen years
the charter awarded the newly formed company a monopoly on trade with all countries east of the Cape of Good Hope
and west of the Straits of Magellan. Sir James Lancaster commanded the first East India Company voyage in 1601 and
returned in 1603; in March 1604 Sir Henry Middleton commanded the second voyage. General William Keeling, a captain
during the second voyage, led the third voyage from 1607 to 1610. In the next two years, the company established
its first factory in south India in the town of Machilipatnam on the Coromandel Coast of the Bay of Bengal. The high
profits reported by the company after landing in India initially prompted King James I to grant subsidiary licences
to other trading companies in England. But in 1609 he renewed the charter given to the company for an indefinite
period, including a clause that specified that the charter would cease to be in force if the trade turned unprofitable
for three consecutive years. The company, which benefited from the imperial patronage, soon expanded its commercial
trading operations, eclipsing the Portuguese Estado da Índia, which had established bases in Goa, Chittagong, and
Bombay; Portugal later ceded Bombay to England as part of the dowry of Catherine de Braganza. The East India Company
also launched a joint attack with the Dutch United East India Company on Portuguese and Spanish ships off the coast
of China, which helped secure their ports in China. The company established trading posts in Surat (1619), Madras
(1639), Bombay (1668), and Calcutta (1690). By 1647, the company had 23 factories, each under the command of a factor
or master merchant and governor if so chosen, and 90 employees in India. The major factories became the walled forts
of Fort William in Bengal, Fort St George in Madras, and Bombay Castle. In 1634, the Mughal emperor extended his
hospitality to the English traders to the region of Bengal, and in 1717 completely waived customs duties for the
trade. The company's mainstay businesses were by then cotton, silk, indigo dye, saltpetre, and tea. The Dutch were
aggressive competitors and had meanwhile expanded their monopoly of the spice trade in the Malaccan straits by ousting
the Portuguese in 1640–41. With reduced Portuguese and Spanish influence in the region, the EIC and Dutch East India
Company (VOC) entered a period of intense competition, resulting in the Anglo-Dutch Wars of the 17th and 18th centuries.
In September 1695, Captain Henry Every, an English pirate on board the Fancy, reached the Straits of Bab-el-Mandeb,
where he teamed up with five other pirate captains to make an attack on the Indian fleet making the annual voyage
to Mocha. The Mughal convoy included the treasure-laden Ganj-i-Sawai, reported to be the greatest in the Mughal fleet
and the largest ship operational in the Indian Ocean, and its escort, the Fateh Muhammed. They were spotted passing
the straits en route to Surat. The pirates gave chase and caught up with Fateh Muhammed some days later, and meeting
little resistance, took some £50,000 to £60,000 worth of treasure. Every continued in pursuit and managed to overhaul
Ganj-i-Sawai, which resisted strongly before eventually striking. Ganj-i-Sawai carried enormous wealth and, according
to contemporary East India Company sources, was carrying a relative of the Grand Mughal, though there is no evidence
to suggest that it was his daughter and her retinue. The loot from the Ganj-i-Sawai had a total value between £325,000
and £600,000, including 500,000 gold and silver pieces, and has become known as the richest ship ever taken by pirates.
When the news arrived in England it caused an outcry. In response, a combined bounty of £1,000 was offered for Every's
capture by the Privy Council and East India Company, leading to the first worldwide manhunt in recorded history.
The plunder of Aurangzeb's treasure ship had serious consequences for the English East India Company. The furious
Mughal Emperor Aurangzeb ordered Sidi Yaqub and Nawab Daud Khan to attack and close four of the company's factories
in India and imprison their officers, who were almost lynched by a mob of angry Mughals, blaming them for their countryman's
depredations, and threatened to put an end to all English trading in India. To appease Emperor Aurangzeb and particularly
his Grand Vizier Asad Khan, Parliament exempted Every from all of the Acts of Grace (pardons) and amnesties it would
subsequently issue to other pirates. Meanwhile, a deregulating act passed by Parliament in 1694 allowed any English firm to trade with India, unless specifically prohibited
by act of parliament, thereby annulling the charter that had been in force for almost 100 years. By an act that was
passed in 1698, a new "parallel" East India Company (officially titled the English Company Trading to the East Indies)
was floated under a state-backed indemnity of £2 million. The powerful stockholders of the old company quickly subscribed
a sum of £315,000 in the new concern, and dominated the new body. The two companies wrestled with each other for
some time, both in England and in India, for a dominant share of the trade. In the following decades there was a
constant battle between the company lobby and the Parliament. The company sought a permanent establishment, while
the Parliament would not willingly allow it greater autonomy and so relinquish the opportunity to exploit the company's
profits. In 1712, another act renewed the status of the company, though the debts were repaid. By 1720, 15% of British
imports were from India, almost all passing through the company, which reasserted the influence of the company lobby.
The licence was prolonged until 1766 by yet another act in 1730. At this time, Britain and France became bitter rivals.
Frequent skirmishes between them took place for control of colonial possessions. In 1742, fearing the monetary consequences
of a war, the British government agreed to extend the deadline for the licensed exclusive trade by the company in
India until 1783, in return for a further loan of £1 million. Between 1756 and 1763, the Seven Years' War diverted
the state's attention towards consolidation and defence of its territorial possessions in Europe and its colonies
in North America. With the advent of the Industrial Revolution, Britain surged ahead of its European rivals. Demand
for Indian commodities was boosted by the need to sustain the troops and the economy during the war, and by the increased
availability of raw materials and efficient methods of production. As home to the revolution, Britain experienced
higher standards of living. Its spiralling cycle of prosperity, demand and production had a profound influence on
overseas trade. The company became the single largest player in the British global market, a standing remarked upon by William Henry Pyne in his book The Microcosm of London (1808). Outstanding debts were also agreed and the company was permitted to export 250 tons of saltpetre. Again in 1673, Banks successfully negotiated another contract, for 700 tons of saltpetre at £37,000, between the king and the company. So urgent was the need to supply the armed forces in the United Kingdom, America and elsewhere that the authorities sometimes turned a blind eye to the untaxed sales. One governor of the company was even reported as saying in 1864 that he would rather have the saltpetre made than the tax on salt. By
the Treaty of Paris (1763), France regained the five establishments captured by the British during the war (Pondichéry,
Mahe, Karikal, Yanam and Chandernagar) but was prevented from erecting fortifications and keeping troops in Bengal
(art. XI). Elsewhere in India, the French were to remain a military threat, particularly during the War of American Independence, up to the capture of Pondichéry in 1793 at the outset of the French Revolutionary Wars, after which they were left without any military presence. Although these small outposts remained French possessions for the next two hundred years,
French ambitions on Indian territories were effectively laid to rest, thus eliminating a major source of economic
competition for the company. In its first century and a half, the EIC used a few hundred soldiers as guards. The great expansion came after 1750, when it had 3,000 regular troops. By 1763, it had 26,000; by 1778, it had 67,000. It recruited
largely Indian troops, and trained them along European lines. The company, fresh from a colossal victory, and with
the backing of its own private well-disciplined and experienced army, was able to assert its interests in the Carnatic
region from its base at Madras and in Bengal from Calcutta, without facing any further obstacles from other colonial
powers. With the gradual weakening of the Marathas in the aftermath of the three Anglo-Maratha wars, the British
also secured the Ganges-Jumna Doab, the Delhi-Agra region, parts of Bundelkhand, Broach, some districts of Gujarat,
the fort of Ahmadnagar, the province of Cuttack (which included Mughalbandi/the coastal part of Odisha, Garjat/the princely
states of Odisha, Balasore Port, parts of Midnapore district of West Bengal), Bombay (Mumbai) and the surrounding
areas, leading to a formal end of the Maratha empire and firm establishment of the British East India Company in
India. Within the Army, British officers, who initially trained at the company's own academy at the Addiscombe Military Seminary, always outranked Indians, no matter how long their service. The highest rank to which an Indian soldier
could aspire was Subadar-Major (or Rissaldar-Major in cavalry units), effectively a senior subaltern equivalent.
Promotion for both British and Indian soldiers was strictly by seniority, so Indian soldiers rarely reached the commissioned
ranks of Jamadar or Subadar before they were middle aged at best. They received no training in administration or
leadership to make them independent of their British officers. In 1838, with the amount of smuggled opium entering
China approaching 1,400 tons a year, the Chinese imposed a death penalty for opium smuggling and sent a Special Imperial
Commissioner, Lin Zexu, to curb smuggling. This resulted in the First Opium War (1839–42). After the war Hong Kong
island was ceded to Britain under the Treaty of Nanking and the Chinese market opened to the opium traders of Britain
and other nations. The Jardines and Apcar and Company dominated the trade, although P&O also tried to take a share.
A Second Opium War fought by Britain and France against China lasted from 1856 until 1860 and led to the Treaty of
Tientsin, which legalised the importation of opium. Legalisation stimulated domestic Chinese opium production and
increased the importation of opium from Turkey and Persia. This increased competition for the Chinese market led
to India reducing its opium output and diversifying its exports. Despite stiff resistance from the East India lobby
in parliament and from the Company's shareholders, the Act passed. It introduced substantial governmental control
and allowed British India to be formally under the control of the Crown, but leased back to the Company at £40,000
for two years. Under the Act's most important provision, a governing Council composed of five members was created
in Calcutta. The three members nominated by Parliament and representing the Government's interest could, and invariably
would, outvote the two Company members. The Council was headed by Warren Hastings, the incumbent Governor, who became
the first Governor-General of Bengal, with an ill-defined authority over the Bombay and Madras Presidencies. His
nomination, made by the Court of Directors, would in future be subject to the approval of a Council of Four appointed
by the Crown. Initially, the Council consisted of Lt. General Sir John Clavering, The Honourable Sir George Monson,
Sir Richard Barwell, and Sir Philip Francis. Hastings was entrusted with the power of peace and war. British judges
and magistrates would also be sent to India to administer the legal system. The Governor General and the council
would have complete legislative powers. The company was allowed to maintain its virtual monopoly over trade in exchange
for the biennial sum and was obligated to export a minimum quantity of goods yearly to Britain. The costs of administration
were to be met by the company. The Company initially welcomed these provisions, but the annual burden of the payment
contributed to the steady decline of its finances. Pitt's Act was deemed a failure because it quickly became apparent
that the boundaries between government control and the company's powers were nebulous and highly subjective. The
government felt obliged to respond to humanitarian calls for better treatment of local peoples in British-occupied
territories. Edmund Burke, a former East India Company shareholder and diplomat, was moved to address the situation
and introduced a new Regulating Bill in 1783. The bill was defeated amid lobbying by company loyalists and accusations
of nepotism in the bill's recommendations for the appointment of councillors. Pitt's India Act of 1784 clearly demarcated the borders between the Crown and the Company. After this point, the Company functioned as a regularised subsidiary of the Crown,
with greater accountability for its actions and reached a stable stage of expansion and consolidation. Having temporarily
achieved a state of truce with the Crown, the Company continued to expand its influence to nearby territories through
threats and coercive actions. By the middle of the 19th century, the Company's rule extended across most of India,
Burma, Malaya, Singapore, and British Hong Kong, and a fifth of the world's population was under its trading influence.
In addition, Penang, one of the states in Malaya, became the fourth most important settlement, a presidency, of the
Company's Indian territories. The aggressive policies of Lord Wellesley and the Marquis of Hastings led to the Company
gaining control of all India (except for the Punjab and Sindh), and some part of the then kingdom of Nepal under
the Sugauli Treaty. The Indian Princes had become vassals of the Company. But the expense of wars leading to the
total control of India strained the Company's finances. The Company was forced to petition Parliament for assistance.
This was the background to the Charter Act of 1813, which, among other things, renewed the Company's charter while ending its monopoly on trade with India, except for the trade in tea and the trade with China. The Company's headquarters in London,
from which much of India was governed, was East India House in Leadenhall Street. After occupying premises in Philpot
Lane from 1600 to 1621; in Crosby House, Bishopsgate, from 1621 to 1638; and in Leadenhall Street from 1638 to 1648,
the Company moved into Craven House, an Elizabethan mansion in Leadenhall Street. The building had become known as
East India House by 1661. It was completely rebuilt and enlarged in 1726–9; and further significantly remodelled
and expanded in 1796–1800. It was finally vacated in 1860 and demolished in 1861–62. The site is now occupied by
the Lloyd's building. In 1803, an Act of Parliament, promoted by the East India Company, established the East India
Dock Company, with the aim of establishing a new set of docks (the East India Docks) primarily for the use of ships
trading with India. The existing Brunswick Dock, part of the Blackwall Yard site, became the Export Dock; while a
new Import Dock was built to the north. In 1838 the East India Dock Company merged with the West India Dock Company.
The docks were taken over by the Port of London Authority in 1909, and closed in 1967. From 1600, the
canton consisted of a St George's Cross representing the Kingdom of England. With the Acts of Union 1707, the canton
was updated to be the new Union Flag—consisting of an English St George's Cross combined with a Scottish St Andrew's
cross—representing the Kingdom of Great Britain. After the Acts of Union 1800 that joined Ireland with Great Britain
to form the United Kingdom, the canton of the East India Company flag was altered accordingly to include a Saint
Patrick's Saltire replicating the updated Union Flag representing the United Kingdom of Great Britain and Ireland.
"Azure, three ships with three masts, rigged and under full sail, the sails, pennants and ensigns Argent, each charged
with a cross Gules; on a chief of the second a pale quarterly Azure and Gules, on the 1st and 4th a fleur-de-lis
or, on the 2nd and 3rd a leopard or, between two roses Gules seeded Or barbed Vert." The shield had as a crest: "A
sphere without a frame, bounded with the Zodiac in bend Or, between two pennants flottant Argent, each charged with
a cross Gules, over the sphere the words DEUS INDICAT" (Latin: God Indicates). The supporters were two sea lions
(lions with fishes' tails) and the motto was DEO DUCENTE NIL NOCET (Latin: Where God Leads, Nothing Hurts). The East
India Company's arms, granted in 1698, were: "Argent a cross Gules; in the dexter chief quarter an escutcheon of
the arms of France and England quarterly, the shield ornamentally and regally crowned Or." The crest was: "A lion
rampant guardant Or holding between the forepaws a regal crown proper." The supporters were: "Two lions rampant guardant
Or, each supporting a banner erect Argent, charged with a cross Gules." The motto was AUSPICIO REGIS ET SENATUS ANGLIÆ
(Latin: Under the auspices of the King and the Senate of England). During the period of the Napoleonic Wars, the East India
Company arranged for letters of marque for its vessels such as the Lord Nelson. This was not so that they could carry
cannon to fend off warships, privateers and pirates on their voyages to India and China (that they could do without
permission) but so that, should they have the opportunity to take a prize, they could do so without being guilty
of piracy. Similarly, the Earl of Mornington, an East India Company packet ship of only six guns, also sailed under
a letter of marque. At the Battle of Pulo Aura, which was probably the company's most notable naval victory, Nathaniel
Dance, Commodore of a convoy of Indiamen and sailing aboard the Warley, led several Indiamen in a skirmish with a
French squadron, driving them off. Some six years earlier, on 28 January 1797, five Indiamen, the Woodford, under Captain Charles Lennox; the Taunton-Castle, Captain Edward Studd; the Canton, Captain Abel Vyvyan; the Boddam, Captain George Palmer; and the Ocean, Captain John Christian Lochner, had encountered Admiral de Sercey and his squadron of frigates.
On this occasion the Indiamen also succeeded in bluffing their way to safety, and without any shots even being fired.
Lastly, on 15 June 1795, the General Goddard played a large role in the capture of seven Dutch East Indiamen off
St Helena. Unlike all other British Government records, the records from the East India Company (and its successor
the India Office) are not in The National Archives at Kew, London, but are held by the British Library in London
as part of the Asia, Pacific and Africa Collections. The catalogue is searchable online in the Access to Archives
catalogues. Many of the East India Company records are freely available online under an agreement that the Families
in British India Society has with the British Library. Published catalogues exist of East India Company ships' journals
and logs, 1600–1834; and of some of the Company's daughter institutions, including the East India Company College,
Haileybury, and Addiscombe Military Seminary.
Hokkien /hɒˈkiɛn/ (traditional Chinese: 福建話; simplified Chinese: 福建话; pinyin: Fújiànhuà; Pe̍h-ōe-jī: Hok-kiàn oē) or Quanzhang
(Quanzhou–Zhangzhou / Chinchew–Changchew; BP: Zuánziū–Ziāngziū) is a group of mutually intelligible Min Nan Chinese
dialects spoken throughout Southeast Asia, Taiwan, and by many other overseas Chinese. Hokkien originated from a
dialect in southern Fujian. It is closely related to the Teochew, though mutual comprehension is difficult, and is
somewhat more distantly related to Hainanese. Besides Hokkien, there are also other Min and Hakka dialects in Fujian
province, most of which are not mutually intelligible with Hokkien. The term Hokkien (福建; hɔk˥˥kɪɛn˨˩) is itself not used in Chinese to refer to the dialect, as it simply means Fujian province. In Chinese linguistics, these
dialects are known by their classification under the Quanzhang Division (Chinese: 泉漳片; pinyin: Quánzhāng piàn) of
Min Nan, which comes from the first characters of the two main Hokkien urban centers Quanzhou and Zhangzhou. The
variety is also known by other terms such as the more general Min Nan (traditional Chinese: 閩南語, 閩南話; simplified
Chinese: 闽南语, 闽南话; pinyin: Mǐnnányǔ, Mǐnnánhuà; Pe̍h-ōe-jī: Bân-lâm-gí, Bân-lâm-oē) or Southern Min, and Fulaohua
(traditional Chinese: 福佬話; simplified Chinese: 福佬话; pinyin: Fúlǎohuà; Pe̍h-ōe-jī: Hō-ló-oē). The term Hokkien (Chinese:
福建話; Pe̍h-ōe-jī: hok-kiàn oē; Tâi-lô: Hok-kiàn-uē), on the other hand, is commonly used in Southeast Asia to refer
to Min-nan dialects. There are many Hokkien speakers among overseas Chinese in Southeast Asia as well as in the United
States. Many ethnic Han Chinese emigrants to the region were Hoklo from southern Fujian, and brought the language
to what is now Burma (Myanmar), Indonesia (the former Dutch East Indies) and present day Malaysia and Singapore (formerly
Malaya and the British Straits Settlements). Many of the Hokkien dialects of this region are highly similar to Taiwanese
and Amoy. Hokkien is reportedly the native language of up to 98.5% of Chinese Filipinos in the Philippines, among whom it is known locally as Lan-nang or Lán-lâng-oē ("Our people's language"). Hokkien speakers form the largest group of Chinese in Singapore, Malaysia and Indonesia.[citation needed] During the Three Kingdoms period of ancient China, there was constant warfare in the Central Plain of China. Northerners began to enter the Fujian region,
causing the region to incorporate parts of northern Chinese dialects. However, the massive migration of northern
Han Chinese into the Fujian region mainly occurred after the Disaster of Yongjia. The Jìn court fled from the north to the south, causing large numbers of northern Han Chinese to move into the Fujian region. They brought Old Chinese, the language spoken in the Central Plain of China from the prehistoric era to the 3rd century, into Fujian. This then gradually evolved into the Quanzhou dialect. In 677 (during the reign of Emperor Gaozong), Chen Zheng (陳政), together with his son Chen
Yuanguang (陳元光), led a military expedition to pacify the rebellion in Fujian. They settled in Zhangzhou and brought
the Middle Chinese phonology of northern China during the 7th century into Zhangzhou. In 885 (during the reign of Emperor Xizong of Tang), the two brothers Wang Chao (王潮) and Wang Shenzhi (王審知) led a military expedition
to pacify the Huang Chao rebellion. They brought the Middle Chinese phonology commonly spoken in Northern China into
Zhangzhou. These two waves of migrations from the north generally brought the language of northern Middle Chinese
into the Fujian region. This then gradually evolved into the Zhangzhou dialect. Xiamen dialect, sometimes known as
Amoy, is the main dialect spoken in the Chinese city of Xiamen and its surrounding regions of Tong'an and Xiang'an,
both of which are now included in the Greater Xiamen area. This dialect developed in the late Ming dynasty when Xiamen
was increasingly taking over Quanzhou's position as the main port of trade in southeastern China. Quanzhou traders
began travelling southwards to Xiamen to carry on their businesses while Zhangzhou peasants began traveling northwards
to Xiamen in search of job opportunities. It was at this time that a need for a common language arose. The Quanzhou and Zhangzhou varieties are similar in many ways (as can be seen from their common origin in Luoyang, Henan), but due to differences in accent, communication could be a problem. Quanzhou businessmen considered their
speech to be the prestige accent and considered Zhangzhou's to be a village dialect. Over the centuries, dialect
leveling occurred and the two speeches mixed to produce the Amoy dialect. Hokkien has one of the most diverse phoneme
inventories among Chinese varieties, with more consonants than Standard Mandarin or Cantonese. Vowels are more or less similar to those of Standard Mandarin. Hokkien varieties retain many pronunciations that are no longer found in other
Chinese varieties. These include the retention of the /t/ initial, which is now /tʂ/ (Pinyin 'zh') in Mandarin (e.g.
'bamboo' 竹 is tik, but zhú in Mandarin), a feature that disappeared before the 6th century in other Chinese varieties. In
general, Hokkien dialects have 5 to 7 phonemic tones. According to the traditional Chinese system, however, there
are 7 to 9 "tones",[citation needed] more correctly termed tone classes since two of them are non-phonemic "entering
tones" (see the discussion on Chinese tone). Tone sandhi is extensive. There are minor variations between the Quanzhou
and Zhangzhou tone systems. Taiwanese tones follow the patterns of Amoy or Quanzhou, depending on the area of Taiwan.
Many dialects have an additional phonemic tone ("tone 9" according to the traditional reckoning), used only in special
or foreign loan words. The Amoy dialect (Xiamen) is a hybrid of the Quanzhou and Zhangzhou dialects. Taiwanese is
also a hybrid of these two dialects. Taiwanese in northern Taiwan tends to be based on the Quanzhou variety, whereas
the Taiwanese spoken in southern Taiwan tends to be based on Zhangzhou speech. There are minor variations in pronunciation
and vocabulary between Quanzhou and Zhangzhou dialects. The grammar is generally the same. Additionally, extensive
contact with the Japanese language has left a legacy of Japanese loanwords in Taiwanese Hokkien. On the other hand,
the variants spoken in Singapore and Malaysia have a substantial number of loanwords from Malay and to a lesser extent,
from English and other Chinese varieties, such as the closely related Teochew and some Cantonese. Hokkien dialects
are analytic; in a sentence, the arrangement of words is important to its meaning. A basic sentence follows the subject–verb–object
pattern (i.e. a subject is followed by a verb then by an object), though this order is often violated because Hokkien
dialects are topic-prominent. Unlike in synthetic languages, words seldom indicate tense, gender, or number by inflection.
Instead, these concepts are expressed through adverbs, aspect markers, and grammatical particles, or are deduced
from the context. Different particles are added to a sentence to further specify its status or intonation. The existence
of literary and colloquial readings (文白異讀), called tha̍k-im (讀音), is a prominent feature of some Hokkien dialects
and indeed in many Sinitic varieties in the south. The bulk of literary readings (文讀, bûn-tha̍k), based on pronunciations
of the vernacular during the Tang Dynasty, are mainly used in formal phrases and written language (e.g. philosophical
concepts, surnames, and some place names), while the colloquial (or vernacular) ones (白讀, pe̍h-tha̍k) are basically
used in spoken language and vulgar phrases. Literary readings are more similar to the pronunciations of the Tang
standard of Middle Chinese than their colloquial equivalents. The pronounced divergence between literary and colloquial
pronunciations found in Hokkien dialects is attributed to the presence of several strata in the Min lexicon. The
earliest, colloquial stratum is traced to the Han dynasty (206 BCE – 220 CE); the second colloquial one comes from the period of the Southern and Northern Dynasties (420–589 CE); the third stratum of pronunciations (typically
literary ones) comes from the Tang Dynasty (618–907 CE) and is based on the prestige dialect of Chang'an (modern
day Xi'an), its capital. Quite a few words from the variety of Old Chinese spoken in the state of Wu (where the ancestral language of the Min and Wu dialect families originated, and which was likely influenced by the Chinese spoken in the state of Chu, itself not founded by Chinese speakers),[citation needed] and later words from Middle Chinese as
well, have retained the original meanings in Hokkien, while many of their counterparts in Mandarin Chinese have either
fallen out of daily use, have been substituted with other words (some of which are borrowed from other languages
while others are new developments), or have developed newer meanings. The same may be said of Hokkien as well, since
some lexical meaning evolved in step with Mandarin while others are wholly innovative developments. Hokkien originated
from Quanzhou. After the Opium War in 1842, Xiamen (Amoy) became one of the major treaty ports to be opened for trade
with the outside world. From mid-19th century onwards, Xiamen slowly developed to become the political and economical
center of the Hokkien-speaking region in China. This caused Amoy dialect to gradually replace the position of dialect
variants from Quanzhou and Zhangzhou. From mid-19th century until the end of World War II, western diplomats usually
learned Amoy Hokkien as the preferred dialect if they were to communicate with the Hokkien-speaking populace in China
or South-East Asia. In the 1940s and 1950s, Taiwan also held Amoy Hokkien as its standard and tended to incline itself
towards Amoy dialect. In the 1990s, marked by the liberalization of language development and mother tongue movement
in Taiwan, Taiwanese Hokkien had undergone a fast pace in its development. In 1993, Taiwan became the first region
in the world to implement the teaching of Taiwanese Hokkien in Taiwanese schools. In 2001, the local Taiwanese language
program was further extended to all schools in Taiwan, and Taiwanese Hokkien became one of the compulsory local Taiwanese
languages to be learned in schools. The mother tongue movement in Taiwan even influenced Xiamen (Amoy) to the point
that in 2010, Xiamen also began to implement the teaching of Hokkien dialect in its schools. In 2007, the Ministry
of Education in Taiwan also completed the standardization of Chinese characters used for writing Hokkien and developed
Tâi-lô as the standard Hokkien pronunciation and romanization guide. A number of universities in Taiwan also offer Hokkien degree courses to train fluent speakers for careers in Hokkien media and education. Taiwan also has its own Hokkien literary and cultural circles, in which Hokkien poets and writers compose poetry or
literature in Hokkien on a regular basis. Hokkien dialects are typically written using Chinese characters (漢字, Hàn-jī).
However, the written script was and remains adapted to the literary form, which is based on classical Chinese, not
the vernacular and spoken form. Furthermore, the character inventory used for Mandarin (standard written Chinese)
does not correspond to Hokkien words, and there are a large number of informal characters (替字, thè-jī or thòe-jī;
'substitute characters') which are unique to Hokkien (as is the case with Cantonese). For instance, about 20 to 25%
of Taiwanese morphemes lack an appropriate or standard Chinese character. While most Hokkien morphemes have standard
designated characters, they are not always etymological or phono-semantic. Similar-sounding, similar-meaning or rare
characters are commonly borrowed or substituted to represent a particular morpheme. Examples include "beautiful"
(美 bí is the literary form), whose vernacular morpheme suí is represented by characters like 媠 (an obsolete character),
婎 (a vernacular reading of this character) and even 水 (transliteration of the sound suí), or "tall" (高 ko is the
literary form), whose morpheme kôan is 懸. Common grammatical particles are not exempt; the negation particle m̄ (not)
is variously represented by 毋, 呣 or 唔, among others. In other cases, characters are invented to represent a particular
morpheme (a common example is the character 𪜶 in, which represents the personal pronoun "they"). In addition, some
characters have multiple and unrelated pronunciations, adapted to represent Hokkien words. For example, the Hokkien
word bah ("meat") has been reduced to the character 肉, which has etymologically unrelated colloquial and literary
readings (he̍k and jio̍k, respectively). Another case is the word 'to eat,' chia̍h, which is often transcribed in
Taiwanese newspapers and media as 呷 (a Mandarin transliteration, xiā, to approximate the Hokkien term), even though
its recommended character in dictionaries is 食. Hokkien, especially Taiwanese, is sometimes written in the Latin
script using one of several alphabets. Of these the most popular is Pe̍h-ōe-jī (traditional Chinese: 白話字; simplified
Chinese: 白话字; pinyin: Báihuàzì). POJ was developed first by Presbyterian missionaries in China and later by the indigenous
Presbyterian Church in Taiwan; use of this alphabet has been actively promoted since the late 19th century. The use
of a mixed script of Han characters and Latin letters is also seen, though remains uncommon. Other Latin-based alphabets
also exist. Min Nan texts, all Hokkien, can be dated back to the 16th century. One example is the Doctrina Christiana
en letra y lengua china, presumably written after 1587 by the Spanish Dominicans in the Philippines. Another is a
Ming Dynasty script of a play called Romance of the Lychee Mirror (1566), supposedly the earliest Southern Min colloquial
text. Xiamen University has also developed an alphabet based on Pinyin, which has been published in a dictionary
called the Minnan Fangyan-Putonghua Cidian (閩南方言普通話詞典) and a language teaching book, which is used to teach the language
to foreigners and Chinese non-speakers. It is known as Pumindian. All Latin characters required by Pe̍h-ōe-jī can
be represented using Unicode (or the corresponding ISO/IEC 10646: Universal Character Set), using precomposed or
combining (diacritics) characters. Prior to June 2004, the vowel akin to but more open than o, written with a dot
above right, was not encoded. The usual workaround was to use the (stand-alone; spacing) character Interpunct (U+00B7,
·) or less commonly the combining character dot above (U+0307). As these are far from ideal, since 1997 proposals
have been submitted to the ISO/IEC working group in charge of ISO/IEC 10646—namely, ISO/IEC JTC1/SC2/WG2—to encode
a new combining character dot above right. This is now officially assigned to U+0358 (see documents N1593, N2507,
N2628, N2699, and N2713). Font support is expected to follow. In 2002, the Taiwan Solidarity Union, a party with
about 10% of the Legislative Yuan seats at the time, suggested making Taiwanese a second official language. This
proposal encountered strong opposition not only from Mainlander groups but also from Hakka and Taiwanese aboriginal
groups who felt that it would slight their home languages, as well as others including Hoklo who objected to the
proposal on logistical grounds and on the grounds that it would increase ethnic tensions. Because of these objections,
support for this measure was lukewarm among moderate Taiwan independence supporters, and the proposal did not pass.
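The Pe̍h-ōe-jī encoding history above (the interpunct and dot-above workarounds versus the later dedicated combining mark U+0358) can be verified against Python's bundled Unicode character database. A minimal sketch, using an illustrative base letter "o":

```python
import unicodedata

# Historical workarounds vs. the dedicated codepoint for the POJ vowel
# written as "o" with a dot at the upper right.
candidates = {
    "spacing interpunct (U+00B7)":        "o\u00B7",
    "combining dot above (U+0307)":       "o\u0307",
    "combining dot above right (U+0358)": "o\u0358",
}

for label, text in candidates.items():
    mark = text[1]
    # unicodedata.combining() is nonzero only for true combining marks,
    # which attach to the preceding base letter rather than occupying
    # their own cell -- the interpunct workaround fails this test.
    print(f"{label}: {unicodedata.name(mark)}, "
          f"combining class {unicodedata.combining(mark)}")
```

Running this shows why the interpunct was only a stopgap: it is a stand-alone character (combining class 0), whereas U+0358 COMBINING DOT ABOVE RIGHT renders as a diacritic on the vowel wherever fonts support it.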
Professional wrestling (colloquially abbreviated to pro wrestling or wrestling) is an athletic form of entertainment based
on a portrayal of a combat sport. Taking the form of live events held by touring promotions, it portrays a unique
style of combat based on a combination of adopted styles, which include classical wrestling, catch wrestling and
various forms of martial arts, as well as an innovative style based on grappling (holds/throws), striking, and aerialism.
Various forms of weaponry are sometimes used. The content, including match outcomes, is choreographed, and the combative actions and reactions are executed in ways designed both to simulate pain and to protect the performers from it. These facts were once kept highly secret but are now openly acknowledged. By and large, the true nature of the
content is ignored by the performing promotion in official media in order to sustain and promote the willing suspension
of disbelief for the audience by maintaining an aura of verisimilitude. Fan communications by individual wrestlers
and promotions through outside media (i.e., interviews) will often directly acknowledge the fictional nature of the
spectacle. Originating as a popular form of entertainment in 19th-century Europe and later as a sideshow exhibition
in North American traveling carnivals and vaudeville halls, professional wrestling grew into a standalone genre of
entertainment with many diverse variations in cultures around the globe, and is now considered a multimillion-dollar
entertainment industry. While it has greatly declined in Europe, in North America it has experienced several different
periods of prominent cultural popularity during its century and a half of existence. The advent of television gave
professional wrestling a new outlet, and wrestling (along with boxing) was instrumental in making pay-per-view a
viable method of content delivery. Although professional wrestling started out as petty acts in sideshows, traveling
circuses and carnivals, today it is a billion-dollar industry. Revenue is drawn from live event ticket sales, network
television broadcasts, pay-per-view broadcasts, personal appearances by performers, branded merchandise and home
video. Particularly since the 1950s, pro wrestling events have frequently been responsible for sellout crowds at
large arenas, including Madison Square Garden, as well as football stadiums, by promotions including the WWE, the
NWA territory system, WCW, and AWA. Annual shows such as WrestleMania, SummerSlam, Royal Rumble, and formerly Bash at the Beach, Halloween
Havoc and Starrcade are among the highest-selling pay-per-view programming each year. Today, internet programming is used by a number of companies to air web shows, internet pay-per-views (iPPVs) or on-demand content, generating internet-related revenue from the evolving World Wide Web. Due to its persistent cultural
presence and to its novelty within the performing arts, wrestling constitutes a recurring topic in both academia
and the media. Several documentaries have been produced looking at professional wrestling, most notably, Beyond the
Mat directed by Barry W. Blaustein, and Wrestling with Shadows featuring wrestler Bret Hart and directed by Paul
Jay. There have also been many fictional depictions of wrestling; the 2008 film The Wrestler received several Oscar
nominations and began a career revival for star Mickey Rourke. Currently, the largest professional wrestling company
worldwide is the United States-based WWE, which bought out many smaller regional companies in the late 20th century,
as well as its primary US competitors WCW and Extreme Championship Wrestling (ECW) in early 2001. Other prominent
professional wrestling companies worldwide include the US-based Total Nonstop Action Wrestling (TNA) and Ring of
Honor (ROH), Consejo Mundial de Lucha Libre (CMLL) and Asistencia Asesoría y Administración (AAA) in Mexico, and
the Japanese New Japan Pro Wrestling (NJPW), All Japan Pro Wrestling (AJPW), and Pro Wrestling Noah (NOAH) leagues.
Gradually, the predetermined nature of professional wrestling became an open secret, as prominent figures in the
wrestling business (including WWE owner Vince McMahon) began to publicly admit that wrestling was entertainment,
not competition. This public reveal has garnered mixed reactions from the wrestling community, as some feel that
exposure ruins the experience for the spectators, as it does in illusionism. Despite the public admission of
the theatrical nature of professional wrestling, many U.S. states still regulate professional wrestling as they do
other professional competitive sports. For example, New York State still regulates "professional wrestling" through
the New York State Athletic Commission (NYSAC). Professional wrestling shows can be considered a form of theatre in
the round, with the ring, ringside area, and entryway comprising a thrust stage. However, there is a much more limited
concept of a fourth wall than in most theatric performances. The audience is recognized and acknowledged by the performers
as spectators to the sporting event being portrayed, and are encouraged to interact as such. This leads to a high
level of audience participation; in fact, their reactions can dictate how the performance unfolds. Often, individual
matches will be part of a longer storyline conflict between "babyfaces" (often shortened to just "faces") and "heels".
"Faces" (the "good guys") are those whose actions are intended to encourage the audience to cheer, while "heels"
(the "bad guys") act to draw the spectators' ire. Most wrestling matches last for a set number of falls, with the
first side to achieve the majority number of pinfalls, submissions, or countouts being the winner. Historically,
matches were wrestled to 3 falls ("best 2 out of 3") or 5 falls ("best 3 out of 5"). The standard for modern matches
is one fall. Even though this is now standard, many announcers will still explicitly state it (e.g. "The following
contest is set for one fall with a 20-minute time limit"). These matches are given a time limit; if not enough falls
are scored by the end of the time limit, the match is declared a draw. Modern matches are generally given a 10- to
30-minute time limit for standard matches; title matches can go for up to one hour. British wrestling matches held
under Admiral-Lord Mountevans rules are 2 out of 3 falls. In matches with multiple competitors, an elimination system
may be used. Any wrestler who has a fall scored against them is forced out of the match, and the match continues
until only one remains. However, it is much more common when more than two wrestlers are involved to simply go one
fall, with the one scoring the fall, regardless of who they scored it against, being the winner. In championship
matches, this means that, unlike one-on-one matches (where the champion can simply disqualify themselves or get themselves
counted out to retain the title via the "champion's advantage"), the champion does not have to be pinned or involved
in the decision to lose the championship. However, heel champions often find advantages, not in champion's advantage,
but in the use of weapons and outside interference, as these poly-sided matches tend to involve no holds barred rules.
Many modern specialty matches have been devised, with unique winning conditions. The most common of these is the
ladder match. In the basic ladder match, the wrestlers or teams of wrestlers must climb a ladder to obtain a prize
that is hoisted above the ring. The key to winning this match is that the wrestler or team of wrestlers must try
to incapacitate each other long enough for one wrestler to climb the ladder and secure that prize for their team.
As a result, the ladder can be used as a weapon. The prizes include – but are not limited to – a championship
belt (the traditional prize), a document granting the winner the right to a future title shot, or any document that
matters to the wrestlers involved in the match (such as one granting the winner a cash prize). Another common specialty
match is known as the battle royal. In a battle royal, all the wrestlers enter the ring at once, typically 20-30 at one time. When the match begins, the simple objective is to throw the opponent over
the top rope and out of the ring with both feet on the floor in order to eliminate that opponent. The last wrestler
standing is declared the winner. A variant on this type of match is the WWE's Royal Rumble where two wrestlers enter
the ring to start the match and other wrestlers follow at 90-second intervals (previously 2 minutes) until 30-40
wrestlers have entered the ring. All other rules stay the same. For more match types, see Professional wrestling
match types. Due to the legitimate role that referees play in wrestling of serving as liaison between the bookers
backstage and the wrestlers in the ring (the role of being a final arbitrator is merely kayfabe), the referee is
present, even in matches that do not at first glance appear to require a referee (such as a ladder match, as it is
no holds barred, and the criteria for victory could theoretically be assessed from afar). Although their actions
are also frequently scripted for dramatic effect, referees are subject to certain general rules and requirements
in order to maintain the theatrical appearance of unbiased authority. The most basic rule is that an action must
be seen by a referee to be declared for a fall or disqualification. This allows for heel characters to gain a scripted
advantage by distracting or disabling the referee in order to perform some ostensibly illegal maneuver on their opponent.
Most referees are unnamed and essentially anonymous, though the WWE has let its officials reveal their names. Special
guest referees may be used from time to time; by virtue of their celebrity status, they are often scripted to dispense
with the appearance of neutrality and use their influence to unfairly influence the outcome of the match for added
dramatic impact. Face special referees will often fight back against hostile heel wrestlers, particularly if the
special referee is either a wrestler themselves or a famous martial artist (such as Tito Ortiz in the main event
at TNA's Hard Justice in 2005). They also have the power to eject from ringside any of the heel wrestler's entourage/stable,
who may otherwise interfere with the match. Matches are held within a wrestling ring, an elevated square canvas mat
with posts on each corner. A cloth apron hangs over the edges of the ring. Three horizontal ropes or cables surround
the ring, suspended with turnbuckles which are connected to the posts. For safety, the ropes are padded at the turnbuckles
and cushioned mats surround the floor outside the ring. Guardrails or a similar barrier enclose this area from the
audience. Wrestlers are generally expected to stay within the confines of the ring, though matches sometimes end
up outside the ring, and even in the audience, to add excitement. In some team matches, only one entrant from each
team may be designated as the "legal" or "active" wrestler at any given moment. Two wrestlers must make physical
contact (typically palm-to-palm) in order to transfer this legal status. This is known as a "tag", with the participants
"tagging out" and "tagging in". Typically the wrestler who is tagging out has a 5-second count to leave the ring,
whereas the one tagging in can enter the ring at any time, resulting in heels legally double-teaming a face. Sometimes,
poly-sided matches that pit every wrestler against the others will incorporate tagging rules. Outside of kayfabe, this is
done to give wrestlers a break from the action (as these matches tend to go on for long periods of time), and to
make the action in the ring easier to choreograph. One of the most mainstream examples of this is the four-corner
match, the most common type of match in the WWE before it was replaced with its equivalent fatal four-way; four wrestlers,
each for themselves, fight in a match, but only two wrestlers can be in the match at any given time. The other two
are positioned in the corner, and tags can be made between any two wrestlers. A count may be started at any time
that a wrestler's shoulders are down (both shoulders touching the mat), back-first and any part of the opponent's
body is lying over the wrestler. This often results in pins that can easily be kicked out of, if the defensive wrestler
is even slightly conscious. For example, an attacking wrestler who is half-conscious may simply drape an arm over
an opponent, or a cocky wrestler may place their foot gently on the opponent's body, prompting a three-count from
the referee. However, although almost any scenario where one wrestler is covering another prone, back-first wrestler
can be considered a pin attempt, there is one important exception to that rule: Pin attempts broken up by other wrestlers.
In matches involving multiple wrestlers (such as triple threat matches or tag team matches), wrestlers who see a
pin attempt that, if successful, would result in them losing the match are expected to run in and break the pin attempt
by performing some sort of offensive maneuver on the wrestler attempting the pin. The most common attacks for breaking
pins are a stomp to the back and an elbow to the back of the head, as they are simple to pull off in the spur of
the moment. However, these moves, simple as they are, still leave the pinning wrestler on top of the pinned wrestler.
Despite the pinning wrestler still technically being on top of the pinned wrestler, the referee will still consider
the pin attempt to be broken. Illegal pinning methods include using the ropes for leverage and hooking the opponent's
clothing, which are therefore popular cheating methods for heels, unless certain stipulations make such an advantage
legal. Such pins are rarely seen by the referee (whose attention is on whether the shoulders are down) and are
subsequently often used by heels and on occasion by cheating faces to win matches. Even if it is noticed, it is rare
for such an attempt to result in a disqualification (see below), and instead it simply results in nullification of
the pin attempt, so the heel wrestler rarely has anything to lose for trying it, anyway. A wrestler may voluntarily
submit by verbally informing the referee (usually used in moves such as the Mexican Surfboard, where all four limbs
are incapacitated, making tapping impossible). Also, since Ken Shamrock (a legitimate UFC competitor in its early
days) popularized it in 1997, a wrestler can indicate a voluntary submission by "tapping out", that is, tapping a
free hand against the mat or against an opponent. Occasionally, a wrestler will reach for a rope (see rope breaks
below), only to put their hand back on the mat so they can crawl towards the rope some more; this is not a submission,
and the referee decides what their intent is. Submission was initially a large factor in professional wrestling,
but following the decline of the submission-oriented catch-as-catch-can style from mainstream professional wrestling,
the submission largely faded until the rise of the legitimate sport of mixed martial arts. Despite this, some wrestlers,
such as Chris Jericho, The Undertaker, Ric Flair, Bret Hart, Kurt Angle, Ken Shamrock, Dean Malenko, Chris Benoit,
and Tazz, became famous for winning matches via submission. A wrestler with a signature submission technique is portrayed
as better at applying the hold, making it more painful or more difficult to get out of than others who use it, or
can be falsely credited as inventing the hold (such as when Tazz popularized the kata ha jime judo choke in pro wrestling
as the "Tazzmission"). Since all contact between the wrestlers must cease if any part of the body is touching, or
underneath, the ropes, many wrestlers will attempt to break submission holds by deliberately grabbing the bottom
ropes. This is called a "rope break", and it is one of the most common ways to break a submission hold. Most holds
leave an arm or leg free, so that the person can tap out if they want. Instead, they use these free limbs to either
grab one of the ring ropes (the bottom one is the most common, as it is nearest the wrestlers, though other ropes
sometimes are used for standing holds such as Chris Masters' Master Lock) or drape their foot across, or underneath
one. Once this has been accomplished, and the accomplishment is witnessed by the referee, the referee will demand
that the offending wrestler break the hold, and start counting to five if the wrestler does not. If the referee reaches
the count of five, and the wrestler still does not break the hold, they are disqualified. A wrestler can win by knockout
(sometimes referred to as a referee stoppage) if, without resorting to submission holds, they still pummel their
opponent to the point that they are unconscious or are unable to intelligently defend themselves. To check for a
knockout in this manner, a referee will wave their hand in front of the wrestler's face; if the wrestler does not
react in any way, the referee will award the victory to the other wrestler. If all the active wrestlers in a match
are down inside the ring at the same time, the referee will begin a count (usually ten seconds, twenty in Japan).
If nobody rises to their feet by the end of the count, the match is ruled a draw. Any participant who stands up in
time will end the count for everyone else. In a Last Man Standing match, this form of a knockout is the only way
that the match can end, so the referee will count when one or more wrestlers are down, and one wrestler standing
up before the 10-count doesn't stop the count for another wrestler who is still down. A referee may stop the match
when they or the official ring physician decide that a wrestler cannot safely continue the match. This may be decided
if the wrestler cannot continue the match due to an injury. At the Great American Bash in 2008, Chris Jericho was
declared the winner of a match against Shawn Michaels when Michaels could not defend himself due to excessive blood
loss and impaired vision. At NXT TakeOver: Rival in 2015, the referee stopped the match when Sami Zayn could not
defend himself due to an injury sustained against Kevin Owens for the NXT Championship. A countout (alternatively
"count-out" or "count out") happens when a wrestler is out of the ring long enough for the referee to count to ten
(twenty in some promotions), thereby losing the match. The count is broken when the wrestler outside the ring re-enters it, and restarts if they exit again. Playing into this, some wrestlers will "milk" the count by sliding into the ring, and immediately sliding
back out. As they were technically inside the ring for a split second before exiting again, it is sufficient to restart
the count. This is often referred to by commentators as "breaking the count." Heels often use this tactic in order
to buy themselves more time to catch their breath, or to attempt to frustrate their babyface opponents. Disqualification
(sometimes abbreviated as "DQ") occurs when a wrestler violates the match's rules, thus losing automatically. Although
a countout can technically be considered a disqualification (as it is, for all intents and purposes, an automatic
loss suffered as a result of violating a match rule), the two concepts are often distinct in wrestling. A no disqualification
match can still end by countout (although this is rare); typically, a match must be declared a "no holds barred"
match, a "street fight" or some other term, in order for both disqualifications and countouts to be waived.[citation
needed] In practice, not all rule violations will result in a disqualification as the referee may use their own judgement
and is not obligated to stop the match. Usually, the only offenses that the referee will see and immediately disqualify
the match on (as opposed to having multiple offenses) are low blows, weapon usage, interference, or assaulting the
referee. In WWE, a referee must see the violation with their own eyes to rule that the match end in a disqualification
(simply watching the video tape is not usually enough) and the referee's ruling is almost always final, although
Dusty finishes (named after, and made famous by, Dusty Rhodes) will often result in the referee's decision being
overturned. It is not uncommon for the referees themselves to get knocked out during a match, which is commonly referred
to by the term "ref bump". While the referee remains "unconscious", wrestlers are free to violate rules until the
referee is revived or replaced. In some cases, a referee might disqualify a person under the presumption that it
was that wrestler who knocked them out; most referee knockouts are arranged to allow a wrestler, usually a heel,
to gain an advantage. For example, a wrestler may get whipped into a referee at a slower speed, knocking the ref
down for a short amount of time; during that interim period, one wrestler may pin their opponent for a three-count
and would have won the match but for the referee being down (sometimes, another referee will sprint to the ring from
backstage to attempt to make the count, but by then, the other wrestler has had enough time to kick out of their
own accord). A professional wrestling match can end in a draw. A draw occurs if both opponents are simultaneously
disqualified or counted out (as when the referee loses complete control of the match and both opponents attack each other with no regard to being in a match, like Brock Lesnar vs. Undertaker at Unforgiven in 2002), neither opponent
is able to answer a ten-count, or both opponents simultaneously win the match. The latter can occur if, for example,
one opponent's shoulders touch the mat while maintaining a submission hold against another opponent. If the opponent
in the hold begins to tap out at the same time a referee counts to three for pinning the opponent delivering the
hold, both opponents have legally achieved scoring conditions simultaneously. Traditionally, a championship may not
change hands in the event of a draw (though it may become vacant), though some promotions such as TNA have endorsed
rules where the champion may lose a title by disqualification. A variant of the draw is the time-limit draw, where
the match does not have a winner by a specified time period (a one-hour draw, which was once common, is known in
wrestling circles as a "Broadway"). A wrestling match may be declared a no contest if the winning conditions are
unable to occur. This can be due to excessive interference, loss of referee's control over the match, one or more
participants sustaining debilitating injury not caused by the opponent, or the inability of a scheduled match to
even begin. A no contest is a state separate and distinct from a draw: a draw indicates that winning conditions were met, whereas in a no contest they could not be. Although the terms are sometimes used interchangeably in practice, this usage is technically incorrect. While
each wrestling match is ostensibly a competition of athletics and strategy, the goal of each match from a business
standpoint is to excite and entertain the audience. Although the competition is staged, dramatic emphasis can be
utilized to draw out the most intense reaction from the audience. Heightened interest results in higher attendance
rates, increased ticket sales, higher ratings on television broadcasts (which result in greater ad revenue), higher
pay-per-view buyrates, and sales of branded merchandise and recorded video footage. All of these contribute to the
profit of the promotion company. In Latin America and English-speaking countries, most wrestlers (and other on-stage
performers) portray character roles, sometimes with personalities wildly different from their own. These personalities
are a gimmick intended to heighten interest in a wrestler without regard to athletic ability. Some can be unrealistic
and cartoon-like (such as Doink the Clown), while others carry more verisimilitude and can be seen as exaggerated
versions of the performer's real life personality (such as Chris Jericho, The Rock, John Cena, Stone Cold Steve Austin,
and CM Punk). In lucha libre, many characters wear masks, adopting a secret identity akin to a superhero, a near-sacred
tradition. An individual wrestler may sometimes use their real name, or a minor variation of it, for much of their
career, such as Angelo Poffo, Ernie Ladd, Verne Gagne, Bret Hart, and Randy Orton. Others can keep one ring name
for their entire career (cases in point include Chris Jericho, Shawn Michaels, CM Punk and Ricky Steamboat), or may
change from time to time to better suit the demands of the audience or company. Sometimes a character is owned and
trademarked by the company, forcing the wrestler to find a new one when they leave (although a simple spelling change,
such as changing Rhyno to Rhino, can usually get around this), and sometimes a character is owned by the wrestler.
Sometimes, a wrestler may change their legal name in order to obtain ownership of their ring name (examples include
Andrew Martin and Warrior). Many wrestlers (such as The Rock and The Undertaker) are strongly identified with their
character, even responding to the name in public or between friends. It is considered proper decorum for
fellow wrestlers to refer to each other by their stage names/characters rather than their birth/legal names, unless
otherwise introduced. A professional wrestling character's popularity can grow to the point that it makes appearances
in other media (see Hulk Hogan and El Santo) or even gives the performer enough visibility to enter politics (Antonio
Inoki and Jesse Ventura, among others). Typically, matches are staged between a protagonist (historically an audience
favorite, known as a babyface, or "the good guy") and an antagonist (historically a villain with arrogance, a tendency
to break rules, or other unlikable qualities, called a heel). In recent years, however, antiheroes have also become
prominent in professional wrestling. There is also a less common role of a "tweener", who is neither fully face nor
fully heel yet able to play either role effectively (case in point, Samoa Joe during his first run in TNA from June
2005 to November 2006). At times a character may "turn", altering their face/heel alignment. This may be an abrupt,
surprising event, or it may slowly build up over time. It is almost always accomplished with a marked change in
behavior on the part of the character. Some turns become defining points in a wrestler's career, as was the case
when Hulk Hogan turned heel after being a top face for over a decade. Others may have no noticeable effect on the
character's status. If a character repeatedly switches between being a face and heel, this lessens the effect of
such turns, and may result in apathy from the audience. Vince McMahon, for example, has had more heel and face turns than anyone in WWE history. As with personae in general, a character's face or heel alignment may change with
time, or remain constant over its lifetime (the most famous example of the latter is Ricky Steamboat, a WWE Hall
of Famer who remained a babyface throughout his entire career). Sometimes a character's heel turn will become so
popular that eventually the audience response will alter the character's heel-face cycle to the point where the heel
persona will, in practice, become a face persona, and what was previously the face persona, will turn into the heel
persona, such as when Dwayne Johnson first began using "The Rock" persona as a heel character, as opposed to his
original "Rocky Maivia" babyface persona. Another legendary example is Stone Cold Steve Austin, who was originally
booked as a heel, with such mannerisms as drinking on the job, using profanity, breaking company property, and even
breaking into people's private homes. However, much to WWF's surprise, the fans enjoyed Austin's antics so much that
he became one of the greatest antiheroes in the history of the business. He, along with the stable of D-Generation
X, is generally credited with ushering in the Attitude Era of WWF programming. In some cases a wrestler may possess
admirable physical traits but perceived mediocre public speaking abilities (such as Brock Lesnar), or their gimmick
may be that of a "wild savage" needing a handler (such as Kamala). Such performers have historically employed a manager,
who speaks on their behalf and adds to the performance. Managers have sometimes become major personalities, including
Bobby Heenan, Paul Heyman, Ernie Roth, and Paul Bearer. A manager role may also be filled by a "valet", typically
an appealing female who may participate in love triangle storylines, "damsel in distress" situations, and scripted
fights with female wrestlers. Some of these have also gone on to become recognized stars, such as Tammy Lynn Sytch,
Stacy Keibler, and Miss Elizabeth. While true exhibition matches are not uncommon, most matches tell a story analogous
to a scene in a play or film, or an episode of a serial drama: The face will sometimes win (triumph) or sometimes
lose (tragedy). Longer story arcs can result from multiple matches over the course of time. Since most promotions
have a championship title, competition for the championship is a common impetus for stories. Also, anything from
a character's own hair to their job with the promotion can be wagered in a match. The same type of good vs. evil
storylines were also once popular in roller derby. Other stories result from a natural rivalry between two or more
characters. Outside of performance, these are referred to as feuds. A feud can exist between any number of participants
and can last for a few days up to multiple decades. The feud between Ric Flair and Ricky Steamboat lasted from the
late 1970s into the early 1990s and allegedly spanned over two thousand matches (although most of those matches were
mere dark matches). The career-spanning history between characters Mike Awesome and Masato Tanaka is another example
of a long-running feud, as is the case of Stone Cold Steve Austin vs. Mr. McMahon, one of the most lucrative feuds
in the World Wrestling Federation (WWF) during 1998 and 1999. Also, anything that can be used as an element of drama
can exist in professional wrestling stories: romantic relationships (including love triangles and marriage), racism,
classism, nepotism, favoritism, corporate corruption, family bonds, personal histories, grudges, theft, cheating,
assault, betrayal, bribery, seduction, stalking, confidence tricks, extortion, blackmail, substance abuse, self-doubt,
self-sacrifice; even kidnapping, sexual fetishism, necrophilia, misogyny, rape and death have been portrayed in wrestling.
Some promotions have included supernatural elements such as magic, curses, the undead and Satanic imagery (most notably
The Undertaker and his Ministry of Darkness, a stable that regularly performed evil rituals and human sacrifice in
Satanic-like worship of a hidden power figure). Celebrities have also become involved in storylines. Behind the scenes,
the bookers in a company will place the title on the most accomplished performer, or those the bookers believe will
generate fan interest in terms of event attendance and television viewership. Lower ranked titles may also be used
on the performers who show potential, thus allowing them greater exposure to the audience. However other circumstances
may also determine the use of a championship. A combination of a championship's lineage, the caliber of performers
as champion, and the frequency and manner of title changes, dictates the audience's perception of the title's quality,
significance and reputation. A wrestler's championship accomplishments can be central to their career, becoming a
measure of their performance ability and drawing power. In general, multiple title reigns or an extended title reign are indicative of a wrestler's ability to maintain audience interest and/or to perform
in the ring. As such, the most accomplished or decorated wrestlers tend to be revered as legends despite the predetermined
nature of title reigns. American wrestler Ric Flair has had multiple world heavyweight championship reigns spanning
over three decades. Japanese wrestler Último Dragón once held and defended a record 10 titles simultaneously. All
notable wrestlers now enter the ring accompanied by music, and regularly add other elements to their entrance. The
music played during the ring entrance will usually mirror the wrestler's personality. Many wrestlers, particularly
in America, have music and lyrics specially written for their ring entrance. While invented long before, the practice
of including music with the entrance gained rapid popularity during the 1980s, largely as a result of the huge success
of Hulk Hogan and the WWF, and their Rock 'n' Wrestling Connection. When a match is won, the victor's theme music
is usually also played in celebration. The women's division of professional wrestling has maintained a recognized
world champion since 1937, when Mildred Burke won the original World Women's title. She then formed the World Women's
Wrestling Association in the early 1950s and recognized herself as the first champion, although the championship
would be vacated upon her retirement in 1956. The NWA, however, ceased to acknowledge Burke as their Women's World
champion in 1954, and instead acknowledged June Byers as champion after a controversial finish to a high-profile
match between Burke and Byers that year. Upon Byers' retirement in 1964, The Fabulous Moolah, who won a junior heavyweight
version of the NWA World Women's Championship (the predecessor to the WWE's Women's Championship) in a tournament
back in 1958, was recognized by most NWA promoters as champion by default. In Japan, professional wrestling done
by female wrestlers is called joshi puroresu (女子プロレス) or joshi puro for short. Female wrestling is usually handled
by promotions that specialize in joshi puroresu rather than divisions of otherwise male-dominated promotions, as
is the case in the United States. However, joshi puroresu promotions usually have agreements with male puroresu promotions
such that they recognize each other's titles as legitimate, and may share cards. All Japan Women's Pro-Wrestling
was the dominant joshi organization from the 1970s to the 1990s. In the 1980s, mixed tag team matches began to take
place, with a male and female on each team and a rule stating that each wrestler could only attack the opponent of
the same gender. If a tag was made, the other team had to automatically switch their legal wrestler as well. Despite
these restrictions, many mixed tag matches do feature some physical interaction between participants of different
genders. For example, a heel may take a cheap shot at the female wrestler of the opposing team to draw a negative
crowd reaction. In lucha libre, cheap-shots and male-female attacks are not uncommon. Intergender singles bouts were
first fought on a national level in the 1990s. This began with Luna Vachon, who faced men in ECW and WWF. Later,
Chyna became the first female to hold a belt that was not exclusive to women when she won the Intercontinental Championship.
While it is a rare feat in WWE, in TNA, ODB participates in singles intergender matches. Also, ODB's kayfabe husband
and tag team partner Eric Young held the Knockouts Tag Team Championship for a record 478 days before it was stripped
by Brooke Hogan because Young was male. Some wrestlers have their own specific "mini-me" counterpart, such as Mascarita Sagrada or Alebrije's Quije. There are also cases in which midgets become valets for a wrestler, and even get physically
involved in matches, like Alushe, who often accompanies Tinieblas, or KeMonito, who is portrayed as Consejo Mundial
de Lucha Libre's mascot and is also a valet for Mistico. Dave Finlay was often aided in his matches by a midget known
mainly as Hornswoggle while in WWE, who hid under the ring and gave a shillelagh to Finlay to use on his opponent.
Finlay also occasionally threw him at his opponent(s). Hornswoggle has also been given a run with the Cruiserweight
Championship and feuded with D-Generation X in 2009. Though they have not had the level of exposure as other wrestlers,
bears have long been a part of professional wrestling. Usually declawed and muzzled, they often wrestled shoot matches
against audience members, who were offered a cash reward if they could pin the bear. They also wrestled professionals in worked,
often battle royal or handicap, matches (usually booked so the bear won). Though they have wrestled around the world
and continue to do so, wrestling bears enjoyed their greatest popularity in the Southern United States, during the
1960s and 1970s. The practice of bear wrestling has met strong opposition from animal rights activists in recent
decades, contributing to its lack of mainstream acceptance. As of 2006, it is banned in 20 U.S. states. Perhaps the
most famous wrestling bears are Ginger, Victor, Hercules and Terrible Ted. Professional wrestling in the U.S. tends
to have a heavy focus on story building and the establishment of characters (and their personalities). There is a
story for each match, and even a longer story for successive matches. The stories usually contain characters like
faces and heels, and less often antiheroes and tweeners. It is a "triumph" if the face wins, while it is a "tragedy"
if the heel wins. The characters usually have strong and sharp personalities, with examples like Doink the Clown,
whose personality is melodramatic, slapstick and fantastical. The opposition between faces and heels is very intense
in the story, and the heels may even attack the faces during TV interviews. The relationship between different characters
can also be very complex. Although professional wrestling in Mexico (lucha libre) also has stories and characters,
they are less emphasized. Wrestlers in Mexico are traditionally more agile and perform more aerial maneuvers than
professional wrestlers in the U.S. who, more often, rely on power moves and strikes to subdue their opponents. The
difference in styles is due to the independent evolution of the sport in Mexico beginning in the 1930s and the fact
that wrestlers in the cruiserweight division (peso semicompleto) are often the most popular wrestlers in Mexican
lucha libre. Wrestlers often execute high flying moves characteristic of lucha libre by utilizing the wrestling ring's
ropes to catapult themselves towards their opponents, using intricate combinations in rapid-fire succession, and
applying complex submission holds. Lucha libre is also known for its tag team wrestling matches, in which the teams
are often made up of three members, instead of two as is common in the U.S. The style of Japanese professional wrestling
(puroresu) is again different. With its origins in traditional American style of wrestling and still being under
the same genre, it has become an entity in itself. Despite the similarity to its American counterpart in that the
outcome of the matches remains predetermined, the phenomena are different in the form of the psychology and presentation
of the sport; it is treated as a full contact combat sport as it mixes hard hitting martial arts strikes with shoot
style submission holds, while in the U.S. it is rather more regarded as an entertainment show. Wrestlers incorporate
kicks and strikes from martial arts disciplines, and a strong emphasis is placed on submission wrestling. Storylines are less intricate than in the U.S.; more emphasis is placed on the concept of fighting spirit, meaning a wrestler's display of physical and mental stamina is valued more than theatrics.
Many of Japan's wrestlers including top stars such as Shinya Hashimoto, Riki Choshu and Keiji Mutoh came from a legitimate
martial arts background and many Japanese wrestlers in the 1990s began to pursue careers in mixed martial arts organizations
such as Pancrase and Shooto which at the time retained the original look of puroresu, but were actual competitions.
Those involved in producing professional wrestling have developed a kind of global fraternity, with familial bonds,
shared language and passed-down traditions. New performers are expected to "pay their dues" for a few years by working
in lower-profile promotions and working as ring crew before working their way upward. The permanent rosters of most
promotions develop a backstage pecking order, with veterans mediating conflicts and mentoring younger wrestlers.
For many decades (and still to a lesser extent today), performers were expected to keep the illusions of wrestling's
legitimacy alive even while not performing, essentially acting in character any time they were in public. Some veterans
speak of a "sickness" among wrestling performers, an inexplicable pull to remain active in the wrestling world despite
the devastating effects the job can have on one's life and health. Fans of professional wrestling have their own
subculture, comparable to those of science fiction, video games, or comic books (in some cases, the "fandoms" overlap;
in recent years, some professional wrestlers, particularly those who nurture an anti-establishment rebel persona,
such as CM Punk, have made guest appearances at comic book conventions). Those who are interested in the backstage
occurrences, future storylines, and reasonings behind company decisions read newsletters written by journalists with
inside ties to the wrestling industry. These "rags" or "dirt sheets" have expanded into the Internet, where their
information can be dispensed on an up-to-the-minute basis. Some have expanded into radio shows. Some fans enjoy a
pastime of collecting tapes of wrestling shows from specific companies, of certain wrestlers, or of specific genres.
The Internet has given fans exposure to worldwide variations of wrestling they would be unable to see otherwise.
Since the 1990s, many companies have been founded which deal primarily in wrestling footage. When the WWF purchased
both WCW and ECW in 2001, they also obtained the entire past video libraries of both productions and have released
many past matches online and on home video. Since the first established world championship, top professional wrestlers
have garnered fame within mainstream society. Each successive generation has produced a number of wrestlers who extend
their careers into the realms of music, acting, writing, business, politics or public speaking, and are known even to those unfamiliar with wrestling. Conversely, celebrities from other sports or general pop culture
also become involved with wrestling for brief periods of time. A prime example of this is The Rock 'n' Wrestling
Connection of the 1980s, which combined wrestling with MTV. Many television shows and films have been produced which
portray in-character professional wrestlers as protagonists, such as Ready to Rumble, ¡Mucha Lucha!, Nacho Libre,
and the Santo film series. In the wildly popular Rocky series of films about the fictional boxer Rocky Balboa, Rocky
III saw its hero fighting a "boxer vs. wrestler" exhibition match against the enormous and villainous wrestler "Thunderlips",
portrayed by real-life soon-to-be wrestling icon Hulk Hogan. At least two stage plays set in the world of pro wrestling
have been produced: The Baron is a comedy that retells the life of an actual performer known as Baron von Raschke.
From Parts Unknown... is an award-nominated Canadian drama about the rise and fall of a fictional wrestler. The 2009
South Park episode "W.T.F." played on the soap operatic elements of professional wrestling. One of the lead characters
on the Disney Channel series Kim Possible was a huge fan of pro wrestling, and one episode featured the sport
(with two former WWE wrestlers voicing the two fictitious wrestlers featured in the episode). The 2008 film The Wrestler,
about a washed-up professional wrestler, garnered several Oscar nominations. With its growing popularity, professional
wrestling has attracted attention as a subject of serious academic study and journalistic criticism. Many courses,
theses, essays, and dissertations have analyzed wrestling's conventions, content, and its role in modern society.
It is often included as part of studies on theatre, sociology, performance, and media. The Massachusetts Institute
of Technology developed a course of study on the cultural significance of professional wrestling, and anthropologist
Heather Levi has written an ethnography about the culture of lucha libre in Mexico. However, this was not always
the case; in the early 20th century, once it became apparent that the "sport" was worked, pro wrestling was looked
down on as cheap entertainment for the uneducated working class — an attitude that still exists to varying degrees
today. The French theorist Roland Barthes was among the first to propose that wrestling was worthy of deeper analysis,
in his essay "The World of Wrestling" from his book Mythologies, first published in 1957. Barthes argued that it
should be looked at not as a scamming of the ignorant, but as spectacle; a mode of theatric performance for a willing,
if bloodthirsty, audience. Wrestling is described as a performed art which demands an immediate reading of the juxtaposed meanings. The logical outcome matters less than the theatrical performance of the wrestlers and the referee. According to Barthes, the function of a wrestler is not to win: it is to go exactly through the motions
which are expected of them and to give the audience a theatrical spectacle. This work is considered a foundation
of all later study. While pro wrestling is often described simplistically as a "soap opera for males", it has also
been cited as filling the role of past forms of literature and theatre; a synthesis of classical heroics, commedia
dell'arte, revenge tragedies, morality plays, and burlesque. The characters and storylines portrayed by a successful
promotion are seen to reflect the current mood, attitudes, and concerns of that promotion's society (and can, in
turn, influence those same things). Wrestling's high levels of violence and masculinity make it a vicarious outlet
for aggression during peacetime. Documentary filmmakers have studied the lives of wrestlers and the effects the profession
has on them and their families. The 1999 theatrical documentary Beyond the Mat focused on Terry Funk, a wrestler
nearing retirement; Mick Foley, a wrestler within his prime; Jake Roberts, a former star fallen from grace; and a
school of wrestling student trying to break into the business. The 2005 release Lipstick and Dynamite, Piss and Vinegar:
The First Ladies of Wrestling chronicled the development of women's wrestling throughout the 20th century. Pro wrestling
has been featured several times on HBO's Real Sports with Bryant Gumbel. MTV's documentary series True Life featured
two episodes titled "I'm a Professional Wrestler" and "I Want to Be a Professional Wrestler". Other documentaries
have been produced by The Learning Channel (The Secret World of Professional Wrestling) and A&E (Hitman Hart: Wrestling
with Shadows). Bloodstained Memoirs explored the careers of several pro wrestlers, including Chris Jericho, Rob Van
Dam and Roddy Piper. Although professional wrestling is worked, there is a high chance of injury, and even death.
Strikes are often stiff, especially in Japan and in independent wrestling promotions such as Combat Zone Wrestling
(CZW) and Ring of Honor (ROH). The ring is often made out of 2 by 8 timber planks. Many of the injuries that occur
in pro wrestling are shoulder, knee, back, neck, and rib injuries. Chronic traumatic encephalopathy and traumatic
brain injuries have also been linked to pro wrestling, including in the double-murder suicide case involving Chris
Benoit. Professional wrestler Davey Richards said in 2015, "We train to take damage, we know we are going to take
damage and we accept that".
Relatively insensitive film, with a correspondingly lower speed index, requires more exposure to light to produce the same
image density as a more sensitive film, and is thus commonly termed a slow film. Highly sensitive films are correspondingly
termed fast films. In both digital and film photography, the reduction of exposure corresponding to use of higher
sensitivities generally leads to reduced image quality (via coarser film grain or higher image noise of other types).
In short, the higher the sensitivity, the grainier the image will be. Ultimately sensitivity is limited by the quantum
efficiency of the film or sensor. The Warnerke Standard Sensitometer consisted of a frame holding an opaque screen
with an array of typically 25 numbered, gradually pigmented squares brought into contact with the photographic plate
during a timed test exposure under a phosphorescent tablet, previously excited by the light of a burning magnesium ribbon.
The speed of the emulsion was then expressed in 'degrees' Warnerke (sometimes seen as Warn. or °W.) corresponding
with the last number visible on the exposed plate after development and fixation. Each number represented an increase
of 1/3 in speed, typical plate speeds were between 10° and 25° Warnerke at the time. The Scheinergrade (Sch.) system
was devised by the German astronomer Julius Scheiner (1858–1913) in 1894 originally as a method of comparing the
speeds of plates used for astronomical photography. Scheiner's system rated the speed of a plate by the least exposure
to produce a visible darkening upon development. Speed was expressed in degrees Scheiner, originally ranging from
1° Sch. to 20° Sch., where an increment of 19° Sch. corresponded to a hundredfold increase in sensitivity, which
meant that an increment of 3° Sch. came close to a doubling of sensitivity. The system was later extended to cover
larger ranges and some of its practical shortcomings were addressed by the Austrian scientist Josef Maria Eder (1855–1944)
and Flemish-born botanist Walter Hecht (de) (1896–1960), (who, in 1919/1920, jointly developed their Eder–Hecht neutral
wedge sensitometer measuring emulsion speeds in Eder–Hecht grades). Still, it remained difficult for manufacturers
to reliably determine film speeds, often only by comparing with competing products, so that an increasing number
of modified semi-Scheiner-based systems started to spread, which no longer followed Scheiner's original procedures
and thereby defeated the idea of comparability. The DIN system, officially DIN standard 4512 by Deutsches Institut
für Normung (but still named Deutscher Normenausschuß (DNA) at this time), was published in January 1934. It grew
out of drafts for a standardized method of sensitometry put forward by Deutscher Normenausschuß für Phototechnik
as proposed by the committee for sensitometry of the Deutsche Gesellschaft für photographische Forschung since 1930
and presented by Robert Luther (de) (1868–1945) and Emanuel Goldberg (1881–1970) at the influential VIII. International
Congress of Photography (German: Internationaler Kongreß für wissenschaftliche und angewandte Photographie) held
in Dresden from August 3 to 8, 1931. As in the Scheiner system, speeds were expressed in 'degrees'. Originally the
sensitivity was written as a fraction with 'tenths' (for example "18/10° DIN"), where the resultant value 1.8 represented
the relative base 10 logarithm of the speed. 'Tenths' were later abandoned with DIN 4512:1957-11, and the example
above would be written as "18° DIN". The degree symbol was finally dropped with DIN 4512:1961-10. This revision also
saw significant changes in the definition of film speeds in order to accommodate then-recent changes in the American
ASA PH2.5-1960 standard, so that film speeds of black-and-white negative film effectively would become doubled, that
is, a film previously marked as "18° DIN" would now be labeled as "21 DIN" without emulsion changes. On an international
level the German DIN 4512 system has been effectively superseded in the 1980s by ISO 6:1974, ISO 2240:1982, and ISO
5800:1979 where the same sensitivity is written in linear and logarithmic form as "ISO 100/21°" (now again with degree
symbol). These ISO standards were subsequently adopted by DIN as well. Finally, the latest DIN 4512 revisions were
replaced by corresponding ISO standards, DIN 4512-1:1993-05 by DIN ISO 6:1996-02 in September 2000, DIN 4512-4:1985-08
by DIN ISO 2240:1998-06 and DIN 4512-5:1990-11 by DIN ISO 5800:1998-06 both in July 2002. Before the advent of the
ASA system, the system of Weston film speed ratings was introduced by Edward Faraday Weston (1878–1971) and his father
Dr. Edward Weston (1850–1936), a British-born electrical engineer, industrialist and founder of the US-based Weston
Electrical Instrument Corporation, with the Weston model 617, one of the earliest photo-electric exposure meters,
in August 1932. The meter and film rating system were invented by William Nelson Goodwin, Jr., who worked for them
and later received a Howard N. Potts Medal for his contributions to engineering. The Weston Cadet (model 852 introduced
in 1949), Direct Reading (model 853 introduced 1954) and Master III (models 737 and S141.3 introduced in 1956) were
the first in their line of exposure meters to switch and utilize the meanwhile established ASA scale instead. Other
models used the original Weston scale up until ca. 1955. The company continued to publish Weston film ratings after
1955, but while their recommended values often differed slightly from the ASA film speeds found on film boxes, these
newer Weston values were based on the ASA system and had to be converted for use with older Weston meters by subtracting
1/3 exposure stop as per Weston's recommendation. Vice versa, "old" Weston film speed ratings could be converted
into "new" Westons and the ASA scale by adding the same amount, that is, a film rating of 100 Weston (up to 1955)
corresponded with 125 ASA (as per ASA PH2.5-1954 and before). This conversion was not necessary on Weston meters
manufactured and Weston film ratings published since 1956 due to their inherent use of the ASA system; however the
changes of the ASA PH2.5-1960 revision may be taken into account when comparing with newer ASA or ISO values. General
Electric switched to the ASA scale in 1946; meters manufactured since February 1946 were already equipped with the ASA scale (labeled "Exposure Index"). For some of the older meters with scales in "Film Speed" or "Film Value"
(e.g. models DW-48, DW-49 as well as early DW-58 and GW-68 variants), replaceable hoods with ASA scales were available
from the manufacturer. The company continued to publish recommended film values after that date, however, they were
now aligned to the ASA scale. Based on earlier research work by Loyd Ancile Jones (1884–1954) of Kodak and inspired
by the systems of Weston film speed ratings and General Electric film values, the American Standards Association
(now named ANSI) defined a new method to determine and specify film speeds of black-and-white negative films in 1943.
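The DIN and ASA scales described above are related logarithmically: under the modern combined notation (e.g. "ISO 100/21°"), the DIN degree is ten times the base-10 logarithm of the arithmetic speed, plus one. A minimal sketch of the conversion (function names are ours, for illustration):

```python
import math

def asa_to_din(asa: float) -> int:
    # DIN = 10 * log10(S) + 1, rounded to the nearest whole degree,
    # so ISO 100 -> 21 and ISO 400 -> 27; doubling the speed adds 3.
    return round(10 * math.log10(asa) + 1)

def din_to_asa(din: int) -> float:
    # Approximate inverse; quoted speeds are taken from the standard
    # 1/3-stop series (100, 125, 160, 200, ...), not computed exactly.
    return 10 ** ((din - 1) / 10)

print(asa_to_din(100))        # 21
print(asa_to_din(400))        # 27
print(round(din_to_asa(21)))  # 100
```

This also makes the 1960 revision easy to read: doubling the nominal speed of a film moves it up exactly 3 DIN degrees, matching the "18° DIN" to "21 DIN" relabeling mentioned above.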
ASA Z38.2.1-1943 was revised in 1946 and 1947 before the standard grew into ASA PH2.5-1954. Originally, ASA values
were frequently referred to as American standard speed numbers or ASA exposure-index numbers. (See also: Exposure
Index (EI).) The ASA standard underwent a major revision in 1960 with ASA PH2.5-1960, when the method to determine
film speed was refined and previously applied safety factors against under-exposure were abandoned, effectively doubling
the nominal speed of many black-and-white negative films. For example, an Ilford HP3 that had been rated at 200 ASA
before 1960 was labeled 400 ASA afterwards without any change to the emulsion. Similar changes were applied to the
DIN system with DIN 4512:1961-10 and the BS system with BS 1380:1963 in the following years. Film speed is found
from a plot of optical density vs. log of exposure for the film, known as the D–log H curve or Hurter–Driffield curve.
There typically are five regions in the curve: the base + fog, the toe, the linear region, the shoulder, and the
overexposed region. For black-and-white negative film, the “speed point” m is the point on the curve where density
exceeds the base + fog density by 0.1 when the negative is developed so that a point n where the log of exposure
is 1.3 units greater than the exposure at point m has a density 0.8 greater than the density at point m. The exposure
Hm, in lux-s, is that for point m when the specified contrast condition is satisfied. The ISO arithmetic speed is determined from S = 0.8/Hm. Film speed is used in the exposure equations to find the appropriate exposure parameters. Four variables
are available to the photographer to obtain the desired effect: lighting, film speed, f-number (aperture size), and
shutter speed (exposure time). The equation may be expressed as ratios, or, by taking the logarithm (base 2) of both
sides, by addition, using the APEX system, in which every increment of 1 is a doubling of exposure; this increment
is commonly known as a "stop". The effective f-number is proportional to the ratio between the lens focal length
and aperture diameter, the diameter itself being proportional to the square root of the aperture area. Thus, a lens
set to f/1.4 allows twice as much light to strike the focal plane as a lens set to f/2. Therefore, each f-number
factor of the square root of two (approximately 1.4) is also a stop, so lenses are typically marked in that progression:
f/1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32, etc. Upon exposure, the amount of light energy that reaches the film determines
the effect upon the emulsion. If the brightness of the light is multiplied by a factor and the exposure of the film
decreased by the same factor by varying the camera's shutter speed and aperture, so that the energy received is the
same, the film will be developed to the same density. This rule is called reciprocity. The systems for determining
the sensitivity for an emulsion are possible because reciprocity holds. In practice, reciprocity works reasonably
well for normal photographic films for the range of exposures between 1/1000 second to 1/2 second. However, this
relationship breaks down outside these limits, a phenomenon known as reciprocity failure. Some high-speed black-and-white
films, such as Ilford Delta 3200 and Kodak T-MAX P3200, are marketed with film speeds in excess of their true ISO
speed as determined using the ISO testing method. For example, the Ilford product is actually an ISO 1000 film, according
to its data sheet. The manufacturers do not indicate that the 3200 number is an ISO rating on their packaging. Kodak
and Fuji also marketed E6 films designed for pushing (hence the "P" prefix), such as Ektachrome P800/1600 and Fujichrome
P1600, both with a base speed of ISO 400. For digital photo cameras ("digital still cameras"), an exposure index
(EI) rating—commonly called ISO setting—is specified by the manufacturer such that the sRGB image files produced
by the camera will have a lightness similar to what would be obtained with film of the same EI rating at the same
exposure. The usual design is that the camera's parameters for interpreting the sensor data values into sRGB values
are fixed, and a number of different EI choices are accommodated by varying the sensor's signal gain in the analog
realm, prior to conversion to digital. Some camera designs provide at least some EI choices by adjusting the sensor's
signal gain in the digital realm. A few camera designs also provide EI adjustment through a choice of lightness parameters
for the interpretation of sensor data values into sRGB; this variation allows different tradeoffs between the range
of highlights that can be captured and the amount of noise introduced into the shadow areas of the photo. Digital
cameras have far surpassed film in terms of sensitivity to light, with ISO equivalent speeds of up to 409,600, a
number far beyond anything achievable with conventional film. Faster processors, as well as advances
in software noise reduction techniques allow this type of processing to be executed the moment the photo is captured,
allowing photographers to store images that have a higher level of refinement and would have been prohibitively time
consuming to process with earlier generations of digital camera hardware. The ISO standard ISO 12232:2006 gives digital
still camera manufacturers a choice of five different techniques for determining the exposure index rating at each
sensitivity setting provided by a particular camera model. Three of the techniques in ISO 12232:2006 are carried
over from the 1998 version of the standard, while two new techniques allowing for measurement of JPEG output files
are introduced from CIPA DC-004. Depending on the technique selected, the exposure index rating can depend on the
sensor sensitivity, the sensor noise, and the appearance of the resulting image. The standard specifies the measurement
of light sensitivity of the entire digital camera system and not of individual components such as digital sensors,
although Kodak has reported using a variation to characterize the sensitivity of two of their sensors in 2001. The
Recommended Exposure Index (REI) technique, new in the 2006 version of the standard, allows the manufacturer to specify
a camera model’s EI choices arbitrarily. The choices are based solely on the manufacturer’s opinion of what EI values
produce well-exposed sRGB images at the various sensor sensitivity settings. This is the only technique available
under the standard for output formats that are not in the sRGB color space. This is also the only technique available
under the standard when multi-zone metering (also called pattern metering) is used. The Standard Output Sensitivity
(SOS) technique, also new in the 2006 version of the standard, effectively specifies that the average level in the
sRGB image must be 18% gray plus or minus 1/3 stop when the exposure is controlled by an automatic exposure control
system calibrated per ISO 2721 and set to the EI with no exposure compensation. Because the output level is measured
in the sRGB output from the camera, it is only applicable to sRGB images—typically JPEG—and not to output files in
raw image format. It is not applicable when multi-zone metering is used. The CIPA DC-004 standard requires that Japanese
manufacturers of digital still cameras use either the REI or SOS techniques, and DC-008 updates the Exif specification
to differentiate between these values. Consequently, the three EI techniques carried over from ISO 12232:1998 are
not widely used in recent camera models (approximately 2007 and later). As those earlier techniques did not allow
for measurement from images produced with lossy compression, they cannot be used at all on cameras that produce images
only in JPEG format. The saturation-based speed is defined as S_sat = 78/H_sat, where H_sat is the maximum possible exposure that does not lead to a clipped or bloomed camera output.
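The saturation-based definition can be checked numerically. This is an illustrative sketch of the ISO 12232 relation S_sat = 78/H_sat (the function name is ours):

```python
import math

def saturation_speed(h_sat: float) -> float:
    # Saturation-based ISO speed from the clipping exposure H_sat
    # (in lux-seconds): S_sat = 78 / H_sat.
    return 78 / h_sat

# A sensor that clips at 0.78 lx*s rates ISO 100 (saturation-based):
print(round(saturation_speed(0.78)))  # 100

# The factor 78 places a metered 18% grey at 18%/sqrt(2) of saturation,
# leaving half a stop of headroom for specular highlights:
print(round(100 * 0.18 / math.sqrt(2), 1))  # 12.7 (% of saturation)
```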
Typically, the lower limit of the saturation speed is determined by the sensor itself, but with the gain of the amplifier
between the sensor and the analog-to-digital converter, the saturation speed can be increased. The factor 78 is chosen
such that exposure settings based on a standard light meter and an 18-percent reflective surface will result in an
image with a grey level of 18%/√2 = 12.7% of saturation. The factor √2 indicates that there is half a stop of headroom
to deal with specular reflections that would appear brighter than a 100% reflecting white surface. The noise-based
speed is defined as the exposure that will lead to a given signal-to-noise ratio on individual pixels. Two ratios
are used, the 40:1 ("excellent image quality") and the 10:1 ("acceptable image quality") ratio. These ratios have
been subjectively determined based on a resolution of 70 pixels per cm (178 DPI) when viewed at 25 cm (9.8 inch)
distance. The signal-to-noise ratio is defined as the standard deviation of a weighted average of the luminance and
color of individual pixels. The noise-based speed is mostly determined by the properties of the sensor and somewhat
affected by the noise in the electronic gain and AD converter. The standard specifies how speed ratings should be
reported by the camera. If the noise-based speed (40:1) is higher than the saturation-based speed, the noise-based
speed should be reported, rounded downwards to a standard value (e.g. 200, 250, 320, or 400). The rationale is that
exposure according to the lower saturation-based speed would not result in a visibly better image. In addition, an
exposure latitude can be specified, ranging from the saturation-based speed to the 10:1 noise-based speed. If the
noise-based speed (40:1) is lower than the saturation-based speed, or undefined because of high noise, the saturation-based
speed is specified, rounded upwards to a standard value, because using the noise-based speed would lead to overexposed
images. The camera may also report the SOS-based speed (explicitly as being an SOS speed), rounded to the nearest
standard speed rating. Despite these detailed standard definitions, cameras typically do not clearly indicate whether
the user "ISO" setting refers to the noise-based speed, saturation-based speed, or the specified output sensitivity,
or even some made-up number for marketing purposes. Because the 1998 version of ISO 12232 did not permit measurement
of camera output that had lossy compression, it was not possible to correctly apply any of those measurements to
cameras that did not produce sRGB files in an uncompressed format such as TIFF. Following the publication of CIPA
DC-004 in 2006, Japanese manufacturers of digital still cameras are required to specify whether a sensitivity rating
is REI or SOS.
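The exposure relationships described earlier in this section (the APEX stop system and the square-root-of-two f-number progression) can be sketched numerically; this is an illustrative calculation, not part of any standard's text:

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    # APEX form: EV = log2(N^2 / t). Each increment of 1 EV is one
    # "stop": half the light reaching the film or sensor.
    return math.log2(f_number ** 2 / shutter_s)

print(exposure_value(2.0, 1 / 4))  # 4.0 (EV at f/2, 1/4 s)
print(exposure_value(2.0, 1 / 8))  # 5.0 (halving the time adds one stop)

# f-numbers grow by sqrt(2) per stop; marked values (1.4, 5.6, 11, 22)
# are conventional roundings of the exact powers:
print([round(math.sqrt(2) ** i, 1) for i in range(6)])
# [1.0, 1.4, 2.0, 2.8, 4.0, 5.7]
```

The reciprocity rule described above is what makes this bookkeeping work: any aperture/shutter pair with the same EV delivers the same total exposure, within the film's reciprocity limits.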
Mexico City, or the City of Mexico (Spanish: Ciudad de México, American Spanish: [sjuˈða(ð) ðe ˈméxiko];
abbreviated as "CDMX"), is the capital of Mexico. As an "alpha" global city, Mexico City is one of the most important
financial centers in the Americas. It is located in the Valley of Mexico (Valle de México), a large valley in the
high plateaus at the center of Mexico, at an altitude of 2,240 metres (7,350 ft). The city consists of sixteen municipalities
(previously called boroughs). Greater Mexico City had a gross domestic product (GDP) of US$411 billion in 2011, making the Mexico City urban agglomeration one of the largest metropolitan economies in the world. The city
was responsible for generating 15.8% of Mexico's Gross Domestic Product and the metropolitan area accounted for about
22% of total national GDP. If it were a stand-alone country, Mexico City in 2013 would have been the fifth-largest economy in Latin America, five times as large as Costa Rica's and about the same size as Peru's. Mexico's capital is both the oldest
capital city in the Americas and one of two founded by Amerindians (Native Americans), the other being Quito. The
city was originally built on an island of Lake Texcoco by the Aztecs in 1325 as Tenochtitlan, which was almost completely
destroyed in the 1521 siege of Tenochtitlan, and subsequently redesigned and rebuilt in accordance with the Spanish
urban standards. In 1524, the municipality of Mexico City was established, known as México Tenochtitlán, and as of
1585 it was officially known as Ciudad de México (Mexico City). Mexico City served as the political, administrative
and financial center of a major part of the Spanish colonial empire. After independence from Spain was achieved,
the Federal District was created in 1824. After years of demanding greater political autonomy, residents were given
the right to directly elect a Head of Government and the representatives of the unicameral Legislative Assembly by
popular vote in 1997. Ever since, the left-wing Party of the Democratic Revolution (PRD) has controlled both offices.
In recent years, the local government has passed a wave of liberal policies, such as abortion on request, a limited
form of euthanasia, no-fault divorce, and same-sex marriage. On January 29, 2016, it ceased to be called the Federal
District (Spanish: Distrito Federal or D.F.) and is now in transition to become the country's 32nd federal entity,
giving it a level of autonomy comparable to that of a state. Because of a clause in the Mexican Constitution, however, as the seat of the powers of the Union it can never become a state unless the capital of the country is relocated elsewhere. As its population grew, the city expanded up against the lake's waters. As the depth of the lake water
fluctuated, Mexico City was subject to periodic flooding. A major labor draft, the desagüe, compelled thousands of
Indians over the colonial period to work on infrastructure to prevent flooding. Floods were not only an inconvenience
but also a health hazard, since during flood periods human waste polluted the city's streets. By draining the area,
the mosquito population dropped as did the frequency of the diseases they spread. However, draining the wetlands
also changed the habitat for fish and birds and the areas accessible for Indian cultivation close to the capital.
The concept of nobility flourished in New Spain in a way not seen in other parts of the Americas. Spaniards encountered
a society in which the concept of nobility mirrored that of their own. Spaniards respected the indigenous order of
nobility and added to it. In the ensuing centuries, possession of a noble title in Mexico did not mean one exercised
great political power, for one's power was limited even if the accumulation of wealth was not. The concept of nobility
in Mexico was not political but rather a very conservative Spanish social one, based on proving the worthiness of
the family. Most of these families proved their worth by making fortunes in New Spain outside of the city itself,
then spending the revenues in the capital, building churches, supporting charities and building extravagant palatial
homes. The craze to build the most opulent residence possible reached its height in the last half of the 18th century.
Many of these palaces can still be seen today, leading to Mexico City's nickname "The City of Palaces", given by Alexander von Humboldt. The Grito de Dolores ("Cry of Dolores"), also known as El Grito de la Independencia ("Cry
of Independence"), uttered from the small town of Dolores near Guanajuato on September 16, 1810, is the event that
marks the beginning of the Mexican War of Independence and is the most important national holiday observed in Mexico.
The "Grito" was the battle cry of the Mexican War of Independence by Miguel Hidalgo y Costilla, a Roman Catholic
priest. Hidalgo and several criollos were involved in a planned revolt against the Spanish colonial government, and
the plotters were betrayed. Fearing his arrest, Hidalgo commanded his brother Mauricio as well as Ignacio Allende
and Mariano Abasolo to go with a number of other armed men to make the sheriff release the pro-independence inmates
there on the night of September 15. They managed to set eighty free. Around 6:00 am on September 16, 1810, Hidalgo ordered
the church bells to be rung and gathered his congregation. Flanked by Allende and Juan Aldama, he addressed the people
in front of his church, encouraging them to revolt. The Battle of Guanajuato, the first major engagement of the insurgency,
occurred four days later. Mexico's independence from Spain was effectively declared in the Declaration of Independence
of the Mexican Empire on September 27, 1821, after a decade of war. Unrest followed for the next several decades,
as different factions fought for control of Mexico. The Battle for Mexico City was the series of engagements from
September 8 to September 15, 1847, in the general vicinity of Mexico City during the Mexican–American War. Included
are major actions at the battles of Molino del Rey and Chapultepec, culminating with the fall of Mexico City. The
U.S. Army under Winfield Scott scored a major success that ended the war. The American invasion of the Federal District was first resisted at the Battle of Churubusco on August 20, where the Saint Patrick's Battalion, composed primarily of Catholic Irish and German immigrants but also Canadians, English, French, Italians, Poles, Scots, Spaniards, Swiss, and Mexicans, fought for the Mexican cause and repelled the American attacks. After the Saint Patrick's Battalion was defeated, the United States deployed combat units deep into Mexico, resulting in the capture of Mexico City and Veracruz by the U.S. Army's 1st, 2nd, 3rd and 4th Divisions and bringing the Mexican–American War to a close.
The invasion culminated with the storming of Chapultepec Castle in the city itself. During this battle, on September
13, the 4th Division, under John A. Quitman, spearheaded the attack against Chapultepec and carried the castle. Future
Confederate generals George E. Pickett and James Longstreet participated in the attack. Serving in the Mexican defense
were the cadets later immortalized as Los Niños Héroes (the "Boy Heroes"). The Mexican forces fell back from Chapultepec
and retreated within the city. Attacks on the Belén and San Cosme Gates came afterwards. The treaty of Guadalupe
Hidalgo was signed in what is now the far north of the city. During this era of Porfirian rule, the city underwent
an extensive modernization. Many Spanish Colonial style buildings were destroyed, replaced by new much larger Porfirian
institutions and many outlying rural zones were transformed into urban or industrialized districts with most having
electrical, gas and sewage utilities by 1908. While the initial focus was on developing modern hospitals, schools,
factories and massive public works, perhaps the most long-lasting effects of the Porfirian modernization were creation
of the Colonia Roma area and the development of Reforma Avenue. Many of Mexico City's major attractions and landmarks
were built during this era in this style. Diaz's plans called for the entire city to eventually be modernized or
rebuilt in the Porfirian/French style of the Colonia Roma; but the Mexican Revolution began soon afterward and the
plans never came to fruition, with many projects being left half-completed. One of the best examples of this is the
Monument to the Mexican Revolution. Originally the monument was to be the main dome of Díaz's new senate hall, but when the revolution erupted only the dome and its supporting pillars had been completed. Many Mexicans subsequently saw this as a symbol that the Porfirian era was over once and for all, and the structure was turned into a monument to the victory over Díaz. Zapatista forces, based in neighboring Morelos, held strongholds along the southern edge of the Federal District, including Xochimilco, Tlalpan, Tláhuac and Milpa Alta, in their fight against the regimes of Victoriano Huerta and Venustiano Carranza. After the assassination of Carranza and a short
mandate by Adolfo de la Huerta, Álvaro Obregón took power. After being re-elected, he was killed by José de León Toral, a devout Catholic, in a restaurant near La Bombilla Park in San Ángel in 1928. Plutarco Elías Calles replaced Obregón, bringing the revolutionary period to a close. In 1980 half of all the industrial jobs in Mexico were located
in Mexico City. Under relentless growth, the Mexico City government could barely keep up with services. Villagers
from the countryside who continued to pour into the city to escape poverty only compounded the city's problems. With
no housing available, they took over lands surrounding the city, creating huge shantytowns that extended for many
miles. This caused serious air pollution in Mexico City and water pollution problems, as well as subsidence due to
overextraction of groundwater. Air and water pollution has been contained and improved in several areas due to government
programs, the renovation of vehicles and the modernization of public transportation. On Thursday, September 19, 1985,
at 7:19 am local time, Mexico City was struck by an earthquake of magnitude 8.1 on the Richter scale. Although this
earthquake was not as deadly or destructive as many similar events in Asia and other parts of Latin America, it proved
to be a disaster politically for the one-party government. The government was paralyzed by its own bureaucracy and
corruption, forcing ordinary citizens to create and direct their own rescue efforts and to reconstruct much of the
housing that was lost as well. Mexico City is located in the Valley of Mexico, sometimes called the Basin of Mexico.
This valley is located in the Trans-Mexican Volcanic Belt in the high plateaus of south-central Mexico. It has a
minimum altitude of 2,200 meters (7,200 feet) above sea level and is surrounded by mountains and volcanoes that reach
elevations of over 5,000 metres (16,000 feet). This valley has no natural drainage outlet for the waters that flow
from the mountainsides, making the city vulnerable to flooding. Drainage was engineered through the use of canals
and tunnels starting in the 17th century. Mexico City primarily rests on what was Lake Texcoco. Seismic activity
is frequent here. Lake Texcoco was drained starting from the 17th century. Although none of the lake waters remain,
the city rests on the lake bed's heavily saturated clay. This soft base is collapsing due to the over-extraction
of groundwater, called groundwater-related subsidence. Since the beginning of the 20th century the city has sunk
as much as nine metres (30 feet) in some areas. This sinking is causing problems with runoff and wastewater management,
leading to flooding problems, especially during the rainy season. The entire lake bed is now paved over and most
of the city's remaining forested areas lie in the southern boroughs of Milpa Alta, Tlalpan and Xochimilco. The area
receives about 820 millimetres (32.3 in) of annual rainfall, which is concentrated from June through September/October
with little or no precipitation the remainder of the year. The area has two main seasons. The rainy season runs from
June to October when winds bring in tropical moisture from the sea. The dry season runs from November to May, when
the air is relatively drier. This dry season subdivides into a cold period and a warm period. The cold period spans
from November to February when polar air masses push down from the north and keep the air fairly dry. The warm period
extends from March to May when tropical winds again dominate but do not yet carry enough moisture for rain. Originally much of the valley lay beneath the waters of Lake Texcoco, a system of interconnected salt and freshwater lakes.
The Aztecs built dikes to separate the fresh water used to raise crops in chinampas and to prevent recurrent floods.
These dikes were destroyed during the siege of Tenochtitlan, and during colonial times the Spanish regularly drained
the lake to prevent floods. Only a small section of the original lake remains, located outside the Federal District,
in the municipality of Atenco, State of Mexico. By the 1990s Mexico City had become infamous as one of the world's
most polluted cities; however, the city has since become a model for dramatically lowering pollution levels. By 2014 carbon
monoxide pollution had dropped dramatically, while levels of sulfur dioxide and nitrogen dioxide were nearly three
times lower than in 1992. The levels of signature pollutants in Mexico City are similar to those of Los Angeles.[citation
needed] Despite the cleanup, the metropolitan area is still the most ozone-polluted part of the country, with ozone
levels 2.5 times beyond WHO-defined safe limits. To clean up pollution, the federal and local governments implemented
numerous plans including the constant monitoring and reporting of environmental conditions, such as ozone and nitrogen
oxides. When the levels of these two pollutants reached critical levels, contingency actions were implemented which
included closing factories, changing school hours, and extending the "A day without a car" program to two days of the
week. The government also instituted industrial technology improvements, a strict biannual vehicle emission inspection
and the reformulation of gasoline and diesel fuels. The introduction of Metrobús bus rapid transit and the Ecobici
bike-sharing were among efforts to encourage alternate, greener forms of transportation. The Acta Constitutiva de
la Federación of January 31, 1824, and the Federal Constitution of October 4, 1824, fixed the political and administrative
organization of the United Mexican States after the Mexican War of Independence. In addition, Section XXVIII of Article
50 gave the new Congress the right to choose where the federal government would be located. This location would then
be appropriated as federal land, with the federal government acting as the local authority. The two main candidates
to become the capital were Mexico City and Querétaro. Due in large part to the persuasion of representative Servando
Teresa de Mier, Mexico City was chosen because it was the center of the country's population and history, even though
Querétaro was closer to the center geographically. The choice was official on November 18, 1824, and Congress delineated
a surface area of two leagues square (8,800 acres) centered on the Zocalo. This area was then separated from the
State of Mexico, forcing that state's government to move from the Palace of the Inquisition (now Museum of Mexican
Medicine) in the city to Texcoco. This area did not include the population centers of the towns of Coyoacán, Xochimilco,
Mexicaltzingo and Tlalpan, all of which remained as part of the State of Mexico. In 1854 president Antonio López
de Santa Anna enlarged the area of the Federal District almost eightfold from the original 220 to 1,700 km2 (80 to
660 sq mi), annexing the rural and mountainous areas to secure the strategic mountain passes to the south and southwest
to protect the city in event of a foreign invasion. (The Mexican–American War had just been fought.) The last changes
to the limits of the Federal District were made between 1898 and 1902, reducing the area to the current 1,479 km2
(571 sq mi) by adjusting the southern border with the state of Morelos. By that time, the total number of municipalities
within the Federal District was twenty-two. While the Federal District was ruled by the federal government through
an appointed governor, the municipalities within it were autonomous, and this duality of powers created tension between
the municipalities and the federal government for more than a century. In 1903, Porfirio Díaz largely reduced the
powers of the municipalities within the Federal District. Eventually, in December 1928, the federal government decided
to abolish all the municipalities of the Federal District. In place of the municipalities, the Federal District was
divided into one "Central Department" and 13 delegaciones (boroughs) administered directly by the government of the
Federal District. The Central Department comprised the former municipalities of Mexico City, Tacuba, Tacubaya and Mixcoac. In 1941, the General Anaya borough was merged into the Central Department, which was then renamed "Mexico
City" (thus reviving the name, but not the autonomous municipality). From 1941 to 1970, the Federal District comprised
twelve delegaciones and Mexico City. In 1970 Mexico City was split into four different delegaciones: Cuauhtémoc,
Miguel Hidalgo, Venustiano Carranza and Benito Juárez, increasing the number of delegaciones to sixteen. Since then,
in a de facto manner, the whole Federal District, whose delegaciones had by then almost formed a single urban area,
began to be considered a synonym of Mexico City. Mexico City, being the seat of the powers of the Union, did not
belong to any particular state but to all. Therefore, it was the president, representing the federation, who used
to designate the head of government of the Federal District, a position which is sometimes presented outside Mexico
as the "Mayor" of Mexico City.[citation needed] In the 1980s, given the dramatic increase in population of the previous
decades, the inherent political inconsistencies of the system, as well as the dissatisfaction with the inadequate
response of the federal government after the 1985 earthquake, residents began to request political and administrative
autonomy to manage their local affairs.[citation needed] Some political groups even proposed that the Federal District
be converted into the 32nd state of the federation. In response to the demands, in 1987 the Federal District received
a greater degree of autonomy, with the elaboration of the first Statute of Government (Estatuto de Gobierno), and the
creation of an Assembly of Representatives.[citation needed] In the 1990s, this autonomy was further expanded and,
starting from 1997, residents can directly elect the head of government of the Federal District and the representatives
of a unicameral Legislative Assembly (which succeeded the previous Assembly) by popular vote. The first elected head
of government was Cuauhtémoc Cárdenas. Cárdenas resigned in 1999 to run in the 2000 presidential elections and designated
Rosario Robles to succeed him, who became the first woman (elected or otherwise) to govern Mexico City. In 2000 Andrés
Manuel López Obrador was elected, and resigned in 2005 to run in the 2006 presidential elections, Alejandro Encinas
being designated by the Legislative Assembly to finish the term. In 2006, Marcelo Ebrard was elected for the 2006–2012
period. The Legislative Assembly of the Federal District is formed, as is the case in all legislatures in Mexico, by both single-seat and proportional seats, making it a system of parallel voting. The Federal District is divided
into 40 electoral constituencies of similar population which elect one representative by first-past-the-post plurality
(FPP), locally called "uninominal deputies". The Federal District as a whole constitutes a single constituency for
the parallel election of 26 representatives by proportionality (PR) with open-party lists, locally called "plurinominal
deputies". Even though proportionality is confined to the proportional seats to prevent a party from being overrepresented,
several restrictions apply in the assignation of the seats; namely, that no party can have more than 63% of all seats,
both uninominal and plurinominal. In the 2006 elections, the leftist PRD won an absolute majority in the direct uninominal
elections, securing 34 of the 40 FPP seats. As such, the PRD was not assigned any plurinominal seat to comply with
the law that prevents over-representation. The overall composition of the Legislative Assembly is: The politics pursued
by the administrations of heads of government in Mexico City since the second half of the 20th century have usually
been more liberal than those of the rest of the country, whether with the support of the federal government—as was
the case with the approval of several comprehensive environmental laws in the 1980s—or through laws recently approved
by the Legislative Assembly. In April 2007, the Legislative Assembly expanded provisions on abortions,
becoming the first federal entity to expand abortion in Mexico beyond cases of rape and economic reasons, to permit
it regardless of the reason should the mother request it before the twelfth week of pregnancy. In December 2009,
the Federal District became the first city in Latin America, and one of very few in the world, to legalize same-sex
marriage. For administrative purposes, the Federal District is divided into 16 "delegaciones" or boroughs. While
not fully equivalent to a municipality, the 16 boroughs have gained significant autonomy, and since 2000 their heads
of government are elected directly by plurality (they were previously appointed by the head of government of the
Federal District). Given that Mexico City is organized entirely as a Federal District, most of the city services
are provided or organized by the Government of the Federal District and not by the boroughs themselves, while in
the constituent states these services would be provided by the municipalities. The 16 boroughs of the Federal District
with their 2010 populations are: The boroughs are composed of hundreds of colonias, or neighborhoods, which have no
jurisdictional autonomy or representation. The Historic Center is the oldest part of the city (along with some other, formerly separate colonial towns such as Coyoacán and San Ángel), with some buildings dating back to the 16th century.
Other well-known central neighborhoods include Condesa, known for its Art Deco architecture and its restaurant scene;
Colonia Roma, a beaux arts neighborhood and artistic and culinary hot-spot, the Zona Rosa, formerly the center of
nightlife and restaurants, now reborn as the center of the LGBT and Korean-Mexican communities; and Tepito and La
Lagunilla, known for their local working-class folklore and large flea markets. Santa María la Ribera and San Rafael, neighborhoods of magnificent Porfiriato-era architecture, are seeing the first signs of gentrification. West
of the Historic Center (Centro Histórico) along Paseo de la Reforma are many of the city's wealthiest neighborhoods
such as Polanco, Lomas de Chapultepec, Bosques de las Lomas, Santa Fe, and (in the State of Mexico) Interlomas, which
are also the city's most important areas of class A office space, corporate headquarters, skyscrapers and shopping
malls. Nevertheless, lower-income colonias exist in some cases cheek by jowl with rich neighborhoods, particularly
in the case of Santa Fe. The south of the city is home to some other high-income neighborhoods such as Colonia del
Valle and Jardines del Pedregal, and the formerly separate colonial towns of Coyoacán, San Ángel, and San Jerónimo.
Along Avenida Insurgentes from Paseo de la Reforma, near the center, south past the World Trade Center and UNAM university
towards the Periférico ring road, is another important corridor of corporate office space. The far southern boroughs
of Xochimilco and Tláhuac have a significant rural population with Milpa Alta being entirely rural. North of the
Historic Center, Azcapotzalco and Gustavo A. Madero have important industrial centers and neighborhoods that range
from established middle-class colonias such as Claveria and Lindavista to huge low-income housing areas that share
hillsides with adjacent municipalities in the State of Mexico. In recent years much of northern Mexico City's industry
has moved to nearby municipalities in the State of Mexico. Northwest of Mexico City itself is Ciudad Satélite, a
vast middle to upper-middle-class residential and business area. The Human Development Index report of 2005 shows
that there were three boroughs with a very high Human Development Index, 12 with a high HDI value (9 above .85) and
one with a medium HDI value (almost high). Benito Juárez borough had the highest HDI of the country (.9510), followed by Miguel Hidalgo, which came 4th nationally with an HDI of .9189, and Coyoacán (5th nationally) with an HDI of .9169. Cuajimalpa, Cuauhtémoc and Azcapotzalco had very high values: .8994 (15th nationally), .8922 (23rd) and .8915 (25th), respectively. In contrast, the boroughs of Xochimilco (172nd), Tláhuac (177th) and Iztapalapa (183rd)
presented the lowest HDI values of the Federal District with values of .8481, .8473 and .8464 respectively—values
still in the global high-HDI range. The only borough that did not present a high HDI was that of rural Milpa Alta
which presented a "medium" HDI of .7984, far below all other boroughs (627th nationally while the rest stood in the
top 200). Mexico City's HDI in the 2005 report was .9012 (very high); its 2010 value was .9225 (very high), or .8307 by the newer methodology, the highest in Mexico. Mexico City is home to some of the best private hospitals
in the country; Hospital Ángeles, Hospital ABC and Médica Sur to name a few. The national public healthcare institution
for private-sector employees, IMSS, has its largest facilities in Mexico City—including the National Medical Center
and the La Raza Medical Center—and has an annual budget of over 6 billion pesos. The IMSS and other public health
institutions, including the ISSSTE (Public Sector Employees' Social Security Institute) and the National Health Ministry
(SSA) maintain large specialty facilities in the city. These include the National Institutes of Cardiology, Nutrition,
Psychiatry, Oncology, Pediatrics, Rehabilitation, among others. The World Bank has sponsored a project to curb air
pollution through public transport improvements and the Mexican government has started shutting down polluting factories.
They have phased out diesel buses and mandated new emission controls on new cars; since 1993 all new cars must be
fitted with a catalytic converter, which reduces the emissions released. Trucks must use only liquefied petroleum
gas (LPG). Construction of an underground rail system also began in 1968 to help curb air pollution and alleviate traffic congestion. Today it has over 201 km (125 mi) of track and carries over 5 million
people every day. Fares are kept low to encourage use of the system, and during rush hours the crush is so great that authorities have reserved special carriages specifically for women. Due to these initiatives and others, the air
quality in Mexico City has begun to improve, with the air becoming cleaner since 1991, when the air quality was declared
to be a public health risk for 355 days of the year.[citation needed] Mexico City is one of the most important economic
hubs in Latin America. The city proper (Federal District) produces 15.8% of the country's gross domestic product.
According to a study conducted by PwC, Mexico City had a GDP of $390 billion, ranking it as the eighth richest city
in the world after the greater metropolitan areas of Tokyo, New York City, Los Angeles, Chicago, Paris, London and
Osaka/Kobe (and the richest in the whole of Latin America). Taken on its own, excluding the rest of the Mexican economy, Mexico City would rank as the 30th-largest economy in the world. Mexico City is the greatest contributor to the country's
industrial GDP (15.8%) and also the greatest contributor to the country's GDP in the service sector (25.3%). Due
to the limited non-urbanized space at the south—most of which is protected through environmental laws—the contribution
of the Federal District in agriculture is the smallest of all federal entities in the country. Mexico City has one
of the world's fastest-growing economies and its GDP is set to double by 2020. The economic reforms of President
Carlos Salinas de Gortari had a tremendous effect on the city, as a number of businesses, including banks and airlines,
were privatized. He also signed the North American Free Trade Agreement (NAFTA). This led to decentralization and
a shift in Mexico City's economic base, from manufacturing to services, as most factories moved away to either the
State of Mexico, or more commonly to the northern border. By contrast, corporate office buildings set their base
in the city. Historically, and since pre-Hispanic times, the Valley of Anahuac has been one of the most densely populated
areas in Mexico. When the Federal District was created in 1824, the urban area of Mexico City extended approximately
to the area of today's Cuauhtémoc borough. At the beginning of the 20th century, the elites began migrating to the
south and west and soon the small towns of Mixcoac and San Ángel were incorporated by the growing conurbation. According
to the 1921 census, 54.78% of the city's population was considered Mestizo (Indigenous mixed with European), 22.79%
considered European, and 18.74% considered Indigenous. This was the last Mexican census that asked people to self-identify with a heritage other than Amerindian. Unlike racial or ethnic censuses in other countries, however, it focused on the perception of cultural heritage rather than on race, leading a good number of white people to identify with "mixed heritage" because of cultural influence. In 1921, Mexico
City had less than one million inhabitants. Up to the 1990s, the Federal District was the most populous federal entity
in Mexico, but since then its population has remained stable at around 8.7 million. The growth of the city has extended
beyond the limits of the Federal District to 59 municipalities of the state of Mexico and 1 in the state of Hidalgo.
With a population of approximately 19.8 million inhabitants (2008), it is one of the most populous conurbations in
the world. Nonetheless, the annual rate of growth of the Metropolitan Area of Mexico City is much lower than that
of other large urban agglomerations in Mexico, a phenomenon most likely attributable to the environmental policy
of decentralization. The net migration rate of the Federal District from 1995 to 2000 was negative. On the other
hand, Mexico City is also home to large communities of expatriates and immigrants, most notably from the rest of
North America (U.S. and Canada), from South America (mainly from Argentina and Colombia, but also from Brazil, Chile,
Uruguay and Venezuela), from Central America and the Caribbean (mainly from Cuba, Guatemala, El Salvador, Haiti and
Honduras); from Europe (mainly from Spain, Germany and Switzerland, but also from Czech Republic, Hungary, France,
Italy, Ireland, the Netherlands, Poland and Romania), from the Middle East (mainly from Egypt, Lebanon and Syria);
and recently from Asia-Pacific (mainly from China and South Korea). Historically since the era of New Spain, many
Filipinos settled in the city and have become integrated in Mexican society. While no official figures have been
reported, population estimates of each of these communities are quite significant. The Historic Center of Mexico
City (Centro Histórico) and the "floating gardens" of Xochimilco in the southern borough have been declared World
Heritage Sites by UNESCO. Famous landmarks in the Historic Center include the Plaza de la Constitución (Zócalo),
the main central square with its epoch-contrasting Spanish-era Metropolitan Cathedral and National Palace, ancient
Aztec temple ruins Templo Mayor ("Major Temple") and modern structures, all within a few steps of one another. (The
Templo Mayor was discovered in 1978 while workers were digging to place underground electric cables). The most recognizable
icon of Mexico City is the golden Angel of Independence on the wide, elegant avenue Paseo de la Reforma, modeled
by the order of the Emperor Maximilian of Mexico after the Champs-Élysées in Paris. The avenue was laid out in the 19th century over the Americas' oldest known major roadway to connect the National Palace (seat of government)
with the Castle of Chapultepec, the imperial residence. Today, this avenue is an important financial district in
which the Mexican Stock Exchange and several corporate headquarters are located. Another important avenue is the
Avenida de los Insurgentes, which extends 28.8 km (17.9 mi) and is one of the longest single avenues in the world.
Chapultepec Park houses Chapultepec Castle, now a museum on a hill overlooking the park, as well as the park's numerous museums, monuments, the national zoo, and the National Museum of Anthropology (which houses the Aztec Calendar Stone). Another
piece of architecture is the Fine Arts Palace, a white marble theatre/museum whose weight is such that it has gradually
been sinking into the soft ground below. Its construction began during the presidency of Porfirio Díaz and ended
in 1934, after being interrupted by the Mexican Revolution in the 1920s. The Plaza of the Three Cultures in the Tlatelolco
neighbourhood, and the shrine and Basilicas of Our Lady of Guadalupe are also important sites. There is a double-decker
bus, known as the "Turibus", that circles most of these sites, and has timed audio describing the sites in multiple
languages as they are passed. In addition, the city has about 160 museums—the world's greatest single metropolitan concentration—over 100 art galleries, and some 30 concert halls, all of which sustain cultural activity throughout the year. It has either the third- or fourth-highest number of theatres in the world after New York,
London and perhaps Toronto. Many areas (e.g. Palacio Nacional and the National Institute of Cardiology) have murals
painted by Diego Rivera. He and his wife Frida Kahlo lived in Coyoacán, where several of their homes, studios, and
art collections are open to the public. The house where Leon Trotsky was initially granted asylum and finally murdered
in 1940 is also in Coyoacán. Mexico City is served by the Sistema de Transporte Colectivo, a 225.9 km (140 mi) metro
system, which is the largest in Latin America. The first portions were opened in 1969 and it has expanded to 12 lines
with 195 stations. The metro is one of the busiest in the world transporting approximately 4.5 million people every
day, surpassed only by subway lines in Moscow (7.5 million), Tokyo (5.9 million), and New York City (5.1 million).
It is heavily subsidized, and has some of the lowest fares in the world, each trip costing 5.00 pesos from 05:00
am to midnight. Several stations display pre-Columbian artifacts and architecture that were discovered during the
metro's construction. However, the metro covers less than half of the total urban area. The Metro
stations are also distinguished by icons and glyphs, originally introduced so that people who could not read could navigate the system. The icons were developed from historical references (characters, sites, pre-Hispanic motifs), linguistic references, symbolic references (glyphs), or location references, and the scheme has since been emulated by other transport systems in the city and in other Mexican cities. Mexico City is the only city in the world to use this icon system, which has become a popular-culture trademark of the city. The city's first bus rapid transit line, the Metrobús, began operation in June 2005,
along Avenida Insurgentes. Line 2 opened in December 2008, serving Eje 4 Sur, line 3 opened in February 2011, serving
Eje 1 Poniente, and line 4 opened in April 2012 connecting the airport with San Lázaro and Buenavista Station at
Insurgentes. As the microbuses were removed from its route, it was hoped that the Metrobús could reduce pollution
and decrease transit time for passengers. In June 2013, Mexico City's mayor announced two more lines to come: Line
5 serving Eje 3 Oriente and Line 6 serving Eje 5 Norte. As of June 2013, 367 Metrobús buses transported 850,000 passengers
daily. In the late 1970s many arterial roads were redesigned as ejes viales: high-volume one-way roads that cross,
in theory, Mexico City proper from side to side. The eje vial network is based on a quasi-Cartesian grid, with the
ejes themselves being called Eje 1 Poniente, Eje Central, and Eje 1 Oriente, for example, for the north-south roads,
and Eje 2 Sur and Eje 3 Norte, for example, for east-west roads. Ring roads include the Circuito Interior (inner ring) and the Anillo Periférico; the Circuito Exterior Mexiquense ("State of Mexico outer loop") toll road skirting the northeastern
and eastern edges of the metropolitan area, the Chamapa-La Venta toll road skirting the northwestern edge, and the
Arco Norte completely bypassing the metropolitan area in an arc from northwest (Atlacomulco) to north (Tula, Hidalgo)
to east (Puebla). A second level (where tolls are charged) of the Periférico, colloquially called the segundo piso
("second floor"), was officially opened in 2012, with sections still being completed. The Viaducto Miguel Alemán
crosses the city east-west from Observatorio to the airport. In 2013 the Supervía Poniente opened, a toll road linking
the new Santa Fe business district with southwestern Mexico City. There is an environmental program, called Hoy No
Circula ("Today Does Not Run", or "One Day without a Car"), whereby vehicles that have not passed emissions testing
are restricted from circulating on certain days according to the ending digit of their license plates, in an attempt to cut down on pollution and traffic congestion. While in 2003 the program still restricted 40% of vehicles in the metropolitan area, the adoption of stricter emissions standards in 2001 and 2006 means that in practice most vehicles are now exempt from the circulation restrictions as long as they pass regular emissions tests. Street
parking in urban neighborhoods is mostly controlled by the franeleros a.k.a. "viene vienes" (lit. "come on, come
on"), who ask drivers for a fee to park, in theory to guard the car, but with the implicit threat that the franelero
will damage the car if the fee is not paid. Double parking is common (with franeleros moving the cars as required),
blocking lanes that traffic would otherwise use. To mitigate these problems and to raise revenue,
721 parking meters (as of October 2013), have been installed in the west-central neighborhoods Lomas de Chapultepec,
Condesa, Roma, Polanco and Anzures, in operation from 8 AM to 8 PM on weekdays and charging a rate of 2 pesos per
15 minutes, with offenders' cars booted at a cost of about 500 pesos to remove. Thirty percent of the monthly 16 million-peso
(as of October 2013) income from the parking-meter system (named "ecoParq") is earmarked for neighborhood improvements.
The granting of the license for all zones exclusively to a new company without experience in operating parking meters,
Operadora de Estacionamientos Bicentenario, has generated controversy. The local government continuously strives
for a reduction of massive traffic congestion, and has increased incentives for making a bicycle-friendly city. This
includes North America's second-largest bicycle sharing system, EcoBici, launched in 2010, in which registered residents
can get bicycles for 45 minutes with a pre-paid subscription of 300 pesos a year. There are, as of September 2013,
276 stations with 4,000 bicycles across an area stretching from the Historic center to Polanco. Stations are within 300 metres (980 feet) of one another and are fully automatic, operated with a transponder-based card. Bicycle-service users have access
to several permanent Ciclovías (dedicated bike paths/lanes/streets), including ones along Paseo de la Reforma and
Avenida Chapultepec as well as one running 59 kilometres (37 miles) from Polanco to Fierro del Toro, which is located
south of Cumbres del Ajusco National Park, near the Morelos state line. The city's initiative is inspired by forward-thinking examples such as Denmark's Copenhagenization. Mexico City is served by Mexico City International Airport
(IATA Airport Code: MEX). This airport is Latin America's second busiest and one of the world's largest by traffic, with daily flights to the United States and Canada, mainland Mexico, Central America and the Caribbean, South America, Europe and Asia. Aeroméxico (SkyTeam) is based at this airport and maintains codeshare agreements with non-Mexican airlines
that span the entire globe. In 2014, the airport handled well over 34 million passengers, just over 2 million more
than the year before. This traffic exceeds the current capacity of the airport, which has historically centralized
the majority of air traffic in the country. An alternate option is Lic. Adolfo López Mateos International Airport
(IATA Airport Code: TLC) in nearby Toluca, State of Mexico, although due to several airlines' decisions to terminate
service to TLC, the airport has seen a passenger drop to just over 700,000 passengers in 2014 from over 2.1 million
passengers just four years prior. In the Mexico City airport, the government engaged in an extensive restructuring
program that includes the addition of a new second terminal, which began operations in 2007, and the enlargement
of four other airports (at the nearby cities of Toluca, Querétaro, Puebla and Cuernavaca) that, along with Mexico
City's airport, comprise the Grupo Aeroportuario del Valle de México, distributing traffic to different regions in
Mexico. The city of Pachuca will also provide additional expansion to central Mexico's airport network. Mexico City's
airport is the main hub for 11 of the 21 national airline companies. During his annual state-of-the-nation address
on September 2, 2014, President of Mexico Enrique Peña Nieto unveiled plans for a new international airport to ease
the city's notorious air traffic congestion, tentatively slated for a 2018 opening. The new airport, which would
have six runways, will cost $9.15 billion and would be built on vacant federal land east of Mexico City International
Airport. Goals are to eventually handle 120 million passengers a year, which would make it the busiest airport in
the world. Having been the capital of a vast pre-Hispanic empire, and also the capital of the richest viceroyalty within
the Spanish Empire (ruling over a vast territory in the Americas and Spanish West Indies), and, finally, the capital
of the United Mexican States, Mexico City has a rich history of artistic expression. Since the Mesoamerican pre-Classical
period the inhabitants of the settlements around Lake Texcoco produced many works of art and complex craftsmanship,
some of which are today displayed at the world-renowned National Museum of Anthropology and the Templo Mayor museum.
While many pieces of pottery and stone-engraving have survived, the great majority of the Amerindian iconography
was destroyed during the Conquest of Mexico. Much of the early colonial art stemmed from the codices (Aztec illustrated
books), aiming to recover and preserve some Aztec and other Amerindian iconography and history. From then, artistic
expressions in Mexico were mostly religious in theme. The Metropolitan Cathedral still displays works by Juan de
Rojas, Juan Correa and an oil painting whose authorship has been attributed to Murillo. Secular works of art of this
period include the equestrian sculpture of Charles IV of Spain, locally known as El Caballito ("The little horse").
This piece, in bronze, was the work of Manuel Tolsá and it has been placed at the Plaza Tolsá, in front of the Palacio
de Minería (Mining Palace). Directly in front of this building is the beautiful Museo Nacional de Arte (Munal) (the
National Museum of Art). During the 19th century, an important producer of art was the Academia de San Carlos (San
Carlos Art Academy), founded during colonial times, and which later became the Escuela Nacional de Artes Plásticas
(the National School of Arts), one of UNAM's art schools, teaching painting, sculpture and graphic design. Many of
the works produced by the students and faculty of that time are now displayed in the Museo Nacional de San Carlos
(National Museum of San Carlos). One of the students, José María Velasco, is considered one of the greatest Mexican
landscape painters of the 19th century. Porfirio Díaz's regime sponsored the arts, especially those that followed the
French school. Popular arts in the form of cartoons and illustrations flourished, e.g. those of José Guadalupe Posada
and Manuel Manilla. The permanent collection of the San Carlos Museum also includes paintings by European masters
such as Rembrandt, Velázquez, Murillo, and Rubens. During the 20th century, many artists immigrated to Mexico City
from different regions of Mexico, such as Leopoldo Méndez, an engraver from Veracruz, who supported the creation
of the socialist Taller de la Gráfica Popular (Popular Graphics Workshop), designed to help blue-collar workers find
a venue to express their art. Other painters came from abroad, such as Catalan painter Remedios Varo and other Spanish
and Jewish exiles. It was in the second half of the 20th century that the artistic movement began to drift apart
from the Revolutionary theme. José Luis Cuevas opted for a modernist style in contrast to the muralist movement associated
with social politics. Mexico City has numerous museums dedicated to art, including Mexican colonial, modern and contemporary
art, and international art. The Museo Tamayo was opened in the mid-1980s to house the collection of international
contemporary art donated by famed Mexican (born in the state of Oaxaca) painter Rufino Tamayo. The collection includes
pieces by Picasso, Klee, Kandinsky, Warhol and many others, though most of the collection is stored while visiting
exhibits are shown. The Museo de Arte Moderno (Museum of Modern Art) is a repository of Mexican artists from the
20th century, including Rivera, Orozco, Siqueiros, Kahlo, Gerzso, Carrington, Tamayo, among others, and also regularly
hosts temporary exhibits of international modern art. In southern Mexico City, the Museo Carrillo Gil (Carrillo Gil
Museum) showcases avant-garde artists, as does the University Museum/Contemporary Art (Museo Universitario Arte Contemporáneo
– or MUAC), designed by famed Mexican architect Teodoro González de León, inaugurated in late 2008. The Museo Soumaya,
named after the wife of Mexican magnate Carlos Slim, has the largest private collection of original Rodin sculptures
outside Paris. It also has a large collection of Dalí sculptures, and recently began showing pieces in its masters
collection including El Greco, Velázquez, Picasso and Canaletto. The museum inaugurated a new futuristic-design facility
in 2011 just north of Polanco, while maintaining a smaller facility in Plaza Loreto in southern Mexico City. The
Colección Júmex is a contemporary art museum located on the sprawling grounds of the Jumex juice company in the northern
industrial suburb of Ecatepec. It is said to have the largest private contemporary art collection in Latin America
and hosts pieces from its permanent collection as well as traveling exhibits by leading contemporary artists. The
new Museo Júmex in Nuevo Polanco was slated to open in November 2013. The Museo de San Ildefonso, housed in the Antiguo
Colegio de San Ildefonso in Mexico City's historic downtown district, is a 17th-century colonnaded palace housing
an art museum that regularly hosts world-class exhibits of Mexican and international art. Recent exhibits have included
those on David LaChapelle, Antony Gormley and Ron Mueck. The National Museum of Art (Museo Nacional de Arte) is also
located in a former palace in the historic center. It houses a large collection of pieces by all major Mexican artists
of the last 400 years and also hosts visiting exhibits. Another major addition to the city's museum scene is the
Museum of Remembrance and Tolerance (Museo de la Memoria y Tolerancia), inaugurated in early 2011. The brainchild
of two young Mexican women as a Holocaust museum, the idea morphed into a unique museum dedicated to showcasing all
major historical events of discrimination and genocide. Permanent exhibits include those on the Holocaust and other
large-scale atrocities. It also houses temporary exhibits; one on Tibet was inaugurated by the Dalai Lama in September
2011. Mexico City is home to a number of orchestras offering season programs. These include the Mexico City Philharmonic,
which performs at the Sala Ollin Yoliztli; the National Symphony Orchestra, whose home base is the Palacio de Bellas
Artes (Palace of the Fine Arts), a masterpiece of art nouveau and art decó styles; the Philharmonic Orchestra of
the National Autonomous University of Mexico (OFUNAM), and the Minería Symphony Orchestra, both of which perform
at the Sala Nezahualcóyotl, which was the first wrap-around concert hall in the Western Hemisphere when inaugurated
in 1976. There are also many smaller ensembles that enrich the city's musical scene, including the Carlos Chávez
Youth Symphony, the New World Orchestra (Orquesta del Nuevo Mundo), the National Polytechnical Symphony and the Bellas
Artes Chamber Orchestra (Orquesta de Cámara de Bellas Artes). The city is also a leading center of popular culture
and music. There are a multitude of venues hosting Spanish and foreign-language performers. These include the 10,000-seat
National Auditorium, which regularly schedules Spanish- and English-language pop and rock artists as well as many of the world's leading performing arts ensembles; the auditorium also broadcasts Grand Opera performances from New York's Metropolitan Opera on giant high-definition screens. In 2007 the National Auditorium was selected as the world's best venue by multiple genre media. Other popular sites for pop-artist performances include the 3,000-seat Teatro Metropolitan,
the 15,000-seat Palacio de los Deportes, and the larger 50,000-seat Foro Sol Stadium, where popular international
artists perform on a regular basis. The Cirque du Soleil has held several seasons at the Carpa Santa Fe, in the Santa
Fe district in the western part of the city. There are numerous venues for smaller musical ensembles and solo performers.
These include the Hard Rock Live, Bataclán, Foro Scotiabank, Lunario, Circo Volador and Voilá Acoustique. Recent
additions include the 20,000-seat Arena Ciudad de México, the 3,000-seat Pepsi Center World Trade Center, and the
2,500-seat Auditorio Blackberry. The Centro Nacional de las Artes (National Center for the Arts) has several venues for music, theatre, and dance. UNAM's main campus, also in the southern part of the city, is home to the Centro Cultural
Universitario (the University Culture Center) (CCU). The CCU also houses the National Library, the interactive Universum,
Museo de las Ciencias, the Sala Nezahualcóyotl concert hall, several theatres and cinemas, and the new University
Museum of Contemporary Art (MUAC). A branch of the National University's CCU cultural center was inaugurated in 2007
in the facilities of the former Ministry of Foreign Affairs, known as Tlatelolco, in north-central Mexico City. The
Papalote children's museum, which houses the world's largest dome screen, is located in the wooded park of Chapultepec,
near the Museo Tecnológico, and La Feria amusement park. The theme park Six Flags México (the largest amusement park
in Latin America) is located in the Ajusco neighborhood, in Tlalpan borough, southern Mexico City. During the winter,
the main square of the Zócalo is transformed into a gigantic ice skating rink, said to be the world's largest after that of Moscow's Red Square. The Cineteca Nacional (the Mexican Film Library), near the Coyoacán
suburb, shows a variety of films, and stages many film festivals, including the annual International Showcase, and
many smaller ones ranging from Scandinavian and Uruguayan cinema, to Jewish and LGBT-themed films. Cinépolis and
Cinemex, the two biggest film business chains, also have several film festivals throughout the year, with both national
and international movies. Mexico City tops the world in number of IMAX theatres, providing residents
and visitors access to films ranging from documentaries to popular blockbusters on these especially large, dramatic
screens. Mexico City offers a variety of cuisines. Restaurants specializing in the regional cuisines of Mexico's
31 states are available in the city. Also available are an array of international cuisines, including Canadian, French,
Italian, Croatian, Spanish (including many regional variations), Jewish, Lebanese, Chinese (again with regional variations),
Indian, Japanese, Korean, Thai, Vietnamese; and of course fellow Latin American cuisines such as Argentine, Brazilian,
and Peruvian. Haute, fusion, kosher, vegetarian and vegan cuisines are also available, as are restaurants solely
based on the concepts of local food and Slow Food. The city also has several branches of renowned international restaurants
and chefs. These include Paris' Au Pied de Cochon and Brasserie Lipp, Philippe (by Philippe Chow); Nobu, Morimoto;
Pámpano, owned by Mexican-raised opera legend Plácido Domingo. There are branches of the exclusive Japanese restaurant
Suntory, Rome's famed Alfredo, as well as New York steakhouses Morton's and The Palm, and Monte Carlo's BeefBar.
Three of the most famous Lima-based Haute Peruvian restaurants, La Mar, Segundo Muelle and Astrid y Gastón have locations
in Mexico City. Association football is the country's most popular and most televised franchised sport. Its important
venues in Mexico City include the Azteca Stadium, home to the Mexico national football team and giants América, which
can seat 91,653 fans, making it the biggest stadium in Latin America. The Olympic Stadium in Ciudad Universitaria
is home to the football club giants Universidad Nacional, with a seating capacity of over 52,000. The Estadio Azul,
which seats 33,042 fans, is near the World Trade Center Mexico City in the Nochebuena neighborhood, and is home to
the giants Cruz Azul. The three teams are based in Mexico City and play in the First Division; they are also part,
with Guadalajara-based giants Club Deportivo Guadalajara, of Mexico's traditional "Big Four" (though recent years
have tended to erode the teams' leading status at least in standings). The country hosted the FIFA World Cup in 1970
and 1986, and Azteca Stadium is the first stadium in World Cup history to host the final twice. Mexico City remains
the only Latin American city to host the Olympic Games, having held the Summer Olympics in 1968, winning bids against
Buenos Aires, Lyon and Detroit. (This distinction will end when Rio de Janeiro hosts the 2016 Summer Games.) The city hosted the
1955 and 1975 Pan American Games, the last after Santiago and São Paulo withdrew. The ICF Flatwater Racing World
Championships were hosted here in 1974 and 1994. Lucha libre is a Mexican style of wrestling, and is one of the more
popular sports throughout the country. The main venues in the city are Arena México and Arena Coliseo. The National
Autonomous University of Mexico (UNAM), located in Mexico City, is the largest university on the continent, with
more than 300,000 students from all backgrounds. Three Nobel laureates, several Mexican entrepreneurs and most of
Mexico's modern-day presidents are among its former students. UNAM conducts 50% of Mexico's scientific research and
has a presence all across the country through satellite campuses, observatories and research centres. UNAM ranked 74th
in the Top 200 World University Ranking published by Times Higher Education (then called Times Higher Education Supplement)
in 2006, making it the highest ranked Spanish-speaking university in the world. The sprawling main campus of the
university, known as Ciudad Universitaria, was named a World Heritage Site by UNESCO in 2007. The second largest
higher-education institution is the National Polytechnic Institute (IPN), which includes among many other relevant
centers the Centro de Investigación y de Estudios Avanzados (Cinvestav), where varied high-level scientific and technological
research is done. Other major higher-education institutions in the city include the Metropolitan Autonomous University
(UAM), the National School of Anthropology and History (ENAH), the Instituto Tecnológico Autónomo de México (ITAM),
the Monterrey Institute of Technology and Higher Education (3 campuses), the Universidad Panamericana (UP), the Universidad
La Salle, the Universidad del Valle de Mexico (UVM), the Universidad Anáhuac, Simon Bolivar University (USB), the
Alliant International University, the Universidad Iberoamericana, El Colegio de México (Colmex), Escuela Libre de
Derecho and the Centro de Investigación y Docencia Económica (CIDE). In addition, the prestigious University of
California maintains a campus known as "Casa de California" in the city. The Universidad Tecnológica de México is
also in Mexico City. Unlike schools in the Mexican states, the curricula of Mexico City's public schools are managed by the federal Secretariat of Public Education. Funding is allocated by the government of Mexico City (in some specific cases, such as El Colegio de México, funding comes from both the city's government and other public and private national and international entities). The city's public high school system is the Instituto
de Educación Media Superior del Distrito Federal (IEMS-DF). A special case is that of El Colegio Nacional, created
during the administration of Miguel Alemán Valdés to provide Mexico with an institution similar to the Collège de France. The select and privileged group of Mexican scientists and artists belonging to this institution—membership is for life—includes, among many others, Mario Lavista, Ruy Pérez Tamayo, José Emilio Pacheco, Marcos Moshinsky (d. 2009), and Guillermo Soberón Acevedo. Members are obligated to publicly present their work through conferences and public
events such as concerts and recitals. Mexico City is Latin America's leading center for the television, music and
film industries. It is also Mexico's most important center for the print media and book publishing industries. Dozens
of daily newspapers are published, including El Universal, Excélsior, Reforma and La Jornada. Other major papers
include Milenio, Crónica, El Economista and El Financiero. Leading magazines include Expansión, Proceso, Poder, as
well as dozens of entertainment publications such as Vanidades, Quién, Chilango, TV Notas, and local editions of
Vogue, GQ, and Architectural Digest. Mexico City offers an immense and varied consumer retail market, ranging from
basic foods to ultra high-end luxury goods. Consumers may buy in fixed indoor markets, mobile markets (tianguis),
from street vendors, from downtown shops in a street dedicated to a certain type of good, in convenience stores and
traditional neighborhood stores, in modern supermarkets, in warehouse and membership stores and the shopping centers
that they anchor, in department stores, big-box stores and in modern shopping malls. A staple for consumers in the
city is the omnipresent "mercado". Every major neighborhood in the city has its own borough-regulated market, often
more than one. These are large well-established facilities offering most basic products, such as fresh produce and
meat/poultry, dry goods, tortillerías, and many other services such as locksmiths, herbal medicine, hardware goods,
sewing implements; and a multitude of stands offering freshly made, home-style cooking and drinks in the tradition
of aguas frescas and atole. Street vendors ply their trade from stalls in the tianguis as well as at non-officially
controlled concentrations around metro stations and hospitals; at plazas comerciales, where vendors of a certain
"theme" (e.g. stationery) are housed; originally these were organized to accommodate vendors formerly selling on
the street; or simply from improvised stalls on a city sidewalk. In addition, food and goods are sold from people
walking with baskets, pushing carts, from bicycles or the backs of trucks, or simply from a tarp or cloth laid on
the ground. Mexico City has three zoos: Chapultepec Zoo, San Juan de Aragón Zoo and Los Coyotes Zoo. Chapultepec Zoo is located in the first section of Chapultepec Park in the Miguel Hidalgo borough. It was opened in 1924. Visitors can see about 243 specimens of different species, including kangaroos, giant pandas, gorillas, caracals, hyenas, hippos, jaguars, giraffes, lemurs and lions. San Juan de Aragón Zoo, near San Juan de Aragón Park in the Gustavo A. Madero borough, opened in 1964 and houses species in danger of extinction, such as the jaguar and the Mexican wolf. Other residents include the golden eagle, pronghorn, bighorn sheep, caracara, zebra, African elephant, macaw and hippopotamus. Los Coyotes Zoo is a 27.68-acre (11.2 ha) zoo south of Mexico City in the Coyoacán borough. It was inaugurated on February 2, 1999. It has more than 301 specimens of 51 species of wild fauna native or endemic to Mexico City, including eagles, axolotls, coyotes, macaws, bobcats, Mexican wolves, raccoons, mountain lions, teporingos, foxes and white-tailed deer. During Andrés Manuel López Obrador's administration a political slogan was
introduced: la Ciudad de la Esperanza ("The City of Hope"). This motto was quickly adopted as a city nickname, but
has faded since the new motto Capital en Movimiento ("Capital in Movement") was adopted by the administration headed
by Marcelo Ebrard, though the latter is less often treated as a nickname in the media. Since 2013, to refer to the
City particularly in relation to government campaigns, the abbreviation CDMX has been used (from Ciudad de México).
The city is colloquially known as Chilangolandia after the locals' nickname chilangos. Chilango is used pejoratively
by people living outside Mexico City to "connote a loud, arrogant, ill-mannered, loutish person". For their part
those living in Mexico City designate insultingly those who live elsewhere as living in la provincia ("the provinces",
the periphery) and many proudly embrace the term chilango. Residents of Mexico City are more recently called defeños
(deriving from the postal abbreviation of the Federal District in Spanish: D.F., which is read "De-Efe"). They are
formally called capitalinos (in reference to the city being the capital of the country), but "[p]erhaps because capitalino
is the more polite, specific, and correct word, it is almost never utilized". Between 2000 and 2004 an average of
478 crimes were reported each day in Mexico City; however, the actual crime rate is thought to be much higher "since
most people are reluctant to report crime". Under policies enacted by Mayor Marcelo Ebrard between 2009 and 2011,
Mexico City underwent a major security upgrade with violent and petty crime rates both falling significantly despite
the rise in violent crime in other parts of the country. Some of the policies enacted included the installation of
11,000 security cameras around the city and a very large expansion of the police force. Mexico City has one of the
world's highest police officer-to-resident ratios, with one uniformed officer per 100 citizens.
Napoléon Bonaparte (/nəˈpoʊliən, -ˈpoʊljən/; French: [napɔleɔ̃ bɔnapaʁt], born Napoleone di Buonaparte; 15 August 1769 –
5 May 1821) was a French military and political leader who rose to prominence during the French Revolution and led
several successful campaigns during the Revolutionary Wars. As Napoleon I, he was Emperor of the French from 1804
until 1814, and again in 1815. Napoleon dominated European and global affairs for more than a decade while leading
France against a series of coalitions in the Napoleonic Wars. He won most of these wars and the vast majority of
his battles, building a large empire that ruled over continental Europe before its final collapse in 1815. Often
considered one of the greatest commanders in history, his wars and campaigns are studied at military schools worldwide.
He also remains one of the most celebrated and controversial political figures in Western history. In civil affairs,
Napoleon had a major long-term impact by bringing liberal reforms to the territories that he conquered, especially
the Low Countries, Switzerland, and large parts of modern Italy and Germany. He implemented fundamental liberal policies
in France and throughout Western Europe.[note 1] His lasting legal achievement, the Napoleonic Code, has been adopted
in various forms by a quarter of the world's legal systems, from Japan to Quebec. Napoleon was born in Corsica to
a relatively modest family of noble Tuscan ancestry. Napoleon supported the French Revolution from the outset in
1789 while serving in the French army, and he tried to spread its ideals to Corsica but was banished from the island
in 1793. Two years later, he saved the French government from collapse by firing on the Parisian mobs with cannons.
The Directory rewarded Napoleon by giving him command of the Army of Italy at age 26, when he began his first military
campaign against the Austrians and their Italian allies, scoring a series of decisive victories that made him famous
all across Europe. He followed the defeat of the Allies in Europe by commanding a military expedition to Egypt in
1798, invading and occupying the Ottoman province after defeating the Mamelukes and launching modern Egyptology through
the discoveries made by his army. After returning from Egypt, Napoleon engineered a coup in November 1799 and became
First Consul of the Republic. Another victory over the Austrians at the Battle of Marengo in 1800 secured his political
power. With the Concordat of 1801, Napoleon restored the religious privileges of the Catholic Church while keeping
the lands seized by the Revolution. The state continued to nominate the bishops and to control church finances. He
extended his political control over France until the Senate declared him Emperor of the French in 1804, launching
the French Empire. Intractable differences with the British meant that the French were facing a Third Coalition by
1805. Napoleon shattered this coalition with decisive victories in the Ulm Campaign and a historic triumph at the
Battle of Austerlitz, which led to the elimination of the Holy Roman Empire. In October 1805, however, a Franco-Spanish
fleet was destroyed at the Battle of Trafalgar, allowing Britain to impose a naval blockade of the French coasts.
In retaliation, Napoleon established the Continental System in 1806 to cut off continental trade with Britain. The
Fourth Coalition took up arms against him the same year because Prussia became worried about growing French influence
on the continent. Napoleon knocked out Prussia at the battles of Jena and Auerstedt, then turned his attention towards
the Russians and annihilated them in June 1807 at Friedland, which forced the Russians to accept the Treaties of
Tilsit. Hoping to extend the Continental System, Napoleon invaded Iberia and declared his brother Joseph the King
of Spain in 1808. The Spanish and the Portuguese revolted with British support. The Peninsular War lasted six years,
noted for its brutal guerrilla warfare, and culminated in an Allied victory. Fighting also erupted in Central Europe,
as the Austrians launched another attack against the French in 1809. Napoleon defeated them at the Battle of Wagram,
dissolving the Fifth Coalition formed against France. By 1811, Napoleon ruled over 70 million people across an empire
that dominated Europe, which had not witnessed this level of political consolidation since the days of the
Roman Empire. He maintained his strategic position through a series of alliances and family appointments. He created
a new aristocracy in France while allowing the return of nobles who had been forced into exile by the Revolution.
Tensions over rising Polish nationalism and the economic effects of the Continental System led to renewed confrontation
with Russia. To enforce his blockade, Napoleon launched an invasion of Russia in the summer of 1812. The resulting
campaign witnessed the catastrophic collapse of the Grand Army, forcing the French to retreat, as well as leading
to the widespread destruction of Russian lands and cities. In 1813, Prussia and Austria joined Russian forces in
a Sixth Coalition against France. A chaotic military campaign in Central Europe eventually culminated in a large
Allied army defeating Napoleon at the Battle of Leipzig in October. The next year, the Allies invaded France and
captured Paris, forcing Napoleon to abdicate in April 1814. He was exiled to the island of Elba. The Bourbons were
restored to power and the French lost most of the territories that they had conquered since the Revolution. However,
Napoleon escaped from Elba in February 1815 and took control of the government once again. The Allies responded by
forming a Seventh Coalition, which ultimately defeated Napoleon at the Battle of Waterloo in June. The Royal Navy
then thwarted his planned escape to the United States in July, so he surrendered to the British after running out
of other options. The British exiled him to the remote island of Saint Helena in the South Atlantic. His death in
1821 at the age of 51 was received with shock and grief throughout Europe. In 1840, a million people witnessed his
remains returning to Paris, where they still reside at Les Invalides. Napoleon was born on 15 August 1769, to Carlo
Maria di Buonaparte and Maria Letizia Ramolino, in his family's ancestral home Casa Buonaparte in Ajaccio, the capital
of the island of Corsica. He was their fourth child and third son. This was a year after the island was transferred
to France by the Republic of Genoa. He was christened Napoleone di Buonaparte, probably named after an uncle (an
older brother who did not survive infancy was the first of the sons to be called Napoleone). In his 20s, he adopted
the more French-sounding Napoléon Bonaparte.[note 2] Napoleon's noble, moderately affluent background afforded him
greater opportunities to study than were available to a typical Corsican of the time. In January 1779, he was enrolled
at a religious school in Autun. In May, he was admitted to a military academy at Brienne-le-Château. His first language
was Corsican, and he always spoke French with a marked Corsican accent and never learned to spell French properly.
He was teased by other students for his accent and applied himself to reading. An examiner observed that Napoleon
"has always been distinguished for his application in mathematics. He is fairly well acquainted with history and
geography... This boy would make an excellent sailor."[note 3] Upon graduating in September 1785, Bonaparte was commissioned
a second lieutenant in the La Fère artillery regiment.[note 4] He served in Valence and Auxonne until after the outbreak
of the Revolution in 1789, and took nearly two years' leave in Corsica and Paris during this period. At this time,
he was a fervent Corsican nationalist, and wrote to Corsican leader Pasquale Paoli in May 1789, "As the nation was
perishing I was born. Thirty thousand Frenchmen were vomited on to our shores, drowning the throne of liberty in
waves of blood. Such was the odious sight which was the first to strike me." Some contemporaries alleged that Bonaparte
was put under house arrest at Nice for his association with the Robespierres following their fall in the Thermidorian
Reaction in July 1794, but Napoleon's secretary Bourrienne disputed the allegation in his memoirs. According to Bourrienne,
rivalry between the Army of the Alps and the Army of Italy (to which Napoleon was seconded at the time) was responsible.
Bonaparte dispatched an impassioned defense in a letter to the commissar Salicetti, and he was subsequently
acquitted of any wrongdoing. By 1795, Bonaparte had become engaged to Désirée Clary, daughter of François Clary.
Désirée's sister Julie Clary had married Bonaparte's elder brother Joseph. In April 1795, he was assigned to the
Army of the West, which was engaged in the War in the Vendée—a civil war and royalist counter-revolution in Vendée,
a region in west central France on the Atlantic Ocean. As an infantry command, it was a demotion from artillery general—for
which the army already had a full quota—and he pleaded poor health to avoid the posting. He was moved to the Bureau
of Topography of the Committee of Public Safety and sought unsuccessfully to be transferred to Constantinople in
order to offer his services to the Sultan. During this period, he wrote the romantic novella Clisson et Eugénie,
about a soldier and his lover, in a clear parallel to Bonaparte's own relationship with Désirée. On 15 September,
Bonaparte was removed from the list of generals in regular service for his refusal to serve in the Vendée campaign.
He faced a difficult financial situation and reduced career prospects. Two days after his marriage to Joséphine de
Beauharnais in March 1796, Bonaparte left Paris to take command of the Army of Italy. He immediately went on the
offensive, hoping to defeat the forces of
Piedmont before their Austrian allies could intervene. In a series of rapid victories during the Montenotte Campaign,
he knocked Piedmont out of the war in two weeks. The French then focused on the Austrians for the remainder of the
war, the highlight of which became the protracted struggle for Mantua. The Austrians launched a series of offensives
against the French to break the siege, but Napoleon defeated every relief effort, scoring notable victories at the
battles of Castiglione, Bassano, Arcole, and Rivoli. The decisive French triumph at Rivoli in January 1797 led to
the collapse of the Austrian position in Italy. At Rivoli, the Austrians lost up to 14,000 men while the French lost
about 5,000. The next phase of the campaign featured the French invasion of the Habsburg heartlands. French forces
in Southern Germany had been defeated by the Archduke Charles in 1796, but the Archduke withdrew his forces to protect
Vienna after learning about Napoleon's assault. In the first notable encounter between the two commanders, Napoleon
pushed back his opponent and advanced deep into Austrian territory after winning at the Battle of Tarvis in March
1797. The Austrians were alarmed by the French thrust that reached all the way to Leoben, about 100 km from Vienna,
and finally decided to sue for peace. The Treaty of Leoben, followed by the more comprehensive Treaty of Campo Formio,
gave France control of most of northern Italy and the Low Countries, and a secret clause promised the Republic of
Venice to Austria. Bonaparte marched on Venice and forced its surrender, ending 1,100 years of independence. He also
authorized the French to loot treasures such as the Horses of Saint Mark. Bonaparte could win battles by concealment
of troop deployments and concentration of his forces on the 'hinge' of an enemy's weakened front. If he could not
use his favourite envelopment strategy, he would take up the central position and attack two co-operating forces
at their hinge, swing round to fight one until it fled, then turn to face the other. In this Italian campaign, Bonaparte's
army captured 150,000 prisoners, 540 cannons, and 170 standards. The French army fought 67 actions and won 18 pitched
battles through superior artillery technology and Bonaparte's tactics. During the campaign, Bonaparte became increasingly
influential in French politics. He founded two newspapers: one for the troops in his army and another for circulation
in France. The royalists attacked Bonaparte for looting Italy and warned that he might become a dictator. All told,
Napoleon's forces extracted an estimated $45 million in funds from Italy during their campaign there, another $12
million in precious metals and jewels; on top of that, his forces confiscated more than three hundred priceless paintings
and sculptures. Bonaparte sent General Pierre Augereau to Paris to lead a coup d'état and purge the royalists on
4 September—Coup of 18 Fructidor. This left Barras and his Republican allies in control again but dependent on Bonaparte,
who proceeded to peace negotiations with Austria. These negotiations resulted in the Treaty of Campo Formio, and
Bonaparte returned to Paris in December as a hero. He met Talleyrand, France's new Foreign Minister—who later served
in the same capacity for Emperor Napoleon—and they began to prepare for an invasion of Britain. General Bonaparte
and his expedition eluded pursuit by the Royal Navy and landed at Alexandria on 1 July. He fought the Battle of Shubra
Khit against the Mamluks, Egypt's ruling military caste. This helped the French practice their defensive tactic for
the Battle of the Pyramids, fought on 21 July, about 24 km (15 mi) from the pyramids. General Bonaparte's forces
of 25,000 roughly equalled those of the Mamluks' Egyptian cavalry. Twenty-nine French and approximately 2,000 Egyptians
were killed. The victory boosted the morale of the French army. On 1 August, the British fleet under Horatio Nelson
captured or destroyed all but two French vessels in the Battle of the Nile, defeating Bonaparte's goal to strengthen
the French position in the Mediterranean. His army had succeeded in a temporary increase of French power in Egypt,
though it faced repeated uprisings. In early 1799, he moved an army into the Ottoman province of Damascus (Syria
and Galilee). Bonaparte led these 13,000 French soldiers in the conquest of the coastal towns of Arish, Gaza, Jaffa,
and Haifa. The attack on Jaffa was particularly brutal. Bonaparte discovered that many of the defenders were former
prisoners of war, ostensibly on parole, so he ordered the garrison and 1,400 prisoners to be executed by bayonet
or drowning to save bullets. Men, women, and children were robbed and murdered for three days. Bonaparte began with
an army of 13,000 men; 1,500 were reported missing, 1,200 died in combat, and thousands perished from disease—mostly
bubonic plague. He failed to reduce the fortress of Acre, so he marched his army back to Egypt in May. To speed up
the retreat, Bonaparte ordered plague-stricken men to be poisoned with opium; the number who died remains disputed,
ranging from a low of 30 to a high of 580. He also brought out 1,000 wounded men. Back in Egypt on 25 July, Bonaparte
defeated an Ottoman amphibious invasion at Abukir. Despite the failures in Egypt, Napoleon returned to a hero's welcome.
He drew together an alliance with director Emmanuel Joseph Sieyès, his brother Lucien, speaker of the Council of
Five Hundred Roger Ducos, director Joseph Fouché, and Talleyrand, and they overthrew the Directory by a coup d'état
on 9 November 1799 ("the 18th Brumaire" according to the revolutionary calendar), closing down the council of five
hundred. Napoleon became "first consul" for ten years, with two consuls appointed by him who had consultative voices
only. His power was confirmed by the new "Constitution of the Year VIII", originally devised by Sieyès to give Napoleon
a minor role, but rewritten by Napoleon, and accepted by direct popular vote (3,000,000 in favor, 1,567 opposed).
The constitution preserved the appearance of a republic but in reality established a dictatorship. Napoleon established
a political system that historian Martyn Lyons called "dictatorship by plebiscite." Worried by the democratic forces
unleashed by the Revolution, but unwilling to ignore them entirely, Napoleon resorted to regular electoral consultations
with the French people on his road to imperial power. He drafted the Constitution of the Year VIII and secured his
own election as First Consul, taking up residence at the Tuileries. The constitution was approved in a rigged plebiscite
held the following January, with 99.94 percent officially listed as voting "yes." Napoleon's brother, Lucien, had
falsified the returns to show that 3 million people had participated in the plebiscite; the real number was 1.5 million.
Political observers at the time assumed the eligible French voting public numbered about 5 million people, so the
regime artificially doubled the participation rate to indicate popular enthusiasm for the Consulate. In the first
few months of the Consulate, with war in Europe still raging and internal instability still plaguing the country,
Napoleon's grip on power remained very tenuous. In the spring of 1800, Napoleon and his troops crossed the Swiss
Alps into Italy, aiming to surprise the Austrian armies that had reoccupied the peninsula when Napoleon was still
in Egypt.[note 5] After a difficult crossing over the Alps, the French army entered the plains of Northern Italy
virtually unopposed. While one French army approached from the north, the Austrians were busy with another stationed
in Genoa, which was besieged by a substantial force. The fierce resistance of this French army, under André Masséna,
gave the northern striking force precious time to carry out their operations with little interference. After spending
several days looking for each other, the two armies finally collided at the Battle of Marengo on 14 June. General
Melas had a numerical advantage, fielding about 30,000 Austrian soldiers while Napoleon commanded 24,000 French troops.
The battle began favorably for the Austrians as their initial attack surprised the French and gradually drove them
back. Melas concluded that he had won the battle and retired to his headquarters around 3 pm, leaving his subordinates
in charge of pursuing the French. However, the French lines never broke during their tactical retreat; Napoleon constantly
rode out among the troops urging them to stand and fight. Late in the afternoon, a full division under Desaix arrived
on the field and dramatically reversed the tide of the battle. A series of artillery barrages and fortunate cavalry
charges managed to decimate the Austrian army, which fled chaotically over the Bormida River back to Alessandria,
leaving behind 14,000 casualties. The following day, the Austrian army agreed to abandon Northern Italy once more
with the Convention of Alessandria, which granted them safe passage to friendly soil in exchange for their fortresses
throughout the region. Although critics have blamed Napoleon for several tactical mistakes preceding the battle,
they have also praised his audacity for selecting a risky campaign strategy, choosing to invade the Italian peninsula
from the north when the vast majority of French invasions came from the west, near or along the coastline. As Chandler
points out, Napoleon spent almost a year getting the Austrians out of Italy in his first campaign; in 1800, it took
him only a month to achieve the same goal. German strategist and field marshal Alfred von Schlieffen concluded that
"Bonaparte did not annihilate his enemy but eliminated him and rendered him harmless" while "[attaining] the object
of the campaign: the conquest of North Italy." Napoleon's triumph at Marengo secured his political authority and
boosted his popularity back home, but it did not lead to an immediate peace. Bonaparte's brother, Joseph, led the
complex negotiations in Lunéville and reported that Austria, emboldened by British support, would not acknowledge
the new territory that France had acquired. As negotiations became increasingly fractious, Bonaparte gave orders
to his general Moreau to strike Austria once more. Moreau and the French swept through Bavaria and scored an overwhelming
victory at Hohenlinden in December 1800. As a result, the Austrians capitulated and signed the Treaty of Lunéville
in February 1801. The treaty reaffirmed and expanded earlier French gains at Campo Formio. Britain now remained the
only nation that was still at war with France. After a decade of constant warfare, France and Britain signed the
Treaty of Amiens in March 1802, bringing the Revolutionary Wars to an end. Amiens called for the withdrawal of British
troops from recently conquered colonial territories as well as for assurances to curtail the expansionary goals of
the French Republic. With Europe at peace and the economy recovering, Napoleon's popularity soared to its highest
levels under the Consulate, both domestically and abroad. In a new plebiscite during the spring of 1802, the French
public came out in huge numbers to approve a constitution that made the Consulate permanent, essentially elevating
Napoleon to dictator for life. Whereas the plebiscite two years earlier had brought out 1.5 million people to the
polls, the new referendum enticed 3.6 million to go and vote (72% of all eligible voters). There was no secret ballot
in 1802 and few people wanted to openly defy the regime; the constitution gained approval with over 99% of the vote.
His broad powers were spelled out in the new constitution: Article 1. The French people name, and the Senate proclaims
Napoleon-Bonaparte First Consul for Life. After 1802, he was generally referred to as Napoleon rather than Bonaparte.
The brief peace in Europe allowed Napoleon to focus on the French colonies abroad. Saint-Domingue had managed to
acquire a high level of political autonomy during the Revolutionary Wars, with Toussaint Louverture installing himself
as de facto dictator by 1801. Napoleon saw his chance to recuperate the formerly wealthy colony when he signed the
Treaty of Amiens. During the Revolution, the National Convention voted to abolish slavery in February 1794. Under
the terms of Amiens, however, Napoleon agreed to appease British demands by not abolishing slavery in any colonies
where the 1794 decree had never been implemented. The resulting Law of 20 May never applied to colonies like Guadeloupe
or Guyane, even though rogue generals and other officials used the pretext of peace as an opportunity to reinstate
slavery in some of these places. The Law of 20 May officially restored the slave trade to the Caribbean colonies,
not slavery itself. Napoleon sent an expedition under General Leclerc designed to reassert control over Saint-Domingue.
Although the French managed to capture Toussaint Louverture, the expedition failed when high rates of disease crippled
the French army. In May 1803, the last 8,000 French troops left the island and the slaves proclaimed an independent
republic that they called Haïti in 1804. Seeing the failure of his colonial efforts, Napoleon decided in 1803 to
sell the Louisiana Territory to the United States, instantly doubling the size of the U.S. The selling price in the
Louisiana Purchase was less than three cents per acre, a total of $15 million. During the Consulate, Napoleon faced
several royalist and Jacobin assassination plots, including the Conspiration des poignards (Dagger plot) in October
1800 and the Plot of the Rue Saint-Nicaise (also known as the Infernal Machine) two months later. In January 1804,
his police uncovered an assassination plot against him that involved Moreau and which was ostensibly sponsored by
the Bourbon family, the former rulers of France. On the advice of Talleyrand, Napoleon ordered the kidnapping of
the Duke of Enghien, violating the sovereignty of Baden. The Duke was quickly executed after a secret military trial,
even though he had not been involved in the plot. Enghien's execution infuriated royal courts throughout Europe,
becoming one of the contributing political factors in the outbreak of the Napoleonic Wars. To expand his power, Napoleon
used these assassination plots to justify the creation of an imperial system based on the Roman model. He believed
that a Bourbon restoration would be more difficult if his family's succession was entrenched in the constitution.
Launching yet another referendum, Napoleon was elected as Emperor of the French by a tally exceeding 99%. As with
the Life Consulate two years earlier, this referendum produced heavy participation, bringing out almost 3.6 million
voters to the polls. Napoleon's coronation took place on 2 December 1804. Two separate crowns were brought for the
ceremony: a golden laurel wreath recalling the Roman Empire and a replica of Charlemagne's crown. Napoleon entered
the ceremony wearing the laurel wreath and kept it on his head throughout the proceedings. For the official coronation,
he raised the Charlemagne crown over his own head in a symbolic gesture, but never placed it on top because he was
already wearing the golden wreath. Instead he placed the crown on Josephine's head, the event commemorated in the
officially sanctioned painting by Jacques-Louis David. Napoleon was also crowned King of Italy, with the Iron Crown
of Lombardy, at the Cathedral of Milan on 26 May 1805. He created eighteen Marshals of the Empire from amongst his
top generals to secure the allegiance of the army. Before the formation of the Third Coalition, Napoleon had assembled
an invasion force, the Armée d'Angleterre, around six camps at Boulogne in Northern France. He intended to use this
invasion force to strike at England. They never invaded, but Napoleon's troops received careful and invaluable training
for future military operations. The men at Boulogne formed the core for what Napoleon later called La Grande Armée.
At the start, this French army had about 200,000 men organized into seven corps, which were large field units that
contained 36 to 40 cannons each and were capable of independent action until other corps could come to the rescue.
A single corps properly situated in a strong defensive position could survive at least a day without support, giving
the Grande Armée countless strategic and tactical options on every campaign. On top of these forces, Napoleon created
a cavalry reserve of 22,000 organized into two cuirassier divisions, four mounted dragoon divisions, one division
of dismounted dragoons, and one of light cavalry, all supported by 24 artillery pieces. By 1805, the Grande Armée
had grown to a force of 350,000 men, who were well equipped, well trained, and led by competent officers. Napoleon
knew that the French fleet could not defeat the Royal Navy in a head-to-head battle, so he planned to lure it away
from the English Channel through diversionary tactics. The main strategic idea involved the French Navy escaping
from the British blockades of Toulon and Brest and threatening to attack the West Indies. In the face of this attack,
it was hoped, the British would weaken their defense of the Western Approaches by sending ships to the Caribbean,
allowing a combined Franco-Spanish fleet to take control of the channel long enough for French armies to cross and
invade. However, the plan unraveled after the British victory at the Battle of Cape Finisterre in July 1805. French
Admiral Villeneuve then retreated to Cádiz instead of linking up with French naval forces at Brest for an attack
on the English Channel. By August 1805, Napoleon had realized that the strategic situation had changed fundamentally.
Facing a potential invasion from his continental enemies, he decided to strike first and turned his army's sights
from the English Channel to the Rhine. His basic objective was to destroy the isolated Austrian armies in Southern
Germany before their Russian allies could arrive. On 25 September, after great secrecy and feverish marching, 200,000
French troops began to cross the Rhine on a front of 260 km (160 mi). Austrian commander Karl Mack had gathered the
greater part of the Austrian army at the fortress of Ulm in Swabia. Napoleon swung his forces to the southeast and
the Grande Armée performed an elaborate wheeling movement that outflanked the Austrian positions. The Ulm Maneuver
completely surprised General Mack, who belatedly understood that his army had been cut off. After some minor engagements
that culminated in the Battle of Ulm, Mack finally surrendered after realizing that there was no way to break out
of the French encirclement. For just 2,000 French casualties, Napoleon had managed to capture a total of 60,000 Austrian
soldiers through his army's rapid marching. The Ulm Campaign is generally regarded as a strategic masterpiece and
was influential in the development of the Schlieffen Plan in the late 19th century. For the French, this spectacular
victory on land was soured by the decisive victory that the Royal Navy attained at the Battle of Trafalgar on 21
October. After Trafalgar, Britain had total domination of the seas for the duration of the Napoleonic Wars. Following
the Ulm Campaign, French forces managed to capture Vienna in November. The fall of Vienna provided the French a huge
bounty as they captured 100,000 muskets, 500 cannons, and the intact bridges across the Danube. At this critical
juncture, both Tsar Alexander I and Holy Roman Emperor Francis II decided to engage Napoleon in battle, despite reservations
from some of their subordinates. Napoleon sent his army north in pursuit of the Allies, but then ordered his forces
to retreat so that he could feign a grave weakness. Desperate to lure the Allies into battle, Napoleon gave every
indication in the days preceding the engagement that the French army was in a pitiful state, even abandoning the
dominant Pratzen Heights near the village of Austerlitz. At the Battle of Austerlitz, in Moravia on 2 December, he
deployed the French army below the Pratzen Heights and deliberately weakened his right flank, enticing the Allies
to launch a major assault there in the hopes of rolling up the whole French line. A forced march from Vienna by Marshal
Davout and his III Corps plugged the gap left by Napoleon just in time. Meanwhile, the heavy Allied deployment against
the French right weakened their center on the Pratzen Heights, which was viciously attacked by the IV Corps of Marshal
Soult. With the Allied center demolished, the French swept through both enemy flanks and sent the Allies fleeing
chaotically, capturing thousands of prisoners in the process. The battle is often seen as a tactical masterpiece
because of the near-perfect execution of a calibrated but dangerous plan, of the same stature as Cannae, the celebrated
triumph by Hannibal some 2,000 years before. The Allied disaster at Austerlitz significantly shook the faith of Emperor
Francis in the British-led war effort. France and Austria agreed to an armistice immediately and the Treaty of Pressburg
followed shortly after on 26 December. Pressburg took Austria out of both the war and the Coalition while reinforcing
the earlier treaties of Campo Formio and of Lunéville between the two powers. The treaty confirmed the Austrian loss
of lands to France in Italy and Bavaria, and lands in Germany to Napoleon's German allies. It also imposed an indemnity
of 40 million francs on the defeated Habsburgs and allowed the fleeing Russian troops free passage through hostile
territories and back to their home soil. Napoleon went on to say, "The battle of Austerlitz is the finest of all
I have fought." Frank McLynn suggests that Napoleon was so successful at Austerlitz that he lost touch with reality,
and what used to be French foreign policy became a "personal Napoleonic one". Vincent Cronin disagrees, stating that
Napoleon was not overly ambitious for himself, "he embodied the ambitions of thirty million Frenchmen". Napoleon
continued to entertain a grand scheme to establish a French presence in the Middle East in order to put pressure
on Britain and Russia, and perhaps form an alliance with the Ottoman Empire. In February 1806, Ottoman Emperor Selim
III finally recognized Napoleon as Emperor. He also opted for an alliance with France, calling France "our sincere
and natural ally." That decision brought the Ottoman Empire into a losing war against Russia and Britain. A Franco-Persian
alliance was also formed between Napoleon and the Persian Empire of Fat′h-Ali Shah Qajar. It collapsed in 1807, when
France and Russia themselves formed an unexpected alliance. In the end, Napoleon had made no effective alliances
in the Middle East. After Austerlitz, Napoleon established the Confederation of the Rhine in 1806. A collection of
German states intended to serve as a buffer zone between France and Central Europe, the creation of the Confederation
spelled the end of the Holy Roman Empire and significantly alarmed the Prussians. The brazen reorganization of German
territory by the French risked threatening Prussian influence in the region, if not eliminating it outright. War
fever in Berlin rose steadily throughout the summer of 1806. At the insistence of his court, especially his wife
Queen Louise, Frederick William III decided to challenge the French domination of Central Europe by going to war.
The initial military maneuvers began in September 1806. In a notable letter to Marshal Soult detailing the plan for
the campaign, Napoleon described the essential features of Napoleonic warfare and introduced the phrase le bataillon-carré
('square battalion'). In the bataillon-carré system, the various corps of the Grande Armée would march uniformly
together in close supporting distance. If any single corps was attacked, the others could quickly spring into action
and arrive to help. Napoleon invaded Prussia with 180,000 troops, rapidly marching on the right bank of the River
Saale. As in previous campaigns, his fundamental objective was to destroy one opponent before reinforcements from
another could tip the balance of the war. Upon learning the whereabouts of the Prussian army, the French swung westwards
and crossed the Saale with overwhelming force. At the twin battles of Jena and Auerstedt, fought on 14 October, the
French convincingly defeated the Prussians and inflicted heavy casualties. With several major commanders dead or
incapacitated, the Prussian king proved incapable of effectively commanding the army, which began to quickly disintegrate.
In a vaunted pursuit that epitomized the "peak of Napoleonic warfare," according to historian Richard Brooks, the
French managed to capture 140,000 soldiers, over 2,000 cannons and hundreds of ammunition wagons, all in a single
month. Historian David Chandler wrote of the Prussian forces: "Never has the morale of any army been more completely
shattered." Despite their overwhelming defeat, the Prussians refused to negotiate with the French until the Russians
had an opportunity to enter the fight. Following his triumph, Napoleon imposed the first elements of the Continental
System through the Berlin Decree issued in November 1806. The Continental System, which prohibited European nations
from trading with Britain, was widely violated throughout his reign. In the next few months, Napoleon marched against
the advancing Russian armies through Poland and was involved in the bloody stalemate at the Battle of Eylau in February
1807. After a period of rest and consolidation on both sides, the war restarted in June with an initial struggle
at Heilsberg that proved indecisive. On 14 June, however, Napoleon finally obtained an overwhelming victory over
the Russians at the Battle of Friedland, wiping out the majority of the Russian army in a very bloody struggle. The
scale of their defeat convinced the Russians to make peace with the French. On 19 June, Czar Alexander sent an envoy
to seek an armistice with Napoleon. The latter assured the envoy that the Vistula River represented the natural borders
between French and Russian influence in Europe. On that basis, the two emperors began peace negotiations at the town
of Tilsit after meeting on an iconic raft on the River Niemen. The very first thing Alexander said to Napoleon was
probably well-calibrated: "I hate the English as much as you do." Alexander faced pressure from his brother, Duke
Constantine, to make peace with Napoleon. Given the victory he had just achieved, the French emperor offered the
Russians relatively lenient terms, demanding that Russia join the Continental System, withdraw its forces from Wallachia
and Moldavia, and hand over the Ionian Islands to France. By contrast, Napoleon dictated very harsh peace terms for
Prussia, despite the ceaseless exhortations of Queen Louise. Wiping out half of Prussian territories from the map,
Napoleon created a new kingdom of 1,100 square miles called Westphalia. He then appointed his young brother Jérôme
as the new monarch of this kingdom. Prussia's humiliating treatment at Tilsit caused a deep and bitter antagonism
which festered as the Napoleonic era progressed. Moreover, Alexander's pretensions at friendship with Napoleon led
the latter to seriously misjudge the true intentions of his Russian counterpart, who would violate numerous provisions
of the treaty in the next few years. Despite these problems, the Treaties of Tilsit at last gave Napoleon a respite
from war and allowed him to return to France, which he had not seen in over 300 days. The settlements at Tilsit gave
Napoleon time to organize his empire. One of his major objectives became enforcing the Continental System against
the British. He decided to focus his attention on the Kingdom of Portugal, which consistently violated his trade
prohibitions. After defeat in the War of the Oranges in 1801, Portugal adopted a double-sided policy. At first, John
VI agreed to close his ports to British trade. The situation changed dramatically after the Franco-Spanish defeat
at Trafalgar; John grew bolder and officially resumed diplomatic and trade relations with Britain. Unhappy with this
change of policy by the Portuguese government, Napoleon sent an army to invade Portugal. On 17 October 1807, 24,000
French troops under General Junot crossed the Pyrenees with Spanish cooperation and headed towards Portugal to enforce
Napoleon's orders. This attack was the first step in what would eventually become the Peninsular War, a six-year
struggle that significantly sapped French strength. Throughout the winter of 1808, French agents became increasingly
involved in Spanish internal affairs, attempting to incite discord between members of the Spanish royal family. On
16 February 1808, secret French machinations finally materialized when Napoleon announced that he would intervene
to mediate between the rival political factions in the country. Marshal Murat led 120,000 troops into Spain and the
French arrived in Madrid on 24 March, where wild riots against the occupation erupted just a few weeks later. Napoleon
appointed his brother, Joseph Bonaparte, as the new King of Spain in the summer of 1808. The appointment enraged
a heavily religious and conservative Spanish population. Resistance to French aggression soon spread throughout the
country. The shocking French defeat at the Battle of Bailén in July gave hope to Napoleon's enemies and partly persuaded
the French emperor to intervene in person. Before going to Iberia, Napoleon decided to address several lingering
issues with the Russians. At the Congress of Erfurt in October 1808, Napoleon hoped to keep Russia on his side during
the upcoming struggle in Spain and during any potential conflict against Austria. The two sides reached an agreement,
the Erfurt Convention, that called upon Britain to cease its war against France, that recognized the Russian conquest
of Finland from Sweden, and that affirmed Russian support for France in a possible war against Austria "to the best
of its ability." Napoleon then returned to France and prepared for war. The Grande Armée, under the Emperor's personal
command, rapidly crossed the Ebro River in November 1808 and inflicted a series of crushing defeats against the Spanish
forces. After clearing the last Spanish force guarding the capital at Somosierra, Napoleon entered Madrid on 4 December
with 80,000 troops. He then unleashed his soldiers against Sir John Moore and the British forces. The British were swiftly
driven to the coast, and they withdrew from Spain entirely after a last stand at the Battle of Corunna in January
1809. Napoleon would end up leaving Iberia in order to deal with the Austrians in Central Europe, but the Peninsular
War continued long after his departure. He never returned to Spain after the 1808 campaign. Several months after
Corunna, the British sent another army to the peninsula under the future Duke of Wellington. The war then settled
into a complex and asymmetric strategic deadlock where all sides struggled to gain the upper hand. The highlight
of the conflict became the brutal guerrilla warfare that engulfed much of the Spanish countryside. Both sides committed
the worst atrocities of the Napoleonic Wars during this phase of the conflict. The vicious guerrilla fighting in
Spain, largely absent from the French campaigns in Central Europe, severely disrupted the French lines of supply
and communication. Although France maintained roughly 300,000 troops in Iberia during the Peninsular War, the vast
majority were tied down to garrison duty and to intelligence operations. The French were never able to concentrate
all of their forces effectively, prolonging the war until events elsewhere in Europe finally turned the tide in favor
of the Allies. After the invasion of Russia in 1812, the number of French troops in Spain vastly declined as Napoleon
needed reinforcements to conserve his strategic position in Europe. By 1814, after scores of battles and sieges throughout
Iberia, the Allies had managed to push the French out of the peninsula. After four years on the sidelines, Austria
sought another war with France to avenge its recent defeats. Austria could not count on Russian support because the
latter was at war with Britain, Sweden, and the Ottoman Empire in 1809. Frederick William of Prussia initially promised
to help the Austrians, but reneged before conflict began. A report from the Austrian finance minister suggested that
the treasury would run out of money by the middle of 1809 if the large army that the Austrians had formed since the
Third Coalition remained mobilized. Although Archduke Charles warned that the Austrians were not ready for another
showdown with Napoleon, a stance that landed him in the so-called "peace party," he did not want to see the army
demobilized either. On 8 February 1809, the advocates for war finally succeeded when the Imperial Government secretly
decided on another confrontation against the French. In the early morning of 10 April, leading elements of the Austrian
army crossed the Inn River and invaded Bavaria. The early Austrian attack surprised the French; Napoleon himself
was still in Paris when he heard about the invasion. He arrived at Donauwörth on the 17th to find the Grande Armée
in a dangerous position, with its two wings separated by 75 miles (121 km) and joined together by a thin cordon of
Bavarian troops. Charles pressed the left wing of the French army and hurled his men towards the III Corps of Marshal
Davout. In response, Napoleon came up with a plan to cut off the Austrians in the celebrated Landshut Maneuver. He
realigned the axis of his army and marched his soldiers towards the town of Eckmühl. The French scored a convincing
win in the resulting Battle of Eckmühl, forcing Charles to withdraw his forces over the Danube and into Bohemia.
On 13 May, Vienna fell for the second time in four years, although the war continued since most of the Austrian army
had survived the initial engagements in Southern Germany. By 17 May, the main Austrian army under Charles had arrived
on the Marchfeld. Charles kept the bulk of his troops several miles away from the river bank in hopes of concentrating
them at the point where Napoleon decided to cross. On 21 May, the French made their first major effort to cross the
Danube, precipitating the Battle of Aspern-Essling. The Austrians enjoyed a comfortable numerical superiority over
the French throughout the battle; on the first day, Charles had 110,000 soldiers at his disposal against only 31,000 commanded
by Napoleon. By the second day, reinforcements had boosted French numbers up to 70,000. The battle was characterized
by a vicious back-and-forth struggle for the two villages of Aspern and Essling, the focal points of the French bridgehead.
By the end of the fighting, the French had lost Aspern but still controlled Essling. A sustained Austrian artillery
bombardment eventually convinced Napoleon to withdraw his forces back onto Lobau Island. Both sides inflicted about
23,000 casualties on each other. It was the first defeat Napoleon suffered in a major set-piece battle, and it caused
excitement throughout many parts of Europe because it proved that he could be beaten on the battlefield. After the
setback at Aspern-Essling, Napoleon took more than six weeks in planning and preparing for contingencies before he
made another attempt at crossing the Danube. From 30 June to the early days of July, the French recrossed the Danube
in strength, with more than 180,000 troops marching across the Marchfeld towards the Austrians. Charles received
the French with 150,000 of his own men. In the ensuing Battle of Wagram, which also lasted two days, Napoleon commanded
his forces in what was the largest battle of his career up until then. Neither side made much progress on 5 July,
but the 6th produced a definitive outcome. Both sides launched major assaults on their flanks. Austrian attacks against
the French left wing looked dangerous initially, but they were all beaten back. Meanwhile, a steady French attack
against the Austrian left wing eventually compromised the entire position for Charles. Napoleon finished off the
battle with a concentrated central thrust that punctured a hole in the Austrian army and forced Charles to retreat.
Austrian losses were very heavy, reaching well over 40,000 casualties. The French were too exhausted to pursue the
Austrians immediately, but Napoleon eventually caught up with Charles at Znaim and the latter signed an armistice
on 12 July. In the Kingdom of Holland, the British launched the Walcheren Campaign to open up a second front in the
war and to relieve the pressure on the Austrians. The British army only landed at Walcheren on 30 July, by which
point the Austrians had already been defeated. The Walcheren Campaign was characterized by little fighting but heavy
casualties thanks to the popularly dubbed "Walcheren Fever." Over 4000 British troops were lost in a bungled campaign,
and the rest withdrew in December 1809. The main strategic result from the campaign became the delayed political
settlement between the French and the Austrians. Emperor Francis wanted to wait and see how the British performed
in their theater before entering into negotiations with Napoleon. Once it became apparent that the British were going
nowhere, the Austrians agreed to peace talks. The resulting Treaty of Schönbrunn in October 1809 was the harshest
that France had imposed on Austria in recent memory. Metternich and Archduke Charles had the preservation of the
Habsburg Empire as their fundamental goal, and to this end they succeeded by making Napoleon seek more modest goals
in return for promises of friendship between the two powers. Nevertheless, while most of the hereditary lands remained
a part of the Habsburg realm, France received Carinthia, Carniola, and the Adriatic ports, while Galicia was given
to the Poles and the Salzburg area of the Tyrol went to the Bavarians. Austria lost over three million subjects,
about one-fifth of her total population, as a result of these territorial changes. Although fighting in Iberia continued,
the War of the Fifth Coalition would be the last major conflict on the European continent for the next three years.
Napoleon turned his focus to domestic affairs after the war. Empress Joséphine had still not borne Napoleon a child, and he grew worried about the future of his empire after his death. Desperate for a legitimate
heir, Napoleon divorced Joséphine in January 1810 and started looking for a new wife. Hoping to cement the recent
alliance with Austria through a family connection, Napoleon married the Archduchess Marie Louise, who was 18 years
old at the time. On 20 March 1811, Marie Louise gave birth to a baby boy, whom Napoleon made heir apparent and bestowed
the title of King of Rome. His son never actually ruled the empire, but historians still refer to him as Napoleon
II. In 1808, Napoleon and Czar Alexander met at the Congress of Erfurt to preserve the Russo-French alliance. The
leaders had a friendly personal relationship after their first meeting at Tilsit in 1807. By 1811, however, tensions
had increased and Alexander was under pressure from the Russian nobility to break off the alliance. A major strain
on the relationship between the two nations became the regular violations of the Continental System by the Russians,
which led Napoleon to threaten Alexander with serious consequences if he formed an alliance with Britain. Napoleon invaded Russia in June 1812. In an attempt to gain increased support from Polish nationalists and patriots, he termed the war the Second Polish War; the First Polish War had been the Bar Confederation uprising by Polish nobles against Russia in 1768. Polish patriots
wanted the Russian part of Poland to be joined with the Duchy of Warsaw and an independent Poland created. This was
rejected by Napoleon, who stated he had promised his ally Austria this would not happen. Napoleon refused to manumit
the Russian serfs because of concerns this might provoke a reaction in his army's rear. The serfs later committed
atrocities against French soldiers during France's retreat. The Russians avoided Napoleon's objective of a decisive
engagement and instead retreated deeper into Russia. A brief attempt at resistance was made at Smolensk in August;
the Russians were defeated in a series of battles, and Napoleon resumed his advance. The Russians again avoided battle,
although in a few cases this was only achieved because Napoleon uncharacteristically hesitated to attack when the
opportunity arose. Owing to the Russian army's scorched earth tactics, the French found it increasingly difficult
to forage food for themselves and their horses. The Russians eventually offered battle outside Moscow on 7 September:
the Battle of Borodino resulted in approximately 44,000 Russian and 35,000 French dead, wounded or captured, and
may have been the bloodiest day of battle in history up to that point in time. Although the French had won, the Russian
army had accepted, and withstood, the major battle Napoleon had hoped would be decisive. Napoleon's own account was:
"The most terrible of all my battles was the one before Moscow. The French showed themselves to be worthy of victory,
but the Russians showed themselves worthy of being invincible." The Russian army withdrew and retreated past Moscow.
Napoleon entered the city, assuming its fall would end the war and Alexander would negotiate peace. However, on orders
of the city's governor Feodor Rostopchin, rather than capitulation, Moscow was burned. After five weeks, Napoleon
and his army left. In early November, Napoleon grew concerned about losing control back in France after the Malet coup of 1812. His army marched through knee-deep snow, and nearly 10,000 men and horses froze to death on the night of 8/9 November alone. After the Battle of Berezina, Napoleon managed to escape, but he had to abandon much of the remaining artillery and baggage train. On 5 December, shortly before arriving in Vilnius, Napoleon left the army and returned to Paris by sledge. Following Napoleon's defeat at the Battle of Leipzig in October 1813, the Allies offered peace terms in the Frankfurt proposals in November 1813. Napoleon would remain as
Emperor of France, but it would be reduced to its "natural frontiers." That meant that France could retain control
of Belgium, Savoy and the Rhineland (the west bank of the Rhine River), while giving up control of all the rest,
including all of Spain and the Netherlands, and most of Italy and Germany. Metternich told Napoleon these were the
best terms the Allies were likely to offer; after further victories, the terms would be harsher and harsher. Metternich's
motivation was to maintain France as a balance against Russian threats, while ending the highly destabilizing series
of wars. Napoleon, expecting to win the war, delayed too long and lost this opportunity; by December the Allies had
withdrawn the offer. When his back was to the wall in 1814 he tried to reopen peace negotiations on the basis of
accepting the Frankfurt proposals. The Allies now had new, harsher terms that included the retreat of France to its
1791 boundaries, which meant the loss of Belgium. Under these terms Napoleon would have remained Emperor, but he rejected them. The British, who wanted Napoleon permanently removed, eventually prevailed among the Allies, though Napoleon adamantly refused to step down. On 1 April, Alexander addressed
the Sénat conservateur. Long docile to Napoleon, under Talleyrand's prodding it had turned against him. Alexander
told the Sénat that the Allies were fighting against Napoleon, not France, and they were prepared to offer honorable
peace terms if Napoleon were removed from power. The next day, the Sénat passed the Acte de déchéance de l'Empereur
("Emperor's Demise Act"), which declared Napoleon deposed. Napoleon had advanced as far as Fontainebleau when he
learned that Paris was lost. When Napoleon proposed the army march on the capital, his senior officers and marshals
mutinied. On 4 April, led by Ney, they confronted Napoleon. Napoleon asserted the army would follow him, and Ney
replied the army would follow its generals. While the ordinary soldiers and regimental officers wanted to fight on,
without any senior officers or marshals any prospective invasion of Paris would have been impossible. Bowing to the
inevitable, on 4 April Napoleon abdicated in favour of his son, with Marie-Louise as regent. However, the Allies
refused to accept this under prodding from Alexander, who feared that Napoleon might find an excuse to retake the
throne. Napoleon was then forced to announce his unconditional abdication only two days later. In the Treaty of Fontainebleau,
the Allies exiled him to Elba, an island of 12,000 inhabitants in the Mediterranean, 20 km (12 mi) off the Tuscan
coast. They gave him sovereignty over the island and allowed him to retain the title of Emperor. Napoleon attempted
suicide with a pill he had carried since nearly being captured by the Russians during the retreat from Moscow. Its
potency had weakened with age, however, and he survived to be exiled while his wife and son took refuge in Austria.
In the first few months on Elba he created a small navy and army, developed the iron mines, and issued decrees on
modern agricultural methods. In late February 1815, Napoleon escaped from Elba and landed on the French mainland, marching north toward Paris. The 5th Regiment was sent to intercept him and made contact just south of Grenoble on
March 7, 1815. Napoleon approached the regiment alone, dismounted his horse and, when he was within gunshot range,
shouted to the soldiers, "Here I am. Kill your Emperor, if you wish." The soldiers quickly responded with, "Vive
L'Empereur!" Ney, who had boasted to the restored Bourbon king, Louis XVIII, that he would bring Napoleon to Paris
in an iron cage, affectionately kissed his former emperor and forgot his oath of allegiance to the Bourbon monarch.
The two then marched together towards Paris with a growing army. The unpopular Louis XVIII fled to Belgium after
realizing he had little political support. On March 13, the powers at the Congress of Vienna declared Napoleon an
outlaw. Four days later, Great Britain, Russia, Austria, and Prussia each pledged to put 150,000 men into the field
to end his rule. After his defeat at the Battle of Waterloo on 18 June 1815, Napoleon returned to Paris and found that both the legislature and the people had turned against
him. Realizing his position was untenable, he abdicated on 22 June in favour of his son. He left Paris three days
later and settled at Joséphine's former palace in Malmaison (on the western bank of the Seine about 17 kilometres (11 mi) west of Paris). Even as Napoleon travelled to Paris, the Coalition forces crossed the frontier and swept through France (arriving in the vicinity of Paris on 29 June), with the stated intent of restoring Louis XVIII to the French
throne. Napoleon surrendered to the British and was exiled to the remote island of Saint Helena, where he died on 5 May 1821. In 1840, Louis Philippe I obtained permission from the British to return Napoleon's remains to France. On
15 December 1840, a state funeral was held. The hearse proceeded from the Arc de Triomphe down the Champs-Élysées,
across the Place de la Concorde to the Esplanade des Invalides and then to the cupola in St Jérôme's Chapel, where
it remained until the tomb designed by Louis Visconti was completed. In 1861, Napoleon's remains were entombed in
a porphyry sarcophagus in the crypt under the dome at Les Invalides. In 1955, the diaries of Napoleon's valet, Louis
Marchand, were published. His description of Napoleon in the months before his death led Sten Forshufvud in a 1961
paper in Nature to put forward other causes for his death, including deliberate arsenic poisoning. Arsenic was used
as a poison during the era because it was undetectable when administered over a long period. Forshufvud, in a 1978
book with Ben Weider, noted that Napoleon's body was found to be remarkably well preserved when moved in 1840. Arsenic
is a strong preservative, and therefore this supported the poisoning hypothesis. Forshufvud and Weider observed that
Napoleon had attempted to quench abnormal thirst by drinking large amounts of orgeat syrup that contained cyanide
compounds in the almonds used for flavouring. They maintained that the potassium tartrate used in his treatment prevented
his stomach from expelling these compounds and that his thirst was a symptom of the poison. Their hypothesis was
that the calomel given to Napoleon became an overdose, which killed him and left extensive tissue damage behind.
According to a 2007 article, the type of arsenic found in Napoleon's hair shafts was mineral, the most toxic, and
according to toxicologist Patrick Kintz, this supported the conclusion that he was murdered. There have been modern
studies that have supported the original autopsy finding. In a 2008 study, researchers analysed samples of Napoleon's
hair from throughout his life, as well as samples from his family and other contemporaries. All samples had high
levels of arsenic, approximately 100 times higher than the current average. According to these researchers, Napoleon's
body was already heavily contaminated with arsenic as a boy, and the high arsenic concentration in his hair was not
caused by intentional poisoning; people were constantly exposed to arsenic from glues and dyes throughout their lives.[note
7] Studies published in 2007 and 2008 dismissed evidence of arsenic poisoning, and confirmed evidence of peptic ulcer
and gastric cancer as the cause of death. Napoleon had a civil marriage with Joséphine de Beauharnais, without religious
ceremony. During the campaign in Egypt, Napoleon showed much tolerance towards religion for a revolutionary general,
holding discussions with Muslim scholars and ordering religious celebrations, but General Dupuy, who accompanied
Napoleon, revealed, shortly after Pope Pius VI's death, the political reasons for such behaviour: "We are fooling
Egyptians with our pretended interest for their religion; neither Bonaparte nor we believe in this religion more
than we did in Pius the Defunct's one".[note 8] In his memoirs, Bonaparte's secretary Bourrienne wrote about Napoleon's
religious interests in the same vein. His religious opportunism is epitomized in his famous quote: "It is by making
myself Catholic that I brought peace to Brittany and Vendée. It is by making myself Italian that I won minds in Italy.
It is by making myself a Moslem that I established myself in Egypt. If I governed a nation of Jews, I should reestablish
the Temple of Solomon." However, according to Juan Cole, "Bonaparte's admiration for the Prophet Muhammad, in contrast,
was genuine" and during his captivity on St Helena he defended him against Voltaire's critical play Mahomet. Napoleon
was crowned Emperor Napoleon I on 2 December 1804 at Notre-Dame de Paris in a ceremony presided over by Pope Pius VII; famously, Napoleon placed the crown on his own head. On 1 April 1810, Napoleon
religiously married the Austrian princess Marie Louise. During his brother Joseph's rule in Spain, Napoleon abolished the Spanish Inquisition in 1813. In a private discussion with General Gourgaud during his exile on Saint Helena, Napoleon expressed
materialistic views on the origin of man,[note 9] and doubted the divinity of Jesus, stating that it is absurd to
believe that Socrates, Plato, Muslims, and the Anglicans should be damned for not being Roman Catholics.[note 10]
He also said to Gourgaud in 1817 "I like the Mohammedan religion best. It has fewer incredible things in it than
ours." and that "the Mohammedan religion is the finest of all." However, Napoleon was anointed by a priest before
his death. Seeking national reconciliation between revolutionaries and Catholics, Napoleon signed the Concordat of 1801 with Pope Pius VII on 15 July 1801. It solidified the Roman Catholic Church as the majority church
of France and brought back most of its civil status. The hostility of devout Catholics against the state had now
largely been resolved. It did not restore the vast church lands and endowments that had been seized during the revolution
and sold off. As a part of the Concordat, he presented another set of laws called the Organic Articles. While the
Concordat restored much power to the papacy, the balance of church-state relations had tilted firmly in Napoleon's
favour. He selected the bishops and supervised church finances. Napoleon and the pope both found the Concordat useful.
Similar arrangements were made with the Church in territories controlled by Napoleon, especially Italy and Germany.
Now, Napoleon could win favor with the Catholics while also controlling Rome in a political sense. Napoleon said
in April 1801, "Skillful conquerors have not got entangled with priests. They can both contain them and use them."
French children were issued a catechism that taught them to love and respect Napoleon. Historians agree that Napoleon's
remarkable personality was one key to his influence. They emphasize the strength of his ambition that took him from
an obscure village to command of most of Europe. George F. E. Rudé stresses his "rare combination of will, intellect
and physical vigour." At 5 ft 6 in (168 cm), he was not physically imposing but in one-on-one situations he typically
had a hypnotic impact on people and seemingly bent the strongest leaders to his will. He understood military technology,
but was not an innovator in that regard. He was an innovator in using the financial, bureaucratic, and diplomatic
resources of France. He could rapidly dictate a series of complex commands to his subordinates, keeping in mind where
major units were expected to be at each future point, and like a chess master, "seeing" the best play several moves ahead.
Napoleon maintained strict, efficient work habits, prioritizing what needed to be done. He cheated at cards, but
repaid the losses; he had to win at everything he attempted. He kept relays of staff and secretaries at work. Unlike
many generals, Napoleon did not examine history to ask what Hannibal or Alexander or anyone else did in a similar
situation. Critics said he won many battles simply because of luck; Napoleon responded, "Give me lucky generals,"
aware that "luck" comes to leaders who recognize opportunity, and seize it. Dwyer argues that Napoleon's victories
at Austerlitz and Jena in 1805-06 heightened his sense of self-grandiosity, leaving him even more certain of his
destiny and invincibility. By the Russian campaign in 1812, however, Napoleon seems to have lost his verve. With
crisis after crisis at hand, he rarely rose to the occasion. Some historians have suggested a physical deterioration,
but others note that an impaired Napoleon was still a brilliant general. In terms of impact on events, it was more
than Napoleon's personality that took effect. He reorganized France itself to supply the men and money needed for
great wars. Above all he inspired his men—Wellington said his presence on the battlefield was worth 40,000 soldiers,
for he inspired confidence from privates to field marshals. He also unnerved the enemy. At the Battle of Auerstadt
in 1806, King Frederick William III of Prussia outnumbered the French by 63,000 to 27,000; however, when he was mistakenly
told that Napoleon was in command, he ordered a hasty retreat that turned into a rout. The force of his personality
neutralized material difficulties as his soldiers fought with the confidence that with Napoleon in charge they would
surely win. During the Napoleonic Wars he was taken seriously by the British press as a dangerous tyrant, poised
to invade. He was often referred to by the British as Boney. A nursery rhyme warned children that Bonaparte ravenously
ate naughty people, casting him as the "bogeyman". The British Tory press sometimes depicted Napoleon as much smaller than average
height, and this image persists. Confusion about his height also results from the difference between the French pouce
and British inch—2.71 cm and 2.54 cm, respectively. The myth of the "Napoleon Complex" — named after him to describe
men who have an inferiority complex — stems primarily from the fact that he was listed, incorrectly, as 5 feet 2
inches (in French units) at the time of his death. In fact, he was 1.68 metres (5 ft 6 in) tall, an average height
for a man in that period.[note 11] When he became First Consul and later Emperor, Napoleon eschewed his general's
uniform and habitually wore the simple green uniform (non-Hussar) of a colonel of the Chasseurs à Cheval of
the Imperial Guard, the regiment that often served as his personal escort, with a large bicorne. He also habitually
wore (usually on Sundays) the blue uniform of a colonel of the Imperial Guard Foot Grenadiers (blue with white facings
and red cuffs). He also wore his Légion d'honneur star, medal and ribbon, and the Order of the Iron Crown decorations,
white French-style culottes and white stockings. This was in contrast to the gorgeous and complex uniforms with many
decorations of his marshals and those around him. Napoleon instituted lasting reforms, including higher education,
a tax code, road and sewer systems, and established the Banque de France, the first central bank in French history.
He negotiated the Concordat of 1801 with the Catholic Church, which sought to reconcile the mostly Catholic population
to his regime. It was presented alongside the Organic Articles, which regulated public worship in France. His dissolution
of the Holy Roman Empire paved the way to German Unification later in the 19th century. The sale of the Louisiana
Territory to the United States doubled the size of the country and was a major event in American history. Napoleon's
set of civil laws, the Code Civil—now often known as the Napoleonic Code—was prepared by committees of legal experts
under the supervision of Jean Jacques Régis de Cambacérès, the Second Consul. Napoleon participated actively in the
sessions of the Council of State that revised the drafts. The development of the code was a fundamental change in
the nature of the civil law legal system with its stress on clearly written and accessible law. Other codes ("Les
cinq codes") were commissioned by Napoleon to codify criminal and commercial law; a Code of Criminal Instruction was
published, which enacted rules of due process. His opponents learned from Napoleon's innovations. The increased importance
of artillery after 1807 stemmed from his creation of a highly mobile artillery force, the growth in artillery numbers,
and changes in artillery practices. As a result of these factors, Napoleon, rather than relying on infantry to wear
away the enemy's defenses, now could use massed artillery as a spearhead to pound a break in the enemy's line that
was then exploited by supporting infantry and cavalry. McConachy rejects the alternative theory that growing reliance
on artillery by the French army beginning in 1807 was an outgrowth of the declining quality of the French infantry
and, later, France's inferiority in cavalry numbers. Weapons and other kinds of military technology remained largely
static through the Revolutionary and Napoleonic eras, but 18th-century operational mobility underwent significant
change. The official introduction of the metric system in September 1799 was unpopular in large sections of French
society. Napoleon's rule greatly aided adoption of the new standard not only across France but also across the French
sphere of influence. Napoleon ultimately took a retrograde step in 1812 when he passed legislation to introduce the
mesures usuelles (traditional units of measurement) for retail trade—a system of measures that resembled the pre-revolutionary
units but was based on the kilogram and the metre; for example, the livre métrique (metric pound) was 500 g instead
of 489.5 g—the value of the livre du roi (the king's pound). Other units of measure were rounded in a similar manner.
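The unit arithmetic above can be checked directly. The following is an illustrative sketch, not part of the source, using only the figures given in the text (2.71 cm per French pouce, 2.54 cm per British inch, 489.5 g for the livre du roi):

```python
# Illustrative check of the unit figures quoted in the text.
POUCE_CM = 2.71   # French pouce, per the text
INCH_CM = 2.54    # British inch, per the text

# Napoleon's height as recorded in French units: 5 pieds 2 pouces
# (1 pied = 12 pouces).
height_cm = (5 * 12 + 2) * POUCE_CM
print(f"5 pi 2 po = {height_cm:.0f} cm")  # ~168 cm, i.e. 1.68 m

# The same height in British units explains the "5 ft 6 in" figure.
height_in = height_cm / INCH_CM
print(f"= {height_in // 12:.0f} ft {height_in % 12:.0f} in (British)")

# The 1812 mesures usuelles rounded old units to convenient metric values:
livre_du_roi_g = 489.5     # pre-revolutionary king's pound
livre_metrique_g = 500.0   # "metric pound"
pct = (livre_metrique_g - livre_du_roi_g) / livre_du_roi_g * 100
print(f"livre métrique is {pct:.1f}% heavier than the livre du roi")
```

The calculation shows why a height listed as "5 ft 2 in" in French units corresponds to about 5 ft 6 in in British units, and how small the mesures usuelles rounding actually was.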
This however laid the foundations for the definitive introduction of the metric system across Europe in the middle
of the 19th century. Napoleon's educational reforms laid the foundation of a modern system of education in France
and throughout much of Europe. Napoleon synthesized the best academic elements from the Ancien Régime, the Enlightenment,
and the Revolution, with the aim of establishing a stable, well-educated and prosperous society. He made French the
only official language. He left some primary education in the hands of religious orders, but he offered public support
to secondary education. Napoleon founded a number of state secondary schools (lycées) designed to produce a standardized
education that was uniform across France. All students were taught the sciences along with modern and classical languages.
Unlike the system during the Ancien Régime, religious topics did not dominate the curriculum, although they were
present, as were teachers drawn from the clergy. Napoleon simply hoped to use religion to produce social stability.
He gave special attention to the advanced centers, notably the École Polytechnique, that provided both military expertise
and state-of-the-art research in science. Napoleon made some of the first major efforts at establishing a system
of secular and public education. The system featured scholarships and strict discipline, with the result being a
French educational system that outperformed its European counterparts, many of which borrowed from the French system.
In the political realm, historians debate whether Napoleon was "an enlightened despot who laid the foundations of
modern Europe or, instead, a megalomaniac who wrought greater misery than any man before the coming of Hitler." Many
historians have concluded that he had grandiose foreign policy ambitions. The Continental powers as late as 1808
were willing to give him nearly all of his remarkable gains and titles, but some scholars maintain he was overly
aggressive and pushed for too much, until his empire collapsed. Napoleon ended lawlessness and disorder in post-Revolutionary
France. He was, however, considered a tyrant and usurper by his opponents. His critics charge that he was not significantly
troubled when faced with the prospect of war and death for thousands, turned his search for undisputed rule into
a series of conflicts throughout Europe and ignored treaties and conventions alike. His role in the Haitian Revolution
and decision to reinstate slavery in France's overseas colonies are controversial and continue to affect his reputation.
Napoleon institutionalised plunder of conquered territories: French museums contain art stolen by Napoleon's forces
from across Europe. Artefacts were brought to the Musée du Louvre for a grand central museum; his example would later
serve as inspiration for more notorious imitators. He was compared to Adolf Hitler most famously by the historian
Pieter Geyl in 1947 and Claude Ribbe in 2005. David G. Chandler, a foremost historian of Napoleonic warfare, wrote
in 1973 that, "Nothing could be more degrading to the former [Napoleon] and more flattering to the latter [Hitler].
The comparison is odious. On the whole Napoleon was inspired by a noble dream, wholly dissimilar from Hitler's...
Napoleon left great and lasting testimonies to his genius—in codes of law and national identities which survive to
the present day. Adolf Hitler left nothing but destruction." Critics argue Napoleon's true legacy must reflect the
loss of status for France and needless deaths brought by his rule: historian Victor Davis Hanson writes, "After all,
the military record is unquestioned—17 years of wars, perhaps six million Europeans dead, France bankrupt, her overseas
colonies lost." McLynn notes that, "He can be viewed as the man who set back European economic life for a generation
by the dislocating impact of his wars." However, Vincent Cronin replies that such criticism relies on the flawed
premise that Napoleon was responsible for the wars which bear his name, when in fact France was the victim of a series
of coalitions which aimed to destroy the ideals of the Revolution. Napoleon's use of propaganda contributed to his
rise to power, legitimated his régime, and established his image for posterity. Strict censorship, controlling aspects
of the press, books, theater, and art, was part of his propaganda scheme, aimed at portraying him as bringing desperately
wanted peace and stability to France. The propagandistic rhetoric changed in relation to events and to the atmosphere
of Napoleon's reign, focusing first on his role as a general in the army and identification as a soldier, and moving
to his role as emperor and a civil leader. Specifically targeting his civilian audience, Napoleon fostered an important,
though uneasy, relationship with the contemporary art community, taking an active role in commissioning and controlling
different forms of art production to suit his propaganda goals. Widespread rumors of Napoleon's return from St. Helena
and Napoleon as an inspiration for patriotism, individual and collective liberties, and political mobilization manifested
themselves in seditious materials, displaying the tricolor and rosettes. There were also subversive activities celebrating
anniversaries of Napoleon's life and reign and disrupting royal celebrations; these demonstrated the prevailing and
successful aim of Napoleon's varied supporters to constantly destabilize the Bourbon regime. Datta (2005) shows
that, following the collapse of militaristic Boulangism in the late 1880s, the Napoleonic legend was divorced from
party politics and revived in popular culture. Concentrating on two plays and two novels from the period—Victorien
Sardou's Madame Sans-Gêne (1893), Maurice Barrès's Les Déracinés (1897), Edmond Rostand's L'Aiglon (1900), and André
de Lorde and Gyp's Napoléonette (1913)—Datta examines how writers and critics of the Belle Époque exploited the Napoleonic
legend for diverse political and cultural ends. After the fall of Napoleon, the Napoleonic Code was not only retained
by conquered countries including the Netherlands, Belgium, parts of Italy and Germany, but also used as the basis
of certain parts of law outside Europe including the Dominican Republic, the US state of Louisiana and the Canadian
province of Quebec. The memory of Napoleon in Poland is favorable, for his support for independence and opposition
to Russia, his legal code, the abolition of serfdom, and the introduction of modern middle class bureaucracies. Napoleon
could be considered one of the founders of modern Germany. After dissolving the Holy Roman Empire, he reduced the
number of German states from 300 to less than 50, paving the way to German Unification. A byproduct of the French
occupation was a strong development in German nationalism. Napoleon also significantly aided the United States when
he agreed to sell the territory of Louisiana for 15 million dollars during the presidency of Thomas Jefferson. That
territory almost doubled the size of the United States, adding the equivalent of 13 states to the Union. Napoleon
married Joséphine de Beauharnais in 1796, when he was 26; she was a 32-year-old widow whose first husband had been
executed during the Revolution. Until she met Bonaparte, she had been known as "Rose", a name which he disliked.
He called her "Joséphine" instead, and she went by this name henceforth. Bonaparte often sent her love letters while
on his campaigns. He formally adopted her son Eugène and cousin Stéphanie and arranged dynastic marriages for them.
Joséphine had her daughter Hortense marry Napoleon's brother Louis. Napoleon acknowledged one illegitimate son: Charles
Léon (1806–1881) by Eléonore Denuelle de La Plaigne. Alexandre Colonna-Walewski (1810–1868), the son of his mistress
Maria Walewska, although acknowledged by Walewska's husband, was also widely known to be his child, and the DNA of
his direct male descendant has been used to help confirm Napoleon's Y-chromosome haplotype. He may have had further
unacknowledged illegitimate offspring as well, such as Eugen Megerle von Mühlfeld by Emilie Victoria Kraus and Hélène
Napoleone Bonaparte (1816–1907) by Albine de Montholon.
Of approximately 100 million native speakers of German in the world, roughly 80 million consider themselves Germans.
There are an additional 80 million people of German ancestry mainly in the United States, Brazil (mainly
in the South Region of the country), Argentina, Canada, South Africa, the post-Soviet states (mainly in Russia and
Kazakhstan), and France, each accounting for at least 1 million.[note 2] Thus, the total number of Germans lies somewhere
between 100 and more than 150 million, depending on the criteria applied (native speakers, single-ancestry ethnic
Germans, partial German ancestry, etc.). Conflict between the Germanic tribes and the forces of Rome under Julius
Caesar forced major Germanic tribes to retreat to the east bank of the Rhine. Roman emperor Augustus in 12 BC ordered
the conquest of the Germans, but the catastrophic Roman defeat at the Battle of the Teutoburg Forest resulted in
the Roman Empire abandoning its plans to completely conquer Germany. Germanic peoples in Roman territory were culturally
Romanized, and although much of Germany remained free of direct Roman rule, Rome deeply influenced the development
of German society, especially the adoption of Christianity by the Germans who obtained it from the Romans. In Roman-held
territories with Germanic populations, the Germanic and Roman peoples intermarried, and Roman, Germanic, and Christian
traditions intermingled. The adoption of Christianity would later become a major influence in the development of
a common German identity. The Germanic peoples during the Migrations Period came into contact with other peoples;
in the case of the populations settling in the territory of modern Germany, they encountered Celts to the south,
and Balts and Slavs towards the east. The Limes Germanicus was breached in AD 260. Migrating Germanic tribes commingled
with the local Gallo-Roman populations in what is now Swabia and Bavaria. The arrival of the Huns in Europe resulted
in the Hun conquest of large parts of Eastern Europe. The Huns were initially allies of the Roman Empire who fought
against Germanic tribes, but they later cooperated with the Germanic tribe of the Ostrogoths, and large numbers of
Germans lived within the lands of the Hunnic Empire of Attila. Attila had both Hunnic and Germanic families and
prominent Germanic chiefs amongst his close entourage in Europe. The Huns living in Germanic territories in Eastern
Europe adopted an East Germanic language as their lingua franca, and during the Huns' campaign against the Roman
Empire a major part of Attila's army consisted of Germans. After Attila's unexpected death the Hunnic Empire collapsed,
and the Huns disappeared as a people in Europe, either escaping into Asia or blending in amongst Europeans. The
migration-period peoples who later coalesced into a "German" ethnicity were the Germanic tribes of the Saxons, Franci,
Thuringii, Alamanni and Bavarii. These five tribes, sometimes with inclusion of the Frisians, are considered as the
major groups to take part in the formation of the Germans. The varieties of the German language are still divided
up into these groups. Linguists distinguish Low Saxon, Franconian, Bavarian, Thuringian and Alemannic varieties in
modern German. By the 9th century, the large tribes which lived on the territory of modern Germany had been united
under the rule of the Frankish king Charlemagne, known in German as Karl der Große. Much of what is now Eastern Germany
became Slavonic-speaking (Sorbs and Veleti), after these areas were vacated by Germanic tribes (Vandals, Lombards,
Burgundians and Suebi amongst others) which had migrated into the former areas of the Roman Empire. A German ethnicity
emerged in the course of the Middle Ages, ultimately as a result of the formation of the kingdom of Germany within
East Francia and later the Holy Roman Empire, beginning in the 9th century. The process was gradual and lacked any
clear definition, and the use of exonyms designating "the Germans" developed only during the High Middle Ages. The
title of rex Teutonicorum ("King of the Germans") is first used in the late 11th century, by the chancery of Pope Gregory
VII, to describe the future Holy Roman Emperor Henry IV. Natively, the term ein diutscher ("a
German") is used for the people of Germany from the 12th century. After Christianization, the Roman Catholic Church
and local rulers led German expansion and settlement in areas inhabited by Slavs and Balts, known as Ostsiedlung.
During the wars waged in the Baltic by the Catholic German Teutonic Knights, the lands inhabited by the ethnic group
of the Old Prussians (the name now used for the people then known simply as the "Prussians") were conquered by
the Germans. The Old Prussians were an ethnic group related to the Latvian and Lithuanian Baltic peoples. The former
German state of Prussia took its name from the Baltic Prussians, although it was led by Germans who had assimilated
the Old Prussians; the Old Prussian language was extinct by the 17th or early 18th century. The Slavic people of
the Teutonic-controlled Baltic were assimilated into German culture and eventually there were many intermarriages
of Slavic and German families, including amongst Prussia's aristocracy, known as the Junkers. Prussian military
strategist Karl von Clausewitz is a famous German whose surname is of Slavic origin. Massive German settlement led
to the assimilation of Baltic (Old Prussians) and Slavic (Wends) populations, who were exhausted by previous warfare.
At the same time, naval innovations led to a German domination of trade in the Baltic Sea and parts of Eastern Europe
through the Hanseatic League. Along the trade routes, Hanseatic trade stations became centers of the German culture.
German town law (Stadtrecht) was promoted by the presence of large, relatively wealthy German populations, their
influence and political power. Thus people who would be considered "Germans", with a common culture, language, and
worldview different from that of the surrounding rural peoples, colonized trading towns as far north of present-day
Germany as Bergen (in Norway), Stockholm (in Sweden), and Vyborg (now in Russia). The Hanseatic League was not exclusively
German in any ethnic sense: many towns that joined the league were outside the Holy Roman Empire and a number of them
may only loosely be characterized as German. The Empire itself was not entirely German either. It had a multi-ethnic
and multi-lingual structure; some of the smaller ethnicities and languages used at different times were Dutch, Italian,
French, Czech and Polish. By the Middle Ages, large numbers of Jews lived in the Holy Roman Empire and had assimilated
into German culture, including many Jews who had previously assimilated into French culture and had spoken a mixed
Judeo-French language. Upon assimilating into German culture, the Jewish German peoples incorporated major parts
of the German language and elements of other European languages into a mixed language known as Yiddish. However, tolerance
and assimilation of Jews in German society ended suddenly during the Crusades, when many Jews were forcibly expelled
from Germany. Western Yiddish disappeared as a language in Germany over the centuries, as German Jewish people
fully adopted the German language. The Napoleonic Wars were the cause of the final dissolution of the Holy Roman
Empire, and ultimately the cause for the quest for a German nation state in 19th-century German nationalism. After
the Congress of Vienna, Austria and Prussia emerged as two competitors. Austria, trying to remain the dominant power
in Central Europe, led the way in the terms of the Congress of Vienna. The Congress of Vienna was essentially conservative,
assuring that little would change in Europe and preventing Germany from uniting. These terms came to a sudden halt
following the Revolutions of 1848 and the Crimean War (1853–56), paving the way for German unification in the 1860s.
By the 1820s, large numbers of Jewish German women had intermarried with Christian German men and had converted to
Christianity. Jewish German Eduard Lasker was a prominent German nationalist figure who promoted the unification
of Germany in the mid-19th century. In 1866, the feud between Austria and Prussia finally came to a head. There were
several reasons behind this war: German nationalism was growing strongly inside the German Confederation, and the two powers
could not agree on how Germany was to be unified into a nation-state. The Austrians favoured the Greater Germany
unification but were not willing to give up any of the non-German-speaking land inside of the Austrian Empire and
take second place to Prussia. The Prussians however wanted to unify Germany as Little Germany, led primarily by the Kingdom
of Prussia, whilst excluding Austria. In the final battle of the German War (the Battle of Königgrätz) the Prussians
successfully defeated the Austrians and succeeded in creating the North German Confederation. In 1870, after France
attacked Prussia, Prussia and its new allies in Southern Germany (among them Bavaria) were victorious in the Franco-Prussian
War. It created the German Empire in 1871 as a German nation-state, effectively excluding the multi-ethnic Austrian
Habsburg monarchy and Liechtenstein. Integrating the Austrians nevertheless remained a strong desire for many people
of Germany and Austria, especially among the liberals, the social democrats and also the Catholics who were a minority
within predominantly Protestant Germany. The Nazis, led by Adolf Hitler, attempted to unite all the people they claimed were
"Germans" (Volksdeutsche) into one realm, including ethnic Germans in eastern Europe, many of whom had emigrated
more than one hundred fifty years before and developed separate cultures in their new lands. This idea was initially
welcomed by many ethnic Germans in Sudetenland, Austria, Poland, Danzig and western Lithuania, particularly the Germans
from Klaipeda (Memel). The Swiss resisted the idea. They had viewed themselves as a distinctly separate nation since
the Peace of Westphalia of 1648. After World War II, eastern European countries such as the Soviet Union, Poland,
Czechoslovakia, Hungary, Romania and Yugoslavia expelled the Germans from their territories. Many of them had inhabited
these lands for centuries, developing a unique culture. Germans were also forced to leave the former eastern territories
of Germany, which were annexed by Poland (Silesia, Pomerania, parts of Brandenburg and southern part of East Prussia)
and the Soviet Union (northern part of East Prussia). Between 12 and 16.5 million ethnic Germans and German citizens
were expelled westwards to Allied-occupied Germany. The Protestant Reformation and the politics that
ensued have been cited as the origin of a German identity that arose in response to the spread of a common German language
and literature. Early German national culture was developed through literary and religious figures including Martin
Luther, Johann Wolfgang von Goethe and Friedrich Schiller. The concept of a German nation was developed by German
philosopher Johann Gottfried Herder. The popularity of German identity arose in the aftermath of the French Revolution.
Persons who speak German as their first language, look German and whose families have lived in Germany for generations
are considered "most German", followed by categories of diminishing Germanness such as Aussiedler (people of German
ancestry whose families have lived in Eastern Europe but who have returned to Germany), Restdeutsche (people living
in lands that have historically belonged to Germany but which are currently outside Germany), Auswanderer (people
whose families have emigrated from Germany and who still speak German), German speakers in German-speaking nations
such as Austrians, and finally people of German emigrant background who no longer speak German. The native language
of Germans is German, a West Germanic language, related to and classified alongside English and Dutch, and sharing
many similarities with the North Germanic (Scandinavian) languages. Spoken by approximately 100 million native
speakers, German is one of the world's major languages and the most widely spoken first language in the European
Union. German has been replaced by English as the dominant language of science-related Nobel Prize laureates during
the second half of the 20th century. It was a lingua franca in the Holy Roman Empire. People of German origin are
found in various places around the globe. The United States is home to approximately 50 million German Americans, or one
third of the German diaspora, making it the largest centre of German-descended people outside Germany. Brazil is
the second largest with 5 million people claiming German ancestry. Other significant centres are Canada, Argentina,
South Africa and France each accounting for at least 1 million. While the exact number of German-descended people
is difficult to calculate, the available data makes it safe to claim that the number exceeds 100 million people.
German philosophers have helped shape western philosophy from as early as the Middle Ages (Albertus Magnus). Later,
Leibniz (17th century) and most importantly Kant played central roles in the history of philosophy. Kantianism inspired
the work of Schopenhauer and Nietzsche as well as German idealism defended by Fichte and Hegel. Engels helped develop
communist theory in the second half of the 19th century while Heidegger and Gadamer pursued the tradition of German
philosophy in the 20th century. A number of German intellectuals were also influential in sociology, most notably
Adorno, Habermas, Horkheimer, Luhmann, Simmel, Tönnies, and Weber. The University of Berlin founded in 1810 by linguist
and philosopher Wilhelm von Humboldt served as an influential model for a number of modern western universities.
The work of David Hilbert and Max Planck was crucial to the foundation of modern physics, which Werner Heisenberg
and Erwin Schrödinger developed further. They were preceded by such key physicists as Hermann von Helmholtz, Joseph
von Fraunhofer, and Daniel Gabriel Fahrenheit, among others. Wilhelm Conrad Röntgen discovered X-rays, an accomplishment
that made him the first winner of the Nobel Prize in Physics in 1901. The Walhalla temple for "laudable and distinguished
Germans", features a number of scientists, and is located east of Regensburg, in Bavaria. In the field of music,
Germany claims some of the most renowned classical composers of the world including Bach, Mozart and Beethoven, who
marked the transition between the Classical and Romantic eras in Western classical music. Other composers of the
Austro-German tradition who achieved international fame include Brahms, Wagner, Haydn, Schubert, Händel, Schumann,
Liszt, Mendelssohn Bartholdy, Johann Strauss II, Bruckner, Mahler, Telemann, Richard Strauss, Schoenberg, Orff, and
most recently, Henze, Lachenmann, and Stockhausen. As of 2008[update], Germany is the fourth largest music market
in the world and has exerted a strong influence on dance and rock music, and pioneered trance music. Artists such
as Herbert Grönemeyer, Scorpions, Rammstein, Nena, Dieter Bohlen, Tokio Hotel and Modern Talking have enjoyed international
fame. German musicians and, particularly, the pioneering bands Tangerine Dream and Kraftwerk have also contributed
to the development of electronic music. Germany hosts many large rock music festivals annually. The Rock am Ring
festival is the largest music festival in Germany, and among the largest in the world. German artists also make up
a large percentage of industrial music acts, a genre known in Germany as Neue Deutsche Härte. Germany hosts some of the largest
Goth scenes and festivals in the entire world, with events like Wave-Gothic-Treffen and M'era Luna Festival easily
attracting up to 30,000 people. Amongst Germany's famous artists there are various Dutch entertainers, such as Johannes
Heesters. German cinema dates back to the very early years of the medium with the work of Max Skladanowsky. It was
particularly influential during the years of the Weimar Republic with German expressionists such as Robert Wiene
and Friedrich Wilhelm Murnau. The Nazi era produced mostly propaganda films although the work of Leni Riefenstahl
still introduced new aesthetics in film. From the 1960s, New German Cinema directors such as Volker Schlöndorff,
Werner Herzog, Wim Wenders, Rainer Werner Fassbinder placed West-German cinema back onto the international stage
with their often provocative films, while the Deutsche Film-Aktiengesellschaft controlled film production in the
GDR. More recently, films such as Das Boot (1981), The NeverEnding Story (1984), Run Lola Run (1998), Das Experiment
(2001), Good Bye Lenin! (2003), Gegen die Wand (Head-on) (2004) and Der Untergang (Downfall) (2004) have enjoyed
international success. In 2002 the Academy Award for Best Foreign Language Film went to Caroline Link's Nowhere in
Africa, in 2007 to Florian Henckel von Donnersmarck's The Lives of Others. The Berlin International Film Festival,
held yearly since 1951, is one of the world's foremost film and cinema festivals. Roman Catholicism was the sole
established religion in the Holy Roman Empire until the Reformation changed this drastically. In 1517, Martin Luther
challenged the Catholic Church as he saw it as a corruption of Christian faith. Through this, he altered the course
of European and world history and established Protestantism. The Thirty Years' War (1618–1648) was one of the most
destructive conflicts in European history. The war was fought primarily in what is now Germany, and at various points
involved most of the countries of Europe. The war was fought largely as a religious conflict between Protestants
and Catholics in the Holy Roman Empire. According to the latest nationwide census, Roman Catholics constituted 30.8%
of the total population of Germany, followed by the Evangelical Protestants at 30.3%. Other religions, atheists or
not specified constituted 38.8% of the population at the time. Among "others" are Protestants not included in Evangelical
Church of Germany, and other Christians such as the Restorationist New Apostolic Church. Protestantism was more common
among the citizens of Germany. The North and East of Germany are predominantly Protestant, the South and West rather
Catholic. Nowadays there is a non-religious majority in Hamburg and the East German states. Sport forms an integral
part of German life, as demonstrated by the fact that 27 million Germans are members of a sports club and an additional
twelve million pursue such an activity individually. Football is by far the most popular sport, and the German Football
Federation (Deutscher Fußball-Bund), with more than 6.3 million members, is the largest athletic organisation in the
country. It also attracts the greatest audience, with hundreds of thousands of spectators attending Bundesliga matches
and millions more watching on television. Since the 2006 FIFA World Cup, the internal and external evaluation of
Germany's national image has changed. In the annual Nation Brands Index global survey, Germany became significantly
and repeatedly more highly ranked after the tournament. People in 20 different states assessed the country's reputation
in terms of culture, politics, exports, its people and its attractiveness to tourists, immigrants and investments.
In 2010, Germany was named the world's second most valued nation among 50 countries. Another global opinion poll, for the BBC, found that Germany was recognised as having the most positive influence in the world in 2010: a majority of 59% held a positive view of the country, while 14% held a negative view. Pan-Germanism's origins lie in the
early 19th century following the Napoleonic Wars. The wars launched a new movement that was born in France itself
during the French Revolution. Nationalism during the 19th century threatened the old aristocratic regimes. Many ethnic
groups of Central and Eastern Europe had been divided for centuries, ruled over by the old Monarchies of the Romanovs
and the Habsburgs. Germans, for the most part, had been a loose and disunited people since the Reformation when the
Holy Roman Empire was shattered into a patchwork of states. The new German nationalists, mostly young reformers such
as Johann Tillmann of East Prussia, sought to unite all the German-speaking and ethnic-German (Volksdeutsche) people.
By the 1860s the Kingdom of Prussia and the Austrian Empire were the two most powerful nations dominated by German-speaking
elites. Both sought to expand their influence and territory. The Austrian Empire – like the Holy Roman Empire – was
a multi-ethnic state, but German-speaking people there did not have an absolute numerical majority; the creation
of the Austro-Hungarian Empire was one result of the growing nationalism of other ethnicities especially the Hungarians.
Prussia under Otto von Bismarck would ride on the coat-tails of nationalism to unite all of modern-day Germany. The
German Empire ("Second Reich") was created in 1871 following the proclamation of Wilhelm I as head of a union of
German-speaking states, while disregarding millions of its non-German subjects who desired self-determination from
German rule. Following the defeat in World War I, influence of German-speaking elites over Central and Eastern Europe
was greatly limited. Under the Treaty of Versailles, Germany was substantially reduced in size. Austria-Hungary was split
up. Rump-Austria, which to a certain extent corresponded to the German-speaking areas of Austria-Hungary (a complete
split into language groups was impossible due to multi-lingual areas and language-exclaves) adopted the name "German-Austria"
(German: Deutschösterreich). The name German-Austria was forbidden by the victorious powers of World War I. Volga
Germans living in the Soviet Union were interned in gulags or forcibly relocated during the Second World War. For
decades after the Second World War, any national symbol or expression was a taboo. However, the Germans are becoming
increasingly patriotic. In a 2009 study, in which some 2,000 German citizens aged 14 and upwards filled out
a questionnaire, nearly 60% of those surveyed agreed with the sentiment "I'm proud to be German." And 78%, if free
to choose their nation, would opt for German nationality with "near or absolute certainty". Another study in 2009,
carried out by the Identity Foundation in Düsseldorf, showed that 73% of Germans were proud of their country, twice as many as eight years earlier. According to Eugen Buss, a sociology professor at the University of Hohenheim, there's
an ongoing normalisation and more and more Germans are becoming openly proud of their country. In the midst of the
European sovereign-debt crisis, Radek Sikorski, Poland's Foreign Minister, stated in November 2011, "I will probably
be the first Polish foreign minister in history to say so, but here it is: I fear German power less than I am beginning
to fear German inactivity. You have become Europe's indispensable nation." According to Jacob Heilbrunn, a senior
editor at The National Interest, such a statement is unprecedented when taking into consideration Germany's history.
"This was an extraordinary statement from a top official of a nation that was ravaged by Germany during World War
II. And it reflects a profound shift taking place throughout Germany and Europe about Berlin's position at the center
of the Continent." Heilbrunn believes that the adage, "what was good for Germany was bad for the European Union"
has been supplanted by a new mentality—what is in the interest of Germany is also in the interest of its neighbors.
The evolution in Germany's national identity stems from focusing less on its Nazi past and more on its Prussian history,
which many Germans believe was betrayed—and not represented—by Nazism. The evolution is further precipitated by Germany's
conspicuous position as Europe's strongest economy. Indeed, this German sphere of influence has been welcomed by
the countries that border it, as demonstrated by Polish foreign minister Radek Sikorski's effusive praise for his
country's western neighbor. This shift in thinking is boosted by a newer generation of Germans who see World War
II as a distant memory.
Definitions of "Southeast Asia" vary, but most definitions include the area represented by the countries (sovereign states
and dependent territories) listed below. All of the states except for East Timor are members of the Association of
Southeast Asian Nations (ASEAN). The area, together with part of South Asia, was widely known as the East Indies
or simply the Indies until the 20th century. Christmas Island and the Cocos (Keeling) Islands are considered part of Southeast Asia though they are governed by Australia. Sovereignty issues exist
over some territories in the South China Sea. Papua New Guinea has stated that it might join ASEAN, and is currently
an observer. The Andaman and Nicobar Islands of India are geographically considered part of Southeast Asia. Eastern
Bangladesh and the Seven Sister States of India are culturally part of Southeast Asia and sometimes considered both
South Asian and Southeast Asian. The Seven Sister States of India are also geographically part of Southeast Asia. The rest of the island of New Guinea, which is not part of Indonesia (namely, Papua New Guinea), is sometimes included, as are Palau, Guam, and the Northern Mariana Islands, which were all part of the Spanish East Indies. Homo sapiens reached the region by around 45,000 years ago, having moved eastwards from the Indian subcontinent.
Homo floresiensis also lived in the area up until 12,000 years ago, when they became extinct. Austronesian people,
who form the majority of the modern population in Indonesia, Malaysia, Brunei, East Timor, and the Philippines, may
have migrated to Southeast Asia from Taiwan. They arrived in Indonesia around 2000 BC, and as they spread through
the archipelago, they often settled along coastal areas and confined indigenous peoples such as Negritos of the Philippines
or Papuans of New Guinea to inland regions. The Jawa Dwipa Hindu kingdom in Java and Sumatra existed around 200 BCE.
The history of the Malay-speaking world began with the advent of Indian influence, which dates back to at least the
3rd century BCE. Indian traders came to the archipelago both for its abundant forest and maritime products and to
trade with merchants from China, who also discovered the Malay world at an early date. Both Hinduism and Buddhism
were well established in the Malay Peninsula by the beginning of the 1st century CE, and from there spread across
the archipelago. The Majapahit Empire was an Indianised kingdom based in eastern Java from 1293 to around 1500. Its
greatest ruler was Hayam Wuruk, whose reign from 1350 to 1389 marked the empire's peak when it dominated other kingdoms
in the southern Malay Peninsula, Borneo, Sumatra, and Bali. Various sources such as the Nagarakertagama also mention
that its influence spanned over parts of Sulawesi, Maluku, and some areas of western New Guinea and the Philippines,
making it the largest empire to ever exist in Southeast Asian history. In the 11th century, a turbulent period occurred
in the history of Maritime Southeast Asia. The Indian Chola navy crossed the ocean and attacked the Srivijaya kingdom
of Sangrama Vijayatungavarman; Kadaram (Kedah), the capital of the powerful maritime kingdom, was sacked and the
king was taken captive. Along with Kadaram, Pannai in present-day Sumatra and Malaiyur and the Malayan peninsula
were attacked too. Soon after that, the king of Kedah Phra Ong Mahawangsa became the first ruler to abandon the traditional
Hindu faith, and converted to Islam with the Sultanate of Kedah established in year 1136. Samudera Pasai converted
to Islam in the year 1267; the King of Malacca, Parameswara, married the princess of Pasai, and their son became the
first sultan of Malacca. Soon, Malacca became the center of Islamic study and maritime trade, and other rulers followed
suit. Indonesian religious leader and Islamic scholar Hamka (1908–1981) wrote in 1961: "The development of Islam
in Indonesia and Malaya is intimately related to a Chinese Muslim, Admiral Zheng He." There are several theories about the Islamisation process in Southeast Asia. One theory is trade. The expansion of trade among West Asia, India
and Southeast Asia helped the spread of the religion as Muslim traders from Southern Yemen (Hadramout) brought Islam
to the region with their large volume of trade. Many settled in Indonesia, Singapore, and Malaysia. This is evident
in the Arab-Indonesian, Arab-Singaporean, and Arab-Malay populations who were at one time very prominent in each
of their countries. The second theory is the role of missionaries or Sufis. The Sufi missionaries
played a significant role in spreading the faith by introducing Islamic ideas to the region. Finally, the ruling
classes embraced Islam and that further aided the permeation of the religion throughout the region. The ruler of
the region's most important port, Malacca Sultanate, embraced Islam in the 15th century, heralding a period of accelerated
conversion to Islam throughout the region, as Islam provided a positive force among the ruling and trading classes.
During World War II, Imperial Japan invaded most of the former western colonies. The Shōwa occupation regime committed
violent actions against civilians such as the Manila massacre and the implementation of a system of forced labour,
such as the one involving 4 to 10 million romusha in Indonesia. A later UN report stated that four million people
died in Indonesia as a result of famine and forced labour during the Japanese occupation. The Allied powers who defeated
Japan in the South-East Asian theatre of World War II then contended with nationalists to whom the occupation authorities
had granted independence. Indonesia is the largest country in Southeast Asia, and it is also the largest archipelago
in the world by size (according to the CIA World Factbook). Geologically, the Indonesian Archipelago is one of the
most volcanically active regions in the world. Geological uplifts in the region have also produced some impressive
mountains, culminating in Puncak Jaya in Papua, Indonesia at 5,030 metres (16,500 feet), on the island of New Guinea;
it is the only place where ice glaciers can be found in Southeast Asia. The second tallest peak is Mount Kinabalu
in Sabah, Malaysia, on the island of Borneo, with a height of 4,095 metres (13,435 feet). The highest mountain in mainland Southeast Asia is Hkakabo Razi at 5,967 metres; it is found in northern Burma and shares the same mountain range as its parent peak, Mount Everest. The climate in Southeast Asia is mainly tropical: hot and humid all year round with plentiful rainfall.
Northern Vietnam and the Myanmar Himalayas are the only regions in Southeast Asia that feature a subtropical climate,
which has a cold winter with snow. The majority of Southeast Asia has a wet and dry season caused by seasonal shift
in winds or monsoon. The tropical rain belt causes additional rainfall during the monsoon season. The rain forest
is the second largest on earth (with the Amazon being the largest). An exception to this type of climate and vegetation
is the mountainous areas in the northern region, where high altitudes lead to milder temperatures and a drier landscape. Other parts fall outside this climate zone because they are desert-like. The Indonesian Archipelago is split by the Wallace
Line. This line runs along what is now known to be a tectonic plate boundary, and separates Asian (Western) species
from Australasian (Eastern) species. The islands between Java/Borneo and Papua form a mixed zone, where both types
occur, known as Wallacea. As the pace of development accelerates and populations continue to expand in Southeast
Asia, concern has increased regarding the impact of human activity on the region's environment. A significant portion
of Southeast Asia, however, has not changed greatly and remains an unaltered home to wildlife. The nations of the
region, with only a few exceptions, have become aware of the need to maintain forest cover not only to prevent soil
erosion but to preserve the diversity of flora and fauna. Indonesia, for example, has created an extensive system
of national parks and preserves for this purpose. Even so, such species as the Javan rhinoceros face extinction,
with only a handful of the animals remaining in western Java. The shallow waters of the Southeast Asian coral reefs
have the highest levels of biodiversity among the world's marine ecosystems, where coral, fish and molluscs abound.
According to Conservation International, marine surveys suggest that the marine life diversity in the Raja Ampat
(Indonesia) is the highest recorded on Earth. Diversity is considerably greater than in any other area sampled in the
Coral Triangle composed of Indonesia, Philippines, and Papua New Guinea. The Coral Triangle is the heart of the world's
coral reef biodiversity, the Verde Passage is dubbed by Conservation International as the world's "center of the
center of marine shorefish biodiversity". The whale shark, the world's largest species of fish and 6 species of sea
turtles can also be found in the South China Sea and the Pacific Ocean territories of the Philippines. While Southeast
Asia is rich in flora and fauna, Southeast Asia is facing severe deforestation which causes habitat loss for various
endangered species such as orangutan and the Sumatran tiger. Predictions have been made that more than 40% of the
animal and plant species in Southeast Asia could be wiped out in the 21st century. At the same time, haze has been
a regular occurrence. The two worst regional hazes were in 1997 and 2006 in which multiple countries were covered
with thick haze, mostly caused by "slash and burn" activities in Sumatra and Borneo. In reaction, several countries
in Southeast Asia signed the ASEAN Agreement on Transboundary Haze Pollution to combat haze pollution. Even prior
to the penetration of European interests, Southeast Asia was a critical part of the world trading system. A wide
range of commodities originated in the region, but especially important were spices such as pepper, ginger, cloves,
and nutmeg. The spice trade initially was developed by Indian and Arab merchants, but it also brought Europeans to
the region. First Spaniards (Manila galleon) and Portuguese, then the Dutch, and finally the British and French became
involved in this enterprise in various countries. The penetration of European commercial interests gradually evolved
into annexation of territories, as traders lobbied for an extension of control to protect and expand their activities.
As a result, the Dutch moved into Indonesia, the British into Malaya and parts of Borneo, the French into Indochina,
and the Spanish and the US into the Philippines. The overseas Chinese community has played a large role in the development
of the economies in the region. These business communities are connected through the bamboo network, a network of
overseas Chinese businesses operating in the markets of Southeast Asia that share common family and cultural ties.
The origins of Chinese influence can be traced to the 16th century, when Chinese migrants from southern China settled
in Indonesia, Thailand, and other Southeast Asian countries. Chinese populations in the region saw a rapid increase
following the Communist Revolution in 1949, which forced many refugees to emigrate outside of China. The region's
economy greatly depends on agriculture; rice and rubber have long been prominent exports. Manufacturing and services
are becoming more important. An emerging market, Indonesia is the largest economy in this region. Newly industrialised
countries include Indonesia, Malaysia, Thailand, and the Philippines, while Singapore and Brunei are affluent developed
economies. The rest of Southeast Asia is still heavily dependent on agriculture, but Vietnam is notably making steady
progress in developing its industrial sectors. The region notably manufactures textiles, electronic high-tech goods
such as microprocessors and heavy industrial products such as automobiles. Oil reserves in Southeast Asia are plentiful.
Tourism has been a key factor in economic development for many Southeast Asian countries, especially Cambodia. According
to UNESCO, "tourism, if correctly conceived, can be a tremendous development tool and an effective means of preserving
the cultural diversity of our planet." Since the early 1990s, "even the non-ASEAN nations such as Cambodia, Laos,
Vietnam and Burma, where the income derived from tourism is low, are attempting to expand their own tourism industries."
In 1995, Singapore was the regional leader in tourism receipts relative to GDP at over 8%. By 1998, those receipts
had dropped to less than 6% of GDP while Thailand and Lao PDR increased receipts to over 7%. Since 2000, Cambodia
has surpassed all other ASEAN countries and generated almost 15% of its GDP from tourism in 2006. Southeast Asia
has an area of approximately 4,000,000 km2 (1.6 million square miles). As of 2013, around 625 million people lived
in the region, more than a fifth of them (143 million) on the Indonesian island of Java, the most densely populated
large island in the world. Indonesia is the most populous country with 255 million people as of 2015, and also the
4th most populous country in the world. The distribution of the religions and people is diverse in Southeast Asia
and varies by country. Some 30 million overseas Chinese also live in Southeast Asia, most prominently in Christmas
Island, Indonesia, Malaysia, the Philippines, Singapore, and Thailand, and also, as the Hoa, in Vietnam. In modern
times, the Javanese are the largest ethnic group in Southeast Asia, with more than 100 million people, mostly concentrated
in Java, Indonesia. In Burma, the Burmese account for more than two-thirds of the country's population, while
ethnic Thais and Vietnamese account for about four-fifths of the respective populations of those countries. Indonesia
is clearly dominated by the Javanese and Sundanese ethnic groups, while Malaysia is split between half Malays and
one-quarter Chinese. Within the Philippines, the Tagalog, Cebuano, Ilocano, and Hiligaynon groups are significant.
Islam is the most widely practised religion in Southeast Asia, with approximately 240 million adherents, which translates to about 40% of the entire population; there are Muslim majorities in Indonesia, Brunei, Malaysia and the southern Philippines, and Indonesia is the world's most populous Muslim country. Countries in Southeast Asia practice
many different religions. Buddhism is predominant in Thailand, Cambodia, Laos, Burma, Vietnam and Singapore. Ancestor
worship and Confucianism are also widely practised in Vietnam and Singapore. Christianity is predominant in the Philippines,
eastern Indonesia, East Malaysia and East Timor. The Philippines has the largest Roman Catholic population in Asia.
East Timor is also predominantly Roman Catholic due to a history of Portuguese rule. Religions and peoples are diverse
in Southeast Asia and not one country is homogeneous. In the world's most populous Muslim nation, Indonesia, Hinduism
is dominant on islands such as Bali. Christianity also predominates in the rest of the Philippines, New Guinea and Timor. Pockets of Hindu population can also be found around Southeast Asia, in Singapore, Malaysia and elsewhere.
Garuda (Sanskrit: Garuḍa), the phoenix who is the mount (vahanam) of Vishnu, is a national symbol in both Thailand
and Indonesia; in the Philippines, gold images of Garuda have been found on Palawan; gold images of other Hindu gods
and goddesses have also been found on Mindanao. Balinese Hinduism is somewhat different from Hinduism practised elsewhere,
as Animism and local culture is incorporated into it. Christians can also be found throughout Southeast Asia; they
are in the majority in East Timor and the Philippines, Asia's largest Christian nation. In addition, there are also
older tribal religious practices in remote areas of Sarawak in East Malaysia, Highland Philippines and Papua in eastern
Indonesia. In Burma, Sakka (Indra) is revered as a nat. In Vietnam, Mahayana Buddhism is practised, which is influenced
by native animism but with strong emphasis on Ancestor Worship. The culture in Southeast Asia is very diverse: on
mainland Southeast Asia, the culture is a mix of Indochinese (Burma, Cambodia, Laos and Thailand) and Chinese (Singapore and Vietnam), while in Indonesia, the Philippines and Malaysia the culture is a mix of indigenous Austronesian, Indian,
Islamic, Western, and Chinese cultures. Brunei also shows a strong influence from Arabia. Singapore and Vietnam show more Chinese influence: Singapore, although geographically a Southeast Asian nation, is home to a large Chinese majority, and Vietnam was in China's sphere of influence for much of its history. Indian influence in Singapore
is only evident through the Tamil migrants, which influenced, to some extent, the cuisine of Singapore. Throughout
Vietnam's history, it has had no direct influence from India, only indirect influence through contact with the Thai, Khmer and Cham
peoples. The arts of Southeast Asia have affinity with the arts of other areas. Dance in much of Southeast Asia includes
movement of the hands as well as the feet, to express the emotion of the dance and the meaning of the story that the dancer is telling the audience. Most Southeast Asian societies introduced dance into their courts; in particular, Cambodian
royal ballet represented them in the early 7th century before the Khmer Empire, which was highly influenced by Indian
Hinduism. Apsara Dance, famous for strong hand and feet movement, is a great example of Hindu symbolic dance. Puppetry
and shadow plays were also a favoured form of entertainment in past centuries, a famous one being Wayang from Indonesia.
The arts and literature in parts of Southeast Asia are strongly influenced by Hinduism, which was brought to the region centuries ago. Indonesia, despite its conversion to Islam, which opposes certain forms of art, has retained many forms of Hindu-influenced
practices, culture, art and literature. An example is the Wayang Kulit (Shadow Puppet) and literature like the Ramayana.
On November 7, 2003, UNESCO recognized the wayang kulit show as a Masterpiece of Oral and Intangible
Heritage of Humanity. It has been pointed out that Khmer and Indonesian classical arts were concerned with depicting
the life of the gods, but to the Southeast Asian mind the life of the gods was the life of the peoples themselves—joyous,
earthy, yet divine. The Tai, coming late into Southeast Asia, brought with them some Chinese artistic traditions,
but they soon shed them in favour of the Khmer and Mon traditions, and the only indications of their earlier contact
with Chinese arts were in the style of their temples, especially the tapering roof, and in their lacquerware. The
antiquity of palm-leaf writing extends back before the invention of paper around the year 100 in China. Each palm-leaf section held only several lines, written longitudinally across the leaf and bound by twine to the other sections. The outer portion was decorated. The alphabets of Southeast Asia tended to be abugidas until the arrival of the Europeans, whose languages used words that also ended in consonants, not just vowels. Other forms of official documents, which
did not use paper, included Javanese copperplate scrolls. This material would have been more durable than paper in
the tropical climate of Southeast Asia.
Brigham Young University (often referred to as BYU or, colloquially, The Y) is a private research university located in Provo,
Utah, United States. It is owned and operated by The Church of Jesus Christ of Latter-day Saints (LDS Church), and,
excluding online students, is the largest religious university and the third-largest private university in
the United States, with 29,672 on-campus students. Approximately 99 percent of the students are members of the LDS
Church, and one-third of its US students are from Utah. Students attending BYU are required to follow an honor code,
which mandates behavior in line with LDS teachings such as academic honesty, adherence to dress and grooming standards,
and abstinence from extramarital sex and from the consumption of drugs and alcohol. Many students (88 percent of
men, 33 percent of women) either delay enrollment or take a hiatus from their studies to serve as Mormon missionaries.
(Men typically serve for two years, while women serve for 18 months.) An education at BYU is also less expensive
than at similar private universities, since "a significant portion" of the cost of operating the university is subsidized
by the church's tithing funds. BYU offers programs in liberal arts, engineering, agriculture, management, physical
and mathematical sciences, nursing and law. The university is broadly organized into 11 colleges or schools at its
main Provo campus, with certain colleges and divisions defining their own admission standards. The university also
administers two satellite campuses, one in Jerusalem and one in Salt Lake City, while its parent organization, the
Church Educational System (CES), sponsors sister schools in Hawaii and Idaho. The university's primary focus is on
undergraduate education, but it also has 68 master's and 25 doctoral degree programs. Brigham Young University's
origin can be traced back to 1862 when a man named Warren Dusenberry started a Provo school in a prominent adobe
building called Cluff Hall, which was located in the northeast corner of 200 East and 200 North. On October 16, 1875,
Brigham Young, then president of the LDS Church, personally purchased the Lewis Building after previously hinting
that a school would be built in Draper, Utah in 1867. Hence, October 16, 1875 is commonly held as BYU's founding
date. Said Young about his vision: "I hope to see an Academy established in Provo... at which the children of the
Latter-day Saints can receive a good education unmixed with the pernicious atheistic influences that are found in
so many of the higher schools of the country." The school broke off from the University of Deseret and became Brigham
Young Academy, with classes commencing on January 3, 1876. Warren Dusenberry served as interim principal of the school
for several months until April 1876 when Brigham Young's choice for principal arrived—a German immigrant named Karl
Maeser. Under Maeser's direction the school educated many luminaries including future U.S. Supreme Court Justice
George Sutherland and future U.S. Senator Reed Smoot among others. The school, however, did not become a university
until the end of Benjamin Cluff, Jr's term at the helm of the institution. At that time, the school was also still
privately supported by members of the community and was not absorbed and sponsored officially by the LDS Church until
July 18, 1896. A series of odd managerial decisions by Cluff led to his demotion; however, in his last official act,
he proposed to the Board that the Academy be named "Brigham Young University". The suggestion received a large amount
of opposition, with many members of the Board saying that the school wasn't large enough to be a university, but
the decision ultimately passed. One opponent to the decision, Anthon H. Lund, later said, "I hope their head will
grow big enough for their hat." In 1903, Brigham Young Academy was dissolved, and was replaced by two institutions:
Brigham Young High School, and Brigham Young University. (The BY High School class of 1907 was ultimately responsible
for the famous giant "Y" that is to this day embedded on a mountain near campus.) The Board elected George H. Brimhall
as the new President of BYU. He had not received a high school education until he was forty. Nevertheless, he was
an excellent orator and organizer. Under his tenure in 1904 the new Brigham Young University bought 17 acres (69,000
m2) of land from Provo called "Temple Hill". After some controversy among locals over BYU's purchase of this property,
construction began in 1909 on the first building on the current campus, the Karl G. Maeser Memorial. Brimhall also
presided over the University during a brief crisis involving the theory of evolution. The religious nature of the
school seemed at the time to collide with this scientific theory. Joseph F. Smith, LDS Church president, settled
the question for a time by asking that evolution not be taught at the school. A few have described the school at
this time as nothing more than a "religious seminary". However, many of its graduates at this time would go on to
great success and become well renowned in their fields. Franklin S. Harris was appointed the university's president
in 1921. He was the first BYU president to have a doctoral degree. Harris made several important changes to the school,
reorganizing it into a true university, whereas before, its organization had remnants of the Academy days. At the
beginning of his tenure, the school was not officially recognized as a university by any accreditation organization.
By the end of his term, the school was accredited under all major accrediting organizations at the time. He was eventually
replaced by Howard S. McDonald, who received his doctorate from the University of California. When he first received
the position, the Second World War had just ended, and thousands of students were flooding into BYU. By the end of
his stay, the school had grown nearly five times to an enrollment of 5,440 students. The university did not have
the facilities to handle such a large influx, so he bought part of an Air Force Base in Ogden, Utah and rebuilt it
to house some of the students. The next president, Ernest L. Wilkinson, also oversaw a period of intense growth,
as the school adopted an accelerated building program. Wilkinson was responsible for the building of over eighty
structures on the campus, many of which still stand. During his tenure, the student body increased six times, making
BYU the largest private school at the time. The quality of the students also increased, leading to higher educational
standards at the school. Finally, Wilkinson reorganized the LDS Church units on campus, with ten stakes and over
100 wards being added during his administration. Dallin H. Oaks replaced Wilkinson as president in 1971. Oaks continued
the expansion of his predecessor, adding a law school and proposing plans for a new School of Management. During
his administration, a new library was also added, doubling the library space on campus. Jeffrey R. Holland followed
as president in 1980, encouraging a combination of educational excellence and religious faith at the university.
He believed that one of the school's greatest strengths was its religious nature and that this should be taken advantage
of rather than hidden. During his administration, the university added a campus in Jerusalem, now called the BYU
Jerusalem Center. In 1989, Holland was replaced by Rex E. Lee. Lee was responsible for the Benson Science Building
and the Museum of Art on campus. A cancer victim, Lee is memorialized annually at BYU during a cancer fundraiser
called the Rex Lee Run. Shortly before his death, Lee was replaced in 1995 by Merrill J. Bateman. Bateman was responsible
for the building of 36 new buildings for the university both on and off campus, including the expansion of the Harold
B. Lee Library. He was also one of several key college leaders who brought about the creation of the Mountain West
Conference, which BYU's athletics program joined — BYU previously participated in the Western Athletic Conference.
A BYU satellite TV network also opened in 2000 under his leadership. Bateman was also president during the September
11th attacks in 2001. The planes crashed on a Tuesday, hours before the weekly devotional normally held at BYU. Previous
plans for the devotional were altered, as Bateman led the student body in a prayer for peace. Bateman was followed
by Cecil O. Samuelson in 2003. Samuelson was succeeded by Kevin J Worthen in 2014. BYU accepted 49 percent of the
11,423 people who applied for admission in the summer term and fall semester of 2013. The average GPA for these admitted
students was 3.82. U.S. News and World Report describes BYU's selectivity as being "more selective" and compares
it with such universities as the University of Texas at Austin and The Ohio State University. In addition, BYU is
ranked 26th among colleges with the most freshman Merit Scholars, with 88 in 2006. BYU has one of the highest percentages of accepted applicants who go on to enroll (78 percent in 2010). For 2016, U.S. News & World Report ranked BYU as
tied for 66th among national universities in the United States. A 2013 peer-reviewed Quarterly Journal of Economics study of where the nation's top high school students choose to enroll ranked BYU No. 21. The Princeton
Review has ranked BYU the best value for college in 2007, and its library is consistently ranked in the nation's
top ten — No. 1 in 2004 and No. 4 in 2007. BYU is also ranked No. 19 in the U.S. News and World Report's "Great Schools,
Great Prices" lineup, and No. 12 in lowest student-incurred debt. Due in part to the school's emphasis on undergraduate
research, in rankings for 2008-2009, BYU was ranked No. 10 nationally for the number of students who go on to earn
PhDs, No. 1 nationally for students who go on to dental school, No. 6 nationally for students who go on to law school,
and No. 10 nationally for students who go on to medical school. BYU is designated as a research university with high
research activity by the Carnegie Foundation for the Advancement of Teaching. Forbes Magazine ranked it as the
No. 1 "Top University to Work For in 2014" and as the best college in Utah. In 2009, the university's Marriott School
of Management received a No. 5 ranking by BusinessWeek for its undergraduate programs, and its MBA program was ranked
by several sources: No. 22 ranking by BusinessWeek, No. 16 by Forbes, and No. 29 by U.S. News & World Report. Among
regional schools the MBA program was ranked No. 1 by The Wall Street Journal's most recent ranking (2007), and it
was ranked No. 92 among business schools worldwide in 2009 by Financial Times. For 2009, the university's School
of Accountancy, which is housed within the Marriott School, received two No. 3 rankings for its undergraduate program—one
by Public Accounting Report and the other by U.S. News & World Report. The same two reporting agencies also ranked
the school's MAcc program No. 3 and No. 8 in the nation, respectively. In 2010, an article in the Wall Street Journal
listing institutions whose graduates were the top-rated by recruiters ranked BYU No. 11. Using 2010 fiscal year data,
the Association of University Technology Managers ranked BYU No. 3 in an evaluation of universities creating the
most startup companies through campus research. Scientists associated with BYU have created some notable inventions.
Philo T. Farnsworth, inventor of the electronic television, received his education at BYU, and later returned to
do fusion research, receiving an honorary degree from the university. Harvey Fletcher, also an alumnus of BYU, inventor
of stereophonic sound, went on to carry out the now famous oil-drop experiment with Robert Millikan, and was later
Founding Dean of the BYU College of Engineering. H. Tracy Hall, inventor of the man-made diamond, left General Electric
in 1955 and became a full professor of chemistry and Director of Research at BYU. While there, he invented a new
type of diamond press, the tetrahedral press. In student achievements, BYU Ad Lab teams won both the 2007 and 2008
L'Oréal National Brandstorm Competition, and students developed the Magnetic Lasso algorithm found in Adobe Photoshop.
In prestigious scholarships, BYU has produced 10 Rhodes Scholars, four Gates Scholars in the last six years, and
in the last decade has claimed 41 Fulbright scholars and 3 Jack Kent Cooke scholars. Over three quarters of the student
body has some proficiency in a second language (numbering 107 languages in total). This is partly because 45 percent of the student body at BYU has served as missionaries for the LDS Church, and many of them learned a foreign language as part of their mission assignment. During any given semester, about one-third of the student body is enrolled
in foreign language classes, a rate nearly four times the national average. BYU offers courses in over 60 different
languages, many with advanced courses that are seldom offered elsewhere. Several of its language programs are the
largest of their kind in the nation, the Russian program being one example. The university was selected by the United
States Department of Education as the location of the national Middle East Language Resource Center, making the school
a hub for experts on that region. It was also selected as a Center for International Business Education Research,
a function of which is to train business employees in international languages and relations. Beyond this, BYU also
runs a very large study abroad program, with satellite centers in London, Jerusalem, and Paris, as well as more than
20 other sites. Nearly 2,000 students take advantage of these programs yearly. In 2001, the Institute of International
Education ranked BYU as the number one university in the U.S. to offer students study abroad opportunities. The BYU
Jerusalem Center, which was closed in 2000 due to student security concerns related to the Second Intifada and, more
recently, the 2006 Israel-Lebanon conflict, was reopened to students in the Winter 2007 semester. A few special additions
enhance the language-learning experience. For example, BYU's International Cinema, featuring films in several languages,
is the largest and longest-running university-run foreign film program in the country. As already noted, BYU also
offers an intensive foreign language living experience, the Foreign Language Student Residence. This is an on-campus
apartment complex where students commit to speak only their chosen foreign language while in their apartments. Each
apartment has at least one native speaker to ensure correct language usage. In 1992, the university drafted a new
Statement on Academic Freedom, specifying that limitations may be placed upon "expression with students or in public
that: (1) contradicts or opposes, rather than analyzes or discusses, fundamental Church doctrine or policy; (2) deliberately
attacks or derides the Church or its general leaders; or (3) violates the Honor Code because the expression is dishonest,
illegal, unchaste, profane, or unduly disrespectful of others." These restrictions have caused some controversy as
several professors have been disciplined according to the new rule. The American Association of University Professors
has claimed that "infringements on academic freedom are distressingly common and that the climate for academic freedom
is distressingly poor." The new rules have not affected BYU's accreditation, as the university's chosen accrediting
body allows "religious colleges and universities to place limitations on academic freedom so long as they publish
those limitations candidly", according to associate academic vice president Jim Gordon. The AAUP's concern was not
with restrictions on the faculty member's religious expression but with a failure, as alleged by the faculty member
and AAUP, that the restrictions had not been adequately specified in advance by BYU: "The AAUP requires that any
doctrinal limitations on academic freedom be laid out clearly in writing. We [AAUP] concluded that BYU had failed
to do so adequately." Brigham Young University is a part of the Church Educational System of the LDS Church. It is organized
under a Board of Trustees, with the President of the Church (currently Thomas S. Monson) as chairman. This board
consists of the same people as the Church Board of Education, a pattern that has been in place since 1939. Prior
to 1939, BYU had a separate board of trustees that was subordinate to the Church Board of Education. The President
of BYU, currently Kevin J Worthen, reports to the Board, through the Commissioner of Education. The university operates
under 11 colleges or schools, which collectively offer 194 bachelor's degree programs, 68 master's degree programs,
25 PhD programs, and a Juris Doctor program. BYU also manages some courses and majors through the David M. Kennedy
Center for International Studies and "miscellaneous" college departments, including Undergraduate Education, Graduate
Studies, Independent Study, Continuing Education, and the Honors Program. BYU's Winter semester ends in April, earlier than at most universities, since there is no Spring break; this allows students to pursue internships and other summer activities earlier. A typical academic year is broken up into two semesters: Fall (September–December) and
Winter (January–April), as well as two shorter terms during the summer months: Spring (May–June) and Summer (July–August).
The main campus in Provo, Utah, United States sits on approximately 560 acres (2.3 km2) nestled at the base of the
Wasatch Mountains and includes 295 buildings. The buildings feature a wide variety of architectural styles, each
building being built in the style of its time. The grass, trees, and flower beds on BYU's campus are impeccably maintained.
Furthermore, views of the Wasatch Mountains (including Mount Timpanogos) can be seen from the campus. BYU's Harold
B. Lee Library (also known as "HBLL"), which The Princeton Review ranked as the No. 1 "Great College Library" in
2004, has approximately 8½ million items in its collections, contains 98 miles (158 km) of shelving, and can seat
4,600 people. The Spencer W. Kimball Tower, shortened to SWKT and pronounced Swicket by many students, is home to
several of the university's departments and programs and is the tallest building in Provo, Utah. Furthermore, BYU's
Marriott Center, used as a basketball arena, can seat over 22,000 and is one of the largest on-campus arenas in the
nation. Notably absent from the campus of this church-owned university is a chapel. Nevertheless, LDS Church services for students are conducted on campus each Sunday, and due to the large number of students attending these services, nearly all of the buildings and possible meeting spaces on campus are utilized (in addition, many
students attend services off campus in LDS chapels in the surrounding communities). The campus is home to several
museums containing exhibits from many different fields of study. BYU's Museum of Art, for example, is one of the
largest and most attended art museums in the Mountain West. This Museum aids in academic pursuits of students at
BYU via research and study of the artworks in its collection. The Museum is also open to the general public and provides
educational programming. The Museum of Peoples and Cultures is a museum of archaeology and ethnology. It focuses
on native cultures and artifacts of the Great Basin, American Southwest, Mesoamerica, Peru, and Polynesia. Home to
more than 40,000 artifacts and 50,000 photographs, it documents BYU's archaeological research. The BYU Museum of
Paleontology was built in 1976 to display the many fossils found by BYU's Dr. James A. Jensen. It holds many artifacts
from the Jurassic Period (210-140 million years ago), and is one of the top five collections in the world of fossils
from that time period. It has been featured in magazines, newspapers, and on television internationally. The museum
receives about 25,000 visitors every year. The Monte L. Bean Life Science Museum was formed in 1978. It features
several forms of plant and animal life on display and available for research by students and scholars. The campus
also houses several performing arts facilities. The de Jong Concert Hall seats 1282 people and is named for Gerrit
de Jong Jr. The Pardoe Theatre is named for T. Earl and Kathryn Pardoe. Students use its stage in a variety of theatre
experiments, as well as for Pardoe Series performances. It seats 500 people, and has quite a large stage with a proscenium
opening of 19 by 55 feet (5.8 by 16.8 m). The Margetts Theatre was named for Philip N. Margetts, a prominent Utah theatre
figure. A smaller, black box theater, it allows a variety of seating and staging formats. It seats 125, and measures
30 by 50 feet (9.1 by 15.2 m). The Nelke Theatre, named for one of BYU's first drama teachers, is used largely for instruction
in experimental theater. It seats 280. BYU has designated energy conservation, products and materials, recycling,
site planning and building design, student involvement, transportation, water conservation, and zero waste events
as top priority categories in which to further its efforts to be an environmentally sustainable campus. The university
has stated that "we have a responsibility to be wise stewards of the earth and its resources." BYU is working to
increase the energy efficiency of its buildings by installing variable speed drives on all pumps and fans, replacing incandescent lighting with fluorescent lighting, retrofitting campus buildings with low-E reflective glass, and upgrading roof insulation to prevent heat loss. The student groups BYU Recycles, Eco-Response, and BYU Earth educate students,
faculty, staff, and administrators about how the campus can decrease its environmental impact. BYU Recycles spearheaded
the recent campaign to begin recycling plastics, which the university did after a year of student campaigning. The
BYU Ballroom Dance Company is known as one of the best formation ballroom dance teams in the world, having won the
U.S. National Formation Dance Championship every year since 1982. BYU's Ballroom dance team has won first place in
Latin or Standard (or both) many times when they have competed at the Blackpool Dance Festival, and they were the
first U.S. team to win the formation championships at the famed British Championships in Blackpool, England, in 1972. The NDCA National DanceSport championships have been held at BYU for several years, and BYU holds dozens of ballroom
dance classes each semester and is consequently the largest collegiate ballroom dance program in the world. In addition,
BYU has a number of other notable dance teams and programs. These teams include the Theatre Ballet, Contemporary
Dance Theatre, Living Legends, and International Folk Dance Ensemble. The Living Legends perform Latin, Native American,
and Polynesian dancing. BYU boasts one of the largest dance departments in the nation. Many students from all different
majors across campus participate in various dance classes each semester. BYU has 21 NCAA varsity teams. Nineteen
of these teams played mainly in the Mountain West Conference from its inception in 1999 until the school left that
conference in 2011. Prior to that time BYU teams competed in the Western Athletic Conference. All teams are named
the "Cougars", and Cosmo the Cougar has been the school's mascot since 1953. The school's fight song is the Cougar
Fight Song. Because many of its players serve full-time missions for two years (men at 18, women at 19), BYU athletes are often older on average than other schools' players. The NCAA allows students to serve missions
for two years without subtracting that time from their eligibility period. This has caused minor controversy, but
is largely recognized as not lending the school any significant advantage, since players receive no athletic and
little physical training during their missions. BYU has also received attention from sports networks for refusal
to play games on Sunday, as well as expelling players due to honor code violations. Beginning in the 2011 season,
BYU football competes in college football as an independent. In addition, most other sports now compete in the West
Coast Conference. Teams in swimming and diving and indoor track and field for both men and women joined the men's
volleyball program in the Mountain Pacific Sports Federation. For outdoor track and field, the Cougars became an
Independent. Softball returned to the Western Athletic Conference, but spent only one season in the WAC; the team
moved to the Pacific Coast Softball Conference after the 2012 season. The softball program may move again after the
2013 season; the July 2013 return of Pacific to the WCC will enable that conference to add softball as an official
sport. BYU's stated mission "is to assist individuals in their quest for perfection and eternal life." BYU is thus
considered by its leaders to be at heart a religious institution, wherein, ideally, religious and secular education
are interwoven in a way that encourages the highest standards in both areas. This weaving of the secular and the
religious aspects of a religious university goes back as far as Brigham Young himself, who told Karl G. Maeser when
the Church purchased the school: "I want you to remember that you ought not to teach even the alphabet or the multiplication
tables without the Spirit of God." BYU has been considered by some Latter-day Saints, as well as some university
and church leaders, to be "The Lord's university". This phrase is used in reference to the school's mission as an
"ambassador" to the world for the LDS Church and thus, for Jesus Christ. In the past, some students and faculty have
expressed dissatisfaction with this nickname, stating that it gives students the idea that university authorities
are always divinely inspired and never to be contradicted. Leaders of the school, however, acknowledge that the nickname
represents more a goal that the university strives for and not its current state of being. Leaders encourage students
and faculty to help fulfill the goal by following the teachings of their religion, adhering to the school's honor
code, and serving others with the knowledge they gain while attending. BYU mandates that its students who are members
of the LDS Church be religiously active. Both LDS and non-LDS students are required to provide an endorsement from
an ecclesiastic leader with their application for admittance. Over 900 rooms on BYU campus are used for the purposes
of LDS Church congregations. More than 150 congregations meet on BYU campus each Sunday. "BYU's campus becomes one
of the busiest and largest centers of worship in the world" with about 24,000 persons attending church services on
campus. Some 97 percent of male BYU graduates and 32 percent of female graduates took a hiatus from their undergraduate
studies at one point to serve as LDS missionaries. In October 2012, the LDS Church announced at its general conference
that young men could serve a mission after they turn 18 and have graduated from high school, rather than after age
19 under the old policy. Many young men would often attend a semester or two of higher education prior to beginning
missionary service. This policy change will likely impact what has been the traditional incoming freshman class at
BYU. Female students may now begin their missionary service anytime after turning 19, rather than age 21 under the
previous policy. For males, a full-time mission is two years in length, and for females it lasts 18 months. All students
and faculty, regardless of religion, are required to agree to adhere to an honor code. Early forms of the Church
Educational System Honor Code are found as far back as the days of the Brigham Young Academy and early school President
Karl G. Maeser. Maeser created the "Domestic Organization", which was a group of teachers who would visit students
at their homes to see that they were following the school's moral rules prohibiting obscenity, profanity, smoking,
and alcohol consumption. The Honor Code itself was not created until about 1940, and was used mainly for cases of
cheating and academic dishonesty. President Wilkinson expanded the Honor Code in 1957 to include other school standards.
This led to what the Honor Code represents today: rules regarding chastity, dress, grooming, drugs, and alcohol.
A signed commitment to live the honor code is part of the application process, and must be adhered to by all students,
faculty, and staff. Students and faculty found in violation of standards are either warned or called to meet with
representatives of the Honor Council. In certain cases, students and faculty can be expelled from the school or lose
tenure. Both LDS and non-LDS students are required to meet annually with a Church leader to receive an ecclesiastical
endorsement for both acceptance and continuance. Various LGBT advocacy groups have protested the honor code and criticized
it as being anti-gay, and The Princeton Review ranked BYU as the 3rd most LGBT-unfriendly school in the United States.
BYU's social and cultural atmosphere is unique. The high rate of enrollment at the university by members of The Church
of Jesus Christ of Latter-day Saints (more than 98 percent) results in an amplification of LDS cultural norms; BYU
was ranked by The Princeton Review in 2008 as 14th in the nation for having the happiest students and highest quality
of life. However, the quirkiness and sometimes "too nice" culture is often caricatured, for example, in terms of
marrying early and being very conservative. One of the characteristics of BYU most often pointed out is its reputation
for emphasizing a "marriage culture". Members of The Church of Jesus Christ of Latter-day Saints highly value marriage
and family, especially marriage within the faith. Approximately 51 percent of the graduates in BYU's class of 2005
were married. This is compared to a national marriage average among college graduates of 11 percent. BYU students
on average marry at the age of 22, according to a 2005 study, while the national average age is 25 years for men
and 27 years for women. Many visitors to BYU, and Utah Valley as a whole, report being surprised by the culturally
conservative environment. Brigham Young University's Honor Code, which all BYU students agree to follow as a condition
of studying at BYU, prohibits the consumption of alcoholic beverages, tobacco, etc. As mentioned earlier, The Princeton
Review has rated BYU the "#1 stone cold sober school" in the nation for several years running, an honor which the
late LDS Church president Gordon B. Hinckley had commented on with pride. BYU's 2014 "#1 stone cold sober" rating
marked the 17th year in a row that the school had earned that rating. BYU has used this and other honors awarded
to the school to advertise itself to prospective students, showing that BYU is proud of the rating. According to
the Uniform Crime Reports, incidents of crime in Provo are lower than the national average. Murder is rare, and robberies
are about 1/10 the national average. Business Insider rated BYU as the #1 safest college campus in the nation. The
BYU Broadcasting Technical Operations Center is an HD production and distribution facility that is home to local
PBS affiliate KBYU-TV, local classical music station KBYU-FM Classical 89, BYU Radio, BYU Radio Instrumental, BYU
Radio International, BYUtv and BYU Television International with content in Spanish and Portuguese (both available
via terrestrial, satellite, and internet signals). BYUtv is also available via cable throughout some areas of the
United States. The BYU Broadcasting Technical Operations Center is home to three television production studios, two
television control rooms, radio studios, radio performance space, and master control operations. BYU alumni in academia
include former Dean of the Harvard Business School Kim B. Clark, two-time "world's most influential business thinker"
Clayton M. Christensen, Michael K. Young '73, current president of the University of Washington, Matthew S. Holland,
current president of Utah Valley University, Stan L. Albrecht, current president of Utah State University, Teppo
Felin, Professor at the University of Oxford, and Stephen D. Nadauld, previous president of Dixie State University.
The University also graduated Nobel Prize winner Paul D. Boyer, as well as Philo Farnsworth (inventor of the electronic
television) and Harvey Fletcher (inventor of the hearing aid). Four of BYU's thirteen presidents were alumni of the
University. Additionally, alumni of BYU who have served as business leaders include Citigroup CFO Gary Crittenden
'76, former Dell CEO Kevin Rollins '84, Deseret Book CEO Sheri L. Dew, and Matthew K. McCauley, CEO of children's
clothing company Gymboree. In literature and journalism, BYU has produced several best-selling authors, including
Orson Scott Card '75, Brandon Sanderson '00 & '05, Ben English '98, and Stephenie Meyer '95. BYU also graduated American
activist and contributor for ABC News Elizabeth Smart-Gilmour. Other media personalities include former CBS News
correspondent Art Rascon, award-winning ESPN sportscaster and former Miss America Sharlene Wells Hawkes '86 and former
co-host of CBS's The Early Show Jane Clayson Johnson '90. In entertainment and television, BYU is represented by
Jon Heder '02 (best known for his role as Napoleon Dynamite), writer-director Daryn Tufts '98, Golden Globe-nominated
Aaron Eckhart '94, animator and filmmaker Don Bluth '54, Jeopardy! all-time champion Ken Jennings '00, and Richard
Dutcher, the "Father of Mormon Cinema." In the music industry BYU is represented by lead singer of the Grammy Award
winning band Imagine Dragons Dan Reynolds, multi-platinum selling drummer Elaine Bradley from the band Neon Trees,
crossover dubstep violinist Lindsey Stirling, former American Idol contestant Carmen Rasmusen, Mormon Tabernacle
Choir director Mack Wilberg and pianist Massimiliano Frani. A number of BYU alumni have found success in professional
sports, representing the University in 7 MLB World Series, 5 NBA Finals, and 25 NFL Super Bowls. In baseball, BYU
alumni include All-Stars Rick Aguilera '83, Wally Joyner '84, and Jack Morris '76. Professional basketball players
include three-time NBA champion Danny Ainge '81, 1952 NBA Rookie of the Year and 4-time NBA All-Star Mel Hutchins
'51,[citation needed] three-time Olympic medalist and Hall of Famer Krešimir Ćosić '73, and consensus 2011 national
college player of the year Jimmer Fredette '11, currently with the New York Knicks organization. BYU also claims
notable professional football players including two-time NFL MVP and Super Bowl MVP and Pro Football Hall of Fame
quarterback Steve Young '84 & J.D. '96, Heisman Trophy winner Ty Detmer '90, and two-time Super Bowl winner Jim McMahon.
In golf, BYU alumni include two major championship winners: Johnny Miller ('69) at the 1973 U.S. Open and 1976 British
Open and Mike Weir ('92) at the 2003 Masters.
Department stores today have sections that sell the following: clothing, furniture, home appliances, toys, cosmetics, gardening,
toiletries, sporting goods, do-it-yourself, paint, and hardware, and additionally select other lines of products such
as food, books, jewelry, electronics, stationery, photographic equipment, baby products, and products for pets. Customers
check out near the front of the store or, alternatively, at sales counters within each department. Some are part
of a retail chain of many stores, while others may be independent retailers. In the 1970s, they came under heavy
pressure from discounters. Since 2010, they have come under even heavier pressure from online stores such as Amazon.
The origins of the department store lay in the growth of the conspicuous consumer society at the turn of the 19th
century. As the Industrial Revolution accelerated economic expansion, the affluent middle class grew in size and wealth.
This urbanized social group, sharing a culture of consumption and changing fashion, was the catalyst for the retail
revolution. As rising prosperity and social mobility increased the number of people, especially women (who found
they could shop unaccompanied at department stores without damaging their reputation), with disposable income in
the late Georgian period, window shopping was transformed into a leisure activity and entrepreneurs, like the potter
Josiah Wedgwood, pioneered the use of marketing techniques to influence the prevailing tastes and preferences of
society. All the major British cities had flourishing department stores by the mid- or late nineteenth century. Increasingly, women became the major shoppers in middle-class households. Kendals (formerly Kendal Milne & Faulkner) in Manchester
lays claim to being one of the first department stores and is still known to many of its customers as Kendal's, despite
its 2005 name change to House of Fraser. The Manchester institution dates back to 1836 but had been trading as Watts
Bazaar since 1796. At its zenith the store had buildings on both sides of Deansgate linked by a subterranean passage
"Kendals Arcade" and an art nouveau tiled food hall. The store was especially known for its emphasis on quality and
style over low prices giving it the nickname "the Harrods of the North", although this was due in part to Harrods
acquiring the store in 1919. Other large Manchester stores included Paulden's (currently Debenhams) and Lewis's (now
a Primark). Selfridges was established in 1909 by American-born Harry Gordon Selfridge on Oxford Street. The company's
innovative marketing promoted the radical notion of shopping for pleasure rather than necessity and its techniques
were adopted by modern department stores the world over. The store was extensively promoted through paid advertising.
The shop floors were structured so that goods could be made more accessible to customers. There were elegant restaurants
with modest prices, a library, reading and writing rooms, special reception rooms for French, German, American and
"Colonial" customers, a First Aid Room, and a Silence Room, with soft lights, deep chairs, and double-glazing, all
intended to keep customers in the store as long as possible. Staff members were taught to be on hand to assist customers,
but not too aggressively, and to sell the merchandise. Selfridge attracted shoppers with educational and scientific
exhibits: in 1909, Louis Blériot's monoplane was exhibited at Selfridges (Blériot was the first to fly over the
English Channel), and the first public demonstration of television by John Logie Baird took place in the department
store in 1925. The Paris department store had its roots in the magasin de nouveautés, or novelty store; the first,
the Tapis Rouge, was created in 1784. They flourished in the early 19th century, with La Belle Jardiniere (1824),
Aux Trois Quartiers (1829), and Le Petit Saint Thomas (1830). Balzac described their functioning in his novel César
Birotteau. In the 1840s, with the arrival of the railroads in Paris and the increased number of shoppers they brought,
they grew in size, and began to have large plate glass display windows, fixed prices and price tags, and advertising
in newspapers. A novelty shop called Au Bon Marché had been founded in Paris in 1838 to sell lace, ribbons, sheets,
mattresses, buttons, umbrellas and other assorted goods. It originally had four departments, twelve employees, and
a floor space of three hundred meters. The entrepreneur Aristide Boucicaut became a partner in 1852, and changed
the marketing plan, instituting fixed prices and guarantees that allowed exchanges and refunds, advertising, and
a much wider variety of merchandise. The annual income of the store increased from 500,000 francs in 1852 to five
million in 1860. In 1869 he built a much larger building at 24 rue de Sèvres on the Left Bank, and enlarged the store
again in 1872, with help from the engineering firm of Gustave Eiffel, creator of the Eiffel Tower. The income rose
from twenty million francs in 1870 to 72 million at the time of Boucicaut's death in 1877. The floor space had
increased from three hundred square meters in 1838 to fifty thousand, and the number of employees had increased from
twelve in 1838 to 1788 in 1879. Boucicaut was famous for his marketing innovations: a reading room for husbands while
their wives shopped; extensive newspaper advertising; entertainment for children; and six million catalogs sent out
to customers. By 1880 half the employees were women; unmarried women employees lived in dormitories on the upper
floors. The Grands Magasins Dufayel was a huge department store with inexpensive prices built in 1890 in the northern
part of Paris, where it reached a very large new customer base in the working class. In a neighborhood with few public
spaces, it provided a consumer version of the public square. It educated workers to approach shopping as an exciting
social activity not just a routine exercise in obtaining necessities, just as the bourgeoisie did at the famous department
stores in the central city. Like the bourgeois stores, it helped transform consumption from a business transaction
into a direct relationship between consumer and sought-after goods. Its advertisements promised the opportunity to
participate in the newest, most fashionable consumerism at reasonable cost. The latest technology was featured, such
as cinemas and exhibits of inventions like X-ray machines (that could be used to fit shoes) and the gramophone. Arnold,
Constable was the first American department store. It was founded in 1825 by Aaron Arnold (1794?-1876), an emigrant
from Great Britain, as a small dry goods store on Pine Street in New York City. In 1857 the store moved into a five-story
white marble dry goods palace known as the Marble House. During the Civil War Arnold, Constable was one of the first
stores to issue charge bills of credit to its customers each month instead of on a bi-annual basis. Recognized as
an emporium for high-quality fashions, the store soon outgrew the Marble House and erected a cast-iron building on
Broadway and Nineteenth Street in 1869; this “Palace of Trade” expanded over the years until it was necessary to
move into a larger space in 1914. In 1925, Arnold, Constable merged with Stewart & Company and expanded into the
suburbs, first with a 1937 store in New Rochelle, New York and later in Hempstead and Manhasset on Long Island, and
in New Jersey. Financial problems led to bankruptcy in 1975. In New York City in 1846, Alexander Turney Stewart established
the "Marble Palace" on Broadway, between Chambers and Reade streets. He offered European retail merchandise at fixed
prices on a variety of dry goods, and advertised a policy of providing "free entrance" to all potential customers.
Though it was clad in white marble to look like a Renaissance palazzo, the building's cast iron construction permitted
large plate glass windows that permitted major seasonal displays, especially in the Christmas shopping season. In
1862, Stewart built a new store on a full city block with eight floors and nineteen departments of dress goods and
furnishing materials, carpets, glass and china, toys and sports equipment, ranged around a central glass-covered
court. His innovations included buying from manufacturers for cash and in large quantities, keeping his markup small
and prices low, truthful presentation of merchandise, the one-price policy (so there was no haggling), simple merchandise
returns and cash refund policy, selling for cash and not credit, buyers who searched worldwide for quality merchandise,
departmentalization, vertical and horizontal integration, volume sales, and free services for customers such as waiting
rooms and free delivery of purchases. His innovations were quickly copied by other department stores. In 1877, John
Wanamaker opened the United States' first modern department store in a former Pennsylvania Railroad freight terminal
in Philadelphia. Wanamakers was the first department store to offer fixed prices marked on every article and also
introduced electrical illumination (1878), the telephone (1879), and the use of pneumatic tubes to transport cash
and documents (1880) to the department store business. Subsequent department stores founded in Philadelphia included
Strawbridge and Clothier, Gimbels, Lit Brothers, and Snellenbergs. Marshall Field & Company originated in 1852. It
was the premier department store on the main shopping street in the Midwest, State Street in Chicago. Upscale shoppers
came by train from throughout the region, patronizing nearby hotels. It grew to become a major chain before converting
to the Macy's nameplate on 9 September 2006. Marshall Field's served as a model for other department stores in that it had exceptional customer service. Field's also brought with it the now-famous Frango mints brand, which it acquired from the now-defunct Frederick & Nelson department store and which became closely identified with Marshall Field's and Chicago. Marshall Field's was also responsible for many firsts: it had the first European buying office,
which was located in Manchester, England, and the first bridal registry. The company was the first to introduce the
concept of the personal shopper, and that service was provided without charge in every Field's store, until the chain's
last days under the Marshall Field's name. It was the first store to offer revolving credit and the first department
store to use escalators. Marshall Field's book department in the State Street store was legendary; it pioneered the
concept of the "book signing." Moreover, every year at Christmas, Marshall Field's downtown store windows were filled
with animated displays as part of the downtown shopping district display; the "theme" window displays became famous
for their ingenuity and beauty, and visiting the Marshall Field's windows at Christmas became a tradition for Chicagoans
and visitors alike, as popular a local practice as visiting the Walnut Room with its equally famous Christmas tree
or meeting "under the clock" on State Street. David Jones was started by David Jones, a Welsh merchant who met Hobart
businessman Charles Appleton in London. Appleton established a store in Sydney in 1825 and Jones subsequently established
a partnership with Appleton, moved to Australia in 1835, and the Sydney store became known as Appleton & Jones. When
the partnership was dissolved in 1838, Jones moved his business to premises on the corner of George Street and Barrack
Lane, Sydney. David Jones claims to be the oldest department store in the world still trading under its original
name. Although there were a number of department stores in Australia for much of the 20th Century, including chains
such as Grace Bros. and Waltons, many disappeared during the 1980s and 1990s. Today Myer and David Jones, both located nationally, form a practical department store duopoly in Australia. When the Russian-born migrant Sidney Myer came to Australia in 1899, he formed the Myer retail group with his brother, Elcon Myer. In 1900, they opened
the first Myer department store, in Bendigo, Victoria. Since then, the Myer retail group has grown to be Australia's
largest retailer. Both Myer and David Jones are up-market chains, offering a wide variety of products from mid-range
names to luxury brands. Other retail chain stores such as Target (unrelated to the American chain of the same name),
Venture (now defunct), Kmart and Big W, also located nationally, are considered to be Australia's discount department
stores. Harris Scarfe, though only operating in four states and one territory, is a department store using both the
large full-line and small discount department store formats. Most department stores in Australia have their own credit card companies, each with its own benefits, while the discount department stores do not. From its origins in the fur trade, the Hudson's Bay Company is the oldest corporation in North America
and was the largest department store operator in Canada until the mid-1980s, with locations across the country. It
also previously owned Zellers, another major Canadian department store which ceased to exist in March 2013 after
selling its lease holdings to Target Canada. Other department stores in Canada are: Canadian Tire, Sears Canada,
Ogilvy, Les Ailes de la Mode, Giant Tiger, Co-op, Costco and Holt Renfrew. Grocery superstore chains carry many non-grocery
items akin to a department store. Woolco had 160 stores in Canada when operations ceased (Walmart bought out Woolco
in 1994). Today low-price Walmart is by far the most dominant department store retailer in Canada with outlets throughout
the country. Historically, department stores were a significant component in Canadian economic life, and chain stores
such as Eaton's, Charles Ogilvy Limited, Freiman's, Spencer's, Simpsons, Morgan's, and Woodward's were staples in
their respective communities. Department stores in Canada are similar in design and style to department stores in
the United States. Before the 1950s, the department store held an eminent place in both Canada and Australia, during
both the Great Depression and World War II. Since then, they have suffered from strong competition from specialist
stores. Most recently the competition has intensified with the advent of larger-scale superstores (Jones et al. 1994;
Merrilees and Miller 1997). Competition was not the only reason for the department stores' weakening strength; the
changing structure of cities also affected them. The compact and centralized 19th century city with its mass transit
lines converging on the downtown was a perfect environment for department store growth. But as residents moved out
of the downtown areas to the suburbs, the large, downtown department stores became inconvenient and lost business
to the newer suburban shopping malls. In 2003, U.S. department store sales were surpassed by big-box store sales
for the first time (though some stores may be classified as "big box" by physical layout and "department store" by
merchandise). Since China's opening-up policy of 1979, Chinese department stores have also developed swiftly along with the fast-growing economy. Different department store groups dominate different regions: INTIME, for example, has the biggest market presence in Zhejiang province, while Jinying dominates Jiangsu Province. There are also many other department store groups, such as Pacific, Parkson, Wangfujing and New World, many of which are expanding quickly by listing on the financial markets. The first department store in Hong Kong, Lane Crawford, was opened in 1850 by Scotsmen Thomas Ash Lane and Ninian Crawford on Des Voeux Road, Hong Kong Island. At the beginning,
the store mainly catered to visiting ships' crews as well as British Navy staff and their families. In 1900, the first ethnic-Chinese-owned department store, Sincere, was opened by Ma Ying Piu, who had returned from Australia inspired by David Jones. In 1907, another family of former Hong Kong expatriates in Australia, the Kwoks, returned to Hong Kong and founded Wing On. Denmark has three department store chains: Magasin (1868), Illum (1891), and Salling (1906).
Magasin is by far the largest with 6 stores all over the country, with the flagship store being Magasin du Nord on
Kongens Nytorv in Copenhagen. Illum's only store, on Amagertorv in Copenhagen, has the appearance of a department store (20% of it is run by Magasin), but its individual shop owners technically make it a shopping centre; in the public mind, however, it remains a department store. Salling has two stores in Jutland, one of which caused the closure of a Magasin store through competition. France's major upscale department stores are Galeries Lafayette and Le Printemps, which
both have flagship stores on Boulevard Haussmann in Paris and branches around the country. The first department store
in France, Le Bon Marché in Paris, was founded in 1852 and is now owned by the luxury goods conglomerate LVMH. La
Samaritaine, another upscale department store also owned by LVMH, closed in 2005. Mid-range department stores chains
also exist in France such as the BHV (Bazar de l'Hotel de Ville), part of the same group as Galeries Lafayette. The
design and function of department stores in Germany followed the lead of London, Paris and New York. Germany used
to have a number of department stores; nowadays only a few of them remain. Next to some smaller, independent department
stores these are Karstadt (in 2010 taken over by Nicolas Berggruen, also operating the KaDeWe in Berlin, the Alsterhaus
in Hamburg and the Oberpollinger in Munich), GALERIA Kaufhof (part of the Metro AG). Others like Hertie, Wertheim
and Horten AG were taken over by others and either fully integrated or later closed. In Indonesia, the middle-to-upscale segment is mainly occupied by Metro Department Store, which originated in Singapore, and Sogo from Japan. 2007 saw the re-opening of Jakarta's Seibu, poised to be the largest and second most upscale department store in Indonesia after Harvey Nichols, which closed in 2010 but plans to return. Other international department stores include Debenhams and Marks & Spencer. Galeries Lafayette also joined the Indonesian market in 2013, inside Pacific Place Mall; it targets the middle-to-upscale market, with prices ranging from affordable to luxury, and is poised to be the country's largest upscale department store. Galeries Lafayette, Debenhams, Harvey Nichols, Marks & Spencer, Seibu and Sogo are all operated by PT. Mitra Adiperkasa. Parkson entered by acquiring the local brand Centro Department Store in 2011. Centro still operates for the middle market, while the 'Parkson' brand itself, positioned for the middle-to-upscale segment, entered in 2014 with its first store in Medan, followed by a second store in Jakarta. Lotte, meanwhile, entered the market by inking a partnership with Ciputra Group, creating what it calls 'Lotte Shopping Avenue' inside the Ciputra World Jakarta complex, as well as by acquiring Makro and rebranding it as Lotte Mart. Ireland developed a strong middle class, especially in
the major cities, by the mid-nineteenth century. They were active patrons of department stores. Delany's New Mart
was opened in 1853 in Dublin, Ireland. Unlike others, Delany's had not evolved gradually from a smaller shop on site.
Thus it could claim to be the first purpose-built department store in the world. The term "department store" had not yet been coined, and thus it was called the "Monster House". The store was completely destroyed in the
1916 Easter Rising, but reopened in 1922. Mexico has a large number of department stores based in Mexico, of which
the most traditional are El Palacio de Hierro (High end and luxury goods) and Liverpool (Upper-middle income), with
its middle income sister store Fabricas de Francia. Sanborns owns over 100 middle income level stores throughout
the country. Grupo Carso operates Sears Mexico and two high-end Saks 5th Avenue stores. Other large chains are Coppel
and Elektra, which offer items for the bargain price seeker. Wal-Mart operates Suburbia for lower income shoppers,
along with stores under the brand names of Wal-Mart, Bodega Aurrera, and Superama. The iconic department stores of
New Zealand's three major centres are Smith & Caughey's (founded 1880), in New Zealand's most populous city, Auckland;
Kirkcaldie & Stains (founded 1863) in the capital, Wellington; and Ballantynes (founded 1854) in New Zealand's second
biggest city, Christchurch. These offer high-end and luxury items. Additionally, Arthur Barnett (1903) operates in
Dunedin. H & J Smith is a small chain operating throughout Southland with a large flagship store in Invercargill.
Farmers is a mid-range national chain of stores (originally a mail-order firm known as Laidlaw Leeds founded in 1909).
Historical department stores include DIC. Discount chains include The Warehouse, Kmart Australia, and the now-defunct
DEKA. Panama's first department stores such as Bazaar Francés, La Dalia and La Villa de Paris started as textile
retailers at the turn of the nineteenth century. Later on in the twentieth century these eventually gave way to stores
such as Felix B. Maduro, Sarah Panamá, Figali, Danté, Sears, Gran Morrison and smaller ones such as Bon Bini, Cocos,
El Lider, Piccolo and Clubman among others. Of these only Felix B. Maduro (usually referred to as Felix by locals)
and Danté remain strong. All the others have either folded or declined although Cocos has managed to secure a good
position in the market. The first department store in the Philippines is the Hoskyn's Department Store of Hoskyn
& Co. established in 1877 in Iloilo by the Englishman Henry Hoskyn, nephew of Nicholas Loney, the first British vice-consul
in Iloilo. Some of the earliest department stores in the Philippines were located in Manila as early as 1898 with
the opening of the American Bazaar, which was later named Beck's. During the course of the American occupation of
the Philippines, many department stores were built throughout the city, many of which were located in Escolta. Heacock's,
a luxury department store, was considered as the best department store in the Orient. Other department stores included
Aguinaldo's, La Puerta del Sol, Estrella del Norte, and the Crystal Arcade, all of which were destroyed during the
Battle of Manila in 1945. After the war, department stores were once again alive with the establishment of Shoemart
(now SM), and Rustan's. Since the foundation of these companies in the 1950s, more than one hundred department stores have opened. At present, due to the huge success of shopping malls, department stores in the Philippines
usually are anchor tenants within malls. SM Supermalls and Robinsons Malls are two of the country's most prominent
mall chains, both of which have department store sections. In Puerto Rico, various department stores have operated,
such as Sears, JC Penney, Macy's, Kmart, Wal-Mart, Marshalls, Burlington Coat Factory, T.J. Maxx, Costco, Sam's Club
and others. La New York was a Puerto Rican department store. Topeka, Capri and Pitusa are competitors on the Puerto
Rican market that also have hypermarkets operating under their names. Retailers Nordstrom and Saks Fifth Avenue also
have plans to come to the Mall of San Juan, a new high-end retail project with over 100 tenants. The mall is set
to open in March 2015. The site where the Saint Petersburg Passage sprawls had been devoted to trade since the city's
foundation in the early 18th century. It had been occupied by various shops and warehouses (Maly Gostiny Dvor, Schukin
Dvor, Apraksin Dvor) until 1846, when Count Essen-Stenbock-Fermor acquired the grounds to build an elite shopping
mall for the Russian nobility and wealthy bourgeoisie. Stenbock-Fermor conceived of the Passage as more than a mere
shopping mall, but also as a cultural and social centre for the people of St Petersburg. The edifice contained coffee-houses,
confectioneries, panorama installations, an anatomical museum, a wax museum, and even a small zoo, described by Dostoyevsky
in his extravaganza "Crocodile, or Passage through the Passage". The concert hall became renowned as a setting for
literary readings attended by the likes of Dostoevsky and Turgenev. Parenthetically, the Passage premises have long
been associated with the entertainment industry and still remain home to the Komissarzhevskaya Theatre. Socialism
confronted consumerism in the chain State Department Stores (GUM), set up by Lenin in 1921 as a model retail enterprise.
It operated stores throughout Russia and targeted consumers across class, gender, and ethnic lines. GUM was designed
to advance the Bolsheviks' goals of eliminating private enterprise and rebuilding consumerism along socialist lines,
as well as democratizing consumption for workers and peasants nationwide. GUM became a major propaganda purveyor,
with advertising and promotional campaigns that taught Russians the goals of the regime and attempted to inculcate
new attitudes and behavior. In trying to create a socialist consumer culture from scratch, GUM recast the functions
and meanings of buying and selling, turning them into politically charged acts that could either contribute to or
delay the march toward utopian communism. By the late 1920s, however, GUM's grandiose goals had proven unrealistic
and largely alienated consumers, who instead learned a culture of complaint and entitlement. GUM's main function
became one of distributing whatever the factories sent them, regardless of consumer demand or quality. In the 21st
century the most famous department store in Russia is GUM in Moscow, followed by TsUM and the Petrovsky Passage.
Other popular stores are Mega (shopping malls), Stockmann, and Marks & Spencer. Media Markt, M-video, Technosila, and White Wind (Beliy Veter) sell large numbers of electronic devices. In St. Petersburg, the Passage has been popular since the 1840s. The 1956 Soviet film Behind Store Window (За витриной универмага), available on YouTube, depicts the operation of a Moscow department store in the 1950s. In South Korea, the five most prevalent chains are Lotte, Hyundai, Shinsegae, Galleria, and AK Plaza. Lotte Department Store is the largest, operating more than 40 stores (including outlets, young plazas, and foreign branches). Hyundai Department Store has about 14 stores (13 department stores and one outlet); Shinsegae has 10 stores, plus three outlet stores operated with Simon; and Galleria and AK each have five stores. Galleria's East and West stores are well known for luxury goods. These five department stores are regarded as the representative corporations in South Korean retail distribution, selling products ranging from fashion items to electric appliances, and their convenient locations make them popular weekend destinations. As of 2010 the
Shinsegae department store in Centum City, Busan, is the largest department store in the world. The first department
store in Spain was Almacenes el Siglo opened in October 1881 in Barcelona. Following the 2002 closure by the Australian
group Partridges of their SEPU (Sociedad Española de Precios Unicos) department store chain, which was one of Spain's
oldest, the market is now dominated by El Corte Inglés, founded in 1934 as a drapery store. El Corte Inglés stores
tend to be vast buildings, selling a very broad range of products and the group also controls a number of other retail
formats including supermarket chain 'Supercor' and hypermarket chain 'Hipercor'. Other competitors such as 'Simago'
and 'Galerías Preciados' closed in the 1990s; El Corte Inglés, however, faces major competition from French discount
operators such as Carrefour and Auchan. John Lewis Newcastle (formerly Bainbridge) in Newcastle upon Tyne, is the
world's oldest Department Store. It is still known to many of its customers as Bainbridge, despite the name change
to 'John Lewis'. The Newcastle institution dates back to 1838 when Emerson Muschamp Bainbridge, aged 21, went into
partnership with William Alder Dunn and opened a draper's and fashion shop in Market Street, Newcastle. In terms of retailing history, one of the most significant facts about the Newcastle Bainbridge shop is that as early as 1849 weekly takings
were recorded by department, making it the earliest of all department stores. This ledger survives and is kept in
the John Lewis archives. John Lewis bought the Bainbridge store in 1952. Also, Kendals in Manchester can lay claim
to being one of the oldest department stores in the UK. Beginning as a small shop owned by S. and J. Watts in 1796,
it sold a variety of goods. Kendal Milne and Faulkner purchased the business in 1835. Rather than using the expanded space as a typical warehouse simply to showcase textiles, they turned it into a vast bazaar. Serving Manchester's upmarket
clientele for over 200 years, it was taken over by House of Fraser and recently rebranded as House of Fraser Manchester
– although most Mancunians still refer to it as Kendals. The Kendal Milne signage still remains over the main entrance
to the art deco building in the city's Deansgate. In the United States, all major cities had their distinctive local department stores, which anchored the downtown shopping district until the arrival of the malls in the 1960s. Washington, for example, had Woodward & Lothrop after 1887 and Garfinckel's starting in 1905. Garfinckel's went bankrupt in 1990, as did Woodward
& Lothrop in 1994. Baltimore had four major department stores: Hutzler's was the prestige leader, followed by Hecht's,
Hochschild's and Stewart's. They all operated branches in the suburbs, but all closed in the late twentieth century.
By 2015, most locally owned department stores around the country had been consolidated into larger chains, or had
closed down entirely. Chain department stores grew rapidly after 1920, and provided competition for the downtown
upscale department stores, as well as local department stores in small cities. J. C. Penney had four stores in 1908,
312 in 1920, and 1452 in 1930. Sears, Roebuck & Company, a giant mail-order house, opened its first eight retail
stores in 1925, and operated 338 by 1930, and 595 by 1940. The chains reached a middle-class audience, that was more
interested in value than in upscale fashions. Sears was a pioneer in creating department stores that catered to men
as well as women, especially with lines of hardware and building materials. It deemphasized the latest fashions in
favor of practicality and durability, and allowed customers to select goods without the aid of a clerk. Its stores
were oriented to motorists – set apart from existing business districts amid residential areas occupied by their
target audience; had ample, free, off-street parking; and communicated a clear corporate identity. In the 1930s,
the company designed fully air-conditioned, "windowless" stores whose layout was driven wholly by merchandising concerns.
After World War II Hudson's realized that the limited parking space at its downtown skyscraper would increasingly
be a problem for its customers. The solution in 1954 was to open the Northland Center in nearby Southfield, just
beyond the city limits. It was the largest suburban shopping center in the world, and quickly became the main shopping
destination for northern and western Detroit, and for much of the suburbs. By 1961 the downtown skyscraper accounted
for only half of Hudson's sales; it closed in 1986. The Northland Center Hudson's, rebranded Macy's in 2006 following
acquisition by Federated Department Stores, was closed along with the remaining stores in the center in March 2015
due to the mall's high storefront vacancy, decaying infrastructure, and financial mismanagement. George Dayton had
founded his Dayton's Dry Goods store in Minneapolis in 1902 and the AMC cooperative in 1912. His descendants built
Southdale Center in 1956, opened the Target discount store chain in 1962 and the B. Dalton Bookseller chain in 1966.
Dayton's grew to 19 stores under the Dayton's name plus five other regional names acquired by Dayton-Hudson. The
Dayton-Hudson Corporation closed the flagship J. L. Hudson Department Store in downtown Detroit in 1983, but expanded
its other retail operations. It acquired Mervyn's in 1978, Marshall Field's in 1990, and renamed itself the Target
Corporation in 2000. In 2002, Dayton's and Hudson's were consolidated into the Marshall Field's name. In 2005, May
Department Stores acquired all of the Marshall Field's stores and shortly thereafter, Macy's acquired May. In 1849,
Horne's began operations and soon became a leading Pittsburgh department store. In 1879, it opened a seven-story
landmark which was the first department store in the city's downtown. In 1972, Associated Dry Goods acquired Horne's,
and ADG expanded operations of Horne's to several stores in suburban malls throughout the Pittsburgh region as well
as in Erie, Pennsylvania and Northeast Ohio. In December 1986, Horne's was acquired by a local investor group following
ADG's acquisition by May Department Stores. By 1994, Federated Department Stores acquired the remaining ten Horne's
stores and merged them with its Lazarus division, completely ceasing all operations of any store under the Horne's
name.
The German equivalent was used with the founding of the North German Confederation, whose constitution granted legislative
power over the protection of intellectual property (Schutz des geistigen Eigentums) to the confederation. When the
administrative secretariats established by the Paris Convention (1883) and the Berne Convention (1886) merged in
1893, they located in Berne, and also adopted the term intellectual property in their new combined title, the United
International Bureaux for the Protection of Intellectual Property. The term can be found used in an October 1845
Massachusetts Circuit Court ruling in the patent case Davoll et al. v. Brown, in which Justice Charles L. Woodbury
wrote that "only in this way can we protect intellectual property, the labors of the mind, productions and interests
are as much a man's own...as the wheat he cultivates, or the flocks he rears." The statement that "discoveries are...property"
goes back earlier. Section 1 of the French law of 1791 stated, "All new discoveries are the property of the author;
to assure the inventor the property and temporary enjoyment of his discovery, there shall be delivered to him a patent
for five, ten or fifteen years." In Europe, French author A. Nion mentioned propriété intellectuelle in his Droits
civils des auteurs, artistes et inventeurs, published in 1846. The concept's origins can potentially be traced back
further. Jewish law includes several considerations whose effects are similar to those of modern intellectual property
laws, though the notion of intellectual creations as property does not seem to exist – notably the principle of Hasagat
Ge'vul (unfair encroachment) was used to justify limited-term publisher (but not author) copyright in the 16th century.
In 500 BCE, the government of the Greek state of Sybaris offered one year's patent "to all who should discover any
new refinement in luxury". A patent is a form of right granted by the government to an inventor, giving the owner
the right to exclude others from making, using, selling, offering to sell, and importing an invention for a limited
period of time, in exchange for the public disclosure of the invention. An invention is a solution to a specific
technological problem, which may be a product or a process and generally has to fulfil three main requirements: it
has to be new, non-obvious, and industrially applicable. The stated objective of most intellectual
property law (with the exception of trademarks) is to "Promote progress." By exchanging limited exclusive rights
for disclosure of inventions and creative works, society and the patentee/copyright owner mutually benefit, and an
incentive is created for inventors and authors to create and disclose their work. Some commentators have noted that
the objective of intellectual property legislators and those who support its implementation appears to be "absolute
protection". "If some intellectual property is desirable because it encourages innovation, they reason, more is better.
The thinking is that creators will not have sufficient incentive to invent unless they are legally entitled to capture
the full social value of their inventions". This absolute protection or full value view treats intellectual property
as another type of "real" property, typically adopting its law and rhetoric. Other recent developments in intellectual
property law, such as the America Invents Act, stress international harmonization. Recently there has also been much
debate over the desirability of using intellectual property rights to protect cultural heritage, including intangible
ones, as well as over risks of commodification derived from this possibility. The issue still remains open in legal
scholarship. In 2013 the United States Patent & Trademark Office estimated the value of intellectual property
to the U.S. economy at more than US$5 trillion, creating employment for an estimated 18 million American people.
The value of intellectual property is considered similarly high in other developed nations, such as those in the
European Union. In the UK, IP has become a recognised asset class for use in pension-led funding and other types
of business finance. However, in 2013, the UK Intellectual Property Office stated: "There are millions of intangible
business assets whose value is either not being leveraged at all, or only being leveraged inadvertently". Patent
infringement typically is caused by using or selling a patented invention without permission from the patent holder.
The scope of the patented invention or the extent of protection is defined in the claims of the granted patent. There
is safe harbor in many jurisdictions to use a patented invention for research. This safe harbor does not exist in
the US unless the research is done for purely philosophical purposes, or to gather data in preparation of
an application for regulatory approval of a drug. In general, patent infringement cases are handled under civil law
(e.g., in the United States) but several jurisdictions incorporate infringement in criminal law also (for example,
Argentina, China, France, Japan, Russia, South Korea). Copyright infringement is reproducing, distributing, displaying
or performing a work, or making derivative works, without permission from the copyright holder, which is typically
a publisher or other business representing or assigned by the work's creator. It is often called "piracy". While
copyright is created the instant a work is fixed, generally the copyright holder can only get money damages if the
holder registers the copyright.[citation needed] Enforcement of copyright is generally the responsibility of the copyright
holder. The ACTA trade agreement, signed in May 2011 by the United States, Japan, Switzerland, and the EU, and which
has not entered into force, requires that its parties add criminal penalties, including incarceration and fines,
for copyright and trademark infringement, and obligates the parties to actively police for infringement. There are
limitations and exceptions to copyright, allowing limited use of copyrighted works, which does not constitute infringement.
Examples of such doctrines are the fair use and fair dealing doctrines. Trademark infringement occurs when one party
uses a trademark that is identical or confusingly similar to a trademark owned by another party, in relation to products
or services which are identical or similar to the products or services of the other party. In many countries, a trademark
receives protection without registration, but registering a trademark provides legal advantages for enforcement.
Infringement can be addressed by civil litigation and, in several jurisdictions, under criminal law. Trade secret
misappropriation is different from violations of other intellectual property laws, since by definition trade secrets
are secret, while patents and registered copyrights and trademarks are publicly available. In the United States,
trade secrets are protected under state law, and states have nearly universally adopted the Uniform Trade Secrets
Act. The United States also has federal law in the form of the Economic Espionage Act of 1996 (18 U.S.C. §§ 1831–1839),
which makes the theft or misappropriation of a trade secret a federal crime. This law contains two provisions criminalizing
two sorts of activity. The first, 18 U.S.C. § 1831(a), criminalizes the theft of trade secrets to benefit foreign
powers. The second, 18 U.S.C. § 1832, criminalizes their theft for commercial or economic purposes. (The statutory
penalties are different for the two offenses.) In Commonwealth common law jurisdictions, confidentiality and trade
secrets are regarded as an equitable right rather than a property right but penalties for theft are roughly the same
as the United States.[citation needed] Criticism of the term intellectual property ranges from discussing its vagueness
and abstract overreach to direct challenges to the semantic validity of using words like property and rights in ways
that contradict practice and law. Many detractors think the term especially serves the doctrinal agenda of parties
opposing reform in the public interest or otherwise abusing related legislation, and that it disallows intelligent
discussion about specific and often unrelated aspects of copyright, patents, trademarks, etc. Free Software Foundation
founder Richard Stallman argues that, although the term intellectual property is in wide use, it should be rejected
altogether, because it "systematically distorts and confuses these issues, and its use was and is promoted by those
who gain from this confusion". He claims that the term "operates as a catch-all to lump together disparate laws [which]
originated separately, evolved differently, cover different activities, have different rules, and raise different
public policy issues" and that it creates a "bias" by confusing these monopolies with ownership of limited physical
things, likening them to "property rights". Stallman advocates referring to copyrights, patents and trademarks in
the singular and warns against abstracting disparate laws into a collective term. Addressing the claim that intellectual
property rights are actual rights, Stallman argues that it does not live up to the historical intentions behind
these laws, which in the case of copyright served as a censorship system, and later on, a regulatory model for the
printing press that may have benefited authors incidentally, but never interfered with the freedom of average readers.
Still referring to copyright, he cites legal literature such as the United States Constitution and case law to demonstrate
that it is meant to be an optional and experimental bargain that temporarily trades property rights and free speech
for public, not private, benefit in the form of increased artistic production and knowledge. He mentions that "if
copyright were a natural right nothing could justify terminating this right after a certain period of time". Law
professor, writer and political activist Lawrence Lessig, along with many other copyleft and free software activists,
has criticized the implied analogy with physical property (like land or an automobile). They argue such an analogy
fails because physical property is generally rivalrous while intellectual works are non-rivalrous (that is, if one
makes a copy of a work, the enjoyment of the copy does not prevent enjoyment of the original). Other arguments along
these lines claim that unlike the situation with tangible property, there is no natural scarcity of a particular
idea or information: once it exists at all, it can be re-used and duplicated indefinitely without such re-use diminishing
the original. Stephan Kinsella has objected to intellectual property on the grounds that the word "property" implies
scarcity, which may not be applicable to ideas. Some critics of intellectual property, such as those in the free
culture movement, point at intellectual monopolies as harming health (in the case of pharmaceutical patents), preventing
progress, and benefiting concentrated interests to the detriment of the masses, and argue that the public interest
is harmed by ever-expansive monopolies in the form of copyright extensions, software patents, and business method
patents. More recently, scientists and engineers have expressed concern that patent thickets are undermining technological
development even in high-tech fields like nanotechnology. The World Intellectual Property Organization (WIPO) recognizes
that conflicts may exist between the respect for and implementation of current intellectual property systems and
other human rights. In 2001 the UN Committee on Economic, Social and Cultural Rights issued a document called "Human
rights and intellectual property" that argued that intellectual property tends to be governed by economic goals when
it should be viewed primarily as a social product; in order to serve human well-being, intellectual property systems
must respect and conform to human rights laws. According to the Committee, when systems fail to do so they risk infringing
upon the human right to food and health, and to cultural participation and scientific benefits. In 2004 the General
Assembly of WIPO adopted The Geneva Declaration on the Future of the World Intellectual Property Organization which
argues that WIPO should "focus more on the needs of developing countries, and to view IP as one of many tools for
development—not as an end in itself". Further along these lines, the ethical problems raised by IP rights are
most pertinent when socially valuable goods such as life-saving medicines are given IP protection. While the application
of IP rights can allow companies to charge higher than the marginal cost of production in order to recoup the costs
of research and development, the price may exclude from the market anyone who cannot afford the cost of the product,
in this case a life-saving drug. "An IPR driven regime is therefore not a regime that is conducive to the investment
of R&D of products that are socially valuable to predominantly poor populations". Another limitation of current
U.S. Intellectual Property legislation is its focus on individual and joint works; thus, copyright protection can
only be obtained in 'original' works of authorship. This definition excludes any works that are the result of community
creativity, for example Native American songs and stories; current legislation does not recognize the uniqueness
of indigenous cultural "property" and its ever-changing nature. Simply asking native cultures to 'write down' their
cultural artifacts on tangible mediums ignores their necessary orality and enforces a Western bias of the written
form as more authoritative. Also with respect to copyright, the American film industry helped to change the social
construct of intellectual property via its trade organization, the Motion Picture Association of America. In amicus
briefs in important cases, in lobbying before Congress, and in its statements to the public, the MPAA has advocated
strong protection of intellectual-property rights. In framing its presentations, the association has claimed that
people are entitled to the property that is produced by their labor. Additionally, Congress's awareness of the position
of the United States as the world's largest producer of films has made it convenient to expand the conception of
intellectual property. These doctrinal reforms have further strengthened the industry, lending the MPAA even more
power and authority. The growth of the Internet, and particularly distributed file-sharing networks like Kazaa and Gnutella,
has represented a challenge for copyright policy. The Recording Industry Association of America, in particular,
has been on the front lines of the fight against copyright infringement, which the industry calls "piracy". The industry
has had victories against some services, including a highly publicized case against the file-sharing company Napster,
and some people have been prosecuted for sharing files in violation of copyright. The electronic age has seen an
increase in the attempt to use software-based digital rights management tools to restrict the copying and use of
digitally based works. Laws such as the Digital Millennium Copyright Act have been enacted that use criminal law
to prevent any circumvention of software used to enforce digital rights management systems. Equivalent provisions
to prevent circumvention of copyright protection have existed in the EU for some time, and are being expanded in, for
example, Articles 6 and 7 of the Copyright Directive. Other examples are Article 7 of the Software Directive of 1991
(91/250/EEC), and the Conditional Access Directive of 1998 (98/84/EEC). This can hinder legal uses, affecting public
domain works, limitations and exceptions to copyright, or uses allowed by the copyright holder. Some copyleft licenses,
like GNU GPL 3, are designed to counter that. Laws may permit circumvention under specific conditions like when it
is necessary to achieve interoperability with the circumventor's program, or for accessibility reasons; however,
distribution of circumvention tools or instructions may be illegal. In the context of trademarks, this expansion
has been driven by international efforts to harmonise the definition of "trademark", as exemplified by the Agreement
on Trade-Related Aspects of Intellectual Property Rights ratified in 1994, which formalized regulations for IP rights
that had been handled by common law, or not at all, in member states. Pursuant to TRIPs, any sign which is "capable
of distinguishing" the products or services of one business from the products or services of another business is
capable of constituting a trademark.
Florida i/ˈflɒrɪdə/ (Spanish for "flowery land") is a state located in the southeastern region of the United States. The
state is bordered to the west by the Gulf of Mexico, to the north by Alabama and Georgia, to the east by the Atlantic
Ocean, and to the south by the Straits of Florida and the sovereign state of Cuba. Florida is the 22nd most extensive,
the 3rd most populous, and the 8th most densely populated of the United States. Jacksonville is the most populous
city in Florida, and the largest city by area in the contiguous United States. The Miami metropolitan area is the
eighth-largest metropolitan area in the United States. Tallahassee is the state capital. A peninsula between the
Gulf of Mexico, the Atlantic Ocean, and the Straits of Florida, it has the longest coastline in the contiguous United
States, approximately 1,350 miles (2,170 km), and is the only state that borders both the Gulf of Mexico and the
Atlantic Ocean. Much of the state is at or near sea level and is characterized by sedimentary soil. The climate varies
from subtropical in the north to tropical in the south. The American alligator, American crocodile, Florida panther,
and manatee can be found in the Everglades National Park. "By May 1539, Conquistador Hernando de Soto skirted the
coast of Florida, searching for a deep harbor to land. He described seeing a thick wall of red mangroves spread mile
after mile, some reaching as high as 70 feet (21 m), with intertwined and elevated roots making landing difficult.
Very soon, 'many smokes' appeared 'along the whole coast', billowing against the sky, when the Native ancestors of
the Seminole spotted the newcomers and spread the alarm by signal fires". The Spanish introduced Christianity, cattle,
horses, sheep, the Spanish language, and more to Florida.[full citation needed] Both the Spanish and French established
settlements in Florida, with varying degrees of success. In 1559, Don Tristán de Luna y Arellano established a colony
at present-day Pensacola, one of the first European settlements in the continental United States, but it was abandoned
by 1561. In 1763, Spain traded Florida to the Kingdom of Great Britain for control of Havana, Cuba, which had been
captured by the British during the Seven Years' War. It was part of a large expansion of British territory following
the country's victory in the Seven Years' War. Almost the entire Spanish population left, taking along most of the
remaining indigenous population to Cuba. The British soon constructed the King's Road connecting St. Augustine to
Georgia. The road crossed the St. Johns River at a narrow point, which the Seminole called Wacca Pilatka and the
British named "Cow Ford", both names ostensibly reflecting the fact that cattle were brought across the river there.
The British divided Florida into the two colonies of British East Florida and British West Florida. The British government
gave land grants to officers and soldiers who had fought in the French and Indian War in order to encourage settlement.
In order to induce settlers to move to the two new colonies, reports of the natural wealth of Florida were published
in England. A large number of British colonists who were "energetic and of good character" moved to Florida, mostly
coming from South Carolina, Georgia and England though there was also a group of settlers who came from the colony
of Bermuda. This would be the first permanent English-speaking population in what is now Duval County, Baker County,
St. Johns County and Nassau County. The British built good public roads and introduced the cultivation of sugar cane,
indigo and fruits, as well as the export of lumber. As a result of these initiatives, northeastern Florida prospered economically
in a way it never did under Spanish rule. Furthermore, the British governors were directed to call general assemblies
as soon as possible in order to make laws for the Floridas and in the meantime they were, with the advice of councils,
to establish courts. This would be the first introduction of much of the English-derived legal system which Florida
still has today including trial by jury, habeas corpus and county-based government. Neither East Florida nor West
Florida would send any representatives to Philadelphia to draft the Declaration of Independence. Florida would remain
a Loyalist stronghold for the duration of the American Revolution. Americans of English descent and Americans of
Scots-Irish descent began moving into northern Florida from the backwoods of Georgia and South Carolina. Though technically
not allowed by the Spanish authorities, the Spanish were never able to effectively police the border region and the
backwoods settlers from the United States would continue to migrate into Florida unchecked. These migrants, mixing
with the already present British settlers who had remained in Florida since the British period, would be the progenitors
of the population known as Florida Crackers. These American settlers established a permanent foothold in the area
and ignored Spanish officials. The British settlers who had remained also resented Spanish rule, leading to a rebellion
in 1810 and the establishment, on September 23, of the so-called Free and Independent Republic of West Florida, which lasted
ninety days. After meetings beginning in June, rebels overcame the Spanish garrison at Baton Rouge (now in Louisiana), and
unfurled the flag of the new republic: a single white star on a blue field. This flag would later become known as
the "Bonnie Blue Flag". Seminole Indians based in East Florida began raiding Georgia settlements, and offering havens
for runaway slaves. The United States Army led increasingly frequent incursions into Spanish territory, including
the 1817–1818 campaign against the Seminole Indians by Andrew Jackson that became known as the First Seminole War.
The United States now effectively controlled East Florida. Control was necessary according to Secretary of State
John Quincy Adams because Florida had become "a derelict open to the occupancy of every enemy, civilized or savage,
of the United States, and serving no other earthly purpose than as a post of annoyance to them." Florida had become
a burden to Spain, which could not afford to send settlers or garrisons. Madrid therefore decided to cede the territory
to the United States through the Adams-Onís Treaty, which took effect in 1821. President James Monroe was authorized
on March 3, 1821 to take possession of East Florida and West Florida for the United States and provide for initial
governance. Andrew Jackson served as military governor of the newly acquired territory, but only for a brief period.
On March 30, 1822, the United States merged East Florida and part of West Florida into the Florida Territory. By
the early 1800s, Indian removal was a significant issue throughout the southeastern U.S. and also in Florida. In
1830, the U.S. Congress passed the Indian Removal Act and as settlement increased, pressure grew on the United States
government to remove the Indians from Florida. Seminoles harbored runaway blacks, known as the Black Seminoles, and
clashes between whites and Indians grew with the influx of new settlers. In 1832, the Treaty of Payne's Landing promised
to the Seminoles lands west of the Mississippi River if they agreed to leave Florida. Many Seminole left at this
time. The climate of Florida is tempered somewhat by the fact that no part of the state is distant from the ocean.
North of Lake Okeechobee, the prevalent climate is humid subtropical (Köppen: Cfa), while areas south of the lake
(including the Florida Keys) have a true tropical climate (Köppen: Aw). Mean high temperatures for late July are
primarily in the low 90s Fahrenheit (32–34 °C). Mean low temperatures for early to mid January range from the low
40s Fahrenheit (4–7 °C) in northern Florida to above 60 °F (16 °C) from Miami on southward. With an average daily
temperature of 70.7 °F (21.5 °C), it is the warmest state in the country. Florida's nickname is the "Sunshine State",
but severe weather is a common occurrence in the state. Central Florida is known as the lightning capital of the
United States, as it experiences more lightning strikes than anywhere else in the country. Florida has one of the
highest average precipitation levels of any state, in large part because afternoon thunderstorms are common in much
of the state from late spring until early autumn. A narrow eastern part of the state including Orlando and Jacksonville
receives between 2,400 and 2,800 hours of sunshine annually. The rest of the state, including Miami, receives between
2,800 and 3,200 hours annually. Hurricanes pose a severe threat each year during the June 1 to November 30 hurricane
season, particularly from August to October. Florida is the most hurricane-prone state, with subtropical or tropical
water on a lengthy coastline. Of the category 4 or higher storms that have struck the United States, 83% have either
hit Florida or Texas. From 1851 to 2006, Florida was struck by 114 hurricanes, 37 of them major—category 3 and above.
It is rare for a hurricane season to pass without any impact in the state by at least a tropical storm.[citation
needed] Extended systems of underwater caves, sinkholes and springs are found throughout the state and supply most
of the water used by residents. The limestone is topped with sandy soils deposited as ancient beaches over millions
of years as global sea levels rose and fell. During the last glacial period, lower sea levels and a drier climate
revealed a much wider peninsula, largely savanna. The Everglades, an enormously wide, slow-flowing river, encompasses
the southern tip of the peninsula. Sinkhole damage claims on property in the state exceeded a total of $2 billion
from 2006 through 2010. The United States Census Bureau estimates that the population of Florida was 20,271,272 on
July 1, 2015, a 7.82% increase since the 2010 United States Census. The population of Florida in the 2010 census
was 18,801,310. Florida was the seventh fastest-growing state in the U.S. in the 12-month period ending July 1, 2012.
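As a quick arithmetic check of the growth figure quoted above (a sketch using only the two population counts stated in the text):

```python
# Verify the quoted ~7.82% growth between the 2010 census count and the
# July 1, 2015 Census Bureau estimate, both as stated in the text.
pop_2010 = 18_801_310   # 2010 United States Census
pop_2015 = 20_271_272   # estimate for July 1, 2015

pct_increase = (pop_2015 - pop_2010) / pop_2010 * 100
print(f"{pct_increase:.2f}%")  # → 7.82%
```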
In 2010, the center of population of Florida was located between Fort Meade and Frostproof. The center of population
has moved less than 5 miles (8 km) to the east and approximately 1 mile (1.6 km) to the north between 1980 and 2010
and has been located in Polk County since the 1960 census. The population exceeded 19.7 million by December 2014,
surpassing the population of the state of New York for the first time. Florida is among the three states with the
most severe felony disenfranchisement laws. Florida requires felons to have completed sentencing, parole and/or probation,
and then, seven years later, to apply individually for restoration of voting privileges. As in other aspects of the
criminal justice system, this law has disproportionate effects for minorities. As a result, according to Brent Staples,
based on data from The Sentencing Project, the effect of Florida's law is such that in 2014 "[m]ore than one in ten
Floridians – and nearly one in four African-American Floridians – are shut out of the polls because of felony convictions."
In 2010, 6.9% of the population (1,269,765) considered themselves to be of only American ancestry (regardless of
race or ethnicity). Many of these were of English or Scotch-Irish descent; however, their families have lived in
the state for so long that they choose to identify as having "American" ancestry or do not know their ancestry.
In the 1980 United States census the largest ancestry group reported in Florida was English with 2,232,514 Floridians
claiming that they were of English or mostly English American ancestry. Some of their ancestry went back to the original
thirteen colonies. As of 2010, those of (non-Hispanic white) European ancestry accounted for 57.9% of Florida's population.
Out of the 57.9%, the largest groups were 12.0% German (2,212,391), 10.7% Irish (1,979,058), 8.8% English (1,629,832),
6.6% Italian (1,215,242), 2.8% Polish (511,229), and 2.7% French (504,641). White Americans of all European backgrounds
are present in all areas of the state. In 1970, non-Hispanic whites were nearly 80% of Florida's population. Those
of English and Irish ancestry are present in large numbers in all the urban/suburban areas across the state. Some
native white Floridians, especially those who have descended from long-time Florida families, may refer to themselves
as "Florida crackers"; others see the term as a derogatory one. Like whites in most of the other Southern states,
they descend mainly from English and Scots-Irish settlers, as well as some other British American settlers. As of
2010, those of Hispanic or Latino ancestry accounted for 22.5% (4,223,806) of Florida's population. Out
of the 22.5%, the largest groups were 6.5% (1,213,438) Cuban, 4.5% (847,550) Puerto Rican, 3.3% (629,718) Mexican,
and 1.6% (300,414) Colombian. Florida's Hispanic population includes large communities of Cuban Americans in Miami
and Tampa, Puerto Ricans in Orlando and Tampa, and Mexican/Central American migrant workers. The Hispanic community
continues to grow more affluent and mobile. As of 2011, 57.0% of Florida's children under the age of 1 belonged to
minority groups. Florida has a large and diverse Hispanic population, with Cubans and Puerto Ricans being the largest
groups in the state. Nearly 80% of Cuban Americans live in Florida, especially South Florida where there is a long-standing
and affluent Cuban community. Florida has the second largest Puerto Rican population after New York, as well as the
fastest-growing in the nation. Puerto Ricans are more widespread throughout the state, though the heaviest concentrations
are in the Orlando area of Central Florida. As of 2010, those of African ancestry accounted for 16.0% of Florida's
population, which includes African Americans. Out of the 16.0%, 4.0% (741,879) were West Indian or Afro-Caribbean
American. During the early 1900s, black people made up nearly half of the state's population. In response to segregation,
disfranchisement and agricultural depression, many African Americans migrated from Florida to northern cities in
the Great Migration, in waves from 1910 to 1940, and again starting in the later 1940s. They moved for jobs, better
education for their children and the chance to vote and participate in society. By 1960 the proportion of African
Americans in the state had declined to 18%. Conversely, large numbers of northern whites moved to the state.[citation
needed] Today, large concentrations of black residents can be found in northern and central Florida. Aside from blacks
descended from African slaves brought to the US south, there are also large numbers of blacks of West Indian, recent
African, and Afro-Latino immigrant origins, especially in the Miami/South Florida area. In 2010, Florida had the
highest percentage of West Indians in the United States, with 2.0% (378,926) of Haitian ancestry and 1.3% (236,950)
Jamaican. All other (non-Hispanic) Caribbean nations were well below 0.1% of Florida residents. From 1952 to 1964,
most voters were registered Democrats, but the state voted for the Republican presidential candidate in every election
except for 1964. The following year, Congress passed and President Lyndon B. Johnson signed the Voting Rights Act
of 1965, providing for oversight of state practices and enforcement of constitutional voting rights for African Americans
and other minorities in order to prevent the discrimination and disenfranchisement that had excluded most of them
for decades from the political process. From the 1930s through much of the 1960s, Florida was essentially a one-party
state dominated by white conservative Democrats, who together with other Democrats of the Solid South, exercised
considerable control in Congress. They gained federal money from national programs; like other southern states, Florida
residents have received more federal monies than they pay in taxes: the state is a net beneficiary. Since the 1970s,
the conservative white majority of voters in the state has largely shifted from the Democratic to the Republican
Party. It has continued to support Republican presidential candidates through the 20th century, except in 1976 and
1996, when the Democratic nominee was from the South. They have had "the luxury of voting for presidential candidates
who pledge to cut taxes and halt the expansion of government while knowing that their congressional delegations will
continue to protect federal spending." The first post-Reconstruction era Republican elected to Congress from Florida
was William C. Cramer in 1954 from Pinellas County on the Gulf Coast, where demographic changes were underway. In
this period, African Americans were still disenfranchised by the state's constitution and discriminatory practices;
in the 19th century they had made up most of the Republican Party. Cramer built a different Republican Party in Florida,
attracting local white conservatives and transplants from northern and midwestern states. In 1966 Claude R. Kirk,
Jr. was elected as the first post-Reconstruction Republican governor, in an upset election. In 1968 Edward J. Gurney,
also a white conservative, was elected as the state's first post-Reconstruction Republican US Senator. In 1970 Democrats
took the governorship and the open US Senate seat, and maintained dominance for years. In 1998, Democratic voters
dominated areas of the state with a high percentage of racial minorities and transplanted white liberals from the
northeastern United States, known colloquially as "snowbirds". South Florida and the Miami metropolitan area are
dominated by both racial minorities and white liberals. Because of this, the area has consistently voted as one of
the most Democratic areas of the state. The Daytona Beach area is similar demographically and the city of Orlando
has a large Hispanic population, which has often favored Democrats. Republicans, made up mostly of white conservatives,
have dominated throughout much of the rest of Florida, particularly in the more rural and suburban areas. This is
characteristic of its voter base throughout the Deep South. The fast-growing I-4 corridor area, which runs through
Central Florida and connects the cities of Daytona Beach, Orlando, and Tampa/St. Petersburg, has had a fairly even
breakdown of Republican and Democratic voters. The area is often seen as a merging point of the conservative northern
portion of the state and the liberal southern portion, making it the biggest swing area in the state. Since the late
20th century, the voting results in this area, containing 40% of Florida voters, have often determined who will win
the state of Florida in presidential elections. Reapportionment following the 2010 United States Census gave the
state two more seats in the House of Representatives. The legislature's redistricting, announced in 2012, was quickly
challenged in court, on the grounds that it had unfairly benefited Republican interests. In 2015, the Florida Supreme
Court ruled on appeal that the congressional districts had to be redrawn because of the legislature's violation of
the Fair District Amendments to the state constitution passed in 2010; it accepted a new map in early December 2015.
In the closely contested 2000 election, the state played a pivotal role. Out of more than 5.8 million votes for the
two main contenders, George W. Bush and Al Gore, around 500 votes separated the two candidates for the all-decisive Florida
electoral votes that decided the election for Bush. Florida's felony disenfranchisement law is more severe than those of most European nations or other American states. A 2002 study in the American Sociological Review concluded that "if the
state's 827,000 disenfranchised felons had voted at the same rate as other Floridians, Democratic candidate Al Gore
would have won Florida—and the presidency—by more than 80,000 votes." In the redistricting litigation, a circuit court ruled in 2014, after lengthy testimony, that at least two districts had to be redrawn because of gerrymandering. After this was appealed, in July 2015 the
Florida Supreme Court ruled that lawmakers had followed an illegal and unconstitutional process overly influenced
by party operatives, and ruled that at least eight districts had to be redrawn. On December 2, 2015, a 5-2 majority
of the Court accepted a new map of congressional districts, parts of which were drawn by the challengers. Their ruling
affirmed the map previously approved by Leon County Judge Terry Lewis, who had overseen the original trial. It particularly
makes changes in South Florida. There are likely to be additional challenges to the map and districts. The Gross
Domestic Product (GDP) of Florida in 2010 was $748 billion, the fourth largest of any US state.
In 2010, it became the fourth largest exporter of trade goods. The major contributors to the state's gross output
in 2007 were general services, financial services, trade, transportation and public utilities, manufacturing and
construction respectively. In 2010–11, the state budget was $70.5 billion, having reached a high of $73.8 billion
in 2006–07. Chief Executive Magazine named Florida the third "Best State for Business" in 2011. At the end of the
third quarter in 2008, Florida had the highest mortgage delinquency rate in the country, with 7.8% of mortgages delinquent
at least 60 days. A 2009 list of national housing markets that were hard hit in the real estate crash included a
disproportionate number in Florida. The early 21st-century building boom left Florida with 300,000 vacant homes in
2009, according to state figures. In 2009, the US Census Bureau estimated that Floridians spent an average of 49.1%
of personal income on housing-related costs, the third highest percentage in the country. After the watershed events
of Hurricane Andrew in 1992, the state of Florida began investing in economic development through the Office of Trade,
Tourism, and Economic Development. Governor Jeb Bush recognized that watershed events such as Andrew severely damaged tourism, the backbone industry of Florida's economy. The office was directed to target medical and bio-science industries, among others.
Three years later, The Scripps Research Institute (TSRI) announced it had chosen Florida for its newest expansion.
In 2003, TSRI announced plans to establish a major science center in Palm Beach, a 364,000-square-foot (33,800 m2)
facility on 100 acres (40 ha), which TSRI planned to occupy in 2006. Some sections of the state feature architectural
styles including Spanish revival, Florida vernacular, and Mediterranean Revival Style. It has the largest collection
of Art Deco and Streamline Moderne buildings in the world, most of which are located
in the Miami metropolitan area, especially Miami Beach's Art Deco District, constructed as the city was becoming
a resort destination. A unique architectural design found only in Florida is the post-World War II Miami Modern,
which can be seen in areas such as Miami's MiMo Historic District. Florida is served by Amtrak, which operates numerous lines throughout the state, connecting its largest cities to points north in the United States and Canada. The busiest
Amtrak train stations in Florida in 2011 were: Sanford (259,944), Orlando (179,142), Tampa Union Station (140,785),
Miami (94,556), and Jacksonville (74,733). Sanford, in Greater Orlando, is the southern terminus of the Auto Train,
which originates at Lorton, Virginia, south of Washington, D.C. Until 2005, Orlando was also the eastern terminus
of the Sunset Limited, which travels across the southern United States via New Orleans, Houston, and San Antonio
to its western terminus of Los Angeles. Florida is served by two additional Amtrak trains (the Silver Star and the
Silver Meteor), which operate between New York City and Miami. Miami Central Station, the city's rapid transit, commuter
rail, intercity rail, and bus hub, is under construction. NASCAR (headquartered in Daytona Beach) begins all three
of its major auto racing series in Florida at Daytona International Speedway in February, featuring the Daytona 500,
and ends all three Series in November at Homestead-Miami Speedway. Daytona also has the Coke Zero 400 NASCAR race
weekend around Independence Day in July. The 24 Hours of Daytona is one of the world's most prestigious endurance
auto races. The Grand Prix of St. Petersburg and Grand Prix of Miami have held IndyCar races as well.
Before forming Queen, Brian May and Roger Taylor had played together in a band named Smile. Freddie Mercury (then known by
his birth name of Farrokh "Freddie" Bulsara) was a fan of Smile and encouraged them to experiment with more elaborate
stage and recording techniques. Mercury joined the band in 1970, suggested "Queen" as a new band name, and adopted
his familiar stage name. John Deacon was recruited prior to recording their eponymous debut album in 1973. Queen
first charted in the UK with their second album, Queen II, in 1974, but it was the release of Sheer Heart Attack
later that year and A Night at the Opera in 1975 which brought them international success. The latter featured "Bohemian
Rhapsody", which stayed at number one in the UK for nine weeks and popularised the music video. Their 1977 album,
News of the World, contained "We Will Rock You" and "We Are the Champions", which have become anthems at sporting
events. By the early 1980s, Queen were one of the biggest stadium rock bands in the world. Their performance at 1985's
Live Aid is ranked among the greatest in rock history by various music publications, with a 2005 industry poll ranking
it the best. In 1991, Mercury died of bronchopneumonia, a complication of AIDS, and Deacon retired in 1997. Since
then, May and Taylor have occasionally performed together, including with Paul Rodgers (2004–09) and with Adam Lambert
(since 2011). In November 2014, Queen released a new album, Queen Forever, featuring vocals from the late Mercury.
While attending Ealing Art College, Tim Staffell became friends with Farrokh Bulsara, a fellow student who had assumed
the English name of Freddie. Bulsara felt that he and the band had the same tastes and soon became a keen fan of
Smile. In late 1970, after Staffell left to join the band Humpy Bong, the remaining Smile members, encouraged by
Bulsara, changed their name to "Queen" and continued working together. When asked about the name, Bulsara explained,
"I thought up the name Queen. It's just a name, but it's very regal obviously, and it sounds splendid. It's a strong
name, very universal and immediate. It had a lot of visual potential and was open to all sorts of interpretations.
I was certainly aware of gay connotations, but that was just one facet of it." The band had a number of bass players
during this period who did not fit with the band's chemistry. It was not until February 1971 that they settled on
John Deacon and began to rehearse for their first album. They recorded four of their own songs, "Liar", "Keep Yourself
Alive", "The Night Comes Down" and "Jesus", for a demo tape; no record companies were interested. It was also around
this time Freddie changed his surname to "Mercury", inspired by the line "Mother Mercury, look what they've done
to me" in the song "My Fairy King". On 2 July 1971, Queen played their first show in the classic line-up of Mercury,
May, Taylor and Deacon at a Surrey college outside London. Having attended art college, Mercury also designed Queen's
logo, called the Queen crest, shortly before the release of the band's first album. The logo combines the zodiac
signs of all four members: two lions for Leo (Deacon and Taylor), a crab for Cancer (May), and two fairies for Virgo
(Mercury). The lions embrace a stylised letter Q, the crab rests atop the letter with flames rising directly above
it, and the fairies are each sheltering below a lion. There is also a crown inside the Q and the whole logo is over-shadowed
by an enormous phoenix. The whole symbol bears a passing resemblance to the Royal coat of arms of the United Kingdom,
particularly with the lion supporters. The original logo, as found on the reverse-side of the first album cover,
was a simple line drawing, but more intricate colour versions were used on later sleeves. In 1972, after being spotted at De Lane Lea Studios by John Anthony, Queen entered discussions with Trident Studios and were offered a management deal by Norman Sheffield under Neptune Productions, a Trident subsidiary set up to manage the band and let them use Trident's facilities to record new material while the management searched for a record label to sign Queen. This suited both parties at the time: Trident were expanding into management, and under the deal Queen were able to make use of hi-tech recording facilities used by artists of the day such as the Beatles and Elton John to produce new material. However, Trident found it difficult to find a label for a band bearing a name with such connotations during the early 1970s. In July 1973, Queen, under a Trident/EMI deal, finally released their eponymous
debut album, an effort influenced by the heavy metal and progressive rock of the day. The album was received well
by critics; Gordon Fletcher of Rolling Stone said "their debut album is superb", and Chicago's Daily Herald called
it an "above average debut". It drew little mainstream attention, and the lead single "Keep Yourself Alive", a Brian
May composition, sold poorly. Retrospectively, "Keep Yourself Alive" is cited as the highlight of the album, and
in 2008 Rolling Stone ranked it 31st in the "100 Greatest Guitar Songs of All Time", describing it as "an entire
album's worth of riffs crammed into a single song". The album was certified gold in the UK and the US. The group's
second LP, Queen II, was released in 1974, and features rock photographer Mick Rock's iconic image of the band on
the cover. This image would be used as the basis for the 1975 "Bohemian Rhapsody" music video production. The album
reached number five on the British album chart and became the first Queen album to chart in the UK. The Freddie Mercury-written
lead single "Seven Seas of Rhye" reached number ten in the UK, giving the band their first hit. The album is the
first real testament to the band's distinctive layered sound, and features long complex instrumental passages, fantasy-themed
lyrics, and musical virtuosity. Aside from its only single, the album also included the song "The March of the Black
Queen", a six-minute epic which lacks a chorus. The Daily Vault described the number as "menacing". Critical reaction
was mixed; the Winnipeg Free Press, while praising the band's debut album, described Queen II as an "over-produced
monstrosity". Allmusic has described the album as a favourite among the band's hardcore fans, and it is the first
of three Queen albums to feature in the book 1001 Albums You Must Hear Before You Die. After the band's six-night
stand at New York's Uris Theatre in May 1974, Brian May collapsed and was diagnosed as having hepatitis. While recuperating,
May was initially absent when the band started work on their third album, but he returned midway through the recording
process. Released in 1974, Sheer Heart Attack reached number two in the United Kingdom, sold well throughout Europe,
and went gold in the United States. It gave the band their first real experience of international success, and was
a hit on both sides of the Atlantic. The album experimented with a variety of musical genres, including British music
hall, heavy metal, ballads, ragtime, and Caribbean. At this point, Queen started to move away from the progressive
tendencies of their first two releases into a more radio-friendly, song-orientated style. Sheer Heart Attack introduced
new sound and melody patterns that would be refined on their next album, A Night at the Opera. The single "Killer
Queen" from Sheer Heart Attack reached number two on the British charts, and became their first US hit, reaching
number 12 on the Billboard Hot 100. It combines camp, vaudeville, and British music hall with May's guitar virtuosity.
The album's second single, "Now I'm Here", a more traditional hard rock composition, was a number eleven hit in Britain,
while the high-speed rocker "Stone Cold Crazy", featuring May's uptempo riffs, is a precursor to speed metal. In recent
years, the album has received acclaim from music publications: In 2006, Classic Rock ranked it number 28 in "The
100 Greatest British Rock Albums Ever", and in 2007, Mojo ranked it No. 88 in "The 100 Records That Changed the World".
It is also the second of three Queen albums to feature in the book 1001 Albums You Must Hear Before You Die. In 1975,
the band left for a world tour, dressed in costumes created by Zandra Rhodes and accompanied by banks of lights and effects. They toured the US as headliners, and played in Canada for the first time. In September, after an acrimonious
split with Trident, the band negotiated themselves out of their Trident Studios contract and searched for new management.
One of the options they considered was an offer from Led Zeppelin's manager, Peter Grant. Grant wanted them to sign
with Led Zeppelin's own production company, Swan Song Records. The band found the contract unacceptable and instead
contacted Elton John's manager, John Reid, who accepted the position. In late 1975, Queen recorded and released A
Night at the Opera, taking its name from the popular Marx Brothers movie. At the time, it was the most expensive
album ever produced. Like its predecessor, the album features diverse musical styles and experimentation with stereo
sound. In "The Prophet's Song", an eight-minute epic, the middle section is a canon, with simple phrases layered
to create a full-choral sound. The Mercury-penned ballad "Love of My Life" featured a harp and overdubbed vocal
harmonies. The album was very successful in Britain, and went triple platinum in the United States. The British public
voted it the 13th greatest album of all time in a 2004 Channel 4 poll. It has also ranked highly in international
polls; in a worldwide Guinness poll, it was voted the 19th greatest of all time, while an ABC poll saw the Australian
public vote it the 28th greatest of all time. A Night at the Opera has frequently appeared in "greatest albums" lists
reflecting the opinions of critics. Among other accolades, it was ranked number 16 in Q Magazine's "The 50 Best British
Albums Ever" in 2004, and number 11 in Rolling Stone's "The 100 Greatest Albums of All Time" as featured in their
Mexican edition in 2004. It was also placed at No. 230 on Rolling Stone magazine's list of "The 500 Greatest Albums
of All Time" in 2003. A Night at the Opera is the third and final Queen album to be featured in the book 1001 Albums
You Must Hear Before You Die. The album also featured the hit single "Bohemian Rhapsody", which was number one in
the UK for nine weeks. Mercury's close friend and advisor, Capital London radio DJ Kenny Everett, played a pivotal
role in giving the single exposure. It is the third-best-selling single of all time in the UK, surpassed only by
Band Aid's "Do They Know It's Christmas?" and Elton John's "Candle in the Wind 1997", and is the best-selling commercial
single in the UK. It also reached number nine in the United States (a 1992 re-release reached number two on the Billboard
Hot 100 for five weeks). It is the only single ever to sell a million copies on two separate occasions, and became
the Christmas number one twice in the UK, the only single ever to do so. "Bohemian Rhapsody" has been voted numerous
times the greatest song of all time. The band decided to make a video to accompany the single and hired Trilion, a subsidiary of their former management company Trident Studios, which used new technology to create it; the result is generally considered to have been the first "true" music video ever produced, and it popularised the medium. The album's first track, "Death on Two Legs", is said to have been written by Mercury about Norman Sheffield and the band's former management at Trident. Although other bands, including the Beatles, had made short promotional
films or videos of songs prior to this, those were generally made to be aired on specific television
shows. On the impact of "Bohemian Rhapsody", Rolling Stone states: "Its influence cannot be overstated, practically
inventing the music video seven years before MTV went on the air." The second single from the album, "You're My Best
Friend", the second song composed by John Deacon, and his first single, peaked at number sixteen in the United States
and went on to become a worldwide Top Ten hit. The band's A Night at the Opera Tour began in November 1975, and covered
Europe, the United States, Japan, and Australia. By 1976, Queen were back in the studio recording A Day at the Races,
which is often regarded as a sequel album to A Night at the Opera. It again borrowed the name of a Marx Brothers
movie, and its cover was similar to that of A Night at the Opera, a variation on the same Queen Crest. The most recognisable
of the Marx Brothers, Groucho Marx, invited Queen to visit him in his Los Angeles home in March 1977; there the band
thanked him in person, and performed "'39" a cappella. Musically, A Day at the Races was by both fans' and critics'
standards a strong effort, reaching number one in the UK and Japan, and number five in the US. The major hit on the
album was "Somebody to Love", a gospel-inspired song in which Mercury, May, and Taylor multi-tracked their voices
to create a 100-voice gospel choir. The song went to number two in the UK, and number thirteen in the US. The album
also featured one of the band's heaviest songs, May's "Tie Your Mother Down", which became a staple of their live
shows. During 1976, Queen played one of their most famous gigs, a free concert in Hyde Park, London. A concert organised
by the entrepreneur Richard Branson, it set an attendance record with 150,000 people confirmed in the audience. On
1 December 1976, Queen were the intended guests on London's early evening Today programme, but they pulled out at
the last minute, which saw their late replacement on the show, EMI labelmate the Sex Pistols, give their infamous
interview. During the A Day at the Races Tour in 1977, Queen performed sold-out shows at Madison Square Garden, New
York, in February, and Earls Court, London, in June. The band's sixth studio album News of the World was released
in 1977, which has gone four times platinum in the United States, and twice in the UK. The album contained many songs
tailor-made for live performance, including two of rock's most recognisable anthems, "We Will Rock You" and the rock
ballad "We Are the Champions", both of which became enduring international sports anthems, and the latter reached
number four in the US. Queen commenced the News of the World Tour in October 1977, and Robert Hilburn of the Los
Angeles Times called this concert tour the band's "most spectacularly staged and finely honed show". In 1978, the
band released Jazz, which reached number two in the UK and number six on the Billboard 200 in the US. The album included
the hit singles "Fat Bottomed Girls" and "Bicycle Race" on a double-sided record. Queen rented Wimbledon Stadium
for a day to shoot the video for "Bicycle Race", with 65 naked female models hired to stage a nude bicycle race. Reviews of the album
in recent years have been more favourable. Another notable track from Jazz, "Don't Stop Me Now", provides another
example of the band's exuberant vocal harmonies. In 1978, Queen toured the US and Canada, and spent much of 1979
touring in Europe and Japan. They released their first live album, Live Killers, in 1979; it went platinum twice
in the US. Queen also released the very successful single "Crazy Little Thing Called Love", a rockabilly inspired
song done in the style of Elvis Presley. The song made the top 10 in many countries, topped the Australian ARIA Charts
for seven consecutive weeks, and was the band's first number one single in the United States where it topped the
Billboard Hot 100 for four weeks. Having written the song on guitar and played rhythm on the record, Mercury played
rhythm guitar while performing the song live, which was the first time he ever played guitar in concert. In December
1979, Queen played the opening night at the Concert for the People of Kampuchea in London, having accepted a request
by the event's organiser Paul McCartney. Queen began their 1980s career with The Game. It featured the singles "Crazy
Little Thing Called Love" and "Another One Bites the Dust", both of which reached number one in the US. After attending
a Queen concert in Los Angeles, Michael Jackson suggested to Mercury backstage that "Another One Bites the Dust"
be released as a single, and in October 1980 it spent three weeks at number one. The album topped the Billboard 200
for five weeks, and sold over four million copies in the US. It was also the first appearance of a synthesiser on
a Queen album. Heretofore, their albums featured a distinctive "No Synthesisers!" sleeve note. The note is widely
assumed to reflect an anti-synth, pro-"hard"-rock stance by the band, but was later revealed by producer Roy Thomas
Baker to be an attempt to clarify that those albums' multi-layered solos were created with guitars, not synths, as
record company executives kept assuming at the time. In September 1980, Queen performed three sold-out shows at Madison
Square Garden. In 1980, Queen also released the soundtrack they had recorded for Flash Gordon. At the 1981 American
Music Awards in January, "Another One Bites the Dust" won the award for Favorite Pop/Rock Single, and Queen were
nominated for Favorite Pop/Rock Band, Duo, or Group. In February 1981, Queen travelled to South America as part of
The Game Tour, and became the first major rock band to play in Latin American stadiums. The tour included five shows
in Argentina, one of which drew the largest single concert crowd in Argentine history with an audience of 300,000
in Buenos Aires and two concerts at the Morumbi Stadium in São Paulo, Brazil, where they played to an audience of
more than 131,000 people in the first night (then the largest paying audience for a single band anywhere in the world)
and more than 120,000 people the following night. In October of the same year, Queen performed for more than 150,000
fans on 9 October at Monterrey (Estadio Universitario) and on 17 and 18 October at Puebla (Estadio Zaragoza), Mexico. On 24
and 25 November, Queen played two sold-out nights at the Montreal Forum, Quebec, Canada. One of Mercury's most notable
performances of The Game's final track, "Save Me", took place in Montreal, and the concert is recorded in the live
album, Queen Rock Montreal. In 1982, the band released the album Hot Space, a departure from their trademark seventies
sound, this time a mixture of rock, pop rock, dance, funk, and R&B. Most of the album was recorded in Munich
during the most turbulent period in the band's history, and Taylor and May lamented the new sound, with both being
very critical of the influence Mercury's personal manager Paul Prenter had on the singer. May was also scathing of
Prenter, who was Mercury's manager from the early 1980s to 1984, for being dismissive of the importance of radio
stations, such as the US networks, and their vital connection between the artist and the community, and for denying
them access to Mercury. The band stopped touring North America after their Hot Space Tour, as their success there
had waned, although they would perform on American television for the only time during the eighth season premiere
of Saturday Night Live. Queen left Elektra Records, their label in the United States, Canada, Japan, Australia, and
New Zealand, and signed with EMI/Capitol Records. That year, Queen began The Works Tour, the first tour to feature
keyboardist Spike Edney as an extra live musician. The tour featured nine sold-out dates in October in Bophuthatswana,
South Africa, at the arena in Sun City. Upon returning to England, they were the subject of outrage, having played
in South Africa during the height of apartheid and in violation of worldwide divestment efforts and a United Nations
cultural boycott. The band responded to the critics by stating that they were playing music for fans in South Africa,
and they also stressed that the concerts were played before integrated audiences. Queen donated to a school for the
deaf and blind as a philanthropic gesture but were fined by the British Musicians' Union and placed on the United
Nations' list of blacklisted artists. At Live Aid, held at Wembley on 13 July 1985, in front of the biggest-ever TV audience
of 1.9 billion, Queen performed some of their greatest hits, during which the sold-out stadium audience of 72,000
people clapped, sang, and swayed in unison. The show's organisers, Bob Geldof and Midge Ure, other musicians such
as Elton John, Cliff Richard and Dave Grohl, and music journalists writing for the BBC, CNN, Rolling Stone, MTV,
The Telegraph among others, stated that Queen stole the show. An industry poll in 2005 ranked it the greatest rock
performance of all time. Mercury's powerful, sustained note during the a cappella section came to be known as "The
Note Heard Round the World". When interviewed for Mojo magazine the band said the most amazing sight at Live Aid
was to see the audience clapping to "Radio Ga Ga". Brian May stated: "I'd never seen anything like that in my life
and it wasn't calculated either. We understood our audience and played to them but that was one of those weird accidents
because of the (music) video. I remember thinking 'oh great, they've picked it up' and then I thought 'this is not
a Queen audience'. This is a general audience who've bought tickets before they even knew we were on the bill. And
they all did it. How did they know? Nobody told them to do it." The band, now revitalised by the response to Live
Aid – a "shot in the arm" Roger Taylor called it, — and the ensuing increase in record sales, ended 1985 by releasing
the single "One Vision", which was the third time after "Stone Cold Crazy" and "Under Pressure (with David Bowie)"
that all four bandmembers received a writing credit for the one song. Also, a limited-edition boxed set containing
all Queen albums to date was released under the title of The Complete Works. The package included previously unreleased
material, most notably Queen's non-album single of Christmas 1984, titled "Thank God It's Christmas". In the summer of 1986, Queen went on their final tour with Freddie Mercury. For the sold-out tour, in support of A Kind of Magic, they once again hired Spike Edney, leading to him being dubbed the unofficial fifth member. The Magic Tour's highlight was at
Wembley Stadium in London and resulted in the live double album, Queen at Wembley, released on CD and as a live concert
DVD, which has gone five times platinum in the US and four times platinum in the UK. Queen could not book Wembley
for a third night, but they did play at Knebworth Park. The show sold out within two hours and over 120,000 fans
packed the park for what was Queen's final live performance with Mercury. Queen began the tour at the Råsunda Stadium
in Stockholm, Sweden, and during the tour the band performed a concert at Slane Castle, Ireland, in front of an audience
of 95,000, which broke the venue's attendance record. The band also played behind the Iron Curtain when they performed
to a crowd of 80,000 at the Népstadion in Budapest, in what was one of the biggest rock concerts ever held in Eastern
Europe. More than one million people saw Queen on the tour—400,000 in the United Kingdom alone, a record at the time.
After working on various solo projects during 1988 (including Mercury's collaboration with Montserrat Caballé, Barcelona),
the band released The Miracle in 1989. The album continued the direction of A Kind of Magic, using a pop-rock sound
mixed with a few heavy numbers. It spawned the European hits "I Want It All", "Breakthru", "The Invisible Man", "Scandal",
and "The Miracle". The Miracle also began a change in direction of Queen's songwriting philosophy. Since the band's
beginning, nearly all songs had been written by and credited to a single member, with other members adding minimally.
With The Miracle, the band's songwriting became more collaborative, and they vowed to credit the final product only
to Queen as a group. After fans noticed Mercury's increasingly gaunt appearance in 1988, rumours began to spread
that Mercury was suffering from AIDS. Mercury flatly denied this, insisting he was merely "exhausted" and too busy
to provide interviews. The band decided to continue making albums, starting with The Miracle in 1989 and continuing
with Innuendo in 1991. Despite his deteriorating health, the lead singer continued to contribute. For the last two
albums made while Mercury was still alive, the band credited all songs to Queen, rather than specific members of
the group, freeing them of internal conflict and differences. In 1990, Queen ended their contract with Capitol and
signed with Disney's Hollywood Records, which has since remained the group's music catalogue owner in the United
States and Canada. That same year, Mercury made his final public appearance when he joined the rest of Queen to collect
the Brit Award for Outstanding Contribution to British Music. Innuendo was released in early 1991 with an eponymous
number 1 UK hit and other charting singles, including "The Show Must Go On". Mercury was increasingly ill and could
barely walk when the band recorded "The Show Must Go On" in 1990. Because of this, May had concerns about whether
he was physically capable of singing it. Recalling Mercury's successful performance, May states: "he went in and killed it, completely lacerated that vocal". The rest of the band were ready to record whenever Mercury felt able to come into the studio, for an hour or two at a time. May says of Mercury: "He just kept saying, 'Write me more. Write me stuff. I want to just sing this and do it and when I am gone you can finish it off.' He had no fear, really." The
band's second greatest hits compilation, Greatest Hits II, followed in October 1991; it is the eighth best-selling album of all time in the UK and has sold 16 million copies worldwide. On 23 November 1991, in a prepared statement
made on his deathbed, Mercury confirmed that he had AIDS. Within 24 hours of the statement, he died of bronchial
pneumonia, which was brought on as a complication of AIDS. His funeral service on 27 November in Kensal Green, West
London was private, and held in accordance with the Zoroastrian religious faith of his family. "Bohemian Rhapsody"
was re-released as a single shortly after Mercury's death, with "These Are the Days of Our Lives" as the double A-side.
The music video for "These Are the Days of Our Lives" contains Mercury's final scenes in front of the camera. The
single went to number one in the UK, remaining there for five weeks – the only recording to top the Christmas chart
twice and the only one to be number one in four different years (1975, 1976, 1991, and 1992). Initial proceeds from
the single – approximately £1,000,000 – were donated to the Terrence Higgins Trust. Queen's popularity was stimulated
in North America when "Bohemian Rhapsody" was featured in the 1992 comedy film Wayne's World. Its inclusion helped
the song reach number two on the Billboard Hot 100 for five weeks in 1992 (it remained in the Hot 100 for over 40
weeks), and won the band an MTV Award at the 1992 MTV Video Music Awards. The compilation album Classic Queen also
reached number four on the Billboard 200, and is certified three times platinum in the US. Wayne's World footage
was used to make a new music video for "Bohemian Rhapsody", with which the band and management were delighted. On
20 April 1992, The Freddie Mercury Tribute Concert was held at London's Wembley Stadium to a 72,000-strong crowd.
Performers, including Def Leppard, Robert Plant, Guns N' Roses, Elton John, David Bowie, George Michael, Annie Lennox,
Seal, Extreme, and Metallica performed various Queen songs along with the three remaining Queen members (and Spike Edney). The concert is listed in the Guinness Book of Records as "The largest rock star benefit concert", as it was
televised to over 1.2 billion viewers worldwide, and raised over £20,000,000 for AIDS charities. Queen's last album
featuring Mercury, titled Made in Heaven, was finally released in 1995, four years after his death. Featuring tracks
such as "Too Much Love Will Kill You" and "Heaven for Everyone", it was constructed from Mercury's final recordings
in 1991, material left over from their previous studio albums and re-worked material from May, Taylor, and Mercury's
solo albums. The album also featured the song "Mother Love", the last vocal recording Mercury made prior to his death,
which he completed using a drum machine, over which May, Taylor and Deacon later added the instrumental track. After
completing the penultimate verse, Mercury had told the band he "wasn't feeling that great" and stated, "I will finish
it when I come back, next time"; however, he never made it back into the studio, so May later recorded the final
verse of the song. Both stages of recording, before and after Mercury's death, were completed at the band's studio
in Montreux, Switzerland. The album reached No. 1 on the UK charts immediately following its release, and has sold
20 million copies worldwide. On 25 November 1996, a statue of Mercury was unveiled in Montreux overlooking Lake Geneva,
almost five years to the day since his death. In 1997, Queen returned to the studio to record "No-One but You (Only
the Good Die Young)", a song dedicated to Mercury and all those that die too soon. It was released as a bonus track
on the Queen Rocks compilation album later that year. In January 1997, Queen performed "The Show Must Go On" live
with Elton John and the Béjart Ballet in Paris at an evening in remembrance of Mercury; the event marked the last performance
and public appearance of John Deacon, who chose to retire. The Paris concert was only the second time Queen had played
live since Mercury's death, prompting Elton John to urge them to perform again. Brian May and Roger Taylor performed
together at several award ceremonies and charity concerts, sharing vocals with various guest singers. During this
time, they were billed as Queen + followed by the guest singer's name. In 1998, the duo appeared at Luciano Pavarotti's
benefit concert with May performing "Too Much Love Will Kill You" with Pavarotti, later playing "Radio Ga Ga", "We
Will Rock You", and "We Are the Champions" with Zucchero. They again attended and performed at Pavarotti's benefit
concert in Modena, Italy in May 2003. Several of the guest singers recorded new versions of Queen's hits under the
Queen + name, such as Robbie Williams providing vocals for "We Are the Champions" for the soundtrack of A Knight's
Tale (2001). In 1999, a Greatest Hits III album was released. This featured, among others, "Queen + Wyclef Jean"
on a rap version of "Another One Bites the Dust". A live version of "Somebody to Love" by George Michael and a live
version of "The Show Must Go On" with Elton John were also featured in the album. By this point, Queen's vast amount
of record sales made them the second best selling artist in the UK of all time, behind the Beatles. In 2002, Queen
were awarded the 2,207th star on the Hollywood Walk of Fame, which is located at 6358 Hollywood Blvd. On 29 November
2003, May and Taylor performed at the 46664 Concert hosted by Nelson Mandela at Green Point Stadium, Cape Town, to
raise awareness of the spread of HIV/AIDS in South Africa. May and Taylor spent time at Mandela's home, discussing
how Africa's problems might be approached, and two years later the band was made ambassadors for the 46664 cause.
At the end of 2004, May and Taylor announced that they would reunite and return to touring in 2005 with Paul Rodgers
(founder and former lead singer of Free and Bad Company). Brian May's website also stated that Rodgers would be "featured
with" Queen as "Queen + Paul Rodgers", not replacing Mercury. The retired John Deacon would not be participating.
In November 2004, Queen were among the inaugural inductees into the UK Music Hall of Fame, and the award ceremony
was the first event at which Rodgers joined May and Taylor as vocalist. Between 2005 and 2006, Queen + Paul Rodgers
embarked on a world tour, which was the first time Queen toured since their last tour with Freddie Mercury in 1986.
The band's drummer Roger Taylor commented: "We never thought we would tour again, Paul [Rodgers] came along by chance
and we seemed to have a chemistry. Paul is just such a great singer. He's not trying to be Freddie." The first leg
was in Europe, the second in Japan, and the third in the US in 2006. Queen received the inaugural VH1 Rock Honors
at the Mandalay Bay Events Center in Las Vegas, Nevada, on 25 May 2006. The Foo Fighters paid homage to the band
in performing "Tie Your Mother Down" to open the ceremony before being joined on stage by May, Taylor, and Paul Rodgers,
who played a selection of Queen hits. On 15 August 2006, Brian May confirmed through his website and fan club that
Queen + Paul Rodgers would begin producing their first studio album beginning in October, to be recorded at a "secret
location". Queen + Paul Rodgers performed at the Nelson Mandela 90th Birthday Tribute held in Hyde Park, London on
27 June 2008, to commemorate Mandela's ninetieth birthday, and again promote awareness of the HIV/AIDS pandemic.
The first Queen + Paul Rodgers album, titled The Cosmos Rocks, was released in Europe on 12 September 2008 and in
the United States on 28 October 2008. Following the release of the album, the band again went on a tour through Europe,
opening on Kharkiv's Freedom Square in front of 350,000 Ukrainian fans. The Kharkiv concert was later released on
DVD. The tour then moved to Russia, and the band performed two sold-out shows at the Moscow Arena. After the band completed the first leg of its extensive European tour, playing 15 sold-out dates across nine countries, the UK leg of the tour sold out within 90 minutes of going on sale and included three London dates, the first of which
was The O2 on 13 October. The last leg of the tour took place in South America, and included a sold-out concert at
the Estadio José Amalfitani, Buenos Aires. On 20 May 2009, May and Taylor performed "We Are the Champions" live on
the season finale of American Idol with winner Kris Allen and runner-up Adam Lambert providing a vocal duet. In mid-2009,
after the split of Queen + Paul Rodgers, the Queen online website announced a new greatest hits compilation named
Absolute Greatest. The album was released on 16 November and peaked at number 3 in the official UK Chart. The album
contains 20 of Queen's biggest hits spanning their entire career and was released in four different formats: single
disc, double disc (with commentary), double disc with feature book, and a vinyl record. Prior to its release, a competition
was run by Queen online to guess the track listing as a promotion for the album. On 30 October 2009, May wrote a
fanclub letter on his website stating that Queen had no intentions to tour in 2010 but that there was a possibility
of a performance. He was quoted as saying, "The greatest debate, though, is always about when we will next play together
as Queen. At the moment, in spite of the many rumours that are out there, we do not have plans to tour in 2010. The
good news, though, is that Roger and I have a much closer mutual understanding these days—privately and professionally
... and all ideas are carefully considered. Music is never far away from us. As I write, there is an important one-off
performance on offer, in the USA, and it remains to be decided whether we will take up this particular challenge.
Every day, doors seem to open, and every day, we interact, perhaps more than ever before, with the world outside.
It is a time of exciting transition in Rock music and in 'The Business'. It's good that the pulse still beats". On
15 November 2009, May and Taylor performed "Bohemian Rhapsody" live on the British TV show The X Factor alongside
the finalists. On 7 May 2010, May and Taylor announced that they were quitting their record label, EMI, after almost
40 years. On 20 August 2010, Queen's manager Jim Beach put out a newsletter stating that the band had signed a new
contract with Universal Music. During an interview for Hardtalk on the BBC on 22 September, May confirmed that the
band's new deal was with Island Records, a subsidiary of Universal Music Group. For the first time since the late
1980s, Queen's catalogue would have the same distributor worldwide, as their North American label—Hollywood Records—is distributed by Universal (for a time in the late 1980s, Queen was on EMI-owned Capitol Records
in the US). In May 2011, Jane's Addiction vocalist Perry Farrell noted that Queen were scouting Chris Chaney, Jane's Addiction's former and current live bassist, to join the band. Farrell stated: "I have to keep Chris away from Queen,
who want him and they're not gonna get him unless we're not doing anything. Then they can have him." In the same
month, Paul Rodgers stated he may tour with Queen again in the near future. At the 2011 Broadcast Music, Incorporated
(BMI) Awards held in London on 4 October, Queen received the BMI Icon Award in recognition of their airplay success
in the US. At the 2011 MTV Europe Music Awards on 6 November, Queen received the Global Icon Award, which Katy Perry
presented to Brian May. Queen closed the awards ceremony, with Adam Lambert on vocals, performing "The Show Must
Go On", "We Will Rock You" and "We Are the Champions". The collaboration garnered a positive response from both fans
and critics, resulting in speculation about future projects together. On 25 and 26 April 2012, May and Taylor appeared on the eleventh season of American Idol at the Nokia Theatre, Los Angeles, performing a Queen medley with the six
finalists on the first show, and the following day performed "Somebody to Love" with the 'Queen Extravaganza' band.
Queen were scheduled to headline Sonisphere at Knebworth on 7 July 2012 with Adam Lambert before the festival was
cancelled. Queen's final concert with Freddie Mercury had been at Knebworth in 1986. Brian May commented, "It's a worthy
challenge for us, and I'm sure Adam would meet with Freddie's approval." Queen expressed disappointment at the cancellation
and released a statement to the effect that they were looking to find another venue. It was later announced that
Queen + Adam Lambert would play two shows at the Hammersmith Apollo, London on 11 and 12 July 2012. Both shows sold
out within 24 hours of tickets going on open sale. A third London date was scheduled for 14 July. On 30 June, Queen
+ Lambert performed in Kiev, Ukraine at a joint concert with Elton John for the Elena Pinchuk ANTIAIDS Foundation.
Queen also performed with Lambert on 3 July 2012 at Moscow's Olympic Stadium, and on 7 July 2012 at the Municipal
Stadium in Wroclaw, Poland. On 20 September 2013, Queen + Adam Lambert performed at the iHeartRadio Music Festival
at the MGM Grand Hotel & Casino in Las Vegas. On 6 March 2014, the band announced on Good Morning America that Queen
+ Adam Lambert would tour North America in summer 2014, and would also tour Australia and New Zealand in August/September
2014. In an interview with Rolling Stone, May and Taylor said that although the tour with Lambert is a limited thing,
they are open to him becoming an official member, and cutting new material with him. Queen drew artistic influence
from British rock acts of the 1960s and early 1970s, such as the Beatles, the Kinks, Cream, Led Zeppelin, Pink Floyd,
the Who, Black Sabbath, Slade, Deep Purple, David Bowie, Genesis and Yes, in addition to American guitarist Jimi
Hendrix, with Mercury also inspired by the gospel singer Aretha Franklin. May referred to the Beatles as being "our
bible in the way they used the studio and they painted pictures and this wonderful instinctive use of harmonies."
At the band's outset in the early 1970s, Queen's music was characterised as "Led Zeppelin meets Yes" due to its combination
of "acoustic/electric guitar extremes and fantasy-inspired multi-part song epics". Queen composed music that drew
inspiration from many different genres of music, often with a tongue-in-cheek attitude. The genres they have been
associated with include progressive rock, symphonic rock, art rock, glam rock, hard rock, heavy metal, pop rock,
and psychedelic rock. Queen also wrote songs that were inspired by diverse musical styles which are not typically
associated with rock groups, such as opera, music hall, folk music, gospel, ragtime, and dance/disco. Several Queen
songs were written with audience participation in mind, such as "We Will Rock You" and "We Are the Champions". Similarly,
"Radio Ga Ga" became a live favourite because it would have "crowds clapping like they were at a Nuremberg rally".
In 1963, the teenage Brian May and his father custom-built his signature guitar, the Red Special, which was purposely designed to feed back. Sonic experimentation figured heavily in Queen's songs. A distinctive characteristic of Queen's music is its vocal harmonies, usually composed of the voices of May, Mercury, and Taylor and best heard on the studio albums A Night at the Opera and A Day at the Races. Some of the groundwork for the development of this
sound can be attributed to their former producer Roy Thomas Baker, and their engineer Mike Stone. Besides vocal harmonies,
Queen were also known for multi-tracking voices to imitate the sound of a large choir through overdubs. For instance,
according to Brian May, there are over 180 vocal overdubs in "Bohemian Rhapsody". The band's vocal structures have
been compared with the Beach Boys, but May stated they were not "much of an influence". Queen have been recognised
as having made significant contributions to such genres as hard rock and heavy metal, among others. Hence, the band
have been cited as an influence by many other musicians. Moreover, like their music, the bands and artists that have
claimed to be influenced by Queen and have expressed admiration for them are diverse, spanning different generations,
countries, and genres, including heavy metal: Judas Priest, Iron Maiden, Metallica, Dream Theater, Trivium, Megadeth,
Anthrax, Slipknot and Rage Against the Machine; hard rock: Guns N' Roses, Def Leppard, Van Halen, Mötley Crüe, Steve
Vai, the Cult, the Darkness, Manic Street Preachers, Kid Rock and Foo Fighters; alternative rock: Nirvana, Radiohead,
Trent Reznor, Muse, Franz Ferdinand, Red Hot Chili Peppers, Jane's Addiction, Faith No More, Melvins, the Flaming
Lips, Yeah Yeah Yeahs and The Smashing Pumpkins; pop rock: Meat Loaf, The Killers, My Chemical Romance, Fall Out
Boy and Panic! At the Disco; and pop: Michael Jackson, George Michael, Robbie Williams, Adele, Lady Gaga and Katy
Perry. In 2002, Queen's "Bohemian Rhapsody" was voted "the UK's favourite hit of all time" in a poll conducted by
the Guinness World Records British Hit Singles Book. In 2004 the song was inducted into the Grammy Hall of Fame.
Many scholars consider the "Bohemian Rhapsody" music video ground-breaking, and credit it with popularising the medium.
Rock historian Paul Fowles states the song is "widely credited as the first global hit single for which an accompanying
video was central to the marketing strategy". It has been hailed as launching the MTV age. Queen have been acclaimed for their stadium rock; in 2005, an industry poll ranked their performance at Live Aid in 1985 as the best live act in history. In
2007, they were also voted the greatest British band in history by BBC Radio 2 listeners. The band have released
a total of eighteen number-one albums, eighteen number-one singles, and ten number-one DVDs worldwide, making them
one of the world's best-selling music artists. Queen have sold over 150 million records, with some estimates in excess
of 300 million records worldwide, including 34.5 million albums in the US as of 2004. Inducted into the Rock and
Roll Hall of Fame in 2001, the band is the only group in which every member has composed more than one chart-topping
single, and all four members were inducted into the Songwriters Hall of Fame in 2003. In 2009, "We Will Rock You"
and "We Are the Champions" were inducted into the Grammy Hall of Fame, and the latter was voted the world's favourite
song in a global music poll. Queen are one of the most bootlegged bands ever, according to Nick Weymouth, who manages
the band's official website. A 2001 survey discovered the existence of 12,225 websites dedicated to Queen bootlegs,
the highest number for any band. Bootleg recordings have contributed to the band's popularity in certain countries
where Western music is censored, such as Iran. In a project called Queen: The Top 100 Bootlegs, many of these have
been made officially available to download for a nominal fee from Queen's website, with profits going to the Mercury
Phoenix Trust. Rolling Stone ranked Queen at number 52 on its list of the "100 Greatest Artists of All Time", while
ranking Mercury the 18th greatest singer and May the 26th greatest guitarist. Queen were named 13th on VH1's
100 Greatest Artists of Hard Rock list, and in 2010 were ranked 17th on VH1's 100 Greatest Artists of All Time list.
In 2012, Gigwise readers named Queen the best band of the past 60 years. The original London production of the musical We Will Rock You was scheduled to close on Saturday, 7 October 2006, at the Dominion Theatre, but due to public demand, the show ran until May 2014. We Will Rock You became the longest-running musical ever to run at this prime London theatre, overtaking the
previous record holder, the Grease musical. Brian May stated in 2008 that they were considering writing a sequel
to the musical. The musical toured around the UK in 2009, playing at Manchester Palace Theatre, Sunderland Empire,
Birmingham Hippodrome, Bristol Hippodrome, and Edinburgh Playhouse. Under the supervision of May and Taylor, numerous
restoration projects have been under way involving Queen's lengthy audio and video catalogue. DVD releases of their
1986 Wembley concert (titled Live at Wembley Stadium), 1982 Milton Keynes concert (Queen on Fire – Live at the Bowl),
and two Greatest Video Hits (Volumes 1 and 2, spanning the 1970s and 1980s) have seen the band's music remixed into
5.1 and DTS surround sound. So far, only two of the band's albums, A Night at the Opera and The Game, have been fully
remixed into high-resolution multichannel surround on DVD-Audio. A Night at the Opera was re-released with some revised
5.1 mixes and accompanying videos in 2005 for the 30th anniversary of the album's original release (CD+DVD-Video
set). In 2007, a Blu-ray edition of Queen's previously released concerts, Queen Rock Montreal & Live Aid, was released,
marking their first project in 1080p HD. Queen have been featured multiple times in the Guitar Hero franchise: a
cover of "Killer Queen" in the original Guitar Hero, "We Are The Champions", "Fat Bottomed Girls", and the Paul Rodgers
collaboration "C-lebrity" in a track pack for Guitar Hero World Tour, "Under Pressure" with David Bowie in Guitar
Hero 5, "I Want It All" in Guitar Hero: Van Halen, "Stone Cold Crazy" in Guitar Hero: Metallica, and "Bohemian Rhapsody"
in Guitar Hero: Warriors of Rock. On 13 October 2009, Brian May revealed there was "talk" going on "behind the scenes"
about a dedicated Queen Rock Band game. Queen contributed music directly to the films Flash Gordon (1980), with "Flash"
as the theme song, and Highlander (the original 1986 film), with "A Kind of Magic", "One Year of Love", "Who Wants
to Live Forever", "Hammer to Fall", and the theme "Princes of the Universe", which was also used as the theme of
the Highlander TV series (1992–1998). In the United States, "Bohemian Rhapsody" was re-released as a single in 1992
after appearing in the comedy film Wayne's World. The single subsequently reached number two on the Billboard Hot
100 (with "The Show Must Go On" as the first track on the single) and helped rekindle the band's popularity in North
America. Several films have featured their songs performed by other artists. A version of "Somebody to Love" by Anne
Hathaway was in the 2004 film Ella Enchanted. Brittany Murphy also recorded a cover of the same song for the 2006 film Happy Feet. In 2001, a version of "The Show Must Go On" was performed by Jim Broadbent and Nicole Kidman
in the film musical Moulin Rouge!. The 2001 film A Knight's Tale has a version of "We Are the Champions" performed
by Robbie Williams and Queen; the film also features "We Will Rock You" played by the medieval audience. On 11 April
2006, Brian May and Roger Taylor appeared on the American singing contest television show American Idol. Each contestant
was required to sing a Queen song during that week of the competition. Songs which appeared on the show included
"Bohemian Rhapsody", "Fat Bottomed Girls", "The Show Must Go On", "Who Wants to Live Forever", and "Innuendo". Brian
May later criticised the show for editing specific scenes, one of which made the group's time with contestant Ace
Young look negative, despite the experience having been positive. Taylor and May again appeared on the American Idol season 8 finale
in May 2009, performing "We Are the Champions" with finalists Adam Lambert and Kris Allen. On 15 November 2009, Brian
May and Roger Taylor appeared on the singing contest television show X Factor in the UK. In the autumn of 2009, Glee
featured the fictional high school's show choir singing "Somebody to Love" as their second act performance in the
episode "The Rhodes Not Taken". The performance was included on the show's Volume 1 soundtrack CD. In June 2010,
the choir performed "Another One Bites the Dust" in the episode "Funk". The following week's episode, "Journey to
Regionals", features a rival choir performing "Bohemian Rhapsody" in its entirety. The song was featured on the episode's
EP. In May 2012, the choir performed "We Are the Champions" in the episode "Nationals", and the song features in
The Graduation Album. In September 2010, Brian May announced in a BBC interview that Sacha Baron Cohen was to play
Mercury in a biographical film. Time commented with approval on Baron Cohen's singing ability and visual similarity to
Mercury. However, in July 2013, Baron Cohen dropped out of the role due to "creative differences" between him and
the surviving band members. In December 2013, it was announced that Ben Whishaw, best known for playing Q in the
James Bond film Skyfall, had been chosen to replace Baron Cohen in the role of Mercury. The motion picture is being written
by Peter Morgan, who had been nominated for Oscars for his screenplays The Queen and Frost/Nixon. The film, which
is being co-produced by Robert De Niro's TriBeCa Productions, will focus on Queen's formative years and the period
leading up to the celebrated performance at the 1985 Live Aid concert.
Presbyterianism is a part of the Reformed tradition within Protestantism which traces its origins to the British Isles. Presbyterian
churches derive their name from the presbyterian form of church government, which is governed by representative assemblies
of elders. Many Reformed churches are organized this way, but the word "Presbyterian," when capitalized, is often
applied uniquely to the churches that trace their roots to the Scottish and English churches that bore that name
and English political groups that formed during the English Civil War. Presbyterian theology typically emphasizes
the sovereignty of God, the authority of the Scriptures, and the necessity of grace through faith in Christ. Presbyterian
church government was ensured in Scotland by the Acts of Union in 1707, which created the Kingdom of Great Britain.
In fact, most Presbyterians found in England can trace a Scottish connection, and the Presbyterian denomination was
also taken to North America mostly by Scots and Scots-Irish immigrants. The Presbyterian denominations in Scotland
hold to the theology of John Calvin and his immediate successors, although there are a range of theological views
within contemporary Presbyterianism. Local congregations of churches which use presbyterian polity are governed by
sessions made up of representatives of the congregation (elders); a conciliar approach which is found at other levels
of decision-making (presbytery, synod and general assembly). The roots of Presbyterianism lie in the European Reformation
of the 16th century; the example of John Calvin's Geneva being particularly influential. Most Reformed churches which
trace their history back to Scotland are either presbyterian or congregationalist in government. In the twentieth
century, some Presbyterians played an important role in the ecumenical movement, including the World Council of Churches.
Many Presbyterian denominations have found ways of working together with other Reformed denominations and Christians
of other traditions, especially in the World Communion of Reformed Churches. Some Presbyterian churches have entered
into unions with other churches, such as Congregationalists, Lutherans, Anglicans, and Methodists. Presbyterians
in the United States came largely from Scotch-Irish immigrant communities, and also from New England Yankee communities
that had originally been Congregational but changed because of an agreed-upon "Plan of Union of 1801" for frontier
areas. Presbyterian history is part of the history of Christianity, but the beginning of Presbyterianism as a distinct
movement occurred during the 16th-century Protestant Reformation. As the Catholic Church resisted the reformers,
several different theological movements splintered from the Church and formed different denominations. Presbyterianism
was especially influenced by the French theologian John Calvin, who is credited with the development of Reformed
theology, and the work of John Knox, a Scotsman who studied with Calvin in Geneva, Switzerland and brought his teachings
back to Scotland. The Presbyterian church traces its ancestry back primarily to England and Scotland. In August 1560
the Parliament of Scotland adopted the Scots Confession as the creed of the Scottish Kingdom. In December 1560, the
First Book of Discipline was published, outlining important doctrinal issues but also establishing regulations for
church government, including the creation of ten ecclesiastical districts with appointed superintendents which later
became known as presbyteries. Presbyterians distinguish themselves from other denominations by doctrine, institutional
organization (or "church order") and worship; often using a "Book of Order" to regulate common practice and order.
The origins of the Presbyterian churches are in Calvinism. Many branches of Presbyterianism are remnants of previous
splits from larger groups. Some of the splits have been due to doctrinal controversy, while some have been caused
by disagreement concerning the degree to which those ordained to church office should be required to agree with the
Westminster Confession of Faith, which historically serves as an important confessional document – second only to
the Bible, yet directing particularities in the standardization and translation of the Bible – in Presbyterian churches.
Presbyterians place great importance upon education and lifelong learning. Continuous study of the scriptures, theological
writings, and understanding and interpretation of church doctrine are embodied in several statements of faith and
catechisms formally adopted by various branches of the church, often referred to as "subordinate standards". It is
generally considered that the point of such learning is to enable one to put one's faith into practice; some Presbyterians
exhibit their faith in action as well as words, through generosity, hospitality, and the proclamation of the
gospel of Christ. Presbyterian government is by councils (known as courts) of elders. Teaching and ruling elders
are ordained and convene in the lowest council known as a session or consistory responsible for the discipline, nurture,
and mission of the local congregation. Teaching elders (pastors) have responsibility for teaching, worship, and performing
sacraments. Pastors are called by individual congregations. A congregation issues a call for the pastor's service,
but this call must be ratified by the local presbytery. Ruling elders are usually laymen (and laywomen in some denominations)
who are elected by the congregation and ordained to serve with the teaching elders, assuming responsibility for nurture
and leadership of the congregation. Often, especially in larger congregations, the elders delegate the practicalities
of buildings, finance, and temporal ministry to the needy in the congregation to a distinct group of officers (sometimes
called deacons, which are ordained in some denominations). This group may variously be known as a "Deacon Board",
"Board of Deacons" "Diaconate", or "Deacons' Court". These are sometimes known as "presbyters" to the full congregation.
Above the sessions exist presbyteries, which have area responsibilities. These are composed of teaching elders and
ruling elders from each of the constituent congregations. The presbytery sends representatives to a broader regional
or national assembly, generally known as the General Assembly, although an intermediate level of a synod sometimes
exists. This congregation / presbytery / synod / general assembly schema is based on the historical structure of
the larger Presbyterian churches, such as the Church of Scotland or the Presbyterian Church (U.S.A.); some bodies,
such as the Presbyterian Church in America and the Presbyterian Church in Ireland, skip one of the steps between
congregation and General Assembly, and usually the step skipped is the Synod. The Church of Scotland has now abolished
the Synod. Presbyterianism is historically a confessional tradition. This has two implications.
The obvious one is that confessional churches express their faith in the form of "confessions of faith," which have
some level of authoritative status. However this is based on a more subtle point: In confessional churches, theology
is not solely an individual matter. While individuals are encouraged to understand Scripture, and may challenge the
current institutional understanding, theology is carried out by the community as a whole. It is this community understanding
of theology that is expressed in confessions. Some Presbyterian traditions adopt only the Westminster Confession
of Faith as the doctrinal standard to which teaching elders are required to subscribe, in contrast to the Larger
and Shorter catechisms, which are approved for use in instruction. Many Presbyterian denominations, especially in
North America, have adopted all of the Westminster Standards as their standard of doctrine which is subordinate to
the Bible. These documents are Calvinistic in their doctrinal orientation. The Presbyterian Church in Canada retains
the Westminster Confession of Faith in its original form, while admitting the historical period in which it was written
should be understood when it is read. The Westminster Confession is "The principal subordinate standard of the Church
of Scotland" but "with due regard to liberty of opinion in points which do not enter into the substance of the Faith"
(V). This formulation represents many years of struggle over the extent to which the confession reflects the Word
of God and the struggle of conscience of those who came to believe it did not fully do so (e.g. William Robertson
Smith). Some Presbyterian Churches, such as the Free Church of Scotland, have no such "conscience clause". The Presbyterian
Church (U.S.A.) has adopted the Book of Confessions, which reflects the inclusion of other Reformed confessions in
addition to the Westminster Standards. These other documents include ancient creedal statements (the Nicene Creed,
the Apostles' Creed), 16th-century Reformed confessions (the Scots Confession, the Heidelberg Catechism, the Second
Helvetic Confession), and 20th century documents (The Theological Declaration of Barmen, Confession of 1967 and A
Brief Statement of Faith). Presbyterian denominations that trace their heritage to the British Isles usually organise
their church services inspired by the principles in the Directory of Public Worship, developed by the Westminster
Assembly in the 1640s. This directory documented Reformed worship practices and theology adopted and developed over
the preceding century by British Puritans, initially guided by John Calvin and John Knox. It was enacted as law by
the Scottish Parliament, and became one of the foundational documents of Presbyterian church legislation elsewhere.
Over subsequent centuries, many Presbyterian churches modified these prescriptions by introducing hymnody, instrumental
accompaniment, and ceremonial vestments into worship. However, there is not one fixed "Presbyterian" worship style.
Although there are set services for the "Lord's Day", one can find a service to be evangelical and even revivalist
in tone (especially in some conservative denominations), or strongly liturgical, approximating the practices of Lutheranism
or Anglicanism (especially where Scottish tradition is esteemed), or semi-formal, allowing
for a balance of hymns, preaching, and congregational participation (favored by probably most American Presbyterians).
Most Presbyterian churches follow the traditional liturgical year and observe the traditional holidays and holy seasons, such as Advent, Christmas, Ash Wednesday, Holy Week, Easter, and Pentecost. They also make use of the appropriate seasonal liturgical colors. Many incorporate ancient liturgical prayers and responses into their communion services and follow a daily, seasonal, and festival lectionary. Other Presbyterians, however, such as the Reformed Presbyterians, practice exclusive a cappella psalmody and eschew the celebration of holy days. Among the paleo-orthodox
and emerging church movements in Protestant and evangelical churches, in which some Presbyterians are involved, clergy
are moving away from the traditional black Geneva gown to such vestments as the alb and chasuble, but also cassock
and surplice (typically a full length Old English style surplice which resembles the Celtic alb, an ungirdled liturgical
tunic of the old Gallican Rite), which some, particularly those identifying with the Liturgical Renewal Movement,
hold to be more ancient and representative of a more ecumenical past. Early Presbyterians were careful to distinguish
between the "church," which referred to the members, and the "meeting house," which was the building in which the church
met. Until the late 19th century, very few Presbyterians ever referred to their buildings as "churches." Presbyterians
believed that meeting-houses (now called churches) are buildings to support the worship of God. The decor in some
instances was austere so as not to detract from worship. Early Presbyterian meeting-houses were extremely plain.
No stained glass, no elaborate furnishings, and no images were to be found in the meeting-house. The pulpit, often
raised so as only to be accessible by a staircase, was the centerpiece of the building. Usually a Presbyterian church
will not have statues of saints, nor the ornate altar more typical of a Roman Catholic church. Instead, one will
find a "communion table," usually on the same level as the congregation. There may be a rail between the communion
table and the "Chancel" behind it, which may contain a more decorative altar-type table, choir loft, or choir stalls,
lectern and clergy area. The altar is called the communion table and the altar area is called the Chancel by Presbyterians.
In a Presbyterian (Reformed) church there may be an altar cross, either on the communion table or on a table in the
chancel. By using the "empty" cross, or cross of the resurrection, Presbyterians emphasize the resurrection and that
Christ is not continually dying, but died once and is alive for all eternity. Some Presbyterian church buildings are decorated with a cross that has a circle around the center, or Celtic cross. This not only emphasizes the resurrection, but also acknowledges historical aspects of Presbyterianism. A baptismal font will be located either
at the entrance or near the chancel area. Presbyterian architecture generally makes significant use of symbolism.
Decorative and ornate stained glass windows depicting scenes from the Bible may also be found. Some Presbyterian churches will also have ornate statues of Christ or carved scenes from the Last Supper located behind the chancel. St Giles' Cathedral (Church of Scotland, the mother church of Presbyterianism) has a crucifix hanging next to one of its pulpits; its image of Christ is faint and of modern design. John Knox (1505–1572),
a Scot who had spent time studying under Calvin in Geneva, returned to Scotland and urged his countrymen to reform
the Church in line with Calvinist doctrines. After a period of religious convulsion and political conflict culminating
in a victory for the Protestant party at the Siege of Leith, the authority of the Church of Rome was abolished in
favour of Reformation by the legislation of the Scottish Reformation Parliament in 1560. The Church was eventually
organised by Andrew Melville along Presbyterian lines to become the national Church of Scotland. King James VI and
I moved the Church of Scotland towards an episcopal form of government, and in 1637 James's successor, Charles I, and William Laud, the Archbishop of Canterbury, attempted to force the Church of Scotland to use the Book of Common
Prayer. What resulted was an armed insurrection, with many Scots signing the Solemn League and Covenant. The Covenanters
would serve as the government of Scotland for nearly a decade, and would also send military support to the Parliamentarians
during the English Civil War. Following the restoration of the monarchy in 1660, Charles II, despite the initial
support that he received from the Covenanters, reinstated an episcopal form of government on the church. However,
with the Glorious Revolution of 1688 the Church of Scotland was finally unequivocally recognised as a Presbyterian
institution by the monarch due to Scottish Presbyterian support for the aforementioned revolution, and the Acts of Union 1707 between Scotland and England guaranteed the Church of Scotland's form of government. However, legislation
by the United Kingdom parliament allowing patronage led to splits in the Church. In 1733, a group of ministers seceded
from the Church of Scotland to form the Associate Presbytery, another group seceded in 1761 to form the Relief Church
and the Disruption of 1843 led to the formation of the Free Church of Scotland. Further splits took place, especially
over theological issues, but most Presbyterians in Scotland were reunited by the 1929 union of the established Church
of Scotland and the United Free Church of Scotland. In England, Presbyterianism was established in secret in 1592.
Thomas Cartwright is thought to be the first Presbyterian in England. Cartwright's controversial lectures at Cambridge
University condemning the episcopal hierarchy of the Elizabethan Church led to the deprivation of his post by Archbishop
John Whitgift and his emigration abroad. Between 1645 and 1648, a series of ordinances of the Long Parliament established
Presbyterianism as the polity of the Church of England. Presbyterian government was established in London and Lancashire
and in a few other places in England, although Presbyterian hostility to the execution of Charles I and the establishment
of the republican Commonwealth of England meant that Parliament never enforced the Presbyterian system in England.
The re-establishment of the monarchy in 1660 brought the return of Episcopal church government in England (and in
Scotland for a short time); but the Presbyterian church in England continued in Non-Conformity, outside of the established
church. In 1719 a major split, the Salters' Hall controversy, occurred, with the majority siding with nontrinitarian views. Thomas Bradbury published several sermons bearing on the controversy, and in 1719, "An answer to the reproaches cast on the dissenting ministers who subscribed their belief of the Eternal Trinity". By the 18th century many English
Presbyterian congregations had become Unitarian in doctrine. A number of new Presbyterian Churches were founded by
Scottish immigrants to England in the 19th century and later. Following the 'Disruption' in 1843 many of those linked
to the Church of Scotland eventually joined what became the Presbyterian Church of England in 1876. Some, namely
Crown Court (Covent Garden, London), St Andrew's (Stepney, London) and Swallow Street (London), did not join the
English denomination, which is why there are Church of Scotland congregations in England such as those at Crown Court,
and St Columba's, Pont Street (Knightsbridge) in London. There is also a congregation in the heart of London's financial
district called London City Presbyterian Church that is also affiliated with the Free Church of Scotland. In 1972, the
Presbyterian Church of England (PCofE) united with the Congregational Church in England and Wales to form the United
Reformed Church (URC). Among the congregations the PCofE brought to the URC were Tunley (Lancashire), Aston Tirrold
(Oxfordshire) and John Knox Presbyterian Church, Stepney, London (now part of Stepney Meeting House URC) – these
are among the sole survivors today of the English Presbyterian churches of the 17th century. The URC also has a presence
in Scotland, mostly of former Congregationalist Churches. Two former Presbyterian congregations, St Columba's, Cambridge
(founded in 1879), and St Columba's, Oxford (founded as a chaplaincy by the PCofE and the Church of Scotland in 1908
and as a congregation of the PCofE in 1929), continue as congregations of the URC and university chaplaincies of
the Church of Scotland. Presbyterianism is the largest Protestant denomination in Northern Ireland and the second
largest on the island of Ireland (after the Anglican Church of Ireland), and was brought by Scottish
plantation settlers to Ulster who had been strongly encouraged to emigrate by James VI of Scotland, later James I
of England. An estimated 100,000 Scottish Presbyterians moved to the northern counties of Ireland between 1607 and
the Battle of the Boyne in 1690. The Presbytery of Ulster was formed in 1642 separately from the
established Anglican Church. Presbyterians, along with Roman Catholics in Ulster and the rest of Ireland, suffered
under the discriminatory Penal Laws until they were revoked in the early 19th century. Presbyterianism is represented
in Ireland by the Presbyterian Church in Ireland, the Free Presbyterian Church of Ulster, the Non-subscribing Presbyterian
Church of Ireland, the Reformed Presbyterian Church of Ireland and the Evangelical Presbyterian Church. Presbyterianism
first officially arrived in Colonial America in 1703 with the establishment of the first Presbytery in Philadelphia.
In time, the presbytery would be joined by two more to form a synod (1717) and would eventually evolve into the Presbyterian
Church in the United States of America in 1789. The nation's largest Presbyterian denomination, the Presbyterian
Church (U.S.A.), or PC(USA), can trace its heritage back to the original PCUSA, as can the Presbyterian Church
in America (PCA), the Orthodox Presbyterian Church (OPC), the Bible Presbyterian Church (BPC), the Cumberland Presbyterian
Church (CPC), the Cumberland Presbyterian Church in America, the Evangelical Presbyterian Church (EPC) and the Evangelical
Covenant Order of Presbyterians (ECO). Other Presbyterian bodies in the United States include the Reformed Presbyterian
Church of North America (RPCNA), the Associate Reformed Presbyterian Church (ARP), the Reformed Presbyterian Church
in the United States (RPCUS), the Reformed Presbyterian Church General Assembly, the Reformed Presbyterian Church
– Hanover Presbytery, the Covenant Presbyterian Church, the Presbyterian Reformed Church, the Westminster Presbyterian
Church in the United States, the Korean American Presbyterian Church, and the Free Presbyterian Church of North America.
In the late 1800s, Presbyterian missionaries established a presence in what is now northern New Mexico. This provided
an alternative to Catholicism, which had been brought to the area by the Spanish conquistadors and had remained largely unchanged. The area experienced a "mini" reformation, in that many converts were made to Presbyterianism, prompting persecution. In some cases, the converts left towns and villages to establish their own neighboring villages. The arrival of the United States in the area prompted the Catholic church to modernize and make efforts at winning the converts back, many of whom did return. However, there are still stalwart Presbyterians and Presbyterian churches in the area.
In Canada, the largest Presbyterian denomination – and indeed the largest Protestant denomination – was the Presbyterian
Church in Canada, formed in 1875 with the merger of four regional groups. In 1925, the United Church of Canada was
formed by the majority of Presbyterians combining with the Methodist Church, Canada, and the Congregational Union
of Canada. A sizable minority of Canadian Presbyterians, primarily in southern Ontario but also throughout the entire
nation, withdrew, and reconstituted themselves as a non-concurring continuing Presbyterian body. They regained use
of the original name in 1939. In Mexico, the biggest Presbyterian church is the National Presbyterian Church (Iglesia Nacional Presbiteriana de México), which has around 2,500,000 members and associates and 3,000 congregations, but there are other small denominations like the Associate Reformed Presbyterian Church in Mexico, which was founded in 1875 by the Associate Reformed Church in North America. The Independent Presbyterian Church, the Presbyterian Reformed Church in Mexico, and the National Conservative Presbyterian Church in Mexico are other existing churches in the Reformed
tradition. In Brazil, the Presbyterian Church of Brazil (Igreja Presbiteriana do Brasil) totals approximately 1,011,300
members; other Presbyterian churches (Independents, United, Conservatives, Renovated, etc.) in this nation have around
350,000 members. The Renewed Presbyterian Church in Brazil was influenced by the charismatic movement and has about
131,000 members as of 2011. The Conservative Presbyterian Church was founded in 1940 and has eight presbyteries. The Fundamentalist Presbyterian Church in Brazil was influenced by Carl McIntire and the Bible Presbyterian Church USA and has around 1,800 members. The Independent Presbyterian Church in Brazil was founded in 1903 by pastor Pereira, has 500 congregations and 75,000 members. The United Presbyterian Church in Brazil has around 4,000 members. There
are also ethnic Korean Presbyterian churches in the country. The Evangelical Reformed Church in Brazil has Dutch
origin. The Reformed Churches in Brazil were recently founded by the Canadian Reformed Churches with the Reformed
Church in the Netherlands (liberated). African Presbyterian churches often incorporate diaconal ministries, including
social services, emergency relief, and the operation of mission hospitals. A number of partnerships exist between
presbyteries in Africa and the PC(USA), including specific connections with Lesotho, Malawi, South Africa, Ghana
and Zambia. For example, the Lackawanna Presbytery, located in Northeastern Pennsylvania, has a partnership with
a presbytery in Ghana. Also the Southminster Presbyterian Church, located near Pittsburgh, has partnerships with
churches in Malawi and Kenya. The Presbyterian Church of Nigeria, in western Africa, is also healthy and strong, mostly in the southern states of the nation, with the greatest density in the south-eastern states, from Cross River State and the nearby coastal states of Rivers and Lagos to Ebonyi and Abia States. The missionary expeditions of Mary Slessor and Hope Waddell and their colleagues in the 19th century in these coastal regions of the then British colony brought about the beginning and the flourishing of this church in these areas. The Reformed Presbyterian
Church in Malawi has 150 congregations and 17,000–20,000 members. It was a mission of the Free Presbyterian Church of Scotland. The Restored Reformed Church works with the RPCM. The Evangelical Presbyterian Church in Malawi is an existing small church. Part of the Presbyterian Church in Malawi and Zambia is known as the CCAP, the Church of Central Africa Presbyterian. Often the churches there have one main congregation from which a number of prayer houses develop. Education and health ministries, as well as worship and spiritual development, are important. Most of the Korean Presbyterian denominations share the
same name in Korean, 대한예수교장로회 (literally, the Presbyterian Church of Korea, or PCK), tracing its roots to the
United Presbyterian Assembly before its long history of disputes and schisms. The Presbyterian schism began with
the controversy in relation to the Japanese shrine worship enforced during the Japanese colonial period and the establishment
of a minor division (Koryu-pa, 고려파, later the Koshin Presbyterian Church in Korea, Koshin 고신) in 1952. A second schism happened in 1953, when the theological orientation of the Chosun Seminary (later Hanshin University), founded in 1947, could not be tolerated in the PCK and another minor group (the Presbyterian Church in the Republic of Korea, Kijang, 기장) separated. The last major schism had to do with the issue of whether the PCK should join the WCC.
The controversy divided the PCK into two denominations, The Presbyterian Church of Korea (Tonghap, 통합) and The General
Assembly of Presbyterian Church in Korea (Hapdong, 합동) in 1959. All major seminaries associated with each denomination
claim heritage from the Pyung Yang Theological Seminary; therefore, not only the Presbyterian University and Theological Seminary and Chongsin University, which are related to the PCK, but also Hanshin University of the PROK all celebrated their 100th graduating class in 2007, 100 years from the first graduates of the Pyung Yang Theological Seminary. Korean Presbyterian denominations are active in evangelism, and many of their missionaries are sent overseas, making South Korea the second-biggest missionary sender in the world after the United States. GSM, the missionary body of the "Hapdong" General Assembly of Presbyterian
Churches of Korea, is the single largest Presbyterian missionary organization in Korea. In addition there are many
Korean-American Presbyterians in the United States, either with their own church sites or sharing space in pre-existing
churches, as is the case in Australia, New Zealand and even Muslim countries such as Saudi Arabia with Korean immigration.
The Presbyterian Church in Taiwan (PCT) is by far the largest Protestant denomination in Taiwan, with some 238,372
members as of 2009 (including a majority of the island's aborigines). English Presbyterian missionary James Laidlaw
Maxwell established the first Presbyterian church in Tainan in 1865. His colleague George Leslie Mackay, of the Canadian
Presbyterian Mission, was active in Danshui and north Taiwan from 1872 to 1901; he founded the island's first university
and hospital, and created a written script for Taiwanese Minnan. The English and Canadian missions joined together
as the PCT in 1912. One of the few churches permitted to operate in Taiwan through the era of Japanese rule (1895–1945),
the PCT experienced rapid growth during the era of Guomindang-imposed martial law (1949–1987), in part due to its
support for democracy, human rights, and Taiwan independence. Former ROC president Lee Teng-hui (in office 1988–2000)
is a Presbyterian. In the mainly Christian Indian state of Mizoram, Presbyterianism is the largest denomination; it was brought to the region by missionaries from Wales in 1894. Prior to Mizoram, the Welsh Presbyterian missionaries started venturing into the north-east of India through the Khasi Hills (presently located within the
state of Meghalaya in India) and established Presbyterian churches all over the Khasi Hills from the 1840s onwards.
Hence there is a strong presence of Presbyterians in Shillong (the present capital of Meghalaya) and the areas adjoining
it. The Welsh missionaries built their first church in Sohra (aka Cherrapunji) in 1846. Presbyterians participated
in the mergers that resulted in the Church of North India and the Church of South India. In Australia, Presbyterianism
is the fourth largest denomination of Christianity, with nearly 600,000 Australians claiming to be Presbyterian in
the 2006 Commonwealth Census. Presbyterian churches were founded in each colony, some with links to the Church of
Scotland and others to the Free Church. There were also congregations originating from United Presbyterian Church
of Scotland as well as a number founded by John Dunmore Lang. Most of these bodies merged between 1859 and 1870,
and in 1901 formed a federal union called the Presbyterian Church of Australia while retaining their state assemblies.
The Presbyterian Church of Eastern Australia, representing the Free Church of Scotland tradition, and congregations
in Victoria of the Reformed Presbyterian Church, originally from Ireland, are the other existing denominations dating
from colonial times. In 1977, two thirds of the Presbyterian Church of Australia, along with most of the Congregational
Union of Australia and all the Methodist Church of Australasia, combined to form the Uniting Church in Australia.
The third that did not unite had various reasons for so acting, often cultural attachment but also often conservative theological or social views. The permission for the ordination of women given in 1974 was rescinded in 1991 without affecting the two or three existing women ministers. The approval of women elders given in the 1960s has been rescinded in
all states except New South Wales, which has the largest membership. The theology of the church is now generally
conservative and Reformed. A number of small Presbyterian denominations have arisen since the 1950s through migration
or schism. The Presbyterian Church in Vanuatu is the largest denomination in the country, with approximately one-third
of the population of Vanuatu being members of the church. The Presbyterian Church of Vanuatu (PCV) was taken to Vanuatu by missionaries from Scotland. The PCV is headed by a moderator with offices in Port Vila. The PCV is particularly
strong in the provinces of Tafea, Shefa, and Malampa. The Province of Sanma is mainly Presbyterian with a strong
Roman Catholic minority in the Francophone areas of the province. There are some Presbyterian people, but no organised
Presbyterian churches in Penama and Torba, both of which are traditionally Anglican. Vanuatu is the only country
in the South Pacific with a significant Presbyterian heritage and membership. The PCV is a founding member of the
Vanuatu Christian Council (VCC). The PCV runs many primary schools and Onesua secondary school. The church is strong
in the rural villages.
Since the Protestant Reformation, the most prominent Christian denomination in Thuringia has been Lutheranism. During the
GDR period, church membership was discouraged and has continued shrinking since the reunification in 1990. Today
over two thirds of the population is non-religious. The Protestant Evangelical Church in Germany has had the largest
number of members in the state, adhered to by 24.0% of the population in 2009. Members of the Catholic Church formed
7.8% of the population, while 68.2% of Thuringians were non-religious or adhered to other faiths. The highest Protestant
concentrations are in the small villages of southern and western Thuringia, whereas the bigger cities are even more
non-religious (up to 88% in Gera). Catholic regions are the Eichsfeld in the northwest and parts of the Rhön Mountains
around Geisa in the southwest. Protestant church membership is shrinking rapidly, whereas the Catholic Church is
somewhat more stable because of Catholic migration from Poland, Southern Europe and West Germany. Other religions
play no significant role in Thuringia. There are only a few thousand Muslims (largely migrants) and about 750 Jews
(mostly migrants from Russia) living in Thuringia. Furthermore, there are some Orthodox communities of Eastern European
migrants and some traditional Protestant Free churches in Thuringia without any societal influence. The name Thuringia
or Thüringen derives from the Germanic tribe Thuringii, who emerged during the Migration Period. Their origin is
not completely known. An older theory claimed that they were successors of the Hermunduri, but later research rejected
the idea. Other historians argue that the Thuringians were allies of the Huns, came to central Europe together with
them, and lived before in what is Galicia today. Publius Flavius Vegetius Renatus first mentioned the Thuringii around
400; during that period, the Thuringii were famous for their excellent horses. The Thuringian Realm existed until
531 and later, the Landgraviate of Thuringia was the largest state in the region, persisting between 1131 and 1247.
Afterwards there was no state named Thuringia; nevertheless, the term commonly described the region between the Harz
mountains in the north, the Weiße Elster river in the east, the Franconian Forest in the south and the Werra river
in the west. After the Treaty of Leipzig, Thuringia had its own dynasty again, the Ernestine Wettins. Their various
lands formed the Free State of Thuringia, founded in 1920, together with some other small principalities. The Prussian
territories around Erfurt, Mühlhausen and Nordhausen joined Thuringia in 1945. Thuringia became a landgraviate in
1130 AD. After the extinction of the reigning Ludowingian line of counts and landgraves in 1247 and the War of the
Thuringian Succession (1247–1264), the western half became independent under the name of "Hesse", never to become
a part of Thuringia again. Most of the remaining Thuringia came under the rule of the Wettin dynasty of the nearby
Margraviate of Meissen, the nucleus of the later Electorate and Kingdom of Saxony. With the division of the house
of Wettin in 1485, Thuringia went to the senior Ernestine branch of the family, which subsequently subdivided the
area into a number of smaller states, according to the Saxon tradition of dividing inheritance amongst male heirs.
These were the "Saxon duchies", consisting, among others, of the states of Saxe-Weimar, Saxe-Eisenach, Saxe-Jena,
Saxe-Meiningen, Saxe-Altenburg, Saxe-Coburg, and Saxe-Gotha; Thuringia became merely a geographical concept. Thuringia
generally accepted the Protestant Reformation, and Roman Catholicism was suppressed as early as 1520;
priests who remained loyal to it were driven away and churches and monasteries were largely destroyed, especially
during the German Peasants' War of 1525. In Mühlhausen and elsewhere, the Anabaptists found many adherents. Thomas
Müntzer, a leader of some non-peaceful groups of this sect, was active in this city. Within the borders of modern
Thuringia the Roman Catholic faith only survived in the Eichsfeld district, which was ruled by the Archbishop of
Mainz, and to a small degree in Erfurt and its immediate vicinity. Some reordering of the Thuringian states occurred
during the German Mediatisation from 1795 to 1814, and the territory was included within the Napoleonic Confederation
of the Rhine organized in 1806. The 1815 Congress of Vienna confirmed these changes and the Thuringian states' inclusion
in the German Confederation; the Kingdom of Prussia also acquired some Thuringian territory and administered it within
the Province of Saxony. The Thuringian duchies which became part of the German Empire in 1871 during the Prussian-led
unification of Germany were Saxe-Weimar-Eisenach, Saxe-Meiningen, Saxe-Altenburg, Saxe-Coburg-Gotha, Schwarzburg-Sondershausen,
Schwarzburg-Rudolstadt and the two principalities of Reuss Elder Line and Reuss Younger Line. In 1920, after World
War I, these small states merged into one state, called Thuringia; only Saxe-Coburg voted to join Bavaria instead.
Weimar became the new capital of Thuringia. The coat of arms of this new state was simpler than those of its predecessors.
In 1930 Thuringia was one of the free states where the Nazis gained real political power. Wilhelm Frick was appointed
Minister of the Interior for the state of Thuringia after the Nazi Party won six delegates to the Thuringia Diet.
In this position he removed from the Thuringia police force anyone he suspected of being a republican and replaced
them with men who were favourable towards the Nazi Party. He also ensured that whenever an important position came
up within Thuringia, he used his power to ensure that a Nazi was given that post. The landscapes of Thuringia are
quite diverse. The far north is occupied by the Harz mountains, followed by the Goldene Aue, a fertile floodplain
around Nordhausen with the Helme as most important river. The north-west includes the Eichsfeld, a hilly and sometimes
forested region, where the Leine river rises. The central and northern part of Thuringia is defined by the 3,000
km² wide Thuringian Basin, a very fertile and flat area around the Unstrut river and completely surrounded by the
following hill chains (clockwise from the north-west): Dün, Hainleite, Windleite, Kyffhäuser, Hohe Schrecke, Schmücke,
Finne, Ettersberg, Steigerwald, Thuringian Forest, Hörselberge and Hainich. Within the Basin lie the smaller hill chains Fahner Höhe and Heilinger Höhen. South of the Thuringian Basin is the Land's largest mountain range, marked by the
Thuringian Forest in the north-west, the Thuringian Highland in the middle and the Franconian Forest in the south-east.
Most of this range is forested and the Großer Beerberg (983 m) is Thuringia's highest mountain. To the south-west,
the Forest is followed by the Werra river valley, dividing it from the Rhön Mountains in the west and the Grabfeld
plain in the south. Eastern Thuringia, commonly described as the area east of Saale and Loquitz valley, is marked
by a hilly landscape, rising slowly from the flat north to the mountainous south. The Saale in the west and the Weiße
Elster in the east are the two big rivers running from south to north and forming densely settled valleys in this
area. Between them lies the flat and forested Holzland in the north, the flat and fertile Orlasenke in the middle
and the Vogtland, a hilly but in most parts non-forested region in the south. The far eastern region (east of Weiße
Elster) is the Osterland or Altenburger Land along Pleiße river, a flat, fertile and densely settled agricultural
area. The most important river in Thuringia is the Saale (a tributary of the Elbe) with its tributaries Unstrut,
Ilm and Weiße Elster, draining most of Thuringia, and the Werra (the headwater of the Weser), draining the
south-west and west of the Land. Furthermore, some small parts on the southern border are drained by tributaries
of the Main (a tributary of the Rhine). There are no large natural lakes in Thuringia, but it does have some of Germany's
biggest dams, including the Bleiloch Dam and the Hohenwarte Dam on the Saale river, as well as the Leibis-Lichte Dam and the
Goldisthal Pumped Storage Station within the Highland. Thuringia is Germany's only state without connection to navigable
waterways. Due to many centuries of intensive settlement, most of the area is shaped by human influence. The original
natural vegetation of Thuringia is forest with beech as its predominant species, as can still be found in the Hainich
mountains today. In the uplands, a mixture of beech and spruce would be natural. However, most of the plains have
been cleared and are in intensive agricultural use while most of the forests are planted with spruce and pine. Since
1990, Thuringia's forests have been managed with the aim of creating more natural, robust vegetation that is resilient to climate
change as well as diseases and vermin. In comparison to the forest, agriculture is still quite conventional and dominated
by large structures and monocultures. Problems here are caused especially by increasingly prolonged dry periods during
the summer months. Environmental damage in Thuringia has been reduced to a large extent after 1990. The condition
of forests, rivers and air was improved by modernizing factories, houses (decline of coal heating) and cars, and
contaminated areas such as the former uranium surface mines around Ronneburg have been remediated. Today's environmental
problems are the salination of the Werra river, caused by discharges of K+S salt mines around Unterbreizbach and
overfertilisation in agriculture, damaging the soil and small rivers. During the Middle Ages, Thuringia was situated
at the border between Germanic and Slavic territories, marked by the Saale river. The Ostsiedlung movement led to
the assimilation of Slavic people between the 11th and the 13th century under German rule. Population growth increased during the 18th century and stayed high until World War I; it slowed during the 20th century and turned to decline after 1990. Since the beginning of urbanisation around 1840, the Thuringian cities have had higher growth rates, or smaller rates of decline, than rural areas: many villages have lost half of their population since 1950, whereas the biggest cities (Erfurt and Jena) keep growing. In July 2013, there were 41,000 non-Germans by citizenship
living in Thuringia (1.9% of the population, among the smallest proportions of any state in Germany). Nevertheless,
the number rose from 33,000 in July 2011, an increase of 24% in only two years. About 4% of the population are migrants (including persons who have already received German citizenship). The biggest groups of foreigners by citizenship are (as of 2012): Russians (3,100), Poles (3,000), Vietnamese (2,800), Turks (2,100) and Ukrainians (2,000). The proportion of foreigners varies between regions: the college towns of Erfurt, Jena, Weimar and Ilmenau have the highest rates, whereas almost no migrants live in the most rural small municipalities. The Thuringian population
has a significant sex ratio gap, caused by the emigration of young women, especially in rural areas. Overall, there
are 115 to 120 men per 100 women in the 25–40 age group ("family founders"), which has negative consequences for the birth rate. Furthermore, the population is ageing, with some rural municipalities recording more than 30% of residents over 65 (pensioners). This is a problem for the regional labour market, as there are twice as many people
leaving as entering the job market annually. Migration plays an important role in Thuringia. The internal migration
shows a strong tendency from rural areas towards the big cities. From 2008 to 2012, there was a net migration from
Thuringia to Erfurt of +6,700 persons (33 per 1000 inhabitants), +1,800 to Gera (19 per 1000), +1,400 to Jena (14
per 1000), +1,400 to Eisenach (33 per 1000) and +1,300 to Weimar (21 per 1000). Between Thuringia and the other German
states, the balance is negative: In 2012, Thuringia lost 6,500 persons to other federal states, the most to Bavaria,
Saxony, Hesse and Berlin. Only with Saxony-Anhalt and Brandenburg is the balance positive. International migration fluctuates heavily. In 2009, the balance was +700, in 2010 +1,800, in 2011 +2,700 and in 2012 +4,800. The most important countries of origin of migrants to Thuringia from 2008 to 2012 were Poland (+1,700), Romania (+1,200),
Afghanistan (+1,100) and Serbia/Montenegro/Kosovo (+1,000), whereas the balance was negative with Switzerland (−2,800)
and Austria (−900). Of the approximately 850 municipalities of Thuringia, 126 are classed as towns (within a district)
or cities (forming their own urban district). Most of the towns are small with a population of less than 10,000;
only the ten biggest ones have a population greater than 30,000. The first towns emerged during the 12th century,
whereas the latest ones received town status only in the 20th century. Today, all municipalities within districts
are equal in law, whether they are towns or villages. Independent cities (i.e. urban districts) have greater powers
(the same as any district) than towns within a district. Agriculture and forestry have declined in importance over
the decades. Nevertheless, they are more important than in the most other areas of Germany, especially within rural
regions. 54% of Thuringia's territory is in agricultural use. The fertile basins such as the large Thuringian Basin
or the smaller Goldene Aue, Orlasenke and Osterland are in intensive use for growing cereals, vegetables, fruits
and energy crops. Important products are apples, strawberries, cherries and plums in the fruit sector, cabbage, potatoes,
cauliflower, tomatoes (grown in greenhouses), onions, cucumbers and asparagus in the vegetable sector, as well as
maize, rapeseed, wheat, barley and sugar beets in the crop sector. Like most other regions of central and southern
Germany, Thuringia has a significant industrial sector reaching back to the mid-19th-century industrialisation. The
economic transition after the German reunification in 1990 led to the closure of most large-scale factories and companies,
leaving small and medium-sized ones to dominate the manufacturing sector. Well-known industrial centres are Jena
(a world centre for optical instruments with companies like Carl Zeiss, Schott and Jenoptik) and Eisenach, where
BMW started its car production in the 1920s and an Opel factory is based today. The most important industrial branches
today are engineering and metalworking, vehicle production and food industries. The small and mid-sized towns in central and southwestern Thuringia (e.g. Arnstadt, Schmalkalden and Ohrdruf) in particular are highly industrialised,
whereas there are fewer industrial companies in the northern and eastern parts of the Land. Traditional industries
like the production of glass, porcelain and toys collapsed during the economic crises between 1930 and 1990. Mining has been important in Thuringia since the later Middle Ages, especially in the mining towns of the Thuringian Forest such
as Schmalkalden, Suhl and Ilmenau. Following the industrial revolution, the old iron, copper and silver mines declined
because the competition from imported metal was too strong. On the other hand, the late 19th century brought new
types of mines to Thuringia: the lignite surface mining around Meuselwitz near Altenburg in the east of the Land
started in the 1870s, and two potash mining districts were established around 1900. These are the Südharzrevier in
the north of the state, between Bischofferode in the west and Roßleben in the east with Sondershausen at its centre,
and the Werrarevier on the Hessian border around Vacha and Bad Salzungen in the west. Together, they accounted for
a significant part of the world's potash production in the mid-20th century. After the reunification, the Südharzrevier
was abandoned, whereas K+S took over the mines in the Werrarevier. Between 1950 and 1990, uranium mining was also
important to cover the Soviet Union's need for this metal. The centre was Ronneburg near Gera in eastern Thuringia
and the operating company Wismut was under direct Soviet control. The GDP of Thuringia is below the national average,
in line with the other former East German Lands. Until 2004, Thuringia was one of the weakest regions within the
European Union. The accession of several new countries, the crisis in southern Europe and the sustained economic
growth in Germany since 2005 have since brought the Thuringian GDP close to the EU average. The high economic
subsidies granted by the federal government and the EU after 1990 are being reduced gradually and will end around
2020. The unemployment rate reached its peak of 20% in 2005. Since then, it has decreased to 7% in 2013, which is
only slightly above the national average. The decrease is caused on the one hand by the emergence of new jobs and
on the other by a marked decrease in the working-age population, caused by emigration and low birth rates for decades.
The wages in Thuringia are low compared to rich bordering Lands like Hesse and Bavaria. Therefore, many Thuringians
are working in other German Lands and even in Austria and Switzerland as weekly commuters. Nevertheless, the demographic
transition in Thuringia leads to a lack of workers in some sectors. External immigration into Thuringia has been
encouraged by the government since about 2010 to counter this problem. During the 1930s, the first two motorways
were built across the Land, the A4 motorway as an important east-west connection in central Germany and the main
link between Berlin and south-west Germany, and the A9 motorway as the main north-south route in eastern Germany,
connecting Berlin with Munich. The A4 runs from Frankfurt in Hesse via Eisenach, Gotha, Erfurt, Weimar, Jena and
Gera to Dresden in Saxony, connecting Thuringia's most important cities. At Hermsdorf junction it is connected with
the A9. Both highways were widened from four to six lanes (three each way) after 1990, including some extensive re-routing
in the Eisenach and Jena areas. Furthermore, three new motorways were built during the 1990s and 2000s. The A71 crosses
the Land in southwest-northeast direction, connecting Würzburg in Bavaria via Meiningen, Suhl, Ilmenau, Arnstadt,
Erfurt and Sömmerda with Sangerhausen and Halle in Saxony-Anhalt. The crossing of the Thuringian Forest by the A71
has been one of Germany's most expensive motorway segments with various tunnels (including Germany's longest road
tunnel, the Rennsteig Tunnel) and large bridges. The A73 starts at the A71 south of Erfurt in Suhl and runs south
towards Nuremberg in Bavaria. The A38 is another west-east connection in the north of Thuringia running from Göttingen
in Lower Saxony via Heiligenstadt and Nordhausen to Leipzig in Saxony. Furthermore, there is a dense network of federal
highways complementing the motorway network. The upgrading of federal highways is prioritised in the federal trunk
road programme 2015 (Bundesverkehrswegeplan 2015). Envisaged projects include upgrades of the B247 from Gotha to
Leinefelde to improve Mühlhausen's connection to the national road network, the B19 from Eisenach to Meiningen to
improve access to Bad Salzungen and Schmalkalden, and the B88 and B281 for strengthening the Saalfeld/Rudolstadt
region. The traditional energy supply of Thuringia is lignite, mined in the bordering Leipzig region. Since 2000,
the importance of environmentally unfriendly lignite combustion has declined in favour of renewable energies, which
reached a share of 40% in 2013, and cleaner gas combustion, often carried out as cogeneration in municipal power stations. The most important forms of renewable energy are wind power and biomass, followed by solar energy and hydroelectricity. Furthermore, Thuringia hosts two big pumped storage stations: the Goldisthal Pumped Storage
Station and the Hohenwarte Dam. Health care in Thuringia is currently undergoing a concentration process. Many smaller
hospitals in the rural towns are closing, whereas the bigger ones in centres like Jena and Erfurt get enlarged. Overall,
there is an oversupply of hospital beds, caused by rationalisation processes in the German health care system, so
that many smaller hospitals generate losses. On the other hand, there is a lack of family doctors, especially in
rural regions with an increased need for health care provision because of an ageing population. Early-years education is quite
common in Thuringia. Since the 1950s, nearly all children have been using the service, whereas early-years education
is less developed in western Germany. The kindergarten's inventor, Friedrich Fröbel, lived in Thuringia and founded the world's first kindergartens there in the 19th century. Thuringian primary school takes four years, and most primary schools
are all-day schools offering optional extracurricular activities in the afternoon. At the age of ten, pupils are
separated according to aptitude and proceed to either the Gymnasium or the Regelschule. The former leads to the Abitur
exam after a further eight years and prepares for higher education, while the latter has a more vocational focus
and finishes with exams after five or six years, comparable to the Hauptschule and Realschule found elsewhere in
Germany. The German higher education system comprises two forms of academic institutions: universities and polytechnics
(Fachhochschule). The University of Jena is the biggest amongst Thuringia's four universities and offers nearly every
discipline. It was founded in 1558, and today has 21,000 students. The second-largest is the Technische Universität
Ilmenau with 7,000 students, founded in 1894, which offers many technical disciplines such as engineering and mathematics.
The University of Erfurt, founded in 1392, has 5,000 students today and an emphasis on humanities and teacher training.
The Bauhaus University Weimar with 4,000 students is Thuringia's smallest university, specialising in creative subjects
such as architecture and the arts. It was founded in 1860 and came to prominence during the inter-war period as the Bauhaus, Germany's leading art school. The polytechnics of Thuringia are based in Erfurt (4,500 students), Jena (5,000
students), Nordhausen (2,500 students) and Schmalkalden (3,000 students). In addition, there is a civil service college
in Gotha with 500 students, the College of Music "Franz Liszt" in Weimar (800 students) as well as two private colleges,
the Adam-Ries-Fachhochschule in Erfurt (500 students) and the SRH College for nursing and allied medical subjects
(SRH Fachhochschule für Gesundheit Gera) in Gera (500 students). Finally, there are colleges for those studying for
a technical qualification while working in a related field (Berufsakademie) at Eisenach (600 students) and Gera (700
students). Thuringia's leading research centre is Jena, followed by Ilmenau. Both focus on technology, in particular
life sciences and optics at Jena and information technology at Ilmenau. Erfurt is a centre of Germany's horticultural
research, whereas Weimar and Gotha with their various archives and libraries are centres of historic and cultural
research. Most of the research in Thuringia is publicly funded basic research due to the lack of large companies
able to invest significant amounts in applied research, with the notable exception of the optics sector at Jena.
The first railways in Thuringia were built in the 1840s, and the network of main lines was finished around 1880.
By 1920, many branch lines had been built, giving Thuringia one of the densest rail networks in the world before
World War II with about 2,500 km of track. Between 1950 and 2000 most of the branch lines were abandoned, reducing
Thuringia's network by half compared to 1940. On the other hand, most of the main lines were refurbished after 1990,
resulting in improved speed of travel. The most important railway lines at present are the Thuringian Railway, connecting
Halle and Leipzig via Weimar, Erfurt, Gotha and Eisenach with Frankfurt and Kassel and the Saal Railway from Halle/Leipzig
via Jena and Saalfeld to Nuremberg. The former has an hourly ICE/IC service from Dresden to Frankfurt while the latter
is served hourly by ICE trains from Berlin to Munich. In 2017, a new high-speed line will be opened, diverting long-distance
services from these mid-19th century lines. Both ICE routes will then use the Erfurt–Leipzig/Halle high-speed railway,
and the Berlin-Munich route will continue via the Nuremberg–Erfurt high-speed railway. Only the segment west of Erfurt
of the Frankfurt-Dresden line will continue to be used by ICE trains after 2017, with an increased line speed of
200 km/h (currently 160 km/h). Erfurt's central station, completely rebuilt for this purpose in the 2000s, will be the new connection between both ICE lines. The most important regional railway lines in Thuringia
are the Neudietendorf–Ritschenhausen railway from Erfurt to Würzburg and Meiningen, the Weimar–Gera railway from
Erfurt to Chemnitz, the Sangerhausen–Erfurt railway from Erfurt to Magdeburg, the Gotha–Leinefelde railway from Erfurt
to Göttingen, the Halle–Kassel railway from Halle via Nordhausen to Kassel and the Leipzig–Hof railway from Leipzig
via Altenburg to Zwickau and Hof. Most regional and local lines have hourly service, but some run only every other
hour.
In an ecosystem, predation is a biological interaction where a predator (an organism that is hunting) feeds on its prey (the
organism that is attacked). Predators may or may not kill their prey prior to feeding on them, but the act of predation
often results in the death of the prey and the eventual absorption of the prey's tissue through consumption. Thus
predation is often, though not always, carnivory. Other categories of consumption are herbivory (eating parts of
plants), fungivory (eating parts of fungi), and detritivory (the consumption of dead organic material (detritus)).
All these consumption categories fall under the rubric of consumer-resource systems. It can often be difficult to
separate various types of feeding behaviors. For example, some parasitic species prey on a host organism and then lay their eggs on it; their offspring feed on the host while it continues to live, and in or on its decaying corpse after it has died. The key characteristic of predation, however, is the predator's direct impact on the prey population.
On the other hand, detritivores simply eat dead organic material arising from the decay of dead individuals and have
no direct impact on the "donor" organism(s). Classification of predators by the extent to which they feed on and
interact with their prey is one way ecologists may wish to categorize the different types of predation. Instead of
focusing on what they eat, this system classifies predators by the way in which they eat, and the general nature
of the interaction between predator and prey species. Two factors are considered here: how closely the predator and prey are associated (in the latter two cases the term prey may be replaced with host), and whether or not the prey is directly killed by the predator, with true predation and parasitoidism involving certain death. A true predator is commonly understood to be one that kills and eats another living organism. Whereas other types of predator all harm their prey in some way, a true predator kills it. Predators may hunt actively for prey in pursuit predation, or sit and wait
for prey to approach within striking distance, as in ambush predators. Some predators kill large prey and dismember
or chew it prior to eating it, such as a jaguar or a human; others may eat their (usually much smaller) prey whole,
as does a bottlenose dolphin swallowing a fish, or a snake, duck or stork swallowing a frog. Some animals that kill
both large and small prey relative to their size (domestic cats and dogs are prime examples) may do either depending upon
the circumstances; either would devour a large insect whole but dismember a rabbit. Some predation entails venom
that subdues the prey before the predator ingests it, either by killing it, as the box jellyfish does, or by disabling it, as in the cone shell. In some cases, the venom, as in rattlesnakes and some spiders, contributes
to the digestion of the prey item even before the predator begins eating. In other cases, the prey organism may die
in the mouth or digestive system of the predator. Baleen whales, for example, eat millions of microscopic plankton
at once, the prey being broken down well after entering the whale. Seed predation and egg predation are other forms
of true predation, as seeds and eggs represent potential organisms. Predators of this classification need not eat
prey entirely. For example, some predators cannot digest bones, while others can. Some may eat only part of an organism,
as in grazing (see below), but still consistently cause its direct death. Grazing organisms may also kill their prey
species, but this is seldom the case. While some herbivores like zooplankton live on unicellular phytoplankton and
therefore, because the whole organism is consumed, kill their prey, many only eat a small part of the plant.
Grazing livestock may pull some grass out at the roots, but most is simply grazed upon, allowing the plant to regrow. Kelp is frequently grazed in subtidal kelp forests, but regrows continuously at the base of the blade to cope with browsing pressure. Animals may also be 'grazed' upon; female mosquitoes land on hosts briefly to gain
sufficient proteins for the development of their offspring. Starfish may be grazed on, being capable of regenerating
lost arms. Parasites can at times be difficult to distinguish from grazers. Their feeding behavior is similar in many ways; however, they are noted for their close association with their host species. While a grazing species such
as an elephant may travel many kilometers in a single day, grazing on many plants in the process, parasites form
very close associations with their hosts, usually having only one or at most a few in their lifetime. This close
living arrangement may be described by the term symbiosis, "living together", but unlike mutualism the association
significantly reduces the fitness of the host. Parasitic organisms range from the macroscopic mistletoe, a parasitic
plant, to microscopic internal parasites such as the cholera bacterium. Some species, however, have looser associations with
their hosts. Lepidoptera (butterfly and moth) larvae may feed parasitically on only a single plant, or they may graze
on several nearby plants. It is therefore wise to treat this classification system as a continuum rather than four
isolated forms. Parasitoids are organisms living in or on their host and feeding directly upon it, eventually leading
to its death. They are much like parasites in their close symbiotic relationship with their host or hosts. Like the
previous two classifications, parasitoid predators do not kill their hosts instantly. However, unlike parasites, the fate of their prey is almost inevitably death, which makes them very similar to true predators. A well-known example of parasitoids is the ichneumon wasps, solitary insects that live freely as adults, then lay their eggs on or in another species such as a caterpillar. The larvae feed on the growing host, causing it little harm at first, but soon devour the internal organs, finally destroying the nervous system and killing the prey. By this stage the young wasps are developed sufficiently to move to the next stage in their life cycle. Though limited mainly to the insect orders Hymenoptera, Diptera and Coleoptera, parasitoids make up as much as 10% of all insect species.
Among predators there is a large degree of specialization. Many predators specialize in hunting only one species
of prey. Others are more opportunistic and will kill and eat almost anything (examples: humans, leopards, dogs and
alligators). The specialists are usually particularly well suited to capturing their preferred prey. The prey, in turn, are often equally suited to escaping that predator. This is called an evolutionary arms race and tends to keep
the populations of both species in equilibrium. Some predators specialize in certain classes of prey, not just single
species. Some will switch to other prey (with varying degrees of success) when the preferred target is extremely
scarce, and they may also resort to scavenging or a herbivorous diet if possible.[citation needed] Predators are
often another organism's prey, and likewise prey are often predators. Though blue jays prey on insects, they may
in turn be prey for cats and snakes, and snakes may be the prey of hawks. One way of classifying predators is by
trophic level. Organisms that feed on autotrophs, the producers of the trophic pyramid, are known as herbivores or
primary consumers; those that feed on heterotrophs such as animals are known as secondary consumers. Secondary consumers
are a type of carnivore, but there are also tertiary consumers eating these carnivores, quaternary consumers eating
them, and so forth. Because only a fraction of energy is passed on to the next level, this hierarchy of predation
must end somewhere, and very seldom goes higher than five or six levels, and may go only as high as three trophic
levels (for example, a lion that preys upon large herbivores such as wildebeest, which in turn eat grasses). A predator
at the top of any food chain (that is, one that is preyed upon by no organism) is called an apex predator; examples
include the orca, sperm whale, anaconda, Komodo dragon, tiger, lion, tiger shark, Nile crocodile, and most eagles
and owls—and even omnivorous humans and grizzly bears. An apex predator in one environment may not retain this position
as a top predator if introduced to another habitat, such as a dog among alligators, a skunk in the presence of the
great horned owl immune to skunk spray, or a snapping turtle among jaguars; a predatory species introduced into an
area where it faces no predators, such as a domestic cat or a dog in some insular environments, can become an apex
predator by default. Many organisms (of which humans are prime examples) eat from multiple levels of the food chain
and, thus, make this classification problematic. A carnivore may eat both secondary and tertiary consumers, and its
prey may itself be difficult to classify for similar reasons. Organisms showing both carnivory and herbivory are
known as omnivores. Even herbivores such as the giant panda may supplement their diet with meat. Scavenging of carrion
provides a significant part of the diet of some of the most fearsome predators. Carnivorous plants would be very
difficult to fit into this classification, producing their own food but also digesting anything that they may trap.
Organisms that eat detritivores or parasites would also be difficult to classify by such a scheme. An alternative
view offered by Richard Dawkins is of predation as a form of competition: the genes of both the predator and prey
are competing for the body (or 'survival machine') of the prey organism. This is best understood in the context of
the gene-centered view of evolution. Another manner in which predation and competition are connected is through intraguild predation. Intraguild predators are those that kill and eat other predators of different species at the same trophic level, and thus are potential competitors. Predators may increase the biodiversity of communities
by preventing a single species from becoming dominant. Such predators are known as keystone species and may have
a profound influence on the balance of organisms in a particular ecosystem. Introduction or removal of this predator,
or changes in its population density, can have drastic cascading effects on the equilibrium of many other populations
in the ecosystem. For example, grazers of a grassland may prevent a single dominant species from taking over. The
elimination of wolves from Yellowstone National Park had profound impacts on the trophic pyramid. Without predation,
herbivores began to over-graze many woody browse species, affecting the area's plant populations. In addition, wolves
often kept animals from grazing in riparian areas, which protected beavers from having their food sources encroached
upon. The removal of wolves had a direct effect on beaver populations, as their habitat became territory for grazing.
Furthermore, predation keeps hydrological features such as creeks and streams in normal working order. Increased
browsing on willows and conifers along Blacktail Creek due to a lack of predation caused channel incision, because those plants had helped slow the water down and hold the soil in place. The act of predation can be broken down into a maximum
of four stages: detection of prey, attack, capture and finally consumption. The relationship between predator and
prey is one that is typically beneficial to the predator, and detrimental to the prey species. Sometimes, however,
predation has indirect benefits to the prey species, though the individuals preyed upon themselves do not benefit.
This means that, at each applicable stage, predator and prey species are in an evolutionary arms race to maximize
their respective abilities to obtain food or avoid being eaten. This interaction has resulted in a vast array of
adaptations in both groups. One adaptation helping both predators and prey avoid detection is camouflage, a form
of crypsis where species have an appearance that helps them blend into the background. Camouflage consists of not
only color but also shape and pattern. The background against which the organism is seen can be either its environment (e.g., a praying mantis resembling dead leaves) or other organisms (e.g., zebras' stripes blend in
with each other in a herd, making it difficult for lions to focus on a single target). The more convincing camouflage
is, the more likely it is that the organism will go unseen. Mimicry is a related phenomenon where an organism has
a similar appearance to another species. One such example of Batesian mimicry is the drone fly, which looks much like a bee yet is completely harmless, as it cannot sting at all. Another example is the io moth (Automeris io), which has markings on its wings that resemble an owl's eyes. When an insectivorous predator disturbs the moth, it reveals its hind wings, temporarily startling the predator and giving the moth time to escape. Predators may also use mimicry
to lure their prey, however. Female fireflies of the genus Photuris, for example, copy the light signals of other
species, thereby attracting male fireflies, which are then captured and eaten (see aggressive mimicry). While successful
predation results in a gain of energy, hunting invariably involves energetic costs as well. When hunger is not an issue, most predators will generally not attack prey, since the costs outweigh the benefits. For instance,
a large predatory fish like a shark that is well fed in an aquarium will typically ignore the smaller fish swimming
around it (while the prey fish take advantage of the fact that the apex predator is apparently uninterested). Surplus
killing represents a deviation from this type of behaviour. The treatment of consumption in terms of cost-benefit
analysis is known as optimal foraging theory, and has been quite successful in the study of animal behavior. In general,
costs and benefits are considered in energy gain per unit time, though other factors are also important, such as
essential nutrients that have no caloric value but are necessary for survival and health. Social predation offers predators the possibility of killing creatures larger than any member of the species could overpower singly. Lions, hyenas, wolves, dholes, African wild dogs, and piranhas can kill large herbivores that single animals of the same species usually cannot bring down alone. Social predation allows some animals to organize hunts of creatures that would
easily escape a single predator; thus chimpanzees can prey upon colobus monkeys, and Harris's hawks can cut off all
possible escapes for a doomed rabbit. Extreme specialization of roles is evident in some hunting that requires co-operation
between predators of very different species: humans with the aid of falcons or dogs, or fishing with cormorants.
Social predation is often very complex behavior, and not all social creatures (for example, domestic cats) perform
it. Even with instinct alone, without complex intelligence, some ant species can destroy much larger creatures. It has been observed that well-fed predator animals kept in lax captivity (for instance, pet or farm animals) will usually distinguish putative prey animals that are familiar co-inhabitants of the same human household from wild ones outside it. This interaction can range from peaceful coexistence to close companionship; motivation to ignore
the predatory instinct may result from mutual advantage or fear of reprisal from human masters who have made clear
that harming co-inhabitants will not be tolerated. Pet cats and pet mice, for example, may live together in the same
human residence without incident as companions. Pet cats and pet dogs under human mastership often depend on each
other for warmth, companionship, and even protection, particularly in rural areas. Predatory animals often use their
usual methods of attacking prey to inflict or to threaten grievous injury to their own predators. The electric eel
uses the same electric current to kill prey and to defend itself against animals (anacondas, caimans, egrets, jaguars,
mountain lions, giant otters, humans, dogs, and cats) that ordinarily prey upon fish similar to an electric eel in
size; the electric eel thus remains an apex predator in a predator-rich environment. Itself small enough to be prey for others, the domestic cat uses its formidable teeth and claws as weapons against animals that might mistake a cat for easy prey. Many non-predatory prey animals, such as the zebra, can give a strong kick that can maim or
kill, while others charge with tusks or horns. Mobbing can be an interspecies activity: it is common for birds to
respond to mobbing calls of a different species. Many birds will show up at the sight of mobbing and watch and call,
but not participate. Some species can be on both ends of a mobbing attack. Crows are
frequently mobbed by smaller songbirds as they prey on eggs and young from these birds' nests, but these same crows
will cooperate with smaller birds to drive away hawks or larger mammalian predators. On occasion, birds will mob
animals that pose no threat. Aposematism, where organisms are brightly colored as a warning to predators, is the
antithesis of camouflage. Some organisms pose a threat to their predators—for example they may be poisonous, or able
to harm them physically. Aposematic coloring involves bright, easily recognizable and unique colors and patterns.
For example, bright coloration in Variable Checkerspot butterflies leads to decreased predation attempts by avian
predators. A predator that is harmed (e.g., stung) by such prey will remember its appearance as something to avoid. While that particular prey organism may be killed, the coloring benefits the prey species as a whole. It is fairly clear that predators tend to lower the survival and fecundity of their prey, but on a higher
level of organization, populations of predator and prey species also interact. It is obvious that predators depend
on prey for survival, and this is reflected in predator populations being affected by changes in prey populations.
It is not so obvious, however, that predators affect prey populations. Eating a prey organism may simply make room
for another if the prey population is approaching its carrying capacity. A lone naked human is at a physical disadvantage
to other comparable apex predators in areas such as speed, bone density, weight, and physical strength. Humans also
lack innate weaponry such as claws. Without crafted weapons, society, or cleverness, a lone human can easily be defeated
by fit predatory animals, such as wild dogs, big cats and bears (see Man-eater). However, humans are not solitary
creatures; they are social animals with highly developed social behaviors. Early humans, such as Homo erectus, have
been using stone tools and weapons for well over a million years. Anatomically modern humans have been apex predators
since they first evolved, and many species of carnivorous megafauna actively avoid interacting with humans; the primary
environmental competitor for a human is other humans. The one subspecies of carnivorous megafauna that does interact
frequently with humans in predatory roles is the domestic dog, usually as a partner in predation when the two hunt together. Cannibalism has occurred in various places, among various cultures, and for various reasons.
At least a few people, such as the Donner Party, are said to have resorted to it in desperation. Small population size is a characteristic almost universal among apex predators, with humans and dogs by far the most blatant exceptions. Low numbers would not be a problem for apex predators if there were an abundance of prey and no competition or niche
overlap, a scenario that is rarely, if ever, encountered in the wild. The competitive exclusion principle states
that if two species' ecological niches overlap, there is a very high likelihood of competition as both species are
in direct competition for the same resources. This factor alone could lead to the extirpation of one or both species,
but is compounded by the added factor of prey abundance. A predator's effect on its prey species is hard to see in
the short term. Observed over a longer period, however, the population of a predator rises and falls in correlation with the population of its prey, in a cycle similar to the boom-and-bust cycle of economics.
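This boom-and-bust coupling is classically modeled by the Lotka-Volterra predator-prey equations. As an illustrative sketch (the parameter values below are arbitrary assumptions, not drawn from any field study), a simple Euler integration shows the two populations oscillating around their equilibrium, out of phase with each other:

```python
# Lotka-Volterra predator-prey model, integrated with a basic Euler step.
# All parameter values are illustrative assumptions, not field data.

def simulate(prey=15.0, pred=6.0, alpha=0.1, beta=0.02,
             delta=0.01, gamma=0.1, dt=0.001, steps=200_000):
    """alpha: prey birth rate; beta: predation rate;
    delta: predator growth per prey eaten; gamma: predator death rate."""
    history = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt   # prey grows, is eaten
        dpred = (delta * prey * pred - gamma * pred) * dt  # predators feed, die off
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return history

history = simulate()
prey_low = min(p for p, _ in history)
prey_peak = max(p for p, _ in history)
print(f"prey population cycled between {prey_low:.1f} and {prey_peak:.1f}")
```

Because predator growth lags behind prey abundance, the predator curve peaks shortly after the prey curve, reproducing the delayed rise-and-fall cycle described above.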
If a predator overhunts its prey, the prey population will lower to numbers that are too scarce for the predators
to find. This will cause the predator population to dip, decreasing the predation pressure on the prey population.
The decrease in predators will allow the small number of prey left to slowly increase their population to somewhere
around their previous abundance, which will allow the predator population to increase in response to the greater
availability of resources. If a predator hunts its prey species to numbers too low to sustain the population in the
short term, they can cause not only the extinction or extirpation of the prey but also the extinction of their own
species, a phenomenon known as coextinction. This is a risk that wildlife conservationists encounter when introducing
predators to prey that have not coevolved with the same or similar predators. This possibility depends largely on
how well and how fast the prey species is able to adapt to the introduced predator. One way that this risk can be
avoided is if the predator finds an alternative prey species or if an alternative prey species is introduced (something
that ecologists and environmentalists try to avoid whenever possible). An alternative prey species would help to lift some of the predation pressure from the initial prey species, giving the population a chance to recover; however, this does not guarantee recovery, as the initial prey population may already have been hunted to below sustainable numbers or to complete extinction. Predators may be put to use in conservation efforts
to control introduced species. Although the aim in this situation is to remove the introduced species entirely, keeping
its abundance down is often the only possibility. Predators from its natural range may be introduced to control populations,
though in some cases this has little effect, and may even cause unforeseen problems. Besides their use in conservation
biology, predators are also important for controlling pests in agriculture. Natural predators are an environmentally
friendly and sustainable way of reducing damage to crops, and are one alternative to the use of chemical agents such
as pesticides.
Marvel counts among its characters such well-known superheroes as Spider-Man, Iron Man, Captain America, Wolverine, Thor,
Hulk, Ant-Man, such teams as the Avengers, the Guardians of the Galaxy, the Fantastic Four, the Inhumans and the
X-Men, and antagonists such as Doctor Doom, The Enchantress, Green Goblin, Ultron, Doctor Octopus, Thanos, Magneto
and Loki. Most of Marvel's fictional characters operate in a single reality known as the Marvel Universe, with locations
that mirror real-life cities. Characters such as Spider-Man, the Fantastic Four, the Avengers, Daredevil and Doctor
Strange are based in New York City, whereas the X-Men have historically been based in Salem Center, New York and
Hulk's stories often have been set in the American Southwest. Martin Goodman founded the company later known as Marvel
Comics under the name Timely Publications in 1939. Martin Goodman, a pulp magazine publisher who had started with
a Western pulp in 1933, was expanding into the emerging—and by then already highly popular—new medium of comic books.
Launching his new line from his existing company's offices at 330 West 42nd Street, New York City, he officially
held the titles of editor, managing editor, and business manager, with Abraham Goodman officially listed as publisher.
Timely's first publication, Marvel Comics #1 (cover dated Oct. 1939), included the first appearance of Carl Burgos'
android superhero the Human Torch, and the first appearances of Bill Everett's anti-hero Namor the Sub-Mariner, among
other features. The issue was a great success, with it and a second printing the following month selling, combined,
nearly 900,000 copies. While its contents came from an outside packager, Funnies, Inc., Timely had its own staff
in place by the following year. The company's first true editor, writer-artist Joe Simon, teamed with artist and
emerging industry notable Jack Kirby to create one of the first patriotically themed superheroes, Captain America,
in Captain America Comics #1 (March 1941). It, too, proved a hit, with sales of nearly one million. Goodman formed
Timely Comics, Inc., beginning with comics cover-dated April 1941 or Spring 1941. While no other Timely character
would achieve the success of these "big three", some notable heroes—many of which continue to appear in modern-day
retcon appearances and flashbacks—include the Whizzer, Miss America, the Destroyer, the original Vision, and the
Angel. Timely also published one of humor cartoonist Basil Wolverton's best-known features, "Powerhouse Pepper",
as well as a line of children's funny-animal comics featuring popular characters like Super Rabbit and the duo Ziggy
Pig and Silly Seal. Goodman's business strategy involved having his various magazines and comic books published by
a number of corporations all operating out of the same office and with the same staff. One of these shell companies
through which Timely Comics was published was named Marvel Comics by at least Marvel Mystery Comics #55 (May 1944).
As well, some comics' covers, such as All Surprise Comics #12 (Winter 1946–47), were labeled "A Marvel Magazine"
many years before Goodman would formally adopt the name in 1961. Atlas, rather than innovate, took a proven route
of following popular trends in television and movies—Westerns and war dramas prevailing for a time, drive-in movie
monsters another time—and even other comic books, particularly the EC horror line. Atlas also published a plethora
of children's and teen humor titles, including Dan DeCarlo's Homer the Happy Ghost (à la Casper the Friendly Ghost)
and Homer Hooper (à la Archie Andrews). Atlas unsuccessfully attempted to revive superheroes from late 1953 to mid-1954,
with the Human Torch (art by Syd Shores and Dick Ayers, variously), the Sub-Mariner (drawn and most stories written
by Bill Everett), and Captain America (writer Stan Lee, artist John Romita Sr.). Atlas did not achieve any breakout
hits and, according to Stan Lee, Atlas survived chiefly because it produced work quickly, cheaply, and at a passable
quality. In 1961, writer-editor Stan Lee revolutionized superhero comics by introducing superheroes designed to appeal to a broader, all-ages readership rather than the predominantly child audiences of the medium. Modern Marvel's first superhero team,
the titular stars of The Fantastic Four #1 (Nov. 1961), broke convention with other comic book archetypes of the
time by squabbling, holding grudges both deep and petty, and eschewing anonymity or secret identities in favor of
celebrity status. Subsequently, Marvel comics developed a reputation for focusing on characterization and adult issues
to a greater extent than most superhero comics before them, a quality which the new generation of older readers appreciated.
This applied to The Amazing Spider-Man title in particular, which turned out to be Marvel's most successful book.
Its young hero suffered from self-doubt and mundane problems like any other teenager, something readers could identify
with. Lee and freelance artist and eventual co-plotter Jack Kirby's Fantastic Four originated in a Cold War culture
that led their creators to revise the superhero conventions of previous eras to better reflect the psychological
spirit of their age. Eschewing such comic-book tropes as secret identities and even costumes at first, having a monster
as one of the heroes, and having its characters bicker and complain in what was later called a "superheroes in the
real world" approach, the series represented a change that proved to be a great success. All of these elements struck a chord with older readers, such as college-aged adults, and the books gained a following among them in a way not seen before.
In 1965, Spider-Man and the Hulk were both featured in Esquire magazine's list of 28 college campus heroes, alongside
John F. Kennedy and Bob Dylan. In 2009 writer Geoff Boucher reflected that, "Superman and DC Comics instantly seemed
like boring old Pat Boone; Marvel felt like The Beatles and the British Invasion. It was Kirby's artwork with its
tension and psychedelia that made it perfect for the times—or was it Lee's bravado and melodrama, which was somehow
insecure and brash at the same time?" In addition to Spider-Man and the Fantastic Four, Marvel began publishing further
superhero titles featuring such heroes and antiheroes as the Hulk, Thor, Ant-Man, Iron Man, the X-Men, Daredevil,
the Inhumans, Black Panther, Doctor Strange, Captain Marvel and the Silver Surfer, and such memorable antagonists
as Doctor Doom, Magneto, Galactus, Loki, the Green Goblin, and Doctor Octopus, all existing in a shared reality known
as the Marvel Universe, with locations that mirror real-life cities such as New York, Los Angeles and Chicago. In
1968, while selling 50 million comic books a year, company founder Goodman revised the constraining distribution
arrangement with Independent News he had reached under duress during the Atlas years, allowing him now to release
as many titles as demand warranted. Late that year he sold Marvel Comics and his other publishing businesses to the
Perfect Film and Chemical Corporation, which continued to group them as the subsidiary Magazine Management Company,
with Goodman remaining as publisher. In 1969, Goodman finally ended his distribution deal with Independent by signing
with Curtis Circulation Company. In 1971, the United States Department of Health, Education, and Welfare approached
Marvel Comics editor-in-chief Stan Lee to do a comic book story about drug abuse. Lee agreed and wrote a three-part
Spider-Man story portraying drug use as dangerous and unglamorous. However, the industry's self-censorship board,
the Comics Code Authority, refused to approve the story because of the presence of narcotics, deeming the context
of the story irrelevant. Lee, with Goodman's approval, published the story regardless in The Amazing Spider-Man #96–98
(May–July 1971), without the Comics Code seal. The market reacted well to the storyline, and the CCA subsequently
revised the Code the same year. A series of new editors-in-chief oversaw the company during another slow time for
the industry. Once again, Marvel attempted to diversify, and with the updating of the Comics Code achieved moderate
to strong success with titles themed to horror (The Tomb of Dracula), martial arts, (Shang-Chi: Master of Kung Fu),
sword-and-sorcery (Conan the Barbarian, Red Sonja), satire (Howard the Duck) and science fiction (2001: A Space Odyssey,
"Killraven" in Amazing Adventures, Battlestar Galactica, Star Trek, and, late in the decade, the long-running Star
Wars series). Some of these were published in larger-format black and white magazines, under its Curtis Magazines
imprint. Marvel was able to capitalize on its successful superhero comics of the previous decade by acquiring a new
newsstand distributor and greatly expanding its comics line. Marvel pulled ahead of rival DC Comics in 1972, during
a time when the price and format of the standard newsstand comic were in flux. Goodman increased the price and size
of Marvel's November 1971 cover-dated comics from 15 cents for 36 pages total to 25 cents for 52 pages. DC followed
suit, but Marvel the following month dropped its comics to 20 cents for 36 pages, offering a lower-priced product
with a higher distributor discount. Goodman, now disconnected from Marvel, set up a new company called Seaboard Periodicals
in 1974, reviving Marvel's old Atlas name for a new Atlas Comics line, but this lasted only a year and a half. In
the mid-1970s a decline of the newsstand distribution network affected Marvel. Cult hits such as Howard the Duck
fell victim to the distribution problems, with some titles reporting low sales when in fact the first specialty comic
book stores resold them at a later date.[citation needed] But by the end of the decade, Marvel's fortunes were reviving,
thanks to the rise of direct market distribution—selling through those same comics-specialty stores instead of newsstands.
Marvel held its own comic book convention, Marvelcon '75, in spring 1975, and promised a Marvelcon '76. At the 1975
event, Stan Lee used a Fantastic Four panel discussion to announce that Jack Kirby, the artist co-creator of most
of Marvel's signature characters, was returning to Marvel after having left in 1970 to work for rival DC Comics.
In October 1976, Marvel, which already licensed reprints in different countries, including the UK, created a superhero
specifically for the British market. Captain Britain debuted exclusively in the UK, and later appeared in American
comics. In 1978, Jim Shooter became Marvel's editor-in-chief. Although a controversial personality, Shooter cured
many of the procedural ills at Marvel, including repeatedly missed deadlines. During Shooter's nine-year tenure as
editor-in-chief, Chris Claremont and John Byrne's run on the Uncanny X-Men and Frank Miller's run on Daredevil became
critical and commercial successes. Shooter brought Marvel into the rapidly evolving direct market, institutionalized
creator royalties, starting with the Epic Comics imprint for creator-owned material in 1982; introduced company-wide
crossover story arcs with Contest of Champions and Secret Wars; and in 1986 launched the ultimately unsuccessful
New Universe line to commemorate the 25th anniversary of the Marvel Comics imprint. Star Comics, a children-oriented
line differing from the regular Marvel titles, was briefly successful during this period. Marvel earned a great deal
of money and recognition during the comic book boom of the early 1990s, launching the successful 2099 line of comics
set in the future (Spider-Man 2099, etc.) and the creatively daring though commercially unsuccessful Razorline imprint
of superhero comics created by novelist and filmmaker Clive Barker. In 1990, Marvel began selling Marvel Universe
Cards with trading card maker SkyBox International. These were collectible trading cards that featured the characters
and events of the Marvel Universe. The 1990s saw the rise of variant covers, cover enhancements, swimsuit issues,
and company-wide crossovers that affected the overall continuity of the fictional Marvel Universe. In 1996, Marvel
had some of its titles participate in "Heroes Reborn", a crossover that allowed Marvel to relaunch some of its flagship
characters such as the Avengers and the Fantastic Four, and outsource them to the studios of two of the former Marvel
artists turned Image Comics founders, Jim Lee and Rob Liefeld. The relaunched titles, which saw the characters transported
to a parallel universe with a history distinct from the mainstream Marvel Universe, were a solid success amidst a
generally struggling industry, but Marvel discontinued the experiment after a one-year run and returned the characters
to the Marvel Universe proper. In 1998, the company launched the imprint Marvel Knights, taking place within Marvel
continuity; helmed by soon-to-become editor-in-chief Joe Quesada, it featured tough, gritty stories showcasing such
characters as the Inhumans, Black Panther and Daredevil. In late 1994, Marvel acquired the comic book distributor
Heroes World Distribution to use as its own exclusive distributor. As the industry's other major publishers made
exclusive distribution deals with other companies, the ripple effect resulted in the survival of only one other major
distributor in North America, Diamond Comic Distributors Inc. In early 1997, when Marvel's Heroes World endeavor
failed, Diamond also forged an exclusive deal with Marvel—giving the company its own section of its comics catalog
Previews. With the new millennium, Marvel Comics emerged from bankruptcy and again began diversifying its offerings.
In 2001, Marvel withdrew from the Comics Code Authority and established its own Marvel Rating System for comics.
The first title from this era to not have the code was X-Force #119 (October 2001). Marvel also created new imprints,
such as MAX (an explicit-content line) and Marvel Adventures (developed for child audiences). In addition, the company
created an alternate universe imprint, Ultimate Marvel, that allowed the company to reboot its major titles by revising
and updating its characters to introduce to a new generation. On August 31, 2009, The Walt Disney Company announced
a deal to acquire Marvel Comics' parent corporation, Marvel Entertainment, for a reported $4 billion to $4.2 billion, with Marvel
shareholders to receive $30 and 0.745 Disney shares for each share of Marvel they own. As of 2008, Marvel and its
major, longtime competitor DC Comics shared over 80% of the American comic-book market. As of September 2010, Marvel
switched its bookstores distribution company from Diamond Book Distributors to Hachette Distribution Services. Marvel
discontinued its Marvel Adventures imprint in March 2012, replacing it with a line of two titles connected to
the Marvel Universe TV block. Also in March, Marvel announced its Marvel ReEvolution initiative that included Infinite
Comics, a line of digital comics; Marvel AR, an application that provides an augmented-reality experience to readers; and Marvel NOW!, a relaunch of most of the company's major titles with different creative teams. Marvel
NOW! also saw the debut of new flagship titles including Uncanny Avengers and All-New X-Men. In April 2013, Marvel
and other Disney conglomerate components began announcing joint projects. With ABC, a Once Upon a Time graphic novel
was announced for publication in September. With Disney, Marvel announced in October 2013 that in January 2014 it
would release its first title under their joint "Disney Kingdoms" imprint, "Seekers of the Weird", a five-issue miniseries.
On January 3, 2014, fellow Disney subsidiary Lucasfilm Limited, LLC announced that as of 2015, Star Wars comics would
once again be published by Marvel. Marvel first licensed two prose novels to Bantam Books, which printed The Avengers
Battle the Earth Wrecker by Otto Binder (1967) and Captain America: The Great Gold Steal by Ted White (1968). Various
publishers took up the licenses from 1978 to 2002. Also, with the various licensed films being released beginning
in 1997, various publishers put out movie novelizations. In 2003, following publication of the prose young adult
novel Mary Jane, starring Mary Jane Watson from the Spider-Man mythos, Marvel announced the formation of the publishing
imprint Marvel Press. However, Marvel moved back to licensing with Pocket Books from 2005 to 2008. With few books
issued under the imprint, Marvel and Disney Books Group relaunched Marvel Press in 2011 with the Marvel Origin Storybooks
line. Walt Disney Parks and Resorts plans on creating original Marvel attractions at their theme parks, with Hong
Kong Disneyland becoming the first Disney theme park to feature a Marvel attraction. Due to the licensing agreement
with Universal Studios, signed prior to Disney's purchase of Marvel, Walt Disney World and Tokyo Disney are barred
from having Marvel characters in their parks. However, this only includes characters Universal is currently using,
other characters in their "families" (X-Men, Avengers, Fantastic Four, etc.), and the villains associated with said
characters. This clause has allowed Walt Disney World to have meet and greets, merchandise, attractions and more
with other Marvel characters not associated with the characters at Islands of Adventure, such as Star-Lord and Gamora
from Guardians of the Galaxy as well as Baymax and Hiro from Big Hero 6.
The British Empire comprised the dominions, colonies, protectorates, mandates and other territories ruled or administered
by the United Kingdom. It originated with the overseas possessions and trading posts established by England between
the late 16th and early 18th centuries. At its height, it was the largest empire in history and, for over a century,
was the foremost global power. By 1922 the British Empire held sway over about 458 million people, one-fifth of the
world's population at the time, and covered more than 13,000,000 sq mi (33,670,000 km2), almost a quarter of the
Earth's total land area. As a result, its political, legal, linguistic and cultural legacy is widespread. At the
peak of its power, the phrase "the empire on which the sun never sets" was often used to describe the British Empire,
because its expanse around the globe meant that the sun was always shining on at least one of its territories. During
the Age of Discovery in the 15th and 16th centuries, Portugal and Spain pioneered European exploration of the globe,
and in the process established large overseas empires. Envious of the great wealth these empires generated, England,
France, and the Netherlands began to establish colonies and trade networks of their own in the Americas and Asia.
A series of wars in the 17th and 18th centuries with the Netherlands and France left England (and then, following
union between England and Scotland in 1707, Great Britain) the dominant colonial power in North America and India.
The independence of the Thirteen Colonies in North America in 1783 after the American War of Independence caused
Britain to lose some of its oldest and most populous colonies. British attention soon turned towards Asia, Africa,
and the Pacific. After the defeat of France in the Revolutionary and Napoleonic Wars (1792–1815), Britain emerged
as the principal naval and imperial power of the 19th century (with London the largest city in the world from about
1830). Unchallenged at sea, British dominance was later described as Pax Britannica ("British Peace"), a period of
relative peace in Europe and the world (1815–1914) during which the British Empire became the global hegemon and
adopted the role of global policeman. In the early 19th century, the Industrial Revolution began to transform Britain;
by the time of the Great Exhibition in 1851 the country was described as the "workshop of the world". The British
Empire expanded to include India, large parts of Africa and many other territories throughout the world. Alongside
the formal control it exerted over its own colonies, British dominance of much of world trade meant that it effectively
controlled the economies of many regions, such as Asia and Latin America. Domestically, political attitudes favoured
free trade and laissez-faire policies and a gradual widening of the voting franchise. During this century, the population
increased at a dramatic rate, accompanied by rapid urbanisation, causing significant social and economic stresses.
To seek new markets and sources of raw materials, the Conservative Party under Disraeli launched a period of imperialist
expansion in Egypt, South Africa, and elsewhere. Canada, Australia, and New Zealand became self-governing dominions.
By the start of the 20th century, Germany and the United States challenged Britain's economic lead. Subsequent military
and economic tensions between Britain and Germany were major causes of the First World War, during which Britain
relied heavily upon its empire. The conflict placed enormous strain on the military, financial and manpower resources
of Britain. Although the British Empire achieved its largest territorial extent immediately after World War I, Britain
was no longer the world's pre-eminent industrial or military power. In the Second World War, Britain's colonies in
South-East Asia were occupied by Imperial Japan. Despite the final victory of Britain and its allies, the damage
to British prestige helped to accelerate the decline of the empire. India, Britain's most valuable and populous possession,
achieved independence as part of a larger decolonisation movement in which Britain granted independence to most territories
of the Empire. The transfer of Hong Kong to China in 1997 marked for many the end of the British Empire. Fourteen
overseas territories remain under British sovereignty. After independence, many former British colonies joined the
Commonwealth of Nations, a free association of independent states. The United Kingdom is now one of 16 Commonwealth
nations, a grouping known informally as the Commonwealth realms, that share one monarch—Queen Elizabeth II. The foundations
of the British Empire were laid when England and Scotland were separate kingdoms. In 1496 King Henry VII of England,
following the successes of Spain and Portugal in overseas exploration, commissioned John Cabot to lead a voyage to
discover a route to Asia via the North Atlantic. Cabot sailed in 1497, five years after the European discovery of
America, and although he successfully made landfall on the coast of Newfoundland (mistakenly believing, like Christopher
Columbus, that he had reached Asia), there was no attempt to found a colony. Cabot led another voyage to the Americas
the following year but nothing was heard of his ships again. No further attempts to establish English colonies in
the Americas were made until well into the reign of Queen Elizabeth I, during the last decades of the 16th century.
In the meantime the Protestant Reformation had turned England and Catholic Spain into implacable enemies. In 1562,
the English Crown encouraged the privateers John Hawkins and Francis Drake to engage in slave-raiding attacks against
Spanish and Portuguese ships off the coast of West Africa with the aim of breaking into the Atlantic trade system.
This effort was rebuffed and later, as the Anglo-Spanish Wars intensified, Elizabeth I gave her blessing to further
privateering raids against Spanish ports in the Americas and shipping that was returning across the Atlantic, laden
with treasure from the New World. At the same time, influential writers such as Richard Hakluyt and John Dee (who
was the first to use the term "British Empire") were beginning to press for the establishment of England's own empire.
By this time, Spain had become the dominant power in the Americas and was exploring the Pacific ocean, Portugal had
established trading posts and forts from the coasts of Africa and Brazil to China, and France had begun to settle
the Saint Lawrence River area, later to become New France. In 1578, Elizabeth I granted a patent to Humphrey Gilbert
for discovery and overseas exploration. That year, Gilbert sailed for the West Indies with the intention of engaging
in piracy and establishing a colony in North America, but the expedition was aborted before it had crossed the Atlantic.
In 1583 he embarked on a second attempt, on this occasion to the island of Newfoundland whose harbour he formally
claimed for England, although no settlers were left behind. Gilbert did not survive the return journey to England,
and was succeeded by his half-brother, Walter Raleigh, who was granted his own patent by Elizabeth in 1584. Later
that year, Raleigh founded the colony of Roanoke on the coast of present-day North Carolina, but lack of supplies
caused the colony to fail. In 1603, James VI, King of Scots, ascended (as James I) to the English throne and in 1604
negotiated the Treaty of London, ending hostilities with Spain. Now at peace with its main rival, English attention
shifted from preying on other nations' colonial infrastructures to the business of establishing its own overseas
colonies. The British Empire began to take shape during the early 17th century, with the English settlement of North
America and the smaller islands of the Caribbean, and the establishment of private companies, most notably the English
East India Company, to administer colonies and overseas trade. This period, until the loss of the Thirteen Colonies
after the American War of Independence towards the end of the 18th century, has subsequently been referred to by
some historians as the "First British Empire". The Caribbean initially provided England's most important and lucrative
colonies, but not before several attempts at colonisation failed. An attempt to establish a colony in Guiana in 1604
lasted only two years, and failed in its main objective to find gold deposits. Colonies in St Lucia (1605) and Grenada
(1609) also rapidly folded, but settlements were successfully established in St. Kitts (1624), Barbados (1627) and
Nevis (1628). The colonies soon adopted the system of sugar plantations successfully used by the Portuguese in Brazil,
which depended on slave labour, and—at first—Dutch ships, to sell the slaves and buy the sugar. To ensure that the
increasingly healthy profits of this trade remained in English hands, Parliament decreed in 1651 that only English
ships would be able to ply their trade in English colonies. This led to hostilities with the United Dutch Provinces—a
series of Anglo-Dutch Wars—which would eventually strengthen England's position in the Americas at the expense of
the Dutch. In 1655, England annexed the island of Jamaica from the Spanish, and in 1666 succeeded in colonising the
Bahamas. England's first permanent settlement in the Americas was founded in 1607 in Jamestown, led by Captain John
Smith and managed by the Virginia Company. Bermuda was settled and claimed by England as a result of the 1609 shipwreck
there of the Virginia Company's flagship, and in 1615 was turned over to the newly formed Somers Isles Company. The
Virginia Company's charter was revoked in 1624 and direct control of Virginia was assumed by the crown, thereby founding
the Colony of Virginia. The London and Bristol Company was created in 1610 with the aim of creating a permanent settlement
on Newfoundland, but was largely unsuccessful. In 1620, Plymouth was founded as a haven for puritan religious separatists,
later known as the Pilgrims. Fleeing from religious persecution would become the motive of many English would-be
colonists to risk the arduous trans-Atlantic voyage: Maryland was founded as a haven for Roman Catholics (1634),
Rhode Island (1636) as a colony tolerant of all religions and Connecticut (1639) for Congregationalists. The Province
of Carolina was founded in 1663. With the surrender of Fort Amsterdam in 1664, England gained control of the Dutch
colony of New Netherland, renaming it New York. This was formalised in negotiations following the Second Anglo-Dutch
War, in exchange for Suriname. In 1681, the colony of Pennsylvania was founded by William Penn. The American colonies
were less financially successful than those of the Caribbean, but had large areas of good agricultural land and attracted
far larger numbers of English emigrants who preferred their temperate climates. In 1672, the Royal African
Company was inaugurated, receiving from King Charles II a monopoly of the trade to supply slaves to the British colonies
of the Caribbean. From the outset, slavery was the basis of the British Empire in the West Indies. Until the abolition
of the slave trade in 1807, Britain was responsible for the transportation of 3.5 million African slaves to the Americas,
a third of all slaves transported across the Atlantic. To facilitate this trade, forts were established on the coast
of West Africa, such as James Island, Accra and Bunce Island. In the British Caribbean, the percentage of the population
of African descent rose from 25 percent in 1650 to around 80 percent in 1780, and in the 13 Colonies from 10 percent
to 40 percent over the same period (the majority in the southern colonies). For the slave traders, the trade was
extremely profitable, and became a major economic mainstay for such western British cities as Bristol and Liverpool,
which formed the third corner of the so-called triangular trade with Africa and the Americas. For the transported,
harsh and unhygienic conditions on the slaving ships and poor diets meant that the average mortality rate during
the Middle Passage was one in seven. In 1695, the Scottish Parliament granted a charter to the Company of Scotland,
which established a settlement in 1698 on the isthmus of Panama. Besieged by neighbouring Spanish colonists of New
Granada, and afflicted by malaria, the colony was abandoned two years later. The Darien scheme was a financial disaster
for Scotland—a quarter of Scottish capital was lost in the enterprise—and ended Scottish hopes of establishing its
own overseas empire. The episode also had major political consequences, persuading the governments of both England
and Scotland of the merits of a union of countries, rather than just crowns. This occurred in 1707 with the Treaty
of Union, establishing the Kingdom of Great Britain. At the end of the 16th century, England and the Netherlands
began to challenge Portugal's monopoly of trade with Asia, forming private joint-stock companies to finance the voyages—the
English, later British, East India Company and the Dutch East India Company, chartered in 1600 and 1602 respectively.
The primary aim of these companies was to tap into the lucrative spice trade, an effort focused mainly on two regions:
the East Indies archipelago and, as an important hub in the trade network, India. There, they competed for trade supremacy
with Portugal and with each other. Although England ultimately eclipsed the Netherlands as a colonial power, in the
short term the Netherlands' more advanced financial system and the three Anglo-Dutch Wars of the 17th century left
it with a stronger position in Asia. Hostilities ceased after the Glorious Revolution of 1688 when the Dutch William
of Orange ascended the English throne, bringing peace between the Netherlands and England. A deal between the two
nations left the spice trade of the East Indies archipelago to the Netherlands and the textiles industry of India
to England, but textiles soon overtook spices in terms of profitability, and by 1720, in terms of sales, the British
company had overtaken the Dutch. Peace between England and the Netherlands in 1688 meant that the two countries entered
the Nine Years' War as allies, but the conflict—waged in Europe and overseas between France, Spain and the Anglo-Dutch
alliance—left the English a stronger colonial power than the Dutch, who were forced to devote a larger proportion
of their military budget to the costly land war in Europe. The 18th century saw England (after 1707, Britain) rise
to be the world's dominant colonial power, with France becoming its main rival on the imperial stage. At the Treaty
of Utrecht (1713), which concluded the War of the Spanish Succession, Philip V of Spain renounced his and his descendants'
right to the French throne, and Spain lost its empire in Europe. The British Empire was territorially enlarged: from
France, Britain gained Newfoundland and Acadia, and
from Spain, Gibraltar and Minorca. Gibraltar became a critical naval base and allowed Britain to control the Atlantic
entry and exit point to the Mediterranean. Spain also ceded the rights to the lucrative asiento (permission to sell
slaves in Spanish America) to Britain. During the middle decades of the 18th century, there were several outbreaks
of military conflict on the Indian subcontinent, the Carnatic Wars, as the English East India Company (the Company)
and its French counterpart, the Compagnie française des Indes orientales, struggled alongside local rulers to fill
the vacuum that had been left by the decline of the Mughal Empire. The Battle of Plassey in 1757, in which the British,
led by Robert Clive, defeated the Nawab of Bengal and his French allies, left the Company in control of Bengal and
as the major military and political power in India. France was left in control of its enclaves but with military restrictions
and an obligation to support British client states, ending French hopes of controlling India. In the following decades
the Company gradually increased the size of the territories under its control, either ruling directly or via local
rulers under the threat of force from the British Indian Army, the vast majority of which was composed of Indian
sepoys. The British and French struggles in India became but one theatre of the global Seven Years' War (1756–1763)
involving France, Britain and the other major European powers. The signing of the Treaty of Paris (1763) had important
consequences for the future of the British Empire. In North America, France's future as a colonial power there was
effectively ended with the recognition of British claims to Rupert's Land, and the ceding of New France to Britain
(leaving a sizeable French-speaking population under British control) and Louisiana to Spain. Spain ceded Florida
to Britain. Along with its victory over France in India, the Seven Years' War therefore left Britain as the world's
most powerful maritime power. During the 1760s and early 1770s, relations between the Thirteen Colonies and Britain
became increasingly strained, primarily because of resentment of the British Parliament's attempts to govern and
tax American colonists without their consent. This was summarised at the time by the slogan "No taxation without
representation", a perceived violation of the guaranteed Rights of Englishmen. The American Revolution began with
rejection of Parliamentary authority and moves towards self-government. In response Britain sent troops to reimpose
direct rule, leading to the outbreak of war in 1775. The following year, in 1776, the United States declared independence.
The entry of France to the war in 1778 tipped the military balance in the Americans' favour and after a decisive
defeat at Yorktown in 1781, Britain began negotiating peace terms. American independence was acknowledged at the
Peace of Paris in 1783. The loss of such a large portion of British America, at the time Britain's most populous
overseas possession, is seen by some historians as the event defining the transition between the "first" and "second"
empires, in which Britain shifted its attention away from the Americas to Asia, the Pacific and later Africa. Adam
Smith's Wealth of Nations, published in 1776, had argued that colonies were redundant, and that free trade should
replace the old mercantilist policies that had characterised the first period of colonial expansion, dating back
to the protectionism of Spain and Portugal. The growth of trade between the newly independent United States and Britain
after 1783 seemed to confirm Smith's view that political control was not necessary for economic success. Events in
America influenced British policy in Canada, where between 40,000 and 100,000 defeated Loyalists had migrated from
America following independence. The 14,000 Loyalists who went to the Saint John and Saint Croix river valleys, then
part of Nova Scotia, felt too far removed from the provincial government in Halifax, so London split off New Brunswick
as a separate colony in 1784. The Constitutional Act of 1791 created the provinces of Upper Canada (mainly English-speaking)
and Lower Canada (mainly French-speaking) to defuse tensions between the French and British communities, and implemented
governmental systems similar to those employed in Britain, with the intention of asserting imperial authority and
not allowing the sort of popular control of government that was perceived to have led to the American Revolution.
Since 1718, transportation to the American colonies had been a penalty for various criminal offences in Britain,
with approximately one thousand convicts transported per year across the Atlantic. Forced to find an alternative
location after the loss of the 13 Colonies in 1783, the British government turned to the newly discovered lands of
Australia. The western coast of Australia had been discovered for Europeans by the Dutch explorer Willem Jansz in
1606 and was later named New Holland by the Dutch East India Company, but there was no attempt to colonise it. In
1770 James Cook discovered the eastern coast of Australia while on a scientific voyage to the South Pacific Ocean,
claimed the continent for Britain, and named it New South Wales. In 1778, Joseph Banks, Cook's botanist on the voyage,
presented evidence to the government on the suitability of Botany Bay for the establishment of a penal settlement,
and in 1787 the first shipment of convicts set sail, arriving in 1788. Britain continued to transport convicts to
New South Wales until 1840. The Australian colonies became profitable exporters of wool and gold, mainly because
of gold rushes in the colony of Victoria, making its capital Melbourne the richest city in the world and the largest
city after London in the British Empire. During his voyage, Cook also visited New Zealand, first discovered by Dutch
explorer Abel Tasman in 1642, and claimed the North and South islands for the British crown in 1769 and 1770 respectively.
Initially, interaction between the indigenous Māori population and Europeans was limited to the trading of goods.
European settlement increased through the early decades of the 19th century, with numerous trading stations established,
especially in the North. In 1839, the New Zealand Company announced plans to buy large tracts of land and establish
colonies in New Zealand. On 6 February 1840, Captain William Hobson and around 40 Māori chiefs signed the Treaty
of Waitangi. This treaty is considered by many to be New Zealand's founding document, but differing interpretations
of the Māori and English versions of the text have meant that it continues to be a source of dispute. Britain invested
large amounts of capital and resources to win the Napoleonic Wars. French ports were
blockaded by the Royal Navy, which won a decisive victory over a Franco-Spanish fleet at Trafalgar in 1805. Overseas
colonies were attacked and occupied, including those of the Netherlands, which was annexed by Napoleon in 1810. France
was finally defeated by a coalition of European armies in 1815. Britain was again the beneficiary of peace treaties:
France ceded the Ionian Islands, Malta (which it had occupied in 1797 and 1798 respectively), Mauritius, St Lucia,
and Tobago; Spain ceded Trinidad; the Netherlands Guyana, and the Cape Colony. Britain returned Guadeloupe, Martinique,
French Guiana, and Réunion to France, and Java and Suriname to the Netherlands, while gaining control of Ceylon (1795–1815).
With support from the British abolitionist movement, Parliament enacted the Slave Trade Act in 1807, which abolished
the slave trade in the empire. In 1808, Sierra Leone was designated an official British colony for freed slaves.
The Slavery Abolition Act passed in 1833 abolished slavery in the British Empire on 1 August 1834 (with the exception
of St. Helena, Ceylon and the territories administered by the East India Company, though these exclusions were later
repealed). Under the Act, slaves were granted full emancipation after a period of 4 to 6 years of "apprenticeship".
Between 1815 and 1914, a period referred to as Britain's "imperial century" by some historians, around 10,000,000
square miles (26,000,000 km2) of territory and roughly 400 million people were added to the British Empire. Victory
over Napoleon left Britain without any serious international rival, other than Russia in central Asia. Unchallenged
at sea, Britain adopted the role of global policeman, a state of affairs later known as the Pax Britannica, and a
foreign policy of "splendid isolation". Alongside the formal control it exerted over its own colonies, Britain's
dominant position in world trade meant that it effectively controlled the economies of many countries, such as China,
Argentina and Siam, which has been characterised by some historians as "Informal Empire". From its base in India,
the Company had also been engaged in an increasingly profitable opium export trade to China since the 1730s. This
trade, illegal since the Qing dynasty outlawed it in 1729, helped reverse the trade imbalances resulting from
the British imports of tea, which saw large outflows of silver from Britain to China. In 1839, the confiscation by
the Chinese authorities at Canton of 20,000 chests of opium led Britain to attack China in the First Opium War, and
resulted in the seizure by Britain of Hong Kong Island, at that time a minor settlement. During the late 18th and
early 19th centuries the British Crown began to assume an increasingly large role in the affairs of the Company.
A series of Acts of Parliament were passed, including the Regulating Act of 1773, Pitt's India Act of 1784 and the
Charter Act of 1813 which regulated the Company's affairs and established the sovereignty of the Crown over the territories
that it had acquired. The Company's eventual end was precipitated by the Indian Rebellion of 1857, a conflict that had
begun with the mutiny of sepoys, Indian troops under British officers and discipline. The rebellion took six months to
suppress, with heavy loss of life on both sides. The following year the British government dissolved the Company
and assumed direct control over India through the Government of India Act 1858, establishing the British Raj, in which
an appointed governor-general administered India; in 1876 Queen Victoria was proclaimed Empress of India. India became
the empire's most valuable possession, "the Jewel in the Crown", and was the most important source of Britain's strength.
During the 19th century, Britain and the Russian Empire vied to fill the power vacuums that had been left by the
declining Ottoman Empire, Qajar dynasty and Qing Dynasty. This rivalry in Eurasia came to be known as the "Great
Game". As far as Britain was concerned, defeats inflicted by Russia on Persia and Turkey demonstrated its imperial
ambitions and capabilities and stoked fears in Britain of an overland invasion of India. In 1839, Britain moved to
pre-empt this by invading Afghanistan, but the First Anglo-Afghan War was a disaster for Britain. When Russia invaded
the Turkish Balkans in 1853, fears of Russian dominance in the Mediterranean and Middle East led Britain and France
to invade the Crimean Peninsula to destroy Russian naval capabilities. The ensuing Crimean War (1854–56), which involved
new techniques of modern warfare and was the only global war fought between Britain and another imperial power during
the Pax Britannica, was a resounding defeat for Russia. The situation remained unresolved in Central Asia for two
more decades, with Britain annexing Baluchistan in 1876 and Russia annexing Kirghizia, Kazakhstan, and Turkmenistan.
For a while it appeared that another war would be inevitable, but the two countries reached an agreement on their
respective spheres of influence in the region in 1878 and on all outstanding matters in 1907 with the signing of
the Anglo-Russian Entente. The destruction of the Russian Navy by the Japanese at the Battle of Port Arthur during
the Russo-Japanese War of 1904–05 also limited its threat to the British. The Dutch East India Company had founded
the Cape Colony on the southern tip of Africa in 1652 as a way station for its ships travelling to and from its colonies
in the East Indies. Britain formally acquired the colony, and its large Afrikaner (or Boer) population in 1806, having
occupied it in 1795 to prevent its falling into French hands, following the invasion of the Netherlands by France.
British immigration began to rise after 1820, and pushed thousands of Boers, resentful of British rule, northwards
to found their own—mostly short-lived—independent republics, during the Great Trek of the late 1830s and early 1840s.
In the process the Voortrekkers clashed repeatedly with the British, who had their own agenda with regard to colonial
expansion in South Africa and with several African polities, including those of the Sotho and the Zulu nations. Eventually
the Boers established two republics which had a longer lifespan: the South African Republic or Transvaal Republic
(1852–77; 1881–1902) and the Orange Free State (1854–1902). In 1902 Britain annexed both republics, concluding a
treaty with them following the Second Boer War (1899–1902). In 1869 the Suez Canal, built under French auspices during
the reign of Napoleon III, opened, linking the Mediterranean with the Indian Ocean. Initially the British opposed the
Canal; but once opened, its strategic value was quickly recognised and the Canal became the "jugular vein of the Empire". In 1875, the
Conservative government of Benjamin Disraeli bought the indebted Egyptian ruler Isma'il Pasha's 44 percent shareholding
in the Suez Canal for £4 million (£340 million in 2013). Although this did not grant outright control of the strategic
waterway, it did give Britain leverage. Joint Anglo-French financial control over Egypt ended in outright British
occupation in 1882. The French were still majority shareholders and attempted to weaken the British position, but
a compromise was reached with the 1888 Convention of Constantinople, which made the Canal officially neutral territory.
With French, Belgian and Portuguese activity in the lower Congo River region undermining the orderly colonisation of tropical
Africa, the Berlin Conference of 1884–85 was held to regulate the competition between the European powers in what
was called the "Scramble for Africa" by defining "effective occupation" as the criterion for international recognition
of territorial claims. The scramble continued into the 1890s, and caused Britain to reconsider its decision in 1885
to withdraw from Sudan. A joint force of British and Egyptian troops defeated the Mahdist Army in 1896, and rebuffed
a French attempted invasion at Fashoda in 1898. Sudan was nominally made an Anglo-Egyptian Condominium, but a British
colony in reality. The path to independence for the white colonies of the British Empire began with the 1839 Durham
Report, which proposed unification and self-government for Upper and Lower Canada, as a solution to political unrest
there. This began with the passing of the Act of Union in 1840, which created the Province of Canada. Responsible
government was first granted to Nova Scotia in 1848, and was soon extended to the other British North American colonies.
With the passage of the British North America Act, 1867 by the British Parliament, Upper and Lower Canada, New Brunswick
and Nova Scotia were formed into the Dominion of Canada, a confederation enjoying full self-government with the exception
of international relations. Australia and New Zealand achieved similar levels of self-government after 1900, with
the Australian colonies federating in 1901. The term "dominion status" was officially introduced at the Colonial
Conference of 1907. The last decades of the 19th century saw concerted political campaigns for Irish home rule. Ireland
had been united with Britain into the United Kingdom of Great Britain and Ireland with the Act of Union 1800 after
the Irish Rebellion of 1798, and had suffered a severe famine between 1845 and 1852. Home rule was supported by the
British Prime minister, William Gladstone, who hoped that Ireland might follow in Canada's footsteps as a Dominion
within the empire, but his 1886 Home Rule bill was defeated in Parliament. Although the bill, if passed, would have
granted Ireland less autonomy within the UK than the Canadian provinces had within their own federation, many MPs
feared that a partially independent Ireland might pose a security threat to Great Britain or mark the beginning of
the break-up of the empire. A second Home Rule bill was also defeated for similar reasons. A third bill was passed
by Parliament in 1914 but was not implemented because of the outbreak of the First World War; the delay contributed to
the 1916 Easter Rising. By the turn of the 20th century, fears had begun to grow in Britain that it would no longer be able to defend
the metropole and the entirety of the empire while at the same time maintaining the policy of "splendid isolation".
Germany was rapidly rising as a military and industrial power and was now seen as the most likely opponent in any
future war. Recognising that it was overstretched in the Pacific and threatened at home by the Imperial German Navy,
Britain formed an alliance with Japan in 1902 and with its old enemies France and Russia in 1904 and 1907, respectively.
Britain's fears of war with Germany were realised in 1914 with the outbreak of the First World War. Britain quickly
invaded and occupied most of Germany's overseas colonies in Africa. In the Pacific, Australia and New Zealand occupied
German New Guinea and Samoa respectively. Plans for a post-war division of the Ottoman Empire, which had joined the
war on Germany's side, were secretly drawn up by Britain and France under the 1916 Sykes–Picot Agreement. This agreement
was not divulged to the Sharif of Mecca, whom the British had been encouraging to launch an Arab revolt against their
Ottoman rulers, giving the impression that Britain was supporting the creation of an independent Arab state. The
British declaration of war on Germany and its allies also committed the colonies and Dominions, which provided invaluable
military, financial and material support. Over 2.5 million men served in the armies of the Dominions, as well as
many thousands of volunteers from the Crown colonies. The contributions of Australian and New Zealand troops during
the 1915 Gallipoli Campaign against the Ottoman Empire had a great impact on the national consciousness at home,
and marked a watershed in the transition of Australia and New Zealand from colonies to nations in their own right.
The countries continue to commemorate this occasion on Anzac Day. Canadians viewed the Battle of Vimy Ridge in a
similar light. The important contribution of the Dominions to the war effort was recognised in 1917 by the British
Prime Minister David Lloyd George when he invited each of the Dominion Prime Ministers to join an Imperial War Cabinet
to co-ordinate imperial policy. Under the terms of the concluding Treaty of Versailles signed in 1919, the empire
reached its greatest extent with the addition of 1,800,000 square miles (4,700,000 km2) and 13 million new subjects.
The colonies of Germany and the Ottoman Empire were distributed to the Allied powers as League of Nations mandates.
Britain gained control of Palestine, Transjordan, Iraq, parts of Cameroon and Togo, and Tanganyika. The Dominions
themselves also acquired mandates of their own: the Union of South Africa gained South-West Africa (modern-day Namibia),
Australia gained German New Guinea, and New Zealand Western Samoa. Nauru was made a combined mandate of Britain and
the two Pacific Dominions. The changing world order that the war had brought about, in particular the growth of the
United States and Japan as naval powers, and the rise of independence movements in India and Ireland, caused a major
reassessment of British imperial policy. Forced to choose between alignment with the United States and alignment with
Japan, Britain opted not to renew its Japanese alliance and instead signed the 1922 Washington Naval Treaty, under which
Britain accepted naval parity with the United States. This decision was the source of much debate in Britain during the
1930s as militaristic governments, helped in part by the Great Depression, took hold in Japan and Germany, for it was feared that the empire
could not survive a simultaneous attack by both nations. Although the issue of the empire's security was a serious
concern in Britain, at the same time the empire was vital to the British economy. In 1919, the frustrations caused
by delays to Irish home rule led members of Sinn Féin, a pro-independence party that had won a majority of the Irish
seats at Westminster in the 1918 British general election, to establish an Irish assembly in Dublin, at which Irish
independence was declared. The Irish Republican Army simultaneously began a guerrilla war against the British administration.
The Anglo-Irish War ended in 1921 with a stalemate and the signing of the Anglo-Irish Treaty, creating the Irish
Free State, a Dominion within the British Empire, with effective internal independence but still constitutionally
linked with the British Crown. Northern Ireland, consisting of six of the 32 Irish counties and established as a devolved region under the 1920 Government of Ireland Act, immediately exercised its option under the treaty to retain its existing status within the United Kingdom. A similar struggle began in India when the Government of
India Act 1919 failed to satisfy demand for independence. Concerns over communist and foreign plots following the
Ghadar Conspiracy ensured that war-time strictures were renewed by the Rowlatt Acts. This led to tension, particularly
in the Punjab region, where repressive measures culminated in the Amritsar Massacre. In Britain public opinion was
divided over the morality of the event, between those who saw it as having saved India from anarchy, and those who
viewed it with revulsion. The subsequent Non-Co-Operation movement was called off in March 1922 following the Chauri
Chaura incident, and discontent continued to simmer for the next 25 years. In 1922, Egypt, which had been declared
a British protectorate at the outbreak of the First World War, was granted formal independence, though it continued
to be a British client state until 1954. British troops remained stationed in Egypt until the signing of the Anglo-Egyptian
Treaty in 1936, under which it was agreed that the troops would withdraw but continue to occupy and defend the Suez
Canal zone. In return, Egypt was assisted to join the League of Nations. Iraq, a British mandate since 1920, also
gained membership of the League in its own right after achieving independence from Britain in 1932. In Palestine,
Britain was presented with the problem of mediating between the Arab and Jewish communities. The 1917 Balfour Declaration,
which had been incorporated into the terms of the mandate, stated that a national home for the Jewish people would
be established in Palestine, and Jewish immigration allowed up to a limit that would be determined by the mandatory
power. This led to increasing conflict with the Arab population, who openly revolted in 1936. As the threat of war
with Germany increased during the 1930s, Britain judged the support of the Arab population in the Middle East as
more important than the establishment of a Jewish homeland, and shifted to a pro-Arab stance, limiting Jewish immigration
and in turn triggering a Jewish insurgency. The ability of the Dominions to set their own foreign policy, independent
of Britain, was recognised at the 1923 Imperial Conference. Britain's request for military assistance from the Dominions
at the outbreak of the Chanak Crisis the previous year had been turned down by Canada and South Africa, and Canada
had refused to be bound by the 1923 Treaty of Lausanne. After pressure from Ireland and South Africa, the 1926 Imperial
Conference issued the Balfour Declaration, declaring the Dominions to be "autonomous Communities within the British
Empire, equal in status, in no way subordinate one to another" within a "British Commonwealth of Nations". This declaration
was given legal substance under the 1931 Statute of Westminster. The parliaments of Canada, Australia, New Zealand,
the Union of South Africa, the Irish Free State and Newfoundland were now independent of British legislative control: they could nullify British laws, and Britain could no longer pass laws for them without their consent. Newfoundland
reverted to colonial status in 1933, suffering from financial difficulties during the Great Depression. Ireland distanced
itself further from Britain with the introduction of a new constitution in 1937, making it a republic in all but
name. After the German occupation of France in 1940, Britain and the empire stood alone against Germany, until the
entry of the Soviet Union to the war in 1941. British Prime Minister Winston Churchill successfully lobbied President
Franklin D. Roosevelt for military aid from the United States, but Roosevelt was not yet ready to ask Congress to
commit the country to war. In August 1941, Churchill and Roosevelt met and signed the Atlantic Charter, which included
the statement that "the rights of all peoples to choose the form of government under which they live" should be respected.
This wording was ambiguous as to whether it referred to European countries invaded by Germany, or the peoples colonised
by European nations, and would later be interpreted differently by the British, Americans, and nationalist movements.
In December 1941, Japan launched, in quick succession, attacks on British Malaya, the United States naval base at
Pearl Harbor, and Hong Kong. Churchill's reaction to the entry of the United States into the war was that Britain
was now assured of victory and the future of the empire was safe, but the manner in which British forces were rapidly
defeated in the Far East irreversibly harmed Britain's standing and prestige as an imperial power. Most damaging
of all was the fall of Singapore, which had previously been hailed as an impregnable fortress and the eastern equivalent
of Gibraltar. The realisation that Britain could not defend its entire empire pushed Australia and New Zealand, which
now appeared threatened by Japanese forces, into closer ties with the United States. This resulted in the 1951 ANZUS
Pact between Australia, New Zealand and the United States of America. Though Britain and the empire emerged victorious
from the Second World War, the effects of the conflict were profound, both at home and abroad. Much of Europe, a
continent that had dominated the world for several centuries, was in ruins, and host to the armies of the United
States and the Soviet Union, who now held the balance of global power. Britain was left essentially bankrupt, with
insolvency only averted in 1946 after the negotiation of a US$4.33 billion loan (US$56 billion in 2012) from the
United States, the last instalment of which was repaid in 2006. At the same time, anti-colonial movements were on
the rise in the colonies of European nations. The situation was complicated further by the increasing Cold War rivalry
of the United States and the Soviet Union. In principle, both nations were opposed to European colonialism. In practice,
however, American anti-communism prevailed over anti-imperialism, and therefore the United States supported the continued
existence of the British Empire to keep Communist expansion in check. The "wind of change" ultimately meant that
the British Empire's days were numbered, and on the whole, Britain adopted a policy of peaceful disengagement from
its colonies once stable, non-Communist governments were available to transfer power to. This was in contrast to
other European powers such as France and Portugal, which waged costly and ultimately unsuccessful wars to keep their
empires intact. Between 1945 and 1965, the number of people under British rule outside the UK itself fell from 700
million to five million, three million of whom were in Hong Kong. The pro-decolonisation Labour government, elected
at the 1945 general election and led by Clement Attlee, moved quickly to tackle the most pressing issue facing the
empire: that of Indian independence. India's two major political parties—the Indian National Congress and the Muslim
League—had been campaigning for independence for decades, but disagreed as to how it should be implemented. Congress
favoured a unified secular Indian state, whereas the League, fearing domination by the Hindu majority, desired a
separate Islamic state for Muslim-majority regions. Increasing civil unrest and the mutiny of the Royal Indian Navy
during 1946 led Attlee to promise independence no later than 1948. When the urgency of the situation and risk of
civil war became apparent, the newly appointed (and last) Viceroy, Lord Mountbatten, hastily brought forward the
date to 15 August 1947. The borders drawn by the British to broadly partition India into Hindu and Muslim areas left
tens of millions as minorities in the newly independent states of India and Pakistan. Millions of Muslims subsequently
crossed from India to Pakistan and Hindus vice versa, and violence between the two communities cost hundreds of thousands
of lives. Burma, which had been administered as part of the British Raj, and Sri Lanka gained their independence
the following year in 1948. India, Pakistan and Sri Lanka became members of the Commonwealth, while Burma chose not
to join. The British Mandate of Palestine, where an Arab majority lived alongside a Jewish minority, presented the
British with a similar problem to that of India. The matter was complicated by large numbers of Jewish refugees seeking
to be admitted to Palestine following the Holocaust, while Arabs were opposed to the creation of a Jewish state.
Frustrated by the intractability of the problem, attacks by Jewish paramilitary organisations and the increasing
cost of maintaining its military presence, Britain announced in 1947 that it would withdraw in 1948 and leave the
matter to the United Nations to solve. The UN General Assembly subsequently voted for a plan to partition Palestine
into a Jewish and an Arab state. Following the defeat of Japan in the Second World War, anti-Japanese resistance
movements in Malaya turned their attention towards the British, who had moved to quickly retake control of the colony,
valuing it as a source of rubber and tin. The fact that the guerrillas were primarily Malayan-Chinese Communists
meant that the British attempt to quell the uprising was supported by the Muslim Malay majority, on the understanding
that once the insurgency had been quelled, independence would be granted. The Malayan Emergency, as it was called,
began in 1948 and lasted until 1960, but by 1957, Britain felt confident enough to grant independence to the Federation
of Malaya within the Commonwealth. In 1963, the 11 states of the federation together with Singapore, Sarawak and
North Borneo joined to form Malaysia, but in 1965 Chinese-majority Singapore was expelled from the union following
tensions between the Malay and Chinese populations. Brunei, which had been a British protectorate since 1888, declined
to join the union and maintained its status until independence in 1984. In 1951, the Conservative Party returned
to power in Britain, under the leadership of Winston Churchill. Churchill and the Conservatives believed that Britain's
position as a world power relied on the continued existence of the empire, with the base at the Suez Canal allowing
Britain to maintain its pre-eminent position in the Middle East in spite of the loss of India. However, Churchill
could not ignore Gamal Abdel Nasser's new revolutionary government of Egypt that had taken power in 1952, and the
following year it was agreed that British troops would withdraw from the Suez Canal zone and that Sudan would be
granted self-determination by 1955, with independence to follow. Sudan was granted independence on 1 January 1956.
In July 1956, Nasser unilaterally nationalised the Suez Canal. The response of Anthony Eden, who had succeeded Churchill
as Prime Minister, was to collude with France to engineer an Israeli attack on Egypt that would give Britain and
France an excuse to intervene militarily and retake the canal. Eden infuriated US President Dwight D. Eisenhower by his lack of consultation, and Eisenhower refused to back the invasion. Another of Eisenhower's concerns was the
possibility of a wider war with the Soviet Union after it threatened to intervene on the Egyptian side. Eisenhower
applied financial leverage by threatening to sell US reserves of the British pound and thereby precipitate a collapse
of the British currency. Though the invasion force was militarily successful in its objectives, UN intervention and
US pressure forced Britain into a humiliating withdrawal of its forces, and Eden resigned. The Suez Crisis very publicly
exposed Britain's limitations to the world and confirmed Britain's decline on the world stage, demonstrating that
henceforth it could no longer act without at least the acquiescence, if not the full support, of the United States.
The events at Suez wounded British national pride, leading one MP to describe it as "Britain's Waterloo" and another
to suggest that the country had become an "American satellite". Margaret Thatcher later described the mindset she
believed had befallen the British political establishment as "Suez syndrome", from which Britain did not recover
until the successful recapture of the Falkland Islands from Argentina in 1982. While the Suez Crisis caused British
power in the Middle East to weaken, it did not collapse. Britain again deployed its armed forces to the region, intervening
in Oman (1957), Jordan (1958) and Kuwait (1961), though on these occasions with American approval, as the new Prime
Minister Harold Macmillan's foreign policy was to remain firmly aligned with the United States. Britain maintained
a military presence in the Middle East for another decade. In January 1968, a few weeks after the devaluation of
the pound, Prime Minister Harold Wilson and his Defence Secretary Denis Healey announced that British troops would
be withdrawn from major military bases East of Suez, which included the ones in the Middle East, and primarily from
Malaysia and Singapore. The British withdrew from Aden in 1967, Bahrain in 1971, and the Maldives in 1976. Britain's
remaining colonies in Africa, except for self-governing Southern Rhodesia, were all granted independence by 1968.
British withdrawal from the southern and eastern parts of Africa was not a peaceful process. Kenyan independence
was preceded by the eight-year Mau Mau Uprising. In Rhodesia, the 1965 Unilateral Declaration of Independence by
the white minority resulted in a civil war that lasted until the Lancaster House Agreement of 1979, which set the
terms for recognised independence in 1980, as the new nation of Zimbabwe. Most of the UK's Caribbean territories achieved independence after the departure of Jamaica and Trinidad from the West Indies Federation in 1961 and 1962. The federation, established in 1958 in an attempt to unite the British Caribbean colonies under one government, collapsed following the loss of its two largest members. Barbados achieved independence in 1966 and the remainder of the eastern
Caribbean islands in the 1970s and 1980s, but Anguilla and the Turks and Caicos Islands opted to revert to British
rule after they had already started on the path to independence. The British Virgin Islands, Cayman Islands and Montserrat
opted to retain ties with Britain, while Guyana achieved independence in 1966. Britain's last colony on the American
mainland, British Honduras, became a self-governing colony in 1964 and was renamed Belize in 1973, achieving full
independence in 1981. A dispute with Guatemala over claims to Belize was left unresolved. The New Hebrides achieved independence (as Vanuatu) in 1980. The passage of the British Nationality Act 1981, which reclassified the remaining Crown colonies as "British Dependent Territories" (renamed British Overseas Territories in 2002), meant that, aside from a scattering of islands and outposts, the process of decolonisation that had begun after the Second
World War was largely complete. In 1982, Britain's resolve in defending its remaining overseas territories was tested
when Argentina invaded the Falkland Islands, acting on a long-standing claim that dated back to the Spanish Empire.
Britain's ultimately successful military response to retake the islands during the ensuing Falklands War was viewed
by many to have contributed to reversing the downward trend in Britain's status as a world power. The same year,
the Canadian government severed its last legal link with Britain by patriating the Canadian constitution from Britain.
The 1982 Canada Act passed by the British parliament ended the need for British involvement in changes to the Canadian
constitution. Similarly, the Constitution Act 1986 reformed the constitution of New Zealand to sever its constitutional
link with Britain, and the Australia Act 1986 severed the constitutional link between Britain and the Australian
states. In 1984, Brunei, Britain's last remaining Asian protectorate, gained its independence. In September 1982,
Prime Minister Margaret Thatcher travelled to Beijing to negotiate with the Chinese government on the future of Britain's
last major and most populous overseas territory, Hong Kong. Under the terms of the 1842 Treaty of Nanking, Hong Kong
Island itself had been ceded to Britain in perpetuity, but the vast majority of the colony was constituted by the
New Territories, which had been acquired under a 99-year lease in 1898, due to expire in 1997. Thatcher, seeing parallels
with the Falkland Islands, initially wished to hold Hong Kong and proposed British administration with Chinese sovereignty,
though this was rejected by China. A deal was reached in 1984—under the terms of the Sino-British Joint Declaration,
Hong Kong would become a special administrative region of the People's Republic of China, maintaining its way of
life for at least 50 years. The handover ceremony in 1997 marked for many, including Charles, Prince of Wales, who
was in attendance, "the end of Empire". Britain retains sovereignty over 14 territories outside the British Isles,
which were renamed the British Overseas Territories in 2002. Some are uninhabited except for transient military or
scientific personnel; the remainder are self-governing to varying degrees and are reliant on the UK for foreign relations
and defence. The British government has stated its willingness to assist any Overseas Territory that wishes to proceed
to independence, where that is an option. British sovereignty of several of the overseas territories is disputed
by their geographical neighbours: Gibraltar is claimed by Spain, the Falkland Islands and South Georgia and the South
Sandwich Islands are claimed by Argentina, and the British Indian Ocean Territory is claimed by Mauritius and Seychelles.
The British Antarctic Territory is subject to overlapping claims by Argentina and Chile, while many countries do
not recognise any territorial claims in Antarctica. Most former British colonies and protectorates are among the
53 member states of the Commonwealth of Nations, a non-political, voluntary association of equal members, comprising
a population of around 2.2 billion people. Sixteen Commonwealth realms voluntarily continue to share the British
monarch, Queen Elizabeth II, as their head of state. These sixteen nations are distinct and equal legal entities
– the United Kingdom, Australia, Canada, New Zealand, Papua New Guinea, Antigua and Barbuda, The Bahamas, Barbados,
Belize, Grenada, Jamaica, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Solomon Islands and
Tuvalu. Political boundaries drawn by the British did not always reflect homogeneous ethnicities or religions, contributing
to conflicts in formerly colonised areas. The British Empire was also responsible for large migrations of peoples.
Millions left the British Isles, with the founding settler populations of the United States, Canada, Australia and
New Zealand coming mainly from Britain and Ireland. Tensions remain between the white settler populations of these
countries and their indigenous minorities, and between white settler minorities and indigenous majorities in South
Africa and Zimbabwe. Settlers in Ireland from Great Britain have left their mark in the form of divided nationalist
and unionist communities in Northern Ireland. Millions of people moved to and from British colonies, with large numbers
of Indians emigrating to other parts of the empire, such as Malaysia and Fiji, and Chinese people to Malaysia, Singapore
and the Caribbean. The demographics of Britain itself were changed after the Second World War owing to immigration
to Britain from its former colonies.
Botany, also called plant science(s) or plant biology, is the science of plant life and a branch of biology. A botanist or
plant scientist is a scientist who specializes in this field. The term "botany" comes from the Ancient Greek word
βοτάνη (botanē) meaning "pasture", "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed"
or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists
respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International
Botanical Congress. Nowadays, botanists study approximately 400,000 species of living organisms of which some 260,000
species are vascular plants and about 248,000 are flowering plants. Botany originated in prehistory as herbalism
with the efforts of early humans to identify – and later cultivate – edible, medicinal and poisonous plants, making
it one of the oldest branches of science. Medieval physic gardens, often attached to monasteries, contained plants
of medical importance. They were forerunners of the first botanical gardens attached to universities, founded from
the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study
of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in
1753 to the binomial system of Carl Linnaeus that remains in use to this day. Modern botany is a broad, multidisciplinary
subject with inputs from most other areas of science and technology. Research topics include the study of plant structure,
growth and differentiation, reproduction, biochemistry and primary metabolism, chemical products, development, diseases,
evolutionary relationships, systematics, and plant taxonomy. Dominant themes in 21st century plant science are molecular
genetics and epigenetics, which are the mechanisms and control of gene expression during differentiation of plant
cells and tissues. Botanical research has diverse applications in providing staple foods, materials such as timber,
oil, rubber, fibre and drugs, in modern horticulture, agriculture and forestry, plant propagation, breeding and genetic
modification, in the synthesis of chemicals and raw materials for construction and energy production, in environmental
management, and the maintenance of biodiversity. Another work from Ancient Greece that made an early impact on botany
is De Materia Medica, a five-volume encyclopedia about herbal medicine written in the middle of the first century
by Greek physician and pharmacologist Pedanius Dioscorides. De Materia Medica was widely read for more than 1,500
years. Important contributions from the medieval Muslim world include Ibn Wahshiyya's Nabatean Agriculture, Abū Ḥanīfa Dīnawarī's (828–896) Book of Plants, and Ibn Bassal's The Classification of Soils. In the early 13th century,
Abu al-Abbas al-Nabati, and Ibn al-Baitar (d. 1248) wrote on botany in a systematic and scientific manner. In the
mid-16th century, "botanical gardens" were founded in a number of Italian universities – the Padua botanical garden, founded in 1545, is usually considered to be the first that is still in its original location. These gardens continued the
practical value of earlier "physic gardens", often associated with monasteries, in which plants were cultivated for
medical use. They supported the growth of botany as an academic subject. Lectures were given about the plants grown
in the gardens and their medical uses demonstrated. Botanical gardens came much later to northern Europe; the first
in England was the University of Oxford Botanic Garden in 1621. Throughout this period, botany remained firmly subordinate
to medicine. Physician Valerius Cordus (1515–1544) authored a botanically and pharmacologically important herbal
Historia Plantarum in 1544 and a pharmacopoeia of lasting importance, the Dispensatorium in 1546. Naturalist Conrad
von Gesner (1516–1565) and herbalist John Gerard (1545–c. 1611) published herbals covering the medicinal uses of
plants. Naturalist Ulisse Aldrovandi (1522–1605) was considered the father of natural history, which included the
study of plants. In 1665, using an early microscope, polymath Robert Hooke discovered cells, a term he coined, in
cork, and a short time later in living plant tissue. During the 18th century, systems of plant identification were
developed comparable to dichotomous keys, where unidentified plants are placed into taxonomic groups (e.g. family,
genus and species) by making a series of choices between pairs of characters. The choice and sequence of the characters
may be artificial in keys designed purely for identification (diagnostic keys) or more closely related to the natural
or phyletic order of the taxa in synoptic keys. By the 18th century, new plants for study were arriving in Europe
in increasing numbers from newly discovered countries and the European colonies worldwide. In 1753 Carl von Linné
(Carl Linnaeus) published his Species Plantarum, a hierarchical classification of plant species that remains the
reference point for modern botanical nomenclature. This established a standardised binomial or two-part naming scheme
where the first name represented the genus and the second identified the species within the genus. For the purposes
of identification, Linnaeus's Systema Sexuale classified plants into 24 groups according to the number of their male
sexual organs. The 24th group, Cryptogamia, included all plants with concealed reproductive parts, mosses, liverworts,
ferns, algae and fungi. Increasing knowledge of plant anatomy, morphology and life cycles led to the realisation
that there were more natural affinities between plants than the artificial sexual system of Linnaeus had indicated.
Adanson (1763), de Jussieu (1789), and Candolle (1819) all proposed various alternative natural systems of classification
that grouped plants using a wider range of shared characters and were widely followed. The Candollean system reflected
his ideas of the progression of morphological complexity, and the later classification by Bentham and Hooker, which was influential into the late 19th century, was influenced by Candolle's approach. Darwin's publication of the Origin
of Species in 1859 and his concept of common descent required modifications to the Candollean system to reflect evolutionary
relationships as distinct from mere morphological similarity. Botany was greatly stimulated by the appearance of
the first "modern" text book, Matthias Schleiden's Grundzüge der Wissenschaftlichen Botanik, published in English
in 1849 as Principles of Scientific Botany. Schleiden was a microscopist and an early plant anatomist who co-founded
the cell theory with Theodor Schwann and Rudolf Virchow and was among the first to grasp the significance of the
cell nucleus that had been described by Robert Brown in 1831. In 1855, Adolf Fick formulated Fick's laws that enabled
the calculation of the rates of molecular diffusion in biological systems. The discipline of plant ecology was pioneered
in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities,
and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today.
The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of
ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited
with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced
the concept of ecosystems to biology. Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov
(1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants.
Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological
processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates
of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through
stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and
the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere.
Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated
rational experimental design and data analysis in botanical research. The discovery and identification of the auxin
plant hormones by Kenneth V. Thimann in 1948 enabled regulation of plant growth by externally applied chemicals.
Frederick Campion Steward pioneered techniques of micropropagation and plant tissue culture controlled by plant hormones.
The synthetic auxin 2,4-Dichlorophenoxyacetic acid or 2,4-D was one of the first commercial synthetic herbicides.
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis,
such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological
approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome
and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed
experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent
and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or
genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being
expressed. These technologies enable the biotechnological use of whole plants or plant cell cultures grown in bioreactors
to synthesise pesticides, antibiotics or other pharmaceuticals, as well as the practical application of genetically
modified crops designed for traits such as improved yield. Modern morphology recognizes a continuum between the major
morphological categories of root, stem (caulome), leaf (phyllome) and trichome. Furthermore, it emphasizes structural
dynamics. Modern systematics aims to reflect and discover phylogenetic relationships between plants. Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA
sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny
of flowering plants, answering many of the questions about relationships among angiosperm families and species. The
theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA
barcoding is the subject of active current research. The study of plants is vital because they underpin almost all animal life on Earth by generating a large proportion of the oxygen and food that, through aerobic respiration, provide humans and other organisms with the chemical energy they need to exist. Plants, algae and cyanobacteria are the major
groups of organisms that carry out photosynthesis, a process that uses the energy of sunlight to convert water and
carbon dioxide into sugars that can be used both as a source of chemical energy and of organic molecules that are
used in the structural components of cells. As a by-product of photosynthesis, plants release oxygen into the atmosphere,
a gas that is required by nearly all living things to carry out cellular respiration. In addition, they are influential
in the global carbon and water cycles and plant roots bind and stabilise soils, preventing soil erosion. Plants are
crucial to the future of human society as they provide food, oxygen, medicine, and products for people, as well as
creating and preserving soil. The strictest definition of "plant" includes only the "land plants" or embryophytes,
which include seed plants (gymnosperms, including the pines, and flowering plants) and the free-sporing cryptogams
including ferns, clubmosses, liverworts, hornworts and mosses. Embryophytes are multicellular eukaryotes descended
from an ancestor that obtained its energy from sunlight by photosynthesis. They have life cycles with alternating
haploid and diploid phases. The sexual haploid phase of embryophytes, known as the gametophyte, nurtures the developing
diploid embryo sporophyte within its tissues for at least part of its life, even in the seed plants, where the gametophyte
itself is nurtured by its parent sporophyte. Other groups of organisms that were previously studied by botanists
include bacteria (now studied in bacteriology), fungi (mycology) – including lichen-forming fungi (lichenology) –
non-chlorophyte algae (phycology), and viruses (virology). However, attention is still given to these groups by botanists,
and fungi (including lichens) and photosynthetic protists are usually covered in introductory botany courses. Paleobotanists
study ancient plants in the fossil record to provide information about the evolutionary history of plants. Cyanobacteria,
the first oxygen-releasing photosynthetic organisms on Earth, are thought to have given rise to the ancestor of plants
by entering into an endosymbiotic relationship with an early eukaryote, ultimately becoming the chloroplasts in plant
cells. The new photosynthetic plants (along with their algal relatives) accelerated the rise in atmospheric oxygen
started by the cyanobacteria, changing the ancient oxygen-free, reducing, atmosphere to one in which free oxygen
has been abundant for more than 2 billion years. Virtually all staple foods come either directly from primary production
by plants, or indirectly from animals that eat them. Plants and other photosynthetic organisms are at the base of
most food chains because they use the energy from the sun and nutrients from the soil and atmosphere, converting
them into a form that can be used by animals. This is what ecologists call the first trophic level. The modern forms
of the major staple foods, such as maize, rice, wheat and other cereal grasses, pulses, bananas and plantains, as
well as flax and cotton grown for their fibres, are the outcome of prehistoric selection over thousands of years
from among wild ancestral plants with the most desirable characteristics. Botanists study how plants produce food
and how to increase yields, for example through plant breeding, making their work important to mankind's ability
to feed the world and provide food security for future generations. Botanists also study weeds, which are a considerable
problem in agriculture, and the biology and control of plant pathogens in agriculture and natural ecosystems. Ethnobotany
is the study of the relationships between plants and people. When applied to the investigation of historical plant–people
relationships ethnobotany may be referred to as archaeobotany or palaeoethnobotany. Plants and various other groups
of photosynthetic eukaryotes collectively known as "algae" have unique organelles known as chloroplasts. Chloroplasts
are thought to be descended from cyanobacteria that formed endosymbiotic relationships with ancient plant and algal
ancestors. Chloroplasts and cyanobacteria contain the blue-green pigment chlorophyll a. Chlorophyll a (as well as
its plant and green algal-specific cousin chlorophyll b) absorbs light in the blue-violet and orange/red parts
of the spectrum while reflecting and transmitting the green light that we see as the characteristic colour of these
organisms. The energy in the red and blue light that these pigments absorb is used by chloroplasts to make energy-rich
carbon compounds from carbon dioxide and water by oxygenic photosynthesis, a process that generates molecular oxygen
(O2) as a by-product. The light energy captured by chlorophyll a is initially in the form of electrons (and later
a proton gradient) that are used to make molecules of ATP and NADPH, which temporarily store and transport energy. Their
energy is used in the light-independent reactions of the Calvin cycle by the enzyme rubisco to produce molecules
of the 3-carbon sugar glyceraldehyde 3-phosphate (G3P). Glyceraldehyde 3-phosphate is the first product of photosynthesis
and the raw material from which glucose and almost all other organic molecules of biological origin are synthesized.
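The biochemistry described above can be condensed into the familiar textbook net equation for oxygenic photosynthesis (a standard summary combining the light reactions and the Calvin cycle, not stated in this form in the text):

```latex
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \xrightarrow{\text{light}} \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
```

Glucose appears on the right by convention; as noted above, the immediate product of carbon fixation by rubisco is the 3-carbon sugar G3P, from which glucose and other organic molecules are subsequently built.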
Some of the glucose is converted to starch which is stored in the chloroplast. Starch is the characteristic energy
store of most land plants and algae, while inulin, a polymer of fructose, is used for the same purpose in the sunflower
family Asteraceae. Some of the glucose is converted to sucrose (common table sugar) for export to the rest of the
plant. Plants synthesize a number of unique polymers like the polysaccharide molecules cellulose, pectin and xyloglucan
from which the land plant cell wall is constructed. Vascular land plants make lignin, a polymer used to strengthen
the secondary cell walls of xylem tracheids and vessels to keep them from collapsing when a plant sucks water through
them under water stress. Lignin is also used in other cell types like sclerenchyma fibers that provide structural
support for a plant and is a major constituent of wood. Sporopollenin is a chemically resistant polymer found in
the outer cell walls of spores and pollen of land plants responsible for the survival of early land plant spores
and the pollen of seed plants in the fossil record. It is widely regarded as a marker for the start of land plant
evolution during the Ordovician period. The concentration of carbon dioxide in the atmosphere today is much lower
than it was when plants emerged onto land during the Ordovician and Silurian periods. Many monocots like maize and
the pineapple and some dicots like the Asteraceae have since independently evolved pathways like Crassulacean acid
metabolism and the C4 carbon fixation pathway for photosynthesis which avoid the losses resulting from photorespiration
in the more common C3 carbon fixation pathway. These biochemical strategies are unique to land plants. Phytochemistry
is a branch of plant biochemistry primarily concerned with the chemical substances produced by plants during secondary
metabolism. Some of these compounds are toxins such as the alkaloid coniine from hemlock. Others, such as the essential
oils peppermint oil and lemon oil are useful for their aroma, as flavourings and spices (e.g., capsaicin), and in
medicine as pharmaceuticals, as in opium from opium poppies. Many medicinal and recreational drugs, such as tetrahydrocannabinol
(active ingredient in cannabis), caffeine, morphine and nicotine come directly from plants. Others are simple derivatives
of botanical natural products. For example, the pain killer aspirin is the acetyl ester of salicylic acid, originally
isolated from the bark of willow trees, and a wide range of opiate painkillers like heroin are obtained by chemical
modification of morphine obtained from the opium poppy. Popular stimulants come from plants, such as caffeine from
coffee, tea and chocolate, and nicotine from tobacco. Most alcoholic beverages come from fermentation of carbohydrate-rich
plant products such as barley (beer), rice (sake) and grapes (wine). Sugar, starch, cotton, linen, hemp, some types
of rope, wood and particle boards, papyrus and paper, vegetable oils, wax, and natural rubber are examples of commercially
important materials made from plant tissues or their secondary products. Charcoal, a pure form of carbon made by
pyrolysis of wood, has a long history as a metal-smelting fuel, as a filter material and adsorbent and as an artist's
material and is one of the three ingredients of gunpowder. Cellulose, the world's most abundant organic polymer,
can be converted into energy, fuels, materials and chemical feedstock. Products made from cellulose include rayon
and cellophane, wallpaper paste, biobutanol and gun cotton. Sugarcane, rapeseed and soy are some of the plants with
a highly fermentable sugar or oil content that are used as sources of biofuels, important alternatives to fossil
fuels, such as biodiesel. Plant ecology is the science of the functional relationships between plants and their habitats—the
environments where they complete their life cycles. Plant ecologists study the composition of local and regional
floras, their biodiversity, genetic diversity and fitness, the adaptation of plants to their environment, and their
competitive or mutualistic interactions with other species. The goals of plant ecology are to understand the causes
of their distribution patterns, productivity, environmental impact, evolution, and responses to environmental change.
Plants depend on certain edaphic (soil) and climatic factors in their environment but can modify these factors too.
For example, they can change their environment's albedo, increase runoff interception, stabilize mineral soils and
develop their organic content, and affect local temperature. Plants compete with other organisms in their ecosystem
for resources. They interact with their neighbours at a variety of spatial scales in groups, populations and communities
that collectively constitute vegetation. Regions with characteristic vegetation types and dominant plants as well
as similar abiotic and biotic factors, climate, and geography make up biomes like tundra or tropical rainforest.
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect
ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical
climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen
deposits in sediments from thousands or millions of years ago, allows the reconstruction of past climates. Estimates
of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes
and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B),
resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and
taxonomy is essential to understanding vegetation change, habitat destruction and species extinction. Inheritance
in plants follows the same fundamental principles of genetics as in other multicellular organisms. Gregor Mendel
discovered the genetic laws of inheritance by studying inherited traits such as shape in Pisum sativum (peas). What
Mendel learned from studying plants has had far reaching benefits outside of botany. Similarly, "jumping genes" were
discovered by Barbara McClintock while she was studying maize. Nevertheless, there are some distinctive genetic differences
between plants and other organisms. Species boundaries in plants may be weaker than in animals, and cross species
hybrids are often possible. A familiar example is peppermint, Mentha × piperita, a sterile hybrid between Mentha
aquatica and spearmint, Mentha spicata. The many cultivated varieties of wheat are the result of multiple inter-
and intra-specific crosses between wild species and their hybrids. Angiosperms with monoecious flowers often have
self-incompatibility mechanisms that operate between the pollen and stigma so that the pollen either fails to reach
the stigma or fails to germinate and produce male gametes. This is one of several methods used by plants to promote
outcrossing. In many land plants the male and female gametes are produced by separate individuals. These species
are said to be dioecious when referring to vascular plant sporophytes and dioicous when referring to bryophyte gametophytes.
Unlike in higher animals, where parthenogenesis is rare, asexual reproduction may occur in plants by several different
mechanisms. The formation of stem tubers in potato is one example. Particularly in arctic or alpine habitats, where
opportunities for fertilisation of flowers by animals are rare, plantlets or bulbs may develop instead of flowers,
replacing sexual reproduction with asexual reproduction and giving rise to clonal populations genetically identical
to the parent. This is one of several types of apomixis that occur in plants. Apomixis can also happen in a seed,
producing a seed that contains an embryo genetically identical to the parent. Most sexually reproducing organisms
are diploid, with paired chromosomes, but doubling of their chromosome number may occur due to errors in cytokinesis.
This can occur early in development to produce an autopolyploid or partly autopolyploid organism, or during normal
processes of cellular differentiation to produce some cell types that are polyploid (endopolyploidy), or during gamete
formation. An allopolyploid plant may result from a hybridisation event between two different species. Both autopolyploid
and allopolyploid plants can often reproduce normally, but may be unable to cross-breed successfully with the parent
population because there is a mismatch in chromosome numbers. These plants, reproductively isolated from the parent
species but living within the same geographical area, may be sufficiently successful to form a new species.
Some otherwise sterile plant polyploids can still reproduce vegetatively or by seed apomixis, forming clonal populations
of identical individuals. Durum wheat is a fertile tetraploid allopolyploid, while bread wheat is a fertile hexaploid.
The commercial banana is an example of a sterile, seedless triploid hybrid. Common dandelion is a triploid that produces
viable seeds by apomixis. A considerable amount of new knowledge about plant function comes from studies of
the molecular genetics of model plants such as thale cress, Arabidopsis thaliana, a weedy species in the mustard
family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by
about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was
the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of
rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics,
cellular and molecular biology of cereals, grasses and monocots generally. Model plants such as Arabidopsis thaliana
are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small
genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used
to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants. The single-celled green alga Chlamydomonas
reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants,
making it useful for study. The red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast
functions. Spinach, peas, soybeans and the moss Physcomitrella patens are commonly used to study plant cell biology.
Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing
Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu
(1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen
fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is
one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
Epigenetics is the study of mitotically and/or meiotically heritable changes in gene function that cannot be explained
by changes in the underlying DNA sequence but cause the organism's genes to behave (or "express themselves") differently.
One example of epigenetic change is the marking of the genes by DNA methylation which determines whether they will
be expressed or not. Gene expression can also be controlled by repressor proteins that attach to silencer regions
of the DNA and prevent that region of the DNA code from being expressed. Epigenetic marks may be added or removed
from the DNA during programmed stages of development of the plant, and are responsible, for example, for the differences
between anthers, petals and normal leaves, despite the fact that they all have the same underlying genetic code.
Epigenetic changes may be temporary or may remain through successive cell divisions for the remainder of the cell's
life. Some epigenetic changes have been shown to be heritable, while others are reset in the germ cells. Epigenetic
changes in eukaryotic biology serve to regulate the process of cellular differentiation. During morphogenesis, totipotent
stem cells become the various pluripotent cell lines of the embryo, which in turn become fully differentiated cells.
A single fertilized egg cell, the zygote, gives rise to the many different plant cell types including parenchyma,
xylem vessel elements, phloem sieve tubes, guard cells of the epidermis, etc. as it continues to divide. The process
results from the epigenetic activation of some genes and inhibition of others. Unlike animals, many plant cells,
particularly those of the parenchyma, do not terminally differentiate, remaining totipotent with the ability to give
rise to a new individual plant. Exceptions include highly lignified cells, the sclerenchyma and xylem, which are dead
at maturity, and the phloem sieve tubes, which lack nuclei. While plants use many of the same epigenetic mechanisms
as animals, such as chromatin remodeling, an alternative hypothesis is that plants set their gene expression patterns
using positional information from the environment and surrounding cells to determine their developmental fate. The
algae are a polyphyletic group and are placed in various divisions, some more closely related to plants than others.
There are many differences between them in features such as cell wall composition, biochemistry, pigmentation, chloroplast
structure and nutrient reserves. The algal division Charophyta, sister to the green algal division Chlorophyta, is
considered to contain the ancestor of true plants. The Charophyte class Charophyceae and the land plant sub-kingdom
Embryophyta together form the monophyletic group or clade Streptophytina. Nonvascular land plants are embryophytes
that lack the vascular tissues xylem and phloem. They include mosses, liverworts and hornworts. Pteridophytic vascular
plants with true xylem and phloem that reproduced by spores germinating into free-living gametophytes evolved during
the Silurian period and diversified into several lineages during the late Silurian and early Devonian. Representatives
of the lycopods have survived to the present day. By the end of the Devonian period, several groups, including the
lycopods, sphenophylls and progymnosperms, had independently evolved "megaspory" – their spores were of two distinct
sizes, larger megaspores and smaller microspores. Their reduced gametophytes developed from megaspores retained within
the spore-producing organs (megasporangia) of the sporophyte, a condition known as endospory. Seeds consist of an
endosporic megasporangium surrounded by one or two sheathing layers (integuments). The young sporophyte develops
within the seed, which on germination splits to release it. The earliest known seed plants date from the latest Devonian
Famennian stage. Following the evolution of the seed habit, seed plants diversified, giving rise to a number of now-extinct
groups, including seed ferns, as well as the modern gymnosperms and angiosperms. Gymnosperms produce "naked seeds"
not fully enclosed in an ovary; modern representatives include conifers, cycads, Ginkgo, and Gnetales. Angiosperms
produce seeds enclosed in a structure such as a carpel or an ovary. Ongoing research on the molecular phylogenetics
of living plants appears to show that the angiosperms are a sister clade to the gymnosperms. Plant physiology encompasses
all the internal chemical and physical activities of plants associated with life. Chemicals obtained from the air,
soil and water form the basis of all plant metabolism. The energy of sunlight, captured by oxygenic photosynthesis
and released by cellular respiration, is the basis of almost all life. Photoautotrophs, including all green plants,
algae and cyanobacteria gather energy directly from sunlight by photosynthesis. Heterotrophs including all animals,
all fungi, all completely parasitic plants, and non-photosynthetic bacteria take in organic molecules produced by
photoautotrophs and respire them or use them in the construction of cells and tissues. Respiration is the oxidation
of carbon compounds by breaking them down into simpler structures to release the energy they contain, essentially
the opposite of photosynthesis. Molecules are moved within plants by transport processes that operate at a variety
of spatial scales. Subcellular transport of ions, electrons and molecules such as water and enzymes occurs across
cell membranes. Minerals and water are transported from roots to other parts of the plant in the transpiration stream.
Diffusion, osmosis, active transport and mass flow are all different ways transport can occur. Examples of elements
that plants need to transport are nitrogen, phosphorus, potassium, calcium, magnesium, and sulphur. In vascular plants,
these elements are extracted from the soil as soluble ions by the roots and transported throughout the plant in the
xylem. Most of the elements required for plant nutrition come from the chemical breakdown of soil minerals. Sucrose
produced by photosynthesis is transported from the leaves to other parts of the plant in the phloem and plant hormones
are transported by a variety of processes. The hypothesis that plant growth and development is coordinated by plant
hormones or plant growth regulators first emerged in the late 19th century. Darwin experimented on the movements
of plant shoots and roots towards light and gravity, and concluded "It is hardly an exaggeration to say that the
tip of the radicle ... acts like the brain of one of the lower animals ... directing the several movements". About
the same time, the role of auxins (from the Greek auxein, to grow) in control of plant growth was first outlined
by the Dutch scientist Frits Went. The first known auxin, indole-3-acetic acid (IAA), which promotes cell growth,
was only isolated from plants about 50 years later. This compound mediates the tropic responses of shoots and roots
towards light and gravity. The finding in 1939 that plant callus could be maintained in culture containing IAA, followed
by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of
growth hormones were key steps in the development of plant biotechnology and genetic modification. Cytokinins are
a class of plant hormones named for their control of cell division or cytokinesis. The natural cytokinin zeatin was
discovered in corn, Zea mays, and is a derivative of the purine adenine. Zeatin is produced in roots and transported
to shoots in the xylem where it promotes cell division, bud development, and the greening of chloroplasts. The gibberellins,
such as gibberellic acid, are diterpenes synthesised from acetyl CoA via the mevalonate pathway. They are involved
in the promotion of germination and dormancy-breaking in seeds, in regulation of plant height by controlling stem
elongation and the control of flowering. Abscisic acid (ABA) occurs in all land plants except liverworts, and is
synthesised from carotenoids in the chloroplasts and other plastids. It inhibits cell division, promotes seed maturation
and dormancy, and promotes stomatal closure. It was so named because it was originally thought to control abscission.
Ethylene is a gaseous hormone that is produced in all higher plant tissues from methionine. It is now known to be
the hormone that stimulates or regulates fruit ripening and abscission, and it, or the synthetic growth regulator
ethephon, which is rapidly metabolised to produce ethylene, is used on an industrial scale to promote ripening of cotton,
pineapples and other climacteric crops. Plant anatomy is the study of the structure of plant cells and tissues, whereas
plant morphology is the study of their external form. All plants are multicellular eukaryotes, their DNA stored in
nuclei. The characteristic features of plant cells that distinguish them from those of animals and fungi include
a primary cell wall composed of the polysaccharides cellulose, hemicellulose and pectin, larger vacuoles than in
animal cells and the presence of plastids with unique photosynthetic and biosynthetic functions as in the chloroplasts.
Other plastids contain storage products such as starch (amyloplasts) or lipids (elaioplasts). Uniquely, streptophyte
cells and those of the green algal order Trentepohliales divide by construction of a phragmoplast as a template for
building a cell plate late in cell division. The bodies of vascular plants including clubmosses, ferns and seed plants
(gymnosperms and angiosperms) generally have aerial and subterranean subsystems. The shoots consist of stems bearing
green photosynthesising leaves and reproductive structures. The underground vascularised roots bear root hairs at
their tips and generally lack chlorophyll. Non-vascular plants (the liverworts, hornworts and mosses) do not produce
ground-penetrating vascular roots, and most of the plant participates in photosynthesis. The sporophyte generation
is nonphotosynthetic in liverworts but may be able to contribute part of its energy needs by photosynthesis in mosses
and hornworts. The root system and the shoot system are interdependent – the usually nonphotosynthetic root system
depends on the shoot system for food, and the usually photosynthetic shoot system depends on water and minerals from
the root system. Cells in each system are capable of creating cells of the other and producing adventitious shoots
or roots. Stolons and tubers are examples of shoots that can grow roots. Roots that spread out close to the surface,
such as those of willows, can produce shoots and ultimately new plants. In the event that one of the systems is lost,
the other can often regrow it. In fact, it is possible to grow an entire plant from a single leaf, as is the case
with Saintpaulia, or even a single cell – which can dedifferentiate into a callus (a mass of unspecialised cells)
that can grow into a new plant. In vascular plants, the xylem and phloem are the conductive tissues that transport
resources between shoots and roots. Roots are often adapted to store food such as sugars or starch, as in sugar beets
and carrots. Stems mainly provide support to the leaves and reproductive structures, but can store water in succulent
plants such as cacti, food as in potato tubers, or reproduce vegetatively as in the stolons of strawberry plants
or in the process of layering. Leaves gather sunlight and carry out photosynthesis. Large, flat, flexible, green
leaves are called foliage leaves. Gymnosperms, such as conifers, cycads, Ginkgo, and gnetophytes are seed-producing
plants with open seeds. Angiosperms are seed-producing plants that produce flowers and have enclosed seeds. Woody
plants, such as azaleas and oaks, undergo a secondary growth phase resulting in two additional types of tissues:
wood (secondary xylem) and bark (secondary phloem and cork). All gymnosperms and many angiosperms are woody plants.
Some plants reproduce sexually, some asexually, and some via both means. Systematic botany is part of systematic
biology, which is concerned with the range and diversity of organisms and their relationships, particularly as determined
by their evolutionary history. It involves, or is related to, biological classification, scientific taxonomy and
phylogenetics. Biological classification is the method by which botanists group organisms into categories such as
genera or species. Biological classification is a form of scientific taxonomy. Modern taxonomy is rooted in the work
of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been
revised to align better with the Darwinian principle of common descent – grouping organisms by ancestry rather than
superficial characteristics. While scientists do not always agree on how to classify organisms, molecular phylogenetics,
which uses DNA sequences as data, has driven many recent revisions along evolutionary lines and is likely to continue
to do so. The dominant classification system is called Linnaean taxonomy. It includes ranks and binomial nomenclature.
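The rank hierarchy and the naming conventions that the article goes on to describe can be sketched in a few lines. A minimal illustration; the `binomial` helper is my own, not part of any nomenclatural standard:

```python
# The seven principal Linnaean ranks, from most to least inclusive.
RANKS = ["Kingdom", "Phylum", "Class", "Order", "Family", "Genus", "Species"]

def binomial(genus: str, epithet: str) -> str:
    """Format a species name: capitalised genus plus lowercase specific epithet.
    (In print the whole name would also be italicised.)"""
    return f"{genus.capitalize()} {epithet.lower()}"

print(binomial("lilium", "Columbianum"))  # Lilium columbianum
```

Because the genus name is part of the species name, the binomial is unambiguous world-wide even when the same specific epithet recurs in different genera.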
The nomenclature of botanical organisms is codified in the International Code of Nomenclature for algae, fungi, and
plants (ICN) and administered by the International Botanical Congress. Kingdom Plantae belongs to Domain Eukarya
and is broken down recursively until each species is separately classified. The order is: Kingdom; Phylum (or Division);
Class; Order; Family; Genus (plural genera); Species. The scientific name of a plant represents its genus and its
species within the genus, resulting in a single world-wide name for each organism. For example, the tiger lily is
Lilium columbianum. Lilium is the genus, and columbianum the specific epithet. The combination is the name of the
species. When writing the scientific name of an organism, it is proper to capitalise the first letter in the genus
and put all of the specific epithet in lowercase. Additionally, the entire term is ordinarily italicised (or underlined
when italics are not available). The evolutionary relationships and heredity of a group of organisms is called its
phylogeny. Phylogenetic studies attempt to discover phylogenies. The basic approach is to use similarities based
on shared inheritance to determine relationships. As an example, species of Pereskia are trees or bushes with prominent
leaves. They do not obviously resemble a typical leafless cactus such as an Echinocactus. However, both Pereskia
and Echinocactus have spines produced from areoles (highly specialised pad-like structures) suggesting that the two
genera are indeed related. Judging relationships based on shared characters requires care, since plants may resemble
one another through convergent evolution in which characters have arisen independently. Some euphorbias have leafless,
rounded bodies adapted to water conservation similar to those of globular cacti, but characters such as the structure
of their flowers make it clear that the two groups are not closely related. The cladistic method takes a systematic
approach to characters, distinguishing between those that carry no information about shared evolutionary history
– such as those evolved separately in different groups (homoplasies) or those left over from ancestors (plesiomorphies)
– and derived characters, which have been passed down from innovations in a shared ancestor (apomorphies). Only derived
characters, such as the spine-producing areoles of cacti, provide evidence for descent from a common ancestor. The
results of cladistic analyses are expressed as cladograms: tree-like diagrams showing the pattern of evolutionary
branching and descent. From the 1990s onwards, the predominant approach to constructing phylogenies for living plants
has been molecular phylogenetics, which uses molecular characters, particularly DNA sequences, rather than morphological
characters like the presence or absence of spines and areoles. The difference is that the genetic code itself is
used to decide evolutionary relationships, instead of being used indirectly via the characters it gives rise to.
Clive Stace describes this as having "direct access to the genetic basis of evolution." As a simple example, prior
to the use of genetic evidence, fungi were thought either to be plants or to be more closely related to plants than
animals. Genetic evidence suggests that the true evolutionary relationship of multicelled organisms is as shown in
the cladogram below – fungi are more closely related to animals than to plants. In 1998 the Angiosperm Phylogeny
Group published a phylogeny for flowering plants based on an analysis of DNA sequences from most families of flowering
plants. As a result of this work, many questions, such as which families represent the earliest branches of angiosperms,
have now been answered. Investigating how plant species are related to each other allows botanists to better understand
the process of evolution in plants. Despite the study of model plants and increasing use of DNA evidence, there is
ongoing work and discussion among taxonomists about how best to classify plants into various taxa. Technological
developments such as computers and electron microscopes have greatly increased the level of detail studied and speed
at which data can be analysed.
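The two conventions described above can be sketched in code. The snippet below is an illustrative sketch, not part of the source: a small helper that cases a binomial name as described (genus capitalised, specific epithet lowercase), and the fungi/animals/plants relationship represented as a nested-tuple cladogram and serialized in Newick notation, a common plain-text tree format (its use here is my own choice, not something the text prescribes).

```python
# Illustrative sketch of the conventions described in the text (assumed, not
# prescribed by the source): binomial-name casing and a Newick-serialized tree.

def format_binomial(genus: str, epithet: str) -> str:
    """Case a scientific name: capitalise the genus, lowercase the epithet."""
    return genus.capitalize() + " " + epithet.lower()

def to_newick(node) -> str:
    """Recursively render a nested-tuple tree as a Newick subtree string."""
    if isinstance(node, str):
        return node
    return "(" + ",".join(to_newick(child) for child in node) + ")"

print(format_binomial("LILIUM", "Columbianum"))  # Lilium columbianum

# Fungi group with animals rather than with plants, per the cladogram
# the text describes:
tree = (("Fungi", "Animalia"), "Plantae")
print(to_newick(tree) + ";")  # ((Fungi,Animalia),Plantae);
```

In print, the whole binomial would additionally be italicised; that typographic step is outside what a plain string can carry.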
Madonna Louise Ciccone (/tʃɪˈkoʊni/; Italian: [tʃikˈkoːne]; born August 16, 1958) is an American singer, songwriter, actress,
and businesswoman. She achieved popularity by pushing the boundaries of lyrical content in mainstream popular music
and imagery in her music videos, which became a fixture on MTV. Madonna is known for reinventing both her music and
image, and for maintaining her autonomy within the recording industry. Music critics have acclaimed her musical productions,
which have generated some controversy. Often referred to as the "Queen of Pop", she is often cited as an influence
by other artists. Born in Bay City, Michigan, Madonna moved to New York City in 1977 to pursue a career in modern
dance. After performing in the music groups Breakfast Club and Emmy, she signed with Sire Records (an auxiliary label
of Warner Bros. Records) in 1982 and released her self-titled debut album the following year. She followed it with
a series of commercially and critically successful albums, including the Grammy Award winners Ray of Light (1998)
and Confessions on a Dance Floor (2005). Throughout her career, Madonna has written and produced most of her songs,
with many of them reaching number one on the record charts, including "Like a Virgin", "Into the Groove", "Papa Don't
Preach", "Like a Prayer", "Vogue", "Frozen", "Music", "Hung Up", and "4 Minutes". Madonna's popularity was further
enhanced by her film roles, including Desperately Seeking Susan (1985), Dick Tracy (1990), and Evita (1996); the
latter earned her a Golden Globe Award for Best Actress. However, most of her other films have been panned by critics.
Her other ventures include fashion design, writing children's books, and filmmaking. She has been acclaimed as a
businesswoman, particularly after she founded entertainment company Maverick (including the label Maverick Records).
In 2007 she signed an unprecedented US $120 million 360 deal with Live Nation. Having sold more than 300 million
records worldwide, Madonna is recognized as the best-selling female recording artist of all time by Guinness World
Records. The Recording Industry Association of America (RIAA) listed her as the best-selling female rock artist of
the 20th century and the second best-selling female artist in the United States, with 64.5 million certified albums.
According to Billboard, Madonna is the highest-grossing solo touring artist of all time, earning US $1.31 billion
from her concerts since 1990. She was ranked at number two, behind only The Beatles, on the Billboard Hot 100 All-Time
Top Artists, making her the most successful solo artist in the history of the American singles chart. Madonna became
one of the five founding members of the UK Music Hall of Fame and was inducted into the Rock and Roll Hall of Fame
in her first year of eligibility. Madonna was born to Catholic parents Silvio Anthony "Tony" Ciccone (b. 1931) and
Madonna Louise Fortin (c. 1933 – December 1, 1963) in Bay City, Michigan, on August 16, 1958. Her father's parents
were immigrants from Pacentro, Italy, while her mother was of French-Canadian ancestry. Tony worked as an engineer
designer for Chrysler and General Motors. Since Madonna had the same name as her mother, family members called her
"Little Nonni". She has two elder brothers, Anthony (born 1956) and Martin (born 1957), and three younger siblings,
Paula (born 1959), Christopher (born 1960), and Melanie (born 1962). Upon being confirmed in 1966, she adopted Veronica
as a confirmation name. She was raised in the Detroit suburbs of Pontiac and Avon Township (now Rochester Hills).
Months before her mother died of breast cancer, Madonna noticed changes in her behavior and personality, although
she did not understand the reason. Her mother was at a loss to explain her medical condition, and often began to
cry when Madonna questioned her about it. Madonna later acknowledged that she had not grasped the concept of her
mother dying. Madonna turned to her paternal grandmother for solace. The Ciccone siblings resented housekeepers and
invariably rebelled against anyone brought into their home ostensibly to take the place of their beloved mother.
Madonna later told Vanity Fair that she saw herself in her youth as a "lonely girl who was searching for something.
I wasn't rebellious in a certain way. I cared about being good at something. I didn't shave my underarms and I didn't
wear make-up like normal girls do. But I studied and I got good grades.... I wanted to be somebody." Terrified that
her father Tony could be taken from her as well, Madonna was often unable to sleep unless she was near him. In 1966,
Tony married the family's housekeeper Joan Gustafson; they had two children, Jennifer (born 1967) and Mario (born
1968). Madonna came to resent her father for decades afterward, and developed a rebellious attitude. She attended
St. Frederick's and St. Andrew's Catholic Elementary Schools, and West Middle School. Madonna was known for her high
grade point average, and achieved notoriety for her unconventional behavior. She would perform cartwheels and handstands
in the hallways between classes, dangle by her knees from the monkey bars during recess, and pull up her skirt during
class—all so that the boys could see her underwear. In 1978, she dropped out of the University of Michigan, where she had been studying dance, and relocated to New York
City. She had little money and worked as a waitress at Dunkin' Donuts and with modern dance troupes, taking classes
at the Alvin Ailey American Dance Theater and eventually performing with Pearl Lang Dance Theater. Madonna said of
her move to New York, "It was the first time I'd ever taken a plane, the first time I'd ever gotten a taxi cab. I
came here with $35 in my pocket. It was the bravest thing I'd ever done." She started to work as a backup dancer
for other established artists. Madonna claimed that during a late night she was returning from a rehearsal, when
a pair of men held her at knifepoint and forced her to perform fellatio. Madonna later commented that "the episode
was a taste of my weakness, it showed me that I still could not save myself in spite of all the strong-girl show.
I could never forget it." While performing as a backup singer and dancer for the French disco artist Patrick Hernandez
on his 1979 world tour, Madonna became romantically involved with musician Dan Gilroy. Together, they formed her
first rock band, the Breakfast Club, for which Madonna sang and played drums and guitar. In 1980 or 1981 she left
Breakfast Club and, with her former boyfriend Stephen Bray as drummer, formed the band Emmy. The two began writing
songs together, and Madonna later decided to market herself as a solo act. Their music impressed DJ and record producer
Mark Kamins who arranged a meeting between Madonna and Sire Records founder Seymour Stein. After Madonna signed a
singles deal with Sire, her debut single, "Everybody", was released in October 1982, and the second, "Burning Up",
in March 1983. Both became big club hits in the United States, reaching number three on Hot Dance Club Songs chart
compiled by Billboard magazine. After this success, she started developing her debut album, Madonna, which was primarily
produced by Reggie Lucas of Warner Bros. However, she was not happy with the completed tracks and disagreed with
Lucas' production techniques, so decided to seek additional help. Madonna moved in with boyfriend John "Jellybean"
Benitez, asking his help for finishing the album's production. Benitez remixed most of the tracks and produced "Holiday",
which was her third single and her first global hit. The overall sound of Madonna was dissonant and in the form of
upbeat synthetic disco, using some of the new technology of the time, like the Linn drum machine, Moog bass and the
OB-X synthesizer. The album was released in July 1983 and peaked at number eight on the Billboard 200 six months
later, in 1984. It yielded two more hit singles, "Borderline" and "Lucky Star". Madonna's look and style of dressing,
her performances, and her music videos influenced young girls and women. Her style became one of the female fashion
trends of the 1980s. Created by stylist and jewelry designer Maripol, the look consisted of lace tops, skirts over
capri pants, fishnet stockings, jewelry bearing the crucifix, bracelets, and bleached hair. Madonna achieved global
recognition after the release of her second studio album, Like a Virgin, in November 1984. It topped the charts in
several countries and became her first number one album on the Billboard 200. The title track, "Like a Virgin", topped
the Billboard Hot 100 chart for six consecutive weeks. It attracted the attention of organizations who complained
that the song and its accompanying video promoted premarital sex and undermined family values, and moralists sought
to have the song and video banned. Madonna was criticized for her performance of "Like a Virgin" at the first
MTV Video Music Awards (VMA) in 1984. She appeared on stage atop a giant wedding cake, wearing a wedding dress and white
gloves. The performance is noted by MTV as an iconic moment in VMA history. In later years, Madonna commented that
she was terrified of the performance. The next hit was "Material Girl" promoted by her video, a mimicry of Marilyn
Monroe's performance of the song "Diamonds Are a Girl's Best Friend" from the 1953 film Gentlemen Prefer Blondes.
While filming this video, Madonna started dating actor Sean Penn. They married on her birthday in 1985. Like a Virgin
was certified diamond by the Recording Industry Association of America and sold more than 25 million copies worldwide.
In February 1984, according to the film director Sir Richard Attenborough, Madonna auditioned at the Royale Theatre
on Broadway for a dance role in his movie version of A Chorus Line using her birth-name of Ciccone, but he rejected
her. Madonna entered mainstream films in February 1985, beginning with a brief appearance as a club singer in Vision
Quest, a romantic drama film. Its soundtrack contained two new singles, her U.S. number-one single, "Crazy for You"
and "Gambler". She also appeared in the comedy Desperately Seeking Susan in March 1985, a film which introduced the
song "Into the Groove", her first number one single in the United Kingdom. Although Madonna was not the lead actress
for the film, her profile was such that the movie became widely considered (and marketed) as a Madonna vehicle. The
New York Times film critic Vincent Canby named it one of the ten best films of 1985. Beginning in April 1985, Madonna
embarked on her first concert tour in North America, The Virgin Tour, with the Beastie Boys as her opening act. She
progressed from playing CBGB and the Mudd Club to playing large sporting arenas. At that time she released two more
hit singles from the album, "Angel" and "Dress You Up". In July, Penthouse and Playboy magazines published a number
of nude photos of Madonna, taken in New York in 1978. She had posed for the photographs as she needed money at the
time, and was paid as little as $25 a session. The publication of the photos caused a media uproar, but Madonna remained
"unapologetic and defiant". The photographs were ultimately sold for up to $100,000. She referred to these events
at the 1985 outdoor Live Aid charity concert, saying that she would not take her jacket off because "[the media]
might hold it against me ten years from now." In June 1986, Madonna released her third studio album, True Blue, which
was inspired by and dedicated to Sean Penn. Rolling Stone magazine was generally impressed with the effort, writing
that the album "sound[s] as if it comes from the heart". It resulted in three singles making it to number-one on
the Billboard Hot 100: "Live to Tell", "Papa Don't Preach" and "Open Your Heart", and two more top-five singles:
"True Blue" and "La Isla Bonita". The album topped the charts in over 28 countries worldwide, an unprecedented achievement
at the time, and remains the best-selling studio album of her career, with sales of 25 million. In the
same year, Madonna starred in the critically panned film Shanghai Surprise, for which she was awarded the Golden
Raspberry Award for Worst Actress. She made her theatrical debut in a production of David Rabe's Goose and Tom-Tom;
the film and play both co-starred Penn. The next year, Madonna was featured in the film Who's That Girl. She contributed
four songs to its soundtrack, including the title track and "Causing a Commotion". In January 1989, Madonna signed
an endorsement deal with soft-drink manufacturer Pepsi. In one of her Pepsi commercials, she debuted her song "Like
a Prayer". The corresponding music video featured many Catholic symbols such as stigmata and cross burning, and a
dream of making love to a saint, leading the Vatican to condemn the video. Religious groups sought to ban the commercial
and boycott Pepsi products. Pepsi revoked the commercial and canceled her sponsorship contract. The song was included
on Madonna's fourth studio album, Like a Prayer, which was co-written and co-produced by Patrick Leonard and Stephen
Bray. Madonna received positive feedback for the album, with Rolling Stone writing that it was "as close to art as
pop music gets". Like a Prayer peaked at number one on the Billboard 200 and sold 15 million copies worldwide, with
4 million copies sold in the U.S. alone. Six singles were released from the album, including "Like a Prayer", which
reached number one, and "Express Yourself" and "Cherish", both peaking at number two. By the end of the 1980s, Madonna
was named as the "Artist of the Decade" by MTV, Billboard and Musician magazine. Madonna starred as Breathless Mahoney
in the film Dick Tracy (1990), with Warren Beatty playing the title role. Her performance led to a Saturn Award nomination
for Best Actress. To accompany the film, she released the soundtrack album, I'm Breathless, which included songs
inspired by the film's 1930s setting. It also featured the US number-one hit "Vogue" and "Sooner or Later", which
earned songwriter Stephen Sondheim an Academy Award for Best Original Song in 1991. While shooting the film, Madonna
began a relationship with Beatty, which dissolved by the end of 1990. In April 1990, Madonna began her Blond Ambition
World Tour, which was held until August. Rolling Stone called it an "elaborately choreographed, sexually provocative
extravaganza" and proclaimed it "the best tour of 1990". The tour generated strong negative reaction from religious
groups for her performance of "Like a Virgin", during which two male dancers caressed her body before she simulated
masturbation. In response, Madonna said, "The tour in no way hurts anybody's sentiments. It's for open minds and
gets them to see sexuality in a different way. Their own and others". The Laserdisc release of the tour won Madonna
a Grammy Award in 1992 for Best Long Form Music Video. The Immaculate Collection, Madonna's first greatest-hits compilation
album, was released in November 1990. It included two new songs, "Justify My Love" and "Rescue Me". The album was
certified diamond by RIAA and sold over 30 million copies worldwide, becoming the best-selling compilation album
by a solo artist in history. "Justify My Love" reached number one in the U.S. and top ten worldwide. Its music video
featured scenes of sadomasochism, bondage, same-sex kissing, and brief nudity. The video was deemed too sexually
explicit for MTV and was banned from the network. Madonna responded to the banning: "Why is it that people are willing
to go and watch a movie about someone getting blown to bits for no reason at all, and nobody wants to see two girls
kissing and two men snuggling?" In 1992, Madonna had a role in A League of Their Own as Mae Mordabito, a baseball
player on an all-women's team. She recorded the film's theme song, "This Used to Be My Playground", which became
a Hot 100 number one hit. The same year, she founded her own entertainment company, Maverick, consisting of a record
company (Maverick Records), a film production company (Maverick Films), and associated music publishing, television
broadcasting, book publishing and merchandising divisions. The deal was a joint venture with Time Warner and paid
Madonna an advance of $60 million. It gave her 20% royalties from music proceeds, one of the highest rates
in the industry, equaled at that time only by Michael Jackson's royalty rate established a year earlier with Sony.
The first release from the venture was Madonna's book, titled Sex. It consisted of sexually provocative and explicit
images, photographed by Steven Meisel. The book received strong negative reaction from the media and the general
public, but sold 1.5 million copies at $50 each in a matter of days. At the same time she released her fifth studio
album, Erotica, which debuted at number two on the Billboard 200. Its title track peaked at number three on the Billboard
Hot 100. Erotica also produced five singles: "Deeper and Deeper", "Bad Girl", "Fever", "Rain" and "Bye Bye Baby".
Madonna had provocative imagery featured in the erotic thriller, Body of Evidence, a film which contained scenes
of sadomasochism and bondage. It was poorly received by critics. She also starred in the film Dangerous Game, which
was released straight to video in North America. The New York Times described the film as "angry and painful, and
the pain feels real." In September 1993, Madonna embarked on The Girlie Show World Tour, in which she dressed as
a whip-cracking dominatrix surrounded by topless dancers. In Puerto Rico she rubbed the island's flag between her
legs on stage, resulting in outrage among the audience. In March 1994, she appeared as a guest on the Late Show with
David Letterman, using profanity that had to be censored for television, and handing Letterman a pair of her panties
and asking him to smell them. The releases of her sexually explicit films, albums and book, and the aggressive appearance
on Letterman led critics to dismiss Madonna as a sexual renegade. Critics and fans reacted negatively, commenting
that "she had gone too far" and that her career was over. Biographer J. Randy Taraborrelli described her ballad "I'll
Remember" (1994) as an attempt to tone down her provocative image. The song was recorded for Alek Keshishian's film
With Honors. She made a subdued appearance with Letterman at an awards show and appeared on The Tonight Show with
Jay Leno after realizing that she needed to change her musical direction in order to sustain her popularity. With
her sixth studio album, Bedtime Stories (1994), Madonna adopted a softer image in an effort to improve public perception of her.
The album debuted at number three on the Billboard 200 and produced four singles, including "Secret" and "Take a
Bow", the latter topping the Hot 100 for seven weeks, the longest period of any Madonna single. At the same time,
she became romantically involved with fitness trainer Carlos Leon. Something to Remember, a collection of ballads,
was released in November 1995. The album featured three new songs: "You'll See", "One More Chance", and a cover of
Marvin Gaye's "I Want You". In Evita (1996), Madonna played the title role of Eva Perón. For a long time, Madonna
had desired to play Perón and wrote to director Alan Parker to explain why she would be perfect for the part. She
said later, "This is the role I was born to play. I put everything of me into this because it was much more than
a role in a movie. It was exhilarating and intimidating at the same time ... And I am prouder of Evita than anything
else I have done." After securing the role, she had vocal training and learned about the history of Argentina and
Perón. During shooting she became ill several times due to the intense emotional effort required. However, as she
told Oprah, she was also pregnant during the filming: "I was winded after every take. I had to lie on the couch every
ten minutes so I could recover from dizzy spells, I was worried that I was shaking the baby around too much and that
would injure it in some way." Madonna wrote in her personal diary at the time: "Ironically, this feeling of vulnerability
and weakness is helping me in the movie. I'm sure Evita felt this way every day of her life once she discovered she
was ill." After its release, Evita garnered critical appreciation. Zach Conner from Time magazine commented, "It's
a relief to say that Evita is pretty damn fine, well cast and handsomely visualized. Madonna once again confounds
our expectations. She plays Evita with a poignant weariness and has more than just a bit of star quality. Love or
hate Madonna-Eva, she is a magnet for all eyes." Madonna won a Golden Globe Award for Best Actress in Motion Picture
Musical or Comedy for the role. She released three singles from the Evita soundtrack album, including "You Must Love
Me" (which won an Academy Award for Best Original Song in 1997) and "Don't Cry for Me Argentina". Madonna was later
presented with the Artist Achievement Award by Tony Bennett at the 1996 Billboard Music Awards. On October 14, 1996,
Madonna gave birth to Lourdes Maria Ciccone Leon, her daughter with Leon. Biographer Mary Cross writes that although
Madonna was often ill during the filming and worried that her pregnancy would harm the film, she reached some important
personal goals: "Now 38 years old, Madonna had at last triumphed on screen and achieved her dream of having a child,
both in the same year. She had reached another turning point in her career, reinventing herself and her image with
the public." Her relationship with Carlos Leon ended in May 1997; she declared that they were "better off as best
friends." After Lourdes' birth, Madonna became involved in Eastern mysticism and Kabbalah. She was introduced to
Jewish mysticism by actress Sandra Bernhard in 1997. Madonna's seventh studio album, Ray of Light (1998), reflected
a change in her image. She collaborated with electronica producer William Orbit and wanted to create a sound that
could blend dance music with pop and British rock. American music critic Ann Powers explained that what Madonna searched
for with Orbit "was a kind of a lushness that she wanted for this record. Techno and rave was happening in the 90's
and had a lot of different forms. There was very experimental, more hard stuff like Aphex Twin. There was party stuff
like Fatboy Slim. That's not what Madonna wanted for this. She wanted something more like a singer-songwriter, really.
And William Orbit provided her with that." The album garnered critical acclaim. Ray of Light was honored with four
Grammy Awards. In 2003, Slant Magazine called it "one of the great pop masterpieces of the '90s" and Rolling Stone
listed it among "The 500 Greatest Albums of All Time". Commercially, the album peaked at number one in numerous countries
and sold more than 16 million copies worldwide. The album's first single, "Frozen", became Madonna's first single
to debut at number one in the UK, while in the U.S. it became her sixth number-two single, setting another record
for Madonna as the artist with the most number two hits. The second single, "Ray of Light", debuted at number five
on the Billboard Hot 100. The 1998 edition of Guinness Book of World Records stated: "No female artist has sold more
records than Madonna around the world". In 1999 Madonna signed to play a violin teacher in the film Music of the
Heart but left the project, citing "creative differences" with director Wes Craven. She recorded the single "Beautiful
Stranger" for the 1999 film Austin Powers: The Spy Who Shagged Me. It reached number 19 on the Hot 100 solely on
radio airplay. Madonna won a Grammy Award for "Best Song Written for a Motion Picture, Television or Other Visual
Media". In 2000, Madonna starred in the film The Next Best Thing, and contributed two songs to the film's soundtrack;
"Time Stood Still" and a cover of Don McLean's 1971 song "American Pie". She released her eighth studio album, Music,
in September 2000. It featured elements from the electronica-inspired Ray of Light era, and like its predecessor,
received acclaim from critics. Collaborating with French producer Mirwais Ahmadzaï, Madonna commented: "I love to
work with the weirdos that no one knows about—the people who have raw talent and who are making music unlike anyone
else out there. Music is the future of sound." Stephen Thomas Erlewine from AllMusic felt that "Music blows by in
a kaleidoscopic rush of color, technique, style and substance. It has so much depth and so many layers that it's easily as
self-aware and earnest as Ray of Light." The album took the number-one position in more than 20 countries worldwide
and sold four million copies in the first ten days. In the U.S., Music debuted at the top, and became her first number-one
album since Like a Prayer eleven years earlier. It produced three singles: the Hot 100 number one "Music", "Don't Tell
Me", and "What It Feels Like for a Girl". The music video of "What It Feels Like for a Girl" depicted Madonna committing
acts of crime and vandalism, and was banned by MTV and VH1. She met director Guy Ritchie, who would become her second
husband, in November 1998 and gave birth to their son Rocco John Ritchie on August 11, 2000 in Los Angeles. Rocco
and Madonna both suffered complications during the birth because she had placenta praevia. He was christened at
Dornoch Cathedral in Dornoch, Scotland, on December 21, 2000. Madonna married Ritchie the following day at nearby
Skibo Castle. Her fifth concert tour, titled Drowned World Tour, started in June 2001. The tour visited cities in
the U.S. and Europe and was the highest-grossing concert tour of the year by a solo artist, earning $75 million from
47 sold-out shows. She also released her second greatest-hits collection, titled GHV2, to coincide with the home
video release of the tour. GHV2 debuted at number seven on the Billboard 200. Madonna starred in the film Swept Away,
directed by Ritchie. Released direct-to-video in the UK, the film was a commercial and critical failure. In May 2002
she appeared in London in the West End play Up For Grabs at Wyndham's Theatre (billed as 'Madonna Ritchie'), to
universally bad reviews, with one critic describing it as "the evening's biggest disappointment". That October, she released
"Die Another Day", the title song of the James Bond film Die Another Day, in which she had a cameo role, described
by The Guardian film reviewer as "incredibly wooden". The song reached number eight on the Billboard Hot 100 and
was nominated for both a Golden Globe Award for Best Original Song and a Golden Raspberry for Worst Song. Following
Die Another Day, Madonna collaborated with fashion photographer Steven Klein in 2003 for an exhibition installation
named X-STaTIC Pro=CeSS. It included photography from a photo shoot in W magazine, and seven video segments. The
installation ran from March to May in New York's Deitch Projects gallery. It traveled the world in an edited form.
The same year, Madonna released her ninth studio album, American Life, which was based on her observations of American
society; it received mixed reviews. She commented, "[American Life] was like a trip down memory lane, looking back
at everything I've accomplished and all the things I once valued and all the things that were important to me." Larry
Flick from The Advocate felt that "American Life is an album that is among her most adventurous and lyrically intelligent"
while condemning it as "a lazy, half-arsed effort to sound and take her seriously." The title song peaked at number
37 on the Hot 100. Its original music video was canceled as Madonna thought that the video, featuring violence and
war imagery, would be deemed unpatriotic since America was then at war with Iraq. With four million copies sold worldwide,
American Life was the lowest-selling album of her career at that point. Madonna gave another provocative performance
later that year at the 2003 MTV Video Music Awards, while singing "Hollywood" with Britney Spears, Christina Aguilera,
and Missy Elliott. Madonna sparked controversy for kissing Spears and Aguilera suggestively during the performance.
In October 2003, Madonna provided guest vocals on Spears' single "Me Against the Music". It was followed with the
release of Remixed & Revisited. The EP contained remixed versions of songs from American Life and included "Your
Honesty", a previously unreleased track from the Bedtime Stories recording sessions. Madonna also signed a contract
with Callaway Arts & Entertainment to be the author of five children's books. The first of these books, titled The
English Roses, was published in September 2003. The story was about four English schoolgirls and their envy and jealousy
of each other. Kate Kellway from The Guardian commented, "[Madonna] is an actress playing at what she can never be—a
JK Rowling, an English rose." The book debuted at the top of The New York Times Best Seller list and became the fastest-selling
children's picture book of all time. The next year Madonna and Maverick sued Warner Music Group and its former parent
company Time Warner, claiming that mismanagement of resources and poor bookkeeping had cost the company millions
of dollars. In return, Warner filed a countersuit alleging that Maverick had lost tens of millions of dollars on
its own. The dispute was resolved when the Maverick shares, owned by Madonna and Ronnie Dashev, were purchased by
Warner. Madonna and Dashev's company became a wholly owned subsidiary of Warner Music, but Madonna was still signed
to Warner under a separate recording contract. In mid-2004 Madonna embarked on the Re-Invention World Tour in the
U.S., Canada, and Europe. It became the highest-grossing tour of 2004, earning around $120 million and became the
subject of her documentary I'm Going to Tell You a Secret. In November 2004, she was inducted into the UK Music Hall
of Fame as one of its five founding members, along with The Beatles, Elvis Presley, Bob Marley, and U2. In January
2005, Madonna performed a cover version of the John Lennon song "Imagine" at Tsunami Aid. She also performed at the
Live 8 benefit concert in London in July 2005. Her tenth studio album, Confessions on a Dance Floor, was released
in November 2005. Musically the album was structured like a club set composed by a DJ. It was acclaimed by critics,
with Keith Caulfield from Billboard commenting that the album was a "welcome return to form for the Queen of Pop."
The album won a Grammy Award for Best Electronic/Dance Album. Confessions on a Dance Floor and its lead single, "Hung
Up", went on to reach number one in 40 and 41 countries respectively, earning a place in Guinness World Records.
The song contained a sample of ABBA's "Gimme! Gimme! Gimme! (A Man After Midnight)", only the second time that ABBA had allowed their work to be used. ABBA songwriter Björn Ulvaeus remarked, "It is a wonderful track—100 per cent solid
pop music." "Sorry", the second single, became Madonna's twelfth number-one single in the UK. Madonna embarked on
the Confessions Tour in May 2006, which had a global audience of 1.2 million and grossed over $193.7 million, becoming
the highest-grossing tour to that date for a female artist. Madonna used religious symbols, such as the crucifix
and Crown of Thorns, in the performance of "Live to Tell". It caused the Russian Orthodox Church and the Federation
of Jewish Communities of Russia to urge all their members to boycott her concert. At the same time, the International
Federation of the Phonographic Industry (IFPI) officially announced that Madonna had sold over 200 million copies of her albums alone worldwide. While on tour, Madonna participated in the Raising Malawi initiative, partially funding an orphanage in that country and traveling there. During the visit, she decided to adopt a boy named David Banda
in October 2006. The adoption drew a strong public reaction, because Malawian law requires would-be parents to reside
in Malawi for one year before adopting, which Madonna did not do. She addressed this on The Oprah Winfrey Show, saying
that there were no written adoption laws in Malawi that regulated foreign adoption. She described how Banda had been
suffering from pneumonia after surviving malaria and tuberculosis when she first met him. Banda's biological father,
Yohane, commented, "These so-called human rights activists are harassing me every day, threatening me that I am not
aware of what I am doing ... They want me to support their court case, a thing I cannot do for I know what I agreed
with Madonna and her husband." The adoption was finalized in May 2008. Madonna released the song "Hey You" for the
Live Earth series of concerts. The song was available as a free download during its first week of release. She also
performed it at the London Live Earth concert. Madonna announced her departure from Warner Bros. Records, and a new
$120 million, ten-year 360 deal with Live Nation. She produced and wrote I Am Because We Are, a documentary on the
problems faced by Malawians. The documentary was directed by Nathan Rissman, who worked as Madonna's gardener. She
also directed her first film Filth and Wisdom. The plot of the film revolved around three friends and their aspirations.
The Times said she had "done herself proud" while The Daily Telegraph described the film as "not an entirely unpromising
first effort [but] Madonna would do well to hang on to her day job." In December 2007, the Rock and Roll Hall of
Fame announced Madonna as one of the five inductees of 2008. At the induction ceremony on March 10, 2008, Madonna
did not sing but asked fellow Hall of Fame inductees and Michigan natives The Stooges to perform her songs "Burning
Up" and "Ray of Light". She thanked Christopher Flynn, her dance teacher from 35 years earlier, for his encouragement
to follow her dreams. Madonna released her eleventh studio album, Hard Candy, in April 2008. Containing R&B and urban
pop influences, the songs on Hard Candy were autobiographical in nature and saw Madonna collaborating with Justin
Timberlake, Timbaland, Pharrell Williams and Nate "Danja" Hills. The album debuted at number one in thirty-seven
countries and on the Billboard 200. Don Shewey from Rolling Stone complimented it as an "impressive taste of her
upcoming tour." It received generally positive reviews worldwide though some critics panned it as "an attempt to
harness the urban market". "4 Minutes" was released as the album's lead single and peaked at number three on the
Billboard Hot 100. It was Madonna's 37th top-ten hit on the chart—it pushed Madonna past Elvis Presley as the artist
with the most top-ten hits. In the UK she retained her record for the most number-one singles by a female artist, with "4 Minutes" becoming her thirteenth. At the 23rd Japan Gold Disc Awards, Madonna received her fifth Artist of the Year trophy from the Recording Industry Association of Japan, the most for any artist. To further promote the album,
Madonna embarked on the Sticky & Sweet Tour; her first major venture with Live Nation. With a gross of $280 million,
it became the highest-grossing tour by a solo artist at the time, surpassing the previous record Madonna set with the Confessions
Tour; it was later surpassed by Roger Waters' The Wall Live. It was extended to the next year, adding new European
dates, and after it ended, the total gross was $408 million. Life with My Sister Madonna, a book by Madonna's brother
Christopher, debuted at number two on The New York Times bestseller list. The book caused some friction between Madonna
and her brother, because of the unsolicited publication. Problems also arose between Madonna and Ritchie, with the
media reporting that they were on the verge of separation. Ultimately, Madonna filed for divorce from Ritchie, citing irreconcilable differences; the divorce was finalized in December 2008. She then decided to adopt another child from Malawi. The country's
High Court initially approved the adoption of Chifundo "Mercy" James; however, the application was rejected because
Madonna was not a resident of the country. Madonna appealed, and on June 12, 2009, the Supreme Court of Malawi granted
Madonna the right to adopt Mercy James. She also released Celebration, her third greatest-hits album and final release
with Warner. It contained the new songs "Celebration" and "Revolver" along with 34 hits spanning her career. Celebration
reached number one in the UK, tying her with Elvis Presley as the solo act with the most number-one albums in British
chart history. She appeared at the 2009 MTV Video Music Awards on September 13, 2009, to speak in tribute to deceased
pop star Michael Jackson. The second Malawian adoption had again stirred controversy. Madonna had known Mercy from the time she went to adopt David. Mercy's grandmother
had initially protested the adoption, but later gave in, saying "At first I didn't want her to go but as a family
we had to sit down and reach an agreement and we agreed that Mercy should go. The men insisted that Mercy be adopted
and I won't resist anymore. I still love Mercy. She is my dearest." Mercy's father remained adamant, saying that he could not support the adoption while he was still alive. Madonna performed at the Hope for Haiti Now: A Global Benefit
for Earthquake Relief concert in January 2010. In April she released her third live album, Sticky & Sweet Tour. It
was her first release under Live Nation, but was distributed by Warner Bros. Madonna granted American TV show Glee
the rights to her entire catalogue of music, and the producers planned an episode featuring Madonna songs exclusively.
Glee: The Music, The Power of Madonna, an EP containing eight cover versions of Madonna songs featured in the episode,
was released afterward and debuted at number one on the Billboard 200. Madonna released the Material Girl clothing
line, which she designed with her daughter, Lourdes. The 1980s-inspired line, which drew on the punk-girl style of Madonna's rise to fame, was released under the Macy's label. Madonna also opened a series of fitness
centers around the world named Hard Candy Fitness. In November 2011, Madonna and MG Icon announced the release of
a second fashion brand called Truth or Dare by Madonna to include footwear, underclothing, and accessories. She also
directed her second feature film, W.E., a biographical drama about the affair between King Edward VIII and Wallis Simpson;
it was co-written with Alek Keshishian. Critical and commercial response to the film was negative. Madonna contributed
the ballad "Masterpiece" for the film's soundtrack, which won her a Golden Globe Award for Best Original Song. In
2012, Madonna performed at the Super Bowl XLVI halftime show, staged by Cirque du Soleil and Jamie King and featuring special guests LMFAO, Nicki Minaj, M.I.A. and Cee Lo Green. It became the then most-watched Super Bowl halftime show
in history with 114 million viewers, higher than the game itself. It was also revealed that the singer had signed
a three-album deal with Interscope Records, which would act as the distributor in partnership with her 360 deal with
Live Nation. Her twelfth studio album, MDNA, was released in March 2012 and saw collaboration with various producers,
most notably with William Orbit again and Martin Solveig. The album was well received by music critics, with Priya
Elan from NME calling it "a ridiculously enjoyable romp", citing its "psychotic, soul-bearing stuff" as "some of
the most visceral stuff she's ever done." MDNA debuted at number one on the Billboard 200 and many other countries
worldwide. Madonna surpassed Elvis Presley's record for the most number-one albums by a solo artist in the UK. The
lead single "Give Me All Your Luvin'", featuring guest vocals from Minaj and M.I.A., became Madonna's record-extending
38th top-ten hit on the Billboard Hot 100. The MDNA Tour, which further promoted the album, began in May 2012 in
Tel Aviv, Israel. The tour received positive reviews from critics, but courted controversy with its depictions of violence, firearms, human rights, nudity and politics, and several lawsuits were threatened against Madonna over the tour. It was a box office success with a gross of $305.2 million from 88 sold-out shows, and became the
highest-grossing tour of 2012 and the tenth highest-grossing tour of all time. At the 2013 Billboard Music Awards,
Madonna won three trophies for Top Touring Artist, Top Dance Artist and Top Dance Album. Madonna was named the top-earning
celebrity of the year by Forbes, earning an estimated $125 million, due to the success of the tour. By 2013, Madonna's
Raising Malawi organization had built ten schools to educate 4,000 children in Malawi, at a cost of $400,000. When Madonna
visited the schools in April 2013, President of Malawi Joyce Banda expressed criticism of the star and her charity,
accusing her of exaggerating her charity's contribution. Madonna responded by releasing a statement saying she was
saddened that Banda had chosen to act negatively about her endeavors. "I have no intention of being distracted by
these ridiculous allegations," she added. Later, it was confirmed that Banda had not approved the statement released by her press team and was "incandescent with anger" over the mix-up. Working with photographer Steven Klein,
Madonna completed a 17-minute film called secretprojectrevolution. The BitTorrent company was selected by Madonna
to release the film as part of a Madonna bundle. It was released on September 24, 2013, and consisted of the 17-minute
film, its stills, a Vice interview, and a message from Madonna. With the film she launched the Art for Freedom initiative,
which helped to promote "art and free speech as a means to address persecution and injustice across the globe". The
website for the project has received over 3,000 art-related submissions since its inception, with Madonna regularly monitoring it and enlisting the help of other artists like David Blaine and Katy Perry as guest curators. From the beginning of
2014, Madonna began to make multiple media appearances. She appeared at the 56th Annual Grammy Awards in January
2014, performing "Open Your Heart" alongside rappers Macklemore & Ryan Lewis and singer Mary Lambert, who sang their
single "Same Love", as 33 couples were wed onstage, officiated by Queen Latifah. Days later, she joined singer Miley
Cyrus on her MTV Unplugged special, singing a mash-up of "Don't Tell Me" and Cyrus' single "We Can't Stop" (2013).
She also extended her business ventures and in February 2014 the singer premiered MDNA Skin, a range of skin care
products, in Tokyo, Japan. After visiting her hometown of Detroit during May 2014, Madonna decided to contribute
funds to three of the city's organizations, to help alleviate poverty there. The singer released a statement
saying that she was inspired by their work, adding that "it was obvious to me that I had to get involved and be part
of the solution to help Detroit recover". Madonna also began work on her thirteenth studio album, with collaborators
including Avicii, Diplo and Natalia Kills. In December 2014, thirteen demos recorded for the album leaked onto the
Internet. She posted in response that half of the tracks would not be used on the final release, while the other
half had "changed and evolved". The album, titled Rebel Heart, was released on March 10, 2015. From September 2015,
she embarked on the Rebel Heart Tour to promote the album; the tour ran until March 2016, traveling throughout North America, Europe and Asia, and marked the singer's first visit to Australia in 23 years, where she also performed a one-off show for her fans. It grossed a total of $169.8 million from the 82 shows, with over 1.045 million tickets sold.
While on tour Madonna became embroiled in a legal battle with Ritchie, over the custody of her son Rocco. The dispute
started when Rocco decided to continue living in England with Ritchie after the Rebel Heart Tour visited there,
while Madonna wanted him to return with her. Court hearings took place in both New York and London, and after multiple
deliberations, Madonna decided to withdraw her application for custody, and appealed for a mutual discussion between
herself and Ritchie about Rocco. Madonna's music has been the subject of much analysis and scrutiny. Robert M. Grant,
author of Contemporary Strategy Analysis (2005), commented that what has brought Madonna success is "certainly not
outstanding natural talent. As a vocalist, musician, dancer, songwriter, or actress, Madonna's talents seem modest."
He asserts Madonna's success is in relying on the talents of others, and that her personal relationships have served
as cornerstones to the numerous reinventions in the longevity of her career. Madonna's approach was far from the
music industry wisdom of "Find a winning formula and stick to it." Her musical career has been a continuous experimentation
with new musical ideas and new images and a constant quest for new heights of fame and acclaim. Grant concluded that
"having established herself as the queen of popular music, Madonna did not stop there, but continued re-inventing."
Musicologist Susan McClary wrote that "Madonna's art itself repeatedly deconstructs the traditional notion of the
unified subject with finite ego boundaries. Her pieces explore various ways of constituting identities that refuse
stability, that remain fluid, that resist definition." Throughout her career Madonna has been involved in writing
and producing most of her own music. Madonna's early songwriting skill was developed during her time with the Breakfast
Club in 1979. According to author Carol Gnojewski, her first attempts at songwriting are perceived as an important
self-revelation, as Madonna said: "I don't know where [the songs] came from. It was like magic. I'd write a song
every day. I said 'Wow, I was meant to do this'." Mark Kamins, her first producer, believed that Madonna is "a much
underrated musician and lyricist." Rolling Stone has named her "an exemplary songwriter with a gift for hooks and
indelible lyrics." According to Freya Jarman-Ivens, Madonna's talent for developing "incredible" hooks for her songs
allows the lyrics to capture the attention of the audience, even without the influence of the music. As an example,
Jarman-Ivens cites the 1985 single "Into the Groove" and its line "Live out your fantasy here with me, just let the
music set you free; Touch my body, and move in time, now I know you're mine." Madonna's songwriting has often been autobiographical over the years, dealing with various themes from love and relationships to self-respect and female empowerment. Her
songs also speak about taboo and unconventional issues of their period, such as sexuality and AIDS on Erotica (1992).
Many of her lyrics contain innuendos and double entendres, leading to multiple interpretations among music critics and scholars. Madonna has twice been nominated for induction into the Songwriters Hall of Fame, for the 2014 and 2016 ceremonies. Rolling Stone listed Madonna at number 56 on the "100 Greatest Songwriters of All Time". Before emerging as a pop star, Madonna spent her early years in rock music with her bands, Breakfast Club and Emmy. While performing with Emmy, Madonna recorded some 12 to 14 songs that resembled the punk rock of the period. Her early rock roots can also be found on the demo album Pre-Madonna. Stephen Thomas Erlewine noted that with her self-titled debut
album, Madonna began her career as a disco diva, in an era that did not have any such divas to speak of. In the early 1980s, disco was anathema to mainstream pop, and according to Erlewine, Madonna played a huge role in popularizing dance music as mainstream music. The album's songs reveal several key trends that have continued to
define her success, including a strong dance-based idiom, catchy hooks, highly polished arrangements and Madonna's
own vocal style. Her second album, Like a Virgin (1984), foreshadowed several trends in her later works. It contained
references to classical works (pizzicato synthesizer line that opens "Angel"); potential negative reaction from social
groups ("Dress You Up" was blacklisted by the Parents Music Resource Center); and retro styles ("Shoo-Bee-Doo", Madonna's
homage to Motown). Her mature artistic statement was visible in True Blue (1986) and Like a Prayer (1989). In True
Blue, she incorporated classical music in order to engage an older audience who had been skeptical of her music.
Like a Prayer introduced live recorded songs and incorporated different genres of music, including dance, funk, R&B
and gospel music. Her versatility was further shown on I'm Breathless, which consists predominantly of the 1940s
Broadway showtune-flavoured jazz, swing and big band tracks. Madonna continued to compose ballads and uptempo dance
songs for Erotica (1992) and Bedtime Stories (1994). Both albums explored elements of new jack swing, with Jim Farber
from Entertainment Weekly saying that "she could actually be viewed as new jack swing's godmother." She tried to
remain contemporary by incorporating samples, drum loops and hip hop into her music. With Ray of Light, Madonna brought
electronic music from its underground status into massive popularity in the mainstream music scene. Madonna experimented
with more folk and acoustic music in Music (2000) and American Life (2003). A change was noted in the content of
the songs in Music, with most of them being simple love songs, but with an underlying tone of melancholy. According
to Q magazine, American Life was characterized by "a thumping techno rhythm, liquid keyboard lines, an acoustic chorus
and a bizarre Madonna rap." The album's "conventional rock songs" were suffused with dramatic lyrics about patriotism, and its compositions included the appearance of a gospel choir in the song "Nothing Fails". Madonna returned to pure
dance songs with Confessions on a Dance Floor, infusing club beats and retro music with lyrics about paradoxical metaphors and references to her earlier works. Madonna moved in an urban direction with Hard Candy (2008), mixing R&B and hip hop music with dance tunes. MDNA (2012) largely focused on electronic dance music, which she has embraced
since Ray of Light. Possessing a mezzo-soprano vocal range, Madonna has always been self-conscious about her voice,
especially in comparison to her vocal idols such as Ella Fitzgerald, Prince, and Chaka Khan. Mark Bego, author of
Madonna: Blonde Ambition, called her "the perfect vocalist for lighter-than-air songs", despite not being a "heavyweight
talent." According to MSNBC critic Tony Sclafani, "Madonna's vocals are the key to her rock roots. Pop vocalists
usually sing songs "straight," but Madonna employs subtext, irony, aggression and all sorts of vocal idiosyncrasies
in the ways John Lennon and Bob Dylan did." Madonna used a bright, girlish vocal timbre in her early albums which
became passé in her later works. The change was deliberate since she was constantly reminded of how the critics had
once labelled her as "Minnie Mouse on helium". During the filming of Evita, Madonna had to take vocal lessons, which
increased her range further. Of this experience she commented, "I studied with a vocal coach for Evita and I realized
there was a whole piece of my voice I wasn't using. Before, I just believed I had a really limited range and was
going to make the most of it." Besides singing, Madonna can play several musical instruments. She learned to play drums and guitar from her then-boyfriend Dan Gilroy in the late 1970s before joining the Breakfast Club line-up
as the drummer. This helped her to form the band Emmy, where she performed as the guitarist and lead vocalist. Madonna
later played guitar on her demo recordings. On the liner notes of Pre-Madonna, Stephen Bray wrote: "I've always thought
she passed up a brilliant career as a rhythm guitarist." After her career breakthrough, Madonna focused mainly on singing but was also credited with playing cowbell on Madonna (1983) and synthesizer on Like a Prayer (1989). In 1999, Madonna studied the violin for three months for a role as a violin teacher in the film Music of the Heart, before eventually leaving the project. After two decades, Madonna decided to perform with guitar again during
the promotion of Music (2000). She took further lessons from guitarist Monte Pittman to improve her guitar skills. Since then Madonna has played guitar on every tour, as well as on her studio albums. At the 2002 Orville H. Gibson Guitar Awards, she received a nomination for the Les Paul Horizon Award, which honors the most promising up-and-coming guitarist.
According to Taraborrelli, the defining moment of Madonna's childhood was the tragic and untimely death of her beloved
mother. Psychiatrist Keith Ablow suggests her mother's death would have had an immeasurable impact on the young Madonna
at a time when her personality was still forming. According to Ablow, the younger a child is at the time of a serious
loss, the more profound the influence and the longer lasting the impact. He concludes that "some people never reconcile
themselves to such a loss at an early age, Madonna is not different than them." Conversely, author Lucy O'Brien feels
the impact of the rape she suffered is, in fact, the motivating factor behind everything Madonna has done, more important
even than the death of her mother: "It's not so much grief at her mother's death that drives her, as the sense of
abandonment that left her unprotected. She encountered her own worst possible scenario, becoming a victim of male
violence, and thereafter turned that full-tilt into her work, reversing the equation at every opportunity." As they
grew older Madonna and her sisters would feel deep sadness as the vivid memory of their mother began drifting farther
from them. They would study pictures of her and come to think that she resembled poet Anne Sexton and Hollywood actresses.
This would later raise Madonna's interest in poetry, with Sylvia Plath being her favourite. Later, Madonna commented:
"We were all wounded in one way or another by [her death], and then we spent the rest of our lives reacting to it
or dealing with it or trying to turn into something else. The anguish of losing my mom left me with a certain kind
of loneliness and an incredible longing for something. If I hadn't had that emptiness, I wouldn't have been so driven.
Her death had a lot to do with me saying—after I got over my heartache—I'm going to be really strong if I can't have
my mother. I'm going to take care of myself." Taraborrelli felt that in time, no doubt because of the devastation
she felt, Madonna would never again allow herself, or even her daughter, to feel as abandoned as she had felt when
her mother died. "Her death had taught [Madonna] a valuable lesson, that she would have to remain strong for herself
because, she feared weakness—particularly her own—and wanted to be the queen of her own castle." In 1985, Madonna
commented that the first song to ever make a strong impression on her was "These Boots Are Made for Walkin'" by Nancy
Sinatra; she said it summed up her own "take-charge attitude". As a young woman, she attempted to broaden her taste
in literature, art, and music, and during this time became interested in classical music. She noted that her favorite
style was baroque, and loved Mozart and Chopin because she liked their "feminine quality". Madonna's major influences
include Karen Carpenter, The Supremes and Led Zeppelin, as well as dancers Martha Graham and Rudolf Nureyev. She
also grew up listening to David Bowie, whose show was the first rock concert she ever attended. Madonna's Italian-Catholic
background and her relationship with her parents are reflected in the album Like a Prayer. It was an evocation of
the impact religion had on her career. Her video for the title track contains Catholic symbolism, such as the stigmata.
During The Virgin Tour, she wore a rosary and prayed with it in the music video for "La Isla Bonita". The "Open Your
Heart" video sees her boss scolding her in Italian. On the Who's That Girl World Tour, she dedicated
the song "Papa Don't Preach" to Pope John Paul II. During her childhood, Madonna was inspired by actors, later saying,
"I loved Carole Lombard and Judy Holliday and Marilyn Monroe. They were all incredibly funny ... and I saw myself
in them ... my girlishness, my knowingness and my innocence." Her "Material Girl" music video recreated Monroe's
look in the song "Diamonds Are a Girl's Best Friend", from the film Gentlemen Prefer Blondes (1953). She studied
the screwball comedies of the 1930s, particularly those of Lombard, in preparation for the film Who's That Girl.
The video for "Express Yourself" (1989) was inspired by Fritz Lang's silent film Metropolis (1927). The video for
"Vogue" recreated the style of Hollywood glamour photographs, in particular those by Horst P. Horst, and imitated
the poses of Marlene Dietrich, Carole Lombard, and Rita Hayworth, while the lyrics referred to many of the stars
who had inspired her, including Bette Davis, described by Madonna as an idol. However, Madonna's film career has
been largely received negatively by the film critic community. Stephanie Zacharek, critic for Time magazine, stated
that, "[Madonna] seems wooden and unnatural as an actress, and it's tough to watch, because she's clearly trying
her damnedest." According to biographer Andrew Morton, "Madonna puts a brave face on the criticism, but privately
she is deeply hurt." After the box office bomb Swept Away (2002), Madonna vowed that she would never again act in a film, hoping that her reputation as a bad actress would never be discussed again. Influences also came to her from the
art world, most notably through the works of Mexican artist Frida Kahlo. The music video of the song "Bedtime Story"
featured images inspired by the paintings of Kahlo and Remedios Varo. Madonna is also a collector of Tamara de Lempicka's
Art Deco paintings and has included them in her music videos and tours. Her video for "Hollywood" (2003) was an homage
to the work of photographer Guy Bourdin; Bourdin's son subsequently filed a lawsuit for unauthorised use of his father's
work. Pop artist Andy Warhol's use of sadomasochistic imagery in his underground films was reflected in the music videos for "Erotica" and "Deeper and Deeper". Madonna is dedicated to Kabbalah, and in 2004 she adopted the name
Esther, which in Persian means "star". She has donated millions of dollars to New York and London schools teaching
the subject. She faced opposition from rabbis who felt Madonna's adoption of the Kabbalah was sacrilegious and a
case of celebrity dilettantism. Madonna defended her studies, saying: "It would be less controversial if I joined
the Nazi Party", and that her involvement with the Kabbalah is "not hurting anybody". The influence of the Kabbalah
was subsequently observed in Madonna's music, especially albums like Ray of Light and Music. During the Re-Invention
World Tour, at one point in the show, Madonna and her dancers wore T-shirts that read "Kabbalists Do It Better".
Her 2012 MDNA album has also drawn many influences from her Catholic upbringing, and since 2011 she has been attending
meetings and services at an Opus Dei center, a Catholic institution that encourages spirituality through everyday
life. In The Madonna Companion biographers Allen Metz and Carol Benson noted that more than any other recent pop
artist, Madonna had used MTV and music videos to establish her popularity and enhance her recorded work. According
to them, many of her songs have the imagery of the music video in strong context, while referring to the music. Cultural
critic Mark C. Taylor in his book Nots (1993) felt that the postmodern art form par excellence is video and the reigning
"queen of video" is Madonna. He further asserted that "the most remarkable creation of MTV is Madonna. The responses
to Madonna's excessively provocative videos have been predictably contradictory." The media and public reaction towards
her most-discussed songs such as "Papa Don't Preach", "Like a Prayer", or "Justify My Love" had to do with the music
videos created to promote the songs and their impact, rather than the songs themselves. Morton felt that "artistically,
Madonna's songwriting is often overshadowed by her striking pop videos." Madonna's initial music videos reflected
her American and Hispanic mixed street style combined with a flamboyant glamor. She was able to transmit her avant-garde
downtown New York fashion sense to the American audience. The imagery and incorporation of Hispanic culture and Catholic
symbolism continued with the music videos from the True Blue era. Author Douglas Kellner noted, "such 'multiculturalism'
and her culturally transgressive moves turned out to be highly successful moves that endeared her to large and varied
youth audiences." Madonna's Spanish look in the videos became the fashion trend of that time, in the form of boleros
and layered skirts, accessorizing with rosary beads and a crucifix as in the video of "La Isla Bonita". Academics
noted that with her videos, Madonna was subtly reversing the usual role of male as the dominant sex. This symbolism
and imagery was probably the most prevalent in the music video for "Like a Prayer". The video included scenes of
an African-American church choir, Madonna being attracted to a statue of a black saint, and singing in front of burning
crosses. This mix of the sacred and the profane upset the Vatican and resulted in the Pepsi commercial withdrawal.
In 2003, MTV named her "The Greatest Music Video Star Ever" and said that "Madonna's innovation, creativity and contribution
to the music video art form is what won her the award." Madonna's emergence occurred during the advent of MTV; Chris
Nelson from The New York Times spoke of pop artists like Madonna saying, "MTV, with its almost exclusively lip-synched
videos, ushered in an era in which average music fans might happily spend hours a day, every day, watching singers
just mouth the words." The symbiotic relationship between the music video and lip-syncing led to a desire for the
spectacle and imagery of the music video to be transferred to live stage shows. He added, "Artists like Madonna and
Janet Jackson set new standards for showmanship, with concerts that included not only elaborate costumes and precision-timed
pyrotechnics but also highly athletic dancing. These effects came at the expense of live singing." Thor Christensen
of The Dallas Morning News commented that while Madonna earned a reputation for lip-syncing during her 1990 Blond
Ambition World Tour, she has subsequently reorganized her performances by "stay[ing] mostly still during her toughest
singing parts and [leaving] the dance routines to her backup troupe ... [r]ather than try to croon and dance up a
storm at the same time." To allow for greater movement while dancing and singing, Madonna was one of the earliest
adopters of hands-free radio-frequency headset microphones, with the headset fastened over the ears or the top of
the head, and the microphone capsule on a boom arm that extended to the mouth. Because of her prominent usage, the
microphone design came to be known as the "Madonna mic". Metz noted that Madonna represents a paradox as she is often
perceived as living her whole life as a performance. While her big-screen performances are panned, her live performances
are critical successes. Madonna was the first artist to stage her concert tours as reenactments of her music videos.
Author Elin Diamond explained that reciprocally, the fact that images from Madonna's videos can be recreated in a
live setting enhances the realism of the original videos. Thus her live performances have become the means by which
mediatized representations are naturalized. Taraborrelli said that, by encompassing multimedia, the latest technology,
and state-of-the-art sound systems, Madonna's concerts and live performances are deemed an "extravagant show piece, a walking art show."
Various music journalists, critical theorists, and authors have deemed Madonna the most influential female recording
artist of all time. Author Carol Clerk wrote that "during her career, Madonna has transcended the term 'pop star'
to become a global cultural icon." Rolling Stone of Spain wrote that "She became the first viral Master of Pop in
history, years before the Internet was massively used. Madonna was everywhere; in the almighty music television channels,
'radio formulas', magazine covers and even in bookshops. A pop dialectic, never seen since The Beatles's reign, which
allowed her to keep on the edge of tendency and commerciality." Laura Barcella in her book Madonna and Me: Women
Writers on the Queen of Pop (2012) wrote that "really, Madonna changed everything: the musical landscape, the '80s
look du jour, and most significantly, what a mainstream female pop star could (and couldn't) say, do, or accomplish
in the public eye." William Langley from The Daily Telegraph felt that "Madonna has changed the world's social history,
has done more things as more different people than anyone else is ever likely to." Alan McGee from The Guardian felt
that Madonna is post-modern art, the likes of which we will never see again. He further asserted that Madonna and
Michael Jackson invented the terms Queen and King of Pop. According to Tony Sclafani from MSNBC, "It's worth noting
that before Madonna, most music mega-stars were guy rockers; after her, almost all would be female singers ... When
The Beatles hit America, they changed the paradigm of performer from solo act to band. Madonna changed it back—with
an emphasis on the female." Howard Kramer, curatorial director of the Rock and Roll Hall of Fame and Museum, asserted
that "Madonna and the career she carved out for herself made possible virtually every other female pop singer to
follow ... She certainly raised the standards of all of them ... She redefined what the parameters were for female
performers." According to Fouz-Hernández, subsequent female singers such as Britney Spears, Christina Aguilera, Kylie
Minogue, the Spice Girls, Destiny's Child, Jennifer Lopez, and Pink were like her "daughters in the very direct sense
that they grew up listening to and admiring Madonna, and decided they wanted to be like her." Time magazine included
her in the list of the "25 Most Powerful Women of the Past Century", where she became one of only two singers to
be included, alongside Aretha Franklin. She also topped VH1's lists of "100 Greatest Women in Music" and "50 Greatest
Women of the Video Era". Madonna's use of sexual imagery has benefited her career and catalyzed public discourse
on sexuality and feminism. As Roger Chapman documents in Culture Wars: An Encyclopedia of Issues, Viewpoints, and
Voices, Volume 1 (2010), she has drawn frequent condemnation from religious organizations, social conservatives and
parental watchdog groups for her use of explicit, sexual imagery and lyrics, religious symbolism, and otherwise "irreverent"
behavior in her live performances. The Times wrote that she had "started a revolution amongst women in music ...
Her attitudes and opinions on sex, nudity, style and sexuality forced the public to sit up and take notice." Professor
John Fiske noted that the sense of empowerment that Madonna offers is inextricably connected with the pleasure of
exerting some control over the meanings of self, of sexuality, and of one's social relations. In Doing Gender in
Media, Art and Culture (2009), the authors noted that Madonna, as a female celebrity, performer, and pop icon, is
able to unsettle standing feminist reflections and debates. According to lesbian feminist Sheila Jeffreys, Madonna
represents woman's occupancy of what Monique Wittig calls the category of sex, as powerful, and appears to gleefully
embrace the performance of the sexual corvée allotted to women. Professor Sut Jhally has referred to Madonna as "an
almost sacred feminist icon." Madonna has received acclaim as a role model for businesswomen in her industry, "achieving
the kind of financial control that women had long fought for within the industry", and generating over $1.2 billion
in sales within the first decade of her career. Professor Colin Barrow from Cranfield School of Management described
Madonna as "America's smartest businesswoman ... who has moved to the top of her industry and stayed there by constantly
reinventing herself." London Business School academics called her a "dynamic entrepreneur" worth copying; they identified
her vision of success, her understanding of the music industry, her ability to recognize her own performance limits
(and thus bring in help), her willingness to work hard and her ability to adapt as the keys to her commercial success.
Morton wrote that "Madonna is opportunistic, manipulative, and ruthless—somebody who won't stop until she gets what
she wants—and that's something you can get at the expense of maybe losing your close ones. But that hardly mattered
to her." Hazel Blackmore and Rafael Fernández de Castro in the book ¿Qué es Estados Unidos? from the Fondo de Cultura
Económica, noted: "Madonna has been undoubtedly the most important woman in the history of popular music and a great
businesswoman in herself; creating fashion, breaking taboos and provoking controversies." Madonna has sold more than
300 million records worldwide. Guinness World Records acknowledged her as the best-selling female recording artist
and the fourth best-selling act of all time, behind The Beatles, Elvis Presley, and Michael Jackson. According to
the Recording Industry Association of America (RIAA), she is the best-selling female rock artist of the 20th century
and the second top-selling female albums artist in the United States, with 64.5 million certified albums. Madonna
is the most certified artist of all time in the United Kingdom, with 45 awards from the British Phonographic Industry
(BPI) as of April 2013. Billboard named Madonna the top touring female artist of all time. She is also the
highest-grossing solo touring artist, with over $1.31 billion in concert gross since the Blond Ambition World Tour;
she first crossed the billion-dollar mark with The MDNA Tour. Overall, Madonna ranks third on the all-time top-grossing Billboard
Boxscore list, with just The Rolling Stones ($1.84 billion) and U2 ($1.67 billion) ahead of her. Madonna has been
honored with 20 MTV Video Music Awards—the most for any artist—including the lifetime achievement Video Vanguard
Award in 1986. Madonna holds the record for the most number-ones on all combined Billboard charts, including twelve
number-one songs on the Billboard Hot 100 and eight number-one albums on the Billboard 200. With 45 songs topping
the Hot Dance Club Songs chart, Madonna became the artist with the most number-one songs on an active Billboard chart,
pulling ahead of George Strait with 44 number-one songs on the Hot Country Songs chart. She has also scored 38 top-ten
singles on the Hot 100, more than any other artist in history. In 2008, Billboard magazine ranked her at number two,
behind The Beatles, on the Billboard Hot 100 All-Time Top Artists list, making her the most successful solo artist in
the history of the American singles chart.
The law of the United States comprises many levels of codified and uncodified forms of law, of which the most important is
the United States Constitution, the foundation of the federal government of the United States. The Constitution sets
out the boundaries of federal law, which consists of acts of Congress, treaties ratified by the Senate, regulations
promulgated by the executive branch, and case law originating from the federal judiciary. The United States Code
is the official compilation and codification of general and permanent federal statutory law. Federal law and treaties,
so long as they are in accordance with the Constitution, preempt conflicting state and territorial laws in the 50
U.S. states and in the territories. However, the scope of federal preemption is limited because the scope of federal
power is not universal. In the dual-sovereign system of American federalism (actually tripartite because of the presence
of Indian reservations), states are the plenary sovereigns, each with their own constitution, while the federal sovereign
possesses only the limited supreme authority enumerated in the Constitution. Indeed, states may grant their citizens
broader rights than the federal Constitution as long as they do not infringe on any federal constitutional rights.
Thus, most U.S. law (especially the actual "living law" of contract, tort, property, criminal, and family law experienced
by the majority of citizens on a day-to-day basis) consists primarily of state law, which can and does vary greatly
from one state to the next. Notably, a statute does not disappear automatically merely because it has been found
unconstitutional; it must be deleted by a subsequent statute. Many federal and state statutes have remained on the
books for decades after they were ruled to be unconstitutional. However, under the principle of stare decisis, no
sensible lower court will enforce an unconstitutional statute, and any court that does so will be reversed by the
Supreme Court. Conversely, any court that refuses to enforce a constitutional statute (where such constitutionality
has been expressly established in prior cases) will risk reversal by the Supreme Court. Notably, the most broadly
influential innovation of 20th-century American tort law was the rule of strict liability for defective products,
which originated with judicial glosses on the law of warranty. In 1963, Roger J. Traynor of the Supreme Court of
California threw away legal fictions based on warranties and imposed strict liability for defective products as a
matter of public policy in the landmark case of Greenman v. Yuba Power Products. The American Law Institute subsequently
adopted a slightly different version of the Greenman rule in Section 402A of the Restatement (Second) of Torts, which
was published in 1964 and was very influential throughout the United States. Outside the U.S., the rule was adopted
by the European Economic Community in the Product Liability Directive of July 1985, by Australia in July 1992, and
by Japan in June 1994. Tort law covers the entire imaginable spectrum of wrongs which humans can inflict upon each
other, and of course, partially overlaps with wrongs also punishable by criminal law. Although the American Law Institute
has attempted to standardize tort law through the development of several versions of the Restatement of Torts, many
states have chosen to adopt only certain sections of the Restatements and to reject others. Thus, because of its
immense size and diversity, American tort law cannot be easily summarized. However, it is important to understand
that despite the presence of reception statutes, much of contemporary American common law has diverged significantly
from English common law. The reason is that although the courts of the various Commonwealth nations are often influenced
by each other's rulings, American courts rarely follow post-Revolution Commonwealth rulings unless there is no American
ruling on point, the facts and law at issue are nearly identical, and the reasoning is strongly persuasive. The actual
substance of English law was formally "received" into the United States in several ways. First, all U.S. states except
Louisiana have enacted "reception statutes" which generally state that the common law of England (particularly judge-made
law) is the law of the state to the extent that it is not repugnant to domestic law or indigenous conditions. Some
reception statutes impose a specific cutoff date for reception, such as the date of a colony's founding, while others
are deliberately vague. Thus, contemporary U.S. courts often cite pre-Revolution cases when discussing the evolution
of an ancient judge-made common law principle into its modern form, such as the heightened duty of care traditionally
imposed upon common carriers. Early on, American courts, even after the Revolution, often did cite contemporary English
cases. This was because appellate decisions from many American courts were not regularly reported until the mid-19th
century; lawyers and judges, as creatures of habit, used English legal materials to fill the gap. But citations to
English decisions gradually disappeared during the 19th century as American courts developed their own principles
to resolve the legal problems of the American people. The number of published volumes of American reports soared
from eighteen in 1810 to over 8,000 by 1910. By 1879 one of the delegates to the California constitutional convention
was already complaining: "Now, when we require them to state the reasons for a decision, we do not mean they shall
write a hundred pages of detail. We [do] not mean that they shall include the small cases, and impose on the country
all this fine judicial literature, for the Lord knows we have got enough of that already." Federal law originates
with the Constitution, which gives Congress the power to enact statutes for certain limited purposes like regulating
interstate commerce. The United States Code is the official compilation and codification of the general and permanent
federal statutes. Many statutes give executive branch agencies the power to create regulations, which are published
in the Federal Register and codified into the Code of Federal Regulations. Regulations generally also carry the force
of law under the Chevron doctrine. Many lawsuits turn on the meaning of a federal statute or regulation, and judicial
interpretations of such meaning carry legal force under the principle of stare decisis. During the 18th and 19th
centuries, federal law traditionally focused on areas where there was an express grant of power to the federal government
in the federal Constitution, like the military, money, foreign relations (especially international treaties), tariffs,
intellectual property (specifically patents and copyrights), and mail. Since the start of the 20th century, broad
interpretations of the Commerce and Spending Clauses of the Constitution have enabled federal law to expand into
areas like aviation, telecommunications, railroads, pharmaceuticals, antitrust, and trademarks. In some areas, like
aviation and railroads, the federal government has developed a comprehensive scheme that preempts virtually all state
law, while in others, like family law, a relatively small number of federal statutes (generally covering interstate
and international situations) interacts with a much larger body of state law. In areas like antitrust, trademark,
and employment law, there are powerful laws at both the federal and state levels that coexist with each other. In
a handful of areas like insurance, Congress has enacted laws expressly refusing to regulate them as long as the states
have laws regulating them (see, e.g., the McCarran-Ferguson Act). After the President signs a bill into law (or Congress
enacts it over his veto), it is delivered to the Office of the Federal Register (OFR) of the National Archives and
Records Administration (NARA) where it is assigned a law number, and prepared for publication as a slip law. Public
laws, but not private laws, are also given legal statutory citation by the OFR. At the end of each session of Congress,
the slip laws are compiled into bound volumes called the United States Statutes at Large, and they are known as session
laws. The Statutes at Large present a chronological arrangement of the laws in the exact order that they have been
enacted. Congress often enacts statutes that grant broad rulemaking authority to federal agencies. Often, Congress
is simply too gridlocked to draft detailed statutes that explain how the agency should react to every possible situation,
or Congress believes the agency's technical specialists are best equipped to deal with particular fact situations
as they arise. Therefore, federal agencies are authorized to promulgate regulations. Under the principle of Chevron
deference, regulations normally carry the force of law as long as they are based on a reasonable interpretation of
the relevant statutes. The difficult question is whether federal judicial power extends to formulating binding precedent
through strict adherence to the rule of stare decisis. This is where the act of deciding a case becomes a limited
form of lawmaking in itself, in that an appellate court's rulings will thereby bind itself and lower courts in future
cases (and therefore also impliedly binds all persons within the court's jurisdiction). Prior to a major change to
federal court rules in 2007, about one-fifth of federal appellate cases were published and thereby became binding
precedents, while the rest were unpublished and bound only the parties to each case. As federal judge Alex Kozinski
has pointed out, binding precedent as we know it today simply did not exist at the time the Constitution was framed.
Judicial decisions were not consistently, accurately, and faithfully reported on both sides of the Atlantic (reporters
often simply rewrote or failed to publish decisions which they disliked), and the United Kingdom lacked a coherent
court hierarchy prior to the end of the 19th century. Furthermore, English judges in the eighteenth century subscribed
to now-obsolete natural law theories of law, by which law was believed to have an existence independent of what individual
judges said. Judges saw themselves as merely declaring the law which had always theoretically existed, and not as
making the law. Therefore, a judge could reject another judge's opinion as simply an incorrect statement of the law,
in the way that scientists regularly reject each other's conclusions as incorrect statements of the laws of science.
Unlike the situation with the states, there is no plenary reception statute at the federal level that continued the
common law and thereby granted federal courts the power to formulate legal precedent like their English predecessors.
Federal courts are solely creatures of the federal Constitution and the federal Judiciary Acts. However, it is universally
accepted that the Founding Fathers of the United States, by vesting "judicial power" into the Supreme Court and the
inferior federal courts in Article Three of the United States Constitution, thereby vested in them the implied judicial
power of common law courts to formulate persuasive precedent, a power that was widely understood and recognized
at the time the Constitution was ratified. Several legal scholars have argued that the federal
judicial power to decide "cases or controversies" necessarily includes the power to decide the precedential effect
of those cases and controversies. In turn, according to Kozinski's analysis, the contemporary rule of binding precedent
became possible in the U.S. in the nineteenth century only after the creation of a clear court hierarchy (under the
Judiciary Acts), and the beginning of regular verbatim publication of U.S. appellate decisions by West Publishing.
The rule gradually developed, case-by-case, as an extension of the judiciary's public policy of effective judicial
administration (that is, in order to efficiently exercise the judicial power). The rule of precedent is generally
justified today as a matter of public policy, first, as a matter of fundamental fairness, and second, because in
the absence of case law, it would be completely unworkable for every minor issue in every legal case to be briefed,
argued, and decided from first principles (such as relevant statutes, constitutional provisions, and underlying public
policies), which in turn would create hopeless inefficiency, instability, and unpredictability, and thereby undermine
the rule of law. Under the doctrine of Erie Railroad Co. v. Tompkins (1938), there is no general federal common law.
Although federal courts can create federal common law in the form of case law, such law must be linked one way or
another to the interpretation of a particular federal constitutional provision, statute, or regulation (which in
turn was enacted as part of the Constitution or after). Federal courts lack the plenary power possessed by state
courts to simply make up law, which the latter are able to do in the absence of constitutional or statutory provisions
replacing the common law. Only in a few narrow limited areas, like maritime law, has the Constitution expressly authorized
the continuation of English common law at the federal level (meaning that in those areas federal courts can continue
to make law as they see fit, subject to the limitations of stare decisis). The other major implication of the Erie
doctrine is that federal courts cannot dictate the content of state law when there is no federal issue (and thus
no federal supremacy issue) in a case. When hearing claims under state law pursuant to diversity jurisdiction, federal
trial courts must apply the statutory and decisional law of the state in which they sit, as if they were a court
of that state, even if they believe that the relevant state law is irrational or just bad public policy. And under
Erie, deference is one-way only: state courts are not bound by federal interpretations of state law. The fifty American
states are separate sovereigns, with their own state constitutions, state governments, and state courts. All states
have a legislative branch which enacts state statutes, an executive branch that promulgates state regulations pursuant
to statutory authorization, and a judicial branch that applies, interprets, and occasionally overturns both state
statutes and regulations, as well as local ordinances. They retain plenary power to make laws covering anything not
preempted by the federal Constitution, federal statutes, or international treaties ratified by the federal Senate.
Normally, state supreme courts are the final interpreters of state constitutions and state law, unless their interpretation
itself presents a federal issue, in which case a decision may be appealed to the U.S. Supreme Court by way of a petition
for writ of certiorari. State laws have dramatically diverged in the centuries since independence, to the extent
that the United States cannot be regarded as one legal system as to the majority of types of law traditionally under
state control, but must be regarded as 50 separate systems of tort law, family law, property law, contract law, criminal
law, and so on. Most cases are litigated in state courts and involve claims and defenses under state laws. In a 2012
report, the National Center for State Courts' Court Statistics Project found that state trial courts received 103.5
million newly filed cases in 2010, which consisted of 56.3 million traffic cases, 20.4 million criminal cases, 19.0
million civil cases, 5.9 million domestic relations cases, and 1.9 million juvenile cases. In 2010, state appellate
courts received 272,795 new cases. By way of comparison, all federal district courts in 2010 together received only
about 282,000 new civil cases, 77,000 new criminal cases, and 1.5 million bankruptcy cases, while federal appellate
courts received 56,000 new cases. The law of criminal procedure in the United States consists of a massive overlay
of federal constitutional case law interwoven with the federal and state statutes that actually provide the foundation
for the creation and operation of law enforcement agencies and prison systems as well as the proceedings in criminal
trials. Due to the perennial inability of legislatures in the U.S. to enact statutes that would actually force law
enforcement officers to respect the constitutional rights of criminal suspects and convicts, the federal judiciary
gradually developed the exclusionary rule as a method to enforce such rights. In turn, the exclusionary rule spawned
a family of judge-made remedies for the abuse of law enforcement powers, of which the most famous is the Miranda
warning. The writ of habeas corpus is often used by suspects and convicts to challenge their detention, while the
Civil Rights Act of 1871 and Bivens actions are used by suspects to recover tort damages for police brutality. The
law of civil procedure governs process in all judicial proceedings involving lawsuits between private parties. Traditional
common law pleading was replaced by code pleading in 24 states after New York enacted the Field Code in 1850; code
pleading was in turn replaced in most states by modern notice pleading during the 20th century.
The old English division between common law and equity courts was abolished in the federal courts by the adoption
of the Federal Rules of Civil Procedure in 1938; it has also been independently abolished by legislative acts in
nearly all states. The Delaware Court of Chancery is the most prominent of the small number of remaining equity courts.
New York, Illinois, and California are the most significant states that have not adopted the FRCP. Furthermore, all
three states continue to maintain most of their civil procedure laws in the form of codified statutes enacted by
the state legislature, as opposed to court rules promulgated by the state supreme court, on the ground that the latter
are undemocratic. But certain key portions of their civil procedure laws have been modified by their legislatures
to bring them closer to federal civil procedure. Generally, American civil procedure has several notable features,
including extensive pretrial discovery, heavy reliance on live testimony obtained at deposition or elicited in front
of a jury, and aggressive pretrial "law and motion" practice designed to result in a pretrial disposition (that is,
summary judgment) or a settlement. U.S. courts pioneered the concept of the opt-out class action, by which the burden
falls on class members to notify the court that they do not wish to be bound by the judgment, as opposed to opt-in
class actions, where class members must join into the class. Another unique feature is the so-called American Rule
under which parties generally bear their own attorneys' fees (as opposed to the English Rule of "loser pays"), though
American legislators and courts have carved out numerous exceptions. Criminal law involves the prosecution by the
state of wrongful acts which are considered to be so serious that they are a breach of the sovereign's peace (and
cannot be deterred or remedied by mere lawsuits between private parties). Generally, crimes can result in incarceration,
but torts (see below) cannot. The majority of the crimes committed in the United States are prosecuted and punished
at the state level. Federal criminal law focuses on areas specifically relevant to the federal government like evading
payment of federal income tax, mail theft, or physical attacks on federal officials, as well as interstate crimes
like drug trafficking and wire fraud. Some states distinguish between two levels: felonies and misdemeanors (minor
crimes). Generally, most felony convictions result in lengthy prison sentences as well as subsequent probation, large
fines, and orders to pay restitution directly to victims; while misdemeanors may lead to a year or less in jail and
a substantial fine. To simplify the prosecution of traffic violations and other relatively minor crimes, some states
have added a third level, infractions. These may result in fines and sometimes the loss of one's driver's license,
but no jail time. Contract law covers obligations established by agreement (express or implied) between private parties.
Generally, contract law in transactions involving the sale of goods has become highly standardized nationwide as
a result of the widespread adoption of the Uniform Commercial Code. However, there is still significant diversity
in the interpretation of other kinds of contracts, depending upon the extent to which a given state has codified
its common law of contracts or adopted portions of the Restatement (Second) of Contracts.
Myanmar (/miɑːnˈmɑːr/ mee-ahn-MAR, /miˈɛnmɑːr/ mee-EN-mar or /maɪˈænmɑːr/ my-AN-mar, also with the stress on the first
syllable; Burmese pronunciation: [mjəmà]), officially the Republic of the Union of Myanmar and also known
as Burma, is a sovereign state in Southeast Asia bordered by Bangladesh, India, China, Laos and Thailand. One-third
of Myanmar's total perimeter of 1,930 km (1,200 miles) forms an uninterrupted coastline along the Bay of Bengal and
the Andaman Sea. The country's 2014 census revealed a much lower population than expected, with 51 million people
recorded. Myanmar is 676,578 square kilometres (261,227 sq mi) in size. Its capital city is Naypyidaw and its largest
city is Yangon (Rangoon). Early civilisations in Myanmar included the Tibeto-Burman-speaking Pyu city-states in Upper
Burma and the Mon kingdoms in Lower Burma. In the 9th century, the Bamar people entered the upper Irrawaddy valley
and, following the establishment of the Pagan Kingdom in the 1050s, the Burmese language, culture and Theravada Buddhism
slowly became dominant in the country. The Pagan Kingdom fell due to the Mongol invasions and several warring states
emerged. In the 16th century, reunified by the Taungoo Dynasty, the country was for a brief period the largest empire
in the history of Southeast Asia. The early 19th century Konbaung Dynasty ruled over an area that included modern
Myanmar and briefly controlled Manipur and Assam as well. The British conquered Myanmar after three Anglo-Burmese
Wars in the 19th century and the country became a British colony. Myanmar became an independent nation in 1948, initially
as a democratic nation and then, following a coup d'état in 1962, a military dictatorship. For most of its independent
years, the country has been engrossed in rampant ethnic strife and Burma's myriad ethnic groups have been involved
in one of the world's longest-running ongoing civil wars. During this time, the United Nations and several other
organisations have reported consistent and systematic human rights violations in the country. In 2011, the military
junta was officially dissolved following a 2010 general election, and a nominally civilian government was installed.
While former military leaders still wield enormous power in the country, the Burmese military has taken steps toward relinquishing control of the government. This, along with the release of Aung San Suu Kyi and other political prisoners,
has improved the country's human rights record and foreign relations, and has led to the easing of trade and other
economic sanctions. There is, however, continuing criticism of the government's treatment of the Muslim Rohingya
minority and its poor response to the religious clashes. In the landmark 2015 election, Aung San Suu Kyi's party
won a majority in both houses, ending military rule. In English, the country is popularly known as either "Burma"
or "Myanmar" i/ˈmjɑːnˌmɑːr/. Both these names are derived from the name of the majority Burmese Bamar ethnic group.
Myanmar is considered to be the literary form of the name of the group, while Burma is derived from "Bamar", the
colloquial form of the group's name. Depending on the register used, the pronunciation would be Bama (pronounced:
[bəmà]) or Myamah (pronounced: [mjəmà]). The name Burma has been in use in English since the 18th century. Burma
continues to be used in English by the governments of many countries, such as Australia, Canada and the United Kingdom.
Official United States policy retains Burma as the country's name, although the State Department's website lists
the country as "Burma (Myanmar)" and Barack Obama has referred to the country by both names. The Czech Republic officially uses Myanmar, although its Ministry of Foreign Affairs mentions both Myanmar and Burma on its website. The
United Nations uses Myanmar, as do the Association of Southeast Asian Nations, Russia, Germany, China, India, Norway,
and Japan. Archaeological evidence shows that Homo erectus lived in the region now known as Myanmar as early as 400,000
years ago. The first evidence of Homo sapiens is dated to about 11,000 BC, in a Stone Age culture called the Anyathian
with discoveries of stone tools in central Myanmar. Evidence of neolithic age domestication of plants and animals
and the use of polished stone tools dating to sometime between 10,000 and 6,000 BC has been discovered in the form
of cave paintings near the city of Taunggyi. The Bronze Age arrived circa 1500 BC when people in the region were
turning copper into bronze, growing rice and domesticating poultry and pigs; they were among the first people in
the world to do so. Human remains and artifacts from this era were discovered in Monywa District in the Sagaing Division.
The Iron Age began around 500 BC with the emergence of iron-working settlements in an area south of present-day Mandalay.
Evidence also shows the presence of rice-growing settlements of large villages and small towns that traded with their
surroundings as far as China between 500 BC and 200 AD. Iron Age Burmese cultures also had influences from outside
sources such as India and Thailand, as seen in their funerary practices concerning child burials. This indicates
some form of communication between groups in Myanmar and other places, possibly through trade. Around the second
century BC the first-known city-states emerged in central Myanmar. The city-states were founded as part of the southward
migration by the Tibeto-Burman-speaking Pyu city-states, the earliest inhabitants of Myanmar of whom records are
extant, from present-day Yunnan. The Pyu culture was heavily influenced by trade with India, importing Buddhism as
well as other cultural, architectural and political concepts which would have an enduring influence on later Burmese
culture and political organisation. Pagan's collapse was followed by 250 years of political fragmentation that lasted
well into the 16th century. Like the Burmans four centuries earlier, Shan migrants who arrived with the Mongol invasions
stayed behind. Several competing Shan States came to dominate the entire northwestern to eastern arc surrounding
the Irrawaddy valley. The valley too was beset with petty states until the late 14th century when two sizeable powers,
Ava Kingdom and Hanthawaddy Kingdom, emerged. In the west, a politically fragmented Arakan was under competing influences
of its stronger neighbours until the Kingdom of Mrauk U unified the Arakan coastline for the first time in 1437.
Like the Pagan Empire, Ava, Hanthawaddy and the Shan states were all multi-ethnic polities. Despite the wars, cultural
synchronisation continued. This period is considered a golden age for Burmese culture. Burmese literature "grew more
confident, popular, and stylistically diverse", and the second generation of Burmese law codes as well as the earliest
pan-Burma chronicles emerged. Hanthawaddy monarchs introduced religious reforms that later spread to the rest of
the country. Many splendid temples of Mrauk U were built during this period. Political unification returned in the
mid-16th century, due to the efforts of Taungoo, a former vassal state of Ava. Taungoo's young, ambitious king Tabinshwehti
defeated the more powerful Hanthawaddy in the Toungoo–Hanthawaddy War (1534–41). His successor Bayinnaung went on
to conquer a vast swath of mainland Southeast Asia including the Shan states, Lan Na, Manipur, Mong Mao, the Ayutthaya
Kingdom, Lan Xang and southern Arakan. However, the largest empire in the history of Southeast Asia unravelled soon
after Bayinnaung's death in 1581, completely collapsing by 1599. Ayutthaya seized Tenasserim and Lan Na, and Portuguese
mercenaries established Portuguese rule at Thanlyin (Syriam). The dynasty regrouped and defeated the Portuguese in
1613 and Siam in 1614. It restored a smaller, more manageable kingdom, encompassing Lower Myanmar, Upper Myanmar,
Shan states, Lan Na and upper Tenasserim. The Restored Toungoo kings created a legal and political framework whose
basic features would continue well into the 19th century. The crown completely replaced the hereditary chieftainships
with appointed governorships in the entire Irrawaddy valley, and greatly reduced the hereditary rights of Shan chiefs.
Its trade and secular administrative reforms built a prosperous economy for more than 80 years. From the 1720s onward,
the kingdom was beset with repeated Meithei raids into Upper Myanmar and a nagging rebellion in Lan Na. In 1740,
the Mon of Lower Myanmar founded the Restored Hanthawaddy Kingdom. Hanthawaddy forces sacked Ava in 1752, ending
the 266-year-old Toungoo Dynasty. With Burma preoccupied by the Chinese threat, Ayutthaya recovered its territories by 1770, and went on to capture Lan Na by 1776. Burma and Siam fought repeated wars until 1855, all of which ended in stalemate, with Tenasserim going to Burma and Lan Na to Ayutthaya. Faced with a powerful China and a resurgent Ayutthaya
in the east, King Bodawpaya turned west, acquiring Arakan (1785), Manipur (1814) and Assam (1817). It was the second-largest
empire in Burmese history but also one with a long ill-defined border with British India. Konbaung kings extended
Restored Toungoo's administrative reforms, and achieved unprecedented levels of internal control and external expansion.
For the first time in history, the Burmese language and culture came to predominate the entire Irrawaddy valley.
The evolution and growth of Burmese literature and theatre continued, aided by an extremely high adult male literacy
rate for the era (half of all males and 5% of females). Nonetheless, the extent and pace of reforms were uneven and
ultimately proved insufficient to stem the advance of British colonialism. Burmese resentment was strong and was
vented in violent riots that paralysed Yangon (Rangoon) on occasion all the way until the 1930s. Some of the discontent
was caused by a disrespect for Burmese culture and traditions such as the British refusal to remove shoes when they
entered pagodas. Buddhist monks became the vanguards of the independence movement. U Wisara, an activist monk, died
in prison after a 166-day hunger strike to protest against a rule that forbade him from wearing his Buddhist robes
while imprisoned. A major battleground, Burma was devastated during World War II. By March 1942, within months after
they entered the war, Japanese troops had advanced on Rangoon and the British administration had collapsed. A Burmese
Executive Administration headed by Ba Maw was established by the Japanese in August 1942. Wingate's British Chindits
were formed into long-range penetration groups trained to operate deep behind Japanese lines. A similar American
unit, Merrill's Marauders, followed the Chindits into the Burmese jungle in 1943. Beginning in late 1944, allied
troops launched a series of offensives that led to the end of Japanese rule in July 1945. The battles were intense
with much of Burma laid waste by the fighting. Overall, the Japanese lost some 150,000 men in Burma. Only 1,700 prisoners
were taken. Following World War II, Aung San negotiated the Panglong Agreement with ethnic leaders that guaranteed
the independence of Myanmar as a unified state. Aung Zan Wai, Pe Khin, Bo Hmu Aung, Sir Maung Gyi, Dr. Sein Mya Maung and Myoma U Than Kywe were among the negotiators at the historic Panglong Conference, held with Bamar leader General Aung San and other ethnic leaders in 1947. That year, Aung San became Deputy Chairman of the Executive Council of Myanmar,
a transitional government. But in July 1947, political rivals assassinated Aung San and several cabinet members.
In 1988, unrest over economic mismanagement and political oppression by the government led to widespread pro-democracy
demonstrations throughout the country known as the 8888 Uprising. Security forces killed thousands of demonstrators,
and General Saw Maung staged a coup d'état and formed the State Law and Order Restoration Council (SLORC). In 1989,
SLORC declared martial law after widespread protests. The military government finalised plans for People's Assembly
elections on 31 May 1989. SLORC changed the country's official English name from the "Socialist Republic of the Union
of Burma" to the "Union of Myanmar" in 1989. In August 2007, an increase in the price of diesel and petrol led to the Saffron Revolution, a wave of protests led by Buddhist monks. The government cracked down on the protesters on 26 September 2007. The crackdown was harsh, with reports of barricades at the Shwedagon Pagoda and monks
killed. There were also rumours of disagreement within the Burmese armed forces, but none was confirmed. The military crackdown against unarmed protesters was widely condemned internationally and led to an increase in economic sanctions against the Burmese government. In May 2008, Cyclone Nargis caused extensive
damage in the densely populated, rice-farming delta of the Irrawaddy Division. It was the worst natural disaster in Burmese history, with an estimated 200,000 people dead or missing, damage totalling US$10 billion, and as many as 1 million people left homeless. In the critical days following this disaster, Myanmar's isolationist
government was accused of hindering United Nations recovery efforts. Humanitarian aid was requested but concerns
about foreign military or intelligence presence in the country delayed the entry of United States military planes
delivering medicine, food, and other supplies. As of October 2012, the ongoing conflicts in Myanmar included the Kachin conflict, between the pro-Christian Kachin Independence Army and the government; a civil war in Rakhine State between the Rohingya Muslims and the government and non-government groups; and a conflict between the Shan, Lahu and Karen minority groups and the government in the eastern half of the country. In addition, al-Qaeda signalled
an intention to become involved in Myanmar. In a video released 3 September 2014 mainly addressed to India, the militant
group's leader Ayman al-Zawahiri said al-Qaeda had not forgotten the Muslims of Myanmar and that the group was doing
"what they can to rescue you". In response, the military raised its level of alertness while the Burmese Muslim Association
issued a statement saying Muslims would not tolerate any threat to their motherland. Armed conflict between ethnic Chinese rebels and the Myanmar Armed Forces resulted in the Kokang offensive in February 2015. The conflict
had forced 40,000 to 50,000 civilians to flee their homes and seek shelter on the Chinese side of the border. During
the incident the government of China was accused of giving military assistance to the ethnic Chinese rebels. Throughout modern Burmese history, officials have reportedly been 'manipulated' and pressured by the communist Chinese government to forge closer, binding ties with China, in effect creating a Chinese satellite state in Southeast Asia. The goal of the Burmese constitutional referendum of 2008, held on 10 May 2008, was the creation of a "discipline-flourishing
democracy". As part of the referendum process, the name of the country was changed from the "Union of Myanmar" to
the "Republic of the Union of Myanmar", and general elections were held under the new constitution in 2010. Observer
accounts of the 2010 election describe the event as mostly peaceful; however, allegations of polling station irregularities
were raised, and the United Nations (UN) and a number of Western countries condemned the elections as fraudulent.
Opinions differ on whether the transition to liberal democracy is underway. According to some reports, the military's
presence continues as the label 'disciplined democracy' suggests. This label asserts that the Burmese military is
allowing certain civil liberties while clandestinely institutionalising itself further into Burmese politics. Such
an assertion assumes that reforms only occurred when the military was able to safeguard its own interests through
the transition—here, "transition" does not refer to a transition to a liberal democracy, but transition to a quasi-military
rule. Since the 2010 election, the government has embarked on a series of reforms to direct the country towards liberal
democracy, a mixed economy, and reconciliation, although doubts persist about the motives that underpin such reforms.
The series of reforms includes the release of pro-democracy leader Aung San Suu Kyi from house arrest, the establishment
of the National Human Rights Commission, the granting of general amnesties for more than 200 political prisoners,
new labour laws that permit labour unions and strikes, a relaxation of press censorship, and the regulation of currency
practices. The impact of the post-election reforms has been observed in numerous areas, including ASEAN's approval
of Myanmar's bid for the position of ASEAN chair in 2014; the visit by United States Secretary of State Hillary Clinton
in December 2011 for the encouragement of further progress—it was the first visit by a Secretary of State in more
than fifty years (Clinton met with the Burmese president and former military commander Thein Sein, as well as opposition
leader Aung San Suu Kyi); and the participation of Aung San Suu Kyi's National League for Democracy (NLD) party in
the 2012 by-elections, facilitated by the government's abolition of the laws that previously barred the NLD. As of
July 2013, about 100 political prisoners remain imprisoned, while conflict between the Burmese Army and local insurgent
groups continues. In the 1 April 2012 by-elections, the NLD won 43 of the 45 available seats; previously an illegal organisation, the NLD had never before won a Burmese election. The 2012 by-elections were also the first time that international
representatives were allowed to monitor the voting process in Myanmar. Following announcement of the by-elections,
the Freedom House organisation raised concerns about "reports of fraud and harassment in the lead up to elections,
including the March 23 deportation of Somsri Hananuntasuk, executive director of the Asian Network for Free Elections
(ANFREL), a regional network of civil society organisations promoting democratization." However, uncertainties exist
as some other political prisoners have not been released and clashes between Burmese troops and local insurgent groups
continue. Burma is bordered in the northwest by the Chittagong Division of Bangladesh and the Mizoram, Manipur, Nagaland and Arunachal Pradesh states of India. Its northern and northeastern border is with China's Tibet Autonomous Region and Yunnan province, for a total Sino-Burmese border of 2,185 km (1,358 mi). It is bounded by Laos and Thailand to the southeast.
Burma has 1,930 km (1,200 mi) of contiguous coastline along the Bay of Bengal and Andaman Sea to the southwest and
the south, which forms one quarter of its total perimeter. Much of the country lies between the Tropic of Cancer
and the Equator. It lies in the monsoon region of Asia, with its coastal regions receiving over 5,000 mm (196.9 in)
of rain annually. Annual rainfall in the delta region is approximately 2,500 mm (98.4 in), while average annual rainfall
in the Dry Zone in central Myanmar is less than 1,000 mm (39.4 in). The northern regions of Myanmar are the coolest,
with average temperatures of 21 °C (70 °F). Coastal and delta regions have an average maximum temperature of 32 °C
(89.6 °F). Typical jungle animals, particularly tigers and leopards, occur sparsely in Myanmar. In upper Myanmar,
there are rhinoceros, wild buffalo, wild boars, deer, antelope, and elephants, which are also tamed or bred in captivity
for use as work animals, particularly in the lumber industry. Smaller mammals are also numerous, ranging from gibbons
and monkeys to flying foxes and tapirs. The abundance of birds is notable with over 800 species, including parrots,
peafowl, pheasants, crows, herons, and paddybirds. Among reptile species there are crocodiles, geckos, cobras, Burmese
pythons, and turtles. Hundreds of species of freshwater fish are wide-ranging, plentiful and are very important food
sources. The elections of 2010 resulted in
a victory for the military-backed Union Solidarity and Development Party. Various foreign observers questioned the
fairness of the elections. One criticism of the election was that only government sanctioned political parties were
allowed to contest in it and the popular National League for Democracy was declared illegal. However, immediately
following the elections, the government ended the house arrest of the democracy advocate and leader of the National
League for Democracy, Aung San Suu Kyi, and her ability to move freely around the country is considered an important
test of the military's movement toward more openness. After unexpected reforms in 2011, NLD senior leaders have decided
to register as a political party and to field candidates in future by-elections. Though the country's foreign relations,
particularly with Western nations, have been strained, relations have thawed since the reforms following the 2010
elections. After years of diplomatic isolation and economic and military sanctions, the United States relaxed curbs
on foreign aid to Myanmar in November 2011 and announced the resumption of diplomatic relations on 13 January 2012. The European Union has placed sanctions on Myanmar, including an arms embargo, cessation of trade preferences, and
suspension of all aid with the exception of humanitarian aid. Sanctions imposed by the United States and European
countries against the former military government, coupled with boycotts and other direct pressure on corporations
by supporters of the democracy movement, have resulted in the withdrawal from the country of most US and many European
companies. On 13 April 2012 British Prime Minister David Cameron called for the economic sanctions on Myanmar to
be suspended in the wake of the pro-democracy party gaining 43 seats out of a possible 45 in the 2012 by-elections, with party leader Aung San Suu Kyi becoming a member of the Burmese parliament. Despite Western isolation, Asian
corporations have generally remained willing to continue investing in the country and to initiate new investments,
particularly in natural resource extraction. The country has close relations with neighbouring India and China with
several Indian and Chinese companies operating in the country. Under India's Look East policy, fields of co-operation
between India and Myanmar include remote sensing, oil and gas exploration, information technology, hydro power and
construction of ports and buildings. In 2008, India suspended military aid to Myanmar over the issue of human rights
abuses by the ruling junta, although it has preserved extensive commercial ties, which provide the regime with much-needed
revenue. The thaw in relations began on 28 November 2011, when Belarusian Prime Minister Mikhail Myasnikovich and
his wife Ludmila arrived in the capital, Naypyidaw, the same day as the country received a visit by US Secretary
of State Hillary Rodham Clinton, who also met with pro-democracy opposition leader Aung San Suu Kyi. Progress in international relations continued in September 2012, when Aung San Suu Kyi visited the US, followed by Myanmar's reformist president's visit to the United Nations. In May 2013, Thein Sein became the first Myanmar president to visit
the White House in 47 years; the last Burmese leader to visit the White House was Ne Win in September 1966. President
Barack Obama praised the former general for political and economic reforms, and the cessation of tensions between
Myanmar and the United States. Political activists objected to the visit due to concerns over human rights abuses
in Myanmar, but Obama assured Thein Sein that Myanmar would receive US support. The two leaders discussed the release of more political prisoners, the institutionalisation of political reform and the rule of law, and the ending of ethnic conflict
in Myanmar—the two governments agreed to sign a bilateral trade and investment framework agreement on 21 May 2013.
Myanmar has received extensive military aid from China in the past. Myanmar has been a member of ASEAN since 1997.
Though it gave up its turn to hold the ASEAN chair and host the ASEAN Summit in 2006, it chaired the forum and hosted
the summit in 2014. In November 2008, Myanmar's political situation with neighbouring Bangladesh became tense as
they began searching for natural gas in a disputed block of the Bay of Bengal. Controversy surrounding the Rohingya
population also remains an issue between Bangladesh and Myanmar. Myanmar's armed forces are known as the Tatmadaw,
which numbers 488,000. The Tatmadaw comprises the Army, the Navy, and the Air Force. The country ranked twelfth in
the world for its number of active troops in service. The military is very influential in Myanmar, with all top cabinet
and ministry posts usually held by military officials. Official figures for military spending are not available.
Estimates vary widely because of uncertain exchange rates, but Myanmar's military forces' expenses are high. Myanmar
imports most of its weapons from Russia, Ukraine, China and India. Until 2005, the United Nations General Assembly
annually adopted a detailed resolution about the situation in Myanmar by consensus. But in 2006 a divided United
Nations General Assembly voted through a resolution that strongly called upon the government of Myanmar to end its
systematic violations of human rights. In January 2007, Russia and China vetoed a draft resolution before the United
Nations Security Council calling on the government of Myanmar to respect human rights and begin a democratic transition.
South Africa also voted against the resolution. There is consensus that the military regime in Myanmar is one of
the world's most repressive and abusive regimes. In November 2012, Samantha Power, Barack Obama's Special Assistant
to the President on Human Rights, wrote on the White House blog in advance of the president's visit that "Serious
human rights abuses against civilians in several regions continue, including against women and children." Members
of the United Nations and major international human rights organisations have issued repeated and consistent reports
of widespread and systematic human rights violations in Myanmar. The United Nations General Assembly has repeatedly
called on the Burmese Military Junta to respect human rights and in November 2009 the General Assembly adopted a
resolution "strongly condemning the ongoing systematic violations of human rights and fundamental freedoms" and calling
on the Burmese Military Regime "to take urgent measures to put an end to violations of international human rights
and humanitarian law." International human rights organisations including Human Rights Watch, Amnesty International
and the American Association for the Advancement of Science have repeatedly documented and condemned widespread human
rights violations in Myanmar. The Freedom in the World 2011 report by Freedom House notes, "The military junta has
... suppressed nearly all basic rights; and committed human rights abuses with impunity." In July 2013, the Assistance
Association for Political Prisoners indicated that there were approximately 100 political prisoners being held in
Burmese prisons. Child soldiers have played, and continue to play, a major part in the Burmese Army as well as in Burmese rebel
movements. The Independent reported in June 2012 that "Children are being sold as conscripts into the Burmese military
for as little as $40 and a bag of rice or a can of petrol." The UN's Special Representative of the Secretary-General
for Children and Armed Conflict, Radhika Coomaraswamy, who stepped down from her position a week later, met representatives
of the Government of Myanmar on 5 July 2012 and stated that she hoped the government's signing of an action plan
would "signal a transformation." In September 2012, the Myanmar Armed Forces released 42 child soldiers and the International
Labour Organization met with representatives of the government as well as the Kachin Independence Army to secure
the release of more child soldiers. According to Samantha Power, a US delegation raised the issue of child soldiers
with the government in October 2012. However, she did not comment on the government's progress towards reform in
this area. The Rohingya people have consistently faced human rights abuses by the Burmese regime that has refused
to acknowledge them as Burmese citizens (despite some of them having lived in Burma for over three generations)—the
Rohingya have been denied Burmese citizenship since the enactment of a 1982 citizenship law. The law created three
categories of citizenship: citizenship, associate citizenship, and naturalised citizenship. Citizenship is given
to those who belong to one of the national races such as Kachin, Kayah (Karenni), Karen, Chin, Burman, Mon, Rakhine,
Shan, Kaman, or Zerbadee. Associate citizenship is given to those who cannot prove their ancestors settled in Myanmar
before 1823, but can prove they have one grandparent, or pre-1823 ancestor, who was a citizen of another country,
as well as people who applied for citizenship in 1948 and qualified then by those laws. Naturalized citizenship is
only given to those who have at least one parent with one of these types of Burmese citizenship or can provide "conclusive
evidence" that their parents entered and resided in Burma prior to independence in 1948. The Burmese regime has attempted
to forcibly expel Rohingya and bring in non-Rohingyas to replace them—this policy has resulted in the expulsion of
approximately half of the 800,000 Rohingya from Burma, while the Rohingya people have been described as "among the
world's least wanted" and "one of the world's most persecuted minorities." The origin of the "most persecuted minority" description, however, is unclear. In 2007 the German professor Bassam Tibi suggested that the Rohingya conflict may be driven
by an Islamist political agenda to impose religious laws, while non-religious causes have also been raised, such
as a lingering resentment over the violence that occurred during the Japanese occupation of Burma in World War II—during
this period the British allied themselves with the Rohingya and fought against the Japanese-backed puppet government of Burma (composed mostly of Bamar) that helped to establish the Tatmadaw, the military organisation that remains in power
as of March 2013. Since the democratic transition began in 2011, there has been continuous violence as 280 people
have been killed and 140,000 forced to flee from their homes in the Rakhine state. A UN envoy reported in March 2013
that unrest had re-emerged between Myanmar's Buddhist and Muslim communities, with violence spreading to towns that
are located closer to Yangon. The BBC News media outlet obtained video footage of a man with severe burns who received
no assistance from passers-by or police officers even though he was lying on the ground in a public area. The footage
was filmed by members of the Burmese police force in the town of Meiktila and was used as evidence that Buddhists
continued to kill Muslims after the European Union sanctions were lifted on 23 April 2013. The immediate cause of
the riots is unclear, with many commentators citing the killing of ten Burmese Muslims by ethnic Rakhine after the
rape and murder of a Rakhine woman as the main cause. Whole villages have been "decimated". Over 300 houses and a
number of public buildings have been razed. According to Tun Khin, the president of the Burmese Rohingya Organisation
UK (BROUK), as of 28 June 2012, 650 Rohingyas have been killed, 1,200 are missing, and more than 80,000 have been
displaced. According to the Myanmar authorities, the violence, between ethnic Rakhine Buddhists and Rohingya Muslims,
left 78 people dead, 87 injured, and thousands of homes destroyed. It displaced more than 52,000 people. The government
has responded by imposing curfews and by deploying troops in the regions. On 10 June 2012, a state of emergency was
declared in Rakhine, allowing the military to participate in administration of the region. The Burmese army and police
have been accused of targeting Rohingya Muslims through mass arrests and arbitrary violence. A number of monks' organisations
that played a vital role in Myanmar's struggle for democracy have taken measures to block any humanitarian assistance
to the Rohingya community. Restrictions on media censorship were significantly eased in August 2012 following demonstrations
by hundreds of protesters who wore shirts demanding that the government "Stop Killing the Press." The most significant change is that media organisations no longer have to submit their content to a censorship board before publication. However, as explained in one editorial in the exile publication The Irrawaddy, this new "freedom"
has caused some Burmese journalists to simply see the new law as an attempt to create an environment of self-censorship
as journalists "are required to follow 16 guidelines towards protecting the three national causes — non-disintegration
of the Union, non-disintegration of national solidarity, perpetuation of sovereignty — and "journalistic ethics"
to ensure their stories are accurate and do not jeopardise national security." In July 2014 five journalists were
sentenced to 10 years in jail after publishing a report saying the country was planning to build a new chemical weapons
plant. Journalists described the jailings as a blow to the recently won news media freedoms that had followed five
decades of censorship and persecution. According to the Crisis Group, since Myanmar transitioned to a new government
in August 2011, the country's human rights record has been improving. Freedom House, which had previously given
Myanmar its lowest rating of 7, also noted improvement in its 2012 Freedom in the World report, raising the country
to a 6 for improvements in civil liberties and political rights, the release of political prisoners, and a loosening
of restrictions. In 2013, Myanmar improved yet again, receiving a score of 6 in political rights and 5 in civil
liberties. The government has assembled a National Human Rights Commission that consists of 15 members from various
backgrounds. Several activists in exile,
including Thee Lay Thee Anyeint members, have returned to Myanmar after President Thein Sein's invitation to expatriates
to return home to work for national development. In an address to the United Nations Security Council on 22 September
2011, Myanmar's Foreign Minister Wunna Maung Lwin confirmed the government's intention to release prisoners in the
near future. The government has also relaxed reporting laws, but these remain highly restrictive. In September 2011,
several banned websites, including YouTube, Democratic Voice of Burma and Voice of America, were unblocked. A 2011
report by the Hauser Center for Nonprofit Organizations found that, while contact with the Myanmar government was
constrained by donor restrictions, international humanitarian non-governmental organisations (NGOs) see opportunities
for effective advocacy with government officials, especially at the local level. At the same time, international
NGOs are mindful of the ethical quandary of how to work with the government without bolstering or appeasing it. Following
Thein Sein's first-ever visit to the UK and a meeting with Prime Minister David Cameron, the Myanmar president declared
that all of his nation's political prisoners would be released by the end of 2013, and expressed support
for the well-being of the Rohingya Muslim community. In a speech at Chatham House, he revealed that "We [Myanmar
government] are reviewing all cases. I guarantee to you that by the end of this year, there will be no prisoners
of conscience in Myanmar", and expressed a desire to strengthen links between the UK and Myanmar's military
forces. Under British administration, Myanmar was the second-wealthiest country in South-East Asia. It had been the
world's largest exporter of rice. Myanmar also had a wealth of natural and labour resources. British Burma began
exporting crude oil in 1853, making it one of the earliest petroleum producers in the world. It produced 75% of the
world's teak and had a highly literate population. The wealth, however, was mainly concentrated in the hands of Europeans.
In the 1930s, agricultural production fell dramatically as international rice prices declined, and did not recover for
several decades. During World War II, the British destroyed the major government buildings, oil wells and mines for
tungsten, tin, lead and silver to keep them from the Japanese. Myanmar was bombed extensively by both sides. After
independence, the country was in ruins with its major infrastructure completely destroyed. After a parliamentary
government was formed in 1948, Prime Minister U Nu embarked upon a policy of nationalisation and the state was declared
the owner of all land. The government also tried to implement a poorly considered Eight-Year plan. By the 1950s,
rice exports had fallen by two thirds and mineral exports by over 96% (as compared to the pre-World War II period).
Plans were partly financed by printing money, which led to inflation. The major agricultural product is rice, which
covers about 60% of the country's total cultivated land area. Rice accounts for 97% of total food grain production
by weight. Through collaboration with the International Rice Research Institute, 52 modern rice varieties were released
in the country between 1966 and 1997, helping increase national rice production to 14 million tons in 1987 and to
19 million tons in 1996. By 1988, modern varieties were planted on half of the country's ricelands, including 98
percent of the irrigated areas. In 2008 rice production was estimated at 50 million tons. Many US and European jewellery
companies, including Bulgari, Tiffany, and Cartier, refuse to import Burmese gemstones, citing reports of deplorable
working conditions in the mines. Human Rights Watch has encouraged a complete ban on the purchase of Burmese gems
based on these reports and because nearly all profits go to the ruling junta, as the majority of mining activity
in the country is government-run. The government of Myanmar controls the gem trade by direct ownership or by joint
ventures with private owners of mines. The most popular tourist destinations in Myanmar include big cities
such as Yangon and Mandalay; religious sites in Mon State, Pindaya, Bago and Hpa-An; nature trails in Inle Lake,
Kengtung, Putao, Pyin Oo Lwin; ancient cities such as Bagan and Mrauk-U; as well as beaches in Nabule, Ngapali, Ngwe-Saung,
Mergui. Nevertheless, much of the country is off-limits to tourists, and interactions between foreigners and the
people of Myanmar, particularly in the border regions, are subject to police scrutiny. Burmese citizens are forbidden
to discuss politics with foreigners, under penalty of imprisonment. In 2001, the Myanmar Tourism Promotion Board issued
an order for local officials to protect tourists and limit "unnecessary contact" between foreigners and ordinary Burmese people.
The most common way for travellers to enter the country seems to be by air. According to the website Lonely Planet,
getting into Myanmar is problematic: "No bus or train service connects Myanmar with another country, nor can you
travel by car or motorcycle across the border – you must walk across", and that "It is not possible for
foreigners to go to/from Myanmar by sea or river." There are a small number of border crossings that allow the passage
of private vehicles, such as the border between Ruili (China) and Mu-se (Myanmar), the border between Htee Kee (Myanmar) and
Ban Phu Nam Ron (Thailand) (the most direct border between Dawei and Kanchanaburi), and the border between Myawaddy
(Myanmar) and Mae Sot (Thailand). At least one tourist company has successfully run commercial overland routes through
these borders since 2013. "From Mae Sai (Thailand) you can cross to Tachileik, but can only go as far as Kengtung.
Those in Thailand on a visa run can cross to Kawthaung but cannot venture farther into Myanmar." Flights are available
from most countries, though direct flights are limited to mainly Thai and other ASEAN airlines. According to Eleven
magazine, "In the past, there were only 15 international airlines and increasing numbers of airlines have begun launching
direct flights from Japan, Qatar, Taiwan, South Korea, Germany and Singapore." Further expansions were expected in September
2013, though again mainly by Thai and other Asian-based airlines; according to Eleven Media Group's Eleven, these included
"Thailand-based Nok Air and Business Airlines and Singapore-based Tiger Airline". In December 2014, Myanmar signed an agreement to
set up its first stock exchange. The Yangon Stock Exchange Joint Venture Co. Ltd was to be set up with Myanma Economic
Bank holding 51 percent, Japan's Daiwa Institute of Research Ltd 30.25 percent, and Japan Exchange Group 18.75 percent.
The Yangon Stock Exchange (YSX) officially opened for business on Friday, March 25, 2016. First Myanmar Investment
Co., Ltd. (FMI) became the first stock to be traded after receiving approval for an opening price of 26,000 kyats
($22). The provisional results of the 2014 Myanmar Census show that the total population is 51,419,420. This figure
includes an estimated 1,206,353 persons in parts of northern Rakhine State, Kachin State and Kayin State who were
not counted. People who were out of the country at the time of the census are not included in these figures. There
are over 600,000 registered migrant workers from Myanmar in Thailand, and millions more work illegally. Burmese migrant
workers account for 80% of Thailand's migrant workers. Population density is 76 per square kilometre (200/sq mi),
among the lowest in Southeast Asia. The Bamar form an estimated 68% of the population, the Shan 10%, the Kayin 7%,
and the Rakhine 4%; overseas Chinese form approximately 3%. Myanmar's ethnic minority groups prefer the term "ethnic nationality" over
"ethnic minority" as the term "minority" furthers their sense of insecurity in the face of what is often described
as "Burmanisation"—the spread and dominance of Bamar culture over minority cultures. The Mon, who
form 2% of the population, are ethno-linguistically related to the Khmer. Overseas Indians are 2%. The remainder
are Kachin, Chin, Rohingya, Anglo-Indians, Gurkha, Nepali and other ethnic minorities. Included in this group are
the Anglo-Burmese. Once forming a large and influential community, the Anglo-Burmese left the country in steady streams
from 1958 onwards, principally to Australia and the UK. It is estimated that 52,000 Anglo-Burmese remain in Myanmar.
As of 2009, 110,000 Burmese refugees were living in refugee camps in Thailand. Myanmar is home to four major
language families: Sino-Tibetan, Tai–Kadai, Austro-Asiatic, and Indo-European. Sino-Tibetan languages are most widely
spoken. They include Burmese, Karen, Kachin, Chin, and Chinese (mainly Hokkien). The primary Tai–Kadai language is
Shan. Mon, Palaung, and Wa are the major Austroasiatic languages spoken in Myanmar. The two major Indo-European languages
are Pali, the liturgical language of Theravada Buddhism, and English. More than 130 languages are spoken in Myanmar.
Since many of them are known only within small communities around the country, many if not all may be lost within a few
generations. Burmese, the mother tongue of the Bamar and official language of Myanmar, is related to Tibetan and
Chinese. It is written in a script consisting
of circular and semi-circular letters, which were adapted from the Mon script, which in turn was developed from a
southern Indian script in the 5th century. The earliest known inscriptions in the Burmese script date from the 11th
century. It is also used to write Pali, the sacred language of Theravada Buddhism, as well as several ethnic minority
languages, including Shan, several Karen dialects, and Kayah (Karenni), with the addition of specialised characters
and diacritics for each language. Many religions are practised in Myanmar; religious edifices and orders have existed
for many years, and festivals can be held on a grand scale. The Christian and Muslim populations do, however,
face religious persecution and it is hard, if not impossible, for non-Buddhists to join the army or get government
jobs, the main route to success in the country. Such persecution and targeting of civilians is particularly notable
in Eastern Myanmar, where over 3,000 villages have been destroyed in the past ten years. More than 200,000 Muslims
have fled to Bangladesh over the last 20 years to escape Islamophobic persecution. According to Pew Research, 7%
of the population identifies as Christian; 4% as Muslim; 1% follows traditional animistic beliefs; and 2% follow
other religions, including Mahayana Buddhism, Hinduism, and East Asian religions. However, according to a US State
Department's 2010 international religious freedom report, official statistics are alleged to underestimate the non-Buddhist
population. Independent researchers put the Muslim share at 6 to 10% of the population.[citation needed] Jehovah's
Witnesses have been present since 1914 and have about 80 congregations around the country and a branch office in
Yangon publishing in 16 languages. A tiny Jewish community in Rangoon had a synagogue but no resident rabbi to conduct
services. Myanmar's educational system is operated by the Ministry of Education, a government agency. The
education system is based on the United Kingdom's, owing to nearly a century of British and Christian presence
in Myanmar. Nearly all schools are government-operated, but there has been a recent increase in privately funded
English-language schools. Schooling is compulsory only until the end of elementary school, at approximately age 9,
whereas the compulsory schooling age is 15 or 16 internationally. A diverse range of indigenous cultures
exists in Myanmar; the majority culture is primarily Buddhist and Bamar. Bamar culture has been influenced by the
cultures of neighbouring countries. This is manifested in its language, cuisine, music, dance and theatre. The arts,
particularly literature, have historically been influenced by the local form of Theravada Buddhism. Considered the
national epic of Myanmar, the Yama Zatdaw, an adaptation of India's Ramayana, has been influenced greatly by Thai,
Mon, and Indian versions of the play. Buddhism is practised along with nat worship, which involves elaborate rituals
to propitiate one from a pantheon of 37 nats. In a traditional village, the monastery is the centre of cultural life.
Monks are venerated and supported by the lay people. A novitiation ceremony called shinbyu is the most important
coming-of-age event for a boy, during which he enters the monastery for a short time. All male children in Buddhist
families are encouraged to become a novice (a beginner in the monastic life) before the age of twenty and a monk after
the age of twenty. Girls have ear-piercing ceremonies at the same time. Burmese culture is most evident in villages
where local festivals are held throughout the year, the most important being the pagoda festival. Many villages have
a guardian nat, and superstition and taboos are commonplace. British colonial rule introduced Western elements of
culture to Burma. Burma's education system is modelled after that of the United Kingdom. Colonial architectural influences
are most evident in major cities such as Yangon. Many ethnic minorities, particularly the Karen in the southeast
and the Kachin and Chin who populate the north and northeast, practice Christianity. According to The World Factbook,
the Burman population is 68% and ethnic minority groups constitute 32%. However, exiled leaders and organisations
claim that the ethnic minority population is 40%, implicitly contradicting the CIA's official figure. Mohinga
is the traditional breakfast dish and is Myanmar's national dish. Seafood is a common ingredient in coastal cities
such as Sittwe, Kyaukpyu, Mawlamyaing (formerly Moulmein), Mergui (Myeik) and Dawei, while meat and poultry are more
commonly used in landlocked cities like Mandalay. Freshwater fish and shrimp have been incorporated into inland cooking
as a primary source of protein and are used in a variety of ways, fresh, salted whole or filleted, salted and dried,
made into a salty paste, or fermented sour and pressed. Myanmar's first film was a documentary of the funeral of
Tun Shein — a leading politician of the 1910s, who campaigned for Burmese independence in London. The first Burmese
silent film, Myitta Ne Thuya (Love and Liquor), appeared in 1920 and proved a major success, despite its poor quality
due to a fixed camera position and inadequate film accessories. During the 1920s and 1930s, many Burmese-owned film
companies produced numerous films. The first Burmese sound film was produced in 1932 in Bombay, India with the title
Ngwe Pay Lo Ma Ya (Money Can't Buy It). After World War II, Burmese cinema continued to address political themes.
Many of the films produced in the early Cold War era had a strong propaganda element to them.
Jews originated as a national and religious group in the Middle East during the second millennium BCE, in the part of the
Levant known as the Land of Israel. The Merneptah Stele appears to confirm the existence of a people of Israel, associated
with the god El, somewhere in Canaan as far back as the 13th century BCE. The Israelites, as an outgrowth of the
Canaanite population, consolidated their hold with the emergence of the Kingdom of Israel, and the Kingdom of Judah.
Some consider that these Canaanite sedentary Israelites melded with incoming nomadic groups known as 'Hebrews'. Though
few sources in the Bible mention the exilic periods in detail, the experience of diaspora life, from the Ancient
Egyptian rule over the Levant, to Assyrian Captivity and Exile, to Babylonian Captivity and Exile, to Seleucid Imperial
rule, to the Roman occupation, and the historical relations between Israelites and the homeland, became a major feature
of Jewish history, identity and memory. According to a report published in 2014, about 43% of all Jews reside in
Israel (6.1 million), and 40% in the United States (5.7 million), with most of the remainder living in Europe (1.4
million) and Canada (0.4 million). These numbers include all those who self-identified as Jews in a socio-demographic
study or were identified as such by a respondent in the same household. The exact world Jewish population, however,
is difficult to measure. In addition to issues with census methodology, disputes among proponents of halakhic, secular,
political, and ancestral identification factors regarding who is a Jew may affect the figure considerably depending
on the source. Etymological equivalents of the word "Jew" are in use in other languages, e.g., يَهُودِيّ yahūdī (sg.), al-yahūd (pl.),
and بَنُو اِسرَائِيل banū isrāʼīl in Arabic, "Jude" in German, "judeu" in Portuguese, "juif" in French, "jøde" in
Danish and Norwegian, "judío" in Spanish, "jood" in Dutch, etc., but derivations of the word "Hebrew" are also in
use to describe a Jew, e.g., in Italian (Ebreo), in Persian ("Ebri/Ebrani" (Persian: عبری/عبرانی)) and Russian
(Еврей, Yevrey). The German word "Jude" is pronounced [ˈjuːdə], the corresponding adjective "jüdisch" [ˈjyːdɪʃ] (Jewish)
is the origin of the word "Yiddish". (See Jewish ethnonyms for a full overview.) According to the Hebrew Bible narrative,
Jewish ancestry is traced back to the Biblical patriarchs such as Abraham, Isaac and Jacob, and the Biblical matriarchs
Sarah, Rebecca, Leah, and Rachel, who lived in Canaan around the 18th century BCE. Jacob and his family migrated
to Ancient Egypt after being invited to live with Jacob's son Joseph by the Pharaoh himself. The patriarchs' descendants
were later enslaved until the Exodus led by Moses, traditionally dated to the 13th century BCE, after which the Israelites
conquered Canaan.[citation needed] Modern archaeology has largely discarded the historicity of the Patriarchs and
of the Exodus story, with it being reframed as constituting the Israelites' inspiring national myth narrative. The
Israelites and their culture, according to the modern archaeological account, did not overtake the region by force,
but instead branched out of the Canaanite peoples and culture through the development of a distinct monolatristic
— and later monotheistic — religion centered on Yahweh, one of the Ancient Canaanite deities. The growth of Yahweh-centric
belief, along with a number of cultic practices, gradually gave rise to a distinct Israelite ethnic group, setting
them apart from other Canaanites. The Canaanites themselves are archeologically attested in the Middle Bronze Age,
while the Hebrew language is the last extant member of the Canaanite languages. In the Iron Age I period (1200–1000
BCE) Israelite culture was largely Canaanite in nature. Although the Israelites were divided into Twelve Tribes,
the Jews (being one offshoot of the Israelites, another being the Samaritans) are traditionally said to descend mostly
from the Israelite tribes of Judah (from where the Jews derive their ethnonym) and Benjamin, and partially from the
tribe of Levi, who had together formed the ancient Kingdom of Judah, and the remnants of the northern Kingdom of
Israel who migrated to the Kingdom of Judah and assimilated after the 720s BCE, when the Kingdom of Israel was conquered
by the Neo-Assyrian Empire. Israelites enjoyed political independence twice in ancient history, first during the
periods of the Biblical judges followed by the United Monarchy.[disputed – discuss] After the fall of the United
Monarchy the land was divided into Israel and Judah. The term Jew originated from the Roman "Judean" and denoted
someone from the southern kingdom of Judah. The shift of ethnonym from "Israelites" to "Jews" (inhabitant of Judah),
although not contained in the Torah, is made explicit in the Book of Esther (4th century BCE), a book in the Ketuvim,
the third section of the Jewish Tanakh. In 587 BCE Nebuchadnezzar II, King of the Neo-Babylonian Empire, besieged
Jerusalem, destroyed the First Temple, and deported the most prominent citizens of Judah. In 586 BCE, Judah itself
ceased to be an independent kingdom, and its remaining Jews were left stateless. The Babylonian exile ended in 539
BCE when the Achaemenid Empire conquered Babylon and Cyrus the Great allowed the exiled Jews to return to Yehud and
rebuild their Temple. The Second Temple was completed in 515 BCE. Yehud province was a peaceful part of the Achaemenid
Empire until the fall of the Empire in c. 333 BCE to Alexander the Great. Jews were also politically independent
during the Hasmonean dynasty spanning from 140 to 37 BCE and to some degree under the Herodian dynasty from 37 BCE
to 6 CE. Since the destruction of the Second Temple in 70 CE, most Jews have lived in diaspora. As an ethnic minority
in every country in which they live (except Israel), they have frequently experienced persecution throughout history,
resulting in a population that has fluctuated both in numbers and distribution over the centuries.[citation needed]
Genetic studies on Jews show that most Jews worldwide bear a common genetic heritage which originates in the Middle
East, and that they bear their strongest resemblance to the peoples of the Fertile Crescent. The genetic composition
of different Jewish groups shows that Jews share a common genetic pool dating back 4,000 years, as a marker of their
common ancestral origin. Despite their long-term separation and beside their shared genetic origin, Jews also maintained
a common culture, tradition, and language. The Hebrew Bible, a religious interpretation of the traditions and early
national history of the Jews, established the first of the Abrahamic religions, which are now practiced by 54% of
the world's population. Judaism guides its adherents in both practice and belief, and has been called not only a religion, but
also a "way of life," which has made drawing a clear distinction between Judaism, Jewish culture, and Jewish identity
rather difficult. Throughout history, in eras and places as diverse as the ancient Hellenic world, in Europe before
and after The Age of Enlightenment (see Haskalah), in Islamic Spain and Portugal, in North Africa and the Middle
East, India, China, or the contemporary United States and Israel, cultural phenomena have developed that are in some
sense characteristically Jewish without being at all specifically religious. Some factors in this come from within
Judaism, others from the interaction of Jews or specific communities of Jews with their surroundings, others from
the inner social and cultural dynamics of the community, as opposed to from the religion itself. This phenomenon
has led to considerably different Jewish cultures unique to their own communities, each as authentically Jewish as
the next. Judaism shares some of the characteristics of a nation, an ethnicity, a religion, and a culture, making
the definition of who is a Jew vary slightly depending on whether a religious or national approach to identity is
used. Generally, in modern secular usage Jews include three groups: people who were born to a Jewish family regardless
of whether or not they follow the religion, those who have some Jewish ancestral background or lineage (sometimes
including those who do not have strictly matrilineal descent), and people without any Jewish ancestral background
or lineage who have formally converted to Judaism and therefore are followers of the religion. Historical definitions
of Jewish identity have traditionally been based on halakhic definitions of matrilineal descent, and halakhic conversions.
Historical definitions of who is a Jew date back to the codification of the Oral Torah into the Babylonian Talmud,
around 200 CE. Interpretations of sections of the Tanakh, such as Deuteronomy 7:1–5, by Jewish sages, are used as
a warning against intermarriage between Jews and Canaanites because "[the non-Jewish husband] will cause your child
to turn away from Me and they will worship the gods (i.e., idols) of others." Leviticus 24:10 says that the son in
a marriage between a Hebrew woman and an Egyptian man is "of the community of Israel." This is complemented by Ezra
10:2–3, where Israelites returning from Babylon vow to put aside their gentile wives and their children. Since the
anti-religious Haskalah movement of the late 18th and 19th centuries, halakhic interpretations of Jewish identity
have been challenged. According to historian Shaye J. D. Cohen, the status of the offspring of mixed marriages was
determined patrilineally in the Bible. He offers two likely explanations for the change in Mishnaic times: first,
the Mishnah may have been applying the same logic to mixed marriages as it had applied to other mixtures (Kil'ayim).
Thus, a mixed marriage is forbidden as is the union of a horse and a donkey, and in both unions the offspring are
judged matrilineally. Second, the Tannaim may have been influenced by Roman law, which dictated that when a parent
could not contract a legal marriage, offspring would follow the mother. By the 1st century, Babylonia, to which Jews
migrated after the Babylonian conquest as well as after the Bar Kokhba revolt in 135 CE, already held a rapidly
growing population of an estimated 1,000,000 Jews, which increased to an estimated 2 million between the years 200
CE and 500 CE, both by natural growth and by immigration of more Jews from the Land of Israel, making up about one-sixth
of the world Jewish population in that era. At times conversion has accounted for a part of Jewish population growth.
Some have claimed that in the 1st century of the Christian era, for example, the population more than doubled, from
four million to 8–10 million within the confines of the Roman Empire, in good part as a result of a wave of conversion. Other
historians believe that conversion during the Roman era was limited in number and did not account for much of the
Jewish population growth, due to various factors such as the illegality of male conversion to Judaism in the Roman
world from the mid-2nd century. Another factor that would have made conversion difficult in the Roman world was the
halakhic requirement of circumcision, a requirement that proselytizing Christianity quickly dropped. The Fiscus Judaicus,
a tax imposed on Jews in 70 CE and relaxed to exclude Christians in 96 CE, also limited Judaism's appeal. In addition,
historians argue that the very figure of 4 million once guessed for the Jewish population of the ancient
Roman Empire has long been disproven, and thus that the assumption of large-scale conversion driving Jewish population
growth in ancient Rome is false. The 8 million figure is also in doubt, as it may refer to a census
of total Roman citizens. Within the world's Jewish population there are distinct ethnic divisions, most of which
are primarily the result of geographic branching from an originating Israelite population, and subsequent independent
evolutions. An array of Jewish communities was established by Jewish settlers in various places around the Old World,
often at great distances from one another, resulting in effective and often long-term isolation. During the millennia
of the Jewish diaspora the communities would develop under the influence of their local environments: political,
cultural, natural, and populational. Today, manifestations of these differences among the Jews can be observed in
Jewish cultural expressions of each community, including Jewish linguistic diversity, culinary preferences, liturgical
practices, religious interpretations, as well as degrees and sources of genetic admixture. Jews are often identified
as belonging to one of two major groups: the Ashkenazim and the Sephardim. Ashkenazim, or "Germanics" (Ashkenaz meaning
"Germany" in Hebrew), are so named denoting their German Jewish cultural and geographical origins, while Sephardim,
or "Hispanics" (Sefarad meaning "Spain/Hispania" or "Iberia" in Hebrew), are so named denoting their Spanish/Portuguese
Jewish cultural and geographic origins. The more common term in Israel for many of those broadly called Sephardim,
is Mizrahim (lit. "Easterners", Mizrach being "East" in Hebrew), that is, in reference to the diverse collection
of Middle Eastern and North African Jews who are often, as a group, referred to collectively as Sephardim (together
with Sephardim proper) for liturgical reasons, although Mizrahi Jewish groups and Sephardi Jews proper are ethnically
distinct. The divisions between all these groups are approximate and their boundaries are not always clear. The Mizrahim,
for example, are a heterogeneous collection of North African, Central Asian, Caucasian, and Middle Eastern Jewish
communities that are no more closely related to each other than they are to any of the earlier-mentioned Jewish groups.
In modern usage, however, the Mizrahim are sometimes termed Sephardi due to similar styles of liturgy, despite independent
development from Sephardim proper. Thus, among Mizrahim there are Egyptian Jews, Iraqi Jews, Lebanese Jews, Kurdish
Jews, Libyan Jews, Syrian Jews, Bukharian Jews, Mountain Jews, Georgian Jews, Iranian Jews and various others. The
Teimanim from Yemen are sometimes included, although their style of liturgy is unique and the admixture found among
them differs from that found in Mizrahim. In addition, a distinction is made between
Sephardi migrants who established themselves in the Middle East and North Africa after the expulsion of the Jews
from Spain and Portugal in the 1490s and the pre-existing Jewish communities in those regions. Ashkenazi Jews represent
the bulk of modern Jewry, with at least 70% of Jews worldwide (and up to 90% prior to World War II and the Holocaust).
As a result of their emigration from Europe, Ashkenazim also represent the overwhelming majority of Jews in the New
World continents, in countries such as the United States, Canada, Argentina, Australia, and Brazil. In France, the
immigration of Jews from Algeria (Sephardim) has led them to outnumber the Ashkenazim. Only in Israel is the Jewish
population representative of all groups, a melting pot independent of each group's proportion within the overall
world Jewish population. Hebrew is the liturgical language of Judaism (termed lashon ha-kodesh, "the holy tongue"),
the language in which most of the Hebrew scriptures (Tanakh) were composed, and the daily speech of the Jewish people
for centuries. By the 5th century BCE, Aramaic, a closely related tongue, joined Hebrew as the spoken language in
Judea. By the 3rd century BCE, some Jews of the diaspora were speaking Greek. Others, such as in the Jewish communities
of Babylonia, were speaking Hebrew and Aramaic, the languages of the Babylonian Talmud. These languages were also
used by the Jews of Israel at that time.[citation needed] For centuries, Jews worldwide have spoken the local or
dominant languages of the regions they migrated to, often developing distinctive dialectal forms or branches that
became independent languages. Yiddish is the Judæo-German language developed by Ashkenazi Jews who migrated to Central
Europe. Ladino is the Judæo-Spanish language developed by Sephardic Jews who migrated to the Iberian peninsula. Due
to many factors, including the impact of the Holocaust on European Jewry, the Jewish exodus from Arab and Muslim
countries, and widespread emigration from other Jewish communities around the world, ancient and distinct Jewish
languages of several communities, including Judæo-Georgian, Judæo-Arabic, Judæo-Berber, Krymchak, Judæo-Malayalam
and many others, have largely fallen out of use. Despite efforts to revive Hebrew as the national language of the
Jewish people, knowledge of the language is not commonly possessed by Jews worldwide and English has emerged as the
lingua franca of the Jewish diaspora. Although many Jews once had sufficient knowledge of Hebrew to study the classic
literature, and Jewish languages like Yiddish and Ladino were commonly used as recently as the early 20th century,
most Jews lack such knowledge today and English has by and large superseded most Jewish vernaculars. The three most
commonly spoken languages among Jews today are Hebrew, English, and Russian. Some Romance languages, particularly
French and Spanish, are also widely used. Yiddish has been spoken by more Jews in history than any other language,
but it is far less used today following the Holocaust and the adoption of Modern Hebrew by the Zionist movement and
the State of Israel. In some places, the mother language of the Jewish community differs from that of the general
population or the dominant group. For example, in Quebec, the Ashkenazic majority has adopted English, while the
Sephardic minority uses French as its primary language. Similarly, South African Jews adopted English rather than
Afrikaans. Due to both Czarist and Soviet policies, Russian has superseded Yiddish as the language of Russian Jews,
but these policies have also affected neighboring communities. Today, Russian is the first language for many Jewish
communities in a number of Post-Soviet states, such as Ukraine and Uzbekistan, as well as for Ashkenazic Jews in
Azerbaijan, Georgia, and Tajikistan. Although communities in North Africa today are small and dwindling, Jews there
had shifted from a multilingual group to a monolingual one (or nearly so), speaking French in Algeria, Morocco, and
the city of Tunis, while most North Africans continue to use Arabic as their mother tongue.[citation needed] Y DNA
studies tend to imply a small number of founders in an old population whose members parted and followed different
migration paths. In most Jewish populations, these male line ancestors appear to have been mainly Middle Eastern.
For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than
with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany and the French Rhine Valley. This
is consistent with Jewish traditions in placing most Jewish paternal origins in the region of the Middle East. Conversely,
the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous.
Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from
European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In
contrast, Behar has found evidence that about 40% of Ashkenazi Jews originate maternally from just four female founders,
who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence
for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of non-local
maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews,
the authors conclude "Clearly, the differences between Jews and non-Jews are far larger than those observed among
the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included
in the comparisons." Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important
as the technology develops. They show that Jewish populations have tended to form relatively closely related groups
in independent communities, with most in a community sharing significant ancestry in common. For Jewish populations
of the diaspora, the genetic compositions of Ashkenazi, Sephardi, and Mizrahi Jewish populations show a predominant
amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle
Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from
ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout
the Old World". North African, Italian, and Iberian-origin Jewish populations show variable frequencies of admixture with non-Jewish
historical host populations among the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan
Jews), who are closely related, the source of non-Jewish admixture is mainly southern European, while Mizrahi Jews
show evidence of admixture with other Middle Eastern populations and Sub-Saharan Africans. Behar et al. have remarked
on an especially close relationship of Ashkenazi Jews and modern Italians. The studies also show that the Sephardic
Bnei Anusim (descendants of the "anusim" forced converts to Catholicism) of Iberia (estimated at about 19.8% of modern
Iberia) and Ibero-America (estimated at least 10% of modern Ibero-America) have Sephardic Jewish origins within the
last few centuries, while the Bene Israel and Cochin Jews of India, Beta Israel of Ethiopia, and a portion of the
Lemba people of Southern Africa, despite more closely resembling the local populations of their native countries,
also have some more remote ancient Jewish descent. Between 1948 and 1958, Israel's Jewish population rose from 800,000
to two million. Currently, Jews account for 75.4% of the Israeli population, or 6 million people. The early years
of the State of Israel were marked by the mass immigration of Holocaust survivors in the aftermath of the Holocaust
and Jews fleeing Arab lands. Israel also has a large population of Ethiopian Jews, many of whom were airlifted to
Israel in the late 1980s and early 1990s. Between 1974 and 1979, 227,258 immigrants arrived in Israel, about
half being from the Soviet Union. This period also saw an increase in immigration to Israel from Western Europe,
Latin America, and North America. More than half of the Jews live in the Diaspora (see Population table). Currently,
the largest Jewish community outside Israel, and either the largest or second-largest Jewish community in the world,
is located in the United States, with 5.2 million to 6.4 million Jews by various estimates. Elsewhere in the Americas,
there are also large Jewish populations in Canada (315,000), Argentina (180,000-300,000), and Brazil (196,000-600,000),
and smaller populations in Mexico, Uruguay, Venezuela, Chile, Colombia and several other countries (see History of
the Jews in Latin America). Demographers disagree on whether the United States has a larger Jewish population than
Israel, with many maintaining that Israel surpassed the United States in Jewish population during the 2000s, while
others maintain that the United States still has the largest Jewish population in the world. Currently, a major national
Jewish population survey is planned to ascertain whether or not Israel has overtaken the United States in Jewish
population. Western Europe's largest Jewish community, and the third-largest Jewish community in the world, can be
found in France, home to between 483,000 and 500,000 Jews, the majority of whom are immigrants or refugees from North
African Arab countries such as Algeria, Morocco, and Tunisia (or their descendants). The United Kingdom has a Jewish
community of 292,000. In Eastern Europe, there are anywhere from 350,000 to one million Jews living in the former
Soviet Union, but exact figures are difficult to establish. In Germany, the 102,000 Jews registered with the Jewish
community are a slowly declining population, despite the immigration of tens of thousands of Jews from the former
Soviet Union since the fall of the Berlin Wall. Thousands of Israelis also live in Germany, either permanently or
temporarily, for economic reasons. Prior to 1948, approximately 800,000 Jews were living in lands which now make
up the Arab world (excluding Israel). Of these, just under two-thirds lived in the French-controlled Maghreb region,
15–20% in the Kingdom of Iraq, approximately 10% in the Kingdom of Egypt and approximately 7% in the Kingdom of Yemen.
A further 200,000 lived in Pahlavi Iran and the Republic of Turkey. Today, around 26,000 Jews live in Arab countries
and around 30,000 in Iran and Turkey. A small-scale exodus had begun in many countries in the early decades of the
20th century, although the only substantial aliyah came from Yemen and Syria. The exodus from Arab and Muslim countries
took place primarily from 1948. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily
in Iraq, Yemen and Libya, with up to 90% of these communities leaving within a few years. The peak of the exodus
from Egypt occurred in 1956. The exodus in the Maghreb countries peaked in the 1960s. Lebanon was the only Arab country
to see a temporary increase in its Jewish population during this period, due to an influx of refugees from other
Arab countries, although by the mid-1970s the Jewish community of Lebanon had also dwindled. In the aftermath of
the exodus wave from Arab states, an additional migration of Iranian Jews peaked in the 1980s when around 80% of
Iranian Jews left the country.[citation needed] Since at least the time of the Ancient Greeks, a proportion of Jews
have assimilated into the wider non-Jewish society around them, by either choice or force, ceasing to practice Judaism
and losing their Jewish identity. Assimilation took place in all areas, and during all time periods, with some Jewish
communities, for example the Kaifeng Jews of China, disappearing entirely. The advent of the Jewish Enlightenment
of the 18th century (see Haskalah) and the subsequent emancipation of the Jewish populations of Europe and America
in the 19th century, accelerated this process, encouraging Jews to increasingly participate in, and become part
of, secular society. The result has been a growing trend of assimilation, as Jews marry non-Jewish spouses and stop
participating in the Jewish community. Rates of interreligious marriage vary widely: in the United States, it is just under 50%; in the United Kingdom, around 53%; in France, around 30%; and in Australia and Mexico, as low as 10%. In the United States, only about a third of children from intermarriages affiliate with Jewish religious practice.
The result is that most countries in the Diaspora have steady or slightly declining religiously Jewish populations
as Jews continue to assimilate into the countries in which they live.[citation needed] In the Papal States, which
existed until 1870, Jews were required to live only in specified neighborhoods called ghettos. In the 19th and (before
the end of World War II) 20th centuries, the Roman Catholic Church adhered to a distinction between "good antisemitism"
and "bad antisemitism". The "bad" kind promoted hatred of Jews because of their descent. This was considered un-Christian
because the Christian message was intended for all of humanity regardless of ethnicity; anyone could become a Christian.
The "good" kind criticized alleged Jewish conspiracies to control newspapers, banks, and other institutions, to care
only about accumulation of wealth, etc. Islam and Judaism have a complex relationship. Traditionally, Jews and Christians
living in Muslim lands, known as dhimmis, were allowed to practice their religions and administer their internal
affairs, but they were subject to certain conditions. They had to pay the jizya (a per capita tax imposed on free
adult non-Muslim males) to the Islamic state. Dhimmis had an inferior status under Islamic rule. They had several
social and legal disabilities such as prohibitions against bearing arms or giving testimony in courts in cases involving
Muslims. Many of the disabilities were highly symbolic. The one described by Bernard Lewis as "most degrading" was
the requirement of distinctive clothing, not found in the Quran or hadith but invented in early medieval Baghdad;
its enforcement was highly erratic. On the other hand, Jews rarely faced martyrdom, exile, or compulsion
to change their religion, and they were mostly free in their choice of residence and profession. Notable exceptions
include the massacre of Jews and forcible conversion of some Jews by the rulers of the Almohad dynasty in Al-Andalus
in the 12th century, as well as in Islamic Persia, and the forced confinement of Moroccan Jews to walled quarters
known as mellahs beginning from the 15th century and especially in the early 19th century. In modern times, it has
become commonplace for standard antisemitic themes to be conflated with anti-Zionist publications and pronouncements
of Islamic movements such as Hezbollah and Hamas, in the pronouncements of various agencies of the Islamic Republic
of Iran, and even in the newspapers and other publications of the Turkish Refah Partisi. Throughout history, many rulers,
empires and nations have oppressed their Jewish populations or sought to eliminate them entirely. Methods employed
ranged from expulsion to outright genocide; within nations, often the threat of these extreme methods was sufficient
to silence dissent. The history of antisemitism includes the First Crusade which resulted in the massacre of Jews;
the Spanish Inquisition (led by Tomás de Torquemada) and the Portuguese Inquisition, with their persecution and autos-da-fé
against the New Christians and Marrano Jews; the Bohdan Chmielnicki Cossack massacres in Ukraine; the Pogroms backed
by the Russian Tsars; as well as expulsions from Spain, Portugal, England, France, Germany, and other countries in
which the Jews had settled. According to a 2008 study published in the American Journal of Human Genetics, 19.8%
of the modern Iberian population has Sephardic Jewish ancestry, indicating that the number of conversos may have
been much higher than originally thought. The persecution reached a peak in Nazi Germany's Final Solution, which
led to the Holocaust and the slaughter of approximately 6 million Jews. Of the world's 15 million Jews in 1939, more
than a third were killed in the Holocaust. The Holocaust, the state-led systematic persecution and genocide of European Jews (and certain communities of North African Jews in European-controlled North Africa) and other minority groups of Europe during World War II by Germany and its collaborators, remains the most notable modern-day persecution of
Jews. The persecution and genocide were accomplished in stages. Legislation to remove the Jews from civil society
was enacted years before the outbreak of World War II. Concentration camps were established in which inmates were
used as slave labour until they died of exhaustion or disease. Where the Third Reich conquered new territory in Eastern
Europe, specialized units called Einsatzgruppen murdered Jews and political opponents in mass shootings. Jews and
Roma were crammed into ghettos before being transported hundreds of miles by freight train to extermination camps
where, if they survived the journey, the majority of them were killed in gas chambers. Virtually every arm of Germany's
bureaucracy was involved in the logistics of the mass murder, turning the country into what one Holocaust scholar
has called "a genocidal nation." There is also a trend of Orthodox movements reaching out to secular Jews in order to give them a stronger Jewish identity and reduce the chance of intermarriage. As a result of the efforts by these and
other Jewish groups over the past 25 years, there has been a trend (known as the Baal Teshuva movement) for secular
Jews to become more religiously observant, though the demographic implications of the trend are unknown. Additionally,
there is a growing rate of conversion by gentiles who choose to become Jews, known as "Jews by Choice".
The fiber is most often spun into yarn or thread and used to make a soft, breathable textile. The use of cotton for fabric
is known to date to prehistoric times; fragments of cotton fabric dated from 5000 BC have been excavated in Mexico
and the Indus Valley Civilization in Ancient India (modern-day Pakistan and some parts of India). Although cultivated
since antiquity, it was the invention of the cotton gin, which lowered the cost of production, that led to its widespread
use, and it is the most widely used natural fiber cloth in clothing today. The earliest evidence of cotton use in
South Asia has been found at the site of Mehrgarh, Pakistan, where cotton threads have been found preserved in copper
beads; these finds have been dated to the Neolithic period (between 6000 and 5000 BCE). Cotton cultivation in the region is
dated to the Indus Valley Civilization, which covered parts of modern eastern Pakistan and northwestern India between
3300 and 1300 BCE. The Indus cotton industry was well developed, and some methods used in cotton spinning and fabrication continued to be used until the industrialization of India. Between 2000 and 1000 BC, cotton became widespread across
much of India. For example, it has been found at the site of Hallus in Karnataka dating from around 1000 BC. In Iran
(Persia), the history of cotton dates back to the Achaemenid era (5th century BC); however, there are few sources
about the planting of cotton in pre-Islamic Iran. The planting of cotton was common in Merv, Ray and Pars of Iran.
In Persian poets' poems, especially Ferdowsi's Shahname, there are references to cotton ("panbe" in Persian). Marco
Polo (13th century) refers to the major products of Persia, including cotton. John Chardin, a French traveler of
the 17th century who visited Safavid Persia, spoke approvingly of the vast cotton farms of Persia. Though known since antiquity, the commercial growing of cotton in Egypt only started in the 1820s, after a Frenchman by the name of M. Jumel proposed to the then ruler, Mohamed Ali Pasha, that he could earn a substantial income by growing an extra-long-staple Maho (barbadense) cotton in Lower Egypt for the French market. Mohamed Ali Pasha accepted the proposition and granted himself the monopoly on the sale and export of cotton in Egypt; he later dictated that cotton should be grown in preference to other crops. By the time of the American Civil War, annual exports had reached $16
million (120,000 bales), which rose to $56 million by 1864, primarily due to the loss of the Confederate supply on
the world market. Exports continued to grow even after the reintroduction of US cotton, produced now by a paid workforce,
and Egyptian exports reached 1.2 million bales a year by 1903. During the late medieval period, cotton became known
as an imported fiber in northern Europe, without any knowledge of how it was derived, other than that it was a plant.
Because Herodotus had written in his Histories, Book III, 106, that in India trees grew in the wild producing wool,
it was assumed that the plant was a tree, rather than a shrub. This aspect is retained in the name for cotton in
several Germanic languages, such as German Baumwolle, which translates as "tree wool" (Baum means "tree"; Wolle means
"wool"). Noting its similarities to wool, people in the region could only imagine that cotton must be produced by
plant-borne sheep. John Mandeville, writing in 1350, stated as fact the now-preposterous belief: "There grew there
[India] a wonderful tree which bore tiny lambs on the endes of its branches. These branches were so pliable that
they bent down to allow the lambs to feed when they are hungrie [sic]." (See Vegetable Lamb of Tartary.) By the end
of the 16th century, cotton was cultivated throughout the warmer regions in Asia and the Americas. India's cotton-processing
sector gradually declined during British expansion in India and the establishment of colonial rule during the late
18th and early 19th centuries. This was largely due to aggressive colonialist mercantile policies of the British
East India Company, which made cotton processing and manufacturing workshops in India uncompetitive. Indian markets
were increasingly forced to supply only raw cotton and, by British-imposed law, to purchase manufactured textiles
from Britain.[citation needed] The advent of the Industrial Revolution in Britain provided a great boost to cotton
manufacture, as textiles emerged as Britain's leading export. In 1738, Lewis Paul and John Wyatt, of Birmingham,
England, patented the roller spinning machine, as well as the flyer-and-bobbin system for drawing cotton to a more
even thickness using two sets of rollers that traveled at different speeds. Later, the invention of James Hargreaves'
spinning jenny in 1764, Richard Arkwright's spinning frame in 1769 and Samuel Crompton's spinning mule in 1775 enabled
British spinners to produce cotton yarn at much higher rates. From the late 18th century on, the British city of
Manchester acquired the nickname "Cottonopolis" due to the cotton industry's omnipresence within the city, and Manchester's
role as the heart of the global cotton trade. Production capacity in Britain and the United States was improved by
the invention of the cotton gin by the American Eli Whitney in 1793. Before the development of cotton gins, the cotton
fibers had to be pulled from the seeds tediously by hand. By the late 1700s a number of crude ginning machines had
been developed. However, producing a bale of cotton still required over 600 hours of human labor, making large-scale production uneconomical in the United States, even with the use of slave labor. The gin that Whitney manufactured
(the Holmes design) reduced the hours down to just a dozen or so per bale. Although Whitney patented his own design
for a cotton gin, he manufactured a prior design from Henry Ogden Holmes, for which Holmes filed a patent in 1796.
Improving technology and increasing control of world markets allowed British traders to develop a commercial chain
in which raw cotton fibers were (at first) purchased from colonial plantations, processed into cotton cloth in the
mills of Lancashire, and then exported on British ships to captive colonial markets in West Africa, India, and China
(via Shanghai and Hong Kong). By the 1840s, India was no longer capable of supplying the vast quantities of cotton
fibers needed by mechanized British factories, while shipping bulky, low-price cotton from India to Britain was time-consuming
and expensive. This, coupled with the emergence of American cotton as a superior type (due to the longer, stronger
fibers of the two domesticated native American species, Gossypium hirsutum and Gossypium barbadense), encouraged
British traders to purchase cotton from plantations in the United States and plantations in the Caribbean. By the
mid-19th century, "King Cotton" had become the backbone of the southern American economy. In the United States, cultivating
and harvesting cotton became the leading occupation of slaves. During the American Civil War, American cotton exports
slumped due to a Union blockade on Southern ports, and also because of a strategic decision by the Confederate government
to cut exports, hoping to force Britain to recognize the Confederacy or enter the war. This prompted the main purchasers
of cotton, Britain and France, to turn to Egyptian cotton. British and French traders invested heavily in cotton
plantations. The Egyptian government of Viceroy Isma'il took out substantial loans from European bankers and stock
exchanges. After the American Civil War ended in 1865, British and French traders abandoned Egyptian cotton and returned
to cheap American exports,[citation needed] sending Egypt into a deficit spiral that led to the country declaring
bankruptcy in 1876, a key factor behind Egypt's occupation by the British Empire in 1882. Cotton remained a key crop
in the Southern economy after emancipation and the end of the Civil War in 1865. Across the South, sharecropping
evolved, in which landless black and white farmers worked land owned by others in return for a share of the profits.
Some farmers rented the land and bore the production costs themselves. Until mechanical cotton pickers were developed,
cotton farmers needed additional labor to hand-pick cotton. Picking cotton was a source of income for families across
the South. Rural and small town school systems had split vacations so children could work in the fields during the "cotton-picking" season.
Successful cultivation of cotton requires a long frost-free period, plenty of sunshine, and a moderate rainfall,
usually from 600 to 1,200 mm (24 to 47 in). Soils usually need to be fairly heavy, although the level of nutrients
does not need to be exceptional. In general, these conditions are met within the seasonally dry tropics and subtropics
in the Northern and Southern hemispheres, but a large proportion of the cotton grown today is cultivated in areas
with less rainfall that obtain the water from irrigation. Production of the crop for a given year usually starts
soon after harvesting the preceding autumn. Cotton is naturally a perennial but is grown as an annual to help control
pests. Planting time in spring in the Northern hemisphere varies from the beginning of February to the beginning
of June. The area of the United States known as the South Plains is the largest contiguous cotton-growing region
in the world. While dryland (non-irrigated) cotton is successfully grown in this region, consistent yields are only
produced with heavy reliance on irrigation water drawn from the Ogallala Aquifer. Because cotton is somewhat salt- and drought-tolerant, it is an attractive crop for arid and semiarid regions. As water resources get tighter around the world, economies that rely on cotton face difficulties and conflict, as well as potential environmental problems.
For example, improper cropping and irrigation practices have led to desertification in areas of Uzbekistan, where
cotton is a major export. In the days of the Soviet Union, the Aral Sea was tapped for agricultural irrigation, largely
of cotton, and now salination is widespread. Genetically modified (GM) cotton was developed to reduce the heavy reliance
on pesticides. The bacterium Bacillus thuringiensis (Bt) naturally produces a chemical harmful only to a small fraction
of insects, most notably the larvae of moths and butterflies, beetles, and flies, and harmless to other forms of
life. The gene coding for Bt toxin has been inserted into cotton, causing cotton, called Bt cotton, to produce this
natural insecticide in its tissues. In many regions, the main pests in commercial cotton are lepidopteran larvae,
which are killed by the Bt protein in the transgenic cotton they eat. This eliminates the need to use large amounts
of broad-spectrum insecticides to kill lepidopteran pests (some of which have developed pyrethroid resistance). This
spares natural insect predators in the farm ecology and further contributes to noninsecticide pest management. Bt cotton is ineffective against many cotton pests, however, such as plant bugs, stink bugs, and aphids; depending on circumstances it may still be desirable to use insecticides against these. A 2006 study by Cornell researchers,
the Center for Chinese Agricultural Policy and the Chinese Academy of Science on Bt cotton farming in China found
that after seven years these secondary pests that were normally controlled by pesticide had increased, necessitating
the use of pesticides at similar levels to non-Bt cotton and reducing farmers' profits because of the extra
expense of GM seeds. However, a 2009 study by the Chinese Academy of Sciences, Stanford University and Rutgers University
refuted this. They concluded that the GM cotton effectively controlled bollworm. The secondary pests were mostly
miridae (plant bugs) whose increase was related to local temperature and rainfall and only continued to increase
in half the villages studied. Moreover, the increase in insecticide use for the control of these secondary insects
was far smaller than the reduction in total insecticide use due to Bt cotton adoption. A 2012 Chinese study concluded
that Bt cotton halved the use of pesticides and doubled the level of ladybirds, lacewings and spiders. The International
Service for the Acquisition of Agri-biotech Applications (ISAAA) said that, worldwide, GM cotton was planted on an
area of 25 million hectares in 2011. This was 69% of the worldwide total area planted in cotton. GM cotton acreage
in India grew at a rapid rate, increasing from 50,000 hectares in 2002 to 10.6 million hectares in 2011. The total
cotton area in India was 12.1 million hectares in 2011, so GM cotton was grown on 88% of the cotton area. This made
India the country with the largest area of GM cotton in the world. A long-term study on the economic impacts of Bt
cotton in India, published in the Journal PNAS in 2012, showed that Bt cotton has increased yields, profits, and
living standards of smallholder farmers. The U.S. GM cotton crop was 4.0 million hectares in 2011, the second largest area in the world; the Chinese GM cotton crop was third largest by area, with 3.9 million hectares; and Pakistan had
the fourth largest GM cotton crop area of 2.6 million hectares in 2011. The initial introduction of GM cotton proved
to be a success in Australia: the yields were equivalent to those of the non-transgenic varieties, and the crop required much less pesticide to produce (an 85% reduction). The subsequent introduction of a second variety of GM cotton led to increases in GM cotton production until 95% of the Australian cotton crop was GM in 2009, making Australia the country with
the fifth largest GM cotton crop in the world. Other GM cotton growing countries in 2011 were Argentina, Myanmar,
Burkina Faso, Brazil, Mexico, Colombia, South Africa and Costa Rica. Organic cotton is generally understood as cotton
from plants not genetically modified and that is certified to be grown without the use of any synthetic agricultural
chemicals, such as fertilizers or pesticides. Its production also promotes and enhances biodiversity and biological
cycles. In the United States, organic cotton plantations are required to comply with the National Organic Program (NOP).
This institution determines the allowed practices for pest control, growing, fertilizing, and handling of organic
crops. As of 2007, 265,517 bales of organic cotton were produced in 24 countries, and worldwide production was growing
at a rate of more than 50% per year. Historically, in North America, one of the most economically destructive pests
in cotton production has been the boll weevil. Due to the US Department of Agriculture's highly successful Boll Weevil
Eradication Program (BWEP), this pest has been eliminated from cotton in most of the United States. This program,
along with the introduction of genetically engineered Bt cotton (which contains a bacterial gene that codes for a
plant-produced protein that is toxic to a number of pests such as cotton bollworm and pink bollworm), has allowed
a reduction in the use of synthetic insecticides. Most cotton in the United States, Europe and Australia is harvested
mechanically, either by a cotton picker, a machine that removes the cotton from the boll without damaging the cotton
plant, or by a cotton stripper, which strips the entire boll off the plant. Cotton strippers are used in regions
where it is too windy to grow picker varieties of cotton, and usually after application of a chemical defoliant or
the natural defoliation that occurs after a freeze. Cotton is a perennial crop in the tropics, and without defoliation
or freezing, the plant will continue to grow. The era of manufactured fibers began with the development of rayon
in France in the 1890s. Rayon is derived from natural cellulose and cannot be considered synthetic, but it requires extensive processing in manufacture, and it served as a less expensive replacement for more naturally derived materials. A succession of new synthetic fibers was introduced by the chemicals industry in the following decades.
Acetate in fiber form was developed in 1924. Nylon, the first fiber synthesized entirely from petrochemicals, was
introduced as a sewing thread by DuPont in 1936, followed by DuPont's acrylic in 1944. Some garments were created
from fabrics based on these fibers, such as women's hosiery from nylon, but it was not until the introduction of
polyester into the fiber marketplace in the early 1950s that the market for cotton came under threat. The rapid uptake
of polyester garments in the 1960s caused economic hardship in cotton-exporting economies, especially in Central
American countries, such as Nicaragua, where cotton production had boomed tenfold between 1950 and 1965 with the
advent of cheap chemical pesticides. Cotton production recovered in the 1970s, but crashed to pre-1960 levels in
the early 1990s. Beginning as a self-help program in the mid-1960s, the Cotton Research and Promotion Program (CRPP)
was organized by U.S. cotton producers in response to cotton's steady decline in market share. At that time, producers
voted to set up a per-bale assessment system to fund the program, with built-in safeguards to protect their investments.
With the passage of the Cotton Research and Promotion Act of 1966, the program joined forces and began battling synthetic
competitors and re-establishing markets for cotton. Today, the success of this program has made cotton the best-selling
fiber in the U.S. and one of the best-selling fibers in the world. Cotton is used to make a number
of textile products. These include terrycloth for highly absorbent bath towels and robes; denim for blue jeans; cambric,
popularly used in the manufacture of blue work shirts (from which we get the term "blue-collar"); and corduroy, seersucker,
and cotton twill. Socks, underwear, and most T-shirts are made from cotton. Bed sheets often are made from cotton.
Cotton also is used to make yarn used in crochet and knitting. Fabric also can be made from recycled or recovered
cotton that otherwise would be thrown away during the spinning, weaving, or cutting process. While many fabrics are
made completely of cotton, some materials blend cotton with other fibers, including rayon and synthetic fibers such
as polyester. It can be used in either knitted or woven fabrics, and it can be blended with elastane to make a stretchier thread for knitted fabrics and apparel such as stretch jeans. The cottonseed which remains after the cotton is ginned
is used to produce cottonseed oil, which, after refining, can be consumed by humans like any other vegetable oil.
The cottonseed meal that is left generally is fed to ruminant livestock; the gossypol remaining in the meal is toxic
to monogastric animals. Cottonseed hulls can be added to dairy cattle rations for roughage. During the American slavery
period, cotton root bark was used in folk remedies as an abortifacient, that is, to induce a miscarriage. Gossypol is one of many substances found in all parts of the cotton plant, and scientists have described it as a 'poisonous pigment'. It appears to inhibit the development of sperm or even to restrict sperm motility, and it is also thought to interfere with the menstrual cycle by restricting the release of certain hormones. Cotton linters are
fine, silky fibers which adhere to the seeds of the cotton plant after ginning. These curly fibers typically are
less than 1⁄8 inch (3.2 mm) long. The term also may apply to the longer textile fiber staple lint as well as the
shorter fuzzy fibers from some upland species. Linters are traditionally used in the manufacture of paper and as
a raw material in the manufacture of cellulose. In the UK, linters are referred to as "cotton wool". This can also
be a refined product (absorbent cotton in U.S. usage) which has medical, cosmetic and many other practical uses.
The first medical use of cotton wool was by Sampson Gamgee at the Queen's Hospital (later the General Hospital) in
Birmingham, England. Cotton lisle is a finely-spun, tightly twisted type of cotton that is noted for being strong
and durable. Lisle is composed of two strands that have each been given an extra twist per inch compared with ordinary yarns
and combined to create a single thread. The yarn is spun so that it is compact and solid. This cotton is used mainly
for underwear, stockings, and gloves. Colors applied to this yarn are noted for being more brilliant than colors
applied to softer yarn. This type of thread was first made in the city of Lisle, France (now Lille), hence its name.
The largest producers of cotton, currently (2009), are China and India, with annual production of about 34 million
bales and 33.4 million bales, respectively; most of this production is consumed by their respective textile industries.
The largest exporters of raw cotton are the United States, with sales of $4.9 billion, and Africa, with sales of
$2.1 billion. The total international trade is estimated to be $12 billion. Africa's share of the cotton trade has
doubled since 1980. Neither area has a significant domestic textile industry, textile manufacturing having moved
to developing nations in Eastern and South Asia such as India and China. In Africa, cotton is grown by numerous small
holders. Dunavant Enterprises, based in Memphis, Tennessee, is the leading cotton broker in Africa, with hundreds
of purchasing agents. It operates cotton gins in Uganda, Mozambique, and Zambia. In Zambia, it often offers loans
for seed and expenses to the 180,000 small farmers who grow cotton for it, as well as advice on farming methods.
Cargill also purchases cotton in Africa for export. The 25,000 cotton growers in the United States of America are
heavily subsidized at the rate of $2 billion per year although China now provides the highest overall level of cotton
sector support. The future of these subsidies is uncertain and has led to anticipatory expansion of cotton brokers'
operations in Africa. Dunavant expanded in Africa by buying out local operations. This is only possible in former
British colonies and Mozambique; former French colonies continue to maintain tight monopolies, inherited from their
former colonialist masters, on cotton purchases at low fixed prices. While Brazil was fighting the US through the
WTO's Dispute Settlement Mechanism against a heavily subsidized cotton industry, a group of four least-developed
African countries – Benin, Burkina Faso, Chad, and Mali – also known as "Cotton-4" have been the leading protagonist
for the reduction of US cotton subsidies through negotiations. The four introduced a "Sectoral Initiative in Favour
of Cotton", presented by Burkina Faso's President Blaise Compaoré during the Trade Negotiations Committee on 10 June
2003. In addition to concerns over subsidies, the cotton industries of some countries are criticized for employing
child labor and damaging workers' health by exposure to pesticides used in production. The Environmental Justice
Foundation has campaigned against the prevalent use of forced child and adult labor in cotton production in Uzbekistan,
the world's third largest cotton exporter. The international production and trade situation has led to "fair trade"
cotton clothing and footwear, joining a rapidly growing market for organic clothing, fair fashion or "ethical fashion".
The fair trade system was initiated in 2005 with producers from Cameroon, Mali and Senegal. A public genome sequencing
effort of cotton was initiated in 2007 by a consortium of public researchers. They agreed on a strategy to sequence
the genome of cultivated, tetraploid cotton. "Tetraploid" means that cultivated cotton actually has two separate
genomes within its nucleus, referred to as the A and D genomes. The sequencing consortium first agreed to sequence
the D-genome relative of cultivated cotton (G. raimondii, a wild Central American cotton species) because of its
small size and limited number of repetitive elements. It is nearly one-third the number of bases of tetraploid cotton
(AD), and each chromosome is only present once. The A genome of G. arboreum would be sequenced
next. Its genome is roughly twice the size of G. raimondii's. Part of the difference in size between the two genomes
is the amplification of retrotransposons (GORGE). Once both diploid genomes are assembled, then research could begin
sequencing the actual genomes of cultivated cotton varieties. This strategy is out of necessity; if one were to sequence
the tetraploid genome without model diploid genomes, the euchromatic DNA sequences of the AD genomes would co-assemble
and the repetitive elements of the AD genomes would assemble independently into A and D sequences respectively. Then
there would be no way to untangle the mess of AD sequences without comparing them to their diploid counterparts.
The public sector effort continues with the goal to create a high-quality, draft genome sequence from reads generated
by all sources. The public-sector effort has generated Sanger reads of BACs, fosmids, and plasmids as well as 454
reads. These later types of reads will be instrumental in assembling an initial draft of the D genome. In 2010, two
companies (Monsanto and Illumina), completed enough Illumina sequencing to cover the D genome of G. raimondii about
50x. They announced that they would donate their raw reads to the public. This public relations effort gave them
some recognition for sequencing the cotton genome. Once the D genome is assembled from all of this raw material,
it will undoubtedly assist in the assembly of the AD genomes of cultivated varieties of cotton, but a lot of hard
work remains.
In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits
than the original representation. Compression can be either lossy or lossless. Lossless compression reduces bits
by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression
reduces bits by identifying unnecessary information and removing it. The process of reducing the size of a data file
is referred to as data compression. In the context of data transmission, it is called source coding (encoding done
at the source of the data before it is stored or transmitted) in opposition to channel coding. Compression is useful
because it helps reduce resource usage, such as data storage space or transmission capacity. Because compressed data
must be decompressed before use, the extra processing imposes computational or other costs; compression is far from a free lunch. Data compression is subject to a space–time complexity trade-off. For instance,
a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be
viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient
or require additional storage. The design of data compression schemes involves trade-offs among various factors,
including the degree of compression, the amount of distortion introduced (when using lossy data compression), and
the computational resources required to compress and decompress the data. Lossless data compression algorithms usually
exploit statistical redundancy to represent data without losing any information, so that the process is reversible.
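A toy run-length coder (an illustrative sketch, not any standard format) shows redundancy being removed reversibly:

```python
def rle_encode(data):
    """Collapse runs of repeated symbols into [symbol, count] pairs."""
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1
        else:
            runs.append([symbol, 1])
    return runs

def rle_decode(runs):
    """Expand [symbol, count] pairs back to the original sequence."""
    return [s for s, n in runs for _ in range(n)]

pixels = ["red"] * 279 + ["blue"] * 2
encoded = rle_encode(pixels)          # [["red", 279], ["blue", 2]]
assert rle_decode(encoded) == pixels  # lossless: fully reversible
```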
Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image
may have areas of colour that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the
data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to
reduce file size by eliminating redundancy. The Lempel–Ziv (LZ) compression methods are among the most popular algorithms
for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression
can be slow. DEFLATE is used in PKZIP, Gzip and PNG. LZW (Lempel–Ziv–Welch) is used in GIF images. Also noteworthy
is the LZR (Lempel–Ziv–Renau) algorithm, which serves as the basis for the Zip method. LZ methods
use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ
methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded
(e.g. SHRI, LZX). Current LZ-based coding schemes that perform well are Brotli and LZX. LZX is used in Microsoft's
CAB format. In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled
to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical
calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It
can achieve superior compression to other techniques such as the better-known Huffman algorithm. It uses an internal
memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations
that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of
data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary
and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of
the input data. An early example of the use of arithmetic coding was its use as an optional (but not widely used)
feature of the JPEG image coding standard. It has since been applied in various other designs including H.264/MPEG-4
AVC and HEVC for video coding. Lossy data compression is the converse of lossless data compression. In these schemes,
some loss of information is acceptable. Dropping nonessential detail from the data source can save storage space.
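As a minimal illustration (hypothetical, not any real codec), dropping the low-order bits of each sample discards detail irreversibly while shrinking the range of values to store:

```python
def quantize(samples, keep_bits, total_bits=8):
    """Keep only the high-order bits of each sample (lossy)."""
    shift = total_bits - keep_bits
    return [s >> shift for s in samples]

def dequantize(levels, keep_bits, total_bits=8):
    """Reconstruct approximate samples; the dropped bits are gone."""
    shift = total_bits - keep_bits
    return [l << shift for l in levels]

original = [200, 201, 202, 203]             # nearly identical 8-bit values
coarse = quantize(original, keep_bits=4)    # [12, 12, 12, 12]
restored = dequantize(coarse, keep_bits=4)  # [192, 192, 192, 192], not the original
```

Four nearly equal values collapse into one level, which a lossless back end can then store very cheaply.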
Lossy data compression schemes are designed by research on how people perceive the data in question. For example,
the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image
compression works in part by rounding off nonessential bits of information. There is a corresponding trade-off between
preserving information and reducing size. A number of popular compression formats exploit these perceptual differences,
including those used in music files, images, and video. In lossy audio compression, methods of psychoacoustics are
used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often
performed with even more specialized techniques; speech coding, or voice coding, is sometimes distinguished as a
separate discipline from audio compression. Different audio and speech compression standards are listed under audio
coding formats. Voice compression is used in internet telephony, for example, while audio compression is used for CD ripping
and is decoded by the audio players. There is a close connection between machine learning and compression: a system
that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression
(by using arithmetic coding on the output distribution) while an optimal compressor can be used for prediction (by
finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification
for using data compression as a benchmark for "general intelligence." Data compression can be viewed as a special
case of data differencing: Data differencing consists of producing a difference given a source and a target, with
patching producing a target given a source and a difference, while data compression consists of producing a compressed
file given a target, and decompression consists of producing a target given only a compressed file. Thus, one can
consider data compression as data differencing with empty source data, the compressed file corresponding to a "difference
from nothing." This is the same as considering absolute entropy (corresponding to data compression) as a special
case of relative entropy (corresponding to data differencing) with no initial data. Audio data compression, not to
be confused with dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements
of audio data. Audio compression algorithms are implemented in software as audio codecs. Lossy audio compression
algorithms provide higher compression at the cost of fidelity and are used in numerous audio applications. These
algorithms almost all rely on psychoacoustics to eliminate less audible or meaningful sounds, thereby reducing the
space required to store or transmit them. Lossless audio compression produces a representation of digital data that
decompresses to an exact digital duplicate of the original audio stream, unlike playback from lossy compression techniques
such as Vorbis and MP3. Compression ratios are around 50–60% of original size, which is similar to those for generic
lossless data compression. Lossless compression is unable to attain high compression ratios due to the complexity
of waveforms and the rapid changes in sound forms. Codecs like FLAC, Shorten and TTA use linear prediction to estimate
the spectrum of the signal. Many of these algorithms use convolution with the filter [-1 1] to slightly whiten or
flatten the spectrum, thereby allowing traditional lossless compression to work more efficiently. The process is
reversed upon decompression. Lossy audio compression is used in a wide range of applications. In addition to the
direct applications (mp3 players or computers), digitally compressed audio streams are used in most video DVDs, digital
television, streaming media on the internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts.
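The [-1 1] whitening filter mentioned earlier amounts to first-order differencing, which is trivially invertible; a sketch assuming integer PCM samples:

```python
def whiten(samples):
    """Convolve with [-1, 1]: store each sample minus its predecessor."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def unwhiten(residuals):
    """Invert the filter by running a cumulative sum."""
    total = 0
    out = []
    for r in residuals:
        total += r
        out.append(total)
    return out

smooth = [100, 102, 104, 105, 105]
residuals = whiten(smooth)   # [100, 2, 2, 1, 0]; small values compress well
assert unwhiten(residuals) == smooth
```

The residuals cluster near zero for smooth waveforms, which is exactly what a generic lossless coder exploits.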
Lossy compression typically achieves far greater compression than lossless compression (data of 5 percent to 20 percent
of the original stream, rather than 50 percent to 60 percent), by discarding less-critical data. To determine what
information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such
as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain.
Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how
audible they are. The audibility of spectral components is calculated using the absolute threshold of hearing and the principles
of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in
some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours
may also be used to weight the perceptual importance of components. Models of the human ear-brain combination incorporating
such effects are often called psychoacoustic models. Other types of lossy compressors, such as the linear predictive
coding (LPC) used with speech, are source-based coders. These coders use a model of the sound's generator (such as
the human vocal tract with LPC) to whiten the audio signal (i.e., flatten its spectrum) before quantization. LPC
may be thought of as a basic perceptual coding technique: reconstruction of an audio signal using a linear predictor
shapes the coder's quantization noise into the spectrum of the target signal, partially masking it. Latency results
from the methods used to encode and decode the data. Some codecs will analyze a longer segment of the data to optimize
efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. (Codecs often divide the data into segments called "frames", discrete units for encoding and decoding.) The inherent latency
of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with
a telephone conversation, significant delays may seriously degrade the perceived quality. In contrast to the speed
of compression, which is proportional to the number of operations required by the algorithm, here latency refers
to the number of samples that must be analysed before a block of audio is processed. In the minimum case, latency
is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time
domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony.
In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model
in the frequency domain, and latency is on the order of 23 ms (46 ms for two-way communication). If the data to
be compressed is analog (such as a voltage that varies with time), quantization is employed to digitize it into numbers
(normally integers). This is referred to as analog-to-digital (A/D) conversion. If the integers generated by quantization
are 8 bits each, then the entire range of the analog signal is divided into 256 intervals and all the signal values
within an interval are quantized to the same number. If 16-bit integers are generated, then the range of the analog
signal is divided into 65,536 intervals. A literature compendium for a large variety of audio coding systems was
published in the IEEE Journal on Selected Areas in Communications (JSAC), February 1988. While there were some papers
from before that time, this collection documented an entire variety of finished, working audio coders, nearly all
of them using perceptual (i.e. masking) techniques and some kind of frequency analysis and back-end noiseless coding.
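The uniform quantization described above can be sketched for an analog value assumed to lie in [-1.0, 1.0]:

```python
def a_to_d(x, bits):
    """Map an analog value in [-1.0, 1.0] to one of 2**bits interval indices."""
    levels = 2 ** bits
    # Scale to [0, levels), then truncate to the interval index.
    index = int((x + 1.0) / 2.0 * levels)
    return min(index, levels - 1)  # clamp x == 1.0 into the top interval

# With 8 bits there are 256 intervals, with 16 bits 65,536; all values
# falling inside one interval are quantized to the same number.
assert a_to_d(0.500, 8) == a_to_d(0.501, 8)
```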
Several of these papers remarked on the difficulty of obtaining good, clean digital audio for research purposes.
Most, if not all, of the authors in the JSAC edition were also active in the MPEG-1 Audio committee. The world's
first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor
at the University of Buenos Aires. In 1983, using the psychoacoustic principle of the masking of critical bands first
published in 1967, he started developing a practical application based on the recently developed IBM PC computer,
and the broadcast automation system was launched in 1987 under the name Audicom. Twenty years later, almost all the
radio stations in the world were using similar technology manufactured by a number of companies. The majority of
video compression algorithms use lossy compression. Uncompressed video requires a very high data rate. Although lossless
video compression codecs perform at a compression factor of 5–12, a typical MPEG-4 lossy compression video has a
compression factor between 20 and 200. As in all lossy compression, there is a trade-off between video quality, cost
of processing the compression and decompression, and system requirements. Highly compressed video may present visible
or distracting artifacts. Some video compression schemes typically operate on square-shaped groups of neighboring
pixels, often called macroblocks. These pixel groups or blocks of pixels are compared from one frame to the next,
and the video compression codec sends only the differences within those blocks. In areas of video with more motion,
the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during
explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases
or to increases in the variable bitrate. Video data may be represented as a series of still image frames. The sequence
of frames contains spatial and temporal redundancy that video compression algorithms attempt to eliminate or code
in a smaller size. Similarities can be encoded by only storing differences between frames, or by using perceptual
features of human vision. For example, small differences in color are more difficult to perceive than are changes
in brightness. Compression algorithms can average a color across these similar areas to reduce space, in a manner
similar to those used in JPEG image compression. Some of these methods are inherently lossy while others may preserve
all relevant information from the original, uncompressed video. The most widely used method works by comparing
each frame in the video with the previous one. If the frame contains areas where nothing has moved, the system simply
issues a short command that copies that part of the previous frame, bit-for-bit, into the next one. If sections of
the frame move in a simple manner, the compressor emits a (slightly longer) command that tells the decompressor to
shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than intraframe compression.
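The frame-copying idea can be sketched with a hypothetical block size and flat frame layout (real codecs use motion-compensated macroblocks):

```python
def encode_frame(prev, curr, block=4):
    """Emit only the blocks that changed since the previous frame."""
    changed = []
    for i in range(0, len(curr), block):
        if curr[i:i + block] != prev[i:i + block]:
            changed.append((i, curr[i:i + block]))
    return changed

def decode_frame(prev, changed):
    """Start from a copy of the previous frame and patch the changed blocks."""
    out = list(prev)
    for i, data in changed:
        out[i:i + len(data)] = data
    return out

prev = [0] * 16
curr = [0] * 16
curr[4:8] = [9, 9, 9, 9]          # motion confined to one block
delta = encode_frame(prev, curr)  # [(4, [9, 9, 9, 9])]
assert decode_frame(prev, delta) == curr
```

Note the dependency this creates: decoding `curr` requires `prev`, which is why losing or cutting a frame breaks the frames that follow.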
Interframe compression works well for programs that will simply be played back by the viewer, but can cause problems
if the video sequence needs to be edited. Because interframe compression copies data from one frame to another, if
the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly.
Some video formats, such as DV, compress each frame independently using intraframe compression. Making 'cuts' in
intraframe-compressed video is almost as easy as editing uncompressed video: one finds the beginning and ending of
each frame, and simply copies bit-for-bit each frame that one wants to keep, and discards the frames one doesn't
want. Another difference between intraframe and interframe compression is that, with intraframe systems, each frame
uses a similar amount of data. In most interframe systems, certain frames (such as "I frames" in MPEG-2) aren't allowed
to copy data from other frames, so they require much more data than other frames nearby. Today, nearly all commonly
used video compression methods (e.g., those in standards approved by the ITU-T or ISO) apply a discrete cosine transform
(DCT) for spatial redundancy reduction. The DCT that is widely used in this regard was introduced by N. Ahmed, T.
Natarajan and K. R. Rao in 1974. Other methods, such as fractal compression, matching pursuit and the use of a discrete
wavelet transform (DWT) have been the subject of some research, but are typically not used in practical products
(except for the use of wavelet coding as still-image coders without motion compensation). Interest in fractal compression
seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.
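The 1-D DCT-II underlying these standards can be written directly from its definition (a naive O(N^2) sketch, without the normalization factors production coders apply):

```python
import math

def dct(signal):
    """Naive DCT-II: X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)."""
    N = len(signal)
    return [sum(x * math.cos(math.pi / N * (n + 0.5) * k)
                for n, x in enumerate(signal))
            for k in range(N)]

# A constant block concentrates all energy in coefficient 0, which is
# why the DCT compacts smooth image regions into a few coefficients.
coeffs = dct([5.0, 5.0, 5.0, 5.0])
assert abs(coeffs[0] - 20.0) < 1e-9
assert all(abs(c) < 1e-9 for c in coeffs[1:])
```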
Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences
of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype.
In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not
use a reference genome for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression
(95% reduction in file size), providing 2- to 4-fold better compression, far faster, than the leading
general-purpose compression utilities. For this, Chanda, Elhaik, and Bader introduced MAF based encoding (MAFE),
which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing
the dataset. Other algorithms in 2009 and 2013 (DNAZip and GenomeZip) have compression ratios of up to 1200-fold—allowing
6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged
over many genomes).
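Reference-based storage of a genome can be sketched as data differencing (a hypothetical toy, far simpler than DNAZip or GenomeZip): record only the positions where the genome deviates from the reference:

```python
def diff_against_reference(reference, genome):
    """Record (position, base) only where the genome deviates."""
    return [(i, b) for i, (r, b) in enumerate(zip(reference, genome)) if r != b]

def reconstruct(reference, diffs):
    """Rebuild the genome by patching the reference."""
    seq = list(reference)
    for i, b in diffs:
        seq[i] = b
    return "".join(seq)

reference = "ACGTACGTACGT"
genome    = "ACGTACCTACGT"  # one SNP at position 6
diffs = diff_against_reference(reference, genome)
assert diffs == [(6, "C")]
assert reconstruct(reference, diffs) == genome
```

Because two human genomes differ in only a tiny fraction of positions, the difference list is vastly smaller than the genome itself.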
The Sun had the largest circulation of any daily newspaper in the United Kingdom, but in late 2013 slipped to second largest
Saturday newspaper behind the Daily Mail. It had an average daily circulation of 2.2 million copies in March 2014.
Between July and December 2013 the paper had an average daily readership of approximately 5.5 million, with approximately
31% of those falling into the ABC1 demographic and 68% in the C2DE demographic. Approximately 41% of readers are
women. The Sun has been involved in many controversies in its history, including its coverage of the 1989 Hillsborough
football stadium disaster. Regional editions of the newspaper for Scotland, Northern Ireland and the Republic of
Ireland are published in Glasgow (The Scottish Sun), Belfast (The Sun) and Dublin (The Irish Sun) respectively. On
26 February 2012, The Sun on Sunday was launched to replace the closed News of the World, employing some of its former
journalists. In late 2013, it was given a new look, with a new typeface. The average circulation for The Sun on Sunday
in March 2014 was 1,686,840; but in May 2015 The Mail on Sunday sold more copies for the first time, an average of
28,650 over those of its rival: 1,497,855 to 1,469,195. Roy Greenslade issued some caveats over the May 2015 figures,
but believes the weekday Daily Mail will overtake The Sun in circulation during 2016. Research commissioned by Cecil
King from Mark Abrams of Sussex University, The Newspaper Reading Public of Tomorrow, identified demographic changes
which suggested reasons why the Herald might be in decline. The new paper was intended to add a readership of 'social
radicals' to the Herald's 'political radicals'. Launched with an advertising budget of £400,000 the brash new paper
"burst forth with tremendous energy", according to The Times. Its initial print run of 3.5 million was attributed
to 'curiosity' and the 'advantage of novelty', and had declined to the previous circulation of the Daily Herald (1.2
million) within a few weeks. Seizing the opportunity to increase his presence on Fleet Street, Murdoch made an agreement
with the print unions, promising fewer redundancies if he acquired the newspaper. He assured IPC that he would publish
a "straightforward, honest newspaper" which would continue to support Labour. IPC, under pressure from the unions,
rejected Maxwell's offer, and Murdoch bought the paper for £800,000, to be paid in instalments. He would later remark:
"I am constantly amazed at the ease with which I entered British newspapers." Murdoch found he had such a rapport
with Larry Lamb over lunch that other potential recruits as editor were not interviewed and Lamb was appointed as
the first editor of the new Sun. He was scathing in his opinion of the Mirror, where he had recently been employed
as a senior sub-editor, and shared Murdoch's view that a paper's quality was best measured by its sales, and he regarded
the Mirror as overstaffed, and primarily aimed at an ageing readership. Lamb hastily recruited a staff of about 125
reporters, who were mostly selected for their availability rather than their ability. Sex was used as an important element in the content and marketing of the paper from the start; Lamb believed it was the most important part of his readers' lives. The first topless Page 3 model, German-born Stephanie Rahn, appeared on 17 November 1970; she
was tagged as a "Birthday Suit Girl" to mark the first anniversary of the relaunched Sun. A topless Page 3 model
gradually became a regular fixture, and with increasingly risqué poses. Both feminists and many cultural conservatives
saw the pictures as pornographic and misogynistic. Lamb expressed some regret at introducing the feature, although
denied it was sexist. A Conservative council in Sowerby Bridge, Yorkshire, was the first to ban the paper from its
public library, shortly after Page 3 began, because of its excessive sexual content. This decision was reversed after
a sustained campaign by the newspaper itself lasting 16 months, and the election of a Labour-led council in 1971.
Politically, The Sun in the early Murdoch years remained nominally Labour. It supported the Labour Party led by
Harold Wilson in the 1970 General Election, with the headline "Why It Must Be Labour" but by February 1974 it was
calling for a vote for the Conservative Party led by Edward Heath while suggesting that it might support a Labour
Party led by James Callaghan or Roy Jenkins. In the October election an editorial asserted: "ALL our instincts are
left rather than right and we would vote for any able politician who would describe himself as a Social Democrat."
The editor, Larry Lamb, was originally from a Labour background, with a socialist upbringing, while his temporary replacement, Bernard Shrimsley (1972–75), was a middle-class, uncommitted Conservative. An extensive advertising campaign
on the ITV network in this period, voiced by actor Christopher Timothy, may have helped The Sun to overtake the Daily
Mirror's circulation in 1978. Despite the troubled industrial relations of the 1970s – the so-called "Spanish practices" of
the print unions – The Sun was very profitable, enabling Murdoch to expand his operations to the United States from
1973. The Daily Star had been launched in 1978 by Express Newspapers, and by 1981 had begun to affect sales of The Sun. Bingo was introduced as a marketing tool, and a 2p drop in cover price removed the Daily Star's competitive advantage, opening a new circulation battle in which The Sun neutralised the threat of the new paper. The new editor
of The Sun, Kelvin MacKenzie, took up his post in 1981 just after these developments, and "changed the British tabloid
concept more profoundly than [Larry] Lamb did", according to Bruce Page. Under MacKenzie, the paper became "more outrageous, opinionated and irreverent than anything ever produced in Britain". On 1 May 1982, during the Falklands War, The Sun claimed to have 'sponsored' a British missile. Under the headline "Stick This Up Your Junta: A Sun missile for Galtieri's gauchos", the newspaper
published a photograph of a missile (actually a Polaris missile stock shot from the Ministry of Defence) which had a large Sun logo printed on its side with the caption "Here It Comes, Senors..." underneath. The paper explained
that it was 'sponsoring' the missile by contributing to the eventual victory party on HMS Invincible when the war
ended. In copy written by Wendy Henry, the paper said that the missile would shortly be used against Argentinian forces. The stunt was not well received by the troops, and copies of The Sun were soon burnt. Tony Snow, The
Sun journalist on HMS Invincible who had 'signed' the missile, reported a few days later that it had hit an Argentinian
target. One of the paper's best known front pages, published on 4 May 1982, commemorated the torpedoing of the Argentine
ship the General Belgrano by running the story under the headline "GOTCHA". At MacKenzie's insistence, and against
the wishes of Murdoch (the mogul was present because almost all the journalists were on strike), the headline was
changed for later editions after the extent of Argentinian casualties became known. John Shirley, a reporter for
The Sunday Times, witnessed copies of this edition of The Sun being thrown overboard by sailors and marines on HMS
Fearless. After HMS Sheffield was wrecked by an Argentinian attack, The Sun was heavily criticised and even mocked for its coverage of the war in The Daily Mirror and The Guardian. As the wider media queried the veracity of official information and worried about the number of casualties, The Sun gave its response: "There are traitors in our midst", wrote leader writer Ronald Spark on 7 May, accusing commentators on the Daily Mirror and The Guardian, plus the BBC's
defence correspondent Peter Snow, of "treason" for aspects of their coverage. These years included what was called
"spectacularly malicious coverage" of the Labour Party by The Sun and other newspapers. During the general election
of 1983 The Sun ran a front page featuring an unflattering photograph of Michael Foot, then aged almost 70, claiming
he was unfit to be Prime Minister on grounds of his age, appearance and policies, alongside the headline "Do You
Really Want This Old Fool To Run Britain?" A year later, in 1984, The Sun made clear its enthusiastic support for
the re-election of Ronald Reagan as president in the USA. Reagan was two weeks off his 74th birthday when he started
his second term, in January 1985. The Sun, during the Miners' strike of 1984–85, supported the police and the Thatcher
government against the striking NUM miners, and in particular the union's president, Arthur Scargill. On 23 May 1984,
The Sun prepared a front page with the headline "Mine Führer" and a photograph of Scargill with his arm in the air,
a pose which made him look as though he was giving a Nazi salute. The print workers at The Sun refused to print it.
The Sun strongly supported the April 1986 bombing of Libya by the US, which was launched from British bases. Several
civilians were killed during the bombing. The paper's leader was headlined "Right Ron, Right Maggie". That year, Labour MP Clare
Short attempted in vain to persuade Parliament to outlaw the pictures on Page Three and gained opprobrium from the
newspaper for her stand. Murdoch has responded to some of the arguments against the newspaper by saying that critics
are "snobs" who want to "impose their tastes on everyone else", while MacKenzie claims the same critics are people
who, if they ever had a "popular idea", would have to "go and lie down in a dark room for half an hour". Both have
pointed to the huge commercial success of The Sun in this period and its establishment as Britain's top-selling newspaper,
claiming that they are "giving the public what they want". This conclusion is disputed by critics. John Pilger has
said that a late-1970s edition of the Daily Mirror, which replaced the usual celebrity and domestic political news
items with an entire issue devoted to his own front-line reporting of the genocide in Pol Pot's Cambodia, not only
outsold The Sun on the day it was issued but became the only edition of the Daily Mirror to ever sell every single
copy issued throughout the country, something never achieved by The Sun. One of The Sun's most famous front pages, "Freddie Starr Ate My Hamster" (13 March 1986), claimed that the comedian Freddie Starr had eaten a live hamster. According to Max Clifford: Read All About It, written by Clifford and Angela Levin, La Salle invented the story out of frustration with Starr, who had been working on a book with McCaffrey. She contacted an acquaintance who worked for The Sun in Manchester. The story reportedly delighted MacKenzie, who was keen to run it, and Max Clifford, who had been Starr's public relations agent. Starr
had to be persuaded that the apparent revelation would not damage him; the attention helped to revive his career.
In his 2001 autobiography Unwrapped, Starr wrote that the incident was a complete fabrication: "I have never eaten
or even nibbled a live hamster, gerbil, guinea pig, mouse, shrew, vole or any other small mammal." From 25 February 1987, The Sun ran a series of false stories about the pop musician Elton John, which eventually resulted in 17 libel writs. They began with an invented account of the singer having sexual relationships with rent boys. The singer-songwriter
was abroad on the day indicated in the story, as former Sun journalist John Blake, recently poached by the Daily
Mirror, soon discovered. After further stories, in September 1987, The Sun accused John of having his Rottweiler guard dogs' voice boxes surgically removed. In November, the Daily Mirror found their rival's only source for the
rent boy story and he admitted it was a totally fictitious concoction created for money. The inaccurate story about
his dogs, actually Alsatians, put pressure on The Sun, and John received £1 million in an out-of-court settlement,
then the largest damages payment in British history. The Sun ran a front-page apology on 12 December 1988, under
the banner headline "SORRY, ELTON". In May 1987 the paper offered gay men free one-way airline tickets to Norway to leave
Britain for good: "Fly Away Gays - And We Will Pay" was the paper's headline. Gay Church of England clergymen were
described in one headline in November 1987 as "Pulpit poofs". Television personality Piers Morgan, a former editor
of the Daily Mirror and of The Sun's Bizarre pop column, has said that during the late 1980s, at Kelvin MacKenzie's
behest, he was ordered to speculate on the sexuality of male pop stars for a feature headlined "The Poofs of Pop".
He also recalls MacKenzie headlining a January 1989 story about the first same-sex kiss on the BBC television soap
opera EastEnders "EastBenders", describing the kiss between Colin Russell and Guido Smith as "a homosexual love scene
between yuppie poofs ... when millions of children were watching". On 17 November 1989, The Sun headlined a page
2 news story titled "STRAIGHT SEX CANNOT GIVE YOU AIDS – OFFICIAL." The Sun favourably cited the opinions of Lord
Kilbracken, a member of the All-Party Parliamentary Group on AIDS. Lord Kilbracken said that only one person out of the
2,372 individuals with HIV/AIDS mentioned in a specific Department of Health report was not a member of a "high risk
group", such as homosexuals and recreational drug users. The Sun also ran an editorial further arguing that "At last the truth can be told... the risk of catching AIDS if you are heterosexual is 'statistically invisible'. In other words, impossible. So now we know – everything else is homosexual propaganda." Although many other British press
services covered Lord Kilbracken's public comments, none of them made the argument that the Sun did in its editorial
and none of them presented Lord Kilbracken's ideas without context or criticism. Critics stated that both The Sun
and Lord Kilbracken cherry-picked the results from one specific study while ignoring other data reports on HIV infection
and not just AIDS infection, which the critics viewed as unethical politicisation of a medical issue. Lord Kilbracken
himself criticised The Sun's editorial and the headline of its news story; he stated that while he thought that gay
people were more at risk of developing AIDS it was still wrong to imply that no one else could catch the disease.
The Press Council condemned The Sun for committing what it called a "gross distortion". The Sun later printed an apology, on page 28. Journalist David Randall argued in the textbook The Universal Journalist that The Sun's
story was one of the worst cases of journalistic malpractice in recent history, putting its own readers in harm's
way. Four days after the Hillsborough disaster of 15 April 1989, in which scores of Liverpool football fans were fatally crushed, the paper printed, under the front page headline "The Truth", allegations provided to it that some fans picked
the pockets of crushed victims, that others urinated on members of the emergency services as they tried to help and
that some even assaulted a police constable "whilst he was administering the kiss of life to a patient." Despite
the headline, written by Kelvin MacKenzie, the story was based on allegations either by unnamed and unattributable
sources, or hearsay accounts of what named individuals had said – a fact made clear to MacKenzie by Harry Arnold,
the reporter who wrote the story. The front page caused outrage in Liverpool, where the paper lost more than three-quarters
of its estimated 55,000 daily sales and still sells poorly in the city more than 25 years later (around 12,000).
It is unavailable in many parts of the city, as many newsagents refuse to stock it. It was revealed in a documentary
called Alexei Sayle's Liverpool, aired in September 2008, that many Liverpudlians will not even take the newspaper
for free, and those who do may simply burn or tear it up. Liverpudlians refer to the paper as 'The Scum' with campaigners
believing it handicapped their fight for justice. On 7 July 2004, in response to verbal attacks in Liverpool on Wayne Rooney, who had sold his life story to The Sun just before his transfer from Everton to Manchester United, the paper devoted a full-page editorial to an apology for the "awful error" of its Hillsborough coverage and argued that Rooney
(who was still only three years old at the time of Hillsborough) should not be punished for its "past sins". In January
2005, The Sun's managing editor Graham Dudman, admitting the Hillsborough coverage was "the worst mistake in our history",
added: "What we did was a terrible mistake. It was a terrible, insensitive, horrible article, with a dreadful headline;
but what we'd also say is: we have apologised for it, and the entire senior team here now is completely different
from the team that put the paper out in 1989." The Sun remained loyal to Thatcher right up to her resignation in
November 1990, despite the party's fall in popularity over the previous year following the introduction of the Poll Tax (officially known as the Community Charge). This change to the way local government was funded was vociferously supported by the newspaper, despite widespread opposition (some from Conservative MPs), and is seen as having contributed to Thatcher's own downfall. The tax was quickly repealed by her successor John Major, whom The Sun initially
supported enthusiastically, believing he was a radical Thatcherite – despite the economy having entered recession
at this time. Despite its initial opposition to the closures, until 1997 the newspaper repeatedly called for the implementation of further Thatcherite policies, such as Royal Mail privatisation and social security cutbacks, with leaders such as "Peter Lilley is right, we can't carry on like this". The paper showed hostility to the EU and approval of public spending cuts, tax cuts, and promotion of right-wing
ministers to the cabinet, with leaders such as "More of the Redwood, not Deadwood". The Sun switched support to the
Labour party on 18 March 1997, six weeks before the General Election victory which saw the New Labour leader Tony
Blair become Prime Minister with a large parliamentary majority, despite the paper having attacked Blair and New
Labour up to a month earlier. Its front page headline read THE SUN BACKS BLAIR and its front page editorial made
clear that while it still opposed some New Labour policies, such as the Minimum Wage and Devolution, it believed
Blair to be "the breath of fresh air this great country needs". John Major's Conservatives, it said, were "tired,
divided and rudderless". Blair, who had radically altered his party's image and policies, noting the influence the
paper could have over its readers' political thinking, had courted it (and Murdoch) for some time by granting exclusive
interviews and writing columns. In exchange for Rupert Murdoch's support, Blair agreed not to join the European Exchange
Rate Mechanism – which John Major had withdrawn the country from in September 1992 after barely two years. Cabinet
Minister Peter Mandelson was "outed" by Matthew Parris (a former Sun columnist) on BBC TV's Newsnight in November
1998. Misjudging public response, The Sun's editor David Yelland demanded to know in a front page editorial whether
Britain was governed by a "gay mafia" of a "closed world of men with a mutual self-interest". Three days later the
paper apologised in another editorial which said The Sun would never again reveal a person's sexuality unless it
could be defended on the grounds of "overwhelming public interest". In 2003 the paper was accused of racism by the
Government over its criticisms of what it perceived as the "open door" policy on immigration. The attacks came from
the Prime Minister's press spokesman Alastair Campbell and the Home Secretary David Blunkett (later a Sun columnist).
The paper rebutted the claim, believing that it was not racist to suggest that a "tide" of unchecked illegal immigrants
was increasing the risk of terrorist attacks and infectious diseases. It did not help its argument by publishing
a front page story on 4 July 2003, under the headline "Swan Bake", which claimed that asylum seekers were slaughtering
and eating swans. It later proved to have no basis in fact. Subsequently The Sun published a follow-up headlined
"Now they're after our fish!". Following a Press Complaints Commission adjudication a "clarification" was eventually
printed, on page 41. In 2005 The Sun published photographs of Prince Harry sporting a Nazi costume to a fancy dress
party. The photographs caused outrage across the world and Clarence House was forced to issue a statement in response
apologising for any offence or embarrassment caused. Despite being a persistent critic of some of the government's
policies, the paper supported Labour in both subsequent elections the party won. For the 2005 general election, The
Sun backed Blair and Labour for a third consecutive election win and vowed to give him "one last chance" to fulfil
his promises, despite berating him for several weaknesses including a failure to control immigration. However, it
did speak of its hope that the Conservatives (led by Michael Howard) would one day be fit for a return to government.
This election (Blair had declared it would be his last as prime minister) resulted in Labour's third successive win
but with a much reduced majority. On 22 September 2003 the newspaper appeared to misjudge the public mood surrounding
mental health, as well as its affection for former world heavyweight champion boxer Frank Bruno, who had been admitted
to hospital, when the headline "Bonkers Bruno Locked Up" appeared on the front page of early editions. The adverse
reaction, once the paper had hit the streets on the evening of 21 September, led to the headline being changed for
the paper's second edition to the more sympathetic "Sad Bruno In Mental Home". The Sun has been openly antagonistic
towards other European nations, particularly the French and Germans. During the 1980s and 1990s, the nationalities
were routinely described in copy and headlines as "frogs", "krauts" or "hun". As the paper is opposed to the EU it
has referred to foreign leaders who it deemed hostile to the UK in unflattering terms. Former President Jacques Chirac
of France, for instance, was branded "le Worm". An unflattering picture of German chancellor Angela Merkel, taken
from the rear, bore the headline "I'm Big in the Bumdestag" (17 April 2006). On 7 January 2009, The Sun ran an exclusive
front page story claiming that participants in a discussion on Ummah.com, a British Muslim internet forum, had made
a "hate hit list" of British Jews to be targeted by extremists over the Gaza War. It was claimed that "Those listed
[on the forum] should treat it very seriously. Expect a hate campaign and intimidation by 20 or 30 thugs." The UK
magazine Private Eye claimed that Glen Jenvey, a man quoted by The Sun as a terrorism expert, who had been posting
to the forum under the pseudonym "Abuislam", was the only forum member promoting a hate campaign while other members
promoted peaceful advocacy, such as writing 'polite letters'. The story has since been removed from The Sun's website
following complaints to the UK's Press Complaints Commission. On 9 December 2010, The Sun published a front-page
story claiming that terrorist group Al-Qaeda had threatened a terrorist attack on Granada Television in Manchester
to disrupt the episode of the soap opera Coronation Street to be transmitted live that evening. The paper cited unnamed
sources, claiming "cops are throwing a ring of steel around tonight's live episode of Coronation Street over fears
it has been targeted by Al-Qaeda." Later that morning, however, Greater Manchester Police categorically denied having
"been made aware of any threat from Al-Qaeda or any other proscribed organisation." The Sun published a small correction
on 28 December, admitting "that while cast and crew were subject to full body searches, there was no specific threat
from Al-Qaeda as we reported." The apology had been negotiated by the Press Complaints Commission. For the day following
the 2011 Norway attacks The Sun produced an early edition blaming the massacre on al-Qaeda. Later the perpetrator
was revealed to be Anders Behring Breivik, a Norwegian nationalist. In January 2008 the Wapping presses printed The
Sun for the last time and London printing was transferred to Waltham Cross in the Borough of Broxbourne in Hertfordshire,
where News International had built what is claimed to be the largest printing centre in Europe with 12 presses. The
site also produces The Times and Sunday Times, Daily Telegraph and Sunday Telegraph, Wall Street Journal Europe (also
now a Murdoch newspaper), London's Evening Standard and local papers. Northern printing had earlier been switched
to a new plant at Knowsley on Merseyside and the Scottish Sun to another new plant at Motherwell near Glasgow. The
three print centres represent a £600 million investment by NI and allowed all the titles to be produced with every
page in full colour from 2008. The Waltham Cross plant is capable of producing one million copies an hour of a 120-page
tabloid newspaper. Politically, the paper's stance was less clear under Prime Minister Gordon Brown, who succeeded
Blair in June 2007. Its editorials were critical of many of Brown's policies and often more supportive of those of
Conservative leader David Cameron. Rupert Murdoch, head of The Sun's parent company News Corporation, speaking at
a 2007 meeting with the House of Lords Select Committee on Communications, which was investigating media ownership
and the news, said that he acts as a "traditional proprietor". This means he exercises editorial control on major
issues such as which political party to back in a general election or which policy to adopt on Europe. During the
campaign for the 2010 United Kingdom general election, The Independent ran ads declaring that "Rupert Murdoch won't
decide this election – you will." In response James Murdoch and Rebekah Wade "appeared unannounced and uninvited
on the editorial floor" of the Independent, and had an energetic conversation with its editor Simon Kelner. Several
days later the Independent reported The Sun's failure to report its own YouGov poll result which said that "if people
thought Mr Clegg's party had a significant chance of winning the election" the Liberal Democrats would win 49% of
the vote, and with it a landslide majority. On election day (6 May 2010), The Sun urged its readers to vote for David
Cameron's "modern and positive" Conservatives in order to save Britain from "disaster" which the paper thought the
country would face if the Labour government was re-elected. The election ended in the first hung parliament in 36 years, with the Tories gaining the most seats and votes but falling 20 seats short of an overall
majority. They finally came to power on 11 May when Gordon Brown stepped down as prime minister, paving the way for
David Cameron to become prime minister by forming a coalition with the Liberal Democrats. On 28 January 2012, police
arrested four current and former staff members of The Sun as part of a probe into journalists paying police officers for information; a police officer was also arrested in the probe. The Sun staffers arrested were crime editor Mike
Sullivan, head of news Chris Pharo, former deputy editor Fergus Shanahan, and former managing editor Graham Dudman,
who had since become a columnist and media writer. All five were held on suspicion of corruption. Police also
searched the offices of News International, the publishers of The Sun, as part of a continuing investigation into
the News of the World scandal. In June 2014, The Sun distributed a free special issue to millions of households in England to mark the start of the FIFA World Cup. The main party leaders, David Cameron, Nick Clegg and Ed Miliband, were all depicted holding a copy of the special issue in publicity material. Miliband's decision to pose with a copy of The Sun received
a strong response. Organisations representing the relatives of Hillsborough victims described Miliband's action as
an "absolute disgrace" and he faced criticism too from Liverpool Labour MPs and the city's Labour Mayor, Joe Anderson.
A statement was issued on 13 June explaining that Miliband "was promoting England's bid to win the World Cup", although
"he understands the anger that is felt towards the Sun over Hillsborough by many people in Merseyside and he is sorry
to those who feel offended." On 2 June 2013, The Sun on Sunday ran a front page story on singer-songwriter Tulisa
Contostavlos. The front page read: "Tulisa's cocaine deal shame"; this story was written by The Sun On Sunday's undercover
reporter Mahzer Mahmood, who had previously worked for the News of the World. It was claimed that Tulisa introduced
three film producers (actually Mahmood and two other Sun journalists) to a drug dealer and set up an £800 deal. The
subterfuge involved conning the singer into believing that she was being considered for a role in an £8 million Bollywood
film. At her subsequent trial, the case against Tulisa collapsed at Southwark Crown Court in July 2014, with the
judge commenting that there were "strong grounds" to believe that Mahmood had lied at a pre-trial hearing and tried
to manipulate evidence against her co-defendant. Tulisa was cleared of supplying Class A drugs. After these
events, The Sun released a statement saying that the newspaper "takes the Judge's remarks very seriously. Mahmood
has been suspended pending an immediate internal investigation." In October 2014, the trial of six senior staff and
journalists at The Sun newspaper began. All six were charged with conspiring to commit misconduct in a public office.
They included The Sun's head of news Chris Pharo, who faced six charges, while ex-managing editor Graham Dudman and
ex-Sun deputy news editor Ben O'Driscoll were accused of four charges each. Thames Valley district reporter Jamie
Pyatt and picture editor John Edwards were charged with three counts each, while ex-reporter John Troup was accused
of two counts. The trial related to illegal payments allegedly made to public officials, with prosecutors saying
the men conspired to pay officials from 2002 to 2011, including police, prison officers and soldiers. They were accused
of buying confidential information about the Royal Family, public figures and prison inmates. They all denied the
charges. On 16 January 2015, Troup and Edwards were cleared by the jury of all charges against them. The jury also
partially cleared O'Driscoll and Dudman but continued deliberating over other counts faced by them, as well as the
charges against Pharo and Pyatt. On 21 January 2015, the jury told the court that it was unable to reach unanimous
verdicts on any of the outstanding charges and was told by the judge, Richard Marks, that he would accept majority
verdicts. Shortly afterwards, one of the jurors sent a note to the judge and was discharged. The judge told the remaining
11 jurors that their colleague had been "feeling unwell and feeling under a great deal of pressure and stress from
the situation you are in", and that under the circumstances he was prepared to accept majority verdicts of "11 to
zero or 10 to 1". On 22 January 2015, the jury was discharged after failing to reach verdicts on the outstanding
charges. The Crown Prosecution Service (CPS) announced that it would seek a retrial. On 6 February 2015, it was announced
that Judge Richard Marks was to be replaced by Judge Charles Wide at the retrial. Two days earlier, Marks had emailed
counsel for the defendants telling them: "It has been decided (not by me but by my elders and betters) that I am
not going to be doing the retrial". Reporting the decision in UK newspaper The Guardian, Lisa O’Carroll wrote: "Wide
is the only judge so far to have presided in a case which has seen a conviction of a journalist in relation to allegations
of unlawful payments to public officials for stories. The journalist, who cannot be named for legal reasons, is appealing
the verdict". Defence counsel for the four journalists threatened to take the decision to judicial review, with the
barrister representing Pharo, Nigel Rumfitt QC, saying: "The way this has come about gives rise to the impression
that something has been going on behind the scenes which should not have been going on behind the scenes and which
should have been dealt with transparently". He added that the defendants were "extremely concerned" and "entitled"
to know why Marks was being replaced by Wide. On 22 May 2015, Sun reporter Anthony France was found guilty of aiding
and abetting misconduct in a public office between 2008 and 2011. France’s trial followed the London Metropolitan
Police's Operation Elveden, an ongoing investigation into alleged payments to police and officials in exchange for
information. He had paid a total of more than £22,000 to PC Timothy Edwards, an anti-terrorism police officer based
at Heathrow Airport. The police officer had already pleaded guilty to misconduct in a public office and been given a two-year
gaol sentence in 2014, but the jury in France’s trial was not informed of this. Following the passing of the guilty
verdict, the officer leading Operation Elveden, Detective Chief Superintendent Gordon Briggs said France and Edwards
had been in a "long-term, corrupt relationship". The BBC reported that France was the first journalist to face trial
and be convicted under Operation Elveden since the Crown Prosecution Service (CPS) had revised its guidance in April
2015 so that prosecutions would only be brought against journalists who had made payments to police officers over
a period of time. As a result of the change in the CPS’ policy, charges against several journalists who had made
payments to other types of public officials – including civil servants, health workers and prison staff – had been
dropped. In July 2015, Private Eye magazine reported that at a costs hearing at the Old Bailey The Sun's parent company
had refused to pay for the prosecution costs relating to France’s trial, leading the presiding judge to express his
"considerable disappointment" at this state of affairs. Judge Timothy Pontius said in court that France’s illegal
actions had been part of a "clearly recognised procedure at The Sun", adding that, "There can be no doubt that News
International bears some measure of moral responsibility if not legal culpability for the acts of the defendant".
The Private Eye report noted that despite this The Sun's parent organisation was "considering disciplinary actions"
against France while at the same time preparing to bring a case to the Investigatory Powers Tribunal
against the London Metropolitan Police Service for its actions relating to him and two other journalists. In August
2013, The Irish Sun ended the practice of featuring topless models on Page 3. The main newspaper was reported to have followed suit in 2015, with the edition of 16 January supposedly the last to carry such photographs, after a report in The Times made the assertion. After substantial coverage in the media about an alleged change in editorial
policy, Page 3 returned to its usual format on 22 January 2015. A few hours before the issue was published, the Head
of PR at the newspaper said the reputed end of Page 3 had been "speculation" only. On 17 April 2015, The Sun's columnist
Katie Hopkins called migrants to Britain "cockroaches" and "feral humans" and said they were "spreading like the
norovirus". Her remarks were condemned by the United Nations High Commissioner for Human Rights. In a statement released on 24 April 2015, High Commissioner Zeid Ra'ad Al Hussein stated that Hopkins used "language very similar to that
employed by Rwanda's Kangura newspaper and Radio Mille Collines during the run up to the 1994 genocide", and noted
that both media organizations were subsequently convicted by an international tribunal of public incitement to commit
genocide. Hopkins' column also drew criticism on Twitter, including from Russell Brand, to whom Hopkins responded
by accusing Brand's "champagne socialist humanity" of neglecting taxpayers. Simon Usborne, writing in The Independent,
compared her use of the word "cockroach" to its earlier use by the Nazis and by the perpetrators of the Rwandan
genocide shortly before it began. He suspected that if any other contributor had written the piece it would not have been published, and
questioned her continued employment by the newspaper. Zoe Williams commented in The Guardian: "It is no joke when
people start talking like this. We are not 'giving her what she wants' when we make manifest our disgust. It is not
a free speech issue. I'm not saying gag her: I'm saying fight her". On 9 March 2016, The Sun's front page proclaimed
that Queen Elizabeth II was backing "Brexit", a common term for a British withdrawal from the European Union. It
claimed that in 2011 at Windsor Castle, while having lunch with Deputy Prime Minister Nick Clegg, the monarch criticised
the union. Clegg denied that the Queen made such a statement, and a Buckingham Palace spokesperson confirmed that
a complaint had been made to the Independent Press Standards Organisation over a breach of guidelines relating to
accuracy.
Pesticides are substances meant for attracting and then destroying, or otherwise controlling, any pest. They are a class of biocide. The
most common use of pesticides is as plant protection products (also known as crop protection products), which in
general protect plants from damaging influences such as weeds, fungi, or insects. This use of pesticides is so common
that the term pesticide is often treated as synonymous with plant protection product, although it is in fact a broader
term, as pesticides are also used for non-agricultural purposes. The term pesticide includes all of the following:
herbicide, insecticide, insect growth regulator, nematicide, termiticide, molluscicide, piscicide, avicide, rodenticide,
predacide, bactericide, insect repellent, animal repellent, antimicrobial, fungicide, disinfectant (antimicrobial),
and sanitizer. In general, a pesticide is a chemical or biological agent (such as a virus, bacterium, antimicrobial,
or disinfectant) that deters, incapacitates, kills, or otherwise discourages pests. Target pests can include insects,
plant pathogens, weeds, mollusks, birds, mammals, fish, nematodes (roundworms), and microbes that destroy property,
cause nuisance, spread disease, or act as disease vectors. Although pesticides have benefits, some also have drawbacks,
such as potential toxicity to humans and other species. According to the Stockholm Convention on Persistent Organic
Pollutants, 9 of the 12 most dangerous and persistent organic chemicals are organochlorine pesticides. Pesticides
can be classified by target organism (e.g., herbicides, insecticides, fungicides, rodenticides, and pediculicides
- see table), chemical structure (e.g., organic, inorganic, synthetic, or biological (biopesticide), although the
distinction can sometimes blur), and physical state (e.g. gaseous (fumigant)). Biopesticides include microbial pesticides
and biochemical pesticides. Plant-derived pesticides, or "botanicals", have been developing quickly. These include
the pyrethroids, rotenoids, nicotinoids, and a fourth group that includes strychnine and scilliroside. Many pesticides
can be grouped into chemical families. Prominent insecticide families include organochlorines, organophosphates,
and carbamates. Organochlorine hydrocarbons (e.g., DDT) can be separated into dichlorodiphenylethanes, cyclodiene
compounds, and other related compounds. They operate by disrupting the sodium/potassium balance of the nerve fiber,
forcing the nerve to transmit continuously. Their toxicities vary greatly, but they have been phased out because
of their persistence and potential to bioaccumulate. Organophosphates and carbamates largely replaced organochlorines.
Both operate by inhibiting the enzyme acetylcholinesterase, allowing acetylcholine to transfer nerve impulses
indefinitely and causing a variety of symptoms such as weakness or paralysis. Organophosphates are quite toxic to
vertebrates, and have in some cases been replaced by less toxic carbamates. Thiocarbamates and dithiocarbamates
are subclasses of carbamates. Prominent families of herbicides include phenoxy and benzoic acid herbicides (e.g.
2,4-D), triazines (e.g., atrazine), ureas (e.g., diuron), and chloroacetanilides (e.g., alachlor). Phenoxy compounds
tend to selectively kill broad-leaf weeds rather than grasses. The phenoxy and benzoic acid herbicides function similarly
to plant growth hormones, causing cells to grow without normal cell division and crushing the plant's nutrient transport system.
Triazines interfere with photosynthesis. Many commonly used pesticides are not included in these families, including
glyphosate. Pesticides can also be classified by their biological mechanism of action or by application method. Most
pesticides work by poisoning pests. A systemic pesticide moves inside a plant following absorption by the plant.
With insecticides and most fungicides, this movement is usually upward (through the xylem) and outward, which can
increase effectiveness. Systemic insecticides, which poison pollen and nectar in the flowers,
may kill bees and other needed pollinators. Pesticides are used to control organisms that are considered
to be harmful. For example, they are used to kill mosquitoes that can transmit potentially deadly diseases like West
Nile virus, yellow fever, and malaria. They can also kill bees, wasps or ants that can cause allergic reactions.
Insecticides can protect animals from illnesses that can be caused by parasites such as fleas. Pesticides can prevent
sickness in humans that could be caused by moldy food or diseased produce. Herbicides can be used to clear roadside
weeds, trees and brush. They can also kill invasive weeds that may cause environmental damage. Herbicides are commonly
applied in ponds and lakes to control algae and plants such as water grasses that can interfere with activities like
swimming and fishing and cause the water to look or smell unpleasant. Uncontrolled pests such as termites and mold
can damage structures such as houses. Pesticides are used in grocery stores and food storage facilities to manage
rodents and insects that infest food such as grain. Each use of a pesticide carries some associated risk. Proper
pesticide use decreases these associated risks to a level deemed acceptable by pesticide regulatory agencies such
as the United States Environmental Protection Agency (EPA) and the Pest Management Regulatory Agency (PMRA) of Canada.
DDT, sprayed on the walls of houses, is an organochlorine that has been used to fight malaria since the 1950s. Recent
policy statements by the World Health Organization have given stronger support to this approach. However, DDT and
other organochlorine pesticides have been banned in most countries worldwide because of their persistence in the
environment and human toxicity. DDT use is not always effective, as resistance to DDT was identified in Africa as
early as 1955, and by 1972 nineteen species of mosquito worldwide were resistant to DDT. In 2006 and 2007, the world
used approximately 2.4 megatonnes (5.3×10⁹ lb) of pesticides, with herbicides constituting the biggest part of
world pesticide use at 40%, followed by insecticides (17%) and fungicides (10%). In 2006 and 2007 the U.S. used approximately
0.5 megatonnes (1.1×10⁹ lb) of pesticides, accounting for 22% of the world total, including 857 million pounds (389
kt) of conventional pesticides, which are used in the agricultural sector (80% of conventional pesticide use) as
well as the industrial, commercial, governmental, and home & garden sectors. Pesticides are also found in a majority
of U.S. households, with 78 million out of 105.5 million households indicating that they use some form of pesticide.
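The quoted shares can be cross-checked with a short calculation. This is an illustrative sketch using the rounded figures above; the text's 22% figure presumably reflects more precise underlying data than the rounded 0.5 and 2.4 megatonne totals.

```python
# Rounded 2006-2007 figures quoted above.
world_total_mt = 2.4      # megatonnes of pesticide used worldwide
us_total_mt = 0.5         # megatonnes used in the U.S.

# Herbicides were 40% of world use, insecticides 17%, fungicides 10%.
herbicides_mt = world_total_mt * 0.40    # about 0.96 Mt
insecticides_mt = world_total_mt * 0.17  # about 0.41 Mt
fungicides_mt = world_total_mt * 0.10    # 0.24 Mt

# The U.S. share of the rounded totals comes out near the quoted 22%.
us_share_pct = round(100 * us_total_mt / world_total_mt)  # about 21
```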
As of 2007, there were more than 1,055 active ingredients registered as pesticides, which yield over 20,000 pesticide
products that are marketed in the United States. Every dollar ($1) that is spent on pesticides for crops yields four
dollars ($4) in crops saved. This means that, based on the roughly $10 billion spent per year on pesticides, about
$40 billion of crops are saved that would otherwise be lost to damage by insects and weeds. In general,
farmers benefit from having an increase in crop yield and from being able to grow a variety of crops throughout the
year. Consumers of agricultural products also benefit from being able to afford the vast quantities of produce available
year-round. The general public also benefits from the use of pesticides for the control of insect-borne diseases
and illnesses, such as malaria. The use of pesticides creates a large job market within the agrichemical sector.
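The 1:4 cost-benefit ratio above amounts to a simple multiplication; the sketch below uses only the $10 billion spend and 4x return quoted in the text.

```python
# Illustrating the quoted 1:4 pesticide cost-benefit ratio.
dollars_saved_per_dollar_spent = 4   # $4 of crops saved per $1 spent
annual_spend_billion = 10            # roughly $10 billion spent per year

# Estimated value of crops saved from insect and weed damage, in $ billions.
crops_saved_billion = annual_spend_billion * dollars_saved_per_dollar_spent
```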
Pesticides may cause acute and delayed health effects in people who are exposed. Pesticide exposure can cause a variety
of adverse health effects, ranging from simple irritation of the skin and eyes to more severe effects such as damage
to the nervous system, reproductive problems caused by hormone mimicry, and cancer. A 2007 systematic
review found that "most studies on non-Hodgkin lymphoma and leukemia showed positive associations with pesticide
exposure" and thus concluded that cosmetic use of pesticides should be decreased. There is substantial evidence of
associations between organophosphate insecticide exposures and neurobehavioral alterations. Limited evidence also
exists for other negative outcomes from pesticide exposure, including neurological effects, birth defects, and fetal death. The
World Health Organization and the UN Environment Programme estimate that each year, 3 million workers in agriculture
in the developing world experience severe poisoning from pesticides, about 18,000 of whom die. Owing to inadequate
regulation and safety precautions, 99% of pesticide-related deaths occur in developing countries, which account for
only 25% of pesticide usage. According to one study, as many as 25 million workers in developing countries may suffer
mild pesticide poisoning yearly. There are several careers aside from agriculture that may also put individuals at
risk of health effects from pesticide exposure including pet groomers, groundskeepers, and fumigators. Pesticide
use raises a number of environmental concerns. Over 98% of sprayed insecticides and 95% of herbicides reach a destination
other than their target species, including non-target species, air, water and soil. Pesticide drift occurs when pesticides
suspended in the air as particles are carried by wind to other areas, potentially contaminating them. Pesticides
are one of the causes of water pollution, and some pesticides are persistent organic pollutants and contribute to
soil contamination. Since chlorinated hydrocarbon pesticides dissolve in fats and are not excreted, organisms tend
to retain them almost indefinitely. Biological magnification is the process whereby these chlorinated hydrocarbons
(pesticides) are more concentrated at each level of the food chain. Among marine animals, pesticide concentrations
are higher in carnivorous fishes, and even more so in the fish-eating birds and mammals at the top of the ecological
pyramid. Global distillation is the process whereby pesticides are transported from warmer to colder regions of the
Earth, in particular the Poles and mountain tops. Pesticides that evaporate into the atmosphere at relatively high
temperature can be carried considerable distances (thousands of kilometers) by the wind to an area of lower temperature,
where they condense and are carried back to the ground in rain or snow. In order to reduce negative impacts, it is
desirable that pesticides be degradable or at least quickly deactivated in the environment. Such loss of activity
or toxicity of pesticides is due to both innate chemical properties of the compounds and environmental processes
or conditions. For example, the presence of halogens within a chemical structure often slows down degradation in
an aerobic environment. Adsorption to soil may retard pesticide movement, but also may reduce bioavailability to
microbial degraders. Alternatives to pesticides are available and include methods of cultivation, use of biological
pest controls (such as pheromones and microbial pesticides), genetic engineering, and methods of interfering with
insect breeding. Application of composted yard waste has also been used as a way of controlling pests. These methods
are becoming increasingly popular and are often safer than traditional chemical pesticides. In addition, the EPA is registering
reduced-risk conventional pesticides in increasing numbers. The term "push-pull" was established in 1987 as an approach
for integrated pest management (IPM). This strategy uses a mixture of behavior-modifying stimuli to manipulate the
distribution and abundance of insects. "Push" means the insects are repelled or deterred away from whatever resource
that is being protected. "Pull" means that certain stimuli (semiochemical stimuli, pheromones, food additives, visual
stimuli, genetically altered plants, etc.) are used to attract pests to trap crops where they will be killed.
Implementing a push-pull strategy in IPM involves numerous components. Some evidence shows
that alternatives to pesticides can be equally effective as the use of chemicals. For example, Sweden has halved
its use of pesticides with hardly any reduction in crops. In Indonesia, farmers have reduced
pesticide use on rice fields by 65% and experienced a 15% crop increase. A study of maize fields
in northern Florida found that the application of composted yard waste with high carbon to nitrogen ratio to agricultural
fields was highly effective at reducing the population of plant-parasitic nematodes and increasing crop yield, with
yield increases ranging from 10% to 212%; the observed effects were long-term, often not appearing until the third
season of the study. Pesticides are often referred to according to the type of pest they control. Pesticides can
also be considered as either biodegradable pesticides, which will be broken down by microbes and other living beings
into harmless compounds, or persistent pesticides, which may take months or years before they are broken down: it
was the persistence of DDT, for example, which led to its accumulation in the food chain and its killing of birds
of prey at the top of the food chain. Another way to think about pesticides is to consider those that are chemical
pesticides or are derived from a common source or production method. The following sulfonylureas have been commercialized
for weed control: amidosulfuron, azimsulfuron, bensulfuron-methyl, chlorimuron-ethyl, ethoxysulfuron, flazasulfuron,
flupyrsulfuron-methyl-sodium, halosulfuron-methyl, imazosulfuron, nicosulfuron, oxasulfuron, primisulfuron-methyl,
pyrazosulfuron-ethyl, rimsulfuron, sulfometuron-methyl, sulfosulfuron, terbacil, bispyribac-sodium, cyclosulfamuron,
and pyrithiobac-sodium. Nicosulfuron, triflusulfuron methyl, and chlorsulfuron are broad-spectrum herbicides that
kill plants by inhibiting the enzyme acetolactate synthase. In the 1960s, more than 1 kg/ha (0.89 lb/acre) of crop protection
chemical was typically applied, while sulfonylureas allow as little as 1% as much material to achieve the same
effect. Though pesticide regulations differ from country to country, pesticides and products on which they were
used are traded across international borders. To deal with inconsistencies in regulations among countries, delegates
to a conference of the United Nations Food and Agriculture Organization adopted an International Code of Conduct
on the Distribution and Use of Pesticides in 1985 to create voluntary standards of pesticide regulation for different
countries. The Code was updated in 1998 and 2002. The FAO claims that the code has raised awareness about pesticide
hazards and decreased the number of countries without restrictions on pesticide use. Three other efforts to improve
regulation of international pesticide trade are the United Nations London Guidelines for the Exchange of Information
on Chemicals in International Trade and the United Nations Codex Alimentarius Commission. The former
seeks to implement procedures for ensuring that prior informed consent exists between countries buying and selling
pesticides, while the latter seeks to create uniform standards for maximum levels of pesticide residues among participating
countries. Both initiatives operate on a voluntary basis. Pesticide safety education and pesticide applicator regulation
are designed to protect the public from pesticide misuse, but do not eliminate all misuse. Reducing the use of pesticides
and choosing less toxic pesticides may reduce risks placed on society and the environment from pesticide use. Integrated
pest management, the use of multiple approaches to control pests, is becoming widespread and has been used with success
in countries such as Indonesia, China, Bangladesh, the U.S., Australia, and Mexico. IPM attempts to recognize the
more widespread impacts of an action on an ecosystem, so that natural balances are not upset. New pesticides are
being developed, including biological and botanical derivatives and alternatives that are thought to reduce health
and environmental risks. In addition, applicators are being encouraged to consider alternative controls and adopt
methods that reduce the use of chemical pesticides. In the United States, the Environmental Protection Agency (EPA)
is responsible for regulating pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) and
the Food Quality Protection Act (FQPA). Studies must be conducted to establish the conditions in which the material
is safe to use and the effectiveness against the intended pest(s). The EPA regulates pesticides to ensure that these
products do not pose adverse effects to humans or the environment. Pesticides produced before November 1984 continue
to be reassessed in order to meet the current scientific and regulatory standards. All registered pesticides are
reviewed every 15 years to ensure they meet the proper standards. During the registration process, a label is created.
The label contains directions for proper use of the material in addition to safety restrictions. Based on acute toxicity,
pesticides are assigned to a Toxicity Class. Some pesticides are considered too hazardous for sale to the general
public and are designated restricted use pesticides. Only certified applicators, who have passed an exam, may purchase
or supervise the application of restricted use pesticides. Records of sales and use are required to be maintained
and may be audited by government agencies charged with the enforcement of pesticide regulations. These records must
be made available to employees and state or territorial environmental regulatory agencies. Since before 2000 BC,
humans have utilized pesticides to protect their crops. The first known pesticide was elemental sulfur dusting, used
in ancient Sumer, in Mesopotamia, about 4,500 years ago. The Rig Veda, which is about 4,000 years old, mentions
the use of poisonous plants for pest control. By the 15th century, toxic chemicals such as arsenic, mercury, and
lead were being applied to crops to kill pests. In the 17th century, nicotine sulfate was extracted from tobacco
leaves for use as an insecticide. The 19th century saw the introduction of two more natural pesticides, pyrethrum,
which is derived from chrysanthemums, and rotenone, which is derived from the roots of tropical vegetables. Until
the 1950s, arsenic-based pesticides were dominant. Paul Müller discovered that DDT was a very effective insecticide.
Organochlorines such as DDT were dominant, but they were replaced in the U.S. by organophosphates and carbamates
by 1975. Since then, pyrethrin compounds have become the dominant insecticide. Herbicides became common in the 1960s,
led by "triazine and other nitrogen-based compounds, carboxylic acids such as 2,4-dichlorophenoxyacetic acid, and
glyphosate". The first legislation providing federal authority for regulating pesticides was enacted in 1910; however,
decades later during the 1940s manufacturers began to produce large amounts of synthetic pesticides and their use
became widespread. Some sources consider the 1940s and 1950s to have been the start of the "pesticide era." Although
the U.S. Environmental Protection Agency was established in 1970 and the pesticide law was amended in 1972, pesticide
use has increased 50-fold since 1950, and 2.3 million tonnes (2.5 million short tons) of industrial pesticides are
now used each year. Seventy-five percent of all pesticides in the world are used in developed countries, but
use in developing countries is increasing. A study of USA pesticide use trends through 1997 was published in 2003
by the National Science Foundation's Center for Integrated Pest Management.
Somerset is a rural county of rolling hills such as the Blackdown Hills, Mendip Hills, Quantock Hills and Exmoor National
Park, and large flat expanses of land including the Somerset Levels. There is evidence of human occupation from Paleolithic
times, and of subsequent settlement in the Roman and Anglo-Saxon periods. The county played a significant part in
the consolidation of power and rise of King Alfred the Great, and later in the English Civil War and the Monmouth
Rebellion. The city of Bath is famous for its substantial Georgian architecture and is a UNESCO World Heritage Site.
The people of Somerset are mentioned in the Anglo-Saxon Chronicle's entry for AD 845, in the inflected form "Sumursætum",
and the county is recorded in the entry for 1015 using the same name. The archaic name Somersetshire was mentioned
in the Chronicle's entry for 878. Although "Somersetshire" was in common use as an alternative name for the county,
it went out of fashion in the late 19th century, and is no longer used possibly due to the adoption of "Somerset"
as the county's official name after the establishment of the county council in 1889. As with other counties not ending
in "shire," the suffix was superfluous, as there was no need to differentiate between the county and a town within
it. After the Romans left, Britain was invaded by Anglo-Saxon peoples. By AD 600 they had established control over
much of what is now England, but Somerset was still in native British hands. The British held back Saxon advance
into the south-west for some time longer, but by the early eighth century King Ine of Wessex had pushed the boundaries
of the West Saxon kingdom far enough west to include Somerset. The Saxon royal palace in Cheddar was used several
times in the 10th century to host the Witenagemot. After the Norman Conquest, the county was divided into 700 fiefs,
and large areas were owned by the crown, with fortifications such as Dunster Castle used for control and defence.
Somerset contains HM Prison Shepton Mallet, which was England's oldest prison still in use prior to its closure in
2013, having opened in 1610. In the English Civil War Somerset was largely Parliamentarian, with key engagements
being the Sieges of Taunton and the Battle of Langport. In 1685 the Monmouth Rebellion was played out in Somerset
and neighbouring Dorset. The rebels landed at Lyme Regis and travelled north, hoping to capture Bristol and Bath,
but they were defeated in the Battle of Sedgemoor at Westonzoyland, the last pitched battle fought in England. Arthur
Wellesley took his title, Duke of Wellington from the town of Wellington; he is commemorated on a nearby hill by
a large, spotlit obelisk, known as the Wellington Monument. The Industrial Revolution in the Midlands and Northern
England spelled the end for most of Somerset's cottage industries. Farming continued to flourish, however, and the
Bath and West of England Society for the Encouragement of Agriculture, Arts, Manufactures and Commerce was founded
in 1777 to improve farming methods. Despite this, 20 years later John Billingsley conducted a survey of the county's
agriculture in 1795 and found that agricultural methods could still be improved. Coal mining was an important industry
in north Somerset during the 18th and 19th centuries, and by 1800 it was prominent in Radstock. The Somerset Coalfield
reached its peak production by the 1920s, but all the pits have now been closed, the last in 1973. Most of the surface
buildings have been removed, and apart from a winding wheel outside Radstock Museum, little evidence of their former
existence remains. Further west, the Brendon Hills were mined for iron ore in the late 19th century; this was taken
by the West Somerset Mineral Railway to Watchet Harbour for shipment to the furnaces at Ebbw Vale. Many Somerset
soldiers died during the First World War, with the Somerset Light Infantry suffering nearly 5,000 casualties. War
memorials were put up in most of the county's towns and villages; only nine, described as the Thankful Villages,
had none of their residents killed. During the Second World War the county was a base for troops preparing for the
D-Day landings. Some of the hospitals which were built for the casualties of the war remain in use. The Taunton Stop
Line was set up to repel a potential German invasion. The remains of its pill boxes can still be seen along the coast,
and south through Ilminster and Chard. A number of decoy towns were constructed in Somerset in World War II to protect
Bristol and other towns, at night. They were designed to mimic the geometry of "blacked out" streets, railway lines,
and Bristol Temple Meads railway station, to encourage bombers away from these targets. One, on the radio beam flight
path to Bristol, was constructed on Beacon Batch. It was laid out by Shepperton Studios, based on aerial photographs
of the city's railway marshalling yards. The decoys were fitted with dim red lights, simulating activities like the
stoking of steam locomotives. Burning bales of straw soaked in creosote were used to simulate the effects of incendiary
bombs dropped by the first wave of Pathfinder night bombers; meanwhile, incendiary bombs dropped on the correct location
were quickly smothered, wherever possible. Drums of oil were also ignited to simulate the effect of a blazing city
or town, with the aim of fooling subsequent waves of bombers into dropping their bombs on the wrong location. The
Chew Magna decoy town was hit by half-a-dozen bombs on 2 December 1940, and over a thousand incendiaries on 3 January
1941. The following night the Uphill decoy town, protecting Weston-super-Mare's airfield, was bombed; a herd of dairy
cows was hit, killing some and severely injuring others. The boundaries of Somerset are largely unaltered from medieval
times. The River Avon formed much of the border with Gloucestershire, except that the hundred of Bath Forum, which
straddles the Avon, formed part of Somerset. Bristol began as a town on the Gloucestershire side of the Avon, however
as it grew it extended across the river into Somerset. In 1373 Edward III proclaimed "that the town of Bristol with
its suburbs and precincts shall henceforth be separate from the counties of Gloucester and Somerset... and that it
should be a county by itself". Somerton took over from Ilchester as the county town in the late thirteenth century,
but it declined in importance and the status of county town transferred to Taunton about 1366. The county has two
cities, Bath and Wells, and 30 towns (including the county town of Taunton, which has no town council but instead
is the chief settlement of the county's only borough). The largest urban areas in terms of population are Bath, Weston-super-Mare,
Taunton, Yeovil and Bridgwater. Many settlements developed because of their strategic importance in relation to geographical
features, such as river crossings or valleys in ranges of hills. Examples include Axbridge on the River Axe, Castle
Cary on the River Cary, North Petherton on the River Parrett, and Ilminster, where there was a crossing point on
the River Isle. Midsomer Norton lies on the River Somer; while the Wellow Brook and the Fosse Way Roman road run
through Radstock. Chard is the most southerly town in Somerset, and at an altitude of 121 m (397 ft) it is also the
highest. To the north-east of the Somerset Levels, the Mendip Hills are moderately high limestone hills. The central
and western Mendip Hills were designated an Area of Outstanding Natural Beauty in 1972, covering 198 km2 (76 sq mi).
The main habitat on these hills is calcareous grassland, with some arable agriculture. To the south-west of the Somerset
Levels are the Quantock Hills, designated England's first Area of Outstanding Natural Beauty in 1956; they are covered
in heathland, oak woodland, and ancient parkland with conifer plantations, and cover 99 square kilometres.
The Somerset Coalfield is part of a larger coalfield which stretches into Gloucestershire. To the north of the Mendip
hills is the Chew Valley and to the south, on the clay substrate, are broad valleys which support dairy farming and
drain into the Somerset Levels. There is an extensive network of caves, including Wookey Hole, underground rivers,
and gorges, including the Cheddar Gorge and Ebbor Gorge. The county has many rivers, including the Axe, Brue, Cary,
Parrett, Sheppey, Tone and Yeo. These both feed and drain the flat levels and moors of mid and west Somerset. In
the north of the county the River Chew flows into the Bristol Avon. The Parrett is tidal almost to Langport, where
there is evidence of two Roman wharfs. At the same site during the reign of King Charles I, river tolls were levied
on boats to pay for the maintenance of the bridge. The Somerset Levels (or Somerset Levels and Moors as they are
less commonly but more correctly known) are a sparsely populated wetland area of central Somerset, between the Quantock
and Mendip hills. They consist of marine clay levels along the coast, and the inland (often peat based) moors. The
Levels are divided into two by the Polden Hills; land to the south is drained by the River Parrett while land to
the north is drained by the River Axe and the River Brue. The total area of the Levels amounts to about 647.5 square
kilometres (160,000 acres) and broadly corresponds to the administrative district of Sedgemoor but also includes
the south west of Mendip district. Approximately 70% of the area is grassland and 30% is arable. Stretching about
32 kilometres (20 mi) inland, this expanse of flat land barely rises above sea level. Before it was drained, much
of the land was under a shallow brackish sea in winter and was marsh land in summer. Drainage began with the Romans,
and was restarted at various times: by the Anglo-Saxons; in the Middle Ages by Glastonbury Abbey; from 1400 to 1770;
and during the Second World War, with the construction of the Huntspill River. Pumping and management of water levels
still continues. The main coastal towns are, from the west to the north-east, Minehead, Watchet, Burnham-on-Sea,
Weston-super-Mare, Clevedon and Portishead. The coastal area between Minehead and the eastern extreme of the administrative
county's coastline at Brean Down is known as Bridgwater Bay, and is a National Nature Reserve. North of that, the
coast forms Weston Bay and Sand Bay whose northern tip, Sand Point, marks the lower limit of the Severn Estuary.
In the mid and north of the county the coastline is low, as the flat wetlands of the Levels meet the sea. In the
west, the coastline is high and dramatic where the plateau of Exmoor meets the sea, with high cliffs and waterfalls.
Along with the rest of South West England, Somerset has a temperate climate which is generally wetter and milder
than the rest of the country. The annual mean temperature is approximately 10 °C (50.0 °F). Seasonal temperature
variation is less extreme than most of the United Kingdom because of the adjacent sea temperatures. The summer months
of July and August are the warmest with mean daily maxima of approximately 21 °C (69.8 °F). In winter mean minimum
temperatures of 1 °C (33.8 °F) or 2 °C (35.6 °F) are common. In the summer the Azores high pressure affects the south-west
of England, but convective cloud sometimes forms inland, reducing the number of hours of sunshine. Annual sunshine
rates are slightly less than the regional average of 1,600 hours. In December 1998 there were 20 days without sun
recorded at Yeovilton. Most of the rainfall in the south-west is caused by Atlantic depressions or by convection. Most
of the rainfall in autumn and winter is brought by Atlantic depressions, which are then at their most active. In
summer, a large proportion of the rainfall is caused by sun heating the ground leading to convection and to showers
and thunderstorms. Average rainfall is around 700 mm (28 in). About 8–15 days of snowfall is typical. November to
March have the highest mean wind speeds, and June to August the lightest winds. The predominant wind direction is
from the south-west. Bridgwater was developed during the Industrial Revolution as the area's leading port. The River
Parrett was navigable by large ships as far as Bridgwater. Cargoes were then loaded onto smaller boats at Langport
Quay, next to the Bridgwater Bridge, to be carried further up river to Langport; or they could turn off at Burrowbridge
and then travel via the River Tone to Taunton. The Parrett is now only navigable as far as Dunball Wharf. Bridgwater,
in the 19th and 20th centuries, was a centre for the manufacture of bricks and clay roof tiles, and later cellophane,
but those industries have now ceased. With its good links to the motorway system, Bridgwater has developed as a
distribution hub for companies such as Argos, Toolstation, Morrisons and Gerber Juice. AgustaWestland manufactures
helicopters in Yeovil, and Normalair Garratt, builder of aircraft oxygen systems, is also based in the town. Many
towns have encouraged small-scale light industries, such as Crewkerne's Ariel Motor Company, one of the UK's smallest
car manufacturers. Somerset is an important supplier of defence equipment and technology. A Royal Ordnance Factory,
ROF Bridgwater was built at the start of the Second World War, between the villages of Puriton and Woolavington,
to manufacture explosives. The site was decommissioned and closed in July 2008. Templecombe has Thales Underwater
Systems, and Taunton presently has the United Kingdom Hydrographic Office and Avimo, which became part of Thales
Optics. It was announced twice, in 2006 and 2007, that manufacturing was to end at Thales Optics' Taunton site,
but the trade unions and Taunton Deane District Council worked to reverse or mitigate these decisions. Other
high-technology companies include the optics company Gooch and Housego, at Ilminster. There are Ministry of Defence
offices in Bath, and Norton Fitzwarren is the home of 40 Commando Royal Marines. The Royal Naval Air Station at Yeovilton
is one of Britain's two active Fleet Air Arm bases and is home to the Royal Navy's Lynx helicopters and the Royal
Marines Commando Westland Sea Kings. Around 1,675 service and 2,000 civilian personnel are stationed at Yeovilton
and key activities include training of aircrew and engineers and the Royal Navy's Fighter Controllers and surface-based
aircraft controllers. Agriculture and food and drink production continue to be major industries in the county, employing
over 15,000 people. Apple orchards were once plentiful, and Somerset is still a major producer of cider. The towns
of Taunton and Shepton Mallet are involved with the production of cider, especially Blackthorn Cider, which is sold
nationwide, and there are specialist producers such as Burrow Hill Cider Farm and Thatchers Cider. Gerber Products
Company in Bridgwater is the largest producer of fruit juices in Europe, producing brands such as "Sunny Delight"
and "Ocean Spray." Development of the milk-based industries, such as Ilchester Cheese Company and Yeo Valley Organic,
has resulted in the production of ranges of desserts, yoghurts and cheeses, including Cheddar cheese—some of which
has the West Country Farmhouse Cheddar Protected Designation of Origin (PDO). Traditional willow growing and weaving
(such as basket weaving) are not as extensive as they once were but are still carried out on the Somerset Levels and
is commemorated at the Willows and Wetlands Visitor Centre. Fragments of willow basket were found near the Glastonbury
Lake Village, and it was also used in the construction of several Iron Age causeways. The willow was harvested using
a traditional method of pollarding, where a tree would be cut back to the main stem. During the 1930s more than 3,600
hectares (8,900 acres) of willow were being grown commercially on the Levels. Largely due to the displacement of
baskets by plastic bags and cardboard boxes, the industry has declined severely since the 1950s. By the end of
the 20th century only about 140 hectares (350 acres) were grown commercially, near the villages of Burrowbridge,
Westonzoyland and North Curry. The Somerset Levels is now the only area in the UK where basket willow is grown commercially.
Towns such as Castle Cary and Frome grew around the medieval weaving industry. Street developed as a centre for the
production of woollen slippers and, later, boots and shoes, with C. & J. Clark establishing its headquarters in the
town. C&J Clark's shoes are no longer manufactured there, as the work was transferred to lower-wage countries in
Asia, such as China. Instead, in 1993, redundant factory buildings were converted to form Clarks Village, the first purpose-built
factory outlet in the UK. C&J Clark also had shoe factories, at one time at Bridgwater, Minehead, Westfield and Weston
super Mare to provide employment outside the main summer tourist season, but those satellite sites were closed in
the late 1980s, before the main site at Street. Dr. Martens shoes were also made in Somerset, by the Northampton-based
R. Griggs Group, using redundant skilled shoemakers from C&J Clark; that work has also been transferred to Asia.
The county has a long tradition of supplying freestone and building stone. Quarries at Doulting supplied freestone
used in the construction of Wells Cathedral. Bath stone is also widely used. Ralph Allen promoted its use in the
early 18th century, as did Hans Price in the 19th century, but it was used long before then. It was mined underground
at Combe Down and Bathampton Down Mines, and as a result of cutting the Box Tunnel, at locations in Wiltshire such
as Box. Bath stone is still used on a reduced scale today, but more often as a cladding rather than a structural
material. Further south, Hamstone is the colloquial name given to stone from Ham Hill, which is also widely used
in the construction industry. Blue Lias has been used locally as a building stone and as a raw material for lime
mortar and Portland cement. Until the 1960s, Puriton had Blue Lias stone quarries, as did several other Polden villages.
Its quarries also supplied a cement factory at Dunball, adjacent to the King's Sedgemoor Drain. The factory's derelict
early 20th-century remains were removed when the M5 motorway was constructed in the mid-1970s. Since the 1920s, the county
has supplied aggregates. Foster Yeoman is Europe's largest supplier of limestone aggregates, operating from Merehead
Quarry. It has a dedicated railway operation, Mendip Rail, which is used to transport aggregates by rail from a group
of Mendip quarries. Tourism is a major industry, estimated in 2001 to support around 23,000 people. Attractions include
the coastal towns, part of the Exmoor National Park, the West Somerset Railway (a heritage railway), and the museum
of the Fleet Air Arm at RNAS Yeovilton. The town of Glastonbury has mythical associations, including legends of a
visit by the young Jesus of Nazareth and Joseph of Arimathea, with links to the Holy Grail, King Arthur, and Camelot,
identified by some as Cadbury Castle, an Iron Age hill fort. Glastonbury also gives its name to an annual open-air
rock festival held in nearby Pilton. The Cheddar Gorge has show caves open to visitors and is known for its locally
produced cheese, although there is now only one remaining cheese maker in the village of Cheddar. Hinkley Point C
nuclear power station is a project to construct a 3,200 MW two-reactor nuclear power station. On 18 October 2010,
the British government announced that Hinkley Point – already the site of the disused Hinkley Point A and the still
operational Hinkley Point B power stations – was one of the eight sites it considered suitable for future nuclear
power stations. NNB Generation Company, a subsidiary of EDF, submitted an application for development consent to
the Infrastructure Planning Commission on 31 October 2011. A protest group, Stop Hinkley, was formed to campaign
for the closure of Hinkley Point B and oppose any expansion at the Hinkley Point site. In December 2013, the European
Commission opened an investigation to assess whether the project breaks state-aid rules. On 8 October 2014 it was
announced that the European Commission had approved the project by an overwhelming majority, with only four commissioners
voting against the decision. Population growth is higher than the national average, with a 6.4% increase in the
Somerset County Council area since 1991 and a 17% increase since 1981. The population density is 1.4 persons per
hectare, which can be compared to 2.07 persons per hectare for the South West region. Within the county, population
density ranges from 0.5 persons per hectare in West Somerset to 2.2 in Taunton Deane. The percentage of the population
who are economically active is higher than the regional and national average, and the unemployment rate is lower
than the regional and national average. Somerset has a high indigenous British population, with 98.8% registering
as white British and 92.4% of these as born in the United Kingdom. Chinese is the largest minority ethnic group, while the
black minority ethnic proportion of the total population is 2.9%. Over 25% of Somerset's population is concentrated
in Taunton, Bridgwater and Yeovil. The rest of the county is rural and sparsely populated. Over 9 million tourist
nights are spent in Somerset each year, which significantly increases the population at peak times. The ceremonial
county of Somerset consists of a two-tier non-metropolitan county, which is administered by Somerset County Council
and five district councils, and two unitary authority areas (whose councils combine the functions of a county and
a district). The five districts of Somerset are West Somerset, South Somerset, Taunton Deane, Mendip, and Sedgemoor.
The two unitary authorities — which were established on 1 April 1996 following the break-up of the short-lived county
of Avon — are North Somerset, and Bath & North East Somerset. All of the ceremonial county of Somerset is covered
by the Avon and Somerset Constabulary, a police force which also covers Bristol and South Gloucestershire. The Devon
and Somerset Fire and Rescue Service was formed in 2007 upon the merger of the Somerset Fire and Rescue Service with
its neighbouring Devon service; it covers the area of Somerset County Council as well as the entire ceremonial county
of Devon. The unitary districts of North Somerset and Bath & North East Somerset are instead covered by the Avon
Fire and Rescue Service, a service which also covers Bristol and South Gloucestershire. The South Western Ambulance
Service covers the entire South West of England, including all of Somerset; prior to February 2013 the unitary districts
of Somerset came under the Great Western Ambulance Service, which merged into South Western. The Dorset and Somerset
Air Ambulance is a charitable organisation based in the county. The Glastonbury Festival of Contemporary Performing
Arts takes place most years in Pilton, near Shepton Mallet, attracting over 170,000 music and culture lovers from
around the world to see world-famous entertainers. The Big Green Gathering which grew out of the Green fields at
the Glastonbury Festival is held in the Mendip Hills between Charterhouse and Compton Martin each summer. The annual
Bath Literature Festival is one of several local festivals in the county; others include the Frome Festival and the
Trowbridge Village Pump Festival, which, despite its name, is held at Farleigh Hungerford in Somerset. The annual
circuit of West Country Carnivals is held in a variety of Somerset towns during the autumn, forming a major regional
festival, and the largest Festival of Lights in Europe. In Arthurian legend, Avalon became associated with Glastonbury
Tor when monks at Glastonbury Abbey claimed to have discovered the bones of King Arthur and his queen. What is more
certain is that Glastonbury was an important religious centre by 700 and claims to be "the oldest above-ground Christian
church in the World" situated "in the mystical land of Avalon." The claim is based on dating the founding of the
community of monks at AD 63, the year of the legendary visit of Joseph of Arimathea, who was supposed to have brought
the Holy Grail. During the Middle Ages there were also important religious sites at Woodspring Priory and Muchelney
Abbey. The present Diocese of Bath and Wells covers Somerset – with the exception of the Parish of Abbots Leigh with
Leigh Woods in North Somerset – and a small area of Dorset. The Episcopal seat of the Bishop of Bath and Wells is
now in the Cathedral Church of Saint Andrew in the city of Wells, having previously been at Bath Abbey. Before the
English Reformation, it was a Roman Catholic diocese; the county now falls within the Roman Catholic Diocese of Clifton.
The Benedictine monastery Saint Gregory's Abbey, commonly known as Downside Abbey, is at Stratton-on-the-Fosse, and
the ruins of the former Cistercian Cleeve Abbey are near the village of Washford. The county has several museums;
those at Bath include the American Museum in Britain, the Museum of Bath Architecture, the Herschel Museum of Astronomy,
the Jane Austen Centre, and the Roman Baths. Other visitor attractions which reflect the cultural heritage of the
county include: Claverton Pumping Station, Dunster Working Watermill, the Fleet Air Arm Museum at Yeovilton, Nunney
Castle, The Helicopter Museum in Weston-super-Mare, King John's Hunting Lodge in Axbridge, Blake Museum Bridgwater,
Radstock Museum, Museum of Somerset in Taunton, the Somerset Rural Life Museum in Glastonbury, and Westonzoyland
Pumping Station Museum. Somerset has 11,500 listed buildings, 523 scheduled monuments, 192 conservation areas, 41
parks and gardens including those at Barrington Court, Holnicote Estate, Prior Park Landscape Garden and Tintinhull
Garden, 36 English Heritage sites and 19 National Trust sites, including Clevedon Court, Fyne Court, Montacute House
and Tyntesfield as well as Stembridge Tower Mill, the last remaining thatched windmill in England. Other historic
houses in the county which have remained in private ownership or used for other purposes include Halswell House and
Marston Bigot. A key contribution of Somerset architecture is its medieval church towers. Jenkins writes, "These
structures, with their buttresses, bell-opening tracery and crowns, rank with Nottinghamshire alabaster as England's
finest contribution to medieval art." Bath Rugby play at the Recreation Ground in Bath, and the Somerset County Cricket
Club are based at the County Ground in Taunton. The county gained its first Football League club in 2003, when Yeovil
Town won promotion to Division Three as Football Conference champions. They had achieved numerous FA Cup victories
over Football League sides in the past 50 years, and since joining the League they have won promotion again—as League
Two champions in 2005. They came close to yet another promotion in 2007, when they reached the League One playoff
final, but lost to Blackpool at the newly reopened Wembley Stadium. Yeovil achieved promotion to the Championship
in 2013 after beating Brentford in the playoff final. Horse racing courses are at Taunton and Wincanton. The Somerset
Coal Canal was built in the early 19th century to reduce the cost of transportation of coal and other heavy produce.
The first 16 kilometres (10 mi), running from a junction with the Kennet and Avon Canal, along the Cam valley, to
a terminal basin at Paulton, were in use by 1805, together with several tramways. A planned 11.7 km (7.3 mi) branch
to Midford was never built, but in 1815 a tramway was laid along its towing path. In 1871 the tramway was purchased
by the Somerset and Dorset Joint Railway (S&DJR), and operated until the 1950s. The usefulness of the canals was
short-lived, though some have now been restored for recreation. The 19th century also saw the construction of railways
to and through Somerset. The county was served by five pre-1923 Grouping railway companies: the Great Western Railway
(GWR); a branch of the Midland Railway (MR) to Bath Green Park (and another one to Bristol); the Somerset and Dorset
Joint Railway; and the London and South Western Railway (L&SWR). The former main lines of the GWR are still in use
today, although many of its branch lines were scrapped under the notorious Beeching Axe. The former lines of the
Somerset and Dorset Joint Railway closed completely, as has the branch of the Midland Railway to Bath Green Park
(and to Bristol St Philips); however, the L&SWR survived as a part of the present West of England Main Line. None
of these lines in Somerset is electrified. Two branch lines, the West and East Somerset Railways, were rescued
and transferred back to private ownership as "heritage" lines. The fifth railway was a short-lived light railway,
the Weston, Clevedon and Portishead Light Railway. The West Somerset Mineral Railway carried the iron ore from the
Brendon Hills to Watchet. Until the 1960s the piers at Weston-super-Mare, Clevedon, Portishead and Minehead were
served by the paddle steamers of P and A Campbell, which ran regular services to Barry and Cardiff as well as to Ilfracombe
and Lundy Island. The pier at Burnham-on-Sea was used for commercial goods; one of the reasons for building the Somerset and
Dorset Joint Railway was to provide a link between the Bristol Channel and the English Channel. The pier at Burnham-on-Sea
is the shortest pier in the UK. In the 1970s the Royal Portbury Dock was constructed to provide extra capacity for
the Port of Bristol. State schools in Somerset are provided by three local education authorities: Bath and North
East Somerset, North Somerset, and the larger Somerset County Council. All state schools are comprehensive. In some
areas primary, infant and junior schools cater for ages four to eleven, after which the pupils move on to secondary
schools. There is a three-tier system of first, middle and upper schools in the Cheddar Valley and in West Somerset,
while most other schools in the county use the two-tier system. Somerset has 30 state and 17 independent secondary
schools; Bath and North East Somerset has 13 state and 5 independent secondary schools; and North Somerset has 10
state and 2 independent secondary schools, excluding sixth form colleges. Some of the county's secondary schools
have specialist school status. Some schools have sixth forms and others transfer their sixth formers to colleges.
Several schools can trace their origins back many years, such as The Blue School in Wells and Richard Huish College
in Taunton. Others have changed their names over the years such as Beechen Cliff School which was started in 1905
as the City of Bath Boys' School and changed to its present name in 1972 when the grammar school was amalgamated
with a local secondary modern school, to form a comprehensive school. Many others were established and built since
the Second World War. In 2006, 5,900 pupils in Somerset sat GCSE examinations, with 44.5% achieving 5 grades A-C
including English and Maths (compared to 45.8% for England). There is also a range of independent or public schools.
Many of these are for pupils between 11 and 18 years, such as King's College, Taunton and Taunton School. King's
School, Bruton, was founded in 1519 and received royal foundation status around 30 years later in the reign of Edward
VI. Millfield is the largest co-educational boarding school. There are also preparatory schools for younger children,
such as All Hallows, and Hazlegrove Preparatory School. Chilton Cantelo School offers places both to day pupils and
boarders aged 7 to 16. Other schools provide education for children from the age of 3 or 4 years through to 18, such
as King Edward's School, Bath, Queen's College, Taunton and Wells Cathedral School which is one of the five established
musical schools for school-age children in Britain. Some of these schools have religious affiliations, such as Monkton
Combe School, Prior Park College, Sidcot School which is associated with the Religious Society of Friends, Downside
School which is a Roman Catholic public school in Stratton-on-the-Fosse, situated next to the Benedictine Downside
Abbey, and Kingswood School, which was founded by John Wesley in 1748 in Kingswood near Bristol, originally for the
education of the sons of the itinerant ministers (clergy) of the Methodist Church. The University of Bath and Bath
Spa University are higher education establishments in the north-east of the county. The University of Bath gained
its Royal Charter in 1966, although its origins go back to the Bristol Trade School (founded 1856) and Bath School
of Pharmacy (founded 1907). It has a purpose-built campus at Claverton on the outskirts of Bath, and has 15,000 students.
Bath Spa University, which is based at Newton St Loe, achieved university status in 2005, and has origins including
the Bath Academy of Art (founded 1898), Bath Teacher Training College, and the Bath College of Higher Education.
It has several campuses and 5,500 students.
Yale University is an American private Ivy League research university in New Haven, Connecticut. Founded in 1701 in Saybrook
Colony as the Collegiate School, the University is the third-oldest institution of higher education in the United
States. The school was renamed Yale College in 1718 in recognition of a gift from Elihu Yale, who was governor of
the British East India Company. Established to train Congregationalist ministers in theology and sacred languages,
by 1777 the school's curriculum began to incorporate humanities and sciences. In the 19th century the school incorporated
graduate and professional instruction, awarding the first Ph.D. in the United States in 1861 and organizing as a
university in 1887. Yale is organized into fourteen constituent schools: the original undergraduate college, the
Yale Graduate School of Arts and Sciences, and twelve professional schools. While the university is governed by the
Yale Corporation, each school's faculty oversees its curriculum and degree programs. In addition to a central campus
in downtown New Haven, the University owns athletic facilities in western New Haven, including the Yale Bowl, a campus
in West Haven, Connecticut, and forest and nature preserves throughout New England. The university's assets include
an endowment valued at $25.6 billion as of September 2015, the second largest of any educational institution. The
Yale University Library, serving all constituent schools, holds more than 15 million volumes and is the third-largest
academic library in the United States. Yale traces its beginnings to "An Act for Liberty to Erect a Collegiate School,"
passed by the General Court of the Colony of Connecticut on October 9, 1701, while meeting in New Haven. The Act
was an effort to create an institution to train ministers and lay leadership for Connecticut. Soon thereafter, a
group of ten Congregationalist ministers, all alumni of Harvard (Samuel Andrew, Thomas Buckingham, Israel Chauncy,
Samuel Mather, Rev. James Noyes II (son of James Noyes), James Pierpont, Abraham Pierson, Noadiah Russell, Joseph Webb
and Timothy Woodbridge), met in the study of Reverend Samuel Russell in Branford, Connecticut, to pool their books
to form the school's library. The group, led by James Pierpont, is now known as "The Founders".
In 1718, at the behest of either Rector Samuel Andrew or the colony's Governor Gurdon Saltonstall, Cotton Mather
contacted a successful businessman named Elihu Yale, who lived in Wales but had been born in Boston and whose father,
David, had been one of the original settlers in New Haven, to ask him for financial help in constructing a new building
for the college. Through the persuasion of Jeremiah Dummer, Yale, who had made a fortune through trade while living
in Madras as a representative of the East India Company, donated nine bales of goods, which were sold for more than
£560, a substantial sum at the time. Cotton Mather suggested that the school change its name to Yale College. Meanwhile,
a Harvard graduate working in England convinced some 180 prominent intellectuals that they should donate books to
Yale. The 1714 shipment of 500 books represented the best of modern English literature, science, philosophy and theology.
It had a profound effect on intellectuals at Yale. Undergraduate Jonathan Edwards discovered John Locke's works and
developed his original theology known as the "new divinity." In 1722 the Rector and six of his friends, who had a
study group to discuss the new ideas, announced that they had given up Calvinism, become Arminians, and joined the
Church of England. They were ordained in England and returned to the colonies as missionaries for the Anglican faith.
Thomas Clapp became president in 1745, and struggled to return the college to Calvinist orthodoxy; but he did not
close the library. Other students found Deist books in the library. Serious American students of theology and divinity,
particularly in New England, regarded Hebrew as a classical language, along with Greek and Latin, and essential for
study of the Old Testament in the original words. The Reverend Ezra Stiles, president of the College from 1778 to
1795, brought with him his interest in the Hebrew language as a vehicle for studying ancient Biblical texts in their
original language (as was common in other schools), requiring all freshmen to study Hebrew (in contrast to Harvard,
where only upperclassmen were required to study the language) and is responsible for the Hebrew phrase אורים ותמים
(Urim and Thummim) on the Yale seal. Stiles' greatest challenge occurred in July 1779 when hostile British forces
occupied New Haven and threatened to raze the College. However, Yale graduate Edmund Fanning, Secretary to the British
General in command of the occupation, interceded and the College was saved. Fanning was later granted an honorary
LL.D. degree in 1803 for his efforts. The Yale Report of 1828 was a dogmatic defense of the Latin and Greek curriculum
against critics who wanted more courses in modern languages, mathematics, and science. Unlike higher education in
Europe, there was no national curriculum for colleges and universities in the United States. In the competition for
students and financial support, college leaders strove to keep current with demands for innovation. At the same time,
they realized that a significant portion of their students and prospective students demanded a classical background.
The Yale report meant the classics would not be abandoned. All institutions experimented with changes in the curriculum,
often resulting in a dual track. In the decentralized environment of higher education in the United States, balancing
change with tradition was a common challenge because no one could afford to be completely modern or completely classical.
A group of professors at Yale and New Haven Congregationalist ministers articulated a conservative response to the
changes brought about by the Victorian culture. They concentrated on developing a whole man possessed of religious
values sufficiently strong to resist temptations from within, yet flexible enough to adjust to the 'isms' (professionalism,
materialism, individualism, and consumerism) tempting him from without. William Graham Sumner, professor from 1872
to 1909, taught in the emerging disciplines of economics and sociology to overflowing classrooms. He bested President
Noah Porter, who disliked social science and wanted Yale to lock into its traditions of classical education. Porter
objected to Sumner's use of a textbook by Herbert Spencer that espoused agnostic materialism because it might harm
students. The Revolutionary War soldier Nathan Hale (Yale 1773) was the prototype of the Yale ideal in the early
19th century: a manly yet aristocratic scholar, equally well-versed in knowledge and sports, and a patriot who "regretted"
that he "had but one life to lose" for his country. Western painter Frederic Remington (Yale 1900) was an artist
whose heroes gloried in combat and tests of strength in the Wild West. The fictional, turn-of-the-20th-century Yale
man Frank Merriwell embodied the heroic ideal without racial prejudice, and his fictional successor Frank Stover
in the novel Stover at Yale (1911) questioned the business mentality that had become prevalent at the school. Increasingly
the students turned to athletic stars as their heroes, especially since winning the big game became the goal of the
student body, and the alumni, as well as the team itself. Between 1892, when Harvard and Yale met in one of the first
intercollegiate debates, and 1909, the year of the first Triangular Debate of Harvard, Yale, and Princeton, the rhetoric,
symbolism, and metaphors used in athletics were used to frame these early debates. Debates were covered on front
pages of college newspapers and emphasized in yearbooks, and team members even received the equivalent of athletic
letters for their jackets. There were even rallies sending off the debating teams to matches. Yet the debates never
attained the broad appeal that athletics enjoyed. One reason may be that debates do not have a clear winner, as is
the case in sports, and that scoring is subjective. In addition, with late 19th-century concerns about the impact
of modern life on the human body, athletics offered hope that neither the individual nor the society was coming apart.
In 1909–10, football faced a crisis resulting from the failure of the previous reforms of 1905–06 to solve the problem
of serious injuries. There was a mood of alarm and mistrust, and, while the crisis was developing, the presidents
of Harvard, Yale, and Princeton developed a project to reform the sport and forestall possible radical changes forced
by government upon the sport. President Arthur Hadley of Yale, A. Lawrence Lowell of Harvard, and Woodrow Wilson
of Princeton worked to develop moderate changes to reduce injuries. Their attempts, however, were undermined by rebellion
against the rules committee and the formation of the Intercollegiate Athletic Association. The big three had tried to
operate independently of the majority, but changes did reduce injuries. Yale expanded gradually, establishing the
Yale School of Medicine (1810), Yale Divinity School (1822), Yale Law School (1843), Yale Graduate School of Arts
and Sciences (1847), the Sheffield Scientific School (1847), and the Yale School of Fine Arts (1869). In 1887, as
the college continued to grow under the presidency of Timothy Dwight V, Yale College was renamed Yale University.
The university would later add the Yale School of Music (1894), the Yale School of Forestry & Environmental Studies
(founded by Gifford Pinchot in 1900), the Yale School of Public Health (1915), the Yale School of Nursing (1923),
the Yale School of Drama (1955), the Yale Physician Associate Program (1973), and the Yale School of Management (1976).
It would also reorganize its relationship with the Sheffield Scientific School. Expansion caused controversy about
Yale's new roles. Noah Porter, moral philosopher, was president from 1871 to 1886. During an age of tremendous expansion
in higher education, Porter resisted the rise of the new research university, claiming that an eager embrace of its
ideals would corrupt undergraduate education. Many of Porter's contemporaries criticized his administration, and
historians since have disparaged his leadership. Levesque argues Porter was not a simple-minded reactionary, uncritically
committed to tradition, but a principled and selective conservative. He did not endorse everything old or reject
everything new; rather, he sought to apply long-established ethical and pedagogical principles to a rapidly changing
culture. He may have misunderstood some of the challenges of his time, but he correctly anticipated the enduring
tensions that have accompanied the emergence and growth of the modern university. Between 1925 and 1940, philanthropic
foundations, especially ones connected with the Rockefellers, contributed about $7 million to support the Yale Institute
of Human Relations and the affiliated Yerkes Laboratories of Primate Biology. The money went toward behavioral science
research, which was supported by foundation officers who aimed to "improve mankind" under an informal, loosely defined
human engineering effort. The behavioral scientists at Yale, led by President James R. Angell and psychobiologist
Robert M. Yerkes, tapped into foundation largesse by crafting research programs aimed to investigate, then suggest,
ways to control, sexual and social behavior. For example, Yerkes analyzed chimpanzee sexual behavior in hopes of
illuminating the evolutionary underpinnings of human development and providing information that could ameliorate
dysfunction. Ultimately, the behavioral-science results disappointed foundation officers, who shifted their human-engineering
funds toward biological sciences. Slack (2003) compares three groups that conducted biological research at Yale during
overlapping periods between 1910 and 1970. Yale proved important as a site for this research. The leaders of these
groups were Ross Granville Harrison, Grace E. Pickford, and G. Evelyn Hutchinson, and their members included both
graduate students and more experienced scientists. All produced innovative research, including the opening of new
subfields in embryology, endocrinology, and ecology, respectively, over a long period of time. Harrison's group is
shown to have been a classic research school; Pickford's and Hutchinson's were not. Pickford's group was successful
in spite of her lack of departmental or institutional position or power. Hutchinson and his graduate and postgraduate
students were extremely productive, but in diverse areas of ecology rather than one focused area of research or the
use of one set of research tools. Hutchinson's example shows that new models for research groups are needed, especially
for those that include extensive field research. Milton Winternitz led the Yale Medical School as its dean from 1920
to 1935. Dedicated to the new scientific medicine established in Germany, he was equally fervent about "social medicine"
and the study of humans in their culture and environment. He established the "Yale System" of teaching, with few
lectures and fewer exams, and strengthened the full-time faculty system; he also created the graduate-level Yale
School of Nursing and the Psychiatry Department, and built numerous new buildings. Progress toward his plans for
an Institute of Human Relations, envisioned as a refuge where social scientists would collaborate with biological
scientists in a holistic study of humankind, unfortunately lasted for only a few years before the opposition of resentful
anti-Semitic colleagues drove him to resign. The American studies program reflected the worldwide anti-Communist
ideological struggle. Norman Holmes Pearson, who worked for the Office of Strategic Services in London during World
War II, returned to Yale and headed the new American studies program, in which scholarship quickly became an instrument
of promoting liberty. Popular among undergraduates, the program sought to instruct them in the fundamentals of American
civilization and thereby instill a sense of nationalism and national purpose. Also during the 1940s and 1950s, Wyoming
millionaire William Robertson Coe made large contributions to the American studies programs at Yale University and
at the University of Wyoming. Coe was concerned to celebrate the 'values' of the Western United States in order to
meet the "threat of communism." In 1966, Yale began discussions with its sister school Vassar College about merging
to foster coeducation at the undergraduate level. Vassar, then all-female and part of the Seven Sisters—elite higher
education schools that historically served as sister institutions to the Ivy League when the Ivy League still only
admitted men—tentatively accepted, but then declined the invitation. Both schools introduced coeducation independently
in 1969. Amy Solomon was the first woman to register as a Yale undergraduate; she was also the first woman at Yale
to join an undergraduate society, St. Anthony Hall. The undergraduate class of 1973 was the first class to have women
starting from freshman year; at the time, all undergraduate women were housed in Vanderbilt Hall at the south end
of Old Campus. A decade into co-education, rampant sexual assault and harassment by faculty became
the impetus for the trailblazing lawsuit Alexander v. Yale. While unsuccessful in the courts, the legal reasoning
behind the case changed the landscape of sex discrimination law and resulted in the establishment of Yale's Grievance
Board and the Yale Women's Center. In March 2011 a Title IX complaint was filed against Yale by students and recent
graduates, including editors of Yale's feminist magazine Broad Recognition, alleging that the university had a hostile
sexual climate. In response, the university formed a Title IX steering committee to address complaints of sexual
misconduct. Yale has a complicated relationship with its home city; for example, thousands of students volunteer
every year in a myriad of community organizations, but city officials, who decry Yale's exemption from local property
taxes, have long pressed the university to do more to help. Under President Levin, Yale has financially supported
many of New Haven's efforts to reinvigorate the city. Evidence suggests that the town and gown relationships are
mutually beneficial. Still, the economic power of the university increased dramatically with its financial success
amid a decline in the local economy. The Boston Globe wrote that "if there's one school that can lay claim to educating
the nation's top national leaders over the past three decades, it's Yale." Yale alumni were represented on the Democratic
or Republican ticket in every U.S. Presidential election between 1972 and 2004. Yale-educated Presidents since the
end of the Vietnam War include Gerald Ford, George H.W. Bush, Bill Clinton, and George W. Bush, and major-party nominees
during this period include John Kerry (2004), Joseph Lieberman (Vice President, 2000), and Sargent Shriver (Vice
President, 1972). Other Yale alumni who made serious bids for the Presidency during this period include Hillary Clinton
(2008), Howard Dean (2004), Gary Hart (1984 and 1988), Paul Tsongas (1992), Pat Robertson (1988) and Jerry Brown
(1976, 1980, 1992). Several explanations have been offered for Yale’s representation in national elections since
the end of the Vietnam War. Various sources note the spirit of campus activism that has existed at Yale since the
1960s, and the intellectual influence of Reverend William Sloane Coffin on many of the future candidates. Yale President
Richard Levin attributes the run to Yale’s focus on creating "a laboratory for future leaders," an institutional
priority that began during the tenure of Yale Presidents Alfred Whitney Griswold and Kingman Brewster. Richard H.
Brodhead, former dean of Yale College and now president of Duke University, stated: "We do give very significant
attention to orientation to the community in our admissions, and there is a very strong tradition of volunteerism
at Yale." Yale historian Gaddis Smith notes "an ethos of organized activity" at Yale during the 20th century that
led John Kerry to lead the Yale Political Union's Liberal Party, George Pataki the Conservative Party, and Joseph
Lieberman to manage the Yale Daily News. Camille Paglia points to a history of networking and elitism: "It has to
do with a web of friendships and affiliations built up in school." CNN suggests that George W. Bush benefited from
preferential admissions policies for the "son and grandson of alumni", and for a "member of a politically influential
family." New York Times correspondent Elisabeth Bumiller and The Atlantic Monthly correspondent James Fallows credit
the culture of community and cooperation that exists between students, faculty, and administration, which downplays
self-interest and reinforces commitment to others. During the 1988 presidential election, George H. W. Bush (Yale
'48) derided Michael Dukakis for having "foreign-policy views born in Harvard Yard's boutique". When challenged on
the distinction between Dukakis's Harvard connection and his own Yale background, he said that, unlike Harvard, Yale's
reputation was "so diffuse, there isn't a symbol, I don't think, in the Yale situation, any symbolism in it" and
said Yale did not share Harvard's reputation for "liberalism and elitism". In 2004 Howard Dean stated, "In some ways,
I consider myself separate from the other three (Yale) candidates of 2004. Yale changed so much between the class
of '68 and the class of '71. My class was the first class to have women in it; it was the first class to have a significant
effort to recruit African Americans. It was an extraordinary time, and in that span of time is the change of an entire
generation". In 2009, former British Prime Minister Tony Blair picked Yale as one location – the others being Britain's
Durham University and Universiti Teknologi MARA – for the Tony Blair Faith Foundation's United States Faith and Globalization
Initiative. As of 2009, former Mexican President Ernesto Zedillo is the director of the Yale Center for the Study
of Globalization and teaches an undergraduate seminar, "Debating Globalization". As of 2009, former presidential
candidate and DNC chair Howard Dean teaches a residential college seminar, "Understanding Politics and Politicians."
Also in 2009, an alliance was formed among Yale, University College London, and both schools’ affiliated hospital
complexes to conduct research focused on the direct improvement of patient care—a growing field known as translational
medicine. President Richard Levin noted that Yale has hundreds of other partnerships across the world, but "no existing
collaboration matches the scale of the new partnership with UCL". The Yale Provost's Office has launched several
women into prominent university presidencies. In 1977 Hanna Holborn Gray was appointed acting President of Yale from
this position, and went on to become President of the University of Chicago, the first woman to be full president
of a major university. In 1994 Yale Provost Judith Rodin became the first female president of an Ivy League institution
at the University of Pennsylvania. In 2002 Provost Alison Richard became the Vice Chancellor of the University of
Cambridge. In 2004, Provost Susan Hockfield became the President of the Massachusetts Institute of Technology. In
2007 Deputy Provost Kim Bottomly was named President of Wellesley College. In 2003, the Dean of the Divinity School,
Rebecca Chopp, was appointed president of Colgate University and now heads Swarthmore College. Much of Yale University's
staff, including most maintenance staff, dining hall employees, and administrative staff, are unionized. Clerical
and technical employees are represented by Local 34 of UNITE HERE and service and maintenance workers by Local 35
of the same international. Together with the Graduate Employees and Students Organization (GESO), an unrecognized
union of graduate employees, Locals 34 and 35 make up the Federation of Hospital and University Employees. Also included
in FHUE are the dietary workers at Yale-New Haven Hospital, who are members of 1199 SEIU. In addition to these unions,
officers of the Yale University Police Department are members of the Yale Police Benevolent Association, which affiliated
in 2005 with the Connecticut Organization for Public Safety Employees. Finally, Yale security officers voted to join
the International Union of Security, Police and Fire Professionals of America in fall 2010 after the National Labor
Relations Board ruled they could not join AFSCME; the Yale administration contested the election. Yale has a history
of difficult and prolonged labor negotiations, often culminating in strikes. There have been at least eight strikes
since 1968, and The New York Times wrote that Yale has a reputation as having the worst record of labor tension of
any university in the U.S. Yale's unusually large endowment exacerbates the tension over wages. Moreover, Yale has
been accused of failing to treat workers with respect. In a 2003 strike, however, the university claimed that more
union employees were working than striking. Professor David Graeber was 'retired' after he came to the defense of
a student who was involved in campus labor issues. Yale's central campus in downtown New Haven covers 260 acres (1.1
km2) and comprises its main, historic campus and a medical campus adjacent to the Yale-New Haven Hospital. In western
New Haven, the university holds 500 acres (2.0 km2) of athletic facilities, including the Yale Golf Course. In 2008,
Yale purchased the 136-acre (0.55 km2) former Bayer Pharmaceutical campus in West Haven, Connecticut, the buildings
of which are now used as laboratory and research space. Yale also owns seven forests in Connecticut, Vermont, and
New Hampshire—the largest of which is the 7,840-acre (31.7 km2) Yale-Myers Forest in Connecticut's Quiet Corner—and
nature preserves including Horse Island. Yale is noted for its largely Collegiate Gothic campus as well as for several
iconic modern buildings commonly discussed in architectural history survey courses: Louis Kahn's Yale Art Gallery
and Center for British Art, Eero Saarinen's Ingalls Rink and Ezra Stiles and Morse Colleges, and Paul Rudolph's Art
& Architecture Building. Yale also owns and has restored many noteworthy 19th-century mansions along Hillhouse Avenue,
which was considered the most beautiful street in America by Charles Dickens when he visited the United States in
the 1840s. In 2011, Travel+Leisure listed the Yale campus as one of the most beautiful in the United States. Many
of Yale's buildings were constructed in the Collegiate Gothic architecture style from 1917 to 1931, financed largely
by Edward S. Harkness. Stone sculptures built into the walls of the buildings portray contemporary college personalities
such as a writer, an athlete, a tea-drinking socialite, and a student who has fallen asleep while reading. Similarly,
the decorative friezes on the buildings depict contemporary scenes such as policemen chasing a robber and arresting
a prostitute (on the wall of the Law School), or a student relaxing with a mug of beer and a cigarette. The architect,
James Gamble Rogers, faux-aged these buildings by splashing the walls with acid, deliberately breaking their leaded
glass windows and repairing them in the style of the Middle Ages, and creating niches for decorative statuary but
leaving them empty to simulate loss or theft over the ages. In fact, the buildings merely simulate Middle Ages architecture,
for though they appear to be constructed of solid stone blocks in the authentic manner, most actually have steel
framing as was commonly used in 1930. One exception is Harkness Tower, 216 feet (66 m) tall, which was originally
a free-standing stone structure. It was reinforced in 1964 to allow the installation of the Yale Memorial Carillon.
Other examples of the Gothic (also called neo-Gothic and collegiate Gothic) style are on Old Campus by such architects
as Henry Austin, Charles C. Haight and Russell Sturgis. Several are associated with members of the Vanderbilt family,
including Vanderbilt Hall, Phelps Hall, St. Anthony Hall (a commission for member Frederick William Vanderbilt),
the Mason, Sloane and Osborn laboratories, dormitories for the Sheffield Scientific School (the engineering and sciences
school at Yale until 1956) and elements of Silliman College, the largest residential college. Alumnus Eero Saarinen,
Finnish-American architect of such notable structures as the Gateway Arch in St. Louis, Washington Dulles International
Airport main terminal, Bell Labs Holmdel Complex and the CBS Building in Manhattan, designed Ingalls Rink at Yale
and the newest residential colleges of Ezra Stiles and Morse. These latter were modelled after the medieval Italian
hilltown of San Gimignano – a prototype chosen for the town's pedestrian-friendly milieu and fortress-like stone
towers. These tower forms at Yale act in counterpoint to the college's many Gothic spires and Georgian cupolas. Yale's
Office of Sustainability develops and implements sustainability practices at Yale. Yale is committed to reducing its
greenhouse gas emissions to 10% below 1990 levels by the year 2020. As part of this commitment, the university allocates
renewable energy credits to offset some of the energy used by residential colleges. Eleven campus buildings are candidates
for LEED design and certification. Yale Sustainable Food Project initiated the introduction of local, organic vegetables,
fruits, and beef to all residential college dining halls. Yale was listed as a Campus Sustainability Leader on the
Sustainable Endowments Institute’s College Sustainability Report Card 2008, and received a "B+" grade overall. Yale's
secret society buildings (some of which are called "tombs") were built to be at once private and unmistakable. A diversity
of architectural styles is represented: Berzelius, Donn Barber in an austere cube with classical detailing (erected
in 1908 or 1910); Book and Snake, Louis R. Metcalfe in a Greek Ionic style (erected in 1901); Elihu, architect unknown
but built in a Colonial style (constructed on an early 17th-century foundation although the building is from the
18th century); Mace and Chain, in a late colonial, early Victorian style (built in 1823; its interior moulding is said
to have belonged to Benedict Arnold); Manuscript Society, King-lui Wu with Dan Kniley responsible for landscaping
and Josef Albers for the brickwork intaglio mural, built in a mid-century modern style; Scroll and
Key, Richard Morris Hunt in a Moorish- or Islamic-inspired Beaux-Arts style (erected 1869–70); Skull and Bones, possibly
Alexander Jackson Davis or Henry Austin in an Egypto-Doric style utilizing brownstone (the first wing was completed
in 1856, the second wing in 1903, and the Neo-Gothic towers in the rear garden in 1911); St. Elmo, (former
tomb) Kenneth M. Murchison, 1912, with designs inspired by an Elizabethan manor (its current location is a brick colonial); Shabtai,
1882, the Anderson Mansion built in the Second Empire architectural style; and Wolf's Head, Bertram Grosvenor Goodhue
(erected 1923-4). Several campus safety strategies have been pioneered at Yale. The first campus police force was
founded at Yale in 1894, when the university contracted city police officers to exclusively cover the campus. Later
hired by the university, the officers were originally brought in to quell unrest between students and city residents
and curb destructive student behavior. In addition to the Yale Police Department, a variety of safety services are
available including blue phones, a safety escort, and 24-hour shuttle service. Through its program of need-based
financial aid, Yale commits to meet the full demonstrated financial need of all applicants. Most financial aid is
in the form of grants and scholarships that do not need to be paid back to the university, and the average need-based
aid grant for the Class of 2017 was $46,395. 15% of Yale College students are expected to have no parental contribution,
and about 50% receive some form of financial aid. About 16% of the Class of 2013 had some form of student loan debt
at graduation, with an average debt of $13,000 among borrowers. Rare books are found in several Yale collections.
The Beinecke Rare Book Library has a large collection of rare books and manuscripts. The Harvey Cushing/John Hay
Whitney Medical Library includes important historical medical texts, including an impressive collection of rare books,
as well as historical medical instruments. The Lewis Walpole Library contains the largest collection of 18th-century
British literary works. The Elizabethan Club, technically a private organization, makes its Elizabethan folios and
first editions available to qualified researchers through Yale. Yale's museum collections are also of international
stature. The Yale University Art Gallery, the country's first university-affiliated art museum, contains more than
180,000 works, including Old Masters and important collections of modern art, in the Swartwout and Kahn buildings.
The latter, Louis Kahn's first large-scale American work (1953), was renovated and reopened in December 2006. The
Yale Center for British Art, the largest collection of British art outside of the UK, grew from a gift of Paul Mellon
and is housed in another Kahn-designed building. Yale's English and Comparative Literature departments were part
of the New Criticism movement. Of the New Critics, Robert Penn Warren, W.K. Wimsatt, and Cleanth Brooks were all
Yale faculty. Later, the Yale Comparative literature department became a center of American deconstruction. Jacques
Derrida, the father of deconstruction, taught at the Department of Comparative Literature from the late seventies
to mid-1980s. Several other Yale faculty members were also associated with deconstruction, forming the so-called
"Yale School". These included Paul de Man who taught in the Departments of Comparative Literature and French, J.
Hillis Miller, Geoffrey Hartman (both taught in the Departments of English and Comparative Literature), and Harold
Bloom (English), whose theoretical position was always somewhat specific, and who ultimately took a very different
path from the rest of this group. Yale's history department has also originated important intellectual trends. Historians
C. Vann Woodward and David Brion Davis are credited with beginning in the 1960s and 1970s an important stream of
southern historians; likewise, David Montgomery, a labor historian, advised many of the current generation of labor
historians in the country. Yale's Music School and Department fostered the growth of Music Theory in the latter half
of the 20th century. The Journal of Music Theory was founded there in 1957; Allen Forte and David Lewin were influential
teachers and scholars. Yale's residential college system was established in 1933 by Edward S. Harkness, who admired
the social intimacy of Oxford and Cambridge and donated significant funds to found similar colleges at Yale and Harvard.
Though Yale's colleges resemble their English precursors organizationally and architecturally, they are dependent
entities of Yale College and have limited autonomy. The colleges are led by a master and an academic dean, who reside
in the college, and university faculty and affiliates comprise each college's fellowship. Colleges offer their own
seminars, social events, and speaking engagements known as "Master's Teas," but do not contain programs of study
or academic departments. Instead, all undergraduate courses are taught by the Faculty of Arts and Sciences and are
open to members of any college. While Harkness' original colleges were Georgian Revival or Collegiate Gothic in style,
two colleges constructed in the 1960s, Morse and Ezra Stiles Colleges, have modernist designs. All twelve college
quadrangles are organized around a courtyard, and each has a dining hall, library, common room, and a
range of student facilities. The twelve colleges are named for important alumni or significant places in university
history. In 2017, the university expects to open two new colleges near Science Hill. In the wake of the racially motivated
church shooting in Charleston, South Carolina, Yale came under criticism again in the summer of 2015 for Calhoun College,
one of 12 residential colleges, which was named after John C. Calhoun, a slave-owner and strong slavery supporter
in the nineteenth century. In July 2015 students signed a petition calling for the name change. They argued in the
petition that—while Calhoun was respected in the 19th century as an "extraordinary American statesman"—he was "one
of the most prolific defenders of slavery and white supremacy" in the history of the United States. In August 2015
Yale President Peter Salovey delivered an address to the freshman Class of 2019 in which he responded to the racial tensions but
explained why the college would not be renamed. He described Calhoun as "a notable political theorist, a vice president
to two different U.S. presidents, a secretary of war and of state, and a congressman and senator representing South
Carolina." He acknowledged that Calhoun also "believed that the highest forms of civilization depend on involuntary
servitude. Not only that, but he also believed that the races he thought to be inferior, black people in particular,
ought to be subjected to it for the sake of their own best interests." Racial tensions increased in the fall of 2015
centering on comments by Nicholas A. Christakis and his wife Erika regarding freedom of speech. In April 2016 Salovey
announced that "despite decades of vigorous alumni and student protests," Calhoun's name would remain on the Yale
residential college explaining that it is preferable for Yale students to live in Calhoun's "shadow" so they will
be "better prepared to rise to the challenges of the present and the future." He claimed that if they removed Calhoun's
name, it would "obscure" his "legacy of slavery rather than addressing it." "Yale is part of that history" and "We
cannot erase American history, but we can confront it, teach it and learn from it." One change was announced, however:
the title of “master” for faculty members who serve as residential college leaders would be renamed to “head of
college” because of its connotations of slavery. The university hosts a variety of student journals, magazines, and newspapers.
Established in 1872, The Yale Record is the world's oldest humor magazine. Newspapers include the Yale Daily News,
which was first published in 1878, and the weekly Yale Herald, which was first published in 1986. Dwight Hall, an
independent, non-profit community service organization, oversees more than 2,000 Yale undergraduates working on more
than 70 community service initiatives in New Haven. The Yale College Council runs several agencies that oversee campus
wide activities and student services. The Yale Dramatic Association and Bulldog Productions cater to the theater
and film communities, respectively. In addition, the Yale Drama Coalition serves to coordinate between and provide
resources for the various Sudler Fund sponsored theater productions which run each weekend. WYBC Yale Radio is the
campus's radio station, owned and operated by students. While students used to broadcast on AM & FM frequencies,
they now have an Internet-only stream. Yale seniors at graduation smash clay pipes underfoot to symbolize passage
from their "bright college years," though in recent history the pipes have been replaced with "bubble pipes". ("Bright
College Years," the University's alma mater, was penned in 1881 by Henry Durand, Class of 1881, to the tune of Die
Wacht am Rhein.) Yale's student tour guides tell visitors that students consider it good luck to rub the toe of the
statue of Theodore Dwight Woolsey on Old Campus. Actual students rarely do so. In the second half of the 20th century
Bladderball, a campus-wide game played with a large inflatable ball, became a popular tradition but was banned by
administration due to safety concerns. In spite of administration opposition, students revived the game in 2009,
2011, and 2014, but its future remains uncertain. Yale has numerous athletic facilities, including the Yale Bowl
(the nation's first natural "bowl" stadium, and prototype for such stadiums as the Los Angeles Memorial Coliseum
and the Rose Bowl), located at The Walter Camp Field athletic complex, and the Payne Whitney Gymnasium, the second-largest
indoor athletic complex in the world. October 21, 2000, marked the dedication of Yale's fourth new boathouse in 157
years of collegiate rowing. The Richard Gilder Boathouse is named to honor former Olympic rower Virginia Gilder '79
and her father Richard Gilder '54, who gave $4 million towards the $7.5 million project. Yale also maintains the
Gales Ferry site where the heavyweight men's team trains for the Yale-Harvard Boat Race. Yale has had many financial
supporters, but some stand out by the magnitude or timeliness of their contributions. Among those who have made large
donations commemorated at the university are: Elihu Yale; Jeremiah Dummer; the Harkness family (Edward, Anna, and
William); the Beinecke family (Edwin, Frederick, and Walter); John William Sterling; Payne Whitney; Joseph E. Sheffield;
Paul Mellon; Charles B. G. Murphy; and William K. Lanman. The Yale Class of 1954, led by Richard Gilder, donated $70
million in commemoration of their 50th reunion. Charles B. Johnson, a 1954 graduate of Yale College, pledged a $250
million gift in 2013 to support the construction of two new residential colleges. Yale has produced alumni distinguished
in their respective fields. Among the best-known are U.S. Presidents William Howard Taft, Gerald Ford, George H.
W. Bush, Bill Clinton and George W. Bush; royals Crown Princess Victoria Bernadotte, Prince Rostislav Romanov and
Prince Akiiki Hosea Nyabongo; heads of state, including Italian prime minister Mario Monti, Turkish prime minister
Tansu Çiller, Mexican president Ernesto Zedillo, German president Karl Carstens, and Philippines president José Paciano
Laurel; U.S. Supreme Court Justices Sonia Sotomayor, Samuel Alito and Clarence Thomas; U.S. Secretaries of State
John Kerry, Hillary Clinton, Cyrus Vance, and Dean Acheson; authors Sinclair Lewis, Stephen Vincent Benét, and Tom
Wolfe; lexicographer Noah Webster; inventors Samuel F. B. Morse and Eli Whitney; patriot and "first spy" Nathan Hale;
theologian Jonathan Edwards; actors, directors and producers Paul Newman, Henry Winkler, Vincent Price, Meryl Streep,
Sigourney Weaver, Jodie Foster, Angela Bassett, Patricia Clarkson, Courtney Vance, Frances McDormand, Elia Kazan,
George Roy Hill, Edward Norton, Lupita Nyong'o, Allison Williams, Oliver Stone, Sam Waterston, Michael Cimino, and
James Franco; "Father of American football" Walter Camp; "the perfect oarsman" Rusty Wailes; baseball players Ron
Darling, Bill Hutchinson, and Craig Breslow; basketball player Chris Dudley; football players Gary Fencik, and Calvin
Hill; hockey players Chris Higgins and Mike Richter; figure skater Sarah Hughes; swimmer Don Schollander; skier Ryan
Max Riley; runner Frank Shorter; composers Charles Ives, Douglas Moore and Cole Porter; Peace Corps founder Sargent
Shriver; child psychologist Benjamin Spock; architects Eero Saarinen and Norman Foster; sculptor Richard Serra; film
critic Gene Siskel; television commentators Dick Cavett and Anderson Cooper; New York Times journalist David Gonzalez;
pundits William F. Buckley, Jr., and Fareed Zakaria; economists Irving Fisher, Mahbub ul Haq, and Paul Krugman;
cyclotron inventor and Nobel laureate in Physics, Ernest Lawrence; Human Genome Project director Francis S. Collins;
mathematician and chemist Josiah Willard Gibbs; and businesspeople, including Time Magazine co-founder Henry Luce,
Morgan Stanley founder Harold Stanley, Boeing CEO James McNerney, FedEx founder Frederick W. Smith, Time Warner president
Jeffrey Bewkes, Electronic Arts co-founder Bing Gordon, and investor/philanthropist Sir John Templeton; pioneer in
electrical applications Austin Cornelius Dunham. Yale University, one of the oldest universities in the United States,
is a cultural referent as an institution that produces some of the most elite members of society, and its grounds,
alumni, and students have been prominently portrayed in fiction and U.S. popular culture. For example, Owen Johnson's
novel, Stover at Yale, follows the college career of Dink Stover, and Frank Merriwell, the model for all later juvenile
sports fiction, plays football, baseball, crew, and track at Yale while solving mysteries and righting wrongs. Yale
University also is featured in F. Scott Fitzgerald's novel "The Great Gatsby". The narrator, Nick Carraway, wrote
a series of editorials for the Yale News, and Tom Buchanan was "one of the most powerful ends that ever played football"
for Yale.
Around 1300, centuries of prosperity and growth in Europe came to a halt. A series of famines and plagues, including the
Great Famine of 1315–1317 and the Black Death, reduced the population to around half of what it was before the calamities.
Along with depopulation came social unrest and endemic warfare. France and England experienced serious peasant uprisings,
such as the Jacquerie and the Peasants' Revolt, as well as over a century of intermittent conflict in the Hundred
Years' War. To add to the many problems of the period, the unity of the Catholic Church was shattered by the Western
Schism. Collectively these events are sometimes called the Crisis of the Late Middle Ages. Despite these crises,
the 14th century was also a time of great progress in the arts and sciences. Following a renewed interest in ancient
Greek and Roman texts that took root in the High Middle Ages, the Italian Renaissance began. The absorption of Latin
texts had started before the Renaissance of the 12th century through contact with Arabs during the Crusades, but
the availability of important Greek texts accelerated with the capture of Constantinople by the Ottoman Turks, when
many Byzantine scholars had to seek refuge in the West, particularly Italy. Combined with this influx of classical
ideas was the invention of printing which facilitated dissemination of the printed word and democratized learning.
These two things would later lead to the Protestant Reformation. Toward the end of the period, an era of discovery
began, the Age of Discovery. The rise of the Ottoman Empire, culminating in the Fall of Constantinople in 1453, eroded
the last remnants of the Byzantine Empire and cut off trading possibilities with the east. Europeans were forced
to seek new trading routes, leading to the expedition of Columbus to the Americas in 1492, and Vasco da Gama's voyage
around Africa to India in 1498. Their discoveries strengthened the economy and power of European nations. The term "Late
Middle Ages" refers to one of the three periods of the Middle Ages, along with the Early Middle Ages and the High
Middle Ages. Leonardo Bruni was the first historian to use tripartite periodization in his History of the Florentine
People (1442). Flavio Biondo used a similar framework in Decades of History from the Deterioration of the Roman Empire
(1439–1453). Tripartite periodization became standard after the German historian Christoph Cellarius published Universal
History Divided into an Ancient, Medieval, and New Period (1683). As economic and demographic methods were applied
to the study of history, the trend was increasingly to see the late Middle Ages as a period of recession and crisis.
Belgian historian Henri Pirenne continued the subdivision of Early, High, and Late Middle Ages in the years around
World War I. Yet it was his Dutch colleague, Johan Huizinga, who was primarily responsible for popularising the pessimistic
view of the Late Middle Ages, with his book The Autumn of the Middle Ages (1919). To Huizinga, whose research focused
on France and the Low Countries rather than Italy, despair and decline were the main themes, not rebirth. Modern
historiography on the period has reached a consensus between the two extremes of innovation and crisis. It is now
(generally) acknowledged that conditions were vastly different north and south of the Alps, and "Late Middle Ages"
is often avoided entirely within Italian historiography. The term "Renaissance" is still considered useful for describing
certain intellectual, cultural, or artistic developments, but not as the defining feature of an entire European historical
epoch. The period from the early 14th century up until – and sometimes including – the 16th century, is rather seen
as characterised by other trends: demographic and economic decline followed by recovery, the end of western religious
unity and the subsequent emergence of the nation state, and the expansion of European influence onto the rest of
the world. After the failed union of Sweden and Norway of 1319–1365, the pan-Scandinavian Kalmar Union was instituted
in 1397. The Swedes were reluctant members of the Danish-dominated union from the start. In an attempt to subdue
the Swedes, King Christian II of Denmark had large numbers of the Swedish aristocracy killed in the Stockholm Bloodbath
of 1520. Yet this measure only led to further hostilities, and Sweden broke away for good in 1523. Norway, on the
other hand, became the weaker party in the union and remained united with Denmark until 1814. Bohemia prospered
in the 14th century, and the Golden Bull of 1356 made the king of Bohemia first among the imperial electors, but
the Hussite revolution threw the country into crisis. The Holy Roman Empire passed to the Habsburgs in 1438, where
it remained until its dissolution in 1806. Yet in spite of the extensive territories held by the Habsburgs, the Empire
itself remained fragmented, and much real power and influence lay with the individual principalities. In addition,
financial institutions, such as the Hanseatic League and the Fugger family, held great power, on both economic and
political levels. Louis I of Hungary did not leave a son as heir after his death in 1382. Instead, he named as his heir the
young prince Sigismund of Luxemburg, who was 11 years old. The Hungarian nobility did not accept his claim, and the
result was an internal war. Sigismund eventually achieved total control of Hungary and established his court in Buda
and Visegrád. Both palaces were rebuilt and improved, and were considered the richest of the time in Europe. Inheriting
the throne of Bohemia and the Holy Roman Empire, Sigismund continued conducting his politics from Hungary, but he
was kept busy fighting the Hussites and the Ottoman Empire, which was becoming a menace to Europe in the beginning
of the 15th century. The Bulgarian Empire was in decline by the 14th century, and the ascendancy of Serbia was marked
by the Serbian victory over the Bulgarians in the Battle of Velbazhd in 1330. By 1346, the Serbian king Stefan Dušan
had been proclaimed emperor. Yet Serbian dominance was short-lived; the Serbian army led by Lazar Hrebeljanović
was defeated by the Ottomans at the Battle of Kosovo in 1389, where most of the Serbian nobility was killed and the
south of the country came under Ottoman occupation, as much of southern Bulgaria had become Ottoman territory in
1371. Northern remnants of Bulgaria were finally conquered by 1396, Serbia fell in 1459, Bosnia in 1463, and Albania
was finally subordinated in 1479, only a few years after the death of Skanderbeg. Belgrade, a Hungarian domain at
the time, was the last large Balkan city to fall under Ottoman rule, in 1521. By the end of the medieval period,
the entire Balkan peninsula was annexed by, or became vassal to, the Ottomans. Avignon was the seat of the papacy
from 1309 to 1376. With the return of the Pope to Rome in 1378, the Papal State developed into a major secular power,
culminating in the morally corrupt papacy of Alexander VI. Florence grew to prominence amongst the Italian city-states
through financial business, and the dominant Medici family became important promoters of the Renaissance through
their patronage of the arts. Other city states in northern Italy also expanded their territories and consolidated
their power, primarily Milan and Venice. The War of the Sicilian Vespers had by the early 14th century divided southern
Italy into an Aragon Kingdom of Sicily and an Anjou Kingdom of Naples. In 1442, the two kingdoms were effectively
united under Aragonese control. The 1469 marriage of Isabella I of Castile and Ferdinand II of Aragon and the 1479
death of John II of Aragon led to the creation of modern-day Spain. In 1492, Granada was captured from the Moors,
thereby completing the Reconquista. Portugal had during the 15th century – particularly under Henry the Navigator
– gradually explored the coast of Africa, and in 1498, Vasco da Gama found the sea route to India. The Spanish monarchs
met the Portuguese challenge by financing the expedition of Christopher Columbus to find a western sea route to India,
leading to the discovery of the Americas in 1492. Around 1300–1350 the Medieval Warm Period gave way to the Little
Ice Age. The colder climate resulted in agricultural crises, the first of which is known as the Great Famine of 1315–1317.
The demographic consequences of this famine, however, were not as severe as the plagues that occurred later in the
century, particularly the Black Death. Estimates of the death rate caused by this epidemic range from one third to
as much as sixty percent. By around 1420, the accumulated effect of recurring plagues and famines had reduced the
population of Europe to perhaps no more than a third of what it was a century earlier. The effects of natural disasters
were exacerbated by armed conflicts; this was particularly the case in France during the Hundred Years' War. As the
European population was severely reduced, land became more plentiful for the survivors, and labour consequently more
expensive. Attempts by landowners to forcibly reduce wages, such as the English 1351 Statute of Labourers, were doomed
to fail. These efforts resulted in nothing more than fostering resentment among the peasantry, leading to rebellions
such as the French Jacquerie in 1358 and the English Peasants' Revolt in 1381. The long-term effect was the virtual
end of serfdom in Western Europe. In Eastern Europe, on the other hand, landowners were able to exploit the situation
to force the peasantry into even more repressive bondage. Up until the mid-14th century, Europe had experienced steadily
increasing urbanisation. Cities were also decimated by the Black Death, but the role of urban areas as centres of
learning, commerce and government ensured continued growth. By 1500, Venice, Milan, Naples, Paris and Constantinople
each probably had more than 100,000 inhabitants. Twenty-two other cities were larger than 40,000; most of these were
in Italy and the Iberian peninsula, but there were also some in France, the Empire, the Low Countries, plus London
in England. Changes also took place within the recruitment and composition of armies. The use of the national or
feudal levy was gradually replaced by paid troops of domestic retinues or foreign mercenaries. The practice was associated
with Edward III of England and the condottieri of the Italian city-states. All over Europe, Swiss soldiers were in
particularly high demand. At the same time, the period also saw the emergence of the first permanent armies. It was
in Valois France, under the heavy demands of the Hundred Years' War, that the armed forces gradually assumed a permanent
nature. Parallel to the military developments emerged also a constantly more elaborate chivalric code of conduct
for the warrior class. This new-found ethos can be seen as a response to the diminishing military role of the aristocracy,
and gradually it became almost entirely detached from its military origin. The spirit of chivalry was given expression
through the new (secular) type of chivalric orders; the first of these was the Order of St. George, founded by Charles
I of Hungary in 1325, while the best known was probably the English Order of the Garter, founded by Edward III in
1348. The French crown's increasing dominance over the Papacy culminated in the transference of the Holy See to Avignon
in 1309. The Pope's return to Rome in 1377 led to the election of rival popes in Avignon and Rome,
resulting in the Papal Schism (1378–1417). The Schism divided Europe along political lines; while France, her ally
Scotland and the Spanish kingdoms supported the Avignon Papacy, France's enemy England stood behind the Pope in Rome,
together with Portugal, Scandinavia and most of the German princes. Though many of the events were outside the traditional
time-period of the Middle Ages, the end of the unity of the Western Church (the Protestant Reformation) was one
of the distinguishing characteristics of the late medieval period. The Catholic Church had long fought against heretical
movements, but during the Late Middle Ages, it started to experience demands for reform from within. The first of
these came from Oxford professor John Wycliffe in England. Wycliffe held that the Bible should be the only authority
in religious questions, and he spoke out against transubstantiation, celibacy and indulgences. In spite of influential
supporters among the English aristocracy, such as John of Gaunt, the movement was not allowed to survive. Though
Wycliffe himself was left unmolested, his supporters, the Lollards, were eventually suppressed in England. The marriage
of Richard II of England to Anne of Bohemia established contacts between the two nations and brought Lollard ideas
to her homeland. The teachings of the Czech priest Jan Hus were based on those of John Wycliffe, yet his followers,
the Hussites, were to have a much greater political impact than the Lollards. Hus gained a great following in Bohemia,
and in 1414, he was requested to appear at the Council of Constance to defend his cause. When he was burned as a
heretic in 1415, it caused a popular uprising in the Czech lands. The subsequent Hussite Wars collapsed amid internal
quarrels and did not result in religious or national independence for the Czechs, but both the Catholic Church and
the German element within the country were weakened. Martin Luther, a German monk, started the German Reformation
by posting his 95 theses on the door of the castle church in Wittenberg on October 31, 1517. The immediate provocation spurring this
act was Pope Leo X’s renewal of the indulgence for the building of the new St. Peter's Basilica in 1514. Luther was
challenged to recant his heresy at the Diet of Worms in 1521. When he refused, he was placed under the ban of the
Empire by Charles V. Receiving the protection of Frederick the Wise, he was then able to translate the Bible into
German. In the late 13th and early 14th centuries, a process took place – primarily in Italy but partly also in the
Empire – that historians have termed a 'commercial revolution'. Among the innovations of the period were new forms
of partnership and the issuing of insurance, both of which contributed to reducing the risk of commercial ventures;
the bill of exchange and other forms of credit that circumvented the canonical laws against usury that bound Christians, and
eliminated the dangers of carrying bullion; and new forms of accounting, in particular double-entry bookkeeping,
which allowed for better oversight and accuracy. With the financial expansion, trading rights became more jealously
guarded by the commercial elite. Towns saw the growing power of guilds, while on a national level special companies
would be granted monopolies on particular trades, like the English wool Staple. The beneficiaries of these developments
would accumulate immense wealth. Families like the Fuggers in Germany, the Medicis in Italy, the de la Poles in England,
and individuals like Jacques Coeur in France would help finance the wars of kings, and achieve great political influence
in the process. Though there is no doubt that the demographic crisis of the 14th century caused a dramatic fall in
production and commerce in absolute terms, there has been a vigorous historical debate over whether the decline was
greater than the fall in population. While the older orthodoxy held that the artistic output of the Renaissance was
a result of greater opulence, more recent studies have suggested that there might have been a so-called 'depression
of the Renaissance'. In spite of convincing arguments for the case, the statistical evidence is simply too incomplete
for a definite conclusion to be made. The predominant school of thought in the 13th century was the Thomistic reconciliation
of the teachings of Aristotle with Christian theology. The Condemnation of 1277, enacted at the University of Paris,
placed restrictions on ideas that could be interpreted as heretical, restrictions that had implications for Aristotelian
thought. An alternative was presented by William of Ockham, who insisted that the world of reason and the world of
faith had to be kept apart. Ockham introduced the principle of parsimony – or Occam's razor – whereby a simple theory
is preferred to a more complex one, and speculation on unobservable phenomena is avoided. This new approach liberated
scientific speculation from the dogmatic restraints of Aristotelian science, and paved the way for new approaches.
Particularly within the field of theories of motion great advances were made, when such scholars as Jean Buridan,
Nicole Oresme and the Oxford Calculators challenged the work of Aristotle. Buridan developed the theory of impetus
as the cause of the motion of projectiles, which was an important step towards the modern concept of inertia. The
works of these scholars anticipated the heliocentric worldview of Nicolaus Copernicus. Certain technological inventions
of the period – whether of Arab or Chinese origin, or unique European innovations – were to have great influence
on political and social developments, in particular gunpowder, the printing press and the compass. The introduction
of gunpowder to the field of battle affected not only military organisation, but helped advance the nation state.
Gutenberg's movable type printing press made possible not only the Reformation, but also a dissemination of knowledge
that would lead to a gradually more egalitarian society. The compass, along with other innovations such as the cross-staff,
the mariner's astrolabe, and advances in shipbuilding, enabled the navigation of the World Oceans, and the early
phases of colonialism. Other inventions had a greater impact on everyday life, such as eyeglasses and the weight-driven
clock. The period saw several important technical innovations, like the principle of linear perspective found in
the work of Masaccio, and later described by Brunelleschi. Greater realism was also achieved through the scientific
study of anatomy, championed by artists like Donatello. This can be seen particularly well in his sculptures, inspired
by the study of classical models. As the centre of the movement shifted to Rome, the period culminated in the High
Renaissance masters da Vinci, Michelangelo and Raphael. The ideas of the Italian Renaissance were slow to cross the
Alps into northern Europe, but important artistic innovations were also made in the Low Countries. Though not – as
previously believed – the inventor of oil painting, Jan van Eyck was a champion of the new medium, and used it to
create works of great realism and minute detail. The two cultures influenced each other and learned from each other,
but painting in the Netherlands remained more focused on textures and surfaces than the idealized compositions of
Italy. Dante Alighieri's Divine Comedy, written in the early 14th century, merged a medieval world view with classical
ideals. Another promoter of the Italian language was Boccaccio with his Decameron. The application of the vernacular
did not entail a rejection of Latin, and both Dante and Boccaccio wrote prolifically in Latin as well as Italian,
as would Petrarch later (whose Canzoniere also promoted the vernacular and whose contents are considered the first
modern lyric poems). Together the three poets established the Tuscan dialect as the norm for the modern Italian language.
Music was an important part of both secular and spiritual culture, and in the universities it made up part of the
quadrivium of the liberal arts. From the early 13th century, the dominant sacred musical form had been the motet;
a composition with text in several parts. From the 1330s onwards, the polyphonic style emerged, a more
complex fusion of independent voices. Polyphony had been common in the secular music of the Provençal troubadours.
Many of these had fallen victim to the 13th-century Albigensian Crusade, but their influence reached the papal court
at Avignon. The main representatives of the new style, often referred to as ars nova as opposed to the ars antiqua,
were the composers Philippe de Vitry and Guillaume de Machaut. In Italy, where the Provençal troubadours had also
found refuge, the corresponding period goes under the name of trecento, and the leading composers were Giovanni da
Cascia, Jacopo da Bologna and Francesco Landini. A prominent reformer of Orthodox Church music in the first half
of the 14th century was John Kukuzelis; he also introduced a system of notation widely used in the Balkans in the following
centuries. Morality plays emerged as a distinct dramatic form around 1400 and flourished until 1550. The most interesting
morality play is The Castle of Perseverance which depicts mankind's progress from birth to death. However, the most
famous morality play and perhaps best known medieval drama is Everyman. Everyman receives Death's summons, struggles
to escape and finally resigns himself to necessity. Along the way, he is deserted by Kindred, Goods, and Fellowship;
only Good Deeds goes with him to the grave. At the end of the Late Middle Ages, professional actors began to appear
in England and Europe. Richard III and Henry VII both maintained small companies of professional actors. Their plays
were performed in the Great Hall of a nobleman's residence, often with a raised platform at one end for the audience
and a "screen" at the other for the actors. Also important were Mummers' plays, performed during the Christmas season,
and court masques. These masques were especially popular during the reign of Henry VIII who had a House of Revels
built and an Office of Revels established in 1545. The end of medieval drama came about due to a number of factors,
including the weakening power of the Catholic Church, the Protestant Reformation and the banning of religious plays
in many countries. Elizabeth I forbade all religious plays in 1558, and the great cycle plays had been silenced by
the 1580s. Similarly, religious plays were banned in the Netherlands in 1539, the Papal States in 1547 and in Paris
in 1548. The abandonment of these plays destroyed the international theatre that had hitherto existed and forced each
country to develop its own form of drama. It also allowed dramatists to turn to secular subjects and the reviving
interest in Greek and Roman theatre provided them with the perfect opportunity. After the end of the Late Middle
Ages, the Renaissance would spread unevenly over continental Europe from the southern European region. The
intellectual transformation of the Renaissance is viewed as a bridge between the Middle Ages and the Modern era.
Europeans would later begin an era of world discovery. At the end of the 15th century the Ottoman Empire advanced
all over Southeastern Europe, eventually conquering the Byzantine Empire and extending control over the Balkan states.
Hungary was the last bastion of the Latin Christian world in the East, and fought to keep its rule over a period
of two centuries. After the tragic death of the young king Vladislaus I of Hungary during the Battle of Varna in
1444 against the Ottomans, the Kingdom was placed in the hands of count John Hunyadi, who became Hungary's regent-governor
(1446–1453). Hunyadi was considered one of the most significant military figures of the 15th century: Pope Pius II awarded
him the title of Athleta Christi or Champion of Christ for being the only hope of resisting the Ottomans from advancing
to Central and Western Europe. Hunyadi succeeded during the Siege of Belgrade in 1456 against the Ottomans, the biggest
victory against that empire in decades. The battle took on the character of a crusade, as the peasants were
motivated by the Franciscan friar Saint John of Capistrano, who had come from Italy preaching Holy War; their
fervour was one of the main factors in achieving the victory. However, Hunyadi's premature death left Hungary
defenseless and in chaos. In an extremely unusual event for the Middle Ages,
Hunyadi's son, Matthias, was elected as King of Hungary by the nobility. For the first time, a member of an aristocratic
family (and not from a royal family) was crowned. King Matthias Corvinus of Hungary (1458–1490) was one of the most
prominent figures of the period, directing campaigns to the West, conquering Bohemia in answer to the Pope's call
for help against the Hussite Protestants. Also, in resolving political hostilities with the German emperor Frederick
III of Habsburg, he invaded his western domains. Matthias organized the Black Army of mercenary soldiers, considered
the largest army of its time. Using this powerful tool, the Hungarian king led wars against the Turkish armies
and stopped the Ottomans during his reign. After the death of Matthias, and with the end of the Black Army, the Ottoman
Empire grew in strength and Central Europe was defenseless. At the Battle of Mohács, the forces of the Ottoman Empire
annihilated the Hungarian army and Louis II of Hungary drowned in the Csele Creek while trying to escape. The leader
of the Hungarian army, Pál Tomori, also died in the battle. This is considered to be one of the final battles of
Medieval times. The changes brought about by these developments have led many scholars to view this period as the
end of the Middle Ages and beginning of modern history and early modern Europe. However, the division is somewhat
artificial, since ancient learning was never entirely absent from European society. As a result, there was a developmental
continuity between the ancient age (via classical antiquity) and the modern age. Some historians, particularly in
Italy, prefer not to speak of the late Middle Ages at all, but rather see the high period of the Middle Ages transitioning
to the Renaissance and the modern era.
Ann Arbor was founded in 1824, named for the wives of the village's founders and the stands of Bur Oak trees. The University
of Michigan moved from Detroit to Ann Arbor in 1837, and the city grew at a rapid rate in the early to mid-20th century.
During the 1960s and 1970s, the city gained a reputation as a center for left-wing politics. Ann Arbor became a focal
point for political activism and served as a hub for the civil-rights movement and anti-Vietnam War movement, as
well as various student movements. Ann Arbor was founded in 1824 by land speculators John Allen and Elisha Walker
Rumsey. On 25 May 1824, the town plat was registered with Wayne County as "Annarbour;" this represents the earliest
known use of the town's name. Allen and Rumsey decided to name it for their wives, both named Ann, and for the stands
of Bur Oak in the 640 acres (260 ha) of land they purchased for $800 from the federal government at $1.25 per acre.
The local Ojibwa named the settlement kaw-goosh-kaw-nick, after the sound of Allen's sawmill. Since the university's
establishment in the city in 1837, the histories of the University of Michigan and Ann Arbor have been closely linked.
The town became a regional transportation hub in 1839 with the arrival of the Michigan Central Railroad, and a north—south
railway connecting Ann Arbor to Toledo and other markets to the south was established in 1878. Throughout the 1840s
and the 1850s settlers continued to come to Ann Arbor. While the earlier settlers were primarily of British ancestry,
the newer settlers also consisted of Germans, Irish, and African-Americans. In 1851, Ann Arbor was chartered as a
city, though the city showed a drop in population during the Depression of 1873. It was not until the early 1880s
that Ann Arbor again saw robust growth, with new immigrants coming from Greece, Italy, Russia, and Poland. Ann Arbor
saw increased growth in manufacturing, particularly in milling. Ann Arbor's Jewish community also grew after the
turn of the 20th century, and its first and oldest synagogue, Beth Israel Congregation, was established in 1916.
During the 1960s and 1970s, the city gained a reputation as an important center for liberal politics. Ann Arbor also
became a locus for left-wing activism and served as a hub for the civil-rights movement and anti-Vietnam War movement,
as well as the student movement. The first major meetings of the national left-wing campus group Students for a Democratic
Society took place in Ann Arbor in 1960; in 1965, the city was home to the first U.S. teach-in against the Vietnam
War. During the ensuing 15 years, many countercultural and New Left enterprises sprang up and developed large constituencies
within the city. These influences washed into municipal politics during the early and mid-1970s when three members
of the Human Rights Party (HRP) won city council seats on the strength of the student vote. During their time on
the council, HRP representatives fought for measures including pioneering antidiscrimination ordinances, measures
decriminalizing marijuana possession, and a rent-control ordinance; many of these remain in effect in modified form.
Alongside these liberal and left-wing efforts, a small group of conservative institutions were born in Ann Arbor.
These include Word of God (established in 1967), a charismatic inter-denominational movement; and the Thomas More
Law Center (established in 1999), a religious-conservative advocacy group. In the past several decades, Ann Arbor
has grappled with the effects of sharply rising land values, gentrification, and urban sprawl stretching into outlying
countryside. On 4 November 2003, voters approved a greenbelt plan under which the city government bought development
rights on agricultural parcels of land adjacent to Ann Arbor to preserve them from sprawling development. Since then,
a vociferous local debate has hinged on how and whether to accommodate and guide development within city limits.
Ann Arbor consistently ranks in the "top places to live" lists published by various mainstream media outlets every
year. In 2008, CNNMoney.com ranked it 27th out of 100 "America's best small cities", and in 2010,
Forbes listed Ann Arbor as one of the most livable cities in the United States. According to the United
States Census Bureau, the city has a total area of 28.70 square miles (74.33 km2), of which 27.83 square miles (72.08
km2) is land and 0.87 square miles (2.25 km2) is water, much of which is part of the Huron River. Ann Arbor
is about 35 miles (56 km) west of Detroit. Ann Arbor Charter Township adjoins the city's north and east sides. Ann
Arbor is situated on the Huron River in a productive agricultural and fruit-growing region. The landscape of Ann
Arbor consists of hills and valleys, with the terrain becoming steeper near the Huron River. The elevation ranges
from about 750 feet (230 m) along the Huron River to 1,015 feet (309 m) on the city's west side, near the intersection
of Maple Road and Pauline Blvd. Generally, the west-central and northwestern parts of the city and U-M's North Campus
are the highest parts of the city; the lowest parts are along the Huron River and in the southeast. Ann Arbor Municipal
Airport, which is south of the city at 42°13.38′N 83°44.74′W, has
an elevation of 839 feet (256 m). Ann Arbor's "Tree Town" nickname stems from the dense forestation of its parks
and residential areas. The city contains more than 50,000 trees along its streets and an equal number in parks. In
recent years, the emerald ash borer has destroyed many of the city's approximately 10,500 ash trees. The city contains
157 municipal parks ranging from small neighborhood green spots to large recreation areas. Several large city parks
and a university park border sections of the Huron River. Fuller Recreation Area, near the University Hospital complex,
contains sports fields, pedestrian and bike paths, and swimming pools. The Nichols Arboretum, owned by the University
of Michigan, is a 123-acre (50 ha) arboretum that contains hundreds of plant and tree species. It is on the city's
east side, near the university's Central Campus. Located across the Huron River just beyond the university's North
Campus is the university's Matthaei Botanical Gardens, which contains 300 acres (120 ha) of gardens and a large tropical conservatory.
The Kerrytown Shops, Main Street Business District, the State Street Business District, and the South University
Business District are commercial areas in downtown Ann Arbor. Three commercial areas south of downtown include the
areas near I-94 and Ann Arbor-Saline Road, Briarwood Mall, and the South Industrial area. Other commercial areas
include the Arborland/Washtenaw Avenue and Packard Road merchants on the east side, the Plymouth Road area in the
northeast, and the Westgate/West Stadium areas on the west side. Downtown contains a mix of 19th- and early-20th-century
structures and modern-style buildings, as well as a farmers' market in the Kerrytown district. The city's commercial
districts are composed mostly of two- to four-story structures, although downtown and the area near Briarwood Mall
contain a small number of high-rise buildings. Ann Arbor's residential neighborhoods contain architectural styles
ranging from classic 19th-century and early-20th-century designs to ranch-style houses. Among these homes are a number
of kit houses built in the early 20th century. Contemporary-style houses are farther from the downtown district.
Surrounding the University of Michigan campus are houses and apartment complexes occupied primarily by student renters.
Tower Plaza, a 26-story condominium building located between the University of Michigan campus and downtown, is the
tallest building in Ann Arbor. The 19th-century buildings and streetscape of the Old West Side neighborhood have
been preserved virtually intact; in 1972, the district was listed on the National Register of Historic Places, and
it is further protected by city ordinances and a nonprofit preservation group. Ann Arbor has a typically Midwestern
humid continental climate (Köppen Dfa), which is influenced by the Great Lakes. There are four distinct seasons:
winters are cold with moderate to heavy snowfall, while summers are very warm and humid, and spring and autumn are
short but mild. The area experiences lake effect weather, primarily in the form of increased cloudiness during late
fall and early winter. The monthly daily average temperature in July is 72.6 °F (22.6 °C), while the same figure
for January is 24.5 °F (−4.2 °C). Temperatures reach or exceed 90 °F (32 °C) on an average of 10 days per year, and drop to or below 0 °F (−18 °C) on an average of 4.6 nights. Precipitation tends to be heaviest during the summer months, but most frequent during
winter. Snowfall, which normally occurs from November to April but occasionally starts in October, averages 58 inches
(147 cm) per season. The lowest recorded temperature was −23 °F (−31 °C) on 11 February 1885 and the highest recorded
temperature was 105 °F (41 °C) on 24 July 1934. As of the 2010 U.S. Census, there were 113,394 people, 45,634 households,
and 21,704 families residing in the city. The population density was 4,270.33 people per square mile (1,648.8/km²).
There were 49,982 housing units at an average density of 1,748.0 per square mile (675.0/km²), making it less densely
populated than inner-ring Detroit suburbs like Oak Park and Ferndale (and than Detroit proper), but more densely
populated than outer-ring suburbs like Livonia or Troy. The racial makeup of the city was 73.0% White (70.4% non-Hispanic
White), 7.7% Black or African American, 0.3% Native American, 14.4% Asian, 0.0% Pacific Islander, 1.0% from other
races, and 3.6% from two or more races. Hispanic or Latino residents of any race were 4.1% of the population. In
2000, out of 45,693 households, 23.0% had children under the age of 18 living with them, 37.8% were married couples
living together, 7.5% had a female householder with no husband present, and 52.5% were nonfamilies. 35.5% of households
were made up of individuals and 6.6% had someone living alone who was 65 years of age or older. The average household
size was 2.22 and the average family size was 2.90. The age distribution was 16.8% under 18, 26.8% from 18 to 24,
31.2% from 25 to 44, 17.3% from 45 to 64, and 7.9% were 65 or older. The median age was 28 years. For every 100 females
there were 97.7 males; while for every 100 females age 18 and over, there were 96.4 males. The University of Michigan
shapes Ann Arbor's economy significantly. It employs about 30,000 workers, including about 12,000 in the medical
center. Other employers are drawn to the area by the university's research and development money, and by its graduates.
High tech, health services and biotechnology are other major components of the city's economy; numerous medical offices,
laboratories, and associated companies are located in the city. Automobile manufacturers, such as General Motors
and Visteon, also employ residents. High tech companies have located in the area since the 1930s, when International
Radio Corporation introduced the first mass-produced AC/DC radio (the Kadette, in 1931) as well as the first pocket
radio (the Kadette Jr., in 1933). The Argus camera company, originally a subsidiary of International Radio, manufactured
cameras in Ann Arbor from 1936 to the 1960s. Current firms include Arbor Networks (provider of Internet traffic engineering
and security systems), Arbortext (provider of XML-based publishing software), JSTOR (the digital scholarly journal
archive), MediaSpan (provider of software and online services for the media industries), Truven Health Analytics,
and ProQuest, which includes UMI. Ann Arbor Terminals manufactured a video-display terminal called the Ann Arbor
Ambassador during the 1980s. Barracuda Networks, which provides networking, security, and storage products based
on network appliances and cloud services, opened an engineering office in Ann Arbor in 2008 on Depot St., and later announced plans to move downtown into the building previously used as the Borders headquarters. Websites and
online media companies in or near the city include All Media Guide, the Weather Underground, and Zattoo. Ann Arbor
is the home to Internet2 and the Merit Network, a not-for-profit research and education computer network. Both are
located in the South State Commons 2 building on South State Street, which once housed the Michigan Information Technology
Center Foundation. The city is also home to the headquarters of Google's AdWords program—the company's primary revenue
stream. The recent surge in companies operating in Ann Arbor has led to a decrease in its office and flex space vacancy
rates. As of 31 December 2012, the total market vacancy rate for office and flex space was 11.80%, a decrease of 1.40 percentage points from a year earlier, and the lowest overall vacancy level since 2003. The office vacancy rate decreased
to 10.65% in 2012 from 12.08% in 2011, while the flex vacancy rate decreased slightly more, with a drop from 16.50%
to 15.02%. Pfizer, once the city's second largest employer, operated a large pharmaceutical research facility on
the northeast side of Ann Arbor. On 22 January 2007, Pfizer announced it would close operations in Ann Arbor by the
end of 2008. The facility was previously operated by Warner-Lambert and, before that, Parke-Davis. In December 2008,
the University of Michigan Board of Regents approved the purchase of the facilities, and the university anticipates
hiring 2,000 researchers and staff during the next 10 years. The city is the home of other research and engineering
centers, including those of Lotus Engineering, General Dynamics and the National Oceanic and Atmospheric Administration
(NOAA). Other research centers sited in the city are the United States Environmental Protection Agency's National
Vehicle and Fuel Emissions Laboratory and the Toyota Technical Center. The city is also home to National Sanitation
Foundation International (NSF International), the nonprofit non-governmental organization that develops generally
accepted standards for a variety of public health-related industries and subject areas. Borders Books was opened in Ann Arbor by brothers Tom and Louis Borders in 1971 with a stock of used books. The Borders chain was
based in the city, as was its flagship store until it closed in September 2011. Domino's Pizza's headquarters is
near Ann Arbor on Domino's Farms, a 271-acre (110 ha) Frank Lloyd Wright-inspired complex just northeast of the city.
Another Ann Arbor-based company is Zingerman's Delicatessen, which serves sandwiches and has developed businesses
under a variety of brand names. Zingerman's has grown into a family of companies which offers a variety of products
(bake shop, mail order, creamery, coffee) and services (business education). Flint Ink Corp., another Ann Arbor-based
company, was the world's largest privately held ink manufacturer until it was acquired by Stuttgart-based XSYS Print
Solutions in October 2005. Avfuel, a global supplier of aviation fuels and services, is also headquartered in Ann
Arbor. Aastrom Biosciences, a publicly traded company that develops stem cell treatments for cardiovascular diseases,
is headquartered in Ann Arbor. Several performing arts groups and facilities are on the University of Michigan's
campus, as are museums dedicated to art, archaeology, and natural history and sciences. Founded in 1879, the University
Musical Society is an independent performing arts organization that presents over 60 events each year, bringing international
artists in music, dance, and theater. Since 2001 Shakespeare in the Arb has presented one play by Shakespeare each
June, in a large park near downtown. Regional and local performing arts groups not associated with the university
include the Ann Arbor Civic Theatre, the Arbor Opera Theater, the Ann Arbor Symphony Orchestra, the Ann Arbor Ballet
Theater, the Ann Arbor Civic Ballet (established in 1954 as Michigan's first chartered ballet company), The Ark,
and Performance Network Theatre. A unique form of artistic expression in Ann Arbor is its fairy doors. These
small portals are examples of installation art and can be found throughout the downtown area. The Ann Arbor Hands-On
Museum is located in a renovated and expanded historic downtown fire station. Multiple art galleries exist in the
city, notably in the downtown area and around the University of Michigan campus. Aside from a large restaurant scene
in the Main Street, South State Street, and South University Avenue areas, Ann Arbor ranks first among U.S. cities
in the number of booksellers and books sold per capita. The Ann Arbor District Library maintains four branch outlets
in addition to its main downtown building. The city is also home to the Gerald R. Ford Presidential Library. Several
annual events—many of them centered on performing and visual arts—draw visitors to Ann Arbor. One such event is the
Ann Arbor Art Fairs, a set of four concurrent juried fairs held on downtown streets. Scheduled from Wednesday through
Saturday of the third week of July, the fairs draw upward of half a million visitors. Another is the Ann Arbor Film
Festival, held during the third week of March, which receives more than 2,500 submissions annually from more than
40 countries and serves as one of a handful of Academy Award–qualifying festivals in the United States. Ann Arbor
has a long history of openness to marijuana, given the city's decriminalization of cannabis, the large number of medical marijuana dispensaries in the city (one dispensary, called People's Co-op, was directly across the street from Michigan Stadium until zoning forced it to move one mile to the west), the large number of pro-marijuana residents, and the annual Hash Bash, an event held on the first Saturday of April. Until at least the passage of Michigan's medical marijuana law, the event had arguably strayed from its original intent; for years, a number of attendees faced serious legal consequences for marijuana use on University of Michigan property, which is not covered by the city's more lenient ticketing program. Ann Arbor is a major scene
of college sports, most notably at the University of Michigan, a member of the Big Ten Conference. Several well-known
college sports facilities exist in the city, including Michigan Stadium, the largest American football stadium in
the world. The stadium was completed in 1927 and cost more than $950,000 to build. After multiple renovations, it has a seating capacity of 109,901 and is colloquially known as "The Big House". Crisler Center and Yost
Ice Arena play host to the school's basketball (both men's and women's) and ice hockey teams, respectively. Concordia
University, a member of the NAIA, also fields sports teams. A person from Ann Arbor is called an "Ann Arborite",
and many long-time residents call themselves "townies". The city itself is often called "A²" ("A-squared"), "A2" ("A two"), "AA", "The Deuce" (mainly by Chicagoans), or "Tree Town". With tongue-in-cheek reference to the city's
liberal political leanings, some occasionally refer to Ann Arbor as "The People's Republic of Ann Arbor" or "25 square
miles surrounded by reality", the latter phrase being adapted from Wisconsin Governor Lee Dreyfus's description of
Madison, Wisconsin. In A Prairie Home Companion broadcast from Ann Arbor, Garrison Keillor described Ann Arbor as
"a city where people discuss socialism, but only in the fanciest restaurants." Ann Arbor sometimes appears on citation
indexes as an author rather than a location, often credited with the academic degree "MI", a misreading of the abbreviation for Michigan. Ann Arbor has become increasingly gentrified in recent years. Ann Arbor has a council-manager form
of government. The City Council has 11 voting members: the mayor and 10 city council members. The mayor and city
council members serve two-year terms: the mayor is elected every even-numbered year, while half of the city council
members are up for election annually (five in even-numbered and five in odd-numbered years). Two council members
are elected from each of the city's five wards. The mayor is elected citywide. The mayor is the presiding officer
of the City Council and has the power to appoint all Council committee members as well as board and commission members,
with the approval of the City Council. The current mayor of Ann Arbor is Christopher Taylor, a Democrat who was elected
as mayor in 2014. Day-to-day city operations are managed by a city administrator chosen by the city council. Ann
Arbor is part of Michigan's 12th congressional district, represented in Congress by Representative Debbie Dingell,
a Democrat. On the state level, the city is part of the 18th district in the Michigan Senate, represented by Democrat
Rebekah Warren. In the Michigan House of Representatives, representation is split between the 55th district (northern
Ann Arbor, part of Ann Arbor Township, and other surrounding areas, represented by Democrat Adam Zemke), the 53rd
district (most of downtown and the southern half of the city, represented by Democrat Jeff Irwin) and the 52nd district
(southwestern areas outside Ann Arbor proper and western Washtenaw County, represented by Democrat Gretchen Driskell).
Left-wing politics have been particularly strong in municipal government since the 1960s. Voters approved charter
amendments that have lessened the penalties for possession of marijuana (1974), and that aim to protect access to
abortion in the city should it ever become illegal in the State of Michigan (1990). In 1974, Kathy Kozachenko's victory
in an Ann Arbor city-council race made her the country's first openly homosexual candidate to win public office.
In 1975, Ann Arbor became the first U.S. city to use instant-runoff voting for a mayoral race. Adopted through a
ballot initiative sponsored by the local Human Rights Party, which feared a splintering of the liberal vote, the
process was repealed in 1976 after use in only one election. As of August 2009, Democrats held the mayorship and all council seats. Other local colleges and universities include Concordia University Ann Arbor, a Lutheran liberal-arts institution;
a campus of the University of Phoenix; and Cleary University, a private business school. Washtenaw Community College
is located in neighboring Ann Arbor Township. In 2000, the Ave Maria School of Law, a Roman Catholic law school established
by Domino's Pizza founder Tom Monaghan, opened in northeastern Ann Arbor, but the school moved to Ave Maria, Florida
in 2009, and the Thomas M. Cooley Law School acquired the former Ave Maria buildings for use as a branch campus.
Public schools are part of the Ann Arbor Public Schools (AAPS) district. AAPS has one of the country's leading music
programs. In September 2008, 16,539 students were enrolled in the Ann Arbor Public Schools. There were 21 elementary schools, five middle schools (Forsythe, Slauson, Tappan, Scarlett, and Clague), three traditional high schools (Pioneer,
Huron, and Skyline), and three alternative high schools (Community High, Stone School, and Roberto Clemente) in the
district. The district also operates a K-8 open school program, Ann Arbor Open School, out of the former Mack School.
This program is open to all families who live within the district. Ann Arbor Public Schools also operates a preschool
and family center, with programs for at-risk infants and children before kindergarten. The district has a preschool
center with both free and tuition-based programs for preschoolers in the district. The Ann Arbor News, owned by the
Michigan-based Booth Newspapers chain, is the major daily newspaper serving Ann Arbor and the rest of Washtenaw County.
The newspaper ended its 174-year print run in 2009, due to economic difficulties. It was replaced by AnnArbor.com,
but returned to a limited print publication under its former name in 2013. Another Ann Arbor-based publication that
has ceased production is the Ann Arbor Paper, a free monthly. Ann Arbor has been said to be the first significant
city to lose its only daily paper. The Ann Arbor Chronicle, an online newspaper, covered local news, including meetings
of the library board, county commission, and DDA until September 3, 2014. Current publications in the city include
the Ann Arbor Journal (A2 Journal), a weekly community newspaper; the Ann Arbor Observer, a free monthly local magazine;
the Ann Arbor Independent, a locally owned, independent weekly; and Current, a free entertainment-focused alt-weekly.
The Ann Arbor Business Review covers local business in the area. Car and Driver magazine and Automobile Magazine
are also based in Ann Arbor. The University of Michigan is served by many student publications, including the independent
Michigan Daily student newspaper, which reports on local, state, and regional issues in addition to campus news.
Four major AM radio stations based in or near Ann Arbor are WAAM 1600, a conservative news and talk station; WLBY
1290, a business news and talk station; WDEO 990, Catholic radio; and WTKA 1050, which is primarily a sports station.
The city's FM stations include NPR affiliate WUOM 91.7; country station WWWW 102.9; and adult-alternative station
WQKL 107.1. Freeform station WCBN-FM 88.3 is a local community radio/college radio station operated by the students
of the University of Michigan featuring noncommercial, eclectic music and public-affairs programming. The city is
also served by public and commercial radio broadcasters in Ypsilanti, the Lansing/Jackson area, Detroit, Windsor,
and Toledo. WPXD channel 31, an affiliate of the ION Television network, is licensed to the city. WHTV channel 18,
a MyNetworkTV-affiliated station for the Lansing market, broadcasts from a transmitter in Lyndon Township, west of
Ann Arbor. Community Television Network (CTN) is a city-provided cable television channel with production facilities
open to city residents and nonprofit organizations. Detroit and Toledo-area radio and television stations also serve
Ann Arbor, and stations from Lansing and Windsor, Ontario, can be heard in parts of the area. The University of Michigan
Medical Center, the only teaching hospital in the city, was ranked the best hospital in Michigan by U.S. News & World Report as of 2015. The University of Michigan Health System (UMHS) includes University
Hospital, C.S. Mott Children's Hospital and Women's Hospital in its core complex. UMHS also operates out-patient
clinics and facilities throughout the city. The area's other major medical centers include a large facility operated
by the Department of Veterans Affairs in Ann Arbor, and Saint Joseph Mercy Hospital in nearby Superior Township.
The city provides sewage disposal and water supply services, with water coming from the Huron River and groundwater
sources. There are two water-treatment plants, one main and three outlying reservoirs, four pump stations, and two
water towers. These facilities serve the city, which is divided into five water districts. The city's water department
also operates four dams along the Huron River, two of which provide hydroelectric power. The city also offers waste
management services, with Recycle Ann Arbor handling recycling service. Other utilities are provided by private entities.
Electrical power and gas are provided by DTE Energy. AT&T Inc. is the primary wired telephone service provider for
the area. Cable TV service is primarily provided by Comcast. The streets in downtown Ann Arbor conform to a grid
pattern, though this pattern is less common in the surrounding areas. Major roads branch out from the downtown district
like spokes on a wheel to the highways surrounding the city. The city is belted by three freeways: I-94, which runs
along the southern portion of the city; U.S. Highway 23 (US 23), which primarily runs along the eastern edge of Ann
Arbor; and M-14, which runs along the northern edge of the city. Other nearby highways include US 12, M-17, and M-153.
Several of the major surface arteries lead to the I-94/M-14 interchange in the west, US 23 in the east, and the city's
southern areas. The city also has a system of bike routes and paths, which includes the nearly complete Washtenaw County
Border-to-Border Trail.
Gothic architecture is a style of architecture that flourished during the high and late medieval period. It evolved from
Romanesque architecture and was succeeded by Renaissance architecture. Originating in 12th-century France and lasting
into the 16th century, Gothic architecture was known during the period as Opus Francigenum ("French work") with the
term Gothic first appearing during the later part of the Renaissance. Its characteristics include the pointed arch,
the ribbed vault and the flying buttress. Gothic architecture is most familiar as the architecture of many of the
great cathedrals, abbeys and churches of Europe. It is also the architecture of many castles, palaces, town halls,
guild halls, universities and, to a less prominent extent, private dwellings. It is in the
great churches and cathedrals and in a number of civic buildings that the Gothic style was expressed most powerfully,
its characteristics lending themselves to appeals to the emotions, whether springing from faith or from civic pride.
A great number of ecclesiastical buildings remain from this period, of which even the smallest are often structures
of architectural distinction while many of the larger churches are considered priceless works of art and are listed
with UNESCO as World Heritage Sites. For this reason a study of Gothic architecture is largely a study of cathedrals
and churches. The term "Gothic architecture" originated as a pejorative description. Giorgio Vasari used the term
"barbarous German style" in his Lives of the Artists to describe what is now considered the Gothic style, and in
the introduction to the Lives he attributes various architectural features to "the Goths" whom he holds responsible
for destroying the ancient buildings after they conquered Rome, and erecting new ones in this style. At the time
in which Vasari was writing, Italy had experienced a century of building in the Classical architectural vocabulary
revived in the Renaissance and seen as evidence of a new Golden Age of learning and refinement. The greatest number
of surviving Gothic buildings are churches. These range from tiny chapels to large cathedrals, and although many
have been extended and altered in different styles, a large number remain either substantially intact or sympathetically
restored, demonstrating the form, character and decoration of Gothic architecture. The Gothic style is most particularly
associated with the great cathedrals of Northern France, the Low Countries, England and Spain, with other fine examples
occurring across Europe. At the end of the 12th century, Europe was divided into a multitude of city states and kingdoms.
The area encompassing modern Germany, southern Denmark, the Netherlands, Belgium, Luxembourg, Switzerland, Austria,
Slovakia, the Czech Republic and much of northern Italy (excluding Venice and the Papal States) was nominally part of the
Holy Roman Empire, but local rulers exercised considerable autonomy. France, Denmark, Poland, Hungary, Portugal,
Scotland, Castile, Aragon, Navarre, Sicily and Cyprus were independent kingdoms, as was the Angevin Empire, whose
Plantagenet kings ruled England and large domains in what was to become modern France. Norway came under the influence
of England, while the other Scandinavian countries and Poland were influenced by trading contacts with the Hanseatic
League. Angevin kings brought the Gothic tradition from France to Southern Italy, while Lusignan kings introduced
French Gothic architecture to Cyprus. Throughout Europe at this time there was a rapid growth in trade and an associated
growth in towns. Germany and the Lowlands had large flourishing towns that grew in comparative peace, in trade and
competition with each other, or united for mutual weal, as in the Hanseatic League. Civic building was of great importance
to these towns as a sign of wealth and pride. England and France remained largely feudal and produced grand domestic
architecture for their kings, dukes and bishops, rather than grand town halls for their burghers. The Catholic Church
prevailed across Europe at this time, influencing not only faith but also wealth and power. Bishops were appointed
by the feudal lords (kings, dukes and other landowners) and they often ruled as virtual princes over large estates.
The early Medieval periods had seen a rapid growth in monasticism, with several different orders being prevalent
and spreading their influence widely. Foremost were the Benedictines whose great abbey churches vastly outnumbered
any others in France and England. A part of their influence was that towns developed around them and they became
centers of culture, learning and commerce. The Cluniac and Cistercian Orders were prevalent in France, the great
monastery at Cluny having established a formula for a well planned monastic site which was then to influence all
subsequent monastic building for many centuries. From the 10th to the 13th century, Romanesque architecture had become
a pan-European style and manner of construction, affecting buildings in countries as far apart as Ireland, Croatia,
Sweden and Sicily. The same wide geographic area was then affected by the development of Gothic architecture, but
the acceptance of the Gothic style and methods of construction differed from place to place, as did the expressions
of Gothic taste. The proximity of some regions meant that modern country borders do not define divisions of style.
On the other hand, some regions such as England and Spain produced defining characteristics rarely seen elsewhere,
except where they have been carried by itinerant craftsmen, or the transfer of bishops. Regional differences that
are apparent in the great abbey churches and cathedrals of the Romanesque period often become even more apparent
in the Gothic. In northern Germany, the Netherlands, northern Poland, Denmark, and the Baltic countries, local building
stone was unavailable but there was a strong tradition of building in brick. The resultant style, Brick Gothic, is
called "Backsteingotik" in Germany and Scandinavia and is associated with the Hanseatic League. In Italy, stone was
used for fortifications, but brick was preferred for other buildings. Because of the extensive and varied deposits
of marble, many buildings were faced in marble, or were left with an undecorated façade so that this might be achieved
at a later date. By the 12th century, Romanesque architecture (termed Norman architecture in England because of its
association with the Norman invasion), was established throughout Europe and provided the basic architectural forms
and units that were to remain in evolution throughout the Medieval period. The important categories of building:
the cathedral church, the parish church, the monastery, the castle, the palace, the great hall, the gatehouse, the
civic building, had been established in the Romanesque period. It was principally the widespread introduction of
a single feature, the pointed arch, which was to bring about the change that separates Gothic from Romanesque. The
technological change permitted a stylistic change which broke the tradition of massive masonry and solid walls penetrated
by small openings, replacing it with a style where light appears to triumph over substance. With its use came the
development of many other architectural devices, previously put to the test in scattered buildings and then called
into service to meet the structural, aesthetic and ideological needs of the new style. These include the flying buttresses,
pinnacles and traceried windows which typify Gothic ecclesiastical architecture. But while the pointed arch is so strongly
associated with the Gothic style, it was first used in Western architecture in buildings that were in other ways
clearly Romanesque, notably Durham Cathedral in the north of England, Monreale Cathedral and the Cathedral of Cefalù in Sicily, and Autun Cathedral in France. The pointed arch, one of the defining attributes of Gothic, was earlier incorporated
into Islamic architecture following the Islamic conquests of Roman Syria and the Sassanid Empire in the seventh century.
The pointed arch and its precursors had been employed in Late Roman and Sassanian architecture; within the Roman
context, evidenced in early church building in Syria and occasional secular structures, like the Roman Karamagara
Bridge; in Sassanid architecture, in the parabolic and pointed arches employed in palace and sacred construction.
Increasing military and cultural contacts with the Muslim world, including the Norman conquest of Islamic Sicily
in 1090, the Crusades, beginning 1096, and the Islamic presence in Spain, may have influenced Medieval Europe's adoption
of the pointed arch, although this hypothesis remains controversial. Certainly, in those parts of the Western Mediterranean
subject to Islamic control or influence, rich regional variants arose, fusing Romanesque and later Gothic traditions
with Islamic decorative forms, as seen, for example, in Monreale and Cefalù Cathedrals, the Alcázar of Seville, and
Teruel Cathedral. The characteristic forms that were to define Gothic architecture grew out of Romanesque architecture
and developed at several different geographic locations, as the result of different influences and structural requirements.
While barrel vaults and groin vaults are typical of Romanesque architecture, ribbed vaults were used in the naves
of two Romanesque churches in Caen, Abbey of Saint-Étienne and Abbaye aux Dames in 1120. Another early example is
the nave and apse area of the Cathedral of Cefalù in 1131. The ribbed vault over the north transept at Durham Cathedral
in England, built from 1128 to 1133, is probably earlier still and was the first time pointed arches were used in
a high vault. The Basilica of Saint-Denis is generally cited as the first truly Gothic building; however, the distinction
is best reserved for the choir, of which the ambulatory remains intact. Noyon Cathedral, also in France, saw the
earliest completion of a rebuilding of an entire cathedral in the new style from 1150 to 1231. While using all those
features that came to be known as Gothic, including pointed arches, flying buttresses and ribbed vaulting, the builders
continued to employ many of the features and much of the character of Romanesque architecture, including round-headed
arches throughout the building, varying the shape to pointed where it was functionally practical to do so. At the Abbey
of Saint-Denis, Noyon Cathedral, Notre Dame de Paris and at the eastern end of Canterbury Cathedral in England, simple
cylindrical columns predominate over the Gothic forms of clustered columns and shafted piers. Wells Cathedral in
England, commenced at the eastern end in 1175, was the first building in which the designer broke free from Romanesque
forms. The architect entirely dispensed with the round arch in favour of the pointed arch and with cylindrical columns
in favour of piers composed of clusters of shafts which lead into the mouldings of the arches. The transepts and
nave were continued by Adam Locke in the same style and completed in about 1230. The character of the building is
entirely Gothic. Wells Cathedral is thus considered the first truly Gothic cathedral. Suger, friend and confidant
of the French Kings, Louis VI and Louis VII, decided in about 1137, to rebuild the great Church of Saint-Denis, attached
to an abbey which was also a royal residence. He began with the West Front, reconstructing the original Carolingian
façade with its single door. He designed the façade of Saint-Denis to be an echo of the Roman Arch of Constantine
with its three-part division and three large portals to ease the problem of congestion. The rose window above the West
portal is the earliest-known example in France. The façade combines both round arches and pointed arches of the Gothic style.
At the completion of the west front in 1140, Abbot Suger moved on to the reconstruction of the eastern end, leaving
the Carolingian nave in use. He designed a choir that would be suffused with light. To achieve his aims, his masons
drew on the several new features which evolved or had been introduced to Romanesque architecture, the pointed arch,
the ribbed vault, the ambulatory with radiating chapels, the clustered columns supporting ribs springing in different
directions and the flying buttresses which enabled the insertion of large clerestory windows. While many secular
buildings exist from the Late Middle Ages, it is in the buildings of cathedrals and great churches that Gothic architecture
displays its pertinent structures and characteristics to the fullest advantage. A Gothic cathedral or abbey was,
prior to the 20th century, generally the landmark building in its town, rising high above all the domestic structures
and often surmounted by one or more towers and pinnacles and perhaps tall spires. These cathedrals were the skyscrapers
of that day and would have been the largest buildings by far that Europeans would ever have seen. It is in the architecture
of these Gothic churches that a unique combination of existing technologies established the emergence of a new building
style. Those technologies were the ogival or pointed arch, the ribbed vault, and the buttress. The eastern arm shows
considerable diversity. In England it is generally long and may have two distinct sections, both choir and presbytery.
It is often square ended or has a projecting Lady Chapel, dedicated to the Virgin Mary. In France the eastern end
is often polygonal and surrounded by a walkway called an ambulatory and sometimes a ring of chapels called a "chevet".
While German churches are often similar to those of France, in Italy, the eastern projection beyond the transept
is usually just a shallow apsidal chapel containing the sanctuary, as at Florence Cathedral. Contrary to the diffusionist
theory, it appears that there was simultaneously a structural evolution towards the pointed arch, for the purpose
of vaulting spaces of irregular plan, or to bring transverse vaults to the same height as diagonal vaults. This latter
occurs at Durham Cathedral in the nave aisles in 1093. Pointed arches also occur extensively in Romanesque decorative
blind arcading, where semi-circular arches overlap each other in a simple decorative pattern, and the points are
accidental to the design. The Gothic vault, unlike the semi-circular vault of Roman and Romanesque buildings, can
be used to roof rectangular and irregularly shaped plans such as trapezoids. The other structural advantage is that
the pointed arch channels the weight onto the bearing piers or columns at a steep angle. This enabled architects
to raise vaults much higher than was possible in Romanesque architecture. While, structurally, use of the pointed
arch gave a greater flexibility to architectural form, it also gave Gothic architecture a very different and more
vertical visual character than Romanesque. Externally, towers and spires are characteristic of Gothic churches both
great and small, the number and positioning being one of the greatest variables in Gothic architecture. In Italy,
the tower, if present, is almost always detached from the building, as at Florence Cathedral, and is often from an
earlier structure. In France and Spain, two towers on the front are the norm. In England, Germany and Scandinavia
this is often the arrangement, but an English cathedral may also be surmounted by an enormous tower at the crossing.
Smaller churches usually have just one tower, but this may also be the case at larger buildings, such as Salisbury
Cathedral or Ulm Minster, which has the tallest spire in the world, slightly exceeding that of Lincoln Cathedral,
the tallest spire actually completed during the medieval period, at 160 metres (520 ft). On the exterior, the
verticality is emphasised in a major way by the towers and spires and in a lesser way by strongly projecting vertical
buttresses, by narrow half-columns called attached shafts which often pass through several storeys of the building,
by long narrow windows, vertical mouldings around doors and figurative sculpture which emphasises the vertical and
is often attenuated. The roofline, gable ends, buttresses and other parts of the building are often terminated by
small pinnacles, Milan Cathedral being an extreme example in the use of this form of decoration. On the interior
of the building attached shafts often sweep unbroken from floor to ceiling and meet the ribs of the vault, like a
tall tree spreading into branches. The verticals are generally repeated in the treatment of the windows and wall
surfaces. In many Gothic churches, particularly in France, and in the Perpendicular period of English Gothic architecture,
the treatment of vertical elements in gallery and window tracery creates a strongly unifying feature that counteracts
the horizontal divisions of the interior structure. Expansive interior light has been a feature of Gothic cathedrals
since the first structure was opened. The metaphysics of light in the Middle Ages led to clerical belief in its divinity
and the importance of its display in holy settings. Much of this belief was based on the writings of Pseudo-Dionysius,
a sixth-century mystic whose book, The Celestial Hierarchy, was popular among monks in France. Pseudo-Dionysius held
that all light, even light reflected from metals or streamed through windows, was divine. To promote such faith,
the abbot in charge of the Saint-Denis church on the north edge of Paris, the Abbot Suger, encouraged architects
remodeling the building to make the interior as bright as possible. Above the main portal there is generally a large
window, like that at York Minster, or a group of windows such as those at Ripon Cathedral. In France there is generally
a rose window like that at Reims Cathedral. Rose windows are also often found in the façades of churches of Spain
and Italy, but are rarer elsewhere and are not found on the façades of any English Cathedrals. The gable is usually
richly decorated with arcading or sculpture or, in the case of Italy, may be decorated with the rest of the façade,
with polychrome marble and mosaic, as at Orvieto Cathedral. The distinctive characteristic of French cathedrals,
and those in Germany and Belgium that were strongly influenced by them, is their height and their impression of verticality.
Each French cathedral tends to be stylistically unified in appearance when compared with an English cathedral where
there is great diversity in almost every building. They are compact, with slight or no projection of the transepts
and subsidiary chapels. The west fronts are highly consistent, having three portals surmounted by a rose window,
and two large towers. Sometimes there are additional towers on the transept ends. The east end is polygonal with
ambulatory and sometimes a chevet of radiating chapels. In the south of France, many of the major churches are
without transepts and some are without aisles. The distinctive characteristic of English cathedrals is their extreme
length, and their internal emphasis upon the horizontal, which may be emphasised visually as much or more than the
vertical lines. Each English cathedral (with the exception of Salisbury) has an extraordinary degree of stylistic
diversity, when compared with most French, German and Italian cathedrals. It is not unusual for every part of the
building to have been built in a different century and in a different style, with no attempt at creating a stylistic
unity. Unlike French cathedrals, English cathedrals sprawl across their sites, with double transepts projecting strongly
and Lady Chapels tacked on at a later date, such as at Westminster Abbey. In the west front, the doors are not as
significant as in France, the usual congregational entrance being through a side porch. The West window is very large
and never a rose, rose windows being reserved for the transept gables. The west front may have two towers like a French cathedral,
or none. There is nearly always a tower at the crossing and it may be very large and surmounted by a spire. The distinctive
English east end is square, but it may take a completely different form. Both internally and externally, the stonework
is often richly decorated with carvings, particularly the capitals. Romanesque architecture in Germany, Poland, the
Czech Lands and Austria is characterised by its massive and modular nature. This is expressed in the Gothic architecture
of Central Europe in the huge size of the towers and spires, often projected, but not always completed. The west
front generally follows the French formula, but the towers are very much taller and, if complete, are surmounted
by enormous openwork spires that are a regional feature. Because of the size of the towers, the section of the façade
between them may appear narrow and compressed. The eastern end follows the French form. The distinctive character
of the interior of German Gothic cathedrals is their breadth and openness. This is the case even when, as at Cologne,
they have been modelled upon a French cathedral. German cathedrals, like the French, tend not to have strongly projecting
transepts. There are also many hall churches (Hallenkirchen) without clerestory windows. The distinctive characteristic
of Gothic cathedrals of the Iberian Peninsula is their spatial complexity, with many areas of different shapes leading
from each other. They are comparatively wide, and often have very tall arcades surmounted by low clerestories, giving
a similar spacious appearance to the Hallenkirchen of Germany, as at the Church of the Batalha Monastery in Portugal.
Many of the cathedrals are completely surrounded by chapels. Like English cathedrals, each is often stylistically
diverse. This expresses itself both in the addition of chapels and in the application of decorative details drawn
from different sources. Among the influences on both decoration and form are Islamic architecture and, towards the
end of the period, Renaissance details combined with the Gothic in a distinctive manner. The west front, as at León
Cathedral, typically resembles a French west front, but wider in proportion to height and often with greater diversity
of detail and a combination of intricate ornament with broad plain surfaces. At Burgos Cathedral there are spires
of German style. The roofline often has pierced parapets with comparatively few pinnacles. There are often towers
and domes of a great variety of shapes and structural invention rising above the roof. The distinctive characteristic
of Italian Gothic is the use of polychrome decoration, both externally as marble veneer on the brick façade and also
internally where the arches are often made of alternating black and white segments, and where the columns may be
painted red, the walls decorated with frescoes and the apse with mosaic. The plan is usually regular and symmetrical;
Italian cathedrals have few and widely spaced columns. The proportions are generally mathematically balanced,
based on the square and the concept of "armonìa", and except in Venice where they loved flamboyant arches, the arches
are almost always equilateral. Colours and moldings define the architectural units rather than blending them. Italian
cathedral façades are often polychrome and may include mosaics in the lunettes over the doors. The façades have projecting
open porches and ocular or wheel windows rather than roses, and do not usually have a tower. The crossing is usually
surmounted by a dome. There is often a free-standing tower and baptistry. The eastern end usually has an apse of
comparatively low projection. The windows are not as large as in northern Europe and, although stained glass windows
are often found, the favourite narrative medium for the interior is the fresco. The Palais des Papes in Avignon is
the best-preserved complete large royal palace, alongside the Royal Palace of Olite, built during the 13th and 14th centuries
for the kings of Navarre. The Malbork Castle built for the master of the Teutonic order is an example of Brick Gothic
architecture. Partial survivals of former royal residences include the Doge's Palace of Venice, the Palau de la Generalitat
in Barcelona, built in the 15th century for the kings of Aragon, or the famous Conciergerie, former palace of the
kings of France, in Paris. Secular Gothic architecture can also be found in a number of public buildings such as
town halls, universities, markets or hospitals. The Gdańsk, Wrocław and Stralsund town halls are remarkable examples
of northern Brick Gothic built in the late 14th century. The Belfry of Bruges and Brussels Town Hall, built during
the 15th century, are associated with the increasing wealth and power of the bourgeoisie in the late Middle Ages; by
the 15th century, the traders of the trade cities of Burgundy had acquired such wealth and influence that they could
afford to express their power by funding lavishly decorated buildings of vast proportions. Such expressions
of secular and economic power are also found in other late mediaeval commercial cities, including the Llotja de la
Seda of Valencia, Spain, a purpose-built silk exchange dating from the 15th century, in the partial remains of Westminster
Hall in the Houses of Parliament in London, or the Palazzo Pubblico in Siena, Italy, a 13th-century town hall built
to host the offices of the then prosperous republic of Siena. Other Italian cities such as Florence (Palazzo Vecchio),
Mantua or Venice also host remarkable examples of secular public architecture. By the late Middle Ages university
towns had grown in wealth and importance as well, and this was reflected in the buildings of some of Europe's ancient
universities. Particularly remarkable examples still standing today include the Collegio di Spagna in the University
of Bologna, built during the 14th and 15th centuries; the Collegium Carolinum of the University of Prague in Bohemia;
the Escuelas mayores of the University of Salamanca in Spain; the chapel of King's College, Cambridge; or the Collegium
Maius of the Jagiellonian University in Kraków, Poland. Other cities with a concentration of secular Gothic include
Bruges and Siena. Most surviving small secular buildings are relatively plain and straightforward; most windows are
flat-topped with mullions, with pointed arches and vaulted ceilings often only found at a few focal points. The country-houses
of the nobility were slow to abandon the appearance of being a castle, even in parts of Europe, like England, where
defence had ceased to be a real concern. The living and working parts of many monastic buildings survive, for example
at Mont Saint-Michel. In 1663 at the Archbishop of Canterbury's residence, Lambeth Palace, a Gothic hammerbeam roof
was built to replace that destroyed when the building was sacked during the English Civil War. Also in the late 17th
century, some discrete Gothic details appeared on new construction at Oxford University and Cambridge University,
notably on Tom Tower at Christ Church, Oxford, by Christopher Wren. It is not easy to decide whether these instances
were Gothic survival or early appearances of Gothic revival. In England, partly in response to a philosophy propounded
by the Oxford Movement and others associated with the emerging revival of 'high church' or Anglo-Catholic ideas during
the second quarter of the 19th century, neo-Gothic began to be promoted by influential establishment figures
as the preferred style for ecclesiastical, civic and institutional architecture. The appeal of this Gothic revival
(which after 1837, in Britain, is sometimes termed Victorian Gothic), gradually widened to encompass "low church"
as well as "high church" clients. This period of more universal appeal, spanning 1855–1885, is known in Britain as
High Victorian Gothic. The Houses of Parliament in London by Sir Charles Barry with interiors by a major exponent
of the early Gothic Revival, Augustus Welby Pugin, is an example of the Gothic revival style from its earlier period
in the second quarter of the 19th century. Examples from the High Victorian Gothic period include George Gilbert
Scott's design for the Albert Memorial in London, and William Butterfield's chapel at Keble College, Oxford. From
the second half of the 19th century onwards it became more common in Britain for neo-Gothic to be used in the design
of non-ecclesiastical and non-governmental building types. Gothic details even began to appear in working-class
housing schemes subsidised by philanthropy, though given the expense, less frequently than in the design of upper
and middle-class housing. In France, simultaneously, the towering figure of the Gothic Revival was Eugène Viollet-le-Duc,
who outdid historical Gothic constructions to create a Gothic as it ought to have been, notably at the fortified
city of Carcassonne in the south of France and in some richly fortified keeps for industrial magnates. Viollet-le-Duc
compiled and coordinated an Encyclopédie médiévale that was a rich repertory his contemporaries mined for architectural
details. He effected vigorous restoration of crumbling detail of French cathedrals, including the Abbey of Saint-Denis
and famously at Notre Dame de Paris, many of whose most "Gothic" gargoyles are Viollet-le-Duc's. He taught
a generation of reform-Gothic designers and showed how to apply Gothic style to modern structural materials, especially
cast iron.
The movement was pioneered by Georges Braque and Pablo Picasso, joined by Jean Metzinger, Albert Gleizes, Robert Delaunay,
Henri Le Fauconnier, Fernand Léger and Juan Gris. A primary influence that led to Cubism was the representation of
three-dimensional form in the late works of Paul Cézanne. A retrospective of Cézanne's paintings had been held at
the Salon d'Automne of 1904; current works were displayed at the 1905 and 1906 Salon d'Automne, followed by two commemorative
retrospectives after his death in 1907. In France, offshoots of Cubism developed, including Orphism, Abstract art
and later Purism. In other countries Futurism, Suprematism, Dada, Constructivism and De Stijl developed in response
to Cubism. Early Futurist paintings hold in common with Cubism the fusing of the past and the present, the representation
of different views of the subject pictured at the same time, also called multiple perspective, simultaneity or multiplicity,
while Constructivism was influenced by Picasso's technique of constructing sculpture from separate elements. Other
common threads between these disparate movements include the faceting or simplification of geometric forms, and the
association of mechanization and modern life. Cubism began between 1907 and 1911. Pablo Picasso's 1907 painting Les
Demoiselles d'Avignon has often been considered a proto-Cubist work. Georges Braque's 1908 Houses at L’Estaque (and
related works) prompted the critic Louis Vauxcelles to refer to bizarreries cubiques (cubic oddities). Gertrude Stein
referred to landscapes made by Picasso in 1909, such as Reservoir at Horta de Ebro, as the first Cubist paintings.
The first organized group exhibition by Cubists took place at the Salon des Indépendants in Paris during the spring
of 1911 in a room called 'Salle 41'; it included works by Jean Metzinger, Albert Gleizes, Fernand Léger, Robert Delaunay
and Henri Le Fauconnier, yet no works by Picasso or Braque were exhibited. Historians have divided the history of
Cubism into phases. In one scheme, the first phase of Cubism, known as Analytic Cubism, a phrase coined by Juan Gris
a posteriori, was both radical and influential as a short but highly significant art movement between 1910 and 1912
in France. A second phase, Synthetic Cubism, remained vital until around 1919, when the Surrealist movement gained
popularity. English art historian Douglas Cooper proposed another scheme, describing three phases of Cubism in his
book, The Cubist Epoch. According to Cooper there was "Early Cubism" (from 1906 to 1908), when the movement was initially
developed in the studios of Picasso and Braque; the second phase being called "High Cubism" (from 1909 to 1914),
during which time Juan Gris emerged as an important exponent (after 1911); and finally Cooper referred to "Late Cubism"
(from 1914 to 1921) as the last phase of Cubism as a radical avant-garde movement. Douglas Cooper's restrictive use
of these terms to distinguish the work of Braque, Picasso, Gris (from 1911) and Léger (to a lesser extent) implied
an intentional value judgement. The assertion that the Cubist depiction of space, mass, time, and volume supports
(rather than contradicts) the flatness of the canvas was made by Daniel-Henry Kahnweiler as early as 1920, but it
was subject to criticism in the 1950s and 1960s, especially by Clement Greenberg. Contemporary views of Cubism are
complex, formed to some extent in response to the "Salle 41" Cubists, whose methods were too distinct from those
of Picasso and Braque to be considered merely secondary to them. Alternative interpretations of Cubism have therefore
developed. Wider views of Cubism include artists who were later associated with the "Salle 41" artists, e.g., Francis
Picabia; the brothers Jacques Villon, Raymond Duchamp-Villon and Marcel Duchamp, who beginning in late 1911 formed
the core of the Section d'Or (or the Puteaux Group); the sculptors Alexander Archipenko, Joseph Csaky and Ossip Zadkine
as well as Jacques Lipchitz and Henri Laurens; and painters such as Louis Marcoussis, Roger de La Fresnaye, František
Kupka, Diego Rivera, Léopold Survage, Auguste Herbin, André Lhote, Gino Severini (after 1916), María Blanchard (after
1916) and Georges Valmier (after 1918). More fundamentally, Christopher Green argues that Douglas Cooper's terms
were "later undermined by interpretations of the work of Picasso, Braque, Gris and Léger that stress iconographic
and ideological questions rather than methods of representation." During the late 19th and early 20th centuries,
Europeans were discovering African, Polynesian, Micronesian and Native American art. Artists such as Paul Gauguin,
Henri Matisse, and Pablo Picasso were intrigued and inspired by the stark power and simplicity of styles of those
foreign cultures. Around 1906, Picasso met Matisse through Gertrude Stein, at a time when both artists had recently
acquired an interest in primitivism, Iberian sculpture, African art and African tribal masks. They became friendly
rivals and competed with each other throughout their careers, perhaps leading to Picasso entering a new period in
his work by 1907, marked by the influence of Greek, Iberian and African art. Picasso's paintings of 1907 have been
characterized as Protocubism, as notably seen in Les Demoiselles d'Avignon, the antecedent of Cubism. The art historian
Douglas Cooper states that Paul Gauguin and Paul Cézanne "were particularly influential to the formation of Cubism
and especially important to the paintings of Picasso during 1906 and 1907". Cooper goes on to say: "The Demoiselles
is generally referred to as the first Cubist picture. This is an exaggeration, for although it was a major first
step towards Cubism it is not yet Cubist. The disruptive, expressionist element in it is even contrary to the spirit
of Cubism, which looked at the world in a detached, realistic spirit. Nevertheless, the Demoiselles is the logical
picture to take as the starting point for Cubism, because it marks the birth of a new pictorial idiom, because in
it Picasso violently overturned established conventions and because all that followed grew out of it." The most serious
objection to regarding the Demoiselles as the origin of Cubism, with its evident influence of primitive art, is that
"such deductions are unhistorical", wrote the art historian Daniel Robbins. This familiar explanation "fails to give
adequate consideration to the complexities of a flourishing art that existed just before and during the period when
Picasso's new painting developed." Between 1905 and 1908, a conscious search for a new style caused rapid changes
in art across France, Germany, Holland, Italy, and Russia. The Impressionists had used a double point of view, and
both Les Nabis and the Symbolists (who also admired Cézanne) flattened the picture plane, reducing their subjects
to simple geometric forms. Neo-Impressionist structure and subject matter, most notably to be seen in the works of
Georges Seurat (e.g., Parade de Cirque, Le Chahut and Le Cirque), was another important influence. There were also
parallels in the development of literature and social thought. In addition to Seurat, the roots of cubism are to
be found in the two distinct tendencies of Cézanne's later work: first his breaking of the painted surface into small
multifaceted areas of paint, thereby emphasizing the plural viewpoint given by binocular vision, and second his interest
in the simplification of natural forms into cylinders, spheres, and cones. However, the cubists explored this concept
further than Cézanne. They represented all the surfaces of depicted objects in a single picture plane, as if the
objects had all their faces visible at the same time. This new kind of depiction revolutionized the way objects could
be visualized in painting and art. The historical study of Cubism began in the late 1920s, drawing at first from
sources of limited data, namely the opinions of Guillaume Apollinaire. It came to rely heavily on Daniel-Henry Kahnweiler's
book Der Weg zum Kubismus (published in 1920), which centered on the developments of Picasso, Braque, Léger, and
Gris. The terms "analytical" and "synthetic" which subsequently emerged have been widely accepted since the mid-1930s.
Both terms are historical impositions that occurred after the facts they identify. Neither phase was designated as
such at the time the corresponding works were created. "If Kahnweiler considers Cubism as Picasso and Braque," wrote
Daniel Robbins, "our only fault is in subjecting other Cubists' works to the rigors of that limited definition."
The traditional interpretation of "Cubism", formulated post facto as a means of understanding the works of Braque
and Picasso, has affected our appreciation of other twentieth-century artists. It is difficult to apply to painters
such as Jean Metzinger, Albert Gleizes, Robert Delaunay and Henri Le Fauconnier, whose fundamental differences from
traditional Cubism compelled Kahnweiler to question their right to be called Cubists at all. According to Daniel
Robbins, "To suggest that merely because these artists developed differently or varied from the traditional pattern
they deserved to be relegated to a secondary or satellite role in Cubism is a profound mistake." The term Cubism
did not come into general usage until 1911, mainly with reference to Metzinger, Gleizes, Delaunay, and Léger. In
1911, the poet and critic Guillaume Apollinaire accepted the term on behalf of a group of artists invited to exhibit
at the Brussels Indépendants. The following year, in preparation for the Salon de la Section d'Or, Metzinger and
Gleizes wrote and published Du "Cubisme" in an effort to dispel the confusion raging around the word, and as a major
defence of Cubism (which had caused a public scandal following the 1911 Salon des Indépendants and the 1912 Salon
d'Automne in Paris). Clarifying their aims as artists, this work was the first theoretical treatise on Cubism and
it still remains the clearest and most intelligible. The result, not solely a collaboration between its two authors,
reflected discussions by the circle of artists who met in Puteaux and Courbevoie. It mirrored the attitudes of the
"artists of Passy", which included Picabia and the Duchamp brothers, to whom sections of it were read prior to publication.
The concept developed in Du "Cubisme" of observing a subject from different points in space and time simultaneously,
i.e., the act of moving around an object to seize it from several successive angles fused into a single image (multiple
viewpoints, mobile perspective, simultaneity or multiplicity), is a generally recognized device used by the Cubists.
There was a distinct difference between Kahnweiler’s Cubists and the Salon Cubists. Prior to 1914, Picasso, Braque,
Gris and Léger (to a lesser extent) gained the support of a single committed art dealer in Paris, Daniel-Henry Kahnweiler,
who guaranteed them an annual income for the exclusive right to buy their works. Kahnweiler sold only to a small
circle of connoisseurs. His support gave his artists the freedom to experiment in relative privacy. Picasso worked
in Montmartre until 1912, while Braque and Gris remained there until after the First World War. Léger was based in
Montparnasse. In contrast, the Salon Cubists built their reputation primarily by exhibiting regularly at the Salon
d'Automne and the Salon des Indépendants, both major non-academic Salons in Paris. They were inevitably more aware
of public response and the need to communicate. Already in 1910 a group began to form which included Metzinger, Gleizes,
Delaunay and Léger. They met regularly at Henri le Fauconnier's studio near the Boulevard de Montparnasse. These
soirées often included writers such as Guillaume Apollinaire and André Salmon. Together with other young artists,
the group wanted to emphasise research into form, in opposition to the Neo-Impressionist emphasis on color. At
the Salon d'Automne of the same year, in addition to the Indépendants group of Salle 41, were exhibited works by
André Lhote, Marcel Duchamp, Jacques Villon, Roger de La Fresnaye, André Dunoyer de Segonzac and František Kupka.
The exhibition was reviewed in the October 8, 1911 issue of The New York Times. This article was published a year
after Gelett Burgess' The Wild Men of Paris, and two years prior to the Armory Show, which introduced astonished
Americans, accustomed to realistic art, to the experimental styles of the European avant garde, including Fauvism,
Cubism, and Futurism. The 1911 New York Times article portrayed works by Picasso, Matisse, Derain, Metzinger and
others, dated before 1909 and not exhibited at the 1911 Salon. The article was titled The "Cubists" Dominate Paris' Fall
Salon and subtitled Eccentric School of Painting Increases Its Vogue in the Current Art Exhibition - What Its Followers
Attempt to Do. The subsequent 1912 Salon des Indépendants was marked by the presentation of Marcel Duchamp's Nude
Descending a Staircase, No. 2, which itself caused a scandal, even amongst the Cubists. It was in fact rejected by
the hanging committee, which included his brothers and other Cubists. Although the work was shown in the Salon de
la Section d'Or in October 1912 and the 1913 Armory Show in New York, Duchamp never forgave his brothers and former
colleagues for censoring his work. Juan Gris, a new addition to the Salon scene, exhibited his Portrait of Picasso
(Art Institute of Chicago), while Metzinger's two showings included La Femme au Cheval (Woman with a horse) 1911-1912
(National Gallery of Denmark). Delaunay's monumental La Ville de Paris (Musée d'art moderne de la Ville de Paris)
and Léger's La Noce, The Wedding (Musée National d'Art Moderne, Paris) were also exhibited. The Cubist contribution
to the 1912 Salon d'Automne created a scandal regarding the use of government-owned buildings, such as the Grand Palais,
to exhibit such artwork. The indignation of the politician Jean Pierre Philippe Lampué made the front page of Le
Journal, 5 October 1912. The controversy spread to the Municipal Council of Paris, leading to a debate in the Chambre
des Députés about the use of public funds to provide the venue for such art. The Cubists were defended by the Socialist
deputy, Marcel Sembat. It was against this background of public anger that Jean Metzinger and Albert Gleizes wrote
Du "Cubisme" (published by Eugène Figuière in 1912, translated to English and Russian in 1913). Among the works exhibited
were Le Fauconnier's vast composition Les Montagnards attaqués par des ours (Mountaineers Attacked by Bears) now
at Rhode Island School of Design Museum, Joseph Csaky's Deux Femmes (Two Women; a sculpture now lost), in addition
to the highly abstract paintings by Kupka, Amorpha (The National Gallery, Prague), and Picabia, La Source, The Spring
(Museum of Modern Art, New York). The most extreme forms of Cubism were not those practiced by Picasso and Braque,
who resisted total abstraction. Other Cubists, by contrast, especially František Kupka, and those considered Orphists
by Apollinaire (Delaunay, Léger, Picabia and Duchamp), accepted abstraction by removing visible subject matter entirely.
Kupka’s two entries at the 1912 Salon d'Automne, Amorpha-Fugue à deux couleurs and Amorpha chromatique chaude, were
highly abstract (or nonrepresentational) and metaphysical in orientation. Both Duchamp in 1912 and Picabia from 1912
to 1914 developed an expressive and allusive abstraction dedicated to complex emotional and sexual themes. Beginning
in 1912 Delaunay painted a series of paintings entitled Simultaneous Windows, followed by a series entitled Formes
Circulaires, in which he combined planar structures with bright prismatic hues; based on the optical characteristics
of juxtaposed colors his departure from reality in the depiction of imagery was quasi-complete. In 1913–14 Léger
produced a series entitled Contrasts of Forms, giving a similar stress to color, line and form. His Cubism, despite
its abstract qualities, was associated with themes of mechanization and modern life. Apollinaire supported these
early developments of abstract Cubism in Les Peintres cubistes (1913), writing of a new "pure" painting in which
the subject was vacated. But in spite of his use of the term Orphism these works were so different that they defy
attempts to place them in a single category. Also labeled an Orphist by Apollinaire, Marcel Duchamp was responsible
for another extreme development inspired by Cubism. The ready-made arose from a joint consideration that the work
itself is considered an object (just as a painting is), and that it uses the material detritus of the world (as collage
and papier collé in the Cubist construction and Assemblage). The next logical step, for Duchamp, was to present an
ordinary object as a self-sufficient work of art representing only itself. In 1913 he attached a bicycle wheel to
a kitchen stool and in 1914 selected a bottle-drying rack as a sculpture in its own right. The Section d'Or, also
known as Groupe de Puteaux, founded by some of the most conspicuous Cubists, was a collective of painters, sculptors
and critics associated with Cubism and Orphism, active from 1911 through about 1914, coming to prominence in the
wake of their controversial showing at the 1911 Salon des Indépendants. The Salon de la Section d'Or at the Galerie
La Boétie in Paris, October 1912, was arguably the most important pre-World War I Cubist exhibition, exposing Cubism
to a wide audience. Over 200 works were displayed, and the fact that many of the artists showed artworks representative
of their development from 1909 to 1912 gave the exhibition the allure of a Cubist retrospective. The fact that the
1912 exhibition had been curated to show the successive stages through which Cubism had transited, and that Du "Cubisme"
had been published for the occasion, indicates the artists' intention of making their work comprehensible to a wide
audience (art critics, art collectors, art dealers and the general public). Undoubtedly, due to the great success
of the exhibition, Cubism became recognized as a tendency, genre or style in art with a specific common philosophy
or goal: a new avant-garde movement. The Cubism of Picasso, Braque and Gris had more than a technical or formal significance,
and the distinct attitudes and intentions of the Salon Cubists produced different kinds of Cubism, rather than a
derivative of their work. "It is by no means clear, in any case," wrote Christopher Green, "to what extent these
other Cubists depended on Picasso and Braque for their development of such techniques as faceting, 'passage' and
multiple perspective; they could well have arrived at such practices with little knowledge of 'true' Cubism in its
early stages, guided above all by their own understanding of Cézanne." The works exhibited by these Cubists at the
1911 and 1912 Salons extended beyond the conventional Cézanne-like subjects—the posed model, still-life and landscape—favored
by Picasso and Braque to include large-scale modern-life subjects. Aimed at a large public, these works stressed
the use of multiple perspective and complex planar faceting for expressive effect while preserving the eloquence
of subjects endowed with literary and philosophical connotations. In Du "Cubisme" Metzinger and Gleizes explicitly
related the sense of time to multiple perspective, giving symbolic expression to the notion of ‘duration’ proposed
by the philosopher Henri Bergson according to which life is subjectively experienced as a continuum, with the past
flowing into the present and the present merging into the future. The Salon Cubists used the faceted treatment of
solid and space and effects of multiple viewpoints to convey a physical and psychological sense of the fluidity of
consciousness, blurring the distinctions between past, present and future. One of the major theoretical innovations
made by the Salon Cubists, independently of Picasso and Braque, was that of simultaneity, drawing to greater or lesser
extent on theories of Henri Poincaré, Ernst Mach, Charles Henry, Maurice Princet, and Henri Bergson. With simultaneity,
the concept of separate spatial and temporal dimensions was comprehensively challenged. Linear perspective developed
during the Renaissance was vacated. The subject matter was no longer considered from a specific point of view at
a moment in time, but built following a selection of successive viewpoints, i.e., as if viewed simultaneously from
numerous angles (and in multiple dimensions) with the eye free to roam from one to the other. This technique of representing
simultaneity, multiple viewpoints (or relative motion) is pushed to a high degree of complexity in Gleizes' monumental
Le Dépiquage des Moissons (Harvest Threshing), exhibited at the 1912 Salon de la Section d'Or, Le Fauconnier’s Abundance
shown at the Indépendants of 1911, and Delaunay's City of Paris, shown at the Indépendants in 1912. These ambitious
works are some of the largest paintings in the history of Cubism. Léger’s The Wedding, also shown at the Salon des
Indépendants in 1912, gave form to the notion of simultaneity by presenting different motifs as occurring within
a single temporal frame, where responses to the past and present interpenetrate with collective force. The conjunction
of such subject matter with simultaneity aligns Salon Cubism with early Futurist paintings by Umberto Boccioni, Gino
Severini and Carlo Carrà, themselves made in response to early Cubism. Cubism and modern European art were introduced
into the United States at the now legendary 1913 Armory Show in New York City, which then traveled to Chicago and
Boston. At the Armory Show Pablo Picasso exhibited La Femme au pot de moutarde (1910), the sculpture Head of a Woman
(Fernande) (1909–10), and Les Arbres (1907), among other Cubist works. Jacques Villon exhibited seven important and
large drypoints; his brother Marcel Duchamp shocked the American public with his painting Nude Descending a Staircase,
No. 2 (1912). Francis Picabia exhibited his abstractions La Danse à la source and La Procession, Seville (both of
1912). Albert Gleizes exhibited La Femme aux phlox (1910) and L'Homme au balcon (1912), two highly stylized and faceted
cubist works. Georges Braque, Fernand Léger, Raymond Duchamp-Villon, Roger de La Fresnaye and Alexander Archipenko
also contributed examples of their cubist works. Cubist sculpture developed in parallel to Cubist painting. During
the autumn of 1909 Picasso sculpted Head of a Woman (Fernande) with positive features depicted by negative space
and vice versa. According to Douglas Cooper: "The first true Cubist sculpture was Picasso's impressive Woman's Head,
modeled in 1909–10, a counterpart in three dimensions to many similar analytical and faceted heads in his paintings
at the time." These positive/negative reversals were ambitiously exploited by Alexander Archipenko in 1912–13, for
example in Woman Walking. Joseph Csaky, after Archipenko, was the first sculptor in Paris to join the Cubists, with
whom he exhibited from 1911 onwards. They were followed by Raymond Duchamp-Villon and then in 1914 by Jacques Lipchitz,
Henri Laurens and Ossip Zadkine. A significant modification of Cubism between 1914 and 1916 was signaled by a shift
towards a strong emphasis on large overlapping geometric planes and flat surface activity. This grouping of styles
of painting and sculpture, especially significant between 1917 and 1920, was practiced by several artists; particularly
those under contract with the art dealer and collector Léonce Rosenberg. The tightening of the compositions, the
clarity and sense of order reflected in these works, led to its being referred to by the critic Maurice Raynal
as 'crystal' Cubism. Considerations manifested by Cubists prior to the outset of World War I—such as the fourth dimension,
dynamism of modern life, the occult, and Henri Bergson's concept of duration—had now been vacated, replaced by a
purely formal frame of reference. The most innovative period of Cubism was before 1914. After World War I, with the
support given by the dealer Léonce Rosenberg, Cubism returned as a central issue for artists, and continued as such
until the mid-1920s when its avant-garde status was rendered questionable by the emergence of geometric abstraction
and Surrealism in Paris. Many Cubists, including Picasso, Braque, Gris, Léger, Gleizes, and Metzinger, while developing
other styles, returned periodically to Cubism, even well after 1925. Cubism reemerged during the 1920s and the 1930s
in the work of the American Stuart Davis and the Englishman Ben Nicholson. In France, however, Cubism experienced
a decline beginning in about 1925. Léonce Rosenberg exhibited not only the artists stranded by Kahnweiler’s exile
but others including Laurens, Lipchitz, Metzinger, Gleizes, Csaky, Herbin and Severini. In 1918 Rosenberg presented
a series of Cubist exhibitions at his Galerie de l’Effort Moderne in Paris. Attempts were made by Louis Vauxcelles
to claim that Cubism was dead, but these exhibitions, along with a well-organized Cubist show at the 1920 Salon des
Indépendants and a revival of the Salon de la Section d’Or in the same year, demonstrated it was still alive. The
reemergence of Cubism coincided with the appearance from about 1917–24 of a coherent body of theoretical writing
by Pierre Reverdy, Maurice Raynal and Daniel-Henry Kahnweiler and, among the artists, by Gris, Léger and Gleizes.
The occasional return to classicism—figurative work either exclusively or alongside Cubist work—experienced by many
artists during this period (called Neoclassicism) has been linked to the tendency to evade the realities of the war
and also to the cultural dominance of a classical or Latin image of France during and immediately following the war.
Cubism after 1918 can be seen as part of a wide ideological shift towards conservatism in both French society and
culture. Yet, Cubism itself remained evolutionary both within the oeuvre of individual artists, such as Gris and
Metzinger, and across the work of artists as different from each other as Braque, Léger and Gleizes. Cubism as a
publicly debated movement became relatively unified and open to definition. Its theoretical purity made it a gauge
against which such diverse tendencies as Realism or Naturalism, Dada, Surrealism and abstraction could be compared.
Cubism formed an important link between early-20th-century art and architecture. The historical, theoretical, and
socio-political relationships between avant-garde practices in painting, sculpture and architecture had early ramifications
in France, Germany, the Netherlands and Czechoslovakia. Though there are many points of intersection between Cubism
and architecture, only a few direct links between them can be drawn. Most often the connections are made by reference
to shared formal characteristics: faceting of form, spatial ambiguity, transparency, and multiplicity. Architectural
interest in Cubism centered on the dissolution and reconstitution of three-dimensional form, using simple geometric
shapes, juxtaposed without the illusions of classical perspective. Diverse elements could be superimposed, made transparent
or penetrate one another, while retaining their spatial relationships. Cubism had become an influential factor in
the development of modern architecture from 1912 (La Maison Cubiste, by Raymond Duchamp-Villon and André Mare) onwards,
developing in parallel with architects such as Peter Behrens and Walter Gropius, with the simplification of building
design, the use of materials appropriate to industrial production, and the increased use of glass. Cubism was relevant
to an architecture seeking a style that did not need to refer to the past. Thus, what had become a revolution in both
painting and sculpture was applied as part of "a profound reorientation towards a changed world". The Cubo-Futurist
ideas of Filippo Tommaso Marinetti influenced attitudes in avant-garde architecture. The influential De Stijl movement
embraced the aesthetic principles of Neo-plasticism developed by Piet Mondrian under the influence of Cubism in Paris.
De Stijl was also linked by Gino Severini to Cubist theory through the writings of Albert Gleizes. However, the linking
of basic geometric forms with inherent beauty and ease of industrial application—which had been prefigured by Marcel
Duchamp from 1914—was left to the founders of Purism, Amédée Ozenfant and Charles-Édouard Jeanneret (better known
as Le Corbusier), who exhibited paintings together in Paris and published Après le cubisme in 1918. Le Corbusier's
ambition had been to translate the properties of his own style of Cubism to architecture. Between 1918 and 1922,
Le Corbusier concentrated his efforts on Purist theory and painting. In 1922, Le Corbusier and his cousin Jeanneret
opened a studio in Paris at 35 rue de Sèvres. His theoretical studies soon advanced into many different architectural
projects. At the 1912 Salon d'Automne an architectural installation was exhibited that quickly became known as Maison
Cubiste (Cubist House), signed Raymond Duchamp-Villon and André Mare along with a group of collaborators. Metzinger
and Gleizes in Du "Cubisme", written during the assemblage of the "Maison Cubiste", wrote about the autonomous nature
of art, stressing the point that decorative considerations should not govern the spirit of art. Decorative work,
to them, was the "antithesis of the picture". "The true picture" wrote Metzinger and Gleizes, "bears its raison d'être
within itself. It can be moved from a church to a drawing-room, from a museum to a study. Essentially independent,
necessarily complete, it need not immediately satisfy the mind: on the contrary, it should lead it, little by little,
towards the fictitious depths in which the coordinative light resides. It does not harmonize with this or that ensemble;
it harmonizes with things in general, with the universe: it is an organism...". "Mare's ensembles were accepted as
frames for Cubist works because they allowed paintings and sculptures their independence", writes Christopher Green,
"creating a play of contrasts, hence the involvement not only of Gleizes and Metzinger themselves, but of Marie Laurencin,
the Duchamp brothers (Raymond Duchamp-Villon designed the facade) and Mare's old friends Léger and Roger La Fresnaye".
La Maison Cubiste was a fully furnished house, with a staircase, wrought iron banisters, a living room—the Salon
Bourgeois, where paintings by Marcel Duchamp, Metzinger (Woman with a Fan), Gleizes, Laurencin and Léger were hung—and
a bedroom. It was an example of L'art décoratif, a home within which Cubist art could be displayed in the comfort
and style of modern, bourgeois life. Spectators at the Salon d'Automne passed through the full-scale 10-by-3-meter
plaster model of the ground floor of the facade, designed by Duchamp-Villon. This architectural installation was
subsequently exhibited at the 1913 Armory Show, New York, Chicago and Boston, listed in the catalogue of the New
York exhibit as Raymond Duchamp-Villon, number 609, and entitled "Facade architectural, plaster" (Façade architecturale).
Original Cubist architecture is very rare. The only country where Cubism was truly applied to architecture was Bohemia
(today the Czech Republic), and especially its capital, Prague. Czech architects were the first and only ones to design
original Cubist buildings. Cubist architecture flourished for the most part between 1910 and 1914, but Cubist or
Cubism-influenced buildings were also built after World War I. After the war, an architectural style called Rondo-Cubism
was developed in Prague, fusing Cubist architecture
with round shapes. In their theoretical writings, the Cubist architects called for dynamism: a creative idea was to
overcome the inertia and calm of matter, so that the result would evoke feelings of dynamism and expressive plasticity
in the viewer. This was to be achieved through shapes derived from pyramids, cubes and prisms, and through arrangements
and compositions of oblique, mainly triangular surfaces, with facades sculpted into protruding crystal-like units
reminiscent of the so-called diamond cut, or even into cavernous forms recalling late Gothic architecture. In this way,
the entire surfaces of the facades, including even the gables and dormers, are sculpted.
The grilles as well as other architectural ornaments attain a three-dimensional form. Thus, new forms of windows
and doors were also created, e.g. hexagonal windows. Czech Cubist architects also designed Cubist furniture. The
leading Cubist architects were Pavel Janák, Josef Gočár, Vlastislav Hofman, Emil Králíček and Josef Chochol. They
worked mostly in Prague but also in other Bohemian towns. The best-known Cubist building is the House of the Black
Madonna in the Old Town of Prague built in 1912 by Josef Gočár with the only Cubist café in the world, Grand Café
Orient. Vlastislav Hofman built the entrance pavilions of Ďáblice Cemetery in 1912–1914, Josef Chochol designed several
residential houses below Vyšehrad. A Cubist streetlamp designed in 1912 by Emil Králíček, who also built the Diamond
House in the New Town of Prague around 1913, has been preserved near Wenceslas Square. The influence of
cubism extended to other artistic fields, outside painting and sculpture. In literature, the written works of Gertrude
Stein employ repetition and repetitive phrases as building blocks in both passages and whole chapters. Most of Stein's
important works utilize this technique, including the novel The Making of Americans (1906–08). Gertrude Stein and her
brother Leo were not only the first important patrons of Cubism but also important influences on it. Picasso in turn
was an important influence on Stein's writing. The poets generally associated with Cubism
are Guillaume Apollinaire, Blaise Cendrars, Jean Cocteau, Max Jacob, André Salmon and Pierre Reverdy. As American
poet Kenneth Rexroth explains, Cubism in poetry "is the conscious, deliberate dissociation and recombination of elements
into a new artistic entity made self-sufficient by its rigorous architecture. This is quite different from the free
association of the Surrealists and the combination of unconscious utterance and political nihilism of Dada." Nonetheless,
the Cubist poets' influence on both Cubism and the later movements of Dada and Surrealism was profound; Louis Aragon,
founding member of Surrealism, said that for Breton, Soupault, Éluard and himself, Reverdy was "our immediate elder,
the exemplary poet." Though not as well remembered as the Cubist painters, these poets continue to influence and
inspire; American poets John Ashbery and Ron Padgett have recently produced new translations of Reverdy's work. Wallace
Stevens' "Thirteen Ways of Looking at a Blackbird" is also said to demonstrate how cubism's multiple perspectives
can be translated into poetry.
Chinese political philosophy dates back to the Spring and Autumn Period, specifically with Confucius in the 6th century BC.
Chinese political philosophy was developed as a response to the social and political breakdown of the country characteristic
of the Spring and Autumn Period and the Warring States period. The major philosophies during the period, Confucianism,
Legalism, Mohism, Agrarianism and Taoism, each had a political aspect to their philosophical schools. Philosophers
such as Confucius, Mencius, and Mozi, focused on political unity and political stability as the basis of their political
philosophies. Confucianism advocated a hierarchical, meritocratic government based on empathy, loyalty, and interpersonal
relationships. Legalism advocated a highly authoritarian government based on draconian punishments and laws. Mohism
advocated a communal, decentralized government centered on frugality and asceticism. The Agrarians advocated a peasant
utopian communalism and egalitarianism. Taoism advocated a proto-anarchism. Legalism was the dominant political philosophy
of the Qin Dynasty, but was replaced by State Confucianism in the Han Dynasty. Prior to China's adoption of communism,
State Confucianism remained the dominant political philosophy of China up to the 20th century. Western political
philosophy originates in the philosophy of ancient Greece, where political philosophy dates back to at least Plato.
Ancient Greece was dominated by city-states, which experimented with various forms of political organization, grouped
by Plato into four categories: timocracy, tyranny, democracy and oligarchy. One of the first and most important
classical works of political philosophy is Plato's Republic, which was followed by Aristotle's Nicomachean Ethics
and Politics. Roman political philosophy was influenced by the Stoics, including the Roman statesman Cicero. Indian
political philosophy evolved in ancient times and drew a clear distinction between (1) nation and state and (2)
religion and state. The constitutions of Hindu states evolved over time and were based on political and legal treatises
and prevalent social institutions. The institutions of state were broadly divided into governance, administration,
defense, law and order. Mantranga, the principal governing body of these states, consisted of the King, the Prime
Minister, the Commander-in-Chief of the army, and the Chief Priest of the King. The Prime Minister headed the committee
of ministers along with the head of the executive (Maha Amatya). The Arthashastra, attributed to the 4th-century BC
Indian political philosopher Chanakya, provides
an account of the science of politics for a wise ruler, policies for foreign affairs and wars, the system of a spy
state and surveillance and economic stability of the state. Chanakya quotes several authorities including Bruhaspati,
Ushanas, Prachetasa Manu, Parasara, and Ambi, and described himself as a descendant of a lineage of political philosophers,
with his father Chanaka being his immediate predecessor. Another influential extant Indian treatise on political
philosophy is the Sukra Neeti. An example of a code of law in ancient India is the Manusmṛti or Laws of Manu. The
early Christian philosophy of Augustine of Hippo was heavily influenced by Plato. A key change brought about by Christian
thought was the moderation of Stoicism and the theory of justice of the Roman world, as well as an emphasis on the role
of the state in applying mercy as a moral example. Augustine also preached that one was not a member of his or her
city, but was either a citizen of the City of God (Civitas Dei) or the City of Man (Civitas Terrena). Augustine's
City of God is an influential work of this period that attacked the thesis, held by many Christian Romans, that the
Christian view could be realized on Earth. The rise of Islam, based on both the Qur'an and Muhammad, strongly altered
the power balances and perceptions of origin of power in the Mediterranean region. Early Islamic philosophy emphasized
an inexorable link between science and religion, and the process of ijtihad to find truth—in effect all philosophy
was "political" as it had real implications for governance. This view was challenged by the "rationalist" Mutazilite
philosophers, who held a more Hellenic view, reason above revelation, and as such are known to modern scholars as
the first speculative theologians of Islam; they were supported by a secular aristocracy who sought freedom of action
independent of the Caliphate. By the late ancient period, however, the "traditionalist" Asharite view of Islam had
in general triumphed. According to the Asharites, reason must be subordinate to the Quran and the Sunna. Islamic
political philosophy was, indeed, rooted in the very sources of Islam—i.e., the Qur'an and the Sunnah, the words
and practices of Muhammad—thus making it essentially theocratic. However, in Western thought, it is generally
supposed that it was a specific area peculiar merely to the great philosophers of Islam: al-Kindi (Alkindus), al-Farabi
(Abunaser), İbn Sina (Avicenna), Ibn Bajjah (Avempace), Ibn Rushd (Averroes), and Ibn Khaldun. The political conceptions
of Islam such as kudrah (power), sultan, ummah, cemaa (obligation)—and even the "core" terms of the Qur'an—i.e.,
ibadah (worship), din (religion), rab (master) and ilah (deity)—are taken as the basis of analysis. Hence, not
only the ideas of the Muslim political philosophers but also many other jurists and ulama posed political ideas and
theories. For example, the ideas of the Khawarij in the very early years of Islamic history on Khilafa and Ummah,
or that of Shia Islam on the concept of Imamah are considered proofs of political thought. The clashes between the
Ehl-i Sunna and Shia in the 7th and 8th centuries had a genuine political character. Medieval political philosophy
in Europe was heavily influenced by Christian thinking. It had much in common with Mutazilite Islamic thinking in that
the Roman Catholics, though subordinating philosophy to theology, did not subject reason to revelation but, in the case
of contradictions, subordinated reason to faith, as did the Asharites of Islam. The Scholastics by combining
the philosophy of Aristotle with the Christianity of St. Augustine emphasized the potential harmony inherent in reason
and revelation. Perhaps the most influential political philosopher of medieval Europe was St. Thomas Aquinas who
helped reintroduce Aristotle's works, which had only been transmitted to Catholic Europe through Muslim Spain, along
with the commentaries of Averroes. Aquinas's use of them set the agenda for scholastic political philosophy, which
dominated European thought for centuries, even into the Renaissance. One of the most influential works during this burgeoning
period was Niccolò Machiavelli's The Prince, written in 1511–12 and published in 1532, after Machiavelli's death.
That work, as well as The Discourses, a rigorous analysis of the classical period, did much to influence modern political
thought in the West. A minority (including Jean-Jacques Rousseau) interpreted The Prince as a satire meant to be
given to the Medici after their recapture of Florence and their subsequent expulsion of Machiavelli from Florence.
Though the work was written for the di Medici family, perhaps in order to influence them to free him from exile, Machiavelli
supported the Republic of Florence rather than the oligarchy of the di Medici family. At any rate, Machiavelli presents
a pragmatic and somewhat consequentialist view of politics, whereby good and evil are mere means used to bring about
an end—i.e., the secure and powerful state. Thomas Hobbes, well known for his theory of the social contract, goes
on to expand this view at the start of the 17th century during the English Renaissance. Although neither Machiavelli
nor Hobbes believed in the divine right of kings, they both believed in the inherent selfishness of the individual.
It was this belief that led them to adopt a strong central power as the only means of preventing the
disintegration of the social order. These theorists were driven by two basic questions: one, by what right or need
do people form states; and two, what the best form for a state could be. These fundamental questions involved a conceptual
distinction between the concepts of "state" and "government." It was decided that "state" would refer to a set of
enduring institutions through which power would be distributed and its use justified. The term "government" would
refer to a specific group of people who occupied the institutions of the state and created the laws and ordinances
by which the people, themselves included, would be bound. This conceptual distinction continues to operate in political
science, although some political scientists, philosophers, historians and cultural anthropologists have argued that
most political action in any given society occurs outside of its state, and that there are societies that are not
organized into states that nevertheless must be considered in political terms. As long as the concept of natural
order was not introduced, the social sciences could not evolve independently of theistic thinking. Since the cultural
revolution of the 17th century in England, which spread to France and the rest of Europe, society has been considered
subject to natural laws akin to the physical world. Political and economic relations were drastically influenced
by these theories as the concept of the guild was subordinated to the theory of free trade, and Roman Catholic dominance
of theology was increasingly challenged by Protestant churches subordinate to each nation-state, which also (in a
fashion the Roman Catholic Church often decried angrily) preached in the vulgar or native language of each region.
However, the Enlightenment was an outright attack on religion, particularly Christianity. The most outspoken critic of the church in France was François Marie Arouet de Voltaire, a representative figure of the Enlightenment. After
Voltaire, religion would never be the same again in France. In the Ottoman Empire, these ideological reforms did
not take place and these views did not integrate into common thought until much later. Likewise, this doctrine did not spread within the New World and the advanced civilizations of the Aztec, Maya, Inca, Mohican, Delaware,
Huron and especially the Iroquois. The Iroquois philosophy in particular gave much to Christian thought of the time
and in many cases actually inspired some of the institutions adopted in the United States: for example, Benjamin
Franklin was a great admirer of some of the methods of the Iroquois Confederacy, and much of early American literature
emphasized the political philosophy of the natives. John Locke in particular exemplified this new age of political
theory with his work Two Treatises of Government. In it Locke proposes a state of nature theory that directly complements
his conception of how political development occurs and how it can be founded through contractual obligation. Locke
stood to refute Sir Robert Filmer's paternally founded political theory in favor of a natural system grounded in nature. The theory of the divine right of kings became a passing fancy, exposed to the type
of ridicule with which John Locke treated it. Unlike Machiavelli and Hobbes but like Aquinas, Locke would accept
Aristotle's dictum that man seeks to be happy in a state of social harmony as a social animal. Unlike Aquinas's preponderant
view on the salvation of the soul from original sin, Locke believes man's mind comes into this world as tabula rasa.
For Locke, knowledge is neither innate, revealed nor based on authority but subject to uncertainty tempered by reason,
tolerance and moderation. According to Locke, an absolute ruler as proposed by Hobbes is unnecessary, for natural
law is based on reason and seeking peace and survival for man. The Marxist critique of capitalism — developed with
Friedrich Engels — was, alongside liberalism and fascism, one of the defining ideological movements of the twentieth century. The industrial revolution produced a parallel revolution in political thought. Urbanization and capitalism
greatly reshaped society. During this same period, the socialist movement began to form. In the mid-19th century,
Marxism was developed, and socialism in general gained increasing popular support, mostly from the urban working
class. Without breaking entirely from the past, Marx established principles that would be used by future revolutionaries
of the 20th century, namely Vladimir Lenin, Mao Zedong, Ho Chi Minh, and Fidel Castro. Though Hegel's philosophy of
history is similar to Immanuel Kant's, and Karl Marx's theory of revolution towards the common good is partly based
on Kant's view of history—Marx declared that he was turning Hegel's dialectic, which was "standing on its head",
"the right side up again". Unlike Marx who believed in historical materialism, Hegel believed in the Phenomenology
of Spirit. By the late 19th century, socialism and trade unions were established members of the political landscape.
In addition, the various branches of anarchism, with thinkers such as Mikhail Bakunin, Pierre-Joseph Proudhon or
Peter Kropotkin, and syndicalism also gained some prominence. In the Anglo-American world, anti-imperialism and pluralism
began gaining currency at the turn of the 20th century. World War I was a watershed event in human history, changing
views of governments and politics. The Russian Revolution of 1917 (and similar, albeit less successful, revolutions
in many other European countries) brought communism, and in particular the political theory of Leninism (but also, on a smaller level and more gradually, Luxemburgism), onto the world stage. At the same time, social democratic parties won
elections and formed governments for the first time, often as a result of the introduction of universal suffrage.
However, a group of central European economists led by Austrian School economists Ludwig von Mises and Friedrich
Hayek identified the collectivist underpinnings of the various new socialist and fascist doctrines of government
power as being different brands of political totalitarianism. From the end of World War II until 1971, when John
Rawls published A Theory of Justice, political philosophy declined in the Anglo-American academic world, as analytic
philosophers expressed skepticism about the possibility that normative judgments had cognitive content, and political
science turned toward statistical methods and behavioralism. In continental Europe, on the other hand, the postwar
decades saw a huge blossoming of political philosophy, with Marxism dominating the field. This was the time of Jean-Paul Sartre and Louis Althusser; the victories of Mao Zedong in China and Fidel Castro in Cuba, as well as the events of May 1968, led to increased interest in revolutionary ideology, especially among the New Left. A number of continental
European émigrés to Britain and the United States—including Karl Popper, Friedrich Hayek, Leo Strauss, Isaiah Berlin,
Eric Voegelin and Judith Shklar—encouraged continued study in political philosophy in the Anglo-American world, but
in the 1950s and 1960s they and their students remained at odds with the analytic establishment. Communism remained
an important focus especially during the 1950s and 1960s. Colonialism and racism were important issues that arose.
In general, there was a marked trend towards a pragmatic approach to political issues, rather than a philosophical
one. Much academic debate regarded one or both of two pragmatic topics: how (or whether) to apply utilitarianism
to problems of political policy, or how (or whether) to apply economic models (such as rational choice theory) to
political issues. The rise of feminism, LGBT social movements and the end of colonial rule and of the political exclusion
of such minorities as African Americans and sexual minorities in the developed world has led to feminist, postcolonial,
and multicultural thought becoming significant. This led to challenges to the social contract from the philosophers Charles W. Mills, in his book The Racial Contract, and Carole Pateman, in her book The Sexual Contract, who argued that the social contract excluded persons of colour and women respectively. In Anglo-American academic political philosophy, the publication
of John Rawls's A Theory of Justice in 1971 is considered a milestone. Rawls used a thought experiment, the original
position, in which representative parties choose principles of justice for the basic structure of society from behind
a veil of ignorance. Rawls also offered a criticism of utilitarian approaches to questions of political justice.
Robert Nozick's 1974 book Anarchy, State, and Utopia, which won a National Book Award, responded to Rawls from a
libertarian perspective and gained academic respectability for libertarian viewpoints. Contemporaneously with the
rise of analytic ethics in Anglo-American thought, in Europe several new lines of philosophy directed at critique
of existing societies arose between the 1950s and 1980s. Most of these took elements of Marxist economic analysis,
but combined them with a more cultural or ideological emphasis. Out of the Frankfurt School, thinkers like Herbert
Marcuse, Theodor W. Adorno, Max Horkheimer, and Jürgen Habermas combined Marxian and Freudian perspectives. Along
somewhat different lines, a number of other continental thinkers—still largely influenced by Marxism—put new emphases
on structuralism and on a "return to Hegel". Within the (post-) structuralist line (though mostly not taking that
label) are thinkers such as Gilles Deleuze, Michel Foucault, Claude Lefort, and Jean Baudrillard. The Situationists
were more influenced by Hegel; Guy Debord, in particular, moved a Marxist analysis of commodity fetishism to the
realm of consumption, and looked at the relation between consumerism and dominant ideology formation. Another debate
developed around the (distinct) criticisms of liberal political theory made by Michael Walzer, Michael Sandel and
Charles Taylor. The liberal-communitarian debate is often considered valuable for generating a new set of philosophical problems, rather than for being a profound and illuminating clash of perspectives. These and other communitarians (such as Alasdair
MacIntyre and Daniel A. Bell) argue that, contra liberalism, communities are prior to individuals and therefore should
be the center of political focus. Communitarians tend to support greater local control as well as economic and social
policies which encourage the growth of social capital. A pair of overlapping political perspectives arising toward
the end of the 20th century are republicanism (or neo- or civic-republicanism) and the capability approach. The resurgent
republican movement aims to provide an alternative to Isaiah Berlin's positive and negative forms of liberty, namely "liberty as non-domination." Unlike liberals, who understand liberty as "non-interference," "non-domination" entails individuals not being subject to the arbitrary will of any other person. To a liberal, a
slave who is not interfered with may be free, yet to a republican the mere status as a slave, regardless of how that
slave is treated, is objectionable. Prominent republicans include historian Quentin Skinner, jurist Cass Sunstein,
and political philosopher Philip Pettit. The capability approach, pioneered by economists Mahbub ul Haq and Amartya
Sen and further developed by legal scholar Martha Nussbaum, understands freedom along allied lines: the real-world
ability to act. Both the capability approach and republicanism treat choice as something which must be resourced.
In other words, it is not enough to be legally able to do something; one must also have the real option of doing it.
An alloy is a mixture of metals or a mixture of a metal and another element. Alloys are defined by metallic bonding character.
An alloy may be a solid solution of metal elements (a single phase) or a mixture of metallic phases (two or more
solutions). Intermetallic compounds are alloys with a defined stoichiometry and crystal structure. Zintl phases are
also sometimes considered alloys depending on bond types (see also: Van Arkel-Ketelaar triangle for information on
classifying bonding in binary compounds). An alloy is a mixture of either pure or fairly pure chemical elements,
which forms an impure substance (admixture) that retains the characteristics of a metal. An alloy is distinct from
an impure metal, such as wrought iron, in that, with an alloy, the added impurities are usually desirable and will
typically have some useful benefit. Alloys are made by mixing two or more elements, at least one of which is a
metal. This is usually called the primary metal or the base metal, and the name of this metal may also be the name
of the alloy. The other constituents may or may not be metals but, when mixed with the molten base, they will be
soluble, dissolving into the mixture. When the alloy cools and solidifies (crystallizes), its mechanical properties
will often be quite different from those of its individual constituents. A metal that is normally very soft and malleable,
such as aluminium, can be altered by alloying it with another soft metal, like copper. Although both metals are very
soft and ductile, the resulting aluminium alloy will be much harder and stronger. Adding a small amount of non-metallic
carbon to iron produces an alloy called steel. Due to its very high strength and toughness (much higher than those of pure iron), and its ability to be greatly altered by heat treatment, steel is one of the most common alloys
in modern use. By adding chromium to steel, its resistance to corrosion can be enhanced, creating stainless steel,
while adding silicon will alter its electrical characteristics, producing silicon steel. Although the elements usually
must be soluble in the liquid state, they may not always be soluble in the solid state. If the metals remain soluble
when solid, the alloy forms a solid solution, becoming a homogeneous structure consisting of identical crystals,
called a phase. If the mixture cools and the constituents become insoluble, they may separate to form two or more
different types of crystals, creating a heterogeneous microstructure of different phases. However, in other alloys,
the insoluble elements may not separate until after crystallization occurs. These alloys are called intermetallic
alloys because, if cooled very quickly, they first crystallize as a homogeneous phase, but they are supersaturated
with the secondary constituents. As time passes, the atoms of these supersaturated alloys separate within the crystals,
forming intermetallic phases that serve to reinforce the crystals internally. Some alloys occur naturally, such as
electrum, which is an alloy that is native to Earth, consisting of silver and gold. Meteorites are sometimes made
of naturally occurring alloys of iron and nickel, but are not native to the Earth. One of the first alloys made by
humans was bronze, which is made by mixing the metals tin and copper. Bronze was an extremely useful alloy to the
ancients, because it is much stronger and harder than either of its components. Steel was another common alloy. However,
in ancient times, it could only be created as an accidental byproduct from the heating of iron ore in fires (smelting)
during the manufacture of iron. Other ancient alloys include pewter, brass and pig iron. In the modern age, steel
can be created in many forms. Carbon steel can be made by varying only the carbon content, producing soft alloys
like mild steel or hard alloys like spring steel. Alloy steels can be made by adding other elements, such as molybdenum,
vanadium or nickel, resulting in alloys such as high-speed steel or tool steel. Small amounts of manganese are usually
alloyed with most modern steels because of its ability to remove unwanted impurities, like phosphorus, sulfur and
oxygen, which can have detrimental effects on the alloy. However, most alloys were not created until the 1900s, such
as various aluminium, titanium, nickel, and magnesium alloys. Some modern superalloys, such as incoloy, inconel,
and hastelloy, may consist of a multitude of different components. The term alloy is used to describe a mixture of
atoms in which the primary constituent is a metal. The primary metal is called the base, the matrix, or the solvent.
The secondary constituents are often called solutes. If there is a mixture of only two types of atoms, not counting
impurities, such as a copper-nickel alloy, then it is called a binary alloy. If there are three types of atoms forming
the mixture, such as iron, nickel and chromium, then it is called a ternary alloy. An alloy with four constituents
is a quaternary alloy, while a five-part alloy is termed a quinary alloy. Because the percentage of each constituent
can be varied, the entire range of possible variations of any mixture is called a system. In this respect, all of the various forms of an alloy containing only two constituents, like iron and carbon, are called a binary system, while all of the alloy combinations possible with a ternary alloy, such as alloys of iron, carbon and chromium, are called a ternary system. Although an alloy is technically an impure metal, when referring to alloys, the term "impurities"
usually denotes those elements which are not desired. These impurities are often found in the base metals or the
solutes, but they may also be introduced during the alloying process. For instance, sulfur is a common impurity in
steel. Sulfur combines readily with iron to form iron sulfide, which is very brittle, creating weak spots in the
steel. Lithium, sodium and calcium are common impurities in aluminium alloys, which can have adverse effects on the
structural integrity of castings. Conversely, otherwise pure metals that simply contain unwanted impurities are often
called "impure metals" and are not usually referred to as alloys. Oxygen, present in the air, readily combines with
most metals to form metal oxides, especially at the higher temperatures encountered during alloying. Great care is often
taken during the alloying process to remove excess impurities, using fluxes, chemical additives, or other methods
of extractive metallurgy. The term "alloy" is sometimes used in everyday speech as a synonym for a particular alloy.
For example, automobile wheels made of an aluminium alloy are commonly referred to as simply "alloy wheels", although
in point of fact steels and most other metals in practical use are also alloys. Steel is such a common alloy that
many items made from it, like wheels, barrels, or girders, are simply referred to by the name of the item, assuming
it is made of steel. When made from other materials, they are typically specified as such (e.g., "bronze wheel,"
"plastic barrel," or "wood girder"). Alloying a metal is done by combining it with one or more other metals or non-metals
that often enhance its properties. For example, steel is stronger than iron, its primary element. The electrical
and thermal conductivity of alloys is usually lower than that of the pure metals. The physical properties of an alloy, such as density, reactivity and Young's modulus, may not differ greatly from those of its elements, but engineering properties such as tensile strength and shear strength may be substantially different from those of the constituent
materials. This is sometimes a result of the sizes of the atoms in the alloy, because larger atoms exert a compressive
force on neighboring atoms, and smaller atoms exert a tensile force on their neighbors, helping the alloy resist
deformation. Sometimes alloys may exhibit marked differences in behavior even when small amounts of one element are
present. For example, impurities in semiconducting ferromagnetic alloys lead to different properties, as first predicted
by White, Hogan, Suhl, Tian Abrie and Nakamura. Some alloys are made by melting and mixing two or more metals. Bronze,
an alloy of copper and tin, was the first alloy discovered, during the prehistoric period now known as the bronze
age; it was harder than pure copper and originally used to make tools and weapons, but was later superseded by metals
and alloys with better properties. In later times bronze has been used for ornaments, bells, statues, and bearings.
Brass is an alloy made from copper and zinc. Alloys are often made to alter the mechanical properties of the base
metal, to induce hardness, toughness, ductility, or other desired properties. Most metals and alloys can be work
hardened by creating defects in their crystal structure. These defects are created during plastic deformation, such
as hammering or bending, and are permanent unless the metal is recrystallized. However, some alloys can also have
their properties altered by heat treatment. Nearly all metals can be softened by annealing, which recrystallizes
the alloy and repairs the defects, but not as many can be hardened by controlled heating and cooling. Many alloys
of aluminium, copper, magnesium, titanium, and nickel can be strengthened to some degree by some method of heat treatment,
but few respond to this to the same degree that steel does. At a certain temperature (usually between 1,500 °F (820 °C) and 1,600 °F (870 °C), depending on carbon content), the base metal of steel undergoes a change in the arrangement
of the atoms in its crystal matrix, called allotropy. This allows the small carbon atoms to enter the interstices
of the iron crystal, diffusing into the iron matrix. When this happens, the carbon atoms are said to be in solution,
or mixed with the iron, forming a single, homogeneous, crystalline phase called austenite. If the steel is cooled
slowly, the iron will gradually change into its low temperature allotrope. When this happens, the carbon atoms will no longer be soluble in the iron, and will be forced to precipitate out of solution, nucleating into the spaces
between the crystals. The steel then becomes heterogeneous, being formed of two phases; the carbon (carbide) phase
cementite, and ferrite. This type of heat treatment produces steel that is rather soft and bendable. However, if
the steel is cooled quickly the carbon atoms will not have time to precipitate. When rapidly cooled, a diffusionless
(martensite) transformation occurs, in which the carbon atoms become trapped in solution. This causes the iron crystals
to deform intrinsically when the crystal structure tries to change to its low temperature state, making it very hard
and brittle. Conversely, most heat-treatable alloys are precipitation hardening alloys, which produce the opposite effect to steel. When heated to form a solution and then cooled quickly, these alloys become much softer than normal during the diffusionless transformation, and then harden as they age. The solutes in these alloys will
precipitate over time, forming intermetallic phases, which are difficult to discern from the base metal. Unlike steel,
in which the solid solution separates to form different crystal phases, precipitation hardening alloys separate to
form different phases within the same crystal. These intermetallic alloys appear homogeneous in crystal structure, but tend to behave heterogeneously, becoming hard and somewhat brittle. When a molten metal is mixed with another substance,
there are two mechanisms that can cause an alloy to form, called atom exchange and the interstitial mechanism. The
relative size of each element in the mix plays a primary role in determining which mechanism will occur. When the
atoms are relatively similar in size, the atom exchange method usually happens, where some of the atoms composing
the metallic crystals are substituted with atoms of the other constituent. This is called a substitutional alloy.
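The size criterion that decides between these two mechanisms can be sketched in a few lines of code. The thresholds and atomic radii below are illustrative assumptions drawn from the classic Hume-Rothery size rules, not from this article: radii within roughly 15% of each other favor substitution, while a solute below roughly 59% of the solvent radius tends to fit interstitially.

```python
# Rough classifier for the two alloying mechanisms, based on the ratio of
# solute to solvent atomic radius. Thresholds follow the Hume-Rothery size
# rules (an assumption for illustration, not a statement from the text).

# Approximate atomic radii in picometres (illustrative values).
ATOMIC_RADIUS_PM = {
    "Fe": 126, "C": 70, "Cu": 128, "Zn": 134, "Sn": 145,
    "Ni": 124, "Cr": 128,
}

def likely_mechanism(solvent: str, solute: str) -> str:
    """Guess whether a solute dissolves substitutionally or interstitially."""
    ratio = ATOMIC_RADIUS_PM[solute] / ATOMIC_RADIUS_PM[solvent]
    if abs(1.0 - ratio) <= 0.15:
        return "substitutional"   # similar sizes: atom-exchange mechanism
    if ratio < 0.59:
        return "interstitial"     # small solute slips into the interstices
    return "limited solubility"   # too mismatched for either simple picture

# Carbon in iron (steel) is interstitial; zinc in copper (brass) is
# substitutional, matching the examples discussed in the text.
print(likely_mechanism("Fe", "C"))    # interstitial
print(likely_mechanism("Cu", "Zn"))   # substitutional
```

The same check applied to tin in copper (bronze) also returns "substitutional", consistent with the examples that follow.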
Examples of substitutional alloys include bronze and brass, in which some of the copper atoms are substituted with
either tin or zinc atoms. With the interstitial mechanism, one atom is usually much smaller than the other, so cannot
successfully replace an atom in the crystals of the base metal. The smaller atoms become trapped in the spaces between
the atoms in the crystal matrix, called the interstices. This is referred to as an interstitial alloy. Steel is an
example of an interstitial alloy, because the very small carbon atoms fit into interstices of the iron matrix. Stainless
steel is an example of a combination of interstitial and substitutional alloys, because the carbon atoms fit into
the interstices, but some of the iron atoms are replaced with nickel and chromium atoms. The use of alloys by humans
started with the use of meteoric iron, a naturally occurring alloy of nickel and iron. It is the main constituent
of iron meteorites which occasionally fall down on Earth from outer space. As no metallurgic processes were used
to separate iron from nickel, the alloy was used as it was. Meteoric iron could be forged from a red heat to make
objects such as tools, weapons, and nails. In many cultures it was shaped by cold hammering into knives and arrowheads.
They were often used as anvils. Meteoric iron was very rare and valuable, and difficult for ancient people to work.
Iron is usually found as iron ore on Earth, except for one deposit of native iron in Greenland, which was used by
the Inuit people. Native copper, however, was found worldwide, along with silver, gold and platinum, which were also
used to make tools, jewelry, and other objects since Neolithic times. Copper was the hardest of these metals, and
the most widely distributed. It became one of the most important metals to the ancients. Eventually, humans learned
to smelt metals such as copper and tin from ore, and, around 2500 BC, began alloying the two metals to form bronze,
which is much harder than its ingredients. Tin was rare, however, being found mostly in Great Britain. In the Middle
East, people began alloying copper with zinc to form brass. Ancient civilizations took into account the mixture and
the various properties it produced, such as hardness, toughness and melting point, under various conditions of temperature
and work hardening, developing much of the information contained in modern alloy phase diagrams. Arrowheads from
the Chinese Qin dynasty (around 200 BC) were often constructed with a hard bronze head, but a softer bronze tang,
combining the alloys to prevent both dulling and breaking during use. Mercury has been smelted from cinnabar for
thousands of years. Mercury dissolves many metals, such as gold, silver, and tin, to form amalgams (an alloy in a
soft paste, or liquid form at ambient temperature). Amalgams have been used since 200 BC in China for plating objects
with precious metals, called gilding, such as armor and mirrors. The ancient Romans often used mercury-tin amalgams
for gilding their armor. The amalgam was applied as a paste and then heated until the mercury vaporized, leaving
the gold, silver, or tin behind. Mercury was often used in mining, to extract precious metals like gold and silver
from their ores. Many ancient civilizations alloyed metals for purely aesthetic purposes. In ancient Egypt and Mycenae,
gold was often alloyed with copper to produce red-gold, or iron to produce a bright burgundy-gold. Gold was often
found alloyed with silver or other metals to produce various types of colored gold. These metals were also used to
strengthen each other, for more practical purposes. Copper was often added to silver to make sterling silver, increasing
its strength for use in dishes, silverware, and other practical items. Quite often, precious metals were alloyed
with less valuable substances as a means to deceive buyers. Around 250 BC, Archimedes was commissioned by the king
to find a way to check the purity of the gold in a crown, leading to the famous bath-house shouting of "Eureka!"
upon the discovery of Archimedes' principle. The term pewter covers a variety of alloys consisting primarily of tin.
As a pure metal, tin was much too soft to be used for any practical purpose. However, in the Bronze age, tin was
a rare metal and, in many parts of Europe and the Mediterranean, was often valued higher than gold. To make jewelry,
forks and spoons, or other objects from tin, it was usually alloyed with other metals to increase its strength and
hardness. These metals were typically lead, antimony, bismuth or copper. These solutes sometimes were added individually
in varying amounts, or added together, making a wide variety of things, ranging from practical items, like dishes,
surgical tools, candlesticks or funnels, to decorative items such as earrings and hair clips. The first known smelting
of iron began in Anatolia, around 1800 BC. Called the bloomery process, it produced very soft but ductile wrought
iron. By 800 BC, iron-making technology had spread to Europe, arriving in Japan around 700 AD. Pig iron, a very hard
but brittle alloy of iron and carbon, was being produced in China as early as 1200 BC, but did not arrive in Europe
until the Middle Ages. Pig iron has a lower melting point than iron, and was used for making cast iron. However,
these metals found little practical use until the introduction of crucible steel around 300 BC. These steels were
of poor quality, and the introduction of pattern welding, around the 1st century AD, sought to balance the extreme
properties of the alloys by laminating them, to create a tougher metal. Around 700 AD, the Japanese began folding
bloomery steel and cast iron in alternating layers to increase the strength of their swords, using clay fluxes to remove slag and impurities. This method of Japanese swordsmithing produced one of the purest steel alloys of the
early Middle Ages. While the use of iron started to become more widespread around 1200 BC, mainly because of interruptions
in the trade routes for tin, the metal is much softer than bronze. However, very small amounts of steel (an alloy of iron and around 1% carbon) were always a byproduct of the bloomery process. The ability to modify the hardness
of steel by heat treatment had been known since 1100 BC, and the rare material was valued for the manufacture of
tools and weapons. Because the ancients could not produce temperatures high enough to melt iron fully, the production
of steel in decent quantities did not occur until the introduction of blister steel during the Middle Ages. This
method introduced carbon by heating wrought iron in charcoal for long periods of time, but the penetration of carbon
was not very deep, so the alloy was not homogeneous. In 1740, Benjamin Huntsman began melting blister steel in a
crucible to even out the carbon content, creating the first process for the mass production of tool steel. Huntsman's
process was used for manufacturing tool steel until the early 1900s. With the introduction of the blast furnace to
Europe in the Middle Ages, pig iron could be produced in much higher volumes than wrought iron. Because pig
iron could be melted, people began to develop processes of reducing the carbon in the liquid pig iron to create steel.
Puddling was introduced during the 1700s, in which molten pig iron was stirred while exposed to the air, to remove the
carbon by oxidation. In 1858, Sir Henry Bessemer developed a process of steel-making by blowing hot air through liquid
pig iron to reduce the carbon content. The Bessemer process enabled the first large-scale manufacture
of steel. Once the Bessemer process began to gain widespread use, other alloys of steel began to follow. Mangalloy,
an alloy of steel and manganese exhibiting extreme hardness and toughness, was one of the first alloy steels, and
was created by Robert Hadfield in 1882. In 1906, precipitation hardening alloys were discovered by Alfred Wilm. Precipitation
hardening alloys, such as certain alloys of aluminium, titanium, and copper, are heat-treatable alloys that soften
when quenched (cooled quickly), and then harden over time. After quenching a ternary alloy of aluminium, copper,
and magnesium, Wilm discovered that the alloy increased in hardness when left to age at room temperature. Although
an explanation for the phenomenon was not provided until 1919, duralumin was one of the first "age hardening" alloys
to be used, and was soon followed by many others. Because they often exhibit a combination of high strength and low
weight, these alloys became widely used in many forms of industry, including the construction of modern aircraft.
Norfolk Island (i/ˈnɔːrfək ˈaɪlənd/; Norfuk: Norf'k Ailen) is a small island in the Pacific Ocean located between Australia,
New Zealand and New Caledonia, 1,412 kilometres (877 mi) directly east of mainland Australia's Evans Head, and about
900 kilometres (560 mi) from Lord Howe Island. The island is part of the Commonwealth of Australia. Together with
two neighbouring islands, it forms one of Australia's external territories. It has 1,796 inhabitants living on a
total area of about 35 km2 (14 sq mi). Its capital is Kingston. Norfolk Island was colonised by East Polynesians
but was long unpeopled when it was settled by Great Britain as part of its settlement of Australia from 1788. The
island served as a convict penal settlement from 6 March 1788 until 5 May 1855, except for an 11-year hiatus between
15 February 1814 and 6 June 1825, when it lay abandoned. On 8 June 1856, permanent civilian residence on the island
began when it was settled from Pitcairn Island. In 1913, the UK handed Norfolk over to Australia to administer as
an external territory. Sir John Call argued the advantages of Norfolk Island: it was uninhabited, and
New Zealand flax grew there. In 1786 the British government included Norfolk Island as an auxiliary settlement, as
proposed by John Call, in its plan for colonisation of New South Wales. The decision to settle Norfolk Island was
prompted by Empress Catherine II of Russia's decision to restrict sales of hemp. Practically all the hemp and flax
required by the Royal Navy for cordage and sailcloth was imported from Russia. As early as 1794, Lieutenant-Governor
of New South Wales Francis Grose suggested its closure as a penal settlement, as it was too remote and difficult
for shipping and too costly to maintain. The first group of people left in February 1805, and by 1808 only about
200 remained, forming a small settlement until the remnants were removed in 1813. A small party remained to slaughter
stock and destroy all buildings, so that there would be no inducement for anyone, especially from other European
powers, to visit and lay claim to the place. From 15 February 1814 to 6 June 1825 the island was abandoned. In 1824
the British government instructed the Governor of New South Wales Thomas Brisbane to occupy Norfolk Island as a place
to send "the worst description of convicts". Its remoteness, previously seen as a disadvantage, was now viewed as
an asset for the detention of recalcitrant male prisoners. The convicts detained have long been assumed to be a hardcore
of recidivists, or 'doubly-convicted capital respites' – that is, men transported to Australia who committed fresh
colonial crimes for which they were sentenced to death, and were spared the gallows on condition of life at Norfolk
Island. However, a recent study has demonstrated, utilising a database of 6,458 Norfolk Island convicts, that the
reality was somewhat different: more than half were detained at Norfolk Island without ever receiving a colonial
conviction, and only 15% had been reprieved from a death sentence. Furthermore, the overwhelming majority of convicts
sent to Norfolk Island had committed non-violent property offences, and the average length of detention was three
years. On 8 June 1856, the next settlement began on Norfolk Island. The new settlers were descendants of Tahitians
and of the HMS Bounty mutineers, including those of Fletcher Christian. They resettled from the Pitcairn Islands, which had
become too small for their growing population. On 3 May 1856, 193 persons left Pitcairn Islands aboard the "Morayshire".
On 8 June 194 persons arrived, a baby having been born in transit. The Pitcairners occupied many of the buildings
remaining from the penal settlements, and gradually established traditional farming and whaling industries on the
island. Although some families decided to return to Pitcairn in 1858 and 1863, the island's population continued
to grow. They accepted additional settlers, who often arrived with whaling fleets. After the creation of the Commonwealth
of Australia in 1901, Norfolk Island was placed under the authority of the new Commonwealth government to be administered
as an external territory. During World War II, the island became a key airbase and refuelling depot between Australia
and New Zealand, and New Zealand and the Solomon Islands. The airstrip was constructed by Australian, New Zealand
and United States servicemen during 1942. Since Norfolk Island fell within New Zealand's area of responsibility it
was garrisoned by a New Zealand Army unit known as N Force at a large Army camp which had the capacity to house a
1,500 strong force. N Force relieved a company of the Second Australian Imperial Force. The island proved too remote
to come under attack during the war and N Force left the island in February 1944. Financial problems and a reduction
in tourism led to Norfolk Island's administration appealing to the Australian federal government for assistance in
2010. In return, the islanders were to pay income tax for the first time but would be eligible for greater welfare
benefits. However, by May 2013 agreement had not been reached and islanders were having to leave to find work and
welfare. An agreement was finally signed in Canberra on 12 March 2015 to replace self-government with a local council
but against the wishes of the Norfolk Island government. A majority of Norfolk Islanders have objected to the Australian
plan to make changes to Norfolk Island without first consulting them, with 68% of voters opposing the
forced changes. Norfolk Island is located in the South Pacific Ocean, east of the Australian mainland. Norfolk Island
is the main island of the island group the territory encompasses and is located at 29°02′S 167°57′E. It has an area
of 34.6 square kilometres (13.4 sq mi), with no large-scale internal
bodies of water and 32 km (20 mi) of coastline. The island's highest point is Mount Bates (319 metres (1,047 feet)
above sea level), located in the northwest quadrant of the island. The majority of the terrain is suitable for farming
and other agricultural uses. Phillip Island, the second largest island of the territory, is located at 29°07′S 167°57′E,
seven kilometres (4.3 miles) south of the main island. The coastline of
Norfolk Island consists, to varying degrees, of cliff faces. A downward slope exists towards Slaughter Bay and Emily
Bay, the site of the original colonial settlement of Kingston. There are no safe harbour facilities on Norfolk Island,
with loading jetties existing at Kingston and Cascade Bay. All goods not domestically produced are brought in by
ship, usually to Cascade Bay. Emily Bay, protected from the Pacific Ocean by a small coral reef, is the only safe
area for recreational swimming, although surfing waves can be found at Anson and Ball Bays. Norfolk Island has 174
native plants; 51 of them are endemic. At least 18 of the endemic species are rare or threatened. The Norfolk Island
palm (Rhopalostylis baueri) and the smooth tree-fern (Cyathea brownii), the tallest tree-fern in the world, are common
in the Norfolk Island National Park but rare elsewhere on the island. Before European colonization, most of Norfolk
Island was covered with subtropical rain forest, the canopy of which was made of Araucaria heterophylla (Norfolk
Island pine) in exposed areas, and the palm Rhopalostylis baueri and tree ferns Cyathea brownii and C. australis
in moister protected areas. The understory was thick with lianas and ferns covering the forest floor. Only one small
tract (5 km2) of rainforest remains, which was declared as the Norfolk Island National Park in 1986. As a relatively
small and isolated oceanic island, Norfolk has few land birds but a high degree of endemicity among them. Many of
the endemic species and subspecies have become extinct as a result of massive clearance of the island's native subtropical
rainforest for agriculture, as well as hunting and persecution as agricultural pests. The birds have also suffered
from the introduction of mammals such as rats, cats, pigs and goats, as well as from introduced competitors such
as common blackbirds and crimson rosellas. Nepean Island, part of the Norfolk Island Group, is also home to breeding seabirds.
The providence petrel was hunted to local extinction by the beginning of the 19th century, but has shown signs of
returning to breed on Phillip Island. Other seabirds breeding there include the white-necked petrel, Kermadec petrel,
wedge-tailed shearwater, Australasian gannet, red-tailed tropicbird and grey ternlet. The sooty tern (known locally
as the whale bird) has traditionally been subject to seasonal egg harvesting by Norfolk Islanders. Cetaceans were
historically abundant around the island, and commercial hunts operated there until 1956. Larger whales have since
become scarce, but many species, such as the humpback whale, minke whale, sei whale, and various dolphins, can still
be observed close to shore, and scientific surveys have been conducted regularly. Southern right whales were once
regular migrants to Norfolk, which whalers accordingly named the "Middle ground", but they were severely depleted
by historical hunts, and further by illegal Soviet and Japanese whaling, so that few, if any, right whales now remain
in these waters or around Lord Howe Island. Sixty-two percent of islanders are
Christians. After the death of the first chaplain Rev G. H. Nobbs in 1884, a Methodist church was formed and in 1891
a Seventh-day Adventist congregation was formed, led by one of Nobbs' sons. Some unhappiness with G. H. Nobbs, the more organised
and formal ritual of the Church of England service arising from the influence of the Melanesian Mission, decline
in spirituality, the influence of visiting American whalers, literature sent by Christians overseas impressed by
the Pitcairn story, and the adoption of Seventh-day Adventism by the descendants of the mutineers still on Pitcairn,
all contributed to these developments. The Roman Catholic Church began work in 1957 and in the late 1990s a group
left the former Methodist (then Uniting Church) and formed a charismatic fellowship. In 2011, 34 percent of the ordinary
residents identified as Anglican, 13 percent as Uniting Church, 12 percent as Roman Catholic and three percent as
Seventh-day Adventist. Nine percent were from other religions. Twenty-four percent had no religion, and seven percent
did not indicate a religion. Typical ordinary congregations in any church do not exceed 30 local residents as of
2010. The three older denominations have good facilities. Ministers are usually short-term visitors. Islanders
speak both English and a creole language known as Norfuk, a blend of 18th-century English and Tahitian. The Norfuk
language is decreasing in popularity as more tourists travel to the island and more young people leave for work and
study reasons; however, there are efforts to keep it alive via dictionaries and the renaming of some tourist attractions
to their Norfuk equivalents. In 2004 an act of the Norfolk Island Assembly made it a co-official language of the
island. The act is long-titled: "An Act to recognise the Norfolk Island Language (Norf'k) as an official language
of Norfolk Island." The "language known as 'Norf'k'" is described as the language "that is spoken by descendants
of the first free settlers of Norfolk Island who were descendants of the settlers of Pitcairn Island". The act recognises
and protects use of the language but does not require it; in official use, it must be accompanied by an accurate
translation into English. 32% of the total population reported speaking a language other than English in the 2011
census, and just under three-quarters of the ordinarily resident population could speak Norfuk. Norfolk Island is
the only non-mainland Australian territory to have achieved self-governance. The Norfolk Island Act 1979, passed
by the Parliament of Australia in 1979, is the Act under which the island was governed until the passing of the Norfolk
Island Legislation Amendment Act 2015. The Australian government maintains authority on the island through an Administrator,
currently Gary Hardgrave. From 1979 to 2015, a Legislative Assembly was elected by popular vote for terms of not
more than three years, although legislation passed by the Australian Parliament could extend its laws to the territory
at will, including the power to override any laws made by the assembly. The Assembly consisted of nine seats, with
electors casting nine equal votes, of which no more than two could be given to any individual candidate. This method
of voting was called a "weighted first past the post" system. Four of the members of the Assembly formed the Executive
Council, which devised policy and acted as an advisory body to the Administrator. The last Chief Minister of Norfolk
Island was Lisle Snell. Other ministers included: Minister for Tourism, Industry and Development; Minister for Finance;
Minister for Cultural Heritage and Community Services; and Minister for Environment. Disagreements over the island's
relationship with Australia were put in sharper relief by a 2006 review undertaken by the Australian government.
Under the more radical of two models proposed in the review, the island's legislative assembly would have been reduced
to the status of a local council. However, in December 2006, citing the "significant disruption" that changes to
the governance would impose on the island's economy, the Australian government ended the review leaving the existing
governance arrangements unaltered. It was announced on 19 March 2015 that self-governance for the island would be
revoked by the Commonwealth and replaced by a local council with the state of New South Wales providing services
to the island. A reason given was that the island had never gained self-sufficiency and was being heavily subsidised
by the Commonwealth, by $12.5 million in 2015 alone. It meant that residents would have to start paying Australian
income tax, but they would also be covered by Australian welfare schemes such as Centrelink and Medicare. The Norfolk
Island Legislative Assembly decided to hold a referendum on the proposal. On 8 May 2015, voters were asked if Norfolk
Islanders should freely determine their political status and their economic, social and cultural development, and
to "be consulted at referendum or plebiscite on the future model of governance for Norfolk Island before such changes
are acted upon by the Australian parliament". 68% out of 912 voters voted in favour. The Norfolk Island Chief Minister,
Lisle Snell, said that "the referendum results blow a hole in Canberra's assertion that the reforms introduced before
the Australian Parliament that propose abolishing the Legislative Assembly and Norfolk Island Parliament were overwhelmingly
supported by the people of Norfolk Island". Norfolk Island was originally a colony acquired by settlement but was
never within the British Settlements Act. It was accepted as a territory of Australia, separate from any state, by
the Norfolk Island Act 1913 (Cth), passed under the territories power (Constitution section 122) and made effective
in 1914. In 1976 the High Court of Australia held unanimously that Norfolk Island is a part of the Commonwealth.
Again, in 2007 the High Court of Australia affirmed the validity of legislation that made Australian citizenship
a necessary qualification for voting for, and standing for election to, the Legislative Assembly of Norfolk Island.
The island is subject to separate immigration controls from the remainder of Australia. Until recently immigration
to Norfolk Island even by other Australian citizens was heavily restricted. In 2012, immigration controls were relaxed
with the introduction of an Unrestricted Entry Permit for all Australian and New Zealand citizens upon arrival and
the option to apply for residency; the only criteria are to pass a police check and be able to pay into the local
health scheme. From 1 July 2016, the Australian migration system will replace the immigration arrangements currently
maintained by the Norfolk Island Government. Australian citizens and residents from other parts of the nation now
have automatic right of residence on the island after meeting these criteria (Immigration (Amendment No. 2) Act 2012).
Australian citizens must carry either a passport or a Document of Identity to travel to Norfolk Island. Citizens
of all other nations must carry a passport to travel to Norfolk Island even if arriving from other parts of Australia.
Holders of Australian visas who travel to Norfolk Island have departed the Australian Migration Zone. Unless they
hold a multiple-entry visa, the visa will have ceased, in which case they will require another visa to re-enter mainland
Australia. Non-Australian citizens who are Australian permanent residents should be aware that during their stay
on Norfolk Island they are "outside of Australia" for the purposes of the Migration Act. This means that not only
will they need a still-valid migrant visa or Resident return visa to return from Norfolk Island to the mainland,
but also the time spent in Norfolk Island will not be counted for satisfying the residence requirement for obtaining
a Resident return visa in the future. On the other hand, as far as Australian nationality law is concerned, Norfolk
Island is a part of Australia, and any time spent by an Australian permanent resident on Norfolk Island will count
as time spent in Australia for the purpose of applying for Australian citizenship. Norfolk Island Hospital is the
only medical centre on the island. Medicare and the Pharmaceutical Benefits Scheme do not cover Norfolk Island. All
visitors to Norfolk Island, including Australians, are recommended to purchase travel insurance. Although the hospital
can perform minor surgery, serious medical conditions are not permitted to be treated on the island and patients
are flown back to mainland Australia. Air charter transport can cost in the order of A$30,000. For serious emergencies,
medical evacuations are provided by the Royal Australian Air Force. The island has one ambulance staffed by St John
Ambulance Australia volunteers. The Australian government controls the exclusive economic zone (EEZ) and revenue
from it, extending 200 nautical miles (370 km) around Norfolk Island (roughly 428,000 km2), and territorial sea claims
to three nautical miles (6 km) from the island. There is a strong belief on the island that some of the revenue generated
from Norfolk's EEZ should be made available to provide services such as health and infrastructure on the island, for
which the island has been responsible, similar to how the Northern Territory is able to access revenue from its mineral
resources. The exclusive economic zone provides the islanders with fish, its only major natural resource. Norfolk
Island has no direct control over any marine areas but has an agreement with the Commonwealth through the Australian
Fisheries Management Authority (AFMA) to fish "recreationally" in a small section of the EEZ known locally as "the
Box". While there is speculation that the zone may include oil and gas deposits, this is not proven. There are no
major arable lands or permanent farmlands, though about 25 per cent of the island is a permanent pasture. There is
no irrigated land. The island uses the Australian dollar as its currency. Residents of Norfolk Island do not pay
Australian federal taxes, creating a tax haven for locals and visitors alike. Because there is no income tax, the
island's legislative assembly raises money through an import duty, a fuel levy, a medicare levy, a GST of 12%, and charges
on local and international phone calls. In a move that apparently surprised many islanders, the Chief Minister of Norfolk Island, David Buffett,
announced on 6 November 2010 that the island would voluntarily surrender its tax free status in return for a financial
bailout from the federal government to cover significant debts. The introduction of income taxation will now come
into effect on 1 July 2016. Opinion on the island about these changes varies, but many understand
that for the island's governance to continue there is a need to pay into the Commonwealth revenue pool so that the
island can have assistance in supporting its delivery of State government responsibilities such as health, education,
medicare, and infrastructure. Prior to these reforms residents of Norfolk Island were not entitled to social services.
It appears that the reforms extend to companies and trustees, not only individuals. As of 2004, 2,532
telephone main lines are in use, a mix of analog (2,500) and digital (32) circuits. Satellite communications services
are planned. There is one locally based radio station (Radio Norfolk 89.9FM), broadcasting on both
AM and FM frequencies. There is also one TV station, Norfolk TV, featuring local programming, plus transmitters for
Australian channels ABC, SBS, Imparja Television and Southern Cross Television. The Internet country code top-level
domain (ccTLD) is .nf. There are no railways, waterways, ports or harbours on the island. Loading jetties are located
at Kingston and Cascade, but ships cannot get close to either of them. When a supply ship arrives, it is emptied
by whaleboats towed by launches, five tonnes at a time. Which jetty is used depends on the prevailing weather on
the day. The jetty on the leeward side of the island is often used. If the wind changes significantly during unloading/loading,
the ship will move around to the other side. Visitors often gather to watch the activity when a supply ship arrives.
Burke was born in Dublin, Ireland. His mother Mary née Nagle (c. 1702 – 1770) was a Roman Catholic who hailed from a déclassé
County Cork family (and a cousin of Nano Nagle), whereas his father, a successful solicitor, Richard (died 1761),
was a member of the Church of Ireland; it remains unclear whether this is the same Richard Burke who converted from
Catholicism. The Burke dynasty descends from an Anglo-Norman knight surnamed de Burgh (latinised as de Burgo) who
arrived in Ireland in 1185 following Henry II of England's 1171 invasion of Ireland. In 1744, Burke started at Trinity
College Dublin, a Protestant establishment, which up until 1793, did not permit Catholics to take degrees. In 1747,
he set up a debating society, "Edmund Burke's Club", which, in 1770, merged with TCD's Historical Club to form the
College Historical Society; it is the oldest undergraduate society in the world. The minutes of the meetings of Burke's
Club remain in the collection of the Historical Society. Burke graduated from Trinity in 1748. Burke's father wanted
him to read Law, and with this in mind he went to London in 1750, where he entered the Middle Temple, before soon
giving up legal study to travel in Continental Europe. After eschewing the Law, he pursued a livelihood through writing.
Burke claimed that Bolingbroke's arguments against revealed religion could apply to all social and civil institutions
as well. Lord Chesterfield and Bishop Warburton (and others) initially thought that the work was genuinely by Bolingbroke
rather than a satire. All the reviews of the work were positive, with critics especially appreciative of Burke's
quality of writing. Some reviewers failed to notice the ironic nature of the book, which led to Burke stating in
the preface to the second edition (1757) that it was a satire. Richard Hurd believed that Burke's imitation was near-perfect
and that this defeated his purpose: an ironist "should take care by a constant exaggeration to make the ridicule
shine through the Imitation. Whereas this Vindication is everywhere enforc'd, not only in the language, and on the
principles of L. Bol., but with so apparent, or rather so real an earnestness, that half his purpose is sacrificed
to the other". A minority of scholars have taken the position that, in fact, Burke did write the Vindication in earnest,
later disowning it only for political reasons. On 25 February 1757, Burke signed a contract with Robert Dodsley to
write a "history of England from the time of Julius Caesar to the end of the reign of Queen Anne", its length being
eighty quarto sheets (640 pages), nearly 400,000 words. It was to be submitted for publication by Christmas 1758.
Burke completed the work to the year 1216 and stopped; it was not published until after Burke's death, being included
in an 1812 collection of his works, entitled An Essay Towards an Abridgement of the English History. G. M. Young
did not value Burke's history and claimed that it was "demonstrably a translation from the French". Lord Acton, on
commenting on the story that Burke stopped his history because David Hume published his, said "it is ever to be regretted
that the reverse did not occur". During the year following that contract, with Dodsley, Burke founded the influential
Annual Register, a publication in which various authors evaluated the international political events of the previous
year. The extent to which Burke contributed to the Annual Register is unclear: in his biography of Burke, Robert
Murray quotes the Register as evidence of Burke's opinions, yet Philip Magnus in his biography does not cite it directly
as a reference. Burke remained the chief editor of the publication until at least 1789 and there is no evidence that
any other writer contributed to it before 1766. At about this same time, Burke was introduced to William Gerard Hamilton
(known as "Single-speech Hamilton"). When Hamilton was appointed Chief Secretary for Ireland, Burke accompanied him
to Dublin as his private secretary, a position he held for three years. In 1765 Burke became private secretary to
the liberal Whig statesman, Charles, Marquess of Rockingham, then Prime Minister of Great Britain, who remained Burke's
close friend and associate until his untimely death in 1782. Rockingham also introduced Burke as a Freemason. Burke
took a leading role in the debate regarding the constitutional limits to the executive authority of the king. He
argued strongly against unrestrained royal power and for the role of political parties in maintaining a principled
opposition capable of preventing abuses, either by the monarch, or by specific factions within the government. His
most important publication in this regard was his Thoughts on the Cause of the Present Discontents of 23 April 1770.
Burke identified the "discontents" as stemming from the "secret influence" of a neo-Tory group he labelled the
"king's friends", whose system "comprehending the exterior and interior administrations, is commonly called, in the
technical language of the Court, Double Cabinet". Britain needed a party with "an unshaken adherence to principle,
and attachment to connexion, against every allurement of interest". Party divisions "whether operating for good or
evil, are things inseparable from free government". In May 1778, Burke supported a parliamentary motion revising
restrictions on Irish trade. His constituents, citizens of the great trading city of Bristol, however, urged Burke
to oppose free trade with Ireland. Burke resisted their protestations and said: "If, from this conduct, I shall forfeit
their suffrages at an ensuing election, it will stand on record an example to future representatives of the Commons
of England, that one man at least had dared to resist the desires of his constituents when his judgment assured him
they were wrong". Burke was not merely presenting a peace agreement to Parliament; rather, he stepped forward with
four carefully reasoned objections to using force. He laid them out in an orderly manner, focusing on
one before moving to the next. His first concern was that the use of force would have to be temporary, and that the
uprisings and objections to British governance in America would not be. Second, Burke worried about the uncertainty
surrounding whether Britain would win a conflict in America. "An armament", Burke said, "is not a victory". Third,
Burke brought up the issue of impairment; it would do the British Government no good to engage in a scorched earth
war and have the object they desired (America) become damaged or even useless. The American colonists could always
retreat into the mountains, but the land they left behind would most likely be unusable, whether by accident or design.
The fourth and final reason to avoid the use of force was experience; the British had never attempted to rein in
an unruly colony by force, and they did not know if it could be done, let alone accomplished thousands of miles away
from home. Not only were all of these concerns reasonable, but some turned out to be prophetic – the American colonists
did not surrender, even when things looked extremely bleak, and the British were ultimately unsuccessful in their
attempts to win a war fought on American soil. Among the reasons this speech was so greatly admired was its passage
on Lord Bathurst (1684–1775); Burke describes an angel in 1704 prophesying to Bathurst the future greatness of England
and also of America: "Young man, There is America – which at this day serves little more than to amuse you with stories
of savage men, and uncouth manners; yet shall, before you taste of death, shew itself equal to the whole of that
commerce which now attracts the envy of the world". Samuel Johnson was so irritated at hearing it continually praised,
that he made a parody of it, in which the devil appears to a young Whig and predicts that in a short time Whiggism will
poison even the paradise of America! The administration of Lord North (1770–1782) tried to defeat the colonist rebellion
by military force. British and American forces clashed in 1775 and, in 1776, came the American Declaration of Independence.
Burke was appalled by celebrations in Britain of the defeat of the Americans at New York and Pennsylvania. He claimed
the English national character was being changed by this authoritarianism. Burke wrote: "As to the good people of
England, they seem to partake every day more and more of the Character of that administration which they have been
induced to tolerate. I am satisfied, that within a few years there has been a great Change in the National Character.
We seem no longer that eager, inquisitive, jealous, fiery people, which we have been formerly". The Paymaster General
Act 1782 ended the post as a lucrative sinecure. Previously, Paymasters had been able to draw on money from HM Treasury
at their discretion. Now they were required to put the money they had requested to withdraw from the Treasury into
the Bank of England, from where it was to be withdrawn for specific purposes. The Treasury would receive monthly
statements of the Paymaster's balance at the Bank. This act was repealed by Shelburne's administration, but the act
that replaced it repeated verbatim almost the whole text of the Burke Act. Burke was a leading sceptic with respect
to democracy. While admitting that in some cases it might theoretically be desirable, he insisted that a democratic government
in Britain in his day would not only be inept, but also oppressive. He opposed democracy for three basic reasons.
First, government required a degree of intelligence and breadth of knowledge of the sort that occurred rarely among
the common people. Second, he thought that if they had the vote, common people had dangerous and angry passions that
could be aroused easily by demagogues; he feared that the authoritarian impulses that could be empowered by these
passions would undermine cherished traditions and established religion, leading to violence and confiscation of property.
Third, Burke warned that democracy would create a tyranny over unpopular minorities, who needed the protection of
the upper classes. For years Burke pursued impeachment efforts against Warren Hastings, formerly Governor-General
of Bengal, that resulted in the trial during 1786. His interaction with the British dominion of India began well
before Hastings' impeachment trial. For two decades prior to the impeachment, Parliament had dealt with the Indian
issue. This trial was the pinnacle of years of unrest and deliberation. In 1781 Burke was first able to delve into
the issues surrounding the East India Company when he was appointed Chairman of the Commons Select Committee on East
Indian Affairs; from that point until the end of the trial, India was Burke's primary concern. This committee was
charged "to investigate alleged injustices in Bengal, the war with Hyder Ali, and other Indian difficulties". While
Burke and the committee focused their attention on these matters, a second 'secret' committee was formed to assess
the same issues. Both committee reports were written by Burke. Among other purposes, the reports conveyed to the
Indian princes that Britain would not wage war on them, along with demanding that the HEIC recall Hastings. This
was Burke's first call for substantive change regarding imperial practices. When addressing the whole House of Commons
regarding the committee report, Burke described the Indian issue as one that "began 'in commerce' but 'ended in empire.'"
On 4 April 1786, Burke presented the Commons with the Article of Charge of High Crimes and Misdemeanors against Hastings.
The impeachment in Westminster Hall, which did not begin until 14 February 1788, would be the "first major public
discursive event of its kind in England", bringing the morality and duty of imperialism to the forefront of public
perception. Burke was already known for his eloquent rhetorical skills, and his involvement in the trial only enhanced
its popularity and significance. Burke's indictment, fuelled by emotional indignation, branded Hastings a 'captain-general
of iniquity', who never dined without 'creating a famine', whose heart was 'gangrened to the core', and who resembled
both a 'spider of Hell' and a 'ravenous vulture devouring the carcasses of the dead'. The House of Commons eventually
impeached Hastings, but subsequently, the House of Lords acquitted him of all charges. Initially, Burke did not condemn
the French Revolution. In a letter of 9 August 1789, Burke wrote: "England gazing with astonishment at a French struggle
for Liberty and not knowing whether to blame or to applaud! The thing indeed, though I thought I saw something like
it in progress for several years, has still something in it paradoxical and Mysterious. The spirit it is impossible
not to admire; but the old Parisian ferocity has broken out in a shocking manner". The events of 5–6 October 1789,
when a crowd of Parisian women marched on Versailles to compel King Louis XVI to return to Paris, turned Burke against
it. In a letter to his son, Richard Burke, dated 10 October, he said: "This day I heard from Laurence who has sent
me papers confirming the portentous state of France—where the Elements which compose Human Society seem all to be
dissolved, and a world of Monsters to be produced in the place of it—where Mirabeau presides as the Grand Anarch;
and the late Grand Monarch makes a figure as ridiculous as pitiable". On 4 November Charles-Jean-François Depont
wrote to Burke, requesting that he endorse the Revolution. Burke replied that any critical language of it by him
should be taken "as no more than the expression of doubt" but he added: "You may have subverted Monarchy, but not
recover'd freedom". In the same month he described France as "a country undone". Burke's first public condemnation
of the Revolution occurred during the debate in Parliament on the army estimates on 9 February 1790, provoked by praise
of the Revolution by Pitt and Fox. In January 1790, Burke read Dr. Richard Price's sermon of 4 November 1789, entitled
A Discourse on the Love of our Country, to the Revolution Society. That society had been founded to commemorate the
Glorious Revolution of 1688. In this sermon Price espoused the philosophy of universal "Rights of Men". Price argued
that love of our country "does not imply any conviction of the superior value of it to other countries, or any particular
preference of its laws and constitution of government". Instead, Price asserted that Englishmen should see themselves
"more as citizens of the world than as members of any particular community". Immediately after reading Price's sermon,
Burke wrote a draft of what eventually became Reflections on the Revolution in France. On 13 February 1790, a notice
in the press said that Burke would shortly publish a pamphlet on the Revolution and its British supporters; however,
he spent the year revising and expanding it. On 1 November he finally published the Reflections, and it was an immediate
best-seller. Priced at five shillings, it was more expensive than most political pamphlets, but by the end of 1790,
it had gone through ten printings and sold approximately 17,500 copies. A French translation appeared on 29 November
and on 30 November the translator, Pierre-Gaëton Dupont, wrote to Burke saying 2,500 copies had already been sold.
The French translation ran to ten printings by June 1791. Burke put forward that "We fear God, we look up with awe
to kings; with affection to parliaments; with duty to magistrates; with reverence to priests; and with respect to
nobility. Why? Because when such ideas are brought before our minds, it is natural to be so affected". Burke defended
this prejudice on the grounds that it is "the general bank and capital of nations, and of ages" and superior to individual
reason, which is small in comparison. "Prejudice", Burke claimed, "is of ready application in the emergency; it previously
engages the mind in a steady course of wisdom and virtue, and does not leave the man hesitating in the moment of
decision, skeptical, puzzled, and unresolved. Prejudice renders a man's virtue his habit". Burke criticised social
contract theory by claiming that society is indeed a contract, but "a partnership not only between those who are
living, but between those who are living, those who are dead, and those who are to be born". The most famous passage
in Burke's Reflections was his description of the events of 5–6 October 1789 and the part of Marie-Antoinette in
them. Burke's account differs little from that of modern historians who have used primary sources. His use of flowery language
to describe it, however, provoked both praise and criticism. Philip Francis wrote to Burke saying that what he wrote
of Marie-Antoinette was "pure foppery". Edward Gibbon, however, reacted differently: "I adore his chivalry". Burke
was informed by an Englishman who had talked with the Duchesse de Biron, that when Marie-Antoinette was reading the
passage, she burst into tears and took considerable time to finish reading it. Price had rejoiced that the French
king had been "led in triumph" during the October Days, but to Burke this rejoicing symbolised the gulf between the
revolutionary sentiment of the Jacobins and the natural sentiments of those who shared his own view with horror: that
the ungallant assault on Marie-Antoinette was a cowardly attack on a defenceless woman. Louis XVI translated the Reflections "from end
to end" into French. Fellow Whig MPs Richard Sheridan and Charles James Fox disagreed with Burke and split with
him. Fox thought the Reflections to be "in very bad taste" and "favouring Tory principles". Other Whigs such as the
Duke of Portland and Earl Fitzwilliam privately agreed with Burke, but did not wish for a public breach with their
Whig colleagues. Burke wrote on 29 November 1790: "I have received from the Duke of Portland, Lord Fitzwilliam, the
Duke of Devonshire, Lord John Cavendish, Montagu (Frederick Montagu MP), and a long et cetera of the old Stamina
of the Whiggs a most full approbation of the principles of that work and a kind indulgence to the execution". The
Duke of Portland said in 1791 that when anyone criticised the Reflections to him, he informed them that he had recommended
the book to his sons as containing the true Whig creed. Burke's Reflections sparked a pamphlet war. Thomas Paine
penned the Rights of Man in 1791 as a response to Burke; Mary Wollstonecraft published A Vindication of the Rights
of Men and James Mackintosh wrote Vindiciae Gallicae. Mackintosh was the first to see the Reflections as "the manifesto
of a Counter Revolution". Mackintosh later agreed with Burke's views, remarking in December 1796 after meeting him,
that Burke was "minutely and accurately informed, to a wonderful exactness, with respect to every fact relating to
the French Revolution". Mackintosh later said: "Burke was one of the first thinkers as well as one of the greatest
orators of his time. He is without parallel in any age, excepting perhaps Lord Bacon and Cicero; and his works contain
an ampler store of political and moral wisdom than can be found in any other writer whatever". In November 1790,
François-Louis-Thibault de Menonville, a member of the National Assembly of France, wrote to Burke, praising Reflections
and requesting more "very refreshing mental food" that he could publish. This Burke did in April 1791 when he published
A Letter to a Member of the National Assembly. Burke called for external forces to reverse the revolution and included
an attack on the late French philosopher Jean-Jacques Rousseau, as being the subject of a personality cult that had
developed in revolutionary France. Although Burke conceded that Rousseau sometimes showed "a considerable insight
into human nature", he was mostly critical. Although he did not meet Rousseau on his visit to Britain in 1766–67, Burke
was a friend of David Hume, with whom Rousseau had stayed. Burke said Rousseau "entertained no principle either to
influence his heart, or to guide his understanding—but vanity", with which he "was possessed to a degree little short
of madness". He also cited Rousseau's Confessions as evidence that Rousseau had a life of "obscure and vulgar vices"
that was not "chequered, or spotted here and there, with virtues, or even distinguished by a single good action".
Burke contrasted Rousseau's theory of universal benevolence with his having sent his children to a foundling hospital:
"a lover of his kind, but a hater of his kindred". These events and the disagreements that arose from them within
the Whig Party led to its break-up and to the rupture of Burke's friendship with Fox. In a debate in Parliament on
Britain's relations with Russia, Fox praised the principles of the revolution, although Burke was not able to reply
at this time as he was "overpowered by continued cries of question from his own side of the House". When Parliament
was debating the Quebec Bill for a constitution for Canada, Fox praised the revolution and criticised some of Burke's
arguments, such as hereditary power. On 6 May 1791, during another debate in Parliament on the Quebec Bill, Burke
used the opportunity to answer Fox, and to condemn the new French Constitution and "the horrible consequences flowing
from the French idea of the Rights of Man". Burke asserted that those ideas were the antithesis of both the British
and the American constitutions. Burke was interrupted, and Fox intervened, saying that Burke should be allowed to
carry on with his speech. A vote of censure against Burke for noticing the affairs of France was, however, proposed
by Lord Sheffield and seconded by Fox. Pitt made a speech praising Burke, and Fox made a speech both rebuking and
complimenting him: he questioned the sincerity of Burke, who seemed to have forgotten the lessons he had learned from
him, and quoted from Burke's own speeches of fourteen and fifteen years before. At this point, Fox
whispered that there was "no loss of friendship". "I regret to say there is", Burke replied, "I have indeed made
a great sacrifice; I have done my duty though I have lost my friend. There is something in the detested French constitution
that envenoms every thing it touches". This provoked a reply from Fox, yet he was unable to give his speech for some
time, since he was overcome with tears and emotion. He appealed to Burke to remember their inalienable friendship,
but also repeated his criticisms of Burke and uttered "unusually bitter sarcasms". This only aggravated the rupture
between the two men. Burke demonstrated his separation from the party on 5 June 1791 by writing to Fitzwilliam, declining
money from him. Burke knew that many members of the Whig Party did not share Fox's views and he wanted to provoke
them into condemning the French Revolution. Burke wrote that he wanted to represent the whole Whig party "as tolerating,
and by a toleration, countenancing those proceedings" so that he could "stimulate them to a public declaration of
what every one of their acquaintance privately knows to be...their sentiments". Therefore, on 3 August 1791 Burke
published his Appeal from the New to the Old Whigs, in which he renewed his criticism of the radical revolutionary
programmes inspired by the French Revolution and attacked the Whigs who supported them, as holding principles contrary
to those traditionally held by the Whig party. Although Whig grandees such as Portland and Fitzwilliam privately
agreed with Burke's Appeal, they wished he had used more moderate language. Fitzwilliam saw the Appeal as containing
"the doctrines I have sworn by, long and long since". Francis Basset, a backbench Whig MP, wrote to Burke: "...though
for reasons which I will not now detail I did not then deliver my sentiments, I most perfectly differ from Mr. Fox
& from the great Body of opposition on the French Revolution". Burke sent a copy of the Appeal to the king and the
king requested a friend to communicate to Burke that he had read it "with great Satisfaction". Burke wrote of its
reception: "Not one word from one of our party. They are secretly galled. They agree with me to a tittle; but they
dare not speak out for fear of hurting Fox. ... They leave me to myself; they see that I can do myself justice".
Charles Burney viewed it as "a most admirable book—the best & most useful on political subjects that I have ever
seen" but believed the differences in the Whig Party between Burke and Fox should not be aired publicly. Burke supported
the war against revolutionary France, seeing Britain as fighting on the side of the royalists and émigres in a civil
war, rather than fighting against the whole nation of France. Burke also supported the royalist uprising in La Vendée,
describing it on 4 November 1793 in a letter to William Windham, as "the sole affair I have much heart in". Burke
wrote to Henry Dundas on 7 October urging him to send reinforcements there, as he viewed it as the only theatre in
the war that might lead to a march on Paris. Dundas did not follow Burke's advice, however. Burke believed the Government
was not taking the uprising seriously enough, a view reinforced by a letter he had received from the Prince Charles
of France (S.A.R. le comte d'Artois), dated 23 October, requesting that he intercede on behalf of the royalists to
the Government. Burke was forced to reply on 6 November: "I am not in His Majesty's Service; or at all consulted
in his Affairs". Burke published his Remarks on the Policy of the Allies with Respect to France, begun in October,
where he said: "I am sure every thing has shewn us that in this war with France, one Frenchman is worth twenty foreigners.
La Vendée is a proof of this". On 20 June 1794, Burke received a vote of thanks from the Commons for his services
in the Hastings Trial and he immediately resigned his seat, being replaced by his son Richard. A tragic blow fell
upon Burke with the loss of Richard in August 1794, to whom he was tenderly attached, and in whom he saw signs of
promise, which were not patent to others and which, in fact, appear to have been non-existent (though this view may
have rather reflected the fact that Richard Burke had worked successfully in the early battle for Catholic emancipation).
King George III, whose favour he had gained by his attitude on the French Revolution, wished to create him Earl of
Beaconsfield, but the death of his son deprived such an honour of all its attractions, so the only reward he would
accept was a pension of £2,500. Even this modest reward was attacked by the Duke of Bedford and
the Earl of Lauderdale, to whom Burke replied in his Letter to a Noble Lord (1796): "It cannot at this time be too
often repeated; line upon line; precept upon precept; until it comes into the currency of a proverb, To innovate
is not to reform". He argued that he was rewarded on merit, but the Duke of Bedford received his rewards from inheritance
alone, his ancestor being the original pensioner: "Mine was from a mild and benevolent sovereign; his from Henry
the Eighth". Burke also hinted at what would happen to such people if their revolutionary ideas were implemented,
and included a description of the British constitution. Burke's last publications were the Letters on a Regicide
Peace (October 1796), called forth by negotiations for peace with France by the Pitt government. Burke regarded this
as appeasement, injurious to national dignity and honour. In his Second Letter, Burke wrote of the French Revolutionary
Government: "Individuality is left out of their scheme of government. The State is all in all. Everything is referred
to the production of force; afterwards, everything is trusted to the use of it. It is military in its principle,
in its maxims, in its spirit, and in all its movements. The State has dominion and conquest for its sole objects—dominion
over minds by proselytism, over bodies by arms". This is held to be the first explanation of the modern concept of
the totalitarian state. Burke regarded the war with France as ideological, against an "armed doctrine". He wished that
France would not be partitioned due to the effect this would have on the balance of power in Europe, and that the
war was not against France, but against the revolutionaries governing her. Burke said: "It is not France extending
a foreign empire over other nations: it is a sect aiming at universal empire, and beginning with the conquest of
France". In November 1795, there was a debate in Parliament on the high price of corn and Burke wrote a memorandum
to Pitt on the subject. In December Samuel Whitbread MP introduced a bill giving magistrates the power to fix minimum
wages, and Fox said he would vote for it. This debate probably led Burke to edit his memorandum, as there appeared
a notice that Burke would soon publish a letter on the subject to the Secretary of the Board of Agriculture, Arthur
Young; but he failed to complete it. These fragments were inserted into the memorandum after his death and published
posthumously in 1800 as Thoughts and Details on Scarcity. In it, Burke expounded "some of the doctrines of political
economists bearing upon agriculture as a trade". Burke criticised policies such as maximum prices and state regulation
of wages, and set out what the limits of government should be. Writing to a friend in May 1795, Burke surveyed the
causes of discontent: "I think I can hardly overrate the malignity of the principles of Protestant ascendency, as
they affect Ireland; or of Indianism [i.e. corporate tyranny, as practised by the British East India Company], as
they affect these countries, and as they affect Asia; or of Jacobinism, as they affect all Europe, and the state
of human society itself. The last is the greatest evil". By March 1796, however, Burke had changed his mind: "Our
Government and our Laws are beset by two different Enemies, which are sapping its foundations, Indianism, and Jacobinism.
In some Cases they act separately, in some they act in conjunction: But of this I am sure; that the first is the
worst by far, and the hardest to deal with; and for this amongst other reasons, that it weakens, discredits, and ruins
that force, which ought to be employed with the greatest Credit and Energy against the other; and that it furnishes
Jacobinism with its strongest arms against all formal Government". Burke believed that property was essential to
human life. Because of his conviction that people desire to be ruled and controlled, the division of property formed
the basis for social structure, helping develop control within a property-based hierarchy. He viewed the social changes
brought on by property as the natural order of events, which should be taking place as the human race progressed.
He also believed that the division of property and the class system kept the monarch answerable to the needs of the
classes beneath him. Since property largely aligned with, or defined, divisions of social class, class too was seen
as natural—part of a social agreement that the placing of persons into different classes is for the mutual benefit
of all subjects. Concern for property was not Burke's only influence. As Christopher Hitchens summarises,
"If modern conservatism can be held to derive from Burke, it is not just because he appealed to property owners in
behalf of stability but also because he appealed to an everyday interest in the preservation of the ancestral and
the immemorial." In the nineteenth century Burke was praised by both liberals and conservatives. Burke's friend Philip
Francis wrote that Burke "was a man who truly & prophetically foresaw all the consequences which would rise from
the adoption of the French principles" but because Burke wrote with so much passion, people were doubtful of his
arguments. William Windham spoke from the same bench in the House of Commons as Burke had, when he had separated
from Fox, and an observer said Windham spoke "like the ghost of Burke" when he made a speech against peace with France
in 1801. William Hazlitt, a political opponent of Burke, regarded him as amongst his three favourite writers (the
others being Junius and Rousseau), and made it "a test of the sense and candour of any one belonging to the opposite
party, whether he allowed Burke to be a great man". William Wordsworth was originally a supporter of the French Revolution
and attacked Burke in 'A Letter to the Bishop of Llandaff' (1793), but by the early nineteenth century he had changed
his mind and came to admire Burke. In his Two Addresses to the Freeholders of Westmorland Wordsworth called Burke
"the most sagacious Politician of his age" whose predictions "time has verified". He later revised his poem The Prelude
to include praise of Burke ("Genius of Burke! forgive the pen seduced/By specious wonders") and portrayed him as
an old oak. Samuel Taylor Coleridge came to have a similar conversion: he had criticised Burke in The Watchman, but
in his Friend (1809–10) Coleridge defended Burke from charges of inconsistency. Later, in his Biographia Literaria
(1817) Coleridge hails Burke as a prophet and praises Burke for referring "habitually to principles. He was a scientific
statesman; and therefore a seer". Henry Brougham wrote of Burke: "... all his predictions, save one momentary expression,
had been more than fulfilled: anarchy and bloodshed had borne sway in France; conquest and convulsion had desolated
Europe...the providence of mortals is not often able to penetrate so far as this into futurity". George Canning believed
that Burke's Reflections "has been justified by the course of subsequent events; and almost every prophecy has been
strictly fulfilled". In 1823 Canning wrote that he took Burke's "last works and words [as] the manual of my politics".
The Conservative Prime Minister Benjamin Disraeli "was deeply penetrated with the spirit and sentiment of Burke's
later writings". The 19th-century Liberal Prime Minister William Ewart Gladstone considered Burke "a magazine of
wisdom on Ireland and America" and in his diary recorded: "Made many extracts from Burke—sometimes almost divine".
The Radical MP and anti-Corn Law activist Richard Cobden often praised Burke's Thoughts and Details on Scarcity.
The Liberal historian Lord Acton considered Burke one of the three greatest Liberals, along with William Gladstone
and Thomas Babington Macaulay. Lord Macaulay recorded in his diary: "I have now finished reading again most of Burke's
works. Admirable! The greatest man since Milton". The Gladstonian Liberal MP John Morley published two books on Burke
(including a biography) and was influenced by Burke, including his views on prejudice. The Cobdenite Radical Francis
Hirst thought Burke deserved "a place among English libertarians, even though of all lovers of liberty and of all
reformers he was the most conservative, the least abstract, always anxious to preserve and renovate rather than to
innovate. In politics he resembled the modern architect who would restore an old house instead of pulling it down
to construct a new one on the site". Burke's Reflections on the Revolution in France was controversial at the time
of its publication, but after his death, it was to become his best known and most influential work, and a manifesto
for Conservative thinking. The historian Piers Brendon asserts that Burke laid the moral foundations for the British
Empire, epitomised in the trial of Warren Hastings, that was ultimately to be its undoing: when Burke stated that
"The British Empire must be governed on a plan of freedom, for it will be governed by no other", this was "...an
ideological bacillus that would prove fatal. This was Edmund Burke's paternalistic doctrine that colonial government
was a trust. It was to be so exercised for the benefit of subject people that they would eventually attain their
birthright—freedom". As a consequence of this opinion, Burke objected to the opium trade, which he called a "smuggling
adventure" and condemned "the great Disgrace of the British character in India". Burke's religious writing comprises
published works and commentary on the subject of religion. Burke's religious thought was grounded in the belief that
religion is the foundation of civil society. He sharply criticised deism and atheism, and emphasised Christianity
as a vehicle of social progress. Born in Ireland to a Catholic mother and a Protestant father, Burke vigorously defended
the Anglican Church, but also demonstrated sensitivity to Catholic concerns. He linked the conservation of a state
(established) religion with the preservation of citizens' constitutional liberties and highlighted Christianity's
benefit not only to the believer's soul, but also to political arrangements.
The Independent State of Samoa (Samoan: Malo Sa'oloto Tuto'atasi o Sāmoa, IPA: [ˌsaːˈmoa]), commonly known as Samoa (Samoan:
Sāmoa) and formerly known as Western Samoa, is a unitary parliamentary republic with eleven administrative divisions.
The two main islands are Savai'i and Upolu, with four smaller islands surrounding them. The capital city
is Apia. The Lapita people discovered and settled the Samoan islands around 3,500 years ago. They developed a unique
language and cultural identity. The origins of the Samoans are closely studied in modern research about Polynesia
in various scientific disciplines such as genetics, linguistics and anthropology. Scientific research is ongoing,
although a number of different theories exist, including one proposing that the Samoans originated from Austronesian
predecessors during the terminal eastward Lapita expansion period from Southeast Asia and Melanesia between 2,500
and 1,500 BCE. The Samoan origins are currently being reassessed due to new scientific evidence and carbon dating
findings from 2003 onwards. Mission work in Samoa began in late 1830, when John Williams of the London Missionary
Society arrived in Sapapali'i from the Cook Islands and Tahiti. According to Barbara A. West, "The Samoans were
also known to engage in 'headhunting', a ritual of war in which a warrior took the head of his slain opponent to
give to his leader, thus proving his bravery." However, Robert Louis Stevenson, who lived in Samoa from 1889 until
his death in 1894, wrote in A Footnote to History: Eight Years of Trouble in Samoa, "… the Samoans are gentle people."
Britain also sent troops to protect British business enterprises, harbour rights, and its consulate office. This was followed
by an eight-year civil war, during which each of the three powers supplied arms, training and in some cases combat
troops to the warring Samoan parties. The Samoan crisis came to a critical juncture in March 1889 when all three
colonial contenders sent warships into Apia harbour, and a larger-scale war seemed imminent. A massive storm on 15
March 1889 damaged or destroyed the warships, ending the military conflict. From the end of World War I until 1962,
New Zealand controlled Samoa as a Class C Mandate under trusteeship through the League of Nations, then through the
United Nations. There followed a series of New Zealand administrators who were responsible for two major incidents.
In the first incident, approximately one fifth of the Samoan population died in the influenza epidemic of 1918–1919.
Between 1919 and 1962, Samoa was administered by the Department of External Affairs, a government department which
had been specially created to oversee New Zealand's Island Territories and Samoa. In 1943, this Department was renamed
the Department of Island Territories after a separate Department of External Affairs was created to conduct New Zealand's
foreign affairs. However, Samoans greatly resented New Zealand's colonial rule, and blamed inflation and the catastrophic
1918 flu epidemic on its misrule. By the late 1920s the resistance movement against colonial rule had gathered widespread
support. One of the Mau leaders was Olaf Frederick Nelson, a half Samoan and half Swedish merchant. Nelson was eventually
exiled during the late 1920s and early 1930s, but he continued to assist the organisation financially and politically.
In accordance with the Mau's non-violent philosophy, the newly elected leader, High Chief Tupua Tamasese Lealofi,
led his fellow uniformed Mau in a peaceful demonstration in downtown Apia on 28 December 1929. The New Zealand police
attempted to arrest one of the leaders in the demonstration. When he resisted, a struggle developed between the police
and the Mau. The officers began to fire randomly into the crowd and a Lewis machine gun, mounted in preparation for
this demonstration, was used to disperse the demonstrators. Chief Tamasese was shot from behind and killed while
trying to bring calm and order to the Mau demonstrators, screaming "Peace, Samoa". Ten others died that day and approximately
50 were injured by gunshot wounds and police batons. That day would come to be known in Samoa as Black Saturday.
The Mau grew, remaining steadfastly non-violent, and expanded to include a highly influential women's branch. After
repeated efforts by the Samoan independence movement, the New Zealand Western Samoa Act 1961 of 24 November 1961
granted Samoa independence effective 1 January 1962, upon which the Trusteeship Agreement terminated. Samoa also
signed a friendship treaty with New Zealand. Samoa, the first small-island country in the Pacific to become independent,
joined the Commonwealth of Nations on 28 August 1970. While independence was achieved at the beginning of January,
Samoa annually celebrates 1 June as its independence day. Fiame Mata'afa Faumuina Mulinu’u II, one of the four highest-ranking
paramount chiefs in the country, became Samoa's first Prime Minister. Two other paramount chiefs at the time of independence
were appointed joint heads of state for life. Tupua Tamasese Mea'ole died in 1963, leaving Malietoa Tanumafili II
sole head of state until his death on 11 May 2007, upon which Samoa changed from a constitutional monarchy to a parliamentary
republic de facto. The next Head of State, Tuiatua Tupua Tamasese Efi, was elected by the legislature on 17 June
2007 for a fixed five-year term, and was re-elected unopposed in July 2012. The unicameral legislature (the Fono)
consists of 49 members serving 5-year terms. Forty-seven are matai title-holders elected from territorial districts
by Samoans; the other two are chosen by non-Samoans with no chiefly affiliation on separate electoral rolls. Universal
suffrage was adopted in 1990, but only chiefs (matai) may stand for election to the Samoan seats. There are more
than 25,000 matais in the country, about 5% of whom are women. The prime minister, chosen by a majority in the Fono,
is appointed by the head of state to form a government. The prime minister's choices for the 12 cabinet positions
are appointed by the head of state, subject to the continuing confidence of the Fono. The capital village of each
district administers and coordinates the affairs of the district and confers each district's paramount title, amongst
other responsibilities. For example, the District of A'ana has its capital at Leulumoega. The paramount title of
A'ana is the TuiA'ana. The orator group which confers this title – the Faleiva (House of Nine) – is based at Leulumoega.
The same applies in the other districts: in the district of Tuamasaga, the paramount title of the district – the Malietoa title – is conferred by the FaleTuamasaga based in Afega. The Samoan islands have been produced by
volcanism, the source of which is the Samoa hotspot, probably the result of a mantle plume. While all of the islands have volcanic origins, only Savai'i, the westernmost island in Samoa, is volcanically active, with the most recent eruptions at Mt Matavanu (1905–1911), Mata o le Afi (1902) and Mauga Afi (1725). The highest point in
Samoa is Mt Silisili, at 1858 m (6,096 ft). The Saleaula lava fields situated on the central north coast of Savai'i
are the result of the Mt Matavanu eruptions, which left 50 km² (20 sq mi) of solidified lava. The country's currency is the Samoan tālā, issued and regulated by the Central Bank of Samoa. The economy of Samoa has traditionally been
dependent on agriculture and fishing at the local level. In modern times, development aid, private family remittances
from overseas, and agricultural exports have become key factors in the nation's economy. Agriculture employs two-thirds
of the labour force, and furnishes 90% of exports, featuring coconut cream, coconut oil, noni (juice of the nonu
fruit, as it is known in Samoan), and copra. The Samoan government has called for deregulation of the financial sector,
encouragement of investment, and continued fiscal discipline.[citation needed] Observers point to the flexibility
of the labour market as a basic strength for future economic advances.[citation needed] The tourism sector has been helped enormously by major capital investment in hotel infrastructure, political instability in neighbouring Pacific countries, and the 2005 launch of Virgin Samoa, a joint venture between the government and Virgin Australia (then Virgin Blue).
In the period before German colonisation, Samoa produced mostly copra. German merchants and settlers were active
in introducing large scale plantation operations and developing new industries, notably cocoa bean and rubber, relying
on imported labourers from China and Melanesia. When the value of natural rubber fell drastically, about the end
of the Great War (World War I), the New Zealand government encouraged the production of bananas, for which there
is a large market in New Zealand.[citation needed] The staple products of Samoa are copra (dried coconut meat), cocoa
bean (for chocolate), and bananas. The annual production of both bananas and copra has been in the range of 13,000
to 15,000 metric tons (about 14,500 to 16,500 short tons). If the rhinoceros beetle in Samoa were eradicated, Samoa
could produce in excess of 40,000 metric tons (44,000 short tons) of copra. Samoan cocoa beans are of very high quality
and used in fine New Zealand chocolates. Most are Criollo-Forastero hybrids. Coffee grows well, but production has
been uneven. WSTEC is the biggest coffee producer. Rubber has been produced in Samoa for many years, but its export
value has little impact on the economy.[citation needed] Samoans' religious adherence includes the following: Christian
Congregational Church of Samoa 31.8%, Roman Catholic 19.4%, Methodist 15.2%, Assembly of God 13.7%, Mormon 7.6%,
Seventh-day Adventist 3.9%, Worship Centre 1.7%, other Christian 5.5%, other 0.7%, none 0.1%, unspecified 0.1% (2011
estimate). The Head of State until 2007, His Highness Malietoa Tanumafili II, was a Bahá'í convert. Samoa hosts one
of seven Bahá'í Houses of Worship in the world; completed in 1984 and dedicated by the Head of State, it is located
in Tiapapata, 8 km (5 mi) from Apia. Some Samoans are spiritual and religious, and have subtly adapted the dominant
religion of Christianity to 'fit in' with fa'a Samoa and vice versa. As such, ancient beliefs continue to co-exist
side-by-side with Christianity, particularly in regard to the traditional customs and rituals of fa'a Samoa. The
Samoan culture is centred around the principle of vāfealoa'i, the relationships between people. These relationships
are based on respect, or fa'aaloalo. When Christianity was introduced in Samoa, most Samoan people converted. Currently
98% of the population identify themselves as Christian. The Samoan word for dance is siva; it is characterised by unique, gentle movements of the body in time to music and tells a story, although Samoan male dances can be more physical and snappy.
The sasa is also a traditional dance where rows of dancers perform rapid synchronised movements in time to the rhythm
of wooden drums (pate) or rolled mats. Another dance performed by males is called the fa'ataupati or the slap dance,
creating rhythmic sounds by slapping different parts of the body. This is believed to have been derived from slapping
insects on the body. Albert Wendt is a significant Samoan writer whose novels and stories tell the Samoan experience.
In 1989, his novel Flying Fox in a Freedom Tree was made into a feature film in New Zealand, directed by Martyn Sanderson.
Another novel Sons for the Return Home had also been made into a feature film in 1979, directed by Paul Maunder.
The late John Kneubuhl, born in American Samoa, was an accomplished playwright and screenwriter. Sia Figiel
won the 1997 Commonwealth Writers' Prize for fiction in the south-east Asia/South Pacific region with her novel "Where
We Once Belonged". Momoe Von Reiche is an internationally recognised poet and artist. Tusiata Avia is a performance
poet. Her first book of poetry Wild Dogs Under My Skirt was published by Victoria University Press in 2004. Dan Taulapapa
McMullin is an artist and writer. Other Samoan poets and writers include Sapa'u Ruperake Petaia, Eti Sa'aga and Savea
Sano Malifa, the editor of the Samoa Observer. In music, popular local bands include The Five Stars, Penina o Tiafau
and Punialava'a. The Yandall Sisters' cover of the song Sweet Inspiration reached number one on the New Zealand charts
in 1974. King Kapisi was the first hip hop artist to receive the prestigious New Zealand APRA Silver Scroll Award
in 1999 for his song Reverse Resistance. The music video for Reverse Resistance was filmed in Savai'i at his villages.
Other successful Samoan hip hop artists include rapper Scribe, Dei Hamo, Savage and Tha Feelstyle whose music video
Suamalie was filmed in Samoa. Lemi Ponifasio is a director and choreographer who is prominent internationally with
his dance Company MAU. Neil Ieremia's company Black Grace has also received international acclaim with tours to Europe
and New York. Hip hop has had a significant impact on Samoan culture. According to Katerina Martina Teaiwa, PhD from
the University of Hawaii at Manoa, "Hip hop culture in particular is popular amongst Samoan youth." As in many other countries, hip hop music is popular in Samoa. In addition, the integration of hip hop elements into Samoan tradition
also "testifies to the transferability of the dance forms themselves," and to the "circuits through which people
and all their embodied knowledge travel." Dance both in its traditional form and its more modern forms has remained
a central cultural currency to Samoans, especially youths. Director Sima Urale is an award-winning filmmaker. Urale's
short film O Tamaiti won the prestigious Best Short Film at the Venice Film Festival in 1996. Her first feature film
Apron Strings opened the 2008 NZ International Film Festival. The feature film Sione's Wedding, co-written by Oscar
Kightley, was financially successful following premieres in Auckland and Apia. The 2011 film The Orator was the first
ever fully Samoan film, shot in Samoa in the Samoan language with a Samoan cast telling a uniquely Samoan story.
Written and directed by Tusi Tamasese, it received much critical acclaim and attention at film festivals throughout
the world. Rugby union is the national sport in Samoa and the national team, nicknamed the Manu Samoa, is consistently
competitive against teams from vastly more populous nations. Samoa has competed at every Rugby World Cup since 1991,
and reached the quarter-finals in 1991 and 1995 and the second round of the 1999 World Cup. At the 2003 World Cup, Manu Samoa came close to beating the eventual world champions, England. Samoa has also played in the Pacific Nations Cup and the Pacific Tri-Nations. The sport is governed by the Samoa Rugby Football Union, who are members of the Pacific Islands
Rugby Alliance, and thus, also contribute to the international Pacific Islanders rugby union team. Rugby league is
mostly played by Samoans living in New Zealand and Australia,[citation needed] with Samoa reaching the quarter-finals of the 2013 Rugby League World Cup with a squad made up of NRL, Super League and domestic players. Many Samoans and New Zealanders or Australians of Samoan descent play in the Super League and National Leagues in Britain, including Francis Meli, Ta'ane Lavulavu of Workington Town, Maurie Fa'asavalu of St Helens, David Fatialofa of Whitehaven and Setima Sa, who signed with London Irish rugby club. Other noteworthy players from NZ and Australia have represented the Samoan
National team. The 2011 domestic Samoan rugby league competition contained 10 teams with plans to expand to 12 in
2012.
Pope Paul VI (Latin: Paulus VI; Italian: Paolo VI), born Giovanni Battista Enrico Antonio Maria Montini (Italian pronunciation:
[dʒoˈvanni batˈtista enˈriːko anˈtɔːnjo maˈriːa monˈtiːni]; 26 September 1897 – 6 August 1978), reigned as Pope from
21 June 1963 to his death in 1978. Succeeding Pope John XXIII, he continued the Second Vatican Council which he closed
in 1965, implementing its numerous reforms, and fostered improved ecumenical relations with Eastern Orthodox and
Protestants, which resulted in many historic meetings and agreements. Montini served in the Vatican's Secretariat
of State from 1922 to 1954. While in the Secretariat of State, Montini and Domenico Tardini were considered as the
closest and most influential colleagues of Pope Pius XII, who in 1954 named him Archbishop of Milan, the largest
Italian diocese. Montini automatically became the Secretary of the Italian Bishops Conference. John XXIII elevated
him to the College of Cardinals in 1958, and after the death of John XXIII, Montini was considered one of his most
likely successors. Upon his election to the papacy, Montini took the pontifical name Paul VI (the first to take the
name "Paul" since 1605) to indicate a renewed worldwide mission to spread the message of Christ, following the example
of Apostle St. Paul.[citation needed] He re-convened the Second Vatican Council, which was automatically closed with
the death of John XXIII, and gave it priority and direction. After the council had concluded its work, Paul VI took
charge of the interpretation and implementation of its mandates, often walking a thin line between the conflicting
expectations of various groups within Catholicism. The magnitude and depth of the reforms affecting all fields of
Church life during his pontificate exceeded similar reform policies of his predecessors and successors. Paul VI was
a Marian devotee, speaking repeatedly to Marian congresses and mariological meetings, visiting Marian shrines and
issuing three Marian encyclicals. Following his famous predecessor Saint Ambrose of Milan, he named Mary as the Mother
of the Church during the Second Vatican Council. Paul VI sought dialogue with the world, with other Christians, other
religions, and atheists, excluding nobody. He saw himself as a humble servant for a suffering humanity and demanded
significant changes of the rich in North America and Europe in favour of the poor in the Third World. His positions
on birth control, promulgated most famously in the 1968 encyclical Humanae vitae, and other political issues, were
often controversial, especially in Western Europe and North America. Giovanni Battista Montini was born in the village
of Concesio, in the province of Brescia, Lombardy in 1897. His father Giorgio Montini was a lawyer, journalist, director
of the Catholic Action and member of the Italian Parliament. His mother was Giudetta Alghisi, from a family of rural
nobility. He had two brothers, Francesco Montini, who became a physician, and Lodovico Montini, who became a lawyer
and politician. On 30 September 1897, he was baptized with the name Giovanni Battista Enrico Antonio Maria Montini.
He attended Cesare Arici, a school run by the Jesuits, and in 1916, he received a diploma from Arnaldo da Brescia,
a public school in Brescia. His education was often interrupted by bouts of illness. In 1916, he entered the seminary
to become a Roman Catholic priest. He was ordained priest on 29 May 1920 in Brescia and celebrated his first Holy
Mass in Brescia in the Basilica of Santa Maria delle Grazie. Montini concluded his studies in Milan with a doctorate
in Canon Law in the same year. Afterwards he studied at the Gregorian University, the University of Rome La Sapienza
and, at the request of Giuseppe Pizzardo, at the Accademia dei Nobili Ecclesiastici. At the age of twenty-five, again
at the request of Giuseppe Pizzardo, Montini entered the Secretariat of State in 1922, where he worked under Pizzardo
together with Francesco Borgongini-Duca, Alfredo Ottaviani, Carlo Grano, Domenico Tardini and Francis Spellman. Consequently, he never spent a day as a parish priest. In 1925 he helped found the publishing house Morcelliana in Brescia, focused
on promoting a 'Christian inspired culture'. The only foreign diplomatic experience Montini underwent was his time
in the nunciature in Warsaw, Poland in 1923. Like Achille Ratti before him,[a] he felt confronted with the huge problem,
not limited to Poland, of excessive nationalism: "This form of nationalism treats foreigners as enemies, especially
foreigners with whom one has common frontiers. Then one seeks the expansion of one's own country at the expense of
the immediate neighbours. People grow up with a feeling of being hemmed in. Peace becomes a transient compromise
between wars." When he was recalled to Rome he was happy to go, because "this concludes this episode of my life,
which has provided useful, though not always joyful, experiences." His organisational skills led him to a career
in the Roman Curia, the papal civil service. In 1931, Pacelli appointed him to teach history at the Papal Academy
for Diplomats. In 1937, after his mentor Giuseppe Pizzardo was named a cardinal and was succeeded by Domenico Tardini,
Montini was named Substitute for Ordinary Affairs under Cardinal Pacelli, the Secretary of State under Pope Pius
XI. From Pius XI, whom he viewed with awe, he adopted the view that learning is a lifelong process and that history is magistra vitae, the teacher of life. His immediate supervisor in the Vatican was Domenico Tardini, with whom he
got along well. The election of Pacelli to the papacy in 1939, anticipated by everybody and openly promoted by Pope
Pius XI in his last years, was a good omen for Montini, who was confirmed in his position under the new Cardinal Secretary of State Luigi Maglione. He met the pope every morning until 1954 and thus developed a rather close relationship with him. As war broke out, Maglione, Tardini and Montini were the main figures in the Vatican's State
Department, as despatches originated from or were addressed to them during the war years.[page needed] Montini was in
charge of taking care of the "ordinary affairs" of the Secretariat of State, which took much of the mornings of every
working day. In the afternoon he moved to the third floor into the Office of the Private Secretary of the Pontiff.
Pius XII did not have a personal secretary. As did several popes before him, he delegated the secretarial functions
to the State Secretariat. During the war years, thousands of letters from all parts of the world arrived at the desk
of the pope, most of them asking for understanding, prayer and help. Montini was tasked to formulate the replies
in the name of Pius XII, expressing his empathy, and understanding and providing help, where possible. At the request
of the pope, he created an information office for prisoners of war and refugees, which in the years of its existence
from 1939 until 1947 received almost ten million (9,891,497) information requests and produced over eleven million (11,293,511) answers about missing persons. Montini was several times openly attacked by Benito Mussolini's government for acting as a politician and meddling in politics, but each time he was powerfully defended by the Vatican. In 1944, Luigi
Maglione died, and Pius XII appointed Tardini and Montini together as heads of the State Department. Montini's admiration for Pope Pius XII was almost filial. As Secretary of State, Montini coordinated the activities of assistance
to the persecuted hidden in convents, parishes, seminaries, and in ecclesiastical schools. At the request of the
pope, together with Pascalina Lehnert, Ferdinando Baldelli and Otto Faller, he created the Pontificia Commissione
di Assistenza, which aided a large number of Romans and refugees from everywhere with shelter, food and other material
assistance. In Rome alone this organization distributed almost two million portions of free food in the year 1944.
The Vatican and the Papal Residence Castel Gandolfo were opened to refugees. Some 15,000 persons lived in Castel
Gandolfo alone, supported by the Pontificia Commissione di Assistenza. At the request of Pius XII, Montini was also
involved in the re-establishment of Church Asylum, providing protection to hundreds of Allied soldiers, who had escaped
from Axis prison camps, Jews, anti-Fascists, Socialists, Communists, and after the liberation of Rome, German soldiers,
partisans and other displaced persons. After the war and later as pope, Montini turned the Pontificia Commissione
di Assistenza, into the major charity, Caritas Italiana.[b] Pius XII delivered an address about Montini's appointment
from his sick-bed over radio to those assembled in St. Peter's Basilica on 12 December 1954. Both Montini and the
pope had tears in their eyes when Montini departed for his diocese with 1,000 churches, 2,500 priests and 3,500,000
souls. On 5 January 1955, Montini formally took possession of his Cathedral of Milan. After a period of preparation, Montini came to like his new tasks as archbishop, connecting with all groups of the faithful in Milan. He enjoyed meetings
with intellectuals, artists and writers. Montini and Angelo Roncalli were considered to be friends, but when Roncalli,
as Pope John XXIII announced a new Ecumenical Council, Cardinal Montini reacted with disbelief and said to Giulio
Bevilacqua: "This old boy does not know what a hornet's nest he is stirring up." He was appointed to the Central Preparatory
Commission in 1961. During the Council, his friend Pope John XXIII asked him to live in the Vatican. He was a member
of the Commission for Extraordinary Affairs but did not engage himself much in the floor debates on various issues. His main advisor was Monsignore Giovanni Colombo, whom he later appointed to be his successor in Milan. The Commission was greatly overshadowed by the insistence of John XXIII that the Council complete all its work in one single session before Christmas 1962, to mark the 400th anniversary of the Council of Trent, an insistence which may have also
been influenced by the Pope's recent knowledge that he had cancer. During his period in Milan, Montini was known
as a progressive member of the Catholic hierarchy. Montini broke new ground in pastoral care, which he reformed. He
used his authority to ensure that the liturgical reforms of Pius XII were carried out at the local level and employed
innovative methods to reach the people of Milan: Huge posters announced that 1,000 voices would speak to them from
10 to 24 November 1957. More than 500 priests and many bishops, cardinals and lay persons delivered 7,000 sermons
in the period not only in churches but in factories, meeting halls, houses, courtyards, schools, offices, military
barracks, hospitals, hotels and other places where people gathered. His goal was the re-introduction of faith to a city
without much religion. "If only we can say Our Father and know what this means, then we would understand the Christian
faith." Pius XII called Archbishop Montini to Rome in October 1957, where he gave the main presentation to the Second
World Congress of Lay Apostolate. Previously as Pro-Secretary, he had worked hard to unify a worldwide organization
of lay people of 58 nations, representing 42 national organizations. He presented them to Pius XII in Rome in 1951.
The second meeting in 1957 gave Montini an opportunity to express the lay apostolate in modern terms: "Apostolate
means love. We will love all, but especially those who need help... We will love our time, our technology, our art,
our sports, our world." Although some cardinals seem to have viewed him as papabile, a likely candidate to become
pope, and he may have received some votes in the 1958 conclave, Montini was not yet a cardinal, which made him an unlikely
choice.[c] Angelo Roncalli was elected pope on 28 October 1958 and assumed the name John XXIII. On 17 November 1958,
L'Osservatore Romano announced a consistory for the creation of new cardinals. Montini's name led the list. When
the pope raised Montini to the cardinalate on 15 December 1958, he became Cardinal-Priest of Ss. Silvestro e Martino
ai Monti. The pope simultaneously appointed him to several Vatican congregations, which resulted in many visits by Montini
to Rome in the coming years. As a Cardinal, Montini journeyed to Africa (1962), where he visited Ghana, Sudan, Kenya,
Congo, Rhodesia, South Africa, and Nigeria. After his journey, John XXIII gave him a private audience, lasting several hours, to discuss the trip. On fifteen other trips he visited Brazil (1960) and the USA (1960), including New
York City, Washington, DC, Chicago, the University of Notre Dame in Indiana, Boston, Philadelphia, and Baltimore.
While a cardinal, he usually vacationed in Engelberg Abbey, a secluded Benedictine monastery in Switzerland. Unlike
the papabile cardinals Giacomo Lercaro of Bologna and Giuseppe Siri of Genoa, he was not identified with either the
left or right, nor was he seen as a radical reformer. He was viewed as most likely to continue the Second Vatican
Council, which already, without any tangible results, had lasted longer than anticipated by John XXIII, who had a
vision but "did not have a clear agenda. His rhetoric seems to have had a note of over-optimism, a confidence in
progress, which was characteristic of the 1960s." When John XXIII died of stomach cancer on 3 June 1963, it triggered
a conclave to elect a new pope. Paul VI did away with much of the regal splendor of the papacy. He was the last pope
to date to be crowned; his successor Pope John Paul I replaced the Papal Coronation (which Paul had already substantially
modified, but which he left mandatory in his 1975 apostolic constitution Romano Pontifici Eligendo) with a Papal
Inauguration. Paul VI donated his own Papal Tiara, a gift from his former Archdiocese of Milan, to the Basilica of
the National Shrine of the Immaculate Conception in Washington, DC (where it is on permanent display in the Crypt)
as a gift to American Catholics. During Vatican II, the Council Fathers avoided statements which might anger Christians
of other faiths.[page needed] Cardinal Augustin Bea, the President of the Christian Unity Secretariat, always had
the full support of Paul VI in his attempts to ensure that the Council language was friendly and open to the sensitivities
of Protestant and Orthodox Churches, whom he had invited to all sessions at the request of Pope John XXIII. Bea also
was strongly involved in the passage of Nostra aetate, which regulates the Church's relations with the Jewish faith
and members of other religions.[d] After his election as Bishop of Rome, Paul VI first met with the priests in his
new diocese. He told them that in Milan he had started a dialogue with the modern world and asked them to seek contact
with all people from all walks of life. Six days after his election he announced that he would continue Vatican II
and set the opening for 29 September 1963. In a radio address to the world, Paul VI recalled the
uniqueness of his predecessors, the strength of Pius XI, the wisdom and intelligence of Pius XII and the love of
John XXIII. As "his pontifical goals" he mentioned the continuation and completion of Vatican II, the reform of the
Canon Law and improved social peace and justice in the world. The Unity of Christianity would be central to his activities.
He reminded the council fathers that only a few years earlier Pope Pius XII had issued the encyclical Mystici corporis
about the mystical body of Christ. He asked them not to repeat or create new dogmatic definitions but to explain
in simple words how the Church sees itself. He thanked the representatives of other Christian communities for their
attendance and asked for their forgiveness if the Catholic Church was guilty of the separation. He also reminded
the Council Fathers that many bishops from the east could not attend because the governments in the East did not
permit their journeys. Paul VI opened the third period on 14 September 1964, telling the Council Fathers that he
viewed the text about the Church as the most important document to come out from the Council. As the Council discussed
the role of bishops in the papacy, Paul VI issued an explanatory note confirming the primacy of the papacy, a step
which was viewed by some as meddling in the affairs of the Council. American bishops pushed for a speedy resolution on religious freedom, but Paul VI insisted that it be approved together with related texts such as ecumenism. The
Pope concluded the session on 21 November 1964, with the formal pronouncement of Mary as Mother of the Church. Between
the third and fourth sessions the pope announced reforms in the areas of the Roman Curia, revision of Canon Law, regulations
for mixed marriages involving several faiths, and birth control issues. He opened the final session of the council,
concelebrating with bishops from countries where the Church was persecuted. Several texts proposed for his approval
had to be changed. But all texts were finally agreed upon. The Council was concluded on 8 December 1965, the Feast
of the Immaculate Conception. Pope Paul VI knew the Roman Curia well, having worked there for a generation from 1922
to 1954. He implemented his reforms in stages rather than in one fell swoop. On 1 March 1968, he issued a regulation, continuing a process that had been initiated by Pius XII and carried on by John XXIII. On 28 March, with Pontificalis Domus,
and in several additional Apostolic Constitutions in the following years, he revamped the entire Curia, which included
reduction of bureaucracy, streamlining of existing congregations and a broader representation of non-Italians in
the curial positions. Paul VI revolutionized papal elections by ordering that only cardinals below the age of eighty
might participate in future conclaves. In Ecclesiae Sanctae, his motu proprio of 6 August 1966, he further invited
all bishops to offer their retirement to the pontiff no later than the completion of their 75th year of age. This
requirement was extended to all Cardinals of the Catholic Church on 21 November 1970. With these two stipulations,
the Pope filled several positions with younger bishops and cardinals, and further internationalized the Roman Curia
in light of several resignations due to age. Reform of the liturgy had been a part of the liturgical movements in
the 20th century, mainly in France and Germany, which were officially recognized by Pius XII in his encyclical Mediator
Dei. During the pontificate of Pius XII, the Vatican eased regulations on the use of Latin in Roman Catholic liturgies,
permitting some use of vernacular languages during baptisms, funerals and other events. In 1951 and 1955, the Easter
liturgies underwent revision, most notably including the reintroduction of the Easter Triduum. The Second Vatican
Council made no changes to the Roman Missal, but in the document Sacrosanctum Concilium mandated that a general revision
of it take place. After the Vatican Council, in April 1969, Paul VI approved the "new Order of Mass" promulgated
in 1970, as stated in the Acta Apostolicae Sedis, to "end experimentation" with the Mass; it included the introduction of three new Eucharistic Prayers alongside what had until then been the single Roman Canon. The Mass of Paul VI was also in Latin
but approval was given for the use of vernacular languages. There had been other instructions issued by the Pope
in 1964, 1967, 1968, 1969 and 1970 which centered on the reform of all liturgies of the Roman Church. These major
reforms were not welcomed by all and in all countries. The sudden apparent "outlawing" of the 400-year-old Mass,
the last typical edition of which had been promulgated only a few years earlier, in 1962, by Paul's predecessor, Pope
John XXIII, was not always explained well. Further experimentation with the new Mass by liturgists, such as the usage
of pop/folk music (as opposed to the Gregorian Chant advocated by Pope Pius X), along with concurrent changes in
the order of sanctuaries, was viewed by some as vandalism. In 2007, Pope Benedict XVI clarified that the 1962 Mass
of John XXIII and the 1970 Mass of Paul VI are two forms of the same Roman Rite, the first, which had never been
"juridically abrogated", now being an "extraordinary form of the Roman Rite", while the other "obviously is and continues
to be the normal Form – the Forma ordinaria – of the Eucharistic Liturgy". In 1964, Paul VI created a Secretariat
for non-Christians, later renamed the Pontifical Council for Interreligious Dialogue and a year later a new Secretariat
(later Pontifical Council) for Dialogue with Non-Believers. This latter was in 1993 incorporated by Pope John Paul
II in the Pontifical Council for Culture, which he had established in 1982. In 1971, Paul VI created a papal office
for economic development and catastrophic assistance. To foster common bonds with all persons of good will, he decreed
an annual peace day to be celebrated on January first of every year. Trying to improve the condition of Christians
behind the Iron Curtain, Paul VI engaged in dialogue with Communist authorities at several levels, receiving Foreign
Minister Andrei Gromyko and Chairman of the Presidium of the Supreme Soviet Nikolai Podgorny in 1966 and 1967 in
the Vatican. The situation of the Church in Hungary, Poland and Romania improved during his pontificate. In 1976
Montini became the first pontiff in modern history to publicly deny an accusation of homosexuality. Published by his order
in January 1976 was Persona Humana: Declaration on Certain Questions concerning Sexual Ethics, which outlawed pre- and extra-marital sex, condemned homosexuality, and forbade masturbation. It provoked French author and former
diplomat Roger Peyrefitte, in an interview published by the magazine Tempo, to accuse Montini of hypocrisy, and of
having a longtime lover who was a movie actor. According to rumors prevalent both inside the Curia and in Italian
society, this was Paolo Carlini, who had a bit part as a hairdresser in the Audrey Hepburn film Roman Holiday. Peyrefitte
had previously published the accusation in two books, but the interview (previously published in a French gay magazine)
brought the rumors to a wider public and caused an uproar. In a brief address to a crowd of approximately 20,000
in St. Peter's Square on April 18, Montini called the charges "horrible and slanderous insinuations" and appealed
for prayers on his behalf. Special prayers for Montini were said in all Italian Roman Catholic churches in "a day
of consolation". In 1984 a New York Times correspondent repeated the allegations. Pope Paul VI became the first pope
to visit six continents, and was the most travelled pope in history to that time, earning the nickname "the Pilgrim
Pope". With his travels he opened new avenues for the papacy, which were continued by his successors John Paul II
and Benedict XVI. He travelled to the Holy Land in 1964, to the Eucharistic Congresses in Bombay, India and Bogotá,
Colombia. In 1966 he was twice denied permission to visit Poland for the 1,000th anniversary of the baptism of Poland. In 1967, fifty years after the first apparition, he visited Fátima in Portugal. He undertook
a pastoral visit to Africa in 1969. On 27 November 1970 he was the target of an assassination attempt at Manila International
Airport in the Philippines. He was only lightly stabbed by the would-be assassin Benjamín Mendoza y Amor Flores,
who was subdued by the pope's personal bodyguard and trip organizer, Msgr. Paul Marcinkus. Pope Paul VI became the
first reigning pontiff ever to visit the Americas when he flew to New York in October 1965 to address the United
Nations. As a gesture of goodwill, the pope gave to the UN two pieces of papal jewelry, a diamond cross and ring,
with the hopes that the proceeds from their sale at auction would contribute to the UN's efforts to end human suffering.
During the pope's visit, as the U.S. involvement in the Vietnam War escalated under President Johnson, Paul VI pleaded
for peace before the UN. Like his predecessor Pius XII, Paul VI put much emphasis on dialogue with all nations
of the world through establishing diplomatic relations. The number of foreign embassies accredited to the Vatican
doubled during his pontificate. This was a reflection of a new understanding between Church and State, which had
been formulated first by Pius XI and Pius XII but decreed by Vatican II. The pastoral constitution Gaudium et spes
stated that the Catholic Church is not bound to any form of government and is willing to cooperate with all forms. The
Church maintained its right to select bishops on its own without any interference by the State. Ecclesiam suam was
given at St. Peter's, Rome, on the Feast of the Transfiguration, 6 August 1964, the second year of his Pontificate.
It is considered an important document, identifying the Catholic Church with the Body of Christ. A later Council
document Lumen Gentium stated that the Church subsists in the Body of Christ, raising questions as to the difference
between "is" and "subsists in". Paul VI appealed to "all people of good will" and discussed necessary dialogues within
the Church and between the Churches and with atheism. Sacerdotalis caelibatus (Latin for "Of the celibate priesthood"),
promulgated on 24 June 1967, defends the Catholic Church's tradition of priestly celibacy in the West. This encyclical
was written in the wake of Vatican II, when the Catholic Church was questioning and revising many long-held practices.
Priestly celibacy is considered a discipline rather than dogma, and some had expected that it might be relaxed. In
response to these questions, the Pope reaffirms the discipline as a long-held practice with special importance in
the Catholic Church. The encyclical confirms the traditional Church teaching that celibacy is an ideal state and continues to be mandatory for Roman Catholic priests. Celibacy symbolizes the reality of the kingdom of God amid modern society, and is closely linked to the sacramental priesthood.
However, during his pontificate Paul VI was considered generous in permitting bishops to grant laicization of priests
who wanted to leave the sacerdotal state, a position which was drastically reversed by John Paul II in 1980 and cemented
in the 1983 Canon Law that only the pope can in exceptional circumstances grant laicization. Of his eight encyclicals,
Pope Paul VI is best known for his encyclical Humanae vitae (Of Human Life, subtitled On the Regulation of Birth),
published on 25 July 1968. In this encyclical he reaffirmed the Catholic Church's traditional view of marriage and
marital relations and a continued condemnation of artificial birth control. There were two Papal committees and numerous
independent experts looking into the latest advancement of science and medicine on the question of artificial birth
control, whose findings were noted by the Pope in his encyclical. The expressed views of Paul VI reflected the teachings of his predecessors, especially Pius XI, Pius XII and John XXIII, and never changed, as he repeatedly stated them in the first few years of his pontificate. To the pope, as to all his predecessors, marital relations are much more than
a union of two people. They constitute a union of the loving couple with a loving God, in which the two persons create
a new person materially, while God completes the creation by adding the soul. For this reason, Paul VI teaches in
the first sentence of Humanae vitae that the transmission of human life is a most serious role in which married people
collaborate freely and responsibly with God the Creator. This divine partnership, according to Paul VI, does not
allow for arbitrary human decisions, which may limit divine providence. The Pope does not paint an overly romantic
picture of marriage: marital relations are a source of great joy, but also of difficulties and hardships. The question
of human procreation exceeds in the view of Paul VI specific disciplines such as biology, psychology, demography
or sociology. The reason for this, according to Paul VI, is that married love takes its origin from God, who "is
love". From this basic dignity, he defines his position: The reaction to the encyclical's continued prohibitions
of artificial birth control was very mixed. In Italy, Spain, Portugal and Poland, the encyclical was welcomed. In
Latin America, much support developed for the Pope and his encyclical. When World Bank President Robert McNamara declared at the 1968 Annual Meeting of the International Monetary Fund and the World Bank Group that countries permitting birth control practices would get preferential access to resources, doctors in La Paz, Bolivia, called it insulting that money should be exchanged for the conscience of a Catholic nation. In Colombia, Cardinal Archbishop Aníbal Muñoz Duque declared that if American conditionality undermined papal teachings, "we prefer not to receive one cent". The Senate
of Bolivia passed a resolution stating that Humanae vitae could be discussed in its implications for individual consciences,
but was of greatest significance because the papal document defended the rights of developing nations to determine
their own population policies. The Jesuit Journal Sic dedicated one edition to the encyclical with supportive contributions.
Paul VI was concerned but not surprised by the negative reaction in Western Europe and the United States. He fully
anticipated this reaction to be a temporary one: "Don't be afraid", he reportedly told Edouard Gagnon on the eve
of the encyclical, "in twenty years time they'll call me a prophet." His biography on the Vatican's website notes
of his reaffirmations of priestly celibacy and the traditional teaching on contraception that "[t]he controversies
over these two pronouncements tended to overshadow the last years of his pontificate". Pope John Paul II later reaffirmed
and expanded upon Humanae vitae with the encyclical Evangelium vitae. After the Council, Paul VI contributed in two
ways to the continued growth of ecumenical dialogue. The separated brothers and sisters, as he called them, were
not able to contribute to the Council as invited observers. After the Council, many of them took initiative to seek
out their Catholic counterparts and the Pope in Rome, who welcomed such visits. But the Catholic Church itself recognized
from the many previous ecumenical encounters, that much needed to be done within, to be an open partner for ecumenism.
Believing the Church to be entrusted with the highest and deepest truth, Paul VI held that it therefore had the most difficult part to communicate. Ecumenical dialogue, in the view of Paul VI, requires from a Catholic the whole person: one's entire reason, will, and heart. Paul VI, like Pius XII before him, was reluctant to give in on the lowest possible point. And yet Paul felt compelled to admit his ardent, Gospel-based desire to be everything to everybody and to help all people. As the successor of Peter, he felt the words of Christ, "Do you love me more", like a sharp knife
penetrating to the marrow of his soul. These words meant to Paul VI love without limits, and they underscore the
Church's fundamental approach to ecumenism. Paul VI's 1964 meeting in Jerusalem with Ecumenical Patriarch Athenagoras was a significant step towards restoring communion between Rome and Constantinople. It produced the Catholic–Orthodox Joint Declaration of 1965, which was read out on 7 December
1965, simultaneously at a public meeting of the Second Vatican Council in Rome and at a special ceremony in Istanbul.
The declaration did not end the schism, but showed a desire for greater reconciliation between the two churches.
In May 1973, the Coptic Patriarch Shenouda III of Alexandria visited the Vatican, where he met three times with Pope
Paul VI. A common declaration and a joint Creed issued after the visit demonstrated that there are virtually no more theological discrepancies between the Coptic and Roman Catholic Churches. Paul VI was the first
pope to receive an Anglican Archbishop of Canterbury, Michael Ramsey, in official audience as Head of Church, after
the private audience visit of Archbishop Geoffrey Fisher to Pope John XXIII on 2 December 1960. Ramsey met Paul three
times during his visit and opened the Anglican Centre in Rome to increase their mutual knowledge. He praised Paul VI and his contributions in the service of unity. Paul replied that "by entering into our house, you are entering
your own house, we are happy to open our door and heart to you." The two Church leaders signed a common declaration,
which put an end to the disputes of the past and outlined a common agenda for the future. Cardinal Augustin Bea,
the head of the Secretariat for Promoting Christian Unity, added at the end of the visit, "Let us move forward in
Christ. God wants it. Humanity is waiting for it." Undeterred by a harsh condemnation of mixed marriages issued by the Congregation for the Doctrine of the Faith at the very time of the visit, Paul VI and Ramsey appointed a preparatory commission which was to put the common agenda into practice on such issues as mixed marriages. This resulted in a joint Malta declaration,
the first joint agreement on the Creed since the Reformation. Paul VI was a good friend of the Anglican Church, which
he described as "our beloved sister Church". This description was unique to Paul and not used by later popes. In
1965, Paul VI decided on the creation of a joint working group with the World Council of Churches to map all possible
avenues of dialogue and cooperation. In the following three years, eight sessions were held which resulted in many
joint proposals. It was proposed to work closely together in areas of social justice and development and Third World
Issues such as hunger and poverty. On the religious side, it was agreed to share together in the Week of Prayer for
Christian Unity, to be held every year. The joint working group was to prepare texts which were to be used by all
Christians. On 19 July 1968, the meeting of the World Council of Churches took place in Uppsala, Sweden, which Pope
Paul called a sign of the times. He sent his blessing in an ecumenical manner: "May the Lord bless everything you
do for the case of Christian Unity." The World Council of Churches decided on including Catholic theologians in its
committees, provided they have the backing of the Vatican. The Lutherans were the first Protestant church to offer a dialogue to the Catholic Church, in September 1964 in Reykjavík, Iceland. It resulted in joint study groups on several issues. The dialogue with the Methodist Church began in October 1965, after its representatives officially applauded the remarkable changes, friendship and cooperation of the past five years. The Reformed Churches entered four years later
into a dialogue with the Catholic Church. The President of the Lutheran World Federation and member of the central
committee of the World Council of Churches, Fredrik A. Schiotz, stated during the 450th anniversary of the Reformation that earlier commemorations had been viewed almost as a triumph, and that the Reformation should instead be celebrated as a thanksgiving to God, his truth and his renewed life. He welcomed the announcement of Pope Paul VI to celebrate the 1900th anniversary
of the death of the Apostle Peter and Apostle Paul, and promised the participation and cooperation in the festivities.
Paul VI supported the new-found harmony and cooperation with Protestants on so many levels. When Cardinal Augustin
Bea went to see him for permission for a joint Catholic-Protestant translation of the Bible with Protestant Bible
societies, the pope walked towards him and exclaimed, "as far as the cooperation with Bible societies is concerned,
I am totally in favour." He issued a formal approval on Pentecost 1967, the feast on which the Holy Spirit descended
on the Christians, overcoming all linguistic difficulties, according to Christian tradition. The next three popes,
including Pope Emeritus Benedict XVI, were created cardinals by him. His immediate successor, Albino Luciani, who
took the name John Paul I, was created a cardinal in the consistory of 5 March 1973. Karol Wojtyła was created a
cardinal in the consistory of 26 June 1967. Joseph Ratzinger was created a cardinal in the small four-appointment
consistory of 27 June 1977, which also included Bernardin Gantin from Benin, Africa. This became the last of Paul
VI's consistories before his death in August 1978. Asked towards the end of his papacy whether he would retire at age 80, Pope Paul replied, "Kings can abdicate, Popes cannot." Pope Paul VI left the Vatican to
go to the papal summer residence, Castel Gandolfo, on 14 July 1978, visiting on the way the tomb of Cardinal Giuseppe
Pizzardo, who had introduced him to the Vatican half a century earlier. Although he was sick, he agreed to see the
new Italian President Sandro Pertini for over two hours. In the evening he watched a Western on TV, happy only when
he saw "horses, the most beautiful animals that God had created." He had breathing problems and needed oxygen. On
Sunday, at the Feast of the Transfiguration, he was tired, but wanted to say the Angelus. He was neither able nor
permitted to do so and instead stayed in bed, his temperature rising. On 20 December 2012, Pope Benedict XVI, in
an audience with the Cardinal Prefect of the Congregation for the Causes of Saints, declared that the late pontiff
had lived a life of heroic virtue, which means that he could be called "Venerable". A miracle attributed to the intercession
of Paul VI was approved on 9 May 2014 by Pope Francis. The beatification ceremony for Paul VI was held on 19 October
2014, which means that he can now be called "Blessed". His liturgical feast day is celebrated on the date of his
birth, 26 September, rather than the day of his death as is usual. In December 2013, Vatican officials approved a
supposed miracle attributed to the intercession of the late pontiff: the curing of an unborn child in California, U.S., in the 1990s. It was expected that Pope Francis would approve the miracle in the near future,
thus, warranting the beatification of the late pontiff. In February 2014, it was reported that the consulting Vatican
theologians to the Congregation for the Causes of Saints recognized the miracle attributed to the late pontiff. On
24 April 2014, it was reported in the Italian magazine Credere that the late pope could possibly be beatified on
19 October 2014. This report from the magazine further stated that several cardinals and bishops would meet on 5
May to confirm the miracle that had previously been approved, and then present it to Pope Francis who may sign the
decree for beatification shortly after that. The Congregation for the Causes of Saints held that meeting and positively
concluded that the healing was indeed a miracle that could be attributed to the late pope. The matter was then presented to Pope Francis for approval. On basic Church teachings, the pope was unwavering. On the tenth anniversary
of Humanae vitae, he reconfirmed this teaching. In his style and methodology, he was a disciple of Pius XII, whom
he deeply revered. He suffered for the attacks on Pius XII for his alleged silences during the Holocaust. Pope Paul
VI was less outstanding than his predecessors: he was not credited with an encyclopedic memory, nor a gift for languages,
nor the brilliant writing style of Pius XII, nor did he have the charisma and outpouring love, sense of humor and
human warmth of John XXIII. He took on himself the unfinished reform work of these two popes, bringing it to conclusion diligently, with great humility and common sense, and without much fanfare. In doing so, Paul VI saw himself following in the footsteps of the Apostle Paul, torn in several directions like Saint Paul, who said, "I am attracted to two sides at once, because the Cross always divides." Unlike his predecessors and successors, Paul VI refused to excommunicate
his opponents. He admonished but did not punish those with other views. The new theological freedoms which he fostered
resulted in a pluralism of opinions and uncertainties among the faithful. New demands were voiced which had been taboo at the Council: the reintegration of divorced Catholics, the sacramental character of confession, and the role of women in the Church and its ministries. Conservatives complained that "women wanted to be priests, priests wanted
to get married, bishops became regional popes and theologians claimed absolute teaching authority. Protestants claimed
equality, homosexuals and divorced called for full acceptance." Changes such as the reorientation of the liturgy,
alterations to the ordinary of the Mass, alterations to the liturgical calendar in the motu proprio Mysterii Paschalis,
and the relocation of the tabernacle were controversial among some Catholics. Some criticized Paul VI's decision that the newly created Synod of Bishops would have an advisory role only and could not make decisions on its own, although the Council had decided exactly that. During the pontificate of Paul VI, five such synods took place, and he is on record as implementing all their decisions. Related questions were raised about the new national bishops' conferences, which
became mandatory after Vatican II. Others questioned his Ostpolitik and contacts with Communism and the deals he
engaged in for the faithful. From his bed he participated in Sunday Mass at 18:00. After communion, the pope suffered
a massive heart attack, after which he continued to live for three hours. On 6 August 1978 at 21:41 Paul VI died
in Castel Gandolfo. According to his will, he was buried in the grottos of the Vatican, beneath the floor of Saint Peter's Basilica with other popes; having requested burial in the "true earth", he has not an ornate sarcophagus but a simple in-ground grave. With the six consistories, Paul VI continued the internationalization policies started by Pius XII in 1946
and continued by John XXIII. In his 1976 consistory, five of twenty cardinals originated from Africa, one of them
a son of a tribal chief with fifty wives. Several prominent Latin Americans like Eduardo Francisco Pironio of Argentina;
Luis Aponte Martinez of Puerto Rico and Eugênio de Araújo Sales and Aloisio Lorscheider from Brazil were also elevated
by him. There were voices within the Church at the time saying that the European period of the Church was coming
to a close, a view shared by Britain's Cardinal Basil Hume. At the same time, the members of the College of Cardinals lost some of their previous influence after Paul VI decreed that not only cardinals but also bishops may participate in committees of the Roman Curia. The age limit of eighty years imposed by the Pope, a numerical increase of Cardinals
by almost 100%, and a reform of the regal vestments of the "Princes of the Church" further contributed to a service-oriented
perception of Cardinals under his pontificate. The increased number of Cardinals from the Third World and the papal
emphasis on related issues was nevertheless welcomed by many in Western Europe. Paul VI did renounce many traditional
symbols of the papacy and the Catholic Church; some of his changes to the papal dress were reversed by Pope Benedict
XVI in the early 21st century. Refusing a Vatican army in the colourful military uniforms of past centuries, he largely disbanded it. He became the first pope to visit six continents. Paul VI systematically continued and completed the efforts
of his predecessors, to turn the Euro-centric Church into a Church of the world, by integrating the bishops from
all continents in its government and in the Synods which he convened. His 6 August 1967 motu proprio Pro Comperto
Sane opened the Roman Curia to the bishops of the world. Until then, only Cardinals could be leading members of the
Curia.
Found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools,
and disk drives, electric motors can be powered by direct current (DC) sources, such as from batteries, motor vehicles
or rectifiers, or by alternating current (AC) sources, such as from the power grid, inverters or generators. Small
motors may be found in electric watches. General-purpose motors with highly standardized dimensions and characteristics
provide convenient mechanical power for industrial use. The largest of electric motors are used for ship propulsion,
pipeline compression and pumped-storage applications with ratings reaching 100 megawatts. Electric motors may be
classified by electric power source type, internal construction, application, type of motion output, and so on. Perhaps
the first electric motors were simple electrostatic devices created by the Scottish monk Andrew Gordon in the 1740s.
The theoretical principle behind the production of mechanical force by the interaction of an electric current and a magnetic field, Ampère's force law, was discovered by André-Marie Ampère in 1820. The conversion of electrical
energy into mechanical energy by electromagnetic means was demonstrated by the British scientist Michael Faraday
in 1821. A free-hanging wire was dipped into a pool of mercury, on which a permanent magnet (PM) was placed. When
a current was passed through the wire, the wire rotated around the magnet, showing that the current gave rise to a closed circular magnetic field around the wire. This motor is often demonstrated in physics experiments, with brine substituting
for toxic mercury. Though Barlow's wheel was an early refinement to this Faraday demonstration, these and similar
homopolar motors were to remain unsuited to practical application until late in the century. In 1827, Hungarian physicist
Ányos Jedlik started experimenting with electromagnetic coils. After Jedlik solved the technical problems of the
continuous rotation with the invention of the commutator, he called his early devices "electromagnetic self-rotors".
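The rotation in these early machines follows from Ampère's force law mentioned above: a current-carrying conductor in a magnetic field experiences a force F = I (L × B). A minimal numerical sketch of this relationship (the current, wire length, and field strength below are hypothetical values chosen only for illustration):

```python
# Illustrative sketch of Ampere's force law, F = I (L x B), for a straight
# wire in a uniform magnetic field. All numeric values are hypothetical.

def cross(a, b):
    """3-D vector cross product."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def wire_force(current, length_vec, b_field):
    """Force in newtons on a straight current-carrying wire: F = I (L x B)."""
    return tuple(current * c for c in cross(length_vec, b_field))

# 1 A through a 0.1 m wire along x, in a 0.5 T field along z:
force = wire_force(1.0, (0.1, 0.0, 0.0), (0.0, 0.0, 0.5))
print(force)  # (0.0, -0.05, 0.0): the wire is pushed along -y
```

Reversing either the current or the field flips the sign of the force, which is exactly the effect a commutator exploits to keep a rotor turning in one direction.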
Although they were used only for instructional purposes, in 1828 Jedlik demonstrated the first device to contain
the three main components of practical DC motors: the stator, rotor and commutator. The device employed no permanent
magnets, as the magnetic fields of both the stationary and revolving components were produced solely by the currents
flowing through their windings. After many other more or less successful attempts with relatively weak rotating and reciprocating apparatus, the Prussian Moritz von Jacobi created in May 1834 the first real rotating electric motor to develop remarkable mechanical output power. His motor set a world record, which was improved only
four years later in September 1838 by Jacobi himself. His second motor was powerful enough to drive a boat with 14
people across a wide river. It was not until 1839/40 that other developers worldwide managed to build motors of similar
and later also of higher performance. The first commutator DC electric motor capable of turning machinery was invented
by the British scientist William Sturgeon in 1832. Following Sturgeon's work, a commutator-type direct-current electric
motor made with the intention of commercial use was built by the American inventor Thomas Davenport, which he patented
in 1837. The motors ran at up to 600 revolutions per minute, and powered machine tools and a printing press. Due
to the high cost of primary battery power, the motors were commercially unsuccessful and Davenport went bankrupt.
Several inventors followed Sturgeon in the development of DC motors but all encountered the same battery power cost
issues. No electricity distribution had been developed at the time. Like Sturgeon's motor, there was no practical
commercial market for these motors. A major turning point in the development of DC machines took place in 1864, when
Antonio Pacinotti described for the first time the ring armature with its symmetrically grouped coils closed upon
themselves and connected to the bars of a commutator, the brushes of which delivered practically non-fluctuating
current. The first commercially successful DC motors followed the invention by Zénobe Gramme who, in 1871, reinvented
Pacinotti's design. In 1873, Gramme showed that his dynamo could be used as a motor, which he demonstrated to great
effect at exhibitions in Vienna and Philadelphia by connecting two such DC motors at a distance of up to 2 km away
from each other, one operating as a generator. (See also 1873: l'expérience décisive [The Decisive Experiment].) In 1886, Frank
Julian Sprague invented the first practical DC motor, a non-sparking motor that maintained relatively constant speed
under variable loads. Other Sprague electric inventions about this time greatly improved grid electric distribution
(prior work done while employed by Thomas Edison), allowed power from electric motors to be returned to the electric
grid, provided for electric distribution to trolleys via overhead wires and the trolley pole, and provided controls
systems for electric operations. This allowed Sprague to use electric motors to invent the first electric trolley
system in 1887–88 in Richmond VA, the electric elevator and control system in 1892, and the electric subway with
independently powered centrally controlled cars, which were first installed in 1892 in Chicago by the South Side
Elevated Railway, where it became popularly known as the "L". Sprague's motor and related inventions led to an explosion of interest in and use of electric motors for industry, while almost simultaneously another great inventor was developing its primary competitor, which would become much more widespread. The development of electric motors of acceptable
efficiency was delayed for several decades by failure to recognize the extreme importance of a relatively small air
gap between rotor and stator. Efficient designs have a comparatively small air gap. The St. Louis motor, long
used in classrooms to illustrate motor principles, is extremely inefficient for the same reason, as well as appearing
nothing like a modern motor. Application of electric motors revolutionized industry. Industrial processes were no
longer limited by power transmission using line shafts, belts, compressed air or hydraulic pressure. Instead every
machine could be equipped with its own electric motor, providing easy control at the point of use, and improving
power transmission efficiency. Electric motors applied in agriculture eliminated human and animal muscle power from
such tasks as handling grain or pumping water. Household uses of electric motors reduced heavy labor in the home
and made higher standards of convenience, comfort and safety possible. Today, electric motors account for more than half of the electric energy consumption in the US. In 1824, the French physicist François Arago formulated the existence of rotating magnetic fields, termed Arago's rotations, which Walter Baily demonstrated in 1879 by manually turning switches on and off, creating in effect the first primitive induction motor. In the 1880s, many inventors were trying to
develop workable AC motors because AC's advantages in long-distance high-voltage transmission were counterbalanced
by the inability to operate motors on AC. The first alternating-current commutatorless induction motors were independently
invented by Galileo Ferraris and Nikola Tesla, a working motor model having been demonstrated by the former in 1885
and by the latter in 1887. In 1888, the Royal Academy of Science of Turin published Ferraris' research detailing
the foundations of motor operation while however concluding that "the apparatus based on that principle could not
be of any commercial importance as motor." In 1888, Tesla presented his paper A New System for Alternating Current
Motors and Transformers to the AIEE that described three patented two-phase four-stator-pole motor types: one with
a four-pole rotor forming a non-self-starting reluctance motor, another with a wound rotor forming a self-starting
induction motor, and the third a true synchronous motor with separately excited DC supply to rotor winding. One of
the patents Tesla filed in 1887, however, also described a shorted-winding-rotor induction motor. George Westinghouse
promptly bought Tesla's patents, employed Tesla to develop them, and assigned C. F. Scott to help Tesla; Tesla left
for other pursuits in 1889. The constant-speed AC induction motor was found not to be suitable for street cars, but
Westinghouse engineers successfully adapted it to power a mining operation in Telluride, Colorado in 1891. Steadfast
in his promotion of three-phase development, Mikhail Dolivo-Dobrovolsky invented the three-phase cage-rotor induction
motor in 1889 and the three-limb transformer in 1890. This type of motor is now used for the vast majority of commercial
applications. However, Dolivo-Dobrovolsky claimed that Tesla's motor was not practical because of two-phase pulsations, which prompted
him to persist in his three-phase work. Although Westinghouse achieved its first practical induction motor in 1892
and developed a line of polyphase 60 hertz induction motors in 1893, these early Westinghouse motors were two-phase
motors with wound rotors until B. G. Lamme developed a rotating bar winding rotor. The General Electric Company began
developing three-phase induction motors in 1891. By 1896, General Electric and Westinghouse signed a cross-licensing
agreement for the bar-winding-rotor design, later called the squirrel-cage rotor. Induction motor improvements flowing
from these inventions and innovations were such that a 100 horsepower (HP) induction motor currently has the same
mounting dimensions as a 7.5 HP motor in 1897. A commutator is a mechanism used to switch the input of most DC machines
and certain AC machines consisting of slip ring segments insulated from each other and from the electric motor's
shaft. The motor's armature current is supplied through the stationary brushes in contact with the revolving commutator,
which causes required current reversal and applies power to the machine in an optimal manner as the rotor rotates
from pole to pole. In the absence of such current reversal, the motor would brake to a stop. In light of significant
advances over the past few decades in electronic controllers, sensorless control, induction motors, and permanent-magnet
motors, electromechanically commutated motors are increasingly being displaced
by externally commutated induction and permanent-magnet motors. A commutated DC motor has a set of rotating windings
wound on an armature mounted on a rotating shaft. The shaft also carries the commutator, a long-lasting rotary electrical
switch that periodically reverses the flow of current in the rotor windings as the shaft rotates. Thus, every brushed
DC motor has AC flowing through its rotating windings. Current flows through one or more pairs of brushes that bear
on the commutator; the brushes connect an external source of electric power to the rotating armature. The rotating
armature consists of one or more coils of wire wound around a laminated, magnetically "soft" ferromagnetic core.
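This armature-and-commutator arrangement can be illustrated with a toy model (all constants and functions below are hypothetical, not from any motor library): torque on a single-coil armature is proportional to the coil current and to the sine of the angle between rotor and stator fields, and an ideal commutator reverses the current each half revolution so the torque never changes sign.

```python
import math

def torque(theta, current, k=1.0):
    # Torque on a single-loop armature: proportional to coil current
    # and to the sine of the angle between rotor and stator fields.
    return k * current * math.sin(theta)

def commutated_current(theta, i=1.0):
    # Ideal two-segment commutator: reverse the armature current
    # every half revolution so the torque keeps the same sign.
    return i if math.sin(theta) >= 0 else -i

# Without commutation, torque reverses each half turn; with commutation
# it stays non-negative, so the rotor keeps turning under applied power.
angles = [n * math.pi / 6 for n in range(12)]
plain = [torque(a, 1.0) for a in angles]
commutated = [torque(a, commutated_current(a)) for a in angles]

assert min(plain) < 0          # uncommutated torque reverses
assert min(commutated) >= 0    # commutated torque never reverses
```

This is the sense in which the commutator keeps the rotor poles from ever settling into alignment with the stator field.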
Current from the brushes flows through the commutator and one winding of the armature, making it a temporary magnet
(an electromagnet). The magnetic field produced by the armature interacts with a stationary magnetic field produced
by either PMs or another winding (a field coil) that forms part of the motor frame. The force between the two magnetic fields
tends to rotate the motor shaft. The commutator switches power to the coils as the rotor turns, keeping the magnetic
poles of the rotor from ever fully aligning with the magnetic poles of the stator field, so that the rotor never
stops (like a compass needle does), but rather keeps rotating as long as power is applied. Many of the limitations
of the classic commutator DC motor are due to the need for brushes to press against the commutator. This creates
friction. Sparks are created by the brushes making and breaking circuits through the rotor coils as the brushes cross
the insulating gaps between commutator sections. Depending on the commutator design, this may include the brushes
shorting together adjacent sections – and hence coil ends – momentarily while crossing the gaps. Furthermore, the
inductance of the rotor coils causes the voltage across each to rise when its circuit is opened, increasing the sparking
of the brushes. This sparking limits the maximum speed of the machine, as too-rapid sparking will overheat, erode,
or even melt the commutator. The current density per unit area of the brushes, in combination with their resistivity,
limits the output of the motor. The making and breaking of electric contact also generates electrical noise; sparking
generates RFI. Brushes eventually wear out and require replacement, and the commutator itself is subject to wear
and maintenance (on larger motors) or replacement (on small motors). The commutator assembly on a large motor is
a costly element, requiring precision assembly of many parts. On small motors, the commutator is usually permanently
integrated into the rotor, so replacing it usually requires replacing the whole rotor. Large brushes are desired
for a larger brush contact area to maximize motor output, but small brushes are desired for low mass to maximize
the speed at which the motor can run without the brushes excessively bouncing and sparking. (Small brushes are also
desirable for lower cost.) Stiffer brush springs can also be used to make brushes of a given mass work at a higher
speed, but at the cost of greater friction losses (lower efficiency) and accelerated brush and commutator wear. Therefore,
DC motor brush design entails a trade-off between output power, speed, and efficiency/wear. A PM motor does not have
a field winding on the stator frame, instead relying on PMs to provide the magnetic field against which the rotor
field interacts to produce torque. Compensating windings in series with the armature may be used on large motors
to improve commutation under load. Because this field is fixed, it cannot be adjusted for speed control. PM fields
(stators) are convenient in miniature motors to eliminate the power consumption of the field winding. Most larger
DC motors are of the "dynamo" type, which have stator windings. Historically, PMs could not be made to retain high
flux if they were disassembled; field windings were more practical to obtain the needed amount of flux. However,
large PMs are costly, as well as dangerous and difficult to assemble; this favors wound fields for large machines.
To minimize overall weight and size, miniature PM motors may use high-energy magnets made with neodymium or other
strategic elements; most such magnets are of neodymium-iron-boron alloy. With their higher flux density, electric machines with
high-energy PMs are at least competitive with all optimally designed singly-fed synchronous and induction electric
machines. Miniature motors resemble the structure in the illustration, except that they have at least three rotor
poles (to ensure starting, regardless of rotor position) and their outer housing is a steel tube that magnetically
links the exteriors of the curved field magnets. Operating at normal power line frequencies, universal motors are
often found in a range less than 1000 watts. Universal motors also formed the basis of the traditional railway traction
motor in electric railways. In this application, the use of AC to power a motor originally designed to run on DC
would lead to efficiency losses due to eddy-current heating of its magnetic components, particularly the motor
field pole-pieces that, for DC, would have used solid (un-laminated) iron; such motors are now rarely used. An advantage
of the universal motor is that AC supplies may be used on motors which have some characteristics more common in DC
motors, specifically high starting torque and very compact design if high running speeds are used. The negative aspect
is the maintenance and short life problems caused by the commutator. Such motors are used in devices such as food
mixers and power tools which are used only intermittently, and often have high starting-torque demands. Multiple
taps on the field coil provide (imprecise) stepped speed control. Household blenders that advertise many speeds frequently
combine a field coil with several taps and a diode that can be inserted in series with the motor (causing the motor
to run on half-wave rectified AC). Universal motors also lend themselves to electronic speed control and, as such,
are an ideal choice for devices like domestic washing machines. The motor can be used to agitate the drum (both forwards
and in reverse) by switching the field winding with respect to the armature. Whereas SCIMs cannot turn a shaft faster
than allowed by the power line frequency, universal motors can run at much higher speeds. This makes them useful
for appliances such as blenders, vacuum cleaners, and hair dryers where high speed and light weight are desirable.
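The series-diode trick mentioned above for tapped universal motors can be quantified: half-wave rectifying the supply halves the heating power delivered to a resistive load, since the RMS of a half-wave rectified sine is V_peak/2 rather than V_peak/√2. A minimal check (illustrative 230 V mains; a real motor winding is not purely resistive):

```python
import math

# Hypothetical 230 V RMS mains; all numbers are illustrative.
v_rms_full = 230.0
v_peak = v_rms_full * math.sqrt(2)

# RMS of a half-wave rectified sine is V_peak / 2,
# versus V_peak / sqrt(2) for the full waveform.
v_rms_half = v_peak / 2

# Heating power into a resistive element scales with V_rms^2,
# so inserting the series diode cuts that power in half.
power_ratio = (v_rms_half / v_rms_full) ** 2
assert abs(power_ratio - 0.5) < 1e-9
```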
They are also commonly used in portable power tools, such as drills, sanders, circular and jig saws, where the motor's
characteristics work well. Many vacuum cleaner and weed trimmer motors exceed 10,000 rpm, while many similar miniature
grinders exceed 30,000 rpm. Currents induced into this winding provide the rotor magnetic field. The shape of the
rotor bars determines the speed-torque characteristics. At low speeds, the current induced in the squirrel cage is
nearly at line frequency and tends to be in the outer parts of the rotor cage. As the motor accelerates, the slip
frequency becomes lower, and more current is in the interior of the winding. By shaping the bars to change the resistance
of the winding portions in the interior and outer parts of the cage, effectively a variable resistance is inserted
in the rotor circuit. However, the majority of such motors have uniform bars. In a WRIM, the rotor winding is made
of many turns of insulated wire and is connected to slip rings on the motor shaft. An external resistor or other
control devices can be connected in the rotor circuit. Resistors allow control of the motor speed, although significant
power is dissipated in the external resistance. A converter can be fed from the rotor circuit and return the slip-frequency
power that would otherwise be wasted back into the power system through an inverter or separate motor-generator.
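The slip quantities behind this rotor-resistance and slip-power-recovery scheme can be written out explicitly; the motor ratings below are illustrative, not taken from any specific machine:

```python
def synchronous_speed_rpm(f_hz, poles):
    # n_sync = 120 f / p for an AC machine with p poles.
    return 120.0 * f_hz / poles

def slip(n_sync, n_rotor):
    # Fractional slip: how far the rotor lags the rotating field.
    return (n_sync - n_rotor) / n_sync

# A 4-pole, 60 Hz induction motor running at 1710 rpm (illustrative):
n_s = synchronous_speed_rpm(60, 4)   # 1800 rpm
s = slip(n_s, 1710)                  # 0.05
rotor_freq = s * 60                  # rotor currents at about 3 Hz

# With rotor-resistance control, a fraction s of the air-gap power
# is dissipated as slip loss (illustrative air-gap power of 10 kW):
p_airgap = 10_000.0
p_slip = s * p_airgap                # about 500 W lost in the resistors
```

The same slip fraction is what a slip-power-recovery converter returns to the supply instead of burning off in resistors.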
When used with a load that has a torque curve that increases with speed, the motor will operate at the speed where
the torque developed by the motor is equal to the load torque. Reducing the load will cause the motor to speed up,
and increasing the load will cause the motor to slow down until the load and motor torque are equal. Operated in
this manner, the slip losses are dissipated in the secondary resistors and can be very significant. The speed regulation
and net efficiency are also very poor. A common application of a torque motor would be the supply- and take-up reel
motors in a tape drive. In this application, driven from a low voltage, the characteristics of these motors allow
a relatively constant light tension to be applied to the tape whether or not the capstan is feeding tape past the
tape heads. Driven from a higher voltage, (and so delivering a higher torque), the torque motors can also achieve
fast-forward and rewind operation without requiring any additional mechanics such as gears or clutches. In the computer
gaming world, torque motors are used in force feedback steering wheels. Another common application is the control
of the throttle of an internal combustion engine in conjunction with an electronic governor. In this usage, the motor
works against a return spring to move the throttle in accordance with the output of the governor. The latter monitors
engine speed by counting electrical pulses from the ignition system or from a magnetic pickup and, depending on the
speed, makes small adjustments to the amount of current applied to the motor. If the engine starts to slow down relative
to the desired speed, the current will be increased, the motor will develop more torque, pulling against the return
spring and opening the throttle. Should the engine run too fast, the governor will reduce the current being applied
to the motor, causing the return spring to pull back and close the throttle. A synchronous electric motor is an AC
motor distinguished by a rotor that spins at the same rate as the rotating magnetic field produced by the alternating
current that drives it. Another way of saying this is that it has zero slip under usual operating conditions. Contrast
this with an induction motor, which must slip to produce torque. One type of synchronous motor is like an induction
motor except the rotor is excited by a DC field. Slip rings and brushes are used to conduct current to the rotor.
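Zero slip means the shaft speed is locked to the supply frequency and pole count by n = 120f/p. A minimal sketch with illustrative values:

```python
def synchronous_speed_rpm(f_hz, poles):
    # A synchronous motor's shaft turns at exactly n = 120 f / p.
    return 120.0 * f_hz / poles

# A 6-pole machine on a 50 Hz supply runs at exactly 1000 rpm:
n = synchronous_speed_rpm(50, 6)
assert n == 1000.0

# An induction motor on the same supply must run below 1000 rpm
# (slip > 0) to induce rotor current and develop torque.
```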
The rotor poles connect to each other and move at the same speed, hence the name synchronous motor. Another type,
for low load torque, has flats ground onto a conventional squirrel-cage rotor to create discrete poles. Yet another,
such as made by Hammond for its pre-World War II clocks, and in the older Hammond organs, has no rotor windings and
discrete poles. It is not self-starting. The clock requires manual starting by a small knob on the back, while the
older Hammond organs had an auxiliary starting motor connected by a spring-loaded manually operated switch. Finally,
hysteresis synchronous motors typically are (essentially) two-phase motors with a phase-shifting capacitor for one
phase. They start like induction motors, but when slip rate decreases sufficiently, the rotor (a smooth cylinder)
becomes temporarily magnetized. Its distributed poles make it act like a PMSM. The rotor material, like that of a
common nail, will stay magnetized, but can also be demagnetized with little difficulty. Once running, the rotor poles
stay in place; they do not drift. Doubly fed electric motors have two independent multiphase winding sets, which
contribute active (i.e., working) power to the energy conversion process, with at least one of the winding sets electronically
controlled for variable speed operation. Two independent multiphase winding sets (i.e., dual armature) are the maximum
provided in a single package without topology duplication. Doubly-fed electric motors are machines with an effective
constant torque speed range that is twice synchronous speed for a given frequency of excitation. This is twice the
constant torque speed range as singly-fed electric machines, which have only one active winding set. Nothing in the
principle of any of the motors described above requires that the iron (steel) portions of the rotor actually rotate.
If the soft magnetic material of the rotor is made in the form of a cylinder, then (except for the effect of hysteresis)
torque is exerted only on the windings of the electromagnets. Taking advantage of this fact is the coreless or ironless
DC motor, a specialized form of a PM DC motor. Optimized for rapid acceleration, these motors have a rotor that is
constructed without any iron core. The rotor can take the form of a winding-filled cylinder, or a self-supporting
structure comprising only the magnet wire and the bonding material. The rotor can fit inside the stator magnets;
a magnetically soft stationary cylinder inside the rotor provides a return path for the stator magnetic flux. A second
arrangement has the rotor winding basket surrounding the stator magnets. In that design, the rotor fits inside a
magnetically soft cylinder that can serve as the housing for the motor, and likewise provides a return path for the
flux. Because the rotor is much lighter in weight (mass) than a conventional rotor formed from copper windings on
steel laminations, the rotor can accelerate much more rapidly, often achieving a mechanical time constant under one
ms. This is especially true if the windings use aluminum rather than the heavier copper. But because there is no
metal mass in the rotor to act as a heat sink, even small coreless motors must often be cooled by forced air. Overheating
might be an issue for coreless DC motor designs. These motors were originally invented to drive the capstan(s) of
magnetic tape drives in the burgeoning computer industry, where minimal time to reach operating speed and minimal
stopping distance were critical. Pancake motors are still widely used in high-performance servo-controlled systems,
robotic systems, industrial automation and medical devices. Due to the variety of constructions now available, the
technology is used in applications from high temperature military to low cost pump and basic servos. A servomotor
is a motor, very often sold as a complete module, which is used within a position-control or speed-control feedback
control system, mainly to position control valves, such as motor-operated control valves. Servomotors are used in applications
such as machine tools, pen plotters, and other process systems. Motors intended for use in a servomechanism must
have well-documented characteristics for speed, torque, and power. The speed vs. torque curve is quite important
and for a servo motor it must be favourable over a wide range. Dynamic response characteristics such as winding inductance and rotor inertia
are also important; these factors limit the overall performance of the servomechanism loop. Large, powerful, but
slow-responding servo loops may use conventional AC or DC motors and drive systems with position or speed feedback
on the motor. As dynamic response requirements increase, more specialized motor designs such as coreless motors are
used. AC motors' superior power density and acceleration characteristics compared to those of DC motors tend to favor
PM synchronous, BLDC, induction, and SRM drive applications. A servo system differs from some stepper motor applications
in that the position feedback is continuous while the motor is running; a stepper system relies on the motor not
to "miss steps" for short term accuracy, although a stepper system may include a "home" switch or other element to
provide long-term stability of control. For instance, when a typical dot matrix computer printer starts up, its controller
makes the print head stepper motor drive to its left-hand limit, where a position sensor defines home position and
stops stepping. As long as power is on, a bidirectional counter in the printer's microprocessor keeps track of print-head
position. Stepper motors are a type of motor frequently used when precise rotations are required. In a stepper motor
an internal rotor containing PMs or a magnetically soft rotor with salient poles is controlled by a set of external
magnets that are switched electronically. A stepper motor may also be thought of as a cross between a DC electric
motor and a rotary solenoid. As each coil is energized in turn, the rotor aligns itself with the magnetic field produced
by the energized field winding. Unlike a synchronous motor, in its application, the stepper motor may not rotate
continuously; instead, it "steps"—starts and then quickly stops again—from one position to the next as field windings
are energized and de-energized in sequence. Depending on the sequence, the rotor may turn forwards or backwards,
and it may change direction, stop, speed up or slow down arbitrarily at any time. Simple stepper motor drivers entirely
energize or entirely de-energize the field windings, leading the rotor to "cog" to a limited number of positions;
more sophisticated drivers can proportionally control the power to the field windings, allowing the rotors to position
between the cog points and thereby rotate extremely smoothly. This mode of operation is often called microstepping.
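For a two-phase stepper, microstepping as described amounts to apportioning current between the two windings sinusoidally, so the resultant stator field, and hence the rotor, can rest between the full-step positions. A sketch of the driver arithmetic (hypothetical code, not a real driver API):

```python
import math

def microstep_currents(step, microsteps_per_step=16, i_max=1.0):
    # Sine/cosine microstepping for a two-phase stepper: the field
    # angle advances 90 degrees per full step, subdivided into
    # microsteps_per_step equal increments.
    angle = (math.pi / 2) * step / microsteps_per_step
    return (i_max * math.cos(angle),   # phase A current
            i_max * math.sin(angle))   # phase B current

# Full-step position: one phase fully energized, the other off.
assert microstep_currents(0) == (1.0, 0.0)

# Halfway between two full steps, both phases carry equal current
# (about 70.7% of i_max), holding the rotor between cog points.
ia, ib = microstep_currents(8)
assert abs(ia - ib) < 1e-9
assert abs(ia - math.cos(math.pi / 4)) < 1e-9
```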
Computer controlled stepper motors are one of the most versatile forms of positioning systems, particularly when
part of a digital servo-controlled system. Stepper motors can be rotated to a specific angle in discrete steps with
ease, and hence stepper motors are used for read/write head positioning in computer floppy diskette drives. They
were used for the same purpose in pre-gigabyte era computer disk drives, where the precision and speed they offered
was adequate for the correct positioning of the read/write head of a hard disk drive. As drive density increased,
the precision and speed limitations of stepper motors made them obsolete for hard drives—the precision limitation
made them unusable, and the speed limitation made them uncompetitive—thus newer hard disk drives use voice coil-based
head actuator systems. (The term "voice coil" in this connection is historic; it refers to the structure in a typical
(cone type) loudspeaker. This structure was used for a while to position the heads. Modern drives have a pivoted
coil mount; the coil swings back and forth, something like a blade of a rotating fan. Nevertheless, like a voice
coil, modern actuator coil conductors (the magnet wire) move perpendicular to the magnetic lines of force.) Stepper
motors were and still are often used in computer printers, optical scanners, and digital photocopiers to move the
optical scanning element, the print head carriage (of dot matrix and inkjet printers), and the platen or feed rollers.
Likewise, many computer plotters (which since the early 1990s have been replaced with large-format inkjet and laser
printers) used rotary stepper motors for pen and platen movement; the typical alternatives here were either linear
stepper motors or servomotors with closed-loop analog control systems. Since the armature windings of a direct-current
or universal motor are moving through a magnetic field, they have a voltage induced in them. This voltage tends to
oppose the motor supply voltage and so is called "back electromotive force (emf)". The voltage is proportional to
the running speed of the motor. The back emf of the motor, plus the voltage drop across the winding internal resistance
and brushes, must equal the voltage at the brushes. This provides the fundamental mechanism of speed regulation in
a DC motor. If the mechanical load increases, the motor slows down; a lower back emf results, and more current is
drawn from the supply. This increased current provides the additional torque to balance the new load. All electromagnetic
motors, including the types mentioned here, derive torque from the vector product of the interacting fields.
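The back-emf speed-regulation mechanism described above reduces to a simple steady-state model: supply voltage equals back emf plus resistive drop (V = k·ω + I·R), with torque proportional to armature current (T = k·I). A sketch with illustrative constants:

```python
# Steady-state brushed DC motor model (all constants illustrative):
#   V = k*w + I*R   (supply = back emf + resistive drop)
#   T = k*I         (torque proportional to armature current)
V = 12.0    # supply voltage, volts
R = 0.5     # armature resistance, ohms
k = 0.05    # motor constant, V*s/rad (numerically N*m/A in SI)

def steady_state_speed(load_torque):
    # Solve V = k*w + (T/k)*R for the balanced speed w.
    current = load_torque / k
    return (V - current * R) / k

w_light = steady_state_speed(0.10)   # light load
w_heavy = steady_state_speed(0.30)   # heavier load

# Increasing the load lowers the speed, which lowers the back emf,
# which in turn draws more current to supply the extra torque:
assert w_heavy < w_light
assert (0.30 / k) > (0.10 / k)       # heavier load draws more current
```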
To calculate the torque, it is necessary to know the fields in the air gap. Once these have been established by
mathematical analysis using FEA or other tools, the torque may be calculated as the integral of all the force
vectors multiplied by the radius of each vector. The current flowing in the winding produces the fields, and for
a motor using a magnetic material the field is not linearly proportional to the current. This makes the calculation
difficult, but a computer can do the many calculations needed. When optimally designed within a given core saturation
constraint and for a given active current (i.e., torque current), voltage, pole-pair number, excitation frequency
(i.e., synchronous speed), and air-gap flux density, all categories of electric motors or generators will exhibit
virtually the same maximum continuous shaft torque (i.e., operating torque) within a given air-gap area with winding
slots and back-iron depth, which determines the physical size of electromagnetic core. Some applications require
bursts of torque beyond the maximum operating torque, such as short bursts of torque to accelerate an electric vehicle
from standstill. Always limited by magnetic core saturation or safe operating temperature rise and voltage, the capacity
for torque bursts beyond the maximum operating torque differs significantly between categories of electric motors
or generators. The brushless wound-rotor synchronous doubly-fed (BWRSDF) machine is the only electric machine with
a truly dual ported transformer circuit topology (i.e., both ports independently excited with no short-circuited
port). The dual ported transformer circuit topology is known to be unstable and requires a multiphase slip-ring-brush
assembly to propagate limited power to the rotor winding set. If a precision means were available to instantaneously
control torque angle and slip for synchronous operation during motoring or generating while simultaneously providing
brushless power to the rotor winding set, the active current of the BWRSDF machine would be independent of the reactive
impedance of the transformer circuit and bursts of torque significantly higher than the maximum operating torque
and far beyond the practical capability of any other type of electric machine would be realizable. Torque bursts
greater than eight times operating torque have been calculated. The continuous torque density of conventional electric
machines is determined by the size of the air-gap area and the back-iron depth, which are determined by the power
rating of the armature winding set, the speed of the machine, and the achievable air-gap flux density before core
saturation. Despite the high coercivity of neodymium or samarium-cobalt PMs, continuous torque density is virtually
the same amongst electric machines with optimally designed armature winding sets. Continuous torque density relates
to method of cooling and permissible period of operation before destruction by overheating of windings or PM damage.
An electrostatic motor is based on the attraction and repulsion of electric charge. Usually, electrostatic motors
are the dual of conventional coil-based motors. They typically require a high-voltage power supply, although very
small motors employ lower voltages. Conventional electric motors instead employ magnetic attraction and repulsion,
and require high current at low voltages. In the 1750s, the first electrostatic motors were developed by Benjamin
Franklin and Andrew Gordon. Today the electrostatic motor finds frequent use in micro-electro-mechanical systems
(MEMS) where their drive voltages are below 100 volts, and where moving, charged plates are far easier to fabricate
than coils and iron cores. Also, the molecular machinery which runs living cells is often based on linear and rotary
electrostatic motors.
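The voltage dependence of electrostatic motors follows from the parallel-plate force law, F = ε₀AV²/(2d²): force scales with the square of the voltage and inversely with the square of the gap, which is why micrometre-scale MEMS gaps make sub-100-volt drives practical. A sketch with illustrative dimensions:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_force(area_m2, gap_m, volts):
    # Attractive force between parallel plates: F = eps0*A*V^2 / (2*d^2).
    return EPS0 * area_m2 * volts**2 / (2 * gap_m**2)

# An illustrative MEMS-scale actuator: 100 um x 100 um plates,
# a 2 um gap, driven at 50 V -- roughly tens of micronewtons.
f_mems = plate_force(100e-6 * 100e-6, 2e-6, 50.0)

# Scaling checks: force goes as V^2 and as 1/d^2.
assert abs(plate_force(1e-8, 2e-6, 100.0) / plate_force(1e-8, 2e-6, 50.0) - 4.0) < 1e-9
assert abs(plate_force(1e-8, 1e-6, 50.0) / plate_force(1e-8, 2e-6, 50.0) - 4.0) < 1e-9
```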
Switzerland (/ˈswɪtsərlənd/), officially the Swiss Confederation (Latin: Confoederatio Helvetica, hence its abbreviation
CH), is a country in Europe. While still named the "Swiss Confederation" for historical reasons, modern Switzerland
is a federal directorial republic consisting of 26 cantons, with Bern as the seat of the federal authorities, called
Bundesstadt ("federal city").[note 3] The country is situated in Western and Central Europe,[note 4] and is bordered
by Italy to the south, France to the west, Germany to the north, and Austria and Liechtenstein to the east. Switzerland
is a landlocked country geographically divided between the Alps, the Swiss Plateau and the Jura, spanning an area
of 41,285 km2 (15,940 sq mi). While the Alps occupy the greater part of the territory, the Swiss population of approximately
8 million people is concentrated mostly on the Plateau, where the largest cities are to be found: among them are
the two global and economic centres, Zürich and Geneva. The establishment of the Swiss Confederation is traditionally
dated to 1 August 1291, which is celebrated annually as the Swiss National Day. The country has a long history of
armed neutrality—it has not been in a state of war internationally since 1815—and did not join the United Nations
until 2002. Nevertheless, it pursues an active foreign policy and is frequently involved in peace-building processes
around the world. In addition to being the birthplace of the Red Cross, Switzerland is home to numerous international
organizations, including the second largest UN office. On the European level, it is a founding member of the European
Free Trade Association, but notably it is not part of the European Union, nor of the European Economic Area. However,
the country does participate in the Schengen Area and the EU's single market through a number of bilateral treaties.
Straddling the intersection of Germanic and Romance Europe, Switzerland comprises four main linguistic and cultural
regions: German, French, Italian and Romansh. Therefore, the Swiss, although predominantly German-speaking, do not
form a nation in the sense of a common ethnicity or language; rather, Switzerland's strong sense of identity and
community is founded on a common historical background, shared values such as federalism and direct democracy, and
Alpine symbolism. Due to its linguistic diversity, Switzerland is known by a variety of native names: Schweiz [ˈʃvaɪts]
(German);[note 5] Suisse [sɥis(ə)] (French); Svizzera [ˈzvittsera] (Italian); and Svizra [ˈʒviːtsrɐ] or [ˈʒviːtsʁːɐ]
(Romansh).[note 6] Switzerland is one of the wealthiest countries in the world. Switzerland ranks top
or close to the top in several metrics of national performance, including government transparency, civil liberties,
quality of life, economic competitiveness, and human development. It has the highest nominal wealth (financial and
non-financial assets) per adult in the world according to Credit Suisse and the eighth-highest per capita gross domestic
product on the IMF list. Zürich and Geneva have each been ranked among the top cities with the highest quality of
life in the world, with the former ranked 2nd globally, according to Mercer. The English name Switzerland is a compound
containing Switzer, an obsolete term for the Swiss, which was in use during the 16th to 19th centuries. The English
adjective Swiss is a loan from French Suisse, also in use since the 16th century. The name Switzer is from the Alemannic
Schwiizer, in origin an inhabitant of Schwyz and its associated territory, one of the Waldstätten cantons which formed
the nucleus of the Old Swiss Confederacy. The name originates as an exonym, applied pars pro toto to the troops of
the Confederacy. The Swiss began to adopt the name for themselves after the Swabian War of 1499, used alongside the
term for "Confederates", Eidgenossen (literally: comrades by oath), used since the 14th century. The toponym Schwyz
itself is first attested in 972, as Old High German Suittes, ultimately perhaps related to suedan "to burn", referring
to the area of forest that was burned and cleared to build. The name was extended to the area dominated by the canton,
and after the Swabian War of 1499 gradually came to be used for the entire Confederation. The Swiss German name of
the country, Schwiiz, is homophonous to that of the canton and the settlement, but distinguished by the use of the
definite article (d'Schwiiz for the Confederation, but simply Schwyz for the canton and the town). The earliest known
cultural tribes of the area were members of the Hallstatt and La Tène cultures, named after the archaeological site
of La Tène on the north side of Lake Neuchâtel. La Tène culture developed and flourished during the late Iron Age
from around 450 BC, possibly under some influence from the Greek and Etruscan civilisations. One of the most important
tribal groups in the Swiss region was the Helvetii. Steadily harassed by Germanic tribes, the Helvetii decided in 58 BC
to abandon the Swiss plateau and migrate to western Gallia, but Julius Caesar's armies pursued and defeated them
at the Battle of Bibracte, in today's eastern France, forcing the tribe to move back to its original homeland. In
15 BC, Tiberius, who was destined to be the second Roman emperor, and his brother Drusus conquered the Alps, integrating
them into the Roman Empire. The area occupied by the Helvetii—the namesakes of the later Confoederatio Helvetica—first
became part of Rome's Gallia Belgica province and then of its Germania Superior province, while the eastern portion
of modern Switzerland was integrated into the Roman province of Raetia. Sometime around the start of the Common Era,
the Romans maintained a large legionary camp called Vindonissa, now a ruin at the confluence of the Aare and Reuss
rivers, near the town of Windisch, an outskirt of Brugg. In about 260 AD, the fall of the Agri Decumates territory
north of the Rhine transformed today's Switzerland into a frontier land of the Empire. Repeated raids by the Alamanni
tribes provoked the ruin of the Roman towns and economy, forcing the population to find shelter near Roman fortresses,
like the Castrum Rauracense near Augusta Raurica. The Empire built another line of defense at the north border (the
so-called Donau-Iller-Rhine-Limes), but at the end of the fourth century the increased Germanic pressure forced the
Romans to abandon the linear defence concept, and the Swiss plateau was finally open to settlement by Germanic
tribes. In the Early Middle Ages, from the end of the 4th century, the western extent of modern-day Switzerland was
part of the territory of the Kings of the Burgundians. The Alemanni settled the Swiss plateau in the 5th century
and the valleys of the Alps in the 8th century, forming Alemannia. Modern-day Switzerland was therefore then divided
between the kingdoms of Alemannia and Burgundy. The entire region became part of the expanding Frankish Empire in
the 6th century, following Clovis I's victory over the Alemanni at Tolbiac in 504 AD, and later Frankish domination
of the Burgundians. By 1200, the Swiss plateau comprised the dominions of the houses of Savoy, Zähringer, Habsburg,
and Kyburg. Some regions (Uri, Schwyz, Unterwalden, later known as Waldstätten) were accorded the Imperial immediacy
to grant the empire direct control over the mountain passes. With the extinction of its male line in 1263, the Kyburg
dynasty fell in 1264; the Habsburgs under King Rudolph I (Holy Roman Emperor in 1273) then laid claim to the Kyburg
lands and annexed them, extending their territory to the eastern Swiss plateau. By 1353, the three original cantons
had joined with the cantons of Glarus and Zug and the Lucerne, Zürich and Bern city states to form the "Old Confederacy"
of eight states that existed until the end of the 15th century. The expansion led to increased power and wealth for
the federation. By 1460, the confederates controlled most of the territory south and west of the Rhine to the Alps
and the Jura mountains, particularly after victories against the Habsburgs (Battle of Sempach, Battle of Näfels),
over Charles the Bold of Burgundy during the 1470s, and the success of the Swiss mercenaries. The Swiss victory in
the Swabian War against the Swabian League of Emperor Maximilian I in 1499 amounted to de facto independence within
the Holy Roman Empire. The Old Swiss Confederacy had acquired a reputation of invincibility during these earlier
wars, but expansion of the federation suffered a setback in 1515 with the Swiss defeat in the Battle of Marignano.
This ended the so-called "heroic" epoch of Swiss history. The success of Zwingli's Reformation in some cantons led
to inter-cantonal religious conflicts in 1529 and 1531 (Wars of Kappel). It was not until more than one hundred years
after these internal wars that, in 1648, under the Peace of Westphalia, European countries recognized Switzerland's
independence from the Holy Roman Empire and its neutrality. In 1798, the revolutionary French government conquered
Switzerland and imposed a new unified constitution. This centralised the government of the country, effectively abolishing
the cantons; moreover, Mülhausen joined France, and the Valtellina valley joined the Cisalpine Republic, both separating from Switzerland.
The new regime, known as the Helvetic Republic, was highly unpopular. It had been imposed by a foreign invading army
and destroyed centuries of tradition, making Switzerland nothing more than a French satellite state. The fierce French
suppression of the Nidwalden Revolt in September 1798 was an example of the oppressive presence of the French Army
and the local population's resistance to the occupation. When war broke out between France and its rivals, Russian
and Austrian forces invaded Switzerland. The Swiss refused to fight alongside the French in the name of the Helvetic
Republic. In 1803 Napoleon organised a meeting of the leading Swiss politicians from both sides in Paris. The result
was the Act of Mediation which largely restored Swiss autonomy and introduced a Confederation of 19 cantons. Henceforth,
much of Swiss politics would concern balancing the cantons' tradition of self-rule with the need for a central government.
The restoration of power to the patriciate was only temporary. After a period of unrest with repeated violent clashes
such as the Züriputsch of 1839, civil war (the Sonderbundskrieg) broke out in 1847 when some Catholic cantons tried
to set up a separate alliance (the Sonderbund). The war lasted for less than a month, causing fewer than 100 casualties,
most of which were through friendly fire. Yet however minor the Sonderbundskrieg appears compared with other European
riots and wars in the 19th century, it nevertheless had a major impact on both the psychology and the society of
the Swiss and of Switzerland. Thus, while the rest of Europe saw revolutionary uprisings, the Swiss drew up a constitution
which provided for a federal layout, much of it inspired by the American example. This constitution provided for
a central authority while leaving the cantons the right to self-government on local issues. Giving credit to those
who favoured the power of the cantons (the Sonderbund Kantone), the national assembly was divided between an upper
house (the Council of States, two representatives per canton) and a lower house (the National Council, with representatives
elected from across the country). Referenda were made mandatory for any amendment of this constitution. During World
War II, detailed invasion plans were drawn up by the Germans, but Switzerland was never attacked. Switzerland was
able to remain independent through a combination of military deterrence, concessions to Germany, and good fortune
as larger events during the war delayed an invasion. Under the central command of General Henri Guisan, a general mobilisation
of the armed forces was ordered. The Swiss military strategy was changed from one of static defence at the borders
to protect the economic heartland, to one of organised long-term attrition and withdrawal to strong, well-stockpiled
positions high in the Alps known as the Reduit. Switzerland was an important base for espionage by both sides in
the conflict and often mediated communications between the Axis and Allied powers. Switzerland's trade was blockaded
by both the Allies and by the Axis. Economic cooperation and extension of credit to the Third Reich varied according
to the perceived likelihood of invasion and the availability of other trading partners. Concessions reached a peak
after a crucial rail link through Vichy France was severed in 1942, leaving Switzerland completely surrounded by
the Axis. Over the course of the war, Switzerland interned over 300,000 refugees and the International Red Cross,
based in Geneva, played an important part during the conflict. Strict immigration and asylum policies as well as
the financial relationships with Nazi Germany did not become controversial until the end of the 20th century. Switzerland
was the last Western republic to grant women the right to vote. Some Swiss cantons approved this in 1959, while at
the federal level it was achieved in 1971 and, after resistance, in the last canton Appenzell Innerrhoden (one of
only two remaining Landsgemeinde) in 1990. After obtaining suffrage at the federal level, women quickly rose in political
significance, with the first woman on the seven-member Federal Council executive being Elisabeth Kopp, who served
from 1984 to 1989, and the first female president being Ruth Dreifuss in 1999. In 2002 Switzerland became a full member
of the United Nations, leaving the Vatican City as the last widely recognised state without full UN membership. Switzerland
is a founding member of the EFTA, but is not a member of the European Economic Area. An application for membership
in the European Union was submitted in May 1992, but has not advanced since the EEA was rejected in December 1992, when Switzerland
was the only country to hold a referendum on it. There have since been several referenda on the EU issue;
due to a mixed reaction from the population the membership application has been frozen. Nonetheless, Swiss law is
gradually being adjusted to conform with that of the EU, and the government has signed a number of bilateral agreements
with the European Union. Switzerland, together with Liechtenstein, has been completely surrounded by the EU since
Austria's entry in 1995. On 5 June 2005, Swiss voters agreed by a 55% majority to join the Schengen treaty, a result
that was regarded by EU commentators as a sign of support by Switzerland, a country that is traditionally perceived
as independent and reluctant to enter supranational bodies. Extending across the north and south side of the Alps
in west-central Europe, Switzerland encompasses a great diversity of landscapes and climates on a limited area of
41,285 square kilometres (15,940 sq mi). The population is about 8 million, resulting in an average population density
of around 195 people per square kilometre (500/sq mi). The more mountainous southern half of the country is far more
sparsely populated than the northern half. In Graubünden, the largest canton, which lies entirely in the Alps, population
density falls to 27/km² (70/sq mi). Switzerland lies between latitudes 45° and 48° N, and longitudes 5° and 11°
E. It contains three basic topographical areas: the Swiss Alps to the south, the Swiss Plateau or Central Plateau,
and the Jura mountains on the west. The Alps are a high mountain range running across the central-south of the country,
comprising about 60% of the country's total area. The majority of the Swiss population live in the Swiss Plateau.
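As an illustrative aside (not part of the source text), the population-density figures quoted above are mutually consistent; a quick check in a few lines of Python, where the Graubünden population of roughly 192,000 is an assumed round figure not given in the text:

```python
# Quick arithmetic check of the population-density figures quoted above.
# Inputs: ~8 million people on 41,285 km²; Graubünden covers 7,105 km²
# (its population of ~192,000 is an assumption, not stated in the text).

def density(population, area_km2):
    """People per square kilometre, rounded to the nearest whole number."""
    return round(population / area_km2)

print(density(8_000_000, 41_285))  # ~194, matching the "around 195" cited
print(density(192_000, 7_105))     # 27, matching the 27/km² cited for Graubünden
```

The country-wide figure rounds to 194 rather than exactly 195 because the population is only given as "about 8 million".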
Among the high valleys of the Swiss Alps many glaciers are found, totalling an area of 1,063 square kilometres (410
sq mi). From these originate the headwaters of several major rivers, such as the Rhine, Inn, Ticino and Rhône, which
flow in the four cardinal directions across the whole of Europe. The hydrographic network includes several of the largest
bodies of freshwater in Central and Western Europe, among them Lake Geneva (called Lac Léman
in French), Lake Constance (known as Bodensee in German) and Lake Maggiore. Switzerland has more than 1500 lakes,
and contains 6% of Europe's stock of fresh water. Lakes and glaciers cover about 6% of the national territory. The
largest lake is Lake Geneva, in western Switzerland shared with France. The Rhône is both the main source and outflow
of Lake Geneva. Lake Constance is the second largest Swiss lake and, like Lake Geneva, an intermediate step
of a major river, the Rhine, on the border with Austria and Germany. While the Rhône flows into the Mediterranean Sea in the French Camargue
region and the Rhine flows into the North Sea at Rotterdam in the Netherlands, about 1000 km apart, their sources
are only about 22 km apart from each other in the Swiss Alps. 48 of Switzerland's mountains are 4,000 metres (13,000
ft) above sea level or higher. At 4,634 m (15,203 ft), Monte Rosa is the highest, although the Matterhorn (4,478
m or 14,692 ft) is often regarded as the most famous. Both are located within the Pennine Alps in the canton of Valais.
The section of the Bernese Alps above the deep glacial Lauterbrunnen valley, containing 72 waterfalls, is well known
for the Jungfrau (4,158 m or 13,642 ft), Eiger and Mönch, and the many picturesque valleys in the region. In the southeast
the long Engadin Valley, encompassing the St. Moritz area in the canton of Graubünden, is also well known; the highest
peak in the neighbouring Bernina Alps is Piz Bernina (4,049 m or 13,284 ft). The Swiss climate is generally temperate,
but can vary greatly between localities, from glacial conditions on the mountaintops to the often pleasant
near-Mediterranean climate at Switzerland's southern tip. There are some valley areas in the southern part of Switzerland
where some cold-hardy palm trees are found. Summers tend to be warm and humid at times, with periodic rainfall, so
they are ideal for pastures and grazing. The less humid winters in the mountains may see long intervals of stable
conditions for weeks, while the lower lands tend to suffer from inversion during these periods, seeing no sun
for weeks. A weather phenomenon known as the föhn (with an identical effect to the chinook wind) can occur at all
times of the year and is characterised by an unexpectedly warm wind, bringing air of very low relative humidity to
the north of the Alps during rainfall periods on the southern face of the Alps. This works both ways across the Alps,
but it is more effective when blowing from the south because the oncoming wind faces a steeper step from that side. Valleys
running south to north produce the strongest effect. The driest conditions persist in all inner alpine valleys, which receive
less rain because arriving clouds lose a lot of their content while crossing the mountains before reaching these
areas. Large alpine areas such as Graubünden remain drier than pre-alpine areas, and, as in the main valley of the
Valais, wine grapes are grown there. Switzerland's ecosystems can be particularly fragile, because the many delicate
valleys separated by high mountains often form unique ecologies. The mountainous regions themselves are also vulnerable,
with a rich range of plants not found at other altitudes, and experience some pressure from visitors and grazing.
The climatic, geological and topographical conditions of the alpine region make for a very fragile ecosystem that
is particularly sensitive to climate change. Nevertheless, according to the 2014 Environmental Performance Index,
Switzerland ranks first among 132 nations in safeguarding the environment, due to its high scores on environmental
public health, its heavy reliance on renewable sources of energy (hydropower and geothermal energy), and its control
of greenhouse gas emissions. The Federal Constitution adopted in 1848 is the legal foundation of the modern federal
state. It is among the oldest constitutions in the world. A new Constitution was adopted in 1999, but did not introduce
notable changes to the federal structure. It outlines basic and political rights of individuals and citizen participation
in public affairs, divides the powers between the Confederation and the cantons and defines federal jurisdiction
and authority. There are three main governing bodies on the federal level: the bicameral parliament (legislative),
the Federal Council (executive) and the Federal Court (judicial). The Swiss Parliament consists of two houses: the
Council of States which has 46 representatives (two from each canton and one from each half-canton) who are elected
under a system determined by each canton, and the National Council, which consists of 200 members who are elected
under a system of proportional representation, depending on the population of each canton. Members of both houses
serve for 4 years. When both houses are in joint session, they are known collectively as the Federal Assembly. Through
referendums, citizens may challenge any law passed by parliament and through initiatives, introduce amendments to
the federal constitution, thus making Switzerland a direct democracy. The Federal Council constitutes the federal
government, directs the federal administration and serves as collective Head of State. It is a collegial body of
seven members, elected for a four-year mandate by the Federal Assembly which also exercises oversight over the Council.
The President of the Confederation is elected by the Assembly from among the seven members, traditionally in rotation
and for a one-year term; the President chairs the government and assumes representative functions. However, the president
is a primus inter pares with no additional powers, and remains the head of a department within the administration.
Direct democracy and federalism are hallmarks of the Swiss political system. Swiss citizens are subject to three
legal jurisdictions: the commune, canton and federal levels. The 1848 federal constitution defines a system of direct
democracy (sometimes called half-direct or representative direct democracy because it is aided by the more commonplace
institutions of a representative democracy). The instruments of this system at the federal level, known as civic
rights (Volksrechte, droits civiques), include the right to submit a constitutional initiative and a referendum,
both of which may overturn parliamentary decisions. Similarly, the federal constitutional initiative allows citizens
to put a constitutional amendment to a national vote, if 100,000 voters sign the proposed amendment within 18 months.[note
8] Parliament can supplement the proposed amendment with a counter-proposal, and then voters must indicate a preference
on the ballot in case both proposals are accepted. Constitutional amendments, whether introduced by initiative or
in Parliament, must be accepted by a double majority of the national popular vote and the cantonal popular votes.[note
9] The cantons have a permanent constitutional status and, in comparison with the situation in other countries, a
high degree of independence. Under the Federal Constitution, all 26 cantons are equal in status. Each canton has
its own constitution, and its own parliament, government and courts. However, there are considerable differences
between the individual cantons, most particularly in terms of population and geographical area. Their populations
vary between 15,000 (Appenzell Innerrhoden) and 1,253,500 (Zürich), and their area between 37 km2 (14 sq mi) (Basel-Stadt)
and 7,105 km2 (2,743 sq mi) (Graubünden). The Cantons comprise a total of 2,485 municipalities. Within Switzerland
there are two enclaves: Büsingen belongs to Germany, Campione d'Italia belongs to Italy. Traditionally, Switzerland
avoids alliances that might entail military, political, or direct economic action and has been neutral since the
end of its expansion in 1515. Its policy of neutrality was internationally recognised at the Congress of Vienna in
1815. Only in 2002 did Switzerland become a full member of the United Nations and it was the first state to join
it by referendum. Switzerland maintains diplomatic relations with almost all countries and historically has served
as an intermediary between other states. Switzerland is not a member of the European Union; the Swiss people have
consistently rejected membership since the early 1990s. However, Switzerland does participate in the Schengen Area.
A large number of international institutions have their seats in Switzerland, in part because of its policy of neutrality.
Geneva is the birthplace of the Red Cross and Red Crescent Movement and the Geneva Conventions and, since 2006, hosts
the United Nations Human Rights Council. Even though Switzerland is one of the most recent countries to have joined
the United Nations, the Palace of Nations in Geneva is the second biggest centre for the United Nations after New
York, and Switzerland was a founding member and home to the League of Nations. Apart from the United Nations headquarters,
the Swiss Confederation is host to many UN agencies, like the World Health Organization (WHO), the International
Labour Organization (ILO), the International Telecommunication Union (ITU), the United Nations High Commissioner
for Refugees (UNHCR) and about 200 other international organisations, including the World Trade Organization and
the World Intellectual Property Organization. The annual meetings of the World Economic Forum in Davos bring together
top international business and political leaders from Switzerland and foreign countries to discuss important issues
facing the world, including health and the environment. Additionally the headquarters of the Bank for International
Settlements (BIS) are located in Basel since 1930. The structure of the Swiss militia system stipulates that the
soldiers keep their Army-issued equipment, including all personal weapons, at home. Some organizations and political
parties find this practice controversial but mainstream Swiss opinion is in favour of the system. Compulsory military
service concerns all male Swiss citizens; women can serve voluntarily. Men usually receive military conscription
orders for training at the age of 18. About two-thirds of young Swiss men are found suited for service; for those
found unsuited, various forms of alternative service exist. Annually, approximately 20,000 persons are trained in
recruit centres for 18 to 21 weeks. The "Army XXI" reform was adopted by popular vote in 2003; it
replaced the previous "Army 95" model, reducing troop numbers from 400,000 to about 200,000. Of those, 120,000 are
active in periodic Army training and 80,000 are non-training reserves. Switzerland has a stable, prosperous and high-tech
economy and enjoys great wealth, being ranked as the wealthiest country in the world per capita in multiple rankings.
In 2011 it was ranked as the wealthiest country in the world in per capita terms (with "wealth" being defined to
include both financial and non-financial assets), while the 2013 Credit Suisse Global Wealth Report showed that Switzerland
was the country with the highest average wealth per adult in 2013. It has the world's nineteenth largest economy
by nominal GDP and the thirty-sixth largest by purchasing power parity. It is the twentieth largest exporter, despite
its small size. Switzerland has the highest European rating in the Index of Economic Freedom 2010, while also providing
large coverage through public services. The nominal per capita GDP is higher than those of the larger Western and
Central European economies and Japan. If adjusted for purchasing power parity, Switzerland ranks 8th in the world
in terms of GDP per capita, according to the World Bank and IMF (ranked 15th according to the CIA Worldfactbook).
The World Economic Forum's Global Competitiveness Report currently ranks Switzerland's economy as the most competitive
in the world, while ranked by the European Union as Europe's most innovative country. For much of the 20th century,
Switzerland was the wealthiest country in Europe by a considerable margin (by GDP – per capita). In 2007 the gross
median household income in Switzerland was an estimated 137,094 USD at purchasing power parity while the median income
was 95,824 USD. Switzerland also has one of the world's largest account balances as a percentage of GDP. Switzerland's
most important economic sector is manufacturing. Manufacturing consists largely of the production of specialist chemicals,
health and pharmaceutical goods, scientific and precision measuring instruments and musical instruments. The largest
exported goods are chemicals (34% of exported goods), machines/electronics (20.9%), and precision instruments/watches
(16.9%). Exported services amount to a third of exports. The service sector – especially banking and insurance, tourism,
and international organisations – is another important industry for Switzerland. Around 3.8 million people work in
Switzerland; about 25% of employees belonged to a trade union in 2004. Switzerland has a more flexible job market
than neighbouring countries and the unemployment rate is very low. The unemployment rate increased from a low of
1.7% in June 2000 to a peak of 4.4% in December 2009. The unemployment rate was 3.2% in 2014. Population growth from
net immigration is quite high, at 0.52% of population in 2004. The foreign citizen population was 21.8% in 2004,
about the same as in Australia. GDP per hour worked is the world's 16th highest, at 49.46 international dollars in
2012. Switzerland has an overwhelmingly private sector economy and low tax rates by Western World standards; overall
taxation is one of the smallest of developed countries. Switzerland is a relatively easy place to do business, currently
ranking 20th of 189 countries in the Ease of Doing Business Index. The slow growth Switzerland experienced in the
1990s and the early 2000s has brought greater support for economic reforms and harmonization with the European Union.
According to Credit Suisse, only about 37% of residents own their own homes, one of the lowest rates of home ownership
in Europe. Housing and food price levels were 171% and 145% of the EU-25 index in 2007, compared to 113% and 104%
in Germany. The Swiss federal budget amounted to 62.8 billion Swiss francs in 2010, equivalent to 11.35%
of the country's GDP in that year; however, the regional (canton) budgets and the budgets of the municipalities are
not counted as part of the federal budget and the total rate of government spending is closer to 33.8% of GDP. The
main sources of income for the federal government are the value-added tax (33%) and the direct federal tax (29%)
and the main expenditures are in the areas of social welfare and finance & tax. The expenditures of the Swiss
Confederation have grown from 7% of GDP in 1960 to 9.7% in 1990 and to 10.7% in 2010. While the social
welfare and finance & tax sectors have grown from 35% of spending in 1990 to 48.2% in 2010, a significant reduction of expenditures
has occurred in the sectors of agriculture and national defense, from 26.5% in 1990 to 12.4% (estimate for the
year 2015). Agricultural protectionism—a rare exception to Switzerland's free trade policies—has contributed to high
food prices. Product market liberalisation is lagging behind many EU countries according to the OECD. Nevertheless,
domestic purchasing power is one of the best in the world. Apart from agriculture, economic and trade barriers between
the European Union and Switzerland are minimal and Switzerland has free trade agreements worldwide. Switzerland is
a member of the European Free Trade Association (EFTA). Education in Switzerland is very diverse because the constitution
of Switzerland delegates the authority for the school system to the cantons. There are both public and private schools,
including many private international schools. The minimum age for primary school is about six years in all cantons,
but most cantons provide a free "children's school" starting at four or five years old. Primary school continues
until grade four, five or six, depending on the school. Traditionally, the first foreign language in school was always
one of the other national languages, although in 2000 a few cantons began introducing English first. There
are 12 universities in Switzerland, ten of which are maintained at cantonal level and usually offer a range of non-technical
subjects. The first university in Switzerland was founded in 1460 in Basel (with a faculty of medicine), establishing a
tradition of chemical and medical research in the country. The biggest university in Switzerland is the University
of Zurich with nearly 25,000 students. The two institutes sponsored by the federal government are the ETHZ in Zürich
(founded 1855) and the EPFL in Lausanne (founded 1969 as such, formerly an institute associated with the University
of Lausanne), which both have an excellent international reputation.[note 10] Many Nobel Prizes have been awarded
to Swiss scientists, for example to the world-famous physicist Albert Einstein, who developed
his theory of special relativity while working in Bern. More recently Vladimir Prelog, Heinrich Rohrer, Richard Ernst, Edmond
Fischer, Rolf Zinkernagel and Kurt Wüthrich received Nobel prizes in the sciences. In total, 113 Nobel Prize winners
in all fields are associated with Switzerland,[note 11] and the Nobel Peace Prize has been awarded nine times to
organisations residing in Switzerland. Geneva and the nearby French department of Ain co-host the world's largest
laboratory, CERN, dedicated to particle physics research. Another important research center is the Paul Scherrer
Institute. Notable inventions include lysergic acid diethylamide (LSD), the scanning tunneling microscope (Nobel
prize) and Velcro. Some technologies enabled the exploration of new worlds such as the pressurized balloon of Auguste
Piccard and the Bathyscaphe which permitted Jacques Piccard to reach the deepest point of the world's oceans. Switzerland
voted against membership in the European Economic Area in a referendum in December 1992 and has since maintained
and developed its relationships with the European Union (EU) and European countries through bilateral agreements.
In March 2001, the Swiss people refused in a popular vote to start accession negotiations with the EU. In recent
years, the Swiss have brought their economic practices largely into conformity with those of the EU in many ways,
in an effort to enhance their international competitiveness. The economy grew at 3% in 2010, 1.9% in 2011, and 1%
in 2012. Full EU membership is a long-term objective of some in the Swiss government, but there is considerable popular
sentiment against this, supported by the conservative SVP party. The western French-speaking areas and the urban regions
of the rest of the country tend to be more pro-EU, but they are far from a significant share of the population.
The government has established an Integration Office under the Department of Foreign Affairs and the Department of
Economic Affairs. To minimise the negative consequences of Switzerland's isolation from the rest of Europe, Bern
and Brussels signed seven bilateral agreements to further liberalise trade ties. These agreements were signed in
1999 and took effect in 2001. This first series of bilateral agreements included the free movement of persons. A
second series covering nine areas was signed in 2004 and has since been ratified, which includes the Schengen Treaty
and the Dublin Convention besides others. They continue to discuss further areas for cooperation. In 2006, Switzerland
approved 1 billion francs of supportive investment in the poorer Southern and Central European countries in support
of cooperation and positive ties to the EU as a whole. A further referendum will be needed to approve 300 million
francs to support Romania and Bulgaria following their recent admission. The Swiss have also been under EU and sometimes
international pressure to reduce banking secrecy and to raise tax rates to parity with the EU. Preparatory discussions
are being opened in four new areas: opening up the electricity market, participation in the European GNSS project
Galileo, cooperating with the European centre for disease prevention and recognising certificates of origin for food
products. On 9 February 2014, Swiss voters narrowly approved, by 50.3%, a ballot initiative launched by the national
conservative Swiss People's Party (SVP/UDC) to restrict immigration, thus reintroducing a quota system on the
influx of foreigners. The initiative was mostly backed by rural areas (57.6% approval), suburban areas (51.2% approval), and
isolated towns (51.3% approval), as well as by a strong majority (69.2% approval) in the canton of
Ticino, while metropolitan centres (58.5% rejection) and the French-speaking part (58.5% rejection) of Switzerland
rejected it. Some news commentators claim that this proposal de facto contradicts the bilateral agreements
on the free movement of persons with the European Union. The former ten-year moratorium on the construction
of new nuclear power plants was the result of a citizens' initiative voted on in 1990 which had passed with 54.5%
Yes vs. 45.5% No votes. Plans for a new nuclear plant in the Canton of Bern have been put on hold after the accident
at the Fukushima Daiichi power plant in 2011. The Swiss Federal Office of Energy (SFOE) is the office responsible
for all questions relating to energy supply and energy use within the Federal Department of Environment, Transport,
Energy and Communications (DETEC). The agency is supporting the 2000-watt society initiative to cut the nation's
energy use by more than half by the year 2050. On 25 May 2011 the Swiss government announced that it plans to end
its use of nuclear energy in the next 2 or 3 decades. "The government has voted for a phaseout because we want to
ensure a secure and autonomous supply of energy", Energy Minister Doris Leuthard said that day at a press conference
in Bern. "Fukushima showed that the risk of nuclear power is too high, which in turn has also increased the costs
of this energy form." The first reactor would reportedly be taken offline in 2019 and the last one in 2034. Parliament
will discuss the plan in June 2011, and there could be a referendum as well. The densest rail network in Europe,
at 5,063 km (3,146 mi), carries over 350 million passengers annually. In 2007, each Swiss citizen travelled on average
2,258 km (1,403 mi) by rail, making the Swiss the keenest rail users. The network is administered mainly by the Federal
Railways, except in Graubünden, where the 366 km (227 mi) narrow gauge railway is operated by the Rhaetian Railways
and includes some World Heritage lines. The building of new railway base tunnels through the Alps is under way to
reduce the time of travel between north and south through the AlpTransit project. The Swiss road network, managed
under a public-private arrangement, is funded by road tolls and vehicle taxes. Use of the Swiss autobahn/autoroute
system requires the purchase of a vignette (toll sticker), which costs 40 Swiss francs for one calendar year, for both
passenger cars and trucks. The network has a total length of 1,638 km (1,018 mi) (as of 2000) and, relative to the
country's area of 41,290 km2 (15,940 sq mi), is one of the densest motorway networks in the world.
Zürich Airport is Switzerland's largest international flight gateway, which handled 22.8 million passengers in 2012.
The other international airports are Geneva Airport (13.9 million passengers in 2012), EuroAirport Basel-Mulhouse-Freiburg
which is located in France, Bern Airport, Lugano Airport, St. Gallen-Altenrhein Airport and Sion Airport. Swiss International
Air Lines is the flag carrier of Switzerland. Its main hub is Zürich. Switzerland has one of the best environmental
records among nations in the developed world; it was one of the countries to sign the Kyoto Protocol in 1998 and
ratified it in 2003. With Mexico and the Republic of Korea it forms the Environmental Integrity Group (EIG). The
country is heavily active in recycling and anti-littering regulations and is one of the top recyclers in the world,
with 66% to 96% of recyclable materials being recycled, depending on the area of the country. The 2014 Global Green
Economy Index ranked Switzerland among the top 10 green economies in the world. In many places in Switzerland, household
rubbish disposal is charged for. Rubbish (except dangerous items, batteries, etc.) is collected only in bags
that either have a payment sticker attached or are official bags with the surcharge paid at the time of purchase.
This gives a financial incentive to recycle as much as possible, since recycling is free. Illegal disposal of garbage
is not tolerated but usually the enforcement of such laws is limited to violations that involve the unlawful disposal
of larger volumes at traffic intersections and public areas. Fines for not paying the disposal fee range from CHF
200 to CHF 500. In 2012, resident foreigners made up 23.3% of the population. Most of these (64%) were from European Union
or EFTA countries. Italians were the largest single group of foreigners with 15.6% of total foreign population. They
were closely followed by Germans (15.2%), immigrants from Portugal (12.7%), France (5.6%), Serbia (5.3%), Turkey
(3.8%), Spain (3.7%), and Austria (2%). Immigrants from Sri Lanka, most of them former Tamil refugees, were the largest
group among people of Asian origin (6.3%). Additionally, the figures from 2012 show that 34.7% of the permanent resident
population aged 15 or over in Switzerland, i.e. 2,335,000 persons, had an immigrant background. A third of this population
(853,000) held Swiss citizenship. Four fifths of persons with an immigration background were themselves immigrants
(first-generation foreigners and foreign-born naturalised Swiss citizens), whereas one fifth were born in Switzerland
(second-generation foreigners and native-born naturalised Swiss citizens). In the 2000s, domestic and international
institutions expressed concern about what they perceived as an increase in xenophobia, particularly in some political
campaigns. In reply to one critical report the Federal Council noted that "racism unfortunately is present in Switzerland",
but stated that the high proportion of foreign citizens in the country, as well as the generally unproblematic integration
of foreigners, underlined Switzerland's openness. Switzerland has four official languages: principally German (63.5%
of the total population, including foreign residents, in 2013); French (22.5%) in the west; and Italian (8.1%) in the south.
The fourth official language, Romansh (0.5%), is a Romance language spoken locally in the southeastern trilingual
canton of Graubünden, and is designated by Article 4 of the Federal Constitution as a national language along with
German, French, and Italian, and in Article 70 as an official language if the authorities communicate with persons
who speak Romansh. However, federal laws and other official acts do not need to be decreed in Romansh. Aside from
the official forms of their respective languages, the four linguistic regions of Switzerland also have their local
dialectal forms. The role played by dialects in each linguistic region varies dramatically: in the German-speaking
regions, Swiss German dialects have become ever more prevalent since the second half of the 20th century, especially
in the media, such as radio and television, and are used as an everyday language, while the Swiss variety of Standard
German is almost always used instead of dialect for written communication (cf. diglossia). Conversely,
in the French-speaking regions the local dialects have almost disappeared (only 6.3% of the population of Valais,
3.9% of Fribourg, and 3.1% of Jura still spoke dialects at the end of the 20th century), while in the Italian-speaking
regions dialects are mostly limited to family settings and casual conversation. The principal official languages
(German, French, and Italian) have terms, not used outside of Switzerland, known as Helvetisms. German Helvetisms
are, roughly speaking, a large group of words typical of Swiss Standard German that appear neither in Standard
German nor in other German dialects. These include terms borrowed from Switzerland's surrounding language cultures (German
Billette from French) and terms formed on the model of a similar term in another language (Italian azione used not only
as "act" but also as "discount", from German Aktion). The French spoken in Switzerland has similar terms, which are
likewise known as Helvetisms. Helvetisms are most frequently found in vocabulary, phrases, and pronunciation, but some
are also distinctive in syntax and orthography. Duden, one of the prescriptive sources for Standard
German, lists about 3,000 Helvetisms. Current French dictionaries, such as the Petit Larousse, include several
hundred Helvetisms. Swiss citizens are universally required to buy health insurance from private insurance companies,
which in turn are required to accept every applicant. While the cost of the system is among the highest, it compares
well with other European countries in terms of health outcomes; patients who are citizens have been reported as being,
in general, highly satisfied with it. In 2012, life expectancy at birth was 80.4 years for men and 84.7 years for
women — the highest in the world. However, spending on health is particularly high at 11.4% of GDP (2010), on par
with Germany and France (11.6%) and other European countries, and notably less than spending in the USA (17.6%).
From 1990, a steady increase can be observed, reflecting the high costs of the services provided. With an ageing
population and new healthcare technologies, health spending will likely continue to rise. Between two thirds and
three quarters of the population live in urban areas. Switzerland has gone from a largely rural country to an urban
one in just 70 years. Since 1935 urban development has claimed as much of the Swiss landscape as it did during the
previous 2,000 years. This urban sprawl affects not only the plateau but also the Jura and the Alpine foothills,
and there are growing concerns about land use. Since the beginning of the 21st century, population growth in urban
areas has been higher than in the countryside. Switzerland has a dense network of cities, in which large, medium
and small cities are complementary. The plateau is very densely populated, with about 450 people per km2, and the landscape
continually shows signs of human presence. The weight of the largest metropolitan areas, which are Zürich, Geneva–Lausanne,
Basel and Bern, tends to increase. In international comparison the importance of these urban areas is greater than
their number of inhabitants suggests. In addition, the two main centres of Zürich and Geneva are recognized for their
particularly great quality of life. Christianity is the predominant religion of Switzerland (about 71% of resident
population and 75% of Swiss citizens), divided between the Catholic Church (38.21% of the population), the Swiss
Reformed Church (26.93%), further Protestant churches (2.89%) and other Christian denominations (2.79%). There has
been a recent rise in Evangelicalism. Immigration has brought Islam (4.95%) and Eastern Orthodoxy (around 2%) as
sizeable minority religions. According to a 2015 poll by Gallup International, 12% of Swiss people self-identified
as "convinced atheists." As of the 2000 census other Christian minority communities include Neo-Pietism (0.44%),
Pentecostalism (0.28%, mostly incorporated in the Schweizer Pfingstmission), Methodism (0.13%), the New Apostolic
Church (0.45%), Jehovah's Witnesses (0.28%), other Protestant denominations (0.20%), the Old Catholic Church (0.18%),
other Christian denominations (0.20%). Non-Christian religions are Hinduism (0.38%), Buddhism (0.29%), Judaism (0.25%)
and others (0.11%); 4.3% did not make a statement. 21.4% in 2012 declared themselves as unchurched i.e. not affiliated
with any church or other religious body (Agnostic, Atheist, or just not related to any official religion). The country
was historically about evenly balanced between Catholic and Protestant, with a complex patchwork of majorities over
most of the country. Geneva converted to Protestantism in 1536, just before John Calvin arrived there. One canton,
Appenzell, was officially divided into Catholic and Protestant sections in 1597. The larger cities and their cantons
(Bern, Geneva, Lausanne, Zürich and Basel) used to be predominantly Protestant. Central Switzerland, the Valais,
the Ticino, Appenzell Innerrhoden, the Jura, and Fribourg are traditionally Catholic. The Swiss Constitution of 1848,
under the recent impression of the clashes of Catholic vs. Protestant cantons that culminated in the Sonderbundskrieg,
consciously defines a consociational state, allowing the peaceful co-existence of Catholics and Protestants. A 1980
initiative calling for the complete separation of church and state was rejected by 78.9% of the voters. Some traditionally
Protestant cantons and cities nowadays have a slight Catholic majority, not because Catholic membership has grown
(quite the contrary), but because since about 1970 a steadily growing minority has become unaffiliated with any
church or other religious body (21.4% in Switzerland in 2012), especially in traditionally Protestant regions, such
as Basel-City (42%), canton of Neuchâtel (38%), canton of Geneva (35%), canton of Vaud (26%), or Zürich city (city:
>25%; canton: 23%). Three of Europe's major languages are official in Switzerland. Swiss culture is characterised
by diversity, which is reflected in a wide range of traditional customs. A region may be in some ways strongly culturally
connected to the neighbouring country that shares its language, the country itself being rooted in western European
culture. The linguistically isolated Romansh culture in Graubünden in eastern Switzerland constitutes an exception;
it survives only in the upper valleys of the Rhine and the Inn and strives to maintain its rare linguistic tradition.
Alpine symbolism has played an essential role in shaping the history of the country and the Swiss national identity.
Nowadays some mountain areas have a strong, highly energetic ski-resort culture in winter and a hiking
(German: das Wandern) or mountain-biking culture in summer. Other areas have a year-round recreational culture
that caters to tourism, yet the quieter seasons are spring and autumn when there are fewer visitors. A traditional
farmer and herder culture also predominates in many areas and small farms are omnipresent outside the cities. Folk
art is kept alive in organisations all over the country. In Switzerland it is mostly expressed in music, dance, poetry,
wood carving and embroidery. The alphorn, a trumpet-like musical instrument made of wood, has become alongside yodeling
and the accordion an epitome of traditional Swiss music. The government exerts greater control over broadcast media
than print media, largely through its financing and licensing powers. The Swiss Broadcasting Corporation, whose name was recently
changed to SRG SSR, is charged with the production and broadcast of radio and television programs. SRG SSR studios
are distributed throughout the various language regions. Radio content is produced in six central and four regional
studios while the television programs are produced in Geneva, Zürich and Lugano. An extensive cable network also
allows most Swiss to access the programs from neighboring countries. Skiing, snowboarding and mountaineering are
among the most popular sports in Switzerland, the nature of the country being particularly suited for such activities.
Winter sports have been practiced by natives and tourists since the second half of the 19th century, following the invention
of bobsleigh in St. Moritz. The first world ski championships were held in Mürren (1931) and St. Moritz (1934). The
latter town hosted the second Winter Olympic Games in 1928 and the fifth edition in 1948. Among the most successful
skiers and world champions are Pirmin Zurbriggen and Didier Cuche. Swiss are fans of football and the national team
is nicknamed the 'Nati'. The headquarters of the sport's governing body, the International Federation of Association
Football (FIFA), is located in Zürich. Switzerland hosted the 1954 FIFA World Cup, and was the joint host, with Austria,
of the Euro 2008 tournament. The Swiss Super League is the nation's professional club league. During the 2014 World
Cup finals in Brazil, the country's German-speaking cantons were to be closely monitored by local police forces
to prevent celebrations beyond one hour after matches end. Europe's highest football pitch, at 2,000 metres (6,600
ft) above sea level, is located in Switzerland and is named the Ottmar Hitzfeld Stadium. Many Swiss also follow ice
hockey and support one of the 12 clubs in the League A, which is the most attended league in Europe. In 2009, Switzerland
hosted the IIHF World Championship for the 10th time. The national team finished as world runner-up in 2013. The numerous lakes
make Switzerland an attractive place for sailing. The largest, Lake Geneva, is the home of the sailing team Alinghi
which was the first European team to win the America's Cup in 2003 and which successfully defended the title in 2007.
Tennis has become an increasingly popular sport, and Swiss players such as Martina Hingis, Roger Federer, and most
recently, Stanislas Wawrinka have won multiple Grand Slams. Swiss professional wrestler Claudio Castagnoli is currently
signed with WWE, and is a former United States champion. Motorsport racecourses and events were banned in Switzerland
following the 1955 Le Mans disaster, with the exception of events such as hillclimbing. During this period, the country
still produced successful racing drivers such as Clay Regazzoni, Sébastien Buemi, Jo Siffert, Dominique Aegerter,
successful World Touring Car Championship driver Alain Menu, 2014 24 Hours of Le Mans winner Marcel Fässler and 2015
24 Hours Nürburgring winner Nico Müller. Switzerland also won the A1GP World Cup of Motorsport in 2007–08 with driver
Neel Jani. Swiss motorcycle racer Thomas Lüthi won the 2005 MotoGP World Championship in the 125cc category. In June
2007 the Swiss National Council, one house of the Federal Assembly of Switzerland, voted to overturn the ban; however,
the other house, the Council of States, rejected the change, and the ban remains in place. Traditional sports
include Swiss wrestling or "Schwingen". It is an old tradition from the rural central cantons and considered the
national sport by some. Hornussen is another indigenous Swiss sport, which is like a cross between baseball and golf.
Steinstossen is the Swiss variant of stone put, a competition in throwing a heavy stone. Practiced only among the
alpine population since prehistoric times, it is recorded to have taken place in Basel in the 13th century. It is
also central to the Unspunnenfest, first held in 1805, with its symbol the 83.5 kg stone named Unspunnenstein. The
cuisine of Switzerland is multifaceted. While some dishes, such as fondue, raclette and rösti, are found throughout
the country, each region developed its own gastronomy according to differences in climate and language. Traditional
Swiss cuisine uses ingredients similar to those in other European countries, as well as unique dairy products and
cheeses such as Gruyère or Emmental, produced in the valleys of Gruyères and Emmental. The number of fine-dining
establishments is high, particularly in western Switzerland. The most popular alcoholic drink in Switzerland is wine.
Switzerland is notable for the variety of grapes grown because of the large variations in terroirs, with their specific
mixes of soil, air, altitude and light. Swiss wine is produced mainly in Valais, Vaud (Lavaux), Geneva and Ticino,
with a small majority of white wines. Vineyards have been cultivated in Switzerland since the Roman era, even though
certain traces can be found of a more ancient origin. The most widespread varieties are the Chasselas (called Fendant
in Valais) and Pinot noir. The Merlot is the main variety produced in Ticino.
Mali (i/ˈmɑːli/; French: [maˈli]), officially the Republic of Mali (French: République du Mali), is a landlocked country
in West Africa. Mali is the eighth-largest country in Africa, with an area of just over 1,240,000 square kilometres
(480,000 sq mi). The population of Mali is 14.5 million. Its capital is Bamako. Mali consists of eight regions and
its borders on the north reach deep into the middle of the Sahara Desert, while the country's southern part, where
the majority of inhabitants live, features the Niger and Senegal rivers. The country's economy centers on agriculture
and fishing. Mali's prominent natural resources include gold (it is the third-largest producer of gold on the
African continent) and salt. About half the population lives below the international poverty line of $1.25 (U.S.)
a day. A majority of the population (55%) are non-denominational Muslims. Present-day Mali was once part of three
West African empires that controlled trans-Saharan trade: the Ghana Empire, the Mali Empire (for which Mali is named),
and the Songhai Empire. During its golden age, there was a flourishing of mathematics, astronomy, literature, and
art. At its peak in 1300, the Mali Empire covered an area about twice the size of modern-day France and stretched
to the west coast of Africa. In the late 19th century, during the Scramble for Africa, France seized control of Mali,
making it a part of French Sudan. French Sudan (then known as the Sudanese Republic) joined with Senegal in 1959,
achieving independence in 1960 as the Mali Federation. Shortly thereafter, following Senegal's withdrawal from the
federation, the Sudanese Republic declared itself the independent Republic of Mali. After a long period of one-party
rule, a coup in 1991 led to the writing of a new constitution and the establishment of Mali as a democratic, multi-party
state. In January 2012, an armed conflict broke out in northern Mali; by April, Tuareg rebels had taken control of
the region and declared the secession of a new state, Azawad. The conflict was complicated by a military coup that took place
in March and later fighting between Tuareg and Islamist rebels. In response to Islamist territorial gains, the French
military launched Opération Serval in January 2013. A month later, Malian and French forces recaptured most of the
north. Presidential elections were held on 28 July 2013, with a second round run-off held on 11 August, and legislative
elections were held on 24 November and 15 December 2013. In the late 14th century, the Songhai gradually gained independence
from the Mali Empire and expanded, ultimately subsuming the entire eastern portion of the Mali Empire. The Songhai
Empire's eventual collapse was largely the result of a Moroccan invasion in 1591, under the command of Judar Pasha.
The fall of the Songhai Empire marked the end of the region's role as a trading crossroads. Following the establishment
of sea routes by the European powers, the trans-Saharan trade routes lost significance. On 19 November 1968, following
progressive economic decline, the Keïta regime was overthrown in a bloodless military coup led by Moussa Traoré,
a day which is now commemorated as Liberation Day. The subsequent military-led regime, with Traoré as president,
attempted to reform the economy. His efforts were frustrated by political turmoil and a devastating drought between
1968 and 1974, during which famine killed thousands of people. The Traoré regime faced student unrest beginning in the
late 1970s and three coup attempts. The Traoré regime repressed all dissenters until the late 1980s. Anti-government
protests in 1991 led to a coup, a transitional government, and a new constitution. Opposition to the corrupt and
dictatorial regime of General Moussa Traoré grew during the 1980s. During this time strict programs, imposed to satisfy
demands of the International Monetary Fund, brought increased hardship upon the country's population, while elites
close to the government supposedly lived in growing wealth. Peaceful student protests in January 1991 were brutally
suppressed, with mass arrests and torture of leaders and participants. Scattered acts of rioting and vandalism of
public buildings followed, but most actions by the dissidents remained nonviolent. From 22 March through 26 March
1991, mass pro-democracy rallies and a nationwide strike were held in both urban and rural communities, in what became
known as les événements ("the events") or the March Revolution. In Bamako, in response to mass demonstrations organized
by university students and later joined by trade unionists and others, soldiers opened fire indiscriminately on the
nonviolent demonstrators. Riots broke out briefly following the shootings. Barricades as well as roadblocks were
erected and Traoré declared a state of emergency and imposed a nightly curfew. Despite an estimated loss of 300 lives
over the course of four days, nonviolent protesters continued to return to Bamako each day demanding the resignation
of the dictatorial president and the implementation of democratic policies. 26 March 1991 marks the clash between
soldiers and peacefully demonstrating students, which climaxed in the massacre of dozens under the orders of
then-President Moussa Traoré. He and three associates were later tried, convicted, and sentenced to death for their
part in the decision-making of that day. The day is now a national holiday commemorating the tragic events and the
people who were killed. The coup is remembered as Mali's
March Revolution of 1991. By 26 March, the growing refusal of soldiers to fire into the largely nonviolent protesting
crowds turned into a full-scale tumult, resulting in thousands of soldiers laying down their arms and joining
the pro-democracy movement. That afternoon, Lieutenant Colonel Amadou Toumani Touré announced on the radio that he
had arrested the dictatorial president, Moussa Traoré. As a consequence, opposition parties were legalized and a
national congress of civil and political groups met to draft a new democratic constitution to be approved by a national
referendum. In January 2012 a Tuareg rebellion began in Northern Mali, led by the National Movement for the Liberation
of Azawad. In March, military officer Amadou Sanogo seized power in a coup d'état, citing Touré's failures in quelling
the rebellion, and leading to sanctions and an embargo by the Economic Community of West African States. The MNLA
quickly took control of the north, declaring independence as Azawad. However, Islamist groups including Ansar Dine
and Al-Qaeda in the Islamic Maghreb (AQIM), who had helped the MNLA defeat the government, turned on the Tuareg and
took control of the North with the goal of implementing sharia in Mali. Mali lies in the torrid zone and is among
the hottest countries in the world. The thermal equator, which matches the hottest spots year-round on the planet
based on the mean daily annual temperature, crosses the country. Most of Mali receives negligible rainfall and droughts
are very frequent. Late June to early December is the rainy season in the southernmost area. During this time, flooding
of the Niger River is common, creating the Inner Niger Delta. The vast northern desert part of Mali has a hot desert
climate (Köppen climate classification BWh), with long, extremely hot summers and scarce rainfall that decreases
northwards. The central area has a hot semi-arid climate (Köppen BSh), with very high temperatures year-round, a
long, intense dry season and a brief, irregular rainy season. The narrow southern band has a tropical wet and dry
climate (Köppen Aw), with very high temperatures year-round, a dry season
and a rainy season. Until the military coup of 22 March 2012 and a second military coup in December 2012, Mali was
a constitutional democracy governed by the Constitution of 12 January 1992, which was amended in 1999. The constitution
provides for a separation of powers among the executive, legislative, and judicial branches of government. The system
of government can be described as "semi-presidential". Executive power is vested in a president, who is elected to
a five-year term by universal suffrage and is limited to two terms. The president serves as a chief of state and
commander in chief of the armed forces. A prime minister appointed by the president serves as head of government
and in turn appoints the Council of Ministers. The unicameral National Assembly is Mali's sole legislative body,
consisting of deputies elected to five-year terms. Following the 2007 elections, the Alliance for Democracy and Progress
held 113 of 160 seats in the assembly. The assembly holds two regular sessions each year, during which it debates
and votes on legislation that has been submitted by a member or by the government. Mali's constitution provides for
an independent judiciary, but the executive continues to exercise influence over the judiciary by virtue of power
to appoint judges and oversee both judicial functions and law enforcement. Mali's highest courts are the Supreme
Court, which has both judicial and administrative powers, and a separate Constitutional Court that provides judicial
review of legislative acts and serves as an election arbiter. Various lower courts exist, though village chiefs and
elders resolve most local disputes in rural areas. Mali underwent economic reform, beginning in 1988 with the signing
of agreements with the World Bank and the International Monetary Fund. Between 1988 and 1996, Mali's government largely
reformed public enterprises. Since the agreement, 16 enterprises were fully privatized, 12 partially privatized, and 20 liquidated.
In 2005, the Malian government conceded a railroad company to the Savage Corporation. Two major companies, Societé
de Telecommunications du Mali (SOTELMA) and the Cotton Ginning Company (CMDT), were expected to be privatized in
2008. In 2007, about 48 percent of Malians were younger than 15 years old, 49 percent were 15–64 years old, and 3
percent were 65 and older. The median age was 15.9 years. The birth rate in 2014 is 45.53 births per 1,000, and the
total fertility rate (in 2012) was 6.4 children per woman. The death rate in 2007 was 16.5 deaths per 1,000. Life
expectancy at birth was 53.06 years total (51.43 for males and 54.73 for females). Mali has one of the world's highest
rates of infant mortality, with 106 deaths per 1,000 live births in 2007. In the far north, there is a division between
Berber-descended Tuareg nomad populations and the darker-skinned Bella or Tamasheq people, due to the historical spread
of slavery in the region. An estimated 800,000 people in Mali are descended from slaves. Slavery in Mali has persisted
for centuries. The Arab population kept slaves well into the 20th century, until slavery was suppressed by French
authorities around the mid-20th century. Certain hereditary servitude relationships still persist, and according to
some estimates approximately 200,000 Malians remain enslaved today. Although Mali has enjoyed reasonably good
inter-ethnic relations based on a long history of coexistence, some hereditary servitude and bondage relationships
exist, as well as ethnic tension between the settled Songhai and the nomadic Tuaregs of the north. Due to a backlash against
the northern population after independence, Mali is now in a situation where both groups complain about discrimination
on the part of the other group. This conflict also plays a role in the continuing Northern Mali conflict where there
is a tension between both Tuaregs and the Malian government, and the Tuaregs and radical Islamists who are trying
to establish sharia law. Mali faces numerous health challenges related to poverty, malnutrition, and inadequate hygiene
and sanitation. Mali's health and development indicators rank among the worst in the world. Life expectancy at birth
is estimated to be 53.06 years in 2012. In 2000, 62–65 percent of the population was estimated to have access to
safe drinking water and only 69 percent to sanitation services of some kind. In 2001, the general government expenditures
on health totalled about US$4 per capita at an average exchange rate. Efforts have been made to improve nutrition,
and reduce associated health problems, by encouraging women to make nutritious versions of local recipes. For example,
the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Aga Khan Foundation, trained
women's groups to make equinut, a healthy and nutritional version of the traditional recipe di-dèguè (comprising
peanut paste, honey and millet or rice flour). The aim was to boost nutrition and livelihoods by producing a product
that women could make and sell, and which would be accepted by the local community because of its local heritage.
Medical facilities in Mali are very limited, and medicines are in short supply. Malaria and other arthropod-borne
diseases are prevalent in Mali, as are a number of infectious diseases such as cholera and tuberculosis. Mali's population
also suffers from a high rate of child malnutrition and a low rate of immunization. An estimated 1.9 percent of the
adult and child population was afflicted with HIV/AIDS that year, among the lowest rates in Sub-Saharan Africa.
An estimated 85–91 percent of Mali's girls and women have had female genital mutilation (2006 and 2001 data). Malian
musical traditions are derived from the griots, who are known as "Keepers of Memories". Malian music is diverse and
has several different genres. Some famous Malian influences in music are kora virtuoso musician Toumani Diabaté,
the late roots and blues guitarist Ali Farka Touré, the Tuareg band Tinariwen, and several Afro-pop artists such
as Salif Keita, the duo Amadou et Mariam, Oumou Sangare, and Habib Koité. Dance also plays a large role in Malian
culture. Dance parties are common events among friends, and traditional mask dances are performed at ceremonial events.
Raleigh (/ˈrɑːli/; RAH-lee) is the capital of the state of North Carolina as well as the seat of Wake County in the United
States. It is the second most populous city in North Carolina, after Charlotte. Raleigh is known as the "City of
Oaks" for its many oak trees, which line the streets in the heart of the city. The city covers a land area of 142.8
square miles (370 km2). The U.S. Census Bureau estimated the city's population to be 439,896 as of July 1, 2014.
It is also one of the fastest-growing cities in the country. The city of Raleigh is named after Sir Walter Raleigh,
who established the lost Roanoke Colony in present-day Dare County. Raleigh is home to North Carolina State University
and is part of the Research Triangle area, together with Durham (home of Duke University) and Chapel Hill (home of
the University of North Carolina at Chapel Hill). The "Triangle" nickname originated after the 1959 creation of the
Research Triangle Park, located in Durham and Wake Counties partway between the three cities and their universities.
The Research Triangle region encompasses the U.S. Census Bureau's Raleigh-Durham-Chapel Hill Combined Statistical
Area (CSA), which had an estimated population of 2,037,430 in 2013. The Raleigh Metropolitan Statistical Area (MSA)
had an estimated population of 1,214,516 in 2013. Raleigh is an early example in the United States of a planned city,
chosen as the site of the state capital in 1788 and incorporated as such in 1792. The city was originally laid out
in a grid pattern with the North Carolina State Capitol in Union Square at the center. During the American Civil
War the city was spared any significant battle, falling only in the closing days of the war, though it did not
escape the economic hardships that plagued the rest of the American South during the Reconstruction Era. The twentieth
century saw the opening of the Research Triangle Park in 1959, and with the jobs it created the region and city saw
a large influx of population, making it one of the fastest growing communities in the United States by the early
21st century. Raleigh is home to numerous cultural, educational, and historic sites. The Duke Energy Center for the
Performing Arts in Downtown Raleigh features three theater venues and serves as the home for the North Carolina Symphony
and the Carolina Ballet. Walnut Creek Amphitheatre is a large music amphitheater located in Southeast Raleigh. Museums
in Raleigh include the North Carolina Museum of Art in West Raleigh, as well as the North Carolina Museum of History
and North Carolina Museum of Natural Sciences located next to each other near the State Capitol in Downtown Raleigh.
Several major universities and colleges call Raleigh home, including North Carolina State University, the largest
public university in the state, and Shaw University, the first historically black university in the American South
and site of the foundation of the Student Nonviolent Coordinating Committee, an important civil rights organization
of the 1960s. One U.S. president, Andrew Johnson, was born in Raleigh. The city's location was chosen, in part, for
being within 11 mi (18 km) of Isaac Hunter's Tavern, a popular tavern frequented by the state legislators. No known
city or town existed previously on the chosen city site. Raleigh is one of the few cities in the United States that
was planned and built specifically to serve as a state capital. Its original boundaries were formed by the downtown
streets of North, East, West and South streets. The plan, a grid with two main axes meeting at a central square and
an additional square in each corner, was based on Thomas Holme's 1682 plan for Philadelphia. After the Civil War
began, Governor Zebulon Baird Vance ordered the construction of breastworks around the city as protection from Union
troops. During General Sherman's Carolinas Campaign, Raleigh was captured by Union cavalry under the command of General
Hugh Judson Kilpatrick on April 13, 1865. As the Confederate cavalry retreated west, the Union soldiers followed,
leading to the nearby Battle of Morrisville. The city was spared significant destruction during the War, but due
to the economic problems of the post-war period and Reconstruction, with a state economy based on agriculture, it
grew little over the next several decades. In 1880, two newspapers, the News and the Observer, combined to form The News &
Observer. It remains Raleigh's primary daily newspaper. The North Carolina College of Agriculture and Mechanic Arts,
now known as North Carolina State University, was founded as a land-grant college in 1887. The city's Rex Hospital
opened in 1889 and included the state's first nursing school. The Baptist Women's College, now known as Meredith
College, opened in 1891, and in 1898, The Academy of Music, a private music conservatory, was established. In the
late nineteenth century, two black Congressmen were elected from North Carolina's 2nd district, the last in 1898.
George Henry White sought to promote civil rights for blacks and to challenge efforts by white Democrats to reduce
black voting by new discriminatory laws. These efforts were unsuccessful. In 1900, the state legislature passed a new constitution,
with voter registration rules that disfranchised most blacks and many poor whites. The state succeeded in reducing
black voting to zero by 1908. Loss of the ability to vote disqualified black men (and later women) from sitting on
juries and serving in any office, local, state or federal. The rising black middle-class in Raleigh and other areas
was politically silenced and shut out of local governance, and the Republican Party was no longer competitive. It
was not until after federal civil rights legislation was passed in the mid-1960s that the majority of blacks in North
Carolina would again be able to vote, sit on juries and serve in local offices. No African American was elected to
Congress until 1992. During the Great Depression of the 1930s, government at all levels was integral to
creating jobs. The city provided recreational and educational programs, and hired people for public works projects.
In 1932, Raleigh Memorial Auditorium was dedicated. The North Carolina Symphony, founded the same year, performed
in its new home. From 1934 to 1937, the federal Civilian Conservation Corps constructed the area now known as William
B. Umstead State Park. In 1939, the State General Assembly chartered the Raleigh-Durham Aeronautical Authority to
build a larger airport between Raleigh and Durham, with the first flight occurring in 1943. Raleigh is located in
the northeast central region of North Carolina, where the Piedmont and Atlantic Coastal Plain regions meet. This
area is known as the "fall line" because it marks the elevation inland at which waterfalls begin to appear in creeks
and rivers. As a result, most of Raleigh features gently rolling hills that slope eastward toward the state's flat
coastal plain. Its central Piedmont location situates Raleigh about two hours west of Atlantic Beach, North Carolina,
by car and four hours east of the Great Smoky Mountains. The city is 155 miles (249 km) south of Richmond, Virginia,
263 miles (423 km) south of Washington, D.C., and 150 miles (240 km) northeast of Charlotte, North Carolina. The downtown
area is home to historic neighborhoods and buildings such as the Sir Walter Raleigh Hotel, built in the early 20th
century, the restored City Market, the Fayetteville Street downtown business district, which includes the PNC Plaza
and Wells Fargo Capitol Center buildings, as well as the North Carolina Museum of History, North Carolina Museum
of Natural Sciences, North Carolina State Capitol, Peace College, the Raleigh City Museum, Raleigh Convention Center,
Shaw University, and St. Augustine's College. The neighborhoods in Old Raleigh include Cameron Park, Boylan Heights,
Country Club Hills, Coley Forest, Five Points, Budleigh, Glenwood-Brooklyn, Hayes Barton Historic District, Moore
Square, Mordecai, Rosengarten Park, Belvidere Park, Woodcrest, and Historic Oakwood. In the 2000s, an effort by the
Downtown Raleigh Alliance was made to separate this area of the city into five smaller districts: Fayetteville Street,
Moore Square, Glenwood South, Warehouse (Raleigh), and Capital District (Raleigh). Some of the names have become
commonplace among locals, such as the Warehouse, Fayetteville Street, and Glenwood South Districts. Midtown Raleigh
is a residential and commercial area just north of the I-440 Beltline and is part of North Raleigh. It is roughly
framed by Glenwood/Creedmoor Road to the west, Wake Forest Road to the east, and Millbrook Road to the north. It
includes shopping centers such as North Hills and Crabtree Valley Mall. It also includes North Hills Park and part
of the Raleigh Greenway System. The term was coined by the Greater Raleigh Chamber of Commerce, developer John Kane
and planning director Mitchell Silver. The News & Observer newspaper started using the term for marketing purposes
only. The Midtown Raleigh Alliance was founded on July 25, 2011 as a way for community leaders to promote the area.
West Raleigh lies along Hillsborough Street and Western Boulevard. The area is bordered to the west by suburban Cary.
It is home to North Carolina State University, Meredith College, Pullen Park, Pullen Memorial Baptist Church, Cameron
Village, Lake Johnson, the North Carolina Museum of Art and historic Saint Mary's School. Primary thoroughfares serving
West Raleigh, in addition to Hillsborough Street, are Avent Ferry Road, Blue Ridge Road, and Western Boulevard. The
PNC Arena is also located here adjacent to the North Carolina State Fairgrounds. These are located approximately
2 miles from Rex Hospital. North Raleigh is an expansive, diverse, and fast-growing suburban area of the city that
is home to established neighborhoods to the south along with many newly built subdivisions along its northern
fringes. The area generally falls north of Millbrook Road. It is primarily suburban with large shopping areas. Primary
neighborhoods and subdivisions in North Raleigh include Harrington Grove, Springdale, Dominion Park, Bedford, Bent
Tree, Brentwood, Brier Creek, Brookhaven, Black Horse Run, Coachman's Trail, Crossgate, Crosswinds, Falls River,
Hidden Valley, Lake Park, North Haven, North Ridge, Oakcroft, Shannon Woods, Six Forks Station, Stonebridge,
Stone Creek, Stonehenge, Summerfield, Valley Estates, Wakefield, Weathersfield, Windsor Forest, and Wood Valley.
The area is served by a number of primary transportation corridors including Glenwood Avenue (U.S. Route 70), Interstate
540, Wake Forest Road, Millbrook Road, Lynn Road, Six Forks Road, Spring Forest Road, Creedmoor Road, Leesville Road,
Strickland Road, and North Hills Drive. South Raleigh is located along U.S. 401 south toward Fuquay-Varina and along
US 70 into suburban Garner. This area is the least developed and least dense area of Raleigh (much of the area lies
within the Swift Creek watershed district, where development regulations limit housing densities and construction).
The area is bordered to the west by Cary, to the east by Garner, and to the southwest by Holly Springs. Neighborhoods
in South Raleigh include Renaissance Park, Lake Wheeler, Swift Creek, Carolina Pines, Rhamkatte, Riverbrooke, and
Enchanted Oaks. Southeast Raleigh is bounded by downtown on the west, Garner on the southwest, and rural Wake County
to the southeast. The area includes areas along Rock Quarry Road, Poole Road, and New Bern Avenue. Primary neighborhoods
include Chastain, Chavis Heights, Raleigh Country Club, Southgate, Kingwood Forest, Rochester Heights, Emerald Village
and Biltmore Hills. Time Warner Cable Music Pavilion (formerly Alltel Pavilion and Walnut Creek Amphitheatre) is
one of the region's major outdoor concert venues and is located on Rock Quarry Road. Shaw University is located in
this part of the city. Like much of the southeastern United States, Raleigh has a humid subtropical climate (Köppen
Cfa), with four distinct seasons. Winters are short and generally cool, with a January daily average of 41.0 °F (5.0
°C). On average, there are 69 nights per year that drop to or below freezing, and only 2.7 days that fail to rise
above freezing. April is the driest month, with an average of 2.91 inches (73.9 mm) of precipitation. Precipitation
is well distributed around the year, with a slight maximum between July and September; on average, July is the wettest
month, owing to generally frequent, sometimes heavy, showers and thunderstorms. Summers are hot and humid, with a
daily average in July of 80.0 °F (26.7 °C). There are 48 days per year with highs at or above 90 °F (32 °C). Autumn
is similar to spring overall but has fewer days of rainfall. Extremes in temperature have ranged from −9 °F (−23
°C) on January 21, 1985 up to 105 °F (41 °C), most recently on July 8, 2012. Raleigh receives an average of 6.0 inches
(15.2 cm) of snow in winter. Freezing rain and sleet also occur most winters, and occasionally the area experiences
a major damaging ice storm. On January 24–25, 2000, Raleigh received its greatest snowfall from a single storm –
20.3 inches (52 cm) – the Winter Storm of January 2000. Storms of this magnitude are generally the result of cold
air damming that affects the city due to its proximity to the Appalachian Mountains. Winter storms have caused traffic
problems in the past as well. The region also experiences occasional periods of drought, during which the city sometimes
has restricted water use by residents. During the late summer and early fall, Raleigh can experience hurricanes.
In 1996, Hurricane Fran caused severe damage in the Raleigh area, mostly from falling trees. The most recent hurricane
to have a considerable effect on the area was Isabel in 2003. Tornadoes have also occasionally affected the city of
Raleigh, most notably the tornado of November 28, 1988, which occurred in the early morning hours, was rated an F4 on the
Fujita Tornado Scale, and affected northwestern portions of the city, and the F3 tornado of April 16, 2011, which affected
portions of downtown and northeast Raleigh and the suburb of Holly Springs. As of the 2000 United States census,
there were 276,093 persons (July 2008 estimate was 380,173) and 61,371 families residing in Raleigh. The population
density was 2,409.2 people per square mile (930.2/km²). There were 120,699 housing units at an average density of
1,053.2 per square mile (406.7/km²). The racial composition of the city was: 63.31% White, 27.80% Black or African
American, 7.01% Hispanic or Latino American, 3.38% Asian American, 0.36% Native American, 0.04% Native Hawaiian or
Other Pacific Islander, 3.24% some other race, and 1.88% two or more races. There were 112,608 households in the
city in 2000, of which 26.5% included children below the age of 18, 39.5% were composed of married couples living
together, 11.4% reported a female householder with no husband present, and 45.5% classified themselves as nonfamily.
Unmarried partners were present in 2.2% of households. In addition, 33.1% of all households were composed of individuals
living alone, and in 6.2% of those households the individual was 65 years of age or older. The average household size in Raleigh was 2.30
persons, and the average family size was 2.97 persons. Raleigh is home to a wide variety of religious practitioners.
As of 2013, 46.41% of people in Raleigh are affiliated with a religion. The predominant religion in Raleigh is Christianity,
with the largest numbers of adherents being Roman Catholic (11.3%), Baptist (10.85%), and Methodist (7.08%). Others
include Presbyterian (2.52%), Pentecostal (1.99%), Episcopalian (1.12%), Lutheran (1.06%), Latter-Day Saints (0.99%),
and other Christian denominations (6.68%) including Eastern Orthodox, Coptic Orthodox, Jehovah's Witness, Christian
Science, Christian Unitarianism, other Mainline Protestant groups, and non-denominational. Raleigh's industrial base
includes banking/financial services; electrical, medical, electronic and telecommunications equipment; clothing and
apparel; food processing; paper products; and pharmaceuticals. Raleigh is part of North Carolina's Research Triangle,
one of the country's largest and most successful research parks, and a major center in the United States for high-tech
and biotech research, as well as advanced textile development. The city is a major retail shipping point for eastern
North Carolina and a wholesale distributing point for the grocery industry. The Time Warner Cable Music Pavilion
at Walnut Creek hosts major international touring acts. In 2011, the Downtown Raleigh Amphitheater opened (now sponsored
as the Red Hat Amphitheater), which hosts numerous concerts primarily in the summer months. An additional amphitheater
sits on the grounds of the North Carolina Museum of Art, which hosts a summer concert series and outdoor movies.
Nearby Cary is home to the Koka Booth Amphitheatre which hosts additional summer concerts and outdoor movies, and
serves as the venue for regularly scheduled outdoor concerts by the North Carolina Symphony based in Raleigh. During
the North Carolina State Fair, Dorton Arena hosts headline acts. The private Lincoln Theatre is one of several clubs
in downtown Raleigh that schedules many concerts throughout the year in multiple formats (rock, pop, country). The
Duke Energy Center for the Performing Arts complex houses the Raleigh Memorial Auditorium, the Fletcher Opera Theater,
the Kennedy Theatre, and the Meymandi Concert Hall. In 2008, a new theatre space, the Meymandi Theatre at the Murphey
School, was opened in the restored auditorium of the historic Murphey School. Theater performances are also offered
at the Raleigh Little Theatre, Long View Center, Ira David Wood III Pullen Park Theatre, and Stewart and Thompson
Theaters at North Carolina State University. North Carolina Museum of Art, occupying a large suburban campus on Blue
Ridge Road near the North Carolina State Fairgrounds, maintains one of the premier public art collections located
between Washington, D.C., and Atlanta. In addition to its extensive collections of American Art, European Art and
ancient art, the museum recently has hosted major exhibitions featuring Auguste Rodin (in 2000) and Claude Monet
(in 2006-07), each attracting more than 200,000 visitors. Unlike most prominent public museums, the North Carolina
Museum of Art acquired a large number of the works in its permanent collection through purchases with public funds.
The museum's outdoor park is one of the largest such art parks in the country. The museum facility underwent a major
expansion, completed in 2010, that greatly increased its exhibit space. The new 127,000-square-foot wing was designed
by the New York architecture firm Thomas Phifer and Partners. The National Hockey League's Carolina Hurricanes franchise moved to
Raleigh in 1997 from Hartford, Connecticut (where it was known as the Hartford Whalers). The team played its first
two seasons more than 60 miles away at Greensboro Coliseum while its home arena, Raleigh Entertainment and Sports
Arena (later RBC Center and now PNC Arena), was under construction. The Hurricanes are the only major league (NFL,
NHL, NBA, MLB) professional sports team in North Carolina to have won a championship, winning the Stanley Cup in
2006, over the Edmonton Oilers. The city played host to the 2011 NHL All-Star Game. Several other professional sports
leagues have had former franchises (now defunct) in Raleigh, including the Raleigh IceCaps of the ECHL (1991–1998);
Carolina Cobras of the Arena Football League (2000–2004); the Raleigh–Durham Skyhawks of the World League of American
Football (1991); the Raleigh Bullfrogs of the Global Basketball Association (1991–1992); the Raleigh Cougars of the
United States Basketball League (1997–1999); and most recently, the Carolina Courage of the Women's United Soccer
Association (2000–2001 in Chapel Hill, 2001–2003 in suburban Cary), which won that league's championship, the Founders
Cup, in 2002. North Carolina State University is located in southwest Raleigh where the Wolfpack competes nationally
in 24 intercollegiate varsity sports as a member of the Atlantic Coast Conference. The university's football team
plays in Carter-Finley Stadium, the third largest football stadium in North Carolina, while the men's basketball
team shares the PNC Arena with the Carolina Hurricanes hockey club. The Wolfpack women's basketball, volleyball,
and gymnastics events, as well as men's wrestling events, are held on campus at Reynolds Coliseum. The men's baseball team
plays at Doak Field. The Raleigh Parks and Recreation Department offers a wide variety of leisure opportunities at
more than 150 sites throughout the city, which include: 8,100 acres (33 km2) of park land, 78 miles (126 km) of greenway,
22 community centers, a BMX championship-caliber race track, 112 tennis courts among 25 locations, 5 public lakes,
and 8 public aquatic facilities. The J. C. Raulston Arboretum, an 8-acre (32,000 m²) arboretum and botanical garden
in west Raleigh administered by North Carolina State University, maintains a year-round collection that is open daily
to the public without charge. According to the Federal Bureau of Investigation's Uniform Crime Reports, in 2010 the
Raleigh Police Department and other agencies in the city reported 1,740 incidents of violent crime and 12,995 incidents
of property crime – far below both the national average and the North Carolina average. Of the violent crimes reported,
14 were murders, 99 were forcible rapes and 643 were robberies. Aggravated assault accounted for 984 of the total
violent crimes. Property crimes included burglaries which accounted for 3,021, larcenies for 9,104 and arson for
63 of the total number of incidents. Motor vehicle theft accounted for 870 incidents out of the total. Public schools
in Raleigh are operated by the Wake County Public School System. Observers have praised the Wake County Public School
System for its innovative efforts to maintain a socially, economically, and racially balanced system by using income
as a prime factor in assigning students to schools. Raleigh is home to three magnet high schools and three high schools
offering the International Baccalaureate program. There are four early college high schools in Raleigh. Raleigh also
has two alternative high schools. Raleigh-Durham International Airport, the region's primary airport and the second-largest
in North Carolina, located northwest of downtown Raleigh via Interstate 40 between Raleigh and Durham, serves the
city and greater Research Triangle metropolitan region, as well as much of eastern North Carolina. The airport offers
service to more than 35 domestic and international destinations and serves approximately 10 million passengers a
year. The airport also offers facilities for cargo and general aviation. The airport authority tripled the size of
its Terminal 2 (formerly Terminal C) in January 2011. Raleigh is also served by Triangle Transit (known formerly
as the Triangle Transit Authority, or TTA). Triangle Transit offers scheduled, fixed-route regional and commuter
bus service between Raleigh and the region's other principal cities of Durham, Cary and Chapel Hill, as well as to
and from the Raleigh-Durham International Airport, Research Triangle Park and several of the region's larger suburban
communities. Triangle Transit also coordinates an extensive vanpool and rideshare program that serves the region's
larger employers and commute destinations.
Registered dietitian nutritionists (RDs or RDNs) are health professionals qualified to provide safe, evidence-based dietary
advice which includes a review of what is eaten, a thorough review of nutritional health, and a personalized nutritional
treatment plan. They also provide preventive and therapeutic programs at work places, schools and similar institutions.
Certified Clinical Nutritionists, or CCNs, are trained health professionals who also offer dietary advice on the role
of nutrition in chronic disease, including possible prevention or remediation by addressing nutritional deficiencies
before resorting to drugs. Government regulation, especially in terms of licensing, is currently less universal for
the CCN than for the RD or RDN. Another advanced nutrition professional is the Certified Nutrition Specialist, or CNS.
These board-certified nutritionists typically specialize in obesity and chronic disease. In order to become board
certified, a potential CNS candidate must pass an examination, much like Registered Dietitians. This exam covers specific
domains within the health sphere, including Clinical Intervention and Human Health. According to Walter Gratzer,
the study of nutrition probably began during the 6th century BC. In China, the concept of Qi developed, a spirit
or "wind" similar to what Western Europeans later called pneuma. Food was classified into "hot" (for example, meats,
blood, ginger, and hot spices) and "cold" (green vegetables) in China, India, Malaya, and Persia. Humours developed
perhaps first in China alongside qi. Ho the Physician concluded that diseases are caused by deficiencies of elements
(Wu Xing: fire, water, earth, wood, and metal), and he classified diseases as well as prescribed diets. About the
same time in Italy, Alcmaeon of Croton (a Greek) wrote of the importance of equilibrium between what goes in and
what goes out, and warned that imbalance would result in disease marked by obesity or emaciation. The first recorded
nutritional experiment with human subjects is found in the Bible's Book of Daniel. Daniel and his friends were captured
by the king of Babylon during an invasion of Israel. Selected as court servants, they were to share in the king's
fine foods and wine. But they objected, preferring vegetables (pulses) and water in accordance with their Jewish
dietary restrictions. The king's chief steward reluctantly agreed to a trial. Daniel and his friends received their
diet for 10 days and were then compared to the king's men. Appearing healthier, they were allowed to continue with
their diet. The doctrines of Galen must not be overlooked: in use from his lifetime in the 1st century AD until the 17th
century, they made it heresy to disagree with him for 1,500 years. Galen was physician to gladiators in Pergamon, and in
Rome, physician to Marcus Aurelius and the three emperors who succeeded him. Most of Galen's teachings were gathered
and enhanced in the late 11th century by Benedictine monks at the School of Salerno in Regimen sanitatis Salernitanum,
which still had users in the 17th century. Galen believed in the bodily humours of Hippocrates, and he taught that
pneuma is the source of life. Four elements (earth, air, fire and water) combine into "complexion", which combines
into states (the four temperaments: sanguine, phlegmatic, choleric, and melancholic). The states are made up of pairs
of attributes (hot and moist, cold and moist, hot and dry, and cold and dry), which are made of four humours: blood,
phlegm, green (or yellow) bile, and black bile (the bodily form of the elements). Galen thought that for a person
to have gout, kidney stones, or arthritis was scandalous, which Gratzer likens to Samuel Butler's Erewhon (1872)
where sickness is a crime. In the 1500s, Paracelsus was probably the first to criticize Galen publicly. Also in the
16th century, scientist and artist Leonardo da Vinci compared metabolism to a burning candle. Leonardo did not publish
his works on this subject, but he was not afraid of thinking for himself and he definitely disagreed with Galen.
Ultimately, 16th century works of Andreas Vesalius, sometimes called the father of modern medicine, overturned Galen's
ideas. He was followed by piercing thought amalgamated with the era's mysticism and religion sometimes fueled by
the mechanics of Newton and Galileo. Jan Baptist van Helmont, who discovered several gases such as carbon dioxide,
performed the first quantitative experiment. Robert Boyle advanced chemistry. Sanctorius measured body weight. Physician
Herman Boerhaave modeled the digestive process. Physiologist Albrecht von Haller worked out the difference between
nerves and muscles. Sometimes overlooked during his life, James Lind, a physician in the British navy, performed
the first scientific nutrition experiment in 1747. Lind discovered that lime juice saved sailors who had been at
sea for years from scurvy, a deadly and painful bleeding disorder. Between 1500 and 1800, an estimated two million
sailors had died of scurvy. The discovery was ignored for forty years, after which British sailors became known as
"limeys." The essential vitamin C within citrus fruits would not be identified by scientists until 1932. In 1816,
François Magendie discovered that dogs fed only carbohydrates (sugar), fat (olive oil), and water died evidently
of starvation, but dogs also fed protein survived, identifying protein as an essential dietary component. William
Prout in 1827 was the first person to divide foods into carbohydrates, fat, and protein. During the 19th century,
Jean-Baptiste Dumas and Justus von Liebig quarrelled over their shared belief that animals get their protein directly
from plants (animal and plant protein are the same and that humans do not create organic compounds). With a reputation
as the leading organic chemist of his day but with no credentials in animal physiology, Liebig grew rich making food
extracts like beef bouillon and infant formula that were later found to be of questionable nutritional value. In the
1860s, Claude Bernard discovered that body fat can be synthesized from carbohydrate and protein, showing that the
energy in blood glucose can be stored as fat or as glycogen. In the early 1880s, Kanehiro Takaki observed that Japanese
sailors (whose diets consisted almost entirely of white rice) developed beriberi (or endemic neuritis, a disease
causing heart problems and paralysis), but British sailors and Japanese naval officers did not. Adding various types
of vegetables and meats to the diets of Japanese sailors prevented the disease (not because of the increased protein,
as Takaki supposed, but because the change introduced a few parts per million of thiamine to the diet, as was later
understood). In 1896, Eugen Baumann observed iodine in thyroid glands. In 1897, Christiaan Eijkman worked with natives
of Java, who also suffered from beriberi. Eijkman observed that chickens fed the native diet of white rice developed
the symptoms of beriberi but remained healthy when fed unprocessed brown rice with the outer bran intact. Eijkman
cured the natives by feeding them brown rice, discovering that food can cure disease. Over two decades later, nutritionists
learned that the outer rice bran contains vitamin B1, also known as thiamine. In the early 20th century, Carl von
Voit and Max Rubner independently measured caloric energy expenditure in different species of animals, applying principles
of physics in nutrition. In 1906, Edith G. Willcock and Frederick Hopkins showed that the amino acid tryptophan aids
the well-being of mice but did not assure their growth. In the middle of twelve years of attempts to isolate such factors,
Hopkins said in a 1906 lecture that "unsuspected dietetic factors," other than calories, protein, and minerals, are
needed to prevent deficiency diseases. In 1907, Stephen M. Babcock and Edwin B. Hart conducted the single-grain experiment,
which took nearly four years to complete. In 1913, Elmer McCollum discovered the first of the vitamins, fat-soluble vitamin A, followed in 1915 by water-soluble vitamin B (now known to be a complex of several water-soluble vitamins), and named vitamin C as the then-unknown substance preventing scurvy. Lafayette Mendel and Thomas Osborne also performed pioneering
work on vitamins A and B. In 1919, Sir Edward Mellanby incorrectly identified rickets as a vitamin A deficiency because
he could cure it in dogs with cod liver oil. In 1922, McCollum destroyed the vitamin A in cod liver oil, but found
that it still cured rickets. Also in 1922, H.M. Evans and L.S. Bishop discovered vitamin E as essential for rat pregnancy,
originally calling it "food factor X" until 1925. The list of nutrients that people are known to require is, in the
words of Marion Nestle, "almost certainly incomplete". As of 2014, nutrients are thought to be of two types: macronutrients, which are needed in relatively large amounts, and micronutrients, which are needed in smaller quantities. A type of
carbohydrate, dietary fiber, i.e., non-digestible material such as cellulose, is required for both mechanical and biochemical reasons, though the exact mechanisms remain unclear. Other micronutrients include antioxidants and phytochemicals,
which are said to influence (or protect) some body systems. Their necessity is not as well established as in the
case of, for instance, vitamins. The macronutrients are carbohydrates, fats, protein, and water. The macronutrients
(excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from
which cell membranes and some signaling molecules are built) and energy. Some of the structural material can be used
to generate energy internally, and in either case it is measured in joules or kilocalories (often called "Calories" and written with a capital C to distinguish them from lowercase-'c' calories). Carbohydrates and proteins provide approximately 17 kJ (4 kcal) of energy per gram, while fats provide 37 kJ (9 kcal) per gram, though the net energy from
either depends on such factors as absorption and digestive effort, which vary substantially from instance to instance.
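As a rough illustration of these energy factors, the following sketch sums a food's energy from its macronutrient grams; the dictionaries and function names are illustrative, and, as noted, real net energy varies with absorption and digestive effort.

```python
# Approximate energy factors from the text (per gram of macronutrient).
KCAL_PER_GRAM = {"carbohydrate": 4, "protein": 4, "fat": 9}
KJ_PER_GRAM = {"carbohydrate": 17, "protein": 17, "fat": 37}

def energy_kcal(grams: dict) -> float:
    """Sum kilocalories contributed by each macronutrient."""
    return sum(KCAL_PER_GRAM[m] * g for m, g in grams.items())

def energy_kj(grams: dict) -> float:
    """Sum kilojoules using the metric factors."""
    return sum(KJ_PER_GRAM[m] * g for m, g in grams.items())

# Example: a food with 30 g carbohydrate, 10 g protein, 5 g fat.
meal = {"carbohydrate": 30, "protein": 10, "fat": 5}
print(energy_kcal(meal))  # 205 (kcal)
print(energy_kj(meal))    # 865 (kJ)
```

Dietary fiber, discussed later in the text, is usually discounted in such calculations because of its limited absorption and digestibility.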
Vitamins, minerals, fiber, and water do not provide energy, but are required for other reasons. Molecules of carbohydrates
and fats consist of carbon, hydrogen, and oxygen atoms. Carbohydrates range from simple monosaccharides (glucose,
fructose, galactose) to complex polysaccharides (starch). Fats are triglycerides, made of assorted fatty acid monomers
bound to a glycerol backbone. Some fatty acids, but not all, are essential in the diet: they cannot be synthesized
in the body. Protein molecules contain nitrogen atoms in addition to carbon, oxygen, and hydrogen. The fundamental
components of protein are nitrogen-containing amino acids, some of which are essential in the sense that humans cannot
make them internally. Some of the amino acids are convertible (with the expenditure of energy) to glucose through a process known as gluconeogenesis; this glucose can then be used for energy production just like ordinary glucose. By breaking down
existing protein, the carbon skeleton of the various amino acids can be metabolized to intermediates in cellular
respiration; the remaining ammonia is discarded primarily as urea in urine. This occurs normally only during prolonged
starvation. Traditionally, simple carbohydrates are believed to be absorbed quickly, and therefore to raise blood-glucose
levels more rapidly than complex carbohydrates. This, however, is not accurate. Some simple carbohydrates (e.g.,
fructose) follow different metabolic pathways (e.g., fructolysis) that result in only a partial catabolism to glucose,
while, in essence, many complex carbohydrates may be digested at the same rate as simple carbohydrates. Glucose from food entering the bloodstream stimulates the beta cells of the pancreas to produce insulin.
Dietary fiber is a carbohydrate that is incompletely absorbed in humans and in some animals. Like all carbohydrates,
when it is metabolized it can produce four Calories (kilocalories) of energy per gram. However, in most circumstances
it accounts for less than that because of its limited absorption and digestibility. Dietary fiber consists mainly
of cellulose, a large carbohydrate polymer which is indigestible as humans do not have the required enzymes to disassemble
it. There are two subcategories: soluble and insoluble fiber. Whole grains, fruits (especially plums, prunes, and
figs), and vegetables are good sources of dietary fiber. There are many health benefits of a high-fiber diet. Dietary
fiber helps reduce the chance of gastrointestinal problems such as constipation and diarrhea by increasing the weight
and size of stool and softening it. Insoluble fiber, found in whole wheat flour, nuts and vegetables, especially
stimulates peristalsis – the rhythmic muscular contractions of the intestines, which move digesta along the digestive
tract. Soluble fiber, found in oats, peas, beans, and many fruits, dissolves in water in the intestinal tract to
produce a gel that slows the movement of food through the intestines. This may help lower blood glucose levels because
it can slow the absorption of sugar. Additionally, fiber, perhaps especially that from whole grains, is thought to
possibly help lessen insulin spikes, and therefore reduce the risk of type 2 diabetes. The link between increased
fiber consumption and a decreased risk of colorectal cancer is still uncertain. A molecule of dietary fat typically
consists of several fatty acids (containing long chains of carbon and hydrogen atoms), bonded to a glycerol. They
are typically found as triglycerides (three fatty acids attached to one glycerol backbone). Fats may be classified
as saturated or unsaturated depending on the detailed structure of the fatty acids involved. Saturated fats have
all of the carbon atoms in their fatty acid chains bonded to hydrogen atoms, whereas unsaturated fats have some of
these carbon atoms double-bonded, so their molecules have relatively fewer hydrogen atoms than a saturated fatty
acid of the same length. Unsaturated fats may be further classified as monounsaturated (one double-bond) or polyunsaturated
(many double-bonds). Furthermore, depending on the location of the double-bond in the fatty acid chain, unsaturated
fatty acids are classified as omega-3 or omega-6 fatty acids. Trans fats are a type of unsaturated fat with trans-isomer
bonds; these are rare in nature and in foods from natural sources; they are typically created in an industrial process
called (partial) hydrogenation. There are nine kilocalories in each gram of fat. Fatty acids such as conjugated linoleic
acid, catalpic acid, eleostearic acid and punicic acid, in addition to providing energy, represent potent immune
modulatory molecules. Saturated fats (typically from animal sources) have been a staple in many world cultures for
millennia. Unsaturated fats (e.g., vegetable oil) are considered healthier, while trans fats are to be avoided.
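The saturation and omega classification just described can be sketched as a small function; the delta-style double-bond positions (counted from the carboxyl carbon) and the function name are illustrative assumptions, not a standard API.

```python
def classify(chain_length: int, double_bonds: list) -> str:
    """Classify a fatty acid from its carbon chain length and the
    positions of its double bonds (counted from the carboxyl carbon)."""
    if not double_bonds:
        return "saturated"
    kind = "monounsaturated" if len(double_bonds) == 1 else "polyunsaturated"
    # Omega number: position of the first double bond counted from the
    # methyl (omega) end of the chain.
    omega = chain_length - max(double_bonds)
    return f"{kind}, omega-{omega}"

print(classify(18, []))           # stearic acid -> saturated
print(classify(18, [9]))          # oleic acid -> monounsaturated, omega-9
print(classify(18, [9, 12]))      # linoleic acid -> polyunsaturated, omega-6
print(classify(18, [9, 12, 15]))  # alpha-linolenic acid -> polyunsaturated, omega-3
```

Note that this sketch ignores cis/trans isomerism, which the text covers separately; trans fats differ in bond geometry, not in bond count or position.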
Saturated and some trans fats are typically solid at room temperature (such as butter or lard), while unsaturated
fats are typically liquids (such as olive oil or flaxseed oil). Trans fats are very rare in nature, and have been
shown to be highly detrimental to human health, but have properties useful in the food processing industry, such
as rancidity resistance. Most fatty acids are non-essential, meaning the body can produce them as
needed, generally from other fatty acids and always by expending energy to do so. However, in humans, at least two
fatty acids are essential and must be included in the diet. An appropriate balance of essential fatty acids—omega-3
and omega-6 fatty acids—seems also important for health, although definitive experimental demonstration has been
elusive. Both of these "omega" long-chain polyunsaturated fatty acids are substrates for a class of eicosanoids known
as prostaglandins, which have roles throughout the human body. They are hormones, in some respects. The omega-3 eicosapentaenoic
acid (EPA), which can be made in the human body from the omega-3 essential fatty acid alpha-linolenic acid (ALA),
or taken in through marine food sources, serves as a building block for series 3 prostaglandins (e.g., weakly inflammatory
PGE3). The omega-6 dihomo-gamma-linolenic acid (DGLA) serves as a building block for series 1 prostaglandins (e.g.
anti-inflammatory PGE1), whereas arachidonic acid (AA) serves as a building block for series 2 prostaglandins (e.g.
pro-inflammatory PGE2). Both DGLA and AA can be made from the omega-6 linoleic acid (LA) in the human body, or can
be taken in directly through food. An appropriately balanced intake of omega-3 and omega-6 partly determines the
relative production of different prostaglandins, which is one reason why a balance between omega-3 and omega-6 is
believed important for cardiovascular health. In industrialized societies, people typically consume large amounts
of processed vegetable oils, which have reduced amounts of the essential fatty acids along with too many omega-6 fatty acids relative to omega-3 fatty acids. The conversion rate of omega-6 DGLA to AA largely determines the production
of the prostaglandins PGE1 and PGE2. Omega-3 EPA prevents AA from being released from membranes, thereby skewing
prostaglandin balance away from pro-inflammatory PGE2 (made from AA) toward anti-inflammatory PGE1 (made from DGLA).
Moreover, the conversion (desaturation) of DGLA to AA is controlled by the enzyme delta-5-desaturase, which in turn
is controlled by hormones such as insulin (up-regulation) and glucagon (down-regulation). The amount and type of
carbohydrates consumed, along with some types of amino acid, can influence processes involving insulin, glucagon,
and other hormones; therefore, the ratio of omega-3 versus omega-6 has wide effects on general health, and specific
effects on immune function and inflammation, and mitosis (i.e., cell division). Proteins are structural materials
in much of the animal body (e.g. muscles, skin, and hair). They also form the enzymes that control chemical reactions
throughout the body. Each protein molecule is composed of amino acids, which are characterized by inclusion of nitrogen
and sometimes sulphur (these components are responsible for the distinctive smell of burning protein, such as the
keratin in hair). The body requires amino acids to produce new proteins (protein retention) and to replace damaged
proteins (maintenance). As there is no protein or amino acid storage provision, amino acids must be present in the
diet. Excess amino acids are discarded, typically in the urine. For all animals, some amino acids are essential (an
animal cannot produce them internally) and some are non-essential (the animal can produce them from other nitrogen-containing
compounds). About twenty amino acids are found in the human body, and about ten of these are essential and, therefore,
must be included in the diet. A diet that contains adequate amounts of amino acids (especially those that are essential)
is particularly important in some situations: during early development and maturation, pregnancy, lactation, or injury
(a burn, for instance). A complete protein source contains all the essential amino acids; an incomplete protein source
lacks one or more of the essential amino acids. It is possible with protein combinations of two incomplete protein
sources (e.g., rice and beans) to make a complete protein source, and characteristic combinations are the basis of
distinct cultural cooking traditions. However, complementary sources of protein do not need to be eaten at the same
meal to be used together by the body. Excess amino acids from protein can be converted into glucose and used for
fuel through a process called gluconeogenesis. The amino acids remaining after such conversion are discarded. Early
recommendations for the quantity of water required for maintenance of good health suggested that 6–8 glasses of water daily are the minimum needed to maintain proper hydration. However, the notion that a person should consume eight glasses
of water per day cannot be traced to a credible scientific source. The original water intake recommendation in 1945
by the Food and Nutrition Board of the National Research Council read: "An ordinary standard for diverse persons
is 1 milliliter for each calorie of food. Most of this quantity is contained in prepared foods." More recent comparisons
of well-known recommendations on fluid intake have revealed large discrepancies in the volumes of water we need to
consume for good health. Therefore, to help standardize guidelines, recommendations for water consumption are included
in two recent European Food Safety Authority (EFSA) documents (2010): (i) Food-based dietary guidelines and (ii)
Dietary reference values for water or adequate daily intakes (ADI). These specifications were provided by calculating
adequate intakes from measured intakes in populations of individuals with “desirable osmolarity values of urine and
desirable water volumes per energy unit consumed.” For healthful hydration, the current EFSA guidelines recommend
total water intakes of 2.0 L/day for adult females and 2.5 L/day for adult males. These reference values include
water from drinking water, other beverages, and from food. About 80% of our daily water requirement comes from the
beverages we drink, with the remaining 20% coming from food. Water content varies depending on the type of food consumed,
with fruit and vegetables containing more than cereals, for example. These values are estimated using country-specific
food balance sheets published by the Food and Agriculture Organisation of the United Nations. Other guidelines for
nutrition also have implications for the beverages we consume for healthy hydration; for example, the World Health Organization (WHO) recommends that added sugars should represent no more than 10% of total energy intake. The EFSA
panel also determined intakes for different populations. Recommended intake volumes in the elderly are the same as for adults because, despite lower energy consumption, the water requirement of this group is increased due to a reduction
in renal concentrating capacity. Pregnant and breastfeeding women require additional fluids to stay hydrated. The
EFSA panel proposes that pregnant women should consume the same volume of water as non-pregnant women, plus an increase
in proportion to the higher energy requirement, equal to 300 mL/day. To compensate for additional fluid output, breastfeeding
women require an additional 700 mL/day above the recommended intake values for non-lactating women. Dietary minerals
are inorganic chemical elements required by living organisms, other than the four elements carbon, hydrogen, nitrogen,
and oxygen that are present in nearly all organic molecules. The term "mineral" is archaic, since the intent is to
describe simply the less common elements in the diet. Some are heavier than the four just mentioned, including several
metals, which often occur as ions in the body. Some dietitians recommend that these be supplied from foods in which
they occur naturally, or at least as complex compounds, or sometimes even from natural inorganic sources (such as
calcium carbonate from ground oyster shells). Some minerals are absorbed much more readily in the ionic forms found
in such sources. On the other hand, minerals are often artificially added to the diet as supplements; the most famous
is likely iodine in iodized salt which prevents goiter. As with the minerals discussed above, some vitamins are recognized
as organic essential nutrients, necessary in the diet for good health. (Vitamin D is the exception: it can be synthesized
in the skin, in the presence of UVB radiation.) Certain vitamin-like compounds that are recommended in the diet,
such as carnitine, are thought useful for survival and health, but these are not "essential" dietary nutrients because
the human body has some capacity to produce them from other compounds. Moreover, thousands of different phytochemicals
have recently been discovered in food (particularly in fresh vegetables), which may have desirable properties including
antioxidant activity (see below); however, experimental demonstration has been suggestive but inconclusive. Other
essential nutrients that are not classified as vitamins include essential amino acids (see above), choline, essential
fatty acids (see above), and the minerals discussed in the preceding section. As cellular metabolism/energy production
requires oxygen, potentially damaging (e.g., mutation causing) compounds known as free radicals can form. Most of
these are oxidizers (i.e., acceptors of electrons) and some react very strongly. For the continued normal cellular
maintenance, growth, and division, these free radicals must be sufficiently neutralized by antioxidant compounds.
Some researchers have recently proposed a theory of the evolution of dietary antioxidants. Some antioxidants are produced by the human body given adequate precursors (glutathione, and vitamin C in most animals, though not in humans); those the body cannot produce must be obtained from the diet, either directly (vitamin C in humans, vitamin A, vitamin K) or as precursors the body converts (beta-carotene converted to vitamin A, vitamin D synthesized from cholesterol under sunlight).
Phytochemicals (see below) and their subgroup, polyphenols, make up the majority of antioxidants; about 4,000
are known. Different antioxidants are now known to function in a cooperative network. For example, Vitamin C can
reactivate free radical-containing glutathione or Vitamin E by accepting the free radical itself. Some antioxidants
are more effective than others at neutralizing different free radicals. Some cannot neutralize certain free radicals.
Some cannot be present in certain areas of free radical development (Vitamin A is fat-soluble and protects fat areas,
Vitamin C is water-soluble and protects those areas). When interacting with a free radical, some antioxidants produce
a different free radical compound that is less dangerous or more dangerous than the previous compound. Having a variety of antioxidants allows any byproducts to be safely dealt with by antioxidants that are more efficient at neutralizing a free radical's chain reaction. Animal intestines contain a large population of gut flora. In humans, the four dominant
phyla are Firmicutes, Bacteroidetes, Actinobacteria, and Proteobacteria. They are essential to digestion and are
also affected by food that is consumed. Bacteria in the gut perform many important functions for humans, including
breaking down and aiding in the absorption of otherwise indigestible food; stimulating cell growth; repressing the
growth of harmful bacteria, training the immune system to respond only to pathogens; producing vitamin B12; and defending
against some infectious diseases. Heart disease, cancer, obesity, and diabetes are commonly called "Western" diseases
because these maladies were once rarely seen in developing countries. An international study in China found that some regions had virtually no cancer or heart disease, while other areas showed "up to a 100-fold increase," coincident with shifts from entirely plant-based diets to heavily animal-based ones.
In contrast, diseases of affluence like cancer and heart disease are common throughout the developed world, including
the United States. Adjusted for age and exercise, large regional clusters of people in China rarely suffered from these "Western" diseases, possibly because their diets are rich in vegetables, fruits, and whole grains and contain little dairy and meat. Some studies suggest that the latter, in high quantities, are possible causes of some cancers.
There are arguments for and against this controversial issue. The United Healthcare/Pacificare nutrition guideline
recommends a whole plant food diet, and recommends using protein only as a condiment with meals. A National Geographic
cover article from November 2005, entitled The Secrets of Living Longer, also recommends a whole plant food diet.
The article is a lifestyle survey of three populations, Sardinians, Okinawans, and Adventists, who generally display
longevity and "suffer a fraction of the diseases that commonly kill people in other parts of the developed world,
and enjoy more healthy years of life." In sum, they offer three sets of 'best practices' to emulate; the rest is up to the reader. Common to all three groups is the advice to "Eat fruits, vegetables, and whole grains." Carnivore and herbivore
diets contrast, with basic nitrogen and carbon proportions varying in their particular foods. "The nitrogen
content of plant tissues averages about 2%, while in fungi, animals, and bacteria it averages about 5% to 10%." Many
herbivores rely on bacterial fermentation to create digestible nutrients from indigestible plant cellulose, while
obligate carnivores must eat animal meats to obtain certain vitamins or nutrients their bodies cannot otherwise synthesize.
All animals' diets must provide sufficient amounts of the basic building blocks they need, up to the point where
their particular biology can synthesize the rest. Animal tissue contains chemical compounds, such as water, carbohydrates
(sugar, starch, and fiber), amino acids (in proteins), fatty acids (in lipids), and nucleic acids (DNA and RNA).
These compounds in turn consist of elements such as carbon, hydrogen, oxygen, nitrogen, phosphorus, calcium, iron,
zinc, magnesium, manganese, and so on. All of these chemical compounds and elements occur in various forms and combinations
(e.g. hormones, vitamins, phospholipids, hydroxyapatite). Animal tissue consists of elements and compounds ingested,
digested, absorbed, and circulated through the bloodstream to feed the cells of the body. Except in the unborn fetus,
the digestive system is the first system involved. Digestive juices break chemical bonds in ingested molecules,
and modify their conformations and energy states. Though some molecules are absorbed into the bloodstream unchanged,
digestive processes release them from the matrix of foods. Unabsorbed matter, along with some waste products of metabolism,
is eliminated from the body in the feces. Studies of nutritional status must take into account the state of the body
before and after experiments, as well as the chemical composition of the whole diet and of all material excreted
and eliminated from the body (in urine and feces). Comparing the food to the waste can help determine the specific
compounds and elements absorbed and metabolized in the body. The effects of nutrients may only be discernible over
an extended period, during which all food and waste must be analyzed. The number of variables involved in such experiments
is high, making nutritional studies time-consuming and expensive, which explains why the science of animal nutrition
is still slowly evolving. Plants take up essential elements from the soil through their roots and from the air (consisting mainly of nitrogen and oxygen) through their leaves. Green plants obtain their carbohydrate supply from the carbon
dioxide in the air by the process of photosynthesis. Carbon and oxygen are absorbed from the air, while other nutrients
are absorbed from the soil. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen
ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged
soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon
dioxide and expel oxygen. The carbon dioxide molecules are used as the carbon source in photosynthesis. Research in nutrition has helped establish how environmental degradation can lead to serious nutrition-related health problems such as contamination, the spread of contagious diseases, and malnutrition. Moreover, environmental contamination from discharged agricultural and industrial chemicals, such as organochlorines, heavy metals, and radionuclides, can adversely affect humans and the ecosystem as a whole. These environmental contaminants can reduce people's nutritional status and health, which may directly or indirectly cause drastic changes in their dietary habits. Hence, food-based remedial and preventive strategies are essential to address global issues like hunger and malnutrition and to enable susceptible people to adapt to these environmental and socio-economic changes.
In the US, dietitians are registered (RD) or licensed (LD) with the Commission for Dietetic Registration and the
American Dietetic Association, and are only able to use the title "dietitian," as described by the business and professions
codes of each respective state, when they have met specific educational and experiential prerequisites and passed
a national registration or licensure examination, respectively. In California, registered dietitians must abide by
the "Business and Professions Code of Section 2585-2586.8". Anyone may call themselves a nutritionist, including
unqualified dietitians, as this term is unregulated. Some states, such as the State of Florida, have begun to include
the title "nutritionist" in state licensure requirements. Most governments provide guidance on nutrition, and some
also impose mandatory disclosure/labeling requirements for processed food manufacturers and restaurants to assist
consumers in complying with such guidance. In the US, nutritional standards and recommendations are established jointly
by the US Department of Agriculture and US Department of Health and Human Services. Dietary and physical activity
guidelines from the USDA are presented in the concept of MyPlate, which superseded the food pyramid, which replaced
the Four Food Groups. The Senate committee currently responsible for oversight of the USDA is the Agriculture, Nutrition
and Forestry Committee. Committee hearings are often televised on C-SPAN. An example of a state initiative to promote
nutrition literacy is Smart Bodies, a public-private partnership between the state’s largest university system and
largest health insurer, Louisiana State Agricultural Center and Blue Cross and Blue Shield of Louisiana Foundation.
Launched in 2005, this program promotes lifelong healthful eating patterns and physically active lifestyles for children
and their families. It is an interactive educational program designed to help prevent childhood obesity through classroom
activities that teach children healthful eating habits and physical exercise. Nutrition is taught in schools in many
countries. In England and Wales, the Personal and Social Education and Food Technology curricula include nutrition,
stressing the importance of a balanced diet and teaching how to read nutrition labels on packaging. In many schools,
a Nutrition class will fall within the Family and Consumer Science or Health departments. In some American schools,
students are required to take a certain number of FCS or Health related classes. Nutrition is offered at many schools,
and, if it is not a class of its own, nutrition is included in other FCS or Health classes such as Life Skills, Independent Living, Single Survival, Freshmen Connection, and Health. In many Nutrition classes, students learn about
the food groups, the food pyramid, Daily Recommended Allowances, calories, vitamins, minerals, malnutrition, physical
activity, healthful food choices, portion sizes, and how to live a healthy life. As of this writing, no specific nutrition literacy studies at a national level in the U.S. have been identified. However, the findings
of the 2003 National Assessment of Adult Literacy (NAAL) provide a basis upon which to frame the nutrition literacy
problem in the U.S. NAAL introduced the first ever measure of "the degree to which individuals have the capacity
to obtain, process and understand basic health information and services needed to make appropriate health decisions"
– an objective of Healthy People 2010 and of which nutrition literacy might be considered an important subset. On
a scale of below basic, basic, intermediate, and proficient, NAAL found 13 percent of adult Americans have proficient health literacy, 44 percent have intermediate literacy, 29 percent have basic literacy, and 14 percent have below basic health
literacy. The study found that health literacy increases with education, and that people living below the poverty level have lower health literacy than those above it. Another study examining the health and nutrition literacy status
of residents of the lower Mississippi Delta found that 52 percent of participants had a high likelihood of limited
literacy skills. While a precise comparison between the NAAL and Delta studies is difficult, primarily because of
methodological differences, Zoellner et al. suggest that health literacy rates in the Mississippi Delta region are
different from the U.S. general population and that they help establish the scope of the problem of health literacy
among adults in the Delta region. For example, only 12 percent of study participants identified the MyPyramid graphic
two years after it had been launched by the USDA. The study also found significant relationships between nutrition literacy and income level, and between nutrition literacy and educational attainment, further delineating priorities for the
region. These statistics point to the complexities surrounding the lack of health/nutrition literacy and reveal the
degree to which they are embedded in the social structure and interconnected with other problems. Among these problems
are the lack of information about food choices, a lack of understanding of nutritional information and its application
to individual circumstances, limited or difficult access to healthful foods, and a range of cultural influences and
socioeconomic constraints such as low levels of education and high levels of poverty that decrease opportunities
for healthful eating and living. Malnutrition refers to insufficient, excessive, or imbalanced consumption of nutrients
by an organism. In developed countries, the diseases of malnutrition are most often associated with nutritional imbalances
or excessive consumption. In developing countries, malnutrition is more likely to be caused by poor access to a range
of nutritious foods or inadequate knowledge. In Mali, the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) and the Aga Khan Foundation trained women's groups to make equinut, a healthy and nutritious version of the traditional recipe di-dèguè (comprising peanut paste, honey and millet or rice flour). The aim was
to boost nutrition and livelihoods by producing a product that women could make and sell, and which would be accepted
by the local community because of its local heritage. Nutritionism is the view that excessive reliance on food science
and the study of nutrition can lead to poor nutrition and to ill health. It was originally credited to Gyorgy Scrinis,
and was popularized by Michael Pollan. Since nutrients are invisible, policy makers rely on nutrition experts to
advise on food choices. Because science has an incomplete understanding of how food affects the human body, Pollan
argues, nutritionism can be blamed for many of the health problems relating to diet in the Western World today. Some
organizations have begun working with teachers, policymakers, and managed foodservice contractors to mandate improved
nutritional content and increased nutritional resources in school cafeterias from primary to university level institutions.
Health and nutrition have been proven to have close links with overall educational success. Currently, less than
10% of American college students report that they eat the recommended five servings of fruit and vegetables daily.
Better nutrition has been shown to have an impact on both cognitive and spatial memory performance; a study showed
those with higher blood sugar levels performed better on certain memory tests. In another study, those who consumed
yogurt performed better on thinking tasks than those who consumed caffeine-free diet soda or confections.
Nutritional deficiencies have been shown to have a negative effect on learning behavior in mice as far back as 1951.
Cancer is now common in developing countries. According to a study by the International Agency for Research on Cancer,
"In the developing world, cancers of the liver, stomach and esophagus were more common, often linked to consumption
of carcinogenic preserved foods, such as smoked or salted food, and parasitic infections that attack organs." Lung
cancer rates are rising rapidly in poorer nations because of increased use of tobacco. Developed countries "tended
to have cancers linked to affluence or a 'Western lifestyle' — cancers of the colon, rectum, breast and prostate
— that can be caused by obesity, lack of exercise, diet and age." Several lines of evidence indicate lifestyle-induced
hyperinsulinemia and reduced insulin function (i.e., insulin resistance) as a decisive factor in many disease states.
For example, hyperinsulinemia and insulin resistance are strongly linked to chronic inflammation, which in turn is
strongly linked to a variety of adverse developments such as arterial microinjuries and clot formation (i.e., heart
disease) and exaggerated cell division (i.e., cancer). Hyperinsulinemia and insulin resistance (the so-called metabolic
syndrome) are characterized by a combination of abdominal obesity, elevated blood sugar, elevated blood pressure,
elevated blood triglycerides, and reduced HDL cholesterol. The negative impact of hyperinsulinemia on prostaglandin
PGE1/PGE2 balance may be significant. The state of obesity clearly contributes to insulin resistance, which in turn
can cause type 2 diabetes. Virtually all obese and most type 2 diabetic individuals have marked insulin resistance.
Although the association between overweight and insulin resistance is clear, the exact (likely multifarious) causes
of insulin resistance remain less clear. It is important to note that it has been demonstrated that appropriate exercise,
more regular food intake, and reducing glycemic load (see below) all can reverse insulin resistance in overweight
individuals (and thereby lower blood sugar levels in those with type 2 diabetes). Obesity can unfavourably alter
hormonal and metabolic status via resistance to the hormone leptin, and a vicious cycle may occur in which insulin/leptin
resistance and obesity aggravate one another. The vicious cycle is putatively fuelled by continuously high insulin/leptin
stimulation and fat storage, as a result of high intake of strongly insulin/leptin stimulating foods and energy.
Both insulin and leptin normally function as satiety signals to the hypothalamus in the brain; however, insulin/leptin
resistance may reduce this signal and therefore allow continued overfeeding despite large body fat stores. In addition,
reduced leptin signalling to the brain may reduce leptin's normal effect to maintain an appropriately high metabolic
rate. There is a debate about how and to what extent different dietary factors (such as intake of processed carbohydrates; total protein, fat, and carbohydrate intake; intake of saturated and trans fatty acids; and low intake of vitamins/minerals) contribute
to the development of insulin and leptin resistance. In any case, analogous to the way modern man-made pollution
may possess the potential to overwhelm the environment's ability to maintain homeostasis, the recent explosive introduction
of high glycemic index and processed foods into the human diet may possess the potential to overwhelm the body's
ability to maintain homeostasis and health (as evidenced by the metabolic syndrome epidemic). Excess water intake,
without replenishment of sodium and potassium salts, leads to hyponatremia, which can further lead to water intoxication
at more dangerous levels. A well-publicized case occurred in 2007, when Jennifer Strange died while participating
in a water-drinking contest. More usually, the condition occurs in long-distance endurance events (such as marathon
or triathlon competition and training) and causes gradual mental dulling, headache, drowsiness, weakness, and confusion;
extreme cases may result in coma, convulsions, and death. The primary damage comes from swelling of the brain, caused
by increased osmosis as blood salinity decreases. Effective fluid replacement techniques include water aid stations
during running/cycling races, trainers providing water during team games such as soccer, and devices such as CamelBaks, which let a person drink water readily without interrupting activity. The relatively recent increase in the consumption of sugar has been linked to the rise of afflictions such as diabetes, obesity, and, more recently, heart disease. Obesity levels have more
than doubled in the last 30 years among adults, going from 15% to 35% in the United States. Obesity and diet are also high risk factors for diabetes. In the same time span that obesity doubled, diabetes numbers quadrupled in America. Increased weight, especially in the form of belly fat, and high sugar intake are also high risk factors for heart disease. Both sugar intake and fatty tissue increase the probability of elevated low-density lipoprotein (LDL) cholesterol in the bloodstream. Elevated LDL cholesterol is the primary factor in heart disease. To avoid these dangers, moderate sugar consumption is paramount. Since the Industrial Revolution some
two hundred years ago, the food processing industry has invented many technologies that both help keep foods fresh
longer and alter the fresh state of food as they appear in nature. Cooling is the primary technology used to maintain
freshness, whereas many more technologies have been invented to allow foods to last longer without becoming spoiled.
These latter technologies include pasteurisation, autoclavation, drying, salting, and separation of various components,
all of which appear to alter the original nutritional content of food. Pasteurisation and autoclavation (heating
techniques) have no doubt improved the safety of many common foods, preventing epidemics of bacterial infection.
But some of these newer food processing technologies have drawbacks as well. Modern separation techniques such as milling,
centrifugation, and pressing have enabled concentration of particular components of food, yielding flour, oils, juices,
and so on, and even separate fatty acids, amino acids, vitamins, and minerals. Inevitably, such large-scale concentration
changes the nutritional content of food, retaining certain nutrients while removing others. Heating techniques may also
reduce food's content of many heat-labile nutrients such as certain vitamins and phytochemicals, and possibly other
yet-to-be-discovered substances. Because of reduced nutritional value, processed foods are often 'enriched' or 'fortified'
with some of the most critical nutrients (usually certain vitamins) that were lost during processing. Nonetheless,
processed foods tend to have an inferior nutritional profile compared to whole, fresh foods, regarding content of
both sugar and high GI starches, potassium/sodium, vitamins, fiber, and of intact, unoxidized (essential) fatty acids.
In addition, processed foods often contain potentially harmful substances such as oxidized fats and trans fatty acids.
A dramatic example of the effect of food processing on a population's health is the history of epidemics of beri-beri
in people subsisting on polished rice. Removing the outer layer of rice by polishing it removes with it the essential
vitamin thiamine, causing beri-beri. Another example is the development of scurvy among infants in the late 19th
century in the United States. It turned out that the vast majority of sufferers were being fed milk that had been
heat-treated (as suggested by Pasteur) to control bacterial disease. Pasteurisation was effective against bacteria,
but it destroyed the vitamin C. As mentioned, lifestyle- and obesity-related diseases are becoming increasingly prevalent
all around the world. There is little doubt that the increasingly widespread application of some modern food processing
technologies has contributed to this development. The food processing industry is a major part of modern economy,
and as such it is influential in political decisions (e.g., nutritional recommendations, agricultural subsidising).
In any known profit-driven economy, health considerations are hardly a priority; effective production of cheap foods
with a long shelf-life is more the trend. In general, whole, fresh foods have a relatively short shelf-life and are
less profitable to produce and sell than are more processed foods. Thus, the consumer is left with the choice between
more expensive, but nutritionally superior, whole, fresh foods, and cheap, usually nutritionally inferior, processed
foods. Because processed foods are often cheaper and more convenient (in purchasing, storage, and preparation),
and more available, the consumption of nutritionally inferior foods has been increasing throughout the world along
with many nutrition-related health complications.
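The glycemic load mentioned above has a conventional definition: a food's glycemic index (GI) multiplied by the grams of available carbohydrate in a serving, divided by 100. A minimal sketch of that calculation (the food values below are illustrative assumptions, not measured data):

```python
# Glycemic load (GL) is conventionally computed as
#   GL = GI * available carbohydrate (g) per serving / 100
def glycemic_load(glycemic_index: float, carbs_grams: float) -> float:
    """Return the glycemic load of one serving of a food."""
    return glycemic_index * carbs_grams / 100.0

# Illustrative servings: (name, approximate GI, carbohydrate grams).
servings = [
    ("white rice", 73, 45),
    ("lentils", 32, 20),
]

for name, gi, carbs in servings:
    print(f"{name}: GL = {glycemic_load(gi, carbs):.1f}")
```

Under this convention, a high-GI food eaten in a small carbohydrate portion can still carry a modest glycemic load, which is why reducing portion size is one route to "reducing glycemic load" alongside choosing lower-GI foods.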
The Crimean War was a military conflict fought from October 1853 to March 1856, in which Russia lost to an alliance of France,
the United Kingdom, the Ottoman Empire, and Sardinia. The immediate cause involved the rights of Christian minorities
in the Holy Land, which was controlled by the Ottoman Empire. The French promoted the rights of Catholics, while
Russia promoted those of the Eastern Orthodox Christians. The longer-term causes involved the decline of the Ottoman
Empire and the unwillingness of the United Kingdom and France to allow Russia to gain territory and power at Ottoman
expense. It has widely been noted that the causes, in one case involving an argument over a key, have never revealed
a "greater confusion of purpose", yet led to a war noted for its "notoriously incompetent international butchery."
While the churches eventually worked out their differences and came to an initial agreement, both Nicholas I of Russia
and Napoleon III refused to back down. Nicholas issued an ultimatum that the Orthodox subjects of the Empire be placed
under his protection. Britain attempted to mediate, and arranged a compromise that Nicholas agreed to. When the Ottomans
demanded changes, Nicholas refused and prepared for war. Having obtained promises of support from France and Britain,
the Ottomans officially declared war on Russia in October 1853. The war opened in the Balkans when Russian troops
occupied provinces in modern Romania and began to cross the Danube. Led by Omar Pasha, the Ottomans fought a strong
defensive battle and stopped the advance at Silistra. A separate action on the fort town of Kars in eastern Turkey
led to a siege, and a Turkish attempt to reinforce the garrison was destroyed by a Russian fleet at Sinop. Fearing
an Ottoman collapse, France and the UK rushed forces to Gallipoli. They then moved north to Varna in June, arriving
just in time for the Russians to abandon Silistra. Aside from a minor skirmish at Constanța there was little for
the allies to do. Karl Marx quipped that "there they are, the French doing nothing and the British helping them as
fast as possible". Frustrated by the wasted effort, and with demands for action from their citizens, the allied force
decided to attack the center of Russian strength in the Black Sea at Sevastopol on the Crimean peninsula. After extended
preparations, the forces landed on the peninsula in September 1854 and fought their way to a point south of Sevastopol
after a series of successful battles. The Russians counterattacked on 25 October in what became the Battle of Balaclava
and were repulsed, but at the cost of seriously depleting the British Army forces. A second counterattack, ordered
personally by Nicholas, was defeated by Omar Pasha. The front settled into a siege and led to horrible conditions
for troops on both sides. Smaller actions were carried out in the Baltic, the Caucasus, the White Sea and in the
North Pacific. Sevastopol fell after eleven months, and formerly neutral countries began to join the allied cause.
Isolated and facing a bleak prospect of invasion from the west if the war continued, Russia sued for peace in March
1856. This was welcomed by France and the UK, where the citizens began to turn against their governments as the war
dragged on. The war was officially ended by the Treaty of Paris, signed on 30 March 1856. Russia lost the war, and
was forbidden from hosting warships in the Black Sea. The Ottoman vassal states of Wallachia and Moldavia became
largely independent. Christians were granted a degree of official equality, and the Orthodox church regained control
of the Christian churches in dispute.:415 The Crimean War was one of the first conflicts to use modern technologies
such as explosive naval shells, railways, and telegraphs.(Preface) The war was one of the first to be documented
extensively in written reports and photographs. As the legend of the "Charge of the Light Brigade" demonstrates,
the war quickly became an iconic symbol of logistical, medical and tactical failures and mismanagement. The reaction
in the UK was a demand for professionalization, most famously achieved by Florence Nightingale, who gained worldwide
attention for pioneering modern nursing while treating the wounded. In the 1820s and 1830s the Ottoman Empire endured a series of blows that challenged the existence of the country. The Greek Uprising, which began in the spring of 1821, exposed the internal and military weakness of the Ottoman Empire and provoked severe atrocities by Ottoman military forces (see Chios massacre). The disbandment of the centuries-old Janissary corps by Sultan Mahmud II on 15 June 1826 (Auspicious Incident) benefited the country in the longer term, but deprived it of an effective army in the short term. In 1827 the allied Anglo-Franco-Russian fleet destroyed almost all the Ottoman naval forces at the Battle of Navarino. In 1830 Greece became an independent state after a ten-year war of independence and the Russo-Turkish War of 1828–1829. Under the Treaty of Adrianople (1829), Russian and European commercial ships were authorized to pass freely through the Black Sea straits, Serbia received autonomy, and the Danubian Principalities (Moldavia and Wallachia) became territories under Russian protection. France used the moment to occupy Algiers in 1830. In 1831
Muhammad Ali of Egypt, who was the most powerful vassal of the Ottoman Empire, claimed independence. Ottoman forces
were defeated in a number of battles, and the Egyptians were poised to capture Constantinople, which forced Sultan Mahmud II to seek Russian military aid. A Russian corps of 10,000 men landed on the shores of the Bosphorus in 1833 and helped prevent the capture of Constantinople, and with it the possible disappearance of the Ottoman Empire. In 1838 the situation was much the same as in 1831: Muhammad Ali of Egypt, unhappy with his lack of control and power in Syria, resumed military action. The Ottoman army lost to the Egyptians at the Battle of Nezib on 24 June 1839. The Ottoman Empire was saved by Great Britain, Austria, Prussia and Russia, which signed a convention in London on 15 July 1840 granting Muhammad Ali and his descendants the right to inherit power in Egypt in exchange for the removal of Egyptian military forces from Syria and Lebanon. Moreover, Muhammad Ali had to acknowledge formal dependence on the Ottoman sultan. After Muhammad Ali refused to obey the requirements of the London convention, the allied Anglo-Austrian fleet blockaded the Nile Delta, bombarded Beirut and captured Acre. Muhammad Ali accepted the conditions of the London convention
in 1840. Russia, as a member of the Holy Alliance, had operated as the "police of Europe", maintaining the balance
of power that had been established in the Treaty of Vienna in 1815. Russia had assisted Austria's efforts in suppressing
the Hungarian Revolution of 1848, and expected gratitude; it wanted a free hand in settling its problems with the
Ottoman Empire — the "sick man of Europe". The United Kingdom could not tolerate Russian dominance of Ottoman affairs,
as that would challenge the British domination of the eastern Mediterranean. For over 200 years, Russia had been
expanding southwards across the sparsely populated "Wild Fields" toward the warm water ports of the Black Sea that
did not freeze over like the handful of other ports available in the north. The goal was to promote year-round trade
and a year-round navy.:11 Pursuit of this goal brought the emerging Russian state into conflict with the Ukrainian
Cossacks and then with the Tatars of the Crimean Khanate and Circassians. When Russia conquered these groups and
gained possession of southern Ukraine, known as New Russia during Russian imperial times, the Ottoman Empire lost
its buffer zone against Russian expansion, and Russia and the Ottoman Empire fell into direct conflict. The conflict
with the Ottoman Empire also presented a religious issue of importance, as Russia saw itself as the protector of
Orthodox Christians, many of whom lived under Ottoman control and were treated as second-class citizens.(ch 1) It
is often said that Russia was militarily weak, technologically backward, and administratively incompetent. Despite
its grand ambitions toward the south, it had not built its railroad network in that direction, and communications
were poor. The bureaucracy was riddled with graft, corruption and inefficiency and was unprepared for war. Its navy
was weak and technologically backward; its army, although very large, was good only for parades; it suffered from colonels who pocketed their men's pay and from poor morale, and was out of touch with the latest technology developed by Britain and
France. By the war's end, everyone realized the profound weaknesses of the Russian military, and the Russian leadership
was determined to reform it. The immediate chain of events leading to France and the United Kingdom declaring war
on Russia on 27 and 28 March 1854 came from the ambition of the French emperor Napoleon III to restore the grandeur
of France. He wanted Catholic support that would come his way if he attacked Eastern Orthodoxy, as sponsored by Russia.:103
The Marquis Charles de La Valette was a zealous Catholic and a leading member of the "clerical party," which demanded
French protection of the Roman Catholic rights to the holy places in Palestine. In May 1851, Napoleon appointed La
Valette as his ambassador to the Porte (the Ottoman Empire).:7–9 The appointment was made with the intent of forcing
the Ottomans to recognise France as the "sovereign authority" over the Christian population.:19 Russia disputed this
attempted change in authority. Pointing to two more treaties, one in 1757 and the 1774 Treaty of Küçük Kaynarca,
the Ottomans reversed their earlier decision, renouncing the French treaty and insisting that Russia was the protector
of the Orthodox Christians in the Ottoman Empire. Napoleon III responded with a show of force, sending the ship of
the line Charlemagne to the Black Sea. This action was a violation of the London Straits Convention.:104:19 Thus,
France's show of force presented a real threat, and when combined with aggressive diplomacy and money, induced the
Ottoman Sultan Abdülmecid I to accept a new treaty, confirming France and the Roman Catholic Church as the supreme
Christian authority with control over the Roman Catholic holy places and possession of the keys to the Church of
the Nativity, previously held by the Greek Orthodox Church.:20 Nicholas began courting Britain by means of conversations
with the British ambassador, George Hamilton Seymour, in January and February 1853.:105 Nicholas insisted that he
no longer wished to expand Imperial Russia:105 but that he had an obligation to the Christian communities in the
Ottoman Empire.:105 The Tsar next dispatched a highly abrasive diplomat, Prince Menshikov, on a special mission to
the Ottoman Sublime Porte in February 1853. By previous treaties, the sultan was committed "to protect the (Eastern
Orthodox) Christian religion and its churches." Menshikov demanded a Russian protectorate over all 12 million Orthodox
Christians in the Empire, with control of the Orthodox Church's hierarchy. A compromise was reached regarding Orthodox
access to the Holy Land, but the Sultan, strongly supported by the British ambassador, rejected the more sweeping
demands. In February 1853, the British government of Lord Aberdeen, the prime minister, re-appointed Stratford Canning
as British ambassador to the Ottoman Empire.:110 Having resigned the ambassadorship in January, he had been replaced
by Colonel Rose as chargé d'affaires. Lord Stratford then turned around and sailed back to Constantinople, arriving
there on 5 April 1853. There he convinced the Sultan to reject the Russian treaty proposal, as compromising the independence
of the Turks. The Leader of the Opposition in the British House of Commons, Benjamin Disraeli, blamed Aberdeen and
Stratford's actions for making war inevitable, thus starting the process which would eventually force the Aberdeen
government to resign in January 1855, over the war. Shortly after he learned of the failure of Menshikov's diplomacy
toward the end of June 1853, the Tsar sent armies under the commands of Field Marshal Ivan Paskevich and General
Mikhail Gorchakov across the Pruth River into the Ottoman-controlled Danubian Principalities of Moldavia and Wallachia.
Fewer than half of the 80,000 Russian soldiers who crossed the Pruth in 1853 survived. By far, most of the deaths
would result from sickness rather than combat,:118–119 for the Russian army still suffered from medical services
that ranged from bad to none. Russia had previously obtained recognition from the Ottoman Empire of the Tsar's role
as special guardian of the Orthodox Christians in Moldavia and Wallachia. Now Russia used the Sultan's failure to
resolve the issue of the protection of the Christian sites in the Holy Land as a pretext for Russian occupation of
these Danubian provinces. Nicholas believed that the European powers, especially Austria, would not object strongly
to the annexation of a few neighbouring Ottoman provinces, especially considering that Russia had assisted Austria's
efforts in suppressing the Hungarian Revolution in 1849. The European powers continued to pursue diplomatic avenues.
The representatives of the four neutral Great Powers—the United Kingdom, France, Austria and Prussia—met in Vienna,
where they drafted a note that they hoped would be acceptable to both the Russians and the Ottomans. The peace terms
arrived at by the four powers at the Vienna Conference were delivered to the Russians by the Austrian Foreign Minister
Count Karl von Buol on 5 December 1853. The note met with the approval of Nicholas I; however, Abdülmecid I rejected
the proposal, feeling that the document's poor phrasing left it open to many different interpretations. The United
Kingdom, France, and Austria united in proposing amendments to mollify the Sultan, but the court of St. Petersburg
ignored their suggestions.:143 The UK and France then set aside the idea of continuing negotiations, but Austria
and Prussia did not believe that the rejection of the proposed amendments justified the abandonment of the diplomatic
process. The Russians sent a fleet to Sinop in northern Anatolia. In the Battle of Sinop on 30 November 1853 they
destroyed a patrol squadron of Ottoman frigates and corvettes while they were anchored in port. Public opinion in
the UK and France was outraged and demanded war. Sinop provided the United Kingdom and France with the casus belli
("cause for war") for declaring war against Russia. On 28 March 1854, after Russia ignored an Anglo-French ultimatum
to withdraw from the Danubian Principalities, the UK and France formally declared war. Britain was concerned about
Russian activity, and Sir John Burgoyne, senior advisor to Lord Aberdeen, urged that the Dardanelles should be occupied and works of sufficient strength thrown up to block any Russian move to capture Constantinople and gain access to
the Mediterranean Sea. The Corps of Royal Engineers sent men to the Dardanelles while Burgoyne went to Paris, meeting
the British Ambassador and the French Emperor. Lord Cowley wrote on 8 February to Burgoyne: "Your visit to Paris
has produced a visible change in the Emperor's views, and he is making every preparation for a land expedition in
case the last attempt at negotiation should break down.":411 Nicholas felt that, because of Russian assistance in
suppressing the Hungarian revolution of 1848, Austria would side with him, or at the very least remain neutral. Austria,
however, felt threatened by the Russian troops in the Balkans. On 27 February 1854, the United Kingdom and France
demanded the withdrawal of Russian forces from the principalities; Austria supported them and, though it did not
declare war on Russia, it refused to guarantee its neutrality. Russia's rejection of the ultimatum caused the UK
and France to enter the war. Following the Ottoman ultimatum in September 1853, forces under the Ottoman general
Omar Pasha crossed the Danube at Vidin and captured Calafat in October 1853. Simultaneously, in the east, the Ottomans
crossed the Danube at Silistra and attacked the Russians at Oltenița. The resulting Battle of Oltenița was the first
engagement following the declaration of war. The Russians counterattacked, but were beaten back. On 31 December 1853,
the Ottoman forces at Calafat moved against the Russian force at Chetatea or Cetate, a small village nine miles north
of Calafat, and engaged them on 6 January 1854. The battle began when the Russians made a move to recapture Calafat.
Most of the heavy fighting, however, took place in and around Chetatea until the Russians were driven out of the
village. Despite the setback at Chetatea, on 28 January 1854, Russian forces laid siege to Calafat. The siege would
continue until May 1854 when the Russians lifted the siege. The Ottomans would also later beat the Russians in battle
at Caracal.:130–43 In the spring of 1854 the Russians again advanced, crossing the Danube River into the Turkish
province of Dobruja. By April 1854, the Russians had reached the lines of Trajan's Wall where they were finally halted.
In the center, the Russian forces crossed the Danube and laid siege to Silistra from 14 April with 60,000 troops,
the defenders with 15,000 had supplies for three months.:415 The siege was lifted on 23 June 1854. The English and
French forces at this time were unable to take the field for lack of equipment.:415 In the west, the Russians were
dissuaded from attacking Vidin by the presence of the Austrian forces, which had swelled to 280,000 men. On 28 May
1854 a protocol of the Vienna Conference was signed by Austria and Russia. One of the aims of the Russian advance
had been to encourage the Orthodox Christian Serbs and Bulgarians living under Ottoman rule to rebel. However, when
the Russian troops actually crossed the River Pruth into Moldavia, the Orthodox Christians still showed no interest
in rising up against the Turks.:131, 137 Adding to the worries of Nicholas I was the concern that Austria would enter
the war against the Russians and attack his armies on the western flank. Indeed, after attempting to mediate a peaceful
settlement between Russia and Turkey, the Austrians entered the war on the side of Turkey with an attack against
the Russians in the Principalities which threatened to cut off the Russian supply lines. Accordingly, the Russians
were forced to raise the siege of Silistra on 23 June 1854, and begin abandoning the Principalities.:185 The lifting
of the siege reduced the threat of a Russian advance into Bulgaria. In June 1854, the Allied expeditionary force
landed at Varna, a city on the Black Sea's western coast (now in Bulgaria). They made little advance from their base
there.:175–176 In July 1854, the Turks under Omar Pasha crossed the Danube into Wallachia and on 7 July 1854, engaged
the Russians in the city of Giurgiu and conquered it. The capture of Giurgiu by the Turks immediately threatened
Bucharest in Wallachia with capture by the same Turk army. On 26 July 1854, Tsar Nicholas I ordered the withdrawal
of Russian troops from the Principalities. Also, in late July 1854, following up on the Russian retreat, the French
staged an expedition against the Russian forces still in Dobruja, but this was a failure.:188–190 During this period,
the Russian Black Sea Fleet was operating against Ottoman coastal traffic between Constantinople (currently named
Istanbul) and the Caucasus ports, while the Ottoman fleet sought to protect this supply line. The clash came on 30
November 1853 when a Russian fleet attacked an Ottoman force in the harbour at Sinop, and destroyed it at the Battle
of Sinop. The battle outraged opinion in the UK, which called for war. There was little additional naval action until
March 1854 when on the declaration of war the British frigate Furious was fired on outside Odessa harbour. In response
an Anglo-French fleet bombarded the port, causing much damage to the town. To show support for Turkey after the battle
of Sinop, on 22 December 1853, the Anglo-French squadron entered the Black Sea and the steamship HMS Retribution
approached the Port of Sevastopol, the commander of which received an ultimatum not to allow any ships in the Black
Sea. In June, the fleets transported the Allied expeditionary forces to Varna, in support of the Ottoman operations
on the Danube; in September they again transported the armies, this time to the Crimea. The Russian fleet during
this time declined to engage the allies, preferring to maintain a "fleet in being"; this strategy failed when Sevastopol,
the main port and where most of the Black Sea fleet was based, came under siege. The Russians were reduced to scuttling
their warships as blockships, after stripping them of their guns and men to reinforce batteries on shore. During
the siege, the Russians lost four 110- or 120-gun, three-decker ships of the line, twelve 84-gun two-deckers and
four 60-gun frigates in the Black Sea, plus a large number of smaller vessels. During the rest of the campaign the
allied fleets remained in control of the Black Sea, ensuring the various fronts were kept supplied. The Russians
evacuated Wallachia and Moldavia in late July 1854. With the evacuation of the Danubian Principalities, the immediate
cause of war was withdrawn and the war might have ended at this time.:192 However, war fever among the public in
both the UK and France had been whipped up by the press in both countries to the degree that politicians found it
untenable to propose ending the war at this point. Indeed, the coalition government of George Hamilton-Gordon, 4th
Earl of Aberdeen fell on 30 January 1855 on a no-confidence vote as Parliament voted to appoint a committee to investigate
mismanagement of the war.:311 The Crimean campaign opened in September 1854. 360 ships sailed in seven columns, each
steamer towing two sailing ships.:422 The fleet anchored on 13 September in the bay of Eupatoria; the town surrendered and 500 Marines landed to occupy it. This town and bay would provide a fall-back position in case of disaster.:201 The
ships then sailed east to make the landing of the allied expeditionary force on the sandy beaches of Calamita Bay
on the south west coast of the Crimean Peninsula. The landing surprised the Russians, as they had been expecting
a landing at Katcha; the last-minute change suggested that Russia had known the original battle plan. There was no
sign of the enemy and the men were all landed on 14 September. It took another four days to land all the stores,
equipment, horses and artillery. The landing was north of Sevastopol, so the Russians had arrayed their army in expectation
of a direct attack. The allies advanced and on the morning of 20 September came up to the Alma river and the whole
Russian army. The position was strong, but after three hours,:424 the frontal attack had driven the Russians out of their dug-in positions with losses of 6,000 men; the Battle of the Alma cost the Allies 3,300 casualties. Failing to pursue
the retreating forces was one of many strategic errors made during the war, and the Russians themselves noted that
had they pressed south that day they would have easily captured Sevastopol. Believing the northern approaches to
the city too well defended, especially due to the presence of a large star fort and because Sevastopol was on the
south side of the inlet from the sea that made the harbour, Sir John Burgoyne, the engineer advisor, recommended
that the allies attack Sevastopol from the south. This was agreed by the joint commanders, Raglan and St Arnaud.:426
On 25 September the whole army marched southeast and encircled the city to the south. This let them set up a new
supply center in a number of protected inlets on the south coast. The Russians retreated into the city. The Allied
army relocated without problems to the south and the heavy artillery was brought ashore with batteries and connecting
trenches built so that by 10 October some batteries were ready and by 17 October—when the bombardment commenced—126
guns were firing, 53 of them French.:430 The fleet at the same time engaged the shore batteries. The British bombardment
worked better than the French one, whose guns were of smaller caliber. The fleet suffered high casualties during the day. The
British wanted to attack that afternoon, but the French wanted to defer the attack. A postponement was agreed, but
on the next day the French were still not ready. By 19 October the Russians had transferred some heavy guns to the
southern defenses and outgunned the allies.:431 A large Russian assault on the allied supply base to the southeast,
at Balaclava, was rebuffed on 25 October 1854.:521–527 The Battle of Balaclava is remembered in the UK for the actions
of two British units. At the start of the battle, a large body of Russian cavalry charged the 93rd Highlanders, who
were posted north of the village of Kadikoi. Commanding them was Sir Colin Campbell. Rather than 'form square', the
traditional method of repelling cavalry, Campbell took the risky decision to have his Highlanders form a single line,
two men deep. Campbell had seen the effectiveness of the new Minie rifles, with which his troops were armed, at the
Battle of the Alma a month before, and was confident his men could beat back the Russians. His tactics succeeded.
From up on the ridge to the west, Times correspondent William Howard Russell saw the Highlanders as a 'thin red streak
topped with steel', a phrase which soon became the 'Thin Red Line.' Soon after, a Russian cavalry movement was countered
by the Heavy Brigade, who charged and fought hand-to-hand until the Russians retreated. This caused a more widespread
Russian retreat, including a number of their artillery units. When the local commanders failed to take advantage
of the retreat, Lord Raglan sent out orders to move up. The local commanders ignored the demands, so a British aide-de-camp personally delivered a hastily written and confusing order to attack the artillery. When the Earl of Cardigan questioned which guns the order referred to, the aide-de-camp pointed to the first Russian battery he could
see – the wrong one. Cardigan formed up his unit and charged the length of the Valley of the Balaclava, under fire
from Russian batteries in the hills. The charge of the Light Brigade caused 278 casualties among the 700-man unit. The
Light Brigade was memorialized in the famous poem by Alfred Lord Tennyson, "The Charge of the Light Brigade." Although
traditionally the charge of the Light Brigade was looked upon as a glorious but wasted sacrifice of good men and horses, recent historians argue that it did succeed in at least some of its objectives. The aim of any cavalry charge is to scatter the enemy lines and frighten the enemy off the battlefield. The charge so unnerved the Russian cavalry, which had previously been routed by the Heavy Brigade, that it was put to full-scale flight.:252 Winter,
and a deteriorating supply situation of troops and materiel on both sides, led to a halt in ground operations. Sevastopol
remained invested by the allies, while the allied armies were hemmed in by the Russian army in the interior. On 14
November a storm sank thirty allied transport ships, including HMS Prince, which was carrying a cargo of winter clothing.:435
The storm and heavy traffic caused the road from the coast to the troops to disintegrate into a quagmire, requiring
engineers to devote most of their time to its repair, including quarrying stone. A tramroad was ordered; it arrived in January with a civilian engineering crew, but it was March before it was sufficiently advanced to be of any appreciable value.:439 An electric telegraph was also ordered, but the frozen ground delayed its installation until March, when communications from the base port of Balaklava to the British HQ were established. The pipe-and-cable-laying
plough failed because of the hard frozen soil, but even so 21 miles of cable were laid.:449 The Allies had had time to consider the problem, and the French were brought around to agree that the key to the defence was the Malakoff.:441 The emphasis of the siege at Sevastopol shifted to the British left, against the fortifications on Malakoff hill.:339
In March, there was fighting by the French over a new fort being built by the Russians at Mamelon, located on a hill
in front of the Malakoff. Several weeks of fighting saw little change in the front line, and the Mamelon remained
in Russian hands. Many more artillery pieces had arrived and been dug into batteries. In June, a third bombardment
was followed after two days by a successful attack on the Mamelon, but a follow-up assault on the Malakoff failed
with heavy losses. During this time the garrison commander, Admiral Nakhimov, fell on 30 June 1855;:378 Raglan had died on 28 June.:460 In August, the Russians again made an attack towards the base at Balaclava, defended by
the French, newly arrived Sardinian, and Ottoman troops.:461 The resulting Battle of the Chernaya was a defeat for the
Russians, who suffered heavy casualties. For months each side had been building forward rifle pits and defensive
positions, which resulted in many skirmishes. Artillery fire aimed to gain superiority over the enemy guns.:450–462
September saw the final assault. On 5 September, another French bombardment (the sixth) was followed by an assault
by the French Army on 8 September resulting in the capture of the Malakoff by the French, and following their failure
to retake it, the collapse of the Russian defences. Meanwhile, the British captured the Great Redan, just south of
the city of Sevastopol. The Russians retreated to the north, blowing up their magazines and the city fell on 9 September
1855 after a 337-day-long siege.:106 In spring 1855, the allied British-French commanders decided to send an Anglo-French
naval squadron into the Azov Sea to undermine Russian communications and supplies to besieged Sevastopol. On 12 May
1855, British-French warships entered the Kerch Strait and destroyed the coast battery of the Kamishevaya Bay. On
21 May 1855, the gunboats and armed steamers attacked the seaport of Taganrog, the most important hub near Rostov-on-Don. The vast amounts of food, especially bread, wheat, barley, and rye, that were amassed in the city after the
outbreak of war were prevented from being exported. In July 1855, the allied squadron tried to go past Taganrog to Rostov-on-Don, entering the Don River through the Mius River. On 12 July 1855, HMS Jasper grounded near Taganrog after a fisherman moved the buoys into shallow water. The Cossacks captured the gunboat with all of its guns and blew
it up. The third siege attempt was made 19–31 August 1855, but the city was already fortified and the squadron could
not approach close enough for landing operations. The allied fleet left the Gulf of Taganrog on 2 September 1855, with minor military operations along the Azov Sea coast continuing until late autumn 1855. 1853: There were
four main events. 1. In the north the Turks captured the border fort of Saint Nicholas in a surprise night attack
(27/28 October). They then pushed about 20,000 troops across the Cholok River border. Being outnumbered, the Russians
abandoned Poti and Redut Kale and drew back to Marani. Both sides remained immobile for the next seven months. 2.
In the center the Turks moved north from Ardahan to within cannon-shot of Akhaltsike and awaited reinforcements (13
November). The Russians routed them. The claimed losses were 4,000 Turks and 400 Russians. 3. In the south about 30,000
Turks slowly moved east to the main Russian concentration at Gyumri or Alexandropol (November). They crossed the
border and set up artillery south of town. Prince Orbeliani tried to drive them off and found himself trapped. The
Turks failed to press their advantage; the remaining Russians rescued Orbeliani and the Turks retired west. Orbeliani lost about 1,000 men out of 5,000. The Russians now decided to advance; the Turks took up a strong position on the Kars road and attacked. They were defeated in the battle of Başgedikler, losing 6,000 men, half their artillery and all their supply train. The Russians lost 1,300, including Prince Orbeliani. This was Prince Ellico Orbeliani, whose wife was later kidnapped by Shamyl at Tsinandali. 4. At sea the Turks sent a fleet east, which was destroyed by Admiral
Nakhimov at Sinope. 1854: In the north Eristov pushed southwest, fought two battles, forced the Turks back to Batum, retired
behind the Cholok River and suspended action for the rest of the year (June). In the far south Wrangel pushed west,
fought a battle and occupied Bayazit. In the center the main forces stood at Kars and Gyumri. Both slowly approached
along the Kars-Gyumri road and faced each other, neither side choosing to fight (June–July). On 4 August Russian scouts saw a movement which they thought was the start of a withdrawal; the Russians advanced, and the Turks attacked first. They were defeated, losing 8,000 men to the Russians' 3,000, and 10,000 irregulars deserted to their villages. Both
sides withdrew to their former positions. About this time the Persians made a semi-secret agreement to remain neutral
in exchange for the cancellation of the indemnity from the previous war. 1855: Kars: In the year up to May 1855, Turkish
forces in the east were reduced from 120,000 to 75,000, mostly by disease. The local Armenian population kept Muravyev
well-informed about the Turks at Kars and he judged they had about five months of supplies. He therefore decided
to control the surrounding area with cavalry and starve them out. He started in May and by June was south and west
of the town. A relieving force fell back and there was a possibility of taking Erzerum, but Muravyev chose not to.
In late September he learned of the fall of Sevastopol and a Turkish landing at Batum. This led him to reverse policy
and try a direct attack. It failed, the Russians losing 8,000 men and the Turks 1,500 (29 September). The blockade continued, and Kars surrendered on 8 November. 1855: Georgian coast: Omar Pasha, the Turkish commander in the Crimea, had
long wanted to land in Georgia, but the western powers vetoed it. When they relented in August most of the campaigning
season was lost. In September 8000 Turks landed at Batum, but the main concentration was at Sukhum Kale. This required
a 100-mile march south through a country with poor roads. The Russians planned to hold the line of the Ingur River
which separates Abkhazia from Georgia proper. Omar crossed the Ingur on 7 November and then wasted a great deal of time, the Russians doing little. By 2 December he had reached the Tskhenis-dzqali; the rainy season had started, his camps were submerged in mud, and there was no bread. Learning of the fall of Kars, he withdrew to the Ingur. The
Russians did nothing, and he evacuated to Batum in February of the following year. The Baltic was a forgotten
theatre of the Crimean War. Popularisation of events elsewhere overshadowed the significance of this theatre, which
was close to Saint Petersburg, the Russian capital. In April 1854 an Anglo-French fleet entered the Baltic to attack
the Russian naval base of Kronstadt and the Russian fleet stationed there. In August 1854 the combined British and
French fleet returned to Kronstadt for another attempt. The outnumbered Russian Baltic Fleet confined its movements
to the areas around its fortifications. At the same time, the British and French commanders Sir Charles Napier and
Alexandre Ferdinand Parseval-Deschenes—although they led the largest fleet assembled since the Napoleonic Wars—considered
the Sveaborg fortress too well-defended to engage. Thus, shelling of the Russian batteries was limited to two attempts
in the summers of 1854 and 1855, and initially, the attacking fleets limited their actions to blockading Russian
trade in the Gulf of Finland. Naval attacks on other ports, such as the ones in the island of Hogland in the Gulf
of Finland, proved more successful. Additionally, allies conducted raids on less fortified sections of the Finnish
coast. These battles are known in Finland as the Åland War. In August 1855 a Franco-British naval force captured
and destroyed the Russian Bomarsund fortress on Åland Islands. In the same month, the Western Allied Baltic Fleet
tried to destroy the heavily defended Russian dockyards at Sveaborg outside Helsinki. More than 1,000 enemy guns tested
the strength of the fortress for two days. Despite the shelling, the sailors of the 120-gun ship Rossiya, led by
Captain Viktor Poplonsky, defended the entrance to the harbor. The Allies fired over 20,000 shells but failed to
defeat the Russian batteries. A massive new fleet of more than 350 gunboats and mortar vessels was prepared, but before the attack was launched, the war ended. Part of the Russian resistance was credited to the deployment
of newly invented blockade mines. Perhaps the most influential contributor to the development of naval mining was
a Swede resident in Russia, the inventor and civil engineer Immanuel Nobel (the father of Alfred Nobel). Immanuel
Nobel helped the Russian war effort by applying his knowledge of industrial explosives, such as nitroglycerin and
gunpowder. One account dates modern naval mining from the Crimean War: "Torpedo mines, if I may use this name given
by Fulton to self-acting mines underwater, were among the novelties attempted by the Russians in their defences about
Cronstadt and Sevastopol", as one American officer put it in 1860. Minor naval skirmishes also occurred in the Far
East, where at Petropavlovsk on the Kamchatka Peninsula a British and French Allied squadron including HMS Pique
under Rear Admiral David Price and a French force under Counter-Admiral Auguste Febvrier Despointes besieged a smaller
Russian force under Rear Admiral Yevfimy Putyatin. In September 1854, an Allied landing force was beaten back with
heavy casualties, and the Allies withdrew. The Russians escaped under the cover of snow in early 1855 after Allied
reinforcements arrived in the region. Camillo di Cavour, under orders of Victor Emmanuel II of Piedmont-Sardinia,
sent an expeditionary corps of 15,000 soldiers, commanded by General Alfonso La Marmora, to side with French and
British forces during the war.:111–12 This was an attempt at gaining the favour of the French, especially when the
issue of uniting Italy would become an important matter. The deployment of Italian troops to the Crimea, and the
gallantry shown by them in the Battle of the Chernaya (16 August 1855) and in the siege of Sevastopol, allowed the
Kingdom of Sardinia to be among the participants at the peace conference at the end of the war, where it could address
the issue of the Risorgimento to other European powers. Greece played a peripheral role in the war. When Russia attacked
the Ottoman Empire in 1853, King Otto of Greece saw an opportunity to expand north and south into Ottoman areas that
had large Greek Christian majorities. However, Greece did not coordinate its plans with Russia, did not declare war,
and received no outside military or financial support. Greece, an Orthodox nation, had considerable support in Russia,
but the Russian government decided it was too dangerous to help Greece expand its holdings.:32–40 When the Russians
invaded the Principalities, the Ottoman forces were tied down, so Greece invaded Thessaly and Epirus. To block further
Greek moves, the British and French occupied the main Greek port at Piraeus from April 1854 to February 1857, and
effectively neutralized the Greek army. Greeks, gambling on a Russian victory, incited the large-scale Epirus Revolt
of 1854 as well as uprisings in Crete. The insurrections were failures that were easily crushed by the Ottoman army.
Greece was not invited to the peace conference and made no gains out of the war.:139 The frustrated Greek leadership
blamed the King for failing to take advantage of the situation; his popularity plunged and he was later forced to
abdicate. Dissatisfaction with the conduct of the war was growing with the public in the UK and in other countries,
aggravated by reports of fiascos, especially the humiliating defeat of the Charge of the Light Brigade at the Battle
of Balaclava. On Sunday, 21 January 1855, a "snowball riot" occurred in Trafalgar Square near St Martin-in-the-Fields, in which 1,500 people gathered to protest against the war by pelting buses, cabs, and pedestrians with snowballs.
When the police intervened, the snowballs were directed at them. The riot was finally put down by troops and police
acting with truncheons. In Parliament, Tories demanded an accounting of all soldiers, cavalry and sailors sent to
the Crimea and accurate figures as to the number of casualties that had been sustained by all British armed forces
in the Crimea; they were especially concerned with the Battle of Balaclava. When Parliament passed a bill to investigate
by the vote of 305 to 148, Aberdeen said he had lost a vote of no confidence and resigned as prime minister on 30
January 1855. The veteran former Foreign Secretary Lord Palmerston became prime minister. Palmerston took a hard
line; he wanted to expand the war, foment unrest inside the Russian Empire, and permanently reduce the Russian threat
to Europe. Sweden and Prussia were willing to join the UK and France, and Russia was isolated.:400–402, 406–408 Peace
negotiations at the Congress of Paris resulted in the signing of the Treaty of Paris on 30 March 1856. In compliance
with article III, Russia restored to the Ottoman Empire the city and citadel of Kars in common with "all other parts
of the Ottoman territory of which the Russian troops were in possession". Russia ceded some land in Bessarabia at
the mouth of the Danube to Moldavia. By article IV The United Kingdom, France, Sardinia and Turkey restored to Russia
"the towns and ports of Sevastopol, Balaklava, Kamish, Eupatoria, Kerch, Jenikale, Kinburn, as well as all other
territories occupied by the allied troops". In conformity with articles XI and XIII, the Tsar and the Sultan agreed
not to establish any naval or military arsenal on the Black Sea coast. The Black Sea clauses weakened Russia, and
it no longer posed a naval threat to the Ottomans. The principalities of Moldavia and Wallachia were nominally returned
to the Ottoman Empire; in practice they became independent. The Great Powers pledged to respect the independence
and territorial integrity of the Ottoman Empire.:432–33 The Treaty of Paris stood until 1871, when France was defeated
by Prussia in the Franco-Prussian War of 1870–1871. While Prussia and several other German states united to form
a powerful German Empire, the Emperor of the French, Napoleon III, was deposed to permit the formation of a Third
French Republic. During his reign, Napoleon III, eager for the support of the United Kingdom, had opposed Russia
over the Eastern Question. Russian interference in the Ottoman Empire, however, did not in any significant manner
threaten the interests of France. Thus, France abandoned its opposition to Russia after the establishment of a republic.
Encouraged by the decision of the French, and supported by the German minister Otto von Bismarck, Russia renounced
the Black Sea clauses of the treaty agreed to in 1856. As the United Kingdom alone could not enforce the clauses,
Russia once again established a fleet in the Black Sea. Although it was Russia that was punished by the Paris Treaty,
in the long run it was Austria that lost the most from the Crimean War despite having barely taken part in it.:433
Having abandoned its alliance with Russia, Austria was diplomatically isolated following the war,:433 which contributed
to its disastrous defeats in the 1859 Franco-Austrian War that resulted in the cession of Lombardy to the Kingdom
of Sardinia, and later in the loss of the Habsburg rule of Tuscany and Modena, which meant the end of Austrian influence
in Italy. Furthermore, Russia did not do anything to assist its former ally, Austria, in the 1866 Austro-Prussian
War,:433 with its loss of Venetia and, more importantly, its influence in most German-speaking lands. With the unifications of Germany and Italy, the status of Austria as a great power was now severely questioned. It had to compromise with Hungary; the two countries shared the Danubian Empire, and Austria slowly became little more than a German satellite. With France now hostile to Germany, allied with Russia, and Russia competing with the newly renamed Austro-Hungarian
Empire for an increased role in the Balkans at the expense of the Ottoman Empire, the foundations were in place for
creating the diplomatic alliances that would lead to World War I. The Crimean War marked the ascendancy of France
to the position of pre-eminent power on the Continent,:411 the continued decline of the Ottoman Empire, and the beginning
of a decline for Tsarist Russia. As Fuller notes, "Russia had been beaten on the Crimean peninsula, and the military
feared that it would inevitably be beaten again unless steps were taken to surmount its military weakness." The Crimean
War marks the demise of the Concert of Europe, the balance of power that had dominated Europe since the Congress
of Vienna in 1815, and which had included France, Russia, Austria and the United Kingdom. This view of 'diplomatic
drift' as the cause of the war was first popularised by A. W. Kinglake, who portrayed the British as victims of newspaper
sensationalism and duplicitous French and Ottoman diplomacy. More recently, the historians Andrew Lambert and Winfried
Baumgart have argued that, first, Britain was following a geopolitical strategy in aiming to destroy a fledgling
Russian Navy which might challenge the Royal Navy for control of the seas, and second that the war was a joint European
response to a century of Russian expansion not just southwards but also into western Europe. Russia feared losing
Russian America without compensation in some future conflict, especially to the British. While Alaska attracted little
interest at the time, the population of nearby British Columbia started to increase rapidly a few years after hostilities
ended. Therefore, the Russian emperor, Alexander II, decided to sell Alaska. In 1859 the Russians offered to sell
the territory to the United States, hoping that an American presence in the region would offset the plans of Russia's greatest
regional rival, the United Kingdom. Notable documentation of the war was provided by William Howard Russell (writing
for The Times newspaper) and the photographs of Roger Fenton.:306–309 News from war correspondents reached all nations
involved in the war and kept the public citizenry of those nations better informed of the day-to-day events of the
war than had been the case in any other war to that date. The British public was very well informed regarding the
day-to-day realities of the war in the Crimea. After the French extended the telegraph to the coast of the Black
Sea during the winter of 1854, the news reached London in two days. When the British laid an underwater cable to
the Crimean peninsula in April 1855, news reached London in a few hours. The daily news reports energised public
opinion, which brought down the Aberdeen government and carried Lord Palmerston into office as prime minister.:304–11
As the memory of the "Charge of the Light Brigade" demonstrates, the war became an iconic symbol of logistical, medical
and tactical failures and mismanagement. Public opinion in the UK was outraged at the logistical and command failures
of the war; the newspapers demanded drastic reforms, and parliamentary investigations demonstrated the multiple failures
of the Army. However, the reform campaign was not well organized, and the traditional aristocratic leadership of
the Army pulled itself together and blocked all serious reforms. No one was punished. The outbreak of the Indian Rebellion of 1857 shifted attention to the heroic defense of British interests by the army, and further talk of reform
went nowhere. The demand for professionalization was, however, achieved by Florence Nightingale, who gained worldwide
attention for pioneering and publicizing modern nursing while treating the wounded.:469–71 The Crimean War also saw
the first tactical use of railways and other modern inventions, such as the electric telegraph, with the first "live"
war reporting to The Times by William Howard Russell. Some credit Russell with prompting the resignation of the sitting
British government through his reporting of the lacklustre condition of British forces deployed in Crimea. Additionally,
the telegraph reduced the independence of British overseas possessions from their commanders in London due to such
rapid communications. Newspaper readership informed public opinion in the United Kingdom and France as never before.
It was the first European war to be photographed.
A nonprofit organization (NPO, also known as a non-business entity) is an organization whose purposes are other than making
a profit. A nonprofit organization is often dedicated to furthering a particular social cause or advocating for a
particular point of view. In economic terms, a nonprofit organization uses its surplus revenues to further achieve
its purpose or mission, rather than distributing its surplus income to the organization's shareholders (or equivalents)
as profit or dividends. This is known as the distribution constraint. The decision to adopt a nonprofit legal structure
is one that will often have taxation implications, particularly where the nonprofit seeks income tax exemption, charitable
status and so on. The nonprofit landscape is highly varied, although many people have come to associate NPOs with
charitable organizations. Although charities do comprise an often high profile or visible aspect of the sector, there
are many other types of nonprofits. Overall, they tend to be either member-serving or community-serving. Member-serving
organizations include mutual societies, cooperatives, trade unions, credit unions, industry associations, sports
clubs, retired servicemen's clubs and peak bodies – organizations that benefit a particular group of people, i.e.,
the members of the organization. Typically, community-serving organizations are focused on providing services to
the community in general, either globally or locally: organizations delivering human services programs or projects,
aid and development programs, medical research, education and health services, and so on. It could be argued many
nonprofits sit across both camps, at least in terms of the impact they make. For example, the grassroots support
group that provides a lifeline to those with a particular condition or disease could be deemed to be serving both
its members (by directly supporting them) and the broader community (through the provision of a helping service for
fellow citizens). Although NPOs are permitted to generate surplus revenues, they must be retained by the organization
for its self-preservation, expansion, or plans. NPOs have controlling members or a board of directors. Many have
paid staff including management, whereas others employ unpaid volunteers and even executives who work with or without
compensation (occasionally nominal). In some countries, where a token fee is paid, it is generally used to meet legal requirements for establishing a contract between the executive and the organization. Some NPOs may also be
a charity or service organization; they may be organized as a nonprofit corporation or as a trust, a cooperative, or
they exist informally. A very similar type of organization termed a supporting organization operates like a foundation,
but they are more complicated to administer, hold more favorable tax status and are restricted in the public charities
they support. Their goal is not to be successful in terms of wealth, but in terms of giving value to the groups of people they minister to. The two major types of nonprofit organization are membership and board-only. A membership
organization elects the board and has regular meetings and the power to amend the bylaws. A board-only organization
typically has a self-selected board, and a membership whose powers are limited to those delegated to it by the board.
A board-only organization's bylaws may even state that the organization does not have any membership, although the
organization's literature may refer to its donors or service recipients as "members"; examples of such organizations
are Fairvote and the National Organization for the Reform of Marijuana Laws. The Model Nonprofit Corporation Act
imposes many complexities and requirements on membership decision-making. Accordingly, many organizations, such as
Wikimedia, have formed board-only structures. The National Association of Parliamentarians has generated concerns
about the implications of this trend for the future of openness, accountability, and understanding of public concerns
in nonprofit organizations. Specifically, they note that nonprofit organizations, unlike business corporations, are
not subject to market discipline for products and shareholder discipline of their capital; therefore, without membership
control of major decisions such as election of the board, there are few inherent safeguards against abuse. A rebuttal
to this might be that as nonprofit organizations grow and seek larger donations, the degree of scrutiny increases,
including expectations of audited financial statements. A further rebuttal might be that NPOs are constrained, by
their choice of legal structure, from financial benefit as far as distribution of profit to its members/directors
is concerned. Canada allows nonprofits to be incorporated or unincorporated. Nonprofits may incorporate
either federally, under Part II of the Canada Corporations Act, or under provincial legislation. Many of
the governing Acts for Canadian nonprofits date to the early 1900s, meaning that nonprofit legislation has not kept
pace with legislation that governs for-profit corporations, particularly with regard to corporate governance. Federally, and in some provinces (such as Ontario), incorporation is by way of Letters Patent, and any change to the Letters
Patent (even a simple name change) requires formal approval by the appropriate government, as do by-law changes.
Other provinces (such as Alberta) permit incorporation as of right, by the filing of Articles of Incorporation or
Articles of Association. During 2009, the federal government enacted new legislation repealing the Canada Corporations
Act, Part II - the Canada Not-for-Profit Corporations Act. This Act was last amended on 10 October 2011 and remained current as of 4 March 2013. It allows for incorporation as of right, by Articles of Incorporation; does away with
the ultra vires doctrine for nonprofits; establishes them as legal persons; and substantially updates the governance
provisions for nonprofits. Ontario also overhauled its legislation, adopting the Ontario Not-for-Profit Corporations
Act during 2010; pending the outcome of an anticipated election during October 2011, the new Act is expected
to be in effect as of 1 July 2013. Canada also permits a variety of charities (including public and private foundations).
Charitable status is granted by the Canada Revenue Agency (CRA) upon application by a nonprofit; charities are allowed
to issue income tax receipts to donors, must spend a certain percentage of their assets (including cash, investments
and fixed assets) and file annual reports in order to maintain their charitable status. In determining whether an
organization can become a charity, CRA applies a common law test to its stated objects and activities. In South Africa, charities issue a tax certificate when requested by donors, which the donor can use as a tax deduction. Nonprofit organisations are registered with the Companies and Intellectual Property Commission as Nonprofit Companies (NPCs), but may voluntarily register with the Nonprofit Companies Directorate. Trusts are registered by
the Master of the High Court. Section 21 companies are registered under the Companies Act. All are classified as voluntary organisations, and all must be registered with the South African Revenue Service (SARS).
A charity is a nonprofit organisation that meets stricter criteria regarding its purpose and the method in which
it makes decisions and reports its finances. For example, a charity is generally not allowed to pay its Trustees.
In England and Wales, charities may be registered with the Charity Commission. In Scotland, the Office of the Scottish
Charity Regulator serves the same function. Other organizations which are classified as nonprofit organizations elsewhere,
such as trade unions, are subject to separate regulations, and are not regarded as "charities" in the technical sense.
After a nonprofit organization has been formed at the state level, the organization may seek recognition of tax exempt
status with respect to U.S. federal income tax. That is done typically by applying to the Internal Revenue Service
(IRS), although statutory exemptions exist for limited types of nonprofit organizations. The IRS, after reviewing
the application to ensure the organization meets the conditions to be recognized as a tax exempt organization (such
as the purpose, limitations on spending, and internal safeguards for a charity), may issue an authorization letter
to the nonprofit granting it tax exempt status for income tax payment, filing, and deductibility purposes. The exemption
does not apply to other federal taxes such as employment taxes. Additionally, a tax-exempt organization must pay federal tax on income that is unrelated to its exempt purpose. Failure to maintain operations in conformity with
the laws may result in an organization losing its tax exempt status. Individual states and localities offer nonprofits
exemptions from other taxes such as sales tax or property tax. Federal tax-exempt status does not guarantee exemption
from state and local taxes, and vice versa. These exemptions generally have separate applications and their requirements
may differ from the IRS requirements. Furthermore, even a tax exempt organization may be required to file annual
financial reports (IRS Form 990) at the state and federal level. A tax exempt organization's 990 forms are required
to be made available for public scrutiny. An example of nonprofit organization in the US is Project Vote Smart. The
board of directors has ultimate control over the organization, but typically an executive director is hired. In some
cases, the board is elected by a membership, but commonly, the board of directors is self-perpetuating. In these
"board-only" organizations, board members nominate new members and vote on their fellow directors' nominations. Part VI, section A, question 7a of Form 990 asks whether the organization had "members, stockholders, or other persons who had the power to elect or appoint one or more members of the governing body". Capacity building is an ongoing problem experienced by NPOs
for a number of reasons. Most rely on external funding (government funds, grants from charitable foundations, direct
donations) to maintain their operations and changes in these sources of revenue may influence the reliability or
predictability with which the organization can hire and retain staff, sustain facilities, create programs, or maintain
tax-exempt status. For example, a university that sells research to for-profit companies may have tax exemption problems.
In addition, unreliable funding, long hours and low pay can result in employee retention problems. During 2009, the
US government acknowledged this critical need by the inclusion of the Nonprofit Capacity Building Program in the
Serve America Act. Further efforts to quantify the scope of the sector and propose policy solutions for community
benefit were included in the Nonprofit Sector and Community Solutions Act, proposed during 2010. In Australia, nonprofit
organisations include trade unions, charitable entities, co-operatives, universities and hospitals, mutual societies,
grass-roots and support groups, political parties, religious groups, incorporated associations, not-for-profit companies,
trusts and more. Furthermore, they operate across a multitude of domains and industries, from health, employment,
disability and other human services to local sporting clubs, credit unions and research institutes. A nonprofit organisation
in Australia can choose from a number of legal forms depending on the needs and activities of the organisation: co-operative,
company limited by guarantee, unincorporated association, incorporated association (by the Associations Incorporation
Act 1985) or incorporated association or council (by the Commonwealth Aboriginal Councils and Associations Act 1976).
From an academic perspective, social enterprises are for the most part considered a sub-set of the nonprofit sector, as typically they too are concerned with a purpose relating to a public good; however, they are not bound to adhere to a nonprofit legal structure, and many incorporate and operate as for-profit entities. Many nonprofit organizations
find it difficult to create consistent messaging that resonates with their various stakeholders as marketing budgets
are minimal or nonexistent. In many cases, "marketing" is a taboo word that NPOs and others prefer not to associate with such community benefit organizations. There are strategic ways in which nonprofits can leverage their access to various community stakeholders to get their name and cause recognized by the public, but it is imperative to have an outreach strategy that includes a financial plan to execute it, particularly if the organization plans to rebrand or expand its initiatives. Resource mismanagement is a particular problem with NPOs because
the employees are not accountable to anybody with a direct stake in the organization. For example, an employee may
start a new program without disclosing its complete liabilities. The employee may be rewarded for improving the NPO's
reputation, making other employees happy, and attracting new donors. Liabilities promised on the full faith and credit
of the organization but not recorded anywhere constitute accounting fraud. But even indirect liabilities negatively
affect the financial sustainability of the NPO, and the NPO will have financial problems unless strict controls are
instated. Some commentators have also argued that receiving significant funding from large for-profit corporations
can ultimately alter the NPO's functions. Competition with the public and private sectors for employees is another problem that nonprofit organizations inevitably face, particularly for management positions. There are reports of major talent shortages in the nonprofit sector today among newly graduated workers, and NPOs have for too long treated hiring as a secondary priority, which may explain the position many now find themselves in. While many established NPOs are well-funded and comparable to their public sector competitors, many more are independent and must be creative with the incentives they use to attract and retain capable staff. The initial draw for many is the wage and benefits package, though many who have been questioned after leaving an NPO have reported that stressful work environments and unrelenting workloads drove them away. Public and private sector employment
has, for the most part, been able to offer more for their employees than most nonprofit agencies throughout history.
Either in the form of higher wages, more comprehensive benefit packages, or less tedious work, the public and private
sector has enjoyed an advantage in attracting employees over NPOs. Traditionally, the NPO has attracted mission-driven
individuals who want to assist their chosen cause. Compounding the issue is that some NPOs do not operate in a manner
similar to most businesses, or only seasonally. This leads many young and driven employees to forego NPOs in favor
of more stable employment. Today, however, nonprofit organizations are adopting methods used by their competitors and finding new means to retain their employees and attract the best of the newly minted workforce. It has been noted that most nonprofits will never be able to match private-sector pay and should therefore focus on benefits packages, incentives, and pleasant work environments. Pleasant work conditions are ranked as preferable to a high salary and unrelenting work. NPOs are encouraged to pay as much as they are able and to offer a low-stress work environment with which employees can positively identify.
Other incentives that should be implemented are generous vacation allowances or flexible work hours. In the United
States, two of the wealthiest nonprofit organizations are the Bill and Melinda Gates Foundation, which has an endowment
of US$38 billion, and the Howard Hughes Medical Institute originally funded by Hughes Aircraft prior to divestiture,
which has an endowment of approximately $14.8 billion. Outside the United States, another large NPO is the British
Wellcome Trust, which is a "charity" by British usage. See: List of wealthiest foundations. Note that this assessment
excludes universities, at least a few of which have assets in the tens of billions of dollars; see, for example, the List of U.S. colleges and universities by endowment. Some NPOs that are particularly well known, often for the charitable
or social nature of their activities performed during a long period of time, include Amnesty International, Oxfam,
Rotary International, Kiwanis International, Carnegie Corporation of New York, Nourishing USA, DEMIRA Deutsche Minenräumer
(German Mine Clearers), FIDH International Federation for Human Rights, Goodwill Industries, United Way, ACORN (now
defunct), Habitat for Humanity, Teach For America, the Red Cross and Red Crescent organizations, UNESCO, IEEE, INCOSE,
World Wide Fund for Nature, Heifer International, Translators Without Borders and SOS Children's Villages. In the
traditional domain noted in RFC 1591, .org is for "organizations that didn't fit anywhere else" in the naming system,
which implies that it is the proper category for non-commercial organizations if they are not governmental, educational,
or one of the other types with a specific TLD. It is not designated specifically for charitable organizations or
any specific organizational or tax-law status, however; it encompasses anything that is not classifiable as another
category. Currently, no restrictions are enforced on registration of .com or .org, so one can find organizations
of all sorts in either of these domains, as well as other top-level domains including newer, more specific ones which
may apply to particular sorts of organizations such as .museum for museums or .coop for cooperatives. Organizations
might also register by the appropriate country code top-level domain for their country. Instead of being defined
by "non" words, some organizations are suggesting new, positive-sounding terminology to describe the sector. The
term "civil society organization" (CSO) has been used by a growing number of organizations, such as the Center for
the Study of Global Governance. The term "citizen sector organization" (CSO) has also been advocated to describe
the sector – as one of citizens, for citizens – by organizations such as Ashoka: Innovators for the Public. A more
broadly applicable term, "Social Benefit Organization" (SBO) has been advocated for by organizations such as MiniDonations.
Advocates argue that these terms describe the sector in its own terms, without relying on terminology used for the
government or business sectors. However, use of terminology by a nonprofit of self-descriptive language that is not
legally compliant risks confusing the public about nonprofit abilities, capabilities and limitations.
Literature consists of written productions, often restricted to those deemed to have artistic or intellectual value. Its
Latin root literatura/litteratura (derived itself from littera, letter or handwriting) was used to refer to all written
accounts, but intertwined with the Roman concept of cultura: learning or cultivation. Literature often uses language
differently than ordinary language (see literariness). Literature can be classified according to whether it is fiction
or non-fiction and whether it is poetry or prose; it can be further distinguished according to major forms such as
the novel, short story or drama; and works are often categorised according to historical periods or their adherence
to certain aesthetic features or expectations (genre). Definitions of literature have varied over time; it is a "culturally
relative definition". In Western Europe prior to the eighteenth century, literature as a term indicated all books
and writing. A more restricted sense of the term emerged during the Romantic period, in which it began to demarcate
"imaginative" literature. Contemporary debates over what constitutes literature can be seen as returning to the older,
more inclusive notion of what constitutes literature. Cultural studies, for instance, takes as its subject of analysis
both popular and minority genres, in addition to canonical works. The value judgement definition of literature considers
it to exclusively include writing that possesses high quality or distinction, forming part of the so-called belles-lettres
('fine writing') tradition. This is the definition used in the Encyclopædia Britannica Eleventh Edition (1910–11)
when it classifies literature as "the best expression of the best thought reduced to writing." However, this has
the result that there is no objective definition of what constitutes "literature"; anything can be literature, and
anything which is universally regarded as literature has the potential to be excluded, since value-judgements can
change over time. The formalist definition is that the history of "literature" foregrounds poetic effects; it is
the "literariness" or "poeticity" of literature that distinguishes it from ordinary speech or other kinds of writing
(e.g., journalism). Jim Meyer considers this a useful characteristic in explaining the use of the term to mean published
material in a particular field (e.g., "scientific literature"), as such writing must use language according to particular
standards. The problem with the formalist definition is that in order to say that literature deviates from ordinary
uses of language, those uses must first be identified; this is difficult because "ordinary language" is an unstable
category, differing according to social categories and across history. Poetry is a form of literary art which uses
aesthetic and rhythmic qualities of language to evoke meanings in addition to, or in place of, prosaic ostensible
meaning. Poetry has traditionally been distinguished from prose by its being set in verse; prose is cast in sentences,
poetry in lines; the syntax of prose is dictated by meaning, whereas that of poetry is held across metre or the visual
aspects of the poem. Prior to the nineteenth century, poetry was commonly understood to be something set in metrical
lines; accordingly, in 1658 a definition of poetry is "any kind of subject consisting of Rythm or Verses". Possibly
as a result of Aristotle's influence (his Poetics), "poetry" before the nineteenth century was usually less a technical
designation for verse than a normative category of fictive or rhetorical art. As a form it may pre-date literacy,
with the earliest works being composed within and sustained by an oral tradition; hence it constitutes the earliest
example of literature. Drama is literature intended for performance. The form is often combined with music and dance,
as in opera and musical theatre. A play is a subset of this form, referring to the written dramatic work of a playwright
that is intended for performance in a theatre; it comprises chiefly dialogue between characters, and usually aims
at dramatic or theatrical performance rather than at reading. A closet drama, by contrast, refers to a play written
to be read rather than to be performed; hence, it is intended that the meaning of such a work can be realized fully
on the page. Nearly all drama took verse form until comparatively recently. Greek drama exemplifies the earliest
form of drama of which we have substantial knowledge. Tragedy, as a dramatic genre, developed as a performance associated
with religious and civic festivals, typically enacting or developing upon well-known historical or mythological themes.
Tragedies generally presented very serious themes. With the advent of newer technologies, scripts written for non-stage media have been added to this form. The 1938 radio broadcast of The War of the Worlds marked the advent of literature written for radio broadcast, and many works of drama have been adapted for film or television. Conversely, television, film, and radio literature have been adapted to printed or electronic media. “The roots of all our modern academic fields can be
found within the pages of literature.” Literature in all its forms can be seen as a written record: whether a work is factual or fictional, it is still possible to discern facts from elements such as characters’ actions and words, or from the author’s style and intent. The plot serves more than entertainment; within it lies information about economics, psychology, science, religion, politics, culture, and social structure. Studying and analyzing literature is therefore important for learning about history. Through the study of past literature, we can learn how society has evolved and what the societal norms were in each period. This can also help us understand references in more modern literature, because authors often allude to Greek mythology, old religious texts, and historical events. Not only is there literature written on each of these topics and on how they have evolved throughout history (a history of economics, for example, or a book on evolution and science), but we can also learn about them through fictional works. Authors often incorporate historical moments into their works, as when Lord Byron writes about the Spanish and the French in "Childe Harold’s Pilgrimage: Canto I" and expresses his opinions through his character Childe Harold. Through literature we are able to continuously uncover new information about history. All academic fields have roots in literature: information became easier to pass down from generation to generation once it was written down. Eventually everything was recorded, from home remedies, cures for illness, and how to build shelter to traditions and religious practices. From there, people were able to study the written record, improve on ideas, and further knowledge, allowing academic fields such as medicine and the trades to emerge. In much the same way, the literature we study today continues to be updated as we evolve and learn more. As a more urban culture developed, academies provided a means of
transmission for speculative and philosophical literature in early civilizations, resulting in the prevalence of
literature in Ancient China, Ancient India, Persia and Ancient Greece and Rome. Many works of earlier periods, even
in narrative form, had a covert moral or didactic purpose, such as the Sanskrit Panchatantra or the Metamorphoses
of Ovid. Drama and satire also developed as urban culture provided a larger public audience, and later readership,
for literary production. Lyric poetry (as opposed to epic poetry) was often the speciality of courts and aristocratic
circles, particularly in East Asia where songs were collected by the Chinese aristocracy as poems, the most notable
being the Shijing or Book of Songs. Over a long period, the poetry of popular pre-literate balladry and song interpenetrated
and eventually influenced poetry in the literary medium. In ancient China, early literature was primarily focused
on philosophy, historiography, military science, agriculture, and poetry. China, the origin of modern paper making
and woodblock printing, produced one of the world's first print cultures. Much of Chinese literature originates with
the Hundred Schools of Thought period that occurred during the Eastern Zhou Dynasty (770–256 BCE). The most important
of these include the Classics of Confucianism, of Daoism, of Mohism, of Legalism, as well as works of military science
(e.g. Sun Tzu's The Art of War) and Chinese history (e.g. Sima Qian's Records of the Grand Historian). Ancient Chinese
literature had a heavy emphasis on historiography, with often very detailed court records. An exemplary piece of
narrative history of ancient China was the Zuo Zhuan, which was compiled no later than 389 BCE, and attributed to
the blind 5th century BCE historian Zuo Qiuming. In ancient India, literature originated from stories that were originally
orally transmitted. Early genres included drama, fables, sutras and epic poetry. Sanskrit literature begins with
the Vedas, dating back to 1500–1000 BCE, and continues with the Sanskrit Epics of Iron Age India. The Vedas are among
the oldest sacred texts. The Samhitas (vedic collections) date to roughly 1500–1000 BCE, and the "circum-Vedic" texts,
as well as the redaction of the Samhitas, date to c. 1000-500 BCE, resulting in a Vedic period, spanning the mid
2nd to mid 1st millennium BCE, or the Late Bronze Age and the Iron Age. The period between approximately the 6th
to 1st centuries BC saw the composition and redaction of the two most influential Indian epics, the Mahabharata and
the Ramayana, with subsequent redaction progressing down to the 4th century AD. In ancient Greece, the epics of Homer, who wrote the Iliad and the Odyssey, and Hesiod, who wrote Works and Days and Theogony, are among the earliest and most influential works of Ancient Greek literature. Classical Greek genres included philosophy, poetry, historiography,
comedies and dramas. Plato and Aristotle authored philosophical texts that are the foundation of Western philosophy,
Sappho and Pindar were influential lyric poets, and Herodotus and Thucydides were early Greek historians. Although
drama was popular in Ancient Greece, of the hundreds of tragedies written and performed during the classical age,
only a limited number of plays by three authors still exist: Aeschylus, Sophocles, and Euripides. The plays of Aristophanes
provide the only real examples of a genre of comic drama known as Old Comedy, the earliest form of Greek Comedy,
and are in fact used to define the genre. Roman histories and biographies anticipated the extensive mediaeval literature
of lives of saints and miraculous chronicles, but the most characteristic form of the Middle Ages was the romance,
an adventurous and sometimes magical narrative with strong popular appeal. Controversial, religious, political and
instructional literature proliferated during the Renaissance as a result of the invention of printing, while the
mediaeval romance developed into a more character-based and psychological form of narrative, the novel, of which
early and important examples are the Chinese Monkey and the German Faust books. In the Age of Reason philosophical
tracts and speculations on history and human nature integrated literature with social and political developments.
The inevitable reaction was the explosion of Romanticism in the later 18th century which reclaimed the imaginative
and fantastical bias of old romances and folk-literature and asserted the primacy of individual experience and emotion.
But as the 19th century went on, European fiction evolved towards realism and naturalism, the meticulous documentation
of real life and social trends. Much of the output of naturalism was implicitly polemical, and influenced social
and political change, but 20th century fiction and drama moved back towards the subjective, emphasising unconscious
motivations and social and environmental pressures on the individual. Writers such as Proust, Eliot, Joyce, Kafka
and Pirandello exemplify the trend of documenting internal rather than external realities. Genre fiction also showed
it could question reality in its 20th century forms, in spite of its fixed formulas, through the enquiries of the
skeptical detective and the alternative realities of science fiction. The separation of "mainstream" and "genre"
forms (including journalism) continued to blur during the period up to our own times. William Burroughs, in his early
works, and Hunter S. Thompson expanded documentary reporting into strong subjective statements after the Second World War, and post-modern critics have disparaged the idea of objective realism in general. As advances and specialization
have made new scientific research inaccessible to most audiences, the "literary" nature of science writing has become
less pronounced over the last two centuries. Now, science appears mostly in journals. Scientific works of Aristotle,
Copernicus, and Newton still exhibit great value, but since the science in them has largely become outdated, they
no longer serve for scientific instruction. Yet, they remain too technical to sit well in most programmes of literary
study. Outside of "history of science" programmes, students rarely read such works. Philosophy has become an increasingly
academic discipline. Its practitioners lament this situation more than do those of the sciences; nonetheless, most
new philosophical work appears in academic journals. Major philosophers through history—Plato, Aristotle, Socrates,
Augustine, Descartes, Kierkegaard, Nietzsche—have become as canonical as any writers. Some recent philosophy works
are argued to merit the title "literature", but much of it does not, and some areas, such as logic, have become extremely
technical to a degree similar to that of mathematics. Literature allows readers to access intimate emotional aspects
of a person’s character that would not be obvious otherwise. It benefits the psychological development and understanding
of the reader. For example, it allows a person to access emotional states from which the person has distanced himself
or herself. An entry by D. Mitchell in The English Journal explains how the author used young adult literature to re-experience the emotional psychology she had known as a child, which she describes as a state of “wonder”. Hogan also explains that the time and emotional investment a person devotes to understanding a character’s situation allows literature to be considered “ecological[ly] valid in the study of emotion”. This can be understood in the sense that literature unites a large community by provoking universal emotions. It also allows readers to access cultural aspects they have not been exposed to, thus provoking new emotional experiences. Authors choose literary devices according to the psychological emotion they are attempting to describe; thus certain literary devices are more emotionally effective than others. Maslow’s Third Force Psychology theory even
allows literary analysts to critically understand how characters reflect the culture and the history in which they
are contextualized. It also allows analysts to understand the author’s intended message and to understand the author’s
psychology. The theory suggests that human beings possess a nature within them that demonstrates their true “self”
and it suggests that the fulfillment of this nature is the reason for living. It also suggests that neurological
development hinders actualizing the nature because a person becomes estranged from his or her true self. Therefore,
literary devices reflect a character’s and an author’s natural self. In his Third Force Psychology and the Study of Literature, Paris argues that “D. H. Lawrence’s ‘pristine unconscious’ is a metaphor for the real self”. Thus literature is a valuable tool that allows readers to develop and apply critical reasoning to the nature of emotions. A significant
portion of historical writing ranks as literature, particularly the genre known as creative nonfiction, as can a
great deal of journalism, such as literary journalism. However, these areas have become extremely large, and often
have a primarily utilitarian purpose: to record data or convey immediate information. As a result, the writing in these fields often lacks a literary quality, although at its better moments it attains one. Major
"literary" historians include Herodotus, Thucydides and Procopius, all of whom count as canonical literary figures.
Law offers more ambiguity. Some writings of Plato and Aristotle, the law tables of Hammurabi of Babylon, or even
the early parts of the Bible could be seen as legal literature. Roman civil law as codified in the Corpus Juris Civilis
during the reign of Justinian I of the Byzantine Empire has a reputation as significant literature. The founding
documents of many countries, including constitutions and law codes, can count as literature; however, most legal writings rarely exhibit much literary merit. A literary technique
or literary device can be used by authors in order to enhance the written framework of a piece of literature, and
produce specific effects. Literary techniques encompass a wide range of approaches to crafting a work: whether a
work is narrated in first-person or from another perspective, whether to use a traditional linear narrative or a
nonlinear narrative, or the choice of literary genre, are all examples of literary technique. They may indicate to
a reader that there is a familiar structure and presentation to a work, such as a conventional murder-mystery novel;
or, the author may choose to experiment with their technique to surprise the reader.
Ibn Sina created an extensive corpus of works during what is commonly known as the Islamic Golden Age, in which the translations
of Greco-Roman, Persian, and Indian texts were studied extensively. Greco-Roman (Mid- and Neo-Platonic, and Aristotelian)
texts translated by the Kindi school were commented on, redacted, and developed substantially by Islamic intellectuals,
who also built upon Persian and Indian mathematical systems, astronomy, algebra, trigonometry and medicine. The Samanid
dynasty in the eastern part of Persia, Greater Khorasan and Central Asia as well as the Buyid dynasty in the western
part of Persia and Iraq provided a thriving atmosphere for scholarly and cultural development. Under the Samanids,
Bukhara rivaled Baghdad as a cultural capital of the Islamic world. The study of the Quran and the Hadith thrived
in such a scholarly atmosphere. Philosophy, Fiqh and theology (kalaam) were further developed, most noticeably by
Avicenna and his opponents. Al-Razi and Al-Farabi had provided methodology and knowledge in medicine and philosophy.
Avicenna had access to the great libraries of Balkh, Khwarezm, Gorgan, Rey, Isfahan and Hamadan. Various texts (such
as the 'Ahd with Bahmanyar) show that he debated philosophical points with the greatest scholars of the time. Aruzi
Samarqandi describes how before Avicenna left Khwarezm he had met Al-Biruni (a famous scientist and astronomer),
Abu Nasr Iraqi (a renowned mathematician), Abu Sahl Masihi (a respected philosopher) and Abu al-Khayr Khammar (a
great physician). Avicenna was born c. 980 in Afšana, a village near Bukhara (in present-day Uzbekistan), the capital
of the Samanids, a Persian dynasty in Central Asia and Greater Khorasan. His mother, named Setareh, was from Bukhara;
his father, Abdullah, was a respected Ismaili scholar from Balkh, an important town of the Samanid Empire, in what
is today Balkh Province, Afghanistan, although this is not universally agreed upon. His father worked in the government of the Samanids, a Sunni regional power, in the village of Kharmasain. After five years, his younger brother, Mahmoud, was born. Avicenna first began to learn the Quran and literature, and by the time he was ten years old he had essentially learned all of them. A number of theories have been proposed regarding Avicenna's madhab (school of thought
within Islamic jurisprudence). Medieval historian Ẓahīr al-dīn al-Bayhaqī (d. 1169) considered Avicenna to be a follower
of the Brethren of Purity. On the other hand, Dimitri Gutas along with Aisha Khan and Jules J. Janssens demonstrated
that Avicenna was a Sunni Hanafi. However, the 14th century Shia faqih Nurullah Shushtari, according to Seyyed Hossein Nasr, maintained that he was most likely a Twelver Shia. Conversely, Sharaf Khorasani, citing Avicenna's rejection of an invitation to the court of the Sunni governor Sultan Mahmud Ghaznavi, believes that Avicenna was an Ismaili. Similar disagreements exist over the background of Avicenna's family: whereas some writers considered them Sunni, some more recent writers contended that they were Shia. According to his autobiography, Avicenna had memorised the entire Quran
by the age of 10. He learned Indian arithmetic from an Indian greengrocer, Mahmoud Massahi, and he began to learn
more from a wandering scholar who gained a livelihood by curing the sick and teaching the young. He also studied
Fiqh (Islamic jurisprudence) under the Sunni Hanafi scholar Ismail al-Zahid. Avicenna was taught some philosophy from books such as Porphyry's Introduction (Isagoge), Euclid's Elements, and Ptolemy's Almagest by an unpopular philosopher, Abu Abdullah Nateli, who claimed to be a philosopher. As a teenager, he was greatly troubled by the Metaphysics
of Aristotle, which he could not understand until he read al-Farabi's commentary on the work. For the next year and
a half, he studied philosophy, in which he encountered greater obstacles. In such moments of baffled inquiry, he
would leave his books, perform the requisite ablutions, then go to the mosque, and continue in prayer till light
broke on his difficulties. Deep into the night, he would continue his studies, and even in his dreams problems would
pursue him and work out their solution. Forty times, it is said, he read through the Metaphysics of Aristotle, till
the words were imprinted on his memory; but their meaning was hopelessly obscure, until one day they found illumination,
from the little commentary by Farabi, which he bought at a bookstall for the small sum of three dirhams. So great
was his joy at the discovery, made with the help of a work from which he had expected only mystery, that he hastened
to return thanks to God, and bestowed alms upon the poor. He turned to medicine at 16, and not only learned medical
theory, but also by gratuitous attendance of the sick had, according to his own account, discovered new methods of
treatment. The teenager achieved full status as a qualified physician at age 18, and found that "Medicine is no hard
and thorny science, like mathematics and metaphysics, so I soon made great progress; I became an excellent doctor
and began to treat patients, using approved remedies." The youthful physician's fame spread quickly, and he treated
many patients without asking for payment. Ibn Sina's first appointment was that of physician to the emir, Nuh II,
who owed him his recovery from a dangerous illness (997). Ibn Sina's chief reward for this service was access to
the royal library of the Samanids, well-known patrons of scholarship and scholars. When the library was destroyed
by fire not long after, the enemies of Ibn Sina accused him of burning it, in order for ever to conceal the sources
of his knowledge. Meanwhile, he assisted his father in his financial labors, but still found time to write some of
his earliest works. When Ibn Sina was 22 years old, he lost his father. The Samanid dynasty came to its end in December
1004. Ibn Sina seems to have declined the offers of Mahmud of Ghazni, and proceeded westwards to Urgench in modern
Turkmenistan, where the vizier, regarded as a friend of scholars, gave him a small monthly stipend. The pay was small,
however, so Ibn Sina wandered from place to place through the districts of Nishapur and Merv to the borders of Khorasan,
seeking an opening for his talents. Qabus, the generous ruler of Tabaristan, himself a poet and a scholar, with whom
Ibn Sina had expected to find asylum, was at about that date (1012) starved to death by his troops, who had revolted.
Ibn Sina himself was at this time stricken by a severe illness. Finally, at Gorgan, near the Caspian Sea, Ibn Sina
met with a friend, who bought a dwelling near his own house in which Ibn Sina lectured on logic and astronomy. Several
of Ibn Sina's treatises were written for this patron; and the commencement of his Canon of Medicine also dates from
his stay in Hyrcania. Ibn Sina subsequently settled at Rey, in the vicinity of modern Tehran, the home town of Rhazes;
where Majd Addaula, a son of the last Buwayhid emir, was nominal ruler under the regency of his mother (Seyyedeh
Khatun). About thirty of Ibn Sina's shorter works are said to have been composed in Rey. Constant feuds which raged
between the regent and her second son, Shams al-Daula, however, compelled the scholar to quit the place. After a
brief sojourn at Qazvin he passed southwards to Hamadãn where Shams al-Daula, another Buwayhid emir, had established
himself. At first, Ibn Sina entered into the service of a high-born lady; but the emir, hearing of his arrival, called
him in as medical attendant, and sent him back with presents to his dwelling. Ibn Sina was even raised to the office
of vizier. Soon afterwards, however, the emir decreed that he should be banished from the country. Ibn Sina, however, remained hidden for forty
days in sheikh Ahmed Fadhel's house, until a fresh attack of illness induced the emir to restore him to his post.
Even during this perturbed time, Ibn Sina persevered with his studies and teaching. Every evening, extracts from
his great works, the Canon and the Sanatio, were dictated and explained to his pupils. On the death of the emir,
Ibn Sina ceased to be vizier and hid himself in the house of an apothecary, where, with intense assiduity, he continued
the composition of his works. Meanwhile, he had written to Abu Ya'far, the prefect of the dynamic city of Isfahan,
offering his services. The new emir of Hamadan, hearing of this correspondence and discovering where Ibn Sina was
hiding, incarcerated him in a fortress. War meanwhile continued between the rulers of Isfahan and Hamadãn; in 1024
the former captured Hamadan and its towns, expelling the Tajik mercenaries. When the storm had passed, Ibn Sina returned
with the emir to Hamadan, and carried on his literary labors. Later, however, accompanied by his brother, a favorite
pupil, and two slaves, Ibn Sina escaped from the city in the dress of a Sufi ascetic. After a perilous journey, they
reached Isfahan, receiving an honorable welcome from the prince. Ibn Sīnā wrote extensively on early Islamic philosophy,
especially the subjects logic, ethics, and metaphysics, including treatises named Logic and Metaphysics. Most of
his works were written in Arabic – then the language of science in the Middle East – and some in Persian. Of linguistic
significance even to this day are a few books that he wrote in nearly pure Persian language (particularly the Danishnamah-yi
'Ala', Philosophy for Ala' ad-Dawla'). Ibn Sīnā's commentaries on Aristotle often criticized the philosopher, encouraging a lively debate in the spirit of ijtihad. His Book of Healing became available in Europe in partial
Latin translation some fifty years after its composition, under the title Sufficientia, and some authors have identified
a "Latin Avicennism" as flourishing for some time, paralleling the more influential Latin Averroism, but suppressed
by the Parisian decrees of 1210 and 1215. Avicenna's psychology and theory of knowledge influenced William of Auvergne,
Bishop of Paris and Albertus Magnus, while his metaphysics had an impact on the thought of Thomas Aquinas. Early
Islamic philosophy and Islamic metaphysics, imbued as it is with Islamic theology, distinguishes more clearly than
Aristotelianism between essence and existence. Whereas existence is the domain of the contingent and the accidental,
essence endures within a being beyond the accidental. The philosophy of Ibn Sīnā, particularly that part relating
to metaphysics, owes much to al-Farabi. The search for a definitive Islamic philosophy separate from Occasionalism
can be seen in what is left of his work. Following al-Farabi's lead, Avicenna initiated a full-fledged inquiry into
the question of being, in which he distinguished between essence (Mahiat) and existence (Wujud). He argued that the
fact of existence can not be inferred from or accounted for by the essence of existing things, and that form and
matter by themselves cannot interact and originate the movement of the universe or the progressive actualization
of existing things. Existence must, therefore, be due to an agent-cause that necessitates, imparts, gives, or adds
existence to an essence. To do so, the cause must be an existing thing and coexist with its effect. Avicenna's consideration
of the essence-attributes question may be elucidated in terms of his ontological analysis of the modalities of being;
namely impossibility, contingency, and necessity. Avicenna argued that the impossible being is that which cannot
exist, while the contingent in itself (mumkin bi-dhatihi) has the potentiality to be or not to be without entailing
a contradiction. When actualized, the contingent becomes a 'necessary existent due to what is other than itself'
(wajib al-wujud bi-ghayrihi). Thus, contingency-in-itself is potential beingness that could eventually be actualized
by an external cause other than itself. The metaphysical structures of necessity and contingency are different. Necessary
being due to itself (wajib al-wujud bi-dhatihi) is true in itself, while the contingent being is 'false in itself'
and 'true due to something else other than itself'. The necessary is the source of its own being without borrowed
existence. It is what always exists. The Necessary exists 'due-to-Its-Self', and has no quiddity/essence (mahiyya)
other than existence (wujud). Furthermore, It is 'One' (wahid ahad) since there cannot be more than one 'Necessary-Existent-due-to-Itself'
without differentia (fasl) to distinguish them from each other. Yet, to require differentia entails that they exist
'due-to-themselves' as well as 'due to what is other than themselves'; and this is contradictory. However, if no
differentia distinguishes them from each other, then there is no sense in which these 'Existents' are not one and
the same. Avicenna adds that the 'Necessary-Existent-due-to-Itself' has no genus (jins), nor a definition (hadd),
nor a counterpart (nadd), nor an opposite (did), and is detached (bari) from matter (madda), quality (kayf), quantity
(kam), place (ayn), situation (wad), and time (waqt). Avicenna was a devout Muslim and sought to reconcile rational
philosophy with Islamic theology. His aim was to prove the existence of God and His creation of the world scientifically
and through reason and logic. Avicenna's views on Islamic theology (and philosophy) were enormously influential,
forming part of the core of the curriculum at Islamic religious schools until the 19th century. Avicenna wrote a
number of short treatises dealing with Islamic theology. These included treatises on the prophets (whom he viewed
as "inspired philosophers"), and also on various scientific and philosophical interpretations of the Quran, such
as how Quranic cosmology corresponds to his own philosophical system. In general these treatises linked his philosophical
writings to Islamic religious ideas; for example, the body's afterlife. There are occasional brief hints and allusions
in his longer works however that Avicenna considered philosophy as the only sensible way to distinguish real prophecy
from illusion. He did not state this more clearly because of the political implications of such a theory, if prophecy
could be questioned, and also because most of the time he was writing shorter works which concentrated on explaining
his theories on philosophy and theology clearly, without digressing to consider epistemological matters which could
only be properly considered by other philosophers. Later interpretations of Avicenna's philosophy split into three
different schools; those (such as al-Tusi) who continued to apply his philosophy as a system to interpret later political
events and scientific advances; those (such as al-Razi) who considered Avicenna's theological works in isolation
from his wider philosophical concerns; and those (such as al-Ghazali) who selectively used parts of his philosophy
to support their own attempts to gain greater spiritual insights through a variety of mystical means. It was the
theological interpretation championed by those such as al-Razi which eventually came to predominate in the madrasahs.
While he was imprisoned in the castle of Fardajan near Hamadhan, Avicenna wrote his famous "Floating Man" – literally
falling man – thought experiment to demonstrate human self-awareness and the substantiality and immateriality of
the soul. Avicenna believed his "Floating Man" thought experiment demonstrated that the soul is a substance, and
claimed humans cannot doubt their own consciousness, even in a situation that prevents all sensory data input. The
thought experiment told its readers to imagine themselves created all at once while suspended in the air, isolated
from all sensations, which includes no sensory contact with even their own bodies. He argued that, in this scenario,
one would still have self-consciousness. Because it is conceivable that a person, suspended in air while cut off
from sense experience, would still be capable of determining his own existence, the thought experiment points to
the conclusions that the soul is a perfection, independent of the body, and an immaterial substance. The conceivability
of this "Floating Man" indicates that the soul is perceived intellectually, which entails the soul's separateness
from the body. Avicenna referred to the living human intelligence, particularly the active intellect, which he believed
to be the hypostasis by which God communicates truth to the human mind and imparts order and intelligibility to nature.
However, Avicenna posited the brain as the place where reason
interacts with sensation. Sensation prepares the soul to receive rational concepts from the universal Agent Intellect.
The first knowledge of the flying person would be "I am," affirming his or her essence. That essence could not be
the body, obviously, as the flying person has no sensation. Thus, the knowledge that "I am" is the core of a human
being: the soul exists and is self-aware. Avicenna thus concluded that the idea of the self is not logically dependent
on any physical thing, and that the soul should not be seen in relative terms, but as a primary given, a substance.
The body is unnecessary; in relation to it, the soul is its perfection. In itself, the soul is an immaterial substance.
In the Al-Burhan (On Demonstration) section of The Book of Healing, Avicenna discussed the philosophy of science
and described an early scientific method of inquiry. He discusses Aristotle's Posterior Analytics and significantly
diverged from it on several points. Avicenna discussed the issue of a proper methodology for scientific inquiry and
the question of "How does one acquire the first principles of a science?" He asked how a scientist would arrive at
"the initial axioms or hypotheses of a deductive science without inferring them from some more basic premises?" He
explains that the ideal situation is when one grasps that a "relation holds between the terms, which would allow
for absolute, universal certainty." Avicenna then adds two further methods for arriving at the first principles:
the ancient Aristotelian method of induction (istiqra), and the method of examination and experimentation (tajriba).
Avicenna criticized Aristotelian induction, arguing that "it does not lead to the absolute, universal, and certain
premises that it purports to provide." In its place, he develops a "method of experimentation as a means for scientific
inquiry." An early formal system of temporal logic was studied by Avicenna. Although he did not develop a real theory
of temporal propositions, he did study the relationship between temporalis and the implication. Avicenna's work was
further developed by Najm al-Dīn al-Qazwīnī al-Kātibī and became the dominant system of Islamic logic until modern
times. Avicennian logic also influenced several early European logicians such as Albertus Magnus and William of Ockham.
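As a modern toy rendering rather than Avicenna's own formalism (the function name and the day/light predicates are illustrative assumptions), the kind of temporally qualified implication he studied, "whenever p holds, q holds", can be checked over a finite sequence of states:

```python
def always_implies(p, q, states):
    """True iff q holds at every state where p holds."""
    return all(q(s) for s in states if p(s))

# States as hours of a day; "daytime implies light" as a toy temporal claim
hours = range(24)
is_day = lambda h: 6 <= h < 18
is_light = lambda h: 6 <= h < 18

print(always_implies(is_day, is_light, hours))          # True
print(always_implies(is_day, lambda h: h < 12, hours))  # False: fails after noon
```

The point is only that the implication is quantified over times, which is what distinguishes a temporal proposition from a plain material conditional evaluated at a single moment.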
Avicenna endorsed the law of noncontradiction proposed by Aristotle, that a fact could not be both true and false
at the same time and in the same sense of the terminology used. He stated, "Anyone who denies the law of noncontradiction
should be beaten and burned until he admits that to be beaten is not the same as not to be beaten, and to be burned
is not the same as not to be burned." Avicenna's legacy in classical psychology is primarily embodied in the Kitab
al-nafs parts of his Kitab al-shifa (The Book of Healing) and Kitab al-najat (The Book of Deliverance). These were
known in Latin under the title De Anima (treatises "on the soul"). Notably, Avicenna develops what is called the "flying man" argument in the Psychology of The Cure I.1.7 as a defense of the argument that the
soul is without quantitative extension, which has an affinity with Descartes's cogito argument (or what phenomenology
designates as a form of an "epoche"). Avicenna's psychology requires that connection between the body and soul be
strong enough to ensure the soul's individuation, but weak enough to allow for its immortality. Avicenna grounds
his psychology on physiology, which means his account of the soul is one that deals almost entirely with the natural
science of the body and its abilities of perception. Thus, the philosopher's connection between the soul and body
is explained almost entirely by his understanding of perception; in this way, bodily perception interrelates with
the immaterial human intellect. In sense perception, the perceiver senses the form of the object; first, by perceiving
features of the object by our external senses. This sensory information is supplied to the internal senses, which
merge all the pieces into a whole, unified conscious experience. This process of perception and abstraction is the
nexus of the soul and body, for the material body may only perceive material objects, while the immaterial soul may
only receive the immaterial, universal forms. The way the soul and body interact in the final abstraction of the
universal from the concrete particular is the key to their relationship and interaction, which takes place in the
physical body. Avicenna's astronomical writings had some influence on later writers, although in general his work
could be considered less developed than that of Alhazen or Al-Biruni. One important feature of his writing is that he considers
mathematical astronomy as a separate discipline to astrology. He criticized Aristotle's view of the stars receiving
their light from the Sun, stating that the stars are self-luminous, and believed that the planets are also self-luminous.
He claimed to have observed Venus as a spot on the Sun. This is possible, as there was a transit on May 24, 1032,
but Avicenna did not give the date of his observation, and modern scholars have questioned whether he could have
observed the transit from his location at that time; he may have mistaken a sunspot for Venus. He used his transit
observation to help establish that Venus was, at least sometimes, below the Sun in Ptolemaic cosmology, i.e. the
sphere of Venus comes before the sphere of the Sun when moving out from the Earth in the prevailing geocentric model.
Of the alchemical works attributed to Ibn Sina, the Liber Aboali Abincine de Anima in arte Alchemiae was the most influential, shaping later medieval chemists and alchemists such as Vincent of Beauvais. However, Anawati argues (following Ruska) that the de Anima is a fake
by a Spanish author. Similarly the Declaratio is believed not to be actually by Avicenna. The third work (The Book
of Minerals) is agreed to be Avicenna's writing, adapted from the Kitab al-Shifa (Book of the Remedy). Ibn Sina classified
minerals into stones, fusible substances, sulfurs, and salts, building on the ideas of Aristotle and Jabir. The epistola
de Re recta is somewhat less sceptical of alchemy; Anawati argues that it is by Avicenna, but written earlier in
his career when he had not yet firmly decided that transmutation was impossible. George Sarton, the author of The
History of Science, described Ibn Sīnā as "one of the greatest thinkers and medical scholars in history" and called
him "the most famous scientist of Islam and one of the most famous of all races, places, and times." He was one of
the Islamic world's leading writers in the field of medicine. Along with Rhazes, Abulcasis, Ibn al-Nafis, and al-Ibadi,
Ibn Sīnā is considered an important compiler of early Muslim medicine. He is remembered in the Western history of
medicine as a major historical figure who made important contributions to medicine and the European Renaissance.
His medical texts were unusual in that where controversy existed between Galen and Aristotle's views on medical matters
(such as anatomy), he preferred to side with Aristotle, where necessary updating Aristotle's position to take into
account post-Aristotelian advances in anatomical knowledge. Aristotle's dominant intellectual influence among medieval
European scholars meant that Avicenna's linking of Galen's medical writings with Aristotle's philosophical writings
in the Canon of Medicine (along with its comprehensive and logical organisation of knowledge) significantly increased
Avicenna's importance in medieval Europe in comparison to other Islamic writers on medicine. His influence following
translation of the Canon was such that from the early fourteenth to the mid-sixteenth centuries he was ranked with
Hippocrates and Galen as one of the acknowledged authorities, princeps medicorum ("prince of physicians"). In modern
Iran, he is considered a national icon, and is often regarded as one of the greatest Persians to have ever lived.
A monument was erected outside the Bukhara museum. The Avicenna Mausoleum and Museum in Hamadan was
built in 1952. Bu-Ali Sina University in Hamadan (Iran), Avicenna Research Institute in Tehran (Iran), the ibn Sīnā
Tajik State Medical University in Dushanbe, Ibn Sina Academy of Medieval Medicine and Sciences at Aligarh, India,
Avicenna School in Karachi and Avicenna Medical College in Lahore, Pakistan, Ibne Sina Balkh Medical School in his native province of Balkh in Afghanistan, Ibni Sina Faculty of Medicine of Ankara University, Ankara, Turkey, and Ibn
Sina Integrated School in Marawi City (Philippines) are all named in his honour. His portrait hangs in the Hall of
the Avicenna Faculty of Medicine in the University of Paris. There is also a crater on the Moon named Avicenna and
a plant genus Avicennia. In 1980, the Soviet Union, which then ruled his birthplace Bukhara, celebrated the thousandth
anniversary of Avicenna's birth by circulating various commemorative stamps with artistic illustrations, and by erecting
a bust of Avicenna based on anthropological research by Soviet scholars. Near his birthplace in Qishlak Afshona, some 25 km (16 mi) north of Bukhara, a training college for medical staff has been named for him. On the grounds is a museum dedicated to his life, times and work. In March 2008, it was
announced that Avicenna's name would be used for new Directories of education institutions for health care professionals,
worldwide. The Avicenna Directories will list universities and schools where doctors, public health practitioners,
pharmacists and others, are educated. The project team stated "Why Avicenna? Avicenna ... was ... noted for his synthesis
of knowledge from both east and west. He has had a lasting influence on the development of medicine and health sciences.
The use of Avicenna's name symbolises the worldwide partnership that is needed for the promotion of health services
of high quality." The Soviet film "Youth of Genius" (1982), produced by the Uzbekfilm and Tajikfilm studios and directed by Elyor Ishmuhamedov, is dedicated to Avicenna's childhood and youth. Romantic and stormy, full of exploits, danger, and an irresistible thirst for knowledge: such was the youth of Al-Husayn ibn Abdallah ibn al-Hasan ibn Ali ibn Sina, who would become known around the world under the name of Avicenna, a great physician, scientist and educator of the 10th-11th centuries. The film is set in the ancient city of Bukhara at the turn of the millennium. In Louis L'Amour's 1985
historical novel The Walking Drum, Kerbouchard studies and discusses Avicenna's The Canon of Medicine. In his book
The Physician (1988) Noah Gordon tells the story of a young English medical apprentice who disguises himself as a
Jew to travel from England to Persia and learn from Avicenna, the great master of his time. The novel was adapted
into a feature film, The Physician, in 2013. Avicenna was played by Ben Kingsley. Ibn Sīnā wrote at least one treatise
on alchemy, but several others have been falsely attributed to him. His Logic, Metaphysics, Physics, and De Caelo,
are treatises giving a synoptic view of Aristotelian doctrine, though Metaphysics demonstrates a significant departure
from the brand of Neoplatonism known as Aristotelianism in Ibn Sīnā's world; Arabic philosophers
have hinted at the idea that Ibn Sīnā was attempting to "re-Aristotelianise" Muslim philosophy in its entirety, unlike
his predecessors, who accepted the conflation of Platonic, Aristotelian, Neo- and Middle-Platonic works transmitted
into the Muslim world. The Logic and Metaphysics have been extensively reprinted, the latter, e.g., at Venice in
1493, 1495, and 1546. Some of his shorter essays on medicine, logic, etc., take a poetical form (the poem on logic
was published by Schmoelders in 1836). Two encyclopaedic treatises, dealing with philosophy, are
often mentioned. The larger, Al-Shifa' (Sanatio), exists nearly complete in manuscript in the Bodleian Library and
elsewhere; part of it on the De Anima appeared at Pavia (1490) as the Liber Sextus Naturalium, and the long account
of Ibn Sina's philosophy given by Muhammad al-Shahrastani seems to be mainly an analysis, and in many places a reproduction,
of the Al-Shifa'. A shorter form of the work is known as the An-najat (Liberatio). The Latin editions of part of
these works have been modified by the corrections which the monastic editors confess that they applied. There is
also a حكمت مشرقيه (hikmat-al-mashriqqiyya, in Latin Philosophia Orientalis), mentioned by Roger Bacon, the majority of which is now lost, and which according to Averroes was pantheistic in tone.
Chinese characters are logograms used in the writing of Chinese and some other Asian languages. In Standard Chinese they
are called Hanzi (simplified Chinese: 汉字; traditional Chinese: 漢字). They have been adapted to write a number of other
languages including: Japanese, where they are known as kanji, Korean, where they are known as hanja, and Vietnamese
in a system known as chữ Nôm. Collectively, they are known as CJKV characters. In English, they are sometimes called
Han characters. Chinese characters constitute the oldest continuously used system of writing in the world. By virtue
of their widespread current use in East Asia, and historic use throughout the Sinosphere, Chinese characters are
among the most widely adopted writing systems in the world. Chinese characters number in the tens of thousands, though
most of them are minor graphic variants encountered only in historical texts. Studies in China have shown that functional
literacy in written Chinese requires a knowledge of between three and four thousand characters. In Japan, 2,136 are
taught through secondary school (the Jōyō kanji); hundreds more are in everyday use. There are various national standard
lists of characters, forms, and pronunciations. Simplified forms of certain characters are used in China, Singapore,
and Malaysia; the corresponding traditional characters are used in Taiwan, Hong Kong, Macau, and to a limited extent
in South Korea. In Japan, common characters are written in post-WWII Japan-specific simplified forms (shinjitai),
which are closer to traditional forms than Chinese simplifications, while uncommon characters are written in Japanese
traditional forms (kyūjitai), which are virtually identical to Chinese traditional forms. In South Korea, when Chinese
characters are used they are of the traditional variant and are almost identical to those used in places like Taiwan
and Hong Kong. Teaching of Chinese characters in South Korea starts in the 7th grade and continues until the 12th
grade, where 1,800 total characters are taught, although these characters are only used in certain cases (on signs, academic
papers, historical writings, etc.) and are slowly declining in use. Most modern Chinese dictionaries and Chinese
dictionaries sold to English speakers use the traditional radical-based character index in a section at the front,
while the main body of the dictionary arranges the main character entries alphabetically according to their pinyin
spelling. To find a character with unknown sound using one of these dictionaries, the reader finds the radical and
stroke number of the character, as before, and locates the character in the radical index. The character's entry
will have the character's pronunciation in pinyin written down; the reader then turns to the main dictionary section
and looks up the pinyin spelling alphabetically. In Old Chinese (e.g. as written in Classical Chinese), most words were monosyllabic
and there was a close correspondence between characters and words. In modern Chinese (esp. Mandarin Chinese), characters
do not necessarily correspond to words; indeed the majority of Chinese words today consist of two or more characters
due to the merging and loss of sounds in the Chinese language over time. Rather, a character almost always corresponds
to a single syllable that is also a morpheme. However, there are a few exceptions to this general correspondence,
including bisyllabic morphemes (written with two characters), bimorphemic syllables (written with two characters)
and cases where a single character represents a polysyllabic word or phrase. Modern Chinese has many homophones;
thus the same spoken syllable may be represented by many characters, depending on meaning. A single character may
also have a range of meanings, or sometimes quite distinct meanings; occasionally these correspond to different pronunciations.
Cognates in the several varieties of Chinese are generally written with the same character. They typically have similar
meanings, but often quite different pronunciations. In other languages, most significantly today in Japanese and
sometimes in Korean, characters are used to represent Chinese loanwords, to represent native words independent of
the Chinese pronunciation, and as purely phonetic elements based on their pronunciation in the historical variety
of Chinese from which they were acquired. These foreign adaptations of Chinese pronunciation are known as Sino-Xenic
pronunciations, and have been useful in the reconstruction of Middle Chinese. Chinese characters represent words
of the language using several strategies. A few characters, including some of the most commonly used, were originally
pictograms, which depicted the objects denoted, or simple ideograms, in which meaning was expressed iconically. Some
other words were expressed by compound ideograms, but the vast majority were written using the rebus principle, in
which a character for a similarly sounding word was either simply borrowed or (more commonly) extended with a disambiguating
semantic marker to form a phono-semantic compound character. Semantic-phonetic compounds or pictophonetic compounds
are by far the most numerous characters. These characters are composed of two parts: one of a limited set of characters
(the semantic indicator, often graphically simplified) which suggests the general meaning of the compound character,
and another character (the phonetic indicator) whose pronunciation suggests the pronunciation of the compound character.
In most cases the semantic indicator is also the radical under which the character is listed in dictionaries. Examples
are 河 hé "river", 湖 hú "lake", 流 liú "stream", 沖 chōng "riptide" (or "flush"), 滑 huá "slippery". All these characters
have on the left a radical of three short strokes (氵), which is a reduced form of the character 水 shuǐ meaning "water",
indicating that the character has a semantic connection with water. The right-hand side in each case is a phonetic
indicator. For example, in the case of 沖 chōng (Old Chinese *ɡ-ljuŋ), the phonetic indicator is 中 zhōng (Old Chinese
*k-ljuŋ), which by itself means "middle". In this case it can be seen that the pronunciation of the character is
slightly different from that of its phonetic indicator; the process of historical phonetic change means that the
composition of such characters can sometimes seem arbitrary today. Occasionally a bisyllabic word is written with
two characters that contain the same radical, as in 蝴蝶 húdié "butterfly", where both characters have the insect radical
虫. A notable example is pipa (a Chinese lute, also a fruit, the loquat, of similar shape) – originally written as
批把 with the hand radical, referring to the down and up strokes when playing this instrument, which was then changed
to 枇杷 (tree radical), which is still used for the fruit, while the character was changed to 琵琶 when referring to
the instrument. In other cases a compound word may coincidentally share a radical without this being meaningful.
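The radical-plus-phonetic structure described above can be modeled as a simple mapping. Below is a minimal Python sketch with a hand-picked set of entries taken from the examples in the text; real decomposition data would come from a resource such as the Unihan database.

```python
# Toy decomposition of phono-semantic compounds: each character pairs a
# semantic indicator (radical) with a phonetic indicator. Entries are
# illustrative only.
COMPOUNDS = {
    "河": ("氵", "可"),  # hé "river": water radical + phonetic 可
    "湖": ("氵", "胡"),  # hú "lake": water radical + phonetic 胡
    "沖": ("氵", "中"),  # chōng: water radical + phonetic 中 zhōng
}

def radical_of(char):
    """Return the semantic indicator of a compound character, if listed."""
    semantic, _phonetic = COMPOUNDS[char]
    return semantic

print(radical_of("沖"))  # 氵
```

Note that the shared water radical 氵 is exactly what groups these characters together under one heading in a radical-indexed dictionary.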
In recent decades, a series of inscribed graphs and pictures have been found at Neolithic sites in China, including
Jiahu (c. 6500 BC), Dadiwan and Damaidi from the 6th millennium BC, and Banpo (5th millennium BC). Often these finds
are accompanied by media reports that push back the purported beginnings of Chinese writing by thousands of years.
However, because these marks occur singly, without any implied context, and are made crudely and simply, Qiu Xigui
concluded that "we do not have any basis for stating that these constituted writing nor is there reason to conclude
that they were ancestral to Shang dynasty Chinese characters." They do however demonstrate a history of sign use
in the Yellow River valley during the Neolithic through to the Shang period. The earliest confirmed evidence of the
Chinese script yet discovered is the body of inscriptions on oracle bones from the late Shang dynasty (c. 1200–1050
BC). These symbols, carved on pieces of bone and turtle shell being sold as "dragon bones" for medicinal purposes,
were identified as Chinese writing by scholars in 1899. By 1928, the source of the oracle bones had been traced to
a village near Anyang in Henan Province, which was excavated by the Academia Sinica between 1928 and 1937. Over 150,000
fragments have been found. The traditional picture of an orderly series of scripts, each one invented suddenly and
then completely displacing the previous one, has been conclusively demonstrated to be fiction by the archaeological
finds and scholarly research of the later 20th and early 21st centuries. Gradual evolution and the coexistence of
two or more scripts was more often the case. As early as the Shang dynasty, oracle-bone script coexisted as a simplified
form alongside the normal script of bamboo books (preserved in typical bronze inscriptions), as well as the extra-elaborate
pictorial forms (often clan emblems) found on many bronzes. Based on studies of these bronze inscriptions, it is
clear that, from the Shang dynasty writing to that of the Western Zhou and early Eastern Zhou, the mainstream script
evolved in a slow, unbroken fashion, until assuming the form that is now known as seal script in the late Eastern
Zhou in the state of Qin, without any clear line of division. Meanwhile, other scripts had evolved, especially in
the eastern and southern areas during the late Zhou dynasty, including regional forms, such as the gǔwén ("ancient
forms") of the eastern Warring States preserved as variant forms in the Han dynasty character dictionary Shuowen
Jiezi, as well as decorative forms such as bird and insect scripts. Seal script, which had evolved slowly in the
state of Qin during the Eastern Zhou dynasty, became standardized and adopted as the formal script for all of China
in the Qin dynasty (leading to a popular misconception that it was invented at that time), and was still widely used
for decorative engraving and seals (name chops, or signets) in the Han dynasty period. However, despite the Qin script
standardization, more than one script remained in use at the time. For example, a little-known, rectilinear and roughly
executed kind of common (vulgar) writing had for centuries coexisted with the more formal seal script in the Qin
state, and the popularity of this vulgar writing grew as the use of writing itself became more widespread. By the
Warring States period, an immature form of clerical script called "early clerical" or "proto-clerical" had already
developed in the state of Qin based upon this vulgar writing, and with influence from seal script as well. The coexistence
of the three scripts – small seal, vulgar and proto-clerical, with the latter evolving gradually in the Qin to early
Han dynasties into clerical script – runs counter to the traditional belief that the Qin dynasty had one script only,
and that clerical script was suddenly invented in the early Han dynasty from the small seal script. Contrary to the
popular belief of there being only one script per period, there were in fact multiple scripts in use during the Han
period. Although mature clerical script, also called 八分 (bāfēn) script, was dominant at that time, an early type
of cursive script was also in use by the Han by at least as early as 24 BC (during the very late Western Han period),[b]
incorporating cursive forms popular at the time, as well as many elements from the vulgar writing of the Warring States-era state
of Qin. By around the time of the Eastern Jin dynasty, this Han cursive became known as 章草 zhāngcǎo (also known as
隶草 / 隸草 lìcǎo today), or in English sometimes clerical cursive, ancient cursive, or draft cursive. Some believe that
the name, based on 章 zhāng meaning "orderly", arose because the script was a more orderly form of cursive than the
modern form, which emerged during the Eastern Jin dynasty and is still in use today, called 今草 jīncǎo or "modern
cursive". By the late Eastern Han period, an early form of semi-cursive script appeared, developing out of a cursively
written form of neo-clerical script[c] and simple cursive. This semi-cursive script was traditionally attributed
to Liu Desheng c. 147–188 AD,[d] although such attributions refer to early masters of a script rather than to their
actual inventors, since the scripts generally evolved into being over time. Qiu gives examples of early semi-cursive
script, showing that it had popular origins rather than being purely Liu’s invention. Regular script has been attributed
to Zhong Yao, of the Eastern Han to Cao Wei period (c. 151–230 AD), who has been called the "father of regular script".
However, some scholars postulate that one person alone could not have developed a new script which was universally
adopted, but could only have been a contributor to its gradual formation. The earliest surviving pieces written in
regular script are copies of Zhong Yao's works, including at least one copied by Wang Xizhi. This new script, which is
the dominant modern Chinese script, developed out of a neatly written form of early semi-cursive, with addition of
the pause (頓/顿 dùn) technique to end horizontal strokes, plus heavy tails on strokes which are written to the downward-right
diagonal. Thus, early regular script emerged from a neat, formal form of semi-cursive, which had itself emerged from
neo-clerical (a simplified, convenient form of clerical script). It then matured further in the Eastern Jin dynasty
in the hands of the "Sage of Calligraphy", Wang Xizhi, and his son Wang Xianzhi. It was not, however, in widespread
use at that time, and most writers continued using neo-clerical, or a somewhat semi-cursive form of it, for daily
writing, while the conservative bafen clerical script remained in use on some stelae, alongside some semi-cursive,
but primarily neo-clerical. It was not until the Northern and Southern dynasties that regular script rose to dominant
status. During that period, regular script continued evolving stylistically, reaching full maturity in the early
Tang dynasty. Some call the writing of the early Tang calligrapher Ouyang Xun (557–641) the first mature regular
script. After this point, although developments in the art of calligraphy and in character simplification still lay
ahead, there were no more major stages of evolution for the mainstream script. In a radical-indexed dictionary, to look up a character whose sound is not known, e.g. 松 (pine tree), the user first determines which part of the character is the radical
(here 木), then counts the number of strokes in the radical (four), and turns to the radical index (usually located
on the inside front or back cover of the dictionary). Under the number "4" for radical stroke count, the user locates
木, then turns to the page number listed, which is the start of the listing of all the characters containing this
radical. This page will have a sub-index giving remainder stroke numbers (for the non-radical portions of characters)
and page numbers. The right half of the character also contains four strokes, so the user locates the number 4, and
turns to the page number given. From there, the user must scan the entries to locate the character he or she is seeking.
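The lookup procedure just described amounts to a two-level index: radical first, then remaining stroke count. A minimal sketch in Python, using a hypothetical miniature index (a real dictionary lists thousands of characters under roughly 214 traditional radicals):

```python
# Stroke counts for a few radicals (hypothetical miniature data).
RADICAL_STROKES = {"木": 4, "氵": 3, "虫": 6}

# radical -> remaining (non-radical) stroke count -> characters
RADICAL_INDEX = {
    "木": {4: ["松", "枝"], 8: ["棋"]},
    "氵": {4: ["沖"], 5: ["河"], 9: ["湖"]},
}

def look_up(radical, remaining_strokes):
    """Two-step lookup: find the radical, then filter by remainder strokes."""
    return RADICAL_INDEX.get(radical, {}).get(remaining_strokes, [])

# 松 "pine": radical 木 (4 strokes), right half 公 (4 more strokes)
print(look_up("木", 4))  # ['松', '枝']
```

From the resulting short list, the reader scans for the character sought, just as with the printed sub-index of page numbers.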
Some dictionaries have a sub-index which lists every character containing each radical, and if the user knows the
number of strokes in the non-radical portion of the character, he or she can locate the correct page directly. Chinese
character dictionaries often allow users to locate entries in several ways. Many Chinese, Japanese, and Korean dictionaries
of Chinese characters list characters in radical order: characters are grouped together by radical, and radicals
containing fewer strokes come before radicals containing more strokes (radical-and-stroke sorting). Under each radical,
characters are listed by their total number of strokes. It is often also possible to search for characters by sound,
using pinyin (in Chinese dictionaries), zhuyin (in Taiwanese dictionaries), kana (in Japanese dictionaries) or hangul
(in Korean dictionaries). Most dictionaries also allow searches by total number of strokes, and individual dictionaries
often allow other search methods as well. While new characters can be easily coined by writing on paper, they are
difficult to represent on a computer – they must generally be represented as a picture, rather than as text – which
presents a significant barrier to their use or widespread adoption. Compare this with the use of symbols as names
in 20th century musical albums such as Led Zeppelin IV (1971) and Love Symbol Album (1993); an album cover may potentially
contain any graphics, but in writing and other computation these symbols are difficult to use. New characters can
in principle be coined at any time, just as new words can be, but they may not be adopted. Significant historically
recent coinages date to scientific terms of the 19th century. Specifically, Chinese coined new characters for chemical
elements – see chemical elements in East Asian languages – which continue to be used and taught in schools in China
and Taiwan. In Japan, in the Meiji era (specifically, late 19th century), new characters were coined for some (but
not all) SI units, such as 粁 (米 "meter" + 千 "thousand, kilo-") for kilometer. These kokuji (Japanese coinages) have
found use in China as well – see Chinese characters for SI units for details. In addition, there are a number of
dialect characters (方言字) that are not used in formal written Chinese but represent colloquial terms in non-Mandarin
varieties of Chinese. One such variety is Written Cantonese, in widespread use in Hong Kong even for certain formal
documents, due to the former British colonial administration's recognition of Cantonese for use for official purposes.
In Taiwan, there is also an informal body of characters used to represent Hokkien Chinese. Many varieties have specific
characters for words exclusive to them. For example, the vernacular character 㓾, pronounced cii11 in Hakka, means
"to kill". Furthermore, Shanghainese and Sichuanese also have their own series of written text, but these are not
widely used in actual texts, Mandarin being the preference for all mainland regions. In the Republic of China (Taiwan),
which uses traditional Chinese characters, the Ministry of Education's Chángyòng Guózì Biāozhǔn Zìtǐ Biǎo (常用國字標準字體表,
Chart of Standard Forms of Common National Characters) lists 4,808 characters; the Cì Chángyòng Guózì Biāozhǔn Zìtǐ
Biǎo (次常用國字標準字體表, Chart of Standard Forms of Less-Than-Common National Characters) lists another 6,341 characters.
The Chinese Standard Interchange Code (CNS11643)—the official national encoding standard—supports 48,027 characters,
while the most widely used encoding scheme, BIG-5, supports only 13,053. In China, which uses simplified Chinese
characters, the Xiàndài Hànyǔ Chángyòng Zìbiǎo (现代汉语常用字表, Chart of Common Characters of Modern Chinese) lists 2,500
common characters and 1,000 less-than-common characters, while the Xiàndài Hànyǔ Tōngyòng Zìbiǎo (现代汉语通用字表, Chart
of Generally Utilized Characters of Modern Chinese) lists 7,000 characters, including the 3,500 characters already
listed above. GB2312, an early version of the national encoding standard used in the People's Republic of China,
has 6,763 code points. GB18030, the modern, mandatory standard, has a much higher number. The New Hànyǔ Shuǐpíng
Kǎoshì (汉语水平考试, Chinese Proficiency Test) covers approximately 2,600 characters at its highest level (level six).
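The differing repertoires of these encoding standards can be observed directly with the codecs bundled in Python's standard library (the codec names gb2312, gb18030, and big5 are as registered there); this is a sketch of the mechanics, not a statement about any particular character list:

```python
# Encode the same word under the legacy PRC standard, the modern mandatory
# standard, and the common traditional-Chinese encoding.
simplified, traditional = "汉字", "漢字"

gb = simplified.encode("gb2312")        # legacy PRC standard, 6,763 hanzi
gb18030 = simplified.encode("gb18030")  # modern, mandatory, covers Unicode
big5 = traditional.encode("big5")       # widely used traditional encoding

for name, data in [("gb2312", gb), ("gb18030", gb18030), ("big5", big5)]:
    print(name, data.hex(), f"({len(data)} bytes)")

# Round-trips are lossless within each standard's repertoire.
assert gb.decode("gb2312") == simplified
assert big5.decode("big5") == traditional
```

Characters outside GB2312's 6,763 code points raise an encoding error under that codec, while GB18030, as a superset mapped to all of Unicode, accepts them.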
Modified radicals and new variants are two common reasons for the ever-increasing number of characters. There are
about 300 radicals and 100 are in common use. Creating a new character by modifying the radical is an easy way to
disambiguate homographs among xíngshēngzì pictophonetic compounds. This practice began long before the standardization
of Chinese script by Qin Shi Huang and continues to the present day. The traditional 3rd-person pronoun tā (他 "he,
she, it"), which is written with the "person radical", illustrates modifying significs to form new characters. In
modern usage, there is a graphic distinction between tā (她 "she") with the "woman radical", tā (牠 "it") with the
"animal radical", tā (它 "it") with the "roof radical", and tā (祂 "He") with the "deity radical". One consequence
of modifying radicals is the fossilization of rare and obscure variant logographs, some of which are not even used
in Classical Chinese. For instance, he 和 "harmony, peace", which combines the "grain radical" with the "mouth radical",
has infrequent variants 咊 with the radicals reversed and 龢 with the "flute radical". Even the Zhonghua Zihai does
not include characters in the Chinese family of scripts created to represent non-Chinese languages. Characters formed
by Chinese principles in other languages include the roughly 1,500 Japanese-made kokuji given in the Kokuji no Jiten,
the Korean-made gukja, the over 10,000 Sawndip characters still in use in Guangxi, and the almost 20,000 Nôm characters
formerly used in Vietnam.[citation needed] More divergent descendants of Chinese script include the Tangut script, which
created over 5,000 characters with similar strokes but different formation principles to Chinese characters. The
total number of Chinese characters from past to present remains unknowable because new ones are developed all the
time – for instance, brands may create new characters when none of the existing ones allow for the intended meaning.
Chinese characters are theoretically an open set and anyone can create new characters, though such inventions are
rarely included in official character sets. The number of entries in major Chinese dictionaries is the best means
of estimating the historical growth of character inventory. One of the most complex characters found in modern Chinese
dictionaries[g] is 齉 (U+9F49) (nàng), meaning "snuffle" (that is,
a pronunciation marred by a blocked nose), with "just" thirty-six strokes. However, this is not in common use. The
most complex character that can be input using the Microsoft New Phonetic IME 2002a for traditional Chinese is 龘
(dá, "the appearance of a dragon flying"). It is composed of the dragon radical represented three times, for a total
of 16 × 3 = 48 strokes. Among the most complex characters in modern dictionaries and also in frequent modern use
are 籲 (yù, "to implore"), with 32 strokes; 鬱 (yù, "luxuriant, lush; gloomy"), with 29 strokes, as in 憂鬱 (yōuyù, "depressed");
豔 (yàn, "colorful"), with 28 strokes; and 釁 (xìn, "quarrel"), with 25 strokes, as in 挑釁 (tiǎoxìn, "to pick a fight").
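A quick Python check of the Unicode code points involved (the U+ values are as cited in the surrounding text) also illustrates a practical wrinkle: the rarest characters sit outside the Basic Multilingual Plane.

```python
# 齉 nàng "snuffle", 36 strokes: cited as U+9F49.
assert ord("齉") == 0x9F49

# The 64-stroke zhé (four 龍 "dragon" components), cited as U+2A6A5, lies
# beyond U+FFFF, so UTF-16 must represent it with a surrogate pair.
zhe = "\U0002A6A5"
print(hex(ord(zhe)))                  # 0x2a6a5
print(len(zhe.encode("utf-16-le")))   # 4 bytes = two UTF-16 code units
```

Legacy encodings and fonts that lack supplementary-plane coverage are one reason such characters are hard to input and display, as the following paragraphs describe.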
Also in occasional modern use is 鱻 (xiān "fresh"; variant of 鮮 xiān) with 33 strokes. There are also some extremely
complex characters which have understandably become rather rare. According to Joël Bellassen (1989), the most complex
Chinese character is 𪚥 (U+2A6A5) zhé, meaning "verbose" and containing sixty-four strokes; this
character fell from use around the 5th century. It might be argued, however, that while containing the most strokes,
it is not necessarily the most complex character (in terms of difficulty), as it simply requires writing the same
sixteen-stroke character 龍 lóng (lit. "dragon") four times in the space for one. Another 64-stroke character is 𠔻
(U+2053B) zhèng composed of 興 xīng/xìng (lit. "flourish") four times. One person who has encountered the problem of rare, hard-to-input characters is
Taiwanese politician Yu Shyi-kun, due to the rarity of the last character in his name. Newspapers have dealt with
this problem in varying ways, including using software to combine two existing, similar characters, including a picture
of the personality, or, especially as is the case with Yu Shyi-kun, simply substituting a homophone for the rare
character in the hope that the reader would be able to make the correct inference. Taiwanese political posters, movie
posters etc. will often add the bopomofo phonetic symbols next to such a character. Japanese newspapers may render
such names and words in katakana instead of kanji, and it is accepted practice for people to write names for which
they are unsure of the correct kanji in katakana instead. The use of contractions, single characters standing for longer words or phrases, is as old as Chinese characters
themselves, and they have frequently been found in religious or ritual use. In the Oracle Bone script, personal names,
ritual items, and even phrases such as 受又(祐) shòu yòu "receive blessings" are commonly contracted into single characters.
A dramatic example is that in medieval manuscripts 菩薩 púsà "bodhisattva" (simplified: 菩萨) is sometimes written with
a single character formed of a 2×2 grid of four 十 (derived from the grass radical over two 十). However, for the sake
of consistency and standardization, the CPC seeks to limit the use of such polysyllabic characters in public writing
to ensure that every character only has one syllable. Modern examples particularly include Chinese characters for
SI units. In Chinese these units are disyllabic and standardly written with two characters, as 厘米 límǐ "centimeter"
(厘 centi-, 米 meter) or 千瓦 qiānwǎ "kilowatt". However, in the 19th century these were often written via compound characters,
pronounced disyllabically, such as 瓩 for 千瓦 or 糎 for 厘米 – some of these characters were also used in Japan, where
they were pronounced with borrowed European readings instead. These have now fallen out of general use, but are occasionally
seen. Less systematic examples include 圕 túshūguǎn "library", a contraction of 圖書館. A four-morpheme word, 社会主义 shèhuì
zhǔyì "socialism", is commonly written with a single character formed by combining the last character, 义, with the
radical of the first, 社, yielding roughly 礻义. A commonly seen example is the double happiness symbol 囍, formed as
a ligature of 喜喜 and referred to by its disyllabic name (simplified Chinese: 双喜; traditional Chinese: 雙喜; pinyin:
shuāngxǐ). In handwriting, numbers are very frequently squeezed into one space or combined – common ligatures include
廿 niàn, "twenty", normally read as 二十 èrshí, 卅 sà, "thirty", normally read as 三十 sānshí, and 卌 xì "forty", normally
read as 四十 "sìshí". In some cases counters are also merged into one character, such as 七十人 qīshí rén "seventy people".
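The number ligatures just listed can be modeled as a substitution table. A small Python sketch (the mapping is taken from the text; 囍 is given its traditional expansion 雙喜):

```python
# Ligature -> standard multi-character reading, per the examples above.
LIGATURES = {"廿": "二十", "卅": "三十", "卌": "四十", "囍": "雙喜"}

def expand(text):
    """Replace each ligature with its multi-character expansion."""
    return "".join(LIGATURES.get(ch, ch) for ch in text)

print(expand("卅人"))  # 三十人
```

Going the other way, from standard reading to ligature, would need longest-match substring replacement rather than a per-character pass, since the expansions are multi-character strings.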
Another common abbreviation is 门 with a "T" written inside it, for 問題, 问题, wèntí ("question; problem"), where the
"T" is from pinyin for the second syllable tí 题. Since polysyllabic characters are often non-standard, they are often
excluded in character dictionaries. In certain cases, compound words and set phrases may be contracted into single characters.
Some of these can be considered logograms, where characters represent whole words rather than syllable-morphemes,
though these are generally instead considered ligatures or abbreviations (similar to scribal abbreviations, such
as & for "et"), and as non-standard. These do see use, particularly in handwriting or decoration, but also in some
cases in print. In Chinese, these ligatures are called héwén (合文), héshū (合書) or hétǐzì (合体字), and in the special
case of combining two characters, these are known as "two-syllable Chinese characters" (双音节汉字, 雙音節漢字). Chinese characters
are primarily morphosyllabic, meaning that most Chinese morphemes are monosyllabic and are written with a single
character, though in modern Chinese most words are disyllabic and dimorphemic, consisting of two syllables, each
of which is a morpheme. In modern Chinese, 10% of morphemes occur only as part of a given compound. However, a few
morphemes are disyllabic, some of them dating back to Classical Chinese. Excluding foreign loan words, these are
typically words for plants and small animals. They are usually written with a pair of phono-semantic compound characters
sharing a common radical. Examples are 蝴蝶 húdié "butterfly" and 珊瑚 shānhú "coral". Note that the 蝴 hú of húdié and
the 瑚 hú of shānhú have the same phonetic, 胡, but different radicals ("insect" and "jade", respectively). Neither
exists as an independent morpheme except as a poetic abbreviation of the disyllabic word. In addition to strictness
in character size and shape, Chinese characters are written with very precise rules. The most important rules regard
the strokes employed, stroke placement, and stroke order. Just as each region that uses Chinese characters has standardized
character forms, each also has standardized stroke orders, with each standard being different. Most characters can
be written with just one correct stroke order, though some characters have multiple valid stroke orders, which may occasionally
result in different stroke counts. Some characters are also written with different stroke orders due to character
simplification. Just as Roman letters have a characteristic shape (lower-case letters mostly occupying the x-height,
with ascenders or descenders on some letters), Chinese characters occupy a more or less square area in which the
components of every character are written to fit in order to maintain a uniform size and shape, especially with small
printed characters in Ming and sans-serif styles. Because of this, beginners often practise writing on squared graph
paper, and the Chinese sometimes use the term "Square-Block Characters" (方块字 / 方塊字, fāngkuàizì), sometimes translated
as tetragraph, in reference to Chinese characters. Regular script typefaces are also commonly used, but not as common
as Ming or sans-serif typefaces for body text. Regular script typefaces are often used to teach students Chinese
characters, and often aim to match the standard forms of the region where they are meant to be used. Most typefaces
in the Song dynasty were regular script typefaces which resembled a particular person's handwriting (e.g. the handwriting
of Ouyang Xun, Yan Zhenqing, or Liu Gongquan), while most modern regular script typefaces tend toward anonymity and
regularity. The art of writing Chinese characters is called Chinese calligraphy. It is usually done with ink brushes.
In ancient China, Chinese calligraphy was one of the Four Arts of the Chinese Scholars. There is a minimalist set
of rules of Chinese calligraphy. Every character from the Chinese scripts is built into a uniform shape by means
of assigning it a geometric area in which the character must occur. Each character has a set number of brushstrokes;
none must be added or taken away from the character to enhance it visually, lest the meaning be lost. Finally, strict
regularity is not required, meaning the strokes may be accentuated for dramatic effect of individual style. Calligraphy
was the means by which scholars could mark their thoughts and teachings for immortality, and as such, represent some
of the more precious treasures that can be found from ancient China. The cursive script (草書(书), cǎoshū, literally
"grass script") is used informally. The basic character shapes are suggested, rather than explicitly realized, and
the abbreviations are sometimes extreme. Despite being cursive to the point where individual strokes are no longer
differentiable and the characters often illegible to the untrained eye, this script (also known as draft) is highly
revered for the beauty and freedom that it embodies. Some of the simplified Chinese characters adopted by the People's
Republic of China, and some simplified characters used in Japan, are derived from the cursive script. The Japanese
hiragana script is also derived from this script. The Shang dynasty oracle bone script and the Zhou dynasty scripts
found on Chinese bronze inscriptions are no longer used; the oldest script that is still in use today is the Seal
Script (篆書(书), zhuànshū). It evolved organically out of the Spring and Autumn period Zhou script, and was adopted
in a standardized form under the first Emperor of China, Qin Shi Huang. The seal script, as the name suggests, is
now used only in artistic seals. Few people are still able to read it effortlessly today, although the art of carving
a traditional seal in the script remains alive; some calligraphers also work in this style. The following is a comparison
of Chinese characters in the Standard Form of National Characters, a common traditional Chinese standard used in
Taiwan, the Table of General Standard Chinese Characters, the standard for Mainland Chinese simplified Chinese characters,
and the jōyō kanji, the standard for Japanese kanji. Generally, the jōyō kanji are more similar to traditional Chinese
characters than simplified Chinese characters are to traditional Chinese characters. "Simplified" refers to having
significant differences from the Taiwan standard, not necessarily being a newly created character or a newly performed
substitution. The characters in the Hong Kong standard and the Kangxi Dictionary are also known as "Traditional,"
but are not shown. In the years after World War II, the Japanese government also instituted a series of orthographic
reforms. Some characters were given simplified forms called shinjitai 新字体 (lit. "new character forms", the older
forms were then labelled the kyūjitai 旧字体, lit. "old character forms"). The number of characters in common use was
restricted, and formal lists of characters to be learned during each grade of school were established, first the
1850-character tōyō kanji 当用漢字 list in 1945, the 1945-character jōyō kanji 常用漢字 list in 1981, and a 2136-character
reformed version of the jōyō kanji in 2010. Many variant forms of characters and obscure alternatives for common
characters were officially discouraged. This was done with the goal of facilitating learning for children and simplifying
kanji use in literature and periodicals. These are simply guidelines, hence many characters outside these standards
are still widely known and commonly used, especially those used for personal and place names (for the latter, see
jinmeiyō kanji),[citation needed] as well as for some common words such as "dragon" (Japanese kana: たつ, Rōmaji: tatsu)
in which both the shinjitai 竜 and the kyūjitai 龍 forms of the kanji are acceptable and widely known amongst
native Japanese speakers. The majority of simplified characters are drawn from conventional abbreviated forms, or
ancient standard forms. For example, the orthodox character 來 lái ("come") was written with the structure 来 in the
clerical script (隶书 / 隸書, lìshū) of the Han dynasty. This clerical form uses one fewer stroke, and was thus adopted
as a simplified form. The character 雲 yún ("cloud") was written with the structure 云 in the oracle bone script of
the Shang dynasty, and had remained in use later as a phonetic loan in the meaning of "to say" while the 雨 radical
was added to differentiate meanings. The simplified form adopts the original structure. The People's Republic of
China issued its first round of official character simplifications in two documents, the first in 1956 and the second
in 1964. A second round of character simplifications (known as erjian, or "second round simplified characters") was
promulgated in 1977. It was poorly received, and in 1986 the authorities rescinded the second round completely, while
making six revisions to the 1964 list, including the restoration of three traditional characters that had been simplified:
叠 dié, 覆 fù, 像 xiàng. Although most often associated with the People's Republic of China, character simplification
predates the 1949 communist victory. Caoshu, cursive written text, almost always includes character simplification,
and simplified forms have always existed in print, albeit not for the most formal works. In the 1930s and 1940s,
discussions on character simplification took place within the Kuomintang government, and a large number of Chinese
intellectuals and writers have long maintained that character simplification would help boost literacy in China.
Indeed, this desire by the Kuomintang to simplify the Chinese writing system (inherited and implemented by the Communist
Party of China) also nursed aspirations of some for the adoption of a phonetic script based on the Latin script,
and spawned such inventions as the Gwoyeu Romatzyh. The use of traditional Chinese characters versus simplified Chinese
characters varies greatly, and can depend on both the local customs and the medium. Before the official reform, character
simplifications were not officially sanctioned and generally adopted vulgar variants and idiosyncratic substitutions.
Orthodox variants were mandatory in printed works, while the (unofficial) simplified characters would be used in
everyday writing or quick notes. Since the 1950s, and especially with the publication of the 1964 list, the People's
Republic of China has officially adopted simplified Chinese characters for use in mainland China, while Hong Kong,
Macau, and the Republic of China (Taiwan) were not affected by the reform. There is no absolute rule for using either
system, and often it is determined by what the target audience understands, as well as the upbringing of the writer.
According to the Rev. John Gulick: "The inhabitants of other Asiatic nations, who have had occasion to represent
the words of their several languages by Chinese characters, have as a rule used unaspirated characters for the sounds,
g, d, b. The Muslims from Arabia and Persia have followed this method … The Mongols, Manchu, and Japanese also constantly
select unaspirated characters to represent the sounds g, d, b, and j of their languages. These surrounding Asiatic
nations, in writing Chinese words in their own alphabets, have uniformly used g, d, b, & c., to represent the unaspirated
sounds." Although Chinese characters in Vietnam are now limited to ceremonial uses, they were once in widespread
use. Until the early 20th century, Literary Chinese was used in Vietnam for all official and scholarly writing. Around
the 13th century the Nôm script was developed to record folk literature in the Vietnamese language. The script used
Chinese characters to represent both borrowed Sino-Vietnamese vocabulary and native words with similar pronunciation
or meaning. In addition thousands of new compound characters were created to write Vietnamese words. This process
resulted in a highly complex system that was never mastered by more than 5% of the population. Both Literary Chinese
and Nôm were replaced in the early 20th century by Vietnamese written with the Latin-based Vietnamese alphabet. After
Kim Jong Il, the second ruler of North Korea, died in December 2011, Kim Jong Un succeeded him and began mandating the use of Hanja as a source of definition for the Korean language. Currently, it is said that North Korea teaches around
3,000 Hanja characters to North Korean students, and in some cases, the characters appear within advertisements and
newspapers. However, it is also said that the authorities implore students not to use the characters in public. Due
to North Korea's strict isolationism, accurate reports about hanja use in North Korea are hard to obtain. When learning
how to write hanja, students are taught to memorize the native Korean pronunciation for the hanja's meaning and the
Sino-Korean pronunciations (the pronunciation based on the Chinese pronunciation of the characters) for each hanja
respectively, so that students know both the syllable and the meaning of a particular hanja. For example, the name
for the hanja 水 is 물 수 (mul-su) in which 물 (mul) is the native Korean pronunciation for "water", while 수 (su) is
the Sino-Korean pronunciation of the character. The naming of hanja is as if "water" were named "water-aqua", "horse-equus", or "gold-aurum", a hybridization of the English and Latin names. Other examples include 사람 인 (saram-in) for 人 "person/people", 큰 대 (keun-dae) for 大 "big/large/great", 작을 소 (jakeul-so) for 小 "small/little", 아래 하 (arae-ha) for 下 "underneath/below/low", 아비 부 (abi-bu) for 父 "father", and 나라이름 한 (nara-ireum-han) for 韓 "Han/Korea".
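As a rough sketch (illustrative only, using the examples just given), the hanja-naming convention pairs the native Korean word for a character's meaning with its Sino-Korean reading:

```python
# Hanja names pair the native Korean word for the meaning with the
# Sino-Korean reading, e.g. 水 is named 물 수 (mul-su).
HANJA_NAMES = {
    #  hanja: (native Korean meaning, Sino-Korean reading)
    "水": ("물", "수"),    # water
    "人": ("사람", "인"),  # person/people
    "大": ("큰", "대"),    # big/large/great
    "小": ("작을", "소"),  # small/little
    "下": ("아래", "하"),  # underneath/below/low
    "父": ("아비", "부"),  # father
}

def hanja_name(hanja: str) -> str:
    """Compose the conventional name: native meaning + Sino-Korean reading."""
    native, sino = HANJA_NAMES[hanja]
    return f"{native} {sino}"

print(hanja_name("水"))  # 물 수
```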
There is a clear trend toward the exclusive use of hangul in day-to-day South Korean society. Hanja are still used
to some extent, particularly in newspapers, weddings, place names and calligraphy (although it is nowhere near the
extent of kanji use in day-to-day Japanese society). Hanja is also extensively used in situations where ambiguity
must be avoided,[citation needed] such as academic papers, high-level corporate reports, government documents, and
newspapers; this is due to the large number of homonyms that have resulted from extensive borrowing of Chinese words.
In Korea, Literary Chinese was the dominant form of written communication until the 15th century, prior to the creation of hangul, the Korean alphabet. Much of the vocabulary, especially in the realms of science
and sociology, comes directly from Chinese, comparable to Latin or Greek root words in European languages. However,
due to the lack of tones in Korean,[citation needed] as the words were imported from Chinese, many dissimilar characters
took on identical sounds, and subsequently identical spelling in hangul.[citation needed] Chinese characters are
sometimes used to this day for either clarification in a practical manner, or to give a distinguished appearance,
as knowledge of Chinese characters is considered a high class attribute and an indispensable part of a classical
education.[citation needed] It is also observed that the preference for Chinese characters is treated as being conservative
and Confucian. Written Japanese also includes a pair of syllabaries known as kana, derived by simplifying Chinese
characters selected to represent syllables of Japanese. The syllabaries differ because they sometimes selected different
characters for a syllable, and because they used different strategies to reduce these characters for easy writing:
the angular katakana were obtained by selecting a part of each character, while hiragana were derived from the cursive
forms of whole characters. Modern Japanese writing uses a composite system, using kanji for word stems, hiragana
for inflexional endings and grammatical words, and katakana to transcribe non-Chinese loanwords as well as serve
as a method to emphasize native words (similar to how italics are used in Romance languages). Although most of the
simplified Chinese characters in use today are the result of reforms overseen by the government of the People's
Republic of China in the 1950s and 60s, character simplification predates the republic's formation in 1949. One of
the earliest proponents of character simplification was Lufei Kui, who proposed in 1909 that simplified characters
should be used in education. In the years following the May Fourth Movement in 1919, many anti-imperialist Chinese
intellectuals sought ways to modernise China. In the 1930s and 1940s, discussions on character simplification took
place within the Kuomintang government, and many Chinese intellectuals and writers have long maintained that character
simplification would help boost literacy in China. In many languages worldwide, the promotion of literacy has served as a justification for spelling reforms. The People's Republic of China issued its first round of official character simplifications
in two documents, the first in 1956 and the second in 1964. In the 1950s and 1960s, while confusion about simplified
characters was still rampant, transitional characters that mixed simplified parts with yet-to-be simplified parts
of characters together appeared briefly, then disappeared.
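The traditional/simplified/shinjitai correspondences discussed in this passage can be sketched, purely for illustration, as a small lookup table (the character pairs 來/来, 雲/云, and 龍/竜 are taken from the text above; the mainland simplified form 龙 is added here as a standard-list example):

```python
# Illustrative sketch: a few characters from the passage above, mapped from
# their traditional form to the mainland simplified and Japanese shinjitai
# forms. An unchanged entry means that standard kept the traditional form.
FORMS = {
    #  traditional: (simplified, shinjitai)
    "來": ("来", "来"),   # lái "come": Han-era clerical form adopted by both reforms
    "雲": ("云", "雲"),   # yún "cloud": simplification revives the oracle-bone 云
    "龍": ("龙", "竜"),   # lóng "dragon": kyūjitai 龍 also remains widely known
}

def simplified_form(traditional: str) -> str:
    """Look up the mainland simplified form of a traditional character."""
    return FORMS[traditional][0]

def shinjitai_form(traditional: str) -> str:
    """Look up the Japanese shinjitai form of a traditional character."""
    return FORMS[traditional][1]

print(simplified_form("來"), shinjitai_form("龍"))  # 来 竜
```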
The first known European explorer to reach Bermuda was Spanish sea captain Juan de Bermúdez in 1503, after whom the islands
are named. He claimed the apparently uninhabited islands for the Spanish Empire. Paying two visits to the archipelago,
Bermúdez never landed on the islands, but did create a recognisable map of the archipelago. Shipwrecked Portuguese
mariners are now thought to have been responsible for the 1543 inscription on Portuguese Rock (previously called
Spanish Rock). Subsequent Spanish or other European parties are believed to have released pigs there, which had become
feral and abundant on the island by the time European settlement began. In 1609, the English Virginia Company, which
had established Jamestown in Virginia (a term originally applied to all of the North American continent) two years
earlier, permanently settled Bermuda in the aftermath of a hurricane, when the crew and passengers of the Sea Venture
steered the ship onto the surrounding reef to prevent its sinking, then landed ashore. The island was administered
as an extension of Virginia by the Company until 1614. Its spin-off, the Somers Isles Company, took over in 1615
and managed the colony until 1684. At that time, the company's charter was revoked, and the English Crown took over
administration. The islands became a British colony following the 1707 unification of the parliaments of Scotland
and England, which created the Kingdom of Great Britain. After 1949, when Newfoundland became part of Canada, Bermuda
was automatically ranked as the oldest remaining British Overseas Territory. Since the return of Hong Kong to China in 1997, it has been the most populous Territory. Its first capital, St. George's, was established in 1612 and is the oldest
continuously inhabited English town in the New World. Bermuda's economy is based on offshore insurance and reinsurance,
and tourism, the two largest economic sectors. Bermuda had one of the world's highest GDPs per capita for most of
the 20th century and several years beyond. Recently, its economic status has been affected by the global recession.
It has a subtropical climate. Bermuda is the northernmost point of the Bermuda Triangle, a region of sea in which,
according to legend, a number of aircraft and surface vessels have disappeared under supposedly unexplained or mysterious
circumstances. The island is in the hurricane belt and prone to severe weather. However, it is somewhat protected
from the full force of a hurricane by the coral reef that surrounds the island. Bermuda is a group of low-forming
volcanoes located in the Atlantic Ocean, near the western edge of the Sargasso Sea, roughly 578 nautical miles (1,070
km (665 mi)) east-southeast of Cape Hatteras on the Outer Banks of North Carolina and about 594 nautical miles (1,100
km (684 mi)) southeast of Martha's Vineyard of Massachusetts. It is 898 nautical miles (1,664 km (1,034 mi)) northeast
of Miami, Florida, and 667 nautical miles (1,236 km (768 mi)) from Cape Sable Island, in Nova Scotia, Canada. The
islands lie due east of Fripp Island, South Carolina, west of Portugal and north of Puerto Rico. The archipelago
is formed by high points on the rim of the caldera of a submarine volcano that forms a seamount. The volcano is one
part of a range that was formed as part of the same process that formed the floor of the Atlantic, and the Mid-Atlantic
Ridge. The top of the seamount has gone through periods of complete submergence, during which its limestone cap was
formed by marine organisms, and during the Ice Ages the entire caldera was above sea level, forming an island of
approximately two hundred square miles. Despite the small land mass, place names are repeated: there are, for example, two islands named Long Island; three bays named Long Bay (on Somerset, Main, and Cooper's islands); two Horseshoe Bays (one in Southampton, on the Main Island, the other at Morgan's Point, formerly Tucker's Island); two roads through cuttings called Khyber Pass (one in Warwick, the other in St. George's Parish); and St George's Town, which is located on St George's Island within St George's Parish (each known as St George's). There is a Hamilton Parish
in addition to the City of Hamilton (which is in Pembroke Parish). Bermuda's pink sand beaches and clear, cerulean
blue ocean waters are popular with tourists. Many of Bermuda's hotels are located along the south shore of the island.
In addition to its beaches, there are a number of sightseeing attractions. Historic St George's is a designated World
Heritage Site. Scuba divers can explore numerous wrecks and coral reefs in relatively shallow water (typically 30–40
ft or 9–12 m in depth), with virtually unlimited visibility. Many nearby reefs are readily accessible from shore
by snorkellers, especially at Church Bay. The only indigenous mammals of Bermuda are five species of bats, all of
which are also found in the eastern United States: Lasionycteris noctivagans, Lasiurus borealis, Lasiurus cinereus,
Lasiurus seminolus and Perimyotis subflavus. Other commonly known fauna of Bermuda include its national bird, the
Bermuda petrel or cahow. It was rediscovered in 1951 after having been thought extinct since the 1620s. It is important
as an example of a Lazarus species. The government has a programme to protect it, including restoration of a habitat
area. The Bermuda rock skink was long thought to have been the only indigenous land vertebrate of Bermuda, discounting
the marine turtles that lay their eggs on its beaches. Recently, through genetic studies, scientists have discovered
that a species of turtle, the diamondback terrapin, previously thought to have been introduced, pre-dated the arrival
of humans in the archipelago. As this species spends most of its time in brackish ponds, some question whether it should be classified as a land vertebrate, which would challenge the skink's unique status. The island experienced large-scale
immigration over the 20th century, especially after the Second World War. Bermuda has a diverse population including
both those with relatively deep roots in Bermuda extending back for centuries, and newer communities whose ancestry
results from recent immigration, especially from Britain, North America, the West Indies, and the Portuguese Atlantic
islands (especially the Azores), although these groups are steadily merging. About 46% of the population identified
themselves with Bermudian ancestry in 2010, which was a decrease from the 51% who did so in the 2000 census. Those
identifying with British ancestry dropped by 1% to 11% (although those born in Britain remain the largest non-native
group at 3,942 persons). The number of people born in Canada declined by 13%. Those who reported West Indian ancestry
were 13%. The number of people born in the West Indies actually increased by 538. A significant segment of the population
is of Portuguese ancestry (10%), the result of immigration over the past 160 years, of whom 79% have residency status.
The deeper ancestral demography of Bermuda's population has been obscured by the ethnic homogenisation of the last
four centuries. There is effectively no ethnic distinction between black and white Bermudians, other than those characterising
recent immigrant communities. In the 17th century, this was not so. For the first hundred years of settlement, white
Protestants of English heritage were the distinct majority, with white minorities of Irish (the native language of
many of whom can be assumed to have been Gaelic) and Scots sent to Bermuda after the English invasions of their homelands
that followed the English Civil War. Non-white minorities included Spanish-speaking, free (indentured) blacks from
the West Indies, black chattel slaves primarily captured from Spanish and Portuguese ships by Bermudian privateers,
and Native Americans, primarily from the Algonquian and other tribes of the Atlantic seaboard, but possibly from
as far away as Mexico. By the 19th century, the white ethnically-English Bermudians had lost their numerical advantage.
Despite the banning of the importation of Irish, and the repeated attempts to force free blacks to emigrate and the
owners of black slaves to export them, the merging of the various minority groups, along with some of the white English,
had resulted in a new demographic group, "coloured" (which term, in Bermuda, referred to anyone not wholly of European
ancestry) Bermudians, gaining a slight majority. Any child born before or since then to one coloured and one white
parent has been added to the coloured statistic. Most of those historically described as "coloured" are today described as "black", or "of African heritage", which obscures their non-African heritage (those previously described as "coloured" who were not of African ancestry had been very few, though the number of South Asians, particularly, is now growing; the number of persons born in Asian countries doubled between the 2000 and the 2010 censuses). Blacks have remained in the majority, with new white immigration from Portugal, Britain and elsewhere countered by black immigration from the West Indies. Bermuda's modern black population contains more than one demographic group. Although the number
of residents born in Africa is very small, it has tripled between 2000 and 2010 (this group also includes non-blacks).
The majority of blacks in Bermuda can be termed "Bermudian blacks", whose ancestry dates back centuries. Between the 17th century and the end of slavery in 1834, Bermuda's black population was self-sustaining, with its growth resulting
largely from natural expansion. This contrasts to the enslaved blacks of the plantation colonies, who were subjected
to conditions so harsh as to drop their birth rate below the death rate, and slaveholders in the United States and
the West Indies found it necessary to continue importing more enslaved blacks from Africa until the end of slavery
(the same had been true for the Native Americans that the Africans had replaced on the New World plantations). The
indigenous populations of many West Indian islands, and much of the South-East of what is now the United States that
had survived the 16th- and 17th-century epidemics of European-introduced diseases then became the victims of large-scale
slave raiding, with much of the region completely depopulated. When the supply of indigenous slaves ran out, the
slaveholders looked to Africa). The ancestry of Bermuda's black population is distinguished from that of the British
West Indian black population in two ways: firstly, the higher degree of European and Native American admixture; secondly,
the source of the African ancestry. In the British West Indian islands (and also in the United States), the majority
of enslaved blacks brought across the Atlantic came from West Africa (roughly between modern Senegal and Ghana).
Very little of Bermuda's original black immigration came from this area. The first blacks to arrive in Bermuda in
any numbers were free blacks from Spanish-speaking areas of the West Indies, and most of the remainder were recently
enslaved Africans captured from the Spanish and Portuguese. Spain and Portugal sourced most of their slaves from South-West Africa (the Portuguese through ports in modern-day Angola; the Spanish purchased most of their African slaves from Portuguese traders, and from Arabs whose slave trading was centred in Zanzibar). Genetic studies have
consequently shown that the African ancestry of black Bermudians (other than those resulting from recent immigration
from the British West Indian islands) is largely from a band across southern Africa, from Angola to Mozambique,
which is similar to what is revealed in Latin America, but distinctly different from the blacks of the West Indies
and the United States. Most of Bermuda's black population trace some of their ancestry to Native Americans, although
awareness of this is largely limited to St David's Islanders and most who have such ancestry are unaware of it. During
the colonial period, hundreds of Native Americans were shipped to Bermuda. The best-known examples were the Algonquian
peoples who were exiled from the southern New England colonies and sold into slavery in the 17th century, notably
in the aftermaths of the Pequot and King Philip's wars. Bermuda's culture is a mixture of the various sources of
its population: Native American, Spanish-Caribbean, English, Irish, and Scots cultures were evident in the 17th century,
and became part of the dominant British culture. English is the primary and official language. Due to 160 years of
immigration from Portuguese Atlantic islands (primarily the Azores, though also from Madeira and the Cape Verde Islands),
a portion of the population also speaks Portuguese. There are strong British influences, together with Afro-Caribbean
ones. The first notable, and historically important, book credited to a Bermudian was The History of Mary Prince,
a slave narrative by Mary Prince. It is thought to have contributed to the abolition of slavery in the British Empire.
Ernest Graham Ingham, an expatriate author, published his books at the turn of the 19th and 20th centuries. In the
20th century, numerous books were written and published locally, though few were directed at a wider market than
Bermuda. (These consisted primarily of scholarly works rather than creative writing.) The novelist Brian Burland (1931–2010) achieved a degree of success and acclaim internationally. More recently, Angela Barry has won critical
recognition for her published fiction. Bermuda watercolours painted by local artists are sold at various galleries.
Hand-carved cedar sculptures are another speciality. One such 7 ft (2.1 m) sculpture, created by Bermudian sculptor
Chesley Trott, is installed at the airport's baggage claim area. In 2010, his sculpture The Arrival was unveiled
near the bay to commemorate the freeing of slaves from the American brig Enterprise in 1835. Local artwork may also
be viewed at several galleries around the island. Alfred Birdsey was one of the more famous and talented watercolourists;
his impressionistic landscapes of Hamilton, St George's and the surrounding sailboats, homes, and bays of Bermuda
are world-renowned. Bermuda was discovered in 1503 by Spanish explorer Juan de Bermúdez. It is mentioned in Legatio
Babylonica, published in 1511 by historian Pedro Mártir de Anglería, and was also included on Spanish charts of that
year. Both Spanish and Portuguese ships used the islands as a replenishment spot to take on fresh meat and water.
Legends arose of spirits and devils, now thought to have stemmed from the calls of raucous birds (most likely the
Bermuda petrel, or Cahow) and the loud noise heard at night from wild hogs. Combined with the frequent storm-wracked
conditions and the dangerous reefs, the archipelago became known as the Isle of Devils. Neither Spain nor Portugal tried to settle it. The English Virginia Company established a colony at Jamestown, Virginia, in 1607. Two years later, a flotilla of seven
ships left England under the Company's Admiral, Sir George Somers, and the new Governor of Jamestown, Sir Thomas
Gates, with several hundred settlers, food and supplies to relieve the colony of Jamestown. Somers had previous experience
sailing with both Sir Francis Drake and Sir Walter Raleigh. The flotilla was broken up by a storm. As the flagship,
the Sea Venture, was taking on water, Somers drove it onto Bermuda's reef and gained the shores safely with smaller
boats – all 150 passengers and a dog survived. (William Shakespeare's play The Tempest, in which the character Ariel
refers to the "still-vex'd Bermoothes" (I.ii.229), is thought to have been inspired by William Strachey's account
of this shipwreck.) They stayed 10 months, starting a new settlement and building two small ships to sail to Jamestown.
The island was claimed for the English Crown, and the charter of the Virginia Company was later extended to include
it. In 1610, all but three of the survivors of the Sea Venture sailed on to Jamestown. Among them was John Rolfe,
whose wife and child died and were buried in Bermuda. Later in Jamestown he married Pocahontas, a daughter of the
powerful Powhatan, leader of a large confederation of about 30 Algonquian-speaking tribes in coastal Virginia. In
1612, the English began intentional settlement of Bermuda with the arrival of the ship Plough. St. George's was settled
that year and designated as Bermuda's first capital. It is the oldest continually inhabited English town in the New
World. Because of its limited land area, Bermuda has had difficulty with over-population. In the first two centuries
of settlement, it relied on steady human emigration to keep the population manageable.[citation needed] Before the
American Revolution more than ten thousand Bermudians (over half of the total population through the years) gradually
emigrated, primarily to the Southern United States. As Great Britain displaced Spain as the dominant European imperial
power, it opened up more land for colonial development. A steady trickle of outward migration continued. With seafaring
the only real industry in the early decades, by the end of the 18th century, at least a third of the island's manpower
was at sea at any one time. In the 17th century, the Somers Isles Company suppressed shipbuilding, as it needed Bermudians
to farm to generate income from the land. Agricultural production met with limited success, however. The Bermuda
cedar boxes used to ship tobacco to England were reportedly worth more than their contents.[citation needed] The
colony of Virginia far surpassed Bermuda in both quality and quantity of tobacco produced. Bermudians began to turn
to maritime trades relatively early in the 17th century, but the Somers Isles Company used all its authority to suppress
turning away from agriculture. This interference led to the islanders demanding, and receiving, the revocation of
the Company's charter in 1684, and the Company was dissolved. The end of the American War of Independence, however, was to cause profound change in Bermuda, though some of those changes would take decades to crystallise. Following the war, with the buildup
of Naval and military forces in Bermuda, the primary leg of the Bermudian economy became defence infrastructure.
Even after tourism began later in the 19th century, Bermuda remained, in the eyes of London, a base more than a colony.
The Crown strengthened its political and economic ties to Bermuda, and the colony's independence on the world stage
was diminished. The war had removed Bermuda's primary trading partners, the American colonies, from the empire, and
dealt a harsh blow to Bermuda's merchant shipping trade. This also suffered due to the deforestation of Bermuda,
as well as the advent of metal ships and steam propulsion, for which it did not have raw materials. During the course
of the following War of 1812, the primary market for Bermuda's salt disappeared as the Americans developed their
own sources. Control of the Turks Islands had passed to the Bahamas in 1819. During the Second Boer War (1899–1902), Bermuda was used to hold Boer prisoners of war. The most famous escapee was the Boer prisoner of war Captain Fritz Joubert Duquesne, who was serving a life sentence for "conspiracy against the British government and on (the charge of) espionage". On the night of 25 June 1902, Duquesne slipped out of his tent, worked his way
over a barbed-wire fence, swam 1.5 miles (2.4 km) past patrol boats and bright spotlights, through storm-wracked, shark-infested waters, using a distant lighthouse for navigation until he arrived ashore on the main island. From
there he escaped to the port of St. George's and a week later, he stowed away on a boat heading to Baltimore, Maryland.
He settled in the US and later became a spy for Germany in both World Wars. In 1942, Col. Duquesne was arrested by
the FBI for leading the Duquesne Spy Ring, which remains to this day the largest espionage case in the history of the United States. After several failed attempts, in 1930 the first aeroplane reached Bermuda. A Stinson Detroiter seaplane
flying from New York, it had to land twice in the ocean: once because of darkness and again to refuel. Navigation
and weather forecasting improved in 1933 when the Royal Air Force (then responsible for providing equipment and personnel
for the Royal Navy's Fleet Air Arm) established a station at the Royal Naval Dockyard to repair (and supply replacement)
float planes for the fleet. In 1936 Luft Hansa began to experiment with seaplane flights from Berlin via the Azores
with continuation to New York City. In 1937, Imperial Airways and Pan American World Airways began operating scheduled
flying-boat airline services from New York and Baltimore to Darrell's Island, Bermuda. In 1948, regularly scheduled
commercial airline service by land-based aeroplanes began to Kindley Field (now L.F. Wade International Airport),
helping tourism to reach its peak in the 1960s–1970s. By the end of the 1970s, international business had supplanted
tourism as the dominant sector of Bermuda's economy (see Economy of Bermuda). Executive authority in Bermuda is vested
in the monarch and is exercised on her behalf by the Governor. The governor is appointed by the Queen on the advice
of the British Government. The current governor is George Fergusson; he was sworn in on 23 May 2012. There is also
a Deputy Governor (currently David Arkley JP). Defence and foreign affairs are carried out by the United Kingdom,
which also retains responsibility to ensure good government. It must approve any changes to the Constitution of Bermuda.
Bermuda is classified as a British Overseas Territory, but it is the oldest British colony. In 1620, a Royal Assent
granted Bermuda limited self-governance; its Parliament is the fifth oldest in the world, behind the Parliament of
the United Kingdom, the Tynwald of the Isle of Man, the Althing of Iceland, and the Sejm of Poland. Of these, only Bermuda's parliament and the Isle of Man's Tynwald have been in continuous existence since 1620. The Constitution of Bermuda came into
force on 1 June 1967; it was amended in 1989 and 2003. The head of government is the premier. A cabinet is nominated
by the premier and appointed officially by the governor. The legislative branch consists of a bicameral parliament
modelled on the Westminster system. The Senate is the upper house, consisting of 11 members appointed by the governor
on the advice of the premier and the leader of the opposition. The House of Assembly, or lower house, has 36 members,
elected by the eligible voting populace in secret ballot to represent geographically defined constituencies. There
are few accredited diplomats in Bermuda. The United States maintains the largest diplomatic mission in Bermuda, comprising
both the United States Consulate and the US Customs and Border Protection Services at the L.F. Wade International
Airport. The current US Consul General is Robert Settje, who took office in August 2012. The United States is Bermuda's
largest trading partner (providing over 71% of total imports, 85% of tourist visitors, and an estimated $163 billion
of US capital in the Bermuda insurance/re-insurance industry), and an estimated 5% of Bermuda residents are US citizens,
representing 14% of all foreign-born persons. The American diplomatic presence is an important element in the Bermuda
political landscape. On 11 June 2009, four Uyghurs who had been held in the United States Guantánamo Bay detention
camp, in Cuba, were transferred to Bermuda. The four men were among 22 Uyghurs who claimed to be refugees and who were captured in 2001 in Pakistan after fleeing the American aerial bombardment of Afghanistan. They were accused of training
to assist the Taliban's military. They were cleared as safe for release from Guantánamo in 2005 or 2006, but US domestic
law prohibited deporting them back to China, their country of citizenship, because the US government determined that
China was likely to violate their human rights. Homosexuality was decriminalised in Bermuda with the passage of the
Stubbs Bill in May 1994. In February 2016, PLP MP Wayne Furbert introduced a private member's bill to amend Bermuda's Human Rights Act to disallow same-sex marriage under the Act. The OBA government simultaneously introduced a bill to permit civil unions. Both measures were in response to an earlier ruling by the Hon. Mr Justice Ian Kawaley, Chief Justice of Bermuda, that same-sex spouses of Bermuda citizens could not be denied basic human rights. CARICOM, the Caribbean Community, is a socio-economic bloc of nations in or near the Caribbean Sea. Other outlying member
states include the Co-operative Republic of Guyana and the Republic of Suriname in South America, along with Belize
in Central America. The Turks and Caicos Islands, an associate member of CARICOM, and the Commonwealth of The Bahamas,
a full member of CARICOM, are in the Atlantic, but near to the Caribbean. Other nearby nations or territories, such
as the United States, are not members (although the US Commonwealth of Puerto Rico has observer status, and the United
States Virgin Islands announced in 2007 they would seek ties with CARICOM). Bermuda, at roughly a thousand miles
from the Caribbean Sea, has little trade with, and little economically in common with, the region, and joined primarily
to strengthen cultural links. Bermuda was colonised by the English as an extension of Virginia and has long had close
ties with the US Atlantic Seaboard and Canadian Maritimes as well as the UK. It had a history of African slavery,
although Britain abolished it decades before the US. Since the 20th century, there has been considerable immigration
to Bermuda from the West Indies, as well as continued immigration from Portuguese Atlantic islands. Unlike immigrants
from British colonies in the West Indies, the latter immigrants have had greater difficulty in becoming permanent
residents as they lacked British citizenship, mostly spoke no English, and required renewal of work permits to remain
beyond an initial period. From the 1950s onwards, Bermuda relaxed its immigration laws, allowing increased immigration
from Britain and Canada. Some Black politicians accused the government of using this device to counter the West Indian
immigration of previous decades. The PLP, the party in government when the decision to join CARICOM was made, has
been dominated for decades by West Indians and their descendants. (The prominent roles of West Indians among Bermuda's
black politicians and labour activists predated party politics in Bermuda, as exemplified by Dr. E. F. Gordon). The
late PLP leader, Dame Lois Browne-Evans, and her Trinidadian-born husband, John Evans (who co-founded the West Indian
Association of Bermuda in 1976), were prominent members of this group. They have emphasised Bermuda's cultural connections
with the West Indies. Many Bermudians, both black and white, who lack family connections to the West Indies have
objected to this emphasis. Once known as "the Gibraltar of the West" and "Fortress Bermuda", Bermuda today is defended
by forces of the British government. For the first two centuries of settlement, the most potent armed force operating
from Bermuda was its merchant shipping fleet, which turned to privateering at every opportunity. The Bermuda government
maintained a local militia. After the American Revolutionary War, Bermuda was established as the Western Atlantic
headquarters of the Royal Navy. Once the Royal Navy established a base and dockyard defended by regular soldiers,
however, the militias were disbanded following the War of 1812. At the end of the 19th century, the colony raised
volunteer units to form a reserve for the military garrison. In May 1940, the US requested base rights in Bermuda
from the United Kingdom, but British Prime Minister Winston Churchill was initially unwilling to accede to the American
request without getting something in return. In September 1940, as part of the Destroyers for Bases Agreement, the
UK granted the US base rights in Bermuda. Bermuda and Newfoundland were not originally included in the agreement,
but both were added to it, with no war material received by the UK in exchange. One of the terms of the agreement
was that the airfield the US Army built would be used jointly by the US and the UK (which it was for the duration
of the war, with RAF Transport Command relocating there from Darrell's Island in 1943). Construction began in 1941
of two airbases consisting of 5.8 km2 (2.2 sq mi) of land, largely reclaimed from the sea. For many years, Bermuda's
bases were used by US Air Force transport and refuelling aircraft and by US Navy aircraft patrolling the Atlantic
for enemy submarines, first German and, later, Soviet. The principal installation, Kindley Air Force Base on the
eastern coast, was transferred to the US Navy in 1970 and redesignated Naval Air Station Bermuda. As a naval air
station, the base continued to host both transient and deployed USN and USAF aircraft, as well as transitioning or
deployed Royal Air Force and Canadian Forces aircraft. The original NAS Bermuda on the west side of the island, a
seaplane base until the mid-1960s, was designated as the Naval Air Station Bermuda Annex. It provided optional anchorage
and/or dockage facilities for transiting US Navy, US Coast Guard and NATO vessels, depending on size. An additional
US Navy compound known as Naval Facility Bermuda (NAVFAC Bermuda), a SOSUS station, was located to the west of the
Annex near a Canadian Forces communications facility. Although leased for 99 years, US forces withdrew in 1995, as
part of the wave of base closures following the end of the Cold War. Bermudians served in the British armed forces
during both World War I and World War II. After the latter, Major-General Glyn Charles Anglim Gilbert, Bermuda's
highest-ranking soldier, was instrumental in developing the Bermuda Regiment. A number of other Bermudians and their
descendants had preceded him into senior ranks, including Bahamian-born Admiral Lord Gambier, and Bermudian-born
Royal Marines Brigadier Harvey. When promoted to Brigadier at age 39, following his wounding at the Anzio landings,
Harvey became the youngest-ever Royal Marine Brigadier. The Cenotaph in front of the Cabinet Building (in Hamilton)
was erected in tribute to Bermuda's Great War dead (the tribute was later extended to Bermuda's Second World War
dead) and is the site of the annual Remembrance Day commemoration. In 1970 the country switched its currency from
the Bermudian pound to the Bermudian dollar, which is pegged at par with the US dollar. US notes and coins are used
interchangeably with Bermudian notes and coins within the islands for most practical purposes; however, banks levy
an exchange rate fee for the purchase of US dollars with Bermudian dollars. Bermudian notes carry the image of Queen
Elizabeth II. The Bermuda Monetary Authority is the issuing authority for all banknotes and coins, and regulates
financial institutions. The Royal Naval Dockyard Museum holds a permanent exhibition of Bermuda notes and coins.
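Because the Bermudian dollar is pegged at par with the US dollar, the only cost of buying US currency locally is the bank's exchange fee described above. A minimal sketch of that arithmetic (the `usd_purchase_cost` helper and the 1% fee rate are hypothetical illustrations; the text does not state the actual fee charged):

```python
def usd_purchase_cost(usd_amount, fee_rate=0.01):
    """Cost in Bermudian dollars of buying US dollars.

    The BMD is pegged 1:1 to the USD, so the base cost equals the
    face value; the bank's exchange fee is added on top. The 1%
    default fee_rate is a hypothetical figure for illustration.
    """
    return usd_amount * (1 + fee_rate)

# Buying US$200 at a 1% fee would cost BD$202.
print(usd_purchase_cost(200))
```

For everyday spending the peg means the two currencies circulate interchangeably; the fee only matters when formally exchanging one for the other at a bank.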
Bermuda is an offshore financial centre, a status that results from its minimal business regulation and the absence of direct taxation on personal or corporate income. It has one of the highest consumption taxes in the world and taxes all imports in lieu of an income tax system. For residents, Bermuda's consumption tax functions as the equivalent of a local income tax, funding government and infrastructure expenditures. The local tax system depends upon import duties,
payroll taxes and consumption taxes. The legal system is derived from that of the United Kingdom, with recourse to
English courts of final appeal. Foreign private individuals cannot easily open bank accounts or subscribe to mobile
phone or internet services. There are four hundred securities listed on the Bermuda Stock Exchange (BSX), of which almost three
hundred are offshore funds and alternative investment structures attracted by Bermuda's regulatory environment. The
Exchange specialises in listing and trading of capital market instruments such as equities, debt issues, funds (including
hedge fund structures) and depository receipt programmes. The BSX is a full member of the World Federation of Exchanges
and is located in an OECD member nation. It also has Approved Stock Exchange status under Australia's Foreign Investment
Fund (FIF) taxation rules and Designated Investment Exchange status by the UK's Financial Services Authority. Many
sports popular today were formalised by British Public schools and universities in the 19th century. These schools
produced the civil servants and military and naval officers required to build and maintain the British empire, and
team sports were considered a vital tool for training their students to think and act as part of a team. Former public
schoolboys continued to pursue these activities, and founded organisations such as the Football Association (FA).
Today's association of football with the working classes began in 1885 when the FA changed its rules to allow professional
players. The professionals soon displaced the amateur ex-Public schoolboys. Bermuda's role as the primary Royal Navy
base in the Western Hemisphere, with an army garrison to match, ensured that the naval and military officers quickly
introduced the newly formalised sports to Bermuda, including cricket, football, Rugby football, and even tennis and
rowing (rowing did not adapt well from British rivers to the stormy Atlantic, and the officers soon switched to sail racing, founding the Royal Bermuda Yacht Club). Once these sports reached Bermuda, they were eagerly adopted by Bermudians.
Bermuda's national cricket team participated in the Cricket World Cup 2007 in the West Indies. Their most famous
player is a 130-kilogram (290 lb) police officer named Dwayne Leverock. India, however, defeated Bermuda, setting a record of 413 runs in a One-Day International (ODI), and Bermuda were knocked out of the World Cup. Also very well known is
David Hemp, a former captain of Glamorgan in English first class cricket. The annual "Cup Match" cricket tournament
between rival parishes St George's in the east and Somerset in the west is the occasion for a popular national holiday.
This tournament began in 1872 when Captain Moresby of the Royal Navy introduced the game to Bermuda, holding a match
at Somerset to mark forty years since the unjust thraldom of slavery. The East End versus West End rivalry resulted
from the locations of the St. George's Garrison (the original army headquarters in Bermuda) on Barrack Hill, St.
George's, and the Royal Naval Dockyard at Ireland Island. Moresby founded the Somerset Cricket Club which plays the
St. George's Cricket Club in this game (the membership of both clubs has long been mostly civilian). At the 2004
Summer Olympics, Bermuda competed in sailing, athletics, swimming, diving, triathlon and equestrian events. In those
Olympics, Bermuda's Katura Horton-Perinchief made history by becoming the first black female diver to compete in
the Olympic Games. Bermuda has had one Olympic medallist, Clarence Hill, who won a bronze medal in boxing. Bermuda
also competed in Men's Skeleton at the 2006 Winter Olympics in Turin, Italy. Patrick Singleton placed 19th, with
a final time of 1:59.81. Jillian Teceira competed in the Beijing Olympics in 2008. It is tradition for Bermuda to
march in the Opening Ceremony in Bermuda shorts, whether at the Summer or Winter Olympics. Bermuda
also competes in the biennial Island Games, which it hosted in 2013. Bermuda has developed a proud Rugby Union community.
The Bermuda Rugby Union team won the 2011 Caribbean championships, defeating Guyana in the final. They previously
beat The Bahamas and Mexico to take the crown. Rugby 7's is also played, with four rounds scheduled to take place
in the 2011–2012 season. The Bermuda 7's team competed in the 2011 Las Vegas 7's, defeating the Mexican team. There
are four clubs on the island: (1) Police, (2) Mariners, (3) Teachers, and (4) Renegades. There are both men's and women's competitions; the current league champions are Police (men's, winning the title for the first time since the 1990s) and Renegades (women's). Games are currently played at Warwick Academy. Bermuda's under-19 team won the 2010 Caribbean Championships.
Modern-day Nigeria has been the site of numerous kingdoms and tribal states over the millennia. The modern state originated
from British colonial rule beginning in the 19th century, and the merging of the Southern Nigeria Protectorate and
Northern Nigeria Protectorate in 1914. The British set up administrative and legal structures whilst practising indirect
rule through traditional chiefdoms. Nigeria became a formally independent federation in 1960, and plunged into a
civil war from 1967 to 1970. It has since alternated between democratically-elected civilian governments and military
dictatorships, until it achieved a stable democracy in 1999, with its 2011 presidential elections being viewed as
the first to be conducted reasonably freely and fairly. Nigeria is often referred to as the "Giant of Africa", owing
to its large population and economy. With approximately 182 million inhabitants, Nigeria is the most populous country
in Africa and the seventh most populous country in the world. Nigeria has one of the largest populations of youth
in the world. The country is viewed as a multinational state, as it is inhabited by over 500 ethnic groups, of which
the three largest are the Hausa, Igbo and Yoruba; these ethnic groups speak over 500 different languages, and are
identified with a wide variety of cultures. The official language is English. Nigeria is divided roughly in half between
Christians, who live mostly in the southern part of the country, and Muslims in the northern part. A minority of
the population practise religions indigenous to Nigeria, such as those native to the Igbo and Yoruba peoples. As of 2015,
Nigeria is the world's 20th largest economy, worth more than $500 billion and $1 trillion in terms of nominal GDP
and purchasing power parity respectively. It overtook South Africa to become Africa's largest economy in 2014. Also,
the debt-to-GDP ratio is only 11 percent, which is 8 percent below the 2012 ratio. Nigeria is considered to be an
emerging market by the World Bank; it has been identified as a regional power on the African continent, a middle
power in international affairs, and has also been identified as an emerging global power. Nigeria is a member of
the MINT group of countries, which are widely seen as the globe's next "BRIC-like" economies. It is also listed among
the "Next Eleven" economies set to become among the biggest in the world. Nigeria is a founding member of the Commonwealth
of Nations, the African Union, OPEC, and the United Nations amongst other international organisations. Since 2002,
the North East of the country has seen sectarian violence by Boko Haram, an Islamist movement that seeks to abolish
the secular system of government and establish Sharia law. Nigerian President Goodluck Jonathan in May 2014 claimed
that Boko Haram attacks have left at least 12,000 people dead and 8,000 people crippled. At the same time, neighbouring
countries, Benin, Chad, Cameroon and Niger joined Nigeria in a united effort to combat Boko Haram in the aftermath
of the widely publicised kidnapping of 276 schoolgirls and the spread of Boko Haram attacks to these countries.
The name Nigeria was taken from the Niger River, which runs through the country. The name was allegedly coined in the late 19th century by British journalist Flora Shaw, who was inspired by the name of the river and preferred it to terms such as "Central Sudan". The word Niger is an alteration of the Tuareg name egerew n-igerewen, meaning "river of rivers", used by inhabitants along the middle reaches of the river around Timbuktu prior to 19th-century European colonialism. The Kingdom
of Nri of the Igbo people consolidated in the 10th century and continued until it lost its sovereignty to the British
in 1911. Nri was ruled by the Eze Nri, and the city of Nri is considered to be the foundation of Igbo culture. Nri
and Aguleri, where the Igbo creation myth originates, are in the territory of the Umeuri clan. Members of the clan
trace their lineages back to the patriarchal king-figure Eri. In West Africa, the oldest bronzes made using the lost-wax
process were from Igbo Ukwu, a city under Nri influence. For centuries, various peoples in modern-day Nigeria traded
overland with traders from North Africa. Cities in the area became regional centres in a broad network of trade routes
that spanned western, central and northern Africa. In the 16th century, Spanish and Portuguese explorers were the
first Europeans to begin significant, direct trade with peoples of modern-day Nigeria, at the port they named Lagos
and in Calabar. Europeans traded goods with peoples at the coast; coastal trade with Europeans also marked the beginnings
of the Atlantic slave trade. The port of Calabar on the historical Bight of Biafra (now commonly referred to as the
Bight of Bonny) became one of the largest slave-trading posts in West Africa in the era of the transatlantic slave
trade. Other major slaving ports in Nigeria were located in Badagry, Lagos on the Bight of Benin and on Bonny Island
on the Bight of Biafra. The majority of those enslaved and taken to these ports were captured in raids and wars.
Usually the captives were taken back to the conquerors' territory as forced labour; over time, they were sometimes
acculturated and absorbed into the conquerors' society. A number of slave routes were established throughout Nigeria
linking the hinterland areas with the major coastal ports. Some of the more prolific slave traders were linked with
the Oyo Empire in the southwest, the Aro Confederacy in the southeast and the Sokoto Caliphate in the north. The
slave trade was engaged in by European state and non-state actors such as Great Britain, the Netherlands, Portugal
and private companies, as well as various African states and non-state actors. With rising anti-slavery sentiment
at home and changing economic realities, Great Britain outlawed the international slave trade in 1807. Following
the Napoleonic Wars, Great Britain established the West Africa Squadron in an attempt to halt the international traffic
in slaves. It stopped ships of other nations that were leaving the African coast with slaves; the seized slaves were
taken to Freetown, a colony in West Africa originally established for the resettlement of freed slaves from Britain.
Britain intervened in the Lagos Kingship power struggle by bombarding Lagos in 1851, deposing the slave trade friendly
Oba Kosoko, helping to install the amenable Oba Akitoye, and signing the Treaty between Great Britain and Lagos on
1 January 1852. Britain annexed Lagos as a Crown Colony in August 1861 with the Lagos Treaty of Cession. British
missionaries expanded their operations and travelled further inland. In 1864, Samuel Ajayi Crowther became the first
African bishop of the Anglican Church. In 1885, British claims to a West African sphere of influence received recognition
from other European nations at the Berlin Conference. The following year, Britain chartered the Royal Niger Company under
the leadership of Sir George Taubman Goldie. In 1900 the company's territory came under the control of the British
government, which moved to consolidate its hold over the area of modern Nigeria. On 1 January 1901, Nigeria became
a British protectorate, and part of the British Empire, the foremost world power at the time. In the late 19th and
early 20th centuries the independent kingdoms of what would become Nigeria fought a number of conflicts against the
British Empire's efforts to expand its territory. By war, the British conquered Benin in 1897, and, in the Anglo-Aro
War (1901–1902), defeated other opponents. The restraint or conquest of these states opened up the Niger area to
British rule. Christian missions established Western educational institutions in the Protectorates. Under Britain's
policy of indirect rule and validation of Islamic tradition, the Crown did not encourage the operation of Christian
missions in the northern, Islamic part of the country. Some children of the southern elite went to Great Britain
to pursue higher education. By independence in 1960, regional differences in modern educational access were marked.
The legacy, though less pronounced, continues to the present-day. Imbalances between North and South were expressed
in Nigeria's political life as well. For instance, northern Nigeria did not outlaw slavery until 1936 whilst in other
parts of Nigeria slavery was abolished soon after colonialism. Nigeria gained independence from the United Kingdom
as a Commonwealth Realm on 1 October 1960. Nigeria's government was a coalition of conservative parties: the Northern People's Congress (NPC), a party dominated by Northerners and those of the Islamic faith, and the Igbo- and Christian-dominated
National Council of Nigeria and the Cameroons (NCNC) led by Nnamdi Azikiwe. Azikiwe became Nigeria's maiden Governor-General
in 1960. The opposition comprised the comparatively liberal Action Group (AG), which was largely dominated by the
Yoruba and led by Obafemi Awolowo. The cultural and political differences between Nigeria's dominant ethnic groups
– the Hausa ('Northerners'), Igbo ('Easterners') and Yoruba ('Westerners') – were sharp. The disequilibrium and perceived
corruption of the electoral and political process led, in 1966, to back-to-back military coups. The first coup was
in January 1966 and was led by Igbo soldiers under Majors Emmanuel Ifeajuna and Chukwuma Kaduna Nzeogwu. The coup
plotters succeeded in murdering Prime Minister Abubakar Tafawa Balewa, Premier Ahmadu Bello of the Northern Region
and Premier Ladoke Akintola of the Western Region. However, the coup plotters struggled to form a central government.
President Nwafor Orizu handed over government control to the Army, then under the command of another Igbo officer,
General JTU Aguiyi-Ironsi. In May 1967, the Eastern Region declared independence as a state called the Republic of
Biafra, under the leadership of Lt. Colonel Emeka Ojukwu. The Nigerian Civil War began when the official Nigerian government side (dominated by soldiers from the North and West) attacked Biafra in the Southeast on 6 July 1967 at Garkem.
The 30-month war, with a long siege of Biafra and its isolation from trade and supplies, ended in January 1970. Estimates of the number of dead in the former Eastern Region range between 1 and 3 million people, from warfare, disease, and starvation. During the oil boom of the 1970s, Nigeria joined OPEC, and the huge revenues generated enriched the economy. Despite huge revenues from oil production and sale, the military administration
did little to improve the standard of living of the population, help small and medium businesses, or invest in infrastructure.
As oil revenues fuelled the rise of federal subventions to states, the federal government became the centre of political
struggle and the threshold of power in the country. As oil production and revenue rose, the Nigerian government became
increasingly dependent on oil revenues and the international commodity markets for budgetary and economic concerns.
It did not develop other sectors of the economy to ensure economic stability. That spelled doom for federalism in Nigeria.
Beginning in 1979, Nigerians participated in a brief return to democracy when Olusegun Obasanjo transferred power
to the civilian regime of Shehu Shagari. The Shagari government became viewed as corrupt and incompetent by virtually
all sectors of Nigerian society. The military coup of Muhammadu Buhari shortly after the regime's fraudulent re-election
in 1984 was generally viewed as a positive development. Buhari promised major reforms, but his government fared little
better than its predecessor. His regime was overthrown by another military coup in 1985. The new head of state, Ibrahim
Babangida, declared himself president and commander in chief of the armed forces and the ruling Supreme Military
Council. He set 1990 as the official deadline for a return to democratic governance. Babangida's tenure was marked
by a flurry of political activity: he instituted the International Monetary Fund's Structural Adjustment Program
(SAP) to aid in the repayment of the country's crushing international debt, which most federal revenue was dedicated
to servicing. He enrolled Nigeria in the Organisation of the Islamic Conference, which aggravated religious tensions
in the country. After Babangida survived an abortive coup, he pushed back the promised return to democracy to 1992.
Free and fair elections were finally held on 12 June 1993, with a presidential victory for Moshood Kashimawo Olawale
Abiola. Babangida annulled the elections, leading to mass violent civilian protests that effectively shut down the
country for weeks. Babangida finally kept his promise to relinquish office to a civilian-run government, but not
before appointing Ernest Shonekan as head of the interim government. Babangida's regime has been considered the most
corrupt, and responsible for creating a culture of corruption in Nigeria. Nigeria regained democracy in 1999 when
it elected Olusegun Obasanjo, the former military head of state, as the new President of Nigeria. This ended almost 33 years of military rule (from 1966 until 1999, excluding the short-lived Second Republic of 1979–1983) by military dictators who seized power in coups d'état and counter-coups during the Nigerian military juntas of 1966–1979 and 1983–1998. Although the elections which brought Obasanjo to power in 1999 and again in 2003 were condemned as
unfree and unfair, Nigeria has shown marked improvements in attempts to tackle government corruption and to hasten
development. Goodluck Jonathan served as Nigeria's president until 16 April 2011, when a new presidential election
in Nigeria was conducted. Jonathan of the PDP was declared the winner on 19 April 2011, having won the election with
a total of 22,495,187 of the 39,469,484 votes cast, to stand ahead of Muhammadu Buhari from the main opposition party,
the Congress for Progressive Change (CPC), which won 12,214,853 of the total votes cast. The international media
reported the elections as having run smoothly with relatively little violence or voter fraud, in contrast to previous
elections. Nigeria is a Federal Republic modelled after the United States, with executive power exercised by the
president. It is influenced by the Westminster System model in the composition and management of
the upper and lower houses of the bicameral legislature. The president presides as both Head of State and head of
the national executive; the leader is elected by popular vote to a maximum of two 4-year terms. In the March 28,
2015 presidential election, General Muhammadu Buhari emerged victorious to become the Federal President of Nigeria,
defeating then incumbent Goodluck Jonathan. Ethnocentrism, tribalism, religious persecution, and prebendalism have
affected Nigerian politics both prior and subsequent to independence in 1960. Kin-selective altruism has made its
way into Nigerian politics, resulting in tribalist efforts to concentrate Federal power to a particular region of
their interests. Nationalism has also led to active secessionist movements such as MASSOB, nationalist movements such as the Oodua Peoples Congress and the Movement for the Emancipation of the Niger Delta, and a civil war. Nigeria's three
largest ethnic groups (Hausa, Igbo and Yoruba) have maintained historical preeminence in Nigerian politics; competition
amongst these three groups has fuelled corruption and graft. Because of the above issues, Nigeria's political parties
are pan-national and secular in character (though this does not preclude the continuing preeminence of the dominant
ethnicities). The major political parties at that time included the then ruling People's Democratic Party of Nigeria,
which maintained 223 seats in the House and 76 in the Senate (61.9% and 69.7% respectively); the opposition All Progressives Congress (formerly the All Nigeria People's Party) held 96 House seats and 27 in the Senate (26.6% and 24.7%).
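The quoted seat shares can be reproduced from the chamber sizes of Nigeria's National Assembly (360 House seats and 109 Senate seats; those totals are assumed here from the Assembly's standard composition rather than stated in the text). The percentages appear to be truncated to one decimal place rather than rounded, as a quick check shows:

```python
import math

def seat_share(seats, chamber_size):
    """Percentage of a chamber held by a party, truncated to one
    decimal place to match the figures quoted in the text."""
    return math.floor(seats / chamber_size * 1000) / 10

# PDP: 223 of 360 House seats, 76 of 109 Senate seats.
print(seat_share(223, 360), seat_share(76, 109))   # 61.9 69.7
# APC: 96 House seats, 27 Senate seats.
print(seat_share(96, 360), seat_share(27, 109))    # 26.6 24.7
```

Rounding instead of truncating would give 26.7% and 24.8% for the APC figures, which is why truncation is used in the sketch.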
About twenty minor opposition parties are registered. Nigeria's foreign policy was tested in the 1970s after the
country emerged united from its own civil war. It supported movements against white minority governments in the Southern
Africa sub-region. Nigeria backed the African National Congress (ANC) by taking a committed tough line with regard
to the South African government and their military actions in southern Africa. Nigeria was also a founding member
of the Organisation for African Unity (now the African Union), and has tremendous influence in West Africa and Africa
on the whole. Nigeria has additionally founded regional cooperative efforts in West Africa, functioning as standard-bearer
for the Economic Community of West African States (ECOWAS) and ECOMOG, economic and military organisations, respectively.
Nigeria has a varied landscape. The far south is defined by its tropical rainforest climate, where annual rainfall
is 60 to 80 inches (1,500 to 2,000 mm) a year. In the southeast stands the Obudu Plateau. Coastal plains are found
in both the southwest and the southeast. This forest zone's most southerly portion is defined as "salt water swamp,"
also known as a mangrove swamp because of the large amount of mangroves in the area. North of this is fresh water
swamp, containing different vegetation from the salt water swamp, and north of that is rain forest. The area near
the border with Cameroon close to the coast is rich rainforest and part of the Cross-Sanaga-Bioko coastal forests
ecoregion, an important centre for biodiversity. It is habitat for the drill monkey, which is found in the wild only
in this area and across the border in Cameroon. The areas surrounding Calabar, Cross River State, also in this forest,
are believed to contain the world's largest diversity of butterflies. The area of southern Nigeria between the Niger
and the Cross Rivers has lost most of its forest because of development and harvesting by increased population, with
it being replaced by grassland (see Cross-Niger transition forests). Everything in between the far south and the
far north is savannah (sparse tree cover, with grasses and flowers growing between the trees). Rainfall is more
limited, to between 500 and 1,500 millimetres (20 and 60 in) per year. The savannah zone's three categories are Guinean
forest-savanna mosaic, Sudan savannah, and Sahel savannah. Guinean forest-savanna mosaic is plains of tall grass
interrupted by trees. Sudan savannah is similar but with shorter grasses and shorter trees. Sahel savannah consists
of patches of grass and sand, found in the northeast. In the Sahel region, rain is less than 500 millimetres (20
in) per year and the Sahara Desert is encroaching. In the dry north-east corner of the country lies Lake Chad, which
Nigeria shares with Niger, Chad and Cameroon. Waste management including sewage treatment, the linked processes of
deforestation and soil degradation, and climate change or global warming are the major environmental problems in
Nigeria. Waste management presents problems in a mega city like Lagos and other major Nigerian cities which are linked
with economic development, population growth and the inability of municipal councils to manage the resulting rise
in industrial and domestic waste. This waste management problem is also attributable to unsustainable environmental practices in communities such as Kubwa in the Federal Capital Territory, where waste is disposed of indiscriminately and dumped along or into canals and sewerage channels. Nigeria is divided into thirty-six states and one Federal Capital Territory, which are further sub-divided into
774 Local Government Areas (LGAs). The plethora of states, of which there were only three at independence, reflects the country's tumultuous history and the difficulties of managing such a heterogeneous national entity at all levels
of government. In some contexts, the states are aggregated into six geopolitical zones: North West, North East, North
Central, South East, South South, and South West. Nigeria was ranked 30th in the world in terms of GDP (PPP) in 2012.
Nigeria is the United States' largest trading partner in sub-Saharan Africa and supplies a fifth of its oil (11%
of oil imports). It has the seventh-largest trade surplus with the US of any country worldwide. Nigeria is the 50th-largest
export market for US goods and the 14th-largest exporter of goods to the US. The United States is the country's largest
foreign investor. The International Monetary Fund (IMF) projected economic growth of 9% in 2008 and 8.3% in 2009.
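Year-on-year growth projections like these compound multiplicatively; a minimal sketch of how the quoted IMF rates accumulate (the base index of 100 is a hypothetical illustration, not a source figure):

```python
# Compound the IMF's projected annual growth rates for Nigeria.
# The base value of 100 is a hypothetical index, not a source figure.
rates = {2008: 0.09, 2009: 0.083}  # projected growth per year

index = 100.0
for year, rate in sorted(rates.items()):
    index *= 1 + rate
    print(f"{year}: index = {index:.2f}")
# Two consecutive years of ~9% and ~8.3% growth raise the index by about 18%.
```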
The IMF further projects an 8% growth in the Nigerian economy in 2011. The Niger Delta Nembe Creek Oil field was
discovered in 1973 and produces from middle Miocene deltaic sandstone-shale in an anticline structural trap at a
depth of 2–4 km. In June 2013, the operating company announced a strategic review of its operations in Nigeria, hinting that
assets could be divested. While many international oil companies have operated there for decades, by 2014 most were
making moves to divest their interests, citing a range of issues including oil theft. In August 2014, Shell Oil Company
said it was finalising its interests in four Nigerian oil fields. According to the International Organization for
Migration, Nigeria witnessed a dramatic increase in remittances sent home from overseas Nigerians, going from USD
2.3 billion in 2004 to 17.9 billion in 2007. The United States accounts for the largest portion of official remittances,
followed by the United Kingdom, Italy, Canada, Spain and France. On the African continent, Egypt, Equatorial Guinea,
Chad, Libya and South Africa are important source countries of remittance flows to Nigeria, while China is the biggest
remittance-sending country in Asia. In recent years Nigeria has been embracing industrialisation. It has an indigenous vehicle manufacturer, Innoson Motors, which produces rapid-transit buses, trucks and SUVs, with cars to follow, and a few electronics manufacturers such as Zinox, the first branded Nigerian maker of computers and electronic gadgets such as tablet PCs. In 2013, Nigeria introduced a policy regarding
import duty on vehicles to encourage local manufacturing companies in the country. In this regard, some foreign vehicle
manufacturing companies such as Nissan have announced plans to build plants in Nigeria. Ogun is considered Nigeria's current industrial hub, as most factories are located there and more companies are moving in, followed by Lagos. The Nigerian government has commissioned the overseas production and launch of four
satellites. Nigeriasat-1 was the first satellite to be built under Nigerian government sponsorship. The satellite
was launched from Russia on 27 September 2003. Nigeriasat-1 was part of the world-wide Disaster Monitoring Constellation
System. The primary objectives of the Nigeriasat-1 were: to give early warning signals of environmental disaster;
to help detect and control desertification in the northern part of Nigeria; to assist in demographic planning; to
establish the relationship between malaria vectors and the environment that breeds malaria and to give early warning
signals on future outbreaks of meningitis using remote sensing technology; to provide the technology needed to bring
education to all parts of the country through distant learning; and to aid in conflict resolution and border disputes
by mapping out state and International borders. NigeriaSat-2, Nigeria's second satellite, was built as a high-resolution
earth satellite by Surrey Space Technology Limited, a United Kingdom-based satellite technology company. It has 2.5-metre
resolution panchromatic (very high resolution), 5-metre multispectral (high resolution; NIR, red and green bands), and 32-metre multispectral (medium resolution; NIR, red and green bands) imagers, with a ground receiving station
in Abuja. The NigeriaSat-2 spacecraft alone was built at a cost of over £35 million. This satellite was launched
into orbit from a military base in China. NigComSat-1, a Nigerian satellite built in 2004, was Nigeria's third satellite
and Africa's first communication satellite. It was launched on 13 May 2007, aboard a Chinese Long March 3B carrier
rocket, from the Xichang Satellite Launch Centre in China. The spacecraft was operated by NigComSat and the Nigerian
Space Agency, NASRDA. On 11 November 2008, NigComSat-1 failed in orbit after running out of power because of an anomaly
in its solar array. It was based on the Chinese DFH-4 satellite bus, and carries a variety of transponders: 4 C-band;
14 Ku-band; 8 Ka-band; and 2 L-band. It was designed to provide coverage to many parts of Africa, and the Ka-band
transponders would also cover Italy. On 24 March 2009, the Nigerian Federal Ministry of Science and Technology, NigComSat
Ltd. and CGWIC signed another contract for the in-orbit delivery of the NigComSat-1R satellite. NigComSat-1R was
also a DFH-4 satellite, and the replacement for the failed NigComSat-1 was successfully launched into orbit by China
in Xichang on December 19, 2011. According to then-Nigerian President Goodluck Jonathan, the satellite, which was paid for by the insurance policy on NigComSat-1 (de-orbited in 2009), would have a positive impact on national development
in various sectors such as communications, internet services, health, agriculture, environmental protection and national
security. The United Nations estimated that the population in 2009 was 154,729,000, distributed as 51.7% rural
and 48.3% urban, and with a population density of 167.5 people per square kilometre. National census results in the
past few decades have been disputed. The results of the most recent census were released in December 2006 and gave
a population of 140,003,542. The only breakdown available was by gender: males numbered 71,709,859, females numbered 68,293,683. In June 2012, President Goodluck Jonathan said that Nigerians should limit their number of children. Even
though most ethnic groups prefer to communicate in their own languages, English as the official language is widely
used for education, business transactions and for official purposes. English as a first language is used only by
a small minority of the country's urban elite, and it is not spoken at all in some rural areas. Hausa is the most widely spoken of Nigeria's three main languages (Igbo, Hausa and Yoruba), though unlike the Yoruba and Igbo, the Hausa tend not to travel far outside Nigeria itself. With the majority of Nigeria's
populace in the rural areas, the major languages of communication in the country remain indigenous languages. Some
of the largest of these, notably Yoruba and Igbo, have derived standardised languages from a number of different
dialects and are widely spoken by those ethnic groups. Nigerian Pidgin English, often known simply as 'Pidgin' or
'Broken' (Broken English), is also a popular lingua franca, though with varying regional influences on dialect and
slang. Pidgin English, or Nigerian English, is widely spoken within the Niger Delta region, predominantly in Warri,
Sapele, Port Harcourt, Agenebode, Ewu, and Benin City. Nigeria is a religiously diverse society, with Islam and Christianity
being the most widely professed religions. Nigerians are nearly equally divided into Christians and Muslims, with
a tiny minority of adherents of Animism and other religions. According to one recent estimate, over 40% of Nigeria's
population adheres to Islam (mainly Sunni, other branches are also present). Christianity is practised by 58% of
the population (among them 74% are Protestant, 25% Roman Catholic, 1% other Christian). Adherents of Animism and
other religions collectively represent 1.4% of the population. The vast majority of Muslims in Nigeria are Sunni
belonging to Maliki school of jurisprudence; however, a sizeable minority also belongs to Shafi madhhab. A large
number of Sunni Muslims are members of Sufi brotherhoods. Most Sufis follow the Qadiriyya, Tijaniyyah and/or the
Mouride movements. A significant Shia minority exists (see Shia in Nigeria). Some northern states have incorporated
Sharia law into their previously secular legal systems, which has brought about some controversy. Kano State has
sought to incorporate Sharia law into its constitution. The majority of Quranists follow the Kalo Kato or Quraniyyun
movement. There are also Ahmadiyya and Mahdiyya minorities. According to a 2001 report from The World Factbook by
CIA, about 50% of Nigeria's population is Muslim, 40% are Christian and 10% adhere to local religions. But in some more recent reports, the Christian population is slightly larger than the Muslim population. An 18 December 2012 report
on religion and public life by the Pew Research Center stated that in 2010, 49.3 percent of Nigeria's population
was Christian, 48.8 percent was Muslim, and 1.9 percent were followers of indigenous and other religions, or unaffiliated.
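Survey breakdowns like these should sum to roughly 100 percent, up to rounding in the published figures; a quick sanity check on the shares quoted in this section:

```python
import math

# Reported religious-affiliation shares for Nigeria (percent),
# as quoted in the surrounding text.
pew_2010 = {"Christian": 49.3, "Muslim": 48.8, "Other/unaffiliated": 1.9}
arda_2010 = {"Christian": 46.5, "Muslim": 45.5, "Other": 7.7}

for name, survey in [("Pew", pew_2010), ("ARDA", arda_2010)]:
    total = sum(survey.values())
    # Allow a little slack for rounding in the published figures.
    assert math.isclose(total, 100.0, abs_tol=0.5), (name, total)
    print(f"{name}: {total:.1f}%")
```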
Additionally, the 2010 census of the Association of Religion Data Archives reported that 46.5 percent of the total population is Christian, slightly larger than the Muslim population of 45.5 percent, and that 7.7 percent are members
of other religious groups. Among Christians, the Pew Research survey found that 74% were Protestant, 25% were Catholic,
and 1% belonged to other Christian denominations, including a small Orthodox Christian community. In terms of Nigeria's
major ethnic groups, the Hausa ethnic group (predominant in the north) was found to be 95% Muslim and 5% Christian,
the Yoruba tribe (predominant in the west) was 55% Muslim, 35% Christian and 10% adherents of other religions, while
the Igbos (predominant in the east) and the Ijaw (south) were 98% Christian, with 2% practising traditional religions.
The middle belt of Nigeria contains the largest number of minority ethnic groups in Nigeria, who were found to be
mostly Christians and members of traditional religions, with a small proportion of Muslims. Leading Protestant churches
in the country include the Church of Nigeria of the Anglican Communion, the Assemblies of God Church, the Nigerian
Baptist Convention and The Synagogue, Church Of All Nations. Since the 1990s, there has been significant growth in
many other churches, particularly the evangelical Protestant ones. These include the Redeemed Christian Church of
God, Winners' Chapel, Christ Apostolic Church (the first Aladura Movement in Nigeria), Deeper Christian Life Ministry,
Evangelical Church of West Africa, Mountain of Fire and Miracles, Christ Embassy and The Synagogue Church Of All
Nations. In addition, The Church of Jesus Christ of Latter-day Saints, the Aladura Church, the Seventh-day Adventist
and various indigenous churches have also experienced growth. Health care delivery in Nigeria is a concurrent responsibility
of the three tiers of government in the country, and the private sector. Nigeria has been reorganising its health
system since the Bamako Initiative of 1987, which formally promoted community-based methods of increasing accessibility
of drugs and health care services to the population, in part by implementing user fees. The new strategy dramatically
increased accessibility through community-based healthcare reform, resulting in more efficient and equitable provision
of services. A comprehensive approach strategy was extended to all areas of health care, with subsequent improvement
in the health care indicators and improvement in health care efficiency and cost. HIV/AIDS rate in Nigeria is much
lower compared to the other African nations such as Kenya or South Africa whose prevalence (percentage) rates are
in the double digits. As of 2012, the HIV prevalence rate among adults ages 15–49 was just 3.1 percent. As of 2014, life expectancy in Nigeria is 52.62 years on average according to the CIA, and just over half the population has access to potable water and appropriate sanitation. As of 2010, infant mortality is 8.4 deaths per 1000 live births. Nigeria was the only country in Africa never to have eradicated polio, which it periodically exported
to other African countries; polio cases were cut 98% between 2009 and 2010. However, a major breakthrough came in December 2014, when it was reported that Nigeria had not recorded a polio case in six months and was on its way to being declared polio-free. In 2012, a new bone marrow donor program was launched by the University of Nigeria to help people with leukaemia,
lymphoma, or sickle cell disease to find a compatible donor for a life-saving bone marrow transplant, which cures
them of their conditions. Nigeria became the second African country to have successfully carried out this surgery.
In the 2014 Ebola outbreak, Nigeria was the first country to effectively contain and eliminate the Ebola threat that was ravaging three other countries in the West African region. Nigeria's unique method of contact tracing became an effective method later used by other countries, such as the United States, when Ebola threats were discovered. Education in Nigeria is overseen by the Ministry of Education. Local authorities take responsibility
for implementing policy for state-controlled public education and state schools at a regional level. The education
system is divided into kindergarten, primary, secondary and tertiary education. After the 1970s
oil boom, tertiary education was improved so that it would reach every subregion of Nigeria. 68% of the Nigerian
population is literate, and the rate for men (75.7%) is higher than that for women (60.6%). Nigeria is home to a
substantial network of organised crime, active especially in drug trafficking. Nigerian criminal groups are heavily
involved in drug trafficking, shipping heroin from Asian countries to Europe and America; and cocaine from South
America to Europe and South Africa. The various Nigerian confraternities or "campus cults" are active in both organised
crime and in political violence as well as providing a network of corruption within Nigeria. As confraternities have
extensive connections with political and military figures, they offer excellent alumni networking opportunities.
The Supreme Vikings Confraternity, for example, boasts that twelve members of the Rivers State House of Assembly
are cult members. On lower levels of society, there are the "area boys", organised gangs mostly active in Lagos who
specialise in mugging and small-scale drug dealing. According to official statistics, gang violence in Lagos resulted
in 273 civilians and 84 policemen killed in the period of August 2000 to May 2001. Internationally, Nigeria is infamous
for a form of bank fraud dubbed 419, a type of advance fee fraud (named after Section 419 of the Nigerian Penal Code)
along with the "Nigerian scam", a form of confidence trick practised by individuals and criminal syndicates. These
scams involve a complicit Nigerian bank (the laws being set up loosely to allow it) and a scammer who claims to have
money he needs to obtain from that bank. The victim is talked into exchanging bank account information on the premise
that the money will be transferred to him, and then he'll get to keep a cut. In reality, money is taken out instead,
and/or large fees (which seem small in comparison with the imaginary wealth he awaits) are deducted. In 2003, the
Nigerian Economic and Financial Crimes Commission (or EFCC) was created, ostensibly to combat this and other forms
of organised financial crime. Nigeria has also been pervaded by political corruption. It was ranked 143 out of 182
countries in Transparency International's 2011 Corruption Perceptions Index; however, it improved to 136th position
in 2014. More than $400 billion were stolen from the treasury by Nigeria's leaders between 1960 and 1999. In late
2013, Nigeria's then central bank governor Lamido Sanusi informed President Goodluck Jonathan that the state oil
company, NNPC had failed to remit US$20 billion of oil revenues, which it owed the state. Jonathan however dismissed
the claim and replaced Sanusi for his mismanagement of the central bank's budget. A Senate committee also found Sanusi’s
account to be lacking substance. After the conclusion of the NNPC's account audit, it was announced in January 2015 that NNPC's non-remitted revenue was actually US$1.48 billion, which it needed to refund to the government. The
Nigerian film industry is known as Nollywood (a portmanteau of Nigeria and Hollywood) and is now the 2nd-largest
producer of movies in the world. Nigerian film studios are based in Lagos, Kano and Enugu, forming a major portion
of the local economy of these cities. Nigerian cinema is Africa's largest movie industry in terms of both value and
the number of movies produced per year. Although Nigerian films have been produced since the 1960s, the country's
film industry has been aided by the rise of affordable digital filming and editing technologies. Football is largely considered Nigeria's national sport, and the country has its own Premier League of football. Nigeria's national football team, known as the "Super Eagles", has made the World Cup on five occasions: 1994, 1998, 2002, 2010, and most recently in 2014. In April 1994, the Super Eagles ranked 5th in the FIFA World Rankings, the highest ranking
achieved by an African football team. They won the African Cup of Nations in 1980, 1994, and 2013, and have also
hosted the U-17 & U-20 World Cup. They won the gold medal for football in the 1996 Summer Olympics (in which they
beat Argentina) becoming the first African football team to win gold in Olympic Football. The nation's cadet team
from Japan '93 produced some international players notably Nwankwo Kanu, a two-time African Footballer of the year
who won the European Champions League with Ajax Amsterdam and later played with Inter Milan, Arsenal, West Bromwich
Albion and Portsmouth. Other players that graduated from the junior teams are Nduka Ugbade, Jonathan Akpoborie, Victor
Ikpeba, Celestine Babayaro, Wilson Oruma and Taye Taiwo. Some other famous Nigerian footballers include John Obi
Mikel, Obafemi Martins, Vincent Enyeama, Yakubu Aiyegbeni, Rashidi Yekini, Peter Odemwingie and Jay-Jay Okocha. Nigeria's human rights record remains poor. According to the US Department of State, the most significant human rights problems
are: use of excessive force by security forces; impunity for abuses by security forces; arbitrary arrests; prolonged
pretrial detention; judicial corruption and executive influence on the judiciary; rape, torture and other cruel,
inhuman or degrading treatment of prisoners, detainees and suspects; harsh and life‑threatening prison and detention
centre conditions; human trafficking for the purpose of prostitution and forced labour; societal violence and vigilante
killings; child labour, child abuse and child sexual exploitation; female genital mutilation (FGM); domestic violence;
discrimination based on sex, ethnicity, region and religion.
Although there is some evidence of earlier inhabitation in the region of Utrecht, dating back to the Stone Age (c. 2200 BCE) and settlement in the Bronze Age (c. 1800–800 BCE), the founding date of the city is usually related to the
construction of a Roman fortification (castellum), probably built in around 50 CE. A series of such fortresses was
built after the Roman emperor Claudius decided the empire should not expand north. To consolidate the border the
limes Germanicus defense line was constructed along the main branch of the river Rhine, which at that time flowed
through a more northern bed compared to today (what is now the Kromme Rijn). These fortresses were designed to house
a cohort of about 500 Roman soldiers. Near the fort settlements would grow housing artisans, traders and soldiers'
wives and children. From the middle of the 3rd century Germanic tribes regularly invaded the Roman territories. Around
275 the Romans could no longer maintain the northern border and Utrecht was abandoned. Little is known about the following period, 270–650. Utrecht is first spoken of again several centuries after the Romans left. Under the influence
of the growing realms of the Franks, during Dagobert I's reign in the 7th century, a church was built within the
walls of the Roman fortress. In ongoing border conflicts with the Frisians this first church was destroyed. By the
mid-7th century, English and Irish missionaries set out to convert the Frisians. The pope appointed their leader,
Willibrordus, bishop of the Frisians. The tenure of Willibrordus is generally considered to be the beginning of the
Bishopric of Utrecht. In 723, the Frankish leader Charles Martel granted the fortress in Utrecht and the surrounding lands to the bishops as their base. From then on Utrecht became one of the most influential seats of power for the
Roman Catholic Church in the Netherlands. The archbishops of Utrecht were based at the uneasy northern border of
the Carolingian Empire. In addition, the city of Utrecht had competition from the nearby trading centre Dorestad.
After the fall of Dorestad around 850, Utrecht became one of the most important cities in the Netherlands. The importance
of Utrecht as a centre of Christianity is illustrated by the election of the Utrecht-born Adriaan Florenszoon Boeyens
as pope in 1522 (the last non-Italian pope before John Paul II). When the Frankish rulers established the system
of feudalism, the Bishops of Utrecht came to exercise worldly power as prince-bishops. The territory of the bishopric
not only included the modern province of Utrecht (Nedersticht, 'lower Sticht'), but also extended to the northeast.
The feudal conflict of the Middle Ages heavily affected Utrecht. The prince-bishopric was involved in almost continuous
conflicts with the Counts of Holland and the Dukes of Guelders. The Veluwe region was seized by Guelders, but large
areas in the modern province of Overijssel remained as the Oversticht. Several churches and monasteries were built
inside, or close to, the city of Utrecht. The most dominant of these was the Cathedral of Saint Martin, inside the
old Roman fortress. The construction of the present Gothic building was begun in 1254 after an earlier romanesque
construction had been badly damaged by fire. The choir and transept were finished from 1320 and were then followed by the ambitious Dom tower. The last part to be constructed was the central nave, from 1420. By that time, however,
the age of the great cathedrals had come to an end and declining finances prevented the ambitious project from being
finished, the construction of the central nave being suspended before the planned flying buttresses could be finished.
Besides the cathedral there were four collegiate churches in Utrecht: St. Salvator's Church (demolished in the 16th century), on the Dom square, dating back to the early 8th century; Saint John's (Janskerk), originating in 1040; Saint Peter's, whose construction started in 1039; and Saint Mary's, whose construction started around 1090 (demolished in the early 19th century, though the cloister survives). Besides these churches the city housed St. Paul's Abbey, the 15th-century beguinage
of St. Nicholas, and a 14th-century chapter house of the Teutonic Knights. The location on the banks of the river
Rhine allowed Utrecht to become an important trade centre in the Northern Netherlands. The growing town Utrecht was
granted city rights by Henry V in 1122. When the main flow of the Rhine moved south, the old bed, which still flowed through the heart of the town, became ever more canalised, and the wharf system was built as an inner-city harbour system. On the wharfs, storage facilities (werfkelders) were built, on top of which the main street, including houses, was constructed. The wharfs and the cellars are accessible from a platform at water level with stairs descending
from the street level to form a unique structure. The relations between the bishop, who controlled many lands outside the city, and the citizens of Utrecht were not always easy. The bishop, for example, dammed the Kromme Rijn
at Wijk bij Duurstede to protect his estates from flooding. This threatened shipping for the city and led the city
of Utrecht to commission a canal to ensure access to the town for shipping trade: the Vaartse Rijn, connecting Utrecht
to the Hollandse IJssel at IJsselstein. In 1528 the bishop lost secular power over both Neder- and Oversticht – which
included the city of Utrecht – to Charles V, Holy Roman Emperor. Charles V combined the Seventeen Provinces (the
current Benelux and the northern parts of France) as a personal union. This ended the prince-bishopric Utrecht, as
the secular rule was now the lordship of Utrecht, with the religious power remaining with the bishop, although Charles
V had gained the right to appoint new bishops. In 1559 the bishopric of Utrecht was raised to archbishopric to make
it the religious center of the Northern ecclesiastical province in the Seventeen provinces. The transition from independence
to a relatively minor part of a larger union was not easily accepted. To quell uprisings, Charles V struggled to exert his power over the citizens of the city, who had fought to gain a certain level of independence from the bishops and were not willing to cede this to their new lord. The heavily fortified castle Vredenburg was built
to house a large garrison whose main task was to maintain control over the city. The castle would last less than
50 years before it was demolished in an uprising in the early stages of the Dutch Revolt. In 1579 the northern seven
provinces signed the Union of Utrecht, in which they decided to join forces against Spanish rule. The Union of Utrecht
is seen as the beginning of the Dutch Republic. In 1580 the new and predominantly Protestant state abolished the
bishoprics, including the archbishopric of Utrecht. The stadtholders disapproved of the independent course of the
Utrecht bourgeoisie and brought the city under much more direct control of the republic; which shifted the power
towards its dominant province Holland. This was the start of a long period of stagnation of trade and development
in Utrecht. Utrecht remained an atypical city in the new republic with about 40% Catholic in the mid-17th century,
and even more among the elite groups, who included many rural nobility and gentry with town houses there. The fortified city temporarily fell to the French invasion in 1672 (the Disaster Year); the invasion was only stopped west of Utrecht at the Old Hollandic Waterline. In 1674, only two years after the French left, the centre of Utrecht
was struck by a tornado. The 15th-century halt to building, before the flying buttresses could be constructed, now proved to be the undoing of the Cathedral of St Martin: the central section collapsed, creating the current Dom square between the tower and choir. In 1713, Utrecht hosted one of the first international peace negotiations
when the Treaty of Utrecht settled the War of the Spanish Succession. Since 1723, Utrecht has been the centre of the non-Roman Old Catholic Churches in the world. In the early 19th century, the role of Utrecht as a fortified town
had become obsolete. The fortifications of the Nieuwe Hollandse Waterlinie were moved east of Utrecht. The town walls
could now be demolished to allow for expansion. The moats remained intact and formed an important feature of the
Zocher plantsoen, an English style landscape park that remains largely intact today. Growth of the city increased
when, in 1843, a railway connecting Utrecht to Amsterdam was opened. After that, Utrecht gradually became the main
hub of the Dutch railway network. With the industrial revolution finally gathering speed in the Netherlands and the
ramparts taken down, Utrecht began to grow far beyond the medieval centre. In 1853, the Dutch government allowed
the bishopric of Utrecht to be reinstated by Rome, and Utrecht became the centre of Dutch Catholicism once more.
From the 1880s onward neighbourhoods such as Oudwijk, Wittevrouwen, Vogelenbuurt to the East, and Lombok to the West
were developed. New middle class residential areas, such as Tuindorp and Oog in Al, were built in the 1920s and 1930s.
During this period, several Jugendstil houses and office buildings were built, followed by Rietveld who built the
Rietveld Schröder House (1924), and Dudok's construction of the city theater (1941). The area surrounding Utrecht
Centraal railway station and the station itself were developed following modernist ideas of the 1960s, in a brutalist
style. This led to the construction of the shopping mall Hoog Catharijne (nl), music centre Vredenburg (Hertzberger,
1979), and conversion of part of the ancient canal structure into a highway (Catherijnebaan). Protest against further
modernisation of the city centre followed even before the last buildings were finalised. In the early 21st century the whole area is being redeveloped. The redeveloped music centre opened in 2014, bringing the original Vredenburg concert hall and the rock and jazz halls together in a single building. About 69% of the population is of Dutch
ancestry. Approximately 10% of the population consists of immigrants from Western countries, while 21% of the population
is of non-Western origin (9% Moroccan, 5% Turkish, 3% Surinamese and Dutch Caribbean and 5% of other countries).
Some of the city's boroughs have a relatively high percentage of originally non-Dutch inhabitants – i.e. Kanaleneiland
83% and Overvecht 57%. Like Rotterdam, Amsterdam, The Hague and other large Dutch cities, Utrecht faces some socio-economic
problems. About 38% of its population either earns a minimum income or is dependent on social welfare (17%
of all households). Boroughs such as Kanaleneiland, Overvecht and Hoograven consist primarily of high-rise housing
developments, and are known for relatively high poverty and crime rates. Utrecht is the centre of a densely populated
area, which makes concise definitions of its agglomeration difficult, and somewhat arbitrary. The smaller Utrecht
agglomeration of continuously built up areas counts some 420,000 inhabitants and includes Nieuwegein, IJsselstein
and Maarssen. It is sometimes argued that the nearby municipalities De Bilt, Zeist, Houten, Vianen, Driebergen-Rijsenburg
(Utrechtse Heuvelrug), and Bunnik should also be counted towards the Utrecht agglomeration, bringing the total to
640,000 inhabitants. The larger region, including slightly more remote towns such as Woerden and Amersfoort, counts
up to 820,000 inhabitants. Utrecht's cityscape is dominated by the Dom Tower, the tallest belfry in the Netherlands
and originally part of the Cathedral of Saint Martin. There is an ongoing debate over whether any building in or near the centre of town should surpass the Dom Tower in height (112 m). Nevertheless, some tall buildings are now being constructed
that will become part of the skyline of Utrecht. The second tallest building of the city, the Rabobank-tower, was
completed in 2010 and stands 105 m (344.49 ft) tall. Two antennas will increase that height to 120 m (393.70 ft).
Two other buildings were constructed around the Nieuw Galgenwaard stadium (2007). These buildings, the 'Kantoortoren
Galghenwert' and 'Apollo Residence', stand 85.5 and 64.5 metres high respectively. Another landmark is the old centre
and the canal structure in the inner city. The Oudegracht is a curved canal, partly following the ancient main branch
of the Rhine. It is lined with the unique wharf-basement structures that create a two-level street along the canals.
The inner city has largely retained its Medieval structure, and the moat ringing the old town is largely intact.
Because of the role of Utrecht as a fortified city, construction outside the medieval centre and its city walls was
restricted until the 19th century. Surrounding the medieval core there is a ring of late 19th- and early 20th-century
neighbourhoods, with newer neighbourhoods positioned farther out. The eastern part of Utrecht remains fairly open.
The Dutch Water Line, moved east of the city in the early 19th century, required open lines of fire, thus prohibiting
all permanent constructions until the middle of the 20th century on the east side of the city. Utrecht Centraal is
the main railway station of Utrecht. There are regular intercity services to all major Dutch cities and direct services to Schiphol Airport. Utrecht Centraal is also a station on the night network, providing an all-night service seven days a week to (among others) Schiphol Airport, Amsterdam and Rotterdam. International InterCityExpress (ICE) services to Germany
(and further) through Arnhem call at Utrecht Centraal. Regular local trains to all areas surrounding Utrecht also
depart from Utrecht Centraal; and service several smaller stations: Utrecht Lunetten, Utrecht Vaartsche Rijn, Utrecht
Overvecht, Utrecht Leidsche Rijn, Utrecht Terwijde, Utrecht Zuilen and Vleuten. A former station, Utrecht Maliebaan, closed in 1939 and has since been converted into the Dutch Railway Museum. The main local and regional bus station
of Utrecht is located adjacent to Utrecht Centraal railway station, at the East and West entrances. Due to large
scale renovation and construction works at the railway station, the station's bus stops are changing frequently.
As a general rule, westbound buses depart from the bus station on the west entrance, other buses from the east side
station. Local buses in Utrecht are operated by Qbuzz – its services include a high-frequency service to the Uithof
university district. The local bus fleet is one of Europe's cleanest, using only buses compliant with the Euro-VI
standard as well as electric buses for inner city transport. Regional buses from the city are operated by Arriva
and Connexxion. Like most Dutch cities, Utrecht has an extensive network of cycle paths, making cycling safe and
popular. 33% of journeys within the city are by bicycle, more than any other mode of transport. (Cars, for example,
account for 30% of trips). Bicycles are used by young and old people, and by individuals and families. They are mostly
traditional, upright, steel-framed bicycles, with few or no gears. There are also barrow bikes, for carrying shopping
or small children. As thousands of bicycles are parked haphazardly in town, creating an eyesore but also impeding
pedestrians, the City Council decided in 2014 to build the world's largest bicycle parking station, near the Central
Railway Station. This 3-floor construction will cost an estimated €48 million and will hold 12,500 bicycles.
Completion is foreseen in 2018. Utrecht is well-connected to the Dutch road network. Two of the most important major
roads serve the city of Utrecht: the A12 and A2 motorways connect Amsterdam, Arnhem, The Hague and Maastricht, as
well as Belgium and Germany. Other major motorways in the area are the Almere–Breda A27 and the Utrecht–Groningen
A28. Due to the increasing traffic and the ancient city plan, traffic congestion is a common phenomenon in and around
Utrecht, causing elevated levels of air pollutants. This has led to a passionate debate in the city about the best
way to improve the city's air quality. Production industry constitutes a small part of the economy of Utrecht. The
economy of Utrecht depends for a large part on the several large institutions located in the city. It is the centre
of the Dutch railroad network and the location of the head office of Nederlandse Spoorwegen. ProRail is headquartered
in De Inktpot (The Inkpot) – the largest brick building in the Netherlands (the "UFO" featured on its façade
stems from an art program in 2000). Rabobank, a large bank, has its headquarters in Utrecht. A large indoor shopping centre, Hoog Catharijne, is located between Utrecht Centraal railway station and the city centre. The corridors
are treated as public places like streets, and the route between the station and the city centre is open all night.
Over the 20 years from 2004, parts of Hoog Catharijne will be redeveloped as part of the renovation of the larger station
area. Parts of the city's network of canals, which were filled to create the shopping center and central station
area, will be recreated. The Jaarbeurs, one of the largest convention centres in the Netherlands, is located at the
west side of the central railway station. Utrecht hosts several large institutions of higher education. The most
prominent of these is Utrecht University (est. 1636), the largest university of the Netherlands with 30,449 students
(as of 2012). The university is partially based in the inner city as well as in the Uithof campus area, to the east
of the city. According to Shanghai Jiaotong University's 2014 ranking, it is the 57th best university in the world. Utrecht also houses the much smaller University of Humanistic Studies, which has about 400 students.
Utrecht city has an active cultural life, and in the Netherlands is second only to Amsterdam. There are several theatres
and theatre companies. The 1941 main city theatre was built by Dudok. Besides theatres there is a large number of
cinemas including three arthouse cinemas. Utrecht is host to the international Early Music Festival (Festival Oude
Muziek, for music before 1800) and the Netherlands Film Festival. The city has an important classical music hall
Vredenburg (1979, by Herman Hertzberger). Its acoustics are considered among the best of 20th-century original music halls. The original Vredenburg music hall has been redeveloped as part of the larger station
area redevelopment plan and in 2014 has gained additional halls that allowed its merger with the rock club Tivoli
and the SJU jazzpodium. There are several other venues for music throughout the city. Young musicians are educated
in the conservatory, a department of the Utrecht School of the Arts. There is a specialised museum of automatically
playing musical instruments. There are many art galleries in Utrecht. There are also several foundations to support
art and artists. Training of artists is done at the Utrecht School of the Arts. The Centraal Museum has many exhibitions
on the arts, including a permanent exhibition on the works of Utrecht resident illustrator Dick Bruna, who is best
known for creating Miffy ("Nijntje", in Dutch). Although street art is illegal in Utrecht, the Utrechtse Kabouter,
a picture of a gnome with a red hat, became a common sight in 2004. Utrecht also houses one of the landmarks of modern
architecture, the 1924 Rietveld Schröder House, which is a UNESCO World Heritage Site. To promote culture,
Utrecht city organizes cultural Sundays. During a thematic Sunday several organisations create a program, which is
open to everyone for free or at a much reduced admission fee. There are also initiatives for amateur artists.
The city subsidises an organisation for amateur education in arts aimed at all inhabitants (Utrechts Centrum voor
de Kunsten), as does the university for its staff and students. Additionally there are also several private initiatives.
The city council provides coupons for discounts to inhabitants who receive welfare to be used with many of the initiatives.
Utrecht is home to the premier league (professional) football club FC Utrecht, which plays in Stadium Nieuw Galgenwaard.
It is also the home of SV Kampong, the largest amateur sports club in the Netherlands (4,500 members). Kampong features field hockey, soccer, cricket, tennis, squash and jeu de boules. Kampong's top men's and women's hockey squads play in the highest Dutch hockey league, the Rabohoofdklasse. Utrecht is also home to the baseball and softball club UVV, which plays in the highest Dutch baseball league, de Hoofdklasse. Utrecht's waterways are used by several
rowing clubs. Viking is a large club open to the general public, and the student clubs Orca and Triton compete in
the Varsity each year.
The stated clauses of the Nazi-Soviet non-aggression pact were a guarantee of non-belligerence by each party towards the
other, and a written commitment that neither party would ally itself to, or aid, an enemy of the other party. In
addition to stipulations of non-aggression, the treaty included a secret protocol that divided territories of Romania,
Poland, Lithuania, Latvia, Estonia, and Finland into German and Soviet "spheres of influence", anticipating potential
"territorial and political rearrangements" of these countries. Thereafter, Germany invaded Poland on 1 September
1939. After the Soviet–Japanese ceasefire agreement took effect on 16 September, Stalin ordered his own invasion
of Poland on 17 September. Parts of southeastern Finland (Karelia) and the Salla region were annexed by the Soviet
Union after the Winter War. This was followed by Soviet annexations of Estonia, Latvia, Lithuania, and parts of Romania
(Bessarabia, Northern Bukovina, and the Hertza region). Concern about ethnic Ukrainians and Belarusians had been
proffered as justification for the Soviet invasion of Poland. Stalin's invasion of Bukovina in 1940 violated the
pact, as it went beyond the Soviet sphere of influence agreed with the Axis. Of the territories of Poland annexed
by the Soviet Union between 1939 and 1940, the region around Białystok and a minor part of Galicia east of the San
river around Przemyśl were returned to the Polish state at the end of World War II. Of all other territories annexed
by the USSR in 1939–40, the ones detached from Finland (Karelia, Petsamo), Estonia (Ingrian area and Petseri County)
and Latvia (Abrene) remained part of the Russian Federation, the successor state of the Soviet Union, after 1991.
Northern Bukovina, Southern Bessarabia and Hertza remain part of Ukraine. The outcome of the First World War was
disastrous for both the German Reich and the Russian Soviet Federative Socialist Republic. During the war, the Bolsheviks
struggled for survival, and Vladimir Lenin recognised the independence of Finland, Estonia, Latvia, Lithuania and
Poland. Moreover, facing a German military advance, Lenin and Trotsky were forced to enter into the Treaty of Brest-Litovsk,
which ceded massive western Russian territories to the German Empire. After Germany's collapse, a multinational Allied-led
army intervened in the Russian Civil War (1917–22). At the beginning of the 1930s, the Nazi Party's rise to power
increased tensions between Germany and the Soviet Union along with other countries with ethnic Slavs, who were considered
"Untermenschen" (inferior) according to Nazi racial ideology. Moreover, the anti-Semitic Nazis associated ethnic
Jews with both communism and financial capitalism, both of which they opposed. Consequently, Nazi theory held that
Slavs in the Soviet Union were being ruled by "Jewish Bolshevik" masters. In 1934, Hitler himself had spoken of an
inescapable battle against both Pan-Slavism and Neo-Slavism, the victory in which would lead to "permanent mastery
of the world", though he stated that they would "walk part of the road with the Russians, if that will help us."
The resulting manifestation of German anti-Bolshevism and an increase in Soviet foreign debts caused German–Soviet
trade to decline dramatically. Imports of Soviet goods to Germany fell to 223 million Reichsmarks in 1934 as the
more isolationist Stalinist regime asserted power and the abandonment of post–World War I Treaty of Versailles military
controls decreased Germany's reliance on Soviet imports. Hitler's fierce anti-Soviet rhetoric
was one of the reasons why the UK and France decided that Soviet participation in the 1938 Munich Conference regarding
Czechoslovakia would be both dangerous and useless. The Munich Agreement that followed marked a partial German annexation
of Czechoslovakia in late 1938, followed by its complete dissolution in March 1939, as part of the appeasement of Germany conducted by Chamberlain's and Daladier's cabinets. This policy immediately raised the question of whether
the Soviet Union could avoid being next on Hitler's list. The Soviet leadership believed that the West wanted to
encourage German aggression in the East and that France and Britain might stay neutral in a war initiated by Germany,
hoping that the warring states would wear each other out and put an end to both the Soviet Union and Nazi Germany.
For Germany, because an autarkic economic approach or an alliance with Britain were impossible, closer relations
with the Soviet Union to obtain raw materials became necessary, if only for economic reasons. Moreover,
an expected British blockade in the event of war would create massive shortages for Germany in a number of key raw
materials. After the Munich Agreement, with the resulting increase in German military supply needs and Soviet demands for military machinery, talks between the two countries took place from late 1938 to March 1939. The third Soviet Five
Year Plan required new infusions of technology and industrial equipment. German war planners
had estimated serious shortfalls of raw materials if Germany entered a war without Soviet supply. The Soviet Union,
which feared Western powers and the possibility of "capitalist encirclements", had little faith either that war could be avoided or in the Polish army, and wanted nothing less than an ironclad military alliance with France and Britain that would provide guaranteed support for a two-pronged attack on Germany; thus, Stalin's adherence to
the collective security line was purely conditional. Britain and France believed that war could still be avoided,
and that the Soviet Union, weakened by the Great Purge, could not be a main military participant, a view with which many military sources were at variance, especially in light of Soviet victories over the Japanese Kwantung Army on the Manchurian frontier. France was more anxious to find an agreement with the USSR than was Britain; as a continental power, it
was more willing to make concessions and more fearful of the dangers of an agreement between the USSR and Germany. These
contrasting attitudes partly explain why the USSR has often been charged with playing a double game in 1939: carrying
on open negotiations for an alliance with Britain and France while secretly considering propositions from Germany.
By the end of May, drafts were formally presented. In mid-June, the main Tripartite negotiations started. The discussion
was focused on potential guarantees to central and east European countries should a German aggression arise. The
USSR proposed to consider that a political turn towards Germany by the Baltic states would constitute an "indirect
aggression" towards the Soviet Union. Britain opposed such proposals because it feared that the Soviets' proposed language
could justify a Soviet intervention in Finland and the Baltic states, or push those countries to seek closer relations
with Germany. The discussion about a definition of "indirect aggression" became one of the sticking points between
the parties, and by mid-July, the tripartite political negotiations effectively stalled, while the parties agreed
to start negotiations on a military agreement, which the Soviets insisted must be entered into simultaneously with
any political agreement. From April to July, Soviet and German officials made statements regarding the potential for
the beginning of political negotiations, while no actual negotiations took place during that time period. The ensuing
discussion of a potential political deal between Germany and the Soviet Union had to be channeled into the framework
of economic negotiations between the two countries, because close military and diplomatic connections, as was the
case before the mid-1930s, had afterward been largely severed. In May, Stalin replaced his Foreign Minister Maxim
Litvinov, who was regarded as pro-western and who was also Jewish, with Vyacheslav Molotov, allowing the Soviet Union
more latitude in discussions with more parties, not only with Britain and France. At the same time, British, French,
and Soviet negotiators scheduled three-party talks on military matters to occur in Moscow in August 1939, aiming
to define what the agreement would specify should be the reaction of the three powers to a German attack. The tripartite
military talks, started in mid-August, hit a sticking point regarding the passage of Soviet troops through Poland
if Germans attacked, and the parties waited as British and French officials overseas pressured Polish officials to
agree to such terms. Polish officials refused to allow Soviet troops into Polish territory if Germany attacked; as
Polish foreign minister Józef Beck pointed out, they feared that once the Red Army entered their territories, it
might never leave. On August 19, the 1939 German–Soviet Commercial Agreement was finally signed. On 21 August, the
Soviets suspended Tripartite military talks, citing other reasons. That same day, Stalin received assurance that
Germany would approve secret protocols to the proposed non-aggression pact that would place half of Poland (border
along the Vistula river), Latvia, Estonia, Finland, and Bessarabia in the Soviets' sphere of influence. That night,
Stalin replied that the Soviets were willing to sign the pact and that he would receive Ribbentrop on 23 August.
On 22 August, one day after the talks broke down with France and Britain, Moscow revealed that Ribbentrop would visit
Stalin the next day. This happened while the Soviets were still negotiating with the British and French missions
in Moscow. With the Western nations unwilling to accede to Soviet demands, Stalin instead entered a secret Nazi–Soviet
pact. On 24 August a 10-year non-aggression pact was signed with provisions that included: consultation, arbitration
if either party disagreed, neutrality if either went to war against a third power, no membership of a group "which
is directly or indirectly aimed at the other". Most notably, there was also a secret protocol to the pact, revealed
only after Germany's defeat in 1945, although hints about its provisions were leaked much earlier, e.g., to influence
Lithuania. According to said protocol Romania, Poland, Lithuania, Latvia, Estonia and Finland were divided into German
and Soviet "spheres of influence". In the north, Finland, Estonia and Latvia were assigned to the Soviet sphere.
Poland was to be partitioned in the event of its "political rearrangement"—the areas east of the Pisa, Narev, Vistula
and San rivers going to the Soviet Union while Germany would occupy the west. Lithuania, adjacent to East Prussia,
would be in the German sphere of influence, although a second secret protocol agreed to in September 1939 reassigned
the majority of Lithuania to the USSR. According to the secret protocol, Lithuania would be granted the city of Vilnius
– its historical capital, which was under Polish control during the inter-war period. Another clause of the treaty
was that Germany would not interfere with the Soviet Union's actions towards Bessarabia, then part of Romania; as a result, Bessarabia was joined to the Moldovan ASSR and became the Moldovan SSR under the control of Moscow. On 24
August, Pravda and Izvestia carried news of the non-secret portions of the Pact, complete with the now infamous front-page
picture of Molotov signing the treaty, with a smiling Stalin looking on. The news was met with utter shock and surprise
by government leaders and media worldwide, most of whom were aware only of the British–French–Soviet negotiations
that had taken place for months. The Molotov–Ribbentrop Pact was received with shock by Nazi Germany's allies, notably
Japan, by the Comintern and foreign communist parties, and by Jewish communities all around the world. That same day, German diplomat Hans von Herwarth, whose grandmother was Jewish, informed Guido Relli, an Italian diplomat, and American
chargé d'affaires Charles Bohlen on the secret protocol regarding vital interests in the countries' allotted "spheres
of influence", without revealing the annexation rights for "territorial and political rearrangement". Soviet propaganda
and representatives went to great lengths to minimize the importance of the fact that they had opposed and fought
against the Nazis in various ways for a decade prior to signing the Pact. Upon signing the pact, Molotov tried to
reassure the Germans of his good intentions by commenting to journalists that "fascism is a matter of taste". For
its part, Nazi Germany also did a public volte-face regarding its virulent opposition to the Soviet Union, though
Hitler still viewed an attack on the Soviet Union as "inevitable". The day after the Pact was signed,
the French and British military negotiation delegation urgently requested a meeting with Soviet military negotiator
Kliment Voroshilov. On August 25, Voroshilov told them "[i]n view of the changed political situation, no useful purpose
can be served in continuing the conversation." That day, Hitler told the British ambassador to Berlin that the pact
with the Soviets prevented Germany from facing a two front war, changing the strategic situation from that in World
War I, and that Britain should accept his demands regarding Poland. On 1 September, Germany invaded Poland from the
west. Within the first few days of the invasion, Germany began conducting massacres of Polish and Jewish civilians
and POWs. These executions took place in over 30 towns and villages in the first month of German occupation. The
Luftwaffe also took part by strafing fleeing civilian refugees on roads and carrying out a bombing campaign. The
Soviet Union assisted German air forces by allowing them to use signals broadcast by the Soviet radio station at
Minsk allegedly "for urgent aeronautical experiments". On 21 September, the Soviets and Germans signed a formal agreement
coordinating military movements in Poland, including the "purging" of saboteurs. A joint German–Soviet parade was
held in Lvov and Brest-Litovsk, while the countries' commanders met in the latter location. Stalin had decided in
August that he was going to liquidate the Polish state, and a German–Soviet meeting in September addressed the future
structure of the "Polish region". Soviet authorities immediately started a campaign of Sovietization of the newly
acquired areas. The Soviets organized staged elections, the result of which was to become a legitimization of Soviet
annexation of eastern Poland. Eleven days after the Soviet invasion of the Polish Kresy, the secret protocol of the
Molotov–Ribbentrop Pact was modified by the German–Soviet Treaty of Friendship, Cooperation and Demarcation, allotting
Germany a larger part of Poland and transferring Lithuania's territory (with the exception of the left bank of the river
Scheschupe, the "Lithuanian Strip") from the envisioned German sphere to the Soviets. On 28 September 1939, the Soviet
Union and German Reich issued a joint declaration. After the Baltic states were forced to
accept treaties, Stalin turned his sights on Finland, confident that Finnish capitulation could be attained without
great effort. The Soviets demanded territories on the Karelian Isthmus, the islands of the Gulf of Finland and a
military base near the Finnish capital Helsinki, which Finland rejected. The Soviets staged the shelling of Mainila
and used it as a pretext to withdraw from the non-aggression pact. The Red Army attacked in November 1939. Simultaneously,
Stalin set up a puppet government, the Finnish Democratic Republic. The leader of the Leningrad
Military District Andrei Zhdanov commissioned a celebratory piece from Dmitri Shostakovich, entitled "Suite on Finnish
Themes" to be performed as the marching bands of the Red Army would be parading through Helsinki. After Finnish defenses
surprisingly held out for over three months while inflicting stiff losses on Soviet forces, the Soviets settled for
an interim peace. Finland ceded southeastern areas of Karelia (10% of Finnish territory), which resulted in approximately
422,000 Karelians (12% of Finland's population) losing their homes. Soviet official casualty counts in the war exceeded
200,000, although Soviet Premier Nikita Khrushchev later claimed the casualties may have been one million. In mid-June
1940, when international attention was focused on the German invasion of France, Soviet NKVD troops raided border
posts in Lithuania, Estonia and Latvia. State administrations were liquidated and replaced by Soviet cadres; in the process, 34,250 Latvians, 75,000 Lithuanians and almost 60,000 Estonians were deported or killed. Elections were held with
single pro-Soviet candidates listed for many positions, with the resulting people's assemblies immediately requesting
admission into the USSR, which was granted by the Soviet Union. The USSR annexed the whole of Lithuania, including
the Scheschupe area, which was to be given to Germany. Finally, on 26 June, four days after France sued for an armistice
with the Third Reich, the Soviet Union issued an ultimatum demanding Bessarabia and, unexpectedly, Northern Bukovina
from Romania. Two days later, the Romanians caved to the Soviet demands and the Soviets occupied the territory. The
Hertza region was initially not requested by the USSR but was later occupied by force after the Romanians agreed
to the initial Soviet demands. The subsequent waves of deportations began in Bessarabia and Northern Bukovina. Elimination
of Polish elites and intelligentsia was part of Generalplan Ost. The Intelligenzaktion, a plan to eliminate the Polish
intelligentsia, Poland's 'leadership class', took place soon after the German invasion of Poland, lasting from fall
of 1939 till spring of 1940. As a result of this operation, in 10 regional actions, about 60,000 Polish nobles, teachers,
social workers, priests, judges and political activists were killed. It was continued in May 1940 when Germany launched AB-Aktion. More than 16,000 members of the intelligentsia were murdered in Operation Tannenberg alone. Although Germany
used forced labourers in most occupied countries, Poles and other Slavs were viewed as inferior by Nazi propaganda and thus better suited for such duties. Between 1 and 2.5 million Polish citizens were transported to the Reich for
forced labour, against their will. All Polish males were required to perform forced labour. While ethnic Poles were
subject to selective persecution, all ethnic Jews were targeted by the Reich. In the winter of 1939–40, about 100,000
Jews were thus deported to Poland. They were initially gathered into massive urban ghettos, such as 380,000 held
in the Warsaw Ghetto, where large numbers died under the harsh conditions, including 43,000 in the Warsaw Ghetto alone. Poles and ethnic Jews were imprisoned in nearly every camp of the extensive concentration camp system
in German-occupied Poland and the Reich. In Auschwitz, which began operating on 14 June 1940, 1.1 million people
died. On 10 January 1941, Germany and the Soviet Union signed an agreement settling several ongoing issues. Secret
protocols in the new agreement modified the "Secret Additional Protocols" of the German–Soviet Boundary and Friendship
Treaty, ceding the Lithuanian Strip to the Soviet Union in exchange for 7.5 million dollars (31.5 million Reichsmark).
The agreement formally set the border between Germany and the Soviet Union between the Igorka river and the Baltic
Sea. It also extended trade regulation of the 1940 German–Soviet Commercial Agreement until August 1, 1942, increased
deliveries above the levels of year one of that agreement, settled trading rights in the Baltics and Bessarabia,
calculated the compensation for German property interests in the Baltic States now occupied by the Soviets and other
issues. It also covered the migration to Germany within two and a half months of ethnic Germans and German citizens
in Soviet-held Baltic territories, and the migration to the Soviet Union of Baltic and "White Russian" "nationals"
in German-held territories. Before the pact's announcement, Communists in the West denied that such a treaty would
be signed. Future member of the Hollywood Ten Herbert Biberman denounced rumors as "Fascist propaganda". Earl Browder,
head of the Communist Party USA, stated that "there is as much chance of agreement as of Earl Browder being elected
president of the Chamber of Commerce." Beginning in September 1939, the Soviet Comintern suspended all anti-Nazi
and anti-fascist propaganda, explaining that the war in Europe was a matter of capitalist states attacking each other
for imperialist purposes. Western Communists acted accordingly; while before they supported protecting collective
security, now they denounced Britain and France going to war. When anti-German demonstrations erupted in Prague,
Czechoslovakia, the Comintern ordered the Czech Communist Party to employ all of its strength to paralyze "chauvinist
elements." Moscow soon forced the Communist Parties of France and Great Britain to adopt an anti-war position. On
7 September, Stalin called Georgi Dimitrov, and the latter sketched a new Comintern line on
the war. The new line—which stated that the war was unjust and imperialist—was approved by the secretariat of the
Communist International on 9 September. Thus, the various western Communist parties now had to oppose the war, and
to vote against war credits. Although the French Communists had unanimously voted in Parliament for war credits on
2 September and on 19 September declared their "unshakeable will" to defend the country, on 27 September the Comintern
formally instructed the party to condemn the war as imperialist. By 1 October the French Communists advocated listening
to German peace proposals, and Communist leader Maurice Thorez deserted from the French Army on 4 October and fled
to Russia. Other Communists also deserted from the army. The Communist Party of Germany featured similar attitudes.
In Die Welt, a communist newspaper published in Stockholm, the exiled communist leader Walter Ulbricht opposed
the allies (Britain representing "the most reactionary force in the world") and argued: "The German government declared
itself ready for friendly relations with the Soviet Union, whereas the English–French war bloc desires a war against
the socialist Soviet Union. The Soviet people and the working people of Germany have an interest in preventing the
English war plan." When a joint German–Soviet peace initiative was rejected by Britain and France on 28 September
1939, Soviet foreign policy became critical of the Allies and more pro-German in turn. During the fifth session of
the Supreme Soviet on 31 October 1939, Molotov analysed the international situation, setting the direction for Communist propaganda. According to Molotov, Germany had a legitimate interest in regaining its position as a great power, while the Allies had started an aggressive war in order to maintain the Versailles system. In his report, entitled "On the Foreign Policy of the Soviet Union", Molotov declared that the Western "ruling circles" disguised their intentions with the pretext of defending
democracy against Hitlerism, declaring "their aim in war with Germany is nothing more, nothing less than extermination
of Hitlerism. [...] There is absolutely no justification for this kind of war. The ideology of Hitlerism, just like
any other ideological system, can be accepted or rejected, this is a matter of political views. But everyone grasps,
that an ideology can not be exterminated by force, must not be finished off with a war." Germany and the Soviet Union
entered an intricate trade pact on February 11, 1940, that was over four times larger than the one the two countries
had signed in August 1939. The trade pact helped Germany to surmount a British blockade of Germany. In the first
year, Germany received one million tons of cereals, half a million tons of wheat, 900,000 tons of oil, 100,000 tons
of cotton, 500,000 tons of phosphates and considerable amounts of other vital raw materials, along with the transit
of one million tons of soybeans from Manchuria. These and other supplies were being transported
through Soviet and occupied Polish territories. The Soviets were to receive a naval cruiser, the plans to the battleship
Bismarck, heavy naval guns, other naval gear and thirty of Germany's latest warplanes, including the Me-109 and Me-110
fighters and Ju-88 bomber. The Soviets would also receive oil and electric equipment, locomotives, turbines, generators,
diesel engines, ships, machine tools and samples of German artillery, tanks, explosives, chemical-warfare equipment
and other items. The Soviets also helped Germany to avoid British naval blockades by providing a submarine base,
Basis Nord, in the northern Soviet Union near Murmansk. This also provided a refueling and maintenance location,
and a takeoff point for raids and attacks on shipping. In addition, the Soviets provided Germany with access to the
Northern Sea Route for both cargo ships and raiders (though only the commerce raider Komet used the route before
the German invasion), which forced Britain to protect sea lanes in both the Atlantic and the Pacific. The Finnish
and Baltic invasions began a deterioration of relations between the Soviets and Germany. Stalin's invasions were
a severe irritant to Berlin, as the intent to accomplish these was not communicated to the Germans beforehand, and
prompted concern that Stalin was seeking to form an anti-German bloc. Molotov's reassurances to the Germans intensified, as did German mistrust. On June 16, as the Soviets invaded Lithuania, but before they had invaded Latvia
and Estonia, Ribbentrop instructed his staff "to submit a report as soon as possible as to whether in the Baltic
States a tendency to seek support from the Reich can be observed or whether an attempt was made to form a bloc."
In August 1940, the Soviet Union briefly suspended its deliveries under the commercial agreement after relations were strained by disagreement over policy in Romania, the Soviet war with Finland, Germany's falling behind in its deliveries of goods under the pact, and Stalin's worry that Hitler's war with the West might end quickly once France signed an armistice. The suspension created significant resource problems for Germany. By the end of
August, relations improved again as the countries had redrawn the Hungarian and Romanian borders, settled some Bulgarian
claims and Stalin was again convinced that Germany would face a long war in the west with Britain's improvement in
its air battle with Germany and the execution of an agreement between the United States and Britain regarding destroyers
and bases. However, in late August, Germany arranged its own occupation of Romania, targeting oil fields. The move
raised tensions with the Soviets, who responded that Germany was supposed to have consulted with the Soviet Union
under Article III of the Molotov–Ribbentrop Pact. After Germany entered a Tripartite Pact with Japan and Italy, Ribbentrop
wrote to Stalin, inviting Molotov to Berlin for negotiations aimed to create a 'continental bloc' of Germany, Italy,
Japan and the USSR that would oppose Britain and the USA. Stalin sent Molotov to Berlin to negotiate the terms for
the Soviet Union to join the Axis and potentially enjoy the spoils of the pact. After negotiations during November
1940 on where to extend the USSR's sphere of influence, Hitler broke off talks and continued planning for the eventual
attempts to invade the Soviet Union. In an effort to demonstrate peaceful intentions toward Germany, on 13 April
1941, the Soviets signed a neutrality pact with Axis power Japan. While Stalin had little faith in Japan's commitment
to neutrality, he felt that the pact was important for its political symbolism, to reinforce a public affection for
Germany. Stalin felt that there was a growing split in German circles about whether Germany should initiate a war
with the Soviet Union. Stalin did not know that Hitler had been secretly discussing an invasion of the Soviet Union
since summer 1940, and that Hitler had ordered his military in late 1940 to prepare for war in the east regardless
of the parties' talks of a potential Soviet entry as a fourth Axis Power. Nazi Germany terminated the Molotov–Ribbentrop
Pact at 03:15 on 22 June 1941 by launching a massive attack on the Soviet positions in eastern Poland which marked
the beginning of the invasion of the Soviet Union known as Operation Barbarossa. Stalin had ignored several warnings
that Germany was likely to invade, and ordered no 'full-scale' mobilization of forces although the mobilization was
ongoing. After the launch of the invasion, the territories gained by the Soviet Union as a result of the Molotov–Ribbentrop
Pact were lost in a matter of weeks. Within six months, the Soviet military had suffered 4.3 million casualties,
and Germany had captured three million Soviet prisoners. The lucrative export of Soviet raw materials to Nazi Germany
over the course of the Nazi–Soviet economic relations (1934–41) continued uninterrupted until the outbreak of hostilities.
The Soviet exports in several key areas enabled Germany to maintain its stocks of rubber and grain from the first
day of the invasion until October 1941. The German original of the secret protocols was presumably destroyed in the
bombing of Germany, but in late 1943, Ribbentrop had ordered that the most secret records of the German Foreign Office
from 1933 on, amounting to some 9,800 pages, be microfilmed. When the various departments of the Foreign Office in
Berlin were evacuated to Thuringia at the end of the war, Karl von Loesch, a civil servant who had worked for the
chief interpreter Paul Otto Schmidt, was entrusted with these microfilm copies. He eventually received orders to
destroy the secret documents but decided to bury the metal container with the microfilms as a personal insurance
for his future well-being. In May 1945, von Loesch approached the British Lt. Col. Robert C. Thomson with the request
to transmit a personal letter to Duncan Sandys, Churchill's son-in-law. In the letter, von Loesch revealed that he
had knowledge of the documents' whereabouts but expected preferential treatment in return. Colonel Thomson and his
American counterpart Ralph Collins agreed to transfer von Loesch to Marburg in the American zone if he would produce
the microfilms. The microfilms contained a copy of the Non-Aggression Treaty as well as the Secret Protocol. Both
documents were discovered as part of the microfilmed records in August 1945 by the State Department employee Wendell
B. Blancke, head of a special unit called "Exploitation German Archives" (EGA). The treaty was published in the United
States for the first time by the St. Louis Post-Dispatch on May 22, 1946, and in Britain by the Manchester Guardian.
It was also part of an official State Department publication, Nazi–Soviet Relations 1939–1941, edited by Raymond
J. Sontag and James S. Beddie in January 1948. The decision to publish the key documents on German–Soviet relations,
including the treaty and protocol, had been taken already in spring 1947. Sontag and Beddie prepared the collection
throughout the summer of 1947. In November 1947, President Truman personally approved the publication but it was
held back in view of the Foreign Ministers Conference in London scheduled for December. Since negotiations at that
conference did not prove constructive from an American point of view, the document edition was sent to press. The
documents made headlines worldwide. State Department officials counted it as a success: "The Soviet Government was
caught flat-footed in what was the first effective blow from our side in a clear-cut propaganda war." In response
to the publication of the secret protocols and other secret German–Soviet relations documents in the State Department
edition Nazi–Soviet Relations (1948), Stalin published Falsifiers of History, which included the claim that, during
the Pact's operation, Stalin rejected Hitler's claim to share in a division of the world, without mentioning the
Soviet offer to join the Axis. That version persisted, without exception, in historical studies, official accounts,
memoirs and textbooks published in the Soviet Union until the Soviet Union's dissolution. For decades, it was the
official policy of the Soviet Union to deny the existence of the secret protocol to the Soviet–German Pact. At the
behest of Mikhail Gorbachev, Alexander Nikolaevich Yakovlev headed a commission investigating the existence of such
a protocol. In December 1989, the commission concluded that the protocol had existed and revealed its findings to
the Congress of People's Deputies of the Soviet Union. As a result, the Congress passed the declaration confirming
the existence of the secret protocols, condemning and denouncing them. Both successor-states of the pact parties
have declared the secret protocols to be invalid from the moment they were signed. The Federal Republic of Germany
declared this on September 1, 1989, and the Soviet Union on December 24, 1989, following an examination of the microfilmed
copy of the German originals. Regarding the timing of German rapprochement, many historians agree that the dismissal
of Maxim Litvinov, whose Jewish ethnicity was viewed unfavorably by Nazi Germany, removed an obstacle to negotiations
with Germany. Stalin immediately directed Molotov to "purge the ministry of Jews." Given Litvinov's prior attempts
to create an anti-fascist coalition, association with the doctrine of collective security with France and Britain,
and pro-Western orientation by the standards of the Kremlin, his dismissal indicated the existence of a Soviet option
of rapprochement with Germany. Likewise, Molotov's appointment served as a signal to Germany that the USSR was
open to offers. The dismissal also signaled to France and Britain the existence of a potential negotiation option
with Germany. One British official wrote that Litvinov's disappearance also meant the loss of an admirable technician
or shock-absorber, while Molotov's "modus operandi" was "more truly Bolshevik than diplomatic or cosmopolitan." Carr
argued that the Soviet Union's replacement of Foreign Minister Litvinov with Molotov on May 3, 1939 indicated not
an irrevocable shift towards alignment with Germany, but rather was Stalin's way of engaging in hard bargaining with
the British and the French by appointing a proverbial hard man, namely Molotov, to the Foreign Commissariat. Historian
Albert Resis stated that the Litvinov dismissal gave the Soviets freedom to pursue faster-paced German negotiations,
but that they did not abandon British–French talks. Derek Watson argued that Molotov could get the best deal with
Britain and France because he was not encumbered with the baggage of collective security and could negotiate with
Germany. Geoffrey Roberts argued that Litvinov's dismissal helped the Soviets with British–French talks, because
Litvinov doubted or maybe even opposed such discussions. Edward Hallett Carr, a frequent defender of Soviet policy,
stated: "In return for 'non-intervention' Stalin secured a breathing space of immunity from German attack." According to Carr, the "bastion" created by means of the Pact "was and could only be, a line of defense against potential German attack." A further advantage, in Carr's view, was that "if Soviet Russia had eventually to fight Hitler, the Western Powers would already be involved." However, during the last
decades, this view has been disputed. Historian Werner Maser stated that "the claim that the Soviet Union was at
the time threatened by Hitler, as Stalin supposed ... is a legend, to whose creators Stalin himself belonged." In
Maser's view, "neither Germany nor Japan were in a situation [of] invading the USSR even with the least perspective
[sic] of success," and this could not have been unknown to Stalin. Carr further stated that, for a long time, the
primary motive of Stalin's sudden change of course was assumed to be the fear of German aggressive intentions. Some
critics of Stalin's policy, such as the popular writer Viktor Suvorov, claim that Stalin's primary motive for signing
the Soviet–German non-aggression treaty was his calculation that such a pact could result in a conflict between the
capitalist countries of Western Europe. This idea is supported by Albert L. Weeks.
Claims by Suvorov that Stalin planned to invade Germany in 1941 are debated by historians with, for example, David
Glantz opposing such claims, while Mikhail Meltyukhov supports them. The authors of The Black Book
of Communism consider the pact a crime against peace and a "conspiracy to conduct war of aggression."
A capacitor (originally known as a condenser) is a passive two-terminal electrical component used to store electrical energy
temporarily in an electric field. The forms of practical capacitors vary widely, but all contain at least two electrical
conductors (plates) separated by a dielectric (i.e. an insulator that can store energy by becoming polarized). The
conductors can be thin films, foils or sintered beads of metal or conductive electrolyte, etc. The nonconducting
dielectric acts to increase the capacitor's charge capacity. Materials commonly used as dielectrics include glass,
ceramic, plastic film, air, vacuum, paper, mica, and oxide layers. Capacitors are widely used as parts of electrical
circuits in many common electrical devices. Unlike a resistor, an ideal capacitor does not dissipate energy. Instead,
a capacitor stores energy in the form of an electrostatic field between its plates. When there is a potential difference
across the conductors (e.g., when a capacitor is attached across a battery), an electric field develops across the
dielectric, causing positive charge +Q to collect on one plate and negative charge −Q to collect on the other plate.
If a battery has been attached to a capacitor for a sufficient amount of time, no current can flow through the capacitor.
However, if a time-varying voltage is applied across the leads of the capacitor, a displacement current can flow.
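The relation behind this is Q = CV: differentiating gives I = C dV/dt, so a steady voltage drives no current, while a changing voltage drives a displacement current. A minimal numerical sketch (the component values here are illustrative assumptions, not from the text):

```python
import math

# Assumed, illustrative values: a 100 nF capacitor driven by a
# 1 kHz, 5 V-amplitude sinusoidal voltage source.
C = 100e-9          # capacitance in farads
V0 = 5.0            # voltage amplitude in volts
f = 1000.0          # frequency in hertz
omega = 2 * math.pi * f

def voltage(t):
    return V0 * math.sin(omega * t)

def current(t, dt=1e-9):
    # I = C * dV/dt, approximated with a central difference
    return C * (voltage(t + dt) - voltage(t - dt)) / (2 * dt)

# A DC voltage (dV/dt = 0) drives no steady-state current, while the
# sinusoid drives a displacement current of amplitude C * omega * V0.
peak_current = C * omega * V0
print(f"peak displacement current: {peak_current * 1e3:.3f} mA")
print(f"numerical I at t=0:        {current(0.0) * 1e3:.3f} mA")
```

For a sinusoid V0·sin(ωt), the current amplitude is C·ω·V0 and the current leads the voltage by 90 degrees.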
In October 1745, Ewald Georg von Kleist of Pomerania, Germany, found that charge could be stored by connecting a
high-voltage electrostatic generator by a wire to a volume of water in a hand-held glass jar. Von Kleist's hand and
the water acted as conductors, and the jar as a dielectric (although details of the mechanism were incorrectly identified
at the time). Von Kleist found that touching the wire resulted in a powerful spark, much more painful than that obtained
from an electrostatic machine. The following year, the Dutch physicist Pieter van Musschenbroek invented a similar
capacitor, which was named the Leyden jar, after the University of Leiden where he worked. He also was impressed
by the power of the shock he received, writing, "I would not take a second shock for the kingdom of France." Daniel
Gralath was the first to combine several jars in parallel into a "battery" to increase the charge storage capacity.
Benjamin Franklin investigated the Leyden jar and came to the conclusion that the charge was stored on the glass,
not in the water as others had assumed. He also adopted the term "battery" (denoting the increasing of power with
a row of similar units as in a battery of cannon), subsequently applied to clusters of electrochemical cells. Leyden
jars were later made by coating the inside and outside of jars with metal foil, leaving a space at the mouth to prevent
arcing between the foils. The earliest unit of capacitance was the jar, equivalent to about 1.11 nanofarads. Since the beginning of the study of electricity, non-conductive materials like glass, porcelain, paper and mica have been used as insulators. Some decades later, these materials were also found to be well-suited for use as
the dielectric for the first capacitors. Paper capacitors, made by sandwiching a strip of impregnated paper between strips of metal and rolling the result into a cylinder, were commonly used in the late 19th century; their manufacture
started in 1876, and they were used from the early 20th century as decoupling capacitors in telecommunications (telephony).
Charles Pollak (born Karol Pollak), the inventor of the first electrolytic capacitors, found out that the oxide layer
on an aluminum anode remained stable in a neutral or alkaline electrolyte, even when the power was switched off.
In 1896 he filed a patent for an "Electric liquid capacitor with aluminum electrodes." Solid electrolyte tantalum
capacitors were invented by Bell Laboratories in the early 1950s as a miniaturized and more reliable low-voltage
support capacitor to complement their newly invented transistor. Finally, the electric double-layer capacitor (now known as the supercapacitor) was invented. In 1957, H. Becker developed a "Low voltage electrolytic capacitor with porous
carbon electrodes". He believed that the energy was stored as a charge in the carbon pores used in his capacitor
as in the pores of the etched foils of electrolytic capacitors. Because the double-layer mechanism was not known to him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." A capacitor consists of two conductors
separated by a non-conductive region. The non-conductive region is called the dielectric. In simpler terms, the dielectric
is just an electrical insulator. Examples of dielectric media are glass, air, paper, vacuum, and even a semiconductor
depletion region chemically identical to the conductors. A capacitor is assumed to be self-contained and isolated,
with no net electric charge and no influence from any external electric field. The conductors thus hold equal and
opposite charges on their facing surfaces, and the dielectric develops an electric field. In SI units, a capacitance
of one farad means that one coulomb of charge on each conductor causes a voltage of one volt across the device. The
current I(t) through any component in an electric circuit is defined as the rate of flow of a charge Q(t) passing
through it, but actual charges—electrons—cannot pass through the dielectric layer of a capacitor. Rather, one electron
accumulates on the negative plate for each one that leaves the positive plate, resulting in an electron depletion
and consequent positive charge on one electrode that is equal and opposite to the accumulated negative charge on
the other. Thus the charge on the electrodes is equal to the integral of the current as well as proportional to the
voltage, as discussed above. As with any antiderivative, a constant of integration is added to represent the initial
voltage V(t0). This is the integral form of the capacitor equation: V(t) = V(t0) + (1/C) ∫ I(τ) dτ, integrated from t0 to t. The simplest model capacitor consists of two
thin parallel conductive plates separated by a dielectric with permittivity ε . This model may also be used to make
qualitative predictions for other device geometries. The plates are considered to extend uniformly over an area A
and a charge density ±ρ = ±Q/A exists on their surface. Assuming that the length and width of the plates are much
greater than their separation d, the electric field near the centre of the device will be uniform with the magnitude
E = ρ/ε. The voltage is defined as the line integral of the electric field between the plates, which gives V = Ed = Qd/(εA), so the parallel-plate capacitance is C = Q/V = εA/d. The maximum energy
is a function of dielectric volume, permittivity, and dielectric strength. Changing the plate area and the separation
between the plates while maintaining the same volume causes no change of the maximum amount of energy that the capacitor
can store, so long as the distance between plates remains much smaller than both the length and width of the plates.
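The volume argument above can be checked numerically: with C = εA/d and V_max = E_bd·d, the maximum energy ½CV_max² = ½εE_bd²·(A·d) depends only on the product A·d. A sketch with assumed, illustrative material values:

```python
# Sketch of the claim that maximum storable energy depends only on the
# dielectric volume A*d (for fixed permittivity and breakdown field), not on
# the particular plate area/separation. Values are illustrative, not from a
# real part.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 5.0             # assumed relative permittivity
E_bd = 100e6            # assumed dielectric strength, V/m (mica-like)

def max_energy(area, separation):
    eps = eps_r * EPS0
    capacitance = eps * area / separation      # parallel-plate C = eps*A/d
    v_max = E_bd * separation                  # voltage at which field hits E_bd
    return 0.5 * capacitance * v_max**2        # = 0.5*eps*E_bd^2 * (A*d)

# Two geometries with the same dielectric volume of 1e-9 m^3:
e1 = max_energy(area=1e-4, separation=1e-5)    # thin, wide
e2 = max_energy(area=1e-5, separation=1e-4)    # thick, narrow
print(e1, e2)   # equal, as the text states
```

Both calls return the same energy, because trading plate area for separation at constant volume leaves ½εE_bd²·(A·d) unchanged, provided d stays much smaller than the plate dimensions.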
In addition, these equations assume that the electric field is entirely concentrated in the dielectric between the
plates. In reality there are fringing fields outside the dielectric, for example between the sides of the capacitor
plates, which will increase the effective capacitance of the capacitor. This is sometimes called parasitic capacitance.
For some simple capacitor geometries this additional capacitance term can be calculated analytically. It becomes
negligibly small when the ratios of plate width to separation and length to separation are large. Capacitors deviate
from the ideal capacitor equation in a number of ways. Some of these, such as leakage current and parasitic effects
are linear, or can be assumed to be linear, and can be dealt with by adding virtual components to the equivalent
circuit of the capacitor. The usual methods of network analysis can then be applied. In other cases, such as with
breakdown voltage, the effect is non-linear and normal (i.e., linear) network analysis cannot be used; the effect
must be dealt with separately. There is yet another group of effects, which may be linear but invalidate the analysis's assumption that capacitance is a constant; temperature dependence is one example. Finally, combined parasitic effects
such as inherent inductance, resistance, or dielectric losses can exhibit non-uniform behavior at variable frequencies
of operation. For air dielectric capacitors the breakdown field strength is of the order 2 to 5 MV/m; for mica the
breakdown is 100 to 300 MV/m; for oil, 15 to 25 MV/m; it can be much less when other materials are used for the dielectric.
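These breakdown fields set a lower bound on dielectric thickness for a given voltage rating, d ≥ V/E_bd, which is why higher-rated capacitors need thicker dielectrics and therefore more size per farad. A rough sizing sketch using the lower ends of the ranges quoted above (order-of-magnitude figures only; a real design also needs a safety margin):

```python
# Minimum dielectric thickness to keep the field below breakdown.
# Dielectric strengths taken from the lower bounds of the ranges in the text.
DIELECTRIC_STRENGTH = {   # V/m
    "air": 2e6,
    "oil": 15e6,
    "mica": 100e6,
}

def min_thickness(rated_voltage, dielectric):
    """Thickness d such that the field V/d stays at or below breakdown."""
    return rated_voltage / DIELECTRIC_STRENGTH[dielectric]

for name in DIELECTRIC_STRENGTH:
    d = min_thickness(1000.0, name)   # a 1 kV rating
    print(f"{name}: at least {d * 1e6:.1f} um of dielectric")
```

Since C = εA/d, doubling the rated voltage doubles the required thickness and halves the capacitance per unit plate area, matching the text's point that high-voltage capacitors are larger per capacitance.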
The dielectric is used in very thin layers and so absolute breakdown voltage of capacitors is limited. Typical ratings
for capacitors used for general electronics applications range from a few volts to 1 kV. As the voltage increases,
the dielectric must be thicker, making high-voltage capacitors larger per capacitance than those rated for lower
voltages. The breakdown voltage is critically affected by factors such as the geometry of the capacitor conductive
parts; sharp edges or points increase the electric field strength at that point and can lead to a local breakdown.
Once this starts to happen, the breakdown quickly tracks through the dielectric until it reaches the opposite plate,
leaving carbon behind and causing a short (or relatively low resistance) circuit. The results can be explosive as
the short in the capacitor draws current from the surrounding circuitry and dissipates the energy. Ripple current
is the AC component of an applied source (often a switched-mode power supply) whose frequency may be constant or
varying. Ripple current causes heat to be generated within the capacitor due to the dielectric losses caused by the
changing field strength together with the current flow across the slightly resistive supply lines or the electrolyte
in the capacitor. The equivalent series resistance (ESR) is the amount of internal series resistance one would add
to a perfect capacitor to model this. Some types of capacitors, primarily tantalum and aluminum electrolytic capacitors,
as well as some film capacitors have a specified rating value for maximum ripple current. The capacitance of certain
capacitors decreases as the component ages. In ceramic capacitors, this is caused by degradation of the dielectric.
The type of dielectric, ambient operating and storage temperatures are the most significant aging factors, while
the operating voltage has a smaller effect. The aging process may be reversed by heating the component above the
Curie point. Aging is fastest near the beginning of life of the component, and the device stabilizes over time. Electrolytic
capacitors age as the electrolyte evaporates. In contrast with ceramic capacitors, this occurs towards the end of
life of the component. Capacitors, especially ceramic capacitors, and older designs such as paper capacitors, can
absorb sound waves resulting in a microphonic effect. Vibration moves the plates, causing the capacitance to vary,
in turn inducing AC current. Some dielectrics also generate piezoelectricity. The resulting interference is especially
problematic in audio applications, potentially causing feedback or unintended recording. In the reverse microphonic
effect, the varying electric field between the capacitor plates exerts a physical force, moving them as a speaker.
This can generate audible sound, but drains energy and stresses the dielectric and the electrolyte, if any. In DC
circuits and pulsed circuits, current and voltage reversal are affected by the damping of the system. Voltage reversal
is encountered in RLC circuits that are under-damped. The current and voltage reverse direction, forming a harmonic
oscillator between the inductance and capacitance. The current and voltage will tend to oscillate and may reverse
direction several times, with each peak being lower than the previous, until the system reaches an equilibrium. This
is often referred to as ringing. In comparison, critically damped or over-damped systems usually do not experience
a voltage reversal. Reversal is also encountered in AC circuits, where the peak current will be equal in each direction.
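For a series RLC circuit, whether ringing occurs is governed by the damping ratio ζ = (R/2)·√(C/L): ζ < 1 gives the under-damped, reversing behavior described above. A sketch with arbitrary illustrative component values:

```python
import math

# Sketch of the ringing described above for a series RLC discharge.
# Component values are arbitrary illustrations.
R = 10.0      # ohms
L = 1e-3      # henries
C = 1e-6      # farads

alpha = R / (2 * L)                   # damping coefficient
omega0 = 1 / math.sqrt(L * C)         # undamped natural frequency
zeta = alpha / omega0                 # damping ratio

if zeta < 1:
    # Under-damped: voltage reverses, each peak smaller than the last.
    omega_d = math.sqrt(omega0**2 - alpha**2)
    decay_per_reversal = math.exp(-alpha * math.pi / omega_d)
    print(f"under-damped (zeta = {zeta:.3f}); "
          f"each voltage peak is {decay_per_reversal:.1%} of the previous one")
else:
    print(f"no reversal: zeta = {zeta:.3f} (critically/over-damped)")
```

In the under-damped case each successive peak is smaller by the factor exp(-απ/ω_d), so the oscillation decays toward equilibrium as the text describes; ζ ≥ 1 gives no reversal at all.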
For maximum life, capacitors usually need to be able to handle the maximum amount of reversal that a system will
experience. An AC circuit will experience 100% voltage reversal, while under-damped DC circuits will experience less
than 100%. Reversal creates excess electric fields in the dielectric, causes excess heating of both the dielectric
and the conductors, and can dramatically shorten the life expectancy of the capacitor. Reversal ratings will often
affect the design considerations for the capacitor, from the choice of dielectric materials and voltage ratings to
the types of internal connections used. Capacitors made with any type of dielectric material will show some level
of "dielectric absorption" or "soakage". On discharging a capacitor and disconnecting it, after a short time it may
develop a voltage due to hysteresis in the dielectric. This effect can be objectionable in applications such as precision
sample and hold circuits or timing circuits. The level of absorption depends on many factors, from design considerations
to charging time, since the absorption is a time-dependent process. However, the primary factor is the type of dielectric
material. Capacitors such as tantalum electrolytic or polysulfone film exhibit very high absorption, while polystyrene
or Teflon allow very small levels of absorption. In some capacitors where dangerous voltages and energies exist,
such as in flashtubes, television sets, and defibrillators, the dielectric absorption can recharge the capacitor
to hazardous voltages after it has been shorted or discharged. Any capacitor containing over 10 joules of energy
is generally considered hazardous, while 50 joules or higher is potentially lethal. A capacitor may regain anywhere
from 0.01 to 20% of its original charge over a period of several minutes, allowing a seemingly safe capacitor to
become surprisingly dangerous. Leakage is equivalent to a resistor in parallel with the capacitor. Constant exposure
to heat can cause dielectric breakdown and excessive leakage, a problem often seen in older vacuum tube circuits,
particularly where oiled paper and foil capacitors were used. In many vacuum tube circuits, interstage coupling capacitors
are used to conduct a varying signal from the plate of one tube to the grid circuit of the next stage. A leaky capacitor
can cause the grid circuit voltage to be raised from its normal bias setting, causing excessive current or signal
distortion in the downstream tube. In power amplifiers this can cause the plates to glow red, or current limiting
resistors to overheat, even fail. Similar considerations apply to solid-state (transistor) amplifiers built from discrete components,
but owing to lower heat production and the use of modern polyester dielectric barriers this once-common problem has
become relatively rare. Most types of capacitor include a dielectric spacer, which increases their capacitance. These
dielectrics are most often insulators. However, low capacitance devices are available with a vacuum between their
plates, which allows extremely high voltage operation and low losses. Variable capacitors with their plates open
to the atmosphere were commonly used in radio tuning circuits. Later designs use polymer foil dielectric between
the moving and stationary plates, with no significant air space between them. Several solid dielectrics are available,
including paper, plastic, glass, mica and ceramic materials. Paper was used extensively in older devices and offers
relatively high voltage performance. However, it is susceptible to water absorption, and has been largely replaced
by plastic film capacitors. Plastics offer better stability and ageing performance, which makes them useful in timer
circuits, although they may be limited to low operating temperatures and frequencies. Ceramic capacitors are generally
small, cheap and useful for high frequency applications, although their capacitance varies strongly with voltage
and they age poorly. They are broadly categorized as class 1 dielectrics, which have predictable variation of capacitance with temperature, or class 2 dielectrics, which can operate at higher voltage. Glass and mica capacitors are extremely
reliable, stable and tolerant to high temperatures and voltages, but are too expensive for most mainstream applications.
Electrolytic capacitors and supercapacitors are used to store small and larger amounts of energy, respectively, ceramic
capacitors are often used in resonators, and parasitic capacitance occurs in circuits wherever the simple conductor-insulator-conductor
structure is formed unintentionally by the configuration of the circuit layout. Electrolytic capacitors use an aluminum
or tantalum plate with an oxide dielectric layer. The second electrode is a liquid electrolyte, connected to the
circuit by another foil plate. Electrolytic capacitors offer very high capacitance but suffer from poor tolerances,
high instability, gradual loss of capacitance especially when subjected to heat, and high leakage current. Poor quality
capacitors may leak electrolyte, which is harmful to printed circuit boards. The conductivity of the electrolyte
drops at low temperatures, which increases equivalent series resistance. While widely used for power-supply conditioning,
poor high-frequency characteristics make them unsuitable for many applications. Electrolytic capacitors will self-degrade
if unused for a period (around a year), and when full power is applied may short circuit, permanently damaging the
capacitor and usually blowing a fuse or causing failure of rectifier diodes (for instance, in older equipment, arcing
in rectifier tubes). They can be reformed before use, avoiding damage, by gradually applying the operating voltage, often
done on antique vacuum tube equipment over a period of 30 minutes by using a variable transformer to supply AC power.
Unfortunately, the use of this technique may be less satisfactory for some solid state equipment, which may be damaged
by operation below its normal power range, requiring that the power supply first be isolated from the consuming circuits.
Such remedies may not be applicable to modern high-frequency power supplies as these produce full output voltage
even with reduced input. Several other types of capacitor are available for specialist applications. Supercapacitors
store large amounts of energy. Supercapacitors made from carbon aerogel, carbon nanotubes, or highly porous electrode
materials, offer extremely high capacitance (up to 5 kF as of 2010) and can be used in some applications
instead of rechargeable batteries. Alternating current capacitors are specifically designed to work on line (mains)
voltage AC power circuits. They are commonly used in electric motor circuits and are often designed to handle large
currents, so they tend to be physically large. They are usually ruggedly packaged, often in metal cases that can
be easily grounded/earthed. They also are designed with direct current breakdown voltages of at least five times
the maximum AC voltage. If a capacitor is driven with a time-varying voltage that changes rapidly enough, at some
frequency the polarization of the dielectric cannot follow the voltage. As an example of the origin of this mechanism,
the internal microscopic dipoles contributing to the dielectric constant cannot move instantly, and so as frequency
of an applied alternating voltage increases, the dipole response is limited and the dielectric constant diminishes.
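The falloff of the dielectric constant with frequency can be illustrated with a simple single-pole (Debye-type) relaxation model. This is a minimal sketch; the material parameters (static permittivity, high-frequency permittivity, relaxation time) are illustrative values, not taken from any specific dielectric.

```python
# Single-pole Debye relaxation: eps(w) = eps_inf + (eps_s - eps_inf)/(1 + j*w*tau).
# The real (storage) part diminishes as frequency rises past 1/tau, as the
# microscopic dipoles can no longer follow the applied field.
import math

def debye_permittivity(omega, eps_s=10.0, eps_inf=2.0, tau=1e-9):
    """Complex relative permittivity for an illustrative Debye dielectric."""
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * omega * tau)

for f in (1e6, 1e9, 1e12):  # well below, near, and well above 1/(2*pi*tau)
    eps = debye_permittivity(2 * math.pi * f)
    print(f"f = {f:.0e} Hz  eps' = {eps.real:.3f}  eps'' = {-eps.imag:.3f}")
```

Running this shows the real part starting near the static value at low frequency and collapsing toward the high-frequency limit once the dipole response can no longer keep up.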
A changing dielectric constant with frequency is referred to as dielectric dispersion, and is governed by dielectric
relaxation processes, such as Debye relaxation. Under transient conditions, the frequency-dependent permittivity can be expressed (see electric susceptibility) as εr(ω) = ε′r(ω) − iε″r(ω) = Ccmplx(ω)/C0 = 1/(iωZ(ω)C0), where a single prime denotes the real part and a double prime the imaginary part, Z(ω) is the complex impedance with the dielectric present, Ccmplx(ω) is the so-called complex capacitance with the dielectric present, and C0 is the capacitance without the dielectric. (Measurement "without the dielectric" in principle
means measurement in free space, an unattainable goal inasmuch as even the quantum vacuum is predicted to exhibit
nonideal behavior, such as dichroism. For practical purposes, when measurement errors are taken into account, often
a measurement in terrestrial vacuum, or simply a calculation of C0, is sufficiently accurate.) The arrangement of
plates and dielectric has many variations depending on the desired ratings of the capacitor. For small values of
capacitance (microfarads and less), ceramic disks use metallic coatings, with wire leads bonded to the coating. Larger
values can be made by multiple stacks of plates and disks. Larger value capacitors usually use a metal foil or metal
film layer deposited on the surface of a dielectric film to make the plates, and a dielectric film of impregnated
paper or plastic – these are rolled up to save space. To reduce the series resistance and inductance for long plates,
the plates and dielectric are staggered so that connection is made at the common edge of the rolled-up plates, not
at the ends of the foil or metalized film strips that comprise the plates. Capacitors may have their connecting leads
arranged in many configurations, for example axially or radially. "Axial" means that the leads are on a common axis,
typically the axis of the capacitor's cylindrical body – the leads extend from opposite ends. Radial leads might
more accurately be referred to as tandem; they are rarely actually aligned along radii of the body's circle, so the
term is inexact, although universal. The leads (until bent) are usually in planes parallel to that of the flat body
of the capacitor, and extend in the same direction; they are often parallel as manufactured. Small, cheap discoidal
ceramic capacitors have existed since the 1930s, and remain in widespread use. Since the 1980s, surface mount packages
for capacitors have been widely used. These packages are extremely small and lack connecting leads, allowing them
to be soldered directly onto the surface of printed circuit boards. Surface mount components avoid undesirable high-frequency
effects due to the leads and simplify automated assembly, although manual handling is made difficult due to their
small size. Mechanically controlled variable capacitors allow the plate spacing to be adjusted, for example by rotating
or sliding a set of movable plates into alignment with a set of stationary plates. Low cost variable capacitors squeeze
together alternating layers of aluminum and plastic with a screw. Electrical control of capacitance is achievable
with varactors (or varicaps), which are reverse-biased semiconductor diodes whose depletion region width varies with
applied voltage. They are used in phase-locked loops, amongst other applications. Most capacitors have numbers printed
on their bodies to indicate their electrical characteristics. Larger capacitors like electrolytics usually display
the actual capacitance together with the unit (for example, 220 μF). Smaller capacitors like ceramics, however, use
a shorthand consisting of three numeric digits and a letter, where the digits indicate the capacitance in pF (calculated
as XY × 10^Z for digits XYZ) and the letter indicates the tolerance (J, K or M for ±5%, ±10% and ±20% respectively).
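The shorthand just described can be decoded mechanically. A minimal sketch, with a hypothetical helper name and only the J/K/M tolerance letters mentioned above:

```python
# Decode the three-digit-plus-letter ceramic capacitor marking described in
# the text: digits "XYZ" encode XY * 10**Z picofarads; the letter gives the
# tolerance. Only the J/K/M letters from the text are included.
TOLERANCE = {"J": "±5%", "K": "±10%", "M": "±20%"}

def decode_marking(code):
    """Decode a marking such as '104K' into (capacitance in pF, tolerance)."""
    digits, letter = code[:3], code[3:]
    picofarads = int(digits[:2]) * 10 ** int(digits[2])
    return picofarads, TOLERANCE.get(letter, "unknown")

print(decode_marking("104K"))  # (100000, '±10%'), i.e. 0.1 uF at ±10%
```

So "104K" reads as 10 × 10^4 pF = 100 nF with ±10% tolerance, and "221J" as 220 pF ±5%.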
Capacitors are connected in parallel with the power circuits of most electronic devices and larger systems (such
as factories) to shunt away and conceal current fluctuations from the primary power source to provide a "clean" power
supply for signal or control circuits. Audio equipment, for example, uses several capacitors in this way, to shunt
away power line hum before it gets into the signal circuitry. The capacitors act as a local reserve for the DC power
source, and bypass AC currents from the power supply. This is used in car audio applications, when a stiffening capacitor
compensates for the inductance and resistance of the leads to the lead-acid car battery. In electric power distribution,
capacitors are used for power factor correction. Such capacitors often come as three capacitors connected as a three
phase load. Usually, the values of these capacitors are given not in farads but rather as a reactive power in volt-amperes
reactive (var). The purpose is to counteract inductive loading from devices like electric motors and transmission
lines to make the load appear to be mostly resistive. Individual motor or lamp loads may have capacitors for power
factor correction, or larger sets of capacitors (usually with automatic switching devices) may be installed at a
load center within a building or in a large utility substation. When an inductive circuit is opened, the current
through the inductance collapses quickly, creating a large voltage across the open circuit of the switch or relay.
If the inductance is large enough, the energy will generate a spark, causing the contact points to oxidize, deteriorate,
or sometimes weld together, or destroying a solid-state switch. A snubber capacitor across the newly opened circuit
creates a path for this impulse to bypass the contact points, thereby preserving their life; these were commonly
found in contact breaker ignition systems, for instance. Similarly, in smaller scale circuits, the spark may not
be enough to damage the switch but will still radiate undesirable radio frequency interference (RFI), which a filter
capacitor absorbs. Snubber capacitors are usually employed with a low-value resistor in series, to dissipate energy
and minimize RFI. Such resistor-capacitor combinations are available in a single package. In single phase squirrel
cage motors, the primary winding within the motor housing is not capable of starting a rotational motion on the rotor,
but is capable of sustaining one. To start the motor, a secondary "start" winding has a series non-polarized starting
capacitor to introduce a lead in the sinusoidal current. When the secondary (start) winding is placed at an angle
with respect to the primary (run) winding, a rotating electric field is created. The force of the rotational field
is not constant, but is sufficient to start the rotor spinning. When the rotor comes close to operating speed, a
centrifugal switch (or current-sensitive relay in series with the main winding) disconnects the capacitor. The start
capacitor is typically mounted to the side of the motor housing. These are called capacitor-start motors, which have relatively high starting torque. Typically they can have up to four times as much starting torque as a split-phase motor, and are used in applications such as compressors, pressure washers and other small devices requiring high starting
torques. Capacitors may retain a charge long after power is removed from a circuit; this charge can cause dangerous
or even potentially fatal shocks or damage connected equipment. For example, even a seemingly innocuous device such
as a disposable-camera flash unit, powered by a 1.5 volt AA battery, has a capacitor which may contain over 15 joules
of energy and be charged to over 300 volts. This is easily capable of delivering a shock. Service procedures for
electronic devices usually include instructions to discharge large or high-voltage capacitors, for instance using
a Brinkley stick. Capacitors may also have built-in discharge resistors to dissipate stored energy to a safe level
within a few seconds after power is removed. High-voltage capacitors are stored with the terminals shorted, as protection
from potentially dangerous voltages due to dielectric absorption or from transient voltages the capacitor may pick
up from static charges or passing weather events. Capacitors may catastrophically fail when subjected to voltages
or currents beyond their rating, or as they reach their normal end of life. Dielectric or metal interconnection failures
may create arcing that vaporizes the dielectric fluid, resulting in case bulging, rupture, or even an explosion.
Capacitors used in RF or sustained high-current applications can overheat, especially in the center of the capacitor
rolls. Capacitors used within high-energy capacitor banks can violently explode when a short in one capacitor causes
sudden dumping of energy stored in the rest of the bank into the failing unit. High voltage vacuum capacitors can
generate soft X-rays even during normal operation. Proper containment, fusing, and preventive maintenance can help
to minimize these hazards.
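The stored-energy figures quoted earlier for the disposable-camera flash unit (over 15 joules at over 300 volts) follow from the relation E = ½CV². A quick check, assuming an illustrative 330 µF capacitance, which is roughly what a 300 V capacitor needs to hold that much energy:

```python
# Stored energy in a capacitor: E = 0.5 * C * V**2.
# The 330 uF value is illustrative, chosen so that at 300 V the result is
# close to the ~15 J figure quoted for a photoflash capacitor.
def stored_energy(capacitance_farads, voltage):
    """Energy in joules stored in a capacitor charged to the given voltage."""
    return 0.5 * capacitance_farads * voltage ** 2

energy = stored_energy(330e-6, 300)
print(f"{energy:.2f} J")  # close to the ~15 J figure in the text
```

The same relation explains why discharge resistors and shorted terminals matter: even a modest capacitance at a few hundred volts stores enough energy to deliver a painful or dangerous shock.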
The history of science is the study of the development of science and scientific knowledge, including both the natural sciences
and social sciences. (The history of the arts and humanities is termed the history of scholarship.) Science is
a body of empirical, theoretical, and practical knowledge about the natural world, produced by scientists who emphasize
the observation, explanation, and prediction of real world phenomena. Historiography of science, in contrast, often
draws on the historical methods of both intellectual history and social history. The English word scientist is relatively
recent—first coined by William Whewell in the 19th century. Previously, people investigating nature called themselves
natural philosophers. While empirical investigations of the natural world have been described since classical antiquity
(for example, by Thales, Aristotle, and others), and scientific methods have been employed since the Middle Ages
(for example, by Ibn al-Haytham, and Roger Bacon), the dawn of modern science is often traced back to the early modern
period and in particular to the scientific revolution that took place in 16th- and 17th-century Europe. Scientific
methods are considered to be so fundamental to modern science that some consider earlier inquiries into nature to
be pre-scientific. Traditionally, historians of science have defined science sufficiently broadly to include those
inquiries. From the 18th century through late 20th century, the history of science, especially of the physical and
biological sciences, was often presented in a progressive narrative in which true theories replaced false beliefs.
More recent historical interpretations, such as those of Thomas Kuhn, tend to portray the history of science in different
terms, such as that of competing paradigms or conceptual systems in a wider matrix that includes intellectual, cultural,
economic and political themes outside of science. The development of writing enabled knowledge to be stored and communicated
across generations with much greater fidelity. Combined with the development of agriculture, which allowed for a
surplus of food, it became possible for early civilizations to develop, because more time and effort could be devoted
to tasks (other than food production) than hunter-gatherers or early subsistence farmers had available. This surplus
allowed a community to support individuals who did things other than work towards bare survival. These other tasks
included systematic studies of nature, study of written information gathered and recorded by others, and often of
adding to that body of information. Ancient Egypt made significant advances in astronomy, mathematics and medicine.
Their development of geometry was a necessary outgrowth of surveying to preserve the layout and ownership of farmland,
which was flooded annually by the Nile river. The 3-4-5 right triangle and other rules of thumb were used to build
rectilinear structures, and the post and lintel architecture of Egypt. Egypt was also a center of alchemy research
for much of the Mediterranean. The Edwin Smith papyrus is one of the first medical documents still extant, and perhaps
the earliest document that attempts to describe and analyse the brain: it might be seen as the very beginnings of
modern neuroscience. However, while Egyptian medicine had some effective practices, it was not without its ineffective
and sometimes harmful practices. Medical historians believe that ancient Egyptian pharmacology, for example, was
largely ineffective. Nevertheless, ancient Egyptian medicine applied the following components to the treatment of disease: examination,
diagnosis, treatment, and prognosis, which display strong parallels to the basic empirical method of science and
according to G. E. R. Lloyd played a significant role in the development of this methodology. The Ebers papyrus (c.
1550 BC) also contains evidence of traditional empiricism. From their beginnings in Sumer (now Iraq) around 3500
BC, the Mesopotamian people began to attempt to record some observations of the world with numerical data. But their
observations and measurements were seemingly taken for purposes other than elucidating scientific laws. A concrete instance of the Pythagorean rule was recorded as early as the 18th century BC: the Mesopotamian cuneiform tablet Plimpton 322, dated to around 1900 BC and thus possibly millennia before Pythagoras, records a number of Pythagorean triples such as (3, 4, 5) and (5, 12, 13), although an abstract formulation of the Pythagorean theorem was not recorded. In Babylonian astronomy, records of the motions of
the stars, planets, and the moon are left on thousands of clay tablets created by scribes. Even today, astronomical
periods identified by Mesopotamian proto-scientists are still widely used in Western calendars such as the solar
year and the lunar month. Using these data they developed arithmetical methods to compute the changing length of
daylight in the course of the year and to predict the appearances and disappearances of the Moon and planets and
eclipses of the Sun and Moon. Only a few astronomers' names are known, such as that of Kidinnu, a Chaldean astronomer
and mathematician. Kidinnu's value for the solar year is still used in today's calendars. Babylonian astronomy was "the
first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According
to the historian A. Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India,
in Islam, and in the West—if not indeed all subsequent endeavour in the exact sciences—depend upon Babylonian astronomy
in decisive and fundamental ways." In Classical Antiquity, the inquiry into the workings of the universe took place
both in investigations aimed at such practical goals as establishing a reliable calendar or determining how to cure
a variety of illnesses and in those abstract investigations known as natural philosophy. The ancient people who are
considered the first scientists may have thought of themselves as natural philosophers, as practitioners of a skilled
profession (for example, physicians), or as followers of a religious tradition (for example, temple healers). The
earliest Greek philosophers, known as the pre-Socratics, provided competing answers to the question found in the
myths of their neighbors: "How did the ordered cosmos in which we live come to be?" The pre-Socratic philosopher
Thales (640-546 BC), dubbed the "father of science", was the first to postulate non-supernatural explanations for
natural phenomena, for example, that land floats on water and that earthquakes are caused by the agitation of the
water upon which the land floats, rather than the god Poseidon. Thales' student Pythagoras of Samos founded the Pythagorean
school, which investigated mathematics for its own sake, and was the first to postulate that the Earth is spherical
in shape. Leucippus (5th century BC) introduced atomism, the theory that all matter is made of indivisible, imperishable
units called atoms. This was greatly expanded by his pupil Democritus. Subsequently, Plato and Aristotle produced
the first systematic discussions of natural philosophy, which did much to shape later investigations of nature. Their
development of deductive reasoning was of particular importance and usefulness to later scientific inquiry. Plato
founded the Platonic Academy in 387 BC, whose motto was "Let none unversed in geometry enter here", and turned out
many notable philosophers. Plato's student Aristotle introduced empiricism and the notion that universal truths can
be arrived at via observation and induction, thereby laying the foundations of the scientific method. Aristotle also
produced many biological writings that were empirical in nature, focusing on biological causation and the diversity
of life. He made countless observations of nature, especially the habits and attributes of plants and animals in
the world around him, classified more than 540 animal species, and dissected at least 50. Aristotle's writings profoundly
influenced subsequent Islamic and European scholarship, though they were eventually superseded in the Scientific
Revolution. The important legacy of this period included substantial advances in factual knowledge, especially in
anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy; an awareness of the importance of certain
scientific problems, especially those related to the problem of change and its causes; and a recognition of the methodological
importance of applying mathematics to natural phenomena and of undertaking empirical research. In the Hellenistic
age scholars frequently employed the principles developed in earlier Greek thought: the application of mathematics
and deliberate empirical research, in their scientific investigations. Thus, clear unbroken lines of influence lead
from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and scientists, to the European
Renaissance and Enlightenment, to the secular sciences of the modern day. Neither reason nor inquiry began with the
Ancient Greeks, but the Socratic method did, along with the idea of Forms, great advances in geometry, logic, and
the natural sciences, a point emphasized by Benjamin Farrington, former Professor of Classics at Swansea University. The astronomer
Aristarchus of Samos was the first known person to propose a heliocentric model of the solar system, while the geographer
Eratosthenes accurately calculated the circumference of the Earth. Hipparchus (c. 190 – c. 120 BC) produced the first
systematic star catalog. The level of achievement in Hellenistic astronomy and engineering is impressively shown
by the Antikythera mechanism (150-100 BC), an analog computer for calculating the position of planets. Technological
artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared
in Europe. In Hellenistic Egypt, the mathematician Euclid laid down the foundations of mathematical rigor and introduced
the concepts of definition, axiom, theorem and proof still in use today in his Elements, considered the most influential
textbook ever written. Archimedes, considered one of the greatest mathematicians of all time, is credited with using
the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series,
and gave a remarkably accurate approximation of Pi. He is also known in physics for laying the foundations of hydrostatics,
statics, and the explanation of the principle of the lever. Theophrastus wrote some of the earliest descriptions
of plants and animals, establishing the first taxonomy and looking at minerals in terms of their properties such
as hardness. Pliny the Elder produced what is one of the largest encyclopedias of the natural world in 77 AD, and
must be regarded as the rightful successor to Theophrastus. For example, he accurately describes the octahedral shape
of the diamond, and proceeds to mention that diamond dust is used by engravers to cut and polish other gems owing
to its great hardness. His recognition of the importance of crystal shape is a precursor to modern crystallography,
while mention of numerous other minerals presages mineralogy. He also recognises that other minerals have characteristic
crystal shapes, but in one example, confuses the crystal habit with the work of lapidaries. He was also the first
to recognise that amber was a fossilized resin from pine trees because he had seen samples with trapped insects within
them. Mathematics: The earliest traces of mathematical knowledge in the Indian subcontinent appear with the Indus
Valley Civilization (c. 4th to c. 3rd millennium BC). The people of this civilization made bricks whose
dimensions were in the proportion 4:2:1, considered favorable for the stability of a brick structure. They also tried
to standardize measurement of length to a high degree of accuracy. They designed a ruler—the Mohenjo-daro ruler—whose
unit of length (approximately 1.32 inches or 3.4 centimetres) was divided into ten equal parts. Bricks manufactured
in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length. Indian astronomer
and mathematician Aryabhata (476-550), in his Aryabhatiya (499) introduced a number of trigonometric functions (including
sine, versine, cosine and inverse sine), trigonometric tables, and techniques and algorithms of algebra. In 628 AD,
Brahmagupta suggested that gravity was a force of attraction. He also lucidly explained the use of zero as both a
placeholder and a decimal digit, along with the Hindu-Arabic numeral system now used universally throughout the world.
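Aryabhata's trigonometric tables mentioned above tabulated the jya (essentially R·sin θ) at intervals of 3°45′. A hedged reconstruction of the opening entries, assuming the radius R = 3438 arcminutes traditionally reported for his table; the historical values were derived differently but agree closely with these rounded modern computations:

```python
# Illustrative reconstruction of the first entries of an Aryabhata-style
# sine table: jya(theta) = R * sin(theta), with the traditionally reported
# radius R = 3438 arcminutes and a tabular step of 3.75 degrees (3 deg 45').
import math

R = 3438      # radius in arcminutes
STEP = 3.75   # tabular interval in degrees

def jya(k):
    """k-th table entry: R * sin(k * 3.75 degrees), rounded to arcminutes."""
    return round(R * math.sin(math.radians(k * STEP)))

print([jya(k) for k in range(1, 5)])  # -> [225, 449, 671, 890]
```

The final entry of such a table, at k = 24 (i.e. 90°), is simply R itself, 3438.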
Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would
become Arabic numerals to the Islamic World by the 9th century. During the 14th–16th centuries, the Kerala school
of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields
such as trigonometry and analysis. In particular, Madhava of Sangamagrama is considered the "founder of mathematical
analysis". Astronomy: The first textual mention of astronomical concepts comes from the Vedas, religious literature
of India. According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the
universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year
of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month." The first 12 chapters
of the Siddhanta Shiromani, written by Bhāskara in the 12th century, cover topics such as: mean longitudes of the
planets; true longitudes of the planets; the three problems of diurnal rotation; syzygies; lunar eclipses; solar
eclipses; latitudes of the planets; risings and settings; the moon's crescent; conjunctions of the planets with each
other; conjunctions of the planets with the fixed stars; and the patas of the sun and moon. The 13 chapters of the
second part cover the nature of the sphere, as well as significant astronomical and trigonometric calculations based
on it. Medicine: Findings from Neolithic graveyards in what is now Pakistan show evidence of proto-dentistry among
an early farming culture. Ayurveda is a system of traditional medicine that originated in ancient India before 2500
BC, and is now practiced as a form of alternative medicine in other parts of the world. Its most famous text is the
Suśrutasamhitā of Suśruta, which is notable for describing procedures on various forms of surgery, including rhinoplasty,
the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and other surgical
procedures. Mathematics: From the earliest times, the Chinese used a positional decimal system on counting boards in order
to calculate. To express 10, a single rod is placed in the second box from the right. The spoken language uses a
similar system to English: e.g. four thousand two hundred seven. No symbol was used for zero. By the 1st century
BC, negative numbers and decimal fractions were in use, and The Nine Chapters on the Mathematical Art included methods for extracting higher-order roots by Horner's method, solving systems of linear equations, and applying Pythagoras' theorem. Cubic
equations were solved in the Tang dynasty and solutions of equations of order higher than 3 appeared in print in
1245 AD by Ch'in Chiu-shao. Pascal's triangle for binomial coefficients was described around 1100 by Jia Xian. Astronomy:
Astronomical observations from China constitute the longest continuous sequence from any civilisation and include
records of sunspots (112 records from 364 BC), supernovas (1054), lunar and solar eclipses. By the 12th century,
they could reasonably accurately make predictions of eclipses, but the knowledge of this was lost during the Ming
dynasty, so that the Jesuit Matteo Ricci gained much favour in 1601 by his predictions. By 635 Chinese astronomers
had observed that the tails of comets always point away from the sun. Seismology: To better prepare for calamities,
Zhang Heng invented a seismometer in 132 CE which provided instant alert to authorities in the capital Luoyang that
an earthquake had occurred in a location indicated by a specific cardinal or ordinal direction. Although no tremors
could be felt in the capital when Zhang told the court that an earthquake had just occurred in the northwest, a message
came soon afterwards that an earthquake had indeed struck 400 km (248 mi) to 500 km (310 mi) northwest of Luoyang
(in what is now modern Gansu). Zhang called his device the 'instrument for measuring the seasonal winds and the movements
of the Earth' (Houfeng didong yi 候风地动仪), so-named because he and others thought that earthquakes were most likely
caused by the enormous compression of trapped air. There are many notable
contributors to the field of Chinese science throughout the ages. One of the best examples would be Shen Kuo (1031–1095),
a polymath scientist and statesman who was the first to describe the magnetic-needle compass used for navigation,
discovered the concept of true north, improved the design of the astronomical gnomon, armillary sphere, sight tube,
and clepsydra, and described the use of drydocks to repair boats. After observing the natural process of the inundation
of silt and the find of marine fossils in the Taihang Mountains (hundreds of miles from the Pacific Ocean), Shen
Kuo devised a theory of land formation, or geomorphology. He also adopted a theory of gradual climate change in regions
over time, after observing petrified bamboo found underground at Yan'an, Shaanxi province. If not for Shen Kuo's
writing, the architectural works of Yu Hao would be little known, along with the inventor of movable type printing,
Bi Sheng (990-1051). Shen's contemporary Su Song (1020–1101) was also a brilliant polymath, an astronomer who created
a celestial atlas of star maps, wrote a pharmaceutical treatise with related subjects of botany, zoology, mineralogy,
and metallurgy, and had erected a large astronomical clocktower in Kaifeng city in 1088. To operate the crowning
armillary sphere, his clocktower featured an escapement mechanism and the world's oldest known use of an endless
power-transmitting chain drive. The Jesuit China missions of the 16th and 17th centuries "learned to appreciate the
scientific achievements of this ancient culture and made them known in Europe. Through their correspondence European
scientists first learned about the Chinese science and culture." Western academic thought on the history of Chinese
technology and science was galvanized by the work of Joseph Needham and the Needham Research Institute. Among the
technological accomplishments of China were, according to the British scholar Needham, early seismological detectors
(Zhang Heng in the 2nd century), the water-powered celestial globe (Zhang Heng), matches, the independent invention
of the decimal system, dry docks, sliding calipers, the double-action piston pump, cast iron, the blast furnace,
the iron plough, the multi-tube seed drill, the wheelbarrow, the suspension bridge, the winnowing machine, the rotary
fan, the parachute, natural gas as fuel, the raised-relief map, the propeller, the crossbow, the solid-fuel rocket, the multistage rocket, and the horse collar, along with contributions in logic, astronomy, medicine, and other fields.
With the division of the Roman Empire, the Western Roman Empire lost contact with much of its past. In the Middle
East, Greek philosophy was able to find some support under the newly created Arab Empire. With the spread of Islam
in the 7th and 8th centuries, a period of Muslim scholarship, known as the Islamic Golden Age, lasted until the 13th
century. This scholarship was aided by several factors. The use of a single language, Arabic, allowed communication
without need of a translator. Access to Greek texts from the Byzantine Empire, along with Indian sources of learning,
provided Muslim scholars a knowledge base to build upon. Muslim scientists placed far greater emphasis on experiment
than had the Greeks. This led to an early scientific method being developed in the Muslim world, where significant
progress in methodology was made, beginning with the experiments of Ibn al-Haytham (Alhazen) on optics from c. 1000,
in his Book of Optics. The law of refraction of light was known to the Persians. The most important development of
the scientific method was the use of experiments to distinguish between competing scientific theories set within
a generally empirical orientation, which began among Muslim scientists. Ibn al-Haytham is also regarded as the father
of optics, especially for his empirical proof of the intromission theory of light. Some have also described Ibn al-Haytham
as the "first scientist" for his development of the modern scientific method. In mathematics, the Persian mathematician
Muhammad ibn Musa al-Khwarizmi gave his name to the concept of the algorithm, while the term algebra is derived from
al-jabr, the beginning of the title of one of his publications. What is now known as Arabic numerals originally came
from India, but Muslim mathematicians did make several refinements to the number system, such as the introduction
of decimal point notation. Sabian mathematician Al-Battani (850-929) contributed to astronomy and mathematics, while
Persian scholar Al-Razi contributed to chemistry and medicine. In astronomy, Al-Battani improved the measurements
of Hipparchus, preserved in Ptolemy's Hè Megalè Syntaxis (The Great Treatise), known in translation as the Almagest.
Al-Battani also improved the precision of the measurement of the precession of the Earth's axis. The corrections
made to the geocentric model by al-Battani, Ibn al-Haytham, Averroes and the Maragha astronomers such as Nasir al-Din
al-Tusi, Mo'ayyeduddin Urdi and Ibn al-Shatir are similar to the Copernican heliocentric model. Heliocentric theories
may have also been discussed by several other Muslim astronomers such as Ja'far ibn Muhammad Abu Ma'shar al-Balkhi,
Abu-Rayhan Biruni, Abu Said al-Sijzi, Qutb al-Din al-Shirazi, and Najm al-Dīn al-Qazwīnī al-Kātibī. Ibn Sina (Avicenna)
is regarded as the most influential philosopher of Islam. He pioneered the science of experimental medicine and was
the first physician to conduct clinical trials. His two most notable works in medicine are the Kitāb al-shifāʾ ("Book
of Healing") and The Canon of Medicine, both of which were used as standard medicinal texts in both the Muslim world
and in Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature
of infectious diseases, and the introduction of clinical pharmacology. An intellectual revitalization of Europe started
with the birth of medieval universities in the 12th century. The contact with the Islamic world in Spain and Sicily,
and during the Reconquista and the Crusades, allowed Europeans access to scientific Greek and Arabic texts, including
the works of Aristotle, Ptolemy, Jābir ibn Hayyān, al-Khwarizmi, Alhazen, Avicenna, and Averroes. European scholars
had access to the translation programs of Raymond of Toledo, who sponsored the 12th century Toledo School of Translators
from Arabic to Latin. Later translators like Michael Scotus would learn Arabic in order to study these texts directly.
The European universities aided materially in the translation and propagation of these texts and started a new infrastructure
which was needed for scientific communities. In fact, the European university put many works about the natural world
and the study of nature at the center of its curriculum, with the result that the "medieval university laid far greater
emphasis on science than does its modern counterpart and descendent." At the beginning of the 13th century, there
were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors,
allowing a sound transfer of scientific ideas via both the universities and the monasteries. By then, the natural
philosophy contained in these texts began to be extended by notable scholastics such as Robert Grosseteste, Roger
Bacon, Albertus Magnus and Duns Scotus. Precursors of the modern scientific method, influenced by earlier contributions
of the Islamic world, can be seen already in Grosseteste's emphasis on mathematics as a way to understand nature,
and in the empirical approach admired by Bacon, particularly in his Opus Majus. Pierre Duhem's provocative thesis
of the Catholic Church's Condemnation of 1277 led to the study of medieval science as a serious discipline, "but
no one in the field any longer endorses his view that modern science started in 1277". However, many scholars agree
with Duhem's view that the Middle Ages were a period of important scientific developments. The first half of the
14th century saw much important scientific work being done, largely within the framework of scholastic commentaries
on Aristotle's scientific writings. William of Ockham introduced the principle of parsimony: natural philosophers
should not postulate unnecessary entities, so that motion is not a distinct thing but is only the moving object and
an intermediary "sensible species" is not needed to transmit an image of an object to the eye. Scholars such as Jean
Buridan and Nicole Oresme started to reinterpret elements of Aristotle's mechanics. In particular, Buridan developed
the theory that impetus was the cause of the motion of projectiles, which was a first step towards the modern concept
of inertia. The Oxford Calculators began to mathematically analyze the kinematics of motion, making this analysis
without considering the causes of motion. In 1348, the Black Death and other disasters brought a sudden end to the
previous period of massive philosophic and scientific development. Yet the rediscovery of ancient texts accelerated
after the Fall of Constantinople in 1453, when many Byzantine scholars had to seek refuge in the West. Meanwhile,
the introduction of printing was to have great effect on European society. The facilitated dissemination of the printed
word democratized learning and allowed a faster propagation of new ideas. New ideas also helped to influence the
development of European science at this point: not least the introduction of Algebra. These developments paved the
way for the Scientific Revolution, which may also be understood as a resumption of the process of scientific inquiry,
halted at the start of the Black Death. The renewal of learning in Europe, which began with 12th-century Scholasticism,
came to an end about the time of the Black Death, and the initial period of the subsequent Italian Renaissance is
sometimes seen as a lull in scientific activity. The Northern Renaissance, on the other hand, showed a decisive shift
in focus from Aristotelian natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine).
Thus modern science in Europe was resumed in a period of great upheaval: the Protestant Reformation and Catholic
Counter-Reformation, the discovery of the Americas by Christopher Columbus, the Fall of Constantinople, and the
rediscovery of Aristotle during the Scholastic period all presaged large social and political changes. Thus, a suitable
environment was created in which it became possible to question scientific doctrine, in much the same way that Martin
Luther and John Calvin questioned religious doctrine. The works of Ptolemy (astronomy) and Galen (medicine) were
found not always to match everyday observations. Work by Vesalius on human cadavers found problems with the Galenic
view of anatomy. The willingness to question previously held truths and search for new answers resulted in a period
of major scientific advancements, now known as the Scientific Revolution. The Scientific Revolution is traditionally
held by most historians to have begun in 1543, when the books De humani corporis fabrica (On the Workings of the
Human Body) by Andreas Vesalius, and also De Revolutionibus, by the astronomer Nicolaus Copernicus, were first printed.
The thesis of Copernicus' book was that the Earth moved around the Sun. The period culminated with the publication
of the Philosophiæ Naturalis Principia Mathematica in 1687 by Isaac Newton, representative of the unprecedented growth
of scientific publications throughout Europe. The Age of Enlightenment was a European affair. The 17th century "Age
of Reason" opened the avenues to the decisive steps towards modern science, which took place during the 18th century
"Age of Enlightenment". Directly based on the works of Newton, Descartes, Pascal and Leibniz, the way was now clear
to the development of modern mathematics, physics and technology by the generation of Benjamin Franklin (1706–1790),
Leonhard Euler (1707–1783), Mikhail Lomonosov (1711–1765) and Jean le Rond d'Alembert (1717–1783), epitomized in
the appearance of Denis Diderot's Encyclopédie between 1751 and 1772. The impact of this process was not limited
to science and technology, but affected philosophy (Immanuel Kant, David Hume), religion (the increasingly significant
impact of science upon religion), and society and politics in general (Adam Smith, Voltaire), the French Revolution
of 1789 setting a bloody caesura marking the beginning of political modernity[citation needed]. The early modern
period is seen as a flowering of the European Renaissance, in what is often known as the Scientific Revolution, viewed
as a foundation of modern science. The Romantic Movement of the early 19th century reshaped science by opening up
new pursuits unexpected in the classical approaches of the Enlightenment. Major breakthroughs came in biology, especially
in Darwin's theory of evolution, as well as physics (electromagnetism), mathematics (non-Euclidean geometry, group
theory) and chemistry (organic chemistry). The decline of Romanticism occurred because a new movement, Positivism,
began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. The scientific revolution
is a convenient boundary between ancient thought and classical physics. Nicolaus Copernicus revived the heliocentric
model of the solar system described by Aristarchus of Samos. This was followed by the first known model of planetary
motion given by Johannes Kepler in the early 17th century, which proposed that the planets follow elliptical orbits,
with the Sun at one focus of the ellipse. Galileo ("Father of Modern Physics") also made use of experiments to validate
physical theories, a key element of the scientific method. In 1687, Isaac Newton published the Principia Mathematica,
detailing two comprehensive and successful physical theories: Newton's laws of motion, which led to classical mechanics;
and Newton's Law of Gravitation, which describes the fundamental force of gravity. The behavior of electricity and
magnetism was studied by Faraday, Ohm, and others during the early 19th century. These studies led to the unification
of the two phenomena into a single theory of electromagnetism, by James Clerk Maxwell (known as Maxwell's equations).
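As a compact illustration (in modern notation, not in the original forms used by Kepler or Newton), the results mentioned above can be summarized as follows:

```latex
% Kepler: an elliptical orbit with the Sun at one focus, in polar form,
% with semi-latus rectum p and eccentricity 0 < e < 1
r(\theta) = \frac{p}{1 + e\cos\theta}

% Newton's second law of motion and law of universal gravitation
\vec{F} = m\vec{a}, \qquad F = G\,\frac{m_1 m_2}{r^2}
```

Newton showed that the inverse-square law of gravitation, combined with his laws of motion, reproduces Kepler's elliptical orbits, which is why the Principia is regarded as unifying terrestrial and celestial mechanics.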
The beginning of the 20th century brought the start of a revolution in physics. The long-held theories of Newton
were shown not to be correct in all circumstances. Beginning in 1900, Max Planck, Albert Einstein, Niels Bohr and
others developed quantum theories to explain various anomalous experimental results, by introducing discrete energy
levels. Not only did quantum mechanics show that the laws of motion did not hold on small scales, but even more disturbingly,
the theory of general relativity, proposed by Einstein in 1915, showed that the fixed background of spacetime, on
which both Newtonian mechanics and special relativity depended, could not exist. In 1925, Werner Heisenberg and Erwin
Schrödinger formulated quantum mechanics, which explained the preceding quantum theories. The observation by Edwin
Hubble in 1929 that the speed at which galaxies recede positively correlates with their distance led to the understanding
that the universe is expanding, and the formulation of the Big Bang theory by Georges Lemaître. In 1938 Otto Hahn
and Fritz Strassmann discovered nuclear fission with radiochemical methods, and in 1939 Lise Meitner and Otto Robert
Frisch wrote the first theoretical interpretation of the fission process, which was later improved by Niels Bohr
and John A. Wheeler. Further developments took place during World War II, which led to the practical application
of radar and the development and use of the atomic bomb. Though the process had begun with the invention of the cyclotron
by Ernest O. Lawrence in the 1930s, physics in the postwar period entered into a phase of what historians have called
"Big Science", requiring massive machines, budgets, and laboratories in order to test their theories and move into
new frontiers. State governments became the primary patrons of physics, recognizing that the support of "basic"
research could often lead to technologies useful for both military and industrial applications. Currently, general
relativity and quantum mechanics are inconsistent with each other, and efforts are underway to unify the two. Modern
chemistry emerged from the sixteenth through the eighteenth centuries through the material practices and theories
promoted by alchemy, medicine, manufacturing and mining. A decisive moment came when 'chymistry' was distinguished
from alchemy by Robert Boyle in his work The Sceptical Chymist, in 1661, although the alchemical tradition continued
for some time after his work. Other important steps included the gravimetric experimental practices of medical chemists
like William Cullen, Joseph Black, Torbern Bergman and Pierre Macquer and through the work of Antoine Lavoisier (Father
of Modern Chemistry) on oxygen and the law of conservation of mass, which refuted phlogiston theory. The theory that
all matter is made of atoms, which are the smallest constituents of matter that cannot be broken down without losing
the basic chemical and physical properties of that matter, was provided by John Dalton in 1803, although the question
was not settled as proven for another hundred years. Dalton also formulated the law of mass relationships. In 1869, Dmitri Mendeleev
composed his periodic table of elements on the basis of Dalton's discoveries. The synthesis of urea by Friedrich
Wöhler opened a new research field, organic chemistry, and by the end of the 19th century, scientists were able to
synthesize hundreds of organic compounds. The later part of the 19th century saw the exploitation of the Earth's
petrochemicals, after the exhaustion of the oil supply from whaling. By the 20th century, systematic production of
refined materials provided a ready supply of products which provided not only energy, but also synthetic materials
for clothing, medicine, and everyday disposable resources. Application of the techniques of organic chemistry to
living organisms resulted in physiological chemistry, the precursor to biochemistry. The 20th century also saw the
integration of physics and chemistry, with chemical properties explained as the result of the electronic structure
of the atom. Linus Pauling's book on The Nature of the Chemical Bond used the principles of quantum mechanics to
deduce bond angles in ever-more complicated molecules. Pauling's work culminated in the physical modelling of DNA,
the secret of life (in the words of Francis Crick, 1953). In the same year, the Miller–Urey experiment demonstrated
in a simulation of primordial processes, that basic constituents of proteins, simple amino acids, could themselves
be built up from simpler molecules. Geology existed as a cloud of isolated, disconnected ideas about rocks, minerals,
and landforms long before it became a coherent science. Theophrastus' work on rocks, Peri lithōn, remained authoritative
for millennia: its interpretation of fossils was not overturned until after the Scientific Revolution. Chinese polymath
Shen Kua (1031–1095) first formulated hypotheses for the process of land formation. Based on his observation of fossils
in a geological stratum in a mountain hundreds of miles from the ocean, he deduced that the land was formed by erosion
of the mountains and by deposition of silt. Geology did not undergo systematic restructuring during the Scientific
Revolution, but individual theorists made important contributions. Robert Hooke, for example, formulated a theory
of earthquakes, and Nicholas Steno developed the theory of superposition and argued that fossils were the remains
of once-living creatures. Beginning with Thomas Burnet's Sacred Theory of the Earth in 1681, natural philosophers
began to explore the idea that the Earth had changed over time. Burnet and his contemporaries interpreted Earth's
past in terms of events described in the Bible, but their work laid the intellectual foundations for secular interpretations
of Earth history. Modern geology, like modern chemistry, gradually evolved during the 18th and early 19th centuries.
Benoît de Maillet and the Comte de Buffon saw the Earth as much older than the 6,000 years envisioned by biblical
scholars. Jean-Étienne Guettard and Nicolas Desmarest hiked central France and recorded their observations on some
of the first geological maps. Aided by chemical experimentation, naturalists such as Scotland's John Walker, Sweden's
Torbern Bergman, and Germany's Abraham Werner created comprehensive classification systems for rocks and minerals—a
collective achievement that transformed geology into a cutting-edge field by the end of the eighteenth century. These
early geologists also proposed generalized interpretations of Earth history that led James Hutton, Georges Cuvier
and Alexandre Brongniart, following in the steps of Steno, to argue that layers of rock could be dated by the fossils
they contained: a principle first applied to the geology of the Paris Basin. The use of index fossils became a powerful
tool for making geological maps, because it allowed geologists to correlate the rocks in one locality with those
of similar age in other, distant localities. Over the first half of the 19th century, geologists such as Charles
Lyell, Adam Sedgwick, and Roderick Murchison applied the new technique to rocks throughout Europe and eastern North
America, setting the stage for more detailed, government-funded mapping projects in later decades. Midway through
the 19th century, the focus of geology shifted from description and classification to attempts to understand how
the surface of the Earth had changed. The first comprehensive theories of mountain building were proposed during
this period, as were the first modern theories of earthquakes and volcanoes. Louis Agassiz and others established
the reality of continent-covering ice ages, and "fluvialists" like Andrew Crombie Ramsay argued that river valleys
were formed, over millions of years, by the rivers that flow through them. After the discovery of radioactivity, radiometric
dating methods were developed, starting in the 20th century. Alfred Wegener's theory of "continental drift" was widely
dismissed when he proposed it in the 1910s, but new data gathered in the 1950s and 1960s led to the theory of plate
tectonics, which provided a plausible mechanism for it. Plate tectonics also provided a unified explanation for a
wide range of seemingly unrelated geological phenomena. Since 1970 it has served as the unifying principle in geology.
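The radiometric dating methods mentioned above all rest on the exponential law of radioactive decay. As a minimal sketch (the function name and numeric values are illustrative, not from the source; the formula is the standard textbook age equation for a closed sample):

```python
import math

def radiometric_age(parent_atoms: float, daughter_atoms: float, half_life: float) -> float:
    """Age of a closed sample, in the same time units as half_life.

    Uses t = (1 / lambda) * ln(1 + D / N), where N is the number of parent
    atoms remaining, D the accumulated daughter atoms, and the decay
    constant is lambda = ln(2) / half_life.
    """
    decay_constant = math.log(2) / half_life
    return math.log(1 + daughter_atoms / parent_atoms) / decay_constant

# Example: equal parent and daughter counts mean exactly one half-life
# has elapsed (here using the carbon-14 half-life of about 5730 years).
age = radiometric_age(parent_atoms=1000.0, daughter_atoms=1000.0, half_life=5730.0)
```

The example prints nothing; it simply encodes the fact that a 1:1 parent-to-daughter ratio corresponds to one half-life of elapsed time.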
In 1847, Hungarian physician Ignác Fülöp Semmelweis dramatically reduced the occurrence of puerperal fever by simply
requiring physicians to wash their hands before attending to women in childbirth. This discovery predated the germ
theory of disease. However, Semmelweis' findings were not appreciated by his contemporaries and came into use only
with discoveries by British surgeon Joseph Lister, who in 1865 proved the principles of antisepsis. Lister's work
was based on the important findings by French biologist Louis Pasteur. Pasteur was able to link microorganisms with
disease, revolutionizing medicine. He also devised one of the most important methods in preventive medicine, when
in 1880 he produced a vaccine against rabies. Pasteur invented the process of pasteurization, to help prevent the
spread of disease through milk and other foods. Perhaps the most prominent, controversial and far-reaching theory
in all of science has been the theory of evolution by natural selection put forward by the British naturalist Charles
Darwin in his book On the Origin of Species in 1859. Darwin proposed that the features of all living things, including
humans, were shaped by natural processes over long periods of time. The theory of evolution in its current form affects
almost all areas of biology. Implications of evolution on fields outside of pure science have led to both opposition
and support from different parts of society, and profoundly influenced the popular understanding of "man's place
in the universe". In the early 20th century, the study of heredity became a major investigation after the rediscovery
in 1900 of the laws of inheritance developed by the Moravian monk Gregor Mendel in 1866. Mendel's laws provided the
beginnings of the study of genetics, which became a major field of research for both scientific and industrial research.
By 1953, James D. Watson, Francis Crick and Maurice Wilkins clarified the basic structure of DNA, the genetic material
for expressing life in all its forms. In the late 20th century, the possibilities of genetic engineering became practical
for the first time, and a massive international effort began in 1990 to map out an entire human genome (the Human
Genome Project). The discipline of ecology typically traces its origin to the synthesis of Darwinian evolution and
Humboldtian biogeography, in the late 19th and early 20th centuries. Equally important in the rise of ecology, however,
were microbiology and soil science—particularly the cycle of life concept, prominent in the work of Louis Pasteur and
Ferdinand Cohn. The word ecology was coined by Ernst Haeckel, whose particularly holistic view of nature in general
(and Darwin's theory in particular) was important in the spread of ecological thinking. In the 1930s, Arthur Tansley
and others began developing the field of ecosystem ecology, which combined experimental soil science with physiological
concepts of energy and the techniques of field biology. The history of ecology in the 20th century is closely tied
to that of environmentalism; the Gaia hypothesis, first formulated in the 1960s and spreading in the 1970s, and
more recently the scientific-religious movement of Deep Ecology, have brought the two closer together. Political science
is a late arrival in terms of social sciences[citation needed]. However, the discipline has a clear set of antecedents
such as moral philosophy, political philosophy, political economy, history, and other fields concerned with normative
determinations of what ought to be and with deducing the characteristics and functions of the ideal form of government.
The roots of politics are in prehistory. In each historic period and in almost every geographic area, we can find
someone studying politics and increasing political understanding. In Western culture, the study of politics is first
found in Ancient Greece. The antecedents of European politics trace their roots back even earlier than Plato and
Aristotle, particularly in the works of Homer, Hesiod, Thucydides, Xenophon, and Euripides. Later, Plato analyzed
political systems, abstracted their analysis from more literary- and history-oriented studies and applied an approach
we would understand as closer to philosophy. Similarly, Aristotle built upon Plato's analysis to include historical
empirical evidence in his analysis. The Arthaśāstra is an ancient Indian treatise on statecraft, economic policy and military
strategy by Kautilya and Viṣhṇugupta, who are traditionally identified with Chāṇakya (c. 350–283 BCE). In this treatise,
the behaviors and relationships of the people, the King, the State, the Government Superintendents, Courtiers, Enemies,
Invaders, and Corporations are analysed and documented. Roger Boesche describes the Arthaśāstra as "a book of political
realism, a book analysing how the political world does work and not very often stating how it ought to work, a book
that frequently discloses to a king what calculating and sometimes brutal measures he must carry out to preserve
the state and the common good." With the fall of the Western Roman Empire, there arose a more diffuse arena for political
studies. The rise of monotheism and, particularly for the Western tradition, Christianity, brought to light a new
space for politics and political action[citation needed]. During the Middle Ages, the study of politics was widespread
in the churches and courts. Works such as Augustine of Hippo's The City of God synthesized current philosophies and
political traditions with those of Christianity, redefining the borders between what was religious and what was political.
Most of the political questions surrounding the relationship between Church and State were clarified and contested
in this period. Historical linguistics emerged as an independent field of study at the end of the 18th century. Sir
William Jones proposed that Sanskrit, Persian, Greek, Latin, Gothic, and Celtic languages all shared a common base.
After Jones, an effort to catalog all languages of the world was made throughout the 19th century and into the 20th
century. The publication of Ferdinand de Saussure's Cours de linguistique générale spurred the development of descriptive
linguistics. Descriptive linguistics and the related structuralist movement caused linguistics to focus on how language
changes over time, instead of just describing the differences between languages. Noam Chomsky further diversified
linguistics with the development of generative linguistics in the 1950s. His effort is based upon a mathematical
model of language that allows for the description and prediction of valid syntax. Additional specialties such as
sociolinguistics, cognitive linguistics, and computational linguistics have emerged from collaboration between linguistics
and other disciplines. Adam Smith's An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776,
forms the basis for classical economics. Smith criticized mercantilism, advocating a system of free trade with
division of labour. He postulated an "invisible hand" that regulated economic systems made up of actors guided only
by self-interest. Karl Marx developed an alternative economic theory, called Marxian economics. Marxian economics
is based on the labor theory of value and assumes the value of a good to be based on the amount of labor required to
produce it. Under this assumption, capitalism was based on employers not paying the full value of workers' labor to
create profit. The Austrian school responded to Marxian economics by viewing entrepreneurship as the driving force of
economic development. This replaced the labor theory of value by a system of supply and demand. In the 1920s, John
Maynard Keynes prompted a division between microeconomics and macroeconomics. Under Keynesian economics macroeconomic
trends can overwhelm economic choices made by individuals. Governments should promote aggregate demand for goods
as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism.
Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the
1970s, monetarism adapted into supply-side economics, which advocates reducing taxes as a means to increase the
amount of money available for economic expansion. The above "history of economics" reflects modern economic textbooks
and this means that the last stage of a science is represented as the culmination of its history (Kuhn, 1962). The
"invisible hand" mentioned in a lost page in the middle of a chapter in the middle of the "Wealth of Nations",
1776, is advanced as Smith's central message.[clarification needed] It is played down that this "invisible hand" acts
only "frequently" and that it is "no part of his [the individual's] intentions" because competition leads to lower
prices by imitating "his" invention. That this "invisible hand" prefers "the support of domestic to foreign industry"
is cleansed—often without indication that part of the citation is truncated. The opening passage of the "Wealth"
containing Smith's message is never mentioned as it cannot be integrated into modern theory: "Wealth" depends on
the division of labour, which changes with market volume, and on the proportion of productive to unproductive labor.
The end of the 19th century marks the start of psychology as a scientific enterprise. The year 1879 is commonly seen
as the start of psychology as an independent field of study. In that year Wilhelm Wundt founded the first laboratory
dedicated exclusively to psychological research (in Leipzig). Other important early contributors to the field include
Hermann Ebbinghaus (a pioneer in memory studies), Ivan Pavlov (who discovered classical conditioning), William James,
and Sigmund Freud. Freud's influence has been enormous, though more as a cultural icon than as a force in scientific psychology.
The final decades of the 20th century have seen the rise of a new interdisciplinary approach to studying human psychology,
known collectively as cognitive science. Cognitive science again considers the mind as a subject for investigation,
using the tools of psychology, linguistics, computer science, philosophy, and neurobiology. New methods of visualizing
the activity of the brain, such as PET scans and CAT scans, began to exert their influence as well, leading some
researchers to investigate the mind by investigating the brain, rather than cognition. These new forms of investigation
assume that a wide understanding of the human mind is possible, and that such an understanding may be applied to
other research domains, such as artificial intelligence. Ibn Khaldun can be regarded as the earliest scientific systematic
sociologist. Modern sociology emerged in the early 19th century as the academic response to the modernization
of the world. Among many early sociologists (e.g., Émile Durkheim), the aim of sociology was structuralist: understanding
the cohesion of social groups, and developing an "antidote" to social disintegration. Max Weber was concerned with
the modernization of society through the concept of rationalization, which he believed would trap individuals in
an "iron cage" of rational thought. Some sociologists, including Georg Simmel and W. E. B. Du Bois, utilized more
microsociological, qualitative analyses. This microlevel approach played an important role in American sociology,
with the theories of George Herbert Mead and his student Herbert Blumer resulting in the creation of the symbolic
interactionism approach to sociology. American sociology in the 1940s and 1950s was dominated largely by Talcott
Parsons, who argued that aspects of society that promoted structural integration were therefore "functional". This
structural functionalism approach was questioned in the 1960s, when sociologists came to see this approach as merely
a justification for inequalities present in the status quo. In reaction, conflict theory was developed, which was
based in part on the philosophies of Karl Marx. Conflict theorists saw society as an arena in which different groups
compete for control over resources. Symbolic interactionism also came to be regarded as central to sociological thinking.
Erving Goffman saw social interactions as a stage performance, with individuals preparing "backstage" and attempting
to control their audience through impression management. While these theories are currently prominent in sociological
thought, other approaches exist, including feminist theory, post-structuralism, rational choice theory, and postmodernism.
Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering,
studies the nature and limits of computation. Subfields include computability, computational complexity, database
design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances
in computing have contributed to more general scientific development is by facilitating large-scale archiving of
scientific data. Contemporary computer science typically distinguishes itself by emphasising mathematical 'theory'
in contrast to the practical emphasis of software engineering. As an academic field, history of science began with
the publication of William Whewell's History of the Inductive Sciences (first published in 1837). A more formal study
of the history of science as an independent discipline was launched by George Sarton's publications, Introduction
to the History of Science (1927) and the Isis journal (founded in 1912). Sarton exemplified the early 20th-century
view of the history of science as the history of great men and great ideas. He shared with many of his contemporaries
a Whiggish belief in history as a record of the advances and delays in the march of progress. The history of science
was not a recognized subfield of American history in this period, and most of the work was carried out by interested
scientists and physicians rather than professional historians. With the work of I. Bernard Cohen at Harvard, the
history of science became an established subdiscipline of history after 1945. Much of the study of the history of
science has been devoted to answering questions about what science is, how it functions, and whether it exhibits
large-scale patterns and trends. The sociology of science in particular has focused on the ways in which scientists
work, looking closely at the ways in which they "produce" and "construct" scientific knowledge. Since the 1960s,
a common trend in science studies (the study of the sociology and history of science) has been to emphasize the "human
component" of scientific knowledge, and to de-emphasize the view that scientific data are self-evident, value-free,
and context-free. The field of Science and Technology Studies, an area that overlaps and often informs historical
studies of science, focuses on the social context of science in both contemporary and historical periods. Humboldtian
science refers to the early 19th-century approach of combining scientific fieldwork with the sensibility, ethics, and aesthetic ideals of Romanticism. It helped establish natural history as a separate field, laid the groundwork for ecology, and was built on the role model of the scientist, naturalist, and explorer Alexander von Humboldt. The later 19th
century positivism asserted that all authentic knowledge allows verification and that the only valid knowledge is scientific. The mid-20th century saw a series of studies examining the role of
science in a social context, starting from Thomas Kuhn's The Structure of Scientific Revolutions in 1962. It opened
the study of science to new disciplines by suggesting that the evolution of science was in part sociologically determined
and that positivism did not explain the actual interactions and strategies of the human participants in science.
As Thomas Kuhn put it, the history of science may be seen in more nuanced terms, such as that of competing paradigms
or conceptual systems in a wider matrix that includes intellectual, cultural, economic and political themes outside
of science. "Partly by selection and partly by distortion, the scientists of earlier ages are implicitly presented
as having worked upon the same set of fixed problems and in accordance with the same set of fixed canons that the
most recent revolution in scientific theory and method made seem scientific." Further studies, such as Jerome Ravetz's 1971 Scientific Knowledge and its Social Problems, examined the role of the scientific community, as a social construct, in accepting or rejecting (objective) scientific knowledge. The Science Wars of the 1990s concerned the influence of French philosophers in particular, who denied, or seemed to deny, the objectivity of science in general. They also described differences between the idealized model of a pure science and actual scientific practice, while scientism, a revival of the positivist approach, saw in precise measurement and rigorous calculation the basis for finally settling
enduring metaphysical and moral controversies. However, more recently some of the leading critical theorists have
recognized that their postmodern deconstructions have at times been counter-productive, and are providing intellectual
ammunition for reactionary interests. Bruno Latour noted that "dangerous extremists are using the very same argument
of social construction to destroy hard-won evidence that could save our lives. Was I wrong to participate in the
invention of this field known as science studies? Is it enough to say that we did not really mean what we meant?"
Digimon (デジモン Dejimon, branded as Digimon: Digital Monsters, stylized as DIGIMON), short for "Digital Monsters" (デジタルモンスター Dejitaru
Monsutā), is a Japanese media franchise encompassing virtual pet toys, anime, manga, video games, films and a trading
card game. The franchise focuses on Digimon creatures, which are monsters living in a "Digital World", a parallel
universe that originated from Earth's various communication networks. In many incarnations, Digimon are raised by
humans called "Digidestined" or "Tamers", and they team up to defeat evil Digimon and human villains who are trying
to destroy the fabric of the Digital world. The franchise was first created in 1997 as a series of virtual pets,
akin to—and influenced in style by—the contemporary Tamagotchi or nano Giga Pet toys. The creatures were first designed
to look cute and iconic even on the devices' small screens; later developments had them created with a harder-edged
style influenced by American comics. The franchise gained momentum with its first anime incarnation, Digimon Adventure,
and an early video game, Digimon World, both released in 1999. Several seasons of the anime and films based on them
have aired, and the video game series has expanded into genres such as role-playing, racing, fighting, and MMORPGs.
Other media forms have also been released. Digimon was first conceived as a virtual pet toy in the vein of Tamagotchis
and, as such, took influence from Tamagotchis' cute and round designs. The small areas of the screens (16 by 16 pixels)
meant that character designers had to create monsters whose forms would be easily recognizable. As such, many of
the early Digimon—including Tyrannomon, the first one ever created—were based on dinosaurs. Many further designs
were created by Kenji Watanabe, who was brought in to help with the "X-Antibody" creatures and art for the Digimon
collectible card game. Watanabe was influenced by American comics, which were beginning to gain popularity in
Japan, and as such began to make his characters look stronger and "cool." The character creation process, however,
has for most of the franchise's history been collaborative and reliant on conversation and brainstorming. Digimon
hatch from eggs called Digi-Eggs (デジタマ Dejitama). In the English iterations of the franchise there is another type of Digi-Egg that can be used to digivolve, or transform, Digimon; this second type is called a Digimental (デジメンタル Dejimentaru) in Japanese. (This type of Digi-Egg was also featured as a major object
throughout season 2 as a way of Digivolution available only to certain characters at certain points throughout the
season.) They age via a process called "Digivolution" which changes their appearance and increases their powers.
The effect of Digivolution, however, is not permanent in the partner Digimon of the main characters in the anime,
and Digimon who have digivolved will most of the time revert to their previous form after a battle or if they are
too weak to continue. Some Digimon act feral. Most, however, are capable of intelligence and human speech. They are
able to digivolve by the use of Digivices that their human partners have. In some cases, as in the first series,
the DigiDestined (known as the 'Chosen Children' in the original Japanese) had to find some special items such as
crests and tags so the Digimon could digivolve into further stages of evolution known as Ultimate and Mega in the
dub. The first Digimon anime introduced the Digimon life cycle: They age in a similar fashion to real organisms,
but do not die under normal circumstances because they are made of reconfigurable data, which can be seen throughout
the show. Any Digimon that receives a fatal wound will dissolve into infinitesimal bits of data. The data then recomposes
itself as a Digi-Egg, which will hatch when rubbed gently, and the Digimon goes through its life cycle again. Digimon
who are reincarnated in this way will sometimes retain some or all their memories of their previous life. However,
if a Digimon's data is completely destroyed, they will die. Digimon started out as digital pets called "Digital Monsters",
similar in style and concept to the Tamagotchi. It was planned by WiZ and released by Bandai on June 26, 1997. The
toy began as the simple concept of a Tamagotchi aimed mainly at boys. The V-Pet is similar to its predecessors, except that it is more difficult and can battle other Digimon V-Pets. Every owner would start with a Baby
Digimon, train it, evolve it, take care of it, and then have battles with other Digimon owners to see who was stronger.
The Digimon pet had several evolution capabilities and abilities too, so many owners had different Digimon. In December,
the second generation of Digital Monster was released, followed by a third edition in 1998. "Digimon" are "Digital
Monsters". According to the stories, they are inhabitants of the "DigiWorld", a manifestation of Earth's communication
network. The stories tell of a group of mostly pre-teens, who accompany special Digimon born to defend their world
(and ours) from various evil forces. To help them surmount the most difficult obstacles found within both realms,
the Digimon have the ability to evolve (Digivolve). In this process, the Digimon change appearance and become much
stronger, often changing in personality as well. The group of children who come in contact with the Digital World
changes from series to series. As of 2011, there have been six series — Digimon Adventure, its sequel Digimon
Adventure 02, Digimon Tamers, Digimon Frontier, Digimon Data Squad and Digimon Fusion. The first two series take
place in the same fictional universe, but the third, fourth, fifth and sixth each occupy their own unique world.
Each series broadly follows the original storyline, but each adds elements to make it unique. However, in Tamers,
the Adventure universe is referred to as a commercial enterprise — a trading card game in Japan, plus a show-within-a-show
in the English dub. It also features an appearance by a character from the Adventure universe. In addition, each
series has spawned assorted feature films. Digimon still shows popularity, as new card series, video games, and movies
are still being produced and released: new card series include Eternal Courage, Hybrid Warriors, Generations, and
Operation X; the video game, Digimon Rumble Arena 2; and the previously unreleased movies Revenge of Diaboromon,
Runaway Locomon, Battle of Adventurers, and Island of Lost Digimon. In Japan, Digital Monster X-Evolution, the eighth
TV movie, was released on January 3, 2005, and on December 23, 2005 at Jump Festa 2006, the fifth series, Digimon
Savers was announced for Japan to begin airing after a three-year hiatus of the show. A sixth television series,
Digimon Xros Wars, began airing in 2010, and was followed by a second season, which started on October 2, 2011 as
a direct sequel to Digimon Xros Wars. The first Digimon television series began airing on March 7, 1999 in Japan on Fuji TV and Kids Station, and on August 14, 1999 in the United States on Fox Kids, dubbed by Saban Entertainment for the North American English version. Its premise is a group of seven kids who, while at summer camp, travel to the
Digital World, inhabited by creatures known as Digital Monsters, or Digimon, learning they are chosen to be "DigiDestined"
("Chosen Children" in the Japanese version) to save both the Digital and Real Worlds from evil. Each kid was given a Digivice, which selected them to be transported to the DigiWorld and destined them to be paired with a Digimon
Partner, such as Tai being paired up with Agumon and Matt with Gabumon. The children are helped by a mysterious man/digimon
named Gennai, who helps them via hologram. The Digivices help their Digimon allies to Digivolve into stronger creatures
in times of peril. The Digimon usually reached higher forms when their human partners are placed in dangerous situations,
such as fighting the evil forces of Devimon, Etemon, and Myotismon in their Champion forms. Later, each character discovered a personal crest: Tai the Crest of Courage, Matt the Crest of Friendship, Sora the
Crest of Love, Izzy the Crest of Knowledge, Mimi the Crest of Sincerity, Joe the Crest of Reliability, T.K. the Crest
of Hope, and later Kari the Crest of Light which allowed their Digimon to digivolve into their Ultimate forms. The
group consisted of seven original characters: Taichi "Tai" Kamiya, Yamato "Matt" Ishida, Sora Takenouchi, Koushiro
"Izzy" Izumi, Mimi Tachikawa, Joe Kido, and Takeru "T.K." Takaishi. Later on in the series, an eighth character was
introduced: Hikari "Kari" Kamiya (who is Taichi's younger sister). The second Digimon series is a direct continuation
of the first one, and began airing on April 2, 2000. Three years later, with most of the original DigiDestined now
in high school at age fourteen, the Digital World was supposedly secure and peaceful. However, a new evil has appeared
in the form of the Digimon Emperor (Digimon Kaiser) who as opposed to previous enemies is a human just like the DigiDestined.
The Digimon Emperor has been enslaving Digimon with Dark Rings and Control Spires and has somehow made regular Digivolution
impossible. However, a set of five Digi-Eggs with engraved emblems was assigned to three new DigiDestined along
with T.K. and Kari, two of the DigiDestined from the previous series. This new evolutionary process, dubbed Armor
Digivolution helps the new DigiDestined to defeat evil lurking in the Digital World. Eventually, the DigiDestined
defeat the Digimon Emperor, more commonly known as Ken Ichijouji on Earth, only with the great sacrifice of Ken's
own Digimon, Wormmon. Just when things were thought to be settled, new Digimon enemies made from the deactivated
Control Spires start to appear and cause trouble in the Digital World. To atone for his past mistakes, Ken, himself a DigiDestined, joins the group, his partner Wormmon revived, to fight against these new enemies. They soon
save countries including France and Australia from control spires and defeat MaloMyotismon (BelialVamdemon), the
digivolved form of Myotismon (Vamdemon) from the previous series. They stop the evil from destroying the two worlds,
and at the end, every person on Earth gains their own Digimon partner. The third Digimon series, which began airing
on April 1, 2001, is set largely in a "real world" where the Adventure and Adventure 02 series are television shows,
and where Digimon game merchandise (based on actual items) become key to providing power boosts to real Digimon which
appear in that world. The plot revolves around three Tamers, Takato Matsuki, Rika Nonaka, and Henry Wong. It began
with Takato creating his own Digimon partner by sliding a mysterious blue card through his card reader, which then
became a D-Power. Guilmon takes form from Takato's sketches of a new Digimon. (Tamers’ only human connection to
the Adventure series is Ryo Akiyama, a character featured in some of the Digimon video games and who made an appearance
on some occasions in the Adventure storyline.) Some of the changes in this series include the way the Digimon digivolve
with the introduction of Biomerge-Digivolution and the way their "Digivices" work. In this series, the Tamers can
slide game cards through their "Digivices" and give their Digimon partners certain advantages, as in the card game.
This act is called "Digi-Modify" (Card Slash in the Japanese version). The same process was often used to Digivolve
the Digimon, but as usual, emotions play a big part in the digivolving process. Unlike the two seasons before it
and most of the seasons that followed, Digimon Tamers takes a darker and more realistic approach to its story featuring
Digimon who do not reincarnate after their deaths and more complex character development in the original Japanese.
The anime has become controversial over the decade, with debates about how appropriate this show actually is for
its "target" audience, especially due to the Lovecraftian nature of the last arc. The English dub is more lighthearted
dialogue-wise, though still not as much as previous series. The fourth Digimon series, which began airing on April
7, 2002, radically departs from the previous three by focusing on a new and very different kind of evolution, Spirit
Evolution, in which the human characters use their D-Tectors (this series' Digivice) to transform themselves into
special Digimon called Legendary Warriors, departing from the customary formula of having digital partners. After
receiving unusual phone messages from Ophanimon (one of the three ruling Digimon alongside Seraphimon and Cherubimon)
Takuya Kanbara, Koji Minamoto, Junpei Shibayama, Zoe Orimoto, Tommy Himi, and Koichi Kimura go to a subway station
and take a train to the Digital World. Summoned by Ophanimon, the Digidestined realize that they must find the ten
legendary spirits and stop the forces of Cherubimon from physically destroying the Digital World. After finding the
ten spirits of the Legendary Warriors and defeating Mercurymon, Grumblemon, Ranamon, and Arbormon, they finally end
up fighting Cherubimon hoping to foil his effort to dominate the Digital World. After the defeat of Cherubimon, the
Digidestined find they must face an even greater challenge as they try to stop the Royal Knights—Dynasmon and Crusadermon—from
destroying the Digital World and using the collected data to revive the original ruler of the Digital World: the
tyrannical Lucemon. Ultimately the Digidestined fail in preventing Lucemon from reawakening but they do manage to
prevent him from escaping into the Real World. In the final battle, all of the legendary spirits the digidestined
have collected thus far merge and create Susanoomon. With this new form, the digidestined are able to effectively
defeat Lucemon and save the Digital World. In general, Frontier has a much lighter tone than that of Tamers, yet
remains darker than Adventure and Adventure 02. After a three-year hiatus, a fifth Digimon series began airing on
April 2, 2006. Like Frontier, Savers has no connection with the previous installments, and also marks a new start
for the Digimon franchise, with a drastic change in character designs and story-line, in order to reach a broader
audience. The story focuses on the challenges faced by the members of D.A.T.S. ("Digital Accident Tactics Squad"),
an organization created to conceal the existence of the Digital World and Digimon from the rest of mankind, and secretly
solve any Digimon-related incidents occurring on Earth. Later the D.A.T.S. is dragged into a massive conflict between
Earth and the Digital World, triggered by an ambitious human scientist named Akihiro Kurata, determined to make use
of the Digimon for his own personal gains. The English version was dubbed by Studiopolis and it premiered on the
Jetix block on Toon Disney on October 1, 2007. Digivolution in Data Squad requires the human partner's DNA ("Digital
Natural Ability" in the English version and "Digisoul" in the Japanese version) to activate, a strong empathy with
their Digimon and a will to succeed. 'Digimon Savers' also introduces a new form of digivolving called Burst Mode
which is essentially the level above Mega (previously the strongest form a Digimon could take). As in Tamers, the plot takes on a dark tone throughout the story, and the anime was originally aimed in Japan at an older audience of late teens and people in their early twenties (roughly ages 16 to 21). Because of that, and because the anime was heavily edited and localized for Western US audiences like past series, with the English dub aimed mostly at children aged 6 to 10 and carrying a TV-Y7-FV rating just like past dubs, Studiopolis dubbed the anime for Jetix with far more edits, changes, censorship, and cut footage.
This included giving the Japanese characters full Americanized names and American surnames as well as applying far
more Americanization (Marcus Damon as opposed to the Japanese Daimon Masaru), cultural streamlining and more edits
to their version similar to the changes 4Kids often made (such as removal of Japanese text for the purpose of cultural
streamlining). Despite all that, the setting of the country was still in Japan and the characters were Japanese in
the dub. This series was the first to show Japanese cultural concepts that were unfamiliar to American audiences
(such as the manju), which were left unedited and used in the English dub. Also despite the heavy censorship and
the English dub aimed at young children, some of the Digimon's attacks named after real weapons such as RizeGreymon's
Trident Revolver are not edited and used in the English dub. Well Go USA released it on DVD instead of Disney. The
North American English dub was televised on Jetix in the U.S. and on the Family Channel in Canada. Three and a quarter
years after the end of the fifth series, a sixth Digimon anime series was confirmed by Bandai, with its official name, Digimon Xros Wars, revealed in the June issue of Shueisha's V Jump magazine. It began airing in Japan on TV Asahi from July 6, 2010 onwards. It reverts to the design style of the first four series, and the plot takes on the younger, lighter tone present in series one, two, and four. The story
follows a boy named Mikey Kudō (Taiki Kudo in Japan) who, along with his friends, ends up in the Digital World where
they meet Shoutmon and his Digimon friends. Wielding a digivice known as a Fusion Loader (Xros Loader in Japan),
Mikey is able to combine multiple Digimon into one to enhance their power, Shoutmon being the usual core of the combination,
using a technique known as 'DigiFuse' (Digi-Xros in Japan). Forming Team Fusion Fighters (Team Xros Heart in Japan),
Mikey, Shoutmon and their friends travel through the Digital World to liberate it from the evil Bagra Army, led by
Bagramon (Lord Bagra in English), and Midnight, a shady group led by AxeKnightmon with Nene as a figurehead before she joins the Fusion Fighters. The Fusion Fighters also find themselves at odds with Blue Flare, led by Christopher
Aonuma (Kiriha Anouma in Japan). The second arc of Xros Wars was subtitled The Evil Death Generals and the Seven
Kingdoms. It saw the main cast reshuffled with a new wardrobe while Angie (Akari in Japan) and Jeremy (Zenjiro in
Japan) stay behind in the Human World; thus making Mikey, Christopher and Nene the lead protagonists as they set
off to face the Seven Death Generals of the Bagra Army and AxeKnightmon's new pawn: Nene's brother Ewan (Yuu in Japan).
A new evolution known as Super Digivolution was introduced at the end of the first arc. The English dub of the series
began airing on Nickelodeon on September 7, 2013, which is produced by Saban Brands. On August 17, 2011, Shueisha's
V-Jump magazine announced a sequel set one year later, a third arc of Xros Wars subtitled The Young Hunters Who Leapt
Through Time, which aired from October 2, 2011 to March 25, 2012, following on from the previous arc. It focuses
on a new protagonist, Tagiru Akashi and his partner Gumdramon who embark on a new journey with an older Mikey, Shoutmon,
an older Ewan and the revived Damemon, along with other new comrades as they deal with a hidden dimension that lies
between the Human World and the Digital World called DigiQuartz. The series finale reintroduces the heroes of the
previous five seasons as they all come together and help the current heroes in the final battle, because the DigiQuartz is essentially a tear in space and time, allowing all of the Digimon universes to converge. A new
Digimon series was announced 30 months after the end of Digimon Fusion at a 15th anniversary concert and theater
event for the franchise in August 2014. The series announced the return of the protagonists from the original Digimon
Adventure series, most of them now high school students. A countdown clicking game was posted on the show's official website, revealing news when specific click counts were reached. On December 13, 2014, the series title and a key visual featuring
character designs by Atsuya Uki were revealed with Keitaro Motonaga announced as director with a tentative premiere
date of Spring, 2015. However, on May 6, 2015, it was announced that tri. would not be a television series, but rather
a 6-part theatrical film series. The films are being streamed in episodic format outside Japan by Crunchyroll and
Hulu from the same day they premiere in Japanese theaters. The series is set three years after the events of Digimon
Adventure 02, when Digimon turned rogue by a mysterious infection appear and wreak havoc in the Human World. Tai
and the other DigiDestined from the original series reunite with their partners and start fighting back with support
from the Japanese government, while Davis, Yolei, Cody and Ken are defeated by a powerful enemy called Alphamon and
disappear without a trace. Tai and the others also meet another DigiDestined called Meiko Mochizuki and her partner
Meicoomon who become their friends, until Meicoomon turns hostile as well and flees after an encounter with Ken,
who reappears suddenly, once again as the Digimon Emperor. The film series also features several DigiDestined having
their partners Digivolve up to the Mega level for the first time, a feat only Tai and Matt had achieved previously.
There have been nine Digimon movies released in Japan. The first seven were directly connected to their respective
anime series; Digital Monster X-Evolution originated from the Digimon Chronicle merchandise line. All movies except
X-Evolution and Ultimate Power! Activate Burst Mode have been released and distributed internationally. Digimon:
The Movie, released in the U.S. and Canada territory by Fox Kids through 20th Century Fox on October 6, 2000, consists
of the union of the first three Japanese movies. The European publishing company, Panini, approached Digimon in different
ways in different countries. While Germany created its own adaptations of episodes, the United Kingdom (UK) reprinted
the Dark Horse titles, then translated some of the German adaptations of Adventure 02 episodes. Eventually the UK
comics were given their own original stories, which appeared in both the UK's official Digimon Magazine and the official
UK Fox Kids companion magazine, Wickid. These original stories only roughly followed the continuity of Adventure
02. When the comic switched to the Tamers series the storylines adhered to continuity more strictly; sometimes it
would expand on subject matter not covered by the original Japanese anime (such as Mitsuo Yamaki's past) or the English
adaptations of the television shows and movies (such as Ryo's story or the movies that remained undubbed until 2005).
In a money saving venture, the original stories were later removed from Digimon Magazine, which returned to printing
translated German adaptations of Tamers episodes. Eventually, both magazines were cancelled. The Digimon series has
a large number of video games which usually have their own independent storylines with a few sometimes tying into
the stories of the anime or manga series. The games span a number of genres including life simulation,
adventure, video card game, strategy and racing games, though they are mainly action role-playing games. The games
released in North America are: Digimon World, Digimon World 2, Digimon World 3, Digimon World 4, Digimon Digital
Card Battle, Digimon Rumble Arena, Digimon Rumble Arena 2, Digimon Battle Spirit, Digimon Battle Spirit 2, Digimon
Racing, Digimon World DS, Digimon World Data Squad, Digimon World Dawn and Dusk, Digimon World Championship, and
Digimon Masters. In 2011, Bandai posted a countdown on a teaser site. Once the countdown was finished, it revealed
a reboot of the Digimon World series titled Digimon World Re:Digitize. An enhanced version of the game released on
Nintendo 3DS as Digimon World Re:Digitize Decode in 2013. Another role-playing game by the name Digimon Story: Cyber
Sleuth is set for release in 2015 for PlayStation Vita. It is part of the Digimon Story sub-series, originally on
Nintendo DS and has also been released with English subtitles in North America.
A glacier (US /ˈɡleɪʃər/ or UK /ˈɡlæsiə/) is a persistent body of dense ice that is constantly moving under its own weight;
it forms where the accumulation of snow exceeds its ablation (melting and sublimation) over many years, often centuries.
Glaciers slowly deform and flow due to stresses induced by their weight, creating crevasses, seracs, and other distinguishing
features. They also abrade rock and debris from their substrate to create landforms such as cirques and moraines.
Glaciers form only on land and are distinct from the much thinner sea ice and lake ice that form on the surface of
bodies of water. On Earth, 99% of glacial ice is contained within vast ice sheets in the polar regions, but glaciers
may be found in mountain ranges on every continent except Australia, and on a few high-latitude oceanic islands.
Between 35°N and 35°S, glaciers occur only in the Himalayas, Andes, Rocky Mountains, a few high mountains in East
Africa, Mexico, New Guinea and on Zard Kuh in Iran. Glaciers cover about 10 percent of Earth's land surface. Continental
glaciers cover nearly 13,000,000 km2 (5×10^6 sq mi) or about 98 percent of Antarctica's 13,200,000 km2 (5.1×10^6
sq mi), with an average thickness of 2,100 m (7,000 ft). Greenland and Patagonia also have huge expanses of continental
glaciers. Glacial ice is the largest reservoir of freshwater on Earth. Many glaciers from temperate, alpine and seasonal
polar climates store water as ice during the colder seasons and release it later in the form of meltwater as warmer
summer temperatures cause the glacier to melt, creating a water source that is especially important for plants, animals
and human uses when other sources may be scant. Within high altitude and Antarctic environments, the seasonal temperature
difference is often not sufficient to release meltwater. Glacial bodies larger than 50,000 km2 (19,000 sq mi) are
called ice sheets or continental glaciers. Several kilometers deep, they obscure the underlying topography. Only
nunataks protrude from their surfaces. The only extant ice sheets are the two that cover most of Antarctica and Greenland.
They contain vast quantities of fresh water, enough that if both melted, global sea levels would rise by over 70
m (230 ft). Portions of an ice sheet or cap that extend into water are called ice shelves; they tend to be thin with
limited slopes and reduced velocities. Narrow, fast-moving sections of an ice sheet are called ice streams. In Antarctica,
many ice streams drain into large ice shelves. Some drain directly into the sea, often with an ice tongue, like Mertz
Glacier. Tidewater glaciers are glaciers that terminate in the sea, including most glaciers flowing from Greenland,
Antarctica, Baffin and Ellesmere Islands in Canada, Southeast Alaska, and the Northern and Southern Patagonian Ice
Fields. As the ice reaches the sea, pieces break off, or calve, forming icebergs. Most tidewater glaciers calve above
sea level, which often results in a tremendous impact as the iceberg strikes the water. Tidewater glaciers undergo
centuries-long cycles of advance and retreat that are much less affected by climate change than those of other
glaciers. Thermally, a temperate glacier is at melting point throughout the year, from its surface to its base. The
ice of a polar glacier is always below freezing point from the surface to its base, although the surface snowpack
may experience seasonal melting. A sub-polar glacier includes both temperate and polar ice, depending on depth beneath
the surface and position along the length of the glacier. In a similar way, the thermal regime of a glacier is often
described by the temperature at its base alone. A cold-based glacier is below freezing at the ice-ground interface,
and is thus frozen to the underlying substrate. A warm-based glacier is above or at freezing at the interface, and
is able to slide at this contact. This contrast is thought to largely govern the ability of a glacier to
effectively erode its bed, as sliding ice promotes plucking at rock from the surface below. Glaciers which are partly
cold-based and partly warm-based are known as polythermal. Glaciers form where the accumulation of snow and ice exceeds
ablation. The area in which a glacier forms is called a cirque (corrie or cwm), a typically armchair-shaped geological feature, such as a depression between mountains enclosed by arêtes, which collects the snow that falls into it. This snow is compacted by the weight of the snow falling above it, forming névé. Further crushing of the individual snowflakes and squeezing of the air from the snow turns it into 'glacial ice'.
This glacial ice will fill the cirque until it 'overflows' through a geological weakness or vacancy, such as the
gap between two mountains. When the mass of snow and ice is sufficiently thick, it begins to move due to a combination
of surface slope, gravity and pressure. On steeper slopes, this can occur with as little as 15 m (50 ft) of snow-ice.
Glaciers are broken into zones based on surface snowpack and melt conditions. The ablation zone is the region where
there is a net loss in glacier mass. The equilibrium line separates the ablation zone and the accumulation zone;
it is the altitude where the amount of new snow gained by accumulation is equal to the amount of ice lost through
ablation. The upper part of a glacier, where accumulation exceeds ablation, is called the accumulation zone. In general,
the accumulation zone accounts for 60–70% of the glacier's surface area, more if the glacier calves icebergs. Ice
in the accumulation zone is deep enough to exert a downward force that erodes underlying rock. After a glacier melts,
it often leaves behind a bowl- or amphitheater-shaped depression that ranges in size from large basins like the Great
Lakes to smaller mountain depressions known as cirques. The top 50 m (160 ft) of a glacier are rigid because they
are under low pressure. This upper section is known as the fracture zone and moves mostly as a single unit over the
plastically flowing lower section. When a glacier moves through irregular terrain, cracks called crevasses develop
in the fracture zone. Crevasses form due to differences in glacier velocity. If two rigid sections of a glacier move
at different speeds and directions, shear forces cause them to break apart, opening a crevasse. Crevasses are seldom
more than 46 m (150 ft) deep but in some cases can be 300 m (1,000 ft) or even deeper. Beneath this point, the plasticity
of the ice is too great for cracks to form. Intersecting crevasses can create isolated peaks in the ice, called seracs.
Crevasses can form in several different ways. Transverse crevasses are transverse to flow and form where steeper
slopes cause a glacier to accelerate. Longitudinal crevasses form semi-parallel to flow where a glacier expands laterally.
Marginal crevasses form from the edge of the glacier, due to the reduction in speed caused by friction of the valley
walls. Marginal crevasses are usually largely transverse to flow. Moving glacier ice can sometimes separate from
stagnant ice above, forming a bergschrund. Bergschrunds resemble crevasses but are singular features at a glacier's
margins. Mean glacier speed varies greatly, but is typically around 1 m (3 ft) per day. There may be no motion in stagnant
areas; for example, in parts of Alaska, trees can establish themselves on surface sediment deposits. In other cases,
glaciers can move as fast as 20–30 m (70–100 ft) per day, such as in Greenland's Jakobshavn Isbræ (Greenlandic: Sermeq
Kujalleq). Velocity increases with increasing slope, increasing thickness, increasing snowfall, increasing longitudinal
confinement, increasing basal temperature, increasing meltwater production and reduced bed hardness. A few glaciers
have periods of very rapid advancement called surges. These glaciers exhibit normal movement until suddenly they
accelerate, then return to their previous state. During these surges, the glacier may reach velocities far greater
than normal speed. These surges may be caused by failure of the underlying bedrock, the pooling of meltwater at the
base of the glacier — perhaps delivered from a supraglacial lake — or the simple accumulation of mass beyond a critical
"tipping point". Temporary rates up to 90 m (300 ft) per day have occurred when increased temperature or overlying
pressure caused bottom ice to melt and water to accumulate beneath a glacier. In glaciated areas where the glacier
moves faster than one km per year, glacial earthquakes occur. These are large-scale temblors that have seismic magnitudes
as high as 6.1. The number of glacial earthquakes in Greenland peaks every year in July, August and September and
is increasing over time. In a study using data from January 1993 through October 2005, more events were detected
every year since 2002, and twice as many events were recorded in 2005 as there were in any other year. This increase
in the numbers of glacial earthquakes in Greenland may be a response to global warming. Ogives are alternating wave
crests and valleys that appear as dark and light bands of ice on glacier surfaces. They are linked to seasonal motion
of glaciers; the width of one dark and one light band generally equals the annual movement of the glacier. Ogives
are formed when ice from an icefall is severely broken up, increasing ablation surface area during summer. This creates
a swale and space for snow accumulation in the winter, which in turn creates a ridge. Sometimes ogives consist only
of undulations or color bands and are described as wave ogives or band ogives. Glaciers are present on every continent and in approximately fifty countries, excluding those (Australia, South Africa) that have glaciers only on distant subantarctic
island territories. Extensive glaciers are found in Antarctica, Chile, Canada, Alaska, Greenland and Iceland. Mountain
glaciers are widespread, especially in the Andes, the Himalayas, the Rocky Mountains, the Caucasus, and the Alps.
Mainland Australia currently contains no glaciers, although a small glacier on Mount Kosciuszko was present in the
last glacial period. In New Guinea, small, rapidly diminishing glaciers are located on its highest summit massif
of Puncak Jaya. Africa has glaciers on Mount Kilimanjaro in Tanzania, on Mount Kenya and in the Rwenzori Mountains.
Oceanic islands with glaciers include Iceland, Svalbard, New Zealand, Jan Mayen and the subantarctic islands of
Marion, Heard, Grande Terre (Kerguelen) and Bouvet. During glacial periods of the Quaternary, Taiwan, Hawaii (on Mauna Kea) and Tenerife also had large alpine glaciers, while the Faroe and Crozet Islands were completely glaciated. The
permanent snow cover necessary for glacier formation is affected by factors such as the degree of slope on the land,
amount of snowfall and the winds. Glaciers can be found at all latitudes except from 20° to 27° north and south of the equator, where the descending limb of the Hadley circulation lowers precipitation so much that, combined with high insolation, snow lines rise above 6,500 m (21,330 ft). Between 19°N and 19°S, however, precipitation is
higher and the mountains above 5,000 m (16,400 ft) usually have permanent snow. Even at high latitudes, glacier formation
is not inevitable. Areas of the Arctic, such as Banks Island, and the McMurdo Dry Valleys in Antarctica are considered
polar deserts where glaciers cannot form because they receive little snowfall despite the bitter cold. Cold air,
unlike warm air, is unable to transport much water vapor. Even during glacial periods of the Quaternary, Manchuria,
lowland Siberia, and central and northern Alaska, though extraordinarily cold, had such light snowfall that glaciers
could not form. Glacial abrasion is commonly characterized by glacial striations. Glaciers produce these when they
contain large boulders that carve long scratches in the bedrock. By mapping the direction of the striations, researchers
can determine the direction of the glacier's movement. Similar to striations are chatter marks, lines of crescent-shaped
depressions in the rock underlying a glacier. They are formed by abrasion when boulders in the glacier are repeatedly
caught and released as they are dragged along the bedrock. Glacial moraines are formed by the deposition of material
from a glacier and are exposed after the glacier has retreated. They usually appear as linear mounds of till, a non-sorted
mixture of rock, gravel and boulders within a matrix of a fine powdery material. Terminal or end moraines are formed
at the foot or terminal end of a glacier. Lateral moraines are formed on the sides of the glacier. Medial moraines
are formed when two different glaciers merge and the lateral moraines of each coalesce to form a moraine in the middle
of the combined glacier. Less apparent are ground moraines, also called glacial drift, which often blanket the surface
underneath the glacier downslope from the equilibrium line. Before glaciation, mountain valleys have a characteristic
"V" shape, produced by eroding water. During glaciation, these valleys are widened, deepened, and smoothed, forming
a "U"-shaped glacial valley. The erosion that creates glacial valleys eliminates the spurs of earth that extend across
mountain valleys, creating triangular cliffs called truncated spurs. Within glacial valleys, depressions created
by plucking and abrasion can be filled by lakes, called paternoster lakes. If a glacial valley runs into a large
body of water, it forms a fjord. At the start of a classic valley glacier is a bowl-shaped cirque, which has escarped
walls on three sides but is open on the side that descends into the valley. Cirques are where ice begins to accumulate
in a glacier. Two glacial cirques may form back to back and erode their backwalls until only a narrow ridge, called
an arête, is left. This structure may result in a mountain pass. If multiple cirques encircle a single mountain, they
create pointed pyramidal peaks; particularly steep examples are called horns. Some rock formations in the path of
a glacier are sculpted into small hills called roche moutonnée, or "sheepback" rock. Roche moutonnée are elongated,
rounded, and asymmetrical bedrock knobs that can be produced by glacier erosion. They range in length from less than
a meter to several hundred meters long. Roche moutonnée have a gentle slope on their up-glacier sides and a steep
to vertical face on their down-glacier sides. The glacier abrades the smooth slope on the upstream side as it flows
along, but tears loose and carries away rock from the downstream side via plucking. Large masses, such as ice sheets
or glaciers, can depress the crust of the Earth into the mantle. The depression usually totals a third of the ice
sheet or glacier's thickness. After the ice sheet or glacier melts, the mantle begins to flow back to its original
position, pushing the crust back up. This post-glacial rebound, which proceeds very slowly after the melting of the
ice sheet or glacier, is currently occurring in measurable amounts in Scandinavia and the Great Lakes region of North
America.
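The figures above invite a back-of-envelope check. The sketch below uses commonly cited rough estimates for ice volumes and ocean area (assumed values, not taken from this article) to reproduce the "over 70 m" sea-level figure and the depression-equals-a-third-of-thickness rule via simple isostasy:

```python
# Back-of-envelope checks of two figures from the text.
# Ice volumes and ocean area below are rough outside estimates (assumptions).

RHO_ICE = 917.0      # density of glacial ice, kg/m^3
RHO_WATER = 1000.0   # density of fresh water, kg/m^3
RHO_MANTLE = 3300.0  # approximate upper-mantle density, kg/m^3

# Sea-level rise if the Antarctic and Greenland ice sheets melted entirely.
antarctic_ice_km3 = 26.5e6   # assumed estimate of Antarctic ice volume
greenland_ice_km3 = 2.9e6    # assumed estimate of Greenland ice volume
ocean_area_km2 = 361.0e6     # approximate area of the world ocean

# Convert ice volume to water-equivalent volume, then spread over the ocean.
water_volume_km3 = (antarctic_ice_km3 + greenland_ice_km3) * RHO_ICE / RHO_WATER
rise_m = water_volume_km3 / ocean_area_km2 * 1000.0  # km -> m
# This naive estimate ignores ice already grounded below sea level and the
# change in ocean area, so it lands a little above the article's "over 70 m".
print(f"naive sea-level rise: {rise_m:.0f} m")

# Isostatic depression: treating the crust as floating on the mantle,
# depression = ice thickness * rho_ice / rho_mantle, close to one third.
print(f"depression fraction of ice thickness: {RHO_ICE / RHO_MANTLE:.2f}")
```

The density ratio makes the "about a third" rule plausible without any detailed crustal model, which is why it is a standard first approximation.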
Comcast Corporation, formerly registered as Comcast Holdings, is an American multinational mass media company and
is the largest broadcasting and largest cable company in the world by revenue. It is the second largest pay-TV company
after the AT&T-DirecTV acquisition, largest cable TV company and largest home Internet service provider in the United
States, and the nation's third largest home telephone service provider. Comcast services U.S. residential and commercial
customers in 40 states and the District of Columbia. The company's headquarters are located in Philadelphia, Pennsylvania.
Comcast operates multiple cable-only channels (including E! Entertainment Television, the Golf Channel, and NBCSN),
over-the-air national broadcast network channels (NBC and Telemundo), the film production studio Universal Pictures,
and Universal Parks & Resorts, with a global total of nearly 200 family entertainment locations and attractions in
the U.S. and several other countries including U.A.E., South Korea, Russia and China, with several new locations
reportedly planned and being developed for future operation. Comcast also has significant holdings in digital distribution
(thePlatform). In February 2014 the company agreed to merge with Time Warner Cable in an equity swap deal worth $45.2
billion. Under the terms of the agreement Comcast was to acquire 100% of Time Warner Cable. However, on April 24,
2015, Comcast terminated the agreement. Comcast has been criticized for multiple reasons. The company's customer
satisfaction often ranks among the lowest in the cable industry. Comcast has violated net neutrality practices in
the past; and, despite Comcast's commitment to a narrow definition of net neutrality, critics advocate a definition that precludes any distinction between Comcast's private network services and the rest of the Internet. Critics also
point out a lack of competition in the vast majority of Comcast's service area; there is limited competition among
cable providers. Given Comcast's negotiating power as a large ISP, some suspect that Comcast could leverage paid
peering agreements to unfairly influence end-user connection speeds. Its ownership of both content production (in
NBCUniversal) and content distribution (as an ISP) has raised antitrust concerns. These issues, in addition to others,
led to Comcast being dubbed "The Worst Company in America" by The Consumerist in 2010 and 2014. Comcast is sometimes
described as a family business. Brian L. Roberts, Chairman, President, and CEO of Comcast, is the son of co-founder Ralph
Roberts. Roberts owns or controls just over 1% of all Comcast shares but all of the Class B supervoting shares, which
gives him an "undilutable 33% voting power over the company". Legal expert Susan P. Crawford has said this gives
him "effective control over its [Comcast's] every step". In 2010, he was one of the highest-paid executives in the
United States, with total compensation of about $31 million. The company is often criticized by both the media and
its own staff for its policies regarding employee relations. A 2012 Reddit post written by an anonymous
Comcast call center employee eager to share their negative experiences with the public received attention from publications
including The Huffington Post. A 2014 investigative series published by The Verge involved interviews with 150 of
Comcast's employees. It sought to examine why the company has become so widely criticized by its customers, the media
and even members of its own staff. The series claimed part of the problem is internal and that Comcast's staff endures
unreasonable corporate policies. According to the report: "customer service has been replaced by an obsession with
sales; technicians are understaffed while tech support is poorly trained; and the company is hobbled by internal
fragmentation." A widely read article penned by an anonymous call center employee working for Comcast appeared in
November 2014 on Cracked. Titled "Five Nightmares You Live While Working For America's Worst Company," the article
also claimed that Comcast is obsessed with sales, doesn't train its employees properly and concluded that "the system
makes good customer service impossible." Comcast has also earned a reputation for being anti-union. According to
one of the company's training manuals, "Comcast does not feel union representation is in the best interest of its
employees, customers, or shareholders". A dispute in 2004 with CWA, a labor union that represented many employees
at Comcast's offices in Beaverton, Oregon, led to allegations of management intimidating workers, requiring them to attend anti-union meetings, and taking unwarranted disciplinary action against union members. In 2011, Comcast received criticism
from the Writers Guild of America for its policies with regard to unions. Despite these criticisms, Comcast has appeared
on multiple "top places to work" lists. In 2009, it was included on CableFAX magazine's "Top 10 Places to Work in
Cable", which cited its "scale, savvy and vision". Similarly, the Philadelphia Business Journal awarded Comcast the
silver medal among extra-large companies in Philadelphia, with the gold medal going to partner organization, Comcast-Spectacor.
The Boston Globe found Comcast to be that city's top place to work in 2009. Employee diversity is also an attribute
upon which Comcast receives strong marks. In 2008, Black Enterprise magazine rated Comcast among the top 15 companies
for workforce diversity. Comcast was also named a "Top 2014 Workplace" by the Washington Post in their annual feature.
The book value of the company nearly doubled from $8.19 a share in 1999 to $15 a share in 2009. Revenues grew sixfold
from 1999's $6 billion to almost $36 billion in 2009. Net profit margin rose from 4.2% in 1999 to 8.4% in 2009, with
operating margins improving 31 percent and return on equity doubling to 6.7 percent in the same time span. Between
1999 and 2009, return on capital nearly tripled to 7 percent. Comcast reported first quarter 2012 profit increases
of 30% due to an increase in high-speed internet customers. In February 2014, Comcast generated $1.1 billion in revenue during the first quarter due to the Sochi Olympics. With $18.8 million spent in 2013, Comcast has the seventh largest
lobbying budget of any individual company or organization in the United States. Comcast employs multiple former US
Congressmen as lobbyists. The National Cable & Telecommunications Association, which has multiple Comcast executives
on its board, also represents Comcast and other cable companies as the fifth largest lobbying organization in the
United States, spending $19.8 million in 2013. Comcast was among the top backers of Barack Obama's presidential runs,
with Comcast vice president David Cohen raising over $2.2 million from 2007 to 2012. Cohen has been described by
many sources as influential in the US government, though he is no longer a registered lobbyist, as the time he spends
lobbying falls short of the 20% which requires official registration. Comcast's PAC, the Comcast Corporation and
NBCUniversal Political Action Committee, is among the largest PACs in the US, raising about $3.7 million from
2011-2012 for the campaigns of various candidates for office in the United States Federal Government. Comcast is
also a major backer of the National Cable and Telecommunications Association Political Action Committee, which raised
$2.6 million from 2011-2012. Comcast spent the most money of any organization in support of the Stop Online Piracy
and PROTECT IP bills, spending roughly $5 million to lobby for their passage. In 1963, Ralph J. Roberts in conjunction
with his two business partners, Daniel Aaron and Julian A. Brodsky, purchased American Cable Systems as a corporate
spin-off from its parent, Jerrold Electronics, for US $500,000. At the time, American Cable was a small cable operator
in Tupelo, Mississippi, with five channels and 12,000 customers. Storecast Corporation of America, a product placement
supermarket specialist marketing firm, was purchased by American Cable in 1965. With Storecast being a Muzak client,
American Cable purchased its first Muzak franchise of many in Orlando, Florida. In 1994, Comcast became the third
largest cable operator in the United States with around 3.5 million subscribers following its purchase of Maclean-Hunter's
American division for $1.27 billion. The company's UK branch, Comcast UK Cable Partners, went public while constructing a cable telecommunications network. With five other media companies, the corporation became an original investor
in The Golf Channel. Following a bid in 1994 for $2.1 billion, Comcast increased its ownership of QVC from 15.5%
of stock to a majority, in a move to prevent QVC from merging with CBS. Comcast later sold its QVC shares in 2004
to Liberty Media for $7.9 billion. Comcast sold Comcast Cellular to SBC Communications in 1999 for $400 million,
releasing them from $1.27 billion in debt. Comcast acquired Greater Philadelphia Cablevision in 1999. In March 1999,
Comcast offered to buy MediaOne for $60 billion. However, MediaOne decided to accept AT&T Corporation's offer of
$62 billion instead. Comcast University started in 1999 as well as Comcast Interactive Capital Group to make technology
and Internet related investments taking its first investment in VeriSign. In 2001, Comcast announced it would acquire
the assets of the largest cable television operator at the time, AT&T Broadband, for US$44.5 billion. The proposed
name for the merged company was "AT&T Comcast", but the companies ultimately decided to keep only the Comcast name.
In 2002, Comcast acquired all assets of AT&T Broadband, thus making Comcast the largest cable television company
in the United States with over 22 million subscribers. This also spurred the start of Comcast Advertising Sales (using
AT&T's groundwork) which would later be renamed Comcast Spotlight. As part of this acquisition, Comcast also acquired
the National Digital Television Center in Centennial, Colorado as a wholly owned subsidiary, which is today known
as the Comcast Media Center. On February 11, 2004, Comcast announced a $54 billion bid for The Walt Disney Company,
as well as taking on $12 billion of Disney's debt. The deal would have made Comcast the largest media conglomerate
in the world. However, after rejection by Disney and uncertain response from investors, the bid was abandoned in
April. The main reason for the buyout attempt was so that Comcast could acquire Disney's 80 percent stake in ESPN,
which a Comcast executive called "the most important and valuable asset" that Disney owned. On April 8, 2005, a partnership
led by Comcast and Sony Pictures Entertainment finalized a deal to acquire MGM and its affiliate studio, United Artists,
and create an additional outlet to carry MGM/UA's material for cable and Internet distribution. On October 31, 2005,
Comcast officially announced that it had acquired Susquehanna Communications, a South Central Pennsylvania-based
cable television and broadband services provider and unit of the former Susquehanna Pfaltzgraff company, for $775
million cash. In this deal Comcast acquired approximately 230,000 basic cable customers, 71,000 digital cable customers,
and 86,000 high-speed Internet customers. Comcast previously owned approximately 30 percent of Susquehanna Communications
through affiliate company Lenfest. In December 2005, Comcast announced the creation of Comcast Interactive Media,
a new division focused on online media. Comcast announced in May 2007 and launched in September 2008 a dashboard
called SmartZone. Hewlett-Packard led "design, creation and management". Collaboration and unified messaging technology
came from open-source vendor Zimbra. "SmartZone users will be able to send and receive e-mail, listen to their voicemail
messages online and forward that information via e-mail to others, send instant messages and video instant messages
and merge their contacts into one address book". There is also Cloudmark spam and phishing protection and Trend Micro
antivirus. The address book is Comcast Plaxo software. In April 2005, Comcast and Time Warner Cable announced plans
to buy the assets of bankrupted Adelphia Cable. The two companies paid a total of $17.6 billion in the deal that
was finalized in the second quarter of 2006—after the U.S. Federal Communications Commission (FCC) completed a seven-month
investigation without raising an objection. Time Warner Cable became the second largest cable provider in the U.S.,
ranking behind Comcast. As part of the deal, Time Warner and Comcast traded existing subscribers in order to consolidate
them into larger geographic clusters. Media outlets began reporting in late September 2009 that Comcast was in talks
to buy NBCUniversal. Comcast denied the rumors at first, while NBC would not comment on them. However, CNBC itself
reported on October 1 that General Electric was considering spinning NBCUniversal off into a separate company that
would merge the NBC television network and its cable properties such as USA Network, Syfy and MSNBC with Comcast's
content assets. GE would maintain 49% control of the new company, while Comcast would own 51%. Vivendi, which owned 20%,
would have to sell its stake to GE. It was reported that, under the current deal with GE, the sale would happen in November
or December. It was also reported that Time Warner would be interested in placing a bid, until CEO Jeffrey L. Bewkes
directly denied interest, leaving Comcast the sole bidder. On November 1, 2009, The New York Times reported Comcast
had moved closer to a deal to purchase NBCUniversal and that a formal announcement could be made sometime the following
week. Following a tentative agreement by December 1, on December 3, 2009, the parties announced that Comcast would
buy a controlling 51% stake in NBCUniversal for $6.5 billion in cash and $7.3 billion in programming. GE would take
over the remaining 49% stake in NBCUniversal, using $5.8 billion to buy out Vivendi's 20% minority stake in NBCUniversal.
On January 18, 2011, the FCC approved the deal by a vote of 4 to 1. The sale was completed on January 28, 2011. In
late December 2012, Comcast added the NBC peacock symbol to their new logo. On February 12, 2013, Comcast announced
an intention to acquire the remaining 49% of General Electric's interest in NBCUniversal, which Comcast completed
on March 19, 2013. On February 12, 2014, the Los Angeles Times reported that Comcast sought to acquire Time Warner
Cable in a deal valued at $45.2 billion. On February 13, it was reported that Time Warner Cable agreed to the acquisition.
This was to add several metropolitan areas to the Comcast portfolio, such as New York City, Los Angeles, Dallas-Fort
Worth, Cleveland, Columbus, Cincinnati, Charlotte, San Diego, and San Antonio. Time Warner Cable and Comcast aimed
to merge into one company by the end of 2014 and both have praised the deal, emphasizing the increased capabilities
of a combined telecommunications network, and to "create operating efficiencies and economies of scale". Critics
noted in 2013 that Tom Wheeler, the head of the FCC, which has to approve the deal, is the former head of both the
largest cable lobbying organization, the National Cable & Telecommunications Association, and the largest wireless
lobby, CTIA – The Wireless Association. According to Politico, Comcast "donated to almost every member of Congress
who has a hand in regulating it." The US Senate Judiciary Committee held a hearing on the deal on April 9, 2014.
The House Judiciary Committee planned its own hearing. On March 6, 2014 the United States Department of Justice Antitrust
Division confirmed it was investigating the deal. In March 2014, the division's chairman, William Baer, recused himself
because he was involved in a prior Comcast NBCUniversal acquisition. Several states' attorneys general have announced
support for the federal investigation. On April 24, 2015, Jonathan Sallet, general counsel of the FCC, said that
he was going to recommend a hearing before an administrative law judge, a step widely seen as tantamount to killing the deal. Comcast
delivers third-party television programming content to its own customers, and also produces its own first-party content
both for subscribers and customers of other competing television services. Fully or partially owned Comcast programming
includes Comcast Newsmakers, Comcast Network, Comcast SportsNet, SportsNet New York, MLB Network, Comcast Sports
Southeast/Charter Sports Southeast, NBC Sports Network, The Golf Channel, AZN Television, and FEARnet. On May 19,
2009, Disney and ESPN announced an agreement to allow Comcast Corporation to carry the channels ESPNU and ESPN3.
The U.S. Olympic Committee and Comcast intended to team up to create The U.S. Olympic Network, which was slated to
launch after the 2010 Vancouver Olympic Games. These plans were put on hold by the U.S. Olympic Committee, and the two parties ultimately abandoned the project. Comcast also owns many
local channels. Comcast also has a variety network known as Comcast Network, available exclusively to Comcast and
Cablevision subscribers. The channel shows news, sports, and entertainment and places emphasis on Philadelphia and
the Baltimore/Washington, D.C. areas, though the channel is also available in New York, Pittsburgh, and Richmond.
In August 2004, Comcast started a channel called Comcast Entertainment Television, for Colorado Comcast subscribers,
and focusing on life in Colorado. It also carries some National Hockey League and National Basketball Association
games when Altitude Sports & Entertainment is carrying the NHL or NBA. In January 2006, CET became the primary channel
for Colorado's Emergency Alert System in the Denver Metro Area. In 2006, Comcast helped found the channel SportsNet
New York, acquiring a minority stake. The other partners in the project were the New York Mets and Time Warner Cable.
In 1996, Comcast bought a controlling stake in Spectacor from the company's founder, Ed Snider. Comcast-Spectacor
holdings now include the Philadelphia Flyers NHL hockey team, the Philadelphia 76ers National Basketball Association
basketball team and two large multipurpose arenas in Philadelphia. Over a number of years, Comcast became majority
owner of Comcast SportsNet, as well as Golf Channel and NBCSN (formerly the Outdoor Life Network, then Versus). In
2002, Comcast paid the University of Maryland $25 million for naming rights to the new basketball arena built on
the College Park campus, the XFINITY Center. Before it was renamed for Comcast's cable subsidiary, XFINITY Center
was called Comcast Center from its opening in 2002 through July 2014. In 2004 and 2007, the American Customer Satisfaction
Index (ACSI) survey found that Comcast had the worst customer satisfaction rating of any company or government agency
in the country, including the Internal Revenue Service. The ACSI indicates that almost half of all cable customers
(regardless of company) have registered complaints, and that cable is the only industry to score below 60 in the
ACSI. Comcast's Customer Service Rating by the ACSI surveys indicate that the company's customer service has not
improved since the surveys began in 2001. Analysis of the surveys states that "Comcast is one of the lowest scoring
companies in ACSI. As its customer satisfaction eroded by 7% over the past year, revenue increased by 12%." The ACSI
analysis also addresses this contradiction, stating that "Such pricing power usually comes with some level of monopoly
protection and most cable companies have little competition at the local level. This also means that a cable company
can do well financially even though its customers are not particularly satisfied." Comcast was given an "F" for its
corporate governance practices in 2010, by Corporate Library, an independent shareholder-research organization. According
to Corporate Library, Comcast's board of directors' ability to oversee and control management was severely compromised
(at least in 2010) by the fact that several of the directors either worked for the company or had business ties to
it (making them susceptible to management pressure), and a third of the directors were over 70 years of age. According
to the Wall Street Journal, nearly two-thirds of the flights of Comcast's $40 million corporate jet, purchased for business travel related to the NBCU acquisition, were to CEO Brian Roberts' private homes or to resorts. In January
2015, a customer named Ricardo Brown received a bill from Comcast with his name changed to "Asshole Brown". Brown's
wife, Lisa, believed a Comcast employee changed the name in response to the Browns' request to cancel their cable service, during which she was refused a cancellation unless she paid a $60 fee and was instead routed to a retention specialist. After the Browns brought the bill to the attention of numerous Comcast customer service outlets, the company refused to correct the name, explaining that Ricardo was the customer's legal name, so the Browns turned to consumer advocate Christopher Elliott. Elliott posted the facts of the incident, along with a copy of the
bill, on his blog. Shortly thereafter, Elliott contacted Comcast and Comcast offered the Browns an apology, a $60
refund, and a promise to track down and fire the responsible employee. The Browns instead requested a full refund
for their negative experience and Comcast agreed to refund the family the last two years of service and provide the
next two years of service at no charge. Comcast released a statement explaining: "We have spoken with our customer
and apologized for this completely unacceptable and inappropriate name change. We have zero tolerance for this type
of disrespectful behavior and are conducting a thorough investigation to determine what happened. We are working
with our customer to make this right and will take appropriate steps to prevent this from happening again."
Tuberculosis (TB) is an infectious disease usually caused by the bacterium Mycobacterium tuberculosis (MTB). Tuberculosis
generally affects the lungs, but can also affect other parts of the body. Most infections cause no symptoms, a state known as latent tuberculosis. About 10% of latent infections progress to active disease which, if left untreated, kills
about half of those infected. The classic symptoms of active TB are a chronic cough with blood-containing sputum,
fever, night sweats, and weight loss. The historical term "consumption" came about due to the weight loss. Infection
of other organs can cause a wide range of symptoms. One-third of the world's population is thought to be infected
with TB. New infections occur in about 1% of the population each year. In 2014, there were 9.6 million cases of active
TB which resulted in 1.5 million deaths. More than 95% of deaths occurred in developing countries. The number of
new cases each year has decreased since 2000. About 80% of people in many Asian and African countries test positive
while 5–10% of the United States population tests positive by the tuberculin test. Tuberculosis has been
present in humans since ancient times. If a tuberculosis infection does become active, it most commonly involves
the lungs (in about 90% of cases). Symptoms may include chest pain and a prolonged cough producing sputum. About
25% of people may not have any symptoms (i.e. they remain "asymptomatic"). Occasionally, people may cough up blood
in small amounts, and in very rare cases, the infection may erode into the pulmonary artery or a Rasmussen's aneurysm,
resulting in massive bleeding. Tuberculosis may become a chronic illness and cause extensive scarring in the upper
lobes of the lungs. The upper lung lobes are more frequently affected by tuberculosis than the lower ones. The reason
for this difference is not clear. It may be due either to better air flow, or to poor lymph drainage within the upper
lungs. In 15–20% of active cases, the infection spreads outside the lungs, causing other kinds of TB. These are collectively
denoted as "extrapulmonary tuberculosis". Extrapulmonary TB occurs more commonly in immunosuppressed persons and
young children. In those with HIV, this occurs in more than 50% of cases. Notable extrapulmonary infection sites
include the pleura (in tuberculous pleurisy), the central nervous system (in tuberculous meningitis), the lymphatic
system (in scrofula of the neck), the genitourinary system (in urogenital tuberculosis), and the bones and joints
(in Pott disease of the spine), among others. When it spreads to the bones, it is also known as "osseous tuberculosis",
a form of osteomyelitis. Sometimes, bursting of a tubercular abscess through the skin results in a tuberculous ulcer. An
ulcer originating from nearby infected lymph nodes is painless, slowly enlarging and has an appearance of "wash leather".
A potentially more serious, widespread form of TB is called "disseminated tuberculosis", also known as miliary tuberculosis.
Miliary TB makes up about 10% of extrapulmonary cases. The main cause of TB is Mycobacterium tuberculosis, a small,
aerobic, nonmotile bacillus. The high lipid content of this pathogen accounts for many of its unique clinical characteristics.
It divides every 16 to 20 hours, which is an extremely slow rate compared with other bacteria, which usually divide
in less than an hour. Mycobacteria have an outer membrane lipid bilayer. If a Gram stain is performed, MTB either
stains very weakly "Gram-positive" or does not retain dye as a result of the high lipid and mycolic acid content
of its cell wall. MTB can withstand weak disinfectants and survive in a dry state for weeks. In nature, the bacterium
can grow only within the cells of a host organism, but M. tuberculosis can be cultured in the laboratory. Using histological
stains on expectorated samples from phlegm (also called "sputum"), scientists can identify MTB under a microscope.
Since MTB retains certain stains even after being treated with acidic solution, it is classified as an acid-fast
bacillus. The most common acid-fast staining techniques are the Ziehl–Neelsen stain and the Kinyoun stain, which
dye acid-fast bacilli a bright red that stands out against a blue background. Auramine-rhodamine staining and fluorescence
microscopy are also used. The M. tuberculosis complex (MTBC) includes four other TB-causing mycobacteria: M. bovis,
M. africanum, M. canetti, and M. microti. M. africanum is not widespread, but it is a significant cause of tuberculosis
in parts of Africa. M. bovis was once a common cause of tuberculosis, but the introduction of pasteurized milk has
almost completely eliminated this as a public health problem in developed countries. M. canetti is rare and seems
to be limited to the Horn of Africa, although a few cases have been seen in African emigrants. M. microti is also
rare and is seen almost only in immunodeficient people, although its prevalence may be significantly underestimated.
People with prolonged, frequent, or close contact with people with TB are at particularly high risk of becoming infected,
with an estimated 22% infection rate. A person with active but untreated tuberculosis may infect 10–15 (or more)
other people per year. Transmission can occur only from people with active TB – those with latent infection are
not thought to be contagious. The probability of transmission from one person to another depends upon several factors,
including the number of infectious droplets expelled by the carrier, the effectiveness of ventilation, the duration
of exposure, the virulence of the M. tuberculosis strain, the level of immunity in the uninfected person, and others.
The cascade of person-to-person spread can be broken by segregating those with active ("overt") TB and putting
them on anti-TB drug regimens. After about two weeks of effective treatment, subjects with nonresistant active infections
generally do not remain contagious to others. If someone does become infected, it typically takes three to four weeks
before the newly infected person becomes infectious enough to transmit the disease to others. TB infection begins
when the mycobacteria reach the pulmonary alveoli, where they invade and replicate within endosomes of alveolar macrophages.
Macrophages identify the bacterium as foreign and attempt to eliminate it by phagocytosis. During this process, the
bacterium is enveloped by the macrophage and stored temporarily in a membrane-bound vesicle called a phagosome. The
phagosome then combines with a lysosome to create a phagolysosome. In the phagolysosome, the cell attempts to use
reactive oxygen species and acid to kill the bacterium. However, M. tuberculosis has a thick, waxy mycolic acid capsule
that protects it from these toxic substances. M. tuberculosis is able to reproduce inside the macrophage and will
eventually kill the immune cell. The primary site of infection in the lungs, known as the "Ghon focus", is generally
located in either the upper part of the lower lobe, or the lower part of the upper lobe. Tuberculosis of the lungs
may also occur via infection from the blood stream. This is known as a Simon focus and is typically found in the
top of the lung. This hematogenous transmission can also spread infection to more distant sites, such as peripheral
lymph nodes, the kidneys, the brain, and the bones. All parts of the body can be affected by the disease, though
for unknown reasons it rarely affects the heart, skeletal muscles, pancreas, or thyroid. Tuberculosis is classified
as one of the granulomatous inflammatory diseases. Macrophages, T lymphocytes, B lymphocytes, and fibroblasts aggregate
to form granulomas, with lymphocytes surrounding the infected macrophages. When other macrophages attack the infected
macrophage, they fuse together to form a giant multinucleated cell in the alveolar lumen. The granuloma may prevent
dissemination of the mycobacteria and provide a local environment for interaction of cells of the immune system.
However, more recent evidence suggests that the bacteria use the granulomas to avoid destruction by the host's immune
system. Macrophages and dendritic cells in the granulomas are unable to present antigen to lymphocytes; thus the
immune response is suppressed. Bacteria inside the granuloma can become dormant, resulting in latent infection. Another
feature of the granulomas is the development of abnormal cell death (necrosis) in the center of tubercles. To the
naked eye, this has the texture of soft, white cheese and is termed caseous necrosis. In many people, the infection
waxes and wanes. Tissue destruction and necrosis are often balanced by healing and fibrosis. Affected tissue is replaced
by scarring and cavities filled with caseous necrotic material. During active disease, some of these cavities are
joined to the air passages (the bronchi), and this material can be coughed up. It contains living bacteria, so it can spread
the infection. Treatment with appropriate antibiotics kills bacteria and allows healing to take place. Upon cure,
affected areas are eventually replaced by scar tissue. Diagnosing active tuberculosis based only on signs and symptoms
is difficult, as is diagnosing the disease in those who are immunosuppressed. A diagnosis of TB should, however,
be considered in those with signs of lung disease or constitutional symptoms lasting longer than two weeks. A chest
X-ray and multiple sputum cultures for acid-fast bacilli are typically part of the initial evaluation. Interferon-γ
release assays and tuberculin skin tests are of little use in the developing world. IGRAs have similar limitations
in those with HIV. The Mantoux tuberculin skin test is often used to screen people at high risk for TB. Those who
have been previously immunized may have a false-positive test result. The test may be falsely negative in those with
sarcoidosis, Hodgkin's lymphoma, malnutrition, and most notably, active tuberculosis. Interferon gamma release assays
(IGRAs), performed on a blood sample, are recommended in those who test positive on the Mantoux test. These are not affected
by immunization or most environmental mycobacteria, so they generate fewer false-positive results. However, they
are affected by M. szulgai, M. marinum, and M. kansasii. IGRAs may increase sensitivity when used in addition to
the skin test, but may be less sensitive than the skin test when used alone. The bacillus Calmette–Guérin (BCG) vaccine is the most widely used vaccine worldwide,
with more than 90% of all children being vaccinated. The immunity it induces decreases after about ten years. As
tuberculosis is uncommon in most of Canada, the United Kingdom, and the United States, BCG is administered only to
those people at high risk. Part of the reasoning against the use of the vaccine is that it makes the tuberculin skin
test falsely positive, reducing the test's use in screening. A number of new vaccines are currently in development.
The World Health Organization declared TB a "global health emergency" in 1993, and in 2006, the Stop TB Partnership
developed a Global Plan to Stop Tuberculosis that aims to save 14 million lives between its launch and 2015. A number
of targets they have set are not likely to be achieved by 2015, mostly due to the increase in HIV-associated tuberculosis
and the emergence of multiple drug-resistant tuberculosis. A tuberculosis classification system developed by the
American Thoracic Society is used primarily in public health programs. Treatment of TB uses antibiotics to kill the
bacteria. Effective TB treatment is difficult, due to the unusual structure and chemical composition of the mycobacterial
cell wall, which hinders the entry of drugs and makes many antibiotics ineffective. The two antibiotics most commonly
used are isoniazid and rifampicin, and treatments can be prolonged, taking several months. Latent TB treatment usually
employs a single antibiotic, while active TB disease is best treated with combinations of several antibiotics to
reduce the risk of the bacteria developing antibiotic resistance. People with latent infections are also treated
to prevent them from progressing to active TB disease later in life. Directly observed therapy, i.e., having a health
care provider watch the person take their medications, is recommended by the WHO in an effort to reduce the number
of people not appropriately taking antibiotics. The evidence to support this practice over people simply taking their
medications independently is poor. Methods to remind people of the importance of treatment do, however, appear effective.
Primary resistance occurs when a person becomes infected with a resistant strain of TB. A person with fully susceptible
MTB may develop secondary (acquired) resistance during therapy because of inadequate treatment, not taking the prescribed
regimen appropriately (lack of compliance), or using low-quality medication. Drug-resistant TB is a serious public
health issue in many developing countries, as its treatment is longer and requires more expensive drugs. MDR-TB is
defined as resistance to the two most effective first-line TB drugs: rifampicin and isoniazid. Extensively drug-resistant
TB is also resistant to three or more of the six classes of second-line drugs. Totally drug-resistant TB is resistant
to all currently used drugs. It was first observed in 2003 in Italy, but not widely reported until 2012, and has
also been found in Iran and India. Bedaquiline is tentatively supported for use in multiple drug-resistant TB. The
risk of reactivation increases with immunosuppression, such as that caused by infection with HIV. In people coinfected
with M. tuberculosis and HIV, the risk of reactivation increases to 10% per year. Studies using DNA fingerprinting
of M. tuberculosis strains have shown reinfection contributes more substantially to recurrent TB than previously
thought, with estimates that it might account for more than 50% of reactivated cases in areas where TB is common.
The chance of death from a case of tuberculosis is about 4% as of 2008, down from 8% in 1995. Roughly one-third of
the world's population has been infected with M. tuberculosis, with new infections occurring in about 1% of the population
each year. However, most infections with M. tuberculosis do not cause TB disease, and 90–95% of infections remain
asymptomatic. In 2012, an estimated 8.6 million chronic cases were active. In 2010, 8.8 million new cases of TB were
diagnosed, and 1.20–1.45 million deaths occurred, most of these occurring in developing countries. Of these 1.45
million deaths, about 0.35 million occurred in those also infected with HIV. Tuberculosis is the second-most common
cause of death from infectious disease (after those due to HIV/AIDS). The total number of tuberculosis cases has
been decreasing since 2005, while new cases have decreased since 2002. China has achieved particularly dramatic progress,
with about an 80% reduction in its TB mortality rate between 1990 and 2010. The number of new cases has declined
by 17% between 2004 and 2014. Tuberculosis is more common in developing countries; about 80% of the population in many
Asian and African countries test positive in tuberculin tests, while only 5–10% of the US population test positive.
Hopes of totally controlling the disease have been dramatically dampened because of a number of factors, including
the difficulty of developing an effective vaccine, the expensive and time-consuming diagnostic process, the necessity
of many months of treatment, the increase in HIV-associated tuberculosis, and the emergence of drug-resistant cases
in the 1980s. In 2007, the country with the highest estimated incidence rate of TB was Swaziland, with 1,200 cases
per 100,000 people. India had the largest total incidence, with an estimated 2.0 million new cases. In developed
countries, tuberculosis is less common and is found mainly in urban areas. Rates per 100,000 people in different
areas of the world were: globally 178, Africa 332, the Americas 36, Eastern Mediterranean 173, Europe 63, Southeast
Asia 278, and Western Pacific 139 in 2010. In Canada and Australia, tuberculosis is many times more common among
the aboriginal peoples, especially in remote areas. In the United States, Native Americans have a fivefold greater
mortality from TB, and racial and ethnic minorities accounted for 84% of all reported TB cases. Tuberculosis has
been present in humans since antiquity. The earliest unambiguous detection of M. tuberculosis involves evidence of
the disease in the remains of bison in Wyoming dated to around 17,000 years ago. However, whether tuberculosis originated
in bovines, then was transferred to humans, or whether it diverged from a common ancestor, is currently unclear.
A comparison of the genes of M. tuberculosis complex (MTBC) in humans to MTBC in animals suggests humans did not
acquire MTBC from animals during animal domestication, as was previously believed. Both strains of the tuberculosis
bacteria share a common ancestor, which could have infected humans as early as the Neolithic Revolution. The bacillus
causing tuberculosis, M. tuberculosis, was identified and described on 24 March 1882 by Robert Koch. He received
the Nobel Prize in Physiology or Medicine in 1905 for this discovery. Koch did not believe the bovine (cattle) and
human tuberculosis diseases were similar, which delayed the recognition of infected milk as a source of infection.
Later, the risk of transmission from this source was dramatically reduced by the invention of the pasteurization
process. Koch announced a glycerine extract of the tubercle bacilli as a "remedy" for tuberculosis in 1890, calling
it "tuberculin". While it was not effective, it was later successfully adapted as a screening test for the presence
of pre-symptomatic tuberculosis. World Tuberculosis Day was established on 24 March for this reason. Tuberculosis
caused the most widespread public concern in the 19th and early 20th centuries as an endemic disease of the urban
poor. In 1815, one in four deaths in England was due to "consumption". By 1918, one in six deaths in France was still
caused by TB. After TB was determined to be contagious, in the 1880s, it was put on a notifiable disease list in
Britain; campaigns were started to stop people from spitting in public places, and the infected poor were "encouraged"
to enter sanatoria that resembled prisons (the sanatoria for the middle and upper classes offered excellent care
and constant medical attention). Whatever the (purported) benefits of the "fresh air" and labor in the sanatoria,
even under the best conditions, 50% of those who entered died within five years (circa 1916). In Europe, rates of
tuberculosis began to rise in the early 1600s to a peak level in the 1800s, when it caused nearly 25% of all deaths.
By the 1950s, mortality had decreased nearly 90%. Improvements in public health began significantly reducing rates
of tuberculosis even before the arrival of streptomycin and other antibiotics, although the disease remained a significant
threat to public health such that when the Medical Research Council was formed in Britain in 1913, its initial focus
was tuberculosis research. Slow progress has led to frustration, expressed by executive director of the Global Fund
to Fight AIDS, Tuberculosis and Malaria – Mark Dybul: "we have the tools to end TB as a pandemic and public health
threat on the planet, but we are not doing it." Several international organizations are pushing for more transparency
in treatment, and more countries are implementing mandatory reporting of cases to the government, although adherence
is often sketchy. Commercial treatment-providers may at times overprescribe second-line drugs as well as supplementary
treatment, promoting demands for further regulations. The government of Brazil provides universal TB-care, which
reduces this problem. Conversely, falling rates of TB-infection may not relate to the number of programs directed
at reducing infection rates, but may be tied to increased level of education, income and health of the population.
Costs of the disease, as calculated by the World Bank in 2009, may exceed US$150 billion per year in "high burden"
countries. Lack of progress eradicating the disease may also be due to lack of patient follow-up – as among the 250 million
rural migrants in China. One way to decrease stigma may be through the promotion of "TB clubs", where those infected
may share experiences and offer support, or through counseling. Some studies have shown TB education programs to
be effective in decreasing stigma, and may thus be effective in increasing treatment adherence. Despite this, studies
on the relationship between reduced stigma and mortality are lacking as of 2010, and similar efforts to decrease stigma
surrounding AIDS have been minimally effective. Some have claimed the stigma to be worse than the disease, and healthcare
providers may unintentionally reinforce stigma, as those with TB are often perceived as difficult or otherwise undesirable.
A greater understanding of the social and cultural dimensions of tuberculosis may also help with stigma reduction.
The BCG vaccine has limitations, and research to develop new TB vaccines is ongoing. A number of potential candidates
are currently in phase I and II clinical trials. Two main approaches are being used to attempt to improve the efficacy
of available vaccines. One approach involves adding a subunit vaccine to BCG, while the other strategy is attempting
to create new and better live vaccines. MVA85A, an example of a subunit vaccine, currently in trials in South Africa,
is based on a genetically modified vaccinia virus. It is hoped that vaccines will play a significant role in the treatment of
both latent and active disease. To encourage further discovery, researchers and policymakers are promoting new economic
models of vaccine development, including prizes, tax incentives, and advance market commitments. A number of groups,
including the Stop TB Partnership, the South African Tuberculosis Vaccine Initiative, and the Aeras Global TB Vaccine
Foundation, are involved with research. Among these, the Aeras Global TB Vaccine Foundation received a gift of more
than $280 million (US) from the Bill and Melinda Gates Foundation to develop and license an improved vaccine against
tuberculosis for use in high burden countries. A number of medications are being studied for multidrug-resistant tuberculosis, including bedaquiline and delamanid. Bedaquiline received U.S. Food and Drug Administration (FDA) approval
in late 2012. The safety and effectiveness of these new agents are still uncertain, because they are based on the
results of relatively small studies. However, existing data suggest that patients taking bedaquiline in addition
to standard TB therapy are five times more likely to die than those without the new drug, which has resulted in medical
journal articles raising health policy questions about why the FDA approved the drug and whether financial ties to
the company making bedaquiline influenced physicians' support for its use.
Affirmative action in the United States tends to focus on issues such as education and employment, specifically granting
special consideration to historically excluded groups in America, such as racial minorities, Native Americans, and women. Reports have shown that minorities and women have faced discrimination in schools and businesses for
many years and this discrimination produced unfair advantages for whites and males in education and employment. The
impetus toward affirmative action is redressing the disadvantages associated with past and present discrimination.
A further impetus is the desire to ensure that public institutions, such as universities, hospitals, and police forces, are
more representative of the populations they serve. Affirmative action is a subject of controversy. Some policies
adopted as affirmative action, such as racial quotas or gender quotas for collegiate admission, have been criticized
as a form of reverse discrimination, and such implementation of affirmative action has been ruled unconstitutional
in the majority opinion in Gratz v. Bollinger. Affirmative action as a practice was upheld by the Supreme Court's
decision in Grutter v. Bollinger in 2003. Affirmative action policies were developed in order to correct decades
of discrimination stemming from the Reconstruction Era by granting disadvantaged minorities opportunities. Many believe
that the diversity of current American society suggests that affirmative action policies succeeded and are no longer
required. Opponents of affirmative action argue that these policies are outdated and lead to reverse discrimination
which entails favoring one group over another based upon racial preference rather than achievement. Ideas for affirmative
action came as early as the Reconstruction Era (1865-1877) in which a former slave population lacked the skills and
resources for sustainable living. In 1865, General William Tecumseh Sherman proposed to divide the land and goods
from Georgia and grant them to families of color, which became known as the "Forty acres and a mule" policy. The proposal was
never widely adopted due to strong political opposition. Nearly a century later (1950s-1960s), policies to assist
classes of individuals reemerged during the Civil Rights Movement. The civil rights guarantees came through the interpretation
of the Equal Protection Clause of the 14th Amendment. The decisions came to be known as affirmative action in which
mandatory, as well as voluntary programs, affirmed the civil rights of people of color. Furthermore, these affirmative
action programs protected people of color from the present effects stemming from past discrimination. In 1961, President
John F. Kennedy became the first to utilize the term "affirmative action" in Executive Order 10925 to ensure that
government contractors "take affirmative action to ensure that applicants are employed, and employees are treated
during employment, without regard to their race, creed, color, or national origin." This executive order realized
the government's intent to create equal opportunities for all qualified people. This executive order was eventually
amended and superseded by Lyndon B. Johnson's Executive Order 11246 which prevented discrimination based on race,
color, religion, and national origin by organizations which received federal contracts and subcontracts. In 1967,
the order was amended to include sex as well. The Reagan administration was opposed to the affirmative action requirements
of Executive Order 11246, but the contemplated changes faced bipartisan opposition in Congress. The first
appearance of the term 'affirmative action' was in the National Labor Relations Act, better known as the Wagner Act,
of 1935. Proposed and championed by U.S. Senator Robert F. Wagner of New York, the Wagner Act was in line with
President Roosevelt's goal of providing economic security to workers and other low-income groups. During this time
period it was not uncommon for employers to blacklist or fire employees associated with unions. The Wagner Act allowed
workers to unionize without fear of being discriminated against, and empowered a National Labor Relations Board to
review potential cases of worker discrimination. In the event of discrimination, employees were to be restored to
an appropriate status in the company through 'affirmative action'. While the Wagner Act protected workers and unions
it did not protect minorities, who, exempting the Congress of Industrial Organizations, were often barred from union
ranks. This original coining of the term therefore has little to do with affirmative action policy as it is seen today, but helped set the stage for all policy meant to compensate or address an individual's unjust treatment. FDR's New Deal programs often contained equal opportunity clauses stating "no discrimination shall be made on account of race, color or creed", but the true forerunner to affirmative action was the Interior Secretary
of the time, Harold L. Ickes. Ickes prohibited discrimination in hiring for Public Works Administration funded projects
and oversaw not only the institution of a quota system, where contractors were required to employ a fixed percentage
of Black workers, by Robert C. Weaver and Clark Foreman, but also the equal pay of women proposed by Harry Hopkins. FDR's largest contribution to affirmative action, however, lay in his Executive Order 8802, which prohibited discrimination in the defense industry or government. The executive order promoted the idea that if taxpayer funds were accepted through a government contract, then all taxpayers should have an equal opportunity to work through the contractor. To enforce this idea, Roosevelt created the Fair Employment Practices Committee (FEPC) with the power to investigate hiring practices by government contractors. Following the Sergeant Isaac Woodard incident, President Harry S.
Truman, himself a combat veteran of World War I, issued Executive Order 9808 establishing the President's Committee
on Civil Rights to examine the violence and recommend appropriate federal legislation. Hearing of the incident, Truman
turned to NAACP leader Walter Francis White and declared, "My God! I had no idea it was as terrible as that. We've
got to do something." In 1947 the committee published its findings, To Secure These Rights. The book was widely read,
influential, and considered utopian for the times: "In our land men are equal, but they are free to be different.
From these very differences among our people has come the great human and national strength of America." The report
discussed and demonstrated racial discrimination in basic freedoms, education, public facilities, personal safety,
and employment opportunities. The committee was disturbed by the state of race relations, and noted that the evacuation of Americans of Japanese descent during the war had been "made without a trial or any sort of hearing…Fundamental to our whole system of law is the belief that guilt is personal and not a matter of heredity or association." The recommendations
were radical, calling for federal policies and laws to end racial discrimination and bring about equality: "We can
tolerate no restrictions upon the individual which depend upon irrelevant factors such as his race, his color, his
religion, or the social position to which he is born." To Secure These Rights set the liberal legislative agenda
for the next generation that eventually would be signed into law by Lyndon B. Johnson.:35–36 To Secure These Rights
also called for desegregation of the Armed Forces. "Prejudice in any area is an ugly, undemocratic phenomenon, but
in the armed services, where all men run the risk of death, it is especially repugnant." The rationale was fairness:
"When an individual enters the service of the country, he necessarily surrenders some of the rights and privileges
which are inherent in American citizenship." In return, the government "undertakes to protect his integrity as an
individual." Yet that was not possible in the segregated Army, since "any discrimination which…prevents members of
the minority groups from rendering full military service in defense of their country is for them a humiliating badge
of inferiority." The report called for an end to "all discrimination and segregation based on race, color, creed,
or national origins in…all branches of the Armed Services.":38–39 In June, Truman became the first president to address
the NAACP. His speech was a significant departure from traditional race relations in the United States. In front
of 10,000 people at the Lincoln Memorial, the president left no doubt where he stood on civil rights. According to
his speech, America had "reached a turning point in the long history of our country's efforts to guarantee freedom
and equality to all our citizens…Each man must be guaranteed equality of opportunity." He proposed what black citizens
had been calling for: an enhanced role of federal authority through the states. "We must make the Federal government
a friendly, vigilant defender of the rights and equalities of all Americans. And again I mean all Americans.":40
On July 26, Truman mandated the end of hiring and employment discrimination in the federal government, reaffirming
FDR's order of 1941.:40 He issued two executive orders on July 26, 1948: Executive Order 9980 and Executive Order
9981. Executive Order 9980, named Regulations Governing for Employment Practices within the Federal Establishment,
instituted fair employment practices in the civilian agencies of the federal government. The order created the position
of Fair Employment Officer. The order "established in the Civil Service Commission a Fair Employment Board of not
less than seven persons." Executive Order 9981, named Establishing the President's Committee on Equality of Treatment
and Opportunity in the Armed Services, called for the integration of the Armed Forces and the creation of the National
Military Establishment to carry out the executive order. When Eisenhower was elected President in 1952, he believed
hiring practices and anti-discrimination laws should be decided by the states, although the administration gradually
continued to desegregate the Armed Forces and the federal government.:50 The President also established the Government
Contract Committee in 1953, which "conducted surveys of the racial composition of federal employees and tax-supported
contractors".:50–51 The committee, chaired by Vice President Richard Nixon, had minimal impact, as it placed the primary responsibility for desegregation on the contractors' own companies and corporations.:51
In the 1960 presidential election, Democratic candidate and future President John F. Kennedy "criticized President
Eisenhower for not ending discrimination in federally supported housing" and "advocated a permanent Fair Employment
Practices Commission".:59 Shortly after taking office, Kennedy issued Executive Order 10925 in March 1961, requiring
government contractors to "consider and recommend additional affirmative steps which should be taken by executive
departments and agencies to realize more fully the national policy of nondiscrimination…. The contractor will take
affirmative action to ensure that applicants are employed, and that employees are treated during employment, without
regard to their race, creed, color, or national origin".:60 The order also established the President's Committee
on Equal Employment Opportunity (PCEEO), chaired by Vice President Lyndon B. Johnson. Federal contractors who failed
to comply or violated the executive order were punished by contract cancellation and the possible debarment from
future government contracts. The administration was "not demanding any special preference or treatment or quotas
for minorities" but was rather "advocating racially neutral hiring to end job discrimination".:61 Turning to issues
of women's rights, Kennedy initiated a Commission on the Status of Women in December 1961. The commission was charged
with "examining employment policies and practices of the government and of contractors" with regard to sex.:66 In
June 1963, President Kennedy continued his policy of affirmative action by issuing another mandate, Executive Order
11114. The order supplemented his previous 1961 executive order, declaring it was the "policy of the United States
to encourage by affirmative action the elimination of discrimination in employment".:72 Through this order, all federal
funds, such as "grants, loans, unions and employers who accepted taxpayer funds, and other forms of financial assistance
to state and local governments," were forced to comply with the government's policies on affirmative action in employment
practices.:72 The first time "affirmative action" was used by the federal government concerning race was in President John F. Kennedy's Executive Order 10925, which established a committee chaired by Vice President Johnson. At Johnson's inaugural ball
in Texas, he met with a young black lawyer, Hobart Taylor Jr., and gave him the task to co-author the executive order.
He wanted a phrase that "gave a sense of positivity to performance under the order." He was torn between the words
"positive action" and "affirmative action," and selected the latter due to its alliterative quality. The term "active recruitment" started to be used as well. This order, though touted as a significant piece of legislation, in reality carried little actual power. The scope was limited to a couple hundred defense contractors, leaving nearly $7.5 billion in federal grants and loans unsupervised.:60 The NAACP had many problems with JFK's "token" proposal. They
wanted jobs. One day after the order took effect, NAACP labor secretary Herbert Hill filed complaints against the
hiring and promoting practices of Lockheed Aircraft Corporation. Lockheed was doing business with the Defense Department
on the first billion-dollar contract. Because taxpayer funding made up 90% of Lockheed's business, and because of its disproportionate hiring practices, black workers charged Lockheed with "overt discrimination." Lockheed signed an agreement with Vice President Johnson that pledged an "aggressive seeking out for more qualified minority candidates for technical and skill positions.":63–64 This agreement was the administration's model for a "plan of progress." Johnson and his assistants
soon pressured other defense contractors, including Boeing and General Electric, to sign similar voluntary agreements
indicating plans for progress. However, these plans were just that: voluntary. Many corporations in the South, still
afflicted with Jim Crow laws, largely ignored the federal recommendations.:63–64 This eventually led to LBJ's Civil
Rights Act, which came shortly after President Kennedy's assassination. This document was more holistic than anything President Kennedy had offered, and therefore more controversial. It aimed not only to integrate public facilities,
but also private businesses that sold to the public, such as motels, restaurants, theaters, and gas stations. Public
schools, hospitals, libraries, parks, among other things, were included in the bill as well. It also worked with
JFK's executive order 11114 by prohibiting discrimination in the awarding of federal contracts and holding the authority
of the government to deny contracts to businesses who discriminate. Perhaps most significant of all, Title VII of the
Civil Rights Act aimed to end discrimination in all firms with 25 or more employees. Another provision established
the Equal Employment Opportunity Commission as the agency charged with ending discrimination in the nation's workplace.:74
Title VII was perhaps the most controversial part of the entire bill. Many conservatives accused it of advocating a de facto quota system, and claimed it was unconstitutional because it attempted to regulate the workplace. Minnesota Senator Hubert
Humphrey corrected this notion: "there is nothing in [Title VII] that will give power to the Commission to require
hiring, firing, and promotion to meet a racial 'quota.' [. . .] Title VII is designed to encourage the hiring on
basis of ability and qualifications, not race or religion." Title VII prohibits discrimination. Humphrey was the
silent hero of the bill's passage through Congress. He pledged that the bill required no quotas, just nondiscrimination.
In doing so, he convinced many pro-business Republicans, including Senate Minority Leader Everett Dirksen (IL), to support
Title VII.:78–80 The strides that the Johnson presidency made in ensuring equal opportunity in the workforce were
further picked up by his successor Nixon. In 1969 the Nixon administration initiated the "Philadelphia Order". It
was regarded as the most forceful plan thus far to guarantee fair hiring practices in construction jobs. Philadelphia
was selected as the test case because, as Assistant Secretary of Labor Arthur Fletcher explained, "The craft unions
and the construction industry are among the most egregious offenders against equal opportunity laws . . . openly
hostile toward letting blacks into their closed circle." The order included definite "goals and timetables." As President
Nixon asserted, "We would not impose quotas, but would require federal contractors to show 'affirmative action' to
meet the goals of increasing minority employment." After the Nixon administration, advancements in affirmative action
became less prevalent. "During the brief Ford administration, affirmative action took a back seat, while enforcement
stumbled along.":145 Equal rights remained an important subject to many Americans, yet the world was changing and
new issues were being raised. People began to look at affirmative action as a glorified issue of the past and now
there were other areas that needed focus. "Of all the triumphs that have marked this as America's Century –...none
is more inspiring, if incomplete, than our pursuit of racial justice." In the beginning, racial classifications were considered inherently suspect and subject to strict scrutiny. These classifications would only be upheld
if necessary to promote a compelling governmental interest. Later the U.S. Supreme Court decided that racial classifications
that benefited underrepresented minorities were to be upheld only if necessary to promote a compelling governmental
purpose. (See Richmond v. J.A. Croson Co.) There is no clear guidance about when government action is not "compelling",
and such rulings are rare. Ricci v. DeStefano was heard by the United States Supreme Court in 2009. The case concerns
White and Hispanic firefighters in New Haven, Connecticut, who, upon passing their test for promotions to management,
were denied the promotions, allegedly because of a discriminatory or at least questionable test. The test gave 17
whites and two Hispanics the possibility of immediate promotion. Although 23% of those taking the test were African
American, none scored high enough to qualify. Because of the possibility the tests were biased in violation of Title
VII of the Civil Rights Act, no candidates were promoted pending outcome of the controversy. In a split 5-4 vote,
the Supreme Court ruled that New Haven had engaged in impermissible racial discrimination against the White and Hispanic
majority. President Kennedy stated in Executive Order 10925 that "discrimination because of race, creed, color, or
national origin is contrary to the Constitutional principles and policies of the United States"; that "it is the
plain and positive obligation of the United States Government to promote and ensure equal opportunity for all qualified
persons, without regard to race, creed, color, or national origin, employed or seeking employment with the Federal
Government and on government contracts"; that "it is the policy of the executive branch of the Government to encourage
by positive measures equal opportunity for all qualified persons within the Government"; and that "it is in the general
interest and welfare of the United States to promote its economy, security, and national defense through the most
efficient and effective utilization of all available manpower". Proponents of affirmative action argue that by nature
the system is not only race based, but also class and gender based. To eliminate two of its key components would
undermine the purpose of the entire system. The African American Policy Forum believes that the class based argument
is based on the idea that non-poor minorities do not experience racial and gender based discrimination. The AAPF
believes that "Race-conscious affirmative action remains necessary to address race-based obstacles that block the
path to success of countless people of color of all classes". The group goes on to say that affirmative action is
responsible for creating the African American middle class, so it does not make sense to say that the system only
benefits the middle and upper classes. Following the end of World War II, the educational gap between White and Black Americans was widened by the GI Bill, signed into law by Franklin D. Roosevelt in 1944. This piece of legislation paved the way for white GIs to attend college. Despite their veteran status, returning black servicemen were not afforded loans at the same rate
as whites. Furthermore, at the time of its introduction, segregation was still the law of the land barring blacks
from the best institutions. Overall, "Nearly 8 million servicemen and servicewomen were educated under the provisions
of the GI Bill after World War II. But for blacks, higher educational opportunities were so few that the promise
of the GI Bill went largely unfulfilled." According to a study by Dr. Paul Brest, Hispanics or "Latinos" include immigrants and descendants of immigrants from the countries comprising Central and South America. In 1991, Mexican
Americans, Puerto Ricans, and Cuban Americans made up 80% of the Latino population in the United States. Latinos
are disadvantaged compared to White Americans and are more likely to live in poverty. They are the least well-educated major ethnic group, and between 1975 and 1990 they suffered a 3% drop in the high school completion rate while African Americans experienced a 12% increase. In 1990, they constituted 9% of the population but received only 3.1% of the bachelor's degrees awarded. At times when it was favorable to lawmakers, Latinos were considered "white" by the Jim Crow laws during Reconstruction. In other cases, according to Paul Brest, Latinos have been classified as an inferior race
and a threat to white purity. Latinos have encountered considerable discrimination in areas such as employment, housing,
and education. Brest finds that stereotypes continue to be largely negative and many perceive Latinos as "lazy, unproductive,
and on the dole." Furthermore, native-born Latino-Americans and recent immigrants are seen as identical since outsiders
tend not to differentiate between Latino groups. The category of Native American applies to the diverse group of
people who lived in North America before European settlement. During the U.S. government's westward expansion, Native
Americans were displaced from their land, which had been their home for centuries. Instead, they were forced onto
reservations which were far smaller and less productive. According to Brest, land belonging to Native Americans was
reduced from 138 million acres in 1887 to 52 million acres in 1934. In 1990, the poverty rate for Native Americans
was more than triple that of whites, and only 9.4% of Native Americans had completed a bachelor's degree as opposed to 25.2% of whites and 12.2% of African Americans. Early Asian immigrants experienced prejudice and discrimination in the form of being denied the ability to become naturalized citizens. They also struggled with many of the same
school segregation laws that African Americans faced. Particularly, during World War II, Japanese Americans were
interned in camps and lost their property, homes, and businesses. Discrimination against Asians began with the Chinese
Exclusion Act of 1882 and then continued with the Scott Act of 1888 and the Geary Act of 1892. At the beginning of
the 20th century, the United States passed the Immigration Act of 1924 to prevent Asian immigration out of fear that
Asians were stealing white jobs and lowering the standard for wages. In addition, whites and non-Asians do not differentiate
among the different Asian groups and perpetuate the "model minority" stereotype. According to a 2010 article by Professor
Qin Zhang of Fairfield University, Asians are characterized as one dimensional in having great work ethic and valuing
education, but lacking in communication skills and personality. A negative outcome of this stereotype is that Asians
have been portrayed as having poor leadership and interpersonal skills. This has contributed to the "glass ceiling"
phenomenon in which although there are many qualified Asian Americans, they occupy a disproportionately small number
of executive positions in businesses. Furthermore, the model minority stereotype has led to resentment of Asian success
and several universities and colleges have limited or have been accused of limiting Asian matriculation. Proponents
of affirmative action recognize that the policy is inherently unequal; however, mindful of the inescapable fact that historic inequalities exist in America, they believe the policy is much fairer than one in which these circumstances
are not taken into account. Furthermore, those in favor of affirmative action see it as an effort towards inclusion
rather than a discriminatory practice. "Job discrimination is grounded in prejudice and exclusion, whereas affirmative
action is an effort to overcome prejudicial treatment through inclusion. The most effective way to cure society of
exclusionary practices is to make special efforts at inclusion, which is exactly what affirmative action does." There
are a multitude of supporters as well as opponents to the policy of affirmative action. Many presidents throughout
the last century have failed to take a very firm stance on the policy, and the public has had to discern each president's opinion for itself. Bill Clinton, however, made his stance on affirmative action very clear in a speech on July
19, 1995, nearly two and a half years after his inauguration. In his speech, he discussed the history in the United
States that brought the policy to fruition: slavery, Jim Crow, and segregation. Clinton also mentioned a point
similar to President Lyndon B. Johnson's "Freedom is not Enough" speech, and declared that just outlawing discrimination
in the country would not be enough to give everyone in America equality. He addressed the arguments that affirmative
action hurt the white middle class and said that the policy was not the source of their problems. Clinton plainly
outlined his stance on affirmative action. In a 2014 overview, the National Conference of State Legislatures in Washington, D.C. stated that many supporters of affirmative action argue that policies stemming from affirmative
action help to open doors for historically excluded groups in workplace settings and higher education. Workplace
diversity has become a business management concept in which employers actively seek to promote an inclusive workplace.
By valuing diversity, employers have the capacity to create an environment in which there is a culture of respect
for individual differences as well as the ability to draw in talent and ideas from all segments of the population.
By creating this diverse workforce, these employers and companies gain a competitive advantage in an increasingly
global economy. According to the U.S. Equal Employment Opportunity Commission, many private sector employers have
concluded that a diverse workforce makes a "company stronger, more profitable, and a better place to work." Therefore,
these diversity-promoting policies are implemented for competitive reasons rather than as a response to discrimination, but they have demonstrated the value of diversity. In 2000, according to a study by the American Association of University
Professors (AAUP), affirmative action promoted diversity within colleges and universities. This has been shown to
have positive effects on the educational outcomes and experiences of college students as well as the teaching of
faculty members. According to a study by Geoffrey Maruyama and José F. Moreno, the results showed that faculty members
believed diversity helps students to reach the essential goals of a college education, Caucasian students suffer
no detrimental effects from classroom diversity, and that attention to multicultural learning improves the ability
of colleges and universities to accomplish their missions. Furthermore, a diverse population of students offers unique
perspectives in order to challenge preconceived notions through exposure to the experiences and ideas of others.
According to Professor Gurin of the University of Michigan, skills such as "perspective-taking, acceptance of differences,
a willingness and capacity to find commonalities among differences, acceptance of conflict as normal, conflict resolution,
participation in democracy, and interest in the wider social world" can potentially be developed in college while
being exposed to a heterogeneous group of students. In addition, broadening perspectives helps students confront personal
and substantive stereotypes and fosters discussion about racial and ethnic issues in a classroom setting. Furthermore,
the 2000 AAUP study states that having a diversity of views leads to a better discussion and greater understanding
among the students on issues of race, tolerance, fairness, etc. Richard Sander claims that artificially elevating minority students into schools they otherwise would not be capable of attending discourages them and tends to engender failure and high dropout rates for these students. For example, about half of black college students
rank in the bottom 20 percent of their classes, black law school graduates are four times as likely to fail bar exams
as are whites, and interracial friendships are more likely to form among students with relatively similar levels
of academic preparation; thus, blacks and Hispanics are more socially integrated on campuses where they are less
academically mismatched. He claims that the supposed "beneficiaries" of affirmative action – minorities – do not
actually benefit and rather are harmed by the policy. Sander's claims have been disputed, and his empirical analyses
have been subject to substantial criticism. A group including some of the country's leading statistical methodologists
told the Supreme Court that Sander's analyses were sufficiently flawed that the Court would be wise to ignore them
entirely. At the same time many scholars have found that minorities gain substantially from affirmative action. The
controversy surrounding affirmative action's effectiveness is based on the idea of class inequality. Opponents of
racial affirmative action argue that the program actually benefits middle- and upper-class African Americans and
Hispanic Americans at the expense of lower-class European Americans and Asian Americans. This argument supports the
idea of class-based affirmative action. America's poor are disproportionately made up of people of color, so class-based
affirmative action would disproportionately help people of color. This would eliminate the need for race-based affirmative
action as well as reducing any disproportionate benefits for middle- and upper-class people of color. In 1976, a
group of Italian American professors at City University of New York asked to be added as an affirmative action category
for promotion and hiring. Italian Americans are usually considered white in the US and would not be covered under
affirmative action policies, but the professors believed they were underrepresented. Frederick Lynch, the author of Invisible Victims: White Males and the Crisis of Affirmative Action, conducted a study of white males who said they were victims of reverse discrimination. Lynch explains that these white men felt frustrated and unfairly victimized by affirmative action.
Shelby Steele, another author against affirmative action, wanted to see affirmative action go back to its original
meaning of enforcing equal opportunity. He argued that blacks had to take full responsibility in their education
and in maintaining a job. Steele believes that there is still a long way to go in America to reach our goals of eradicating
discrimination. Terry Eastland, the author of Ending Affirmative Action: The Case for Colorblind Justice, states, "Most arguments for affirmative action fall into two categories: remedying past discrimination and promoting
diversity". Eastland believes that the founders of affirmative action did not anticipate how the benefits of affirmative
action would go to those who did not need it, mostly middle-class minorities. Additionally, he argues that affirmative
action carries with it a stigma that can create feelings of self-doubt and entitlement in minorities. Eastland believes
that affirmative action is a great risk that only sometimes pays off, and that without it we would be able to compete
more freely with one another. Libertarian economist Thomas Sowell identified what he says are negative results of
affirmative action in his book, Affirmative Action Around the World: An Empirical Study. Sowell writes that affirmative
action policies encourage non-preferred groups to designate themselves as members of preferred groups [i.e., primary
beneficiaries of affirmative action] to take advantage of group preference policies; that they tend to benefit primarily
the most fortunate among the preferred group (e.g., upper and middle class blacks), often to the detriment of the
least fortunate among the non-preferred groups (e.g., poor white or Asian); that they reduce the incentives of both
the preferred and non-preferred to perform at their best – the former because doing so is unnecessary and the latter
because it can prove futile – thereby resulting in net losses for society as a whole; and that they engender animosity
toward preferred groups as well.:115–147 Some commentators have defined reverse discrimination as a policy or practice
in which members of a majority are discriminated against in favor of a historically disadvantaged group or minority.[non-primary
source needed] Many argue that reverse discrimination results from affirmative action policies and that these policies
are just another form of discrimination no different from examples in the past. People like Ward Connerly assert
that affirmative action requires the very discrimination it is seeking to eliminate. According to these opponents,
this contradiction might make affirmative action counter-productive. One argument that affirmative action amounts to reverse discrimination is the idea that it encourages mediocrity and incompetence. Job positions would not be offered to the applicants
who are the most qualified, but to applicants with a special trait such as a certain race, ethnicity, or gender.
For example, opponents say affirmative action causes unprepared applicants to be accepted in highly demanding educational
institutions or jobs which result in eventual failure (see, for example, Richard Sander's study of affirmative action
in Law School, bar exam and eventual performance at law firms). Other opponents say that affirmative action lowers
the bar and so denies those who strive for excellence on their own merit and the sense of real achievement. Opponents
of affirmative action suggest that merit should be the primary factor considered in applying for job positions, college,
graduate school, etc. Another popular argument for affirmative action is the compensation argument. Blacks were mistreated
in the past for a morally irrelevant characteristic of being black so society today should compensate for the injuries.
This causes reverse discrimination in the form of preferential hirings, contracts, and scholarships as a means to
ameliorate past wrongs. Many opponents argue that this form of reparation is morally indefensible because if blacks
were harmed for being black in the past, then preferential treatment for this same trait is illogical. In addition,
arguments are made that whites today who innocently benefited from past injustices should not be punished for something
they had no control over. Therefore, they are being reverse discriminated against because they are receiving the
punishment that should be given to people who willingly and knowingly benefited from discriminatory practices. Some
opponents further claim that affirmative action has undesirable side-effects and that it fails to achieve its goals.
They argue that it hinders reconciliation, replaces old wrongs with new wrongs, undermines the achievements of minorities,
and encourages groups to identify themselves as disadvantaged, even if they are not. It may increase racial tension
and benefit the more privileged people within minority groups at the expense of the disenfranchised within better-off
groups (such as lower-class whites and Asians). There has recently been a strong push among American states to ban
racial or gender preferences in university admissions, in reaction to the controversial and unprecedented decision
in Grutter v. Bollinger. In 2006, nearly 60% of Michigan voters decided to ban affirmative action in university admissions.
Michigan joined California, Florida, Texas, and Washington in banning the use of race or sex in admissions considerations.
Some opponents believe, among other things, that affirmative action devalues the accomplishments of people who belong
to a group it's supposed to help, therefore making affirmative action counter-productive. Furthermore, opponents
of affirmative action claim that these policies dehumanize individuals, as applicants to jobs or schools are judged
as members of a group without consideration for the individual person. In the US, a prominent form of racial preferences
relates to access to education, particularly admission to universities and other forms of higher education. Race,
ethnicity, native language, social class, geographical origin, parental attendance of the university in question
(legacy admissions), and/or gender are sometimes taken into account when the university assesses an applicant's grades
and test scores. Individuals can also be awarded scholarships and have fees paid on the basis of criteria listed
above. In 1978, the Supreme Court ruled in Regents of the University of California v. Bakke that public universities
(and other government institutions) could not set specific numerical targets based on race for admissions or employment.
The Court said that "goals"
and "timetables" for diversity could be set instead. The racial preferences debate related to admission to US colleges
and universities reflects competing notions of the mission of colleges: "To what extent should they pursue scholarly
excellence, to what extent civic goods, and how should these purposes be balanced?". Scholars such as Ronald Dworkin
have asserted that no college applicant has a right to expect that a university will design its admissions policies
in a way that prizes any particular set of qualities. In this view, admission is not an honor bestowed to reward
superior merit but rather a way to advance the mission as each university defines it. If diversity is a goal of the
university and its racial preferences do not discriminate against applicants based on hatred or contempt, then
affirmative action can be judged acceptable based on the criteria related to the mission the university sets for
itself. To accommodate the ruling in Hopwood v. Texas banning any use of race in school admissions, the State of
Texas passed a law guaranteeing entry to any state university if a student finished in the top 10% of their graduating
class. Florida and California have also replaced racial quotas with class rank and other criteria. Class rank tends
to benefit top students at less competitive high schools, to the detriment of students at more competitive high schools.
This effect, however, may be intentional since less-funded, less competitive schools are more likely to be schools
where minority enrollment is high. Critics argue that class rank is more a measure of one's peers than of one's self.
The top 10% rule adds racial diversity only because schools are still highly racially segregated because of residential
patterns. The class rank rule has the same consequence as traditional affirmative action: opening schools to students
who would otherwise not be admitted had the given school used a holistic, merit-based approach. From 1996 to 1998,
Texas had merit-based admission to its state universities, and minority enrollment dropped. The state's adoption
of the "top 10 percent" rule returned minority enrollment to pre-1996 levels. During a panel discussion at Harvard
University's reunion for African American alumni during the 2003–04 academic year, two prominent black professors
at the institution—Lani Guinier and Henry Louis Gates—pointed out an unintended effect of affirmative action policies
at Harvard. They stated that only about a third of black Harvard undergraduates were from families in which all four
grandparents were born into the African American community. The majority of black students at Harvard were Caribbean
and African immigrants or their children, with some others the mixed-race children of biracial couples. One Harvard
student, born in the South Bronx to a black family whose ancestors have been in the United States for multiple generations,
said that there were so few Harvard students from the historic African American community that they took to calling
themselves "the descendants" (i.e., descendants of American slaves). The reasons for this underrepresentation of
historic African Americans, and possible remedies, remain a subject of debate. UCLA professor Richard H. Sander published
an article in the November 2004 issue of the Stanford Law Review that questioned the effectiveness of racial preferences
in law schools. He noted that, prior to his article, there had been no comprehensive study on the effects of affirmative
action. The article presents a study that shows that half of all black law students rank near the bottom of their
class after the first year of law school and that black law students are more likely to drop out of law school and
to fail the bar exam. The article offers a tentative estimate that the production of new black lawyers in the United
States would grow by eight percent if affirmative action programs at all law schools were ended. Less qualified black
students would attend less prestigious schools where they would be more closely matched in abilities with their classmates
and thus perform relatively better. Sander helped to develop a socioeconomically-based affirmative action plan for
the UCLA School of Law after the passage of Proposition 209 in 1996, which prohibited the use of racial preferences
by public universities in California. This change occurred after studies showed that the graduation rate of blacks
at UCLA was 41%, compared to 73% for whites. A study in 2007 by Mark Long, an economics professor at the University
of Washington, demonstrated that the alternatives to affirmative action proved ineffective in restoring minority
enrollment in public flagship universities in California, Texas, and Washington. More specifically, apparent rebounds
of minority enrollment can be explained by increasing minority enrollment in high schools of those states, and the
beneficiaries of class-based (not race) affirmative action would be white students. At the same time, affirmative
action itself is both morally and materially costly: 52 percent of the white populace (compared to 14 percent of the
black populace) thought it should be abolished, implying white distaste for the use of racial identity, and full-file review is expected
to cost the universities an additional $1.5 million to $2 million per year, excluding possible cost of litigation.
In 2006, Jian Li, a Chinese undergraduate at Yale University, filed a civil rights complaint with the Office for
Civil Rights against Princeton University, claiming that his race played a role in their decision to reject his application
for admission and seeking the suspension of federal financial assistance to the university until it "discontinues
discrimination against Asian Americans in all forms" by eliminating race and legacy preferences. Princeton Dean of
Admissions Janet Rapelye responded to the claims in the November 30, 2006, issue of the Daily Princetonian by stating
that "the numbers don't indicate [discrimination]." She said that Li was not admitted because "many others had far
better qualifications." Li's extracurricular activities were described as "not all that outstanding". Li countered
in an email, saying that his placement on the waitlist undermines Rapelye's claim. "Princeton had initially waitlisted
my application," Li said. "So if it were not for a yield which was higher than expected, the admissions office very
well may have admitted a candidate whose 'outside activities were not all that outstanding'." In 2012, Abigail Fisher,
an undergraduate student at Louisiana State University, and Rachel Multer Michalewicz, a law student at Southern
Methodist University, filed a lawsuit to challenge the University of Texas admissions policy, asserting it had a
"race-conscious policy" that "violated their civil and constitutional rights". The University of Texas employs the
"Top Ten Percent Law", under which admission to any public college or university in Texas is guaranteed to high school
students who graduate in the top ten percent of their high school class. Fisher has brought the admissions policy
to court because she believes that she was denied acceptance to the University of Texas based on her race, and thus,
her right to equal protection according to the 14th Amendment was violated. The Supreme Court heard oral arguments
in Fisher on October 10, 2012, and rendered an ambiguous ruling in 2013 that sent the case back to the lower court,
stipulating only that the University must demonstrate that it could not achieve diversity through other, non-race
sensitive means. In July 2014, the US Court of Appeals for the Fifth Circuit concluded that the University of Texas maintained a "holistic"
approach in its application of affirmative action, and could continue the practice. On February 10, 2015, lawyers
for Fisher filed a new case in the Supreme Court. The renewed complaint argues that the U.S. Court of Appeals for the
Fifth Circuit got the issue wrong, on the second try as well as on the first. The Supreme Court agreed in June 2015
to hear the case a second time. It will likely be decided by June 2016. On November 17, 2014, Students for Fair Admissions,
an offshoot of the Project on Fair Representation, filed lawsuits in federal district court challenging the admissions
practices of Harvard University and the University of North Carolina at Chapel Hill. The UNC-Chapel Hill lawsuit
alleges discrimination against white and Asian students, while the Harvard lawsuit focuses on discrimination against
Asian applicants. Both universities requested the court to halt the lawsuits until the U.S. Supreme Court provides
clarification of relevant law by ruling in Fisher v. University of Texas at Austin for the second time. This Supreme
Court case will likely be decided in June 2016 or slightly earlier. In May 2015, a coalition of more than 60 Asian-American
organizations filed federal complaints with the Education and Justice Departments against Harvard University. The
coalition asked for a civil rights investigation into what they described as Harvard's discriminatory admission practices
against Asian-American applicants. The complaint asserts that recent studies indicate that Harvard has engaged in
systematic and continuous discrimination against Asian Americans in its "holistic" admissions process. Asian-American
applicants with near-perfect test scores, top-one-percent grade point averages, academic awards, and leadership positions
are allegedly rejected by Harvard because the university uses racial stereotypes, racially differentiated standards,
and de facto racial quotas. This federal complaint was dismissed in July 2015 because the Students for Fair Admissions
lawsuit makes similar allegations.
The competition is open to any eligible club down to Level 10 of the English football league system - all 92 professional
clubs in the Premier League and Football League (Levels 1 to 4), and several hundred "non-league" teams in Steps
1 to 6 of the National League System (Levels 5 to 10). A record 763 clubs competed in 2011–12. The tournament consists
of 12 randomly drawn rounds followed by the semi-finals and the final. Entrants are not seeded, although a system
of byes based on league level ensures higher ranked teams enter in later rounds - the minimum number of games needed
to win the competition ranges from six to fourteen. The first six rounds are the Qualifying Competition, from which
32 teams progress to the first round of the Competition Proper, meeting the first of the 92 professional teams. The
last entrants are the Premier League and Championship clubs, into the draw for the Third Round Proper. In the modern
era, non-league teams have never reached the quarter-finals, and teams below Level 2 have never reached the final.
As a result, as well as who wins, significant focus is given to those "minnows" (smaller teams) who progress furthest,
especially if they achieve an unlikely "giant-killing" victory. Winners receive the FA Cup trophy, of which there
have been two designs and five actual cups; the latest is a 2014 replica of the second design, introduced in 1911.
Winners also qualify for European football and a place in the FA Community Shield match. Arsenal are the current
holders, having beaten Aston Villa 4–0 in the 2015 final to win the cup for the second year in a row. It was their
12th FA Cup title overall, making Arsenal the FA Cup's most successful club ahead of Manchester United on 11. In
1863, the newly founded Football Association (the FA) published the Laws of the Game of Association Football, unifying
the various different rules in use before then. On 20 July 1871, in the offices of The Sportsman newspaper, the FA
Secretary C. W. Alcock proposed to the FA committee that "it is desirable that a Challenge Cup should be established
in connection with the Association for which all clubs belonging to the Association should be invited to compete".
The inaugural FA Cup tournament kicked off in November 1871. After thirteen games in all, Wanderers were crowned
the winners in the final, on 16 March 1872. Wanderers retained the trophy the following year. The modern cup was
beginning to be established by the 1888–89 season, when qualifying rounds were introduced. Following the 1914–15
edition, the competition was suspended due to the First World War, and did not resume until 1919–20. The 1922–23 competition
saw the first final to be played in the newly opened Wembley Stadium (known at the time as the Empire Stadium). Due
to the outbreak of World War II, the competition was not played between the 1938–39 and 1945–46 editions. Due to the
wartime breaks, the competition did not celebrate its centenary year until 1980–81; fittingly, the final featured a
goal by Ricky Villa which was later voted the greatest goal ever scored at Wembley Stadium, though it has since been
supplanted by a goal from Steven Gerrard. The competition is open to any club down to Level 10 of the English football league system which
meets the eligibility criteria. All clubs in the top four levels (the Premier League and the three divisions of the
Football League) are automatically eligible. Clubs in the next six levels (non-league football) are also eligible
provided they have played in either the FA Cup, FA Trophy or FA Vase competitions in the previous season. Newly formed
clubs, such as F.C. United of Manchester in 2005–06 and also 2006–07, may not therefore play in the FA Cup in their
first season. All clubs entering the competition must also have a suitable stadium. It is very rare for top clubs
to miss the competition, although it can happen in exceptional circumstances. Defending holders Manchester United
did not enter the 1999–2000 FA Cup, as they were already in the inaugural Club World Championship, with the club
stating that entering both tournaments would overload their fixture schedule and make it more difficult to defend
their Champions League and Premiership titles. The club claimed that they did not want to devalue the FA Cup by fielding
a weaker side. The move benefited United as they received a two-week break and won the 1999–2000 league title by
an 18-point margin, although they did not progress past the group stage of the Club World Championship. The withdrawal
from the FA Cup, however, drew considerable criticism as this weakened the tournament's prestige and Sir Alex Ferguson
later admitted his regret regarding their handling of the situation. Welsh sides that play in English leagues are
eligible, although since the creation of the League of Wales there are only six clubs remaining: Cardiff City (the
only non-English team to win the tournament, in 1927), Swansea City, Newport County, Wrexham, Merthyr Town and Colwyn
Bay. In the early years other teams from Wales, Ireland and Scotland also took part in the competition, with Glasgow
side Queen's Park losing the final to Blackburn Rovers in 1884 and 1885 before being barred from entering by the
Scottish Football Association. In the 2013–14 season the first Channel Island club entered the competition when Guernsey
F.C. competed for the first time. The number of entrants has increased greatly in recent years. In the 2004–05 season,
660 clubs entered the competition, beating the long-standing record of 656 from the 1921–22 season. In 2005–06 this
increased to 674 entrants, in 2006–07 to 687, in 2007–08 to 731 clubs, and for the 2008–09 and 2009–10 competitions
it reached 762. The number has varied slightly but remained roughly stable since then, with 759 clubs participating
in 2010–11, a record 763 in 2011–12, 758 for 2012–13, 737 for 2013–14 and 736 for 2014–15. By comparison, the other
major English domestic cup, the League Cup, involves only the 92 members of the Premier League and Football League.
Beginning in August, the competition proceeds as a knockout tournament throughout, consisting of twelve rounds, a
semi-final and then a final, in May. A system of byes ensures clubs above Levels 9 and 10 enter the competition at
later stages. There is no seeding, the fixtures in each round being determined by a random draw. Prior to the semi-finals,
fixtures ending in a tie are replayed once only. The first six rounds are qualifiers, with the draws organised on
a regional basis. The next six rounds are the "proper" rounds where all clubs are in one draw. The final is normally
held the Saturday after the Premier League season finishes in May. The only seasons in recent times when this pattern
was not followed were 1999–2000, when most rounds were played a few weeks earlier than normal as an experiment, and
2010–11 and 2012–13 when the FA Cup Final was played before the Premier League season had finished, to allow Wembley
Stadium to be ready for the UEFA Champions League final, as well as in 2011–12 to allow England time to prepare for
that summer's European Championships. Until the 1990s further replays would be played until one team was victorious.
Some ties took as many as six matches to settle; in their 1975 campaign, Fulham played a total of 12 games over six
rounds, which remains the most games played by a team to reach a final. Replays were traditionally played three or
four days after the original game, but from 1991–92 they were staged at least 10 days later on police advice. This
led to penalty shoot-outs being introduced, the first of which came on 26 November 1991 when Rotherham United eliminated
Scunthorpe United. The FA Cup winners qualify for the following season's UEFA Europa League (formerly named the UEFA
Cup; until 1998 they entered the Cup Winners' Cup instead). This European place applies even if the team is relegated
or is not in the English top flight. In the past, if the FA Cup winning team also qualified for the following season's
Champions League or Europa League through their league position, then the losing FA Cup finalist was given the Europa
League place instead. FA Cup winners enter the Europa League at the group stage. Losing finalists, if they entered
the Europa League, began earlier, at the play-off or third qualifying round stage. From the 2015–16 UEFA Europa League
season, however, UEFA will not allow the runners-up to qualify for the Europa League through the competition. The
semi-finals have been played exclusively at the rebuilt Wembley Stadium since 2008, one year after it opened and
after it had already hosted a final (in 2007). For the first decade of the competition, the Kennington Oval was used
as the semi-final venue. In the period between this first decade and the reopening of Wembley, semi-finals were played
at high-capacity neutral venues around England; usually the home grounds of teams not involved in that semi-final,
chosen to be roughly equidistant between the two teams for fairness of travel. The top three most used venues in
this period were Villa Park in Birmingham (55 times), Hillsborough in Sheffield (34 times) and Old Trafford in Manchester
(23 times). The original Wembley Stadium was also used seven times for semi-finals, between 1991 and 2000 (the last
held there), but not always for fixtures featuring London teams. In 2005, both semi-finals were held at the Millennium Stadium.
In 2003 the FA took the decision to permanently use the new Wembley for semi-finals to recoup debts in financing
the new stadium. This was controversial, with the move seen as both unfair to fans of teams located far from London,
as well as taking some of the prestige away from a Wembley final. In defending the move, the FA has also cited the
extra capacity Wembley offers, although the 2013 fixture between Millwall and Wigan led to the unprecedented step
of placing 6,000 tickets on sale to neutral fans after the game failed to sell out. A fan poll by The Guardian in
2013 found 86% opposition to Wembley semi-finals. The final has been played at the rebuilt Wembley Stadium since
it opened, in 2007. The rebuilding process meant that between 2001 and 2006 they were hosted at the Millennium Stadium
in Cardiff in Wales. Prior to rebuilding, the final was hosted by the original Wembley Stadium since it opened in
1923 (being originally named the Empire Stadium). One exception to this 78-year series of Empire Stadium finals (including
five replays) was the 1970 replay between Leeds and Chelsea, held at Old Trafford in Manchester. In the 51 years
prior to the Empire Stadium opening, the final (including 8 replays) was held in a variety of locations, predominantly
in London, and mainly at the Kennington Oval and then Crystal Palace. It was played 22 times at the Oval (the inaugural
competition in 1872, and then all but two times until 1892). After the Oval, Crystal Palace hosted 21 finals from
1895 to 1914, broken up by four replays elsewhere. The other London venues were Stamford Bridge from 1920 to 1922
(the last three finals before the move to Empire Stadium); and Oxford University's Lillie Bridge in Fulham for the
second ever final, in 1873. The other venues used sparingly in this period were all outside of London, as follows:
The FA permitted artificial turf (3G) pitches in all rounds of the competition from the 2014–15 edition and beyond.
Under the 2015–16 rules, the pitch must be of FIFA One Star quality, or Two Star for ties if they involve one of
the 92 professional clubs. This followed approval two years previously for their use in the qualifying rounds only
- if a team with a 3G pitch progressed to the competition proper, they had to switch their tie to the ground of another
eligible entrant with a natural grass pitch. The first match in the proper rounds to be played on a 3G surface was
a televised first-round replay on 20 November 2015 at the Gallagher Stadium of Maidstone United, who had been strong
proponents of the surface. The trophy comes in three parts - the cup itself, plus a lid and a base. There have been two
designs of trophy in use, but five physical trophies have been presented. The original trophy, known as the "little
tin idol", was 18 inches high and made by Martin, Hall & Co. It was stolen in 1895 and never recovered, and so was
replaced by an exact replica, used until 1910. The FA decided to change the design after the 1909 winners, Manchester
United, made their own replica, leading the FA to realise they did not own the copyright. This new, larger design
was by Messers Fattorini and Sons, and was used from 1911. In order to preserve this original, from 1992 it was replaced
by an exact replica, although this had to be replaced after just over two decades, after showing wear and tear from
being handled more than in previous eras. This third replica, first used in 2014, was built heavier to withstand
the increased handling. Of the four surviving trophies, only the 1895 replica has entered private ownership. The
name of the winning team is engraved on the silver band around the base as soon as the final has finished, in order
to be ready in time for the presentation ceremony. This means the engraver has just five minutes to perform a task
which would take twenty under normal conditions, although time is saved by engraving the year during the match
and sketching out the presumed winner's name. During the final, the trophy is decorated with ribbons in the colours of
both finalists, with the loser's ribbons being removed at the end of the game. Traditionally, at Wembley finals,
the presentation is made at the Royal Box, with players, led by the captain, mounting a staircase to a gangway in
front of the box and returning by a second staircase on the other side of the box. At Cardiff the presentation was
made on a podium on the pitch. The tradition of presenting the trophy immediately after the game did not start until
the 1882 final; after the first final in 1872 the trophy was not presented to the winners, Wanderers, until a reception
held four weeks later in the Pall Mall Restaurant in London. Under the original rules, the trophy was to be permanently
presented to any club which won the competition three times, although when inaugural winners Wanderers achieved this
feat by the 1876 final, the rules were changed by FA Secretary CW Alcock (who was also captain of Wanderers in their
first victory). Almost 60 years after the 1895 theft, 80-year-old career criminal Henry (Harry) James Burge claimed to have committed
the theft, confessing to a newspaper, with the story being published in the Sunday Pictorial newspaper on 23 February
1958. He claimed to have carried out the robbery with two other men, although when discrepancies emerged between his
account of the means of entry and the items stolen and a contemporaneous report in the Birmingham Post newspaper (the
crime pre-dated written police reports), detectives decided there was no realistic possibility of a conviction and the case
was closed. Burge claimed the cup had been melted down to make counterfeit half-crown coins, which matched known
intelligence of the time, in which stolen silver was being used to forge coins which were then laundered through
betting shops at a local racecourse, although Burge had no past history of forgery in a record of 42 previous convictions
for which he had spent 42 years in prison. He had been further imprisoned in 1957 for seven years for theft from
cars. Released in 1961, he died in 1964. After being rendered obsolete by the redesign, the 1895 replica was presented
in 1910 to the FA's long-serving president Lord Kinnaird. Kinnaird died in 1923, and his family kept it in their
possession, out of view, until putting it up for auction in 2005. It was duly sold at Christie's auction house on
19 May 2005 for £420,000 (£478,400 including auction fees and taxes). The sale price set a new world record for a
piece of football memorabilia, surpassing the £254,000 paid for the Jules Rimet World Cup Trophy in 1997. The successful
bidder was David Gold, the then joint chairman of Birmingham City; claiming the FA and government were doing nothing
proactive to ensure the trophy remained in the country, Gold stated his purchase was motivated by wanting to save
it for the nation. Accordingly, Gold presented the trophy to the National Football Museum in Preston on 20 April
2006, where it went on immediate public display. It later moved with the museum to its new location in Manchester.
In November 2012, it was ceremonially presented to Royal Engineers, after they beat Wanderers 7–1 in a charity replay
of the first FA Cup final. Since the start of the 1994–95 season, the FA Cup has been sponsored. However, to protect
the identity of the competition, the sponsored name has always included 'The FA Cup' in addition to the sponsor's
name, unlike sponsorship deals for the League Cup where the word 'cup' is preceded by only the sponsor's name. Sponsorship
deals run for four years, though – as in the case of E.ON – one-year extensions may be agreed. Emirates airline is
the sponsor from 2015 to 2018, renaming the competition as 'The Emirates FA Cup', unlike previous editions, which
included 'The FA Cup in association with E.ON' and 'The FA Cup with Budweiser'. The possibility of unlikely victories
in the earlier rounds of the competition, where lower ranked teams beat higher placed opposition, known as "giant
killings", is much anticipated by the public, and is considered an integral part of the tradition and prestige of
the competition, alongside that gained by teams winning the competition. Almost every club in the League Pyramid
has a fondly remembered giant-killing act in its history. It is considered particularly newsworthy when a top Premier
League team suffers an upset defeat, or where the giant-killer is a non-league club, i.e. from outside the professional
levels of The Football League. The Football League was founded in 1888, 16 years after the first FA Cup competition.
Since the creation of The Football League, Tottenham Hotspur is the only non-league "giant-killer" to win the Cup,
taking the 1901 FA Cup with a victory over reigning league runners-up Sheffield United: although at that time, there
were only two divisions and 36 clubs in the Football League, and Spurs were champions of the next lowest football
tier - the Southern League and probably already good enough for the First Division (as was shown when they joined
the Second Division in 1908 and immediately won promotion to the First). Only two other actual non-League clubs have
even reached the final since the founding of the League: Sheffield Wednesday in 1890 (champions of the Football Alliance,
a rival league which was already effectively the Second Division, which it formally became in 1892 – Wednesday being
let straight into the First Division), and Southampton in 1900 and 1902 (in which years they were also Southern League
champions, proving the strength of that league: again, they were probably of equivalent standard to a First Division
club at the time, but Southampton's form subsequently faded and they did not join the League till 1920 and the formation
of the Third Division). Chasetown, whilst playing at Level 8 of English football during the 2007–08 competition,
are the lowest-ranked team to play in the Third Round Proper (final 64, of the 731 teams entered that season). Chasetown
were then members of the Southern League Division One Midlands (a lower level within the Southern Football League),
when they lost to Football League Championship (Level 2) team Cardiff City, the eventual FA Cup runners-up that year.
Their success earned the lowly organisation over £60,000 in prize money. Seven clubs have won the FA Cup as part
of a League and Cup double, namely Preston North End (1889), Aston Villa (1897), Tottenham Hotspur (1961), Arsenal
(1971, 1998, 2002), Liverpool (1986), Manchester United (1994, 1996, 1999) and Chelsea (2010). In 1993, Arsenal became
the first side to win both the FA Cup and the League Cup in the same season when they beat Sheffield Wednesday 2–1
in both finals. Liverpool (in 2001) and Chelsea (in 2007) have since repeated this feat. In 2012, Chelsea accomplished
a different cup double consisting of the FA Cup and the 2012 Champions League. In 1998–99, Manchester United added
the 1999 Champions League title to their league and cup double to complete a unique Treble. Two years later, in 2000–01,
Liverpool won the FA Cup, League Cup and UEFA Cup to complete a cup treble. An English Treble has never been achieved.
The final has never been contested by two teams from outside the top division and there have only been eight winners
who were not in the top flight: Notts County (1894); Tottenham Hotspur (1901); Wolverhampton Wanderers (1908); Barnsley
(1912); West Bromwich Albion (1931); Sunderland (1973), Southampton (1976) and West Ham United (1980). With the exception
of Tottenham, these clubs were all playing in the second tier (the old Second Division) - Tottenham were playing
in the Southern League and were only elected to the Football League in 1908, meaning they are the only non-league
winners of the FA Cup. Other than Tottenham's victory, only 24 finalists have come from outside English football's
top tier, with a record of 7 wins and 17 runners-up; none at all have come from the third tier or lower, Southampton (1902)
being the last finalist from outside the top two tiers. In the early years of coverage the BBC had exclusive radio
coverage with a picture of the pitch marked in the Radio Times with numbered squares to help the listener follow
the match on the radio. The first FA Cup final on radio was in 1926, between Bolton Wanderers and Manchester City, but it was broadcast only in Manchester; the first national final on BBC Radio was between Arsenal and Cardiff in 1927. The first final shown on BBC Television was the 1937 match between Sunderland and Preston North End,
but this was not televised in full. The following season's final between Preston and Huddersfield was covered in
full by the BBC. When ITV was formed in 1955, it shared coverage of the final with the BBC, one of the only club matches shown live on television. During the 1970s and 1980s coverage became more elaborate, with the BBC and ITV each trying to steal viewers from the other by starting coverage earlier and earlier, some broadcasts beginning as early as 9 a.m., six hours before kick-off. Nowadays this continues, with Setanta and ESPN having all-day broadcasts from Wembley, but terrestrial
TV coverage usually begins two hours before kick off. The sharing of rights between BBC and ITV continued from 1955
to 1988, when ITV lost coverage to the new Sports Channel, which later became Sky Sports. From 1988 to 1997, the BBC and Sky Sports shared coverage of the FA Cup: the BBC had highlights on Match of the Day and usually one match per round, while Sky had the same deal. From 1997 to 2001, ITV and Sky shared live coverage, with both having two matches per
round and BBC continuing with highlights on Match of the Day. From 2002 to 2008, BBC and Sky again shared coverage
with BBC having two or three matches per round and Sky having one or two. From 2008–09 to 2013–14, FA Cup matches were shown live by ITV across England and Wales, with UTV broadcasting to Northern Ireland but STV declining to show them. ITV showed 16 FA Cup games per season, including the first pick of live matches from each of the first to sixth rounds of the competition, plus one semi-final exclusively live. The final was also shown live on ITV. Under the same
2008 contract, Setanta Sports showed three games and one replay in each round from round three to five, two quarter-finals,
one semi-final and the final. The channel also broadcast ITV's matches exclusively to Scotland, after the ITV franchise
holder in Scotland, STV, decided not to broadcast FA Cup games. Setanta entered administration in June 2009 and as
a result the FA terminated Setanta's deal to broadcast FA-sanctioned competitions and England internationals. With Setanta out of business, ITV showed the competition exclusively in the 2009–10 season, with between three and four matches per round, all quarter-finals, the semi-finals and the final live, as the FA could not find a pay TV
broadcaster in time. ESPN bought the competition for the 2010–11 to 2012–13 seasons, during which time Rebecca Lowe became the first woman to host the FA Cup Final in the UK. Many had expected BSkyB to bid for some of the remaining FA Cup games for the rest of the 2009–10 season, which would have included a semi-final and shared rights to the final. ESPN took over the package Setanta had held from the 2010–11 season. The 2011 final was
also shown live on Sky 3D in addition to ESPN (who provided the 3D coverage for Sky 3D) and ITV. Following the sale
of ESPN's UK and Ireland channels to BT, ESPN's rights package transferred to BT Sport from the 2013–14 season.
New Haven (local /nuː ˈheɪvən/, noo-HAY-vən), in the U.S. state of Connecticut, is the principal municipality in Greater
New Haven, which had a total population of 862,477 in 2010. It is located on New Haven Harbor on the northern shore
of the Long Island Sound in New Haven County, Connecticut, which in turn comprises the outer limits of the New York
metropolitan area. It is the second-largest city in Connecticut (after Bridgeport), with a population of 129,779
people as of the 2010 United States Census. According to a 1 July 2012 Census Bureau estimate, the city
had a population of 130,741. In 1637 a small party of Puritans reconnoitered the New Haven harbor area and wintered
over. In April 1638, the main party of five hundred Puritans who left the Massachusetts Bay Colony under the leadership
of the Reverend John Davenport and the London merchant Theophilus Eaton sailed into the harbor. These settlers were
hoping to establish what they considered a better theological community, with the government more closely linked to the church than in the colony they had left in Massachusetts, and sought to take advantage of the excellent port capabilities of
the harbor. The Quinnipiacs, who were under attack by neighboring Pequots, sold their land to the settlers in return
for protection. By 1640, the town's theocratic government and nine-square grid plan were in place, and the town was
renamed Newhaven from Quinnipiac. However, the area north of New Haven remained Quinnipiac until 1678, when it was
renamed Hamden. The settlement became the headquarters of the New Haven Colony. At the time, the New Haven Colony
was separate from the Connecticut Colony, which had been established to the north centering on Hartford. One of the
principal differences between the two colonies was that the New Haven colony was an intolerant theocracy that did
not permit other churches to be established, while the Connecticut colony permitted the establishment of other churches.
For over a century, New Haven citizens had fought in the colonial militia alongside regular British forces, as in
the French and Indian War. As the American Revolution approached, General David Wooster and other influential residents
hoped that the conflict with the government in Britain could be resolved short of rebellion. On 23 April 1775, which
is still celebrated in New Haven as Powder House Day, the Second Company, Governor's Foot Guard, of New Haven entered
the struggle against the governing British parliament. Under Captain Benedict Arnold, they broke into the powder
house to arm themselves and began a three-day march to Cambridge, Massachusetts. Other New Haven militia members
were on hand to escort George Washington from his overnight stay in New Haven on his way to Cambridge. Contemporary
reports, from both sides, remark on the New Haven volunteers' professional military bearing, including uniforms.
On July 5, 1779, 2,600 loyalists and British regulars under General William Tryon, governor of New York, landed in
New Haven Harbor and raided the 3,500-person town. A militia of Yale students had been preparing for battle, and former
Yale president and Yale Divinity School professor Naphtali Daggett rode out to confront the Redcoats. Yale president
Ezra Stiles recounted in his diary that while he moved furniture in anticipation of battle, he still couldn't quite
believe the revolution had begun. New Haven was not torched, as the invaders had done to Danbury in 1777 and would do to Fairfield and Norwalk a week after the New Haven raid, so many of the town's colonial features were preserved. The city struck
fortune in the late 18th century with the inventions and industrial activity of Eli Whitney, a Yale graduate who
remained in New Haven to develop the cotton gin and establish a gun-manufacturing factory in the northern part of
the city near the Hamden town line. That area is still known as Whitneyville, and the main road through both towns
is known as Whitney Avenue. The factory is now the Eli Whitney Museum, which has a particular emphasis on activities
for children and exhibits pertaining to the A. C. Gilbert Company. His factory, along with that of Simeon North,
and the lively clock-making and brass hardware sectors, contributed to making early Connecticut a powerful manufacturing
economy; so many arms manufacturers sprang up that the state became known as "The Arsenal of America". It was in
Whitney's gun-manufacturing plant that Samuel Colt invented the automatic revolver in 1836. The Farmington Canal,
created in the early 19th century, was a short-lived transporter of goods into the interior regions of Connecticut
and Massachusetts, and ran from New Haven to Northampton, Massachusetts. New Haven was home to one of the important
early events in the burgeoning anti-slavery movement when, in 1839, the trial of mutineering Mende tribesmen being
transported as slaves on the Spanish slaveship Amistad was held in New Haven's United States District Court. There
is a statue of Joseph Cinqué, the informal leader of the slaves, beside City Hall. See "Museums" below for more information.
Abraham Lincoln delivered a speech on slavery in New Haven in 1860, shortly before he secured the Republican nomination
for President. The American Civil War boosted the local economy with wartime purchases of industrial goods, including
that of the New Haven Arms Company, which would later become the Winchester Repeating Arms Company. (Winchester would
continue to produce arms in New Haven until 2006, and many of the buildings that were a part of the Winchester plant
are now a part of the Winchester Repeating Arms Company Historic District.) After the war, the population grew, doubling by the start of the 20th century, most notably due to the influx of immigrants from southern Europe, particularly
Italy. Today, roughly half the populations of East Haven, West Haven, and North Haven are Italian-American. Jewish
immigration to New Haven has left an enduring mark on the city. Westville was the center of Jewish life in New Haven,
though today many have fanned out to suburban communities such as Woodbridge and Cheshire. In 1954, then-mayor Richard
C. Lee began some of the earliest major urban renewal projects in the United States. Certain sections of downtown
New Haven were redeveloped to include museums, new office towers, a hotel, and large shopping complexes. Other parts
of the city were affected by the construction of Interstate 95 along the Long Wharf section, Interstate 91, and the
Oak Street Connector. The Oak Street Connector (Route 34), running between Interstate 95, downtown, and The Hill
neighborhood, was originally intended as a highway to the city's western suburbs but was only completed as a highway
to the downtown area, with the area to the west becoming a boulevard (See "Redevelopment" below). Since approximately
2000, many parts of downtown New Haven have been revitalized, with new restaurants, nightlife, and small retail stores.
In particular, the area surrounding the New Haven Green has experienced an influx of apartments and condominiums.
In recent years, downtown retail options have increased with the opening of new stores such as Urban Outfitters, J
Crew, Origins, American Apparel, Gant Clothing, and an Apple Store, joining older stores such as Barnes & Noble,
Cutlers Records, and Raggs Clothing. In addition, downtown's growing residential population will be served by two
new supermarkets, a Stop & Shop just outside downtown and Elm City Market located one block from the Green. The recent
turnaround of downtown New Haven has received positive press from various periodicals. Major projects include the
current construction of a new campus for Gateway Community College downtown, and also a 32-story, 500-unit apartment/retail
building called 360 State Street. The 360 State Street project is now occupied and is the largest residential building
in Connecticut. A new boathouse and dock is planned for New Haven Harbor, and the linear park Farmington Canal Trail
is set to extend into downtown New Haven within the coming year. Additionally, foundation and ramp work to widen
I-95 to create a new harbor crossing for New Haven, with an extradosed bridge to replace the 1950s-era Q Bridge,
has begun. The city still hopes to redevelop the site of the New Haven Coliseum, which was demolished in 2007. In
April 2009, the United States Supreme Court agreed to hear a suit over reverse discrimination brought by 20 white and Hispanic
firefighters against the city. The suit involved the 2003 promotion test for the New Haven Fire Department. After
the tests were scored, no black firefighters scored high enough to qualify for consideration for promotion, so the
city announced that no one would be promoted. In the subsequent Ricci v. DeStefano decision the court found 5-4 that
New Haven's decision to ignore the test results violated Title VII of the Civil Rights Act of 1964. As a result,
a district court subsequently ordered the city to promote 14 of the white firefighters. In 2010 and 2011, state and
federal funds were awarded to Connecticut (and Massachusetts) to construct the Hartford Line, with a southern terminus
at New Haven's Union Station and a northern terminus at Springfield's Union Station. According to the White House,
"This corridor [currently] has one train per day connecting communities in Connecticut and Massachusetts to the Northeast
Corridor and Vermont. The vision for this corridor is to restore the alignment to its original route via the Knowledge
Corridor in western Massachusetts, improving trip time and increasing the population base that can be served." Set
for construction in 2013, the "Knowledge Corridor high speed intercity passenger rail" project will cost approximately
$1 billion, and the ultimate northern terminus for the project is reported to be Montreal in Canada. Train speeds
between the cities will reportedly exceed 110 miles per hour (180 km/h) and substantially increase both cities' rail traffic.
New Haven's best-known geographic features are its large deep harbor, and two reddish basalt trap rock ridges which
rise to the northeast and northwest of the city core. These trap rocks are known respectively as East Rock and West
Rock, and both serve as extensive parks. West Rock has been tunneled through to make way for the east-west passage
of the Wilbur Cross Parkway (the only highway tunnel through a natural obstacle in Connecticut), and once served
as the hideout of the "Regicides" (see: Regicides Trail). Most New Haveners refer to these men as "The Three Judges".
East Rock features the prominent Soldiers and Sailors war monument on its peak as well as the "Great/Giant Steps"
which run up the rock's cliffside. The city is drained by three rivers: the West, Mill, and Quinnipiac, named in
order from west to east. The West River discharges into West Haven Harbor, while the Mill and Quinnipiac rivers discharge
into New Haven Harbor. Both harbors are embayments of Long Island Sound. In addition, several smaller streams flow
through the city's neighborhoods, including Wintergreen Brook, the Beaver Ponds Outlet, Wilmot Brook, Belden Brook,
and Prospect Creek. Not all of these small streams have continuous flow year-round. New Haven lies in the transition
between a humid continental climate (Köppen climate classification: Dfa) and humid subtropical climate (Köppen Cfa),
but having more characteristics of the former, as is typical of much of the New York metropolitan area. Summers are
humid and warm, with temperatures exceeding 90 °F (32 °C) on 7–8 days per year. Winters are cold with moderate snowfall
interspersed with rainfall and occasional mixed precipitation. The weather patterns that affect New Haven arrive primarily from an offshore (land-to-sea) direction, reducing the marine influence of Long Island Sound—although, like other
marine areas, differences in temperature between areas right along the coastline and areas a mile or two inland can
be large at times. New Haven has a long tradition of urban planning and a purposeful design for the city's layout.
It can be argued to have one of the first planned layouts in the country. Upon its founding, New Haven
was laid out in a grid plan of nine square blocks; the central square was left open, in the tradition of many New
England towns, as the city green (a commons area). The city also instituted the first public tree planting program
in America. As in other cities, many of the elms that gave New Haven the nickname "Elm City" perished in the mid-20th
century due to Dutch Elm disease, although many have since been replanted. The New Haven Green is currently home
to three separate historic churches which speak to the original theocratic nature of the city. The Green remains
the social center of the city today. It was named a National Historic Landmark in 1970. Downtown New Haven, occupied
by nearly 7,000 residents, has a more residential character than most downtowns. The downtown area provides about
half of the city's jobs and half of its tax base and in recent years has become filled with dozens of new upscale
restaurants, several of which have garnered national praise (such as Ibiza, recognized by Esquire and Wine Spectator
magazines as well as the New York Times as the best Spanish food in the country), in addition to shops and thousands
of apartments and condominium units which subsequently help overall growth of the city. The city has many distinct
neighborhoods. In addition to Downtown, centered on the central business district and the Green, are the following
neighborhoods: the west central neighborhoods of Dixwell and Dwight; the southern neighborhoods of The Hill, historic
water-front City Point (or Oyster Point), and the harborside district of Long Wharf; the western neighborhoods of
Edgewood, West River, Westville, Amity, and West Rock-Westhills; East Rock, Cedar Hill, Prospect Hill, and Newhallville
in the northern side of town; the east central neighborhoods of Mill River and Wooster Square, an Italian-American
neighborhood; Fair Haven, an immigrant community located between the Mill and Quinnipiac rivers; Quinnipiac Meadows
and Fair Haven Heights across the Quinnipiac River; and facing the eastern side of the harbor, The Annex and East
Shore (or Morris Cove). New Haven's economy originally was based in manufacturing, but the postwar period brought
rapid industrial decline; the entire Northeast was affected, and medium-sized cities with large working-class populations,
like New Haven, were hit particularly hard. Simultaneously, the growth and expansion of Yale University further affected
the economic shift. Today, over half (56%) of the city's economy is made up of services, in particular education
and health care; Yale is the city's largest employer, followed by Yale – New Haven Hospital. Other large employers
include St. Raphael Hospital, Smilow Cancer Hospital, Southern Connecticut State University, Assa Abloy Manufacturing,
the Knights of Columbus headquarters, Higher One, Alexion Pharmaceuticals, Covidien and United Illuminating. Yale
and Yale-New Haven are also among the largest employers in the state, and provide more $100,000+-salaried positions
than any other employer in Connecticut. The Knights of Columbus, the world's largest Catholic fraternal
service organization and a Fortune 1000 company, is headquartered in New Haven. Two more Fortune 1000 companies are
based in Greater New Haven: the electrical equipment producers Hubbell, based in Orange, and Amphenol, based in Wallingford.
Eight Courant 100 companies are based in Greater New Haven, with four headquartered in New Haven proper. New Haven-based
companies traded on stock exchanges include NewAlliance Bank (NYSE: NAL), the second largest bank in Connecticut and fourth-largest in New England; Higher One Holdings (NYSE: ONE), a financial services firm; United Illuminating (NYSE: UIL), the electricity distributor for southern Connecticut; Achillion Pharmaceuticals (NASDAQ: ACHN); Alexion Pharmaceuticals (NasdaqGS: ALXN); and Transpro Inc. (AMEX: TPR). Vion Pharmaceuticals is traded OTC (OTC BB: VIONQ.OB). Other notable
companies based in the city include the Peter Paul Candy Manufacturing Company (the candy-making division of the
Hershey Company), the American division of Assa Abloy (one of the world's leading manufacturers of locks), Yale University
Press, and the Russell Trust Association (the business arm of the Skull and Bones Society). The Southern New England
Telephone Company (SNET) began operations in the city as the District Telephone Company of New Haven in 1878; the
company remains headquartered in New Haven as a subsidiary of AT&T Inc., now doing business as AT&T Connecticut,
and provides telephone service for all but two municipalities in Connecticut. The U.S. Census Bureau reports a 2010
population of 129,779, with 47,094 households and 25,854 families within the city of New Haven. The population density
is 6,859.8 people per square mile (2,648.6/km²). There are 52,941 housing units at an average density of 2,808.5
per square mile (1,084.4/km²). The racial makeup of the city is 42.6% White, 35.4% African American, 0.5% Native
American, 4.6% Asian, 0.1% Pacific Islander, 12.9% from other races, and 3.9% from two or more races. Hispanic or
Latino residents of any race were 27.4% of the population. Non-Hispanic Whites were 31.8% of the population in 2010,
down from 69.6% in 1970. The city's demography is shifting rapidly: New Haven has always been a city of immigrants
and currently its Latino population is growing. Previous influxes have included African-Americans in the postwar era, and Irish, Italian and (to a lesser degree) Slavic peoples in the prewar period. New Haven is
a predominantly Roman Catholic city, as the city's Dominican, Irish, Italian, Mexican, Ecuadorian, and Puerto Rican
populations are overwhelmingly Catholic. The city is part of the Archdiocese of Hartford. Jews also make up a considerable
portion of the population, as do Black Baptists. There is a growing number of (mostly Puerto Rican) Pentecostals
as well. There are churches for all major branches of Christianity within the city, multiple store-front churches,
ministries (especially in working-class Latino and Black neighborhoods), a mosque, many synagogues (including two
yeshivas), and other places of worship; the level of religious diversity in the city is high. New Haven is the birthplace
of former president George W. Bush, who was born when his father, former president George H. W. Bush, was living
in New Haven while a student at Yale. In addition to being the site of the college educations of both Presidents
Bush, as Yale students, New Haven was also the temporary home of former presidents William Howard Taft, Gerald Ford,
and Bill Clinton, as well as Secretary of State John Kerry. President Clinton met his wife, former U.S. Secretary
of State Hillary Rodham Clinton, while the two were students at Yale Law School. Former vice presidents John C. Calhoun
and Dick Cheney also studied in New Haven (although the latter did not graduate from Yale). Before the 2008 election,
the last time there was not a person with ties to New Haven and Yale on either major party's ticket was 1968. James
Hillhouse, a New Haven native, served as President pro tempore of the United States Senate in 1801. New Haven was
the subject of Who Governs? Democracy and Power in An American City, a very influential book in political science
by preeminent Yale professor Robert A. Dahl, which includes an extensive history of the city and thorough description
of its politics in the 1950s. New Haven's theocratic history is also mentioned several times by Alexis de Tocqueville
in his classic volume on 19th-century American political life, Democracy in America. New Haven was the residence
of conservative thinker William F. Buckley, Jr., in 1951, when he wrote his influential God and Man at Yale. William
Lee Miller's The Fifteenth Ward and the Great Society (1966) similarly explores the relationship between local politics
in New Haven and national political movements, focusing on Lyndon Johnson's Great Society and urban renewal. In 1970,
the New Haven Black Panther trials took place, the largest and longest trials in Connecticut history. Black Panther
Party co-founder Bobby Seale and ten other Party members were tried for murdering an alleged informant. Beginning
on May Day, the city became a center of protest for 12,000 Panther supporters, college students, and New Left activists
(including Jean Genet, Benjamin Spock, Abbie Hoffman, Jerry Rubin, and John Froines), who amassed on the New Haven
Green, across the street from where the trials were being held. Violent confrontations between the demonstrators
and the New Haven police occurred, and several bombs were set off in the area by radicals. The event became a rallying
point for the New Left and critics of the Nixon Administration. In April 2009, the United States Supreme Court agreed
to hear a suit over reverse discrimination brought by 20 white and Hispanic firefighters against the city. The suit
involved the 2003 promotion test for the New Haven Fire Department. After the tests were scored, no blacks scored
high enough to qualify for consideration for promotion, so the city announced that no one would be promoted. On 29
June 2009, the United States Supreme Court ruled in favor of the firefighters, agreeing that they were improperly
denied promotion because of their race. The case, Ricci v. DeStefano, became highly publicized and brought national
attention to New Haven politics due to the involvement of then-Supreme Court nominee (and Yale Law School graduate)
Sonia Sotomayor in a lower court decision. Garry Trudeau, creator of the political Doonesbury comic strip, attended
Yale University. There he met fellow student and later Green Party candidate for Congress Charles Pillsbury, a long-time
New Haven resident for whom Trudeau's comic strip is named. During his college years, Pillsbury was known by the
nickname "The Doones". A theory of international law, which argues for a sociological normative approach to jurisprudence, is named the New Haven Approach after the city. Connecticut US Senator Richard Blumenthal is a
Yale graduate, as is former Connecticut US Senator Joe Lieberman who also was a New Haven resident for many years,
before moving back to his hometown of Stamford. An analysis by the Regional Data Cooperative for Greater
New Haven, Inc., has shown that due to issues of comparative denominators and other factors, such municipality-based
rankings can be considered inaccurate. For example, two cities of identical population can cover widely differing
land areas, making such analyses irrelevant. The research organization called for comparisons based on neighborhoods,
blocks, or standard methodologies (similar to those used by Brookings, DiversityData, and other established institutions),
not based on municipalities. New Haven is a notable center for higher education. Yale University, at the heart of
downtown, is one of the city's best known features and its largest employer. New Haven is also home to Southern Connecticut
State University, part of the Connecticut State University System, and Albertus Magnus College, a private institution.
Gateway Community College has a campus in downtown New Haven, formerly located in the Long Wharf district; Gateway consolidated into a single new state-of-the-art campus downtown (on the site of the old Macy's building), which opened for the Fall 2012 semester. Hopkins School, a private school, was founded in 1660 and is the fifth-oldest
educational institution in the United States. New Haven is home to a number of other private schools as well as public
magnet schools, including Metropolitan Business Academy, High School in the Community, Hill Regional Career High
School, Co-op High School, New Haven Academy, ACES Educational Center for the Arts, the Foote School and the Sound
School, all of which draw students from New Haven and suburban towns. New Haven is also home to two Achievement First
charter schools, Amistad Academy and Elm City College Prep, and to Common Ground, an environmental charter school.
The city is home to New Haven Promise, a scholarship funded by Yale University for students who meet the requirements.
Students must be enrolled in a public high school (charters included) for four years, be a resident of the city during
that time, carry a 3.0 cumulative grade-point average, have a 90-percent attendance rate and perform 40 hours of
service to the city. The initiative was launched in 2010 and there are currently more than 500 Scholars enrolled
in qualifying Connecticut colleges and universities. There are more than 60 cities in the country that have a Promise-type
program for their students. Livability.com named New Haven as the Best Foodie City in the country in 2014. There
are 56 Zagat-rated restaurants in New Haven, the most in Connecticut and the third most in New England (after Boston
and Cambridge). More than 120 restaurants are located within two blocks of the New Haven Green. The city is home
to an eclectic mix of ethnic restaurants and small markets specializing in various foreign foods. Represented cuisines
include Malaysian, Ethiopian, Spanish, Belgian, French, Greek, Latin American, Mexican, Italian, Thai, Chinese, Japanese,
Vietnamese, Korean, Indian, Jamaican, Cuban, Peruvian, Syrian/Lebanese, and Turkish. New Haven's greatest culinary
claim to fame may be its pizza, which has been claimed to be among the best in the country, or even in the world.
New Haven-style pizza, called "apizza" (pronounced ah-BEETS, [aˈpitts] in the original Italian dialect), made its
debut at the iconic Frank Pepe Pizzeria Napoletana (known as Pepe's) in 1925. Apizza is baked in coal- or wood-fired
brick ovens, and is notable for its thin crust. Apizza may be red (with a tomato-based sauce) or white (with a sauce
of garlic and olive oil), and pies ordered "plain" are made without the otherwise customary mozzarella cheese (originally
smoked mozzarella, known as "scamorza" in Italian). A white clam pie is a well-known specialty of the restaurants
on Wooster Street in the Little Italy section of New Haven, including Pepe's and Sally's Apizza (which opened in
1938). Modern Apizza on State Street, which opened in 1934, is also well-known. A second New Haven gastronomical
claim to fame is Louis' Lunch, which is located in a small brick building on Crown Street and has been serving fast
food since 1895. Though fiercely debated, the restaurant's founder Louis Lassen is credited by the Library of Congress
with inventing the hamburger and steak sandwich. Louis' Lunch broils hamburgers, steak sandwiches and hot dogs vertically
in original antique 1898 cast iron stoves using gridirons, patented by local resident Luigi Pieragostini in 1939,
that hold the meat in place while it cooks. During weekday lunchtime, over 150 lunch carts and food trucks from neighborhood
restaurants cater to different student populations throughout Yale's campus. The carts cluster at three main points:
by Yale – New Haven Hospital in the center of the Hospital Green (Cedar and York streets), by Yale's Trumbull College
(Elm and York streets), and on the intersection of Prospect and Sachem streets by the Yale School of Management.
Popular farmers' markets, managed by the local non-profit CitySeed, set up shop weekly in several neighborhoods,
including Westville/Edgewood Park, Fair Haven, Upper State Street, Wooster Square, and Downtown/New Haven Green.
The city hosts numerous theatres and production houses, including the Yale Repertory Theatre, the Long Wharf Theatre,
and the Shubert Theatre. There is also theatre activity from the Yale School of Drama, which works through the Yale
University Theatre and the student-run Yale Cabaret. Southern Connecticut State University hosts the Lyman Center
for the Performing Arts. The shuttered Palace Theatre (opposite the Shubert Theatre) is being renovated and will
reopen as the College Street Music Hall in May 2015. Smaller theatres include the Little Theater on Lincoln Street.
Cooperative Arts and Humanities High School also boasts a state-of-the-art theatre on College Street. The theatre
is used for student productions and also hosts the weekly services of a local non-denominational church, City Church New Haven. New Haven has a variety of museums, many of them associated with Yale. The Beinecke Rare Book
and Manuscript Library features an original copy of the Gutenberg Bible. There is also the Connecticut Children's
Museum; the Knights of Columbus museum near that organization's world headquarters; the Peabody Museum of Natural
History; the Yale University Collection of Musical Instruments; the Eli Whitney Museum (across the town line in Hamden,
Connecticut, on Whitney Avenue); the Yale Center for British Art, which houses the largest collection of British
art outside the U.K., and the Yale University Art Gallery, the nation's oldest college art museum.[citation needed]
New Haven is also home to the New Haven Museum and Historical Society on Whitney Avenue, which has a library of many
primary source treasures dating from Colonial times to the present. Artspace on Orange Street is one of several contemporary
art galleries around the city, showcasing the work of local, national, and international artists. Others include
City Gallery and A. Leaf Gallery in the downtown area. Westville galleries include Kehler Liddell, Jennifer Jane
Gallery, and The Hungry Eye. The Erector Square complex in the Fair Haven neighborhood houses the Parachute Factory
gallery along with numerous artist studios, and the complex serves as an active destination during City-Wide Open
Studios held yearly in October. The New Haven Green is the site of many free music concerts, especially during the
summer months. These have included the New Haven Symphony Orchestra, the July Free Concerts on the Green in July,
and the New Haven Jazz Festival in August. The Jazz Festival, which began in 1982, was one of the longest-running free outdoor festivals in the U.S. until its 2007 edition was canceled. Headliners such as The Breakfast, Dave Brubeck,
Ray Charles and Celia Cruz have historically drawn 30,000 to 50,000 fans, filling up the New Haven Green to capacity.
The New Haven Jazz Festival was revived in 2008 and has been sponsored since by Jazz Haven. In addition to the Jazz
Festival (described above), New Haven serves as the home city of the annual International Festival of Arts and Ideas.
New Haven's Saint Patrick's Day parade, which began in 1842, is the oldest such parade in New England and draws
the largest crowds of any one-day spectator event in Connecticut. The St. Andrew the Apostle Italian Festival has
taken place in the historic Wooster Square neighborhood every year since 1900. Other parishes in the city celebrate
the Feast of Saint Anthony of Padua and a carnival in honor of St. Bernadette Soubirous. New Haven celebrates Powder
House Day every April on the New Haven Green to commemorate the city's entrance into the Revolutionary War. The annual
Wooster Square Cherry Blossom Festival commemorates the 1973 planting of 72 Yoshino Japanese Cherry Blossom trees
by the New Haven Historic Commission in collaboration with the New Haven Parks Department and residents of the neighborhood.
The Festival now draws well over 5,000 visitors. The Film Fest New Haven has been held annually since 1995. New Haven
is served by the daily New Haven Register, the weekly "alternative" New Haven Advocate (which is run by Tribune,
the corporation owning the Hartford Courant), the online daily New Haven Independent, and the monthly Grand News
Community Newspaper. Downtown New Haven is covered by an in-depth civic news forum, Design New Haven. The Register
also backs PLAY magazine, a weekly entertainment publication. The city is also served by several student-run papers,
including the Yale Daily News, the weekly Yale Herald and a humor tabloid, Rumpus Magazine. WTNH Channel 8, the ABC
affiliate for Connecticut, WCTX Channel 59, the MyNetworkTV affiliate for the state, and Connecticut Public Television
station WEDY channel 65, a PBS affiliate, broadcast from New Haven. All New York City news and sports team stations
broadcast to New Haven County. New Haven has a history of professional sports franchises dating back to the 19th
century and has been the home to professional baseball, basketball, football, hockey, and soccer teams—including
the New York Giants of the National Football League from 1973 to 1974, who played at the Yale Bowl. Throughout the
second half of the 20th century, New Haven consistently had minor league hockey and baseball teams, which played
at the New Haven Arena (built in 1926, demolished in 1972), New Haven Coliseum (1972–2002), and Yale Field (1928–present).
When John DeStefano, Jr., became mayor of New Haven in 1995, he outlined a plan to transform the city into a major
cultural and arts center in the Northeast, which involved investments in programs and projects other than sports
franchises. As nearby Bridgeport built new sports facilities, the brutalist New Haven Coliseum rapidly deteriorated.
Believing the venue's upkeep to be a drain on tax dollars, the DeStefano administration closed the Coliseum
in 2002; it was demolished in 2007. New Haven's last professional sports team, the New Haven County Cutters, left
in 2009. The DeStefano administration did, however, see the construction of the New Haven Athletic Center in 1998,
a 94,000-square-foot (8,700 m2) indoor athletic facility with a seating capacity of over 3,000. The NHAC, built adjacent
to Hillhouse High School, is used for New Haven public schools athletics, as well as large-scale area and state sporting
events; it is the largest high school indoor sports complex in the state. New Haven was the host of the 1995 Special
Olympics World Summer Games; then-President Bill Clinton spoke at the opening ceremonies. The city is home to the
Pilot Pen International tennis event, which takes place every August at the Connecticut Tennis Center, one of the
largest tennis venues in the world. Every other year, New Haven hosts "The Game" between Yale and Harvard, the country's
second-oldest college football rivalry. Numerous road races take place in New Haven, including the USA 20K Championship
during the New Haven Road Race. New Haven has many architectural landmarks dating from every important time period
and architectural style in American history. The city has been home to a number of architects and architectural firms
that have left their mark on the city including Ithiel Town and Henry Austin in the 19th century and Cesar Pelli,
Warren Platner, Kevin Roche, Herbert Newman and Barry Svigals in the 20th. The Yale School of Architecture has fostered
this important component of the city's economy. Cass Gilbert, of the Beaux-Arts school, designed New Haven's Union
Station and the New Haven Free Public Library and was also commissioned for a City Beautiful plan in 1919. Frank
Lloyd Wright, Marcel Breuer, Alexander Jackson Davis, Philip C. Johnson, Gordon Bunshaft, Louis Kahn, James Gamble
Rogers, Frank Gehry, Charles Willard Moore, Stefan Behnisch, James Polshek, Paul Rudolph, Eero Saarinen and Robert
Venturi all have designed buildings in New Haven. Yale's 1950s-era Ingalls Rink, designed by Eero Saarinen, was included
on the America's Favorite Architecture list created in 2007. Many historical sites exist throughout the city, including
59 properties listed on the National Register of Historic Places. Of these, nine are among the 60 U.S. National Historic
Landmarks in Connecticut. The New Haven Green, one of the National Historic Landmarks, was formed in 1638, and is
home to three 19th-century churches. Below one of the churches (referred to as the Center Church on-the-Green) lies
a 17th-century crypt, which is open to visitors. Some of the more famous burials include the first wife of Benedict
Arnold and the aunt and grandmother of President Rutherford B. Hayes; Hayes visited the crypt while President in
1880. The Old Campus of Yale University is located next to the Green, and includes Connecticut Hall, Yale's oldest
building and a National Historic Landmark. The Hillhouse Avenue area, which is listed on the National Register of
Historic Places and is also a part of Yale's campus, has been called a walkable museum, due to its 19th-century mansions
and street scape; Charles Dickens is said to have called Hillhouse Avenue "the most beautiful street in America"
when visiting the city in 1868. After the American Revolutionary War broke out in 1776, the Connecticut colonial
government ordered the construction of Black Rock Fort (to be built on top of an older 17th-century fort) to protect
the port of New Haven. In 1779, during the Battle of New Haven, British soldiers captured Black Rock Fort and burned
the barracks to the ground. The fort was reconstructed in 1807 by the federal government (on orders from the Thomas
Jefferson administration), and rechristened Fort Nathan Hale, after the Revolutionary War hero who had lived in New
Haven. The cannons of Fort Nathan Hale successfully held off British warships during the War of 1812. In 1863,
during the Civil War, a second Fort Hale was built next to the original, complete with bomb-resistant bunkers and
a moat, to defend the city should a Southern raid against New Haven be launched. The United States Congress deeded
the site to the state in 1921, and all three versions of the fort have been restored. The site is now listed on the
National Register of Historic Places and receives thousands of visitors each year. Grove Street Cemetery, a National
Historic Landmark which lies adjacent to Yale's campus, contains the graves of Roger Sherman, Eli Whitney, Noah Webster,
Josiah Willard Gibbs, Charles Goodyear and Walter Camp, among other notable burials. The cemetery is known for its
grand Egyptian Revival gateway. The Union League Club of New Haven building, located on Chapel Street, is notable not only as a historic Beaux-Arts building but also for standing on the site of Roger Sherman's home; George Washington is known to have stayed at the Sherman residence while President in 1789 (one of three times
Washington visited New Haven throughout his lifetime). Lighthouse Point Park, a public beach run by the city, was
a popular tourist destination during the Roaring Twenties, attracting luminaries of the period such as Babe Ruth
and Ty Cobb. The park remains popular among New Haveners, and is home to the Five Mile Point Lighthouse, constructed
in 1847, and the Lighthouse Point Carousel, constructed in 1916. Five Mile Point Light was decommissioned in 1877
following the construction of Southwest Ledge Light at the entrance of the harbor, which remains in service to this
day. Both of the lighthouses and the carousel are listed on the National Register of Historic Places. Union Station
is further served by four Amtrak lines: the Northeast Regional and the high-speed Acela Express provide service to
New York, Washington, D.C. and Boston, and rank as the first and second busiest routes in the country; the New Haven–Springfield
Line provides service to Hartford and Springfield, Massachusetts; and the Vermonter provides service to both Washington,
D.C., and Vermont, 15 miles (24 km) from the Canadian border. Amtrak also codeshares with United Airlines, via Newark Airport (EWR), for travel to any airport served by United Airlines on itineraries originating from or terminating at Union Station (IATA: ZVE). The New Haven Division buses follow routes that had originally been covered by trolley service. Horse-drawn streetcars began operating in New Haven in the 1860s, and by the mid-1890s all the lines had become electric. In the
1920s and 1930s, some of the trolley lines began to be replaced by bus lines, with the last trolley route converted
to bus in 1948. The City of New Haven is in the very early stages of considering the restoration of streetcar (light-rail)
service, which has been absent since the postwar period. The Farmington Canal Trail is a rail trail that will eventually
run continuously from downtown New Haven to Northampton, Massachusetts. The scenic trail follows the path of the
historic New Haven and Northampton Company and the Farmington Canal. Currently, there is a continuous 14-mile (23
km) stretch of the trail from downtown, through Hamden and into Cheshire, making bicycle commuting between New Haven
and those suburbs possible. The trail is part of the East Coast Greenway, a proposed 3,000-mile (4,800 km) bike path
that would link every major city on the East Coast from Florida to Maine. In 2004, the first bike lane in the city
was added to Orange Street, connecting East Rock Park and the East Rock neighborhood to downtown. Since then, bike
lanes have also been added to sections of Howard Ave, Elm St, Dixwell Avenue, Water Street, Clinton Avenue and State
Street. The city has created recommended bike routes for getting around New Haven, including use of the Canal Trail
and the Orange Street lane. The city publishes a bike map of the entire city, as well as bike maps broken down by area. As of the end of 2012, bicycle lanes have also been added in both directions on Dixwell Avenue along most of the
street from downtown to the Hamden town line, as well as along Howard Avenue from Yale New Haven Hospital to City
Point. New Haven lies at the intersection of Interstate 95 on the coast—which provides access southward and westward
to the western coast of Connecticut and to New York City, and eastwards to the eastern Connecticut shoreline, Rhode
Island, and eastern Massachusetts—and Interstate 91, which leads northward to the interior of Massachusetts and Vermont
and the Canadian border. I-95 is infamous for traffic jams increasing with proximity to New York City; on the east
side of New Haven it passes over the Quinnipiac River via the Pearl Harbor Memorial Bridge (the "Q Bridge"), which often presents
a major bottleneck to traffic. I-91, however, is relatively less congested, except at the intersection with I-95
during peak travel times. The Oak Street Connector (Connecticut Route 34) intersects I-91 at exit 1, just south of
the I-95/I-91 interchange, and runs northwest for a few blocks as an expressway spur into downtown before emptying
onto surface roads. The Wilbur Cross Parkway (Connecticut Route 15) runs parallel to I-95 west of New Haven, turning
northwards as it nears the city and then running northwards parallel to I-91 through the outer rim of New Haven and
Hamden, offering an alternative to the I-95/I-91 journey (restricted to non-commercial vehicles). Route 15 in New
Haven is the site of the only highway tunnel in the state (officially designated as Heroes Tunnel), running through
West Rock, home to West Rock Park and the Three Judges Cave. The city also has several major surface arteries. U.S.
Route 1 (Columbus Avenue, Union Avenue, Water Street, Forbes Avenue) runs in an east-west direction south of downtown
serving Union Station and leading out of the city to Milford, West Haven, East Haven and Branford. The main road
from downtown heading northwest is Whalley Avenue (partly signed as Route 10 and Route 63) leading to Westville and
Woodbridge. Heading north towards Hamden, there are two major thoroughfares, Dixwell Avenue and Whitney Avenue. To
the northeast are Middletown Avenue (Route 17), which leads to the Montowese section of North Haven, and Foxon Boulevard
(Route 80), which leads to the Foxon section of East Haven and to the town of North Branford. To the west is Route
34, which leads to the city of Derby. Other major intracity arteries are Ella Grasso Boulevard (Route 10) west of
downtown, and College Street, Temple Street, Church Street, Elm Street, and Grove Street in the downtown area. New
Haven Harbor is home to the Port of New Haven, a deep-water seaport with three berths capable of hosting vessels
and barges as well as the facilities required to handle break bulk cargo. The port has the capacity to load 200 trucks
a day from the ground or via loading docks. Rail transportation access is available, with a private switch engine
for yard movements and private siding for loading and unloading. Approximately 400,000 square feet (40,000 m2) of
inside storage and 50 acres (200,000 m2) of outside storage are available at the site. Five shore cranes with a 250-ton
capacity and 26 forklifts, each with a 26-ton capacity, are also available. The New Haven area supports several medical
facilities that are considered some of the best hospitals in the country. There are two major medical centers downtown:
Yale – New Haven Hospital has four pavilions, including the Yale – New Haven Children's Hospital and the Smilow Cancer
Hospital; the Hospital of Saint Raphael is several blocks north, and touts its excellent cardiac emergency care program.
Smaller downtown health facilities are the Temple Medical Center on Temple Street, the Connecticut Mental Health Center across Park Street from Yale – New Haven Hospital, and the Hill Health Center, which serves the working-class Hill neighborhood.
A large Veterans Affairs hospital is located in neighboring West Haven. To the west in Milford is Milford Hospital,
and to the north in Meriden is the MidState Medical Center. Yale and New Haven are working to build a medical and
biotechnology research hub in the city and Greater New Haven region, and are succeeding to some extent.[citation
needed] The city, state and Yale together run Science Park, a large site three blocks northwest of Yale's Science
Hill campus. This multi-block site, approximately bordered by Mansfield Street, Division Street, and Shelton Avenue,
is the former home of Winchester's and Olin Corporation's 45 large-scale factory buildings. Currently, sections of
the site are large-scale parking lots or abandoned structures, but there is also a large remodeled and functioning
area of buildings (leased primarily by a private developer) with numerous Yale employees, financial service and biotech
companies. A second biotechnology district is being planned for the median strip on Frontage Road, on land cleared
for the never-built Route 34 extension. As of late 2009, a Pfizer drug-testing clinic, a medical laboratory building
serving Yale – New Haven Hospital, and a mixed-use structure containing parking, housing and office space, have been
constructed on this corridor. A former SNET telephone building at 300 George Street is being converted into lab space,
and has so far been quite successful in attracting biotechnology and medical firms. Near New Haven is the static
inverter plant of the HVDC Cross Sound Cable. There are three PureCell Model 400 fuel cells placed in the city of
New Haven—one at the New Haven Public Schools and newly constructed Roberto Clemente School, one at the mixed-use
360 State Street building, and one at City Hall. According to Giovanni Zinn of the city's Office of Sustainability,
each fuel cell may save the city up to $1 million in energy costs over a decade. The fuel cells were provided by
ClearEdge Power, formerly UTC Power. New Haven has been depicted in a number of movies. Scenes in the film All About
Eve (1950) are set at the Taft Hotel (now Taft Apartments) on the corner of College and Chapel streets, and the history
of New Haven theaters as Broadway "tryouts" is depicted in the Fred Astaire film The Band Wagon (1953). The city
was fictionally portrayed in the Steven Spielberg movie Amistad (1997) concerning the events around the mutiny trial
of that ship's rebelling captives. New Haven was also fictionalized in the movie The Skulls (2000), which focused
on conspiracy theories surrounding the real-life Skull and Bones secret society which is located in New Haven. Several
recent movies have been filmed in New Haven, including Mona Lisa Smile (2003), with Julia Roberts, The Life Before
Her Eyes (2007), with Uma Thurman, and Indiana Jones and the Kingdom of the Crystal Skull (2008) directed by Steven
Spielberg and starring Harrison Ford, Cate Blanchett and Shia LaBeouf. The filming of Crystal Skull involved an extensive
chase sequence through the streets of New Haven. Several downtown streets were closed to traffic and received a "makeover"
to look like the streets of 1957, when the film is set. Five hundred locals were cast as extras for the film. In Everybody's Fine
(2009), Robert De Niro has a close encounter in what is supposed to be the Denver train station; the scene was filmed
in New Haven's Union Station. New Haven is repeatedly referenced by Nick Carraway in F. Scott Fitzgerald's literary
classic The Great Gatsby, as well as by fellow fictional Yale alumnus C. Montgomery Burns, a character from The Simpsons
television show. A fictional native of New Haven is Alex Welch from the novella The Odd Saga of the American and
a Curious Icelandic Flock. The TV show Gilmore Girls is set (but not filmed) in New Haven and at Yale University,
as are scenes in the film The Sisterhood of the Traveling Pants 2 (2008). New Haven was the location of one of Jim
Morrison's infamous arrests while he fronted the rock group The Doors. The near-riotous concert and arrest in 1967
at the New Haven Arena was commemorated by Morrison in the lyrics to "Peace Frog" which include the line "...blood
in the streets in the town of New Haven..." This was the first time a rock star had ever been arrested in concert.[citation
needed] This event is portrayed in the movie The Doors (1991), starring Val Kilmer as Morrison, with a concert hall
in Los Angeles used to depict the New Haven Arena.
The region, as part of Lorraine, was part of the Holy Roman Empire, and then was gradually annexed by France in the 17th
century, and formalized as one of the provinces of France. The Calvinist manufacturing republic of Mulhouse, known
as Stadtrepublik Mülhausen, became a part of Alsace after a vote by its citizens on 4 January 1798. Alsace is frequently
mentioned with and as part of Lorraine and the former duchy of Lorraine, since it was a vital part of the duchy,
and later because its possession by Germany as the imperial province of Alsace-Lorraine (1871–1918) was contested in the 19th and 20th centuries; France and Germany exchanged control of parts of the region (including Alsace) four times in 75 years. With the decline of the Roman Empire, Alsace became the territory of the Germanic Alemanni. The Alemanni were an
agricultural people, and their Germanic language formed the basis of modern-day dialects spoken along the Upper Rhine
(Alsatian, Alemannian, Swabian, Swiss). Clovis and the Franks defeated the Alemanni during the 5th century AD, culminating
with the Battle of Tolbiac, and Alsace became part of the Kingdom of Austrasia. Under Clovis' Merovingian successors
the inhabitants were Christianized. Alsace remained under Frankish control until the Frankish realm, following the
Oaths of Strasbourg of 842, was formally dissolved in 843 at the Treaty of Verdun; the grandsons of Charlemagne divided
the realm into three parts. Alsace formed part of Middle Francia, which was ruled by the eldest grandson Lothar
I. Lothar died early in 855 and his realm was divided into three parts. The part known as Lotharingia, or Lorraine,
was given to Lothar's son. The rest was shared between Lothar's brothers Charles the Bald (ruler of the West Frankish
realm) and Louis the German (ruler of the East Frankish realm). The Kingdom of Lotharingia was short-lived, however,
becoming the stem duchy of Lorraine in Eastern Francia after the Treaty of Ribemont in 880. Alsace was united with
the other Alemanni east of the Rhine into the stem duchy of Swabia. At about this time the surrounding areas experienced
recurring fragmentation and reincorporations among a number of feudal secular and ecclesiastical lordships, a common
process in the Holy Roman Empire. Alsace experienced great prosperity during the 12th and 13th centuries under Hohenstaufen
emperors. Frederick I set up Alsace as a province (a procuratio, not a provincia) to be ruled by ministeriales, a
non-noble class of civil servants. The idea was that such men would be more tractable and less likely to alienate
the fief from the crown out of their own greed. The province had a single provincial court (Landgericht) and a central
administration with its seat at Hagenau. Frederick II designated the Bishop of Strasbourg to administer Alsace, but
the authority of the bishop was challenged by Count Rudolf of Habsburg, who received his rights from Frederick II's
son Conrad IV. Strasbourg began to grow to become the most populous and commercially important town in the region.
In 1262, after a long struggle with the ruling bishops, its citizens gained the status of free imperial city. A stop
on the Paris-Vienna-Orient trade route, as well as a port on the Rhine route linking southern Germany and Switzerland
to the Netherlands, England and Scandinavia, it became the political and economic center of the region. Cities such
as Colmar and Hagenau also began to grow in economic importance and gained a kind of autonomy within the "Decapole"
or "Dekapolis", a federation of ten free towns. As in much of Europe, the prosperity of Alsace was brought to an end in the 14th century by a series of harsh winters, bad harvests, and the Black Death. These hardships were blamed on
Jews, leading to the pogroms of 1336 and 1339. In 1349, Jews of Alsace were accused of poisoning the wells with plague,
leading to the massacre of thousands of Jews during the Strasbourg pogrom. Jews were subsequently forbidden to settle
in the town. An additional natural disaster was the Rhine rift earthquake of 1356, one of Europe's worst, which reduced Basel to ruins. Prosperity returned to Alsace under Habsburg administration during the Renaissance. The central power of the Holy Roman Empire had begun to decline following years of imperial adventures in Italian lands, often ceding hegemony
in Western Europe to France, which had long since centralized power. France began an aggressive policy of expanding
eastward, first to the rivers Rhône and Meuse, and when those borders were reached, aiming for the Rhine. In 1299,
the French proposed a marriage alliance between Philip IV of France's sister Blanche and Albert I of Germany's son
Rudolf, with Alsace to be the dowry; however, the deal never came off. In 1307, the town of Belfort was first chartered
by the Counts of Montbéliard. During the next century, France was to be militarily shattered by the Hundred Years'
War, which for a time prevented any further moves in this direction. After the conclusion of the war, France
was again free to pursue its desire to reach the Rhine and in 1444 a French army appeared in Lorraine and Alsace.
It took up winter quarters, demanded the submission of Metz and Strasbourg and launched an attack on Basel. In 1469,
following the Treaty of St. Omer, Upper Alsace was sold by Archduke Sigismund of Austria to Charles the Bold, Duke
of Burgundy. Although Charles was the nominal landlord, taxes were paid to Frederick III, Holy Roman Emperor. The
latter was able to use this tax and a dynastic marriage to his advantage to gain back full control of Upper Alsace
(apart from the free towns, but including Belfort) in 1477 when it became part of the demesne of the Habsburg family,
who were also rulers of the empire. The town of Mulhouse joined the Swiss Confederation in 1515, where it was to
remain until 1798. By the time of the Protestant Reformation in the 16th century, Strasbourg was a prosperous community,
and its inhabitants accepted Protestantism in 1523. Martin Bucer was a prominent Protestant reformer in the region.
His efforts were countered by the Roman Catholic Habsburgs who tried to eradicate heresy in Upper Alsace. As a result,
Alsace was transformed into a mosaic of Catholic and Protestant territories. On the other hand, Mömpelgard (Montbéliard)
to the southwest of Alsace, belonging to the Counts of Württemberg since 1397, remained a Protestant enclave in France
until 1793. This situation prevailed until 1639, when most of Alsace was conquered by France so as to keep it out
of the hands of the Spanish Habsburgs, who wanted a clear road to their valuable and rebellious possessions in the
Spanish Netherlands. Beset by enemies and seeking to gain a free hand in Hungary, the Habsburgs sold their Sundgau
territory (mostly in Upper Alsace), which France had already occupied, to France in 1646 for the sum of 1.2 million Thalers.
When hostilities were concluded in 1648 with the Treaty of Westphalia, most of Alsace was recognized as part of France,
although some towns remained independent. The treaty stipulations regarding Alsace were complex; although the French
king gained sovereignty, existing rights and customs of the inhabitants were largely preserved. France continued
to maintain its customs border along the Vosges mountains where it had been, leaving Alsace more economically oriented
to neighbouring German-speaking lands. The German language remained in use in local administration, in schools, and
at the (Lutheran) University of Strasbourg, which continued to draw students from other German-speaking lands. The
1685 Edict of Fontainebleau, by which the French king ordered the suppression of French Protestantism, was not applied
in Alsace. France did endeavour to promote Catholicism; Strasbourg Cathedral, for example, which had been Lutheran
from 1524 to 1681, was returned to the Catholic Church. However, compared to the rest of France, Alsace enjoyed a
climate of religious tolerance. The year 1789 brought the French Revolution and with it the first division of Alsace
into the départements of Haut- and Bas-Rhin. Alsatians played an active role in the French Revolution. On 21 July
1789, after receiving news of the Storming of the Bastille in Paris, a crowd of people stormed the Strasbourg city
hall, forcing the city administrators to flee and symbolically putting an end to the feudal system in Alsace. In
1792, Rouget de Lisle composed in Strasbourg the Revolutionary marching song "La Marseillaise" (as Marching song
for the Army of the Rhine), which later became the anthem of France. "La Marseillaise" was played for the first time
in April of that year in front of the mayor of Strasbourg Philippe-Frédéric de Dietrich. Some of the most famous
generals of the French Revolution also came from Alsace, notably Kellermann, the victor of Valmy, Kléber, who led
the armies of the French Republic in Vendée and Westermann, who also fought in the Vendée. At the same time, some
Alsatians were in opposition to the Jacobins and sympathetic to the invading forces of Austria and Prussia who sought
to crush the nascent revolutionary republic. Many of the residents of the Sundgau made "pilgrimages" to places like
Mariastein Abbey, near Basel, in Switzerland, for baptisms and weddings. When the French Revolutionary Army of the
Rhine was victorious, tens of thousands fled east before it. When they were later permitted to return (in some cases
not until 1799), it was often to find that their lands and homes had been confiscated. These conditions led to emigration
by hundreds of families to newly vacant lands in the Russian Empire in 1803–4 and again in 1808. A poignant retelling
of this event based on what Goethe had personally witnessed can be found in his long poem Hermann and Dorothea. The
population grew rapidly, from 800,000 in 1814 to 914,000 in 1830 and 1,067,000 in 1846. The combination of economic
and demographic factors led to hunger, housing shortages and a lack of work for young people. Thus, it is not surprising
that people left Alsace, not only for Paris – where the Alsatian community grew in numbers, with famous members such
as Baron Haussmann – but also for more distant places like Russia and the Austrian Empire, to take advantage of the
new opportunities offered there: Austria had conquered lands in Eastern Europe from the Ottoman Empire and offered
generous terms to colonists as a way of consolidating its hold on the new territories. Many Alsatians also began
to sail to the United States, settling in many areas from 1820 to 1850. In 1843 and 1844, sailing ships bringing
immigrant families from Alsace arrived at the port of New York. Some settled in Illinois, many to farm or to seek
success in commercial ventures: for example, the sailing ships Sully (in May 1843) and Iowa (in June 1844) brought
families who set up homes in northern Illinois and northern Indiana. Some Alsatian immigrants were noted for their
roles in 19th-century American economic development. Others ventured to Canada to settle in southwestern Ontario,
notably Waterloo County. By 1790, the Jewish population of Alsace was approximately 22,500, about 3% of the provincial
population. They were highly segregated and subject to long-standing anti-Jewish regulations. They maintained their
own customs, Yiddish language, and historic traditions within the tightly-knit ghettos; they adhered to Talmudic
law enforced by their rabbis. Jews were barred from most cities and instead lived in villages. They concentrated
in trade, services, and especially in money lending. They financed about a third of the mortgages in Alsace. Official
tolerance grew during the French Revolution, with full emancipation in 1791. However, local antisemitism also increased
and Napoleon turned hostile in 1806, imposing a one-year moratorium on all debts owed to Jews. In
the 1830-1870 era most Jews moved to the cities, where they integrated and acculturated, as antisemitism sharply
declined. By 1831, the state began paying salaries to official rabbis, and in 1846 a special legal oath for Jews
was discontinued. Antisemitic local riots occasionally occurred, especially during the Revolution of 1848. The merger of Alsace into Germany from 1871 to 1918 lessened antisemitic violence. France started the Franco-Prussian War (1870–71), and was defeated by the Kingdom of Prussia and other German states. The end of the war led to the unification of
and was defeated by the Kingdom of Prussia and other German states. The end of the war led to the unification of
Germany. Otto von Bismarck annexed Alsace and northern Lorraine to the new German Empire in 1871; unlike other member states of the German federation, which had governments of their own, the new Imperial territory of Alsace-Lorraine
was under the sole authority of the Kaiser, administered directly by the imperial government in Berlin. Between 100,000
and 130,000 Alsatians (of a total population of about a million and a half) chose to remain French citizens and leave
Reichsland Elsaß-Lothringen, many of them resettling in French Algeria as Pieds-Noirs. Only in 1911 was Alsace-Lorraine
granted some measure of autonomy, which was manifested also in a flag and an anthem (Elsässisches Fahnenlied). In
1913, however, the Saverne Affair (French: Incident de Saverne) showed the limits of this new tolerance of the Alsatian
identity. During the First World War, to avoid fighting their relatives in ground combat, many Alsatians served as sailors in
the Kaiserliche Marine and took part in the Naval mutinies that led to the abdication of the Kaiser in November 1918,
which left Alsace-Lorraine without a nominal head of state. The sailors returned home and tried to found a republic.
While Jacques Peirotes, at this time deputy at the Landrat Elsass-Lothringen and just elected mayor of Strasbourg,
proclaimed the fall of the German Empire and the advent of the French Republic, a self-proclaimed government
of Alsace-Lorraine declared independence as the "Republic of Alsace-Lorraine". French troops entered Alsace less
than two weeks later to quash the worker strikes and remove the newly established Soviets and revolutionaries from
power. At the arrival of the French soldiers, many Alsatians and local Prussian/German administrators and bureaucrats
cheered the re-establishment of order. Although
Although U.S. President Woodrow Wilson had insisted that the région was self-ruling by legal status, as its constitution
had stated it was bound to the sole authority of the Kaiser and not to the German state, France tolerated no plebiscite,
as granted by the League of Nations to some eastern German territories at this time, because Alsatians were considered
by the French public as fellow Frenchmen liberated from German rule. Germany ceded the region to France under the
Treaty of Versailles. Alsace-Lorraine was occupied by Germany in 1940 during the Second World War. Although Germany
never formally annexed Alsace-Lorraine, it was incorporated into the Greater German Reich, which had been restructured
into Reichsgaue. Alsace was merged with Baden, and Lorraine with the Saarland, to become part of a planned Westmark.
During the war, 130,000 young men from Alsace and Lorraine were inducted into the German army against their will
(malgré-nous) and in some cases, the Waffen SS. Some of the latter were involved in war crimes such as the Oradour-sur-Glane
massacre. Most of them perished on the eastern front. The few that could escape fled to Switzerland or joined the
resistance. In July 1944, 1500 malgré-nous were released from Soviet captivity and sent to Algiers, where they joined
the Free French Forces. Alsace is one of the most conservative régions of France. It is one of just two régions in
metropolitan France where the conservative right won the 2004 région elections and thus controls the Alsace Regional
Council. Conservative leader Nicolas Sarkozy got his best score in Alsace (over 65%) in the second round of the French
presidential elections of 2007. The president of the Regional Council is Philippe Richert, a member of the Union
for a Popular Movement, elected in the 2010 regional election. The frequently changing status of the région throughout
history has left its mark on modern day politics in terms of a particular interest in national identity issues. Alsace
is also one of the most pro-EU regions of France. It was one of the few French regions that voted 'yes' to the European
Constitution in 2005. Most of the Alsatian population is Roman Catholic, but, largely because of the region's German
heritage, a significant Protestant community also exists: today, the EPCAAL (a Lutheran church) is France's second
largest Protestant church, also forming an administrative union (UEPAL) with the much smaller Calvinist EPRAL. Unlike
the rest of France, local law in Alsace-Moselle still provides for the Napoleonic Concordat of 1801 and the organic articles, which provide public subsidies to the Roman Catholic, Lutheran, and Calvinist churches, as well as to Jewish synagogues; religion classes in one of these faiths are compulsory in public schools. This divergence
in policy from the French majority is due to the region having been part of Imperial Germany when the 1905 law separating
the French church and state was instituted (for a more comprehensive history, see: Alsace-Lorraine). Controversy
erupts periodically on the appropriateness of this legal disposition, as well as on the exclusion of other religions
from this arrangement. Following the Protestant Reformation, promoted by local reformer Martin Bucer, the principle
of cuius regio, eius religio led to a certain amount of religious diversity in the highlands of northern Alsace.
Landowners, who as "local lords" had the right to decide which religion was allowed on their land, were eager to
entice populations from the more attractive lowlands to settle and develop their property. Many accepted without
discrimination Catholics, Lutherans, Calvinists, Jews and Anabaptists. Multiconfessional villages appeared, particularly
in the region of Alsace bossue. Alsace became one of the French regions boasting a thriving Jewish community, and
the only region with a noticeable Anabaptist population. The schism of the Amish under the lead of Jacob Amman from
the Mennonites occurred in 1693 in Sainte-Marie-aux-Mines. The strongly Catholic Louis XIV tried in vain to drive
them from Alsace. When Napoleon imposed military conscription without religious exception, most emigrated to the
American continent. There is controversy around the recognition of the Alsatian flag. The authentic historical flag is the Rot-un-Wiss; red and white are commonly found on the coats of arms of Alsatian cities (Strasbourg, Mulhouse, Sélestat...) and of many Swiss cities, especially in the region of Basel. The German state of Hesse uses a flag similar to the Rot-un-Wiss. Because it underlines the Germanic roots of the region, it was replaced in 1949 by a new "Union Jack-like" flag representing the union of the two départements, even though that flag has no real historical relevance. It has since been replaced again by a slightly different one, also representing the two départements. With the purpose of "Frenchifying" the region, Paris has not recognized the Rot-un-Wiss. Some overzealous statesmen have called it a Nazi invention, although its origins date back to the 11th century and the red and white banner of Gérard de Lorraine (also known as d'Alsace). The Rot-un-Wiss is still regarded by most of the population and by the departments' parliaments as the true historical emblem of the region, and it was widely used during protests against the creation of a new "super-region" gathering Champagne-Ardenne, Lorraine and Alsace, notably on Colmar's Statue of Liberty. From the annexation of Alsace by France
in the 17th century and the language policy of the French Revolution up to 1870, knowledge of French in Alsace increased
considerably. With the education reforms of the 19th century, the middle classes began to speak and write French
well. The French language never really managed, however, to win over the masses, the vast majority of whom continued
to speak their German dialects and write in German (which we would now call "standard German").
During a reannexation by Germany (1940–1945), High German was reinstated as the language of education. The population
was forced to speak German and 'French' family names were Germanized. Following the Second World War, the 1927 regulation
was not reinstated and the teaching of German in primary schools was suspended by a provisional rectorial decree,
which was supposed to enable French to regain lost ground. The teaching of German became a major issue, however,
as early as 1946. Following World War II, the French government pursued, in line with its traditional language policy,
a campaign to suppress the use of German as part of a wider Francization campaign. It was not until 9 June 1982,
with the Circulaire sur la langue et la culture régionales en Alsace (Memorandum on regional language and culture
in Alsace) issued by the Vice-Chancellor of the Académie Pierre Deyon, that the teaching of German in primary schools
in Alsace really began to be given more official status. The Ministerial Memorandum of 21 June 1982, known as the
Circulaire Savary, introduced financial support, over three years, for the teaching of regional languages in schools
and universities. This memorandum was, however, implemented in a fairly lax manner. Both Alsatian and Standard German
were for a time banned from public life (including street and city names, official administration, and educational
system). Though the ban has long been lifted and street signs today are often bilingual, Alsace-Lorraine is today
very French in language and culture. Few young people speak Alsatian today, although there do still exist one or
two enclaves in the Sundgau region where some older inhabitants cannot speak French, and where Alsatian is still
used as the mother tongue. A related Alemannic German survives on the opposite bank of the Rhine, in Baden, and especially
in Switzerland. However, while French is the major language of the region, the Alsatian dialect of French is heavily
influenced by German and other languages, such as Yiddish, in phonology and vocabulary. The constitution of the Fifth
Republic states that French alone is the official language of the Republic. However, Alsatian, along with other regional
languages, is recognized by the French government in the official list of languages of France. A 1999 INSEE survey
counted 548,000 adult speakers of Alsatian in France, making it the second most-spoken regional language in the country
(after Occitan). Like all regional languages in France, however, the transmission of Alsatian is on the decline.
While 39% of the adult population of Alsace speaks Alsatian, only one in four children speaks it, and only one in
ten children uses it regularly. The gastronomic symbol of the région is undoubtedly the Choucroute, a local variety
of Sauerkraut. The word Sauerkraut in Alsatian has the form sûrkrût, the same as in other southwestern German dialects, and means "sour cabbage", like its Standard German equivalent. The word entered the French language as choucroute.
To make it, the cabbage is finely shredded, layered with salt and juniper and left to ferment in wooden barrels.
Sauerkraut can be served with poultry, pork, sausage or even fish. Traditionally it is served with Strasbourg sausage
or frankfurters, bacon, smoked pork or smoked Morteau or Montbéliard sausages, or a selection of other pork products.
Served alongside are often roasted or steamed potatoes or dumplings. "Alsatia", the Latin form of Alsace's name,
has long ago entered the English language with the specialized meaning of "a lawless place" or "a place under no
jurisdiction" - since Alsace was conceived by English people to be such. It was used into the 20th century as a term
for a ramshackle marketplace, "protected by ancient custom and the independence of their patrons". As of 2007, the
word is still in use among the English and Australian judiciaries with the meaning of a place where the law cannot
reach: "In setting up the Serious Organised Crime Agency, the state has set out to create an Alsatia - a region of
executive action free of judicial oversight," Lord Justice Sedley in UMBS v SOCA 2007. At present, plans are being
considered for building a new dual carriageway west of Strasbourg, which would reduce the buildup of traffic in that
area by picking up north- and southbound vehicles and getting rid of the buildup outside Strasbourg. The planned route would link the Hœrdt interchange, to the north of Strasbourg, with Innenheim in the southwest. The opening is envisaged for the end of 2011, with an average usage of 41,000 vehicles a day. Estimates by the French Works Commissioner, however, raised some doubts about the value of such a project, since it would pick up only about 10% of the traffic of the A35 at Strasbourg. Paradoxically, this reversed the situation of the 1950s: at that time, the French trunk road on the left bank of the Rhine had not yet been built, so traffic would cross into Germany to use the Karlsruhe-Basel Autobahn.
Carnival (see other spellings and names) is a Christian festive season that occurs before the Christian season of Lent. The
main events typically occur during February or early March, during the period historically known as Shrovetide (or
Pre-Lent). Carnival typically involves a public celebration and/or parade combining some elements of a circus, masks
and public street party. People wear masks and costumes during many such celebrations, allowing them to lose their
everyday individuality and experience a heightened sense of social unity. Excessive consumption of alcohol, meat,
and other foods proscribed during Lent is extremely common. Other common features of carnival include mock battles
such as food fights; social satire and mockery of authorities; the grotesque body displaying exaggerated features
especially large noses, bellies, mouths, and phalli or elements of animal bodies; abusive language and degrading
acts; depictions of disease and gleeful death; and a general reversal of everyday rules and norms. The term Carnival
is traditionally used in areas with a large Catholic presence. However, the Philippines, a predominantly Roman Catholic
country, has not celebrated Carnival since the dissolution of the Manila Carnival after 1939, the last carnival held
in the country. In historically Lutheran countries, the celebration is known as Fastelavn, and in areas with a high
concentration of Anglicans and Methodists, pre-Lenten celebrations, along with penitential observances, occur on
Shrove Tuesday. In Eastern Orthodox nations, Maslenitsa is celebrated during the last week before Great Lent. In
German-speaking Europe and the Netherlands, the Carnival season traditionally opens on 11/11 (often at 11:11 a.m.).
This dates back to celebrations before the Advent season or to the harvest celebrations of St. Martin's Day. Traditionally
a carnival feast was the last opportunity to eat well before the time of food shortage at the end of the winter during
which one was limited to the minimum necessary. On what nowadays is called vastenavond (the days before fasting)
all the remaining winter stores of lard, butter and meat which were left would be eaten, for it would soon start
to rot and decay. The selected livestock had in fact already been slaughtered in November and the meat would be no
longer preservable. All the food that had survived the winter had to be eaten to assure that everyone was fed enough
to survive until the coming spring would provide new food sources. Several Germanic tribes celebrated the returning
of the daylight. During this jubilee, a predominant deity was driven around in a noisy procession on a ship on wheels.
The winter would be driven out, to make sure that fertility could return in spring. A central figure was possibly
the fertility goddess Nerthus. Also there are some indications that the effigy of Nerthus or Freyr was placed on
a ship with wheels and accompanied by a procession of people in animal disguise and men in women's clothes. Aboard
the ship, the marriage of a man and a woman would be consummated as a fertility ritual. Tacitus wrote in his Germania:
Germania 9.6: Ceterum nec cohibere parietibus deos neque in ullam humani oris speciem adsimulare ex magnitudine caelestium
arbitrantur – "The Germans, however, do not consider it consistent with the grandeur of celestial beings to confine
the gods within walls, or to liken them to the form of any human countenance." Germania 40: mox vehiculum et vestis
et, si credere velis, numen ipsum secreto lacu abluitur – "Afterwards the car, the vestments, and, if you like to
believe it, the divinity herself, are purified in a secret lake." Traditionally the feast also applied to sexual
desires, which were supposed to be suppressed during the following fasting. Before Lent began, all rich food and
drink were consumed in what became a giant celebration that involved the whole community, and is thought to be the
origin of Carnival. The Lenten period of the Liturgical calendar, the six weeks directly before Easter, was originally
marked by fasting and other pious or penitential practices. During Lent, no parties or celebrations were held, and
people refrained from eating rich foods, such as meat, dairy, fat and sugar. While Christian festivals such as Corpus Christi were church-sanctioned celebrations, Carnival was also a manifestation of European folk culture. In the Christian
tradition the fasting is to commemorate the 40 days that Jesus fasted in the desert according to the New Testament
and also to reflect on Christian values. As with many other Christian festivals such as Christmas which was originally
a pagan midwinter festival, the Christian church found it easier to turn the pagan Carnival into a Catholic tradition than to eliminate it. Unlike today, Carnival in the Middle Ages lasted not just a few days but covered almost the
entire period between Christmas and the beginning of Lent. In those two months, several Catholic holidays were seized
by the Catholic population as an outlet for their daily frustrations. In the year 743 the synod in Leptines (Leptines
is located near Binche in Belgium) spoke out furiously against the excesses in the month of February. Also from the
same period dates the phrase: "Whoever in February by a variety of less honorable acts tries to drive out winter
is not a Christian, but a pagan." Confession books from around 800 contain more information about how people would
dress as an animal or old woman during the festivities in January and February, even though this was a sin with no
small penance. In Spain, too, San Isidoro de Sevilla wrote a complaint in the seventh century that people came out into the streets disguised, in many cases as the opposite sex. While forming an integral part of the Christian
calendar, particularly in Catholic regions, many Carnival traditions resemble those antedating Christianity. Italian
Carnival is sometimes thought to be derived from the ancient Roman festivals of Saturnalia and Bacchanalia. The Saturnalia,
in turn, may be based on the Greek Dionysia and Oriental festivals. For the start of the Roman Saturnalia, on December
17 authorities chose an enemy of the Roman people to represent the Lord of Misrule in each community. These men and
women were forced to indulge in food and physical pleasures throughout the week and were then horribly murdered on December 25,
"destroying the forces of darkness". While medieval pageants and festivals such as Corpus Christi were church-sanctioned,
Carnival was also a manifestation of medieval folk culture. Many local Carnival customs are claimed to derive from
local pre-Christian rituals, such as elaborate rites involving masked figures in the Swabian–Alemannic Fastnacht.
However, evidence is insufficient to establish a direct origin from Saturnalia or other ancient festivals. No complete
accounts of Saturnalia survive and the shared features of feasting, role reversals, temporary social equality, masks
and permitted rule-breaking do not necessarily constitute a coherent festival or link these festivals. These similarities
may represent a reservoir of cultural resources that can embody multiple meanings and functions. For example, Easter
begins with the resurrection of Jesus, followed by a liminal period and ends with rebirth. Carnival reverses this
as King Carnival comes to life, a liminal period follows before his death. Both feasts are calculated by the lunar
calendar. Both Jesus and King Carnival may be seen as expiatory figures who make a gift to the people with their
deaths. In the case of Jesus, the gift is eternal life in heaven and in the case of King Carnival, the acknowledgement
that death is a necessary part of the cycle of life. Besides Christian anti-Judaism, the commonalities between church
and Carnival rituals and imagery suggest a common root. Christ's passion is itself grotesque: Since early Christianity
Christ is figured as the victim of summary judgement, is tortured and executed by Romans before a Jewish mob ("His
blood is on us and on our children!" Matthew 27:24–25). Holy Week processions in Spain include crowds who vociferously
insult the figure of Jesus. Irreverence, parody, degradation and laughter at a tragicomic effigy of God can be seen
as intensifications of the sacred order. In 1466, the Catholic Church under Pope Paul II revived customs of the
Saturnalia carnival: Jews were forced to race naked through the streets of the city of Rome. “Before they were to
run, the Jews were richly fed, so as to make the race more difficult for them and at the same time more amusing for
spectators. They ran… amid Rome’s taunting shrieks and peals of laughter, while the Holy Father stood upon a richly
ornamented balcony and laughed heartily”, an eyewitness reports. Some of the best-known traditions, including carnival
parades and masquerade balls, were first recorded in medieval Italy. The carnival of Venice was, for a long time,
the most famous carnival (although Napoleon abolished it in 1797 and only in 1979 was the tradition restored). From
Italy, Carnival traditions spread to Spain, Portugal and France and from France to New France in North America. From
Spain and Portugal it spread with colonization to the Caribbean and Latin America. In the early 19th century in the
German Rhineland and Southern Netherlands, the weakened medieval tradition also revived. Throughout the 18th and 19th centuries, as part of the annual Saturnalia abuse of the carnival in Rome, rabbis of the ghetto were forced to march through the city streets in foolish guise, jeered at and pelted with a variety of missiles by the crowd. A petition sent by the Jewish community of Rome in 1836 to Pope Gregory XVI to stop the annual antisemitic Saturnalia abuse was refused: “It is not opportune to make any innovation.” In Cape Verde, Carnival was introduced by Portuguese
settlers. It is celebrated on each of the archipelago's nine inhabited islands. In Mindelo, on São Vicente, groups challenge each other for a yearly prize; the town has imported various Brazilian carnival traditions. The celebration
in São Nicolau is more traditional: established groups parade through Ribeira Brava and gather in the town square, although it has adopted drums, floats and costumes from Brazil. In São Nicolau, three groups, Copa Cabana, Estrela Azul and Brilho Da Zona, each construct a painted float using fire, newspaper for the mold, and iron and steel for the structure. Carnival in São Nicolau is celebrated over three days: dawn Saturday, Sunday afternoon, and Tuesday.
In India, Carnival is celebrated only in the state of Goa, as a Roman Catholic tradition, where it is known as Intruz, meaning "swindler", while Entrudo is the proper Portuguese word for Carnival. The largest celebration takes place in the city of Panjim, which was part of the Velha Conquista of Goa, but it is now celebrated throughout the state. The tradition was introduced by the Portuguese, who ruled Goa for over four centuries. On the Tuesday preceding Ash Wednesday, the European tradition of Fat Tuesday is celebrated with the eating of crepes, also called "AleBelle". The crepes are filled with freshly grated coconut and coconut sap that heat condenses into a sweet brown molasses; further heating solidifies it into jaggery. The celebrations of Carnival peak for three days and nights preceding Ash Wednesday, when the legendary King Momo takes over the state. All-night
parades occur throughout the state with bands, dances and floats and grand balls are held in the evenings. The Carnival
of Malmedy is locally called Cwarmê. Although Malmedy is located in eastern Belgium, near the German-speaking area, the Cwarmê is a purely Walloon and Latin carnival. The celebration takes place during the four days before Shrove Tuesday. The Cwarmê Sunday is the most important and the most interesting to see, when all the old traditional costumes parade in the streets. The Cwarmê is a "street carnival", not only a parade: disguised people pass through the crowd and act out the part associated with the traditional costume they wear. The famous traditional costumes at the Cwarmê of Malmedy are the Haguète,
the Longuès-Brèsses and the Long-Né. Some Belgian cities hold Carnivals during Lent. One of the best-known is Stavelot,
where the Carnival de la Laetare takes place on Laetare Sunday, the fourth Sunday of Lent. The participants include
the Blancs-Moussis, who dress in white, carry long red noses and parade through town attacking bystanders with confetti
and dried pig bladders. The town of Halle also celebrates on Laetare Sunday. Belgium's oldest parade is the Carnival
Parade of Maaseik, also held on Laetare Sunday, which originated in 1865. Many towns in Croatia's Kvarner region
(and in other parts of the country) observe the Carnival period, incorporating local traditions and celebrating local
culture. Just before the end of Carnival, every Kvarner town burns a man-like doll called a "Jure Piškanac", who
is blamed for all the strife of the previous year. The Zvončari, or bell-ringers, wear bells and large head regalia
representing their areas of origin (for example, those from Halubje wear regalia in the shape of animal heads). The
traditional Carnival food is fritule, a pastry. This festival can also be called Poklade. In Cyprus, Carnival has been celebrated for centuries. The tradition was likely established under Venetian rule around the 16th century. It may have been
influenced by Greek traditions, such as festivities for deities such as Dionysus. The celebration originally involved
dressing in costumes and holding masked balls or visiting friends. In the twentieth century it became an organized
event held during the 10 days preceding Lent (according to the Greek Orthodox calendar). The festival is celebrated
almost exclusively in the city of Limassol. Three main parades take place during Carnival. The first is held on the
first day, during which the "Carnival King" (either a person in costume or an effigy) rides through the city on his
carriage. The second is held on the first Sunday of the festival and the participants are mainly children. The third
and largest takes place on the last day of Carnival and involves hundreds of people walking in costume along the
town's longest avenue. The latter two parades are open to anyone who wishes to participate. In Norway, students having
seen celebrations in Paris introduced Carnival processions, masked balls and Carnival balls to Christiania in the 1840s and 1850s. From 1863, the artist federation Kunstnerforeningen held annual Carnival balls in the old Freemasons lodge, which inspired Johan Svendsen's compositions "Norsk Kunstnerkarneval" and "Karneval in Paris". The following year, Svendsen's Festpolonaise was written for the opening procession. Edvard Grieg attended and wrote "Aus dem Karneval" (Folkelivsbilleder Op. 19). Since 1988, the student organization Tårnseilerne has produced annual masquerade balls
in Oslo, with masks, costumes and processions after attending an opera performance. The Carnival season also includes
Fastelavens søndag (with cream buns) and fastelavensris with decorated branches. The "Rheinische" Carnival is held
in the west of Germany, mainly in the states of North Rhine-Westphalia or Nordrhein-Westfalen, Rhineland Palatinate
or Rheinland-Pfalz, but also in Hessen [including Oberhessen], Bavaria and other states. Some cities are more famous
for celebrations such as parades and costume balls. Köln or Cologne Carnival, as well as Mainz and Düsseldorf are
the largest and most famous. Other cities have their own, often less well-known celebrations, parades and parties
such as Worms am Rhein, Speyer, Kaiserslautern, Frankfurt, Darmstadt, Mannheim, Ludwigshafen, Stuttgart, Augsburg, München [Munich] and Nürnberg. On Carnival Thursday (called "Old Women's Day" or "The Women's Day"), in commemoration
of an 1824 revolt by washer-women, women storm city halls, cut men's ties, and are allowed to kiss any passing man.
In Greece Carnival is also known as the Apokriés (Greek: Αποκριές, "saying goodbye to meat"), or the season of the
"Opening of the Triodion", so named after the liturgical book used by the church from then until Holy Week. One of
the season's high points is Tsiknopempti, when celebrants enjoy roast beef dinners; the ritual is repeated the following
Sunday. The following week, the last before Lent, is called Tyrinē (Greek: Τυρινή, "cheese [week]") because meat
is forbidden, although dairy products are not. Lent begins on "Clean Monday", the day after "Cheese Sunday". Throughout
the Carnival season, people disguise themselves as maskarádes ("masqueraders") and engage in pranks and revelry.
Other regions host smaller festivities focused on the reenactment of traditional Carnival customs, such as Tyrnavos (Thessaly), Kozani (West Macedonia), Rethymno (Crete) and Xanthi (East Macedonia and Thrace). Tyrnavos
holds an annual Phallus festival, a traditional "phallkloric" event in which giant, gaudily painted effigies of phalluses
made of papier maché are paraded, and which women are asked to touch or kiss. Their reward for so doing is a shot
of the famous local tsipouro alcohol spirit. Every year, from 1 to 8 January, mostly in regions of Western Macedonia,
Carnival fiestas and festivals erupt. The best known is the Kastorian Carnival or "Ragoutsaria" (Gr. "Ραγκουτσάρια"). It takes place from 6 to 8 January with
mass participation serenaded by brass bands, pipises, Macedonian and grand casa drums. It is an ancient celebration
of nature's rebirth (fiestas for Dionysus (Dionysia) and Kronos (Saturnalia)), which ends on the third day with a dance
in the medieval square Ntoltso, where the bands play at the same time. Carnival in the Netherlands is called Carnaval,
Vastenavond or Vastelaovend(j), and is most celebrated in traditionally Catholic regions, mainly the southern provinces
North Brabant and Limburg. Dutch Carnaval is officially celebrated on the Sunday through Tuesday preceding Ash Wednesday.
Although traditions vary from town to town, some common characteristics of Dutch Carnaval include a parade, a "prince"
plus cortège ("Jester/adjutant and Council of 11"), a Peasant Wedding (boerenbruiloft), and eating herring (haring
happen) on Ash Wednesday. The Strumica Carnival (Macedonian: Струмички Карневал, transliterated Strumichki Karneval)
has been held since at least 1670, when the Turkish author Evlija Chelebija wrote while staying there, "I came into
a town located in the foothills of a high hillock and what I saw that night was masked people running house–to–house,
with laughter, scream and song." The Carnival took an organized form in 1991; in 1994, Strumica became a member of
FECC and in 1998 hosted the XVIII International Congress of Carnival Cities. The Strumica Carnival opens on a Saturday
night at a masked ball where the Prince and Princess are chosen; the main Carnival night is on Tuesday, when masked
participants (including groups from abroad) compete in various categories. Since 2000, the Festival of Caricatures
and Aphorisms has been held as part of Strumica's Carnival celebrations. The Slovenian countryside displays a variety
of disguised groups and individual characters among which the most popular and characteristic is the Kurent (plural:
Kurenti), a monstrous and demon-like, but fluffy figure. The most significant festival is held in Ptuj (see: Kurentovanje).
Its special feature is the Kurents themselves, magical creatures from another world, who visit major events throughout
the country, trying to banish the winter and announce spring's arrival, fertility, and new life with noise and dancing.
The origin of the Kurent is a mystery, and not much is known of the times, beliefs, or purposes connected with its
first appearance. The origin of the name itself is obscure. In the Carnival of Cádiz, the most famous groups are the chirigotas, choirs and
comparsas. The chirigotas are well-known, witty, satirical popular groups who sing about politics, current events and household
topics, wearing matching costumes that they prepare over the whole year. The choirs (coros) are larger groups that
go on open carts through the streets singing with an orchestra of guitars and lutes. Their signature piece is the
"Carnival Tango", alternating comical and serious repertory. The comparsas are the serious counterpart of the chirigota
in Cádiz; poetic lyrics and criticism are their main ingredients. They have a more elaborate polyphony
that is easily recognizable by the typical countertenor voice. In Catalonia people dress in masks and costume (often
in themed groups) and organize a week-long series of parties, pranks, outlandish activities such as bed races, street
dramas satirizing public figures and raucous processions to welcome the arrival of Sa Majestat el Rei Carnestoltes
(His Majesty King Carnival), known by various titles, including el Rei dels poca-soltes (King of the Crackpots),
Princep etern de Cornudella (Eternal Prince of Cuckoldry), Duc de ximples i corrumputs (Duke of Fools and the Corrupt),
Marquès de la bona mamella (Marquis of the lovely breast), Comte de tots els barruts (Count of the Insolent), Baró
de les Calaverades (Baron of Nocturnal Debaucheries), and Senyor de l'alt Plàtan florit, dels barraquers i gamberrades
i artista d'honor dalt del llit (Lord of the Tall Banana in Bloom, of the Voyeurs and Punks and the Artist of Honor
upon the Bed). The King presides over a period of misrule in which conventional social rules may be broken and reckless
behavior is encouraged. Festivities are held in the open air, beginning with a cercavila, a ritual procession throughout
the town to call everyone to attend. Rues of masked revelers dance alongside. On Thursday, Dijous Gras (Fat Thursday)
is celebrated, also called 'omelette day' (el dia de la truita), when coques (de llardons, butifarra d'ou, butifarra)
and omelettes are eaten. The festivities end on Ash Wednesday with elaborate funeral rituals marking the death of
King Carnival, who is typically burned on a pyre in what is called the burial of the sardine (enterrament de la sardina),
or, in Vilanova, as l'enterro. The Carnival of Vilanova i la Geltrú has documented history from 1790 and is one of
the richest in the variety of its acts and rituals. It adopts an ancient style in which satire, the grotesque body
(particularly cross-dressing and displays of exaggerated bellies, noses and phalli) and above all, active participation
are valued over glamorous, media-friendly spectacles that Vilanovins mock as "thighs and feathers". It is best known
for Les Comparses (held on Sunday), a tumultuous dance in which 12,000 or more dancers organized into rival groups
throw 75 tons of hard candies at one another. The women protect their faces with Mantons de Manila (Manila shawls)
but eye-patches and slings for broken arms are common the following week. Vilanovins organize an elaborate ritual
for the arrival of King Carnival called l'Arrivo that changes every year. It includes a raucous procession of floats
and dancers lampooning current events or public figures and a bitingly satiric sermon (el sermo) delivered by the
King himself. On Dijous Gras, Vilanovin children are excused from school to participate in the Merengada, a day-long
scene of eating and fighting with sticky, sweet meringue. Adults have a meringue battle at midnight at the historic
Plaça de les Cols. In the mysterious sortida del Moixo Foguer (the outing of the Little-Bird-Bonfire), the figure is accompanied by
the Xerraire (jabberer), who insults the crowd. In the King's procession, he and his concubines scandalize the town
with their sexual behavior. A correfoc (fire run) or Devil's dance (Ball de diables) features dancing youth amid
the sparks and explosions of the ritual crew of devils. Other events include bed races in the streets, the debauched
Nit dels Mascarots, Karaoke sausage roasts, xatonades, the children's party, Vidalet, the last night of revelry,
Vidalot, the talking-dance of the Mismatched Couples (Ball de Malcasats) and the children's King Caramel whose massive
belly, long nose and sausage-like hair hint at his insatiable appetites. For the King's funeral, people dress in
elaborate mourning costume, many of them cross-dressing men who carry bouquets of phallic vegetables. In the funeral
house, the body of the King is surrounded by an honor guard and weeping concubines, crying over the loss of sexual
pleasure brought about by his death. The King's body is carried to the Plaça de la Vila where a satiric eulogy is
delivered while the townspeople eat salty grilled sardines with bread and wine, suggesting the symbolic cannibalism
of the communion ritual. Finally, amid rockets and explosions, the King's body is burned in a massive pyre. Carnaval
de Solsona takes place in Solsona, Lleida. It is one of the longest: free events in the streets and nightly concerts
run for more than a week. The Carnival is known for a legend that explains how a donkey was hung from the bell tower
because the animal wanted to eat grass that grew on top of the tower. To celebrate this legend, locals hang
a stuffed donkey at the tower that "pisses" above the excited crowd using a water pump. This event is the most important
and takes place on Saturday night. For this reason, the inhabitants are called "matarrucs" ("donkey killers"). Tarragona
has one of the region's most complete ritual sequences. The events start with the building of a huge barrel and ends
with its burning with the effigies of the King and Queen. On Saturday, the main parade takes place with masked groups,
zoomorphic figures, music and percussion bands, and groups with fireworks (the devils, the dragon, the ox, the female
dragon). Carnival groups stand out for their clothes full of elegance, showing brilliant examples of fabric crafts,
at the Saturday and Sunday parades. About 5,000 people are members of the parade groups. In Aruba, Carnival means weeks of
events that bring colourfully decorated floats, contagiously throbbing music, luxuriously costumed groups of celebrants
of all ages, King and Queen elections, electrifying jump-ups and torchlight parades, the Jouvert morning: the Children's
Parades and finally the Grand Parade. Aruba's biggest celebration is a month-long affair consisting of festive "jump-ups"
(street parades), spectacular parades and creative contests. Music and flamboyant costumes play a central role, from
the Queen elections to the Grand Parade. Street parades continue in various districts throughout the month, with
brass band, steel drum and roadmarch tunes. On the evening before Lent, Carnival ends with the symbolic burning of
King Momo. Carnival is known as Crop Over and is Barbados's biggest festival. Its early beginnings were on the sugar
cane plantations during the colonial period. Crop Over began in 1688, and featured singing, dancing and accompaniment
by shak-shak, banjo, triangle, fiddle, guitar, bottles filled with water and bones. Other traditions included climbing
a greased pole, feasting and drinking competitions. Originally signaling the end of the yearly cane harvest, it evolved
into a national festival. In the late 20th century, Crop Over began to closely mirror the Trinidad Carnival. Beginning
in June, Crop Over runs until the first Monday in August when it culminates in the finale, The Grand Kadooment. A
major feature is the calypso competition. Calypso music, originating in Trinidad, uses syncopated rhythm and topical
lyrics. It offers a medium in which to satirise local politics, amidst the general bacchanal. Calypso tents, also
originating in Trinidad, feature cadres of musicians who perform biting social commentaries, political exposés or
rousing exhortations to "wuk dah waistline" and "roll dat bumper". The groups compete for the Calypso Monarch Award,
while the air is redolent with the smells of Bajan cooking during the Bridgetown Market Street Fair. The Cohobblopot
Festival blends dance, drama and music with the crowning of the King and Queen of costume bands. Every evening the
"Pic-o-de-Crop" Show is performed after the King of Calypso is finally crowned. The climax of the festival is Kadooment
Day celebrated with a national holiday when costume bands fill the streets with pulsating Barbadian rhythms and fireworks.
Comparsas are held throughout the week, consisting of large groups "of dancers dancing and traveling on the streets,
followed by a Carrosa (carriage) where the musicians play. The Comparsa is a development of African processions where
groups of devotees follow a given saint or deity during a particular religious celebration". One of the most popular
comparsas of Fiesta de Carnaval is the male group comparsa, usually composed of notable men from the community who
dress up in outlandish costumes or cross-dress and dance to compete for money and prizes. Other popular activities
include body painting and flour fighting. "On the last day of Carnival painters flood the street to paint each other.
This simply means that a mixture of water paint and water or raw eggs is used to paint people on the streets, the
goal being to paint as many people as you can". Carnival in Haiti started in 1804 in the capital Port-au-Prince after
the declaration of independence. The Port-au-Prince Carnival is one of the largest in North America. It is known
as Kanaval in the Creole language. It starts in January, known as "Pre-Kanaval", while the main carnival activities
begin in February. In July 2012, Haiti had another carnival called Kanaval de Fleur. Beautiful costumes, floats,
Rara parades, masks, foods, and popular rasin music (like Boukman Eksperyans, Foula Vodoule, Tokay, Boukan Ginen,
Eritaj, etc.) and kompa bands (such as T-Vice, Djakout No. 1, Sweet Micky, Kreyòl La, D.P. Express, Mizik Mizik,
Ram, T-Micky, Carimi, Djakout Mizik, and Scorpio Fever) play for dancers in the streets of the plaza of Champ-de-Mars.
An annual song competition takes place. In Trinidad and Tobago, J'ouvert, or "Dirty Mas", takes place before dawn on the Monday (known as
Carnival Monday) before Ash Wednesday. It means "opening of the day". Revelers dress in costumes embodying puns
on current affairs, especially political and social events. "Clean Mud" (clay mud), oil paint and body paint are
familiar during J'ouvert. A common character is "Jab-jabs" (devils, blue, black or red) complete with pitchfork,
pointed horns and tails. A King and Queen of J'ouvert are chosen, based on their witty political/social messages.
Carnival Tuesday hosts the main events. Full costume is worn, complete with make-up and body paint/adornment. Usually
"Mas Boots" that complement the costumes are worn. Each band has their costume presentation based on a particular
theme, and contains various sections (some consisting of thousands of revelers) that reflect these themes. The street
parade and band costume competition take place. The mas bands eventually converge on the Queen's Park Savannah to
pass on "The Stage" for judging. The singer of the most played song is crowned Road March King or Queen earning prize
money and usually a vehicle. In Mexico, Carnival is celebrated in about 225 cities and towns. The largest are in Mazatlán
and the city of Veracruz, with others in Baja California and Yucatán. The larger city Carnivals employ costumes, elected
queens and parades with floats, but Carnival celebrations in smaller and rural areas vary widely depending on the
level of European influence during Mexico's colonial period. The largest of these is in Huejotzingo, Puebla where
most townspeople take part in mock combat with rifles shooting blanks, roughly based on the Battle of Puebla. Other
important states with local traditions include Morelos, Oaxaca, Tlaxcala and Chiapas. In the United States, Carnival celebrations, usually
referred to as Mardi Gras (French for "Fat Tuesday"), were first celebrated in the Gulf Coast area, but now occur in
many states. Customs originated in the onetime French colonial capitals of Mobile (now in Alabama), New Orleans (Louisiana)
and Biloxi (Mississippi), all of which have celebrated for many years with street parades and masked balls. Other
major American cities with celebrations include Washington, DC; St. Louis, Missouri; San Francisco; San Diego; Galveston,
Texas; and Miami, Pensacola, Tampa, and Orlando in Florida. Carnival is celebrated in New York City in Brooklyn.
As in the UK, the timing of Carnival split from the Christian calendar and is celebrated on Labor Day Monday, in
September. It is called the Labor Day Carnival, West Indian Day Parade or West Indian Day Carnival, and was founded
by immigrants from Trinidad. That country has one of the largest Caribbean Carnivals. In the mid twentieth century,
West Indians moved the event from the beginning of Lent to the Labor Day weekend. Carnival is one of the largest
parades and street festivals in New York, with over one million attending. The parade, which consists of steel bands,
floats, elaborate Carnival costumes and sound trucks, proceeds along Brooklyn's Eastern Parkway in the Crown Heights
neighborhood. In Argentina, the most representative Carnival performed is the so-called Murga, although other famous
Carnivals, more like Brazil's, are held in Argentine Mesopotamia and the North-East. Gualeguaychú in the east of
Entre Ríos province is the most important Carnival city and has one of the largest parades. It adopts a musical background
similar to Brazilian or Uruguayan Carnival. Corrientes is another city with a Carnival tradition. Chamame is a popular
musical style. In all major cities and many towns throughout the country, Carnival is celebrated. La Diablada Carnival
takes place in Oruro in central Bolivia. It is celebrated in honor of the miners' patron saint, Vírgen de Socavon
(the Virgin of the Tunnels). Over 50 parade groups dance, sing and play music over a five kilometre-long course.
Participants dress up as demons, devils, angels, Incas and Spanish conquistadors. Dances include caporales and tinkus.
The parade runs from morning until late at night, 18 hours a day, for the 3 days before Ash Wednesday. In 2001 UNESCO
declared it one of the "Masterpieces of the Oral and Intangible Heritage of Humanity". Throughout the country, celebrations
are held involving traditional rhythms and water parties. In Santa Cruz de la Sierra, on the east side of the country,
tropical weather allows a Brazilian-type Carnival, with Comparsas dancing traditional songs in matching uniforms.
Samba Schools are large social entities with thousands of members and a theme for their song and parade each year.
In Rio Carnival, samba schools parade in the Sambadrome ("sambódromo" in Portuguese). Some of the most famous include
GRES Estação Primeira de Mangueira, GRES Portela, GRES Imperatriz Leopoldinense, GRES Beija-Flor de Nilópolis, GRES
Mocidade Independente de Padre Miguel, and recently, Unidos da Tijuca and GRES União da Ilha do Governador. Local
tourists pay $500–950, depending on the costume, to buy a Samba costume and dance in the parade. Blocos are small
informal groups with a definite theme in their samba, usually satirizing the political situation. About 30 schools
in Rio gather hundreds of thousands of participants. More than 440 blocos operate in Rio. Bandas are samba musical
bands, also called "street carnival bands", usually formed within a single neighborhood or musical background. The
Carnival industry chain amassed almost US$1 billion in revenues in 2012. In Colombia, Carnival continued its evolution in
small, out-of-the-way towns beyond the view of the rulers. The result was the uninterrupted celebration of Carnival festivals
in Barranquilla (see Barranquilla's Carnival) now recognized as one of the Masterpieces of the Oral and Intangible
Heritage of Humanity. The Barranquilla Carnival includes several parades on Friday and Saturday nights beginning
on 11 January and ending with a six-day non-stop festival, beginning the Wednesday prior to Ash Wednesday and ending
Tuesday midnight. Other celebrations occur in villages along the lower Magdalena River in northern Colombia, and
in Pasto, Nariño (see Blacks and Whites' Carnival) in the south of the country. In the early 20th century, attempts
to introduce Carnival in Bogotá were rejected by the government. The Bogotá Carnival was renewed in the 21st century.
In Ecuador, the most famed Carnival festivities are in Guaranda (Bolivar province) and Ambato (Tungurahua province). In Ambato,
the festivities are called Fiesta de las Flores y las Frutas (Festival of the Flowers and Fruits). Other cities have
revived Carnival traditions with colorful parades, such as in Azogues (Cañar Province). In Azogues and the Southern
Andes in general, the Taita Carnival is always an indigenous Cañari. Recently, a celebration has gained prominence in
the northern part of the Sierra, in the Chota Valley in Imbabura, a zone with a strong Afro-Ecuadorian population,
where Carnival is celebrated with bomba del chota music. In French Guiana, a uniquely Creole tradition is the touloulous. These
women wear decorative gowns, gloves, masks and headdresses that cover them completely, making them unrecognisable,
even to the colour of their skin. On Friday and Saturday nights of Carnival, touloulou balls are held in so-called
universities; in reality, large dance halls that open only at Carnival time. Touloulous get in free, and are even
given condoms in the interest of the sexual health of the community. Men attend the balls, but they pay admittance
and are not disguised. The touloulous pick their dance partners, who may not refuse. The setup is designed to make
it easy for a woman to create a temporary liaison with a man in total anonymity. Undisguised women are not welcomed.
By tradition, if such a woman gets up to dance, the orchestra stops playing. Alcohol is served at bars – the disguised
women whisper to the men "touloulou thirsty", at which a round of drinks is expected, to be drunk through a straw
to protect their anonymity. Peruvian Carnival incorporates elements of violence and reflects the urban violence in Peruvian
society following the internal conflict in Peru. Traditionally, Peruvian Andean festivities were held on this period
every year because it is the rainy season. It was already violent during the 19th century, but the government limited
the practice. During the early 20th century it consisted of partying and parading, while in the second half of the 20th
century it acquired violent characteristics that continued. It was banned, first from the streets in 1958 and altogether
in 1959 by the Prado government. It originally consisted basically of traditional water battles, while in later years
it included playing with dirty water, mud, oil and colorants, and also fighting, sometimes looting private property,
and sexual assaults on women. It has become an excuse for criminal gangs to rob
people while pretending to celebrate. As of 2010, it had become so violent that the government imposed penalties
of up to eight years in prison for violence during the games (the games themselves are not forbidden, but using violence
during the games or coercing others to participate is). The Carnival in Uruguay covers more than 40 days, generally
beginning towards the end of January and running through mid-March. Celebrations in Montevideo are the largest. The
festival is performed in the European parade style with elements from Bantu and Angolan Benguela cultures imported
with slaves in colonial times. The main attractions of Uruguayan Carnival include two colorful parades called Desfile
de Carnaval (Carnival Parade) and Desfile de Llamadas (Calls Parade, a candombe-summoning parade). During the celebration,
theaters called tablados are built in many places throughout the cities, especially in Montevideo. Traditionally
formed by men and now starting to be open to women, the different Carnival groups (Murgas, Lubolos or Parodistas)
perform a kind of popular opera at the tablados, singing and dancing songs that generally relate to the social and
political situation. The 'Calls' groups, basically formed by drummers playing the tamboril, perform candombe rhythmic
figures. Revelers wear their festival clothing. Each group has its own theme. Women wearing elegant, bright dresses
are called vedettes and provide a sensual touch to parades.
Baptists comprise a group of Christian denominations and churches that subscribe to the doctrine that baptism
should be performed only for professing believers (believer's baptism, as opposed to infant baptism), and that it
must be done by complete immersion (as opposed to affusion or sprinkling). Other tenets of Baptist churches include
soul competency (liberty), salvation through faith alone, Scripture alone as the rule of faith and practice, and
the autonomy of the local congregation. Baptists recognize two ministerial offices, elders and deacons. Baptist churches
are widely considered to be Protestant churches, though some Baptists disavow this identity. Historians trace the
earliest church labeled "Baptist" back to 1609 in Amsterdam, with English Separatist John Smyth as its pastor. In
accordance with his reading of the New Testament, he rejected baptism of infants and instituted baptism only of believing
adults. Baptist practice spread to England, where the General Baptists considered Christ's atonement to extend to
all people, while the Particular Baptists believed that it extended only to the elect. In 1638, Roger Williams established
the first Baptist congregation in the North American colonies. In the mid-18th century, the First Great Awakening
increased Baptist growth in both New England and the South. The Second Great Awakening in the South in the early
19th century increased church membership, as did the preachers' lessening of support for abolition and manumission
of slavery, which had been part of the 18th-century teachings. Baptist missionaries have spread their church to every
continent. Baptist historian Bruce Gourley outlines four main views of Baptist origins: (1) The modern scholarly
consensus that the movement traces its origin to the 17th century via the English Separatists, (2) the view that
it was an outgrowth of Anabaptist traditions, (3) the perpetuity view which assumes that the Baptist faith and practice
has existed since the time of Christ, and (4) the successionist view, or "Baptist successionism", which argues that
Baptist churches actually existed in an unbroken chain since the time of Christ. Modern Baptist churches trace their
history to the English Separatist movement in the century after the rise of the original Protestant denominations.
This view of Baptist origins has the most historical support and is the most widely accepted. Adherents to this position
consider the influence of Anabaptists upon early Baptists to be minimal. It was a time of considerable political
and religious turmoil. Both individuals and churches were willing to give up their theological roots if they became
convinced that a more biblical "truth" had been discovered. During the Protestant Reformation, the Church
of England (Anglicans) separated from the Roman Catholic Church. There were some Christians who were not content
with the achievements of the mainstream Protestant Reformation. There also were Christians who were disappointed
that the Church of England had not made corrections of what some considered to be errors and abuses. Of those most
critical of the Church's direction, some chose to stay and try to make constructive changes from within the Anglican
Church. They became known as "Puritans" and are described by Gourley as cousins of the English Separatists. Others
decided they must leave the Church because of their dissatisfaction and became known as the Separatists. Historians
trace the earliest Baptist church back to 1609 in Amsterdam, with John Smyth as its pastor. Three years earlier,
while a Fellow of Christ's College, Cambridge, he had broken his ties with the Church of England. Reared in the Church
of England, he became "Puritan, English Separatist, and then a Baptist Separatist," and ended his days working with
the Mennonites. He began meeting in England with 60–70 English Separatists, in the face of "great danger." The persecution
of religious nonconformists in England led Smyth to go into exile in Amsterdam with fellow Separatists from the congregation
he had gathered in Lincolnshire, separate from the established church (Anglican). Smyth and his lay supporter, Thomas
Helwys, together with those they led, broke with the other English exiles because Smyth and Helwys were convinced
they should be baptized as believers. In 1609 Smyth first baptized himself and then baptized the others. In 1609,
while still there, Smyth wrote a tract titled "The Character of the Beast," or "The False Constitution of the Church."
In it he expressed two propositions: first, infants are not to be baptized; and second, "Antichristians converted
are to be admitted into the true Church by baptism." Hence, his conviction was that a scriptural church should consist
only of regenerate believers who have been baptized on a personal confession of faith. He rejected the Separatist
movement's doctrine of infant baptism (paedobaptism). Shortly thereafter, Smyth left the group, and layman Thomas
Helwys took over the leadership, leading the church back to England in 1611. Ultimately, Smyth became committed to
believers' baptism as the only biblical baptism. He was convinced on the basis of his interpretation of Scripture
that infants would not be damned should they die in infancy. Smyth, convinced that his self-baptism was invalid,
applied with the Mennonites for membership. He died while waiting for membership, and some of his followers became
Mennonites. Thomas Helwys and others kept their baptism and their Baptist commitments. The modern Baptist denomination
is an outgrowth of Smyth's movement. Baptists rejected the name Anabaptist when they were called that by opponents
in derision. McBeth writes that as late as the 18th century, many Baptists referred to themselves as "the Christians
commonly—though falsely—called Anabaptists." Another milestone in the early development of Baptist doctrine was in
1638 with John Spilsbury, a Calvinistic minister who helped to promote the strict practice of believer's baptism
by immersion. According to Tom Nettles, professor of historical theology at Southern Baptist Theological Seminary,
"Spilsbury's cogent arguments for a gathered, disciplined congregation of believers baptized by immersion as constituting
the New Testament church gave expression to and built on insights that had emerged within separatism, advanced in
the life of John Smyth and the suffering congregation of Thomas Helwys, and matured in Particular Baptists." A minority
view is that early seventeenth-century Baptists were influenced by (but not directly connected to) continental Anabaptists.
According to this view, the General Baptists shared similarities with Dutch Waterlander Mennonites (one of many Anabaptist
groups) including believer's baptism only, religious liberty, separation of church and state, and Arminian views
of salvation, predestination and original sin. Representative writers include A.C. Underwood and William R. Estep.
Gourley wrote that among some contemporary Baptist scholars who emphasize the faith of the community over soul liberty,
the Anabaptist influence theory is making a comeback. Both Roger Williams and John Clarke, his compatriot and coworker
for religious freedom, are variously credited as founding the earliest Baptist church in North America. In 1639,
Williams established a Baptist church in Providence, Rhode Island, and Clarke began a Baptist church in Newport,
Rhode Island. According to a Baptist historian who has researched the matter extensively, "There is much debate over
the centuries as to whether the Providence or Newport church deserved the place of 'first' Baptist congregation in
America. Exact records for both congregations are lacking." Baptist missionary work in Canada began in the British
colony of Nova Scotia (present day Nova Scotia and New Brunswick) in the 1760s. The first official record of a Baptist
church in Canada was that of the Horton Baptist Church (now Wolfville) in Wolfville, Nova Scotia on 29 October 1778.
The church was established with the assistance of the New Light evangelist Henry Alline. Many of Alline's followers,
after his death, would convert and strengthen the Baptist presence in the Atlantic region. Two major
groups of Baptists formed the basis of the churches in the Maritimes. These were referred to as Regular Baptist (Calvinistic
in their doctrine) and Free Will Baptists. In May 1845, the Baptist congregations in the United States split over
slavery and missions. The Home Mission Society prevented slaveholders from being appointed as missionaries. The split
created the Southern Baptist Convention, while the northern congregations formed their own umbrella organization
now called the American Baptist Churches USA (ABC-USA). The Methodist Episcopal Church, South had recently separated
over the issue of slavery, and southern Presbyterians would do so shortly thereafter. Many Baptist churches choose
to affiliate with organizational groups that provide fellowship without control. The largest such group is the Southern
Baptist Convention. There also are a substantial number of smaller cooperative groups. Finally, there are Baptist
churches that choose to remain autonomous and independent of any denomination, organization, or association. It has
been suggested that a primary Baptist principle is that local Baptist Churches are independent and self-governing,
and if so the term 'Baptist denomination' may be considered somewhat incongruous. Baptists, like other Christians,
are defined by doctrine—some of it common to all orthodox and evangelical groups and a portion of it distinctive
to Baptists. Through the years, different Baptist groups have issued confessions of faith—without considering them
to be creeds—to express their particular doctrinal distinctions in comparison to other Christians as well as in comparison
to other Baptists. Most Baptists are evangelical in doctrine, but Baptist beliefs can vary due to the congregational
governance system that gives autonomy to individual local Baptist churches. Historically, Baptists have played a
key role in encouraging religious freedom and separation of church and state. Shared doctrines would include beliefs
about one God; the virgin birth; miracles; atonement for sins through the death, burial, and bodily resurrection
of Jesus; the Trinity; the need for salvation (through belief in Jesus Christ as the son of God, his death and resurrection,
and confession of Christ as Lord); grace; the Kingdom of God; last things (eschatology) (Jesus Christ will return
personally and visibly in glory to the earth, the dead will be raised, and Christ will judge everyone in righteousness);
and evangelism and missions. Some historically significant Baptist doctrinal documents include the 1689 London Baptist
Confession of Faith, 1742 Philadelphia Baptist Confession, the 1833 New Hampshire Baptist Confession of Faith, the
Southern Baptist Convention's Baptist Faith and Message, and written church covenants which some individual Baptist
churches adopt as a statement of their faith and beliefs. Baptists have faced many controversies in their 400-year
history, controversies that rose to the level of crises. Baptist historian Walter Shurden says the word "crisis" comes from
the Greek word meaning "to decide." Shurden writes that contrary to the presumed negative view of crises, some controversies
that reach a crisis level may actually be "positive and highly productive." He claims that even schism, though never
ideal, has often produced positive results. In his opinion crises among Baptists each have become decision-moments
that shaped their future. Some controversies that have shaped Baptists include the "missions crisis", the "slavery
crisis", the "landmark crisis", and the "modernist crisis". Leading up to the American Civil War, Baptists became
embroiled in the controversy over slavery in the United States. Whereas in the First Great Awakening, Methodist and
Baptist preachers had opposed slavery and urged manumission, over the decades they made more of an accommodation
with the institution. They worked with slaveholders in the South to urge a paternalistic institution. Both denominations
made direct appeals to slaves and free blacks for conversion. The Baptists particularly allowed them active roles
in congregations. By the mid-19th century, northern Baptists tended to oppose slavery. As tensions increased, in
1844 the Home Mission Society refused to appoint a slaveholder as a missionary who had been proposed by Georgia.
It noted that missionaries could not take servants with them, and also that the Board did not want to appear to condone
slavery. As early as the late 18th century, black Baptists began to organize separate churches, associations and
mission agencies, especially in the northern states. Not only did blacks set up some independent congregations in
the South before the American Civil War, freedmen quickly separated from white congregations and associations after
the war. They wanted to be free of white supervision. In 1866 the Consolidated American Baptist Convention, formed
from black Baptists of the South and West, helped southern associations set up black state conventions, which they
did in Alabama, Arkansas, Virginia, North Carolina, and Kentucky. In 1880 black state conventions united in the national
Foreign Mission Convention, to support black Baptist missionary work. Two other national black conventions were formed,
and in 1895 they united as the National Baptist Convention. This organization later went through its own changes,
spinning off other conventions. It is the largest black religious organization and the second largest Baptist organization
in the world. Baptists are numerically most dominant in the Southeast. In 2007, the Pew Research Center's Religious
Landscape Survey found that 45% of all African-Americans identify with Baptist denominations, with the vast majority
of those being within the historically black tradition. Elsewhere in the Americas, in the Caribbean in particular,
Baptist missionaries took an active role in the anti-slavery movement. In Jamaica, for example, William Knibb, a
prominent British Baptist missionary, worked toward the emancipation of slaves in the British West Indies (which
took place in 1838). Knibb was also instrumental in the creation of "Free Villages": rural communities centred around a
Baptist church where emancipated slaves could farm their own land. Baptists were likewise active in promoting the
education of former slaves; for example, Jamaica's Calabar High School, named after the slave port of Calabar, was
formed by Baptist missionaries. At the same time, during and after slavery, slaves and free blacks formed their own Spiritual
Baptist movements - breakaway spiritual movements which often expressed resistance to oppression. On 20 June 1995,
the Southern Baptist Convention voted to adopt a resolution renouncing its racist roots and apologizing for its past
defense of slavery. More than 20,000 Southern Baptists registered for the meeting in Atlanta. The resolution declared
that messengers, as SBC delegates are called, "unwaveringly denounce racism, in all its forms, as deplorable sin"
and "lament and repudiate historic acts of evil such as slavery from which we continue to reap a bitter harvest."
It offered an apology to all African-Americans for "condoning and/or perpetuating individual and systemic racism
in our lifetime" and repentance for "racism of which we have been guilty, whether consciously or unconsciously."
Although Southern Baptists have condemned racism in the past, this was the first time the predominantly white convention
had dealt specifically with the issue of slavery. Southern Baptist Landmarkism sought to reset the ecclesiastical
separation which had characterized the old Baptist churches, in an era when inter-denominational union meetings were
the order of the day. James Robinson Graves was an influential Baptist of the 19th century and the primary leader
of this movement. While some Landmarkers eventually separated from the Southern Baptist Convention, the influence
of the movement on the Convention continued into the 20th century. Its influence continues to affect convention policies.
In 2005, the Southern Baptist International Mission Board forbade its missionaries to receive alien immersions for
baptism. Following similar conflicts over modernism, the Southern Baptist Convention adhered to conservative theology
as its official position. Two new Baptist groups were formed by moderate Southern Baptists who disagreed with the
direction in which the Southern Baptist Convention was heading: the Alliance of Baptists in 1987 and the Cooperative
Baptist Fellowship in 1991. Members of both groups originally identified as Southern Baptist, but over time the groups
"became permanent new families of Baptists."
Child labour refers to the employment of children in any work that deprives children of their childhood, interferes with
their ability to attend regular school, and that is mentally, physically, socially or morally dangerous and harmful.
This practice is considered exploitative by many international organisations. Legislation across the world prohibits
child labour. These laws do not consider all work by children as child labour; exceptions include work by child artists,
family duties, supervised training, certain categories of work such as those by Amish children, some forms of child
work common among indigenous American children, and others. In developing countries, with high poverty and poor schooling
opportunities, child labour is still prevalent. In 2010, sub-Saharan Africa had the highest incidence rates of child
labour, with several African nations witnessing over 50 percent of children aged 5–14 working. Worldwide agriculture
is the largest employer of child labour. The vast majority of child labour is found in rural settings and the informal urban
economy; children are predominantly employed by their parents, rather than by factories. Poverty and lack of schools
are considered the primary causes of child labour. The work of children was important in pre-industrial societies,
as children needed to provide their labour for their survival and that of their group. Pre-industrial societies were
characterised by low productivity and short life expectancy; preventing children from participating in productive
work would have been more harmful to their welfare and that of their group in the long run. In pre-industrial societies,
there was little need for children to attend school. This is especially the case in non-literate societies. Most
pre-industrial skill and knowledge were amenable to being passed down through direct mentoring or apprenticing by
competent adults. The Victorian era in particular became notorious for the conditions under which children were employed.
Children as young as four were employed in production factories and mines working long hours in dangerous, often
fatal, working conditions. In coal mines, children would crawl through tunnels too narrow and low for adults. Children
also worked as errand boys, crossing sweepers, shoe blacks, or selling matches, flowers and other cheap goods. Some
children undertook work as apprentices to respectable trades, such as building or as domestic servants (there were
over 120,000 domestic servants in London in the mid-18th century). Working hours were long: builders worked 64 hours
a week in summer and 52 in winter, while domestic servants worked 80 hour weeks. Child labour played an important
role in the Industrial Revolution from its outset, often brought about by economic hardship. The children of the
poor were expected to contribute to their family income. In 19th-century Great Britain, one-third of poor families
were without a breadwinner, as a result of death or abandonment, obliging many children to work from a young age.
In England and Scotland in 1788, two-thirds of the workers in 143 water-powered cotton mills were described as children.
A high number of children also worked as prostitutes. The author Charles Dickens worked at the age of 12 in a blacking
factory, with his family in debtor's prison. Throughout the second half of the 19th century, child labour began to
decline in industrialised societies due to regulation and economic factors. The regulation of child labour began
from the earliest days of the Industrial Revolution. The Factory Acts of 1802 and 1819 were the first to regulate
child labour in Britain, limiting the working hours of workhouse children in
factories and cotton mills to 12 hours per day. These acts were largely ineffective, and after radical agitation,
by for example the "Short Time Committees" in 1831, a Royal Commission recommended in 1833 that children aged 11–18
should work a maximum of 12 hours per day, children aged 9–11 a maximum of eight hours, and children under the age
of nine were no longer permitted to work. This act however only applied to the textile industry, and further agitation
led to another act in 1847 limiting both adults and children to 10-hour working days. Lord Shaftesbury was an outspoken
advocate of regulating child labour. In the early 20th century, thousands of boys were employed in glass making industries.
Glass making was a dangerous and tough job especially without the current technologies. The process of making glass
includes intense heat to melt glass (3133 °F). When the boys were at work, they were exposed to this heat. This could
cause eye trouble, lung ailments, heat exhaustion, cuts, and burns. Since workers were paid by the piece, they had
to work productively for hours without a break. Since furnaces had to be constantly burning, there were night shifts
from 5:00 pm to 3:00 am. Many factory owners preferred boys under 16 years of age. In 1910, over 2 million children
in the same age group were employed in the United States. This included children who rolled cigarettes, engaged in
factory work, worked as bobbin doffers in textile mills, worked in coal mines and were employed in canneries. Lewis
Hine's photographs of child labourers in the 1910s powerfully evoked the plight of working children in the American
South. Hine took these photographs between 1908 and 1917 as the staff photographer for the National Child Labor
Committee. Factories and mines were not the only places where child labour was prevalent in the early 20th century.
Home-based manufacturing across the United States and Europe employed children as well. Governments and reformers
argued that labour in factories must be regulated and that the state had an obligation to provide welfare for the poor. Legislation
that followed had the effect of moving work out of factories into urban homes. Families and women, in particular,
preferred it because it allowed them to generate income while taking care of household duties. Home-based manufacturing
operations were active year round. Families willingly deployed their children in these income generating home enterprises.
In many cases, men worked from home. In France, over 58 percent of garment workers operated out of their homes; in
Germany, the number of full-time home operations nearly doubled between 1882 and 1907; and in the United States,
millions of families operated out of home seven days a week, year round to produce garments, shoes, artificial flowers,
feathers, match boxes, toys, umbrellas and other products. Children aged 5–14 worked alongside the parents. Home-based
operations and child labour in Australia, Britain, Austria and other parts of the world was common. Rural areas similarly
saw families deploying their children in agriculture. In 1946, Frieda Miller - then Director of the Women's Bureau of
the United States Department of Labor - told the International Labour Organisation that these home-based operations offered "low wages, long
hours, child labour, unhealthy and insanitary working conditions." Child labour is still common in many parts of
the world. Estimates for child labour vary, ranging between 250 and 304 million if children aged 5–17 involved
in any economic activity are counted. If light occasional work is excluded, the ILO estimates there were 153 million
child labourers aged 5–14 worldwide in 2008. This is about 20 million less than the ILO estimate for child labourers
in 2004. Some 60 percent of the child labour was involved in agricultural activities such as farming, dairy, fisheries
and forestry. Another 25 percent of child labourers were in service activities such as retail, hawking goods, restaurants,
load and transfer of goods, storage, picking and recycling trash, polishing shoes, domestic help, and other services.
The remaining 15 percent laboured in assembly and manufacturing in informal economy, home-based enterprises, factories,
mines, packaging salt, operating machinery, and such operations. Two out of three child workers work alongside their
parents, in unpaid family work situations. Some children work as guides for tourists, sometimes combined with bringing
in business for shops and restaurants. Child labour predominantly occurs in the rural areas (70%) and informal urban
sector (26%). Child labour accounts for 22% of the workforce in Asia, 32% in Africa, 17% in Latin America, 1% in
the US, Canada, Europe and other wealthy nations. The proportion of child labourers varies greatly among countries
and even regions inside those countries. Africa has the highest percentage of children aged 5–17 employed as child
labour, and a total of over 65 million. Asia, with its larger population, has the largest number of children employed
as child labour at about 114 million. The Latin America and Caribbean region has lower overall population density, but
with 14 million child labourers it has high incidence rates too. Accurate present-day child labour information is difficult
to obtain because of disagreements between data sources as to what constitutes child labour. In some countries, government
policy contributes to this difficulty. For example, the overall extent of child labour in China is unclear due to
the government categorizing child labour data as “highly secret”. China has enacted regulations to prevent child
labour; still, the practice of child labour is reported to be a persistent problem within China, generally in agriculture
and low-skill service sectors as well as small workshops and manufacturing enterprises. In 2014, the U.S. Department
of Labor issued a List of Goods Produced by Child Labor or Forced Labor where China was attributed 12 goods the majority
of which were produced by both underage children and indentured labourers. The report listed electronics, garments,
toys and coal among other goods. Maplecroft Child Labour Index 2012 survey reports 76 countries pose extreme child
labour complicity risks for companies operating worldwide. The ten highest risk countries in 2012, ranked in decreasing
order, were: Myanmar, North Korea, Somalia, Sudan, DR Congo, Zimbabwe, Afghanistan, Burundi, Pakistan and Ethiopia.
Of the major growth economies, Maplecroft ranked the Philippines 25th riskiest, India 27th, China 36th, Viet Nam 37th,
Indonesia 46th, and Brazil 54th - all of them rated as posing extreme child labour risks to corporations
seeking to invest in the developing world and import products from emerging markets. Lack of meaningful alternatives,
such as affordable schools and quality education, according to ILO, is another major factor driving children to harmful
labour. Children work because they have nothing better to do. Many communities, particularly rural areas where between
60–70% of child labour is prevalent, do not possess adequate school facilities. Even when schools are available,
they are often too far away, difficult to reach, or unaffordable, or the quality of education is so poor that parents wonder
if going to school is really worth it. In European history when child labour was common, as well as in contemporary
child labour of modern world, certain cultural beliefs have rationalised child labour and thereby encouraged it.
Some view work as good for the character-building and skill development of children. In many cultures, particularly
where the informal economy and small household businesses thrive, the cultural tradition is that children follow
in their parents' footsteps; child labour then is a means to learn and practise that trade from a very early age.
Similarly, in many cultures the education of girls is less valued, or girls are simply not expected to need formal
schooling, and these girls are pushed into child labour such as providing domestic services. Biggeri and Mehrotra have
studied the macroeconomic factors that encourage child labour. They focus their study on five Asian nations:
India, Pakistan, Indonesia, Thailand and the Philippines. They suggest that child labour is a serious problem in all
five, but it is not a new problem. Macroeconomic causes encouraged widespread child labour across the world, over
most of human history. They suggest that the causes for child labour include both the demand and the supply side.
While poverty and unavailability of good schools explain the child labour supply side, they suggest that the growth
of low-paying informal economy rather than higher paying formal economy is amongst the causes of the demand side.
Other scholars too suggest that an inflexible labour market, the size of the informal economy, the inability of industries to scale
up, and the lack of modern manufacturing technologies are major macroeconomic factors affecting demand and acceptability
of child labour. Systematic use of child labour was commonplace in the colonies of European powers between 1650
and 1950. In Africa, colonial administrators encouraged traditional kin-ordered modes of production, that is, hiring
a household for work not just the adults. Millions of children worked in colonial agricultural plantations, mines
and domestic service industries. Sophisticated schemes were promulgated where children in these colonies between
the ages of 5–14 were hired as apprentices without pay in exchange for learning a craft. A system of Pauper Apprenticeship
came into practice in the 19th century where the colonial master needed neither the native parents' nor the child's approval
to assign a child to labour, away from parents, at a distant farm owned by a different colonial master. Other schemes
included 'earn-and-learn' programs where children would work and thereby learn. Britain for example passed a law,
the so-called Masters and Servants Act of 1899, followed by Tax and Pass Law, to encourage child labour in colonies
particularly in Africa. These laws offered the native people legal ownership of some of the native land in exchange
for making the labour of wives and children available for the colonial government's needs, such as in farms and as picannins.
In southeast Asian colonies, such as Hong Kong, child labour such as the Mui Tsai (妹仔), was rationalised as a cultural
tradition and ignored by British authorities. The Dutch East India Company officials rationalised their child labour
abuses with, "it is a way to save these children from a worse fate." Christian mission schools in regions stretching
from Zambia to Nigeria too required work from children, and in exchange provided religious education, not secular
education. Elsewhere, the Canadian Dominion Statutes, in the form of the so-called Breaches of Contract Act, stipulated jail
terms for uncooperative child workers. Children working at a young age has been a consistent theme throughout Africa.
Many children began first working in the home to help their parents run the family farm. Children in Africa today
are often forced into exploitative labour due to family debt and other financial factors, leading to ongoing poverty.
Other types of domestic child labour include working in commercial plantations, begging, and other sales such as
boot shining. In total, an estimated five million children are currently working in the field of agriculture,
a number which steadily increases during the time of harvest. Along with the 30 percent of children who pick coffee, there
are an estimated 25,000 school-age children who work year round. What industries children work in depends on whether they
grew up in a rural area or an urban area. Children born in urban areas often find themselves working for
street vendors, washing cars, helping on construction sites, weaving clothing, and sometimes even working as exotic
dancers, while children who grew up in rural areas work on farms doing physical labour, working with animals,
and selling crops. Of all the child workers, the most serious cases involve street children and trafficked children
due to the physical and emotional abuse they endured from their employers. To address the issue of child labour, the
United Nations Declaration of the Rights of the Child was adopted in 1959. Yet due to poverty, lack of education
and ignorance, such legal actions have not been wholly enforced or accepted in Africa. Other legal measures that
have been implemented to end and reduce child labour include the global response that came into force in 1979 with
the declaration of the International Year of the Child. Along with the Human Rights Committee of the United Nations,
these two declarations worked on many levels to eliminate child labour. Although many actions have been taken to
end this epidemic, child labour in Africa is still an issue today due to the unclear definition of adolescence and
how much time is needed for children to engage in activities that are crucial for their development. Another issue
that often comes into play is the question of what constitutes child labour within the household, due to the cultural
acceptance of children helping run the family business. In the end, there is a consistent challenge for the national
government to strengthen its grip politically on child labour, and to increase education and awareness on the issue
of children working below the legal age limit. With children playing an important role in the African economy, child
labour still plays an important role for many in the 21st century. From European settlement in 1788, child convicts
were occasionally sent to Australia where they were made to work. Child labour was not as excessive in Australia
as in Britain. With a low population, agricultural productivity was higher and families did not face starvation as
in established industrialised countries. Australia also did not have significant industry until the later part of
the 20th century when child labour laws and compulsory schooling had developed under the influence of Britain. From
the 1870s, child labour was restricted by compulsory schooling. Child labour has been a consistent struggle for children
in Brazil ever since the country was colonized on April 22, 1500 by Pedro Álvares Cabral. Work that many children
took part in was not always visible, legal, or paid. Free or slave labour was a common occurrence for many youths
and was a part of their everyday lives as they grew into adulthood. Yet because there was no clear definition of
how to classify a child or youth, there has been little historical documentation of child labour during the
colonial period. Due to this lack of documentation, it is hard to determine just how many children were used for
what kinds of work before the nineteenth century. The first documentation of child labour in Brazil occurred during
the time of indigenous societies and slave labour where it was found that children were forcibly working on tasks
that exceeded their emotional and physical limits. Armando Dias, for example, died in November 1913 whilst still
very young, a victim of an electric shock when entering the textile industry where he worked. Boys and girls were
victims of industrial accidents on a daily basis. In Brazil, the minimum working age has been identified as fourteen
due to continuous constitutional amendments that occurred in 1934, 1937, and 1946. Yet due to changes under the military
dictatorship in the 1980s, the minimum age restriction was reduced to twelve, but it was reviewed in 1988 following
reports of dangerous and hazardous working conditions. This led to the minimum age being raised once again
to 14. Another set of restrictions, passed in 1998, limited the kinds of work youth could partake in, such
as work that was considered hazardous like running construction equipment, or certain kinds of factory work. Although
many steps were taken to reduce the risk and occurrence of child labour, there is still a high number of children
and adolescents working under the age of fourteen in Brazil. It was not until the 1980s that it was discovered
that almost nine million children in Brazil were working illegally and not partaking in traditional childhood activities
that help to develop important life experiences. Brazilian census data (PNAD, 1999) indicate that 2.55 million 10-14
year-olds were illegally holding jobs. They were joined by 3.7 million 15-17 year-olds and about 375,000 5-9 year-olds.
Due to the raised age restriction of 14, at least half of the recorded young workers had been employed illegally,
which led to many not being protected by important labour laws. Although substantial time has passed since the time
of regulated child labour, there is still a large number of children working illegally in Brazil. Many children are
used by drug cartels to sell and carry drugs, guns, and other illegal substances because of their perceived innocence.
This type of work that youth are taking part in is very dangerous due to the physical and psychological implications
that come with these jobs. Yet despite the hazards that come with working with drug dealers, there has been an increase
in this area of employment throughout the country. Many factors played a role in Britain’s long-term economic growth,
such as the industrial revolution in the late 1700s and the prominent presence of child labour during the industrial
age. Children who worked at an early age were often not forced, but did so because they needed to help their family
survive financially. Due to poor employment opportunities for many parents, sending their children to work on farms
and in factories was a way to help feed and support the family. Child labour first started to occur in England when
household businesses were turned into local labour markets that mass-produced once-homemade goods. Because children
often helped produce the goods out of their homes, working in a factory to make those same goods was a simple change
for many of these youths. Although there are many counts of children under the age of ten working for factories,
the majority of child workers were between the ages of ten and fourteen. This age range was an important time
for many youths as they were first helping to provide for their families, while also beginning to save for their
own future families. Besides the obligation many children felt to help support their families financially, another
factor that influenced child labour was the demographic change that occurred in the eighteenth century. By the end
of the eighteenth century, 20 percent of the population was made up of children between the ages of 5 and 14. Due
to this substantial shift in available workers, and the development of the industrial revolution, children began
to work earlier in life in companies outside of the home. Yet even though there was an increase of child labour
in factories such as cotton textiles, there were consistently large numbers of children working in agriculture
and domestic production. With such a high percentage of children working, rising illiteracy and the lack
of formal education became widespread issues for many children who worked to provide for their families. Due to
this problematic trend, many parents developed a change of opinion when deciding whether or not to send their children
to work. Other factors that led to the decline of child labour included financial changes in the economy, changes
in the development of technology, raised wages, and continuous regulation through factory legislation. The first legal
steps taken to end the occurrence of child labour were enacted more than fifty years ago. In 1966, the UN General
Assembly adopted the International Covenant on Economic, Social and Cultural Rights. This covenant legally limited
the minimum age at which children could start work to 14. Twenty-three years later, in 1989, the Convention on
the Rights of the Child was adopted, helping to reduce the exploitation of children and demanding safe working environments.
They all worked towards the goal of ending the most problematic forms of child labour. On 23 June 1757, the English
East India Company defeated Siraj-ud-Daula, the Nawab of Bengal, in the Battle of Plassey. The British thus became
masters of east India (Bengal, Bihar, Orissa) – a prosperous region with a flourishing agriculture, industry and
trade. This led to a large number of children being forced into labour due to the increasing need for cheap labour
to produce large numbers of goods. Many multinationals often employed children because they could be recruited
for less pay and had more endurance to utilise in factory environments. Another reason many Indian children were
hired was that they lacked knowledge of their basic rights, did not cause trouble or complain, and were
often seen as more trustworthy. The innocence that comes with childhood was utilised by many to make a profit and was encouraged
by the need for family income. A variety of Indian social scientists as well as the Non-Governmental Organization
(NGOs) have done extensive research on the numeric figures of child labour found in India and determined that India
contributes to one-third of Asia’s child labour and one-fourth of the world's child labour. Due to a large number
of children being illegally employed, the Indian government began to take extensive actions to reduce the number
of children working, and to focus on the importance of facilitating the proper growth and development of children.
International influences helped to encourage legal action in India, such as the Geneva Declaration of
the Rights of the Child, adopted in 1924. This declaration was followed by the Universal Declaration of Human Rights
in 1948, which incorporated the basic human rights and needs of children for proper progression and growth in their
younger years. These international acts encouraged major changes to the workforce in India, which occurred in 1986
when the Child Labour (Prohibition and Regulation) Act was put into place. This act prohibited the hiring of children
younger than 14 and barred children from working in hazardous conditions. From the 1950s on, students were also used for
unpaid work at schools, where they cleaned and performed repairs. This practice has continued in the Russian Federation,
where up to 21 days of the summer holidays are sometimes set aside for school work. By law, this is only allowed
as part of specialized occupational training and with the students' and parents' permission, but those provisions
are widely ignored. In 2012 there was an accident near the city of Nalchik in which a car killed several pupils who
were cleaning up a highway shoulder during their "holiday work", as well as the teacher who was supervising them. Of
the former Soviet republics, Uzbekistan continued and expanded the programme of child labour on an industrial scale
to increase profits from cotton harvesting, the main source of Islam Karimov's income. In September, when school
normally starts, classes are suspended and children are sent to the cotton fields for work, where they are assigned
daily quotas of 20 to 60 kg of raw cotton to collect. This process is repeated in spring, when the cotton fields need
to be hoed and weeded. In 2006 it was estimated that 2.7 million children were forced to work this way. As in many other
countries, child labour in Switzerland affected the so-called Kaminfegerkinder ("chimney sweep children") and
children working, for example, in spinning mills, factories and agriculture in 19th-century Switzerland. Until the
1960s there were also the so-called Verdingkinder (literally: "contract children" or "indentured child labourers"):
children who were taken from their parents, often due to poverty or for moral reasons – usually because their mothers
were unmarried, very poor citizens, or of Gypsy–Yeniche origin, the so-called Kinder der Landstrasse, etc. – and sent
to live with new families, often poor farmers who needed cheap labour. There were even Verdingkinder auctions at which
children were handed over to the farmer asking the least amount of money from the authorities, thus securing cheap
labour for his farm and relieving the authority of the financial burden of looking after the children. In the 1930s,
20% of all agricultural labourers in the Canton of Bern were children below the age of 15. Swiss municipal guardianship
authorities acted in this way until the 1960s, commonly tolerated by the federal authorities; not all of them did so,
but the practice was usual in low-tax communities in some Swiss cantons. The Swiss historian Marco Leuenberger found
that in 1930 there were some 35,000 indentured children, and that between 1920 and 1970 more than 100,000 are believed
to have been placed with families or homes. Some 10,000 Verdingkinder are still alive. The so-called
Wiedergutmachungsinitiative ("reparation initiative") was therefore started in April 2014, when the collection of at
least 100,000 authenticated signatures of Swiss citizens began; signatures could still be collected until
October 2015. In 1999, the ILO helped lead the Worst Forms Convention 182 (C182), which
has so far been ratified by 151 countries, including the United States. This international
law prohibits worst forms of child labour, defined as all forms of slavery and slavery-like practices, such as child
trafficking, debt bondage, and forced labour, including forced recruitment of children into armed conflict. The law
also prohibits the use of a child for prostitution or the production of pornography, child labour in illicit activities
such as drug production and trafficking; and in hazardous work. Both the Worst Forms Convention (C182) and the Minimum
Age Convention (C138) are examples of international labour standards implemented through the ILO that deal with child
labour. In addition to setting international law, the United Nations initiated the International Programme on the
Elimination of Child Labour (IPEC) in 1992. This initiative aims to progressively eliminate child labour by
strengthening national capacities to address some of its causes. Among its key initiatives is the so-called time-bound
programme, which targets countries where child labour is most prevalent and schooling opportunities are lacking.
The initiative seeks
to achieve, amongst other things, universal primary school availability. The IPEC has expanded to at least the following
target countries: Bangladesh, Brazil, China, Egypt, India, Indonesia, Mexico, Nigeria, Pakistan, Democratic Republic
of Congo, El Salvador, Nepal, Tanzania, Dominican Republic, Costa Rica, Philippines, Senegal, South Africa and Turkey.
In 2004, the United States passed an amendment to the Fair Labor Standards Act of 1938. The amendment allows certain
children aged 14–18 to work in or outside a business where machinery is used to process wood. The law aims to respect
the religious and cultural needs of the Amish community of the United States. The Amish believe that one effective
way to educate children is on the job. The new law allows Amish children to work with their families
once they have passed the eighth grade in school. Similarly, in 1996, member countries of the European Union, per Directive
94/33/EC, agreed to a number of exceptions for young people in its child labour laws. Under these rules, children
of various ages may work in cultural, artistic, sporting or advertising activities if authorised by the competent
authority. Children above the age of 13 may perform light work for a limited number of hours per week in other economic
activities as defined at the discretion of each country. Additionally, the European law exception allows children
aged 14 years or over to work as part of a work/training scheme. The EU Directive clarified that these exceptions
do not allow child labour where the children may experience harmful exposure to dangerous substances. Nonetheless,
many children under the age of 13 do work, even in the most developed countries of the EU. For instance, a recent
study showed that over a third of Dutch twelve-year-olds had a job, the most common being babysitting. Some scholars[who?]
suggest any labour by children aged 18 years or less is wrong since this encourages illiteracy, inhumane work and
lower investment in human capital. Child labour, claim these activists, also leads to poor labour standards for adults,
depresses the wages of adults in developing countries as well as the developed countries, and dooms the third world
economies to low-skill jobs only capable of producing poor-quality, cheap exports. The more children who work in poor
countries, the fewer and the worse-paid are the jobs for adults there. In other words, there are moral and
economic reasons that justify a blanket ban on labour by children aged 18 years or less, everywhere in the world.
Other scholars[who?] suggest that these arguments are flawed, that they ignore history, and that more laws will do more
harm than good. According to them, child labour is merely the symptom of a greater disease named poverty. If laws ban all lawful
work that enables the poor to survive, informal economy, illicit operations and underground businesses will thrive.
These will increase abuse of the children. In poor countries with very high incidence rates of child labour - such
as Ethiopia, Chad, Niger and Nepal - schools are not available, and the few schools that exist offer poor quality
education or are unaffordable. The alternatives for children who currently work, claim these studies, are worse:
grinding subsistence farming, militia or prostitution. Child labour is not a choice, it is a necessity, the only
option for survival. It is currently the least undesirable of a set of very bad choices. These scholars suggest,
from their studies of economic and social data, that early 20th-century child labour in Europe and the United States
ended in large part as a result of the economic development of the formal regulated economy, technology development
and general prosperity. Child labour laws and ILO conventions came later. Edmonds suggests, even in contemporary
times, the incidence of child labour in Vietnam has rapidly reduced following economic reforms and GDP growth. These
scholars suggest economic engagement, emphasis on opening quality schools rather than more laws and expanding economically
relevant skill development opportunities in the third world. International legal actions, such as trade sanctions,
increase child labour, these scholars argue. In 1998, UNICEF reported that Ivory Coast farmers used enslaved children – many from surrounding
countries. In late 2000 a BBC documentary reported the use of enslaved children in the production of cocoa—the main
ingredient in chocolate— in West Africa. Other media followed by reporting widespread child slavery and child trafficking
in the production of cocoa. In 2001, the US State Department estimated there were 15,000 child slaves on cocoa, cotton
and coffee farms in the Ivory Coast, and the Chocolate Manufacturers Association acknowledged that child slavery
was used in the cocoa harvest.[not in citation given][better source needed] Malian migrants have long worked on cocoa
farms in the Ivory Coast, but in 2000 cocoa prices had dropped to a 10-year low and some farmers stopped paying their
employees. The Malian consul had to rescue some boys who had not been paid for five years and who were beaten if
they tried to run away. Malian officials believed that 15,000 children, some as young as 11 years old, were working
in the Ivory Coast in 2001. These children were often from poor families or the slums and were sold to work in other
countries. Parents were told the children would find work and send money home, but once the children left home, they
often worked in conditions resembling slavery. In other cases, children begging for food were lured from bus stations
and sold as slaves. In 2002, the Ivory Coast had 12,000 children with no relatives nearby, which suggested they were
trafficked, likely from neighboring Mali, Burkina Faso and Togo. The cocoa industry was accused of profiting from
child slavery and trafficking. The European Cocoa Association dismissed these accusations as "false and excessive"
and the industry said the reports were not representative of all areas. Later the industry acknowledged the working
conditions for children were unsatisfactory and children's rights were sometimes violated and acknowledged the claims
could not be ignored. In a BBC interview, the ambassador for the Ivory Coast to the United Kingdom called these reports
of widespread use of slave child labour by 700,000 cocoa farmers absurd and inaccurate. In 2001, a voluntary agreement
called the Harkin-Engel Protocol was accepted by the international cocoa and chocolate industry to eliminate the
worst forms of child labour, as defined by ILO's Convention 182, in West Africa. This agreement created a foundation
named International Cocoa Initiative in 2002. The foundation claims it has, as of 2011, active programs in 290 cocoa
growing communities in Côte d'Ivoire and Ghana, reaching a total population of 689,000 people to help eliminate the
worst forms of child labour in cocoa industry. Other organisations claim progress has been made, but the protocol's
2005 deadlines have not yet been met. In 2008, Bloomberg reported child labour in copper and cobalt mines supplying
Chinese companies in the Congo. The children are creuseurs; that is, they dig the ore by hand and carry sacks of ore on
their backs, and the ore is then purchased by these companies. Over 60 of Katanga's 75 processing plants are owned
by Chinese companies and 90 percent of the region's minerals go to China. An African NGO report claimed 80,000 child
labourers under the age of 15, or about 40% of all miners, were supplying ore to Chinese companies in this African
region. Amnesty International alleged in 2016 that some cobalt sold by Congo Dongfang Mining was produced by child
labor, and that it was being used in lithium-ion batteries powering electric cars and mobile devices worldwide. BBC,
in 2012, accused Glencore of using child labour in its mining and smelting operations in Africa. Glencore denied
using child labour and said it had a strict policy whereby all copper was mined correctly, placed in bags with
numbered seals and then sent to the smelter. Glencore acknowledged being aware of child miners who were part of a
group of artisanal miners who had, without authorisation, raided the concession awarded to the company since 2010;
Glencore has been pleading with the government to remove
the artisanal miners from the concession. Small-scale artisanal mining of gold is another source of dangerous child
labour in poor rural areas in certain parts of the world. This form of mining uses labour-intensive and low-tech
methods. It is part of the informal sector of the economy. The Human Rights Watch group estimates that about 12 percent of global
gold production comes from artisanal mines. In west Africa, in countries such as Mali - the third largest exporter
of gold in Africa - between 20,000 and 40,000 children work in artisanal mining. Locally known as orpaillage, children
as young as 6 years old work with their families. These children and families suffer chronic exposure to toxic chemicals
including mercury, and do hazardous work such as digging shafts and working underground, pulling up, carrying and
crushing the ore. The poor work practices harm the long term health of children, as well as release hundreds of tons
of mercury every year into local rivers, ground water and lakes. Gold is important to the economy of Mali and Ghana.
For Mali, it is the second largest earner of its export revenue. For many poor families with children, it is the
primary and sometimes the only source of income. In early August 2008, Iowa Labour Commissioner David Neil announced
that his department had found that Agriprocessors, a kosher meatpacking company in Postville which had recently been
raided by Immigration and Customs Enforcement, had employed 57 minors, some as young as 14, in violation of state
law prohibiting anyone under 18 from working in a meatpacking plant. Neil announced that he was turning the case
over to the state Attorney General for prosecution, claiming that his department's inquiry had discovered "egregious
violations of virtually every aspect of Iowa's child labour laws." Agriprocessors claimed that it was at a loss to
understand the allegations. Agriprocessors' CEO went to trial on these charges in state court on 4 May 2010. After
a five-week trial he was found not guilty of all 57 charges of child labour violations by the Black Hawk County District
Court jury in Waterloo, Iowa, on 7 June 2010. In December 2009, campaigners in the UK called on two leading high
street retailers to stop selling clothes made with cotton which may have been picked by children. Anti-Slavery International
and the Environmental Justice Foundation (EJF) accused H&M and Zara of using cotton suppliers in Bangladesh. It is
also suspected that many of their raw materials originate from Uzbekistan, where children aged 10 are forced to
work in the fields. The activists called for a ban on the use of Uzbek cotton and the implementation of a "track and
trace" system to guarantee an ethically responsible source of the material. In 2008, the BBC reported that the company Primark was
using child labour in the manufacture of clothing. In particular, a £4 hand-embroidered shirt was the starting point
of a documentary produced by BBC's Panorama programme. The programme asks consumers to ask themselves, "Why am I
only paying £4 for a hand embroidered top? This item looks handmade. Who made it for such little cost?", in addition
to exposing the violent side of the child labour industry in countries where child exploitation is prevalent. Primark
continued to investigate the allegations for three years, concluding that the BBC report was faked. In 2011, following
an investigation by the BBC Trust’s Editorial Standards Committee, the BBC announced, "Having carefully scrutinised
all of the relevant evidence, the committee concluded that, on the balance of probabilities, it was more likely than
not that the Bangalore footage was not authentic." The BBC subsequently apologised for the faked footage and returned the
television award for investigative reporting. Concerns have often been raised over the buying public's moral complicity
in purchasing products assembled or otherwise manufactured in developing countries with child labour. However, others
have raised concerns that boycotting products manufactured through child labour may force these children to turn
to more dangerous or strenuous professions, such as prostitution or agriculture. For example, a UNICEF study found
that after the Child Labour Deterrence Act was introduced in the US, an estimated 50,000 children were dismissed
from their garment industry jobs in Bangladesh, leaving many to resort to jobs such as "stone-crushing, street hustling,
and prostitution", jobs that are "more hazardous and exploitative than garment production". The study suggests that
boycotts are "blunt instruments with long-term consequences, that can actually harm rather than help the children
involved." According to Milton Friedman, before the Industrial Revolution virtually all children worked in agriculture.
During the Industrial Revolution many of these children moved from farm work to factory work. Over time, as real
wages rose, parents became able to afford to send their children to school instead of work and as a result child
labour declined, both before and after legislation. Austrian School economist Murray Rothbard said that British and
American children of the pre- and post-Industrial Revolution lived and suffered in infinitely worse conditions where
jobs were not available for them and went "voluntarily and gladly" to work in factories. According to Thomas DeGregori,
an economics professor at the University of Houston, in an article published by the Cato Institute, a libertarian
think-tank operating in Washington D.C., "it is clear that technological and economic change are vital ingredients
in getting children out of the workplace and into schools. Then they can grow to become productive adults and live
longer, healthier lives. However, in poor countries like Bangladesh, working children are essential for survival
in many families, as they were in our own heritage until the late 19th century. So, while the struggle to end child
labour is necessary, getting there often requires taking different routes—and, sadly, there are many political obstacles."
The term child labour can be misleading when it confuses harmful work with employment that may be beneficial to children.
It can also ignore harmful work outside employment and any benefits children normally derive from their work. Domestic
work is an example: all families but the rich must work at cleaning, cooking, caring, and more to maintain their
homes. In most families in the world, this process extends to productive activities, especially herding and various
types of agriculture, and to a variety of small family businesses. Where trading is a significant feature of social
life, children can start trading in small items at an early age, often in the company of family members or of peers.
Work is undertaken from an early age by vast numbers of children in the world and may have a natural place in growing
up. Work can contribute to the well-being of children in a variety of ways; children often choose to work to improve
their lives, both in the short- and long-term. At the material level, children’s work often contributes to producing
food or earning income that benefits themselves and their families; and such income is especially important when
the families are poor. Work can provide an escape from debilitating poverty, sometimes by allowing a young person
to move away from an impoverished environment. Young people often enjoy their work, especially paid work, or when
work involves the company of peers. Even when work is intensive and enforced, children often find ways to combine
their work with play. While full-time work hinders schooling, empirical evidence is varied on the relationship between
part-time work and school. Sometimes even part-time work may hinder school attendance or performance. On the other
hand, many poor children work for resources to attend school. Children who are not doing well at school sometimes
seek more satisfactory experience in work. Good relations with a supervisor at work can provide relief from tensions
that children feel at school and home. In the modern world, school education has become so central to society that
schoolwork has become the dominant work for most children, often replacing participation in productive work. If school
curricula or quality do not provide children with appropriate skills for available jobs, or if children do not have
the aptitude for schoolwork, school may impede the learning of skills, such as agriculture, which will become necessary
for future livelihood.
North Carolina consists of three main geographic sections: the Atlantic Coastal Plain, which occupies the eastern 45% of
the state; the Piedmont region, which contains the middle 35%; and the Appalachian Mountains and foothills, which make up the remaining western portion. The extreme
eastern section of the state contains the Outer Banks, a string of sandy, narrow barrier islands between the Atlantic
Ocean and two inland waterways or "sounds": Albemarle Sound in the north and Pamlico Sound in the south. They are
the two largest landlocked sounds in the United States. The coastal plain transitions to the Piedmont region along
the Atlantic Seaboard fall line, a line which marks the elevation at which waterfalls first appear on streams and
rivers. The Piedmont region of central North Carolina is the state's most urbanized and densely populated section.
It consists of gently rolling countryside frequently broken by hills or low mountain ridges. Small, isolated, and
deeply eroded mountain ranges and peaks are located in the Piedmont, including the Sauratown Mountains, Pilot Mountain,
the Uwharrie Mountains, Crowder's Mountain, King's Pinnacle, the Brushy Mountains, and the South Mountains. The Piedmont
ranges from about 300 to 400 feet (91 to 122 m) in elevation in the east to over 1,000 feet (300 m) in the west.
Because of the rapid population growth in the Piedmont, a significant part of the rural area in this region is being
transformed into suburbs with shopping centers, housing, and corporate offices. Agriculture is steadily declining
in importance. The major rivers of the Piedmont, such as the Yadkin and Catawba, tend to be fast-flowing, shallow,
and narrow. The western section of the state is part of the Appalachian Mountain range. Among the subranges of the
Appalachians located in the state are the Great Smoky Mountains, Blue Ridge Mountains, Great Balsam Mountains, and
Black Mountains. The Black Mountains are the highest in the eastern United States, and culminate in Mount Mitchell
at 6,684 feet (2,037 m), the highest point east of the Mississippi River. Although agriculture remains important,
tourism has become a dominant industry in the mountains. Growing Christmas trees has recently become an important
industry as well. Because of the higher altitude, the climate in the mountains often differs markedly from that of
the rest of the state. Winter in western North Carolina typically features high snowfall and subfreezing temperatures
more akin to those of a midwestern state than of a southern state. The climate of the coastal plain is influenced
by the Atlantic Ocean, which keeps conditions mild in winter and moderate, although humid, in summer. The highest
coastal, daytime temperature averages less than 89 °F (32 °C) during summer months. The coast has mild temperatures
in winter, with daytime highs rarely below 40 °F (4 °C). The average daytime temperature in the coastal plain is
usually in the mid-50s °F (11–14 °C) in winter. Temperatures in the coastal plain only occasionally drop below the
freezing point at night. The coastal plain averages only around 1 inch (2.5 cm) of snow or ice annually, and in many
years, there may be no snow or ice at all. The Atlantic Ocean has less influence on the climate of the Piedmont region,
which has hotter summers and colder winters than the coast. Daytime highs in the Piedmont often reach over 90
°F (32 °C) in the summer. While it is not common for the temperature to reach over 100 °F (38 °C) in the state, such
temperatures, when they occur, typically are found only in the lower-elevation areas of the Piedmont and far-inland
areas of the coastal plain. The weaker influence of the Atlantic Ocean also means that temperatures in the Piedmont
often fluctuate more widely than on the coast. In winter, the Piedmont is colder than the coast, with temperatures
usually averaging in the upper 40s–lower 50s °F (8–12 °C) during the day and often dropping below the freezing point
at night. The region averages around 3–5 in (8–13 cm) of snowfall annually in the Charlotte area, and slightly more
north toward the Virginia border. The Piedmont is especially notorious for sleet and freezing rain. Freezing rain
can be heavy enough to snarl traffic and break down trees and power lines. Annual precipitation and humidity are
lower in the Piedmont than in the mountains or the coast, but even at its lowest, the average is 40 in (1,020 mm)
per year. The Appalachian Mountains are the coolest area of the state, with temperatures averaging in the low 40s
and upper 30s °F (6–3 °C) for highs in the winter and falling into the low 20s °F (−5 °C) or lower on winter nights.
Relatively cool summers have temperatures rarely rising above 80 °F (27 °C). Average snowfall in many areas exceeds
30 in (76 cm) per year, and can be heavy at the higher elevations; for example, during the Blizzard of 1993 more
than 60 in (152 cm) of snow fell on Mount Mitchell over a period of three days. Mount Mitchell has received snow
in every month of the year. Severe weather occurs regularly in North Carolina. On the average, a hurricane hits the
state once a decade. Destructive hurricanes that have struck the state include Hurricane Fran, Hurricane Floyd, and
Hurricane Hazel, the strongest storm to make landfall in the state, as a Category 4 in 1954. Hurricane Isabel stands
out as the most damaging of the 21st century. Tropical storms arrive every 3 or 4 years. In addition, many hurricanes
and tropical storms graze the state. In some years, several hurricanes or tropical storms can directly strike the
state or brush across the coastal areas. Only Florida and Louisiana are hit by hurricanes more often. Although many
people believe that hurricanes menace only coastal areas, the rare hurricane which moves inland quickly enough can
cause severe damage; for example, in 1989, Hurricane Hugo caused heavy damage in Charlotte and even as far inland
as the Blue Ridge Mountains in the northwestern part of the state. On the average, North Carolina has 50 days of
thunderstorm activity per year, with some storms becoming severe enough to produce hail, flash floods, and damaging
winds. North Carolina averages fewer than 20 tornadoes per year, many of them produced by hurricanes or tropical
storms along the coastal plain. Tornadoes from thunderstorms are a risk, especially in the eastern part of the state.
The western Piedmont is often protected by the mountains, which tend to break up storms as they try to cross over;
the storms will often re-form farther east. Also a weather phenomenon known as "cold air damming" often occurs in
the northwestern part of the state, which can weaken storms but can also lead to major ice events in winter.
Before A.D. 200, residents were building earthwork mounds, which were used for ceremonial and religious purposes.
Succeeding peoples, including those of the ancient Mississippian culture established by A.D. 1000 in the Piedmont,
continued to build or add onto such mounds. In the 500–700 years preceding European contact, the Mississippian culture
built large, complex cities and maintained far-flung regional trading networks. Historically documented tribes in
the North Carolina region included the Carolina Algonquian-speaking tribes of the coastal areas, such as the Chowanoke,
Roanoke, Pamlico, Machapunga, Coree, Cape Fear Indians, and others, who were the first to encounter the English;
Iroquoian-speaking Meherrin, Cherokee and Tuscarora of the interior; and Southeastern Siouan tribes, such as the
Cheraw, Waxhaw, Saponi, Waccamaw, and Catawba.[citation needed] In June 1718 Blackbeard, aka Edward Teach, ran his
flagship, the Queen Anne's Revenge, aground at Beaufort Inlet, North Carolina, in present-day Carteret County. After
the grounding, her crew and supplies were transferred to smaller ships. In 1996 Intersal, Inc., a private firm, discovered
the remains of a vessel likely to be the Queen Anne's Revenge, which was added to the US National Register of Historic
Places. In November 1718, after losing his ship and appealing to the governor of North Carolina, who promised safe
haven and a pardon, the notorious pirate Blackbeard was killed in an ambush by troops from Virginia. North
Carolina became one of the English Thirteen Colonies and with the territory of South Carolina was originally known
as the Province of Carolina. The northern and southern parts of the original province separated in 1729. The colony
was originally settled by small farmers, sometimes owning a few slaves, who were oriented toward subsistence
agriculture, and it lacked cities or towns. Pirates menaced the coastal settlements, but by 1718 the pirates had been captured and killed.
Growth was strong in the middle of the 18th century, as the economy attracted Scots-Irish, Quaker, English and German
immigrants. The colonists generally supported the American Revolution, as the number of Loyalists was smaller than
in some other colonies. During colonial times, Edenton served as the state capital beginning in 1722, and New Bern
was selected as the capital in 1766. Construction of Tryon Palace, which served as the residence and offices of the
provincial governor William Tryon, began in 1767 and was completed in 1771. In 1788 Raleigh was chosen as the site
of the new capital, as its central location protected it from attacks from the coast. Officially established in 1792
as both county seat and state capital, the city was named after Sir Walter Raleigh, sponsor of Roanoke, the "lost
colony" on Roanoke Island. North Carolina made the smallest per-capita contribution to the Revolutionary War of any state, as only
7,800 men joined the Continental Army under General George Washington; an additional 10,000 served in local militia
units under such leaders as General Nathanael Greene. There was some military action, especially in 1780–81. Many
Carolinian frontiersmen had moved west over the mountains, into the Washington District (later known as Tennessee),
but in 1789, following the Revolution, the state was persuaded to relinquish its claim to the western lands. It ceded
them to the national government so that the Northwest Territory could be organized and managed nationally. After
1800, cotton and tobacco became important export crops. The eastern half of the state, especially the Tidewater region,
developed a slave society based on a plantation system and slave labor. Many free people of color migrated to the
frontier along with their European-American neighbors, where the social system was looser. By 1810, nearly 3 percent
of the free population consisted of free people of color, who numbered slightly more than 10,000. The western areas
were dominated by white families, especially Scots-Irish, who operated small subsistence farms. In the early national
period, the state became a center of Jeffersonian and Jacksonian democracy, with a strong Whig presence, especially
in the West. After Nat Turner's slave uprising in 1831, North Carolina and other southern states reduced the rights
of free blacks. In 1835 the legislature withdrew their right to vote. With the defeat of the Confederacy in 1865,
the Reconstruction Era began. The United States abolished slavery without compensation to slaveholders or reparations
to freedmen. A Republican Party coalition of black freedmen, northern carpetbaggers and local scalawags controlled
state government for three years. The white conservative Democrats regained control of the state legislature in 1870,
in part through Ku Klux Klan violence and terrorism at the polls to suppress black voting. Republicans were elected to
the governorship until 1876, when the Red Shirts, a paramilitary organization that arose in 1874 and was allied with
the Democratic Party, helped suppress black voting. More than 150 black Americans were murdered in electoral violence
in 1876. Democrats were elected to the legislature and governor's office, but the Populists attracted voters displeased
with them. In 1896 a biracial, Populist-Republican Fusionist coalition gained the governor's office. The Democrats
regained control of the legislature in 1896 and passed laws to impose Jim Crow and racial segregation of public facilities.
Voters of North Carolina's 2nd congressional district elected a total of four African-American congressmen through
these years of the late 19th century. In 1899 the state legislature passed a new constitution, with requirements
for poll taxes and literacy tests for voter registration which disfranchised most black Americans in the state. Exclusion
from voting had wide effects: it meant that black Americans could not serve on juries or in any local office. After
a decade of white supremacy, many people forgot that North Carolina had ever had a thriving black middle class.
Black citizens had no political voice in the state until after the federal Civil Rights Act of 1964 and Voting Rights
Act of 1965 were passed to enforce their constitutional rights. It was not until 1992 that another African American
was elected as a US Representative from North Carolina. As in the rest of the former Confederacy, North Carolina
had become a one-party state, dominated by the Democratic Party. Impoverished by the Civil War, the state continued
with an economy based on tobacco, cotton and agriculture. Towns and cities remained few in the east. A major industrial
base emerged in the late 19th century in the western counties of the Piedmont, based on cotton mills established
at the fall line. Railroads were built to connect the new industrializing cities. The state was the site of the first
successful controlled, powered and sustained heavier-than-air flight, by the Wright brothers, near Kitty Hawk on
December 17, 1903. In the first half of the 20th century, many African Americans left the state to go North for better
opportunities, in the Great Migration. Their departure changed the demographic characteristics of many areas. North
Carolina was hard hit by the Great Depression, but the New Deal programs of Franklin D. Roosevelt for cotton and
tobacco significantly helped the farmers. After World War II, the state's economy grew rapidly, highlighted by the
growth of such cities as Charlotte, Raleigh, and Durham in the Piedmont. Raleigh, Durham, and Chapel Hill form the
Research Triangle, a major area of universities and advanced scientific and technical research. In the 1990s, Charlotte
became a major regional and national banking center. Tourism has also been a boon for the North Carolina economy
as people flock to the Outer Banks coastal area and the Appalachian Mountains anchored by Asheville. North Carolina
was inhabited for thousands of years by a succession of prehistoric indigenous cultures. Before 200 AD, they
were building earthwork mounds, which were used for ceremonial and religious purposes. Succeeding peoples, including
those of the ancient Mississippian culture established by 1000 AD in the Piedmont, continued to build or add on to
such mounds. In the 500–700 years preceding European contact, the Mississippian culture built large, complex cities
and maintained far-flung regional trading networks. Its largest city was Cahokia, located in present-day Illinois
near the Mississippi River. Spanish explorers traveling inland in the 16th century met Mississippian culture people
at Joara, a regional chiefdom near present-day Morganton. Records of Hernando de Soto attested to his meeting with
them in 1540. In 1567 Captain Juan Pardo led an expedition to claim the area for the Spanish colony and to establish
another route to protect silver mines in Mexico. Pardo made a winter base at Joara, which he renamed Cuenca. His
expedition built Fort San Juan and left a contingent of 30 men there, while Pardo traveled further, and built and
garrisoned five other forts. He returned by a different route to Santa Elena on Parris Island, South Carolina, then
a center of Spanish Florida. In the spring of 1568, natives killed all but one of the soldiers and burned the six
forts in the interior, including the one at Fort San Juan. Although the Spanish never returned to the interior, this
effort marked the first European attempt at colonization of the interior of what became the United States. A 16th-century
journal by Pardo's scribe Bandera and archaeological findings since 1986 at Joara have confirmed the settlement.
In 1584, Elizabeth I granted a charter to Sir Walter Raleigh, for whom the state capital is named, for land in present-day
North Carolina (then part of the territory of Virginia). It was the second American territory which the English attempted
to colonize. Raleigh established two colonies on the coast in the late 1580s, but both failed. The fate of the "Lost
Colony" of Roanoke Island remains one of the most widely debated mysteries of American history. Virginia Dare, the
first English child to be born in North America, was born on Roanoke Island on August 18, 1587; Dare County is named
for her. As early as 1650, settlers from the Virginia colony moved into the area of Albemarle Sound. By 1663, King
Charles II of England granted a charter to start a new colony on the North American continent; it generally established
North Carolina's borders. He named it Carolina in honor of his father Charles I. By 1665, a second charter was issued
to attempt to resolve territorial questions. In 1710, owing to disputes over governance, the Carolina colony began
to split into North Carolina and South Carolina. The latter became a crown colony in 1729. After the Spanish in the
16th century, the first permanent European settlers of North Carolina were English colonists who migrated south from
Virginia. That colony had grown rapidly, and land there was less available. Nathaniel Batts was documented as one of the
first of these Virginian migrants. He settled south of the Chowan River and east of the Great Dismal Swamp in 1655.
By 1663, this northeastern area of the Province of Carolina, known as the Albemarle Settlements, was undergoing full-scale
English settlement. During the same period, the English monarch Charles II gave the province to the Lords Proprietors,
a group of noblemen who had helped restore Charles to the throne in 1660. The new province of "Carolina" was named
in honor and memory of King Charles I (Latin: Carolus). In 1711, a large revolt known as Cary's Rebellion broke out
in the colony. In 1712, North Carolina became a separate colony; except for the Earl Granville holdings, it became
a royal colony seventeen years later. Differences in the settlement patterns of eastern and western North Carolina,
or the Low Country and uplands, affected the political, economic, and social life of the state from the 18th until
the 20th century. The Tidewater in eastern North Carolina was settled chiefly by immigrants from rural England and
the Scottish Highlands. The upcountry of western North Carolina was settled chiefly by Scots-Irish, English, and
German Protestants, the so-called "cohee". Arriving during the mid- to late 18th century, the Scots-Irish from what
is today Northern Ireland were the largest non-English immigrant group before the Revolution; English indentured
servants were overwhelmingly the largest immigrant group overall. During the American Revolutionary
War, the English and Highland Scots of eastern North Carolina tended to remain loyal to the British Crown, because
of longstanding business and personal connections with Great Britain. The English, Welsh, Scots-Irish, and German
settlers of western North Carolina tended to favor American independence from Britain. Most of the English colonists
had arrived as indentured servants, hiring themselves out as laborers for a fixed period to pay for their passage.
In the early years the line between indentured servants and African slaves or laborers was fluid. Some Africans were
allowed to earn their freedom before slavery became a lifelong status. Most of the free colored families formed in
North Carolina before the Revolution were descended from unions or marriages between free white women and enslaved
or free African or African-American men. Because the mothers were free, their children were born free. Many had migrated
or were descendants of migrants from colonial Virginia. As the flow of indentured laborers to the colony decreased
with improving economic conditions in Great Britain, planters imported more slaves, and the state's legal delineations
between free and slave status tightened, effectively hardening the latter into a racial caste. The economy's growth
and prosperity was based on slave labor, devoted first to the production of tobacco. On April 12, 1776, the colony
became the first to instruct its delegates to the Continental Congress to vote for independence from the British
Crown, through the Halifax Resolves passed by the North Carolina Provincial Congress. The date of the Resolves is
memorialized on the state flag and state seal. Throughout the Revolutionary War, fierce guerrilla warfare
erupted between bands of pro-independence and pro-British colonists. In some cases the war was also an excuse to
settle private grudges and rivalries. A major American victory in the war took place at King's Mountain along the
North Carolina–South Carolina border; on October 7, 1780, a force of 1,000 mountain men from western North Carolina
(including what is today the state of Tennessee) and Southwest Virginia overwhelmed a force of some 1,000 British troops
led by Major Patrick Ferguson. Most of the soldiers fighting for the British side in this battle were Carolinians
who had remained loyal to the Crown (they were called "Tories" or Loyalists). The American victory at Kings Mountain
gave the advantage to colonists who favored American independence, and it prevented the British Army from recruiting
new soldiers from the Tories. The road to Yorktown and America's independence from Great Britain led through North
Carolina. As the British Army moved north from victories in Charleston and Camden, South Carolina, the Southern Division
of the Continental Army and local militia prepared to meet them. Following General Daniel Morgan's victory over the
British Cavalry Commander Banastre Tarleton at the Battle of Cowpens on January 17, 1781, southern commander Nathanael
Greene lured British Lord Charles Cornwallis across the heartland of North Carolina, and away from the latter's base
of supply in Charleston, South Carolina. This campaign is known as "The Race to the Dan" or "The Race for the River."
In the Battle of Cowan's Ford, Cornwallis met resistance along the banks of the Catawba River at Cowan's Ford on
February 1, 1781, in an attempt to engage General Morgan's forces during a tactical withdrawal. Morgan had moved
to the northern part of the state to combine with General Greene's newly recruited forces. Generals Greene and Cornwallis
finally met at the Battle of Guilford Courthouse in present-day Greensboro on March 15, 1781. Although the British
troops held the field at the end of the battle, their casualties at the hands of the numerically superior Continental
Army were crippling. Following this "Pyrrhic victory", Cornwallis chose to move to the Virginia coastline to get
reinforcements, and to allow the Royal Navy to protect his battered army. This decision would result in Cornwallis'
eventual defeat at Yorktown, Virginia, later in 1781. The Patriots' victory there guaranteed American independence.
On November 21, 1789, North Carolina became the twelfth state to ratify the Constitution. In 1840, it completed the
state capitol building in Raleigh, which still stands today. Most of North Carolina's slave owners and large plantations
were located in the eastern portion of the state. Although North Carolina's plantation system was smaller and less
cohesive than that of Virginia, Georgia, or South Carolina, significant numbers of planters were concentrated in
the counties around the port cities of Wilmington and Edenton, as well as suburban planters around the cities of
Raleigh, Charlotte, and Durham in the Piedmont. Planters owning large estates wielded significant political and socio-economic
power in antebellum North Carolina, which was a slave society. They placed their interests above those of the generally
non-slave-holding "yeoman" farmers of western North Carolina. In the mid-19th century, the state's rural and commercial areas
were connected by the construction of a 129-mile (208 km) wooden plank road, known as a "farmer's railroad", from
Fayetteville in the east to Bethania (northwest of Winston-Salem). Besides slaves, there were a number of free people
of color in the state. Most were descended from free African Americans who had migrated along with neighbors from
Virginia during the 18th century. The majority were the descendants of unions in the working classes between white
women, indentured servants or free, and African men, indentured, slave or free. After the Revolution, Quakers and
Mennonites worked to persuade slaveholders to free their slaves. Some were inspired by their efforts and the language
of the Revolution to arrange for manumission of their slaves. The number of free people of color rose markedly in
the first couple of decades after the Revolution. On October 25, 1836, construction began on the Wilmington and Raleigh
Railroad to connect the port city of Wilmington with the state capital of Raleigh. In 1849 the North Carolina Railroad
was created by act of the legislature to extend that railroad west to Greensboro, High Point, and Charlotte. During
the Civil War, the Wilmington-to-Raleigh stretch of the railroad would be vital to the Confederate war effort; supplies
shipped into Wilmington would be moved by rail through Raleigh to the Confederate capital of Richmond, Virginia.
While slaveholding was slightly less concentrated than in some Southern states, according to the 1860 census, more
than 330,000 people, or 33% of the population of 992,622, were enslaved African Americans. They lived and worked
chiefly on plantations in the eastern Tidewater. In addition, 30,463 free people of color lived in the state. They
were also concentrated in the eastern coastal plain, especially at port cities such as Wilmington and New Bern, where
a variety of jobs were available. Free African Americans were allowed to vote until 1835, when the state revoked
their suffrage in restrictions following the slave rebellion of 1831 led by Nat Turner. Southern slave codes criminalized
willful killing of a slave in most cases. The state did not vote to join the Confederacy until President Abraham
Lincoln called on it to supply troops to invade its sister state, South Carolina; it was the last or second-to-last
state to officially join the Confederacy. The title of "last to join the Confederacy" has been disputed; although
Tennessee's informal secession on May 7, 1861, preceded North Carolina's official secession on May 20, the Tennessee
legislature did not formally vote to secede until June 8, 1861. After secession, some North Carolinians refused to
support the Confederacy. Some of the yeoman farmers in the state's mountains and western Piedmont region remained
neutral during the Civil War, while some covertly supported the Union cause during the conflict. Approximately 2,000
North Carolinians from western North Carolina enlisted in the Union Army and fought for the North in the war. Two
additional Union Army regiments were raised in the coastal areas of the state, which were occupied by Union forces
in 1862 and 1863. Numerous slaves escaped to Union lines, where they became essentially free. Confederate troops
from all parts of North Carolina served in virtually all the major battles of the Army of Northern Virginia, the
Confederacy's most famous army. The largest battle fought in North Carolina was at Bentonville, which was a futile
attempt by Confederate General Joseph Johnston to slow Union General William Tecumseh Sherman's advance through the
Carolinas in the spring of 1865. In April 1865, after losing the Battle of Morrisville, Johnston surrendered to Sherman
at Bennett Place, in what is today Durham. North Carolina's port city of Wilmington was the last Confederate port
to fall to the Union, in February 1865, after the Union won the nearby Second Battle of Fort Fisher, its major defense
downriver. The first Confederate soldier to be killed in the Civil War was Private Henry Wyatt from North Carolina,
in the Battle of Big Bethel in June 1861. At the Battle of Gettysburg in July 1863, the 26th North Carolina Regiment
participated in Pickett/Pettigrew's Charge and advanced the farthest into the Northern lines of any Confederate regiment.
During the Battle of Chickamauga, the 58th North Carolina Regiment advanced farther than any other regiment on Snodgrass
Hill to push back the remaining Union forces from the battlefield. At Appomattox Court House in Virginia in April
1865, the 75th North Carolina Regiment, a cavalry unit, fired the last shots of the Confederate Army of Northern
Virginia in the Civil War. For many years, North Carolinians proudly boasted that they had been "First at Bethel,
Farthest at Gettysburg and Chickamauga, and Last at Appomattox." While the Baptists in total (counting both blacks
and whites) have maintained the majority in this part of the country (known as the Bible Belt), the population in
North Carolina practices a wide variety of faiths, including Judaism, Islam, Baha'i, Buddhism, and Hinduism. As of
2010 the Southern Baptist Convention was the largest denomination, with 4,241 churches and 1,513,000 members; the second
largest was the United Methodist Church, with 660,000 members and 1,923 churches. The third was the Roman Catholic
Church, with 428,000 members in 190 congregations. The fourth largest was the Presbyterian Church (USA), with 186,000
members and 710 congregations; this denomination was brought by Scots-Irish immigrants who settled the backcountry
in the colonial era. Currently, the rapid influx of northerners and immigrants from Latin America is steadily increasing
ethnic and religious diversity: the number of Roman Catholics and Jews in the state has increased, as well as general
religious diversity. The second-largest Protestant denomination in North Carolina after Baptist traditions is Methodism,
which is strong in the northern Piedmont, especially in populous Guilford County. There are also a substantial number
of Quakers in Guilford County and northeastern North Carolina. Many universities and colleges in the state have been
founded on religious traditions, and some currently maintain that affiliation. According to a 2013 Forbes article,
employment in the "Old North State" spans many industry sectors: science, technology, engineering and math (STEM)
industries in the area surrounding North Carolina's capital have grown 17.9 percent since 2001, placing Raleigh-Cary
at No. 5 among the 51 largest metro areas in the
country where technology is booming. In 2010 North Carolina's total gross state product was $424.9 billion. Reported
figures for the state's 2012 debt vary widely by source, from US$2.4 billion to US$57.8 billion. In 2011 the civilian
labor force was around 4.5 million, with employment near 4.1 million. The working
population is employed across the major employment sectors. The economy of North Carolina covers 15 metropolitan
areas. In 2010, North Carolina was chosen as the third-best state for business by Forbes Magazine, and the second-best
state by Chief Executive Officer Magazine. North Carolina's party loyalties have undergone a series of important
shifts in recent years: while the 2010 midterms saw Tar Heel voters elect a bicameral Republican majority legislature
for the first time in over a century, North Carolina has also become a Southern swing state in presidential races.
Since Southern Democrat Jimmy Carter's comfortable victory in the state in 1976, the state had consistently leaned
Republican in presidential elections until Democrat Barack Obama narrowly won the state in 2008. In the 1990s, Democrat
Bill Clinton came within a point of winning the state in 1992 and also only narrowly lost the state in 1996. In the
early 2000s, Republican George W. Bush easily won the state by over 12 points, but by 2008, demographic shifts, population
growth, and increased liberalization in heavily populated areas such as the Research Triangle, Charlotte, Greensboro,
Winston-Salem, Fayetteville, and Asheville, propelled Barack Obama to victory in North Carolina, the first Democrat
to win the state since 1976. In 2012, North Carolina was again considered a competitive swing state, with the Democrats
even holding their 2012 Democratic National Convention in Charlotte. However, Republican Mitt Romney ultimately eked
out a 2-point win in North Carolina, the only 2012 swing state that Obama lost, and one of only two states (along
with Indiana) to flip from Obama in 2008 to the GOP in 2012. In 2012, the state elected a Republican Governor (Pat
McCrory) and Lieutenant Governor (Dan Forest) for the first time in more than two decades, while also giving the
Republicans veto-proof majorities in both the State House of Representatives and the State Senate. Several U.S. House
of Representatives seats also flipped control, with the Republicans holding nine seats to the Democrats' four. In
the 2014 mid-term elections, Republican David Rouzer won the state's Seventh Congressional District seat, increasing
the congressional delegation party split to 10-3 in favor of the GOP. Elementary and secondary public schools are
overseen by the North Carolina Department of Public Instruction. The North Carolina Superintendent of Public Instruction
is the secretary of the North Carolina State Board of Education, but the board, rather than the superintendent, holds
most of the legal authority for making public education policy. In 2009, the board's chairman also became the "chief
executive officer" for the state's school system. North Carolina has 115 public school systems, each of which is
overseen by a local school board. A county may have one or more systems within it. The largest school systems in
North Carolina are the Wake County Public School System, Charlotte-Mecklenburg Schools, Guilford County Schools,
Winston-Salem/Forsyth County Schools, and Cumberland County Schools. In total there are 2,425 public schools in the
state, including 99 charter schools. North Carolina Schools were segregated until the Brown v. Board of Education
trial and the release of the Pearsall Plan. In 1795, North Carolina opened the first public university in the United
States—the University of North Carolina (now named the University of North Carolina at Chapel Hill). More than 200
years later, the University of North Carolina system encompasses 17 public universities including North Carolina
State University, North Carolina A&T State University, North Carolina Central University, the University of North
Carolina at Chapel Hill, the University of North Carolina at Greensboro, East Carolina University, Western Carolina
University, Winston-Salem State University, the University of North Carolina at Asheville, the University of North
Carolina at Charlotte, the University of North Carolina at Pembroke, UNC Wilmington, Elizabeth City State University,
Appalachian State University, Fayetteville State University, and UNC School of the Arts. Along with its public
universities, North Carolina has 58 public community colleges in its community college system. The largest university
in North Carolina is currently North Carolina State University, with more than 34,000 students. North Carolina
is also home to many well-known private colleges and universities, including Duke University, Wake Forest University,
Pfeiffer University, Lees-McRae College, Davidson College, Barton College, North Carolina Wesleyan College, Elon
University, Guilford College, Livingstone College, Salem College, Shaw University (the first historically black college
or university in the South), Laurel University, Meredith College, Methodist University, Belmont Abbey College (the
only Catholic college in the Carolinas), Campbell University, University of Mount Olive, Montreat College, High Point
University, Lenoir-Rhyne University (the only Lutheran university in North Carolina) and Wingate University. North
Carolina is home to three major league sports franchises: the Carolina Panthers of the National Football League and
the Charlotte Hornets of the National Basketball Association are based in Charlotte, while the Raleigh-based Carolina
Hurricanes play in the National Hockey League. The Panthers and Hurricanes are the only two major professional sports
teams that have the same geographical designation while playing in different metropolitan areas. The Hurricanes are
the only major professional team from North Carolina to have won a league championship, having captured the Stanley
Cup in 2006. North Carolina is also home to the Charlotte Hounds of Major League Lacrosse. In addition to professional
team sports, North Carolina has a strong affiliation with NASCAR and stock-car racing, with Charlotte Motor Speedway
in Concord hosting two Sprint Cup Series races every year. Charlotte also hosts the NASCAR Hall of Fame, while Concord
is the home of several top-flight racing teams, including Hendrick Motorsports, Roush Fenway Racing, Richard Petty
Motorsports, Stewart-Haas Racing, and Chip Ganassi Racing. Numerous other tracks around North Carolina host races
from low-tier NASCAR circuits as well. College sports are also popular in North Carolina, with 18 schools competing
at the Division I level. The Atlantic Coast Conference (ACC) is headquartered in Greensboro, and both the ACC Football
Championship Game (Charlotte) and the ACC Men's Basketball Tournament (Greensboro) were most recently held in North
Carolina. College basketball in particular is very popular, buoyed by the Tobacco Road rivalries between Duke, North
Carolina, North Carolina State, and Wake Forest. The ACC Championship Game and The Belk Bowl are held annually in
Charlotte's Bank of America Stadium, featuring teams from the ACC and the Southeastern Conference. Additionally,
the state has hosted the NCAA Men's Basketball Final Four on two occasions, in Greensboro in 1974 and in Charlotte
in 1994. Every year the Appalachian Mountains attract several million tourists to the Western part of the state,
including the historic Biltmore Estate. The scenic Blue Ridge Parkway and Great Smoky Mountains National Park are
the two most visited National Park Service units in the United States, drawing over 25 million visitors in 2013. The
City of Asheville is consistently ranked as one of the top places to visit and live in the United States, known for
its rich Art Deco architecture, mountain scenery, outdoor activities, and liberal and happy residents. In Raleigh
many tourists visit the State Capitol, African American Cultural Complex, Contemporary Art Museum of Raleigh, Gregg Museum
of Art & Design at NCSU, Haywood Hall House & Gardens, Marbles Kids Museum, North Carolina Museum of Art, North Carolina
Museum of History, North Carolina Museum of Natural Sciences, North Carolina Sports Hall of Fame, Raleigh City Museum,
J. C. Raulston Arboretum, Joel Lane House, Mordecai House, Montfort Hall, and the Pope House Museum. The Carolina
Hurricanes NHL hockey team is also located in the city. The Piedmont Triad, or center of the state, is home to Krispy
Kreme, Mayberry, Texas Pete, the Lexington Barbecue Festival, and Moravian cookies. The internationally acclaimed
North Carolina Zoo in Asheboro attracts visitors to its animals, plants, and a 57-piece art collection along five
miles of shaded pathways in the world's largest natural-habitat zoo by land area. Seagrove, in the central portion
of the state, attracts many tourists along Pottery Highway (NC Hwy 705). MerleFest in Wilkesboro attracts more than
80,000 people to its four-day music festival; and Wet 'n Wild Emerald Pointe water park in Greensboro is another
attraction. North Carolina provides a large range of recreational activities, from swimming at the beach to skiing
in the mountains. North Carolina offers fall colors, freshwater and saltwater fishing, hunting, birdwatching, agritourism,
ATV trails, ballooning, rock climbing, biking, hiking, skiing, boating and sailing, camping, canoeing, caving (spelunking),
gardens, and arboretums. North Carolina has theme parks, aquariums, museums, historic sites, lighthouses, elegant
theaters, concert halls, and fine dining. North Carolinians enjoy outdoor recreation utilizing numerous local bike
paths, 34 state parks, and 14 National Park Service units. These units include the Appalachian National Scenic
Trail, the Blue Ridge Parkway, Cape Hatteras National Seashore, Cape Lookout National Seashore, Carl Sandburg Home
National Historic Site at Flat Rock, Fort Raleigh National Historic Site at Manteo, Great Smoky Mountains National
Park, Guilford Courthouse National Military Park in Greensboro, Moores Creek National Battlefield near Currie in
Pender County, the Overmountain Victory National Historic Trail, Old Salem National Historic Site in Winston-Salem,
the Trail of Tears National Historic Trail, and Wright Brothers National Memorial in Kill Devil Hills. National Forests
include Uwharrie National Forest in central North Carolina, Croatan National Forest in Eastern North Carolina, Pisgah
National Forest in the northern mountains, and Nantahala National Forest in the southwestern part of the state. North
Carolina has rich traditions in art, music, and cuisine. The nonprofit arts and culture industry generates $1.2 billion
in direct economic activity in North Carolina, supporting more than 43,600 full-time equivalent jobs and generating
$119 million in revenue for local governments and the state of North Carolina. The state established the North Carolina Museum of Art as the first major museum collection in the country to be formed by state legislation and funding, and the museum continues to bring millions of dollars into the state economy. North
Carolina has a variety of shopping choices. SouthPark Mall in Charlotte is currently the largest in the Carolinas,
with almost 2.0 million square feet. Other major malls in Charlotte include Northlake Mall and Carolina Place Mall
in nearby suburb Pineville. Other major malls throughout the state include Hanes Mall in Winston-Salem; Crabtree
Valley Mall, North Hills Mall, and Triangle Town Center in Raleigh; Friendly Center and Four Seasons Town Centre
in Greensboro; Oak Hollow Mall in High Point; Concord Mills in Concord; Valley Hills Mall in Hickory; The Streets at Southpoint and Northgate Mall in Durham; Independence Mall in Wilmington; and Tanger Outlets in Charlotte, Nags Head, Blowing Rock, and Mebane. A culinary staple of North Carolina is pork barbecue. There are strong regional
differences and rivalries over the sauces and methods used in making the barbecue. Western North Carolina pork barbecue uses a tomato-based sauce, and only the pork shoulder (dark meat), typically a premium-grade Boston butt, is used. Western North Carolina barbecue is commonly referred to as
Lexington barbecue after the Piedmont Triad town of Lexington, home of the Lexington Barbecue Festival, which attracts
over 100,000 visitors each October. Eastern North Carolina pork barbecue uses a vinegar-and-red-pepper-based sauce
and the "whole hog" is cooked, thus integrating both white and dark meat. Krispy Kreme, an international chain of
doughnut stores, was started in North Carolina; the company's headquarters are in Winston-Salem. Pepsi-Cola was first
produced in 1898 in New Bern. A regional soft drink, Cheerwine, was created and is still based in the city of Salisbury.
Despite its name, the hot sauce Texas Pete was created in North Carolina; its headquarters are also in Winston-Salem.
The Hardee's fast-food chain was started in Rocky Mount. Another fast-food chain, Bojangles', was started in Charlotte,
and has its corporate headquarters there. A popular North Carolina restaurant chain is Golden Corral. Started in
1973, the chain was founded in Fayetteville, with headquarters located in Raleigh. Popular pickle brand Mount Olive
Pickle Company was founded in Mount Olive in 1926. Fast casual burger chain Hwy 55 Burgers, Shakes & Fries also makes
its home in Mount Olive. Cook Out, a popular fast-food chain featuring burgers, hot dogs, and milkshakes in a wide
variety of flavors, was founded in Greensboro in 1989 and has begun expanding outside of North Carolina. In 2013,
Southern Living named Durham–Chapel Hill the South's "Tastiest City." Over the last decade, North Carolina has become a cultural epicenter and haven for internationally prize-winning wine (Noni Bacca Winery), internationally prized cheeses (Ashe County), truffles (Garland Truffles), and beer making, as tobacco land has been converted to vineyards and state laws raised the maximum allowed alcohol content in beer from 6% to 15% ABV. The Yadkin
Valley in particular has become a strengthening market for grape production, while Asheville recently won the recognition
of being named 'Beer City USA.' Asheville boasts the most breweries per capita of any city in the United States.
Recognized and marketed brands of beer in North Carolina include Highland Brewing, Duck Rabbit Brewery, Mother Earth
Brewery, Weeping Radish Brewery, Big Boss Brewing, Foothills Brewing, Carolina Brewing Company, Lonerider Brewing,
and White Rabbit Brewing Company. Tobacco was one of the first major industries to develop after the Civil War. Many
farmers grew some tobacco, and the invention of the cigarette made the product especially popular. Winston-Salem
is the birthplace of R. J. Reynolds Tobacco Company (RJR), founded by R. J. Reynolds in 1874 as one of 16 tobacco
companies in the town. By 1914 it was selling 425 million packs of Camels a year. Today it is the second-largest
tobacco company in the U.S. (behind Altria Group). RJR is an indirect wholly owned subsidiary of Reynolds American
Inc., which in turn is 42% owned by British American Tobacco. Located in Jacksonville, Marine Corps Base Camp Lejeune,
combined with nearby bases Marine Corps Air Station (MCAS) Cherry Point, MCAS New River, Camp Geiger, Camp Johnson,
Stone Bay and Courthouse Bay, makes up the largest concentration of Marines and sailors in the world. MCAS Cherry
Point is home of the 2nd Marine Aircraft Wing. Located in Goldsboro, Seymour Johnson Air Force Base is home of the
4th Fighter Wing and 916th Air Refueling Wing. One of the busiest air stations in the United States Coast Guard is
located at the Coast Guard Air Station in Elizabeth City. Also located in North Carolina is the Military Ocean Terminal Sunny Point in Southport.
The Heian period (平安時代, Heian jidai) is the last division of classical Japanese history, running from 794 to 1185. The period
is named after the capital city of Heian-kyō, or modern Kyōto. It is the period in Japanese history when Buddhism,
Taoism and other Chinese influences were at their height. The Heian period is also considered the peak of the Japanese
imperial court and noted for its art, especially poetry and literature. Although the Imperial House of Japan had
power on the surface, the real power was in the hands of the Fujiwara clan, a powerful aristocratic family who had
intermarried with the imperial family. Many emperors actually had mothers from the Fujiwara family. Heian (平安) means "peace" in Japanese. The Heian period was preceded by the Nara period and began in 794 AD, after the movement of the capital of Japan to Heian-kyō (present-day Kyōto, 京都) by the 50th emperor, Emperor Kanmu. Kanmu first tried to
move the capital to Nagaoka-kyō, but a series of disasters befell the city, prompting the emperor to relocate the
capital a second time, to Heian. The Heian Period is considered a high point in Japanese culture that later generations
have always admired. The period is also noted for the rise of the samurai class, which would eventually take power
and start the feudal period of Japan. Nominally, sovereignty lay in the emperor but in fact power was wielded by
the Fujiwara nobility. However, to protect their interests in the provinces, the Fujiwara and other noble families
required guards, police and soldiers. The warrior class made steady political gains throughout the Heian period.
As early as 939 AD, Taira no Masakado threatened the authority of the central government, leading an uprising in
the eastern province of Hitachi, and almost simultaneously, Fujiwara no Sumitomo rebelled in the west. Still, a true
military takeover of the Japanese government was centuries away, when much of the strength of the government would
lie within the private armies of the shogunate. When Emperor Kammu moved the capital to Heian-kyō (Kyōto), which
remained the imperial capital for the next 1,000 years, he did so not only to strengthen imperial authority but also
to improve his seat of government geopolitically. Nara was abandoned after only 70 years in part due to the ascendancy
of Dōkyō and the encroaching secular power of the Buddhist institutions there. Kyōto had good river access to the
sea and could be reached by land routes from the eastern provinces. The early Heian period (784–967) continued Nara
culture; the Heian capital was patterned on the Chinese Tang capital at Chang'an, as was Nara, but on a larger scale
than Nara. Kammu endeavoured to improve the Tang-style administrative system which was in use. Known as the ritsuryō,
this system attempted to recreate the Tang imperium in Japan, despite the "tremendous differences in the levels of
development between the two countries". Despite the decline of the Taika-Taihō reforms, imperial government was vigorous
during the early Heian period. Indeed, Kammu's avoidance of drastic reform decreased the intensity of political struggles,
and he became recognized as one of Japan's most forceful emperors. Although Kammu had abandoned universal conscription
in 792, he still waged major military offensives to subjugate the Emishi, possible descendants of the displaced Jōmon,
living in northern and eastern Japan. After making temporary gains in 794, in 797 Kammu appointed a new commander,
Sakanoue no Tamuramaro, under the title Sei-i Taishōgun (Barbarian-subduing generalissimo). By 801 the shogun had
defeated the Emishi and had extended the imperial domains to the eastern end of Honshū. Imperial control over the
provinces was tenuous at best, however. In the ninth and tenth centuries, much authority was lost to the great families,
who disregarded the Chinese-style land and tax systems imposed by the government in Kyoto. Stability came to Japan,
but, even though succession was ensured for the imperial family through heredity, power again concentrated in the
hands of one noble family, the Fujiwara. Following Kammu's death in 806 and
a succession struggle among his sons, two new offices were established in an effort to adjust the Taika-Taihō administrative
structure. Through the new Emperor's Private Office, the emperor could issue administrative edicts more directly
and with more self-assurance than before. The new Metropolitan Police Board replaced the largely ceremonial imperial
guard units. While these two offices strengthened the emperor's position temporarily, soon they and other Chinese-style
structures were bypassed in the developing state. In 838 the end of the imperial-sanctioned missions to Tang China,
which had begun in 630, marked the effective end of Chinese influence. Tang China was in a state of decline, and
Chinese Buddhists were severely persecuted, undermining Japanese respect for Chinese institutions. Japan began to
turn inward. As the Soga clan had taken control of the throne in the sixth century, the Fujiwara by the ninth century
had intermarried with the imperial family, and one of their members was the first head of the Emperor's Private Office.
Another Fujiwara became regent, Sesshō for his grandson, then a minor emperor, and yet another was appointed Kampaku.
Toward the end of the ninth century, several emperors tried, but failed, to check the Fujiwara. For a time, however,
during the reign of Emperor Daigo (897-930), the Fujiwara regency was suspended as he ruled directly. Nevertheless,
the Fujiwara were not demoted by Daigo but actually became stronger during his reign. Central control of Japan had
continued to decline, and the Fujiwara, along with other great families and religious foundations, acquired ever
larger shōen and greater wealth during the early tenth century. By the early Heian period, the shōen had obtained
legal status, and the large religious establishments sought clear titles in perpetuity, waiver of taxes, and immunity
from government inspection of the shōen they held. Those people who worked the land found it advantageous to transfer
title to shōen holders in return for a share of the harvest. People and lands were increasingly beyond central control
and taxation, a de facto return to conditions before the Taika Reform. Despite their usurpation of imperial authority,
the Fujiwara presided over a period of cultural and artistic flowering at the imperial court and among the aristocracy.
There was great interest in graceful poetry and vernacular literature. Two types of phonetic Japanese script emerged: katakana, a simplified script developed from parts of Chinese characters, and hiragana, a cursive syllabary with a distinct writing method that was uniquely Japanese. Hiragana gave written expression to the spoken
word and, with it, to the rise in Japan's famous vernacular literature, much of it written by court women who had
not been trained in Chinese as had their male counterparts. Three late tenth century and early eleventh century women
presented their views of life and romance at the Heian court in Kagerō Nikki by "the mother of Fujiwara Michitsuna",
The Pillow Book by Sei Shōnagon and The Tale of Genji by Murasaki Shikibu. Indigenous art also flourished under the
Fujiwara after centuries of imitating Chinese forms. Vividly colored yamato-e, Japanese style paintings of court
life and stories about temples and shrines became common in the mid- and late Heian periods, setting patterns for
Japanese art to this day. As culture flourished, so did decentralization. Whereas the first phase of shōen development
in the early Heian period had seen the opening of new lands and the granting of the use of lands to aristocrats and
religious institutions, the second phase saw the growth of patrimonial "house governments," as in the old clan system.
(In fact, the form of the old clan system had remained largely intact within the great old centralized government.)
New institutions were now needed in the face of social, economic, and political changes. The Taihō Code lapsed, its
institutions relegated to ceremonial functions. Family administrations now became public institutions. As the most
powerful family, the Fujiwara governed Japan and determined the general affairs of state, such as succession to the
throne. Family and state affairs were thoroughly intermixed, a pattern followed among other families, monasteries,
and even the imperial family. Land management became the primary occupation of the aristocracy, not so much because
direct control by the imperial family or central government had declined but more from strong family solidarity and
a lack of a sense of Japan as a single nation. Under the early courts, when military conscription had been centrally
controlled, military affairs had been taken out of the hands of the provincial aristocracy. But as the system broke
down after 792, local power holders again became the primary source of military strength. The re-establishment of
an efficient military system was made gradually through a process of trial-and-error. At that time the imperial court
did not possess an army but rather relied on an organization of professional warriors composed mainly of oryoshi, who were appointed to an individual province, and tsuibushi, who were appointed over imperial circuits or for specific tasks. This gave rise to the Japanese military class. Nonetheless, final authority rested with the imperial
court. Shōen holders had access to manpower and, as they obtained improved military technology (such as new training
methods, more powerful bows, armor, horses, and superior swords) and faced worsening local conditions in the ninth
century, military service became part of shōen life. Not only the shōen but also civil and religious institutions
formed private guard units to protect themselves. Gradually, the provincial upper class was transformed into a new
military elite based on the ideals of the bushi (warrior) or samurai (literally, one who serves). Bushi interests
were diverse, cutting across old power structures to form new associations in the tenth century. Mutual interests,
family connections, and kinship were consolidated in military groups that became part of family administration. In
time, large regional military families formed around members of the court aristocracy who had become prominent provincial
figures. These military families gained prestige from connections to the imperial court and court-granted military
titles and access to manpower. The Fujiwara family, Taira clan, and Minamoto clan were among the most prominent families
supported by the new military class. The Fujiwara controlled the throne until the reign of Emperor Go-Sanjō (1068-1073),
the first emperor not born of a Fujiwara mother since the ninth century. Go-Sanjo, determined to restore imperial
control through strong personal rule, implemented reforms to curb Fujiwara influence. He also established an office
to compile and validate estate records with the aim of reasserting central control. Many shōen were not properly
certified, and large landholders, like the Fujiwara, felt threatened with the loss of their lands. Go-Sanjo also
established the In-no-cho (ja:院庁 Office of the Cloistered Emperor), which was held by a succession of emperors who
abdicated to devote themselves to behind-the-scenes governance, or insei. The In-no-cho filled the void left by the
decline of Fujiwara power. Rather than being banished, the Fujiwara were mostly retained in their old positions of
civil dictator and minister of the center while being bypassed in decision making. In time, many of the Fujiwara
were replaced, mostly by members of the rising Minamoto family. While the Fujiwara fell into disputes among themselves
and formed northern and southern factions, the insei system allowed the paternal line of the imperial family to gain
influence over the throne. The period from 1086 to 1156 was the age of supremacy of the In-no-cho and of the rise
of the military class throughout the country. Military might rather than civil authority dominated the government.
A struggle for succession in the mid-twelfth century gave the Fujiwara an opportunity to regain their former power.
Fujiwara no Yorinaga sided with the retired emperor in a violent battle in 1156 against the heir apparent, who was
supported by the Taira and Minamoto (Hōgen Rebellion). In the end, the Fujiwara were destroyed, the old system of
government supplanted, and the insei system left powerless as bushi took control of court affairs, marking a turning
point in Japanese history. In 1159, the Taira and Minamoto clashed (Heiji Rebellion), and a twenty-year period of
Taira ascendancy began. Taira Kiyomori emerged as the real power in Japan following the Minamoto's destruction, and
he would remain in command for the next 20 years. He gave his daughter Tokuko in marriage to the young emperor Takakura,
who died at only 19, leaving their infant son Antoku to succeed to the throne. Kiyomori filled no fewer than 50 government posts with his relatives, improved shipping on the Inland Sea, and encouraged trade with Sung China. He also took aggressive actions
to safeguard his power when necessary, including the removal and exile of 45 court officials and the razing of two
troublesome temples, Todai-ji and Kofuku-ji. With Minamoto no Yoritomo firmly established after the Taira's defeat in the Genpei War (1180–1185), the bakufu system that would govern
Japan for the next seven centuries was in place. He appointed military governors, or shugo, to rule over the provinces, and stewards, or jitō, to supervise public and private estates. Yoritomo then turned his attention to the elimination
of the powerful Fujiwara family, which sheltered his rebellious brother Yoshitsune. Three years later, he was appointed
shogun in Kyoto. One year before his death in 1199, Yoritomo expelled the teenage emperor Go-Toba from the throne.
Two of Go-Toba's sons succeeded him, but they would also be removed by Yoritomo's successors to the shogunate. Buddhism
began to spread throughout Japan during the Heian period, primarily through two major esoteric sects, Tendai and
Shingon. Tendai originated in China and is based on the Lotus Sutra, one of the most important sutras of Mahayana
Buddhism; Saichō was key to its transmission to Japan. Shingon is the Japanese transmission of the Chinese Chen Yen
school. Shingon, brought to Japan by the monk Kūkai, emphasizes Esoteric Buddhism. Both Kūkai and Saichō aimed to
connect state and religion and establish support from the aristocracy, leading to the notion of 'aristocratic Buddhism'.
An important element of Tendai doctrine was the suggestion that enlightenment was accessible to "every creature".
Saichō also sought independent ordination for Tendai monks. A close relationship developed between the Tendai monastery
complex on Mount Hiei and the imperial court in its new capital at the foot of the mountain. As a result, Tendai
emphasized great reverence for the emperor and the nation. Kammu himself was a notable patron of the otherworldly
Tendai sect, which rose to great power over the ensuing centuries. Kūkai greatly impressed the emperors who succeeded
Emperor Kammu, and also generations of Japanese, not only with his holiness but also with his poetry, calligraphy,
painting, and sculpture. Shingon, through its use of "rich symbols, rituals and mandalas" held a wide-ranging appeal.
Poetry, in particular, was a staple of court life. Nobles and ladies-in-waiting were expected to be well versed in
the art of writing poetry as a mark of their status. Every occasion could call for the writing of a verse, from the
birth of a child to the coronation of an emperor, or even a pretty scene of nature. A well-written poem could easily make or break one's reputation, and often was a key part of social interaction. Almost as important was
the choice of calligraphy, or handwriting, used. The Japanese of this period believed handwriting could reflect the
condition of a person's soul: therefore, poor or hasty writing could be considered a sign of poor breeding. Whether
the script was Chinese or Japanese, good writing and artistic skill was paramount to social reputation when it came
to poetry. Sei Shōnagon mentions in her Pillow Book that when a certain courtier tried to ask her advice about how to write a poem to the Empress Sadako, she had to politely rebuke him because his writing was so poor. The lyrics
of the modern Japanese national anthem, Kimi ga Yo, were written in the Heian period, as was The Tale of Genji by
Murasaki Shikibu, one of the first novels ever written. Murasaki Shikibu's contemporary and rival Sei Shōnagon's
revealing observations and musings as an attendant in the Empress' court were recorded collectively as The Pillow
Book in the 990s, which revealed the quotidian capital lifestyle. The Heian period produced a flowering of poetry
including works of Ariwara no Narihira, Ono no Komachi, Izumi Shikibu, Murasaki Shikibu, Saigyō and Fujiwara no Teika.
The famous Japanese poem known as the Iroha (いろは), of uncertain authorship, was also written during the Heian period.
While on one hand the Heian period was an unusually long period of peace, it can also be argued that the period weakened
Japan economically and led to poverty for all but a tiny few of its inhabitants. The control of rice fields provided
a key source of income for families such as the Fujiwara and was a fundamental base for their power. The aristocratic
beneficiaries of Heian culture, the Ryōmin (良民 "Good People") numbered about five thousand in a land of perhaps five
million. One reason the samurai were able to take power was that the ruling nobility proved incompetent at managing
Japan and its provinces. By the year 1000 the government no longer knew how to issue currency and money was gradually
disappearing. Instead of a fully realised system of money circulation, rice was the primary unit of exchange. The
lack of a solid medium of economic exchange is implicitly illustrated in novels of the time. For instance, messengers
were rewarded with useful objects, e.g., an old silk kimono, rather than paid a fee. The Fujiwara rulers failed to
maintain adequate police forces, which left robbers free to prey on travelers. This is implicitly illustrated in
novels by the terror that night travel inspired in the main characters. The shōen system enabled the accumulation
of wealth by an aristocratic elite; the economic surplus can be linked to the cultural developments of the Heian
period and the "pursuit of arts". The major Buddhist temples in Heian-kyō and Nara also made use of the shōen. The
establishment of branches rurally and integration of some Shinto shrines within these temple networks reflects a
greater "organizational dynamism". The game Total War: Shogun 2 has the Rise of the Samurai expansion pack as a downloadable campaign. It allows the player to play out their own version of the Genpei War, which happened during the Heian period.
The player is able to choose one of the most powerful families of Japan at the time, the Taira, Minamoto or Fujiwara;
each family fielding two branches for a total of six playable clans. The expansion pack features a different set
of land units, ships and buildings and is also playable in the multiplayer modes.
On the Origin of Species, published on 24 November 1859, is a work of scientific literature by Charles Darwin which is considered
to be the foundation of evolutionary biology. Darwin's book introduced the scientific theory that populations evolve
over the course of generations through a process of natural selection. It presented a body of evidence that the diversity
of life arose by common descent through a branching pattern of evolution. Darwin included evidence that he had gathered
on the Beagle expedition in the 1830s and his subsequent findings from research, correspondence, and experimentation.
Various evolutionary ideas had already been proposed to explain new findings in biology. There was growing support
for such ideas among dissident anatomists and the general public, but during the first half of the 19th century the
English scientific establishment was closely tied to the Church of England, while science was part of natural theology.
Ideas about the transmutation of species were controversial as they conflicted with the beliefs that species were
unchanging parts of a designed hierarchy and that humans were unique, unrelated to other animals. The political and
theological implications were intensely debated, but transmutation was not accepted by the scientific mainstream.
The book was written for non-specialist readers and attracted widespread interest upon its publication. As Darwin
was an eminent scientist, his findings were taken seriously and the evidence he presented generated scientific, philosophical,
and religious discussion. The debate over the book contributed to the campaign by T. H. Huxley and his fellow members
of the X Club to secularise science by promoting scientific naturalism. Within two decades there was widespread scientific
agreement that evolution, with a branching pattern of common descent, had occurred, but scientists were slow to give
natural selection the significance that Darwin thought appropriate. During "the eclipse of Darwinism" from the 1880s
to the 1930s, various other mechanisms of evolution were given more credit. With the development of the modern evolutionary
synthesis in the 1930s and 1940s, Darwin's concept of evolutionary adaptation through natural selection became central
to modern evolutionary theory, and it has now become the unifying concept of the life sciences. In later editions
of the book, Darwin traced evolutionary ideas as far back as Aristotle; the text he cites is a summary by Aristotle
of the ideas of the earlier Greek philosopher Empedocles. Early Christian Church Fathers and Medieval European scholars
interpreted the Genesis creation narrative allegorically rather than as a literal historical account; organisms were
described by their mythological and heraldic significance as well as by their physical form. Nature was widely believed
to be unstable and capricious, with monstrous births from union between species, and spontaneous generation of life.
The Protestant Reformation inspired a literal interpretation of the Bible, with concepts of creation that conflicted
with the findings of an emerging science seeking explanations congruent with the mechanical philosophy of René Descartes
and the empiricism of the Baconian method. After the turmoil of the English Civil War, the Royal Society wanted to
show that science did not threaten religious and political stability. John Ray developed an influential natural theology
of rational order; in his taxonomy, species were static and fixed, their adaptation and complexity designed by God,
and varieties showed minor differences caused by local conditions. In God's benevolent design, carnivores caused
mercifully swift death, but the suffering caused by parasitism was a puzzling problem. The biological classification
introduced by Carl Linnaeus in 1735 also viewed species as fixed according to the divine plan. In 1766, Georges Buffon
suggested that some similar species, such as horses and asses, or lions, tigers, and leopards, might be varieties
descended from a common ancestor. The Ussher chronology of the 1650s had calculated creation at 4004 BC, but by the
1780s geologists assumed a much older world. Wernerians thought strata were deposits from shrinking seas, but James
Hutton proposed a self-maintaining infinite cycle, anticipating uniformitarianism. Charles Darwin's grandfather Erasmus
Darwin outlined a hypothesis of transmutation of species in the 1790s, and Jean-Baptiste Lamarck published a more
developed theory in 1809. Both envisaged that spontaneous generation produced simple forms of life that progressively
developed greater complexity, adapting to the environment by inheriting changes in adults caused by use or disuse.
This process was later called Lamarckism. Lamarck thought there was an inherent progressive tendency driving organisms
continuously towards greater complexity, in parallel but separate lineages with no extinction. Étienne Geoffroy Saint-Hilaire contended
that embryonic development recapitulated transformations of organisms in past eras when the environment acted on
embryos, and that animal structures were determined by a constant plan as demonstrated by homologies. Georges Cuvier
strongly disputed such ideas, holding that unrelated, fixed species showed similarities that reflected a design for
functional needs. His palæontological work in the 1790s had established the reality of extinction, which he explained
by local catastrophes, followed by repopulation of the affected areas by other species. In Britain, William Paley's
Natural Theology saw adaptation as evidence of beneficial "design" by the Creator acting through natural laws. All
naturalists in the two English universities (Oxford and Cambridge) were Church of England clergymen, and science
became a search for these laws. Geologists adapted catastrophism to show repeated worldwide annihilation and creation
of new fixed species adapted to a changed environment, initially identifying the most recent catastrophe as the biblical
flood. Some anatomists such as Robert Grant were influenced by Lamarck and Geoffroy, but most naturalists regarded
their ideas of transmutation as a threat to divinely appointed social order. Darwin went to Edinburgh University
in 1825 to study medicine. In his second year he neglected his medical studies for natural history and spent four
months assisting Robert Grant's research into marine invertebrates. Grant revealed his enthusiasm for the transmutation
of species, but Darwin rejected it. Starting in 1827, at Cambridge University, Darwin learnt science as natural theology
from botanist John Stevens Henslow, and read Paley, John Herschel and Alexander von Humboldt. Filled with zeal for
science, he studied catastrophist geology with Adam Sedgwick. In December 1831, he joined the Beagle expedition as
a gentleman naturalist and geologist. He read Charles Lyell's Principles of Geology and from the first stop ashore,
at St. Jago, found Lyell's uniformitarianism a key to the geological history of landscapes. Darwin discovered fossils
resembling huge armadillos, and noted the geographical distribution of modern species in hope of finding their "centre
of creation". The three Fuegian missionaries whom the expedition returned to Tierra del Fuego were friendly and civilised,
yet to Darwin their relatives on the island seemed "miserable, degraded savages", and he no longer saw an unbridgeable
gap between humans and animals. As the Beagle neared England in 1836, he noted that species might not be fixed. Richard
Owen showed that fossils of extinct species Darwin found in South America were allied to living species on the same
continent. In March 1837, ornithologist John Gould announced that Darwin's rhea was a separate species from the previously
described rhea (though their territories overlapped), that mockingbirds collected on the Galápagos Islands represented
three separate species each unique to a particular island, and that several distinct birds from those islands were
all classified as finches. Darwin began speculating, in a series of notebooks, on the possibility that "one species
does change into another" to explain these findings, and around July sketched a genealogical branching of a single
evolutionary tree, discarding Lamarck's independent lineages progressing to higher forms. Unconventionally, Darwin
asked questions of fancy pigeon and animal breeders as well as established scientists. At the zoo he had his first
sight of an ape, and was profoundly impressed by how human the orangutan seemed. In late September 1838, he started
reading Thomas Malthus's An Essay on the Principle of Population with its statistical argument that human populations,
if unrestrained, breed beyond their means and struggle to survive. Darwin related this to the struggle for existence
among wildlife and botanist de Candolle's "warring of the species" in plants; he immediately envisioned "a force
like a hundred thousand wedges" pushing well-adapted variations into "gaps in the economy of nature", so that the
survivors would pass on their form and abilities, and unfavourable variations would be destroyed. By December 1838,
he had noted a similarity between the act of breeders selecting traits and a Malthusian Nature selecting among variants
thrown up by "chance" so that "every part of newly acquired structure is fully practical and perfected". Darwin continued
to research and extensively revise his theory while focusing on his main work of publishing the scientific results
of the Beagle voyage. He tentatively wrote of his ideas to Lyell in January 1842; then in June he roughed out a 35-page
"Pencil Sketch" of his theory. Darwin began correspondence about his theorising with the botanist Joseph Dalton Hooker
in January 1844, and by July had rounded out his "sketch" into a 230-page "Essay", to be expanded with his research
results and published if he died prematurely. In November 1844, the anonymously published popular science book Vestiges
of the Natural History of Creation, written by Scottish journalist Robert Chambers, widened public interest in the
concept of transmutation of species. Vestiges used evidence from the fossil record and embryology to support the
claim that living things had progressed from the simple to the more complex over time. But it proposed a linear progression
rather than the branching common descent theory behind Darwin's work in progress, and it ignored adaptation. Darwin
read it soon after publication, and scorned its amateurish geology and zoology, but he carefully reviewed his own
arguments after leading scientists, including Adam Sedgwick, attacked its morality and scientific errors. Vestiges
had significant influence on public opinion, and the intense debate helped to pave the way for the acceptance of
the more scientifically sophisticated Origin by moving evolutionary speculation into the mainstream. While few naturalists
were willing to consider transmutation, Herbert Spencer became an active proponent of Lamarckism and progressive
development in the 1850s. Darwin's barnacle studies convinced him that variation arose constantly and not just in
response to changed circumstances. In 1854, he completed the last part of his Beagle-related writing and began working
full-time on evolution. His thinking changed from the view that species formed in isolated populations only, as on
islands, to an emphasis on speciation without isolation; that is, he saw increasing specialisation within large stable
populations as continuously exploiting new ecological niches. He conducted empirical research focusing on difficulties
with his theory. He studied the developmental and anatomical differences between different breeds of many domestic
animals, became actively involved in fancy pigeon breeding, and experimented (with the help of his son Francis) on
ways that plant seeds and animals might disperse across oceans to colonise distant islands. By 1856, his theory was
much more sophisticated, with a mass of supporting evidence. An 1855 paper on the "introduction" of species, written
by Alfred Russel Wallace, claimed that patterns in the geographical distribution of living and fossil species could
be explained if every new species always came into existence near an already existing, closely related species. Charles
Lyell recognised the implications of Wallace's paper and its possible connection to Darwin's work, although Darwin
did not, and in a letter written on 1–2 May 1856 Lyell urged Darwin to publish his theory to establish priority.
Darwin was torn between the desire to set out a full and convincing account and the pressure to quickly produce a
short paper. He met Lyell, and in correspondence with Joseph Dalton Hooker affirmed that he did not want to expose
his ideas to review by an editor as would have been required to publish in an academic journal. He began a "sketch"
account on 14 May 1856, and by July had decided to produce a full technical treatise on species. His theory, including the principle of divergence, was complete by 5 September 1857, when he sent Asa Gray a brief but detailed abstract
of his ideas. Darwin was hard at work on his "big book" on Natural Selection, when on 18 June 1858 he received a
parcel from Wallace, who was then on the Maluku Islands (Ternate and Gilolo). It enclosed twenty pages describing an
evolutionary mechanism, a response to Darwin's recent encouragement, with a request to send it on to Lyell if Darwin
thought it worthwhile. The mechanism was similar to Darwin's own theory. Darwin wrote to Lyell that "your words have
come true with a vengeance, ... forestalled" and he would "of course, at once write and offer to send [it] to any
journal" that Wallace chose, adding that "all my originality, whatever it may amount to, will be smashed". Lyell
and Hooker agreed that a joint publication putting together Wallace's pages with extracts from Darwin's 1844 Essay
and his 1857 letter to Gray should be presented at the Linnean Society, and on 1 July 1858, the papers entitled On
the Tendency of Species to form Varieties; and on the Perpetuation of Varieties and Species by Natural Means of Selection,
by Wallace and Darwin respectively, were read out but drew little reaction. While Darwin considered Wallace's idea
to be identical to his concept of natural selection, historians have pointed out differences. Darwin described natural
selection as being analogous to the artificial selection practised by animal breeders, and emphasised competition
between individuals; Wallace drew no comparison to selective breeding, and focused on ecological pressures that kept
different varieties adapted to local conditions. Some historians have suggested that Wallace was actually discussing
group selection rather than selection acting on individual variation. After the meeting, Darwin decided to write
"an abstract of my whole work". He started work on 20 July 1858, while on holiday at Sandown, and wrote parts of
it from memory. Lyell discussed arrangements with publisher John Murray III, of the publishing house John Murray,
who responded immediately to Darwin's letter of 31 March 1859 with an agreement to publish the book without even
seeing the manuscript, and an offer to Darwin of two-thirds of the profits. (Eventually Murray paid Darwin £180 for the first edition, and by Darwin's death in 1882 the book was in its sixth edition, earning Darwin nearly £3000.) Darwin had
initially decided to call his book An abstract of an Essay on the Origin of Species and Varieties Through natural
selection, but with Murray's persuasion it was eventually changed to the snappier title: On the Origin of Species,
with the title page adding by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for
Life. Here the term "races" is used as an alternative for "varieties" and does not carry the modern connotation of
human races—the first use in the book refers to "the several races, for instance, of the cabbage" and proceeds to
a discussion of "the hereditary varieties or races of our domestic animals and plants". Darwin had his basic theory
of natural selection "by which to work" by December 1838, yet almost twenty years later, when Wallace's letter arrived
on 18 June 1858, Darwin was still not ready to publish his theory. It was long thought that Darwin avoided or delayed
making his ideas public for personal reasons. Reasons suggested have included fear of religious persecution or social
disgrace if his views were revealed, and concern about upsetting his clergymen naturalist friends or his pious wife
Emma. Charles Darwin's illness caused repeated delays. His paper on Glen Roy had proved embarrassingly wrong, and
he may have wanted to be sure he was correct. David Quammen has suggested all these factors may have contributed,
and notes Darwin's large output of books and busy family life during that time. A more recent study by science historian
John van Wyhe has determined that the idea that Darwin delayed publication only dates back to the 1940s, and Darwin's
contemporaries thought the time he took was reasonable. Darwin always finished one book before starting another.
While he was researching, he told many people about his interest in transmutation without causing outrage. He firmly
intended to publish, but it was not until September 1854 that he could work on it full-time. His estimate that writing
his "big book" would take five years was optimistic. On the Origin of Species was first published on Thursday 24
November 1859, priced at fifteen shillings with a first printing of 1,250 copies. The book had been offered to booksellers at Murray's autumn sale on Tuesday 22 November, and all available copies had been taken up immediately. After deducting presentation and review copies, and five for Stationers' Hall copyright, around 1,170 of the 1,250 copies printed were available for sale. Significantly, 500 were taken by Mudie's Library, ensuring that the
book promptly reached a large number of subscribers to the library. The second edition of 3,000 copies was quickly
brought out on 7 January 1860, and incorporated numerous corrections as well as a response to religious objections
by the addition of a new epigraph on page ii, a quotation from Charles Kingsley, and the phrase "by the Creator"
added to the closing sentence. During Darwin's lifetime the book went through six editions, with cumulative changes
and revisions to deal with counter-arguments raised. The third edition came out in 1861, with a number of sentences
rewritten or added and an introductory appendix, An Historical Sketch of the Recent Progress of Opinion on the Origin
of Species, while the fourth in 1866 had further revisions. The fifth edition, published on 10 February 1869, incorporated
more changes and for the first time included the phrase "survival of the fittest", which had been coined by the philosopher
Herbert Spencer in his Principles of Biology (1864). In January 1871, George Jackson Mivart's On the Genesis of Species
listed detailed arguments against natural selection, and claimed it included false metaphysics. Darwin made extensive
revisions to the sixth edition of the Origin (this was the first edition in which he used the word "evolution" which
had commonly been associated with embryological development, though all editions concluded with the word "evolved"),
and added a new chapter VII, Miscellaneous objections, to address Mivart's arguments. In the United States, botanist
Asa Gray, an American colleague of Darwin, negotiated with a Boston publisher for publication of an authorised American
version, but learnt that two New York publishing firms were already planning to exploit the absence of international
copyright to print Origin. Darwin was delighted by the popularity of the book, and asked Gray to keep any profits.
Gray managed to negotiate a 5% royalty with Appleton's of New York, who got their edition out in mid-January 1860,
and the other two withdrew. In a May letter, Darwin mentioned a print run of 2,500 copies, but it is not clear if
this referred to the first printing only as there were four that year. The book was widely translated in Darwin's
lifetime, but problems arose with translating concepts and metaphors, and some translations were biased by the translator's
own agenda. Darwin distributed presentation copies in France and Germany, hoping that suitable applicants would come
forward, as translators were expected to make their own arrangements with a local publisher. He welcomed the distinguished
elderly naturalist and geologist Heinrich Georg Bronn, but the German translation published in 1860 imposed Bronn's
own ideas, adding controversial themes that Darwin had deliberately omitted. Bronn translated "favoured races" as
"perfected races", and added essays on issues including the origin of life, as well as a final chapter on religious
implications partly inspired by Bronn's adherence to Naturphilosophie. In 1862, Bronn produced a second edition based
on the third English edition and Darwin's suggested additions, but then died of a heart attack. Darwin corresponded
closely with Julius Victor Carus, who published an improved translation in 1867. Darwin's attempts to find a translator
in France fell through, and the translation by Clémence Royer published in 1862 added an introduction praising Darwin's
ideas as an alternative to religious revelation and promoting ideas anticipating social Darwinism and eugenics, as
well as numerous explanatory notes giving her own answers to doubts that Darwin expressed. Darwin corresponded with
Royer about a second edition published in 1866 and a third in 1870, but he had difficulty getting her to remove her
notes and was troubled by these editions. He remained unsatisfied until a translation by Edmond Barbier was published
in 1876. A Dutch translation by Tiberius Cornelis Winkler was published in 1860. By 1864, additional translations
had appeared in Italian and Russian. In Darwin's lifetime, Origin was published in Swedish in 1871, Danish in 1872,
Polish in 1873, Hungarian in 1873–1874, Spanish in 1877 and Serbian in 1878. By 1977, it had appeared in an additional
18 languages. Page ii contains quotations by William Whewell and Francis Bacon on the theology of natural laws, harmonising
science and religion in accordance with Isaac Newton's belief in a rational God who established a law-abiding cosmos.
In the second edition, Darwin added an epigraph from Joseph Butler affirming that God could work through scientific
laws as much as through miracles, in a nod to the religious concerns of his oldest friends. The Introduction establishes
Darwin's credentials as a naturalist and author, then refers to John Herschel's letter suggesting that the origin
of species "would be found to be a natural in contradistinction to a miraculous process". Chapter I covers animal
husbandry and plant breeding, going back to ancient Egypt. Darwin discusses contemporary opinions on the origins
of different breeds under cultivation to argue that many have been produced from common ancestors by selective breeding.
As an illustration of artificial selection, he describes fancy pigeon breeding, noting that "[t]he diversity of the
breeds is something astonishing", yet all were descended from one species of rock pigeon. Darwin saw two distinct
kinds of variation: (1) rare abrupt changes he called "sports" or "monstrosities" (example: ancon sheep with short
legs), and (2) ubiquitous small differences (example: slightly shorter or longer bill of pigeons). Both types of
hereditary changes can be used by breeders. However, for Darwin the small changes were most important in evolution.
In Chapter II, Darwin specifies that the distinction between species and varieties is arbitrary, with experts disagreeing
and changing their decisions when new forms were found. He concludes that "a well-marked variety may be justly called
an incipient species" and that "species are only strongly marked and permanent varieties". He argues for the ubiquity
of variation in nature. Historians have noted that naturalists had long been aware that the individuals of a species
differed from one another, but had generally considered such variations to be limited and unimportant deviations
from the archetype of each species, that archetype being a fixed ideal in the mind of God. Darwin and Wallace made
variation among individuals of the same species central to understanding the natural world. In Chapter III, Darwin notes that both A.
P. de Candolle and Charles Lyell had stated that all organisms are exposed to severe competition. Darwin emphasizes
that he used the phrase "struggle for existence" in "a large and metaphorical sense, including dependence of one
being on another"; he gives examples ranging from plants struggling against drought to plants competing for birds
to eat their fruit and disseminate their seeds. He describes the struggle resulting from population growth: "It is
the doctrine of Malthus applied with manifold force to the whole animal and vegetable kingdoms." He discusses checks
to such increase including complex ecological interdependencies, and notes that competition is most severe between
closely related forms "which fill nearly the same place in the economy of nature". Chapter IV details natural selection
under the "infinitely complex and close-fitting ... mutual relations of all organic beings to each other and to their
physical conditions of life". Darwin takes as an example a country where a change in conditions led to extinction
of some species, immigration of others and, where suitable variations occurred, descendants of some species became
adapted to new conditions. He remarks that the artificial selection practised by animal breeders frequently produced
sharp divergence in character between breeds, and suggests that natural selection might do the same. Darwin
proposes sexual selection, driven by competition between males for mates, to explain sexually dimorphic features
such as lion manes, deer antlers, peacock tails, bird songs, and the bright plumage of some male birds. He analysed
sexual selection more fully in The Descent of Man, and Selection in Relation to Sex (1871). Natural selection was
expected to work very slowly in forming new species, but given the effectiveness of artificial selection, he could
"see no limit to the amount of change, to the beauty and infinite complexity of the coadaptations between all organic
beings, one with another and with their physical conditions of life, which may be effected in the long course of
time by nature's power of selection". Using a tree diagram and calculations, he indicates the "divergence of character"
from original species into new species and genera. He describes branches falling off as extinction occurred, while
new branches formed in "the great Tree of life ... with its ever branching and beautiful ramifications". In Darwin's
time there was no agreed-upon model of heredity; in Chapter I Darwin admitted, "The laws governing inheritance are
quite unknown." He accepted a version of the inheritance of acquired characteristics (which after Darwin's death
came to be called Lamarckism), and Chapter V discusses what he called the effects of use and disuse; he wrote that
he thought "there can be little doubt that use in our domestic animals strengthens and enlarges certain parts, and
disuse diminishes them; and that such modifications are inherited", and that this also applied in nature. Darwin
stated that some changes that were commonly attributed to use and disuse, such as the loss of functional wings in
some island dwelling insects, might be produced by natural selection. In later editions of Origin, Darwin expanded
the role attributed to the inheritance of acquired characteristics. Darwin also admitted ignorance of the source
of inheritable variations, but speculated they might be produced by environmental factors. However, one thing was
clear: whatever the exact nature and causes of new variations, Darwin knew from observation and experiment that breeders
were able to select such variations and produce huge differences in many generations of selection. The observation
that selection works in domestic animals is not undermined by a lack of understanding of the underlying hereditary mechanism.
More detail was given in Darwin's 1868 book on The Variation of Animals and Plants under Domestication, which tried
to explain heredity through his hypothesis of pangenesis. Although Darwin had privately questioned blending inheritance,
he struggled with the theoretical difficulty that novel individual variations would tend to blend into a population.
However, inherited variation could be seen, and Darwin's concept of selection working on a population with a range
of small variations was workable. It was not until the modern evolutionary synthesis in the 1930s and 1940s that
a model of heredity became completely integrated with a model of variation. This modern evolutionary synthesis has been dubbed neo-Darwinian evolution because it combines Charles Darwin's theory of evolution by natural selection with Gregor Mendel's theory of genetic inheritance. Chapter VI begins by saying the next three chapters will address possible objections
to the theory, the first being that often no intermediate forms between closely related species are found, though
the theory implies such forms must have existed. As Darwin noted, "Firstly, why, if species have descended from other
species by insensibly fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature
in confusion, instead of the species being, as we see them, well defined?" Darwin attributed this to the competition
between different forms, combined with the small number of individuals of intermediate forms, often leading to extinction
of such forms. This difficulty can be referred to as the absence or rarity of transitional varieties in habitat space.
A second objection was whether natural selection could produce complex specialised structures; his answer was that in many cases animals exist with intermediate structures that are functional. He presented flying
squirrels, and flying lemurs as examples of how bats might have evolved from non-flying ancestors. He discussed various
simple eyes found in invertebrates, starting with nothing more than an optic nerve coated with pigment, as examples
of how the vertebrate eye could have evolved. Darwin concludes: "If it could be demonstrated that any complex organ
existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would
absolutely break down. But I can find out no such case." Chapter VII (of the first edition) addresses the evolution
of instincts. His examples included two he had investigated experimentally: slave-making ants and the construction
of hexagonal cells by honey bees. Darwin noted that some species of slave-making ants were more dependent on slaves
than others, and he observed that many ant species will collect and store the pupae of other species as food. He
thought it reasonable that species with an extreme dependency on slave workers had evolved in incremental steps.
He suggested that bees that make hexagonal cells evolved in steps from bees that made round cells, under pressure
from natural selection to economise wax. Chapter VIII addresses the idea that species had special
characteristics that prevented hybrids from being fertile in order to preserve separately created species. Darwin
said that, far from being constant, the difficulty in producing hybrids of related species, and the viability and
fertility of the hybrids, varied greatly, especially among plants. Sometimes what were widely considered to be separate
species produced fertile hybrid offspring freely, and in other cases what were considered to be mere varieties of
the same species could only be crossed with difficulty. Darwin concluded: "Finally, then, the facts briefly given
in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction
between species and varieties." In the sixth edition Darwin inserted a new chapter VII (renumbering the subsequent
chapters) to respond to criticisms of earlier editions, including the objection that many features of organisms were
not adaptive and could not have been produced by natural selection. He said some such features could have been by-products
of adaptive changes to other features, and that often features seemed non-adaptive because their function was unknown,
as shown by his book on Fertilisation of Orchids that explained how their elaborate structures facilitated pollination
by insects. Much of the chapter responds to George Jackson Mivart's criticisms, including his claim that features
such as baleen filters in whales, flatfish with both eyes on one side and the camouflage of stick insects could not
have evolved through natural selection because intermediate stages would not have been adaptive. Darwin proposed
scenarios for the incremental evolution of each feature. Chapter IX deals with the fact that the geologic record
appears to show forms of life suddenly arising, without the innumerable transitional fossils expected from gradual
changes. Darwin borrowed Charles Lyell's argument in Principles of Geology that the record is extremely imperfect
as fossilisation is a very rare occurrence, spread over vast periods of time; since few areas had been geologically
explored, there could only be fragmentary knowledge of geological formations, and fossil collections were very poor.
Evolved local varieties which migrated into a wider area would seem to be the sudden appearance of a new species.
Darwin did not expect to be able to reconstruct evolutionary history, but continuing discoveries gave him well founded
hope that new finds would occasionally reveal transitional forms. To show that there had been enough time for natural
selection to work slowly, he again cited Principles of Geology and other observations based on sedimentation and
erosion, including an estimate that erosion of The Weald had taken 300 million years. The initial appearance of entire
groups of well developed organisms in the oldest fossil-bearing layers, now known as the Cambrian explosion, posed
a problem. Darwin had no doubt that earlier seas had swarmed with living creatures, but stated that he had no satisfactory
explanation for the lack of fossils. Fossil evidence of pre-Cambrian life has since been found, extending the history
of life back for billions of years. Chapter X examines whether patterns in the fossil record are better explained
by common descent and branching evolution through natural selection, than by the individual creation of fixed species.
Darwin expected species to change slowly, but not at the same rate – some organisms, such as Lingula, had remained unchanged
since the earliest fossils. The pace of natural selection would depend on variability and change in the environment.
This distanced his theory from Lamarckian laws of inevitable progress. It has been argued that this anticipated the
punctuated equilibrium hypothesis, but other scholars have preferred to emphasise Darwin's commitment to gradualism.
He cited Richard Owen's findings that the earliest members of a class were a few simple and generalised species with
characteristics intermediate between modern forms, and were followed by increasingly diverse and specialised forms,
matching the branching of common descent from an ancestor. Patterns of extinction matched his theory, with related
groups of species having a continued existence until extinction, then not reappearing. Recently extinct species were
more similar to living species than those from earlier eras, and as he had seen in South America, and William Clift
had shown in Australia, fossils from recent geological periods resembled species still living in the same area. Chapter
XI deals with evidence from biogeography, starting with the observation that differences in flora and fauna from
separate regions cannot be explained by environmental differences alone; South America, Africa, and Australia all
have regions with similar climates at similar latitudes, but those regions have very different plants and animals.
The species found in one area of a continent are more closely allied with species found in other regions of that
same continent than to species found on other continents. Darwin noted that barriers to migration played an important
role in the differences between the species of different regions. The coastal sea life of the Atlantic and Pacific
sides of Central America had almost no species in common even though the Isthmus of Panama was only a few miles wide.
His explanation was a combination of migration and descent with modification. He went on to say: "On this principle
of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families
are confined to the same areas, as is so commonly and notoriously the case." Darwin explained how a volcanic island
formed a few hundred miles from a continent might be colonised by a few species from that continent. These species
would become modified over time, but would still be related to species found on the continent, and Darwin observed
that this was a common pattern. Darwin discussed ways that species could be dispersed across oceans to colonise islands,
many of which he had investigated experimentally. Darwin discusses morphology, including the importance of homologous
structures. He says, "What can be more curious than that the hand of a man, formed for grasping, that of a mole for
digging, the leg of the horse, the paddle of the porpoise, and the wing of the bat, should all be constructed on
the same pattern, and should include the same bones, in the same relative positions?" He notes that animals of the
same class often have extremely similar embryos. Darwin discusses rudimentary organs, such as the wings of flightless
birds and the rudiments of pelvis and leg bones found in some snakes. He remarks that some rudimentary organs, such
as teeth in baleen whales, are found only in embryonic stages. The final chapter reviews points from earlier chapters,
and Darwin concludes by hoping that his theory might produce revolutionary changes in many fields of natural history.
Although he avoids the controversial topic of human origins in the rest of the book so as not to prejudice readers
against his theory, here he ventures a cautious hint that psychology would be put on a new foundation and that "Light
will be thrown on the origin of man". Darwin ends with a passage that became well known and much quoted. Darwin's
aims were twofold: to show that species had not been separately created, and to show that natural selection had been
the chief agent of change. He knew that his readers were already familiar with the concept of transmutation of species
from Vestiges, and his introduction ridicules that work as failing to provide a viable mechanism. Therefore, the
first four chapters lay out his case that selection in nature, caused by the struggle for existence, is analogous
to the selection of variations under domestication, and that the accumulation of adaptive variations provides a scientifically
testable mechanism for evolutionary speciation. Later chapters provide evidence that evolution has occurred, supporting
the idea of branching, adaptive evolution without directly proving that selection is the mechanism. Darwin presents
supporting facts drawn from many disciplines, showing that his theory could explain a myriad of observations from
many fields of natural history that were inexplicable under the alternate concept that species had been individually
created. The structure of Darwin's argument showed the influence of John Herschel, whose philosophy of science maintained
that a mechanism could be called a vera causa (true cause) if three things could be demonstrated: its existence in
nature, its ability to produce the effects of interest, and its ability to explain a wide range of observations.
While the book was readable enough to sell, its dryness ensured that it was seen as aimed at specialist scientists
and could not be dismissed as mere journalism or imaginative fiction. Unlike the still-popular Vestiges, it avoided
the narrative style of the historical novel and cosmological speculation, though the closing sentence clearly hinted
at cosmic progression. Darwin had long been immersed in the literary forms and practices of specialist science, and
made effective use of his skills in structuring arguments. David Quammen has described the book as written in everyday
language for a wide audience, but noted that Darwin's literary style was uneven: in some places he used convoluted
sentences that are difficult to read, while in other places his writing was beautiful. Quammen advised that later
editions were weakened by Darwin making concessions and adding details to address his critics, and recommended the
first edition. James T. Costa said that because the book was an abstract produced in haste in response to Wallace's
essay, it was more approachable than the big book on natural selection Darwin had been working on, which would have
been encumbered by scholarly footnotes and much more technical detail. He added that some parts of Origin are dense,
but other parts are almost lyrical, and the case studies and observations are presented in a narrative style unusual
in serious scientific books, which broadened its audience. The book aroused international interest and a widespread
debate, with no sharp line between scientific issues and ideological, social and religious implications. Much of
the initial reaction was hostile, but Darwin had to be taken seriously as a prominent and respected name in science.
There was much less controversy than had greeted the 1844 publication Vestiges of Creation, which had been rejected
by scientists, but had influenced a wide public readership into believing that nature and human society were governed
by natural laws. The Origin of Species as a book of wide general interest became associated with ideas of social
reform. Its proponents made full use of a surge in the publication of review journals, and it was given more popular
attention than almost any other scientific work, though it failed to match the continuing sales of Vestiges. Darwin's
book legitimised scientific discussion of evolutionary mechanisms, and the newly coined term Darwinism was used to
cover the whole range of evolutionism, not just his own ideas. By the mid-1870s, evolutionism was triumphant. Scientific
readers were already aware of arguments that species changed through processes that were subject to laws of nature,
but the transmutational ideas of Lamarck and the vague "law of development" of Vestiges had not found scientific
favour. Darwin presented natural selection as a scientifically testable mechanism while accepting that other mechanisms
such as inheritance of acquired characters were possible. His strategy established that evolution through natural
laws was worthy of scientific study, and by 1875, most scientists accepted that evolution occurred but few thought
natural selection was significant. Darwin's scientific method was also disputed, with his proponents favouring the
empiricism of John Stuart Mill's A System of Logic, while opponents held to the idealist school of William Whewell's
Philosophy of the Inductive Sciences, in which investigation could begin with the intuitive truth that species were
fixed objects created by design. Early support for Darwin's ideas came from the findings of field naturalists studying
biogeography and ecology, including Joseph Dalton Hooker in 1860, and Asa Gray in 1862. Henry Walter Bates presented
research in 1861 that explained insect mimicry using natural selection. Alfred Russel Wallace discussed evidence
from his Malay archipelago research, including an 1864 paper with an evolutionary explanation for the Wallace line.
Evolution had less obvious applications to anatomy and morphology, and at first had little impact on the research
of the anatomist Thomas Henry Huxley. Despite this, Huxley strongly supported Darwin on evolution, though he called
for experiments to show whether natural selection could form new species, and questioned whether Darwin's gradualism
was sufficient to cause speciation without sudden leaps. Huxley wanted science to be secular, without religious interference,
and his article in the April 1860 Westminster Review promoted scientific naturalism over natural theology, praising
Darwin for "extending the domination of Science over regions of thought into which she has, as yet, hardly penetrated"
and coining the term "Darwinism" as part of his efforts to secularise and professionalise science. Huxley gained
influence, and initiated the X Club, which used the journal Nature to promote evolution and naturalism, shaping much
of late Victorian science. Later, the German morphologist Ernst Haeckel would convince Huxley that comparative anatomy
and palaeontology could be used to reconstruct evolutionary genealogies. The leading naturalist in Britain was the
anatomist Richard Owen, an idealist who had shifted to the view in the 1850s that the history of life was the gradual
unfolding of a divine plan. Owen's review of the Origin in the April 1860 Edinburgh Review bitterly attacked Huxley,
Hooker and Darwin, but also signalled acceptance of a kind of evolution as a teleological plan in a continuous "ordained
becoming", with new species appearing by natural birth. Others who rejected natural selection but supported "creation
by birth" included the Duke of Argyll, who explained beauty in plumage as a product of design. Since 1858, Huxley had emphasised
anatomical similarities between apes and humans, contesting Owen's view that humans were a separate sub-class. Their
disagreement over human origins came to the fore at the British Association for the Advancement of Science meeting
featuring the legendary 1860 Oxford evolution debate. In two years of acrimonious public dispute that Charles Kingsley
satirised as the "Great Hippocampus Question" and parodied in The Water-Babies as the "great hippopotamus test",
Huxley showed that Owen was incorrect in asserting that ape brains lacked a structure present in human brains. Others,
including Charles Lyell and Alfred Russel Wallace, thought that humans shared a common ancestor with apes, but higher
mental faculties could not have evolved through a purely material process. Darwin published his own explanation in
the Descent of Man (1871). Evolutionary ideas, although not natural selection, were accepted by German biologists
accustomed to ideas of homology in morphology from Goethe's Metamorphosis of Plants and from their long tradition
of comparative anatomy. Bronn's alterations in his German translation added to the misgivings of conservatives, but
enthused political radicals. Ernst Haeckel was particularly ardent, aiming to synthesise Darwin's ideas with those
of Lamarck and Goethe while still reflecting the spirit of Naturphilosophie. Their ambitious programme to reconstruct
the evolutionary history of life was joined by Huxley and supported by discoveries in palaeontology. Haeckel used
embryology extensively in his recapitulation theory, which embodied a progressive, almost linear model of evolution.
Darwin was cautious about such histories, and had already noted that von Baer's laws of embryology supported his
idea of complex branching. French-speaking naturalists in several countries showed appreciation of the much modified
French translation by Clémence Royer, but Darwin's ideas had little impact in France, where any scientists supporting
evolutionary ideas opted for a form of Lamarckism. The intelligentsia in Russia had accepted the general phenomenon
of evolution for several years before Darwin had published his theory, and scientists were quick to take it into
account, although the Malthusian aspects were felt to be relatively unimportant. The political economy of struggle
was criticised as a British stereotype by Karl Marx and by Leo Tolstoy, who had the character Levin in his novel
Anna Karenina voice sharp criticism of the morality of Darwin's views. There were serious scientific objections to
the process of natural selection as the key mechanism of evolution, including Karl von Nägeli's insistence that a
trivial characteristic with no adaptive advantage could not be developed by selection. Darwin conceded that these
could be linked to adaptive characteristics. His estimate that the age of the Earth allowed gradual evolution was
disputed by William Thomson (later awarded the title Lord Kelvin), who calculated that it had cooled in less than
100 million years. Darwin accepted blending inheritance, but Fleeming Jenkin calculated that because it blended traits,
natural selection could not accumulate useful variations. Darwin tried to meet these objections in the 5th edition. St. George Jackson Mivart
supported directed evolution, and compiled scientific and religious objections to natural selection. In response,
Darwin made considerable changes to the sixth edition. The problems of the age of the Earth and heredity were only
resolved in the 20th century. By the mid-1870s, most scientists accepted evolution, but relegated natural selection
to a minor role as they believed evolution was purposeful and progressive. The range of evolutionary theories during
"the eclipse of Darwinism" included forms of "saltationism" in which new species were thought to arise through "jumps"
rather than gradual adaptation, forms of orthogenesis claiming that species had an inherent tendency to change in
a particular direction, and forms of neo-Lamarckism in which inheritance of acquired characteristics led to progress.
The minority view of August Weismann, that natural selection was the only mechanism, was called neo-Darwinism. It
was thought that the rediscovery of Mendelian inheritance invalidated Darwin's views. While some, like Spencer, used
analogy from natural selection as an argument against government intervention in the economy to benefit the poor,
others, including Alfred Russel Wallace, argued that action was needed to correct social and economic inequities
to level the playing field before natural selection could improve humanity further. Some political commentaries,
including Walter Bagehot's Physics and Politics (1872), attempted to extend the idea of natural selection to competition
between nations and between human races. Such ideas were incorporated into what was already an ongoing effort by
some working in anthropology to provide scientific evidence for the superiority of Caucasians over non-white races
and justify European imperialism. Historians write that most such political and economic commentators had only a
superficial understanding of Darwin's scientific theory, and were as strongly influenced by other concepts about
social progress and evolution, such as the Lamarckian ideas of Spencer and Haeckel, as they were by Darwin's work.
Darwin objected to his ideas being used to justify military aggression and unethical business practices as he believed
morality was part of fitness in humans, and he opposed polygenism, the idea that human races were fundamentally distinct
and did not share a recent common ancestry. Natural theology was not a unified doctrine, and while some such as Louis
Agassiz were strongly opposed to the ideas in the book, others sought a reconciliation in which evolution was seen
as purposeful. In the Church of England, some liberal clergymen interpreted natural selection as an instrument of
God's design, with the cleric Charles Kingsley seeing it as "just as noble a conception of Deity". In the second
edition of January 1860, Darwin quoted Kingsley as "a celebrated cleric", and added the phrase "by the Creator" to
the closing sentence, which from then on read "life, with its several powers, having been originally breathed by
the Creator into a few forms or into one". While some commentators have taken this as a concession to religion that
Darwin later regretted, Darwin's view at the time was of God creating life through the laws of nature, and even in
the first edition there are several references to "creation". Baden Powell praised "Mr Darwin's masterly volume [supporting]
the grand principle of the self-evolving powers of nature". In America, Asa Gray argued that evolution is the secondary
effect, or modus operandi, of the first cause, design, and published a pamphlet defending the book in terms of theistic
evolution, Natural Selection is not inconsistent with Natural Theology. Theistic evolution became a popular compromise,
and St. George Jackson Mivart was among those accepting evolution but attacking Darwin's naturalistic mechanism.
Eventually it was realised that supernatural intervention could not be a scientific explanation, and naturalistic
mechanisms such as neo-Lamarckism were favoured over natural selection as being more compatible with purpose. Even
though the book had barely hinted at human evolution, it quickly became central to the debate as mental and moral
qualities were seen as spiritual aspects of the immaterial soul, and it was believed that animals did not have spiritual
qualities. This conflict could be reconciled by supposing there was some supernatural intervention on the path leading
to humans, or viewing evolution as a purposeful and progressive ascent to mankind's position at the head of nature.
While many conservative theologians accepted evolution, Charles Hodge argued in his 1874 critique "What is Darwinism?"
that "Darwinism", defined narrowly as including rejection of design, was atheism, though he accepted that Asa Gray
did not reject design. Asa Gray responded that this charge misrepresented Darwin's text. By the early 20th century,
four noted authors of The Fundamentals were explicitly open to the possibility that God created through evolution,
but fundamentalism inspired the American creation–evolution controversy that began in the 1920s. Some conservative
Roman Catholic writers and influential Jesuits opposed evolution in the late 19th and early 20th century, but other
Catholic writers, starting with Mivart, pointed out that early Church Fathers had not interpreted Genesis literally
in this area. The Vatican stated its official position in a 1950 papal encyclical, which held that evolution was
not inconsistent with Catholic teaching. Modern evolutionary theory continues to develop. Darwin's theory of evolution
by natural selection, with its tree-like model of branching common descent, has become the unifying theory of the
life sciences. The theory explains the diversity of living organisms and their adaptation to the environment. It
makes sense of the geologic record, biogeography, parallels in embryonic development, biological homologies, vestigiality,
cladistics, phylogenetics and other fields, with unrivalled explanatory power; it has also become essential to applied
sciences such as medicine and agriculture. Despite the scientific consensus, a religion-based political controversy
has developed over how evolution is taught in schools, especially in the United States. Interest in Darwin's writings
continues, and scholars have generated an extensive literature, the Darwin Industry, about his life and work. The
text of Origin itself has been subject to much analysis including a variorum, detailing the changes made in every
edition, first published in 1959, and a concordance, an exhaustive external index published in 1981. Worldwide commemorations
of the 150th anniversary of the publication of On the Origin of Species and the bicentenary of Darwin's birth were
scheduled for 2009. They celebrated the ideas which "over the last 150 years have revolutionised our understanding
of nature and our place within it".
The dissolution of the Soviet Union was formally enacted on December 26, 1991, as a result of the declaration no. 142-Н of
the Soviet of the Republics of the Supreme Soviet of the Soviet Union. The declaration acknowledged the independence
of the former Soviet republics and created the Commonwealth of Independent States (CIS), although five of the signatories
ratified it much later or not at all. On the previous day, Soviet President Mikhail Gorbachev, the eighth and last
leader of the Soviet Union, resigned, declared his office extinct, and handed over its powers – including control
of the Soviet nuclear missile launching codes – to Russian President Boris Yeltsin. That evening at 7:32 p.m., the
Soviet flag was lowered from the Kremlin for the last time and replaced with the pre-revolutionary Russian flag.
Mikhail Gorbachev was elected General Secretary by the Politburo on March 11, 1985, three hours after predecessor
Konstantin Chernenko's death at age 73. Gorbachev, aged 54, was the youngest member of the Politburo. His initial
goal as general secretary was to revive the Soviet economy, and he realized that doing so would require reforming
underlying political and social structures. The reforms began with personnel changes of senior Brezhnev-era officials
who would impede political and economic change. On April 23, 1985, Gorbachev brought two protégés, Yegor Ligachev
and Nikolai Ryzhkov, into the Politburo as full members. He kept the "power" ministries happy by promoting KGB Head
Viktor Chebrikov from candidate to full member and appointing Minister of Defence Marshal Sergei Sokolov as a Politburo
candidate. This liberalization, however, fostered nationalist movements and ethnic disputes within the Soviet Union.
It also led indirectly to the revolutions of 1989, in which Soviet-imposed communist regimes of the Warsaw Pact were
peacefully toppled (Romania excepted), which in turn increased pressure on Gorbachev to introduce greater democracy
and autonomy for the Soviet Union's constituent republics. Under Gorbachev's leadership, the Communist Party of the
Soviet Union in 1989 introduced limited competitive elections to a new central legislature, the Congress of People's
Deputies (although the ban on other political parties was not lifted until 1990). In May 1985, Gorbachev delivered
a speech in Leningrad advocating reforms and an anti-alcohol campaign to tackle widespread alcoholism. Prices of
vodka, wine, and beer were raised to make these drinks more expensive and discourage consumption, and rationing
was introduced. Unlike most forms of rationing, which are intended to conserve scarce goods, this was done to
restrict sales with the overt goal of curtailing drunkenness. Gorbachev's plan also included billboards promoting
sobriety, increased penalties for public drunkenness, and the censoring of drinking scenes from old movies. Although this
program was not a direct copy of Tsar Nicholas II's outright prohibition during World War I, Gorbachev faced the
same adverse economic reaction as the last Tsar had. The disincentivization of alcohol consumption was a serious
blow to the state budget: according to Alexander Yakovlev, annual collections of alcohol taxes decreased
by 100 billion rubles. Alcohol production migrated to the black market as some turned to moonshining, making "bathtub
vodka" from homegrown potatoes. Poorer, less educated Russians resorted to drinking unhealthy substitutes such as
nail polish, rubbing alcohol, or men's cologne, which placed an additional burden on Russia's healthcare
sector due to the subsequent poisoning cases. The purpose of these reforms, however, was to prop up the existing
centrally planned economy, unlike later reforms, which tended toward market socialism. On July 1, 1985, Gorbachev
promoted Eduard Shevardnadze, First Secretary of the Georgian Communist Party, to full member of the Politburo, and
the following day appointed him minister of foreign affairs, replacing longtime Foreign Minister Andrei Gromyko.
The latter, disparaged as "Mr Nyet" in the West, had served for 28 years as Minister of Foreign Affairs. Gromyko
was relegated to the largely ceremonial position of Chairman of the Presidium of the Supreme Soviet (officially Soviet
Head of State), as he was considered an "old thinker." Also on July 1, Gorbachev took the opportunity to dispose
of his main rival by removing Grigory Romanov from the Politburo, and brought Boris Yeltsin and Lev Zaikov into the
CPSU Central Committee Secretariat. In the fall of 1985, Gorbachev continued to bring younger and more energetic
men into government. On September 27, Nikolai Ryzhkov replaced 79-year-old Nikolai Tikhonov as Chairman of the Council
of Ministers, effectively the Soviet prime minister, and on October 14, Nikolai Talyzin replaced Nikolai Baibakov
as chairman of the State Planning Committee (GOSPLAN). At the next Central Committee meeting on October 15, Tikhonov
retired from the Politburo and Talyzin became a candidate. Finally, on December 23, 1985, Gorbachev appointed Yeltsin
First Secretary of the Moscow Communist Party replacing Viktor Grishin. The CTAG (Latvian: Cilvēktiesību aizstāvības
grupa, Human Rights Defense Group) Helsinki-86 was founded in July 1986 in the Latvian port town of Liepāja by three
workers: Linards Grantiņš, Raimonds Bitenieks, and Mārtiņš Bariss. Its name refers to the human-rights statements
of the Helsinki Accords. Helsinki-86 was the first openly anti-Communist organization in the U.S.S.R., and the first
openly organized opposition to the Soviet regime, setting an example for other ethnic minorities' pro-independence
movements. The "Jeltoqsan" (Kazakh for "December") of 1986 were riots in Alma-Ata, Kazakhstan, sparked
by Gorbachev's dismissal of Dinmukhamed Konayev, the First Secretary of the Communist Party of Kazakhstan and an
ethnic Kazakh, who was replaced with Gennady Kolbin, an outsider from the Russian SFSR. Demonstrations started in
the morning of December 17, 1986, with 200 to 300 students in front of the Central Committee building on Brezhnev
Square protesting Konayev's dismissal and replacement by a Russian. Protesters swelled to 1,000 to 5,000 as other
students joined the crowd. The CPK Central Committee ordered troops from the Ministry of Internal Affairs, druzhiniki
(volunteers), cadets, policemen, and the KGB to cordon the square and videotape the participants. The situation escalated
around 5 p.m., as troops were ordered to disperse the protesters. Clashes between the security forces and the demonstrators
continued throughout the night in Almaty. On the next day, December 18, protests turned into civil unrest as clashes
between troops, volunteers, militia units, and Kazakh students turned into a wide-scale confrontation. The clashes
could only be controlled on the third day. The Almaty events were followed by smaller protests and demonstrations
in Shymkent, Pavlodar, Karaganda, and Taldykorgan. Reports from Kazakh SSR authorities estimated that the riots drew
3,000 people. Other estimates put the figure at 30,000 to 40,000 protesters, with 5,000 arrested and jailed, and an
unknown number of casualties. Jeltoqsan leaders say over 60,000 Kazakhs participated in the protests. According to
the Kazakh SSR government, there were two deaths during the riots, including a volunteer police worker and a student.
Both of them had died from blows to the head. About 100 others were detained and several were sentenced
to terms in labor camps. Sources cited by the Library of Congress claimed that at least 200 people died or were summarily
executed soon thereafter; some accounts estimate casualties at more than 1,000. The writer Mukhtar Shakhanov claimed
that a KGB officer testified that 168 protesters were killed, but that figure remains unconfirmed. Gorbachev also
radically expanded the scope of Glasnost, stating that no subject was off-limits for open discussion in the media.
Even so, the cautious Soviet intelligentsia took almost a year to begin pushing the boundaries to see if he meant
what he said. For the first time, the Communist Party leader had appealed over the heads of Central Committee members
for the people's support in exchange for expansion of liberties. The tactic proved successful: Within two years political
reform could no longer be sidetracked by Party "conservatives." An unintended consequence was that, while it saved reform,
Gorbachev's move ultimately killed the very system it was designed to save. On February 7, 1987, dozens of political
prisoners were freed in the first group release since Khrushchev's "thaw" in the mid-1950s. On May 6, 1987, Pamyat,
a Russian nationalist group, held an unsanctioned demonstration in Moscow. The authorities did not break up the demonstration
and even kept traffic out of the demonstrators' way while they marched to an impromptu meeting with Boris Yeltsin,
head of the Moscow Communist Party and at the time one of Gorbachev's closest allies. On July 25, 1987, 300 Crimean
Tatars staged a noisy demonstration near the Kremlin Wall for several hours, calling for the right to return to their
homeland, from which they were deported in 1944; police and soldiers merely looked on. On September 10, 1987, after
a lecture from hardliner Yegor Ligachev at the Politburo for allowing these two unsanctioned demonstrations in Moscow,
Boris Yeltsin wrote a letter of resignation to Gorbachev, who had been holidaying on the Black Sea. Gorbachev was
stunned – no one had ever voluntarily resigned from the Politburo. At the October 27, 1987, plenary meeting of the
Central Committee, Yeltsin, frustrated that Gorbachev had not addressed any of the issues outlined in his resignation
letter, criticized the slow pace of reform, servility to the general secretary, and opposition from Ligachev that
had led to his (Yeltsin's) resignation. No one had ever addressed the Party leader so brazenly in front of the Central
Committee since Leon Trotsky in the 1920s. In his reply, Gorbachev accused Yeltsin of "political immaturity" and
"absolute irresponsibility." No one backed Yeltsin. On June 14, 1987, about 5,000 people gathered again at Freedom
Monument in Riga, and laid flowers to commemorate the anniversary of Stalin's mass deportation of Latvians in 1941.
This was the first large demonstration in the Baltic republics to commemorate the anniversary of an event contrary
to official Soviet history. The authorities did not crack down on demonstrators, which encouraged more and larger
demonstrations throughout the Baltic States. The next major anniversary after the August 23 Molotov Pact demonstration
was on November 18, the date of Latvia’s independence in 1918. On November 18, 1987, hundreds of police and civilian
militiamen cordoned off the central square to prevent any demonstration at Freedom Monument, but thousands lined
the streets of Riga in silent protest regardless. In spring 1987, a protest movement arose against new phosphate
mines in Estonia. Signatures were collected in Tartu, and students assembled in the university's main hall to express
lack of confidence in the government. At a demonstration on May 1, 1987, young people showed up with banners and
slogans despite an official ban. On August 15, 1987, former political prisoners formed the MRP-AEG group (Estonians
for the Public Disclosure of the Molotov-Ribbentrop Pact), which was headed by Tiit Madisson. In September 1987,
the Edasi newspaper published a proposal by Edgar Savisaar, Siim Kallas, Tiit Made, and Mikk Titma calling for Estonia's
transition to autonomy. Initially geared toward economic independence, then toward a certain amount of political
autonomy, the project, Isemajandav Eesti ("A Self-Managing Estonia"), became known by its Estonian acronym,
IME, which means "miracle". On October 21, a demonstration dedicated to those who gave their lives in the 1918–1920
Estonian War of Independence took place in Võru, which culminated in a conflict with the militia. For the first time
in years, the blue, black, and white national tricolor was publicly displayed. On October 17, 1987, about 3,000 Armenians
demonstrated in Yerevan complaining about the condition of Lake Sevan, the Nairit chemicals plant, and the Metsamor
Nuclear Power Plant, and air pollution in Yerevan. Police tried to prevent the protest but took no action to stop
it once the march was underway. The demonstration was led by Armenian writers such as Silva Kaputikian, Zori Balayan,
and Maro Margarian and leaders from the National Survival organization. The march originated at the Opera Plaza after
speakers, mainly intellectuals, addressed the crowd. On July 1, 1988, the fourth and last day of a bruising 19th
Party Conference, Gorbachev won the backing of the tired delegates for his last-minute proposal to create a new supreme
legislative body called the Congress of People's Deputies. Frustrated by the old guard's resistance, Gorbachev embarked
on a set of constitutional changes to try to separate party and state, and thereby isolate his conservative Party
opponents. Detailed proposals for the new Congress of People's Deputies were published on October 2, 1988, and to
enable the creation of the new legislature the Supreme Soviet, during its November 29–December 1, 1988, session,
implemented amendments to the 1977 Soviet Constitution, enacted a law on electoral reform, and set the date of the
election for March 26, 1989. On October 2, the Popular Front formally launched its political platform at a two-day
congress. Väljas attended, gambling that the front could help Estonia become a model of economic and political revival,
while moderating separatist and other radical tendencies. On November 16, 1988, the Supreme Soviet of the Estonian
SSR adopted a declaration of national sovereignty under which Estonian laws would take precedence over those of the
Soviet Union. Estonia's parliament also laid claim to the republic's natural resources including land, inland waters,
forests, mineral deposits, and to the means of industrial production, agriculture, construction, state banks, transportation,
and municipal services within the territory of Estonia's borders. On February 20, 1988, after a week of growing demonstrations
in Stepanakert, capital of the Nagorno-Karabakh Autonomous Oblast (the Armenian majority area within Azerbaijan Soviet
Socialist Republic), the Regional Soviet voted to secede and join with the Soviet Socialist Republic of Armenia.
This local vote in a small, remote part of the Soviet Union made headlines around the world; it was an unprecedented
defiance of republic and national authorities. On February 22, 1988, in what became known as the "Askeran clash",
two Azerbaijanis were killed by Karabakh police. These deaths, announced on state radio, led to the Sumgait Pogrom.
Between February 26 and March 1, the city of Sumgait (Azerbaijan) saw violent anti-Armenian rioting during which
32 people were killed. The authorities totally lost control, and the city was occupied by paratroopers and tanks; nearly
all of the 14,000 Armenian residents of Sumgait fled. Gorbachev refused to make any changes to the status of Nagorno
Karabakh, which remained part of Azerbaijan. He instead sacked the Communist Party Leaders in both Republics – on
May 21, 1988, Kamran Baghirov was replaced by Abdulrahman Vezirov as First Secretary of the Azerbaijan Communist
Party. From July 23 to September 1988, a group of Azerbaijani intellectuals worked to create a new organization
called the Popular Front of Azerbaijan, loosely based on the Estonian Popular Front. On September 17, when gun battles
broke out between the Armenians and Azerbaijanis near Stepanakert, two soldiers were killed and more than two dozen
injured. This led to almost tit-for-tat ethnic polarization in Nagorno-Karabakh's two main towns: The Azerbaijani
minority was expelled from Stepanakert, and the Armenian minority was expelled from Shusha. On November 17, 1988,
in response to the exodus of tens of thousands of Azerbaijanis from Armenia, a series of mass demonstrations began
in Baku's Lenin Square, lasting 18 days and attracting half a million demonstrators. On December 5, 1988, the Soviet
militia finally moved in, cleared the square by force, and imposed a curfew that lasted ten months. The rebellion
of fellow Armenians in Nagorno-Karabakh had an immediate effect in Armenia itself. Daily demonstrations, which began
in the Armenian capital Yerevan on February 18, initially attracted few people, but each day the Nagorno-Karabakh
issue became increasingly prominent and numbers swelled. On February 20, a 30,000-strong crowd demonstrated in Theater Square; by February 22 there were 100,000, and the next day 300,000, when a transport strike was declared; by February 25 there were close to 1 million demonstrators – about a quarter of Armenia's population. This was the first of
the large, peaceful public demonstrations that would become a feature of communism's overthrow in Prague, Berlin,
and, ultimately, Moscow. Leading Armenian intellectuals and nationalists, including future first President of independent
Armenia Levon Ter-Petrossian, formed the eleven-member Karabakh Committee to lead and organize the new movement.
Gorbachev again refused to make any changes to the status of Nagorno Karabakh, which remained part of Azerbaijan.
Instead he sacked both Republics' Communist Party Leaders: On May 21, 1988, Karen Demirchian was replaced by Suren
Harutyunyan as First Secretary of the Communist Party of Armenia. However, Harutyunyan quickly decided to run before
the nationalist wind and on May 28, allowed Armenians to unfurl the red-blue-gold First Armenian Republic flag for
the first time in almost 70 years. On June 15, 1988, the Armenian Supreme Soviet adopted a resolution formally approving
the idea of Nagorno Karabakh joining Armenia. Armenia, formerly one of the most loyal Republics, had suddenly turned
into the leading rebel republic. On July 5, 1988, when a contingent of troops was sent in to remove demonstrators
by force from Yerevan's Zvartnots International Airport, shots were fired and one student protester was killed. In
September, further large demonstrations in Yerevan led to the deployment of armored vehicles. In the autumn of 1988
almost all of the 200,000-strong Azerbaijani minority in Armenia was expelled by Armenian nationalists, with over 100 killed
in the process – this, after the Sumgait pogrom earlier that year carried out by Azerbaijanis against ethnic Armenians
and subsequent expulsion of all Armenians from Azerbaijan. On November 25, 1988, a military commandant took control
of Yerevan as the Soviet government moved to prevent further ethnic violence. Beginning in February 1988, the Democratic
Movement of Moldova (formerly Moldavia) organized public meetings, demonstrations, and song festivals, which gradually
grew in size and intensity. In the streets, the center of public manifestations was the Stephen the Great Monument
in Chişinău, and the adjacent park harboring Aleea Clasicilor (the "Alley of the Classics [of Literature]"). On
January 15, 1988, in a tribute to Mihai Eminescu at his bust on the Aleea Clasicilor, Anatol Şalaru submitted a proposal
to continue the meetings. In the public discourse, the movement called for national awakening, freedom of speech,
revival of Moldavian traditions, and for attainment of official status for the Romanian language and return to the
Latin alphabet. The transition from "movement" (an informal association) to "front" (a formal association) was seen
as a natural "upgrade" once a movement gained momentum with the public, and the Soviet authorities no longer dared
to crack down on it. On April 26, 1988, about 500 people participated in a march organized by the Ukrainian Cultural
Club on Kiev's Khreschatyk Street to mark the second anniversary of the Chernobyl nuclear disaster, carrying placards
with slogans like "Openness and Democracy to the End." Between May and June 1988, Ukrainian Catholics in western
Ukraine celebrated the Millennium of Christianity in Kievan Rus' in secret by holding services in the forests of
Buniv, Kalush, Hoshiv, and Zarvanytsia. On June 5, 1988, as the official celebrations of the Millennium were held
in Moscow, the Ukrainian Cultural Club hosted its own observances in Kiev at the monument to St. Volodymyr the Great,
the grand prince of Kievan Rus'. On June 16, 1988, 6,000 to 8,000 people gathered in Lviv to hear speakers declare
no confidence in the local list of delegates to the 19th Communist Party conference, to begin on June 29. On June
21, a rally in Lviv attracted 50,000 people who had heard about a revised delegate list. Authorities attempted to
disperse the rally in front of Druzhba Stadium. On July 7, 10,000 to 20,000 people witnessed the launch of the Democratic
Front to Promote Perestroika. On July 17, a group of 10,000 gathered in the village Zarvanytsia for Millennium services
celebrated by Ukrainian Greek-Catholic Bishop Pavlo Vasylyk. The militia tried to disperse attendees, but it turned
out to be the largest gathering of Ukrainian Catholics since Stalin outlawed the Church in 1946. On August 4, which
came to be known as "Bloody Thursday," local authorities violently suppressed a demonstration organized by the Democratic
Front to Promote Perestroika. Forty-one people were detained, fined, or sentenced to 15 days of administrative arrest.
On September 1, local authorities violently dispersed 5,000 students at a public meeting held without official permission
at Ivan Franko State University. On November 13, 1988, approximately 10,000 people attended an officially sanctioned
meeting organized by the cultural heritage organization Spadschyna, the Kyiv University student club Hromada, and
the environmental groups Zelenyi Svit ("Green World") and Noosfera, to focus on ecological issues. From November
14–18, 15 Ukrainian activists were among the 100 human-, national- and religious-rights advocates invited to discuss
human rights with Soviet officials and a visiting delegation of the U.S. Commission on Security and Cooperation in
Europe (also known as the Helsinki Commission). On December 10, hundreds gathered in Kiev to observe International
Human Rights Day at a rally organized by the Democratic Union. The unauthorized gathering resulted in the detention
of local activists. The Partyja BPF (Belarusian Popular Front) was established in 1988 as a political party and cultural
movement for democracy and independence, modeled on the popular fronts of the Baltic republics. The discovery of mass graves in Kurapaty outside Minsk by historian Zianon Pazniak, the Belarusian Popular Front’s first leader, gave additional momentum to the pro-democracy and pro-independence movement in Belarus: the Front claimed that the NKVD had carried out secret killings in Kurapaty. Initially the Front had significant visibility because its numerous public actions almost always
ended in clashes with the police and the KGB. Spring 1989 saw the people of the Soviet Union exercising a democratic
choice, albeit limited, for the first time since 1917, when they elected the new Congress of People's Deputies. Just
as important was the uncensored live TV coverage of the legislature's deliberations, where people witnessed the previously
feared Communist leadership being questioned and held accountable. This example fueled a limited experiment with
democracy in Poland, which quickly led to the toppling of the Communist government in Warsaw that summer – which
in turn sparked uprisings that overthrew communism in the other five Warsaw Pact countries before the end of 1989,
the year the Berlin Wall fell. These events showed that the people of Eastern Europe and the Soviet Union did not
support Gorbachev's drive to modernize Communism; rather, they preferred to abandon it altogether. In the March 26
general elections, voter participation was an impressive 89.8%, and 1,958 (including 1,225 district seats) of the
2,250 CPD seats were filled. In district races, run-off elections were held in 76 constituencies on April 2 and 9, and fresh elections were organized on April 20 and from May 14 to May 23 in the 199 remaining constituencies where the required
absolute majority was not attained. While most CPSU-endorsed candidates were elected, more than 300 lost to independent
candidates such as Yeltsin, physicist Andrei Sakharov and lawyer Anatoly Sobchak. In the first session of the new
Congress of People's Deputies, from May 25 to June 9, hardliners retained control but reformers used the legislature
as a platform for debate and criticism – which was broadcast live and uncensored. This transfixed the population;
nothing like this freewheeling debate had ever been witnessed in the U.S.S.R. On May 29, Yeltsin managed to secure
a seat on the Supreme Soviet, and in the summer he formed the first opposition, the Inter-Regional Deputies Group,
composed of Russian nationalists and liberals. As members of the Soviet Union's final legislature, those elected in 1989 played a vital part in the reforms and eventual breakup of the Soviet Union during the next two years. On
October 25, 1989, the Supreme Soviet voted to eliminate special seats for the Communist Party and other official
organizations in national and local elections, responding to sharp popular criticism that such reserved slots were
undemocratic. After vigorous debate, the 542-member Supreme Soviet passed the measure 254-85 (with 36 abstentions).
The decision required a constitutional amendment, ratified by the full congress, which met December 12–25. It also
passed measures that would allow direct elections for presidents of each of the 15 constituent republics. Gorbachev
strongly opposed such a move during debate but was defeated. The six Warsaw Pact countries of Eastern Europe, while
nominally independent, were widely recognized in the international community as the Soviet satellite states. All
had been occupied by the Soviet Red Army in 1945, had Soviet-style socialist states imposed upon them, and had very
restricted freedom of action in either domestic or international affairs. Any moves towards real independence were
suppressed by military force – in the Hungarian Revolution of 1956 and the Prague Spring in 1968. Gorbachev abandoned
the oppressive and expensive Brezhnev Doctrine, which mandated intervention in the Warsaw Pact states, in favor of
non-intervention in the internal affairs of allies – jokingly termed the Sinatra Doctrine in a reference to the Frank
Sinatra song "My Way". The Baltic Way or Baltic Chain (also Chain of Freedom; Estonian: Balti kett, Latvian: Baltijas
ceļš, Lithuanian: Baltijos kelias, Russian: Балтийский путь) was a peaceful political demonstration on August 23,
1989. An estimated 2 million people joined hands to form a human chain extending 600 kilometres (370 mi) across Estonia,
Latvia and Lithuania, which had been forcibly reincorporated into the Soviet Union in 1944. The colossal demonstration
marked the 50th anniversary of the Molotov–Ribbentrop Pact that divided Eastern Europe into spheres of influence
and led to the occupation of the Baltic states in 1940. On December 7, 1989, the Communist Party of Lithuania under
the leadership of Algirdas Brazauskas, split from the Communist Party of the Soviet Union and abandoned its claim
to have a constitutional "leading role" in politics. A smaller loyalist faction of the Communist Party, headed by
hardliner Mykolas Burokevičius, was established and remained affiliated with the CPSU. However, Lithuania’s governing
Communist Party was formally independent from Moscow's control – a first for Soviet Republics and a political earthquake
that prompted Gorbachev to arrange a visit to Lithuania the following month in a futile attempt to bring the local
party back under control. On July 16, 1989, the Popular Front of Azerbaijan held its first congress and elected Abulfaz
Elchibey, who would become President, as its Chairman. On August 19, 600,000 protesters jammed Baku’s Lenin Square
(now Azadliq Square) to demand the release of political prisoners. In the second half of 1989, weapons were handed
out in Nagorno-Karabakh. When Karabakhis got hold of small arms to replace hunting rifles and crossbows, casualties
began to mount; bridges were blown up, roads were blockaded, and hostages were taken. In a new and effective tactic,
the Popular Front launched a rail blockade of Armenia, which caused petrol and food shortages because 85 percent
of Armenia's freight came from Azerbaijan. Under pressure from the Popular Front the Communist authorities in Azerbaijan
started making concessions. On September 25, they passed a sovereignty law that gave precedence to Azerbaijani law,
and on October 4, the Popular Front was permitted to register as a legal organization as long as it lifted the blockade.
Transport communications between Azerbaijan and Armenia never fully recovered. Tensions continued to escalate and
on December 29, Popular Front activists seized local party offices in Jalilabad, wounding dozens. On April 7, 1989,
Soviet troops and armored personnel carriers were sent to Tbilisi after more than 100,000 people protested in front
of Communist Party headquarters with banners calling for Georgia to secede from the Soviet Union and for Abkhazia
to be fully integrated into Georgia. On April 9, 1989, troops attacked the demonstrators; some 20 people were killed
and more than 200 wounded. This event radicalized Georgian politics, prompting many to conclude that independence
was preferable to continued Soviet rule. On April 14, Gorbachev removed Jumber Patiashvili as First Secretary of
the Georgian Communist Party and replaced him with former Georgian KGB chief Givi Gumbaridze. In Ukraine, Lviv and
Kiev celebrated Ukrainian Independence Day on January 22, 1989. Thousands gathered in Lviv for an unauthorized moleben
(religious service) in front of St. George's Cathedral. In Kiev, 60 activists met in a Kiev apartment to commemorate
the proclamation of the Ukrainian People's Republic in 1918. On February 11–12, 1989, the Ukrainian Language Society
held its founding congress. On February 15, 1989, the formation of the Initiative Committee for the Renewal of the
Ukrainian Autocephalous Orthodox Church was announced. The program and statutes of the movement were proposed by
the Writers Association of Ukraine and were published in the journal Literaturna Ukraina on February 16, 1989. The
organization heralded Ukrainian dissidents such as Vyacheslav Chornovil. In late February, large public rallies took
place in Kiev to protest the election laws, on the eve of the March 26 elections to the USSR Congress of People's
Deputies, and to call for the resignation of the first secretary of the Communist Party of Ukraine, Volodymyr Shcherbytsky,
lampooned as "the mastodon of stagnation." The demonstrations coincided with a visit to Ukraine by Soviet President
Gorbachev. On February 26, 1989, between 20,000 and 30,000 people participated in an unsanctioned ecumenical memorial
service in Lviv, marking the anniversary of the death of 19th-century Ukrainian artist and nationalist Taras Shevchenko.
On March 4, 1989, the Memorial Society, committed to honoring the victims of Stalinism and cleansing society of Soviet
practices, was founded in Kiev. A public rally was held the next day. On March 12, a pre-election meeting organized
in Lviv by the Ukrainian Helsinki Union and the Marian Society Myloserdia (Compassion) was violently dispersed, and
nearly 300 people were detained. On March 26, elections were held to the union Congress of People's Deputies; by-elections
were held on April 9, May 14, and May 21. Among the 225 Ukrainian deputies, most were conservatives, though a handful
of progressives made the cut. From April 20–23, 1989, pre-election meetings were held in Lviv for four consecutive
days, drawing crowds of up to 25,000. The action included a one-hour warning strike at eight local factories and
institutions. It was the first labor strike in Lviv since 1944. On May 3, a pre-election rally attracted 30,000 in
Lviv. On May 7, the Memorial Society organized a mass meeting at Bykivnia, site of a mass grave of Ukrainian and
Polish victims of Stalinist terror. After a march from Kiev to the site, a memorial service was staged. From mid-May
to September 1989, Ukrainian Greek-Catholic hunger strikers staged protests on Moscow's Arbat to call attention to
the plight of their Church. They were especially active during the July session of the World Council of Churches
held in Moscow. The protest ended with the arrests of the group on September 18. On May 27, 1989, the founding conference
of the Lviv regional Memorial Society was held. On June 18, 1989, an estimated 100,000 faithful participated in public
religious services in Ivano-Frankivsk in western Ukraine, responding to Cardinal Myroslav Lubachivsky's call for
an international day of prayer. On August 19, 1989, the Russian Orthodox Parish of Saints Peter and Paul announced
it would be switching to the Ukrainian Autocephalous Orthodox Church. On September 2, 1989, tens of thousands across
Ukraine protested a draft election law that reserved special seats for the Communist Party and for other official
organizations: 50,000 in Lviv, 40,000 in Kiev, 10,000 in Zhytomyr, 5,000 each in Dniprodzerzhynsk and Chervonohrad,
and 2,000 in Kharkiv. From September 8–10, 1989, writer Ivan Drach was elected to head Rukh, the People's Movement
of Ukraine, at its founding congress in Kiev. On September 17, between 150,000 and 200,000 people marched in Lviv,
demanding the legalization of the Ukrainian Greek Catholic Church. On September 21, 1989, exhumation of a mass grave
began in Demianiv Laz, a nature preserve south of Ivano-Frankivsk. On September 28, First Secretary of the Communist Party of Ukraine Volodymyr Shcherbytsky, a holdover from the Brezhnev era, was replaced by Volodymyr Ivashko.
On October 1, 1989, a peaceful demonstration of 10,000 to 15,000 people was violently dispersed by the militia in
front of Lviv's Druzhba Stadium, where a concert celebrating the Soviet "reunification" of Ukrainian lands was being
held. On October 10, Ivano-Frankivsk was the site of a pre-election protest attended by 30,000 people. On October
15, several thousand people gathered in Chervonohrad, Chernivtsi, Rivne, and Zhytomyr; 500 in Dnipropetrovsk; and
30,000 in Lviv to protest the election law. On October 20, faithful and clergy of the Ukrainian Autocephalous Orthodox
Church participated in a synod in Lviv, the first since its forced liquidation in the 1930s. On October 24, the union
Supreme Soviet passed a law eliminating special seats for Communist Party and other official organizations' representatives.
On October 26, twenty factories in Lviv held strikes and meetings to protest the police brutality of October 1 and
the authorities' unwillingness to prosecute those responsible. From October 26–28, the Zelenyi Svit (Friends of the
Earth – Ukraine) environmental association held its founding congress, and on October 27 the Ukrainian Supreme Soviet
passed a law eliminating the special status of party and other official organizations. On October 28, 1989, the Ukrainian
Supreme Soviet decreed that effective January 1, 1990, Ukrainian would be the official language of Ukraine, while
Russian would be used for communication between ethnic groups. On the same day, the congregation of the Church of
the Transfiguration in Lviv left the Russian Orthodox Church and proclaimed itself the Ukrainian Greek Catholic Church.
The following day, thousands attended a memorial service at Demianiv Laz, and a temporary marker was placed to indicate
that a monument to the "victims of the repressions of 1939–1941" soon would be erected. In mid-November The Shevchenko
Ukrainian Language Society was officially registered. On November 19, 1989, a public gathering in Kiev attracted
thousands of mourners, friends and family to the reburial in Ukraine of three inmates of the infamous Gulag Camp
No. 36 in Perm in the Ural Mountains: human-rights activists Vasyl Stus, Oleksiy Tykhy, and Yuriy Lytvyn. Their remains
were reinterred in Baikove Cemetery. On November 26, 1989, a day of prayer and fasting was proclaimed by Cardinal
Myroslav Lubachivsky; thousands of faithful in western Ukraine participated in religious services on the eve of a
meeting between Pope John Paul II and Soviet President Gorbachev. On November 28, 1989, the Ukrainian SSR's Council
for Religious Affairs issued a decree allowing Ukrainian Catholic congregations to register as legal organizations.
The decree was proclaimed on December 1, coinciding with a meeting at the Vatican between the pope and the Soviet
president. On September 30, 1989, thousands of Belorussians, denouncing local leaders, marched through Minsk to demand
additional cleanup of the 1986 Chernobyl disaster site in Ukraine. Up to 15,000 protesters wearing armbands bearing
radioactivity symbols and carrying the banned red-and-white Belorussian national flag filed through torrential rain
in defiance of a ban by local authorities. Later, they gathered in the city center near the government's headquarters,
where speakers demanded resignation of Yefrem Sokolov, the republic's Communist Party leader, and called for the
evacuation of half a million people from the contaminated zones. Thousands of Soviet troops were sent to the Fergana
Valley, southeast of the Uzbek capital Tashkent, to re-establish order after clashes in which local Uzbeks hunted
down members of the Meskhetian minority in several days of rioting between June 4–11, 1989; about 100 people were
killed. On June 23, 1989, Gorbachev removed Rafiq Nishonov as First Secretary of the Communist Party of the Uzbek
SSR and replaced him with Islam Karimov, who went on to lead Uzbekistan as a Soviet Republic and subsequently as an independent
state. In Kazakhstan on June 19, 1989, young men carrying guns, firebombs, iron bars and stones rioted in Zhanaozen,
causing a number of deaths. The youths tried to seize a police station and a water-supply station. They brought public
transportation to a halt and shut down various shops and industries. By June 25, the rioting had spread to five other
towns near the Caspian Sea. A mob of about 150 people armed with sticks, stones and metal rods attacked the police
station in Mangishlak, about 90 miles from Zhanaozen, before they were dispersed by government troops flown in by
helicopters. Mobs of young people also rampaged through Yeraliev, Shepke, Fort-Shevchenko and Kulsary, where they
poured flammable liquid on trains housing temporary workers and set them on fire. Ethnic tensions had escalated between
the Armenians and Azerbaijanis in spring and summer 1988. On January 9, 1990, after the Armenian parliament voted
to include Nagorno-Karabakh within its budget, renewed fighting broke out, hostages were taken, and four Soviet soldiers
were killed. On January 11, Popular Front radicals stormed party buildings and effectively overthrew the communist
powers in the southern town of Lenkoran. Gorbachev resolved to regain control of Azerbaijan; the events that ensued
are known as "Black January." Late on January 19, 1990, after blowing up the central television station and cutting
the phone and radio lines, 26,000 Soviet troops entered the Azerbaijani capital Baku, smashing barricades, attacking
protesters, and firing into crowds. On that night and during subsequent confrontations (which lasted until February),
more than 130 people died – the majority of whom were civilians. More than 700 civilians were wounded, hundreds were
detained, but only a few were actually tried for alleged criminal offenses. Following the hardliners' takeover, the
September 30, 1990 elections (runoffs on October 14) were characterized by intimidation; several Popular Front candidates
were jailed, two were murdered, and unabashed ballot stuffing took place even in the presence of Western observers.
The election results reflected the threatening environment; out of the 350 members, 280 were Communists, with only
45 opposition candidates from the Popular Front and other non-communist groups, who together formed a Democratic
Bloc ("Dembloc"). In May 1990, Ayaz Mutalibov was elected Chairman of the Supreme Soviet unopposed. On January 21, 1990,
Rukh organized a 300-mile (480 km) human chain between Kiev, Lviv, and Ivano-Frankivsk. Hundreds of thousands joined
hands to commemorate the proclamation of Ukrainian independence in 1918 and the reunification of Ukrainian lands
one year later (1919 Unification Act). On January 23, 1990, the Ukrainian Greek-Catholic Church held its first synod
since its liquidation by the Soviets in 1946 (an act which the gathering declared invalid). On February 9, 1990,
the Ukrainian Ministry of Justice officially registered Rukh. However, the registration came too late for Rukh to
stand its own candidates for the parliamentary and local elections on March 4. At the 1990 elections of people's
deputies to the Supreme Council (Verkhovna Rada), candidates from the Democratic Bloc won landslide victories in
western Ukrainian oblasts. A majority of the seats required run-off elections. On March 18, Democratic candidates
scored further victories in the run-offs. The Democratic Bloc gained about 90 out of 450 seats in the new parliament.
On April 6, 1990, the Lviv City Council voted to return St. George Cathedral to the Ukrainian Greek Catholic Church.
The Russian Orthodox Church refused to yield. On April 29–30, 1990, the Ukrainian Helsinki Union disbanded to form
the Ukrainian Republican Party. On May 15 the new parliament convened. The bloc of conservative communists held 239
seats; the Democratic Bloc, which had evolved into the National Council, had 125 deputies. On June 4, 1990, two candidates
remained in the protracted race for parliament chair. The leader of the Communist Party of Ukraine (CPU), Volodymyr
Ivashko, was elected with 60 percent of the vote as more than 100 opposition deputies boycotted the election. On
June 5–6, 1990, Metropolitan Mstyslav of the U.S.-based Ukrainian Orthodox Church was elected patriarch of the Ukrainian
Autocephalous Orthodox Church (UAOC) during that Church's first synod. The UAOC declared its full independence from
the Moscow Patriarchate of the Russian Orthodox Church, which in March had granted autonomy to the Ukrainian Orthodox
church headed by Metropolitan Filaret. On June 22, 1990, Volodymyr Ivashko withdrew his candidacy for leader of the
Communist Party of Ukraine in view of his new position in parliament. Stanislav Hurenko was elected first secretary
of the CPU. On July 11, Ivashko resigned from his post as chairman of the Ukrainian Parliament after he was elected
deputy general secretary of the Communist Party of the Soviet Union. The Parliament accepted the resignation a week
later, on July 18. On July 16 Parliament overwhelmingly approved the Declaration on State Sovereignty of Ukraine
by a vote of 355 in favour and four against. The people's deputies voted 339 to 5 to proclaim July 16 a Ukrainian
national holiday. On July 23, 1990, Leonid Kravchuk was elected to replace Ivashko as parliament chairman. On July
30, Parliament adopted a resolution on military service ordering Ukrainian soldiers "in regions of national conflict
such as Armenia and Azerbaijan" to return to Ukrainian territory. On August 1, Parliament voted overwhelmingly to
shut down the Chernobyl Nuclear Power Plant. On August 3, it adopted a law on the economic sovereignty of the Ukrainian
republic. On August 19, the first Ukrainian Catholic liturgy in 44 years was celebrated at St. George Cathedral.
On September 5–7, the International Symposium on the Great Famine of 1932–1933 was held in Kiev. On September 8,
the first "Youth for Christ" rally since 1933 was held in Lviv, with 40,000 participants. On September 28–30,
the Green Party of Ukraine held its founding congress. On September 30, nearly 100,000 people marched in Kiev to
protest against the new union treaty proposed by Gorbachev. On October 25–28, 1990, Rukh held its second congress
and declared that its principal goal was the "renewal of independent statehood for Ukraine". On October 28 UAOC faithful,
supported by Ukrainian Catholics, demonstrated near St. Sophia’s Cathedral as newly elected Russian Orthodox Church
Patriarch Aleksei and Metropolitan Filaret celebrated liturgy at the shrine. On November 1, the leaders of the Ukrainian
Greek Catholic Church and of the Ukrainian Autocephalous Orthodox Church, respectively, Metropolitan Volodymyr Sterniuk
and Patriarch Mstyslav, met in Lviv during anniversary commemorations of the 1918 proclamation of the Western Ukrainian
National Republic. On January 13, 1991, Soviet troops, along with the KGB Spetsnaz Alpha Group, stormed the Vilnius
TV Tower in Lithuania to suppress the independence movement. Fourteen unarmed civilians were killed and hundreds
more injured. On the night of July 31, 1991, Russian OMON from Riga, the Soviet military headquarters in the Baltics,
assaulted the Lithuanian border post in Medininkai and killed seven Lithuanian servicemen. This event further weakened
the Soviet Union's position internationally and domestically, and stiffened Lithuanian resistance. Faced with growing
separatism, Gorbachev sought to restructure the Soviet Union into a less centralized state. On August 20, 1991, the
Russian SFSR was scheduled to sign a New Union Treaty that would have converted the Soviet Union into a federation
of independent republics with a common president, foreign policy and military. It was strongly supported by the Central
Asian republics, which needed the economic advantages of a common market to prosper. However, it would have meant
some degree of continued Communist Party control over economic and social life. More radical reformists were increasingly
convinced that a rapid transition to a market economy was required, even if the eventual outcome meant the disintegration
of the Soviet Union into several independent states. Independence also accorded with Yeltsin's desires as president
of the Russian Federation, as well as those of regional and local authorities to get rid of Moscow’s pervasive control.
In contrast to the reformers' lukewarm response to the treaty, the conservatives, "patriots," and Russian nationalists
of the USSR – still strong within the CPSU and the military – were opposed to weakening the Soviet state and its
centralized power structure. On August 19, 1991, the day before the treaty was to be signed, hardliners launched a coup, detaining Gorbachev at his Crimean dacha. Thousands of Muscovites came out to defend the White House (the Russian Federation's
parliament and Yeltsin's office), the symbolic seat of Russian sovereignty at the time. The coup organizers tried but
ultimately failed to arrest Yeltsin, who rallied opposition to the coup with speech-making atop a tank. The special
forces dispatched by the coup leaders took up positions near the White House, but members refused to storm the barricaded
building. The coup leaders also neglected to jam foreign news broadcasts, so many Muscovites watched it unfold live
on CNN. Even the isolated Gorbachev was able to stay abreast of developments by tuning into BBC World Service on
a small transistor radio. On December 8, the leaders of Russia, Ukraine, and Belarus secretly met in Belavezhskaya
Pushcha, in western Belarus, and signed the Belavezha Accords, which proclaimed the Soviet Union had ceased to exist
and announced formation of the Commonwealth of Independent States (CIS) as a looser association to take its place.
They also invited other republics to join the CIS. Gorbachev called it an unconstitutional coup. However, by this
time there was no longer any reasonable doubt that, as the preamble of the Accords put it, "the USSR, as a subject
of international law and a geopolitical reality, is ceasing its existence." On December 12, the Supreme Soviet of
the Russian SFSR formally ratified the Belavezha Accords and renounced the 1922 Union Treaty. The Russian deputies
were also recalled from the Supreme Soviet of the USSR. The legality of this action was questionable, since Soviet
law did not allow a republic to unilaterally recall its deputies. However, no one in either Russia or the Kremlin
objected. Any objections from the latter would have likely had no effect, since the Soviet government had effectively
been rendered impotent long before December. In effect, the largest and most powerful republic had seceded from the
Union. Later that day, Gorbachev hinted for the first time that he was considering stepping down. Doubts remained
over the authority of the Belavezha Accords to disband the Soviet Union, since they were signed by only three republics.
However, on December 21, 1991, representatives of 11 of the 12 former republics – all except Georgia – signed the
Alma-Ata Protocol, which confirmed the dissolution of the Union and formally established the CIS. They also "accepted"
Gorbachev's resignation. While Gorbachev had not yet made any formal plans to leave the scene, he did tell CBS News
that he would resign as soon as he saw that the CIS was indeed a reality. In a nationally televised speech early
in the morning of December 25, 1991, Gorbachev resigned as president of the USSR – or, as he put it, "I hereby discontinue
my activities at the post of President of the Union of Soviet Socialist Republics." He declared the office extinct,
and all of its powers (such as control of the nuclear arsenal) were ceded to Yeltsin. A week earlier, Gorbachev had
met with Yeltsin and accepted the fait accompli of the Soviet Union's dissolution. On the same day, the Supreme Soviet
of the Russian SFSR adopted a statute to change Russia's legal name from "Russian Soviet Federative Socialist Republic"
to "Russian Federation," showing that it was now a sovereign state. On the night of December 25, 1991, at 7:32 p.m.
Moscow time, after Gorbachev left the Kremlin, the Soviet flag was lowered for the last time, and the Russian tricolor
was raised in its place, symbolically marking the end of the Soviet Union. The next day, December 26, 1991, the Council
of Republics, the upper chamber of the Union's Supreme Soviet, issued a formal Declaration recognizing that the Soviet
Union had ceased to exist as a state and subject of international law, and voted both itself and the Soviet Union
out of existence (the other chamber of the Supreme Soviet, the Council of the Union, had been unable to work since
December 12, 1991, when the recall of the Russian deputies left it without a quorum). The following day Yeltsin moved
into Gorbachev's former office, though the Russian authorities had taken over the suite two days earlier. By December
31, 1991, the few remaining Soviet institutions that had not been taken over by Russia ceased operation, and individual
republics assumed the central government's role. The Alma-Ata Protocol also addressed other issues, including UN
membership. Notably, Russia was authorized to assume the Soviet Union's UN membership, including its permanent seat
on the Security Council. The Soviet Ambassador to the UN delivered a letter signed by Russian President Yeltsin to
the UN Secretary-General dated December 24, 1991, informing him that by virtue of the Alma-Ata Protocol, Russia was
the successor state to the USSR. After being circulated among the other UN member states, with no objection raised,
the statement was declared accepted on the last day of the year, December 31, 1991. On November 18, 1990, the Ukrainian
Autocephalous Orthodox Church enthroned Mstyslav as Patriarch of Kiev and all Ukraine during ceremonies at Saint
Sophia's Cathedral. Also on November 18, Canada announced that its consul-general to Kiev would be Ukrainian-Canadian
Nestor Gayowsky. On November 19, the United States announced that its consul to Kiev would be Ukrainian-American
John Stepanchuk. On November 19, the chairmen of the Ukrainian and Russian parliaments, respectively, Kravchuk and
Yeltsin, signed a 10-year bilateral pact. In early December 1990 the Party of Democratic Rebirth of Ukraine was founded;
on December 15, the Democratic Party of Ukraine was founded.
According to the canonical gospels, Jesus, whom Christians believe to be the Son of God as well as the Messiah (Christ),
was arrested, tried, and sentenced by Pontius Pilate to be scourged, and finally crucified by the Romans. Jesus was
stripped of his clothing and offered wine mixed with gall to drink, before being crucified. He was then hung for
six hours (according to Mark's Gospel) between two convicted thieves. During this time, the soldiers affixed a sign
to the top of the cross stating "Jesus of Nazareth, King of the Jews" in three languages. They then divided his garments
among them, but cast lots for his seamless robe. After Jesus' death they pierced his side with a spear to be certain
that he had died. The Bible records seven statements that Jesus made while he was on the cross, as well as several
supernatural events that occurred. The baptism of Jesus and his crucifixion are considered to be two historically
certain facts about Jesus. James Dunn states that these "two facts in the life of Jesus command almost universal
assent" and "rank so high on the 'almost impossible to doubt or deny' scale of historical facts" that they are often
the starting points for the study of the historical Jesus. Bart Ehrman states that the crucifixion of Jesus on the
orders of Pontius Pilate is the most certain element about him. John Dominic Crossan states that the crucifixion
of Jesus is as certain as any historical fact can be. Eddy and Boyd state that it is now "firmly established" that
there is non-Christian confirmation of the crucifixion of Jesus. Craig Blomberg states that most scholars in the
third quest for the historical Jesus consider the crucifixion indisputable. Christopher M. Tuckett states that, although
the exact reasons for the death of Jesus are hard to determine, one of the indisputable facts about him is that he
was crucified. Although almost all ancient sources relating to crucifixion are literary, the 1968 archeological discovery
just northeast of Jerusalem of the body of a crucified man dated to the 1st century provided good confirmatory evidence
that crucifixions occurred during the Roman period roughly according to the manner in which the crucifixion of Jesus
is described in the gospels. The crucified man was identified as Yehohanan ben Hagkol and probably died about 70
AD, around the time of the Jewish revolt against Rome. The analyses at the Hadassah Medical School estimated that
he died in his late 20s. Another relevant archaeological find, which also dates to the 1st century AD, is an unidentified
heel bone with a spike discovered in a Jerusalem gravesite, now held by the Israel Antiquities Authority and displayed
in the Israel Museum. The earliest detailed accounts of the death of Jesus are contained in the four canonical gospels.
There are other, more implicit references in the New Testament epistles. In the synoptic gospels, Jesus predicts
his death in three separate episodes. All four Gospels conclude with an extended narrative of Jesus' arrest, trial,
crucifixion, burial, and accounts of resurrection. In each Gospel these five events in the life of Jesus are treated
with more intense detail than any other portion of that Gospel's narrative. Scholars note that the reader receives
an almost hour-by-hour account of what is happening. Combining statements in the canonical Gospels produces
the following account: Jesus was arrested in Gethsemane following the Last Supper with the Twelve Apostles, and then
stood trial before the Sanhedrin (a Jewish judicial body), Pontius Pilate (a Roman authority in Judaea), and Herod
Antipas (tetrarch of Galilee, appointed by Rome), before being handed over for crucifixion by the chief priests of the
Jews. After being flogged, Jesus was mocked by Roman soldiers as the "King of the Jews", clothed in a purple robe,
crowned with thorns, beaten and spat on. Jesus then had to make his way to the place of his crucifixion. Once at
Golgotha, Jesus was offered wine mixed with gall to drink. Matthew's and Mark's Gospels record that he refused this.
He was then crucified and hung between two convicted thieves. According to some translations from the original Greek,
the thieves may have been bandits or Jewish rebels. According to Mark's Gospel, he endured the torment of crucifixion
for some six hours from the third hour, at approximately 9 am, until his death at the ninth hour, corresponding to
about 3 pm. The soldiers affixed a sign above his head stating "Jesus of Nazareth, King of the Jews" in three languages,
divided his garments and cast lots for his seamless robe. The Roman soldiers did not break Jesus' legs, as they did
to the other two men crucified (breaking the legs hastened the crucifixion process), as Jesus was dead already. Each
gospel has its own account of Jesus' last words, seven statements altogether. In the Synoptic Gospels, various supernatural
events accompany the crucifixion, including darkness, an earthquake, and (in Matthew) the resurrection of saints.
Following Jesus' death, his body was removed from the cross by Joseph of Arimathea and buried in a rock-hewn tomb,
with Nicodemus assisting. There are several details that are only found in one of the gospel accounts. For instance,
only Matthew's gospel mentions an earthquake, resurrected saints who went to the city and that Roman soldiers were
assigned to guard the tomb, while Mark is the only one to state the actual time of the crucifixion (the third hour,
or 9 am) and the centurion's report of Jesus' death. The Gospel of Luke's unique contributions to the narrative include
Jesus' words to the women who were mourning, one criminal's rebuke of the other, the reaction of the multitudes who
left "beating their breasts", and the women preparing spices and ointments before resting on the Sabbath. John is
also the only one to refer to the request that the legs be broken and the soldier's subsequent piercing of Jesus'
side (as fulfillment of Old Testament prophecy), as well as that Nicodemus assisted Joseph with burial. According
to the First Epistle to the Corinthians (1 Cor. 15:4), Jesus was raised from the dead ("on the third day" counting
the day of crucifixion as the first) and according to the canonical Gospels, appeared to his disciples on different
occasions before ascending to heaven. The account given in Acts of the Apostles, which says Jesus remained with the
apostles for forty days, appears to differ from the account in the Gospel of Luke, which makes no clear distinction
between the events of Easter Sunday and the Ascension. However, most biblical scholars agree that St. Luke also wrote
the Acts of the Apostles as a follow-up volume to his Gospel account, and the two works must be considered as a whole.
In Mark, Jesus is crucified along with two rebels, and the day goes dark for three hours. Jesus calls out to God,
then gives a shout and dies. The curtain of the Temple is torn in two. Matthew follows Mark, adding an earthquake
and the resurrection of saints. Luke also follows Mark, though he describes the rebels as common criminals, one of
whom defends Jesus, who in turn promises that he (Jesus) and the criminal will be together in paradise. Luke portrays
Jesus as impassive in the face of his crucifixion. John includes several of the same elements as those found in Mark,
though they are treated differently. An early non-Christian reference to the crucifixion of Jesus is likely to be
Mara Bar-Serapion's letter to his son, written sometime after AD 73 but before the 3rd century AD. The letter includes
no Christian themes and the author is presumed to be a pagan. The letter refers to the retributions that followed
the unjust treatment of three wise men: Socrates, Pythagoras, and "the wise king" of the Jews. Some scholars see
little doubt that the reference to the execution of the "king of the Jews" is about the crucifixion of Jesus, while
others place less value in the letter, given the possible ambiguity in the reference. The consensus of modern scholarship
is that the New Testament accounts represent a crucifixion occurring on a Friday, but a Thursday or Wednesday crucifixion
has also been proposed. Some scholars explain a Thursday crucifixion based on a "double sabbath" caused by an extra
Passover sabbath falling on Thursday dusk to Friday afternoon, ahead of the normal weekly Sabbath. Some have argued
that Jesus was crucified on Wednesday, not Friday, on the grounds of the mention of "three days and three nights"
in Matthew before his resurrection, celebrated on Sunday. Others have countered by saying that this ignores the Jewish
idiom by which a "day and night" may refer to any part of a 24-hour period, that the expression in Matthew is idiomatic,
not a statement that Jesus was 72 hours in the tomb, and that the many references to a resurrection on the third
day do not require three literal nights. In Mark 15:25 crucifixion takes place at the third hour (9 a.m.) and Jesus'
death at the ninth hour (3 p.m.). However, in John 19:14 Jesus is still before Pilate at the sixth hour. Scholars
have presented a number of arguments to deal with the issue, some suggesting a reconciliation, e.g., based on the
use of Roman timekeeping in John but not in Mark, yet others have rejected the arguments. Several notable scholars
have argued that the modern precision of marking the time of day should not be read back into the gospel accounts,
written at a time when no standardization of timepieces or exact recording of hours and minutes was available, and
time was often approximated to the closest three-hour period. Luke's gospel also describes an interaction between
Jesus and the women among the crowd of mourners following him, quoting Jesus as saying "Daughters of Jerusalem, do
not weep for me, but weep for yourselves and for your children. For behold, the days are coming when they will say,
'Blessed are the barren and the wombs that never bore and the breasts that never nursed!' Then they will begin to
say to the mountains, 'Fall on us,' and to the hills, 'Cover us.' For if they do these things when the wood is green,
what will happen when it is dry?"[Lk. 23:28-31] Calvary as an English name for the place is derived from the Latin
word for skull (calvaria), which is used in the Vulgate translation of "place of a skull", the explanation given
in all four Gospels of the Aramaic word Gûlgaltâ which was the name of the place where Jesus was crucified. The text
does not indicate why it was so designated, but several theories have been put forward. One is that as a place of
public execution, Calvary may have been strewn with the skulls of abandoned victims (which would be contrary to Jewish
burial traditions, but not Roman). Another is that Calvary is named after a nearby cemetery (which is consistent
with both of the proposed modern sites). A third is that the name was derived from the physical contour, which would
be more consistent with the singular use of the word, i.e., the place of "a skull". While often referred to as "Mount
Calvary", it was more likely a small hill or rocky knoll. The Gospel of Matthew describes many women at the crucifixion,
some of whom are named in the Gospels. Apart from these women, the three Synoptic Gospels speak of the presence of
others: "the chief priests, with the scribes and elders"; two robbers crucified, one on Jesus' right and one on his
left, whom the Gospel of Luke presents as the penitent thief and the impenitent thief; "the soldiers", "the centurion
and those who were with him, keeping watch over Jesus"; passers-by; "bystanders", "the crowds that had assembled
for this spectacle"; and "his acquaintances". Whereas most Christians believe the gibbet on which Jesus was executed
was the traditional two-beamed cross, the Jehovah's Witnesses hold the view that a single upright stake was used.
The Greek and Latin words used in the earliest Christian writings are ambiguous. The Koine Greek terms used in the
New Testament are stauros (σταυρός) and xylon (ξύλον). The latter means wood (a live tree, timber or an object constructed
of wood); in earlier forms of Greek, the former term meant an upright stake or pole, but in Koine Greek it was used
also to mean a cross. The Latin word crux was also applied to objects other than a cross. However, early Christian
writers who speak of the shape of the particular gibbet on which Jesus died invariably describe it as having a cross-beam.
For instance, the Epistle of Barnabas, which was certainly earlier than 135, and may have been of the 1st century
AD, the time when the gospel accounts of the death of Jesus were written, likened it to the letter T (the Greek letter
tau, which had the numeric value of 300), and to the position assumed by Moses in Exodus 17:11–12. Justin Martyr
(100–165) explicitly says the cross of Christ was of two-beam shape: "That lamb which was commanded to be wholly
roasted was a symbol of the suffering of the cross which Christ would undergo. For the lamb, which is roasted, is
roasted and dressed up in the form of the cross. For one spit is transfixed right through from the lower parts up
to the head, and one across the back, to which are attached the legs of the lamb." Irenaeus, who died around the
end of the 2nd century, speaks of the cross as having "five extremities, two in length, two in breadth, and one in
the middle, on which [last] the person rests who is fixed by the nails." The assumption of the use of a two-beamed
cross does not determine the number of nails used in the crucifixion; some theories suggest three nails while
others suggest four nails. However, throughout history larger numbers of nails have been hypothesized, at times as
high as 14 nails. These variations are also present in the artistic depictions of the crucifixion. In the Western
Church, before the Renaissance usually four nails would be depicted, with the feet side by side. After the Renaissance
most depictions use three nails, with one foot placed on the other. Nails are almost always depicted in art, although
Romans sometimes just tied the victims to the cross. The tradition also carries to Christian emblems, e.g. the Jesuits
use three nails under the IHS monogram and a cross to symbolize the crucifixion. The placing of the nails in the
hands, or the wrists is also uncertain. Some theories suggest that the Greek word cheir (χειρ) for hand includes
the wrist and that the Romans were generally trained to place nails through Destot's space (between the capitate
and lunate bones) without fracturing any bones. Another theory suggests that the Greek word for hand also includes
the forearm and that the nails were placed near the radius and ulna of the forearm. Ropes may have also been used
to fasten the hands in addition to the use of nails. Another issue has been the use of a hypopodium as a standing
platform to support the feet, given that the hands may not have been able to support the weight. In the 17th century
Rasmus Bartholin considered a number of analytical scenarios of that topic. In the 20th century, forensic pathologist
Frederick Zugibe performed a number of crucifixion experiments by using ropes to hang human subjects at various angles
and hand positions. His experiments support an angled suspension, a two-beamed cross, and perhaps some form of
foot support, given that in an Aufbinden form of suspension from a straight stake (as used by the Nazis in the Dachau
concentration camp during World War II), death comes rather quickly. The cry "My God, my God, why have you forsaken me?", the only words of Jesus on the cross in the
Mark and Matthew accounts, is a quotation of Psalm 22. Since other verses of the same Psalm are cited in the
crucifixion accounts, it is often considered a literary and theological creation. Geza Vermes, however, points out
that the verse is cited in Aramaic rather than the Hebrew in which it usually would have been recited, and suggests
that by the time of Jesus, this phrase had become a proverbial saying in common usage. Compared to the accounts in
the other Gospels, which he describes as 'theologically correct and reassuring', he considers this phrase 'unexpected,
disquieting and in consequence more probable'. He describes it as bearing 'all the appearances of a genuine cry'.
Raymond Brown likewise comments that he finds 'no persuasive argument against attributing to the Jesus of Mark/Matt
the literal sentiment of feeling forsaken expressed in the Psalm quote'. Some Christian writers considered the possibility
that pagan commentators may have mentioned the darkness at the crucifixion, mistaking it for a solar eclipse, although this would have
been impossible during the Passover, which takes place at the full moon. Christian traveller and historian Sextus
Julius Africanus and Christian theologian Origen refer to Greek historian Phlegon, who lived in the 2nd century AD,
as having written "with regard to the eclipse in the time of Tiberius Caesar, in whose reign Jesus appears to have
been crucified, and the great earthquakes which then took place". Sextus Julius Africanus further refers to the writings
of historian Thallus: "This darkness Thallus, in the third book of his History, calls, as appears to me without reason,
an eclipse of the sun. For the Hebrews celebrate the passover on the 14th day according to the moon, and the passion
of our Saviour falls on the day before the passover; but an eclipse of the sun takes place only when the moon comes
under the sun." Christian apologist Tertullian believed the event was documented in the Roman archives. Colin Humphreys
and W. G. Waddington of Oxford University considered the possibility that a lunar, rather than solar, eclipse might
have taken place. They concluded that such an eclipse would have been visible, for thirty minutes, from Jerusalem
and suggested the gospel reference to a solar eclipse was the result of a scribe wrongly amending a text. Historian
David Henige dismisses this explanation as 'indefensible' and astronomer Bradley Schaefer points out that the lunar
eclipse would not have been visible during daylight hours. Modern biblical scholarship treats the account in the
synoptic gospels as a literary creation by the author of the Mark Gospel, amended in the Luke and Matthew accounts,
intended to heighten the importance of what they saw as a theologically significant event, and not intended to be
taken literally. This image of darkness over the land would have been understood by ancient readers as a typical element
in the description of the death of kings and other major figures by writers such as Philo, Dio Cassius, Virgil, Plutarch
and Josephus. Géza Vermes describes the darkness account as typical of "Jewish eschatological imagery of the day
of the Lord", and says that those interpreting it as a datable eclipse are "barking up the wrong tree". In his book
The Crucifixion of Jesus, physician and forensic pathologist Frederick Zugibe studied the likely circumstances of
the death of Jesus in great detail. Zugibe carried out a number of experiments over several years to test his theories
while he was a medical examiner. These studies included experiments in which volunteers with specific weights were
hanging at specific angles and the amount of pull on each hand was measured, in cases where the feet were also secured
or not. In these cases the amount of pull and the corresponding pain were found to be significant. Christians believe
that Jesus’ death was instrumental in restoring humankind to relationship with God. Christians believe that through
faith in Jesus’ substitutionary death and triumphant resurrection people are reunited with God and receive new joy
and power in this life as well as eternal life in heaven after the body’s death. Thus the crucifixion of Jesus along
with his resurrection restores access to a vibrant experience of God’s presence, love and grace as well as the confidence
of eternal life. In Johannine "agent Christology" the submission of Jesus to crucifixion is a sacrifice made as an
agent of God or servant of God, for the sake of eventual victory. This builds on the salvific theme of the Gospel
of John which begins in John 1:29 with John the Baptist's proclamation: "The Lamb of God who takes away the sins
of the world". Further reinforcement of the concept is provided in Revelation 5:1-5, where the "lamb slain but standing"
is the only one worthy of handling the scroll (i.e. the book) containing the names of those who are to be saved.
Paul's Christology has a specific focus on the death and resurrection of Jesus. For Paul, the crucifixion of Jesus
is directly related to his resurrection and the term "the cross of Christ" used in Galatians 6:12 may be viewed as
his abbreviation of the message of the gospels. For Paul, the crucifixion of Jesus was not an isolated event in history,
but a cosmic event with significant eschatological consequences, as in 1 Corinthians 2:8. In the Pauline view, Jesus,
obedient to the point of death (Philippians 2:8) died "at the right time" (Romans 4:25) based on the plan of God.
For Paul the "power of the cross" is not separable from the Resurrection of Jesus. John Calvin supported the "agent
of God" Christology and argued that in his trial in Pilate's Court Jesus could have successfully argued for his innocence,
but instead submitted to crucifixion in obedience to the Father. This Christological theme continued into the 20th
century, both in the Eastern and Western Churches. In the Eastern Church Sergei Bulgakov argued that the crucifixion
of Jesus was "pre-eternally" determined by the Father before the creation of the world, to redeem humanity from the
disgrace caused by the fall of Adam. In the Western Church, Karl Rahner elaborated on the analogy that the blood
of the Lamb of God (and the water from the side of Jesus) shed at the crucifixion had a cleansing nature, similar
to baptismal water. Jesus' death and resurrection underpin a variety of theological interpretations as to how salvation
is granted to humanity. These interpretations vary widely in how much emphasis they place on the death of Jesus as
compared to his words. According to the substitutionary atonement view, Jesus' death is of central importance, and
Jesus willingly sacrificed himself as an act of perfect obedience as a sacrifice of love which pleased God. By contrast
the moral influence theory of atonement focuses much more on the moral content of Jesus' teaching, and sees Jesus'
death as a martyrdom. Since the Middle Ages there has been conflict between these two views within Western Christianity.
Evangelical Protestants typically hold a substitutionary view and in particular hold to the theory of penal substitution.
Liberal Protestants typically reject substitutionary atonement and hold to the moral influence theory of atonement.
Both views are popular within the Roman Catholic church, with the satisfaction doctrine incorporated into the idea
of penance. The presence of the Virgin Mary under the cross[Jn. 19:26-27] has in itself been the subject of Marian
art, and well known Catholic symbolism such as the Miraculous Medal and Pope John Paul II's Coat of Arms bearing
a Marian Cross. A number of Marian devotions also involve the presence of the Virgin Mary at Calvary, e.g., Pope
John Paul II stated that "Mary was united to Jesus on the Cross". Well known works of Christian art by masters such
as Raphael (e.g., the Mond Crucifixion), and Caravaggio (e.g., his Entombment) depict the Virgin Mary as part of
the crucifixion scene.
However, not all highest courts are named as such. Civil law states do not tend to have singular highest courts. Additionally,
the highest court in some jurisdictions is not named the "Supreme Court", for example, the High Court of Australia;
this is because decisions by the High Court could formerly be appealed to the Privy Council. On the other hand, in
some places the court named the "Supreme Court" is not in fact the highest court; examples include the New York Supreme
Court, the Supreme Courts of several Canadian provinces/territories and the former Supreme Court of Judicature of
England and Wales, which are all superseded by higher Courts of Appeal. Some countries have multiple "supreme courts"
whose respective jurisdictions have different geographical extents, or which are restricted to particular areas of
law. In particular, countries with a federal system of government typically have both a federal
supreme court (such as the Supreme Court of the United States), and supreme courts for each member state (such as
the Supreme Court of Nevada), with the former having jurisdiction over the latter only to the extent that the federal
constitution extends federal law over state law. Jurisdictions with a civil law system often have a hierarchy of
administrative courts separate from the ordinary courts, headed by a supreme administrative court, as is the case
in the Netherlands. A number of jurisdictions also maintain a separate constitutional court (first developed in the
Czechoslovak Constitution of 1920), such as Austria, France, Germany, Luxembourg, Portugal, Spain and South Africa.
In jurisdictions using a common law system, the doctrine of stare decisis applies, whereby the principles applied
by the supreme court in its decisions are binding upon all lower courts; this is intended to ensure a uniform interpretation
and implementation of the law. In civil law jurisdictions the doctrine of stare decisis is not generally considered
to apply, so the decisions of the supreme court are not necessarily binding beyond the immediate case before it;
however, in practice the decisions of the supreme court usually provide a very strong precedent, or jurisprudence
constante, for both itself and all lower courts. In Canada, the Supreme Court of Canada was established in 1875 but
only became the highest court in the country in 1949 when the right of appeal to the Judicial Committee of the Privy
Council was abolished. This court hears appeals of decisions made by courts of appeal from the provinces and territories
and appeals of decisions made by the Federal Court of Appeal. The court's decisions are final and binding on the
federal courts and the courts from all provinces and territories. The title "Supreme" can be confusing because, for
example, The Supreme Court of British Columbia does not have the final say and controversial cases heard there often
are appealed to higher courts; it is in fact one of the lower courts in such a process. In Hong Kong, the Supreme
Court of Hong Kong (now known as the High Court of Hong Kong) was the final court of appeal during its colonial times
which ended with the transfer of sovereignty in 1997. The final adjudication power, as in other British colonies,
rested with the Judicial Committee of the Privy Council (JCPC) in London, United Kingdom. Now the power of final
adjudication is vested in the Court of Final Appeal created in 1997. Under the Basic Law, its constitution, the territory
remains a common law jurisdiction. Consequently, judges from other common law jurisdictions (including England and
Wales) can be recruited and continue to serve in the judiciary according to Article 92 of the Basic Law. On the other
hand, the power of interpretation of the Basic Law itself is vested in the Standing Committee of the National People's
Congress (NPCSC) in Beijing (without retroactive effect), and the courts are authorised to interpret the Basic Law
when trying cases, in accordance with Article 158 of the Basic Law. This arrangement became controversial in light
of the right of abode issue in 1999, raising concerns for judicial independence. In India, the Supreme Court of India
was created on January 28, 1950 after adoption of the Constitution. Article 141 of the Constitution of India states
that the law declared by the Supreme Court is binding on all courts within the territory of India. It is the highest
court in India and has ultimate judicial authority to interpret the Constitution and decide questions of national
law (including local bylaws). The Supreme Court is also vested with the power of judicial review to ensure the application
of the rule of law. The Supreme Court is the highest court in Ireland. It has authority to interpret the constitution,
and strike down laws and activities of the state that it finds to be unconstitutional. It is also the highest authority
in the interpretation of the law. Constitutionally it must have authority to interpret the constitution but its further
appellate jurisdiction from lower courts is defined by law. The Irish Supreme Court consists of its presiding member,
the Chief Justice, and seven other judges. Judges of the Supreme Court are appointed by the President in accordance
with the binding advice of the Government. The Supreme Court sits in the Four Courts in Dublin. Israel's Supreme
Court is at the head of the court system in the State of Israel. It is the highest judicial instance. The Supreme
Court sits in Jerusalem. The area of its jurisdiction is the entire State. A ruling of the Supreme Court is binding
upon every court, other than the Supreme Court itself. The Israeli Supreme Court is both an appellate court and the
high court of justice. As an appellate court, the Supreme Court considers cases on appeal (both criminal and civil)
on judgments and other decisions of the District Courts. It also considers appeals on judicial and quasi-judicial
decisions of various kinds, such as matters relating to the legality of Knesset elections and disciplinary rulings
of the Bar Association. As the High Court of Justice (Hebrew: Beit Mishpat Gavoha Le'Zedek בית משפט גבוה לצדק; also
known by its initials as Bagatz בג"ץ), the Supreme Court rules as a court of first instance, primarily in matters
regarding the legality of decisions of State authorities: Government decisions, those of local authorities and other
bodies and persons performing public functions under the law, and direct challenges to the constitutionality of laws
enacted by the Knesset. The court has broad discretionary authority to rule on matters in which it considers it necessary
to grant relief in the interests of justice, and which are not within the jurisdiction of another court or tribunal.
The High Court of Justice grants relief through orders such as injunction, mandamus and Habeas Corpus, as well as
through declaratory judgments. The Supreme Court can also sit at a further hearing on its own judgment. In a matter
on which the Supreme Court has ruled - whether as a court of appeals or as the High Court of Justice - with a panel
of three or more justices, it may rule at a further hearing with a panel of a larger number of justices. A further
hearing may be held if the Supreme Court makes a ruling inconsistent with a previous ruling or if the Court deems
that the importance, difficulty or novelty of a ruling of the Court justifies such hearing. The Supreme Court also
holds the unique power of being able to order "trial de novo" (a retrial). The new Supreme Court of New Zealand was
officially established at the beginning of 2004, although it did not come into operation until July. The High Court
of New Zealand was until 1980 known as the Supreme Court. The Supreme Court has a purely appellate jurisdiction and
hears appeals from the Court of Appeal of New Zealand. In some cases, an appeal may be removed directly to the Supreme
Court from the High Court. For certain cases, particularly cases which commenced in the District Court, a lower court
(typically the High Court or the Court of Appeal) may be the court of final jurisdiction. With respect to Pakistan's
territories (i.e. FATA, Azad Kashmir, Northern Areas and Islamabad Capital Territory (ICT)) the Supreme Court's jurisdiction
is rather limited and varies from territory to territory; it can hear appeals only of a constitutional nature from
FATA and Northern Areas, while ICT generally functions the same as provinces. Azad Kashmir has its own courts system
and the constitution of Pakistan does not apply to it as such; appeals from Azad Kashmir relate to its relationship
with Pakistan. The Supreme Court of the United Kingdom is the ultimate court for criminal and civil matters in England,
Wales and Northern Ireland and for civil matters in Scotland. (The supreme court for criminal matters in Scotland
is the High Court of Justiciary.) The Supreme Court was established by the Constitutional Reform Act 2005 with effect
from 1 October 2009, replacing and assuming the judicial functions of the House of Lords. Devolution issues under
the Scotland Act 1998, Government of Wales Act and Northern Ireland Act were also transferred to the new Supreme
Court by the Constitutional Reform Act, from the Judicial Committee of the Privy Council. The titles of state supreme
court vary, which can cause confusion between jurisdictions because one state may use a name for its highest court
that another uses for a lower court. In New York, Maryland, and the District of Columbia the highest court is called
the Court of Appeals, a name used by many states for their intermediate appellate courts. Further, trial courts of
general jurisdiction in New York are called the Supreme Court, and the intermediate appellate court is called the
Supreme Court, Appellate Division. In West Virginia, the highest court of the state is the Supreme Court of Appeals.
In Maine and Massachusetts the highest court is styled the "Supreme Judicial Court"; the latter is the oldest appellate
court of continuous operation in the Western Hemisphere. In Austria, the Austrian Constitution of 1920 (based on
a draft by Hans Kelsen) introduced judicial review of legislative acts for their constitutionality. This function
is performed by the Constitutional Court (Verfassungsgerichtshof), which is also charged with the review of administrative
acts on whether they violate constitutionally guaranteed rights. Other than that, administrative acts are reviewed
by the Administrative Court (Verwaltungsgerichtshof). The Supreme Court (Oberste Gerichtshof (OGH)), stands at the
top of Austria's system of "ordinary courts" (ordentliche Gerichte) as the final instance in issues of private law
and criminal law. In Brazil, the Supreme Federal Tribunal (Supremo Tribunal Federal) is the highest court. It is
both the constitutional court and the court of last resort in Brazilian law. It reviews only cases that raise constitutional
questions or final habeas corpus pleas in criminal cases. It also judges, in original jurisdiction, cases involving members
of congress, senators, ministers of state, members of the high courts and the President and Vice-President of the
Republic. The Superior Court of Justice (Superior Tribunal de Justiça) reviews State and Federal Circuit court decisions
for civil law and criminal law cases, when dealing with federal law or conflicting rulings. The Superior Labour Tribunal
(Tribunal Superior do Trabalho) reviews cases involving labour law. The Superior Electoral Tribunal (Tribunal Superior
Eleitoral) is the court of last resort of electoral law, and also oversees general elections. The Superior Military
Tribunal (Superior Tribunal Militar) is the highest court in matters of federal military law. Final interpretation
of and amendments to the German Constitution, the Grundgesetz, is the task of the Bundesverfassungsgericht (Federal
Constitutional Court), which is the de facto highest German court, as it can declare both federal and state legislation
ineffective, and has the power to overrule decisions of all other federal courts, despite not being a regular court
of appeals itself in the German court system. It is also the only court possessing the power and authority to
outlaw political parties, if it is deemed that these parties have repeatedly violated articles of the Constitution.
When it comes to civil and criminal cases, the Bundesgerichtshof is at the top of the hierarchy of courts. The other
branches of the German judicial system each have their own appellate systems, each topped by a high court; these
are the Bundessozialgericht for matters of social security, the Bundesarbeitsgericht for employment and labour, the
Bundesfinanzhof for taxation and financial issues, and the Bundesverwaltungsgericht for administrative law. The so-called
Gemeinsamer Senat der Obersten Gerichtshöfe (Joint Senate of the Supreme Courts) is not a supreme court in itself,
but an ad-hoc body that is convened only when one supreme court intends to diverge from another supreme court's
legal opinion or when a certain case exceeds the authority of one court. As the courts have well-defined areas of
responsibility, such situations are rather rare, so the Joint Senate gathers very infrequently, and only
to consider matters that are mostly definitional. In the Netherlands, the Supreme Court of the Netherlands is the
highest court. Its decisions, known as "arresten", are absolutely final. The court is barred from testing legislation against
the constitution, pursuant to the principle of the sovereignty of the States-General; the court can, however, test
legislation against some treaties. Also, the ordinary courts in the Netherlands, including the Hoge Raad, do not
deal with administrative law, which is dealt with in separate administrative courts, the highest of which is the
Council of State (Raad van State). While the Philippines is generally considered a civil law nation, its Supreme Court
is heavily modelled after the American Supreme Court. This can be attributed to the fact that the Philippines was
colonized by both Spain and the United States, and the system of laws of both nations strongly influenced the development
of Philippine laws and jurisprudence. Even as the body of Philippine laws remain mostly codified, the Philippine
Civil Code expressly recognizes that decisions of the Supreme Court "form part of the law of the land", belonging
to the same class as statutes. The 1987 Philippine Constitution also explicitly grants to the Supreme Court the power
of judicial review over laws and executive actions. The Supreme Court is composed of 1 Chief Justice and 14 Associate
Justices. The court sits either en banc or in divisions, depending on the nature of the case to be decided. The Spanish
Supreme Court is the highest court for all cases in Spain (both private and public). Only cases related to
human rights can be appealed to the Constitutional Court (which also decides whether acts accord with the Spanish Constitution).
In Spain, high courts cannot create binding precedents; however, lower rank courts usually observe Supreme Court
interpretations. In most private law cases, two Supreme Court judgements supporting a claim are needed to appeal
to the Supreme Court. The Spanish Supreme Court is made up of five sections. In Sweden, the Supreme Court and the Supreme
Administrative Court respectively function as the highest courts of the land. The Supreme Administrative Court considers
cases concerning disputes between individuals and administrative organs, as well as disputes among administrative
organs, while the Supreme Court considers all other cases. The judges are appointed by the Government. In most cases,
the Supreme Courts will only grant leave to appeal a case (prövningstillstånd) if the case involves setting a precedent
in the interpretation of the law. Exceptions are issues where the Supreme Court is the court of first instance. Such
cases include an application for a retrial of a criminal case in the light of new evidence, and prosecutions made
against an incumbent minister of the Government for severe neglect of duty. If a lower court has to try a case which
involves a question where there is no settled interpretation of the law, it can also refer the question to the relevant
Supreme Court for an answer. In Sri Lanka, the Supreme Court of Sri Lanka was created in 1972 after the adoption
of a new Constitution. The Supreme Court is the highest and final superior court of record and is empowered to exercise
its powers, subject to the provisions of the Constitution. The court rulings take precedence over all lower Courts.
The Sri Lankan judicial system is a complex blend of common law and civil law. In some cases such as capital punishment,
the decision may be passed on to the President of the Republic for clemency petitions. However, when a two-thirds
parliamentary majority supports the president (as at present), the Supreme Court's independence is effectively
curtailed, since under the Constitution its judges can then be removed from office; in such situations the court's
power is greatly diminished. In South Africa, a "two apex" system existed from 1994 to 2013.
The Supreme Court of Appeal (SCA) was created in 1994 and replaced the Appellate Division of the Supreme Court of
South Africa as the highest court of appeal in non-constitutional matters. The SCA is subordinate to the Constitutional
Court, which is the highest court in matters involving the interpretation and application of the Constitution. But
in August 2013 the Constitution was amended to make the Constitutional Court the country's single apex court, superior
to the SCA in all matters, both constitutional and non-constitutional. In most nations with constitutions modelled
after the Soviet Union, the legislature was given the power of being the court of last resort. In the People's Republic
of China, the final power to interpret the law is vested in the Standing Committee of the National People's Congress
(NPCSC). This power includes the power to interpret the basic laws of Hong Kong and Macau, the constitutional documents
of the two special administrative regions which are common law and Portuguese-based legal system jurisdictions respectively.
This power is a legislative power and not a judicial one in that an interpretation by the NPCSC does not affect cases
which have already been decided.
Textual criticism is a branch of textual scholarship, philology, and literary criticism that is concerned with the identification
of textual variants in either manuscripts or printed books. Ancient scribes made alterations when copying manuscripts
by hand. Given several or many copies of a manuscript, but not the original document, the textual critic might
seek to reconstruct the original text (the archetype or autograph) as closely as possible. The same processes can
be used to attempt to reconstruct intermediate versions, or recensions, of a document's transcription history. The
ultimate objective of the textual critic's work is the production of a "critical edition" containing a scholarly
curated text. Many ancient works, such as the Bible and the Greek tragedies,[citation needed] survive in hundreds
of copies, and the relationship of each copy to the original may be unclear. Textual scholars have debated for centuries
which sources are most closely derived from the original, hence which readings in those sources are correct.[citation
needed] Although biblical books that are letters, like Greek plays, presumably had one original, the question of
whether some biblical books, like the Gospels, ever had just one original has been discussed. Interest in applying
textual criticism to the Qur'an has also developed after the discovery of the Sana'a manuscripts in 1972, which possibly
date back to the 7–8th centuries. In the English language, the works of Shakespeare have been a particularly fertile
ground for textual criticism—both because the texts, as transmitted, contain a considerable amount of variation,
and because the effort and expense of producing superior editions of his works have always been widely viewed as
worthwhile. The principles of textual criticism, although originally developed and refined for works of antiquity,
the Bible, and Shakespeare, have been applied to many works, extending backwards from the present to the earliest
known written documents, in Mesopotamia and Egypt—a period of about five millennia. However, the application of textual
criticism to non-religious works does not antedate the invention of printing. While Christianity has been relatively
receptive to textual criticism, application of it to the Jewish (Masoretic) Torah and the Qur'an is, to the devout,
taboo.[citation needed] Maas comments further that "A dictation revised by the author must be regarded as equivalent
to an autograph manuscript". The lack of autograph manuscripts applies to many cultures other than Greek and Roman.
In such a situation, a key objective becomes the identification of the first exemplar before any split in the tradition.
That exemplar is known as the archetype. "If we succeed in establishing the text of [the archetype], the constitutio
(reconstruction of the original) is considerably advanced." The textual critic's ultimate objective is the production
of a "critical edition".[citation needed] This contains the text that the author has determined most closely approximates
the original, and is accompanied by an apparatus criticus or critical apparatus. The critical apparatus presents
the author's work in three parts: first, a list or description of the evidence that the editor used (names of manuscripts,
or abbreviations called sigla); second, the editor's analysis of that evidence (sometimes a simple likelihood rating),[citation
needed]; and third, a record of rejected variants of the text (often in order of preference).[citation needed] Before
mechanical printing, literature was copied by hand, and many variations were introduced by copyists. The age of printing
made the scribal profession effectively redundant. Printed editions, while less susceptible to the proliferation
of variations likely to arise during manual transmission, are nonetheless not immune to introducing variations from
an author's autograph. Instead of a scribe miscopying his source, a compositor or a printing shop may read or typeset
a work in a way that differs from the autograph. Since each scribe or printer commits different errors, reconstruction
of the lost original is often aided by a selection of readings taken from many sources. An edited text that draws
from multiple sources is said to be eclectic. In contrast to this approach, some textual critics prefer to identify
the single best surviving text, and not to combine readings from multiple sources. When comparing different documents,
or "witnesses", of a single, original text, the observed differences are called variant readings, or simply variants
or readings. It is not always apparent which single variant represents the author's original work. The process of
textual criticism seeks to explain how each variant may have entered the text, either by accident (duplication or
omission) or intention (harmonization or censorship), as scribes or supervisors transmitted the original author's
text by copying it. The textual critic's task, therefore, is to sort through the variants, eliminating those most
likely to be un-original, hence establishing a "critical text", or critical edition, that is intended to best approximate
the original. At the same time, the critical text should document variant readings, so the relation of extant witnesses
to the reconstructed original is apparent to a reader of the critical edition. In establishing the critical text,
the textual critic considers both "external" evidence (the age, provenance, and affiliation of each witness) and
"internal" or "physical" considerations (what the author and scribes, or printers, were likely to have done). The
collation of all known variants of a text is referred to as a variorum, namely a work of textual criticism whereby
all variations and emendations are set side by side so that a reader can track how textual decisions have been made
in the preparation of a text for publication. The Bible and the works of William Shakespeare have often been the
subjects of variorum editions, although the same techniques have been applied with less frequency to many other works,
such as Walt Whitman's Leaves of Grass, and the prose writings of Edward Fitzgerald. Eclectic readings also normally
give an impression of the number of witnesses to each available reading. Although a reading supported by the majority
of witnesses is frequently preferred, this does not follow automatically. For example, a second edition of a Shakespeare
play may include an addition alluding to an event known to have happened between the two editions. Although nearly
all subsequent manuscripts may have included the addition, textual critics may reconstruct the original without the
addition. External evidence is evidence of each physical witness, its date, source, and relationship to other known
witnesses. Critics will often prefer the readings supported by the oldest witnesses. Since errors tend to accumulate,
older manuscripts should have fewer errors. Readings supported by a majority of witnesses are also usually preferred,
since these are less likely to reflect accidents or individual biases. For the same reasons, the most geographically
diverse witnesses are preferred. Some manuscripts show evidence that particular care was taken in their composition,
for example, by including alternative readings in their margins, demonstrating that more than one prior copy (exemplar)
was consulted in producing the current one. Other factors being equal, these are the best witnesses. The role of
the textual critic is necessary when these basic criteria are in conflict. For instance, there will typically be
fewer early copies, and a larger number of later copies. The textual critic will attempt to balance these criteria,
to determine the original text. Two common considerations have the Latin names lectio brevior (shorter reading) and
lectio difficilior (more difficult reading). The first is the general observation that scribes tended to add words,
for clarification or out of habit, more often than they removed them. The second, lectio difficilior potior (the
harder reading is stronger), recognizes the tendency for harmonization—resolving apparent inconsistencies in the
text. Applying this principle leads to taking the more difficult (unharmonized) reading as being more likely to be
the original. Such cases also include scribes simplifying and smoothing texts they did not fully understand. Brooke
Foss Westcott (1825–1901) and Fenton J. A. Hort (1828–1892) published an edition of the New Testament in Greek in
1881. They proposed nine critical rules, including a version of Bengel's rule, "The reading is less likely to be
original that shows a disposition to smooth away difficulties." They also argued that "Readings are approved or rejected
by reason of the quality, and not the number, of their supporting witnesses", and that "The reading is to be preferred
that most fitly explains the existence of the others." Since the canons of criticism are highly susceptible to interpretation,
and at times even contradict each other, they may be employed to justify a result that fits the textual critic's
aesthetic or theological agenda. Starting in the 19th century, scholars sought more rigorous methods to guide editorial
judgment. Best-text editing (a complete rejection of eclecticism) became one extreme. Stemmatics and copy-text editing
– while both eclectic, in that they permit the editor to select readings from multiple sources – sought to reduce
subjectivity by establishing one or a few witnesses presumably as being favored by "objective" criteria.[citation
needed] Citing the sources used and alternate readings, and reproducing the original text and images, helps readers
and other critics gauge the depth of the critic's research and independently verify their work.
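The interplay of these canons can be illustrated with a toy scoring sketch. The weights, dates, and readings below are entirely invented for demonstration; real editors balance the number and age of witnesses against lectio difficilior by judgment, not by formula.

```python
# Toy illustration of weighing textual canons against each other.
# All weights and data are hypothetical, purely for demonstration.

def score_reading(witness_dates, is_harder_reading, current_year=2024):
    """Score a variant reading: more witnesses and older witnesses count
    for more, and the 'harder' (less smooth) reading gets a bonus, in the
    spirit of lectio difficilior potior."""
    support = len(witness_dates)                                  # number of witnesses
    age_bonus = sum((current_year - d) / 1000 for d in witness_dates)
    difficulty_bonus = 1.5 if is_harder_reading else 0.0
    return support + age_bonus + difficulty_bonus

# Reading A: many late witnesses, smooth/harmonized text.
# Reading B: only two early witnesses, awkward (harder) text.
a = score_reading([1450, 1500, 1520, 1530], is_harder_reading=False)
b = score_reading([350, 420], is_harder_reading=True)

preferred = "B" if b > a else "A"   # here the early, harder reading wins
```

Under these invented weights, two early witnesses with the harder reading outweigh four late witnesses with the smoother one, illustrating why a majority of witnesses does not automatically decide the question.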
Stemmatics, stemmology or stemmatology is a rigorous approach to textual criticism. Karl Lachmann (1793–1851) greatly
contributed to making this method famous, even though he did not invent it. The method takes its name from the word
stemma. The Ancient Greek word στέμματα and its loanword in classical Latin stemmata may refer to "family trees".
In this specialized sense, a stemma shows the relationships of the surviving witnesses (the first known example of such a stemma,
albeit with the name, dates from 1827). The family tree is also referred to as a cladogram. The method works from
the principle that "community of error implies community of origin." That is, if two witnesses have a number of errors
in common, it may be presumed that they were derived from a common intermediate source, called a hyparchetype. Relations
between the lost intermediates are determined by the same process, placing all extant manuscripts in a family tree
or stemma codicum descended from a single archetype. The process of constructing the stemma is called recension,
or the Latin recensio. The process of selectio resembles eclectic textual criticism, but applied to a restricted
set of hypothetical hyparchetypes. The steps of examinatio and emendatio resemble copy-text editing. In fact, the
other techniques can be seen as special cases of stemmatics in which a rigorous family history of the text cannot
be determined but only approximated. If it seems that one manuscript is by far the best text, then copy text editing
is appropriate, and if it seems that a group of manuscripts are good, then eclecticism on that group would be proper.
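The principle that "community of error implies community of origin" can be sketched computationally. In this hypothetical example, each witness records a reading at four numbered loci; witnesses sharing the most departures from a presumed archetypal reading are grouped as likely descendants of a common lost hyparchetype.

```python
# Sketch of "community of error implies community of origin".
# The archetype, witnesses, and readings are invented for illustration.
from itertools import combinations

archetype = {1: "light", 2: "world", 3: "shine", 4: "men"}

witnesses = {
    "W1": {1: "light", 2: "world", 3: "shine", 4: "men"},  # no errors
    "W2": {1: "light", 2: "word",  3: "shine", 4: "man"},  # errors at 2, 4
    "W3": {1: "light", 2: "word",  3: "shone", 4: "man"},  # errors at 2, 3, 4
    "W4": {1: "light", 2: "world", 3: "shine", 4: "man"},  # error at 4 only
}

def errors(witness):
    """Loci where a witness departs from the presumed archetype."""
    return {locus for locus, reading in witness.items()
            if reading != archetype[locus]}

# Compare witnesses pairwise by the errors they share; a large shared
# set suggests descent from a common intermediate (a hyparchetype).
shared = {
    (a, b): errors(witnesses[a]) & errors(witnesses[b])
    for a, b in combinations(sorted(witnesses), 2)
}

closest_pair = max(shared, key=lambda pair: len(shared[pair]))
```

Here W2 and W3 share errors at loci 2 and 4, so they would be placed under one hyparchetype, while W4's single isolated error at locus 4 could equally have arisen independently; deciding which agreements are genuinely genealogical is exactly where the critic's judgment re-enters.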
The critic Joseph Bédier (1864–1938) launched a particularly withering attack on stemmatics in 1928. He surveyed
editions of medieval French texts that were produced with the stemmatic method, and found that textual critics tended
overwhelmingly to produce trees divided into just two branches. He concluded that this outcome was unlikely to have
occurred by chance, and that therefore, the method was tending to produce bipartite stemmas regardless of the actual
history of the witnesses. He suspected that editors tended to favor trees with two branches, as this would maximize
the opportunities for editorial judgment (as there would be no third branch to "break the tie" whenever the witnesses
disagreed). He also noted that, for many works, more than one reasonable stemma could be postulated, suggesting that
the method was not as rigorous or as scientific as its proponents had claimed. The stemmatic method's final step
is emendatio, also sometimes referred to as "conjectural emendation." But in fact, the critic employs conjecture
at every step of the process. Some of the method's rules that are designed to reduce the exercise of editorial judgment
do not necessarily produce the correct result. For example, where there are more than two witnesses at the same level
of the tree, normally the critic will select the dominant reading. However, it may be no more than fortuitous that
more witnesses have survived that present a particular reading. A plausible reading that occurs less often may, nevertheless,
be the correct one. The bibliographer Ronald B. McKerrow introduced the term copy-text in his 1904 edition of the
works of Thomas Nashe, defining it as "the text used in each particular case as the basis of mine." McKerrow was
aware of the limitations of the stemmatic method, and believed it was more prudent to choose one particular text
that was thought to be particularly reliable, and then to emend it only where the text was obviously corrupt. The
French critic Joseph Bédier likewise became disenchanted with the stemmatic method, and concluded that the editor
should choose the best available text, and emend it as little as possible. By 1939, in his Prolegomena for the Oxford
Shakespeare, McKerrow had changed his mind about this approach, as he feared that a later edition – even if it contained
authorial corrections – would "deviate more widely than the earliest print from the author's original manuscript."
He therefore concluded that the correct procedure would be "produced by using the earliest "good" print as copy-text
and inserting into it, from the first edition which contains them, such corrections as appear to us to be derived
from the author." But, fearing the arbitrary exercise of editorial judgment, McKerrow stated that, having concluded
that a later edition had substantive revisions attributable to the author, "we must accept all the alterations of
that edition, saving any which seem obvious blunders or misprints." Although W. W. Greg argued that an editor should be
free to use his judgment to choose between competing substantive readings, he suggested that an editor should defer
to the copy-text when "the claims of two readings ... appear to be exactly balanced. ... In such a case, while there
can be no logical reason for giving preference to the copy-text, in practice, if there is no reason for altering
its reading, the obvious thing seems to be to let it stand." The "exactly balanced" variants are said to be indifferent.
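Greg's tie-breaking rule for indifferent variants can be reduced to a minimal decision sketch. The readings below are hypothetical, and the editor_preference argument stands in for the editorial judgment that no formula can actually supply.

```python
# Sketch of the copy-text rule: take a later reading only when judgment
# positively favors it; when the claims of the readings are exactly
# balanced ("indifferent"), the copy-text reading stands by default.

def choose_reading(copy_text_reading, later_reading, editor_preference):
    """editor_preference: 'later', 'copy', or 'indifferent' -- a stand-in
    for the editor's judgment on the competing substantive readings."""
    if editor_preference == "later":
        return later_reading
    # Copy-text positively preferred, or the variants are indifferent:
    # in both cases the copy-text reading is retained.
    return copy_text_reading

# Hypothetical variants at two points in a text:
r1 = choose_reading("reading_x", "reading_y", "later")        # judgment favors the revision
r2 = choose_reading("reading_x", "reading_y", "indifferent")  # balanced: copy-text stands
```

The point of the rule is that the default only operates in the balanced case; it never overrides a positive editorial preference in either direction.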
Whereas Greg had limited his illustrative examples to English Renaissance drama, where his expertise lay, Bowers
argued that the rationale was "the most workable editorial principle yet contrived to produce a critical text that
is authoritative in the maximum of its details whether the author be Shakespeare, Dryden, Fielding, Nathaniel Hawthorne,
or Stephen Crane. The principle is sound without regard for the literary period." For works where an author's manuscript
survived – a case Greg had not considered – Bowers concluded that the manuscript should generally serve as copy-text.
He cited the example of Nathaniel Hawthorne in support. McKerrow had articulated textual criticism's goal in terms of
"our ideal of an author's fair copy of his work in its final state". Bowers asserted that editions founded on Greg's
method would "represent the nearest approximation in every respect of the author's final intentions." Bowers stated
similarly that the editor's task is to "approximate as nearly as possible an inferential authorial fair copy." Tanselle
notes that, "Textual criticism ... has generally been undertaken with a view to reconstructing, as accurately as
possible, the text finally intended by the author". Bowers and Tanselle argue for rejecting textual variants that
an author inserted at the suggestion of others. Bowers said that his edition of Stephen Crane's first novel, Maggie,
presented "the author's final and uninfluenced artistic intentions." In his writings, Tanselle refers to "unconstrained
authorial intention" or "an author's uninfluenced intentions." This marks a departure from Greg, who had merely suggested
that the editor inquire whether a later reading "is one that the author can reasonably be supposed to have substituted
for the former", not implying any further inquiry as to why the author had made the change. Bowers confronted a similar
problem in his edition of Maggie. Crane originally printed the novel privately in 1893. To secure commercial publication
in 1896, Crane agreed to remove profanity, but he also made stylistic revisions. Bowers's approach was to preserve
the stylistic and literary changes of 1896, but to revert to the 1893 readings where he believed that Crane was fulfilling
the publisher's intention rather than his own. There were, however, intermediate cases that could reasonably have
been attributed to either intention, and some of Bowers's choices came under fire – both as to his judgment, and
as to the wisdom of conflating readings from the two different versions of Maggie. Some critics believe that a clear-text
edition gives the edited text too great a prominence, relegating textual variants to appendices that are difficult
to use, and suggesting a greater sense of certainty about the established text than it deserves. As Shillingsburg
notes, "English scholarly editions have tended to use notes at the foot of the text page, indicating, tacitly, a
greater modesty about the 'established' text and drawing attention more forcibly to at least some of the alternative
forms of the text". Cladistics is a technique borrowed from biology, where it was originally named phylogenetic systematics
by Willi Hennig. In biology, the technique is used to determine the evolutionary relationships between different
species. In its application in textual criticism, the text of a number of different manuscripts is entered into a
computer, which records all the differences between them. The manuscripts are then grouped according to their shared
characteristics. The difference between cladistics and more traditional forms of statistical analysis is that, rather
than simply arranging the manuscripts into rough groupings according to their overall similarity, cladistics assumes
that they are part of a branching family tree and uses that assumption to derive relationships between them. This
makes it more like an automated approach to stemmatics. However, where there is a difference, the computer does not
attempt to decide which reading is closer to the original text, and so does not indicate which branch of the tree
is the "root"—which manuscript tradition is closest to the original. Other types of evidence must be used for that
purpose. Although some earlier unpublished studies had been prepared, not until the early 1970s was true textual
criticism applied to the Book of Mormon. At that time BYU Professor Ellis Rasmussen and his associates were asked
by the LDS Church to begin preparation for a new edition of the Holy Scriptures. One aspect of that effort entailed
digitizing the text and preparing appropriate footnotes; another required establishing the most dependable
text. To that latter end, Stanley R. Larson (a Rasmussen graduate student) set about applying modern text critical
standards to the manuscripts and early editions of the Book of Mormon as his thesis project – which he completed
in 1974. To that end, Larson carefully examined the Original Manuscript (the one dictated by Joseph Smith to his
scribes) and the Printer’s Manuscript (the copy Oliver Cowdery prepared for the Printer in 1829–1830), and compared
them with the 1st, 2nd, and 3rd editions of the Book of Mormon to determine what sort of changes had occurred over
time and to make judgments as to which readings were the most original. Larson proceeded to publish a useful set
of well-argued articles on the phenomena which he had discovered. Many of his observations were included as improvements
in the 1981 LDS edition of the Book of Mormon. By 1979, with the establishment of the Foundation for Ancient Research
and Mormon Studies (FARMS) as a California non-profit research institution, an effort led by Robert F. Smith began
to take full account of Larson’s work and to publish a Critical Text of the Book of Mormon. Thus was born the FARMS
Critical Text Project, which published the first volume of the three-volume Book of Mormon Critical Text in 1984. The
third volume of that first edition was published in 1987, but was already being superseded by a second, revised edition
of the entire work, greatly aided by the advice and assistance of then-Yale doctoral candidate Grant Hardy,
Dr. Gordon C. Thomasson, Professor John W. Welch (the head of FARMS), Professor Royal Skousen, and others too numerous
to mention here. However, these were merely preliminary steps to a far more exacting and all-encompassing project.
In 1988, with that preliminary phase of the project completed, Professor Skousen took over as editor and head of
the FARMS Critical Text of the Book of Mormon Project and proceeded to gather still scattered fragments of the Original
Manuscript of the Book of Mormon and to have advanced photographic techniques applied to obtain fine readings from
otherwise unreadable pages and fragments. He also closely examined the Printer’s Manuscript (owned by the Community
of Christ—RLDS Church in Independence, Missouri) for differences in types of ink or pencil, in order to determine
when and by whom changes were made. He also collated the various editions of the Book of Mormon down to the present
to see what sorts of changes have been made through time. Shemaryahu Talmon, summarizing the degree of consensus on the genetic relation of the surviving witnesses to the Urtext of the Hebrew Bible, concluded that major divergences which intrinsically affect the sense are extremely rare. As far as the Hebrew Bible underlying the Old Testament is concerned, almost all of the textual variants are fairly insignificant and hardly affect any doctrine. Professor Douglas Stuart states: "It
is fair to say that the verses, chapters, and books of the Bible would read largely the same, and would leave the
same impression with the reader, even if one adopted virtually every possible alternative reading to those now serving
as the basis for current English translations." While textual criticism developed into a discipline of thorough analysis
of the Bible — both the Hebrew Bible and the New Testament — scholars also use it to determine the original content
of classical texts, such as Plato's Republic. There are far fewer witnesses to classical texts than to the Bible, so
scholars can use stemmatics and, in some cases, copy-text editing. However, unlike the New Testament, where the earliest
witnesses are within 200 years of the original, the earliest existing manuscripts of most classical texts were written
about a millennium after their composition. All things being equal, textual scholars expect that a larger time gap
between an original and a manuscript means more changes in the text. Scientific and critical editions can be protected by copyright as works of authorship if they display sufficient creativity/originality. The mere addition of a word, or substitution of a term with another believed to be more correct, usually does not reach that level of originality/creativity. The notes accounting for the analysis, and for why and how such changes have been made, represent a different work that is autonomously copyrightable if the other requirements are satisfied. In the European Union, critical and scientific editions may also be protected by the relevant neighboring right that protects critical and scientific publications
of public domain works as made possible by art. 5 of the Copyright Term Directive. Not all EU member States have
transposed art. 5 into national law.
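The cladistic grouping described above can be sketched in a few lines of Python. This is a minimal illustration only, not a real phylogenetics tool: the manuscript sigla and variant readings are invented, and the tree is built by simple single-linkage agglomeration rather than a true parsimony analysis.

```python
from itertools import combinations

# Hypothetical readings at five collation points in four witnesses.
manuscripts = {
    "A": ["god", "said", "unto", "the", "people"],
    "B": ["god", "spake", "unto", "the", "people"],
    "C": ["lord", "spake", "unto", "ye", "people"],
    "D": ["lord", "spake", "to",   "ye", "people"],
}

def distance(x, y):
    """Number of collation points where two witnesses disagree."""
    return sum(a != b for a, b in zip(manuscripts[x], manuscripts[y]))

# Repeatedly join the two closest groups, yielding a branching
# (unrooted) family tree of shared characteristics.
clusters = [(name,) for name in manuscripts]
tree = []
while len(clusters) > 1:
    i, j = min(
        combinations(range(len(clusters)), 2),
        key=lambda p: min(distance(a, b)
                          for a in clusters[p[0]] for b in clusters[p[1]]),
    )
    tree.append((clusters[i], clusters[j]))
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

print(tree)
```

With these invented readings, A groups with B and C with D, reflecting their shared variants; as the article notes, nothing in the procedure says which branch is closest to the original.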
A gramophone record (phonograph record in American English) or vinyl record, commonly known as a "record", is an analogue
sound storage medium in the form of a flat polyvinyl chloride (previously shellac) disc with an inscribed, modulated
spiral groove. The groove usually starts near the periphery and ends near the center of the disc. Phonograph records
are generally described by their diameter in inches (12", 10", 7"), the rotational speed in rpm at which they are
played (16 2⁄3, 33 1⁄3, 45, 78), and their time capacity resulting from a combination of those parameters (LP – long
playing 33 1⁄3 rpm, SP – 78 rpm single, EP – 12-inch single or extended play, 33 or 45 rpm); their reproductive quality
or level of fidelity (high-fidelity, orthophonic, full-range, etc.), and the number of audio channels provided (mono,
stereo, quad, etc.). The phonograph disc record was the primary medium used for music reproduction until late in
the 20th century, replacing by the late 1920s the phonograph cylinder record, with which it had co-existed since the late 1880s. Records retained the largest market share even when new formats such as the compact cassette
were mass-marketed. By the late 1980s, digital media, in the form of the compact disc, had gained a larger market
share, and the vinyl record left the mainstream in 1991. From the 1990s to the 2010s, records continued to be manufactured
and sold on a much smaller scale, and were especially used by disc jockeys (DJs), released by artists in some genres,
and listened to by a niche market of audiophiles. The phonograph record has made a niche resurgence in the early
21st century – 9.2 million records were sold in the U.S. in 2014, a 260% increase since 2009. Likewise, in the UK
sales have increased five-fold from 2009 to 2014. The phonautograph, patented by Léon Scott in 1857, used a vibrating
diaphragm and stylus to graphically record sound waves as tracings on sheets of paper, purely for visual analysis
and without any intent of playing them back. In the 2000s, these tracings were first scanned by audio engineers and
digitally converted into audible sound. Phonautograms of singing and speech made by Scott in 1860 were played back
as sound for the first time in 2008. Along with a tuning fork tone and unintelligible snippets recorded as early
as 1857, these are the earliest known recordings of sound. In 1877, Thomas Edison invented the phonograph. Unlike
the phonautograph, it was capable of both recording and reproducing sound. Despite the similarity of name, there
is no documentary evidence that Edison's phonograph was based on Scott's phonautograph. Edison first tried recording
sound on a wax-impregnated paper tape, with the idea of creating a "telephone repeater" analogous to the telegraph
repeater he had been working on. Although the visible results made him confident that sound could be physically recorded
and reproduced, his notes do not indicate that he actually reproduced sound before his first experiment in which
he used tinfoil as a recording medium several months later. The tinfoil was wrapped around a grooved metal cylinder
and a sound-vibrated stylus indented the tinfoil while the cylinder was rotated. The recording could be played back
immediately. The Scientific American article that introduced the tinfoil phonograph to the public mentioned Marey,
Rosapelly and Barlow as well as Scott as creators of devices for recording but, importantly, not reproducing sound.
Edison also invented variations of the phonograph that used tape and disc formats. Numerous applications for the
phonograph were envisioned, but although it enjoyed a brief vogue as a startling novelty at public demonstrations,
the tinfoil phonograph proved too crude to be put to any practical use. A decade later, Edison developed a greatly
improved phonograph that used a hollow wax cylinder instead of a foil sheet. This proved to be both a better-sounding
and far more useful and durable device. The wax phonograph cylinder created the recorded sound market at the end
of the 1880s and dominated it through the early years of the 20th century. Lateral-cut disc records were developed
in the United States by Emile Berliner, who named his system the "gramophone", distinguishing it from Edison's wax
cylinder "phonograph" and Columbia's wax cylinder "graphophone". Berliner's earliest discs, first marketed in 1889,
but only in Europe, were 5 inches (13 cm) in diameter, and were played with a small hand-propelled machine. Both
the records and the machine were adequate only for use as a toy or curiosity, due to the limited sound quality. In
the United States in 1894, under the Berliner Gramophone trademark, Berliner started marketing records with somewhat
more substantial entertainment value, along with somewhat more substantial gramophones to play them. Berliner's records
had poor sound quality compared to wax cylinders, but his manufacturing associate Eldridge R. Johnson eventually
improved the sound quality. Abandoning Berliner's "Gramophone" trademark for legal reasons, in 1901 Johnson's and
Berliner's separate companies reorganized to form the Victor Talking Machine Company, whose products would come to
dominate the market for many years. Emile Berliner moved his company to Montreal in 1900. The factory which became
RCA Victor still exists, and there is a museum dedicated to Berliner in Montreal. In 1901, 10-inch disc records were
introduced, followed in 1903 by 12-inch records. These could play for more than three and four minutes respectively,
while contemporary cylinders could only play for about two minutes. In an attempt to head off the disc advantage,
Edison introduced the Amberol cylinder in 1909, with a maximum playing time of 4½ minutes (at 160 rpm); it was in turn superseded by the Blue Amberol Record, whose playing surface was made of celluloid, a far less fragile plastic. Despite these improvements, during the 1910s discs decisively won this early format war, although
Edison continued to produce new Blue Amberol cylinders for an ever-dwindling customer base until late in 1929. By
1919 the basic patents for the manufacture of lateral-cut disc records had expired, opening the field for countless
companies to produce them. Analog disc records would dominate the home entertainment market until they were outsold
by the digital compact disc in the late 1980s (which was in turn supplanted by digital audio recordings distributed
via online music stores and Internet file sharing). Early recordings were made entirely acoustically, the sound being
collected by a horn and piped to a diaphragm, which vibrated the cutting stylus. Sensitivity and frequency range
were poor, and frequency response was very irregular, giving acoustic recordings an instantly recognizable tonal
quality. A singer practically had to put his or her face in the recording horn. Lower-pitched orchestral instruments
such as cellos and double basses were often doubled (or replaced) by louder wind instruments, such as tubas. Standard
violins in orchestral ensembles were commonly replaced by Stroh violins, which became popular with recording studios.
Contrary to popular belief, if placed properly and prepared-for, drums could be effectively used and heard on even
the earliest jazz and military band recordings. The loudest instruments such as the drums and trumpets were positioned
the farthest away from the collecting horn. Lillian Hardin Armstrong, a member of King Oliver's Creole Jazz Band,
which recorded at Gennett Records in 1923, remembered that at first Oliver and his young second trumpet, Louis Armstrong,
stood next to each other and Oliver's horn could not be heard. "They put Louis about fifteen feet over in the corner,
looking all sad." For fading instrumental parts in and out while recording, some performers were placed on a moveable
platform, which could draw the performer(s) nearer or further away as required. During the first
half of the 1920s, engineers at Western Electric, as well as independent inventors such as Orlando Marsh, developed
technology for capturing sound with a microphone, amplifying it with vacuum tubes, then using the amplified signal
to drive an electromagnetic recording head. Western Electric's innovations resulted in a greatly expanded and more
even frequency response, creating a dramatically fuller, clearer and more natural-sounding recording. Distant or
less strong sounds that were impossible to record by the old methods could now be captured. Volume was now limited
only by the groove spacing on the record and the limitations of the intended playback device. Victor and Columbia
licensed the new electrical system from Western Electric and began issuing electrically recorded discs in 1925. The
first classical recording was of Chopin impromptus and Schubert's Litanei by Alfred Cortot for Victor. Electrical
recording preceded electrical home reproduction because of the initial high cost of the new system. In 1925, the
Victor company introduced the Victor Orthophonic Victrola, an acoustical record player that was specifically designed
to play electrically recorded discs, as part of a line that also included electrically reproducing Electrolas. The
acoustical Orthophonics ranged in price from US$95 to US$300, depending on cabinetry; by comparison, the cheapest
Electrola cost US$650, the price of a new Ford automobile in an era when clerical jobs paid about $20 a week. The
earliest disc records (1889–1894) were made of various materials including hard rubber. Around 1895, a shellac-based
compound was introduced and became standard. Exact formulas for this compound varied by manufacturer and over the
course of time, but it was typically composed of about one-third shellac and about two-thirds mineral filler, which
meant finely pulverized rock, usually slate and limestone, with an admixture of cotton fibers to add tensile strength,
carbon black for color (without this, it tended to be a "dirty" gray or brown color that most record companies considered
unattractive), and a very small amount of a lubricant to facilitate mold release during manufacture. Some makers,
notably Columbia Records, used a laminated construction with a core disc of coarser material or fiber. The production
of shellac records continued until the end of the 78 rpm format (i.e., the late 1950s in most developed countries,
but well into the 1960s in some other places), but increasingly less abrasive formulations were used during its declining
years, and very late examples in truly like-new condition can have noise levels as low as vinyl. Flexible or so-called
"unbreakable" records made of unusual materials were introduced by a number of manufacturers at various times during
the 78 rpm era. In the UK, Nicole records, made of celluloid or a similar substance coated onto a cardboard core
disc, were produced for a few years beginning in 1904, but they suffered from an exceptionally high level of surface
noise. In the United States, Columbia Records introduced flexible, fiber-cored "Marconi Velvet Tone Record" pressings
in 1907, but the advantages and longevity of their relatively noiseless surfaces depended on the scrupulous use of
special gold-plated Marconi Needles and the product was not a success. Thin, flexible plastic records such as the
German Phonycord and the British Filmophone and Goodson records appeared around 1930 but also did not last long.
The contemporary French Pathé Cellodiscs, made of a very thin black plastic uncannily resembling the vinyl
"sound sheet" magazine inserts of the 1965–1985 era, were similarly short-lived. In the US, Hit of the Week records,
made of a patented translucent plastic called Durium coated on a heavy brown paper base, were introduced in early
1930. A new issue came out every week and they were sold at newsstands like a weekly magazine. Although inexpensive
and commercially successful at first, they soon fell victim to the Great Depression and production in the US ended
in 1932. Related Durium records continued to be made somewhat later in the UK and elsewhere, and as remarkably late
as 1950 in Italy, where the name "Durium" survived far into the LP era as a trademark on ordinary vinyl records.
Despite all these attempts at innovation, shellac compounds continued to be used for the overwhelming majority of
commercial 78 rpm records during the lifetime of the format. In 1931, RCA Victor introduced their vinyl-based Victrolac
compound as a material for some unusual-format and special-purpose records. By the end of the 1930s vinyl's advantages
of light weight, relative unbreakability and low surface noise had made it the material of choice for prerecorded
radio programming and other critical applications. When it came to ordinary 78 rpm records, however, the much higher
cost of the raw material, as well as its vulnerability to the heavy pickups and crudely mass-produced steel needles
still commonly used in home record players, made its general substitution for shellac impractical at that time. During
the Second World War, the United States Armed Forces produced thousands of 12-inch vinyl 78 rpm V-Discs for use by
the troops overseas. After the war, the wider use of vinyl became more practical as new record players with relatively
lightweight crystal pickups and precision-ground styli made of sapphire or an exotic osmium alloy proliferated. In
late 1945, RCA Victor began offering special transparent red vinyl De Luxe pressings of some classical 78s, at a
de luxe price. Later, Decca Records introduced vinyl Deccalite 78s, while other record companies came up with vinyl
concoctions such as Metrolite, Merco Plastic and Sav-o-flex, but these were mainly used to produce "unbreakable"
children's records and special thin vinyl DJ pressings for shipment to radio stations. In the 1890s, the earliest (toy) discs were mainly 12.5 cm (nominally five inches) in diameter; by the mid-1890s, discs were usually 7 in (nominally 17.5 cm) in diameter. By 1910 the 10-inch (25.4 cm) record was by far the most
popular standard, holding about three minutes of music or other entertainment on a side. From 1903 onwards, 12-inch
records (30.5 cm) were also sold commercially, mostly of classical music or operatic selections, with four to five
minutes of music per side. Victor, Brunswick and Columbia also issued 12-inch popular medleys, usually spotlighting
a Broadway show score. However, other sizes did appear. Eight-inch discs with a 2-inch-diameter (51 mm) label became
popular for about a decade in Britain, but they cannot be played in full on most modern record players because the
tone arm cannot travel far enough in toward the center without modification of the equipment. The playing time of a
phonograph record depended on the turntable speed and the groove spacing. At the beginning of the 20th century, the
early discs played for two minutes, the same as early cylinder records. The 12-inch disc, introduced by Victor in
1903, increased the playing time to three and a half minutes. Because a 10-inch 78 rpm record could hold about three
minutes of sound per side and the 10-inch size was the standard size for popular music, almost all popular recordings
were limited to around three minutes in length. For example, when King Oliver's Creole Jazz Band, including Louis
Armstrong on his first recordings, recorded 13 sides at Gennett Records in Richmond, Indiana, in 1923, one side was
2:09 and four sides were 2:52–2:59. In January 1938, Milt Gabler started recording for his new label, Commodore Records,
and to allow for longer continuous performances, he recorded some 12-inch records. Eddie Condon explained: "Gabler
realized that a jam session needs room for development." The first two 12-inch recordings did not take advantage
of the extra length: "Carnegie Drag" was 3:15; "Carnegie Jump", 2:41. But at the second session, on April 30, the
two 12-inch recordings were longer: "Embraceable You" was 4:05; "Serenade to a Shylock", 4:32. Another way around
the time limitation was to issue a selection on both sides of a single record. Vaudeville stars Gallagher and Shean
recorded "Mr. Gallagher and Mr. Shean", written by Irving and Jack Kaufman, as two sides of a 10-inch 78 in 1922
for Cameo. An obvious workaround for longer recordings was to release a set of records. An early multi-record release
was in 1903, when HMV in England made the first complete recording of an opera, Verdi's Ernani, on 40 single-sided
discs. In 1940, Commodore released Eddie Condon and his Band's recording of "A Good Man Is Hard to Find" in four
parts, issued on both sides of two 12-inch 78s. This limitation on the duration of recordings persisted from 1910
until the invention of the LP record, in 1948. In popular music, this time limitation of about 3:30 on a 10-inch
78 rpm record meant that singers usually did not release long pieces on record. One exception is Frank Sinatra's
recording of Rodgers and Hammerstein's "Soliloquy", from Carousel, made on May 28, 1946. Because it ran 7:57, longer
than both sides of a standard 78 rpm 10-inch record, it was released on Columbia's Masterwork label (the classical
division) as two sides of a 12-inch record. The same was true of John Raitt's performance of the song on the original
cast album of Carousel, which had been issued on a 78-rpm album set by American Decca in 1945. German record company
Odeon is often said to have pioneered the album in 1909 when it released the Nutcracker Suite by Tchaikovsky on 4
double-sided discs in a specially designed package. (It is not indicated what size the records are.) However, Deutsche
Grammophon had produced an album for its complete recording of the opera Carmen in the previous year. The practice
of issuing albums does not seem to have been widely taken up by other record companies for many years; however, HMV
provided an album, with a pictorial cover, for the 1917 recording of The Mikado (Gilbert & Sullivan). By about 1910,[note
1] bound collections of empty sleeves with a paperboard or leather cover, similar to a photograph album, were sold
as record albums that customers could use to store their records (the term "record album" was printed on some covers).
These albums came in both 10-inch and 12-inch sizes. The covers of these bound books were wider and taller than the
records inside, allowing the record album to be placed on a shelf upright, like a book, suspending the fragile records
above the shelf and protecting them. In the 1930s, record companies began issuing collections of 78 rpm records by
one performer or of one type of music in specially assembled albums, typically with artwork on the front cover and
liner notes on the back or inside cover. Most albums included three or four records, with two sides each, making
six or eight tunes per album. When the 12-inch vinyl LP era began in 1948, the single record often had the same or
similar number of tunes as a typical album of 78s, and was still often referred to as an "album". For collectable
or nostalgia purposes, or for the benefit of higher-quality audio playback provided by the 78 rpm speed with newer
vinyl records and their lightweight stylus pickups, a small number of 78 rpm records have been released since the
major labels ceased production. One of the first attempts at this was in the 1950s, when inventor Ewing Dunbar Nunn
founded the label Audiophile Records, which released, in addition to standard 33 1/3 rpm LPs, 78 rpm-mastered albums
that were microgroove and pressed on vinyl (as opposed to traditional 78s, with their shellac composition and wider
3-mil sized grooves). This was done by the label mainly to take advantage of the wider audio frequency response that
faster speeds like 78 rpm can provide for vinyl microgroove records, hence the label's name (obviously catering to
the audiophiles of the 1950s "hi-fi" era, when stereo gear could provide a much wider range of audio than before).
Also in the late 1950s, Bell Records released a few budget-priced 7" microgrooved records at 78 rpm. In 1968, Reprise
planned to release a series of 78 rpm singles from their artists on their label at the time, called the Reprise Speed
Series. Only one disc actually saw release, Randy Newman's "I Think It's Going to Rain Today", a track from his self-titled debut album (with "The Beehive State" on the flip side). Reprise did not proceed further with the series due to a lack
of sales for the single, and a lack of general interest in the concept. Guitarist & vocalist Leon Redbone released
a promotional 78 rpm record in 1978 featuring two songs ("Alabama Jubilee" and "Please Don't Talk About Me When I'm Gone") from his Champagne Charlie album. In 1980, Stiff Records in the United Kingdom issued a 78 by Joe "King" Carrasco containing the songs "Buena" (Spanish for "good", with the alternate spelling "Bueno" on the label) and "Tuff Enuff".
Underground comic cartoonist and 78 rpm record collector Robert Crumb released three discs with his Cheap Suit Serenaders
in the 1980s. In the 1990s Rhino Records issued a series of boxed sets of 78 rpm reissues of early rock and roll
hits, intended for owners of vintage jukeboxes. This was a disaster because Rhino did not warn customers that their
records were made of vinyl, and that the vintage 78 RPM juke boxes were designed with heavy tone arms and steel needles
to play the hard shellac records of their time. This failure to warn customers gave the Rhino 78 records a bad reputation, as they were destroyed by old juke boxes and old record players but played very well on newer 78-capable
turntables with modern lightweight tone arms and jewel needles. In 1931, RCA Victor launched the first commercially
available vinyl long-playing record, marketed as "program-transcription" discs. These discs were designed for playback at 33 1⁄3 rpm, were pressed on a flexible 30 cm diameter plastic disc, and offered about ten minutes of
playing time per side. RCA Victor's early introduction of a long-play disc was a commercial failure for several reasons
including the lack of affordable, reliable consumer playback equipment and consumer wariness during the Great Depression.
Because of financial hardships that plagued the recording industry during that period (and RCA's own parched revenues),
Victor's long-playing records were discontinued by early 1933. Vinyl's lower surface noise level than shellac was
not forgotten, nor was its durability. In the late 1930s, radio commercials and pre-recorded radio programs being
sent to disc jockeys started being stamped in vinyl, so they would not break in the mail. In the mid-1940s, special
DJ copies of records started being made of vinyl also, for the same reason. These were all 78 rpm. During and after
World War II, when shellac supplies were extremely limited, some 78 rpm records were pressed in vinyl instead of
shellac, particularly the six-minute 12-inch (30 cm) 78 rpm records produced by V-Disc for distribution to United
States troops in World War II. In the 1940s, radio transcriptions, which were usually on 16-inch records, but sometimes
12-inch, were always made of vinyl and cut at 33 1⁄3 rpm. Shorter transcriptions were often cut at 78 rpm. Beginning
in 1939, Dr. Peter Goldmark and his staff at Columbia Records and at CBS Laboratories undertook efforts to address
problems of recording and playing back narrow grooves and developing an inexpensive, reliable consumer playback system.
The effort took about eight years, interrupted for a time by World War II. Finally, the 12-inch (30
cm) Long Play (LP) 33 1⁄3 rpm microgroove record album was introduced by the Columbia Record Company at a New York
press conference on June 18, 1948. Unwilling to accept and license Columbia's system, in February 1949 RCA Victor,
in cooperation with its parent, the Radio Corporation of America, released the first 45 rpm single, 7 inches in diameter
with a large center hole. The 45 rpm player included a changing mechanism that allowed multiple disks to be stacked,
much as a conventional changer handled 78s. The short playing time of a single 45 rpm side meant that long works,
such as symphonies, had to be released on multiple 45s instead of a single LP, but RCA claimed that the new high-speed
changer rendered side breaks so brief as to be inaudible or inconsequential. Early 45 rpm records were made from
either vinyl or polystyrene. They had a playing time of eight minutes. One early attempt at lengthening the playing
time should be mentioned. At least one manufacturer in the early 1920s, World Records, produced records that played
at a constant linear velocity, controlled by Noel Pemberton Billing's patented add-on governor device. As these were
played from the outside to the inside, the rotational speed of the records increased as reproduction progressed.
This action is similar (although in reverse) to that on the modern compact disc and the CLV version of its predecessor,
the Philips Laser Disc. In 1925, 78.26 rpm was chosen as the standard because of the introduction of the electrically
powered synchronous turntable motor. This motor ran at 3600 rpm, such that a 46:1 gear ratio would produce 78.26
rpm. In parts of the world that used 50 Hz current, the standard was 77.92 rpm (3,000 rpm with a 77:2 ratio), which
was also the speed at which a strobe disc with 77 lines would "stand still" in 50 Hz light (92 lines for 60 Hz).
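The speed arithmetic above can be verified directly. A minimal sketch (the motor speeds, gear ratios, and strobe line counts are those quoted in the text; the assumption that a strobe lamp flashes twice per mains cycle is standard behavior for a neon lamp):

```python
# Check the turntable-speed figures quoted above: a synchronous motor
# geared down to the platter, and a strobe disc that appears stationary
# when its line count matches the mains flash rate.

def geared_speed(motor_rpm: float, gear_ratio: float) -> float:
    """Platter rpm produced by a synchronous motor through a gear ratio."""
    return motor_rpm / gear_ratio

def strobe_speed(mains_hz: float, lines: int) -> float:
    """Platter rpm at which a strobe disc with `lines` marks appears
    stationary under mains lighting (two flashes per AC cycle)."""
    return (2 * mains_hz * 60) / lines

print(geared_speed(3600, 46))      # 60 Hz motor, 46:1 gearing -> ~78.26 rpm
print(geared_speed(3000, 77 / 2))  # 50 Hz motor, 77:2 gearing -> ~77.92 rpm
print(strobe_speed(50, 77))        # 77-line strobe, 50 Hz light -> ~77.92 rpm
print(strobe_speed(60, 92))        # 92-line strobe, 60 Hz light -> ~78.26 rpm
```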
After World War II these records were retroactively known as 78s, to distinguish them from other newer disc record
formats. Earlier they were just called records, or when there was a need to distinguish them from cylinders, disc
records. The older 78 format continued to be mass-produced alongside the newer formats using new materials until
about 1960 in the U.S., and in a few countries, such as India (where some Beatles recordings were issued on 78),
into the 1960s. For example, Columbia Records' last reissue of Frank Sinatra songs on 78 rpm records was an album
called Young at Heart, issued November 1, 1954. As late as the 1970s, some children's records were released at the
78 rpm speed. In the United Kingdom, the 78 rpm single lasted longer than in the United States and the 45 rpm took
longer to become popular. The 78 rpm was overtaken in popularity by the 45 rpm in the late 1950s, as teenagers became
increasingly affluent. Some of Elvis Presley's early singles on Sun Records might have sold more copies on 78 than
on 45. This is because the majority of those sales in 1954–55 were to the "hillbilly" market in the South and Southwestern
United States, where replacing the family 78 rpm player with a new 45 rpm player was a luxury few could afford at
the time. By the end of 1957, RCA Victor announced that 78s accounted for less than 10% of Presley's singles sales,
effectively signaling the demise of the 78 rpm format. The last Presley single released on 78 in the United
States was RCA Victor 20-7410, I Got Stung/One Night (1958), while the last 78 in the UK was RCA 1194, A Mess Of
Blues/Girl Of My Best Friend (1960). After World War II, two new competing formats came onto the market and gradually
replaced the standard "78": the 33 1⁄3 rpm (often just referred to as the 33 rpm), and the 45 rpm (see above). The
33 1⁄3 rpm LP (for "long-play") format was developed by Columbia Records and marketed in June 1948. RCA Victor developed
the 45 rpm format and marketed it in March 1949, each company having pursued its own R&D in secret. Both types of new
disc used narrower grooves, intended to be played with a smaller stylus—typically 0.001 inches (25 µm) wide, compared to 0.003
inches (76 µm) for a 78—so the new records were sometimes called Microgroove. In the mid-1950s all record companies
agreed to a common recording standard called RIAA equalization. Prior to the establishment of the standard each company
used its own preferred standard, requiring discriminating listeners to use pre-amplifiers with multiple selectable
equalization curves. Some recordings, such as books for the blind, were pressed at 16 2⁄3 rpm. Prestige Records released
jazz records in this format in the late 1950s; for example, two of their Miles Davis albums were paired together
in this format. Peter Goldmark, the man who developed the 33 1⁄3 rpm record, developed the Highway Hi-Fi 16 2⁄3 rpm
record to be played in Chrysler automobiles, but poor performance of the system and weak implementation by Chrysler
and Columbia led to the demise of the 16 2⁄3 rpm records. Subsequently, the 16 2⁄3 rpm speed was used for narrated
publications for the blind and visually impaired, which were never widely commercially available, although it was common
to see new turntable models with a 16 rpm speed setting produced as late as the 1970s. The commercial rivalry between
RCA Victor and Columbia Records led to RCA Victor's introduction of what it had intended to be a competing vinyl
format, the 7-inch (175 mm) 45 rpm disc. For a two-year period from 1948 to 1950, record companies and consumers
faced uncertainty over which of these formats would ultimately prevail in what was known as the "War of the Speeds".
(See also format war.) In 1949 Capitol and Decca adopted the new LP format and RCA gave in and issued its first LP
in January 1950. The 45 rpm size was gaining in popularity, too, and Columbia issued its first 45s in February 1951.
By 1954, 200 million 45s had been sold. Eventually the 12-inch (300 mm) 33 1⁄3 rpm LP prevailed as the predominant
format for musical albums, and 10-inch LPs were no longer issued. The last Columbia Records reissue of any Frank
Sinatra songs on a 10-inch LP record was an album called Hall of Fame, CL 2600, issued on October 26, 1956, containing
six songs, one each by Tony Bennett, Rosemary Clooney, Johnnie Ray, Frank Sinatra, Doris Day, and Frankie Laine.
The 10-inch LP however had a longer life in the United Kingdom, where important early British rock and roll albums
such as Lonnie Donegan's Lonnie Donegan Showcase and Billy Fury's The Sound of Fury were released in that form. The
7-inch (175 mm) 45 rpm disc or "single" established a significant niche for shorter duration discs, typically containing
one item on each side. The 45 rpm discs typically emulated the playing time of the former 78 rpm discs, while the
12-inch LP discs eventually provided up to one half-hour of recorded material per side. The 45 rpm discs also came
in a variety known as extended play (EP), which achieved up to 10–15 minutes play at the expense of attenuating (and
possibly compressing) the sound to reduce the width required by the groove. EP discs were cheaper to produce, and
were used in cases where unit sales were likely to be more limited or to reissue LP albums on the smaller format
for those people who had only 45 rpm players. LP albums could be purchased one EP at a time, with four items per EP,
or in a boxed set of three EPs, twelve items in all. The large center hole on 45s allows for easier handling by jukebox mechanisms.
EPs were generally discontinued by the late 1950s in the U.S. as three- and four-speed record players replaced the
individual 45 players. One indication of the decline of the 45 rpm EP is that the last Columbia Records reissue of
Frank Sinatra songs on 45 rpm EP records, called Frank Sinatra (Columbia B-2641) was issued on December 7, 1959.
The EP lasted considerably longer in Europe, and was a popular format during the 1960s for recordings by artists
such as Serge Gainsbourg and the Beatles. From the mid-1950s through the 1960s, in the U.S. the common home record
player or "stereo" (after the introduction of stereo recording) would typically have had these features: a three-
or four-speed player (78, 45, 33 1⁄3, and sometimes 16 2⁄3 rpm); a changer, with a tall spindle that would hold several
records and automatically drop a new record on top of the previous one when it had finished playing; a combination
cartridge with both 78 and microgroove styli and a way to flip between the two; and some kind of adapter for playing
the 45s with their larger center hole. The adapter could be a small solid circle that fit onto the bottom of the
spindle (meaning only one 45 could be played at a time) or a larger adapter that fit over the entire spindle, permitting
a stack of 45s to be played. RCA 45s were also adapted to the smaller spindle of an LP player with a plastic snap-in
insert known as a "spider". These inserts, commissioned by RCA president David Sarnoff and invented by Thomas Hutchison,
were prevalent starting in the 1960s, selling in the tens of millions per year during the 45 rpm heyday. In countries
outside the U.S., such as Australia and New Zealand, 45s often had the smaller, LP-sized center holes; in the United
Kingdom, especially before the 1970s, the disc instead had a small hole within a circular central section held by only
three or four lands, so that it could easily be punched out if desired (typically for use in jukeboxes). The term "high
fidelity" was coined in the 1920s by some manufacturers of radio receivers and phonographs to differentiate their
better-sounding products, claimed to provide "perfect" sound reproduction. The term began to be used by some audio
engineers and consumers through the 1930s and 1940s. After 1949 a variety of improvements in recording and playback
technologies, especially stereo recordings, which became widely available in 1958, gave a boost to the "hi-fi" classification
of products, leading to sales of individual components for the home such as amplifiers, loudspeakers, phonographs,
and tape players. High Fidelity and Audio were two magazines that hi-fi consumers and engineers could read for reviews
of playback equipment and recordings. Stereophonic sound recording, which attempts to provide a more natural listening
experience by reproducing the spatial locations of sound sources in the horizontal plane, was the natural extension
to monophonic recording, and attracted various alternative engineering attempts. The ultimately dominant "45/45"
stereophonic record system was invented by Alan Blumlein of EMI in 1931 and patented the same year. EMI cut the first
stereo test discs using the system in 1933 (see Bell Labs Stereo Experiments of 1933) although the system was not
exploited commercially until much later. The development of quadraphonic records was announced in 1971. These recorded
four separate sound signals. This was achieved on the two stereo channels by electronic matrixing, where the additional
channels were combined into the main signal. When the records were played, phase-detection circuits in the amplifiers
were able to decode the signals into four separate channels. There were two main systems of matrixed quadraphonic
records produced, confusingly named SQ (by CBS) and QS (by Sansui). They proved commercially unsuccessful, but were
an important precursor to later surround-sound systems, as seen in SACD and home cinema today. A different format,
CD-4 (not to be confused with compact disc), by RCA, encoded the front-rear difference information on an ultrasonic
carrier, which required a special wideband cartridge to capture it on carefully calibrated pickup arm/turntable combinations.
CD-4 was even less successful than the two matrixed formats. (A further problem was that no cutting heads were available
that could handle the HF information. That was remedied by cutting at half the speed. Later, special half-speed
cutting heads and equalization techniques were employed to get a wider frequency response in stereo with reduced
distortion and greater headroom.) Under the direction of recording engineer C. Robert Fine, Mercury Records initiated
a minimalist single microphone monaural recording technique in 1951. The first record, a Chicago Symphony Orchestra
performance of Pictures at an Exhibition, conducted by Rafael Kubelik, was described as "being in the living presence
of the orchestra" by The New York Times music critic. The series of records was then named Mercury Living Presence.
In 1955, Mercury began three-channel stereo recordings, still based on the principle of the single microphone. The
center (single) microphone was of paramount importance, with the two side mics adding depth and space. Record masters
were cut directly from a three-track to two-track mixdown console, with all editing of the master tapes done on the
original three-tracks. In 1961, Mercury enhanced this technique with three-microphone stereo recordings using 35
mm magnetic film instead of half-inch tape for recording. The greater thickness and width of 35 mm magnetic film
prevented tape layer print-through and pre-echo and gained extended frequency range and transient response. The Mercury
Living Presence recordings were remastered to CD in the 1990s by the original producer, Wilma Cozart Fine, using
the same method of 3-to-2 mix directly to the master recorder. Through the 1960s, 1970s, and 1980s, various methods
to improve the dynamic range of mass-produced records involved highly advanced disc cutting equipment. These techniques,
marketed, to name two, as the CBS DisComputer and Teldec Direct Metal Mastering, were used to reduce inner-groove
distortion. RCA Victor introduced another system to reduce dynamic range and achieve a groove with less surface noise
under the commercial name of Dynagroove. Two main elements were combined: another disk material with less surface
noise in the groove and dynamic compression for masking background noise. Sometimes this was called "diaphragming"
the source material, a practice not favoured by some music lovers for its unnatural side effects. Both elements were
reflected in the brand name Dynagroove. The system also used the earlier advanced method of forward-looking
control on groove spacing with respect to volume of sound and position on the disk. Lower recorded volume used closer
spacing; higher recorded volume used wider spacing, especially with lower frequencies. Also, the higher track density
at lower volumes enabled disk recordings to end farther away from the disk center than usual, helping to reduce endtrack
distortion even further. Also in the late 1970s, "direct-to-disc" records were produced, aimed at an audiophile niche
market. These completely bypassed the use of magnetic tape in favor of a "purist" transcription directly to the master
lacquer disc. Also during this period, half-speed mastered and "original master" records were released, using expensive
state-of-the-art technology. A further late 1970s development was the Disco Eye-Cued system used mainly on Motown
12-inch singles released between 1978 and 1980. The introduction, drum-breaks, or choruses of a track were indicated
by widely separated grooves, giving a visual cue to DJs mixing the records. The appearance of these records is similar
to an LP, but they contain only one track per side. The mid-1970s saw the introduction of dbx-encoded records, again
for the audiophile niche market. These were completely incompatible with standard record playback preamplifiers,
relying on the dbx compandor encoding/decoding scheme to greatly increase dynamic range (dbx encoded disks were recorded
with the dynamic range compressed by a factor of two in dB: quiet sounds were meant to be played back at low gain
and loud sounds were meant to be played back at high gain, via automatic gain control in the playback equipment;
this reduced the effect of surface noise on quiet passages). A similar and very short-lived scheme involved using
the CBS-developed "CX" noise reduction encoding/decoding scheme. ELPJ, a Japan-based company, sells a laser turntable
that uses a laser to read vinyl discs optically, without physical contact. The laser turntable eliminates record
wear and the possibility of accidental scratches, which degrade the sound, but its expense limits use primarily to
digital archiving of analog records, and the laser does not play back colored vinyl or picture discs. Various other
laser-based turntables were tried during the 1990s, but while a laser reads the groove very accurately, since it
does not touch the record, the dust that vinyl attracts due to static electric charge is not mechanically pushed
out of the groove, worsening sound quality in casual use compared to conventional stylus playback. In some ways similar
to the laser turntable is the IRENE scanning machine for disc records, which images with microphotography in two
dimensions, invented by a team of physicists at Lawrence Berkeley Laboratories. IRENE will retrieve the information
from a laterally modulated monaural grooved sound source without touching the medium itself, but cannot read vertically
modulated information. This excludes grooved recordings such as cylinders and some radio transcriptions that feature
a hill-and-dale format of recording, and stereophonic or quadraphonic grooved recordings, which utilize a combination
of the two as well as supersonic encoding for quadraphonic. Terms such as "long-play" (LP) and "extended-play" (EP)
describe multi-track records that play much longer than the single-item-per-side records, which typically do not
go much past four minutes per side. An LP can play for up to 30 minutes per side, though most played for about 22
minutes per side, bringing the total playing time of a typical LP recording to about forty-five minutes. Many pre-1952
LPs, however, played for about 15 minutes per side. The 7-inch 45 rpm format normally contains one item per side
but a 7-inch EP could achieve recording times of 10 to 15 minutes at the expense of attenuating and compressing the
sound to reduce the width required by the groove. EP discs were generally used to make available tracks not on singles,
including tracks from LP albums, in a smaller, less expensive format for those who had only 45 rpm players. The large
center hole on 7-inch 45 rpm records allows for easier handling by jukebox mechanisms. The term "album", originally
used to mean a "book" with liner notes, holding several 78 rpm records each in its own "page" or sleeve, no longer
has any relation to the physical format: a single LP record, or nowadays more typically a compact disc. In March
1949, as RCA released the 45, Columbia released several hundred 7-inch 33 1⁄3 rpm small-spindle-hole singles. This
format was soon dropped as it became clear that the RCA 45 was the single of choice and the Columbia 12-inch LP would
be the 'album' of choice. The first release of the 45 came in seven colors: black 47-xxxx popular series, yellow
47-xxxx juvenile series, green (teal) 48-xxxx country series, deep red 49-xxxx classical series, bright red (cerise)
50-xxxx blues/spiritual series, light blue 51-xxxx international series, dark blue 52-xxxx light classics. All colors
were soon dropped in favor of black because of production problems. However, yellow and deep red were continued until
about 1952. The first 45 rpm record created for sale was "PeeWee the Piccolo", RCA 47-0147, pressed in translucent
yellow vinyl at the Sherman Avenue plant in Indianapolis on December 7, 1948 (R. O. Price, plant manager). The normal commercial disc
is engraved with two sound-bearing concentric spiral grooves, one on each side, running from the outside edge towards
the center. The last part of the spiral meets an earlier part to form a circle. The sound is encoded by fine variations
in the edges of the groove that cause a stylus (needle) placed in it to vibrate at acoustic frequencies when the
disc is rotated at the correct speed. Generally, the outer and inner parts of the groove bear no intended sound (an
exception is Split Enz's Mental Notes). Towards the center, at the end of the groove, there is a wide-pitched
section known as the lead-out. At the very end of this section the groove joins itself to form a complete circle,
called the lock groove; when the stylus reaches this point, it circles repeatedly until lifted from the record. On
some recordings (for example Sgt. Pepper's Lonely Hearts Club Band by The Beatles, Super Trouper by Abba and Atom
Heart Mother by Pink Floyd), the sound continues on the lock groove, which gives a strange repeating effect. Automatic
turntables rely on the position or angular velocity of the arm, as it reaches the wider spacing in the groove, to
trigger a mechanism that lifts the arm off the record. Precisely because of this mechanism, most automatic turntables
are incapable of playing any audio in the lock groove, since they will lift the arm before it reaches that groove.
When auto-changing turntables were commonplace, records were typically pressed with a raised (or ridged) outer edge
and a raised label area, allowing records to be stacked onto each other without the delicate grooves coming into
contact, reducing the risk of damage. Auto-changers included a mechanism to support a stack of several records above
the turntable itself, dropping them one at a time onto the active turntable to be played in order. Many longer sound
recordings, such as complete operas, were interleaved across several 10-inch or 12-inch discs for use with auto-changing
mechanisms, so that the first disk of a three-disk recording would carry sides 1 and 6 of the program, while the
second disk would carry sides 2 and 5, and the third, sides 3 and 4, allowing sides 1, 2, and 3 to be played automatically;
then the whole stack reversed to play sides 4, 5, and 6. New or "virgin" heavy/heavyweight (180–220 g) vinyl is commonly
used for modern audiophile vinyl releases in all genres. Many collectors prefer to have heavyweight vinyl albums,
which have been reported to have better sound than normal vinyl because of their higher tolerance against deformation
caused by normal play. 180 g vinyl is more expensive to produce only because it uses more vinyl. Manufacturing processes
are identical regardless of weight. In fact, pressing lightweight records requires more care. An exception is that
200 g pressings are slightly more prone to non-fill, when the vinyl biscuit does not sufficiently
fill a deep groove during pressing (percussion or vocal amplitude changes are the usual locations of these artifacts).
This flaw causes a grinding or scratching sound at the non-fill point. The "orange peel" effect on vinyl records
is caused by worn molds. Rather than having the proper mirror-like finish, the surface of the record will have a
texture that looks like orange peel. This introduces noise into the record, particularly in the lower frequency range.
With direct metal mastering (DMM), the master disc is cut on a copper-coated disc, which can also have a minor "orange
peel" effect on the disc itself. As this "orange peel" originates in the master rather than being introduced in the
pressing stage, there is no ill effect as there is no physical distortion of the groove. Original master discs are
created by lathe-cutting: a lathe is used to cut a modulated groove into a blank record. The blank records for cutting
used to be cooked up, as needed, by the cutting engineer, using what Robert K. Morrison describes as a "metallic
soap," containing lead litharge, ozokerite, barium sulfate, montan wax, stearin and paraffin, among other ingredients.
Cut "wax" sound discs would be placed in a vacuum chamber and gold-sputtered to make them electrically conductive
for use as mandrels in an electroforming bath, where pressing stamper parts were made. Later, the French company
Pyral invented a ready-made blank disc having a thin nitro-cellulose lacquer coating (approximately 7 mils thickness
on both sides) that was applied to an aluminum substrate. Lacquer cuts result in an immediately playable, or processable,
master record. If vinyl pressings are wanted, the still-unplayed sound disc is used as a mandrel for electroforming
nickel records that are used for manufacturing pressing stampers. The electroformed nickel records are mechanically
separated from their respective mandrels. This is done with relative ease because no actual "plating" of the mandrel
occurs in the type of electrodeposition known as electroforming, unlike with electroplating, in which the adhesion
of the new phase of metal is chemical and relatively permanent. The one-molecule-thick coating of silver (that was
sprayed onto the processed lacquer sound disc in order to make its surface electrically conductive) reverse-plates
onto the nickel record's face. This negative impression disc (having ridges in place of grooves) is known as a nickel
master, "matrix" or "father." The "father" is then used as a mandrel to electroform a positive disc known as a "mother".
Many mothers can be grown on a single "father" before ridges deteriorate beyond effective use. The "mothers" are
then used as mandrels for electroforming more negative discs known as "sons". Each "mother" can be used to make many
"sons" before deteriorating. The "sons" are then converted into "stampers" by center-punching a spindle hole (which
was lost from the lacquer sound disc during initial electroforming of the "father"), and by custom-forming the target
pressing profile. This allows them to be placed in the dies of the target (make and model) record press and, by center-roughing,
to facilitate the adhesion of the label, which gets stuck onto the vinyl pressing without any glue. In this way,
several million vinyl discs can be produced from a single lacquer sound disc. When only a few hundred discs are required,
instead of electroforming a "son" (for each side), the "father" is stripped of its silver and converted into a stamper.
Production by this latter method, known as the "two-step-process" (as it does not entail creation of "sons" but does
involve creation of "mothers," which are used for test playing and kept as "safeties" for electroforming future "sons")
is limited to a few hundred vinyl pressings. The pressing count can increase if the stamper holds out and the quality
of the vinyl is high. The "sons" made during a "three-step" electroforming make better stampers since they don't
require silver removal (which reduces some high fidelity because of etching erasing part of the smallest groove modulations)
and also because they have a stronger metal structure than "fathers". Breakage was very common in the shellac era.
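The father/mother/son genealogy described above amounts to a multiplication of yields per electroforming generation. A rough sketch; the per-step counts below are illustrative assumptions only (the text itself says just "many" mothers per father, "many" sons per mother, and "several million" discs per lacquer):

```python
# Model of the three-step electroforming chain:
# lacquer -> "father" -> "mothers" -> "sons" (stampers) -> pressings.
# All per-generation yields are assumed figures for illustration.

def total_pressings(mothers_per_father: int,
                    sons_per_mother: int,
                    pressings_per_stamper: int) -> int:
    """Discs obtainable from one lacquer side via the three-step process."""
    stampers = mothers_per_father * sons_per_mother
    return stampers * pressings_per_stamper

# Example: 10 mothers per father, 10 sons per mother, ~1,000 pressings
# per stamper -> 100,000 discs; higher yields per step reach the millions.
print(total_pressings(10, 10, 1000))   # 100000
```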
In the 1934 John O'Hara novel, Appointment in Samarra, the protagonist "broke one of his most favorites, Whiteman's
Lady of the Evening ... He wanted to cry but could not." A poignant moment in J. D. Salinger's 1951 novel The Catcher
in the Rye occurs after the adolescent protagonist buys a record for his younger sister but drops it and "it broke
into pieces ... I damn-near cried, it made me feel so terrible." A sequence where a school teacher's collection of
78 rpm jazz records is smashed by a group of rebellious students is a key moment in the film Blackboard Jungle. Vinyl
records do not break easily, but the soft material is easily scratched. Vinyl readily acquires a static charge, attracting
dust that is difficult to remove completely. Dust and scratches cause audio clicks and pops. In extreme cases, they
can cause the needle to skip over a series of grooves, or worse yet, cause the needle to skip backwards, creating
a "locked groove" that repeats over and over. This is the origin of the phrase "like a broken record" or "like a
scratched record", which is often used to describe a person or thing that continually repeats itself. Locked grooves
are not uncommon and were even heard occasionally in radio broadcasts. Vinyl records can be warped by heat, improper
storage, exposure to sunlight, or manufacturing defects such as excessively tight plastic shrinkwrap on the album
cover. A small degree of warp was common, and allowing for it was part of the art of turntable and tonearm design.
"wow" (once-per-revolution pitch variation) could result from warp, or from a spindle hole that was not precisely
centered. Standard practice for LPs was to place the LP in a paper or plastic inner cover. This, if placed within
the outer cardboard cover so that the opening was entirely within the outer cover, was said to reduce ingress of
dust onto the record surface. Singles, with rare exceptions, had simple paper covers with no inner cover. A further
limitation of the gramophone record is that fidelity steadily declines as playback progresses; there is more vinyl
per second available for fine reproduction of high frequencies at the large-diameter beginning of the groove than
exist at the smaller-diameters close to the end of the side. At the start of a groove on an LP there are 510 mm of
vinyl per second traveling past the stylus while the ending of the groove gives 200–210 mm of vinyl per second —
less than half the linear resolution. Distortion towards the end of the side is likely to become more apparent as
record wear increases. Tonearm skating forces and other perturbations are also picked up by the stylus. This is
a form of frequency multiplexing as the control signal (restoring force) used to keep the stylus in the groove is
carried by the same mechanism as the sound itself. Subsonic frequencies below about 20 Hz in the audio signal are
dominated by tracking effects, which is one form of unwanted rumble ("tracking noise") and merges with audible frequencies
in the deep bass range up to about 100 Hz. High fidelity sound equipment can reproduce tracking noise and rumble.
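The velocity figures earlier in this passage follow from simple geometry. A quick check; the groove radii used below (about 146 mm at the start and 60 mm at the end of a 12-inch LP side) are assumed typical values, not figures from the text:

```python
import math

# Linear groove velocity past the stylus at a given radius for a record
# turning at the given rpm, plus the platter's rotation frequency.
# Radii are assumed typical values for a 12-inch LP side.

def groove_velocity_mm_s(radius_mm: float, rpm: float) -> float:
    """Linear speed of the groove past the stylus, in mm per second."""
    return 2 * math.pi * radius_mm * rpm / 60

rpm = 100 / 3                           # 33 1/3 rpm
print(groove_velocity_mm_s(146, rpm))   # ~510 mm/s at the start of the side
print(groove_velocity_mm_s(60, rpm))    # ~209 mm/s near the end of the side
print(rpm / 60)                         # rotation frequency ~0.556 Hz (5/9 Hz)
```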
During a quiet passage, woofer speaker cones can sometimes be seen to vibrate with the subsonic tracking of the stylus,
at frequencies as low as just above 0.5 Hz (the frequency at which a 33 1⁄3 rpm record turns on the turntable; 5⁄9
Hz exactly on an ideal turntable). Another reason for very low frequency material can be a warped disk: its undulations
produce frequencies of only a few hertz and present day amplifiers have large power bandwidths. For this reason,
many stereo receivers contained a switchable subsonic filter. Some subsonic content is directly out of phase in each
channel. If played back on a mono subwoofer system, the noise will cancel, significantly reducing the amount of rumble
that is reproduced. Due to recording mastering and manufacturing limitations, both high and low frequencies were
removed from the first recorded signals by various formulae. With low frequencies, the stylus must swing a long way
from side to side, requiring the groove to be wide, taking up more space and limiting the playing time of the record.
At high frequencies, hiss, pops, and ticks are significant. These problems can be reduced by using equalization to
an agreed standard. During recording the amplitude of low frequencies is reduced, thus reducing the groove width
required, and the amplitude at high frequencies is increased. The playback equipment boosts bass and cuts treble
so as to restore the tonal balance in the original signal; this also reduces the high frequency noise. Thus more
music will fit on the record, and noise is reduced. In 1926 Joseph P. Maxwell and Henry C. Harrison from Bell Telephone
Laboratories disclosed that the recording pattern of the Western Electric "rubber line" magnetic disc cutter had
a constant velocity characteristic. This meant that as frequency increased in the treble, recording amplitude decreased.
Conversely, in the bass as frequency decreased, recording amplitude increased. Therefore, it was necessary to attenuate
the bass frequencies below about 250 Hz, the bass turnover point, in the amplified microphone signal fed to the recording
head. Otherwise, bass modulation became excessive and overcutting took place into the next record groove. When played
back electrically with a magnetic pickup having a smooth response in the bass region, a complementary boost in amplitude
at the bass turnover point was necessary. G. H. Miller in 1934 reported that when complementary boost at the turnover
point was used in radio broadcasts of records, the reproduction was more realistic and many of the musical instruments
stood out in their true form. West in 1930 and later P. G. A. H. Voigt (1940) showed that the early Wente-style condenser
microphones contributed to a 4 to 6 dB midrange brilliance or pre-emphasis in the recording chain. This meant that
the electrical recording characteristics of Western Electric licensees such as Columbia Records and Victor Talking
Machine Company in the 1925 era had a higher amplitude in the midrange region. Brilliance such as this compensated
for dullness in many early magnetic pickups having drooping midrange and treble response. As a result, this practice
was the empirical beginning of using pre-emphasis above 1,000 Hz in 78 rpm and 33 1⁄3 rpm records. Over the years
a variety of record equalization practices emerged and there was no industry standard. For example, in Europe recordings
for years required playback with a bass turnover setting of 250–300 Hz and a treble roll-off at 10,000 Hz ranging
from 0 to −5 dB or more. In the US, practices were more varied, with a tendency to use higher bass turnover frequencies such as 500 Hz and a greater treble rolloff such as −8.5 dB (or even more) in order to record generally higher modulation levels on the record. Evidence from the early technical literature concerning electrical recording suggests that
it was not until the 1942–1949 period that there were serious efforts to standardize recording characteristics within the industry. Until then, electrical recording technology from company to company was considered a proprietary art
all the way back to the 1925 Western Electric licensed method used by Columbia and Victor. For example, what Brunswick-Balke-Collender
(Brunswick Corporation) did was different from the practices of Victor. Broadcasters were faced with having to adapt
daily to the varied recording characteristics of many sources: various makers of "home recordings" readily available
to the public, European recordings, lateral-cut transcriptions, and vertical-cut transcriptions. Efforts were started
in 1942 to standardize within the National Association of Broadcasters (NAB), later known as the National Association
of Radio and Television Broadcasters (NARTB). The NAB, among other items, issued recording standards in 1949 for
laterally and vertically cut records, principally transcriptions. A number of 78 rpm record producers as well as
early LP makers also cut their records to the NAB/NARTB lateral standard. The lateral cut NAB curve was remarkably
similar to the NBC Orthacoustic curve that evolved from practices within the National Broadcasting Company since
the mid-1930s. Empirically, and not by any formula, it was learned that the bass end of the audio spectrum below
100 Hz could be boosted somewhat to override system hum and turntable rumble noises. Likewise at the treble end beginning
at 1,000 Hz, if audio frequencies were boosted by 16 dB at 10,000 Hz the delicate sibilant sounds of speech and high
overtones of musical instruments could survive the noise level of cellulose acetate, lacquer/aluminum, and vinyl
disc media. When the record was played back using a complementary inverse curve, signal-to-noise ratio was improved
and the programming sounded more lifelike. Ultimately, the New Orthophonic curve was disclosed in a publication by
R.C. Moyer of RCA Victor in 1953. He traced RCA Victor characteristics back to the Western Electric "rubber line"
recorder in 1925 up to the early 1950s laying claim to long-held recording practices and reasons for major changes
in the intervening years. The RCA Victor New Orthophonic curve was within the tolerances for the NAB/NARTB, Columbia
LP, and AES curves. It eventually became the technical predecessor to the RIAA curve. In the earlier acoustic era, delicate sounds and fine overtones were mostly lost, because it took a great deal of sound energy to vibrate the recording horn diaphragm and cutting mechanism.
There were acoustic limitations due to mechanical resonances in both the recording and playback system. Some pictures
of acoustic recording sessions show horns wrapped with tape to help mute these resonances. Even an acoustic recording
played back electrically on modern equipment sounds like it was recorded through a horn, notwithstanding a reduction
in distortion because of the modern playback. Toward the end of the acoustic era, there were many fine examples of
recordings made with horns. Electric recording which developed during the time that early radio was becoming popular
(1925) benefited from the microphones and amplifiers used in radio studios. The early electric recordings were reminiscent
tonally of acoustic recordings, except there was more recorded bass and treble as well as delicate sounds and overtones
cut on the records. This was despite the use of some carbon microphones, which had resonances that colored the recorded tone. The double-button carbon microphone with a stretched diaphragm was a marked improvement. By contrast, the Wente
style condenser microphone used with the Western Electric licensed recording method had a brilliant midrange and
was prone to overloading from sibilants in speech, but generally it gave more accurate reproduction than carbon microphones.
It was not unusual for electric recordings to be played back on acoustic phonographs. The Victor Orthophonic phonograph
was a prime example where such playback was expected. In the Orthophonic, which benefited from telephone research,
the mechanical pickup head was redesigned with lower resonance than the traditional mica type. Also, a folded horn
with an exponential taper was constructed inside the cabinet to provide better impedance matching to the air. As
a result, playback of an Orthophonic record sounded like it was coming from a radio. Eventually, when it was more
common for electric recordings to be played back electrically in the 1930s and 1940s, the overall tone was much like
listening to a radio of the era. Magnetic pickups became more common and were better designed as time went on, making
it possible to improve the damping of spurious resonances. Crystal pickups were also introduced as lower cost alternatives.
The dynamic or moving coil microphone was introduced around 1930 and the velocity or ribbon microphone in 1932. Both
of these high quality microphones became widespread in motion picture, radio, recording, and public address applications.
Over time, fidelity, dynamic range, and noise levels improved to the point that it was harder to tell the difference between
a live performance in the studio and the recorded version. This was especially true after the invention of the variable
reluctance magnetic pickup cartridge by General Electric in the 1940s when high quality cuts were played on well-designed
audio systems. The Capehart radio/phonographs of the era with large diameter electrodynamic loudspeakers, though
not ideal, demonstrated this quite well with "home recordings" readily available in the music stores for the public
to buy. There were important quality advances in recordings specifically made for radio broadcast. In the early 1930s
Bell Telephone Laboratories and Western Electric announced the total reinvention of disc recording: the Western Electric
Wide Range System, "The New Voice of Action". The intent of the new Western Electric system was to improve the overall
quality of disc recording and playback. The recording speed was 33 1⁄3 rpm, originally used in the Western Electric/ERPI
movie audio disc system implemented in the early Warner Brothers' Vitaphone "talkies" of 1927. The newly invented
Western Electric moving coil or dynamic microphone was part of the Wide Range System. It had a flatter audio response
than the old style Wente condenser type and didn't require electronics installed in the microphone housing. Signals
fed to the cutting head were pre-emphasized in the treble region to help override noise in playback. Groove cuts
in the vertical plane were employed rather than the usual lateral cuts. The chief advantage claimed was that more grooves per inch could be crowded together, resulting in longer playback time. Additionally, the problem of inner-groove
distortion, which plagued lateral cuts, could be avoided with the vertical cut system. Wax masters were made by flowing
heated wax over a hot metal disc thus avoiding the microscopic irregularities of cast blocks of wax and the necessity
of planing and polishing. Vinyl pressings were made with stampers from master cuts that were electroplated in vacuo
by means of gold sputtering. Audio response was claimed out to 8,000 Hz, later 13,000 Hz, using light weight pickups
employing jeweled styli. Amplifiers and cutters both using negative feedback were employed thereby improving the
range of frequencies cut and lowering distortion levels. Radio transcription producers such as World Broadcasting
System and Associated Music Publishers (AMP) were the dominant licensees of the Western Electric wide range system
and towards the end of the 1930s were responsible for two-thirds of the total radio transcription business. These
recordings use a bass turnover of 300 Hz and a 10,000 Hz rolloff of −8.5 dB. The complete technical disclosure of
the Columbia LP by Peter C. Goldmark, Rene' Snepvangers and William S. Bachman in 1949 made it possible for a great
variety of record companies to get into the business of making long playing records. The business grew quickly and
interest spread in high fidelity sound and the do-it-yourself market for pickups, turntables, amplifier kits, loudspeaker
enclosure plans, and AM/FM radio tuners. The LP record for longer works, 45 rpm for pop music, and FM radio became
high fidelity program sources in demand. Radio listeners heard recordings broadcast and this in turn generated more
record sales. The industry flourished. There is a theory that vinyl records can audibly represent higher frequencies
than compact discs. According to Red Book specifications, the compact disc has a frequency response of 20 Hz up to
22,050 Hz, and most CD players measure flat within a fraction of a decibel from at least 20 Hz to 20 kHz at full
output. Turntable rumble obscures the low-end limit of vinyl but the upper end can be, with some cartridges, reasonably
flat within a few decibels to 30 kHz, with gentle roll-off. The carrier signals of the Quad LPs popular in the 1970s were placed at 30 kHz so as to be out of the range of human hearing. The average human auditory system is sensitive to frequencies
from 20 Hz to a maximum of around 20,000 Hz. The upper and lower frequency limits of human hearing vary per person.
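The 22,050 Hz figure quoted above for the compact disc follows directly from the Red Book sampling rate; a quick check, using nothing beyond the 44.1 kHz rate:

```python
# Red Book CD audio samples each channel 44,100 times per second.
SAMPLE_RATE_HZ = 44_100

# The highest frequency a sampled signal can represent is the Nyquist
# limit: half the sampling rate.
nyquist_hz = SAMPLE_RATE_HZ / 2
print(nyquist_hz)  # 22050.0, matching the upper bound quoted above

# Typical human hearing tops out around 20,000 Hz, below the CD limit;
# the ~30 kHz Quad LP carriers sit above both.
assert 20_000 < nyquist_hz < 30_000
```

This is why the debate centers on content above 20 kHz: a cartridge that is reasonably flat to 30 kHz can in principle trace frequencies the CD format cannot represent, even though those frequencies exceed normal human hearing.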
For the first several decades of disc record manufacturing, sound was recorded directly on to the "master disc" at
the recording studio. From about 1950 on (earlier for some large record companies, later for some small ones) it
became usual to have the performance first recorded on audio tape, which could then be processed and/or edited, and
then dubbed on to the master disc. A record cutter would engrave the grooves into the master disc. Early versions
of these master discs were soft wax, and later a harder lacquer was used. The mastering process was originally something
of an art as the operator had to manually allow for the changes in sound which affected how wide the space for the
groove needed to be on each rotation. As the playing of gramophone records causes gradual degradation of the recording,
they are best preserved by transferring them onto other media and playing the records as rarely as possible. They
need to be stored on edge, and do best under environmental conditions that most humans would find comfortable. The
medium needs to be kept clean, but alcohol should only be used on PVC or optical media, not on 78s.
The equipment for playback of certain formats (e.g., 16 and 78 rpm) is manufactured only in small quantities, leading
to increased difficulty in finding equipment to play the recordings. Where old disc recordings are considered to
be of artistic or historic interest, from before the era of tape or where no tape master exists, archivists play
back the disc on suitable equipment and record the result, typically onto a digital format, which can be copied and
manipulated to remove analog flaws without any further damage to the source recording. For example, Nimbus Records
uses a specially built horn record player to transfer 78s. Anyone can do this using a standard record player with
a suitable pickup, a phono-preamp (pre-amplifier) and a typical personal computer. However, for accurate transfer,
professional archivists carefully choose the correct stylus shape and diameter, tracking weight, equalisation curve
and other playback parameters and use high-quality analogue-to-digital converters. Groove recordings, first designed
in the final quarter of the 19th century, held a predominant position for nearly a century—withstanding competition
from reel-to-reel tape, the 8-track cartridge, and the compact cassette. In 1988, the compact disc surpassed the
gramophone record in unit sales. Vinyl records experienced a sudden decline in popularity between 1988 and 1991,
when the major label distributors restricted their return policies, which retailers had been relying on to maintain
and swap out stocks of relatively unpopular titles. First the distributors began charging retailers more for new
product if they returned unsold vinyl, and then they stopped providing any credit at all for returns. Retailers,
fearing they would be stuck with anything they ordered, only ordered proven, popular titles that they knew would
sell, and devoted more shelf space to CDs and cassettes. Record companies also deleted many vinyl titles from production
and distribution, further undermining the availability of the format and leading to the closure of pressing plants.
This rapid decline in the availability of records accelerated the format's decline in popularity, and is seen by
some as a deliberate ploy to make consumers switch to CDs, which were more profitable for the record companies. In
spite of their flaws, such as the lack of portability, records still have enthusiastic supporters. Vinyl records
continue to be manufactured and sold today, especially by independent rock bands and labels, although record sales
are considered to be a niche market composed of audiophiles, collectors, and DJs. Old records and out-of-print recordings
in particular are in much demand by collectors the world over. Many popular new albums are
given releases on vinyl records and older albums are also given reissues, sometimes on audiophile-grade vinyl. Many
electronic dance music and hip hop releases today are still preferred on vinyl; however, digital copies are still
widely available. This is because for disc jockeys ("DJs"), vinyl has an advantage over the CD: direct manipulation
of the medium. DJ techniques such as slip-cueing, beatmatching, and scratching originated on turntables. With CDs
or compact audio cassettes one normally has only indirect manipulation options, e.g., the play, stop, and pause buttons.
With a record one can place the stylus a few grooves farther in or out, accelerate or decelerate the turntable, or
even reverse its direction, provided the stylus, record player, and record itself are built to withstand it. However,
many CDJ and DJ advances, such as DJ software and time-encoded vinyl, now have these capabilities and more. In 2014
artist Jack White sold 40,000 copies of his second solo release, Lazaretto, on vinyl. This was the largest one-week vinyl sales figure since 1991, beating the record previously held by Pearl Jam's Vitalogy, which sold 34,000 copies in one week in 1994. In 2014, vinyl records were the only physical music medium with sales increasing relative to the previous year. Sales of other media, including individual digital tracks, digital albums, and compact discs, fell, with compact discs showing the steepest rate of decline.
Historically, the channel's programming consisted mainly of classic, theatrically released feature films from the
Turner Entertainment film library – which comprises films from Warner Bros. Pictures (covering films released before
1950) and Metro-Goldwyn-Mayer (covering films released before May 1986). However, TCM now has licensing deals with
other Hollywood film studios as well as its Time Warner sister company, Warner Bros. (which now controls the Turner
Entertainment library and its own later films), and occasionally shows more recent films. Turner Classic Movies is
a dedicated film channel and is available in the United States, the United Kingdom, France (TCM Cinéma), Spain (TCM España), the Nordic countries, the Middle East, and Africa. In 1986, eight years before the launch of Turner Classic Movies, Ted Turner
acquired the Metro-Goldwyn-Mayer film studio for $1.5 billion. Concerns over Turner Entertainment's corporate debt
load resulted in Turner selling the studio that October back to Kirk Kerkorian, from whom Turner had purchased the
studio less than a year before. As part of the deal, Turner Entertainment retained ownership of MGM's library of
films released up to May 9, 1986. Turner Broadcasting System was split into two companies, Turner Broadcasting System and Metro-Goldwyn-Mayer, with the latter reincorporated as MGM/UA Communications Co. The film library of Turner Entertainment
would serve as the base form of programming for TCM upon the network's launch. Before the creation of Turner Classic
Movies, films from Turner's library of movies aired on the Turner Broadcasting System's advertiser-supported cable
network TNT – along with colorized versions of black-and-white classics such as The Maltese Falcon. After the library
was acquired, MGM/UA signed a deal with Turner to continue distributing the pre-May 1986 MGM library and to begin distributing the pre-1950 Warner Bros. film library for video release (the rest of the library went to Turner Home Entertainment).
At the time of its launch, TCM was available to approximately one million cable television subscribers. The network
originally served as a competitor to AMC – which at the time was known as "American Movie Classics" and maintained
a virtually identical format to TCM, as both networks largely focused on films released prior to 1970 and aired them
in an uncut, uncolorized, and commercial-free format. AMC had broadened its film content to feature colorized and
more recent films by 2002 and abandoned its commercial-free format, leaving TCM as the only movie-oriented cable
channel to devote its programming entirely to classic films without commercial interruption. In 1996, Turner Broadcasting
System merged with Time Warner, which besides placing Turner Classic Movies and Warner Bros. Entertainment under
the same corporate umbrella, also gave TCM access to Warner Bros.' library of films released after 1949 (which itself
includes other acquired entities such as the Lorimar, Saul Zaentz and National General Pictures libraries); incidentally,
TCM had already been running select Warner Bros. film titles through a licensing agreement with the studio that was
signed prior to the launch of the channel. In March 1999, MGM paid Warner Bros. and gave up the home video rights
to the MGM/UA films owned by Turner to Warner Home Video. In 2000, TCM started the annual Young Composers Film Competition,
inviting aspiring composers to participate in a judged competition that offers the winner of each year's competition
the opportunity to score a restored, feature-length silent film as a grand prize, mentored by a well-known composer,
with the new work subsequently premiering on the network. As of 2006, films that have been rescored include the 1921
Rudolph Valentino film Camille, two Lon Chaney films: 1921's The Ace of Hearts and 1928's Laugh, Clown, Laugh, and
Greta Garbo's 1926 film The Temptress. In 2008, TCM won a Peabody Award for excellence in broadcasting. In April
2010, Turner Classic Movies held the first TCM Classic Film Festival, an event – now held annually – at the Grauman's
Chinese Theater and the Grauman's Egyptian Theater in Hollywood. Hosted by Robert Osborne, the four-day long annual
festival celebrates Hollywood and its movies, and features celebrity appearances, special events, and screenings
of around 50 classic movies including several newly restored by the Film Foundation, an organization devoted to preserving
Hollywood's classic film legacy. Turner Classic Movies essentially operates as a commercial-free service, with the
only advertisements on the network being shown between features – which advertise TCM products, network promotions
for upcoming special programs and the original trailers for films that are scheduled to be broadcast on TCM (particularly
those that will air during the primetime hours), and featurettes about classic film actors and actresses. In addition
to this, extended breaks between features are filled with theatrically released movie trailers and classic short
subjects – from series such as The Passing Parade, Crime Does Not Pay, Pete Smith Specialties, and Robert Benchley
– under the banner name TCM Extras (formerly One Reel Wonders). In 2007, some of the short films featured on TCM
were made available for streaming on TCM's website. Partly to allow these interstitials, Turner Classic Movies schedules
its feature films either at the top of the hour or at :15, :30 or :45 minutes past the hour, instead of in timeslots
of varying five-minute increments. TCM's film content has remained mostly uncut and uncolorized (films natively filmed or post-produced in color being the only ones presented in color), depending upon the original content
of movies, particularly movies released after the 1968 implementation of the Motion Picture Association of America's
ratings system and the concurrent disestablishment of the Motion Picture Production Code. Because of this, TCM is
formatted similarly to a premium channel with certain films – particularly those made from the 1960s onward – sometimes
featuring nudity, sexual content, violence and/or strong profanity; the network also features rating bumpers prior
to the start of a program (most programs on TCM, especially films, are rated for content using the TV Parental Guidelines,
in lieu of the MPAA's rating system). The network's programming season runs from February until the following March
of each year when a retrospective of Oscar-winning and Oscar-nominated movies is shown, called 31 Days of Oscar.
As a result of its format devoted to classic feature films, viewers who are interested in tracing the career development
of actresses such as Barbara Stanwyck or Greta Garbo or actors like Cary Grant or Humphrey Bogart have the unique
opportunity to see most of the films that were made during their careers, from beginning to end. Turner Classic Movies
presents many of its features in their original aspect ratio (widescreen or full screen) whenever possible – widescreen
films broadcast on TCM are letterboxed on the network's standard definition feed. TCM also regularly presents widescreen
presentations of films not available in the format on any home video release. TCM's library of films spans several
decades of cinema and includes thousands of film titles. Besides its deals to broadcast film releases from Metro-Goldwyn-Mayer
and Warner Bros. Entertainment, Turner Classic Movies also maintains movie licensing rights agreements with Universal
Studios, Paramount Pictures, 20th Century Fox, Walt Disney Studios (primarily film content from Walt Disney Pictures,
as well as most of the Selznick International Pictures library), Sony Pictures Entertainment (primarily film content
from Columbia Pictures), StudioCanal, and Janus Films. Most Paramount sound releases made prior to 1950 are owned
by EMKA, Ltd./NBCUniversal Television Distribution, while Paramount (currently owned by Viacom) holds on to most
of its post-1949 releases, which are distributed for television by Trifecta Entertainment & Media. Columbia's film
output is owned by Sony (through Sony Pictures Television); distribution of 20th Century Fox's film library is handled
for television by its 21st Century Fox subsidiary 20th Television, and the Walt Disney Studios (owned by The Walt
Disney Company) has its library film output handled for television by Disney-ABC Domestic Television. Classic films
released by 20th Century Fox, Paramount Pictures, Universal Studios, and Columbia Pictures are licensed individually
for broadcast on Turner Classic Movies. Most feature movies shown during the prime time and early overnight hours
(8:00 p.m. to 2:30 a.m. Eastern Time) are presented by film historian Robert Osborne (who has been with the network
since its 1994 launch, except for a five-month medical leave from July to December 2011, when guest hosts presented
each night's films) on Sunday through Wednesday evenings – with Osborne only presenting primetime films on weekends
– and Ben Mankiewicz presenting only late evening films on Thursdays, and the "Silent Sunday Nights" and "TCM Imports"
blocks on Sundays. TCM regularly airs a "Star of the Month" throughout the year on Wednesdays starting at 8:00 p.m.
Eastern Time, in which most, if not all, feature films from a classic film star are shown during that night's schedule.
Hosted by Robert Osborne, the network also marks the occurrence of a film actor's birthday (whether the actor is living or deceased) or recent death with day- or evening-long festivals showcasing several of that artist's best, earliest or least-known pictures; in effect, marathons scheduled in honor of an actor's passing (which are scheduled within
a month after their death) pre-empt films originally scheduled to air on that date. TCM also features a monthly program
block called the "TCM Guest Programmer", in which Osborne is joined by celebrity guests responsible for choosing
that evening's films (examples of such programmers during 2012 include Jules Feiffer, Anthony Bourdain, Debra Winger,
Ellen Barkin, Spike Lee, Regis Philbin and Jim Lehrer); an offshoot of this block featuring Turner Classic Movies
employees aired during February 2011. Turner Classic Movies also airs regularly scheduled weekly film blocks, which
are periodically preempted for special themed month-long or seasonal scheduling events, such as the "31 Days of Oscar"
film series in the month preceding the Academy Awards and the month-long "Summer Under the Stars" in August; all
featured programming has their own distinctive feature presentation bumper for the particular scheduled presentation.
The Essentials, hosted as of 2015 by Osborne and Sally Field, is a weekly film showcase airing
on Saturday evenings (with a replay on the following Sunday at 6:00 p.m. Eastern Time), which spotlights a different
movie and contains a special introduction and post-movie discussion. The channel also broadcasts two movie blocks
during the late evening hours each Sunday: "Silent Sunday Nights", which features silent films from the United States
and abroad, usually in the latest restored version and often with new musical scores; and "TCM Imports" (which previously
ran on Saturdays until the early 2000s), a weekly presentation of films originally released in foreign countries.
TCM Underground – which debuted in October 2006 – is a Friday late night block focusing on cult films. The block was originally hosted by rocker/filmmaker Rob Zombie until December 2006 (though as of 2014, it is the only regular film presentation block on the channel that does not have a host). Each August, Turner Classic Movies suspends
its regular schedule for a special month of film marathons called "Summer Under the Stars", which features entire
daily schedules devoted to the work of a particular actor, with movies and specials that pertain to the star of the
day. In the summer of 2007, the channel debuted "Funday Night at the Movies", a block hosted by actor Tom Kenny (best
known as the voice of SpongeBob SquarePants). This summer block featured classic feature films (such as The Wizard
of Oz, Sounder, Bringing Up Baby, Singin' in the Rain, Mr. Smith Goes to Washington, The Adventures of Robin Hood
and 20,000 Leagues Under the Sea) aimed at introducing these movies to new generations of children and their families.
"Funday Night at the Movies" was replaced in 2008 by "Essentials Jr.", a youth-oriented version of its weekly series
The Essentials (originally hosted by actors Abigail Breslin and Chris O'Donnell, then by John Lithgow from 2009 to
2011, and then by Bill Hader starting with the 2011 season), which included such family-themed films as National
Velvet, Captains Courageous and Yours, Mine and Ours, as well as such eclectic selections as Sherlock, Jr., The Music
Box, Harvey, Mutiny on the Bounty and The Man Who Knew Too Much. In addition to films, Turner Classic Movies also
airs original content, mostly documentaries about classic movie personalities, the world of filmmaking and particularly
notable films. An occasional month-long series, Race and Hollywood, showcases films by and about people of non-white
races, featuring discussions of how these pictures influenced white people's image of said races, as well as how
people of those races viewed themselves. Previous installments have included "Asian Images on Film" in 2008, "Native American Images on Film" in 2010, "Black Images on Film" in 2006, "Latino Images on Film" in 2009, and "Arab Images
on Film" in 2011. The network aired the film series Screened Out (which explored the history and depiction of homosexuality
in film) in 2007 and Religion on Film (focusing on the role of religion in cinematic works) in 2005. In 2011, TCM
debuted a new series entitled AFI's Master Class: The Art of Collaboration. In December 1994, TCM debuted "TCM Remembers",
a tribute to recently deceased notable film personalities (including actors, producers, composers, directors, writers
and cinematographers) that occasionally airs during promotional breaks between films. The segments appear in two
forms: individual tributes and a longer end-of-year compilation. Following the recent death of an especially famous
classic film personality (usually an actor, producer, filmmaker or director), the segment will feature a montage
of select shots of the deceased's work. Every December, a longer, more inclusive "TCM Remembers" interstitial is
produced that honors all of the noted film personalities who died during the past year, interspersed with scenes
from settings such as an abandoned drive-in (2012) or a theatre which is closing down and is being dismantled (2013).
Since 2001, the soundtracks for these clip reels have been introspective melodies by indie artists such as Badly Drawn
Boy (2007) or Steve Earle (2009). The TCM Vault Collection consists of several different DVD collections of rare
classic films that have been licensed, remastered and released by Turner Classic Movies (through corporate sister
Warner Home Video). These boxed set releases are of films by notable actors, directors or studios that were previously
unreleased on DVD or VHS. The sets often include bonus discs including documentaries and shorts from the TCM library.
The initial batch of DVDs is pressed in limited quantities, and subsequent batches are made on demand (MOD). In October
2015, TCM announced the launch of the TCM Wineclub, in which they teamed up with Laithwaite to provide a line of
mail-order wines from famous vineyards, such as writer-director-producer Francis Ford Coppola's winery. Wines are available in three-month subscriptions and can be selected as reds, whites, or a mixture of both. From the wines
chosen, TCM also includes recommended movies to watch with each, such as a "True Grit" wine, to be paired with the
John Wayne film of the same name. Turner Classic Movies is available in many other countries around the world. In
Canada, TCM began to be carried on Shaw Cable and satellite provider Shaw Direct in 2005. Rogers Cable started offering
TCM in December 2006 as a free preview for subscribers of its digital cable tier, and was added to its analogue tier
in February 2007. While the schedule for the Canadian feed is generally the same as that of the U.S. network, some
films are replaced for broadcast in Canada due to rights issues and other reasons. Other versions of TCM are available
in Australia, France, the Middle East, South Africa, Cyprus, Spain, Asia, Latin America, Scandinavia, the United Kingdom,
Ireland and Malta. The UK version operates two channels, including a spinoff called TCM 2.
Hindu philosophy refers to a group of darśanas (philosophies, world views, teachings) that emerged in ancient India. The
mainstream Hindu philosophy includes six systems (ṣaḍdarśana) – Samkhya, Yoga, Nyaya, Vaisheshika, Mimamsa and Vedanta.
These are also called the āstika (orthodox) philosophical traditions and are those that accept the Vedas as an authoritative and important source of knowledge.[note 1][note 2] Ancient and medieval India was also the source of philosophies that shared philosophical concepts but rejected the Vedas, and these have been called nāstika (heterodox or non-orthodox)
Indian philosophies. Nāstika Indian philosophies include Buddhism, Jainism, Cārvāka, Ājīvika, and others. Scholars
have debated the relationship and differences within āstika philosophies and with nāstika philosophies, starting
with the writings of Indologists and Orientalists of the 18th and 19th centuries, which were themselves derived from
limited availability of Indian literature and medieval doxographies. The various sibling traditions included in Hindu
philosophies are diverse, and they are united by a shared history and concepts, the same textual resources, a similar ontological and soteriological focus, and cosmology. While Buddhism and Jainism are considered distinct philosophies and religions, some heterodox traditions such as Cārvāka are often considered distinct schools within Hindu philosophy. Hindu
philosophy also includes several sub-schools of theistic philosophies that integrate ideas from two or more of the
six orthodox philosophies, such as the realism of the Nyāya, the naturalism of the Vaiśeṣika, the dualism of the
Sāṅkhya, the monism and knowledge of Self as essential to liberation of Advaita, the self-discipline of Yoga, and the asceticism and elements of theistic ideas. Examples of such schools include Pāśupata Śaiva, Śaiva Siddhānta,
Pratyabhijña, Raseśvara and Vaiṣṇava. Some sub-schools share Tantric ideas with those found in some Buddhist traditions.
The ideas of these sub-schools are found in the Puranas and Āgamas. Ancient and medieval Hindu texts identify six
pramāṇas as correct means of accurate knowledge and truths: pratyakṣa (perception), anumāṇa (inference), upamāṇa
(comparison and analogy), arthāpatti (postulation, derivation from circumstances), anupalabdi (non-perception, negative/cognitive
proof) and śabda (word, testimony of past or present reliable experts). Each of these is further categorized in terms of conditionality, completeness, confidence and possibility of error by each school. The various schools vary on
how many of these six are valid paths of knowledge. For example, the Cārvāka nāstika philosophy holds that only one (perception) is an epistemically reliable means of knowledge, the Samkhya school holds that three (perception, inference and testimony) are, while the Mīmāṃsā and Advaita schools hold that all six are epistemically useful and reliable means to
knowledge. The Samkhya school espouses dualism between consciousness and matter. It regards the universe as consisting of two realities: Puruṣa (consciousness) and prakriti (matter). Jiva (a living being) is that state in which puruṣa
is bonded to prakriti in some form. This fusion, state the Samkhya scholars, led to the emergence of buddhi (awareness,
intellect) and ahankara (individualized ego consciousness, “I-maker”). The universe is described by this school as
one created by Purusa-Prakriti entities infused with various permutations and combinations of variously enumerated
elements, senses, feelings, activity and mind. Samkhya philosophy includes a theory of gunas (qualities, innate tendencies,
psyche). Gunas, it states, are of three types: Sattva, which is good, compassionate, illuminating, positive, and constructive; Rajas, which is active, chaotic, passionate, impulsive, and potentially good or bad; and Tamas, which is dark, ignorant, destructive, lethargic, and negative. Everything, all life forms and human beings, state Samkhya
scholars, have these three gunas, but in different proportions. The interplay of these gunas defines the character
of someone or something and of nature, and determines the progress of life. Samkhya theorises a pluralism of souls (Jeevatmas)
who possess consciousness, but denies the existence of Ishvara (God). Classical Samkhya is considered an atheistic/non-theistic Hindu philosophy. In Indian philosophy, Yoga is, among other things, the name of one of the six āstika
philosophical schools. The Yoga philosophical system is closely allied with the dualist premises of the Samkhya school.
The Yoga school accepts the Samkhya psychology and metaphysics, but is considered theistic because it accepts the
concept of "personal god", unlike Samkhya. The epistemology of the Yoga school, like the Sāmkhya school, relies on
three of six prāmaṇas as the means of gaining reliable knowledge: pratyakṣa (perception), anumāṇa (inference) and
śabda (āptavacana, word/testimony of reliable sources). The Yoga school builds on the Samkhya school theory that
jñāna (knowledge) is a sufficient means to moksha. It suggests that systematic techniques/practice (personal experimentation)
combined with Samkhya's approach to knowledge is the path to moksha. Yoga shares several central ideas with Advaita
Vedanta, with the difference that Yoga is a form of experimental mysticism while Advaita Vedanta is a form of monistic
personalism. Like Advaita Vedanta, the Yoga school of Hindu philosophy states that liberation/freedom in this life
is achievable, and this occurs when an individual fully understands and realizes the equivalence of Atman (soul,
self) and Brahman. The Vaiśeṣika philosophy is a naturalist school; it is a form of atomism in natural philosophy.
It postulated that all objects in the physical universe are reducible to paramāṇu (atoms), and one's experiences
are derived from the interplay of substance (a function of atoms, their number and their spatial arrangements), quality,
activity, commonness, particularity and inherence. Knowledge and liberation were achievable by complete understanding of the world of experience, according to the Vaiśeṣika school. The Vaiśeṣika darśana is credited to Kaṇāda Kaśyapa from the second half of the first millennium BCE; its foundational text is the Vaiśeṣika Sūtra. Vaiśeṣika metaphysical premises are founded on a form of atomism, holding that reality is composed of four substances (earth, water,
air, fire). Each of these four is of two types: atomic (paramāṇu) and composite. An atom is, according to Vaiśeṣika scholars, that which is indestructible (nitya), indivisible, and has a special kind of dimension, called “small”
(aṇu). A composite, in this philosophy, is defined to be anything which is divisible into atoms. Whatever human beings
perceive is composite, while atoms are invisible. The Vaiśeṣikas stated that size, form, truths and everything that
human beings experience as a whole is a function of atoms, their number and their spatial arrangements, their guṇa
(quality), karma (activity), sāmānya (commonness), viśeṣa (particularity) and samavāya (inherence, inseparable connectedness
of everything). In its metaphysics, the Nyāya school is closer to the Vaiśeṣika school than to the others. It holds that human
suffering results from mistakes/defects produced by activity under wrong knowledge (notions and ignorance). Moksha
(liberation), it states, is gained through right knowledge. This premise led Nyāya to concern itself with epistemology,
that is, the reliable means to gain correct knowledge and to remove wrong notions. To Naiyayikas, false knowledge is not merely ignorance; it includes delusion. Correct knowledge is discovering and overcoming one's delusions, and understanding the true nature of soul, self and reality. The Mīmāṃsā school has several subschools defined by their epistemology. The Prābhākara subschool of Mīmāṃsā considered five epistemically reliable means of gaining knowledge:
pratyakṣa (perception), anumāṇa (inference), upamāṇa (comparison and analogy), arthāpatti (postulation, derivation
from circumstances), and śabda (word, testimony of past or present reliable experts). The Kumārila Bhaṭṭa sub-school of Mīmāṃsā added a sixth to its canon of reliable epistemology: anupalabdi (non-perception, negative/cognitive proof).
The metaphysics in Mīmāṃsā school consists of both atheistic and theistic doctrines and the school showed little
interest in systematic examination of the existence of God. Rather, it held that the soul is an eternal, omnipresent, inherently active spiritual essence, and then focussed on the epistemology and metaphysics of dharma. To them, dharma
meant rituals and duties, not devas (gods), because devas existed only in name. The Mīmāṃsākas held that the Vedas
are "eternal authorless infallible", that Vedic vidhi (injunctions) and mantras in rituals are prescriptive karya
(actions), and the rituals are of primary importance and merit. They considered the Upanishads and other self-knowledge,
spirituality-related texts to be of secondary importance, a philosophical view that the Vedanta school disagreed
with. Mīmāṃsā gave rise to the study of philology and the philosophy of language. While their deep analysis of language
and linguistics influenced other schools, their views were not shared by others. Mīmāṃsākas considered the purpose
and power of language was to clearly prescribe the proper, correct and right. In contrast, Vedantins extended the
scope and value of language as a tool to also describe, develop and derive. Mīmāṃsākas considered an orderly, law-driven, procedural life to be the central purpose and noblest necessity of dharma and society, and divine (theistic) sustenance a means to that end. The Mimamsa school was influential and foundational to the Vedanta school, with the difference
that Mīmāṃsā school developed and emphasized karmakāṇḍa (that part of the śruti which relates to ceremonial acts
and sacrificial rites, the early parts of the Vedas), while the Vedanta school developed and emphasized jñānakāṇḍa
(that portion of the Vedas which relates to knowledge of monism, the latter parts of the Vedas). The Vedānta school
built upon the teachings of the Upanishads and Brahma Sutras from the first millennium BCE and is the most developed
and well-known of the Hindu schools. The epistemology of the Vedantins included, depending on the sub-school, five
or six methods as proper and reliable means of gaining any form of knowledge: pratyakṣa (perception), anumāṇa (inference),
upamāṇa (comparison and analogy), arthāpatti (postulation, derivation from circumstances), anupalabdi (non-perception,
negative/cognitive proof) and śabda (word, testimony of past or present reliable experts). Each of these has been further categorized in terms of conditionality, completeness, confidence and possibility of error by each sub-school of Vedanta. The emergence of the Vedanta school represented a period when a more knowledge-centered understanding began
to emerge. These focussed on the jnana (knowledge)-driven aspects of the Vedic religion and the Upanishads. This included
metaphysical concepts such as ātman and Brahman, and emphasized meditation, self-discipline, self-knowledge and abstract
spirituality, rather than ritualism. The Upanishads were variously interpreted by ancient and medieval era Vedanta
scholars. Consequently, the Vedanta separated into many sub-schools, ranging from theistic dualism to non-theistic
monism, each interpreting the texts in its own way and producing its own series of sub-commentaries. Advaita literally
means "not two, sole, unity". It is a sub-school of Vedanta, and asserts spiritual and universal non-dualism. Its
metaphysics is a form of absolute monism, that is, all ultimate reality is interconnected oneness. This is the oldest
and most widely acknowledged Vedantic school. The foundational texts of this school are the Brahma Sutras and the
early Upanishads from the 1st millennium BCE. Its first great consolidator was the 8th century scholar Adi Shankara,
who continued the line of thought of the Upanishadic teachers, and that of his teacher's teacher Gaudapada. He wrote
extensive commentaries on the major Vedantic scriptures and is celebrated as one of the major Hindu philosophers
from whose doctrines the main currents of modern Indian thought are derived. According to this school of Vedanta,
all reality is Brahman, and there exists nothing whatsoever which is not Brahman. Its metaphysics includes the concept
of māyā and ātman. Māyā connotes "that which exists, but is constantly changing and thus is spiritually unreal".
The empirical reality is considered as always changing and therefore "transitory, incomplete, misleading and not
what it appears to be". The concept of ātman is of soul, self within each person, each living being. Advaita Vedantins
assert that ātman is same as Brahman, and this Brahman is within each human being and all life, all living beings
are spiritually interconnected, and there is oneness in all of existence. They hold that dualities and the misunderstanding of māyā as the spiritual reality that matters are caused by ignorance, and are the cause of sorrow and suffering. Jīvanmukti (liberation during life) can be achieved through Self-knowledge, the understanding that the ātman within is the same as the ātman
in another person and all of Brahman – the eternal, unchanging, entirety of cosmic principles and true reality. Ramanuja
(c. 1037–1137) was the foremost proponent of the philosophy of Viśiṣṭādvaita or qualified non-dualism. Viśiṣṭādvaita
advocated the concept of a Supreme Being with essential qualities or attributes. Viśiṣṭādvaitins argued against the
Advaitin conception of Brahman as an impersonal empty oneness. They saw Brahman as an eternal oneness, but also as
the source of all creation, which was omnipresent and actively involved in existence. To them the sense of subject-object
perception was illusory and a sign of ignorance. However, the individual's sense of self was not a complete illusion
since it was derived from the universal beingness that is Brahman. Ramanuja saw Vishnu as a personification of Brahman.
Dvaita Vedanta, a dualistic interpretation of the Vedas, espouses dualism by theorizing the existence of two separate
realities. The first and the only independent reality, states the Dvaita school, is that of Vishnu or Brahman. Vishnu
is the supreme Self, in a manner similar to monotheistic God in other major religions. The distinguishing factor
of Dvaita philosophy, as opposed to monistic Advaita Vedanta, is that God takes on a personal role and is seen as
a real eternal entity that governs and controls the universe. Like the Vishishtadvaita Vedanta subschool, Dvaita philosophy
also embraced Vaishnavism, with the metaphysical concept of Brahman in the Vedas identified with Vishnu and the one
and only Supreme Being. However, unlike Vishishtadvaita which envisions ultimate qualified nondualism, the dualism
of Dvaita was permanent. Dvaitādvaita was proposed by Nimbarka, a 13th-century Vaishnava philosopher from the Andhra
region. According to this philosophy there are three categories of existence: Brahman, soul, and matter. Soul and
matter are different from Brahman in that they have attributes and capacities different from Brahman. Brahman exists
independently, while soul and matter are dependent. Thus soul and matter have an existence that is separate yet dependent.
Further, Brahman is a controller, the soul is the enjoyer, and matter the thing enjoyed. Also, the highest object of worship is Krishna and his consort Radha, attended by thousands of gopis of Vrindavan; devotion consists in self-surrender. The early history of Shaivism is difficult to determine. However, the Śvetāśvatara Upanishad (400
– 200 BCE) is considered to be the earliest textual exposition of a systematic philosophy of Shaivism. Shaivism is
represented by various philosophical schools, including non-dualist (abheda), dualist (bheda), and non-dualist-with-dualist
(bhedābheda) perspectives. Vidyaranya in his works mentions three major schools of Shaiva thought: Pashupata Shaivism, Shaiva Siddhanta and Pratyabhijña (Kashmir Shaivism). Pāśupata Shaivism (Pāśupata, "of Paśupati") is the oldest of the major Shaiva schools. The philosophy of the Pashupata sect was systematized by Lakulish in the 2nd century CE. Paśu
in Paśupati refers to the effect (or created world), the word designates that which is dependent on something ulterior.
Pati, by contrast, means the cause (or principium); the word designates the Lord, who is the cause of the universe, the pati, or the ruler. Pashupatas disapproved of Vaishnava theology, known for its doctrine of the servitude of souls to the Supreme Being, on the grounds that dependence upon anything could not be the means of cessation of pain and other
desired ends. They recognised that those depending upon another and longing for independence will not be emancipated
because they still depend upon something other than themselves. According to Pāśupatas, the soul possesses the attributes
of the Supreme Deity when it becomes liberated from the 'germ of every pain'. Pāśupatas divided the created world
into the insentient and the sentient. The insentient was the unconscious and thus dependent on the sentient or conscious.
The insentient was further divided into effects and causes. The effects were of ten kinds: the earth, the four elements and their qualities, colour, etc. The causes were of thirteen kinds: the five organs of cognition, the five organs of action, and the three internal organs (intellect, the ego principle and the cognising principle). These insentient
causes were held responsible for the illusive identification of Self with non-Self. Salvation in Pāśupata involved
the union of the soul with God through the intellect. Although both Kashmir Shaivism and Advaita Vedanta are non-dual philosophies which give primacy to Universal Consciousness (Chit or Brahman), in Kashmir Shaivism, as opposed to Advaita, all things are a manifestation of this Consciousness. This implies that from the point of view of Kashmir Shaivism, the phenomenal world (Śakti) is real, and it exists and has its being in Consciousness (Chit), whereas Advaita holds that Brahman is inactive (niṣkriya) and the phenomenal world is an illusion (māyā). The objective of
human life, according to Kashmir Shaivism, is to merge in Shiva or Universal Consciousness, or to realize one's already
existing identity with Shiva, by means of wisdom, yoga and grace.
While there is some international commonality in the way political parties are recognized, and in how they operate, there
are often many differences, and some are significant. Many political parties have an ideological core, but some do
not, and many represent very different ideologies than they did when first founded. In democracies, political parties
are elected by the electorate to run a government. Many countries have numerous powerful political parties, such
as Germany and India, and some nations have one-party systems, such as China. The United States is a two-party system,
with its two most powerful parties being the Democratic Party and the Republican Party. The first political factions,
cohering around a basic, if fluid, set of principles, emerged from the Exclusion Crisis and Glorious Revolution in late-17th-century England. The Whigs supported a Protestant constitutional monarchy against absolute rule, while the Tories, originating in the Royalist (or "Cavalier") faction of the English Civil War, were conservative royalist supporters of a strong monarchy as a counterbalance to the republican tendencies of the Whigs. The Whigs were the dominant political faction for most of the first half of the 18th century; they supported the Hanoverian succession of 1714 against the Jacobite supporters of the deposed Roman Catholic Stuart dynasty, and were able to purge Tory politicians from important government positions after the failed Jacobite rising of 1715. The leader of the Whigs was Robert Walpole, who maintained control
of the government in the period 1721–1742; his protégé was Henry Pelham (1743–1754). As the century wore on, the
factions slowly began to adopt more coherent political tendencies as the interests of their power bases began to
diverge. The Whig party's initial base of support among the great aristocratic families widened to include the emerging
industrial interests and wealthy merchants. As well as championing constitutional monarchy with strict limits on
the monarch's power, the Whigs adamantly opposed a Catholic king as a threat to liberty, and believed in extending
toleration to nonconformist Protestants, or dissenters. A major influence on the Whigs were the liberal political
ideas of John Locke, and the concepts of universal rights employed by Locke and Algernon Sidney. Although the Tories were dismissed from office for half a century, they retained party cohesion for most of this period (at first under the leadership of Sir William Wyndham), with occasional hopes of regaining office, particularly at the accession
of George II (1727) and the downfall of the ministry of Sir Robert Walpole in 1742. They acted as a united, though
unavailing, opposition to Whig corruption and scandals. At times they cooperated with the "Opposition Whigs", Whigs
who were in opposition to the Whig government; however, the ideological gap between the Tories and the Opposition
Whigs prevented them from coalescing as a single party. They finally regained power with the accession of George
III in 1760 under Lord Bute. When they lost power, the old Whig leadership dissolved into a decade of factional chaos
with distinct "Grenvillite", "Bedfordite", "Rockinghamite", and "Chathamite" factions successively in power, and
all referring to themselves as "Whigs". Out of this chaos, the first distinctive parties emerged. The first such
party was the Rockingham Whigs under the leadership of Charles Watson-Wentworth and the intellectual guidance of
the political philosopher Edmund Burke. Burke laid out a philosophy that described the basic framework of the political
party as "a body of men united for promoting by their joint endeavours the national interest, upon some particular
principle in which they are all agreed". As opposed to the instability of the earlier factions, which were often
tied to a particular leader and could disintegrate if removed from power, the party was centred around a set of core
principles and remained out of power as a united opposition to government. The modern Conservative Party was created
out of the 'Pittite' Tories of the early 19th century. In the late 1820s disputes over political reform broke up
this grouping. A government led by the Duke of Wellington collapsed amidst dire election results. Following this
disaster Robert Peel set about assembling a new coalition of forces. Peel issued the Tamworth Manifesto in 1834 which
set out the basic principles of Conservatism: the necessity, in specific cases, of reform in order to survive, but an opposition to unnecessary change that could lead to "a perpetual vortex of agitation". Meanwhile, the Whigs,
along with free trade Tory followers of Robert Peel, and independent Radicals, formed the Liberal Party under Lord
Palmerston in 1859, and transformed into a party of the growing urban middle-class, under the long leadership of
William Ewart Gladstone. Although the Founding Fathers of the United States did not originally intend for American
politics to be partisan, early political controversies in the 1790s over the extent of federal government powers
saw the emergence of two proto-political parties: the Federalist Party and the Democratic-Republican Party, which
were championed by Framers Alexander Hamilton and James Madison, respectively. However, a consensus reached on these
issues ended party politics in 1816 for a decade, a period commonly known as the Era of Good Feelings. At the same
time, the political party reached its modern form, with a membership disciplined through the use of a party whip
and the implementation of efficient structures of control. The Home Rule League Party, campaigning for Home Rule
for Ireland in the British Parliament was fundamentally changed by the great Irish political leader Charles Stewart
Parnell in the 1880s. In 1882, he changed his party's name to the Irish Parliamentary Party and created a well-organized
grass roots structure, introducing membership to replace "ad hoc" informal groupings. He created a new selection
procedure to ensure the professional selection of party candidates committed to taking their seats, and in 1884 he
imposed a firm 'party pledge' which obliged MPs to vote as a bloc in parliament on all occasions. The creation of
a strict party whip and a formal party structure was unique at the time. His party's efficient structure and control
contrasted with the loose rules and flexible informality found in the main British parties; those parties soon came to model themselves on the Parnellite example. A political party is typically led by a party leader (the most powerful member
and spokesperson representing the party), a party secretary (who maintains the daily work and records of party meetings),
party treasurer (who is responsible for membership dues) and party chair (who forms strategies for recruiting and
retaining party members, and also chairs party meetings). Most of the above positions are also members of the party
executive, the leading organization which sets policy for the entire party at the national level. The structure is
far more decentralized in the United States because of the separation of powers, federalism and the multiplicity
of economic interests and religious sects. Even state parties are decentralized as county and other local committees
are largely independent of state central committees. The national party leader in the U.S. will be the president,
if the party holds that office, or a prominent member of Congress in opposition (although a big-state governor may
aspire to that role). Officially, each party has a chairman for its national committee who is a prominent spokesman,
organizer and fund-raiser, but without the status of prominent elected office holders. When the party is represented
by members in the lower house of parliament, the party leader simultaneously serves as the leader of the parliamentary
group of that full party representation; depending on a minimum number of seats held, Westminster-based parties typically
allow for leaders to form frontbench teams of senior fellow members of the parliamentary group to serve as critics
of aspects of government policy. When a party becomes the largest party not part of the Government, the party's parliamentary
group forms the Official Opposition, with Official Opposition frontbench team members often forming the Official
Opposition Shadow cabinet. When a party achieves enough seats in an election to form a majority, the party's frontbench
becomes the Cabinet of government ministers. The freedom to form, declare membership in, or campaign for candidates
from a political party is considered a measurement of a state's adherence to liberal democracy as a political value.
Regulation of parties may run from a crackdown on or repression of all opposition parties, a norm for authoritarian
governments, to the repression of certain parties which hold or promote ideals which run counter to the general ideology
of the state's incumbents (or possess membership by-laws which are legally unenforceable). Furthermore, in the case
of far-right, far-left and regionalism parties in the national parliaments of much of the European Union, mainstream
political parties may form an informal cordon sanitaire which applies a policy of non-cooperation towards those
"Outsider Parties" present in the legislature which are viewed as 'anti-system' or otherwise unacceptable for government.
Cordons sanitaires, however, have been increasingly abandoned over the past two decades in multi-party democracies
as the pressure to construct broad coalitions in order to win elections – along with the increased willingness of
outsider parties themselves to participate in government – has led to many such parties entering electoral and government
coalitions. In a nonpartisan system, no official political parties exist, sometimes reflecting legal restrictions
on political parties. In nonpartisan elections, each candidate is eligible for office on his or her own merits. In
nonpartisan legislatures, there are typically no formal party alignments within the legislature. The administration
of George Washington and the first few sessions of the United States Congress were nonpartisan. Washington also warned
against political parties during his Farewell Address. In the United States, the unicameral legislature of Nebraska
is nonpartisan but is elected and votes on informal party lines. In Canada, the territorial legislatures of the Northwest
Territories and Nunavut are nonpartisan. In New Zealand, Tokelau has a nonpartisan parliament. Many city and county
governments are nonpartisan. Nonpartisan elections and modes of governance are common outside of state institutions.
Unless there are legal prohibitions against political parties, factions within nonpartisan systems often evolve into
political parties. In one-party systems, one political party is legally allowed to hold effective power. Although
minor parties may sometimes be allowed, they are legally required to accept the leadership of the dominant party.
This party may not always be identical to the government, although sometimes positions within the party may in fact
be more important than positions within the government. North Korea and China are examples; others can be found in
Fascist states, such as Nazi Germany between 1934 and 1945. The one-party system is thus usually equated with dictatorships
and tyranny. In dominant-party systems, opposition parties are allowed, and there may be even a deeply established
democratic tradition, but other parties are widely considered to have no real chance of gaining power. Sometimes,
political, social and economic circumstances, and public opinion are the reason for other parties' failure. Sometimes,
typically in countries with less of an established democratic tradition, it is possible the dominant party will remain
in power by using patronage and sometimes by voting fraud. In the latter case, the distinction between dominant-party and one-party systems becomes rather blurred. Examples of dominant-party systems include the People's Action Party in
Singapore, the African National Congress in South Africa, the Cambodian People's Party in Cambodia, the Liberal Democratic
Party in Japan, and the National Liberation Front in Algeria. One-party dominant systems also existed in Mexico with the Institutional Revolutionary Party until the 1990s, in the southern United States with the Democratic Party from the late 19th century until the 1970s, and in Indonesia with Golkar from the early 1970s until 1998. The United States
has become essentially a two-party system, since a conservative party (such as the Republican Party) and a liberal party (such as the Democratic Party) have usually been the status quo within American politics. The first parties were called
Federalist and Republican, followed by a brief period of Republican dominance before a split occurred between National
Republicans and Democratic Republicans. The former became the Whig Party and the latter became the Democratic Party.
The Whigs survived only for two decades before they split over the spread of slavery, those opposed becoming members
of the new Republican Party, as did anti-slavery members of the Democratic Party. Third parties (such as the Libertarian
Party) often receive little support and are very rarely the victors in elections. Despite this, there have been several
examples of third parties siphoning votes from major parties that were expected to win (such as Theodore Roosevelt
in the election of 1912 and George Wallace in the election of 1968). As third party movements have learned, the Electoral
College's requirement of a nationally distributed majority makes it difficult for third parties to succeed. Thus,
such parties rarely win many electoral votes, although their popular support within a state may tip it toward one
party or the other. Wallace had weak support outside the South. More generally, parties with a broad base of support across regions or among economic and other interest groups have a greater chance of winning the necessary plurality
in the U.S.'s largely single-member district, winner-take-all elections. The tremendous land area and large population
of the country are formidable challenges to political parties with a narrow appeal. The UK political system, while
technically a multi-party system, has functioned generally as a two-party (sometimes called a "two-and-a-half party")
system; since the 1920s the two largest political parties have been the Conservative Party and the Labour Party.
Before the Labour Party rose in British politics the Liberal Party was the other major political party along with
the Conservatives. Though coalition and minority governments have been an occasional feature of parliamentary politics,
the first-past-the-post electoral system used for general elections tends to maintain the dominance of these two
parties, though each has in the past century relied upon a third party to deliver a working majority in Parliament.
(A plurality voting system usually leads to a two-party system, a relationship described by Maurice Duverger and
known as Duverger's Law.) There are also numerous other parties that hold or have held a number of seats in Parliament.
More commonly, in cases where there are three or more parties, no one party is likely to gain power alone, and parties
work with each other to form coalition governments. This has been an emerging trend in the politics of the Republic
of Ireland since the 1980s and is almost always the case in Germany at the national and state levels, and in most constituencies
at the communal level. Furthermore, since the founding of the Republic of Iceland there has never been a government not led by a coalition, usually of the Independence Party and one other party (often the Social Democratic Alliance). A
similar situation exists in the Republic of Ireland; since 1989, no one party has held power on its own. Since then,
numerous coalition governments have been formed. These coalitions have been exclusively led by one of either Fianna
Fáil or Fine Gael. Political change is often easier with a coalition government than in one-party or two-party dominant
systems. If factions in a two-party system are in fundamental disagreement on policy goals, or
even principles, they can be slow to make policy changes, which appears to be the case now in the U.S. with power
split between Democrats and Republicans. Still, coalition governments struggle, sometimes for years, to change policy
and often fail altogether, post World War II France and Italy being prime examples. When one party in a two-party
system controls all elective branches, however, policy changes can be both swift and significant. Democrats Woodrow
Wilson, Franklin Roosevelt and Lyndon Johnson were beneficiaries of such fortuitous circumstances, as were Republicans
as far removed in time as Abraham Lincoln and Ronald Reagan. Barack Obama briefly had such an advantage between 2009
and 2011. Political parties, still called factions by some, especially those in the governmental apparatus, are lobbied
vigorously by organizations, businesses and special interest groups such as trade unions. Money and gifts-in-kind
to a party, or its leading members, may be offered as incentives. Such donations are the traditional source of funding
for all right-of-centre cadre parties. Starting in the late 19th century these parties were opposed by the newly
founded left-of-centre workers' parties. These introduced a new party type, the mass membership party, and a new source
of political fundraising, membership dues. From the second half of the 20th century on parties which continued to
rely on donations or membership subscriptions ran into mounting problems. Along with the increased scrutiny of donations
there has been a long-term decline in party memberships in most western democracies which itself places more strains
on funding. For example, in the United Kingdom and Australia, membership of the two main parties in 2006 was less than one-eighth of what it was in 1950, despite significant increases in population over that period. In the United Kingdom,
it has been alleged that peerages have been awarded to contributors to party funds, the benefactors becoming members
of the House of Lords and thus being in a position to participate in legislating. Famously, Lloyd George was found
to have been selling peerages. To prevent such corruption in the future, Parliament passed the Honours (Prevention
of Abuses) Act 1925 into law. Thus the outright sale of peerages and similar honours became a criminal act. However,
some benefactors are alleged to have attempted to circumvent this by cloaking their contributions as loans, giving
rise to the 'Cash for Peerages' scandal. There are two broad categories of public funding, direct, which entails
a monetary transfer to a party, and indirect, which includes broadcasting time on state media, use of the mail service
or supplies. According to the Comparative Data from the ACE Electoral Knowledge Network, out of a sample of over
180 nations, 25% of nations provide no direct or indirect public funding, 58% provide direct public funding and 60%
of nations provide indirect public funding. Some countries provide both direct and indirect public funding to political
parties. Funding may be equal for all parties or depend on the results of previous elections or the number of candidates
participating in an election. Frequently parties rely on a mix of private and public funding and are required to
disclose their finances to the election management body. In fledgling democracies, funding can also be provided by
foreign aid. International donors provide financing to political parties in developing countries as a means to promote
democracy and good governance. Support can be purely financial or otherwise. Frequently it is provided as capacity
development activities including the development of party manifestos, party constitutions and campaigning skills.
Developing links between ideologically linked parties is another common feature of international support for a party.
Sometimes this can be perceived as directly supporting the political aims of a political party, such as the US government's support for the Georgian party behind the Rose Revolution. Other donors work on a more neutral basis,
where multiple donors provide grants in countries accessible by all parties for various aims defined by the recipients.
There have been calls by leading development think-tanks, such as the Overseas Development Institute, to increase
support to political parties as part of developing the capacity to deal with the demands of interest-driven donors
to improve governance. Green is the color for green parties, Islamist parties, Nordic agrarian parties and Irish
republican parties. Orange is sometimes a color of nationalism, such as in the Netherlands, in Israel with the Orange
Camp or with Ulster Loyalists in Northern Ireland; it is also a color of reform such as in Ukraine. In the past,
purple was considered the color of royalty (like white), but today it is sometimes used for feminist parties. White
also is associated with nationalism. "Purple Party" is also used as an academic hypothetical of an undefined party,
as a Centrist party in the United States (because purple is created from mixing the main parties' colors of red and
blue) and as a highly idealistic "peace and love" party—in a similar vein to a Green Party, perhaps. Black is generally
associated with fascist parties, going back to Benito Mussolini's blackshirts, but also with Anarchism. Similarly,
brown is sometimes associated with Nazism, going back to the Nazi Party's tan-uniformed storm troopers. Political
color schemes in the United States diverge from international norms. Since 2000, red has become associated with the
right-wing Republican Party and blue with the left-wing Democratic Party. However, unlike political color schemes
of other countries, the parties did not choose those colors; they were used in news coverage of 2000 election results
and ensuing legal battle and caught on in popular usage. Prior to the 2000 election the media typically alternated
which color represented which party each presidential election cycle. The color scheme happened to get inordinate
attention that year, so the cycle was stopped lest it cause confusion the following election. During the 19th and
20th century, many national political parties organized themselves into international organizations along similar
policy lines. Notable examples are The Universal Party, International Workingmen's Association (also called the First
International), the Socialist International (also called the Second International), the Communist International (also
called the Third International), and the Fourth International, as organizations of working class parties, or the
Liberal International (yellow), Hizb ut-Tahrir, Christian Democratic International and the International Democrat
Union (blue). Organized in Italy in 1945, the International Communist Party, headquartered in Florence since 1974, has sections in six countries. Worldwide, green parties have recently established the Global Greens.
The Universal Party, The Socialist International, the Liberal International, and the International Democrat Union
are all based in London. Some administrations (e.g. Hong Kong) outlaw formal linkages between local and foreign political
organizations, effectively outlawing international political parties. French political scientist Maurice Duverger
drew a distinction between cadre parties and mass parties. Cadre parties were political elites that were concerned
with contesting elections and restricted the influence of outsiders, who were only required to assist in election
campaigns. Mass parties tried to recruit new members who were a source of party income and were often expected to
spread party ideology as well as assist in elections. Socialist parties are examples of mass parties, while the British
Conservative Party and the German Christian Democratic Union are examples of hybrid parties. In the United States,
where both major parties were cadre parties, the introduction of primaries and other reforms has transformed them
so that power is held by activists who compete over influence and nomination of candidates.
A cappella [a kapˈpɛlla] (Italian for "in the manner of the chapel") music is specifically group or solo singing without
instrumental accompaniment, or a piece intended to be performed in this way. It contrasts with cantata, which is
accompanied singing. The term "a cappella" was originally intended to differentiate between Renaissance polyphony
and Baroque concertato style. In the 19th century a renewed interest in Renaissance polyphony coupled with an ignorance
of the fact that vocal parts were often doubled by instrumentalists led to the term coming to mean unaccompanied
vocal music. The term is also used, albeit rarely, as a synonym for alla breve. A cappella music was originally used
in religious music, especially church music as well as anasheed and zemirot. Gregorian chant is an example of a cappella
singing, as is the majority of secular vocal music from the Renaissance. The madrigal, up until its development in
the early Baroque into an instrumentally-accompanied form, is also usually in a cappella form. Jewish and Christian
music were originally a cappella, and this practice has continued in both of these religions as
well as in Islam. The polyphony of Christian a cappella music began to develop in Europe around the late 15th century,
with compositions by Josquin des Prez. The early a cappella polyphonies may have had an accompanying instrument,
although this instrument would merely double the singers' parts and was not independent. By the 16th century, a cappella
polyphony had further developed, but gradually, the cantata began to take the place of a cappella forms. 16th century
a cappella polyphony, nonetheless, continued to influence church composers throughout this period and to the present
day. Recent evidence has shown that some of the early pieces by Palestrina, such as those written for the Sistine Chapel, were intended to be accompanied by an organ "doubling" some or all of the voices. Palestrina went on to become a major influence on Bach, most notably in the Mass in B Minor. Other composers
who utilized the a cappella style, if only for the occasional piece, included Claudio Monteverdi, whose masterpiece Lagrime d'amante al sepolcro dell'amata (A lover's tears at his beloved's grave) was composed in 1610, and Andrea Gabrieli, among whose many choral pieces discovered upon his death was one in the unaccompanied style. Learning from these two composers, Heinrich Schütz utilized the a cappella style in numerous pieces, chief
among these were the pieces in the oratorio style, which were traditionally performed during the Easter week and
dealt with the religious subject matter of that week, such as Christ's suffering and the Passion. Five of Schütz's
Historien were Easter pieces, and of these the latter three, which dealt with the passion from three different viewpoints,
those of Matthew, Luke and John, were all done in a cappella style. This was a near requirement for this type of piece: the parts of the crowd were sung, while the solo parts, which quoted either Christ or the authors, were performed in plainchant. In the Byzantine Rite of the Eastern Orthodox Church and the Eastern Catholic Churches,
the music performed in the liturgies is exclusively sung without instrumental accompaniment. Bishop Kallistos Ware
says, "The service is sung, even though there may be no choir... In the Orthodox Church today, as in the early Church,
singing is unaccompanied and instrumental music is not found." This a cappella practice arises from a strict interpretation of Psalm 150, which states, "Let every thing that hath breath praise the Lord. Praise ye the Lord." In keeping with
this philosophy, early Russian musika, which started appearing in the late 17th century in what was known as khorovïye kontsertï (choral concertos), made a cappella adaptations of Venetian-style pieces, as codified in treatises such as Nikolai Diletsky's Grammatika musikiyskaya (1675). Divine Liturgies and Western Rite masses composed by famous composers such
as Peter Tchaikovsky, Sergei Rachmaninoff, Alexander Arkhangelsky, and Mykola Leontovych are fine examples of this.
Present-day Christian religious bodies known for conducting their worship services without musical accompaniment
include some Presbyterian churches devoted to the regulative principle of worship, Old Regular Baptists, Primitive
Baptists, Plymouth Brethren, Churches of Christ, the Old German Baptist Brethren, the Doukhobors, the Byzantine Rite, and
the Amish, Old Order Mennonites and Conservative Mennonites. Certain high church services and other musical events
in liturgical churches (such as the Roman Catholic Mass and the Lutheran Divine Service) may be a cappella, a practice
remaining from apostolic times. Many Mennonites also conduct some or all of their services without instruments. Sacred
Harp, a type of folk music, is an a cappella style of religious singing with shape notes, usually sung at singing
conventions. Instruments have divided Christendom since their introduction into worship. They were considered a Catholic
innovation, not widely practiced until the 18th century, and were opposed vigorously in worship by a number of Protestant
Reformers, including Martin Luther (1483–1546), Ulrich Zwingli, John Calvin (1509–1564) and John Wesley (1703–1791).
Alexander Campbell referred to the use of an instrument in worship as "a cow bell in a concert". In Sir Walter Scott's
The Heart of Midlothian, the heroine, Jeanie Deans, a Scottish Presbyterian, writes to her father about the church
situation she has found in England. Those who subscribe to this interpretation believe that since the
Christian scriptures never counter instrumental language with any negative judgment on instruments, opposition to
instruments instead comes from an interpretation of history. There is no written opposition to musical instruments
in any setting in the first century and a half of Christian churches (33 AD to 180 AD). The use of instruments for
Christian worship during this period is also undocumented. Toward the end of the 2nd century, Christians began condemning
the instruments themselves. Those who oppose instruments today believe these Church Fathers had a better understanding
of God's desire for the church, but there are significant differences between the teachings of these
Church Fathers and Christian opposition to instruments today. While worship in the Temple in Jerusalem included musical
instruments (2 Chronicles 29:25–27), traditional Jewish religious services in the Synagogue, both before and after
the last destruction of the Temple, did not include musical instruments given the practice of scriptural cantillation.
The use of musical instruments is traditionally forbidden on the Sabbath out of concern that players would be tempted
to repair (or tune) their instruments, which is forbidden on those days. (This prohibition has been relaxed in many
Reform and some Conservative congregations.) Similarly, when Jewish families and larger groups sing traditional Sabbath
songs known as zemirot outside the context of formal religious services, they usually do so a cappella, and Bar and
Bat Mitzvah celebrations on the Sabbath sometimes feature entertainment by a cappella ensembles. During the Three
Weeks musical instruments are prohibited. Many Jews consider a portion of the 49-day period of the counting of the
omer between Passover and Shavuot to be a time of semi-mourning and instrumental music is not allowed during that
time. This has led to a tradition of a cappella singing sometimes known as sefirah music. The popularization of the
Jewish chant may be found in the writings of the Jewish philosopher Philo, born 20 BCE. Weaving together Jewish and
Greek thought, Philo promoted praise without instruments, and taught that "silent singing" (without even vocal cords)
was better still. This view parted with the Jewish scriptures, where Israel offered praise with instruments by God's
own command (2 Chronicles 29:25). The shofar is the only temple instrument still being used today in the synagogue,
and it is only used from Rosh Chodesh Elul through the end of Yom Kippur. The shofar is used by itself, without any
vocal accompaniment, and is limited to a very strictly defined set of sounds and specific places in the synagogue
service. A strong and prominent a cappella tradition was begun in the Midwestern United States in 1911 by
F. Melius Christiansen, a music faculty member at St. Olaf College in Northfield, Minnesota. The St. Olaf College
Choir was established as an outgrowth of the local St. John's Lutheran Church, where Christiansen was organist and
the choir was composed, at least partially, of students from the nearby St. Olaf campus. The success of the ensemble
was emulated by other regional conductors, and a rich tradition of a cappella choral music was born in the region
at colleges like Concordia College (Moorhead, Minnesota), Augustana College (Rock Island, Illinois), Wartburg College
(Waverly, Iowa), Luther College (Decorah, Iowa), Gustavus Adolphus College (St. Peter, Minnesota), Augustana College
(Sioux Falls, South Dakota), and Augsburg College (Minneapolis, Minnesota). The choirs typically range from 40 to
80 singers and are recognized for their efforts to perfect blend, intonation, phrasing and pitch in a large choral
setting. In July 1943, as a result of the American Federation of Musicians boycott of US recording studios, the a
cappella vocal group The Song Spinners had a best-seller with "Comin' In On A Wing And A Prayer". In the 1950s several
recording groups, notably The Hi-Los and the Four Freshmen, introduced complex jazz harmonies to a cappella performances.
The King's Singers are credited with promoting interest in small-group a cappella performances in the 1960s. In 1983
an a cappella group known as The Flying Pickets had a Christmas 'number one' in the UK with a cover of Yazoo's (known
in the US as Yaz) "Only You". A cappella music attained renewed prominence from the late 1980s onward, spurred by
the success of Top 40 recordings by artists such as The Manhattan Transfer, Bobby McFerrin, Huey Lewis and the News,
All-4-One, The Nylons, Backstreet Boys and Boyz II Men. Contemporary a cappella includes many vocal
groups and bands who add vocal percussion or beatboxing to create a pop/rock/gospel sound, in some cases very similar
to bands with instruments. Examples of such professional groups include Straight No Chaser, Pentatonix, The House
Jacks, Rockapella, Mosaic, and M-pact. There also remains a strong a cappella presence within Christian music, as
some denominations purposefully do not use instruments during worship. Examples of such groups are Take 6, Glad and
Acappella. Arrangements of popular music for small a cappella ensembles typically include one voice singing the lead
melody, one singing a rhythmic bass line, and the remaining voices contributing chordal or polyphonic accompaniment.
A cappella has been used as the sole orchestration for original works of musical theater that have had commercial
runs Off-Broadway (theaters in New York City with 99 to 500 seats) only four times. The first was Avenue X which
opened on 28 January 1994 and ran for 77 performances. It was produced by Playwrights Horizons with book by John
Jiler, music and lyrics by Ray Leslee. The musical style of the show's score was primarily Doo-Wop as the plot revolved
around Doo-Wop group singers of the 1960s. The a cappella musical Perfect Harmony, a comedy about two high school
a cappella groups vying to win the National championship, made its Off-Broadway debut at Theatre Row's Acorn Theatre on 42nd Street in New York City in October 2010 after a successful out-of-town run at the Stoneham Theatre, in Stoneham,
Massachusetts. Perfect Harmony features the hit music of The Jackson 5, Pat Benatar, Billy Idol, Marvin Gaye, Scandal,
Tiffany, The Romantics, The Pretenders, The Temptations, The Contours, The Commodores, Tommy James & the Shondells
and The Partridge Family, and has been compared to a cross between Altar Boyz and The 25th Annual Putnam County Spelling
Bee. The fourth a cappella musical to appear Off-Broadway, In Transit, premiered 5 October 2010 and was produced
by Primary Stages with book, music, and lyrics by Kristen Anderson-Lopez, James-Allen Ford, Russ Kaplan, and Sara
Wordsworth. Set primarily in the New York City subway system its score features an eclectic mix of musical genres
(including jazz, hip hop, Latin, rock, and country). In Transit incorporates vocal beat boxing into its contemporary
a cappella arrangements through the use of a subway beat boxer character. Beat boxer and actor Chesney Snow performed
this role for the 2010 Primary Stages production. According to the show's website, it is scheduled to reopen for
an open-ended commercial run in the Fall of 2011. In 2011 the production received four Lucille Lortel Award nominations
including Outstanding Musical, Outer Critics Circle and Drama League nominations, as well as five Drama Desk nominations
including Outstanding Musical and won for Outstanding Ensemble Performance. Barbershop music is one of several uniquely
American art forms. The earliest reports of this style of a cappella music involved African Americans. The earliest
documented quartets all began in barbershops. In 1938, the first formal men's barbershop organization was formed,
known as the Society for the Preservation and Encouragement of Barber Shop Quartet Singing in America (S.P.E.B.S.Q.S.A.),
and in 2004 rebranded itself and officially changed its public name to the Barbershop Harmony Society (BHS). Today
the BHS has over 22,000 members in approximately 800 chapters across the United States, and the barbershop style
has spread around the world with organizations in many other countries. The Barbershop Harmony Society provides a
highly organized competition structure for a cappella quartets and choruses singing in the barbershop style. In 1945,
the first formal women's barbershop organization, Sweet Adelines, was formed. In 1953 Sweet Adelines became an international
organization, although it didn't change its name to Sweet Adelines International until 1991. The membership of nearly
25,000 women, all singing in English, includes choruses in most of the fifty United States as well as in Australia,
Canada, England, Finland, Germany, Ireland, Japan, New Zealand, Scotland, Sweden, Wales and the Netherlands. Headquartered
in Tulsa, Oklahoma, the organization encompasses more than 1,200 registered quartets and 600 choruses. The reasons for the strong Swedish dominance are, as explained by Richard Sparks, manifold. Suffice it to say here that there is a long-standing tradition; an unusually large proportion of the population (5% is often cited) regularly sings in choirs; the Swedish choral director Eric Ericson had an enormous impact on a cappella choral development not only in Sweden but around the world; and finally, there are a large number of very popular primary and secondary schools (music schools) with high admission standards based on auditions, which combine a rigid academic regimen with high-level choral singing on every school day, a system that started with Adolf Fredrik's Music School in Stockholm in 1939 but has since spread across the country. It is not clear exactly where collegiate a cappella began. The Rensselyrics
of Rensselaer Polytechnic Institute (formerly known as the RPI Glee Club), established in 1873, is perhaps the oldest known collegiate a cappella group. However, the longest continuously singing group is probably The
Whiffenpoofs of Yale University, which was formed in 1909 and once included Cole Porter as a member. Collegiate a
cappella groups grew throughout the 20th century. Some notable historical groups formed along the way include Princeton
University's Tigertones (1946), Colgate University's The Colgate 13 (1942), Dartmouth College's Aires (1946), Cornell
University's Cayuga's Waiters (1949) and The Hangovers (1968), the University of Maine Maine Steiners (1958), the
Columbia University Kingsmen (1949), the Jabberwocks of Brown University (1949), and the University of Rochester
YellowJackets (1956). All-women a cappella groups followed shortly, frequently as a parody of the men's groups: the
Smiffenpoofs of Smith College (1936), The Shwiffs of Connecticut College (The She-Whiffenpoofs, 1944), and The Chattertocks
of Brown University (1951). A cappella groups exploded in popularity beginning in the 1990s, fueled in part by a
change in style popularized by the Tufts University Beelzebubs and the Boston University Dear Abbeys. The new style
used voices to emulate modern rock instruments, including vocal percussion/"beatboxing". Some larger universities
now have multiple groups. Groups often join one another in on-campus concerts, such as the Georgetown Chimes' Cherry
Tree Massacre, a 3-weekend a cappella festival held each February since 1975, where over a hundred collegiate groups
have appeared, as well as International Quartet Champions The Boston Common and the contemporary commercial a cappella
group Rockapella. Co-ed groups have produced many up-and-coming and major artists, including John Legend, an alumnus
of the Counterparts at the University of Pennsylvania, and Sara Bareilles, an alumna of Awaken A Cappella at University
of California, Los Angeles. Mira Sorvino is an alumna of the Harvard-Radcliffe Veritones of Harvard College where
she had the solo on Only You by Yaz. A cappella is gaining popularity among South Asians with the emergence of primarily
Hindi-English College groups. The first South Asian a cappella group was Penn Masala, founded in 1996 at the University
of Pennsylvania. Co-ed South Asian a cappella groups are also gaining in popularity. The first co-ed South Asian a cappella group was Anokha, from the University of Maryland, formed in 2001. Also, Dil se, another co-ed a cappella group from UC Berkeley, hosts the annual "Anahat" competition at the University of California, Berkeley. Maize Mirchi, the
co-ed a cappella group from the University of Michigan, hosts "Sa Re Ga Ma Pella", an annual South Asian a cappella
invitational with various groups from the Midwest. Increased interest in modern a cappella (particularly collegiate
a cappella) can be seen in the growth of awards such as the Contemporary A Cappella Recording Awards (overseen by
the Contemporary A Cappella Society) and competitions such as the International Championship of Collegiate A Cappella
for college groups and the Harmony Sweepstakes for all groups. In December 2009, a new television competition series
called The Sing-Off aired on NBC. The show featured eight a cappella groups from the United States and Puerto Rico
vying for the prize of $100,000 and a recording contract with Epic Records/Sony Music. The show was judged by Ben
Folds, Shawn Stockman, and Nicole Scherzinger and was won by an all-male group from Puerto Rico called Nota. The
show returned for a second and third season, won by Committed and Pentatonix, respectively. In addition to singing
words, some a cappella singers also emulate instrumentation by reproducing instrumental sounds with their vocal cords
and mouth. One of the earliest 20th-century practitioners of this method was The Mills Brothers, whose early recordings
of the 1930s clearly stated on the label that all instrumentation was done vocally. More recently, "Twilight Zone"
by 2 Unlimited was sung a cappella to the instrumentation on the comedy television series Tompkins Square. Another
famous example of emulating instrumentation instead of singing the words is the theme song for The New Addams Family
series on Fox Family Channel (now ABC Family). Groups such as Vocal Sampling and Undivided emulate Latin rhythms
a cappella. In the 1960s, the Swingle Singers used their voices to emulate musical instruments to Baroque and Classical
music. Vocal artist Bobby McFerrin is famous for his instrumental emulation. A cappella group Naturally Seven recreates
entire songs using vocal tones for every instrument. The Swingle Singers used nonsense words to sound like instruments,
but have also been known to produce non-verbal imitations of musical instruments. As with the other groups, examples of their music can be found on YouTube. Beatboxing, more accurately known as vocal percussion, is a technique used in a cappella
music popularized by the hip-hop community, where rap is often performed a cappella also. The advent of vocal percussion
added new dimensions to the a cappella genre and has become very prevalent in modern arrangements. Petra Haden used
a four-track recorder to produce an a cappella version of The Who Sell Out including the instruments and fake advertisements
on her album Petra Haden Sings: The Who Sell Out in 2005. Haden has also released a cappella versions of Journey's
"Don't Stop Believin'", The Beach Boys' "God Only Knows" and Michael Jackson's "Thriller". In 2009, Toyota commissioned
Haden to perform three songs for television commercials for the third-generation Toyota Prius, including an a cappella
version of The Bellamy Brothers' 1970s song "Let Your Love Flow".
The Order of Preachers (Latin: Ordo Praedicatorum, hence the abbreviation OP used by members), more commonly known after
the 15th century as the Dominican Order or Dominicans, is a Roman Catholic religious order founded by the Spanish
priest Saint Dominic de Guzman in France and approved by Pope Honorius III (1216–27) on 22 December 1216. Membership
in this "mendicant" order includes friars, nuns, active sisters, and lay or secular Dominicans (formerly known as
tertiaries, though recently there has been a growing number of Associates, who are unrelated to the tertiaries) affiliated
with the order. Founded to preach the Gospel and to combat heresy, the teaching activity of the order and its scholastic
organization placed the Preachers in the forefront of the intellectual life of the Middle Ages. The order is famed
for its intellectual tradition, having produced many leading theologians and philosophers. The Dominican Order is
headed by the Master of the Order, who is currently Bruno Cadoré. Members of the order generally carry the letters
O.P., standing for Ordinis Praedicatorum, meaning of the Order of Preachers, after their names. The Dominican Order
came into being in the Middle Ages at a time when religion began to be contemplated in a new way. Men of God were
no longer expected to stay behind the walls of a cloister. Instead, they travelled among the people, taking as their
examples the apostles of the primitive Church. Out of this ideal emerged two orders of mendicant friars: one, the
Friars Minor, was led by Francis of Assisi; the other, the Friars Preachers, by Dominic of Guzman. Like his contemporary,
Francis, Dominic saw the need for a new type of organization, and the quick growth of the Dominicans and Franciscans
during their first century of existence confirms that the orders of mendicant friars met a need. Dominic sought to
establish a new kind of order, one that would bring the dedication and systematic education of the older monastic
orders like the Benedictines to bear on the religious problems of the burgeoning population of cities, but with more
organizational flexibility than either monastic orders or the secular clergy. Dominic's new order was to be a preaching
order, trained to preach in the vernacular languages. Rather than earning their living on vast farms as the monasteries
had done, the new friars would survive by begging, "selling" themselves through persuasive preaching. Dominic inspired
his followers with loyalty to learning and virtue, a deep recognition of the spiritual power of worldly deprivation
and the religious state, and a highly developed governmental structure. At the same time, Dominic inspired the members
of his order to develop a "mixed" spirituality. They were both active in preaching, and contemplative in study, prayer
and meditation. The brethren of the Dominican Order were urban and learned, as well as contemplative and mystical
in their spirituality. While these traits had an impact on the women of the order, the nuns especially absorbed the
latter characteristics and made those characteristics their own. In England, the Dominican nuns blended these elements
with the defining characteristics of English Dominican spirituality and created a spirituality and collective personality
that set them apart. As an adolescent, Dominic had a particular love of theology, and the Scriptures became the foundation
of his spirituality. During his studies in Palencia, Spain, he experienced a dreadful famine, prompting Dominic to
sell all of his beloved books and other equipment to help his neighbors. After he completed his studies, Bishop Martin
Bazan and Prior Diego d'Achebes appointed Dominic to the cathedral chapter and he became a regular canon under the
Rule of St. Augustine and the Constitutions for the cathedral church of Osma. At the age of twenty-four or twenty-five,
he was ordained to the priesthood. In 1203, Dominic joined Prior Diego de Acebo on an embassy to Denmark for the
monarchy of Spain, to arrange the marriage between the son of King Alfonso VIII of Castile and a niece of King Valdemar
II of Denmark. At that time the south of France was the stronghold of the Cathar or Albigensian heresy, named after
the city of Albi, around which the movement was strong and which gave its name to the subsequent Albigensian Crusade
(1209–1229). Dominic was fired by a reforming zeal after he and Diego encountered Albigensian Christians at Toulouse. Prior Diego saw immediately
one of the paramount reasons for the spread of the unorthodox movement: the representatives of the Holy Church acted
and moved with an offensive amount of pomp and ceremony. On the other hand, the Cathars lived in a state of self-sacrifice
that was widely appealing. For these reasons, Prior Diego suggested that the papal legates begin to live a reformed
apostolic life. The legates agreed to change if they could find a strong leader. The prior took up the challenge,
and he and Dominic dedicated themselves to the conversion of the Albigensians. Despite this particular mission, Dominic
met with only limited success in winning the Albigensians over by persuasion, "for though in his ten years of preaching
a large number of converts were made, it has to be said that the results were not such as had been hoped for." Dominic
became the spiritual father to several Albigensian women he had reconciled to the faith, and in 1206 he established
them in a convent in Prouille. This convent would become the foundation of the Dominican nuns, thus making the Dominican
nuns older than the Dominican friars. Prior Diego sanctioned the building of a monastery for girls whose parents
had sent them to the care of the Albigensians because their families were too poor to fulfill their basic needs.
The monastery at Prouille would later become Dominic's headquarters for his missionary efforts. After two
years on the mission field, Prior Diego died while traveling back to Spain. When his preaching companions heard of
his death, all save Dominic and a very small number of others returned to their homes. In July 1215, with the approbation
of Bishop Foulques of Toulouse, Dominic ordered his followers into an institutional life. Its purpose was revolutionary
in the pastoral ministry of the Catholic Church. These priests were organized and well trained in religious studies.
Dominic needed a framework—a rule—to organize these components. The Rule of St. Augustine was an obvious choice for
the Dominican Order, according to Dominic's successor, Jordan of Saxony, because it lent itself to the "salvation
of souls through preaching". By this choice, however, the Dominican brothers designated themselves not monks, but
canons-regular. They could practice ministry and common life while existing in individual poverty. Dominic's education
at Palencia gave him the knowledge he needed to overcome the Manicheans. With charity, the other concept that most
defines the work and spirituality of the order, study became the method most used by the Dominicans in working to
defend the Church against the perils that hounded it, and in enlarging its authority over larger areas of the
known world. In Dominic's thinking, it was impossible for men to preach what they did not or could not understand.
When the brethren left Prouille, then, to begin their apostolic work, Dominic sent Matthew of Paris to establish
a school near the University of Paris. This was the first of many Dominican schools established by the brethren,
some near large universities throughout Europe. In 1219 Pope Honorius III invited Saint Dominic and his companions
to take up residence at the ancient Roman basilica of Santa Sabina, which they did by early 1220. Before that time
the friars had only a temporary residence in Rome at the convent of San Sisto Vecchio which Honorius III had given
to Dominic circa 1218 intending it to become a convent for a reformation of nuns at Rome under Dominic's guidance.
In May 1220 at Bologna the order's first General Chapter mandated that each new priory of the order maintain its
own studium conventuale thus laying the foundation of the Dominican tradition of sponsoring widespread institutions
of learning. The official foundation of the Dominican convent at Santa Sabina with its studium conventuale occurred
with the legal transfer of property from Honorius III to the Order of Preachers on June 5, 1222. This studium was
transformed into the order's first studium provinciale by Saint Thomas Aquinas in 1265. Part of the curriculum of
this studium was relocated in 1288 to the studium of Santa Maria sopra Minerva, which in the 16th century would be
transformed into the College of Saint Thomas (Latin: Collegium Divi Thomæ). In the 20th century the college would
be relocated to the convent of Saints Dominic and Sixtus and would be transformed into the Pontifical University
of Saint Thomas Aquinas, Angelicum. The Dominican friars quickly spread, including to England, where they appeared
in Oxford in 1221. In the 13th century the order reached all classes of Christian society, fought heresy, schism,
and paganism by word and book, and by its missions to the north of Europe, to Africa, and Asia passed beyond the
frontiers of Christendom. Its schools spread throughout the entire Church; its doctors wrote monumental works in
all branches of knowledge, including the extremely important Albertus Magnus and Thomas Aquinas. Its members included
popes, cardinals, bishops, legates, inquisitors, confessors of princes, ambassadors, and paciarii (enforcers of the
peace decreed by popes or councils). Pope Gregory IX appointed the order to the duty of carrying out the Inquisition.
In his papal bull Ad extirpanda of 1252, Pope Innocent IV authorised the Dominicans' use of torture under prescribed
circumstances. The expansion of the order produced changes. A smaller emphasis on doctrinal activity favoured the
development here and there of the ascetic and contemplative life and there sprang up, especially in Germany and Italy,
the mystical movement with which the names of Meister Eckhart, Heinrich Suso, Johannes Tauler, and St. Catherine
of Siena are associated. (See German mysticism, which has also been called "Dominican mysticism.") This movement
was the prelude to the reforms undertaken, at the end of the century, by Raymond of Capua, and continued in the following
century. It assumed remarkable proportions in the congregations of Lombardy and the Netherlands, and in the reforms
of Savonarola in Florence. At the same time the order found itself face to face with the Renaissance. It struggled
against pagan tendencies in Renaissance humanism, in Italy through Dominici and Savonarola, in Germany through the
theologians of Cologne but it also furnished humanism with such advanced writers as Francesco Colonna (probably the
writer of the Hypnerotomachia Poliphili) and Matteo Bandello. Many Dominicans took part in the artistic activity
of the age, the most prominent being Fra Angelico and Fra Bartolomeo. During this critical period, the number of
Preachers seems never to have sunk below 3,500. Statistics for 1876 show 3,748, but 500 of these had been expelled
from their convents and were engaged in parochial work. Statistics for 1910 show a total of 4,472 nominally or actually
engaged in proper activities of the order. In the year 2000, there were 5,171 Dominican friars in solemn vows, 917
student brothers, and 237 novices. By the year 2013 there were 6,058 Dominican friars, including 4,470 priests. In
the revival movement France held a foremost place, owing to the reputation and convincing power of the orator, Jean-Baptiste
Henri Lacordaire (1802–1861). He took the habit of a Friar Preacher at Rome (1839), and the province of France was
canonically erected in 1850. From this province were detached the province of Lyon, called Occitania (1862), that
of Toulouse (1869), and that of Canada (1909). The French restoration likewise furnished many laborers to other provinces,
to assist in their organization and progress. From it came the master general who remained longest at the head of
the administration during the 19th century, Père Vincent Jandel (1850–1872). Here should be mentioned the province
of St. Joseph in the United States. Founded in 1805 by Edward Fenwick, afterwards first Bishop of Cincinnati, Ohio
(1821–1832), this province has developed slowly, but now ranks among the most flourishing and active provinces of
the order. In 1910 it numbered seventeen convents or secondary houses. In 1905, it established a large house of studies
at Washington, D.C., called the Dominican House of Studies. There are now four Dominican provinces in the United
States. The province of France has produced a large number of preachers. The conferences of Notre-Dame-de-Paris were
inaugurated by Père Lacordaire. The Dominicans of the province of France furnished Lacordaire (1835–1836, 1843–1851),
Jacques Monsabré (1869–1870, 1872–1890), Joseph Ollivier (1871, 1897), Thomas Etourneau (1898–1902).
Since 1903 the pulpit of Notre Dame has been occupied by a succession of Dominicans. Père Henri Didon (d. 1900) was
a Dominican. The house of studies of the province of France publishes L'Année Dominicaine (founded 1859), La Revue
des Sciences Philosophiques et Theologiques (1907), and La Revue de la Jeunesse (1909). French Dominicans founded
and administer the École Biblique et Archéologique française de Jérusalem founded in 1890 by Père Marie-Joseph Lagrange
O.P. (1855–1938), one of the leading international centres for Biblical research. It is at the École Biblique that
the famed Jerusalem Bible (both editions) was prepared. Doctrinal development has had an important place in the restoration
of the Preachers. Several institutions, besides those already mentioned, played important parts. Such is the Biblical
school at Jerusalem, open to the religious of the order and to secular clerics, which publishes the Revue Biblique.
The faculty of theology at the University of Fribourg, confided to the care of the Dominicans in 1890, is flourishing,
and has about 250 students. The Pontificium Collegium Internationale Angelicum, the future Pontifical University
of Saint Thomas Aquinas, Angelicum established at Rome in 1908 by Master Hyacinth Cormier, opened its doors to regulars
and seculars for the study of the sacred sciences. In addition to the reviews above are the Revue Thomiste, founded
by Père Thomas Coconnier (d. 1908), and the Analecta Ordinis Prædicatorum (1893). Among numerous writers of the order
in this period are: Cardinals Thomas Zigliara (d. 1893) and Zephirin González (d. 1894), two esteemed philosophers;
Alberto Guillelmotti (d. 1893), historian of the Pontifical Navy, and Heinrich Denifle, one of the most famous writers
on medieval history (d. 1905). Today, there is a growing number of Associates who share the Dominican
charism. Dominican Associates are Christian women and men; married, single, divorced, and widowed; clergy members
and lay persons who were first drawn to and then called to live out the charism and continue the mission of the Dominican
Order: to praise, to bless, to preach. Associates do not take vows, but rather make a commitment to be partners
with vowed members, and to share the mission and charism of the Dominican Family in their own lives, families, churches,
neighborhoods, workplaces, and cities. The spiritual tradition of Dominic's Order is punctuated not only by charity,
study and preaching, but also by instances of mystical union. The Dominican emphasis on learning and on charity distinguishes
it from other monastic and mendicant orders. As the order first developed on the European continent, learning continued
to be emphasized by these friars and their sisters in Christ. These religious also struggled for a deeply personal,
intimate relationship with God. When the order reached England, many of these attributes were kept, but the English
gave the order additional, specialized characteristics. This topic is discussed below. Dominic's search for a close
relationship with God was determined and unceasing. He rarely spoke, so little of his interior life is known. What
is known about it comes from accounts written by people near to him. St. Cecilia remembered him as cheerful, charitable
and full of unceasing vigor. From a number of accounts, singing was apparently one of Dominic's great delights. Dominic
practiced self-scourging and would mortify himself as he prayed alone in the chapel at night for 'poor sinners.'
He owned a single habit, refused to carry money, and would allow no one to serve him. The spirituality evidenced
throughout all of the branches of the order reflects the spirit and intentions of its founder, though some of the
elements of what later developed might have surprised the Castilian friar. Fundamentally, Dominic was "... a man
of prayer who utilized the full resources of the learning available to him to preach, to teach, and even materially
to assist those searching for the truth found in the gospel of Christ. It is that spirit which [Dominic] bequeathed
to his followers". Humbert of Romans, the master general of the order from 1254 to 1263, was a great administrator,
as well as preacher and writer. It was under his tenure as master general that the sisters in the order were given
official membership. Humbert was a great lover of languages, and encouraged linguistic studies among the Dominicans,
primarily Arabic, because of the missionary work friars were pursuing amongst those led astray or forced to convert
by Muslims in the Middle East. He also wanted his friars to reach excellence in their preaching, and this was his
most lasting contribution to the order. The growth of the spirituality of young preachers was his first priority.
He once cried to his students: "... consider how excellent this office [of preaching] is, because it is apostolic;
how useful, because it is directly ordained for the salvation of souls; how perilous, because few have in them, or
perform, what the office requires, for it is not without great danger ...". Humbert is at
the center of ascetic writers in the Dominican Order. In this role, he added significantly to its spirituality. His
writings are permeated with "religious good sense," and he used uncomplicated language that could edify even the
weakest member. Humbert advised his readers, "[Young Dominicans] are also to be instructed not to be eager to see
visions or work miracles, since these avail little to salvation, and sometimes we are fooled by them; but rather
they should be eager to do good in which salvation consists. Also, they should be taught not to be sad if they do
not enjoy the divine consolations they hear others have; but they should know the loving Father for some reason sometimes
withholds these. Again, they should learn that if they lack the grace of compunction or devotion they should not
think they are not in the state of grace as long as they have good will, which is all that God regards". Another
who contributed significantly to the spirituality of the order is Albertus Magnus, the only person of the period
to be given the appellation "Great". His influence on the brotherhood permeated nearly every aspect of Dominican
life. Albert was a scientist, philosopher, astrologer, theologian, spiritual writer, ecumenist, and diplomat. Under
the auspices of Humbert of Romans, Albert molded the curriculum of studies for all Dominican students, introduced
Aristotle to the classroom and probed the work of Neoplatonists, such as Plotinus. Indeed, it was the thirty years
of work done by Thomas Aquinas and himself (1245–1274) that allowed for the inclusion of Aristotelian study in the
curriculum of Dominican schools. One of Albert's greatest contributions was his study of Dionysius the Areopagite,
a mystical theologian whose words left an indelible imprint in the medieval period. Magnus' writings made a significant
contribution to German mysticism, which became vibrant in the minds of the Beguines and women such as Hildegard of
Bingen and Mechthild of Magdeburg. Mysticism, for the purposes of this study, refers to the conviction that all believers
have the capability to experience God's love. This love may manifest itself through brief ecstatic experiences, such
that one may be engulfed by God and gain an immediate knowledge of Him, which is unknowable through the intellect
alone. Albertus Magnus championed the idea, drawn from Dionysius, that positive knowledge of God is possible, but
obscure. Thus, it is easier to state what God is not, than to state what God is: "... we affirm things of God only
relatively, that is, casually, whereas we deny things of God absolutely, that is, with reference to what He is in
Himself. And there is no contradiction between a relative affirmation and an absolute negation. It is not contradictory
to say that someone is white-toothed and not white". Albert the Great wrote that wisdom and understanding enhance
one's faith in God. According to him, these are the tools that God uses to commune with a contemplative. Love in
the soul is both the cause and result of true understanding and judgement. It causes not only an intellectual knowledge
of God, but a spiritual and emotional knowledge as well. Contemplation is the means whereby one can obtain this goal
of understanding. Things that once seemed static and unchanging become full of possibility and perfection. The contemplative
then knows that God is, but she does not know what God is. Thus, contemplation forever produces a mystified, imperfect
knowledge of God. The soul is exalted beyond the rest of God's creation but it cannot see God Himself. As the image
of God grows within man, he learns to rely less on an intellectual pursuit of virtue and more on an affective pursuit
of charity and meekness. Meekness and charity guide Christians to acknowledge that they are nothing without the One
(God/Christ) who created them, sustains them, and guides them. Thus, man then directs his path to that One, and the
love for, and of, Christ guides man's very nature to become centered on the One, and on his neighbor as well. Charity
is the manifestation of the pure love of Christ, both for and by His follower. The Dominican Order was affected by
a number of elemental influences. Its early members imbued the order with mysticism and learning. The Europeans
of the order embraced ecstatic mysticism on a grand scale and looked to a union with the Creator. The English Dominicans
looked for this complete unity as well, but were not so focused on ecstatic experiences. Instead, their goal was
to emulate the moral life of Christ more completely. The Dartford nuns were surrounded by all of these legacies,
and used them to create something unique. Though they are not called mystics, they are known for their piety toward
God and their determination to live lives devoted to, and in emulation of, Him. Although Albertus Magnus did much
to instill mysticism in the Order of Preachers, it is a concept that reaches back to the Hebrew Bible. In the tradition
of Holy Writ, the impossibility of coming face to face with God is a recurring motif, thus the commandment against
graven images (Exodus 20.4-5). As time passed, Jewish and early Christian writings presented the idea of 'unknowing,'
where God's presence was enveloped in a dark cloud. These images arose out of a confusing mass of ambiguous and ambivalent
statements regarding the nature of God and man's relationship to Him. Although Dominic and the early brethren had
instituted female Dominican houses at Prouille and other places by 1227, some of the brethren of the order had misgivings
about the necessity of female religious establishments in an order whose major purpose was preaching, a duty in which
women could not traditionally engage. In spite of these doubts, women's houses dotted the countryside throughout
Europe. There were seventy-four Dominican female houses in Germany, forty-two in Italy, nine in France, eight in
Spain, six in Bohemia, three in Hungary, and three in Poland. Many of the German religious houses that lodged women
had been home to communities of women, such as Beguines, that became Dominican once they were taught by the traveling
preachers and put under the jurisdiction of the Dominican authoritative structure. A number of these houses became
centers of study and mystical spirituality in the 14th century. There were one hundred and fifty-seven nunneries
in the order by 1358. After that year, the number lessened due to disasters like the Black Death. Female houses differed
from male Dominican houses in a lack of apostolic work for the women. Instead, the sisters chanted the Divine Office
and kept all the monastic observances. Their lives were often much more strict than their brothers' lives. The sisters
had no government of their own, but lived under the authority of the general and provincial chapters of the order.
They were compelled to obey all the rules and shared in all the applicable privileges of the order. Like the Priory
of Dartford, all Dominican nunneries were under the jurisdiction of friars. The friars served as their confessors,
priests, teachers and spiritual mentors. Women could not be professed to the Dominican religious life before the
age of thirteen. The formula for profession contained in the Constitutions of Montargis Priory (1250) demands that
nuns pledge obedience to God, the Blessed Virgin, their prioress and her successors according to the Rule of St.
Augustine and the institute of the order, until death. The clothing of the sisters consisted of a white tunic and
scapular, a leather belt, a black mantle, and a black veil. Candidates to profession were tested to reveal whether
they were actually married women who had merely separated from their husbands. Their intellectual abilities were
also tested. Nuns were to be silent in places of prayer, the cloister, the dormitory, and refectory. Silence was
maintained unless the prioress granted an exception for a specific cause. Speaking was allowed in the common parlor,
but it was subordinate to strict rules, and the prioress, subprioress or other senior nun had to be present. Because
the nuns of the order did not preach among the people, the need to engage in study was not as immediate or intense
as it was for men. They did participate, however, in a number of intellectual activities. Along with sewing and embroidery,
nuns often engaged in reading and discussing correspondence from Church leaders. In the Strassburg monastery of St.
Margaret, some of the nuns could converse fluently in Latin. Learning still had an elevated place in the lives of
these religious. In fact, Margarette Reglerin, a daughter of a wealthy Nuremberg family, was dismissed from a convent
because she did not have the ability or will to learn. As heirs of the Dominican priory of Poissy in France, the
Dartford sisters were also heirs to a tradition of profound learning and piety. Sections of translations of spiritual
writings in Dartford's library, such as Suso's Little Book of Eternal Wisdom and Laurent du Bois' Somme le Roi, show
that the "ghoostli" link to Europe was not lost in the crossing of the Channel. It survived in the minds of the nuns.
Also, the nuns shared a unique identity with Poissy as a religious house founded by a royal house. The English nuns
were proud of this heritage, and aware that many of them shared in England's great history as members of the noble
class, as seen in the next chapter. The English Province was a component of the international order from which it
obtained its laws, direction, and instructions. It was also, however, a group of Englishmen. Its direct supervisors
were from England, and the members of the English Province dwelt and labored in English cities, towns, villages,
and roadways. English and European ingredients constantly came in contact. The international side of the province's
existence influenced the national, and the national responded to, adapted, and sometimes constrained the international.
The first Dominican site in England was at Oxford, in the parishes of St. Edward and St. Adelaide. The friars built
an oratory to the Blessed Virgin Mary and by 1265, the brethren, in keeping with their devotion to study, began erecting
a school. Actually, the Dominican brothers likely began a school immediately after their arrival, as priories were
legally schools. Information about the schools of the English Province is limited, but a few facts are known. Much
of the information available is taken from visitation records. The "visitation" was a section of the province through
which visitors to each priory could describe the state of its religious life and its studies to the next chapter.
There were four such visitations in England and Wales—Oxford, London, Cambridge and York. All Dominican students were
required to learn grammar, old and new logic, natural philosophy and theology. Of all of the curricular areas, however,
theology was the most important. This is not surprising when one remembers Dominic's zeal for it. English Dominican
mysticism in the late medieval period differed from European strands of it in that, whereas European Dominican mysticism
tended to concentrate on ecstatic experiences of union with the divine, English Dominican mysticism's ultimate focus
was on a crucial dynamic in one's personal relationship with God. This was an essential moral imitation of the Savior
as an ideal for religious change, and as the means for reformation of humanity's nature as an image of divinity.
This type of mysticism carried with it four elements. First, spiritually it emulated the moral essence of Christ's
life. Second, there was a connection linking moral emulation of Christ's life and humanity's disposition as images
of the divine. Third, English Dominican mysticism focused on an embodied spirituality with a structured love of fellow
men at its center. Finally, the supreme aspiration of this mysticism was either an ethical or an actual union with
God. For English Dominican mystics, the mystical experience was not expressed just in one moment of the full knowledge
of God, but in the journey of, or process of, faith. This then led to an understanding that was directed toward an
experiential knowledge of divinity. It is important to understand, however, that for these mystics it was possible
to pursue mystical life without the visions and voices that are usually associated with such a relationship with
God. They experienced a mystical process that allowed them, in the end, to experience what they had already gained
knowledge of through their faith only. The center of all mystical experience is, of course, Christ. English Dominicans
sought to gain a full knowledge of Christ through an imitation of His life. English mystics of all types tended to
focus on the moral values that the events in Christ's life exemplified. This led to a "progressive understanding
of the meanings of Scripture—literal, moral, allegorical, and anagogical"—that was contained within the mystical
journey itself. From these considerations of Scripture comes the simplest way to imitate Christ: an emulation of
the moral actions and attitudes that Jesus demonstrated in His earthly ministry becomes the most significant way
to feel and have knowledge of God. The English concentrated on the spirit of the events of Christ's life, not the
literality of events. They neither expected nor sought the appearance of the stigmata or any other physical manifestation.
They wanted to create in themselves that environment that allowed Jesus to fulfill His divine mission, insofar as
they were able. At the center of this environment was love: the love that Christ showed for humanity in becoming
human. Christ's love reveals the mercy of God and His care for His creation. English Dominican mystics sought through
this love to become images of God. Love led to spiritual growth that, in turn, reflected an increase in love for
God and humanity. This increase in universal love allowed men's wills to conform to God's will, just as Christ's
will submitted to the Father's will. Concerning humanity as the image of Christ, English Dominican spirituality concentrated
on the moral implications of image-bearing rather than the philosophical foundations of the imago Dei. The process
of Christ's life, and the process of image-bearing, conforms humanity to God's image. The idea of the "image of God"
demonstrates both the ability of man to move toward God (as partakers in Christ's redeeming sacrifice), and that,
on some level, man is always an image of God. As their love and knowledge of God grows and is sanctified by faith
and experience, the image of God within man becomes ever more bright and clear.
Eton is one of ten English HMC schools, commonly referred to as "public schools", regulated by the Public Schools Act 1868. Following the public school tradition, Eton is a full boarding school, which means all pupils live at the school,
and it is one of four such remaining single-sex boys' public schools in the United Kingdom (the others being Harrow,
Radley, and Winchester) to continue this practice. Eton has educated 19 British prime ministers and generations of
the aristocracy and has been referred to as the chief nurse of England's statesmen. Charging up to £11,478 per term
(there are three terms per academic year) in 2014/15, Eton is the sixth most expensive HMC boarding school in the
UK. Eton has a long list of distinguished former pupils. David Cameron is the 19th British prime minister to have
attended the school, and has recommended that Eton set up a school in the state sector to help drive up standards.
Eton now co-sponsors a state sixth-form college in Newham, a deprived area of East London, called the London Academy
of Excellence, opened in 2012, which is free of charge and aims to get all its students into higher education. In
September 2014, Eton opened, and became the sole educational sponsor for, Holyport College, a new purpose-built co-educational state boarding and day school for around 500 pupils in Maidenhead in Berkshire, with construction costing around £15 million. A fifth of its places for day pupils will be set aside for children from poor homes, 21 boarding places will go to youngsters on the verge of being taken into care, and a further 28 boarders will be funded or part-funded through bursaries. About 20% of pupils at Eton receive financial support, through a range of bursaries
and scholarships. The recent Head Master, Tony Little, said that Eton is developing plans to allow any boy to attend
the school whatever his parents' income and, in 2011, said that around 250 boys received "significant" financial
help from the school. In early 2014, this figure had risen to 263 pupils receiving the equivalent of around 60% of
school fee assistance, whilst a further 63 received their education free of charge. Little said that, in the short
term, he wanted to ensure that around 320 pupils per year receive bursaries, and that 70 were educated free of charge,
with the intention that the number of pupils receiving financial assistance would continue to increase. These comparatively
new developments will run alongside long-established courses that Eton has provided for pupils from state schools,
most of them in the summer holidays (July and August). Launched in 1982, the Universities Summer School is an intensive
residential course open to boys and girls throughout the UK who attend state schools, are at the end of their first
year in the Sixth Form, and are about to begin their final year of schooling. The Brent-Eton Summer School, started
in 1994, offers 40-50 young people from the London Borough of Brent, an area of inner-city deprivation, an intensive
one-week residential course, free of charge, designed to help bridge the gap between GCSE and A-level. In 2008, Eton
helped found the Eton, Slough, Windsor and Hounslow Independent and State School Partnership (ISSP), with six local
state schools. The ISSP's aims are "to raise pupil achievement, improve pupil self-esteem, raise pupil aspirations
and improve professional practice across the schools". Eton also runs a number of choral and English language courses
during the summer months. In the run-up to the London 2012 Summer Olympic and Paralympic Games, Eton's purpose-built Dorney Lake, a permanent, eight-lane, 2,200-metre course (about 1.4 miles) in a 400-acre park, officially known throughout the Games as Eton Dorney, provided training facilities for Olympic and Paralympic competitors. During the Games it hosted the Olympic and Paralympic rowing competitions as well as the Olympic canoe sprint event, attracting over 400,000 visitors during the Games period (around 30,000 per day), and was voted the best 2012 Olympic venue by spectators. Access to the 400-acre parkland around the Lake is provided to members of the public,
free of charge, almost all the year round. Construction of the chapel, originally intended to be slightly over twice
as long, with eighteen, or possibly seventeen, bays (there are eight today), was stopped when Henry VI was deposed.
Only the Quire of the intended building was completed. Eton's first Headmaster, William Waynflete, founder of Magdalen
College, Oxford and previously Head Master of Winchester College, built the ante-chapel that finishes the Chapel
today. The important wall paintings in the Chapel and the brick north range of the present School Yard also date
from the 1480s; the lower storeys of the cloister, including College Hall, had been built between 1441 and 1460.
As the school suffered reduced income while still under construction, the completion and further development of the
school has since depended to some extent on wealthy benefactors. Building resumed when Roger Lupton was Provost,
around 1517. His name is borne by the big gate-house in the west range of the cloisters, fronting School Yard, perhaps
the most famous image of the school. This range includes the important interiors of the Parlour, Election Hall, and
Election Chamber, where most of the 18th century "leaving portraits" are kept. The Duke of Wellington is often incorrectly
quoted as saying that "The Battle of Waterloo was won on the playing-fields of Eton". Wellington was at Eton from
1781 to 1784 and was to send his sons there. According to Nevill (citing the historian Sir Edward Creasy), what Wellington
said, while passing an Eton cricket match many decades later, was, "There grows the stuff that won Waterloo", a remark
Nevill construes as a reference to "the manly character induced by games and sport" amongst English youth generally,
not a comment about Eton specifically. In 1889, Sir William Fraser conflated this uncorroborated remark with the
one attributed to him by Count Charles de Montalembert, "C'est ici qu'a été gagné la bataille de Waterloo" ("It is here that the Battle of Waterloo was won"). As with other public schools, a scheme was devised towards the end
of the 19th century to familiarize privileged schoolboys with social conditions in deprived areas. The project of
establishing an 'Eton Mission' in the crowded district of Hackney Wick in east London was started at the beginning
of 1880, and lasted until 1971 when it was decided that a more local project (at Dorney) would be more realistic.
However, over the years much money was raised for the Eton Mission, a fine church by G. F. Bodley was erected, and many Etonians visited; the Mission stimulated, among other things, the Eton Manor Boys' Club, a notable rowing club which has survived the Mission itself, and the 59 Club for motorcyclists. The very large and ornate School Hall and School Library (by
L. K. Hall) were erected in 1906-8 across the road from Upper School as the school's memorial to the Etonians who
had died in the Boer War. Many tablets in the cloisters and chapel commemorate the large number of dead Etonians
of the Great War. A bomb destroyed part of Upper School in World War Two and blew out many windows in the Chapel.
The college commissioned replacements by Evie Hone (1949–52) and by John Piper and Patrick Reyntiens (1959 onwards).
In the past, people at Eton have occasionally been guilty of antisemitism. For a time, new admissions were called
'Jews' by their fellow Collegers. In 1945, the school introduced a nationality statute conditioning entry on the
applicant's father being British by birth. The statute was removed after the intervention of Prime Minister Harold
Macmillan in the 1960s after it came to the attention of Oxford's Wykeham Professor of Logic, A. J. Ayer, himself
Jewish and an Old Etonian, who "suspected a whiff of anti-semitism". One boarding house, College, is reserved for
seventy King's Scholars, who attend Eton on scholarships provided by the original foundation and awarded by examination
each year; King's Scholars pay up to 90% of full fees, depending on their means. Of the other pupils, up to a third
receive some kind of bursary or scholarship. The name "King's Scholars" derives from the school's foundation by King
Henry VI in 1440. The original School consisted of the seventy Scholars (together with some Commensals) and the Scholars
were educated and boarded at the foundation's expense. As the School grew, more students were allowed to attend provided
that they paid their own fees and lived in the town, outside the College's original buildings. These students became
known as Oppidans, from the Latin word oppidum, meaning town. The Houses developed over time as a means of providing
residence for the Oppidans in a more congenial manner, and during the 18th and 19th centuries were mostly run by
women known as "dames". They typically contain about fifty boys. Although classes are organised on a School basis,
most boys spend a large proportion of their time in their House. Each House has a formal name, mainly used for post
and people outside the Eton community. It is generally known by the boys by the initials or surname of the House
Master, the teacher who lives in the house and manages the pupils in it. Not all boys who pass the College election
examination choose to become King's Scholars. If they choose instead to belong to one of the 24 Oppidan Houses, they
are known as Oppidan Scholars. Oppidan scholarships may also be awarded for consistently performing with distinction
in School and external examinations. To gain an Oppidan Scholarship, a boy must have either three distinctions in
a row or four throughout his career. Within the school, an Oppidan Scholar is entitled to use the letters OS after
his name. The Oppidan Houses are named Godolphin House, Jourdelay's, (both built as such c. 1720), Hawtrey House,
Durnford House, (the first two built as such by the Provost and Fellows, 1845, when the school was increasing in
numbers and needed more centralised control), The Hopgarden, South Lawn, Waynflete, Evans's, Keate House, Warre House,
Villiers House, Common Lane House, Penn House, Walpole House, Cotton Hall, Wotton House, Holland House, Mustians,
Angelo's, Manor House, Farrer House, Baldwin's Bec, The Timbralls, and Westbury. For much of Eton's history, junior
boys had to act as "fags", or servants, to older boys. Their duties included cleaning, cooking, and running errands.
A Library member was entitled to yell at any time and without notice, "Boy, Up!" or "Boy, Queue!", and all first-year
boys had to come running. The last boy to arrive was given the task. These practices, known as fagging, were partially
phased out of most houses in the 1970s. Captains of House and Games still sometimes give tasks to first-year boys,
such as collecting the mail from School Office. The long-standing claim that the present uniform
was first worn as mourning for the death of George III is unfounded. "Eton dress" has undergone significant changes
since its standardisation in the 19th century. Originally (along with a top-hat and walking-cane), Etonian dress
was reserved for formal occasions, but boys wear it today for classes, which are referred to as "divisions", or "divs".
As stated above, King's Scholars wear a black gown over the top of their tailcoats, and occasionally a surplice in
Chapel. Members of the teaching staff (known as Beaks) are required to wear a form of school dress when teaching.
Later the emphasis was on classical studies, dominated by Latin and Ancient History, and, for boys with sufficient
ability, Classical Greek. From the latter part of the 19th century this curriculum has changed and broadened: for
example, there are now more than 100 students of Chinese, which is a non-curriculum course. In the 1970s, there was
just one school computer, in a small room attached to the science buildings. It used paper tape to store programs.
Today, all boys must have laptop computers, and the school fibre-optic network connects all classrooms and all boys'
bedrooms to the internet. The primary responsibility for a boy's studies lies with his House Master, but he is assisted
by an additional director of studies, known as a tutor. Classes, colloquially known as "divs" (divisions), are organised
on a School basis; the classrooms are separate from the houses. New school buildings have appeared for teaching purposes
every decade or so since New Schools, designed by Henry Woodyer and built 1861-3. Despite the introduction of modern
technology, the external appearance and locations of many of the classrooms have remained unchanged for a long time.
Societies tend to come and go, of course, depending on the special enthusiasms of the masters and boys in the school
at the time, but some have been in existence many years. Those in existence at present include: Aeronautical, African,
Alexander Cozens (Art), Amnesty, Archeological, Architectural, Astronomy, Banks (conservation), Caledonian, Cheese,
Classical, Comedy, Cosmopolitan, Debating, Design, Entrepreneurship, Geographical, Henry Fielding, Hispanic, History,
Keynes (economics), Law, Literary, Mathematical, Medical, Middle Eastern, Model United Nations, Modern Languages,
Oriental, Orwell (left-wing), Simeon (Christian), Parry (music), Photographic, Political, Praed (poetry), Rock (music),
Rous (equestrian), Salisbury (diplomatic), Savile (Rare Books and Manuscripts), Shelley, Scientific, Sports, Tech
Club, Theatre, Wellington (military), Wine, and Wotton's (philosophy). Prizes are awarded on the results of trials
(internal exams), GCSE and AS-levels. In addition, many subjects and activities have specially endowed prizes, several
of which are awarded by visiting experts. The most prestigious is the Newcastle Scholarship, awarded on the strength
of an examination, consisting of two papers in philosophical theology, moral theory and applied ethics. Also of note
are the Gladstone Memorial Prize and the Coutts Prize, awarded on the results of trials and AS-level examinations
in C; and the Huxley Prize, awarded for a project on a scientific subject. Other specialist prizes include the Newcastle
Classical Prize; the Rosebery Exhibition for History; the Queen’s Prizes for French and German; the Duke of Newcastle’s
Russian Prize; the Beddington Spanish Prize; the Strafford and Bowman Shakespeare Prizes; the Tomline and Russell
Prizes in Mathematics; the Sotheby Prize for History of Art; the Waddington Prize for Theology and Philosophy; the
Birley Prize for History; The Lower Boy Rosebery Prize and the Wilder Prize for Theology. Prizes are awarded too
for excellence in such activities as painting, sculpture, ceramics, playing musical instruments, musical composition,
declamation, silverwork, and design. Various benefactions make it possible to give grants each year to boys who wish,
for educational or cultural reasons, to work or travel abroad. These include the Busk Fund, which supports individual
ventures that show particular initiative; the C.M. Wells Memorial Trust Fund, for the promotion of visits to classical
lands; the Sadler Fund, which supports, amongst others, those intending to enter the Foreign Service; and the Marsden
Fund, for travel in countries where the principal language is not English. If any boy produces an outstanding piece
of work, it may be "Sent Up For Good", storing the effort in the College Archives for posterity. This award has been
around since the 18th century. As Sending Up For Good is fairly infrequent, the process is rather mysterious to many
of Eton's boys. First, the master wishing to Send Up For Good must gain the permission of the relevant Head of Department.
Upon receiving his or her approval, the piece of work will be marked with Sent Up For Good and the student will receive
a card to be signed by House Master, tutor and division master. The opposite of a "Show Up" (a commendation for good work) is a "Rip". This is for
sub-standard work, which is sometimes torn at the top of the page/sheet and must be submitted to the boy's housemaster
for signature. Boys who accumulate rips are liable to be given a "White Ticket", which must be signed by all his
teachers and may be accompanied by other punishments, usually involving doing domestic chores or writing lines. In
recent times, a milder form of the rip, 'sign for information', colloquially known as an "info", has been
introduced, which must also be signed by the boy's housemaster and tutor. A boy who is late for any division or other
appointment may be required to sign "Tardy Book", a register kept in the School Office, between 7.35am and 7.45am,
every morning for the duration of his sentence (typically three days). Tardy Book may also be issued for late work.
For more serious misdeeds, a boy is summoned from his lessons to the Head Master, or Lower Master if the boy is in
the lower two years, to talk personally about his misdeeds. This is known as the "Bill". The most serious misdeeds
may result in expulsion, or rustication (suspension). Conversely, should a master be more than 15 minutes late for
a class, traditionally the pupils might claim it as a "run" and absent themselves for the rest of its duration. John
Keate, Head Master from 1809 to 1834, took over at a time when discipline was poor. Anthony Chenevix-Trench, Head
Master from 1964 to 1970, abolished the birch and replaced it with caning, also applied to the bare posterior, which
he administered privately in his office. Chenevix-Trench also abolished corporal punishment administered by senior
boys. Previously, House Captains were permitted to cane miscreants over the seat of the trousers. This was a routine
occurrence, carried out privately with the boy bending over with his head under the edge of a table. Less common
but more severe were the canings administered by Pop (see Eton Society below) in the form of a "Pop-Tanning", in
which a large number of hard strokes were inflicted by the President of Pop in the presence of all Pop members (or,
in earlier times, each member of Pop took it in turns to inflict a stroke). The culprit was summoned to appear in
a pair of old trousers, as the caning would cut the cloth to shreds. This was the most severe form of physical punishment
at Eton. The current "Precentor" (Head of Music) is Tim Johnson, and the School boasts eight organs and an entire
building for music (performance spaces include the School Hall, the Farrer Theatre and two halls dedicated to music,
the Parry Hall and the Concert Hall). Many instruments are taught, including obscure ones such as the didgeridoo.
The School participates in many national competitions; many pupils are part of the National Youth Orchestra, and
the School gives scholarships for dedicated and talented musicians. A former Precentor of the college, Ralph Allwood
set up and organised Eton Choral Courses, which run at the School every summer. Numerous plays are put on every year
at Eton College; there is one main theatre, called the Farrer (seating 400), and two studio theatres, called the Caccia Studio and Empty Space (seating 90 and 80 respectively). There are about eight or nine house productions each year, around three or four "independent" plays (not confined solely to one house; produced, directed and funded by Etonians) and three
school plays, one specifically for boys in the first two years, and two open to all years. The School Plays have
such good reputations that they are normally fully booked every night. Productions also take place in varying locations
around the School, varying from the sports fields to more historic buildings such as Upper School and College Chapel.
In recent years, the School has put on a musical version of The Bacchae (October 2009) as well as productions of
A Funny Thing Happened on the Way to the Forum (May 2010), The Cherry Orchard (February 2011), Joseph K (October
2011), Cyrano de Bergerac (May 2012), Macbeth (October 2012), London Assurance (May 2013) and Jerusalem (October
2013). A production of A Midsummer Night's Dream followed in May 2014. Often girls from surrounding schools,
such as St George's, Ascot, St Mary's School Ascot, Windsor Girls' School and Heathfield St Mary's School, are cast
in female roles. Boys from the School are also responsible for the lighting, sound and stage management of all the
productions, under the guidance of several professional full-time theatre staff. Eton's best-known holiday takes
place on the so-called "Fourth of June", a celebration of the birthday of King George III, Eton's greatest patron.
This day is celebrated with the Procession of Boats, in which the top rowing crews from the top four years row past
in vintage wooden rowing boats. Similar to the Queen's Official Birthday, the "Fourth of June" is no longer celebrated
on 4 June, but on the Wednesday before the first weekend of June. Eton also observes St. Andrew's Day, on which the
Eton wall game is played. Until 18 December 2010, Eton College was an exempt charity under English
law (Charities Act 1993, Schedule 2). Under the provisions of the Charities Act 2006, it is now an excepted charity,
and fully registered with the Charities Commission, and is now one of the 100 largest charities in the UK. As a charity,
it benefits from substantial tax breaks. The late David Jewell, former Master of Haileybury (who had no direct connection with the School), calculated that in 1992 such tax breaks saved the School about £1,945 per pupil per year. This subsidy has declined since the 2001 abolition by the Labour Government of state-funded scholarships (formerly known as "assisted places") to independent schools; however, since no child attended Eton on this scheme, the actual level of state assistance to the School has always been lower. Eton's retiring Head Master, Tony
Little, has claimed that the benefits that Eton provides to the local community free of charge (use of its facilities,
etc.) have a higher value than the tax breaks it receives as a result of its charitable status. The fee for the academic
year 2010–2011 was £29,862 (approximately US$48,600 or €35,100 as of March 2011), although the sum is considerably
lower for those pupils on bursaries and scholarships. In 1995 the National Lottery granted money for a £4.6m sports
complex, to add to Eton's existing facilities of two swimming pools, 30 cricket squares, 24 football, rugby and hockey
pitches and a gym. The College paid £200,000 and contributed 4.5 hectares of land in return for exclusive use of
the facilities during the daytime only. The UK Sports Council defended the deal on the grounds that the whole community
would benefit, while the bursar claimed that Windsor, Slough and Eton Athletic Club was "deprived" because local
people (who were not pupils at the College) did not have a world-class running track and facilities to train with.
Steve Osborn, director of the Safe Neighbourhoods Unit, described the decision as "staggering" given the background
of a substantial reduction in youth services by councils across the country, a matter over which, however, neither
the College nor the UK Sports Council, had any control. The facility, which became the Thames Valley Athletics Centre,
opened in April 1999. In October 2004, Sarah Forsyth claimed that she had been dismissed unfairly by Eton College
and had been bullied by senior staff. She also claimed she was instructed to do some of Prince Harry's coursework
to enable him to pass AS Art. As evidence, Forsyth provided secretly recorded conversations with both Prince Harry
and her Head of Department, Ian Burke. An employment tribunal in July 2005 found that she had been unfairly dismissed
and criticised Burke for bullying her and for repeatedly changing his story. It also criticised the school for failing
to produce its capability procedures and criticised the Head Master for not reviewing the case independently. It
criticised Forsyth's decision to record a conversation with Harry as an abuse of teacher–student confidentiality
and said "It is clear whichever version of the evidence is accepted that Mr Burke did ask the claimant to assist
Prince Harry with text for his expressive art project ... It is not part of this tribunal's function to determine
whether or not it was legitimate." In response to the tribunal's ruling concerning the allegations about Prince Harry,
the School issued a statement, saying Forsyth's claims "were dismissed for what they always have been - unfounded
and irrelevant." A spokesperson from Clarence House said, "We are delighted that Harry has been totally cleared of
cheating." In 2005, the Office of Fair Trading found fifty independent schools, including Eton, to have breached
the Competition Act by "regularly and systematically" exchanging information about planned increases in school fees,
which was collated and distributed among the schools by the bursar at Sevenoaks School. Following the investigation
by the OFT, each school was required to pay around £70,000, totalling around £3.5 million, significantly less than
the maximum possible fine. In addition, the schools together agreed to contribute another £3m to a new charitable
educational fund. The incident raised concerns over whether the charitable status of independent schools such as
Eton should be reconsidered, and perhaps revoked. However, Jean Scott, the head of the Independent Schools Council,
said that independent schools had always been exempt from anti-cartel rules applied to business, were following a
long-established procedure in sharing the information with each other, and that they were unaware of the change to
the law (on which they had not been consulted). She wrote to John Vickers, the OFT director-general, saying, "They
are not a group of businessmen meeting behind closed doors to fix the price of their products to the disadvantage
of the consumer. They are schools that have quite openly continued to follow a long-established practice because
they were unaware that the law had changed." A Freedom of Information request in 2005 revealed that Eton had received
£2,652 in farming subsidies in 2004 under the Common Agricultural Policy. Asked to explain under what grounds it
was eligible to receive farming subsidies, Eton admitted that it was 'a bit of a mystery'. The TaxPayers' Alliance
also stated that Eton had received a total of £5,300 in CAP subsidies between 2002 and 2007. Panorama revealed in
March 2012 that farming subsidies were granted to Eton for 'environmental improvements', in effect 'being paid without
having to do any farming at all'. Figures obtained by The Daily Telegraph had revealed that, in 2010, 37 applicants
from Eton were accepted by Oxford whilst state schools had difficulty obtaining entry even for pupils with the country's
most impressive exam results. According to The Economist, Oxford and Cambridge admit more Etonians each year than
applicants from the whole country who qualify for free school meals. In April 2011 the Labour MP David Lammy described
as unfair and 'indefensible' the fact that Oxford University had organised nine 'outreach events' at Eton in 2010,
although he admitted that it had, in fact, held fewer such events for Eton than for another independent school, Wellington
College. In July 2015, Eton accidentally sent emails to 400 prospective students, offering them conditional entrance
to the school in September 2017. The email was intended for nine students, but an IT glitch caused the email to be
sent to 400 additional families, who did not necessarily have a place. In response, the school issued the following
statement: "This error was discovered within minutes and each family was immediately contacted to notify them that
it should be disregarded and to apologise. We take this type of incident very seriously indeed and so a thorough
investigation, overseen by the headmaster Tony Little and led by the tutor for admissions, is being carried out to
find out exactly what went wrong and ensure it cannot happen again. Eton College offers its sincere apologies to
those boys concerned and their families. We deeply regret the confusion and upset this must have caused." In January
2016, the Eton College beagling club was accused by the League Against Cruel Sports of undertaking an illegal hare
hunt. The allegations were accompanied by a video of the Eton Beagles chasing a hare, as 'the hunt staff urge the
beagles on and make no efforts to call the dogs off.' A spokesman representing Eton College released the following
statement: "Eton College takes its legal responsibilities extremely seriously and expects all school activities to
comply with the law. We are investigating this allegation as a matter of urgency and will be co-operating fully with
the relevant authorities." Eton College has links with some private schools in India today, maintained from the days
of the British Raj, such as The Doon School and Mayo College. Eton College is also a member of the G20 Schools Group,
a collection of college preparatory boarding schools from around the world, including Turkey's Robert College, the
United States' Phillips Academy and Phillips Exeter Academy, Australia's Scotch College, Melbourne Grammar School
and Launceston Church Grammar School, Singapore's Raffles Institution, and Switzerland's International School of
Geneva. Eton has recently fostered a relationship with the Roxbury Latin School, a traditional all-boys private
school in Boston, USA. Former Eton headmaster and provost Sir Eric Anderson shares a close friendship with Roxbury
Latin Headmaster emeritus F. Washington Jarvis; Anderson has visited Roxbury Latin on numerous occasions, while Jarvis
briefly taught theology at Eton after retiring from his headmaster post at Roxbury Latin. The headmasters' close
friendship spawned the Hennessy Scholarship, an annual prize established in 2005 and awarded to a graduating RL senior
for a year of study at Eton. Hennessy Scholars generally reside in Wotton house. Besides Prince William and Prince
Harry, members of the extended British Royal Family who have attended Eton include Prince Richard, Duke of Gloucester
and his son Alexander Windsor, Earl of Ulster; Prince Edward, Duke of Kent, his eldest son George Windsor, Earl of
St Andrews and grandson Edward Windsor, Lord Downpatrick and his youngest son Lord Nicholas Windsor; Prince Michael
of Kent and his son Lord Frederick Windsor; James Ogilvy, son of Princess Alexandra and the Right Honourable Angus
Ogilvy, himself an Eton alumnus. Prince William of Gloucester (1942-1972) also attended Eton, as did George Lascelles,
7th Earl of Harewood, son of Princess Mary, Princess Royal. Other notable Old Etonians include scientists Robert
Boyle, John Maynard Smith, J. B. S. Haldane, Stephen Wolfram and the 2012 Nobel Prize in Physiology or Medicine winner,
John Gurdon; Beau Brummell; economists John Maynard Keynes and Richard Layard; Antarctic explorer Lawrence Oates;
politician Alan Clark; entrepreneur, charity organiser and partner of Adele, Simon Konecki; cricket commentator Henry
Blofeld; explorer Sir Ranulph Fiennes; adventurer Bear Grylls; composers Thomas Arne, George Butterworth, Roger Quilter,
Frederick Septimus Kelly, Donald Tovey, Thomas Dunhill, Lord Berners, Victor Hely-Hutchinson, and Peter Warlock (Philip
Heseltine); Hubert Parry, who wrote the song Jerusalem and the coronation anthem I was glad; and musicians Frank
Turner and Humphrey Lyttelton. Notable Old Etonians in the media include the former Political Editor of both ITN
and The Times, Julian Haviland; the current BBC Deputy Political Editor, James Landale, and the BBC Science Editor,
David Shukman; the current President of Conde Nast International and Managing Director of Conde Nast UK, Nicholas
Coleridge; the former ITN newscaster and BBC Panorama presenter, Ludovic Kennedy; current BBC World News and BBC
Rough Justice current affairs presenter David Jessel; former chief ITV and Channel 4 racing commentator John Oaksey;
1950s BBC newsreader and 1960s ITN newscaster Timothy Brinton; 1960s BBC newsreader Corbet Woodall; the former Editor
of The Daily Telegraph, Charles Moore; the former Editor of The Spectator, Ferdinand Mount; and the current Editor
of The Mail on Sunday, Geordie Greig. Actor Dominic West has been unenthusiastic about the career benefits of being
an Old Etonian, saying it "is a stigma that is slightly above 'paedophile' in the media in a gallery of infamy",
but asked whether he would consider sending his own children there, said "Yes, I would. It’s an extraordinary place...
It has the facilities and the excellence of teaching and it will find what you’re good at and nurture it", while
the actor Tom Hiddleston says there are widespread misconceptions about Eton, and that "People think it's just full
of braying toffs... It isn’t true... It's actually one of the most broadminded places I’ve ever been. The reason
it’s a good school is that it encourages people to find the thing they love and to go for it. They champion the talent
of the individual and that’s what’s special about it".
Cork was originally a monastic settlement, reputedly founded by Saint Finbarr in the 6th century. Cork achieved an urban
character at some point between 915 and 922 when Norse (Viking) settlers founded a trading port. It has been proposed
that, like Dublin, Cork was an important trading centre in the global Scandinavian trade network. The ecclesiastical
settlement continued alongside the Viking longphort, with the two developing a type of symbiotic relationship; the
Norsemen providing otherwise unobtainable trade goods for the monastery, and perhaps also military aid. The city's
charter was granted by Prince John, as Lord of Ireland, in 1185. The city was once fully walled, and some wall sections
and gates remain today. For much of the Middle Ages, Cork city was an outpost of Old English culture in the midst
of a predominantly hostile Gaelic countryside and cut off from the English government in the Pale around Dublin.
Neighbouring Gaelic and Hiberno-Norman lords extorted "Black Rent" from the citizens to keep them from attacking
the city. The present extent of the city has exceeded the medieval boundaries of the Barony of Cork City; it now
takes in much of the neighbouring Barony of Cork. Together, these baronies are located between the Barony of Barrymore
to the east, Muskerry East to the west and Kerrycurrihy to the south. The city's municipal government was dominated
by about 12–15 merchant families, whose wealth came from overseas trade with continental Europe — in particular the
export of wool and hides and the import of salt, iron and wine. The medieval population of Cork was about 2,100 people.
It suffered a severe blow in 1349, when almost half the townspeople died of plague as the Black Death arrived in
the town. In 1491, Cork played a part in the English Wars of the Roses when Perkin Warbeck, a pretender to the English
throne, landed in the city and tried to recruit support for a plot to overthrow Henry VII of England. The then mayor
of Cork and several important citizens went with Warbeck to England but when the rebellion collapsed they were all
captured and executed. The title of Mayor of Cork was established by royal charter in 1318, and the title was changed
to Lord Mayor in 1900 following the knighthood of the incumbent Mayor by Queen Victoria on her Royal visit to the
city. The climate of Cork, like the rest of Ireland, is mild and changeable with abundant rainfall and a lack of
temperature extremes. Cork lies in plant hardiness zone 9b. Met Éireann maintains a climatological weather station at Cork Airport, a few kilometres south of the city. The airport is at an altitude of 151 metres (495 ft), and temperatures can often differ by a few degrees between the airport and the city itself. There
are also smaller synoptic weather stations at UCC and Clover Hill. Temperatures below 0 °C (32 °F) or above 25 °C
(77 °F) are rare. Cork Airport records an average of 1,227.9 millimetres (48.3 in) of precipitation annually, most
of which is rain. The airport records an average of 7 days of hail and 11 days of snow or sleet a year; though it
only records lying snow for 2 days of the year. The low altitude of the city, and moderating influences of the harbour,
mean that lying snow very rarely occurs in the city itself. There are on average 204 "rainy" days a year (over 0.2
millimetres (0.0079 in) of rainfall), of which there are 73 days with "heavy rain" (over 5 millimetres (0.20 in)).
Cork is also a generally foggy city, with an average of 97 days of fog a year, most common during mornings and during
winter. Despite this, Cork is also one of Ireland's sunniest cities, with an average of 3.9 hours of sunshine every day and only 67 days with no "recordable sunshine", mostly during and around winter. The Cork
School of Music and the Crawford College of Art and Design provide a throughput of new blood, as do the active theatre
components of several courses at University College Cork (UCC). Highlights include: Corcadorca Theatre Company, of
which Cillian Murphy was a troupe member prior to Hollywood fame; the Institute for Choreography and Dance, a national
contemporary dance resource; the Triskel Arts Centre (capacity c.90), which includes the Triskel
Christchurch independent cinema; dance venue the Firkin Crane (capacity c.240); the Cork Academy of Dramatic Art
(CADA) and Graffiti Theatre Company; and the Cork Jazz Festival, Cork Film Festival, and Live at the Marquee events.
The Everyman Palace Theatre (capacity c.650) and the Granary Theatre (capacity c.150) both play host to dramatic
plays throughout the year. Cork is home to the RTÉ Vanbrugh Quartet, and to many musical acts, including John Spillane,
The Frank And Walters, Sultans of Ping, Simple Kid, Microdisney, Fred, Mick Flannery and the late Rory Gallagher.
Singer songwriter Cathal Coughlan and Sean O'Hagan of The High Llamas also hail from Cork. The opera singers Cara
O'Sullivan, Mary Hegarty, Brendan Collins, and Sam McElroy are also Cork born. Ranging in capacity from 50 to 1,000,
the main music venues in the city are the Cork Opera House (capacity c.1000), Cyprus Avenue, Triskel Christchurch,
the Roundy, the Savoy and Coughlan's. Cork's underground scene is supported by Plugd Records. Cork has been culturally diverse for many years, from Huguenot communities in the 17th century, through to Eastern European communities and smaller numbers from African and Asian nations in the 20th and 21st centuries.
This is reflected in the multi-cultural restaurants and shops, including specialist shops for East-European or Middle-Eastern
food, Chinese and Thai restaurants, French patisseries, Indian buffets, and Middle Eastern kebab houses. Cork saw
some Jewish immigration from Lithuania and Russia in the late 19th century. Jewish citizens such as Gerald Goldberg
(several times Lord Mayor), David Marcus (novelist) and Louis Marcus (documentary maker) played notable roles in
20th century Cork. Today, the Jewish community is relatively small in population, although the city still has a Jewish
quarter and synagogue. Cork also features various Christian churches, as well as a mosque. Some Catholic masses around
the city are said in Polish, Filipino, Lithuanian, Romanian and other languages, in addition to the traditional Latin
and local Irish and English language services. The Cork accent, part of the Southwest dialect of Hiberno-English,
displays various features which set it apart from other accents in Ireland. Patterns of tone and intonation often
rise and fall, with the overall tone tending to be more high-pitched than other Irish accents. English spoken in
Cork has a number of dialect words that are peculiar to the city and environs. As in standard Hiberno-English, some of these words originate from the Irish language, while others came from languages that Cork's inhabitants encountered at home and abroad. The Cork accent displays varying degrees of rhoticity, usually depending on the social class
of the speaker. The city's FM radio band features RTÉ Radio 1, RTÉ 2fm, RTÉ lyric fm, RTÉ Raidió na Gaeltachta, Today
FM, 4fm, Newstalk and the religious station Spirit Radio. There are also local stations such as Cork's 96FM, Cork's
Red FM, C103, CUH 102.0FM, UCC 98.3FM (formerly Cork Campus Radio 97.4fm) and Christian radio station Life 93.1FM.
Cork also has a temporary licensed city-wide community station 'Cork FM Community Radio' on 100.5FM, which is currently
on-air on Saturdays and Sundays only. Cork has also been home to pirate radio stations, including South Coast Radio
and ERI in the 1980s. Today some small pirate stations remain. A number of neighbouring counties' radio stations can be heard in parts of Cork City, including Radio Kerry on 97.0 and WLR FM on 95.1. Cork is home to one of Ireland's
main national newspapers, the Irish Examiner (formerly the Cork Examiner). It also prints the Evening Echo, which
for decades has been connected to the Echo Boys, who were poor and often homeless children who sold the newspaper.
Today, the shouts of the vendors selling the Echo can still be heard in various parts of the city centre. One of
the biggest free newspapers in the city is the Cork Independent. The city's University publishes the UCC Express
and Motley magazine. Cork features architecturally notable buildings originating from the Medieval to Modern periods.
The only notable remnant of the Medieval era is the Red Abbey. There are two cathedrals in the city; St. Mary's Cathedral
and Saint Fin Barre's Cathedral. St Mary's Cathedral, often referred to as the North Cathedral, is the Catholic cathedral
of the city and was begun in 1808. Its distinctive tower was added in the 1860s. St Fin Barre's Cathedral serves
the Protestant faith and is possibly the more famous of the two. It is built on the foundations of an earlier cathedral.
Work began in 1862 and ended in 1879 under the direction of architect William Burges. St. Patrick's Street, the main
street of the city which was remodelled in the mid-2000s, is known for the architecture of the buildings along its
pedestrian-friendly route and is the main shopping thoroughfare. The reason for its curved shape is that it originally
was a channel of the River Lee that was built over on arches. The General Post Office, with its limestone façade,
is on Oliver Plunkett Street, on the site of the Theatre Royal which was built in 1760 and burned down in 1840. The
English circus proprietor Pablo Fanque rebuilt an amphitheatre on the spot in 1850, which was subsequently transformed
into a theatre and then into the present General Post Office in 1877. The Grand Parade is a tree-lined avenue, home
to offices, shops and financial institutions. The old financial centre is the South Mall, with several banks whose interiors date from the 19th century, such as the Allied Irish Bank's, which was once an exchange. Many of the city's
buildings are in the Georgian style, although there are a number of examples of modern landmark structures, such
as County Hall tower, which was at one time the tallest building in Ireland, until superseded by another Cork
City building: The Elysian. Across the river from County Hall is Ireland's longest building; built in Victorian times,
Our Lady's Psychiatric Hospital has now been renovated and converted into a residential housing complex called Atkins
Hall, after its architect William Atkins. Other notable places include Elizabeth Fort, the Cork Opera House, Christ
Church on South Main Street (now the Triskel Arts Centre and original site of early Hiberno-Norse church), St Mary's
Dominican Church on Popes Quay and Fitzgerald's Park to the west of the city, which contains the Cork Public Museum.
Other popular tourist attractions include the grounds of University College Cork, through which the River Lee flows,
the Women's Gaol at Sundays Well (now a heritage centre) and the English Market. This covered market traces its origins
back to 1610, and the present building dates from 1786. While local government in Ireland has limited powers in comparison
with other countries, the council has responsibility for planning, roads, sanitation, libraries, street lighting,
parks, and a number of other important functions. Cork City Council has 31 elected members representing six electoral
wards. The members are affiliated to the following political parties: Fine Gael (5 members), Fianna Fáil (10 members),
Sinn Féin (8 members), Anti-Austerity Alliance (3 members), Workers' Party (1 member), Independents (4 members).
Certain councillors are co-opted to represent the city at the South-West Regional Authority. A new Lord Mayor of
Cork is chosen in a vote by the elected members of the council under a D'Hondt system count. The retail trade in
Cork city includes a mix of both modern, state of the art shopping centres and family owned local shops. Department
stores cater for all budgets, with expensive boutiques at one end of the market and high-street stores also available.
Shopping centres can be found in many of Cork's suburbs, including Blackpool, Ballincollig, Douglas, Ballyvolane,
Wilton and Mahon Point. Others are available in the city centre. These include the recently completed development of two large malls: the Cornmarket Centre on Cornmarket Street, and the new retail street called "Opera Lane" off
St. Patrick's Street/Academy Street. The Grand Parade scheme, on the site of the former Capitol Cineplex, was planning-approved
for 60,000 square feet (5,600 m2) of retail space, with work commencing in 2016. Cork's main shopping street is St.
Patrick's Street, the most expensive street in the country per square metre after Dublin's Grafton Street. As of 2015, this area has been impacted by the post-2008 downturn, with many retail spaces available to let. Other shopping areas in the city centre include Oliver Plunkett St. and Grand Parade. Cork is also home to
some of the country's leading department stores with the foundations of shops such as Dunnes Stores and the former
Roches Stores being laid in the city. Outside the city centre is Mahon Point Shopping Centre. Cork City is at the
heart of industry in the south of Ireland. Its main area of industry is pharmaceuticals, with Pfizer Inc. and Swiss
company Novartis being big employers in the region. The most famous product of the Cork pharmaceutical industry is
Viagra. Cork is also the European headquarters of Apple Inc. where over 3,000 staff are involved in manufacturing,
R&D and customer support. Logitech and EMC Corporation are also important IT employers in the area. Three hospitals
are also among the top ten employers in the city (see table below). The city is also home to the Heineken Brewery
that brews Murphy's Irish Stout and the nearby Beamish and Crawford brewery (taken over by Heineken in 2008) which
have been in the city for generations. 45% of the world's Tic Tac sweets are manufactured at the city's Ferrero factory.
For many years, Cork was the home to Ford Motor Company, which manufactured cars in the docklands area before the
plant was closed in 1984. Henry Ford's grandfather was from West Cork, which was one of the main reasons for opening
up the manufacturing facility in Cork. But technology has replaced the old manufacturing businesses of the 1970s
and 1980s, with people now working in the many I.T. centres of the city – such as Amazon.com, the online retailer,
which has set up in Cork Airport Business Park. Public bus services within the city are provided by the national
bus operator Bus Éireann. City routes are numbered from 201 through to 219 and connect the city centre to the principal
suburbs, colleges, shopping centres and places of interest. Two of these bus routes provide orbital services across
the Northern and Southern districts of the city respectively. Buses to the outer suburbs, such as Ballincollig, Glanmire,
Midleton and Carrigaline are provided from the city's bus terminal at Parnell Place in the city centre. Suburban
services also include shuttles to Cork Airport, and a park and ride facility in the south suburbs only. The Cork
area has seen improvements in road infrastructure in recent years. For example, the Cork South Link dual carriageway
was built in the early 1980s, to link the Kinsale Road roundabout with the city centre. Shortly afterwards, the first
sections of the South Ring dual carriageway were opened. Work continued through the 1990s on extending the N25 South
Ring Road, with the opening of the Jack Lynch Tunnel under the River Lee being a significant addition. The Kinsale
Road flyover opened in August 2006 to remove a bottleneck for traffic heading to Cork Airport or Killarney. Other
projects completed at this time include the N20 Blackpool bypass and the N20 Cork to Mallow road projects. The N22
Ballincollig dual carriageway bypass, which links to the Western end of the Cork Southern Ring road was opened in
September 2004. City Centre road improvements include the Patrick Street project, which reconstructed the street
with a pedestrian focus. The M8 motorway links Cork with Dublin. Cork was one of the most rail-oriented cities in
Ireland, featuring eight stations at various times. The main route, still much the same today, is from Dublin Heuston.
Originally terminating on the city's outskirts at Blackpool, the route now reaches the city centre terminus of Kent
Station via Glanmire tunnel. Now a through station, the line through Kent connects the towns of Cobh and Midleton
east of the city. This also connected to the seaside town of Youghal, until the 1980s. Within the
city there have been two tram networks in operation. A proposal to develop a horse-drawn tram (linking the city's
railway termini) was made by American George Francis Train in the 1860s, and implemented in 1872 by the Cork Tramway
Company. However, the company ceased trading in 1875 after Cork Corporation refused permission to extend the line,
mainly because of objections from cab operators to the type of tracks which – although they were laid to the Irish
national railway gauge of 5 ft 3 in – protruded from the road surface. The Cork Suburban Rail system
also departs from Kent Station and provides connections to parts of Metropolitan Cork. Stations include Little Island,
Mallow, Midleton, Fota and Cobh. In July 2009 the Glounthaune to Midleton line was reopened, with new stations at
Carrigtwohill and Midleton (with future stations planned for Kilbarry, Monard, Carrigtwohill West and Blarney). Little
Island Railway Station serves Cork's Eastern Suburbs, while Kilbarry Railway Station is planned to serve the Northern
Suburbs. The National Maritime College of Ireland is also located in Cork and is the only college in Ireland in which
Nautical Studies and Marine Engineering can be undertaken. CIT also incorporates the Cork School of Music and Crawford
College of Art and Design as constituent schools. The Cork College of Commerce is the largest post-Leaving Certificate
college in Ireland and is also the biggest provider of Vocational Preparation and Training courses in the country. Other third-level institutions include Griffith College Cork, a private institution, and various other colleges.
Research institutes linked to the third level colleges in the city support the research and innovation capacity of
the city and region. Examples include the Tyndall National Institute (ICT hardware research), IMERC (Marine Energy),
Environmental Research Institute, NIMBUS (Network Embedded Systems); and CREATE (Advanced Therapeutic Engineering).
UCC and CIT also have start-up company incubation centres. In UCC, the IGNITE Graduate Business Innovation Centre
aims to foster and support entrepreneurship. In CIT, The Rubicon Centre is a business innovation hub that is home
to 57 knowledge based start-up companies. Hurling and football are the most popular spectator sports in the city.
Hurling is strongly identified with the city and county, with Cork winning 30 All-Ireland Championships. Gaelic football
is also popular, and Cork has won 7 All-Ireland Senior Football Championship titles. There are many Gaelic Athletic
Association clubs in Cork City, including Blackrock National Hurling Club, St. Finbarr's, Glen Rovers, Na Piarsaigh
and Nemo Rangers. The main public venues are Páirc Uí Chaoimh and Páirc Uí Rinn (named after the noted Glen Rovers
player Christy Ring). Camogie (hurling for women) and women's Gaelic football are increasing in popularity. There
are a variety of watersports in Cork, including rowing and sailing. There are five rowing clubs training on the river
Lee, including Shandon BC, UCC RC, Pres RC, Lee RC, and Cork BC. Naomhóga Chorcaí is a rowing club whose members
row traditional naomhóga on the Lee in occasional competitions. The "Ocean to City" race has been held annually since
2005, and attracts teams and boats from local and visiting clubs who row the 24 kilometres (15 mi) from Crosshaven
into Cork city centre. The decision to move the National Rowing Center to Inniscarra has boosted numbers involved
in the sport. Cork's maritime sailing heritage is maintained through its sailing clubs. The Royal
Cork Yacht Club located in Crosshaven (outside the city) is the world's oldest yacht club, and "Cork Week" is a notable
sailing event. The most notable cricket club in Cork is Cork County Cricket Club, which was formed in 1874. Although
located within the Munster jurisdiction, the club plays in the Leinster Senior League. The club plays at the Mardyke,
a ground which has hosted three first-class matches in 1947, 1961 and 1973. All three involved Ireland playing Scotland.
The Cork Cricket Academy operates within the city, with the stated aim of introducing the sport to schools in the
city and county. Cork's other main cricket club, Harlequins Cricket Club, play close to Cork Airport. The city is
also the home of road bowling, which is played in the north-side and south-west suburbs. There are also boxing and
martial arts clubs (including Brazilian jiu-jitsu, Karate, Muay Thai and Taekwondo) within the city. Cork Racing,
a motorsport team based in Cork, has raced in the Irish Formula Ford Championship since 2005. Cork also hosts one
of Ireland's most successful Australian Rules Football teams, the Leeside Lions, who have won the Australian Rules
Football League of Ireland Premiership four times (in 2002, 2003, 2005 and 2007). There are also inline roller sports,
such as hockey and figure skating, which transfer to the ice over the winter season.
Galicia (English i/ɡəˈlɪsiə/, /ɡəˈlɪʃə/; Galician: [ɡaˈliθja] ( listen), [ħaˈliθja], or [ħaˈlisja]; Spanish: [ɡaˈliθja];
Galician and Portuguese: Galiza, [ɡaˈliθa] ( listen), [ħaˈliθa] or [ħaˈlisa]) is an autonomous community of Spain
and historic nationality under Spanish law. Located in the North-West of the Iberian Peninsula, it comprises the
provinces of A Coruña, Lugo, Ourense and Pontevedra, being bordered by Portugal to the south, the Spanish autonomous
communities of Castile and León and Asturias to the east, and the Atlantic Ocean to the west and the north. It had
a population of 2,765,940 in 2013 and has a total area of 29,574 km2 (11,419 sq mi). Galicia has over 1,660 km (1,030
mi) of coastline, including its offshore islands and islets, among them Cíes Islands, Ons, Sálvora, Cortegada, and—the
largest and most populated—A Illa de Arousa. The area now called Galicia was first inhabited by humans during the
Middle Paleolithic period, and it takes its name from the Gallaeci, the Celtic peoples living north of the Douro
river during the last millennium BC, in a region largely coincidental with that of the Iron Age local Castro culture.
Galicia was incorporated into the Roman Empire at the end of the Cantabrian Wars in 19 BC, being turned into a Roman
province in the 3rd century AD. In 410, the Germanic Suebi established a kingdom with its capital in Braga (Portugal)
which was incorporated into that of the Visigoths in 585. In 711, the Arabs invaded the Iberian Peninsula, taking
the Visigoth kingdom, but by 740 Galicia had been incorporated into the Christian kingdom of Asturias. During the
Middle Ages, the kingdom of Galicia was occasionally ruled by its own kings, but most of the time it was joined to the kingdom of León and later to that of Castile, while maintaining its own legal and customary practices and
personality. From the 13th century on, the kings of Castile, as kings of Galicia, appointed an Adiantado-mór, whose
attributions passed to the Governor and Captain General of the Kingdom of Galiza from the last years of the 15th
century. The Governor also presided over the Real Audiencia do Reino de Galicia, a royal tribunal and government body.
From the 16th century, the representation and voice of the kingdom was held by an assembly of deputies and representatives
of the cities of the kingdom, the Cortes or Junta of the Kingdom of Galicia, an institution which was forcibly discontinued
in 1833 when the kingdom was divided into four administrative provinces with no legal mutual links. During the 19th
and 20th centuries, demand grew for self-government and for the recognition of the personality of Galicia, a demand
which led to the frustrated Statute of Autonomy of 1936, and to the Statute of Autonomy of 1981, currently in force.
The interior of Galicia is characterized by its hilly landscape, although mountain ranges rise to 2,000 m (6,600
ft) in the east and south. The coastal areas are mostly an alternating series of rías (submerged valleys where the
sea penetrates tens of kilometres inland) and cliffs. The climate of Galicia is temperate and rainy, but it is also
markedly drier in the summer, being usually classified as Oceanic in the west and north, and Mediterranean in the
southeast. Its topographic and climatic conditions have made animal husbandry and farming the primary source of Galicia's
wealth for most of its history. With the exception of shipbuilding and food processing, Galicia was largely a semi-subsistence
farming and fishing economy and did not experience significant industrialization until after the mid-20th century.
In 2012, the gross domestic product at purchasing power parity was €56,000 million, with a nominal GDP per capita
of €20,700. The population is largely concentrated in two coastal areas: from Ferrol to A Coruña in the northwest
and from Pontevedra to Vigo in the southwest. To a lesser extent, there are smaller populations around the interior
cities of Lugo, Ourense and Santiago de Compostela. The political capital is Santiago de Compostela, in the province
of A Coruña. Vigo, in the province of Pontevedra, is the most populous municipality with 294,997 (2014), while A
Coruña is the most populous city with 215,227 (2014). The name evolved during the Middle Ages from Gallaecia, sometimes
written Galletia, to Gallicia. In the 13th century, with the written emergence of the Galician language, Galiza became
the most usual written form of the name of the country, being replaced during the 15th and 16th centuries by the
current form, Galicia, which coincides with the Castilian Spanish name. The historical denomination Galiza became
popular again during the end of the 19th and the first three-quarters of the 20th century, being still used with
some frequency today, although not by the Xunta de Galicia, the local devolved government. The Royal Galician Academy,
the institution responsible for regulating the Galician language, whilst recognizing it as a legitimate current denomination,
has stated that the only official name of the country is Galicia. Although the etymology of the name has been studied
since the 7th century by authors like Isidore of Seville, who wrote that "Galicians are called so, because of their fair skin, like the Gauls", relating the name to the Greek word for milk, scholars currently derive the name of the ancient Callaeci either from Proto-Indo-European *kal-n-eH2 'hill', through a local relational suffix -aik-, meaning 'the hill (people)'; or from Proto-Celtic *kallī- 'forest', meaning 'the forest (people)'. In any case,
Galicia, being per se a derivation of the ethnic name Kallaikói, would mean 'the land of the Galicians'. The oldest
attestation of human presence in Galicia has been found in the Eirós Cave, in the municipality of Triacastela, which
has preserved animal remains and Neanderthal stone objects from the Middle Paleolithic. The earliest culture to have
left significant architectural traces is the Megalithic culture which expanded along the western European coasts
during the Neolithic and Calcolithic eras. Thousands of Megalithic tumuli are distributed throughout the country,
but mostly along the coastal areas. Within each tumulus is a stone burial chamber known locally as anta (dolmen),
frequently preceded by a corridor. Galicia was later fully affected by the Bell Beaker culture. Its rich mineral deposits, tin and gold, led to the development of Bronze Age metallurgy and to the commerce of bronze and gold items all along the Atlantic façade of Western Europe, where a common elite culture evolved during the Atlantic
Bronze Age. The Castro culture ('Culture of the Castles') developed during the Iron Age, and flourished during the
second half of the first millennium BC. It is usually considered a local evolution of the Atlantic Bronze Age, with
later developments and influences and overlapping into the Roman era. Geographically, it corresponds to the people the Romans called Gallaeci, who comprised a large number of nations or tribes, among them the Artabri, Bracari,
Limici, Celtici, Albiones and Lemavi. They were capable fighters: Strabo described them as the most difficult foes
the Romans encountered in conquering Lusitania, while Appian mentions their warlike spirit, noting that the women
bore their weapons side by side with their men, frequently preferring death to captivity. According to Pomponius
Mela all the inhabitants of the coastal areas were Celtic people. Gallaeci lived in castros. These were usually annular
forts, with one or more concentric earthen or stone walls, with a trench in front of each one. They were frequently located on hills, or on seashore cliffs and peninsulas. Some well-known castros can be found on the seashore at
Fazouro, Santa Tegra, Baroña and O Neixón, and inland at San Cibrao de Lás, Borneiro, Castromao, and Viladonga. Some
other distinctive features, such as temples, baths, reservoirs, warrior statues and decorative carvings have been
found associated with this culture, together with rich gold and metalworking traditions. Later the Muslims invaded
Spain (711), but the Arabs and Moors never managed to have any real control over Galicia, which was later incorporated
into the expanding Christian Kingdom of Asturias, usually known as Gallaecia or Galicia (Yillīqiya and Galīsiya)
by Muslim Chroniclers, as well as by many European contemporaries. This era consolidated Galicia as a Christian society
which spoke a Romance language. During the next century Galician noblemen took northern Portugal, conquering Coimbra in 871 and thus freeing what was considered the southernmost city of ancient Galicia. The Roman legions first entered
the area under Decimus Junius Brutus in 137–136 BC, but the country was only incorporated into the Roman Empire by
the time of Augustus (29 BC – 19 BC). The Romans were interested in Galicia mainly for its mineral resources, most
notably gold. Under Roman rule, most Galician hillforts began to be – sometimes forcibly – abandoned, and Gallaeci
served frequently in the Roman army as auxiliary troops. The Romans brought new technologies, new travel routes, new forms of organizing property, and a new language: Latin. The Roman Empire established its control over Galicia through camps (castra) such as Aquis Querquennis, the Ciadella camp and Lucus Augusti (Lugo), roads (viae) and monuments such as the lighthouse known as the Tower of Hercules in Corunna; but the region's remoteness and its diminishing importance from the 2nd century AD, when the gold mines ceased to be productive, led to a lesser degree of Romanization. In the 3rd century
it was made a province, under the name Gallaecia, which included also northern Portugal, Asturias, and a large section
of what today is known as Castile and León. In the early 5th century, the deep crisis suffered by the Roman Empire
allowed different tribes of Central Europe (the Suebi, Vandals and Alani) to cross the Rhine and penetrate into Roman territory on 31 December 406. Their advance towards the Iberian Peninsula forced the Roman authorities to establish a treaty (foedus) by which the Suebi would settle peacefully and govern Galicia as imperial allies. So, from 409 Galicia was
taken by the Suebi, who formed the first medieval kingdom to be created in Europe, in 411, even before the fall of the Roman Empire, and also the first Germanic kingdom to mint coinage in Roman lands. During this period a Briton colony
and bishopric (see Mailoc) was established in Northern Galicia (Britonia), probably as foederati and allies of the
Suebi. In 585, the Visigothic King Leovigild invaded the Suebic kingdom of Galicia and defeated it, bringing it under
Visigoth control. In the 9th century, the rise of the cult of the Apostle James in Santiago de Compostela gave Galicia
a particular symbolic importance among Christians, an importance it would hold throughout the Reconquista. As the
Middle Ages went on, Santiago became a major pilgrim destination and the Way of Saint James (Camiño de Santiago)
a major pilgrim road, a route for the propagation of Romanesque art and the words and music of the troubadours. During the 10th and 11th centuries, a period during which the Galician nobility became related to the royal family, Galicia
was at times headed by its own native kings, while Vikings (locally known as Leodemanes or Lordomanes) occasionally
raided the coasts. The Towers of Catoira (Pontevedra) were built as a system of fortifications to prevent and stop
the Viking raids on Santiago de Compostela. In 1063, Ferdinand I of Castile divided his realm among his sons, and
the Kingdom of Galicia was granted to Garcia II of Galicia. In 1072, it was forcibly annexed by Garcia's brother
Alfonso VI of León; from that time Galicia was united with the Kingdom of León under the same monarchs. In the 13th
century Alfonso X of Castile standardized the Castilian language and made it the language of court and government.
Nevertheless, in his Kingdom of Galicia the Galician language was the only language spoken, and the one most used in government and legal affairs, as well as in literature. On the other hand, the lack of an effective royal justice system
in the Kingdom led to the social conflict known as the Guerras Irmandiñas ('Wars of the brotherhoods'), when leagues
of peasants and burghers, with the support of a number of knights, noblemen, and under legal protection offered by
the remote king, toppled many of the castles of the Kingdom and briefly drove the noblemen into Portugal and Castile.
Soon after, in the late 15th century, in the dynastic conflict between Isabella I of Castile and Joanna La Beltraneja,
part of the Galician aristocracy supported Joanna. After Isabella's victory, she initiated an administrative and
political reform which the chronicler Jerónimo Zurita described as the "doma del Reino de Galicia": 'It was then that the taming of Galicia began, because not just the local lords and knights but all the people of that nation were very bold and warlike, the ones against the others'. These reforms, while establishing a local government and tribunal (the Real Audiencia del Reino de Galicia) and bringing the noblemen into submission, also brought most Galician
monasteries and institutions under Castilian control, in what has been criticized as a process of centralisation.
At the same time the kings began to call the Xunta or Cortes of the Kingdom of Galicia, an assembly of deputies or
representatives of the cities of the Kingdom, to ask for monetary and military contributions. This assembly soon
developed into the voice and legal representation of the Kingdom, and the depositary of its will and laws. The modern
period of the kingdom of Galicia began with the murder or defeat of some of the most powerful Galician lords, such
as Pedro Álvarez de Sotomayor, called Pedro Madruga, and Rodrigo Henriquez Osorio, at the hands of the Castilian
armies sent to Galicia between the years 1480 and 1486. Isabella I of Castile, considered a usurper by many Galician
nobles, eradicated all armed resistance and definitively established the royal power of the Castilian monarchy. Fearing
a general revolt, the monarchs ordered the banishing of the rest of the great lords like Pedro de Bolaño, Diego de
Andrade or Lope Sánchez de Moscoso, among others. The establishment of the Santa Hermandad in 1480, and of the Real
Audiencia del Reino de Galicia in 1500—a tribunal and executive body directed by the Governor-Captain General as
a direct representative of the King—implied initially the submission of the Kingdom to the Crown, after a century
of unrest and fiscal insubordination. As a result, from 1480 to 1520 the Kingdom of Galicia contributed more than 10% of the total earnings of the Crown of Castile, including the Americas, well above its economic weight. As in the rest of Spain, the 16th century was marked by population growth up to 1580, when the simultaneous wars with the
Netherlands, France and England hampered Galicia's Atlantic commerce, which consisted mostly of the export of sardines, wood, and some cattle and wine. From that moment Galicia, which participated only to a minor extent in the American
expansion of the Spanish Empire, found itself at the center of the Atlantic wars fought by Spain against the French
and the Protestant powers of England and the Netherlands, whose privateers attacked the coastal areas, but major
assaults were not common as the coastline was difficult and the harbors easily defended. The most famous assaults
were upon the city of Vigo by Sir Francis Drake in 1585 and 1589, and the siege of A Coruña in 1589 by the English
Armada. Galicia also suffered occasional slave raids by Barbary pirates, but not as frequently as the Mediterranean
coastal areas. The most famous Barbary attack was the bloody sack of the town of Cangas in 1617. At the time, the
king's petitions for money and troops became more frequent, due to the human and economic exhaustion of Castile;
the Junta of the Kingdom of Galicia (the local Cortes or representative assembly) was initially receptive to these
petitions, raising large sums, accepting the conscription of the men of the kingdom, and even commissioning a new
naval squadron which was sustained with the incomes of the Kingdom. After the outbreak of the wars with Portugal and Catalonia, the Junta changed its attitude, this time owing to the exhaustion of Galicia, now involved not just in naval or overseas operations but also in a draining war with the Portuguese, a war which produced thousands of casualties and refugees and severely disrupted the local economy and commerce. So, in the second half of the 17th century
the Junta frequently denied or considerably reduced the initial petitions of the monarch, and though the tension
didn't rise to the levels experienced in Portugal or Catalonia, there were frequent urban mutinies and some voices
even asked for the secession of the Kingdom of Galicia. In the early 20th century came another turn toward nationalist
politics with Solidaridad Gallega (1907–1912) modeled on Solidaritat Catalana in Catalonia. Solidaridad Gallega failed,
but in 1916 Irmandades da Fala (Brotherhood of the Language) developed first as a cultural association but soon as
a full-blown nationalist movement. Vicente Risco and Ramón Otero Pedrayo were outstanding cultural figures of this
movement, the magazine Nós ('Us'), founded in 1920, its most notable cultural institution, and Lois Peña Novo its outstanding political figure. Galicia was spared the worst of the fighting in the Spanish Civil War: it was one of the areas where the initial
coup attempt at the outset of the war was successful, and it remained in Nationalist (Franco's army's) hands throughout
the war. While there were no pitched battles, there was repression and death: all political parties were abolished,
as were all labor unions and Galician nationalist organizations such as the Seminario de Estudos Galegos. Galicia's statute
of autonomy was annulled (as were those of Catalonia and the Basque provinces once those were conquered). According
to Carlos Fernández Santander, at least 4,200 people were killed either extrajudicially or after summary trials,
among them republicans, communists, Galician nationalists, socialists and anarchists. Victims included the civil
governors of all four Galician provinces; Juana Capdevielle, the wife of the governor of A Coruña; mayors such as
Ánxel Casal of Santiago de Compostela, of the Partido Galeguista; prominent socialists such as Jaime Quintanilla
in Ferrol and Emilio Martínez Garrido in Vigo; Popular Front deputies Antonio Bilbatúa, José Miñones, Díaz Villamil,
Ignacio Seoane, and former deputy Heraclio Botana; soldiers who had not joined the rebellion, such as Generals Rogelio
Caridad Pita and Enrique Salcedo Molinuevo and Admiral Antonio Azarola; and the founders of the PG, Alexandre Bóveda
and Víctor Casas, as well as other professionals sympathetic to republicans and nationalists, such as the journalist Manuel Lustres Rivas and the physician Luis Poza Pastrana. Many others were forced to escape into exile, or were victims of other reprisals
and removed from their jobs and positions. General Francisco Franco — himself a Galician from Ferrol — ruled as dictator
from the civil war until his death in 1975. Franco's centralizing regime suppressed any official use of the Galician
language, including the use of Galician names for newborns, although its everyday oral use was not forbidden. Among
the attempts at resistance were small leftist guerrilla groups such as those led by José Castro Veiga ("El Piloto")
and Benigno Andrade ("Foucellas"), both of whom were ultimately captured and executed. In the 1960s, ministers such
as Manuel Fraga Iribarne introduced some reforms allowing technocrats affiliated with Opus Dei to modernize administration
in a way that facilitated capitalist economic development. However, for decades Galicia was largely confined to the
role of a supplier of raw materials and energy to the rest of Spain, causing environmental havoc and leading to a
wave of migration to Venezuela and to various parts of Europe. Fenosa, the monopolistic supplier of electricity,
built hydroelectric dams, flooding many Galician river valleys. As part of the transition to democracy upon the death
of Franco in 1975, Galicia regained its status as an autonomous region within Spain with the Statute of Autonomy
of 1981, which begins, "Galicia, a historical nationality, is constituted as an Autonomous Community in order to accede to its self-government, in agreement with the Spanish Constitution and with the present Statute (...)". Varying degrees
of nationalist or independentist sentiment are evident at the political level. The Bloque Nacionalista Galego, or BNG, is a conglomerate of left-wing parties and individuals that claims for Galicia the political status of a nation. From
1990 to 2005, Manuel Fraga, a former minister and ambassador under the Franco dictatorship, presided over the Galician autonomous government, the Xunta de Galicia. Fraga had been associated with the Partido Popular ('People's Party', Spain's main national conservative party) since its founding. In 2002, when the oil tanker Prestige sank and covered the Galician coast
in oil, Fraga was accused by the grassroots movement Nunca Mais ("Never again") of having been unwilling to react.
In the 2005 Galician elections, the 'People's Party' lost its absolute majority, though remaining (barely) the largest
party in the parliament, with 43% of the total votes. As a result, power passed to a coalition of the Partido dos
Socialistas de Galicia (PSdeG) ('Galician Socialists' Party'), a federal sister-party of Spain's main social-democratic
party, the Partido Socialista Obrero Español (PSOE, 'Spanish Socialist Workers Party') and the nationalist Bloque
Nacionalista Galego (BNG). As the senior partner in the new coalition, the PSdeG nominated its leader, Emilio Perez
Touriño, to serve as Galicia's new president, with Anxo Quintana, the leader of BNG, as its vice president. Galicia
has a surface area of 29,574 square kilometres (11,419 sq mi). Its northernmost point, at 43°47′N, is Estaca de Bares
(also the northernmost point of Spain); its southernmost, at 41°49′N, is on the Portuguese border in the Baixa Limia-Serra
do Xurés Natural Park. The easternmost longitude, at 6°42′W, is on the border between the province of Ourense and the Castilian-Leonese province of Zamora; its westernmost, at 9°18′W, is reached in two places: the A Nave Cape in Fisterra
(also known as Finisterre), and Cape Touriñán, both in the province of A Coruña. Topographically, a remarkable feature
of Galicia is the presence of many firth-like inlets along the coast, estuaries that were drowned by rising sea
levels after the ice age. These are called rías and are divided into the smaller Rías Altas ("High Rías"), and the
larger Rías Baixas ("Low Rías"). The Rías Altas include Ribadeo, Foz, Viveiro, Barqueiro, Ortigueira, Cedeira, Ferrol,
Betanzos, A Coruña, Corme e Laxe and Camariñas. The Rías Baixas, found south of Fisterra, include Corcubión, Muros
e Noia, Arousa, Pontevedra and Vigo. The Rías Altas can sometimes refer only to those east of Estaca de Bares, with
the others being called Rías Medias ("Intermediate Rías"). All along the Galician coast are various archipelagos
near the mouths of the rías. These archipelagos provide protected deepwater harbors and also provide habitat for
seagoing birds. A 2007 inventory estimates that the Galician coast has 316 archipelagos, islets, and freestanding
rocks. Among the most important of these are the archipelagos of Cíes, Ons, and Sálvora. Together with Cortegada
Island, these make up the Atlantic Islands of Galicia National Park. Other significant islands are Islas Malveiras,
Islas Sisargas, and, the largest and most populous, Arousa Island. Galicia is quite mountainous, a fact which has contributed to the isolation of its rural areas, hampering communications, most notably inland. The
main mountain range is the Macizo Galaico (Serra do Eixe, Serra da Lastra, Serra do Courel), also known as Macizo
Galaico-Leonés, located in the eastern parts, bordering with Castile and León. Noteworthy mountain ranges are O Xistral
(northern Lugo), the Serra dos Ancares (on the border with León and Asturias), O Courel (on the border with León),
O Eixe (the border between Ourense and Zamora), Serra de Queixa (in the center of Ourense province), O Faro (the
border between Lugo and Pontevedra), Cova da Serpe (border of Lugo and A Coruña), Montemaior (A Coruña), Montes do
Testeiro, Serra do Suído, and Faro de Avión (between Pontevedra and Ourense); and, to the south, A Peneda, O Xurés
and O Larouco, all on the border of Ourense and Portugal. Galicia is poetically known as the "country of the thousand
rivers" ("o país dos mil ríos"). The largest and most important of these rivers is the Minho, known as O Pai Miño
(Father Minho), 307.5 km (191.1 mi) long and discharging 419 m3 (548 cu yd) per second, with its tributary the Sil, which has carved a spectacular canyon. Most of the inland rivers are tributaries of this fluvial system, which drains some 17,027 km2 (6,574 sq mi). Other rivers, such as the Lérez, run directly into the Atlantic Ocean or the Cantabrian Sea, most of them with short courses. Only the Navia, Ulla, Tambre, and Limia have courses longer than 100 km (62
mi). Deforestation and forest fires are a problem in many areas, as is the continual spread of the eucalyptus tree,
a species imported from Australia, actively promoted by the paper industry since the mid-20th century. Galicia is
one of the more forested areas of Spain, but the majority of Galicia's plantations, usually growing eucalyptus or
pine, lack any formal management. Massive plantation of eucalyptus, especially Eucalyptus globulus, began in the Francisco Franco era, largely on behalf of the paper company Empresa Nacional de Celulosas de España (ENCE) in Pontevedra,
which wanted it for its pulp. Wood products figure significantly in Galicia's economy. Apart from tree plantations, Galicia is also notable for the extensive area occupied by meadows used for animal husbandry, especially cattle raising, an important activity. Hydroelectric development on most rivers has been a serious concern for local conservationists
during the last decades. The animals most often thought of as being "typical" of Galicia are the livestock raised
there. The Galician horse is native to the region, as is the Galician Blond cow and the domestic fowl known as the
galiña de Mos. The latter is an endangered species, although it is showing signs of a comeback since 2001. Galicia's
woodlands and mountains are home to rabbits, hares, wild boars, and roe deer, all of which are popular with hunters.
Several important bird migration routes pass through Galicia, and some of the community's relatively few environmentally
protected areas are Special Protection Areas (such as on the Ría de Ribadeo) for these birds. On the domestic side, Galicia has been described by the author Manuel Rivas as the "land of one million cows". Galician Blond and
Holstein cattle coexist on meadows and farms. Being located on the Atlantic coastline, Galicia has a very mild climate for its latitude, and the marine influence affects most of the region to varying degrees. In comparison with similar latitudes on the other side of the Atlantic, winters are exceptionally mild, with consistently heavy rainfall. Snow is rare, as temperatures seldom drop below freezing. The warmest coastal station, Pontevedra, has a yearly mean temperature of 14.8 °C (58.6 °F); Ourense, located somewhat inland, is only slightly warmer at 14.9 °C (58.8 °F). Owing to its exposed north-westerly location, the climate is still very cool by Spanish standards. In coastal areas summers are tempered, averaging around 25 °C (77 °F) in Vigo. Temperatures are cooler still in A Coruña, with a subdued 22.8 °C (73.0 °F) normal. Temperatures do, however, soar in inland areas such as Ourense, where days above 30 °C (86 °F) are common. The lands of Galicia are ascribed to two different areas in the Köppen climate
classification: a south area (roughly, the province of Ourense and Pontevedra) with tendencies to have some summer
drought, classified as a warm-summer Mediterranean climate (Csb), with mild temperatures and rainfall usual throughout
the year; and the western and northern coastal regions, the provinces of Lugo and A Coruña, which are characterized
by their Oceanic climate (Cfb), with a more uniform precipitation distribution along the year, and milder summers.
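The Csb/Cfb split described here comes down to a simple precipitation rule, which can be sketched in code. The thresholds below (driest summer month under 40 mm and under one-third of the wettest winter month for a dry-summer "s" climate; warmest month below 22 °C with at least four months at or above 10 °C for a warm-summer "b" climate) follow the commonly used Köppen-Geiger criteria, and the monthly figures are hypothetical illustrative values in the spirit of the text, not measured station data.

```python
# Minimal sketch of the Köppen Csb-vs-Cfb distinction (Northern Hemisphere).
# Thresholds follow common Köppen-Geiger criteria; data below are illustrative.

def koppen_c_subtype(temps_c, precip_mm):
    """Classify a temperate (C) climate as Csb or Cfb.

    temps_c, precip_mm: 12 monthly means, January first.
    """
    summer = [5, 6, 7]    # Jun, Jul, Aug
    winter = [11, 0, 1]   # Dec, Jan, Feb
    driest_summer = min(precip_mm[m] for m in summer)
    wettest_winter = max(precip_mm[m] for m in winter)
    # "s" (summer drought): driest summer month under 40 mm and
    # under one-third of the wettest winter month.
    dry_summer = driest_summer < 40 and driest_summer < wettest_winter / 3
    # "b" (warm summer): warmest month below 22 C, at least 4 months >= 10 C.
    warm_months = sum(1 for t in temps_c if t >= 10)
    assert max(temps_c) < 22 and warm_months >= 4, "not a 'b' summer"
    return "Csb" if dry_summer else "Cfb"

# Hypothetical monthly profiles for a dry-summer south and a wetter north:
south = koppen_c_subtype(
    temps_c=[9, 10, 12, 13, 16, 19, 21, 21, 19, 15, 12, 10],
    precip_mm=[140, 120, 100, 90, 70, 40, 20, 25, 60, 130, 150, 160])
north = koppen_c_subtype(
    temps_c=[9, 9, 11, 12, 14, 17, 19, 19, 18, 15, 12, 10],
    precip_mm=[110, 90, 90, 80, 70, 60, 50, 55, 70, 110, 120, 120])
print(south, north)  # prints: Csb Cfb
```

The mild temperatures in both profiles keep them in the "b" summer class; only the summer-drought test separates the Mediterranean-leaning south from the oceanic north.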
However, the southern coastal areas are often classified as oceanic, since their average precipitation remains significantly higher than that of a typical Mediterranean climate. As an example, Santiago de Compostela, the political capital city, has
an average of 129 rainy days and 1,362 millimetres (53.6 in) per year (with just 17 rainy days in the three summer
months) and 2,101 sunlight hours per year, with just 6 days of frost per year. The colder city of Lugo, to the east, has an average of 1,759 sunlight hours per year, 117 days with precipitation (> 1 mm) totalling 901.54 millimetres (35.5 in), and 40 days of frost per year. The more mountainous parts of the provinces of Ourense and
Lugo receive significant snowfall during the winter months. The sunniest city is Pontevedra with 2,223 sunny hours
per year. Galicia is further divided into 53 comarcas, 315 municipalities (93 in A Coruña, 67 in Lugo, 92 in Ourense,
62 in Pontevedra) and 3,778 parishes. Municipalities are divided into parishes, which may be further divided into
aldeas ("hamlets") or lugares ("places"). This traditional breakdown into such small areas is unusual when compared
to the rest of Spain. Roughly half of the named population entities of Spain are in Galicia, which occupies only
5.8 percent of the country's area. It is estimated that Galicia has over a million named places, over 40,000 of them
being communities. Compared with the other regions of Spain, Galicia's major economic advantage is its fishing industry. Galicia is a land of economic contrast. While the western coast, with its major population centers and
its fishing and manufacturing industries, is prosperous and increasing in population, the rural hinterland — the
provinces of Ourense and Lugo — is economically dependent on traditional agriculture, based on small landholdings
called minifundios. However, the rise of tourism, sustainable forestry, and organic and traditional agriculture is
bringing other possibilities to the Galician economy without compromising the preservation of the natural resources
and the local culture. Galicia was late to catch the tourism boom that has swept Spain in recent decades, but the
coastal regions (especially the Rías Baixas and Santiago de Compostela) are now significant tourist destinations
and are especially popular with visitors from other regions of Spain, who make up the majority of tourists. In
2007, 5.7 million tourists visited Galicia, an 8% growth over the previous year, and part of a continual pattern
of growth in this sector. 85% of tourists who visit Galicia visit Santiago de Compostela. Tourism constitutes 12%
of Galician GDP and employs about 12% of the regional workforce. The most important Galician fishing port is the
Port of Vigo; it is one of the world's leading fishing ports, second only to Tokyo, with an annual catch worth 1,500
million euros. In 2007 the port took in 732,951 metric tons (721,375 long tons; 807,940 short tons) of fish and seafood,
and about 4,000,000 metric tons (3,900,000 long tons; 4,400,000 short tons) of other cargoes. Other important ports
are Ferrol, A Coruña, and the smaller ports of Marín and Vilagarcía de Arousa, as well as important recreational
ports in Pontevedra and Burela. Beyond these, Galicia has 120 other organized ports. Within Galicia are the Autopista
AP-9 from Ferrol to Vigo and the Autopista AP-53 (also known as AG-53, because it was initially built by the Xunta
de Galicia) from Santiago to Ourense. Additional roads under construction include Autovía A-54 from Santiago de Compostela
to Lugo, and Autovía A-56 from Lugo to Ourense. The Xunta de Galicia has built roads connecting comarcal capitals,
such as the aforementioned AG-53, Autovía AG-55 connecting A Coruña to Carballo or AG-41 connecting Pontevedra to
Sanxenxo. The first railway line in Galicia was inaugurated 15 September 1873. It ran from O Carril, Vilagarcía de
Arousa to Cornes, Conxo, Santiago de Compostela. A second line was inaugurated in 1875, connecting A Coruña and Lugo.
In 1883, Galicia was first connected by rail to the rest of Spain, by way of O Barco de Valdeorras. Galicia today
has roughly 1,100 kilometres (680 mi) of rail lines. Several 1,668 mm (5 ft 5 21⁄32 in) Iberian gauge lines operated
by Adif and Renfe Operadora connect all the important Galician cities. A 1,000 mm (3 ft 3 3⁄8 in) metre gauge line
operated by FEVE connects Ferrol to Ribadeo and Oviedo. The only electrified line is the Ponferrada-Monforte de Lemos-Ourense-Vigo
line. Several high-speed rail lines are under construction. Among these are the Olmedo-Zamora-Galicia high-speed
rail line that opened partly in 2011, and the AVE Atlantic Axis route, which will connect all of the major Galician
Atlantic coast cities A Coruña, Santiago de Compostela, Pontevedra and Vigo to Portugal. Another projected AVE line
will connect Ourense to Pontevedra and Vigo. The rapid population growth of A Coruña, Vigo and, to a lesser degree, other major Galician cities, such as Ourense, Pontevedra and Santiago de Compostela, in the mid-20th-century years that followed the Spanish Civil War occurred as the rural population declined: many villages and hamlets of the four provinces of Galicia disappeared or nearly disappeared during the same period. Economic development and the mechanization of agriculture resulted in fields being abandoned, and most of the population moved to find jobs in the main cities. The number of people working in the tertiary and quaternary sectors of the economy has increased
significantly. Spanish was the only official language in Galicia for more than four centuries. Over the
many centuries of Castilian domination, Galician faded from day-to-day use in urban areas. The period since the re-establishment
of democracy in Spain—in particular since the Lei de Normalización Lingüística ("Law of Linguistic Normalization",
Ley 3/1983, 15 June 1983)—represents the first time since the introduction of mass education that a generation has
attended school in Galician (Spanish is also still taught in Galician schools). Nowadays, Galician is resurgent,
though in the cities it remains a "second language" for most. According to a 2001 census, 99.16 percent of the populace
of Galicia understand the language, 91.04 percent speak it, 68.65 percent read it and 57.64 percent write it. The
first two numbers (understanding and speaking) remain roughly the same as a decade earlier; the latter two (reading
and writing) both show enormous gains: a decade earlier, only 49.3 percent of the population could read Galician,
and only 34.85 percent could write it. This is easily explained: Galician could not be taught during the Francisco Franco era, so older people speak the language but lack written competence. Of Spain's regional languages, Galician is the one spoken by the highest percentage of the population of its region. The earliest known
document in Galician-Portuguese dates from 1228. The Foro do bo burgo do Castro Caldelas was granted by Alfonso IX
of León to the town of Burgo, in Castro Caldelas, after the model of the constitutions of the town of Allariz. A
distinct Galician literature emerged during the Middle Ages: in the 13th century important contributions were made
to the romance canon in Galician-Portuguese, the most notable those by the troubadour Martín Codax, the priest Airas
Nunes, King Denis of Portugal and King Alfonso X of Castile, Alfonso O Sabio ("Alfonso the Wise"), the same monarch
who began the process of establishing the hegemony of Castilian. During this period, Galician-Portuguese was considered
the language of love poetry in the Iberian Romance linguistic culture. The names and memories of Codax and other
popular cultural figures are well preserved in modern Galicia and, despite the long period of Castilian linguistic
domination, these names are again household words. Christianity is the most widely practised religion in Galicia,
as it has been since its introduction in Late Antiquity, although it lived alongside the old Gallaeci religion for
a few centuries. Today about 73% of Galicians identify themselves as Christians. The largest form of Christianity
practised in the present day is Catholicism, though only 20% of the population described themselves as active members.
The Catholic Church in Galicia has had its primatial seat in Santiago de Compostela since the 12th century. Since
the Middle Ages, the Galician Catholic Church has been organized into five ecclesiastical dioceses (Lugo, Ourense,
Santiago de Compostela, Mondoñedo-Ferrol and Tui-Vigo). While these may have coincided with contemporary 15th-century
civil provinces, they no longer have the same boundaries as the modern civil provincial divisions. The church is
led by one archbishop and four bishops. Within these five dioceses, Galicia is divided into 163 districts and 3,792 parishes, a few of which are governed by administrators, the remainder by parish priests. Hundreds of ancient
standing stone monuments such as dolmens, menhirs and megalithic tumuli were erected in Galicia during the prehistoric period; amongst the best known are the dolmens of Dombate, Corveira and Axeitos of Pedra da Arca, and menhirs like the "Lapa de Gargñáns". From the Iron Age, Galicia has a rich heritage based mainly on a great number of hill forts, a few of which have been excavated, such as Baroña, Sta. Tegra, San Cibrao de Lás and Formigueiros. With the introduction
of Ancient Roman architecture there was a development of basilicas, castra, city walls, cities, villas, Roman temples,
Roman roads, and the Roman bridge of Ponte Vella. It was the Romans who founded some of the first cities in Galicia
like Lugo and Ourense. Perhaps the best-known examples are the Roman Walls of Lugo and the Tower of Hercules in A
Coruña. The patron saint of Galicia is Saint James the Greater, whose body was discovered – according to the Catholic
tradition – in 814 near Compostela. After that date, the relics of Saint James became an extraordinary centre of
pilgrimage and from the 9th century have been kept in the heart of the church – the modern-day cathedral – dedicated
to him. There are many other Galician and associated saints; some of the best-known are: Saint Ansurius, Saint Rudesind,
Saint Mariña of Augas Santas, Saint Senorina, Trahamunda and Froilan. In northern Galicia, the A Coruña-Ferrol metropolitan
area has become increasingly dominant in terms of population. The population of the city of A Coruña in 1900 was
43,971. The population of the rest of the province, including the city and naval station of nearby Ferrol and Santiago de Compostela, was 653,556. A Coruña's growth after the Spanish Civil War matched that of other major Galician cities, but it was with the arrival of democracy in Spain after the death of Francisco Franco that A Coruña outpaced all the others. Galicia's inhabitants are known as Galicians (Galician: galegos, Spanish: gallegos).
For well over a century Galicia has grown more slowly than the rest of Spain, due largely to emigration to Latin
America and to other parts of Spain. Sometimes Galicia has lost population in absolute terms. In 1857, Galicia had
Spain's densest population and constituted 11.5% of the national population. As of 2007, only 6.1% of the Spanish
population resides in the autonomous community. This is due to an exodus of Galician people since the 19th century,
first to South America and later to Central Europe. The Galician road network includes autopistas and autovías connecting
the major cities, as well as national and secondary roads to the rest of the municipalities. The Autovía A-6 connects
A Coruña and Lugo to Madrid, entering Galicia at Pedrafita do Cebreiro. The Autovía A-52 connects O Porriño, Ourense
and Benavente, and enters Galicia at A Gudiña. Two more autovías are under construction. Autovía A-8 enters Galicia
on the Cantabrian coast, and ends in Baamonde (Lugo province). Autovía A-76 enters Galicia in Valdeorras; it is an
upgrade of the existing N-120 to Ourense and Vigo.
USB was designed to standardize the connection of computer peripherals (including keyboards, pointing devices, digital cameras,
printers, portable media players, disk drives and network adapters) to personal computers, both to communicate and
to supply electric power. It has become commonplace on other devices, such as smartphones, PDAs and video game consoles.
USB has effectively replaced a variety of earlier interfaces, such as serial and parallel ports, as well as separate
power chargers for portable devices. Unlike other data cables (e.g., Ethernet, HDMI), each end of a USB cable uses
a different kind of connector: a Type-A or a Type-B. This design was chosen to prevent electrical overloads
and equipment damage, as only the Type-A socket provides power. There are cables with Type-A connectors on both
ends, but they should be used carefully. Therefore, in general, each of the different "sizes" requires four different
connectors; USB cables have the Type-A and Type-B plugs, and the corresponding receptacles are on the computer or
electronic device. In common practice, the Type-A connector is usually the full size, and the Type-B side can vary
as needed. Counter-intuitively, the "micro" size is the most durable in terms of designed insertion lifetime.
The standard and mini connectors were designed for less than daily connections, with a design lifetime of 1,500 insertion-removal
cycles. (Improved mini-B connectors have reached 5,000-cycle lifetimes.) Micro connectors were designed with frequent
charging of portable devices in mind; not only is design lifetime of the connector improved to 10,000 cycles, but
it was also redesigned to place the flexible contacts, which wear out sooner, on the easily replaced cable, while
the more durable rigid contacts are located in the micro-USB receptacles. Likewise, the springy part of the retention
mechanism (parts that provide required gripping force) were also moved into plugs on the cable side. USB connections
also come in five data transfer modes, in ascending order: Low Speed (1.0), Full Speed (1.0), High Speed (2.0), SuperSpeed
(3.0), and SuperSpeed+ (3.1). High Speed is supported only by specifically designed USB 2.0 High Speed interfaces
(that is, USB 2.0 controllers without the High Speed designation do not support it), as well as by USB 3.0 and newer
interfaces. SuperSpeed is supported only by USB 3.0 and newer interfaces, and requires a connector and cable with
extra pins and wires, usually distinguishable by the blue inserts in connectors. A group of seven companies began
the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC, and Nortel. The goal was to make it fundamentally
easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing
the usability issues of existing interfaces, and simplifying software configuration of all devices connected to USB,
as well as permitting greater data rates for external devices. A team including Ajay Bhatt worked on the standard
at Intel; the first integrated circuits supporting USB were produced by Intel in 1995. The original USB 1.0 specification,
which was introduced in January 1996, defined data transfer rates of 1.5 Mbit/s "Low Speed" and 12 Mbit/s "Full Speed".
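The named transfer modes map to fixed signaling rates; a minimal sketch with the rate table transcribed from the figures given in the article (the High Speed and SuperSpeed rates appear later in the text):

```python
# Signaling rates of the USB transfer modes, in Mbit/s, as given in the article.
USB_MODES = {
    "Low Speed": 1.5,        # USB 1.0
    "Full Speed": 12,        # USB 1.0
    "High Speed": 480,       # USB 2.0
    "SuperSpeed": 5_000,     # USB 3.0
    "SuperSpeed+": 10_000,   # USB 3.1 Gen2
}

def raw_transfer_time(mode: str, megabits: float) -> float:
    """Seconds to move `megabits` at the raw signaling rate (ignoring protocol overhead)."""
    return megabits / USB_MODES[mode]

# A 12 Mbit payload takes one second at Full Speed, before any overhead.
print(raw_transfer_time("Full Speed", 12))  # 1.0
```

Real throughput is lower than these raw rates because of line encoding and protocol overhead, as discussed later in the article.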
Microsoft Windows 95, OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1,
which was released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk
drives, and the lower 1.5 Mbit/s rate for low data rate devices such as joysticks. Apple Inc.'s iMac was the first
mainstream product with USB and the iMac's success popularized USB itself. Following Apple's design decision to remove
all legacy ports from the iMac, many PC manufacturers began building legacy-free PCs, which led to the broader PC
market using USB as a standard. The new SuperSpeed bus provides a fourth transfer mode with a data signaling rate
of 5.0 Gbit/s, in addition to the modes supported by earlier versions. The payload throughput is 4 Gbit/s
(due to the overhead incurred by 8b/10b encoding), and the specification considers it reasonable to achieve
around 3.2 Gbit/s (0.4 GB/s or 400 MB/s), which should increase with future hardware advances. Communication is full-duplex
in SuperSpeed transfer mode; in the modes supported previously, by 1.x and 2.0, communication is half-duplex, with
direction controlled by the host. As with previous USB versions, USB 3.0 ports come in low-power and high-power variants,
providing 150 mA and 900 mA respectively, while simultaneously transmitting data at SuperSpeed rates. Additionally,
there is a Battery Charging Specification (Version 1.2 – December 2010), which increases the power handling capability
to 1.5 A but does not allow concurrent data transmission. The Battery Charging Specification requires that the physical
ports themselves be capable of handling 5 A of current but limits the maximum current drawn to 1.5
A. A January 2013 press release from the USB group revealed plans to update USB 3.0 to 10 Gbit/s. The group ended
up creating a new USB version, USB 3.1, which was released on 31 July 2013, introducing a faster transfer mode called
SuperSpeed USB 10 Gbit/s, putting it on par with a single first-generation Thunderbolt channel. The new mode's logo
features a "Superspeed+" caption (stylized as SUPERSPEED+). The USB 3.1 standard increases the data signaling rate
to 10 Gbit/s in the USB 3.1 Gen2 mode, double that of USB 3.0 (referred to as USB 3.1 Gen1) and reduces line encoding
overhead to just 3% by changing the encoding scheme to 128b/132b. The first USB 3.1 implementation demonstrated transfer
speeds of 7.2 Gbit/s. Developed at roughly the same time as the USB 3.1 specification, but distinct from it, the
USB Type-C Specification 1.0 was finalized in August 2014 and defines a new small reversible-plug connector for USB
devices. The Type-C plug connects to both hosts and devices, replacing various Type-A and Type-B connectors and cables
with a standard meant to be future-proof, similar to Apple Lightning and Thunderbolt. The 24-pin double-sided connector
provides four power/ground pairs, two differential pairs for USB 2.0 data bus (though only one pair is implemented
in a Type-C cable), four pairs for high-speed data bus, two "sideband use" pins, and two configuration pins for cable
orientation detection, dedicated biphase mark code (BMC) configuration data channel, and VCONN +5 V power for active
cables. Type-A and Type-B adaptors and cables are required for older devices to plug into Type-C hosts. Adapters
and cables with a Type-C receptacle are not allowed. Full-featured USB Type-C cables are active,
electronically marked cables that contain a chip with an ID function based on the configuration data channel and
vendor-defined messages (VDMs) from the USB Power Delivery 2.0 specification. USB Type-C devices also support power
currents of 1.5 A and 3.0 A over the 5 V power bus in addition to baseline 900 mA; devices can either negotiate increased
USB current through the configuration line, or they can support the full Power Delivery specification using both
BMC-coded configuration line and legacy BFSK-coded VBUS line. The design architecture of USB is asymmetrical in its
topology, consisting of a host, a multitude of downstream USB ports, and multiple peripheral devices connected in
a tiered-star topology. Additional USB hubs may be included in the tiers, allowing branching into a tree structure
with up to five tier levels. A USB host may implement multiple host controllers and each host controller may provide
one or more USB ports. Up to 127 devices, including hub devices if present, may be connected to a single host controller.
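The tiered-star limits just described can be made concrete; a minimal sketch, assuming the figures from the text (up to five tier levels of hubs, up to 127 devices including hubs per host controller) and a hypothetical Device class for illustration:

```python
# Sketch of USB's tiered-star topology limits. The Device tree model is a
# hypothetical illustration, not part of any real USB stack.
class Device:
    def __init__(self, name, is_hub=False):
        self.name, self.is_hub, self.children = name, is_hub, []

    def attach(self, child):
        self.children.append(child)
        return child

def validate(root_hub, max_tiers=5, max_devices=127):
    """Count devices below the root hub, enforcing tier and device limits."""
    count = 0
    def walk(node, tier):
        nonlocal count
        for child in node.children:
            count += 1
            if tier > max_tiers:
                raise ValueError(f"{child.name}: exceeds {max_tiers} tiers")
            if child.is_hub:
                walk(child, tier + 1)
    walk(root_hub, 1)
    if count > max_devices:
        raise ValueError(f"{count} devices exceeds {max_devices}")
    return count

root = Device("root hub", is_hub=True)       # built into the host controller
hub = root.attach(Device("hub 1", is_hub=True))
hub.attach(Device("keyboard"))
hub.attach(Device("webcam"))
print(validate(root))  # 3
```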
USB devices are linked in series through hubs. One hub—built into the host controller—is the root hub. A physical
USB device may consist of several logical sub-devices that are referred to as device functions. A single device may
provide several functions, for example, a webcam (video device function) with a built-in microphone (audio device
function). This kind of device is called a composite device. An alternative to this is a compound device, in which
the host assigns each logical device a distinctive address and all logical devices connect to a built-in hub that
connects to the physical USB cable. USB device communication is based on pipes (logical channels). A pipe is a connection
from the host controller to a logical entity, found on a device, and named an endpoint. Because pipes correspond
1-to-1 to endpoints, the terms are sometimes used interchangeably. A USB device could have up to 32 endpoints (16
IN, 16 OUT), though it is rare to have so many. An endpoint is defined and numbered by the device during initialization
(the period after physical connection called "enumeration") and so is relatively permanent, whereas a pipe may be
opened and closed. An endpoint of a pipe is addressable with a tuple (device_address, endpoint_number) as specified
in a TOKEN packet that the host sends when it wants to start a data transfer session. If the direction of the data
transfer is from the host to the endpoint, an OUT packet (a specialization of a TOKEN packet) having the desired
device address and endpoint number is sent by the host. If the direction of the data transfer is from the device
to the host, the host sends an IN packet instead. If the destination endpoint is a uni-directional endpoint whose
manufacturer's designated direction does not match the TOKEN packet (e.g. the manufacturer's designated direction
is IN while the TOKEN packet is an OUT packet), the TOKEN packet is ignored. Otherwise, it is accepted and the data
transaction can start. A bi-directional endpoint, on the other hand, accepts both IN and OUT packets. When a USB
device is first connected to a USB host, the USB device enumeration process is started. The enumeration starts by
sending a reset signal to the USB device. The data rate of the USB device is determined during the reset signaling.
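The enumeration handshake can be sketched in a few lines; a minimal sketch of the sequence described here (reset, speed detection, 7-bit address assignment, configuration), in which FakeDevice and the set of supported classes are hypothetical stand-ins for what a real host controller driver handles:

```python
# Hypothetical device model for illustrating USB enumeration.
class FakeDevice:
    def __init__(self, speed, dev_class):
        self.speed, self.dev_class = speed, dev_class
        self.address, self.configured = 0, False

    def reset(self):
        self.address, self.configured = 0, False
        return self.speed  # the data rate is determined during reset signaling

def enumerate_device(device, next_address, supported_classes):
    speed = device.reset()                 # host sends a reset signal
    address = next_address & 0x7F          # addresses are 7 bits wide
    device.address = address
    if device.dev_class in supported_classes:
        device.configured = True           # drivers loaded, device configured
    return address, speed, device.configured

dev = FakeDevice("full-speed", "HID")
print(enumerate_device(dev, 1, {"HID", "mass-storage"}))
# → (1, 'full-speed', True)
```

If the host is restarted, this sequence would simply be repeated for every connected device, as the text notes.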
After reset, the USB device's information is read by the host and the device is assigned a unique 7-bit address.
If the device is supported by the host, the device drivers needed for communicating with the device are loaded and
the device is set to a configured state. If the USB host is restarted, the enumeration process is repeated for all
connected devices. High-speed USB 2.0 hubs contain devices called transaction translators that convert between high-speed
USB 2.0 buses and full and low speed buses. When a high-speed USB 2.0 hub is plugged into a high-speed USB host or
hub, it operates in high-speed mode. The USB hub then uses either one transaction translator per hub to create a
full/low-speed bus routed to all full and low speed devices on the hub, or uses one transaction translator per port
to create an isolated full/low-speed bus per port on the hub. USB implements connections to storage devices using
a set of standards called the USB mass storage device class (MSC or UMS). This was at first intended for traditional
magnetic and optical drives and has been extended to support flash drives. It has also been extended to support a
wide variety of novel devices as many systems can be controlled with the familiar metaphor of file manipulation within
directories. The process of making a novel device look like a familiar device is also known as extension. The ability
to boot from a write-locked SD card through a USB adapter is particularly advantageous for keeping the booting medium
in a pristine, non-corruptible state. Though most computers since mid-2004 can boot from USB mass
storage devices, USB is not intended as a primary bus for a computer's internal storage. Buses such as Parallel ATA
(PATA or IDE), Serial ATA (SATA), or SCSI fulfill that role in PC class computers. However, USB has one important
advantage, in that it is possible to install and remove devices without rebooting the computer (hot-swapping), making
it useful for mobile peripherals, including drives of various kinds (given SATA or SCSI devices may or may not support
hot-swapping). External USB storage was first conceived for optical storage devices (CD-RW drives, DVD drives, etc.)
and is still used for them today; several manufacturers also offer external portable USB hard disk drives, or empty
enclosures for disk drives. These offer
performance comparable to internal drives, limited by the current number and types of attached USB devices, and by
the upper limit of the USB interface (in practice about 30 MB/s for USB 2.0 and potentially 400 MB/s or more for
USB 3.0). These external drives typically include a "translating device" that bridges between a drive's interface
to a USB interface port. Functionally, the drive appears to the user much like an internal drive. Other competing
standards for external drive connectivity include eSATA, ExpressCard, FireWire (IEEE 1394), and most recently Thunderbolt.
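The practical rates mentioned above (about 30 MB/s for USB 2.0, potentially 400 MB/s or more for USB 3.0) trace back to the signaling rate minus line-encoding overhead; a sketch of the arithmetic, using the 8b/10b and 128b/132b figures given elsewhere in the article:

```python
# Payload bit rate after line-encoding overhead. USB 3.0 uses 8b/10b coding
# (10 line bits per 8 data bits); USB 3.1 uses 128b/132b (~3% overhead).
def payload_rate(signaling_gbps, data_bits, line_bits):
    """Payload bit rate in Gbit/s after line-encoding overhead."""
    return signaling_gbps * data_bits / line_bits

usb30 = payload_rate(5.0, 8, 10)      # 4.0 Gbit/s, as stated for SuperSpeed
usb31 = payload_rate(10.0, 128, 132)  # ~9.7 Gbit/s for SuperSpeed+
print(usb30, round(usb31, 2))
```

Protocol overhead reduces these figures further, which is why the specification's "reasonable" target for USB 3.0 is around 3.2 Gbit/s rather than the full 4 Gbit/s.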
Media Transfer Protocol (MTP) was designed by Microsoft to give higher-level access to a device's filesystem than
USB mass storage, at the level of files rather than disk blocks. It also has optional DRM features. MTP was designed
for use with portable media players, but it has since been adopted as the primary storage access protocol of the
Android operating system from the version 4.1 Jelly Bean as well as Windows Phone 8 (Windows Phone 7 devices had
used the Zune protocol which was an evolution of MTP). The primary reason for this is that MTP does not require exclusive
access to the storage device the way UMS does, alleviating potential problems should an Android program request the
storage while it is attached to a computer. The main drawback is that MTP is not as well supported outside of Windows
operating systems. USB mice and keyboards can usually be used with older computers that have PS/2 connectors with
the aid of a small USB-to-PS/2 adapter. For mice and keyboards with dual-protocol support, an adaptor that contains
no logic circuitry may be used: the hardware in the USB keyboard or mouse is designed to detect whether it is connected
to a USB or PS/2 port, and communicate using the appropriate protocol. Converters also exist that connect PS/2 keyboards
and mice (usually one of each) to a USB port. These devices present two HID endpoints to the system and use a microcontroller
to perform bidirectional data translation between the two standards. By design, it is difficult to insert a USB plug
into its receptacle incorrectly. The USB specification states that the required USB icon must be embossed on the
"topside" of the USB plug, which "...provides easy user recognition and facilitates alignment during the mating process."
The specification also shows that the "recommended" "Manufacturer's logo" ("engraved" on the diagram but not specified
in the text) is on the opposite side of the USB icon. The specification further states, "The USB Icon is also located
adjacent to each receptacle. Receptacles should be oriented to allow the icon on the plug to be visible during the
mating process." However, the specification does not consider the height of the device compared to the eye level
height of the user, so the side of the cable that is "visible" when mated to a computer on a desk can depend on whether
the user is standing or kneeling. The standard connectors were deliberately intended to enforce the directed topology
of a USB network: Type-A receptacles on host devices that supply power and Type-B receptacles on target devices that
draw power. This prevents users from accidentally connecting two USB power supplies to each other, which could lead
to short circuits and dangerously high currents, circuit failures, or even fire. USB does not support cyclic networks
and the standard connectors from incompatible USB devices are themselves incompatible. The standard connectors were
designed to be robust. Because USB is hot-pluggable, the connectors would be used more frequently, and perhaps with
less care, than other connectors. Many previous connector designs were fragile, specifying embedded component pins
or other delicate parts that were vulnerable to bending or breaking. The electrical contacts in a USB connector are
protected by an adjacent plastic tongue, and the entire connecting assembly is usually protected by an enclosing
metal sheath. The connector construction always ensures that the external sheath on the plug makes contact with its
counterpart in the receptacle before any of the four connectors within make electrical contact. The external metallic
sheath is typically connected to system ground, thus dissipating damaging static charges. This enclosure design also
provides a degree of protection from electromagnetic interference to the USB signal while it travels through the
mated connector pair (the only location when the otherwise twisted data pair travels in parallel). In addition, because
of the required sizes of the power and common connections, they are made after the system ground but before the data
connections. This type of staged make-break timing allows for electrically safe hot-swapping. The newer micro-USB
receptacles are designed for a minimum rated lifetime of 10,000 cycles of insertion and removal between the receptacle
and plug, compared to 1,500 for the standard USB and 5,000 for the mini-USB receptacle. To accomplish this, a
locking device was added and the leaf-spring was moved from the jack to the plug, so that the most-stressed part is
on the cable side of the connection. This change was made so that the connector on the less expensive cable would
bear the most wear instead of the more expensive micro-USB device. However, whether these changes actually made the
connector more durable in real-world use has been widely disputed, with many contending that it is in fact much
less durable. The USB standard specifies relatively loose tolerances for compliant USB connectors
to minimize physical incompatibilities in connectors from different vendors. To address a weakness present in some
other connector standards, the USB specification also defines limits to the size of a connecting device in the area
around its plug. This was done to prevent a device from blocking adjacent ports due to the size of the cable strain
relief mechanism (usually molding integral with the cable outer insulation) at the connector. Compliant devices must
either fit within the size restrictions or support a compliant extension cable that does. In general, USB cables
have only plugs on their ends, while hosts and devices have only receptacles. Hosts almost universally have Type-A
receptacles, while devices have one or another Type-B variety. Type-A plugs mate only with Type-A receptacles, and
the same applies to their Type-B counterparts; they are deliberately physically incompatible. However, an extension
to the USB standard specification called USB On-The-Go (OTG) allows a single port to act as either a host or a device,
which is selectable by the end of the cable that plugs into the receptacle on the OTG-enabled unit. Even after the
cable is hooked up and the units are communicating, the two units may "swap" ends under program control. This capability
is meant for units such as PDAs in which the USB link might connect to a PC's host port as a device in one instance,
yet connect as a host itself to a keyboard and mouse device in another instance. Various connectors have been used
for smaller devices such as digital cameras, smartphones, and tablet computers. These include the now-deprecated
(i.e. de-certified but standardized) mini-A and mini-AB connectors; mini-B connectors are still supported, but are
not OTG-compliant (On The Go, used in mobile devices). The mini-B USB connector was standard for transferring data
to and from the early smartphones and PDAs. Both mini-A and mini-B plugs are approximately 3 by 7 mm; the mini-A
connector and the mini-AB receptacle connector were deprecated on 23 May 2007. The micro plug design is rated for
at least 10,000 connect-disconnect cycles, which is more than the mini plug design. The micro connector is also designed
to reduce the mechanical wear on the device; instead the easier-to-replace cable is designed to bear the mechanical
wear of connection and disconnection. The Universal Serial Bus Micro-USB Cables and Connectors Specification details
the mechanical characteristics of micro-A plugs, micro-AB receptacles (which accept both micro-A and micro-B plugs),
and micro-B plugs and receptacles, along with a standard-A receptacle to micro-A plug adapter. The cellular phone
carrier group Open Mobile Terminal Platform (OMTP) in 2007 endorsed micro-USB as the standard connector for data
and power on mobile devices. In addition, on 22 October 2009, the International Telecommunication Union (ITU)
announced that it had embraced micro-USB as the Universal Charging Solution, its "energy-efficient one-charger-fits-all
new mobile phone solution," and added: "Based on the Micro-USB interface, UCS chargers also include a 4-star or higher
efficiency rating—up to three times more energy-efficient than an unrated charger." The European Standardisation
Bodies CEN, CENELEC and ETSI (independent of the OMTP/GSMA proposal) defined a common External Power Supply (EPS)
for use with smartphones sold in the EU based on micro-USB. Fourteen of the world's largest mobile phone manufacturers
signed the EU's common EPS Memorandum of Understanding (MoU). Apple, one of the original MoU signers, makes micro-USB
adapters available – as permitted in the Common EPS MoU – for its iPhones equipped with Apple's proprietary 30-pin
dock connector or (later) Lightning connector. All current USB On-The-Go (OTG) devices are required to have one,
and only one, USB connector: a micro-AB receptacle. Non-OTG compliant devices are not allowed to use the micro-AB
receptacle, due to power supply shorting hazards on the VBUS line. The micro-AB receptacle is capable of accepting
both micro-A and micro-B plugs, attached to any of the legal cables and adapters as defined in revision 1.01 of the
micro-USB specification. Prior to the development of micro-USB, USB On-The-Go devices were required to use mini-AB
receptacles to perform the equivalent job. The OTG device with the A-plug inserted is called the A-device and is
responsible for powering the USB interface when required and by default assumes the role of host. The OTG device
with the B-plug inserted is called the B-device and by default assumes the role of peripheral. An OTG device with
no plug inserted defaults to acting as a B-device. If an application on the B-device requires the role of host, then
the Host Negotiation Protocol (HNP) is used to temporarily transfer the host role to the B-device. USB is a serial
bus, using four shielded wires for the USB 2.0 variant: two for power (VBUS and GND), and two for differential data
signals (labelled as D+ and D− in pinouts). Non-Return-to-Zero Inverted (NRZI) encoding scheme is used for transferring
data, with a sync field to synchronize the host and receiver clocks. D+ and D− signals are transmitted on a twisted
pair, providing half-duplex data transfers for USB 2.0. Mini and micro connectors have their GND connections moved
from pin #4 to pin #5, while their pin #4 serves as an ID pin for the On-The-Go host/client identification. USB 2.0
provides for a maximum cable length of 5 meters for devices running at High Speed (480 Mbit/s). The primary reason
for this limit is the maximum allowed round-trip delay of about 1.5 μs. If USB host commands are unanswered by the
USB device within the allowed time, the host considers the command lost. When adding USB device response time, delays
from the maximum number of hubs added to the delays from connecting cables, the maximum acceptable delay per cable
amounts to 26 ns. The USB 2.0 specification requires that cable delay be less than 5.2 ns per meter (192 000 km/s,
which is close to the maximum achievable transmission speed for standard copper wire). A unit load is defined as
100 mA in USB 1.x and 2.0, and 150 mA in USB 3.0. A device may draw a maximum of five unit loads from a port in USB
1.x and 2.0 (500 mA), or six unit loads in USB 3.0 (900 mA). There are two types of devices: low-power and high-power.
A low-power device (such as a USB HID) draws at most one unit load, with a minimum operating voltage of 4.4 V in USB
2.0, and 4 V in USB 3.0. A high-power device draws, at most, the maximum number of unit loads the standard permits.
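The unit-load arithmetic just described is simple enough to spell out; a minimal sketch using only the figures from the text (100 mA per unit load in USB 1.x/2.0, 150 mA in USB 3.0; five and six unit loads maximum respectively):

```python
# Unit-load limits per USB version, as given in the text.
UNIT_LOAD_MA = {"1.x": 100, "2.0": 100, "3.0": 150}
MAX_UNIT_LOADS = {"1.x": 5, "2.0": 5, "3.0": 6}

def max_current_ma(version, high_power=True):
    """Maximum current a device may draw from one port, in mA."""
    loads = MAX_UNIT_LOADS[version] if high_power else 1
    return loads * UNIT_LOAD_MA[version]

print(max_current_ma("2.0"))                    # 500
print(max_current_ma("3.0"))                    # 900
print(max_current_ma("3.0", high_power=False))  # 150
```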
Every device functions initially as low-power (including high-power functions during their low-power enumeration
phases), but may request high-power, and get it if available on the providing bus. Some devices, such as high-speed
external disk drives, require more than 500 mA of current and therefore may have power issues if powered from just
one USB 2.0 port: erratic function, failure to function, or overloading/damaging the port. Such devices may come
with an external power source or a Y-shaped cable that has two USB connectors (one for power and data, the other
for power only) to plug into a computer. With such a cable, a device can draw power from two USB ports simultaneously.
However, USB compliance specification states that "use of a 'Y' cable (a cable with two A-plugs) is prohibited on
any USB peripheral", meaning that "if a USB peripheral requires more power than allowed by the USB specification
to which it is designed, then it must be self-powered." The USB Battery Charging Specification Revision 1.1 (released
in 2007) defines a new type of USB port, called the charging port. Contrary to the standard downstream port, for
which current draw by a connected portable device can exceed 100 mA only after digital negotiation with the host
or hub, a charging port can supply currents between 500 mA and 1.5 A without the digital negotiation. A charging
port supplies up to 500 mA at 5 V, up to the rated current at 3.6 V or more, and drops its output voltage if the
portable device attempts to draw more than the rated current. The charger port may shut down if the load is too high.
Two types of charging port exist: the charging downstream port (CDP), supporting data transfers as well, and the
dedicated charging port (DCP), without data support. A portable device can recognize the type of USB port; on a dedicated
charging port, the D+ and D− pins are shorted with a resistance not exceeding 200 ohms, while charging downstream
ports provide additional detection logic so their presence can be determined by attached devices (see Section 1.4.5
and Table 5-3, "Resistances", of the specification). The USB Battery Charging Specification Revision 1.2 (released in
2010) makes clear that there are safety limits to the rated current at 5 A coming from USB 2.0. At the same time,
several limits were raised: charging downstream ports may supply 1.5 A to unconfigured devices, high-speed
communication is allowed at currents up to 1.5 A, and a maximum current of 5 A is permitted. Revision 1.2 also
removes support for detecting the type of USB port via resistive detection mechanisms. In July
2012, the USB Promoters Group announced the finalization of the USB Power Delivery ("PD") specification, an extension
that specifies using certified "PD aware" USB cables with standard USB Type-A and Type-B connectors to deliver increased
power (more than 7.5 W) to devices with larger power demand. Devices can request higher currents and supply voltages
from compliant hosts – up to 2 A at 5 V (for a power consumption of up to 10 W), and optionally up to 3 A or 5 A
at either 12 V (36 W or 60 W) or 20 V (60 W or 100 W). In all cases, both host-to-device and device-to-host configurations
are supported. The USB Power Delivery revision 2.0 specification has been released as part of the USB 3.1 suite.
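The Power Delivery figures quoted above are simply voltage times current; a sketch of the profiles named in the text (baseline 5 V at 2 A plus the optional higher tiers):

```python
# Power Delivery voltage/current combinations from the text.
PD_PROFILES = [  # (volts, amps)
    (5, 2), (12, 3), (12, 5), (20, 3), (20, 5),
]

def watts(volts, amps):
    """Power delivered at a given voltage and current."""
    return volts * amps

for v, a in PD_PROFILES:
    print(f"{v} V at {a} A -> {watts(v, a)} W")
# 10 W, 36 W, 60 W, 60 W and 100 W, matching the figures in the text
```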
It covers the Type-C cable and connector with four power/ground pairs and a separate configuration channel, which
now hosts a DC coupled low-frequency BMC-coded data channel that reduces the possibilities for RF interference. Power
Delivery protocols have been updated to facilitate Type-C features such as cable ID function, Alternate Mode negotiation,
increased VBUS currents, and VCONN-powered accessories. Sleep-and-charge USB ports can be used to charge electronic
devices even when the computer is switched off. Normally, when a computer is powered off the USB ports are powered
down, preventing phones and other devices from charging. Sleep-and-charge USB ports remain powered even when the
computer is off. On laptops, charging devices from the USB port when it is not being powered from AC drains the laptop
battery faster; most laptops have a facility to stop charging if their own battery charge level gets too low. On
Dell and Toshiba laptops, the port is marked with the standard USB symbol with an added lightning bolt icon on the
right side. Dell calls this feature PowerShare, while Toshiba calls it USB Sleep-and-Charge. On Acer Inc. and Packard
Bell laptops, sleep-and-charge USB ports are marked with a non-standard symbol (the letters USB over a drawing of
a battery); the feature is simply called Power-off USB. On some laptops such as Dell and Apple MacBook models, it
is possible to plug a device in, close the laptop (putting it into sleep mode) and have the device continue to charge.
The GSM Association (GSMA) followed suit on 17 February 2009, and on 22 April 2009, this was further endorsed
by the CTIA – The Wireless Association and, on 22 October 2009, by the International Telecommunication Union (ITU),
which embraced micro-USB as its Universal Charging Solution, as described above. In June 2009, many of
the world's largest mobile phone manufacturers signed an EC-sponsored Memorandum of Understanding (MoU), agreeing
to make most data-enabled mobile phones marketed in the European Union compatible with a common External Power Supply
(EPS). The EU's common EPS specification (EN 62684:2010) references the USB Battery Charging standard and is similar
to the GSMA/OMTP and Chinese charging solutions. In January 2011, the International Electrotechnical Commission (IEC)
released its version of the (EU's) common EPS standard as IEC 62684:2011. Some USB devices require more power than
is permitted by the specifications for a single port. This is common for external hard and optical disc drives, and
generally for devices with motors or lamps. Such devices can use an external power supply, which is allowed by the
standard, or use a dual-input USB cable, one input of which is used for power and data transfer, the other solely
for power, which makes the device a non-standard USB device. Some USB ports and external hubs can, in practice, supply
more power to USB devices than required by the specification but a standard-compliant device may not depend on this.
In addition to limiting the total average power used by the device, the USB specification limits the inrush current
(i.e., that used to charge decoupling and filter capacitors) when the device is first connected. Otherwise, connecting
a device could cause problems with the host's internal power. USB devices are also required to automatically enter
ultra low-power suspend mode when the USB host is suspended. Nevertheless, many USB host interfaces do not cut off
the power supply to USB devices when they are suspended. Some non-standard USB devices use the 5 V power supply without
participating in a proper USB network, which negotiates power draw with the host interface. These are usually called
USB decorations. Examples include USB-powered keyboard lights, fans, mug coolers and heaters, battery
chargers, miniature vacuum cleaners, and even miniature lava lamps. In most cases, these items contain no digital
circuitry, and thus are not standard compliant USB devices. This may cause problems with some computers, such as
drawing too much current and damaging circuitry. Prior to the Battery Charging Specification, the USB specification
required that devices connect in a low-power mode (100 mA maximum) and communicate their current requirements to
the host, which then permits the device to switch into high-power mode. USB data is transmitted by toggling the data
lines between the J state and the opposite K state. USB encodes data using the NRZI line coding; a 0 bit is transmitted
by toggling the data lines from J to K or vice versa, while a 1 bit is transmitted by leaving the data lines as-is.
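As an illustration, the toggle rule can be sketched in Python together with USB's run-limiting bit stuffing (an extra 0 inserted after six consecutive 1 bits); the J/K line states are abstracted to 1/0 here, so this is a simplified model rather than an electrical description:

```python
def bit_stuff(bits):
    """USB bit stuffing: insert a 0 after any six consecutive 1 bits,
    guaranteeing the receiver sees a transition at least every seven bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b else 0
        if run == 6:
            out.append(0)  # forced toggle aids the receiver's clock recovery
            run = 0
    return out

def nrzi(bits, level=1):
    """NRZI line coding: a 0 bit toggles the line, a 1 bit leaves it as-is.
    `level` stands in for the J/K state (1 = J here, an arbitrary choice)."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1  # toggle between the two line states
        out.append(level)
    return out

data = [1, 1, 1, 1, 1, 1, 1, 0]   # seven 1s: stuffing must break the run
stuffed = bit_stuff(data)
print(stuffed)         # [1, 1, 1, 1, 1, 1, 0, 1, 0]
print(nrzi(stuffed))   # [1, 1, 1, 1, 1, 1, 0, 0, 1]
```

Note how the stuffed 0 forces a line transition inside what would otherwise be a long constant run, which is exactly why seven consecutive received 1 bits is always an error.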
To ensure a minimum density of signal transitions remains in the bitstream, USB uses bit stuffing; an extra 0 bit
is inserted into the data stream after any appearance of six consecutive 1 bits. Seven consecutive received 1 bits
is always an error. USB 3.0 has introduced additional data transmission encodings. A USB packet's end, called EOP
(end-of-packet), is indicated by the transmitter driving 2 bit times of SE0 (D+ and D− both driven low) and 1 bit
time of J state. After this, the transmitter ceases to drive the D+/D− lines and the aforementioned pull-up resistors
hold it in the J (idle) state. Sometimes skew due to hubs can add as much as one bit time before the SE0 of the end
of packet. This extra bit can also result in a "bit stuff violation" if the six bits before it in the CRC are 1s.
This bit should be ignored by the receiver. USB 2.0 devices use a special protocol during reset, called chirping, to
negotiate the high bandwidth mode with the host/hub. A device that is HS capable first connects as an FS device (D+
pulled high), but upon receiving a USB RESET (both D+ and D− driven LOW by host for 10 to 20 ms) it pulls the D−
line high, known as chirp K. This indicates to the host that the device is high bandwidth. If the host/hub is also
HS capable, it chirps (returns alternating J and K states on D− and D+ lines) letting the device know that the hub
operates at high bandwidth. The device has to receive at least three sets of KJ chirps before it changes to high
bandwidth terminations and begins high bandwidth signaling. Because USB 3.0 uses wiring separate and additional to
that used by USB 2.0 and USB 1.x, such bandwidth negotiation is not required. According to routine testing performed
by CNet, write operations to typical Hi-Speed (USB 2.0) hard drives can sustain rates of 25–30 MB/s, while read operations
are at 30–42 MB/s; this is 70% of the total available bus bandwidth. For USB 3.0, typical write speed is 70–90 MB/s,
while read speed is 90–110 MB/s. Mask Tests, also known as Eye Diagram Tests, are used to determine the quality of
a signal in the time domain. They are defined in the referenced document as part of the electrical test description
for the high-speed (HS) mode at 480 Mbit/s. After the sync field, all packets are made of 8-bit bytes, transmitted
least-significant bit first. The first byte is a packet identifier (PID) byte. The PID is actually 4 bits; the byte
consists of the 4-bit PID followed by its bitwise complement. This redundancy helps detect errors. (Note also that
a PID byte contains at most four consecutive 1 bits, and thus never needs bit-stuffing, even when combined with the
final 1 bit in the sync byte. However, trailing 1 bits in the PID may require bit-stuffing within the first few bits
of the payload.) Handshake packets consist of only a single PID byte, and are generally sent in response to data
packets. Error detection is provided by transmitting four bits that represent the packet type twice, in a single
PID byte using complemented form. Three basic types are ACK, indicating that data was successfully received, NAK,
indicating that the data cannot be received and should be retried, and STALL, indicating that the device has an error
condition and cannot transfer data until some corrective action (such as device initialization) occurs. IN and OUT
tokens contain a seven-bit device number and four-bit function number (for multifunction devices) and command the
device to transmit DATAx packets, or receive the following DATAx packets, respectively. An IN token expects a response
from a device. The response may be a NAK or STALL response, or a DATAx frame. In the latter case, the host issues
an ACK handshake if appropriate. An OUT token is followed immediately by a DATAx frame. The device responds with
ACK, NAK, NYET, or STALL, as appropriate. USB 2.0 also added a larger three-byte SPLIT token with a seven-bit hub
number, 12 bits of control flags, and a five-bit CRC. This is used to perform split transactions. Rather than tie
up the high-bandwidth USB bus sending data to a slower USB device, the nearest high-bandwidth capable hub receives
a SPLIT token followed by one or two USB packets at high bandwidth, performs the data transfer at full or low bandwidth,
and provides the response at high bandwidth when prompted by a second SPLIT token. There are two basic forms of data
packet, DATA0 and DATA1. A data packet must always be preceded by an address token, and is usually followed by a
handshake token from the receiver back to the transmitter. The two packet types provide the 1-bit sequence number
required by Stop-and-wait ARQ. If a USB host does not receive a response (such as an ACK) for data it has transmitted,
it does not know if the data was received or not; the data might have been lost in transit, or it might have been
received but the handshake response was lost. Low-bandwidth devices are supported with a special PID value, PRE.
This marks the beginning of a low-bandwidth packet, and is used by hubs that normally do not send full-bandwidth
packets to low-bandwidth devices. Since all PID bytes include four 0 bits, they leave the bus in the full-bandwidth
K state, which is the same as the low-bandwidth J state. It is followed by a brief pause, during which hubs enable
their low-bandwidth outputs, already idling in the J state. Then a low-bandwidth packet follows, beginning with a
sync sequence and PID byte, and ending with a brief period of SE0. Full-bandwidth devices other than hubs can simply
ignore the PRE packet and its low-bandwidth contents, until the final SE0 indicates that a new packet follows. These
and other differences reflect the differing design goals of the two buses: USB was designed for simplicity and low
cost, while FireWire was designed for high performance, particularly in time-sensitive applications such as audio
and video. Although similar in theoretical maximum transfer rate, FireWire 400 is faster than USB 2.0 Hi-Bandwidth
in real-use, especially in high-bandwidth use such as external hard-drives. The newer FireWire 800 standard is twice
as fast as FireWire 400 and faster than USB 2.0 Hi-Bandwidth both theoretically and practically. However, FireWire's
speed advantages rely on low-level techniques such as direct memory access (DMA), which in turn have created opportunities
for security exploits such as the DMA attack. The IEEE 802.3af Power over Ethernet (PoE) standard specifies a more
elaborate power negotiation scheme than powered USB. It operates at 48 V DC and can supply more power (up to 12.95
W, PoE+ 25.5 W) over a cable up to 100 meters compared to USB 2.0, which provides 2.5 W with a maximum cable length
of 5 meters. This has made PoE popular for VoIP telephones, security cameras, wireless access points and other networked
devices within buildings. However, USB is cheaper than PoE provided that the distance is short, and power demand
is low. Ethernet standards require electrical isolation between the networked device (computer, phone, etc.) and
the network cable up to 1500 V AC or 2250 V DC for 60 seconds. USB has no such requirement as it was designed for
peripherals closely associated with a host computer, and in fact it connects the peripheral and host grounds. This
gives Ethernet a significant safety advantage over USB with peripherals such as cable and DSL modems connected to
external wiring that can assume hazardous voltages under certain fault conditions. eSATA does not supply power to
external devices. This is an increasing disadvantage compared to USB. Even though USB 3.0's 4.5 W is sometimes insufficient
to power external hard drives, technology is advancing and external drives gradually need less power, diminishing
the eSATA advantage. eSATAp (power over eSATA; aka ESATA/USB) is a connector introduced in 2009 that supplies power
to attached devices using a new, backward-compatible connector. On a notebook, eSATAp usually supplies only 5 V to
power a 2.5-inch HDD/SSD; on a desktop workstation it can additionally supply 12 V to power larger devices including
3.5-inch HDD/SSD and 5.25-inch optical drives. USB 2.0 High-Speed Inter-Chip (HSIC) is a chip-to-chip variant of
USB 2.0 that eliminates the conventional analog transceivers found in normal USB. It was adopted as a standard by
the USB Implementers Forum in 2007. The HSIC physical layer uses about 50% less power and 75% less board area compared
to traditional USB 2.0. HSIC uses two signals at 1.2 V and has a throughput of 480 Mbit/s. Maximum PCB trace length
for HSIC is 10 cm. It does not have low enough latency to support RAM memory sharing between two chips.
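The self-checking PID byte described earlier (a 4-bit PID transmitted alongside its bitwise complement) is easy to illustrate. The sketch below uses the well-known PID byte values for DATA0, DATA1, ACK, and NAK; the `pid_valid` helper is an illustrative name, not part of any real USB stack:

```python
# Well-known USB PID byte values: low nibble = PID, high nibble = complement.
DATA0, DATA1, ACK, NAK = 0xC3, 0x4B, 0xD2, 0x5A

def pid_valid(byte):
    """A received PID byte is valid only when its high nibble is the
    bitwise complement of its low nibble (the 4-bit PID sent twice)."""
    pid = byte & 0x0F
    return ((byte >> 4) & 0x0F) == (~pid & 0x0F)

for b in (DATA0, DATA1, ACK, NAK, 0xC2):   # 0xC2: DATA0 with one bit flipped
    print(f"0x{b:02X} valid={pid_valid(b)}")
```

A single-bit corruption in either nibble breaks the complement relationship, which is the redundancy the standard relies on to reject damaged PIDs.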
Throughout its prehistory and early history, Sichuan and its vicinity in the upper Yangtze region were the cradle of unique local
civilizations that can be dated back to at least the 15th century BC, coinciding with the later years of the
Shang and Zhou dynasties in North China. Sichuan was referred to in ancient Chinese sources as Ba-Shu (巴蜀), an abbreviation
of the kingdoms of Ba and Shu which existed within the Sichuan Basin. Ba included Chongqing and the land in eastern
Sichuan along the Yangtze and some tributary streams, while Shu included today's Chengdu, its surrounding plain and
adjacent territories in western Sichuan. The existence of the early state of Shu was poorly recorded in the main
historical records of China. It was, however, referred to in the Book of Documents as an ally of the Zhou. Accounts
of Shu exist mainly as a mixture of mythological stories and historical legends recorded in local annals such as
the Chronicles of Huayang compiled in the Jin dynasty (265–420), with folk stories such as that of Emperor Duyu (杜宇)
who taught the people agriculture and transformed himself into a cuckoo after his death. The existence of a highly
developed civilization with an independent bronze industry in Sichuan eventually came to light with an archaeological
discovery in 1986 at a small village named Sanxingdui in Guanghan, Sichuan. This site, believed to be an ancient
city of Shu, was initially discovered by a local farmer in 1929 who found jade and stone artefacts. Excavations by
archaeologists in the area yielded few significant finds until 1986 when two major sacrificial pits were found with
spectacular bronze items as well as artefacts in jade, gold, earthenware, and stone. This and other discoveries in
Sichuan contest the conventional historiography that the local culture and technology of Sichuan were undeveloped
in comparison to the technologically and culturally "advanced" Yellow River valley of north-central China. The name
Shu continues to be used to refer to Sichuan in subsequent periods in Chinese history up to the present day. The
rulers of the expansionist Qin dynasty, based in present-day Gansu and Shaanxi, were only the first strategists to
realize that the area's military importance matched its commercial and agricultural significance. The Sichuan basin
is surrounded by the Himalayas to the west, the Qin Mountains to the north, and mountainous areas of Yunnan to the
south. Since the Yangtze flows through the basin and then through the perilous Yangzi Gorges to eastern and southern
China, Sichuan was a staging area for amphibious military forces and a refuge for political refugees.
Qin armies finished their conquest of the kingdoms of Shu and Ba by 316 BC. Any written records and civil achievements
of earlier kingdoms were destroyed. Qin administrators introduced improved agricultural technology. Li Bing engineered
the Dujiangyan irrigation system to control the Min River, a major tributary of the Yangtze. This innovative hydraulic
system was composed of movable weirs which could be adjusted for high or low water flow according to the season,
to either provide irrigation or prevent floods. The increased agricultural output and taxes made the area a source
of provisions and men for Qin's unification of China. Sichuan came under the firm control of a Chinese central government
during the Sui dynasty, but it was during the subsequent Tang dynasty that Sichuan regained its previous political
and cultural prominence for which it was known during the Han. Chengdu became nationally known as a supplier of armies
and the home of Du Fu, who is sometimes called China's greatest poet. During the An Lushan Rebellion (755-763), Emperor
Xuanzong of Tang fled from Chang'an to Sichuan. The region was torn by constant warfare and economic distress as
it was besieged by the Tibetan Empire. In the middle of the 17th century, the peasant rebel leader Zhang Xianzhong
(1606–1646) from Yan'an, Shanxi Province, nicknamed Yellow Tiger, led his peasant troop from north China to the south,
and conquered Sichuan. Upon capturing it, he declared himself emperor of the Daxi Dynasty (大西王朝). In response to
the resistance from local elites, he massacred a large native population. As a result of the massacre as well as
years of turmoil during the Ming-Qing transition, the population of Sichuan fell sharply, requiring a massive resettlement
of people from the neighboring Huguang Province (modern Hubei and Hunan) and other provinces during the Qing dynasty.
In the 20th century, as Beijing, Shanghai, Nanjing, and Wuhan were all occupied by the Japanese during the Second
Sino-Japanese War, the capital of the Republic of China was temporarily relocated to Chongqing, then a major city
in Sichuan. An enduring legacy of this move is that nearby inland provinces, such as Shaanxi, Gansu, and Guizhou,
which previously never had modern Western-style universities, began to be developed in this regard. The difficulty
of accessing the region overland from the eastern part of China, together with the foggy climate hindering the accuracy of Japanese
bombing of the Sichuan Basin, made the region the stronghold of Chiang Kai-shek's Kuomintang government during 1938–45
and led to the Bombing of Chongqing. The Second Sino-Japanese War was soon followed by the resumed Chinese Civil
War; as the cities of East China fell to the Communists one after another, the Kuomintang government again tried
to make Sichuan its stronghold on the mainland, although the area had already seen some Communist activity, as it lay
on the route of the Long March. Chiang Kai-shek himself flew to Chongqing from Taiwan in November 1949 to lead
the defense. But the same month Chongqing fell to the Communists, followed by Chengdu on 10 December. The Kuomintang
general Wang Sheng wanted to stay behind with his troops to continue anticommunist guerilla war in Sichuan, but was
recalled to Taiwan. Many of his soldiers made their way there as well, via Burma. From 1955 until 1997, Sichuan was
China's most populous province, hitting the 100 million mark shortly after the 1982 census figure of 99,730,000.
This changed in 1997 when the Sub-provincial city of Chongqing as well as the three surrounding prefectures of Fuling,
Wanxian, and Qianjiang were split off into the new Chongqing Municipality. The new municipality was formed to spearhead
China's effort to economically develop its western provinces, as well as to coordinate the resettlement of residents
from the reservoir areas of the Three Gorges Dam project. Sichuan consists of two geographically very distinct parts.
The eastern part of the province is mostly within the fertile Sichuan basin (which is shared by Sichuan with Chongqing
Municipality). The western Sichuan consists of the numerous mountain ranges forming the easternmost part of the Qinghai-Tibet
Plateau, known generically as the Hengduan Mountains. One of these ranges, the Daxue Mountains, contains the highest
point of the province, Gongga Shan, at 7,556 metres (24,790 ft) above sea level. The Yangtze River and its tributaries
flow through the mountains of western Sichuan and the Sichuan Basin; thus, the province is upstream of the great
cities that stand along the Yangtze River further to the east, such as Chongqing, Wuhan, Nanjing and Shanghai. One
of the major tributaries of the Yangtze within the province is the Min River of central Sichuan, which joins the
Yangtze at Yibin. Sichuan's four main rivers, to which its name ("four rivers") literally refers, are the Jialing Jiang, Tuo Jiang, Yalong Jiang,
and Jinsha Jiang. Due to great differences in terrain, the climate of the province is highly variable. In general
it has strong monsoonal influences, with rainfall heavily concentrated in the summer. Under the Köppen climate classification,
the Sichuan Basin (including Chengdu) in the eastern half of the province experiences a humid subtropical climate
(Köppen Cwa or Cfa), with long, hot, humid summers and short, mild to cool, dry and cloudy winters. Consequently,
it has China's lowest sunshine totals. The western region has mountainous areas producing a cooler but sunnier climate.
Winters are cool to very cold and summers mild, with temperatures generally decreasing at greater elevation. However,
due to high altitude and its inland location, many areas such as Garze County and Zoige County in Sichuan exhibit
a subarctic climate (Köppen Dwc), featuring extremely cold winters down to −30 °C and even cold summer nights. The
region is geologically active with landslides and earthquakes. Average elevation ranges from 2,000 to 3,500 meters;
average temperatures range from 0 to 15 °C. The southern part of the province, including Panzhihua and Xichang, has
a sunny climate with short, very mild winters and very warm to hot summers. Sichuan has been historically known as
the "Province of Abundance". It is one of the major agricultural production bases of China. Grain, including rice
and wheat, is the major product with output that ranked first in China in 1999. Commercial crops include citrus fruits,
sugar cane, sweet potatoes, peaches and grapes. Sichuan also had the largest output of pork among all the provinces
and the second largest output of silkworm cocoons in 1999. Sichuan is rich in mineral resources. It has more than
132 kinds of proven underground mineral resources, including reserves of vanadium, titanium, and lithium that are the largest in
China. The Panxi region alone possesses 13.3% of the reserves of iron, 93% of titanium, 69% of vanadium, and 83%
of the cobalt of the whole country. Sichuan also possesses China's largest proven natural gas reserves, the majority
of which is transported to more developed eastern regions. Sichuan is one of the major industrial centers of China.
In addition to heavy industries such as coal, energy, iron and steel, the province has also established a light industrial
sector comprising building materials, wood processing, food and silk processing. Chengdu and Mianyang are the production
centers for textiles and electronics products. Deyang, Panzhihua, and Yibin are the production centers for machinery,
metallurgical industries, and wine, respectively. Sichuan's wine production accounted for 21.9% of the country’s
total production in 2000. The Three Gorges Dam, the largest dam ever undertaken, was built on the Yangtze River
in nearby Hubei province to control flooding in the Sichuan Basin, neighboring Yunnan province, and downstream. The
plan is hailed by some as China's efforts to shift towards alternative energy sources and to further develop its
industrial and commercial bases, but others have criticised it for its potentially harmful effects, such as massive
resettlement of residents in the reservoir areas, loss of archeological sites, and ecological damages. According
to the Sichuan Department of Commerce, the province's total foreign trade was US$22.04 billion in 2008, with an annual
increase of 53.3 percent. Exports were US$13.1 billion, an annual increase of 52.3 percent, while imports were US$8.93
billion, an annual increase of 54.7 percent. These achievements were accomplished because of significant changes
in China's foreign trade policy, acceleration of the yuan's appreciation, increase of commercial incentives and increase
in production costs. The 18 cities and counties witnessed a steady rate of increase. Chengdu, Suining, Nanchong,
Dazhou, Ya'an, Abazhou, and Liangshan all saw an increase of more than 40 percent while Leshan, Neijiang, Luzhou,
Meishan, Ziyang, and Yibin saw an increase of more than 20 percent. Foreign trade in Zigong, Panzhihua, Guang'an,
Bazhong and Ganzi remained constant. The Sichuan government raised the minimum wage in the province by 12.5 percent
at the end of December 2007. The monthly minimum wage went up from 400 to 450 yuan, with a minimum of 4.9 yuan per
hour for part-time work, effective 26 December 2007. The government also reduced the four-tier minimum wage structure
to three. The top tier mandates a minimum of 650 yuan per month, or 7.1 yuan per hour. National law allows each province
to set minimum wages independently, but with a floor of 450 yuan per month. Chengdu Economic and Technological Development
Zone (Chinese: 成都经济技术开发区; pinyin: Chéngdū jīngjì jìshù kāifā qū) was approved as a state-level development zone in
February 2000. The zone now has a developed area of 10.25 km2 (3.96 sq mi) and has a planned area of 26 km2 (10 sq
mi). Chengdu Economic and Technological Development Zone (CETDZ) lies 13.6 km (8.5 mi) east of Chengdu, the capital
city of Sichuan Province and the hub of transportation and communication in southwest China. The zone has attracted
investors and developers from more than 20 countries to carry out their projects there. Industries encouraged in
the zone include mechanical, electronic, new building materials, medicine and food processing. Established in 1988,
Chengdu Hi-tech Industrial Development Zone (Chinese: 成都高新技术产业开发区; pinyin: Chéngdū Gāoxīn Jìshù Chǎnyè Kāifā Qū)
was approved as one of the first national hi-tech development zones in 1991. In 2000, it was opened to APEC and has
been recognized as a national advanced hi-tech development zone in successive assessment activities held by China's
Ministry of Science and Technology. It ranks 5th among the 53 national hi-tech development zones in China in terms
of comprehensive strength. Chengdu Hi-tech Development Zone covers an area of 82.5 km2 (31.9 sq mi), consisting of
the South Park and the West Park. By relying on the city sub-center, which is under construction, the South Park
is focusing on creating a modernized industrial park of science and technology with scientific and technological
innovation, incubation R&D, modern service industry and Headquarters economy playing leading roles. Priority has
been given to the development of software industry. Located on both sides of the "Chengdu-Dujiangyan-Jiuzhaigou"
golden tourism channel, the West Park aims to build a comprehensive industrial park targeting industrial clustering,
with complete supporting functions. The West Park gives priority to three major industries, i.e. electronic information,
biomedicine and precision machinery. Mianyang Hi-Tech Industrial Development Zone was established in 1992, with a
planned area of 43 km2 (17 sq mi). The zone is situated 96 kilometers away from Chengdu, and is 8 km (5.0 mi) away
from Mianyang Airport. Since its establishment, the zone has accumulated 177.4 billion yuan of industrial output, 46.2
billion yuan of gross domestic product, and 6.768 billion yuan of fiscal revenue. There are more than 136 high-tech enterprises
in the zone, and they account for more than 90% of the total industrial output. On 3 November 2007, the Sichuan
Transportation Bureau announced that the Sui-Yu Expressway was completed after three years of construction. After
completion of the Chongqing section of the road, the 36.64 km (22.77 mi) expressway connected Cheng-Nan Expressway
and formed the shortest expressway from Chengdu to Chongqing. The new expressway is 50 km (31 mi) shorter than the
pre-existing road between Chengdu and Chongqing; thus journey time between the two cities was reduced by an hour,
now taking two and a half hours. The Sui-Yu Expressway is a four-lane overpass with a speed limit of 80 km/h (50
mph). The total investment was 1.045 billion yuan. The majority of the province's population is Han Chinese, who
are found scattered throughout the region with the exception of the far western areas. Significant minorities
of Tibetan, Yi, Qiang and Nakhi people reside in the western portion, which is environmentally fragile, impoverished,
and affected by inclement weather and natural disasters. Sichuan's capital of Chengdu is home to a large community of
Tibetans, with 30,000 permanent Tibetan residents and a floating population of up to 200,000 Tibetans. The Eastern Lipo,
included with either the Yi or the Lisu people, as well as the A-Hmao, also are among the ethnic groups of the provinces.
Sichuan was China's most populous province before Chongqing became a directly-controlled municipality; it is currently
the fourth most populous, after Guangdong, Shandong and Henan. As of 1832, Sichuan was the most populous of the 18
provinces in China, with an estimated population at that time of 21 million. It was the third most populous sub-national
entity in the world, after Uttar Pradesh, India and the Russian Soviet Federative Socialist Republic until 1991,
when the Soviet Union was dissolved. It is also one of only six subnational entities ever to reach 100 million people (Uttar Pradesh,
the Russian RSFSR, Maharashtra, Sichuan, Bihar and Punjab). It is currently 10th. Garzê Tibetan Autonomous Prefecture
and Ngawa Tibetan and Qiang Autonomous Prefecture in western Sichuan are populated by Tibetans and Qiang people.
Tibetans speak the Khams and Amdo Tibetan, which are Tibetic languages, as well as various Qiangic languages. The
Qiang speak Qiangic languages and often Tibetic languages as well. The Yi people of Liangshan Yi Autonomous Prefecture
in southern Sichuan speak the Nuosu language, which is one of the Lolo-Burmese languages; Yi is written using the
Yi script, a syllabary standardized in 1974. The Southwest University for Nationalities has one of China's most prominent
Tibetology departments, and the Southwest Minorities Publishing House prints literature in minority languages. In
the minority-inhabited regions of Sichuan, there is bilingual signage and public school instruction in non-Mandarin
minority languages. The Sichuanese are proud of their cuisine, known as one of the Four Great Traditions of Chinese
cuisine. As the saying goes, the cuisine here offers "one dish, one style; hundreds of dishes, hundreds of tastes",
a description of its acclaimed diversity. The most prominent traits of Sichuanese cuisine are described by four words:
spicy, hot, fresh and fragrant. Sichuan cuisine is popular throughout China, as are Sichuan chefs. Two
well-known Sichuan chefs are Chen Kenmin and his son Chen Kenichi, who was Iron Chef Chinese on the Japanese television
series "Iron Chef".
Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most
of the world's writing systems. Developed in conjunction with the Universal Coded Character Set (UCS) standard and
published as The Unicode Standard, the latest version of Unicode contains a repertoire of more than 120,000 characters
covering 129 modern and historic scripts, as well as multiple symbol sets. The standard consists of a set of code
charts for visual reference, an encoding method and set of standard character encodings, a set of reference data
files, and a number of related items, such as character properties, rules for normalization, decomposition, collation,
rendering, and bidirectional display order (for the correct display of text containing both right-to-left scripts,
such as Arabic and Hebrew, and left-to-right scripts). As of June 2015, the most recent version is Unicode
8.0. The standard is maintained by the Unicode Consortium. Unicode can be implemented by different character encodings.
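As a concrete illustration, Python can show how the same characters map to different byte sequences under two of these encodings; the sample string is arbitrary, chosen to span an ASCII character, a BMP character, and a supplementary-plane character:

```python
# "A" is ASCII, "€" (U+20AC) is in the BMP, "𝄞" (U+1D11E) lies beyond it.
for ch in "A€𝄞":
    print(f"U+{ord(ch):04X}",
          "utf8:", ch.encode("utf-8").hex(" "),
          "utf16:", ch.encode("utf-16-be").hex(" "))
```

The output shows UTF-8 using one byte for ASCII and up to four bytes otherwise, while UTF-16 uses one 16-bit unit for BMP characters and a surrogate pair (d8 34 dd 1e) for U+1D11E, matching the description of the encodings that follows.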
The most commonly used encodings are UTF-8, UTF-16 and the now-obsolete UCS-2. UTF-8 uses one byte for any ASCII
character, all of which have the same code values in both UTF-8 and ASCII encoding, and up to four bytes for other
characters. UCS-2 uses a 16-bit code unit (two 8-bit bytes) for each character but cannot encode every character
in the current Unicode standard. UTF-16 extends UCS-2, using one 16-bit unit for the characters that were representable
in UCS-2 and two 16-bit units (4 × 8 bits) to handle each of the additional characters. Unicode has the explicit
aim of transcending the limitations of traditional character encodings, such as those defined by the ISO 8859 standard,
which find wide usage in various countries of the world but remain largely incompatible with each other. Many traditional
character encodings share a common problem in that they allow bilingual computer processing (usually using Latin
characters and the local script), but not multilingual computer processing (computer processing of arbitrary scripts
mixed with each other). The first 256 code points were made identical to the content of ISO-8859-1 so as to make
it trivial to convert existing western text. Many essentially identical characters were encoded multiple times at
different code points to preserve distinctions used by legacy encodings and therefore, allow conversion from those
encodings to Unicode (and back) without losing any information. For example, the "fullwidth forms" section of code
points encompasses a full Latin alphabet that is separate from the main Latin alphabet section because in Chinese,
Japanese, and Korean (CJK) fonts, these Latin characters are rendered at the same width as CJK ideographs, rather
than at half the width. For other examples, see Duplicate characters in Unicode. In 1996, a surrogate character mechanism
was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace
to over a million code points, which allowed for the encoding of many historic scripts (e.g., Egyptian Hieroglyphs)
and thousands of rarely used or obsolete characters that had not been anticipated as needing encoding. Among the
characters not originally intended for Unicode are rarely used Kanji or Chinese characters, many of which are part
of personal and place names, making them rarely used, but much more essential than envisioned in the original architecture
of Unicode. Each code point has a single General Category property. The major categories are: Letter, Mark, Number,
Punctuation, Symbol, Separator and Other. Within these categories, there are subdivisions. The General Category is
not useful for every purpose, since legacy encodings have used multiple characteristics per single code point. E.g.,
U+000A
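The UTF-16 unit counts and the General Category property described above can be checked directly in Python; the sketch below is illustrative only, and U+13000 (an Egyptian hieroglyph) is chosen simply as one example of a supplementary character outside the original 16-bit range:

```python
import unicodedata

# A character inside the Basic Multilingual Plane encodes as a single
# 16-bit unit in UTF-16; a supplementary character (beyond U+FFFF)
# requires a surrogate pair, i.e. two 16-bit units.
bmp_char = "A"            # U+0041, representable in the old UCS-2
supp_char = "\U00013000"  # U+13000, EGYPTIAN HIEROGLYPH A001

print(len(bmp_char.encode("utf-16-be")))   # 2 bytes: one 16-bit unit
print(len(supp_char.encode("utf-16-be")))  # 4 bytes: surrogate pair

# Every code point has exactly one General Category; the first letter
# gives the major class (L = Letter, N = Number, C = Other, ...).
print(unicodedata.category("A"))    # 'Lu' -- Letter, uppercase
print(unicodedata.category("5"))    # 'Nd' -- Number, decimal digit
print(unicodedata.category("\n"))   # 'Cc' -- Other, control
```

The 4-byte result for U+13000 is the surrogate mechanism introduced in Unicode 2.0 at work: the code point is split across a high and a low surrogate, which is how UTF-16 escapes the 16-bit limit while remaining compatible with UCS-2 data.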
Detroit (/dᵻˈtrɔɪt/) is the most populous city in the U.S. state of Michigan, the fourth-largest city in the Midwest and
the largest city on the United States–Canada border. It is the seat of Wayne County, the most populous county in
the state. Detroit's metropolitan area, known as Metro Detroit, is home to 5.3 million people, making it the fourteenth-most
populous metropolitan area in the United States and the second-largest in the Midwestern United States (behind Chicago).
It is a major port on the Detroit River, a strait that connects the Great Lakes system to the Saint Lawrence Seaway.
The City of Detroit anchors the second-largest economic region in the Midwest, behind Chicago, and the thirteenth-largest
in the United States. Detroit is the center of a three-county urban area (population 3,734,090, area of 1,337 square
miles [3,460 km2], per the 2010 United States Census), a six-county metropolitan statistical area (2010 Census population
of 4,296,250, area of 3,913 square miles [10,130 km2]), and a nine-county Combined Statistical Area (2010 Census
population of 5,218,852, area of 5,814 square miles [15,060 km2]). The Detroit–Windsor area, a commercial link straddling
the Canada–U.S. border, has a total population of about 5,700,000. The Detroit metropolitan region holds roughly
one-half of Michigan's population. Due to industrial restructuring and loss of jobs in the auto industry, Detroit
lost considerable population from the late 20th century to present. Between 2000 and 2010 the city's population fell
by 25 percent, changing its ranking from the nation's 10th-largest city to 18th. In 2010, the city had a population
of 713,777, more than a 60 percent drop from a peak population of over 1.8 million at the 1950 census. This resulted
from suburbanization, industrial restructuring, and the decline of Detroit's auto industry. Following the shift of
population and jobs to its suburbs or other states or nations, the city has focused on becoming the metropolitan
region's employment and economic center. Downtown Detroit has held an increased role as an entertainment destination
in the 21st century, with the restoration of several historic theatres, several new sports stadiums, and a riverfront
revitalization project. More recently, the population of Downtown Detroit, Midtown Detroit, and a handful of other
neighborhoods has increased. Many other neighborhoods remain distressed, with extensive abandonment of properties.
The Governor of Michigan, Rick Snyder, declared a financial emergency for the city in March 2013, appointing an emergency
manager. On July 18, 2013, Detroit filed the largest municipal bankruptcy case in U.S. history. It was declared bankrupt
by Judge Steven W. Rhodes of the Bankruptcy Court for the Eastern District of Michigan on December 3, 2013; he cited
its $18.5 billion debt and declared that negotiations with its thousands of creditors were unfeasible. On November
7, 2014, Judge Rhodes approved the city's bankruptcy plan, allowing the city to begin the process of exiting bankruptcy.
The City of Detroit successfully exited Chapter 9 municipal bankruptcy with all finances handed back to the city
at midnight on December 11, 2014. On the shores of the strait, in 1701, the French officer Antoine de la Mothe Cadillac,
along with fifty-one French people and French Canadians, founded a settlement called Fort Pontchartrain du Détroit,
naming it after Louis Phélypeaux, comte de Pontchartrain, Minister of Marine under Louis XIV. France offered free
land to colonists to attract families to Detroit; when it reached a total population of 800 in 1765, it was the largest
city between Montreal and New Orleans, both also French settlements. By 1773, the population of Detroit was 1,400.
By 1778, its population was up to 2,144 and it was the third-largest city in the Province of Quebec. The region grew
based on the lucrative fur trade, in which numerous Native American people had important roles. Detroit's city flag
reflects its French colonial heritage. (See Flag of Detroit). Descendants of the earliest French and French Canadian
settlers formed a cohesive community, which was gradually displaced as the dominant population after more Anglo-American
settlers came to the area in the early 19th century. Living along the shores of Lake St. Clair, and south to Monroe
and downriver suburbs, the French Canadians of Detroit, also known as Muskrat French, remain a subculture in the
region today. From 1805 to 1847, Detroit was the capital of Michigan (first the territory, then the state). Detroit
surrendered without a fight to British troops during the War of 1812 in the Siege of Detroit. The Battle of Frenchtown
(January 18–23, 1813) was part of a United States effort to retake the city, and American troops suffered their highest
fatalities of any battle in the war. This battle is commemorated at River Raisin National Battlefield Park south
of Detroit in Monroe County. Detroit was finally recaptured by the United States later that year. Numerous men from
Detroit volunteered to fight for the Union during the American Civil War, including the 24th Michigan Infantry Regiment
(part of the legendary Iron Brigade), which fought with distinction and suffered 82% casualties at the Battle of
Gettysburg in 1863. When the First Volunteer Infantry Regiment arrived to fortify Washington, DC, President Abraham
Lincoln is quoted as saying "Thank God for Michigan!" George Armstrong Custer led the Michigan Brigade during the
Civil War and called them the "Wolverines". During the late 19th century, several Gilded Age mansions reflecting
the wealth of industry and shipping magnates were built east and west of the current downtown, along the major avenues
of the Woodward plan. Most notable among them was the David Whitney House located at 4421 Woodward Avenue, which
became a prime location for mansions. During this period some referred to Detroit as the Paris of the West for its
architecture, grand avenues in the Paris style, and for Washington Boulevard, recently electrified by Thomas Edison.
The city had grown steadily from the 1830s with the rise of shipping, shipbuilding, and manufacturing industries.
Strategically located along the Great Lakes waterway, Detroit emerged as a major port and transportation hub. With
the rapid growth of industrial workers in the auto factories, labor unions such as the American Federation of Labor
and the United Auto Workers fought to organize workers to gain them better working conditions and wages. They initiated
strikes and other tactics in support of improvements such as the 8-hour day/40-hour work week, increased wages, greater
benefits and improved working conditions. The labor activism during those years increased influence of union leaders
in the city such as Jimmy Hoffa of the Teamsters and Walter Reuther of the Autoworkers. Detroit, like many places
in the United States, developed racial conflict and discrimination in the 20th century following rapid demographic
changes as hundreds of thousands of new workers were attracted to the industrial city; in a short period it became
the 4th-largest city in the nation. The Great Migration brought rural blacks from the South; they were outnumbered
by southern whites who also migrated to the city. Immigration brought southern and eastern Europeans of Catholic
and Jewish faith; these new groups competed with native-born whites for jobs and housing in the booming city. Detroit
was one of the major Midwest cities that was a site for the dramatic urban revival of the Ku Klux Klan beginning
in 1915. "By the 1920s the city had become a stronghold of the KKK," whose members opposed Catholic and Jewish immigrants,
as well as black Americans. The Black Legion, a secret vigilante group, was active in the Detroit area in the 1930s,
when one-third of its estimated 20,000 to 30,000 members in Michigan were based in the city. It was defeated after
numerous prosecutions following the kidnapping and murder in 1936 of Charles Poole, a Catholic Works Progress Administration
organizer. A total of 49 men of the Black Legion were convicted of numerous crimes, with many sentenced to life in
prison for murder. Jobs expanded so rapidly that 400,000 people were attracted to the city from 1941 to 1943, including
50,000 blacks in the second wave of the Great Migration, and 350,000 whites, many of them from the South. Some European
immigrants and their descendants feared black competition for jobs and housing. The federal government prohibited
discrimination in defense work, but when, in June 1943, Packard promoted three blacks to work next to whites on its
assembly lines, 25,000 whites walked off the job. The Detroit race riot of 1943 took place three weeks after the
Packard plant protest. Over the course of three days, 34 people were killed, of whom 25 were African American, and
approximately 600 were injured, 75% black people. As in other major American cities in the postwar era, construction
of an extensive highway and freeway system around Detroit and pent-up demand for new housing stimulated suburbanization;
highways made commuting by car easier. In 1956, Detroit's last heavily used electric streetcar line along the length
of Woodward Avenue was removed and replaced with gas-powered buses. It was the last line of what had once been a
534-mile network of electric streetcars. In 1941 at peak times, a streetcar ran on Woodward Avenue every 60 seconds.
In June 1963, Rev. Martin Luther King, Jr. gave a major speech in Detroit that foreshadowed his "I Have a Dream"
speech in Washington, D.C. two months later. While the African-American Civil Rights Movement gained significant
federal civil rights laws in 1964 and 1965, longstanding inequities resulted in confrontations between the police
and inner city black youth wanting change. Longstanding tensions in Detroit culminated in the Twelfth Street riot
in July 1967. Governor George W. Romney ordered the Michigan National Guard into Detroit, and President Johnson sent
in U.S. Army troops. The result was 43 dead, 467 injured, over 7,200 arrests, and more than 2,000 buildings destroyed,
mostly in black residential and business areas. Thousands of small businesses closed permanently or relocated to
safer neighborhoods. The affected district lay in ruins for decades. It was the most costly riot in the United States.
On August 18, 1970, the NAACP filed suit against Michigan state officials, including Governor William Milliken, charging
de facto public school segregation. The NAACP argued that although schools were not legally segregated, the city
of Detroit and its surrounding counties had enacted policies to maintain racial segregation in public schools. The
NAACP also suggested a direct relationship between unfair housing practices and educational segregation, which followed
segregated neighborhoods. The District Court held all levels of government accountable for the segregation in its
ruling. The Sixth Circuit Court affirmed some of the decision, holding that it was the state's responsibility to
integrate across the segregated metropolitan area. The U.S. Supreme Court took up the case February 27, 1974. The
subsequent Milliken v. Bradley decision had wide national influence. In a narrow decision, the Court found that schools
were a subject of local control and that suburbs could not be forced to solve problems in the city's school district.
"Milliken was perhaps the greatest missed opportunity of that period," said Myron Orfield, professor of law at the
University of Minnesota. "Had that gone the other way, it would have opened the door to fixing nearly all of Detroit's
current problems." John Mogk, a professor of law and an expert in urban planning at Wayne State University in Detroit,
says, "Everybody thinks that it was the riots [in 1967] that caused the white families to leave. Some people were
leaving at that time but, really, it was after Milliken that you saw mass flight to the suburbs. If the case had
gone the other way, it is likely that Detroit would not have experienced the steep decline in its tax base that has
occurred since then." In November 1973, the city elected Coleman Young as its first black mayor. After taking office,
Young emphasized increasing racial diversity in the police department. Young also worked to improve Detroit's transportation
system, but tension between Young and his suburban counterparts over regional matters was problematic throughout
his mayoral term. In 1976, the federal government offered $600 million for building a regional rapid transit system,
under a single regional authority. But the inability of Detroit and its suburban neighbors to solve conflicts over
transit planning resulted in the region losing the majority of funding for rapid transit. Following the failure to
reach an agreement over the larger system, the City moved forward with construction of the elevated downtown circulator
portion of the system, which became known as the Detroit People Mover. The gasoline crises of 1973 and 1979 also
affected Detroit and the U.S. auto industry. Buyers chose smaller, more fuel-efficient cars made by foreign makers
as the price of gas rose. Efforts to revive the city were stymied by the struggles of the auto industry, as their
sales and market share declined. Automakers laid off thousands of employees and closed plants in the city, further
eroding the tax base. To counteract this, the city used eminent domain to build two large new auto assembly plants
in the city. As mayor, Young sought to revive the city by seeking to increase investment in the city's declining
downtown. The Renaissance Center, a mixed-use office and retail complex, opened in 1977. This group of skyscrapers
was an attempt to keep businesses in downtown. Young also gave city support to other large developments to attract
middle and upper-class residents back to the city. Despite the Renaissance Center and other projects, the downtown
area continued to lose businesses to the suburbs. Major stores and hotels closed and many large office buildings
went vacant. Young was criticized for being too focused on downtown development and not doing enough to lower the
city's high crime rate and improve city services. Long a major population center and site of worldwide automobile
manufacturing, Detroit has suffered a long economic decline produced by numerous factors. Like many industrial American
cities, Detroit reached its population peak in the 1950 census. The peak population was 1.8 million people. Following
suburbanization, industrial restructuring, and loss of jobs (as described above), by the 2010 census, the city had
less than 40 percent of that number, with just over 700,000 residents. The city has declined in population in each
census since 1950. Campus Martius, a reconfiguration of downtown's main intersection as a new park was opened in
2004. The park has been cited as one of the best public spaces in the United States. The city's riverfront has been
the focus of redevelopment, following successful examples of other older industrial cities. In 2001, the first portion
of the International Riverfront was completed as a part of the city's 300th anniversary celebration, with miles of
parks and associated landscaping completed in succeeding years. In 2011, the Port Authority Passenger Terminal opened
with the river walk connecting Hart Plaza to the Renaissance Center. Since 2006, $9 billion has been invested in
downtown and surrounding neighborhoods; $5.2 billion of that came in 2013 and 2014. Construction activity,
particularly rehabilitation of historic downtown buildings, has increased markedly. The number of vacant downtown
buildings has dropped from nearly 50 to around 13. Among the most notable redevelopment projects are the Book
Cadillac Hotel and the Fort Shelby Hotel; the David Broderick Tower; and the David Whitney Building. Meanwhile, work
is underway or set to begin on the historic, vacant Wurlitzer Building and Strathmore Hotel. Detroit's
protracted decline has resulted in severe urban decay and thousands of empty buildings around the city. Some parts
of Detroit are so sparsely populated that the city has difficulty providing municipal services. The city has considered
various solutions, such as demolishing abandoned homes and buildings; removing street lighting from large portions
of the city; and encouraging the small population in certain areas to move to more populated locations. While some
have estimated 20,000 stray dogs roam the city, studies have shown the true number to be around 1,000 to 3,000. Roughly
half of the owners of Detroit's 305,000 properties failed to pay their 2011 tax bills, resulting in about $246 million
in taxes and fees going uncollected, nearly half of which was due to Detroit; the rest of the money would have been
earmarked for Wayne County, Detroit Public Schools, and the library system. The city slopes gently from the northwest
to southeast on a till plain composed largely of glacial and lake clay. The most notable topographical feature in
the city is the Detroit Moraine, a broad clay ridge on which the older portions of Detroit and Windsor sit,
rising approximately 62 feet (19 m) above the river at its highest point. The highest elevation in the city is located
directly north of Gorham Playground on the northwest side approximately three blocks south of 8 Mile Road, at a height
of 675 to 680 feet (206 to 207 m). Detroit's lowest elevation is along the Detroit River, at a surface height of
572 feet (174 m). Detroit has four border crossings: the Ambassador Bridge and the Detroit–Windsor Tunnel provide
motor vehicle thoroughfares, with the Michigan Central Railway Tunnel providing railroad access to and from Canada.
The fourth border crossing is the Detroit–Windsor Truck Ferry, located near the Windsor Salt Mine and Zug Island.
Near Zug Island, the southwest part of the city was developed over a 1,500-acre (610 ha) salt mine that is 1,100
feet (340 m) below the surface. The Detroit Salt Company mine has over 100 miles (160 km) of roads within. Detroit
and the rest of southeastern Michigan have a humid continental climate (Köppen Dfa) which is influenced by the Great
Lakes; the city and close-in suburbs are part of USDA Hardiness zone 6b, with farther-out northern and western suburbs
generally falling in zone 6a. Winters are cold, with moderate snowfall and temperatures not rising above freezing
on an average of 44 days annually, while dropping to or below 0 °F (−18 °C) an average of 4.4 days a year; summers are
warm to hot with temperatures exceeding 90 °F (32 °C) on 12 days. The warm season runs from May to September. The
monthly daily mean temperature ranges from 25.6 °F (−3.6 °C) in January to 73.6 °F (23.1 °C) in July. Official temperature
extremes range from 105 °F (41 °C) on July 24, 1934 down to −21 °F (−29 °C) on January 21, 1984; the record low maximum
is −4 °F (−20 °C) on January 19, 1994, while, conversely the record high minimum is 80 °F (27 °C) on August 1, 2006,
the most recent of five occurrences. A decade or two may pass between readings of 100 °F (38 °C) or higher, which
last occurred July 17, 2012. The average window for freezing temperatures is October 20 through April 22, allowing a
growing season of 180 days. Precipitation is moderate and fairly evenly distributed throughout the year, though the
warmer months such as May and June average more; it totals 33.5 inches (850 mm) annually, historically ranging
from 20.49 in (520 mm) in 1963 to 47.70 in (1,212 mm) in 2011. Snowfall, which typically falls in measurable amounts
between November 15 and April 4 (occasionally in October and very rarely in May), averages 42.5 inches (108 cm)
per season, although historically ranging from 11.5 in (29 cm) in 1881−82 to 94.9 in (241 cm) in 2013−14. A thick
snowpack is not often seen, with an average of only 27.5 days with 3 in (7.6 cm) or more of snow cover. Thunderstorms
are frequent in the Detroit area. These usually occur during spring and summer. Seen in panorama, Detroit's waterfront
shows a variety of architectural styles. The post modern Neo-Gothic spires of the One Detroit Center (1993) were
designed to blend with the city's Art Deco skyscrapers. Together with the Renaissance Center, they form a distinctive
and recognizable skyline. Examples of the Art Deco style include the Guardian Building and Penobscot Building downtown,
as well as the Fisher Building and Cadillac Place in the New Center area near Wayne State University. Among the city's
prominent structures are the Fox Theatre (the largest in the United States), the Detroit Opera House, and the Detroit Institute of
Arts. While the Downtown and New Center areas contain high-rise buildings, the majority of the surrounding city consists
of low-rise structures and single-family homes. Outside of the city's core, residential high-rises are found in upper-class
neighborhoods such as the East Riverfront extending toward Grosse Pointe and the Palmer Park neighborhood just west
of Woodward. The University Commons-Palmer Park district in northwest Detroit, near the University of Detroit Mercy
and Marygrove College, anchors historic neighborhoods including Palmer Woods, Sherwood Forest, and the University
District. The Detroit International Riverfront includes a partially completed three-and-one-half mile riverfront
promenade with a combination of parks, residential buildings, and commercial areas. It extends from Hart Plaza to
the MacArthur Bridge accessing Belle Isle Park (the largest island park in a U.S. city). The riverfront includes
Tri-Centennial State Park and Harbor, Michigan's first urban state park. The second phase is a two-mile (3 km) extension
from Hart Plaza to the Ambassador Bridge for a total of five miles (8 km) of parkway from bridge to bridge. Civic
planners envision that the pedestrian parks will stimulate residential redevelopment of riverfront properties condemned
under eminent domain. Lafayette Park is a revitalized neighborhood on the city's east side, part of the Ludwig Mies
van der Rohe residential district. The 78-acre (32 ha) development was originally called the Gratiot Park. Planned
by Mies van der Rohe, Ludwig Hilberseimer and Alfred Caldwell it includes a landscaped, 19-acre (7.7 ha) park with
no through traffic, in which these and other low-rise apartment buildings are situated. Immigrants have contributed
to the city's neighborhood revitalization, especially in southwest Detroit. Southwest Detroit has experienced a thriving
economy in recent years, as evidenced by new housing, increased business openings and the recently opened Mexicantown
International Welcome Center. The city also has numerous neighborhoods with many vacant properties, resulting in low
population density that stretches city services and infrastructure. These neighborhoods are concentrated
in the northeast and on the city's fringes. A 2009 parcel survey found about a quarter of residential lots in the
city to be undeveloped or vacant, and about 10% of the city's housing to be unoccupied. The survey also reported
that most (86%) of the city's homes are in good condition with a minority (9%) in fair condition needing only minor
repairs. Public funding and private investment have also been made with promises to rehabilitate neighborhoods. In
April 2008, the city announced a $300-million stimulus plan to create jobs and revitalize neighborhoods, financed
by city bonds and paid for by earmarking about 15% of the wagering tax. The city's working plans for neighborhood
revitalizations include 7-Mile/Livernois, Brightmoor, East English Village, Grand River/Greenfield, North End, and
Osborn. Private organizations have pledged substantial funding to the efforts. Additionally, the city has cleared
a 1,200-acre (490 ha) section of land for large-scale neighborhood construction, which the city is calling the Far
Eastside Plan. In 2011, Mayor Bing announced a plan to categorize neighborhoods by their needs and prioritize the
most needed services for those neighborhoods. The loss of industrial and working-class jobs in the city has resulted
in high rates of poverty and associated problems. From 2000 to 2009, the city's estimated median household income
fell from $29,526 to $26,098. As of 2010, the mean income of Detroit was below the overall U.S. average by
several thousand dollars. Of every three Detroit residents, one lives in poverty. Luke Bergmann, author of Getting
Ghost: Two Young Lives and the Struggle for the Soul of an American City, said in 2010, "Detroit is now one of the
poorest big cities in the country." Oakland County in Metro Detroit, once rated amongst the wealthiest US counties
per household, is no longer shown in the top 25 listing of Forbes magazine. But internal county statistical methods
– based on measuring per capita income for counties with more than one million residents – show that Oakland is still
within the top 12, slipping from the 4th-most affluent such county in the U.S. in 2004 to 11th-most affluent in 2009.
Detroit dominates Wayne County, which has an average household income of about $38,000, compared to Oakland County's
$62,000. The city's population increased more than sixfold during the first half of the 20th century, fed largely
by an influx of European, Middle Eastern (Lebanese, Assyrian/Chaldean), and Southern migrants to work in the burgeoning
automobile industry. In 1940, Whites were 90.4% of the city's population. Since 1950 the city has seen a major shift
in its population to the suburbs. In 1910, fewer than 6,000 blacks called the city home; in 1930 more than 120,000
blacks lived in Detroit. The thousands of African Americans who came to Detroit were part of the Great Migration
of the 20th century. Detroit remains one of the most racially segregated cities in the United States. From the 1940s
to the 1970s a second wave of Blacks moved to Detroit to escape Jim Crow laws in the south and find jobs. However,
they soon found themselves excluded from white areas of the city—through violence, laws, and economic discrimination
(e.g., redlining). White residents attacked black homes: breaking windows, starting fires, and exploding bombs. The
pattern of segregation was later magnified by white migration to the suburbs. One of the implications of racial segregation,
which correlates with class segregation, may be overall worse health for some populations. While Blacks/African-Americans
comprised only 13 percent of Michigan's population in 2010, they made up nearly 82 percent of Detroit's population.
The next largest population groups were Whites, at 10 percent, and Hispanics, at 6 percent. According to the 2010
Census, segregation in Detroit has decreased in absolute and in relative terms. In the first decade of the 21st century,
about two-thirds of the total black population in the metropolitan area resided within the city limits of Detroit. The
number of integrated neighborhoods has increased from 100 in 2000 to 204 in 2010. The city has also moved down the
ranking, from number one most segregated to number four. A 2011 op-ed in The New York Times attributed the decreased
segregation rating to the overall exodus from the city, cautioning that these areas may soon become more segregated.
This pattern already happened in the 1970s, when apparent integration was actually a precursor to white flight and
resegregation. Over a 60-year period, white flight occurred in the city. According to an estimate of the Michigan
Metropolitan Information Center, from 2008 to 2009 the percentage of non-Hispanic White residents increased from
8.4% to 13.3%. Some empty nesters and many younger White people moved into the city while many African Americans
moved to the suburbs. Detroit has a Mexican-American population. In the early 20th century thousands of Mexicans
came to Detroit to work in agricultural, automotive, and steel jobs. During the Mexican Repatriation of the 1930s
many Mexicans in Detroit were willingly repatriated or forced to repatriate. By the 1940s the Mexican community began
to settle what is now Mexicantown. The population significantly increased in the 1990s due to immigration from Jalisco.
In 2010 Detroit had 48,679 Hispanics, including 36,452 Mexicans. The number of Hispanics was a 70% increase from
the number in 1990. As of 2002 there are four areas in Detroit with significant Asian and Asian American populations.
Northeast Detroit has a population of Hmong with a smaller group of Lao people. A portion of Detroit next to eastern
Hamtramck includes Bangladeshi Americans, Indian Americans, and Pakistani Americans; nearly all of the Bangladeshi
population in Detroit lives in that area. Many of those residents own small businesses or work in blue collar jobs,
and the population in that area is mostly Muslim. The area north of Downtown Detroit (including the region around
the Henry Ford Hospital, the Detroit Medical Center, and Wayne State University) has transient Asian national origin
residents who are university students or hospital workers. Few of them have permanent residency after schooling ends.
They are mostly Chinese and Indian but the population also includes Filipinos, Koreans, and Pakistanis. In Southwest
Detroit and western Detroit there are smaller, scattered Asian communities including an area in the westside adjacent
to Dearborn and Redford Township that has a mostly Indian Asian population, and a community of Vietnamese and Laotians
in Southwest Detroit. Thousands more employees work in Midtown, north of the central business district. Midtown's
anchors are the city's largest single employer Detroit Medical Center, Wayne State University, and the Henry Ford
Health System in New Center. Midtown is also home to watchmaker Shinola and an array of small and/or startup companies.
New Center is home to TechTown, a research and business incubator hub that is part of the WSU system. Like downtown and
Corktown, Midtown also has a fast-growing retailing and restaurant scene. A number of the city's downtown employers
are relatively new, as there has been a marked trend of companies moving from satellite suburbs around Metropolitan
Detroit into the downtown core. Compuware completed its world headquarters in downtown in 2003.
OnStar, Blue Cross Blue Shield, and HP Enterprise Services are located at the Renaissance Center. PricewaterhouseCoopers
Plaza offices are adjacent to Ford Field, and Ernst & Young completed its office building at One Kennedy Square in
2006. Perhaps most prominently, in 2010, Quicken Loans, one of the largest mortgage lenders, relocated its world
headquarters and 4,000 employees to downtown Detroit, consolidating its suburban offices. In July 2012, the U.S.
Patent and Trademark Office opened its Elijah J. McCoy Satellite Office in the Rivertown/Warehouse District as its
first location outside Washington, D.C.'s metropolitan area. The city of Detroit and other private-public partnerships
have attempted to catalyze the region's growth by facilitating the building and historical rehabilitation of residential
high-rises in the downtown, creating a zone that offers many business tax incentives, creating recreational spaces
such as the Detroit RiverWalk, Campus Martius Park, Dequindre Cut Greenway, and Green Alleys in Midtown. The city
itself has cleared sections of land while retaining a number of historically significant vacant buildings in order
to spur redevelopment; though it has struggled with finances, the city issued bonds in 2008 to provide funding for
ongoing work to demolish blighted properties. Two years earlier, downtown reported $1.3 billion in restorations and
new developments which increased the number of construction jobs in the city. In the decade prior to 2006, downtown
gained more than $15 billion in new investment from private and public sectors. Despite the city's recent financial
issues, many developers remain unfazed by Detroit's problems. Midtown is one of the most successful areas within Detroit, with a residential occupancy rate of 96%. Numerous developments have recently been completed or are in various stages of construction. These include the $82 million reconstruction of downtown's David Whitney Building
(now an Aloft Hotel and luxury residences), the Woodward Garden Block Development in Midtown, the residential conversion
of the David Broderick Tower in downtown, the rehabilitation of the Book Cadillac Hotel (now a Westin and luxury
condos) and Fort Shelby Hotel (now Doubletree) also in downtown, and various smaller projects. On May 21, 2014, JPMorgan
Chase announced that it was injecting $100 million over five years into Detroit's economy, providing development
funding for a variety of projects that would increase employment. It is the largest commitment made to any one city
by the nation's biggest bank.[citation needed] Of the $100 million, $50 million will go toward development projects,
$25 million will go toward city blight removal, $12.5 million will go for job training, $7 million will go for small
businesses in the city, and $5.5 million will go toward the M-1 light rail project. On May 19, 2015, JPMorgan Chase
announced that it had invested $32 million in two redevelopment projects in the city's Capitol Park district, the
Capitol Park Lofts (the former Capitol Park Building) and the Detroit Savings Bank building at 1212 Griswold. Those
investments are separate from Chase's five-year, $100-million commitment. A desire to be closer to the urban scene
has also attracted some young professionals to reside in inner-ring suburbs such as Grosse Pointe and Royal Oak. Detroit's proximity to Windsor, Ontario, provides for views and nightlife, along with Ontario's minimum
drinking age of 19. A 2011 study by Walk Score recognized Detroit for its above average walkability among large U.S.
cities. About two-thirds of suburban residents occasionally dine and attend cultural events or take in professional
games in the city of Detroit. Known as the world's automotive center, "Detroit" is a metonym for that industry. Detroit's
auto industry, some of which was converted to wartime defense production, was an important element of the American
"Arsenal of Democracy" supporting the Allied powers during World War II. It is an important source of popular music
legacies celebrated by the city's two familiar nicknames, the Motor City and Motown. Other nicknames arose in the
20th century, including City of Champions, beginning in the 1930s for its successes in individual and team sport;
The D; Hockeytown (a trademark owned by the city's NHL club, the Red Wings); Rock City (after the Kiss song "Detroit
Rock City"); and The 313 (its telephone area code). In the 1940s, Detroit blues artist John Lee Hooker became a long-term
resident in the city's southwest Delray neighborhood. Hooker, among other important blues musicians, migrated from his home in Mississippi, bringing the Delta blues to northern cities like Detroit. Hooker recorded for Fortune Records,
the biggest pre-Motown blues/soul label. During the 1950s, the city became a center for jazz, with stars performing
in the Black Bottom neighborhood. Prominent emerging jazz musicians of the 1960s included trumpeter Donald Byrd, who attended Cass Tech and performed with Art Blakey and the Jazz Messengers early in his career, and saxophonist Pepper Adams, who enjoyed a solo career and accompanied Byrd on several albums. The Graystone International Jazz Museum
documents jazz in Detroit. Other prominent Motor City R&B stars in the 1950s and early 1960s were Nolan Strong, Andre Williams, and Nathaniel Mayer, who all scored local and national hits on the Fortune Records label. According to
Smokey Robinson, Strong was a primary influence on his voice as a teenager. The Fortune label was a family-operated
label located on Third Avenue in Detroit, and was owned by the husband and wife team of Jack Brown and Devora Brown.
Fortune, which also released country, gospel and rockabilly LPs and 45s, laid the groundwork for Motown, which became
Detroit's most legendary record label. Berry Gordy, Jr. founded Motown Records which rose to prominence during the
1960s and early 1970s with acts such as Stevie Wonder, The Temptations, The Four Tops, Smokey Robinson & The Miracles,
Diana Ross & The Supremes, the Jackson 5, Martha and the Vandellas, The Spinners, Gladys Knight & the Pips, The Marvelettes,
The Elgins, The Monitors, The Velvelettes and Marvin Gaye. Artists were backed by in-house vocalists The Andantes
and The Funk Brothers, the Motown house band that was featured in Paul Justman's 2002 documentary film Standing in
the Shadows of Motown, based on Allan Slutsky's book of the same name. Local artists and bands rose to prominence
in the 1960s and 70s, including the MC5, The Stooges, Bob Seger, the Amboy Dukes featuring Ted Nugent, Mitch Ryder and
The Detroit Wheels, Rare Earth, Alice Cooper, and Suzi Quatro. The group Kiss emphasized the city's connection with
rock in the song "Detroit Rock City" and the 1999 film of the same name. In the 1980s, Detroit was an important center
of the hardcore punk rock underground with many nationally known bands coming out of the city and its suburbs, such
as The Necros, The Meatmen, and Negative Approach. In the 1990s and the new millennium, the city has produced a number
of influential hip hop artists, including Eminem, the hip-hop artist with the highest cumulative sales, hip-hop producer
J Dilla, rapper and producer Esham and hip hop duo Insane Clown Posse. The city is also home to rappers Big Sean
and Danny Brown. The band Sponge toured and produced music, with artists such as Kid Rock and Uncle Kracker. The
city also has an active garage rock scene that has generated national attention with acts such as The White Stripes,
The Von Bondies, The Detroit Cobras, The Dirtbombs, Electric Six, and The Hard Lessons. Detroit is cited as the birthplace
of techno music in the early 1980s. The city also lends its name to an early and pioneering genre of electronic dance
music, "Detroit techno". Featuring science fiction imagery and robotic themes, its futuristic style was greatly influenced
by the geography of Detroit's urban decline and its industrial past. Prominent Detroit techno artists include Juan
Atkins, Derrick May, and Kevin Saunderson. The Detroit Electronic Music Festival, now known as "Movement", occurs
annually in late May on Memorial Day Weekend, and takes place in Hart Plaza. In the early years (2000–2002), this
was a landmark event, boasting over a million estimated attendees annually, coming from all over the world to celebrate
Techno music in the city of its birth. Major theaters in Detroit include the Fox Theatre (5,174 seats), Music Hall
(1,770 seats), the Gem Theatre (451 seats), Masonic Temple Theatre (4,404 seats), the Detroit Opera House (2,765
seats), the Fisher Theatre (2,089 seats), The Fillmore Detroit (2,200 seats), Saint Andrew's Hall, the Majestic Theater,
and Orchestra Hall (2,286 seats) which hosts the renowned Detroit Symphony Orchestra. The Nederlander Organization,
the largest controller of Broadway productions in New York City, originated with the purchase of the Detroit Opera
House in 1922 by the Nederlander family. Many of the area's prominent museums are located in the historic cultural
center neighborhood around Wayne State University and the College for Creative Studies. These museums include the
Detroit Institute of Arts, the Detroit Historical Museum, Charles H. Wright Museum of African American History, the
Detroit Science Center, as well as the main branch of the Detroit Public Library. Other cultural highlights include
Motown Historical Museum, the Ford Piquette Avenue Plant museum (birthplace of the Ford Model T and the world's oldest
car factory building open to the public), the Pewabic Pottery studio and school, the Tuskegee Airmen Museum, Fort
Wayne, the Dossin Great Lakes Museum, the Museum of Contemporary Art Detroit (MOCAD), the Contemporary Art Institute
of Detroit (CAID), and the Belle Isle Conservatory. In 2010, the G.R. N'Namdi Gallery opened in a 16,000-square-foot
(1,500 m2) complex in Midtown. Important history of America and the Detroit area are exhibited at The Henry Ford
in Dearborn, the United States' largest indoor-outdoor museum complex. The Detroit Historical Society provides information
about tours of area churches, skyscrapers, and mansions. Inside Detroit, meanwhile, hosts tours, educational programming,
and a downtown welcome center. Other sites of interest are the Detroit Zoo in Royal Oak, the Cranbrook Art Museum
in Bloomfield Hills, the Anna Scripps Whitcomb Conservatory on Belle Isle, and Walter P. Chrysler Museum in Auburn
Hills. The city's Greektown and three downtown casino resort hotels serve as part of an entertainment hub. The Eastern
Market farmer's distribution center is the largest open-air flowerbed market in the United States and has more than
150 foods and specialty businesses. On Saturdays, about 45,000 people shop the city's historic Eastern Market. The Midtown and New Center areas are centered on Wayne State University and Henry Ford Hospital. Midtown has about
50,000 residents and attracts millions of visitors each year to its museums and cultural centers; for example, the
Detroit Festival of the Arts in Midtown draws about 350,000 people. Annual summer events include the Electronic Music
Festival, International Jazz Festival, the Woodward Dream Cruise, the African World Festival, the country music Hoedown,
Noel Night, and Dally in the Alley. Within downtown, Campus Martius Park hosts large events, including the annual
Motown Winter Blast. As the world's traditional automotive center, the city hosts the North American International
Auto Show. Held since 1924, America's Thanksgiving Parade is one of the nation's largest. River Days, a five-day
summer festival on the International Riverfront, leads up to the Windsor–Detroit International Freedom Festival fireworks, which draw supersized crowds ranging from hundreds of thousands to over three million people. An important civic
sculpture in Detroit is "The Spirit of Detroit" by Marshall Fredericks at the Coleman Young Municipal Center. The
image is often used as a symbol of Detroit and the statue itself is occasionally dressed in sports jerseys to celebrate
when a Detroit team is doing well. A memorial to Joe Louis at the intersection of Jefferson and Woodward Avenues
was dedicated on October 16, 1986. The sculpture, commissioned by Sports Illustrated and executed by Robert Graham,
is a 24-foot (7.3 m) long arm with a fisted hand suspended by a pyramidal framework. Detroit is one of 12 American
metropolitan areas that are home to professional teams representing the four major sports in North America. All these
teams but one play within the city of Detroit itself (the NBA's Detroit Pistons play in suburban Auburn Hills at
The Palace of Auburn Hills). There are three active major sports venues within the city: Comerica Park (home of the
Major League Baseball team Detroit Tigers), Ford Field (home of the NFL's Detroit Lions), and Joe Louis Arena (home
of the NHL's Detroit Red Wings). A 1996 marketing campaign promoted the nickname "Hockeytown". In college sports,
Detroit's central location within the Mid-American Conference has made it a frequent site for the league's championship
events. While the MAC Basketball Tournament moved permanently to Cleveland starting in 2000, the MAC Football Championship
Game has been played at Ford Field in Detroit since 2004, and annually attracts 25,000 to 30,000 fans. The University
of Detroit Mercy has a NCAA Division I program, and Wayne State University has both NCAA Division I and II programs.
The NCAA football Little Caesars Pizza Bowl is held at Ford Field each December. In the years following the mid-1930s,
Detroit was referred to as the "City of Champions" after the Tigers, Lions, and Red Wings captured all three major
professional sports championships in a seven-month period of time (the Tigers won the World Series in October 1935;
the Lions won the NFL championship in December 1935; the Red Wings won the Stanley Cup in April 1936). Detroit's Eddie "The Midnight Express" Tolan won two gold medals, in the 100- and 200-meter races, at the 1932 Summer
Olympics. Joe Louis won the heavyweight championship of the world in 1937. The city is governed pursuant to the Home
Rule Charter of the City of Detroit. The city government is run by a mayor and a nine-member city council and clerk
elected on an at-large nonpartisan ballot. Since voters approved the city's charter in 1974, Detroit has had a "strong
mayoral" system, with the mayor approving departmental appointments. The council approves budgets but the mayor is
not obligated to adhere to any earmarking. City ordinances and substantially large contracts must be approved by
the council. The Detroit City Code is the codification of Detroit's local ordinances. Detroit's courts are state-administered
and elections are nonpartisan. The Probate Court for Wayne County is located in the Coleman A. Young Municipal Center
in downtown Detroit. The Circuit Court is located across Gratiot Ave. in the Frank Murphy Hall of Justice, in downtown
Detroit. The city is home to the Thirty-Sixth District Court, as well as the First District of the Michigan Court
of Appeals and the United States District Court for the Eastern District of Michigan. The city provides law enforcement
through the Detroit Police Department and emergency services through the Detroit Fire Department. Detroit has struggled
with high crime for decades. Detroit held the title of murder capital from 1985 to 1987, with a murder rate around 58 per 100,000. Crime has since decreased and, in 2014, the murder rate was 43.4 per 100,000, lower than in St. Louis, Missouri. Although the murder rate increased by 6% during the first half of 2015, it was surpassed by St. Louis and Baltimore, which saw much greater spikes in violence. At year-end 2015, Detroit had 295 criminal homicides, down slightly
from 299 in 2014. Nearly two-thirds of all murders in Michigan in 2011 occurred in Detroit. Although the rate of
violent crime dropped 11 percent in 2008, violent crime in Detroit has not declined as much as the national average
from 2007 to 2011. The violent crime rate is one of the highest in the United States. Neighborhoodscout.com reported
a crime rate of 62.18 per 1,000 residents for property crimes, and 16.73 per 1,000 for violent crimes (compared to
national figures of 32 per 1,000 for property crimes and 5 per 1,000 for violent crime in 2008). Beginning with its
incorporation in 1802, Detroit has had a total of 74 mayors. Detroit's last mayor from the Republican Party was Louis
Miriani, who served from 1957 to 1962. In 1973, the city elected its first black mayor, Coleman Young. Despite development
efforts, his combative style during his five terms in office was not well received by many suburban residents. Mayor
Dennis Archer, a former Michigan Supreme Court Justice, refocused the city's attention on redevelopment with a plan
to permit three casinos downtown. By 2008, three major casino resort hotels established operations in the city. In
March 2013, Governor Rick Snyder declared a financial emergency in the city, stating that the city has a $327 million
budget deficit and faces more than $14 billion in long-term debt. It has been making ends meet on a month-to-month
basis with the help of bond money held in a state escrow account and has instituted mandatory unpaid days off for
many city workers. Those troubles, along with underfunded city services, such as police and fire departments, and
ineffective turnaround plans from Bing and the City Council led the state of Michigan to appoint an emergency manager
for Detroit on March 14, 2013. On June 14, 2013 Detroit defaulted on $2.5 billion of debt by withholding $39.7 million
in interest payments, while Emergency Manager Kevyn Orr met with bondholders and other creditors in an attempt to
restructure the city's $18.5 billion debt and avoid bankruptcy. On July 18, 2013, the City of Detroit filed for Chapter
9 bankruptcy protection. It was declared bankrupt by U.S. Judge Steven Rhodes on December 3; accepting the city's contention that it was broke, with $18.5 billion in debt, he ruled that negotiations with its thousands of creditors were infeasible. Detroit is home to several institutions of higher learning including Wayne State University, a national
research university with medical and law schools in the Midtown area offering hundreds of academic degrees and programs.
The University of Detroit Mercy, located in Northwest Detroit in the University District, is a prominent Roman Catholic
co-educational university affiliated with the Society of Jesus (the Jesuits) and the Sisters of Mercy. The University
of Detroit Mercy offers more than a hundred academic degrees and programs of study including business, dentistry,
law, engineering, architecture, nursing and allied health professions. The University of Detroit Mercy School of
Law is located downtown across from the Renaissance Center. Sacred Heart Major Seminary, originally founded in 1919,
is affiliated with Pontifical University of Saint Thomas Aquinas, Angelicum in Rome and offers pontifical degrees
as well as civil undergraduate and graduate degrees. Sacred Heart Major Seminary offers a variety of academic programs
for both clerical and lay students. Other institutions in the city include the College for Creative Studies, Lewis
College of Business, Marygrove College and Wayne County Community College. In June 2009, the Michigan State University
College of Osteopathic Medicine, which is based in East Lansing, opened a satellite campus at the Detroit Medical Center. The University of Michigan was established in Detroit in 1817 and moved to Ann Arbor in 1837. In 1959,
University of Michigan–Dearborn was established in neighboring Dearborn. Detroit is served by various private schools,
as well as parochial Roman Catholic schools operated by the Archdiocese of Detroit. As of 2013[update] there are
four Catholic grade schools and three Catholic high schools in the City of Detroit, all of them on the city's west side. The Archdiocese of Detroit lists a number of primary and secondary schools in the metro area, as Catholic education has migrated to the suburbs. Of the three Catholic high schools in the city, two are operated by the Society
of Jesus and the third is co-sponsored by the Sisters, Servants of the Immaculate Heart of Mary and the Congregation
of St. Basil. The Detroit Free Press and The Detroit News are the major daily newspapers, both broadsheet publications
published together under a joint operating agreement called the Detroit Newspaper Partnership. Media philanthropy
includes the Detroit Free Press high school journalism program and the Old Newsboys' Goodfellow Fund of Detroit.
In March 2009, the two newspapers reduced home delivery to three days a week, printing smaller newsstand issues of the papers on non-delivery days and focusing resources on Internet-based news delivery. The Metro Times, founded in 1980, is a weekly publication covering news, arts, and entertainment. Founded in 1935 and based in Detroit, the Michigan Chronicle is one of the oldest and most respected African-American weekly newspapers in America, covering politics, entertainment, sports, and community events. The Detroit television market is the 11th largest in the United States, according to estimates that do not include audiences located in large areas of Ontario, Canada (Windsor and its surrounding area on broadcast and cable TV, as well as several other cable markets in Ontario, such as the city of Ottawa), which receive and watch Detroit television stations. Within the city of Detroit, there are over a dozen major hospitals
which include the Detroit Medical Center (DMC), Henry Ford Health System, St. John Health System, and the John D.
Dingell VA Medical Center. The DMC, a regional Level I trauma center, consists of Detroit Receiving Hospital and
University Health Center, Children's Hospital of Michigan, Harper University Hospital, Hutzel Women's Hospital, Kresge
Eye Institute, Rehabilitation Institute of Michigan, Sinai-Grace Hospital, and the Karmanos Cancer Institute. The
DMC has more than 2,000 licensed beds and 3,000 affiliated physicians. It is the largest private employer in the
City of Detroit. The center is staffed by physicians from the Wayne State University School of Medicine, the largest
single-campus medical school in the United States, and the United States' fourth largest medical school overall.
Detroit Medical Center formally became a part of Vanguard Health Systems on December 30, 2010, as a for profit corporation.
Vanguard has agreed to invest nearly $1.5 billion in the Detroit Medical Center complex, which will include $417 million to retire debts, at least $350 million in capital expenditures, and an additional $500 million for new capital investment. Vanguard has
agreed to assume all debts and pension obligations. The metro area has many other hospitals including William Beaumont
Hospital, St. Joseph's, and University of Michigan Medical Center. On February 18, 2015, Canadian Transport Minister
Lisa Raitt announced that Canada has agreed to pay the entire cost to build a $250 million U.S. Customs plaza adjacent
to the planned new Detroit–Windsor bridge, now the Gordie Howe International Bridge. Canada had already planned to
pay for 95 per cent of the bridge, which will cost $2.1 billion, and is expected to open in 2020. "This allows Canada
and Michigan to move the project forward immediately to its next steps which include further design work and property
acquisition on the U.S. side of the border," Raitt said in a statement issued after she spoke in the House of Commons.
Metro Detroit has an extensive toll-free network of freeways administered by the Michigan Department of Transportation.
Four major Interstate Highways surround the city. Detroit is connected via Interstate 75 (I-75) and I-96 to Kings
Highway 401 and to major Southern Ontario cities such as London, Ontario and the Greater Toronto Area. I-75 (Chrysler
and Fisher freeways) is the region's main north–south route, serving Flint, Pontiac, Troy, and Detroit, before continuing
south (as the Detroit–Toledo and Seaway Freeways) to serve many of the communities along the shore of Lake Erie.
I-94 (Edsel Ford Freeway) runs east–west through Detroit and serves Ann Arbor to the west (where it continues to
Chicago) and Port Huron to the northeast. The stretch of the current I-94 freeway from Ypsilanti to Detroit was one
of America's earlier limited-access highways. Henry Ford built it to link the factories at Willow Run and Dearborn
during World War II. A portion was known as the Willow Run Expressway. The I-96 freeway runs northwest–southeast
through Livingston, Oakland and Wayne counties and (as the Jeffries Freeway through Wayne County) has its eastern
terminus in downtown Detroit. I-275 runs north–south from I-75 in the south to the junction of I-96 and I-696 in
the north, providing a bypass through the western suburbs of Detroit. I-375 is a short spur route in downtown Detroit,
an extension of the Chrysler Freeway. I-696 (Reuther Freeway) runs east–west from the junction of I-96 and I-275,
providing a route through the northern suburbs of Detroit. Taken together, I-275 and I-696 form a semicircle around
Detroit. Michigan state highways designated with the letter M serve to connect major freeways.
London i/ˈlʌndən/ is the capital and most populous city of England and the United Kingdom. Standing on the River Thames in
the south eastern part of the island of Great Britain, London has been a major settlement for two millennia. It was
founded by the Romans, who named it Londinium. London's ancient core, the City of London, largely retains its 1.12-square-mile
(2.9 km2) medieval boundaries and in 2011 had a resident population of 7,375, making it the smallest city in England.
Since at least the 19th century, the term London has also referred to the metropolis developed around this core.
The bulk of this conurbation forms Greater London,[note 1] a region of England governed by the Mayor of London and
the London Assembly.[note 2] The conurbation also covers two English counties: the small district of the City of
London and the county of Greater London. The latter constitutes the vast majority of London, though historically
it was split between Middlesex (a now abolished county), Essex, Surrey, Kent and Hertfordshire. London is a leading
global city, with strengths in the arts, commerce, education, entertainment, fashion, finance, healthcare, media,
professional services, research and development, tourism, and transport all contributing to its prominence. It is
one of the world's leading financial centres and has the fifth- or sixth-largest metropolitan area GDP in the world
depending on measurement.[note 3] London is a world cultural capital. It is the world's most-visited city as measured
by international arrivals and has the world's largest city airport system measured by passenger traffic. London is
one of the world's leading investment destinations, hosting more international retailers and ultra high-net-worth
individuals than any other city. London's 43 universities form the largest concentration of higher education institutes
in Europe, and a 2014 report placed it first in the world university rankings. According to the report London also
ranks first in the world in software, multimedia development and design, and shares first position in technology
readiness. In 2012, London became the first city to host the modern Summer Olympic Games three times. London has
a diverse range of peoples and cultures, and more than 300 languages are spoken within Greater London. The Office
for National Statistics estimated its mid-2014 population to be 8,538,689, the largest of any municipality in the
European Union, and accounting for 12.5 percent of the UK population. London's urban area is the second most populous
in the EU, after Paris, with 9,787,426 inhabitants according to the 2011 census. The city's metropolitan area is
one of the most populous in Europe with 13,879,757 inhabitants,[note 4] while the Greater London Authority states
the population of the city-region (covering a large part of the south east) as 22.7 million. London was the world's
most populous city from around 1831 to 1925. London contains four World Heritage Sites: the Tower of London; Kew
Gardens; the site comprising the Palace of Westminster, Westminster Abbey, and St Margaret's Church; and the historic
settlement of Greenwich (in which the Royal Observatory, Greenwich marks the Prime Meridian, 0° longitude, and GMT).
Other famous landmarks include Buckingham Palace, the London Eye, Piccadilly Circus, St Paul's Cathedral, Tower Bridge,
Trafalgar Square, and The Shard. London is home to numerous museums, galleries, libraries, sporting events and other
cultural institutions, including the British Museum, National Gallery, Tate Modern, British Library and 40 West End
theatres. The London Underground is the oldest underground railway network in the world. From 1898, it was commonly
accepted that the name was of Celtic origin and meant place belonging to a man called *Londinos; this explanation
has since been rejected. Richard Coates put forward an explanation in 1998 that it is derived from the pre-Celtic
Old European *(p)lowonida, meaning 'river too wide to ford', and suggested that this was a name given to the part
of the River Thames which flows through London; from this, the settlement gained the Celtic form of its name, *Lowonidonjon;
however, this requires quite a serious amendment. The ultimate difficulty lies in reconciling the Latin form Londinium
with the modern Welsh Llundain, which should demand a form *(h)lōndinion (as opposed to *londīnion), from earlier
*loundiniom. The possibility cannot be ruled out that the Welsh name was borrowed back in from English at a later
date, and thus cannot be used as a basis from which to reconstruct the original name. Two recent discoveries indicate
probable very early settlements near the Thames in the London area. In 1999, the remains of a Bronze Age bridge were
found on the foreshore north of Vauxhall Bridge. This bridge either crossed the Thames, or went to a now lost island
in the river. Dendrochronology dated the timbers to 1500 BC. In 2010, the foundations of a large timber structure, dated
to 4500 BC, were found on the Thames foreshore, south of Vauxhall Bridge. The function of the mesolithic structure
is not known. Both structures are on South Bank, at a natural crossing point where the River Effra flows into the
River Thames. Although there is evidence of scattered Brythonic settlements in the area, the first major settlement
was founded by the Romans after the invasion of 43 AD. This lasted only until around 61, when the Iceni tribe led
by Queen Boudica stormed it, burning it to the ground. The next, heavily planned, incarnation of Londinium prospered,
and it superseded Colchester as the capital of the Roman province of Britannia in 100. At its height in the 2nd century,
Roman London had a population of around 60,000. With the collapse of Roman rule in the early 5th century, London
ceased to be a capital and the walled city of Londinium was effectively abandoned, although Roman civilisation continued
in the St Martin-in-the-Fields area until around 450. From around 500, an Anglo-Saxon settlement known as Lundenwic
developed in the same area, slightly to the west of the old Roman city. By about 680, it had revived sufficiently
to become a major port, although there is little evidence of large-scale production of goods. From the 820s the town
declined because of repeated Viking invasions. There are three recorded Viking assaults on London: two of them were successful, in 851 and 886 AD, although the attackers were defeated during the assault of 994 AD. The Vikings established Danelaw
over much of the eastern and northern part of England with its boundary roughly stretching from London to Chester.
It was an area of political and geographical control imposed by the Viking incursions and formally agreed to by the Danish warlord Guthrum and the West Saxon king Alfred the Great in 886 AD. The Anglo-Saxon Chronicle recorded
that London was "refounded" by Alfred the Great in 886. Archaeological research shows that this involved abandonment
of Lundenwic and a revival of life and trade within the old Roman walls. London then grew slowly until about 950,
after which activity increased dramatically. By the 11th century, London was beyond all comparison the largest town
in England. Westminster Abbey, rebuilt in the Romanesque style by King Edward the Confessor, was one of the grandest
churches in Europe. Winchester had previously been the capital of Anglo-Saxon England, but from this time on, London
became the main forum for foreign traders and the base for defence in time of war. In the view of Frank Stenton:
"It had the resources, and it was rapidly developing the dignity and the political self-consciousness appropriate
to a national capital." Following his victory in the Battle of Hastings, William, Duke of Normandy, was crowned King
of England in the newly finished Westminster Abbey on Christmas Day 1066. William constructed the Tower of London,
the first of the many Norman castles in England to be rebuilt in stone, in the southeastern corner of the city, to
intimidate the native inhabitants. In 1097, William II began the building of Westminster Hall, close by the abbey
of the same name. The hall became the basis of a new Palace of Westminster. During the 12th century, the institutions
of central government, which had hitherto accompanied the royal English court as it moved around the country, grew
in size and sophistication and became increasingly fixed in one place. In most cases this was Westminster, although
the royal treasury, having been moved from Winchester, came to rest in the Tower. While the City of Westminster developed
into a true capital in governmental terms, its distinct neighbour, the City of London, remained England's largest
city and principal commercial centre, and it flourished under its own unique administration, the Corporation of London.
In 1100, its population was around 18,000; by 1300 it had grown to nearly 100,000. During the Tudor period the Reformation produced a gradual shift to Protestantism, with much of London's property passing from church to private ownership. Woollen cloth was shipped undyed and undressed from London to the nearby shores of the Low Countries, where it was considered indispensable. But English maritime enterprise hardly extended beyond the seas of north-west Europe.
The commercial route to Italy and the Mediterranean Sea normally lay through Antwerp and over the Alps; any ships
passing through the Strait of Gibraltar to or from England were likely to be Italian or Ragusan. Upon the re-opening
of the Netherlands to English shipping in January 1565, there ensued a strong outburst of commercial activity. The
Royal Exchange was founded. Mercantilism grew, and monopoly trading companies such as the East India Company were
established, with trade expanding to the New World. London became the principal North Sea port, with migrants arriving
from England and abroad. The population rose from an estimated 50,000 in 1530 to about 225,000 in 1605. During the
English Civil War the majority of Londoners supported the Parliamentary cause. After an initial advance by the Royalists
in 1642, culminating in the battles of Brentford and Turnham Green, London was surrounded by a defensive perimeter wall known as the Lines of Communication. The lines were built by up to 20,000 people, and were completed in under
two months. The fortifications failed their only test when the New Model Army entered London in 1647, and they were
levelled by Parliament the same year. In 1762, George III acquired Buckingham House and it was enlarged over the
next 75 years. During the 18th century, London was dogged by crime, and the Bow Street Runners were established in
1750 as a professional police force. In total, more than 200 offences were punishable by death, including petty theft.
Most children born in the city died before reaching their third birthday. The coffeehouse became a popular place
to debate ideas, with growing literacy and the development of the printing press making news widely available; and
Fleet Street became the centre of the British press. London was the world's largest city from about 1831 to 1925.
London's overcrowded conditions led to cholera epidemics, claiming 14,000 lives in 1848, and 6,000 in 1866. Rising
traffic congestion led to the creation of the world's first local urban rail network. The Metropolitan Board of Works
oversaw infrastructure expansion in the capital and some of the surrounding counties; it was abolished in 1889 when
the London County Council was created out of those areas of the counties surrounding the capital. London was bombed
by the Germans during the First World War, while during the Second World War the Blitz and other bombings by the German Luftwaffe killed over 30,000 Londoners and destroyed large tracts of housing and other buildings across the
city. Immediately after the war, the 1948 Summer Olympics were held at the original Wembley Stadium, at a time when
London had barely recovered from the war. Primarily starting in the mid-1960s, London became a centre for the worldwide
youth culture, exemplified by the Swinging London subculture associated with the King's Road, Chelsea and Carnaby
Street. The role of trendsetter was revived during the punk era. In 1965 London's political boundaries were expanded
to take into account the growth of the urban area and a new Greater London Council was created. During The Troubles
in Northern Ireland, London was subjected to bombing attacks by the Provisional IRA. Racial inequality was highlighted
by the 1981 Brixton riot. Greater London's population declined steadily in the decades after the Second World War,
from an estimated peak of 8.6 million in 1939 to around 6.8 million in the 1980s. The principal ports for London
moved downstream to Felixstowe and Tilbury, with the London Docklands area becoming a focus for regeneration, including
the Canary Wharf development. This was driven by London's ever-increasing role as a major international financial
centre during the 1980s. The Thames Barrier was completed in the 1980s to protect London against tidal surges from
the North Sea. The Greater London Council was abolished in 1986, which left London as the only large metropolis in
the world without a central administration. In 2000, London-wide government was restored, with the creation of the
Greater London Authority. To celebrate the start of the 21st century, the Millennium Dome, London Eye and Millennium
Bridge were constructed. On 6 July 2005 London was awarded the 2012 Summer Olympics, making London the first city
to stage the Olympic Games three times. In January 2015, Greater London's population was estimated to be 8.63 million,
the highest level since 1939. The administration of London is formed of two tiers—a city-wide, strategic tier and
a local tier. City-wide administration is coordinated by the Greater London Authority (GLA), while local administration
is carried out by 33 smaller authorities. The GLA consists of two elected components: the Mayor of London, who has
executive powers, and the London Assembly, which scrutinises the mayor's decisions and can accept or reject the mayor's
budget proposals each year. The headquarters of the GLA is City Hall, Southwark; the mayor is Boris Johnson. The
mayor's statutory planning strategy is published as the London Plan, which was most recently revised in 2011. The
local authorities are the councils of the 32 London boroughs and the City of London Corporation. They are responsible
for most local services, such as local planning, schools, social services, local roads and refuse collection. Certain
functions, such as waste management, are provided through joint arrangements. In 2009–2010 the combined revenue expenditure
by London councils and the GLA amounted to just over £22 billion (£14.7 billion for the boroughs and £7.4 billion
for the GLA). The London Fire Brigade is the statutory fire and rescue service for Greater London. It is run by the
London Fire and Emergency Planning Authority and is the third largest fire service in the world. National Health
Service ambulance services are provided by the London Ambulance Service (LAS) NHS Trust, the largest free-at-the-point-of-use
emergency ambulance service in the world. The London Air Ambulance charity operates in conjunction with the LAS where
required. Her Majesty's Coastguard and the Royal National Lifeboat Institution operate on the River Thames, which
is under the jurisdiction of the Port of London Authority from Teddington Lock to the sea. London is the seat of
the Government of the United Kingdom. Many government departments are based close to the Palace of Westminster, particularly
along Whitehall, including the Prime Minister's residence at 10 Downing Street. The British Parliament is often referred
to as the "Mother of Parliaments" (although this sobriquet was first applied to England itself by John Bright) because
it has been the model for most other parliamentary systems. There are 73 Members of Parliament (MPs) from London,
who correspond to local parliamentary constituencies in the national Parliament. As of May 2015, 45 are from the
Labour Party, 27 are Conservatives, and one is a Liberal Democrat. Policing in Greater London, with the exception
of the City of London, is provided by the Metropolitan Police Service, overseen by the Mayor through the Mayor's
Office for Policing and Crime (MOPAC). The City of London has its own police force – the City of London Police. The
British Transport Police are responsible for police services on National Rail, London Underground, Docklands Light
Railway and Tramlink services. A fourth police force in London, the Ministry of Defence Police, does not generally
become involved with policing the general public. Outward urban expansion is now prevented by the Metropolitan Green
Belt, although the built-up area extends beyond the boundary in places, resulting in a separately defined Greater
London Urban Area. Beyond this is the vast London commuter belt. Greater London is split for some purposes into Inner
London and Outer London. The city is split by the River Thames into North and South, with an informal central London
area in its interior. The coordinates of the nominal centre of London, traditionally considered to be the original
Eleanor Cross at Charing Cross near the junction of Trafalgar Square and Whitehall, are approximately 51°30′26″N 0°07′39″W (51.50722, −0.12750). Within London, both the City of London and the City of
Westminster have city status and both the City of London and the remainder of Greater London are counties for the
purposes of lieutenancies. The area of Greater London has incorporated areas that were once part of the historic
counties of Middlesex, Kent, Surrey, Essex and Hertfordshire. London's status as the capital of England, and later
the United Kingdom, has never been granted or confirmed officially—by statute or in written form.[note 6] Greater
London encompasses a total area of 1,583 square kilometres (611 sq mi), an area which had a population of 7,172,036
in 2001 and a population density of 4,542 inhabitants per square kilometre (11,760/sq mi). The extended area known as the London Metropolitan Region or the London Metropolitan Agglomeration comprises a total area of 8,382 square kilometres (3,236 sq mi) and has a population of 13,709,000 and a population density of 1,510 inhabitants per square
kilometre (3,900/sq mi). Modern London stands on the Thames, its primary geographical feature, a navigable river
which crosses the city from the south-west to the east. The Thames Valley is a floodplain surrounded by gently rolling
hills including Parliament Hill, Addington Hills, and Primrose Hill. The Thames was once a much broader, shallower
river with extensive marshlands; at high tide, its shores reached five times their present width. Summers are generally
warm and sometimes hot. London's average July high is 24 °C (75.2 °F). On average London will see 31 days above 25
°C (77.0 °F) each year, and 4.2 days above 30.0 °C (86.0 °F) every year. During the 2003 European heat wave there
were 14 consecutive days above 30 °C (86.0 °F) and 2 consecutive days where temperatures reached 38 °C (100.4 °F),
leading to hundreds of heat-related deaths. Winters are generally cool and damp with little temperature variation.
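The Fahrenheit equivalents quoted alongside the Celsius figures above follow the standard conversion F = C × 9/5 + 32; a minimal sketch (the function name is illustrative):

```python
def celsius_to_fahrenheit(c):
    """Standard Celsius-to-Fahrenheit conversion: F = C * 9/5 + 32."""
    return c * 9 / 5 + 32

# Values quoted in the text: average July high, hot-day thresholds,
# and the 2003 heat-wave peak.
for c in (24, 25, 30, 38):
    print(c, "C ->", round(celsius_to_fahrenheit(c), 1), "F")
```

Running this reproduces the paired figures in the text: 24 °C is 75.2 °F, 30 °C is 86.0 °F, and 38 °C is 100.4 °F.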
Snowfall does occur from time to time, and can cause travel disruption when this happens. Spring and autumn are mixed
seasons and can be pleasant. As a large city, London has a considerable urban heat island effect, making the centre
of London at times 5 °C (9 °F) warmer than the suburbs and outskirts. The effect of this can be seen below when comparing
London Heathrow, 15 miles west of London, with the London Weather Centre, in the city centre. London's buildings
are too diverse to be characterised by any particular architectural style, partly because of their varying ages.
Many grand houses and public buildings, such as the National Gallery, are constructed from Portland stone. Some areas
of the city, particularly those just west of the centre, are characterised by white stucco or whitewashed buildings.
Few structures in central London pre-date the Great Fire of 1666; the exceptions are a few trace Roman remains, the Tower
of London and a few scattered Tudor survivors in the City. Further out is, for example, the Tudor period Hampton
Court Palace, England's oldest surviving Tudor palace, built by Cardinal Thomas Wolsey c.1515. The Monument in the
City of London provides views of the surrounding area while commemorating the Great Fire of London, which originated
nearby. Marble Arch and Wellington Arch, at the north and south ends of Park Lane respectively, have royal connections,
as do the Albert Memorial and Royal Albert Hall in Kensington. Nelson's Column is a nationally recognised monument
in Trafalgar Square, one of the focal points of central London. Older buildings are mainly brick built, most commonly
the yellow London stock brick or a warm orange-red variety, often decorated with carvings and white plaster mouldings.
In the densest areas, development consists mainly of medium- and high-rise buildings. London's skyscrapers such as
30 St Mary Axe, Tower 42, the Broadgate Tower and One Canada Square are mostly in the two financial districts, the
City of London and Canary Wharf. High-rise development is restricted at certain sites if it would obstruct protected
views of St Paul's Cathedral and other historic buildings. Nevertheless, there are a number of very tall skyscrapers
in central London (see Tall buildings in London), including the 95-storey Shard London Bridge, the tallest building
in the European Union. The London Natural History Society suggest that London is "one of the World's Greenest Cities"
with more than 40 percent green space or open water. They indicate that 2000 species of flowering plant have been
found growing there and that the tidal Thames supports 120 species of fish. They also state that over 60 species
of bird nest in central London and that their members have recorded 47 species of butterfly, 1173 moths and more
than 270 kinds of spider around London. London's wetland areas support nationally important populations of many water
birds. London has 38 Sites of Special Scientific Interest (SSSIs), two National Nature Reserves and 76 Local Nature
Reserves. Among other inhabitants of London are 10,000 foxes, so that there are now 16 foxes for every square mile
(2.6 square kilometres) of London. These urban foxes are noticeably bolder than their country cousins, sharing the
pavement with pedestrians and raising cubs in people's backyards. Foxes have even sneaked into the Houses of Parliament,
where one was found asleep on a filing cabinet. Another broke into the grounds of Buckingham Palace, reportedly killing
some of Queen Elizabeth II's prized pink flamingos. Generally, however, foxes and city folk appear to get along.
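The fox density quoted earlier follows directly from the two figures given in the text (about 10,000 foxes across Greater London's roughly 611 square miles); a quick back-of-the-envelope check:

```python
# Figures from the text: roughly 10,000 urban foxes in Greater London,
# whose area is about 611 square miles (1,583 km^2).
foxes = 10_000
area_sq_mi = 611

density = foxes / area_sq_mi
print(round(density))  # roughly 16 foxes per square mile, as stated
```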
A survey in 2001 by the London-based Mammal Society found that 80 percent of 3,779 respondents who volunteered to
keep a diary of garden mammal visits liked having them around. This sample cannot be taken to represent Londoners
as a whole. Other mammals found in Greater London are hedgehogs, rats, mice, rabbits, shrews, voles, and squirrels.
In wilder areas of Outer London, such as Epping Forest, a wide variety of mammals are found including hare, badger,
field, bank and water vole, wood mouse, yellow-necked mouse, mole, shrew, and weasel, in addition to fox, squirrel
and hedgehog. A dead otter was found at The Highway, in Wapping, about a mile from Tower Bridge, suggesting that otters have begun to move back after a hundred years' absence from the city. Ten of England's eighteen
species of bats have been recorded in Epping Forest: soprano, Nathusius' and common pipistrelles, noctule, serotine, barbastelle, Daubenton's, brown long-eared, Natterer's and Leisler's. Herds of red and fallow deer also roam freely
within much of Richmond and Bushy Park. A cull takes place each November and February to ensure numbers can be sustained.
Epping Forest is also known for its fallow deer, which can frequently be seen in herds to the north of the Forest.
A rare population of melanistic, black fallow deer is also maintained at the Deer Sanctuary near Theydon Bois. Muntjac
deer, which escaped from deer parks at the turn of the twentieth century, are also found in the forest. While Londoners
are accustomed to wildlife such as birds and foxes sharing the city, more recently urban deer have started becoming
a regular feature, and whole herds of fallow and white-tailed deer come into residential areas at night to take advantage
of London's green spaces. The 2011 census recorded that 2,998,264 people, or 36.7% of London's population, are foreign-born, making London the city with the second-largest immigrant population, behind New York City, in terms
of absolute numbers. With increasing industrialisation, London's population grew rapidly throughout
the 19th and early 20th centuries, and it was for some time in the late 19th and early 20th centuries the most populous
city in the world. Its population peaked at 8,615,245 in 1939 immediately before the outbreak of the Second World
War, but had declined to 7,192,091 at the 2001 Census. However, the population then grew by just over a million between
the 2001 and 2011 Censuses, to reach 8,173,941 in the latter enumeration. The region covers an area of 1,579 square
kilometres (610 sq mi). The population density is 5,177 inhabitants per square kilometre (13,410/sq mi), more than
ten times that of any other British region. In terms of population, London is the 19th largest city and the 18th
largest metropolitan region in the world. As of 2014[update], London has the largest number of sterling billionaires in the world, with 72 residing in the city. London ranks as one of the most expensive cities in the
world, alongside Tokyo and Moscow. Across London, Black and Asian children outnumber White British children by about
six to four in state schools. Altogether at the 2011 census, of London's 1,624,768 population aged 0 to 15, 46.4
per cent were White, 19.8 per cent were Asian, 19 per cent were Black, 10.8 per cent were Mixed and 4 per cent represented
another ethnic group. In January 2005, a survey of London's ethnic and religious diversity claimed that there were
more than 300 languages spoken in London and more than 50 non-indigenous communities with a population of more than
10,000. Figures from the Office for National Statistics show that, in 2010[update], London's foreign-born population
was 2,650,000 (33 per cent), up from 1,630,000 in 1997. The 2011 census showed that 36.7 per cent of Greater London's
population were born outside the UK. The table to the right shows the 30 most common countries of birth of London
residents in 2011, the date of the last published UK Census. A portion of the German-born population are likely to
be British nationals born to parents serving in the British Armed Forces in Germany. Estimates produced by the Office
for National Statistics indicate that the five largest foreign-born groups living in London in the period July 2009
to June 2010 were those born in India, Poland, the Republic of Ireland, Bangladesh and Nigeria. London is also home
to sizeable Muslim, Hindu, Sikh, and Jewish communities. Notable mosques include the East London Mosque in Tower
Hamlets, London Central Mosque on the edge of Regent's Park and the Baitul Futuh Mosque of the Ahmadiyya Muslim Community.
Following the oil boom, increasing numbers of wealthy Hindus and Middle-Eastern Muslims have based themselves around
Mayfair and Knightsbridge in West London. There are large Muslim communities in the eastern boroughs of Tower Hamlets
and Newham. Large Hindu communities are in the north-western boroughs of Harrow and Brent, the latter of which is
home to Europe's largest Hindu temple, Neasden Temple. London is also home to 42 Hindu temples. There are Sikh communities
in East and West London, particularly in Southall, home to one of the largest Sikh populations and the largest Sikh
temple outside India. The majority of British Jews live in London, with significant Jewish communities in Stamford
Hill, Stanmore, Golders Green, Finchley, Hampstead, Hendon and Edgware in North London. Bevis Marks Synagogue in
the City of London is affiliated to London's historic Sephardic Jewish community. It is the only synagogue in Europe
which has held regular services continuously for over 300 years. Stanmore and Canons Park Synagogue has the largest
membership of any single Orthodox synagogue in the whole of Europe, overtaking Ilford synagogue (also in London)
in 1998. The community set up the London Jewish Forum in 2006 in response to the growing significance of devolved
London Government. There are many accents that are traditionally thought of as London accents. The most well known
of the London accents long ago acquired the Cockney label, which is heard both in London itself, and across the wider
South East England region more generally. The accent of a 21st-century 'Londoner' varies widely; increasingly common among the under-30s, however, is a fusion of Cockney with an array of 'ethnic' accents, in particular Caribbean, which forms an accent labelled Multicultural London English (MLE). The other widely heard and spoken accent is Received Pronunciation (RP) in various forms, which can often be heard in the media and in many traditional professions, although this accent is not limited to London and South East England,
and can also be heard selectively throughout the whole UK amongst certain social groupings. London's largest industry
is finance, and its financial exports make it a large contributor to the UK's balance of payments. Around 325,000
people were employed in financial services in London until mid-2007. London has over 480 overseas banks, more than
any other city in the world. Over 85 percent (3.2 million) of the employed population of greater London works in
the services industries. Because of its prominent global role, London's economy was affected by the late-2000s financial crisis. By 2010, however, the City had recovered: it put new regulatory powers in place, regained lost ground and re-established London's economic dominance. The City of London is home to the Bank of England, London
Stock Exchange, and Lloyd's of London insurance market. Along with professional services, media companies are concentrated
in London and the media distribution industry is London's second most competitive sector. The BBC is a significant
employer, while other broadcasters also have headquarters around the City. Many national newspapers are edited in
London. London is a major retail centre and in 2010 had the highest non-food retail sales of any city in the world,
with a total spend of around £64.2 billion. The Port of London is the second-largest in the United Kingdom, handling
45 million tonnes of cargo each year. London is one of the leading tourist destinations in the world and in 2015
was ranked as the most visited city in the world with over 65 million visits. It is also the top city in the world
by visitor cross-border spending, estimated at US$20.23 billion in 2015. Tourism is one of London's prime industries, employing the equivalent of 350,000 full-time workers in 2003, and the city accounts for 54% of all inbound visitor spend in the UK. As of 2016, London is rated the world's top-ranked city destination by TripAdvisor users. Transport
is one of the four main areas of policy administered by the Mayor of London; however, the mayor's financial control
does not extend to the longer distance rail network that enters London. In 2007 he assumed responsibility for some
local lines, which now form the London Overground network, adding to the existing responsibility for the London Underground,
trams and buses. The public transport network is administered by Transport for London (TfL) and is one of the most
extensive in the world. London is a major international air transport hub with the busiest city airspace in the world.
Eight airports use the word London in their name, but most traffic passes through six of these. London Heathrow Airport,
in Hillingdon, West London, is the busiest airport in the world for international traffic, and is the major hub of
the nation's flag carrier, British Airways. In March 2008 its fifth terminal was opened. There were plans for a third
runway and a sixth terminal; however, these were cancelled by the Coalition Government on 12 May 2010. Stansted Airport,
north east of London in Essex, is a local UK hub, and Luton Airport, to the north of London in Bedfordshire, caters
mostly for cheap short-haul flights. London City Airport, the smallest and most central airport, in Newham, East
London, is focused on business travellers, with a mixture of full service short-haul scheduled flights and considerable
business jet traffic. London Southend Airport, east of London in Essex, is a smaller, regional airport that mainly
caters for cheap short-haul flights. There are 366 railway stations in the London Travelcard Zones on an extensive
above-ground suburban railway network. South London, particularly, has a high concentration of railways as it has
fewer Underground lines. Most rail lines terminate around the centre of London, running into eighteen terminal stations,
with the exception of the Thameslink trains connecting Bedford in the north and Brighton in the south via Luton and
Gatwick airports. London has Britain's busiest station by number of passengers – Waterloo, with over 184 million
people using the interchange station complex (which includes Waterloo East station) each year. Clapham Junction is
the busiest station in Europe by the number of trains passing. Some international railway services to Continental
Europe were operated during the 20th century as boat trains, such as the Admiraal de Ruijter to Amsterdam and the
Night Ferry to Paris and Brussels. The opening of the Channel Tunnel in 1994 connected London directly to the continental
rail network, allowing Eurostar services to begin. Since 2007, high-speed trains link St. Pancras International with
Lille, Paris, Brussels and European tourist destinations via the High Speed 1 rail link and the Channel Tunnel. The
first high-speed domestic trains started in June 2009 linking Kent to London. There are plans for a second high speed
line linking London to the Midlands, North West England, and Yorkshire. London's bus network is one of the largest
in the world, running 24 hours a day, with about 8,500 buses, more than 700 bus routes and around 19,500 bus stops.
In 2013, the network had more than 2 billion commuter trips per annum, more than the Underground. Around £850 million
is taken in revenue each year. London has the largest wheelchair accessible network in the world and, from the 3rd
quarter of 2007, became more accessible to hearing and visually impaired passengers as audio-visual announcements
were introduced. The distinctive red double-decker buses are an internationally recognised trademark of London transport
along with black cabs and the Tube. London's first and only cable car, known as the Emirates Air Line, opened in
June 2012. Crossing the River Thames, linking Greenwich Peninsula and the Royal Docks in the east of the city, the
cable car is integrated with London's Oyster Card ticketing system, although special fares are charged. Costing £60
million to build, it carries over 3,500 passengers every day, although this is very much lower than its capacity.
Similar to the Santander Cycles bike hire scheme, the cable car is sponsored in a 10-year deal by the airline Emirates.
Although the majority of journeys involving central London are made by public transport, car travel is common in
the suburbs. The inner ring road (around the city centre), the North and South Circular roads (in the suburbs), and
the outer orbital motorway (the M25, outside the built-up area) encircle the city and are intersected by a number
of busy radial routes—but very few motorways penetrate into inner London. A plan for a comprehensive network of motorways
throughout the city (the Ringways Plan) was prepared in the 1960s but was mostly cancelled in the early 1970s. The
M25 is the longest ring-road motorway in the world at 121.5 mi (195.5 km). The A1 and M1 connect London to Leeds, Newcastle and Edinburgh. In 2003, a congestion charge was introduced to reduce traffic volumes in the city centre.
With a few exceptions, motorists are required to pay £10 per day to drive within a defined zone encompassing much
of central London. Motorists who are residents of the defined zone can buy a greatly reduced season pass. The London
government initially expected the Congestion Charge Zone to increase daily peak period Underground and bus users
by 20,000 people, reduce road traffic by 10 to 15 per cent, increase traffic speeds by 10 to 15 per cent, and reduce
queues by 20 to 30 per cent. Over the course of several years, the average number of cars entering the centre of
London on a weekday was reduced from 195,000 to 125,000 cars – a 35-per-cent reduction of vehicles driven per day.
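The reduction cited above can be checked from the before-and-after weekday counts given in the text; a quick sketch (the exact figure comes out at about 35.9 per cent, which the text rounds to 35):

```python
# Weekday cars entering central London, from the figures in the text.
before = 195_000
after = 125_000

reduction = (before - after) / before * 100
print(f"{reduction:.1f}% reduction")  # prints "35.9% reduction"
```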
London is a major global centre of higher education teaching and research and its 43 universities form the largest
concentration of higher education institutes in Europe. According to the QS World University Rankings 2015/16, London
has the greatest concentration of top-class universities in the world, and its international student population of around 110,000 is larger than that of any other city in the world. A 2014 PricewaterhouseCoopers report termed London the global capital of higher education. A number of world-leading education institutions are based in London. In the
2014/15 QS World University Rankings, Imperial College London is ranked joint 2nd in the world (alongside The University
of Cambridge), University College London (UCL) is ranked 5th, and King's College London (KCL) is ranked 16th. The
London School of Economics has been described as the world's leading social science institution for both teaching
and research. The London Business School is considered one of the world's leading business schools and in 2015 its
MBA programme was ranked second best in the world by the Financial Times. With 120,000 students in London, the federal
University of London is the largest contact teaching university in the UK. It includes four large multi-faculty universities
– King's College London, Queen Mary, Royal Holloway and UCL – and a number of smaller and more specialised institutions
including Birkbeck, the Courtauld Institute of Art, Goldsmiths, Guildhall School of Music and Drama, the Institute
of Education, the London Business School, the London School of Economics, the London School of Hygiene & Tropical
Medicine, the Royal Academy of Music, the Central School of Speech and Drama, the Royal Veterinary College and the
School of Oriental and African Studies. Members of the University of London have their own admissions procedures,
and some award their own degrees. A number of universities in London are outside the University of London system,
including Brunel University, City University London, Imperial College London, Kingston University, London Metropolitan University (with over 34,000 students, the largest unitary university in London), London South Bank University, Middlesex University, University of the Arts London (the largest university of art, design, fashion, communication and the performing arts in Europe), the University of East London, the University of West London and the University of Westminster. In addition
there are three international universities in London – Regent's University London, Richmond, The American International
University in London and Schiller International University. London is home to five major medical schools – Barts
and The London School of Medicine and Dentistry (part of Queen Mary), King's College London School of Medicine (the
largest medical school in Europe), Imperial College School of Medicine, UCL Medical School and St George's, University
of London – and has a large number of affiliated teaching hospitals. It is also a major centre for biomedical research,
and three of the UK's five academic health science centres are based in the city – Imperial College Healthcare, King's
Health Partners and UCL Partners (the largest such centre in Europe). There are a number of business schools in London,
including the London School of Business and Finance, Cass Business School (part of City University London), Hult
International Business School, ESCP Europe, European Business School London, Imperial College Business School and
the London Business School. London is also home to many specialist arts education institutions, including the Academy
of Live and Recorded Arts, Central School of Ballet, LAMDA, London College of Contemporary Arts (LCCA), London Contemporary
Dance School, National Centre for Circus Arts, RADA, Rambert School of Ballet and Contemporary Dance, the Royal College
of Art, the Royal College of Music and Trinity Laban. The majority of primary and secondary schools and further-education
colleges in London are controlled by the London boroughs or otherwise state-funded; leading examples include City
and Islington College, Ealing, Hammersmith and West London College, Leyton Sixth Form College, Tower Hamlets College
and Bethnal Green Academy. There are also a number of private schools and colleges in London, some old and famous,
such as City of London School, Harrow, St Paul's School, Haberdashers' Aske's Boys' School, University College School,
The John Lyon School, Highgate School and Westminster School. Within the City of Westminster in London the entertainment
district of the West End has its focus around Leicester Square, where London and world film premieres are held, and
Piccadilly Circus, with its giant electronic advertisements. London's theatre district is here, as are many cinemas,
bars, clubs and restaurants, including the city's Chinatown district (in Soho), and just to the east is Covent Garden,
an area housing speciality shops. The city is the home of Andrew Lloyd Webber, whose musicals have dominated the
West End theatre since the late 20th century. The United Kingdom's Royal Ballet, English National Ballet, Royal Opera
and English National Opera are based in London and perform at the Royal Opera House, the London Coliseum, Sadler's
Wells Theatre and the Royal Albert Hall as well as touring the country. Islington's 1 mile (1.6 km) long Upper Street,
extending northwards from Angel, has more bars and restaurants than any other street in the United Kingdom. Europe's
busiest shopping area is Oxford Street, a shopping street nearly 1 mile (1.6 km) long, making it the longest shopping
street in the United Kingdom. Oxford Street is home to vast numbers of retailers and department stores, including
the world-famous Selfridges flagship store. Knightsbridge, home to the equally renowned Harrods department store,
lies to the south-west. There is a variety of annual events, beginning with the relatively new New Year's Day Parade and a fireworks display at the London Eye; the world's second largest street party, the Notting Hill Carnival, is held during the late August Bank Holiday each year. Traditional parades include November's Lord Mayor's Show, a centuries-old
event celebrating the annual appointment of a new Lord Mayor of the City of London with a procession along the streets
of the City, and June's Trooping the Colour, a formal military pageant performed by regiments of the Commonwealth
and British armies to celebrate the Queen's Official Birthday. London has been the setting for many works of literature.
The literary centres of London have traditionally been hilly Hampstead and (since the early 20th century) Bloomsbury.
Writers closely associated with the city are the diarist Samuel Pepys, noted for his eyewitness account of the Great
Fire, Charles Dickens, whose representation of a foggy, snowy, grimy London of street sweepers and pickpockets has
been a major influence on people's vision of early Victorian London, and Virginia Woolf, regarded as one of the foremost
modernist literary figures of the 20th century. The pilgrims in Geoffrey Chaucer's late 14th-century Canterbury Tales
set out for Canterbury from London – specifically, from the Tabard inn, Southwark. William Shakespeare spent a large
part of his life living and working in London; his contemporary Ben Jonson was also based there, and some of his
work—most notably his play The Alchemist—was set in the city. A Journal of the Plague Year (1722) by Daniel Defoe
is a fictionalisation of the events of the 1665 Great Plague. Later important depictions of London from the 19th
and early 20th centuries are Dickens' novels, and Arthur Conan Doyle's Sherlock Holmes stories. Modern writers pervasively
influenced by the city include Peter Ackroyd, author of a "biography" of London, and Iain Sinclair, who writes in
the genre of psychogeography. London has played a significant role in the film industry, and has major studios at
Ealing and a special effects and post-production community centred in Soho. Working Title Films has its headquarters
in London. London has been the setting for films including Oliver Twist (1948), Scrooge (1951), Peter Pan (1953),
The 101 Dalmatians (1961), My Fair Lady (1964), Mary Poppins (1964), Blowup (1966), The Long Good Friday (1980),
Notting Hill (1999), Love Actually (2003), V For Vendetta (2005), Sweeney Todd: The Demon Barber Of Fleet Street
(2008) and The King's Speech (2010). Notable actors and filmmakers from London include Charlie Chaplin, Alfred Hitchcock,
Michael Caine, Helen Mirren, Gary Oldman, Christopher Nolan, Jude Law, Tom Hardy, Keira Knightley and Daniel Day-Lewis.
As of 2008, the British Academy Film Awards have taken place at the Royal Opera House. London is a major
centre for television production, with studios including BBC Television Centre, The Fountain Studios and The London
Studios. Many television programmes have been set in London, including the popular television soap opera EastEnders,
broadcast by the BBC since 1985. London is home to many museums, galleries, and other institutions, many of which
are free of admission charges and are major tourist attractions as well as playing a research role. The first of
these to be established was the British Museum in Bloomsbury, in 1753. Originally containing antiquities, natural
history specimens and the national library, the museum now has 7 million artefacts from around the globe. In 1824
the National Gallery was founded to house the British national collection of Western paintings; this now occupies
a prominent position in Trafalgar Square. In the latter half of the 19th century the locale of South Kensington was
developed as "Albertopolis", a cultural and scientific quarter. Three major national museums are there: the Victoria
and Albert Museum (for the applied arts), the Natural History Museum and the Science Museum. The National Portrait
Gallery was founded in 1856 to house depictions of figures from British history; its holdings now comprise the world's
most extensive collection of portraits. The national gallery of British art is at Tate Britain, originally established
as an annexe of the National Gallery in 1897. The Tate Gallery, as it was formerly known, also became a major centre
for modern art; in 2000 this collection moved to Tate Modern, a new gallery housed in the former Bankside Power Station.
London is one of the major classical and popular music capitals of the world and is home to major music corporations,
such as EMI and Warner Music Group as well as countless bands, musicians and industry professionals. The city is
also home to many orchestras and concert halls, such as the Barbican Arts Centre (principal base of the London Symphony
Orchestra and the London Symphony Chorus), Cadogan Hall (Royal Philharmonic Orchestra) and the Royal Albert Hall
(The Proms). London's two main opera houses are the Royal Opera House and the London Coliseum. The UK's largest pipe
organ is at the Royal Albert Hall. Other significant instruments are at the cathedrals and major churches. Several
conservatoires are within the city: Royal Academy of Music, Royal College of Music, Guildhall School of Music and
Drama and Trinity Laban. London has numerous venues for rock and pop concerts, including the world's busiest arena, The O2 Arena, and other large arenas such as Earls Court and Wembley Arena, as well as many mid-sized venues, such as
Brixton Academy, the Hammersmith Apollo and the Shepherd's Bush Empire. Several music festivals, including the Wireless
Festival, South West Four, Lovebox, and Hyde Park's British Summer Time, are held in London. The city is home
to the first and original Hard Rock Cafe and the Abbey Road Studios where The Beatles recorded many of their hits.
In the 1960s, 1970s and 1980s, musicians and groups like Elton John, Pink Floyd, David Bowie, Queen, The Kinks, The
Rolling Stones, The Who, Eric Clapton, Led Zeppelin, The Small Faces, Iron Maiden, Fleetwood Mac, Elvis Costello,
Cat Stevens, The Police, The Cure, Madness, The Jam, Dusty Springfield, Phil Collins, Rod Stewart and Sade, derived
their sound from the streets and rhythms vibrating through London. London was instrumental in the development of
punk music, with figures such as the Sex Pistols, The Clash, and Vivienne Westwood all based in the city. More recent
artists to emerge from the London music scene include George Michael, Kate Bush, Seal, Siouxsie and the Banshees,
Bush, the Spice Girls, Jamiroquai, Blur, The Prodigy, Gorillaz, Mumford & Sons, Coldplay, Amy Winehouse, Adele, Ed
Sheeran and One Direction. London is also a centre for urban music. In particular, the genres UK garage, dubstep and grime evolved in the city from the foreign genres of hip hop and reggae, alongside local drum and bass. Black music station BBC Radio 1Xtra was set up to support the rise of home-grown urban music both in London
and in the rest of the UK. The largest parks in the central area of London are three of the eight Royal Parks, namely
Hyde Park and its neighbour Kensington Gardens in the west, and Regent's Park to the north. Hyde Park in particular
is popular for sports and sometimes hosts open-air concerts. Regent's Park contains London Zoo, the world's oldest
scientific zoo, and is near the tourist attraction of Madame Tussauds Wax Museum. Primrose Hill in the northern part
of Regent's Park at 256 feet (78 m) is a popular spot to view the city skyline. Close to Richmond Park is Kew Gardens
which has the world's largest collection of living plants. In 2003, the gardens were put on the UNESCO list of World
Heritage Sites. There are also numerous parks administered by London's borough councils, including Victoria Park
in the East End and Battersea Park in the centre. Some more informal, semi-natural open spaces also exist, including
the 320-hectare (790-acre) Hampstead Heath of North London, and Epping Forest, which covers 2,476 hectares (6,118.32
acres) in the east. Both are controlled by the City of London Corporation. Hampstead Heath incorporates Kenwood House,
the former stately home and a popular location in the summer months where classical musical concerts are held by
the lake, attracting thousands of people every weekend to enjoy the music, scenery and fireworks. Epping Forest is
a popular venue for various outdoor activities, including mountain biking, walking, horse riding, golf, angling,
and orienteering. Walking is a popular recreational activity in London. Areas that provide for walks include Wimbledon
Common, Epping Forest, Hampton Court Park, Hampstead Heath, the eight Royal Parks, canals and disused railway tracks.
Access to canals and rivers has improved recently, including the creation of the Thames Path, some 28 miles (45 km)
of which is within Greater London, and The Wandle Trail; this runs 12 miles (19 km) through South London along the
River Wandle, a tributary of the River Thames. Other long distance paths, linking green spaces, have also been created,
including the Capital Ring, the Green Chain Walk, London Outer Orbital Path ("Loop"), Jubilee Walkway, Lea Valley
Walk, and the Diana, Princess of Wales Memorial Walk. London's most popular sport is football and it has fourteen
League football clubs, including five in the Premier League: Arsenal, Chelsea, Crystal Palace, Tottenham Hotspur,
and West Ham United. Other professional teams based in London include Fulham, Queens Park Rangers, Millwall
and Charlton Athletic. In May 2012, Chelsea became the first London club to win the UEFA Champions League. Aside
from Arsenal, Chelsea and Tottenham, none of the other London clubs have ever won the national league title. Three
Aviva Premiership rugby union teams are based in London (London Irish, Saracens and Harlequins), although currently only Harlequins and Saracens play their home games within Greater London. London Scottish and London Welsh play in the RFU Championship, and other rugby union clubs in the city include Richmond F.C., Rosslyn Park F.C., Westcombe Park R.F.C. and Blackheath F.C. Twickenham Stadium in south-west London is the national rugby union stadium, and
has a capacity of 82,000 now that the new south stand has been completed. London has been the world's most expensive office market for the last three years, according to a World Property Journal report (2015). As of 2015, residential property in London is worth $2.2 trillion, the same value as Brazil's annual GDP. The city has the highest property prices of any European city according to the Office for National Statistics and the European Office of Statistics. On average, the price per square metre in central London is €24,252 (April 2014). This is higher than the property prices in other G8 European capital cities: Berlin €3,306, Rome €6,188 and Paris €11,229.
The Cambridge English Dictionary states that culture is "the way of life, especially the general customs and beliefs, of a particular group of people at a particular time." Terror management theory posits that culture is a series of activities and worldviews that provide humans with the illusion of being individuals of value in a world of meaning, raising themselves above the merely physical aspects of existence, in order to deny the animal insignificance and death that Homo sapiens
became aware of when they acquired a larger brain. As a defining aspect of what it means to be human, culture is
a central concept in anthropology, encompassing the range of phenomena that are transmitted through social learning
in human societies. The word is used in a general sense as the evolved ability to categorize and represent experiences
with symbols and to act imaginatively and creatively. This ability arose with the evolution of behavioral modernity
in humans around 50,000 years ago. This capacity is often thought to be unique to humans, although
some other species have demonstrated similar, though much less complex abilities for social learning. It is also
used to denote the complex networks of practices and accumulated knowledge and ideas that are transmitted through social interaction and exist in specific human groups, or cultures, using the plural form. Some aspects of human
behavior, such as language, social practices such as kinship, gender and marriage, expressive forms such as art,
music, dance, ritual, religion, and technologies such as cooking, shelter, clothing are said to be cultural universals,
found in all human societies. The concept of material culture covers the physical expressions of culture, such as technology, architecture and art, whereas the immaterial aspects of culture, such as principles of social organization (including practices of political organization and social institutions), mythology, philosophy, literature (both written and
oral), and science make up the intangible cultural heritage of a society. In the humanities, one sense of culture,
as an attribute of the individual, has been the degree to which they have cultivated a particular level of sophistication,
in the arts, sciences, education, or manners. The level of cultural sophistication has also sometimes been seen to
distinguish civilizations from less complex societies. Such hierarchical perspectives on culture are also found in
class-based distinctions between a high culture of the social elite and a low culture, popular culture or folk culture
of the lower classes, distinguished by the stratified access to cultural capital. In common parlance, culture is
often used to refer specifically to the symbolic markers used by ethnic groups to distinguish themselves visibly
from each other such as body modification, clothing or jewelry. Mass culture refers to the mass-produced
and mass mediated forms of consumer culture that emerged in the 20th century. Some schools of philosophy, such as
Marxism and critical theory, have argued that culture is often used politically as a tool of the elites to manipulate
the lower classes and create a false consciousness; such perspectives are common in the discipline of cultural studies.
In the wider social sciences, the theoretical perspective of cultural materialism holds that human symbolic culture
arises from the material conditions of human life, as humans create the conditions for physical survival, and that
the basis of culture is found in evolved biological dispositions. When used as a count noun, "a culture" is the set
of customs, traditions and values of a society or community, such as an ethnic group or nation. In this sense, multiculturalism
is a concept that values the peaceful coexistence and mutual respect between different cultures inhabiting the same
territory. Sometimes "culture" is also used to describe specific practices within a subgroup of a society, a subculture
(e.g. "bro culture"), or a counterculture. Within cultural anthropology, the ideology and analytical stance of cultural
relativism holds that cultures cannot easily be objectively ranked or evaluated because any evaluation is necessarily
situated within the value system of a given culture. The modern term "culture" is based on a term used by the Ancient
Roman orator Cicero in his Tusculanae Disputationes, where he wrote of a cultivation of the soul or "cultura animi",
using an agricultural metaphor for the development of a philosophical soul, understood teleologically as the highest
possible ideal for human development. Samuel Pufendorf took over this metaphor in a modern context, meaning something
similar, but no longer assuming that philosophy was man's natural perfection. His use, and that of many writers after
him "refers to all the ways in which human beings overcome their original barbarism, and through artifice, become
fully human". Social conflict and the development of technologies can produce changes within a society by altering
social dynamics and promoting new cultural models, and spurring or enabling generative action. These social shifts
may accompany ideological shifts and other types of cultural change. For example, the U.S. feminist movement involved
new practices that produced a shift in gender relations, altering both gender and economic structures. Environmental
conditions may also enter as factors. For example, after tropical forests returned at the end of the last ice age,
plants suitable for domestication were available, leading to the invention of agriculture, which in turn brought
about many cultural innovations and shifts in social dynamics. Cultures are externally affected via contact between
societies, which may also produce—or inhibit—social shifts and changes in cultural practices. War or competition
over resources may impact technological development or social dynamics. Additionally, cultural ideas may transfer
from one society to another, through diffusion or acculturation. In diffusion, the form of something (though not
necessarily its meaning) moves from one culture to another. For example, hamburgers, fast food in the United States,
seemed exotic when introduced into China. "Stimulus diffusion" (the sharing of ideas) refers to an element of one
culture leading to an invention or propagation in another. "Direct borrowing", on the other hand, tends to refer to
technological or tangible diffusion from one culture to another. Diffusion of innovations theory presents a research-based
model of why and when individuals and cultures adopt new ideas, practices, and products. Immanuel Kant (1724–1804)
formulated an individualist definition of "enlightenment" similar to the concept of bildung: "Enlightenment is
man's emergence from his self-incurred immaturity." He argued that this immaturity comes not from a lack of understanding,
but from a lack of courage to think independently. Against this intellectual cowardice, Kant urged: Sapere aude,
"Dare to be wise!" In reaction to Kant, German scholars such as Johann Gottfried Herder (1744–1803) argued that human
creativity, which necessarily takes unpredictable and highly diverse forms, is as important as human rationality.
Moreover, Herder proposed a collective form of bildung: "For Herder, Bildung was the totality of experiences that
provide a coherent identity, and sense of common destiny, to a people." In 1795, the Prussian linguist and philosopher
Wilhelm von Humboldt (1767–1835) called for an anthropology that would synthesize Kant's and Herder's interests.
During the Romantic era, scholars in Germany, especially those concerned with nationalist movements—such as the nationalist
struggle to create a "Germany" out of diverse principalities, and the nationalist struggles by ethnic minorities
against the Austro-Hungarian Empire—developed a more inclusive notion of culture as "worldview" (Weltanschauung).
According to this school of thought, each ethnic group has a distinct worldview that is incommensurable with the
worldviews of other groups. Although more inclusive than earlier views, this approach to culture still allowed for
distinctions between "civilized" and "primitive" or "tribal" cultures. In 1860, Adolf Bastian (1826–1905) argued
for "the psychic unity of mankind". He proposed that a scientific comparison of all human societies would reveal
that distinct worldviews consisted of the same basic elements. According to Bastian, all human societies share a
set of "elementary ideas" (Elementargedanken); different cultures, or different "folk ideas" (Völkergedanken), are
local modifications of the elementary ideas. This view paved the way for the modern understanding of culture. Franz
Boas (1858–1942) was trained in this tradition, and he brought it with him when he left Germany for the United States.
In practice, culture referred to an élite ideal and was associated with such activities as art, classical music,
and haute cuisine. As these forms were associated with urban life, "culture" was identified with "civilization" (from
lat. civitas, city). Another facet of the Romantic movement was an interest in folklore, which led to identifying
a "culture" among non-elites. This distinction is often characterized as that between high culture, namely that of
the ruling social group, and low culture. In other words, the idea of "culture" that developed in Europe during the
18th and early 19th centuries reflected inequalities within European societies. Matthew Arnold contrasted "culture"
with anarchy; other Europeans, following philosophers Thomas Hobbes and Jean-Jacques Rousseau, contrasted "culture"
with "the state of nature". According to Hobbes and Rousseau, the Native Americans who were being conquered by Europeans
from the 16th century on were living in a state of nature; this opposition was expressed through the contrast between
"civilized" and "uncivilized." According to this way of thinking, one could classify some countries and nations as
more civilized than others and some people as more cultured than others. This contrast led to Herbert Spencer's theory
of Social Darwinism and Lewis Henry Morgan's theory of cultural evolution. Just as some critics have argued that
the distinction between high and low cultures is really an expression of the conflict between European elites and
non-elites, some critics have argued that the distinction between civilized and uncivilized people is really an expression
of the conflict between European colonial powers and their colonial subjects. Other 19th-century critics, following
Rousseau have accepted this differentiation between higher and lower culture, but have seen the refinement and sophistication
of high culture as corrupting and unnatural developments that obscure and distort people's essential nature. These
critics considered folk music (as produced by "the folk", i.e., rural, illiterate, peasants) to honestly express
a natural way of life, while classical music seemed superficial and decadent. Equally, this view often portrayed
indigenous peoples as "noble savages" living authentic and unblemished lives, uncomplicated and uncorrupted by the
highly stratified capitalist systems of the West. Although anthropologists worldwide refer to Tylor's definition
of culture, in the 20th century "culture" emerged as the central and unifying concept of American anthropology, where
it most commonly refers to the universal human capacity to classify and encode human experiences symbolically, and
to communicate symbolically encoded experiences socially. American anthropology is organized into
four fields, each of which plays an important role in research on culture: biological anthropology, linguistic anthropology,
cultural anthropology, and archaeology. The sociology of culture concerns culture—usually understood as the ensemble
of symbolic codes used by a society—as manifested in society. For Georg Simmel (1858–1918), culture referred to "the
cultivation of individuals through the agency of external forms which have been objectified in the course of history".
Culture in the sociological field can be defined as the ways of thinking, the ways of acting, and the material objects
that together shape a people's way of life. Culture can be either of two types, non-material culture or material culture. Non-material culture refers to the non-physical ideas that individuals have about their culture, including values, belief systems, rules, norms, morals, language, organizations, and institutions, while material culture is the physical evidence of a culture in the objects and architecture its members make, or have made. The latter term tends to be relevant only in archeological and anthropological studies, but it specifically means all material evidence which can be attributed
to culture past or present. Cultural sociology first emerged in Weimar Germany (1918–1933), where sociologists such
as Alfred Weber used the term Kultursoziologie (cultural sociology). Cultural sociology was then "reinvented" in
the English-speaking world as a product of the "cultural turn" of the 1960s, which ushered in structuralist and postmodern
approaches to social science. This type of cultural sociology may loosely be regarded as an approach incorporating
cultural analysis and critical theory. Cultural sociologists tend to reject scientific methods,
instead hermeneutically focusing on words, artifacts and symbols. "Culture" has since become an important concept
across many branches of sociology, including resolutely scientific fields like social stratification and social network
analysis. As a result, there has been a recent influx of quantitative sociologists to the field. Thus there is now
a growing group of sociologists of culture who are, confusingly, not cultural sociologists. These scholars reject
the abstracted postmodern aspects of cultural sociology, and instead look for a theoretical backing in the more scientific
vein of social psychology and cognitive science. "Cultural sociology" is one of the largest sections of the American
Sociological Association. The British establishment of cultural studies means the latter is often taught as a loosely
distinct discipline in the UK. The sociology of culture grew from the intersection between sociology (as shaped by
early theorists like Marx, Durkheim, and Weber) and the growing discipline of anthropology, wherein researchers
pioneered ethnographic strategies for describing and analyzing a variety of cultures around the world. Part of the
legacy of the early development of the field lingers in the methods (much of cultural sociological research is qualitative),
in the theories (a variety of critical approaches to sociology are central to current research communities), and
in the substantive focus of the field. For instance, relationships between popular culture, political control, and
social class were early and lasting concerns in the field. In the United Kingdom, sociologists and other scholars
influenced by Marxism, such as Stuart Hall (1932–2014) and Raymond Williams (1921–1988), developed cultural studies.
Following nineteenth-century Romantics, they identified "culture" with consumption goods and leisure activities (such
as art, music, film, food, sports, and clothing). Nevertheless, they saw patterns of consumption and leisure as determined
by relations of production, which led them to focus on class relations and the organization of production. In the
United States, "Cultural Studies" focuses largely on the study of popular culture, that is, on the social meanings
of mass-produced consumer and leisure goods. Richard Hoggart coined the term in 1964 when he founded the Birmingham
Centre for Contemporary Cultural Studies or CCCS. It has since become strongly associated with Stuart Hall, who succeeded
Hoggart as Director. Cultural studies in this sense, then, can be viewed as a limited concentration scoped on the
intricacies of consumerism, which belongs to a wider culture sometimes referred to as "Western Civilization" or as
"Globalism." From the 1970s onward, Stuart Hall's pioneering work, along with that of his colleagues Paul Willis,
Dick Hebdige, Tony Jefferson, and Angela McRobbie, created an international intellectual movement. As the field developed
it began to combine political economy, communication, sociology, social theory, literary theory, media theory, film/video
studies, cultural anthropology, philosophy, museum studies and art history to study cultural phenomena or cultural
texts. In this field researchers often concentrate on how particular phenomena relate to matters of ideology, nationality,
ethnicity, social class, and/or gender. Cultural studies has a concern with the meaning and practices
of everyday life. These practices comprise the ways people do particular things (such as watching television, or
eating out) in a given culture. This field studies the meanings and uses people attribute to various objects and
practices. Specifically, culture involves those meanings and practices held independently of reason. Watching television
in order to view a public perspective on a historical event should not be thought of as culture, unless referring
to the medium of television itself, which may have been selected culturally; however, schoolchildren watching television
after school with their friends in order to "fit in" certainly qualifies, since there is no grounded reason for one's
participation in this practice. Recently, as capitalism has spread throughout the world (a process called globalization), cultural studies has begun to analyze local and global forms of resistance to Western hegemony. Globalization in this context can be understood as the spread of Western civilization; critics argue that it undermines the cultural integrity of other cultures and is therefore repressive, exploitative and harmful to many people in different places. In
the context of cultural studies, the idea of a text includes not only written language, but also films, photographs,
fashion or hairstyles: the texts of cultural studies comprise all the meaningful artifacts of culture.
Similarly, the discipline widens the concept of "culture". "Culture" for a cultural-studies researcher not only includes
traditional high culture (the culture of ruling social groups) and popular culture, but also everyday meanings and
practices. The last two, in fact, have become the main focus of cultural studies. A further and recent approach is
comparative cultural studies, based on the disciplines of comparative literature and cultural studies.
Scholars in the United Kingdom and the United States developed somewhat different versions of cultural studies after
the late 1970s. The British version of cultural studies had originated in the 1950s and 1960s, mainly under the influence
first of Richard Hoggart, E. P. Thompson, and Raymond Williams, and later that of Stuart Hall and others at the Centre
for Contemporary Cultural Studies at the University of Birmingham. This included overtly political, left-wing views,
and criticisms of popular culture as "capitalist" mass culture; it absorbed some of the ideas of the Frankfurt School
critique of the "culture industry" (i.e. mass culture). This emerges in the writings of early British cultural-studies
scholars and their influences: see the work of (for example) Raymond Williams, Stuart Hall, Paul Willis, and Paul
Gilroy. In the United States, Lindlof and Taylor write, "Cultural studies [were] grounded in a pragmatic, liberal-pluralist
tradition". The American version of cultural studies initially concerned itself more with understanding the subjective
and appropriative side of audience reactions to, and uses of, mass culture; for example, American cultural-studies
advocates wrote about the liberatory aspects of fandom. The distinction between American and British strands, however, has faded. Some researchers, especially in early British cultural studies, apply
a Marxist model to the field. This strain of thinking has some influence from the Frankfurt School, but especially
from the structuralist Marxism of Louis Althusser and others. The main focus of an orthodox Marxist approach concentrates
on the production of meaning. This model assumes a mass production of culture and identifies power as residing with
those producing cultural artifacts. In a Marxist view, those who control the means of production (the economic base)
essentially control a culture. Other approaches to cultural studies, such as feminist cultural studies
and later American developments of the field, distance themselves from this view. They criticize the Marxist assumption
of a single, dominant meaning, shared by all, for any cultural product. The non-Marxist approaches suggest that different
ways of consuming cultural artifacts affect the meaning of the product. This view comes through in the book Doing
Cultural Studies: The Story of the Sony Walkman (by Paul du Gay et al.), which seeks to challenge the notion that
those who produce commodities control the meanings that people attribute to them. Feminist cultural analyst, theorist
and art historian Griselda Pollock contributed to cultural studies from viewpoints of art history and psychoanalysis.
The writer Julia Kristeva is among influential voices at the turn of the century, contributing to cultural studies
from the field of art and psychoanalytical French feminism. Raimon Panikkar pointed out 29 ways
in which cultural change can be brought about. Some of these are: growth, development, evolution, involution, renovation,
reconception, reform, innovation, revivalism, revolution, mutation, progress, diffusion, osmosis, borrowing, eclecticism,
syncretism, modernization, indigenization, and transformation. Hence modernization can be seen as similar or related to the Enlightenment, but as a 'looser' term tied to the ideals and values that flourished with it: a belief in objectivity and progress. Modernization is also seen as a belief in a secular society, free from religious influence and committed to the objective and rational (science over religion); on this view, being modern ultimately means not being religious.
The Sahara (Arabic: الصحراء الكبرى, aṣ-ṣaḥrāʾ al-kubrā , 'the Greatest Desert') is the largest hot desert in the world.
It is the third largest desert after Antarctica and the Arctic. Its surface area of 9,400,000 square kilometres (3,600,000
sq mi)—including the Libyan Desert—is comparable to the respective land areas of China or the United
States. The desert comprises much of the land found within North Africa, excluding the fertile coastal region situated
against the Mediterranean Sea, the Atlas Mountains of the Maghreb, and the Nile Valley of Egypt and Sudan. The Sahara
stretches from the Red Sea in the east and the Mediterranean in the north, to the Atlantic Ocean in the west, where
the landscape gradually transitions to a coastal plain. To the south, it is delimited by the Sahel, a belt of semi-arid
tropical savanna around the Niger River valley and Sudan Region of Sub-Saharan Africa. The Sahara can be divided
into several regions, including the western Sahara, the central Ahaggar Mountains, the Tibesti Mountains, the Aïr
Mountains, the Ténéré desert, and the Libyan Desert. Its name is derived from the plural Arabic language word for
desert (صحارى ṣaḥārā [ˈsˤɑħɑːrɑː]). The central part of the Sahara is hyperarid, with little to no vegetation. The
northern and southern reaches of the desert, along with the highlands, have areas of sparse grassland and desert
shrub, with trees and taller shrubs in wadis where moisture collects. In the central, hyperarid part, there are many
subdivisions of the great desert such as the Tanezrouft, the Ténéré, the Libyan Desert, the Eastern Desert, the Nubian
Desert and others. These absolute desert regions are characterized by their extreme aridity, and some years can pass
without any rainfall. To the north, the Sahara skirts the Mediterranean Sea in Egypt and portions of Libya, but in
Cyrenaica and the Maghreb, the Sahara borders the Mediterranean forest, woodland, and scrub ecoregions of northern
Africa, all of which have a Mediterranean climate characterized by hot summers and cool and rainy winters. According
to the botanical criteria of Frank White and geographer Robert Capot-Rey, the northern limit of the Sahara corresponds
to the northern limit of date palm cultivation and the southern limit of the range of esparto, a grass typical of
the Mediterranean climate portion of the Maghreb and Iberia. The northern limit also corresponds to the 100 mm (3.9
in) isohyet of annual precipitation. To the south, the Sahara is bounded by the Sahel, a belt of dry tropical savanna
with a summer rainy season that extends across Africa from east to west. The southern limit of the Sahara is indicated
botanically by the southern limit of Cornulaca monacantha (a drought-tolerant member of the Chenopodiaceae), or northern
limit of Cenchrus biflorus, a grass typical of the Sahel. According to climatic criteria, the southern limit of the
Sahara corresponds to the 150 mm (5.9 in) isohyet of annual precipitation (this is a long-term average, since precipitation
varies annually). The Sahara is the world's largest low-latitude hot desert. The area is located in the horse latitudes
under the subtropical ridge, a significant belt of semi-permanent subtropical warm-core high pressure where the air
from upper levels of the troposphere tends to sink towards the ground. This steady descending airflow causes a warming
and a drying effect in the upper troposphere. The sinking air prevents evaporating water from rising and, therefore,
prevents the adiabatic cooling, which makes cloud formation extremely difficult to nearly impossible. The permanent absence of clouds allows unhindered light and thermal radiation. The stability of the atmosphere above the desert
prevents any convective overturning, thus making rainfall virtually non-existent. As a consequence, the weather tends
to be sunny, dry and stable with a minimal risk of rainfall. Subsiding, diverging, dry air masses associated with
subtropical high-pressure systems are extremely unfavorable for the development of convectional showers. The subtropical
ridge is the predominant factor that explains the hot desert climate (Köppen climate classification BWh) of this
vast region. The subsidence of air is strongest and most effective over the eastern part of the Great Desert, in the Libyan Desert, which is the sunniest, driest and most nearly rainless place on the planet, rivaling the Atacama Desert in Chile and Peru. The rainfall inhibition and the dissipation of cloud cover are most
accentuated over the eastern section of the Sahara rather than the western. The prevailing air mass lying above the
Sahara is the continental tropical (cT) air mass which is hot and dry. Hot, dry air masses primarily form over the
North-African desert from the heating of the vast continental land area, and it affects the whole desert during most
of the year. Because of this extreme heating process, a thermal low is usually noticed near the surface, and is the
strongest and the most developed during the summertime. The Sahara High represents the eastern continental extension
of the Azores High, centered over the North Atlantic Ocean. The subsidence of the Sahara High nearly reaches the
ground during the coolest part of the year, while it is confined to the upper troposphere during the hottest periods. The
effects of local surface low pressure are extremely limited because upper-level subsidence still continues to block
any form of air ascent. Besides being shielded from rain-bearing weather systems by the atmospheric circulation itself, the desert is made even drier by its geographical configuration and location; indeed, the extreme aridity of the Sahara cannot be explained by the subtropical high pressure alone. The Atlas Mountains, found in Algeria, Morocco
and Tunisia also help to enhance the aridity of the northern part of the desert. These major mountain ranges act
as a barrier causing a strong rain shadow effect on the leeward side by dropping much of the humidity brought by
atmospheric disturbances along the polar front which affects the surrounding Mediterranean climates. The primary
source of rain in the Sahara is the equatorial low, a continuous belt of low-pressure systems near the equator, which brings the brief and irregular rainy season to the Sahel and the southern Sahara. The Sahara's aridity is not due to a lack of moisture but to the lack of a precipitation-generating mechanism. Rainfall in this giant
desert has to overcome the physical and atmospheric barriers that normally prevent the production of precipitation.
The harsh climate of the Sahara is characterized by extremely low, unreliable, highly erratic rainfall; extremely
high sunshine duration values; high temperatures year-round; negligible rates of relative humidity, a significant
diurnal temperature variation and extremely high levels of potential evaporation which are the highest recorded worldwide.
The sky is usually clear above the desert and the sunshine duration is extremely high everywhere in the Sahara. Most
of the desert enjoys more than 3,600 h of bright sunshine annually or over 82% of the time and a wide area in the
eastern part experiences in excess of 4,000 h of bright sunshine a year or over 91% of the time, and the highest
values are very close to the theoretical maximum value. A value of 4,300 h, or 98% of the time, has been recorded in Upper Egypt (Aswan, Luxor) and in the Nubian Desert (Wadi Halfa). The annual average direct solar irradiation is
around 2,800 kWh/(m2 year) in the Great Desert. The Sahara has a huge potential for solar energy production. The
constantly high position of the sun, the extremely low relative humidity, the lack of vegetation and rainfall make
the Great Desert the hottest continuously large area worldwide and certainly the hottest place on Earth during summertime
in some spots. The average high temperature exceeds 38 to 40 °C (100.4 to 104 °F) during the hottest month nearly
everywhere in the desert except at very high mountainous areas. The highest officially recorded average high temperature
was 47 °C (116.6 °F) in a remote desert town in the Algerian Desert called Bou Bernous with an elevation of 378 meters
above sea level. It's the world's highest recorded average high temperature and only Death Valley, California rivals
it. Other hot spots in Algeria such as Adrar, Timimoun, In Salah, Ouallene, Aoulef, Reggane with an elevation between
200 and 400 meters above sea level get slightly lower summer average highs around 46 °C (114.8 °F) during the hottest
months of the year. In Salah, a town well known in Algeria for its extreme heat, has average high temperatures of 43.8 °C (110.8 °F), 46.4 °C (115.5 °F), 45.5 °C (113.9 °F) and 41.9 °C (107.4 °F) in June, July, August and September respectively.
In fact, there are even hotter spots in the Sahara, but they are located in extremely remote areas, especially in
the Azalai, lying in northern Mali. The major part of the desert experiences around 3 to 5 months when the average high strictly exceeds 40 °C (104 °F). The southern central part of the desert experiences up to 6 to 7 months when the average high temperature strictly exceeds 40 °C (104 °F), which shows the constancy and the length of the really
hot season in the Sahara. Some examples of this are Bilma, Niger and Faya-Largeau, Chad. The annual average daily
temperature exceeds 20 °C (68 °F) everywhere and can approach 30 °C (86 °F) in the hottest regions year-round. However,
most of the desert has a value in excess of 25 °C (77 °F). The sand and ground temperatures are even more extreme.
During daytime, the sand temperature is extremely high as it can easily reach 80 °C (176 °F) or more. A sand temperature
of 83.5 °C (182.3 °F) has been recorded in Port Sudan. Ground temperatures of 72 °C (161.6 °F) have been recorded
in the Adrar of Mauritania and a value of 75 °C (167 °F) has been measured in Borkou, northern Chad. Due to lack
of cloud cover and very low humidity, the desert usually features high diurnal temperature variations between days
and nights. However, it's a myth that the nights are cold after extremely hot days in the Sahara. The average diurnal
temperature range is typically between 13 °C (55.4 °F) and 20 °C (68 °F). The lowest values are found along the coastal
regions due to high humidity and are often even lower than 10 °C (50 °F), while the highest values are found in inland
desert areas where the humidity is the lowest, mainly in the southern Sahara. Still, it's true that winter nights
can be cold as it can drop to the freezing point and even below, especially in high-elevation areas. The average
annual rainfall ranges from very low in the northern and southern fringes of the desert to nearly non-existent over
the central and the eastern part. The thin northern fringe of the desert receives more winter cloudiness and rainfall
due to the arrival of low pressure systems over the Mediterranean Sea along the polar front, although very attenuated
by the rain shadow effects of the mountains, and the annual average rainfall ranges from 100 mm (3.9 in) to 250 mm (9.8 in). For example, Biskra, Algeria and Ouarzazate, Morocco are found in this zone. The southern fringe of the desert, along the border with the Sahel, receives summer cloudiness and rainfall due to the arrival of the Intertropical Convergence Zone from the south, and the annual average rainfall ranges from 100 mm (3.9 in) to 250 mm (9.8 in).
For example, Timbuktu, Mali and Agadez, Niger are found in this zone. The vast central hyper-arid core of the desert
is virtually never affected by northerly or southerly atmospheric disturbances and permanently remains under the
influence of the strongest anticyclonic weather regime and the annual average rainfall can drop to less than 1 mm
(0.04 in). In fact, most of the Sahara receives less than 20 mm (0.79 in). Of the 9,000,000 km2 of desert land in
the Sahara, an area of about 2,800,000 km2 (about 31% of the total area) receives an annual average rainfall amount
of 10 mm (0.39 in) or less, while some 1,500,000 km2 (about 17% of the total area) receive an average of 5 mm or
less. The annual average rainfall is virtually zero over a wide area of some 1,000,000 km2 in the eastern Sahara
comprising deserts of Libya, Egypt and Sudan (Tazirbu, Kufra, Dakhla, Kharga, Farafra, Siwa, Asyut, Sohag, Luxor,
Aswan, Abu Simbel, Wadi Halfa) where the long-term mean approximates 0.5 mm per year. The rainfall is very unreliable
and erratic in the Sahara as it may vary considerably year by year. In full contrast to the negligible annual rainfall
amounts, the annual rates of potential evaporation are extraordinarily high, roughly ranging from 2,500 mm/year to
more than 6,000 mm/year in the whole desert. Nowhere else on Earth has air been found as dry and evaporative as in
the Sahara region. With such evaporative power, the Sahara can only become further desiccated, and the moisture deficit is tremendous. The South Saharan steppe and woodlands ecoregion is a narrow band running east
and west between the hyper-arid Sahara and the Sahel savannas to the south. Movements of the equatorial Intertropical
Convergence Zone (ITCZ) bring summer rains during July and August which average 100 to 200 mm (3.9 to 7.9 in) but
vary greatly from year to year. These rains sustain summer pastures of grasses and herbs, with dry woodlands and
shrublands along seasonal watercourses. This ecoregion covers 1,101,700 km2 (425,400 mi2) in Algeria, Chad, Mali,
Mauritania, and Sudan. The central Sahara is estimated to include five hundred species of plants, which is extremely
low considering the huge extent of the area. Plants such as acacia trees, palms, succulents, spiny shrubs, and grasses
have adapted to the arid conditions, by growing lower to avoid water loss by strong winds, by storing water in their
thick stems to use it in dry periods, by having long roots that travel horizontally to reach the maximum area of
water and to find any surface moisture and by having small thick leaves or needles to prevent water loss by evapo-transpiration.
Plant leaves may dry out totally and then recover. The Saharan cheetah (northwest African cheetah) lives in Algeria,
Togo, Niger, Mali, Benin, and Burkina Faso. There remain fewer than 250 mature cheetahs, which are very cautious,
fleeing any human presence. The cheetah avoids the sun from April to October, seeking the shelter of shrubs such
as balanites and acacias. They are unusually pale. The other cheetah subspecies (northeast African cheetah) lives
in Chad, Sudan and the eastern region of Niger; however, it is currently extinct in the wild in Egypt and Libya. There are approximately 2,000 mature individuals left in the wild. Human activities are more likely to affect the
habitat in areas of permanent water (oases) or where water comes close to the surface. Here, the local pressure on
natural resources can be intense. The remaining populations of large mammals have been greatly reduced by hunting
for food and recreation. In recent years development projects have started in the deserts of Algeria and Tunisia
using irrigated water pumped from underground aquifers. These schemes often lead to soil degradation and salinization.
People have lived on the edge of the desert since the last ice age, thousands of years ago. The Sahara was then a much
wetter place than it is today. Over 30,000 petroglyphs of river animals such as crocodiles survive, with half found
in the Tassili n'Ajjer in southeast Algeria. Fossils of dinosaurs, including Afrovenator, Jobaria and Ouranosaurus,
have also been found here. The modern Sahara, though, is not lush in vegetation, except in the Nile Valley, at a
few oases, and in the northern highlands, where Mediterranean plants such as the olive tree are found to grow. It
was long believed that the region had been this way since about 1600 BCE, after shifts in the Earth's axis increased
temperatures and decreased precipitation. However, this theory has recently been called into question, after samples taken from several seven-million-year-old sand deposits led scientists to reconsider the timeline for desertification.
During the Neolithic Era, before the onset of desertification, around 9500 BCE the central Sudan had been a rich
environment supporting a large population ranging across what is now barren desert, like the Wadi el-Qa'ab. By the
5th millennium BCE, the people who inhabited what is now called Nubia, were full participants in the "agricultural
revolution", living a settled lifestyle with domesticated plants and animals. Saharan rock art of cattle and herdsmen
suggests the presence of a cattle cult like those found in Sudan and other pastoral societies in Africa today. Megaliths
found at Nabta Playa are overt examples of probably the world's first known archaeoastronomy devices, predating Stonehenge
by some 2,000 years. This complexity, as observed at Nabta Playa, and as expressed by different levels of authority
within the society there, likely formed the basis for the structure of both the Neolithic society at Nabta and the
Old Kingdom of Egypt. By 6000 BCE predynastic Egyptians in the southwestern corner of Egypt were herding cattle and
constructing large buildings. Subsistence in organized and permanent settlements in predynastic Egypt by the middle
of the 6th millennium BCE centered predominantly on cereal and animal agriculture: cattle, goats, pigs and sheep.
Metal objects replaced prior ones of stone. Tanning of animal skins, pottery and weaving were commonplace in this
era also. There are indications of seasonal or only temporary occupation of the Al Fayyum in the 6th millennium BCE,
with food activities centering on fishing, hunting and food-gathering. Stone arrowheads, knives and scrapers from
the era are commonly found. Burial items included pottery, jewelry, farming and hunting equipment, and assorted foods
including dried meat and fruit. Burial in desert environments appears to enhance Egyptian preservation rites, and
the dead were buried facing due west. By 3400 BCE, the Sahara was as dry as it is today, due to reduced precipitation
and higher temperatures resulting from a shift in the Earth's orbit. As a result of this aridification, it became
a largely impenetrable barrier to humans, with the remaining settlements mainly being concentrated around the numerous
oases that dot the landscape. Little trade or commerce is known to have passed through the interior in subsequent
periods, the only major exception being the Nile Valley. The Nile, however, was impassable at several cataracts,
making trade and contact by boat difficult. By 500 BCE, Greeks arrived in the desert. Greek traders spread along
the eastern coast of the desert, establishing trading colonies along the Red Sea. The Carthaginians explored the
Atlantic coast of the desert, but the turbulence of the waters and the lack of markets caused a lack of presence
further south than modern Morocco. Centralized states thus surrounded the desert on the north and east; it remained
outside the control of these states. Raids from the nomadic Berber people of the desert were a constant concern of
those living on the edge of the desert. An urban civilization, the Garamantes, arose around 500 BCE in the heart
of the Sahara, in a valley that is now called the Wadi al-Ajal in Fezzan, Libya. The Garamantes achieved this development
by digging tunnels far into the mountains flanking the valley to tap fossil water and bring it to their fields. The
Garamantes grew populous and strong, conquering their neighbors and capturing many slaves (which were put to work
extending the tunnels). The ancient Greeks and the Romans knew of the Garamantes and regarded them as uncivilized
nomads. However, they traded with the Garamantes, and a Roman bath has been found in the Garamantes capital of Garama.
Archaeologists have found eight major towns and many other important settlements in the Garamantes territory. The
Garamantes civilization eventually collapsed after they had depleted available water in the aquifers and could no
longer sustain the effort to extend the tunnels further into the mountains. The Byzantine Empire ruled the northern
shores of the Sahara from the 5th to the 7th centuries. After the Muslim conquest of the Arabian Peninsula, the Muslim conquest of North Africa began in the mid-7th to early 8th centuries, and Islamic influence expanded rapidly across the Sahara. By the end of 641 all of Egypt was in Muslim hands. The trade across the desert intensified. A significant
slave trade crossed the desert. It has been estimated that from the 10th to 19th centuries some 6,000 to 7,000 slaves
were transported north each year. In the 16th century the northern fringe of the Sahara, such as coastal regencies
in present-day Algeria and Tunisia, as well as some parts of present-day Libya, together with the semi-autonomous
kingdom of Egypt, were occupied by the Ottoman Empire. From 1517 Egypt was a valued part of the Ottoman Empire, ownership
of which provided the Ottomans with control over the Nile Valley, the eastern Mediterranean and North Africa. A benefit of Ottoman rule was the freedom of movement for citizens and goods. Trade exploited the Ottoman land routes
to handle the spices, gold and silk from the East, manufactures from Europe, and the slave and gold traffic from
Africa. Arabic continued as the local language and Islamic culture was much reinforced. The Sahel and southern Sahara
regions were home to several independent states or to roaming Tuareg clans. European colonialism in the Sahara began
in the 19th century. France conquered the regency of Algiers from the Ottomans in 1830, and French rule spread south
from Algeria and eastwards from Senegal into the upper Niger to include present-day Algeria, Chad, Mali then French
Sudan including Timbuktu, Mauritania, Morocco (1912), Niger, and Tunisia (1881). By the beginning of the 20th century,
the trans-Saharan trade had clearly declined because goods were moved through more modern and efficient means, such
as airplanes, rather than across the desert. Arabic dialects are the most widely spoken languages in the Sahara.
The Beja live in the Red Sea Hills of southeastern Egypt and eastern Sudan. Arabic, Berber and its variants now regrouped
under the term Amazigh (which includes the Guanche language spoken by the original Berber inhabitants of the Canary
Islands) and Beja languages are part of the Afro-Asiatic or Hamito-Semitic family. Unlike neighboring
West Africa and the central governments of the states that comprise the Sahara, the French language bears little
relevance to inter-personal discourse and commerce within the region, its people retaining staunch ethnic and political
affiliations with Tuareg and Berber leaders and culture. The legacy of the French colonial era administration is
primarily manifested in the territorial reorganization enacted by the Third and Fourth republics, which engendered
artificial political divisions within a hitherto isolated and porous region. Diplomacy with local clients was primarily
conducted in Arabic, which was the traditional language of bureaucratic affairs. Mediation of disputes and inter-agency
communication was served by interpreters contracted by the French government, who, according to Keenan, "documented
a space of intercultural mediation," contributing much to preserving indigenous cultural identities in the region.
The rule of law is the legal principle that law should govern a nation, as opposed to being governed by arbitrary decisions
of individual government officials. It primarily refers to the influence and authority of law within society, particularly
as a constraint upon behaviour, including behaviour of government officials. The phrase can be traced back to 16th
century Britain, and in the following century the Scottish theologian Samuel Rutherford used the phrase in his argument
against the divine right of kings. The rule of law was further popularized in the 19th century by British jurist
A. V. Dicey. The concept, if not the phrase, was familiar to ancient philosophers such as Aristotle, who wrote "Law
should govern". Rule of law implies that every citizen is subject to the law, including law makers themselves. In
this sense, it stands in contrast to an autocracy, dictatorship, or oligarchy where the rulers are held above the
law. Lack of the rule of law can be found in both democracies and dictatorships, for example because of neglect or
ignorance of the law, and the rule of law is more apt to decay if a government has insufficient corrective mechanisms
for restoring it. Government based upon the rule of law is called nomocracy. In the West, the ancient Greeks initially
regarded the best form of government as rule by the best men. Plato advocated a benevolent monarchy ruled by an idealized
philosopher king, who was above the law. Plato nevertheless hoped that the best men would be good at respecting established
laws, explaining that "Where the law is subject to some other authority and has none of its own, the collapse of
the state, in my view, is not far off; but if law is the master of the government and the government is its slave,
then the situation is full of promise and men enjoy all the blessings that the gods shower on a state." Going further than Plato, Aristotle flatly opposed letting the highest officials wield power beyond guarding and serving the laws; in other words, Aristotle advocated the rule of law. There has recently been an effort to reevaluate the
influence of the Bible on Western constitutional law. In the Old Testament, there was some language in Deuteronomy
imposing restrictions on the Jewish king, regarding such things as how many wives he could have, and how many horses
he could own for his personal use. According to Professor Bernard M. Levinson, "This legislation was so utopian in
its own time that it seems never to have been implemented...." The Deuteronomic social vision may have influenced
opponents of the divine right of kings, including Bishop John Ponet in sixteenth-century England. In 1607, English
Chief Justice Sir Edward Coke said in the Case of Prohibitions (according to his own report) "that the law was the
golden met-wand and measure to try the causes of the subjects; and which protected His Majesty in safety and peace:
with which the King was greatly offended, and said, that then he should be under the law, which was treason to affirm,
as he said; to which I said, that Bracton saith, quod Rex non debet esse sub homine, sed sub Deo et lege (That the
King ought not to be under any man but under God and the law.)." Despite wide use by politicians, judges and academics,
the rule of law has been described as "an exceedingly elusive notion". Among modern legal theorists, one finds that
at least two principal conceptions of the rule of law can be identified: a formalist or "thin" definition, and a
substantive or "thick" definition; one occasionally encounters a third "functional" conception. Formalist definitions
of the rule of law do not make a judgment about the "justness" of law itself, but define specific procedural attributes
that a legal framework must have in order to be in compliance with the rule of law. Substantive conceptions of the
rule of law go beyond this and include certain substantive rights that are said to be based on, or derived from,
the rule of law. Most legal theorists believe that the rule of law has purely formal characteristics, meaning that
the law must be publicly declared, with prospective application, and possess the characteristics of generality, equality,
and certainty, but there are no requirements with regard to the content of the law. Others, including a few legal
theorists, believe that the rule of law necessarily entails protection of individual rights. Within legal theory,
these two approaches to the rule of law are seen as the two basic alternatives, respectively labelled the formal
and substantive approaches. Still, there are other views as well. Some believe that democracy is part of the rule
of law. The "formal" interpretation is more widespread than the "substantive" interpretation. Formalists hold that
the law must be prospective, well-known, and have characteristics of generality, equality, and certainty. Other than
that, the formal view contains no requirements as to the content of the law. This formal approach allows laws that
protect democracy and individual rights, but recognizes the existence of "rule of law" in countries that do not necessarily
have such laws protecting democracy or individual rights. The functional interpretation of the term "rule of law",
consistent with the traditional English meaning, contrasts the "rule of law" with the "rule of man." According to
the functional view, a society in which government officers have a great deal of discretion has a low degree of "rule
of law", whereas a society in which government officers have little discretion has a high degree of "rule of law".
Upholding the rule of law can sometimes require the punishment of those who commit offenses that are justifiable
under natural law but not statutory law. The rule of law is thus somewhat at odds with flexibility, even when flexibility
may be preferable. The rule of law has been considered as one of the key dimensions that determine the quality and
good governance of a country. Research, like the Worldwide Governance Indicators, defines the rule of law as: "the
extent to which agents have confidence in and abide by the rules of society, and in particular the quality of contract
enforcement, the police and the courts, as well as the likelihood of crime or violence." Based on this definition
the Worldwide Governance Indicators project has developed aggregate measurements for the rule of law in more than
200 countries, as seen in the map below. A government based on the rule of law can be called a "nomocracy", from
the Greek nomos (law) and kratos (power or rule). All government officers of the United States, including the President,
the Justices of the Supreme Court, state judges and legislators, and all members of Congress, pledge first and foremost
to uphold the Constitution. These oaths affirm that the rule of law is superior to the rule of any human leader.
At the same time, the federal government has considerable discretion: the legislative branch is free to decide what
statutes it will write, as long as it stays within its enumerated powers and respects the constitutionally protected
rights of individuals. Likewise, the judicial branch has a degree of judicial discretion, and the executive branch
also has various discretionary powers including prosecutorial discretion. Scholars continue to debate whether the
U.S. Constitution adopted a particular interpretation of the "rule of law," and if so, which one. For example, John
Harrison asserts that the word "law" in the Constitution is simply defined as that which is legally binding, rather
than being "defined by formal or substantive criteria," and therefore judges do not have discretion to decide that
laws fail to satisfy such unwritten and vague criteria. Law Professor Frederick Mark Gedicks disagrees, writing that
Cicero, Augustine, Thomas Aquinas, and the framers of the U.S. Constitution believed that an unjust law was not really
a law at all. Others argue that the rule of law has survived but was transformed to allow for the exercise of discretion
by administrators. For much of American history, the dominant notion of the rule of law, in this setting, has been
some version of A. V. Dicey's: “no man is punishable or can be lawfully made to suffer in body or goods except for
a distinct breach of law established in the ordinary legal manner before the ordinary Courts of the land.” That is,
individuals should be able to challenge an administrative order by bringing suit in a court of general jurisdiction.
As the dockets of worker compensation commissions, public utility commissions and other agencies burgeoned, it soon
became apparent that letting judges decide for themselves all the facts in a dispute (such as the extent of an injury
in a worker's compensation case) would overwhelm the courts and destroy the advantages of specialization that led
to the creation of administrative agencies in the first place. Even Charles Evans Hughes, a Chief Justice of the
United States, believed “you must have administration, and you must have administration by administrative officers.”
By 1941, a compromise had emerged. If administrators adopted procedures that more or less tracked "the ordinary legal
manner" of the courts, further review of the facts by "the ordinary Courts of the land" was unnecessary. That is,
if you had your "day in commission," the rule of law did not require a further "day in court." Thus Dicey's rule
of law was recast into a purely procedural form. James Wilson said during the Philadelphia Convention in 1787 that,
"Laws may be unjust, may be unwise, may be dangerous, may be destructive; and yet not be so unconstitutional as to
justify the Judges in refusing to give them effect." George Mason agreed that judges "could declare an unconstitutional
law void. But with regard to every law, however unjust, oppressive or pernicious, which did not come plainly under
this description, they would be under the necessity as judges to give it a free course." Chief Justice John Marshall
(joined by Justice Joseph Story) took a similar position in 1827: "When its existence as law is denied, that existence
cannot be proved by showing what are the qualities of a law." East Asian cultures are influenced by two schools of
thought, Confucianism, which advocated good governance as rule by leaders who are benevolent and virtuous, and Legalism,
which advocated strict adherence to law. The influence of one school of thought over the other has varied throughout
the centuries. One study indicates that throughout East Asia, only South Korea, Singapore, Japan, Taiwan and Hong
Kong have societies that are robustly committed to a law-bound state. According to Awzar Thi, a member of the Asian
Human Rights Commission, the rule of law in Thailand, Cambodia, and most of Asia is weak or nonexistent. In countries
such as China and Vietnam, the transition to a market economy has been a major factor in a move toward the rule of
law, because a rule of law is important to foreign investors and to economic development. It remains unclear whether
the rule of law in countries like China and Vietnam will be limited to commercial matters or will spill into other
areas as well, and if so whether that spillover will enhance prospects for related values such as democracy and human
rights. The rule of law in China has been widely discussed and debated by both legal scholars and politicians in
China. In Thailand, a kingdom that has had a constitution since the initial attempt to overthrow the absolute monarchy
system in 1932, the rule of law has been more a principle than an actual practice. Ancient prejudices and political
bias have been present in all three branches of government since their founding, and justice has been administered
formally according to the law while in fact aligning more closely with royalist principles that are still advocated
in the 21st century. In November 2013, Thailand faced a further threat to the rule of law when the executive branch
rejected a supreme court decision over how to select senators.
In India, the longest constitutional text in the history of the world has governed that country since 1950. Although
the Constitution of India may have been intended to provide details that would limit the opportunity for judicial
discretion, the more text there is in a constitution the greater opportunity the judiciary may have to exercise judicial
review. According to Indian journalist Harish Khare, "The rule of law or rather the Constitution [is] in danger of
being supplanted by the rule of judges." In 1959, an international gathering of over 185 judges, lawyers, and law
professors from 53 countries, meeting in New Delhi and speaking as the International Commission of Jurists, made
a declaration as to the fundamental principle of the rule of law. This was the Declaration of Delhi. They declared
that the rule of law implies certain rights and freedoms, that it implies an independent judiciary, and that it implies
social, economic and cultural conditions conducive to human dignity. The Declaration of Delhi did not, however, suggest
that the rule of law requires legislative power to be subject to judicial review. The United Nations General Assembly
has considered the rule of law as an agenda item since 1992, with renewed interest since 2006, and has adopted resolutions at its last
three sessions. The Security Council has held a number of thematic debates on the rule of law, and adopted resolutions
emphasizing the importance of these issues in the context of women, peace and security, children in armed conflict,
and the protection of civilians in armed conflict. The Peacebuilding Commission has also regularly addressed rule
of law issues with respect to countries on its agenda. The Vienna Declaration and Programme of Action also requires
the rule of law be included in human rights education. The International Development Law Organization (IDLO) is an
intergovernmental organization with a joint focus on the promotion of rule of law and development. It works to empower
people and communities to claim their rights, and provides governments with the know-how to realize them. It supports
emerging economies and middle-income countries in strengthening their legal capacity and rule of law framework for sustainable
development and economic opportunity. It is the only intergovernmental organization with an exclusive mandate to
promote the rule of law and has experience working in more than 170 countries around the world. One important aspect
of the rule-of-law initiatives is the study and analysis of the rule of law’s impact on economic development. The
rule-of-law movement cannot be fully successful in transitional and developing countries without an answer to the
question: does the rule of law matter for economic development or not? Constitutional economics is the study of the
compatibility of economic and financial decisions within existing constitutional law frameworks, and such a framework
includes government spending on the judiciary, which, in many transitional and developing countries, is completely
controlled by the executive. It is useful to distinguish between two methods of corruption of the judiciary:
corruption by the executive branch and corruption by private actors. The rule of law is especially important
as an influence on economic development in developing and transitional countries. To date, the term “rule of
law” has been used primarily in the English-speaking countries, and it is not yet fully clarified even with regard
to such well-established democracies as, for instance, Sweden, Denmark, France, Germany, or Japan. A common language
between lawyers of common law and civil law countries as well as between legal communities of developed and developing
countries is critically important for research of links between the rule of law and real economy. The economist F.
A. Hayek analyzed how the Rule of Law might be beneficial to the free market. Hayek proposed that under the Rule
of Law individuals would be able to make wise investments and future plans with some confidence in a successful return
on investment when he stated: "under the Rule of Law the government is prevented from stultifying individual efforts
by ad hoc action. Within the known rules of the game the individual is free to pursue his personal ends and desires,
certain that the powers of government will not be used deliberately to frustrate his efforts."
Tibet (i/tᵻˈbɛt/; Wylie: Bod, pronounced [pʰø̀ʔ]; Chinese: 西藏; pinyin: Xīzàng) is a region on the Tibetan Plateau in Asia.
It is the traditional homeland of the Tibetan people as well as some other ethnic groups such as Monpa, Qiang and
Lhoba peoples and is now also inhabited by considerable numbers of Han Chinese and Hui people. Tibet is the highest
region on Earth, with an average elevation of 4,900 metres (16,000 ft). The highest elevation in Tibet is Mount Everest,
earth's highest mountain rising 8,848 m (29,029 ft) above sea level. The Tibetan Empire emerged in the 7th century,
but with the fall of the empire the region soon divided into a variety of territories. The bulk of western and central
Tibet (Ü-Tsang) was often at least nominally unified under a series of Tibetan governments in Lhasa, Shigatse, or
nearby locations; these governments were at various times under Mongol and Chinese overlordship. The eastern regions
of Kham and Amdo often maintained a more decentralized indigenous political structure, being divided among a number
of small principalities and tribal groups, while also often falling more directly under Chinese rule after the Battle
of Chamdo; most of this area was eventually incorporated into the Chinese provinces of Sichuan and Qinghai. The current
borders of Tibet were generally established in the 18th century. Following the Xinhai Revolution against the Qing
dynasty in 1912, Qing soldiers were disarmed and escorted out of Tibet Area (Ü-Tsang). The region subsequently declared
its independence in 1913 without recognition by the subsequent Chinese Republican government. Later, Lhasa took control
of the western part of Xikang, China. The region maintained its autonomy until 1951 when, following the Battle of
Chamdo, Tibet became incorporated into the People's Republic of China, and the previous Tibetan government was abolished
in 1959 after a failed uprising. Today, China governs western and central Tibet as the Tibet Autonomous Region while
the eastern areas are now mostly ethnic autonomous prefectures within Sichuan, Qinghai and other neighbouring provinces.
There are tensions regarding Tibet's political status and dissident groups that are active in exile. It is also said
that Tibetan activists in Tibet have been arrested or tortured. The economy of Tibet is dominated by subsistence
agriculture, though tourism has become a growing industry in recent decades. The dominant religion in Tibet is Tibetan
Buddhism; in addition there is Bön, which is similar to Tibetan Buddhism, and there are also Tibetan Muslims and
Christian minorities. Tibetan Buddhism is a primary influence on the art, music, and festivals of the region. Tibetan
architecture reflects Chinese and Indian influences. Staple foods in Tibet are roasted barley, yak meat, and butter
tea. The Tibetan name for their land, Bod བོད་, means "Tibet" or "Tibetan Plateau", although it originally meant
the central region around Lhasa, now known in Tibetan as Ü. The Standard Tibetan pronunciation of Bod, [pʰøʔ˨˧˨],
is transcribed Bhö in Tournadre Phonetic Transcription, Bö in the THL Simplified Phonetic Transcription and Poi in
Tibetan pinyin. Some scholars believe the first written reference to Bod "Tibet" was to the ancient Bautai people recorded
in the Egyptian Greek works Periplus of the Erythraean Sea (1st century CE) and Geographia (Ptolemy, 2nd century
CE), itself from the Sanskrit form Bhauṭṭa of the Indian geographical tradition. The modern Standard Chinese exonym
for the ethnic Tibetan region is Zangqu (Chinese: 藏区; pinyin: Zàngqū), which derives by metonymy from the Tsang region
around Shigatse plus the addition of a Chinese suffix, 区 qū, which means "area, district, region, ward". Tibetan
people, language, and culture, regardless of where they are from, are referred to as Zang (Chinese: 藏; pinyin: Zàng)
although the geographical term Xīzàng is often limited to the Tibet Autonomous Region. The term Xīzàng was coined
during the Qing dynasty in the reign of the Jiaqing Emperor (1796–1820) through the addition of a prefix meaning
"west" (西 xī) to Zang. The best-known medieval Chinese name for Tibet is Tubo (Chinese: 吐蕃 also written as 土蕃 or
土番; pinyin: Tǔbō or Tǔfān). This name first appears in Chinese characters as 土番 in the 7th century (Li Tai) and as
吐蕃 in the 10th century (Old Book of Tang describing 608–609 emissaries from Tibetan King Namri Songtsen to Emperor
Yang of Sui). In the Middle Chinese spoken during that period, as reconstructed by William H. Baxter, 土番 was pronounced
thux-phjon and 吐蕃 was pronounced thux-pjon (with the x representing tone). Other pre-modern Chinese names for Tibet
include Wusiguo (Chinese: 烏斯國; pinyin: Wūsīguó; cf. Tibetan dbus, Ü, [wyʔ˨˧˨]), Wusizang (Chinese: 烏斯藏; pinyin: Wūsīzàng,
cf. Tibetan dbus-gtsang, Ü-Tsang), Tubote (Chinese: 圖伯特; pinyin: Túbótè), and Tanggute (Chinese: 唐古忒; pinyin: Tánggǔtè,
cf. Tangut). American Tibetologist Elliot Sperling has argued in favor of a recent tendency by some authors writing
in Chinese to revive the term Tubote (simplified Chinese: 图伯特; traditional Chinese: 圖伯特; pinyin: Túbótè) for modern
use in place of Xizang, on the grounds that Tubote more clearly includes the entire Tibetan plateau rather than simply
the Tibet Autonomous Region. The Tibetan language has numerous regional dialects which are generally not
mutually intelligible. It is employed throughout the Tibetan plateau and Bhutan and is also spoken in parts of Nepal
and northern India, such as Sikkim. In general, the dialects of central Tibet (including Lhasa), Kham, Amdo and some
smaller nearby areas are considered Tibetan dialects. Other forms, particularly Dzongkha, Sikkimese, Sherpa, and
Ladakhi, are considered by their speakers, largely for political reasons, to be separate languages. However, if the
latter group of Tibetan-type languages are included in the calculation, then 'greater Tibetan' is spoken by approximately
6 million people across the Tibetan Plateau. Tibetan is also spoken by approximately 150,000 exile speakers who have
fled from modern-day Tibet to India and other countries. Although spoken Tibetan varies according to the region,
the written language, based on Classical Tibetan, is consistent throughout. This is probably due to the long-standing
influence of the Tibetan empire, whose rule embraced (and extended at times far beyond) the present Tibetan linguistic
area, which runs from northern Pakistan in the west to Yunnan and Sichuan in the east, and from north of Qinghai
Lake south as far as Bhutan. The Tibetan language has its own script which it shares with Ladakhi and Dzongkha, and
which is derived from the ancient Indian Brāhmī script. The earliest Tibetan historical texts identify the Zhang
Zhung culture as a people who migrated from the Amdo region into what is now the region of Guge in western Tibet.
Zhang Zhung is considered to be the original home of the Bön religion. By the 1st century BCE, a neighboring kingdom
arose in the Yarlung valley, and the Yarlung king, Drigum Tsenpo, attempted to remove the influence of the Zhang
Zhung by expelling Zhang Zhung's Bön priests from Yarlung. He was assassinated, and Zhang Zhung continued its dominance
of the region until it was annexed by Songtsen Gampo in the 7th century. Prior to Songtsen Gampo, the kings of Tibet
were more mythological than factual, and there is insufficient evidence of their existence. The history of a unified
Tibet begins with the rule of Songtsen Gampo (604–650 CE), who united parts of the Yarlung River Valley and founded
the Tibetan Empire. He also brought in many reforms, and Tibetan power spread rapidly, creating a large and powerful
empire. It is traditionally considered that his first wife was the Princess of Nepal, Bhrikuti, and that she played
a great role in the establishment of Buddhism in Tibet. In 640 he married Princess Wencheng, the niece of the powerful
Chinese emperor Taizong of Tang China. In 821/822 CE Tibet and China signed a peace treaty. A bilingual account of
this treaty, including details of the borders between the two countries, is inscribed on a stone pillar which stands
outside the Jokhang temple in Lhasa. Tibet continued as a Central Asian empire until the mid-9th century, when a
civil war over succession led to the collapse of imperial Tibet. The period that followed is known traditionally
as the Era of Fragmentation, when political control over Tibet became divided between regional warlords and tribes
with no dominant centralized authority. The Mongol Yuan dynasty ruled Tibet through a top-level administrative
department, the Bureau of Buddhist and Tibetan Affairs, or Xuanzheng Yuan. One of the department's purposes was
to select a dpon-chen ('great administrator'), usually appointed by the lama and confirmed by the Mongol emperor
in Beijing. The Sakya lama retained a degree of autonomy, acting as the political authority of the region, while
the dpon-chen held administrative and military power. Mongol rule of Tibet remained separate from the main provinces
of China, but the region existed under the administration of the Yuan dynasty. If the Sakya lama ever came into conflict
with the dpon-chen, the dpon-chen had the authority to send Chinese troops into the region. Tibet retained nominal
power over religious and regional political affairs, while the Mongols managed a structural and administrative rule
over the region, reinforced by the rare military intervention. This existed as a "diarchic structure" under the Yuan
emperor, with power primarily in favor of the Mongols. Mongolian prince Khuden gained temporal power in Tibet in
the 1240s and sponsored Sakya Pandita, whose seat became the capital of Tibet. Drogön Chögyal Phagpa, Sakya Pandita's
nephew, became Imperial Preceptor of Kublai Khan, founder of the Yuan dynasty. Between 1346 and 1354, Tai Situ Changchub
Gyaltsen toppled the Sakya and founded the Phagmodrupa Dynasty. The following 80 years saw the founding of the Gelug
school (also known as Yellow Hats) by the disciples of Je Tsongkhapa, and the founding of the important Ganden, Drepung
and Sera monasteries near Lhasa. However, internal strife within the dynasty and the strong localism of the various
fiefs and political-religious factions led to a long series of internal conflicts. The minister family Rinpungpa,
based in Tsang (West Central Tibet), dominated politics after 1435. In 1565 they were overthrown by the Tsangpa Dynasty
of Shigatse which expanded its power in different directions of Tibet in the following decades and favoured the Karma
Kagyu sect. The 5th Dalai Lama is known for unifying the Tibetan heartland under the control of the Gelug school
of Tibetan Buddhism, after defeating the rival Kagyu and Jonang sects and the secular ruler, the Tsangpa prince,
in a prolonged civil war. His efforts were successful in part because of aid from Güshi Khan, the Oirat leader of
the Khoshut Khanate. With Güshi Khan as a largely uninvolved overlord, the 5th Dalai Lama and his intimates established
a civil administration which is referred to by historians as the Lhasa state. This Tibetan regime or government is
also referred to as the Ganden Phodrang. Qing dynasty rule in Tibet began with their 1720 expedition to the country
when they expelled the invading Dzungars. Amdo came under Qing control in 1724, and eastern Kham was incorporated
into neighbouring Chinese provinces in 1728. Meanwhile, the Qing government sent resident commissioners called Ambans
to Lhasa. In 1750 the Ambans and the majority of the Han Chinese and Manchus living in Lhasa were killed in a riot,
and Qing troops arrived quickly and suppressed the rebels in the next year. Like the preceding Yuan dynasty, the
Manchus of the Qing dynasty exerted military and administrative control of the region, while granting it a degree
of political autonomy. The Qing commander publicly executed a number of supporters of the rebels and, as in 1723
and 1728, made changes in the political structure and drew up a formal organization plan. The Qing now restored the
Dalai Lama as ruler, leading the governing council called Kashag, but elevated the role of Ambans to include more
direct involvement in Tibetan internal affairs. At the same time the Qing took steps to counterbalance the power
of the aristocracy by adding officials recruited from the clergy to key posts. For several decades, peace reigned
in Tibet, but in 1792 the Qing Qianlong Emperor sent a large Chinese army into Tibet to push the invading Nepalese
out. This prompted yet another Qing reorganization of the Tibetan government, this time through a written plan called
the "Twenty-Nine Regulations for Better Government in Tibet". Qing military garrisons staffed with Qing troops were
now also established near the Nepalese border. Tibet was dominated by the Manchus at various stages in the 18th century,
and the years immediately following the 1792 regulations were the peak of the Qing imperial commissioners' authority;
but there was no attempt to make Tibet a Chinese province. This period also saw some contacts with Jesuits and Capuchins
from Europe, and in 1774 a Scottish nobleman, George Bogle, came to Shigatse to investigate prospects of trade for
the British East India Company. However, in the 19th century the situation of foreigners in Tibet grew more tenuous.
The British Empire was encroaching from northern India into the Himalayas, the Emirate of Afghanistan and the Russian
Empire were expanding into Central Asia and each power became suspicious of the others' intentions in Tibet. In 1904,
a British expedition to Tibet, spurred in part by a fear that Russia was extending its power into Tibet as part of
The Great Game, invaded the country, hoping that negotiations with the 13th Dalai Lama would be more effective than
with Chinese representatives. When the British-led invasion reached Tibet on December 12, 1903, an armed confrontation
with the ethnic Tibetans led to the Massacre of Chumik Shenko, which left 600 dead amongst the
Tibetan forces, compared to only 12 on the British side. Afterwards, in 1904 Francis Younghusband imposed a treaty
known as the Treaty of Lhasa, which was subsequently repudiated and was succeeded by a 1906 treaty signed between
Britain and China. After the Xinhai Revolution (1911–12) toppled the Qing dynasty and the last Qing troops were escorted
out of Tibet, the new Republic of China apologized for the actions of the Qing and offered to restore the Dalai Lama's
title. The Dalai Lama refused any Chinese title and declared himself ruler of an independent Tibet. In 1913, Tibet
and Mongolia concluded a treaty of mutual recognition. For the next 36 years, the 13th Dalai Lama and the regents
who succeeded him governed Tibet. During this time, Tibet fought Chinese warlords for control of the ethnically Tibetan
areas in Xikang and Qinghai (parts of Kham and Amdo) along the upper reaches of the Yangtze River. In 1914 the Tibetan
government signed the Simla Accord with Britain, ceding the South Tibet region to British India. The Chinese government
denounced the agreement as illegal. After the Dalai Lama's government fled to Dharamsala, India, during the 1959
Tibetan Rebellion, it established a rival government-in-exile. Afterwards, the Central People's Government in Beijing
renounced the Seventeen Point Agreement and began implementation of the halted social and political reforms. During the Great Leap
Forward, between 200,000 and 1,000,000 Tibetans died, and approximately 6,000 monasteries were destroyed during the
Cultural Revolution. In 1962 China and India fought a brief war over the disputed South Tibet and Aksai Chin regions.
Although China won the war, Chinese troops withdrew north of the McMahon Line, effectively ceding South Tibet to
India. In 1980, General Secretary and reformist Hu Yaobang visited Tibet and ushered in a period of social, political,
and economic liberalization. At the end of the decade, however, analogously to the Tiananmen Square protests of 1989,
monks in the Drepung and Sera monasteries started protesting for independence, and so the government halted reforms
and started an anti-separatist campaign. Human rights organisations have been critical of the Beijing and Lhasa governments'
approach to human rights in the region when cracking down on separatist convulsions that have occurred around monasteries
and cities, most recently in the 2008 Tibetan unrest. Tibet has some of the world's tallest mountains, several
of which rank among the ten highest on Earth. Mount Everest, located on the border with Nepal, is, at 8,848 metres (29,029 ft),
the highest mountain on earth. Several major rivers have their source in the Tibetan Plateau (mostly in present-day
Qinghai Province). These include the Yangtze, Yellow River, Indus River, Mekong, Ganges, Salween and the Yarlung
Tsangpo River (Brahmaputra River). The Yarlung Tsangpo Grand Canyon, along the Yarlung Tsangpo River, is among the
deepest and longest canyons in the world. The Indus and Brahmaputra rivers originate from a lake (Tib: Tso Mapham)
in Western Tibet, near Mount Kailash. The mountain is a holy pilgrimage site for both Hindus and Tibetans. The Hindus
consider the mountain to be the abode of Lord Shiva. The Tibetan name for Mt. Kailash is Khang Rinpoche. Tibet has
numerous high-altitude lakes referred to in Tibetan as tso or co. These include Qinghai Lake, Lake Manasarovar, Namtso,
Pangong Tso, Yamdrok Lake, Siling Co, Lhamo La-tso, Lumajangdong Co, Lake Puma Yumco, Lake Paiku, Lake Rakshastal,
Dagze Co and Dong Co. The Qinghai Lake (Koko Nor) is the largest lake in the People's Republic of China. The atmosphere
is severely dry nine months of the year, and average annual snowfall is only 18 inches (46 cm), due to the rain shadow
effect. Western passes receive small amounts of fresh snow each year but remain traversable all year round. Low temperatures
are prevalent throughout these western regions, where bleak desolation is unrelieved by any vegetation bigger than
a low bush, and where wind sweeps unchecked across vast expanses of arid plain. The Indian monsoon exerts some influence
on eastern Tibet. Northern Tibet is subject to high temperatures in the summer and intense cold in the winter. The
main crops grown are barley, wheat, buckwheat, rye, potatoes, and assorted fruits and vegetables. Tibet is ranked
the lowest among China’s 31 provinces on the Human Development Index according to UN Development Programme data.
In recent years, due to increased interest in Tibetan Buddhism, tourism has become an increasingly important sector,
and is actively promoted by the authorities. Tourism brings in the most income from the sale of handicrafts. These
include Tibetan hats, jewelry (silver and gold), wooden items, clothing, quilts, fabrics, Tibetan rugs and carpets.
The Central People's Government exempts Tibet from all taxation and provides 90% of Tibet's government expenditures.
However, most of this investment goes to pay migrant workers who do not settle in Tibet and send much of their income
home to other provinces. From January 18–20, 2010 a national conference on Tibet and areas inhabited by Tibetans
in Sichuan, Yunnan, Gansu and Qinghai was held in China and a substantial plan to improve development of the areas
was announced. The conference was attended by General Secretary Hu Jintao, Wu Bangguo, Wen Jiabao, Jia Qinglin, Li
Changchun, Xi Jinping, Li Keqiang, He Guoqiang and Zhou Yongkang, all members of the CPC Politburo Standing Committee,
signaling the commitment of senior Chinese leaders to the development of Tibet and ethnic Tibetan areas. The plan calls
for improvement of rural Tibetan income to national standards by 2020 and free education for all rural Tibetan children.
China has invested 310 billion yuan (about 45.6 billion U.S. dollars) in Tibet since 2001. "Tibet's GDP was expected
to reach 43.7 billion yuan in 2009, up 170 percent from that in 2000 and posting an annual growth of 12.3 percent
over the past nine years." Historically, the population of Tibet consisted of primarily ethnic Tibetans and some
other ethnic groups. According to tradition the original ancestors of the Tibetan people, as represented by the six
red bands in the Tibetan flag, are: the Se, Mu, Dong, Tong, Dru and Ra. Other traditional ethnic groups with significant
population or with the majority of the ethnic group residing in Tibet (excluding a disputed area with India) include
Bai people, Blang, Bonan, Dongxiang, Han, Hui people, Lhoba, Lisu people, Miao, Mongols, Monguor (Tu people), Menba
(Monpa), Mosuo, Nakhi, Qiang, Nu people, Pumi, Salar, and Yi people. Religion is extremely important to the Tibetans
and has a strong influence over all aspects of their lives. Bön is the ancient religion of Tibet, but has been almost
eclipsed by Tibetan Buddhism, a distinctive form of Mahayana and Vajrayana, which was introduced into Tibet from
the Sanskrit Buddhist tradition of northern India. Tibetan Buddhism is practiced not only in Tibet but also in Mongolia,
parts of northern India, the Buryat Republic, the Tuva Republic, and in the Republic of Kalmykia and some other parts
of China. During China's Cultural Revolution, nearly all Tibet's monasteries were ransacked and destroyed by the
Red Guards. A few monasteries have begun to rebuild since the 1980s (with limited support from the Chinese government)
and greater religious freedom has been granted – although it is still limited. Monks returned to monasteries across
Tibet and monastic education resumed even though the number of monks permitted is strictly limited. Before the 1950s,
between 10 and 20% of males in Tibet were monks. Muslims have been living in Tibet since as early as the 8th or 9th
century. In Tibetan cities, there are small communities of Muslims, known as Kachee (Kache), who trace their origin
to immigrants from three main regions: Kashmir (Kachee Yul in ancient Tibetan), Ladakh and the Central Asian Turkic
countries. Islamic influence in Tibet also came from Persia. After 1959, a group of Tibetan Muslims made a case for
Indian nationality based on their historic roots in Kashmir, and the Indian government declared all Tibetan Muslims
Indian citizens later that year. Other Muslim ethnic groups who have long inhabited Tibet include Hui, Salar,
Dongxiang and Bonan. There is also a well established Chinese Muslim community (gya kachee), which traces its ancestry
back to the Hui ethnic group of China. Roman Catholic Jesuits and Capuchins arrived from Europe in the 17th and 18th
centuries. The Portuguese missionaries Jesuit Father António de Andrade and Brother Manuel Marques first reached the
kingdom of Guge in western Tibet in 1624 and were welcomed by the royal family, who later allowed them to build a church.
By 1627, there were about a hundred local converts in the Guge kingdom. Christianity was later introduced
to Rudok, Ladakh and Tsang and was welcomed by the ruler of the Tsang kingdom, where Andrade and his fellows established
a Jesuit outpost at Shigatse in 1626. In 1661 another Jesuit, Johann Grueber, crossed Tibet from Sining to Lhasa
(where he spent a month), before heading on to Nepal. He was followed by others who actually built a church in Lhasa.
These included the Jesuit Father Ippolito Desideri, 1716–1721, who gained a deep knowledge of Tibetan culture, language
and Buddhism, and various Capuchins in 1707–1711, 1716–1733 and 1741–1745. Christianity was used by some Tibetan
monarchs and their courts and the Karmapa-sect lamas to counterbalance the influence of the Gelugpa sect in the 17th
century, until all the missionaries were expelled in 1745 at the lamas' insistence. In 1877, the Protestant James
Cameron from the China Inland Mission walked from Chongqing to Batang in Garzê Tibetan Autonomous Prefecture, Sichuan
province, and "brought the Gospel to the Tibetan people." Beginning in the 20th century, in Diqing Tibetan Autonomous
Prefecture in Yunnan, a large number of Lisu people and some Yi and Nu people converted to Christianity. Famous earlier
missionaries include James O. Fraser, Alfred James Broomhall and Isobel Kuhn of the China Inland Mission, among others
who were active in this area. Standing at 117 metres (384 feet) in height and 360 metres (1,180 feet) in width, the
Potala Palace is the most important example of Tibetan architecture. Formerly the residence of the Dalai Lama, it
contains over one thousand rooms within thirteen stories, and houses portraits of the past Dalai Lamas and statues
of the Buddha. It is divided between the outer White Palace, which serves as the administrative quarters, and the
inner Red Quarters, which houses the assembly hall of the Lamas, chapels, 10,000 shrines, and a vast library of Buddhist
scriptures. The Potala Palace is a World Heritage Site, as is Norbulingka, the former summer residence of the Dalai
Lama. Tibetan music often involves chanting in Tibetan or Sanskrit, as an integral part of the religion. These chants
are complex, often recitations of sacred texts or in celebration of various festivals. Yang chanting, performed without
metrical timing, is accompanied by resonant drums and low, sustained syllables. Other styles include those unique
to the various schools of Tibetan Buddhism, such as the classical music of the popular Gelugpa school, and the romantic
music of the Nyingmapa, Sakyapa and Kagyupa schools. Tibet has various festivals that are commonly performed to worship
the Buddha throughout the year. Losar is the Tibetan New Year Festival. Preparations for the festive
event are manifested by special offerings to family shrine deities, painted doors with religious symbols, and other
painstaking jobs done to prepare for the event. Tibetans eat Guthuk (barley noodle soup with filling) on New Year's
Eve with their families. The Monlam Prayer Festival follows in the first month of the Tibetan calendar, falling
between its fourth and eleventh days. It involves dancing and participating in sports
events, as well as sharing picnics. The event was established in 1409 by Tsong Khapa, the founder of the order of the
Dalai Lama and the Panchen Lama. The most important crop in Tibet is barley, and dough made from barley flour—called
tsampa—is the staple food of Tibet. This is either rolled into noodles or made into steamed dumplings called momos.
Meat dishes are likely to be yak, goat, or mutton, often dried, or cooked into a spicy stew with potatoes. Mustard
seed is cultivated in Tibet, and therefore features heavily in its cuisine. Yak yogurt, butter and cheese are frequently
eaten, and well-prepared yogurt is considered something of a prestige item. Butter tea is very popular to drink.
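The growth figures quoted above for Tibet's GDP ("up 170 percent from that in 2000 ... annual growth of 12.3 percent over the past nine years") can be sanity-checked with a short compound-growth calculation. The sketch below is illustrative only; the factor 2.7 is simply "up 170 percent" rewritten, not an additional figure from the source.

```python
# Illustrative check of compound growth figures (not from the source text).

def cagr(growth_factor: float, years: int) -> float:
    """Compound annual growth rate (%) implied by a total growth factor."""
    return (growth_factor ** (1 / years) - 1) * 100

# "Up 170 percent" over 2000-2009 means the 2009 value is 2.7x the 2000 value.
implied_rate = cagr(2.7, 9)
print(f"Implied annual growth: {implied_rate:.2f}%")  # roughly 11.7%

# Conversely, 12.3% compounded over nine years gives a factor of about 2.84,
# i.e. "up roughly 184 percent" -- so the two quoted figures are only
# approximately consistent, as is common with rounded official statistics.
factor = 1.123 ** 9
print(f"Factor implied by 12.3%/yr over 9 years: {factor:.2f}")
```

The small mismatch between the two derived numbers does not indicate an error in either quoted figure alone; rounded percentages reported separately rarely reconcile exactly.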
An exhibition game (also known as a friendly, a scrimmage, a demonstration, a preseason game, a warmup match, or a preparation
match, depending at least in part on the sport) is a sporting event whose prize money and impact on the player's
or the team's rankings are either zero or otherwise greatly reduced. In team sports, matches of this type are often
used to help coaches and managers select players for the competitive matches of a league season or tournament. If
the players usually play in different teams in other leagues, exhibition games offer an opportunity for the players
to learn to work with each other. The games can be held between separate teams or between parts of the same team.
An exhibition game may also be used to settle a challenge, to provide professional entertainment, to promote the
sport, or to raise money for charities. Several sports leagues hold all-star games to showcase their best players
against each other, while other exhibition games may pit participants from two different leagues or countries to
unofficially determine who would be the best in the world. International competitions like the Olympic Games may
also hold exhibition games as part of a demonstration sport. In the early days of association football, known simply
as football or soccer, friendly matches (or "friendlies") were the most common type of match. However, since the
development of The Football League in England in 1888, league tournaments became established, in addition to lengthy
derby and cup tournaments. By the year 2000, national leagues were established in almost every country throughout
the world, as well as local or regional leagues for lower-level teams; thus the significance of friendlies has greatly
declined since the 19th century. Since the introduction of league football, most club sides play a number of friendlies
before the start of each season (called pre-season friendlies). Friendly football matches are considered to be non-competitive
and are only used to "warm up" players for a new season/competitive match. There is generally nothing competitive
at stake and some rules may be changed or experimented with (such as unlimited substitutions, which allow teams to
play younger, less experienced players, and no cards). Although most friendlies are simply one-off matches arranged
by the clubs themselves, in which a certain amount is paid by the challenger club to the incumbent club, some teams
do compete in short tournaments, such as the Emirates Cup, Teresa Herrera Trophy and the Amsterdam Tournament. Although
these events may involve sponsorship deals and the awarding of a trophy and may even be broadcast on television,
there is little prestige attached to them. International teams also play friendlies, generally in preparation for
the qualifying or final stages of major tournaments. This is essential, since national squads generally have much
less time together in which to prepare. The biggest difference between friendlies at the club and international levels
is that international friendlies mostly take place during club league seasons, not between them. This has on occasion
led to disagreement between national associations and clubs as to the availability of players, who could become injured
or fatigued in a friendly. International friendlies give team managers the opportunity to experiment with team selection
and tactics before the tournament proper, and also allow them to assess the abilities of players they may potentially
select for the tournament squad. Players can be booked in international friendlies, and can be suspended from future
international matches based on red cards or accumulated yellows in a specified period. Caps and goals scored also
count towards a player's career records. In 2004, FIFA ruled that substitutions by a team be limited to six per match
in international friendlies, in response to criticism that such matches were becoming increasingly farcical with
managers making as many as 11 substitutions per match. In the UK and Ireland, "exhibition match" and "friendly match"
refer to two different types of matches. The types described above as friendlies are not termed exhibition matches,
while annual all-star matches such as those held in the US Major League Soccer or Japan's J.League are called
exhibition matches rather than friendly matches. A one-off match for charitable fundraising, usually involving one
or two all-star teams, or a match held in honor of a player for contribution to his/her club, may also be described
as exhibition matches but they are normally referred to as charity matches and testimonial matches respectively.
Under the 1995–2004 National Hockey League collective bargaining agreement, teams were limited to nine preseason
games. From 1975 to 1991, NHL teams sometimes played exhibition games against teams from the Soviet Union in the
Super Series, and in 1978 played preseason games against World Hockey Association teams. Like the NFL,
the NHL sometimes schedules exhibition games for cities without their own NHL teams, often at a club's minor league
affiliate (e.g. Carolina Hurricanes games at Time Warner Cable Arena in Charlotte, home of their AHL affiliate; Los
Angeles Kings games at Citizens Business Bank Arena in Ontario, California, home of their ECHL affiliate; Montreal
Canadiens games at Colisée Pepsi in Quebec City, which lost its NHL team in 1995 and now has no pro hockey;
Washington Capitals at 1st Mariner Arena in the Baltimore Hockey Classic; various Western Canada teams at Credit
Union Centre in Saskatoon, a potential NHL expansion venue). Since the 2000s, some preseason games have been played
in Europe against European teams, as part of the NHL Challenge and NHL Premiere series. In addition to the standard
preseason, there also exist prospect tournaments such as the Vancouver Canucks' YoungStars tournament and the Detroit
Red Wings' training camp, in which NHL teams' younger prospects face off against each other under their parent club's
banner. The Flying Fathers, a Canadian group of Catholic priests, regularly toured North America playing exhibition
hockey games for charity. One of the organization's founders, Les Costello, was a onetime NHL player who was ordained
as a priest after retiring from professional hockey. Another prominent exhibition hockey team is the Buffalo Sabres
Alumni Hockey Team, which is composed almost entirely of retired NHL players, the majority of whom (as the name suggests)
played at least a portion of their career for the Buffalo Sabres. Major League Baseball's preseason is also known
as spring training. All MLB teams maintain a spring-training base in Arizona or Florida. The teams in Arizona make
up the Cactus League, while the teams in Florida play in the Grapefruit League. Each team plays about 30 preseason
games against other MLB teams. They may also play exhibitions against a local college team or a minor-league team
from their farm system. Some days feature the team playing two games with two different rosters evenly divided up,
which are known as "split-squad" games. Several MLB teams used to play regular exhibition games during the year against
nearby teams in the other major league, but regular-season interleague play has made such games unnecessary. The
two Canadian MLB teams, the Toronto Blue Jays of the American League and the Montreal Expos of the National League,
met annually to play the Pearson Cup exhibition game; this tradition ended when the Expos moved to Washington DC
for the 2005 season. Similarly, the New York Yankees played in the Mayor's Trophy Game against various local rivals
from 1946 to 1983. It also used to be commonplace to have a team play an exhibition against Minor League affiliates
during the regular season, but worries of injuries to players, along with travel issues, have made this very rare.
Exhibitions between inter-city teams in different leagues, like Chicago's Crosstown Classic and New York's Subway
Series, which used to be played solely as exhibitions for bragging rights, are now blended into interleague play. The
annual MLB All-Star Game, played in July between players from AL teams and players from NL teams, was long considered
an exhibition match, but since 2003 this status has been questioned because the league whose team wins the All-Star game
is awarded home-field advantage for the upcoming World Series. National Basketball Association teams play eight
preseason games per year. Today, NBA teams almost always play each other in the preseason, but mainly at neutral
sites within their market areas in order to allow those who can't usually make a trip to a home team's arena during
the regular season to see a game close to home; for instance the Minnesota Timberwolves will play games in arenas
in North Dakota and South Dakota, while the Phoenix Suns schedule one exhibition game outdoors at Indian Wells Tennis
Garden in Indian Wells, California yearly, the only instance of an NBA game taking place in an outdoor venue. However,
from 1971 to 1975, NBA teams played preseason exhibitions against American Basketball Association teams. In the early
days of the NBA, league clubs sometimes challenged the legendary barnstorming Harlem Globetrotters, with mixed success.
The NBA has played preseason games in Europe and Asia. In the 2006 and 2007 seasons, the NBA and the primary European
club competition, the Euroleague, conducted a preseason tournament featuring two NBA teams and the finalists from
that year's Euroleague. In the 1998–99 and 2011–12 seasons, teams were limited to only two preseason
games due to lockouts. Traditionally, major college basketball teams began their seasons with a few exhibition games.
They played travelling teams made up of former college players, such as Athletes in Action or a team sponsored
by Marathon Oil. On occasion before 1992, when FIBA allowed professional players on foreign national teams, colleges
played those teams in exhibitions. However, in 2003, the National Collegiate Athletic Association banned games with
non-college teams. Some teams have begun scheduling exhibition games against teams in NCAA Division II and NCAA Division
III, or even against colleges and universities located in Canada. Major college basketball teams still travel to
other countries during the summer to play in exhibition games, although a college team is allowed one foreign tour
every four years, and a maximum of ten games in each tour. Compared to other team sports, the National Football League
preseason is very structured. Every NFL team plays exactly four pre-season exhibition games a year, two at home and
two away, with the exception of two teams each year who play a fifth game, the Pro Football Hall of Fame Game. These
exhibition games, most of which are held in the month of August, are played for the purpose of helping coaches narrow
down the roster from the offseason limit of 90 players to the regular-season limit of 53 players. While the scheduling
formula is not as rigid for preseason games as it is for the regular season, there are numerous restrictions and
traditions that limit the choices of preseason opponents; teams are also restricted on what days and times they can
play these games. Split-squad games, a practice common in baseball and hockey, where a team that is scheduled to
play two games on the same day splits their team into two squads, are prohibited. The NFL has played exhibition games
in Europe, Japan, Canada, Australia (including the American Bowl in 1999) and Mexico to spread the league's popularity
(a game of this type was proposed for China but, due to financial and logistical problems, was eventually canceled).
The league has tacitly forbidden the playing of non-league opponents, with the last interleague game having come
in 1972 and the last game against a team other than an NFL team (the all-NFL rookie College All-Stars) was held in
1976. Exhibition games are quite unpopular with many fans, who resent having to pay regular-season prices for two
home exhibition games as part of a season-ticket package. Numerous lawsuits have been brought by fans and classes
of fans against the NFL or its member teams regarding this practice, but none have been successful in halting it.
The Pro Bowl, traditionally played after the end of the NFL season (since 2011, played the week prior to
the Super Bowl), is also considered an exhibition game. The Arena Football League briefly had a two-game exhibition
season in the early 2000s, a practice that ended in 2003 with a new television contract. Exhibition games outside
of a structured season are relatively common among indoor American football leagues; because teams switch leagues
frequently at that level of play, it is not uncommon to see some of the smaller leagues schedule exhibition games
against teams that are from another league, about to join the league as a probational franchise, or a semi-pro outdoor
team to fill holes in a schedule. True exhibition games between opposing colleges at the highest level do not exist
in college football; due to the importance of opinion polling at that level, even exhibition
games would not truly be exhibitions because they could influence the opinions of those polled. Intramural games
are possible because a team playing against itself leaves little ability for poll participants to make judgments,
and at levels below the Football Bowl Subdivision (FBS), championships are decided by objective formulas and thus
those teams can play non-league games without affecting their playoff hopes. However, most of the major FBS teams
annually schedule early-season non-conference home games against lesser opponents that are lower-tier FBS,
Football Championship, or Division II schools, which often result in lopsided victories in favor of the FBS teams
and act as exhibition games in all but name, though they additionally provide a large appearance fee and at least
one guaranteed television appearance for the smaller school. These games also receive the same criticism as NFL exhibition
games, but here it is targeted at schools scheduling low-quality opponents and the ease with which a team can run
up the score against a weak opponent. However, these games are susceptible to backfiring, resulting in damage to
poll position and public perception, especially if the higher-ranked team loses, although the mere act of scheduling
a weak opponent is harmful to a team's overall strength of schedule itself. Games an FBS team schedules against lower
division opponents do not count toward the minimum seven wins required for bowl eligibility, and only one game against
an FCS team can be counted. With the start of the College Football Playoff system for the 2014 season, major teams
are now discouraged from scheduling weaker opponents for their non-conference schedule because of a much higher emphasis
on strength of schedule than in the Bowl Championship Series era. High school football teams frequently participate
in controlled scrimmages with other teams during preseason practice, but exhibition games are rare because of league
rules and concerns about finances, travel and player injuries, along with enrollments not being registered until
the early part of August in most school districts under the traditional September–June academic term. A more common
exhibition is the high school football all-star game, which brings together top players from a region. These games
are typically played by graduating seniors during the summer or at the end of the season. Many of these games, which
include the U.S. Army All-American Bowl and Under Armour All-America Game, are used as showcases for players to be
seen by colleges. Various auto racing organizations hold exhibition events; these events usually award no championship
points to participants, but they do offer prize money. The NASCAR Sprint Cup Series holds two exhibition
events annually: the Sprint Unlimited, held at Daytona International Speedway at the start of the season, and the
NASCAR Sprint All-Star Race, held at Charlotte Motor Speedway midway through the season. Both events carry a hefty
purse of over US$1,000,000. NASCAR has also held exhibition races at Suzuka Circuit and Twin Ring Motegi in Japan
and Calder Park Thunderdome in Australia.
Northwestern was founded in 1851 by John Evans, for whom the City of Evanston is named, and eight other lawyers, businessmen
and Methodist leaders. Its founding purpose was to serve the Northwest Territory, an area that today includes the
states of Ohio, Indiana, Illinois, Michigan, Wisconsin and parts of Minnesota. Instruction began in 1855; women were
admitted in 1869. Today, the main campus is a 240-acre (97 ha) parcel in Evanston, along the shores of Lake Michigan
just 12 miles north of downtown Chicago. The university's law, medical, and professional schools are located on a
25-acre (10 ha) campus in Chicago's Streeterville neighborhood. In 2008, the university opened a campus in Education
City, Doha, Qatar with programs in journalism and communication. The foundation of Northwestern University is traceable
to a meeting on May 31, 1850 of nine prominent Chicago businessmen, Methodist leaders and attorneys who had formed
the idea of establishing a university to serve what had once been known as the Northwest Territory. On January 28,
1851, the Illinois General Assembly granted a charter to the Trustees of the North-Western University, making it
the first chartered university in Illinois. The school's nine founders, all of whom were Methodists (three of them
ministers), knelt in prayer and worship before launching their first organizational meeting. Although they affiliated
the university with the Methodist Episcopal Church, they were committed to non-sectarian admissions, believing that
Northwestern should serve all people in the newly developing territory. John Evans, for whom Evanston is named, bought
379 acres (153 ha) of land along Lake Michigan in 1853, and Philo Judson developed plans for what would become the
city of Evanston, Illinois. The first building, Old College, opened on November 5, 1855. To raise funds for its construction,
Northwestern sold $100 "perpetual scholarships" entitling the purchaser and his heirs to free tuition. Another building,
University Hall, was built in 1869 of the same Joliet limestone as the Chicago Water Tower, also built in 1869, one
of the few buildings in the heart of Chicago to survive the Great Chicago Fire of 1871. In 1873 the Evanston College
for Ladies merged with Northwestern, and Frances Willard, who later gained fame as a suffragette and as one of the
founders of the Woman's Christian Temperance Union (WCTU), became the school's first dean of women. Willard Residential
College (1938) is named in her honor. Northwestern admitted its first women students in 1869, and the first woman
graduated in 1874. Northwestern fielded its first intercollegiate football team in 1882, later becoming a founding
member of the Big Ten Conference. In the 1870s and 1880s, Northwestern affiliated itself with already existing schools
of law, medicine, and dentistry in Chicago. The Northwestern University School of Law is the oldest law school in
Chicago. As the university increased in wealth and distinction, and enrollments grew, these professional schools
were integrated with the undergraduate college in Evanston; the result was a modern research university combining
professional, graduate, and undergraduate programs, which gave equal weight to teaching and research. The Association
of American Universities invited Northwestern to become a member in 1917. Under Walter Dill Scott's presidency from
1920 to 1939, Northwestern began construction of an integrated campus in Chicago designed by James Gamble Rogers
to house the professional schools; established the Kellogg School of Management; and built several prominent buildings
on the Evanston campus, Dyche Stadium (now named Ryan Field) and Deering Library among others. In 1933, a proposal
to merge Northwestern with the University of Chicago was considered but rejected. Northwestern was also one of the
first six universities in the country to establish a Naval Reserve Officers Training Corps (NROTC) in the 1920s.
Northwestern played host to the first-ever NCAA Men's Division I Basketball Championship game in 1939 in the original
Patten Gymnasium, which was later demolished and rebuilt farther north, along with the relocated Dearborn Observatory,
to make room for the Technological Institute. Like other American research universities, Northwestern was transformed by
World War II. Franklyn B. Snyder led the university from 1939 to 1949, when nearly 50,000 military officers and personnel
were trained on the Evanston and Chicago campuses. After the war, surging enrollments under the G.I. Bill drove drastic
expansion of both campuses. In 1948 prominent anthropologist Melville J. Herskovits founded the Program of African
Studies at Northwestern, the first center of its kind at an American academic institution. J. Roscoe Miller's tenure
as president from 1949 to 1970 was responsible for the expansion of the Evanston campus, with the construction of
the lakefill on Lake Michigan, growth of the faculty and new academic programs, as well as polarizing Vietnam-era
student protests. In 1978, the first and second Unabomber attacks occurred at Northwestern University. Relations
between Evanston and Northwestern were strained throughout much of the post-war era because of episodes of disruptive
student activism, disputes over municipal zoning, building codes, and law enforcement, as well as restrictions on
the sale of alcohol near campus until 1972. Northwestern's exemption from state and municipal property tax obligations
under its original charter has historically been a source of town and gown tension. Though government support for
universities declined in the 1970s and 1980s, President Arnold R. Weber was able to stabilize university finances,
leading to a revitalization of the campuses. As admissions to colleges and universities grew increasingly competitive
in the 1990s and 2000s, President Henry S. Bienen's tenure saw a notable increase in the number and quality of undergraduate
applicants, continued expansion of the facilities and faculty, and renewed athletic competitiveness. In 1999, Northwestern
student journalists uncovered information exonerating Illinois death row inmate Anthony Porter two days before his
scheduled execution, and the Innocence Project has since exonerated 10 more men. On January 11, 2003, in a speech
at Northwestern School of Law's Lincoln Hall, then Governor of Illinois George Ryan announced that he would commute
the sentences of more than 150 death row inmates. The Latin phrase on Northwestern's seal, Quaecumque sunt vera (Whatsoever
things are true) is drawn from the Epistle of Paul to the Philippians 4:8, while the Greek phrase inscribed on the
pages of an open book is taken from the Gospel of John 1:14: ο λόγος πλήρης χάριτος και αληθείας (The Word full of
grace and truth). Purple became Northwestern's official color in 1892, replacing black and gold after a university
committee concluded that too many other universities had used these colors. Today, Northwestern's official color
is purple, although white is something of an official color as well, being mentioned in both the university's earliest
song, Alma Mater (1907) ("Hail to purple, hail to white") and in many university guidelines. Northwestern's Evanston
campus, where the undergraduate schools, the Graduate School, and the Kellogg School of Management are located, runs
north-south from Lincoln Avenue to Clark Street west of Lake Michigan along Sheridan Road. North and South Campuses
have noticeably different atmospheres, owing to the predominance of science and athletics on the former and the humanities
and arts on the latter. North Campus is home to the fraternity quads, the Henry Crown Sports Pavilion and Norris Aquatics
Center and other athletic facilities, the Technological Institute, Dearborn Observatory, and other science-related
buildings including Patrick G. and Shirley W. Ryan Hall for Nanofabrication and Molecular Self-Assembly, and the
Ford Motor Company Engineering Design Center. South Campus is home to the University's humanities buildings, Pick-Staiger
Concert Hall and other music buildings, the Mary and Leigh Block Museum of Art, and the sorority quads. In the 1960s,
the University created an additional 84 acres (34.0 ha) by means of a lakefill in Lake Michigan. Among some of the
buildings located on these broad new acres are University Library, Norris University Center (the student union),
and Pick-Staiger Concert Hall. The Chicago Transit Authority's elevated train running through Evanston is called
the Purple Line, taking its name from Northwestern's school color. The Foster and Davis stations are within walking
distance of the southern end of the campus, while the Noyes station is close to the northern end of the campus. The
Central station is close to Ryan Field, Northwestern's football stadium. The Evanston Davis Street Metra station
serves the Northwestern campus in downtown Evanston and the Evanston Central Street Metra station is near Ryan Field.
Pace Suburban Bus Service and the CTA have several bus routes that run through or near the Evanston campus. Founded
at various times in the university's history, the professional schools originally were scattered throughout Chicago.
In connection with a 1917 master plan for a central Chicago campus and President Walter Dill Scott's capital campaign,
8.5 acres (3.44 ha) of land were purchased at the corner of Chicago Avenue and Lake Shore Drive for $1.5 million
in 1920. The architect James Gamble Rogers was commissioned to create a master plan for the principal buildings on
the new campus which he designed in collegiate gothic style. In 1923, Mrs. Montgomery Ward donated $8 million to
the campaign to finance the construction of the Montgomery Ward Memorial Building which would house the medical and
dental schools and to create endowments for faculty chairs, research grants, scholarships, and building maintenance.
The building would become the first university skyscraper in the United States. In addition to the Ward Building,
Rogers designed Wieboldt Hall to house facilities for the School of Commerce and Levy Mayer Hall to house the School
of Law. The new campus comprising these three new buildings was dedicated during a two-day ceremony in June 1927.
The Chicago campus continued to expand with the addition of Thorne Hall in 1931 and Abbott Hall in 1939. In October
2013, Northwestern began the demolition of the architecturally significant Prentice Women's Hospital. Eric G. Neilson,
dean of the medical school, penned an op-ed that equated retaining the building with loss of life. In Fall 2008,
Northwestern opened a campus in Education City, Doha, Qatar, joining five other American universities: Carnegie Mellon
University, Cornell University, Georgetown University, Texas A&M University, and Virginia Commonwealth University.
Through the Medill School of Journalism and School of Communication, NU-Q offers bachelor's degrees in journalism
and communication respectively. The Qatar Foundation for Education, Science and Community Development provided funding
for construction and administrative costs as well as support to hire 50 to 60 faculty and staff, some of whom rotate
between the Evanston and Qatar campuses. In February 2016, Northwestern reached an agreement with the Qatar Foundation
to extend the operations of the NU-Q branch for an additional decade, through the 2027–28 academic year. In January
2009, the Green Power Partnership (GPP, sponsored by the EPA) listed Northwestern as one of the top 10 universities
in the country in purchasing energy from renewable sources. The university matches 74 million kilowatt hours (kWh)
of its annual energy use with Green-e Certified Renewable Energy Certificates (RECs). This green power commitment
represents 30 percent of the university's total annual electricity use and places Northwestern in the EPA's Green
Power Leadership Club. The 2010 Report by The Sustainable Endowments Institute awarded Northwestern a "B-" on its
College Sustainability Report Card. The Initiative for Sustainability and Energy at Northwestern (ISEN), supporting
research, teaching and outreach in these themes, was launched in 2008. Northwestern requires that all new buildings
be LEED-certified. Silverman Hall on the Evanston campus was awarded Gold LEED Certification in 2010; Wieboldt Hall
on the Chicago campus was awarded Gold LEED Certification in 2007, and the Ford Motor Company Engineering Design
Center on the Evanston campus was awarded Silver LEED Certification in 2006. New construction and renovation projects
will be designed to provide at least a 20% improvement over energy code requirements where technically feasible.
The university also released at the beginning of the 2008–09 academic year the Evanston Campus Framework Plan, which
outlines plans for future development of the Evanston Campus. The plan not only emphasizes the sustainable construction
of buildings, but also discusses improving transportation by optimizing pedestrian and bicycle access. Northwestern
has had a comprehensive recycling program in place since 1990. Annually more than 1,500 tons are recycled at Northwestern,
which represents 30% of the waste produced on campus. Additionally, all landscape waste at the university is composted.
Northwestern is privately owned and is governed by an appointed Board of Trustees. The board, composed of 70 members
and as of 2011 chaired by William A. Osborn '69, delegates its power to an elected president to serve as
the chief executive officer of the university. Northwestern has had sixteen presidents in its history (excluding interim presidents). The current president, economist Morton O. Schapiro, succeeded Henry Bienen, whose 14-year tenure ended on August 31, 2009. The president has a staff of vice presidents, directors, and other assistants
for administrative, financial, faculty, and student matters. Daniel I. Linzer, provost since September 2007, serves
under the president as the chief academic officer of the university to whom the deans of every academic school, leaders
of cross-disciplinary units, and chairs of the standing faculty committee report. Northwestern is a large, residential
research university. Accredited by the North Central Association of Colleges and Schools and the respective national
professional organizations for chemistry, psychology, business, education, journalism, music, engineering, law, and
medicine, the university offers 124 undergraduate programs and 145 graduate and professional programs. Northwestern
conferred 2,190 bachelor's degrees, 3,272 master's degrees, 565 doctoral degrees, and 444 professional degrees in
2012–2013. The four-year, full-time undergraduate program comprises the majority of enrollments at the university
and emphasizes instruction in the arts and sciences, plus the professions of engineering, journalism, communication,
music, and education. Although a foundation in the liberal arts and sciences is required in all majors, there is
no required common core curriculum; individual degree requirements are set by the faculty of each school. Northwestern's
full-time undergraduate and graduate programs operate on an approximately 10-week academic quarter system with the
academic year beginning in late September and ending in early June. Undergraduates typically take four courses each
quarter and twelve courses in an academic year and are required to complete at least twelve quarters on campus to
graduate. Northwestern offers honors, accelerated, and joint degree programs in medicine, science, mathematics, engineering,
and journalism. The comprehensive doctoral graduate program coexists extensively with the undergraduate programs. The total undergraduate cost of attendance for the 2012–13 school year was $61,240, comprising base tuition of $43,380, fees (health $200, etc.), room and board of $13,329 (less if commuting), books and supplies of $1,842, personal expenses of $1,890, and transportation costs of $400. Northwestern awards financial aid solely on the basis of need through loans, work-study, grants, and
scholarships. The University processed in excess of $472 million in financial aid for the 2009–2010 academic year.
This included $265 million in institutional funds, with the remainder coming from federal and state governments and
private organizations and individuals. Northwestern's scholarship programs support undergraduates in need from a variety of income levels and backgrounds. Approximately 44 percent of the June 2010 graduates had received
federal and/or private loans for their undergraduate education, graduating with an average debt of $17,200. In the
fall of 2014, among the six undergraduate schools, 40.6% of undergraduate students are enrolled in the Weinberg College
of Arts and Sciences, 21.3% in the McCormick School of Engineering and Applied Science, 14.3% in the School of Communication,
11.7% in the Medill School of Journalism, 5.7% in the Bienen School of Music, and 6.4% in the School of Education
and Social Policy. The five most commonly awarded undergraduate degrees are in economics, journalism, communication
studies, psychology, and political science. While professional students are affiliated with their respective schools,
the School of Professional Studies offers bachelor's and master's degrees and certificate programs tailored to working professionals. With 2,446 students enrolled in science, engineering, and health fields, the largest graduate
programs by enrollment include chemistry, integrated biology, material sciences, electrical and computer engineering,
neuroscience, and economics. The Kellogg School of Management's MBA, the School of Law's JD, and the Feinberg School
of Medicine's MD are the three largest professional degree programs by enrollment. Admissions are characterized as
"most selective" by U.S. News & World Report. There were 35,099 applications for the undergraduate class of 2020
(entering 2016), and 3,751 (10.7%) were admitted, making Northwestern one of the most selective schools in the United
States. For freshmen enrolling in the class of 2019, the interquartile range (middle 50%) on the SAT was 690–760
for critical reading and 710–800 for math, ACT composite scores for the middle 50% ranged from 31 to 34, and 91% ranked
in the top ten percent of their high school class. In April 2016, Northwestern announced that it signed on to the
Chicago Star Partnership, a City Colleges initiative. Through this partnership, Northwestern is one of 15 Illinois
public and private universities that will "provide scholarships to students who graduate from Chicago Public Schools,
get their associate degree from one of the city's community colleges, and then get admitted to a bachelor's degree
program." The partnership was influenced by Mayor Rahm Emanuel, who encouraged local universities to increase opportunities
for students in the public school district. The University of Chicago, Northeastern Illinois University, the School
of the Art Institute, DePaul University and Loyola University are also part of the Star Scholars partnership. The
Northwestern library system consists of four libraries on the Evanston campus including the present main library,
University Library and the original library building, Deering Library; three libraries on the Chicago campus; and
the library affiliated with Garrett-Evangelical Theological Seminary. The University Library contains over 4.9 million
volumes, 4.6 million microforms, and almost 99,000 periodicals, making it (by volume) the 30th-largest university
library in North America and the 10th-largest library among private universities. Notable collections in the library
system include the Melville J. Herskovits Library of African Studies, the largest Africana collection in the world,
an extensive collection of early edition printed music and manuscripts as well as late-modern works, and an art collection
noted for its 19th and 20th-century Western art and architecture periodicals. The library system participates with
15 other universities in digitizing its collections as a part of the Google Book Search project. The Mary and Leigh
Block Museum of Art, on the Evanston campus, is a major art museum containing more than 4,000 works in its permanent collection
as well as dedicating a third of its space to temporary and traveling exhibitions. Northwestern was elected to the
Association of American Universities in 1917 and remains a research university with "very high" research activity.
Northwestern's schools of management, engineering, and communication are among the most academically productive in
the nation. Northwestern received $550 million in research funding in 2014. Northwestern supports nearly 1,500 research
laboratories across two campuses, predominantly in the medical and biological sciences. Through the Innovation and
New Ventures Office (INVO), Northwestern researchers disclosed 247 inventions, filed 270 patent applications, received
81 foreign and US patents, started 12 companies, and generated $79.8 million in licensing revenue in 2013. The bulk
of revenue has come from a patent on pregabalin, a synthesized organic molecule discovered by chemistry professor
Richard Silverman, which ultimately was marketed as Lyrica, a drug sold by Pfizer, to combat epilepsy, neuropathic
pain, and fibromyalgia. INVO has been involved in creating a number of centers, including the Center for Developmental
Therapeutics (CDT) and the Center for Device Development (CD2). It has also helped form over 50 Northwestern startup
companies based on Northwestern technologies. Northwestern is home to the Center for Interdisciplinary Exploration
and Research in Astrophysics, Northwestern Institute for Complex Systems, Nanoscale Science and Engineering Center,
Materials Research Center, Institute for Policy Research, International Institute for Nanotechnology, Center for
Catalysis and Surface Science, the Buffett Center for International and Comparative Studies, the Initiative for Sustainability
and Energy at Northwestern and the Argonne/Northwestern Solar Energy Research Center and other centers for interdisciplinary
research. The undergraduates have a number of traditions: Painting The Rock (originally a fountain donated by the
Class of 1902) is a way to advertise, for example, campus organizations, events in Greek life, student groups, and
university-wide events. Dance Marathon, a 30-hour philanthropic event, has raised more than $13 million in
its history for various children's charities. Primal Scream is held at 9 p.m. on the Sunday before finals week every
quarter; students lean out of windows or gather in courtyards and scream. Armadillo Day, or, more popularly, Dillo
Day, a day of music and food, is held on Northwestern's Lakefill every Spring on the weekend after Memorial Day.
And in one of the University's newer traditions, every year during freshman orientation, known as Wildcat Welcome,
freshmen and transfer students pass through Weber Arch to the loud huzzahs of upperclassmen and the music of the
University Marching Band. There are traditions long associated with football games. Students growl like wildcats
when the opposing team controls the ball, while simulating a paw with their hands. They will also jingle keys at
the beginning of each kickoff. In the past, before the tradition was discontinued, students would throw marshmallows
during games. The Clock Tower at the Rebecca Crown Center glows purple, instead of its usual white, after a winning
game, thereby proclaiming the happy news. The Clock Tower remains purple until a loss or until the end of the sports
season. Whereas formerly the Clock Tower was lighted only for football victories, wins for men's basketball and women's
lacrosse now merit commemoration as well; important victories in other sports may also prompt an empurpling. Two
annual productions are especially notable: the Waa-Mu show, and the Dolphin show. Waa-Mu is an original musical,
written and produced almost entirely by students. Children's theater is represented on campus by Griffin's Tale and
Purple Crayon Players. The umbrella organization for student theatre, the Student Theatre Coalition (StuCo), organizes nine student theatre companies, multiple performance groups, and more than sixty independent productions each year. Many Northwestern
alumni have used these productions as stepping stones to successful television and film careers. Chicago's Lookingglass
Theatre Company, for example, which began life in the Great Room in Jones Residential College, was founded in 1988
by several alumni, including David Schwimmer; in 2011, it won the Regional Tony Award. Many students are involved
in community service in one form or another. Annual events include Dance Marathon, a thirty-hour event that raised
more than a million dollars for charity in 2011; and Project Pumpkin, a Halloween celebration hosted by the Northwestern
Community Development Corps (NCDC) to which more than 800 local children are invited for an afternoon of games and
sweets. NCDC's work is to connect hundreds of student volunteers to some twenty volunteer sites in Evanston and Chicago
throughout the year. Many students have assisted with the Special Olympics and have taken alternative spring break
trips to hundreds of service sites across the United States. Northwestern students also participate in the Freshman
Urban Program, a program for students interested in community service. A large and growing number of students participate
in the university's Global Engagement Summer Institute (GESI), a group service-learning expedition in Asia, Africa,
or Latin America, in conjunction with the Foundation for Sustainable Development. Several internationally recognized
non-profit organizations have originated at Northwestern including the World Health Imaging, Informatics and Telemedicine
Alliance, a spin-off from an engineering student's honors thesis. Northwestern has several housing options, including
both traditional residence halls and residential colleges which gather together students who have a particular intellectual
interest in common. Among the residential colleges are the Residential College of Cultural and Community Studies
(CCS), Ayers College of Commerce and Industry, Jones Residential College (Arts), and Slivka Residential College (Science
and Engineering). Dorms include 1835 Hinman, Bobb-McCulloch, Foster-Walker complex (commonly referred to as Plex),
and several more. In Winter 2013, 39% of undergraduates were affiliated with a fraternity or sorority. Northwestern
recognizes 21 fraternities and 18 sororities. The Daily Northwestern is the main student newspaper. Established in
1881, and published on weekdays during the academic year, it is directed entirely by undergraduates. Although it
serves the Northwestern community, the Daily has no business ties to the university, being supported wholly by advertisers.
It is owned by the Students Publishing Company. North by Northwestern is an online undergraduate magazine, having
been established in September 2006 by students at the Medill School of Journalism. It consists of updates on news stories and special events posted throughout the day on weekdays, with additional coverage on weekends. North by Northwestern
also publishes a quarterly print magazine. Syllabus is the undergraduate yearbook. First published in 1885, the yearbook
chronicles that year's events at Northwestern. Published by Students Publishing Company and edited by Northwestern
students, it is distributed in late May. Northwestern Flipside is an undergraduate satirical magazine. Founded in
2009, The Flipside publishes a weekly issue both in print and online. Helicon is the university's undergraduate literary
magazine. Started in 1979, it is published twice a year, a web issue in the Winter, and a print issue with a web
complement in the Spring. The Protest is Northwestern's quarterly social justice magazine. The Northwestern division
of Student Multicultural Affairs also supports publications such as NUAsian, a magazine and blog about Asian and
Asian-American culture and the issues facing Asians and Asian-Americans, Ahora, a magazine about Hispanic and Latino/a
culture and campus life, BlackBoard Magazine about African-American life, and Al Bayan published by the Northwestern
Muslim-cultural Student Association. The Northwestern University Law Review is a scholarly legal publication and
student organization at Northwestern University School of Law. The Law Review's primary purpose is to publish a journal
of broad legal scholarship. The Law Review publishes four issues each year. Student editors make the editorial and
organizational decisions and select articles submitted by professors, judges, and practitioners, as well as student
pieces. The Law Review recently extended its presence onto the web, and now publishes scholarly pieces weekly on
the Colloquy. The Northwestern Journal of Technology and Intellectual Property is a law review published by an independent
student organization at Northwestern University School of Law. Its Bluebook abbreviation is Nw. J. Tech. & Intell.
Prop. The current editor-in-chief is Aisha Lavinier. The Northwestern Interdisciplinary Law Review is a scholarly
legal publication published annually by an editorial board of Northwestern University undergraduates. The journal's
mission is to publish interdisciplinary legal research, drawing from fields such as history, literature, economics,
philosophy, and art. Founded in 2008, the journal features articles by professors, law students, practitioners, and
undergraduates. The journal is funded by the Buffett Center for International and Comparative Studies and the Office
of the Provost. Sherman Ave is a humor website that formed in January 2011. The website often publishes content about
Northwestern student life, and most of Sherman Ave's staffed writers are current Northwestern undergraduate students
writing under pseudonyms. The publication is well known among students for its interviews of prominent campus figures,
its "Freshman Guide", its live-tweeting coverage of football games, and its satiric campaign in autumn 2012 to end
the Vanderbilt University football team's clubbing of baby seals. Politics & Policy was founded at Northwestern and
is dedicated to the analysis of current events and public policy. Begun in 2010 by students in the Weinberg College
of Arts and Sciences, School of Communication, and Medill School of Journalism, the organization reaches students
on more than 250 college campuses around the world. Run entirely by undergraduates, Politics & Policy publishes several
times a week with material ranging from short summaries of events to extended research pieces. The organization is
funded in part by the Buffett Center. Northwestern fields 19 intercollegiate athletic teams (8 men's and 11 women's)
in addition to numerous club sports. The women's lacrosse team won five consecutive NCAA national championships between 2005 and 2009, going undefeated in 2005 and 2009, and added two more in 2011 and 2012 for seven NCAA championships in eight years; the team also holds several scoring records. The men's basketball team is recognized by the Helms
Athletic Foundation as the 1931 National Champion. In the 2010–11 school year, the Wildcats had one national championship, 12 teams in postseason play, 20 All-Americans, two CoSIDA Academic All-American selections, eight CoSIDA Academic All-District selections, one conference Coach of the Year and Player of the Year, 53 All-Conference selections, and a record 201 Academic All-Big Ten athletes. Overall, 12 of Northwestern's 19 varsity programs had NCAA or bowl postseason appearances. The football
team plays at Ryan Field (formerly known as Dyche Stadium); the basketball, wrestling, and volleyball teams play
at Welsh-Ryan Arena. Northwestern's athletic teams are nicknamed the Wildcats. Before 1924, they were known as "The
Purple" and unofficially as "The Fighting Methodists." The name Wildcats was bestowed upon the university in 1924
by Wallace Abbey, a writer for the Chicago Daily Tribune who wrote that even in a loss to the University of Chicago,
"Football players had not come down from Evanston; wildcats would be a name better suited to [Coach Glenn] Thistlethwaite's
boys." The name was so popular that university board members made "wildcats" the official nickname just months later.
In 1972, the student body voted to change the official nickname from "Wildcats" to "Purple Haze" but the new name
never stuck. The mascot of Northwestern Athletics is Willie the Wildcat. The first mascot, however, was a live, caged
bear cub from the Lincoln Park Zoo named Furpaw who was brought to the playing field on the day of a game to greet
the fans. But after a losing season, the team, deciding that Furpaw was to blame for its misfortune, banished him
from campus forever. Willie the Wildcat made his debut in 1933 first as a logo, and then in three dimensions in 1947,
when members of the Alpha Delta fraternity dressed as wildcats during a Homecoming Parade. The Northwestern University
Marching Band (NUMB) performs at all home football games, leads cheers in the student section, and plays the
Alma Mater at the end of the game. Northwestern's football team has made 73 appearances in the top 10 of the AP poll
since 1936 (including 5 at #1) and has won eight Big Ten conference championships since 1903. At one time, Northwestern
had the longest losing streak in Division I-A, losing 34 consecutive games between 1979 and 1982. After the 1949 Rose Bowl, the team did not appear in another bowl game until the 1996 Rose Bowl and did not win one until the 2013 Gator Bowl. Following the sudden death of football coach Randy Walker in 2006, 31-year-old former All-American
Northwestern linebacker Pat Fitzgerald assumed the position, becoming the youngest Division I FBS coach at the time.
In 1998, two former Northwestern basketball players were charged with and convicted of sports bribery as a result of
being paid to shave points in games against three other Big Ten schools during the 1995 season. The football team
became embroiled in a different betting scandal later that year when federal prosecutors indicted four former players
for perjury related to betting on their own games. In August 2001, Rashidi Wheeler, a senior safety, collapsed and
died during practice from an asthma attack. An autopsy revealed that he had ephedrine, a stimulant banned by the
NCAA, in his system, which prompted Northwestern to investigate the prevalence of stimulants and other banned substances
across all of its athletic programs. In 2006, the Northwestern women's soccer team was suspended and coach Jenny
Haigh resigned following the release of images of alleged hazing. The university employs 3,401 full-time faculty
members across its eleven schools, including 18 members of the National Academy of Sciences, 65 members of the American
Academy of Arts and Sciences, 19 members of the National Academy of Engineering, and 6 members of the Institute of
Medicine. Notable faculty include 2010 Nobel Prize–winning economist Dale T. Mortensen; nano-scientist Chad Mirkin;
Tony Award-winning director Mary Zimmerman; management expert Philip Kotler; King Faisal International Prize in Science
recipient Sir Fraser Stoddart; Steppenwolf Theatre director Anna Shapiro; sexual psychologist J. Michael Bailey;
Holocaust denier Arthur Butz; Federalist Society co-founder Steven Calabresi; former Weatherman Bernardine Rae Dohrn;
ethnographer Gary Alan Fine; Pulitzer Prize–winning historian Garry Wills; American Academy of Arts and Sciences
fellow Monica Olvera de la Cruz; and MacArthur Fellowship recipients Stuart Dybek and Jennifer Richeson. Notable
former faculty include political advisor David Axelrod, artist Ed Paschke, writer Charles Newman, Nobel Prize–winning
chemist John Pople, and military sociologist and "don't ask, don't tell" author Charles Moskos. Northwestern has
roughly 225,000 alumni in all branches of business, government, law, science, education, medicine, media, and the
performing arts. Among Northwestern's more notable alumni are U.S. Senator and presidential candidate George McGovern,
Nobel Prize–winning economist George J. Stigler, Nobel Prize–winning novelist Saul Bellow, Pulitzer Prize–winning
composer and diarist Ned Rorem, the much-decorated composer Howard Hanson, Deputy Prime Minister of Turkey Ali Babacan,
the historian and novelist Wilma Dykeman, and the founder of the presidential prayer breakfast Abraham Vereide. U.S.
Supreme Court Associate Justice John Paul Stevens, Supreme Court Justice and Ambassador to the United Nations Arthur
Joseph Goldberg, and Governor of Illinois and Democratic presidential candidate Adlai Stevenson are among the graduates
of the Northwestern School of Law. Many Northwestern alumni play or have played important roles in Chicago and Illinois,
such as former Illinois governor and convicted felon Rod Blagojevich, Chicago Bulls and Chicago White Sox owner Jerry
Reinsdorf, and theater director Mary Zimmerman. Northwestern alumnus David J. Skorton currently serves as president
of Cornell University. Rahm Emanuel, the mayor of Chicago and former White House Chief of Staff, earned a master's degree
in Speech and Communication in 1985. Northwestern's School of Communication has been especially fruitful in the number
of actors, actresses, playwrights, and film and television writers and directors it has produced. Alumni who have
made their mark on film and television include Ann-Margret, Warren Beatty, Jodie Markell, Paul Lynde, David Schwimmer,
Anne Dudek, Zach Braff, Zooey Deschanel, Marg Helgenberger, Julia Louis-Dreyfus, Jerry Orbach, Jennifer Jones, Megan
Mullally, John Cameron Mitchell, Dermot Mulroney, Charlton Heston, Richard Kind, Ana Gasteyer, Brad Hall, Shelley
Long, William Daniels, Cloris Leachman, Bonnie Bartlett, Paula Prentiss, Richard Benjamin, Laura Innes, Charles Busch,
Stephanie March, Tony Roberts, Jeri Ryan, Kimberly Williams-Paisley, McLean Stevenson, Tony Randall, Charlotte Rae,
Patricia Neal, Nancy Dussault, Robert Reed, Mara Brock Akil, Greg Berlanti, Bill Nuss, Dusty Kay, Dan
Shor, Seth Meyers, Frank DeCaro, Zach Gilford, Nicole Sullivan, Stephen Colbert, Sandra Seacat and Garry Marshall.
Directors who graduated from Northwestern include Gerald Freedman, Stuart Hagmann, Marshall W. Mason, and Mary
Zimmerman. Lee Phillip Bell hosted a talk show in Chicago from 1952 to 1986 and co-created the Daytime Emmy Award-winning
soap operas The Young and the Restless in 1973 and The Bold and the Beautiful in 1987. Alumni such as Sheldon Harnick,
Stephanie D'Abruzzo, Heather Headley, Kristen Schaal, Lily Rabe, and Walter Kerr have distinguished themselves on
Broadway, as has designer Bob Mackie. Amsterdam-based comedy theater Boom Chicago was founded by Northwestern alumni,
and the school has become a training ground for future The Second City, I.O., ComedySportz, Mad TV and Saturday Night
Live talent. Tam Spiva wrote scripts for The Brady Bunch and Gentle Ben. In New York, Los Angeles, and Chicago, the
number of Northwestern alumni involved in theater, film, and television is so large that they are sometimes said to constitute a "Northwestern mafia." The Medill School of Journalism has produced notable journalists
and political activists including 38 Pulitzer Prize laureates. National correspondents, reporters and columnists
such as The New York Times's Elisabeth Bumiller, David Barstow, Dean Murphy, and Vincent Laforet, USA Today's Gary
Levin, Susan Page and Christine Brennan, NBC correspondent Kelly O'Donnell, CBS correspondent Richard Threlkeld,
CNN correspondent Nicole Lapin and former CNN and current Al Jazeera America anchor Joie Chen, and ESPN personalities
Rachel Nichols, Michael Wilbon, Mike Greenberg, Steve Weissman, J. A. Adande, and Kevin Blackistone. The bestselling
author of the A Song of Ice and Fire series, George R. R. Martin, earned a B.S. and M.S. from Medill. Elisabeth Leamy
is the recipient of 13 Emmy awards and 4 Edward R. Murrow Awards. The Feinberg School of Medicine (previously the
Northwestern University Medical School) has produced a number of notable graduates, including: Mary Harris Thompson, Class of 1870, ad eundem, first female surgeon in Chicago, first female surgeon at Cook County Hospital, and founder of the Mary Thompson Hospital; Roswell Park, Class of 1876, prominent surgeon for whom the Roswell Park Cancer Institute in Buffalo, New York, is named; Daniel Hale Williams, Class of 1883, who performed the first successful American open heart surgery and was the only black charter member of the American College of Surgeons; Charles Horace Mayo, Class of 1888, co-founder of the Mayo Clinic; Carlos Montezuma, Class of 1889, one of the first Native Americans to receive a Doctor of Medicine degree from any school, and founder of the Society of American Indians; Howard T. Ricketts, Class of 1897, who discovered bacteria of the genus Rickettsia and identified the cause and methods of transmission of Rocky Mountain spotted fever; Allen B. Kanavel, Class of 1899, founder, regent, and president of the American College of Surgeons, internationally recognized as the founder of modern hand and peripheral nerve surgery; Robert F. Furchgott, Class of 1940, who received a Lasker Award in 1996 and the 1998 Nobel Prize in Physiology or Medicine for his co-discovery of nitric oxide; Thomas E. Starzl, Class of 1952, who performed the first successful liver transplant in 1967 and received the National Medal of Science in 2004 and a Lasker Award in 2012; Joseph P. Kerwin, first physician in space, who flew on three Skylab missions and later served as director of Space and Life Sciences at NASA; C. Richard Schlegel, Class of 1972, who developed the dominant patent for a vaccine against human papillomavirus (administered as Gardasil) to prevent cervical cancer; David J. Skorton, Class of 1974, a noted cardiologist who became president of Cornell University in 2006; and Andrew E. Senyei, Class of 1979, inventor, venture capitalist, entrepreneur, founder of biotech and genetics companies, and a university trustee.
Strasbourg (/ˈstræzbɜːrɡ/, French pronunciation: [stʁaz.buʁ, stʁas.buʁ]; Alsatian: Strossburi; German: Straßburg, [ˈʃtʁaːsbʊɐ̯k])
is the capital and largest city of the Alsace-Champagne-Ardenne-Lorraine (ACAL) region in eastern France and is the
official seat of the European Parliament. Located close to the border with Germany, it is the capital of the Bas-Rhin
département. The city and the region of Alsace were historically predominantly Alemannic-speaking, hence the city's
Germanic name. In 2013, the city proper had 275,718 inhabitants, Eurométropole de Strasbourg (Greater Strasbourg)
had 475,934 inhabitants and the Arrondissement of Strasbourg had 482,384 inhabitants. With a population of 768,868
in 2012, Strasbourg's metropolitan area (only the part of the metropolitan area on French territory) is the ninth
largest in France and home to 13% of the ACAL region's inhabitants. The transnational Eurodistrict Strasbourg-Ortenau
had a population of 915,000 inhabitants in 2014. Strasbourg's historic city centre, the Grande Île (Grand Island),
was classified a World Heritage site by UNESCO in 1988, the first time such an honour was placed on an entire city
centre. Strasbourg is immersed in Franco-German culture and, although violently disputed throughout history, has
been a bridge of unity between France and Germany for centuries, especially through the University of Strasbourg,
currently the second largest in France, and the coexistence of Catholic and Protestant culture. The largest Islamic
place of worship in France, the Strasbourg Grand Mosque, was inaugurated by French Interior Minister Manuel Valls
on 27 September 2012. Strasbourg is situated on the eastern border of France with Germany. This border is formed
by the River Rhine, which also forms the eastern border of the modern city, facing across the river to the German
town Kehl. The historic core of Strasbourg however lies on the Grande Île in the River Ill, which here flows parallel
to, and roughly 4 kilometres (2.5 mi) from, the Rhine. The natural courses of the two rivers eventually join some
distance downstream of Strasbourg, although several artificial waterways now connect them within the city. The Romans
under Nero Claudius Drusus established a military outpost belonging to the Germania Superior Roman province at Strasbourg's
current location, and named it Argentoratum. (Hence the town was commonly called Argentina in medieval Latin.) The
name "Argentoratum" was first mentioned in 12 BC and the city celebrated its 2,000th birthday in 1988. "Argentorate"
as the toponym of the Gaulish settlement preceded it before being Latinized, but it is not known by how long. The
Roman camp was destroyed by fire and rebuilt six times between the first and the fifth centuries AD: in 70, 97, 235,
355, in the last quarter of the fourth century, and in the early years of the fifth century. It was under Trajan
and after the fire of 97 that Argentoratum received its most extended and fortified shape. From the year 90 on, the
Legio VIII Augusta was permanently stationed in the Roman camp of Argentoratum. It then included a cavalry section
and covered an area of approximately 20 hectares. Other Roman legions temporarily stationed in Argentoratum were
the Legio XIV Gemina and the Legio XXI Rapax, the latter during the reign of Nero. The centre of Argentoratum proper
was situated on the Grande Île (Cardo: current Rue du Dôme, Decumanus: current Rue des Hallebardes). The outline
of the Roman "castrum" is visible in the street pattern in the Grande Ile. Many Roman artifacts have also been found
along the current Route des Romains, the road that led to Argentoratum, in the suburb of Kœnigshoffen. This was where
the largest burial places were situated, as well as the densest concentration of civilian dwellings and businesses next to the camp. Among the most outstanding finds in Kœnigshoffen, discovered in 1911–12, were the fragments of a grand
Mithraeum that had been shattered by early Christians in the fourth century. From the fourth century, Strasbourg
was the seat of the Bishopric of Strasbourg (made an Archbishopric in 1988). Archaeological excavations below the
current Église Saint-Étienne in 1948 and 1956 unearthed the apse of a church dating back to the late fourth or early
fifth century, considered to be the oldest church in Alsace. It is supposed that this was the first seat of the Roman
Catholic Diocese of Strasbourg. In the fifth century Strasbourg was occupied successively by Alemanni, Huns, and
Franks. In the ninth century it was commonly known as Strazburg in the local language, as documented in 842 by the
Oaths of Strasbourg. This trilingual text contains, alongside texts in Latin and Old High German (teudisca lingua),
the oldest written variety of Gallo-Romance (lingua romana) clearly distinct from Latin, the ancestor of Old French.
The town was also called Stratisburgum or Strateburgus in Latin, from which later came Strossburi in Alsatian and
Straßburg in Standard German, and then Strasbourg in French. The Oaths of Strasbourg are considered to mark the
birth of the two countries of France and Germany with the division of the Carolingian Empire. A revolution in 1332
resulted in a broad-based city government with participation of the guilds, and Strasbourg declared itself a free
republic. The deadly bubonic plague of 1348 was followed on 14 February 1349 by one of the first and worst pogroms
in pre-modern history: over a thousand Jews were publicly burnt to death, with the remainder of the Jewish population
being expelled from the city. Until the end of the 18th century, Jews were forbidden to remain in town after 10 pm.
The time to leave the city was signalled by a municipal herald blowing the Grüselhorn (see below: Museums, Musée historique). A special tax, the Pflastergeld (pavement money), furthermore had to be paid for any horse that a Jew rode or brought into the city during the hours they were allowed in. In the 1520s, during the Protestant Reformation, the city, under
the political guidance of Jacob Sturm von Sturmeck and the spiritual guidance of Martin Bucer, embraced the religious teachings of Martin Luther. Their adherents established a Gymnasium, headed by Johannes Sturm, which was made into a university in the following century. The city first followed the Tetrapolitan Confession, and then the Augsburg Confession.
Protestant iconoclasm caused much destruction to churches and cloisters, notwithstanding that Luther himself opposed
such a practice. Strasbourg was a centre of humanist scholarship and early book-printing in the Holy Roman Empire,
and its intellectual and political influence contributed much to the establishment of Protestantism as an accepted
denomination in the southwest of Germany. (John Calvin spent several years as a political refugee in the city). The
Strasbourg Councillor Sturm and guildmaster Matthias represented the city at the Imperial Diet of Speyer (1529),
where their protest led to the schism of the Catholic Church and the evolution of Protestantism. Together with four
other free cities, Strasbourg presented the confessio tetrapolitana as its Protestant book of faith at the Imperial
Diet of Augsburg in 1530, where the slightly different Augsburg Confession was also handed over to Charles V, Holy
Roman Emperor. Louis XIV's advisors believed that, as long as Strasbourg remained independent, it would endanger the King's
newly annexed territories in Alsace, and, that to defend these large rural lands effectively, a garrison had to be
placed in towns such as Strasbourg. Indeed, the bridge over the Rhine at Strasbourg had been used repeatedly by Imperial
(Holy Roman Empire) forces, and three times during the Franco-Dutch War Strasbourg had served as a gateway for Imperial
invasions into Alsace. In September 1681 Louis' forces, though lacking a clear casus belli, surrounded the city with
overwhelming force. After some negotiation, Louis marched into the city unopposed on 30 September 1681 and proclaimed
its annexation. This annexation was one of the direct causes of the brief and bloody War of the Reunions whose outcome
left the French in possession. The French annexation was recognized by the Treaty of Ryswick (1697). The official
policy of religious intolerance which drove most Protestants from France after the revocation of the Edict of Nantes
in 1685 was not applied in Strasbourg and in Alsace, because both had a special status as a province à l'instar de
l'étranger effectif (a kind of foreign province of the king of France). Strasbourg Cathedral, however, was taken
from the Lutherans to be returned to the Catholics as the French authorities tried to promote Catholicism wherever
they could (some other historic churches remained in Protestant hands). Its language also remained overwhelmingly
German: the German Lutheran university persisted until the French Revolution. Famous students included Goethe and
Herder. Strasbourg's status as a free city was revoked by the French Revolution. Enragés, most notoriously Eulogius
Schneider, ruled the city with an increasingly iron hand. During this time, many churches and monasteries were either
destroyed or severely damaged. The cathedral lost hundreds of its statues (later replaced by copies in the 19th century)
and in April 1794, there was talk of tearing its spire down, on the grounds that it was against the principle of
equality. The tower was saved, however, when in May of the same year citizens of Strasbourg crowned it with a giant
tin Phrygian cap. This artifact was later kept in the historical collections of the city until it was destroyed by
the Germans in 1870 during the Franco-Prussian war. During the Franco-Prussian War and the Siege of Strasbourg, the
city was heavily bombarded by the Prussian army. The bombardment of the city was meant to break the morale of the
people of Strasbourg. On 24 and 26 August 1870, the Museum of Fine Arts was destroyed by fire, as was the Municipal
Library housed in the Gothic former Dominican church, with its unique collection of medieval manuscripts (most famously
the Hortus deliciarum), rare Renaissance books, archeological finds and historical artifacts. The gothic cathedral
was damaged as well as the medieval church of Temple Neuf, the theatre, the city hall, the court of justice and many
houses. At the end of the siege 10,000 inhabitants were left without shelter; over 600 died, including 261 civilians,
and 3200 were injured, including 1,100 civilians. In 1871, after the end of the war, the city was annexed to the
newly established German Empire as part of the Reichsland Elsass-Lothringen under the terms of the Treaty of Frankfurt.
As part of Imperial Germany, Strasbourg was rebuilt and developed on a grand and representative scale, such as the
Neue Stadt, or "new city" around the present Place de la République. Historian Rodolphe Reuss and Art historian Wilhelm
von Bode were in charge of rebuilding the municipal archives, libraries and museums. The University, founded in 1567
and suppressed during the French Revolution as a stronghold of German sentiment, was reopened in
1872 under the name Kaiser-Wilhelms-Universität. A belt of massive fortifications was established around the city,
most of which still stands today, renamed after French generals and generally classified as Monuments historiques;
most notably Fort Roon (now Fort Desaix) and Fort Podbielski (now Fort Ducrot) in Mundolsheim, Fort von Moltke (now
Fort Rapp) in Reichstett, Fort Bismarck (now Fort Kléber) in Wolfisheim, Fort Kronprinz (now Fort Foch) in Niederhausbergen,
Fort Kronprinz von Sachsen (now Fort Joffre) in Holtzheim and Fort Großherzog von Baden (now Fort Frère) in Oberhausbergen.
Following the defeat of the German empire in World War I and the abdication of the German Emperor, some revolutionary
insurgents declared Alsace-Lorraine an independent republic, without any preliminary referendum or vote. On 11 November
1918 (Armistice Day), communist insurgents proclaimed a "soviet government" in Strasbourg, following the example
of Kurt Eisner in Munich as well as other German towns. French troops commanded by French general Henri Gouraud entered
the city in triumph on 22 November. A major street of the city now bears the name of that date (Rue du 22 Novembre), commemorating the French entry into the city. Viewing the massive cheering crowd gathered under the balcony
of Strasbourg's town hall, French President Raymond Poincaré stated that "the plebiscite is done". In 1919, following
the Treaty of Versailles, the city was returned to France in accordance with U.S. President Woodrow Wilson's "Fourteen
Points" without a referendum. The date of the assignment was retroactively established on Armistice Day. It is doubtful
whether a referendum in Strasbourg would have ended in France's favour since the political parties striving for an
autonomous Alsace or a connection to France accounted only for a small proportion of votes in the last Reichstag
as well as in the local elections. The pro-French Alsatian autonomists had won many votes in the more rural
parts of the region and other towns since the annexation of the region by Germany in 1871. The movement started with
the first election for the Reichstag; those elected were called "les députés protestataires", and until the fall
of Bismarck in 1890, they were the only deputies elected by the Alsatians to the German parliament demanding the
return of those territories to France. At the last Reichstag election in Strasbourg and its periphery, the clear
winners were the Social Democrats; the city was the administrative capital of the region, was inhabited by many Germans
appointed by the central government in Berlin and its flourishing economy attracted many Germans. This could explain
the difference between the rural vote and the one in Strasbourg. After the war, many Germans left Strasbourg and
went back to Germany; some of them were denounced by the locals or expelled by the newly appointed authorities. The
Saverne Affair was vivid in the memory among the Alsatians. Between the German invasion of Poland on 1 September
1939 and the Anglo-French declaration of war against the German Reich on 3 September 1939, the entire city (a total of 120,000 people) was evacuated, as were other border towns. Until the arrival of Wehrmacht troops in mid-June 1940, the city was completely empty for ten months, with the exception of the garrisoned soldiers. The Jews of
Strasbourg had been evacuated to Périgueux and Limoges, the University had been evacuated to Clermont-Ferrand. After
the ceasefire following the Fall of France in June 1940, Alsace was annexed to Germany and a rigorous policy of Germanisation
was imposed upon it by the Gauleiter Robert Heinrich Wagner. When, in July 1940, the first evacuees were allowed
to return, only residents of Alsatian origin were admitted. The last Jews were deported on 15 July 1940 and the main
synagogue, a huge Romanesque revival building that had been a major architectural landmark with its 54-metre-high
dome since its completion in 1897, was set ablaze, then razed. In September 1940 the first Alsatian resistance movement
led by Marcel Weinum and called La main noire (The Black Hand) was created. It was composed of 25 young men aged 14 to 18 who carried out several attacks against the German occupation. Their actions culminated in an attack on Gauleiter Robert Wagner, the highest commander of Alsace, directly under the orders of Hitler. In March 1942, Marcel Weinum was prosecuted by the Gestapo and sentenced to be beheaded; he was executed at the age of 18 in April 1942 in Stuttgart, Germany. His last words were: "If I have to die, I shall die but with a pure heart". From 1943 the
city was bombarded by Allied aircraft. While the First World War had not notably damaged the city, Anglo-American
bombing caused extensive destruction in raids of which at least one was allegedly carried out by mistake. In August
1944, several buildings in the Old Town were damaged by bombs, particularly the Palais Rohan, the Old Customs House
(Ancienne Douane) and the Cathedral. On 23 November 1944, the city was officially liberated by the 2nd French Armoured
Division under General Leclerc. He thereby fulfilled the oath he had sworn with his soldiers after the decisive capture of Kufra: with the Oath of Kufra, they had vowed to keep up the fight until the French flag flew over the Cathedral of Strasbourg. Many people from Strasbourg were incorporated into the German Army against their will and sent to the eastern front; those young men and women were called Malgré-nous. Many tried to escape the incorporation, join the French Resistance, or desert the Wehrmacht, but many could not, because they ran the risk of having their families sent to labour or concentration camps by the Germans. Many of these men, especially those who did not answer the call immediately, were pressured to "volunteer" for service with the SS, often by direct threats against their families. This threat obliged the majority of them to remain in the German army. After the war, the few who survived were often accused of being traitors or collaborators, because this difficult situation was not known in the rest of France, and they had to face the incomprehension of many. In July 1944, 1,500 Malgré-nous were released from Soviet captivity and sent to Algiers, where they joined the Free French Forces. Today the suffering of these people is recognized, and museums, public discussions and memorials commemorate this terrible period in the history of this part of eastern France (Alsace and Moselle). In 1949, the city was chosen to be the seat of the Council of Europe with its European Court of Human Rights
and European Pharmacopoeia. Since 1952, the European Parliament has met in Strasbourg, which was formally designated
its official 'seat' at the Edinburgh meeting of the European Council of EU heads of state and government in December
1992. (This position was reconfirmed and given treaty status in the 1997 Treaty of Amsterdam). However, only the
(four-day) plenary sessions of the Parliament are held in Strasbourg each month, with all other business being conducted
in Brussels and Luxembourg. Those sessions take place in the Immeuble Louise Weiss, inaugurated in 1999, which houses
the largest parliamentary assembly room in Europe, and the largest of any democratic institution in the world. Before that, the
EP sessions had to take place in the main Council of Europe building, the Palace of Europe, whose unusual inner architecture
had become a familiar sight to European TV audiences. In 1992, Strasbourg became the seat of the Franco-German TV
channel and movie-production society Arte. In addition to the cathedral, Strasbourg houses several other medieval
churches that have survived the many wars and destructions that have plagued the city: the Romanesque Église Saint-Étienne,
partly destroyed in 1944 by Allied bombing raids, the part Romanesque, part Gothic, very large Église Saint-Thomas
with its Silbermann organ on which Wolfgang Amadeus Mozart and Albert Schweitzer played, the Gothic Église protestante
Saint-Pierre-le-Jeune with its crypt dating back to the seventh century and its cloister partly from the eleventh
century, the Gothic Église Saint-Guillaume with its fine early-Renaissance stained glass and furniture, the Gothic
Église Saint-Jean, the part Gothic, part Art Nouveau Église Sainte-Madeleine, etc. The Neo-Gothic church Saint-Pierre-le-Vieux
Catholique (there is also an adjacent church Saint-Pierre-le-Vieux Protestant) serves as a shrine for several 15th-century
wood worked and painted altars coming from other, now destroyed churches and installed there for public display.
Among the numerous secular medieval buildings, the monumental Ancienne Douane (old custom-house) stands out. The
German Renaissance has bequeathed the city some noteworthy buildings (especially the current Chambre de commerce
et d'industrie, former town hall, on Place Gutenberg), as did the French Baroque and Classicism with several hôtels
particuliers (i.e. palaces), among which the Palais Rohan (1742, now housing three museums) is the most spectacular.
Other buildings of its kind are the "Hôtel de Hanau" (1736, now the city hall), the Hôtel de Klinglin (1736, now
residence of the préfet), the Hôtel des Deux-Ponts (1755, now residence of the military governor), the Hôtel d'Andlau-Klinglin
(1725, now seat of the administration of the Port autonome de Strasbourg) etc. The largest baroque building of Strasbourg
though is the 150 m (490 ft) long 1720s main building of the Hôpital civil. As for French Neo-classicism, it is the
Opera House on Place Broglie that most prestigiously represents this style. Strasbourg also offers high-class eclecticist
buildings in its very extended German district, the Neustadt, the main surviving showcase of Wilhelmian architecture, since
most of the major cities in Germany proper suffered intensive damage during World War II. Streets, boulevards and
avenues are homogeneous, surprisingly high (up to seven stories) and broad examples of German urban lay-out and of
this architectural style, which draws on and mixes five centuries of European architecture as well as Neo-Egyptian, Neo-Greek and Neo-Babylonian styles. The former imperial palace Palais du Rhin, the most political and thus most heavily criticized of all German Strasbourg buildings, epitomizes the grand scale and stylistic sturdiness of this period.
But the two most handsome and ornate buildings of these times are the École internationale des Pontonniers (the former
Höhere Mädchenschule, girls college) with its towers, turrets and multiple round and square angles and the École
des Arts décoratifs with its lavishly ornate façade of painted bricks, woodwork and majolica. As for modern and contemporary
architecture, Strasbourg possesses some fine Art Nouveau buildings (such as the huge Palais des Fêtes and houses
and villas like Villa Schutzenberger and Hôtel Brion), good examples of post-World War II functional architecture
(the Cité Rotterdam, for which Le Corbusier did not succeed in the architectural contest) and, in the very extended
Quartier Européen, some spectacular administrative buildings, some of them very large, among which the European
Court of Human Rights building by Richard Rogers is arguably the finest. Other noticeable contemporary buildings
are the new Music school Cité de la Musique et de la Danse, the Musée d'Art moderne et contemporain and the Hôtel
du Département facing it, as well as, in the outskirts, the tramway-station Hoenheim-Nord designed by Zaha Hadid.
Strasbourg features a number of prominent parks, of which several are of cultural and historical interest: the Parc
de l'Orangerie, laid out as a French garden by André le Nôtre and remodeled as an English garden on behalf of Joséphine
de Beauharnais, now displaying noteworthy French gardens, a neo-classical castle and a small zoo; the Parc de la
Citadelle, built around impressive remains of the 17th-century fortress erected close to the Rhine by Vauban; the
Parc de Pourtalès, laid out in English style around a baroque castle (heavily restored in the 19th century) that
now houses a small three-star hotel, and featuring an open-air museum of international contemporary sculpture. The
Jardin botanique de l'Université de Strasbourg (botanical garden) was created under the German administration next
to the Observatory of Strasbourg, built in 1881, and still owns some greenhouses of those times. The Parc des Contades,
although the oldest park of the city, was completely remodeled after World War II. The futuristic Parc des Poteries
is an example of European park-conception in the late 1990s. The Jardin des deux Rives, spread over Strasbourg and
Kehl on both sides of the Rhine opened in 2004 and is the most extended (60-hectare) park of the agglomeration. The
most recent park is the Parc du Heyritz (8.7 ha), opened in 2014 along a canal facing the hôpital civil. Unlike most
other cities, Strasbourg's collections of European art are divided into several museums according not only to type
and area, but also to epoch. Old master paintings from the Germanic Rhenish territories and until 1681 are displayed
in the Musée de l'Œuvre Notre-Dame, old master paintings from all the rest of Europe (including the Dutch Rhenish
territories) and until 1871 as well as old master paintings from the Germanic Rhenish territories between 1681 and
1871 are displayed in the Musée des Beaux-Arts. Old master graphic arts until 1871 are displayed in the Cabinet des
estampes et dessins. Decorative arts until 1681 ("German period") are displayed in the Musée de l'Œuvre Notre-Dame,
decorative arts from 1681 to 1871 ("French period") are displayed in the Musée des Arts décoratifs. International
art (painting, sculpture, graphic arts) and decorative art since 1871 is displayed in the Musée d'art moderne et
contemporain. The latter museum also displays the city's photographic library. Strasbourg, well known as centre of
humanism, has a long history of excellence in higher-education, at the crossroads of French and German intellectual
traditions. Although Strasbourg had been annexed by the Kingdom of France in 1683, it still remained connected to
the German-speaking intellectual world throughout the 18th century and the university attracted numerous students
from the Holy Roman Empire, including Goethe, Metternich and Montgelas, who studied law in Strasbourg, among the
most prominent. Nowadays, Strasbourg is known to offer some of the best university courses in France after those of Paris.
The Bibliothèque nationale et universitaire (BNU) is, with its collection of more than 3,000,000 titles, the second
largest library in France after the Bibliothèque nationale de France. It was founded by the German administration
after the complete destruction of the previous municipal library in 1871 and holds the unique status of being simultaneously
a students' and a national library. The Strasbourg municipal library had been marked erroneously as "City Hall" in
a French commercial map, which had been captured and used by the German artillery to lay their guns. A librarian
from Munich later pointed out "...that the destruction of the precious collection was not the fault of a German artillery
officer, who used the French map, but of the slovenly and inaccurate scholarship of a Frenchman." As one of the earliest
centers of book-printing in Europe (see above: History), Strasbourg for a long time held a large number of incunabula—documents
printed before 1500—in its library as one of its most precious holdings. After the total destruction of this institution
in 1870, however, a new collection had to be reassembled from scratch. Today, Strasbourg's different public and institutional
libraries again display a sizable total number of incunabula, distributed as follows: Bibliothèque nationale et universitaire,
ca. 2,098; Médiathèque de la ville et de la communauté urbaine de Strasbourg, 394; Bibliothèque du Grand Séminaire, 238; Médiathèque protestante, 94; and Bibliothèque alsatique du Crédit Mutuel, 5. City transportation in Strasbourg
includes the futurist-looking Strasbourg tramway that opened in 1994 and is operated by the regional transit company
Compagnie des Transports Strasbourgeois (CTS), consisting of 6 lines with a total length of 55.8 km (34.7 mi). The
CTS also operates a comprehensive bus network throughout the city that is integrated with the trams. With more than
500 km (311 mi) of bicycle paths, biking in the city is convenient and the CTS operates a cheap bike-sharing scheme
named Vélhop. The CTS, and its predecessors, also operated a previous generation of tram system between 1878 and
1960, complemented by trolleybus routes between 1939 and 1962. Being a city on the Ill and close to the Rhine, Strasbourg
has always been an important centre of fluvial navigation, as is attested by archeological findings. In 1682 the
Canal de la Bruche was added to the river navigations, initially to provide transport for sandstone from quarries
in the Vosges for use in the fortification of the city. That canal has since closed, but the subsequent Canal du
Rhône au Rhin, Canal de la Marne au Rhin and Grand Canal d'Alsace are still in use, as is the important activity
of the Port autonome de Strasbourg. Water tourism inside the city proper attracts hundreds of thousands of tourists
yearly. The tram system that now criss-crosses the historic city centre complements walking and biking in it. The
centre has been transformed into a pedestrian priority zone that enables and invites walking and biking by making
these active modes of transport comfortable, safe and enjoyable. These attributes are accomplished by applying the
principle of "filtered permeability" to the existing irregular network of streets. It means that the network adaptations
favour active transportation and, selectively, "filter out" the car by reducing the number of streets that run through
the centre. While certain streets are discontinuous for cars, they connect to a network of pedestrian and bike paths
which permeate the entire centre. In addition, these paths go through public squares and open spaces increasing the
enjoyment of the trip. This logic of filtering a mode of transport is fully expressed in a comprehensive model for
laying out neighbourhoods and districts – the Fused Grid. At present the A35 autoroute, which parallels the Rhine
between Karlsruhe and Basel, and the A4 autoroute, which links Paris with Strasbourg, penetrate close to the centre
of the city. The Grand contournement ouest (GCO) project, programmed since 1999, plans to construct a 24 km (15 mi)
long highway connection between the junctions of the A4 and the A35 autoroutes in the north and of the A35 and A352
autoroutes in the south. This route runs well to the west of the city and is meant to divert a significant portion of motorized traffic from the unité urbaine.
Oklahoma i/ˌoʊkləˈhoʊmə/ (Cherokee: Asgaya gigageyi / ᎠᏍᎦᏯ ᎩᎦᎨᏱ; or translated ᎣᎦᎳᎰᎹ (òɡàlàhoma), Pawnee: Uukuhuúwa, Cayuga:
Gahnawiyoˀgeh) is a state located in the South Central United States. Oklahoma is the 20th most extensive and the
28th most populous of the 50 United States. The state's name is derived from the Choctaw words okla and humma, meaning
"red people". It is also known informally by its nickname, The Sooner State, in reference to the non-Native settlers
who staked their claims on the choicest pieces of land before the official opening date, and the Indian Appropriations
Act of 1889, which opened the door for white settlement in America's Indian Territory. The name was settled upon at statehood, when Oklahoma Territory and Indian Territory were merged and "Indian" was dropped from the name. On November
16, 1907, Oklahoma became the 46th state to enter the union. Its residents are known as Oklahomans, or informally
"Okies", and its capital and largest city is Oklahoma City. The name Oklahoma comes from the Choctaw phrase okla
humma, literally meaning red people. Choctaw Chief Allen Wright suggested the name in 1866 during treaty negotiations
with the federal government regarding the use of Indian Territory, in which he envisioned an all-Indian state controlled
by the United States Superintendent of Indian Affairs. Equivalent to the English word Indian, okla humma was a phrase
in the Choctaw language used to describe Native American people as a whole. Oklahoma later became the de facto name
for Oklahoma Territory, and it was officially approved in 1890, two years after the area was opened to white settlers.
Oklahoma is the 20th largest state in the United States, covering an area of 69,898 square miles (181,035 km2), with
68,667 square miles (177,847 km2) of land and 1,281 square miles (3,188 km2) of water. It is one of six states on
the Frontier Strip and lies partly in the Great Plains near the geographical center of the 48 contiguous states.
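The paired square-mile and square-kilometre figures above can be sanity-checked with a short script (a sketch; the conversion factor of 2.589988 km² per square mile is the standard definition-based value, and the area figures are taken from the text):

```python
# Sanity-check the square-mile to square-kilometre conversions quoted above.
SQ_MI_TO_SQ_KM = 2.589988  # square kilometres per square mile

areas_sq_mi = {"total": 69_898, "land": 68_667}

for name, sq_mi in areas_sq_mi.items():
    sq_km = round(sq_mi * SQ_MI_TO_SQ_KM)
    print(f"{name}: {sq_mi:,} sq mi = {sq_km:,} km2")
```

Rounding reproduces the 181,035 km2 and 177,847 km2 figures given above.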
It is bounded on the east by Arkansas and Missouri, on the north by Kansas, on the northwest by Colorado, on the
far west by New Mexico, and on the south and near-west by Texas. The western edge of the Oklahoma panhandle is out
of alignment with its Texas border. The Oklahoma/New Mexico border is actually 2.1 to 2.2 miles east of the Texas
line. The border between Texas and New Mexico was set first as a result of a survey by Spain in 1819. It was then
set along the 103rd Meridian. In the 1890s, when Oklahoma was formally surveyed using more accurate surveying equipment
and techniques, it was discovered that the Texas line was not set along the 103rd Meridian. Surveying techniques
were not as accurate in 1819, and the actual 103rd Meridian was approximately 2.2 miles to the east. It was much
easier to leave the mistake as it was than for Texas to cede land to New Mexico to correct the original surveying
error. The placement of the Oklahoma/New Mexico border represents the true 103rd Meridian. Oklahoma is between the
Great Plains and the Ozark Plateau in the Gulf of Mexico watershed, generally sloping from the high plains of its
western boundary to the low wetlands of its southeastern boundary. Its highest and lowest points follow this trend,
with its highest peak, Black Mesa, at 4,973 feet (1,516 m) above sea level, situated near its far northwest corner
in the Oklahoma Panhandle. The state's lowest point is on the Little River near its far southeastern boundary near
the town of Idabel, OK, which dips to 289 feet (88 m) above sea level. Oklahoma has four primary mountain ranges:
the Ouachita Mountains, the Arbuckle Mountains, the Wichita Mountains, and the Ozark Mountains. Contained within
the U.S. Interior Highlands region, the Ozark and Ouachita Mountains mark the only major mountainous region between
the Rocky Mountains and the Appalachians. A portion of the Flint Hills stretches into north-central Oklahoma, and
near the state's eastern border, Cavanal Hill is regarded by the Oklahoma Tourism & Recreation Department as the
world's tallest hill; at 1,999 feet (609 m), it fails their definition of a mountain by one foot. The semi-arid high
plains in the state's northwestern corner harbor few natural forests; the region has a rolling to flat landscape
with intermittent canyons and mesa ranges like the Glass Mountains. Partial plains interrupted by small, sky island
mountain ranges like the Antelope Hills and the Wichita Mountains dot southwestern Oklahoma; transitional prairie
and oak savannahs cover the central portion of the state. The Ozark and Ouachita Mountains rise from west to east
over the state's eastern third, gradually increasing in elevation in an eastward direction. Forests cover 24 percent
of Oklahoma, and prairie grasslands composed of shortgrass, mixed-grass, and tallgrass prairie harbor expansive ecosystems
in the state's central and western portions, although cropland has largely replaced native grasses. Where rainfall
is sparse in the western regions of the state, shortgrass prairie and shrublands are the most prominent ecosystems,
though pinyon pines, red cedar (junipers), and ponderosa pines grow near rivers and creek beds in the far western
reaches of the panhandle. Southwestern Oklahoma contains many rare, disjunct species including sugar maple, bigtooth
maple, nolina and southern live oak. The state holds populations of white-tailed deer, mule deer, antelope, coyotes,
mountain lions, bobcats, elk, and birds such as quail, doves, cardinals, bald eagles, red-tailed hawks, and pheasants.
In prairie ecosystems, American bison, greater prairie chickens, badgers, and armadillo are common, and some of the
nation's largest prairie dog towns inhabit shortgrass prairie in the state's panhandle. The Cross Timbers, a region
transitioning from prairie to woodlands in Central Oklahoma, harbors 351 vertebrate species. The Ouachita Mountains
are home to black bear, red fox, grey fox, and river otter populations, which coexist with a total of 328 vertebrate
species in southeastern Oklahoma. Also, in southeastern Oklahoma lives the American alligator. With 39,000 acres
(158 km2), the Tallgrass Prairie Preserve in north-central Oklahoma is the largest protected area of tallgrass prairie
in the world and is part of an ecosystem that encompasses only 10 percent of its former land area, once covering
14 states. In addition, the Black Kettle National Grassland covers 31,300 acres (127 km2) of prairie in southwestern
Oklahoma. The Wichita Mountains Wildlife Refuge is the oldest and largest of nine national wildlife refuges in the
state and was founded in 1901, encompassing 59,020 acres (238.8 km2). Oklahoma is located in a humid subtropical region, lying in a transition zone between humid continental climate to the north, semi-arid climate to the
west, and humid subtropical climate in the central, south and eastern portions of the state. Most of the state lies
in an area known as Tornado Alley characterized by frequent interaction between cold, dry air from Canada, warm to
hot, dry air from Mexico and the Southwestern U.S., and warm, moist air from the Gulf of Mexico. The interactions
between these three contrasting air currents produce severe weather (severe thunderstorms, damaging thunderstorm winds, large hail and tornadoes) with a frequency virtually unseen anywhere else on Earth. An average of 62 tornadoes
strike the state per year—one of the highest rates in the world. Because of Oklahoma's position between zones of
differing prevailing temperature and winds, weather patterns within the state can vary widely over relatively short
distances and can change drastically in a short time. As an example, on November 11, 1911, the temperature at Oklahoma
City reached 83 °F (28 °C) in the afternoon (the record high for that date), then an Arctic cold front of unprecedented
intensity slammed across the state, causing the temperature to crash 66 degrees, down to 17 °F (−8 °C) at midnight
(the record low for that date); thus, both the record high and record low for November 11 were set on the same date.
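The arithmetic behind that record day can be verified in a few lines (a sketch; the figures come from the passage above):

```python
# Verify the November 11, 1911 swing and the Fahrenheit-to-Celsius conversions.
def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5 / 9

high_f, low_f = 83, 17
print(high_f - low_f)         # 66-degree drop
print(round(f_to_c(high_f)))  # 28 (°C equivalent of 83 °F)
print(round(f_to_c(low_f)))   # -8 (°C equivalent of 17 °F)
```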
This type of phenomenon is also responsible for many of the tornadoes in the area, such as the 1912 Oklahoma tornado
outbreak, when a warm front traveled along a stalled cold front, resulting in an average of about one tornado per
hour over the course of a day. The humid subtropical climate (Köppen Cfa) of central, southern and eastern Oklahoma
is influenced heavily by southerly winds bringing moisture from the Gulf of Mexico. Traveling westward, the climate
transitions progressively toward a semi-arid zone (Köppen BSk) in the high plains of the Panhandle and other western
areas from about Lawton westward, less frequently touched by southern moisture. Precipitation and temperatures decline
from east to west accordingly, with areas in the southeast averaging an annual temperature of 62 °F (17 °C) and an
annual rainfall of generally over 40 inches (1,020 mm) and up to 56 inches (1,420 mm), while areas of the (higher-elevation)
panhandle average 58 °F (14 °C), with an annual rainfall under 17 inches (430 mm). Over almost all of Oklahoma, winter
is the driest season. Average monthly precipitation increases dramatically in the spring to a peak in May, the wettest
month over most of the state, with its frequent and not uncommonly severe thunderstorm activity. Early June can still
be wet, but most years see a marked decrease in rainfall during June and early July. Mid-summer (July and August)
represents a secondary dry season over much of Oklahoma, with long stretches of hot weather with only sporadic thunderstorm
activity not uncommon many years. Severe drought is common in the hottest summers, such as those of 1934, 1954, 1980
and 2011, all of which featured weeks on end of virtual rainlessness and high temperatures well over 100 °F (38 °C).
Average precipitation rises again from September to mid-October, representing a secondary wetter season, then declines
from late October through December. All of the state frequently experiences temperatures above 100 °F (38 °C) or
below 0 °F (−18 °C), though below-zero temperatures are rare in south-central and southeastern Oklahoma. Snowfall
ranges from an average of less than 4 inches (10 cm) in the south to just over 20 inches (51 cm) on the border of
Colorado in the panhandle. The state is home to the Storm Prediction Center, the National Severe Storms Laboratory,
and the Warning Decision Training Branch, all part of the National Weather Service and located in Norman. Oklahoma's
highest recorded temperature of 120 °F (49 °C) was recorded at Tipton on June 27, 1994, and the lowest recorded temperature
of −31 °F (−35 °C) was recorded at Nowata on February 10, 2011. Evidence exists that native peoples traveled through
Oklahoma as early as the last ice age. Ancestors of the Wichita and Caddo lived in what is now Oklahoma. The Panhandle
culture peoples were precontact residents of the panhandle region. The westernmost center of the Mississippian culture
was Spiro Mounds, in what is now Spiro, Oklahoma, which flourished between AD 850 and 1450. Spaniard Francisco Vásquez
de Coronado traveled through the state in 1541, but French explorers claimed the area in the 1700s and it remained
under French rule until 1803, when all the French territory west of the Mississippi River was purchased by the United
States in the Louisiana Purchase. The new state became a focal point for the emerging oil industry, as discoveries
of oil pools prompted towns to grow rapidly in population and wealth. Tulsa eventually became known as the "Oil Capital
of the World" for most of the 20th century and oil investments fueled much of the state's early economy. In 1927,
Oklahoman businessman Cyrus Avery, known as the "Father of Route 66", began the campaign to create U.S. Route 66.
Using a stretch of highway from Amarillo, Texas to Tulsa, Oklahoma to form the original portion of Highway 66, Avery
spearheaded the creation of the U.S. Highway 66 Association to oversee the planning of Route 66, based in his hometown
of Tulsa. During the 1930s, parts of the state began suffering the consequences of poor farming practices, extended
drought and high winds. Known as the Dust Bowl, areas of Kansas, Texas, New Mexico and northwestern Oklahoma were
hampered by long periods of little rainfall and abnormally high temperatures, sending thousands of farmers into poverty
and forcing them to relocate to more fertile areas of the western United States. Over a twenty-year period ending
in 1950, the state saw its only historical decline in population, dropping 6.9 percent as impoverished families migrated
out of the state after the Dust Bowl. In 1995, Oklahoma City was the site of one of the most destructive acts of
domestic terrorism in American history. The Oklahoma City bombing of April 19, 1995, in which Timothy McVeigh and
Terry Nichols detonated an explosive outside of the Alfred P. Murrah Federal Building, killed 168 people, including
19 children. The two men were convicted of the bombing: McVeigh was sentenced to death and executed by the federal
government on June 11, 2001; his partner Nichols is serving a sentence of life in prison without the possibility
of parole. McVeigh's army buddy, Michael Fortier, was sentenced to 12 years in federal prison and ordered to pay
a $75,000 fine for his role in the bombing plot (i.e. assisting in the sale of guns to raise funds for the bombing,
and examining the Murrah Federal building as a possible target before the terrorist attack). His wife, Lori Fortier,
who has since died, was granted immunity from prosecution in return for her testimony in the case. The English language
has been official in the state of Oklahoma since 2010. The variety of North American English spoken is called Oklahoma
English, and this dialect is quite diverse with its uneven blending of features of North Midland, South Midland,
and Southern dialects. In 2000, 2,977,187 Oklahomans—92.6% of the resident population five years or older—spoke only
English at home, a decrease from 95% in 1990. 238,732 Oklahoma residents reported speaking a language other than
English in the 2000 census, about 7.4% of the total population of the state. Spanish is the second most commonly
spoken language in the state, with 141,060 speakers counted in 2000. The next most commonly spoken language is Cherokee,
with about 22,000 speakers living within the Cherokee Nation tribal jurisdiction area of eastern Oklahoma. Cherokee
is an official language in the Cherokee Nation tribal jurisdiction area and in the United Keetoowah Band of Cherokee
Indians. German is the fourth most commonly used language, with 13,444 speakers representing about 0.4% of the total
state population. Fifth is Vietnamese, spoken by 11,330 people, or about 0.4% of the population, many of whom live
in the Asia District of Oklahoma City. Other languages include French with 8,258 speakers (0.3%), Chinese with 6,413
(0.2%), Korean with 3,948 (0.1%), Arabic with 3,265 (0.1%), other Asian languages with 3,134 (0.1%), Tagalog with
2,888 (0.1%), Japanese with 2,546 (0.1%), and African languages with 2,546 (0.1%). In addition to Cherokee, more
than 25 Native American languages are spoken in Oklahoma, second only to California (though only Cherokee exhibits language vitality at present). Oklahoma is part of a geographical region characterized by
conservative and Evangelical Christianity known as the "Bible Belt". Spanning the southern and eastern parts of the
United States, the area is known for politically and socially conservative views, even though Oklahoma has more voters
registered with the Democratic Party than with any other party. Tulsa, the state's second largest city, home to Oral
Roberts University, is sometimes called the "buckle of the Bible Belt". According to the Pew Research Center, the
majority of Oklahoma's religious adherents – 85 percent – are Christian, accounting for about 80 percent of the population.
The percentage of Oklahomans affiliated with Catholicism is half of the national average, while the percentage affiliated
with Evangelical Protestantism is more than twice the national average – tied with Arkansas for the largest percentage
of any state. Oklahoma is host to a diverse range of sectors including aviation, energy, transportation equipment,
food processing, electronics, and telecommunications. Oklahoma is an important producer of natural gas, aircraft,
and food. The state ranks third in the nation for production of natural gas, is the 27th-most agriculturally productive
state, and also ranks 5th in production of wheat. Four Fortune 500 companies and six Fortune 1000 companies are headquartered
in Oklahoma, and it has been rated one of the most business-friendly states in the nation, with the 7th-lowest tax
burden in 2007. Oklahoma is the nation's third-largest producer of natural gas, fifth-largest producer of crude oil,
and has the second-greatest number of active drilling rigs, and ranks fifth in crude oil reserves. While the state
ranked eighth for installed wind energy capacity in 2011, it is at the bottom of states in usage of renewable energy,
with 94 percent of its electricity being generated by non-renewable sources in 2009, including 25 percent from coal
and 46 percent from natural gas. Oklahoma has no nuclear power. Ranking 13th for total energy consumption per capita
in 2009, Oklahoma's energy costs were 8th lowest in the nation. According to Forbes magazine, Oklahoma City-based
Devon Energy Corporation, Chesapeake Energy Corporation, and SandRidge Energy Corporation are the largest private
oil-related companies in the nation, and all of Oklahoma's Fortune 500 companies are energy-related. Tulsa's ONEOK
and Williams Companies are the state's largest and second-largest companies respectively, also ranking as the nation's
second and third-largest companies in the field of energy, according to Fortune magazine. The magazine also placed
Devon Energy as the second-largest company in the mining and crude oil-producing industry in the nation, while Chesapeake
Energy ranks seventh in that sector, and Oklahoma Gas & Electric ranks as the 25th-largest gas and electric
utility company. The state has a rich history in ballet with five Native American ballerinas attaining worldwide
fame. These were Yvonne Chouteau, sisters Marjorie and Maria Tallchief, Rosella Hightower and Moscelyne Larkin, known
collectively as the Five Moons. The New York Times rates the Tulsa Ballet as one of the top ballet companies in the
United States. The Oklahoma City Ballet and University of Oklahoma's dance program were formed by ballerina Yvonne
Chouteau and husband Miguel Terekhov. The University program was founded in 1962 and was the first fully accredited
program of its kind in the United States. In Sand Springs, an outdoor amphitheater called "Discoveryland!" is the
official performance headquarters for the musical Oklahoma! Ridge Bond, a native of McAlester, Oklahoma, starred in the Broadway and international touring productions of Oklahoma!, playing the role of "Curly McLain" in more than
2,600 performances. In 1953 he was featured along with the Oklahoma! cast on a CBS Omnibus television broadcast.
Bond was instrumental in the title song becoming the Oklahoma state song and is also featured on the U.S. postage
stamp commemorating the musical's 50th anniversary. Historically, the state has produced musical styles such as The
Tulsa Sound and western swing, which was popularized at Cain's Ballroom in Tulsa. The building, known as the "Carnegie
Hall of Western Swing", served as the performance headquarters of Bob Wills and the Texas Playboys during the 1930s.
Stillwater is known as the epicenter of Red Dirt music, the best-known proponent of which is the late Bob Childers.
Prominent theatre companies in Oklahoma include, in the capital city, Oklahoma City Theatre Company, Carpenter Square
Theatre, Oklahoma Shakespeare in the Park, and CityRep. CityRep is a professional company affording equity points
to performers and technical theatre professionals. In Tulsa, Oklahoma's oldest resident professional company
is American Theatre Company, and Theatre Tulsa is the oldest community theatre company west of the Mississippi. Other
companies in Tulsa include Heller Theatre and Tulsa Spotlight Theater. The cities of Norman, Lawton, and Stillwater,
among others, also host well-reviewed community theatre companies. Oklahoma is in the nation's middle percentile
in per capita spending on the arts, ranking 17th, and contains more than 300 museums. The Philbrook Museum of Tulsa
is considered one of the top 50 fine art museums in the United States, and the Sam Noble Oklahoma Museum of Natural
History in Norman, one of the largest university-based art and history museums in the country, documents the natural
history of the region. The collections of Thomas Gilcrease are housed in the Gilcrease Museum of Tulsa, which also
holds the world's largest, most comprehensive collection of art and artifacts of the American West. The Egyptian
art collection at the Mabee-Gerrer Museum of Art in Shawnee is considered to be the finest Egyptian collection between
Chicago and Los Angeles. The Oklahoma City Museum of Art contains the most comprehensive collection of glass sculptures
by artist Dale Chihuly in the world, and Oklahoma City's National Cowboy and Western Heritage Museum documents the
heritage of the American Western frontier. With remnants of the Holocaust and artifacts relevant to Judaism, the
Sherwin Miller Museum of Jewish Art of Tulsa preserves the largest collection of Jewish art in the Southwest United
States. Oklahoma's centennial celebration was named the top event in the United States for 2007 by the American Bus
Association, and consisted of multiple celebrations culminating with the 100th anniversary of statehood on November 16,
2007. Annual ethnic festivals and events take place throughout the state such as Native American powwows and ceremonial
events, and include festivals (as examples) in Scottish, Irish, German, Italian, Vietnamese, Chinese, Czech, Jewish,
Arab, Mexican and African-American communities depicting cultural heritage or traditions. During a 10-day run in
Oklahoma City, the State Fair of Oklahoma attracts roughly one million people along with the annual Festival of the
Arts. Large national pow-wows, various Latin and Asian heritage festivals, and cultural festivals such as the Juneteenth
celebrations are held in Oklahoma City each year. The Tulsa State Fair attracts over one million people during its
10-day run, and the city's Mayfest festival entertained more than 375,000 people in four days during 2007. In 2006,
Tulsa's Oktoberfest was named one of the top 10 in the world by USA Today and one of the top German food festivals
in the nation by Bon Appetit magazine. Norman plays host to the Norman Music Festival, a festival that highlights
native Oklahoma bands and musicians. Norman is also host to the Medieval Fair of Norman, which has been held annually
since 1976 and was Oklahoma's first medieval fair. The Fair was held first on the south oval of the University of
Oklahoma campus and in the third year moved to the Duck Pond in Norman until the Fair became too big and moved to
Reaves Park in 2003. The Medieval Fair of Norman is Oklahoma's largest weekend event and the third-largest event in Oklahoma, and was selected by Events Media Network as one of the top 100 events in the nation. With an educational
system made up of public school districts and independent private institutions, Oklahoma had 638,817 students enrolled
in 1,845 public primary, secondary, and vocational schools in 533 school districts as of 2008. Oklahoma has the highest enrollment of Native American students in the nation, with 126,078 students in the 2009–10 school year.
Ranked near the bottom of states in expenditures per student, Oklahoma spent $7,755 for each student in 2008, 47th
in the nation, though its growth of total education expenditures between 1992 and 2002 ranked 22nd. The state is
among the best in pre-kindergarten education, and the National Institute for Early Education Research rated it first
in the United States with regard to standards, quality, and access to pre-kindergarten education in 2004, calling
it a model for early childhood schooling. High school dropout rate decreased from 3.1 to 2.5 percent between 2007
and 2008 with Oklahoma ranked among 18 other states with 3 percent or less dropout rate. In 2004, the state ranked
36th in the nation for the relative number of adults with high school diplomas, though at 85.2 percent, it had the
highest rate among southern states. Oklahoma holds eleven public regional universities, including Northeastern State
University, the second-oldest institution of higher education west of the Mississippi River, also containing the
only College of Optometry in Oklahoma and the largest enrollment of Native American students in the nation by percentage
and amount. Langston University is Oklahoma's only historically black college. Six of the state's universities were
placed in the Princeton Review's list of best 122 regional colleges in 2007, and three made the list of top colleges
for best value. The state has 55 post-secondary technical institutions operated by Oklahoma's CareerTech program
for training in specific fields of industry or trade. In the 2007–2008 school year, there were 181,973 undergraduate
students, 20,014 graduate students, and 4,395 first-professional degree students enrolled in Oklahoma colleges. Of
these students, 18,892 received a bachelor's degree, 5,386 received a master's degree, and 462 received a first professional
degree. This means the state of Oklahoma produces an average of 38,278 degree-holders per completions component (i.e., the reporting year July 1, 2007 – June 30, 2008); the national average is 68,322 total degrees awarded per completions component. The Cherokee
Nation instigated a 10-year language preservation plan that involved growing new fluent speakers of the Cherokee
language from childhood on up through school immersion programs as well as a collaborative community effort to continue
to use the language at home. This plan was part of an ambitious goal that in 50 years, 80% or more of the Cherokee
people will be fluent in the language. The Cherokee Preservation Foundation has invested $3 million into opening
schools, training teachers, and developing curricula for language education, as well as initiating community gatherings
where the language can be actively used. There is a Cherokee language immersion school in Tahlequah, Oklahoma that
educates students from pre-school through eighth grade. Graduates are fluent speakers of the language. Several universities
offer Cherokee as a second language, including the University of Oklahoma and Northeastern State University. Oklahoma
has teams in basketball, football, arena football, baseball, soccer, hockey, and wrestling located in Oklahoma City,
Tulsa, Enid, Norman, and Lawton. The Oklahoma City Thunder of the National Basketball Association (NBA) is the state's
only major league sports franchise. The state had a team in the Women's National Basketball Association, the Tulsa
Shock, from 2010 through 2015, but the team relocated to Dallas–Fort Worth after that season and became the Dallas
Wings. Oklahoma supports teams in several minor leagues, including Minor League Baseball at the AAA and AA levels
(Oklahoma City Dodgers and Tulsa Drillers, respectively), hockey's ECHL with the Tulsa Oilers, and a number of indoor
football leagues. In the last-named sport, the state's most notable team was the Tulsa Talons, which played in the
Arena Football League until 2012, when the team was moved to San Antonio. The Oklahoma Defenders replaced the Talons
as Tulsa's only professional arena football team, playing in the CPIFL. The Oklahoma City Blue, of the NBA Development
League, relocated to Oklahoma City from Tulsa in 2014, where they were formerly known as the Tulsa 66ers. Tulsa is
the base for the Tulsa Revolution, which plays in the American Indoor Soccer League. Enid and Lawton host professional
basketball teams in the USBL and the CBA. The NBA's New Orleans Hornets became the first major league sports franchise
based in Oklahoma when the team was forced to relocate to Oklahoma City's Ford Center, now known as Chesapeake Energy
Arena, for two seasons following Hurricane Katrina in 2005. In July 2008, the Seattle SuperSonics, a franchise owned
by the Professional Basketball Club LLC, a group of Oklahoma City businessmen led by Clayton Bennett, relocated to
Oklahoma City and announced that play would begin at the Ford Center as the Oklahoma City Thunder for the 2008–09
season, becoming the state's first permanent major league franchise. Collegiate athletics are a popular draw in the
state. The state has four schools that compete at the highest level of college sports, NCAA Division I. The most
prominent are the state's two members of the Big 12 Conference, one of the so-called Power Five conferences of the
top tier of college football, Division I FBS. The University of Oklahoma and Oklahoma State University average well
over 50,000 fans attending their football games, and Oklahoma's football program ranked 12th in attendance among
American colleges in 2010, with an average of 84,738 people attending its home games. The two universities meet several
times each year in rivalry matches known as the Bedlam Series, which are some of the greatest sporting draws to the
state. Sports Illustrated magazine rates Oklahoma and Oklahoma State among the top colleges for athletics in the
nation. Two private institutions in Tulsa, the University of Tulsa and Oral Roberts University, are also Division
I members. Tulsa competes in FBS football and other sports in the American Athletic Conference, while Oral Roberts,
which does not sponsor football, is a member of The Summit League. In addition, 12 of the state's smaller colleges
and universities compete in NCAA Division II as members of four different conferences, and eight other Oklahoma institutions
participate in the NAIA, mostly within the Sooner Athletic Conference. Regular LPGA tournaments are held at Cedar
Ridge Country Club in Tulsa, and major championships for the PGA or LPGA have been played at Southern Hills Country
Club in Tulsa, Oak Tree Country Club in Oklahoma City, and Cedar Ridge Country Club in Tulsa. Rated one of the top
golf courses in the nation, Southern Hills has hosted four PGA Championships, including one in 2007, and three U.S.
Opens, the most recent in 2001. Rodeos are popular throughout the state, and Guymon, in the state's panhandle, hosts
one of the largest in the nation. The state has two primary newspapers. The Oklahoman, based in Oklahoma City, is
the largest newspaper in the state and 54th-largest in the nation by circulation, with a weekday readership of 138,493
and a Sunday readership of 202,690. The Tulsa World, the second most widely circulated newspaper in Oklahoma and
79th in the nation, holds a Sunday circulation of 132,969 and a weekday readership of 93,558. Oklahoma's first newspaper
was established in 1844, called the Cherokee Advocate, and was written in both Cherokee and English. In 2006, there
were more than 220 newspapers located in the state, including 177 with weekly publications and 48 with daily publications.
More than 12,000 miles (19,000 km) of roads make up the state's major highway skeleton, including state-operated
highways, ten turnpikes or major toll roads, and the longest drivable stretch of Route 66 in the nation. In 2008,
Interstate 44 in Oklahoma City was Oklahoma's busiest highway, with a daily traffic volume of 123,300 cars. In 2010,
the state had the nation's third highest number of bridges classified as structurally deficient, with 5,212
bridges in disrepair, including 235 National Highway System Bridges. Oklahoma's largest commercial airport is Will
Rogers World Airport in Oklahoma City, averaging a yearly passenger count of more than 3.5 million (1.7 million boardings)
in 2010. Tulsa International Airport, the state's second largest commercial airport, served more than 1.3 million
boardings in 2010. Between the two, six airlines operate in Oklahoma. In terms of traffic, R. L. Jones Jr. (Riverside)
Airport in Tulsa is the state's busiest airport, with 335,826 takeoffs and landings in 2008. In total, Oklahoma has
over 150 public-use airports. Two inland ports on rivers serve Oklahoma: the Port of Muskogee and the Tulsa Port
of Catoosa. The only port handling international cargo in the state, the Tulsa Port of Catoosa is the most inland
ocean-going port in the nation and ships over two million tons of cargo each year. Both ports are located on the
McClellan-Kerr Arkansas River Navigation System, which connects barge traffic from Tulsa and Muskogee to the Mississippi
River via the Verdigris and Arkansas rivers, contributing to one of the busiest waterways in the world. Oklahoma's
judicial branch consists of the Oklahoma Supreme Court, the Oklahoma Court of Criminal Appeals, and 77 District Courts,
each serving one county. The Oklahoma judiciary also contains two independent courts: a Court of Impeachment
and the Oklahoma Court on the Judiciary. Oklahoma has two courts of last resort: the state Supreme Court hears civil
cases, and the state Court of Criminal Appeals hears criminal cases (this split system exists only in Oklahoma and
neighboring Texas). Judges of those two courts, as well as the Court of Civil Appeals are appointed by the Governor
upon the recommendation of the state Judicial Nominating Commission, and are subject to a non-partisan retention
vote on a six-year rotating schedule. The executive branch consists of the Governor, their staff, and other elected
officials. The principal head of government, the Governor is the chief executive of the Oklahoma executive branch,
serving as the ex officio Commander-in-Chief of the Oklahoma National Guard when not called into Federal use and
reserving the power to veto bills passed through the Legislature. The responsibilities of the Executive branch include
submitting the budget, ensuring that state laws are enforced, and ensuring peace within the state is preserved. The
state is divided into 77 counties that govern locally, each headed by a three-member council of elected commissioners,
a tax assessor, clerk, court clerk, treasurer, and sheriff. While each municipality operates as a separate and independent
local government with executive, legislative and judicial power, county governments maintain jurisdiction over both
incorporated cities and non-incorporated areas within their boundaries, and have executive power but no legislative
or judicial power. Both county and municipal governments collect taxes, employ a separate police force, hold elections,
and operate emergency response services within their jurisdiction. Other local government units include school districts,
technology center districts, community college districts, rural fire departments, rural water districts, and other
special use districts. Thirty-nine Native American tribal governments are based in Oklahoma, each holding limited
powers within designated areas. While the Indian reservations typical of most of the United States are not present in
Oklahoma, tribal governments hold land granted during the Indian Territory era, but with limited jurisdiction and
no control over state governing bodies such as municipalities and counties. Tribal governments are recognized by
the United States as quasi-sovereign entities with executive, judicial, and legislative powers over tribal members
and functions, but are subject to the authority of the United States Congress to revoke or withhold certain powers.
The tribal governments are required to submit a constitution and any subsequent amendments to the United States Congress
for approval. After the 1948 election, the state turned firmly Republican. Although registered Republicans were a
minority in the state until 2015, starting in 1952, Oklahoma has been carried by Republican presidential candidates
in all but one election (1964). This is not to say that every election has been a landslide for Republicans: Jimmy
Carter lost the state by less than 1.5% in 1976, while Michael Dukakis and Bill Clinton both won 40% or more of the
state's popular vote in 1988 and 1996 respectively. Al Gore in 2000, though, was the last Democrat to even win any
counties in the state. Oklahoma was the only state where Barack Obama failed to carry any of its counties in both
2008 and 2012. Following the 2000 census, the Oklahoma delegation to the U.S. House of Representatives was reduced
from six to five representatives, each serving one congressional district. For the 112th Congress (2011–2013), there
were no changes in party strength, and the delegation included four Republicans and one Democrat. In the 112th Congress,
Oklahoma's U.S. senators were Republicans Jim Inhofe and Tom Coburn, and its U.S. Representatives were John Sullivan
(R-OK-1), Dan Boren (D-OK-2), Frank D. Lucas (R-OK-3), Tom Cole (R-OK-4), and James Lankford (R-OK-5). Oklahoma had
598 incorporated places in 2010, including four cities over 100,000 in population and 43 over 10,000. Two of the
fifty largest cities in the United States are located in Oklahoma, Oklahoma City and Tulsa, and 65 percent of Oklahomans
live within their metropolitan areas, or spheres of economic and social influence defined by the United States Census
Bureau as a metropolitan statistical area. Oklahoma City, the state's capital and largest city, had the largest metropolitan
area in the state in 2010, with 1,252,987 people, and the metropolitan area of Tulsa had 937,478 residents. Between
2000 and 2010, the cities that led the state in population growth were Blanchard (172.4%), Elgin (78.2%), Jenks (77.0%),
Piedmont (56.7%), Bixby (56.6%), and Owasso (56.3%). In descending order of population, Oklahoma's largest cities
in 2010 were: Oklahoma City (579,999, +14.6%), Tulsa (391,906, −0.3%), Norman (110,925, +15.9%), Broken Arrow (98,850,
+32.0%), Lawton (96,867, +4.4%), Edmond (81,405, +19.2%), Moore (55,081, +33.9%), Midwest City (54,371, +0.5%), Enid
(49,379, +5.0%), and Stillwater (45,688, +17.0%). Of the state's ten largest cities, three are outside the metropolitan
areas of Oklahoma City and Tulsa, and only Lawton has a metropolitan statistical area of its own as designated by
the United States Census Bureau, though the metropolitan statistical area of Fort Smith, Arkansas extends into the
state. State law codifies Oklahoma's state emblems and honorary positions; the Oklahoma Senate or House of Representatives
may adopt resolutions designating others for special events and to benefit organizations. Currently the State Senate
is waiting to vote on a change to the state's motto. The House passed HCR 1024, which would change the state motto
from "Labor Omnia Vincit" to "Oklahoma-In God We Trust!" The author of the resolution stated that a constituent researched
the Oklahoma Constitution and found no "official" vote regarding "Labor Omnia Vincit", therefore opening the door
for an entirely new motto.
The history of India includes the prehistoric settlements and societies in the Indian subcontinent; the blending of the Indus
Valley Civilization and Indo-Aryan culture into the Vedic Civilization; the development of Hinduism as a synthesis
of various Indian cultures and traditions; the rise of the Śramaṇa movement; the decline of Śrauta sacrifices and
the birth of the initiatory traditions of Jainism, Buddhism, Shaivism, Vaishnavism and Shaktism; the onset of a succession
of powerful dynasties and empires for more than two millennia throughout various geographic areas of the subcontinent,
including the growth of Muslim dynasties during the Medieval period intertwined with Hindu powers; the advent of
European traders resulting in the establishment of the British rule; and the subsequent independence movement that
led to the Partition of India and the creation of the Republic of India. Evidence of anatomically modern humans in
the Indian subcontinent is recorded from as long as 75,000 years ago, with earlier hominids, including Homo erectus,
from about 500,000 years ago. The Indus Valley Civilization, which spread and flourished in the northwestern part
of the Indian subcontinent from c. 3200 to 1300 BCE, was the first major civilization in South Asia. A sophisticated
and technologically advanced urban culture developed in the Mature Harappan period, from 2600 to 1900 BCE. This civilization
collapsed at the start of the second millennium BCE and was later followed by the Iron Age Vedic Civilization, which
extended over much of the Indo-Gangetic plain and which witnessed the rise of major polities known as the Mahajanapadas.
In one of these kingdoms, Magadha, Mahavira and Gautama Buddha propagated their Shramanic philosophies during the
fifth and sixth century BCE. Most of the subcontinent was conquered by the Maurya Empire during the 4th and 3rd centuries
BCE. From the 3rd century BC onwards Prakrit and Pali literature in the north and the Sangam literature in southern
India started to flourish. Wootz steel originated in south India in the 3rd century BC and was exported to foreign
countries. Various parts of India were ruled by numerous dynasties for the next 1,500 years, among which the Gupta
Empire stands out. This period, witnessing a Hindu religious and intellectual resurgence, is known as the classical
or "Golden Age of India". During this period, aspects of Indian civilization, administration, culture, and religion
(Hinduism and Buddhism) spread to much of Asia, while kingdoms in southern India had maritime business links with
the Roman Empire from around 77 CE. Indian cultural influence spread over many parts of Southeast Asia which led
to the establishment of Indianized kingdoms in Southeast Asia (Greater India). The most significant event between
the 7th and 11th century was the Tripartite struggle centered on Kannauj that lasted for more than two centuries
between the Pala Empire, Rashtrakuta Empire, and Gurjara Pratihara Empire. Southern India was ruled by the Chalukya,
Chola, Pallava, Chera, Pandyan, and Western Chalukya Empires. The seventh century also saw the advent of Islam as
a political power, though as a fringe, in the western part of the subcontinent, in modern-day Pakistan. The Chola
dynasty conquered southern India and successfully invaded parts of Southeast Asia, Sri Lanka, Maldives and Bengal
in the 11th century. In the early medieval period, Indian mathematics influenced the development of mathematics and
astronomy in the Arab world, and the Hindu numerals were introduced there. Muslim rule started in parts of north India in the 13th
century when the Delhi Sultanate was founded in 1206 CE by the Central Asian Turks. The Delhi Sultanate ruled the
major part of northern India in the early 14th century, but declined in the late 14th century when several powerful
Hindu states such as the Vijayanagara Empire, Gajapati Kingdom, Ahom Kingdom, as well as Rajput dynasties and states,
such as Mewar dynasty, emerged. The 15th century saw the emergence of Sikhism. In the 16th century, Mughals came
from Central Asia and gradually covered most of India. The Mughal Empire suffered a gradual decline in the early
18th century, which provided opportunities for the Maratha Empire, Sikh Empire and Mysore Kingdom to exercise control
over large areas of the subcontinent. From the late 18th century to the mid-19th century, large areas of India were
annexed by the British East India Company. Dissatisfaction with Company rule led to the Indian
Rebellion of 1857, after which the British provinces of India were directly administered by the British Crown and
witnessed a period of both rapid development of infrastructure and economic stagnation. During the first half of
the 20th century, a nationwide struggle for independence was launched with the leading party involved being the Indian
National Congress which was later joined by other organizations. The subcontinent gained independence from the United
Kingdom in 1947, after the British provinces were partitioned into the dominions of India and Pakistan and the princely
states all acceded to one of the new states. Romila Thapar notes that the division into Hindu-Muslim-British periods
of Indian history gives too much weight to "ruling dynasties and foreign invasions", neglecting the social-economic
history which often showed a strong continuity. The division into Ancient-Medieval-Modern periods overlooks the fact
that the Muslim conquests occurred gradually over several centuries, while the south was
never completely conquered. According to Thapar, a periodisation could also be based on "significant social and economic
changes", which are not strictly related to a change of ruling powers.[note 1] Isolated remains of Homo erectus in
Hathnora in the Narmada Valley in central India indicate that India might have been inhabited since at least the
Middle Pleistocene era, somewhere between 500,000 and 200,000 years ago. Tools crafted by proto-humans that have
been dated back two million years have been discovered in the northwestern part of the subcontinent. The ancient
history of the region includes some of South Asia's oldest settlements and some of its major civilisations. The earliest
archaeological site in the subcontinent is the palaeolithic hominid site in the Soan River valley. Soanian sites
are found in the Sivalik region across what are now India, Pakistan, and Nepal. The Mesolithic period in the Indian
subcontinent was followed by the Neolithic period, when more extensive settlement of the subcontinent occurred after
the end of the last Ice Age approximately 12,000 years ago. The first confirmed semipermanent settlements appeared
9,000 years ago in the Bhimbetka rock shelters in modern Madhya Pradesh, India. Early Neolithic culture in South
Asia is represented by the Bhirrana findings (7500 BCE) in Haryana, India, and the Mehrgarh findings (9000–7000 BCE) in
Balochistan, Pakistan. The Mature Indus civilisation flourished from about 2600 to 1900 BCE, marking the beginning
of urban civilisation on the subcontinent. The civilisation included urban centres such as Dholavira, Kalibangan,
Ropar, Rakhigarhi, and Lothal in modern-day India, as well as Harappa, Ganeriwala, and Mohenjo-daro in modern-day
Pakistan. The civilisation is noted for its cities built of brick, roadside drainage system, and multistoreyed houses
and is thought to have had some kind of municipal organization. The Vedic period is named after the Indo-Aryan culture
of north-west India, although other parts of India had a distinct cultural identity during this period. The Vedic
culture is described in the texts of Vedas, still sacred to Hindus, which were orally composed in Vedic Sanskrit.
The Vedas are some of the oldest extant texts in India. The Vedic period, lasting from about 1750 to 500 BCE,
contributed the foundations of several cultural aspects of the Indian subcontinent. In terms of culture, many regions
of the subcontinent transitioned from the Chalcolithic to the Iron Age in this period. At the end of the Rigvedic
period, the Aryan society began to expand from the northwestern region of the Indian subcontinent, into the western
Ganges plain. It became increasingly agricultural and was socially organised around the hierarchy of the four varnas,
or social classes. This social structure was characterized both by syncretism with the native cultures of northern
India and, eventually, by the exclusion of indigenous peoples through the labelling of their occupations as impure. During this
period, many of the previous small tribal units and chiefdoms began to coalesce into monarchical, state-level polities.
The Kuru kingdom was the first state-level society of the Vedic period, corresponding to the beginning of the Iron
Age in northwestern India, around 1200–800 BCE, as well as with the composition of the Atharvaveda (the first Indian
text to mention iron, as śyāma ayas, literally "black metal"). The Kuru state organized the Vedic hymns into collections,
and developed the orthodox srauta ritual to uphold the social order. When the Kuru kingdom declined, the center of
Vedic culture shifted to their eastern neighbours, the Panchala kingdom. The archaeological Painted Grey Ware culture,
which flourished in the Haryana and western Uttar Pradesh regions of northern India from about 1100 to 600 BCE, is
believed to correspond to the Kuru and Panchala kingdoms. In addition to the Vedas, the principal texts of Hinduism,
the core themes of the Sanskrit epics Ramayana and Mahabharata are said to have their ultimate origins during this
period. The Mahabharata remains, today, the longest single poem in the world. Historians formerly postulated an "epic
age" as the milieu of these two epic poems, but now recognize that the texts (which are both familiar with each other)
went through multiple stages of development over centuries. For instance, the Mahabharata may have been based on
a small-scale conflict (possibly about 1000 BCE) which was eventually "transformed into a gigantic epic war by bards
and poets". There is no conclusive proof from archaeology as to whether the specific events of the Mahabharata have
any historical basis. The existing texts of these epics are believed to belong to the post-Vedic age, between c.
400 BCE and 400 CE. Some scholars have even attempted to date the events using methods of archaeoastronomy which have produced,
depending on which passages are chosen and how they are interpreted, estimated dates ranging up to mid 2nd millennium
BCE. During the time between 800 and 200 BCE the Shramana-movement formed, from which originated Jainism and Buddhism.
In the same period the first Upanishads were written. After 500 BCE, the so-called "Second urbanization" started,
with new urban settlements arising on the Ganges plain, especially the Central Ganges plain. The Central Ganges Plain,
where Magadha gained prominence, forming the base of the Mauryan Empire, was a distinct cultural area, with new states
arising after 500 BC[web 1] during the so-called "Second urbanization".[note 3] It was influenced by the Vedic culture,
but differed markedly from the Kuru-Panchala region. It "was the area of the earliest known cultivation of rice in
South Asia and by 1800 BC was the location of an advanced neolithic population associated with the sites of Chirand
and Chechar". In this region the Shramanic movements flourished, and Jainism and Buddhism originated. In the later
Vedic Age, a number of small kingdoms or city states had covered the subcontinent, many mentioned in Vedic, early
Buddhist and Jaina literature as far back as 500 BCE. Sixteen monarchies and "republics" known as the Mahajanapadas—Kashi,
Kosala, Anga, Magadha, Vajji (or Vriji), Malla, Chedi, Vatsa (or Vamsa), Kuru, Panchala, Matsya (or Machcha), Shurasena,
Assaka, Avanti, Gandhara, and Kamboja—stretched across the Indo-Gangetic Plain from modern-day Afghanistan to Bengal
and Maharashtra. This period saw the second major rise of urbanism in India after the Indus Valley Civilisation.
Many smaller clans mentioned within early literature seem to have been present across the rest of the subcontinent.
Some of these states had hereditary kings; others elected their rulers. Early "republics" such as the Vajji (or Vriji)
confederation centered in the city of Vaishali, existed as early as the 6th century BCE and persisted in some areas
until the 4th century CE. The educated speech at that time was Sanskrit, while the languages of the general population
of northern India are referred to as Prakrits. Many of the sixteen kingdoms had coalesced to four major ones by 500/400
BCE, by the time of Gautama Buddha. These four were Vatsa, Avanti, Kosala, and Magadha. The life of Gautama Buddha
was mainly associated with these four kingdoms. The 7th and 6th centuries BC witnessed the composition of the earliest
Upanishads. Upanishads form the theoretical basis of classical Hinduism and are known as Vedanta (conclusion of the
Vedas). The older Upanishads launched attacks of increasing intensity on the ritual. Anyone who worships a divinity
other than the Self is called a domestic animal of the gods in the Brihadaranyaka Upanishad. The Mundaka launches
the most scathing attack on the ritual by comparing those who value sacrifice with an unsafe boat that is endlessly
overtaken by old age and death. Increasing urbanisation of India in 7th and 6th centuries BCE led to the rise of
new ascetic or shramana movements which challenged the orthodoxy of rituals. Mahavira (c. 549–477 BC), proponent
of Jainism, and Buddha (c. 563–483 BC), founder of Buddhism, were the most prominent icons of this movement. Shramana
gave rise to the concepts of the cycle of birth and death (samsara) and of liberation.
Buddha found a Middle Way that ameliorated the extreme asceticism found in the Sramana religions. Magadha (Sanskrit:
मगध) formed one of the sixteen Mahā-Janapadas (Sanskrit: "Great Countries") or kingdoms in ancient India. The core
of the kingdom was the area of Bihar south of the Ganges; its first capital was Rajagriha (modern Rajgir) then Pataliputra
(modern Patna). Magadha expanded to include most of Bihar and Bengal with the conquest of Licchavi and Anga respectively,
followed by much of eastern Uttar Pradesh and Orissa. The ancient kingdom of Magadha is heavily mentioned in Jain
and Buddhist texts. It is also mentioned in the Ramayana, Mahabharata, Puranas. A state of Magadha, possibly a tribal
kingdom, is recorded in Vedic texts much earlier in time than 600 BC. The Magadha Empire had notable rulers such as Bimbisara
and Ajatashatru. The earliest reference to the Magadha people occurs in the Atharva-Veda where they are found listed
along with the Angas, Gandharis, and Mujavats. Magadha played an important role in the development of Jainism and
Buddhism, and two of India's greatest empires, the Maurya Empire and Gupta Empire, originated from Magadha. These
empires saw advancements in ancient India's science, mathematics, astronomy, religion, and philosophy and were considered
the Indian "Golden Age". The Magadha kingdom included republican communities such as the community of Rajakumara.
Villages had their own assemblies under their local chiefs called Gramakas. Their administrations were divided into
executive, judicial, and military functions. In 530 BC Cyrus the Great, King of the Persian Achaemenid Empire, crossed
the Hindu-Kush mountains to seek tribute from the tribes of Kamboja, Gandhara and the trans-Indus region (modern
Afghanistan and Pakistan). By 520 BC, during the reign of Darius I of Persia, much of the northwestern subcontinent
(present-day eastern Afghanistan and Pakistan) came under the rule of the Persian Achaemenid Empire, as part of the
far easternmost territories. The area remained under Persian control for two centuries. During this time India supplied
mercenaries to the Persian army then fighting in Greece. By 326 BC, Alexander the Great had conquered Asia Minor
and the Achaemenid Empire and had reached the northwest frontiers of the Indian subcontinent. There he defeated King
Porus in the Battle of the Hydaspes (near modern-day Jhelum, Pakistan) and conquered much of the Punjab. Alexander's
march east put him in confrontation with the Nanda Empire of Magadha and the Gangaridai of Bengal. His army, exhausted
and frightened by the prospect of facing larger Indian armies at the Ganges River, mutinied at the Hyphasis (modern
Beas River) and refused to march further east. Alexander, after meeting with his officer Coenus and learning
about the might of the Nanda Empire, was convinced that it was better to return. The Maurya Empire (322–185 BCE) was
the first empire to unify India into one state, and was the largest on the Indian subcontinent. At its greatest extent,
the Mauryan Empire stretched to the north up to the natural boundaries of the Himalayas and to the east into what
is now Assam. To the west, it reached beyond modern Pakistan, to the Hindu Kush mountains in what is now Afghanistan.
The empire was established by Chandragupta Maurya in Magadha (in modern Bihar) when he overthrew the Nanda Dynasty.
Chandragupta's son Bindusara succeeded to the throne around 297 BC. By the time he died in c. 272 BC, a large part
of the subcontinent was under Mauryan suzerainty. However, the region of Kalinga (around modern day Odisha) remained
outside Mauryan control, perhaps interfering with their trade with the south. The Arthashastra and the Edicts of
Ashoka are the primary written records of the Mauryan times. Archaeologically, this period falls into the era of
Northern Black Polished Ware (NBPW). The Mauryan Empire was based on a modern and efficient economy and society.
However, the sale of merchandise was closely regulated by the government. Although there was no banking in the Mauryan
society, usury was customary. A significant number of written records on slavery survive, suggesting that it was prevalent.
During this period, a high-quality steel called Wootz steel was developed in south India and was later exported
to China and Arabia. During the Sangam period, Tamil literature flourished from the 3rd century BCE to the 4th century
CE. During this period, three Tamil dynasties, the Chera, Chola, and Pandyan, ruled parts of
southern India. The Sangam literature deals with the history, politics, wars and culture of the Tamil people of this
period. The scholars of the Sangam period rose from among the common people who sought the patronage of the Tamil
Kings but who mainly wrote about the common people and their concerns. Unlike Sanskrit writers who were mostly Brahmins,
Sangam writers came from diverse classes and social backgrounds and were mostly non-Brahmins. They belonged to different
faiths and professions, including farmers, artisans, merchants, monks, priests and even princes, and quite a few of them were
women. The Śātavāhana Empire was a royal Indian dynasty based in Amaravati in Andhra Pradesh as well as Junnar
(Pune) and Prathisthan (Paithan) in Maharashtra. The territory of the empire covered much of India from 230 BCE onward.
Sātavāhanas started out as feudatories to the Mauryan dynasty, but declared independence with the Mauryan decline. They are
known for their patronage of Hinduism and Buddhism which resulted in Buddhist monuments from Ellora (a UNESCO World
Heritage Site) to Amaravati. The Sātavāhanas were one of the first Indian states to issue coins struck with the images
of their rulers. They formed a cultural bridge and played a vital role in trade as well as the transfer of ideas
and culture to and from the Indo-Gangetic Plain to the southern tip of India. They had to compete with the Shunga
Empire and then the Kanva dynasty of Magadha to establish their rule. Later, they played a crucial role in protecting
a huge part of India against foreign invaders like the Sakas, Yavanas and Pahlavas. In particular, their struggles
with the Western Kshatrapas went on for a long time. Notable rulers of the Satavahana Dynasty, Gautamiputra Satakarni
and Sri Yajna Sātakarni, were able to defeat foreign invaders like the Western Kshatrapas and to stop their expansion.
In the 3rd century CE the empire was split into smaller states. The Shunga (or Sunga) Empire was an ancient
Indian dynasty from Magadha that controlled vast areas of the Indian subcontinent from around 187 to 78 BCE. The
dynasty was established by Pushyamitra Shunga, after the fall of the Maurya Empire. Its capital was Pataliputra,
but later emperors such as Bhagabhadra also held court at Besnagar, modern Vidisha in Eastern Malwa. Pushyamitra
Shunga ruled for 36 years and was succeeded by his son Agnimitra. There were ten Shunga rulers. The empire is noted
for its numerous wars with both foreign and indigenous powers. They fought battles with the Kalingas, Satavahanas,
the Indo-Greeks, and possibly the Panchalas and Mathuras. Art, education, philosophy, and other forms of learning
flowered during this period including small terracotta images, larger stone sculptures, and architectural monuments
such as the Stupa at Bharhut, and the renowned Great Stupa at Sanchi. The Shunga rulers helped to establish the tradition
of royal sponsorship of learning and art. The script used by the empire was a variant of Brahmi and was used to write
the Sanskrit language. The Shunga Empire played an imperative role in patronizing Indian culture at a time when some
of the most important developments in Hindu thought were taking place. During the reign of Khārabēḷa, the Chedi dynasty
of Kaḷinga ascended to eminence and restored the lost power and glory of Kaḷinga, which had been subdued since the
devastating war with Ashoka. Kaḷingan military might was reinstated by Khārabēḷa: under Khārabēḷa's generalship,
the Kaḷinga state had a formidable maritime reach with trade routes linking it to the then-Simhala (Sri Lanka), Burma
(Myanmar), Siam (Thailand), Vietnam, Kamboja (Cambodia), Malaysia, Borneo, Bali, Samudra (Sumatra) and Jabadwipa
(Java). Khārabēḷa led many successful campaigns against the states of Magadha, Anga, and the Satavahanas, as far as
the southernmost regions of the Pandyan Empire (modern Tamil Nadu). The Kushan Empire expanded out of what is now Afghanistan into
the northwest of the subcontinent under the leadership of their first emperor, Kujula Kadphises, about the middle
of the 1st century CE. They came from an Indo-European-speaking Central Asian tribe called the Yuezhi, a branch
of which was known as the Kushans. By the time of his grandson, Kanishka, they had conquered most of northern India,
at least as far as Saketa and Pataliputra, in the middle Ganges Valley, and probably as far as the Bay of Bengal.
Classical India refers to the period when much of the Indian subcontinent was reunited under the Gupta Empire (c.
320–550 CE). This period has been called the Golden Age of India and was marked by extensive achievements in science,
technology, engineering, art, dialectic, literature, logic, mathematics, astronomy, religion, and philosophy that
crystallized the elements of what is generally known as Hindu culture. The Hindu-Arabic numerals, a positional numeral
system, originated in India and were later transmitted to the West through the Arabs. Early Hindu numerals had only
nine symbols, until 600 to 800 CE, when a symbol for zero was developed for the numeral system. The peace and prosperity
created under the leadership of the Guptas enabled the pursuit of scientific and artistic endeavors in India. The high points
of this cultural creativity are magnificent architecture, sculpture, and painting. The Gupta period produced scholars
such as Kalidasa, Aryabhata, Varahamihira, Vishnu Sharma, and Vatsyayana who made great advancements in many academic
fields. The Gupta period marked a watershed of Indian culture: the Guptas performed Vedic sacrifices to legitimize
their rule, but they also patronized Buddhism, which continued to provide an alternative to Brahmanical orthodoxy.
The military exploits of the first three rulers – Chandragupta I, Samudragupta, and Chandragupta II – brought much
of India under their leadership. Science and political administration reached new heights during the Gupta era. Strong
trade ties also made the region an important cultural centre and established it as a base that would influence nearby
kingdoms and regions in Burma, Sri Lanka, Maritime Southeast Asia, and Indochina. For these reasons, the historian
Dr. Barnett remarked on the significance of this era. Kadamba (345–525 CE) was an ancient royal dynasty of Karnataka, India that ruled northern Karnataka and
the Konkan from Banavasi in present-day Uttara Kannada district. At the peak of their power under King Kakushtavarma,
the Kadambas of Banavasi ruled large parts of modern Karnataka state. The dynasty, founded by Mayurasharma in 345 CE, later showed the potential of developing to imperial proportions, as indicated by the titles and epithets assumed by its rulers. King Mayurasharma defeated the armies of the Pallavas of Kanchi, possibly with the help of some native tribes. Kadamba fame reached its peak during the rule of Kakusthavarma,
a notable ruler with whom even the kings of the Gupta Dynasty of northern India cultivated marital alliances. The Kadambas
were contemporaries of the Western Ganga Dynasty and together they formed the earliest native kingdoms to rule the
land with absolute autonomy. The dynasty later continued to rule as a feudatory of larger Kannada empires, the Chalukya
and the Rashtrakuta empires, for over five hundred years during which time they branched into minor dynasties known
as the Kadambas of Goa, Kadambas of Halasi and Kadambas of Hangal. The Hephthalites (or Ephthalites), also known
as the White Huns, were a nomadic confederation in Central Asia during the late antiquity period. The White Huns
established themselves in modern-day Afghanistan by the first half of the 5th century. Led by the Hun military leader
Toramana, they overran the northern region of Pakistan and North India. Toramana's son Mihirakula, a Saivite Hindu, advanced as far as Pataliputra in the east and Gwalior in central India. Hiuen Tsiang narrates Mihirakula's merciless persecution of Buddhists and destruction of monasteries, though the authenticity of this account is disputed. The Huns were defeated by the Indian kings Yasodharman of Malwa and Narasimhagupta in the 6th century.
Some of them were driven out of India and others were assimilated into Indian society. After the downfall of the Gupta Empire in the middle of the 6th century, North India reverted to small republics and small monarchical states ruled by Gupta rulers. Harsha, a convert to Buddhism, united the small republics from Punjab to central India, and their representatives crowned him king at an assembly in April 606, giving him the title of Maharaja when he was merely 16 years old. Harsha belonged to Kannauj. He brought all of northern India under his control.
The peace and prosperity that prevailed made his court a center of cosmopolitanism, attracting scholars, artists
and religious visitors from far and wide. The Chinese traveler Xuan Zang visited the court of Harsha and wrote a
very favorable account of him, praising his justice and generosity. From the fifth century to the thirteenth, Śrauta
sacrifices declined, and initiatory traditions of Buddhism, Jainism or more commonly Shaivism, Vaishnavism and Shaktism
expanded in royal courts. This period produced some of India's finest art, considered the epitome of classical development, and saw the development of the main spiritual and philosophical systems that continued within Hinduism, Buddhism and Jainism. Emperor Harsha of Kannauj succeeded in reuniting northern India during his reign in the 7th century, after
the collapse of the Gupta dynasty. His empire collapsed after his death. Ronald Inden writes that by the 8th century
CE symbols of Hindu gods "replaced the Buddha at the imperial centre and pinnacle of the cosmo-political system,
the image or symbol of the Hindu god comes to be housed in a monumental temple and given increasingly elaborate imperial-style
puja worship". Although Buddhism did not disappear from India for several centuries after the eighth, royal proclivities
for the cults of Vishnu and Shiva weakened Buddhism's position within the sociopolitical context and helped make
possible its decline. From the 8th to the 10th century, three dynasties contested for control of northern India:
the Gurjara Pratiharas of Malwa, the Palas of Bengal, and the Rashtrakutas of the Deccan. The Sena dynasty would
later assume control of the Pala Empire, and the Gurjara Pratiharas fragmented into various states. These were the
first of the Rajput states. The first recorded Rajput kingdoms emerged in Rajasthan in the 6th century, and small
Rajput dynasties later ruled much of northern India. One Gurjar Rajput of the Chauhan clan, Prithvi Raj Chauhan,
was known for bloody conflicts against the advancing Turkic sultanates. The Chola empire emerged as a major power
during the reign of Raja Raja Chola I and Rajendra Chola I who successfully invaded parts of Southeast Asia and Sri
Lanka in the 11th century. Lalitaditya Muktapida (r. 724 CE–760 CE) was an emperor of the Kashmiri Karkoṭa dynasty,
which exercised influence in northwestern India from 625 CE until 1003, and was followed by the Lohara dynasty. He is
known primarily for his successful battles against the Muslim and Tibetan advances into Kashmiri-dominated regions.
Kalhana in his Rajatarangini credits king Lalitaditya with leading an aggressive military campaign in Northern India
and Central Asia. He broke into the Uttarapatha and defeated the rebellious tribes of the Kambojas, Tukharas (Turks
in Turkmenistan and Tocharians in Badakhshan), Bhautas (Tibetans in Baltistan and Tibet) and Daradas (Dards). His
campaign then led him to subjugate the kingdoms of Pragjyotisha, Strirajya and the Uttarakurus. The Shahi dynasty
ruled portions of eastern Afghanistan, northern Pakistan, and Kashmir from the mid-7th century to the early 11th
century. The Chalukya Empire (Kannada: ಚಾಲುಕ್ಯರು [tʃaːɭukjə]) was an Indian royal dynasty that ruled large parts
of southern and central India between the 6th and the 12th centuries. During this period, they ruled as three related
yet individual dynasties. The earliest dynasty, known as the "Badami Chalukyas", ruled from Vatapi (modern Badami)
from the middle of the 6th century. The Badami Chalukyas began to assert their independence at the decline of the
Kadamba kingdom of Banavasi and rapidly rose to prominence during the reign of Pulakeshin II. The rule of the Chalukyas
marks an important milestone in the history of South India and a golden age in the history of Karnataka. The political
atmosphere in South India shifted from smaller kingdoms to large empires with the ascendancy of Badami Chalukyas.
A Southern India-based kingdom took control and consolidated the entire region between the Kaveri and the Narmada
rivers. The rise of this empire saw the birth of efficient administration, overseas trade and commerce and the development
of a new style of architecture called "Chalukyan architecture". The Chalukya dynasty ruled parts of southern and central
India from Badami in Karnataka between 550 and 750, and then again from Kalyani between 970 and 1190. Founded by
Dantidurga around 753, the Rashtrakuta Empire ruled from its capital at Manyakheta for almost two centuries. At its
peak, the Rashtrakutas ruled from the Ganges River and Yamuna River doab in the north to Cape Comorin in the south,
a fruitful time of political expansion, architectural achievements and famous literary contributions.
The early kings of this dynasty were Hindu but the later kings were strongly influenced by Jainism. Govinda III and
Amoghavarsha were the most famous of the long line of able administrators produced by the dynasty. Amoghavarsha,
who ruled for 64 years, was also an author and wrote Kavirajamarga, the earliest known Kannada work on poetics. Architecture
reached a milestone in the Dravidian style, the finest example of which is seen in the Kailasanath Temple at Ellora.
Other important contributions are the sculptures of Elephanta Caves in modern Maharashtra as well as the Kashivishvanatha
temple and the Jain Narayana temple at Pattadakal in modern Karnataka, all of which are UNESCO World Heritage Sites.
The Arab traveler Suleiman described the Rashtrakuta Empire as one of the four great Empires of the world. The Rashtrakuta
period marked the beginning of the golden age of southern Indian mathematics. The great south Indian mathematician
Mahāvīra lived in the Rashtrakuta Empire and his text had a huge impact on the medieval south Indian
mathematicians who lived after him. The Rashtrakuta rulers also patronised men of letters, who wrote in a variety
of languages from Sanskrit to the Apabhraṃśas. The Pala Empire (Bengali: পাল সাম্রাজ্য Pal Samrajyô) flourished during
the Classical period of India, and may be dated to 750–1174 CE. Founded by Gopala I, it was ruled by a Buddhist
dynasty from Bengal in the eastern region of the Indian subcontinent. Though the Palas were followers of the Mahayana
and Tantric schools of Buddhism, they also patronised Shaivism and Vaishnavism. The morpheme Pala, meaning "protector",
was used as an ending for the names of all the Pala monarchs. The empire reached its peak under Dharmapala and Devapala.
Dharmapala is believed to have conquered Kanauj and extended his sway up to the farthest limits of India in the northwest.
The Pala Empire can be considered as the golden era of Bengal in many ways. Dharmapala founded the Vikramashila and
revived Nalanda, considered one of the first great universities in recorded history. Nalanda reached its height under
the patronage of the Pala Empire. The Palas also built many viharas. They maintained close cultural and commercial
ties with countries of Southeast Asia and Tibet. Sea trade added greatly to the prosperity of the Pala kingdom. The
Arab merchant Suleiman noted the enormous size of the Pala army in his memoirs. The medieval Cholas rose to prominence during
the middle of the 9th century C.E. and established the greatest empire South India had seen. They successfully united
South India under their rule and, through their naval strength, extended their influence in Southeast Asian
countries such as Srivijaya. Under Rajaraja Chola I and his successors Rajendra Chola I, Rajadhiraja Chola, Virarajendra
Chola and Kulothunga Chola I the dynasty became a military, economic and cultural power in South Asia and South-East
Asia. Rajendra Chola I's navies went even further, occupying the sea coasts from Burma to Vietnam, the Andaman and
Nicobar Islands, the Lakshadweep (Laccadive) islands, Sumatra, and the Malay Peninsula in Southeast Asia and the
Pegu islands. The power of the new empire was proclaimed to the eastern world by the expedition to the Ganges which
Rajendra Chola I undertook and by the occupation of cities of the maritime empire of Srivijaya in Southeast Asia,
as well as by the repeated embassies to China. They dominated the political affairs of Sri Lanka for over two centuries
through repeated invasions and occupation. They also had continuing trade contacts with the Arabs in the west and
with the Chinese empire in the east. Rajaraja Chola I and his equally distinguished son Rajendra Chola I gave political
unity to the whole of Southern India and established the Chola Empire as a respected sea power. Under the Cholas,
South India reached new heights of excellence in art, religion and literature. In all of these spheres, the Chola
period marked the culmination of movements that had begun in an earlier age under the Pallavas. Monumental architecture
in the form of majestic temples and sculpture in stone and bronze reached a finesse never before achieved in India.
The Western Chalukya Empire (Kannada: ಪಶ್ಚಿಮ ಚಾಲುಕ್ಯ ಸಾಮ್ರಾಜ್ಯ) ruled most of the western Deccan, South India, between
the 10th and 12th centuries. Vast areas between the Narmada River in the north and Kaveri River in the south came
under Chalukya control. During this period the other major ruling families of the Deccan, the Hoysalas, the Seuna
Yadavas of Devagiri, the Kakatiya dynasty and the Southern Kalachuri, were subordinates of the Western Chalukyas
and gained their independence only when the power of the Chalukyas waned during the latter half of the 12th century.
The Western Chalukyas developed an architectural style known today as a transitional style, an architectural link
between the style of the early Chalukya dynasty and that of the later Hoysala empire. Most of its monuments are in
the districts bordering the Tungabhadra River in central Karnataka. Well known examples are the Kasivisvesvara Temple
at Lakkundi, the Mallikarjuna Temple at Kuruvatti, the Kallesvara Temple at Bagali and the Mahadeva Temple at Itagi.
This was an important period in the development of fine arts in Southern India, especially in literature, as the Western Chalukya kings encouraged writers in the native Kannada language and in Sanskrit, such as the philosopher and statesman Basava and the great mathematician Bhāskara II. Early Islamic literature indicates that the conquest of India
was one of the very early ambitions of the Muslims, though it was recognized as a particularly difficult one. After
conquering Persia, the Arab Umayyad Caliphate incorporated parts of what are now Afghanistan and Pakistan around
720. The book Chach Nama chronicles the Chacha Dynasty's period, from the demise of the Rai Dynasty and the ascent of Chach of Alor to the throne, down to the Arab conquest in the early 8th century AD, when Muhammad bin Qasim defeated the last Hindu monarch of Sindh, Raja Dahir. In 712, the Arab Muslim general Muhammad bin Qasim conquered
most of the Indus region in modern-day Pakistan for the Umayyad Empire, incorporating it as the "As-Sindh" province
with its capital at Al-Mansurah, 72 km (45 mi) north of modern Hyderabad in Sindh, Pakistan. After several incursions,
the Hindu kings east of the Indus defeated the Arabs at the Battle of Rajasthan, halting their expansion and containing
them at Sindh in Pakistan. The south Indian Chalukya empire under Vikramaditya II, Nagabhata I of the Pratihara dynasty
and Bappa Rawal of the Guhilot dynasty repulsed the Arab invaders in the early 8th century. Several Islamic kingdoms
(sultanates) under both foreign and newly converted Rajput rulers were established across the north-western subcontinent
(Afghanistan and Pakistan) over a period of a few centuries. From the 10th century, Sindh was ruled by the Rajput
Soomra dynasty, and later, in the mid-13th century by the Rajput Samma dynasty. Additionally, Muslim trading communities
flourished throughout coastal south India, particularly on the western coast where Muslim traders arrived in small
numbers, mainly from the Arabian peninsula. This marked the introduction of a third Abrahamic Middle Eastern religion,
following Judaism and Christianity, often in puritanical form. Mahmud of Ghazni in the early 11th century raided
mainly the north-western parts of the Indian sub-continent 17 times, but he did not seek to establish "permanent
dominion" in those areas. The Kabul Shahi dynasties ruled the Kabul Valley and Gandhara (modern-day Pakistan and
Afghanistan) from the decline of the Kushan Empire in the 3rd century to the early 9th century. The Shahis are generally
split up into two eras: the Buddhist Shahis and the Hindu Shahis, with the change-over thought to have occurred sometime
around 870. The kingdom was known as the Kabul Shahan or Ratbelshahan from 565 to 670, when the capitals were located in Kapisa and Kabul, and later at Udabhandapura (also known as Hund), its new capital. The Hindu Shahi dynasty under Jayapala is known for its struggles in defending the kingdom against the Ghaznavids in the modern-day eastern Afghanistan
and Pakistan region. Jayapala saw a danger in the consolidation of the Ghaznavids and invaded their capital city
of Ghazni both in the reign of Sebuktigin and in that of his son Mahmud, which initiated the Muslim Ghaznavid and
Hindu Shahi struggles. Sebuk Tigin, however, defeated him, and he was forced to pay an indemnity. Jayapala defaulted
on the payment and took to the battlefield once more, but lost control of the entire region between the Kabul Valley and the Indus River. His army proved no match for the western forces, particularly against the young Mahmud of Ghazni. In the year 1001, soon after Sultan Mahmud came to power and was occupied with the Qarakhanids north of the Hindu Kush, Jayapala attacked Ghazni once more and suffered yet another defeat at the hands of the powerful Ghaznavid forces near present-day Peshawar. After the Battle of Peshawar, he committed suicide because
his subjects thought he had brought disaster and disgrace to the Shahi dynasty. Like other settled, agrarian societies in history, those in the Indian subcontinent have been attacked by nomadic tribes throughout their long history. In
evaluating the impact of Islam on the sub-continent, one must note that the northwestern sub-continent was a frequent
target of tribes raiding from Central Asia. In that sense, the Muslim intrusions and later Muslim invasions were
not dissimilar to the earlier invasions during the 1st millennium. What makes the Muslim intrusions and later Muslim invasions different, however, is that unlike the preceding invaders, who assimilated into the prevalent social system, the successful Muslim conquerors retained their Islamic identity and created new legal and administrative systems that challenged and in many cases superseded the existing systems of social conduct and ethics, even influencing the non-Muslim rivals and the common masses to a large extent, though the non-Muslim population was left to its
own laws and customs. They also introduced new cultural codes that in some ways were very different from the existing
cultural codes. This led to the rise of a new Indian culture which was mixed in nature, though different from both
the ancient Indian culture and the later westernized modern Indian culture. At the same time, it must be noted that the overwhelming majority of Muslims in India are Indian natives converted to Islam. This factor also played an important role in
the synthesis of cultures. The subsequent Slave dynasty of Delhi managed to conquer large areas of northern India,
while the Khilji dynasty conquered most of central India but were ultimately unsuccessful in conquering and uniting
the subcontinent. The Sultanate ushered in a period of Indian cultural renaissance. The resulting "Indo-Muslim" fusion
of cultures left lasting syncretic monuments in architecture, music, literature, religion, and clothing. It is surmised
that the language of Urdu (literally meaning "horde" or "camp" in various Turkic dialects) was born during the Delhi
Sultanate period as a result of the intermingling of the local speakers of Sanskritic Prakrits with immigrants speaking
Persian, Turkic, and Arabic under the Muslim rulers. The Delhi Sultanate is the only Indo-Islamic empire to enthrone
one of the few female rulers in India, Razia Sultana (1236–1240). A Turco-Mongol conqueror in Central Asia, Timur
(Tamerlane), attacked the reigning Sultan Nasir-u Din Mehmud of the Tughlaq Dynasty in the north Indian city of Delhi.
The Sultan's army was defeated on 17 December 1398. Timur entered Delhi and the city was sacked, destroyed, and left
in ruins, after Timur's army had killed and plundered for three days and nights. He ordered the whole city to be
sacked except for the sayyids, scholars, and the "other Muslims" (artists); 100,000 war prisoners were put to death
in one day. The Sultanate suffered significantly from the sacking of Delhi. It revived briefly under the Lodi Dynasty, but it was a shadow of its former self. The Vijayanagara Empire was established in 1336 by Harihara I and his brother Bukka Raya I
of Sangama Dynasty. The empire rose to prominence as a culmination of attempts by the southern powers to ward off
Islamic invasions by the end of the 13th century. The empire is named after its capital city of Vijayanagara, whose
ruins surround present day Hampi, now a World Heritage Site in Karnataka, India. The empire's legacy includes many
monuments spread over South India, the best known of which is the group at Hampi. The previous temple building traditions
in South India came together in the Vijayanagara Architecture style. The mingling of all faiths and vernaculars inspired
architectural innovation of Hindu temple construction, first in the Deccan and later in the Dravidian idioms using
the local granite. South Indian mathematics flourished under the protection of the Vijayanagara Empire in Kerala.
The south Indian mathematician Madhava of Sangamagrama founded the famous Kerala school of astronomy and mathematics
in the 14th century, which produced many great south Indian mathematicians such as Parameshvara, Nilakantha Somayaji
and Jyeṣṭhadeva in medieval south India. Efficient administration and vigorous overseas trade brought new technologies
such as water management systems for irrigation. The empire's patronage enabled fine arts and literature to reach
new heights in Kannada, Telugu, Tamil and Sanskrit, while Carnatic music evolved into its current form. The Vijayanagara
Empire created an epoch in South Indian history that transcended regionalism by promoting Hinduism as a unifying
factor. The empire reached its peak during the rule of Sri Krishnadevaraya when Vijayanagara armies were consistently
victorious. The empire annexed areas formerly under the Sultanates in the northern Deccan and the territories in
the eastern Deccan, including Kalinga, while simultaneously maintaining control over all its subordinates in the
south. Many important monuments were either completed or commissioned during the time of Krishna Deva Raya. Vijayanagara
went into decline after the defeat in the Battle of Talikota (1565). For two and a half centuries from the mid-13th century, politics in northern India was dominated by the Delhi Sultanate, and in southern India by the Vijayanagara Empire, which originated as a political heir of the erstwhile Hoysala Empire and Pandyan Empire. However, there were
other regional powers present as well. In the North, the Rajputs were a dominant force in the Western and Central
India. Their power reached its zenith under Rana Sanga, during whose time Rajput armies were constantly victorious against the Sultanate army. In the South, the Bahmani Sultanate was the chief rival of Vijayanagara and frequently gave it a tough contest. The Bahmani Sultanate had been established either by a Brahman convert or with a Brahman's patronage, and from that source it got the name Bahmani. In the early 16th century, Krishnadevaraya of the Vijayanagara Empire defeated the last remnant of Bahmani power, after which the Sultanate collapsed and split into five small Deccan sultanates. In the East, the Gajapati Kingdom remained a strong regional power
to reckon with, as was the Ahom Kingdom in the north-east for six centuries. The Ahom Kingdom (1228–1826) was a kingdom
and tribe which rose to prominence in present-day Assam early in the thirteenth century. They ruled much of Assam
from the 13th century until the establishment of British rule in 1838. The Ahoms brought with them a tribal religion
and a language of their own; however, they later merged with the Hindu religion. From the thirteenth to the seventeenth century, the Muslim rulers of Delhi made repeated attempts to invade and subdue the Ahoms, but the Ahoms maintained their independence and ruled themselves for nearly 600 years. In 1526, Babur, a Timurid descendant
of Timur and Genghis Khan from Fergana Valley (modern day Uzbekistan), swept across the Khyber Pass and established
the Mughal Empire, which at its zenith covered modern day Afghanistan, Pakistan, India and Bangladesh. However, his
son Humayun was defeated by the Afghan warrior Sher Shah Suri in the year 1540, and Humayun was forced to retreat
to Kabul. After Sher Shah's death, power passed to his son Islam Shah Suri and then to the Hindu king Hemu Vikramaditya, who had won 22 battles against Afghan rebels and the forces of Akbar from Punjab to Bengal, and who established a secular rule in North India from Delhi until 1556 after winning the Battle of Delhi. Akbar's forces defeated and killed Hemu in the Second Battle of Panipat on 6 November 1556. Akbar's son, Jahangir, more or less followed his father's policy. The Mughal dynasty ruled
most of the Indian subcontinent by 1600. The reign of Shah Jahan was the golden age of Mughal architecture. He erected
several large monuments, the most famous of which is the Taj Mahal at Agra, as well as the Moti Masjid, Agra, the
Red Fort, the Jama Masjid, Delhi, and the Lahore Fort. The Mughal Empire reached the zenith of its territorial expanse
during the reign of Aurangzeb and also started its terminal decline in his reign due to Maratha military resurgence
under Shivaji. The historian Sir J. N. Sarkar wrote, "All seemed to have been gained by Aurangzeb now, but in reality
all was lost." The same was echoed by Vincent Smith: "The Deccan proved to be the graveyard not only of Aurangzeb's
body but also of his empire". The empire went into decline thereafter. The Mughals suffered several blows due to
invasions from Marathas and Afghans. During the decline of the Mughal Empire, several smaller states rose to fill
the power vacuum and themselves were contributing factors to the decline. In 1737, the Maratha general Bajirao of
the Maratha Empire invaded and plundered Delhi. Under the general Amir Khan Umrao Al Udat, the Mughal Emperor sent
8,000 troops to drive away the 5,000 Maratha cavalry soldiers. Baji Rao, however, easily routed the novice Mughal
general and the rest of the imperial Mughal army fled. In 1737, in the final defeat of the Mughal Empire, the commander-in-chief
of the Mughal Army, Nizam-ul-mulk, was routed at Bhopal by the Maratha army. This essentially brought an end to the
Mughal Empire. In 1739, Nader Shah, emperor of Iran, defeated the Mughal army at the Battle of Karnal. After this
victory, Nader captured and sacked Delhi, carrying away many treasures, including the Peacock Throne. The Mughal
dynasty was reduced to puppet rulers by 1757. The remnants of the Mughal dynasty were finally defeated during the
Indian Rebellion of 1857, also called the 1857 War of Independence, and the remains of the empire were formally taken
over by the British while the Government of India Act 1858 let the British Crown assume direct control of India in
the form of the new British Raj. The Mughals were perhaps the richest single dynasty to have ever existed. During
the Mughal era, the dominant political forces consisted of the Mughal Empire and its tributaries and, later on, the
rising successor states – including the Maratha Empire – which fought an increasingly weak Mughal dynasty. The Mughals,
while often employing brutal tactics to subjugate their empire, had a policy of integration with Indian culture,
which is what made them successful where the short-lived Sultanates of Delhi had failed. This period marked vast
social change in the subcontinent as the Hindu majority were ruled over by the Mughal emperors, most of whom showed
religious tolerance, liberally patronising Hindu culture. The famous emperor Akbar, who was the grandson of Babur, tried to establish a good relationship with the Hindus. However, later emperors such as Aurangzeb tried to establish
complete Muslim dominance, and as a result several historical temples were destroyed during this period and taxes
imposed on non-Muslims. Akbar declared "Amari" or non-killing of animals in the holy days of Jainism. He rolled back
the jizya tax for non-Muslims. The Mughal emperors married local royalty, allied themselves with local maharajas,
and attempted to fuse their Turko-Persian culture with ancient Indian styles, creating a unique Indo-Saracenic architecture.
It was the erosion of this tradition coupled with increased brutality and centralization that played a large part
in the dynasty's downfall after Aurangzeb, who unlike previous emperors, imposed relatively non-pluralistic policies
on the general population, which often inflamed the majority Hindu population. The post-Mughal era was dominated
by the rise of the Maratha suzerainty as other small regional states (mostly late Mughal tributary states) emerged,
and also by the increasing activities of European powers. There is no doubt that the single most important power
to emerge in the long twilight of the Mughal dynasty was the Maratha confederacy. The Maratha kingdom was founded
and consolidated by Chatrapati Shivaji, a Maratha aristocrat of the Bhonsle clan who was determined to establish
Hindavi Swarajya. Sir J.N. Sarkar described Shivaji as "the last great constructive genius and nation builder that
the Hindu race has produced". However, the credit for making the Marathas a formidable national power goes to Peshwa Bajirao I. The historian K. K. Datta wrote about Bajirao I: By the early 18th century, the Maratha Kingdom had transformed
itself into the Maratha Empire under the rule of the Peshwas (prime ministers). In 1737, the Marathas defeated a
Mughal army in their capital, Delhi itself, in the Battle of Delhi (1737). The Marathas continued their military campaigns
against Mughals, Nizam, Nawab of Bengal and Durrani Empire to further extend their boundaries. Gordon explained how
the Marathas systematically took control over new regions. They would start with annual raids, proceed to collecting ransom from villages and towns while the declining Mughal Empire retained nominal control, and finally take over the region. He illustrated this with the example of the Malwa region. The Marathas built an efficient system of public administration
known for its attention to detail. It succeeded in raising revenue in districts that recovered from years of raids,
up to levels previously enjoyed by the Mughals. For example, the cornerstone of the Maratha rule in Malwa rested
on the 60 or so local tax collectors who advanced the Maratha ruler, the Peshwa, a portion of their district revenues at
interest. By 1760, the domain of the Marathas stretched across practically the entire subcontinent. The north-western
expansion of the Marathas was stopped after the Third Battle of Panipat (1761). However, the Maratha authority in
the north was re-established within a decade under Peshwa Madhavrao I. The defeat of the Marathas by the British in the Third Anglo-Maratha War brought an end to the empire by 1820, when the last peshwa, Baji Rao II, was defeated. With the defeat of the Marathas, no native power represented any significant threat
for the British afterwards. The Punjabi kingdom, ruled by members of the Sikh religion, was a political entity that governed the region of modern-day Punjab; the empire existed from 1799 to 1849.
It was forged, on the foundations of the Khalsa, under the leadership of Maharaja Ranjit Singh (1780–1839) from an
array of autonomous Punjabi Misls. He consolidated many parts of northern India into a kingdom. He primarily used
his highly disciplined Sikh army that he trained and equipped to be the equal of a European force. Ranjit Singh proved
himself to be a master strategist and selected well qualified generals for his army. In stages, he added the central
Punjab, the provinces of Multan and Kashmir, the Peshawar Valley, and the Derajat to his kingdom. This came in the
face of the powerful British East India Company. At its peak, in the 19th century, the empire extended from the Khyber
Pass in the west, to Kashmir in the north, to Sindh in the south, running along Sutlej river to Himachal in the east.
This was among the last areas of the subcontinent to be conquered by the British. The first Anglo-Sikh war and second
Anglo-Sikh war marked the downfall of the Sikh Empire. There were several other kingdoms which ruled over parts of
India in the later medieval period prior to the British occupation. However, most of them were bound to pay regular
tribute to the Marathas. The rule of the Wodeyar dynasty, which established the Kingdom of Mysore in southern India around 1400 CE, was interrupted by Hyder Ali and his son Tipu Sultan in the latter half of the 18th century. Under
their rule, Mysore fought a series of wars sometimes against the combined forces of the British and Marathas, but
mostly against the British, with Mysore receiving some aid or promise of aid from the French. The next to arrive
were the Dutch, with their main base in Ceylon. They were followed by the British, who set up a trading post in the west coast port of Surat in 1619, and by the French. The internal conflicts among Indian kingdoms gave opportunities to the European traders to
gradually establish political influence and appropriate lands. Although these continental European powers controlled
various coastal regions of southern and eastern India during the ensuing century, they eventually lost all their
territories in India to the British islanders, with the exception of the French outposts of Pondichéry and Chandernagore,
the Dutch port of Travancore, and the Portuguese colonies of Goa, Daman and Diu. The Nawab of Bengal, Siraj ud-Daulah, the de facto ruler of the Bengal province, opposed British attempts to use their trade permits. This led
to the Battle of Plassey on 23 June 1757, in which the Bengal Army of the East India Company, led by Robert Clive,
defeated the French-supported Nawab's forces. This was the first real political foothold with territorial implications
that the British acquired in India. Clive was appointed by the company as its first 'Governor of Bengal' in 1757.
This was combined with British victories over the French at Madras, Wandiwash and Pondichéry that, along with wider
British successes during the Seven Years' War, reduced French influence in India. The British East India Company
extended its control over the whole of Bengal. After the Battle of Buxar in 1764, the company acquired the rights
of administration in Bengal from de jure Mughal Emperor Shah Alam II; this marked the beginning of its formal rule,
which within the next century engulfed most of India. The East India Company monopolized the trade of Bengal. They
introduced a land taxation system called the Permanent Settlement which introduced a feudal-like structure in Bengal,
often with zamindars set in place. As a result of the three Carnatic Wars, the British East India Company gained
exclusive control over the entire Carnatic region of India. The Company soon expanded its territories around its
bases in Bombay and Madras; the Anglo-Mysore Wars (1766–1799) and later the Anglo-Maratha Wars (1772–1818) led to
control of vast regions of India. The Ahom Kingdom of north-east India first fell to Burmese invasion and then to the British after the Treaty of Yandabo in 1826. Punjab, the North-West Frontier Province, and Kashmir were annexed after the
Second Anglo-Sikh War in 1849; however, Kashmir was immediately sold under the Treaty of Amritsar to the Dogra Dynasty
of Jammu and thereby became a princely state. The border dispute between Nepal and British India, which sharpened
after 1801, had caused the Anglo-Nepalese War of 1814–16 and brought the defeated Gurkhas under British influence.
In 1854, Berar was annexed, and the state of Oudh was added two years later. The Indian rebellion of 1857 was a large-scale
rebellion by soldiers employed by the British East India Company in northern and central India against the Company's rule.
The rebels were disorganized, had differing goals, and were poorly equipped, led, and trained, and had no outside
support or funding. They were brutally suppressed, and the British government took control of the Company and eliminated many of the grievances that had caused the rebellion. The government was also determined to keep full control so that no rebellion
of such size would ever happen again. In the aftermath, all power was transferred from the East India Company to
the British Crown, which began to administer most of India as a number of provinces. The Crown controlled the Company's
lands directly and had considerable indirect influence over the rest of India, which consisted of the Princely states
ruled by local royal families. There were officially 565 princely states in 1947, but only 21 had actual state governments,
and only three were large (Mysore, Hyderabad and Kashmir). They were absorbed into the independent nation in 1947–48.
After 1857, the colonial government strengthened and expanded its infrastructure via the court system, legal procedures,
and statutes. The Indian Penal Code came into being. In education, Thomas Babington Macaulay had made schooling a
priority for the Raj in his famous minute of February 1835 and succeeded in implementing the use of English as the
medium of instruction. By 1890 some 60,000 Indians had matriculated. The Indian economy grew at about 1% per year
from 1880 to 1920, and the population also grew at 1%. However, from the 1910s Indian private industry began to grow
significantly. India built a modern railway system in the late 19th century which was the fourth largest in the world.
The British Raj invested heavily in infrastructure, including canals and irrigation systems in addition to railways,
telegraphy, roads and ports. However, historians have been bitterly divided on issues of economic history, with the
Nationalist school arguing that India was poorer at the end of British rule than at the beginning and that impoverishment
occurred because of the British. In 1905, Lord Curzon split the large province of Bengal into a largely Hindu western
half and "Eastern Bengal and Assam", a largely Muslim eastern half. The British goal was said to be for efficient
administration but the people of Bengal were outraged at the apparent "divide and rule" strategy. It also marked
the beginning of the organized anti-colonial movement. When the Liberal Party in Britain came to power in 1906, Curzon was removed. Bengal was reunified in 1911. The new Viceroy, Gilbert Minto, and the new Secretary of State for India,
John Morley consulted with Congress leaders on political reforms. The Morley-Minto reforms of 1909 provided for Indian
membership of the provincial executive councils as well as the Viceroy's executive council. The Imperial Legislative
Council was enlarged from 25 to 60 members and separate communal representation for Muslims was established in a
dramatic step towards representative and responsible government. Several socio-religious organizations came into
being at that time. Muslims set up the All India Muslim League in 1906. It was not a mass party but was designed
to protect the interests of the aristocratic Muslims. It was internally divided by conflicting loyalties to Islam,
the British, and India, and by distrust of Hindus. The Akhil Bharatiya Hindu Mahasabha and Rashtriya Swayamsevak
Sangh (RSS) sought to represent Hindu interests, though the latter always claimed to be a "cultural" organization. Sikhs founded the Shiromani Akali Dal in 1920. However, the Indian National Congress, the largest and oldest political party, founded in 1885, is perceived to have attempted to keep its distance from socio-religious movements and identity politics. The Bengali Renaissance refers to a social reform movement during the nineteenth and early twentieth centuries
in the Bengal region of India during the period of British rule dominated by English educated Bengali Hindus. The
Bengal Renaissance can be said to have started with Raja Ram Mohan Roy (1772–1833) and ended with Rabindranath Tagore
(1861–1941), although many stalwarts thereafter continued to embody particular aspects of the unique intellectual
and creative output of the region. Nineteenth century Bengal was a unique blend of religious and social reformers,
scholars, literary giants, journalists, patriotic orators, and scientists, all merging to form the image of a renaissance,
and marked the transition from the 'medieval' to the 'modern'. During this period, Bengal witnessed an intellectual
awakening that is in some ways similar to the Renaissance in Europe during the 16th century, although Europeans of
that age were not confronted with the challenge and influence of alien colonialism. This movement questioned existing
orthodoxies, particularly with respect to women, marriage, the dowry system, the caste system, and religion. One
of the earliest social movements that emerged during this time was the Young Bengal movement, which espoused rationalism
and atheism as the common denominators of civil conduct among upper caste educated Hindus. It played an important
role in reawakening Indian minds and intellect across the sub-continent. During the British Raj, famines in India,
often attributed to failed government policies, were some of the worst ever recorded, including the Great Famine
of 1876–78 in which 6.1 million to 10.3 million people died and the Indian famine of 1899–1900 in which 1.25 to 10
million people died. The Third Plague Pandemic in the mid-19th century killed 10 million people in India. Despite
persistent diseases and famines, the population of the Indian subcontinent, which stood at about 125 million in 1750,
had reached 389 million by 1941. One of the most important events of the 19th century was the rise of Indian nationalism,
leading Indians to seek first "self-rule" and later "complete independence". However, historians are divided over
the causes of its rise. Probable reasons include a "clash of interests of the Indian people with British interests",
"racial discriminations", "the revelation of India's past", "inter-linking of the new social groups in different
regions", and Indians coming in close contact with "European education". The first step toward Indian self-rule was
the appointment of councillors to advise the British viceroy in 1861; the first Indian was appointed in 1909.
Provincial Councils with Indian members were also set up. The councillors' participation was subsequently widened
into legislative councils. The British built a large British Indian Army, with the senior officers all British and
many of the troops from small minority groups such as Gurkhas from Nepal and Sikhs. The civil service was increasingly
filled with natives at the lower levels, with the British holding the more senior positions. Bal Gangadhar Tilak,
an Indian nationalist leader, declared Swaraj as the destiny of the nation. His popular sentence "Swaraj is my birthright,
and I shall have it" became the source of inspiration for Indians. Tilak was backed by rising public leaders like
Bipin Chandra Pal and Lala Lajpat Rai, who held the same point of view. Under them, India's three big provinces – Maharashtra, Bengal and Punjab – shaped the demands of the people and India's nationalism. In 1907, the Congress
was split into two factions: The radicals, led by Tilak, advocated civil agitation and direct revolution to overthrow
the British Empire and the abandonment of all things British. The moderates, led by leaders like Dadabhai Naoroji
and Gopal Krishna Gokhale, on the other hand wanted reform within the framework of British rule. From 1920 leaders
such as Mahatma Gandhi began highly popular mass movements to campaign against the British Raj using largely peaceful
methods. The Gandhi-led independence movement opposed the British rule using non-violent methods like non-cooperation,
civil disobedience and economic resistance. However, revolutionary activities against the British rule took place
throughout the Indian subcontinent and some others adopted a militant approach like the Indian National Army that
sought to overthrow British rule by armed struggle. The Government of India Act 1935 was a major success in this
regard. All these movements succeeded in bringing independence to the new dominions of India and Pakistan on 15 August
1947. Along with the desire for independence, tensions between Hindus and Muslims had also been developing over the
years. The Muslims had always been a minority within the subcontinent, and the prospect of an exclusively Hindu government
made them wary of independence; they were as inclined to mistrust Hindu rule as they were to resist the foreign Raj,
although Gandhi called for unity between the two groups in an astonishing display of leadership. The British, extremely
weakened by the Second World War, promised that they would leave and participated in the formation of an interim
government. The British Indian territories gained independence in 1947, after being partitioned into the Union of
India and Dominion of Pakistan. Following the controversial division of pre-partition Punjab and Bengal, rioting
broke out between Sikhs, Hindus and Muslims in these provinces and spread to several other parts of India, leaving
some 500,000 dead. Also, this period saw one of the largest mass migrations ever recorded in modern history, with
a total of 12 million Hindus, Sikhs and Muslims moving between the newly created nations of India and Pakistan (which
gained independence on 15 and 14 August 1947 respectively). In 1971, Bangladesh, formerly East Pakistan and East
Bengal, seceded from Pakistan.
Gamal Abdel Nasser Hussein (Arabic: جمال عبد الناصر حسين, IPA: [ɡæˈmæːl ʕæbdenˈnɑːsˤeɾ ħeˈseːn]; 15 January 1918 – 28 September
1970) was the second President of Egypt, serving from 1956 until his death. Nasser led the 1952 overthrow of the
monarchy and introduced far-reaching land reforms the following year. Following a 1954 attempt on his life by a Muslim
Brotherhood member acting on his own, he cracked down on the organization, put President Muhammad Naguib under house
arrest, and assumed executive office, officially becoming president in June 1956. Nasser's nationalization of the
Suez Canal and his emergence as the political victor from the subsequent Suez Crisis substantially elevated his popularity
in Egypt and the Arab world. Calls for pan-Arab unity under his leadership increased, culminating with the formation
of the United Arab Republic with Syria (1958–1961). In 1962, Nasser began a series of major socialist measures and
modernization reforms in Egypt. Despite setbacks to his pan-Arabist cause, by 1963 Nasser's supporters gained power
in several Arab countries and he became embroiled in the North Yemen Civil War. He began his second presidential
term in March 1965 after his political opponents were banned from running. Following Egypt's defeat by Israel in
the 1967 Six-Day War, Nasser resigned, but he returned to office after popular demonstrations called for his reinstatement.
By 1968, Nasser had appointed himself prime minister, launched the War of Attrition to regain lost territory, began
a process of depoliticizing the military, and issued a set of political liberalization reforms. After the conclusion
of the 1970 Arab League summit, Nasser suffered a heart attack and died. His funeral in Cairo drew five million mourners
and an outpouring of grief across the Arab world. Nasser remains an iconic figure in the Arab world, particularly
for his strides towards social justice and Arab unity, modernization policies, and anti-imperialist efforts. His
presidency also encouraged and coincided with an Egyptian cultural boom, and launched large industrial projects,
including the Aswan Dam and Helwan City. Nasser's detractors criticize his authoritarianism, his government's human
rights violations, his populist relationship with the citizenry, and his failure to establish civil institutions,
blaming his legacy for future dictatorial governance in Egypt. Historians describe Nasser as a towering political
figure of the Middle East in the 20th century. Gamal Abdel Nasser was born on 15 January 1918 in Bakos, Alexandria,
the first son of Fahima and Abdel Nasser Hussein. Nasser's father was a postal worker born in Beni Mur in Upper Egypt
and raised in Alexandria, and his mother's family came from Mallawi, el-Minya. His parents married in 1917, and later
had two more boys, Izz al-Arab and al-Leithi. Nasser's biographers Robert Stephens and Said Aburish wrote that Nasser's
family believed strongly in the "Arab notion of glory", since the name of Nasser's brother, Izz al-Arab, translates
to "Glory of the Arabs"—a rare name in Egypt. In 1928, Nasser went to Alexandria to live with his maternal grandfather
and attend the city's Attarin elementary school. He left in 1929 for a private boarding school in Helwan, and later
returned to Alexandria to enter the Ras el-Tin secondary school and to join his father, who was working for the city's
postal service. It was in Alexandria that Nasser became involved in political activism. After witnessing clashes
between protesters and police in Manshia Square, he joined the demonstration without being aware of its purpose.
The protest, organized by the ultranationalist Young Egypt Society, called for the end of colonialism in Egypt in
the wake of the 1923 Egyptian constitution's annulment by Prime Minister Isma'il Sidqi. Nasser was arrested and detained
for a night before his father bailed him out. When his father was transferred to Cairo in 1933, Nasser joined him
and attended al-Nahda al-Masria school. He took up acting in school plays for a brief period and wrote articles for
the school's paper, including a piece on French philosopher Voltaire titled "Voltaire, the Man of Freedom". On 13
November 1935, Nasser led a student demonstration against British rule, protesting against a statement made four
days prior by UK foreign minister Samuel Hoare that rejected prospects for the 1923 Constitution's restoration. Two
protesters were killed and Nasser received a graze to the head from a policeman's bullet. The incident garnered his
first mention in the press: the nationalist newspaper Al Gihad reported that Nasser led the protest and was among
the wounded. On 12 December, the new king, Farouk, issued a decree restoring the constitution. Nasser's involvement
in political activity increased throughout his school years, such that he only attended 45 days of classes during
his last year of secondary school. Although the 1936 Anglo-Egyptian Treaty had the almost unanimous backing of Egypt's political forces, Nasser strongly objected to it because it stipulated the continued presence of British military
bases in the country. Nonetheless, political unrest in Egypt declined significantly and Nasser resumed his studies
at al-Nahda, where he received his leaving certificate later that year. Aburish asserts that Nasser was not distressed
by his frequent relocations, which broadened his horizons and showed him Egyptian society's class divisions. His
own social status was well below the wealthy Egyptian elite, and his discontent with those born into wealth and power
grew throughout his lifetime. Nasser spent most of his spare time reading, particularly in 1933 when he lived near
the National Library of Egypt. He read the Qur'an, the sayings of Muhammad, the lives of the Sahaba (Muhammad's companions),
and the biographies of nationalist leaders Napoleon, Ataturk, Otto von Bismarck, and Garibaldi and the autobiography
of Winston Churchill. Nasser was greatly influenced by Egyptian nationalism, as espoused by politician Mustafa Kamel,
poet Ahmed Shawqi, and his anti-colonialist instructor at the Royal Military Academy, Aziz al-Masri, to whom Nasser
expressed his gratitude in a 1961 newspaper interview. He was especially influenced by Egyptian writer Tawfiq al-Hakim's
novel Return of the Spirit, in which al-Hakim wrote that the Egyptian people were only in need of a "man in whom
all their feelings and desires will be represented, and who will be for them a symbol of their objective". Nasser
later credited the novel as his inspiration to launch the 1952 revolution. In 1937, Nasser applied to the Royal Military
Academy for army officer training, but his police record of anti-government protest initially blocked his entry.
Disappointed, he enrolled in the law school at King Fuad University, but quit after one semester to reapply to the
Military Academy. From his readings, Nasser, who frequently spoke of "dignity, glory, and freedom" in his youth,
became enchanted with the stories of national liberators and heroic conquerors; a military career became his chief
priority. Convinced that he needed a wasta, or an influential intermediary to promote his application above the others,
Nasser managed to secure a meeting with Under-Secretary of War Ibrahim Khairy Pasha, the person responsible for the
academy's selection board, and requested his help. Khairy Pasha agreed and sponsored Nasser's second application,
which was accepted in late 1937. Nasser focused on his military career from then on, and had little contact with
his family. At the academy, he met Abdel Hakim Amer and Anwar Sadat, both of whom became important aides during his
presidency. After graduating from the academy in July 1938, he was commissioned a second lieutenant in the infantry,
and posted to Mankabad. It was here that Nasser and his closest comrades, including Sadat and Amer, first discussed
their dissatisfaction at widespread corruption in the country and their desire to topple the monarchy. Sadat would
later write that because of his "energy, clear-thinking, and balanced judgement", Nasser emerged as the group's natural
leader. In 1941, Nasser was posted to Khartoum, Sudan, which was part of Egypt at the time. Nasser returned to Sudan
in September 1942 after a brief stay in Egypt, then secured a position as an instructor in the Cairo Royal Military
Academy in May 1943. In 1942, the British Ambassador Miles Lampson marched into King Farouk's palace and ordered
him to dismiss Prime Minister Hussein Sirri Pasha for having pro-Axis sympathies. Nasser saw the incident as a blatant
violation of Egyptian sovereignty and wrote, "I am ashamed that our army has not reacted against this attack", and
wished for "calamity" to overtake the British. Nasser was accepted into the General Staff College later that year.
He began to form a group of young military officers with strong nationalist sentiments who supported some form of
revolution. Nasser stayed in touch with the group's members primarily through Amer, who continued to seek out interested
officers within the Egyptian Armed Forces' various branches and presented Nasser with a complete file on each of
them. In May 1948, following the British withdrawal, King Farouk sent the Egyptian army into Palestine, with Nasser
serving in the 6th Infantry Battalion. During the war, he wrote of the Egyptian army's unpreparedness, saying "our
soldiers were dashed against fortifications". Nasser was deputy commander of the Egyptian forces that secured the
Faluja pocket. On 12 July, he was lightly wounded in the fighting. By August, his brigade was surrounded by the Israeli
Army. Appeals for help from Jordan's Arab Legion went unheeded, but the brigade refused to surrender. Negotiations
between Israel and Egypt finally resulted in the ceding of Faluja to Israel. According to veteran journalist Eric
Margolis, the defenders of Faluja, "including young army officer Gamal Abdel Nasser, became national heroes" for
enduring Israeli bombardment while isolated from their command. The Egyptian singer Umm Kulthum hosted a public celebration
for the officers' return despite reservations from the royal government, which had been pressured by the British
to prevent the reception. The apparent difference in attitude between the government and the general public increased
Nasser's determination to topple the monarchy. Nasser had also felt bitter that his brigade had not been relieved
despite the resilience it displayed. He started writing his book Philosophy of the Revolution during the siege. After
the war, Nasser returned to his role as an instructor at the Royal Military Academy. He sent emissaries to forge
an alliance with the Muslim Brotherhood in October 1948, but soon concluded that the religious agenda of the Brotherhood
was not compatible with his nationalism. From then on, Nasser prevented the Brotherhood's influence over his cadres'
activities without severing ties with the organization. Nasser was sent as a member of the Egyptian delegation to
Rhodes in February 1949 to negotiate a formal armistice with Israel, and reportedly considered the terms to be humiliating,
particularly because the Israelis were able to easily occupy the Eilat region while negotiating with the Arabs in
March. Nasser's return to Egypt coincided with Husni al-Za'im's Syrian coup d'état. Its success and evident popular
support among the Syrian people encouraged Nasser's revolutionary pursuits. Soon after his return, he was summoned
and interrogated by Prime Minister Ibrahim Abdel Hadi regarding suspicions that he was forming a secret group of
dissenting officers. According to secondhand reports, Nasser convincingly denied the allegations. Abdel Hadi was
also hesitant to take drastic measures against the army, especially in front of its chief of staff, who was present
during the interrogation, and subsequently released Nasser. The interrogation pushed Nasser to speed up his group's
activities. In the 1950 parliamentary elections, the Wafd Party of el-Nahhas gained a victory—mostly due to the absence
of the Muslim Brotherhood, which boycotted the elections—and was perceived as a threat by the Free Officers as the
Wafd had campaigned on demands similar to their own. Accusations of corruption against Wafd politicians began to
surface, however, breeding an atmosphere of rumor and suspicion that consequently brought the Free Officers to the
forefront of Egyptian politics. By then, the organization had expanded to around ninety members; according to Khaled
Mohieddin, "nobody knew all of them and where they belonged in the hierarchy except Nasser". Nasser felt that the
Free Officers were not ready to move against the government and, for nearly two years, he did little beyond officer
recruitment and underground news bulletins. On 11 October 1951, the Wafd government abrogated the 1936 Anglo-Egyptian
Treaty, which had given the British control over the Suez Canal until 1956. The popularity of this move, as well
as that of government-sponsored guerrilla attacks against the British, put pressure on Nasser to act. According to
Sadat, Nasser decided to wage "a large scale assassination campaign". In January 1952, he and Hassan Ibrahim attempted
to kill the royalist general Hussein Sirri Amer by firing their submachine guns at his car as he drove through the
streets of Cairo. Instead of killing the general, the attackers wounded an innocent female passerby. Nasser recalled
that her wails "haunted" him and firmly dissuaded him from undertaking similar actions in the future. Sirri Amer
was close to King Farouk, and was nominated for the presidency of the Officer's Club—normally a ceremonial office—with
the king's backing. Nasser was determined to establish the independence of the army from the monarchy, and with Amer
as the intercessor, resolved to field a nominee for the Free Officers. They selected Muhammad Naguib, a popular general
who had offered his resignation to Farouk in 1942 over British high-handedness and was wounded three times in the
Palestine War. Naguib won overwhelmingly and the Free Officers, through their connection with a leading Egyptian
daily, al-Misri, publicized his victory while praising the nationalistic spirit of the army. On 25 January 1952,
a confrontation between British forces and police at Ismailia resulted in the deaths of 40 Egyptian policemen, provoking
riots in Cairo the next day which left 76 people dead. Afterwards, Nasser published a simple six-point program in
Rose al-Yūsuf to dismantle feudalism and British influence in Egypt. In May, Nasser received word that Farouk knew
the names of the Free Officers and intended to arrest them; he immediately entrusted Free Officer Zakaria Mohieddin
with the task of planning the government takeover by army units loyal to the association. The Free Officers' intention
was not to install themselves in government, but to re-establish a parliamentary democracy. Nasser did not believe
that a low-ranking officer like himself (a lieutenant colonel) would be accepted by the Egyptian people, and so selected
General Naguib to be his "boss" and lead the coup in name. The revolution they had long sought was launched on 22
July and was declared a success the next day. The Free Officers seized control of all government buildings, radio
stations, and police stations, as well as army headquarters in Cairo. While many of the rebel officers were leading
their units, Nasser donned civilian clothing to avoid detection by royalists and moved around Cairo monitoring the
situation. In a move to stave off foreign intervention two days before the revolution, Nasser had notified the American
and British governments of his intentions, and both had agreed not to aid Farouk. Under pressure from the Americans,
Nasser had agreed to exile the deposed king with an honorary ceremony. On 18 June 1953, the monarchy was abolished
and the Republic of Egypt declared, with Naguib as its first president. According to Aburish, after assuming power,
Nasser and the Free Officers expected to become the "guardians of the people's interests" against the monarchy and
the pasha class while leaving the day-to-day tasks of government to civilians. They asked former prime minister Ali
Maher to accept reappointment to his previous position, and to form an all-civilian cabinet. The Free Officers then
governed as the Revolutionary Command Council (RCC) with Naguib as chairman and Nasser as vice-chairman. Relations
between the RCC and Maher grew tense, however, as the latter viewed many of Nasser's schemes—agrarian reform, abolition
of the monarchy, reorganization of political parties—as too radical, culminating in Maher's resignation on 7 September.
Naguib assumed the additional role of prime minister, and Nasser that of deputy prime minister. In September, the
Agrarian Reform Law was put into effect. In Nasser's eyes, this law gave the RCC its own identity and transformed
the coup into a revolution. Preceding the reform law, in August 1952, communist-led riots broke out at textile factories
in Kafr el-Dawwar, leading to a clash with the army that left nine people dead. While most of the RCC insisted on
executing the riot's two ringleaders, Nasser opposed this. Nonetheless, the sentences were carried out. The Muslim
Brotherhood supported the RCC, and after Naguib's assumption of power, demanded four ministerial portfolios in the
new cabinet. Nasser turned down their demands and instead hoped to co-opt the Brotherhood by giving two of its members,
who were willing to serve officially as independents, minor ministerial posts. In January 1953, Nasser overcame opposition
from Naguib and banned all political parties, creating a one-party system under the Liberation Rally, a loosely structured
movement whose chief task was to organize pro-RCC rallies and lectures, with Nasser its secretary-general. Despite
the dissolution order, Nasser was the only RCC member who still favored holding parliamentary elections, according
to his fellow officer Abdel Latif Boghdadi. Although outvoted, he still advocated holding elections by 1956. In March
1953, Nasser led the Egyptian delegation negotiating a British withdrawal from the Suez Canal. On 25 February 1954,
Naguib announced his resignation after the RCC held an official meeting without his presence two days prior. On 26
February, Nasser accepted the resignation, put Naguib under house arrest, and the RCC proclaimed Nasser as both RCC
chairman and prime minister. As Naguib intended, a mutiny immediately followed, demanding Naguib's reinstatement
and the RCC's dissolution. While visiting the striking officers at Military Headquarters (GHQ) to call for the mutiny's
end, Nasser was initially intimidated into accepting their demands. However, on 27 February, Nasser's supporters
in the army launched a raid on the GHQ, ending the mutiny. Later that day, hundreds of thousands of protesters, mainly
belonging to the Brotherhood, called for Naguib's return and Nasser's imprisonment. In response, a sizable group
within the RCC, led by Khaled Mohieddin, demanded Naguib's release and return to the presidency. Nasser was forced
to acquiesce, but delayed Naguib's reinstatement until 4 March, allowing him to promote Amer to Commander of the
Armed Forces—a position formerly occupied by Naguib. On 5 March, Nasser's security coterie arrested thousands of
participants in the uprising. As a ruse to rally opposition against a return to the pre-1952 order, the RCC decreed
an end to restrictions on monarchy-era parties and the Free Officers' withdrawal from politics. The RCC succeeded
in provoking the beneficiaries of the revolution, namely the workers, peasants, and petty bourgeois, to oppose the
decrees, with one million transport workers launching a strike and thousands of peasants entering Cairo in protest
in late March. Naguib sought to crack down on the protesters, but his requests were rebuffed by the heads of the security
forces. On 29 March, Nasser announced the decrees' revocation in response to the "impulse of the street." Between
April and June, hundreds of Naguib's supporters in the military were either arrested or dismissed, and Mohieddin
was informally exiled to Switzerland to represent the RCC abroad. King Saud of Saudi Arabia attempted to mend relations
between Nasser and Naguib, but to no avail. On 26 October 1954, while Nasser was delivering a speech in Alexandria, a
Muslim Brotherhood member fired at him from the crowd; unhurt, Nasser continued speaking and declared his readiness to
die for Egypt. The crowd roared in approval and Arab audiences were electrified. The
assassination attempt backfired, quickly playing into Nasser's hands. Upon returning to Cairo, he ordered one of
the largest political crackdowns in the modern history of Egypt, with the arrests of thousands of dissenters, mostly
members of the Brotherhood, but also communists, and the dismissal of 140 officers loyal to Naguib. Eight Brotherhood
leaders were sentenced to death, although the sentence of its chief ideologue, Sayyid Qutb, was commuted to a 15-year
imprisonment. Naguib was removed from the presidency and put under house arrest, but was never tried or sentenced,
and no one in the army rose to defend him. With his rivals neutralized, Nasser became the undisputed leader of Egypt.
Nasser's street following was still too small to sustain his plans for reform and to secure him in office. To promote
himself and the Liberation Rally, he gave speeches in a cross-country tour, and imposed controls over the country's
press by decreeing that all publications had to be approved by the party to prevent "sedition". Both Umm Kulthum
and Abdel Halim Hafez, the leading Arab singers of the era, performed songs praising Nasser's nationalism. Others
produced plays denigrating his political opponents. According to his associates, Nasser orchestrated the campaign
himself. Arab nationalist terms such as "Arab homeland" and "Arab nation" began appearing frequently in his speeches
in 1954–55, whereas previously he had referred to the Arab "peoples" or the "Arab region". In January 1955, the RCC appointed
him as their president, pending national elections. Nasser made secret contacts with Israel in 1954–55, but determined
that peace with Israel would be impossible, considering it an "expansionist state that viewed the Arabs with disdain".
On 28 February 1955, Israeli troops attacked the Egyptian-held Gaza Strip with the stated aim of suppressing Palestinian
fedayeen raids. Nasser did not feel that the Egyptian Army was ready for a confrontation and did not retaliate militarily.
His failure to respond to Israeli military action demonstrated the ineffectiveness of his armed forces and constituted
a blow to his growing popularity. Nasser subsequently ordered the tightening of the blockade on Israeli shipping
through the Straits of Tiran and restricted the use of airspace over the Gulf of Aqaba by Israeli aircraft in early
September. The Israelis re-militarized the al-Auja Demilitarized Zone on the Egyptian border on 21 September. Simultaneous
with Israel's February raid, the Baghdad Pact was formed between some regional allies of the UK. Nasser considered
the Baghdad Pact a threat to his efforts to eliminate British military influence in the Middle East, and a mechanism
to undermine the Arab League and "perpetuate [Arab] subservience to Zionism and [Western] imperialism". Nasser felt
that if he was to maintain Egypt's regional leadership position he needed to acquire modern weaponry to arm his military.
When it became apparent to him that Western countries would not supply Egypt under acceptable financial and military
terms, Nasser turned to the Eastern Bloc and concluded a US$320,000,000 armaments agreement with Czechoslovakia on
27 September. Through the Czechoslovakian arms deal, the balance of power between Egypt and Israel was more or less
equalized and Nasser's role as the Arab leader defying the West was enhanced. At the Bandung Conference of Asian and
African states in April 1955, Nasser mediated discussions between the pro-Western, pro-Soviet, and neutralist
conference factions over the composition of the "Final Communique" addressing
colonialism in Africa and Asia and the fostering of global peace amid the Cold War between the West and the Soviet
Union. At Bandung Nasser sought a proclamation for the avoidance of international defense alliances, support for
the independence of Tunisia, Algeria, and Morocco from French rule, support for the Palestinian right of return,
and the implementation of UN resolutions regarding the Arab–Israeli conflict. He succeeded in lobbying the attendees
to pass resolutions on each of these issues, notably securing the strong support of China and India. Following Bandung,
Nasser officially adopted the "positive neutralism" of Yugoslavian president Josip Broz Tito and Indian Prime Minister
Jawaharlal Nehru as a principal theme of Egyptian foreign policy regarding the Cold War. Nasser was welcomed by large
crowds of people lining the streets of Cairo on his return to Egypt on 2 May and was widely heralded in the press
for his achievements and leadership in the conference. Consequently, Nasser's prestige was greatly boosted as was
his self-confidence and image. In January 1956, the new Constitution of Egypt was drafted, entailing the establishment
of a single-party system under the National Union (NU), a movement Nasser described as the "cadre through which we
will realize our revolution". The NU was a reconfiguration of the Liberation Rally, which Nasser determined had failed
in generating mass public participation. In the new movement, Nasser attempted to incorporate more citizens, approved
by local-level party committees, in order to solidify popular backing for his government. The NU would select a nominee
for the presidential election whose name would be provided for public approval. Nasser's nomination for the post
and the new constitution were put to public referendum on 23 June and each was approved by an overwhelming majority.
A 350-member National Assembly was established, elections for which were held in July 1957. Nasser had ultimate approval
over all the candidates. The constitution granted women's suffrage, prohibited gender-based discrimination, and entailed
special protection for women in the workplace. Coinciding with the new constitution and Nasser's presidency, the
RCC dissolved itself and its members resigned their military commissions as part of the transition to civilian rule.
During the deliberations surrounding the establishment of a new government, Nasser began a process of sidelining
his rivals among the original Free Officers, while elevating his closest allies to high-ranking positions in the
cabinet. After the three-year transition period ended with Nasser's official assumption of power, his domestic and
independent foreign policies increasingly collided with the regional interests of the UK and France. The latter condemned
his strong support for Algerian independence, and the UK's Eden government was agitated by Nasser's campaign against
the Baghdad Pact. In addition, Nasser's adherence to neutralism regarding the Cold War, recognition of communist
China, and arms deal with the Eastern bloc alienated the United States. On 19 July 1956, the US and UK abruptly withdrew
their offer to finance construction of the Aswan Dam, citing concerns that Egypt's economy would be overwhelmed by
the project. Nasser was informed of the British–American withdrawal via a news statement while aboard a plane returning
to Cairo from Belgrade, and took great offense. Although ideas for nationalizing the Suez Canal were in the offing
after the UK agreed to withdraw its military from Egypt in 1954 (the last British troops left on 13 June 1956), journalist
Mohamed Hassanein Heikal asserts that Nasser made the final decision to nationalize the waterway between 19 and 20
July. Nasser himself would later state that he decided on 23 July, after studying the issue and deliberating with
some of his advisers from the dissolved RCC, namely Boghdadi and technical specialist Mahmoud Younis, beginning on
21 July. The rest of the RCC's former members were informed of the decision on 24 July, while the bulk of the cabinet
was unaware of the nationalization scheme until hours before Nasser publicly announced it. According to Ramadan,
Nasser's decision to nationalize the canal was a solitary decision, taken without consultation. On 26 July 1956,
Nasser gave a speech in Alexandria announcing the nationalization of the Suez Canal Company as a means to fund the
Aswan Dam project in light of the British–American withdrawal. In the speech, he denounced British imperialism in
Egypt and British control over the canal company's profits, and upheld that the Egyptian people had a right to sovereignty
over the waterway, especially since "120,000 Egyptians had died (sic)" building it. The motion was technically in
breach of the international agreement he had signed with the UK on 19 October 1954, although he ensured that all
existing stockholders would be paid off. The nationalization announcement was greeted very emotionally by the audience
and, throughout the Arab world, thousands entered the streets shouting slogans of support. US ambassador Henry A.
Byroade stated, "I cannot overemphasize [the] popularity of the Canal Company nationalization within Egypt, even
among Nasser's enemies." Egyptian political scientist Mahmoud Hamad wrote that, prior to 1956, Nasser had consolidated
control over Egypt's military and civilian bureaucracies, but it was only after the canal's nationalization that
he gained near-total popular legitimacy and firmly established himself as the "charismatic leader" and "spokesman
for the masses not only in Egypt, but all over the Third World". According to Aburish, this was Nasser's largest
pan-Arab triumph at the time and "soon his pictures were to be found in the tents of Yemen, the souks of Marrakesh,
and the posh villas of Syria". The official reason given for the nationalization was that funds from the canal would
be used for the construction of the dam in Aswan. That same day, Egypt closed the canal to Israeli shipping. France
and the UK, the largest shareholders in the Suez Canal Company, saw its nationalization as yet another hostile measure
aimed at them by the Egyptian government. Nasser was aware that the canal's nationalization would instigate an international
crisis and believed the prospect of military intervention by the two countries was 80 per cent likely. He believed,
however, that the UK would not be able to intervene militarily for at least two months after the announcement, and
dismissed Israeli action as "impossible". In early October, the UN Security Council met on the matter of the canal's
nationalization and adopted a resolution recognizing Egypt's right to control the canal as long as it continued to
allow passage through it for foreign ships. According to Heikal, after this agreement, "Nasser estimated that the
danger of invasion had dropped to 10 per cent". Shortly thereafter, however, the UK, France, and Israel made a secret
agreement to take over the Suez Canal, occupy the Suez Canal zone, and topple Nasser. On 29 October 1956, Israeli
forces crossed the Sinai Peninsula, overwhelmed Egyptian army posts, and quickly advanced to their objectives. Two
days later, British and French planes bombarded Egyptian airfields in the canal zone. Nasser ordered the military's
high command to withdraw the Egyptian Army from Sinai to bolster the canal's defenses. Moreover, he feared that if
the armored corps was dispatched to confront the Israeli invading force and the British and French subsequently landed
in the canal city of Port Said, Egyptian armor in the Sinai would be cut off from the canal and destroyed by the
combined tripartite forces. Amer strongly disagreed, insisting that Egyptian tanks meet the Israelis in battle. The
two had a heated exchange on 3 November, and Amer conceded. Nasser also ordered blockage of the canal by sinking
or otherwise disabling forty-nine ships at its entrance. Despite the commanded withdrawal of Egyptian troops, about
2,000 Egyptian soldiers were killed during engagement with Israeli forces, and some 5,000 Egyptian soldiers were
captured by the Israeli Army. Amer and Salah Salem proposed requesting a ceasefire, with Salem further recommending
that Nasser surrender himself to British forces. Nasser berated Amer and Salem, and vowed, "Nobody is going to surrender."
Nasser assumed military command. Despite the relative ease with which Sinai was occupied, Nasser's prestige at home
and among Arabs was undamaged. To counterbalance the Egyptian Army's dismal performance, Nasser authorized the distribution
of about 400,000 rifles to civilian volunteers and hundreds of militias were formed throughout Egypt, many led by
Nasser's political opponents. Nasser saw the confrontation with the invading forces at Port Said as the strategic and
psychological focal point of Egypt's defense. A third infantry battalion and hundreds of national
guardsmen were sent to the city as reinforcements, while two regular companies were dispatched to organize popular
resistance. Nasser and Boghdadi traveled to the canal zone to boost the morale of the armed volunteers. According
to Boghdadi's memoirs, Nasser described the Egyptian Army as "shattered" as he saw the wreckage of Egyptian military
equipment en route. When British and French forces landed in Port Said on 5–6 November, the city's local militia put up
a stiff resistance, resulting in street-to-street fighting. The Egyptian Army commander in the city was preparing
to request terms for a ceasefire, but Nasser ordered him to desist. The British-French forces managed to largely
secure the city by 7 November. Between 750 and 1,000 Egyptians were killed in the battle for Port Said. The U.S.
Eisenhower administration condemned the tripartite invasion, and supported UN resolutions demanding withdrawal and
a United Nations Emergency Force (UNEF) to be stationed in Sinai. Nasser commended Eisenhower, stating he played
the "greatest and most decisive role" in stopping the "tripartite conspiracy". By the end of December, British and
French forces had totally withdrawn from Egyptian territory, while Israel completed its withdrawal in March 1957
and released all Egyptian prisoners of war. As a result of the Suez Crisis, Nasser brought in a set of regulations
imposing rigorous requirements for residency and citizenship as well as forced expulsions, mostly affecting British
and French nationals and Jews with foreign nationality, as well as some Egyptian Jews. By 1957, pan-Arabism was the
dominant ideology of the Arab world, and the average Arab citizen considered Nasser his undisputed leader. Historian
Adeed Dawisha credited Nasser's status to his "charisma, bolstered by his perceived victory in the Suez Crisis".
The Cairo-based Voice of the Arabs radio station spread Nasser's ideas of united Arab action throughout the Arabic-speaking
world and historian Eugene Rogan wrote, "Nasser conquered the Arab world by radio." Lebanese sympathizers of Nasser
and the Egyptian embassy in Beirut—the press center of the Arab world—bought out Lebanese media outlets to further
disseminate Nasser's ideals. Nasser also enjoyed the support of Arab nationalist organizations, both civilian and
paramilitary, throughout the region. His followers were numerous and well-funded, but lacked any permanent structure
and organization. They called themselves "Nasserites", despite Nasser's objection to the label (he preferred the
term "Arab nationalists"). In January 1957, the US adopted the Eisenhower Doctrine and pledged to prevent the spread
of communism and its perceived agents in the Middle East. Although Nasser was an opponent of communism in the region,
his promotion of pan-Arabism was viewed as a threat by pro-Western states in the region. Eisenhower tried to isolate
Nasser and reduce his regional influence by attempting to transform King Saud into a counterweight. Also in January,
the elected Jordanian prime minister and Nasser supporter Sulayman al-Nabulsi brought Jordan into a military pact
with Egypt, Syria, and Saudi Arabia. Relations between Nasser and King Hussein deteriorated in April when Hussein
implicated Nasser in two coup attempts against him—although Nasser's involvement was never established—and dissolved
al-Nabulsi's cabinet. Nasser subsequently slammed Hussein on Cairo radio as being "a tool of the imperialists". Relations
with King Saud also became antagonistic as the latter began to fear that Nasser's increasing popularity in Saudi
Arabia was a genuine threat to the royal family's survival. Despite opposition from the governments of Jordan, Saudi
Arabia, Iraq, and Lebanon, Nasser maintained his prestige among their citizens and those of other Arab countries.
By the end of 1957, Nasser had nationalized all remaining British and French assets in Egypt, including the tobacco,
cement, pharmaceutical, and phosphate industries. When efforts to offer tax incentives and attract outside investments
yielded no tangible results, he nationalized more companies and made them a part of his economic development organization.
He stopped short of total government control: two-thirds of the economy was still in private hands. This effort achieved
a measure of success, with increased agricultural production and investment in industrialization. Nasser initiated
the Helwan steelworks, which subsequently became Egypt's largest enterprise, providing the country with products and
tens of thousands of jobs. Nasser also decided to cooperate with the Soviet Union in the construction of the Aswan
Dam to replace the withdrawal of US funds. As political instability grew in Syria, delegations from the country were
sent to Nasser demanding immediate unification with Egypt. Nasser initially turned down the request, citing the two
countries' incompatible political and economic systems, lack of contiguity, the Syrian military's record of intervention
in politics, and the deep factionalism among Syria's political forces. However, in January 1958, a second Syrian
delegation managed to convince Nasser of an impending communist takeover and a consequent slide to civil strife.
Nasser subsequently opted for union, albeit on the condition that it would be a total political merger with him as
its president, to which the delegates and Syrian president Shukri al-Quwatli agreed. On 1 February, the United Arab
Republic (UAR) was proclaimed and, according to Dawisha, the Arab world reacted in "stunned amazement, which quickly
turned into uncontrolled euphoria." Nasser ordered a crackdown against Syrian communists, dismissing many of them
from their governmental posts. While Nasser was in Syria, King Saud planned to have him assassinated on his return
flight to Cairo. On 4 March, Nasser addressed the masses in Damascus and waved before them the Saudi check given
to Syrian security chief and Nasser supporter Abdel Hamid Sarraj to shoot down Nasser's plane. As a consequence of
Saud's scheme, he was forced by senior members of the Saudi royal family to informally cede most of his powers to
his brother, King Faisal, a major opponent of Nasser and advocate for pan-Islamic unity over pan-Arabism. A day after
announcing the attempt on his life, Nasser established a new provisional constitution proclaiming a 600-member National
Assembly (400 from Egypt and 200 from Syria) and the dissolution of all political parties. Nasser gave each of the
provinces two vice-presidents: Boghdadi and Amer in Egypt, and Sabri al-Asali and Akram al-Hawrani in Syria. Nasser
then left for Moscow to meet with Nikita Khrushchev. At the meeting, Khrushchev pressed Nasser to lift the ban on
the Communist Party, but Nasser refused, stating it was an internal matter which was not a subject of discussion
with outside powers. Khrushchev was reportedly taken aback and denied he had meant to interfere in the UAR's affairs.
The matter was settled as both leaders sought to prevent a rift between their two countries. In Lebanon, clashes
between pro-Nasser factions and supporters of staunch Nasser opponent, then-President Camille Chamoun, culminated
in civil strife by May. The former sought to unite with the UAR, while the latter sought Lebanon's continued independence.
Nasser delegated oversight of the issue to Sarraj, who provided limited aid to Nasser's Lebanese supporters through
money, light arms, and officer training—short of the large-scale support that Chamoun alleged. Nasser did not covet
Lebanon, seeing it as a "special case", but sought to prevent Chamoun from a second presidential term. On 14 July,
Iraqi army officers Abdel Karim Qasim and Abdel Salam Aref overthrew the Iraqi monarchy and, the next day, Iraqi
prime minister and Nasser's chief Arab antagonist, Nuri al-Said, was killed. Nasser recognized the new government
and stated that "any attack on Iraq was tantamount to an attack on the UAR". On 15 July, US marines landed in Lebanon,
and British special forces in Jordan, upon the request of those countries' governments to prevent them from falling
to pro-Nasser forces. Nasser felt that the revolution in Iraq left the road for pan-Arab unity unblocked. On 19 July,
for the first time, he declared that he was opting for full Arab union, although he had no plan to merge Iraq with
the UAR. While most members of the Iraqi Revolutionary Command Council (RCC) favored Iraqi-UAR unity, Qasim sought
to keep Iraq independent and resented Nasser's large popular base in the country. In the fall of 1958, Nasser formed
a tripartite committee consisting of Zakaria Mohieddin, al-Hawrani, and Salah Bitar to oversee developments in Syria.
By moving the latter two, who were Ba'athists, to Cairo, he neutralized important political figures who had their
own ideas about how Syria should be run. He put Syria under Sarraj, who effectively reduced the province to a police
state by imprisoning and exiling communists and landholders who objected to the introduction of Egyptian agricultural
reform in Syria. Following the Lebanese election of Fuad Chehab in September 1958, relations between
Lebanon and the UAR improved considerably. On 25 March 1959, Chehab and Nasser met at the Lebanese–Syrian border
and compromised on an end to the Lebanese crisis. Relations between Nasser and Qasim grew increasingly bitter on
9 March, after Qasim's forces suppressed a rebellion in Mosul, launched a day earlier by a pro-Nasser Iraqi RCC officer
backed by UAR authorities. Nasser had considered dispatching troops to aid his Iraqi sympathizers, but decided against
it. He clamped down on Egyptian communist activity due to the key backing Iraqi communists provided Qasim. Several
influential communists were arrested, including Nasser's old comrade Khaled Mohieddin, who had been allowed to re-enter
Egypt in 1956. Opposition to the union mounted among some of Syria's key elements, namely the socioeconomic, political,
and military elites. In response to Syria's worsening economy, which Nasser attributed to its control by the bourgeoisie,
in July 1961, Nasser decreed socialist measures that nationalized wide-ranging sectors of the Syrian economy. He
also dismissed Sarraj in September to curb the growing political crisis. Aburish states that Nasser was not fully
capable of addressing Syrian problems because they were "foreign to him". In Egypt, the economic situation was more
positive, with a GNP growth of 4.5 percent and a rapid growth of industry. In 1960, Nasser nationalized the Egyptian
press, which had already been cooperating with his government, in order to steer coverage towards the country's socioeconomic
issues and galvanize public support for his socialist measures. On 28 September 1961, secessionist army units launched
a coup in Damascus, declaring Syria's secession from the UAR. In response, pro-union army units in northern Syria
revolted and pro-Nasser protests occurred in major Syrian cities. Nasser sent Egyptian special forces to Latakia
to bolster his allies, but withdrew them two days later, citing a refusal to allow inter-Arab fighting. Addressing
the UAR's breakup on 5 October, Nasser accepted personal responsibility and declared that Egypt would recognize an
elected Syrian government. He privately blamed interference by hostile Arab governments. According to Heikal, Nasser
suffered something resembling a nervous breakdown after the dissolution of the union; he began to smoke more heavily
and his health began to deteriorate. Nasser's regional position changed unexpectedly when Yemeni officers led by
Nasser supporter Abdullah al-Sallal overthrew Imam al-Badr of North Yemen on 27 September 1962. Al-Badr and his tribal
partisans began receiving increasing support from Saudi Arabia to help reinstate the kingdom, while Nasser subsequently
accepted a request by Sallal to militarily aid the new government on 30 September. Consequently, Egypt became increasingly
embroiled in the drawn-out civil war until it withdrew its forces in 1967. Most of Nasser's old colleagues had questioned
the wisdom of continuing the war, but Amer reassured Nasser of their coming victory. Nasser later remarked in 1968
that intervention in Yemen was a "miscalculation". On 8 February 1963, a military coup in Iraq led by a Ba'athist–Nasserist
alliance toppled Qasim, who was subsequently shot dead. Abdel Salam Aref, a Nasserist, was chosen to be the new president.
A similar alliance toppled the Syrian government on 8 March. On 14 March, the new Iraqi and Syrian governments sent
delegations to Nasser to push for a new Arab union. At the meeting, Nasser lambasted the Ba'athists for "facilitating"
Syria's split from the UAR, and asserted that he was the "leader of the Arabs". A transitional unity agreement stipulating
a federal system was signed by the parties on 17 April and the new union was set to be established in May 1965. However,
the agreement fell apart weeks later when Syria's Ba'athists purged Nasser's supporters from the officers corps.
A failed counter-coup by a Nasserist colonel followed, after which Nasser condemned the Ba'athists as "fascists".
In January 1964, Nasser called for an Arab League summit in Cairo to establish a unified Arab response against Israel's
plans to divert the Jordan River's waters for economic purposes, which Syria and Jordan deemed an act of war. Nasser
blamed Arab divisions for what he deemed "the disastrous situation". He discouraged Syria and Palestinian guerrillas
from provoking the Israelis, conceding that he had no plans for war with Israel. During the summit, Nasser developed
cordial relations with King Hussein, and ties were mended with the rulers of Saudi Arabia, Syria, and Morocco. In
May, Nasser moved to formally share his leadership position over the Palestine issue by initiating the creation of
the Palestine Liberation Organization (PLO). In practice, Nasser used the PLO to wield control over the Palestinian
fedayeen. Its head was to be Ahmad Shukeiri, Nasser's personal nominee. After years of foreign policy coordination
and developing ties, Nasser, President Sukarno of Indonesia, President Tito of Yugoslavia, and Prime Minister Nehru
of India founded the Non-Aligned Movement (NAM) in 1961. Its declared purpose was to solidify international non-alignment
and promote world peace amid the Cold War, end colonization, and increase economic cooperation among developing countries.
In 1964, Nasser was made president of the NAM and held the second conference of the organization in Cairo. Nasser
played a significant part in the strengthening of African solidarity in the late 1950s and early 1960s, although
his continental leadership role had increasingly passed to Algeria since 1962. During this period, Nasser made Egypt
a refuge for anti-colonial leaders from several African countries and allowed the broadcast of anti-colonial propaganda
from Cairo. Beginning in 1958, Nasser had a key role in the discussions among African leaders that led to the establishment
of the Organisation of African Unity (OAU) in 1963. In 1961, Nasser sought to firmly establish Egypt as the leader
of the Arab world and to promote a second revolution in Egypt with the purpose of merging Islamic and socialist thinking.
To achieve this, he initiated several reforms to modernize al-Azhar, which serves as the de facto leading authority
in Sunni Islam, and to ensure its prominence over the Muslim Brotherhood and the more conservative Wahhabism promoted
by Saudi Arabia. Nasser had used al-Azhar's most willing ulema (scholars) as a counterweight to the Brotherhood's
Islamic influence, starting in 1953. Following Syria's secession, Nasser grew concerned with Amer's inability to
train and modernize the army, and with the state within a state Amer had created in the military command and intelligence
apparatus. In late 1961, Nasser established the Presidential Council and decreed it the authority to approve all
senior military appointments, instead of leaving this responsibility solely to Amer. Moreover, he instructed that
the primary criterion for promotion should be merit and not personal loyalties. Nasser retracted the initiative after
Amer's allies in the officers corps threatened to mobilize against him. In early 1962 Nasser again attempted to wrest
control of the military command from Amer. Amer responded by directly confronting Nasser for the first time and secretly
rallying his loyalist officers. Nasser ultimately backed down, wary of a possible violent confrontation between the
military and his civilian government. According to Boghdadi, the stress caused by the UAR's collapse and Amer's increasing
autonomy forced Nasser, who already had diabetes, to practically live on painkillers from then on. In October 1961,
Nasser embarked on a major nationalization program for Egypt, believing the total adoption of socialism was the answer
to his country's problems and would have prevented Syria's secession. In order to organize and solidify his popular
base with Egypt's citizens and counter the army's influence, Nasser introduced the National Charter in 1962 and a
new constitution. The charter called for universal health care, affordable housing, vocational schools, greater women's
rights and a family planning program, as well as widening the Suez Canal. Nasser also attempted to maintain oversight
of the country's civil service to prevent it from inflating and consequently becoming a burden to the state. New
laws provided workers with a minimum wage, profit shares, free education, free health care, reduced working hours,
and encouragement to participate in management. Land reforms guaranteed the security of tenant farmers, promoted
agricultural growth, and reduced rural poverty. As a result of the 1962 measures, government ownership of Egyptian
business reached 51 percent, and the National Union was renamed the Arab Socialist Union (ASU). With these measures
came more domestic repression, as thousands of Islamists were imprisoned, including dozens of military officers.
Nasser's tilt toward a Soviet-style system led his aides Boghdadi and Hussein el-Shafei to submit their resignations
in protest. In Egypt's presidential referendum, Nasser was re-elected to a second term as UAR president
and took his oath on 25 March 1965. He was the only candidate for the position, with virtually all of his political
opponents forbidden by law from running for office, and his fellow party members reduced to mere followers. That
same year, Nasser had the Muslim Brotherhood chief ideologue Sayyed Qutb imprisoned. Qutb was charged and found guilty
by the court of plotting to assassinate Nasser, and was executed in 1966. Beginning in 1966, as Egypt's economy slowed
and government debt became increasingly burdensome, Nasser began to ease state control over the private sector, encouraging
state-owned bank loans to private business and introducing incentives to increase exports. During the 1960s, the Egyptian
economy went from sluggish growth to the verge of collapse, society became less free, and Nasser's appeal waned
considerably. In mid May 1967, the Soviet Union issued warnings to Nasser of an impending Israeli attack on Syria,
although Chief of Staff Mohamed Fawzi considered the warnings to be "baseless". According to Kandil, without Nasser's
authorization, Amer used the Soviet warnings as a pretext to dispatch troops to Sinai on 14 May, and Nasser subsequently
demanded UNEF's withdrawal. Earlier that day, Nasser received a warning from King Hussein of Israeli-American collusion
to drag Egypt into war. The message had been originally received by Amer on 2 May, but was withheld from Nasser until
the Sinai deployment on 14 May. Although in the preceding months, Hussein and Nasser had been accusing each other
of avoiding a fight with Israel, Hussein was nonetheless wary that an Egyptian-Israeli war would risk the West Bank's
occupation by Israel. Nasser still felt that the US would restrain Israel from attacking due to assurances that he
received from the US and Soviet Union. In turn, he also reassured both powers that Egypt would only act defensively.
On 21 May, Amer asked Nasser to order the Straits of Tiran blockaded, a move Nasser believed Israel would use as
a casus belli. Amer reassured him that the army was prepared for confrontation, but Nasser doubted Amer's assessment
of the military's readiness. According to Nasser's vice president Zakaria Mohieddin, although "Amer had absolute
authority over the armed forces, Nasser had his ways of knowing what was really going on". Moreover, Amer anticipated
an impending Israeli attack and advocated a preemptive strike. Nasser refused the call, having determined that the
air force lacked pilots and Amer's handpicked officers were incompetent. Still, Nasser concluded that if Israel attacked,
Egypt's quantitative advantage in manpower and arms could stave off Israeli forces for at least two weeks, allowing
for diplomacy towards a ceasefire. Towards the end of May, Nasser increasingly exchanged his positions of deterrence
for deference to the inevitability of war, under increased pressure to act by both the general Arab populace and
various Arab governments. On 26 May Nasser declared, "our basic objective will be to destroy Israel". On 30 May,
King Hussein committed Jordan in an alliance with Egypt and Syria. According to Sadat, it was only when the Israelis
cut off the Egyptian garrison at Sharm el-Sheikh that Nasser became aware of the situation's gravity. After hearing
of the attack, he rushed to army headquarters to inquire about the military situation. The simmering conflict between
Nasser and Amer subsequently came to the fore, and officers present reported the pair burst into "a nonstop shouting
match". The Supreme Executive Committee, set up by Nasser to oversee the conduct of the war, attributed the repeated
Egyptian defeats to the Nasser–Amer rivalry and Amer's overall incompetence. According to Egyptian diplomat Ismail
Fahmi, who became foreign minister during Sadat's presidency, the Israeli invasion and Egypt's consequent defeat
was a result of Nasser's dismissal of all rational analysis of the situation and his undertaking of a series of irrational
decisions. During the first four days of the war, the general population of the Arab world believed Arab radio stations'
fabricated reports of imminent Arab victory. On 9 June, Nasser appeared on television to inform Egypt's citizens of their
country's defeat. He announced his resignation on television later that day, and ceded all presidential powers to
his then-Vice President Zakaria Mohieddin, who had no prior information of this decision and refused to accept the
post. Hundreds of thousands of sympathizers poured into the streets in mass demonstrations throughout Egypt and across
the Arab world rejecting his resignation, chanting, "We are your soldiers, Gamal!" Nasser retracted his decision
the next day. On 11 July, Nasser replaced Amer with Mohamed Fawzi as general commander, over the protestations of
Amer's loyalists in the military, 600 of whom marched on army headquarters and demanded Amer's reinstatement. After
Nasser sacked thirty of the loyalists in response, Amer and his allies devised a plan to topple him on 27 August.
Nasser was tipped off about their activities and, after several invitations, he convinced Amer to meet him at his
home on 24 August. Nasser confronted Amer about the coup plot, which he denied before being arrested by Mohieddin.
Amer committed suicide on 14 September. Despite his souring relationship with Amer, Nasser spoke of losing "the person
closest to [him]". Thereafter, Nasser began a process of depoliticizing the armed forces, arresting dozens of leading
military and intelligence figures loyal to Amer. At the 29 August Arab League summit in Khartoum, Nasser's usual
commanding position had receded as the attending heads of state expected Saudi King Faisal to lead. A ceasefire in
the Yemen War was declared and the summit concluded with the Khartoum Resolution. The Soviet Union soon resupplied
the Egyptian military with about half of its former arsenals and broke diplomatic relations with Israel. Nasser cut
relations with the US following the war, and, according to Aburish, his policy of "playing the superpowers against
each other" ended. In November, Nasser accepted UN Resolution 242, which called for Israel's withdrawal from territories
acquired in the war. His supporters claimed Nasser's move was meant to buy time to prepare for another confrontation
with Israel, while his detractors believed his acceptance of the resolution signaled a waning interest in Palestinian
independence. Nasser appointed himself to the additional roles of prime minister and supreme commander of the armed
forces on 19 June 1967. Angry at the military court's perceived leniency with air force officers charged with negligence
during the 1967 war, workers and students launched protests calling for major political reforms in late February
1968. Nasser responded to the demonstrations, the most significant public challenge to his rule since workers' protests
in March 1954, by removing most military figures from his cabinet and appointing eight civilians in place of several
high-ranking members of the Arab Socialist Union (ASU). By 3 March, Nasser directed Egypt's intelligence apparatus
to focus on external rather than domestic espionage, and declared the "fall of the mukhabarat state". On 30 March,
Nasser proclaimed a manifesto stipulating the restoration of civil liberties, greater parliamentary independence
from the executive, major structural changes to the ASU, and a campaign to rid the government of corrupt elements.
A public referendum approved the proposed measures in May, and elections were subsequently held for the Supreme Executive
Committee, the ASU's highest decision-making body. Observers noted that the declaration signaled an important shift
from political repression to liberalization, although its promises would largely go unfulfilled. Meanwhile, in January
1968, Nasser commenced the War of Attrition to reclaim territory captured by Israel, ordering attacks against Israeli
positions east of the then-blockaded Suez Canal. In March, Nasser offered Yasser Arafat's Fatah movement arms and
funds after their performance against Israeli forces in the Battle of Karameh that month. He also advised Arafat
to think of peace with Israel and the establishment of a Palestinian state comprising the West Bank and the Gaza
Strip. Nasser effectively ceded his leadership of the "Palestine issue" to Arafat. Israel retaliated against Egyptian
shelling with commando raids, artillery shelling and air strikes. This resulted in an exodus of civilians from Egyptian
cities along the Suez Canal's western bank. Nasser ceased all military activities and began a program to build a
network of internal defenses, while receiving the financial backing of various Arab states. The war resumed in March
1969. In November, Nasser brokered an agreement between the PLO and the Lebanese military that granted Palestinian
guerrillas the right to use Lebanese territory to attack Israel. In June 1970, Nasser accepted the US-sponsored Rogers
Plan, which called for an end to hostilities and an Israeli withdrawal from Egyptian territory, but it was rejected
by Israel, the PLO, and most Arab states except Jordan. Nasser had initially rejected the plan, but conceded under
pressure from the Soviet Union, which feared that escalating regional conflict could drag it into a war with the
US. He also determined that a ceasefire could serve as a tactical step toward the strategic goal of recapturing the
Suez Canal. Nasser forestalled any movement toward direct negotiations with Israel. In dozens of speeches and statements,
Nasser posited that any direct peace talks with Israel were tantamount to surrender. Following Nasser's
acceptance, Israel agreed to a ceasefire and Nasser used the lull in fighting to move SAM missiles towards the canal
zone. As the summit closed on 28 September 1970, hours after escorting the last Arab leader to leave, Nasser suffered
a heart attack. He was immediately transported to his house, where his physicians tended to him. Nasser died several
hours later, around 6:00 p.m. Heikal, Sadat, and Nasser's wife Tahia were at his deathbed. According to his doctor,
al-Sawi Habibi, Nasser's likely cause of death was arteriosclerosis, varicose veins, and complications from long-standing
diabetes. Nasser was a heavy smoker with a family history of heart disease—two of his brothers died in their fifties
from the same condition. The state of Nasser's health was not known to the public prior to his death. He had previously
suffered heart attacks in 1966 and September 1969. Following the announcement of Nasser's death, Egypt and the Arab
world were in a state of shock. Nasser's funeral procession through Cairo on 1 October was attended by at least five
million mourners. The 10-kilometer (6.2 mi) procession to his burial site began at the old RCC headquarters with
a flyover by MiG-21 jets. His flag-draped coffin was attached to a gun carriage pulled by six horses and led by a
column of cavalrymen. All Arab heads of state attended, with the exception of Saudi King Faisal. King Hussein and
Arafat cried openly, and Muammar Gaddafi of Libya fainted from emotional distress twice. A few major non-Arab dignitaries
were present, including Soviet Premier Alexei Kosygin and French Prime Minister Jacques Chaban-Delmas. Because of
his ability to motivate nationalistic passions, "men, women, and children wept and wailed in the streets" after hearing
of his death, according to Nutting. The general Arab reaction was one of mourning, with thousands of people pouring
onto the streets of major cities throughout the Arab world. Over a dozen people were killed in Beirut as a result
of the chaos, and in Jerusalem, roughly 75,000 Arabs marched through the Old City chanting, "Nasser will never die."
As a testament to his unchallenged leadership of the Arab people, following his death, the headline of the Lebanese
Le Jour read, "One hundred million human beings—the Arabs—are orphans." Sherif Hetata, a former political prisoner
and later a member of Nasser's ASU, said that "Nasser's greatest achievement was his funeral. The world will never again
see five million people crying together." Nasser made Egypt fully independent of British influence, and the country
became a major power in the developing world under his leadership. One of Nasser's main domestic efforts was to establish
social justice, which he deemed a prerequisite to liberal democracy. During his presidency, ordinary citizens enjoyed
unprecedented access to housing, education, jobs, health services and nourishment, as well as other forms of social
welfare, while feudalistic influence waned. By the end of his presidency, employment and working conditions improved
considerably, although poverty was still high in the country and substantial resources allocated for social welfare
had been diverted to the war effort. The national economy grew significantly through agrarian reform, major modernization
projects such as the Helwan steel works and the Aswan Dam, and nationalization schemes such as that of the Suez Canal.
However, the marked economic growth of the early 1960s took a downturn for the remainder of the decade, only recovering
in 1970. Egypt experienced a "golden age" of culture during Nasser's presidency, according to historian Joel Gordon,
particularly in film, television, theater, radio, literature, fine arts, comedy, poetry, and music. Egypt under Nasser
dominated the Arab world in these fields, producing cultural icons. During Mubarak's presidency, Nasserist political
parties began to emerge in Egypt, the first being the Arab Democratic Nasserist Party (ADNP). The party carried minor
political influence, and splits between its members beginning in 1995 resulted in the gradual establishment of splinter
parties, including Hamdeen Sabahi's 1997 founding of Al-Karama. Sabahi came in third place during the 2012 presidential
election. Nasserist activists were among the founders of Kefaya, a major opposition force during Mubarak's rule.
On 19 September 2012, four Nasserist parties (the ADNP, Karama, the National Conciliation Party, and the Popular
Nasserist Congress Party) merged to form the United Nasserist Party. Nasser was known for his intimate relationship
with ordinary Egyptians. His availability to the public, despite assassination attempts against him, was unparalleled
among his successors. A skilled orator, Nasser gave 1,359 speeches between 1953 and 1970, a record for any Egyptian
head of state. Historian Elie Podeh wrote that a constant theme of Nasser's image was "his ability to represent Egyptian
authenticity, in triumph or defeat". The national press also helped to foster his popularity and profile—more so
after the nationalization of state media. Historian Tarek Osman wrote: While Nasser was increasingly criticized by
Egyptian intellectuals following the Six-Day War and his death in 1970, the general public was persistently sympathetic
both during and after Nasser's life. According to political scientist Mahmoud Hamad, writing in 2008, "nostalgia
for Nasser is easily sensed in Egypt and all Arab countries today". General malaise in Egyptian society, particularly
during the Mubarak era, augmented nostalgia for Nasser's presidency, which increasingly became associated with the
ideals of national purpose, hope, social cohesion, and vibrant culture. Nasser's Egyptian detractors considered him
a dictator who thwarted democratic progress, imprisoned thousands of dissidents, and led a repressive administration
responsible for numerous human rights violations. Islamists in Egypt, particularly members of the politically persecuted
Brotherhood, viewed Nasser as oppressive, tyrannical, and demonic. Liberal writer Tawfiq al-Hakim described Nasser
as a "confused Sultan" who employed stirring rhetoric, but had no actual plan to achieve his stated goals. Some of
Nasser's liberal and Islamist critics in Egypt, including the founding members of the New Wafd Party and writer Jamal
Badawi, dismissed Nasser's popular appeal with the Egyptian masses during his presidency as being the product of
successful manipulation and demagoguery. Egyptian political scientist Alaa al-Din Desouki blamed the 1952 revolution's
shortcomings on Nasser's concentration of power, and Egypt's lack of democracy on Nasser's political style and his
government's limitations on freedom of expression and political participation. American political scientist Mark
Cooper asserted that Nasser's charisma and his direct relationship with the Egyptian people "rendered intermediaries
(organizations and individuals) unnecessary". He opined that Nasser's legacy was a "guarantee of instability" due
to Nasser's reliance on personal power and the absence of strong political institutions under his rule. Historian
Abd al-Azim Ramadan wrote that Nasser was an irrational and irresponsible leader, blaming his inclination to solitary
decision-making for Egypt's losses during the Suez War, among other events. Miles Copeland, Jr., once described
as Nasser's closest Western adviser, said that the barriers between Nasser and the outside world had grown so thick
that all but the information attesting to his infallibility, indispensability, and immortality had been filtered
out. Zakaria Mohieddin, who was Nasser's vice president, said that Nasser gradually changed during his reign. He
ceased consulting his colleagues and made more and more of the decisions himself. Although Nasser repeatedly said
that a war with Israel would start at a time of his own, or the Arabs', choosing, in 1967 he started a bluffing game, "but a
successful bluff means your opponent must not know which cards you are holding. In this case Nasser's opponent could
see his hand in the mirror and knew he was only holding a pair of deuces", while Nasser knew that his army was not yet
prepared. "All of this was out of character...His tendencies in this regard may have been accentuated by diabetes... That
was the only rational explanation for his actions in 1967". Through his actions and speeches, and because he was
able to symbolize the popular Arab will, Nasser inspired several nationalist revolutions in the Arab world. He defined
the politics of his generation and communicated directly with the public masses of the Arab world, bypassing the
various heads of states of those countries—an accomplishment not repeated by other Arab leaders. The extent of Nasser's
centrality in the region made it a priority for incoming Arab nationalist heads of state to seek good relations with
Egypt, in order to gain popular legitimacy from their own citizens. To varying degrees, Nasser's statist system of
government was continued in Egypt and emulated by virtually all Arab republics, namely Algeria, Syria, Iraq, Tunisia,
Yemen, Sudan, and Libya. Ahmed Ben Bella, Algeria's first president, was a staunch Nasserist. Abdullah al-Sallal
drove out the king of North Yemen in the name of Nasser's pan-Arabism. Other coups influenced by Nasser included
those that occurred in Iraq in July 1958 and Syria in 1963. Muammar Gaddafi, who overthrew the Libyan monarchy in
1969, considered Nasser his hero and sought to succeed him as "leader of the Arabs". Also in 1969, Colonel Gaafar
Nimeiry, a supporter of Nasser, took power in Sudan. The Arab Nationalist Movement (ANM) helped spread Nasser's pan-Arabist
ideas throughout the Arab world, particularly among the Palestinians, Syrians, and Lebanese, and in South Yemen,
the Persian Gulf, and Iraq. While many regional heads of state tried to emulate Nasser, Podeh opined that the "parochialism"
of successive Arab leaders "transformed imitation [of Nasser] into parody". In 1963, Egyptian director Youssef Chahine
produced the film El Nasser Salah El Dine ("Saladin The Victorious"), which intentionally drew parallels between
Saladin, considered a hero in the Arab world, and Nasser and his pan-Arabist policies. Nasser is played by Ahmed
Zaki in Mohamed Fadel's 1996 Nasser 56. The film set the Egyptian box office record at the time, and focused on Nasser
during the Suez Crisis. It is also considered a milestone in Egyptian and Arab cinema as the first film to dramatize
the role of a modern-day Arab leader. Together with the 1999 Syrian biopic Gamal Abdel Nasser, the films marked the
first biographical movies about contemporary public figures produced in the Arab world. In 1944, Nasser married Tahia
Kazem, the 22-year-old daughter of a wealthy Iranian father and an Egyptian mother, both of whom died when she was
young. She was introduced to Nasser through her brother, Abdel Hamid Kazim, a merchant friend of Nasser's, in 1943.
After their wedding, the couple moved into a house in Manshiyat al-Bakri, a suburb of Cairo, where they would live
for the rest of their lives. Nasser's entry into the officer corps in 1937 secured him relatively well-paid employment
in a society where most people lived in poverty. Nasser had few personal vices other than chain smoking. He maintained
18-hour workdays and rarely took time off for vacations. The combination of smoking and working long hours contributed
to his poor health. He was diagnosed with diabetes in the early 1960s and by the time of his death in 1970, he also
had arteriosclerosis, heart disease, and high blood pressure. He suffered two major heart attacks (in 1966 and 1969),
and was on bed rest for six weeks after the second episode. State media reported that Nasser's absence from the public
view at that time was a result of influenza.
Pope Saint John XXIII (Latin: Ioannes XXIII; Italian: Giovanni XXIII), born Angelo Giuseppe Roncalli[a] (Italian pronunciation:
[ˈandʒelo dʒuˈzɛppe roŋˈkalli]; 25 November 1881 – 3 June 1963), reigned as Pope from 28 October 1958 to his death
in 1963 and was canonized on 27 April 2014. Angelo Giuseppe Roncalli was the fourth of fourteen children born to
a family of sharecroppers who lived in a village in Lombardy. He was ordained to the priesthood on 10 August 1904
and served in a number of posts, including papal nuncio in France and a delegate to Bulgaria, Greece and Turkey.
In a consistory on 12 January 1953 Pope Pius XII made Roncalli a cardinal as the Cardinal-Priest of Santa Prisca
in addition to naming him as the Patriarch of Venice. Roncalli was elected pope on 28 October 1958 at age 76 after
11 ballots. His selection was unexpected, and Roncalli himself had come to Rome with a return train ticket to Venice.
He was the first pope to take the pontifical name of "John" upon election in more than 500 years, and his choice
settled the question of official numbering attached to this papal name, which had been complicated by the 15th-century Antipope John XXIII.
Pope John XXIII surprised those who expected him to be a caretaker pope by calling the historic Second Vatican Council
(1962–65), the first session opening on 11 October 1962. His passionate views on equality were summed up in his famous
statement, "We were all made in God's image, and thus, we are all Godly alike." John XXIII made many passionate speeches
during his pontificate, one of which he delivered in the middle of the night to the crowd gathered in St. Peter's Square
on the day he opened the Second Vatican Council: "Dear children, returning home, you will find children: give your
children a hug and say: This is a hug from the Pope!" Pope John XXIII did not live to see the Vatican Council through to
completion. He died of stomach cancer on 3 June 1963, four and a half years after his election and two months after
the completion of his final and famed encyclical, Pacem in terris. He was buried in the Vatican grottoes beneath
Saint Peter's Basilica on 6 June 1963 and his cause for canonization was opened on 18 November 1965 by his successor,
Pope Paul VI, who declared him a Servant of God. In addition to being named Venerable on 20 December 1999, he was
beatified on 3 September 2000 by Pope John Paul II alongside Pope Pius IX and three others. Following his beatification,
his body was moved on 3 June 2001 from its original place to the altar of Saint Jerome where it could be seen by
the faithful. On 5 July 2013, Pope Francis – bypassing the traditionally required second miracle – declared John
XXIII a saint, after unanimous agreement by a consistory, or meeting, of the College of Cardinals, based on the fact
that he was considered to have lived a virtuous, model lifestyle, and because of the good for the Church which had
come from his having opened the Second Vatican Council. He was canonised alongside Pope Saint John Paul II on 27
April 2014. John XXIII today is affectionately known as the "Good Pope" and in Italian, "il Papa buono". The Roman
Catholic Church celebrates his feast day not on the date of his death, June 3, as is usual, nor even on the day of
his papal inauguration (as is sometimes done with Popes who are Saints, such as with John Paul II) but on 11 October,
the day of the first session of the Second Vatican Council. This is understandable, since he was the one who had
had the idea for it and had convened it. On Thursday, 11 September 2014, Pope Francis added his optional memorial
to the worldwide General Roman Calendar of saints' feast days, in response to global requests. He is commemorated
on the date of his death, 3 June, by the Evangelical Lutheran Church in America and on the following day, 4 June,
by the Anglican Church of Canada and the Episcopal Church (United States). In February 1925, the Cardinal Secretary
of State Pietro Gasparri summoned him to the Vatican and informed him of Pope Pius XI's decision to appoint him as
the Apostolic Visitor to Bulgaria (1925–35). On 3 March, Pius XI also named him for consecration as titular archbishop
of Areopolis, Jordan. Roncalli was initially reluctant about a mission to Bulgaria, but he would soon relent. His
nomination as apostolic visitor was made official on 19 March. Roncalli was consecrated by Giovanni Tacci Porcelli
in the church of San Carlo al Corso in Rome. After he was consecrated, he introduced his family to Pope Pius XI.
He chose as his episcopal motto Obedientia et Pax ("Obedience and Peace"), which became his guiding motto. While
he was in Bulgaria, an earthquake struck in a town not too far from where he was. Unaffected, he wrote to his sisters
Ancilla and Maria and told them both that he was fine. On 30 November 1934, he was appointed Apostolic Delegate to
Turkey and Greece and titular archbishop of Mesembria, Bulgaria. He thus became known as "the Turcophile Pope" in
predominantly Muslim Turkish society. Roncalli took up this post in 1935 and used his office to help the
Jewish underground in saving thousands of refugees in Europe, leading some to consider him to be a Righteous Gentile
(see Pope John XXIII and Judaism). In October 1935, he led Bulgarian pilgrims to Rome and introduced them to Pope
Pius XI on 14 October. In February 1939, he received news from his sisters that his mother was dying. On 10 February
1939, Pope Pius XI died. Roncalli was unable to be with his mother at the end, as the death of a pontiff meant that he
would have to stay at his post until the election of a new pontiff. She died on 20 February 1939,
during the nine days of mourning for the late Pius XI. He was sent a letter by Cardinal Eugenio Pacelli, and Roncalli
later recalled that it was probably the last letter Pacelli sent before his election as Pope Pius XII on 2 March 1939.
Roncalli expressed happiness that Pacelli was elected, and, on radio, listened to the coronation of the new pontiff.
On 12 January 1953, he was appointed Patriarch of Venice and, accordingly, raised to the rank of Cardinal-Priest
of Santa Prisca by Pope Pius XII. Roncalli left France for Venice on 23 February 1953 stopping briefly in Milan and
then to Rome. On 15 March 1953, he took possession of his new diocese in Venice. As a sign of his esteem, the President
of France, Vincent Auriol, claimed the ancient privilege possessed by French monarchs and bestowed the red biretta
on Roncalli at a ceremony in the Élysée Palace. It was around this time that he, with the aid of Monsignor Bruno
Heim, formed his coat of arms with a lion of Saint Mark on a white ground. Three months later, Auriol also made
Roncalli a Commander of the Legion of Honour. His sister Ancilla was diagnosed with stomach cancer in the early
1950s. Roncalli's last letter to her was dated 8 November 1953, in which he promised to visit
her within the next week. He could not keep that promise, as Ancilla died on 11 November 1953 at the time when he
was consecrating a new church in Venice. He attended her funeral back in his hometown. In his will around this time,
he mentioned that he wished to be buried in the crypt of Saint Mark's in Venice with some of his predecessors rather
than with the family in Sotto il Monte. Following the death of Pope Pius XII on 9 October 1958, Roncalli watched
the live funeral on his last full day in Venice on 11 October. His journal was specifically concerned with the funeral
and the abused state of the late pontiff's corpse. Roncalli left Venice for the conclave in Rome well aware that
he was papabile,[b] and after eleven ballots he was elected to succeed the late Pius XII; the result came as no surprise
to him, though he had arrived at the Vatican with a return train ticket to Venice. Many had considered
Giovanni Battista Montini, the Archbishop of Milan, a possible candidate, but, although he was the archbishop of
one of the most ancient and prominent sees in Italy, he had not yet been made a cardinal. Though his absence from
the 1958 conclave did not make him ineligible – under Canon Law any Catholic male who is capable of receiving priestly
ordination and episcopal consecration may be elected – the College of Cardinals usually chose the new pontiff from
among the Cardinals who head archdioceses or departments of the Roman Curia that attend the papal conclave. At the
time, as opposed to contemporary practice, the participating Cardinals did not have to be below age 80 to vote; there
were few Eastern-rite Cardinals, and no Cardinals who were just priests at the time of their elevation. Roncalli
was summoned to the final ballot of the conclave at 4:00 pm. He was elected pope at 4:30 pm with a total of 38 votes.
After the long pontificate of Pope Pius XII, the cardinals chose a man who – it was presumed because of his advanced
age – would be a short-term or "stop-gap" pope. They wished to choose a candidate who would do little during the
new pontificate. Upon his election, Cardinal Eugene Tisserant asked him the ritual questions of whether he would
accept and if so, what name he would take for himself. Roncalli gave the first of his many surprises when he chose
"John" as his regnal name. Roncalli's exact words were "I will be called John". This was the first time in over 500
years that this name had been chosen; previous popes had avoided its use since the time of the Antipope John XXIII
during the Western Schism several centuries before. Far from being a mere "stopgap" pope, to great excitement, John
XXIII called for an ecumenical council fewer than ninety years after the First Vatican Council (Vatican I's predecessor,
the Council of Trent, had been held in the 16th century). This decision was announced on 29 January 1959 at the Basilica
of Saint Paul Outside the Walls. Cardinal Giovanni Battista Montini, who later became Pope Paul VI, remarked to Giulio
Bevilacqua that "this holy old boy doesn't realise what a hornet's nest he's stirring up". From the Second Vatican
Council came changes that reshaped the face of Catholicism: a comprehensively revised liturgy, a stronger emphasis
on ecumenism, and a new approach to the world. John XXIII was an advocate for human rights which included the unborn
and the elderly. He wrote about human rights in his Pacem in terris. He wrote, "Man has the right to live. He has
the right to bodily integrity and to the means necessary for the proper development of life, particularly food, clothing,
shelter, medical care, rest, and, finally, the necessary social services. In consequence, he has the right to be
looked after in the event of ill health; disability stemming from his work; widowhood; old age; enforced unemployment;
or whenever through no fault of his own he is deprived of the means of livelihood." Maintaining continuity with his
predecessors, John XXIII continued the gradual reform of the Roman liturgy, and published changes that resulted in
the 1962 Roman Missal, the last typical edition containing the Tridentine Mass established in 1570 by Pope Pius V
at the request of the Council of Trent and whose continued use Pope Benedict XVI authorized in 2007, under the conditions
indicated in his motu proprio Summorum Pontificum. In response to the directives of the Second Vatican Council, later
editions of the Roman Missal present the 1970 form of the Roman Rite. On 11 October 1962, the first session of the
Second Vatican Council was held in the Vatican. He gave the Gaudet Mater Ecclesia speech, which served as the opening
address for the council. The day was largely devoted to electing members of the several council commissions that would work on the issues presented to the council. That same night, following the conclusion of the first session, the crowd in Saint Peter's Square chanted and called for John XXIII to appear at the window and address them. The first session ended in a solemn ceremony on 8 December 1962, with the next session scheduled
to occur in 1963 from 12 May to 29 June – this was announced on 12 November 1962. John XXIII's closing speech made
subtle references to Pope Pius IX, and he had expressed the desire to see Pius IX beatified and eventually canonized.
In his journal in 1959 during a spiritual retreat, John XXIII made this remark: "I always think of Pius IX of holy
and glorious memory, and by imitating him in his sacrifices, I would like to be worthy to celebrate his canonization".
Pope John XXIII offered to mediate between US President John F. Kennedy and Nikita Khrushchev during the Cuban Missile
Crisis in October 1962. Both men applauded the pope for his deep commitment to peace. Khrushchev would later send
a message via Norman Cousins expressing his best wishes for the ailing pontiff's health. John XXIII
personally typed and sent a message back to him, thanking him for his letter. Cousins, meanwhile, travelled to New
York City and ensured that John would become Time magazine's 'Man of the Year'. John XXIII became the first Pope
to receive the title, followed by John Paul II in 1994 and Francis in 2013. On 10 May 1963, John XXIII received the
Balzan Prize in private at the Vatican, but deflected credit from himself to the five popes of his lifetime, from Pope Leo XIII to Pius XII. On 11 May, the Italian President Antonio Segni officially awarded Pope John XXIII the Balzan Prize for his commitment to peace. While in the car en route to the official ceremony, he suffered great
stomach pains but insisted on meeting with Segni to receive the award in the Quirinal Palace, refusing to do so within
the Vatican. He stated that it would have been an insult to honour a pontiff on the remains of the crucified Saint
Peter. It was the pope's last public appearance. On 25 May 1963, the pope suffered another haemorrhage and required
several blood transfusions, but the cancer had perforated the stomach wall and peritonitis soon set in. The doctors
conferred, and John XXIII's aide Loris F. Capovilla broke the news to him that the cancer had done its work and that nothing could be done for him. Around this time, his remaining siblings arrived
to be with him. By 31 May, it had become clear that the cancer had overcome the resistance of John XXIII – it had
left him confined to his bed. At 11 am, Petrus Canisius Van Lierde, as Papal Sacristan, was at the bedside of the dying
pope, ready to anoint him. The pope began to speak for the very last time: "I had the great grace to be born into
a Christian family, modest and poor, but with the fear of the Lord. My time on earth is drawing to a close. But Christ
lives on and continues his work in the Church. Souls, souls, ut omnes unum sint."[c] Van Lierde then anointed his
eyes, ears, mouth, hands and feet. Overcome by emotion, Van Lierde forgot the right order of anointing. John XXIII
gently helped him before bidding those present a last farewell. John XXIII died of peritonitis caused by a perforated
stomach at 19:49 local time on 3 June 1963 at the age of 81, ending a historic pontificate of four years and seven
months. He died just as a Mass for him finished in Saint Peter's Square below, celebrated by Luigi Traglia. After
he died, his brow was ritually tapped to see if he was dead, and those with him in the room said prayers. Then the
room was illuminated, thus informing the people of what had happened. He was buried on 6 June in the Vatican grottos.
Two wreaths, placed on the two sides of his tomb, were donated by the prisoners of the Regina Coeli prison and the
Mantova jail in Verona. On 22 June 1963, one day after his friend and successor Pope Paul VI was elected, the latter
prayed at his tomb. On 3 December 1963, US President Lyndon B. Johnson posthumously awarded him the Presidential
Medal of Freedom, the United States' highest civilian award, in recognition of the good relationship between Pope
John XXIII and the United States of America. In his speech on 6 December 1963, Johnson said: "I have also determined
to confer the Presidential Medal of Freedom posthumously on another noble man whose death we mourned 6 months ago:
His Holiness, Pope John XXIII. He was a man of simple origins, of simple faith, of simple charity. In this exalted
office he was still the gentle pastor. He believed in discussion and persuasion. He profoundly respected the dignity
of man. He gave the world immortal statements of the rights of man, of the obligations of men to each other, of their
duty to strive for a world community in which all can live in peace and fraternal friendship. His goodness reached
across temporal boundaries to warm the hearts of men of all nations and of all faiths". He was known affectionately
as "Good Pope John". His cause for canonization was opened under Pope Paul VI during the final session of the Second
Vatican Council on 18 November 1965, along with the cause of Pope Pius XII. On 3 September 2000, John XXIII was declared
"Blessed" alongside Pope Pius IX by Pope John Paul II, the penultimate step on the road to sainthood, after a miracle, the curing of an ill woman, was recognized. He was the first pope since Pope Pius X to receive this honour. Following
his beatification, his body was moved from its original burial place in the grottoes below the Vatican to the altar
of St. Jerome and displayed for the veneration of the faithful. The 50th anniversary of his death was celebrated on 3 June 2013 by Pope Francis, who visited his tomb and prayed there, then addressed the gathered crowd and spoke about the late pope. Those gathered at the tomb were from Bergamo, the province from which the late pope came. A month later, on 5 July 2013, Francis approved Pope John XXIII for canonization, along
with Pope John Paul II, waiving the traditionally required second miracle. Instead, Francis based this decision on John
XXIII's merits for the Second Vatican Council. On Sunday, 27 April 2014, John XXIII and Pope John Paul II were declared
saints on Divine Mercy Sunday.
Time has long been a major subject of study in religion, philosophy, and science, but defining it in a manner applicable
to all fields without circularity has consistently eluded scholars. Nevertheless, diverse fields such as business,
industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective
measuring systems. Some simple definitions of time include "time is what clocks measure", which is problematically vague and self-referential, since it defines the subject by the very device used to measure it, and "time is what keeps everything from happening at once", which lacks substantive meaning absent a definition of simultaneity that accounts for the limits of human sensation, the observation of events, and the perception of such events. Two contrasting viewpoints on time divide many prominent philosophers. One view
is that time is part of the fundamental structure of the universe—a dimension independent of events, in which events
occur in sequence. Sir Isaac Newton subscribed to this realist view, and hence it is sometimes referred to as Newtonian
time. The opposing view is that time does not refer to any kind of "container" that events and objects "move through",
nor to any entity that "flows", but that it is instead part of a fundamental intellectual structure (together with
space and number) within which humans sequence and compare events. This second view, in the tradition of Gottfried
Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable nor
can it be travelled. Time is one of the seven fundamental physical quantities in both the International System of
Units and the International System of Quantities. Time is used to define other quantities—such as velocity—so defining
time in terms of such quantities would result in circularity of definition. An operational definition of time, wherein
one says that observing a certain number of repetitions of one or another standard cyclical event (such as the passage
of a free-swinging pendulum) constitutes one standard unit such as the second, is highly useful in the conduct of
both advanced experiments and everyday affairs of life. The operational definition leaves aside the question whether
there is something called time, apart from the counting activity just mentioned, that flows and that can be measured.
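The operational definition just described, counting repetitions of a standard cyclical event to build a unit of time, can be sketched in code. The sketch below is illustrative only: the pendulum length and gravity values are assumptions chosen so that one period comes out at roughly two seconds, using the small-angle formula T = 2π√(L/g).

```python
import math

# Illustrative sketch of an operational definition of time: a duration is
# expressed as a count of repetitions of a standard cyclical event (here,
# the swing of an idealized free pendulum). Values are assumptions.

G = 9.81        # standard gravity, m/s^2 (approximate)
LENGTH = 0.994  # pendulum length in metres; chosen so one period is ~2 s

def pendulum_period(length_m: float, g: float = G) -> float:
    """Period of one full swing, small-angle approximation: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

def duration_in_swings(seconds: float) -> float:
    """Express a duration operationally, as a count of pendulum periods."""
    return seconds / pendulum_period(LENGTH)

print(f"one period ~ {pendulum_period(LENGTH):.3f} s")   # close to 2 s
print(f"60 s ~ {duration_in_swings(60):.1f} swings")     # about 30 swings
```

Nothing in the counting itself asserts that a "flowing" time exists; the unit is defined purely by the repetitions, which is precisely the question the operational definition leaves aside.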
Investigations of a single continuum called spacetime bring questions about space into questions about time, questions
that have their roots in the works of early students of natural philosophy. Temporal measurement has occupied scientists
and technologists, and was a prime motivation in navigation and astronomy. Periodic events and periodic motion have
long served as standards for units of time. Examples include the apparent motion of the sun across the sky, the phases
of the moon, the swing of a pendulum, and the beat of a heart. Currently, the international unit of time, the second,
is defined by measuring the electronic transition frequency of caesium atoms (see below). Time is also of significant
social importance, having economic value ("time is money") as well as personal value, due to an awareness of the
limited time in each day and in human life spans. Temporal measurement, or chronometry, takes two distinct forms:
the calendar, a mathematical tool for organizing intervals of time, and the clock, a physical mechanism that counts
the passage of time. In day-to-day life, the clock is consulted for periods less than a day whereas the calendar
is consulted for periods longer than a day. Increasingly, personal electronic devices display both calendars and
clocks simultaneously. The number (as on a clock dial or calendar) that marks the occurrence of a specified event
as to hour or date is obtained by counting from a fiducial epoch—a central reference point. Artifacts from the Paleolithic
suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to
appear, with either 12 or 13 lunar months in a year (either 354 or 384 days). Without intercalation to add days or months to some
years, seasons quickly drift in a calendar based solely on twelve lunar months. Lunisolar calendars have a thirteenth
month added to some years to make up for the difference between a full year (now known to be about 365.24 days) and
a year of just twelve lunar months. The numbers twelve and thirteen came to feature prominently in many cultures,
at least partly due to this relationship of months to years. Other early forms of calendars originated in Mesoamerica,
particularly in ancient Mayan civilization. These calendars were religiously and astronomically based, with 18 months
in a year and 20 days in a month. The most precise timekeeping device of the ancient world was the water clock, or
clepsydra, one of which was found in the tomb of Egyptian pharaoh Amenhotep I (1525–1504 BC). They could be used
to measure the hours even at night, but required manual upkeep to replenish the flow of water. The Ancient Greeks
and the people from Chaldea (southeastern Mesopotamia) regularly maintained timekeeping records as an essential part
of their astronomical observations. Arab inventors and engineers in particular made improvements on the use of water
clocks up to the Middle Ages. In the 11th century, Chinese inventors and engineers invented the first mechanical
clocks driven by an escapement mechanism. The hourglass uses the flow of sand to measure the flow of time; hourglasses were used in navigation. Ferdinand Magellan used 18 glasses on each ship for his circumnavigation of the globe (1522).
Incense sticks and candles were, and are, commonly used to measure time in temples and churches across the globe.
Waterclocks, and later, mechanical clocks, were used to mark the events of the abbeys and monasteries of the Middle
Ages. Richard of Wallingford (1292–1336), abbot of St. Alban's abbey, famously built a mechanical clock as an astronomical
orrery about 1330. Great advances in accurate time-keeping were made by Galileo Galilei and especially Christiaan
Huygens with the invention of pendulum-driven clocks, along with the invention of the minute hand by Jost Burgi. The
most accurate timekeeping devices are atomic clocks, which are accurate to seconds in many millions of years, and
are used to calibrate other clocks and timekeeping instruments. Atomic clocks use the frequency of electronic transitions
in certain atoms to measure the second. One of the most common atoms used is caesium; most modern atomic clocks probe caesium with microwaves to determine the frequency of these electron vibrations. Since 1967, the International System of Units (SI) has based its unit of time, the second, on the properties of caesium atoms. The SI defines the second as 9,192,631,770 cycles of the radiation that corresponds to the transition between the two hyperfine energy levels
of the ground state of the 133Cs atom. Greenwich Mean Time (GMT) is an older standard, adopted starting with British
railways in 1847. Using telescopes instead of atomic clocks, GMT was calibrated to the mean solar time at the Royal
Observatory, Greenwich in the UK. Universal Time (UT) is the modern term for the international telescope-based system,
adopted to replace "Greenwich Mean Time" in 1928 by the International Astronomical Union. Observations at the Greenwich
Observatory itself ceased in 1954, though the location is still used as the basis for the coordinate system. Because
the rotational period of Earth is not perfectly constant, the duration of a second would vary if calibrated to a
telescope-based standard like GMT or UT—in which a second was defined as a fraction of a day or year. The terms "GMT"
and "Greenwich Mean Time" are sometimes used informally to refer to UT or UTC. Two distinct viewpoints on time divide
many prominent philosophers. One view is that time is part of the fundamental structure of the universe, a dimension
in which events occur in sequence. Sir Isaac Newton subscribed to this realist view, and hence it is sometimes referred
to as Newtonian time. An opposing view is that time does not refer to any kind of actually existing dimension that
events and objects "move through", nor to any entity that "flows", but that it is instead an intellectual concept
(together with space and number) that enables humans to sequence and compare events. This second view, in the tradition
of Gottfried Leibniz and Immanuel Kant, holds that space and time "do not exist in and of themselves, but ... are
the product of the way we represent things", because we can know objects only as they appear to us. The Vedas, the
earliest texts on Indian philosophy and Hindu philosophy dating back to the late 2nd millennium BC, describe ancient
Hindu cosmology, in which the universe goes through repeated cycles of creation, destruction and rebirth, with each
cycle lasting 4,320 million years. Ancient Greek philosophers, including Parmenides and Heraclitus, wrote essays
on the nature of time. Plato, in the Timaeus, identified time with the period of motion of the heavenly bodies. Aristotle,
in Book IV of his Physica defined time as 'number of movement in respect of the before and after'. In Book 11 of
his Confessions, St. Augustine of Hippo ruminates on the nature of time, asking, "What then is time? If no one asks
me, I know: if I wish to explain it to one that asketh, I know not." He begins to define time by what it is not rather
than what it is, an approach similar to that taken in other negative definitions. However, Augustine ends up calling
time a "distention" of the mind (Confessions 11.26) by which we simultaneously grasp the past in memory, the present
by attention, and the future by expectation. Immanuel Kant, in the Critique of Pure Reason, described time as an
a priori intuition that allows us (together with the other a priori intuition, space) to comprehend sense experience.
With Kant, neither space nor time are conceived as substances, but rather both are elements of a systematic mental
framework that necessarily structures the experiences of any rational agent, or observing subject. Kant thought of
time as a fundamental part of an abstract conceptual framework, together with space and number, within which we sequence
events, quantify their duration, and compare the motions of objects. In this view, time does not refer to any kind
of entity that "flows," that objects "move through," or that is a "container" for events. Spatial measurements are
used to quantify the extent of and distances between objects, and temporal measurements are used to quantify the
durations of and between events. Time was designated by Kant as the purest possible schema of a pure concept or category.
According to Martin Heidegger we do not exist inside time, we are time. Hence, the relationship to the past is a
present awareness of having been, which allows the past to exist in the present. The relationship to the future is
the state of anticipating a potential possibility, task, or engagement. It is related to the human propensity for
caring and being concerned, which causes "being ahead of oneself" when thinking of a pending occurrence. Therefore,
this concern for a potential occurrence also allows the future to exist in the present. The present becomes an experience,
which is qualitative instead of quantitative. Heidegger seems to think this is the way that a linear relationship
with time, or temporal existence, is broken or transcended. We are not stuck in sequential time. We are able to remember
the past and project into the future—we have a kind of random access to our representation of temporal existence;
we can, in our thoughts, step out of (ecstasis) sequential time. The theory of special relativity finds a convenient
formulation in Minkowski spacetime, a mathematical structure that combines three dimensions of space with a single
dimension of time. In this formalism, distances in space can be measured by how long light takes to travel that distance,
e.g., a light-year is a measure of distance, and a meter is now defined in terms of how far light travels in a certain
amount of time. Two events in Minkowski spacetime are separated by an invariant interval, which can be either space-like,
light-like, or time-like. Events that have a time-like separation cannot be simultaneous in any frame of reference;
there must be a temporal component (and possibly a spatial one) to their separation. Events that have a space-like
separation will be simultaneous in some frame of reference, and there is no frame of reference in which they do not
have a spatial separation. Different observers may calculate different distances and different time intervals between
two events, but the invariant interval between the events is independent of the observer (and his velocity). In non-relativistic
classical mechanics, Newton's concept of "relative, apparent, and common time" can be used in the formulation of
a prescription for the synchronization of clocks. Events seen by two different observers in motion relative to each
other produce a mathematical concept of time that works sufficiently well for describing the everyday phenomena of
most people's experience. In the late nineteenth century, physicists encountered problems with the classical understanding
of time, in connection with the behavior of electricity and magnetism. Einstein resolved these problems by invoking
a method of synchronizing clocks using the constant, finite speed of light as the maximum signal velocity. This led
directly to the result that observers in motion relative to one another measure different elapsed times for the same
event. Time has historically been closely related with space, the two together merging into spacetime in Einstein's
special relativity and general relativity. According to these theories, the concept of time depends on the spatial
reference frame of the observer, and the human perception as well as the measurement by instruments such as clocks
are different for observers in relative motion. For example, if a spaceship carrying a clock flies through space
at (very nearly) the speed of light, its crew does not notice a change in the speed of time on board their vessel
because everything traveling at the same speed slows down at the same rate (including the clock, the crew's thought
processes, and the functions of their bodies). However, to a stationary observer watching the spaceship fly by, the
spaceship appears flattened in the direction it is traveling and the clock on board the spaceship appears to move
very slowly. On the other hand, the crew on board the spaceship also perceives the observer as slowed down and flattened
along the spaceship's direction of travel, because both are moving at very nearly the speed of light relative to
each other. Because the outside universe appears flattened to the spaceship, the crew perceives themselves as quickly
traveling between regions of space that (to the stationary observer) are many light years apart. This is reconciled
by the fact that the crew's perception of time is different from the stationary observer's; what seems like seconds
to the crew might be hundreds of years to the stationary observer. In either case, however, causality remains unchanged:
the past is the set of events that can send light signals to an entity and the future is the set of events to which
an entity can send light signals. Einstein showed in his thought experiments that people travelling at different
speeds, while agreeing on cause and effect, measure different time separations between events, and can even observe
different chronological orderings between non-causally related events. Though these effects are typically minute
in the human experience, the effect becomes much more pronounced for objects moving at speeds approaching the speed
of light. Many subatomic particles exist, on average, for only a tiny fraction of a second when approximately at rest in the lab, but some
that travel close to the speed of light can be measured to travel farther and survive much longer than expected (a
muon is one example). According to the special theory of relativity, in the high-speed particle's frame of reference,
it exists, on the average, for a standard amount of time known as its mean lifetime, and the distance it travels
in that time is zero, because its velocity is zero. Relative to a frame of reference at rest, time seems to "slow
down" for the particle. Relative to the high-speed particle, distances seem to shorten. Einstein showed how both
temporal and spatial dimensions can be altered (or "warped") by high-speed motion. Time appears to have a direction—the
past lies behind, fixed and immutable, while the future lies ahead and is not necessarily fixed. Yet for the most
part the laws of physics do not specify an arrow of time, and allow any process to proceed both forward and in reverse.
This is generally a consequence of time being modeled by a parameter in the system being analyzed, where there is
no "proper time": the direction of the arrow of time is sometimes arbitrary. Examples of this include the Second
law of thermodynamics, which states that entropy must increase over time (see Entropy); the cosmological arrow of time, which points away from the Big Bang; CPT symmetry; and the radiative arrow of time, caused by light traveling only forwards in time (see light cone). In particle physics, the violation of CP symmetry implies that there should be
a small counterbalancing time asymmetry to preserve CPT symmetry as stated above. The standard description of measurement
in quantum mechanics is also time asymmetric (see Measurement in quantum mechanics). Stephen Hawking in particular
has addressed a connection between time and the Big Bang. In A Brief History of Time and elsewhere, Hawking says
that even if time did not begin with the Big Bang and there were another time frame before the Big Bang, no information
from events then would be accessible to us, and nothing that happened then would have any effect upon the present
time-frame. Upon occasion, Hawking has stated that time actually began with the Big Bang, and that questions about
what happened before the Big Bang are meaningless. This less nuanced but commonly repeated formulation has received criticism from philosophers such as the Aristotelian philosopher Mortimer J. Adler. While the Big Bang model is well
established in cosmology, it is likely to be refined in the future. Little is known about the earliest moments of
the universe's history. The Penrose–Hawking singularity theorems require the existence of a singularity at the beginning
of cosmic time. However, these theorems assume that general relativity is correct, but general relativity must break
down before the universe reaches the Planck temperature, and a correct treatment of quantum gravity may avoid the
singularity. Time travel is the concept of moving backwards or forwards to different points in time, in a manner
analogous to moving through space, and different from the normal "flow" of time to an earthbound observer. In this
view, all points in time (including future times) "persist" in some way. Time travel has been a plot device in fiction
since the 19th century. Traveling backwards in time has never been verified, presents many theoretic problems, and
may be an impossibility. Any technological device, whether fictional or hypothetical, that is used to achieve time
travel is known as a time machine. One proposed solution to the problem of causality-based temporal paradoxes is that
such paradoxes cannot arise simply because they have not arisen. As illustrated in numerous works of fiction, free
will either ceases to exist in the past or the outcomes of such decisions are predetermined. As such, it would not
be possible to enact the grandfather paradox because it is a historical fact that your grandfather was not killed
before his child (your parent) was conceived. This view does not simply hold that history is an unchangeable constant,
but that any change made by a hypothetical future time traveler would already have happened in his or her past, resulting
in the reality that the traveler moves from. More elaboration on this view can be found in the Novikov self-consistency
principle. Psychoactive drugs can impair the judgment of time. Stimulants can lead both humans and rats to overestimate
time intervals, while depressants can have the opposite effect. The level of activity in the brain of neurotransmitters
such as dopamine and norepinephrine may be the reason for this. Such chemicals will either excite or inhibit the
firing of neurons in the brain, with a greater firing rate allowing the brain to register the occurrence of more
events within a given interval (speed up time) and a decreased firing rate reducing the brain's capacity to distinguish
events occurring within a given interval (slow down time). The use of time is an important issue in understanding
human behavior, education, and travel behavior. Time-use research is a developing field of study. The question concerns
how time is allocated across a number of activities (such as time spent at home, at work, shopping, etc.). Time use
changes with technology, as the television or the Internet created new opportunities to use time in different ways.
However, some aspects of time use are relatively stable over long periods of time, such as the amount of time spent
traveling to work, which, despite major changes in transport, has been observed to be about 20–30 minutes one-way
for a large number of cities over a long period. A sequence of events, or series of events, is a sequence of items,
facts, events, actions, changes, or procedural steps, arranged in time order (chronological order), often with causality
relationships among the items. Because of causality, cause precedes effect, or cause and effect may appear together
in a single item, but effect never precedes cause. A sequence of events can be presented in text, tables, charts,
or timelines. The description of the items or events may include a timestamp. A sequence of events that includes
the time along with place or location information to describe a sequential path may be referred to as a world line.
Uses of a sequence of events include stories, historical events (chronology), directions and steps in procedures,
and timetables for scheduling activities. A sequence of events may also be used to help describe processes in science,
technology, and medicine. A sequence of events may be focused on past events (e.g., stories, history, chronology),
on future events that must be in a predetermined order (e.g., plans, schedules, procedures, timetables), or focused
on the observation of past events with the expectation that the events will occur in the future (e.g., processes).
The use of a sequence of events occurs in fields as diverse as machines (cam timer), documentaries (Seconds From
Disaster), law (choice of law), computer simulation (discrete event simulation), and electric power transmission
(sequence of events recorder). A specific example of a sequence of events is the timeline of the Fukushima Daiichi
nuclear disaster.
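The relativistic effects described above, the invariant interval and the time dilation of fast-moving particles such as muons, can be illustrated numerically. This is a minimal sketch under rounded assumptions: the muon's mean lifetime of about 2.2 microseconds and a speed of 0.995c are illustrative values, not precise measurements.

```python
import math

C = 299_792_458.0        # speed of light in vacuum, m/s
MUON_LIFETIME = 2.2e-6   # approximate mean muon lifetime at rest, seconds

def lorentz_factor(v: float) -> float:
    """Time-dilation factor gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def interval_type(dt: float, dx: float) -> str:
    """Classify the invariant interval s^2 = (c*dt)^2 - dx^2 between two events."""
    s2 = (C * dt) ** 2 - dx ** 2
    if s2 > 0:
        return "time-like"
    if s2 < 0:
        return "space-like"
    return "light-like"

v = 0.995 * C
gamma = lorentz_factor(v)                 # ~10 at 0.995c
lab_lifetime = gamma * MUON_LIFETIME      # dilated lifetime seen from the lab
print(f"gamma = {gamma:.2f}")
print(f"lab-frame lifetime ~ {lab_lifetime * 1e6:.0f} microseconds")
print(interval_type(dt=1.0, dx=1000.0))   # events 1 s and 1 km apart: time-like
```

Because the interval s² is the same for all inertial observers, two observers who disagree about the separate time and space intervals between two events still agree on whether the events are time-like, space-like, or light-like separated.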
The European Central Bank (ECB) is the central bank for the euro and administers monetary policy of the Eurozone, which consists
of 19 EU member states and is one of the largest currency areas in the world. It is one of the world's most important
central banks and is one of the seven institutions of the European Union (EU) listed in the Treaty on European Union
(TEU). The capital stock of the bank is owned by the central banks of all 28 EU member states. The Treaty of Amsterdam established the bank in 1998, and it is headquartered in Frankfurt, Germany. As of 2015, the
President of the ECB is Mario Draghi, former governor of the Bank of Italy, former member of the World Bank, and
former managing director of the Goldman Sachs international division (2002–2005). The bank primarily occupied the
Eurotower prior to, and during, the construction of the new headquarters. The primary objective of the European Central
Bank, as mandated in Article 2 of the Statute of the ECB, is to maintain price stability within the Eurozone. The
basic tasks, as defined in Article 3 of the Statute, are to define and implement the monetary policy for the Eurozone,
to conduct foreign exchange operations, to take care of the foreign reserves of the European System of Central Banks
and operation of the financial market infrastructure under the TARGET2 payments system and the technical platform
(currently being developed) for settlement of securities in Europe (TARGET2 Securities). The ECB has, under Article
16 of its Statute, the exclusive right to authorise the issuance of euro banknotes. Member states can issue euro
coins, but the amount must be authorised by the ECB beforehand. The first President of the Bank was Wim Duisenberg,
the former president of the Dutch central bank and the European Monetary Institute. While Duisenberg had been the
head of the EMI (taking over from Alexandre Lamfalussy of Belgium) just before the ECB came into existence, the French
government wanted Jean-Claude Trichet, former head of the French central bank, to be the ECB's first president. The
French argued that since the ECB was to be located in Germany, its president should be French. This was opposed by
the German, Dutch and Belgian governments who saw Duisenberg as a guarantor of a strong euro. Tensions were abated
by a gentleman's agreement in which Duisenberg would stand down before the end of his mandate, to be replaced by
Trichet. The primary objective of the European Central Bank, as laid down in Article 127(1) of the Treaty on the
Functioning of the European Union, is to maintain price stability within the Eurozone. The Governing Council in October
1998 defined price stability as inflation of under 2%, “a year-on-year increase in the Harmonised Index of Consumer
Prices (HICP) for the euro area of below 2%”, and added that price stability “was to be maintained over the medium
term”. Unlike, for example, the United States Federal Reserve, the ECB has only one primary objective, but this
objective has never been defined in statutory law, and the HICP target can be termed ad hoc. The banks in effect
borrow cash and must pay it back; the short durations allow interest rates to
be adjusted continually. When the repo notes come due, the participating banks bid again. An increase in the quantity
of notes offered at auction allows an increase in liquidity in the economy. A decrease has the contrary effect. The
contracts are carried on the asset side of the European Central Bank's balance sheet and the resulting deposits in
member banks are carried as a liability. In layman's terms, the liability of the central bank is money, and an increase
in deposits in member banks, carried as a liability by the central bank, means that more money has been put into
the economy.[a] To qualify for participation in the auctions, banks must be able to offer proof of appropriate collateral
in the form of loans to other entities. These can be the public debt of member states, but a fairly wide range of
private banking securities are also accepted. The fairly stringent membership requirements for the European Union,
especially with regard to sovereign debt as a percentage of each member state's gross domestic product, are designed
to ensure that assets offered to the bank as collateral are, at least in theory, all equally good, and all equally
protected from the risk of inflation. The Executive Board is responsible for the implementation of monetary policy
(defined by the Governing Council) and the day-to-day running of the bank. It can issue decisions to national central
banks and may also exercise powers delegated to it by the Governing Council. It is composed of the President of the
Bank (currently Mario Draghi), the Vice-President (currently Vitor Constâncio) and four other members. They are all
appointed for non-renewable terms of eight years. They are appointed "from among persons of recognised standing and
professional experience in monetary or banking matters by common accord of the governments of the Member States at
the level of Heads of State or Government, on a recommendation from the Council, after it has consulted the European
Parliament and the Governing Council of the ECB". The Executive Board normally meets every Tuesday. José Manuel González-Páramo,
a Spanish member of the Executive Board since June 2004, was due to leave the board in early June 2012 and no replacement
had been named as of late May 2012. The Spanish had nominated Barcelona-born Antonio Sáinz de Vicuña, an ECB veteran
who heads its legal department, as González-Páramo's replacement as early as January 2012 but alternatives from Luxembourg,
Finland, and Slovenia were put forward and no decision made by May. After a long political battle, Luxembourg's Yves
Mersch was appointed as González-Páramo's replacement. The Supervisory Board meets twice a month to discuss, plan
and carry out the ECB’s supervisory tasks. It proposes draft decisions to the Governing Council under the non-objection
procedure. It is composed of the Chair (appointed for a non-renewable term of five years), the Vice-Chair (chosen from
among the members of the ECB's Executive Board), four ECB representatives and representatives of national supervisors. If
the national supervisory authority designated by a Member State is not a national central bank (NCB), the representative
of the competent authority can be accompanied by a representative from their NCB. In such cases, the representatives
are together considered as one member for the purposes of the voting procedure. Although the ECB is governed by European
law directly and thus not by corporate law applying to private law companies, its set-up resembles that of a corporation
in the sense that the ECB has shareholders and stock capital. Its capital is five billion euros, which is held by
the national central banks of the member states as shareholders. The initial capital allocation key was determined
in 1998 on the basis of the states' population and GDP, but the key is adjustable. Shares in the ECB are not transferable
and cannot be used as collateral. The internal working language of the ECB is generally English, and press conferences
are usually held in English. External communications are handled flexibly: English is preferred (though not exclusively)
for communication within the ESCB (i.e. with other central banks) and with financial markets; communication with
other national bodies and with EU citizens is normally in their respective language, but the ECB website is predominantly
English; official documents such as the Annual Report are in the official languages of the EU. The independence of
the ECB is instrumental in maintaining price stability. Not only must the bank not seek influence, but EU institutions
and national governments are bound by the treaties to respect the ECB's independence. To offer some accountability,
the ECB is bound to publish reports on its activities and has to address its annual report to the European Parliament,
the European Commission, the Council of the European Union and the European Council. The European Parliament also
gets to question and then issue its opinion on candidates to the executive board. From late 2009, a handful of mainly
southern eurozone member states became unable to repay their euro-denominated government debt or to finance the
bail-out of troubled financial sectors under their national supervision without the assistance of third parties.
This so-called European debt crisis began after Greece's newly elected government stopped masking its true
indebtedness and budget deficit and openly communicated the imminent danger of a Greek sovereign default. Seeing
a sovereign default in the eurozone as a shock, the general public, international and European institutions, and
the financial community started to intensively reassess the economic situation and creditworthiness of eurozone states.
Eurozone states assessed as not financially sustainable on their current path faced waves of credit rating
downgrades and rising borrowing costs, including widening interest rate spreads. As a consequence,
the ability of these states to borrow new money to further finance their budget deficits or to refinance existing
unsustainable debt levels was strongly reduced. There is also a widespread view that giving much more financial
support to continuously cover the debt crisis, or allowing even higher budget deficits or debt levels, would
discourage the crisis states from implementing the reforms necessary to regain their competitiveness. There has
also been a reluctance among financially stable eurozone states such as Germany to further circumvent the
no-bailout clause in the EU treaties and to take on the burden of financing or guaranteeing the debts of
financially unstable or defaulting eurozone countries. This has led to
public discussion of whether Greece, Portugal, and even Italy would be better off leaving the eurozone to regain
economic and financial stability if they did not implement reforms to strengthen their competitiveness in time.
Greece had the greatest need for reforms but also the most difficulty implementing them, so a Greek exit, also
called "Grexit", has been widely discussed. Germany, as a large and financially stable state likely to be asked to
guarantee or repay other states' debts, has never pushed for such exits. Its position is to keep Greece within the
eurozone, but not at any cost: if the worst comes to the worst, priority should be given to the euro's
stability. However, if the debt rescheduling causes losses on loans held by European banks, it weakens the private
banking system, which then puts pressure on the central bank to come to the aid of those banks. Private-sector bond
holders are an integral part of the public and private banking system. Another possible response is for wealthy member
countries to guarantee or purchase the debt of countries that have defaulted or are likely to default. This alternative
requires that the tax revenues and credit of the wealthy member countries be used to refinance the previous borrowing
of the weaker member countries, and is politically controversial. In contrast to the Fed, the ECB normally does not
buy bonds outright. The normal procedure used by the ECB for manipulating the money supply has been via the so-called
refinancing facilities. In these facilities, bonds are not purchased but used in reverse transactions: repurchase
agreements, or collateralised loans. These two transactions are similar, i.e. bonds are used as collateral for loans,
the difference being legal in nature. In repos, the ownership of the collateral passes to the ECB until the loan
is repaid. This changed with the recent sovereign-debt crisis. The ECB always could, and through the late summer
of 2011 did, purchase bonds issued by the weaker states even though it assumes, in doing so, the risk of a deteriorating
balance sheet. ECB buying focused primarily on Spanish and Italian debt. Certain techniques can minimise the impact.
Purchases of Italian bonds by the central bank, for example, were intended to dampen international speculation and
strengthen portfolios in the private sector and also the central bank. On the other hand, certain financial techniques
can reduce the impact of such purchases on the currency. One is sterilisation, in which highly valued assets are
sold at the same time that the weaker assets are purchased, which keeps the money supply neutral. Another technique
is simply to accept the bad assets as long-term collateral (as opposed to short-term repo swaps) to be held until
their market value stabilises. This would imply, as a quid pro quo, adjustments in taxation and expenditure in the
economies of the weaker states to improve the perceived value of the assets. As of 18 June 2012, the ECB in total
had spent €212.1bn (equal to 2.2% of the Eurozone GDP) for bond purchases covering outright debt, as part of its
Securities Markets Programme (SMP) running since May 2010. On 6 September 2012, the ECB announced a new plan for
buying bonds from eurozone countries. The duration of the previous SMP was temporary, while the Outright Monetary
Transactions (OMT) programme has no ex-ante time or size limit. On 4 September 2014, the bank went further by announcing
it would buy bonds and other debt instruments primarily from banks in a bid to boost the availability of credit for
businesses. Rescue operations involving sovereign debt have included temporarily moving bad or weak assets off the
balance sheets of the weak member banks into the balance sheets of the European Central Bank. Such action is viewed
as monetisation and can be seen as an inflationary threat, whereby the strong member countries of the ECB shoulder
the burden of monetary expansion (and potential inflation) to save the weak member countries. Most central banks
prefer to move weak assets off their balance sheets with some kind of agreement as to how the debt will continue
to be serviced. This preference has typically led the ECB to argue that the weaker member countries must meet
certain conditions. The European Central Bank stepped up the buying of member nations' debt. In response to the
crisis of 2010, some proposals
have surfaced for a collective European bond issue that would allow the central bank to purchase a European version
of US Treasury bills. To make European sovereign debt assets more similar to a US Treasury, a collective guarantee
of the member states' solvency would be necessary. But the German government has resisted this proposal, and other
analyses indicate that "the sickness of the euro" is due to the linkage between sovereign debt and failing national
banking systems. If the European Central Bank were to deal directly with failing banking systems, sovereign debt would
not look as leveraged relative to national income in the financially weaker member states. The bank must also co-operate
within the EU and internationally with third bodies and entities. Finally, it contributes to maintaining a stable
financial system and monitoring the banking sector. The latter can be seen, for example, in the bank's intervention
during the subprime mortgage crisis when it loaned billions of euros to banks to stabilise the financial system.
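The refinancing mechanics described earlier (repo contracts carried on the asset side of the central bank's balance sheet, the resulting deposits in member banks carried as a liability, so that an increase in deposits means more money in the economy) can be sketched as a toy Python model. The bank names and amounts below are invented for illustration, and interest, maturities and collateral haircuts are ignored:

```python
class CentralBank:
    """Toy balance sheet: a repo adds a claim on the borrowing bank
    (asset) and credits that bank's deposit account (liability),
    so settling a repo puts new money into the economy."""
    def __init__(self):
        self.assets = {}        # repo claims, keyed by counterparty
        self.liabilities = {}   # member banks' deposit balances

    def repo(self, bank, amount):
        # The bank borrows cash against collateral; ownership of the
        # collateral passes to the central bank until repayment.
        self.assets[bank] = self.assets.get(bank, 0) + amount
        self.liabilities[bank] = self.liabilities.get(bank, 0) + amount

    def repay(self, bank, amount):
        # Repayment extinguishes the claim and drains the deposit,
        # withdrawing the same amount of money from the economy.
        self.assets[bank] -= amount
        self.liabilities[bank] -= amount

    def money_supplied(self):
        return sum(self.liabilities.values())

ecb = CentralBank()
ecb.repo("Bank A", 100)   # hypothetical amounts, in millions of euros
ecb.repo("Bank B", 50)
ecb.repay("Bank A", 100)
print(ecb.money_supplied())   # 50: only Bank B's repo is outstanding
```

Both sides of the balance sheet expand and contract symmetrically, which is why adjusting the quantity of notes offered at auction steers liquidity in the economy.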
In December 2007, the ECB decided in conjunction with the Federal Reserve System under a programme called Term auction
facility to improve dollar liquidity in the eurozone and to stabilise the money market. In central banking, the privileged
status of the central bank is that it can make as much money as it deems needed. In the United States Federal Reserve
Bank, the Federal Reserve buys assets: typically, bonds issued by the Federal government. There is no limit on the
bonds that it can buy and one of the tools at its disposal in a financial crisis is to take such extraordinary measures
as the purchase of large amounts of assets such as commercial paper. The purpose of such operations is to ensure
that adequate liquidity is available for functioning of the financial system. Think-tanks such as the World Pensions
Council have also argued that European legislators have pushed somewhat dogmatically for the adoption of the Basel
II recommendations, adopted in 2005, transposed in European Union law through the Capital Requirements Directive
(CRD), effective since 2008. In essence, they forced European banks, and, more importantly, the European Central
Bank itself (e.g. when gauging the solvency of financial institutions), to rely more than ever on standardised assessments
of credit risk marketed by two non-European private agencies: Moody's and S&P. The bank is based in Frankfurt, the
largest financial centre in the Eurozone. Its location in the city is fixed by the Amsterdam Treaty. The bank moved
to new purpose-built headquarters in 2014, designed by the Vienna-based architectural office Coop Himmelb(l)au.
The building is approximately 180 metres (591 ft) tall and is accompanied by other secondary buildings on a
landscaped site at the former wholesale market in the eastern part of Frankfurt am Main. The main construction
began in October 2008, and it was expected that the building would become an architectural symbol for Europe. While
it was designed to accommodate double the number of staff who operated in the former Eurotower, that building has
been retained, since the ECB took on responsibility for banking supervision and hence required more space. On 21
December 2011 the bank instituted a programme of making low-interest loans with a term of three years (36 months)
and 1% interest to European banks accepting loans from the portfolio of the banks as collateral. Loans totalling
€489.2 bn (US$640 bn) were announced. The loans were not offered to European states, but government securities issued
by European states would be acceptable collateral as would mortgage-backed securities and other commercial paper
that can be demonstrated to be secure. The programme was announced on 8 December 2011 but observers were surprised
by the volume of the loans made when it was implemented. Under its LTRO it loaned €489bn to 523 banks for an
exceptionally long period of three years at a rate of just one percent. By far the biggest amount, €325bn, was
tapped by banks in Greece, Ireland, Italy and Spain. In this way the ECB tried to ensure that banks had enough cash
to pay off €200bn of their own maturing debts in the first three months of 2012 while continuing to operate and
lend to businesses, so that a credit crunch did not choke off economic growth. It also hoped that banks would use some of
the money to buy government bonds, effectively easing the debt crisis. The ECB's first supplementary longer-term
refinancing operation (LTRO) with a six-month maturity was announced March 2008. Previously the longest tender offered
was three months. It announced two 3-month and one 6-month full allotment of Long Term Refinancing Operations (LTROs).
The first tender was settled 3 April, and was more than four times oversubscribed. The €25 billion auction drew bids
amounting to €103.1 billion, from 177 banks. Another six-month tender was allotted on 9 July, again to the amount
of €25 billion. The first 12-month LTRO in June 2009 had close to 1100 bidders.
St. John's (/ˌseɪntˈdʒɒnz/, local /ˌseɪntˈdʒɑːnz/) is the capital and largest city in Newfoundland and Labrador, Canada.
St. John's was incorporated as a city in 1888, yet is considered by some to be the oldest English-founded city in
North America. It is located on the eastern tip of the Avalon Peninsula on the island of Newfoundland. With a population
of 214,285 as of July 1, 2015, the St. John's Metropolitan Area is the second largest Census Metropolitan Area (CMA)
in Atlantic Canada after Halifax and the 20th largest metropolitan area in Canada. It is one of the world's top ten
oceanside destinations, according to National Geographic Magazine. Its name has been attributed to the feast day
of John the Baptist, when John Cabot was believed to have sailed into the harbour in 1497, and also to a Basque fishing
town with the same name. St. John's is one of the oldest settlements in North America, with year-round settlement
beginning sometime after 1630 and seasonal habitation long before that. It is not, however, the oldest surviving
English settlement in North America or Canada, having been preceded by the Cuper's Cove colony at Cupids, founded
in 1610, and the Bristol's Hope colony at Harbour Grace, founded in 1618. In fact, although English fishermen had
begun setting up seasonal camps in Newfoundland in the 16th Century, they were expressly forbidden by the British
government, at the urging of the West Country fishing industry, from establishing permanent settlements along the
English controlled coast, hence the town of St. John's was not established as a permanent community until after the
1630s at the earliest. Other permanent English settlements in the Americas that predate St. John's include: St. George's,
Bermuda (1612) and Jamestown, Virginia (1607). Sebastian Cabot declares, in a handwritten Latin text on his original
1545 map, that St. John's earned its name when he and his father, the Venetian explorer John Cabot, became the
first Europeans to sail into the harbour on the morning of 24 June 1494 (British and French historians instead give
1497), the feast day of Saint John the Baptist. However, the exact locations of Cabot's landfalls are disputed. A
series of expeditions to St. John's by Portuguese from the Azores took place in the early 16th century, and by 1540
French, Spanish and Portuguese ships crossed the Atlantic annually to fish the waters off the Avalon Peninsula. In
the Basque Country, it is a common belief that the name of St. John's was given by Basque fishermen because the bay
of St. John's is very similar to the Bay of Pasaia in the Basque Country, where one of the fishing towns is also
called St. John (in Spanish, San Juan, and in Basque, Donibane). The earliest record of the location appears as São
João on a Portuguese map by Pedro Reinel in 1519. When John Rut visited St. John's in 1527 he found Norman, Breton
and Portuguese ships in the harbour. On 3 August 1527, Rut wrote a letter to King Henry on the findings of his voyage
to North America; this was the first known letter sent from North America. St. Jehan is shown on Nicholas Desliens'
world map of 1541 and San Joham is found in João Freire's Atlas of 1546. It was during this time that Water Street
was first developed, making it by some accounts the oldest street in North America. By 1620, the fishermen of England's
West Country controlled most of Newfoundland's east coast. In 1627, William Payne called St. John's "the principal
prime and chief lot in all the whole country". The population grew slowly in the 17th century and St. John's was
the largest settlement in Newfoundland when English naval officers began to take censuses around 1675. The population
would grow in the summers with the arrival of migratory fishermen. In 1680, fishing ships (mostly from South Devon)
set up fishing rooms at St. John's, bringing hundreds of Irish men into the port to operate inshore fishing boats.
The town's first significant defences were likely erected due to commercial interests, following the temporary seizure
of St. John's by the Dutch admiral Michiel de Ruyter in June 1665. The inhabitants were able to fend off a second
Dutch attack in 1673, this time under the defence of Christopher Martin, an English merchant captain. Martin landed
six cannons from his vessel, the Elias Andrews, and constructed an earthen breastwork and battery near Chain Rock
commanding the Narrows leading into the harbour. With only twenty-three men, the valiant Martin beat off an attack
by three Dutch warships. The English government planned to expand these fortifications (Fort William) in around 1689,
but actual construction did not begin until after the French admiral Pierre Le Moyne d'Iberville captured and destroyed
the town in the Avalon Peninsula Campaign (1696). When 1500 English reinforcements arrived in late 1697 they found
nothing but rubble where the town and fortifications had stood. St. John's was the starting point for the first non-stop
transatlantic aircraft flight, by Alcock and Brown in a modified Vickers Vimy IV bomber, in June 1919, departing
from Lester's Field in St. John's and ending in a bog near Clifden, Connemara, Ireland. In July 2005, the flight
was duplicated by American aviator and adventurer Steve Fossett in a replica Vickers Vimy aircraft, with St. John's
International Airport substituting for Lester's Field (now an urban and residential part of the city). St. John's,
and the province as a whole, was gravely affected in the 1990s by the collapse of the Northern cod fishery, which
had been the driving force of the provincial economy for hundreds of years. After a decade of high unemployment rates
and depopulation, the city's proximity to the Hibernia, Terra Nova and White Rose oil fields has led to an economic
boom that has spurred population growth and commercial development. As a result, the St. John's area now accounts
for about half of the province's economic output. St. John's is located along the coast of the Atlantic Ocean, on
the northeast of the Avalon Peninsula in southeast Newfoundland. The city covers an area of 446.04 square kilometres
(172.22 sq mi) and is the most easterly city in North America, excluding Greenland; it is 295 miles (475 km) closer
to London, England than it is to Edmonton, Alberta. The city of St. John's is located at a distance by air of 3,636
kilometres (2,259 mi) from Lorient, France, which lies at a nearly identical latitude across the Atlantic
on the French western coast. The city is the largest in the province and the second largest in the Atlantic Provinces
after Halifax, Nova Scotia. Its downtown area lies to the west and north of St. John's Harbour, and the rest of the
city expands from the downtown to the north, south, east and west. St. John's has a humid continental climate (Köppen
Dfb), with lower seasonal variation than normal for the latitude, which is due to Gulf Stream moderation. However,
despite this maritime moderation, average January high temperatures are slightly colder in St. John's than in
Kelowna, British Columbia, an inland city near the more marine air of the Pacific, demonstrating the cold nature
of Eastern Canada. Mean temperatures range from −4.9 °C (23.2 °F) in February to 16.1 °C (61.0 °F)
in August, showing somewhat of a seasonal lag in the climate. The city is also one of the areas of the country most
prone to tropical cyclone activity, as it is bordered by the Atlantic Ocean to the east, where tropical storms (and
sometimes hurricanes) travel from the United States. The city is one of the rainiest in Canada outside of coastal
British Columbia. This is partly due to its propensity for tropical storm activity as well as moist, Atlantic air
frequently blowing ashore and creating precipitation. Of major Canadian cities, St. John's is the foggiest (124 days),
windiest (24.3 km/h (15.1 mph) average speed), and cloudiest (1,497 hours of sunshine). St. John's experiences milder
temperatures during the winter season in comparison to other Canadian cities, and has the mildest winter for any
Canadian city outside of British Columbia. Precipitation is frequent and often heavy, falling year round. On average,
summer is the driest season, with only occasional thunderstorm activity, and the wettest months are from October
to January, with December the wettest single month, with nearly 165 millimetres of precipitation on average. This
winter precipitation maximum is quite unusual for humid continental climates, which most commonly have a late spring
or early summer precipitation maximum (for example, most of the Midwestern U.S.). Most heavy precipitation events
in St. John's are the product of intense mid-latitude storms migrating from the Northeastern U.S. and New England
states, and these are most common and intense from October to March, bringing heavy precipitation (commonly 4 to
8 centimetres of rainfall equivalent in a single storm), and strong winds. In winter, two or more types of precipitation
(rain, freezing rain, sleet and snow) can fall from passage of a single storm. Snowfall is heavy, averaging nearly
335 centimetres per winter season. However, winter storms can bring changing precipitation types. Heavy snow can
transition to heavy rain, melting the snow cover, and possibly back to snow or ice (perhaps briefly) all in the same
storm, resulting in little or no net snow accumulation. Snow cover in St. John's is variable, and especially early
in the winter season, may be slow to develop, but can extend deeply into the spring months (March, April). The St.
John's area is subject to freezing rain (called "silver thaws"), the worst of which paralyzed the city over a three-day
period in April 1984. Starting as a fishing outpost for European fishermen, St. John's consisted mostly of the homes
of fishermen, sheds, storage shacks, and wharves constructed out of wood. Like many other cities of the time, as
the Industrial Revolution took hold and new methods and materials for construction were introduced, the landscape
changed as the city grew in width and height. The Great Fire of 1892 destroyed most of the downtown core, and most
residential and other wood-frame buildings date from this period. Often compared to San Francisco due to the hilly
terrain and steep maze of residential streets, housing in St. John's is typically painted in bright colours. The
city council has implemented strict heritage regulations in the downtown area, including restrictions on the height
of buildings. These regulations have caused much controversy over the years. With the city experiencing an economic
boom, a lack of hotel rooms and office space has seen proposals put forward that do not meet the current height regulations.
Heritage advocates argue that the current regulations should be enforced while others believe the regulations should
be relaxed to encourage economic development. To meet the need for more office space downtown without compromising
the city's heritage, the city council amended heritage regulations, which originally restricted height to 15 metres
in the area of land on Water Street between Bishop's Cove and Steer's Cove, to create the "Commercial Central Retail
– West Zone". The new zone will allow for buildings of greater height. A 47-metre, 12-storey office building, which
includes retail space and a parking garage, was the first building to be approved in this area. As of the 2006 Census,
there were 100,646 inhabitants in St. John's itself, 151,322 in the urban area and 181,113 in the St. John's Census
Metropolitan Area (CMA). Thus, St. John's is Newfoundland and Labrador's largest city and Canada's 20th largest CMA.
Apart from St. John's, the CMA includes 12 other communities: the city of Mount Pearl and the towns of Conception
Bay South, Paradise, Portugal Cove-St. Philip's, Torbay, Logy Bay-Middle Cove-Outer Cove, Pouch Cove, Flatrock, Bay
Bulls, Witless Bay, Petty Harbour-Maddox Cove and Bauline. The population of the CMA was 192,326 as of 1 July 2010.
Predominantly Christian, the population of St. John's was once divided along sectarian (Catholic/Protestant) lines.
In recent years, this sectarianism has declined significantly, and is no longer a commonly acknowledged facet of
life in St. John's. St. John's is the seat of the Roman Catholic Archbishop of St. John's, and the Anglican Bishop
of Eastern Newfoundland and Labrador. All major Christian denominations showed a decline from 2001 to 2011, with a
large increase in those with no religion, from 3.9% to 11.1%. St. John's economy is connected to both its role as the provincial
capital of Newfoundland and Labrador and to the ocean. The civil service, which is supported by the federal, provincial
and municipal governments, has been the key to the expansion of the city's labour force and to the stability of its
economy, which supports a sizable retail, service and business sector. The provincial government is the largest employer
in the city, followed by Memorial University. With the collapse of the fishing industry in Newfoundland and Labrador
in the 1990s, the role of the ocean is now tied to what lies beneath it – oil and gas – as opposed to what swims
in or travels across it. The city is the centre of the oil and gas industry in Eastern Canada and is one of 19 World
Energy Cities. ExxonMobil Canada is headquartered in St. John's and companies such as Chevron, Husky Energy, Suncor
Energy and Statoil have major regional operations in the city. Three major offshore oil developments, Hibernia, Terra
Nova and White Rose, are in production off the coast of the city and a fourth development, Hebron, is expected to
be producing oil by 2017. The economy has been growing quickly in recent years. In both 2010 and 2011, the metro
area's gross domestic product (GDP) led 27 other metropolitan areas in the country, according to the Conference Board
of Canada, recording growth of 6.6 per cent and 5.8 per cent respectively. At $52,000 the city's per capita GDP is
the second highest out of all major Canadian cities. Economic forecasts suggest that the city will continue its strong
economic growth in the coming years not only in the "oceanic" industries mentioned above, but also in tourism and
new home construction as the population continues to grow. In May 2011, the city's unemployment rate fell to 5.6
per cent, the second lowest unemployment rate for a major city in Canada. The LSPU Hall is home to the Resource Centre
for the Arts. The "Hall" hosts a vibrant and diverse arts community and is regarded as the backbone of artistic infrastructure
and development in the downtown. The careers of many well-known Newfoundland artists were launched there including
Rick Mercer, Mary Walsh, Cathy Jones, Andy Jones and Greg Thomey. The St. John's Arts and Culture Centre houses an
art gallery, libraries and a 1000-seat theatre, which is the city's major venue for entertainment productions. Pippy
Park is an urban park located in the east end of the city; with over 3,400 acres (14 km2) of land, it is one of Canada's
largest urban parks. The park contains a range of recreational facilities including two golf courses, Newfoundland
and Labrador's largest serviced campground, walking and skiing trails as well as protected habitat for many plants
and animals. Pippy Park is also home to the Fluvarium, an environmental education centre which offers a cross section
view of Nagle's Hill Brook. Bannerman Park is a Victorian-style park located near the downtown. The park was officially
opened in 1891 by Sir Alexander Bannerman, Governor of the Colony of Newfoundland, who donated the land to create
the park. Today the park contains a public swimming pool, playground, a baseball diamond and many large open grassy
areas. Bannerman Park plays host to many festivals and sporting events, most notably the Newfoundland and Labrador
Folk Festival and St. John's Peace-a-chord. The park is also the finishing location for the annual Tely 10 Mile Road
Race. Signal Hill is a hill which overlooks the city of St. John's. It is the location of Cabot Tower which was built
in 1897 to commemorate the 400th anniversary of John Cabot's discovery of Newfoundland, and Queen Victoria's Diamond
Jubilee. The first transatlantic wireless transmission was received here by Guglielmo Marconi on 12 December 1901.
Today, Signal Hill is a National Historic Site of Canada and remains incredibly popular amongst tourists and locals
alike; 97% of all tourists to St. John's visit Signal Hill. Amongst its popular attractions are the Signal Hill Tattoo,
showcasing the Royal Newfoundland Regiment of foot, c. 1795, and the North Head Trail which grants an impressive
view of the Atlantic Ocean and the surrounding coast. The rugby union team The Rock is the Eastern Canadian entry
in the Americas Rugby Championship. The Rock play their home games at Swilers Rugby Park, as did the Rugby Canada
Super League champions for 2005 and 2006, the Newfoundland Rock. The city hosted a Rugby World Cup qualifying match
between Canada and the USA on 12 August 2006, where the Canadians heavily defeated the USA 56–7 to qualify for the
2007 Rugby World Cup finals in France. The 2007 age-grade Rugby Canada National Championship Festival was held in
the city. St. John's served as the capital city of the Colony of Newfoundland and the Dominion of Newfoundland before
Newfoundland became Canada's tenth province in 1949. The city now serves as the capital of Newfoundland and Labrador,
therefore the provincial legislature is located in the city. The Confederation Building, located on Confederation
Hill, is home to the House of Assembly along with the offices for the Members of the House of Assembly (MHAs) and
Ministers. The city is represented by ten MHAs: four from the governing Progressive Conservative Party,
three from the New Democratic Party (NDP), and three from the Liberal Party. Lorraine Michael,
leader of the NDP since 2006, represents the district of Signal Hill-Quidi Vidi. St. John's has traditionally been
one of the safest cities in Canada to live; however, in recent years crime in the city has steadily increased. While
nationally crime decreased by 4% in 2009, the total crime rate in St. John's saw an increase of 4%. During this same
time violent crime in the city decreased 6%, compared to a 1% decrease nationally. In 2010 the total crime severity
index for the city was 101.9, an increase of 10% from 2009 and 19.2% above the national average. The violent crime
severity index was 90.1, an increase of 29% from 2009 and 1.2% above the national average. St. John's had the seventh-highest
metropolitan crime index and twelfth-highest metropolitan violent crime index in the country in 2010. St. John's
is served by St. John's International Airport (YYT), located 10 minutes northwest of the downtown core. In 2011,
roughly 1,400,000 passengers travelled through the airport making it the second busiest airport in Atlantic Canada
in passenger volume. Regular destinations include Halifax, Montreal, Ottawa, Toronto, as well as destinations throughout
the province. International locations include Dublin, London, New York City, Saint Pierre and Miquelon, Glasgow and
Varadero. Scheduled service providers include Air Canada, Air Canada Jazz, Air Saint-Pierre, Air Transat, United
Airlines, Porter Airlines, Provincial Airlines, Sunwing Airlines and Westjet. St. John's is the eastern terminus
of the Trans-Canada Highway, one of the longest national highways in the world. The divided highway, also known as
"Outer Ring Road" in the city, runs just outside the main part of the city, with exits to Pitts Memorial Drive, Topsail
Road, Team Gushue Highway, Thorburn Road, Allandale Road, Portugal Cove Road and Torbay Road, providing relatively
easy access to neighbourhoods served by those streets. Pitts Memorial Drive runs from Conception Bay South, through
the city of Mount Pearl and into downtown St. John's, with interchanges for Goulds, Water Street and Hamilton Avenue-New
Gower Street. Metrobus Transit is responsible for public transit in the region. Metrobus has a total of 19 routes,
53 buses and an annual ridership of 3,014,073. Destinations include the Avalon Mall, The Village Shopping Centre,
Memorial University, Academy Canada, the College of the North Atlantic, the Marine Institute, the Confederation Building,
downtown, Stavanger Drive Business Park, Kelsey Drive, Goulds, Kilbride, Shea Heights, the four hospitals in the
city as well as other important areas in St. John's and Mount Pearl. St. John's is served by the Eastern School District,
the largest school district in Newfoundland and Labrador by student population. There are currently 36 primary, elementary
and secondary schools in the city of St. John's, including three private schools. St. John's also includes one school
that is part of the province-wide Conseil Scolaire Francophone (CSF), the Francophone public school district. The
city's private schools include St. Bonaventure's College and Lakecrest Independent. Atlantic Canada's largest
university, Memorial University of Newfoundland (MUN), is located in St. John's. MUN provides comprehensive education
and grants degrees in several fields and its historical strengths in engineering, business, geology, and medicine,
make MUN one of the top comprehensive universities in Canada. The Fisheries and Marine Institute of Memorial University
of Newfoundland (MI) or simply Marine Institute, is a post-secondary ocean and marine polytechnic located in St.
John's and is affiliated with Memorial University of Newfoundland. MUN also offers the lowest tuition in Canada ($2,644
per academic year). CJON-DT, known on air as "NTV", is an independent station. The station sublicenses entertainment
programming from Global and news programming from CTV and Global, rather than purchasing primary broadcast rights.
Rogers Cable has its provincial headquarters in St. John's, and their community channel Rogers TV airs local shows
such as Out of the Fog and One Chef One Critic. CBC has its Newfoundland and Labrador headquarters in the city and
their television station CBNT-DT broadcasts from University Avenue. The city is home to 15 AM and FM radio stations,
two of which are French-language stations. St. John's is the only Canadian city served by radio stations whose call
letters do not all begin with the letter C. The ITU prefix VO was assigned to the Dominion of Newfoundland before
the province joined Canadian Confederation in 1949, and three AM stations kept their existing call letters. However,
other commercial radio stations in St. John's which went to air after 1949 use the same range of prefixes (CF–CK)
currently in use elsewhere in Canada, with the exception of VOCM-FM, which was permitted to adopt the VOCM callsign
because of its corporate association with the AM station that already bore that callsign. VO also remains in use
in amateur radio.
John von Neumann (/vɒn ˈnɔɪmən/; Hungarian: Neumann János Lajos, pronounced [ˈnɒjmɒn ˈjaːnoʃ ˈlɒjoʃ]; December 28, 1903 –
February 8, 1957) was a Hungarian-American pure and applied mathematician, physicist, inventor, computer scientist,
and polymath. He made major contributions to a number of fields, including mathematics (foundations of mathematics,
functional analysis, ergodic theory, geometry, topology, and numerical analysis), physics (quantum mechanics, hydrodynamics,
fluid dynamics and quantum statistical mechanics), economics (game theory), computing (Von Neumann architecture,
linear programming, self-replicating machines, stochastic computing), and statistics. He was a pioneer of the application
of operator theory to quantum mechanics, in the development of functional analysis, a principal member of the Manhattan
Project and the Institute for Advanced Study in Princeton (as one of the few originally appointed), and a key figure
in the development of game theory and the concepts of cellular automata, the universal constructor and the digital
computer. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics.
His last work, an unfinished manuscript written while in the hospital, was later published in book form as The Computer
and the Brain. Von Neumann's mathematical analysis of the structure of self-replication preceded the discovery of
the structure of DNA. In a short list of facts about his life he submitted to the National Academy of Sciences, he
stated "The part of my work I consider most essential is that on quantum mechanics, which developed in Göttingen
in 1926, and subsequently in Berlin in 1927–1929. Also, my work on various forms of operator theory, Berlin 1930
and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan
Project with J. Robert Oppenheimer and Edward Teller, developing the mathematical models behind the explosive lenses
used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United
States Atomic Energy Commission, and later as one of its commissioners. He was a consultant to a number of organizations,
including the United States Air Force, the Armed Forces Special Weapons Project, and the Lawrence Livermore National
Laboratory. Along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out
key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born
Neumann János Lajos (in Hungarian the family name comes first), Hebrew name Yonah, in Budapest, Kingdom of Hungary,
which was then part of the Austro-Hungarian Empire, to wealthy Jewish parents of the Haskalah. He was the eldest
of three children. He had two younger brothers: Michael, born in 1907, and Nicholas, who was born in 1911. His father,
Neumann Miksa (Max Neumann) was a banker, who held a doctorate in law. He had moved to Budapest from Pécs at the
end of the 1880s. Miksa's father and grandfather were both born in Ond (now part of the town of Szerencs), Zemplén
County, northern Hungary. John's mother was Kann Margit (Margaret Kann); her parents were Jakab Kann and Katalin
Meisels. Three generations of the Kann family lived in spacious apartments above the Kann-Heller offices in Budapest;
von Neumann's family occupied an 18-room apartment on the top floor. In 1913, his father was elevated to the nobility
for his service to the Austro-Hungarian Empire by Emperor Franz Joseph. The Neumann family thus acquired the hereditary
appellation Margittai, meaning of Marghita. The family had no connection with the town; the appellation was chosen
in reference to Margaret, as was their chosen coat of arms depicting three marguerites. Neumann János became Margittai
Neumann János (John Neumann of Marghita), which he later changed to the German Johann von Neumann. Formal schooling
did not start in Hungary until the age of ten. Instead, governesses taught von Neumann, his brothers and his cousins.
Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English,
French, German and Italian. By the age of 8, von Neumann was familiar with differential and integral calculus, but
he was particularly interested in history, reading his way through Wilhelm Oncken's Allgemeine Geschichte in Einzeldarstellungen.
A copy was contained in a private library Max purchased. One of the rooms in the apartment was converted into a library
and reading room, with bookshelves from ceiling to floor. Von Neumann entered the Lutheran Fasori Evangelikus Gimnázium
in 1911. This was one of the best schools in Budapest, part of a brilliant education system designed for the elite.
Under the Hungarian system, children received all their education at the one gymnasium. Despite being run by the
Lutheran Church, the majority of its pupils were Jewish. The school system produced a generation noted for intellectual
achievement that included Theodore von Kármán (b. 1881), George de Hevesy (b. 1885), Leó Szilárd (b. 1898), Eugene
Wigner (b. 1902), Edward Teller (b. 1908), and Paul Erdős (b. 1913). Collectively, they were sometimes known as Martians.
Wigner was a year ahead of von Neumann at the Lutheran School. When asked why the Hungary of his generation had produced
so many geniuses, Wigner, who won the Nobel Prize in Physics in 1963, replied that von Neumann was the only genius.
Although Max insisted von Neumann attend school at the grade level appropriate to his age, he agreed to hire private
tutors to give him advanced instruction in those areas in which he had displayed an aptitude. At the age of 15, he
began to study advanced calculus under the renowned analyst Gábor Szegő. On their first meeting, Szegő was so astounded
with the boy's mathematical talent that he was brought to tears. Some of von Neumann's instant solutions to the problems
in calculus posed by Szegő, sketched out on his father's stationery, are still on display at the von Neumann archive
in Budapest. By the age of 19, von Neumann had published two major mathematical papers, the second of which gave
the modern definition of ordinal numbers, which superseded Georg Cantor's definition. At the conclusion of his education
at the gymnasium, von Neumann sat for and won the Eötvös Prize, a national prize for mathematics. Since there were
few posts in Hungary for mathematicians, and those were not well-paid, his father wanted von Neumann to follow him
into industry and therefore invest his time in a more financially useful endeavor than mathematics. So it was decided
that the best career path was to become a chemical engineer. This was not something that von Neumann had much knowledge
of, so it was arranged for him to take a two-year non-degree course in chemistry at the University of Berlin, after
which he sat the entrance exam to the prestigious ETH Zurich, which he passed in September 1923. At the same time,
von Neumann also entered Pázmány Péter University in Budapest, as a Ph.D. candidate in mathematics. For his thesis,
he chose to produce an axiomatization of Cantor's set theory. He passed his final examinations for his Ph.D. soon
after graduating from ETH Zurich in 1926. He then went to the University of Göttingen on a grant from the Rockefeller
Foundation to study mathematics under David Hilbert. Von Neumann's habilitation was completed on December 13, 1927,
and he started his lectures as a privatdozent at the University of Berlin in 1928. By the end of 1927, von Neumann
had published twelve major papers in mathematics, and by the end of 1929, thirty-two papers, at a rate of nearly
one major paper per month. His reputed powers of speedy, massive memorization and recall allowed him to recite volumes
of information, and even entire directories, with ease. In 1929, he briefly became a privatdozent at the University
of Hamburg, where the prospects of becoming a tenured professor were better, but in October of that year a better
offer presented itself when he was invited to Princeton University in Princeton, New Jersey. On New Year's Day in
1930, von Neumann married Mariette Kövesi, who had studied economics at the Budapest University. Before his marriage
he was baptized a Catholic. Max had died in 1929. None of the family had converted to Christianity while he was alive,
but afterwards they all did. They had one child, a daughter, Marina, who is now a distinguished professor of business
administration and public policy at the University of Michigan. The couple divorced in 1937. In October 1938, von
Neumann married Klara Dan, whom he had met during his last trips back to Budapest prior to the outbreak of World
War II. In 1933, von Neumann was offered a lifetime professorship on the faculty of the Institute for Advanced Study
when the institute's plan to appoint Hermann Weyl fell through. He remained a mathematics professor there until his
death, although shortly before his death he announced his intention to resign and become a professor at large at the University
of California. His mother, brothers and in-laws followed John to the United States in 1939. Von Neumann anglicized
his first name to John, keeping the German-aristocratic surname of von Neumann. His brothers changed theirs to "Neumann"
and "Vonneumann". Von Neumann became a naturalized citizen of the United States in 1937, and immediately tried to
become a lieutenant in the United States Army's Officers Reserve Corps. He passed the exams easily, but was ultimately
rejected because of his age. His prewar analysis is often quoted. Asked about how France would stand up to Germany
he said "Oh, France won't matter." Von Neumann liked to eat and drink; his wife, Klara, said that he could count
everything except calories. He enjoyed Yiddish and "off-color" humor (especially limericks). He was a non-smoker.
At Princeton he received complaints for regularly playing extremely loud German march music on his gramophone, which
distracted those in neighbouring offices, including Albert Einstein, from their work. Von Neumann did some of his
best work blazingly fast in noisy, chaotic environments, and once admonished his wife for preparing a quiet study
for him to work in. He never used it, preferring the couple's living room with its television playing loudly. Von
Neumann's closest friend in the United States was mathematician Stanislaw Ulam. A later friend of Ulam's, Gian-Carlo
Rota, writes: "They would spend hours on end gossiping and giggling, swapping Jewish jokes, and drifting in and out
of mathematical talk." When von Neumann was dying in hospital, every time Ulam would visit he would come prepared
with a new collection of jokes to cheer up his friend. He believed that much of his mathematical thought occurred
intuitively, and he would often go to sleep with a problem unsolved, and know the answer immediately upon waking
up. The axiomatization of mathematics, on the model of Euclid's Elements, had reached new levels of rigour and breadth
at the end of the 19th century, particularly in arithmetic, thanks to the axiom schema of Richard Dedekind and Charles
Sanders Peirce, and geometry, thanks to David Hilbert. At the beginning of the 20th century, efforts to base mathematics
on naive set theory suffered a setback due to Russell's paradox (on the set of all sets that do not belong to themselves).
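Russell's paradox can be written out compactly; the following is a standard modern formulation, not notation taken from the source:

```latex
% The Russell class: all sets that are not members of themselves
R = \{\, x \mid x \notin x \,\}
% Asking whether R is a member of itself yields a contradiction either way:
R \in R \iff R \notin R
```

Any theory that admits R as a set is inconsistent; the axiomatizations discussed next restrict which collections count as sets.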
The problem of an adequate axiomatization of set theory was resolved implicitly about twenty years later by Ernst
Zermelo and Abraham Fraenkel. Zermelo–Fraenkel set theory provided a series of principles that allowed for the construction
of the sets used in the everyday practice of mathematics. But they did not explicitly exclude the possibility of
the existence of a set that belongs to itself. In his doctoral thesis of 1925, von Neumann demonstrated two techniques
to exclude such sets—the axiom of foundation and the notion of class. The axiom of foundation established that every
set can be constructed from the bottom up in an ordered succession of steps by way of the principles of Zermelo and
Fraenkel, in such a manner that if one set belongs to another then the first must necessarily come before the second
in the succession, hence excluding the possibility of a set belonging to itself. To demonstrate that the addition
of this new axiom to the others did not produce contradictions, von Neumann introduced a method of demonstration,
called the method of inner models, which later became an essential instrument in set theory. The second approach
to the problem took as its base the notion of class, and defined a set as a class which belongs to other classes,
while a proper class is defined as a class which does not belong to other classes. Under the Zermelo–Fraenkel approach,
the axioms impede the construction of a set of all sets which do not belong to themselves. In contrast, under the
von Neumann approach, the class of all sets which do not belong to themselves can be constructed, but it is a proper
class and not a set. With this contribution of von Neumann, the axiomatic system of the theory of sets became fully
satisfactory, and the next question was whether or not it was also definitive, and not subject to improvement. A
strongly negative answer arrived in September 1930 at the historic mathematical Congress of Königsberg, in which
Kurt Gödel announced his first theorem of incompleteness: the usual axiomatic systems are incomplete, in the sense
that they cannot prove every truth which is expressible in their language. This result was sufficiently innovative
as to confound the majority of mathematicians of the time. But von Neumann, who had participated at the Congress,
confirmed his fame as an instantaneous thinker, and in less than a month was able to communicate to Gödel himself
an interesting consequence of his theorem: namely that the usual axiomatic systems are unable to demonstrate their
own consistency. However, Gödel had already discovered this consequence, now known as his second incompleteness theorem
and sent von Neumann a preprint of his article containing both incompleteness theorems. Von Neumann acknowledged
Gödel's priority in his next letter. He never thought much of "the American system of claiming personal priority
for everything." Von Neumann founded the field of continuous geometry. It followed his path-breaking work on rings
of operators. In mathematics, continuous geometry is an analogue of complex projective geometry, where instead of
the dimension of a subspace being in a discrete set 0, 1, ..., n, it can be an element of the unit interval [0,1].
Von Neumann was motivated by his discovery of von Neumann algebras with a dimension function taking a continuous
range of dimensions, and the first example of a continuous geometry other than projective space was the projections
of the hyperfinite type II factor. In a series of famous papers, von Neumann made spectacular contributions to measure
theory. The work of Banach had implied that the problem of measure has a positive solution if n = 1 or n = 2 and
a negative solution in all other cases. Von Neumann's work argued that the "problem is essentially group-theoretic
in character, and that, in particular, for the solvability of the problem of measure the ordinary algebraic concept
of solvability of a group is relevant. Thus, according to von Neumann, it is the change of group that makes a difference,
not the change of space." In a number of von Neumann's papers, the methods of argument he employed are considered
even more significant than the results. In anticipation of his later study of dimension theory in algebras of operators,
von Neumann used results on equivalence by finite decomposition, and reformulated the problem of measure in terms
of functions. In his 1936 paper on analytic measure theory, he used the Haar theorem in the solution of Hilbert's
fifth problem in the case of compact groups. In 1938, he was awarded the Bôcher Memorial Prize for his work in analysis.
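The problem of measure referred to here can be stated explicitly; this is a standard formulation whose notation is not from the source:

```latex
% Does there exist a finitely additive measure defined on ALL subsets of R^n,
\mu : \mathcal{P}(\mathbb{R}^n) \to [0, \infty],
% normalized on the unit cube,
\mu\!\left([0,1]^n\right) = 1,
% and invariant under every isometry g of R^n?
\mu(gA) = \mu(A)
```

Banach's positive answer for n = 1, 2 reflects the fact that the isometry groups of the line and the plane are solvable, while for n ≥ 3 the rotation group contains a free subgroup and the answer is negative (the Banach–Tarski paradox); hence von Neumann's point that it is the group, not the space, that matters.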
Von Neumann introduced the study of rings of operators, through the von Neumann algebras. A von Neumann algebra is
a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the
identity operator. The von Neumann bicommutant theorem shows that the analytic definition is equivalent to a purely
algebraic definition as an algebra of symmetries. The direct integral was introduced in 1949 by John von Neumann.
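The bicommutant theorem mentioned here has a compact statement; the notation below is standard and not from the source:

```latex
% For a subset M of B(H), the commutant M' is everything that commutes with M:
M' = \{\, T \in B(H) : TS = ST \ \text{for all } S \in M \,\}
% Bicommutant theorem: for a *-algebra M containing the identity operator,
% M is closed in the weak operator topology (a von Neumann algebra) iff
M = M''
```

The theorem is striking because it equates a topological closure condition with the purely algebraic condition of being equal to one's own bicommutant.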
One of von Neumann's analyses was to reduce the classification of von Neumann algebras on separable Hilbert spaces
to the classification of factors. Von Neumann worked on lattice theory between 1937 and 1939. Von Neumann provided
an abstract exploration of dimension in completed complemented modular topological lattices: "Dimension is determined,
up to a positive linear transformation, by the following two properties. It is conserved by perspective mappings
("perspectivities") and ordered by inclusion. The deepest part of the proof concerns the equivalence of perspectivity
with "projectivity by decomposition"—of which a corollary is the transitivity of perspectivity." Garrett Birkhoff
writes: "John von Neumann's brilliant mind blazed over lattice theory like a meteor". Additionally, "[I]n the general
case, von Neumann proved the following basic representation theorem. Any complemented modular lattice L having a
"basis" of n≥4 pairwise perspective elements, is isomorphic with the lattice ℛ(R) of all principal right-ideals of
a suitable regular ring R. This conclusion is the culmination of 140 pages of brilliant and incisive algebra involving
entirely novel axioms. Anyone wishing to get an unforgettable impression of the razor edge of von Neumann's mind,
need merely try to pursue this chain of exact reasoning for himself—realizing that often five pages of it were written
down before breakfast, seated at a living room writing-table in a bathrobe." Von Neumann was the first to establish
a rigorous mathematical framework for quantum mechanics, known as the Dirac–von Neumann axioms, with his 1932 work
Mathematical Foundations of Quantum Mechanics. After having completed the axiomatization of set theory, he began
to confront the axiomatization of quantum mechanics. He realized, in 1926, that a state of a quantum system could
be represented by a point in a (complex) Hilbert space that, in general, could be infinite-dimensional even for a
single particle. In this formalism of quantum mechanics, observable quantities such as position or momentum are represented
as linear operators acting on the Hilbert space associated with the quantum system. The physics of quantum mechanics
was thereby reduced to the mathematics of Hilbert spaces and linear operators acting on them. For example, the uncertainty
principle, according to which the determination of the position of a particle prevents the determination of its momentum
and vice versa, is translated into the non-commutativity of the two corresponding operators. This new mathematical
formulation included as special cases the formulations of both Heisenberg and Schrödinger. When Heisenberg was informed
von Neumann had clarified the difference between an unbounded operator that was a self-adjoint operator and one that
was merely symmetric, Heisenberg replied "Eh? What is the difference?" Von Neumann's abstract treatment permitted
him also to confront the foundational issue of determinism versus non-determinism, and in the book he presented a
proof that the statistical results of quantum mechanics could not possibly be averages of an underlying set of determined
"hidden variables," as in classical statistical mechanics. In 1966, John S. Bell published a paper arguing that the
proof contained a conceptual error and was therefore invalid. However, in 2010, Jeffrey Bub argued that Bell had
misconstrued von Neumann's proof, and pointed out that the proof, though not valid for all hidden variable theories,
does rule out a well-defined and important subset. Bub also suggests that von Neumann was aware of this limitation,
and that von Neumann did not claim that his proof completely ruled out hidden variable theories. In a chapter of
The Mathematical Foundations of Quantum Mechanics, von Neumann deeply analyzed the so-called measurement problem.
He concluded that the entire physical universe could be made subject to the universal wave function. Since something
"outside the calculation" was needed to collapse the wave function, von Neumann concluded that the collapse was caused
by the consciousness of the experimenter (although this view was accepted by Eugene Wigner, the Von Neumann–Wigner
interpretation never gained acceptance amongst the majority of physicists). In a famous paper of 1936 with Garrett
Birkhoff, the first work ever to introduce quantum logics, von Neumann and Birkhoff proved that quantum mechanics
requires a propositional calculus substantially different from all classical logics and rigorously isolated a new
algebraic structure for quantum logics. The concept of creating a propositional calculus for quantum logic was first
outlined in a short section in von Neumann's 1932 work, but in 1936, the need for the new propositional calculus
was demonstrated through several proofs. For example, photons cannot pass through two successive filters that are
polarized perpendicularly (e.g., one horizontally and the other vertically), and therefore, a fortiori, they cannot
pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession,
but if the third filter is added in between the other two, the photons will, indeed, pass through. This experimental
fact is translatable into logic as the non-commutativity of conjunction, (A ∧ B) ≠ (B ∧ A). It was also demonstrated
that the laws of distribution of classical logic, P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R) and P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R),
are not valid for quantum theory. The reason for this is that a quantum disjunction, unlike the case for classical
disjunction, can be true even when both of the disjuncts are false, and this is, in turn, attributable to the fact
that it is frequently the case in quantum mechanics that a pair of alternatives is semantically determinate, while
each of its members is necessarily indeterminate. This latter property can be illustrated by a simple example. Suppose
we are dealing with particles (such as electrons) of semi-integral spin (angular momentum), for which there are only
two possible values: positive or negative. Then a principle of indetermination establishes that the spin, relative
to two different directions (e.g., x and y), results in a pair of incompatible quantities. Suppose that the state
ɸ of a certain electron verifies the proposition "the spin of the electron in the x direction is positive." By the
principle of indeterminacy, the value of the spin in the direction y will be completely indeterminate for ɸ. Hence,
ɸ can verify neither the proposition "the spin in the direction of y is positive" nor the proposition "the spin in
the direction of y is negative." Nevertheless, the disjunction of the propositions "the spin in the direction of y
is positive or the spin in the direction of y is negative" must be true for ɸ. In the case of distribution, it is
therefore possible to have a situation in which A ∧ (B ∨ C) = A ∧ 1 = A, while (A ∧ B) ∨ (A ∧ C) = 0 ∨ 0 = 0.
Von Neumann founded the field of game theory as a mathematical discipline. He proved his minimax theorem in 1928.
This theorem establishes that in zero-sum games with perfect information (i.e., in which players know at each time
all moves that have taken place so far), there exists a pair of strategies for both players that allows each to minimize
his maximum losses, hence the name minimax. When examining every possible strategy, a player must consider all the
possible responses of his adversary. The player then plays out the strategy that will result in the minimization
of his maximum loss. Such strategies, which minimize the maximum loss for each player, are called optimal. Von Neumann showed
that their minimaxes are equal (in absolute value) and contrary (in sign). Von Neumann improved and extended the
minimax theorem to include games involving imperfect information and games with more than two players, publishing
this result in his 1944 Theory of Games and Economic Behavior (written with Oskar Morgenstern). Morgenstern wrote
a paper on game theory and thought he would show it to von Neumann because of his interest in the subject. He read
it and said to Morgenstern that he should put more in it. This was repeated a couple of times, and then von Neumann
became a coauthor and the paper became 100 pages long. Then it became a book. The public interest in this work was
such that The New York Times ran a front-page story. In this book, von Neumann declared that economic theory needed
to use functional analytic methods, especially convex sets and topological fixed-point theorem, rather than the traditional
differential calculus, because the maximum-operator did not preserve differentiable functions. Von Neumann raised
the intellectual and mathematical level of economics in several stunning publications. For his model of an expanding
economy, von Neumann proved the existence and uniqueness of an equilibrium using his generalization of the Brouwer
fixed-point theorem. Von Neumann's model of an expanding economy considered the matrix pencil A − λB with nonnegative
matrices A and B; von Neumann sought probability vectors p and q and a positive number λ that would solve the complementarity
equation p^T (A − λB) q = 0 along with two inequality systems expressing economic efficiency. In this model, the (transposed) probability
vector p represents the prices of the goods while the probability vector q represents the "intensity" at which the
production process would run. The unique solution λ represents the growth factor which is 1 plus the rate of growth
of the economy; the rate of growth equals the interest rate. Proving the existence of a positive growth rate and
proving that the growth rate equals the interest rate were remarkable achievements, even for von Neumann. Von Neumann's
results have been viewed as a special case of linear programming, where von Neumann's model uses only nonnegative
matrices. The study of von Neumann's model of an expanding economy continues to interest mathematical economists
with interests in computational economics. This paper has been called the greatest paper in mathematical economics
by several authors, who recognized its introduction of fixed-point theorems, linear inequalities, complementary slackness,
and saddlepoint duality. In the proceedings of a conference on von Neumann's growth model, Paul Samuelson said that
many mathematicians had developed methods useful to economists, but that von Neumann was unique in having made significant
contributions to economic theory itself. Von Neumann's famous 9-page paper started life as a talk at Princeton and
then became a paper in Germany, which was eventually translated into English. His interest in economics that led
to that paper began as follows: When lecturing at Berlin in 1928 and 1929 he spent his summers back home in Budapest,
and so did the economist Nicholas Kaldor, and they hit it off. Kaldor recommended that von Neumann read a book by
the mathematical economist Léon Walras. Von Neumann found some faults in that book and corrected them, for example,
replacing equations by inequalities. He noticed that Walras's General Equilibrium Theory and Walras' Law, which led
to systems of simultaneous linear equations, could produce the absurd result that the profit could be maximized by
producing and selling a negative quantity of a product. He replaced the equations by inequalities, introduced dynamic
equilibria, among other things, and eventually produced the paper. Later, von Neumann suggested a new method of linear
programming, using the homogeneous linear system of Gordan (1873), which was later popularized by Karmarkar's algorithm.
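Zero-sum games of the kind covered by the minimax theorem can themselves be posed as linear programs. As a small numerical illustration of the theorem (a brute-force grid search over mixed strategies, not von Neumann's pivoting algorithm; the 2×2 payoff matrix is a made-up example):

```python
# Brute-force numerical check of the minimax theorem on a small zero-sum
# game. The row player maximizes the expected payoff, the column player
# minimizes it; both search over mixed strategies on a grid.

def expected_payoff(A, p, q):
    """Expected payoff to the row player when the row player picks row 0
    with probability p and the column player picks column 0 with probability q."""
    return (p * q * A[0][0] + p * (1 - q) * A[0][1]
            + (1 - p) * q * A[1][0] + (1 - p) * (1 - q) * A[1][1])

def maximin(A, steps=200):
    """Best value the row player can guarantee against any response."""
    grid = [i / steps for i in range(steps + 1)]
    return max(min(expected_payoff(A, p, q) for q in grid) for p in grid)

def minimax(A, steps=200):
    """Best cap the column player can force on the row player's value."""
    grid = [i / steps for i in range(steps + 1)]
    return min(max(expected_payoff(A, p, q) for p in grid) for q in grid)

A = [[3, -1], [-2, 2]]           # example game with no saddle point in pure strategies
lo, hi = maximin(A), minimax(A)  # both approach the same game value, here 0.5
```

That the two quantities coincide (here at the game value 0.5, reached with mixed strategies) is exactly the content of the minimax theorem; with pure strategies alone this game has no such equilibrium.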
Von Neumann's method used a pivoting algorithm between simplices, with the pivoting decision determined by a nonnegative
least squares subproblem with a convexity constraint (projecting the zero-vector onto the convex hull of the active
simplex). Von Neumann's algorithm was the first interior point method of linear programming. Von Neumann made fundamental
contributions to mathematical statistics. In 1941, he derived the exact distribution of the ratio of the mean square
of successive differences to the sample variance for independent and identically normally distributed variables.
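The statistic is straightforward to compute. A minimal sketch, using one common normalization (conventions for the divisors vary between texts):

```python
# Ratio of the mean square of successive differences to the variance,
# the statistic von Neumann studied. For independent draws the ratio is
# near 2; values far below 2 indicate positive serial correlation,
# values far above 2 indicate negative serial correlation.

def successive_difference_ratio(x):
    n = len(x)
    mean = sum(x) / n
    msd = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (n - 1)
    var = sum((v - mean) ** 2 for v in x) / n
    return msd / var

r_trend = successive_difference_ratio(list(range(100)))               # trending series: near 0
r_alt = successive_difference_ratio([(-1) ** i for i in range(100)])  # alternating series: 4
```

A steadily trending series has tiny successive differences relative to its spread, driving the ratio toward 0, while a strictly alternating series drives it to the maximum of 4.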
This ratio was applied to the residuals from regression models and is commonly known as the Durbin–Watson statistic
for testing the null hypothesis that the errors are serially independent against the alternative that they follow
a stationary first order autoregression. Von Neumann made fundamental contributions in exploration of problems in
numerical hydrodynamics. For example, with Robert D. Richtmyer he developed an algorithm defining artificial viscosity
that improved the understanding of shock waves. A problem was that when computers solved hydrodynamic or aerodynamic
problems, they tried to put too many computational grid points at regions of sharp discontinuity (shock waves). The
mathematics of artificial viscosity smoothed the shock transition without sacrificing basic physics. Other well known
contributions to fluid dynamics included the classic flow solution to blast waves, and the co-discovery of the ZND
detonation model of explosives. Von Neumann's principal contribution to the atomic bomb was in the concept and design
of the explosive lenses needed to compress the plutonium core of the Fat Man weapon that was later dropped on Nagasaki.
While von Neumann did not originate the "implosion" concept, he was one of its most persistent proponents, encouraging
its continued development against the instincts of many of his colleagues, who felt such a design to be unworkable.
He also eventually came up with the idea of using more powerful shaped charges and less fissionable material to greatly
increase the speed of "assembly". When it turned out that there would not be enough uranium-235 to make more than
one bomb, the implosive lens project was greatly expanded and von Neumann's idea was implemented. Implosion was the
only method that could be used with the plutonium-239 that was available from the Hanford Site. He established the
design of the explosive lenses required, but there remained concerns about "edge effects" and imperfections in the
explosives. His calculations showed that implosion would work if it did not depart by more than 5% from spherical
symmetry. After a series of failed attempts with models, this was achieved by George Kistiakowsky, and the construction
of the Trinity bomb was completed in July 1945. Along with four other scientists and various military personnel,
von Neumann was included in the target selection committee responsible for choosing the Japanese cities of Hiroshima
and Nagasaki as the first targets of the atomic bomb. Von Neumann oversaw computations related to the expected size
of the bomb blasts, estimated death tolls, and the distance above the ground at which the bombs should be detonated
for optimum shock wave propagation and thus maximum effect. The cultural capital Kyoto, which had been spared the
bombing inflicted upon militarily significant cities, was von Neumann's first choice, a selection seconded by Manhattan
Project leader General Leslie Groves. However, this target was dismissed by Secretary of War Henry L. Stimson. On
July 16, 1945, with numerous other Manhattan Project personnel, von Neumann was an eyewitness to the first atomic
bomb blast, code named Trinity, conducted as a test of the implosion method device, at the bombing range near Alamogordo
Army Airfield, 35 miles (56 km) southeast of Socorro, New Mexico. Based on his observation alone, von Neumann estimated
the test had resulted in a blast equivalent to 5 kilotons of TNT (21 TJ) but Enrico Fermi produced a more accurate
estimate of 10 kilotons by dropping scraps of torn-up paper as the shock wave passed his location and watching how
far they scattered. The actual power of the explosion had been between 20 and 22 kilotons. It was in von Neumann's
1944 papers that the expression "kilotons" appeared for the first time. After the war, Robert Oppenheimer remarked
that the physicists involved in the Manhattan project had "known sin". Von Neumann's response was that "sometimes
someone confesses a sin in order to take credit for it." Von Neumann continued unperturbed in his work and became,
along with Edward Teller, one of those who sustained the hydrogen bomb project. He then collaborated with Klaus Fuchs
on further development of the bomb, and in 1946 the two filed a secret patent on "Improvement in Methods and Means
for Utilizing Nuclear Energy", which outlined a scheme for using a fission bomb to compress fusion fuel to initiate
nuclear fusion. The Fuchs–von Neumann patent used radiation implosion, but not in the same way as is used in what
became the final hydrogen bomb design, the Teller–Ulam design. Their work was, however, incorporated into the "George"
shot of Operation Greenhouse, which was instructive in testing out concepts that went into the final design. The
Fuchs–von Neumann work was passed on, by Fuchs, to the Soviet Union as part of his nuclear espionage, but it was
not used in the Soviets' own, independent development of the Teller–Ulam design. The historian Jeremy Bernstein has
pointed out that ironically, "John von Neumann and Klaus Fuchs, produced a brilliant invention in 1946 that could
have changed the whole course of the development of the hydrogen bomb, but was not fully understood until after the
bomb had been successfully made." In 1950, von Neumann became a consultant to the Weapons Systems Evaluation Group
(WSEG), whose function was to advise the Joint Chiefs of Staff and the United States Secretary of Defense on the
development and use of new technologies. He also became an adviser to the Armed Forces Special Weapons Project (AFSWP),
which was responsible for the military aspects of nuclear weapons. Over the following two years, he also became a
consultant to the Central Intelligence Agency (CIA), a member of the influential General Advisory Committee of the
Atomic Energy Commission, a consultant to the newly established Lawrence Livermore National Laboratory, and a member
of the Scientific Advisory Group of the United States Air Force. In 1955, von Neumann became a commissioner of the
AEC. He accepted this position and used it to further the production of compact hydrogen bombs suitable for Intercontinental
ballistic missile delivery. He involved himself in correcting the severe shortage of tritium and lithium 6 needed
for these compact weapons, and he argued against settling for the intermediate range missiles that the Army wanted.
He was adamant that H-bombs delivered into the heart of enemy territory by an ICBM would be the most effective weapon
possible, and that the relative inaccuracy of the missile wouldn't be a problem with an H-bomb. He said the Russians
would probably be building a similar weapon system, which turned out to be the case. Despite his disagreement with
Oppenheimer over the need for a crash program to develop the hydrogen bomb, he testified on the latter's behalf at
the 1954 Oppenheimer security hearing, at which he asserted that Oppenheimer was loyal, and praised him for his helpfulness
once the program went ahead. Shortly before his death, when he was already quite ill, von Neumann headed the United
States government's top secret ICBM committee, and it would sometimes meet in his home. Its purpose was to decide
on the feasibility of building an ICBM large enough to carry a thermonuclear weapon. Von Neumann had long argued
that while the technical obstacles were sizable, they could be overcome in time. The SM-65 Atlas passed its first
fully functional test in 1959, two years after his death. The feasibility of an ICBM owed as much to improved, smaller
warheads as it did to developments in rocketry, and his understanding of the former made his advice invaluable. Von
Neumann is credited with the equilibrium strategy of mutual assured destruction, providing the deliberately humorous
acronym, MAD. (Other humorous acronyms coined by von Neumann include his computer, the Mathematical Analyzer, Numerical
Integrator, and Computer—or MANIAC). He also "moved heaven and earth" to bring MAD about. His goal was to quickly
develop ICBMs and the compact hydrogen bombs that they could deliver to the USSR, and he knew the Soviets were doing
similar work because the CIA interviewed German rocket scientists who were allowed to return to Germany, and von
Neumann had planted a dozen technical people in the CIA. The Russians believed that bombers would soon be vulnerable,
and they shared von Neumann's view that an H-bomb in an ICBM was the ne plus ultra of weapons, and they believed
that whoever had superiority in these weapons would take over the world, without necessarily using them. He was afraid
of a "missile gap" and took several more steps to achieve his goal of keeping up with the Soviets. Von Neumann entered
government service (Manhattan Project) primarily because he felt that, if freedom and civilization were to survive,
it would have to be because the US would triumph over totalitarianism from Nazism, Fascism and Soviet Communism.
During a Senate committee hearing he described his political ideology as "violently anti-communist, and much more
militaristic than the norm". He was quoted in 1950 remarking, "If you say why not bomb [the Soviets] tomorrow, I
say, why not today? If you say today at five o'clock, I say why not one o'clock?" Von Neumann was a founding figure
in computing. Donald Knuth cites von Neumann as the inventor, in 1945, of the merge sort algorithm, in which the
first and second halves of an array are each sorted recursively and then merged. Von Neumann wrote the 23-page
sorting program for the EDVAC in ink; traces of the phrase "TOP SECRET", written in pencil and later erased, can
still be seen on its first page. He also worked on the philosophy of artificial intelligence with Alan
Turing when the latter visited Princeton in the 1930s. Von Neumann's hydrogen bomb work was played out in the realm
of computing, where he and Stanislaw Ulam developed simulations on von Neumann's digital computers for the hydrodynamic
computations. During this time he contributed to the development of the Monte Carlo method, which allowed solutions
to complicated problems to be approximated using random numbers. His algorithm for simulating a fair coin with a
biased coin is used in the "software whitening" stage of some hardware random number generators. Because using lists
of "truly" random numbers was extremely slow, von Neumann developed a way of generating pseudorandom numbers, using
the middle-square method. Though this method has been criticized as crude, von Neumann was aware of this: he justified
it as being faster than any other method at his disposal, and also noted that when it went awry it did so obviously,
unlike methods that could be subtly incorrect. He famously remarked, "Anyone who considers arithmetical methods of producing random digits
is, of course, in a state of sin." While consulting for the Moore School of Electrical Engineering at the University
of Pennsylvania on the EDVAC project, von Neumann wrote an incomplete First Draft of a Report on the EDVAC. The paper,
whose premature distribution nullified the patent claims of EDVAC designers J. Presper Eckert and John Mauchly, described
a computer architecture in which the data and the program are both stored in the computer's memory in the same address
space. This architecture is to this day the basis of modern computer design, unlike the earliest computers that were
"programmed" using a separate memory device such as a paper tape or plugboard. Although the single-memory, stored
program architecture is commonly called von Neumann architecture as a result of von Neumann's paper, the architecture's
description was based on the work of J. Presper Eckert and John William Mauchly, inventors of the ENIAC computer
at the University of Pennsylvania. John von Neumann also consulted for the ENIAC project. The electronics of the
modified ENIAC ran at one-sixth its original speed, but this in no way degraded its performance, since it was still entirely
I/O bound. Complicated programs could be developed and debugged in days rather than the weeks required for plugboarding
the old ENIAC. Some of von Neumann's early computer programs have been preserved. The next computer that von Neumann
designed was the IAS machine at the Institute for Advanced Study in Princeton, New Jersey. He arranged its financing,
and the components were designed and built at the RCA Research Laboratory nearby. John von Neumann recommended that
the IBM 701, nicknamed the defense computer, include a magnetic drum. It was a faster version of the IAS machine and
formed the basis for the commercially successful IBM 704. Stochastic computing was first introduced in a pioneering
paper by von Neumann in 1953. However, the theory could not be implemented until advances in computing of the 1960s.
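The core idea of stochastic computing fits in a few lines: a value in [0, 1] is represented by the probability that a bit in a random stream is 1, and multiplication then costs only an AND gate per bit position. A toy sketch of the representation (illustrative only, not von Neumann's 1953 construction):

```python
import random

def stream(p, n, rng):
    """Encode the value p in [0, 1] as a random bit stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=0):
    """Multiply two values by AND-ing their (independent) bit streams;
    the fraction of 1s in the result estimates the product p * q."""
    rng = random.Random(seed)
    a = stream(p, n, rng)
    b = stream(q, n, rng)
    return sum(x & y for x, y in zip(a, b)) / n

est = stochastic_multiply(0.6, 0.5)  # close to 0.6 * 0.5 = 0.3
```

The appeal is that arithmetic reduces to trivial logic gates; the price, visible above, is that precision improves only with the square root of the stream length.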
He also created the field of cellular automata without the aid of computers, constructing the first self-replicating
automata with pencil and graph paper. The concept of a universal constructor was fleshed out in his posthumous work
Theory of Self Reproducing Automata. Von Neumann proved that the most effective way of performing large-scale mining
operations such as mining an entire moon or asteroid belt would be by using self-replicating spacecraft, taking advantage
of their exponential growth. His rigorous mathematical analysis of the structure of self-replication (of the semiotic
relationship between constructor, description and that which is constructed), preceded the discovery of the structure
of DNA. Von Neumann's design for a self-reproducing computer program, begun in 1949, is considered the world's
first computer virus, and he is considered to be the theoretical father of computer virology. Von Neumann's team
performed the world's first numerical weather forecasts on the ENIAC computer; von Neumann published the paper Numerical
Integration of the Barotropic Vorticity Equation in 1950. Von Neumann's interest in weather systems and meteorological
prediction led him to propose manipulating the environment by spreading colorants on the polar ice caps to enhance
absorption of solar radiation (by reducing the albedo), thereby inducing global warming. Noting that the Earth was
only 6 °F (3.3 °C) colder during the last glacial period, he remarked that the burning of coal and oil would result
in "a general warming of the Earth by about one degree Fahrenheit." Von Neumann's ability to instantaneously perform complex operations
in his head stunned other mathematicians. Eugene Wigner wrote that, seeing von Neumann's mind at work, "one had the
impression of a perfect instrument whose gears were machined to mesh accurately to a thousandth of an inch." Paul
Halmos states that "von Neumann's speed was awe-inspiring." Israel Halperin said: "Keeping up with him was ... impossible.
The feeling was you were on a tricycle chasing a racing car." Edward Teller wrote that von Neumann effortlessly outdid
anybody he ever met, and said "I never could keep up with him". Teller also said "von Neumann would carry on a conversation
with my 3-year-old son, and the two of them would talk as equals, and I sometimes wondered if he used the same principle
when he talked to the rest of us. Most people avoid thinking if they can, some of us are addicted to thinking, but
von Neumann actually enjoyed thinking, maybe even to the exclusion of everything else." Lothar Wolfgang Nordheim
described von Neumann as the "fastest mind I ever met", and Jacob Bronowski wrote "He was the cleverest man I ever
knew, without exception. He was a genius." George Pólya, whose lectures at ETH Zürich von Neumann attended as a student,
said "Johnny was the only student I was ever afraid of. If in the course of a lecture I stated an unsolved problem,
the chances were he'd come to me at the end of the lecture with the complete solution scribbled on a slip of paper."
Halmos recounts a story told by Nicholas Metropolis, concerning the speed of von Neumann's calculations, when somebody
asked von Neumann to solve the famous fly puzzle. Herman Goldstine wrote: "One of his remarkable abilities was his
power of absolute recall. As far as I could tell, von Neumann was able on once reading a book or article to quote
it back verbatim; moreover, he could do it years later without hesitation. He could also translate it at no diminution
in speed from its original language into English. On one occasion I tested his ability by asking him to tell me how
A Tale of Two Cities started. Whereupon, without any pause, he immediately began to recite the first chapter and
continued until asked to stop after about ten or fifteen minutes." Ulam noted that von Neumann's way of thinking
might not be visual, but more of an aural one. "I have sometimes wondered whether a brain like von Neumann's does
not indicate a species superior to that of man", said Nobel Laureate Hans Bethe of Cornell University. "It seems
fair to say that if the influence of a scientist is interpreted broadly enough to include impact on fields beyond
science proper, then John von Neumann was probably the most influential mathematician who ever lived," wrote Miklós
Rédei in "Selected Letters." James Glimm wrote: "he is regarded as one of the giants of modern mathematics". The
mathematician Jean Dieudonné called von Neumann "the last of the great mathematicians", while Peter Lax described
him as possessing the "most scintillating intellect of this century". In 1955, von Neumann was diagnosed with what
was either bone or pancreatic cancer. His mother, Margaret von Neumann, was diagnosed with cancer in 1956 and died
within two weeks. Von Neumann himself had eighteen months between diagnosis and death. In this period he returned to the
Roman Catholic faith that had also been significant to his mother after the family's conversion in 1929–1930. John
had earlier said to his mother, "There is probably a God. Many things are easier to explain if there is than if there
isn't." Von Neumann held on to his exemplary knowledge of Latin and quoted to a deathbed visitor the declamation
"Judex ergo cum sedebit," which ends "Quid sum miser tunc dicturus? Quem patronum rogaturus, Cum vix iustus sit securus?"
(When the judge His seat hath taken ... What shall wretched I then plead? Who for me shall intercede when the righteous
scarce is freed?) He invited a Roman Catholic priest, Father Anselm Strittmatter, O.S.B., to visit him for consultation.
Von Neumann reportedly said in explanation that Pascal had a point, referring to Pascal's Wager. Father Strittmatter
administered the last sacraments to him. Some of von Neumann's friends (such as Abraham Pais and Oskar Morgenstern)
said they had always believed him to be "completely agnostic." Of this deathbed conversion, Morgenstern told Heims,
"He was of course completely agnostic all his life, and then he suddenly turned Catholic—it doesn't agree with anything
whatsoever in his attitude, outlook and thinking when he was healthy." Father Strittmatter recalled that von Neumann
did not receive much peace or comfort from it, as he still remained terrified of death.
The console was first officially announced at E3 2005, and was released at the end of 2006. It was the first console to use
Blu-ray Disc as its primary storage medium. The console was also the first PlayStation to integrate social gaming
services, introducing Sony's PlayStation Network, and to support remote connectivity with the PlayStation Portable
and PlayStation Vita, allowing those handhelds to control the console remotely. In September
2009, the Slim model of the PlayStation 3 was released, being lighter and thinner than the original version, which
notably featured a redesigned logo and marketing design, as well as a minor start-up change in software. A Super
Slim variation was then released in late 2012, further refining and redesigning the console. As of March 2016, PlayStation
3 has sold 85 million units worldwide. Its successor, the PlayStation 4, was released in November 2013. Sony
officially unveiled PlayStation 3 (then marketed as PLAYSTATION 3) to the public on May 16, 2005, at E3 2005, along
with a 'boomerang' shaped prototype design of the Sixaxis controller. A functional version of the system was not
present there, nor at the Tokyo Game Show in September 2005, although demonstrations (such as Metal Gear Solid 4:
Guns of the Patriots) were held at both events on software development kits and comparable personal computer hardware.
Video footage based on the predicted PlayStation 3 specifications was also shown (notably a Final Fantasy VII tech
demo). The initial prototype shown in May 2005 featured two HDMI ports, three Ethernet ports and six USB ports; however,
when the system was shown again a year later at E3 2006, these were reduced to one HDMI port, one Ethernet port and
four USB ports, presumably to cut costs. Two hardware configurations were also announced for the console: a 20 GB
model and a 60 GB model, priced at US$499 (€499) and US$599 (€599), respectively. The 60 GB model was to be the only
configuration to feature an HDMI port, Wi-Fi internet, flash card readers and a chrome trim with the logo in silver.
Both models were announced for a simultaneous worldwide release: November 11, 2006, for Japan and November 17, 2006,
for North America and Europe. On September 6, 2006, Sony announced that PAL region PlayStation 3 launch would be
delayed until March 2007, because of a shortage of materials used in the Blu-ray drive. At the Tokyo Game Show on
September 22, 2006, Sony announced that it would include an HDMI port on the 20 GB system, but a chrome trim, flash
card readers, silver logo and Wi-Fi would not be included. Also, the launch price of the Japanese 20 GB model was
reduced by over 20%, and the 60 GB model was announced for an open pricing scheme in Japan. During the event, Sony
showed 27 playable PS3 games running on final hardware. The console was originally planned for a global release through
November, but at the start of September the release in Europe and the rest of the world was delayed until March.
Because the delay was announced at short notice, some companies had already taken deposits for pre-orders; Sony informed
customers that they were eligible for full refunds or could continue the pre-order. On January 24, 2007, Sony announced
that PlayStation 3 would go on sale on March 23, 2007, in Europe, Australia, the Middle East, Africa and New Zealand.
The system sold about 600,000 units in its first two days. On March 7, 2007, the 60 GB PlayStation 3 launched in
Singapore with a price of S$799. The console was launched in South Korea on June 16, 2007, as a single version equipped
with an 80 GB hard drive and IPTV. Following speculation that Sony was working on a 'slim' model, Sony officially
announced the PS3 CECH-2000 model on August 18, 2009, at the Sony Gamescom press conference. New features included
a slimmer form factor, decreased power consumption, and a quieter cooling system. It was released in major territories
by September 2009. As part of the release for the slim model, the console logo ceased using the "Spider-Man font"
(the same font used for the title of Sony's Spider-Man 3) and the capitalized PLAYSTATION 3. It instead reverted
to a more traditional PlayStation- and PlayStation 2-like 'PlayStation 3' logo with "PS3" imprinted on the console.
Along with the redesigning of the console and logo, the boot screen of all consoles changed from "Sony Computer Entertainment"
to "PS3 PlayStation 3", with a new chime and the game start splash screen being dropped. The cover art and packaging
of games was also changed. In September 2012 at the Tokyo Game Show, Sony announced that a new, slimmer PS3 redesign
(CECH-4000) was due for release in late 2012 and that it would be available with either a 250 GB or 500 GB hard drive.
Three versions of the Super Slim model were revealed: one with a 500 GB hard drive, a second with a 250 GB hard drive
that was not available in PAL regions, and a third with 12 GB of flash storage that was only available in PAL regions. The
storage of the 12 GB model is upgradable with an official standalone 250 GB hard drive. A vertical stand was also released
for the model. In the United Kingdom, the 500 GB model was released on September 28, 2012; and the 12 GB model was
released on October 12, 2012. In the United States, the PS3 Super Slim was first released as a bundled console. The
250 GB model was bundled with the Game of the Year edition of Uncharted 3: Drake's Deception and released on September
25, 2012; and the 500 GB model was bundled with Assassin's Creed III and released on October 30, 2012. In Japan,
the black colored Super Slim model was released on October 4, 2012; and the white colored Super Slim model was released
on November 22, 2012. The Super Slim model is 20 percent smaller and 25 percent lighter than the Slim model and features
a manual sliding disc cover instead of the motorized slot-loading disc cover of the Slim model. The white colored Super
Slim model was released in the United States on January 27, 2013 as part of the Instant Game Collection Bundle. The
Garnet Red and Azurite Blue colored models were launched in Japan on February 28, 2013. The Garnet Red version was
released in North America on March 12, 2013 as part of the God of War: Ascension bundle with 500 GB storage and contained
God of War: Ascension as well as the God of War Saga. The Azurite Blue model was released as a GameStop exclusive
with 250GB storage. PlayStation 3 launched in North America with 14 titles, with another three being released before
the end of 2006. After the first week of sales it was confirmed that Resistance: Fall of Man from Insomniac Games
was the top-selling launch game in North America. The game was heavily praised by numerous video game websites, including
GameSpot and IGN, both of whom awarded it their PlayStation 3 Game of the Year award for 2006. Some titles missed
the launch window and were delayed until early 2007, such as The Elder Scrolls IV: Oblivion, F.E.A.R. and Sonic the
Hedgehog. During the Japanese launch, Ridge Racer 7 was the top-selling game, while Mobile Suit Gundam: Crossfire
also fared well in sales, both of which were offerings from Namco Bandai Games. PlayStation 3 launched in Europe
with 24 titles, including ones that were not offered in North American and Japanese launches, such as Formula One
Championship Edition, MotorStorm and Virtua Fighter 5. Resistance: Fall of Man and MotorStorm were the most successful
titles of 2007, and both games subsequently received sequels in the form of Resistance 2 and MotorStorm: Pacific
Rift. At E3 2007, Sony was able to show a number of their upcoming video games for PlayStation 3, including Heavenly
Sword, Lair, Ratchet & Clank Future: Tools of Destruction, Warhawk and Uncharted: Drake's Fortune; all of which were
released in the third and fourth quarters of 2007. They also showed off a number of titles that were set for release
in 2008 and 2009; most notably Killzone 2, Infamous, Gran Turismo 5 Prologue, LittleBigPlanet and SOCOM: U.S. Navy
SEALs Confrontation. A number of third-party exclusives were also shown, including the highly anticipated Metal Gear
Solid 4: Guns of the Patriots, alongside other high-profile third-party titles such as Grand Theft Auto IV, Call
of Duty 4: Modern Warfare, Assassin's Creed, Devil May Cry 4 and Resident Evil 5. Two other important titles for
PlayStation 3, Final Fantasy XIII and Final Fantasy Versus XIII, were shown at TGS 2007 in order to appease the Japanese
market. Sony has since launched its budget range of PlayStation 3 titles, known as the Greatest Hits range in
North America, the Platinum range in Europe and Australia and The Best range in Japan. Titles available
in the budget range include Resistance: Fall of Man, MotorStorm, Uncharted: Drake's Fortune, Rainbow Six: Vegas, Call
of Duty 3, Assassin's Creed and Ninja Gaiden Sigma. As of October 2009, Metal Gear Solid 4: Guns of the Patriots,
Ratchet & Clank Future: Tools of Destruction, Devil May Cry 4, Army of Two, Battlefield: Bad Company and Midnight
Club: Los Angeles have also joined the list. In December 2008, the CTO of Blitz Games announced that the company would bring
stereoscopic 3D gaming and movie viewing to Xbox 360 and PlayStation 3 with its own technology. This was first demonstrated
publicly on PS3 using Sony's own technology in January 2009 at the Consumer Electronics Show. Journalists were shown
Wipeout HD and Gran Turismo 5 Prologue in 3D as a demonstration of how the technology might work if it is implemented
in the future. Firmware update 3.30 officially allowed PS3 titles to be played in 3D, requiring a compatible display
for use. System software update 3.50 prepared it for 3D films. While the game itself must be programmed to take advantage
of the 3D technology, titles may be patched to add in the functionality retroactively. Titles with such patches include
Wipeout HD, Pain, and Super Stardust HD. PS3's hardware has also been used to build supercomputers for high-performance
computing. Fixstars Solutions sells a version of Yellow Dog Linux for PlayStation 3 (originally sold by Terra Soft
Solutions). RapidMind produced a stream programming package for PS3 but was acquired by Intel in 2009. Also, on
January 3, 2007, Dr. Frank Mueller, Associate Professor of Computer Science at NCSU, clustered eight PS3s. Mueller commented
that the 256 MB of system RAM was a limitation for this particular application and said he was considering attempting to retrofit
more RAM. Software includes Fedora Core 5 Linux ppc64, MPICH2, OpenMP v2.5, GNU Compiler Collection and CellSDK
1.1. As a more cost-effective alternative to conventional supercomputers, the U.S. military has purchased clusters
of PS3 units for research purposes. Retail PS3 Slim units cannot be used for supercomputing, because PS3 Slim lacks
the ability to boot into a third-party OS. PlayStation 3 uses the Cell microprocessor, designed by Sony, Toshiba
and IBM, as its CPU, which is made up of one 3.2 GHz PowerPC-based "Power Processing Element" (PPE) and eight Synergistic
Processing Elements (SPEs). One of the eight SPEs is disabled to improve chip yields; of the remaining seven, one is reserved
by the console's operating system, leaving six accessible to developers. Graphics processing is handled by
the NVIDIA RSX 'Reality Synthesizer', which can produce resolutions from 480i/576i SD up to 1080p HD. PlayStation
3 has 256 MB of XDR DRAM main memory and 256 MB of GDDR3 video memory for the RSX. At its press conference at the
2007 Tokyo Game Show, Sony announced DualShock 3 (trademarked DUALSHOCK 3), a PlayStation 3 controller with the same
function and design as Sixaxis, but with vibration capability included. Hands-on accounts describe the controller
as being noticeably heavier than the standard Sixaxis controller and capable of vibration forces comparable to DualShock
2. It was released in Japan on November 11, 2007; in North America on April 5, 2008; in Australia on April 24, 2008;
in New Zealand on May 9, 2008; in mainland Europe on July 2, 2008, and in the United Kingdom and Ireland on July
4, 2008. The standard PlayStation 3 version of the XrossMediaBar (pronounced Cross Media Bar, or abbreviated XMB)
includes nine categories of options. These are: Users, Settings, Photo, Music, Video, TV/Video Services, Game, Network,
PlayStation Network and Friends (similar to the PlayStation Portable media bar). The TV/Video Services category is
for services like Netflix, or for PlayTV or torne if installed; the first category in this section is "My Channels",
which lets users download various streaming services, including Sony's own streaming services Crackle and PlayStation
Vue. By default, the What's New section of PlayStation Network is displayed when the system starts up. PS3 includes
the ability to store various master and secondary user profiles, manage and explore photos with or without a musical
slide show, play music and copy audio CD tracks to an attached data storage device, play movies and video files from
the hard disk drive, an optical disc (Blu-ray Disc or DVD-Video), or an optional USB mass storage device or Flash card; it is also compatible
with a USB keyboard and mouse, and includes a web browser that supports downloading compatible files. Additionally, UPnP media
will appear in the respective audio/video/photo categories if a compatible media server or DLNA server is detected
on the local network. The Friends menu allows mail with emoticon and attached picture features and video chat which
requires an optional PlayStation Eye or EyeToy webcam. The Network menu allows online shopping through the PlayStation
Store and connectivity to PlayStation Portable via Remote Play. PlayStation 3 console protects certain types of data
and uses digital rights management to limit the data's use. Purchased games and content from the PlayStation Network
store are governed by PlayStation's Network Digital Rights Management (NDRM). The NDRM allows users to access the
data from up to two different PlayStation 3 consoles that have been activated using a user's PlayStation Network ID. PlayStation
3 also limits the transfer of copy protected videos downloaded from its store to other machines and states that copy
protected video "may not restore correctly" following certain actions after making a backup such as downloading a
new copy protected movie. Photo Gallery is an optional application to view, create and group photos on PS3, which
is installed separately from the system software at 105 MB. It was introduced in system software version 2.60 and
provides a range of tools for sorting through and displaying the system's pictures. The key feature of this application
is that it can organize photos into groups according to various criteria. Notable categorizations are colors, ages,
or facial expressions of the people in the photos. Slideshows can be viewed with the application, along with music
and playlists. The software was updated with the release of system software version 3.40 allowing users to upload
and browse photos on Facebook and Picasa. Since June 2009 VidZone has offered a free music video streaming service
in Europe, Australia and New Zealand. In October 2009, Sony Computer Entertainment and Netflix announced that the
Netflix streaming service would also be available on PlayStation 3 in the United States. A paid Netflix subscription
was required for the service. The service became available in November 2009. Initially users had to use a free Blu-ray
disc to access the service; however, in October 2010 the requirement to use a disc to gain access was removed. The
'OtherOS' functionality was not present in the updated PS Slim models, and the feature was subsequently removed from
previous versions of the PS3 as part of the machine's firmware update version 3.21 which was released on April 1,
2010; Sony cited security concerns as the rationale. The firmware update 3.21 was mandatory for access to the PlayStation
Network. The removal caused some controversy, as the update removed officially advertised features from already sold
products, and gave rise to several class action lawsuits aimed at making Sony return the feature or provide compensation.
On March 1, 2010 (UTC), many of the original "fat" PlayStation 3 models worldwide were experiencing errors related
to their internal system clock. The error had many symptoms. Initially, the main problem seemed to be the inability
to connect to the PlayStation Network. However, the root cause of the problem was unrelated to the PlayStation Network,
since even users who had never been online had problems playing installed offline games (which queried the system
timer as part of startup) and using system themes. At the same time many users noted that the console's clock had
gone back to December 31, 1999. The event was nicknamed the ApocalyPS3, a play on the word apocalypse and PS3, the
abbreviation for the PlayStation 3 console. Sony confirmed that there was an error and stated that they were narrowing
down the issue and were continuing to work to restore service. By March 2 (UTC), 2010, owners of original PS3 models
could connect to PSN successfully and the clock no longer showed December 31, 1999. Sony stated that the affected
models incorrectly identified 2010 as a leap year, because of a bug in the BCD method of storing the date. However,
for some users, the hardware's operating system clock (mainly updated from the internet and not associated with the
internal clock) needed to be updated manually or by re-syncing it via the internet. PlayStation Portable can connect
with PlayStation 3 in many ways, including in-game connectivity. For example, Formula One Championship Edition, a
racing game, was shown at E3 2006 using a PSP as a real-time rear-view mirror. In addition, users are able to download
original PlayStation format games from the PlayStation Store, transfer and play them on PSP as well as PS3 itself.
It is also possible to use the Remote Play feature to play these and some PlayStation Network games, remotely on
PSP over a network or internet connection. PlayStation Network is the unified online multiplayer gaming and digital
media delivery service provided by Sony Computer Entertainment for PlayStation 3 and PlayStation Portable, announced
during the 2006 PlayStation Business Briefing meeting in Tokyo. The service is always connected, free, and includes
multiplayer support. The network enables online gaming, the PlayStation Store, PlayStation Home and other services.
PlayStation Network uses real currency and PlayStation Network Cards as seen with the PlayStation Store and PlayStation
Home. PlayStation Plus (commonly abbreviated PS+ and occasionally referred to as PSN Plus) is a premium PlayStation
Network subscription service that was officially unveiled at E3 2010 by Jack Tretton, President and CEO of SCEA.
Rumors of such a service had circulated since Kaz Hirai's announcement at TGS 2009 of a possible paid service
for PSN but with the current PSN service still available. Launched alongside PS3 firmware 3.40 and PSP firmware 6.30
on June 29, 2010, the paid-for subscription service provides users with enhanced services on the PlayStation Network,
on top of the current PSN service which is still available with all of its features. These enhancements include the
ability to have demos, game and system software updates download automatically to PlayStation 3. Subscribers also
get early or exclusive access to some betas, game demos, premium downloadable content and other PlayStation Store
items. North American users also get a free subscription to Qore. Users may choose to purchase either a one-year
or a three-month subscription to PlayStation Plus. The PlayStation Store is an online virtual market available to
users of Sony's PlayStation 3 (PS3) and PlayStation Portable (PSP) game consoles via the PlayStation Network. The
Store offers a range of downloadable content both for purchase and available free of charge. Available content includes
full games, add-on content, playable demos, themes and game and movie trailers. The service is accessible through
an icon on the XMB on PS3 and PSP. The PS3 store can also be accessed on PSP via a Remote Play connection to PS3.
The PSP store is also available via the PC application, Media Go. As of September 24, 2009, there had been over
600 million downloads from the PlayStation Store worldwide. What's New was announced at Gamescom 2009 and was released
on September 1, 2009, with PlayStation 3 system software 3.0. The feature was to replace the existing [Information
Board], which displayed news from the PlayStation website associated with the user's region. The concept was developed
further into a major PlayStation Network feature, which interacts with the [Status Indicator] to display a ticker
of all content, excluding recently played content (currently in North America and Japan only). The system displays
the What's New screen by default instead of the [Games] menu (or [Video] menu, if a movie was inserted) when starting
up. What's New has four sections: "Our Pick", "Recently Played", latest information and new content available in
PlayStation Store. There are four kinds of content that the What's New screen displays and links to in these sections.
"Recently Played" displays the user's recently played games and online services only, whereas the other sections
can contain website links, links to play videos and access to selected sections of the PlayStation Store. PlayStation
Home is a virtual 3D social networking service for the PlayStation Network. Home allows users to create a custom
avatar, which can be groomed realistically. Users can edit and decorate their personal apartments, avatars or club
houses with free, premium or won content. Users can shop for new items or win prizes from PS3 games, or Home activities.
Users interact and connect with friends and customise content in a virtual world. Home also acts as a meeting place
for users that want to play multiplayer games with others. Life with PlayStation, released on September 18, 2008
to succeed Folding@home, was retired November 6, 2012. Life with PlayStation used virtual globe data to display news
and information by city. Along with Folding@home functionality, the application provided access to several other information
"channels", the first being the Live Channel offering news headlines and weather which were provided by Google News,
The Weather Channel, the University of Wisconsin–Madison Space Science and Engineering Center, among other sources.
The second channel was the World Heritage channel, which offered historical information about historic sites. The
third channel was the United Village channel. United Village was designed to share information about communities
and cultures worldwide. An update allowed video and photo viewing in the application. The fourth channel was the
USA exclusive PlayStation Network Game Trailers Channel for direct streaming of game trailers. On April 20, 2011,
Sony shut down the PlayStation Network and Qriocity for a prolonged interval, revealing on April 23 that this was
due to "an external intrusion on our system". Sony later revealed that the personal information of 77 million users
might have been taken, including: names; addresses; countries; email addresses; birthdates; PSN/Qriocity logins,
passwords and handles/PSN online IDs. They also stated that it was possible that users' profile data, including purchase
history and billing address, and PlayStation Network/Qriocity password security answers may have been obtained. There
was no evidence that any credit card data had been taken, but the possibility could not be ruled out, and Sony advised
customers that their credit card data may have been obtained. Additionally, the credit card numbers were encrypted
and Sony never collected the three-digit CVC or CSC number from the back of the credit cards, which is required for
authenticating some transactions. In response to the incident, Sony announced a "Welcome Back" program, 30 days free
membership of PlayStation Plus for all PSN members, two free downloadable PS3 games, and a free one-year enrollment
in an identity theft protection program. Although its PlayStation predecessors had been very dominant against the
competition and were hugely profitable for Sony, PlayStation 3 had an inauspicious start, and Sony chairman and CEO
Sir Howard Stringer initially could not convince investors of a turnaround in its fortunes. The PS3 lacked the unique
gameplay of the more affordable Wii which became that generation's most successful console in terms of units sold.
Furthermore, PS3 had to compete directly with Xbox 360 which had a market head start, and as a result the platform
no longer had the exclusive titles that the PS2 had enjoyed, such as the Grand Theft Auto and Final Fantasy series (regarding
cross-platform games, Xbox 360 versions were generally considered superior in 2006, although by 2008 the PS3 versions
had reached parity or surpassed them), and it took longer than expected for PS3 to enjoy strong sales and close the gap
with Xbox 360. Sony also continued to lose money on each PS3 sold through 2010, although the redesigned "slim" PS3
has cut these losses since then. PlayStation 3's initial production cost was estimated by iSuppli to have been US$805.85
for the 20 GB model and US$840.35 for the 60 GB model. However, they were priced at US$499 and US$599 respectively,
meaning that units may have been sold at an estimated loss of $306 or $241 depending on model, if the cost estimates
were correct, and thus may have contributed to Sony's games division posting an operating loss of ¥232.3 billion
(US$1.97 billion) in the fiscal year ending March 2007. In April 2007, soon after these results were published, Ken
Kutaragi, President of Sony Computer Entertainment, announced plans to retire. Various news agencies, including The
Times and The Wall Street Journal reported that this was due to poor sales, while SCEI maintains that Kutaragi had
been planning his retirement for six months prior to the announcement. In January 2008, Kaz Hirai, CEO of Sony Computer
Entertainment, suggested that the console may start making a profit by early 2009, stating that, "the next fiscal
year starts in April and if we can try to achieve that in the next fiscal year that would be a great thing" and that
"[profitability] is not a definite commitment, but that is what I would like to try to shoot for". However, market
analysts Nikko Citigroup had predicted that PlayStation 3 could be profitable by August 2008. In a July 2008 interview,
Hirai stated that his objective was for PlayStation 3 to sell 150 million units by its ninth year, surpassing PlayStation
2's sales of 140 million in its nine years on the market. In January 2009 Sony announced that their gaming division
was profitable in Q3 2008. Since the system's launch, production costs have been reduced significantly as a result
of phasing out the Emotion Engine chip and falling hardware costs. The cost of manufacturing Cell microprocessors
has fallen dramatically as a result of moving to the 65 nm production process, and Blu-ray Disc diodes have become
cheaper to manufacture. As of January 2008, each unit cost around $400 to manufacture; by August 2009, Sony had reduced
costs by a total of 70%, meaning it only costs Sony around $240 per unit. Critical and commercial reception to PS3
improved over time, after a series of price revisions, Blu-ray's victory over HD DVD, and the release of several
well received titles. Ars Technica's original launch review gave PS3 only a 6/10, but a second review of the console
in June 2008 rated it a 9/10. In September 2009, IGN named PlayStation 3 the 15th best gaming console of all time,
behind both of its competitors: Wii (10th) and Xbox 360 (6th). However, PS3 has won IGN's "Console Showdown"—based
on which console offers the best selection of games released during each year—in three of the four years since it
began (2008, 2009 and 2011, with Xbox winning in 2010). IGN judged PlayStation 3 to have the best game line-up of
2008, based on their review scores in comparison to those of Wii and Xbox 360. In a comparison piece by PC Magazine's
Will Greenwald in June 2012, PS3 was selected as an overall better console compared to Xbox 360. Pocket-lint said
of the console "The PS3 has always been a brilliant games console," and that "For now, this is just about the best
media device for the money." PS3 was given the number-eight spot on PC World magazine's list of "The Top 21 Tech
Screwups of 2006", where it was criticized for being "Late, Expensive and Incompatible". GamesRadar ranked PS3 as
the top item in a feature on game-related PR disasters, asking how Sony managed to "take one of the most anticipated
game systems of all time and — within the space of a year — turn it into a hate object reviled by the entire internet",
but added that despite its problems the system has "untapped potential". Business Week summed up the general opinion
by stating that it was "more impressed with what [the PlayStation 3] could do than with what it currently does".
Developers also found the machine difficult to program for. In 2007, Gabe Newell of Valve said "The PS3 is a total
disaster on so many levels, I think it's really clear that Sony lost track of what customers and what developers
wanted". He continued "I'd say, even at this late date, they should just cancel it and do a do over. Just say, 'This
was a horrible disaster and we're sorry and we're going to stop selling this and stop trying to convince people to
develop for it'". Doug Lombardi, Valve's VP of Marketing, has since stated that they are interested in developing
for the console and are looking to hire talented PS3 programmers for future projects. He later restated Valve's position,
"Until we have the ability to get a PS3 team together, until we find the people who want to come to Valve or who
are at Valve who want to work on that, I don't really see us moving to that platform". At Sony's E3 2010 press conference,
Newell made a live appearance to recant his previous statements, citing Sony's move to make the system more developer
friendly, and to announce that Valve would be developing Portal 2 for the system. He also claimed that the inclusion
of Steamworks (Valve's system to automatically update their software independently) would help to make the PS3 version
of Portal 2 the best console version on the market. Activision Blizzard CEO Bobby Kotick has criticized PS3's high
development costs and inferior attach rate and return compared to those of Xbox 360 and Wii. He believes these factors are
pushing developers away from working on the console. In an interview with The Times Kotick stated "I'm getting concerned
about Sony; the PlayStation 3 is losing a bit of momentum and they don't make it easy for me to support the platform."
He continued, "It's expensive to develop for the console, and the Wii and the Xbox are just selling better. Games
generate a better return on invested capital (ROIC) on the Xbox than on the PlayStation." Kotick also claimed that
Activision Blizzard may stop supporting the system if the situation is not addressed. "[Sony has] to cut the [PS3's
retail] price, because if they don't, the attach rates are likely to slow. If we are being realistic, we might have
to stop supporting Sony." Kotick received heavy criticism for the statement, notably from developer BioWare, who questioned
the wisdom of the threatened move, and referred to the statement as "silly." Despite the initial negative press,
several websites have given the system very good reviews, mostly regarding its hardware. CNET United Kingdom praised
the system saying, "the PS3 is a versatile and impressive piece of home-entertainment equipment that lives up to
the hype [...] the PS3 is well worth its hefty price tag." CNET awarded it a score of 8.8 out of 10 and voted it
as its number one "must-have" gadget, praising its robust graphical capabilities and stylish exterior design while
criticizing its limited selection of available games. In addition, both Home Theater Magazine and Ultimate AV have
given the system's Blu-ray playback very favorable reviews, stating that the quality of playback exceeds that of
many current standalone Blu-ray Disc players. The PlayStation 3 Slim received extremely positive reviews as well
as a boost in sales; less than 24 hours after its announcement, PS3 Slim took the number-one bestseller spot on Amazon.com
in the video games section for fifteen consecutive days. It regained the number-one position again one day later.
PS3 Slim also received praise from PC World, which gave it a 90 out of 100, praising its new repackaging and the new value
it brings at a lower price, as well as its quietness and reduced power consumption. This is
in stark contrast to the original PS3's launch in which it was given position number-eight on their "The Top 21 Tech
Screwups of 2006" list. CNET awarded PS3 Slim four out of five stars, praising its Blu-ray capabilities, 120 GB hard
drive, free online gaming service and more affordable price point, but complained about the lack of backward compatibility
for PlayStation 2 games. TechRadar gave PS3 Slim four and a half stars out of five praising its new smaller size
and summed up its review stating "Over all, the PS3 Slim is a phenomenal piece of kit. It's amazing that something
so small can do so much". However, they criticized the exterior design and the build quality in relation to the original
model. The Super Slim model of PS3 has received positive reviews. Gaming website Spong praised the new Super Slim's
quietness, stating "The most noticeable noise comes when the drive seeks a new area of the disc, such as when starting
to load a game, and this occurs infrequently." They added that the fans are quieter than those of the Slim, and went on
to praise the new smaller, lighter size. Criticism was placed on the new disc loader, stating: "The cover can be
moved by hand if you wish, there's also an eject button to do the work for you, but there is no software eject from
the triangle button menus in the Xross Media Bar (XMB) interface. In addition, you have to close the cover by hand,
which can be a bit fiddly if it's upright, and the PS3 won't start reading a disc unless you do [close the cover]."
They also said there is no real drop in retail price. Tech media website CNET gave the new Super Slim 4 out of 5 stars
("Excellent"), saying "The Super Slim PlayStation 3 shrinks a powerful gaming machine into an even tinier package
while maintaining the same features as its predecessors: a great gaming library and a strong array of streaming services
[...]", whilst also criticising the "cheap" design and disc-loader, stating: "Sometimes [the cover] doesn't catch
and you feel like you're using one of those old credit card imprinter machines. In short, it feels cheap. You don't
realize how convenient autoloading disc trays are until they're gone. Whether it was to cut costs or save space,
this move is ultimately a step back." CNET also criticised the price, stating that the cheapest Super Slim model
was still more expensive than the cheapest Slim model, and that the smaller size and bigger hard drive shouldn't
be considered an upgrade when the hard drive on a Slim model is easily removed and replaced. They did praise the
hard-drive replacement process of the Super Slim model as "the easiest yet. Simply sliding off the side panel reveals the drive bay,
which can quickly be unscrewed." They also stated that whilst the Super Slim model is not in any way an upgrade,
it could be an indicator as to what's to come. "It may not be revolutionary, but the Super Slim PS3 is the same impressive
machine in a much smaller package. There doesn't seem to be any reason for existing PS3 owners to upgrade, but for
the prospective PS3 buyer, the Super Slim is probably the way to go if you can deal with not having a slot-loading
disc drive." Technology magazine T3 gave the Super Slim model a positive review, stating the console is almost "nostalgic"
in the design similarities to the original "fat" model, "While we don’t know whether it will play PS3 games or Blu-ray
discs any differently yet, the look and feel of the new PS3 Slim is an obvious homage to the original PS3, minus
the considerable excess weight. Immediately we would be concerned about the durability of the top loading tray that
feels like it could be yanked straight out of the console, but ultimately it all feels like Sony's nostalgic way
of signing off the current generation console in anticipation for the PS4."
Royal assent is sometimes associated with elaborate ceremonies. In the United Kingdom, for instance, the sovereign may appear
personally in the House of Lords or may appoint Lords Commissioners, who announce that royal assent has been granted
at a ceremony held at the Palace of Westminster. However, royal assent is usually granted less ceremonially by letters
patent. In other nations, such as Australia, the governor-general merely signs the bill. In Canada, the governor
general may give assent either in person at a ceremony held in the Senate or by a written declaration notifying parliament
of his or her agreement to the bill. Royal assent is the method by which a country's constitutional monarch (possibly
through a delegated official) formally approves an act of that nation's parliament, thus making it a law or letting
it be promulgated as law. In the vast majority of contemporary monarchies, this act is considered to be little more
than a formality; even in those nations which still permit their ruler to withhold the royal assent (such as the
United Kingdom, Norway, and Liechtenstein), the monarch almost never does so, save in a dire political emergency
or upon the advice of their government. While the power to withhold royal assent was once exercised often in European
monarchies, it is exceedingly rare in the modern, democratic political atmosphere that has developed there since
the 18th century. Under modern constitutional conventions, the sovereign acts on the advice of his or her ministers.
Since these ministers most often maintain the support of parliament and are the ones who obtain the passage of bills,
it is highly improbable that they would advise the sovereign to withhold assent. An exception is sometimes stated
to be if bills are not passed in good faith, though it is difficult to interpret what this circumstance
might constitute. Hence, in modern practice, royal assent is always granted; a refusal to do so would be appropriate
only in an emergency requiring the use of the monarch's reserve powers. Originally, legislative power was exercised
by the sovereign acting on the advice of the Curia Regis, or Royal Council, in which important magnates and clerics
participated and which evolved into parliament. In 1265, the Earl of Leicester irregularly called a full parliament without royal authorisation; the so-called Model Parliament of 1295 included bishops, abbots, earls, barons, two knights from each shire, and two burgesses from each borough among its members. The body eventually came to be divided into two
branches: bishops, abbots, earls, and barons formed the House of Lords, while the shire and borough representatives
formed the House of Commons. The King would seek the advice and consent of both houses before making any law. During
Henry VI's reign, it became regular practice for the two houses to originate legislation in the form of bills, which
would not become law unless the sovereign's assent was obtained, as the sovereign was, and still remains, the enactor
of laws. Hence, all acts include the clause "Be it enacted by the Queen's (King's) most Excellent Majesty, by and
with the advice and consent of the Lords Spiritual and Temporal, and Commons, in this present Parliament assembled,
and by the authority of the same, as follows...". The Parliament Acts 1911 and 1949 provide a second potential preamble
if the House of Lords were to be excluded from the process. The power of parliament to pass bills was often thwarted
by monarchs. Charles I dissolved parliament in 1629, after it passed motions critical of and bills seeking to restrict
his arbitrary exercise of power. During the eleven years of personal rule that followed, Charles performed legally
dubious actions, such as raising taxes without parliament's approval. After the English Civil War, it was accepted
that parliament should be summoned to meet regularly, but it was still commonplace for monarchs to refuse royal assent
to bills. In 1678, Charles II withheld his assent from a bill "for preserving the Peace of the Kingdom by raising
the Militia, and continuing them in Duty for Two and Forty Days," suggesting that he, not parliament, should control
the militia. On 11 March 1708, the last Stuart monarch, Anne, on the advice of her ministers, similarly withheld her assent from a bill for the settling of Militia in Scotland. No monarch has since withheld royal assent from a bill
passed by the British parliament. During the rule of the succeeding Hanoverian dynasty, power was gradually exercised
more by parliament and the government. The first Hanoverian monarch, George I, relied on his ministers to a greater
extent than did previous monarchs. Later Hanoverian monarchs attempted to restore royal control over legislation:
George III and George IV both openly opposed Catholic Emancipation and asserted that to grant assent to a Catholic
emancipation bill would violate the Coronation Oath, which required the sovereign to preserve and protect the established Church of England from Papal domination, and would grant rights to individuals in league with a foreign power that did not recognise their legitimacy. However, George IV reluctantly granted his assent upon the advice of his
ministers. Thus, as the concept of ministerial responsibility has evolved, the power to withhold royal assent has
fallen into disuse, both in the United Kingdom and in the other Commonwealth realms. Royal assent is the final stage
in the legislative process for acts of the Scottish parliament. The process is governed by sections 28, 32, and 33
of the Scotland Act 1998. After a bill has been passed, the Presiding Officer of the Scottish Parliament submits
it to the monarch for royal assent after a four-week period, during which the Advocate General for Scotland, the
Lord Advocate, the Attorney General or the Secretary of State for Scotland may refer the bill to the Supreme Court
of the United Kingdom (prior to 1 October 2009, the Judicial Committee of the Privy Council) for review of its legality.
Royal assent is signified by letters patent under the Great Seal of Scotland in the following form which is set out
in The Scottish Parliament (Letters Patent and Proclamations) Order 1999 (SI 1999/737) and of which notice is published
in the London, Edinburgh, and Belfast Gazettes. Measures, which were the means by which the National Assembly for
Wales passed legislation between 2006 and 2011, were assented to by the Queen by means of an Order in Council. Section
102 of the Government of Wales Act 2006 required the Clerk to the Assembly to present measures passed by the assembly
after a four-week period during which the Counsel General for Wales or the Attorney General could refer the proposed
measure to the Supreme Court for a decision as to whether the measure was within the assembly's legislative competence.
In the Channel Islands, the monarch directly grants royal assent by Order in Council. Assent is granted or refused on the advice
of the Lord Chancellor. A recent example when assent was refused (or, more correctly, when the Lord Chancellor declined
to present the law for assent) was in 2007, concerning reforms to the constitution of the Chief Pleas of Sark. (A
revised version of the proposed reforms was subsequently given assent.) In 2011, campaigners against a law that sought
to reduce the number of senators in the states of Jersey petitioned the Privy Council to advise the Queen to refuse
royal assent. An Order in Council of 13 July 2011 established new rules for the consideration of petitions against
granting royal assent. Royal assent is not sufficient to give legal effect to an Act of Tynwald. By ancient custom,
an act did not come into force until it had been promulgated at an open-air sitting of Tynwald, usually held on Tynwald
Hill at St John's on St John's Day (24 June), but, since the adoption of the Gregorian calendar in 1753, on 5 July
(or on the following Monday if 5 July is a Saturday or Sunday). Promulgation originally consisted of the reading
of the Act in English and Manx; but, after 1865 the reading of the title of the act and a summary of each section
were sufficient. This was reduced in 1895 to the titles and a memorandum of the object and purport of the act, and,
since 1988, only the short title and a summary of the long title have been read. Since 1993, the Sodor and Man Diocesan
Synod has had power to enact measures making provision "with respect to any matter concerning the Church of England
in the Island". If approved by Tynwald, a measure "shall have the force and effect of an Act of Tynwald upon the
Royal Assent thereto being announced to Tynwald". Between 1979 and 1993, the Synod had similar powers, but limited
to the extension to the Isle of Man of measures of the General Synod. Before 1994, royal assent was granted by Order
in Council, as for a bill, but the power to grant royal assent to measures has now been delegated to the lieutenant
governor. A Measure does not require promulgation. If the Governor General of Canada is unable to give assent, it
can be done by either the Deputy of the Governor General of Canada—the Chief Justice of Canada—or another justice
of the Supreme Court of Canada. It is not actually necessary for the governor general to sign a bill passed by a
legislature, the signature being merely an attestation. In each case, the parliament must be apprised of the granting
of assent before the bill is considered to have become law. Two methods are available: the sovereign's representatives
may grant assent in the presence of both houses of parliament; alternatively, each house may be notified separately,
usually by the speaker of that house. However, though both houses must be notified on the same day, notice to the
House of Commons while it is not in session may be given by way of publishing a special issue of the Journals of
the House of Commons, whereas the Senate must be sitting and the governor general's letter read aloud by the speaker.
While royal assent has not been withheld in the United Kingdom since 1708, it has often been withheld in British
colonies and former colonies by governors acting on royal instructions. In the United States Declaration of Independence,
colonists complained that George III "has refused his Assent to Laws, the most wholesome and necessary for the public
good [and] has forbidden his Governors to pass Laws of immediate and pressing importance, unless suspended in their
operation till his Assent should be obtained; and when so suspended, he has utterly neglected to attend to them."
Even after colonies such as Canada, Australia, New Zealand, the Union of South Africa, and Newfoundland were granted
responsible government, the British government continued to sometimes advise governors-general on the granting of
assent; assent was also occasionally reserved to allow the British government to examine a bill before advising the
governor-general. In Australia, a technical issue arose with the royal assent in both 1976 and 2001. In 1976, a bill
originating in the House of Representatives was mistakenly submitted to the Governor-General and assented to. However,
it was later discovered that it had not been passed by each house. The error arose because two bills of the same
title had originated from the house. The Governor-General revoked the first assent, before assenting to the bill
which had actually passed. The same procedure was followed to correct a similar error which arose in 2001. Special
procedures apply to legislation passed by Tynwald, the legislature of the Isle of Man. Before the lordship of the
Island was purchased by the British Crown in 1765 (the Revestment), the assent of the Lord of Mann to a bill was
signified by letter to the governor. After 1765, royal assent was at first signified by letter from the Secretary
of State to the governor; but, during the British Regency, the practice began of granting royal assent by Order in
Council, which continues to this day, though limited to exceptional cases since 1981. In Commonwealth realms other
than the UK, royal assent is granted or withheld either by the realm's sovereign or, more frequently, by the representative
of the sovereign, the governor-general. In federated realms, assent in each state, province, or territory is granted
or withheld by the representatives of the sovereign. In Australia, this is the governors of the states, administrators
of the territories, or the governor-general in the Australian Capital Territory. For Canada, this is the lieutenant
governors of the provinces. A lieutenant governor may defer assent to the governor general, and the governor general
may defer assent to federal bills to the sovereign. Since the Balfour Declaration of 1926 and the Statute of Westminster
1931, all the Commonwealth realms have been sovereign kingdoms, with the monarch and governors-general acting solely on
the advice of the local ministers who generally maintain the support of the legislature and are the ones who secure
the passage of bills. They, therefore, are unlikely to advise the sovereign or his or her representative to withhold
assent. The power to withhold the royal assent was exercised by Alberta's lieutenant governor, John C. Bowen, in
1937, in respect of three bills passed in the legislature dominated by William Aberhart's Social Credit party. Two
bills sought to put banks under the authority of the province, thereby interfering with the federal government's
powers. The third, the Accurate News and Information Bill, purported to force newspapers to print government rebuttals
to stories to which the provincial cabinet objected. The unconstitutionality of all three bills was later confirmed
by the Supreme Court of Canada and by the Judicial Committee of the Privy Council. In the United Kingdom, a bill
is presented for royal assent after it has passed all the required stages in both the House of Commons and the House
of Lords. Under the Parliament Acts 1911 and 1949, the House of Commons may, under certain circumstances, direct
that a bill be presented for assent despite lack of passage by the House of Lords. Officially, assent is granted
by the sovereign or by Lords Commissioners authorised to act by letters patent. It may be granted in parliament or
outside parliament; in the latter case, each house must be separately notified before the bill takes effect. The
Clerk of the Parliaments, an official of the House of Lords, traditionally states a formula in Anglo-Norman Law French,
indicating the sovereign's decision. The granting of royal assent to a supply bill is indicated with the words "La
Reyne remercie ses bons sujets, accepte leur benevolence, et ainsi le veult", translated as "The Queen thanks her
good subjects, accepts their bounty, and wills it so." For other public or private bills, the formula is simply "La
Reyne le veult" ("the Queen wills it"). For personal bills, the phrase is "Soit fait comme il est désiré" ("let it
be as it is desired"). The appropriate formula for withholding assent is the euphemistic "La Reyne s'avisera" ("the
Queen will consider it"). When the sovereign is male, Le Roy is substituted for La Reyne. Before the reign of Henry
VIII, the sovereign always granted his or her assent in person. The sovereign, wearing the Imperial State Crown,
would be seated on the throne in the Lords chamber, surrounded by heralds and members of the royal court—a scene
that nowadays is repeated only at the annual State Opening of Parliament. The Commons, led by their speaker, would
listen from the Bar of the Lords, just outside the chamber. The Clerk of the Parliaments presented the bills awaiting
assent to the monarch, save that supply bills were traditionally brought up by the speaker. The Clerk of the Crown,
standing on the sovereign's right, then read aloud the titles of the bills (in earlier times, the entire text of
the bills). The Clerk of the Parliaments, standing on the sovereign's left, responded by stating the appropriate
Norman French formula. A new device for granting assent was created during the reign of King Henry VIII. In 1542,
Henry sought to execute his fifth wife, Catherine Howard, whom he accused of committing adultery; the execution was
to be authorised not after a trial but by a bill of attainder, to which he would have to personally assent after
listening to the entire text. Henry decided that "the repetition of so grievous a Story and the recital of so infamous
a crime" in his presence "might reopen a Wound already closing in the Royal Bosom". Therefore, parliament inserted
a clause into the Act of Attainder, providing that assent granted by Commissioners "is and ever was and ever shall
be, as good" as assent granted by the sovereign personally. The procedure was used only five times during the 16th
century, but more often during the 17th and 18th centuries, especially when George III's health began to deteriorate.
Queen Victoria became the last monarch to personally grant assent in 1854. When granting assent by commission, the
sovereign authorises three or more (normally five) lords who are Privy Counsellors to grant assent in his or her
name. The Lords Commissioners, as the monarch's representatives are known, wear scarlet parliamentary robes and sit
on a bench between the throne and the Woolsack. The Lords Reading Clerk reads the commission aloud; the senior commissioner
then states, "My Lords, in obedience to Her Majesty's Commands, and by virtue of the Commission which has been now
read, We do declare and notify to you, the Lords Spiritual and Temporal and Commons in Parliament assembled, that
Her Majesty has given Her Royal Assent to the several Acts in the Commission mentioned." During the 1960s, the regular use of the ceremony of assenting by commission was discontinued; it is now employed only once a year, at the end of the annual parliamentary session. In 1960, the Gentleman Usher of the Black Rod arrived to summon the House of Commons during a heated debate
and several members protested against the disruption by refusing to attend the ceremony. The debacle was repeated
in 1965; this time, when the Speaker left the chair to go to the House of Lords, some members continued to make speeches.
As a result, the Royal Assent Act 1967 was passed, creating an additional form for the granting of royal assent.
As the attorney-general explained, "there has been a good deal of resentment not only at the loss of Parliamentary
time that has been involved but at the breaking of the thread of a possibly eloquent speech and the disruption of
a debate that may be caused." The granting of assent by the monarch in person, or by commission, is still possible,
but this third form is used on a day-to-day basis. Under the Royal Assent Act 1967, royal assent can be granted by
the sovereign in writing, by means of letters patent, that are presented to the presiding officer of each house of
parliament. Then, the presiding officer makes a formal, but simple statement to the house, acquainting each house
that royal assent has been granted to the acts mentioned. Thus, unlike the granting of royal assent by the monarch
in person or by Royal Commissioners, the method created by the Royal Assent Act 1967 does not require both houses
to meet jointly for the purpose of receiving the notice of royal assent. The standard text of the letters patent
is set out in The Crown Office (Forms and Proclamations Rules) Order 1992, with minor amendments in 2000. In practice
this remains the standard method, a fact that is belied by the wording of the letters patent for the appointment
of the Royal Commissioners and by the wording of the letters patent for the granting of royal assent in writing under
the 1967 Act ("... And forasmuch as We cannot at this time be present in the Higher House of Our said Parliament
being the accustomed place for giving Our Royal Assent..."). When the act is assented to by the sovereign in person,
or by empowered Royal Commissioners, royal assent is considered given at the moment when the assent is declared in
the presence of both houses jointly assembled. When the procedure created by the Royal Assent Act 1967 is followed,
assent is considered granted when the presiding officers of both houses, having received the letters patent from
the king or queen signifying the assent, have notified their respective house of the grant of royal assent. Thus,
if each presiding officer makes the announcement at a different time (for instance because one house is not sitting
on a certain date), assent is regarded as effective when the second announcement is made. This is important because,
under British Law, unless there is any provision to the contrary, an act takes effect on the date on which it receives
royal assent and that date is not regarded as being the date when the letters patent are signed, or when they are
delivered to the presiding officers of each house, but the date on which both houses have been formally acquainted
of the assent. Independently of the method used to signify royal assent, it is the responsibility of the Clerk of
the Parliaments, once the assent has been duly notified to both houses, not only to endorse the act in the name of
the monarch with the formal Norman French formula, but to certify that assent has been granted. The clerk signs one
authentic copy of the bill and inserts the date (in English) on which the assent was notified to the two houses after
the title of the act. When an act is published, the signature of the clerk is omitted, as is the Norman French formula,
should the endorsement have been made in writing. However, the date on which the assent was notified is printed in
brackets. In Australia, the formal ceremony of granting assent in parliament has not been regularly used since the
early 20th century. Now, the bill is sent to the governor-general's residence by the house in which it originated.
The governor-general then signs the bill, sending messages to the President of the Senate and the Speaker of the
House of Representatives, who notify their respective houses of the governor-general's action. A similar practice
is followed in New Zealand, where the governor-general has not personally granted the Royal Assent in parliament
since 1875. In Canada, the traditional ceremony for granting assent in parliament was regularly used until the 21st
century, long after it had been discontinued in the United Kingdom and other Commonwealth realms. One result, conceived
as part of a string of royal duties intended to demonstrate Canada's status as an independent kingdom, was that King
George VI personally assented to nine bills of the Canadian parliament during the 1939 royal tour of Canada—85 years
after his great-great-grandmother, Queen Victoria, had last granted royal assent personally in the United Kingdom. Under
the Royal Assent Act 2002, however, the alternative practice of granting assent in writing, with each house being
notified separately (the Speaker of the Senate or a representative reads to the senators the letters from the governor
general regarding the written declaration of Royal Assent), was brought into force. As the act also provides, royal assent is to be signified by the governor general or, more often, by a deputy (usually a Justice of the Supreme Court) at least twice each calendar year: for the first appropriation measure and for at least one other act, usually
the first non-appropriation measure passed. However, the act provides that a grant of royal assent is not rendered
invalid by a failure to employ the traditional ceremony where required. The Royal Assent ceremony takes place in
the Senate, as the sovereign is traditionally barred from the House of Commons. On the day of the event, the Speaker
of the Senate will read to the chamber a notice from the secretary to the governor general indicating when the viceroy
or a deputy thereof will arrive. The Senate thereafter cannot adjourn until after the ceremony. The speaker moves
to sit beside the throne, the Mace Bearer, with mace in hand, stands adjacent to him or her, and the governor general
enters to take the speaker's chair. The Usher of the Black Rod is then commanded by the speaker to summon the Members
of Parliament, who follow Black Rod back to the Senate, the Sergeant-at-Arms carrying the mace of the House of Commons.
In the Senate, those from the commons stand behind the bar, while Black Rod proceeds to stand next to the governor
general, who then nods his or her head to signify Royal Assent to the presented bills (which do not include appropriations
bills). Once the list of bills is complete, the Clerk of the Senate states: "in Her Majesty's name, His [or Her]
Excellency the Governor General [or the deputy] doth assent to these bills." If there are any appropriation bills
to receive Royal Assent, the Speaker of the House of Commons will read their titles and the Senate clerk repeats
them to the governor general, who nods his or her head to communicate Royal Assent. When these bills have all been
assented to, the Clerk of the Senate recites "in Her Majesty's name, His [or Her] Excellency the Governor General
[or the deputy] thanks her loyal subjects, accepts their benevolence and assents to these bills." The governor general
or his or her deputy then depart parliament. In Belgium, the sanction royale has the same legal effect as royal assent;
the Belgian constitution requires a theoretically possible refusal of royal sanction to be countersigned—as any other
act of the monarch—by a minister responsible before the House of Representatives. The monarch promulgates the law,
meaning that he or she formally orders that the law be officially published and executed. In 1990, when King Baudouin
advised his cabinet he could not, in conscience, sign a bill decriminalising abortion (a refusal patently not covered
by a responsible minister), the Council of Ministers, at the King's own request, declared Baudouin incapable of exercising
his powers. In accordance with the Belgian constitution, upon the declaration of the sovereign's incapacity, the
Council of Ministers assumed the powers of the head of state until parliament could rule on the King's incapacity
and appoint a regent. The bill was then assented to by all members of the Council of Ministers "on behalf of the
Belgian People". In a joint meeting, both houses of parliament declared the King capable of exercising his powers
again the next day. The constitution of Jordan grants its monarch the right to withhold assent to laws passed by
its parliament. Article 93 of that document gives the Jordanian sovereign six months to sign or veto any legislation
sent to him from the National Assembly; if he vetoes it within that timeframe, the assembly may override his veto
by a two-thirds vote of both houses; otherwise, the law does not go into effect (but it may be reconsidered in the
next session of the assembly). If the monarch fails to act within six months of the bill being presented to him,
it becomes law without his signature. In the Netherlands, after the House of Representatives has debated a proposal of law, it either approves
it and sends it to the Senate with the text "The Second Chamber of the States General sends the following approved
proposal of law to the First Chamber", or it rejects it and returns it to the government with the text "The Second
Chamber of the States General has rejected the accompanying proposal of law." If the upper house then approves the
law, it sends it back to the government with the text "To the King, The States General have accepted the proposal
of law as it is offered here." The government, consisting of the monarch and the ministers, will then usually approve
the proposal and the sovereign and one of the ministers signs the proposal with the addition of an enacting clause,
thereafter notifying the States General that "The King assents to the proposal." In exceptional circumstances, the government has declined to approve a law passed by parliament. In such a case, neither the monarch
nor a minister will sign the bill, notifying the States General that "The King will keep the proposal under advisement."
A law that has received royal assent will be published in the Staatsblad (the official gazette), with the original being kept in the
archives of the King's Offices. Articles 77–79 of the Norwegian Constitution specifically grant the monarch of Norway
the right to withhold royal assent from any bill passed by the Storting. Should the sovereign ever choose to exercise
this privilege, Article 79 provides a means by which his veto may be overridden: "If a Bill has been passed unaltered
by two sessions of the Storting, constituted after two separate successive elections and separated from each other
by at least two intervening sessions of the Storting, without a divergent Bill having been passed by any Storting
in the period between the first and last adoption, and it is then submitted to the King with a petition that His
Majesty shall not refuse his assent to a Bill which, after the most mature deliberation, the Storting considers to
be beneficial, it shall become law even if the Royal Assent is not accorded before the Storting goes into recess."
Title IV of the 1978 Spanish constitution vests the Consentimiento Real (royal assent) and the promulgation (publication) of laws in the monarch of Spain, while Title III, The Cortes Generales, Chapter 2, Drafting of Bills, outlines
the method by which bills are passed. According to Article 91, within fifteen days of passage of a bill by the Cortes
Generales, the sovereign shall give his or her assent and publish the new law. Article 92 invests the monarch with
the right to call for a referendum, on the advice of the president of the government (commonly referred to in English
as the prime minister) and the authorisation of the cortes. No provision within the constitution grants the monarch
an ability to veto legislation directly; however, no provision prohibits the sovereign from withholding royal assent,
which effectively constitutes a veto. When the Spanish media asked King Juan Carlos if he would endorse the bill
legalising same-sex marriages, he answered "Soy el Rey de España y no el de Bélgica" ("I am the King of Spain and
not that of Belgium")—a reference to King Baudouin I of Belgium, who had refused to sign the Belgian law legalising
abortion. The King gave royal assent to Law 13/2005 on 1 July 2005; the law was gazetted in the Boletín Oficial del
Estado on 2 July and came into effect on 3 July 2005. Likewise, in 2010, King Juan Carlos gave royal assent to a
law permitting abortion on demand. If the Spanish monarch ever refused in conscience to grant royal assent, a procedure
similar to the Belgian handling of King Baudouin's objection would not be possible under the current constitution.
If the sovereign were ever declared incapable of discharging royal authority, his or her powers would not be transferred
to the Cabinet, pending the parliamentary appointment of a regency. Instead, the constitution mandates the next person
of age in the line of succession would immediately become regent. Therefore, had Juan Carlos followed the Belgian
example in 2005 or 2010, a declaration of incapacity would have transferred power to Felipe, then the heir apparent.
Articles 41 and 68 of the constitution of Tonga empower the sovereign to withhold royal assent from bills adopted by the Legislative
Assembly. In 2010, the kingdom moved towards greater democracy, with King George Tupou V saying that he would be
guided by his prime minister in the exercising of his powers. Nonetheless, this does not preclude an independent
royal decision to exercise a right of veto. In November 2011, the assembly adopted an Arms and Ammunitions (Amendment)
Bill, which reduced the possible criminal sentences for the illicit possession of firearms. The bill was adopted
by ten votes to eight. Two members of the assembly had recently been charged with the illicit possession of firearms.
The Prime Minister, Lord Tuʻivakanō, voted in favour of the amendment. Members of the opposition denounced the bill
and asked the King to veto it, which he did in December.
In mathematics, a group is an algebraic structure consisting of a set of elements equipped with an operation that combines
any two elements to form a third element. The operation satisfies four conditions called the group axioms, namely
closure, associativity, identity and invertibility. One of the most familiar examples of a group is the set of integers
together with the addition operation, but the abstract formalization of the group axioms, detached as it is from
the concrete nature of any particular group and its operation, applies much more widely. It allows entities with
highly diverse mathematical origins in abstract algebra and beyond to be handled in a flexible way while retaining
their essential structural aspects. The ubiquity of groups in numerous areas within and outside mathematics makes
them a central organizing principle of contemporary mathematics. Groups share a fundamental kinship with the notion
of symmetry. For example, a symmetry group encodes symmetry features of a geometrical object: the group consists
of the set of transformations that leave the object unchanged and the operation of combining two such transformations
by performing one after the other. Lie groups are the symmetry groups used in the Standard Model of particle physics;
point groups are used to help understand symmetry phenomena in molecular chemistry; and Poincaré groups can express
the physical symmetry underlying special relativity. The concept of a group arose from the study of polynomial equations,
starting with Évariste Galois in the 1830s. After contributions from other fields such as number theory and geometry,
the group notion was generalized and firmly established around 1870. Modern group theory—an active mathematical discipline—studies
groups in their own right. To explore groups, mathematicians have devised various notions to break groups into
smaller, more easily understood pieces, such as subgroups, quotient groups and simple groups. In addition to their
abstract properties, group theorists also study the different ways in which a group can be expressed concretely (its
group representations), both from a theoretical and a computational point of view. A theory has been developed for
finite groups, which culminated with the classification of finite simple groups announced in 1983. Since the
mid-1980s, geometric group theory, which studies finitely generated groups as geometric objects, has become a particularly
active area in group theory. The set G is called the underlying set of the group (G, •). Often the group's underlying
set G is used as a short name for the group (G, •). Along the same lines, shorthand expressions such as "a subset
of the group G" or "an element of group G" are used when what is actually meant is "a subset of the underlying set
G of the group (G, •)" or "an element of the underlying set G of the group (G, •)". Usually, it is clear from the
context whether a symbol like G refers to a group or to an underlying set. These symmetries are represented by functions.
Each of these functions sends a point in the square to the corresponding point under the symmetry. For example, r1
sends a point to its rotation 90° clockwise around the square's center, and fh sends a point to its reflection across
the square's vertical middle line. Composing two of these symmetry functions gives another symmetry function. These
symmetries determine a group called the dihedral group of degree 4 and denoted D4. The underlying set of the group
is the above set of symmetry functions, and the group operation is function composition. Two symmetries are combined
by composing them as functions, that is, applying the first one to the square, and the second one to the result of
the first application. The result of performing first a and then b is written symbolically from right to left as b • a ("apply the symmetry b after performing the symmetry a").
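The symmetries described above are easy to verify by machine. A minimal sketch, representing each symmetry as a permutation of the square's four corners (the corner numbering here is an arbitrary choice for illustration), generates the whole group from a rotation and a reflection and checks closure:

```python
from itertools import product

# Symmetries of the square as permutations of its corners 0..3.
# p[i] is the corner to which corner i is sent.
ID = (0, 1, 2, 3)
R1 = (1, 2, 3, 0)          # rotation by 90 degrees
FH = (1, 0, 3, 2)          # reflection across the vertical middle line

def compose(b, a):
    """Apply a first, then b -- written b • a, from right to left."""
    return tuple(b[a[i]] for i in range(4))

# Generate the whole group from the two generators by repeated composition.
group = {ID}
frontier = {R1, FH}
while frontier:
    group |= frontier
    frontier = {compose(b, a) for b, a in product(group, repeat=2)} - group

print(len(group))  # the dihedral group D4 has 8 elements

# Closure: composing any two symmetries yields another symmetry.
assert all(compose(b, a) in group for b, a in product(group, repeat=2))
```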
The modern concept of an abstract group developed out of several fields of mathematics. The original motivation for
group theory was the quest for solutions of polynomial equations of degree higher than 4. The 19th-century French
mathematician Évariste Galois, extending prior work of Paolo Ruffini and Joseph-Louis Lagrange, gave a criterion
for the solvability of a particular polynomial equation in terms of the symmetry group of its roots (solutions).
The elements of such a Galois group correspond to certain permutations of the roots. At first, Galois' ideas were
rejected by his contemporaries, and published only posthumously. More general permutation groups were investigated
in particular by Augustin Louis Cauchy. Arthur Cayley's On the theory of groups, as depending on the symbolic equation θⁿ = 1 (1854) gives the first abstract definition of a finite group. The convergence of these various sources into
a uniform theory of groups started with Camille Jordan's Traité des substitutions et des équations algébriques (1870).
Walther von Dyck (1882) introduced the idea of specifying a group by means of generators and relations, and was also
the first to give an axiomatic definition of an "abstract group", in the terminology of the time. In the 20th century, groups gained wide recognition through the pioneering work of Ferdinand Georg Frobenius and William Burnside,
who worked on representation theory of finite groups, Richard Brauer's modular representation theory and Issai Schur's
papers. The theory of Lie groups, and more generally locally compact groups was studied by Hermann Weyl, Élie Cartan
and many others. Its algebraic counterpart, the theory of algebraic groups, was first shaped by Claude Chevalley
(from the late 1930s) and later by the work of Armand Borel and Jacques Tits. The University of Chicago's 1960–61
Group Theory Year brought together group theorists such as Daniel Gorenstein, John G. Thompson and Walter Feit, laying
the foundation of a collaboration that, with input from numerous other mathematicians, classified all finite simple
groups in 1982. This project exceeded previous mathematical endeavours by its sheer size, in both length of proof
and number of researchers. Research is ongoing to simplify the proof of this classification. These days, group theory
is still a highly active mathematical branch, impacting many other fields. To understand groups beyond the level of mere symbolic manipulations as above, more structural concepts have to be employed. There is a conceptual
principle underlying all of the following notions: to take advantage of the structure offered by groups (which sets,
being "structureless", do not have), constructions related to groups have to be compatible with the group operation.
This compatibility manifests itself in the following notions in various ways. For example, groups can be related
to each other via functions called group homomorphisms. By the mentioned principle, they are required to respect
the group structures in a precise sense. The structure of groups can also be understood by breaking them into pieces
called subgroups and quotient groups. The principle of "preserving structures"—a recurring topic in mathematics throughout—is
an instance of working in a category, in this case the category of groups. Two groups G and H are called isomorphic
if there exist group homomorphisms a: G → H and b: H → G, such that applying the two functions one after another
in each of the two possible orders gives the identity functions of G and H. That is, a(b(h)) = h and b(a(g)) = g
for any g in G and h in H. From an abstract point of view, isomorphic groups carry the same information. For example,
proving that g • g = 1G for some element g of G is equivalent to proving that a(g) ∗ a(g) = 1H, because applying
a to the first equality yields the second, and applying b to the second gives back the first. In the example above,
the identity and the rotations constitute a subgroup R = {id, r1, r2, r3}, highlighted in red in the group table
above: any two rotations composed are still a rotation, and a rotation can be undone by (i.e. is inverse to) the
complementary rotations 270° for 90°, 180° for 180°, and 90° for 270° (note that rotation in the opposite direction
is not defined). The subgroup test gives a necessary and sufficient condition for a nonempty subset H of a group G to be a subgroup:
it suffices to check that g⁻¹h ∈ H for all elements g, h ∈ H. Knowing the subgroups is important in understanding
the group as a whole. In many situations it is desirable to consider two group elements the same if they differ
by an element of a given subgroup. For example, in D4 above, once a reflection is performed, the square never gets
back to the r2 configuration by just applying the rotation operations (and no further reflections), i.e. the rotation
operations are irrelevant to the question whether a reflection has been performed. Cosets are used to formalize this
insight: a subgroup H defines left and right cosets, which can be thought of as translations of H by arbitrary group
elements g. In symbolic terms, the left and right cosets of H containing g are gH = {g • h : h ∈ H} and Hg = {h • g : h ∈ H}, respectively. When H = N is a normal subgroup, the left and right cosets agree, and the set of cosets G / N inherits a group operation (sometimes called coset multiplication, or coset addition) from the original group G: (gN) • (hN) = (gh)N for all g and h in G. This definition is motivated by the idea (itself an instance of general structural considerations outlined above) that the map G → G / N that associates to any element g its coset gN be a group homomorphism, or by general abstract considerations called universal properties. The coset eN = N serves as the identity in this group, and the inverse of gN in the quotient group is (gN)⁻¹ = (g⁻¹)N. Quotient groups and subgroups together form a way of
describing every group by its presentation: any group is the quotient of the free group over the generators of the
group, quotiented by the subgroup of relations. The dihedral group D4, for example, can be generated by two elements
r and f (for example, r = r1, the right rotation and f = fv the vertical (or any other) reflection), which means
that every symmetry of the square is a finite composition of these two symmetries or their inverses. Together with the relations r⁴ = f² = (r • f)² = 1, the group is completely described. Sub- and quotient groups are related in the following way: a subset H of G can be seen as an injective
map H → G, i.e. any element of the target has at most one element that maps to it. The counterpart to injective maps is surjective maps (every element of the target is mapped onto), such as the canonical map G → G / N. Interpreting
subgroup and quotients in light of these homomorphisms emphasizes the structural concept inherent to these definitions
alluded to in the introduction. In general, homomorphisms are neither injective nor surjective. Kernel and image
of group homomorphisms and the first isomorphism theorem address this phenomenon. Groups are also applied in many
other mathematical areas. Mathematical objects are often examined by associating groups to them and studying the
properties of the corresponding groups. For example, Henri Poincaré founded what is now called algebraic topology
by introducing the fundamental group. By means of this connection, topological properties such as proximity and continuity
translate into properties of groups. For example, elements of the fundamental group are represented by loops.
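The way loops detect a hole can be made computational: the winding number of a discretized loop counts how often it encircles a puncture. A sketch, placing the puncture at the origin (an assumption for illustration) and accumulating normalized angle increments:

```python
import math

def winding_number(points):
    """Winding number of a closed polygonal loop around the origin,
    computed by accumulating normalized angle increments."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        d = math.atan2(y1, x1) - math.atan2(y0, x0)
        # wrap the increment into (-pi, pi]
        while d <= -math.pi:
            d += 2 * math.pi
        while d > math.pi:
            d -= 2 * math.pi
        total += d
    return round(total / (2 * math.pi))

ts = [2 * math.pi * k / 100 for k in range(100)]

# A loop around the puncture winds once ...
circle = [(math.cos(t), math.sin(t)) for t in ts]
assert winding_number(circle) == 1

# ... while a loop not enclosing it is null-homotopic (winding number 0).
far = [(3 + math.cos(t), math.sin(t)) for t in ts]
assert winding_number(far) == 0
```

The winding number is exactly the integer classifying a loop's class in the infinite cyclic fundamental group of the punctured plane.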
The second image at the right shows some loops in a plane minus a point. The blue loop is considered null-homotopic
(and thus irrelevant), because it can be continuously shrunk to a point. The presence of the hole prevents the orange
loop from being shrunk to a point. The fundamental group of the plane with a point deleted turns out to be infinite
cyclic, generated by the orange loop (or any other loop winding once around the hole). This way, the fundamental
group detects the hole. In modular arithmetic, two integers are added and then the sum is divided by a positive integer
called the modulus. The result of modular addition is the remainder of that division. For any modulus, n, the set
of integers from 0 to n − 1 forms a group under modular addition: the inverse of any element a is n − a, and 0 is
the identity element. This is familiar from the addition of hours on the face of a clock: if the hour hand is on
9 and is advanced 4 hours, it ends up on 1, as shown at the right. This is expressed by saying that 9 + 4 equals 1 "modulo 12" or, in symbols, 9 + 4 ≡ 1 (mod 12). For any prime number p, there is also the multiplicative group of integers modulo p.
Its elements are the integers 1 to p − 1. The group operation is multiplication modulo p. That is, the usual product
is divided by p and the remainder of this division is the result of modular multiplication. For example, if p = 5,
there are four group elements 1, 2, 3, 4. In this group, 4 · 4 = 1, because the usual product 16 is equivalent to 1, for 5 divides 16 − 1 = 15, denoted 16 ≡ 1 (mod 5). In the groups Z/nZ introduced above,
the element 1 is primitive, so these groups are cyclic. Indeed, each element is expressible as a sum all of whose
terms are 1. Any cyclic group with n elements is isomorphic to this group. A second example for cyclic groups is
the group of n-th complex roots of unity, given by complex numbers z satisfying zⁿ = 1. These numbers can be visualized
as the vertices on a regular n-gon, as shown in blue at the right for n = 6. The group operation is multiplication
of complex numbers. In the picture, multiplying with z corresponds to a counter-clockwise rotation by 60°. Using
some field theory, the group Fp× can be shown to be cyclic: for example, if p = 5, 3 is a generator since 3¹ = 3, 3² = 9 ≡ 4, 3³ ≡ 2, and 3⁴ ≡ 1. Symmetry groups are groups consisting of symmetries of given mathematical objects—be
they of geometric nature, such as the introductory symmetry group of the square, or of algebraic nature, such as
polynomial equations and their solutions. Conceptually, group theory can be thought of as the study of symmetry.
Symmetries in mathematics greatly simplify the study of geometrical or analytical objects. A group is said to act
on another mathematical object X if every group element performs some operation on X compatibly to the group law.
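A concrete instance of a group action, sketched here with plane rotations (choosing 2×2 rotation matrices, i.e. SO(2), for illustration): acting by one rotation and then another must equal acting by their composite, and each rotation is a genuine symmetry in that it preserves distances.

```python
import math

def rot(theta):
    """2x2 rotation matrix as a nested tuple (a sketch of SO(2))."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def act(A, v):
    """The action of a matrix A on a point v of the plane."""
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

a, b = 0.7, 1.9
v = (2.0, -1.0)

# Compatibility with the group law: acting by b after a equals
# acting by the composite rotation a + b.
lhs = act(rot(b), act(rot(a), v))
rhs = act(rot(a + b), v)
assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs))

# The same fact at the level of matrices: R(b) R(a) = R(a + b).
R_comp = matmul(rot(b), rot(a))
R_sum = rot(a + b)
assert all(abs(R_comp[i][j] - R_sum[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# Rotations are symmetries: they preserve distances to the origin.
assert abs(math.hypot(*act(rot(a), v)) - math.hypot(*v)) < 1e-12
```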
In the rightmost example below, an element of order 7 of the (2,3,7) triangle group acts on the tiling by permuting
the highlighted warped triangles (and the other ones, too). By a group action, the group pattern is connected to
the structure of the object being acted on. Likewise, group theory helps predict the changes in physical properties
that occur when a material undergoes a phase transition, for example, from a cubic to a tetragonal crystalline form.
An example is ferroelectric materials, where the change from a paraelectric to a ferroelectric state occurs at the
Curie temperature and is related to a change from the high-symmetry paraelectric state to the lower symmetry ferroelectric
state, accompanied by a so-called soft phonon mode, a vibrational lattice mode that goes to zero frequency at the
transition. Finite symmetry groups such as the Mathieu groups are used in coding theory, which is in turn applied
in error correction of transmitted data, and in CD players. Another application is differential Galois theory, which
characterizes functions having antiderivatives of a prescribed form, giving group-theoretic criteria for when solutions
of certain differential equations are well-behaved. Geometric properties that remain stable under group actions
are investigated in (geometric) invariant theory. Matrix groups consist of matrices together with matrix multiplication.
The general linear group GL(n, R) consists of all invertible n-by-n matrices with real entries. Its subgroups are
referred to as matrix groups or linear groups. The dihedral group example mentioned above can be viewed as a (very
small) matrix group. Another important matrix group is the special orthogonal group SO(n). It describes all possible
rotations in n dimensions. Via Euler angles, rotation matrices are used in computer graphics. Galois groups were developed to help solve polynomial equations by capturing their symmetry features. For example, the solutions of the quadratic equation ax² + bx + c = 0 are given by x = (−b ± √(b² − 4ac)) / 2a. Exchanging "+" and "−" in the expression, i.e. permuting the two solutions of the equation can be viewed as a (very simple) group operation.
Similar formulae are known for cubic and quartic equations, but do not exist in general for degree 5 and higher.
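The invariance underlying this idea can be checked numerically: expressions symmetric in the two roots of a quadratic are unchanged by the swap, and they recover the coefficients (Vieta's formulas). A sketch with arbitrarily chosen coefficients:

```python
import cmath

# Roots of a x^2 + b x + c = 0 via the quadratic formula
# (coefficients chosen arbitrarily for illustration).
a, b, c = 1, -3, 7
disc = cmath.sqrt(b * b - 4 * a * c)
x_plus = (-b + disc) / (2 * a)
x_minus = (-b - disc) / (2 * a)

# The Galois group here just swaps the two roots. Symmetric expressions
# in the roots are left unchanged by that swap, and indeed they recover
# the coefficients: x1 + x2 = -b/a and x1 * x2 = c/a.
for x1, x2 in [(x_plus, x_minus), (x_minus, x_plus)]:
    assert abs((x1 + x2) - (-b / a)) < 1e-12
    assert abs((x1 * x2) - (c / a)) < 1e-12
print("sum and product of the roots are invariant under the swap")
```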
Abstract properties of Galois groups associated with polynomials (in particular their solvability) give a criterion
for polynomials that have all their solutions expressible by radicals, i.e. solutions expressible using solely addition,
multiplication, and roots similar to the formula above. A group is called finite if it has a finite number of elements.
The number of elements is called the order of the group. An important class is the symmetric groups SN, the groups
of permutations of N letters. For example, the symmetric group on 3 letters S3 is the group consisting of all possible
orderings of the three letters ABC, i.e. contains the elements ABC, ACB, ..., up to CBA, in total 6 (or 3 factorial)
elements. This class is fundamental insofar as any finite group can be expressed as a subgroup of a symmetric group
SN for a suitable integer N (Cayley's theorem). Parallel to the group of symmetries of the square above, S3 can also
be interpreted as the group of symmetries of an equilateral triangle. Mathematicians often strive for a complete
classification (or list) of a mathematical notion. In the context of finite groups, this aim leads to difficult mathematics.
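The symmetric group S3 described above is small enough to enumerate outright; a minimal sketch that lists its six elements, checks closure under composition, and shows it is not abelian:

```python
from itertools import permutations

# The symmetric group S3: all orderings of the letters A, B, C.
elements = [''.join(p) for p in permutations('ABC')]
assert len(elements) == 6  # 3 factorial

# Treat each ordering as a function on the letters and compose two of them.
def compose(g, h):
    """Apply h first, then g (both given as orderings of 'ABC')."""
    pos = {letter: i for i, letter in enumerate('ABC')}
    return ''.join(g[pos[letter]] for letter in h)

# Closure under composition ...
assert all(compose(g, h) in elements for g in elements for h in elements)
# ... but S3 is not abelian: the order of composition matters.
assert compose('BAC', 'ACB') != compose('ACB', 'BAC')
```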
According to Lagrange's theorem, finite groups of order p, a prime number, are necessarily cyclic (abelian) groups Zp. Groups of order p² can also be shown to be abelian, a statement which does not generalize to order p³, as the non-abelian group D4 of order 8 = 2³ above shows. Computer algebra systems can be used to list small groups, but
there is no classification of all finite groups. An intermediate step is the classification of finite simple groups. A nontrivial group is called simple if its only normal subgroups are the trivial group and the group itself. The Jordan–Hölder theorem exhibits finite simple groups as the building blocks for all finite groups.
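The consequence of Lagrange's theorem noted above, that a group of prime order is cyclic, can be checked directly: every non-identity element generates the whole group. A sketch for Z/7Z (the prime 7 is an arbitrary choice):

```python
# In a group of prime order, every non-identity element generates the
# whole group (a consequence of Lagrange's theorem), checked for Z/7Z.
p = 7
for g in range(1, p):
    generated = {(g * k) % p for k in range(p)}  # 0, g, 2g, ... mod p
    assert generated == set(range(p))
print("every non-identity element of Z/7Z is a generator")
```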
Listing all finite simple groups was a major achievement in contemporary group theory. 1998 Fields Medal winner Richard
Borcherds succeeded in proving the monstrous moonshine conjectures, a surprising and deep relation between the largest
finite simple sporadic group—the "monster group"—and certain modular functions, a piece of classical complex analysis,
and string theory, a theory supposed to unify the description of many physical phenomena. Some topological spaces
may be endowed with a group law. In order for the group law and the topology to interweave well, the group operations
must be continuous functions, that is, g • h and g⁻¹ must not vary wildly if g and h vary only a little. Such groups
are called topological groups, and they are the group objects in the category of topological spaces. The most basic
examples are the reals R under addition, (R ∖ {0}, ·), and similarly with any other topological field such as the
complex numbers or p-adic numbers. All of these groups are locally compact, so they have Haar measures and can be
studied via harmonic analysis. The former offer an abstract formalism of invariant integrals. Invariance means, in the case of real numbers for example, that ∫ f(x + c) dx = ∫ f(x) dx for any constant c. Matrix groups over these fields fall under this regime,
as do adele rings and adelic algebraic groups, which are basic to number theory. Galois groups of infinite field
extensions such as the absolute Galois group can also be equipped with a topology, the so-called Krull topology,
which in turn is central to generalize the above sketched connection of fields and groups to infinite field extensions.
An advanced generalization of this idea, adapted to the needs of algebraic geometry, is the étale fundamental group.
Lie groups are of fundamental importance in modern physics: Noether's theorem links continuous symmetries to conserved
quantities. Rotation, as well as translations in space and time are basic symmetries of the laws of mechanics. They
can, for instance, be used to construct simple models—imposing, say, axial symmetry on a situation will typically
lead to significant simplification in the equations one needs to solve to provide a physical description. Another
example are the Lorentz transformations, which relate measurements of time and velocity of two observers in motion
relative to each other. They can be deduced in a purely group-theoretical way, by expressing the transformations
as a rotational symmetry of Minkowski space. The latter serves—in the absence of significant gravitation—as a model
of spacetime in special relativity. The full symmetry group of Minkowski space, i.e. including translations, is
known as the Poincaré group. By the above, it plays a pivotal role in special relativity and, by implication, for
quantum field theories. Symmetries that vary with location are central to the modern description of physical interactions
with the help of gauge theory. In abstract algebra, more general structures are defined by relaxing some of the axioms
defining a group. For example, if the requirement that every element has an inverse is eliminated, the resulting
algebraic structure is called a monoid. The natural numbers N (including 0) under addition form a monoid, as do the
nonzero integers under multiplication (Z ∖ {0}, ·), see above. There is a general method to formally add inverses to elements of any (abelian) monoid, much the same way as (Q ∖ {0}, ·) is derived from (Z ∖ {0}, ·), known as the
Grothendieck group. Groupoids are similar to groups except that the composition a • b need not be defined for all
a and b. They arise in the study of more complicated forms of symmetry, often in topological and analytical structures,
such as the fundamental groupoid or stacks. Finally, it is possible to generalize any of these concepts by replacing
the binary operation with an arbitrary n-ary one (i.e. an operation taking n arguments). With the proper generalization
of the group axioms this gives rise to an n-ary group. The table gives a list of several structures generalizing
groups.
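The Grothendieck construction mentioned above can be sketched in a few lines: formal differences a − b of natural numbers, normalized so each class has a canonical representative, turn the monoid (N, +), which lacks inverses, into the integers.

```python
# Grothendieck group of the monoid (N, +): formal differences a - b,
# normalized so each class has a canonical representative. This turns
# the naturals, which lack inverses, into (a copy of) the integers.
def normalize(a, b):
    """Canonical representative of the class of the pair (a, b), read a - b."""
    m = min(a, b)
    return (a - m, b - m)

def add(x, y):
    return normalize(x[0] + y[0], x[1] + y[1])

def neg(x):
    """Inverses now exist: the inverse of a - b is b - a."""
    return (x[1], x[0])

zero = (0, 0)
three = normalize(5, 2)        # the class of 5 - 2
minus_three = neg(three)
assert add(three, minus_three) == zero
assert add(normalize(1, 0), normalize(2, 0)) == normalize(3, 0)
```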
The Central African Republic (CAR; Sango: Ködörösêse tî Bêafrîka; French: République centrafricaine pronounced: [ʁepyblik
sɑ̃tʁafʁikɛn], or Centrafrique [sɑ̃tʀafʁik]) is a landlocked country in Central Africa. It is bordered by Chad to
the north, Sudan to the northeast, South Sudan to the east, the Democratic Republic of the Congo and the Republic
of the Congo to the south and Cameroon to the west. The CAR covers a land area of about 620,000 square kilometres
(240,000 sq mi) and had an estimated population of around 4.7 million as of 2014. What is today the Central
African Republic has been inhabited for millennia; however, the country's current borders were established by France,
which ruled the country as a colony starting in the late 19th century. After gaining independence from France in
1960, the Central African Republic was ruled by a series of autocratic leaders; by the 1990s, calls for democracy
led to the first multi-party democratic elections in 1993. Ange-Félix Patassé became president, but was later removed
by General François Bozizé in the 2003 coup. The Central African Republic Bush War began in 2004 and, despite a peace
treaty in 2007 and another in 2011, fighting broke out between various factions in December 2012, leading to ethnic
and religious cleansing of the Muslim minority and massive population displacement in 2013 and 2014. Approximately
10,000 years ago, desertification forced hunter-gatherer societies south into the Sahel regions of northern Central
Africa, where some groups settled and began farming as part of the Neolithic Revolution. Initial farming of white
yam progressed into millet and sorghum, and before 3000 BC the domestication of African oil palm improved the groups'
nutrition and allowed for expansion of the local populations. Bananas arrived in the region and added an important
source of carbohydrates to the diet; they were also used in the production of alcoholic beverages. This Agricultural
Revolution, combined with a "Fish-stew Revolution", in which fishing began to take place, and the use of boats, allowed
for the transportation of goods. Products were often moved in ceramic pots, which are the first known examples of
artistic expression from the region's inhabitants. During the 16th and 17th centuries slave traders began to raid
the region as part of the expansion of the Saharan and Nile River slave routes. Their captives were enslaved and shipped to the Mediterranean coast, Europe, Arabia, the Western Hemisphere, or to the slave ports and factories along the coasts of West and North Africa, or south along the Ubangi and Congo rivers. In the mid 19th century, the Bobangi people became major
slave traders and sold their captives to the Americas using the Ubangi river to reach the coast. During the 18th
century Bandia-Nzakara peoples established the Bangassou Kingdom along the Ubangi River. In 1920 French Equatorial
Africa was established and Ubangi-Shari was administered from Brazzaville. During the 1920s and 1930s the French
introduced a policy of mandatory cotton cultivation, a network of roads was built, attempts were made to combat sleeping
sickness and Protestant missions were established to spread Christianity. New forms of forced labor were also introduced
and a large number of Ubangians were sent to work on the Congo-Ocean Railway. Many of these forced laborers died of exhaustion or illness; the poor conditions claimed between 20% and 25% of the 127,000 workers. In September
1940, during the Second World War, pro-Gaullist French officers took control of Ubangi-Shari and General Leclerc
established his headquarters for the Free French Forces in Bangui. In 1946 Barthélémy Boganda was elected with 9,000
votes to the French National Assembly, becoming the first representative for CAR in the French government. Boganda
maintained a political stance against racism and the colonial regime but gradually became disheartened with the French
political system and returned to CAR to establish the Movement for the Social Evolution of Black Africa (MESAN) in
1950. In the Ubangi-Shari Territorial Assembly election in 1957, MESAN captured 347,000 out of the total 356,000
votes, and won every legislative seat, which led to Boganda being elected president of the Grand Council of French
Equatorial Africa and vice-president of the Ubangi-Shari Government Council. Within a year, he declared the establishment
of the Central African Republic and served as the country's first prime minister. MESAN continued to exist, but its
role was limited. After Boganda's death in a plane crash on 29 March 1959, his cousin, David Dacko, took control
of MESAN and became the country's first president after the CAR had formally received independence from France. Dacko
threw out his political rivals, including former Prime Minister and Mouvement d'évolution démocratique de l'Afrique centrale (MEDAC) leader Abel Goumba, whom he forced into exile in France. With all opposition parties suppressed
by November 1962, Dacko declared MESAN as the official party of the state. In April 1979, young students protested
against Bokassa's decree that all school attendees would need to buy uniforms from a company owned by one of his
wives. The government violently suppressed the protests, killing 100 children and teenagers. Bokassa himself may
have been personally involved in some of the killings. In September 1979, France overthrew Bokassa and "restored"
Dacko to power (subsequently restoring the name of the country to the Central African Republic). Dacko, in turn,
was again overthrown in a coup by General André Kolingba on 1 September 1981. By 1990, inspired by the fall of the
Berlin Wall, a pro-democracy movement arose. Pressure from the United States, France, and from a group of locally
represented countries and agencies called GIBAFOR (France, the USA, Germany, Japan, the EU, the World Bank, and the
UN) finally led Kolingba to agree, in principle, to hold free elections in October 1992 with help from the UN Office
of Electoral Affairs. After suspending the results of the elections on the pretext of alleged irregularities in order to hold on to power, President Kolingba came under intense pressure from GIBAFOR to establish a "Conseil
National Politique Provisoire de la République" (Provisional National Political Council, CNPPR) and to set up a "Mixed
Electoral Commission", which included representatives from all political parties. When a second round of elections was finally held in 1993, again with the help of the international community coordinated by GIBAFOR,
Ange-Félix Patassé won in the second round of voting with 53% of the vote while Goumba won 45.6%. Patassé's party,
the Mouvement pour la Libération du Peuple Centrafricain (MLPC) or Movement for the Liberation of the Central African
People, gained a simple but not an absolute majority of seats in parliament, which meant Patassé's party required
coalition partners. Patassé purged many of the Kolingba elements from the government and Kolingba
supporters accused Patassé's government of conducting a "witch hunt" against the Yakoma. A new constitution was approved
on 28 December 1994 but had little impact on the country's politics. In 1996–1997, reflecting steadily decreasing
public confidence in the government's erratic behaviour, three mutinies against Patassé's administration were accompanied
by widespread destruction of property and heightened ethnic tension. During this time (1996) the Peace Corps evacuated
all its volunteers to neighboring Cameroon. To date, the Peace Corps has not returned to the Central African Republic.
The Bangui Agreements, signed in January 1997, provided for the deployment of an inter-African military mission,
to Central African Republic and re-entry of ex-mutineers into the government on 7 April 1997. The inter-African military
mission was later replaced by a U.N. peacekeeping force (MINURCA). In the aftermath of the failed coup, militias
loyal to Patassé sought revenge against rebels in many neighborhoods of Bangui and incited unrest including the murder
of many political opponents. Eventually, Patassé came to suspect that General François Bozizé was involved in another
coup attempt against him, which led Bozizé to flee with loyal troops to Chad. In March 2003, Bozizé launched a surprise
attack against Patassé, who was out of the country. Libyan troops and some 1,000 soldiers of Bemba's Congolese rebel
organization failed to stop the rebels and Bozizé's forces succeeded in overthrowing Patassé. In
2004 the Central African Republic Bush War began as forces opposed to Bozizé took up arms against his government.
In May 2005 Bozizé won a presidential election that excluded Patassé and in 2006 fighting continued between the government
and the rebels. In November 2006, Bozizé's government requested French military support to help them repel rebels
who had taken control of towns in the country's northern regions. Though the initially public details of the agreement
pertained to logistics and intelligence, the French assistance eventually included strikes by Mirage jets against
rebel positions. The Syrte Agreement in February and the Birao Peace Agreement in April 2007 called for a cessation
of hostilities, the billeting of FDPC fighters and their integration with FACA, the liberation of political prisoners,
integration of FDPC into government, an amnesty for the UFDR, its recognition as a political party, and the integration
of its fighters into the national army. Several groups continued to fight but other groups signed on to the agreement,
or similar agreements with the government (e.g. UFR on 15 December 2008). The only major group not to sign an agreement
at the time was the CPJP, which continued its activities and signed a peace agreement with the government on 25 August
2012. Michel Djotodia took over as president and in May 2013 Central African Republic's Prime Minister Nicolas Tiangaye
requested a UN peacekeeping force from the UN Security Council and on 31 May former President Bozizé was indicted
for crimes against humanity and incitement of genocide. The security situation did not improve during June–August
2013 and there were reports of over 200,000 internally displaced persons (IDPs) as well as human rights abuses and
renewed fighting between Séléka and Bozizé supporters. In the southwest, the Dzanga-Sangha National Park is located
in a rain forest area. The country is noted for its population of forest elephants and western lowland gorillas.
In the north, the Manovo-Gounda St Floris National Park is well-populated with wildlife, including leopards, lions,
cheetahs and rhinos, and the Bamingui-Bangoran National Park is located in the northeast of CAR. The parks have been
seriously affected by the activities of poachers, particularly those from Sudan, over the past two decades. There are many missionary groups operating in the country, including Lutherans, Baptists, Catholics, Grace
Brethren, and Jehovah's Witnesses. While these missionaries are predominantly from the United States, France, Italy,
and Spain, many are also from Nigeria, the Democratic Republic of the Congo, and other African countries. Large numbers
of missionaries left the country when fighting broke out between rebel and government forces in 2002–2003, but many
of them have now returned to continue their work. In 2006, due to ongoing violence, over 50,000 people in the country's
northwest were at risk of starvation, but this was averted due to assistance from the United Nations.
On 8 January 2008, the UN Secretary-General Ban Ki-Moon declared that the Central African Republic was eligible to
receive assistance from the Peacebuilding Fund. Three priority areas were identified: first, the reform of the security
sector; second, the promotion of good governance and the rule of law; and third, the revitalization of communities
affected by conflicts. On 12 June 2008, the Central African Republic requested assistance from the UN Peacebuilding
Commission, which was set up in 2005 to help countries emerging from conflict avoid devolving back into war or chaos.
A new government was appointed on 31 March 2013, which consisted of members of Séléka and representatives of the
opposition to Bozizé, one pro-Bozizé individual, and a number of representatives of civil society. On 1 April, the former
opposition parties declared that they would boycott the government. After African leaders in Chad refused to recognize
Djotodia as President, instead proposing the formation of a transitional council and the holding of new elections, Djotodia signed
a decree on 6 April for the formation of a council that would act as a transitional parliament. The council was tasked
with electing a president to serve prior to elections in 18 months. The per capita income of the Republic is often
listed as being approximately $400 a year, one of the lowest in the world, but this figure is based mostly on reported
sales of exports and largely ignores the unregistered sale of foods, locally produced alcoholic beverages, diamonds,
ivory, bushmeat, and traditional medicine. For most Central Africans, the informal economy of the CAR is more important
than the formal economy.[citation needed] Export trade is hindered by poor economic development and the country's
landlocked position.[citation needed] Agriculture is dominated by the cultivation and sale of food crops such as
cassava, peanuts, maize, sorghum, millet, sesame, and plantain. The annual real GDP growth rate is just above 3%.
The importance of food crops over exported cash crops is indicated by the fact that the total production of cassava,
the staple food of most Central Africans, ranges between 200,000 and 300,000 tonnes a year, while the production
of cotton, the principal exported cash crop, ranges from 25,000 to 45,000 tonnes a year. Food crops are not exported
in large quantities, but still constitute the principal cash crops of the country, because Central Africans derive
far more income from the periodic sale of surplus food crops than from exported cash crops such as cotton or coffee.[citation
needed] Much of the country is self-sufficient in food crops; however, livestock development is hindered by the presence
of the tsetse fly.[citation needed] Presently, the Central African Republic has active television services, radio
stations, internet service providers, and mobile phone carriers; Socatel is the leading provider for both internet
and mobile phone access throughout the country. The primary governmental regulating body of telecommunications
is the Ministère des Postes, Télécommunications et des Nouvelles Technologies. In addition, the Central African
Republic receives international support on telecommunication related operations from ITU Telecommunication Development
Sector (ITU-D) within the International Telecommunication Union to improve infrastructure. The 2009 Human Rights
Report by the United States Department of State noted that human rights in CAR were poor and expressed concerns over
numerous government abuses. The U.S. State Department alleged that major human rights abuses such as extrajudicial
executions by security forces, torture, beatings and rape of suspects and prisoners occurred with impunity. It also
alleged harsh and life-threatening conditions in prisons and detention centers, arbitrary arrest, prolonged pretrial
detention and denial of a fair trial, restrictions on freedom of movement, official corruption, and restrictions
on workers' rights.
Asthma is thought to be caused by a combination of genetic and environmental factors. Environmental factors include exposure
to air pollution and allergens. Other potential triggers include medications such as aspirin and beta blockers. Diagnosis
is usually based on the pattern of symptoms, response to therapy over time, and spirometry. Asthma is classified
according to the frequency of symptoms, forced expiratory volume in one second (FEV1), and peak expiratory flow rate.
It may also be classified as atopic or non-atopic where atopy refers to a predisposition toward developing a type
1 hypersensitivity reaction. There is no cure for asthma. Symptoms can be prevented by avoiding triggers, such as
allergens and irritants, and by the use of inhaled corticosteroids. Long-acting beta agonists (LABA) or antileukotriene
agents may be used in addition to inhaled corticosteroids if asthma symptoms remain uncontrolled. Treatment of rapidly
worsening symptoms is usually with an inhaled short-acting beta-2 agonist such as salbutamol and corticosteroids
taken by mouth. In very severe cases, intravenous corticosteroids, magnesium sulfate, and hospitalization may be
required. Asthma is characterized by recurrent episodes of wheezing, shortness of breath, chest tightness, and coughing.
Sputum may be produced from the lung by coughing but is often hard to bring up. During recovery from an attack, it
may appear pus-like due to high levels of white blood cells called eosinophils. Symptoms are usually worse at night
and in the early morning or in response to exercise or cold air. Some people with asthma rarely experience symptoms,
usually only in response to triggers, whereas others may have marked and persistent symptoms. A number of other health
conditions occur more frequently in those with asthma, including gastro-esophageal reflux disease (GERD), rhinosinusitis,
and obstructive sleep apnea. Psychological disorders are also more common, with anxiety disorders occurring in
16–52% and mood disorders in 14–41% of those affected. However, it is not known if asthma causes psychological problems or if psychological
problems lead to asthma. Those with asthma, especially if it is poorly controlled, are at high risk for radiocontrast
reactions. Many environmental factors have been associated with asthma's development and exacerbation including allergens,
air pollution, and other environmental chemicals. Smoking during pregnancy and after delivery is associated with
a greater risk of asthma-like symptoms. Low air quality from factors such as traffic pollution or high ozone levels,
has been associated with both asthma development and increased asthma severity. Exposure to indoor volatile organic
compounds may be a trigger for asthma; formaldehyde exposure, for example, has a positive association. Also, phthalates
in certain types of PVC are associated with asthma in children and adults. The hygiene hypothesis attempts to explain
the increased rates of asthma worldwide as a direct and unintended result of reduced exposure, during childhood,
to non-pathogenic bacteria and viruses. It has been proposed that the reduced exposure to bacteria and viruses is
due, in part, to increased cleanliness and decreased family size in modern societies. Exposure to bacterial endotoxin
in early childhood may prevent the development of asthma, but exposure at an older age may provoke bronchoconstriction.
Evidence supporting the hygiene hypothesis includes lower rates of asthma on farms and in households with pets. Family
history is a risk factor for asthma, with many different genes being implicated. If one identical twin is affected,
the probability of the other having the disease is approximately 25%. By the end of 2005, 25 genes had been associated
with asthma in six or more separate populations, including GSTM1, IL10, CTLA-4, SPINK5, LTC4S, IL4R and ADAM33, among
others. Many of these genes are related to the immune system or modulating inflammation. Even among this list of
genes supported by highly replicated studies, results have not been consistent among all populations tested. In 2006
over 100 genes were associated with asthma in one genetic association study alone; more continue to be found. Asthma
is the result of chronic inflammation of the conducting zone of the airways (most especially the bronchi and bronchioles),
which subsequently results in increased contractability of the surrounding smooth muscles. This among other factors
leads to bouts of narrowing of the airway and the classic symptoms of wheezing. The narrowing is typically reversible
with or without treatment. Occasionally the airways themselves change. Typical changes in the airways include an
increase in eosinophils and thickening of the lamina reticularis. Chronically the airways' smooth muscle may increase
in size along with an increase in the numbers of mucous glands. Other cell types involved include: T lymphocytes,
macrophages, and neutrophils. There may also be involvement of other components of the immune system including: cytokines,
chemokines, histamine, and leukotrienes among others. While asthma is a well recognized condition, there is not one
universal agreed upon definition. It is defined by the Global Initiative for Asthma as "a chronic inflammatory disorder
of the airways in which many cells and cellular elements play a role. The chronic inflammation is associated with
airway hyper-responsiveness that leads to recurrent episodes of wheezing, breathlessness, chest tightness and coughing
particularly at night or in the early morning. These episodes are usually associated with widespread but variable
airflow obstruction within the lung that is often reversible either spontaneously or with treatment". There is currently
no precise test with the diagnosis typically based on the pattern of symptoms and response to therapy over time.
A diagnosis of asthma should be suspected if there is a history of: recurrent wheezing, coughing or difficulty breathing
and these symptoms occur or worsen due to exercise, viral infections, allergens or air pollution. Spirometry is then
used to confirm the diagnosis. In children under the age of six the diagnosis is more difficult as they are too young
for spirometry. Spirometry is recommended to aid in diagnosis and management. It is the single best test for asthma.
If the FEV1 measured by this technique improves more than 12% following administration of a bronchodilator such as
salbutamol, this is supportive of the diagnosis. However, it may be normal in those with a history of mild asthma
that is not currently active. As caffeine is a bronchodilator in people with asthma, the use of caffeine before a lung
function test may interfere with the results. Single-breath diffusing capacity can help differentiate asthma from
COPD. It is reasonable to perform spirometry every one or two years to follow how well a person's asthma is controlled.
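The bronchodilator reversibility criterion above can be expressed as a minimal sketch (the function name and patient values are hypothetical and for illustration only, not clinical software):

```python
# A minimal illustration of the bronchodilator reversibility criterion
# described above: an FEV1 improvement of more than 12% after a
# bronchodilator such as salbutamol supports a diagnosis of asthma.
# The function name and the FEV1 values below are hypothetical.

def supports_asthma_diagnosis(fev1_before_l: float, fev1_after_l: float) -> bool:
    """Return True when FEV1 improves by more than 12% after a bronchodilator."""
    improvement = (fev1_after_l - fev1_before_l) / fev1_before_l
    return improvement > 0.12

# Hypothetical patient: FEV1 rises from 2.5 L to 2.9 L (a 16% improvement).
print(supports_asthma_diagnosis(2.5, 2.9))  # True
# A rise from 2.5 L to 2.7 L (8%) would not meet the criterion.
print(supports_asthma_diagnosis(2.5, 2.7))  # False
```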
Other supportive evidence includes: a ≥20% difference in peak expiratory flow rate on at least three days in a week
for at least two weeks, a ≥20% improvement of peak flow following treatment with either salbutamol, inhaled corticosteroids
or prednisone, or a ≥20% decrease in peak flow following exposure to a trigger. Testing peak expiratory flow is more
variable than spirometry, however, and thus not recommended for routine diagnosis. It may be useful for daily self-monitoring
in those with moderate to severe disease and for checking the effectiveness of new medications. It may also be helpful
in guiding treatment in those with acute exacerbations. Asthma is clinically classified according to the frequency
of symptoms, forced expiratory volume in one second (FEV1), and peak expiratory flow rate. Asthma may also be classified
as atopic (extrinsic) or non-atopic (intrinsic), based on whether symptoms are precipitated by allergens (atopic)
or not (non-atopic). While asthma is classified based on severity, at the moment there is no clear method for classifying
different subgroups of asthma beyond this system. Finding ways to identify subgroups that respond well to different
types of treatments is a current critical goal of asthma research. Although asthma is a chronic obstructive condition,
it is not considered as a part of chronic obstructive pulmonary disease as this term refers specifically to combinations
of diseases that are irreversible, such as bronchiectasis, chronic bronchitis, and emphysema. Unlike these diseases,
the airway obstruction in asthma is usually reversible; however, if left untreated, the chronic inflammation from
asthma can lead the lungs to become irreversibly obstructed due to airway remodeling. In contrast to emphysema, asthma
affects the bronchi, not the alveoli. Exercise can trigger bronchoconstriction both in people with or without asthma.
It occurs in most people with asthma and up to 20% of people without asthma. Exercise-induced bronchoconstriction
is common in professional athletes. The highest rates are among cyclists (up to 45%), swimmers, and cross-country
skiers. While it may occur in any weather conditions, it is more common when the air is dry and cold. Inhaled beta2-agonists
do not appear to improve athletic performance among those without asthma; however, oral doses may improve endurance
and strength. Asthma as a result of (or worsened by) workplace exposures, is a commonly reported occupational disease.
Many cases however are not reported or recognized as such. It is estimated that 5–25% of asthma cases in adults are
work-related. A few hundred different agents have been implicated, with the most common being: isocyanates, grain
and wood dust, colophony, soldering flux, latex, animals, and aldehydes. The employment associated with the highest
risk of problems include: those who spray paint, bakers and those who process food, nurses, chemical workers, those
who work with animals, welders, hairdressers and timber workers. Many other conditions can cause symptoms similar
to those of asthma. In children, other upper airway diseases such as allergic rhinitis and sinusitis should be considered
as well as other causes of airway obstruction including: foreign body aspiration, tracheal stenosis or laryngotracheomalacia,
vascular rings, enlarged lymph nodes or neck masses. Bronchiolitis and other viral infections may also produce wheezing.
In adults, COPD, congestive heart failure, airway masses, as well as drug-induced coughing due to ACE inhibitors
should be considered. In both populations vocal cord dysfunction may present similarly. Chronic obstructive pulmonary
disease can coexist with asthma and can occur as a complication of chronic asthma. After the age of 65 most people
with obstructive airway disease will have asthma and COPD. In this setting, COPD can be differentiated by increased
airway neutrophils, abnormally increased wall thickness, and increased smooth muscle in the bronchi. However, this
level of investigation is not usually performed, because COPD and asthma share similar principles of management: corticosteroids,
long-acting beta agonists, and smoking cessation. COPD closely resembles asthma in symptoms and is correlated with greater
exposure to cigarette smoke, older age, less symptom reversibility after bronchodilator administration, and decreased
likelihood of family history of atopy. The evidence for the effectiveness of measures to prevent the development
of asthma is weak. Some show promise including: limiting smoke exposure both in utero and after delivery, breastfeeding,
and increased exposure to daycare or large families but none are well supported enough to be recommended for this
indication. Early pet exposure may be useful. Results from exposure to pets at other times are inconclusive and it
is only recommended that pets be removed from the home if a person has allergic symptoms to said pet. Dietary restrictions
during pregnancy or when breastfeeding have not been found to be effective and thus are not recommended. Reducing
or eliminating compounds known to sensitize people from the workplace may be effective. It is not clear if annual
influenza vaccination affects the risk of exacerbations; immunization, however, is recommended by the World Health
Organization. Smoking bans are effective in decreasing exacerbations of asthma. Avoidance of triggers is a key component
of improving control and preventing attacks. The most common triggers include allergens, smoke (tobacco and other),
air pollution, non-selective beta-blockers, and sulfite-containing foods. Cigarette smoking and second-hand smoke
(passive smoke) may reduce the effectiveness of medications such as corticosteroids. Laws that limit smoking decrease
the number of people hospitalized for asthma. Dust mite control measures, including air filtration, chemicals to
kill mites, vacuuming, mattress covers, and other methods, have no effect on asthma symptoms. Overall, exercise is
beneficial in people with stable asthma. Yoga could provide small improvements in quality of life and symptoms in
people with asthma. For those with severe persistent asthma not controlled by inhaled corticosteroids and LABAs,
bronchial thermoplasty may be an option. It involves the delivery of controlled thermal energy to the airway wall
during a series of bronchoscopies. While it may increase exacerbation frequency in the first few months it appears
to decrease the subsequent rate. Effects beyond one year are unknown. Evidence suggests that sublingual immunotherapy
in those with both allergic rhinitis and asthma improves outcomes. The prognosis for asthma is generally good, especially
for children with mild disease. Mortality has decreased over the last few decades due to better recognition and improvement
in care. Globally it causes moderate or severe disability in 19.4 million people as of 2004 (16 million of which
are in low and middle income countries). Of asthma diagnosed during childhood, half of cases will no longer carry
the diagnosis after a decade. Airway remodeling is observed, but it is unknown whether it represents harmful or
beneficial change. Early treatment with corticosteroids seems to prevent or ameliorate a decline in lung function.
As of 2011, 235–330 million people worldwide are affected by asthma, and approximately 250,000–345,000 people die
per year from the disease. Rates vary between countries with prevalences between 1 and 18%. It is more common in
developed than in developing countries; rates are thus lower in Asia, Eastern Europe, and Africa. Within developed
countries it is more common in those who are economically disadvantaged while in contrast in developing countries
it is more common in the affluent. The reason for these differences is not well known. Low and middle income countries
make up more than 80% of the mortality. From 2000 to 2010, the average cost per asthma-related hospital stay in the
United States for children remained relatively stable at about $3,600, whereas the average cost per asthma-related
hospital stay for adults increased from $5,200 to $6,600. In 2010, Medicaid was the most frequent primary payer among
children and adults aged 18–44 years in the United States; private insurance was the second most frequent payer.
Among both children and adults in the lowest-income communities in the United States, rates of hospital
stays for asthma in 2010 were higher than in the highest-income communities. In 1873, one of the first papers in modern
medicine on the subject tried to explain the pathophysiology of the disease, while one in 1872 concluded that asthma
can be cured by rubbing the chest with chloroform liniment. Medical treatment in 1880 included the use of intravenous
doses of a drug called pilocarpine. In 1886, F.H. Bosworth theorized a connection between asthma and hay fever. Epinephrine
was first referred to in the treatment of asthma in 1905. Oral corticosteroids began to be used for this condition
in the 1950s, while inhaled corticosteroids and selective short-acting beta agonists came into wide use in the 1960s.
Although the format was capable of offering higher-quality video and audio than its consumer rivals, the VHS and Betamax
videocassette systems, LaserDisc never managed to gain widespread use in North America, largely due to high costs
for the players and video titles themselves and the inability to record TV programming. It also remained a largely
obscure format in Europe and Australia. By contrast, the format was much more popular in Japan and in the more affluent
regions of Southeast Asia, such as Hong Kong, Singapore, and Malaysia, being the prevalent rental video medium in
Hong Kong during the 1990s. Its superior video and audio quality did make it a somewhat popular choice among videophiles
and film enthusiasts during its lifespan. LaserDisc was first available on the market, in Atlanta, Georgia, on December
15, 1978, two years after the introduction of the VHS VCR, and four years before the introduction of the CD (which
is based on laser disc technology). Initially licensed, sold, and marketed as MCA DiscoVision (also known as simply
"DiscoVision") in North America in 1978, the technology was previously referred to internally as Optical Videodisc
System, Reflective Optical Videodisc, Laser Optical Videodisc, and Disco-Vision (with a dash), with the first players
referring to the format as "Video Long Play". Pioneer Electronics later purchased the majority stake in the format
and marketed it as both LaserVision (format name) and LaserDisc (brand name) in 1980, with some releases unofficially
referring to the medium as "Laser Videodisc". Philips produced the players while MCA produced the discs. The Philips-MCA
cooperation was not successful, and discontinued after a few years. Several of the scientists responsible for the
early research (Richard Wilkinson, Ray Dakin and John Winslow) founded Optical Disc Corporation (now ODC Nimbus).
By the early 2000s, LaserDisc was completely replaced by DVD in the North American retail marketplace, as neither
players nor software were then produced. Players were still exported to North America from Japan until the end of
2001. The format has retained some popularity among American collectors, and to a greater degree in Japan, where
the format was better supported and more prevalent during its life. In Europe, LaserDisc always remained an obscure
format. It was chosen by the British Broadcasting Corporation (BBC) for the BBC Domesday Project in the mid-1980s,
a school-based project to commemorate 900 years since the original Domesday Book in England. From 1991 up until the
early 2000s, the BBC also used LaserDisc technology to play out the channel idents. The standard home video LaserDisc
was 30 cm (12 in) in diameter and made up of two single-sided aluminum discs layered in plastic. Although appearing
similar to compact discs or DVDs, LaserDiscs used analog video stored in the composite domain (having a video bandwidth
approximately equivalent to the 1-inch (25 mm) C-Type VTR format) with analog FM stereo sound and PCM digital audio.
The LaserDisc at its most fundamental level was still recorded as a series of pits and lands much like CDs, DVDs,
and even Blu-ray Discs are today. However, while the encoding is of a binary nature, the information is encoded as
analog pulse width modulation with a 50% duty cycle, where the information is contained in the lengths and spacing
of the pits. In true digital media the pits, or their edges, directly represent 1s and 0s of a binary digital information
stream. Early LaserDiscs released in 1978 were entirely analog, but the format evolved to incorporate digital stereo
sound in CD format (sometimes with a TOSlink or coax output to feed an external DAC), and later multi-channel formats
such as Dolby Digital and DTS. As Pioneer introduced Digital Audio to LaserDisc in 1985, they further refined the
CAA format. CAA55 was introduced in 1985 with a total playback capacity per side of 55 minutes 5 seconds, reducing
the video capacity to resolve bandwidth issues with the inclusion of Digital Audio. Several titles released between
1985 and 1987 were analog audio only due to the length of the title and the desire to keep the film on one disc (e.g.,
Back to the Future). By 1987, Pioneer had overcome the technical challenges and was able to once again encode in
CAA60, allowing a total of 60 minutes 5 seconds. Pioneer further refined CAA, offering CAA45, encoding 45 minutes
of material, but filling the entire playback surface of the side. Used on only a handful of titles, CAA65 offered
65 minutes 5 seconds of playback time per side. There are a handful of titles pressed by Technidisc that used CAA50.
The final variant of CAA is CAA70, which could accommodate 70 minutes of playback time per side. There are not any
known uses of this format on the consumer market. Sound could be stored in either analog or digital format and in
a variety of surround sound formats; NTSC discs could carry two analog audio tracks, plus two uncompressed PCM digital
audio tracks (EFM, CIRC, 16-bit, 44.056 kHz sample rate). PAL discs could carry one pair of audio
tracks, either analog or digital and the digital tracks on a PAL disc were 16-bit 44.1 kHz as on a CD; in the UK,
the term "LaserVision" is used to refer to discs with analog sound, while "LaserDisc" is used for those with digital
audio. The digital sound signal in both formats is EFM-encoded as on CD. Dolby Digital (also called AC-3) and DTS—which
are now common on DVD titles—first became available on LaserDisc, and Star Wars: Episode I – The Phantom Menace (1999),
which was released on LaserDisc in Japan, is among the first home video releases ever to include 6.1 channel Dolby
Digital EX Surround. Unlike DVDs, which carry Dolby Digital audio in digital form, LaserDiscs store Dolby Digital
in a frequency modulated form within a track normally used for analog audio. Extracting Dolby Digital from a LaserDisc
required a player equipped with a special "AC-3 RF" output and an external demodulator in addition to an AC-3 decoder.
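The equipment chain just described can be sketched as a simple check (an illustrative example only; the function and parameter names are invented, not a real API):

```python
# Illustrative sketch: the three stages needed to hear Dolby Digital (AC-3)
# from a LaserDisc, as described in the text. AC-3 is stored RF-modulated in
# an analog audio track, so the player's "AC-3 RF" output, an external
# demodulator, and an AC-3 decoder must all be present.
# Function and parameter names are invented for this example.

def can_hear_ac3(player_has_ac3_rf_output: bool,
                 has_rf_demodulator: bool,
                 has_ac3_decoder: bool) -> bool:
    """Every stage of the chain (RF output, demodulator, decoder) is required."""
    return player_has_ac3_rf_output and has_rf_demodulator and has_ac3_decoder

# A receiver that can decode AC-3 but lacks the demodulator circuit still
# cannot reproduce the disc's modulated AC-3 stream:
print(can_hear_ac3(True, False, True))  # False
```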
The demodulator was necessary to convert the 2.88 MHz modulated AC-3 information on the disc into a 384 kbit/s signal
that the decoder could handle. DTS audio, when available on a disc, replaced the digital audio tracks; hearing DTS
sound required only an S/PDIF compliant digital connection to a DTS decoder. In the mid to late 1990s many higher-end
AV receivers included a demodulator circuit specifically for the LaserDisc player's RF-modulated Dolby Digital AC-3
signal. By the late 1990s, with LaserDisc player and disc sales declining due to DVD's growing popularity, AV
receiver manufacturers removed the demodulator circuit. Although DVD players were capable of playing Dolby Digital
tracks, the signal output by DVD players was not in a modulated form and was not compatible with the inputs designed for
LaserDisc AC-3. Outboard demodulators were available for a period that converted the AC-3 signal to a standard Dolby
Digital signal compatible with the standard Dolby Digital/PCM inputs on capable AV receivers. Another type
marketed by Onkyo and others converted the RF AC-3 signal to 6-channel analog audio. At least where the digital audio
tracks were concerned, the sound quality was unsurpassed at the time compared to consumer videotape, but the quality
of the analog soundtracks varied greatly depending on the disc and, sometimes, the player. Many early and lower-end
LD players had poor analog audio components, and many early discs had poorly mastered analog audio tracks, making
digital soundtracks in any form most desirable to serious enthusiasts. Early DiscoVision and LaserDisc titles lacked
the digital audio option, but many of those movies received digital sound in later re-issues by Universal, and the
quality of analog audio tracks generally got far better as time went on. Many discs that had originally carried old
analog stereo tracks received new Dolby Stereo and Dolby Surround tracks instead, often in addition to digital tracks,
helping boost sound quality. Later analog discs also applied CX Noise Reduction, which improved the signal-noise
ratio of their audio. Both AC-3 and DTS surround audio were clumsily implemented on LaserDiscs, leading to some interesting
player- and disc-dependent issues. A disc that included AC-3 audio forfeited the right analog audio channel to the
modulated AC-3 RF stream. If the player did not have an AC-3 output available, the next most attractive playback
option would be the digital Dolby Surround or stereo audio tracks. The reason for this is the RF signal needs to
bypass the audio circuitry in order to be properly processed by the demodulator. If either the player did not support
digital audio tracks (common in older players), or the disc did not include digital audio tracks at all (uncommon
for a disc which is mastered with an AC-3 track), the only remaining option was to fall back to a monophonic presentation
of the left analog audio track. However, many older analog-only players not only failed to output AC-3 streams correctly,
but were not even aware of their potential existence. Such a player would play the analog audio tracks verbatim,
resulting in garbage (static) output in the right channel. Only one 5.1 surround sound option exists on a given LaserDisc
(either Dolby Digital or DTS), so if surround sound is desired, the disc must be matched to the capabilities of the
playback equipment (LD Player and Receiver/Decoder) by the purchaser. A fully capable LaserDisc playback system includes
a newer LaserDisc player that is capable of playing digital tracks, has a digital optical output for digital PCM
and DTS audio, is aware of AC-3 audio tracks, and has an AC-3 coaxial output; an external or internal AC-3 RF demodulator
and AC-3 decoder; and a DTS decoder. Many 1990s A/V receivers combine the AC-3 decoder and DTS decoder logic, but
an integrated AC-3 demodulator is rare both in LaserDisc players and in later A/V receivers. PAL LaserDiscs have
a slightly longer playing time than NTSC discs, but have fewer audio options. PAL discs only have two audio tracks,
consisting of either two analog-only tracks on older PAL LDs, or two digital-only tracks on newer discs. In comparison,
later NTSC LDs are capable of carrying four tracks (two analog and two digital). On certain releases, one of the
analog tracks is used to carry a modulated AC-3 signal for 5.1 channel audio (for decoding and playback by newer
LD players with an "AC-3 RF" output). However, older NTSC LDs made before 1984 (such as the original DiscoVision
discs) only have two analog audio tracks. In March 1984, Pioneer introduced the first consumer player with a solid-state
laser, the LD-700. It was also the first LD player to load from the front and not the top. One year earlier Hitachi
introduced an expensive industrial player with a laser diode, but the player, which had poor picture quality due
to an inadequate dropout compensator, was made only in limited quantities. After Pioneer released the LD-700, gas
lasers were no longer used in consumer players, despite their advantages, although Philips continued to use gas lasers
in their industrial units until 1985. During its development, MCA, which co-owned the technology, referred to it
as the Optical Videodisc System, "Reflective Optical Videodisc" or "Laser Optical Videodisc", depending on the document;
changing the name once in 1969 to Disco-Vision and then again in 1978 to DiscoVision (without the hyphen), which
became the official spelling. Technical documents and brochures produced by MCA Disco-Vision during the early and
mid-'70s also used the term "Disco-Vision Records" to refer to the pressed discs. MCA owned the rights to the largest
catalog of films in the world during this time, and they manufactured and distributed the DiscoVision releases of
those films under the "MCA DiscoVision" software and manufacturing label; consumer sale of those titles began on
December 15, 1978, with the aforementioned Jaws. Philips' preferred name for the format was "VLP", after the Dutch
words Video Langspeel-Plaat ("Video long-play disc"), which in English-speaking countries stood for Video Long-Play.
The first consumer player, the Magnavox VH-8000 even had the VLP logo on the player. For a while in the early and
mid-1970s, Philips also discussed a compatible audio-only format they called "ALP", but that was soon dropped as
the Compact Disc system became a non-compatible project in the Philips corporation. Until early 1980, the format
had no "official" name. The LaserVision Association, made up of MCA, Universal-Pioneer, IBM, and Philips/Magnavox,
was formed to standardize the technical specifications of the format (which had been causing problems for the consumer
market) and finally named the system officially as "LaserVision". Pioneer Electronics had also entered the optical
disc market in 1977, in a 50/50 joint venture with MCA called Universal-Pioneer, manufacturing MCA-designed industrial
players under the MCA DiscoVision name (the PR-7800 and PR-7820). For the 1980 launch of the first Universal-Pioneer
player, the VP-1000 was noted as a "laser disc player", although the "LaserDisc" logo was displayed clearly on the device.
In 1981, "LaserDisc" was used exclusively for the medium itself, although the official name was "LaserVision" (as
seen at the beginning of many LaserDisc releases just before the start of the film). However, as Pioneer reminded
numerous video magazines and stores in 1984, LaserDisc was a trademarked word, standing only for LaserVision products
manufactured for sale by Pioneer Video or Pioneer Electronics. A 1984 Ray Charles ad for the LD-700 player bore the
term "Pioneer LaserDisc brand videodisc player". From 1981 until the early 1990s, all properly licensed discs carried
the LaserVision name and logo, even Pioneer Artists titles. During the early years, MCA also manufactured discs for
other companies including Paramount, Disney and Warner Bros. Some of them added their own names to the disc jacket
to signify that the movie was not owned by MCA. After Discovision Associates shut down in early 1982, Universal Studio's
videodisc software label, called MCA Videodisc until 1984, began reissuing many DiscoVision titles. Unfortunately,
quite a few, such as Battlestar Galactica and Jaws, were time-compressed versions of their CAV or CLV DiscoVision
originals. The time-compressed CLV re-issue of Jaws no longer had the original soundtrack, having had incidental
background music replaced for the video disc version due to licensing cost (the music would not be available until
the THX LaserDisc box set was released in 1995). One Universal/Columbia co-production issued by MCA DiscoVision
in both CAV and CLV versions, The Electric Horseman, is still not available in any other home video format with its
original score intact; even the most recent DVD release has had substantial music replacements of both instrumental
score and Willie Nelson's songs. An MCA release of Universal's Howard the Duck shows only the opening credits in
widescreen before changing to 4:3 for the rest of the film. For many years this was the only disc-based release
of the film, until widescreen DVD formats were released with extras. Also, the LaserDisc release of E.T. the Extra-Terrestrial
is the only format to include the cut scene of Harrison Ford playing the part of the school headmaster telling off
Elliott for letting the frogs free in the biology class. LaserDisc had a number of advantages over VHS. It featured
a far sharper picture, with a horizontal resolution of 425 TV lines (TVL) for NTSC and 440 TVL for PAL discs, while
VHS offered only about 240 TVL with NTSC. It could carry both analog and digital audio, whereas VHS was mostly analog
only (VHS can carry PCM audio in professional applications, but this is uncommon), and NTSC discs could store multiple audio
tracks. This allowed for extras like director's commentary tracks and other features to be added onto a film, creating
"Special Edition" releases that would not have been possible with VHS. Disc access was random and chapter based,
like the DVD format, meaning that one could jump to any point on a given disc very quickly. By comparison, VHS would
require tedious rewinding and fast-forwarding to get to specific points. LaserDiscs were initially cheaper than videocassettes
to manufacture, because they lacked the moving parts and plastic outer shell that are necessary for VHS tapes to
work, and the duplication process was much simpler. A VHS cassette has at least 14 parts including the actual tape
while LaserDisc has one part with five or six layers. A disc can be stamped out in a matter of seconds whereas duplicating
videotape required a complex bulk tape duplication mechanism and was a time-consuming process. However, by the end
of the 1980s, average disc-pressing prices were over $5.00 per two-sided disc, due to the large amount of plastic
material and the costly glass-mastering process needed to make the metal stamper mechanisms. Due to the larger volume
of demand, videocassettes quickly became much cheaper to duplicate, costing as little as $1.00 by the beginning of
the 1990s. LaserDiscs potentially had a much longer lifespan than videocassettes. Because the discs were read optically
instead of magnetically, no physical contact needed to be made between the player and the disc, except for the player's
clamp that holds the disc at its center as it is spun and read. As a result, playback would not wear the information-bearing
part of the discs, and properly manufactured LDs would theoretically last beyond one's lifetime. By contrast, a VHS
tape held all of its picture and sound information on the tape in a magnetic coating which is in contact with the
spinning heads on the head drum, causing progressive wear with each use (though later in VHS's lifespan, engineering
improvements allowed tapes to be made and played back without contact). Also, the tape was thin and delicate, and
it was easy for a player mechanism, especially on a low quality or malfunctioning model, to mishandle the tape and
damage it by creasing it, frilling (stretching) its edges, or even breaking it. LaserDisc was a composite video format:
the luminance (black and white) and chrominance (color) information were transmitted in one signal, separated by
the receiver. While a good comb filter can perform this separation adequately, the two signals can never be separated completely. On
DVDs, data is stored in the form of digital blocks which make up each independent frame. The signal produced is dependent
on the equipment used to master the disc. Signals range from composite and split, to YUV and RGB. Depending upon
which format is used, this can result in far higher fidelity, particularly at strong color borders or regions of
high detail (especially if there is moderate movement in the picture) and low-contrast details like skin tones, where
comb filters almost inevitably smudge some detail. In contrast to the entirely digital DVD, LaserDiscs use only analog
video. As the LaserDisc format is not digitally encoded and does not make use of compression techniques, it is immune
to video macroblocking (most visible as blockiness during high motion sequences) or contrast banding (subtle visible
lines in gradient areas, such as out-of-focus backgrounds, skies, or light casts from spotlights) that can be caused
by the MPEG-2 encoding process as video is prepared for DVD. Early DVD releases held the potential to surpass their
LaserDisc counterparts, but often managed only to match them for image quality, and in some cases, the LaserDisc
version was preferred. However, proprietary human-assisted encoders manually operated by specialists can vastly reduce
the incidence of artifacts, depending on playing time and image complexity. By the end of LaserDisc's run, DVDs were
living up to their potential as a superior format. LaserDisc players can provide a great degree of control over the
playback process. Unlike many DVD players, the transport mechanism always obeys commands from the user: pause, fast-forward,
and fast-reverse commands are always accepted (barring, of course, malfunctions). There were no "User Prohibited
Options" where content protection code instructs the player to refuse commands to skip a specific part (such as fast
forwarding through copyright warnings). (Some DVD players, particularly higher-end units, do have the ability to
ignore the blocking code and play the video without restrictions, but this feature is not common in the usual consumer
market.) Damaged spots on a LaserDisc can be played through or skipped over, while a DVD will often become unplayable
past the damage. Some newer DVD players feature a repair+skip algorithm, which alleviates this problem by continuing
to play the disc, filling in unreadable areas of the picture with blank space or a frozen frame of the last readable
image and sound. The success of this feature depends upon the amount of damage. LaserDisc players, when working in
full analog, recover from such errors faster than DVD players. Direct comparison here is almost impossible due to
the sheer size differences between the two media. A 1 in (2.5 cm) scratch on a DVD will probably cause more problems
than a 1 in (2.5 cm) scratch on a LaserDisc, but a fingerprint taking up 1% of the area of a DVD would almost certainly
cause fewer problems than a similar mark covering 1% of the surface of a LaserDisc.[citation needed] Similar to the
CD versus LP sound quality debates common in the audiophile community, some videophiles argue that LaserDisc maintains
a "smoother", more "film-like", natural image while DVD still looks slightly more artificial. Early DVD demo discs
often had compression or encoding problems, lending additional support to such claims at the time. However, the video
signal-to-noise ratio and bandwidth of LaserDisc are substantially less than that of DVDs, making DVDs appear sharper
and clearer to most viewers. Another advantage, at least to some consumers, was the lack of any sort of anti-piracy
technology. It was claimed that Macrovision's Copyguard protection could not be applied to LaserDisc, due to the
format's design. The vertical blanking interval, where the Macrovision signal would be implemented, was also used
for the internal timing on LaserDisc players, so test discs with Macrovision would not play at all. Because of the
format's relatively small market share, there was never a push to redesign it despite the obvious potential for piracy;
the industry instead engineered copy protection into the DVD specification. LaserDisc's support for multiple audio tracks
allowed for vast supplemental materials to be included on-disc and made it the first available format for "Special
Edition" releases; the 1984 Criterion Collection edition of Citizen Kane is generally credited as being the first
"Special Edition" release to home video,[citation needed] and for setting the standard by which future SE discs were
measured. The disc provided interviews, commentary tracks, documentaries, still photographs, and other features for
historians and collectors. The space-consuming analog video signal of a LaserDisc limited playback duration to 30
minutes (CAV) or 60 minutes (CLV) per side because of the hardware manufacturer's refusal to reduce line count for
increased playtime. After one side finished playing, a disc had to be flipped over in order to continue watching
a movie, and some titles filled two or more discs. Many players, especially units built after the mid-1980s, can "flip"
discs automatically by rotating the optical pickup to the other side of the disc, but this is accompanied by a pause
in the movie during the side change. If the movie is longer than what could be stored on two sides of a single disc,
manually swapping to a second disc is necessary at some point during the film. One exception to this rule is the
Pioneer LD-W1, which features two disc platters. In addition, perfect still frames and random access to individual
still frames are limited to the more expensive CAV discs, which had a playing time of only approximately 30 minutes
per side. In later years, Pioneer and other manufacturers overcame this limitation by incorporating a digital memory
buffer, which "grabbed" a single frame from a CLV disc. The analog information encoded on LaserDiscs does not include
any form of built-in checksum or error correction. Because of this, slight dust and scratches on the disc surface
can result in read-errors which cause various video quality problems: glitches, streaks, bursts of static, or momentary
picture interruptions. In contrast, the digital MPEG-2 format information used on DVDs has built-in error correction
which ensures that the signal from a damaged disc will remain identical to that from a perfect disc right up until
the point at which damage to the disc surface is so substantial that it prevents the laser from being able to identify
usable data. In addition, LaserDisc videos sometimes exhibit a problem known as "crosstalk". The issue can arise
when the laser optical pickup assembly within the player is out of alignment or because the disc is damaged or excessively
warped, but it could also occur even with a properly functioning player and a factory-new disc, depending on electrical
and mechanical alignment problems. In these instances, the issue arises because CLV discs require subtle changes
in rotational speed at various points during playback. During a speed change, the optical pickup inside
the player might read video information from a track adjacent to the intended one, causing data from the two tracks
to "cross"; the extra video information picked up from that second track shows up as distortion in the picture which
looks reminiscent of swirling "barber poles" or rolling lines of static. Assuming the player's optical pickup is
in proper working order, crosstalk distortion normally does not occur during playback of CAV format LaserDiscs, as
the rotational speed never varies. However, if the player calibration is out of order or if the CAV disc is faulty
or damaged, other problems affecting tracking accuracy can occur. One such problem is "laser lock", where the player
reads the same two fields for a given frame over and over again, causing the picture to look frozen as if the movie
were paused. Another significant issue unique to LaserDisc is one involving the inconsistency of playback quality
between different makers and models of player. On most televisions, a given DVD player will produce a picture that
is visually indistinguishable from other units. Differences in image quality between players only become easily
apparent on large televisions and substantial leaps in image quality are generally only obtained with expensive,
high-end players that allow for post-processing of the MPEG-2 stream during playback. In contrast, LaserDisc playback
quality is highly dependent on hardware quality. Major variances in picture quality appear between different makers
and models of LD players, even when tested on a low- to mid-range television. The obvious benefits of using high-quality
equipment have helped keep demand for some players high, thus also keeping prices for those units comparably high.
In the 1990s, notable players sold for anywhere from US$200 to well over $1,000, while older and less desirable players
could be purchased in working condition for as little as $25. Many early LDs were not manufactured properly; sometimes
a substandard adhesive was used to sandwich together the two sides of the disc.[citation needed] The adhesive contained
impurities that were able to penetrate the lacquer seal layer and chemically attack the metalized reflective aluminium
layer, causing it to oxidize and lose its reflective characteristics. This was a problem that was termed "laser rot"
among LD enthusiasts, also called "color flash" internally by LaserDisc-pressing plants. Some forms of laser rot
could appear as black spots that looked like mold or burned plastic which cause the disc to skip and the movie to
exhibit excessive speckling noise. But, for the most part, rotted discs could actually appear perfectly fine to the
naked eye. LaserDisc did not have high market penetration in North America due to the high cost of the players and
discs, which were far more expensive than VHS players and tapes, and due to marketplace confusion with the technologically
inferior CED, which also went by the name Videodisc. While the format was not widely adopted by North American consumers,
it was well received among videophiles due to the superior audio and video quality compared to VHS and Betamax tapes,
finding a place in nearly one million American homes by the end of 1990. The format was more popular in Japan than
in North America because prices were kept low to ensure adoption, resulting in minimal price differences between
VHS tapes and the higher quality LaserDiscs, helping ensure that it quickly became the dominant consumer video format
in Japan. Anime collectors in every country where the LD format was released, including North America and Japan,
also quickly became familiar with it, seeking the higher video and sound quality of LaserDisc and the
availability of numerous titles not available on VHS. LaserDiscs were also popular alternatives to videocassettes
among movie enthusiasts in the more affluent regions of South East Asia, such as Singapore, due to their high integration
with the Japanese export market and the disc-based media's superior longevity compared to videocassette, especially
in the humid conditions endemic to that area of the world. The format also became quite popular in Hong Kong during
the 1990s before the introduction of VCDs and DVD; although people rarely bought the discs (because each LD was priced
around US$100), high rental activity helped the video rental business in the city grow larger than it had ever been
previously. Due to integration with the Japanese export market, NTSC LaserDiscs were used in the Hong Kong market,
in contrast to the PAL standard used for broadcast (an anomaly that also exists for DVD). This created a market for
multi-system TVs and multi-system VCRs which could display or play both PAL and NTSC materials in addition to SECAM
materials (which were never popular in Hong Kong). Some LD players could convert NTSC signals to PAL so that most
TVs used in Hong Kong could display the LD materials. Although the LaserDisc format was supplanted by DVD by the
late 1990s, many LD titles are still highly coveted by movie enthusiasts (for example, Disney's Song of the South
which is unavailable in the US in any format, but was issued in Japan on LD). This is largely because there are many
films that are still only available on LD and many other LD releases contain supplemental material not available
on subsequent DVD versions of those films. Until the end of 2001, many titles were released on VHS, LD, and DVD in
Japan. In the early 1980s, Philips produced a LaserDisc player model adapted for a computer interface, dubbed "professional".
When connected to a PC, this combination could be used to display images or information for educational or archival
purposes, for example thousands of scanned medieval manuscripts; the device could be considered a very early equivalent
of a CD-ROM. In 1985, Jasmine Multimedia created LaserDisc jukeboxes featuring music videos from Michael Jackson,
Duran Duran, and Cyndi Lauper. In the mid-1980s, Lucasfilm pioneered the EditDroid non-linear editing system
for film and television based on computer-controlled LaserDisc players. Instead of printing dailies out on film,
processed negatives from the day's shoot would be sent to a mastering plant to be assembled from their 10-minute
camera elements into 20-minute film segments. These were then mastered onto single-sided blank LaserDiscs, just as
a DVD would be burnt at home today, allowing for much easier selection and preparation of an Edit Decision List.
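An Edit Decision List is, in essence, a table pairing source timecodes with program (record) timecodes. EditDroid's native format is not described here; as a purely hypothetical illustration in the spirit of the classic CMX-style EDL, the reel names and events below are invented:

```python
# Hypothetical sketch of an Edit Decision List (EDL) event table, loosely
# in the style of classic CMX EDLs; EditDroid's actual format is not
# documented here, and the reel names are invented for illustration.

def tc(frames, fps=30):
    """Convert a frame count to HH:MM:SS:FF timecode (non-drop-frame)."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def edl_event(num, reel, src_in, src_out, rec_in):
    """One video cut: copy src_in..src_out from `reel` to the program at rec_in."""
    dur = src_out - src_in
    return (f"{num:03d}  {reel:<8}"
            f"V  C  {tc(src_in)} {tc(src_out)} {tc(rec_in)} {tc(rec_in + dur)}")

print(edl_event(1, "DISC1", 1800, 1950, 0))    # a 5-second cut at program start
print(edl_event(2, "DISC2", 90, 240, 150))
```

Each event line carries the source in/out points and the corresponding program in/out points, which is exactly the information a negative cutter needs to conform the camera negative.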
In the days before video assist was available in cinematography, this was the only other way a film crew could see
their work. The EDL went to the negative cutter who then cut the camera negative accordingly and assembled the finished
film. Only 24 EditDroid systems were ever built, even though the ideas and technology are still in use today. Later
EditDroid experiments borrowed from hard-drive technology of having multiple discs on the same spindle and added
numerous playback heads and numerous electronics to the basic jukebox design so that any point on each of the discs
would be accessible within seconds. This eliminated the need for racks and racks of industrial LaserDisc players
since EditDroid discs were only single-sided. In 1986, a SCSI-equipped LaserDisc player attached to a BBC Master
computer was used for the BBC Domesday Project. The player was referred to as an LV-ROM (LaserVision Read Only Memory)
as the discs contained the driving software as well as the video frames. The discs used the CAV format, and encoded
data as a binary signal represented by the analog audio recording. Each CAV frame on these discs could contain either
video/audio or video/binary data, but not both. "Data" frames would appear blank when played as video. It was typical for each
disc to start with the disc catalog (a few blank frames) then the video introduction before the rest of the data.
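As a sketch of the storage arithmetic: each CAV side carries 54,000 frames, and if each frame holds a payload of 6,000 bytes (an inferred, illustrative figure rather than a documented constant), the per-side capacity works out as follows:

```python
# LV-ROM capacity sketch. The frame count is the standard CAV figure;
# the per-frame payload of 6,000 bytes is inferred for illustration,
# not taken from the LV-ROM specification.
FRAMES_PER_SIDE = 54_000
BYTES_PER_FRAME = 6_000

capacity_bytes = FRAMES_PER_SIDE * BYTES_PER_FRAME
print(capacity_bytes / 1_000_000, "MB")   # → 324.0 MB
```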
Because the format (based on the ADFS hard disc format) used a starting sector for each file, the data layout effectively
skipped over any video frames. If all 54,000 frames are used for data storage, an LV-ROM disc can contain 324 MB of
data per side. The Domesday Project systems also included a genlock, allowing video frames, clips and audio to be
mixed with graphics originated from the BBC Master; this was used to great effect for displaying high resolution
photographs and maps, which could then be zoomed into. Apple's HyperCard scripting language provided Macintosh computer
users with a means to design databases of slides, animation, video and sounds from LaserDiscs and then to create
interfaces for users to play specific content from the disc through software called LaserStacks. User-created "stacks"
were shared and were especially popular in education where teacher-generated stacks were used to access discs ranging
from art collections to basic biological processes. Commercially available stacks were also popular with the Voyager
company being possibly the most successful distributor. Under contract from the U.S. Military, Matrox produced a
combination computer/LaserDisc player for instructional purposes. The computer was a 286; the LaserDisc player was
capable of reading only the analog audio tracks. Together they weighed 43 lb (20 kg), and sturdy handles were provided
in case two people were required to lift the unit. The computer controlled the player via a 25-pin serial port at
the back of the player and a ribbon cable connected to a proprietary port on the motherboard. Many of these were
sold as surplus by the military during the 1990s, often without the controller software. Nevertheless, it is possible
to control the unit by removing the ribbon cable and connecting a serial cable directly from the computer's serial
port to the port on the LaserDisc player. The format's instant-access capability made it possible for a new breed
of LaserDisc-based video arcade games and several companies saw potential in using LaserDiscs for video games in
the 1980s and 1990s, beginning in 1983 with Sega's Astron Belt. American Laser Games and Cinematronics produced elaborate
arcade consoles that used the random-access features to create interactive movies such as Dragon's Lair and Space
Ace. Similarly, the Pioneer Laseractive and Halcyon were introduced as home video game consoles that used LaserDisc
media for their software. In 1991, several manufacturers announced specifications for what would become known as
MUSE LaserDisc, representing a span of almost 15 years until the feats of this HD analog optical disc system would
finally be duplicated digitally by HD DVD and Blu-ray Disc. Encoded using NHK's MUSE "Hi-Vision" analogue TV system,
MUSE discs would operate like standard LaserDiscs but would contain high-definition 1,125-line (1,035 visible lines)
(Sony HDVS) video with a 5:3 aspect ratio. The MUSE players were also capable of playing standard NTSC format discs
and were superior in performance to non-MUSE players even with these NTSC discs. The MUSE-capable players had several
noteworthy advantages over standard LaserDisc players, including a red laser with a much narrower wavelength than
the lasers found in standard players. The red laser was capable of reading through disc defects such as scratches
and even mild disc rot that would cause most other players to stop, stutter, or drop out. Crosstalk was not an issue
with MUSE discs, and the narrow wavelength of the laser allowed for the virtual elimination of crosstalk with normal
discs. In order to view MUSE encoded discs, it was necessary to have a MUSE decoder in addition to a compatible player.
There are televisions with MUSE decoding built-in and set top tuners with decoders that can provide the proper MUSE
input. Equipment prices were high, especially for early HDTVs which generally eclipsed US$10,000, and even in Japan
the market for MUSE was tiny. Players and discs were never officially sold in North America, although several distributors
imported MUSE discs along with other import titles. Terminator 2: Judgment Day, Lawrence of Arabia, A League of Their
Own, Bugsy, Close Encounters of the Third Kind, Bram Stoker's Dracula and Chaplin were among the theatrical releases
available on MUSE LDs. Several documentaries, including one about Formula One at Japan's Suzuka Circuit, were also
released. With the release of 16:9 televisions in the mid-1990s, Pioneer and Toshiba decided that it was time to
take advantage of this aspect ratio. Squeeze LDs were enhanced 16:9-ratio widescreen LaserDiscs. During the video
transfer stage, the movie was stored in an anamorphic "squeezed" format. The widescreen movie image was stretched
to fill the entire video frame with less or none of the video resolution wasted to create letterbox bars. The advantage
was a 33% greater vertical resolution compared to letterboxed widescreen LaserDisc. This same procedure was used
for anamorphic DVDs, but whereas virtually all DVD players can unsqueeze the image for 4:3 sets, very few LD players
had that ability, and few if any 4:3 sets could be adjusted to play the discs properly. If the discs were played on
a standard 4:3 television, the image would appear distorted. Since very few people owned 16:9 displays, the marketability of these
special discs was very limited. There were no anamorphic LaserDisc titles available in the US except for promotional
purposes. Upon purchase of a Toshiba 16:9 television, viewers had the option of selecting from a number of Warner Bros.
16:9 films. Titles included Unforgiven, Grumpy Old Men, The Fugitive, and Free Willy. The Japanese lineup of titles
was different. A series of releases under the banner "SQUEEZE LD" from Pioneer of mostly Carolco titles included
Basic Instinct, Stargate, Terminator 2: Judgment Day, Showgirls, Cutthroat Island, and Cliffhanger. Terminator 2
was released twice in Squeeze LD, the second release being THX certified and a notable improvement over the first.
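The 33% figure quoted for Squeeze LDs falls out of simple aspect-ratio arithmetic; a minimal sketch, assuming the nominal 480 active lines of NTSC video (an assumed figure, not stated in the text):

```python
# Vertical-resolution gain of anamorphic ("Squeeze") transfers over
# letterboxing, assuming 480 active NTSC lines (a nominal figure).
active_lines = 480
frame_ar = 4 / 3          # the 4:3 video frame
film_ar = 16 / 9          # the widescreen image

# Lines actually used by the image when letterboxed inside the 4:3 frame:
letterboxed = active_lines * frame_ar / film_ar
gain = active_lines / letterboxed - 1
print(f"{letterboxed:.0f} lines letterboxed vs {active_lines} anamorphic "
      f"({gain:.0%} more)")   # → 360 lines letterboxed vs 480 anamorphic (33% more)
```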
Another form of recordable LaserDisc that is completely playback-compatible with the LaserDisc format (unlike CRVdisc
with its caddy enclosure) is the RLV, or Recordable LaserVision disc. It was developed and first marketed by the
Optical Disc Corporation (ODC, now ODC Nimbus) in 1984. RLV discs, like CRVdiscs, are a write-once (WORM) technology
and function much like a CD-R disc. RLV discs look almost exactly like standard LaserDiscs, and can play in any standard
LaserDisc player after they have been recorded. The only cosmetic difference between an RLV disc and a regular factory-pressed
LaserDisc is its reflective purple-violet (or, on some RLV discs, blue) color, resulting from the dye embedded
in the reflective layer of the disc to make it recordable, as opposed to the silver mirror appearance of regular
LDs. The purplish color of RLVs is very similar to DVD-R and DVD+R discs. RLVs were popular for making short-run
quantities of LaserDiscs for specialized applications such as interactive kiosks and flight simulators. There were
also 12 cm (4.7 in) (CD size) "single"-style discs produced that were playable on LaserDisc players. These were referred
to as CD Video (CD-V) discs, and Video Single Discs (VSD). A CD-V carried up to five minutes of analog LaserDisc-type
video content (usually a music video), as well as up to 20 minutes of digital audio CD tracks. The original 1989
release of David Bowie's retrospective Sound + Vision CD box set prominently featured a CD-V video of "Ashes to Ashes",
and standalone promo CD-Vs featured the video, plus three audio tracks: "John, I'm Only Dancing", "Changes", and
"The Supermen". CD-Vs are not to be confused with Video CDs (which are all-digital and can only be played on VCD
players, DVD players, CD-i players, computers, and later-model LaserDisc players, such as the DVL series from Pioneer
that can also play DVDs). CD-Vs can only be played back on LaserDisc players with CD-V capability. VSDs were the
same as CD-Vs, but without the audio CD tracks. CD-Vs were somewhat popular for a brief time worldwide, but soon
faded from view. VSDs were popular only in Japan and other parts of Asia, and were never fully introduced to the
rest of the world.
During George's reign the break-up of the British Empire and its transition into the Commonwealth of Nations accelerated.
The parliament of the Irish Free State removed direct mention of the monarch from the country's constitution on the
day of his accession. From 1939, the Empire and Commonwealth, except Ireland, was at war with Nazi Germany. War with
Italy and Japan followed in 1940 and 1941, respectively. Though Britain and its allies were ultimately victorious
in 1945, the United States and the Soviet Union rose as pre-eminent world powers and the British Empire declined.
After the independence of India and Pakistan in 1947, George remained as king of both countries, but the title Emperor
of India was abandoned in June 1948. Ireland formally declared itself a republic and left the Commonwealth in 1949,
and India became a republic within the Commonwealth the following year. George adopted the new title of Head of the
Commonwealth. He was beset by health problems in the later years of his reign. His elder daughter, Elizabeth, succeeded him.
His birthday, 14 December 1895, was the 34th anniversary of the death of his great-grandfather, Prince Albert,
the Prince Consort. Uncertain of how the Prince Consort's widow, Queen Victoria, would take the news of the birth,
the Prince of Wales wrote to the Duke of York that the Queen had been "rather distressed". Two days later, he wrote
again: "I really think it would gratify her if you yourself proposed the name Albert to her". Queen Victoria was
mollified by the proposal to name the new baby Albert, and wrote to the Duchess of York: "I am all impatience to
see the new one, born on such a sad day but rather more dear to me, especially as he will be called by that dear
name which is a byword for all that is great and good". Consequently, he was baptised "Albert Frederick Arthur George"
at St. Mary Magdalene's Church near Sandringham three months later.[a] As a great-grandson of Queen Victoria, he
was known formally as His Highness Prince Albert of York from birth. Within the family, he was known informally as
"Bertie". His maternal grandmother, the Duchess of Teck, did not like the first name the baby had been given, and
she wrote prophetically that she hoped the last name "may supplant the less favoured one". Albert spent the first
six months of 1913 on the training ship HMS Cumberland in the West Indies and on the east coast of Canada. He was
rated as a midshipman aboard HMS Collingwood on 15 September 1913, and spent three months in the Mediterranean. His
fellow officers gave him the nickname "Mr. Johnson". One year after his commission, he began service in the First
World War. He was mentioned in despatches for his action as a turret officer aboard Collingwood in the Battle of
Jutland (31 May – 1 June 1916), an indecisive engagement with the German navy that was the largest naval action of
the war. He did not see further combat, largely because of ill health caused by a duodenal ulcer, for which he had
an operation in November 1917. In February 1918, he was appointed Officer in Charge of Boys at the Royal Naval Air
Service's training establishment at Cranwell. With the establishment of the Royal Air Force two months later and
the transfer of Cranwell from Navy to Air Force control, he transferred from the Royal Navy to the Royal Air Force.
He was appointed Officer Commanding Number 4 Squadron of the Boys' Wing at Cranwell until August 1918, before reporting
to the RAF's Cadet School at St Leonards-on-Sea where he completed a fortnight's training and took command of a squadron
on the Cadet Wing. He was the first member of the royal family to be certified as a fully qualified pilot. During
the closing weeks of the war, he served on the staff of the RAF's Independent Air Force at its headquarters in Nancy,
France. Following the disbanding of the Independent Air Force in November 1918, he remained on the Continent for
two months as a staff officer with the Royal Air Force until posted back to Britain. He accompanied the Belgian monarch
King Albert on his triumphal reentry into Brussels on 22 November. Prince Albert qualified as an RAF pilot on 31
July 1919 and gained a promotion to squadron leader on the following day. In October 1919, Albert went up to Trinity
College, Cambridge, where he studied history, economics and civics for a year. On 4 June 1920, he was created Duke
of York, Earl of Inverness and Baron Killarney. He began to take on more royal duties. He represented his father,
and toured coal mines, factories, and railyards. Through such visits he acquired the nickname of the "Industrial
Prince". His stammer, and his embarrassment over it, together with his tendency to shyness, caused him to appear
much less impressive than his older brother, Edward. However, he was physically active and enjoyed playing tennis.
He played at Wimbledon in the Men's Doubles with Louis Greig in 1926, losing in the first round. He developed an
interest in working conditions, and was President of the Industrial Welfare Society. His series of annual summer
camps for boys between 1921 and 1939 brought together boys from different social backgrounds. In a time when royals
were expected to marry fellow royals, it was unusual that Albert had a great deal of freedom in choosing a prospective
wife. An infatuation with the already-married Australian socialite Sheila, Lady Loughborough, came to an end in April
1920 when the King, with the promise of the dukedom of York, persuaded Albert to stop seeing her. That year, he met
for the first time since childhood Lady Elizabeth Bowes-Lyon, the youngest daughter of the Earl and Countess of Strathmore
and Kinghorne. He became determined to marry her. She rejected his proposal twice, in 1921 and 1922, reportedly because
she was reluctant to make the sacrifices necessary to become a member of the royal family. In the words of Lady Elizabeth's
mother, Albert would be "made or marred" by his choice of wife. After a protracted courtship, Elizabeth agreed to
marry him. Because of his stammer, Albert dreaded public speaking. After his closing speech at the British Empire
Exhibition at Wembley on 31 October 1925, one which was an ordeal for both him and his listeners, he began to see
Lionel Logue, an Australian-born speech therapist. The Duke and Logue practised breathing exercises, and the Duchess
rehearsed with him patiently. Subsequently, he was able to speak with less hesitation. With his delivery improved,
the Duke opened the new Parliament House in Canberra, Australia, during a tour of the empire in 1927. His journey
by sea to Australia, New Zealand and Fiji took him via Jamaica, where Albert played doubles tennis partnered with
a black man, which was unusual at the time and taken locally as a display of equality between races. The Duke and
Duchess of York had two children: Elizabeth (called "Lilibet" by the family), and Margaret. The Duke and Duchess
and their two daughters lived a relatively sheltered life at their London residence, 145 Piccadilly. They were a
close and loving family. One of the few stirs arose when the Canadian Prime Minister, R. B. Bennett, considered the
Duke for Governor General of Canada in 1931—a proposal that King George V rejected on the advice of the Secretary
of State for Dominion Affairs, J. H. Thomas. As Edward was unmarried and had no children, Albert was the heir presumptive
to the throne. Less than a year later, on 11 December 1936, Edward VIII abdicated in order to marry his mistress,
Wallis Simpson, who was divorced from her first husband and divorcing her second. Edward had been advised by British
Prime Minister Stanley Baldwin that he could not remain king and marry a divorced woman with two living ex-husbands.
Edward chose abdication in preference to abandoning his marriage plans. Thus Albert became king, a position he was
reluctant to accept. The day before the abdication, he went to London to see his mother, Queen Mary. He wrote in
his diary, "When I told her what had happened, I broke down and sobbed like a child." On the day of the abdication,
the Oireachtas, the parliament of the Irish Free State, removed all direct mention of the monarch from the Irish
constitution. The next day, it passed the External Relations Act, which gave the monarch limited authority (strictly
on the advice of the government) to appoint diplomatic representatives for Ireland and to be involved in the making
of foreign treaties. The two acts made the Irish Free State a republic in essence without removing its links to the
Commonwealth. Albert assumed the regnal name "George VI" to emphasise continuity with his father and restore confidence
in the monarchy. The beginning of George VI's reign was taken up by questions surrounding his predecessor and brother,
whose titles, style and position were uncertain. He had been introduced as "His Royal Highness Prince Edward" for
the abdication broadcast, but George VI felt that by abdicating and renouncing the succession Edward had lost the
right to bear royal titles, including "Royal Highness". In settling the issue, George's first act as king was to
confer upon his brother the title and style "His Royal Highness The Duke of Windsor", but the Letters Patent creating
the dukedom prevented any wife or children from bearing royal styles. George VI was also forced to buy from Edward
the royal residences of Balmoral Castle and Sandringham House, as these were private properties and did not pass
to George VI automatically. Three days after his accession, on his 41st birthday, he invested his wife, the new queen
consort, with the Order of the Garter. George VI's coronation took place on 12 May 1937, the date previously intended
for Edward's coronation. In a break with tradition, Queen Mary attended the ceremony in a show of support for her
son. There was no Durbar held in Delhi for George VI, as had occurred for his father, as the cost would have been
a burden to the government of India. Rising Indian nationalism made the welcome that the royal couple would have
received likely to be muted at best, and a prolonged absence from Britain would have been undesirable in the tense
period before the Second World War. Two overseas tours were undertaken, to France and to North America, both of which
promised greater strategic advantages in the event of war. The growing likelihood of war in Europe dominated the
early reign of George VI. The King was constitutionally bound to support Prime Minister Neville Chamberlain's appeasement
of Hitler. However, when the King and Queen greeted Chamberlain on his return from negotiating the Munich Agreement
in 1938, they invited him to appear on the balcony of Buckingham Palace with them. This public association of the
monarchy with a politician was exceptional, as balcony appearances were traditionally restricted to the royal family.
While broadly popular among the general public, Chamberlain's policy towards Hitler was the subject of some opposition
in the House of Commons, which led historian John Grigg to describe the King's behaviour in associating himself so
prominently with a politician as "the most unconstitutional act by a British sovereign in the present century". In
May and June 1939, the King and Queen toured Canada and the United States. From Ottawa, the royal couple were accompanied
throughout by Canadian Prime Minister William Lyon Mackenzie King, to present themselves in North America as King
and Queen of Canada. George was the first reigning monarch of Canada to visit North America, although he had been
to Canada previously as Prince Albert and as Duke of York. Both Governor General of Canada Lord Tweedsmuir and Mackenzie
King hoped that the King's presence in Canada would demonstrate the principles of the Statute of Westminster 1931,
which gave full sovereignty to the British Dominions. On 19 May, George VI personally accepted and approved the Letter
of Credence of the new U.S. Ambassador to Canada, Daniel Calhoun Roper; gave Royal Assent to nine parliamentary bills;
and ratified two international treaties with the Great Seal of Canada. The official royal tour historian, Gustave
Lanctot, wrote "the Statute of Westminster had assumed full reality" and George gave a speech emphasising "the free
and equal association of the nations of the Commonwealth". The trip was intended to soften the strong isolationist
tendencies among the North American public with regard to the developing tensions in Europe. Although the aim of
the tour was mainly political, to shore up Atlantic support for the United Kingdom in any future war, the King and
Queen were enthusiastically received by the public. The fear that George would be compared unfavourably to his predecessor,
Edward VIII, was dispelled. They visited the 1939 New York World's Fair and stayed with President Franklin D. Roosevelt
at the White House and at his private estate at Hyde Park, New York. A strong bond of friendship was forged between
the King and Queen and the President during the tour, which had major significance in the relations between the United
States and the United Kingdom through the ensuing war years. In September 1939, Britain and the self-governing Dominions,
but not Ireland, declared war on Nazi Germany. George VI and his wife resolved to stay in London, despite German
bombing raids. They officially stayed in Buckingham Palace throughout the war, although they usually spent nights
at Windsor Castle. The first German raid on London, on 7 September 1940, killed about one thousand civilians, mostly
in the East End. On 13 September, the King and Queen narrowly avoided death when two German bombs exploded in a courtyard
at Buckingham Palace while they were there. In defiance, the Queen famously declared: "I am glad we have been bombed.
It makes me feel we can look the East End in the face". The royal family were portrayed as sharing the same dangers
and deprivations as the rest of the country. They were subject to rationing restrictions, and U.S. First Lady Eleanor
Roosevelt remarked on the rationed food served and the limited bathwater that was permitted during a stay at the
unheated and boarded-up Palace. In August 1942, the King's brother, Prince George, Duke of Kent, was killed on active
service. In 1940, Winston Churchill replaced Neville Chamberlain as Prime Minister, though personally George would
have preferred to appoint Lord Halifax. After the King's initial dismay over Churchill's appointment of Lord Beaverbrook
to the Cabinet, he and Churchill developed "the closest personal relationship in modern British history between a
monarch and a Prime Minister". Every Tuesday for four and a half years from September 1940, the two men met privately
for lunch to discuss the war in secret and with frankness. Throughout the war, the King and Queen provided morale-boosting
visits throughout the United Kingdom, visiting bomb sites, munitions factories, and troops. The King visited military
forces abroad in France in December 1939, North Africa and Malta in June 1943, Normandy in June 1944, southern Italy
in July 1944, and the Low Countries in October 1944. Their high public profile and apparently indefatigable determination
secured their place as symbols of national resistance. At a social function in 1944, Chief of the Imperial General
Staff Sir Alan Brooke revealed that every time he met Field Marshal Bernard Montgomery, he thought Montgomery was after his
job. The King replied: "You should worry, when I meet him, I always think he's after mine!" George VI's reign saw
the acceleration of the dissolution of the British Empire. The Statute of Westminster 1931 had already acknowledged
the evolution of the Dominions into separate sovereign states. The process of transformation from an empire to a
voluntary association of independent states, known as the Commonwealth, gathered pace after the Second World War.
During the ministry of Clement Attlee, British India became the two independent dominions of India and Pakistan in
1947. George relinquished the title of Emperor of India, and became King of India and King of Pakistan instead. In
1950 he ceased to be King of India when it became a republic within the Commonwealth of Nations, but he remained
King of Pakistan until his death and India recognised his new title of Head of the Commonwealth. Other countries
left the Commonwealth, such as Burma in January 1948, Palestine (divided between Israel and the Arab states) in May
1948 and the Republic of Ireland in 1949. In 1947, the King and his family toured Southern Africa. The Prime Minister
of the Union of South Africa, Jan Smuts, was facing an election and hoped to make political capital out of the visit.
George was appalled, however, when instructed by the South African government to shake hands only with whites, and
referred to his South African bodyguards as "the Gestapo". Despite the tour, Smuts lost the election the following
year, and the new government instituted a strict policy of racial segregation. The stress of the war had taken its
toll on the King's health, exacerbated by his heavy smoking and subsequent development of lung cancer among other
ailments, including arteriosclerosis and thromboangiitis obliterans. A planned tour of Australia and New Zealand
was postponed after the King suffered an arterial blockage in his right leg, which threatened the loss of the leg
and was treated with a right lumbar sympathectomy in March 1949. His elder daughter Elizabeth, the heir presumptive,
took on more royal duties as her father's health deteriorated. The delayed tour was re-organised, with Elizabeth
and her husband, the Duke of Edinburgh, taking the place of the King and Queen. The King was well enough to open
the Festival of Britain in May 1951, but on 23 September 1951, his left lung was removed by Clement Price Thomas
after a malignant tumour was found. In October 1951, Princess Elizabeth and the Duke of Edinburgh went on a month-long
tour of Canada; the trip had been delayed for a week due to the King's illness. At the State Opening of Parliament
in November, the King's speech from the throne was read for him by the Lord Chancellor, Lord Simonds. His Christmas
broadcast of 1951 was recorded in sections, and then edited together. On the morning of 6 February 1952, George VI was found dead in bed at Sandringham House; he had died in his sleep of a coronary thrombosis, aged 56. From 9 February for two days his coffin rested
in St. Mary Magdalene Church, Sandringham, before lying in state at Westminster Hall from 11 February. His funeral
took place at St. George's Chapel, Windsor Castle, on the 15th. He was interred initially in the Royal Vault until
he was transferred to the King George VI Memorial Chapel inside St. George's on 26 March 1969. In 2002, fifty years
after his death, the remains of his widow, Queen Elizabeth The Queen Mother, and the ashes of his younger daughter
Princess Margaret, who both died that year, were interred in the chapel alongside him. In the words of Labour Member
of Parliament George Hardie, the abdication crisis of 1936 did "more for republicanism than fifty years of propaganda".
George VI wrote to his brother Edward that in the aftermath of the abdication he had reluctantly assumed "a rocking
throne", and tried "to make it steady again". He became king at a point when public faith in the monarchy was at
a low ebb. During his reign his people endured the hardships of war, and imperial power was eroded. However, as a
dutiful family man and by showing personal courage, he succeeded in restoring the popularity of the monarchy.
Federalism refers to the mixed or compound mode of government, combining a general government (the central or 'federal' government)
with regional governments (provincial, state, Land, cantonal, territorial or other sub-unit governments) in a single
political system. Its distinctive feature, exemplified in the founding example of modern federalism of the United
States of America under the Constitution of 1789, is a relationship of parity between the two levels of government
established. It can thus be defined as a form of government in which there is a division of powers between two levels
of government of equal status. Until recently, in the absence of prior agreement on a clear and precise definition,
the concept was thought to mean (as a shorthand) 'a division of sovereignty between two levels of government'. New
research, however, argues that this cannot be correct, as dividing sovereignty, when this concept is properly understood
in its core meaning of the final and absolute source of political authority in a political community, is not possible.
The descent of the United States into Civil War in the mid-nineteenth century, over disputes about unallocated competences
concerning slavery and ultimately the right of secession, showed this. One or other level of government could be
sovereign to decide such matters, but not both simultaneously. Therefore, it is now suggested that federalism is
more appropriately conceived as 'a division of the powers flowing from sovereignty between two levels of government'.
What differentiates the concept from other multi-level political forms is the characteristic of equality of standing
between the two levels of government established. This clarified definition opens the way to identifying two distinct
federal forms, where before only one was known, based upon whether sovereignty resides in the whole (in one people)
or in the parts (in many peoples): the federal state (or federation) and the federal union of states (or federal
union), respectively. Leading examples of the federal state include the United States, Germany, Canada, Switzerland,
Australia and India. The leading example of the federal union of states is the European Union. The terms 'federalism'
and 'confederalism' both have a root in the Latin word foedus, meaning treaty, pact or covenant. Their common meaning
until the late eighteenth century was a simple league or inter-governmental relationship among sovereign states based
upon a treaty. They were therefore initially synonyms. It was in this sense that James Madison in Federalist 39 had
referred to the new United States as 'neither a national nor a federal Constitution, but a composition of both' (i.e.
neither a single large unitary state nor a league/confederation among several small states, but a hybrid of the two).
In the course of the nineteenth century the meaning of federalism would come to shift, strengthening to refer uniquely
to the novel compound political form, while the meaning of confederalism would remain at a league of states. Thus,
this article relates to the modern usage of the word 'federalism'. Whilst it is often perceived as an optimal solution
for states comprising different cultural or ethnic communities, the federalist model seems to work best in largely
homogeneous states such as the United States, Germany or Australia, but there is also evidence to the contrary such
as in Switzerland. Tensions between territories can still be found in federalist countries such as Canada, and federation
as a way to appease and quell military conflict has recently failed in places like Libya and Iraq, while the formula
is simultaneously proposed and dismissed in countries such as Ukraine or Syria. Federations such as Yugoslavia or
Czechoslovakia collapsed as soon as it was possible to put the model to the test. In the United States, federalism
originally referred to belief in a stronger central government. When the U.S. Constitution was being drafted, the
Federalist Party supported a stronger central government, while "Anti-Federalists" wanted a weaker central government.
This is very different from the modern usage of "federalism" in Europe and the United States. The distinction stems
from the fact that "federalism" is situated in the middle of the political spectrum between a confederacy and a unitary
state. The U.S. Constitution was written as a reaction to the Articles of Confederation, under which the United States
was a loose confederation with a weak central government. In contrast, Europe has a greater history of unitary states
than North America; thus, European "federalism" argues for a weaker central government, relative to a unitary state.
The modern American usage of the word is much closer to the European sense. As the power of the Federal government
has increased, some people have perceived a much more unitary state than they believe the Founding Fathers intended.
Most people politically advocating "federalism" in the United States argue in favor of limiting the powers of the
federal government, especially the judiciary (see Federalist Society, New Federalism). On the 1st of January 1901
the nation-state of Australia officially came into existence as a federation. The Australian continent was colonised
by the United Kingdom in 1788, which subsequently established six, eventually self-governing, colonies there. In
the 1890s the governments of these colonies all held referendums on becoming a unified, self-governing "Commonwealth"
within the British Empire. When all the colonies voted in favour of federation, the Federation of Australia commenced,
resulting in the establishment of the Commonwealth of Australia in 1901. The model of Australian federalism adheres
closely to the original model of the United States of America, although it does so through a parliamentary Westminster
system rather than a presidential system. In Brazil, the fall of the monarchy in 1889 by a military coup d'état led
to the rise of the presidential system, headed by Deodoro da Fonseca. Aided by well-known jurist Ruy Barbosa, Fonseca
established federalism in Brazil by decree, but this system of government would be confirmed by every Brazilian constitution
since 1891, although some of them would distort some of the federalist principles. The 1937 Constitution, for example,
granted the federal government the authority to appoint State Governors (called interventors) at will, thus centralizing
power in the hands of President Getúlio Vargas. Brazil is one of the world's largest federations.
The government of India is based on a tiered system, in which the
Constitution of India delineates the subjects on which each tier of government has executive powers. The Constitution
originally provided for a two-tier system of government, the Union Government (also known as the Central Government),
representing the Union of India, and the State governments. Later, a third tier was added in the form of Panchayats
and Municipalities. In the current arrangement, the Seventh Schedule of the Indian Constitution delimits the subjects
of each level of governmental jurisdiction, dividing them into three lists: the Union List, the State List and the Concurrent List. A distinguishing aspect of Indian federalism
is that unlike many other forms of federalism, it is asymmetric. Article 370 makes special provisions for the state
of Jammu and Kashmir as per its Instrument of Accession. Article 371 makes special provisions for the states of Andhra
Pradesh, Arunachal Pradesh, Assam, Goa, Mizoram, Manipur, Nagaland and Sikkim as per their accession or state-hood
deals. Another aspect of Indian federalism is the system of President's Rule, in which the central government (through
its appointed Governor) takes control of a state's administration for certain months when no party can form a government
in the state or when there is violent disturbance there. Although the drafts of both the Maastricht treaty and
the Treaty establishing a Constitution for Europe mentioned federalism, the representatives of the member countries
(all of whom would have had to agree to use of the term) never formally adopted it. The strongest advocates of European
federalism have been Germany, Italy, Belgium and Luxembourg while those historically most strongly opposed have been
the United Kingdom, Denmark and France (with conservative presidents and governments). Since the presidency of François
Mitterrand (1981-1995), the French authorities have adopted a much more pro-European-unification position, as they
consider that a strong EU presents the best "insurance" against a unified Germany, which might become too strong
and thus a threat to its neighbours. The Federal War in Venezuela ended in 1863 with the signing of the Treaty of Coche by both
the centralist government of the time and the Federal Forces. The United States of Venezuela were subsequently incorporated
under a "Federation of Sovereign States" upon principles borrowed from the Articles of Confederation of the United
States of America. In this Federation, each State had a "President" of its own that controlled almost every issue,
even the creation of "State Armies," while the Federal Army was required to obtain presidential permission to enter
any given state. Belgium, by contrast, is a federation with three components. An affirmative resolution
concerning Brussels' place in the federal system passed in the parliaments of Wallonia and Brussels. These resolutions
passed against the desires of Dutch-speaking parties, who are generally in favour of a federal system with two components
(i.e. the Dutch and French Communities of Belgium). However, the Flemish representatives in the Parliament of the
Brussels Capital-Region voted in favour of the Brussels resolution, with the exception of one party. The chairman
of the Walloon Parliament stated on July 17, 2008 that "Brussels would take an attitude". Brussels' parliament passed
the resolution on July 18, 2008. In order to manage the tensions present in the Spanish transition to democracy,
the drafters of the current Spanish constitution avoided giving labels such as 'federal' to the territorial arrangements.
Moreover, unlike in a federal system, the main taxes are collected centrally in Madrid (except for the Basque Country
and Navarre, which were recognized in the Spanish democratic constitution as charter territories for historical
reasons) and then distributed to the Autonomous Communities. Anarchists are against the State but are not against
political organization or "governance"—so long as it is self-governance utilizing direct democracy. The mode of political
organization preferred by anarchists, in general, is federalism or confederalism. However, the anarchist
definition of federalism tends to differ from the definition of federalism assumed by pro-state political scientists.
A brief description of this conception of federalism can be found in section I.5 of An Anarchist FAQ. Alternatively, or in addition
to this practice, the members of an upper house may be indirectly elected by the government or legislature of the
component states, as occurred in the United States prior to 1913, or be actual members or delegates of the state
governments, as, for example, is the case in the German Bundesrat and in the Council of the European Union. The lower
house of a federal legislature is usually directly elected, with apportionment in proportion to population, although
states may sometimes still be guaranteed a certain minimum number of seats. Federalism, and other forms of territorial
autonomy, is generally seen as a useful way to structure political systems in order to prevent violence among different
groups within countries, because it allows certain groups to legislate at the subnational level. Some scholars have
suggested, however, that federalism can divide countries and result in state collapse because it creates proto-states.
Still others have shown that federalism is only divisive when it lacks mechanisms that encourage political parties
to compete across regional boundaries. After the imperial period, the Russian subdivision of government moved towards
a generally autonomous model, beginning with the establishment of the USSR (of which Russia was governed as a part).
It was liberalized after the dissolution of the Soviet Union, with the reforms under Boris Yeltsin preserving much of
the Soviet structure while applying increasingly liberal reforms to the governance of the constituent republics and
subjects (while also coming into conflict with Chechen secessionist rebels during the Chechen War). Some of the reforms
under Yeltsin were scaled back by Vladimir Putin. Federalism in the United States is the evolving relationship between
state governments and the federal government of the United States. American government has evolved from a system
of dual federalism to one of associative federalism. In "Federalist No. 46," James Madison asserted that the states
and national government "are in fact but different agents and trustees of the people, constituted with different
powers." Alexander Hamilton, writing in "Federalist No. 28," suggested that both levels of government would exercise
authority to the citizens' benefit: "If their [the peoples'] rights are invaded by either, they can make use of the
other as the instrument of redress." Because the states were preexisting political entities, the U.S. Constitution
did not need to define or explain federalism in any one section, but it often mentions the rights and responsibilities
of state governments and state officials in relation to the federal government. The federal government has certain
express powers (also called enumerated powers) which are powers spelled out in the Constitution, including the right
to levy taxes, declare war, and regulate interstate and foreign commerce. In addition, the Necessary and Proper Clause
gives the federal government the implied power to pass any law "necessary and proper" for the execution of its express
powers. Other powers—the reserved powers—are reserved to the people or the states. The power delegated to the federal
government was significantly expanded by the Supreme Court decision in McCulloch v. Maryland (1819), amendments to
the Constitution following the Civil War, and by some later amendments—as well as the overall claim of the Civil
War, that the states were legally subject to the final dictates of the federal government. The Federalist Party of
the United States was opposed by the Democratic-Republicans, including powerful figures such as Thomas Jefferson.
The Democratic-Republicans mainly believed that: the Legislature had too much power (mainly because of the Necessary
and Proper Clause) and that they were unchecked; the Executive had too much power, and that there was no check on
the executive; a dictator would arise; and that a bill of rights should be coupled with the constitution to prevent
a dictator (then believed to eventually be the president) from exploiting or tyrannizing citizens. The federalists,
on the other hand, argued that it was impossible to list all the rights, and those that were not listed could be
easily overlooked because they were not in the official bill of rights. Rather, rights in specific cases were to
be decided by the judicial system of courts. The meaning of federalism, as a political movement, and of what constitutes
a 'federalist', varies with country and historical context. Movements associated with the establishment or development of federations can exhibit either centralising or decentralising trends. For example,
at the time those nations were being established, factions known as "federalists" in the United States and Australia
advocated the formation of strong central government. Similarly, in European Union politics, federalists mostly seek
greater EU integration. In contrast, in Spain and in post-war Germany, federal movements have sought decentralisation:
the transfer of power from central authorities to local units. In Canada, where Quebec separatism has been a political
force for several decades, the "federalist" impulse aims to keep Quebec inside Canada. From 1938 until 1995, the
U.S. Supreme Court did not invalidate any federal statute as exceeding Congress' power under the Commerce Clause.
Most actions by the federal government can find some legal support among the express powers, such as the Commerce
Clause, whose applicability has been narrowed by the Supreme Court in recent years. In 1995 the Supreme Court struck down the Gun-Free School Zones Act in the United States v. Lopez decision, and later struck down the civil remedy portion of the Violence Against Women Act of 1994 in the United States v. Morrison decision. More recently, the Commerce Clause was interpreted to reach intrastate marijuana cultivation in the Gonzales v. Raich decision.
In Belgium, the political landscape broadly consists of two components: the Dutch-speaking population, represented by Dutch-language political parties, and the French-speaking populations of Wallonia and Brussels, represented by French-speaking parties. The
Brussels region emerges as a third component. In this dual form of federalism, with the special position of Brussels, a number of political issues, even minor ones, are consequently fought out across the Dutch/French language divide. On such issues, a final decision is possible only in the form of a compromise. This tendency
gives this dual federalism model a number of traits that generally are ascribed to confederalism, and makes the future
of Belgian federalism contentious. Some federal constitutions also provide that certain constitutional amendments
cannot occur without the unanimous consent of all states or of a particular state. The US Constitution provides that no state may be deprived of equal representation in the Senate without its consent. In Australia, if a proposed amendment
will specifically impact one or more states, then it must be endorsed in the referendum held in each of those states.
Any amendment to the Canadian constitution that would modify the role of the monarchy would require unanimous consent
of the provinces. The German Basic Law provides that no amendment is admissible at all that would abolish the federal
system. Where every component state of a federation possesses the same powers, the arrangement is known as 'symmetric federalism'.
Asymmetric federalism exists where states are granted different powers, or some possess greater autonomy than others
do. This is often done in recognition of the existence of a distinct culture in a particular region or regions. In
Spain, the Basques and Catalans, as well as the Galicians, spearheaded a historic movement to have their national
specificity recognized, crystallizing in the "historical communities" such as Navarre, Galicia, Catalonia, and the
Basque Country. These communities hold more powers than those granted under the later, expanded arrangement for the other Spanish regions, the so-called Spain of the autonomous communities (also called the "coffee for everyone" arrangement), partly to accommodate their separate identity and appease peripheral nationalist leanings, and partly out of respect for specific rights they had held earlier in history. Strictly speaking, however, Spain is not a federation but a decentralized administrative organization of the state. Federations often have special procedures for amendment of the federal constitution. As well as reflecting the federal structure of the state, this may guarantee that the self-governing status of the component states cannot
be abolished without their consent. An amendment to the constitution of the United States must be ratified by three-quarters
of either the state legislatures, or of constitutional conventions specially elected in each of the states, before
it can come into effect. In referendums to amend the constitutions of Australia and Switzerland it is required that
a proposal be endorsed not just by an overall majority of the electorate in the nation as a whole, but also by separate
majorities in each of a majority of the states or cantons. In Australia, this latter requirement is known as a double
majority. The structures of most federal governments incorporate mechanisms to protect the rights of component states.
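The amendment thresholds described above, the US three-quarters ratification rule and the Australian/Swiss double majority, can be sketched as simple checks (a hypothetical illustration with made-up vote counts, not any official tallying procedure):

```python
import math

def us_ratified(ratifying_states, total_states=50):
    # Article V pattern: three-quarters of the states must ratify.
    return ratifying_states >= math.ceil(0.75 * total_states)

def double_majority(state_results):
    # Australian/Swiss rule: a proposal needs a national majority AND
    # majorities in a majority of the states/cantons.
    # state_results maps state name -> (yes_votes, no_votes).
    total_yes = sum(y for y, n in state_results.values())
    total_no = sum(n for y, n in state_results.values())
    states_in_favor = sum(1 for y, n in state_results.values() if y > n)
    return total_yes > total_no and states_in_favor > len(state_results) / 2

# With 50 states, three-quarters rounds up to 38 ratifications.
print(us_ratified(38), us_ratified(37))  # -> True False

# Hypothetical referendum: national majority, and 4 of 6 states in favor.
results = {"A": (60, 40), "B": (55, 45), "C": (45, 55),
           "D": (52, 48), "E": (40, 60), "F": (65, 35)}
print(double_majority(results))  # -> True
```

Note that both conditions are independent: a proposal with a large national majority concentrated in a few populous states still fails the state-majority leg of the test.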
One method, known as 'intrastate federalism', is to directly represent the governments of component states in federal
political institutions. Where a federation has a bicameral legislature the upper house is often used to represent
the component states while the lower house represents the people of the nation as a whole. A federal upper house
may be based on a special scheme of apportionment, as is the case in the senates of the United States and Australia,
where each state is represented by an equal number of senators irrespective of the size of its population. Federations
often embody the paradox of being a union of states, while still being states (or having aspects of statehood) in
themselves. For example, James Madison, a principal author of the US Constitution, wrote in Federalist Paper No. 39 that the
US Constitution "is in strictness neither a national nor a federal constitution; but a composition of both. In its
foundation, it is federal, not national; in the sources from which the ordinary powers of the Government are drawn,
it is partly federal, and partly national..." This stems from the fact that states in the US maintain all sovereignty
that they do not yield to the federation by their own consent. This was reaffirmed by the Tenth Amendment to the
United States Constitution, which reserves all powers and rights that are not delegated to the Federal Government
as left to the States and to the people. Usually, a federation is formed at two levels: the central government and
the regions (states, provinces, territories), and little to nothing is said about second or third level administrative
political entities. Brazil is an exception, because the 1988 Constitution included the municipalities as autonomous
political entities making the federation tripartite, encompassing the Union, the States, and the municipalities.
Each state is divided into municipalities (municípios) with their own legislative council (câmara de vereadores)
and a mayor (prefeito), which are partly autonomous from both Federal and State Government. Each municipality has
a "little constitution", called "organic law" (lei orgânica). Mexico is an intermediate case, in that municipalities
are granted full autonomy by the federal constitution and their existence as autonomous entities (municipio libre,
"free municipality") is established by the federal government and cannot be revoked by the states' constitutions.
Moreover, the federal constitution determines which powers and competencies belong exclusively to the municipalities
and not to the constituent states. However, municipalities do not have an elected legislative assembly. China is
the largest unitary state in the world by both population and land area. Although China has experienced long periods of central rule, it is often argued that the unitary structure of the Chinese government is far too unwieldy to
effectively and equitably manage the country's affairs. On the other hand, Chinese nationalists are suspicious of
decentralization as a form of secessionism and a backdoor for national disunity; still others argue that the degree
of autonomy given to provincial-level officials in the People's Republic of China amounts to a de facto federalism.
The Philippines is a unitary state with some powers devolved to Local Government Units (LGUs) under the terms of
the Local Government Code. There is also one autonomous region, the Autonomous Region in Muslim Mindanao. Over the
years various modifications have been proposed to the Constitution of the Philippines, including possible transition
to a federal system as part of a shift to a parliamentary system. In 2004, Philippine President Gloria Macapagal
Arroyo established the Consultative Commission which suggested such a Charter Change but no action was taken by the
Philippine Congress to amend the 1987 Constitution. Spain is a unitary state with a high level of decentralisation,
often regarded as a federal system in all but name, or a "federation without federalism". The country has been described as "an extraordinarily decentralized country", with the central government accounting for just 18% of public
spending, 38% for the regional governments, 13% for the local councils, and the remaining 31% for the social security
system. The current Spanish constitution has been implemented in such a way that, in many respects, Spain can be
compared to countries which are undeniably federal. The United Kingdom has traditionally been governed as a unitary
state by the Westminster Parliament in London. Instead of adopting a federal model, the UK has relied on gradual
devolution to decentralise political power. Devolution in the UK began with the Government of Ireland Act 1914 which
granted home rule to Ireland as a constituent country of the former United Kingdom of Great Britain and Ireland.
Following the partition of Ireland in 1921 which saw the creation of the sovereign Irish Free State (which eventually
evolved into the modern day Republic of Ireland), Northern Ireland retained its devolved government through the Parliament
of Northern Ireland, the only part of the UK to have such a body at this time. This body was suspended in 1972 and
Northern Ireland was governed by direct rule during the period of conflict known as The Troubles. In modern times,
a process of devolution in the United Kingdom has decentralised power once again. Since the 1997 referendums in Scotland
and Wales and the Good Friday Agreement in Northern Ireland, three of the four constituent countries of the UK now
have some level of autonomy. Government has been devolved to the Scottish Parliament, the National Assembly for Wales
and the Northern Ireland Assembly. England does not have its own parliament and English affairs continue to be decided
by the Westminster Parliament. In 1998 a set of eight unelected Regional assemblies, or chambers, was created to
support the English Regional Development Agencies, but these were abolished between 2008 and 2010. The Regions of
England continue to be used in certain governmental administrative functions. Federalism also finds expression in
ecclesiology (the doctrine of the church). For example, presbyterian church governance resembles parliamentary republicanism
(a form of political federalism) to a large extent. In Presbyterian denominations, the local church is ruled by elected
elders, some of whom are ministers. Each church then sends representatives or commissioners to presbyteries and
further to a general assembly. Each greater level of assembly has ruling authority over its constituent members.
In this governmental structure, each component has some level of sovereignty over itself. As in political federalism,
in presbyterian ecclesiology there is shared sovereignty. Some Christians argue that the earliest source of political
federalism (or federalism in human institutions; in contrast to theological federalism) is the ecclesiastical federalism
found in the Bible. They point to the structure of the early Christian Church as described (and prescribed, as believed
by many) in the New Testament. In their arguments, this is particularly demonstrated in the Council of Jerusalem,
described in Acts chapter 15, where the Apostles and elders gathered together to govern the Church; the Apostles
being representatives of the universal Church, and elders being such for the local church. To this day, elements
of federalism can be found in almost every Christian denomination, some more than others. In almost all federations
the central government enjoys the powers of foreign policy and national defense as exclusive federal powers. Were
this not the case, a federation would not be a single sovereign state, per the UN definition. Notably, the states
of Germany retain the right to act on their own behalf at an international level, a condition originally granted
in exchange for the Kingdom of Bavaria's agreement to join the German Empire in 1871. Beyond this the precise division
of power varies from one nation to another. The constitutions of Germany and the United States provide that all powers
not specifically granted to the federal government are retained by the states. The constitutions of some countries, such as Canada and India, on the other hand, state that powers not explicitly granted to the provincial governments are retained by the federal government. Much like the US system, the Australian Constitution allocates to the Federal
government (the Commonwealth of Australia) the power to make laws about certain specified matters which were considered
too difficult for the States to manage, so that the States retain all other areas of responsibility. Under the division
of powers of the European Union in the Lisbon Treaty, powers which are not either exclusively of European competence
or shared between EU and state as concurrent powers are retained by the constituent states.
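The residual-power patterns described above can be sketched as a toy rule (the power names and the helper function are hypothetical illustrations, not a model of any real constitution):

```python
# Toy sketch of residual powers. In the US/German pattern, a fixed list of
# federal powers is enumerated and everything else defaults to the states;
# in the Canadian/Indian pattern, provincial powers are enumerated and the
# residue defaults to the federal government. Power names are hypothetical.

def holder(power, enumerated, enumerated_level, residual_level):
    """Return which level of government holds `power`."""
    return enumerated_level if power in enumerated else residual_level

US_FEDERAL = {"levy taxes", "declare war", "regulate interstate commerce"}

# US/German pattern: unlisted powers are reserved to the states.
print(holder("declare war", US_FEDERAL, "federal", "states"))  # -> federal
print(holder("education", US_FEDERAL, "federal", "states"))    # -> states

# Canadian/Indian pattern: unlisted powers stay with the centre.
PROVINCIAL = {"education", "health"}
print(holder("aeronautics", PROVINCIAL, "provincial", "federal"))  # -> federal
```

The interesting design choice is only which level receives the residue; the enumeration itself works the same way in both patterns.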
The annelids are bilaterally symmetrical, triploblastic, coelomate, invertebrate organisms. Many species also have parapodia for
locomotion. Most textbooks still use the traditional division into polychaetes (almost all marine), oligochaetes
(which include earthworms) and leech-like species. Cladistic research since 1997 has radically changed this scheme,
viewing leeches as a sub-group of oligochaetes and oligochaetes as a sub-group of polychaetes. In addition, the Pogonophora,
Echiura and Sipuncula, previously regarded as separate phyla, are now regarded as sub-groups of polychaetes. Annelids
are considered members of the Lophotrochozoa, a "super-phylum" of protostomes that also includes molluscs, brachiopods,
flatworms and nemerteans. The basic annelid form consists of multiple segments. Each segment has the same sets of
organs and, in most polychaetes, has a pair of parapodia that many species use for locomotion. Septa separate the
segments of many species, but are poorly defined or absent in others, and Echiura and Sipuncula show no obvious signs
of segmentation. In species with well-developed septa, the blood circulates entirely within blood vessels, and the
vessels in segments near the front ends of these species are often built up with muscles that act as hearts. The
septa of such species also enable them to change the shapes of individual segments, which facilitates movement by
peristalsis ("ripples" that pass along the body) or by undulations that improve the effectiveness of the parapodia.
In species with incomplete septa or none, the blood circulates through the main body cavity without any kind of pump,
and there is a wide range of locomotory techniques – some burrowing species turn their pharynges inside out to drag
themselves through the sediment. Although many species can reproduce asexually and use similar mechanisms to regenerate
after severe injuries, sexual reproduction is the normal method in species whose reproduction has been studied. The
minority of living polychaetes whose reproduction and lifecycles are known produce trochophore larvae that live
as plankton and then sink and metamorphose into miniature adults. Oligochaetes are full hermaphrodites and produce
a ring-like cocoon around their bodies, in which the eggs and hatchlings are nourished until they are ready to emerge.
Earthworms are oligochaetes that support terrestrial food chains as prey and, in some regions, play an important role in aerating and enriching soil. The burrowing of marine polychaetes, which may constitute up to a third of all species
in near-shore environments, encourages the development of ecosystems by enabling water and oxygen to penetrate the
sea floor. In addition to improving soil fertility, annelids serve humans as food and as bait. Scientists observe
annelids to monitor the quality of marine and fresh water. Although blood-letting is no longer in favor with doctors,
some leech species are regarded as endangered species because they have been over-harvested for this purpose in the
last few centuries. Ragworms' jaws are now being studied by engineers as they offer an exceptional combination of
lightness and strength. Since annelids are soft-bodied, their fossils are rare – mostly jaws and the mineralized
tubes that some of the species secreted. Although some late Ediacaran fossils may represent annelids, the oldest
known fossil that is identified with confidence comes from about 518 million years ago in the early Cambrian period.
Fossils of most modern mobile polychaete groups appeared by the end of the Carboniferous, about 299 million years
ago. Palaeontologists disagree about whether some body fossils from the mid Ordovician, about 472 to 461 million
years ago, are the remains of oligochaetes, and the earliest indisputable fossils of the group appear in the Tertiary
period, which began 65 million years ago. No single feature distinguishes annelids from other invertebrate phyla,
but they have a distinctive combination of features. Their bodies are long, with segments that are divided externally
by shallow ring-like constrictions called annuli and internally by septa ("partitions") at the same points, although
in some species the septa are incomplete and in a few cases missing. Most of the segments contain the same sets of
organs, although sharing a common gut, circulatory system and nervous system makes them inter-dependent. Their bodies
are covered by a cuticle (outer covering) that does not contain cells but is secreted by cells in the skin underneath,
is made of tough but flexible collagen and does not molt – on the other hand arthropods' cuticles are made of the
more rigid α-chitin, and molt until the arthropods reach their full size. Most annelids have closed circulatory systems,
where the blood makes its entire circuit via blood vessels. Most of an annelid's body consists of segments that are
practically identical, having the same sets of internal organs and external chaetae (Greek χαιτη, meaning "hair")
and, in some species, appendages. However, the frontmost and rearmost sections are not regarded as true segments
as they do not contain the standard sets of organs and do not develop in the same way as the true segments. The frontmost
section, called the prostomium (Greek προ- meaning "in front of" and στομα meaning "mouth") contains the brain and
sense organs, while the rearmost, called the pygidium (Greek πυγιδιον, meaning "little tail") or periproct contains
the anus, generally on the underside. The first section behind the prostomium, called the peristomium (Greek περι-
meaning "around" and στομα meaning "mouth"), is regarded by some zoologists as not a true segment, but in some polychaetes
the peristomium has chetae and appendages like those of other segments. Annelids' cuticles are made of collagen fibers,
usually in layers that spiral in alternating directions so that the fibers cross each other. These are secreted by
the one-cell deep epidermis (outermost skin layer). A few marine annelids that live in tubes lack cuticles, but their
tubes have a similar structure, and mucus-secreting glands in the epidermis protect their skins. Under the epidermis
is the dermis, which is made of connective tissue, in other words a combination of cells and non-cellular materials
such as collagen. Below this are two layers of muscles, which develop from the lining of the coelom (body cavity):
circular muscles make a segment longer and slimmer when they contract, while under them are longitudinal muscles,
usually four distinct strips, whose contractions make the segment shorter and fatter. Some annelids also have oblique
internal muscles that connect the underside of the body to each side. The setae ("hairs") of annelids project out
from the epidermis to provide traction and other capabilities. The simplest are unjointed and form paired bundles
near the top and bottom of each side of each segment. The parapodia ("limbs") of annelids that have them often bear
more complex chetae at their tips – for example jointed, comb-like or hooked. Chetae are made of moderately flexible
β-chitin and are formed by follicles, each of which has a chetoblast ("hair-forming") cell at the bottom and muscles
that can extend or retract the cheta. The chetoblasts produce chetae by forming microvilli, fine hair-like extensions
that increase the area available for secreting the cheta. When the cheta is complete, the microvilli withdraw into
the chetoblast, leaving parallel tunnels that run almost the full length of the cheta. Hence annelids' chetae are
structurally different from the setae ("bristles") of arthropods, which are made of the more rigid α-chitin, have
a single internal cavity, and are mounted on flexible joints in shallow pits in the cuticle. Nearly all polychaetes
have parapodia that function as limbs, while other major annelid groups lack them. Parapodia are unjointed paired
extensions of the body wall, and their muscles are derived from the circular muscles of the body. They are often
supported internally by one or more large, thick chetae. The parapodia of burrowing and tube-dwelling polychaetes
are often just ridges whose tips bear hooked chetae. In active crawlers and swimmers the parapodia are often divided
into large upper and lower paddles on a very short trunk, and the paddles are generally fringed with chetae and sometimes
with cirri (fused bundles of cilia) and gills. The brain generally forms a ring round the pharynx (throat), consisting
of a pair of ganglia (local control centers) above and in front of the pharynx, linked by nerve cords either side
of the pharynx to another pair of ganglia just below and behind it. The brains of polychaetes are generally in the
prostomium, while those of clitellates are in the peristomium or sometimes the first segment behind the peristomium.
In some very mobile and active polychaetes the brain is enlarged and more complex, with visible hindbrain, midbrain
and forebrain sections. The rest of the central nervous system is generally "ladder-like", consisting of a pair of
nerve cords that run through the bottom part of the body and have in each segment paired ganglia linked by a transverse
connection. From each segmental ganglion a branching system of local nerves runs into the body wall and then encircles
the body. However, in most polychaetes the two main nerve cords are fused, and in the tube-dwelling genus Owenia
the single nerve chord has no ganglia and is located in the epidermis. As in arthropods, each muscle fiber (cell)
is controlled by more than one neuron, and the speed and power of the fiber's contractions depends on the combined
effects of all its neurons. Vertebrates have a different system, in which one neuron controls a group of muscle fibers.
Most annelids' longitudinal nerve trunks include giant axons (the output signal lines of nerve cells). Their large
diameter decreases their resistance, which allows them to transmit signals exceptionally fast. This enables these
worms to withdraw rapidly from danger by shortening their bodies. Experiments have shown that cutting the giant axons
prevents this escape response but does not affect normal movement. The sensors are primarily single cells that detect
light, chemicals, pressure waves and contact, and are present on the head, appendages (if any) and other parts of
the body. Nuchal ("on the neck") organs are paired, ciliated structures found only in polychaetes, and are thought
to be chemosensors. Some polychaetes also have various combinations of ocelli ("little eyes") that detect the direction
from which light is coming and camera eyes or compound eyes that can probably form images. The compound eyes probably
evolved independently of arthropods' eyes. Some tube-worms use ocelli widely spread over their bodies to detect the
shadows of fish, so that they can quickly withdraw into their tubes. Some burrowing and tube-dwelling polychaetes
have statocysts (tilt and balance sensors) that tell them which way is down. A few polychaete genera have on the
undersides of their heads palps that are used both in feeding and as "feelers", and some of these also have antennae
that are structurally similar but probably are used mainly as "feelers". Most annelids have a pair of coelomata (body
cavities) in each segment, separated from other segments by septa and from each other by vertical mesenteries. Each
septum forms a sandwich with connective tissue in the middle and mesothelium (membrane that serves as a lining) from
the preceding and following segments on either side. Each mesentery is similar except that the mesothelium is the
lining of each of the pair of coelomata, and the blood vessels and, in polychaetes, the main nerve cords are embedded
in it. The mesothelium is made of modified epitheliomuscular cells; in other words, their bodies form part of the
epithelium but their bases extend to form muscle fibers in the body wall. The mesothelium may also form radial and
circular muscles on the septa, and circular muscles around the blood vessels and gut. Parts of the mesothelium, especially
on the outside of the gut, may also form chloragogen cells that perform similar functions to the livers of vertebrates:
producing and storing glycogen and fat; producing the oxygen-carrier hemoglobin; breaking down proteins; and turning
nitrogenous waste products into ammonia and urea to be excreted. Many annelids move by peristalsis (waves of contraction
and expansion that sweep along the body), or flex the body while using parapodia to crawl or swim. In these animals
the septa enable the circular and longitudinal muscles to change the shape of individual segments, by making each
segment a separate fluid-filled "balloon". However, the septa are often incomplete in annelids that are semi-sessile
or that do not move by peristalsis or by movements of parapodia – for example some move by whipping movements of
the body, some small marine species move by means of cilia (fine hair-like structures) and some burrowers turn their
pharynges (throats) inside out to penetrate the sea-floor and drag themselves into it. The fluid in the coelomata
contains coelomocyte cells that defend the animals against parasites and infections. In some species coelomocytes
may also contain a respiratory pigment – red hemoglobin in some species, green chlorocruorin in others (dissolved
in the plasma) – and provide oxygen transport within their segments. Respiratory pigment is also dissolved in the
blood plasma. Species with well-developed septa generally also have blood vessels running all along their bodies above
and below the gut, the upper one carrying blood forwards while the lower one carries it backwards. Networks of capillaries
in the body wall and around the gut transfer blood between the main blood vessels and to parts of the segment that
need oxygen and nutrients. Both of the major vessels, especially the upper one, can pump blood by contracting. In
some annelids the forward end of the upper blood vessel is enlarged with muscles to form a heart, while in the forward
ends of many earthworms some of the vessels that connect the upper and lower main vessels function as hearts. Species
with poorly developed or no septa generally have no blood vessels and rely on the circulation within the coelom for
delivering nutrients and oxygen. However, leeches and their closest relatives have a body structure that is very
uniform within the group but significantly different from that of other annelids, including other members of the
Clitellata. In leeches there are no septa, the connective tissue layer of the body wall is so thick that it occupies
much of the body, and the two coelomata are widely separated and run the length of the body. They function as the
main blood vessels, although they are side-by-side rather than upper and lower. However, they are lined with mesothelium,
like the coelomata and unlike the blood vessels of other annelids. Leeches generally use suckers at their front and
rear ends to move like inchworms. The anus is on the upper surface of the pygidium. Feeding structures in the mouth
region vary widely, and have little correlation with the animals' diets. Many polychaetes have a muscular pharynx
that can be everted (turned inside out to extend it). In these animals the foremost few segments often lack septa
so that, when the muscles in these segments contract, the sharp increase in fluid pressure from all these segments
everts the pharynx very quickly. Two families, the Eunicidae and Phyllodocidae, have evolved jaws, which can be used
for seizing prey, biting off pieces of vegetation, or grasping dead and decaying matter. On the other hand, some
predatory polychaetes have neither jaws nor eversible pharynges. Selective deposit feeders generally live in tubes
on the sea-floor and use palps to find food particles in the sediment and then wipe them into their mouths. Filter
feeders use "crowns" of palps covered in cilia that wash food particles towards their mouths. Non-selective deposit
feeders ingest soil or marine sediments via mouths that are generally unspecialized. Some clitellates have sticky
pads in the roofs of their mouths, and some of these can evert the pads to capture prey. Leeches often have an eversible
proboscis, or a muscular pharynx with two or three teeth. The gut is generally an almost straight tube supported
by the mesenteries (vertical partitions within segments), and ends with the anus on the underside of the pygidium.
However, in members of the tube-dwelling family Siboglinidae the gut is blocked by a swollen lining that houses symbiotic
bacteria, which can make up 15% of the worms' total weight. The bacteria convert inorganic matter – such as hydrogen
sulfide and carbon dioxide from hydrothermal vents, or methane from seeps – to organic matter that feeds themselves
and their hosts, while the worms extend their palps into the gas flows to absorb the gases needed by the bacteria.
Annelids with blood vessels use metanephridia to remove soluble waste products, while those without use protonephridia.
Both of these systems use a two-stage filtration process, in which fluid and waste products are first extracted and
these are filtered again to re-absorb any re-usable materials while dumping toxic and spent materials as urine. The
difference is that protonephridia combine both filtration stages in the same organ, while metanephridia perform only
the second filtration and rely on other mechanisms for the first – in annelids special filter cells in the walls
of the blood vessels let fluids and other small molecules pass into the coelomic fluid, where they circulate to the
metanephridia. In annelids the points at which fluid enters the protonephridia or metanephridia are on the forward
side of a septum while the second-stage filter and the nephridiopore (exit opening in the body wall) are in the following
segment. As a result, the hindmost segment (before the growth zone and pygidium) has no structure that extracts its
wastes, as there is no following segment to filter and discharge them, while the first segment contains an extraction
structure that passes wastes to the second, but does not contain the structures that re-filter and discharge urine.
It is thought that annelids were originally animals with two separate sexes, which released ova and sperm into the
water via their nephridia. The fertilized eggs develop into trochophore larvae, which live as plankton. Later they
sink to the sea-floor and metamorphose into miniature adults: the part of the trochophore between the apical tuft
and the prototroch becomes the prostomium (head); a small area round the trochophore's anus becomes the pygidium
(tail-piece); a narrow band immediately in front of that becomes the growth zone that produces new segments; and
the rest of the trochophore becomes the peristomium (the segment that contains the mouth). However, the lifecycles
of most living polychaetes, which are almost all marine animals, are unknown, and only about 25% of the 300+ species
whose lifecycles are known follow this pattern. About 14% use a similar external fertilization but produce yolk-rich
eggs, which reduce the time the larva needs to spend among the plankton, or eggs from which miniature adults emerge
rather than larvae. The rest care for the fertilized eggs until they hatch – some by producing jelly-covered masses
of eggs which they tend, some by attaching the eggs to their bodies and a few species by keeping the eggs within
their bodies until they hatch. These species use a variety of methods for sperm transfer; for example, in some the
females collect sperm released into the water, while in others the males have a penis that injects sperm into the
female. There is no guarantee that this is a representative sample of polychaetes' reproductive patterns; it may simply reflect scientists' current knowledge. Some polychaetes breed only once in their lives, while others breed
almost continuously or through several breeding seasons. While most polychaetes remain of one sex all their lives,
a significant percentage of species are full hermaphrodites or change sex during their lives. Most polychaetes whose
reproduction has been studied lack permanent gonads, and it is uncertain how they produce ova and sperm. In a few
species the rear of the body splits off and becomes a separate individual that lives just long enough to swim to
a suitable environment, usually near the surface, and spawn. Most mature clitellates (the group that includes earthworms
and leeches) are full hermaphrodites, although in a few leech species younger adults function as males and become
female at maturity. All have well-developed gonads, and all copulate. Earthworms store their partners' sperm in spermathecae
("sperm stores") and then the clitellum produces a cocoon that collects ova from the ovaries and then sperm from
the spermathecae. Fertilization and development of earthworm eggs takes place in the cocoon. Leeches' eggs are fertilized
in the ovaries, and then transferred to the cocoon. In all clitellates the cocoon also produces either yolk when the eggs are fertilized or nutrients while they are developing. All clitellates hatch as miniature adults rather
than larvae. Charles Darwin's book The Formation of Vegetable Mould through the Action of Worms (1881) presented
the first scientific analysis of earthworms' contributions to soil fertility. Some burrow while others live entirely
on the surface, generally in moist leaf litter. The burrowers loosen the soil so that oxygen and water can penetrate
it, and both surface and burrowing worms help to produce soil by mixing organic and mineral matter, by accelerating
the decomposition of organic matter and thus making it more quickly available to other organisms, and by concentrating
minerals and converting them to forms that plants can use more easily. Earthworms are also important prey for birds
ranging in size from robins to storks, and for mammals ranging from shrews to badgers, and in some cases conserving
earthworms may be essential for conserving endangered birds. Terrestrial annelids can be invasive in some situations.
In the glaciated areas of North America, for example, almost all native earthworms are thought to have been killed
by the glaciers and the worms currently found in those areas are all introduced from other areas, primarily from
Europe, and, more recently, from Asia. Northern hardwood forests are especially negatively impacted by invasive worms through the loss of leaf duff, reduced soil fertility, changes in soil chemistry and the loss of ecological diversity. Of particular concern is Amynthas agrestis, which at least one state (Wisconsin) has listed as a prohibited species. The rear end of the Palolo worm, a marine polychaete that tunnels
through coral, detaches in order to spawn at the surface, and the people of Samoa regard these spawning modules as
a delicacy. Anglers sometimes find that worms are more effective bait than artificial flies, and worms can be kept
for several days in a tin lined with damp moss. Ragworms are commercially important as bait and as food sources for
aquaculture, and there have been proposals to farm them in order to reduce over-fishing of their natural populations.
Some marine polychaetes' predation on molluscs causes serious losses to fishery and aquaculture operations. Accounts
of the use of leeches for the medically dubious practice of blood-letting have come from China around 30 AD, India
around 200 AD, ancient Rome around 50 AD and later throughout Europe. In the 19th century medical demand for leeches
was so high that some areas' stocks were exhausted and other regions imposed restrictions or bans on exports, and
Hirudo medicinalis is treated as an endangered species by both IUCN and CITES. More recently leeches have been used
to assist in microsurgery, and their saliva has provided anti-inflammatory compounds and several important anticoagulants,
one of which also prevents tumors from spreading. Since annelids are soft-bodied, their fossils are rare. Polychaetes'
fossil record consists mainly of the jaws that some species had and the mineralized tubes that some secreted. Some
Ediacaran fossils such as Dickinsonia in some ways resemble polychaetes, but the similarities are too vague for these
fossils to be classified with confidence. The small shelly fossil Cloudina, from 549 to 542 million years ago, has
been classified by some authors as an annelid, but by others as a cnidarian (i.e. in the phylum to which jellyfish
and sea anemones belong). Until 2008 the earliest fossils widely accepted as annelids were the polychaetes Canadia
and Burgessochaeta, both from Canada's Burgess Shale, formed about 505 million years ago in the mid-Cambrian. Myoscolex,
found in Australia and a little older than the Burgess Shale, was possibly an annelid. However, it lacks some typical
annelid features and has features which are not usually found in annelids and some of which are associated with other
phyla. Then Simon Conway Morris and John Peel reported Phragmochaeta from Sirius Passet, about 518 million years
old, and concluded that it was the oldest annelid known to date. There has been vigorous debate about whether the
Burgess Shale fossil Wiwaxia was a mollusc or an annelid. Polychaetes diversified in the early Ordovician, about
488 to 474 million years ago. It is not until the early Ordovician that the first annelid jaws are found, thus the
crown-group cannot have appeared before this date and probably appeared somewhat later. By the end of the Carboniferous,
about 299 million years ago, fossils of most of the modern mobile polychaete groups had appeared. Many fossil tubes
look like those made by modern sessile polychaetes, but the first tubes clearly produced by polychaetes date from
the Jurassic, less than 199 million years ago. The earliest good evidence for oligochaetes occurs in the Tertiary
period, which began 65 million years ago, and it has been suggested that these animals evolved around the same time
as flowering plants in the early Cretaceous, from 130 to 90 million years ago. A trace fossil consisting of a convoluted
burrow partly filled with small fecal pellets may be evidence that earthworms were present in the early Triassic
period from 251 to 245 million years ago. Body fossils going back to the mid Ordovician, from 472 to 461 million
years ago, have been tentatively classified as oligochaetes, but these identifications are uncertain and some have
been disputed. Traditionally the annelids have been divided into two major groups, the polychaetes and clitellates.
In turn the clitellates were divided into oligochaetes, which include earthworms, and hirudinomorphs, whose best-known
members are leeches. For many years there was no clear arrangement of the approximately 80 polychaete families into
higher-level groups. In 1997 Greg Rouse and Kristian Fauchald attempted a "first heuristic step in terms of bringing
polychaete systematics to an acceptable level of rigour", based on anatomical structures, and divided polychaetes into the groups Scolecida, Canalipalpata and Aciculata. In 2007 Torsten Struck and colleagues compared 3 genes in 81 taxa, of which 9 were outgroups, in other words
not considered closely related to annelids but included to give an indication of where the organisms under study
are placed on the larger tree of life. For a cross-check the study used an analysis of 11 genes (including the original
3) in 10 taxa. This analysis agreed that clitellates, pogonophorans and echiurans were on various branches of the
polychaete family tree. It also concluded that the classification of polychaetes into Scolecida, Canalipalpata and
Aciculata was useless, as the members of these alleged groups were scattered all over the family tree derived from
comparing the 81 taxa. In addition, it also placed sipunculans, generally regarded at the time as a separate phylum,
on another branch of the polychaete tree, and concluded that leeches were a sub-group of oligochaetes rather than
their sister-group among the clitellates. Rouse accepted the analyses based on molecular phylogenetics, and their
main conclusions are now the scientific consensus, although the details of the annelid family tree remain uncertain.
In addition to re-writing the classification of annelids and 3 previously independent phyla, the molecular phylogenetics
analyses undermine the emphasis that decades of previous writings placed on the importance of segmentation in the
classification of invertebrates. Polychaetes, which these analyses found to be the parent group, have completely
segmented bodies, while polychaetes' echiuran and sipunculan offshoots are not segmented and pogonophores are segmented
only in the rear parts of their bodies. It now seems that segmentation can appear and disappear much more easily
in the course of evolution than was previously thought. The 2007 study also noted that the ladder-like nervous system,
which is associated with segmentation, is less universal than previously thought in both annelids and arthropods.
Annelids are members of the protostomes, one of the two major superphyla of bilaterian animals – the other is the
deuterostomes, which includes vertebrates. Within the protostomes, annelids used to be grouped with arthropods under
the super-group Articulata ("jointed animals"), as segmentation is obvious in most members of both phyla. However,
the genes that drive segmentation in arthropods do not appear to do the same in annelids. Arthropods and annelids
both have close relatives that are unsegmented. It is at least as easy to assume that they evolved segmented bodies
independently as it is to assume that the ancestral protostome or bilaterian was segmented and that segmentation
disappeared in many descendant phyla. The current view is that annelids are grouped with molluscs, brachiopods and
several other phyla that have lophophores (fan-like feeding structures) and/or trochophore larvae as members of Lophotrochozoa.
Bryozoa may be the most basal phylum (the one that first became distinctive) within the Lophotrochozoa, and the relationships
between the other members are not yet known. Arthropods are now regarded as members of the Ecdysozoa ("animals that
molt"), along with some phyla that are unsegmented. The "Lophotrochozoa" hypothesis is also supported by the fact
that many phyla within this group, including annelids, molluscs, nemerteans and flatworms, follow a similar pattern
in the fertilized egg's development. When their cells divide after the 4-cell stage, descendants of these 4 cells
form a spiral pattern. In these phyla the "fates" of the embryo's cells, in other words the roles their descendants
will play in the adult animal, are the same and can be predicted from a very early stage. Hence this development
pattern is often described as "spiral determinate cleavage".
In monotheism and henotheism, God is conceived of as the Supreme Being and principal object of faith. The concept of God
as described by theologians commonly includes the attributes of omniscience (infinite knowledge), omnipotence (unlimited
power), omnipresence (present everywhere), omnibenevolence (perfect goodness), divine simplicity, and eternal and
necessary existence. God is also usually defined as a non-corporeal being without any human biological gender, but
the concept of God actively (as opposed to receptively) creating the universe has caused some religions to give "Him"
the metaphorical name of "Father". Because God is conceived as not being a corporeal being, God cannot (some say should
not) be portrayed in a literal visual image; some religious groups use a man (sometimes old and bearded) to symbolize
God because of His deed of creating man's mind in the image of His own. In theism, God is the creator and sustainer
of the universe, while in deism, God is the creator, but not the sustainer, of the universe. Monotheism is the belief
in the existence of one God or in the oneness of God. In pantheism, God is the universe itself. In atheism, God is
not believed to exist, while God is deemed unknown or unknowable within the context of agnosticism. God has also
been conceived as being incorporeal (immaterial), a personal being, the source of all moral obligation, and the "greatest
conceivable existent". Many notable philosophers have developed arguments for and against the existence of God. There
are many names for God, and different names are attached to different cultural ideas about God's identity and attributes.
In the ancient Egyptian era of Atenism, possibly the earliest recorded monotheistic religion, this deity was called
Aten, premised on being the one "true" Supreme Being and Creator of the Universe. In the Hebrew Bible and Judaism,
"He Who Is", "I Am that I Am", and the tetragrammaton YHWH are used as names of God, while Yahweh and Jehovah are
sometimes used in Christianity as vocalizations of YHWH. In the Christian doctrine of the Trinity, God, consubstantial
in three persons, is called the Father, the Son, and the Holy Spirit. In Judaism, it is common to refer to God by
the titular names Elohim or Adonai, the latter of which is believed by some scholars to descend from the Egyptian
Aten. In Islam, the name Allah, "Al-El", or "Al-Elah" ("the God") is used, while Muslims also have a multitude of
titular names for God. In Hinduism, Brahman is often considered a monistic deity. Other religions have names for
God, for instance, Baha in the Bahá'í Faith, Waheguru in Sikhism, and Ahura Mazda in Zoroastrianism. The earliest
written form of the Germanic word God (always, in this usage, capitalized) comes from the 6th-century Christian Codex
Argenteus. The English word itself is derived from the Proto-Germanic *ǥuđan. The reconstructed Proto-Indo-European form *ǵhu-tó-m was likely based on the root *ǵhau(ə)-, which meant either "to call" or "to invoke". The Germanic
words for God were originally neuter—applying to both genders—but during the process of the Christianization of the
Germanic peoples from their indigenous Germanic paganism, the words became a masculine syntactic form. In the English
language, the capitalized form of God continues to represent a distinction between monotheistic "God" and "gods"
in polytheism. The English word God and its counterparts in other languages are normally used for any and all conceptions
and, in spite of significant differences between religions, the term remains an English translation common to all.
The same holds for Hebrew El, but in Judaism, God is also given a proper name, the tetragrammaton YHWH, in origin
possibly the name of an Edomite or Midianite deity, Yahweh. In many translations of the Bible, when the word LORD
is in all capitals, it signifies that the word represents the tetragrammaton. There is no clear consensus on the
nature or even the existence of God. The Abrahamic conceptions of God include the monotheistic definition of God
in Judaism, the trinitarian view of Christians, and the Islamic concept of God. The dharmic religions differ in their
view of the divine: views of God in Hinduism vary by region, sect, and caste, ranging from monotheistic to polytheistic
to atheistic. Gods were recognized by the historical Buddha, particularly Śakra and Brahma. However, other sentient
beings, including gods, can at best only play a supportive role in one's personal path to salvation. Conceptions
of God in the latter developments of the Mahayana tradition give a more prominent place to notions of the divine. Monotheists hold that there is only one god, and may claim that the one true god is worshiped in different
religions under different names. The view that all theists actually worship the same god, whether they know it or
not, is especially emphasized in Hinduism and Sikhism. In Christianity, the doctrine of the Trinity describes God
as one God in three persons. The Trinity comprises God the Father, God the Son (Jesus), and God the Holy Spirit.
Islam's most fundamental concept is tawhid (meaning "oneness" or "uniqueness"). God is described in the Quran as:
"Say: He is Allah, the One and Only; Allah, the Eternal, Absolute; He begetteth not, nor is He begotten; And there
is none like unto Him." Muslims repudiate the Christian doctrine of the Trinity and divinity of Jesus, comparing
it to polytheism. In Islam, God is beyond all comprehension or equal and does not resemble any of his creations in
any way. Thus, Muslims are not iconodules, and are not expected to visualize God. Theism generally holds that God
exists realistically, objectively, and independently of human thought; that God created and sustains everything;
that God is omnipotent and eternal; and that God is personal and interacting with the universe through, for example,
religious experience and the prayers of humans. Theism holds that God is both transcendent and immanent; thus, God
is simultaneously infinite and in some way present in the affairs of the world. Not all theists subscribe to all
of these propositions, but each usually subscribes to some of them (see, by way of comparison, family resemblance).
Catholic theology holds that God is infinitely simple and is not involuntarily subject to time. Most theists hold
that God is omnipotent, omniscient, and benevolent, although this belief raises questions about God's responsibility
for evil and suffering in the world. Some theists ascribe to God a self-conscious or purposeful limiting of omnipotence,
omniscience, or benevolence. Open Theism, by contrast, asserts that, due to the nature of time, God's omniscience
does not mean the deity can predict the future. Theism is sometimes used to refer in general to any belief in a god
or gods, i.e., monotheism or polytheism. Deism holds that God is wholly transcendent: God exists, but does not intervene
in the world beyond what was necessary to create it. In this view, God is not anthropomorphic, and neither answers
prayers nor produces miracles. Common in Deism is a belief that God has no interest in humanity and may not even
be aware of humanity. Pandeism and Panendeism, respectively, combine Deism with the Pantheistic or Panentheistic
beliefs. Pandeism is proposed to explain, with respect to Deism, why God would create a universe and then abandon it, and, with respect to Pantheism, the origin and purpose of the universe. Pantheism holds that God is the universe and the universe is
God, whereas Panentheism holds that God contains, but is not identical to, the Universe. Pantheism is also the view of the Liberal Catholic Church; Theosophy; some views of Hinduism except Vaishnavism, which believes in panentheism; Sikhism;
some divisions of Neopaganism and Taoism, along with many varying denominations and individuals within denominations.
Kabbalah, Jewish mysticism, paints a pantheistic/panentheistic view of God—which has wide acceptance in Hasidic Judaism,
particularly from their founder The Baal Shem Tov—but only as an addition to the Jewish view of a personal god, not
in the original pantheistic sense that denies or limits persona to God. Even non-theist views about
gods vary. Some non-theists avoid the concept of God, whilst accepting that it is significant to many; other non-theists
understand God as a symbol of human values and aspirations. The nineteenth-century English atheist Charles Bradlaugh
declared that he refused to say "There is no God", because "the word 'God' is to me a sound conveying no clear or
distinct affirmation"; he said more specifically that he disbelieved in the Christian god. Stephen Jay Gould proposed
an approach dividing the world of philosophy into what he called "non-overlapping magisteria" (NOMA). In this view,
questions of the supernatural, such as those relating to the existence and nature of God, are non-empirical and are
the proper domain of theology. The methods of science should then be used to answer any empirical question about
the natural world, and theology should be used to answer questions about ultimate meaning and moral value. In this
view, the perceived lack of any empirical footprint from the magisterium of the supernatural onto natural events
makes science the sole player in the natural world. Another view, advanced by Richard Dawkins, is that the existence
of God is an empirical question, on the grounds that "a universe with a god would be a completely different kind
of universe from one without, and it would be a scientific difference." Carl Sagan argued that the doctrine of a
Creator of the Universe was difficult to prove or disprove and that the only conceivable scientific discovery that
could disprove the existence of a Creator would be the discovery that the universe is infinitely old. Stephen Hawking
and co-author Leonard Mlodinow state in their book, The Grand Design, that it is reasonable to ask who or what created
the universe, but if the answer is God, then the question has merely been deflected to that of who created God. Both
authors claim, however, that it is possible to answer these questions purely within the realm of science, and without
invoking any divine beings. Neuroscientist Michael Nikoletseas has proposed that questions of the existence of God
are no different from questions of natural sciences. Following a biological comparative approach, he concludes that
it is highly probable that God exists, and, although not visible, it is possible that we know some of his attributes.
Pascal Boyer argues that while there is a wide array of supernatural concepts found around the world, in general,
supernatural beings tend to behave much like people. The construction of gods and spirits like persons is one of
the best known traits of religion. He cites examples from Greek mythology, which is, in his opinion, more like a
modern soap opera than other religious systems. Bertrand du Castel and Timothy Jurgensen demonstrate through formalization
that Boyer's explanatory model matches physics' epistemology in positing not directly observable entities as intermediaries.
Anthropologist Stewart Guthrie contends that people project human features onto non-human aspects of the world because
it makes those aspects more familiar. Sigmund Freud also suggested that god concepts are projections of one's father.
Likewise, Émile Durkheim was one of the earliest to suggest that gods represent an extension of human social life
to include supernatural beings. In line with this reasoning, psychologist Matt Rossano contends that when humans
began living in larger groups, they may have created gods as a means of enforcing morality. In small groups, morality
can be enforced by social forces such as gossip or reputation. However, it is much harder to enforce morality using
social forces in much larger groups. Rossano indicates that by including ever-watchful gods and spirits, humans discovered
an effective strategy for restraining selfishness and building more cooperative groups. St. Anselm's approach was
to define God as, "that than which nothing greater can be conceived". Famed pantheist philosopher Baruch Spinoza
would later carry this idea to its extreme: "By God I understand a being absolutely infinite, i.e., a substance consisting
of infinite attributes, of which each one expresses an eternal and infinite essence." For Spinoza, the whole of the
natural universe is made of one substance, God, or its equivalent, Nature. His proof for the existence of God was
a variation of the Ontological argument. Some findings in the fields of cosmology, evolutionary biology and neuroscience
are interpreted by atheists (including Lawrence M. Krauss and Sam Harris) as evidence that God is an imaginary entity
only, with no basis in reality. A single, omniscient God who is imagined to have created the universe and is particularly
attentive to the lives of humans has been imagined, embellished and promulgated in a trans-generational manner. Richard
Dawkins interprets various findings not only as a lack of evidence for the material existence of such a God but extensive
evidence to the contrary. The omnipotence paradox, or 'Paradox of the Stone', asks: can God create a stone so heavy that he cannot lift it? Either he can or he cannot. If he cannot, the argument goes, then there is something
that he cannot do, namely create the stone, and therefore he is not omnipotent. If he can, it continues, then there
is also something that he cannot do, namely lift the stone, and therefore he is not omnipotent. Either way, then,
God is not omnipotent. A being that is not omnipotent, though, is not God, according to many theological models.
Such a God, therefore, does not exist. Several answers to this paradox have been proposed. Different religious traditions
assign differing (though often similar) attributes and characteristics to God, including expansive powers and abilities,
psychological characteristics, gender characteristics, and preferred nomenclature. The assignment of these attributes
often differs according to the conceptions of God in the culture from which they arise. For example, attributes of
God in Christianity, attributes of God in Islam, and the Thirteen Attributes of Mercy in Judaism share certain similarities
arising from their common roots. The gender of God may be viewed as either a literal or an allegorical aspect of
a deity who, in classical western philosophy, transcends bodily form. Polytheistic religions commonly attribute to
each of the gods a gender, allowing each to interact with any of the others, and perhaps with humans, sexually. In
most monotheistic religions, God has no counterpart with which to relate sexually. Thus, in classical western philosophy
the gender of this one-and-only deity is most likely to be an analogical statement of how humans and God address,
and relate to, each other. Namely, God is seen as begetter of the world and revelation which corresponds to the active
(as opposed to the receptive) role in sexual intercourse. Prayer plays a significant role among many believers. Muslims
believe that the purpose of existence is to worship God. He is viewed as a personal God and there are no intermediaries,
such as clergy, to contact God. Prayer often also includes supplication and asking forgiveness. God is often believed
to be forgiving. For example, a hadith states God would replace a sinless people with one who sinned but still asked for repentance. Christian theologian Alister McGrath writes that there are good reasons to suggest that a "personal god"
is integral to the Christian outlook, but that one has to understand it is an analogy. "To say that God is like a
person is to affirm the divine ability and willingness to relate to others. This does not imply that God is human,
or located at a specific point in the universe." Adherents of different religions generally disagree as to how to
best worship God and what is God's plan for mankind, if there is one. There are different approaches to reconciling
the contradictory claims of monotheistic religions. One view is taken by exclusivists, who believe they are the chosen
people or have exclusive access to absolute truth, generally through revelation or encounter with the Divine, which
adherents of other religions do not. Another view is religious pluralism. A pluralist typically believes that his
religion is the right one, but does not deny the partial truth of other religions. An example of a pluralist view
in Christianity is supersessionism, i.e., the belief that one's religion is the fulfillment of previous religions.
A third approach is relativistic inclusivism, where everybody is seen as equally right; an example being universalism:
the doctrine that salvation is eventually available for everyone. A fourth approach is syncretism, mixing different
elements from different religions. An example of syncretism is the New Age movement. Many medieval philosophers developed
arguments for the existence of God, while attempting to comprehend the precise implications of God's attributes.
Reconciling some of those attributes generated important philosophical problems and debates. For example, God's omniscience
may seem to imply that God knows how free agents will choose to act. If God does know this, their ostensible free
will might be illusory, or foreknowledge does not imply predestination, and if God does not know it, God may not
be omniscient. The last centuries of philosophy have seen vigorous questions regarding the arguments for God's existence
raised by such philosophers as Immanuel Kant, David Hume and Antony Flew, although Kant held that the argument from
morality was valid. The theist response has been either to contend, as does Alvin Plantinga, that faith is "properly
basic", or to take, as does Richard Swinburne, the evidentialist position. Some theists agree that none of the arguments
for God's existence are compelling, but argue that faith is not a product of reason, but requires risk. There would
be no risk, they say, if the arguments for God's existence were as solid as the laws of logic, a position summed
up by Pascal as "the heart has reasons of which reason does not know." A recent theory using concepts from physics
and neurophysiology proposes that God can be conceptualized within the theory of integrative levels.
On 16 September 2001, at Camp David, President George W. Bush used the phrase war on terrorism in an unscripted and controversial
comment when he said, "This crusade – this war on terrorism – is going to take a while, ... " Bush later apologized
for this remark due to the negative connotations the term crusade carries for many people, particularly those of Muslim faith. The word crusade
was not used again. On 20 September 2001, during a televised address to a joint session of Congress, Bush stated
that, "(o)ur 'war on terror' begins with al-Qaeda, but it does not end there. It will not end until every terrorist
group of global reach has been found, stopped, and defeated." U.S. President Barack Obama has rarely used the term,
but in his inaugural address on 20 January 2009, he stated "Our nation is at war, against a far-reaching network
of violence and hatred." In March 2009 the Defense Department officially changed the name of operations from "Global
War on Terror" to "Overseas Contingency Operation" (OCO). In March 2009, the Obama administration requested that
Pentagon staff members avoid use of the term, instead using "Overseas Contingency Operation". Basic objectives of
the Bush administration "war on terror", such as targeting al Qaeda and building international counterterrorism alliances,
remain in place. In December 2012, Jeh Johnson, the General Counsel of the Department of Defense, stated that the
military fight would be replaced by a law-enforcement operation, speaking at Oxford University and predicting that al-Qaeda would be so weakened as to be ineffective, having been "effectively destroyed", and thus that the conflict would no longer be an armed conflict under international law. In May 2013, Obama stated that the goal is "to dismantle specific networks of violent extremists that threaten America"; this coincided with the U.S. Office of Management and Budget having changed the wording from "Overseas Contingency Operations" to "Countering Violent Extremism" in 2010. Because
the actions involved in the "war on terrorism" are diffuse, and the criteria for inclusion are unclear, political
theorist Richard Jackson has argued that "the 'war on terrorism' therefore, is simultaneously a set of actual practices—wars,
covert operations, agencies, and institutions—and an accompanying series of assumptions, beliefs, justifications,
and narratives—it is an entire language or discourse." Jackson cites among many examples a statement by John Ashcroft
that "the attacks of September 11 drew a bright line of demarcation between the civil and the savage". Administration
officials also described "terrorists" as hateful, treacherous, barbarous, mad, twisted, perverted, without faith,
parasitical, inhuman, and, most commonly, evil. Americans, in contrast, were described as brave, loving, generous,
strong, resourceful, heroic, and respectful of human rights. The origins of al-Qaeda can be traced to the Soviet
war in Afghanistan (December 1979 – February 1989). The United States, United Kingdom, Saudi Arabia, Pakistan, and
the People's Republic of China supported the Islamist Afghan mujahadeen guerillas against the military forces of
the Soviet Union and the Democratic Republic of Afghanistan. A small number of "Afghan Arab" volunteers joined the
fight against the Soviets, including Osama bin Laden, but there is no evidence they received any external assistance.
In May 1996 the group World Islamic Front for Jihad Against Jews and Crusaders (WIFJAJC), sponsored by bin Laden
(and later re-formed as al-Qaeda), started forming a large base of operations in Afghanistan, where the Islamist
extremist regime of the Taliban had seized power earlier in the year. In February 1998, Osama bin Laden signed a
fatwā, as head of al-Qaeda, declaring war on the West and Israel; later, in May of that same year, al-Qaeda released
a video declaring war on the U.S. and the West. On 7 August 1998, al-Qaeda struck the U.S. embassies in Kenya and
Tanzania, killing 224 people, including 12 Americans. In retaliation, U.S. President Bill Clinton launched Operation
Infinite Reach, a bombing campaign in Sudan and Afghanistan against targets the U.S. asserted were associated with
WIFJAJC, although others have questioned whether a pharmaceutical plant in Sudan was used as a chemical warfare plant.
The plant produced much of the region's antimalarial drugs and around 50% of Sudan's pharmaceutical needs. The strikes
failed to kill any leaders of WIFJAJC or the Taliban. On the morning of 11 September 2001, 19 men affiliated with
al-Qaeda hijacked four airliners all bound for California. Once the hijackers assumed control of the airliners, they
told the passengers that they had a bomb on board and would spare the lives of passengers and crew once their demands
were met – no passengers or crew actually suspected that the hijackers would use the airliners as suicide weapons, since such an attack
had never happened before. The hijackers – members of al-Qaeda's Hamburg cell – intentionally crashed
two airliners into the Twin Towers of the World Trade Center in New York City. Both buildings collapsed within two
hours from fire damage related to the crashes, destroying nearby buildings and damaging others. The hijackers crashed
a third airliner into the Pentagon in Arlington County, Virginia, just outside Washington D.C. The fourth plane crashed
into a field near Shanksville, Pennsylvania, after some of its passengers and flight crew attempted to retake control
of the plane, which the hijackers had redirected toward Washington D.C., to target the White House, or the U.S. Capitol.
There were no survivors from any of the flights. A total of 2,977 victims and the 19 hijackers perished in the attacks. The Authorization
for Use of Military Force Against Terrorists or "AUMF" was made law on 14 September 2001, to authorize the use of
United States Armed Forces against those responsible for the attacks on 11 September 2001. It authorized the President
to use all necessary and appropriate force against those nations, organizations, or persons he determines planned,
authorized, committed, or aided the terrorist attacks that occurred on 11 September 2001, or harbored such organizations
or persons, in order to prevent any future acts of international terrorism against the United States by such nations,
organizations or persons. Congress declared that this was intended to constitute specific statutory authorization within
the meaning of section 5(b) of the War Powers Resolution of 1973. Subsequently, in October 2001, U.S. forces (with
UK and coalition allies) invaded Afghanistan to oust the Taliban regime. On 7 October 2001, the official invasion
began with British and U.S. forces conducting airstrike campaigns over enemy targets. Kabul, the capital city of
Afghanistan, fell by mid-November. The remaining al-Qaeda and Taliban remnants fell back to the rugged mountains
of eastern Afghanistan, mainly Tora Bora. In December, Coalition forces (the U.S. and its allies) fought within that
region. It is believed that Osama bin Laden escaped into Pakistan during the battle. The Taliban regrouped in western
Pakistan and began to unleash an insurgent-style offensive against Coalition forces in late 2002. Throughout southern
and eastern Afghanistan, firefights broke out between the surging Taliban and Coalition forces. Coalition forces
responded with a series of military offensives and an increase in the number of troops in Afghanistan. In February
2010, Coalition forces launched Operation Moshtarak in southern Afghanistan along with other military offensives
in the hopes that they would destroy the Taliban insurgency once and for all. Peace talks are also underway between
Taliban affiliated fighters and Coalition forces. In September 2014, Afghanistan and the United States signed a security
agreement, which permits United States and NATO forces to remain in Afghanistan until at least 2024. The United States
and other NATO and non-NATO forces are planning to withdraw, with the Taliban claiming it has defeated the United States and NATO and the Obama Administration viewing the war as a victory. In December 2014, ISAF encased its colors, and Resolute Support began as the NATO operation in Afghanistan. United States operations within Afghanistan continue under the name "Operation Freedom's Sentinel". In January 2002, the United States Special Operations
Command, Pacific deployed to the Philippines to advise and assist the Armed Forces of the Philippines in combating
Filipino Islamist groups. The operations were mainly focused on removing the Abu Sayyaf group and Jemaah Islamiyah
(JI) from their stronghold on the island of Basilan. The second portion of the operation was conducted as a humanitarian
program called "Operation Smiles". The goal of the program was to provide medical care and services to the region
of Basilan as part of a "Hearts and Minds" program. Joint Special Operations Task Force – Philippines disbanded in
June 2014, ending a 14-year mission. After JSOTF-P disbanded, as late as November 2014, American forces continued
to operate in the Philippines under the name "PACOM Augmentation Team". On 14 September 2009, U.S. Special Forces
killed two men and wounded and captured two others near the Somali village of Baarawe. Witnesses claim that helicopters
used for the operation launched from French-flagged warships, but that could not be confirmed. A Somali-based al-Qaeda-affiliated group, Al-Shabaab, confirmed the death of "sheik commander" Saleh Ali Saleh Nabhan along with
an unspecified number of militants. Nabhan, a Kenyan, was wanted in connection with the 2002 Mombasa attacks. The
conflict in northern Mali began in January 2012 with radical Islamists (affiliated with al-Qaeda) advancing into northern Mali. The Malian government struggled to maintain full control over its territory, and the fledgling government requested support from the international community in combating the Islamist militants. In January 2013, France intervened at the Malian government's request and deployed troops into the region. It launched Operation Serval
on 11 January 2013, with the hopes of dislodging the al-Qaeda affiliated groups from northern Mali. Following the
ceasefire agreement that suspended (but did not officially end) hostilities in the 1991 Gulf War, the United States
and its allies instituted and began patrolling Iraqi no-fly zones, to protect Iraq's Kurdish and Shi'a Arab population—both
of which suffered attacks from the Hussein regime before and after the Gulf War—in Iraq's northern and southern regions,
respectively. U.S. forces continued in combat zone deployments through November 1995 and launched Operation Desert
Fox against Iraq in 1998 after it failed to meet U.S. demands of "unconditional cooperation" in weapons inspections.
The first ground attack of the 2003 invasion of Iraq came at the Battle of Umm Qasr on 21 March 2003, when a combined force of British, American
and Polish forces seized control of the port city of Umm Qasr. Baghdad, Iraq's capital city, fell to American forces
in April 2003 and Saddam Hussein's government quickly dissolved. On 1 May 2003, Bush announced that major combat
operations in Iraq had ended. However, an insurgency arose against the U.S.-led coalition and the newly developing
Iraqi military and post-Saddam government. The insurgency, which included al-Qaeda affiliated groups, led to far
more coalition casualties than the invasion. Other elements of the insurgency were led by fugitive members of President
Hussein's Ba'ath regime, which included Iraqi nationalists and pan-Arabists. Many insurgency leaders are Islamists
and claim to be fighting a religious war to reestablish the Islamic Caliphate of centuries past. Iraq's former president,
Saddam Hussein was captured by U.S. forces in December 2003. He was executed in 2006. In a major split in the ranks
of Al Qaeda's organization, the Iraqi franchise, known as Al Qaeda in Iraq, covertly invaded Syria and the Levant
and began participating in the ongoing Syrian Civil War, gaining enough support and strength to re-invade Iraq's
western provinces under the name of the Islamic State of Iraq and the Levant (ISIS/ISIL), taking over much of the
country in a blitzkrieg-like action and combining the Iraq insurgency and Syrian Civil War into a single conflict.
Due to their extreme brutality and a complete change in their overall ideology, Al Qaeda's core organization in Central
Asia eventually denounced ISIS and directed its affiliates to cut off all ties with the organization. Many analysts believe that, because of this schism, Al Qaeda and ISIL are now in competition for the title of the world's most powerful terrorist organization. The Obama administration began to reengage in Iraq with a series of airstrikes
aimed at ISIS beginning on 10 August 2014. On 9 September 2014 President Obama said that he had the authority he
needed to take action to destroy the militant group known as the Islamic State of Iraq and the Levant, citing the
2001 Authorization for Use of Military Force Against Terrorists, and thus did not require additional approval from
Congress. The following day, on 10 September 2014, President Barack Obama made a televised speech about ISIL in which he stated, "Our objective is clear: We will degrade, and ultimately destroy, ISIL through a comprehensive and sustained counter-terrorism strategy". Obama authorized the deployment of additional U.S. forces into Iraq, as well as direct military operations against ISIL within Syria. On the night of 21/22 September 2014, the United States, Saudi Arabia, Bahrain, the UAE, Jordan and Qatar started air attacks against ISIS in Syria. Following
the 11 September 2001 attacks, former President of Pakistan Pervez Musharraf sided with the U.S. against the Taliban
government in Afghanistan after an ultimatum by then U.S. President George W. Bush. Musharraf agreed to give the
U.S. the use of three airbases for Operation Enduring Freedom. United States Secretary of State Colin Powell and
other U.S. administration officials met with Musharraf. On 19 September 2001, Musharraf addressed the people of Pakistan
and stated that, while he opposed military tactics against the Taliban, Pakistan risked being endangered by an alliance
of India and the U.S. if it did not cooperate. In 2006, Musharraf testified that this stance was pressured by threats
from the U.S., and revealed in his memoirs that he had "war-gamed" the United States as an adversary and decided
that it would end in a loss for Pakistan. On 12 January 2002, Musharraf gave a speech against Islamic extremism.
He unequivocally condemned all acts of terrorism and pledged to combat Islamic extremism and lawlessness within Pakistan
itself. He stated that his government was committed to rooting out extremism and made it clear that the banned militant
organizations would not be allowed to resurface under any new name. He said, "the recent decision to ban extremist
groups promoting militancy was taken in the national interest after thorough consultations. It was not taken under
any foreign influence". In 2002, the Musharraf-led government took a firm stand against the jihadi organizations
and groups promoting extremism, and arrested Maulana Masood Azhar, head of the Jaish-e-Mohammed, and Hafiz Muhammad
Saeed, chief of the Lashkar-e-Taiba, and took dozens of activists into custody. An official ban was imposed on the
groups on 12 January. Later that year, the Saudi-born Zayn al-Abidn Muhammed Hasayn Abu Zubaydah was arrested by
Pakistani officials during a series of joint U.S.-Pakistan raids. Zubaydah is said to have been a high-ranking al-Qaeda
official with the title of operations chief and in charge of running al-Qaeda training camps. Other prominent al-Qaeda
members were arrested in the following two years, namely Ramzi bin al-Shibh, who is known to have been a financial
backer of al-Qaeda operations, and Khalid Sheikh Mohammed, who at the time of his capture was the third highest-ranking
official in al-Qaeda and had been directly in charge of the planning for the 11 September attacks. The use of drones
by the Central Intelligence Agency in Pakistan to carry out operations associated with the Global War on Terror has sparked debate over sovereignty and the laws of war. The U.S. Government uses the CIA rather than the U.S. Air Force for strikes in Pakistan in order to avoid breaching sovereignty through military invasion. The United States was criticized in a report on drone warfare and aerial sovereignty for abusing the term 'Global War on Terror' to carry out military operations through government agencies without formally declaring war. In a 'Letter to American
People' written by Osama bin Laden in 2002, he stated that one of the reasons he was fighting America is because
of its support of India on the Kashmir issue. While on a trip to Delhi in 2002, U.S. Secretary of Defense Donald
Rumsfeld suggested that Al-Qaeda was active in Kashmir, though he did not have any hard evidence. An investigation
in 2002 unearthed evidence that Al-Qaeda and its affiliates were prospering in Pakistan-administered Kashmir with
tacit approval of Pakistan's National Intelligence agency Inter-Services Intelligence. A team of Special Air Service
and Delta Force was sent into Indian-administered Kashmir in 2002 to hunt for Osama bin Laden after reports that
he was being sheltered by the Kashmiri militant group Harkat-ul-Mujahideen. U.S. officials believed that Al-Qaeda
was helping organize a campaign of terror in Kashmir in order to provoke conflict between India and Pakistan. Fazlur
Rehman Khalil, the leader of the Harkat-ul-Mujahideen, signed al-Qaeda's 1998 declaration of holy war, which called
on Muslims to attack all Americans and their allies. Indian sources claimed that, in 2006, Al-Qaeda announced it had established a wing in Kashmir; this worried the Indian government. India also claimed that Al-Qaeda has strong ties
with the Kashmir militant groups Lashkar-e-Taiba and Jaish-e-Mohammed in Pakistan. While on a visit to Pakistan in
January 2010, U.S. Defense secretary Robert Gates stated that Al-Qaeda was seeking to destabilize the region and
planning to provoke a nuclear war between India and Pakistan. In September 2009, a U.S. drone strike reportedly killed
Ilyas Kashmiri, who was the chief of Harkat-ul-Jihad al-Islami, a Kashmiri militant group associated with Al-Qaeda.
Kashmiri was described by Bruce Riedel as a 'prominent' Al-Qaeda member, while others described him as the head of
military operations for Al-Qaeda. Waziristan had become the new battlefield for Kashmiri militants, who were now fighting NATO in support of Al-Qaeda. On 8 July 2012, Al-Badar Mujahideen, a breakaway faction of the Kashmir-centric terror group Hizbul Mujahideen, called at the conclusion of its two-day Shuhada Conference for the mobilisation of resources for the continuation of jihad in Kashmir. In the following months, NATO took a wide range of measures to respond to the
threat of terrorism. On 22 November 2002, the member states of the Euro-Atlantic Partnership Council (EAPC) decided
on a Partnership Action Plan against Terrorism, which explicitly states, "EAPC States are committed to the protection
and promotion of fundamental freedoms and human rights, as well as the rule of law, in combating terrorism." NATO
started naval operations in the Mediterranean Sea designed to prevent the movement of terrorists or weapons of mass
destruction as well as to enhance the security of shipping in general called Operation Active Endeavour. Support
for the U.S. cooled when America made clear its determination to invade Iraq in late 2002. Even so, many of the "coalition
of the willing" countries that unconditionally supported the U.S.-led military action have sent troops to Afghanistan,
in particular neighboring Pakistan, which has disowned its earlier support for the Taliban and contributed tens of thousands
of soldiers to the conflict. Pakistan was also engaged in the War in North-West Pakistan (Waziristan War). Supported
by U.S. intelligence, Pakistan was attempting to remove the Taliban insurgency and al-Qaeda element from the northern
tribal areas. The British 16th Air Assault Brigade (later reinforced by Royal Marines) formed the core of the force
in southern Afghanistan, along with troops and helicopters from Australia, Canada and the Netherlands. The initial
force consisted of roughly 3,300 British, 2,000 Canadian, 1,400 from the Netherlands and 240 from Australia, along
with special forces from Denmark and Estonia and small contingents from other nations. The monthly supply of cargo
containers through the Pakistani route to ISAF in Afghanistan is over 4,000, costing around 12 billion Pakistani rupees.
In addition to military efforts abroad, in the aftermath of 9/11 the Bush Administration increased domestic efforts
to prevent future attacks. Various government bureaucracies that handled security and military functions were reorganized.
A new cabinet-level agency called the United States Department of Homeland Security was created in November 2002
to lead and coordinate the largest reorganization of the U.S. federal government since the consolidation of the armed
forces into the Department of Defense. The USA PATRIOT Act of October 2001 dramatically reduces
restrictions on law enforcement agencies' ability to search telephone, e-mail communications, medical, financial,
and other records; eases restrictions on foreign intelligence gathering within the United States; expands the Secretary
of the Treasury's authority to regulate financial transactions, particularly those involving foreign individuals
and entities; and broadens the discretion of law enforcement and immigration authorities in detaining and deporting
immigrants suspected of terrorism-related acts. The act also expanded the definition of terrorism to include domestic
terrorism, thus enlarging the number of activities to which the USA PATRIOT Act's expanded law enforcement powers
could be applied. A new Terrorist Finance Tracking Program monitored the movements of terrorists' financial resources
(discontinued after being revealed by The New York Times). Global telecommunication usage, including communications with no links to terrorism, is being collected and monitored through the NSA electronic surveillance program. The Patriot Act is still in effect. Political interest groups have stated that these laws remove important restrictions on governmental authority and are a dangerous encroachment on civil liberties and possibly unconstitutional violations of the Fourth
Amendment. On 30 July 2003, the American Civil Liberties Union (ACLU) filed the first legal challenge against Section
215 of the Patriot Act, claiming that it allows the FBI to violate a citizen's First Amendment rights, Fourth Amendment
rights, and right to due process, by granting the government the right to search a person's business, bookstore,
and library records in a terrorist investigation, without disclosing to the individual that records were being searched.
Also, governing bodies in a number of communities have passed symbolic resolutions against the act. In 2005, the
UN Security Council adopted Resolution 1624 concerning incitement to commit acts of terrorism and the obligations
of countries to comply with international human rights laws. Although such resolutions require mandatory annual reports
on counter-terrorism activities by adopting nations, the United States and Israel have both declined to submit reports.
In the same year, the United States Department of Defense and the Chairman of the Joint Chiefs of Staff issued a
planning document, by the name "National Military Strategic Plan for the War on Terrorism", which stated that it
constituted the "comprehensive military plan to prosecute the Global War on Terror for the Armed Forces of the United
States...including the findings and recommendations of the 9/11 Commission and a rigorous examination with the Department
of Defense". Criticism of the War on Terror addresses the morality, efficiency, economics, and other questions surrounding it, and is also directed at the phrase itself, calling it a misnomer. The notion of a "war" against
"terrorism" has proven highly contentious, with critics charging that it has been exploited by participating governments
to pursue long-standing policy/military objectives, reduce civil liberties, and infringe upon human rights. It is
argued that the term war is not appropriate in this context (as in War on Drugs), since there is no identifiable
enemy, and that it is unlikely international terrorism can be brought to an end by military means. Other critics,
such as Francis Fukuyama, note that "terrorism" is not an enemy but a tactic; calling it a "war on terror" obscures differences between conflicts such as anti-occupation insurgencies and international mujahideen. Shirley Williams maintains that the military presence in Iraq and Afghanistan, and its associated collateral damage, increases resentment and terrorist threats against the West. Critics also point to perceived U.S. hypocrisy and media-induced hysteria, and argue that differences in foreign and security policy have damaged America's image in most of the world.
Labour runs a minority government in the Welsh Assembly under Carwyn Jones, is the largest opposition party in the Scottish
Parliament and has twenty MEPs in the European Parliament, sitting in the Socialists and Democrats Group. The party
also organises in Northern Ireland, but does not contest elections to the Northern Ireland Assembly. The Labour Party
is a full member of the Party of European Socialists and Progressive Alliance, and holds observer status in the Socialist
International. In September 2015, Jeremy Corbyn was elected Leader of the Labour Party. The Labour Party's origins
lie in the late 19th century, when it became apparent that there was a need for a new political party to represent
the interests and needs of the urban proletariat, a demographic which had increased in number and had recently been
given the franchise. Some members of the trades union movement became interested in moving into the political field,
and after further extensions of the voting franchise in 1867 and 1885, the Liberal Party endorsed some trade-union
sponsored candidates. The first Lib–Lab candidate to stand was George Odger in the Southwark by-election of 1870.
In addition, several small socialist groups had formed around this time, with the intention of linking the movement
to political policies. Among these were the Independent Labour Party, the intellectual and largely middle-class Fabian
Society, the Marxist Social Democratic Federation and the Scottish Labour Party. In 1899, a Doncaster member of the
Amalgamated Society of Railway Servants, Thomas R. Steels, proposed in his union branch that the Trades Union Congress
call a special conference to bring together all left-wing organisations and form them into a single body that would
sponsor Parliamentary candidates. The motion was passed at all stages by the TUC, and the proposed conference was
held at the Memorial Hall on Farringdon Street on 26 and 27 February 1900. The meeting was attended by a broad spectrum
of working-class and left-wing organisations — trades unions represented about one third of the membership of the
TUC delegates. After a debate, the 129 delegates passed Keir Hardie's motion to establish "a distinct Labour group in
Parliament, who shall have their own whips, and agree upon their policy, which must embrace a readiness to cooperate
with any party which for the time being may be engaged in promoting legislation in the direct interests of labour."
This created an association called the Labour Representation Committee (LRC), meant to coordinate attempts to support
MPs sponsored by trade unions and represent the working-class population. It had no single leader, and in the absence
of one, the Independent Labour Party nominee Ramsay MacDonald was elected as Secretary. He had the difficult task
of keeping the various strands of opinions in the LRC united. The October 1900 "Khaki election" came too soon for
the new party to campaign effectively; total expenses for the election only came to £33. Only 15 candidatures were
sponsored, but two were successful; Keir Hardie in Merthyr Tydfil and Richard Bell in Derby. Support for the LRC
was boosted by the 1901 Taff Vale Case, a dispute between strikers and a railway company that ended with the union
being ordered to pay £23,000 damages for a strike. The judgement effectively made strikes illegal since employers
could recoup the cost of lost business from the unions. The apparent acquiescence of the Conservative Government
of Arthur Balfour to industrial and business interests (traditionally the allies of the Liberal Party in opposition
to the Conservatives' landed interests) intensified support for the LRC against a government that appeared to have
little concern for the industrial proletariat and its problems. In their first meeting after the election the group's
Members of Parliament decided to adopt the name "The Labour Party" formally (15 February 1906). Keir Hardie, who
had taken a leading role in getting the party established, was elected as Chairman of the Parliamentary Labour Party
(in effect, the Leader), although only by one vote over David Shackleton after several ballots. In the party's early
years the Independent Labour Party (ILP) provided much of its activist base as the party did not have individual
membership until 1918 but operated as a conglomerate of affiliated bodies. The Fabian Society provided much of the
intellectual stimulus for the party. One of the first acts of the new Liberal Government was to reverse the Taff
Vale judgement. The 1910 election saw 42 Labour MPs elected to the House of Commons, a significant victory since,
a year before the election, the House of Lords had passed the Osborne judgment ruling that Trades Unions in the United
Kingdom could no longer donate money to fund the election campaigns and wages of Labour MPs. The governing Liberals
were unwilling to repeal this judicial decision with primary legislation. The height of Liberal compromise was to
introduce a wage for Members of Parliament to remove the need to involve the Trade Unions. By 1913, faced with the
opposition of the largest Trades Unions, the Liberal government passed the Trade Disputes Act to allow Trade Unions
to fund Labour MPs once more. The Communist Party of Great Britain was refused affiliation to the Labour Party between
1921 and 1923. Meanwhile, the Liberal Party declined rapidly, and the party also suffered a catastrophic split which
allowed the Labour Party to gain much of the Liberals' support. With the Liberals thus in disarray, Labour won 142
seats in 1922, making it the second largest political group in the House of Commons and the official opposition to
the Conservative government. After the election the now-rehabilitated Ramsay MacDonald was voted the first official
leader of the Labour Party. The 1923 general election was fought on the Conservatives' protectionist proposals but,
although they got the most votes and remained the largest party, they lost their majority in parliament, necessitating
the formation of a government supporting free trade. Thus, with the acquiescence of Asquith's Liberals, Ramsay MacDonald
became the first ever Labour Prime Minister in January 1924, forming the first Labour government, despite Labour
only having 191 MPs (less than a third of the House of Commons). The government collapsed after only nine months
when the Liberals voted for a Select Committee inquiry into the Campbell Case, a vote which MacDonald had declared
to be a vote of confidence. The ensuing 1924 general election saw the publication, four days before polling day,
of the Zinoviev letter, in which Moscow talked about a Communist revolution in Britain. The letter had little impact
on the Labour vote—which held up. It was the collapse of the Liberal party that led to the Conservative landslide.
The Conservatives were returned to power although Labour increased its vote from 30.7% to a third of the popular
vote, most Conservative gains being at the expense of the Liberals. However many Labourites for years blamed their
defeat on foul play (the Zinoviev Letter), thereby according to A. J. P. Taylor misunderstanding the political forces
at work and delaying needed reforms in the party. As the economic situation worsened MacDonald agreed to form a "National
Government" with the Conservatives and the Liberals. On 24 August 1931 MacDonald submitted the resignation of his
ministers and led a small number of his senior colleagues in forming the National Government together with the other
parties. This caused great anger among those within the Labour Party who felt betrayed by MacDonald's actions: he
and his supporters were promptly expelled from the Labour Party and formed a separate National Labour Organisation.
The remaining Labour Party MPs (led again by Arthur Henderson) and a few Liberals went into opposition. The ensuing
1931 general election resulted in overwhelming victory for the National Government and disaster for the Labour Party
which won only 52 seats, 225 fewer than in 1929. The nationalist parties, in turn, demanded devolution to their respective
constituent countries in return for their supporting the government. When referendums for Scottish and Welsh devolution
were held in March 1979, Welsh devolution was rejected outright while the Scottish referendum returned a narrow majority
in favour without reaching the required threshold of 40% support. When the Labour government duly refused to push
ahead with setting up the proposed Scottish Assembly, the SNP withdrew its support for the government: this finally
brought the government down as it triggered a vote of confidence in Callaghan's government that was lost by a single
vote on 28 March 1979, necessitating a general election. Callaghan had been widely expected to call a general election
in the autumn of 1978, when most opinion polls showed Labour to have a narrow lead. However, he decided to extend his wage restraint policy for another year, hoping that the economy would be in better shape for a 1979 election. But
during the winter of 1978–79 there were widespread strikes among lorry drivers, railway workers, car workers and
local government and hospital workers in favour of higher pay-rises that caused significant disruption to everyday
life. These events came to be dubbed the "Winter of Discontent". After its defeat in the 1979 general election the
Labour Party underwent a period of internal rivalry between the left represented by Tony Benn, and the right represented
by Denis Healey. The election of Michael Foot as leader in 1980, and the leftist policies he espoused, such as unilateral
nuclear disarmament, leaving the European Economic Community (EEC) and NATO, closer governmental influence in the
banking system, the creation of a national minimum wage and a ban on fox hunting led in 1981 to four former cabinet
ministers from the right of the Labour Party (Shirley Williams, William Rodgers, Roy Jenkins and David Owen) forming
the Social Democratic Party. Benn was only narrowly defeated by Healey in a bitterly fought deputy leadership election
in 1981 after the introduction of an electoral college intended to widen the voting franchise to elect the leader
and their deputy. By 1982, the National Executive Committee had concluded that the entryist Militant tendency group
were in contravention of the party's constitution. The Militant newspaper's five member editorial board were expelled
on 22 February 1983. Foot resigned and was replaced as leader by Neil Kinnock, with Roy Hattersley as his deputy.
The new leadership progressively dropped unpopular policies. The miners' strike of 1984–85 over coal mine closures,
for which miners' leader Arthur Scargill was blamed, and the Wapping dispute led to clashes with the left of the
party, and negative coverage in most of the press. Tabloid vilification of the so-called loony left continued to
taint the parliamentary party by association with the activities of 'extra-parliamentary' militants in local government.
In the 2010 general election on 6 May that year, Labour with 29.0% of the vote won the second largest number of seats
(258). The Conservatives with 36.5% of the vote won the largest number of seats (307), but no party had an overall
majority, meaning that Labour could still remain in power if they managed to form a coalition with at least one smaller
party. However, the Labour Party would have had to form a coalition with more than one other smaller party to gain
an overall majority; anything less would result in a minority government. On 10 May 2010, after talks to form a coalition
with the Liberal Democrats broke down, Brown announced his intention to stand down as Leader before the Labour Party
Conference but a day later resigned as both Prime Minister and party leader. Finance proved a major problem for the
Labour Party during this period; a "cash for peerages" scandal under Blair resulted in the drying up of many major
sources of donations. Declining party membership, partially due to the reduction of activists' influence upon policy-making
under the reforms of Neil Kinnock and Blair, also contributed to financial problems. Between January and March 2008,
the Labour Party received just over £3 million in donations and was £17 million in debt, compared to the Conservatives'
£6 million in donations and £12 million in debt. Labour improved its performance in 1987, gaining 20 seats and so
reducing the Conservative majority from 143 to 102. They were now firmly re-established as the second political party
in Britain as the Alliance had once again failed to make a breakthrough with seats. A merger of the SDP and Liberals
formed the Liberal Democrats. Following the 1987 election, the National Executive Committee resumed disciplinary
action against members of Militant, who remained in the party, leading to further expulsions of their activists and
the two MPs who supported the group. The "yo-yo" in the opinion polls continued into 1992, though after November
1990 any Labour lead in the polls was rarely sufficient for a majority. Major resisted Kinnock's calls for a general
election throughout 1991. Kinnock campaigned on the theme "It's Time for a Change", urging voters to elect a new
government after more than a decade of unbroken Conservative rule. However, the Conservatives themselves had undergone
a dramatic change of their own in the replacement of Thatcher by Major, at least in terms of style if not substance. From the outset, it was clearly a well-received change, as Labour's 14-point lead in the November 1990 "Poll of Polls" was replaced by an 8% Tory lead a month later. After Labour's defeat at the 1992 general election, Kinnock resigned as leader and was replaced by John Smith. Smith's
leadership once again saw the re-emergence of tension between those on the party's left and those identified as "modernisers",
both of whom advocated radical revisions of the party's stance albeit in different ways. At the 1993 conference,
Smith successfully changed the party rules and lessened the influence of the trade unions on the selection of candidates
to stand for Parliament by introducing a one member, one vote system called "OMOV" — but only barely, after a barnstorming
speech by John Prescott which required Smith to compromise on other individual negotiations. The Black Wednesday
economic disaster in September 1992 left the Conservative government's reputation for monetary excellence in tatters,
and by the end of that year Labour had a comfortable lead over the Tories in the opinion polls. Although the recession
was declared over in April 1993 and a period of strong and sustained economic growth followed, coupled with a relatively
swift fall in unemployment, the Labour lead in the opinion polls remained strong. However, Smith died from a heart
attack in May 1994. "New Labour" first emerged as an alternative branding for the Labour Party, originating in a conference slogan first used in 1994 and later appearing in a draft manifesto published by the party in 1996, called New Labour, New Life For Britain. It was a continuation of the trend that had begun under
the leadership of Neil Kinnock. "New Labour" as a name has no official status, but remains in common use to distinguish
modernisers from those holding to more traditional positions, normally referred to as "Old Labour". A perceived turning
point was when Blair controversially allied himself with US President George W. Bush in supporting the Iraq War,
which caused him to lose much of his political support. The UN Secretary-General, among many, considered the war
illegal. The Iraq War was deeply unpopular in most western countries, with Western governments divided in their support
and under pressure from worldwide popular protests. The decisions that led up to the Iraq war and its subsequent
conduct are currently the subject of Sir John Chilcot's Iraq Inquiry. Blair announced in September 2006 that he would
quit as leader within the year, though he had been under pressure to quit earlier than May 2007 in order to get a
new leader in place before the May elections which were expected to be disastrous for Labour. In the event, the party
did lose power in Scotland to a minority Scottish National Party government at the 2007 elections and, shortly after
this, Blair resigned as Prime Minister and was replaced by his Chancellor, Gordon Brown. Although the party experienced
a brief rise in the polls after this, its popularity soon slumped to its lowest level since the days of Michael Foot.
During May 2008, Labour suffered heavy defeats in the London mayoral election, local elections and the loss in the
Crewe and Nantwich by-election, culminating in the party registering its worst opinion poll result (23%) since records began in 1943, with many citing Brown's leadership as a key factor. Membership of the party also reached
a low ebb, falling to 156,205 by the end of 2009: less than 40 per cent of the 405,000 peak reached in 1997 and thought to be the lowest total since the party was founded. Clement Attlee's government proved one of the most radical British governments
of the 20th century, enacting Keynesian economic policies, presiding over a policy of nationalising major industries
and utilities including the Bank of England, coal mining, the steel industry, electricity, gas, and inland transport
(including railways, road haulage and canals). It developed and implemented the "cradle to grave" welfare state conceived
by the economist William Beveridge. To this day, the party considers the 1948 creation of Britain's publicly funded
National Health Service (NHS) under health minister Aneurin Bevan its proudest achievement. Attlee's government also
began the process of dismantling the British Empire when it granted independence to India and Pakistan in 1947, followed
by Burma (Myanmar) and Ceylon (Sri Lanka) the following year. At a secret meeting in January 1947, Attlee and six
cabinet ministers, including Foreign Secretary Ernest Bevin, decided to proceed with the development of Britain's
nuclear weapons programme, in opposition to the pacifist and anti-nuclear stances of a large element inside the Labour
Party. Labour went on to win the 1950 general election, but with a much reduced majority of five seats. Soon afterwards,
defence became a divisive issue within the party, especially defence spending (which reached a peak of 14% of GDP
in 1951 during the Korean War), straining public finances and forcing savings elsewhere. The Chancellor of the Exchequer,
Hugh Gaitskell, introduced charges for NHS dentures and spectacles, causing Bevan, along with Harold Wilson (then
President of the Board of Trade), to resign over the dilution of the principle of free treatment on which the NHS
had been established. Wilson's government was responsible for a number of sweeping social and educational reforms
under the leadership of Home Secretary Roy Jenkins, such as the abolition of the death penalty in 1965, the legalisation
of abortion and homosexuality (initially only for men aged 21 or over, and only in England and Wales) in 1967 and
the abolition of theatre censorship in 1968. Comprehensive education was expanded and the Open University created.
However Wilson's government had inherited a large trade deficit that led to a currency crisis and ultimately a doomed
attempt to stave off devaluation of the pound. Labour went on to lose the 1970 general election to the Conservatives
under Edward Heath. After losing the 1970 general election, Labour returned to opposition, but retained Harold Wilson
as Leader. Heath's government soon ran into trouble over Northern Ireland and a dispute with miners in 1973 which
led to the "three-day week". The 1970s proved a difficult time to be in government for both the Conservatives and
Labour due to the 1973 oil crisis which caused high inflation and a global recession. The Labour Party was returned
to power again under Wilson a few weeks after the February 1974 general election, forming a minority government with
the support of the Ulster Unionists. The Conservatives were unable to form a government alone as they had fewer seats
despite receiving more votes numerically. It was the first general election since 1924 in which both main parties
had received less than 40% of the popular vote and the first of six successive general elections in which Labour
failed to reach 40% of the popular vote. In a bid to gain a majority, a second election was soon called for October
1974 in which Labour, still with Harold Wilson as leader, won a majority of three, gaining just 18 seats taking its
total to 319. Fear of advances by the nationalist parties, particularly in Scotland, led to the suppression of a
report from Scottish Office economist Gavin McCrone that suggested that an independent Scotland would be 'chronically
in surplus'. By 1977 by-election losses and defections to the breakaway Scottish Labour Party left Callaghan heading
a minority government, forced to trade with smaller parties in order to govern. An arrangement negotiated in 1977
with Liberal leader David Steel, known as the Lib-Lab Pact, ended after one year. Deals were then forged with various
small parties including the Scottish National Party and the Welsh nationalist Plaid Cymru, prolonging the life of
the government. Harriet Harman became the Leader of the Opposition and acting Leader of the Labour Party following
the resignation of Gordon Brown on 11 May 2010, pending a leadership election subsequently won by Ed Miliband. Miliband
emphasised "responsible capitalism" and greater state intervention to change the balance of the UK economy away from
financial services. Tackling vested interests and opening up closed circles in British society were also themes he
returned to a number of times. Miliband also argued for greater regulation on banks and the energy companies. The
party's performance held up in local elections in 2012 with Labour consolidating its position in the North and Midlands,
while also regaining some ground in Southern England. In Wales the party enjoyed good successes, regaining control
of most Welsh Councils lost in 2008, including the capital city, Cardiff. In Scotland, Labour held overall control
of Glasgow City Council despite some predictions to the contrary, and also enjoyed a +3.26 swing across Scotland.
In London, results were mixed for the party; Ken Livingstone lost the election for Mayor of London, but the party
gained its highest ever representation in the Greater London Authority in the concurrent assembly election. On 1
March 2014, at a special conference the party reformed internal Labour election procedures, including replacing the
electoral college system for selecting new leaders with a "one member, one vote" system following the recommendation
of a review by former general-secretary Ray Collins. Mass membership would be encouraged by allowing "registered
supporters" to join at a low cost, as well as full membership. Members from the trade unions would also have to explicitly
"opt in" rather than "opt out" of paying a political levy to Labour. The 2015 General Election resulted in a net
loss of seats throughout Great Britain, with Labour representation falling to 232 seats in the House of Commons.
The Party lost 40 of its 41 seats in Scotland in the face of record-breaking swings to the Scottish National Party.
The scale of the decline in Labour's support was much greater than what had occurred at the 2011 elections for the
Scottish parliament. Though Labour gained more than 20 seats in England and Wales, mostly from the Liberal Democrats
but also from the Conservative Party, it lost more seats to Conservative challengers, including that of Ed Balls,
for net losses overall. The Labour Party is considered to be left of centre. It was initially formed as a means for
the trade union movement to establish political representation for itself at Westminster. It only gained a 'socialist'
commitment with the original party constitution of 1918. That 'socialist' element, the original Clause IV, was seen
by its strongest advocates as a straightforward commitment to the "common ownership", or nationalisation, of the
"means of production, distribution and exchange". Although about a third of British industry was taken into public
ownership after the Second World War, and remained so until the 1980s, the right of the party were questioning the
validity of expanding on this objective by the late 1950s. Influenced by Anthony Crosland's book, The Future of Socialism
(1956), the circle around party leader Hugh Gaitskell felt that the commitment was no longer necessary. While an
attempt to remove Clause IV from the party constitution in 1959 failed, Tony Blair and the 'modernisers' saw the
issue as putting off potential voters, and were successful thirty-five years later, with only limited opposition
from senior figures in the party. From the late-1980s onwards, the party adopted free market policies, leading many
observers to describe the Labour Party as social democratic or the Third Way, rather than democratic socialist. Other
commentators go further and argue that traditional social democratic parties across Europe, including the British
Labour Party, have been so deeply transformed in recent years that it is no longer possible to describe them ideologically
as 'social democratic', and claim that this ideological shift has put new strains on the party's traditional relationship
with the trade unions. Historically within the party, differentiation was made between the "soft left" and the "hard
left", with the former embracing more moderately social democratic views while the hard left subscribed to a strongly
socialist, even Marxist, ideology. Members on the hard left were often disparaged as the "loony left," particularly
in the popular media. The term "hard left" was sometimes used in the 1980s to describe Trotskyist groups such as
the Militant tendency, Socialist Organiser and Socialist Action. In more recent times, Members of Parliament in the
Socialist Campaign Group and the Labour Representation Committee are seen as constituting a hard left in contrast
to a soft left represented by organisations such as Compass and the magazine Tribune. Labour has long been identified
with red, a political colour traditionally affiliated with socialism and the labour movement. The party conference
in 1931 passed a motion "That this conference adopts Party Colours, which should be uniform throughout the country,
colours to be red and gold". Since the party's inception, the red flag has been Labour's official symbol; the flag
has been associated with socialism and revolution ever since the 1789 French Revolution and the revolutions of 1848.
The red rose, a symbol of social democracy, was adopted as the party symbol in 1986 as part of a rebranding exercise
and is now incorporated into the party logo. The party's decision-making bodies on a national level formally include
the National Executive Committee (NEC), Labour Party Conference and National Policy Forum (NPF)—although in practice
the Parliamentary leadership has the final say on policy. The 2008 Labour Party Conference was the first at which
affiliated trade unions and Constituency Labour Parties did not have the right to submit motions on contemporary
issues that would previously have been debated. Labour Party conferences now include more "keynote" addresses, guest
speakers and question-and-answer sessions, while specific discussion of policy now takes place in the National Policy
Forum. For many years Labour held to a policy of not allowing residents of Northern Ireland to apply for membership,
instead supporting the Social Democratic and Labour Party (SDLP) which informally takes the Labour whip in the House
of Commons. The 2003 Labour Party Conference accepted legal advice that the party could not continue to prohibit
residents of the province joining, and whilst the National Executive has established a regional constituency party
it has not yet agreed to contest elections there. In December 2015 a meeting of the members of the Labour Party in
Northern Ireland decided unanimously to contest the elections for the Northern Ireland Assembly held in May 2016.
As it was founded by the unions to represent the interests of working-class people, Labour's link with the unions
has always been a defining characteristic of the party. In recent years this link has come under increasing strain,
with the RMT being expelled from the party in 2004 for allowing its branches in Scotland to affiliate to the left-wing
Scottish Socialist Party. Other unions have also faced calls from members to reduce financial support for the Party
and seek more effective political representation for their views on privatisation, public spending cuts and the anti-trade
union laws. Unison and GMB have both threatened to withdraw funding from constituency MPs and Dave Prentis of UNISON
has warned that the union will write "no more blank cheques" and is dissatisfied with "feeding the hand that bites
us". Union funding was redesigned in 2013 after the Falkirk candidate-selection controversy. The party was a member
of the Labour and Socialist International between 1923 and 1940. Since 1951 the party has been a member of the Socialist
International, which was founded thanks to the efforts of Clement Attlee's leadership. However, in February 2013,
the Labour Party NEC decided to downgrade participation to observer membership status, "in view of ethical concerns,
and to develop international co-operation through new networks". Labour was a founding member of the Progressive
Alliance international founded in co-operation with the Social Democratic Party of Germany and other social-democratic
parties on 22 May 2013.
Estonia (i/ɛˈstoʊniə/; Estonian: Eesti [ˈeːsti]), officially the Republic of Estonia (Estonian: Eesti Vabariik), is a country
in the Baltic region of Northern Europe. It is bordered to the north by the Gulf of Finland, to the west by the Baltic
Sea, to the south by Latvia (343 km), and to the east by Lake Peipus and Russia (338.6 km). Across the Baltic Sea
lie Sweden to the west and Finland to the north. The territory of Estonia consists of a mainland and 2,222 islands
and islets in the Baltic Sea, covering 45,339 km2 (17,505 sq mi) of land, and is influenced by a humid continental
climate. After centuries of Danish, Swedish and German rule the native Estonians started to yearn for independence
during the period of national awakening while being governed by the Russian Empire. Established on 24 February 1918,
the Republic of Estonia came into existence towards the end of World War I. During World War II, Estonia was occupied by the Soviet Union in 1940, by Nazi Germany a year later, and again by the Soviet Union in 1944, which established the Estonian Soviet Socialist Republic. In 1988, during the Singing Revolution, the Estonian SSR issued the Estonian Sovereignty Declaration in defiance of Soviet rule. Estonia restored its independence on the night of 20 August 1991, during the failed Soviet coup attempt. A developed country with an advanced, high-income economy and high
living standards, Estonia ranks very high in the Human Development Index, and performs favourably in measurements
of economic freedom, civil liberties, education, and press freedom (third in the world in 2012). Estonia has been
among the fastest growing economies in the European Union and is a part of the World Trade Organization and the Nordic
Investment Bank. Estonia is often described as one of the most internet-focused countries in Europe. In the first
centuries AD, political and administrative subdivisions began to emerge in Estonia. Two larger subdivisions appeared:
the province (Estonian: kihelkond) and the land (Estonian: maakond). Several elderships or villages made up a province.
Nearly all provinces had at least one fortress. The elder, the highest administrative official, directed
the defense of the local area. By the thirteenth century Estonia consisted of the following provinces: Revala, Harjumaa,
Saaremaa, Hiiumaa, Läänemaa, Alempois, Sakala, Ugandi, Jogentagana, Soopoolitse, Vaiga, Mõhu, Nurmekund, Järvamaa
and Virumaa. The Oeselians or Osilians (Estonian saarlased; singular: saarlane) were a historical subdivision of
Estonians inhabiting Saaremaa (Danish: Øsel; German: Ösel; Swedish: Ösel), an Estonian island in the Baltic Sea.
They were first mentioned as early as the second century AD in Ptolemy's Geography III. The Oeselians were known
in the Old Norse Icelandic Sagas and in Heimskringla as Víkingr frá Esthland (Estonian Vikings). Their sailing vessels
were called pirate ships by Henry of Latvia in his Latin chronicles written at the beginning of the 13th century.
Perhaps the most famous raid by Oeselian pirates occurred in 1187, with the attack on the Swedish town of Sigtuna
by Finnic raiders from Couronia and Oesel. Among the casualties of this raid was the Swedish archbishop Johannes.
The city remained occupied for some time, contributing to its decline as a center of commerce in the 13th century
and the rise of Uppsala, Visby, Kalmar and Stockholm. The Livonian Chronicle describes the Oeselians as using two
kinds of ships, the piratica and the liburna. The former was a warship, the latter mainly a merchant ship. A piratica
could carry approximately 30 men and had a high prow shaped like a dragon or a snakehead and a rectangular sail.
Viking-age treasures from Estonia mostly contain silver coins and bars. Saaremaa has the richest finds of Viking
treasures after Gotland in Sweden. This strongly suggests that Estonia was an important transit country during the
Viking era. The superior god of Oeselians as described by Henry of Latvia was called Tharapita. According to the
legend in the chronicle, Tharapita was born on a forested mountain in Virumaa (Latin: Vironia), mainland Estonia, from where he flew to Oesel (Saaremaa). The name Taarapita has been interpreted as "Taara, help!"/"Thor, help!" (Taara a(v)ita in Estonian) or "Taara keeper"/"Thor keeper" (Taara pidaja). Taara is associated with the Scandinavian god Thor. The
story of Tharapita's or Taara's flight from Vironia to Saaremaa has been associated with a major meteor disaster
estimated to have happened in 660 ± 85 BC that formed Kaali crater in Saaremaa. The capital of Danish Estonia (Danish:
Hertugdømmet Estland) was Reval (Tallinn), founded at the place of Lyndanisse after the invasion of 1219. The Danes
built the fortress of Castrum Danorum at Toompea Hill. Estonians still call their capital "Tallinn", which according
to legend derives from Taani linna (meaning Danish town or castle). Reval was granted Lübeck city rights (1248) and
joined the Hanseatic League. Even today, Danish influence can be seen in heraldic symbols. The Danish cross is on
the city of Tallinn's coat of arms, and Estonia's coat of arms displays three lions similar to those found on the
Danish coat of arms. On St. George's Night (Estonian: Jüriöö ülestõus) 23 April 1343, the indigenous Estonian population
in the Duchy of Estonia, the Bishopric of Ösel-Wiek and the insular territories of the State of the Teutonic Order
tried to rid themselves of the Danish and German rulers and landlords, who had conquered the country in the 13th
century during the Livonian crusade, and to eradicate the non-indigenous Christian religion. After initial success
the revolt was ended by the invasion of the Teutonic Order. In 1346 the Duchy of Estonia was sold for 19,000 Köln
marks to the Teutonic Order by the King of Denmark. The shift of sovereignty from Denmark to the State of the Teutonic
Order took place on 1 November 1346. From 1228, after the Livonian Crusade, through the 1560s, Estonia was part
of Terra Mariana, established on 2 February 1207 as a principality of the Holy Roman Empire and proclaimed by Pope
Innocent III in 1215 as subject to the Holy See. The southern parts of the country were conquered by Livonian Brothers
of the Sword who joined the Teutonic Order in 1237 and became its branch known as the Livonian Order. The Duchy of
Estonia was created out of the northern parts of the country and was a direct dominion of the King of Denmark from
1219 until 1346, when it was sold to the Teutonic Order and became part of the Ordenstaat. In 1343, the people of
northern Estonia and Saaremaa rebelled against German rule in the St. George's Night Uprising, which was put down
by 1345. The unsuccessful rebellion led to a consolidation of power for the Baltic German minority. For the subsequent
centuries they remained the ruling elite in both cities and in the countryside. After the decline of the Teutonic
Order following its defeat in the Battle of Grunwald in 1410, and the defeat of the Livonian Order in the Battle
of Swienta on 1 September 1435, the Livonian Confederation Agreement was signed on 4 December 1435. The Livonian
Confederation ceased to exist during the Livonian War (1558–82). The wars had reduced the Estonian population from
about 250–300,000 people before the Livonian War to 120–140,000 in the 1620s. The Grand Duchy of Moscow and Tsardom
of Russia also attempted invasions in 1481 and 1558, both of which were unsuccessful. The Reformation in Europe
officially began in 1517 with Martin Luther (1483–1546) and his 95 Theses. The Reformation greatly changed the Baltic
region. Its ideas came quickly to the Livonian Confederation and by the 1520s were widespread. Language, education,
religion and politics were transformed. Church services were now conducted in the vernacular instead of in Latin,
previously used. During the Livonian War in 1561, northern Estonia submitted to Swedish control. In the 1560s two
voivodeships of present-day southern Estonia, Dorpat Voivodeship (Tartu region) and Parnawa Voivodeship (Pärnu region),
became the autonomous Duchy of Livonia within the Polish-Lithuanian Commonwealth, under joint control of the Polish
Crown and the Grand Duchy. In 1629, mainland Estonia came entirely under Swedish rule. Estonia was administratively
divided between the provinces of Estonia in the north and Livonia in southern Estonia and northern Latvia. This division
persisted until the early twentieth century. As a result of the abolition of serfdom and the availability of education
to the native Estonian-speaking population, an active Estonian nationalist movement developed in the 19th century. It began on a cultural level, resulting in the establishment of Estonian language literature, theatre and
professional music and led on to the formation of the Estonian national identity and the Age of Awakening. Among
the leaders of the movement were Johann Voldemar Jannsen, Jakob Hurt and Carl Robert Jakobson. On 14 June 1940, while the world's attention was focused on the fall of Paris to Nazi Germany a day earlier, the Soviet military blockade of Estonia went into effect. The same day, two Soviet bombers downed the Finnish passenger aeroplane "Kaleva", flying from Tallinn to Helsinki and carrying three diplomatic pouches from the US delegations in Tallinn, Riga and Helsinki. On 16 June,
the Soviet Union invaded Estonia. The Red Army moved out of its military bases in Estonia on 17 June. The following day, some 90,000 additional troops entered the country. In the face of overwhelming Soviet force, the Estonian government capitulated on 17 June 1940 to avoid bloodshed. Most of the Estonian Defence Forces surrendered according to the orders of the Estonian government, believing that resistance was useless, and were disarmed by the Red Army. Only the Estonian Independent Signal Battalion offered resistance to the Red Army and Communist militia "People's Self-Defence"
units in front of the XXI Grammar School in Tallinn on 21 June. As the Red Army brought in additional reinforcements
supported by six armoured fighting vehicles, the battle lasted several hours until sundown. Finally, the military resistance was ended through negotiations: the Independent Signal Battalion surrendered and was disarmed. There were
two dead Estonian servicemen, Aleksei Männikus and Johannes Mandre, and several wounded on the Estonian side and
about ten killed and more wounded on the Soviet side. On 6 August 1940, Estonia was annexed by the Soviet Union as
the Estonian SSR. The provisions in the Estonian constitution requiring a popular referendum to decide on joining
a supra-national body were ignored. Instead, the vote to join the Soviet Union was taken by those elected in the elections held the previous month. Additionally, those who had failed to do their "political duty" of voting Estonia into the
USSR, specifically those who had failed to have their passports stamped for voting, were condemned to death by Soviet
tribunals. Repressions followed, including the mass deportations carried out by the Soviets in Estonia on 14 June 1941.
Many of the country's political and intellectual leaders were killed or deported to remote areas of the USSR by the
Soviet authorities in 1940–1941. Repressive actions were also taken against thousands of ordinary people. After Germany
invaded the Soviet Union on 22 June 1941, the Wehrmacht crossed the Estonian southern border on 7 July. The Red Army
retreated behind the Pärnu River – Emajõgi line on 12 July. At the end of July the Germans resumed their advance
in Estonia working in tandem with the Estonian Forest Brothers. Both German troops and Estonian partisans took Narva
on 17 August and the Estonian capital Tallinn on 28 August. After the Soviets were driven out from Estonia, German
troops disarmed all the partisan groups. Although initially the Germans were welcomed by most Estonians as liberators
from the USSR and its oppressions, and hopes were raised for the restoration of the country's independence, it was
soon realised that the Nazis were but another occupying power. The Germans used Estonia's resources for their war
effort; for the duration of the occupation Estonia was incorporated into the German province of Ostland. The Germans
and their collaborators also carried out the Holocaust in Estonia, in which they established a network of concentration
camps and murdered thousands of Estonian Jews and Estonian Gypsies, other Estonians, non-Estonian Jews, and Soviet
prisoners of war. Some Estonians, unwilling to side directly with the Nazis, joined the Finnish Army (which was allied
with the Nazis) to fight against the Soviet Union. The Finnish Infantry Regiment 200 (Estonian: soomepoisid) was
formed out of Estonian volunteers in Finland. Although many Estonians were recruited into the German armed forces
(including Estonian Waffen-SS), the majority of them did so only in 1944 when the threat of a new invasion of Estonia
by the Red Army had become imminent. In January 1944 Estonia was again facing the prospect of invasion from the Red
Army and the last legitimate prime minister of the Republic of Estonia (according to the Constitution of the Republic
of Estonia) delivered a radio address asking all able-bodied men born from 1904 through 1923 to report for military
service. The call resulted in around 38,000 new enlistments, and several thousand Estonians who had joined the Finnish Army came back to join the newly formed Territorial Defence Force, assigned to defend Estonia against the Soviet
advance. It was hoped[by whom?] that by engaging in such a war Estonia would be able to attract Western support for
Estonian independence. In the face of the country being re-occupied by the Red Army, tens of thousands of Estonians
(including a majority of the education, culture, science, political and social specialists) chose either to retreat with the Germans or to flee to Finland or Sweden, from where many sought refuge in other Western countries, often aboard refugee
ships such as the SS Walnut. On 12 January 1949, the Soviet Council of Ministers issued a decree "on the expulsion
and deportation" from Baltic states of "all kulaks and their families, the families of bandits and nationalists",
and others. Half of the deported perished, and the other half were not allowed to return until the early 1960s (years
after Stalin's death).[citation needed] The activities of Soviet forces in 1940–41 and after reoccupation sparked
a guerrilla war against Soviet authorities in Estonia by the Forest Brothers, who consisted mostly of Estonian veterans
of the German and Finnish armies and some civilians. This conflict continued into the early 1950s. Material damage
caused by the world war and the following Soviet era significantly slowed Estonia's economic growth, resulting in
a wide wealth gap in comparison with neighbouring Finland and Sweden. Militarization was another aspect of the Soviet
state. Large parts of the country, especially the coastal areas, were closed to all but the Soviet military. Most
of the sea shore and all sea islands (including Saaremaa and Hiiumaa) were declared "border zones". People who did not actually reside there were not allowed to travel to them without a permit. A notable closed military installation was
the city of Paldiski, which was entirely closed to all public access. The city had a support base for the Soviet
Baltic Fleet's submarines and several large military bases, including a nuclear submarine training centre complete
with a full-scale model of a nuclear submarine with working nuclear reactors. The building housing the Paldiski reactors passed
into Estonian control in 1994 after the last Russian troops left the country. Immigration was another effect of Soviet
occupation. Hundreds of thousands of migrants were relocated to Estonia from other parts of the Soviet Union to assist
industrialisation and militarisation, contributing a population increase of about half a million people within 45 years. The
U.S., UK, France, Italy and the majority of other Western countries considered the annexation of Estonia by the USSR
illegal. They retained diplomatic relations with the representatives of the independent Republic of Estonia, never
de jure recognised the existence of the Estonian SSR, and never recognised Estonia as a legal constituent part of
the Soviet Union. Estonia's return to independence became possible as the Soviet Union faced internal regime challenges,
loosening its hold on the outer empire. As the 1980s progressed, a movement for Estonian autonomy started. In the
initial period of 1987–1989, this was partially for more economic independence, but as the Soviet Union weakened
and it became increasingly obvious that nothing short of full independence would do, Estonia began a course towards
self-determination. In 1989, during the "Singing Revolution", in a landmark demonstration for more independence,
more than two million people formed a human chain stretching through Lithuania, Latvia and Estonia, called the Baltic
Way. All three nations had similar experiences of occupation and similar aspirations for regaining independence.
The Estonian Sovereignty Declaration was issued on 16 November 1988. On 20 August 1991, Estonia declared formal independence
during the Soviet military coup attempt in Moscow, reconstituting the pre-1940 state. The Soviet Union recognised
the independence of Estonia on 6 September 1991. The first country to diplomatically recognise Estonia's reclaimed
independence was Iceland. The last units of the Russian army left on 31 August 1994. Estonia's land border with Latvia
runs 267 kilometres; the Russian border runs 290 kilometres. From 1920 to 1945, Estonia's border with Russia, set
by the 1920 Tartu Peace Treaty, extended beyond the Narva River in the northeast and beyond the town of Pechory (Petseri)
in the southeast. This territory, amounting to some 2,300 square kilometres (888 sq mi), was incorporated into Russia
by Stalin at the end of World War II. For this reason, the borders between Estonia and Russia are still not formally defined.
Estonia lies on the eastern shores of the Baltic Sea immediately across the Gulf of Finland from Finland on the level
northwestern part of the rising East European platform between 57.3° and 59.5° N and 21.5° and 28.1° E. Average elevation
reaches only 50 metres (164 ft) and the country's highest point is the Suur Munamägi in the southeast at 318 metres
(1,043 ft). The country has 3,794 kilometres (2,357 mi) of coastline, marked by numerous bays, straits, and inlets. The number
of islands and islets is estimated at some 2,355 (including those in lakes). Two of them are large enough to constitute
separate counties: Saaremaa and Hiiumaa. A small, recent cluster of meteorite craters, the largest of which is called Kaali, is found on Saaremaa. Estonia is situated in the northern part of the temperate climate zone and in
the transition zone between maritime and continental climate. Estonia has four seasons of near-equal length. Average
temperatures range from 16.3 °C (61.3 °F) on the Baltic islands to 18.1 °C (64.6 °F) inland in July, the warmest
month, and from −3.5 °C (25.7 °F) on the Baltic islands to −7.6 °C (18.3 °F) inland in February, the coldest month.
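As a quick check on the figures just quoted, each Celsius average converts to the Fahrenheit value given in parentheses via the standard formula F = C × 9/5 + 32; a minimal sketch (the function name is illustrative):

```python
# Convert the Celsius averages quoted above and compare them with the
# Fahrenheit values given in parentheses (rounded to one decimal place).
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

# July averages 16.3/18.1 C, February averages -3.5/-7.6 C, as in the text.
quoted = {16.3: 61.3, 18.1: 64.6, -3.5: 25.7, -7.6: 18.3}
for c, f in quoted.items():
    assert round(c_to_f(c), 1) == f  # every quoted pair is consistent
```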
The average annual temperature in Estonia is 5.2 °C (41.4 °F). The average precipitation in 1961–1990 ranged from
535 to 727 mm (21.1 to 28.6 in) per year. A maakond (county) is the largest administrative subdivision. The county
government (Maavalitsus) of each county is led by a county governor (Maavanem), who represents the national government
at the regional level. Governors are appointed by the Government of Estonia for a term of five years. Several changes
were made to the borders of counties after Estonia became independent, most notably the formation of Valga County
(from parts of Võru, Tartu and Viljandi counties) and Petseri County (area acquired from Russia with the 1920 Tartu
Peace Treaty). Estonia is a parliamentary representative democratic republic in which the Prime Minister of Estonia
is the head of government and which includes a multi-party system. The political culture in Estonia is stable: power alternates between two or three parties that have long been active in politics, a situation similar to that in other Northern European countries. The former Prime Minister of Estonia, Andrus Ansip, served from 2005 until 2014, making him Europe's longest-serving prime minister at the time. The current Estonian Prime Minister is Taavi Rõivas, who is the former Minister
of Social Affairs and the head of the Estonian Reform Party. The Parliament of Estonia (Estonian: Riigikogu) or the
legislative branch is elected by people for a four-year term by proportional representation. The Estonian political
system operates under a framework laid out in the 1992 constitutional document. The Estonian parliament has 101 members
and influences the governing of the state primarily by determining the income and the expenses of the state (establishing
taxes and adopting the budget). At the same time the parliament has the right to present statements, declarations
and appeals to the people of Estonia, ratify and denounce international treaties with other states and international
organisations, and decide on government loans. The Riigikogu elects and appoints several high officials of the
state, including the President of the Republic. In addition to that, the Riigikogu appoints, on the proposal of the
President of Estonia, the Chairman of the National Court, the chairman of the board of the Bank of Estonia, the Auditor
General, the Legal Chancellor and the Commander-in-Chief of the Defence Forces. A member of the Riigikogu has the
right to demand explanations from the Government of the Republic and its members. This enables the members of the
parliament to observe the activities of the executive power and the above-mentioned high officials of the state.
The Government of Estonia (Estonian: Vabariigi Valitsus) or the executive branch is formed by the Prime Minister
of Estonia, nominated by the president and approved by the parliament. The government exercises executive power pursuant
to the Constitution of Estonia and the laws of the Republic of Estonia and consists of twelve ministers, including
the Prime Minister. The Prime Minister also has the right to appoint other ministers and assign them a subject to deal with. These are ministers without portfolio: they do not have a ministry to control. The Prime Minister has the right to appoint a maximum of three such ministers, as the limit of ministers in one government is fifteen. The government is also known as the cabinet. The cabinet carries out the country's domestic and foreign policy, shaped by parliament;
it directs and co-ordinates the work of government institutions and bears full responsibility for everything occurring
within the authority of executive power. The government, headed by the Prime Minister, thus represents the political
leadership of the country and makes decisions in the name of the whole executive power. Estonia has pursued the development
of the e-state and e-government. Internet voting is used in elections in Estonia. Internet voting first took place in the 2005 local elections and was first available in a parliamentary election in 2007, when 30,275 individuals voted over the internet. Voters who wish to can invalidate their electronic vote by voting in the traditional way. In 2009, in its eighth Worldwide Press Freedom Index, Reporters Without Borders
ranked Estonia sixth out of 175 countries. In the first ever State of World Liberty Index report, Estonia was ranked
first out of 159 countries. According to the Constitution of Estonia (Estonian: Põhiseadus) the supreme power of
the state is vested in the people. The people exercise this supreme power in elections to the Riigikogu, through citizens who have the right to vote. The supreme judicial power is vested in the Supreme Court or Riigikohus,
with nineteen justices. The Chief Justice is appointed by the parliament for nine years on nomination by the president.
The official Head of State is the President of Estonia, who gives assent to the laws passed by the Riigikogu, and who also has the right to send them back and to propose new laws. Estonia was a member of the League of Nations from 22 September
1921, has been a member of the United Nations since 17 September 1991, and of NATO since 29 March 2004, as well as
the European Union since 1 May 2004. Estonia is also a member of the Organization for Security and Cooperation in
Europe (OSCE), Organisation for Economic Co-operation and Development (OECD), Council of the Baltic Sea States (CBSS)
and the Nordic Investment Bank (NIB). As an OSCE participating State, Estonia's international commitments are subject
to monitoring under the mandate of the U.S. Helsinki Commission. Estonia has also signed the Kyoto Protocol. Since
regaining independence, Estonia has pursued a foreign policy of close co-operation with its Western European partners.
The two most important policy objectives in this regard have been accession into NATO and the European Union, achieved
in March and May 2004 respectively. Estonia's international realignment toward the West has been accompanied by a
general deterioration in relations with Russia, most recently demonstrated by the protest triggered by the controversial
relocation of the Bronze Soldier World War II memorial in Tallinn. Since the early 1990s, Estonia has been involved in
active trilateral Baltic states co-operation with Latvia and Lithuania, and Nordic-Baltic co-operation with the Nordic
countries. The Baltic Council is the joint forum of the interparliamentary Baltic Assembly (BA) and the intergovernmental
Baltic Council of Ministers (BCM). Nordic-Baltic Eight (NB-8) is the joint co-operation of the governments of Denmark,
Estonia, Finland, Iceland, Latvia, Lithuania, Norway and Sweden. Nordic-Baltic Six (NB-6), comprising Nordic-Baltic
countries that are European Union member states, is a framework for meetings on EU related issues. Parliamentary
co-operation between the Baltic Assembly and Nordic Council began in 1989. Annual summits take place, and in addition
meetings are organised on all possible levels: speakers, presidiums, commissions, and individual members. The Nordic
Council of Ministers has an office in Tallinn with a subsidiary in Tartu and information points in Narva, Valga and
Pärnu. Joint Nordic-Baltic projects include the education programme Nordplus and mobility programmes for business
and industry and for public administration. An important element in Estonia's post-independence reorientation has
been closer ties with the Nordic countries, especially Finland and Sweden. Indeed, Estonians consider themselves
a Nordic people rather than Balts, based on their historical ties with Sweden, Denmark and particularly Finland.
In December 1999, then Estonian foreign minister (and since 2006, president of Estonia) Toomas Hendrik Ilves delivered
a speech entitled "Estonia as a Nordic Country" to the Swedish Institute for International Affairs. In 2003, the
foreign ministry also hosted an exhibit called "Estonia: Nordic with a Twist". In 2005, Estonia joined the European
Union's Nordic Battle Group. It has also shown continued interest in joining the Nordic Council. Whereas in 1992
Russia accounted for 92% of Estonia's international trade, today there is extensive economic interdependence between
Estonia and its Nordic neighbours: three quarters of foreign investment in Estonia originates in the Nordic countries
(principally Finland and Sweden), to which Estonia sends 42% of its exports (as compared to 6.5% going to Russia,
8.8% to Latvia, and 4.7% to Lithuania). On the other hand, the Estonian political system, its flat rate of income
tax, and its non-welfare-state model distinguish it from the Nordic countries and their Nordic model, and indeed
from many other European countries. The military of Estonia is based upon the Estonian Defence Forces (Estonian:
Kaitsevägi), which is the name of the unified armed forces of the republic with Maavägi (Army), Merevägi (Navy),
Õhuvägi (Air Force) and a paramilitary national guard organisation, Kaitseliit (Defence League). The aim of the Estonian National Defence Policy is to guarantee the preservation of the independence and sovereignty of the state, the integrity
of its land, territorial waters, airspace and its constitutional order. Current strategic goals are to defend the
country's interests, develop the armed forces for interoperability with other NATO and EU member forces, and participate in NATO missions. Estonia co-operates with Latvia and Lithuania in several trilateral Baltic defence co-operation
initiatives, including Baltic Battalion (BALTBAT), Baltic Naval Squadron (BALTRON), Baltic Air Surveillance Network
(BALTNET) and joint military educational institutions such as the Baltic Defence College in Tartu. Future co-operation
will include sharing of national infrastructures for training purposes and specialisation of training areas (BALTTRAIN)
and collective formation of battalion-sized contingents for use in the NATO rapid-response force. In January 2011
the Baltic states were invited to join NORDEFCO, the defence framework of the Nordic countries. The Ministry of Defence
and the Defence Forces have been working on a cyberwarfare and defence formation for some years. In 2007, a military doctrine for an e-military of Estonia was officially introduced after the country came under massive cyberattacks that year.
The proposed aim of the e-military is to secure the vital infrastructure and e-infrastructure of Estonia. The main
cyber warfare facility is the Computer Emergency Response Team of Estonia (CERT), founded in 2006. The organisation handles security issues in local networks. As a member of the European Union, Estonia is considered a high-income
economy by the World Bank. The GDP (PPP) per capita of the country, a good indicator of wealth, was $28,781 in 2015 according to the IMF, between that of the Slovak Republic and Lithuania, but below that of other long-time EU members
such as Italy or Spain. The country is ranked 8th in the 2015 Index of Economic Freedom, and the 4th freest economy
in Europe. Because of its rapid growth, Estonia has often been described as a Baltic Tiger alongside Lithuania and Latvia.
Beginning 1 January 2011, Estonia adopted the euro and became the 17th eurozone member state. Estonia produces about
75% of its consumed electricity. In 2011 about 85% of it was generated with locally mined oil shale. Alternative
energy sources such as wood, peat, and biomass make up approximately 9% of primary energy production. Renewable wind
energy was about 6% of total consumption in 2009. Estonia imports petroleum products from western Europe and Russia.
Oil shale energy, telecommunications, textiles, chemical products, banking, services, food and fishing, timber, shipbuilding,
electronics, and transportation are key sectors of the economy. The ice-free port of Muuga, near Tallinn, is a modern
facility featuring good transshipment capability, a high-capacity grain elevator, chill/frozen storage, and new oil
tanker off-loading capabilities.[citation needed] The railroad serves as a conduit between the West, Russia, and
other points to the East.[citation needed] Because of the global economic recession that began in 2007, the GDP of
Estonia decreased by 1.4% in the 2nd quarter of 2008, over 3% in the 3rd quarter of 2008, and over 9% in the 4th
quarter of 2008. The Estonian government drew up a supplementary negative budget, which was passed by the Riigikogu. It reduced the 2008 budget's revenue by EEK 6.1 billion and its expenditure by EEK 3.2 billion. In 2010, the economic situation stabilised and growth resumed on the basis of strong exports. In the fourth quarter of 2010, Estonian
industrial output increased by 23% compared to the year before. The country has been experiencing economic growth
ever since. Since re-establishing independence, Estonia has styled itself as the gateway between East and West and
aggressively pursued economic reform and integration with the West. Estonia's market reforms put it among the economic
leaders in the former COMECON area.[citation needed] In 1994, based on the economic theories of Milton Friedman,
Estonia became one of the first countries to adopt a flat tax, with a uniform rate of 26% regardless of personal
income. In January 2005, the personal income tax rate was reduced to 24%. Another reduction to 23% followed in January
2006. The income tax rate was decreased to 21% by January 2008. The Government of Estonia finalised the design of
Estonian euro coins in late 2004, and adopted the euro as the country's currency on 1 January 2011, later than planned
due to continued high inflation. A land value tax is levied and used to fund local municipalities. It is a state-level tax, but 100% of the revenue is used to fund local councils. The rate is set by the local council within the limits of 0.1–2.5%, and it is one of the most important sources of funding for municipalities. The land value tax is levied on the value of the land only; improvements and buildings are not considered. Very few exemptions are allowed, and even public institutions are subject to the tax. The tax has contributed to a high rate
(~90%) of owner-occupied residences within Estonia, compared to a rate of 67.4% in the United States. In 1999, Estonia
experienced its worst year economically since it regained independence in 1991, largely because of the impact of
the 1998 Russian financial crisis.[citation needed] Estonia joined the WTO in November 1999. With assistance from
the European Union, the World Bank and the Nordic Investment Bank, Estonia completed most of its preparations for
European Union membership by the end of 2002 and now has one of the strongest economies of the new member states
of the European Union.[citation needed] Estonia joined the OECD in 2010. In terms of energy and energy production, Estonia is a dependent country. In recent years, many local and foreign companies have been investing in renewable
energy sources.[citation needed] The importance of wind power has been increasing steadily in Estonia: installed wind power capacity currently stands at nearly 60 MW, while roughly 399 MW worth of projects are being developed and more than 2,800 MW worth of projects have been proposed in the Lake Peipus area
and the coastal areas of Hiiumaa. Estonia has had a market economy since the end of the 1990s and one of the highest
per capita income levels in Eastern Europe. Proximity to the Scandinavian markets, its location between the East
and West, competitive cost structure and a highly skilled labour force have been the major Estonian comparative advantages
at the beginning of the 2000s. As the largest city, Tallinn has emerged as a financial centre, and the Tallinn Stock Exchange recently joined the OMX system. The current government has pursued tight fiscal policies, resulting
in balanced budgets and low public debt. In 2007, however, a large current account deficit and rising inflation put
pressure on Estonia's currency, which was pegged to the Euro, highlighting the need for growth in export-generating
industries. Estonia exports mainly machinery and equipment, wood and paper, textiles, food products, furniture, and
metals and chemical products. Estonia also exports 1.562 billion kilowatt hours of electricity annually. At the same
time Estonia imports machinery and equipment, chemical products, textiles, food products and transportation equipment.
Estonia imports 200 million kilowatt hours of electricity annually. Between 2007 and 2013, Estonia received 53.3 billion kroons (3.4 billion euros) from various European Union Structural Funds as direct support, constituting the largest foreign investment into Estonia ever. The majority of this European Union financial aid will be invested in the following fields: energy economies, entrepreneurship, administrative capability, education, information society,
environment protection, regional and local development, research and development activities, healthcare and welfare,
transportation and labour market. Between 1945 and 1989, the share of ethnic Estonians in the population resident
within the currently defined boundaries of Estonia dropped to 61%, caused primarily by the Soviet programme promoting
mass immigration of urban industrial workers from Russia, Ukraine, and Belarus, as well as by wartime emigration
and Joseph Stalin's mass deportations and executions.[citation needed] By 1989, minorities constituted more than
one-third of the population, as the number of non-Estonians had grown almost fivefold. At the end of the 1980s, Estonians
perceived their demographic change as a national catastrophe. This was a result of the migration policies essential
to the Soviet Nationalisation Programme aiming to russify Estonia – administrative and military immigration of non-Estonians
from the USSR coupled with the deportation of Estonians to the USSR. In the decade following the reconstitution of
independence, large-scale emigration by ethnic Russians and the removal of the Russian military bases in 1994 caused
the proportion of ethnic Estonians in Estonia to increase from 61% to 69% in 2006. Modern Estonia is a fairly ethnically
heterogeneous country, but this heterogeneity is not a feature of much of the country as the non-Estonian population
is concentrated in two of Estonia's counties. Thirteen of Estonia's 15 counties are over 80% ethnic Estonian, the
most homogeneous being Hiiumaa, where Estonians account for 98.4% of the population. In the counties of Harju (including
the capital city, Tallinn) and Ida-Viru, however, ethnic Estonians make up 60% and 20% of the population, respectively.
Russians make up 25.6% of the total population but account for 36% of the population in Harju county and 70% of the
population in Ida-Viru county. The Estonian Cultural Autonomy law that was passed in 1925 was unique in Europe at
that time. Cultural autonomies could be granted to minorities numbering more than 3,000 people with longstanding
ties to the Republic of Estonia. Before the Soviet occupation, the German and Jewish minorities managed to elect
a cultural council. The Law on Cultural Autonomy for National Minorities was reinstated in 1993. Historically, large
parts of Estonia's northwestern coast and islands have been populated by the indigenous ethnic Rannarootslased (Coastal
Swedes). The 2008 United Nations Human Rights Council report called "extremely credible" the description of the citizenship
policy of Estonia as "discriminatory". According to surveys, only 5% of the Russian community have considered returning
to Russia in the near future. Estonian Russians have developed their own identity – more than half of the respondents
recognised that Estonian Russians differ noticeably from the Russians in Russia. Compared with the results of a survey from 2000, Russians' attitude toward the future is much more positive. Estonia's constitution guarantees
freedom of religion, separation of church and state, and individual rights to privacy of belief and religion. According
to the Dentsu Communication Institute Inc, Estonia is one of the least religious countries in the world, with 75.7%
of the population claiming to be irreligious. The Eurobarometer Poll 2005 found that only 16% of Estonians profess
a belief in a god, the lowest belief of all countries studied (EU study). According to the Lutheran World Federation,
the historic Lutheran denomination remains a large presence, with 180,000 registered members. Another major group comprises inhabitants who follow Eastern Orthodox Christianity, practised chiefly by the Russian minority; the Russian Orthodox Church is the second largest denomination, with 150,000 members. The Estonian Apostolic Orthodox Church,
under the Greek-Orthodox Ecumenical Patriarchate, claims another 20,000 members. Thus, the number of adherents of
Lutheranism and Orthodoxy, without regard to citizenship or ethnicity, is roughly equal. Refer to the Table below.
The Catholics have their Latin Apostolic Administration of Estonia. Although the Estonian and Germanic languages
are of very different origins, one can identify many similar words in Estonian and German, for example. This is primarily
because the Estonian language has borrowed nearly one third of its vocabulary from Germanic languages, mainly from
Low Saxon (Middle Low German) during the period of German rule, and High German (including standard German). The
percentage of Low Saxon and High German loanwords can be estimated at 22–25 percent, with Low Saxon making up about
15 percent. Academic higher education in Estonia is divided into three levels: bachelor's, master's, and doctoral
studies. In some specialties (basic medical studies, veterinary, pharmacy, dentistry, architect-engineer, and a classroom
teacher programme) the bachelor's and master's levels are integrated into one unit. Estonian public universities
have significantly more autonomy than applied higher education institutions. In addition to organising the academic
life of the university, universities can create new curricula, establish admission terms and conditions, approve
the budget, approve the development plan, elect the rector, and make restricted decisions in matters concerning assets.
Estonia has a moderate number of public and private universities. The largest public universities are the University
of Tartu, Tallinn University of Technology, Tallinn University, Estonian University of Life Sciences, Estonian Academy
of Arts; the largest private university is Estonian Business School. The Estonian Academy of Sciences is the national
academy of science. The strongest public non-profit research institute that carries out fundamental and applied research
is the National Institute of Chemical Physics and Biophysics (NICPB; Estonian KBFI). The first computer centres were
established in the late 1950s in Tartu and Tallinn. Estonian specialists contributed to the development of software
engineering standards for ministries of the Soviet Union during the 1980s. As of 2011[update], Estonia spends around
2.38% of its GDP on Research and Development, compared to an EU average of around 2.0%. Today, Estonian society encourages
liberty and liberalism, with popular commitment to the ideals of limited government, discouraging centralised
power and corruption. The Protestant work ethic remains a significant cultural staple, and free education is a highly
prized institution. Like the mainstream culture in the other Nordic countries, Estonian culture can be seen to build
upon the ascetic environmental realities and traditional livelihoods, a heritage of comparatively widespread egalitarianism
out of practical reasons (see: Everyman's right and universal suffrage), and the ideals of closeness to nature and
self-sufficiency (see: summer cottage). The Estonian Academy of Arts (Estonian: Eesti Kunstiakadeemia, EKA) provides higher education in art, design, architecture, media, art history and conservation, while the Viljandi Culture Academy of the University of Tartu seeks to popularise native culture through curricula such as native construction,
native blacksmithing, native textile design, traditional handicraft and traditional music, but also jazz and church
music. In 2010, there were 245 museums in Estonia whose combined collections contain more than 10 million objects.
The tradition of Estonian Song Festivals (Laulupidu) started at the height of the Estonian national awakening in
1869. Today, it is one of the largest amateur choral events in the world. In 2004, about 100,000 people participated
in the Song Festival. Since 1928, the Tallinn Song Festival Grounds (Lauluväljak) have hosted the event every five
years in July. The last festival took place in July 2014. In addition, Youth Song Festivals are also held every four
or five years, the last of them in 2011, and the next is scheduled for 2017. Estonia won the Eurovision Song Contest
in 2001 with the song "Everybody" performed by Tanel Padar and Dave Benton. In 2002, Estonia hosted the event. Maarja-Liis
Ilus has competed for Estonia on two occasions (1996 and 1997), while Eda-Ines Etti, Koit Toome and Evelin Samuel
owe their popularity partly to the Eurovision Song Contest. Lenna Kuurmaa is a very popular singer in Europe[citation
needed], with her band Vanilla Ninja. "Rändajad" by Urban Symphony was the first song in Estonian ever to chart in the UK, Belgium, and Switzerland. Estonian literature refers to literature written in the Estonian language
(ca. 1 million speakers). The domination of Estonia by Germany, Sweden, and Russia after the Northern Crusades, from the 13th century until 1918, resulted in few early literary works written in the Estonian language. The oldest records of written Estonian date from the 13th century. The Origines Livoniae (the Chronicle of Henry of Livonia) contains Estonian place names, words and fragments of sentences. The Liber Census Daniae (1241) contains Estonian place and family
names. Oskar Luts was the most prominent prose writer of early Estonian literature and is still widely read
today, especially his lyrical school novel Kevade (Spring). Anton Hansen Tammsaare's social epic and psychological
realist pentalogy Truth and Justice captured the evolution of Estonian society from a peasant community to an independent
nation. In modern times, Jaan Kross and Jaan Kaplinski are Estonia's best known and most translated writers. Among
the most popular writers of the late 20th and early 21st centuries are Tõnu Õnnepalu and Andrus Kivirähk, who uses
elements of Estonian folklore and mythology, deforming them into the absurd and grotesque. The architectural history
of Estonia mainly reflects its contemporary development in northern Europe. Especially noteworthy is the architectural ensemble that makes up the medieval old town of Tallinn, which is on the UNESCO World Heritage List. In addition,
the country has several unique, more or less preserved hill forts dating from pre-Christian times, a large number
of still intact medieval castles and churches, while the countryside is still shaped by the presence of a vast number
of manor houses from earlier centuries. Historically, the cuisine of Estonia has been heavily dependent on seasons
and simple peasant food, which today is influenced by many countries. Today, it includes many typical international
foods.[citation needed] The most typical foods in Estonia are black bread, pork, potatoes, and dairy products. Traditionally
in summer and spring, Estonians like to eat everything fresh – berries, herbs, vegetables, and everything else that
comes straight from the garden. Hunting and fishing have also been very common, although currently hunting and fishing
are enjoyed mostly as hobbies. Today, it is also very popular to grill outside in summer. Sport plays an important
role in Estonian culture. After declaring independence from Russia in 1918, Estonia first competed as a nation at
the 1920 Summer Olympics, although the National Olympic Committee was established in 1923. Estonian athletes took
part in the Olympic Games until the country was annexed by the Soviet Union in 1940. The 1980 Summer Olympics sailing
regatta was held in the capital city Tallinn. Since regaining independence in 1991, Estonia has participated in all
Olympics. Estonia has won most of its medals in athletics, weightlifting, wrestling and cross-country skiing. Estonia
has had considerable success at the Olympic Games given the country's small population. Estonia's best results were a 13th-place ranking in the medal table at the 1936 Summer Olympics and 12th at the 2006 Winter Olympics. Basketball
is also a notable sport in Estonia. The Estonian national basketball team participated in the 1936 Summer Olympics and has appeared in EuroBasket four times. The national team also qualified for EuroBasket 2015, which will be held in Ukraine. BC Kalev/Cramo, which participates in the EuroCup, is the most recent Korvpalli Meistriliiga winner, having become champion of the league for the 6th time. Tartu Ülikool/Rock, which participates in the EuroChallenge, is the second-strongest Estonian basketball club, having previously won the Korvpalli Meistriliiga 22 times. Six Estonian basketball clubs participate in the Baltic Basketball League.
Alaska (i/əˈlæskə/) is a U.S. state situated in the northwest extremity of the Americas. The Canadian administrative divisions
of British Columbia and Yukon border the state to the east while Russia has a maritime border with the state to the
west across the Bering Strait. To the north are the Chukchi and Beaufort Seas, the southern parts of the Arctic Ocean.
To the south and southwest is the Pacific Ocean. Alaska is the largest state in the United States by area, the 3rd
least populous and the least densely populated of the 50 United States. Approximately half of Alaska's residents
(the total estimated at 738,432 by the Census Bureau in 2015) live within the Anchorage metropolitan area. Alaska's
economy is dominated by the fishing, natural gas, and oil industries, resources which it has in abundance. Military
bases and tourism are also a significant part of the economy. Alaska is the northernmost and westernmost state in
the United States and has the most easterly longitude in the United States because the Aleutian Islands extend into
the eastern hemisphere. Alaska is the only non-contiguous U.S. state on continental North America; about 500 miles
(800 km) of British Columbia (Canada) separates Alaska from Washington. Alaska is technically part of the continental U.S. but is sometimes excluded in colloquial usage; it is not part of the contiguous U.S., often called "the
Lower 48". The capital city, Juneau, is situated on the mainland of the North American continent but is not connected
by road to the rest of the North American highway system. Also referred to as the Panhandle or Inside Passage, this
is the region of Alaska closest to the rest of the United States. As such, this was where most of the initial non-indigenous
settlement occurred in the years following the Alaska Purchase. The region is dominated by the Alexander Archipelago
as well as the Tongass National Forest, the largest national forest in the United States. It contains the state capital
Juneau, the former capital Sitka, and Ketchikan, at one time Alaska's largest city. The Alaska Marine Highway provides
a vital surface transportation link throughout the area, as only three communities (Haines, Hyder and Skagway) enjoy
direct connections to the contiguous North American road system. The North Slope is mostly tundra peppered with small
villages. The area is known for its massive reserves of crude oil, and contains both the National Petroleum Reserve–Alaska
and the Prudhoe Bay Oil Field. Barrow, the northernmost city in the United States, is located here. The Northwest
Arctic area, anchored by Kotzebue and also containing the Kobuk River valley, is often regarded as being part of
this region. However, the respective Inupiat of the North Slope and of the Northwest Arctic seldom consider themselves
to be one people. With its myriad islands, Alaska has nearly 34,000 miles (54,720 km) of tidal shoreline. The Aleutian
Islands chain extends west from the southern tip of the Alaska Peninsula. Many active volcanoes are found in the
Aleutians and in coastal regions. Unimak Island, for example, is home to Mount Shishaldin, which is an occasionally
smoldering volcano that rises to 10,000 feet (3,048 m) above the North Pacific. It has been described as the most perfect volcanic cone on Earth, even more symmetrical than Japan's Mount Fuji. The chain of volcanoes extends to Mount Spurr, west of Anchorage
on the mainland. Geologists have identified Alaska as part of Wrangellia, a large region consisting of multiple states
and Canadian provinces in the Pacific Northwest, which is actively undergoing continent building. According to an
October 1998 report by the United States Bureau of Land Management, approximately 65% of Alaska is owned and managed
by the U.S. federal government as public lands, including a multitude of national forests, national parks, and national
wildlife refuges. Of these, the Bureau of Land Management manages 87 million acres (35 million hectares), or 23.8%
of the state. The Arctic National Wildlife Refuge is managed by the United States Fish and Wildlife Service. It is
the world's largest wildlife refuge, comprising 16 million acres (6.5 million hectares). Of the remaining land area,
the state of Alaska owns 101 million acres (41 million hectares), its entitlement under the Alaska Statehood Act.
A portion of that acreage is occasionally ceded to organized boroughs, under the statutory provisions pertaining
to newly formed boroughs. Smaller portions are set aside for rural subdivisions and other homesteading-related opportunities.
These are not very popular due to the often remote and roadless locations. The University of Alaska, as a land grant
university, also owns substantial acreage which it manages independently. Another 44 million acres (18 million hectares)
are owned by 12 regional, and scores of local, Native corporations created under the Alaska Native Claims Settlement
Act (ANCSA) of 1971. Regional Native corporation Doyon, Limited often promotes itself as the largest private landowner
in Alaska in advertisements and other communications. Provisions of ANCSA allowing the corporations' land holdings
to be sold on the open market starting in 1991 were repealed before they could take effect. Effectively, the corporations
hold title (including subsurface title in many cases, a privilege denied to individual Alaskans) but cannot sell
the land. Individual Native allotments can be and are sold on the open market, however. The climate in Southeast
Alaska is a mid-latitude oceanic climate (Köppen climate classification: Cfb) in the southern sections and a subarctic
oceanic climate (Köppen Cfc) in the northern parts. On an annual basis, Southeast is both the wettest and warmest
part of Alaska with milder temperatures in the winter and high precipitation throughout the year. Juneau averages
over 50 in (130 cm) of precipitation a year, and Ketchikan averages over 150 in (380 cm). This is also the only region
in Alaska in which the average daytime high temperature is above freezing during the winter months. The climate of
Western Alaska is determined in large part by the Bering Sea and the Gulf of Alaska. It is a subarctic oceanic climate
in the southwest and a continental subarctic climate farther north. The temperature is somewhat moderate considering
how far north the area is. This region has a tremendous amount of variety in precipitation. An area stretching from
the northern side of the Seward Peninsula to the Kobuk River valley (i. e., the region around Kotzebue Sound) is
technically a desert, with portions receiving less than 10 in (25 cm) of precipitation annually. On the other extreme,
some locations between Dillingham and Bethel average around 100 in (250 cm) of precipitation. Numerous indigenous
peoples occupied Alaska for thousands of years before the arrival of European peoples to the area. Linguistic and
DNA studies done here have provided evidence for the settlement of North America by way of the Bering land bridge.[citation
needed] The Tlingit people developed a society with a matrilineal kinship system of property inheritance and descent
in what is today Southeast Alaska, along with parts of British Columbia and the Yukon. Also in Southeast were the
Haida, now well known for their unique arts. The Tsimshian people came to Alaska from British Columbia in 1887, when
President Grover Cleveland, and later the U.S. Congress, granted them permission to settle on Annette Island and
found the town of Metlakatla. All three of these peoples, as well as other indigenous peoples of the Pacific Northwest
Coast, experienced smallpox outbreaks from the late 18th through the mid-19th century, with the most devastating
epidemics occurring in the 1830s and 1860s, resulting in high fatalities and social disruption. The Aleutian Islands
are still home to the Aleut people's seafaring society, although they were the first Native Alaskans to be exploited
by Russians. Western and Southwestern Alaska are home to the Yup'ik, while their cousins the Alutiiq ~ Sugpiaq lived
in what is now Southcentral Alaska. The Gwich'in people of the northern Interior region are Athabaskan and primarily
known today for their dependence on the caribou within the much-contested Arctic National Wildlife Refuge. The North
Slope and Little Diomede Island are occupied by the widespread Inupiat people. Some researchers believe that the
first Russian settlement in Alaska was established in the 17th century. According to this hypothesis, in 1648 several
koches of Semyon Dezhnyov's expedition were driven ashore in Alaska by a storm and founded this settlement. This hypothesis
is based on the testimony of Chukchi geographer Nikolai Daurkin, who had visited Alaska in 1764–1765 and who had
reported on a village on the Kheuveren River, populated by "bearded men" who "pray to the icons". Some modern researchers
associate Kheuveren with Koyuk River. Starting in the 1890s and stretching in some places to the early 1910s, gold
rushes in Alaska and the nearby Yukon Territory brought thousands of miners and settlers to Alaska. Alaska was officially
incorporated as an organized territory in 1912. Alaska's capital, which had been in Sitka until 1906, was moved north
to Juneau. Construction of the Alaska Governor's Mansion began that same year. European immigrants from Norway and
Sweden also settled in southeast Alaska, where they entered the fishing and logging industries. Statehood for Alaska
was an important cause of James Wickersham early in his tenure as a congressional delegate. Decades later, the statehood
movement gained its first real momentum following a territorial referendum in 1946. The Alaska Statehood Committee
and Alaska's Constitutional Convention would soon follow. Statehood supporters also found themselves fighting major
battles against political foes, mostly in the U.S. Congress but also within Alaska. Statehood was approved by Congress
on July 7, 1958. Alaska was officially proclaimed a state on January 3, 1959. On March 27, 1964, the massive Good
Friday earthquake killed 133 people and destroyed several villages and portions of large coastal communities, mainly
by the resultant tsunamis and landslides. It was the second-most-powerful earthquake in the recorded history of the
world, with a moment magnitude of 9.2. It was over one thousand times more powerful than the 1989 San Francisco earthquake.
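The "one thousand times" comparison follows from the moment magnitude scale, on which radiated seismic energy grows by a factor of about 10^1.5 (roughly 31.6x) per whole unit of magnitude. A minimal sketch of that arithmetic, assuming a magnitude of about Mw 6.9 for the 1989 earthquake (a value not stated in the text):

```python
# Energy ratio between two earthquakes on the moment magnitude scale:
# each whole unit of magnitude corresponds to ~10^1.5 times more
# radiated seismic energy.
def energy_ratio(m1: float, m2: float) -> float:
    return 10 ** (1.5 * (m1 - m2))

# 1964 Good Friday earthquake (Mw 9.2) vs. the 1989 earthquake
# (Mw 6.9 is an assumed value here, not given in the article).
ratio = energy_ratio(9.2, 6.9)
print(f"{ratio:.0f}")  # roughly 2800 times the energy
```

Under that assumption the ratio is well over one thousand, consistent with the claim above.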
The time of day (5:36 pm), time of year and location of the epicenter were all cited as factors in potentially sparing
thousands of lives, particularly in Anchorage. The Alaska Native Language Center at the University of Alaska Fairbanks
claims that at least 20 Alaskan native languages exist, some with multiple dialects. Most of Alaska's native languages belong to either the Eskimo–Aleut or Na-Dene language families; however, some languages
are thought to be isolates (e.g. Haida) or have not yet been classified (e.g. Tsimshianic). As of 2014[update] nearly
all of Alaska's native languages were classified as either threatened, shifting, moribund, nearly extinct, or dormant
languages. According to statistics collected by the Association of Religion Data Archives from 2010, about 34% of
Alaska residents were members of religious congregations. 100,960 people identified as Evangelical Protestants, 50,866
as Roman Catholic, and 32,550 as mainline Protestants. Roughly 4% are Mormon, 0.5% are Jewish, 1% are Muslim, 0.5%
are Buddhist, and 0.5% are Hindu. The largest religious denominations in Alaska as of 2010[update] were the Catholic
Church with 50,866 adherents, non-denominational Evangelical Protestants with 38,070 adherents, The Church of Jesus
Christ of Latter-day Saints with 32,170 adherents, and the Southern Baptist Convention with 19,891 adherents. Alaska
has been identified, along with Pacific Northwest states Washington and Oregon, as being the least religious states
of the USA, in terms of church membership. In 1795, the first Russian Orthodox church was established in Kodiak.
Intermarriage with Alaskan Natives helped the Russian immigrants integrate into society. As a result, an increasing
number of Russian Orthodox churches gradually became established within Alaska. Alaska also has the largest Quaker
population (by percentage) of any state. In 2009 there were 6,000 Jews in Alaska (for whom observance of halakha
may pose special problems). Alaskan Hindus often share venues and celebrations with members of other Asian religious
communities, including Sikhs and Jains. The 2007 gross state product was $44.9 billion, 45th in the nation. Its per
capita personal income for 2007 was $40,042, ranking 15th in the nation. According to a 2013 study by Phoenix Marketing
International, Alaska had the fifth-largest number of millionaires per capita in the United States, with a ratio
of 6.75 percent. The oil and gas industry dominates the Alaskan economy, with more than 80% of the state's revenues
derived from petroleum extraction. Alaska's main export product (excluding oil and natural gas) is seafood, primarily
salmon, cod, pollock, and crab. Employment is primarily in government and industries such as natural resource extraction,
shipping, and transportation. Military bases are a significant component of the economy in the Fairbanks North Star,
Anchorage and Kodiak Island boroughs, as well as Kodiak. Federal subsidies are also an important part of the economy,
allowing the state to keep taxes low. Its industrial outputs are crude petroleum, natural gas, coal, gold, precious
metals, zinc and other mining, seafood processing, timber and wood products. There is also a growing service and
tourism sector. Tourists have contributed to the economy by supporting local lodging. Alaska has vast energy resources,
although its oil reserves have been largely depleted. Major oil and gas reserves were found in the Alaska North Slope
(ANS) and Cook Inlet basins, but according to the Energy Information Administration, by February 2014 Alaska had
fallen to fourth place in the nation in crude oil production after Texas, North Dakota, and California. Prudhoe Bay
on Alaska's North Slope is still the second highest-yielding oil field in the United States, typically producing
about 400,000 barrels per day (64,000 m3/d), although by early 2014 North Dakota's Bakken Formation was producing
over 900,000 barrels per day (140,000 m3/d). Prudhoe Bay was the largest conventional oil field ever discovered in
North America, but was much smaller than Canada's enormous Athabasca oil sands field, which by 2014 was producing
about 1,500,000 barrels per day (240,000 m3/d) of unconventional oil, and had hundreds of years of producible reserves
at that rate. The Trans-Alaska Pipeline can transport and pump up to 2.1 million barrels (330,000 m3) of crude oil
per day, more than any other crude oil pipeline in the United States. Additionally, substantial coal deposits are
found in Alaska's bituminous, sub-bituminous, and lignite coal basins. The United States Geological Survey estimates
that there are 85.4 trillion cubic feet (2,420 km3) of undiscovered, technically recoverable gas from natural gas
hydrates on the Alaskan North Slope. Alaska also offers some of the highest hydroelectric power potential in the
country from its numerous rivers. Large swaths of the Alaskan coastline offer wind and geothermal energy potential
as well. Alaska's economy depends heavily on increasingly expensive diesel fuel for heating, transportation, electric
power and light. Though wind and hydroelectric power are abundant and underdeveloped, proposals for statewide energy
systems (e.g. with special low-cost electric interties) were judged uneconomical (at the time of the report, 2001)
due to low (less than 50¢/gal) fuel prices, long distances and low population. The cost of a gallon of gas in urban
Alaska today is usually 30–60¢ higher than the national average; prices in rural areas are generally significantly
higher but vary widely depending on transportation costs, seasonal usage peaks, nearby petroleum development infrastructure
and many other factors. The Alaska Permanent Fund is a constitutionally authorized appropriation of oil revenues,
established by voters in 1976 to manage a surplus in state petroleum revenues from oil, largely in anticipation of
the recently constructed Trans-Alaska Pipeline System. The fund was originally proposed by Governor Keith Miller
on the eve of the 1969 Prudhoe Bay lease sale, out of fear that the legislature would spend the entire proceeds of
the sale (which amounted to $900 million) at once. It was later championed by Governor Jay Hammond and Kenai state
representative Hugh Malone. It has served as an attractive political prospect ever since, diverting revenues which
would normally be deposited into the general fund. The Alaska Constitution was written so as to discourage dedicating
state funds for a particular purpose. The Permanent Fund has become the rare exception to this, mostly due to the
political climate of distrust existing during the time of its creation. From its initial principal of $734,000, the
fund has grown to $50 billion as a result of oil royalties and capital investment programs. Most if not all the principal
is invested conservatively outside Alaska. This has led to frequent calls by Alaskan politicians for the Fund to
make investments within Alaska, though such a stance has never gained momentum. Starting in 1982, dividends from
the fund's annual growth have been paid out each year to eligible Alaskans, ranging from an initial $1,000 in 1982
(equal to three years' payout, as the distribution of payments was held up in a lawsuit over the distribution scheme)
to $3,269 in 2008 (which included a one-time $1,200 "Resource Rebate"). Every year, the state legislature takes out
8% from the earnings, puts 3% back into the principal for inflation proofing, and the remaining 5% is distributed
to all qualifying Alaskans. To qualify for the Permanent Fund Dividend, one must have lived in the state for a minimum
of 12 months, maintain constant residency subject to allowable absences, and not be subject to court judgments or
criminal convictions which fall under various disqualifying classifications or may subject the payment amount to
civil garnishment. The Tanana Valley is another notable agricultural locus, especially the Delta Junction area, about
100 miles (160 km) southeast of Fairbanks, with a sizable concentration of farms growing agronomic crops; these farms
mostly lie north and east of Fort Greely. This area was largely set aside and developed under a state program spearheaded
by Hammond during his second term as governor. Delta-area crops consist predominately of barley and hay. West of
Fairbanks lies another concentration of small farms catering to restaurants, the hotel and tourist industry, and
community-supported agriculture. Most food in Alaska is transported into the state from "Outside", and shipping costs
make food in the cities relatively expensive. In rural areas, subsistence hunting and gathering is an essential activity
because imported food is prohibitively expensive. Though most small towns and villages in Alaska lie along the coastline,
the cost of importing food to remote villages can be high because of the terrain and difficult road conditions, which change dramatically with varying climate and precipitation. The cost of transport can reach as high
as 50¢ per pound ($1.10/kg) or more in some remote areas, during the most difficult times, if these locations can
be reached at all during such inclement weather and terrain conditions. The cost of delivering 1 US gallon (3.8
L) of milk is about $3.50 in many villages where per capita income can be $20,000 or less. Fuel cost per gallon is
routinely 20–30¢ higher than the continental United States average, with only Hawaii having higher prices. Alaska
has few road connections compared to the rest of the U.S. The state's road system covers a relatively small area
of the state, linking the central population centers and the Alaska Highway, the principal route out of the state
through Canada. The state capital, Juneau, is not accessible by road, only a car ferry, which has spurred several
debates over the decades about moving the capital to a city on the road system, or building a road connection from
Haines. The western part of Alaska has no road system connecting the communities with the rest of Alaska. Built around
1915, the Alaska Railroad (ARR) played a key role in the development of Alaska through the 20th century. It provides critical infrastructure linking north Pacific shipping, with tracks that run from Seward to Interior Alaska
by way of South Central Alaska, passing through Anchorage, Eklutna, Wasilla, Talkeetna, Denali, and Fairbanks, with
spurs to Whittier, Palmer and North Pole. The cities, towns, villages, and region served by ARR tracks are known
statewide as "The Railbelt". In recent years, the ever-improving paved highway system began to eclipse the railroad's
importance in Alaska's economy. The Alaska Railroad was one of the last railroads in North America to use cabooses
in regular service and still uses them on some gravel trains. It continues to offer one of the last flag stop routes
in the country. A stretch of about 60 miles (100 km) of track along an area north of Talkeetna remains inaccessible
by road; the railroad provides the only transportation to rural homes and cabins in the area. Until construction
of the Parks Highway in the 1970s, the railroad provided the only land access to most of the region along its entire
route. Alaska's well-developed state-owned ferry system (known as the Alaska Marine Highway) serves the cities of
southeast, the Gulf Coast and the Alaska Peninsula. The ferries transport vehicles as well as passengers. The system
also operates a ferry service from Bellingham, Washington and Prince Rupert, British Columbia in Canada through the
Inside Passage to Skagway. The Inter-Island Ferry Authority also serves as an important marine link for many communities
in the Prince of Wales Island region of Southeast and works in concert with the Alaska Marine Highway. Cities not
served by road, sea, or river can be reached only by air, foot, dogsled, or snowmachine, accounting for Alaska's
extremely well developed bush air services, an Alaskan novelty. Anchorage and, to a lesser extent, Fairbanks are served
by many major airlines. Because of limited highway access, air travel remains the most efficient form of transportation
in and out of the state. Anchorage recently completed extensive remodeling and construction at Ted Stevens Anchorage
International Airport to help accommodate the upsurge in tourism (in 2012-2013, Alaska received almost 2 million
visitors). Regular flights to most villages and towns within the state that are commercially viable are challenging
to provide, so they are heavily subsidized by the federal government through the Essential Air Service program. Alaska
Airlines is the only major airline offering in-state travel with jet service (sometimes in combination cargo and
passenger Boeing 737-400s) from Anchorage and Fairbanks to regional hubs like Bethel, Nome, Kotzebue, Dillingham,
Kodiak, and other larger communities as well as to major Southeast and Alaska Peninsula communities. The bulk of
remaining commercial flight offerings come from small regional commuter airlines such as Ravn Alaska, PenAir, and
Frontier Flying Service. The smallest towns and villages must rely on scheduled or chartered bush flying services
using general aviation aircraft such as the Cessna Caravan, the most popular aircraft in use in the state. Much of
this service can be attributed to the Alaska bypass mail program which subsidizes bulk mail delivery to Alaskan rural
communities. The program requires 70% of that subsidy to go to carriers who offer passenger service to the communities.
Many communities have small air taxi services. These operations originated from the demand for customized transport
to remote areas. Perhaps the most quintessentially Alaskan plane is the bush seaplane. The world's busiest seaplane
base is Lake Hood, located next to Ted Stevens Anchorage International Airport, where flights bound for remote villages
without an airstrip carry passengers, cargo, and many items from stores and warehouse clubs. In 2006 Alaska had the
highest number of pilots per capita of any U.S. state. Another Alaskan transportation method is the dogsled. In modern
times (that is, any time after the mid-late 1920s), dog mushing is more of a sport than a true means of transportation.
Various races are held around the state, but the best known is the Iditarod Trail Sled Dog Race, a 1,150-mile (1,850
km) trail from Anchorage to Nome (although the distance varies from year to year, the official distance is set at
1,049 miles or 1,688 km). The race commemorates the famous 1925 serum run to Nome in which mushers and dogs like
Togo and Balto took much-needed medicine to the diphtheria-stricken community of Nome when all other means of transportation
had failed. Mushers from all over the world come to Anchorage each March to compete for cash, prizes, and prestige.
The "Serum Run" is another sled dog race that more accurately follows the route of the famous 1925 relay, leaving
from the community of Nenana (southwest of Fairbanks) to Nome. Alaska's internet and other data transport systems
are provided largely through the two major telecommunications companies: GCI and Alaska Communications. GCI owns
and operates what it calls the Alaska United Fiber Optic system and as of late 2011 Alaska Communications advertised
that it has "two fiber optic paths to the lower 48 and two more across Alaska". In January 2011, it was reported that
a $1 billion project to connect Asia and rural Alaska was being planned, aided in part by $350 million in stimulus
from the federal government. To finance state government operations, Alaska depends primarily on petroleum revenues
and federal subsidies. This allows it to have the lowest individual tax burden in the United States. It is one of
five states with no state sales tax, one of seven states that do not levy an individual income tax, and one of the
two states that has neither. The Department of Revenue Tax Division reports regularly on the state's revenue sources.
The Department also issues an annual summary of its operations, including new state laws that directly affect the
tax division. Alaska regularly supports Republicans in presidential elections and has done so since statehood. Republicans
have won the state's electoral college votes in all but one election that it has participated in (1964). No state
has voted for a Democratic presidential candidate fewer times. Alaska was carried by Democratic nominee Lyndon B.
Johnson during his landslide election in 1964, while the 1960 and 1968 elections were close. Since 1972, however,
Republicans have carried the state by large margins. In 2008, Republican John McCain defeated Democrat Barack Obama
in Alaska, 59.49% to 37.83%. McCain's running mate was Sarah Palin, the state's governor and the first Alaskan on
a major party ticket. Obama lost Alaska again in 2012, but he captured 40% of the state's vote in that election,
making him the first Democrat to do so since 1968. The Alaska Bush, central Juneau, midtown and downtown Anchorage,
and the areas surrounding the University of Alaska Fairbanks campus and Ester have been strongholds of the Democratic
Party. The Matanuska-Susitna Borough, the majority of Fairbanks (including North Pole and the military base), and
South Anchorage typically have the strongest Republican showing. As of 2004, well over half of all registered
voters have chosen "Non-Partisan" or "Undeclared" as their affiliation, despite recent attempts to close primaries
to unaffiliated voters. The Unorganized Borough has no government of its own, but the U.S. Census Bureau in cooperation
with the state divided the Unorganized Borough into 11 census areas solely for the purposes of statistical analysis
and presentation. A recording district is a mechanism for administration of the public record in Alaska. The state
is divided into 34 recording districts which are centrally administered under a State Recorder. All recording districts
use the same acceptance criteria, fee schedule, etc., for accepting documents into the public record. As reflected
in the 2010 United States Census, Alaska has a total of 355 incorporated cities and census-designated places (CDPs).
The tally of cities includes four unified municipalities, essentially the equivalent of a consolidated city–county.
The majority of these communities are located in the rural expanse of Alaska known as "The Bush" and are unconnected
to the contiguous North American road network. The table at the bottom of this section lists the 100 largest cities
and census-designated places in Alaska, in population order. Of Alaska's 2010 Census population figure of 710,231,
20,429 people, or 2.88% of the population, did not live in an incorporated city or census-designated place. Approximately
three-quarters of that figure were people who live in urban and suburban neighborhoods on the outskirts of the city
limits of Ketchikan, Kodiak, Palmer and Wasilla. CDPs have not been established for these areas by the United States
Census Bureau, except that seven CDPs were established for the Ketchikan-area neighborhoods in the 1980 Census (Clover
Pass, Herring Cove, Ketchikan East, Mountain Point, North Tongass Highway, Pennock Island and Saxman East), but have
not been used since. The remaining population was scattered throughout Alaska, both within organized boroughs and
in the Unorganized Borough, in largely remote areas. The Alaska State Troopers are Alaska's statewide police force.
They have a long and storied history, but were not an official organization until 1941. Before the force was officially
organized, law enforcement in Alaska was handled by various federal agencies. Larger towns usually have their own
local police and some villages rely on "Public Safety Officers" who have police training but do not carry firearms.
In much of the state, the troopers serve as the only police force available. In addition to enforcing traffic and
criminal law, wildlife Troopers enforce hunting and fishing regulations. Due to the varied terrain and wide scope
of the Troopers' duties, they employ a wide variety of land, air, and water patrol vehicles. There are many established
music festivals in Alaska, including the Alaska Folk Festival, the Fairbanks Summer Arts Festival, the Anchorage
Folk Festival, the Athabascan Old-Time Fiddling Festival, the Sitka Jazz Festival, and the Sitka Summer Music Festival.
The most prominent orchestra in Alaska is the Anchorage Symphony Orchestra, though the Fairbanks Symphony Orchestra
and Juneau Symphony are also notable. The Anchorage Opera is currently the state's only professional opera company,
though there are several volunteer and semi-professional organizations in the state as well. One of the most prominent
movies filmed in Alaska is MGM's Eskimo/Mala The Magnificent, starring Alaska Native Ray Mala. In 1932 an expedition
set out from MGM's studios in Hollywood to Alaska to film what was then billed as "The Biggest Picture Ever Made."
Upon arriving in Alaska, they set up "Camp Hollywood" in Northwest Alaska, where they lived during the duration of
the filming. Louis B. Mayer spared no expense in spite of the remote location, going so far as to hire the chef from
the Hotel Roosevelt in Hollywood to prepare meals.
Popper is known for his rejection of the classical inductivist views on the scientific method, in favour of empirical falsification:
A theory in the empirical sciences can never be proven, but it can be falsified, meaning that it can and should be
scrutinized by decisive experiments. He used the black swan fallacy to discuss falsification. If the outcome of an
experiment contradicts the theory, one should refrain from ad hoc manoeuvres that evade the contradiction merely
by making it less falsifiable. Popper is also known for his opposition to the classical justificationist account
of knowledge, which he replaced with critical rationalism, "the first non-justificational philosophy of criticism
in the history of philosophy." Karl Popper was born in Vienna (then in Austria-Hungary) in 1902, to upper middle-class
parents. All of Karl Popper's grandparents were Jewish, but the Popper family converted to Lutheranism before Karl
was born, and so he received Lutheran baptism. They understood this as part of their cultural assimilation, not as
an expression of devout belief. Karl's father Simon Siegmund Carl Popper was a lawyer from Bohemia and a doctor of
law at the Vienna University, and mother Jenny Schiff was of Silesian and Hungarian descent. After establishing themselves
in Vienna, the Poppers made a rapid social climb in Viennese society: Simon Siegmund Carl became a partner in the
law firm of Vienna's liberal Burgomaster Herr Grübl and, after Grübl's death in 1898, Simon took over the business.
(Malachi Hacohen records that Herr Grübl's first name was Raimund, after which Karl received his middle name. Popper
himself, in his autobiography, erroneously recalls that Herr Grübl's first name was Carl.) His father was a bibliophile
who had 12,000–14,000 volumes in his personal library. Popper inherited both the library and the disposition from
him. Popper left school at the age of 16 and attended lectures in mathematics, physics, philosophy, psychology and
the history of music as a guest student at the University of Vienna. In 1919, Popper became attracted by Marxism
and subsequently joined the Association of Socialist School Students. He also became a member of the Social Democratic
Workers' Party of Austria, which was at that time a party that fully adopted the Marxist ideology. After the street
battle in the Hörlgasse on 15 June 1919, when police shot eight of his unarmed party comrades, he became disillusioned
by what he saw to be the "pseudo-scientific" historical materialism of Marx, abandoned the ideology, and remained
a supporter of social liberalism throughout his life. He worked in street construction for a short amount of time,
but was unable to cope with the heavy labour. Continuing to attend university as a guest student, he started an apprenticeship
as cabinetmaker, which he completed as a journeyman. He was dreaming at that time of starting a daycare facility
for children, for which he assumed the ability to make furniture might be useful. After that he did voluntary service
in one of psychoanalyst Alfred Adler's clinics for children. In 1922, he did his matura by way of a second chance
education and finally joined the University as an ordinary student. He completed his examination as an elementary
teacher in 1924 and started working at an after-school care club for socially endangered children. In 1925, he went
to the newly founded Pädagogisches Institut and continued studying philosophy and psychology. Around that time he
started courting Josefine Anna Henninger, who later became his wife. In 1928, he earned a doctorate in psychology,
under the supervision of Karl Bühler. His dissertation was entitled "Die Methodenfrage der Denkpsychologie" (The
question of method in cognitive psychology). In 1929, he obtained the authorisation to teach mathematics and physics
in secondary school, which he started doing. He married his colleague Josefine Anna Henninger (1906–1985) in 1930.
Fearing the rise of Nazism and the threat of the Anschluss, he started to use the evenings and the nights to write
his first book Die beiden Grundprobleme der Erkenntnistheorie (The Two Fundamental Problems of the Theory of Knowledge).
He needed to publish one to get some academic position in a country that was safe for people of Jewish descent. However,
he ended up not publishing the two-volume work, but a condensed version of it with some new material, Logik der Forschung
(The Logic of Scientific Discovery), in 1934. Here, he criticised psychologism, naturalism, inductionism, and logical
positivism, and put forth his theory of potential falsifiability as the criterion demarcating science from non-science.
In 1935 and 1936, he took unpaid leave to go to the United Kingdom for a study visit. In 1937, Popper finally managed
to get a position that allowed him to emigrate to New Zealand, where he became lecturer in philosophy at Canterbury
University College of the University of New Zealand in Christchurch. It was here that he wrote his influential work
The Open Society and its Enemies. In Dunedin he met the Professor of Physiology John Carew Eccles and formed a lifelong
friendship with him. In 1946, after the Second World War, he moved to the United Kingdom to become reader in logic
and scientific method at the London School of Economics. Three years later, in 1949, he was appointed professor of
logic and scientific method at the University of London. Popper was president of the Aristotelian Society from 1958
to 1959. He retired from academic life in 1969, though he remained intellectually active for the rest of his life.
In 1985, he returned to Austria so that his wife could have her relatives around her during the last months of her
life; she died in November that year. After the Ludwig Boltzmann Gesellschaft failed to establish him as the director
of a newly founded branch researching the philosophy of science, he went back again to the United Kingdom in 1986,
settling in Kenley, Surrey. Popper died of "complications of cancer, pneumonia and kidney failure" in Kenley at the
age of 92 on 17 September 1994. He had been working continuously on his philosophy until two weeks before, when he
suddenly fell terminally ill. After cremation, his ashes were taken to Vienna and buried at Lainzer cemetery adjacent
to the ORF Centre, where his wife Josefine Anna Popper (called ‘Hennie’) had already been buried. Popper's estate
is managed by his secretary and personal assistant Melitta Mew and her husband Raymond. Popper's manuscripts went
to the Hoover Institution at Stanford University, partly during his lifetime and partly as supplementary material
after his death. Klagenfurt University possesses Popper's library, including his precious bibliophilia, as well as
hard copies of the original Hoover material and microfilms of the supplementary material. The remaining parts of
the estate were mostly transferred to The Karl Popper Charitable Trust. In October 2008 Klagenfurt University acquired
the copyrights from the estate. Popper won many awards and honours in his field, including the Lippincott Award of
the American Political Science Association, the Sonning Prize, the Otto Hahn Peace Medal of the United Nations Association
of Germany in Berlin and fellowships in the Royal Society, British Academy, London School of Economics, King's College
London, Darwin College, Cambridge, and Charles University, Prague. Austria awarded him the Grand Decoration of Honour
in Gold for Services to the Republic of Austria in 1986, and the Federal Republic of Germany its Grand Cross with
Star and Sash of the Order of Merit, and the peace class of the Order Pour le Mérite. He received the Humanist Laureate
Award from the International Academy of Humanism. He was knighted by Queen Elizabeth II in 1965, and was elected
a Fellow of the Royal Society in 1976. He was invested with the Insignia of a Companion of Honour in 1982. Other
awards and recognition for Popper included the City of Vienna Prize for the Humanities (1965), Karl Renner Prize
(1978), Austrian Decoration for Science and Art (1980), Dr. Leopold Lucas Prize (1981), Ring of Honour of the City
of Vienna (1983) and the Premio Internazionale of the Italian Federico Nietzsche Society (1988). In 1992, he was
awarded the Kyoto Prize in Arts and Philosophy for "symbolising the open spirit of the 20th century" and for his
"enormous influence on the formation of the modern intellectual climate". Karl Popper's rejection of Marxism during
his teenage years left a profound mark on his thought. He had at one point joined a socialist association, and for
a few months in 1919 considered himself a communist. During this time he became familiar with the Marxist view of
economics, class-war, and history. Although he quickly became disillusioned with the views expounded by Marxism,
his flirtation with the ideology led him to distance himself from those who believed that spilling blood for the
sake of a revolution was necessary. He came to realise that when it came to sacrificing human lives, one was to think
and act with extreme prudence. The failure of democratic parties to prevent fascism from taking over Austrian politics
in the 1920s and 1930s traumatised Popper. He suffered from the direct consequences of this failure, since events
after the Anschluss, the annexation of Austria by the German Reich in 1938, forced him into permanent exile. His
most important works in the field of social science—The Poverty of Historicism (1944) and The Open Society and Its
Enemies (1945)—were inspired by his reflection on the events of his time and represented, in a sense, a reaction
to the prevalent totalitarian ideologies that then dominated Central European politics. His books defended democratic
liberalism as a social and political philosophy. They also represented extensive critiques of the philosophical presuppositions
underpinning all forms of totalitarianism. Popper puzzled over the stark contrast between the non-scientific character
of Freud and Adler's theories in the field of psychology and the revolution set off by Einstein's theory of relativity
in physics in the early 20th century. Popper thought that Einstein's theory, as a theory properly grounded in scientific
thought and method, was highly "risky", in the sense that it was possible to deduce consequences from it which were,
in the light of the then-dominant Newtonian physics, highly improbable (e.g., that light is deflected towards solid
bodies—confirmed by Eddington's experiments in 1919), and which would, if they turned out to be false, falsify the
whole theory. In contrast, nothing could, even in principle, falsify psychoanalytic theories. He thus came to the
conclusion that psychoanalytic theories had more in common with primitive myths than with genuine science. This led
Popper to conclude that what were regarded as the remarkable strengths of psychoanalytical theories were
actually their weaknesses. Psychoanalytical theories were crafted in a way that made them able to refute any criticism
and to give an explanation for every possible form of human behaviour. The nature of such theories made it impossible
for any criticism or experiment - even in principle - to show them to be false. This realisation had an important
consequence when Popper later tackled the problem of demarcation in the philosophy of science, as it led him to posit
that the strength of a scientific theory lies in its both being susceptible to falsification, and not actually being
falsified by criticism made of it. He considered that if a theory cannot, in principle, be falsified by criticism,
it is not a scientific theory. Popper coined the term "critical rationalism" to describe his philosophy. Concerning
the method of science, the term indicates his rejection of classical empiricism, and the classical observationalist-inductivist
account of science that had grown out of it. Popper argued strongly against the latter, holding that scientific theories
are abstract in nature, and can be tested only indirectly, by reference to their implications. He also held that
scientific theory, and human knowledge generally, is irreducibly conjectural or hypothetical, and is generated by
the creative imagination to solve problems that have arisen in specific historico-cultural settings. Logically, no
number of positive outcomes at the level of experimental testing can confirm a scientific theory, but a single counterexample
is logically decisive: it shows the theory, from which the implication is derived, to be false. To say that a given
statement (e.g., the statement of a law of some scientific theory), call it "T", is "falsifiable" does not
mean that "T" is false. Rather, it means that, if "T" is false, then (in principle), "T" could be shown to be false,
by observation or by experiment. Popper's account of the logical asymmetry between verification and falsifiability
lies at the heart of his philosophy of science. It also inspired him to take falsifiability as his criterion of demarcation
between what is, and is not, genuinely scientific: a theory should be considered scientific if, and only if, it is
falsifiable. This led him to attack the claims of both psychoanalysis and contemporary Marxism to scientific status,
on the basis that their theories are not falsifiable. In All Life is Problem Solving, Popper sought to explain the
apparent progress of scientific knowledge – that is, how it is that our understanding of the universe seems to improve
over time. This problem arises from his position that the truth content of our theories, even the best of them, cannot
be verified by scientific testing, but can only be falsified. Again, in this context the word "falsified" does not
refer to something being "fake"; rather, that something can be (i.e., is capable of being) shown to be false by observation
or experiment. Some things simply do not lend themselves to being shown to be false, and therefore, are not falsifiable.
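The logical asymmetry Popper appeals to can be written out as a simple schema (a standard modus tollens rendering of the point, not Popper's own notation):

```latex
% No finite number of confirmed predictions verifies a theory T:
(T \to O_1) \land O_1 \land \dots \land (T \to O_n) \land O_n \;\nvdash\; T
% But a single failed prediction refutes it, by modus tollens:
(T \to O) \land \lnot O \;\vdash\; \lnot T
```

The first line says that observing outcomes $O_1, \dots, O_n$ predicted by $T$ never logically entails $T$; the second says that one observation contradicting a prediction of $T$ does entail that $T$ is false.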
If so, then how is it that the growth of science appears to result in a growth in knowledge? In Popper's view, the
advance of scientific knowledge is an evolutionary process characterised by his formula PS1 → TT → EE → PS2. In
response to a given problem situation (PS1), a number of competing conjectures, or tentative theories (TT), are
systematically subjected to the most rigorous attempts at falsification possible. This process, error elimination
(EE), performs a similar function for science that natural selection performs for biological evolution. Theories
that better survive the process of refutation are not more true, but rather, more "fit"—in other words, more
applicable to the problem situation at hand (PS1). Consequently, just as a species' biological fitness does not
ensure continued survival, neither does rigorous testing protect a scientific theory from refutation in the future.
Yet, as it appears that the engine of biological evolution has, over many generations, produced adaptive traits
equipped to deal with more and more complex problems of survival, likewise, the evolution of theories through the
scientific method may, in Popper's view, reflect a certain type of progress: toward more and more interesting
problems (PS2). For Popper, it is in the interplay between the tentative
theories (conjectures) and error elimination (refutation) that scientific knowledge advances toward greater and greater
problems; in a process very much akin to the interplay between genetic variation and natural selection. Among his
contributions to philosophy is his claim to have solved the philosophical problem of induction. He states that while
there is no way to prove that the sun will rise, it is possible to formulate the theory that every day the sun will
rise; if it does not rise on some particular day, the theory will be falsified and will have to be replaced by a
different one. Until that day, there is no need to reject the assumption that the theory is true. Nor is it rational
according to Popper to make instead the more complex assumption that the sun will rise until a given day, but will
stop doing so the day after, or similar statements with additional conditions. Popper held that rationality is not
restricted to the realm of empirical or scientific theories, but that it is merely a special case of the general
method of criticism, the method of finding and eliminating contradictions in knowledge without ad-hoc-measures. According
to this view, rational discussion about metaphysical ideas, about moral values and even about purposes is possible.
Popper's student W.W. Bartley III tried to radicalise this idea and made the controversial claim that not only can
criticism go beyond empirical knowledge, but that everything can be rationally criticised. To Popper, who was an
anti-justificationist, traditional philosophy is misled by the false principle of sufficient reason. He thinks that
no assumption can ever be or needs ever to be justified, so a lack of justification is not a justification for doubt.
Instead, theories should be tested and scrutinised. It is not the goal to bless theories with claims of certainty
or justification, but to eliminate errors in them. He writes, "there are no such things as good positive reasons;
nor do we need such things [...] But [philosophers] obviously cannot quite bring [themselves] to believe that this
is my opinion, let alone that it is right" (The Philosophy of Karl Popper, p. 1043). In The Open Society and Its Enemies
and The Poverty of Historicism, Popper developed a critique of historicism and a defence of the "Open Society". Popper
considered historicism to be the theory that history develops inexorably and necessarily according to knowable general
laws towards a determinate end. He argued that this view is the principal theoretical presupposition underpinning
most forms of authoritarianism and totalitarianism. He argued that historicism is founded upon mistaken assumptions
regarding the nature of scientific law and prediction. Since the growth of human knowledge is a causal factor in
the evolution of human history, and since "no society can predict, scientifically, its own future states of knowledge",
it follows, he argued, that there can be no predictive science of human history. For Popper, metaphysical and historical
indeterminism go hand in hand. As early as 1934, Popper wrote of the search for truth as "one of the strongest motives
for scientific discovery." Still, he describes in Objective Knowledge (1972) early concerns about the much-criticised
notion of truth as correspondence. Then came the semantic theory of truth formulated by the logician Alfred Tarski
and published in 1933. Popper writes of learning in 1935 of the consequences of Tarski's theory, to his intense joy.
The theory met critical objections to truth as correspondence and thereby rehabilitated it. The theory also seemed,
in Popper's eyes, to support metaphysical realism and the regulative idea of a search for truth. According to this
theory, the conditions for the truth of a sentence as well as the sentences themselves are part of a metalanguage.
So, for example, the sentence "Snow is white" is true if and only if snow is white. Although many philosophers have
interpreted, and continue to interpret, Tarski's theory as a deflationary theory, Popper refers to it as a theory
in which "is true" is replaced with "corresponds to the facts". He bases this interpretation on the fact that examples
such as the one described above refer to two things: assertions and the facts to which they refer. He identifies
Tarski's formulation of the truth conditions of sentences as the introduction of a "metalinguistic predicate" and
distinguishes several such cases. Upon this basis, along with that of the logical content of assertions (where logical
content is inversely proportional to probability), Popper went on to develop his important notion of verisimilitude
or "truthlikeness". The intuitive idea behind verisimilitude is that the assertions or hypotheses of scientific theories
can be objectively measured with respect to the amount of truth and falsity that they imply. And, in this way, one
theory can be evaluated as more or less true than another on a quantitative basis which, Popper emphasises forcefully,
has nothing to do with "subjective probabilities" or other merely "epistemic" considerations. Knowledge, for Popper,
was objective, both in the sense that it is objectively true (or truthlike), and also in the sense that knowledge
has an ontological status (i.e., knowledge as object) independent of the knowing subject (Objective Knowledge: An
Evolutionary Approach, 1972). He proposed three worlds: World One, being the physical world, or physical states;
World Two, being the world of mind, or mental states, ideas, and perceptions; and World Three, being the body of
human knowledge expressed in its manifold forms, or the products of the second world made manifest in the materials
of the first world (i.e., books, papers, paintings, symphonies, and all the products of the human mind). World Three,
he argued, was the product of individual human beings in exactly the same sense that an animal path is the product
of individual animals, and that, as such, has an existence and evolution independent of any individual knowing subjects.
The influence of World Three, in his view, on the individual human mind (World Two) is at least as strong as the
influence of World One. In other words, the knowledge held by a given individual mind owes at least as much to the
total accumulated wealth of human knowledge, made manifest, as to the world of direct experience. As such, the growth
of human knowledge could be said to be a function of the independent evolution of World Three. Many contemporary
philosophers, such as Daniel Dennett, have not embraced Popper's Three World conjecture, due mostly, it seems, to
its resemblance to mind-body dualism. The creation–evolution controversy in the United States raises the issue of
whether creationistic ideas may be legitimately called science and whether evolution itself may be legitimately called
science. In the debate, both sides and even courts in their decisions have frequently invoked Popper's criterion
of falsifiability (see Daubert standard). In this context, passages written by Popper are frequently quoted in which
he speaks about such issues himself. For example, he famously stated "Darwinism is not a testable scientific theory,
but a metaphysical research program—a possible framework for testable scientific theories." Popper, however,
had his own sophisticated views on evolution that go far beyond what the frequently quoted passages say. In effect,
Popper agreed with some of the points of both creationists and naturalists, but also disagreed with both views on
crucial aspects. Popper understood the universe as a creative entity that invents new things, including life, but
without the necessity of something like a god, especially not one who is pulling strings from behind the curtain.
He said that evolution must, as the creationists say, work in a goal-directed way but disagreed with their view that
it must necessarily be the hand of god that imposes these goals onto the stage of life. Instead, he formulated the
spearhead model of evolution, a version of genetic pluralism. According to this model, living organisms themselves
have goals, and act according to these goals, each guided by a central control. In its most sophisticated form, this
is the brain of humans, but controls also exist in much less sophisticated ways for species of lower complexity,
such as the amoeba. This control organ plays a special role in evolution—it is the "spearhead of evolution". The
goals bring the purpose into the world. Mutations in the genes that determine the structure of the control may then
cause drastic changes in behaviour, preferences and goals, without having an impact on the organism's phenotype.
Popper postulates that such purely behavioural changes are less likely to be lethal for the organism compared to
drastic changes of the phenotype. Popper contrasts his views with the notion of the "hopeful monster" that has large
phenotype mutations and calls it the "hopeful behavioural monster". After behaviour has changed radically, small
but quick changes of the phenotype follow to make the organism fitter to its changed goals. This way it looks as
if the phenotype were changing guided by some invisible hand, while it is merely natural selection working in combination
with the new behaviour. For example, according to this hypothesis, the eating habits of the giraffe must have changed
before its elongated neck evolved. Popper contrasted this view, which he called "evolution from within" or "active Darwinism" (the
organism actively trying to discover new ways of life and being on a quest for conquering new ecological niches),
with the naturalistic "evolution from without" (which has the picture of a hostile environment only trying to kill
the mostly passive organism, or perhaps segregate some of its groups). About the creation-evolution controversy,
Popper wrote that he considered it "a somewhat sensational clash between a brilliant scientific hypothesis concerning
the history of the various species of animals and plants on earth, and an older metaphysical theory which, incidentally,
happened to be part of an established religious belief" with a footnote to the effect that "[he] agree[s] with Professor
C.E. Raven when, in his Science, Religion, and the Future, 1943, he calls this conflict "a storm in a Victorian tea-cup";
though the force of this remark is perhaps a little impaired by the attention he pays to the vapours still emerging
from the cup—to the Great Systems of Evolutionist Philosophy, produced by Bergson, Whitehead, Smuts, and others."
In an interview that Popper gave in 1969 with the condition that it shall be kept secret until after his death, he
summarised his position on God as follows: "I don't know whether God exists or not. ... Some forms of atheism are
arrogant and ignorant and should be rejected, but agnosticism—to admit that we don't know and to search—is all right.
... When I look at what I call the gift of life, I feel a gratitude which is in tune with some religious ideas of
God. However, the moment I even speak of it, I am embarrassed that I may do something wrong to God in talking about
God." He objected to organised religion, saying "it tends to use the name of God in vain", noting the danger of fanaticism
because of religious conflicts: "The whole thing goes back to myths which, though they may have a kernel of truth,
are untrue. Why then should the Jewish myth be true and the Indian and Egyptian myths not be true?" In a letter unrelated
to the interview, he stressed his tolerant attitude: "Although I am not for religion, I do think that we should show
respect for anybody who believes honestly." Popper played a vital role in establishing the philosophy of science
as a vigorous, autonomous discipline within philosophy, through his own prolific and influential works, and also
through his influence on his own contemporaries and students. Popper founded in 1946 the Department of Philosophy,
Logic and Scientific Method at the London School of Economics and there lectured and influenced both Imre Lakatos
and Paul Feyerabend, two of the foremost philosophers of science in the next generation of philosophy of science.
(Lakatos significantly modified Popper's position, and Feyerabend repudiated it entirely, but the work of both
is deeply influenced by Popper and engaged with many of the problems that Popper set.) While there is some dispute
as to the matter of influence, Popper had a long-standing and close friendship with economist Friedrich Hayek, who
was also brought to the London School of Economics from Vienna. Each found support and similarities in the other's
work, citing each other often, though not without qualification. In a letter to Hayek in 1944, Popper stated, "I
think I have learnt more from you than from any other living thinker, except perhaps Alfred Tarski." Popper dedicated
his Conjectures and Refutations to Hayek. For his part, Hayek dedicated a collection of papers, Studies in Philosophy,
Politics, and Economics, to Popper, and in 1982 said, "...ever since his Logik der Forschung first came out in 1934,
I have been a complete adherent to his general theory of methodology." Popper does not argue that any such conclusions
are therefore true, or that this describes the actual methods of any particular scientist. Rather,
it is recommended as an essential principle of methodology that, if enacted by a system or community, will lead to
slow but steady progress of a sort (relative to how well the system or community enacts the method). It has been
suggested that Popper's ideas are often mistaken for a hard logical account of truth because of the historical co-incidence
of their appearing at the same time as logical positivism, the followers of which mistook his aims for their own.
The Quine–Duhem thesis argues that it is impossible to test a single hypothesis on its own, since each one comes as
part of an environment of theories. Thus we can only say that the whole package of relevant theories has been collectively
falsified, but cannot conclusively say which element of the package must be replaced. An example of this is given
by the discovery of the planet Neptune: when the motion of Uranus was found not to match the predictions of Newton's
laws, the theory "There are seven planets in the solar system" was rejected, and not Newton's laws themselves. Popper
discussed this critique of naïve falsificationism in Chapters 3 and 4 of The Logic of Scientific Discovery. For Popper,
theories are accepted or rejected via a sort of selection process. Theories that say more about the way things appear
are to be preferred over those that do not; the more generally applicable a theory is, the greater its value. Thus
Newton's laws, with their wide general application, are to be preferred over the much more specific "the solar system
has seven planets". Popper claimed to have recognised already in the 1934 version of his Logic
of Discovery a fact later stressed by Kuhn, "that scientists necessarily develop their ideas within a definite theoretical
framework", and to that extent to have anticipated Kuhn's central point about "normal science". (But Popper criticised
what he saw as Kuhn's relativism.) Also, in his collection Conjectures and Refutations: The Growth of Scientific
Knowledge (Harper & Row, 1963), Popper writes, "Science must begin with myths, and with the criticism of myths; neither
with the collection of observations, nor with the invention of experiments, but with the critical discussion of myths,
and of magical techniques and practices. The scientific tradition is distinguished from the pre-scientific tradition
in having two layers. Like the latter, it passes on its theories; but it also passes on a critical attitude towards
them. The theories are passed on, not as dogmas, but rather with the challenge to discuss them and improve upon them."
Another objection is that it is not always possible to demonstrate falsehood definitively, especially if one is using
statistical criteria to evaluate a null hypothesis. More generally it is not always clear, if evidence contradicts
a hypothesis, that this is a sign of flaws in the hypothesis rather than of flaws in the evidence. However, this
is a misunderstanding of what Popper's philosophy of science sets out to do. Rather than offering a set of instructions
that merely need to be followed diligently to achieve science, Popper makes it clear in The Logic of Scientific Discovery
that his belief is that the resolution of conflicts between hypotheses and observations can only be a matter of the
collective judgment of scientists, in each individual case. In a book called Science Versus Crime, Houck writes that
Popper's falsificationism can be questioned logically: it is not clear how Popper would deal with a statement like
"for every metal, there is a temperature at which it will melt." The hypothesis cannot be falsified by any possible
observation, for there will always be a higher temperature than tested at which the metal may in fact melt, yet it
seems to be a valid scientific hypothesis. These examples were pointed out by Carl Gustav Hempel. Hempel came to
acknowledge that Logical Positivism's verificationism was untenable, but argued that falsificationism was equally
untenable on logical grounds alone. The simplest response to this is that, because Popper describes how theories
attain, maintain and lose scientific status, individual consequences of currently accepted scientific theories are
scientific in the sense of being part of tentative scientific knowledge, and both of Hempel's examples fall under
this category. For instance, atomic theory implies that all metals melt at some temperature. In 2004, philosopher
and psychologist Michel ter Hark (Groningen, the Netherlands) published a book, Popper, Otto Selz and the
rise of evolutionary epistemology, in which he claimed that Popper took some of his ideas from his tutor, the German
psychologist Otto Selz. Selz never published his ideas, partly because of the rise of Nazism, which forced him to
quit his work in 1933 and prohibited any reference to his work. As a historian of ideas, Popper and his scholarship
are criticised in some academic quarters for his rejection of Plato, Hegel and Marx. According to John N. Gray, Popper
held that "a theory is scientific only in so far as it is falsifiable, and should be given up as soon as it is falsified."
By applying Popper's account of scientific method, Gray's Straw Dogs states that this would have "killed the theories
of Darwin and Einstein at birth." When they were first advanced, Gray claims, each of them was "at odds with some
available evidence; only later did evidence become available that gave them crucial support." Against this, Gray
seeks to establish the irrationalist thesis that "the progress of science comes from acting against reason." Gray
does not, however, give any indication of what available evidence these theories were at odds with, and his appeal
to "crucial support" illustrates the very inductivist approach to science that Popper sought to show was logically
illegitimate. For, according to Popper, Einstein's theory was at least equally as well corroborated as Newton's upon
its initial conception; they both equally well accounted for all the hitherto available evidence. Moreover, since
Einstein also explained the empirical refutations of Newton's theory, general relativity was immediately deemed suitable
for tentative acceptance on the Popperian account. Indeed, Popper wrote, several decades before Gray's criticism,
in reply to a critical essay by Imre Lakatos: Such a theory would be true with higher probability, because it cannot
be attacked so easily: to falsify the first one, it is sufficient to find that the sun has stopped rising; to falsify
the second one, one additionally needs the assumption that the given day has not yet been reached. Popper held that
it is the least likely, or most easily falsifiable, or simplest theory (attributes which he identified as all the
same thing) that explains known facts that one should rationally prefer. His opposition to positivism, which held
that it is the theory most likely to be true that one should prefer, here becomes very apparent. It is impossible,
Popper argues, to ensure that a theory is true; it is more important that its falsity can be detected as easily as
possible. In his early years Popper was impressed by Marxism, whether of Communists or socialists. An event that
happened in 1919 had a profound effect on him: during a riot instigated by the Communists in an attempt to free party
comrades from prison, the police shot several unarmed people, including some of Popper's friends. The riot had, in
fact, been part of a plan by which leaders of the Communist party with connections to Béla Kun tried to take power
by a coup; Popper did not know about this at that time. However, he knew that the riot instigators were swayed by
the Marxist doctrine that class struggle would produce vastly more deaths than the inevitable revolution, unless that
revolution were brought about as quickly as possible, and so they had no scruples about putting the lives of the rioters
at risk to achieve their selfish goal of becoming the future leaders of the working class. This was the start of his later criticism of historicism.
Popper began to reject Marxist historicism, which he associated with questionable means, and later socialism, which
he associated with placing equality before freedom (to the possible disadvantage of equality).
A mandolin (Italian: mandolino pronounced [mandoˈliːno]; literally "small mandola") is a musical instrument in the lute family
and is usually plucked with a plectrum or "pick". It commonly has four courses of doubled metal strings tuned in
unison (8 strings), although five (10 strings) and six (12 strings) course versions also exist. The courses are normally
tuned in a succession of perfect fifths. It is the soprano member of a family that includes the mandola, octave mandolin,
mandocello and mandobass. There are many styles of mandolin, but three are common: the Neapolitan or round-backed
mandolin, the carved-top mandolin and the flat-backed mandolin. The round-back has a deep bottom, constructed of
strips of wood, glued together into a bowl. The carved-top or arch-top mandolin has a much shallower, arched back,
and an arched top—both carved out of wood. The flat-backed mandolin uses thin sheets of wood for the body, braced
on the inside for strength in a similar manner to a guitar. Each style of instrument has its own sound quality and
is associated with particular forms of music. Neapolitan mandolins feature prominently in European classical music
and traditional music. Carved-top instruments are common in American folk music and bluegrass music. Flat-backed
instruments are commonly used in Irish, British and Brazilian folk music. Some modern Brazilian instruments feature
an extra fifth course tuned a fifth lower than the standard fourth course. Much of mandolin development revolved
around the soundboard (the top). Pre-mandolin instruments were quiet, strung with as many as six courses
of gut strings, and were plucked with the fingers or with a quill. However, modern instruments are louder—using four
courses of metal strings, which exert more pressure than the gut strings. The modern soundboard is designed to withstand
the pressure of metal strings that would break earlier instruments. The soundboard comes in many shapes—but generally
round or teardrop-shaped, sometimes with scrolls or other projections. There are usually one or more sound holes in
the soundboard, either round, oval, or shaped like a calligraphic F (f-hole). A round or oval sound hole may be covered
or bordered with decorative rosettes or purfling. Beside the introduction of the lute to Spain (Andalusia) by the
Moors, another important point of transfer of the lute from Arabian to European culture was Sicily, where it was
brought either by Byzantine or later by Muslim musicians. There were singer-lutenists at the court in Palermo following
the Norman conquest of the island from the Muslims, and the lute is depicted extensively in the ceiling paintings
in Palermo's royal Cappella Palatina, dedicated by the Norman King Roger II of Sicily in 1140. His Hohenstaufen
grandson Frederick II, Holy Roman Emperor (1194–1250), continued integrating Muslims into his court, including Moorish
musicians. By the 14th century, lutes had disseminated throughout Italy and, probably because of the cultural influence
of the Hohenstaufen kings and emperor, based in Palermo, the lute had also made significant inroads into the German-speaking
lands. There is confusion currently as to the name of the eldest Vinaccia luthier who first ran the shop. His name
has been put forth as Gennaro Vinaccia (active c. 1710 to c. 1788) and Nic. Vinaccia. His son Antonio Vinaccia was
active c. 1734 to c. 1796. An early extant example of a mandolin is one built by Antonio Vinaccia in 1759, which
resides at the University of Edinburgh. Another, built by Giuseppe Vinaccia in 1893, is also at the University
of Edinburgh. The earliest extant mandolin was built in 1744 by Antonio's son, Gaetano Vinaccia. It resides in the
Conservatoire Royal de Musique in Brussels, Belgium. The transition from the mandolino to the mandolin began around
1744 with the Vinaccia family's design of the metal-string mandolin, which had three brass strings and one of gut, using
friction tuning pegs on a fingerboard that sat "flush" with the sound table. The mandolin grew in popularity over
the next 60 years, in the streets where it was used by young men courting and by street musicians, and in the concert
hall. After the Napoleonic Wars ended in 1815, however, its popularity began to fall. The 19th century produced some prominent
players, including Bartolomeo Bortolazzi of Venice and Pietro Vimercati. However, professional virtuosity was in
decline, and the mandolin music changed as the mandolin became a folk instrument; "the large repertoire of notated
instrumental music for the mandolino and the mandoline was completely forgotten". The export market for mandolins
from Italy dried up around 1815, and when Carmine de Laurentiis wrote a mandolin method in 1874, the Music World
magazine wrote that the mandolin was "out of date." Salvador Léonardi mentioned this decline in his 1921 book, Méthode
pour Banjoline ou Mandoline-Banjo, saying that the mandolin had been declining in popularity from previous times.
Beginning with the Paris Exposition of 1878, the instrument's popularity rebounded. The Exposition was one of many
stops for a popular new performing group, the Estudiantes Españoles (Spanish Students). They danced and played guitars,
violins and the bandurria, which became confused with the mandolin. Along with the energy and awareness created by
the day's hit sensation, a wave of Italian mandolinists travelled through Europe in the 1880s and 1890s and reached
the United States by the mid-1880s, playing and teaching their instrument. The instrument's popularity continued to increase
during the 1890s, and mandolin popularity was at its height in the "early years of the 20th century." Thousands were taking
up the instrument as a pastime, and it became an instrument of society, taken up by young men and women. Mandolin
orchestras were formed worldwide, incorporating not only the mandolin family of instruments, but also guitars, double
basses and zithers. The second decline was not as complete as the first. Thousands of people had learned to play
the instrument. Even as the second wave of mandolin popularity declined in the early 20th century, new versions of
the mandolin began to be used in new forms of music. Luthiers created the resonator mandolin, the flatback mandolin,
the carved-top or arched-top mandolin, the mandolin-banjo and the electric mandolin. Musicians began playing it in
Celtic, bluegrass, jazz and rock-and-roll styles, and classical too. Like any plucked instrument, mandolin notes decay
to silence rather than sound out continuously as with a bowed note on a violin, and mandolin notes decay faster than
larger stringed instruments like the guitar. This encourages the use of tremolo (rapid picking of one or more pairs
of strings) to create sustained notes or chords. The mandolin's paired strings facilitate this technique: the plectrum
(pick) strikes each of a pair of strings alternately, providing a more full and continuous sound than a single string
would. The Neapolitan style has an almond-shaped body resembling a bowl, constructed from curved strips of wood.
It usually has a bent sound table, canted in two planes, designed to take the tension of the eight metal strings
arranged in four courses. A hardwood fingerboard sits on top of or is flush with the sound table. Very old instruments
may use wooden tuning pegs, while newer instruments tend to use geared metal tuners. The bridge is a movable length
of hardwood. A pickguard is glued below the sound hole under the strings. European roundbacks commonly use a 13-inch
scale instead of the 13.876-inch scale common on archtop mandolins. Another family of bowlback mandolins came from Milan and
Lombardy. These mandolins are closer to the mandolino or mandore than other modern mandolins. They are shorter and
wider than the standard Neapolitan mandolin, with a shallow back. The instruments have 6 strings, 3 wire treble-strings
and 3 gut or wire-wrapped-silk bass-strings. The strings ran between the tuning pegs and a bridge that was glued
to the soundboard, as on a guitar. The Lombardic mandolins were tuned g b e' a' d" g". A developer of the Milanese
style was Antonio Monzino (Milan), whose family made them for six generations. Samuel Adelstein described the Lombardi
mandolin in 1893 as wider and shorter than the Neapolitan mandolin, with a shallower back and a shorter and wider
neck, with six single strings compared to the regular mandolin's set of four. The Lombardi was tuned C, D, A, E, B, G. The strings
were fastened to the bridge like a guitar's. There were 20 frets, covering three octaves, with an additional 5 notes.
When Adelstein wrote, there were no nylon strings, and the gut and single strings "do not vibrate so clearly and
sweetly as the double steel string of the Neapolitan." In his 1805 mandolin method, Anweisung die Mandoline von selbst
zu erlernen nebst einigen Uebungsstucken von Bortolazzi, Bartolomeo Bortolazzi popularised the Cremonese mandolin,
which had four single-strings and a fixed bridge, to which the strings were attached. Bortolazzi said in this book
that the new wire strung mandolins were uncomfortable to play, when compared with the gut-string instruments. Also,
he felt they had a "less pleasing...hard, zither-like tone" as compared to the gut string's "softer, full-singing
tone." He favored the four single strings of the Cremonese instrument, which were tuned the same as the Neapolitan.
At the very end of the 19th century, a new style, with a carved top and back construction inspired by violin family
instruments began to supplant the European-style bowl-back instruments in the United States. This new style is credited
to mandolins designed and built by Orville Gibson, a Kalamazoo, Michigan luthier who founded the "Gibson Mandolin-Guitar
Manufacturing Co., Limited" in 1902. Gibson mandolins evolved into two basic styles: the Florentine or F-style, which
has a decorative scroll near the neck, two points on the lower body and usually a scroll carved into the headstock;
and the A-style, which is pear shaped, has no points and usually has a simpler headstock. These styles generally
have either two f-shaped soundholes like a violin (F-5 and A-5), or an oval sound hole (F-4 and A-4 and lower models)
directly under the strings. Much variation exists between makers working from these archetypes, and other variants
have become increasingly common. Generally, in the United States, Gibson F-hole F-5 mandolins and mandolins influenced
by that design are strongly associated with bluegrass, while the A-style is associated with other types of music, although
it too is most often used for and associated with bluegrass. The F-5's more complicated woodwork also translates
into a more expensive instrument. Numerous modern mandolin makers build instruments that largely replicate the Gibson
F-5 Artist models built in the early 1920s under the supervision of Gibson acoustician Lloyd Loar. Original Loar-signed
instruments are sought after and extremely valuable. Other makers from the Loar period and earlier include Lyon and
Healy, Vega and Larson Brothers. Some notable modern American carved mandolin manufacturers include, in addition
to Kay, Gibson, Weber, Monteleone and Collings. Mandolins from other countries include The Loar (China), Santa Rosa
(China), Michael Kelly (Korea), Eastman (China), Kentucky (China), Heiden (Canada), Gilchrist (Australia) and Morgan
Monroe (China). The international repertoire of music for mandolin is almost unlimited, and musicians use it to play
various types of music. This is especially true of violin music, since the mandolin has the same tuning as the violin.
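The standard tuning in successive perfect fifths described earlier (G3, D4, A4, E5, low course to high, the same as a violin) can be sketched numerically. As an illustrative aside not drawn from the text above: in twelve-tone equal temperament each perfect fifth spans seven semitones, so the open-course frequencies follow directly from the A4 = 440 Hz reference.

```python
# Open-course pitches of a standard mandolin (same as a violin): G3, D4, A4, E5.
# Each successive course is a perfect fifth (7 semitones) higher than the last.
# Frequencies are computed in 12-tone equal temperament relative to A4 = 440 Hz.

A4 = 440.0  # reference pitch in Hz

def equal_tempered(semitones_from_a4: float) -> float:
    """Frequency of a pitch a given number of semitones away from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Semitone offsets from A4 for the four courses, low to high.
courses = {"G3": -14, "D4": -7, "A4": 0, "E5": 7}

for name, offset in courses.items():
    print(f"{name}: {equal_tempered(offset):.2f} Hz")

# Adjacent courses differ by exactly 7 semitones, i.e. a frequency
# ratio of 2**(7/12) ≈ 1.498 -- the equal-tempered perfect fifth.
```

Because each of the four courses is a doubled pair tuned in unison, both strings of a pair sound the same frequency; the fifth and sixth courses on extended instruments simply continue the same seven-semitone pattern downward.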
Following its invention and early development in Italy, the mandolin spread throughout the European continent. The
instrument was primarily used in a classical tradition, with mandolin orchestras, so-called Estudiantinas (or, in Germany,
Zupforchester), appearing in many cities. Following this continental popularity of the mandolin family, local traditions
appeared outside of Europe in the Americas and in Japan. Travelling mandolin virtuosi like Giuseppe Pettine, Raffaele
Calace and Silvio Ranieri contributed to the mandolin becoming a "fad" instrument in the early 20th century. This
"mandolin craze" was fading by the 1930s, but just as this practice was falling into disuse, the mandolin found a
new niche in American country, old-time music, bluegrass and folk music. More recently, the Baroque and Classical
mandolin repertory and styles have benefited from the raised awareness of and interest in Early music, with media
attention to classical players such as Israeli Avi Avital, Italian Carlo Aonzo and American Joseph Brent. Phil Skinner
played a key role in 20th century development of the mandolin movement in Australia, and was awarded an MBE in 1979
for services to music and the community. He was born Harry Skinner in Sydney in 1903 and started learning music at
age 10 when his uncle tutored him on the banjo. Skinner began teaching part-time at age 18, until the Great Depression
forced him to begin teaching full-time and learn a broader range of instruments. Skinner founded the Sydney Mandolin
Orchestra, the oldest surviving mandolin orchestra in Australia. The Sydney Mandolins (Artistic Director: Adrian
Hooper) have contributed greatly to the repertoire through commissioning over 200 works by Australian and International
composers. Most of these works have been released on compact disc and can regularly be heard on radio stations on
the ABC and MBS networks. One of their members, mandolin virtuoso Paul Hooper, has had a number of Concertos written
for him by composers such as Eric Gross. He has performed and recorded these works with the Sydney Symphony Orchestra
and the Tasmanian Symphony Orchestra. In addition, Paul Hooper has had many solo works dedicated to him by Australian
composers e.g., Caroline Szeto, Ian Shanahan, Larry Sitsky and Michael Smetanin. In the early 20th century several
mandolin orchestras (Estudiantinas) were active in Belgium. Today only a few groups remain: Royal Estudiantina la
Napolitaine (founded in 1904) in Antwerp, Brasschaats mandoline orkest in Brasschaat and an orchestra in Mons (Bergen).
Gerda Abts is a well-known mandolin virtuoso in Belgium. She is also a mandolin teacher and gives lessons in the music
academies of Lier, Wijnegem and Brasschaat, and is now also a professor of mandolin at the music high school "Koninklijk
Conservatorium Artesis Hogeschool Antwerpen". She gives various concerts each year in different ensembles and is
in close contact with the Brasschaat Mandolin Orchestra. Her site is www.gevoeligesnaar.be. Prior to the Golden Age
of Mandolins, France had a history with the mandolin, with mandolinists playing in Paris until the Napoleonic Wars.
The players, teachers and composers included Giovanni Fouchetti, Eduardo Mezzacapo, Gabriele Leon, and Gervasio.
During the Golden age itself (1880s-1920s), the mandolin had a strong presence in France. Prominent mandolin players
or composers included Jules Cottin and his sister Madeleine Cottin, Jean Pietrapertosa, and Edgar Bara. Paris had
dozens of "estudiantina" mandolin orchestras in the early 1900s. Mandolin magazines included L'Estudiantina, Le Plectre,
École de la mandoline. On the island of Crete, along with the lyra and the laouto (lute), the mandolin is one of the
main instruments used in Cretan Music. It appeared on Crete around the time of the Venetian rule of the island. Different
variants of the mandolin, such as the "mantola," were used to accompany the lyra, the violin, and the laouto. Stelios
Foustalierakis reported that the mandolin and the mpoulgari were used to accompany the lyra in the beginning of the
20th century in the city of Rethimno. There are also reports that the mandolin was mostly a woman's musical instrument.
Nowadays it is played mainly as a solo instrument in personal and family events on the Ionian islands and Crete.
Many adaptations of the instrument have been done to cater to the special needs of Indian Carnatic music. In Indian
classical music and Indian light music, the mandolin, which bears little resemblance to the European mandolin, is
usually tuned E-B-E-B. As there is no concept of absolute pitch in Indian classical music, any convenient tuning
maintaining these relative pitch intervals between the strings can be used. Another prevalent tuning with these intervals
is C-G-C-G, which corresponds to Sa-Pa-Sa-Pa in the Indian carnatic classical music style. This tuning corresponds
to the way violins are tuned for carnatic classical music. This type of mandolin is also used in Bhangra, dance music
popular in Punjabi culture. Though almost any variety of acoustic mandolin might be adequate for Irish traditional
music, virtually all Irish players prefer flat-backed instruments with oval sound holes to the Italian-style bowl-back
mandolins or the carved-top mandolins with f-holes favoured by bluegrass mandolinists. The former are often too soft-toned
to hold their own in a session (as well as having a tendency to not stay in place on the player's lap), whilst the
latter tend to sound harsh and overbearing to the traditional ear. The f-hole mandolin, however, does come into its
own in a traditional session, where its brighter tone cuts through the sonic clutter of a pub. Greatly preferred
for formal performance and recording are flat-topped "Irish-style" mandolins (reminiscent of the WWI-era Martin Army-Navy
mandolin) and carved (arch) top mandolins with oval soundholes, such as the Gibson A-style of the 1920s. Noteworthy
Irish mandolinists include Andy Irvine (who, like Johnny Moynihan, almost always tunes the top E down to D, to achieve
an open tuning of GDAD), Paul Brady, Mick Moloney, Paul Kelly and Claudine Langille. John Sheahan and the late Barney
McKenna, respectively fiddle player and tenor banjo player with The Dubliners, are also accomplished Irish mandolin
players. The instruments used are either flat-backed, oval hole examples as described above (made by UK luthier Roger
Bucknall of Fylde Guitars), or carved-top, oval hole instruments with arched back (made by Stefan Sobell in Northumberland).
The Irish guitarist Rory Gallagher often played the mandolin on stage, and he most famously used it in the song "Going
To My Hometown." Antonio Vivaldi composed a mandolin concerto (Concerto in C major, Op. 3 No. 6) and two concertos for
two mandolins and orchestra. Wolfgang Amadeus Mozart placed it in his 1787 work Don Giovanni and Beethoven created
four variations of it. Antonio Maria Bononcini composed La conquista delle Spagne di Scipione Africano il giovane
in 1707 and George Frideric Handel composed Alexander Balus in 1748. Others include Giovanni Battista Gervasio (Sonata
in D major for Mandolin and Basso Continuo), Giuseppe Giuliano (Sonata in D major for Mandolin and Basso Continuo),
Emanuele Barbella (Sonata in D major for Mandolin and Basso Continuo), Domenico Scarlatti (Sonata n.54 (K.89) in
D minor for Mandolin and Basso Continuo), and Addiego Guerra (Sonata in G major for Mandolin and Basso Continuo).
The expansion of mandolin use continued after World War II through the late 1960s, and Japan still maintains a strong
classical music tradition using mandolins, with active orchestras and university music programs. New orchestras were
founded and new orchestral compositions composed. Japanese mandolin orchestras today may consist of up to 40 or 50
members, and can include woodwind, percussion, and brass sections. Japan also maintains an extensive collection of
20th Century mandolin music from Europe and one of the most complete collections of mandolin magazines from mandolin's
golden age, purchased by Morishige Takei. The bandolim (Portuguese for "mandolin") was a favourite instrument within
the Portuguese bourgeoisie of the 19th century, but its rapid spread took it to other places, joining other instruments.
Today you can see mandolins as part of the traditional and folk culture of Portuguese singing groups and the majority
of the mandolin scene in Portugal is on Madeira Island, which has over 17 active mandolin orchestras and tunas.
The mandolin virtuoso Fabio Machado is one of Portugal's most accomplished mandolin players. The Portuguese influence
brought the mandolin to Brazil. The mandolin has been used extensively in the traditional music of England and Scotland
for generations. Simon Mayor is a prominent British player who has produced six solo albums, instructional books
and DVDs, as well as recordings with his mandolin quartet the Mandolinquents. The instrument has also found its way
into British rock music. The mandolin was played by Mike Oldfield (and introduced by Vivian Stanshall) on Oldfield's
album Tubular Bells, as well as on a number of his subsequent albums (particularly prominently on Hergest Ridge (1974)
and Ommadawn (1975)). It was used extensively by the British folk-rock band Lindisfarne, who featured two members
on the instrument, Ray Jackson and Simon Cowe, and whose "Fog on the Tyne" was the biggest-selling UK album of 1971–1972.
The instrument was also used extensively in the UK folk revival of the 1960s and 1970s with bands such as Fairport
Convention and Steeleye Span taking it on as the lead instrument in many of their songs. "Maggie May" by Rod Stewart,
which hit No. 1 on both the British charts and the Billboard Hot 100, also featured Jackson's playing. It has also
been used by other British rock musicians. Led Zeppelin's bassist John Paul Jones is an accomplished mandolin player
and has recorded numerous songs on mandolin, including "Going to California" and "That's the Way"; the mandolin part on
"The Battle of Evermore" is played by Jimmy Page, who composed the song. Other Led Zeppelin songs featuring mandolin
are "Hey Hey What Can I Do" and "Black Country Woman". Pete Townshend of The Who played mandolin on the track "Mike Post
Theme", along with many other tracks on Endless Wire. McGuinness Flint, for whom Graham Lyle played the mandolin on
their most successful single, "When I'm Dead and Gone", is another example. Lyle was also briefly a member of Ronnie
Lane's Slim Chance, and played mandolin on their hit "How Come". One of the more prominent early mandolin players in
popular music was Robin Williamson in The Incredible String Band. Ian Anderson of Jethro Tull is a highly accomplished
mandolin player (as on the track "Pussy Willow"), as is his guitarist Martin Barre. The popular song "Please Please
Please Let Me Get What I Want" by The Smiths featured a mandolin solo played by Johnny Marr. More recently, the Glasgow-based
band Sons and Daughters featured the mandolin, played by Ailidh Lennon, on tracks such as "Fight", "Start to End", and
"Medicine". British folk-punk icons the Levellers also regularly use the mandolin in their songs. Current bands are
also beginning to use the mandolin and its unique sound, such as South London's Indigo Moss, who use it throughout
their recordings and live gigs. The mandolin has also featured in the playing of Matthew Bellamy in the rock band
Muse. It also forms the basis of Paul McCartney's 2007 hit "Dance Tonight." That was not the first time a Beatle
played a mandolin, however; that distinction goes to George Harrison on "Gone Troppo", the title cut from the 1982
album of the same name. The mandolin is taught in Lanarkshire by the Lanarkshire Guitar and Mandolin Association
to over 100 people. More recently, the hard rock supergroup Them Crooked Vultures have been playing a song based
primarily around a mandolin. The song was left off their debut album and features former Led Zeppelin bassist John
Paul Jones.[citation needed] The mandolin's popularity in the United States was spurred by the success of a group
of touring young European musicians known as the Estudiantina Figaro, or in the United States, simply the "Spanish
Students." The group landed in the U.S. on January 2, 1880, in New York City, and played in Boston and New York to
wildly enthusiastic crowds. Ironically, this ensemble did not play mandolins but rather bandurrias, which are also
small, double-strung instruments that resemble the mandolin. The success of the Figaro Spanish Students spawned other
groups who imitated their musical style and costumes. An Italian musician, Carlo Curti, hastily started a musical
ensemble after seeing the Figaro Spanish Students perform; his group of Italian born Americans called themselves
the "Original Spanish Students," counting on the American public to not know the difference between the Spanish bandurrias
and Italian mandolins. The imitators' use of mandolins helped to generate enormous public interest in an instrument
previously relatively unknown in the United States. Mandolin awareness in the United States blossomed in the 1880s,
as the instrument became part of a fad that continued into the mid-1920s. According to Clarence L. Partee, the first
mandolin made in the United States was built in 1883 or 1884 by Joseph Bohmann, who was an established maker of violins
in Chicago. Partee characterized the early instrument as being larger than the European instruments he was used to,
with a "peculiar shape" and "crude construction," and said that the quality improved, until American instruments
were "superior" to imported instruments. At the time, Partee was using an imported French-made mandolin. Instruments
were marketed by teacher-dealers, much like the title character in the popular musical The Music Man. Often, these
teacher-dealers conducted mandolin orchestras: groups of 4–50 musicians who played various mandolin family instruments.
However, alongside the teacher-dealers were serious musicians, working to create a spot for the instrument in classical
music, ragtime and jazz. Like the teacher-dealers, they traveled the U.S., recording records, giving performances
and teaching individuals and mandolin orchestras. Samuel Siegel played mandolin in Vaudeville and became one of America's
preeminent mandolinists. Seth Weeks was an African American who not only taught and performed in the United States,
but also in Europe, where he recorded records. Another pioneering African American musician and director who made
his start with a mandolin orchestra was composer James Reese Europe. W. Eugene Page toured the country with a group,
and was well known for his mandolin and mandola performances. Other names include Valentine Abt, Samuel Adelstein,
William Place, Jr., and Aubrey Stauffer. The instrument was primarily used in an ensemble setting well into the 1930s,
and although the fad died out at the beginning of the 1930s, the instruments that were developed for the orchestra
found a new home in bluegrass. The famous Lloyd Loar Master Model from Gibson (1923) was designed to boost the flagging
interest in mandolin ensembles, with little success. However, the "Loar" became the defining instrument of bluegrass
music when Bill Monroe purchased F-5 S/N 73987 in a Florida barbershop in 1943 and popularized it as his main instrument.
The mandolin orchestras never completely went away, however. In fact, along with all the other musical forms the
mandolin is involved with, the mandolin ensemble (groups usually arranged like the string section of a modern symphony
orchestra, with first mandolins, second mandolins, mandolas, mandocellos, mando-basses, and guitars, and sometimes
supplemented by other instruments) continues to grow in popularity. Since the mid-nineties, several public-school
mandolin-based guitar programs have blossomed around the country, including Fretworks Mandolin and Guitar Orchestra,
the first of its kind. The national organization, Classical Mandolin Society of America, founded by Norman Levine,
represents these groups. Prominent modern mandolinists and composers for mandolin in the classical music tradition
include Samuel Firstman, Howard Fry, Rudy Cipolla, Dave Apollon, Neil Gladd, Evan Marshall, Marilynn Mair and Mark
Davis (the Mair-Davis Duo), Brian Israel, David Evans, Emanuil Shynkman, Radim Zenkl, David Del Tredici and Ernst
Krenek. When Cowan Powers and his family recorded their old-time music from 1924–1926, his daughter Orpha Powers
was one of the earliest known southern-music artists to record with the mandolin. By the 1930s, single mandolins
were becoming more commonly used in southern string band music, most notably by brother duets such as the sedate
Blue Sky Boys (Bill Bolick and Earl Bolick) and the more hard-driving Monroe Brothers (Bill Monroe and Charlie Monroe).
However, the mandolin's modern popularity in country music can be directly traced to one man: Bill Monroe, the father
of bluegrass music. After the Monroe Brothers broke up in 1939, Bill Monroe formed his own group, which after a brief
time was called the Blue Grass Boys, and completed the transition of mandolin styles from a "parlor" sound typical of brother
duets to the modern "bluegrass" style. He joined the Grand Ole Opry in 1939 and its powerful clear-channel broadcast
signal on WSM-AM spread his style throughout the South, directly inspiring many musicians to take up the mandolin.
Monroe famously played a Gibson F-5 mandolin, signed and dated July 9, 1923, by Lloyd Loar, chief acoustic engineer
at Gibson. The F-5 has since become the most imitated tonally and aesthetically by modern builders. Monroe's style
involved playing lead melodies in the style of a fiddler, and also a percussive chording sound referred to as "the
chop" for the sound made by the quickly struck and muted strings. He also perfected a sparse, percussive blues style,
especially up the neck in keys that had not been used much in country music, notably B and E. He emphasized a powerful,
syncopated right hand at the expense of left-hand virtuosity. Monroe's most influential followers of the second generation
are Frank Wakefield and, nowadays, Mike Compton of the Nashville Bluegrass Band and David Long, who often tour as a
duet. Tiny Moore of the Texas Playboys developed an electric five-string mandolin and helped popularize the instrument
in Western swing music. Other major bluegrass mandolinists who emerged in the early 1950s and are still active include
Jesse McReynolds (of Jim and Jesse), who invented a syncopated, banjo-roll-like style called crosspicking, and Bobby
Osborne of the Osborne Brothers, who is a master of clarity and sparkling single-note runs. Highly respected and
influential modern bluegrass players include Herschel Sizemore, Doyle Lawson, and the multi-genre Sam Bush, who is
equally at home with old-time fiddle tunes, rock, reggae, and jazz. Ronnie McCoury of the Del McCoury Band has won
numerous awards for his Monroe-influenced playing. The late John Duffey of the original Country Gentlemen and later
the Seldom Scene did much to popularize the bluegrass mandolin among folk and urban audiences, especially on the
east coast and in the Washington, D.C. area. Jethro Burns, best known as half of the comedy duo Homer and Jethro,
was also the first important jazz mandolinist. Tiny Moore initially
played an 8-string Gibson but switched after 1952 to a 5-string solidbody electric instrument built by Paul Bigsby.
Modern players David Grisman, Sam Bush, and Mike Marshall, among others, have worked since the early 1970s to demonstrate
the mandolin's versatility for all styles of music. Chris Thile of California is a well-known player, and has accomplished
many feats of traditional bluegrass, classical, contemporary pop and rock; the band Nickel Creek featured his playing
in its blend of traditional and pop styles, and he now plays in his band Punch Brothers. Most commonly associated
with bluegrass, the mandolin has also been widely used in country music over the years. Some well-known players include Marty
Stuart, Vince Gill, and Ricky Skaggs. Mandolin has also been used in blues music, most notably by Ry Cooder, who
performed notable covers on his very first recordings, as well as Yank Rachell, Johnny "Man" Young, Carl Martin, and Gerry
Hundt. Howard Armstrong, who is famous for blues violin, got his start with his father's mandolin and played in string
bands similar to the other Tennessee string bands he came into contact with, with band makeup including "mandolins
and fiddles and guitars and banjos. And once in a while they would ease a little ukulele in there and a bass fiddle."
Other blues players from the era's string bands include Willie Black (Whistler And His Jug Band), Dink Brister, Jim
Hill, Charles Johnson, Coley Jones (Dallas String Band), Bobby Leecan (Need More Band), Alfred Martin, Charlie McCoy
(1909–1950), Al Miller, Matthew Prater, and Herb Quinn. The mandolin has been used occasionally in rock music, first
appearing in the psychedelic era of the late 1960s. Levon Helm of The Band occasionally moved from his drum kit to
play mandolin, most notably on "Rag Mama Rag", "Rockin' Chair", and "Evangeline". Ian Anderson of Jethro Tull played mandolin
on "Fat Man", from their second album, Stand Up, and also occasionally on later releases. Rod Stewart's 1971 No. 1
hit "Maggie May" features a significant mandolin riff. David Grisman played mandolin on two Grateful Dead songs on
the American Beauty album, "Friend of the Devil" and "Ripple", which became instant favorites among amateur pickers at
jam sessions and campground gatherings. John Paul Jones and Jimmy Page both played mandolin on Led Zeppelin songs.
The popular alt-rock group Imagine Dragons feature the mandolin on a few of their songs, most prominently "It's
Time". Dash Crofts of the soft rock duo Seals and Crofts extensively used mandolin in their repertoire during the
1970s. Styx released the song "Boat on the River" in 1980, which featured Tommy Shaw on vocals and mandolin. The song
did not chart in the United States but was popular in much of Europe and the Philippines. Some rock musicians today
use mandolins, often single-stringed electric models rather than double-stringed acoustic mandolins. One example
is Tim Brennan of the Irish-American punk rock band Dropkick Murphys. In addition to electric guitar, bass, and drums,
the band uses several instruments associated with traditional Celtic music, including mandolin, tin whistle, and
Great Highland bagpipes. The band explains that these instruments accentuate the growling sound they favor. The 1991
R.E.M. hit "Losing My Religion" was driven by a few simple mandolin licks played by guitarist Peter Buck, who also
played the mandolin on nearly a dozen other songs. The single peaked at No. 4 on the Billboard Hot 100 chart (No. 1
on the rock and alternative charts). Luther Dickinson of North Mississippi Allstars and The Black Crowes has made
frequent use of the mandolin, most notably on the Black Crowes song "Locust Street." Armenian American rock group
System of a Down makes extensive use of the mandolin on their 2005 double album Mezmerize/Hypnotize. Pop punk band
Green Day has used a mandolin on several occasions, especially on their 2000 album, Warning. Boyd Tinsley, violin
player of the Dave Matthews Band, has been using an electric mandolin since 2005. Frontman Colin Meloy and guitarist
Chris Funk of The Decemberists regularly employ the mandolin in the band's music. Nancy Wilson, rhythm guitarist
of Heart, uses a mandolin in Heart's song "Dream of the Archer" from the album Little Queen, as well as in Heart's
cover of Led Zeppelin's song "The Battle of Evermore." "Show Me Heaven" by Maria McKee, the theme song to the film
Days of Thunder, prominently features a mandolin. As in Brazil, the mandolin has played an important role in the
Music of Venezuela. It has enjoyed a privileged position as the main melodic instrument in several different regions
of the country. Specifically, the eastern states of Sucre, Nueva Esparta, Anzoategui and Monagas have made the mandolin
the main instrument in their versions of Joropo as well as Puntos, Jotas, Polos, Fulias, Merengues and Malagueñas.
Also, in the west of the country the sound of the mandolin is intrinsically associated with the regional genres of
the Venezuelan Andes: Bambucos, Pasillos, Pasodobles, and Waltzes. In the western city of Maracaibo, the mandolin
has been played in Decimas, Danzas and Contradanzas Zulianas; in the capital, Caracas, the Merengue Rucaneao, Pasodobles
and Waltzes have also been played with mandolin for almost a century. Today, Venezuelan mandolinists include an important
group of virtuoso players and ensembles such as Alberto Valderrama, Jesus Rengel, Ricardo Sandoval, Saul Vera, and
Cristobal Soto. To fill the gap in original repertoire, mandolin orchestras have traditionally played many arrangements
of music written for regular orchestras or other ensembles. Some players have sought out contemporary composers to
solicit new works. Traditional mandolin orchestras remain especially popular in Japan and Germany, but also exist
throughout the United States, Europe and the rest of the world. They perform works composed for mandolin family instruments,
or re-orchestrations of traditional pieces. The structure of a contemporary traditional mandolin orchestra consists
of: first and second mandolins, mandolas (either octave mandolas, tuned an octave below the mandolin, or tenor mandolas,
tuned like the viola), mandocellos (tuned like the cello), and bass instruments (conventional string bass or, rarely,
mandobasses). Smaller ensembles, such as quartets composed of two mandolins, mandola, and mandocello, may also be
found.
Insects (from Latin insectum, a calque of Greek ἔντομον [éntomon], "cut into sections") are a class of invertebrates within
the arthropod phylum that have a chitinous exoskeleton, a three-part body (head, thorax and abdomen), three pairs
of jointed legs, compound eyes and one pair of antennae. They are the most diverse group of animals on the planet,
including more than a million described species and representing more than half of all known living organisms. The
number of extant species is estimated at between six and ten million, potentially representing over 90% of the differing
animal life forms on Earth. Insects may be found in nearly all environments, although only a small number of species
reside in the oceans, a habitat dominated by another arthropod group, crustaceans. The life cycles of insects vary
but most hatch from eggs. Insect growth is constrained by the inelastic exoskeleton and development involves a series
of molts. The immature stages can differ from the adults in structure, habit and habitat, and can include a passive
pupal stage in those groups that undergo 4-stage metamorphosis (see holometabolism). Insects that undergo 3-stage
metamorphosis lack a pupal stage and adults develop through a series of nymphal stages. The higher level relationship
of the Hexapoda is unclear. Fossilized insects of enormous size have been found from the Paleozoic Era, including
giant dragonflies with wingspans of 55 to 70 cm (22–28 in). The most diverse insect groups appear to have coevolved
with flowering plants. Adult insects typically move about by walking, flying, or sometimes swimming (see below, Locomotion).
As it allows for rapid yet stable movement, many insects adopt a tripedal gait in which they walk with their legs
touching the ground in alternating triangles. Insects are the only invertebrates to have evolved flight. Many insects
spend at least part of their lives under water, with larval adaptations that include gills, and some adult insects
are aquatic and have adaptations for swimming. Some species, such as water striders, are capable of walking on the
surface of water. Insects are mostly solitary, but some, such as certain bees, ants and termites, are social and
live in large, well-organized colonies. Some insects, such as earwigs, show maternal care, guarding their eggs and
young. Insects can communicate with each other in a variety of ways. Male moths can sense the pheromones of female
moths over great distances. Other species communicate with sounds: crickets stridulate, or rub their wings together,
to attract a mate and repel other males. Lampyridae in the beetle order Coleoptera communicate with light. Humans
regard certain insects as pests, and attempt to control them using insecticides and a host of other techniques. Some
insects damage crops by feeding on sap, leaves or fruits. A few parasitic species are pathogenic. Some insects perform
complex ecological roles; blow-flies, for example, help consume carrion but also spread diseases. Insect pollinators
are essential to the life-cycle of many flowering plant species on which most organisms, including humans, are at
least partly dependent; without them, the terrestrial portion of the biosphere (including humans) would be devastated.
Many other insects are considered ecologically beneficial as predators and a few provide direct economic benefit.
Silkworms and bees have been used extensively by humans for the production of silk and honey, respectively. In some
cultures, people eat the larvae or adults of certain insects. The word "insect" comes from the Latin word insectum,
meaning "with a notched or divided body", or literally "cut into", from the neuter singular perfect passive participle
of insectare, "to cut into, to cut up", from in- "into" and secare "to cut"; because insects appear "cut into" three
sections. Pliny the Elder introduced the Latin designation as a loan-translation of the Greek word ἔντομος (éntomos)
or "insect" (as in entomology), which was Aristotle's term for this class of life, also in reference to their "notched"
bodies. "Insect" first appears documented in English in 1601 in Holland's translation of Pliny. Translations of Aristotle's
term also form the usual word for "insect" in Welsh (trychfil, from trychu "to cut" and mil, "animal"), Serbo-Croatian
(zareznik, from rezati, "to cut"), Russian (насекомое nasekomoje, from seč'/-sekat', "to cut"), etc. The higher-level
phylogeny of the arthropods continues to be a matter of debate and research. In 2008, researchers at Tufts University
uncovered what they believe is the world's oldest known full-body impression of a primitive flying insect, a 300
million-year-old specimen from the Carboniferous period. The oldest definitive insect fossil is the Devonian Rhyniognatha
hirsti, from the 396-million-year-old Rhynie chert. It may have superficially resembled a modern-day silverfish.
This species already possessed dicondylic mandibles (two articulations in the mandible), a feature associated with
winged insects, suggesting that wings may already have evolved at this time. Thus, the first insects probably appeared
earlier, in the Silurian period. Late Carboniferous and Early Permian insect orders include both extant groups, their
stem groups, and a number of Paleozoic groups, now extinct. During this era, some giant dragonfly-like forms reached
wingspans of 55 to 70 cm (22 to 28 in), making them far larger than any living insect. This gigantism may have been
due to higher atmospheric oxygen levels that allowed increased respiratory efficiency relative to today. The lack
of flying vertebrates could have been another factor. Most extinct orders of insects developed during the Permian
period that began around 270 million years ago. Many of the early groups became extinct during the Permian-Triassic
extinction event, the largest mass extinction in the history of the Earth, around 252 million years ago. Insects
were among the earliest terrestrial herbivores and acted as major selection agents on plants. Plants evolved chemical
defenses against this herbivory and the insects, in turn, evolved mechanisms to deal with plant toxins. Many insects
make use of these toxins to protect themselves from their predators. Such insects often advertise their toxicity
using warning colors. This successful evolutionary pattern has also been used by mimics. Over time, this has led
to complex groups of coevolved species. Conversely, some interactions between plants and insects, like pollination,
are beneficial to both organisms. Coevolution has led to the development of very specific mutualisms in such systems.
Insects can be divided into two groups historically treated as subclasses: wingless insects, known as Apterygota,
and winged insects, known as Pterygota. The Apterygota consist of the primitively wingless order of the silverfish
(Thysanura). Archaeognatha make up the Monocondylia based on the shape of their mandibles, while Thysanura and Pterygota
are grouped together as Dicondylia. The Thysanura themselves possibly are not monophyletic, with the family Lepidotrichidae
being a sister group to the Dicondylia (Pterygota and the remaining Thysanura). Traditional morphology-based or appearance-based
systematics have usually given the Hexapoda the rank of superclass and identified four groups within it: insects
(Ectognatha), springtails (Collembola), Protura, and Diplura, the latter three being grouped together as the Entognatha
on the basis of internalized mouth parts. Supraordinal relationships have undergone numerous changes with the advent
of methods based on evolutionary history and genetic data. A recent theory is that the Hexapoda are polyphyletic
(where the last common ancestor was not a member of the group), with the entognath classes having separate evolutionary
histories from the Insecta. Many of the traditional appearance-based taxa have been shown to be paraphyletic, so
rather than using ranks like subclass, superorder, and infraorder, it has proved better to use monophyletic groupings
(in which the last common ancestor is a member of the group). The following represents the best-supported monophyletic
groupings for the Insecta. Paleoptera and Neoptera are the winged orders of insects differentiated by the presence
of hardened body parts called sclerites, and in the Neoptera, muscles that allow their wings to fold flatly over
the abdomen. Neoptera can further be divided into incomplete metamorphosis-based (Polyneoptera and Paraneoptera)
and complete metamorphosis-based groups. It has proved difficult to clarify the relationships between the orders
in Polyneoptera because of constant new findings calling for revision of the taxa. For example, the Paraneoptera
have turned out to be more closely related to the Endopterygota than to the rest of the Exopterygota. The recent
molecular finding that the traditional louse orders Mallophaga and Anoplura are derived from within Psocoptera has
led to the new taxon Psocodea. Phasmatodea and Embiidina have been suggested to form the Eukinolabia. Mantodea, Blattodea,
and Isoptera are thought to form a monophyletic group termed Dictyoptera. The Exopterygota likely are paraphyletic
in regard to the Endopterygota. Matters that have incurred controversy include Strepsiptera and Diptera grouped together
as Halteria based on a reduction of one of the wing pairs – a position not well-supported in the entomological community.
The Neuropterida are often lumped or split on the whims of the taxonomist. Fleas are now thought to be closely related
to boreid mecopterans. Many questions remain in the basal relationships amongst endopterygote orders, particularly
the Hymenoptera. Though the true dimensions of species diversity remain uncertain, estimates range from 2.6 to 7.8 million
species with a mean of 5.5 million. This probably represents less than 20% of all species on Earth[citation needed],
and with only about 20,000 new species of all organisms being described each year, most species likely will remain
undescribed for many years unless species descriptions increase in rate. About 850,000–1,000,000 of all described
species are insects. Of the 24 orders of insects, four dominate in terms of numbers of described species, with at
least 3 million species included in Coleoptera, Diptera, Hymenoptera and Lepidoptera. A recent study estimated the
number of beetles at 0.9–2.1 million with a mean of 1.5 million. Insects have segmented bodies supported by exoskeletons,
the hard outer covering made mostly of chitin. The segments of the body are organized into three distinctive but
interconnected units, or tagmata: a head, a thorax and an abdomen. The head supports a pair of sensory antennae,
a pair of compound eyes, and, if present, one to three simple eyes (or ocelli) and three sets of variously modified
appendages that form the mouthparts. The thorax has six segmented legs—one pair each for the prothorax, mesothorax
and the metathorax segments making up the thorax—and none, two, or four wings. The abdomen consists of eleven segments,
though in a few species of insects, these segments may be fused together or reduced in size. The abdomen also contains
most of the digestive, respiratory, excretory and reproductive internal structures. Considerable variation
and many adaptations in the body parts of insects occur, especially wings, legs, antennae and mouthparts. The head
is enclosed in a hard, heavily sclerotized, unsegmented, exoskeletal head capsule, or epicranium, which contains
most of the sensing organs, including the antennae, ocellus or eyes, and the mouthparts. Of all the insect orders,
Orthoptera displays the most features found in other insects, including the sutures and sclerites. Here, the vertex,
or the apex (dorsal region), is situated between the compound eyes for insects with a hypognathous and opisthognathous
head. In prognathous insects, the vertex is not found between the compound eyes, but rather, where the ocelli are
normally. This is because the primary axis of the head is rotated 90° to become parallel to the primary axis of the
body. In some species, this region is modified and assumes a different name. The thorax is a tagma composed of
three sections, the prothorax, mesothorax and the metathorax. The anterior segment, closest to the head, is the prothorax,
with the major features being the first pair of legs and the pronotum. The middle segment is the mesothorax, with
the major features being the second pair of legs and the anterior wings. The third and most posterior segment, abutting
the abdomen, is the metathorax, which features the third pair of legs and the posterior wings. Each segment is delineated
by an intersegmental suture. Each segment has four basic regions. The dorsal surface is called the tergum (or notum)
to distinguish it from the abdominal terga. The two lateral regions are called the pleura (singular: pleuron) and
the ventral aspect is called the sternum. In turn, the notum of the prothorax is called the pronotum, the notum for
the mesothorax is called the mesonotum and the notum for the metathorax is called the metanotum. Continuing with
this logic, the mesopleura and metapleura, as well as the mesosternum and metasternum, are used. The abdomen is the
largest tagma of the insect, which typically consists of 11–12 segments and is less strongly sclerotized than the
head or thorax. Each segment of the abdomen is represented by a sclerotized tergum and sternum. Terga are separated
from each other and from the adjacent sterna or pleura by membranes. Spiracles are located in the pleural area. Variation
of this ground plan includes the fusion of terga or terga and sterna to form continuous dorsal or ventral shields
or a conical tube. Some insects bear a sclerite in the pleural area called a laterotergite. Ventral sclerites are
sometimes called laterosternites. During the embryonic stage of many insects and the postembryonic stage of primitive
insects, 11 abdominal segments are present. In modern insects there is a tendency toward reduction in the number
of the abdominal segments, but the primitive number of 11 is maintained during embryogenesis. Variation in abdominal
segment number is considerable. If the Apterygota are considered to be indicative of the ground plan for pterygotes,
confusion reigns: adult Protura have 12 segments, Collembola have 6. The orthopteran family Acrididae has 11 segments,
and a fossil specimen of Zoraptera has a 10-segmented abdomen. The insect outer skeleton, the cuticle, is made up
of two layers: the epicuticle, which is a thin and waxy water resistant outer layer and contains no chitin, and a
lower layer called the procuticle. The procuticle is chitinous and much thicker than the epicuticle and has two layers:
an outer layer known as the exocuticle and an inner layer known as the endocuticle. The tough and flexible endocuticle
is built from numerous layers of fibrous chitin and proteins, criss-crossing each other in a sandwich pattern, while
the exocuticle is rigid and hardened. The exocuticle is greatly reduced in many soft-bodied insects (e.g.,
caterpillars), especially during their larval stages. Insects are the only invertebrates to have developed active
flight capability, and this has played an important role in their success. Their muscles are able to contract
multiple times for each single nerve impulse, allowing the wings to beat faster than would ordinarily be possible.
Having their muscles attached to their exoskeletons is more efficient and allows more muscle connections; crustaceans
also use the same method, though all spiders use hydraulic pressure to extend their legs, a system inherited from
their pre-arthropod ancestors. Unlike insects, though, most aquatic crustaceans are biomineralized with calcium carbonate
extracted from the water. The thoracic segments have one ganglion on each side, which are connected into a pair,
one pair per segment. This arrangement is also seen in the abdomen but only in the first eight segments. Many species
of insects have reduced numbers of ganglia due to fusion or reduction. Some cockroaches have just six ganglia in
the abdomen, whereas the wasp Vespa crabro has only two in the thorax and three in the abdomen. Some insects, like
the house fly Musca domestica, have all the body ganglia fused into a single large thoracic ganglion. At least a
few insects have nociceptors, cells that detect and transmit sensations of pain. This was discovered in 2003 by studying
the variation in reactions of larvae of the common fruitfly Drosophila to the touch of a heated probe and an unheated
one. The larvae reacted to the touch of the heated probe with a stereotypical rolling behavior that was not exhibited
when the larvae were touched by the unheated probe. Although nociception has been demonstrated in insects, there
is no consensus that insects feel pain consciously. The salivary glands (element 30 in numbered diagram) in an insect's
mouth produce saliva. The salivary ducts lead from the glands to the reservoirs and then forward through the head
to an opening called the salivarium, located behind the hypopharynx. By moving its mouthparts (element 32 in numbered
diagram) the insect can mix its food with saliva. The mixture of saliva and food then travels through the salivary
tubes into the mouth, where it begins to break down. Some insects, like flies, have extra-oral digestion. Insects
using extra-oral digestion expel digestive enzymes onto their food to break it down. This strategy allows insects
to extract a significant proportion of the available nutrients from the food source. The gut is where almost all
of insects' digestion takes place. It can be divided into the foregut, midgut and hindgut. Once food leaves the crop,
it passes to the midgut (element 13 in numbered diagram), also known as the mesenteron, where the majority of digestion
takes place. Microscopic projections from the midgut wall, called microvilli, increase the surface area of the wall
and allow more nutrients to be absorbed; they tend to be close to the origin of the midgut. In some insects, the
role of the microvilli and where they are located may vary. For example, specialized microvilli producing digestive enzymes may be more likely near the end of the midgut, with absorption near the origin or beginning of the midgut.
In the hindgut (element 16 in numbered diagram), or proctodaeum, undigested food particles are joined by uric acid
to form fecal pellets. The rectum absorbs 90% of the water in these fecal pellets, and the dry pellet is then eliminated
through the anus (element 17), completing the process of digestion. The uric acid is formed using hemolymph waste
products diffused from the Malpighian tubules (element 20). It is then emptied directly into the alimentary canal,
at the junction between the midgut and hindgut. The number of Malpighian tubules possessed by a given insect varies
between species, ranging from only two tubules in some insects to over 100 tubules in others. The reproductive system of female insects consists of a pair of ovaries, accessory glands, one or more spermathecae, and ducts connecting
these parts. The ovaries are made up of a number of egg tubes, called ovarioles, which vary in size and number by
species. The number of eggs that the insect is able to make varies with the number of ovarioles, and the rate at which eggs can develop is also influenced by ovariole design. Female insects are able to make eggs, receive and store sperm,
manipulate sperm from different males, and lay eggs. Accessory glands or glandular parts of the oviducts produce
a variety of substances for sperm maintenance, transport and fertilization, as well as for protection of eggs. They
can produce glue and protective substances for coating eggs or tough coverings for a batch of eggs called oothecae.
Spermathecae are tubes or sacs in which sperm can be stored between the time of mating and the time an egg is fertilized.
For males, the reproductive system is the testis, suspended in the body cavity by tracheae and the fat body. Most
male insects have a pair of testes, inside of which are sperm tubes or follicles that are enclosed within a membranous
sac. The follicles connect to the vas deferens by the vas efferens, and the two tubular vasa deferentia connect to
a median ejaculatory duct that leads to the outside. A portion of the vas deferens is often enlarged to form the
seminal vesicle, which stores the sperm before they are discharged into the female. The seminal vesicles have glandular
linings that secrete nutrients for nourishment and maintenance of the sperm. The ejaculatory duct is derived from
an invagination of the epidermal cells during development and, as a result, has a cuticular lining. The terminal
portion of the ejaculatory duct may be sclerotized to form the intromittent organ, the aedeagus. The remainder of
the male reproductive system is derived from embryonic mesoderm, except for the germ cells, or spermatogonia, which
descend from the primordial pole cells very early during embryogenesis. Insect respiration is accomplished without
lungs. Instead, the insect respiratory system uses a system of internal tubes and sacs through which gases either
diffuse or are actively pumped, delivering oxygen directly to tissues that need it via their trachea (element 8 in
numbered diagram). Since oxygen is delivered directly, the circulatory system is not used to carry oxygen, and is
therefore greatly reduced. The insect circulatory system has no veins or arteries, and instead consists of little
more than a single, perforated dorsal tube which pulses peristaltically. Toward the thorax, the dorsal tube (element
14) divides into chambers and acts like the insect's heart. The opposite end of the dorsal tube is like the aorta
of the insect, circulating the hemolymph, arthropods' fluid analog of blood, inside the body cavity. Air is
taken in through openings on the sides of the abdomen called spiracles. There are many different patterns of gas
exchange demonstrated by different groups of insects. Gas exchange patterns in insects can range from continuous
and diffusive ventilation, to discontinuous gas exchange. During continuous gas exchange, oxygen is taken in
and carbon dioxide is released in a continuous cycle. In discontinuous gas exchange, however, the insect takes in
oxygen while it is active and small amounts of carbon dioxide are released when the insect is at rest. Diffusive
ventilation is simply a form of continuous gas exchange that occurs by diffusion rather than physically taking in
the oxygen. Some species of insect that are submerged also have adaptations to aid in respiration. As larvae, many
insects have gills that can extract oxygen dissolved in water, while others need to rise to the water surface to
replenish air supplies which may be held or trapped in special structures. The majority of insects hatch from eggs.
The fertilization and development takes place inside the egg, enclosed by a shell (chorion) that consists of maternal
tissue. In contrast to eggs of other arthropods, most insect eggs are drought resistant. This is because inside the
chorion two additional membranes develop from embryonic tissue, the amnion and the serosa. This serosa secretes a
cuticle rich in chitin that protects the embryo against desiccation. In Schizophora, however, the serosa does not develop,
but these flies lay their eggs in damp places, such as rotting matter. Some species of insects, like the cockroach
Blaptica dubia, as well as juvenile aphids and tsetse flies, are ovoviviparous. The eggs of ovoviviparous animals
develop entirely inside the female, and then hatch immediately upon being laid. Some other species, such as those
in the genus of cockroaches known as Diploptera, are viviparous, and thus gestate inside the mother and are born
alive. Some insects, like parasitic wasps, show polyembryony, where a single fertilized egg divides
into many, and in some cases thousands of, separate embryos. Insects may be univoltine, bivoltine or multivoltine,
i.e. they may have one, two or many broods (generations) in a year. Other developmental and reproductive variations
include haplodiploidy, polymorphism, paedomorphosis or peramorphosis, sexual dimorphism, parthenogenesis and more
rarely hermaphroditism. In haplodiploidy, which is a type of sex-determination system, the offspring's sex is
determined by the number of sets of chromosomes an individual receives. This system is typical in bees and wasps.
Polymorphism is where a species may have different morphs or forms, as in the oblong winged katydid, which has four different color varieties: green, pink, yellow, or tan. Some insects may retain phenotypes that are normally only seen
in juveniles; this is called paedomorphosis. In peramorphosis, an opposite sort of phenomenon, insects take on previously
unseen traits after they have matured into adults. Many insects display sexual dimorphism, in which males and females
have notably different appearances; the moth Orgyia recens is an exemplar of sexual dimorphism in insects.
Some insects use parthenogenesis, a process in which the female can reproduce and give birth without having the eggs
fertilized by a male. Many aphids undergo a form of parthenogenesis, called cyclical parthenogenesis, in which they
alternate between one or many generations of asexual and sexual reproduction. In summer, aphids are generally female
and parthenogenetic; in the autumn, males may be produced for sexual reproduction. Other insects that reproduce by parthenogenesis include bees, wasps and ants, in which parthenogenesis produces the males. However, overall, most individuals are female, which are produced
by fertilization. The males are haploid and the females are diploid. More rarely, some insects display hermaphroditism,
in which a given individual has both male and female reproductive organs. Hemimetabolous insects, those with incomplete
metamorphosis, change gradually by undergoing a series of molts. An insect molts when it outgrows its exoskeleton,
which does not stretch and would otherwise restrict the insect's growth. The molting process begins as the insect's
epidermis secretes a new epicuticle inside the old one. After this new epicuticle is secreted, the epidermis releases
a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. When this stage is complete,
the insect makes its body swell by taking in a large quantity of water or air, which makes the old cuticle split
along predefined weaknesses where the old exocuticle was thinnest. Holometabolism, or complete metamorphosis,
is where the insect changes in four stages, an egg or embryo, a larva, a pupa and the adult or imago. In these species,
an egg hatches to produce a larva, which is generally worm-like in form. This worm-like form can be one of several
varieties: eruciform (caterpillar-like), scarabaeiform (grub-like), campodeiform (elongated, flattened and active),
elateriform (wireworm-like) or vermiform (maggot-like). The larva grows and eventually becomes a pupa, a stage marked
by reduced movement and often sealed within a cocoon. There are three types of pupae: obtect, exarate or coarctate.
Obtect pupae are compact, with the legs and other appendages enclosed. Exarate pupae have their legs and other appendages
free and extended. Coarctate pupae develop inside the larval skin. Insects undergo considerable change in form
during the pupal stage, and emerge as adults. Butterflies are a well-known example of insects that undergo complete
metamorphosis, although most insects use this life cycle. Some insects have further evolved this system into hypermetamorphosis.
Many insects possess very sensitive and/or specialized organs of perception. Some insects, such as bees, can perceive
ultraviolet wavelengths, or detect polarized light, while the antennae of male moths can detect the pheromones of
female moths over distances of many kilometers. The yellow paper wasp (Polistes versicolor) is known for its wagging
movements as a form of communication within the colony; it can waggle with a frequency of 10.6±2.1 Hz (n=190). These
wagging movements can signal the arrival of new material into the nest, and aggression between workers can be used
to stimulate others to increase foraging expeditions. There is a pronounced tendency for there to be a trade-off
between visual acuity and chemical or tactile acuity, such that most insects with well-developed eyes have reduced
or simple antennae, and vice versa. There are a variety of different mechanisms by which insects perceive sound;
while the patterns are not universal, insects can generally hear sound if they can produce it. Different insect species
can have varying hearing, though most insects can hear only a narrow range of frequencies related to the frequency
of the sounds they can produce. Mosquitoes have been found to hear up to 2 kHz, and some grasshoppers can hear up
to 50 kHz. Certain predatory and parasitic insects can detect the characteristic sounds made by their prey or hosts,
respectively. For instance, some nocturnal moths can perceive the ultrasonic emissions of bats, which helps them
avoid predation. Insects that feed on blood have special sensory structures that can detect infrared emissions,
and use them to home in on their hosts. Some insects display a rudimentary sense of numbers, such as the solitary
wasps that prey upon a single species. The mother wasp lays her eggs in individual cells and provides each egg with
a number of live caterpillars on which the young feed when hatched. Some species of wasp always provide five, others
twelve, and others as many as twenty-four caterpillars per cell. The number of caterpillars differs among species,
but always the same for each sex of larva. The male solitary wasp in the genus Eumenes is smaller than the female,
so the mother of one species supplies him with only five caterpillars; the larger female receives ten caterpillars
in her cell. A few insects, such as members of the families Poduridae and Onychiuridae (Collembola), Mycetophilidae
(Diptera) and the beetle families Lampyridae, Phengodidae, Elateridae and Staphylinidae are bioluminescent. The most
familiar group are the fireflies, beetles of the family Lampyridae. Some species are able to control this light generation
to produce flashes. The function varies with some species using them to attract mates, while others use them to lure
prey. Cave-dwelling larvae of Arachnocampa (Mycetophilidae, fungus gnats) glow to lure small flying insects into
sticky strands of silk. Some fireflies of the genus Photuris mimic the flashing of female Photinus species to attract
males of that species, which are then captured and devoured. The colors of emitted light vary from dull blue (Orfelia
fultoni, Mycetophilidae) to the familiar greens and the rare reds (Phrixothrix tiemanni, Phengodidae). Most insects,
except some species of cave crickets, are able to perceive light and dark. Many species have acute vision capable
of detecting minute movements. The eyes may include simple eyes or ocelli as well as compound eyes of varying sizes.
Many species are able to detect light in the infrared, ultraviolet and the visible light wavelengths. Color vision
has been demonstrated in many species and phylogenetic analysis suggests that UV-green-blue trichromacy existed from
at least the Devonian period between 416 and 359 million years ago. Insects were the earliest organisms to produce
and sense sounds. Insects make sounds mostly by mechanical action of appendages. In grasshoppers and crickets, this
is achieved by stridulation. Cicadas make the loudest sounds among the insects by producing and amplifying sounds
with special modifications to their body and musculature. The African cicada Brevisana brevis has been measured at
106.7 decibels at a distance of 50 cm (20 in). Some insects, such as Helicoverpa zea moths, hawk moths and hedylid
butterflies, can hear ultrasound and take evasive action when they sense that they have been detected by bats. Some
moths produce ultrasonic clicks that were once thought to have a role in jamming bat echolocation. The ultrasonic
clicks were subsequently found to be produced mostly by unpalatable moths to warn bats, just as warning colorations
are used against predators that hunt by sight. Some otherwise palatable moths have evolved to mimic these calls.
More recently, the claim that some moths can jam bat sonar has been revisited. Ultrasonic recording and high-speed
infrared videography of bat-moth interactions suggest the palatable tiger moth really does defend against attacking
big brown bats using ultrasonic clicks that jam bat sonar. Very low sounds are also produced in various species of
Coleoptera, Hymenoptera, Lepidoptera, Mantodea and Neuroptera. These low sounds are simply the sounds made by the
insect's movement. Through microscopic stridulatory structures located on the insect's muscles and joints, the normal
sounds of the insect moving are amplified and can be used to warn or communicate with other insects. Most sound-making
insects also have tympanal organs that can perceive airborne sounds. Some species in Hemiptera, such as the corixids
(water boatmen), are known to communicate via underwater sounds. Most insects are also able to sense vibrations transmitted
through surfaces. Some species use vibrations for communicating within members of the same species, such as to attract
mates as in the songs of the shield bug Nezara viridula. Vibrations can also be used to communicate between entirely
different species; lycaenid (gossamer-winged butterfly) caterpillars which are myrmecophilous (living in a mutualistic
association with ants) communicate with ants in this way. The Madagascar hissing cockroach has the ability to press
air through its spiracles to make a hissing noise as a sign of aggression; the Death's-head Hawkmoth makes a squeaking
noise by forcing air out of its pharynx when agitated, which may also reduce aggressive worker honey bee behavior
when the two are in close proximity. Chemical communications in animals rely on a variety of aspects including taste
and smell. Chemoreception is the physiological response of a sense organ (i.e. taste or smell) to a chemical stimulus
where the chemicals act as signals to regulate the state or activity of a cell. A semiochemical is a message-carrying
chemical that is meant to attract, repel, and convey information. Types of semiochemicals include pheromones and
kairomones. One example is the butterfly Phengaris arion which uses chemical signals as a form of mimicry to aid
in predation. In addition to the use of sound for communication, a wide range of insects have evolved chemical means
for communication. These chemicals, termed semiochemicals, are often derived from plant metabolites and include those
meant to attract, repel and provide other kinds of information. Pheromones, a type of semiochemical, are used for
attracting mates of the opposite sex, for aggregating conspecific individuals of both sexes, for deterring other
individuals from approaching, for marking a trail, and for triggering aggression in nearby individuals. Allomones benefit
their producer by the effect they have upon the receiver. Kairomones benefit their receiver instead of their producer.
Synomones benefit the producer and the receiver. While some chemicals are targeted at individuals of the same species,
others are used for communication across species. The use of scents is especially well known to have developed in
social insects. Social insects, such as termites, ants and many bees and wasps, are the most familiar species
of eusocial animal. They live together in large well-organized colonies that may be so tightly integrated and genetically
similar that the colonies of some species are sometimes considered superorganisms. It is sometimes argued that the
various species of honey bee are the only invertebrates (and indeed one of the few non-human groups) to have evolved
a system of abstract symbolic communication where a behavior is used to represent and convey specific information
about something in the environment. In this communication system, called dance language, the angle at which a bee
dances represents a direction relative to the sun, and the length of the dance represents the distance to be flown.
Though perhaps not as advanced as honey bees, bumblebees also potentially have some social communication behaviors.
Bombus terrestris, for example, exhibits a faster learning curve for visiting unfamiliar yet rewarding flowers when
they can see a conspecific foraging on the same species. Only insects which live in nests or colonies demonstrate
any true capacity for fine-scale spatial orientation or homing. This can allow an insect to return unerringly to
a single hole a few millimeters in diameter among thousands of apparently identical holes clustered together, after
a trip of up to several kilometers' distance. In a phenomenon known as philopatry, insects that hibernate have shown
the ability to recall a specific location up to a year after last viewing the area of interest. A few insects seasonally
migrate large distances between different geographic regions (e.g., the overwintering areas of the Monarch butterfly).
The eusocial insects build nests, guard eggs, and provide food for offspring full-time (see Eusociality). Most insects,
however, lead short lives as adults, and rarely interact with one another except to mate or compete for mates. A
small number exhibit some form of parental care, where they will at least guard their eggs, and sometimes continue
guarding their offspring until adulthood, and possibly even feeding them. Another simple form of parental care is
to construct a nest (a burrow or an actual construction, either of which may be simple or complex), store provisions
in it, and lay an egg upon those provisions. The adult does not contact the growing offspring, but it nonetheless
does provide food. This sort of care is typical for most species of bees and various types of wasps. Insects are
the only group of invertebrates to have developed flight. The evolution of insect wings has been a subject of debate.
Some entomologists suggest that the wings are from paranotal lobes, or extensions from the insect's exoskeleton called
the nota, called the paranotal theory. Other theories are based on a pleural origin. These theories include suggestions
that wings originated from modified gills, spiracular flaps or as from an appendage of the epicoxa. The epicoxal
theory suggests the insect wings are modified epicoxal exites, a modified appendage at the base of the legs or coxa.
In the Carboniferous age, some of the Meganeura dragonflies had wingspans as wide as 50 cm (20 in). The appearance
of gigantic insects has been found to be consistent with high atmospheric oxygen. The respiratory system of insects
constrains their size; however, the high oxygen in the atmosphere allowed larger sizes. The largest flying insects
today are much smaller and include several moth species such as the Atlas moth and the White Witch (Thysania agrippina).
Many adult insects use six legs for walking and have adopted a tripedal gait. The tripedal gait allows for rapid
walking while always having a stable stance and has been studied extensively in cockroaches. The legs are used in
alternate triangles touching the ground. For the first step, the middle right leg and the front and rear left legs
are in contact with the ground and move the insect forward, while the front and rear right leg and the middle left
leg are lifted and moved forward to a new position. When they touch the ground to form a new stable triangle the
other legs can be lifted and brought forward in turn and so on. The purest form of the tripedal gait is seen in insects
moving at high speeds. However, this type of locomotion is not rigid and insects can adapt a variety of gaits. For
example, when moving slowly, turning, or avoiding obstacles, four or more feet may be touching the ground. Insects
can also adapt their gait to cope with the loss of one or more limbs. Cockroaches are among the fastest insect runners
and, at full speed, adopt a bipedal run to reach a high velocity in proportion to their body size. As cockroaches
move very quickly, they need to be video recorded at several hundred frames per second to reveal their gait. More
sedate locomotion is seen in the stick insects or walking sticks (Phasmatodea). A few insects have evolved to walk
on the surface of the water, especially members of the Gerridae family, commonly known as water striders. A few species
of ocean-skaters in the genus Halobates even live on the surface of open oceans, a habitat that has few insect species.
Many of these species have adaptations to help in underwater locomotion. Water beetles and water bugs have legs
adapted into paddle-like structures. Dragonfly naiads use jet propulsion, forcibly expelling water out of their rectal
chamber. Some species like the water striders are capable of walking on the surface of water. They can do this because
their claws are not at the tips of the legs as in most insects, but recessed in a special groove further up the leg;
this prevents the claws from piercing the water's surface film. Other insects, such as the rove beetle Stenus, are known to emit pygidial gland secretions that reduce surface tension, making it possible for them to move on the surface
of water by Marangoni propulsion (also known by the German term Entspannungsschwimmen). Insect ecology is the scientific
study of how insects, individually or as a community, interact with the surrounding environment or ecosystem. Insects
play some of the most important roles in their ecosystems, including soil turning and aeration,
dung burial, pest control, pollination and wildlife nutrition. An example is the beetles, which are scavengers that
feed on dead animals and fallen trees and thereby recycle biological materials into forms found useful by other organisms.
These insects, and others, are responsible for much of the process by which topsoil is created. Camouflage
is an important defense strategy, which involves the use of coloration or shape to blend into the surrounding environment.
This sort of protective coloration is common and widespread among beetle families, especially those that feed on
wood or vegetation, such as many of the leaf beetles (family Chrysomelidae) or weevils. In some of these species,
sculpturing or various colored scales or hairs cause the beetle to resemble bird dung or other inedible objects.
Many of those that live in sandy environments blend in with the coloration of the substrate. Most phasmids are known
for effectively replicating the forms of sticks and leaves, and the bodies of some species (such as O. macklotti
and Palophus centaurus) are covered in mossy or lichenous outgrowths that supplement their disguise. Some species
have the ability to change color as their surroundings shift (B. scabrinota, T. californica). In a further behavioral
adaptation to supplement crypsis, a number of species have been noted to perform a rocking motion where the body
is swayed from side to side that is thought to reflect the movement of leaves or twigs swaying in the breeze. Another
method by which stick insects avoid predation and resemble twigs is by feigning death (catalepsy), where the insect
enters a motionless state that can be maintained for a long period. The nocturnal feeding habits of adults also aids
Phasmatodea in remaining concealed from predators. Another defense that often uses color or shape to deceive potential
enemies is mimicry. A number of longhorn beetles (family Cerambycidae) bear a striking resemblance to wasps, which
helps them avoid predation even though the beetles are in fact harmless. Batesian and Müllerian mimicry complexes
are commonly found in Lepidoptera. Genetic polymorphism and natural selection give rise to otherwise edible species
(the mimic) gaining a survival advantage by resembling inedible species (the model). Such a mimicry complex is referred
to as Batesian and is most commonly illustrated by the limenitidine Viceroy butterfly's mimicry of the inedible danaine Monarch. Later research has discovered that the Viceroy is, in fact, more toxic than the Monarch, and this resemblance
should be considered as a case of Müllerian mimicry. In Müllerian mimicry, inedible species, usually within a taxonomic
order, find it advantageous to resemble each other so as to reduce the sampling rate by predators who need to learn
about the insects' inedibility. Taxa from the toxic genus Heliconius form one of the most well known Müllerian complexes.
Chemical defense is another important defense found amongst species of Coleoptera and Lepidoptera, usually being
advertised by bright colors, such as the Monarch butterfly. They obtain their toxicity by sequestering the chemicals
from the plants they eat into their own tissues. Some Lepidoptera manufacture their own toxins. Predators that eat
poisonous butterflies and moths may become sick and vomit violently, learning not to eat those types of species;
this is actually the basis of Müllerian mimicry. A predator who has previously eaten a poisonous lepidopteran may
avoid other species with similar markings in the future, thus saving many other species as well. Some ground beetles
of the Carabidae family can spray chemicals from their abdomen with great accuracy, to repel predators. Pollination
is the process by which pollen is transferred in the reproduction of plants, thereby enabling fertilisation and sexual
reproduction. Most flowering plants require an animal to do the transportation. While other animals are included
as pollinators, the majority of pollination is done by insects. Because insects usually receive a benefit for pollination in the form of energy-rich nectar, it is a grand example of mutualism. The various flower traits (and combinations
thereof) that differentially attract one type of pollinator or another are known as pollination syndromes. These
arose through complex plant-animal adaptations. Pollinators find flowers through bright colorations, including ultraviolet,
and attractant pheromones. The study of pollination by insects is known as anthecology. Many insects are considered
pests by humans. Insects commonly regarded as pests include those that are parasitic (e.g. lice, bed bugs), transmit
diseases (mosquitoes, flies), damage structures (termites), or destroy agricultural goods (locusts, weevils). Many
entomologists are involved in various forms of pest control, as in research for companies to produce insecticides,
but increasingly rely on methods of biological pest control, or biocontrol. Biocontrol uses one organism to reduce
the population density of another organism — the pest — and is considered a key element of integrated pest management.
Although pest insects attract the most attention, many insects are beneficial to the environment and to humans. Some
insects, like wasps, bees, butterflies and ants, pollinate flowering plants. Pollination is a mutualistic relationship
between plants and insects. As insects gather nectar from different plants of the same species, they also spread
pollen from plants on which they have previously fed. This greatly increases plants' ability to cross-pollinate,
which maintains and possibly even improves their evolutionary fitness. This ultimately affects humans, since ensuring
healthy crops is critical to agriculture. Besides pollination, ants help with seed dispersal, which spreads plants,
increases plant diversity, and benefits the environment overall. A serious
environmental problem is the decline of populations of pollinator insects, and a number of species of insects are
now cultured primarily for pollination management in order to have sufficient pollinators in the field, orchard or
greenhouse at bloom time. Another solution, as shown in Delaware, has been to raise native plants to help
support native pollinators like L. vierecki. Insects also produce useful substances such as honey, wax, lacquer and
silk. Honey bees have been cultured by humans for thousands of years for honey, although contracting for crop pollination
is becoming more significant for beekeepers. The silkworm has greatly affected human history, as silk-driven trade
established relationships between China and the rest of the world. Insectivorous insects, or insects which feed on
other insects, are beneficial to humans because they eat insects that could cause damage to agriculture and human
structures. For example, aphids feed on crops and cause problems for farmers, but ladybugs feed on aphids, and can
be used to significantly reduce pest aphid populations. While birds are perhaps more visible predators
of insects, insects themselves account for the vast majority of insect consumption. Ants also help control animal
populations by consuming small vertebrates. Without predators to keep them in check, insects can undergo almost unstoppable
population explosions. Insects play important roles in biological research. For example, because of its
small size, short generation time and high fecundity, the common fruit fly Drosophila melanogaster is a model organism
for studies in the genetics of higher eukaryotes. D. melanogaster has been an essential part of studies into principles
like genetic linkage, interactions between genes, chromosomal genetics, development, behavior and evolution. Because
genetic systems are well conserved among eukaryotes, understanding basic cellular processes like DNA replication
or transcription in fruit flies can help to understand those processes in other eukaryotes, including humans. The
genome of D. melanogaster was sequenced in 2000, reflecting the organism's important role in biological research.
It was found that 70% of the fly genome is similar to the human genome, supporting the theory of evolution. In some
cultures, insects, especially deep-fried cicadas, are considered to be delicacies, while in other places they form
part of the normal diet. Insects have a high protein content for their mass, and some authors suggest their potential
as a major source of protein in human nutrition. In most first-world countries, however, entomophagy (the eating
of insects), is taboo. Since it is impossible to entirely eliminate pest insects from the human food chain, insects
are inadvertently present in many foods, especially grains. Food safety laws in many countries do not prohibit insect
parts in food, but rather limit their quantity. According to cultural materialist anthropologist Marvin Harris, the
eating of insects is taboo in cultures that have other protein sources such as fish or livestock. Scarab beetles
held religious and cultural symbolism in ancient Egypt, Greece and some shamanistic Old World cultures. The ancient Chinese
regarded cicadas as symbols of rebirth or immortality. In Mesopotamian literature, the epic poem of Gilgamesh has
allusions to Odonata which signify the impossibility of immortality. Amongst the Aborigines of Australia of the Arrernte
language groups, honey ants and witchetty grubs served as personal clan totems. For the San people of the Kalahari,
it is the praying mantis which holds much cultural significance, including associations with creation and zen-like
patience in waiting.
Even though there is a broad scientific agreement that essentialist and typological conceptualizations of race are untenable,
scientists around the world continue to conceptualize race in widely differing ways, some of which have essentialist
implications. While some researchers sometimes use the concept of race to make distinctions among fuzzy sets of traits,
others in the scientific community suggest that the idea of race often is used in a naive or simplistic way,
and argue that, among humans, race has no taxonomic significance by pointing out that all living humans belong
to the same species, Homo sapiens, and subspecies, Homo sapiens sapiens. There is a wide consensus that the racial
categories that are common in everyday usage are socially constructed, and that racial groups cannot be biologically
defined. Nonetheless, some scholars argue that racial categories obviously correlate with biological traits (e.g.
phenotype) to some degree, and that certain genetic markers have varying frequencies among human populations, some
of which correspond more or less to traditional racial groupings. For this reason, there is no current consensus
about whether racial categories can be considered to have significance for understanding human genetic variation.
When people define and talk about a particular conception of race, they create a social reality through which social
categorization is achieved. In this sense, races are said to be social constructs. These constructs develop within
various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major social
situations. While race is understood to be a social construct by many, most scholars agree that race has real material
effects in the lives of people through institutionalized practices of preference and discrimination. Socioeconomic
factors, in combination with early but enduring views of race, have led to considerable suffering within disadvantaged
racial groups. Racial discrimination often coincides with racist mindsets, whereby the individuals and ideologies
of one group come to perceive the members of an outgroup as both racially defined and morally inferior. As a result,
racial groups possessing relatively little power often find themselves excluded or oppressed, while hegemonic individuals
and institutions are charged with holding racist attitudes. Racism has led to many instances of tragedy, including
slavery and genocide. In some countries, law enforcement uses race to profile suspects. This use of racial categories
is frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting
stereotypes. Because in some societies racial groupings correspond closely with patterns of social stratification,
for social scientists studying social inequality, race can be a significant variable. As sociological factors, racial
categories may in part reflect subjective attributions, self-identities, and social institutions. Groups of humans
have always identified themselves as distinct from neighboring groups, but such differences have not always been
understood to be natural, immutable and global. These qualities are the distinguishing features of how the concept
of race is used today. In this way the idea of race as we understand it today came about during the historical process
of exploration and conquest which brought Europeans into contact with groups from different continents, and of the
ideology of classification and typology found in the natural sciences. The European concept of "race", along with
many of the ideas now associated with the term, arose at the time of the scientific revolution, which introduced
and privileged the study of natural kinds, and the age of European imperialism and colonization which established
political relations between Europeans and peoples with distinct cultural and political traditions. As Europeans encountered
people from different parts of the world, they speculated about the physical, social, and cultural differences among
various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves
from throughout the world, created a further incentive to categorize human groups in order to justify the subordination
of African slaves. Drawing on Classical sources and upon their own internal interactions — for example, the hostility
between the English and Irish powerfully influenced early European thinking about the differences between people
— Europeans began to sort themselves and others into groups based on physical appearance, and to attribute to individuals
belonging to these groups behaviors and capacities which were claimed to be deeply ingrained. A set of folk beliefs
took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral
qualities. Similar ideas can be found in other cultures, for example in China, where a concept often translated as
"race" was associated with supposed common descent from the Yellow Emperor, and used to stress the unity of ethnic
groups in China. Brutal conflicts between ethnic groups have existed throughout history and across the world. The
first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle
division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different
species or races which inhabit it"), published in 1684. In the 18th century the differences among human groups became
a focus of scientific investigation. But the scientific classification of phenotypic variation was frequently coupled
with racist ideas about innate predispositions of different groups, always attributing the most desirable features
to the White, European race and arranging the other races along a continuum of progressively undesirable attributes.
The 1735 classification of Carl Linnaeus, inventor of zoological taxonomy, divided the human species Homo sapiens into
continental varieties of europaeus, asiaticus, americanus, and afer, each associated with a different humour: sanguine,
melancholic, choleric, and phlegmatic, respectively. Homo sapiens europaeus was described as active, acute, and adventurous,
whereas Homo sapiens afer was said to be crafty, lazy, and careless. The 1775 treatise "The Natural Varieties of
Mankind", by Johann Friedrich Blumenbach proposed five major divisions: the Caucasoid race, Mongoloid race, Ethiopian
race (later termed Negroid, and not to be confused with the narrower Ethiopid race), American Indian race, and Malayan
race, but he did not propose any hierarchy among the races. Blumenbach also noted the graded transition in appearances
from one group to adjacent groups and suggested that "one variety of mankind does so sensibly pass into the other,
that you cannot mark out the limits between them". From the 17th through 19th centuries, the merging of folk beliefs
about group differences with scientific explanations of those differences produced what one scholar has called an
"ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further
argued that some groups may be the result of mixture between formerly distinct populations, but that careful study
could distinguish the ancestral races that had combined to produce admixed groups. Subsequent influential classifications
by Georges Buffon, Petrus Camper and Christoph Meiners all classified "Negros" as inferior to Europeans. In the United
States the racial theories of Thomas Jefferson were influential. He saw Africans as inferior to Whites especially
in regard to their intellect, and imbued with unnatural sexual appetites, but described Native Americans as equals
to whites. In the last two decades of the 18th century, the theory of polygenism, the belief that different races
had evolved separately in each continent and shared no common ancestor, was advocated in England by historian Edward
Long and anatomist Charles White, in Germany by ethnographers Christoph Meiners and Georg Forster, and in France
by Julien-Joseph Virey. In the US, Samuel George Morton, Josiah Nott and Louis Agassiz promoted this theory in the
mid-nineteenth century. Polygenism was popular and most widespread in the 19th century, culminating in the founding
of the Anthropological Society of London (1863) during the period of the American Civil War, in opposition to the
Ethnological Society, which had abolitionist sympathies. Today, all humans are classified as belonging to the species
Homo sapiens and subspecies Homo sapiens sapiens. However, this was not the first species of Homininae: the first
species of the genus Homo, Homo habilis, is theorized to have evolved in East Africa at least 2 million years ago, and
members of this species populated different parts of Africa in a relatively short time. Homo erectus is theorized
to have evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia.
Virtually all physical anthropologists agree that archaic Homo sapiens (a group including the possible species H.
heidelbergensis, H. rhodesiensis and H. neanderthalensis) evolved out of African Homo erectus (sensu lato), or Homo
ergaster. In the early 20th century, many anthropologists accepted and taught the belief that biologically distinct
races were isomorphic with distinct linguistic, cultural, and social groups, while popularly applying that belief
to the field of eugenics, in conjunction with a practice that is now called scientific racism. After the Nazi eugenics
program, racial essentialism lost widespread popularity. Race anthropologists were pressured to acknowledge findings
coming from studies of culture and population genetics, and to revise their conclusions about the sources of phenotypic
variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid
genetic or biological designation. Population geneticists have debated whether the concept of population can provide
a basis for a new conception of race. In order to do this, a working definition of population must be found. Surprisingly,
there is no generally accepted concept of population that biologists use. Although the concept of population is central
to ecology, evolutionary biology and conservation biology, most definitions of population rely on qualitative descriptions
such as "a group of organisms of the same species occupying a particular space at a particular time". Waples and Gaggiotti
identify two broad types of definitions for populations: those that fall into an ecological paradigm, and those that
fall into an evolutionary paradigm. Traditionally, subspecies are seen as geographically
isolated and genetically differentiated populations. That is, "the designation 'subspecies' is used to indicate an
objective degree of microevolutionary divergence". One objection to this idea is that it does not specify what degree
of differentiation is required. Therefore, any population that is somewhat biologically different could be considered
a subspecies, even to the level of a local population. As a result, Templeton has argued that it is necessary to
impose a threshold on the level of difference that is required for a population to be designated a subspecies. This
effectively means that populations of organisms must have reached a certain measurable level of difference to be
recognised as subspecies. Dean Amadon proposed in 1949 that subspecies would be defined according to the seventy-five
percent rule which means that 75% of a population must lie outside 99% of the range of other populations for a given
defining morphological character or a set of characters. The seventy-five percent rule still has defenders but other
scholars argue that it should be replaced with a ninety or ninety-five percent rule. In 1978, Sewall Wright suggested
that human populations that have long inhabited separated parts of the world should, in general, be considered different
subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection.
Wright argued that it does not require a trained anthropologist to classify an array of Englishmen, West Africans,
and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each
of these groups that every individual can easily be distinguished from every other. However, it is customary to use
the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones. Cladistics
is another method of classification. A clade is a taxonomic group of organisms consisting of a single common ancestor
and all the descendants of that ancestor. Every creature produced by sexual reproduction has two immediate lineages,
one maternal and one paternal. Whereas Carl Linnaeus established a taxonomy of living organisms based on anatomical
similarities and differences, cladistics seeks to establish a taxonomy—the phylogenetic tree—based on genetic similarities
and differences and tracing the process of acquisition of multiple characteristics by single organisms. Some researchers
have tried to clarify the idea of race by equating it to the biological idea of the clade. Often mitochondrial DNA
or Y chromosome sequences are used to study ancient human migration paths. These single-locus sources of DNA do not
recombine and are inherited from a single parent. Individuals from the various continental groups tend to be more
similar to one another than to people from other continents, and tracing either mitochondrial DNA or non-recombinant
Y-chromosome DNA explains how people in one place may be largely derived from people in some remote location. Often
taxonomists prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies.
Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups, usually
applying to populations that are allopatric (geographically separated) and therefore discretely bounded. This would
make a subspecies, evolutionarily speaking, a clade – a group with a common evolutionary ancestor population. The
smooth gradation of human genetic variation in general tends to rule out any idea that human population groups can
be considered monophyletic (cleanly divided), as there appears to always have been considerable gene flow between
human populations. Rachel Caspari (2003) has argued that clades are by definition monophyletic groups (a taxon that
includes all descendants of a given ancestor) and since no groups currently regarded as races are monophyletic, none
of those groups can be clades. For the anthropologists Lieberman and Jackson (1995), however, there are more profound
methodological and conceptual problems with using cladistics to support concepts of race. They claim that "the molecular
and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples".
For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively
grouped as Caucasians prior to the analysis of their DNA variation. This is claimed to limit and skew interpretations,
obscure other lineage relationships, deemphasize the impact of more immediate clinal environmental factors on genomic
diversity, and can cloud our understanding of the true patterns of affinity. They argue that however significant
the empirical research, these studies use the term race in conceptually imprecise and careless ways. They suggest
that the authors of these studies find support for racial distinctions only because they began by assuming the validity
of race. "For empirical reasons we prefer to place emphasis on clinal variation, which recognizes the existence of
adaptive human hereditary variation and simultaneously stresses that such variation is not found in packages that
can be labeled races." One crucial innovation in reconceptualizing genotypic and phenotypic variation was the anthropologist
C. Loring Brace's observation that such variation, insofar as it is affected by natural selection, slow migration,
or genetic drift, is distributed along geographic gradations or clines. In part this is due to isolation by distance.
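The way isolation by distance produces a cline can be sketched with a toy stepping-stone model. Everything below is invented for illustration, not drawn from any study cited here: nine demes sit on a line, a fraction m of each deme's gene pool mixes with its neighbours every generation, and weak selection pushes the two ends of the habitat toward different allele frequencies.

```python
# Hypothetical stepping-stone sketch of a cline (illustrative numbers only).

def step(freqs, m=0.1, pressure=0.05):
    """One generation: neighbour migration plus opposing selection at the ends."""
    new = freqs[:]
    for i in range(len(freqs)):
        left = freqs[i - 1] if i > 0 else freqs[i]    # reflecting boundary
        right = freqs[i + 1] if i < len(freqs) - 1 else freqs[i]
        new[i] = (1 - m) * freqs[i] + m * (left + right) / 2
    # weak selection toward frequency 0.9 at one end, 0.1 at the other
    new[0] += pressure * (0.9 - new[0])
    new[-1] += pressure * (0.1 - new[-1])
    return new

freqs = [0.5] * 9          # start every deme at the same frequency
for _ in range(500):
    freqs = step(freqs)

print([round(p, 2) for p in freqs])
```

At equilibrium the frequencies grade smoothly from one end of the habitat to the other, so any boundary drawn between "populations" along the line is arbitrary: there is no place where the gradient breaks into discrete groups.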
This point called attention to a problem common to phenotype-based descriptions of races (for example, those based
on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type)
that do not correlate highly with the markers for race. Hence anthropologist Frank Livingstone's conclusion that, since
clines cross racial boundaries, "there are no races, only clines". In a response to Livingstone, Theodore Dobzhansky
argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingstone
that if races have to be 'discrete units,' then there are no races, and if 'race' is used as an 'explanation' of
the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could
use the term race if one distinguished between "race differences" and "the race concept." The former refers to any
distinction in gene frequencies between populations; the latter is "a matter of judgment." He further observed that
even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena… but it
does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone
and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race
concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether
the race concept remains a meaningful and useful social convention. In 1964, the biologists Paul Ehrlich and Holm
pointed out cases where two or more clines are distributed discordantly—for example, melanin is distributed in a
decreasing pattern from the equator north and south; frequencies for the haplotype for beta-S hemoglobin, on the
other hand, radiate out of specific geographical points in Africa. As the anthropologists Leonard Lieberman and Fatimah
Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were
genotypically or even phenotypically homogeneous". Patterns such as those seen in human physical and genetic variation
as described above, have led to the consequence that the number and geographic location of any described races is
highly dependent on the importance attributed to, and quantity of, the traits considered. Scientists discovered a
skin-lightening mutation that partially accounts for the appearance of light skin in humans who migrated out
of Africa northward into what is now Europe, a mutation they estimate occurred 20,000 to 50,000 years ago. East Asians
owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles)
considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond
to the same geographical location, as Ossorio & Duster (2005) have noted. Coop et al. (2009) found "a selected allele
that strongly differentiates the French from both the Yoruba and Han could be strongly clinal across Europe, or at
high frequency in Europe and absent elsewhere, or follow any other distribution according to the geographic nature
of the selective pressure. However, we see that the global geographic distributions of these putatively selected
alleles are largely determined simply by their frequencies in Yoruba, French and Han (Figure 3). The global distributions
fall into three major geographic patterns that we interpret as non-African sweeps, west Eurasian sweeps and East
Asian sweeps, respectively." Another way to look at differences between populations is to measure genetic differences
rather than physical differences between groups. The mid-20th-century anthropologist William C. Boyd defined race
as: "A population which differs significantly from other populations in regard to the frequency of one or more of
the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant
'constellation'". Leonard Lieberman and Rodney Kirk have pointed out that "the paramount weakness of this statement
is that if one gene can distinguish races then the number of races is as numerous as the number of human couples
reproducing." Moreover, the anthropologist Stephen Molnar has suggested that the discordance of clines inevitably
results in a multiplication of races that renders the concept itself useless. The Human Genome Project states "People
who have lived in the same geographic region for many generations may have some alleles in common, but no allele
will be found in all members of one population and in no members of any other." The population geneticist Sewall
Wright developed one way of measuring genetic differences between populations known as the Fixation index, which
is often abbreviated to FST. This statistic is often used in taxonomy to compare differences between any two given
populations by measuring the genetic differences among and between populations for individual genes, or for many
genes simultaneously. It is often stated that the fixation index for humans is about 0.15, meaning that an estimated
85% of the variation measured in the overall human population is found within populations, and about 15% of the
variation occurs between populations. These estimates imply that two individuals from different
populations are almost as likely to be as similar to each other as either is to a member of their own group.
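The within-versus-between partition described above can be made concrete with a small numerical sketch. The allele frequencies below are hypothetical, chosen only for illustration, and the estimator is the simple variance-based form of Wright's FST for a single biallelic locus; real studies use more refined estimators averaged over many loci.

```python
# Illustrative sketch of Wright's fixation index (hypothetical data).
# For one biallelic locus, a simple variance-based form is
#   F_ST = Var(p) / (p_bar * (1 - p_bar))
# where p is the allele frequency in each subpopulation and p_bar their mean.

def fst(freqs):
    """Variance-based F_ST over a list of subpopulation allele frequencies."""
    n = len(freqs)
    p_bar = sum(freqs) / n
    var_p = sum((p - p_bar) ** 2 for p in freqs) / n
    total = p_bar * (1 - p_bar)   # expected heterozygosity if fully mixed
    return var_p / total if total > 0 else 0.0

# Three hypothetical populations with moderately different frequencies:
print(round(fst([0.2, 0.45, 0.7]), 3))   # prints 0.168
```

An FST near 0.15 at a locus means roughly 15% of the variation at that locus lies between populations and roughly 85% within them; identical frequencies give FST = 0, and complete fixation of different alleles gives FST = 1.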
Richard Lewontin, who affirmed these ratios, thus concluded neither "race" nor "subspecies" were appropriate or useful
ways to describe human populations. However, others have noticed that group variation was relatively similar to the
variation observed in other mammalian species. Wright himself believed that values >0.25 represent very great genetic
variation, and that an FST of 0.15–0.25 represented great variation. However, about 5% of human variation occurs between
populations within continents; therefore, FST values between continental groups of humans (or races) as low as
0.1 (or possibly lower) have been found in some studies, suggesting more moderate levels of genetic variation. Graves
(1996) has countered that FST should not be used as a marker of subspecies status, as the statistic is used to measure
the degree of differentiation between populations, although see also Wright (1978). Jeffrey Long and Rick Kittles
give a long critique of the application of FST to human populations in their 2003 paper "Human Genetic Diversity
and the Nonexistence of Biological Races". They find that the figure of 85% is misleading because it implies that
all human populations contain on average 85% of all genetic diversity. They claim that this does not correctly reflect
human population history, because it treats all human groups as independent. A more realistic portrayal of the way
human groups are related is to understand that some human groups are parental to other groups and that these groups
represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human
population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which
all non-African populations derive, but more than that, non-African groups only derive from a small non-representative
sample of this African population. This means that all non-African groups are more closely related to each other
and to some African groups (probably east Africans) than they are to others, and further that the migration out of
Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out
of Africa by the emigrating groups. On this view, human population movements did not result in all human populations
being independent; rather, they produced a series of dilutions of diversity the further from Africa a population lives,
with each founding event representing a genetic subset of its parental population. Long and
Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human
diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population
derived from New Guinea. Long and Kittles argued that this still produces a global human population that is genetically
homogeneous compared to other mammalian populations. In his 2003 paper, "Human Genetic Diversity: Lewontin's Fallacy",
A. W. F. Edwards argued that rather than using a locus-by-locus analysis of variation to derive taxonomy, it is possible
to construct a human classification system based on characteristic genetic patterns, or clusters inferred from multilocus
genetic data. Geographically based human studies since have shown that such genetic clusters can be derived from
analyzing of a large number of loci which can assort individuals sampled into groups analogous to traditional continental
racial groups. Joanna Mountain and Neil Risch cautioned that while genetic clusters may one day be shown to correspond
to phenotypic variations between groups, such assumptions were premature as the relationship between genes and complex
traits remains poorly understood. However, Risch denied such limitations render the analysis useless: "Perhaps just
using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out?
... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact
that it has utility." Early human genetic cluster analysis studies were conducted with samples taken from ancestral
population groups living at extreme geographic distances from each other. It was thought that such large geographic
distances would maximize the genetic variation between the groups sampled in the analysis and thus maximize the probability
of finding cluster patterns unique to each group. In light of the historically recent acceleration of human migration
(and correspondingly, human gene flow) on a global scale, further studies were conducted to judge the degree to which
genetic cluster analysis can pattern ancestrally identified groups as well as geographically separated groups. One
such study looked at a large multiethnic population in the United States, and "detected only modest genetic differentiation
between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry,
which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant
of genetic structure in the U.S. population." (Tang et al. (2005)) Witherspoon et al. (2007) have argued that even
when individuals can be reliably assigned to specific population groups, it may still be possible for two randomly
chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen
member of their own cluster. They found that many thousands of genetic markers had to be used in order for the answer
to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals
chosen from two different populations?" to be "never". This assumed three population groups separated by large geographic
ranges (European, African and East Asian). The entire world population is much more complex and studying an increasing
number of groups would require an increasing number of markers for the same answer. The authors conclude that "caution
should be used when using geographic or genetic ancestry to make inferences about individual phenotypes." Witherspoon,
et al. concluded that, "The fact that, given enough genetic data, individuals can be correctly assigned to their
populations of origin is compatible with the observation that most human genetic variation is found within populations,
not between them. It is also compatible with our finding that, even when the most distinct populations are considered
and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members
of their own population." Anthropologists such as C. Loring Brace, the philosophers Jonathan Kaplan and Rasmus Winther,
and the geneticist Joseph Graves,[page needed] have argued that while it is certainly possible to find biological
and genetic variation that corresponds roughly to the groupings normally defined as "continental races", this is
true for almost all geographically distinct populations. The cluster structure of the genetic data is therefore dependent
on the initial hypotheses of the researcher and the populations sampled. When one samples continental groups, the
clusters become continental; if one had chosen other sampling patterns, the clustering would be different. Weiss
and Fullerton have noted that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form
and all other populations could be described as being clinally composed of admixtures of Maori, Icelandic and Mayan
genetic materials. Kaplan and Winther therefore argue that, seen in this way, both Lewontin and Edwards are right
in their arguments. They conclude that while racial groups are characterized by different allele frequencies, this
does not mean that racial classification is a natural taxonomy of the human species, because multiple other genetic
patterns can be found in human populations that crosscut racial distinctions. Moreover, the genomic data underdetermines
whether one wishes to see subdivisions (i.e., splitters) or a continuum (i.e., lumpers). Under Kaplan and Winther's
view, racial groupings are objective social constructions (see Mills 1998) that have conventional biological reality
only insofar as the categories are chosen and constructed for pragmatic scientific reasons. In earlier work, Winther
had identified "diversity partitioning" and "clustering analysis" as two separate methodologies, with distinct questions,
assumptions, and protocols. Each is also associated with opposing ontological consequences vis-à-vis the metaphysics
of race. Many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying
groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems
with "race", following the Second World War, evolutionary and social scientists were acutely aware of how beliefs
about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum
in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide.
They thus came to believe that race itself is a social construct, a concept that was believed to correspond to an
objective reality but which was believed in because of its social functions. Craig Venter and Francis Collins of
the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining
the data from the genome mapping, Venter realized that although the genetic variation within the human species is
on the order of 1–3% (instead of the previously assumed 1%), the types of variations do not support the notion of genetically
defined races. Venter said, "Race is a social concept. It's not a scientific one. There are no bright lines (that
would stand out), if we could compare all the sequenced genomes of everyone on the planet." "When we try to apply
science to try to sort out these social differences, it all falls apart." The theory that race is merely a social
construct has been challenged by the findings of researchers at the Stanford University School of Medicine, published
in the American Journal of Human Genetics as "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding
in Case-Control Association Studies". One of the researchers, Neil Risch, noted: "When we looked at the correlation between
genetic structure [based on microsatellite markers] versus self-description, we found 99.9% concordance between the
two. We actually had a higher discordance rate between self-reported sex and markers on the X chromosome! So you
could argue that sex is also a problematic category. And there are differences between sex and gender; self-identification
may not be correlated with biology perfectly. And there is sexism." Basically, race in Brazil was "biologized", but
in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences.
There, racial identity was not governed by a rigid descent rule, such as the one-drop rule, as it was in the United
States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were
there only a very limited number of categories to choose from, to the extent that full siblings can belong to different
racial groups. Over a dozen racial categories would be recognized in conformity with all the possible combinations
of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the
spectrum, and not one category stands significantly isolated from the rest. That is, race referred preferentially
to appearance, not heredity, and appearance is a poor indication of ancestry, because only a few genes are responsible
for someone's skin color and traits: a person who is considered white may have more African ancestry than a person
who is considered black, and the reverse can be also true about European ancestry. The complexity of racial classifications
in Brazil reflects the extent of miscegenation in Brazilian society, a society that remains highly, but not strictly,
stratified along color lines. These socioeconomic factors are also significant to the limits of racial lines, because
a minority of pardos, or brown people, are likely to start declaring themselves white or black if socially upward,
and being seen as relatively "whiter" as their perceived social status increases (much as in other regions of Latin
America). Fluidity of racial categories aside, the "biologification" of race in Brazil referred to above would match
contemporary concepts of race in the United States quite closely if Brazilians were required to choose their race
as one of the three main IBGE census categories (Asian and Indigenous aside). While assimilated Amerindians and
people with a very high share of Amerindian ancestry are usually grouped as caboclos, a subgroup of pardos which
roughly translates as both mestizo and hillbilly, a higher European genetic contribution is expected for a person
of lower Amerindian descent to be grouped as pardo. In several genetic tests, people with less than 60–65% European
descent and 5–10% Amerindian descent usually cluster with Afro-Brazilians (as self-reported), who make up 6.9% of
the population, as do, most of the time, those with roughly 45% or more Sub-Saharan African contribution (on average,
Afro-Brazilian DNA was reported to be about 50% Sub-Saharan African, 37% European and 13% Amerindian). If
self-reporting were more consistent with the genetic gradation of miscegenation (e.g. not clustering people with a
balanced degree of African and non-African ancestry into the black group rather than the multiracial one, unlike
elsewhere in Latin America, where people with a high share of African descent tend to classify themselves as mixed),
more people in Brazil would report themselves as white and pardo (47.7% and 42.4% of the population as of 2010,
respectively), because research suggests its population has, on average, between 65 and 80% autosomal European ancestry
(as well as >35% European mtDNA and >95% European Y-DNA). This is not surprising: while the greatest number of slaves
imported from Africa were sent to Brazil, totaling roughly 3.5 million people, they lived in such miserable conditions
that African male Y-DNA is significantly rare there; lacking the resources and time to raise children, most African
descent originally entered the population through relations between white masters and female slaves. From the last
decades of the Empire until the 1950s, the proportion of the white population increased significantly as Brazil
welcomed 5.5 million immigrants between 1821 and 1932, not far behind its neighbor Argentina with 6.4 million, and
it received more European immigrants in its colonial history than the United States: between 1500 and 1760, 700,000
Europeans settled in Brazil, compared with 530,000 in the United States over the same period. Thus, the historical
construction of race in Brazilian society has dealt primarily with gradations between persons of mostly European
ancestry and small minority groups with a lower share of it. The European Union uses the terms racial origin and ethnic origin synonymously
in its documents and according to it "the use of the term 'racial origin' in this directive does not imply an acceptance
of such [racial] theories".[full citation needed] Haney López warns that using "race" as a category within the law
tends to legitimize its existence in the popular imagination. In the diverse geographic context of Europe, ethnicity
and ethnic origin are arguably more resonant and are less encumbered by the ideological baggage associated with "race".
In the European context, the historical resonance of "race" underscores its problematic nature. In some states, it is strongly
associated with laws promulgated by the Nazi and Fascist governments in Europe during the 1930s and 1940s. Indeed,
in 1996, the European Parliament adopted a resolution stating that "the term should therefore be avoided in all official
texts". The concept of racial origin relies on the notion that human beings can be separated into biologically distinct
"races", an idea generally rejected by the scientific community. Since all human beings belong to the same species,
the ECRI (European Commission against Racism and Intolerance) rejects theories based on the existence of different
"races". However, in its Recommendation ECRI uses this term in order to ensure that those persons who are generally
and erroneously perceived as belonging to "another race" are not excluded from the protection provided for by the
legislation. The law claims to reject the existence of "race", yet penalizes situations where someone is treated less
favourably on this ground. Since the end of the Second World War, France has become an ethnically diverse country.
Today, approximately five percent of the French population is non-European and non-white. This does not approach
the number of non-white citizens in the United States (roughly 28–37%, depending on how Latinos are classified; see
Demographics of the United States). Nevertheless, it amounts to at least three million people, and has forced the
issues of ethnic diversity onto the French policy agenda. France has developed an approach to dealing with ethnic
problems that stands in contrast to that of many advanced, industrialized countries. Unlike the United States, Britain,
or even the Netherlands, France maintains a "color-blind" model of public policy. This means that it targets virtually
no policies directly at racial or ethnic groups. Instead, it uses geographic or class criteria to address issues
of social inequalities. It has, however, developed an extensive anti-racist policy repertoire since the early 1970s.
Until recently, French policies focused primarily on issues of hate speech—going much further than their American
counterparts—and relatively less on issues of discrimination in jobs, housing, and in provision of goods and services.
Since the early history of the United States, Amerindians, African–Americans, and European Americans have been classified
as belonging to different races. Efforts to track mixing between groups led to a proliferation of categories, such
as mulatto and octoroon. The criteria for membership in these races diverged in the late 19th century. During Reconstruction,
increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless
of appearance. By the early 20th century, this notion was made statutory in many states. Amerindians continue to
be defined by a certain percentage of "Indian blood" (called blood quantum). To be White one had to have perceived
"pure" White ancestry. The one-drop rule or hypodescent rule refers to the convention of defining a person as racially
black if he or she has any known African ancestry. This rule meant that those that were mixed race but with some
discernible African ancestry were defined as black. The one-drop rule is specific not only to those with African
ancestry but to the United States, making it a particularly African-American experience. The term "Hispanic" as an
ethnonym emerged in the 20th century with the rise of migration of laborers from the Spanish-speaking countries of
Latin America to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions
of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White,
Amerindian, Asian, and mixed groups). However, there is a common misconception in the US that Hispanic/Latino is
a race or sometimes even that national origins such as Mexican, Cuban, Colombian, Salvadoran, etc. are races. In
contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans,
most of whom speak the English language but are not necessarily of English descent. Wang, Štrkalj et al. (2003) examined
the use of race as a biological concept in research papers published in China's only biological anthropology journal,
Acta Anthropologica Sinica. The study showed that the race concept was widely used among Chinese anthropologists.
In a 2007 review paper, Štrkalj suggested that the stark contrast of the racial approach between the United States
and China was due to the fact that race is a factor for social cohesion among the ethnically diverse people of China,
whereas "race" is a very sensitive issue in America and the racial approach is considered to undermine social cohesion
- with the result that in the socio-political context of US academia, scientists are encouraged not to use racial
categories, whereas in China they are encouraged to use them. In 2002–2003, Kaszycka et al. (2009) surveyed European
anthropologists' opinions toward the biological race concept. Three factors, country of academic education, discipline,
and age, were found to be significant in differentiating the replies. Those educated in Western Europe, physical
anthropologists, and middle-aged persons rejected race more frequently than those educated in Eastern Europe, people
in other branches of science, and those from both younger and older generations. "The survey shows that the views
on race are sociopolitically (ideologically) influenced and highly dependent on education." One result of debates
over the meaning and validity of the concept of race is that the current literature across different disciplines
regarding human variation lacks consensus, though within some fields, such as some branches of anthropology, there
is strong consensus. Some studies use the word race in its early essentialist taxonomic sense. Many others still
use the term race, but use it to mean a population, clade, or haplogroup. Others eschew the concept of race altogether,
and use the concept of population as a less problematic unit of analysis. Eduardo Bonilla-Silva, Sociology professor
at Duke University, remarks, "I contend that racism is, more than anything else, a matter of group power; it is about
a dominant racial group (whites) striving to maintain its systemic advantages and minorities fighting to subvert
the racial status quo." The types of practices that take place under this new color-blind racism are subtle, institutionalized,
and supposedly not racial. Color-blind racism thrives on the idea that race is no longer an issue in the United States.
There are contradictions between the alleged color-blindness of most whites and the persistence of a color-coded
system of inequality. The concept of biological race has declined significantly in frequency of use in physical anthropology
in the United States during the 20th century. A majority of physical anthropologists in the United States have rejected
the concept of biological races. Since 1932, an increasing number of college textbooks introducing physical anthropology
have rejected race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to
1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According
to one academic journal entry, whereas 78 percent of the articles in the 1931 Journal of Physical Anthropology employed
these or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent
did in 1996. According to the 2000 edition of a popular physical anthropology textbook, forensic anthropologists
are overwhelmingly in support of the idea of the basic biological reality of human races. Forensic physical anthropologist
and professor George W. Gill has said that the idea that race is only skin deep "is simply not true, as any experienced
forensic anthropologist will affirm" and "Many morphological features tend to follow geographic boundaries coinciding
often with climatic zones. This is not surprising since the selective forces of climate are probably the primary
forces of nature that have shaped human races with regard not only to skin color and hair form but also the underlying
bony structures of the nose, cheekbones, etc. (For example, more prominent noses humidify air better.)" While he
can see good arguments for both sides, the complete denial of the opposing evidence "seems to stem largely from socio-political
motivation and not science at all". He also states that many biological anthropologists see races as real yet "not
one introductory textbook of physical anthropology even presents that perspective as a possibility. In a case as
flagrant as this, we are not dealing with science but rather with blatant, politically motivated censorship". "Race"
is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and
race-based medicine. Brace has criticized this practice, arguing that forensic anthropologists use the controversial
concept of "race" out of convention when they should in fact be talking about regional ancestry. He argues that while
forensic anthropologists can determine that skeletal remains come from a person with ancestors in a specific region
of Africa, categorizing those remains as "black" applies a socially constructed category that is only meaningful
in the particular context of the United States, and which is not itself scientifically valid. The authors of the
study also examined 77 college textbooks in biology and 69 in physical anthropology published between 1932 and 1989.
Physical anthropology texts argued that biological races exist until the 1970s, when they began to argue that races
do not exist. In contrast, biology textbooks did not undergo such a reversal but many instead dropped their discussion
of race altogether. The authors attributed this to biologists trying to avoid discussing the political implications
of racial classifications, instead of discussing them, and to the ongoing discussions in biology about the validity
of the concept "subspecies". The authors also noted that some widely used textbooks in biology such as Douglas J.
Futuyama's 1986 "Evolutionary Biology" had abandoned the race concept, "The concept of race, masking the overwhelming
genetic similarity of all peoples and the mosaic patterns of variation that do not correspond to racial divisions,
is not only socially dysfunctional but is biologically indefensible as well (pp. 518–519)." (Lieberman et al. 1992,
pp. 316–17) Morning (2008) looked at high school biology textbooks over the 1952–2002 period and found a similar
pattern: direct discussion of race fell from an initial 92% of textbooks to only 35% in the 1983–92 period, though
it has since risen somewhat, to 43%. More indirect and brief discussions of race in the context of medical
disorders have increased from none to 93% of textbooks. In general, the material on race has moved from surface traits
to genetics and evolutionary history. The study argues that the textbooks' fundamental message about the existence
of races has changed little. In the United States, federal government policy promotes the use of racially categorized
data to identify and address health disparities between racial or ethnic groups. In clinical settings, race has sometimes
been considered in the diagnosis and treatment of medical conditions. Doctors have noted that some medical conditions
are more prevalent in certain racial or ethnic groups than in others, without being sure of the cause of those differences.
Recent interest in race-based medicine, or race-targeted pharmacogenomics, has been fueled by the proliferation of
human genetic data which followed the decoding of the human genome in the first decade of the twenty-first century.
There is an active debate among biomedical researchers about the meaning and importance of race in their research.
Proponents of the use of racial categories in biomedicine argue that continued use of racial categorizations in biomedical
research and clinical practice makes possible the application of new genetic findings, and provides a clue to diagnosis.
Other researchers point out that finding a difference in disease prevalence between two socially defined groups does
not necessarily imply genetic causation of the difference. They suggest that medical practices should maintain their
focus on the individual rather than an individual's membership in any group. They argue that overemphasizing genetic
contributions to health disparities carries various risks such as reinforcing stereotypes, promoting racism or ignoring
the contribution of non-genetic factors to health disparities. International epidemiological data show that living
conditions rather than race make the biggest difference in health outcomes even for diseases that have "race-specific"
treatments. Some studies have found that patients are reluctant to accept racial categorization in medical practice.
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to
apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color,
hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to
apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description
that will readily suggest the general appearance of an individual than to make a scientifically valid categorization
by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description
will include: height, weight, eye color, scars and other distinguishing characteristics. Criminal justice agencies
in England and Wales use at least two separate racial/ethnic classification systems when reporting crime, as of 2010.
One is the system used in the 2001 Census when individuals identify themselves as belonging to a particular ethnic
group: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2
(White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani),
A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other
black background); O1 (Chinese), O9 (Any other). The other is categories used by the police when they visually identify
someone as belonging to an ethnic group, e.g. at the time of a stop and search or an arrest: White – North European
(IC1), White – South European (IC2), Black (IC3), Asian (IC4), Chinese, Japanese, or South East Asian (IC5), Middle
Eastern (IC6), and Unknown (IC0). "IC" stands for "Identification Code;" these items are also referred to as Phoenix
classifications. Officers are instructed to "record the response that has been given" even if the person gives an
answer which may be incorrect; their own perception of the person's ethnic background is recorded separately. Comparability
of the information being recorded by officers was brought into question by the Office for National Statistics (ONS)
in September 2007, as part of its Equality Data Review; one problem cited was the number of reports that contained
an ethnicity of "Not Stated." In the United States, the practice of racial profiling has been ruled to be both unconstitutional
and a violation of civil rights. There is active debate regarding the cause of a marked correlation between the recorded
crimes, punishments meted out, and the country's populations. Many consider de facto racial profiling an example
of institutional racism in law enforcement. The history of misuse of racial categories to adversely impact one or
more groups and/or to offer protection and advantage to another has a clear impact on the debate over the legitimate use
of known phenotypical or genotypical characteristics tied to the presumed race of both victims and perpetrators by
the government. Mass incarceration in the United States disproportionately impacts African American and Latino communities.
Michelle Alexander, author of The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010), argues that
mass incarceration is best understood as not only a system of overcrowded prisons. Mass incarceration is also "the
larger web of laws, rules, policies, and customs that control those labeled criminals both in and out of prison."
She defines it further as "a system that locks people not only behind actual bars in actual prisons, but also behind
virtual bars and virtual walls", illustrating the second-class citizenship that is imposed on a disproportionate
number of people of color, specifically African-Americans. She compares mass incarceration to Jim Crow laws, stating
that both work as racial caste systems. Similarly, forensic anthropologists draw on highly heritable morphological
features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms
of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept
of race as a valid representation of human biological diversity, except for forensic anthropologists. He asked, "If
races don't exist, why are forensic anthropologists so good at identifying them?" He concluded: Abu el-Haj argues
that genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling
ancestry from culture and capacity."[citation needed] As an example, she refers to recent work by Hammer et al.,
which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring
non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on
the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They
focused on the non-recombining Y-chromosome to "circumvent some of the complications associated with selection".
As another example, she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish
priests (Kohanim), (in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes
of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly
defined, as it does not include all ancestors) in some religions and in popular culture, and people's desire to use
science to confirm their claims about ancestry; this "race science", she argues, is fundamentally different from
older notions of race that were used to explain differences in human behaviour or social status: One problem with
these assignments is admixture. Many people have a highly varied ancestry. For example, in the United States, colonial
and early federal history were periods of numerous interracial relationships, both outside and inside slavery. This
has resulted in a majority of people who identify as African American having some European ancestors. Similarly,
many people who identify as white have some African ancestors. In a survey in a northeastern U.S. university of college
students who identified as "white", about 30% were estimated to have up to 10% African ancestry.
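The Witherspoon et al. finding discussed above, that the fraction of cross-population pairs that are more similar than within-population pairs shrinks as the number of markers grows, can be sketched with a toy simulation. All allele frequencies, population sizes, and the distance measure here are hypothetical choices for illustration only; this is not their dataset or exact statistic:

```python
import random

def simulate_omega(n_loci, n_ind=30, seed=1):
    """Estimate how often a pair drawn from two different populations is
    genetically *more similar* than a pair drawn from within one population,
    using n_loci biallelic markers with made-up allele frequencies."""
    rng = random.Random(seed)
    # Two diverged populations with different (hypothetical) allele frequencies.
    freq_a, freq_b = 0.3, 0.7

    def genotype(freq):
        # Number of copies (0, 1 or 2) of the reference allele at each locus.
        return [sum(rng.random() < freq for _ in range(2)) for _ in range(n_loci)]

    pop_a = [genotype(freq_a) for _ in range(n_ind)]
    pop_b = [genotype(freq_b) for _ in range(n_ind)]

    def dist(x, y):
        # Simple allele-sharing distance: summed genotype differences.
        return sum(abs(g - h) for g, h in zip(x, y))

    within = [dist(pop_a[i], pop_a[j])
              for i in range(n_ind) for j in range(i + 1, n_ind)]
    between = [dist(a, b) for a in pop_a for b in pop_b]

    # Fraction of (within-pair, between-pair) comparisons in which the
    # between-population pair is the more similar one.
    closer = sum(b < w for w in within for b in between)
    return closer / (len(within) * len(between))
```

With only a handful of markers the two distance distributions overlap substantially, so cross-population pairs are often the more similar ones; with hundreds of markers that fraction approaches zero, mirroring the qualitative pattern described above.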
Since the 19th century, the built-up area of Paris has grown far beyond its administrative borders; together with its suburbs,
the whole agglomeration has a population of 10,550,350 (Jan. 2012 census). Paris' metropolitan area spans most of
the Paris region and has a population of 12,341,418 (Jan. 2012 census), or one-fifth of the population of France.
The administrative region covers 12,012 km² (4,638 mi²), with approximately 12 million inhabitants as of 2014, and
has its own regional council and president. Paris is the home of the most visited art museum in the world, the Louvre,
as well as the Musée d'Orsay, noted for its collection of French Impressionist art, and the Musée National d'Art
Moderne, a museum of modern and contemporary art. The notable architectural landmarks of Paris include Notre Dame
Cathedral (12th century); the Sainte-Chapelle (13th century); the Eiffel Tower (1889); and the Basilica of Sacré-Cœur
on Montmartre (1914). In 2014 Paris received 22.4 million visitors, making it one of the world's top tourist destinations.
Paris is also known for its fashion, particularly the twice-yearly Paris Fashion Week, and for its haute cuisine,
and three-star restaurants. Most of France's major universities and grandes écoles are located in Paris, as are France's
major newspapers, including Le Monde, Le Figaro, and Libération. Paris is home to the association football club Paris
Saint-Germain and the rugby union club Stade Français. The 80,000-seat Stade de France, built for the 1998 FIFA World
Cup, is located just north of Paris in the commune of Saint-Denis. Paris hosts the annual French Open Grand Slam
tennis tournament on the red clay of Roland Garros. Paris played host to the 1900 and 1924 Summer Olympics, the 1938
and 1998 FIFA World Cups, and the 2007 Rugby World Cup. Every July, the Tour de France of cycling finishes in the
city. By the end of the Western Roman Empire, the town was known simply as Parisius in Latin and Paris in French.
Christianity was introduced in the middle of the 3rd century AD. According to tradition, it was brought by Saint
Denis, the first Bishop of Paris. When he refused to renounce his faith, he was beheaded on the hill which became
known as the "Mountain of Martyrs" (Mons Martyrum), eventually "Montmartre". His burial place became an important
religious shrine; the Basilica of Saint-Denis was built there and became the burial place of the French Kings. Clovis
the Frank, the first king of the Merovingian dynasty, made the city his capital from 508. Gradual Frankish immigration into Paris at the beginning of the Frankish domination of Gaul gave rise to the Parisian Francien dialects. Fortification of the Île-de-France failed to prevent sacking by Vikings in 845, but Paris' strategic
importance—with its bridges preventing ships from passing—was established by successful defence in the Siege of Paris
(885–86). In 987 Hugh Capet, Count of Paris (comte de Paris) and Duke of the Franks (duc des Francs), was elected King of the Franks (roi des Francs). Under the rule of the Capetian kings, Paris gradually became the largest and most
prosperous city in France. By the end of the 12th century, Paris had become the political, economic, religious, and
cultural capital of France. The Île de la Cité was the site of the royal palace. In 1163, during the reign of Louis
VII, Maurice de Sully, bishop of Paris, undertook the construction of the Notre Dame Cathedral at its eastern extremity.
The Left Bank was the site of the University of Paris, a corporation of students and teachers formed in the mid-12th
century to train scholars first in theology, and later in canon law, medicine and the arts. During the Hundred Years'
War, the army of the Duke of Burgundy and a force of about two hundred English soldiers occupied Paris from May 1420
until 1436. They repelled an attempt by Joan of Arc to liberate the city in 1429. A century later, during the French
Wars of Religion, Paris was a stronghold of the Catholic League. On 24 August 1572, Paris was the site of the St.
Bartholomew's Day massacre, when thousands of French Protestants were killed. The last of these wars, the eighth
one, ended in 1594, after Henry IV had converted to Catholicism and was finally able to enter Paris, reportedly declaring Paris vaut bien une messe ("Paris is well worth a Mass"). The city had been neglected for decades; by the time of his assassination in 1610, Henry IV had completed the Pont Neuf, the first Paris bridge with sidewalks that was not lined with buildings, linked the Louvre to the Tuileries Palace with a new wing, and created the first Paris residential square, the Place Royale, now the Place des Vosges. Louis XIV distrusted the Parisians and moved his court
to Versailles in 1682, but his reign also saw an unprecedented flourishing of the arts and sciences in Paris. The
Comédie-Française, the Academy of Painting, and the French Academy of Sciences were founded and made their headquarters
in the city. To show that the city was safe from attack, he had the city walls demolished, replacing them with
Grands Boulevards. To leave monuments to his reign, he built the Collège des Quatre-Nations, Place Vendôme, Place
des Victoires, and began Les Invalides. Louis XVI and the royal family were brought to Paris and made virtual prisoners
within the Tuileries Palace. In 1793, as the revolution turned more and more radical, the king, queen, and the mayor
were guillotined, along with more than 16,000 others (throughout France), during the Reign of Terror. The property
of the aristocracy and the church was nationalised, and the city's churches were closed, sold or demolished. A succession
of revolutionary factions ruled Paris until 9 November 1799 (coup d'état du 18 brumaire), when Napoléon Bonaparte
seized power as First Consul. Louis-Philippe was overthrown by a popular uprising in the streets of Paris in 1848.
His successor, Napoleon III, and the newly appointed prefect of the Seine, Georges-Eugène Haussmann, launched a gigantic
public works project to build wide new boulevards, a new opera house, a central market, new aqueducts, sewers, and
parks, including the Bois de Boulogne and Bois de Vincennes. In 1860, Napoleon III also annexed the surrounding towns
and created eight new arrondissements, expanding Paris to its current limits. Late in the 19th century, Paris hosted
two major international expositions: the 1889 Universal Exposition, held to mark the centennial of the French Revolution and featuring the new Eiffel Tower; and the 1900 Universal Exposition, which gave Paris the Pont Alexandre
III, the Grand Palais, the Petit Palais and the first Paris Métro line. Paris became the laboratory of Naturalism
(Émile Zola) and Symbolism (Charles Baudelaire and Paul Verlaine), and of Impressionism in art (Courbet, Manet, Monet,
Renoir). During the First World War, Paris sometimes found itself on the front line; 600 to 1,000 Paris taxis played
a small but highly important symbolic role in transporting 6,000 soldiers to the front line at the First Battle of
the Marne. The city was also bombed by Zeppelins and shelled by German long-range guns. In the years after the war,
known as Les Années Folles, Paris continued to be a mecca for writers, musicians and artists from around the world,
including Ernest Hemingway, Igor Stravinsky, James Joyce, Josephine Baker, Sidney Bechet and the surrealist Salvador
Dalí. On 14 June 1940, the German army marched into Paris, which had been declared an "open city". On 16–17 July
1942, following German orders, the French police and gendarmes arrested 12,884 Jews, including 4,115 children, and
confined them for five days at the Vel d'Hiv (Vélodrome d'Hiver), from which they were transported by train to
the extermination camp at Auschwitz. None of the children came back. On 25 August 1944, the city was liberated by
the French 2nd Armoured Division and the 4th Infantry Division of the United States Army. General Charles de Gaulle
led a huge and emotional crowd down the Champs Élysées towards Notre Dame de Paris, and made a rousing speech from
the Hôtel de Ville. In the 1950s and the 1960s, Paris became one front of the Algerian War for independence; in August
1961, the pro-independence FLN targeted and killed 11 Paris policemen, leading to the imposition of a curfew on Muslims
of Algeria (who, at that time, were French citizens). On 17 October 1961, an unauthorised but peaceful protest demonstration
of Algerians against the curfew led to violent confrontations between the police and demonstrators, in which at least
40 people were killed, including some thrown into the Seine. The anti-independence Organisation de l'armée secrète
(OAS), for their part, carried out a series of bombings in Paris throughout 1961 and 1962. Most of the postwar presidents of the Fifth Republic wanted to leave their own monuments in Paris: President Georges Pompidou started the Centre Georges Pompidou (1977); Valéry Giscard d'Estaing began the Musée d'Orsay (1986); President François Mitterrand, in power for 14 years, built the Opéra Bastille (1985-1989), the Bibliothèque nationale de France (1996), the Arche de la Défense (1985-1989), and the Louvre Pyramid with its underground courtyard (1983-1989); and Jacques Chirac built the Musée du quai Branly (2006). In the early 21st century, the population of Paris began to increase slowly again, as more
young people moved into the city. It reached 2.25 million in 2011. In March 2001, Bertrand Delanoë became the first
socialist mayor of Paris. In 2007, in an effort to reduce car traffic in the city, he introduced the Vélib', a system
which rents bicycles for the use of local residents and visitors. Bertrand Delanoë also transformed a section of
the highway along the left bank of the Seine into an urban promenade and park, the Promenade des Berges de la Seine,
which he inaugurated in June 2013. On 7 January 2015, two French Muslim extremists attacked the Paris headquarters
of Charlie Hebdo and killed twelve people, and on 9 January, a third terrorist killed four hostages during an attack
at a Jewish grocery store at Porte de Vincennes. On 11 January an estimated 1.5 million people marched in Paris–along
with international political leaders–to show solidarity against terrorism and in defence of freedom of speech. Ten
months later, on 13 November 2015, came a series of coordinated terrorist attacks in Paris and Saint-Denis, claimed by the Islamic State of Iraq and the Levant (ISIL, also known as ISIS or Daesh); 130 people were killed by gunfire and bombs, and more than
350 were injured. Seven of the attackers killed themselves and others by setting off their explosive vests. On the
morning of 18 November three suspected terrorists, including alleged planner of the attacks Abdelhamid Abaaoud, were
killed in a shootout with police in the Paris suburb of Saint-Denis. President Hollande declared France to be in
a three-month state of emergency. Paris is located in northern central France. By road it is 450 kilometres (280
mi) south-east of London, 287 kilometres (178 mi) south of Calais, 305 kilometres (190 mi) south-west of Brussels,
774 kilometres (481 mi) north of Marseille, 385 kilometres (239 mi) north-east of Nantes, and 135 kilometres (84
mi) south-east of Rouen. Paris is located in the north-bending arc of the river Seine and includes two islands, the
Île Saint-Louis and the larger Île de la Cité, which form the oldest part of the city. The river's mouth on the English Channel (La Manche) is about 375 km (233 mi) downstream of the city, whose site was first settled around 7600 BC. The city is spread
widely on both banks of the river. Overall, the city is relatively flat, and the lowest point is 35 m (115 ft) above
sea level. Paris has several prominent hills, the highest of which is Montmartre at 130 m (427 ft). Montmartre gained
its name from the martyrdom of Saint Denis, first bishop of Paris, atop the Mons Martyrum, "Martyr's mound", in 250.
Excluding the outlying parks of Bois de Boulogne and Bois de Vincennes, Paris covers an oval measuring about 87 km2
(34 sq mi) in area, enclosed by the 35 km (22 mi) ring road, the Boulevard Périphérique. The city's last major annexation
of outlying territories in 1860 not only gave it its modern form but also created the 20 clockwise-spiralling arrondissements
(municipal boroughs). From the 1860 area of 78 km2 (30 sq mi), the city limits were expanded marginally to 86.9 km2
(33.6 sq mi) in the 1920s. In 1929, the Bois de Boulogne and Bois de Vincennes forest parks were officially annexed
to the city, bringing its area to about 105 km2 (41 sq mi). The metropolitan area of the city is 2,300 km2 (890 sq
mi). Paris has a typical Western European oceanic climate (Köppen climate classification: Cfb), which is affected
by the North Atlantic Current. The overall climate throughout the year is mild and moderately wet. Summer days are
usually warm and pleasant with average temperatures hovering between 15 and 25 °C (59 and 77 °F), and a fair amount
of sunshine. Each year, however, there are a few days where the temperature rises above 32 °C (90 °F). Some years
have even witnessed long periods of harsh summer weather, such as the heat wave of 2003 when temperatures exceeded
30 °C (86 °F) for weeks, surged up to 40 °C (104 °F) on some days and seldom cooled down at night. More recently,
the average temperature for July 2011 was 17.6 °C (63.7 °F), with an average minimum temperature of 12.9 °C (55.2
°F) and an average maximum temperature of 23.7 °C (74.7 °F). Spring and autumn have, on average, mild days and fresh
nights, but are changeable and unstable. Surprisingly warm or cool weather occurs frequently in both seasons. In winter,
sunshine is scarce; days are cold but generally above freezing with temperatures around 7 °C (45 °F). Light night
frosts are however quite common, but the temperature will dip below −5 °C (23 °F) for only a few days a year. Snow
falls every year, but rarely stays on the ground. The city sometimes sees light snow or flurries with or without
accumulation. The mayor of Paris is elected indirectly by Paris voters; the voters of each arrondissement elect the
Conseil de Paris (Council of Paris), composed of 163 members. Each arrondissement has a number of members depending
upon its population, from 10 members for each of the least-populated arrondissements (1st through 9th) to 36 members
for the most populated (the 15th). The elected council members select the mayor. Sometimes the candidate who receives
the most votes city-wide is not selected if the other candidate has won the support of the majority of council members.
Mayor Bertrand Delanoë (2001-2014) was elected by only a minority of city voters, but a majority of council members.
Once elected, the council plays a largely passive role in the city government; it meets only once a month. The current
council is divided between a coalition of the left of 91 members, including the socialists, communists, greens, and
extreme left; and 71 members for the centre right, plus a few members from smaller parties. The budget of the city
for 2013 was €7.6 billion, of which €5.4 billion went for city administration, while €2.2 billion went for investment.
The largest part of the budget (38 percent) went for public housing and urbanism projects; 15 percent for roads and
transport; 8 percent for schools (which are mostly financed by the state budget); 5 percent for parks and gardens;
and 4 percent for culture. The main source of income for the city is direct taxes (35 percent), supplemented by a
13-percent real estate tax; 19 percent of the budget comes in a transfer from the national government. The Métropole
du Grand Paris, or Metropolis of Greater Paris, formally came into existence on January 1, 2016. It is an administrative
structure for cooperation between the City of Paris and its nearest suburbs. It includes the City of Paris, plus
the communes, or towns, of the three departments of the inner suburbs: Hauts-de-Seine, Seine-Saint-Denis and Val-de-Marne;
plus seven communes in the outer suburbs, including Argenteuil in Val d'Oise and Paray-Vieille-Poste in Essonne,
which were added to include the major airports of Paris. The Metropole covers 814 square kilometers and has a population
of 6.945 million persons. The new structure is administered by a Metropolitan Council of 210 members, not directly
elected, but chosen by the councils of the member Communes. By 2020 its basic competencies will include urban planning,
housing, and protection of the environment. The first president of the metropolitan council, Patrick Ollier, a Republican
and the mayor of the town of Rueil-Malmaison, was elected on January 22, 2016. Though the Metropole has a population
of nearly seven million persons and accounts for 25 percent of the GDP of France, it has a very small budget: just €65 million, compared with €8 billion for the City of Paris. The Region of Île-de-France, including
Paris and its surrounding communities, is governed by the Regional Council, which has its headquarters in the 7th
arrondissement of Paris. It is composed of 209 members representing the different communes within the region. On
December 15, 2015, a list of candidates of the Union of the Right, a coalition of centrist and right-wing parties,
led by Valérie Pécresse, narrowly won the regional election, defeating a coalition of Socialists and ecologists.
The Socialists had governed the region for seventeen years. In 2016, the new regional council will have 121 members
from the Union of the Right, 66 from the Union of the Left and 22 from the extreme right National Front. France's
highest courts are located in Paris. The Court of Cassation, the highest court in the judicial order, which reviews
criminal and civil cases, is located in the Palais de Justice on the Île de la Cité, while the Conseil d'État, which
provides legal advice to the executive and acts as the highest court in the administrative order, judging litigation
against public bodies, is located in the Palais-Royal in the 1st arrondissement. The Constitutional Council, an advisory
body with ultimate authority on the constitutionality of laws and government decrees, also meets in the Montpensier
wing of the Palais Royal. Paris and its region host the headquarters of several international organisations including
UNESCO, the Organisation for Economic Co-operation and Development, the International Chamber of Commerce, the Paris
Club, the European Space Agency, the International Energy Agency, the Organisation internationale de la Francophonie,
the European Union Institute for Security Studies, the International Bureau of Weights and Measures, the International
Exhibition Bureau and the International Federation for Human Rights. The security of Paris is mainly the responsibility
of the Prefecture of Police of Paris, a subdivision of the Ministry of the Interior of France. It supervises the
units of the National Police who patrol the city and the three neighbouring departments. It is also responsible for
providing emergency services, including the Paris Fire Brigade. Its headquarters is on Place Louis Lépine on the
Île de la Cité. There are 30,200 officers under the prefecture, and a fleet of more than 6,000 vehicles, including
police cars, motorcycles, fire trucks, boats and helicopters. In addition to traditional police duties, the local
police monitors the number of discount sales held by large stores (no more than two a year are allowed) and verifies that, during summer holidays, at least one bakery is open in every neighbourhood. The national police has its own
special unit for riot control and crowd control and security of public buildings, called the Compagnies Républicaines
de Sécurité (CRS), a unit formed in 1944 right after the liberation of France. Vans of CRS agents are frequently
seen in the centre of the city when there are demonstrations and public events. Most French rulers since the Middle
Ages made a point of leaving their mark on a city that, contrary to many other of the world's capitals, has never
been destroyed by catastrophe or war. In modernising its infrastructure through the centuries, Paris has preserved
even its earliest history in its street map. At its origin, before the Middle Ages, the city formed around several islands and sandbanks in a bend of the Seine; of those, two remain today, the Île Saint-Louis and the Île de la Cité; a third, the Île aux Cygnes, was artificially created in 1827. Modern Paris owes much to its late
19th century Second Empire remodelling by the Baron Haussmann: many of modern Paris' busiest streets, avenues and
boulevards today are a result of that city renovation. Paris also owes its style to its aligned street-fronts, distinctive
cream-grey "Paris stone" building ornamentation, aligned top-floor balconies, and tree-lined boulevards. The high
residential population of its city centre makes it much different from most other western global cities. Paris' urban planning has been strictly regulated since the early 17th century, particularly where street-front alignment, building height and building distribution are concerned. More recently, a 1974-2010 building height limitation of
37 metres (121 ft) was raised to 50 m (160 ft) in central areas and 180 metres (590 ft) in some of Paris' peripheral
quarters, yet for some of the city's more central quarters, even older building-height laws still remain in effect.
The 210-metre (690 ft) Tour Montparnasse, completed in 1973, was the tallest building in both Paris and France until the record passed to the Tour First tower in the La Défense quarter of Courbevoie on its completion in 2011. A new project
for La Défense, called Hermitage Plaza, launched in 2009, proposes to build two towers of 85 and 86 stories, or 320
metres high, which would be the tallest buildings in the European Union, just slightly shorter than the Eiffel Tower.
They were scheduled for completion in 2019 or 2020, but as of January 2015 construction had not yet begun, and there
were questions in the press about the future of the project. Parisian examples of European architecture date back
more than a millennium, including the Romanesque church of the Abbey of Saint-Germain-des-Prés (1014-1163); the early Gothic architecture of the Basilica of Saint-Denis (1144); the Notre Dame Cathedral (1163-1345); the Flamboyant Gothic of the Sainte-Chapelle (1239-1248); and the Baroque churches of Saint-Paul-Saint-Louis (1627-1641) and Les Invalides (1670-1708).
The 19th century produced the neoclassical church of La Madeleine (1808-1842); the Palais Garnier Opera House (1875);
the neo-Byzantine Basilica of Sacré-Cœur (1875-1919), and the exuberant Belle Époque modernism of the Eiffel Tower
(1889). Striking examples of 20th century architecture include the Centre Georges Pompidou by Richard Rogers and
Renzo Piano (1977), and the Louvre Pyramid by I.M. Pei (1989). Contemporary architecture includes the Musée du Quai
Branly by Jean Nouvel (2006) and the new contemporary art museum of the Louis Vuitton Foundation by Frank Gehry (2014).
In 2012 the Paris agglomeration (urban area) counted 28,800 people without a fixed residence, an increase of 84 percent
since 2001; it represents 43 percent of the homeless in all of France. Forty-one percent were women, and 29 percent
were accompanied by children. Fifty-six percent of the homeless were born outside France, the largest number coming
from Africa and Eastern Europe. The city of Paris has sixty homeless shelters, called Centres d'hébergement et de
réinsertion sociale or CHRS, which are funded by the city and operated by private charities and associations. Aside
from the 20th century addition of the Bois de Boulogne, Bois de Vincennes and Paris heliport, Paris' administrative
limits have remained unchanged since 1860. The Seine département had been governing Paris and its suburbs since its
creation in 1790, but the rising suburban population had made it difficult to govern as a single entity. This problem
was 'resolved' when its parent "District de la région parisienne" (Paris region) was reorganised into several new
departments from 1968: Paris became a department in itself, and the administration of its suburbs was divided between
the three departments surrounding it. The Paris region was renamed "Île-de-France" in 1977, but the "Paris region"
name is still commonly used today. Paris was reunited with its suburbs on January 1, 2016 when the Métropole du Grand
Paris came into existence. Paris' disconnect from its suburbs, in particular its lack of suburban transportation, became all too apparent with the growth of the Paris agglomeration. Paul Delouvrier promised to resolve the Paris-suburbs
mésentente when he became head of the Paris region in 1961: two of his most ambitious projects for the Region were
the construction of five suburban villes nouvelles ("new cities") and the RER commuter train network. Many other
suburban residential districts (grands ensembles) were built between the 1960s and 1970s to provide a low-cost solution
for a rapidly expanding population: these districts were socially mixed at first, but few residents actually owned
their homes (the growing economy made home ownership accessible to the middle classes only from the 1970s). Their poor construction quality and their haphazard insertion into existing urban growth contributed to their abandonment by those able to move elsewhere and their repopulation by those with fewer options. These areas, quartiers sensibles
("sensitive quarters"), are in northern and eastern Paris, namely around its Goutte d'Or and Belleville neighbourhoods.
To the north of the city they are grouped mainly in the Seine-Saint-Denis department, and to a lesser extent to the east in the Val-d'Oise department. Other difficult areas are located in the Seine valley, in Évry and Corbeil-Essonnes (Essonne), in Les Mureaux and Mantes-la-Jolie (Yvelines), and scattered among social housing districts created by Delouvrier's
1961 "ville nouvelle" political initiative. The population of Paris in its administrative city limits was 2,241,346
in January 2014. This makes Paris the fifth largest municipality in the European Union, following London, Berlin,
Madrid and Rome. Eurostat, the statistical agency of the EU, places Paris (6.5 million people) second behind London
(8 million) and ahead of Berlin (3.5 million), based on the 2012 populations of what Eurostat calls "urban audit
core cities". The Paris Urban Area, or "unité urbaine", is a statistical area created by the French statistical agency
INSEE to measure the population of built-up areas around the city. It is slightly smaller than the Paris Region.
According to INSEE, the Paris Urban Area had a population of 10,550,350 at the January 2012 census, the most populous
in the European Union, and third most populous in Europe, behind Istanbul and Moscow. The Paris Metropolitan Area
is the second most populous in the European Union after London with a population of 12,341,418 at the Jan. 2012 census.
The population of Paris today is lower than its historical peak of 2.9 million in 1921. The principal reasons were
a significant decline in household size, and a dramatic migration of residents to the suburbs between 1962 and 1975.
Factors in the migration included de-industrialisation, high rent, the gentrification of many inner quarters, the
transformation of living space into offices, and greater affluence among working families. The city's population
loss came to an end in the 21st century; the population estimate of July 2004 showed a population increase for the
first time since 1954, and the population reached 2,234,000 by 2009. According to Eurostat, the EU statistical agency,
in 2012 the Commune of Paris was the most densely populated city in the European Union, with 21,616 people per square
kilometre within the city limits (the NUTS-3 statistical area), ahead of Inner London West, which had 10,374 people
per square kilometre. According to the same census, three departments bordering Paris, Hauts-de-Seine, Seine-Saint-Denis
and Val-de-Marne, had population densities of over ten thousand people per square kilometre, ranking among the ten
most densely populated areas of the EU. The remaining group, people born in foreign countries with no French citizenship
at birth, are those defined as immigrants under French law. According to the 2012 census, 135,853 residents of the
city of Paris were immigrants from Europe, 112,369 were immigrants from the Maghreb, 70,852 from sub-Saharan Africa
and Egypt, 5,059 from Turkey, 91,297 from Asia (outside Turkey), 38,858 from the Americas, and 1,365 from the South
Pacific. Note that the immigrants from the Americas and the South Pacific in Paris are vastly outnumbered by migrants
from French overseas regions and territories located in these regions of the world. At the 2012 census, 59.5% of
jobs in the Paris Region were in market services (12.0% in wholesale and retail trade, 9.7% in professional, scientific,
and technical services, 6.5% in information and communication, 6.5% in transportation and warehousing, 5.9% in finance
and insurance, 5.8% in administrative and support services, 4.6% in accommodation and food services, and 8.5% in
various other market services), 26.9% in non-market services (10.4% in human health and social work activities, 9.6%
in public administration and defence, and 6.9% in education), 8.2% in manufacturing and utilities (6.6% in manufacturing
and 1.5% in utilities), 5.2% in construction, and 0.2% in agriculture. The Paris Region had 5.4 million salaried
employees in 2010, of whom 2.2 million were concentrated in 39 pôles d'emplois or business districts. The largest
of these, in terms of number of employees, is known in French as the QCA, or quartier central des affaires; it is
in the western part of the City of Paris, in the 2nd, 8th, 9th, 16th and 18th arrondissements. In 2010 it was the
workplace of 500,000 salaried employees, about thirty percent of the salaried employees in Paris and ten percent
of those in the Île-de-France. The largest sectors of activity in the central business district were finance and
insurance (16 percent of employees in the district) and business services (15 percent). The district also includes
a large concentration of department stores, shopping areas, hotels and restaurants, as well as government offices
and ministries. The second-largest business district in terms of employment is La Défense, just west of the city,
where many companies installed their offices in the 1990s. In 2010 it was the workplace of 144,600 employees, of
whom 38 percent worked in finance and insurance and 16 percent in business support services. Two other important districts,
Neuilly-sur-Seine and Levallois-Perret, are extensions of the Paris business district and of La Defense. Another
district, including Boulogne-Billancourt, Issy-les-Moulineaux and the southern part of the 15th arrondissement, is
a center of activity for the media and information technology. The Paris Region is France's leading region for economic
activity, with a 2012 GDP of €624 billion (US$687 billion). In 2011, its GDP ranked second among the regions of Europe
and its per-capita GDP was the 4th highest in Europe. While the Paris region's population accounted for 18.8 percent
of metropolitan France in 2011, the Paris region's GDP accounted for 30 percent of metropolitan France's GDP. In
2015 it hosted the world headquarters of 29 of the 31 Fortune Global 500 companies located in France. The Paris Region
economy has gradually shifted from industry to high-value-added service industries (finance, IT services, etc.) and
high-tech manufacturing (electronics, optics, aerospace, etc.). The concentration of the Paris region's most intense economic activity in the central Hauts-de-Seine department and the suburban La Défense business district places Paris' economic centre to the west of the city, in a triangle between the Opéra Garnier, La Défense and the Val de Seine. While the Paris economy is dominated by services and employment in the manufacturing sector has declined sharply, the region remains an important manufacturing centre, particularly for aeronautics, automobiles, and "eco" industries. The majority of Paris' salaried employees fill 370,000 business services jobs, concentrated in the north-western 8th, 16th and
17th arrondissements. Paris' financial service companies are concentrated in the central-western 8th and 9th arrondissement
banking and insurance district. Paris' department store district in the 1st, 6th, 8th and 9th arrondissements employs 10 percent of Paris workers, most of them women, with 100,000 of these registered in the retail trade. Fourteen percent of Parisians work in hotels and restaurants and other services to individuals. Nineteen percent of Paris employees work for the State, in either administration or education. The majority of Paris' healthcare and social workers
work at the hospitals and social housing concentrated in the peripheral 13th, 14th, 18th, 19th and 20th arrondissements.
Outside Paris, the La Défense district in the western Hauts-de-Seine department, specialising in finance, insurance and scientific research, employs 144,600, and the north-eastern Seine-Saint-Denis audiovisual sector has 200 media firms
and 10 major film studios. Paris' manufacturing is mostly focused in its suburbs, and the city itself has only around
75,000 manufacturing workers, most of whom are in the textile, clothing, leather goods and shoe trades. Paris region
manufacturing specialises in transportation, mainly automobiles, aircraft and trains, but this is in a sharp decline:
Paris proper manufacturing jobs dropped by 64 percent between 1990 and 2010, and the Paris region lost 48 percent
during the same period. Most of this is due to companies relocating outside the Paris region. The Paris region's
800 aerospace companies employ 100,000 workers. Four hundred automobile industry companies employ another 100,000 workers:
many of these are centred in the Yvelines department around the Renault and PSA-Citroen plants (this department alone
employs 33,000), but the industry as a whole suffered a major loss with the 2014 closing of a major Aulnay-sous-Bois
Citroen assembly plant. The southern Essonne department specialises in science and technology, and the south-eastern
Val-de-Marne, with its wholesale Rungis food market, specialises in food processing and beverages. The Paris region's
manufacturing losses are quickly being offset by eco-industries, which employ about 100,000 workers. In 2011, while
only 56,927 construction workers worked in Paris itself, its metropolitan area employed 246,639, in an activity centred
largely around the Seine-Saint-Denis (41,378) and Hauts-de-Seine (37,303) departments and the new business-park centres
appearing there. The average net household income (after social, pension and health insurance contributions) in Paris
was €36,085 for 2011. It ranged from €22,095 in the 19th arrondissement to €82,449 in the 7th arrondissement. The
median taxable income for 2011 was around €25,000 in Paris and €22,200 for Île-de-France. Generally speaking, incomes
are higher in the Western part of the city and in the western suburbs than in the northern and eastern parts of the
urban area.[citation needed] Unemployment was estimated at 8.2 percent in the city of Paris and 8.8 percent in the
Île-de-France region in the first quarter of 2015. It ranged from 7.6 percent in the wealthy Essonne department
to 13.1 percent in the Seine-Saint-Denis department, where many recent immigrants live. While Paris has some of the
richest neighbourhoods in France, it also has some of the poorest, mostly on the eastern side of the city. In 2012,
14 percent of households in the city earned less than €977 per month, the official poverty line. Twenty-five percent
of residents in the 19th arrondissement lived below the poverty line; 24 percent in the 18th, 22 percent in the 20th
and 18 percent in the 10th. In the city's wealthiest neighbourhood, the 7th arrondissement, 7 percent lived below
the poverty line; 8 percent in the 6th arrondissement; and 9 percent in the 16th arrondissement. There were 72.1
million visitors to the city's museums and monuments in 2013. The city's top tourist attraction was the Notre Dame
Cathedral, which welcomed 14 million visitors in 2013. The Louvre museum had more than 9.2 million visitors in 2013,
making it the most visited museum in the world. The other top cultural attractions in Paris in 2013 were the Basilique
du Sacré-Cœur (10.5 million visitors); the Eiffel Tower (6,740,000 visitors); the Centre Pompidou (3,745,000 visitors)
and Musée d'Orsay (3,467,000 visitors). In the Paris region, Disneyland Paris, in Marne-la-Vallée, 32 km (20 miles)
east of the centre of Paris, was the most visited tourist attraction in France, with 14.9 million visitors in 2013.
The centre of Paris contains the most visited monuments in the city, including the Notre Dame Cathedral and the Louvre
as well as the Sainte-Chapelle; Les Invalides, where the tomb of Napoleon is located, and the Eiffel Tower are located
on the Left Bank south-west of the centre. The banks of the Seine from the Pont de Sully to the Pont d'Iéna have
been listed as a UNESCO World Heritage Site since 1991. Other landmarks are laid out east to west along the historic
axis of Paris, which runs from the Louvre through the Tuileries Garden, the Luxor Column in the Place de la Concorde,
the Arc de Triomphe, to the Grande Arche of La Défense. As of 2013 the City of Paris had 1,570 hotels with 70,034
rooms, of which 55 were rated five-star, mostly belonging to international chains and mostly located close to the
centre and the Champs-Élysées. Paris has long been famous for its grand hotels. The Hotel Meurice, opened for British
travellers in 1817, was one of the first luxury hotels in Paris. The arrival of the railroads and the Paris Exposition
of 1855 brought the first flood of tourists and the first modern grand hotels; the Hôtel du Louvre (now an antiques
marketplace) in 1855; the Grand Hotel (now the Intercontinental LeGrand) in 1862; and the Hôtel Continental in 1878.
The Hôtel Ritz on Place Vendôme opened in 1898, followed by the Hôtel Crillon in an 18th-century building on the
Place de la Concorde in 1909; the Hotel Bristol on rue du Faubourg Saint-Honoré in 1925; and the Hotel George V in
1928. For centuries, Paris has attracted artists from around the world, who arrive in the city to educate themselves
and to seek inspiration from its vast pool of artistic resources and galleries. As a result, Paris has acquired a
reputation as the "City of Art". Italian artists were a profound influence on the development of art in Paris in
the 16th and 17th centuries, particularly in sculpture and reliefs. Painting and sculpture became the pride of the
French monarchy and the French royals commissioned many Parisian artists to adorn their palaces during the French
Baroque and Classicism era. Sculptors such as Girardon, Coysevox and Coustou acquired reputations as the finest artists
in the royal court in 17th-century France. Pierre Mignard became the first painter to King Louis XIV during this
period. In 1648, the Académie royale de peinture et de sculpture (Royal Academy of Painting and Sculpture) was established
to accommodate the dramatic interest in art in the capital. This served as France's top art school until 1793.
Paris was in its artistic prime in the 19th century and early 20th century, when it had a colony of artists established
in the city and in art schools associated with some of the finest painters of the times: Manet, Monet, Berthe Morisot,
Gauguin, Renoir and others. The French Revolution and political and social change in France had a profound influence
on art in the capital. Paris was central to the development of Romanticism in art, with painters such as Géricault.
Impressionism, Art Nouveau, Symbolism, Fauvism, Cubism and Art Deco movements all evolved in Paris. In the late 19th
century, many artists in the French provinces and worldwide flocked to Paris to exhibit their works in the numerous
salons and expositions and make a name for themselves. Artists such as Pablo Picasso, Henri Matisse, Vincent van
Gogh, Paul Cézanne, Jean Metzinger, Albert Gleizes, Henri Rousseau, Marc Chagall, Amedeo Modigliani and many others
became associated with Paris. Picasso, living in Montmartre, painted his famous La Famille de Saltimbanques and Les
Demoiselles d'Avignon between 1905 and 1907. Montmartre and Montparnasse became centres for artistic production.
The inventor Nicéphore Niépce produced the first permanent photograph on a polished pewter plate in Paris in 1825,
and then developed the process with Louis Daguerre. The work of Étienne-Jules Marey in the 1880s contributed considerably
to the development of modern photography. Photography came to occupy a central role in Parisian Surrealist activity,
in the works of Man Ray and Maurice Tabard. Numerous photographers achieved renown for their photography of Paris,
including Eugène Atget, noted for his depictions of street scenes, Robert Doisneau, noted for his playful pictures
of people and market scenes (among which Le baiser de l'hôtel de ville has become iconic of the romantic vision of
Paris), Marcel Bovis, noted for his night scenes, and others such as Jacques-Henri Lartigue and Cartier-Bresson.
Poster art also became an important art form in Paris in the late nineteenth century, through the work of Henri de
Toulouse-Lautrec, Jules Chéret, Eugène Grasset, Adolphe Willette, Pierre Bonnard, Georges de Feure, Henri-Gabriel
Ibels, Gavarni, and Alphonse Mucha. The Louvre was the world's most visited art museum in 2014, with 9.3 million
visitors. Its treasures include the Mona Lisa (La Joconde) and the Venus de Milo statue. Starkly apparent with its
service-pipe exterior, the Centre Georges Pompidou, the second-most visited art museum in Paris, also known as Beaubourg,
houses the Musée National d'Art Moderne. The Musée d'Orsay, in the former Orsay railway station, was the third-most
visited museum in the city in 2014; it displays French art of the 19th century, including major collections of the
Impressionists and Post-Impressionists. The original building, a railway station, was constructed for the Universal
Exhibition of 1900. The Musée du quai Branly was the fourth-most visited national museum in Paris in 2014; it displays
art objects from Africa, Asia, Oceania, and the Americas. The Musée national du Moyen Âge, or Cluny Museum, presents
Medieval art, including the famous tapestry cycle of The Lady and the Unicorn. The Guimet Museum, or Musée national
des arts asiatiques, has one of the largest collections of Asian art in Europe. There are also notable museums devoted
to individual artists, including the Picasso Museum, the Rodin Museum, and the Musée national Eugène Delacroix. Paris
hosts one of the largest science museums in Europe, the Cité des Sciences et de l'Industrie at La Villette. The National
Museum of Natural History, on the Left Bank, is famous for its dinosaur artefacts, mineral collections, and its Gallery
of Evolution. The military history of France, from the Middle Ages to World War II, is vividly presented by displays
at the Musée de l'Armée at Les Invalides, near the tomb of Napoleon. In addition to the national museums, run by
the French Ministry of Culture, the City of Paris operates 14 museums, including the Carnavalet Museum on the history
of Paris; Musée d'Art Moderne de la Ville de Paris; Palais de Tokyo; the House of Victor Hugo and House of Balzac,
and the Catacombs of Paris. There are also notable private museums: the Contemporary Art museum of the Louis Vuitton
Foundation, designed by architect Frank Gehry, opened in October 2014 in the Bois de Boulogne. The largest opera
houses of Paris are the 19th-century Opéra Garnier (historical Paris Opéra) and modern Opéra Bastille; the former
tends toward the more classic ballets and operas, and the latter provides a mixed repertoire of classic and modern.
In the middle of the 19th century, there were three other active and competing opera houses: the Opéra-Comique (which
still exists), Théâtre-Italien, and Théâtre Lyrique (which in modern times changed its profile and name to Théâtre
de la Ville). Philharmonie de Paris, the modern symphonic concert hall of Paris, opened in January 2015. Another
musical landmark is the Théâtre des Champs-Élysées, where the first performances of Diaghilev's Ballets Russes took
place in 1913. Theatre traditionally has occupied a large place in Parisian culture, and many of its most popular
actors today are also stars of French television. The oldest and most famous Paris theatre is the Comédie-Française,
founded in 1680. Run by the French government, it performs mostly French classics at the Salle Richelieu in the Palais-Royal
at 2 rue de Richelieu, next to the Louvre. Other famous theatres include the Odéon-Théâtre de l'Europe, next to
the Luxembourg Gardens, also a state institution and theatrical landmark; the Théâtre Mogador, and the Théâtre de
la Gaîté-Montparnasse. The music hall and cabaret are famous Paris institutions. The Moulin Rouge was opened in 1889.
It was highly visible because of its large red imitation windmill on its roof, and became the birthplace of the dance
known as the French Cancan. It helped make famous the singers Mistinguett and Édith Piaf and the painter Toulouse-Lautrec,
who made posters for the venue. In 1911, the dance hall Olympia Paris invented the grand staircase as a setting
for its shows, competing with its great rival, the Folies Bergère. Its stars in the 1920s included the American singer
and dancer Josephine Baker. The Casino de Paris presented many famous French singers, including Mistinguett, Maurice
Chevalier, and Tino Rossi. Other famous Paris music halls include Le Lido, on the Champs-Élysées, opened in 1946;
and the Crazy Horse Saloon, featuring strip-tease, dance and magic, opened in 1951. The Olympia Paris has presented
Édith Piaf, Marlene Dietrich, Miles Davis, Judy Garland, and the Grateful Dead. A half dozen music halls exist today
in Paris, attended mostly by visitors to the city. The first book printed in France, Epistolae ("Letters"), by Gasparinus
de Bergamo (Gasparino da Barzizza), was published in Paris in 1470 by the press established by Johann Heynlin. Since
then, Paris has been the centre of the French publishing industry, the home of some of the world's best-known writers
and poets, and the setting for many classic works of French literature. Almost all the books published in Paris in
the Middle Ages were in Latin, rather than French. Paris did not become the acknowledged capital of French literature
until the 17th century, with authors such as Boileau, Corneille, La Fontaine, Molière, Racine, several coming from
the provinces, and the foundation of the Académie française. In the 18th century, the literary life of Paris revolved
around the cafés and salons, and was dominated by Voltaire, Jean-Jacques Rousseau, Pierre de Marivaux, and Beaumarchais.
During the 19th century, Paris was the home and subject for some of France's greatest writers, including Charles
Baudelaire, Stéphane Mallarmé, Mérimée, Alfred de Musset, Marcel Proust, Émile Zola, Alexandre Dumas, Gustave Flaubert,
Guy de Maupassant and Honoré de Balzac. Victor Hugo's The Hunchback of Notre Dame inspired the renovation of its
setting, the Notre-Dame de Paris. Another of Victor Hugo's works, Les Misérables, written while he was in exile outside
France during the Second Empire, described the social change and political turmoil in Paris in the early 1830s. One
of the most popular of all French writers, Jules Verne, worked at the Theatre Lyrique and the Paris stock exchange,
while he did research for his stories at the National Library. In the 20th century, the Paris literary community
was dominated by Colette, André Gide, François Mauriac, André Malraux, Albert Camus, and, after World War II, by
Simone de Beauvoir and Jean-Paul Sartre. Between the wars it was the home of many important expatriate writers, including
Ernest Hemingway, Samuel Beckett, and, in the 1970s, Milan Kundera. The winner of the 2014 Nobel Prize in Literature,
Patrick Modiano, who lives in Paris, based most of his literary work on the depiction of the city during World War
II and the 1960s-1970s. Paris is a city of books and bookstores. In the 1970s, 80 percent of French-language publishing
houses were found in Paris, almost all on the Left Bank in the 5th, 6th and 7th arrondissements. Since that time,
because of high prices, some publishers have moved out to less expensive areas. It is also a city of small bookstores:
there are about 150 bookstores in the 5th arrondissement alone, plus another 250 book stalls along the Seine. Small
Paris bookstores are protected against competition from discount booksellers by French law; books, even e-books,
cannot be discounted more than five percent below their publisher's cover price. In the late 12th century, a school
of polyphony was established at Notre-Dame. A group of Parisian aristocrats, known as trouvères, became known
for their poetry and songs. Troubadours were also popular. During the reign of François I, the lute became popular
in the French court, and a national musical printing house was established. During the Renaissance era, the French
royals "disported themselves in masques, ballets, allegorical dances, recitals, and opera and comedy". Popular Baroque-era
composers included Jean-Baptiste Lully, Jean-Philippe Rameau, and François Couperin. The Conservatoire
de Musique de Paris was founded in 1795. By 1870, Paris had become an important centre for symphony, ballet and operatic
music. Romantic-era composers (in Paris) include Hector Berlioz (La Symphonie fantastique), Charles Gounod (Faust),
Camille Saint-Saëns (Samson et Dalila), Léo Delibes (Lakmé) and Jules Massenet (Thaïs), among others. Georges Bizet's
Carmen premiered 3 March 1875. Carmen has since become one of the most popular and frequently performed operas in
the classical canon. Impressionist composers Claude Debussy (La Mer) and Maurice Ravel (Boléro) also made significant
contributions to piano (Clair de lune, Miroirs), orchestra, opera (Pelléas et Mélisande), and other musical forms.
Foreign-born composers have made their homes in Paris and have made significant contributions both with their works
and their influence. They include Frédéric Chopin (Poland), Franz Liszt (Hungary), Jacques Offenbach (Germany), and
Igor Stravinsky (Russia). Bal-musette is a style of French music and dance that first became popular in Paris in
the 1870s and 1880s; by 1880 Paris had some 150 dance halls in the working-class neighbourhoods of the city. Patrons
danced the bourrée to the accompaniment of the cabrette (a bellows-blown bagpipe locally called a "musette") and
often the vielle à roue (hurdy-gurdy) in the cafés and bars of the city. Parisian and Italian musicians who played
the accordion adopted the style and established themselves in Auvergnat bars especially in the 19th arrondissement,
and the romantic sound of the accordion has since become one of the musical icons of the city. Paris became a major
centre for jazz and still attracts jazz musicians from all around the world to its clubs and cafés. Immediately after
the war, the Saint-Germain-des-Prés quarter and the nearby Saint-Michel quarter became home to many small jazz clubs,
mostly found in cellars because of a lack of space; these included the Caveau des Lorientais, the Club Saint-Germain,
the Rose Rouge, the Vieux-Colombier, and the most famous, Le Tabou. They introduced Parisians to the music of Claude
Luter, Boris Vian, Sidney Bechet, Mezz Mezzrow, and Henri Salvador. Most of the clubs closed by the early 1960s, as
musical tastes shifted toward rock and roll. The movie industry was born in Paris when Auguste and Louis Lumière
projected the first motion picture for a paying audience at the Grand Café on 28 December 1895. Many of Paris' concert/dance
halls were transformed into movie theatres when the medium became popular beginning in the 1930s. Later, most of the
largest cinemas were divided into multiple, smaller rooms. Paris' largest cinema today is the Grand Rex theatre,
with 2,700 seats. Big multiplex movie theatres have been built since the 1990s. UGC Ciné Cité Les Halles with 27
screens, MK2 Bibliothèque with 20 screens and UGC Ciné Cité Bercy with 18 screens are among the largest. Parisians
tend to share the same movie-going trends as many of the world's global cities, with cinemas primarily dominated
by Hollywood-generated film entertainment. French cinema comes a close second, with major directors (réalisateurs)
such as Claude Lelouch, Jean-Luc Godard, and Luc Besson, and the more slapstick/popular genre with director Claude
Zidi as an example. European and Asian films are also widely shown and appreciated. On 2 February 2000, Philippe
Binant realised the first digital cinema projection in Europe, with the DLP CINEMA technology developed by Texas
Instruments, in Paris. Since the late 18th century, Paris has been famous for its restaurants and haute cuisine,
food meticulously prepared and artfully presented. A luxury restaurant, La Taverne Anglaise, was opened in 1786 in the
arcades of the Palais-Royal by Antoine Beauvilliers; it featured an elegant dining room, an extensive menu, linen
tablecloths, a large wine list and well-trained waiters; it became a model for future Paris restaurants. The restaurant
Le Grand Véfour in the Palais-Royal dates from the same period. The famous Paris restaurants of the 19th century,
including the Café de Paris, the Rocher de Cancale, the Café Anglais, Maison Dorée and the Café Riche, were mostly
located near the theatres on the Boulevard des Italiens; they were immortalised in the novels of Balzac and Émile
Zola. Several of the best-known restaurants in Paris today appeared during the Belle Epoque, including Maxim's on
Rue Royale, Ledoyen in the gardens of the Champs-Élysées, and the Tour d'Argent on the Quai de la Tournelle. Today,
thanks to Paris' cosmopolitan population, every French regional cuisine and almost every national cuisine in the
world can be found there; the city has more than 9,000 restaurants. The Michelin Guide has been a standard guide
to French restaurants since 1900, awarding its highest award, three stars, to the best restaurants in France. In
2015, of the 29 Michelin three-star restaurants in France, nine are located in Paris. These include both restaurants
which serve classical French cuisine, such as L'Ambroisie in the Place des Vosges, and those which serve non-traditional
menus, such as L'Astrance, which combines French and Asian cuisines. Several of France's most famous chefs, including
Pierre Gagnaire, Alain Ducasse, Yannick Alléno and Alain Passard, have three-star restaurants in Paris. In addition
to the classical restaurants, Paris has several other kinds of traditional eating places. The café arrived in Paris
in the 17th century, when the beverage was first brought from Turkey, and by the 18th century Parisian cafés were
centres of the city's political and cultural life. The Cafe Procope on the Left Bank dates from this period. In the
20th century, the cafés of the Left Bank, especially Café de la Rotonde and Le Dôme Café in Montparnasse and Café
de Flore and Les Deux Magots on Boulevard Saint Germain, all still in business, were important meeting places for
painters, writers and philosophers. A bistro is a type of eating place loosely defined as a neighbourhood restaurant
with a modest decor and prices and a regular clientele and a congenial atmosphere. Its name is said to have come
in 1814 from the Russian soldiers who occupied the city; "bistro" means "quickly" in Russian, and they wanted their
meals served rapidly so they could get back to their encampment. Real bistros are increasingly rare in Paris, due to
rising costs, competition from cheaper ethnic restaurants, and different eating habits of Parisian diners. A brasserie
originally was a tavern located next to a brewery, which served beer and food at any hour. Beginning with the Paris
Exposition of 1867, it became a popular kind of restaurant which featured beer and other beverages served by young
women in the national costume associated with the beverage, particularly German costumes for beer. Now brasseries,
like cafés, serve food and drinks throughout the day. Paris has been an international capital of high fashion since
the 19th century, particularly in the domain of haute couture, clothing hand-made to order for private clients. It
is home of some of the largest fashion houses in the world, including Dior and Chanel, and of many well-known fashion
designers, including Karl Lagerfeld, Jean-Paul Gaultier, Christophe Josse and Christian Lacroix. Paris Fashion Week,
held in January and July in the Carrousel du Louvre and other city locations, is among the top four events of the
international fashion calendar, along with the fashion weeks in Milan, London and New York. Paris is also the home
of the world's largest cosmetics company, L'Oréal, and three of the five top global makers of luxury fashion accessories:
Louis Vuitton, Hermès and Cartier. The Paris region hosts France's highest concentration of the grandes écoles –
55 specialised centres of higher-education outside the public university structure. The prestigious public universities
are usually considered grands établissements. Most of the grandes écoles were relocated to the suburbs of Paris in
the 1960s and 1970s, in new campuses much larger than the old campuses within the crowded city of Paris, though the
École Normale Supérieure has remained on rue d'Ulm in the 5th arrondissement. There are a high number of engineering
schools, led by the Paris Institute of Technology which comprises several colleges such as École Polytechnique, École
des Mines, AgroParisTech, Télécom Paris, Arts et Métiers, and École des Ponts et Chaussées. There are also many business
schools, including HEC, INSEAD, ESSEC, and ESCP Europe. The administrative school ENA has been relocated
to Strasbourg, the political science school Sciences-Po is still located in Paris' 7th arrondissement, and the most
prestigious university of economics and finance, Paris-Dauphine, is located in Paris' 16th. CELSA, the Parisian school
of journalism and a department of Paris-Sorbonne University, is located in Neuilly-sur-Seine. Paris is also home
to several of France's most famous high-schools such as Lycée Louis-le-Grand, Lycée Henri-IV, Lycée Janson de Sailly
and Lycée Condorcet. The National Institute of Sport and Physical Education, located in the 12th arrondissement,
is both a physical education institute and high-level training centre for elite athletes. The Bibliothèque nationale
de France (BnF) operates public libraries in Paris, among them the François Mitterrand Library, Richelieu Library,
Louvois, Opéra Library, and Arsenal Library. There are three public libraries in the 4th arrondissement. The Forney
Library, in the Marais district, is dedicated to the decorative arts; the Arsenal Library occupies a former military
building, and has a large collection on French literature; and the Bibliothèque historique de la ville de Paris,
also in Le Marais, contains the Paris historical research service. The Sainte-Geneviève Library is in the 5th arrondissement;
designed by Henri Labrouste and built in the mid-1800s, it contains a rare book and manuscript division. Bibliothèque
Mazarine, in the 6th arrondissement, is the oldest public library in France. The Médiathèque Musicale Mahler in the
8th arrondissement opened in 1986 and contains collections related to music. The François Mitterrand Library (nicknamed
Très Grande Bibliothèque) in the 13th arrondissement was completed in 1994 to a design by Dominique Perrault and
contains four glass towers. There are several academic libraries and archives in Paris. The Sorbonne Library in the
5th arrondissement is the largest university library in Paris. In addition to the Sorbonne location, there are branches
in Malesherbes, Clignancourt-Championnet, Michelet-Institut d'Art et d'Archéologie, Serpente-Maison de la Recherche,
and Institut des Etudes Ibériques. Other academic libraries include Interuniversity Pharmaceutical Library, Leonardo
da Vinci University Library, Paris School of Mines Library, and the René Descartes University Library. Like the rest
of France, Paris has been predominantly Roman Catholic since the early Middle Ages, though religious attendance is
now low. A majority of Parisians are still nominally Roman Catholic. According to 2011 statistics, there are 106
parishes and curates in the city, plus separate parishes for Spanish, Polish and Portuguese Catholics. There are
an additional 10 Eastern Orthodox parishes, and bishops for the Armenian and Ukrainian Orthodox Churches. In addition
there are eighty male religious orders and 140 female religious orders in the city, as well as 110 Catholic schools
with 75,000 students. Almost all Protestant denominations are represented in Paris, with 74 evangelical churches
from various denominations, including 21 parishes of the United Protestant Church of France and two parishes of the
Church of Jesus Christ of Latter-day Saints. There are several important churches for the English-speaking community:
the American Church in Paris, founded in 1814, was the first American church outside the United States; the current
church was finished in 1931. The Saint George's Anglican Church in the 16th arrondissement is the principal Anglican
church in the city. During the Middle Ages, Paris was a center of Jewish learning with famous Talmudic scholars,
such as Yechiel of Paris who took part in the Disputation of Paris between Christian and Jewish intellectuals. The
Parisian Jewish community was victim of persecution, alternating expulsions and returns, until France became the
first country in Europe to emancipate its Jewish population during the French Revolution. Although 75% of the Jewish
population in France survived the Holocaust during World War II, half the city's Jewish population perished in Nazi
concentration camps, while some others fled abroad. A large migration of North African Sephardic Jews settled in Paris
in the 1960s, and they represent most of the Paris Jewish community today. There are currently 83 synagogues in the city;
the Agoudas Hakehilos Synagogue in the Marais quarter, built in 1913 by architect Hector Guimard, is a Paris landmark. The
Pagode de Vincennes Buddhist temple, near Lake Daumesnil in the Bois de Vincennes, is the former Cameroon pavilion
from the 1931 Paris Colonial Exposition. It hosts several different schools of Buddhism, and does not have a single
leader. It shelters the biggest Buddha statue in Europe, more than nine metres high. There are two other small temples
located in the Asian community in the 13th arrondissement. A Hindu temple, dedicated to Ganesh, on Rue Pajol in the
18th arrondissement, opened in 1985. Paris' most popular sport clubs are the association football club Paris Saint-Germain
F.C. and the rugby union club Stade Français. The 80,000-seat Stade de France, built for the 1998 FIFA World Cup,
is located just north of Paris in the commune of Saint-Denis. It is used for football, rugby union and track and
field athletics. It hosts the French national football team for friendlies and major tournament qualifiers, annually
hosts the French national rugby team's home matches of the Six Nations Championship, and hosts several important
matches of the Stade Français rugby team. In addition to Paris Saint-Germain FC, the city has a number of other amateur
football clubs: Paris FC, Red Star, RCF Paris and Stade Français Paris. Paris is a major rail, highway, and air transport
hub. The Syndicat des transports d'Île-de-France (STIF), formerly Syndicat des transports parisiens (STP), oversees
the transit network in the region. The syndicate coordinates public transport and contracts it out to the RATP (operating
347 bus lines, the Métro, eight tramway lines, and sections of the RER), the SNCF (operating suburban rails, one
tramway line and the other sections of the RER) and the Optile consortium of private operators managing 1,176 bus
lines. In addition, the Paris region is served by a light rail network of nine lines, the tramway: Line T1 runs from
Asnières-Gennevilliers to Noisy-le-Sec, line T2 runs from Pont de Bezons to Porte de Versailles, line T3a runs from
Pont du Garigliano to Porte de Vincennes, line T3b runs from Porte de Vincennes to Porte de la Chapelle, line T5
runs from Saint-Denis to Garges-Sarcelles, line T6 runs from Châtillon to Velizy, line T7 runs from Villejuif to
Athis-Mons, line T8 runs from Saint-Denis to Épinay-sur-Seine and Villetaneuse, all of which are operated by the
Régie Autonome des Transports Parisiens, and line T4 runs from Bondy RER to Aulnay-sous-Bois, which is operated by
the state rail carrier SNCF. Five new light rail lines are currently in various stages of development. Paris is a
major international air transport hub with the 4th busiest airport system in the world. The city is served by three
commercial international airports: Paris-Charles de Gaulle, Paris-Orly and Beauvais-Tillé. Together these three airports
recorded traffic of 96.5 million passengers in 2014. There is also one general aviation airport, Paris-Le Bourget,
historically the oldest Parisian airport and closest to the city centre, which is now used only for private business
flights and air shows. Orly Airport, located in the southern suburbs of Paris, replaced Le Bourget as the principal
airport of Paris from the 1950s to the 1980s. Charles de Gaulle Airport, located on the edge of the northern suburbs
of Paris, opened to commercial traffic in 1974 and became the busiest Parisian airport in 1993. Today it is the 4th
busiest airport in the world by international traffic, and is the hub for the nation's flag carrier Air France. Beauvais-Tillé
Airport, located 69 km (43 mi) north of Paris' city centre, is used by charter airlines and low-cost carriers such
as Ryanair. Paris in its early history had only the Seine and Bièvre rivers for water. From 1809, the Canal de l'Ourcq
provided Paris with water from less-polluted rivers to the north-east of the capital. From 1857, the civil engineer
Eugène Belgrand, under Napoleon III, oversaw the construction of a series of new aqueducts that brought water from
locations all around the city to several reservoirs built atop the capital's highest points of elevation. From then
on, the new reservoir system became Paris' principal source of drinking water, and the remains of the old system,
pumped into lower levels of the same reservoirs, were from then on used for the cleaning of Paris' streets. This
system is still a major part of Paris' modern water-supply network. Today Paris has more than 2,400 km (1,491 mi)
of underground passageways dedicated to the evacuation of Paris' liquid wastes. Paris today has more than 421 municipal
parks and gardens, covering more than 3,000 hectares and containing more than 250,000 trees. Two of Paris' oldest
and most famous gardens are the Tuileries Garden, created in 1564 for the Tuileries Palace, and redone by André Le
Nôtre between 1664 and 1672, and the Luxembourg Garden, for the Luxembourg Palace, built for Marie de' Medici in
1612, which today houses the French Senate. The Jardin des Plantes was the first botanical garden in Paris, created
in 1626 by Louis XIII's doctor Guy de La Brosse for the cultivation of medicinal plants. Between 1853 and 1870, the
Emperor Napoleon III and the city's first director of parks and gardens, Jean-Charles Alphand, created the Bois de
Boulogne, the Bois de Vincennes, Parc Montsouris and the Parc des Buttes-Chaumont, located at the four points of
the compass around the city, as well as many smaller parks, squares and gardens in Paris' quarters. Since 1977,
the city has created 166 new parks, most notably the Parc de la Villette (1987), Parc André Citroën (1992), and Parc
de Bercy (1997). One of the newest parks, the Promenade des Berges de la Seine (2013), built on a former highway
on the Left Bank of the Seine between the Pont de l'Alma and the Musée d'Orsay, has floating gardens and gives a
view of the city's landmarks. In Paris' Roman era, its main cemetery was located on the outskirts of the Left Bank
settlement, but this changed with the rise of Catholicism, when almost every inner-city church had adjoining burial
grounds for use by their parishes. With Paris' growth many of these, particularly the city's largest cemetery, les
Innocents, were filled to overflowing, creating quite unsanitary conditions for the capital. When inner-city burials
were condemned from 1786, the contents of all Paris' parish cemeteries were transferred to a renovated section of
Paris' stone mines outside the "Porte d'Enfer" city gate, today place Denfert-Rochereau in the 14th arrondissement.
The process of moving bones from Cimetière des Innocents to the catacombs took place between 1786 and 1814; part
of the network of tunnels and remains can be visited today on the official tour of the catacombs. After a tentative
creation of several smaller suburban cemeteries, the Prefect Nicholas Frochot under Napoleon Bonaparte provided a
more definitive solution in the creation of three massive Parisian cemeteries outside the city limits. Open from
1804, these were the cemeteries of Père Lachaise, Montmartre, Montparnasse, and later Passy; these cemeteries became
inner-city once again when Paris annexed all neighbouring communes to the inside of its much larger ring of suburban
fortifications in 1860. New suburban cemeteries were created in the early 20th century: the largest of these are
the Cimetière parisien de Saint-Ouen, the Cimetière parisien de Pantin (also known as the Cimetière parisien de Pantin-Bobigny),
the Cimetière parisien d'Ivry, and the Cimetière parisien de Bagneux. Some of the most famous people
in the world are buried in Parisian cemeteries. Health care and emergency medical service in the city of Paris and
its suburbs are provided by the Assistance publique - Hôpitaux de Paris (AP-HP), a public hospital system that employs
more than 90,000 people (including practitioners, support personnel, and administrators) in 44 hospitals. It is the
largest hospital system in Europe. It provides health care, teaching, research, prevention, education and emergency
medical service in 52 branches of medicine. The hospitals receive more than 5.8 million annual patient visits. Paris
and its close suburbs are home to numerous newspapers, magazines and publications including Le Monde, Le Figaro, Libération,
Le Nouvel Observateur, Le Canard enchaîné, La Croix, Pariscope, Le Parisien (in Saint-Ouen), Les Échos, Paris Match
(Neuilly-sur-Seine), Réseaux & Télécoms, Reuters France, and L'Officiel des Spectacles. France's two most prestigious
newspapers, Le Monde and Le Figaro, are the centrepieces of the Parisian publishing industry. Agence France-Presse
is France's oldest, and one of the world's oldest, continually operating news agencies. AFP, as it is colloquially
abbreviated, maintains its headquarters in Paris, as it has since 1835. France 24 is a television news channel owned
and operated by the French government, and is based in Paris. Another news agency is France Diplomatie, owned and
operated by the Ministry of Foreign and European Affairs, and pertains solely to diplomatic news and occurrences.
The most-viewed network in France, TF1, is in nearby Boulogne-Billancourt; France 2, France 3, Canal+, France 5,
M6 (Neuilly-sur-Seine), Arte, D8, W9, NT1, NRJ 12, La Chaîne parlementaire, France 4, BFM TV, and Gulli are other
stations located in and around the capital. Radio France, France's public radio broadcaster, and its various channels,
is headquartered in Paris' 16th arrondissement. Radio France Internationale, another public broadcaster, is also based
in the city. Paris also holds the headquarters of La Poste, France's national postal carrier.
Apollo (Attic, Ionic, and Homeric Greek: Ἀπόλλων, Apollōn (GEN Ἀπόλλωνος); Doric: Ἀπέλλων, Apellōn; Arcadocypriot: Ἀπείλων,
Apeilōn; Aeolic: Ἄπλουν, Aploun; Latin: Apollō) is one of the most important and complex of the Olympian deities
in classical Greek and Roman religion and Greek and Roman mythology. The ideal of the kouros (a beardless, athletic
youth), Apollo has been variously recognized as a god of music, truth and prophecy, healing, the sun and light, plague,
poetry, and more. Apollo is the son of Zeus and Leto, and has a twin sister, the chaste huntress Artemis. Apollo
is known in Greek-influenced Etruscan mythology as Apulu. As the patron of Delphi (Pythian Apollo), Apollo was an
oracular god—the prophetic deity of the Delphic Oracle. Medicine and healing are associated with Apollo, whether
through the god himself or mediated through his son Asclepius, yet Apollo was also seen as a god who could bring
ill-health and deadly plague. Amongst the god's custodial charges, Apollo became associated with dominion over colonists,
and as the patron defender of herds and flocks. As the leader of the Muses (Apollon Musegetes) and director of their
choir, Apollo functioned as the patron god of music and poetry. Hermes created the lyre for him, and the instrument
became a common attribute of Apollo. Hymns sung to Apollo were called paeans. In Hellenistic times, especially during
the 3rd century BCE, as Apollo Helios he became identified among Greeks with Helios, Titan god of the sun, and his
sister Artemis similarly equated with Selene, Titan goddess of the moon. In Latin texts, on the other hand, Joseph
Fontenrose declared himself unable to find any conflation of Apollo with Sol among the Augustan poets of the 1st
century, not even in the conjurations of Aeneas and Latinus in Aeneid XII (161–215). Apollo and Helios/Sol remained
separate beings in literary and mythological texts until the 3rd century CE. The etymology of the name is uncertain.
The spelling Ἀπόλλων (pronounced [a.pól.lɔːn] in Classical Attic) had almost superseded all other forms by the beginning
of the common era, but the Doric form Apellon (Ἀπέλλων) is more archaic, derived from an earlier *Ἀπέλjων. It is probably
cognate with the Doric month Apellaios (Ἀπελλαῖος), and the offerings apellaia (ἀπελλαῖα) at the initiation of
the young men during the family-festival apellai (ἀπέλλαι). According to some scholars the words are derived from
the Doric word apella (ἀπέλλα), which originally meant "wall," "fence for animals" and later "assembly within the
limits of the square." Apella (Ἀπέλλα) is the name of the popular assembly in Sparta, corresponding to the ecclesia
(ἐκκλησία). R. S. P. Beekes rejected the connection of the theonym with the noun apellai and suggested a Pre-Greek
proto-form *Apalyun. Several instances of popular etymology are attested from ancient authors. Thus, the Greeks most
often associated Apollo's name with the Greek verb ἀπόλλυμι (apollymi), "to destroy". Plato in Cratylus connects
the name with ἀπόλυσις (apolysis), "redemption", with ἀπόλουσις (apolousis), "purification", and with ἁπλοῦν ([h]aploun),
"simple", in particular in reference to the Thessalian form of the name, Ἄπλουν, and finally with Ἀειβάλλων (aeiballon),
"ever-shooting". Hesychius connects the name Apollo with the Doric ἀπέλλα (apella), which means "assembly", so that
Apollo would be the god of political life, and he also gives the explanation σηκός (sekos), "fold", in which case
Apollo would be the god of flocks and herds. In the Ancient Macedonian language πέλλα (pella) means "stone," and
some toponyms may be derived from this word: Πέλλα (Pella, the capital of Ancient Macedonia) and Πελλήνη (Pellēnē/Pallene).
A number of non-Greek etymologies have been suggested for the name. The Hittite form Apaliunas (dx-ap-pa-li-u-na-aš)
is attested in the Manapa-Tarhunta letter, perhaps related to Hurrian (and certainly the Etruscan) Aplu, a god of
plague, in turn likely from Akkadian Aplu Enlil meaning simply "the son of Enlil", a title that was given to the
god Nergal, who was linked to Shamash, Babylonian god of the sun. The role of Apollo as god of plague is evident
in the invocation of Apollo Smintheus ("mouse Apollo") by Chryses, the Trojan priest of Apollo, with the purpose
of sending a plague against the Greeks (the reasoning behind a god of the plague becoming a god of healing is of
course apotropaic, meaning that the god responsible for bringing the plague must be appeased in order to remove the
plague). As sun-god and god of light, Apollo was also known by the epithets Aegletes (/əˈɡliːtiːz/ ə-GLEE-teez; Αἰγλήτης,
Aiglētēs, from αἴγλη, "light of the sun"), Helius (/ˈhiːliəs/ HEE-lee-əs; Ἥλιος, Helios, literally "sun"), Phanaeus
(/fəˈniːəs/ fə-NEE-əs; Φαναῖος, Phanaios, literally "giving or bringing light"), and Lyceus (/laɪˈsiːəs/ ly-SEE-əs;
Λύκειος, Lykeios, from Proto-Greek *λύκη, "light"). The meaning of the epithet "Lyceus" later became associated with
Apollo's mother Leto, who was the patron goddess of Lycia (Λυκία) and who was identified with the wolf (λύκος), earning
him the epithets Lycegenes (/laɪˈsɛdʒəniːz/ ly-SEJ-ə-neez; Λυκηγενής, Lukēgenēs, literally "born of a wolf" or "born
of Lycia") and Lycoctonus (/laɪˈkɒktənəs/ ly-KOK-tə-nəs; Λυκοκτόνος, Lykoktonos, from λύκος, "wolf", and κτείνειν,
"to kill"). As god of the sun, Apollo was called Sol (/ˈsɒl/ SOL, literally "sun" in Latin) by the Romans. Apollo
was worshipped as Actiacus (/ækˈtaɪ.əkəs/ ak-TY-ə-kəs; Ἄκτιακός, Aktiakos, literally "Actian"), Delphinius (/dɛlˈfɪniəs/
del-FIN-ee-əs; Δελφίνιος, Delphinios, literally "Delphic"), and Pythius (/ˈpɪθiəs/ PITH-ee-əs; Πύθιος, Puthios, from
Πυθώ, Pythō, the area around Delphi), after Actium (Ἄκτιον) and Delphi (Δελφοί) respectively, two of his principal
places of worship. An etiology in the Homeric hymns associated the epithet "Delphinius" with dolphins. He was worshipped
as Acraephius (/əˈkriːfiəs/ ə-KREE-fee-əs; Ἀκραιφιος, Akraiphios, literally "Acraephian") or
Acraephiaeus (/əˌkriːfiˈiːəs/ ə-KREE-fee-EE-əs; Ἀκραιφιαίος, Akraiphiaios, literally "Acraephian") in the Boeotian
town of Acraephia (Ἀκραιφία), reputedly founded by his son Acraepheus; and as Smintheus (/ˈsmɪnθjuːs/ SMIN-thews;
Σμινθεύς, Smintheus, "Sminthian"—that is, "of the town of Sminthos or Sminthe") near the Troad town of Hamaxitus.
The epithet "Smintheus" has historically been confused with σμίνθος, "mouse", in association with Apollo's role as
a god of disease. For this he was also known as Parnopius (/pɑːrˈnoʊpiəs/ par-NOH-pee-əs; Παρνόπιος, Parnopios, from
πάρνοψ, "locust") and to the Romans as Culicarius (/ˌkjuːlᵻˈkæriəs/ KEW-li-KARR-ee-əs; from Latin culicārius, "of
midges"). In Apollo's role as a healer, his appellations included Acesius (/əˈsiːʒəs/ ə-SEE-zhəs; Ἀκέσιος, Akesios,
from ἄκεσις, "healing"), Acestor (/əˈsɛstər/ ə-SES-tər; Ἀκέστωρ, Akestōr, literally "healer"), Paean (/ˈpiːən/ PEE-ən;
Παιάν, Paiān, from παίειν, "to touch"), and Iatrus (/aɪˈætrəs/ eye-AT-rəs; Ἰατρός, Iātros, literally
"physician"). Acesius was the epithet of Apollo worshipped in Elis, where he had a temple in the agora. The Romans
referred to Apollo as Medicus (/ˈmɛdᵻkəs/ MED-i-kəs; literally "physician" in Latin) in this respect. A temple was
dedicated to Apollo Medicus at Rome, probably next to the temple of Bellona. As a protector and founder, Apollo had
the epithets Alexicacus (/əˌlɛksᵻˈkeɪkəs/ ə-LEK-si-KAY-kəs; Ἀλεξίκακος, Alexikakos, literally "warding off evil"),
Apotropaeus (/əˌpɒtrəˈpiːəs/ ə-POT-rə-PEE-əs; Ἀποτρόπαιος, Apotropaios, from ἀποτρέπειν, "to avert"), and Epicurius
(/ˌɛpᵻˈkjʊriəs/ EP-i-KEWR-ee-əs; Ἐπικούριος, Epikourios, from ἐπικουρέειν, "to aid"), and Archegetes (/ɑːrˈkɛdʒətiːz/
ar-KEJ-ə-teez; Ἀρχηγέτης, Arkhēgetēs, literally "founder"), Clarius (/ˈklæriəs/ KLARR-ee-əs; Κλάριος, Klārios, from
Doric κλάρος, "allotted lot"), and Genetor (/ˈdʒɛnᵻtər/ JEN-i-tər; Γενέτωρ, Genetōr, literally "ancestor"). To the
Romans, he was known in this capacity as Averruncus (/ˌævəˈrʌŋkəs/ AV-ər-RUNG-kəs; from Latin āverruncare, "to avert").
He was also called Agyieus (/əˈdʒaɪ.ᵻjuːs/ ə-GWEE-ews; Ἀγυιεύς, Aguīeus, from ἄγυια, "street") for his role in protecting
roads and homes; and Nomius (/ˈnoʊmiəs/ NOH-mee-əs; Νόμιος, Nomios, literally "pastoral") and Nymphegetes (/nɪmˈfɛdʒᵻtiːz/
nim-FEJ-i-teez; Νυμφηγέτης, Numphēgetēs, from Νύμφη, "Nymph", and ἡγέτης, "leader") for his role as a protector of
shepherds and pastoral life. In his role as god of prophecy and truth, Apollo had the epithets Manticus (/ˈmæntᵻkəs/
MAN-ti-kəs; Μαντικός, Mantikos, literally "prophetic"), Leschenorius (/ˌlɛskᵻˈnɔəriəs/ LES-ki-NOHR-ee-əs; Λεσχηνόριος,
Leskhēnorios, from λεσχήνωρ, "converser"), and Loxias (/ˈlɒksiəs/ LOK-see-əs; Λοξίας, Loxias, from λέγειν, "to say").
The epithet "Loxias" has historically been associated with λοξός, "ambiguous". In this respect, the Romans called
him Coelispex (/ˈsɛlᵻspɛks/ SEL-i-speks; from Latin coelum, "sky", and specere, "to look at"). The epithet Iatromantis
(/aɪˌætrəˈmæntɪs/ eye-AT-rə-MAN-tis; Ἰατρομάντις, Iātromantis, from ἰατρός, "physician", and μάντις, "prophet") refers
to both his role as a god of healing and of prophecy. As god of music and arts, Apollo had the epithet Musagetes
(/mjuːˈsædʒᵻtiːz/ mew-SAJ-i-teez; Doric Μουσαγέτας, Mousāgetās) or Musegetes (/mjuːˈsɛdʒᵻtiːz/ mew-SEJ-i-teez; Μουσηγέτης,
Mousēgetēs, from Μούσα, "Muse", and ἡγέτης, "leader"). As a god of archery, Apollo was known as Aphetor (/əˈfiːtər/
ə-FEE-tər; Ἀφήτωρ, Aphētōr, from ἀφίημι, "to let loose") or Aphetorus (/əˈfɛtərəs/ ə-FET-ər-əs; Ἀφητόρος, Aphētoros,
of the same origin), Argyrotoxus (/ˌɑːrdʒᵻrəˈtɒksəs/ AR-ji-rə-TOK-səs; Ἀργυρότοξος, Argyrotoxos, literally "with
silver bow"), Hecaërgus (/ˌhɛkiˈɜːrɡəs/ HEK-ee-UR-gəs; Ἑκάεργος, Hekaergos, literally "far-shooting"), and Hecebolus
(/hᵻˈsɛbələs/ hi-SEB-ə-ləs; Ἑκηβόλος, Hekēbolos, literally "far-shooting"). The Romans referred to Apollo as Articenens
(/ɑːrˈtɪsᵻnənz/ ar-TISS-i-nənz; "bow-carrying"). Apollo was called Ismenius (/ɪzˈmiːniəs/ iz-MEE-nee-əs; Ἰσμηνιός,
Ismēnios, literally "of Ismenus") after Ismenus, the son of Amphion and Niobe, whom he struck with an arrow. The
cult centers of Apollo in Greece, Delphi and Delos, date from the 8th century BCE. The Delos sanctuary was primarily
dedicated to Artemis, Apollo's twin sister. At Delphi, Apollo was venerated as the slayer of Pytho. For the Greeks,
Apollo was all the Gods in one and through the centuries he acquired different functions which could originate from
different gods. In archaic Greece he was the prophet, the oracular god who in older times was connected with "healing".
In classical Greece he was the god of light and of music, but in popular religion he had a strong function to keep
away evil. Walter Burkert discerned three components in the prehistory of Apollo worship, which he termed "a Dorian-northwest
Greek component, a Cretan-Minoan component, and a Syro-Hittite component." From his eastern origin Apollo brought
the art of inspection of "symbols and omina" (σημεία και τέρατα : semeia kai terata), and of the observation of the
omens of the days. The inspiration oracular-cult was probably introduced from Anatolia. The ritualism belonged to
Apollo from the beginning. The Greeks created the legalism, the supervision of the orders of the gods, and the demand
for moderation and harmony. Apollo became the god of shining youth, the protector of music, spiritual-life, moderation
and perceptible order. The improvement of the old Anatolian god, and his elevation to an intellectual sphere, may
be considered an achievement of the Greek people. The function of Apollo as a "healer" is connected with Paean (Παιών-Παιήων),
the physician of the Gods in the Iliad, who seems to come from a more primitive religion. Paeοn is probably connected
with the Mycenean pa-ja-wo-ne (Linear B: 𐀞𐀊𐀍𐀚), but this is not certain. He did not have a separate cult, but
he was the personification of the holy magic-song sung by the magicians that was supposed to cure disease. Later
the Greeks knew the original meaning of the relevant song "paean" (παιάν). The magicians were also called "seer-doctors"
(ἰατρομάντεις), and they used an ecstatic prophetic art which was used exactly by the god Apollo at the oracles.
Some common epithets of Apollo as a healer are "paion" (παιών, literally "healer" or "helper") "epikourios" (ἐπικουρώ,
"help"), "oulios" (οὐλή, "healed wound", also a "scar" ) and "loimios" (λοιμός, "plague"). In classical times, his
strong function in popular religion was to keep away evil, and was therefore called "apotropaios" (ἀποτρέπω, "divert",
"deter", "avert") and "alexikakos" (from v. ἀλέξω + n. κακόν, "defend from evil"). In later writers, the word, usually
spelled "Paean", becomes a mere epithet of Apollo in his capacity as a god of healing. Homer depicts Paeon both as the
god and as the song of apotropaic thanksgiving or triumph. Such songs were originally addressed
to Apollo, and afterwards to other gods: to Dionysus, to Apollo Helios, to Apollo's son Asclepius the healer. About
the 4th century BCE, the paean became merely a formula of adulation; its object was either to implore protection
against disease and misfortune, or to offer thanks after such protection had been rendered. It was in this way that
Apollo had become recognised as the god of music. Apollo's role as the slayer of the Python led to his association
with battle and victory; hence it became the Roman custom for a paean to be sung by an army on the march and before
entering into battle, when a fleet left the harbour, and also after a victory had been won. The connection with Dorians
and their initiation festival apellai is reinforced by the month Apellaios in northwest Greek
calendars, but it can explain only the Doric type of the name, which is connected with the Ancient Macedonian word
"pella" (Pella), stone. Stones played an important part in the cult of the god, especially in the oracular shrine
of Delphi (Omphalos). The "Homeric hymn" represents Apollo as a Northern intruder. His arrival must have occurred
during the "Dark Ages" that followed the destruction of the Mycenaean civilization, and his conflict with Gaia (Mother
Earth) was represented by the legend of his slaying her daughter the serpent Python. The earth deity had power over
the ghostly world, and it is believed that she was the deity behind the oracle. The older tales mentioned two dragons
who were perhaps intentionally conflated: a female dragon named Delphyne (δελφύς, "womb"), who is obviously connected
with Delphi and Apollo Delphinios, and a male serpent Typhon (τύφειν, "to smoke"), the adversary of Zeus in the Titanomachy,
whom the narrators confused with Python. Python was the good daemon (ἀγαθὸς δαίμων) of the temple as it appears in
Minoan religion, but she was represented as a dragon, as often happens in Northern European folklore as well as in
the East. Apollo and his sister Artemis can bring death with their arrows. The conception that diseases and death
come from invisible shots sent by supernatural beings, or magicians is common in Germanic and Norse mythology. In
Greek mythology Artemis was the leader (ἡγεμών, "hegemon") of the nymphs, who had functions similar to those of the Nordic
Elves. The "elf-shot" originally indicated disease or death attributed to the elves, but it was later attested denoting
stone arrow-heads which were used by witches to harm people, and also for healing rituals. It seems an oracular cult
existed in Delphi from the Mycenaean ages. In historical times, the priests of Delphi were called Labryaden, "the
double-axe men", which indicates Minoan origin. The double-axe, labrys, was the holy symbol of the Cretan labyrinth.
The Homeric hymn adds that Apollo appeared as a dolphin and carried Cretan priests to Delphi, where they evidently
transferred their religious practices. Apollo Delphinios was a sea-god especially worshiped in Crete and in the islands,
and his name indicates his connection with Delphi and the holy serpent Delphyne ("womb"). Apollo's
sister Artemis, who was the Greek goddess of hunting, is identified with Britomartis (Diktynna), the Minoan "Mistress
of the animals". In her earliest depictions she is accompanied by the "Master of the animals", a male god of hunting
who had the bow as his attribute. His original name is unknown, but it seems that he was absorbed by the more popular
Apollo, who stood by the virgin "Mistress of the Animals", becoming her brother. The old oracles in Delphi seem to
be connected with a local tradition of the priesthood, and there is not clear evidence that a kind of inspiration-prophecy
existed in the temple. This led some scholars to the conclusion that Pythia carried on the rituals in a consistent
procedure through many centuries, according to the local tradition. In that regard, the mythical seeress Sibyl of
Anatolian origin, with her ecstatic art, looks unrelated to the oracle itself. However, the Greek tradition refers
to the existence of vapours and chewing of laurel-leaves, which seem to be confirmed by recent studies. Plato describes
the priestesses of Delphi and Dodona as frenzied women, obsessed by "mania" (μανία, "frenzy"), a Greek word he connected
with mantis (μάντις, "prophet"). Frenzied women like Sibyls from whose lips the god speaks are recorded in the Near
East at Mari in the second millennium BC. Although Crete had contacts with Mari from 2000 BC, there is no evidence
that the ecstatic prophetic art existed during the Minoan and Mycenean ages. It is more probable that this art was
introduced later from Anatolia and regenerated an existing oracular cult that was local to Delphi and dormant in
several areas of Greece. A non-Greek origin of Apollo has long been assumed in scholarship. The name of Apollo's
mother Leto has Lydian origin, and she was worshipped on the coasts of Asia Minor. The inspiration oracular cult
was probably introduced into Greece from Anatolia, which is the origin of Sibyl, and where some of the oldest
oracular shrines existed. Omens, symbols, purifications, and exorcisms appear in old Assyro-Babylonian texts, and these rituals
were spread into the empire of the Hittites. A Hittite text mentions that the king invited a Babylonian priestess
for a certain "purification". A similar story is mentioned by Plutarch. He writes that the Cretan seer Epimenides
purified Athens after the pollution brought by the Alcmeonidae, and that the seer's expertise in sacrifices and reform
of funeral practices were of great help to Solon in his reform of the Athenian state. The story indicates that Epimenides
was probably heir to the shamanic religions of Asia, and proves, together with the Homeric hymn, that Crete had a
resisting religion up to historical times. It seems that these rituals were dormant in Greece, and they were reinforced
when the Greeks migrated to Anatolia. Homer pictures Apollo on the side of the Trojans, fighting against the Achaeans,
during the Trojan War. He is pictured as a terrible god, less trusted by the Greeks than other gods. The god seems
to be related to Appaliunas, a tutelary god of Wilusa (Troy) in Asia Minor, but the word is not complete. The stones
found in front of the gates of Homeric Troy were the symbols of Apollo. The Greeks gave to him the name ἀγυιεύς agyieus
as the protector god of public places and houses who wards off evil, and his symbol was a tapered stone or column.
However, while usually Greek festivals were celebrated at the full moon, all the feasts of Apollo were celebrated
on the seventh day of the month, and the emphasis given to that day (sibutu) indicates a Babylonian origin. The Late
Bronze Age (from 1700 to 1200 BCE) Hittite and Hurrian Aplu was a god of plague, invoked during plague years. Here
we have an apotropaic situation, where a god originally bringing the plague was invoked to end it. Aplu, meaning
"the son of", was a title given to the god Nergal, who was linked to the Babylonian god of the sun Shamash. Homer interprets
Apollo as a terrible god (δεινὸς θεός) who brings death and disease with his arrows, but who can also heal, possessing
a magic art that separates him from the other Greek gods. In Iliad, his priest prays to Apollo Smintheus, the mouse
god who retains an older agricultural function as the protector from field rats. All these functions, including the
function of the healer-god Paean, who seems to have Mycenean origin, are fused in the cult of Apollo. Unusually among
the Olympic deities, Apollo had two cult sites that had widespread influence: Delos and Delphi. In cult practice,
Delian Apollo and Pythian Apollo (the Apollo of Delphi) were so distinct that they might both have shrines in the
same locality. Apollo's cult was already fully established when written sources commenced, about 650 BCE. Apollo
became extremely important to the Greek world as an oracular deity in the archaic period, and the frequency of theophoric
names such as Apollodorus or Apollonios and cities named Apollonia testify to his popularity. Oracular sanctuaries
to Apollo were established in other sites. In the 2nd and 3rd century CE, those at Didyma and Clarus pronounced the
so-called "theological oracles", in which Apollo confirms that all deities are aspects or servants of an all-encompassing,
highest deity. "In the 3rd century, Apollo fell silent. Julian the Apostate (359–61) tried to revive the Delphic
oracle, but failed." Many temples dedicated to Apollo were built in Greece and in the Greek colonies, and they
show the spread of the cult of Apollo, and the evolution of the Greek architecture, which was mostly based on the
rightness of form, and on mathematical relations. Some of the earliest temples, especially in Crete, don't belong
to any Greek order. It seems that the first peripteral temples were rectangular wooden structures. The different wooden
elements were considered divine, and their forms were preserved in the marble or stone elements of the temples of
Doric order. The Greeks used standard types, because they believed that the world of objects was a series of typical
forms which could be represented in several instances. Temples were expected to be canonic, and the architects strove
for aesthetic perfection. From the earliest times there were certain rules strictly observed in rectangular
peripteral and prostyle buildings. The first buildings were built narrow in order to hold the roof, and when the dimensions changed,
some mathematical relations became necessary, in order to keep the original forms. This probably influenced the theory
of numbers of Pythagoras, who believed that behind the appearance of things, there was the permanent principle of
mathematics. It is also stated that Hera kidnapped Eileithyia, the goddess of childbirth, to prevent Leto from going
into labor. The other gods tricked Hera into letting her go by offering her a necklace, nine yards (8 m) long, of
amber. Mythographers agree that Artemis was born first and then assisted with the birth of Apollo, or that Artemis
was born one day before Apollo, on the island of Ortygia and that she helped Leto cross the sea to Delos the next
day to give birth to Apollo. Apollo was born on the seventh day (ἑβδομαγενής, hebdomagenes) of the month Thargelion
—according to Delian tradition—or of the month Bysios—according to Delphian tradition. The seventh and twentieth,
the days of the new and full moon, were ever afterwards held sacred to him. Four days after his birth, Apollo killed
the chthonic dragon Python, which lived in Delphi beside the Castalian Spring. This was the spring which emitted
vapors that caused the oracle at Delphi to give her prophecies. Hera sent the serpent to hunt Leto to her death across
the world. To protect his mother, Apollo begged Hephaestus for a bow and arrows. After receiving them, Apollo cornered
Python in the sacred cave at Delphi. Apollo killed Python but had to be punished for it, since Python was a child
of Gaia. When Zeus struck down Apollo's son Asclepius with a lightning bolt for resurrecting Hippolytus from the
dead (transgressing Themis by stealing Hades's subjects), Apollo in revenge killed the Cyclopes, who had fashioned
the bolt for Zeus. Apollo would have been banished to Tartarus forever for this, but was instead sentenced to one
year of hard labor, due to the intercession of his mother, Leto. During this time he served as shepherd for King
Admetus of Pherae in Thessaly. Admetus treated Apollo well, and, in return, the god conferred great benefits on Admetus.
Daphne was a nymph, daughter of the river god Peneus, who had scorned Apollo. The myth explains the connection of
Apollo with δάφνη (daphnē), the laurel whose leaves his priestess employed at Delphi. In Ovid's Metamorphoses, Phoebus
Apollo chaffs Cupid for toying with a weapon more suited to a man, whereupon Cupid wounds him with a golden dart;
simultaneously, however, Cupid shoots a leaden arrow into Daphne, causing her to be repulsed by Apollo. Following
a spirited chase by Apollo, Daphne prays to her father, Peneus, for help, and he changes her into the laurel tree,
sacred to Apollo. Leucothea was daughter of Orchamus and sister of Clytia. She fell in love with Apollo who disguised
himself as Leucothea's mother to gain entrance to her chambers. Clytia, jealous of her sister because she wanted
Apollo for herself, told Orchamus the truth, betraying her sister's trust and confidence in her. Enraged, Orchamus
ordered Leucothea to be buried alive. Apollo refused to forgive Clytia for betraying his beloved, and a grieving
Clytia wilted and slowly died. Apollo changed her into an incense plant, either heliotrope or sunflower, which follows
the sun every day. Coronis was the daughter of Phlegyas, King of the Lapiths. Pregnant with Asclepius, Coronis fell
in love with Ischys, son of Elatus. A crow informed Apollo of the affair. At first Apollo disbelieved the crow and
turned all crows black (they were previously white) as a punishment for spreading untruths. When he found out the
truth, he sent his sister, Artemis, to kill Coronis (in other stories, Apollo himself killed Coronis). He then made
the crow sacred and gave crows the task of announcing important deaths. Apollo rescued the
baby and gave it to the centaur Chiron to raise. Phlegyas was irate after the death of his daughter and burned the
Temple of Apollo at Delphi. Apollo then killed him for what he did. Hyacinth or Hyacinthus was one of Apollo's male
lovers. He was a Spartan prince, beautiful and athletic. The pair was practicing throwing the discus when a discus
thrown by Apollo was blown off course by the jealous Zephyrus and struck Hyacinthus in the head, killing him instantly.
Apollo was said to be filled with grief: out of Hyacinthus' blood, Apollo created a flower named after him as a memorial
to his death, and his tears stained the flower petals with the interjection αἰαῖ, meaning alas. The Festival of Hyacinthus
was a celebration in Sparta. In Aeschylus' Eumenides, Apollo and the Furies argue about whether Orestes' matricide
was justified; Apollo holds that the bond of marriage is sacred and Orestes was avenging his father, whereas the Erinyes say that the bond of
blood between mother and son is more meaningful than the bond of marriage. They invade his temple, and he says that
the matter should be brought before Athena. Apollo promises to protect Orestes, as Orestes has become Apollo's supplicant.
Apollo advocates Orestes at the trial, and ultimately Athena rules in favor of Apollo. Once Pan had the audacity
to compare his music with that of Apollo, and to challenge Apollo, the god of the kithara, to a trial of skill. Tmolus,
the mountain-god, was chosen to umpire. Pan blew on his pipes, and with his rustic melody gave great satisfaction
to himself and his faithful follower, Midas, who happened to be present. Then Apollo struck the strings of his lyre.
Tmolus at once awarded the victory to Apollo, and all but Midas agreed with the judgment. He dissented and questioned
the justice of the award. Apollo would not suffer such a depraved pair of ears any longer, and caused them to become
the ears of a donkey. In another contest, Apollo was challenged by the satyr Marsyas, a player of the flute. After they
each performed, both were deemed equal until Apollo decreed they play and sing at the same time. As Apollo played the
lyre, this was easy to do. Marsyas could not, as he only knew how to play the flute and could not sing at the same
time. Apollo was therefore declared the winner. Apollo flayed
Marsyas alive in a cave near Celaenae in Phrygia for his hubris in challenging a god. He then nailed Marsyas' shaggy
skin to a nearby pine-tree. Marsyas' blood turned into the river Marsyas. On the occasion of a pestilence in the
430s BCE, Apollo's first temple at Rome was established in the Flaminian fields, replacing an older cult site there
known as the "Apollinare". During the Second Punic War in 212 BCE, the Ludi Apollinares ("Apollonian Games") were
instituted in his honor, on the instructions of a prophecy attributed to one Marcius. In the time of Augustus, who
considered himself under the special protection of Apollo and was even said to be his son, his worship developed
and he became one of the chief gods of Rome. As god of colonization, Apollo gave oracular guidance on colonies, especially
during the height of colonization, 750–550 BCE. According to Greek tradition, he helped Cretan or Arcadian colonists
found the city of Troy. However, this story may reflect a cultural influence which had the reverse direction: Hittite
cuneiform texts mention a god of Asia Minor called Appaliunas or Apalunas in connection with the city of Wilusa attested
in Hittite inscriptions, which most scholars now regard as identical with the Greek Ilion.
In this interpretation, Apollo's title of Lykegenes can simply be read as "born in Lycia", which effectively severs
the god's supposed link with wolves (possibly a folk etymology). In literary contexts, Apollo represents harmony,
order, and reason—characteristics contrasted with those of Dionysus, god of wine, who represents ecstasy and disorder.
The contrast between the roles of these gods is reflected in the adjectives Apollonian and Dionysian. However, the
Greeks thought of the two qualities as complementary: the two gods are brothers, and when Apollo left for Hyperborea
in winter, he would leave the Delphic oracle to Dionysus. This contrast appears to be shown on the two sides of
the Borghese Vase. The evolution of Greek sculpture can be observed in depictions of Apollo, from the almost static,
formal Kouros type of the early archaic period to the representation of motion within a relatively harmonious whole
in the late archaic period. In classical Greece the emphasis is given not to the illusive imaginative reality represented
by the ideal forms, but to the analogies and the interaction of the members in the whole, a method created by Polykleitos.
Finally, Praxiteles seems freed from artistic and religious conventions, and his masterpieces are a mixture of naturalism
with stylization. In classical Greece, Anaxagoras asserted that a divine reason (mind) gave order to
the seeds of the universe, and Plato extended the Greek belief of ideal forms to his metaphysical theory of forms
(ideai, "ideas"). The forms on earth are imperfect duplicates of the intellectual celestial ideas. The Greek words
oida (οἶδα, "(I) know") and eidos (εἶδος, "species") have the same root as the word idea (ἰδέα), indicating how the
Greek mind moved from the gift of the senses, to the principles beyond the senses. The artists in Plato's time moved
away from his theories and art tends to be a mixture of naturalism with stylization. The Greek sculptors considered
the senses more important, and the proportions were used to unite the sensible with the intellectual. Kouros (male
youth) is the modern term given to those representations of standing male youths which first appear in the archaic
period in Greece. This type served certain religious needs; the term was first proposed for what were previously thought
to be depictions of Apollo. The first statues are certainly still and formal. The formality of their stance seems
to be related to the Egyptian precedent, but it was accepted for a good reason. The sculptors had a clear idea
of what a young man is, and embodied the archaic smile of good manners, the firm and springy step, the balance of
the body, dignity, and youthful happiness. When they tried to depict the most abiding qualities of men, it was because
men had common roots with the unchanging gods. The long adoption of a standard, recognizable type is probably
because nature favors the survival of a type long adapted to the climatic conditions, and also
due to the general Greek belief that nature expresses itself in ideal forms that can be imagined and represented.
These forms expressed immortality. Apollo was the immortal god of ideal balance and order. His shrine in Delphi,
which he shared in winter with Dionysus, bore the inscriptions: γνῶθι σεαυτόν (gnōthi seautón, "know thyself"), μηδὲν
ἄγαν (mēdén ágan, "nothing in excess"), and ἐγγύα πάρα δ'ἄτη (eggýa pára d'atē, "make a pledge and mischief is nigh").
In the first large-scale depictions during the early archaic period (640–580 BC), the artists tried to draw the viewer's
attention into the interior of the face and the body, which were represented not as lifeless masses but as
being full of life. The Greeks maintained, until late in their civilization, an almost animistic idea that statues
are in some sense alive. This embodies the belief that the image was somehow the god or man himself. A fine example
is the statue of the Sacred gate Kouros which was found at the cemetery of Dipylon in Athens (Dipylon Kouros). The
statue is the "thing in itself", and his slender face with the deep eyes express an intellectual eternity. According
to the Greek tradition the Dipylon master was named Daedalus, and in his statues the limbs were freed from the body,
giving the impression that the statues could move. He is also thought to have created the New York Kouros, which
is the oldest fully preserved statue of the Kouros type and seems to be the incarnation of the god himself. The animistic
idea as the representation of the imaginative reality, is sanctified in the Homeric poems and in Greek myths, in
stories of the god Hephaestus (Phaistos) and the mythic Daedalus (the builder of the labyrinth) that made images
which moved of their own accord. This kind of art goes back to the Minoan period, when its main theme was the representation
of motion in a specific moment. These free-standing statues were usually of marble, but the form was also rendered in
limestone, bronze, ivory and terracotta. The earliest examples of life-sized statues of Apollo may be two figures from the
Ionic sanctuary on the island of Delos. Such statues were found across the Greek-speaking world; the preponderance
of these were found at the sanctuaries of Apollo, with more than one hundred from the sanctuary of Apollo Ptoios in
Boeotia alone. The last stage in the development of the Kouros type is the late archaic period (520–485 BC), in which
Greek sculpture attained a full knowledge of human anatomy and used it to create a relatively harmonious whole. Ranking
among the very few bronzes that have survived is the masterpiece bronze Piraeus Apollo, found in Piraeus, the harbour
of Athens. The statue originally held a bow in its left hand and a cup for pouring libations in its right hand.
It probably comes from the north-eastern Peloponnese. The emphasis is on anatomy, and it is one of the first attempts
to represent a kind of motion, and beauty relative to proportions, which appear mostly in post-Archaic art. The statue
throws some light on an artistic centre which, with an independently developed harder, simpler, and heavier style,
restricts Ionian influence in Athens. Finally, this is the germ from which the art of Polykleitos was to grow two
or three generations later. In the next century, the beginning of the Classical period, it was considered
that beauty in visible things, as in everything else, consisted of symmetry and proportions. The artists also tried
to represent motion in a specific moment (Myron), which may be considered as the reappearance of the dormant Minoan
element. Anatomy and geometry are fused in one, and each does something to the other. The Greek sculptors tried to
clarify it by looking for mathematical proportions, just as they sought some reality behind appearances. Polykleitos
in his Canon wrote that beauty consists in the proportion not of the elements (materials), but of the parts, that
is the interrelation of parts with one another and with the whole. It seems that he was influenced by the theories
of Pythagoras. The famous Apollo of Mantua and its variants are early forms of the Apollo Citharoedus statue type,
in which the god holds the cithara in his left arm. The type is represented by neo-Attic Imperial Roman copies of
the late 1st or early 2nd century, modelled upon a supposed Greek bronze original made in the second quarter of the
5th century BCE, in a style similar to works of Polykleitos but more archaic. The Apollo held the cithara against
his extended left arm; in the Louvre example, a fragment of one twisting, scrolling horn of the instrument remains upright
against his biceps. Though the proportions were always important in Greek art, the appeal of the Greek sculptures
eludes any explanation by proportion alone. The statues of Apollo were thought to incarnate his living presence,
and these representations of illusive imaginative reality had deep roots in the Minoan period, and in the beliefs
of the first Greek speaking people who entered the region during the bronze-age. Just as the Greeks saw the mountains,
forests, sea and rivers as inhabited by concrete beings, so nature in all of its manifestations possesses clear form,
and the form of a work of art. Spiritual life is incorporated in matter, when it is given artistic form. Just as
in the arts the Greeks sought some reality behind appearances, so in mathematics they sought permanent principles
which could be applied wherever the conditions were the same. Artists and sculptors tried to find this ideal order
in relation with mathematics, but they believed that this ideal order revealed itself not so much to the dispassionate
intellect, as to the whole sentient self. Things as we see them, and as they really are, are one, each stressing
the nature of the other in a single unity. These representations rely on presenting scenes directly to the eye for
their own visible sake. They care for the schematic arrangements of bodies in space, but only as parts in a larger
whole. While each scene has its own character and completeness it must fit into the general sequence to which it
belongs. In these archaic pediments the sculptors use empty intervals to suggest a passage to and from a busy battlefield.
The artists seem to have been dominated by geometrical pattern and order, and this was improved when classical art
brought a greater freedom and economy. Apollo as a handsome beardless young man, is often depicted with a kithara
(as Apollo Citharoedus) or bow in his hand, or reclining on a tree (the Apollo Lykeios and Apollo Sauroctonos types).
The Apollo Belvedere is a marble sculpture that was rediscovered in the late 15th century; for centuries it epitomized
the ideals of Classical Antiquity for Europeans, from the Renaissance through the 19th century. The marble is a Hellenistic
or Roman copy of a bronze original by the Greek sculptor Leochares, made between 350 and 325 BCE.
Bush's margin of victory in the popular vote was the smallest ever for a reelected incumbent president, but marked the first
time since his father's victory 16 years prior that a candidate won a majority of the popular vote. The electoral
map closely resembled that of 2000, with only three states changing sides: New Mexico and Iowa voted Republican in
2004 after having voted Democratic in 2000, while New Hampshire voted Democratic in 2004 after previously voting
Republican. In the Electoral College, Bush received 286 votes to Kerry's 252. Just eight months into his presidency,
the terrorist attacks of September 11, 2001 suddenly transformed Bush into a wartime president. Bush's approval ratings
surged to near 90%. Within a month, the forces of a coalition led by the United States entered Afghanistan, which
had been sheltering Osama bin Laden, suspected mastermind of the September 11 attacks. By December, the Taliban had
been removed as rulers of Kabul, although a long reconstruction would follow, severely hampered by ongoing
turmoil and violence within the country. The Bush administration then turned its attention to Iraq, arguing that the
need to remove Saddam Hussein from power in Iraq had become urgent. Among the stated reasons were that Saddam's regime
had tried to acquire nuclear material and had not properly accounted for biological and chemical material it was
known to have previously possessed and was believed to still maintain. Both the possession of these weapons of mass
destruction (WMD), and the failure to account for them, would violate the U.N. sanctions. The assertion about WMD
was hotly advanced by the Bush administration from the beginning, but other major powers including China, France,
Germany, and Russia remained unconvinced that Iraq was a threat and refused to allow passage of a UN Security Council
resolution to authorize the use of force. Iraq admitted UN weapons inspectors in November 2002; they were continuing
their work to assess the WMD claims when the Bush administration decided to proceed with war without UN authorization
and told the inspectors to leave the country. The United States invaded Iraq on March 20, 2003, along with a "coalition
of the willing" that consisted of additional troops from the United Kingdom, and to a lesser extent, from Australia
and Poland. Within about three weeks, the invasion caused the collapse of both the Iraqi government and its armed
forces; however, the U.S. and allied forces failed to find any weapons of mass destruction in Iraq. Traces of former
materials and weapons labs were reported to have been located, but no "smoking guns". Nevertheless, on May 1, George
W. Bush landed on the aircraft carrier USS Abraham Lincoln, in a Lockheed S-3 Viking, where he gave a speech announcing
the end of "major combat operations" in the Iraq War. Bush's approval rating in May was at 66%, according to a CNN–USA
Today–Gallup poll. However, Bush's high approval ratings did not last. First, while the war itself was popular in
the U.S., the reconstruction and attempted "democratization" of Iraq lost some support as months passed and casualty
figures increased, with no decrease in violence nor progress toward stability or reconstruction. Second, as investigators
combed through the country, they failed to find the predicted WMD stockpiles, which led to debate over the rationale
for the war. On March 10, 2004, Bush officially clinched the number of delegates needed to be nominated at the 2004
Republican National Convention in New York City. Bush accepted the nomination on September 2, 2004, and selected
Vice President Dick Cheney as his running mate. (In New York, the ticket was also on the ballot as candidates of
the Conservative Party of New York State.) During the convention and throughout the campaign, Bush focused on two
themes: defending America against terrorism and building an ownership society. The ownership society included allowing
people to invest some of their Social Security in the stock market, increasing home and stock ownership, and encouraging
more people to buy their own health insurance. By summer of 2003, Howard Dean had become the apparent front runner
for the Democratic nomination, performing strongly in most polls and leading the pack with the largest campaign war
chest. Dean's strength as a fundraiser was attributed mainly to his embrace of the Internet for campaigning. The
majority of his donations came from individual supporters, who became known as Deanites, or, more commonly, Deaniacs.
Generally regarded as a pragmatic centrist during his time as governor, Dean emerged during his presidential campaign
as a left-wing populist, denouncing the policies of the Bush administration (especially the 2003 invasion of Iraq)
as well as fellow Democrats, who, in his view, failed to strongly oppose them. Senator Lieberman, a liberal on domestic
issues but a hawk on the War on Terror, failed to gain traction with liberal Democratic primary voters. In September
2003, retired four-star general Wesley Clark announced his intention to run in the presidential primary election
for the Democratic Party nomination. His campaign focused on themes of leadership and patriotism; early campaign
ads relied heavily on biography. His late start left him with relatively few detailed policy proposals. This weakness
was apparent in his first few debates, although he soon presented a range of position papers, including a major tax-relief
plan. Nevertheless, the Democrats did not flock to support his campaign. In sheer numbers, Kerry had fewer endorsements
than Howard Dean, who was far ahead in the superdelegate race going into the Iowa caucuses in January 2004, although
Kerry led the endorsement race in Iowa, New Hampshire, Arizona, South Carolina, New Mexico and Nevada. Kerry's main
perceived weakness was in his neighboring state of New Hampshire and in nearly all national polls. Most other states
did not have updated polling numbers to give an accurate placing for the Kerry campaign before Iowa. Heading into
the primaries, Kerry's campaign was largely seen as in trouble, particularly after he fired campaign manager Jim
Jordan. The key factors enabling it to survive were fellow Massachusetts Senator Ted Kennedy's assignment of Mary Beth
Cahill as campaign manager, and Kerry's mortgaging of his own home to lend money to his campaign (although
his wife was a billionaire, campaign finance rules prohibited the use of a spouse's fortune). He also brought on the
"magical" Michael Whouley, who would be credited with helping bring home the Iowa victory, just as he had done in New
Hampshire for Al Gore against Bill Bradley in 2000. In the race for individual contributions, economist Lyndon LaRouche
dominated the pack leading up to the primaries. According to the Federal Election Commission statistics, LaRouche
had more individual contributors to his 2004 presidential campaign than any other candidate, until the final quarter
of the primary season, when John Kerry surpassed him. As of the April 15 filing, LaRouche had 7,834 individual contributors
who had given cumulatively $200 or more, compared to 6,257 for John Kerry, 5,582 for John Edwards, 4,090
for Howard Dean, and 2,744 for Gephardt. By the January 2004 Iowa caucuses, the field had dwindled to nine candidates,
as Bob Graham had dropped out of the race. Howard Dean was a strong front-runner. However, the Iowa caucuses yielded
unexpectedly strong results for Democratic candidates John Kerry, who earned 38% of the state's delegates, and John
Edwards, who took 32%. Former front-runner Howard Dean slipped to third place with 18%, and Richard Gephardt finished
fourth (11%). In the days leading up to the Iowa vote, there was much negative campaigning between the Dean and Gephardt
camps. The dismal results caused Gephardt to drop out and later endorse Kerry. Carol Moseley Braun also dropped out,
endorsing Howard Dean. Besides the impact of coming in third, Dean was further hurt by a speech he gave at a post-caucus
rally. Dean was shouting over the cheers of his enthusiastic audience, but the crowd noise was being filtered out
by his unidirectional microphone, leaving only his full-throated exhortations audible to the television viewers.
To those at home, he seemed to raise his voice out of sheer emotion. The incessant replaying of the "Dean Scream"
by the press became a debate on the topic of whether Dean was the victim of media bias. The scream scene was shown
approximately 633 times by cable and broadcast news networks in just four days following the incident, a number that
does not include talk shows and local news broadcasts. However, those who were in the actual audience that day insist
that they were not aware of the infamous "scream" until they returned to their hotel rooms and saw it on TV. The
following week, John Edwards won the South Carolina primary and finished a strong second in Oklahoma to Clark. Lieberman
dropped out of the campaign the following day. Kerry dominated throughout February and his support quickly snowballed
as he won caucuses and primaries, taking a string of wins in Michigan, Washington, Maine, Tennessee, Washington,
D.C., Nevada, Wisconsin, Utah, Hawaii, and Idaho. Clark and Dean dropped out during this time, leaving Edwards as
the only real threat to Kerry. Kucinich and Sharpton continued to run despite poor results at the polls. In March's
Super Tuesday, Kerry won decisive victories in the California, Connecticut, Georgia, Maryland, Massachusetts, New
York, Ohio, and Rhode Island primaries and the Minnesota caucuses. Dean, despite having withdrawn from the race two
weeks earlier, won his home state of Vermont. Edwards finished only slightly behind Kerry in Georgia, but, failing
to win a single state other than South Carolina, chose to withdraw from the presidential race. Sharpton followed
suit a couple of weeks later. Kucinich did not officially leave the race until July. On July 6, John Kerry selected John
Edwards as his running mate, shortly before the 2004 Democratic National Convention in Boston, held later that month.
Days before announcing Edwards as his running mate, Kerry gave a short list of three candidates: Sen. John Edwards,
Rep. Dick Gephardt, and Gov. Tom Vilsack. Heading into the convention, the Kerry/Edwards ticket unveiled their new
slogan—a promise to make America "stronger at home and more respected in the world." Kerry made his Vietnam War experience
the prominent theme of the convention. In accepting the nomination, he began his speech with, "I'm John Kerry and
I'm reporting for duty." He later delivered what may have been the speech's most memorable line when he said, "the
future doesn't belong to fear, it belongs to freedom", a quote that later appeared in a Kerry/Edwards television
advertisement. Bush focused his campaign on national security, presenting himself as a decisive leader and contrasted
Kerry as a "flip-flopper." This strategy was designed to convey to American voters the idea that Bush could be trusted
to be tough on terrorism while Kerry would be "uncertain in the face of danger." Bush (just as his father did with
Dukakis in the 1988 election) also sought to portray Kerry as a "Massachusetts liberal" who was out of touch with
mainstream Americans. One of Kerry's slogans was "Stronger at home, respected in the world." This advanced the suggestion
that Kerry would pay more attention to domestic concerns; it also encapsulated Kerry's contention that Bush had alienated
American allies by his foreign policy. During August and September 2004, there was an intense focus on events that
occurred in the late 1960s and early 1970s. Bush was accused of failing to fulfill his required service in the Texas
Air National Guard. However, the focus quickly shifted to the conduct of CBS News after they aired a segment on 60
Minutes Wednesday introducing what became known as the Killian documents. Serious doubts about the documents' authenticity
quickly emerged, leading CBS to appoint a review panel that eventually resulted in the firing of the news producer
and other significant staffing changes. The first debate was held on September 30 at the University of Miami, moderated
by Jim Lehrer of PBS. During the debate, slated to focus on foreign policy, Kerry accused Bush of having failed to
gain international support for the 2003 Invasion of Iraq, saying the only countries assisting the U.S. during the
invasion were the United Kingdom and Australia. Bush replied to this by saying, "Well, actually, he forgot Poland."
Later, a consensus formed among mainstream pollsters and pundits that Kerry won the debate decisively, strengthening
what had come to be seen as a weak and troubled campaign. In the days after, coverage focused on Bush's apparent
annoyance with Kerry and numerous scowls and negative facial expressions. The second presidential debate was held
at Washington University in St. Louis, Missouri, on October 8, moderated by Charles Gibson of ABC. Conducted in a
town meeting format, less formal than the first presidential debate, this debate saw Bush and Kerry taking questions
on a variety of subjects from a local audience. Bush attempted to deflect criticism of what was described as his
scowling demeanor during the first debate, joking at one point about one of Kerry's remarks, "That answer made me
want to scowl." Bush and Kerry met for the third and final debate at Arizona State University on October 13. Fifty-one
million viewers watched the debate, which was moderated by Bob Schieffer of CBS News. However, at the time of the ASU debate,
there were 15.2 million viewers tuned in to watch the Major League Baseball playoffs broadcast simultaneously. After
Kerry, responding to a question about gay rights, reminded the audience that Vice President Cheney's daughter was
a lesbian, Cheney responded with a statement calling himself "a pretty angry father" due to Kerry using Cheney's
daughter's sexual orientation for his political purposes. One elector in Minnesota cast a ballot for president with
the name of "John Ewards" [sic] written on it. The Electoral College officials certified this ballot as a vote for
John Edwards for president. The remaining nine electors cast ballots for John Kerry. All ten electors in the state
cast ballots for John Edwards for vice president (John Edwards's name was spelled correctly on all ballots for vice
president). This was the first time in U.S. history that an elector had cast a vote for the same person to be both
president and vice president; another faithless elector in the 1800 election had voted twice for Aaron Burr, but
under that electoral system only votes for the president's position were cast, with the runner-up in the Electoral
College becoming vice president (and the second vote for Burr was discounted and re-assigned to Thomas Jefferson
in any event, as it violated Electoral College rules). The morning after the election, the major candidates were
neck and neck. It was clear that the result in Ohio, along with two other states that had still not declared (New
Mexico and Iowa), would decide the winner. Bush had established a lead of around 130,000 votes but the Democrats
pointed to provisional ballots that had yet to be counted, initially reported to number as high as 200,000. Bush
had preliminary leads of less than 5% of the vote in only four states, but if Iowa, Nevada and New Mexico had all
eventually gone to Kerry, a win for Bush in Ohio would have created a 269–269 tie in the Electoral College. The result
of an electoral tie would cause the election to be decided in the House of Representatives with each state casting
one vote, regardless of population. Such a scenario would almost certainly have resulted in a victory for Bush, as
Republicans controlled more House delegations. Therefore, the outcome of the election hinged solely on the result
in Ohio, regardless of the final totals elsewhere. In the afternoon Ohio's Secretary of State, Ken Blackwell, announced
that it was statistically impossible for the Democrats to make up enough valid votes in the provisional ballots to
win. At the time provisional ballots were reported as numbering 140,000 (and later estimated to be only 135,000).
Faced with this announcement, John Kerry conceded defeat. Had Kerry won Ohio, he would have won the election despite
losing the national popular vote by over 3 million votes, a complete reversal of the 2000 election when Bush won
the presidency despite losing the popular vote to Al Gore by over 500,000 votes. At the official counting of the
electoral votes on January 6, a motion was made contesting Ohio's electoral votes. Because the motion was supported
by at least one member of both the House of Representatives and the Senate, election law mandated that each house
retire to debate and vote on the motion. In the House of Representatives, the motion was supported by 31 Democrats.
It was opposed by 178 Republicans, 88 Democrats and one independent. Not voting were 52 Republicans and 80 Democrats.
Four people elected to the House had not yet taken office, and one seat was vacant. In the Senate, it was supported
only by its maker, Senator Boxer, with 74 Senators opposed and 25 not voting. During the debate, no Senator argued
that the outcome of the election should be changed by either court challenge or revote. Senator Boxer claimed that
she had made the motion not to challenge the outcome, but to "shed the light of truth on these irregularities." Kerry
would later state that "the widespread irregularities make it impossible to know for certain that the [Ohio] outcome
reflected the will of the voters." In the same article, Democratic National Committee Chairman Howard Dean said "I'm
not confident that the election in Ohio was fairly decided... We know that there was substantial voter suppression,
and the machines were not reliable. It should not be a surprise that the Republicans are willing to do things that
are unethical to manipulate elections. That's what we suspect has happened." At the invitation of the United States
government, the Organization for Security and Cooperation in Europe (OSCE) sent a team of observers to monitor the
presidential elections in 2004. It was the first time the OSCE had sent observers to a U.S. presidential election,
although they had been invited in the past. In September 2004 the OSCE issued a report on U.S. electoral processes,
followed by its final election report, which reads: "The November 2, 2004 elections in the United States mostly met
the OSCE commitments included in the 1990 Copenhagen Document. They were conducted in an environment that reflects
a long-standing democratic tradition, including institutions governed by the rule of law, free and generally professional
media, and a civil society intensively engaged in the election process. There was exceptional public interest in
the two leading presidential candidates and the issues raised by their respective campaigns, as well as in the election
process itself." The 2004 election was the first to be affected by the campaign finance reforms mandated by the Bipartisan
Campaign Reform Act of 2002 (also known as the McCain–Feingold Bill for its sponsors in the United States Senate).
Because of the Act's restrictions on candidates' and parties' fundraising, a large number of so-called 527 groups
emerged. Named for a section of the Internal Revenue Code, these groups were able to raise large amounts of money
for various political causes as long as they did not coordinate their activities with political campaigns. Examples
of 527s include Swift Boat Veterans for Truth, MoveOn.org, the Media Fund, and America Coming Together. Many such
groups were active throughout the campaign season. (There was some similar activity, although on a much lesser scale,
during the 2000 campaign.) To distinguish official campaigning from independent campaigning, political advertisements
on television were required to include a verbal disclaimer identifying the organization responsible for the advertisement.
Advertisements produced by political campaigns usually included the statement, "I'm [candidate's name], and I approve
this message." Advertisements produced by independent organizations usually included the statement, "[Organization
name] is responsible for the content of this advertisement", and from September 3 (60 days before the general election),
such organizations' ads were prohibited from mentioning any candidate by name. Previously, television advertisements
had required only a written "paid for by" disclaimer on the screen. A ballot initiative in Colorado, known as Amendment
36, would have changed the way in which the state apportions its electoral votes. Rather than assigning all 9 of
the state's electors to the candidate with a plurality of popular votes, under the amendment Colorado would have
assigned presidential electors proportionally to the statewide vote count, which would be a unique system (Nebraska
and Maine assign electoral votes based on vote totals within each congressional district). Detractors claimed that
this splitting would diminish Colorado's influence in the Electoral College, and the amendment ultimately failed,
receiving only 34% of the vote.
The contemporary Liberal Party generally advocates economic liberalism (see New Right). Historically, the party has supported
a higher degree of economic protectionism and interventionism than it has in recent decades. However, from its foundation
the party has identified itself as anti-socialist. Strong opposition to socialism and communism in Australia and
abroad was one of its founding principles. The party's founder and longest-serving leader Robert Menzies envisaged
that Australia's middle class would form its main constituency. Throughout their history, the Liberals have been
in electoral terms largely the party of the middle class (whom Menzies, in the era of the party's formation called
"The forgotten people"), though such class-based voting patterns are no longer as clear as they once were. In the
1970s a left-wing middle class emerged that no longer voted Liberal.[citation needed] One effect of this was the
success of a breakaway party, the Australian Democrats, founded in 1977 by former Liberal minister Don Chipp and
members of minor liberal parties; other members of the left-leaning section of the middle-class became Labor supporters.[citation
needed] On the other hand, the Liberals have done increasingly well in recent years among socially conservative working-class
voters.[citation needed] However, the Liberal Party's key support base remains the upper-middle classes; 16 of the
20 richest federal electorates are held by the Liberals, most of which are safe seats. In country areas they either
compete with or have a truce with the Nationals, depending on various factors. Domestically, Menzies presided over
a fairly regulated economy in which utilities were publicly owned, and commercial activity was highly regulated through
centralised wage-fixing and high tariff protection. Liberal leaders from Menzies to Malcolm Fraser generally maintained
Australia's high tariff levels. At that time the Liberals' coalition partner, the Country Party, the older of the
two in the coalition (now known as the "National Party"), had considerable influence over the government's economic
policies. It was not until the late 1970s and through their period out of power federally in the 1980s that the party
came to be influenced by what was known as the "New Right" – a conservative liberal group who advocated market deregulation,
privatisation of public utilities, reductions in the size of government programs and tax cuts. The Liberals' immediate
predecessor was the United Australia Party (UAP). More broadly, the Liberal Party's ideological ancestry stretched
back to the anti-Labor groupings in the first Commonwealth parliaments. The Commonwealth Liberal Party was a fusion
of the Free Trade Party and the Protectionist Party in 1909 by the second prime minister, Alfred Deakin, in response
to Labor's growing electoral prominence. The Commonwealth Liberal Party merged with several Labor dissidents (including
Billy Hughes) to form the Nationalist Party of Australia in 1917. That party, in turn, merged with Labor dissidents
to form the UAP in 1931. The UAP had been formed as a new conservative alliance in 1931, with Labor defector Joseph
Lyons as its leader. The stance of Lyons and other Labor rebels against the more radical proposals of the Labor movement
to deal with the Great Depression had attracted the support of prominent Australian conservatives. With Australia still
suffering the effects of the Great Depression, the newly formed party won a landslide victory at the 1931 Election,
and the Lyons Government went on to win three consecutive elections. It largely avoided Keynesian pump-priming and
pursued a more conservative fiscal policy of debt reduction and balanced budgets as a means of stewarding Australia
out of the Depression. Lyons' death in 1939 saw Robert Menzies assume the Prime Ministership on the eve of war. Menzies
served as Prime Minister from 1939 to 1941 but resigned as leader of the minority World War II government amidst
an unworkable parliamentary majority. The UAP, led by Billy Hughes, disintegrated after suffering a heavy defeat
in the 1943 election. Menzies called a conference of conservative parties and other groups opposed to the ruling
Australian Labor Party, which met in Canberra on 13 October 1944 and again in Albury, New South Wales in December
1944. From 1942 onward Menzies had maintained his public profile with his series of "The Forgotten People" radio
talks, similar to Franklin D. Roosevelt's "fireside chats" of the 1930s, in which he spoke of the middle class as the
"backbone of Australia" but as nevertheless having been "taken for granted" by political parties. The formation of
the party was formally announced at Sydney Town Hall on 31 August 1945. It took the name "Liberal" in honour of the
old Commonwealth Liberal Party. The new party was dominated by the remains of the old UAP; with few exceptions, the
UAP party room became the Liberal party room. The Australian Women's National League, a powerful conservative women's
organisation, also merged with the new party. A conservative youth group Menzies had set up, the Young Nationalists,
was also merged into the new party. It became the nucleus of the Liberal Party's youth division, the Young Liberals.
By September 1945 there were more than 90,000 members, many of whom had not previously been members of any political
party. After an initial loss to Labor at the 1946 election, Menzies led the Liberals to victory at the 1949 election,
and the party stayed in office for a record 23 years—still the longest unbroken run in government at the federal
level. Australia experienced prolonged economic growth during the post-war boom period of the Menzies Government
(1949–1966) and Menzies fulfilled his promises at the 1949 election to end rationing of butter, tea and petrol and
to provide a five-shilling child endowment for first-born children as well as subsequent ones. While himself an unashamed anglophile,
Menzies' government concluded a number of major defence and trade treaties that set Australia on its post-war trajectory
out of Britain's orbit; opened Australia to multi-ethnic immigration; and instigated important legal reforms regarding
Aboriginal Australians. Menzies came to power the year the Communist Party of Australia had led a coal strike to
improve pit miners' working conditions. That same year Joseph Stalin's Soviet Union exploded its first atomic bomb,
and Mao Zedong led the Communist Party of China to power in China; a year later came the invasion of South Korea
by Communist North Korea. Anti-communism was a key political issue of the 1950s and 1960s. Menzies was firmly anti-Communist;
he committed troops to the Korean War and attempted to ban the Communist Party of Australia in an unsuccessful referendum
during the course of that war. The Labor Party split over concerns about the influence of the Communist Party over
the Trade Union movement, leading to the foundation of the breakaway Democratic Labor Party, whose preferences supported
the Liberal and Country parties. In 1951, during the early stages of the Cold War, Menzies spoke of the possibility
of a looming third world war. The Menzies Government entered Australia's first formal military alliance outside of
the British Commonwealth with the signing of the ANZUS Treaty between Australia, New Zealand and the United States
in San Francisco in 1951. External Affairs Minister Percy Spender had put forward the proposal to work along similar
lines to the NATO Alliance. The Treaty declared that any attack on one of the three parties in the Pacific area would
be viewed as a threat to each, and that the common danger would be met in accordance with each nation's constitutional
processes. In 1954 the Menzies Government signed the South East Asia Collective Defence Treaty (SEATO) as a South
East Asian counterpart to NATO. That same year, Soviet diplomat Vladimir Petrov and his wife defected from the Soviet
embassy in Canberra, revealing evidence of Russian spying activities; Menzies called a Royal Commission to investigate.
Menzies continued the expanded immigration program established under Chifley, and took important steps towards dismantling
the White Australia Policy. In the early 1950s, external affairs minister Percy Spender helped to establish the Colombo
Plan for providing economic aid to underdeveloped nations in Australia's region. Under that scheme many future Asian
leaders studied in Australia. In 1958 the government replaced the Immigration Act's arbitrarily applied European
language dictation test with an entry permit system that reflected economic and skills criteria. In 1962, Menzies'
Commonwealth Electoral Act provided that all Indigenous Australians should have the right to enrol and vote at federal
elections (prior to this, indigenous people in Queensland, Western Australia and some in the Northern Territory had
been excluded from voting unless they were ex-servicemen). In 1949 the Liberals appointed Dame Enid Lyons as the
first woman to serve in an Australian Cabinet. Menzies remained a staunch supporter of links to the monarchy and
British Commonwealth but formalised an alliance with the United States and concluded the Agreement on Commerce between
Australia and Japan which was signed in July 1957 and launched post-war trade with Japan, beginning a growth of Australian
exports of coal, iron ore and mineral resources that would steadily climb until Japan became Australia's largest
trading partner. Menzies' successor, Harold Holt, increased the Australian commitment to the growing war in Vietnam, which met with some public opposition.
His government oversaw conversion to decimal currency. Holt faced Britain's withdrawal from Asia by visiting and
hosting many Asian leaders and by expanding ties to the United States, hosting the first visit to Australia by an
American president, his friend Lyndon B. Johnson. Holt's government introduced the Migration Act 1966, which effectively
dismantled the White Australia Policy and increased access to non-European migrants, including refugees fleeing the
Vietnam War. Holt also called the 1967 Referendum which removed the discriminatory clause in the Australian Constitution
which excluded Aboriginal Australians from being counted in the census – the referendum was one of the few to be
overwhelmingly endorsed by the Australian electorate (over 90% voted 'yes'). By the end of 1967, the Liberals' initially
popular support for the war in Vietnam was causing increasing public protest. The Gorton Government increased funding
for the arts, setting up the Australian Council for the Arts, the Australian Film Development Corporation and the
National Film and Television Training School. The Gorton Government passed legislation establishing equal pay for
men and women and increased pensions, allowances and education scholarships, as well as providing free health care
to 250,000 of the nation's poor (but not universal health care). Gorton's government kept Australia in the Vietnam
War but stopped replacing troops at the end of 1970. Gorton maintained good relations with the United States and
Britain, but pursued closer ties with Asia. The Gorton government experienced a decline in voter support at the 1969
election. State Liberal leaders saw his policies as too centralist, while other Liberals disliked his personal
behaviour. In 1971 Defence Minister Malcolm Fraser resigned, saying Gorton was "not fit to hold the great office
of Prime Minister". A vote on the leadership split the Liberal Party 50/50; although this was insufficient
to remove him as leader, Gorton decided it was also insufficient support for him, and he resigned. During McMahon's
period in office, Neville Bonner joined the Senate and became the first Indigenous Australian in the Australian Parliament.
Bonner was chosen by the Liberal Party to fill a Senate vacancy in 1971 and celebrated his maiden parliamentary speech
with a boomerang throwing display on the lawns of Parliament. Bonner went on to win election at the 1972 election
and served as a Liberal Senator for 12 years. He worked on Indigenous and social welfare issues and proved an independent
minded Senator, often crossing the floor on Parliamentary votes. Following the 1974–75 Loans Affair, the Malcolm
Fraser led Liberal-Country Party Coalition argued that the Whitlam Government was incompetent and delayed passage
of the Government's money bills in the Senate until the government promised a new election. Whitlam refused and
Fraser insisted, leading to the divisive 1975 Australian constitutional crisis. The deadlock came to an end when the
Whitlam government was dismissed by the Governor-General, Sir John Kerr on 11 November 1975 and Fraser was installed
as caretaker Prime Minister, pending an election. Fraser won in a landslide at the resulting 1975 election. Fraser
maintained some of the social reforms of the Whitlam era, while seeking increased fiscal restraint. His government
included the first Aboriginal federal parliamentarian, Neville Bonner, and in 1976, Parliament passed the Aboriginal
Land Rights Act 1976, which, while limited to the Northern Territory, affirmed "inalienable" freehold title to some
traditional lands. Fraser established the multicultural broadcaster SBS, accepted Vietnamese refugees, opposed minority
white rule in Apartheid South Africa and Rhodesia and opposed Soviet expansionism. A significant program of economic
reform however was not pursued. By 1983, the Australian economy was suffering with the early 1980s recession and
amidst the effects of a severe drought. Fraser had promoted "states' rights" and his government refused to use Commonwealth
powers to stop the construction of the Franklin Dam in Tasmania in 1982. Liberal minister Don Chipp split off from
the party to form a new social liberal party, the Australian Democrats in 1977. Fraser won further substantial majorities
at the 1977 and 1980 elections, before losing to the Bob Hawke led Australian Labor Party in the 1983 election. John Howard, who became Prime Minister in 1996,
differed from his Labor predecessor Paul Keating in that he supported traditional Australian institutions like the
Monarchy in Australia, the commemoration of ANZAC Day and the design of the Australian flag, but like Keating he
pursued privatisation of public utilities and the introduction of a broad based consumption tax (although Keating
had dropped support for a GST by the time of his 1993 election victory). Howard's premiership coincided with Al Qaeda's
11 September attacks on the United States. The Howard Government invoked the ANZUS treaty in response to the attacks
and supported America's campaigns in Afghanistan and Iraq. Through 2010, the party improved its vote in the Tasmanian
and South Australian state elections and achieved state government in Victoria. In March 2011, the New South Wales
Liberal-National Coalition led by Barry O'Farrell won government with the largest election victory in post-war Australian
history at the State Election. In Queensland, the Liberal and National parties merged in 2008 to form the new Liberal
National Party of Queensland (registered as the Queensland Division of the Liberal Party of Australia). In March
2012, the new party achieved government in a historic landslide, led by former Brisbane Lord Mayor Campbell Newman.
Following the 2007 Federal Election, Dr Brendan Nelson was elected leader by the Parliamentary Liberal Party. On
16 September 2008, in a second contest following a spill motion, Nelson lost the leadership to Malcolm Turnbull.
On 1 December 2009, a subsequent leadership election saw Turnbull lose the leadership to Tony Abbott by 42 votes
to 41 on the second ballot. Abbott led the party to the 2010 federal election, which saw an increase in the Liberal
Party vote and resulted in the first hung parliament since the 1940 election. The party's leader is Malcolm Turnbull
and its deputy leader is Julie Bishop. The pair were elected to their positions at the September 2015 Liberal leadership
ballot, Bishop as the incumbent deputy leader and Turnbull as a replacement for Tony Abbott, whom Turnbull consequently
succeeded as Prime Minister of Australia. The party, now governing as the Turnbull Government, had been elected at the 2013 federal
election as the Abbott Government, which took office on 18 September 2013. At state and territory level, the Liberal
Party is in office in three states: Colin Barnett has been Premier of Western Australia since 2008, Will Hodgman
Premier of Tasmania since 2014 and Mike Baird Premier of New South Wales since 2014. Adam Giles is also the Chief
Minister of the Northern Territory, having led a Country Liberal minority government since 2015. The party is in
opposition in Victoria, Queensland, South Australia and the Australian Capital Territory. Socially, while liberty
and freedom of enterprise form the basis of its beliefs, elements of the party have wavered between what is termed
"small-l liberalism" and social conservatism. Historically, Liberal Governments have been responsible for the carriage
of a number of notable "socially liberal" reforms, including the opening of Australia to multiethnic immigration
under Menzies and Harold Holt; Holt's 1967 Referendum on Aboriginal Rights; Sir John Gorton's support for cinema
and the arts; selection of the first Aboriginal Senator, Neville Bonner, in 1971; and Malcolm Fraser's Aboriginal
Land Rights Act 1976. A West Australian Liberal, Ken Wyatt, became the first Indigenous Australian elected to the
House of Representatives in 2010. The Liberal Party's organisation is dominated by the six state divisions, reflecting
the party's original commitment to a federalised system of government (a commitment which was strongly maintained
by all Liberal governments until 1983, but was to a large extent abandoned by the Howard Government, which showed
strong centralising tendencies). Menzies deliberately created a weak national party machine and strong state divisions.
Party policy is made almost entirely by the parliamentary parties, not by the party's rank-and-file members, although
Liberal party members do have a degree of influence over party policy. Menzies ran strongly against Labor's plans
to nationalise the Australian banking system and, following victory in the 1949 election, secured a double dissolution
election for April 1951, after the Labor-controlled Senate refused to pass his banking legislation. The Liberal-Country
Coalition was returned with control of the Senate. The Government was returned again in the 1954 election; the formation
of the anti-Communist Democratic Labor Party (DLP) and the consequent split in the Australian Labor Party early in
1955 helped the Liberals to another victory in December 1955. John McEwen replaced Arthur Fadden as leader of the
Country Party in March 1958 and the Menzies-McEwen Coalition was returned again at elections in November 1958 – their
third victory against Labor's H. V. Evatt. The Coalition was narrowly returned against Labor's Arthur Calwell in
the December 1961 election, in the midst of a credit squeeze. Menzies stood for office for the last time in the November
1963 election, again defeating Calwell, with the Coalition winning back its losses in the House of Representatives.
Menzies went on to resign from parliament on 26 January 1966. A period of division for the Liberals followed, with
former Treasurer John Howard competing with former Foreign Minister Andrew Peacock for supremacy. The Australian
economy was facing the early 1990s recession. Unemployment reached 11.4% in 1992. Under Dr John Hewson, in November
1991, the opposition launched the 650-page Fightback! policy document: a radical collection of "dry", economic liberal
measures including the introduction of a Goods and Services Tax (GST), various changes to Medicare including the
abolition of bulk billing for non-concession holders, the introduction of a nine-month limit on unemployment benefits,
various changes to industrial relations including the abolition of awards, a $13 billion personal income tax cut
directed at middle and upper income earners, $10 billion in government spending cuts, the abolition of state payroll
taxes and the privatisation of a large number of government owned enterprises. The package represented the start of a very
different future direction to the Keynesian economic conservatism practised by previous Liberal/National Coalition
governments. The 15 percent GST was the centrepiece of the policy document. Through 1992, Labor Prime Minister Paul
Keating mounted a campaign against the Fightback package, and particularly against the GST, which he described as
an attack on the working class in that it shifted the tax burden from direct taxation of the wealthy to indirect
taxation as a broad-based consumption tax. Pressure group activity and public opinion were relentless, leading Hewson
to exempt food from the proposed GST, which raised questions about the complexity of determining what food was and was not
to be exempt from the GST. Hewson's difficulty in explaining this to the electorate was exemplified in the infamous
birthday cake interview, considered by some as a turning point in the election campaign. Keating won a record fifth
consecutive Labor term at the 1993 election. A number of the proposals were later adopted into law in some form,
to a small extent during the Keating Labor government, and to a larger extent during the Howard Liberal government
(most famously the GST), while unemployment benefits and bulk billing were re-targeted for a time by the Abbott Liberal
government. In South Australia, initially a Liberal and Country Party affiliated party, the Liberal and Country League
(LCL), mostly led by Premier of South Australia Tom Playford, was in power from the 1933 election to the 1965 election,
though with assistance from an electoral malapportionment, or gerrymander, known as the Playmander. The LCL's Steele
Hall governed for one term from the 1968 election to the 1970 election and during this time began the process of
dismantling the Playmander. David Tonkin, as leader of the South Australian Division of the Liberal Party of Australia,
became Premier at the 1979 election for one term, losing office at the 1982 election. The Liberals returned to power
at the 1993 election, led by Premiers Dean Brown, John Olsen and Rob Kerin through two terms, until their defeat
at the 2002 election. They have since remained in opposition under a record five Opposition Leaders.
In Japanese, samurai are usually referred to as bushi (武士, [bu.ɕi]) or buke (武家). According to translator William Scott Wilson:
"In Chinese, the character 侍 was originally a verb meaning "to wait upon" or "accompany persons" in the upper ranks
of society, and this is also true of the original term in Japanese, saburau. In both countries the terms were nominalized
to mean "those who serve in close attendance to the nobility", the pronunciation in Japanese changing to saburai.
According to Wilson, an early reference to the word "samurai" appears in the Kokin Wakashū (905–914), the first imperial
anthology of poems, completed in the first part of the 10th century. By the end of the 12th century, samurai became
almost entirely synonymous with bushi, and the word was closely associated with the middle and upper echelons of
the warrior class. The samurai were usually associated with a clan and their lord, were trained as officers in military
tactics and grand strategy, and they followed a set of rules that later came to be known as the bushidō. While the
samurai numbered less than 10% of Japan's population at the time, their teachings can still be found today in both everyday
life and in modern Japanese martial arts. Following the Battle of Hakusukinoe against Tang China and Silla in 663
AD that led to a Japanese retreat from Korean affairs, Japan underwent widespread reform. One of the most important
was the Taika Reform, issued by Prince Naka no Ōe (Emperor Tenji) in 646 AD. This edict allowed the Japanese
aristocracy to adopt the Tang dynasty political structure, bureaucracy, culture, religion, and philosophy. As part
of the Taihō Code of 702 AD and the later Yōrō Code, the population was required to report regularly for a census,
a precursor for national conscription. With an understanding of how the population was distributed, Emperor Mommu
introduced a law whereby 1 in 3–4 adult males was drafted into the national military. These soldiers were required
to supply their own weapons, and in return were exempted from duties and taxes. This was one of the first attempts
by the Imperial government to form an organized army modeled after the Chinese system. It was called "Gundan-Sei"
(軍団制) by later historians and is believed to have been short-lived.[citation needed] In the early Heian period, the
late 8th and early 9th centuries, Emperor Kammu sought to consolidate and expand his rule in northern Honshū, but
the armies he sent to conquer the rebellious Emishi people lacked motivation and discipline, and failed in their
task.[citation needed] Emperor Kammu introduced the title of sei'i-taishōgun (征夷大将軍) or Shogun, and began to rely
on the powerful regional clans to conquer the Emishi. Skilled in mounted combat and archery (kyūdō), these clan warriors
became the Emperor's preferred tool for putting down rebellions.[citation needed] Though this is the first known
use of the "Shogun" title, it was a temporary title, and was not imbued with political power until the 13th century.
At this time (the 7th to 9th centuries) Imperial Court officials considered these warriors merely a military section under
the control of the Imperial Court. After the Genpei War of the late 12th century, the clan leader Minamoto no Yoritomo
obtained the right to appoint shugo and jito, and was allowed to organize soldiers and police, and to collect a certain
amount of tax. Initially, their responsibility was restricted to arresting rebels and collecting needed army provisions,
and they were forbidden from interfering with Kokushi Governors, but their responsibility gradually expanded and
thus the samurai-class appeared as the political ruling power in Japan. Minamoto no Yoritomo opened the Kamakura
Shogunate (Kamakura Bakufu) in 1192. Originally the Emperor and non-warrior nobility employed these warrior nobles. In time,
they amassed enough manpower, resources and political backing in the form of alliances with one another, to establish
the first samurai-dominated government. As the power of these regional clans grew, their chief was typically a distant
relative of the Emperor and a lesser member of either the Fujiwara, Minamoto, or Taira clans. Though originally sent
to provincial areas for a fixed four-year term as a magistrate, the toryo declined to return to the capital when
their terms ended, and their sons inherited their positions and continued to lead the clans in putting down rebellions
throughout Japan during the middle- and later-Heian period. Because of their rising military and economic power,
the warriors ultimately became a new force in the politics of the court. Their involvement in the Hōgen in the late
Heian period consolidated their power, and finally pitted the rival Minamoto and Taira clans against each other in
the Heiji Rebellion of 1160. The winner, Taira no Kiyomori, became an imperial advisor, and was the first warrior
to attain such a position. He eventually seized control of the central government, establishing the first samurai-dominated
government and relegating the Emperor to figurehead status. However, the Taira clan was still very conservative when
compared to its eventual successor, the Minamoto, and instead of expanding or strengthening its military might, the
clan had its women marry Emperors and exercise control through the Emperor. The Taira and the Minamoto clashed again
in 1180, beginning the Gempei War, which ended in 1185. Samurai fought at the naval battle of Dan-no-ura, at the
Shimonoseki Strait which separates Honshu and Kyushu in 1185. The victorious Minamoto no Yoritomo established the
superiority of the samurai over the aristocracy. In 1190 he visited Kyoto and in 1192 became Sei'i-taishōgun, establishing
the Kamakura Shogunate, or Kamakura Bakufu. Instead of ruling from Kyoto, he set up the Shogunate in Kamakura, near
his base of power. "Bakufu" means "tent government", taken from the encampments the soldiers would live in, in accordance
with the Bakufu's status as a military government. In 1274, the Mongol-founded Yuan dynasty in China sent a force
of some 40,000 men and 900 ships to invade Japan in northern Kyūshū. Japan mustered a mere 10,000 samurai to meet
this threat. The invading army was harassed by major thunderstorms throughout the invasion, which aided the defenders
by inflicting heavy casualties. The Yuan army was eventually recalled and the invasion was called off. The Mongol
invaders used small bombs, which was likely the first appearance of bombs and gunpowder in Japan. The Japanese defenders
recognized the possibility of a renewed invasion, and began construction of a great stone barrier around Hakata Bay
in 1276. Completed in 1277, this wall stretched for 20 kilometers around the border of the bay. This would later
serve as a strong defensive point against the Mongols. The Mongols attempted to settle matters in a diplomatic way
from 1275 to 1279, but every envoy sent to Japan was executed. This set the stage for one of the most famous engagements in Japanese history: the second Mongol invasion of 1281. In 1592, and again in 1597, Toyotomi Hideyoshi, aiming to invade China (唐入り) through Korea,
mobilized an army of 160,000 peasants and samurai and deployed them to Korea. (See Hideyoshi's invasions of Korea,
Chōsen-seibatsu (朝鮮征伐).) Taking advantage of arquebus mastery and extensive wartime experience from the Sengoku period,
Japanese samurai armies made major gains in most of Korea. Kato Kiyomasa advanced to Orangkai territory (present-day
Manchuria) bordering Korea to the northeast and crossed the border into Manchuria, but withdrew after retaliatory
attacks from the Jurchens there, as it was clear he had outpaced the rest of the Japanese invasion force. A few of
the more famous samurai generals of this war were Katō Kiyomasa, Konishi Yukinaga, and Shimazu Yoshihiro. Shimazu
Yoshihiro led some 7,000 samurai and, despite being heavily outnumbered, defeated a host of allied Ming and Korean
forces at the Battle of Sacheon in 1598, near the conclusion of the campaigns. Yoshihiro was feared as Oni-Shimazu
("Shimazu ogre") and his nickname spread across not only Korea but to Ming Dynasty China. In spite of the superiority
of Japanese land forces, ultimately the two expeditions failed (though they did devastate the Korean landmass) from
factors such as Korean naval superiority (which, led by Admiral Yi Sun-shin, harassed Japanese supply lines continuously
throughout the wars, resulting in supply shortages on land), the commitment of sizeable Ming forces to Korea, Korean
guerrilla actions, the underestimation of resistance by Japanese commanders (in the first campaign of 1592, Korean
defenses on land were caught unprepared, under-trained, and under-armed; they were rapidly overrun, with only a limited
number of successfully resistant engagements against the more-experienced and battle-hardened Japanese forces - in
the second campaign of 1597, Korean and Ming forces proved to be a far more difficult challenge and, with the support
of continued Korean naval superiority, limited Japanese gains to parts of southeastern Korea), and wavering Japanese
commitment to the campaigns as the wars dragged on. The final death blow to the Japanese campaigns in Korea came
with Hideyoshi's death in late 1598 and the recall of all Japanese forces in Korea by the Council of Five Elders
(established by Hideyoshi to oversee the transition from his regency to that of his son Hideyori). Notably, many samurai forces that were active throughout this period were not deployed to Korea; most importantly, the daimyo Tokugawa Ieyasu carefully kept forces under his command out of the Korean campaigns, and other samurai commanders who were opposed to Hideyoshi's domination of Japan either stalled on Hideyoshi's call to invade Korea or contributed only a small token force. Most commanders who opposed or otherwise resisted or resented Hideyoshi ended up as part of
the so-called Eastern Army, while commanders loyal to Hideyoshi and his son (a notable exception to this trend was
Katō Kiyomasa, who deployed with Tokugawa and the Eastern Army) were largely committed to the Western Army; the two
opposing sides (so named for the relative geographical locations of their respective commanders' domains) would later
clash, most notably at the Battle of Sekigahara, which was won by Tokugawa Ieyasu and the Eastern Forces, paving
the way for the establishment of the Tokugawa Shogunate. Oda Nobunaga made innovations in the fields of organization
and war tactics, heavily used arquebuses, developed commerce and industry and treasured innovation. Consecutive victories
enabled him to realize the termination of the Ashikaga Bakufu and the disarmament of the military powers of the Buddhist
monks, which had inflamed futile struggles among the populace for centuries. Attacking from the "sanctuary" of Buddhist temples, the warrior monks were a constant headache to any warlord, and even to the Emperor, who tried to control their actions. He
died in 1582 when one of his generals, Akechi Mitsuhide, turned upon him with his army. During the Tokugawa shogunate,
samurai increasingly became courtiers, bureaucrats, and administrators rather than warriors. With no warfare since
the early 17th century, samurai gradually lost their military function during the Tokugawa era (also called the Edo
period). By the end of the Tokugawa era, samurai were aristocratic bureaucrats for the daimyo, with their daisho,
the paired long and short swords of the samurai (cf. katana and wakizashi) becoming more of a symbolic emblem of
power rather than a weapon used in daily life. They still had the legal right to cut down any commoner who did not
show proper respect, kiri-sute gomen (斬り捨て御免), but to what extent this right was used is unknown. When the central
government forced daimyos to cut the size of their armies, unemployed rōnin became a social problem. Theoretical
obligations between a samurai and his lord (usually a daimyo) increased from the Genpei era to the Edo era. They
were strongly emphasized by the teachings of Confucius and Mencius (ca 550 BC), which were required reading for the
educated samurai class. Bushido was formalized by several influential leaders and families before the Edo Period.
Bushido was an ideal, and it remained fairly uniform from the 13th century to the 19th century — the ideals of Bushido
transcended social class, time and geographic location of the warrior class. The relative peace of the Tokugawa era
was shattered with the arrival of Commodore Matthew Perry's massive U.S. Navy steamships in 1853. Perry used his
superior firepower to force Japan to open its borders to trade. Prior to that only a few harbor towns, under strict
control from the Shogunate, were allowed to participate in Western trade, and even then, it was based largely on
the idea of playing the Franciscans and Dominicans off against one another (in exchange for the crucial arquebus
technology, which in turn was a major contributor to the downfall of the classical samurai). From 1854, the samurai
army and the navy were modernized. A naval training school was established in Nagasaki in 1855. Naval students were
sent to study in Western naval schools for several years, starting a tradition of foreign-educated future leaders,
such as Admiral Enomoto. French naval engineers were hired to build naval arsenals, such as Yokosuka and Nagasaki.
By the end of the Tokugawa shogunate in 1867, the Japanese navy of the shogun already possessed eight western-style
steam warships around the flagship Kaiyō Maru, which were used against pro-imperial forces during the Boshin War,
under the command of Admiral Enomoto. A French Military Mission to Japan (1867) was established to help modernize
the armies of the Bakufu. Emperor Meiji abolished the samurai's right to be the only armed force in favor of a more
modern, western-style, conscripted army in 1873. Samurai became Shizoku (士族) who retained some of their salaries,
but the right to wear a katana in public was eventually abolished along with the right to execute commoners who paid
them disrespect. The samurai finally came to an end after hundreds of years of enjoying their status, their powers,
and their ability to shape the government of Japan. However, the rule of the state by the military class was not
yet over. In defining how a modern Japan should be, members of the Meiji government decided to follow the footsteps
of the United Kingdom and Germany, basing the country on the concept of noblesse oblige. Samurai were not a political
force under the new order. With the Meiji reforms in the late 19th century, the samurai class was abolished, and
a western-style national army was established. The Imperial Japanese Army was conscripted, but many samurai volunteered
as soldiers, and many advanced to be trained as officers. Much of the Imperial Army officer class was of samurai origin, and its members were highly motivated, disciplined, and exceptionally trained. Many of the early exchange students were samurai, not directly because they were samurai, but because many samurai were literate and well-educated scholars.
Some of these exchange students started private schools for higher education, while many samurai took up pens instead
of guns and became reporters and writers, setting up newspaper companies, and others entered governmental service.
Some samurai became businessmen. For example, Iwasaki Yatarō, who was the great-grandson of a samurai, established
Mitsubishi. The philosophies of Buddhism and Zen, and to a lesser extent Confucianism and Shinto, influenced the
samurai culture. Zen meditation became an important teaching because it offered a process to calm one's mind. The
Buddhist concept of reincarnation and rebirth led samurai to abandon torture and needless killing, while some samurai
even gave up violence altogether and became Buddhist monks after realizing how fruitless their killings were. Some
were killed as they came to terms with these realizations on the battlefield. The most defining role that Confucianism
played in samurai philosophy was to stress the importance of the lord-retainer relationship—the loyalty that a samurai
was required to show his lord. In the 13th century, Hōjō Shigetoki (1198–1261 AD) wrote: "When one is serving officially
or in the master's court, he should not think of a hundred or a thousand people, but should consider only the importance
of the master." Carl Steenstrup noted that 13th and 14th century warrior writings (gunki) "portrayed the bushi in
their natural element, war, eulogizing such virtues as reckless bravery, fierce family pride, and selfless, at times
senseless devotion of master and man". Feudal lords such as Shiba Yoshimasa (1350–1410 AD) stated that a warrior
looked forward to a glorious death in the service of a military leader or the Emperor: "It is a matter of regret
to let the moment when one should die pass by....First, a man whose profession is the use of arms should think and
then act upon not only his own fame, but also that of his descendants. He should not scandalize his name forever
by holding his one and only life too dear....One's main purpose in throwing away his life is to do so either for
the sake of the Emperor or in some great undertaking of a military general. It is that exactly that will be the great
fame of one's descendants." "First of all, a samurai who dislikes battle and has not put his heart in the right place
even though he has been born in the house of the warrior, should not be reckoned among one's retainers....It is forbidden
to forget the great debt of kindness one owes to his master and ancestors and thereby make light of the virtues of
loyalty and filial piety....It is forbidden that one should...attach little importance to his duties to his master...There
is a primary need to distinguish loyalty from disloyalty and to establish rewards and punishments." Katō Kiyomasa
was one of the most powerful and well-known lords of the Sengoku Era. He commanded most of Japan's major clans during
the invasion of Korea (1592–1598). In a handbook he addressed to "all samurai, regardless of rank" he told his followers
that a warrior's only duty in life was to "...grasp the long and the short swords and to die". He also ordered his
followers to put forth great effort in studying the military classics, especially those related to loyalty and filial
piety. He is best known for his quote: "If a man does not investigate into the matter of Bushido daily, it will be
difficult for him to die a brave and manly death. Thus it is essential to engrave this business of the warrior into
one's mind well." Torii Mototada (1539–1600) was a feudal lord in the service of Tokugawa Ieyasu. On the eve of the
battle of Sekigahara, he volunteered to remain behind in the doomed Fushimi Castle while his lord advanced to the
east. Torii and Tokugawa both agreed that the castle was indefensible. In an act of loyalty to his lord, Torii chose
to remain behind, pledging that he and his men would fight to the finish. As was custom, Torii vowed that he would
not be taken alive. In a dramatic last stand, the garrison of 2,000 men held out against overwhelming odds for ten
days against the massive army of Ishida Mitsunari's 40,000 warriors. Before the battle, he left a moving last statement to his son Tadamasa. The rival of Takeda Shingen (1521–1573) was Uesugi Kenshin (1530–1578), a legendary Sengoku warlord well-versed
in the Chinese military classics and who advocated the "way of the warrior as death". Japanese historian Daisetz
Teitaro Suzuki describes Uesugi's beliefs as: "Those who are reluctant to give up their lives and embrace death are
not true warriors.... Go to the battlefield firmly confident of victory, and you will come home with no wounds whatever.
Engage in combat fully determined to die and you will be alive; wish to survive in the battle and you will surely
meet death. When you leave the house determined not to see it again you will come home safely; when you have any
thought of returning you will not return. You may not be in the wrong to think that the world is always subject to
change, but the warrior must not entertain this way of thinking, for his fate is always determined." Historian H.
Paul Varley notes the description of Japan given by Jesuit leader St. Francis Xavier (1506–1552): "There is no nation
in the world which fears death less." Xavier further describes the honour and manners of the people: "I fancy that
there are no people in the world more punctilious about their honour than the Japanese, for they will not put up
with a single insult or even a word spoken in anger." Xavier spent the years 1549–1551 converting Japanese to Christianity.
He also observed: "The Japanese are much braver and more warlike than the people of China, Korea, Ternate and all
of the other nations around the Philippines." In December 1547, Francis was in Malacca (Malaysia) waiting to return
to Goa (India) when he met a low-ranked samurai named Anjiro (possibly spelled "Yajiro"). Anjiro was not an intellectual,
but he impressed Xavier because he took careful notes of everything he said in church. Xavier made the decision to
go to Japan in part because this low-ranking samurai convinced him in Portuguese that the Japanese people were highly
educated and eager to learn. They were hard workers and respectful of authority. In their laws and customs they were
led by reason, and, should the Christian faith convince them of its truth, they would accept it en masse. In his
book "Ideals of the Samurai" translator William Scott Wilson states: "The warriors in the Heike Monogatari served
as models for the educated warriors of later generations, and the ideals depicted by them were not assumed to be
beyond reach. Rather, these ideals were vigorously pursued in the upper echelons of warrior society and recommended
as the proper form of the Japanese man of arms. With the Heike Monogatari, the image of the Japanese warrior in literature
came to its full maturity." Wilson then translates the writings of several warriors who mention the Heike Monogatari
as an example for their men to follow. As aristocrats for centuries, samurai developed their own cultures that influenced
Japanese culture as a whole. The culture associated with the samurai such as the tea ceremony, monochrome ink painting,
rock gardens and poetry were adopted by warrior patrons throughout the period 1200–1600. These practices were
adapted from the Chinese arts. Zen monks introduced them to Japan and they were allowed to flourish due to the interest
of powerful warrior elites. Musō Soseki (1275–1351) was a Zen monk who was advisor to both Emperor Go-Daigo and General
Ashikaga Takauji (1304–58). Musō, as well as other monks, acted as a political and cultural diplomat between Japan
and China. Musō was particularly well known for his garden design. Another Ashikaga patron of the arts was Yoshimasa.
His cultural advisor, the Zen monk Zeami, introduced the tea ceremony to him. Previously, tea had been used primarily by Buddhist monks to stay awake during meditation. A samurai's full name combined his clan name, any official title, and his formal and adult personal names. For example, the full name of Oda Nobunaga would be "Oda Kazusanosuke
Saburo Nobunaga" (織田上総介三郎信長), in which "Oda" is a clan or family name, "Kazusanosuke" is a title of vice-governor
of Kazusa province, "Saburo" is a formal nickname (yobina), and "Nobunaga" is an adult name (nanori) given at genpuku,
the coming of age ceremony. A man was addressed by his family name and his title, or by his yobina if he did not
have a title. However, the nanori was a private name that could be used by only a very few, including the Emperor.
A samurai could take concubines but their backgrounds were checked by higher-ranked samurai. In many cases, taking
a concubine was akin to a marriage. Kidnapping a concubine, although common in fiction, would have been shameful,
if not criminal. If the concubine was a commoner, a messenger was sent with betrothal money or a note for exemption
of tax to ask for her parents' acceptance. Even though the woman would not be a legal wife, a situation normally
considered a demotion, many wealthy merchants believed that being the concubine of a samurai was superior to being
the legal wife of a commoner. When a merchant's daughter married a samurai, her family's money erased the samurai's
debts, and the samurai's social status improved the standing of the merchant family. If a samurai's commoner concubine
gave birth to a son, the son could inherit his father's social status. A samurai could divorce his wife for a variety
of reasons with approval from a superior, but divorce was, while not entirely nonexistent, a rare event. A wife's
failure to produce a son was cause for divorce, but adoption of a male heir was considered an acceptable alternative
to divorce. A samurai could divorce for personal reasons, even if he simply did not like his wife, but this was generally
avoided as it would embarrass the person who had arranged the marriage. A woman could also arrange a divorce, although
it would generally take the form of the samurai divorcing her. After a divorce samurai had to return the betrothal
money, which often prevented divorces. Maintaining the household was the main duty of samurai women. This was especially
crucial during early feudal Japan, when warrior husbands were often traveling abroad or engaged in clan battles.
The wife, or okugatasama (meaning: one who remains in the home), was left to manage all household affairs, care for
the children, and perhaps even defend the home forcibly. For this reason, many women of the samurai class were trained
in wielding a polearm called a naginata or a special knife called the kaiken in an art called tantojutsu (lit. the
skill of the knife), which they could use to protect their household, family, and honor if the need arose. Traits
valued in women of the samurai class were humility, obedience, self-control, strength, and loyalty. Ideally, a samurai
wife would be skilled at managing property, keeping records, dealing with financial matters, educating the children
(and perhaps servants, too), and caring for elderly parents or in-laws that may be living under her roof. Confucian
law, which helped define personal relationships and the code of ethics of the warrior class, required that a woman
show subservience to her husband, filial piety to her parents, and care to the children. Too much love and affection
was also said to indulge and spoil the youngsters. Thus, a woman was also to exercise discipline. This does not mean
that samurai women were always powerless. Powerful women both wisely and unwisely wielded power on various occasions.
After Ashikaga Yoshimasa, 8th shogun of the Muromachi shogunate, lost interest in politics, his wife Hino Tomiko
largely ruled in his place. Nene, wife of Toyotomi Hideyoshi, was known to overrule her husband's decisions at times
and Yodo-dono, his concubine, became the de facto master of Osaka castle and the Toyotomi clan after Hideyoshi's
death. Tachibana Ginchiyo was chosen to lead the Tachibana clan after her father's death. Chiyo, wife of Yamauchi
Kazutoyo, has long been considered the ideal samurai wife. According to legend, she made her kimono out of a quilted
patchwork of bits of old cloth and saved pennies to buy her husband a magnificent horse, on which he rode to many
victories. That Chiyo (though she is better known as "Wife of Yamauchi Kazutoyo") is held in such high esteem for her economic sense is illuminating given that she never produced an heir and the Yamauchi clan was succeeded by Kazutoyo's younger brother. Women's source of power may have been that samurai left their
finances to their wives. As the Tokugawa period progressed, more value was placed on education, and the education
of females beginning at a young age became important to families and society as a whole. Marriage criteria began
to weigh intelligence and education as desirable attributes in a wife, right along with physical attractiveness.
Though many of the texts written for women during the Tokugawa period only pertained to how a woman could become
a successful wife and household manager, there were those who undertook the challenge of learning to read, and also
tackled philosophical and literary classics. Nearly all women of the samurai class were literate by the end of the
Tokugawa period. The English sailor and adventurer William Adams (1564–1620) was the first Westerner to receive the
dignity of samurai. The Shogun Tokugawa Ieyasu presented him with two swords representing the authority of a samurai,
and decreed that William Adams the sailor was dead and that Anjin Miura (三浦按針), a samurai, was born. Adams also received
the title of hatamoto (bannerman), a high-prestige position as a direct retainer in the Shogun's court. He was provided
with generous revenues: "For the services that I have done and do daily, being employed in the Emperor's service,
the Emperor has given me a living" (Letters). He was granted a fief in Hemi (逸見) within the boundaries of present-day
Yokosuka City, "with eighty or ninety husbandmen, that be my slaves or servants" (Letters). His estate was valued
at 250 koku. He finally wrote "God hath provided for me after my great misery", (Letters) by which he meant the disaster-ridden
voyage that initially brought him to Japan. Jan Joosten van Lodensteijn (1556?–1623?), a Dutch colleague of Adams'
on their ill-fated voyage to Japan in the ship De Liefde, was also given similar privileges by Tokugawa Ieyasu. It
appears Joosten became a samurai[citation needed] and was given a residence within Ieyasu's castle at Edo. Today,
this area at the east exit of Tokyo Station is known as Yaesu (八重洲). Yaesu is a corruption of the Dutchman's Japanese
name, Yayousu (耶楊子). Like Adams, Joosten was given a Red Seal Ship (朱印船) allowing him to trade between Japan and Indo-China. On a return journey from Batavia, Joosten drowned after his ship ran aground. During the Boshin War,
the Prussian Edward Schnell served the Aizu domain as a military instructor and procurer of weapons. He was granted
the Japanese name Hiramatsu Buhei (平松武兵衛), which inverted the characters of the daimyo's name Matsudaira. Hiramatsu
(Schnell) was given the right to wear swords, as well as a residence in the castle town of Wakamatsu, a Japanese
wife, and retainers. In many contemporary references, he is portrayed wearing a Japanese kimono, overcoat, and swords,
with Western riding trousers and boots. As far back as the seventh century, Japanese warriors wore a form of lamellar armor, which eventually evolved into the armor worn by the samurai. The first types of Japanese armor identified as samurai armor were known as yoroi. These early samurai armors were made from small individual scales known as kozane. The kozane were made from either iron or leather and were bound together into small strips; the strips were coated with lacquer to protect the kozane from water. A series of strips of kozane were then laced together with
silk or leather lace and formed into a complete chest armor (dou or dō). In the 1500s a new type of armor started
to become popular due to the advent of firearms, new fighting tactics and the need for additional protection. The
kozane dou made from individual scales was replaced by plate armor. This new armor, which used an iron-plated dou (dō),
was referred to as Tosei-gusoku, or modern armor. Various other components of armor protected the samurai's body.
The helmet kabuto was an important part of the samurai's armor. Samurai armor changed and developed as the methods
of samurai warfare changed over the centuries. The last known use of samurai armor occurred in 1877, during the Satsuma Rebellion. As the last samurai rebellion was crushed, Japan modernized its defenses and turned to a national conscription
army that used uniforms. The term samurai originally meant "those who serve in close attendance to nobility", and
was written with a Chinese character (or kanji) that had the same meaning. In Japanese, it was originally recorded
in the Nara Period as a verb *samorapu ("to watch, to keep watch, to observe, to be on the lookout for something;
to serve, to attend"), which is believed to be derived from the frequentative form (*morapu 守らふ) of the verb moru
(守る, "to watch, to guard, to be on the lookout; to keep, to protect, to take care of, to be in charge of, to have
as one's ward"). By the Heian period, this word had developed into the verb saburahu (さぶらふ, "to serve, to attend"),
from which a deverbal noun saburahi (さぶらひ, "servant, attendant") was later derived, and this noun then yielded samurahi
(さむらひ) in the Edo period. In Japanese literature, there is an early reference to samurai in the Kokinshū (古今集, early 10th century). Bushi was the name given to the ancient Japanese soldiers from traditional warrior families. The bushi
class was developed mainly in the north of Japan. They formed powerful clans, which in the 12th century were against
the noble families who were grouping themselves to support the imperial family who lived in Kyoto. Samurai was a
word used by the Kuge aristocratic class, with warriors themselves preferring the word bushi. The term Bushidō, the
"way of the warrior", is derived from this term and the mansion of a warrior was called bukeyashiki. Most samurai
were bound by a code of honor and were expected to set an example for those below them. A notable part of their code
is seppuku (切腹) or hara-kiri, which allowed a disgraced samurai to regain his honor by passing into death, where samurai were still beholden to social rules. Whilst there are many romanticized characterizations of samurai behavior, such as the writing of Bushido (武士道, Bushidō) in 1905, studies of Kobudo and traditional Budō indicate that the samurai were as practical on the battlefield as any other warriors. Despite the rampant romanticism
of the 20th century, samurai could be disloyal and treacherous (e.g., Akechi Mitsuhide), cowardly, brave, or overly
loyal (e.g., Kusunoki Masashige). Samurai were usually loyal to their immediate superiors, who in turn allied themselves
with higher lords. These loyalties to the higher lords often shifted; for example, the high lords allied under Toyotomi
Hideyoshi (豊臣秀吉) were served by loyal samurai, but the feudal lords under them could shift their support to Tokugawa,
taking their samurai with them. There were, however, also notable instances where samurai would be disloyal to their
lord or daimyo, when loyalty to the Emperor was seen to have supremacy. Jidaigeki (literally, historical drama) has always been a staple of Japanese film and television. The programs typically feature a samurai. Samurai films and westerns share a number of similarities, and the two have influenced each other over the years. One of Japan’s most renowned directors, Akira Kurosawa, greatly influenced samurai portrayals in western film-making.
George Lucas’ Star Wars series incorporated many aspects from the Seven Samurai film. One example is that in the
Japanese film, seven samurai warriors are hired by local farmers to protect their land from being overrun by bandits;
In George Lucas’ Star Wars: A New Hope, a similar situation arises. Kurosawa was inspired by the works of director
John Ford and in turn Kurosawa's works have been remade into westerns such as The Seven Samurai into The Magnificent
Seven and Yojimbo into A Fistful of Dollars. There is also a 26 episode anime adaptation (Samurai 7) of The Seven
Samurai. Along with film, literature containing samurai influences are seen as well. Most common are historical works
where the protagonist is either a samurai or former samurai (or another rank or position) who possesses considerable
martial skill. Eiji Yoshikawa is one of the most famous Japanese historical novelists. His retellings of popular
works, including Taiko, Musashi and Heike Tale, are popular among readers for their epic narratives and rich realism
in depicting samurai and warrior culture. The samurai have also appeared frequently in Japanese
comics (manga) and animation (anime). Samurai-like characters are not restricted to historical settings; a number of works set in the modern age, and even the future, include characters who live, train and fight like samurai.
Examples are Samurai Champloo, Requiem from the Darkness, Muramasa: The Demon Blade, and Afro Samurai. Some of these
works have made their way to the West, where they have been increasing in popularity in America. In the last two decades, samurai have become more popular in America. “Hyperbolizing the samurai in such a way that they
appear as a whole to be a loyal body of master warriors provides international interest in certain characters due
to admirable traits” (Moscardi, N.D.). Through various media, producers and writers have been capitalizing on the
notion that Americans admire the samurai lifestyle. The animated series, Afro Samurai, became well-liked in American
popular culture due to its blend of hack-and-slash animation and gritty urban music. Created by Takashi Okazaki,
Afro Samurai was initially a doujinshi, or manga series, which was then made into an animated series by Studio Gonzo.
In 2007 the animated series debuted on American cable television on the Spike TV channel (Denison, 2010). The series
was produced for American viewers which “embodies the trend... comparing hip-hop artists to samurai warriors, an
image some rappers claim for themselves” (Solomon, 2009). The storyline keeps in tone with the perception of a samurai finding vengeance against someone who has wronged him. Starring the voice of well-known American actor Samuel L.
Jackson, “Afro is the second-strongest fighter in a futuristic, yet, still feudal Japan and seeks revenge upon the
gunman who killed his father” (King 2008). Due to its popularity, Afro Samurai was adapted into a full-length animated film and also became a title on gaming consoles such as the PlayStation 3 and Xbox. Not only has samurai culture been adapted into animation and video games, it can also be seen in comic books. American comic books have adopted the character type for stories of their own, like the mutant villain Silver Samurai of Marvel Comics. The design of
this character preserves the samurai appearance; the villain is “Clad in traditional gleaming samurai armor and wielding
an energy charged katana” (Buxton, 2013). The Silver Samurai has made over 350 comic book appearances, and the character is also playable in several video games, such as Marvel vs. Capcom 1 and 2. In 2013, the samurai villain
was depicted in James Mangold’s film The Wolverine. Ten years before The Wolverine debuted, another film helped make the samurai known to American cinema: The Last Samurai (2003), starring Tom Cruise, is inspired by the samurai way of life. In the film, Cruise’s character finds himself deeply
immersed in samurai culture. The character in the film, “Nathan Algren, is a fictional contrivance to make nineteenth-century
Japanese history less foreign to American viewers" (Ravina, 2010). After being captured by a group of samurai rebels,
he becomes empathetic towards the cause they fight for. Set during the Meiji period, the film follows US Army Captain Nathan Algren, who travels to Japan to train a rookie army in fighting off samurai rebel
groups. Becoming a product of his environment, Algren joins the samurai clan in an attempt to rescue a captured samurai
leader. “By the end of the film, he has clearly taken on many of the samurai traits, such as zen-like mastery of
the sword, and a budding understanding of spirituality” (Manion, 2006).
As the number of possible tests for even simple software components is practically infinite, all software testing uses some
strategy to select tests that are feasible for the available time and resources. As a result, software testing typically
(but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors
or other defects). Testing is an iterative process: when one bug is fixed, it can illuminate other,
deeper bugs, or can even create new ones. Although testing can determine the correctness of software under the assumption
of some specific hypotheses (see hierarchy of testing difficulty below), testing cannot identify all the defects
within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product
against oracles—principles or mechanisms by which someone might recognize a problem. These oracles may include (but
are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences
about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other
criteria. A primary purpose of testing is to detect software failures so that defects may be discovered and corrected.
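One such oracle is a trusted reference implementation: the same input is fed to the product and to the oracle, and any mismatch is flagged as a problem. A minimal sketch, assuming a hand-written sort as the product under test and Python's built-in `sorted` as the oracle (both are illustrative choices, not from the text):

```python
import random

def insertion_sort(items):
    """Product under test: a hand-written sort."""
    result = list(items)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result

def check_against_oracle(trials=100):
    """Compare the product's output with the oracle (built-in sorted)."""
    for _ in range(trials):
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        assert insertion_sort(data) == sorted(data), f"mismatch on {data}"

check_against_oracle()
```

Note that the oracle does not say *why* an output is wrong; it only lets the tester recognize that a problem exists, which matches the definition above.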
Testing cannot establish that a product functions properly under all conditions but can only establish that it does
not function properly under specific conditions. The scope of software testing often includes examination of code
as well as execution of that code in various environments and conditions as well as examining the aspects of code:
does it do what it is supposed to do and do what it needs to do. In the current culture of software development,
a testing organization may be separate from the development team. There are various roles for testing team members.
Information derived from software testing may be used to correct the process by which software is developed. Software
faults occur through the following processes. A programmer makes an error (mistake), which results in a defect (fault,
bug) in the software source code. If this defect is executed, in certain situations the system will produce wrong
results, causing a failure. Not all defects will necessarily result in failures. For example, defects in dead code
will never result in failures. A defect can turn into a failure when the environment is changed. Examples of these
changes in environment include the software being run on a new computer hardware platform, alterations in source
data, or interacting with different software. A single defect may result in a wide range of failure symptoms. A fundamental
problem with software testing is that testing under all combinations of inputs and preconditions (initial state)
is not feasible, even with a simple product. This means that the number of defects in a software product can
be very large and defects that occur infrequently are difficult to find in testing. More significantly, non-functional
dimensions of quality (how it is supposed to be versus what it is supposed to do)—usability, scalability, performance,
compatibility, reliability—can be highly subjective; something that constitutes sufficient value to one person may
be intolerable to another. Software developers can't test everything, but they can use combinatorial test design
to identify the minimum number of tests needed to get the coverage they want. Combinatorial test design enables users
to get greater test coverage with fewer tests. Whether they are looking for speed or test depth, they can use combinatorial
test design methods to build structured variation into their test cases. Note that "coverage", as used here, is referring
to combinatorial coverage, not requirements coverage. It is commonly believed that the earlier a defect is found,
the cheaper it is to fix it. The following table shows the cost of fixing the defect depending on the stage it was
found. For example, if a problem in the requirements is found only post-release, then it would cost 10–100 times
more to fix than if it had already been found by the requirements review. With the advent of modern continuous deployment
practices and cloud-based services, the cost of re-deployment and maintenance may lessen over time. There are many
approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing,
whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static
testing is often implicit, as in proofreading, or when programming tools and text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when
the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular
sections of code, applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers
or execution from a debugger environment. White-box testing (also known as clear box testing, glass box testing,
transparent box testing and structural testing, by seeing the source code) tests internal structures or workings
of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective
of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise
paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g.
in-circuit testing (ICT). Black-box testing treats the software as a "black box", examining functionality without
any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the
software is supposed to do, not how it does it. Black-box testing methods include: equivalence partitioning, boundary
value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing,
use case testing, exploratory testing and specification-based testing. Specification-based testing aims to test the
functionality of software according to the applicable requirements. This level of testing usually requires thorough
test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or
behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built
around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions
of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional
or non-functional, though usually functional. One advantage of the black box technique is that no programming knowledge
is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize
different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark
labyrinth without a flashlight." Because they do not examine the source code, there are situations when a tester
writes many test cases to check something that could have been tested by only one test case, or leaves some parts
of the program untested. Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal
data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box
level. The tester is not required to have full access to the software's source code. Manipulating
input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of
the "black box" that we are calling the system under test. This distinction is particularly important when conducting
integration testing between two modules of code written by two different developers, where only the interfaces are
exposed for test. By knowing the underlying concepts of how the software works, the tester makes better-informed
testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up
an isolated testing environment with activities such as seeding a database. The tester can observe the state of the
product being tested after performing certain actions such as executing SQL statements against the database and then
executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent
test scenarios, based on limited information. This will particularly apply to data type handling, exception handling,
and so on. There are generally four recognized levels of tests: unit testing, integration testing, component interface
testing, and system testing. Tests are frequently grouped by where they are added in the software development process,
or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK
guide are unit-, integration-, and system testing that are distinguished by the test target without implying a specific
process model. Other test levels are classified by the testing objective. Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level; in an object-oriented environment, this is usually the class level. These tests are typically written by developers as they work on code, to ensure that the specific section works as expected and to eliminate construction errors before code is promoted to QA, increasing both the quality of the resulting software and the efficiency of the overall development and QA process. The practice of component interface testing can be used to check the handling of data passed between
various units, or subsystem components, beyond full integration testing between those units. The data being passed
can be considered as "message packets" and the range or data types can be checked, for data generated from one unit,
and tested for validity before being passed into another unit. One option for interface testing is to keep a separate
log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data
passed between units for days or weeks. Tests can include checking the handling of some extreme data values while
other interface variables are passed as normal values. Unusual data values in an interface can help explain unexpected
performance in the next unit. Component interface testing is a variation of black-box testing, with the focus on
the data values beyond just the related actions of a subsystem component. Operational acceptance testing is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is
a common type of non-functional software testing, used mainly in software development and software maintenance projects.
This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of
the production environment. Hence, it is also known as operational readiness testing (ORT) or Operations readiness
and assurance (OR&A) testing. Functional testing within OAT is limited to those tests which are required to verify
the non-functional aspects of the system. A common cause of software failure (real or perceived) is a lack of its
compatibility with other application software, operating systems (or operating system versions, old or new), or target
environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the
desktop now being required to become a web application, which must render in a web browser). For example, in the
case of a lack of backward compatibility, this can occur because the programmers develop and test software only on
the latest version of the target environment, which not all users may be running. This results in the unintended
consequence that the latest work may not function on earlier versions of the target environment, or on older hardware
that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively
abstracting operating system functionality into a separate program module or library. Regression testing focuses
on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions,
such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality
that was previously working correctly, stops working as intended. Typically, regressions occur as an unintended consequence
of program changes, when the newly developed part of the software collides with the previously existing code. Common
methods of regression testing include re-running previous sets of test-cases and checking whether previously fixed
faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added
features. Regression tests can either be complete, for changes added late in the release or deemed to be risky, or very shallow,
consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
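The simplest form of this is a stored suite of input/expected-output pairs re-run after every change; a non-empty failure list signals a regression. A hypothetical sketch (the `slugify` function, its cases, and the bug number are invented for illustration):

```python
def slugify(title):
    """Function under maintenance: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Regression suite: cases captured from earlier releases, including
# inputs that once triggered bugs that were later fixed.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  leading spaces", "leading-spaces"),  # once regressed: bug #123 (hypothetical)
    ("", ""),
]

def run_regression_suite():
    """Re-run every stored case; return the cases that no longer pass."""
    return [(inp, expected, slugify(inp))
            for inp, expected in REGRESSION_CASES
            if slugify(inp) != expected]

assert run_regression_suite() == []  # empty: no previously working behavior broke
```

In practice such suites grow with each release, which is why regression testing tends to dominate the total test effort, as noted below.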
Regression testing is typically the largest test effort in commercial software development, due to checking numerous
details in prior software features, and even new software can be developed while using some old test-cases to test
parts of the new design to ensure prior functionality is still supported. Beta testing comes after alpha testing
and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions,
are released to a limited audience outside of the programming team known as beta testers. The software is released
to groups of people so that further testing can ensure the product has few faults or bugs. Beta versions can be made
available to the open public to increase the feedback field to a maximal number of future users and to deliver value
earlier, for an extended or even indefinite period of time (perpetual beta). Destructive testing
attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when
it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management
routines. Software fault injection, in the form of fuzzing, is an example of failure testing. Various
commercial non-functional testing tools are linked from the software fault injection page; there are also numerous
open-source and free software tools available that perform destructive testing. Load testing is primarily concerned
with testing that the system can continue to operate under a specific load, whether that be large quantities of data
or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity over an extended period, is often referred to as endurance testing. Volume testing is a way
to test software functions even when certain components (for example a file or database) increase radically in size.
Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred
to as load or endurance testing) checks to see whether the software can continue to function well over an acceptably long period. Development Testing is a software development process that involves synchronized application of a broad spectrum
of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It
is performed by the software developer or engineer during the construction phase of the software development lifecycle.
Rather than replace traditional QA focuses, it augments it. Development Testing aims to eliminate construction errors
before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well
as the efficiency of the overall development and QA process. In contrast, some emerging software disciplines such
as extreme programming and the agile software development movement, adhere to a "test-driven software development"
model. In this process, unit tests are written first, by the software engineers (often with pair programming in the
extreme programming methodology). Of course these tests fail initially, as they are expected to. Then, as code is
written it passes incrementally larger portions of the test suites. The test suites are continuously updated as new
failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed.
Unit tests are maintained along with the rest of the software source code and generally integrated into the build
process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate
goal of this test process is to achieve continuous integration where software updates can be published to the public
frequently. Bottom-up testing is an approach to integration testing where the lowest-level components (modules, procedures,
and functions) are tested first, then integrated and used to facilitate the testing of higher level components. After
the integration testing of lower level integrated modules, the next level of modules will be formed and can be used
for integration testing. The process is repeated until the components at the top of the hierarchy are tested. This
approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage. In the hierarchy of testing difficulty, it has been proved that each class is strictly included in the next.
For instance, testing when we assume that the behavior of the implementation under test can be denoted by a deterministic
finite-state machine for some known finite sets of inputs and outputs and with some known number of states belongs
to Class I (and all subsequent classes). However, if the number of states is not known, then it only belongs to all
classes from Class II on. If the implementation under test must be a deterministic finite-state machine failing the
specification for a single trace (and its continuations), and its number of states is unknown, then it only belongs
to classes from Class III on. Testing temporal machines where transitions are triggered if inputs are produced within
some real-bounded interval only belongs to classes from Class IV on, whereas testing many non-deterministic systems
only belongs to Class V (but not all, and some even belong to Class I). The inclusion into Class I does not require
the simplicity of the assumed computation model, as some testing cases involving implementations written in any programming
language, and testing implementations defined as machines depending on continuous magnitudes, have been proved to
be in Class I. Other elaborated cases, such as the testing framework by Matthew Hennessy under must semantics, and
temporal machines with rational timeouts, belong to Class II. Several certification programs exist to support the
professional aspirations of software testers and quality assurance specialists. No certification now offered actually
requires the applicant to show their ability to test software. No certification is based on a widely accepted body
of knowledge. This has led some to declare that the testing field is not ready for certification. Certification itself
cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence,
or professionalism as a tester. Software testing is a part of the software quality assurance (SQA) process. In
SQA, software process specialists and auditors are concerned with the software development process rather than just
the artifacts such as documentation, code and systems. They examine and change the software engineering process itself
to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes
an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have much
higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments
often exist independently, and there may be no SQA function in some companies.
Germany is a federal republic consisting of sixteen federal states (German: Bundesland, or Land). Since today's Germany
was formed from an earlier collection of several states, it has a federal constitution, and the constituent states
retain a measure of sovereignty. With an emphasis on geographical conditions, Berlin and Hamburg are frequently called
Stadtstaaten (city-states), as is the Free Hanseatic City of Bremen, which in fact includes the cities of Bremen
and Bremerhaven. The remaining 13 states are called Flächenländer (literally: area states). The Federal Republic of Germany was created in 1949 through the unification of the western states (which were previously under American, British, and French administration) formed in the aftermath of World War II. Initially, in 1949, the states of the
Federal Republic were Baden, Bavaria (in German: Bayern), Bremen, Hamburg, Hesse (Hessen), Lower Saxony (Niedersachsen),
North Rhine Westphalia (Nordrhein-Westfalen), Rhineland-Palatinate (Rheinland-Pfalz), Schleswig-Holstein, Württemberg-Baden,
and Württemberg-Hohenzollern. West Berlin, while officially not part of the Federal Republic, was largely integrated
and considered as a de facto state. In 1952, following a referendum, Baden, Württemberg-Baden, and Württemberg-Hohenzollern
merged into Baden-Württemberg. In 1957, the Saar Protectorate rejoined the Federal Republic as the Saarland. German
reunification in 1990, in which the German Democratic Republic (East Germany) acceded to the Federal Republic,
resulted in the addition of the re-established eastern states of Brandenburg, Mecklenburg-West Pomerania (in German
Mecklenburg-Vorpommern), Saxony (Sachsen), Saxony-Anhalt (Sachsen-Anhalt), and Thuringia (Thüringen), as well as
the reunification of West and East Berlin into Berlin and its establishment as a full and equal state. A regional
referendum in 1996 to merge Berlin with surrounding Brandenburg as "Berlin-Brandenburg" failed to reach the necessary
majority vote in Brandenburg, while a majority of Berliners voted in favour of the merger. Federalism is one of the
entrenched constitutional principles of Germany. According to the German constitution (called Grundgesetz or in English
Basic Law), some topics, such as foreign affairs and defense, are the exclusive responsibility of the federation
(i.e., the federal level), while others fall under the shared authority of the states and the federation; the states
retain residual legislative authority for all other areas, including "culture", which in Germany includes not only
topics such as financial promotion of arts and sciences, but also most forms of education and job training. Though
international relations including international treaties are primarily the responsibility of the federal level, the
constituent states have certain limited powers in this area: in matters that affect them directly, the states defend
their interests at the federal level through the Bundesrat (literally Federal Council, the upper house of the German
Federal Parliament) and in areas where they have legislative authority they have limited powers to conclude international
treaties "with the consent of the federal government". The use of the term Länder (Lands) dates back to the Weimar
Constitution of 1919. Before this time, the constituent states of the German Empire were called Staaten (States).
Today, it is very common to use the term Bundesland (Federal Land). However, this term is used officially neither by the constitution of 1919 nor by the Basic Law (Constitution) of 1949. Three Länder call themselves Freistaaten
(Free States, which is the old-fashioned German expression for Republic), Bavaria (since 1919), Saxony (originally
since 1919 and again since 1990), and Thuringia (since 1994). There is little continuity between the current states
and their predecessors of the Weimar Republic with the exception of the three free states, and the two city-states
of Hamburg and Bremen. A new delimitation of the federal territory continues to be debated in Germany, though "Some scholars
note that there are significant differences among the American states and regional governments in other federations
without serious calls for territorial changes ...", as political scientist Arthur B. Gunlicks remarks. He summarizes
the main arguments for boundary reform in Germany: "... the German system of dual federalism requires strong Länder
that have the administrative and fiscal capacity to implement legislation and pay for it from own source revenues.
Too many Länder also make coordination among them and with the federation more complicated ...". But several proposals
have failed so far; territorial reform remains a controversial topic in German politics and public perception. Federalism
has a long tradition in German history. The Holy Roman Empire comprised many petty states numbering more than 300
around 1796. The number of territories was greatly reduced during the Napoleonic Wars (1796–1814). After the Congress
of Vienna (1815), 39 states formed the German Confederation. The Confederation was dissolved after the Austro-Prussian
War and replaced by a North German Federation under Prussian hegemony; this war left Prussia dominant in Germany,
and German nationalism would compel the remaining independent states to ally with Prussia in the Franco-Prussian
War of 1870–71, and then to accede to the crowning of King Wilhelm of Prussia as German Emperor. The new German Empire
included 25 states (three of them, Hanseatic cities) and the imperial territory of Alsace-Lorraine. The empire was
dominated by Prussia, which controlled 65% of the territory and 62% of the population. After the territorial losses
of the Treaty of Versailles, the remaining states continued as republics of a new German federation. These states
were gradually de facto abolished and reduced to provinces under the Nazi regime via the Gleichschaltung process,
as the states administratively were largely superseded by the Nazi Gau system. During the Allied occupation of Germany
after World War II, internal borders were redrawn by the Allied military governments. No single state comprised more
than 30% of either population or territory; this was intended to prevent any one state from being as dominant within
Germany as Prussia had been in the past. Initially, only seven of the pre-War states remained: Baden (in part), Bavaria
(reduced in size), Bremen, Hamburg, Hesse (enlarged), Saxony, and Thuringia. The states with hyphenated names, such
as Rhineland-Palatinate, North Rhine-Westphalia, and Saxony-Anhalt, owed their existence to the occupation powers
and were created out of mergers of former Prussian provinces and smaller states. Former German territories that lay
east of the Oder-Neisse Line fell under either Polish or Soviet administration but attempts were made at least symbolically
not to abandon sovereignty well into the 1960s. However, no attempts were made to establish new states in these territories
as they lay outside the jurisdiction of West Germany at that time. Upon its founding in 1949, West Germany had eleven
states. These were reduced to nine in 1952 when three south-western states (South Baden, Württemberg-Hohenzollern,
and Württemberg-Baden) merged to form Baden-Württemberg. From 1957, when the French-occupied Saar Protectorate was
returned and formed into the Saarland, the Federal Republic consisted of ten states, which are referred to as the
"Old States" today. West Berlin was under the sovereignty of the Western Allies and neither a Western German state
nor part of one. However, it was in many ways de facto integrated with West Germany under a special status. Later,
the constitution was amended to state that the citizens of the 16 states had successfully achieved the unity of Germany
in free self-determination and that the Basic Law thus applied to the entire German people. Article 23, which had
allowed "any other parts of Germany" to join, was rephrased. It had been used in 1957 to reintegrate the Saar Protectorate
as the Saarland into the Federal Republic, and this was used as a model for German reunification in 1990. The amended
article now defines the participation of the Federal Council and the 16 German states in matters concerning the European
Union. A new delimitation of the federal territory has been discussed since the Federal Republic was founded in 1949
and even before. Committees and expert commissions advocated a reduction of the number of states; academics (Rutz,
Miegel, Ottnad etc.) and politicians (Döring, Apel, and others) made proposals – some of them far-reaching – for
redrawing boundaries but hardly anything came of these public discussions. Territorial reform is sometimes propagated
by the richer states as a means to avoid or reduce fiscal transfers. The debate on a new delimitation of the German
territory started in 1919 as part of discussions about the new constitution. Hugo Preuss, the father of the Weimar
Constitution, drafted a plan to divide the German Reich into 14 roughly equal-sized states. His proposal was turned
down due to opposition of the states and concerns of the government. Article 18 of the constitution enabled a new
delimitation of the German territory but set high hurdles: three-fifths of the votes cast, amounting to at least a majority of the population, were necessary to decide on an alteration of territory. In fact, until 1933 there were only four
changes in the configuration of the German states: the seven Thuringian states were merged in 1920, whereby Coburg opted
for Bavaria, Pyrmont joined Prussia in 1922, and Waldeck did so in 1929. Any later plans to break up the dominating
Prussia into smaller states failed because political circumstances were not favorable to state reforms. After the
Nazi Party seized power in January 1933, the Länder increasingly lost importance. They became administrative regions
of a centralised country. Three changes are of particular note: on January 1, 1934, Mecklenburg-Schwerin was united
with the neighbouring Mecklenburg-Strelitz; and, by the Greater Hamburg Act (Groß-Hamburg-Gesetz), from April 1,
1937, the area of the city-state was extended, while Lübeck lost its independence and became part of the Prussian
province of Schleswig-Holstein. As the premiers did not come to an agreement on this question, the Parliamentary
Council was supposed to address this issue. Its provisions are reflected in Article 29. There was a binding provision
for a new delimitation of the federal territory: the Federal Territory must be revised ... (paragraph 1). Moreover,
in territories or parts of territories whose affiliation with a Land had changed after 8 May 1945 without a referendum,
people were allowed to petition for a revision of the current status within a year after the promulgation of the
Basic Law (paragraph 2). If at least one tenth of those entitled to vote in Bundestag elections were in favour of
a revision, the federal government had to include the proposal into its legislation. Then a referendum was required
in each territory or part of a territory whose affiliation was to be changed (paragraph 3). The proposal should not
take effect if within any of the affected territories a majority rejected the change. In this case, the bill had
to be introduced again and after passing had to be confirmed by referendum in the Federal Republic as a whole (paragraph
4). The reorganization should be completed within three years after the Basic Law had come into force (paragraph
6). In the Paris Agreements of 23 October 1954, France offered to establish an independent "Saarland", under the
auspices of the Western European Union (WEU), but on 23 October 1955 in the Saar Statute referendum the Saar electorate
rejected this plan by 67.7% to 32.3% (out of a 96.5% turnout: 423,434 against, 201,975 for) despite the public support
of Federal German Chancellor Konrad Adenauer for the plan. The rejection of the plan by the Saarlanders was interpreted
as support for the Saar to join the Federal Republic of Germany. Paragraph 6 of Article 29 stated that if a petition was successful, a referendum should be held within three years. Since the deadline passed on 5 May 1958 without anything happening, the Hesse state government filed a constitutional complaint with the Federal Constitutional Court in October
1958. The complaint was dismissed in July 1961 on the grounds that Article 29 had made the new delimitation of the
federal territory an exclusively federal matter. At the same time, the Court reaffirmed the requirement for a territorial
revision as a binding order to the relevant constitutional bodies. In his investiture address, given on 28 October
1969 in Bonn, Chancellor Willy Brandt proposed that the government would consider Article 29 of the Basic Law as
a binding order. An expert commission was established, named after its chairman, the former Secretary of State Professor
Werner Ernst. After two years of work, the experts delivered their report in 1973. It provided alternative proposals
for both northern Germany and central and southwestern Germany. In the north, either a single new state consisting
of Schleswig-Holstein, Hamburg, Bremen and Lower Saxony should be created (solution A) or two new states, one in
the northeast consisting of Schleswig-Holstein, Hamburg and the northern part of Lower Saxony (from Cuxhaven to Lüchow-Dannenberg)
and one in the northwest consisting of Bremen and the rest of Lower Saxony (solution B). In the center and southwest, Rhineland-Palatinate (with the exception of the Germersheim district but including the Rhine-Neckar region) should be merged with Hesse and the Saarland (solution C); the district of Germersheim would then become part of
Baden-Württemberg. The Basic Law of the Federal Republic of Germany, the federal constitution, stipulates that the
structure of each Federal State's government must "conform to the principles of republican, democratic, and social
government, based on the rule of law" (Article 28). Most of the states are governed by a cabinet led by a Ministerpräsident
(Minister-President), together with a unicameral legislative body known as the Landtag (State Diet). The states are
parliamentary republics and the relationship between their legislative and executive branches mirrors that of the
federal system: the legislatures are popularly elected for four or five years (depending on the state), and the Minister-President
is then chosen by a majority vote among the Landtag's members. The Minister-President appoints a cabinet to run the
state's agencies and to carry out the executive duties of the state's government. The governments in Berlin, Bremen
and Hamburg are designated by the term Senate. In the three free states of Bavaria, Saxony, and Thuringia the government
is referred to as the State Government (Staatsregierung), and in the other ten states the term Land Government (Landesregierung)
is used. Before January 1, 2000, Bavaria had a bicameral parliament, with a popularly elected Landtag, and a Senate
made up of representatives of the state's major social and economic groups. The Senate was abolished following a
referendum in 1998. The states of Berlin, Bremen, and Hamburg are governed slightly differently from the other states.
In each of those cities, the executive branch consists of a Senate of approximately eight, selected by the state's
parliament; the senators carry out duties equivalent to those of the ministers in the larger states. The equivalent
of the Minister-President is the Senatspräsident (President of the Senate) in Bremen, the Erster Bürgermeister (First
Mayor) in Hamburg, and the Regierender Bürgermeister (Governing Mayor) in Berlin. The parliament for Berlin is called
the Abgeordnetenhaus (House of Representatives), while Bremen and Hamburg both have a Bürgerschaft. The parliaments
in the remaining 13 states are referred to as Landtag (State Parliament). The Districts of Germany (Kreise) are administrative
districts, and every state except the city-states of Berlin, Hamburg, and Bremen consists of "rural districts" (Landkreise),
District-free Towns/Cities (Kreisfreie Städte, in Baden-Württemberg also called "urban districts", or Stadtkreise),
cities that are districts in their own right, or local associations of a special kind (Kommunalverbände besonderer
Art), see below. The state Free Hanseatic City of Bremen consists of two urban districts, while Berlin and Hamburg
are states and urban districts at the same time. Local associations of a special kind are an amalgamation of one
or more Landkreise with one or more Kreisfreie Städte to form a replacement of the aforementioned administrative
entities at the district level. They are intended to implement simplification of administration at that level. Typically,
a district-free city or town and its urban hinterland are grouped into such an association, or Kommunalverband besonderer
Art. Such an organization requires the issuing of special laws by the governing state, since they are not covered
by the normal administrative structure of the respective states. Municipalities (Gemeinden): Every rural district
and every Amt is subdivided into municipalities, while every urban district is a municipality in its own right. There
are (as of 6 March 2009) 12,141 municipalities, which are the smallest administrative units in Germany. Cities and towns are also municipalities, holding city or town rights (Stadtrechte). Nowadays, this is mostly just the right to be called a city or town. However, in former times there were many other privileges, including
the right to impose local taxes or to allow industry only within city limits. The municipalities have two major policy
responsibilities. First, they administer programs authorized by the federal or state government. Such programs typically
relate to youth, schools, public health, and social assistance. Second, Article 28(2) of the Basic Law guarantees
the municipalities "the right to regulate on their own responsibility all the affairs of the local community within
the limits set by law." Under this broad statement of competence, local governments can justify a wide range of activities.
For instance, many municipalities develop and expand the economic infrastructure of their communities through the
development of industrial trading estates. In southwestern Germany, territorial revision seemed to be a top priority
since the border between the French and American occupation zones was set along the Autobahn Karlsruhe-Stuttgart-Ulm
(today the A8). Article 118 stated "The division of the territory comprising Baden, Württemberg-Baden and Württemberg-Hohenzollern
into Länder may be revised, without regard to the provisions of Article 29, by agreement between the Länder concerned.
If no agreement is reached, the revision shall be effected by a federal law, which shall provide for an advisory
referendum." Since no agreement was reached, a referendum was held on 9 December 1951 in four different voting districts,
three of which approved the merger (South Baden refused, but was overruled because the combined total of votes was decisive).
On 25 April 1952, the three former states merged to form Baden-Württemberg.
Many applications of silicate glasses derive from their optical transparency, which gives rise to one of silicate glasses'
primary uses as window panes. Glass will transmit, reflect and refract light; these qualities can be enhanced by
cutting and polishing to make optical lenses, prisms, fine glassware, and optical fibers for high speed data transmission
by light. Glass can be colored by adding metallic salts, and can also be painted and printed with vitreous enamels.
These qualities have led to the extensive use of glass in the manufacture of art objects and in particular, stained
glass windows. Although brittle, silicate glass is extremely durable, and many examples of glass fragments exist
from early glass-making cultures. Because glass can be formed or molded into any shape, and also because it is a
sterile product, it has been traditionally used for vessels: bowls, vases, bottles, jars and drinking glasses. In
its most solid forms it has also been used for paperweights, marbles, and beads. When extruded as glass fiber and
matted as glass wool in a way to trap air, it becomes a thermal insulating material, and when these glass fibers
are embedded into an organic polymer plastic, they are a key structural reinforcement part of the composite material
fiberglass. Some objects historically were so commonly made of silicate glass that they are simply called by the
name of the material, such as drinking glasses and reading glasses. Most common glass contains other ingredients
to change its properties. Lead glass or flint glass is more 'brilliant' because the increased refractive index causes
noticeably more specular reflection and increased optical dispersion. Adding barium also increases the refractive
index. Thorium oxide gives glass a high refractive index and low dispersion and was formerly used in producing high-quality
lenses, but due to its radioactivity has been replaced by lanthanum oxide in modern eyeglasses.
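The claim above that a higher refractive index makes glass more "brilliant" through increased specular reflection can be checked with the Fresnel equation at normal incidence, R = ((n − 1)/(n + 1))². A minimal sketch (the index 1.7 used for lead glass here is an illustrative assumption; the surrounding text gives a range up to about 1.8):

```python
def normal_incidence_reflectance(n, n0=1.0):
    """Fresnel reflectance of a single surface at normal incidence,
    between a medium of index n0 (air) and one of index n."""
    return ((n - n0) / (n + n0)) ** 2

for name, n in [("soda-lime glass", 1.5), ("lead glass (assumed index)", 1.7)]:
    r = normal_incidence_reflectance(n)
    print(f"{name}: n = {n}, reflectance per surface = {r:.1%}")
# soda-lime glass: 4.0% per surface; the higher-index glass reflects
# noticeably more, consistent with the 'brilliance' described above.
```

Note that each air-glass surface reflects independently, so a pane or cut facet loses roughly twice the single-surface figure.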
Iron can be incorporated into glass to absorb infrared energy, for example in heat absorbing filters for movie projectors,
while cerium(IV) oxide can be used for glass that absorbs UV wavelengths. Fused quartz is a glass made from chemically-pure
SiO2 (silica). It has excellent thermal shock characteristics, being able to survive immersion in water while red
hot. However, its high melting temperature (1723 °C) and viscosity make it difficult to work with. Normally, other
substances are added to simplify processing. One is sodium carbonate (Na2CO3, "soda"), which lowers the glass transition
temperature. The soda makes the glass water-soluble, which is usually undesirable, so lime (calcium oxide [CaO],
generally obtained from limestone), some magnesium oxide (MgO) and aluminium oxide (Al2O3) are added to provide for
a better chemical durability. The resulting glass contains about 70 to 74% silica by weight and is called a soda-lime
glass. Soda-lime glasses account for about 90% of manufactured glass. Following the glass batch preparation and mixing,
the raw materials are transported to the furnace. Soda-lime glass for mass production is melted in gas fired units.
Smaller scale furnaces for specialty glasses include electric melters, pot furnaces, and day tanks. After melting,
homogenization and refining (removal of bubbles), the glass is formed. Flat glass for windows and similar applications
is formed by the float glass process, developed between 1953 and 1957 by Sir Alastair Pilkington and Kenneth Bickerstaff
of the UK's Pilkington Brothers, who created a continuous ribbon of glass using a molten tin bath on which the molten
glass flows unhindered under the influence of gravity. The top surface of the glass is subjected to nitrogen under
pressure to obtain a polished finish. Container glass for common bottles and jars is formed by blowing and pressing
methods. This glass is often slightly modified chemically (with more alumina and calcium oxide) for greater water
resistance. Further glass forming techniques are summarized in the table Glass forming techniques. Glass has the
ability to refract, reflect, and transmit light following geometrical optics, without scattering it. It is used in
the manufacture of lenses and windows. Common glass has a refractive index around 1.5. This may be modified by adding
low-density materials such as boron, which lowers the index of refraction (see crown glass), or increased (to as
much as 1.8) with high-density materials such as (classically) lead oxide (see flint glass and lead glass), or in
modern uses, less toxic oxides of zirconium, titanium, or barium. These high-index glasses (inaccurately known as
"crystal" when used in glass vessels) cause more chromatic dispersion of light, and are prized for their diamond-like
optical properties. The most familiar, and historically the oldest, types of glass are "silicate glasses" based on
the chemical compound silica (silicon dioxide, or quartz), the primary constituent of sand. The term glass, in popular
usage, is often used to refer only to this type of material, which is familiar from use as window glass and in glass
bottles. Of the many silica-based glasses that exist, ordinary glazing and container glass is formed from a specific
type called soda-lime glass, composed of approximately 75% silicon dioxide (SiO2), sodium oxide (Na2O) from sodium
carbonate (Na2CO3), calcium oxide, also called lime (CaO), and several minor additives. A very clear and durable
quartz glass can be made from pure silica, but the high melting point and very narrow glass transition of quartz
make glassblowing and hot working difficult. In glasses like soda lime, the compounds added to quartz are used to
lower the melting temperature and improve workability, at a cost in the toughness, thermal stability, and optical
transmittance. Glass is in widespread use largely due to the production of glass compositions that are transparent
to visible light. In contrast, polycrystalline materials do not generally transmit visible light. The individual
crystallites may be transparent, but their facets (grain boundaries) reflect or scatter light resulting in diffuse
reflection. Glass does not contain the internal subdivisions associated with grain boundaries in polycrystals and
hence does not scatter light in the same manner as a polycrystalline material. The surface of a glass is often smooth
since during glass formation the molecules of the supercooled liquid are not forced to arrange themselves in rigid crystal geometries
and can follow surface tension, which imposes a microscopically smooth surface. These properties, which give glass
its clearness, can be retained even if glass is partially light-absorbing—i.e., colored. Naturally occurring glass,
especially the volcanic glass obsidian, has been used by many Stone Age societies across the globe for the production
of sharp cutting tools and, due to its limited source areas, was extensively traded. But in general, archaeological
evidence suggests that the first true glass was made in coastal north Syria, Mesopotamia or ancient Egypt. The earliest
known glass objects, of the mid third millennium BCE, were beads, perhaps initially created as accidental by-products
of metal-working (slags) or during the production of faience, a pre-glass vitreous material made by a process similar
to glazing. Color in glass may be obtained by addition of electrically charged ions (or color centers) that are homogeneously
distributed, and by precipitation of finely dispersed particles (such as in photochromic glasses). Ordinary soda-lime
glass appears colorless to the naked eye when it is thin, although iron(II) oxide (FeO) impurities of up to 0.1 wt%
produce a green tint, which can be viewed in thick pieces or with the aid of scientific instruments. Further FeO
and Cr2O3 additions may be used for the production of green bottles. Sulfur, together with carbon and iron salts,
is used to form iron polysulfides and produce amber glass ranging from yellowish to almost black. A glass melt can
also acquire an amber color from a reducing combustion atmosphere. Manganese dioxide can be added in small amounts
to remove the green tint given by iron(II) oxide. Art glass and studio glass are colored using closely
guarded recipes that involve specific combinations of metal oxides, melting temperatures and 'cook' times. Most colored
glass used in the art market is manufactured in volume by vendors who serve this market although there are some glassmakers
with the ability to make their own color from raw materials. Glass remained a luxury material, and the disasters
that overtook Late Bronze Age civilizations seem to have brought glass-making to a halt. Indigenous development of
glass technology in South Asia may have begun in 1730 BCE. In ancient China, though, glassmaking seems to have had a late start compared to ceramics and metalwork. The term glass developed in the late Roman Empire. It was in the
Roman glassmaking center at Trier, now in modern Germany, that the late-Latin term glesum originated, probably from
a Germanic word for a transparent, lustrous substance. Glass objects have been recovered across the Roman empire
in domestic, industrial and funerary contexts. Glass was used extensively during the Middle Ages.
Anglo-Saxon glass has been found across England during archaeological excavations of both settlement and cemetery
sites. Glass in the Anglo-Saxon period was used in the manufacture of a range of objects including vessels, beads,
windows and was also used in jewelry. From the 10th century onwards, glass was employed in the stained glass windows of churches and cathedrals, with famous examples at Chartres Cathedral and the Basilica of Saint Denis. By the 14th century, architects were designing buildings with walls of stained glass, such as Sainte-Chapelle, Paris (1203–1248), and the east end of Gloucester Cathedral. Stained glass had a major revival with Gothic Revival architecture in the 19th century.
With the Renaissance, and a change in architectural style, the use of large stained glass windows became less prevalent.
The use of domestic stained glass increased until most substantial houses had glass windows. These were initially
small panes leaded together, but with the changes in technology, glass could be manufactured relatively cheaply in
increasingly larger sheets. This led to larger window panes and, in the 20th century, to much larger windows in
ordinary domestic and commercial buildings. In the 20th century, new types of glass such as laminated glass, reinforced
glass and glass bricks have increased the use of glass as a building material and resulted in new applications of
glass. Multi-storey buildings are frequently constructed with curtain walls made almost entirely of glass. Similarly,
laminated glass has been widely applied to vehicles for windscreens. While glass containers have always been used
for storage and are valued for their hygienic properties, glass has been utilized increasingly in industry. Optical
glass for spectacles has been used since the late Middle Ages. The production of lenses has become increasingly proficient,
aiding astronomers as well as having other application in medicine and science. Glass is also employed as the aperture
cover in many solar energy systems. From the 19th century, there was a revival in many ancient glass-making techniques
including cameo glass, achieved for the first time since the Roman Empire and initially mostly used for pieces in
a neo-classical style. The Art Nouveau movement made great use of glass, with René Lalique, Émile Gallé, and Daum
of Nancy producing colored vases and similar pieces, often in cameo glass, and also using luster techniques. Louis
Comfort Tiffany in America specialized in stained glass, both secular and religious, and his famous lamps. The early
20th century saw the large-scale factory production of glass art by firms such as Waterford and Lalique. From about
1960 onwards there have been an increasing number of small studios hand-producing glass artworks, and glass artists
began to class themselves, in effect, as sculptors working in glass and their works as part of the fine arts. The addition of lead(II) oxide lowers the melting point and viscosity of the melt, and increases the refractive index. Lead oxide also
facilitates solubility of other metal oxides and is used in colored glasses. The viscosity decrease of lead glass
melt is very significant (roughly 100 times in comparison with soda glasses); this allows easier removal of bubbles
and working at lower temperatures, hence its frequent use as an additive in vitreous enamels and glass solders. The
large ionic radius of the Pb2+ ion renders it highly immobile in the matrix and hinders the movement of other ions; lead glasses therefore have high electrical resistivity, about two orders of magnitude higher than that of soda-lime glass (10^8.5 vs. 10^6.5 Ω·cm, DC at 250 °C). For more details, see lead glass. There are three classes of components for
oxide glasses: network formers, intermediates, and modifiers. The network formers (silicon, boron, germanium) form
a highly cross-linked network of chemical bonds. The intermediates (titanium, aluminium, zirconium, beryllium, magnesium,
zinc) can act as both network formers and modifiers, according to the glass composition. The modifiers (calcium,
lead, lithium, sodium, potassium) alter the network structure; they are usually present as ions, compensated by nearby
non-bridging oxygen atoms, bound by one covalent bond to the glass network and holding one negative charge to compensate
for the positive ion nearby. Some elements can play multiple roles; e.g. lead can act both as a network former (Pb4+
replacing Si4+), or as a modifier. The alkali metal ions are small and mobile; their presence in glass allows a degree
of electrical conductivity, especially in molten state or at high temperature. Their mobility decreases the chemical
resistance of the glass, allowing leaching by water and facilitating corrosion. Alkaline earth ions, with their two
positive charges and requirement for two non-bridging oxygen ions to compensate for their charge, are much less mobile
themselves and also hinder diffusion of other ions, especially the alkalis. The most common commercial glasses contain
both alkali and alkaline earth ions (usually sodium and calcium), for easier processing and satisfying corrosion
resistance. Corrosion resistance of glass can be achieved by dealkalization, removal of the alkali ions from the
glass surface by reaction with e.g. sulfur or fluorine compounds. The presence of alkali metal ions also has a detrimental effect on the loss tangent of the glass and on its electrical resistance; glasses for electronics (sealing, vacuum tubes, lamps, etc.) have to take this into account. New chemical glass compositions or new treatment techniques can be
initially investigated in small-scale laboratory experiments. The raw materials for laboratory-scale glass melts
are often different from those used in mass production because the cost factor has a low priority. In the laboratory
mostly pure chemicals are used. Care must be taken that the raw materials have not reacted with moisture or other
chemicals in the environment (such as alkali or alkaline earth metal oxides and hydroxides, or boron oxide), or that
the impurities are quantified (loss on ignition). Evaporation losses during glass melting should be considered during
the selection of the raw materials, e.g., sodium selenite may be preferred over easily evaporating SeO2. Also, more
readily reacting raw materials may be preferred over relatively inert ones, such as Al(OH)3 over Al2O3. Usually,
the melts are carried out in platinum crucibles to reduce contamination from the crucible material. Glass homogeneity
is achieved by homogenizing the raw materials mixture (glass batch), by stirring the melt, and by crushing and re-melting
the first melt. The obtained glass is usually annealed to prevent breakage during processing. In the past, small
batches of amorphous metals with high surface area configurations (ribbons, wires, films, etc.) have been produced
through the implementation of extremely rapid rates of cooling. This was initially termed "splat cooling" by doctoral
student W. Klement at Caltech, who showed that cooling rates on the order of millions of degrees per second are sufficient
to impede the formation of crystals, and the metallic atoms become "locked into" a glassy state. Amorphous metal
wires have been produced by sputtering molten metal onto a spinning metal disk. More recently a number of alloys
have been produced in layers with thickness exceeding 1 millimeter. These are known as bulk metallic glasses (BMG).
Liquidmetal Technologies sell a number of zirconium-based BMGs. Batches of amorphous steel have also been produced
that demonstrate mechanical properties far exceeding those found in conventional steel alloys. In 2004, NIST researchers
presented evidence that an isotropic non-crystalline metallic phase (dubbed "q-glass") could be grown from the melt.
This phase is the first phase, or "primary phase," to form in the Al-Fe-Si system during rapid cooling. Interestingly,
experimental evidence indicates that this phase forms by a first-order transition. Transmission electron microscopy
(TEM) images show that the q-glass nucleates from the melt as discrete particles, which grow spherically with a uniform
growth rate in all directions. The diffraction pattern shows it to be an isotropic glassy phase. Yet there is a nucleation
barrier, which implies an interfacial discontinuity (or internal surface) between the glass and the melt. Glass-ceramic
materials share many properties with both non-crystalline glass and crystalline ceramics. They are formed as a glass,
and then partially crystallized by heat treatment. For example, the microstructure of whiteware ceramics frequently
contains both amorphous and crystalline phases. Crystalline grains are often embedded within a non-crystalline intergranular
phase of grain boundaries. When applied to whiteware ceramics, vitreous means the material has an extremely low permeability
to liquids, often but not always water, when determined by a specified test regime. The term mainly refers to a mix
of lithium and aluminosilicates that yields an array of materials with interesting thermomechanical properties. The
most commercially important of these have the distinction of being impervious to thermal shock. Thus, glass-ceramics
have become extremely useful for countertop cooking. The negative thermal expansion coefficient (CTE) of the crystalline
ceramic phase can be balanced with the positive CTE of the glassy phase. At a certain point (~70% crystalline) the
glass-ceramic has a net CTE near zero. This type of glass-ceramic exhibits excellent mechanical properties and can
sustain repeated and quick temperature changes up to 1000 °C. Mass production of glass window panes in the early
twentieth century caused a similar effect. In glass factories, molten glass was poured onto a large cooling table
and allowed to spread. The resulting glass is thicker at the location of the pour, located at the center of the large
sheet. These sheets were cut into smaller window panes with nonuniform thickness, typically with the location of
the pour centered in one of the panes (known as "bull's-eyes") for decorative effect. Modern glass intended for windows
is produced as float glass and is very uniform in thickness. The observation that old windows are sometimes found
to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows
over a timescale of centuries, the assumption being that the glass has exhibited the liquid property of flowing from
one shape to another. This assumption is incorrect, as once solidified, glass stops flowing. The reason for the observation
is that in the past, when panes of glass were commonly made by glassblowers, the technique used was to spin molten
glass so as to create a round, mostly flat and even plate (the crown glass process, described above). This plate
was then cut to fit a window. The pieces were not absolutely flat; the edges of the disk became a different thickness
as the glass spun. When installed in a window frame, the glass would be placed with the thicker side down both for
the sake of stability and to prevent water accumulating in the lead cames at the bottom of the window. Occasionally
such glass has been found installed with the thicker side at the top, left or right. In physics, the standard definition
of a glass (or vitreous solid) is a solid formed by rapid melt quenching. The term glass is often used to describe
any amorphous solid that exhibits a glass transition temperature Tg. If the cooling is sufficiently rapid (relative
to the characteristic crystallization time) then crystallization is prevented and instead the disordered atomic configuration
of the supercooled liquid is frozen into the solid state at Tg. The tendency for a material to form a glass while
quenched is called glass-forming ability. This ability can be predicted by the rigidity theory. Generally, the structure
of a glass exists in a metastable state with respect to its crystalline form, although in certain circumstances,
for example in atactic polymers, there is no crystalline analogue of the amorphous phase. Some people consider glass
to be a liquid due to its lack of a first-order phase transition where certain thermodynamic variables such as volume,
entropy and enthalpy are continuous through the glass transition range. The glass transition may be described
as analogous to a second-order phase transition where the intensive thermodynamic variables such as the thermal expansivity
and heat capacity are discontinuous. Nonetheless, the equilibrium theory of phase transformations does not entirely
hold for glass, and hence the glass transition cannot be classed as one of the classical equilibrium phase transformations
in solids. Although the atomic structure of glass shares characteristics of the structure in a supercooled liquid,
glass tends to behave as a solid below its glass transition temperature. A supercooled liquid behaves as a liquid,
but it is below the freezing point of the material, and in some cases will crystallize almost instantly if a crystal
is added as a core. The change in heat capacity at a glass transition and a melting transition of comparable materials
are typically of the same order of magnitude, indicating that the change in active degrees of freedom is comparable
as well. Both in a glass and in a crystal it is mostly only the vibrational degrees of freedom that remain active,
whereas rotational and translational motion is arrested. This helps to explain why both crystalline and non-crystalline
solids exhibit rigidity on most experimental time scales.
The Planck constant, first recognized in 1900 by Max Planck, was originally the proportionality constant between the minimal increment of energy,
E, of a hypothetical electrically charged oscillator in a cavity that contained black body radiation, and the frequency,
f, of its associated electromagnetic wave. In 1905 the value E, the minimal energy increment of a hypothetical oscillator,
was theoretically associated by Einstein with a "quantum" or minimal element of the energy of the electromagnetic
wave itself. The light quantum behaved in some respects as an electrically neutral particle, as opposed to an electromagnetic
wave. It was eventually called the photon. Classical statistical mechanics requires the existence of h (but does
not define its value). Eventually, following upon Planck's discovery, it was recognized that physical action cannot
take on an arbitrary value. Instead, it must be some multiple of a very small quantity, the "quantum of action",
now called the Planck constant. Classical physics cannot explain this fact. In many cases, such as for monochromatic
light or for atoms, this quantum of action also implies that only certain energy levels are allowed, and values in
between are forbidden. Equivalently, the smallness of the Planck constant reflects the fact that everyday objects
and systems are made of a large number of particles. For example, green light with a wavelength of 555 nanometres (the approximate wavelength to which human eyes are most sensitive) has a frequency of 540 THz (540×10¹² Hz). Each photon has an energy E = hf = 3.58×10⁻¹⁹ J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. An amount of light compatible with everyday experience is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, NA ≈ 6.022×10²³ mol⁻¹. The result is that green light of wavelength 555 nm has an energy of 216 kJ/mol, a typical energy of everyday life. In the last years of the nineteenth century,
Planck was investigating the problem of black-body radiation first posed by Kirchhoff some forty years earlier. It
is well known that hot objects glow, and that hotter objects glow brighter than cooler ones. The electromagnetic
field obeys laws of motion similarly to a mass on a spring, and can come to thermal equilibrium with hot atoms. The
hot object in equilibrium with light absorbs just as much light as it emits. If the object is black, meaning it absorbs
all the light that hits it, then its thermal light emission is maximized. The assumption that black-body radiation
is thermal leads to an accurate prediction: the total amount of emitted energy goes up with the temperature according
to a definite rule, the Stefan–Boltzmann law (1879–84). But it was also known that the colour of the light given
off by a hot object changes with the temperature, so that "white hot" is hotter than "red hot". Nevertheless, Wilhelm
Wien discovered the mathematical relationship between the peaks of the curves at different temperatures, by using
the principle of adiabatic invariance. At each different temperature, the curve is moved over by Wien's displacement
law (1893). Wien also proposed an approximation for the spectrum of the object, which was correct at high frequencies
(short wavelength) but not at low frequencies (long wavelength). It still was not clear why the spectrum of a hot
object had the form that it has (see diagram). Prior to Planck's work, it had been assumed that the energy of a body
could take on any value whatsoever – that it was a continuous variable. The Rayleigh–Jeans law makes close predictions
for a narrow range of values at one limit of temperatures, but the results diverge more and more strongly as temperatures
increase. To make Planck's law, which correctly predicts blackbody emissions, it was necessary to multiply the classical
expression by a complex factor that involves h in both the numerator and the denominator. The influence of h in this
complex factor would not disappear if it were set to zero or to any other value. Making an equation out of Planck's
law that would reproduce the Rayleigh–Jeans law could not be done by changing the values of h, of the Boltzmann constant,
or of any other constant or variable in the equation. In this case the picture given by classical physics is not
duplicated by a range of results in the quantum picture. The black-body problem was revisited in 1905, when Rayleigh
and Jeans (on the one hand) and Einstein (on the other hand) independently proved that classical electromagnetism
could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe",
a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric
effect) to convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical
formalism. The very first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta". Max Planck
received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics
by his discovery of energy quanta". The photoelectric effect is the emission of electrons (called "photoelectrons")
from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit
is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly
thorough investigation was published by Philipp Lenard in 1902. Einstein's 1905 paper discussing the effect in terms
of light quanta would earn him the Nobel Prize in 1921, when his predictions had been confirmed by the experimental
work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect,
rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment,
and dissent amongst its members as to the actual proof that relativity was real. Prior to Einstein's paper, electromagnetic
radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength"
to characterise different types of radiation. The energy transferred by a wave in a given time is called its intensity.
The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that
the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the
ordinary bulb, even though the colour of the light might be very similar. Other waves, such as sound or the waves
crashing against a seafront, also have their own intensity. However, the energy account of the photoelectric effect
didn't seem to agree with the wave description of light. The "photoelectrons" emitted as a result of the photoelectric
effect have a certain kinetic energy, which can be measured. This kinetic energy (for each photoelectron) is independent
of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding
to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless
a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (the multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity
of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number
of photoelectrons to be emitted with higher kinetic energy. Niels Bohr introduced the first quantized model of the
atom in 1913, in an attempt to overcome a major shortcoming of Rutherford's classical model. In classical electrodynamics,
a charge moving in a circle should radiate electromagnetic radiation. If that charge were to be an electron orbiting
a nucleus, the radiation would cause it to lose energy and spiral down into the nucleus. Bohr solved this paradox
with explicit reference to Planck's work: an electron in a Bohr atom could only have certain defined energies En, given for the hydrogen atom by En = −hcR∞/n², where R∞ is the Rydberg constant and n is a positive integer.
Bohr also introduced the quantity ħ = h/2π, now known as the reduced Planck constant, as the quantum of angular momentum.
At first, Bohr thought that this was the angular momentum of each electron in an atom: this proved incorrect and,
despite developments by Sommerfeld and others, an accurate description of the electron angular momentum proved beyond
the Bohr model. The correct quantization rules for electrons – in which the energy reduces to the Bohr model equation
in the case of the hydrogen atom – were given by Heisenberg's matrix mechanics in 1925 and the Schrödinger wave equation
in 1926: the reduced Planck constant remains the fundamental quantum of angular momentum. In modern terms, if J is the total angular momentum of a system with rotational invariance, and Jz the angular momentum measured along any given direction, these quantities can only take on the values J² = j(j+1)ħ² and Jz = mħ, where j is a non-negative integer or half-integer and m runs from −j to j in integer steps. The reduced Planck constant also appears in Heisenberg's uncertainty principle for position and momentum, Δx·Δp ≥ ħ/2, where the uncertainty is given as the standard deviation of the measured value from its expected value. There are a number of other such pairs of physically measurable values which obey a similar rule. One example is time vs. energy. The either-or nature of uncertainty forces measurement
attempts to choose between trade offs, and given that they are quanta, the trade offs often take the form of either-or
(as in Fourier analysis), rather than the compromises and gray areas of time series analysis. The Bohr magneton and
the nuclear magneton are units which are used to describe the magnetic properties of the electron and atomic nuclei
respectively. The Bohr magneton is the magnetic moment which would be expected for an electron if it behaved as a
spinning charge according to classical electrodynamics. It is defined in terms of the reduced Planck constant, the
elementary charge and the electron mass, all of which depend on the Planck constant: the final dependence on h^(1/2) can be found by expanding the variables. In principle, the Planck constant could be determined by examining
the spectrum of a black-body radiator or the kinetic energy of photoelectrons, and this is how its value was first
calculated in the early twentieth century. In practice, these are no longer the most accurate methods. The CODATA
value quoted here is based on three watt-balance measurements of KJ²RK and one inter-laboratory determination of
the molar volume of silicon, but is mostly determined by a 2007 watt-balance measurement made at the U.S. National
Institute of Standards and Technology (NIST). Five other measurements by three different methods were initially considered,
but not included in the final refinement as they were too imprecise to affect the result. There are both practical
and theoretical difficulties in determining h. The practical difficulties can be illustrated by the fact that the
two most accurate methods, the watt balance and the X-ray crystal density method, do not appear to agree with one
another. The most likely reason is that the measurement uncertainty for one (or both) of the methods has been estimated
too low – it is (or they are) not as precise as is currently believed – but for the time being there is no indication
which method is at fault. The theoretical difficulties arise from the fact that all of the methods except the X-ray
crystal density method rely on the theoretical basis of the Josephson effect and the quantum Hall effect. If these
theories are slightly inaccurate – though there is no evidence at present to suggest they are – the methods would
not give accurate values for the Planck constant. More importantly, the values of the Planck constant obtained in
this way cannot be used as tests of the theories without falling into a circular argument. Fortunately, there are
other statistical ways of testing the theories, and the theories have yet to be refuted. A watt balance is an instrument
for comparing two powers, one of which is measured in SI watts and the other of which is measured in conventional
electrical units. From the definition of the conventional watt W90, this gives a measure of the product KJ²RK in SI units, where RK is the von Klitzing constant which appears in the quantum Hall effect. If the theoretical treatments of the Josephson effect and the quantum Hall effect are valid, and in particular assuming that KJ = 2e/h and RK = h/e² (so that KJ²RK = 4/h), the measurement of KJ²RK is a direct determination of the Planck constant. The gyromagnetic ratio γ is the constant of proportionality
between the frequency ν of nuclear magnetic resonance (or electron paramagnetic resonance for electrons) and the
applied magnetic field B: ν = γB. It is difficult to measure gyromagnetic ratios precisely because of the difficulties
in precisely measuring B, but the value for protons in water at 25 °C is known to better than
one part per million. The protons are said to be "shielded" from the applied magnetic field by the electrons in the
water molecule, the same effect that gives rise to chemical shift in NMR spectroscopy, and this is indicated by a
prime on the symbol for the gyromagnetic ratio, γ′p. The gyromagnetic ratio is related to the shielded proton magnetic
moment μ′p, the spin number I (I = 1⁄2 for protons) and the reduced Planck constant. A further complication is that
the measurement of γ′p involves the measurement of an electric current: this is invariably measured in conventional
amperes rather than in SI amperes, so a conversion factor is required. The symbol Γ′p-90 is used for the measured
gyromagnetic ratio using conventional electrical units. In addition, there are two methods of measuring the value,
a "low-field" method and a "high-field" method, and the conversion factors are different in the two cases. Only the
high-field value Γ′p-90(hi) is of interest in determining the Planck constant. The Faraday constant F is the charge
of one mole of electrons, equal to the Avogadro constant NA multiplied by the elementary charge e. It can be determined
by careful electrolysis experiments, measuring the amount of silver dissolved from an electrode in a given time and
for a given electric current. In practice, it is measured in conventional electrical units, and so given the symbol
F90. Substituting the definitions of NA and e, and converting from conventional electrical units to SI units, gives
the relation to the Planck constant. The X-ray crystal density method is primarily a method for determining the Avogadro
constant NA but as the Avogadro constant is related to the Planck constant it also determines a value for h. The
principle behind the method is to determine NA as the ratio between the volume of the unit cell of a crystal, measured
by X-ray crystallography, and the molar volume of the substance. Crystals of silicon are used, as they are available
in high quality and purity by the technology developed for the semiconductor industry. The unit cell volume is calculated
from the spacing between two crystal planes referred to as d220. The molar volume Vm(Si) requires a knowledge of
the density of the crystal and the atomic weight of the silicon used. The Planck constant then follows from its known relation to the Avogadro constant. There are a number of proposals to redefine certain of the SI base units in terms of fundamental physical constants. This has
already been done for the metre, which is defined in terms of a fixed value of the speed of light. The most urgent
unit on the list for redefinition is the kilogram, whose value has been fixed for all science (since 1889) by the
mass of a small cylinder of platinum–iridium alloy kept in a vault just outside Paris. While nobody knows if the
mass of the International Prototype Kilogram has changed since 1889 – the value 1 kg of its mass expressed in kilograms
is by definition unchanged and therein lies one of the problems – it is known that over such a timescale the many
similar Pt–Ir alloy cylinders kept in national laboratories around the world, have changed their relative mass by
several tens of parts per million, however carefully they are stored, and the more so the more they have been taken
out and used as mass standards. A change of several tens of micrograms in one kilogram is equivalent to the current
uncertainty in the value of the Planck constant in SI units. The legal process to change the definition of the kilogram
is already underway, but it had been decided that no final decision would be made before the next meeting of the
General Conference on Weights and Measures in 2011. (For more detailed information, see kilogram definitions.) The
Planck constant is a leading contender to form the basis of the new definition, although not the only one. Possible
new definitions include "the mass of a body at rest whose equivalent energy equals the energy of photons whose frequencies sum to 135639274×10⁴² Hz", or simply "the kilogram is defined so that the Planck constant equals 6.62606896×10⁻³⁴ J⋅s".
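The photon-energy arithmetic above, and the proposed photon-frequency definition of the kilogram, can both be checked numerically. A quick sketch, using the constant values as quoted in the text:

```python
# Check the photon-energy figures quoted in the text.
h = 6.62606896e-34      # Planck constant, J*s (value quoted above)
c = 2.99792458e8        # speed of light, m/s (exact)
N_A = 6.022e23          # Avogadro constant, 1/mol (as quoted)

lam = 555e-9            # wavelength of green light, m
f = c / lam             # frequency: ~540 THz
E = h * f               # photon energy: ~3.58e-19 J
E_mol = E * N_A         # energy of one mole of photons: ~216 kJ/mol

# Proposed kilogram definition: mass whose equivalent energy (mc^2)
# equals that of photons whose frequencies sum to 135639274e42 Hz.
m = h * 135639274e42 / c**2    # ~1.000 kg

print(f"f = {f/1e12:.1f} THz, E = {E:.3g} J, "
      f"E_mol = {E_mol/1e3:.1f} kJ/mol, m = {m:.4f} kg")
```

This reproduces the figures in the text: roughly 540 THz, 3.58×10⁻¹⁹ J per photon, 216 kJ/mol, and a mass of 1 kg to within a fraction of a gram.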
Public policy and political leadership help to "level the playing field" and drive the wider acceptance of renewable energy technologies. Countries such as Germany, Denmark, and Spain have led the way in implementing innovative policies, which have driven most of the growth over the past decade. As of 2014, Germany has a commitment to the "Energiewende"
transition to a sustainable energy economy, and Denmark has a commitment to 100% renewable energy by 2050. There
are now 144 countries with renewable energy policy targets. Total investment in renewable energy (including small
hydro-electric projects) was $244 billion in 2012, down 12% from 2011 mainly due to dramatically lower solar prices
and weakened US and EU markets. As a share of total investment in power plants, wind and solar PV grew from 14% in
2000 to over 60% in 2012. The top countries for investment in recent years were China, Germany, Spain, the United
States, Italy, and Brazil. Renewable energy companies include BrightSource Energy, First Solar, Gamesa, GE Energy,
Goldwind, Sinovel, Trina Solar, Vestas and Yingli. EU member countries have shown support for ambitious renewable
energy goals. In 2010, Eurobarometer polled the twenty-seven EU member states about the target "to increase the share
of renewable energy in the EU by 20 percent by 2020". Most people in all twenty-seven countries either approved of
the target or called for it to go further. Across the EU, 57 percent thought the proposed goal was "about right"
and 16 percent thought it was "too modest." In comparison, 19 percent said it was "too ambitious". By the end of
2011, total renewable power capacity worldwide exceeded 1,360 GW, up 8%. Renewables producing electricity accounted
for almost half of the 208 GW of capacity added globally during 2011. Wind and solar photovoltaics (PV) accounted for almost 40% and 30% of that, respectively. Based on REN21's 2014 report, renewables contributed 19 percent to our energy consumption
and 22 percent to our electricity generation in 2012 and 2013, respectively. This energy consumption is divided as
9% coming from traditional biomass, 4.2% as heat energy (non-biomass), 3.8% hydroelectricity and 2% electricity
from wind, solar, geothermal, and biomass. During the five years from the end of 2004 through 2009, worldwide renewable
energy capacity grew at rates of 10–60 percent annually for many technologies. In 2011, UN under-secretary general
Achim Steiner said: "The continuing growth in this core segment of the green economy is not happening by chance.
The combination of government target-setting, policy support and stimulus funds is underpinning the renewable industry's
rise and bringing the much needed transformation of our global energy system within reach." He added: "Renewable
energies are expanding both in terms of investment, projects and geographical spread. In doing so, they are making
an increasing contribution to combating climate change, countering energy poverty and energy insecurity". According
to a 2011 projection by the International Energy Agency, solar power plants may produce most of the world's electricity
within 50 years, significantly reducing the emissions of greenhouse gases that harm the environment. The IEA has
said: "Photovoltaic and solar-thermal plants may meet most of the world's demand for electricity by 2060 – and half
of all energy needs – with wind, hydropower and biomass plants supplying much of the remaining generation". "Photovoltaic
and concentrated solar power together can become the major source of electricity". In 2013, China led the world in
renewable energy production, with a total capacity of 378 GW, mainly from hydroelectric and wind power. As of 2014,
China leads the world in the production and use of wind power, solar photovoltaic power and smart grid technologies,
generating almost as much water, wind and solar energy as all of France and Germany's power plants combined. China's
renewable energy sector is growing faster than its fossil fuels and nuclear power capacity. Since 2005, production
of solar cells in China has expanded 100-fold. As Chinese renewable manufacturing has grown, the costs of renewable
energy technologies have dropped. Innovation has helped, but the main driver of reduced costs has been market expansion.
Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass
production and market competition. A 2011 IEA report said: "A portfolio of renewable energy technologies is becoming
cost-competitive in an increasingly broad range of circumstances, in some cases providing investment opportunities
without the need for specific economic support," and added that "cost reductions in critical technologies, such as
wind and solar, are set to continue." As of 2011, there have been substantial reductions in the cost of solar and wind technologies. Renewable energy is also the most economic solution for new grid-connected capacity in areas
with good resources. As the cost of renewable power falls, the scope of economically viable applications increases.
Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation
is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable
solution almost always exists today". As of 2012, renewable power generation technologies accounted for around half
of all new power generation capacity additions globally. In 2011, additions included 41 gigawatts (GW) of new wind
power capacity, 30 GW of PV, 25 GW of hydro-electricity, 6 GW of biomass, 0.5 GW of CSP, and 0.1 GW of geothermal
power. Biomass for heat and power is a fully mature technology which offers a ready disposal mechanism for municipal,
agricultural, and industrial organic wastes. However, the industry has remained relatively stagnant over the decade
to 2007, even though demand for biomass (mostly wood) continues to grow in many developing countries. One of the
problems of biomass is that material directly combusted in cook stoves produces pollutants, leading to severe health
and environmental consequences, although improved cook stove programmes are alleviating some of these effects. First-generation
biomass technologies can be economically competitive, but may still require deployment support to overcome public
acceptance and small-scale issues. Hydroelectricity is the term referring to electricity generated by hydropower;
the production of electrical power through the use of the gravitational force of falling or flowing water. It is
the most widely used form of renewable energy, accounting for 16 percent of global electricity generation – 3,427
terawatt-hours of electricity production in 2010, and is expected to increase about 3.1% each year for the next 25
years. Hydroelectric plants have the advantage of being long-lived and many existing plants have operated for more
than 100 years. Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global
hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010,
representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than
10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela. The
cost of hydroelectricity is low, making it a competitive source of renewable electricity. The average cost of electricity
from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. Geothermal power capacity grew
from around 1 GW in 1975 to almost 10 GW in 2008. The United States is the world leader in terms of installed capacity,
representing 3.1 GW. Other countries with significant installed capacity include the Philippines (1.9 GW), Indonesia
(1.2 GW), Mexico (1.0 GW), Italy (0.8 GW), Iceland (0.6 GW), Japan (0.5 GW), and New Zealand (0.5 GW). In some countries,
geothermal power accounts for a significant share of the total electricity supply, such as in the Philippines, where
geothermal represented 17 percent of the total power mix at the end of 2008. Many solar photovoltaic power stations
have been built, mainly in Europe. As of July 2012, the largest photovoltaic (PV) power plants in the world are the
Agua Caliente Solar Project (USA, 247 MW), Charanka Solar Park (India, 214 MW), Golmud Solar Park (China, 200 MW),
Perovo Solar Park (Russia 100 MW), Sarnia Photovoltaic Power Plant (Canada, 97 MW), Brandenburg-Briest Solarpark
(Germany 91 MW), Solarpark Finow Tower (Germany 84.7 MW), Montalto di Castro Photovoltaic Power Station (Italy, 84.2
MW), Eggebek Solar Park (Germany 83.6 MW), Senftenberg Solarpark (Germany 82 MW), Finsterwalde Solar Park (Germany,
80.7 MW), Okhotnykovo Solar Park (Russia, 80 MW), Lopburi Solar Farm (Thailand 73.16 MW), Rovigo Photovoltaic Power
Plant (Italy, 72 MW), and the Lieberose Photovoltaic Park (Germany, 71.8 MW). There are also many large plants under
construction. The Desert Sunlight Solar Farm under construction in Riverside County, California and Topaz Solar Farm
being built in San Luis Obispo County, California are both 550 MW solar parks that will use thin-film solar photovoltaic
modules made by First Solar. The Blythe Solar Power Project is a 500 MW photovoltaic station under construction in
Riverside County, California. The California Valley Solar Ranch (CVSR) is a 250 megawatt (MW) solar photovoltaic
power plant, which is being built by SunPower in the Carrizo Plain, northeast of California Valley. The 230 MW Antelope
Valley Solar Ranch is a First Solar photovoltaic project which is under construction in the Antelope Valley area
of the Western Mojave Desert, and due to be completed in 2013. The Mesquite Solar project is a photovoltaic solar
power plant being built in Arlington, Maricopa County, Arizona, owned by Sempra Generation. Phase 1 will have a nameplate
capacity of 150 megawatts. Some of the second-generation renewables, such as wind power, have high potential and
have already realised relatively low production costs. Global wind power installations increased by 35,800 MW in
2010, bringing total installed capacity up to 194,400 MW, a 22.5% increase on the 158,700 MW installed at the end
of 2009. The increase for 2010 represents investments totalling €47.3 billion (US$65 billion) and for the first time
more than half of all new wind power was added outside of the traditional markets of Europe and North America, mainly driven by the continuing boom in China, which accounted for nearly half of all the installations at 16,500 MW.
China now has 42,300 MW of wind power installed. Wind power accounts for approximately 19% of electricity generated
in Denmark, 9% in Spain and Portugal, and 6% in Germany and the Republic of Ireland. In the Australian state of South Australia, wind power, championed by Premier Mike Rann (2002–2011), now comprises 26% of the state's electricity generation, edging out coal-fired power. At the end of 2011, South Australia, with 7.2% of Australia's population, had 54% of the nation's installed wind power capacity. Wind power's share of worldwide electricity usage at the end of 2014 was 3.1%. As of 2014, the wind industry in the USA is able to
produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at
higher elevations. This has opened up new opportunities and in Indiana, Michigan, and Ohio, the price of power from
wind turbines built 300 feet to 400 feet above the ground can now compete with conventional fossil fuels like coal.
Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount
of wind energy in their portfolio, saying it is their cheapest option. Solar thermal power stations include the 354
megawatt (MW) Solar Energy Generating Systems power plant in the USA, Solnova Solar Power Station (Spain, 150 MW),
Andasol solar power station (Spain, 100 MW), Nevada Solar One (USA, 64 MW), PS20 solar power tower (Spain, 20 MW),
and the PS10 solar power tower (Spain, 11 MW). The 370 MW Ivanpah Solar Power Facility, located in California's Mojave
Desert, is the world's largest solar-thermal power plant project currently under construction. Many other plants
are under construction or planned, mainly in Spain and the USA. In developing countries, three World Bank projects
for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt, Mexico, and Morocco have been approved.
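The wind-capacity figures quoted earlier (35,800 MW added in 2010, lifting the global total from 158,700 MW at the end of 2009 to 194,400 MW, a 22.5% increase) are internally consistent to within the rounding of the quoted numbers, which is easy to verify:

```python
# Sanity-check the global wind-capacity figures quoted in the text.
end_2009 = 158_700    # MW installed at end of 2009
added_2010 = 35_800   # MW reported added during 2010
end_2010 = 194_400    # MW installed at end of 2010 (as quoted)

growth = (end_2010 - end_2009) / end_2009
print(f"growth in 2010: {growth:.1%}")   # matches the quoted 22.5%

# Additions vs. totals agree to within rounding (figures are
# rounded to the nearest 100 MW in the source):
discrepancy = end_2009 + added_2010 - end_2010
print(f"rounding discrepancy: {discrepancy} MW")
```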
Nearly all the gasoline sold in the United States today is mixed with 10 percent ethanol, a mix known as E10, and
motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford, DaimlerChrysler,
and GM are among the automobile companies that sell flexible-fuel cars, trucks, and minivans that can use gasoline
and ethanol blends ranging from pure gasoline up to 85% ethanol (E85). The challenge is to expand the market for
biofuels beyond the farm states where they have been most popular to date. The Energy Policy Act of 2005, which calls
for 7.5 billion US gallons (28,000,000 m3) of biofuels to be used annually by 2012, will also help to expand the
market. According to the International Energy Agency, cellulosic ethanol biorefineries could allow biofuels to play
a much bigger role in the future than organizations such as the IEA previously thought. Cellulosic ethanol can be
made from plant matter composed primarily of inedible cellulose fibers that form the stems and branches of most plants.
Crop residues (such as corn stalks, wheat straw and rice straw), wood waste, and municipal solid waste are potential
sources of cellulosic biomass. Dedicated energy crops, such as switchgrass, are also promising cellulose sources
that can be sustainably produced in many regions. As of 2008, geothermal power development was under way
in more than 40 countries, partially attributable to the development of new technologies, such as Enhanced Geothermal
Systems. The development of binary cycle power plants and improvements in drilling and extraction technology may
enable enhanced geothermal systems over a much greater geographical range than "traditional" geothermal systems.
Demonstration EGS projects are operational in the USA, Australia, Germany, France, and the United Kingdom. The PV
industry has seen drops in module prices since 2008. In late 2011, factory-gate prices for crystalline-silicon photovoltaic
modules dropped below the $1.00/W mark. The $1.00/W installed cost is often regarded in the PV industry as marking
the achievement of grid parity for PV. These reductions have taken many stakeholders, including industry analysts,
by surprise, and perceptions of current solar power economics often lag behind reality. Some stakeholders still
have the perspective that solar PV remains too costly on an unsubsidized basis to compete with conventional generation
options. Yet technological advancements, manufacturing process improvements, and industry restructuring mean that
further price reductions are likely in coming years. Many energy markets, institutions, and policies have been developed
to support the production and use of fossil fuels. Newer and cleaner technologies may offer social and environmental
benefits, but utility operators often reject renewable resources because they are trained to think only in terms
of big, conventional power plants. Consumers often ignore renewable power systems because they are not given accurate
price signals about electricity consumption. Intentional market distortions (such as subsidies), and unintentional
market distortions (such as split incentives) may work against renewables. Benjamin K. Sovacool has argued that "some
of the most surreptitious, yet powerful, impediments facing renewable energy and energy efficiency in the United
States are more about culture and institutions than engineering and science". Lester Brown states that the market
"does not incorporate the indirect costs of providing goods or services into prices, it does not value nature's services
adequately, and it does not respect the sustainable-yield thresholds of natural systems". It also favors the near
term over the long term, thereby showing limited concern for future generations. Tax and subsidy shifting can help
overcome these problems, though it is also problematic to combine different international normative regimes regulating
this issue. Tax shifting has been widely discussed and endorsed by economists. It involves lowering income taxes
while raising levies on environmentally destructive activities, in order to create a more responsive market. For
example, a tax on coal that included the increased health care costs associated with breathing polluted air, the
costs of acid rain damage, and the costs of climate disruption would encourage investment in renewable technologies.
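The coal-tax example above is, at bottom, arithmetic on the price of coal: add the external costs to the market price so the full cost shows up in the price signal. A minimal sketch of that calculation (every cost figure below is a hypothetical placeholder, not an estimate from the text):

```python
# Hedged sketch of tax shifting: add estimated external costs to the
# market price of coal so its full social cost is reflected in the price.
# All numbers are illustrative placeholders, not real estimates.

MARKET_PRICE = 60.0        # $/tonne of coal, hypothetical
EXTERNALITIES = {          # $/tonne, hypothetical external costs
    "health care (air pollution)": 25.0,
    "acid rain damage": 10.0,
    "climate disruption": 35.0,
}

def full_cost(market_price, externalities):
    """Return the externality-adjusted price per tonne."""
    return market_price + sum(externalities.values())

adjusted = full_cost(MARKET_PRICE, EXTERNALITIES)
print(f"market price: ${MARKET_PRICE:.2f}/t, full cost: ${adjusted:.2f}/t")
```

With these placeholder figures the levy more than doubles the effective price of coal, which is the mechanism by which tax shifting makes investment in renewable technologies comparatively attractive.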
Several Western European countries are already shifting taxes in a process known there as environmental tax reform.
Just as there is a need for tax shifting, there is also a need for subsidy shifting. Subsidies are not an inherently
bad thing as many technologies and industries emerged through government subsidy schemes. The Stern Review explains
that of 20 key innovations from the past 30 years, only one of the 14 was funded entirely by the private sector and
nine were totally publicly funded. In terms of specific examples, the Internet was the result of publicly funded
links among computers in government laboratories and research institutes. And the combination of the federal tax
deduction and a robust state tax deduction in California helped to create the modern wind power industry. Renewable
energy commercialization involves the deployment of three generations of renewable energy technologies dating back
more than 100 years. First-generation technologies, which are already mature and economically competitive, include
biomass, hydroelectricity, geothermal power and heat. Second-generation technologies are market-ready and are being
deployed at the present time; they include solar heating, photovoltaics, wind power, solar thermal power stations,
and modern forms of bioenergy. Third-generation technologies require continued R&D efforts in order to make large
contributions on a global scale and include advanced biomass gasification, hot-dry-rock geothermal power, and ocean
energy. As of 2012, renewable energy accounts for about half of new nameplate electrical capacity installed and costs
are continuing to fall. Lester Brown has argued that "a world facing the prospect of economically disruptive climate
change can no longer justify subsidies to expand the burning of coal and oil. Shifting these subsidies to the development
of climate-benign energy sources such as wind, solar, biomass, and geothermal power is the key to stabilizing the
earth's climate." The International Solar Energy Society advocates "leveling the playing field" by redressing the
continuing inequities in public subsidies of energy technologies and R&D, in which the fossil fuel and nuclear power
industries receive the largest share of financial support. Some countries are eliminating or reducing climate-disrupting subsidies
and Belgium, France, and Japan have phased out all subsidies for coal. Germany is reducing its coal subsidy. The
subsidy dropped from $5.4 billion in 1989 to $2.8 billion in 2002, and in the process Germany lowered its coal use
by 46 percent. China cut its coal subsidy from $750 million in 1993 to $240 million in 1995 and more recently has
imposed a high-sulfur coal tax. However, the United States has been increasing its support for the fossil fuel and
nuclear industries. Setting national renewable energy targets can be an important part of a renewable energy policy
and these targets are usually defined as a percentage of the primary energy and/or electricity generation mix. For
example, the European Union has prescribed an indicative renewable energy target of 12 per cent of the total EU energy
mix and 22 per cent of electricity consumption by 2010. National targets for individual EU Member States have also
been set to meet the overall target. Other developed countries with defined national or regional targets include
Australia, Canada, Israel, Japan, Korea, New Zealand, Norway, Singapore, Switzerland, and some US States. Public
policy determines the extent to which renewable energy (RE) is to be incorporated into a developed or developing
country's generation mix. Energy sector regulators implement that policy—thus affecting the pace and pattern of RE
investments and connections to the grid. Energy regulators often have authority to carry out a number of functions
that have implications for the financial feasibility of renewable energy projects. Such functions include issuing
licenses, setting performance standards, monitoring the performance of regulated firms, determining the price level
and structure of tariffs, establishing uniform systems of accounts, arbitrating stakeholder disputes (like interconnection
cost allocations), performing management audits, developing agency human resources (expertise), reporting sector
and commission activities to government authorities, and coordinating decisions with other government agencies. Thus,
regulators make a wide range of decisions that affect the financial outcomes associated with RE investments. In addition,
the sector regulator is in a position to give advice to the government regarding the full implications of focusing
on climate change or energy security. The energy sector regulator is the natural advocate for efficiency and cost-containment
throughout the process of designing and implementing RE policies. Since policies are not self-implementing, energy
sector regulators become a key facilitator (or blocker) of renewable energy investments. The driving forces behind
voluntary green electricity within the EU are the liberalized electricity markets and the RES Directive. According
to the directive, the EU Member States must ensure that the origin of electricity produced from renewables can be
guaranteed, and therefore a "guarantee of origin" must be issued (article 15). Environmental organisations are using
the voluntary market to create new renewables and to improve the sustainability of existing power production. In the
US, the main tool to track and stimulate voluntary action is the Green-e program, managed by the Center for Resource
Solutions. In Europe, the main voluntary tool used by NGOs to promote sustainable electricity production is the EKOenergy label.
A number of events in 2006 pushed renewable energy up the political agenda, including the US mid-term elections in
November, which confirmed clean energy as a mainstream issue. Also in 2006, the Stern Review made a strong economic
case for investing in low carbon technologies now, and argued that economic growth need not be incompatible with
cutting energy consumption. According to a trend analysis from the United Nations Environment Programme, climate
change concerns coupled with recent high oil prices and increasing government support are driving increasing rates
of investment in the renewable energy and energy efficiency industries. New government spending, regulation, and
policies helped the industry weather the 2009 economic crisis better than many other sectors. Most notably, U.S.
President Barack Obama's American Recovery and Reinvestment Act of 2009 included more than $70 billion in direct
spending and tax credits for clean energy and associated transportation programs. This policy-stimulus combination
represents the largest federal commitment in U.S. history for renewables, advanced transportation, and energy conservation
initiatives. Based on these new rules, many more utilities strengthened their clean-energy programs. Clean Edge suggests
that the commercialization of clean energy will help countries around the world deal with the current economic malaise.
Once-promising solar energy company, Solyndra, became involved in a political controversy involving U.S. President
Barack Obama's administration's authorization of a $535 million loan guarantee to the Corporation in 2009 as part
of a program to promote alternative energy growth. The company ceased all business activity, filed for Chapter 11
bankruptcy, and laid-off nearly all of its employees in early September 2011. As of 2012, renewable energy plays
a major role in the energy mix of many countries globally. Renewables are becoming increasingly economic in both
developing and developed countries. Prices for renewable energy technologies, primarily wind power and solar power,
continued to drop, making renewables competitive with conventional energy sources. Without a level playing field,
however, high market penetration of renewables is still dependent on robust promotional policies. Fossil fuel subsidies,
which are far higher than those for renewable energy, remain in place and need to be phased out quickly. United Nations'
Secretary-General Ban Ki-moon has said that "renewable energy has the ability to lift the poorest nations to new
levels of prosperity". In October 2011, he "announced the creation of a high-level group to drum up support for energy
access, energy efficiency and greater use of renewable energy. The group is to be co-chaired by Kandeh Yumkella,
the chair of UN Energy and director general of the UN Industrial Development Organisation, and Charles Holliday,
chairman of Bank of America". Worldwide use of solar power and wind power continued to grow significantly in 2012.
Solar electricity consumption increased by 58 percent, to 93 terawatt-hours (TWh). Use of wind power in 2012 increased
by 18.1 percent, to 521.3 TWh. Global solar and wind energy installed capacities continued to expand even though
new investments in these technologies declined during 2012. Worldwide investment in solar power in 2012 was $140.4
billion, an 11 percent decline from 2011, and wind power investment was down 10.1 percent, to $80.3 billion. But
due to lower production costs for both technologies, total installed capacities grew sharply. This pattern of declining
investment but growing installed capacity may recur in 2013. Analysts expect the market to triple by 2030. In 2015,
investment in renewables exceeded investment in fossil fuels. The incentive to use 100% renewable energy, for electricity, transport,
or even total primary energy supply globally, has been motivated by global warming and other ecological as well as
economic concerns. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological
limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. In
reviewing 164 recent scenarios of future renewable energy growth, the report noted that the majority expected renewable
sources to supply more than 17% of total energy by 2030, and 27% by 2050; the highest forecast projected 43% supplied
by renewables by 2030 and 77% by 2050. Renewable energy use has grown much faster than even advocates anticipated.
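The 2012 growth figures quoted above can be cross-checked with simple arithmetic: given a 2012 total and its year-on-year percentage change, the implied 2011 value is current / (1 + change). A sketch of that back-calculation using the numbers stated in the text:

```python
# Recover the implied prior-year value from a total and its percent change.
def prior_year(current, pct_change):
    """prior = current / (1 + change); pct_change may be negative."""
    return current / (1 + pct_change / 100.0)

solar_2011_twh = prior_year(93.0, 58.0)      # solar consumption, TWh
wind_2011_twh = prior_year(521.3, 18.1)      # wind consumption, TWh
solar_2011_inv = prior_year(140.4, -11.0)    # solar investment, $bn
wind_2011_inv = prior_year(80.3, -10.1)      # wind investment, $bn

print(f"implied 2011 solar use: {solar_2011_twh:.1f} TWh")
print(f"implied 2011 wind use:  {wind_2011_twh:.1f} TWh")
print(f"implied 2011 solar investment: ${solar_2011_inv:.1f}bn")
print(f"implied 2011 wind investment:  ${wind_2011_inv:.1f}bn")
```

So the stated 58 percent growth to 93 TWh implies roughly 59 TWh of solar consumption in 2011, and the 11 percent investment decline to $140.4 billion implies roughly $158 billion invested in 2011.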
At the national level, at least 30 nations around the world already have renewable energy contributing more than
20% of energy supply. Also, Professors S. Pacala and Robert H. Socolow have developed a series of "stabilization
wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable
energy sources," in aggregate, constitute the largest number of their "wedges." Mark Z. Jacobson, professor of civil
and environmental engineering at Stanford University and director of its Atmosphere and Energy Program says producing
all new energy with wind power, solar power, and hydropower by 2030 is feasible and existing energy supply arrangements
could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and
political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should
be similar to today's energy costs. Similarly, in the United States, the independent National Research Council has
noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role
in future electricity generation and thus help confront issues related to climate change, energy security, and the
escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the
United States, taken collectively, can supply significantly greater amounts of electricity than the total current
or projected domestic demand."
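The $1.00/W grid-parity benchmark mentioned earlier in this section lends itself to a rough levelized-cost check. A back-of-envelope sketch, under entirely hypothetical assumptions (the capacity factor, lifetime, and the omission of financing, degradation, and maintenance costs are all simplifications, not figures from the text):

```python
# Very rough levelized cost of PV electricity at a given installed cost.
# Ignores financing, degradation, and O&M -- a back-of-envelope sketch.
HOURS_PER_YEAR = 8760

def simple_lcoe(cost_per_watt, capacity_factor, lifetime_years):
    """$/kWh: installed cost per watt spread over lifetime generation."""
    kwh_per_watt = capacity_factor * HOURS_PER_YEAR * lifetime_years / 1000.0
    return cost_per_watt / kwh_per_watt

# Hypothetical: $1.00/W installed, 18% capacity factor, 25-year lifetime.
lcoe = simple_lcoe(1.00, 0.18, 25)
print(f"~${lcoe:.3f}/kWh")  # roughly 2.5 cents/kWh under these assumptions
```

Even this crude estimate shows why the $1.00/W level is treated as a milestone: spread over a multi-decade operating life, the installed cost works out to only a few cents per kilowatt-hour.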
Palermo (Italian: [paˈlɛrmo] ( listen), Sicilian: Palermu, Latin: Panormus, from Greek: Πάνορμος, Panormos, Arabic: بَلَرْم,
Balarm; Phoenician: זִיז, Ziz) is a city in Insular Italy, the capital of both the autonomous region of Sicily and
the Province of Palermo. The city is noted for its history, culture, architecture and gastronomy, playing an important
role throughout much of its existence; it is over 2,700 years old. Palermo is located in the northwest of the island
of Sicily, right by the Gulf of Palermo in the Tyrrhenian Sea. The city was founded in 734 BC by the Phoenicians
as Ziz ('flower'). Palermo then became a possession of Carthage, before becoming part of the Roman Republic, the
Roman Empire and eventually part of the Byzantine Empire, for over a thousand years. The Greeks named the city Panormus
meaning 'complete port'. From 831 to 1072 the city was under Arab rule during the Emirate of Sicily when the city
first became a capital. The Arabs shifted the Greek name into Balarm, the root for Palermo's present-day name. Following
the Norman reconquest, Palermo became the capital of a new kingdom (from 1130 to 1816), the Kingdom of Sicily and
the capital of the Holy Roman Empire under Frederick II Holy Roman Emperor and Conrad IV of Germany, King of the
Romans. Eventually Sicily would be united with the Kingdom of Naples to form the Kingdom of the Two Sicilies until
the Italian unification of 1860. Palermo is Sicily's cultural, economic and touristic capital. It is a city rich
in history, culture, art, music and food. Numerous tourists are attracted to the city for its good Mediterranean
weather, its renowned gastronomy and restaurants, its Romanesque, Gothic and Baroque churches, palaces and buildings,
and its nightlife and music. Palermo is the main Sicilian industrial and commercial center: the main industrial sectors
include tourism, services, commerce and agriculture. Palermo currently has an international airport, and a significant
underground economy. In fact, for cultural, artistic and economic reasons, Palermo was one of the
largest cities in the Mediterranean and is now among the top tourist destinations in both Italy and Europe. The city
is also going through careful redevelopment, preparing to become one of the major cities of the Euro-Mediterranean
area. Palermo is surrounded by mountains, formed of limestone, which form a cirque around the city. Some districts of
the city are divided by the mountains themselves. Historically, it was relatively difficult to reach the inner part
of Sicily from the city because of these mountains. The tallest peak of the range is La Pizzuta, about 1,333 m (4,373
ft) high. However, historically, the most important mountain is Monte Pellegrino, which is geographically separated
from the rest of the range by a plain. The mount lies right in front of the Tyrrhenian Sea. Monte Pellegrino's cliff
was described in the 19th century by Johann Wolfgang von Goethe, as "The most beautiful promontory in the world",
in his essay "Italian Journey". Today both the Papireto river and the Kemonia are covered up by buildings. However,
the shape of the former watercourses can still be recognised today, because the streets that were built on them follow
their shapes. Today the only waterway not drained yet is the Oreto river that divides the downtown of the city from
the western uptown and the industrial districts. In the basins there were, though, many seasonal torrents that helped
form swampy plains, reclaimed during history; a good example can be found in the borough of Mondello.
During 734 BC the Phoenicians, a sea trading people from the north of ancient Canaan, built a small settlement on
the natural harbor of Palermo. Some sources suggest they named the settlement "Ziz." It became one of the three main
Phoenician colonies of Sicily, along with Motya and Soluntum. However, the remains of the Phoenician presence in
the city are few and mostly preserved in the very populated center of the downtown area, making any excavation efforts
costly and logistically difficult. The site chosen by the Phoenicians made it easy to connect the port to the mountains
with a straight road that today has become Corso Calatifimi. This road helped the Phoenicians in trading with the
populations that lived beyond the mountains that surround the gulf. The first settlement is known as the Paleapolis
(Παλεάπολις), the Ancient Greek word for "old city", in order to distinguish it from a second settlement built during
the 5th century BC, called Neapolis (Νεάπολις), "new city". The Neapolis was erected towards the east and along with
it, monumental walls around the whole settlement were built to prevent attacks from foreign threats. Some part of
this structure can still be seen in the Cassaro district. This district was named after the walls themselves; the
word Cassaro deriving from the Arabic al-qasr (castle, stronghold). Along the walls there were few gates to enter and
exit the city, suggesting that trade even toward the inner part of the island occurred frequently. Moreover, according
to some studies, it may be possible that there were some walls that divided the old city from the new one too. The
colony developed around a central street (decumanus), cut perpendicularly by minor streets. This street today has
become the Corso Vittorio Emanuele. Carthage was Palermo’s major trading partner under the Phoenicians and the city
enjoyed a prolonged peace during this period. Palermo came into contact with the Ancient Greeks between the 6th and
the 5th centuries BC which preceded the Sicilian Wars, a conflict fought between the Greeks of Syracuse and the Phoenicians
of Carthage for control over the island of Sicily. During this war the Greeks named the settlement Panormos (Πάνορμος)
from which the current name is derived, meaning "all port" due to the shape of its coast. It was from Palermo that
Hamilcar I's fleet (which was defeated at the Battle of Himera) was launched. In 409 B.C. the city was looted by
Hermocrates of Syracuse. The Sicilian Wars ended in 265 BC when Carthage and Syracuse stopped warring and united
in order to stop the Romans from gaining full control of the island during the First Punic War. In 276 BC, during
the Pyrrhic War, Panormos briefly became a Greek colony after being conquered by Pyrrhus of Epirus, but returned
to Phoenician Carthage in 275. In 254 BC Panormos was besieged and conquered by the Romans in the first battle of
Panormus (the Latin name). Carthage attempted to reconquer Panormus in 251 BC but failed. As the Roman Empire
was falling apart, Palermo fell under the control of several Germanic tribes. The first were the Vandals in 440 AD
under the rule of their king Geiseric. The Vandals had occupied all the Roman provinces in North Africa by 455 establishing
themselves as a significant force. They acquired Corsica, Sardinia and Sicily shortly afterwards. However, they soon
lost these newly acquired possessions to the Ostrogoths. The Ostrogothic conquest under Theodoric the Great began
in 488; Theodoric supported Roman culture and government unlike the Germanic Goths. The Gothic War took place between
the Ostrogoths and the Eastern Roman Empire, also known as the Byzantine Empire. Sicily was the first part of Italy
to come under the control of General Belisarius, commissioned by the Eastern Emperor Justinian I, who solidified his
rule in the following years. The Muslims took control of the Island in 904, after decades of fierce fighting, and
the Emirate of Sicily was established. Muslim rule on the island lasted for about 120 years and was marked by cruelty
and brutality against the native population, which was reduced to near slavery, and Christian churches across
the island were completely destroyed. Palermo (Balarm during Arab rule) displaced
Syracuse as the capital city of Sicily. It was said to have then begun to compete with Córdoba and Cairo in terms
of importance and splendor. For more than one hundred years Palermo was the capital of a flourishing emirate. The
Arabs also introduced many agricultural crops which remain a mainstay of Sicilian cuisine. After dynastic quarrels
however, there was a Christian reconquest in 1072. The family who returned the city to Christianity were called the
Hautevilles, including Robert Guiscard and his army, who is regarded as a hero by the natives. It was under Roger
II of Sicily that Norman holdings in Sicily and the southern part of the Italian Peninsula were promoted from the
County of Sicily into the Kingdom of Sicily. The Kingdom's capital was Palermo, with the King's Court held at the
Palazzo dei Normanni. Much construction was undertaken during this period, such as the building of Palermo Cathedral.
The Kingdom of Sicily became one of the wealthiest states in Europe. Sicily fell under the control of the Holy Roman
Empire in 1194. Palermo was the preferred city of the Emperor Frederick II. Muslims of Palermo emigrated or were
expelled during Holy Roman rule. After an interval of Angevin rule (1266–1282), Sicily came under control of the
Aragon and Barcelona dynasties. By 1330, Palermo's population had declined to 51,000. From 1479 until 1713 Palermo
was ruled by the Kingdom of Spain, and again between 1717 and 1718. Palermo was also under Savoy control between
1713 and 1717 and 1718–1720 as a result of the Treaty of Utrecht. It was also ruled by Austria between 1720 and 1734.
After the Treaty of Utrecht (1713), Sicily was handed over to the Savoia, but by 1734 it was again a Bourbon possession.
Charles III chose Palermo for his coronation as King of Sicily. Charles had new houses built for the growing population,
while trade and industry grew as well. However, by now Palermo was just another provincial city, as the Royal
Court resided in Naples. Charles' son Ferdinand, though disliked by the population, took refuge in Palermo after
the French Revolution in 1798. His son Alberto died on the way to Palermo and is buried in the city. When the Kingdom
of the Two Sicilies was founded, the original capital city was Palermo (1816) but a year later moved to Naples. From
1820 to 1848 Sicily was shaken by upheavals, which culminated on 12 January 1848, with a popular insurrection, the
first one in Europe that year, led by Giuseppe La Masa. A parliament and constitution were proclaimed. The first
president was Ruggero Settimo. The Bourbons reconquered Palermo in 1849, and the city remained under their rule until the
time of Giuseppe Garibaldi. The famous general entered Palermo with his troops (the "Thousand") on 27 May 1860.
After the plebiscite later that year Palermo, along with the rest of Sicily, became part of the new Kingdom of Italy
(1861). The majority of Sicilians preferred independence to the Savoia kingdom; in 1866, Palermo became the seat
of a week-long popular rebellion, which was finally crushed after martial law was declared. The Italian government
blamed anarchists and the Church, specifically the Archbishop of Palermo, for the rebellion and began enacting anti-Sicilian
and anti-clerical policies. A new cultural, economic and industrial growth was spurred by several families, like
the Florio, the Ducrot, the Rutelli, the Sandron, the Whitaker, the Utveggio, and others. In the early twentieth
century Palermo expanded outside the old city walls, mostly to the north along the new boulevards Via Roma, Via Dante,
Via Notarbartolo, and Viale della Libertà. These roads would soon boast a huge number of villas in the Art Nouveau
style. Many of these were designed by the famous architect Ernesto Basile. The Grand Hotel Villa Igiea, designed
by Ernesto Basile for the Florio family, is a good example of Palermitan Art Nouveau. The huge Teatro Massimo was
designed in the same period by Giovan Battista Filippo Basile, and built by the Rutelli & Machì building firm of
the industrial and old Rutelli Italian family in Palermo, and was opened in 1897. The so-called "Sack of Palermo",
the speculative building boom of the postwar decades, filled the city with poorly built housing. The reduced
importance of agriculture in the Sicilian economy has
led to a massive migration to the cities, especially Palermo, which swelled in size, leading to rapid expansion towards
the north. The regulatory plans for expansion were largely ignored in the boom. New parts of town appeared almost
out of nowhere, but without parks, schools, public buildings, proper roads and the other amenities that characterise
a modern city. Palermo experiences a hot-summer Mediterranean climate (Köppen climate classification: Csa). Winters
are cool and wet, while summers are hot and dry. Temperatures in autumn and spring are usually mild. Palermo is one
of the warmest cities in Europe (mainly due to its warm nights), with an average annual air temperature of 18.5 °C
(65.3 °F). It receives approximately 2,530 hours of sunshine per year. Snow is usually a rare occurrence, but it
does occur occasionally if there is a cold front, as the Apennines are too distant to protect the island from cold
winds blowing from the Balkans, and the mountains surrounding the city facilitate the formation of snow accumulation
in Palermo, especially at night. Between the 1940s and the 2000s there were eleven episodes of considerable snowfall,
including in 1949 and in 1956, when the minimum temperature dropped to 0 °C (32 °F) and the city was blanketed
by several centimeters of snow. Snow also fell in 1999, 2009 and 2015. The average annual temperature of the
sea is above 19 °C (66 °F); from 14 °C (57 °F) in February to 26 °C (79 °F) in August. In the period from May to
November, the average sea temperature exceeds 18 °C (64 °F) and in the period from June to October, the average sea
temperature exceeds 21 °C (70 °F). Palermo has at least two circuits of city walls, many pieces of which still survive.
The first circuit surrounded the ancient core of the Punic city, the so-called Palaeopolis (in the area east of
Porta Nuova) and the Neopolis. Via Vittorio Emanuele was the main road east-west through this early walled city.
The eastern edge of the walled city was on Via Roma and the ancient port in the vicinity of Piazza Marina. The wall
circuit was approximately Porta Nuova, Corso Alberti, Piazza Peranni, Via Isodoro, Via Candela, Via Venezia, Via
Roma, Piazza Paninni, Via Biscottari, Via Del Bastione, Palazzo dei Normanni and back to Porta Nuova. In the medieval
period the wall circuit was expanded. Via Vittorio Emanuele continued to be the main road east-west through the walled
city. West gate was still Porta Nuova, the circuit continued to Corso Alberti, to Piazza Vittorio Emanuele Orlando
where it turned east along Via Volturno to Piazza Verdi and along the line of Via Cavour. At this north-east corner
there was a defence, Castello a Mare, to protect the port at La Cala. A huge chain was used to block La Cala with
the other end at S Maria della Catena (St Mary of the Chain). The sea-side wall was along the western side of Foro
Italico Umberto. The wall turns west along the northern side of Via Abramo Lincoln, continues along Corso Tukory.
The wall turns north approximately on Via Benedetto, to Palazzo dei Normanni and back to Porta Nuova. Source: Palermo
- City Guide by Adriana Chirco, 1998, Dario Flaccovio Editore. The cathedral has a heliometer (solar "observatory")
of 1690, one of a number built in Italy in the 17th and 18th centuries. The device itself is quite simple: a tiny
hole in one of the minor domes acts as pinhole camera, projecting an image of the sun onto the floor at solar noon
(12:00 in winter, 13:00 in summer). There is a bronze line, la Meridiana, on the floor, running precisely N/S. The
ends of the line mark the positions of the Sun's image at the summer and winter solstices; signs of the zodiac mark the various other
dates throughout the year. In 2010, there were 1.2 million people living in the greater Palermo area, 655,875 of
which resided in the City boundaries, of whom 47.4% were male and 52.6% were female. People under age 15 totalled
15.6% compared to pensioners who composed 17.2% of the population. This compares with the Italian average of 14.1%
people under 15 years and 20.2% pensioners. The average age of a Palermo resident is 40.4 compared to the Italian
average of 42.8. In the ten years between 2001 and 2010, the population of Palermo declined by 4.5%, while the population
of Italy, as a whole, grew by 6.0%. The reason for Palermo's decline is a population flight to the suburbs, and to
Northern Italy. The current birth rate of Palermo is 10.2 births per 1,000 inhabitants compared to the Italian average
of 9.3 births. Being Sicily's administrative capital, Palermo is a centre for much of the region's finance, tourism
and commerce. The city currently hosts an international airport, and Palermo's economic growth over the years has
brought the opening of many new businesses. The economy mainly relies on tourism and services, but also has commerce,
shipbuilding and agriculture. The city, however, still has high unemployment levels, high corruption and a significant
black market empire (Palermo being the home of the Sicilian Mafia). Even though the city still suffers from widespread
corruption, inefficient bureaucracy and organized crime, the level of crime in Palermo has gone down dramatically,
unemployment has been decreasing and many new, profitable opportunities for growth (especially regarding tourism)
have been introduced, making the city safer and better to live in. The port of Palermo, founded by the Phoenicians
over 2,700 years ago, is, together with the port of Messina, the main port of Sicily. From here ferries link Palermo
to Cagliari, Genoa, Livorno, Naples, Tunis and other cities and carry a total of almost 2 million passengers annually.
It is also an important port for cruise ships. Traffic also includes almost 5 million tonnes of cargo and 80,000
TEU yearly. The port also has links to minor Sicilian islands such as Ustica and the Aeolian Islands (via Cefalù
in summer). Inside the Port of Palermo there is a section known as "tourist marina" for sailing yachts and catamarans.
The patron saint of Palermo is Santa Rosalia, who is widely revered. On 14 July, people in Palermo celebrate the
annual Festino, the most important religious event of the year. The Festino is a procession which goes through the
main street of Palermo to commemorate the miracle attributed to Santa Rosalia who, it is believed, freed the city
from the Black Death in 1624. Her remains were discovered in a cave on Monte Pellegrino and were carried
around the city three times, banishing the plague. There is a sanctuary marking the spot where her remains were found,
which can be reached via a scenic bus ride from the city.
The modern English word green comes from the Middle English and Anglo-Saxon word grene, from the same Germanic root as the
words "grass" and "grow". It is the color of living grass and leaves and as a result is the color most associated
with springtime, growth and nature. By far the largest contributor to green in nature is chlorophyll, the chemical
by which plants photosynthesize and convert sunlight into chemical energy. Many creatures have adapted to their green
environments by taking on a green hue themselves as camouflage. Several minerals have a green color, including the
emerald, which is colored green by its chromium content. In surveys made in Europe and the United States, green is
the color most commonly associated with nature, life, health, youth, spring, hope and envy. In Europe and the U.S.
green is sometimes associated with death (green has several seemingly contrary associations), sickness, or the devil,
but in China its associations are very positive, as the symbol of fertility and happiness. In the Middle Ages and
Renaissance, when the color of clothing showed the owner's social status, green was worn by merchants, bankers and
the gentry, while red was the color of the nobility. The Mona Lisa by Leonardo da Vinci wears green, showing she
is not from a noble family; the benches in the British House of Commons are green, while those in the House of Lords
are red. Green is also the traditional color of safety and permission; a green light means go ahead, a green card
permits permanent residence in the United States. It is the most important color in Islam. It was the color of the
banner of Muhammad, and is found in the flags of nearly all Islamic countries, and represents the lush vegetation
of Paradise. It is also often associated with the culture of Gaelic Ireland, and is a color of the flag of Ireland.
Because of its association with nature, it is the color of the environmental movement. Political groups advocating
environmental protection and social justice describe themselves as part of the Green movement, some naming themselves
Green parties. This has led to similar campaigns in advertising, as companies have sold green, or environmentally
friendly, products. Several language families (Germanic, Romance, Slavic, Greek) have old terms for "green"
which are derived from words for fresh, sprouting vegetation. However, comparative linguistics makes clear that these
terms were coined independently over the past few millennia, and there is no single identifiable Proto-Indo-European
word for "green". For example, the Slavic zelenъ is cognate with Sanskrit hari "yellow, ochre, golden". The Turkic
languages also have jašɨl "green" or "yellowish green", compared to a Mongolian word for "meadow". In some languages,
including Old Chinese, Thai, Old Japanese, and Vietnamese, the same word can mean either blue or green. The Chinese
character 青 (pronounced qīng in Mandarin, ao in Japanese, and thanh in Sino-Vietnamese) has a meaning that covers
both blue and green; blue and green are traditionally considered shades of "青". In more contemporary terms, they
are 藍 (lán, in Mandarin) and 綠 (lǜ, in Mandarin) respectively. Japanese also has two terms that refer specifically
to the color green, 緑 (midori, which is derived from the classical Japanese descriptive verb midoru "to be in leaf,
to flourish" in reference to trees) and グリーン (guriin, which is derived from the English word "green"). However, in
Japan, although the traffic lights have the same colors that other countries have, the green light is described using
the same word as for blue, "aoi", because green is considered a shade of aoi; similarly, green variants of certain
fruits and vegetables such as green apples, green shiso (as opposed to red apples and red shiso) will be described
with the word "aoi". Vietnamese uses a single word for both blue and green, xanh, with variants such as xanh da trời
(azure, lit. "sky blue"), lam (blue), and lục (green; also xanh lá cây, lit. "leaf green"). "Green" in modern European
languages corresponds to about 520–570 nm, but many historical and non-European languages make other choices, e.g.
using a term for the range of ca. 450–530 nm ("blue/green") and another for ca. 530–590 nm ("green/yellow").
In the comparative study of color terms in the world's languages, green is only found as a separate category
in languages with the fully developed range of six colors (white, black, red, green, yellow, and blue), or more rarely
in systems with five colors (white, red, yellow, green, and black/blue). (See distinction of green from blue.) Languages
that do not distinguish green from blue have introduced supplementary vocabulary to denote "green", but these terms are recognizable as recent
adoptions that are not in origin color terms (much like the English adjective orange being in origin not a color
term but the name of a fruit). Thus, the Thai word เขียว besides meaning "green" also means "rank" and "smelly" and
holds other unpleasant associations. In the subtractive color system, used in painting and color printing, green
is created by a combination of yellow and blue, or yellow and cyan; in the RGB color model, used on television and
computer screens, it is one of the additive primary colors, along with red and blue, which are mixed in different
combinations to create all other colors. On the HSV color wheel, also known as the RGB color wheel, the complement
of green is magenta; that is, a color corresponding to an equal mixture of red and blue light (one of the purples).
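The complement on the RGB wheel can be computed by inverting each channel of an 8-bit RGB triple; a minimal illustrative sketch (the function name and tuple representation are assumptions, not from any particular library):

```python
def rgb_complement(rgb):
    """Complement on the RGB (HSV) color wheel: invert each 8-bit channel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

# Pure green inverts to an equal mix of red and blue, i.e. magenta.
print(rgb_complement((0, 255, 0)))  # (255, 0, 255)
```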
On a traditional color wheel, based on subtractive color, the complementary color to green is considered to be red.
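The wavelength and frequency figures quoted for green light in this section (for instance 532 nm and 563.5 THz, or 1064 nm and 281.76 THz, for green laser pointers) are related by c = λν; a quick sketch of the conversion (the helper name is an illustrative assumption):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact SI value)

def nm_to_thz(wavelength_nm: float) -> float:
    """Convert a vacuum wavelength in nanometres to frequency in terahertz."""
    return C / (wavelength_nm * 1e-9) / 1e12

# The 1064 nm infrared Nd:YAG/Nd:YVO4 line, and its frequency-doubled
# 532 nm green output (doubling the frequency halves the wavelength):
print(round(nm_to_thz(1064), 2))  # 281.76
print(round(nm_to_thz(532), 2))   # 563.52
```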
In additive color devices such as computer displays and televisions, one of the primary light sources is typically
a narrow-spectrum yellowish-green of dominant wavelength ~550 nm; this "green" primary is combined with an orangish-red
"red" primary and a purplish-blue "blue" primary to produce any color in between – the RGB color model. A unique
green (green appearing neither yellowish nor bluish) is produced on such a device by mixing light from the green
primary with some light from the blue primary. Lasers emitting in the green part of the spectrum are widely available
to the general public in a wide range of output powers. Green laser pointers outputting at 532 nm (563.5 THz) are
relatively inexpensive compared to other wavelengths of the same power, and are very popular due to their good beam
quality and very high apparent brightness. The most common green lasers use diode pumped solid state (DPSS) technology
to create the green light. An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium
vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminium garnet (Nd:YAG) and induces it to emit 281.76 THz (1064
nm). This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus
(KTP), whose non-linear properties generate light at a frequency that is twice that of the incident beam (563.5 THz);
in this case corresponding to the wavelength of 532 nm ("green"). Other green wavelengths are also available using
DPSS technology ranging from 501 nm to 543 nm. Green wavelengths are also available from gas lasers, including the
helium–neon laser (543 nm), the argon-ion laser (514 nm) and the krypton-ion laser (521 nm and 531 nm), as well as
liquid dye lasers. Green lasers have a wide variety of applications, including pointing, illumination, surgery, laser
light shows, spectroscopy, interferometry, fluorescence, holography, machine vision, non-lethal weapons and bird
control. Many minerals provide pigments which have been used in green paints and dyes over the centuries. Pigments,
in this case, are minerals which reflect the color green, rather than emitting it through luminescent or phosphorescent
qualities. The large number of green pigments makes it impossible to mention them all. Among the more notable green
minerals, however, is the emerald, which is colored green by trace amounts of chromium and sometimes vanadium. Chromium(III)
oxide (Cr2O3) is called chrome green, also called viridian or institutional green when used as a pigment. For many
years, the source of amazonite's color was a mystery. Widely thought to have been due to copper because copper compounds
often have blue and green colors, the blue-green color is likely to be derived from small quantities of lead and
water in the feldspar. Copper is the source of the green color in malachite pigments, chemically known as basic copper(II)
carbonate. Verdigris is made by placing a plate or blade of copper, brass or bronze, slightly warmed, into a vat
of fermenting wine, leaving it there for several weeks, and then scraping off and drying the green powder that forms
on the metal. The process of making verdigris was described in ancient times by Pliny. It was used by the Romans
in the murals of Pompeii, and in Celtic medieval manuscripts as early as the 5th century AD. It produced a blue-green
which no other pigment could imitate, but it had drawbacks; it was unstable, it could not resist dampness, it did
not mix well with other colors, it could ruin other colors with which it came into contact, and it was toxic. Leonardo
da Vinci, in his treatise on painting, warned artists not to use it. It was widely used in miniature paintings in
Europe and Persia in the 16th and 17th centuries. Its use largely ended in the late 19th century, when it was replaced
by the safer and more stable chrome green. Viridian, also called chrome green, a pigment made with chromium oxide
dihydrate, was patented in 1859. It became popular with painters since, unlike other synthetic greens, it was stable
and not toxic. Vincent van Gogh used it, along with Prussian blue, to create a dark blue sky with a greenish tint
in his painting Café Terrace at Night. No natural source of green food coloring has been approved
by the US Food and Drug Administration. Chlorophyll, the E numbers E140 and E141, is the most common green chemical
found in nature, but is only allowed in certain medicines and cosmetic materials. Quinoline Yellow (E104) is a commonly
used coloring in the United Kingdom but is banned in Australia, Japan, Norway and the United States. Green S (E142)
is prohibited in many countries, for it is known to cause hyperactivity, asthma, urticaria, and insomnia. To create
green sparks, fireworks use barium salts, such as barium chlorate, barium nitrate crystals, or barium chloride, also
used for green fireplace logs. Copper salts typically burn blue, but cupric chloride (also known as "campfire blue")
can also produce green flames. Green pyrotechnic flares can use a mix ratio 75:25 of boron and potassium nitrate.
Smoke can be turned green by a mixture of solvent yellow 33, solvent green 3, lactose, and magnesium carbonate, plus sodium
carbonate added to potassium chlorate. Green is common in nature, as many plants are green because of a complex chemical
known as chlorophyll, which is involved in photosynthesis. Chlorophyll absorbs the long wavelengths of light (red)
and short wavelengths of light (blue) much more efficiently than the wavelengths that appear green to the human eye,
so light reflected by plants is enriched in green. Chlorophyll absorbs green light poorly because it first arose
in organisms living in oceans where purple halobacteria were already exploiting photosynthesis. Their purple color
arose because they extracted energy in the green portion of the spectrum using bacteriorhodopsin. The new organisms
that then later came to dominate the extraction of light were selected to exploit those portions of the spectrum
not used by the halobacteria. Animals typically use the color green as camouflage, blending in with the chlorophyll
green of the surrounding environment. Green animals include, especially, amphibians, reptiles, and some fish, birds
and insects. Most fish, reptiles, amphibians, and birds appear green because of a reflection of blue light coming
through an over-layer of yellow pigment. Perception of color can also be affected by the surrounding environment.
For example, broadleaf forests typically have a yellow-green light about them as the trees filter the light. Turacoverdin
is one chemical which can cause a green hue, especially in birds. Invertebrates such as insects or mollusks often
display green colors because of porphyrin pigments, sometimes caused by diet. This can cause their feces to look
green as well. Other chemicals which generally contribute to greenness among organisms are flavins (lychochromes)
and hemanovadin. Humans have imitated this by wearing green clothing as a camouflage in military and other fields.
Substances that may impart a greenish hue to one's skin include biliverdin, the green pigment in bile, and ceruloplasmin,
a protein that carries copper ions in chelation. There is no green pigment in green eyes; like the color of blue
eyes, it is an optical illusion; its appearance is caused by the combination of an amber or light brown pigmentation
of the stroma, given by a low or moderate concentration of melanin, with the blue tone imparted by the Rayleigh scattering
of the reflected light. Green eyes are most common in Northern and Central Europe. They can also be found in Southern
Europe, West Asia, Central Asia, and South Asia. In Iceland, 89% of women and 87% of men have either blue or green
eye color. A study of Icelandic and Dutch adults found green eyes to be much more prevalent in women than in men.
Among European Americans, green eyes are most common among those of recent Celtic and Germanic ancestry, about 16%.
In Ancient Egypt green was the symbol of regeneration and rebirth, and of the crops made possible by the annual flooding
of the Nile. For painting on the walls of tombs or on papyrus, Egyptian artists used finely-ground malachite, mined
in the western Sinai and the eastern desert. A paintbox with malachite pigment was found inside the tomb of King Tutankhamun.
They also used less expensive green earth pigment, or mixed yellow ochre and blue azurite. To dye fabrics green,
they first colored them yellow with dye made from saffron and then soaked them in blue dye from the roots of the
woad plant. For the ancient Egyptians, green had very positive associations. The hieroglyph for green represented
a growing papyrus sprout, showing the close connection between green, vegetation, vigor and growth. In wall paintings,
the ruler of the underworld, Osiris, was typically portrayed with a green face, because green was the symbol of good
health and rebirth. Palettes of green facial makeup, made with malachite, were found in tombs. It was worn by both
the living and dead, particularly around the eyes, to protect them from evil. Tombs also often contained small green
amulets in the shape of scarab beetles made of malachite, which would protect and give vigor to the deceased. It
also symbolized the sea, which was called the "Very Green." In Ancient Greece, green and blue were sometimes considered
the same color, and the same word sometimes described the color of the sea and the color of trees. The philosopher
Democritus described two different greens: cloron, or pale green, and prasinon, or leek green. Aristotle considered
that green was located midway between black, symbolizing the earth, and white, symbolizing water. However, green
was not counted among the four classic colors of Greek painting (red, yellow, black and white), and is rarely found
in Greek art. The Romans had a greater appreciation for the color green; it was the color of Venus, the goddess of
gardens, vegetables and vineyards. The Romans made a fine green earth pigment, which was widely used in the wall paintings
of Pompeii, Herculaneum, Lyon, Vaison-la-Romaine, and other Roman cities. They also used the pigment verdigris, made
by soaking copper plates in fermenting wine. By the second century AD, the Romans were using green in paintings,
mosaics and glass, and there were ten different words in Latin for varieties of green. Unfortunately for those who
wanted or were required to wear green, there were no good vegetal green dyes which resisted washing and sunlight.
Green dyes were made out of the fern, plantain, buckthorn berries, the juice of nettles and of leeks, the digitalis
plant, the broom plant, the leaves of the fraxinus, or ash tree, and the bark of the alder tree, but they rapidly
faded or changed color. Only in the 16th century was a good green dye produced, by first dyeing the cloth blue with
woad, and then yellow with reseda luteola, also known as yellow-weed. In the 18th and 19th century, green was associated
with the Romantic movement in literature and art. The French philosopher Jean-Jacques Rousseau celebrated the virtues
of nature. The German poet and philosopher Goethe declared that green was the most restful color, suitable for decorating
bedrooms. Painters such as John Constable and Jean-Baptiste-Camille Corot depicted the lush green of rural landscapes
and forests. Green was contrasted to the smoky grays and blacks of the Industrial Revolution. The late nineteenth
century also brought the systematic study of color theory, and particularly the study of how complementary colors
such as red and green reinforced each other when they were placed next to each other. These studies were avidly followed
by artists such as Vincent van Gogh. Describing his painting, The Night Cafe, to his brother Theo in 1888, Van Gogh
wrote: "I sought to express with red and green the terrible human passions. The hall is blood red and pale yellow,
with a green billiard table in the center, and four lamps of lemon yellow, with rays of orange and green. Everywhere
it is a battle and antithesis of the most different reds and greens." Green can communicate safety to proceed, as
in traffic lights. Green and red were standardized as the colors of international railroad signals in the 19th century.
The first traffic light, using green and red gas lamps, was erected in 1868 in front of the Houses of Parliament
in London. It exploded the following year, injuring the policeman who operated it. In 1912, the first modern electric
traffic lights were put up in Salt Lake City, Utah. Red was chosen largely because of its high visibility, and its
association with danger, while green was chosen largely because it could not be mistaken for red. Today green lights
universally signal that a system is turned on and working as it should. In many video games, green signifies both
health and completed objectives, opposite red. Like other common colors, green has several completely opposite associations.
While it is the color most associated by Europeans and Americans with good health, it is also the color most often
associated with toxicity and poison. There was a solid foundation for this association; in the nineteenth century
several popular paints and pigments, notably verdigris, vert de Schweinfurt and vert de Paris, were highly toxic,
containing copper or arsenic.[d] The intoxicating drink absinthe was known as "the green fairy". Many flags of the
Islamic world are green, as the color is considered sacred in Islam (see below). The flag of Hamas, as well as the
flag of Iran, is green, symbolizing their Islamist ideology. The 1977 flag of Libya consisted of a simple green field
with no other characteristics. It was the only national flag in the world with just one color and no design, insignia,
or other details. Some countries used green in their flags to represent their country's lush vegetation, as in the
flag of Jamaica, and hope in the future, as in the flags of Portugal and Nigeria. The green cedar of Lebanon tree
on the Flag of Lebanon officially represents steadiness and tolerance. In the 1980s green became the color of a number
of new European political parties organized around an agenda of environmentalism. Green was chosen for its association
with nature, health, and growth. The largest green party in Europe is Alliance '90/The Greens (German: Bündnis 90/Die
Grünen) in Germany, which was formed in 1993 from the merger of the German Green Party, founded in West Germany in
1980, and Alliance 90, founded during the Revolution of 1989–1990 in East Germany. In the 2009 federal elections,
the party won 10.7% of the votes and 68 out of 622 seats in the Bundestag. Roman Catholic and more traditional Protestant
clergy wear green vestments at liturgical celebrations during Ordinary Time. In the Eastern Catholic Church, green
is the color of Pentecost. Green is one of the Christmas colors as well, possibly dating back to pre-Christian times,
when evergreens were worshiped for their ability to maintain their color through the winter season. Romans used green
holly and evergreen as decorations for their winter solstice celebration called Saturnalia, which eventually evolved
into a Christmas celebration. In Ireland and Scotland especially, green is used to represent Catholics, while orange
is used to represent Protestantism. This is shown on the national flag of Ireland.
Zinc is a chemical element with symbol Zn and atomic number 30. It is the first element of group 12 of the periodic table.
In some respects zinc is chemically similar to magnesium: its ion is of similar size and its only common oxidation
state is +2. Zinc is the 24th most abundant element in Earth's crust and has five stable isotopes. The most common
zinc ore is sphalerite (zinc blende), a zinc sulfide mineral. The largest mineable amounts are found in Australia,
Asia, and the United States. Zinc production includes froth flotation of the ore, roasting, and final extraction
using electricity (electrowinning). Brass, which is an alloy of copper and zinc, has been used since at least the
10th century BC in Judea and by the 7th century BC in Ancient Greece. Zinc metal was not produced on a large scale
until the 12th century in India and was unknown to Europe until the end of the 16th century. The mines of Rajasthan
have given definite evidence of zinc production going back to the 6th century BC. To date, the oldest evidence of
pure zinc comes from Zawar, in Rajasthan, as early as the 9th century AD when a distillation process was employed
to make pure zinc. Alchemists burned zinc in air to form what they called "philosopher's wool" or "white snow". The
element was probably named by the alchemist Paracelsus after the German word Zinke (prong, tooth). German chemist
Andreas Sigismund Marggraf is credited with discovering pure metallic zinc in 1746. Work by Luigi Galvani and Alessandro
Volta uncovered the electrochemical properties of zinc by 1800. Corrosion-resistant zinc plating of iron (hot-dip
galvanizing) is the major application for zinc. Other applications are in batteries, small non-structural castings,
and alloys, such as brass. A variety of zinc compounds are commonly used, such as zinc carbonate and zinc gluconate
(as dietary supplements), zinc chloride (in deodorants), zinc pyrithione (anti-dandruff shampoos), zinc sulfide (in
luminescent paints), and zinc methyl or zinc diethyl in the organic laboratory. Zinc is an essential mineral perceived
by the public today as being of "exceptional biologic and public health importance", especially regarding prenatal
and postnatal development. Zinc deficiency affects about two billion people in the developing world and is associated
with many diseases. In children it causes growth retardation, delayed sexual maturation, infection susceptibility,
and diarrhea. Enzymes with a zinc atom in the reactive center are widespread in biochemistry, such as alcohol dehydrogenase
in humans. Consumption of excess zinc can cause ataxia, lethargy and copper deficiency. Zinc is a bluish-white, lustrous,
diamagnetic metal, though most common commercial grades of the metal have a dull finish. It is somewhat less dense
than iron and has a hexagonal crystal structure, with a distorted form of hexagonal close packing, in which each
atom has six nearest neighbors (at 265.9 pm) in its own plane and six others at a greater distance of 290.6 pm. The
metal is hard and brittle at most temperatures but becomes malleable between 100 and 150 °C. Above 210 °C, the metal
becomes brittle again and can be pulverized by beating. Zinc is a fair conductor of electricity. For a metal, zinc
has relatively low melting (419.5 °C) and boiling points (907 °C). Its melting point is the lowest of all the transition
metals aside from mercury and cadmium. Several dozen radioisotopes have been characterized. 65Zn, which has a half-life
of 243.66 days, is the most long-lived radioisotope, followed by 72Zn with a half-life of 46.5 hours. Zinc has 10
nuclear isomers. 69mZn has the longest half-life, 13.76 h. The superscript m indicates a metastable isotope. The
nucleus of a metastable isotope is in an excited state and will return to the ground state by emitting a photon in
the form of a gamma ray. 61Zn has three excited states and 73Zn has two. The isotopes 65Zn, 71Zn, 77Zn and 78Zn each
have only one excited state. The chemistry of zinc is dominated by the +2 oxidation state. When compounds in this
oxidation state are formed the outer shell s electrons are lost, which yields a bare zinc ion with the electronic
configuration [Ar]3d10. In aqueous solution an octahedral complex, [Zn(H2O)6]2+, is the predominant species. The
volatilization of zinc in combination with zinc chloride at temperatures above 285 °C indicates the formation of
Zn2Cl2, a zinc compound with a +1 oxidation state. No compounds of zinc in oxidation states other than +1 or +2
are known. Calculations indicate that a zinc compound with the oxidation state of +4 is unlikely to exist. Zinc chemistry
is similar to the chemistry of the late first-row transition metals nickel and copper, though it has a filled d-shell,
so its compounds are diamagnetic and mostly colorless. The ionic radii of zinc and magnesium happen to be nearly
identical. Because of this some of their salts have the same crystal structure and in circumstances where ionic radius
is a determining factor zinc and magnesium chemistries have much in common. Otherwise there is little similarity.
Zinc tends to form bonds with a greater degree of covalency and it forms much more stable complexes with N- and S-donors.
Complexes of zinc are mostly 4- or 6-coordinate, although 5-coordinate complexes are known. Zinc(I) compounds
are rare, and require bulky ligands to stabilize the low oxidation state. Most zinc(I) compounds contain formally
the [Zn2]2+ core, which is analogous to the [Hg2]2+ dimeric cation present in mercury(I) compounds. The diamagnetic
nature of the ion confirms its dimeric structure. The first zinc(I) compound containing the Zn—Zn bond, (η5-C5Me5)2Zn2,
is also the first dimetallocene. The [Zn2]2+ ion rapidly disproportionates into zinc metal and zinc(II), and has
only been obtained as a yellow glass formed by cooling a solution of metallic zinc in molten ZnCl2. Binary compounds
of zinc are known for most of the metalloids and all the nonmetals except the noble gases. The oxide ZnO is a white
powder that is nearly insoluble in neutral aqueous solutions, but is amphoteric, dissolving in both strong basic
and acidic solutions. The other chalcogenides (ZnS, ZnSe, and ZnTe) have varied applications in electronics and optics.
Pnictogenides (Zn3N2, Zn3P2, Zn3As2 and Zn3Sb2), the peroxide (ZnO2), the hydride (ZnH2), and the carbide
(ZnC2) are also known. Of the four halides, ZnF2 has the most ionic character, whereas the others (ZnCl2, ZnBr2,
and ZnI2) have relatively low melting points and are considered to have more covalent character. In weakly basic
solutions containing Zn2+ ions, the hydroxide Zn(OH)2 forms as a white precipitate. In stronger alkaline solutions,
this hydroxide is dissolved to form zincates ([Zn(OH)4]2−). The nitrate Zn(NO3)2, chlorate Zn(ClO3)2, sulfate ZnSO4,
phosphate Zn3(PO4)2, molybdate ZnMoO4, cyanide Zn(CN)2, arsenite Zn(AsO2)2, arsenate Zn(AsO4)2·8H2O and
the chromate ZnCrO4 (one of the few colored zinc compounds) are a few examples of other common inorganic compounds
of zinc. One of the simplest examples of an organic compound of zinc is the acetate (Zn(O2CCH3)2). The Charaka
Samhita, thought to have been written between 300 and 500 AD, mentions a metal which, when oxidized, produces pushpanjan,
thought to be zinc oxide. Zinc mines at Zawar, near Udaipur in India, have been active since the Mauryan period.
The smelting of metallic zinc here, however, appears to have begun around the 12th century AD. One estimate is that
this location produced an estimated million tonnes of metallic zinc and zinc oxide from the 12th to 16th centuries.
Another estimate gives a total production of 60,000 tonnes of metallic zinc over this period. The Rasaratna Samuccaya,
written in approximately the 13th century AD, mentions two types of zinc-containing ores: one used for metal extraction
and another used for medicinal purposes. The name of the metal was probably first documented by Paracelsus, a Swiss-born
German alchemist, who referred to the metal as "zincum" or "zinken" in his book Liber Mineralium II, in the 16th
century. The word is probably derived from the German zinke, and supposedly meant "tooth-like, pointed or jagged"
(metallic zinc crystals have a needle-like appearance). Zink could also imply "tin-like" because of its relation
to German zinn meaning tin. Yet another possibility is that the word is derived from the Persian word سنگ seng meaning
stone. The metal was also called Indian tin, tutanego, calamine, and spinter. William Champion's brother, John, patented
a process in 1758 for calcining zinc sulfide into an oxide usable in the retort process. Prior to this, only calamine
could be used to produce zinc. In 1798, Johann Christian Ruberg improved on the smelting process by building the
first horizontal retort smelter. Jean-Jacques Daniel Dony built a different kind of horizontal zinc smelter in Belgium,
which processed even more zinc. Italian doctor Luigi Galvani discovered in 1780 that connecting the spinal cord of
a freshly dissected frog to an iron rail attached by a brass hook caused the frog's leg to twitch. He incorrectly
thought he had discovered an ability of nerves and muscles to create electricity and called the effect "animal electricity".
The galvanic cell and the process of galvanization were both named for Luigi Galvani and these discoveries paved
the way for electrical batteries, galvanization and cathodic protection. Zinc metal is produced using extractive
metallurgy. After grinding the ore, froth flotation, which selectively separates minerals from gangue by taking advantage
of differences in their hydrophobicity, is used to get an ore concentrate. This concentrate consists of about 50% zinc, with the rest being sulfur (32%), iron (13%), and SiO2 (5%). Its composition is normally zinc sulfide (80% to 85%), iron sulfide (7.0% to 12%), lead sulfide (3.0% to 5.0%), silica (2.5% to 3.5%), and cadmium sulfide (0.35% to 0.41%). Processing sulfidic zinc ores produces large amounts of sulfur dioxide and cadmium vapor. Smelter slag and other process residues also contain significant amounts of heavy metals. About 1.1 million tonnes
of metallic zinc and 130 thousand tonnes of lead were mined and smelted in the Belgian towns of La Calamine and Plombières
between 1806 and 1882. The dumps of the past mining operations leach significant amounts of zinc and cadmium, and,
as a result, the sediments of the Geul River contain significant amounts of heavy metals. About two thousand years
ago emissions of zinc from mining and smelting totaled 10 thousand tonnes a year. After increasing 10-fold from 1850,
zinc emissions peaked at 3.4 million tonnes per year in the 1980s and declined to 2.7 million tonnes in the 1990s,
although a 2005 study of the Arctic troposphere found that the concentrations there did not reflect the decline.
Anthropogenic and natural emissions occur at a ratio of 20 to 1. Zinc is more reactive than iron or steel and thus
will attract almost all local oxidation until it completely corrodes away. A protective surface layer of oxide and
carbonate (Zn5(OH)6(CO3)2) forms as the zinc corrodes. This protection lasts even after the zinc layer is scratched
but degrades through time as the zinc corrodes away. The zinc is applied electrochemically or as molten zinc by hot-dip
galvanizing or spraying. Galvanization is used on chain-link fencing, guard rails, suspension bridges, lightposts,
metal roofs, heat exchangers, and car bodies. The relative reactivity of zinc and its ability to attract oxidation
to itself makes it an efficient sacrificial anode in cathodic protection (CP). For example, cathodic protection of
a buried pipeline can be achieved by connecting anodes made from zinc to the pipe. Zinc acts as the anode (negative
terminus) by slowly corroding away as it passes electric current to the steel pipeline. Zinc is also used
to cathodically protect metals that are exposed to sea water from corrosion. A zinc disc attached to a ship's iron
rudder will slowly corrode, whereas the rudder stays unattacked. Other similar uses include a plug of zinc attached
to a propeller or the metal protective guard for the keel of the ship. Other widely used alloys that contain zinc
include nickel silver, typewriter metal, soft and aluminium solder, and commercial bronze. Zinc is also used in contemporary
pipe organs as a substitute for the traditional lead/tin alloy in pipes. Alloys of 85–88% zinc, 4–10% copper, and
2–8% aluminium find limited use in certain types of machine bearings. Zinc is the primary metal used in making American
one cent coins since 1982. The zinc core is coated with a thin layer of copper to give the impression of a copper
coin. In 1994, 33,200 tonnes (36,600 short tons) of zinc were used to produce 13.6 billion pennies in the United
States. Alloys of primarily zinc with small amounts of copper, aluminium, and magnesium are useful in die casting
as well as spin casting, especially in the automotive, electrical, and hardware industries. These alloys are marketed
under the name Zamak. An example of this is zinc aluminium. The low melting point together with the low viscosity
of the alloy makes the production of small and intricate shapes possible. The low working temperature leads to rapid
cooling of the cast products and therefore fast assembly is possible. Another alloy, marketed under the brand name
Prestal, contains 78% zinc and 22% aluminium and is reported to be nearly as strong as steel but as malleable as
plastic. This superplasticity of the alloy allows it to be molded using die casts made of ceramics and cement. Similar
alloys with the addition of a small amount of lead can be cold-rolled into sheets. An alloy of 96% zinc and 4% aluminium
is used to make stamping dies for low production run applications for which ferrous metal dies would be too expensive.
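As a quick sanity check on the penny figures quoted earlier (33,200 tonnes of zinc for 13.6 billion pennies in 1994), the implied zinc mass per coin can be computed; this is a sketch using only the numbers given in the text:

```python
# US penny zinc consumption in 1994, from the figures above.
zinc_total_tonnes = 33_200   # total zinc used that year
pennies_minted = 13.6e9      # pennies produced that year

# 1 tonne = 1,000,000 g
grams_per_penny = zinc_total_tonnes * 1_000_000 / pennies_minted
print(f"{grams_per_penny:.2f} g of zinc per penny")  # ≈ 2.44 g
```

The result is consistent with a post-1982 penny weighing 2.5 g and being about 97.5% zinc by mass.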
In building facades, roofs or other applications in which zinc is used as sheet metal and for methods such as deep
drawing, roll forming or bending, zinc alloys with titanium and copper are used. Unalloyed zinc is too brittle for
these kinds of manufacturing processes. Roughly one quarter of all zinc output in the United States (2009) is consumed in the form of zinc compounds, a variety of which are used industrially. Zinc oxide is widely used as a white pigment
in paints, and as a catalyst in the manufacture of rubber. It is also used as a heat disperser for the rubber and
acts to protect its polymers from ultraviolet radiation (the same UV protection is conferred to plastics containing
zinc oxide). The semiconductor properties of zinc oxide make it useful in varistors and photocopying products. The
zinc–zinc oxide cycle is a two-step thermochemical process based on zinc and zinc oxide for hydrogen production.
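The two steps of this cycle are commonly written as follows (a sketch; the approximate dissociation temperature is a typical literature value, not stated in the text):

```latex
% Step 1: solar-thermal dissociation of zinc oxide (endothermic)
\mathrm{ZnO} \longrightarrow \mathrm{Zn} + \tfrac{1}{2}\,\mathrm{O_2} \qquad (\approx 2000\,\mathrm{K})
% Step 2: hydrolysis of the zinc, yielding hydrogen and regenerating the oxide (exothermic)
\mathrm{Zn} + \mathrm{H_2O} \longrightarrow \mathrm{ZnO} + \mathrm{H_2}
```

The net result over both steps is the splitting of water into hydrogen and oxygen, with the zinc/zinc oxide pair acting as a recycled intermediate.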
Zinc chloride is often added to lumber as a fire retardant and can be used as a wood preservative. It is also used
to make other chemicals. Zinc methyl (Zn(CH3)2) is used in a number of organic syntheses. Zinc sulfide (ZnS) is
used in luminescent pigments such as on the hands of clocks, X-ray and television screens, and luminous paints. Crystals
of ZnS are used in lasers that operate in the mid-infrared part of the spectrum. Zinc sulfate is a chemical in dyes
and pigments. Zinc pyrithione is used in antifouling paints. 64Zn, the most abundant isotope of zinc, is very susceptible
to neutron activation, being transmuted into the highly radioactive 65Zn, which has a half-life of 244 days and produces
intense gamma radiation. Because of this, zinc oxide used in nuclear reactors as an anti-corrosion agent is depleted of 64Zn before use; this is called depleted zinc oxide. For the same reason, zinc has been proposed as a salting
material for nuclear weapons (cobalt is another, better-known salting material). A jacket of isotopically enriched
64Zn would be irradiated by the intense high-energy neutron flux from an exploding thermonuclear weapon, forming
a large amount of 65Zn, significantly increasing the radioactivity of the weapon's fallout. Such a weapon is not known
to have ever been built, tested, or used. 65Zn is also used as a tracer to study how alloys that contain zinc wear
out, or the path and the role of zinc in organisms. Zinc is included in most single tablet over-the-counter daily
vitamin and mineral supplements. Preparations include zinc oxide, zinc acetate, and zinc gluconate. It is believed
to possess antioxidant properties, which may protect against accelerated aging of the skin and muscles of the body;
studies differ as to its effectiveness. Zinc also helps speed up the healing process after an injury. It is also
suspected of being beneficial to the body's immune system. Indeed, zinc deficiency may have effects on virtually
all parts of the human immune system. Although not yet tested as a therapy in humans, a growing body of evidence
indicates that zinc may preferentially kill prostate cancer cells. Because zinc naturally homes to the prostate and
because the prostate is accessible with relatively non-invasive procedures, its potential as a chemotherapeutic agent
in this type of cancer has shown promise. However, other studies have demonstrated that chronic use of zinc supplements
in excess of the recommended dosage may actually increase the chance of developing prostate cancer, also likely due
to the natural buildup of this heavy metal in the prostate. There are many important organozinc compounds. Organozinc
chemistry is the science of organozinc compounds, describing their physical properties, synthesis and reactions. Among the important applications are the Frankland-Duppa reaction, in which an oxalate ester (ROCOCOOR) reacts with an alkyl halide R'X, zinc and hydrochloric acid to give the α-hydroxycarboxylic esters RR'COHCOOR; the Reformatsky reaction, which converts α-halo-esters and aldehydes to β-hydroxy-esters; the Simmons–Smith reaction, in which the carbenoid (iodomethyl)zinc iodide reacts with alkenes (or alkynes) and converts them to cyclopropanes; and the addition reaction of organozinc compounds to carbonyl compounds. The Barbier reaction (1899) is the zinc equivalent of the magnesium Grignard reaction and is often the better of the two: in the presence of just about any water the formation of the organomagnesium halide will fail, whereas the Barbier reaction can even take place in water. On the downside, organozincs are much less nucleophilic
than Grignards, are expensive and difficult to handle. Commercially available diorganozinc compounds are dimethylzinc,
diethylzinc and diphenylzinc. In one study, the active organozinc compound was obtained from much cheaper organobromine precursors. Zinc serves a purely structural role in zinc fingers, twists and clusters. Zinc fingers form parts of
some transcription factors, which are proteins that recognize DNA base sequences during the replication and transcription
of DNA. Each of the nine or ten Zn2+ ions in a zinc finger helps maintain the finger's structure by coordinately
binding to four amino acids in the transcription factor. The transcription factor wraps around the DNA helix and
uses its fingers to accurately bind to the DNA sequence. Other sources include fortified food and dietary supplements,
which come in various forms. A 1998 review concluded that zinc oxide, one of the most common supplements in the United
States, and zinc carbonate are nearly insoluble and poorly absorbed in the body. This review cited studies which
found low plasma zinc concentrations after zinc oxide and zinc carbonate were consumed compared with those seen after
consumption of zinc acetate and sulfate salts. However, harmful excessive supplementation is a problem among the
relatively affluent, and should probably not exceed 20 mg/day in healthy people, although the U.S. National Research Council set a Tolerable Upper Intake Level of 40 mg/day. For fortification, however, a 2003 review recommended zinc oxide
in cereals as cheap, stable, and as easily absorbed as more expensive forms. A 2005 study found that various compounds
of zinc, including oxide and sulfate, did not show statistically significant differences in absorption when added
as fortificants to maize tortillas. A 1987 study found that zinc picolinate was better absorbed than zinc gluconate
or zinc citrate. However, a study published in 2008 determined that zinc glycinate is the best absorbed of the four
dietary supplement types available. Symptoms of mild zinc deficiency are diverse. Clinical outcomes include depressed
growth, diarrhea, impotence and delayed sexual maturation, alopecia, eye and skin lesions, impaired appetite, altered
cognition, impaired host defense properties, defects in carbohydrate utilization, and reproductive teratogenesis.
Mild zinc deficiency depresses immunity, although excessive zinc does also. Animals with a diet deficient in zinc
require twice as much food in order to attain the same weight gain as animals given sufficient zinc. Despite some
concerns, western vegetarians and vegans have not been found to suffer from overt zinc deficiencies any more than
meat-eaters. Major plant sources of zinc include cooked dried beans, sea vegetables, fortified cereals, soyfoods,
nuts, peas, and seeds. However, phytates in many whole-grains and fiber in many foods may interfere with zinc absorption
and marginal zinc intake has poorly understood effects. The zinc chelator phytate, found in seeds and cereal bran,
can contribute to zinc malabsorption. There is some evidence to suggest that more than the US RDA (15 mg) of zinc
daily may be needed in those whose diet is high in phytates, such as some vegetarians. These considerations must
be balanced against the fact that there is a paucity of adequate zinc biomarkers, and the most widely used indicator,
plasma zinc, has poor sensitivity and specificity. Diagnosing zinc deficiency is a persistent challenge. Nearly two
billion people in the developing world are deficient in zinc. In children it causes an increase in infection and
diarrhea, contributing to the death of about 800,000 children worldwide per year. The World Health Organization advocates
zinc supplementation for severe malnutrition and diarrhea. Zinc supplements help prevent disease and reduce mortality,
especially among children with low birth weight or stunted growth. However, zinc supplements should not be administered
alone, because many in the developing world have several deficiencies, and zinc interacts with other micronutrients.
Zinc deficiency is the most common micronutrient deficiency in crop plants; it is particularly common in high-pH soils.
Zinc-deficient soil is cultivated in the cropland of about half of Turkey and India, a third of China, and most of
Western Australia, and substantial responses to zinc fertilization have been reported in these areas. Plants that
grow in soils that are zinc-deficient are more susceptible to disease. Zinc is primarily added to the soil through
the weathering of rocks, but humans have added zinc through fossil fuel combustion, mine waste, phosphate fertilizers,
pesticide (zinc phosphide), limestone, manure, sewage sludge, and particles from galvanized surfaces. Excess zinc
is toxic to plants, although zinc toxicity is far less widespread. There is evidence of induced copper deficiency
in those taking 100–300 mg of zinc daily. A 2007 trial observed that elderly men taking 80 mg daily were hospitalized
for urinary complications more often than those taking a placebo. The USDA RDA is 11 and 8 mg Zn/day for men and
women, respectively. Levels of 100–300 mg may interfere with the utilization of copper and iron or adversely affect
cholesterol. Levels of zinc in excess of 500 ppm in soil interfere with the ability of plants to absorb other essential
metals, such as iron and manganese. There is also a condition called the zinc shakes or "zinc chills" that can be induced by the inhalation of freshly formed zinc oxide fumes during the welding of galvanized materials. Zinc is
a common ingredient of denture cream which may contain between 17 and 38 mg of zinc per gram. There have been claims
of disability, and even death, due to excessive use of these products. The U.S. Food and Drug Administration (FDA)
has stated that zinc damages nerve receptors in the nose, which can cause anosmia. Reports of anosmia were also observed
in the 1930s when zinc preparations were used in a failed attempt to prevent polio infections. On June 16, 2009,
the FDA said that consumers should stop using zinc-based intranasal cold products and ordered their removal from
store shelves. The FDA said the loss of smell can be life-threatening because people with impaired smell cannot detect
leaking gas or smoke and cannot tell if food has spoiled before they eat it. Recent research suggests that the topical
antimicrobial zinc pyrithione is a potent heat shock response inducer that may impair genomic integrity with induction
of PARP-dependent energy crisis in cultured human keratinocytes and melanocytes. In 1982, the US Mint began minting
pennies coated in copper but made primarily of zinc. With the new zinc pennies, there is the potential for zinc toxicosis,
which can be fatal. One reported case of chronic ingestion of 425 pennies (over 1 kg of zinc) resulted in death due
to gastrointestinal bacterial and fungal sepsis, whereas another patient, who ingested 12 grams of zinc, only showed
lethargy and ataxia (gross lack of coordination of muscle movements). Several other cases have been reported of humans
suffering zinc intoxication by the ingestion of zinc coins. Pennies and other small coins are sometimes ingested
by dogs, resulting in the need for medical treatment to remove the foreign body. The zinc content of some coins can
cause zinc toxicity, which is commonly fatal in dogs, where it causes a severe hemolytic anemia, and also liver or
kidney damage; vomiting and diarrhea are possible symptoms. Zinc is highly toxic in parrots and poisoning can often
be fatal. The consumption of fruit juices stored in galvanized cans has resulted in mass parrot poisonings with zinc.
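The reactor and weapon-salting applications of 65Zn discussed earlier hinge on its 244-day half-life; a minimal decay sketch (the one- and two-year intervals are illustrative choices, not from the text):

```python
# Exponential decay of 65Zn (half-life of 244 days, from the text).
HALF_LIFE_DAYS = 244

def fraction_remaining(days: float) -> float:
    """Fraction of an initial 65Zn sample left after `days` days."""
    return 0.5 ** (days / HALF_LIFE_DAYS)

for days in (244, 365, 730):
    print(f"after {days} days: {fraction_remaining(days):.3f}")
# after 244 days: 0.500
# after 365 days: 0.355
# after 730 days: 0.126
```

Roughly an eighth of the activity thus persists after two years, which is why 65Zn contamination of fallout would be long-lived on the timescale of weeks-long fission products.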
Many early 19th-century neoclassical architects were influenced by the drawings and projects of Étienne-Louis Boullée and
Claude Nicolas Ledoux. The many graphite drawings of Boullée and his students depict spare geometrical architecture
that emulates the eternality of the universe. There are links between Boullée's ideas and Edmund Burke's conception
of the sublime. Ledoux addressed the concept of architectural character, maintaining that a building should immediately
communicate its function to the viewer: taken literally such ideas give rise to "architecture parlante". The baroque
style had never truly been to the English taste. Four influential books were published in the first quarter of the
18th century which highlighted the simplicity and purity of classical architecture: Vitruvius Britannicus (Colen
Campbell 1715), Palladio's Four Books of Architecture (1715), De Re Aedificatoria (1726) and The Designs of Inigo
Jones... with Some Additional Designs (1727). The most popular was the four-volume Vitruvius Britannicus by Colen
Campbell. The book contained architectural prints of famous British buildings that had been inspired by the great
architects from Vitruvius to Palladio. At first the book mainly featured the work of Inigo Jones, but the later tomes
contained drawings and plans by Campbell and other 18th-century architects. Palladian architecture became well established
in 18th-century Britain. At the forefront of the new school of design was the aristocratic "architect earl", Richard
Boyle, 3rd Earl of Burlington; in 1729, he and William Kent designed Chiswick House. This house was a reinterpretation of Palladio's Villa Capra, but purified of 16th-century elements and ornament. This severe lack of ornamentation was to be a feature of Palladianism. In 1734 William Kent and Lord Burlington designed one of England's finest
examples of Palladian architecture with Holkham Hall in Norfolk. The main block of this house followed Palladio's
dictates quite closely, but Palladio's low, often detached, wings of farm buildings were elevated in significance.
By the mid 18th century, the movement broadened to incorporate a greater range of Classical influences, including
those from Ancient Greece. The shift to neoclassical architecture is conventionally dated to the 1750s. It first
gained influence in England and France; in England, Sir William Hamilton's excavations at Pompeii and other sites, the influence of the Grand Tour, and the work of William Chambers and Robert Adam were pivotal in this regard. In
France, the movement was propelled by a generation of French art students trained in Rome, and was influenced by
the writings of Johann Joachim Winckelmann. The style was also adopted by progressive circles in other countries
such as Sweden and Russia. A second neoclassic wave, more severe, more studied and more consciously archaeological,
is associated with the height of the Napoleonic Empire. In France, the first phase of neoclassicism was expressed
in the "Louis XVI style", and the second in the styles called "Directoire" or Empire. The Rococo style remained popular
in Italy until the Napoleonic regimes brought the new archaeological classicism, which was embraced as a political statement by young, progressive, urban Italians with republican leanings. Indoors, neoclassicism
made a discovery of the genuine classic interior, inspired by the rediscoveries at Pompeii and Herculaneum. These
had begun in the late 1740s, but only achieved a wide audience in the 1760s, with the first luxurious volumes of
tightly controlled distribution of Le Antichità di Ercolano (The Antiquities of Herculaneum). The antiquities of
Herculaneum showed that even the most classicising interiors of the Baroque, or the most "Roman" rooms of William
Kent were based on basilica and temple exterior architecture turned outside in, hence their often bombastic appearance
to modern eyes: pedimented window frames turned into gilded mirrors, fireplaces topped with temple fronts. The new
interiors sought to recreate an authentically Roman and genuinely interior vocabulary. Techniques employed in the
style included flatter, lighter motifs, sculpted in low frieze-like relief or painted in monotones en camaïeu ("like
cameos"), isolated medallions or vases or busts or bucrania or other motifs, suspended on swags of laurel or ribbon,
with slender arabesques against backgrounds, perhaps, of "Pompeiian red" or pale tints, or stone colours. The style
in France was initially a Parisian style, the Goût grec ("Greek style"), not a court style; when Louis XVI acceded
to the throne in 1774, Marie Antoinette, his fashion-loving Queen, brought the "Louis XVI" style to court. A new
phase in neoclassical design was inaugurated by Robert and James Adam, who travelled in Italy and Dalmatia in the
1750s, observing the ruins of the classical world. On their return to Britain, they published a book entitled The
Works in Architecture in installments between 1773 and 1779. This book of engraved designs made the Adam repertory
available throughout Europe. The Adam brothers aimed to simplify the rococo and baroque styles which had been fashionable
in the preceding decades, to bring what they felt to be a lighter and more elegant feel to Georgian houses. The Works
in Architecture illustrated the main buildings the Adam brothers had worked on and crucially documented the interiors,
furniture and fittings, designed by the Adams. From about 1800 a fresh influx of Greek architectural examples, seen
through the medium of etchings and engravings, gave a new impetus to neoclassicism, the Greek Revival. There was
little to no direct knowledge of Greek civilization before the middle of the 18th century in Western Europe, when
an expedition funded by the Society of Dilettanti in 1751 and led by James Stuart and Nicholas Revett began serious
archaeological enquiry. Stuart was commissioned after his return from Greece by George Lyttelton to produce the first
Greek building in England, the garden temple at Hagley Hall (1758–59). A number of British architects in the second
half of the century took up the expressive challenge of the Doric from their aristocratic patrons, including Joseph
Bonomi and John Soane, but it was to remain the private enthusiasm of connoisseurs up to the first decade of the
19th century. Seen in its wider social context, Greek Revival architecture sounded a new note of sobriety and restraint
in public buildings in Britain around 1800 as an assertion of nationalism attendant on the Act of Union, the Napoleonic
Wars, and the clamour for political reform. It was to be William Wilkins's winning design for the public competition
for Downing College, Cambridge that announced the Greek style was to be the dominant idiom in architecture. Wilkins
and Robert Smirke went on to build some of the most important buildings of the era, including the Theatre Royal,
Covent Garden (1808–09), the General Post Office (1824–29) and the British Museum (1823–48), Wilkins's University College London (1826–30) and the National Gallery (1832–38). In Scotland, Thomas Hamilton (1784–1858), in collaboration with
the artists Andrew Wilson (1780–1848) and Hugh William Williams (1773–1829) created monuments and buildings of international
significance; the Burns Monument at Alloway (1818) and the (Royal) High School in Edinburgh (1823–29). At the same
time the Empire style in France was a more grandiose wave of neoclassicism in architecture and the decorative arts.
Mainly based on Imperial Roman styles, it originated in, and took its name from, the rule of Napoleon I in the First
French Empire, where it was intended to idealize Napoleon's leadership and the French state. The style corresponds
to the more bourgeois Biedermeier style in the German-speaking lands, Federal style in the United States, the Regency
style in Britain, and the Napoleonstil in Sweden. According to the art historian Hugh Honour "so far from being,
as is sometimes supposed, the culmination of the Neo-classical movement, the Empire marks its rapid decline and transformation
back once more into a mere antique revival, drained of all the high-minded ideas and force of conviction that had
inspired its masterpieces". High neoclassicism was an international movement. Though neoclassical architecture employed
the same classical vocabulary as Late Baroque architecture, it tended to emphasize its planar qualities, rather than
sculptural volumes. Projections and recessions and their effects of light and shade were more flat; sculptural bas-reliefs
were flatter and tended to be enframed in friezes, tablets or panels. Its clearly articulated individual features
were isolated rather than interpenetrating, autonomous and complete in themselves. Neoclassicism also influenced
city planning; the ancient Romans had used a consolidated scheme for city planning for both defence and civil convenience; however, the roots of this scheme go back to even older civilizations. At its most basic, the grid system of streets,
a central forum with city services, two main slightly wider boulevards, and the occasional diagonal street were characteristic
of the very logical and orderly Roman design. Ancient façades and building layouts were oriented to these city design
patterns and they tended to work in proportion with the importance of public buildings. From the middle of the 18th
century, exploration and publication changed the course of British architecture towards a purer vision of the Ancient
Greco-Roman ideal. James 'Athenian' Stuart's work The Antiquities of Athens and Other Monuments of Greece was very
influential in this regard, as were Robert Wood's Palmyra and Baalbec. A combination of simple forms and high levels
of enrichment was adopted by the majority of contemporary British architects and designers. The revolution begun
by Stuart was soon to be eclipsed by the work of the Adam Brothers, James Wyatt, Sir William Chambers, George Dance,
James Gandon and provincially based architects such as John Carr and Thomas Harrison of Chester. In the early 20th
century, the writings of Albert Richardson were responsible for a re-awakening of interest in pure neoclassical design.
Vincent Harris (compare Harris's colonnaded and domed interior of Manchester Central Reference Library to the colonnaded
and domed interior by John Carr and R R Duke), Bradshaw Gass & Hope and Percy Thomas were among those who designed
public buildings in the neoclassical style in the interwar period. In the British Raj in India, Sir Edwin Lutyens'
monumental city planning for New Delhi marked the sunset of neoclassicism. In Scotland and the north of England,
where the Gothic Revival was less strong, architects continued to develop the neoclassical style of William Henry
Playfair. The works of Cuthbert Brodrick and Alexander Thomson show that by the end of the 19th century the results
could be powerful and eccentric. The first phase of neoclassicism in France is expressed in the "Louis XVI style"
of architects like Ange-Jacques Gabriel (Petit Trianon, 1762–68); the second phase, in the styles called Directoire
and "Empire", might be characterized by Jean Chalgrin's severe astylar Arc de Triomphe (designed in 1806). In England
the two phases might be characterized first by the structures of Robert Adam, the second by those of Sir John Soane.
The interior style in France was initially a Parisian style, the "Goût grec" ("Greek style"), not a court style; only when the young king acceded to the throne in 1774 did Marie Antoinette, his fashion-loving Queen, bring the "Louis XVI" style to court. What little Greek Revival building there was in France started with Charles de Wailly's crypt in the church of St Leu-St Gilles (1773–80), and Claude Nicolas Ledoux's Barrière des Bonshommes (1785–89). First-hand evidence of Greek architecture
was of very little importance to the French, due to the influence of Marc-Antoine Laugier's doctrines that sought
to discern the principles of the Greeks instead of their mere practices. It would take until Labrouste's Néo-Grec of the Second Empire for the Greek Revival to flower briefly in France. The earliest examples of neoclassical architecture
in Hungary may be found in Vác. In this town the triumphal arch and the neoclassical façade of the baroque Cathedral
were designed by the French architect Isidor Marcellus Amandus Ganneval (Isidore Canevale) in the 1760s. The garden façade of the Esterházy Palace (1797–1805) in Kismarton (today Eisenstadt in Austria) is also the work of a French architect, Charles Moreau. The two principal architects of Neoclassicism in Hungary were Mihály Pollack and József Hild.
Pollack's major work is the Hungarian National Museum (1837–1844). Hild is famous for his designs for the Cathedral
of Eger and Esztergom. Neoclassical architecture was introduced in Malta in the late 18th century, during the final
years of Hospitaller rule. Early examples include the Bibliotheca (1786), the De Rohan Arch (1798) and the Hompesch
Gate (1801). However, neoclassical architecture only became popular in Malta following the establishment of British
rule in the early 19th century. In 1814, a neoclassical portico decorated with the British coat of arms was added
to the Main Guard building so as to serve as a symbol of British Malta. Other 19th century neoclassical buildings
include RNH Bighi (1832), St Paul's Pro-Cathedral (1844), the Rotunda of Mosta (1860) and the now destroyed Royal
Opera House (1866). As of the first decade of the 21st century, contemporary neoclassical architecture is usually
classed under the umbrella term of New Classical Architecture. Sometimes it is also referred to as Neo-Historicism/Revivalism,
Traditionalism or simply neoclassical architecture like the historical style. For sincere traditional-style architecture
that sticks to regional architecture, materials and craftsmanship, the term Traditional Architecture (or vernacular)
is mostly used. The Driehaus Architecture Prize is awarded to major contributors in the field of 21st century traditional
or classical architecture, and comes with prize money twice as high as that of the modernist Pritzker Prize. After
a lull during the period of modern architectural dominance (roughly post-World War II until the mid-1980s), neoclassicism
has seen somewhat of a resurgence. This rebirth can be traced to the movement of New Urbanism and postmodern architecture's
embrace of classical elements as ironic, especially in light of the dominance of Modernism. While some continued to work with classicism ironically, architects such as Thomas Gordon Smith began to consider classicism seriously.
While some schools had interest in classical architecture, such as the University of Virginia, no school was purely
dedicated to classical architecture. In the early 1990s a program in classical architecture was started by Smith
and Duncan Stroik at the University of Notre Dame that continues successfully. Programs at the University of Miami,
Andrews University, Judson University and The Prince's Foundation for Building Community have trained a number of
new classical architects since this resurgence. Today one can find numerous buildings embracing neoclassical style,
since a generation of architects trained in this discipline shapes urban planning. In Britain a number of architects
are active in the neoclassical style. Two new university libraries, Quinlan Terry's Maitland Robinson Library at
Downing College and ADAM Architecture's Sackler Library illustrate that the approach taken can range from the traditional,
in the former case, to the unconventional, in the latter case. Recently, Prince Charles stirred controversy by
promoting a classically designed development on the land of the former Chelsea Barracks in London. Writing to the
Qatari Royal family (who were funding the development through the property development company Qatari Diar) he condemned
the accepted modernist plans, instead advocating a classical approach. His appeal was met with success and the plans
were withdrawn. A new design by the architectural firm Dixon Jones is currently being drafted.
South Slavic dialects historically formed a continuum. The turbulent history of the area, particularly due to expansion of
the Ottoman Empire, resulted in a patchwork of dialectal and religious differences. Due to population migrations,
Shtokavian became the most widespread in the western Balkans, intruding westwards into the area previously occupied
by Chakavian and Kajkavian (which further blend into Slovenian in the northwest). Bosniaks, Croats and Serbs differ
in religion and were historically often part of different cultural circles, although a large part of the nations
have lived side by side under foreign overlords. During that period, the language was referred to under a variety
of names, such as "Slavic", "Illyrian", or according to region, "Bosnian", "Serbian" and "Croatian", the latter often
in combination with "Slavonian" or "Dalmatian". Serbo-Croatian was standardized in the mid-19th-century Vienna Literary
Agreement by Croatian and Serbian writers and philologists, decades before a Yugoslav state was established. From
the very beginning, there were slightly different literary Serbian and Croatian standards, although both were based
on the same Shtokavian subdialect, Eastern Herzegovinian. In the 20th century, Serbo-Croatian served as the official
language of the Kingdom of Yugoslavia (when it was called "Serbo-Croato-Slovenian"), and later as one of the official
languages of the Socialist Federal Republic of Yugoslavia. The breakup of Yugoslavia affected language attitudes,
so that social conceptions of the language separated on ethnic and political lines. Since the breakup of Yugoslavia,
Bosnian has likewise been established as an official standard in Bosnia and Herzegovina, and there is an ongoing
movement to codify a separate Montenegrin standard. Serbo-Croatian thus generally goes by the ethnic names Serbian,
Croatian, Bosnian, and sometimes Montenegrin and Bunjevac. Like other South Slavic languages, Serbo-Croatian has
a simple phonology, with the common five-vowel system and twenty-five consonants. Its grammar evolved from Common
Slavic, with complex inflection, preserving seven grammatical cases in nouns, pronouns, and adjectives. Verbs exhibit
imperfective or perfective aspect, with a moderately complex tense system. Serbo-Croatian is a pro-drop language
with flexible word order, subject–verb–object being the default. It can be written in Serbian Cyrillic or Gaj's Latin
alphabet, whose thirty letters mutually map one-to-one, and the orthography is highly phonemic in all standards.
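The one-to-one correspondence between the two scripts means transliteration is lossless, which can be illustrated with a short sketch (a hedged example: the table and function names here are our own, not a standard library API):

```python
# Illustrative sketch of the one-to-one Serbian Cyrillic -> Gaj's Latin mapping.
# Lowercase letters only; the full 30-letter inventory is covered. The Latin
# digraphs lj, nj and dž each correspond to a single Cyrillic letter.
CYR_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "ђ": "đ",
    "е": "e", "ж": "ž", "з": "z", "и": "i", "ј": "j", "к": "k",
    "л": "l", "љ": "lj", "м": "m", "н": "n", "њ": "nj", "о": "o",
    "п": "p", "р": "r", "с": "s", "т": "t", "ћ": "ć", "у": "u",
    "ф": "f", "х": "h", "ц": "c", "ч": "č", "џ": "dž", "ш": "š",
}

def cyrillic_to_latin(text: str) -> str:
    """Transliterate Serbian Cyrillic to Gaj's Latin, letter by letter."""
    return "".join(CYR_TO_LAT.get(ch, ch) for ch in text)

print(cyrillic_to_latin("љубав"))  # -> ljubav
```

Note that the reverse direction is equally mechanical but requires matching the digraphs lj, nj and dž before single letters, since they map to one Cyrillic character each.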
Throughout the history of the South Slavs, the vernacular, literary, and written languages (e.g. Chakavian, Kajkavian,
Shtokavian) of the various regions and ethnicities developed and diverged independently. Prior to the 19th century,
they were collectively called "Illyric", "Slavic", "Slavonian", "Bosnian", "Dalmatian", "Serbian" or "Croatian".
As such, the term Serbo-Croatian was first used by Jacob Grimm in 1824, popularized by the Vienna philologist Jernej
Kopitar in the following decades, and accepted by Croatian Zagreb grammarians in 1854 and 1859. At that time, Serb
and Croat lands were still part of the Ottoman and Austrian Empires. Officially, the language was called variously
Serbo-Croat, Croato-Serbian, Serbian and Croatian, Croatian and Serbian, Serbian or Croatian, Croatian or Serbian.
Unofficially, Serbs and Croats typically called the language "Serbian" or "Croatian", respectively, without implying
a distinction between the two, and again in independent Bosnia and Herzegovina, "Bosnian", "Croatian", and "Serbian"
were considered to be three names of a single official language. Croatian linguist Dalibor Brozović advocated the
term Serbo-Croatian as late as 1988, claiming that in an analogy with Indo-European, Serbo-Croatian does not only
name the two components of the same language, but simply charts the limits of the region in which it is spoken and
includes everything between the limits (‘Bosnian’ and ‘Montenegrin’). Today, use of the term "Serbo-Croatian" is
controversial due to the prejudice that nation and language must match. It is still used for lack of a succinct alternative,
though alternative names have been used, such as Bosnian/Croatian/Serbian (BCS), which is often seen in political
contexts such as the Hague War Crimes tribunal. In the mid-19th century, Serbian (led by self-taught writer and folklorist
Vuk Stefanović Karadžić) and most Croatian writers and linguists (represented by the Illyrian movement and led by
Ljudevit Gaj and Đuro Daničić), proposed the use of the most widespread dialect, Shtokavian, as the base for their
common standard language. Karadžić standardised the Serbian Cyrillic alphabet, and Gaj and Daničić standardized the
Croatian Latin alphabet, on the basis of vernacular speech phonemes and the principle of phonological spelling. In
1850 Serbian and Croatian writers and linguists signed the Vienna Literary Agreement, declaring their intention to
create a unified standard. Thus a complex bi-variant language appeared, which the Serbs officially called "Serbo-Croatian"
or "Serbian or Croatian" and the Croats "Croato-Serbian", or "Croatian or Serbian". Yet, in practice, the variants
of the conceived common literary language served as different literary variants, chiefly differing in lexical inventory
and stylistic devices. The common phrase describing this situation was that Serbo-Croatian or "Croatian or Serbian"
was a single language. During the Austro-Hungarian occupation of Bosnia and Herzegovina, the language of all three
nations was called "Bosnian" until the death of administrator von Kállay in 1907, at which point the name was changed
to "Serbo-Croatian". West European scientists judge the Yugoslav language policy as an exemplary one: although three-quarters
of the population spoke one language, no single language was official on a federal level. Official languages were
declared only at the level of constituent republics and provinces, and very generously: Vojvodina had five (among
them Slovak and Romanian, spoken by 0.5 per cent of the population), and Kosovo four (Albanian, Turkish, Romany and
Serbo-Croatian). Newspapers, radio and television studios used sixteen languages; fourteen were used as languages
of instruction in schools, and nine at universities. Only the Yugoslav Army used Serbo-Croatian as the sole language
of command, with all other languages represented in the army’s other activities—however, this is not different from
other armies of multilingual states, or in other specific institutions, such as international air traffic control
where English is used worldwide. All variants of Serbo-Croatian were used in state administration and republican
and federal institutions. Both Serbian and Croatian variants were represented in respectively different grammar books,
dictionaries, school textbooks and in books known as pravopis (which detail spelling rules). Serbo-Croatian was a
kind of soft standardisation. However, legal equality could not dampen the prestige Serbo-Croatian had: since it
was the language of three quarters of the population, it functioned as an unofficial lingua franca. And within Serbo-Croatian,
the Serbian variant, with twice as many speakers as the Croatian, enjoyed greater prestige, reinforced by the fact
that Slovene and Macedonian speakers preferred it to the Croatian variant because their languages are also Ekavian.
This is a common situation in other pluricentric languages, e.g. the variants of German differ according to their
prestige, the variants of Portuguese too. Moreover, all languages differ in terms of prestige: "the fact is that
languages (in terms of prestige, learnability etc.) are not equal, and the law cannot make them equal". Like most
Slavic languages, Serbo-Croatian has three genders for nouns: masculine, feminine, and neuter, a distinction which
is still present even in the plural (unlike Russian and, in part, the Čakavian dialect). Nouns also have two numbers:
singular and plural. However, some consider there to be a third number (a paucal or dual, still preserved in closely
related Slovene): after two (dva, dvije/dve), three (tri) and four (četiri), and after all numbers ending in
them (e.g. twenty-two, ninety-three, one hundred four), the genitive singular is used, while after five (pet)
and all higher numbers the genitive plural is used. (The number one [jedan] is treated as an adjective.) Adjectives are
placed in front of the noun they modify and must agree in both case and number with it. Comparative and historical
linguistics offers some clues for memorising the accent position: If one compares many standard Serbo-Croatian words
to e.g. cognate Russian words, the accent in the Serbo-Croatian word will be one syllable before the one in the Russian
word, with the rising tone. Historically, the rising tone appeared when the place of the accent shifted to the preceding
syllable (the so-called "Neoshtokavian retraction"), but the quality of this new accent was different – its melody
still "gravitated" towards the original syllable. Most Shtokavian (Neoshtokavian) dialects underwent this
shift, but Chakavian, Kajkavian and the Old Shtokavian dialects did not. The Croatian Latin alphabet (Gajica) followed
suit shortly afterwards, when Ljudevit Gaj defined it as standard Latin with five extra letters that had diacritics,
apparently borrowing much from Czech, but also from Polish, and inventing the unique digraphs "lj", "nj" and "dž".
These digraphs are represented as "ļ, ń and ǵ" respectively in the "Rječnik hrvatskog ili srpskog jezika", published
by the former Yugoslav Academy of Sciences and Arts in Zagreb. The latter digraphs, however, are unused in the literary
standard of the language. All in all, this makes Serbo-Croatian the only Slavic language to officially use both the
Latin and Cyrillic scripts, although the Latin version is more commonly used. South Slavic historically formed a dialect
continuum, i.e. each dialect has some similarities with the neighboring one, and differences grow with distance.
However, migrations from the 16th to 18th centuries resulting from the spread of the Ottoman Empire in the Balkans have
caused large-scale population displacement that broke the dialect continuum into many geographical pockets. Migrations
in the 20th century, primarily caused by urbanization and wars, also contributed to the reduction of dialectal differences.
The Serbo-Croatian dialects differ not only in the question word they are named after, but also heavily in phonology,
accentuation and intonation, case endings and tense system (morphology) and basic vocabulary. In the past, Chakavian
and Kajkavian dialects were spoken on a much larger territory, but have been replaced by Štokavian during the period
of migrations caused by the Ottoman Turkish conquest of the Balkans in the 15th and 16th centuries. These migrations
caused the koinéisation of the Shtokavian dialects, which used to form the West Shtokavian (closer and transitional
towards the neighbouring Chakavian and Kajkavian dialects) and East Shtokavian (transitional towards the Torlakian
and the whole Bulgaro-Macedonian area) dialect bundles, and their subsequent spread at the expense of Chakavian and
Kajkavian. As a result, Štokavian now covers an area larger than all the other dialects combined, and continues to
make inroads into the enclaves where non-literary dialects are still spoken. Enisa Kafadar argues that there
is only one Serbo-Croatian language with several varieties. This has made it possible to include all four varieties
into a new grammar book. Daniel Bunčić concludes that it is a pluricentric language, with four standard variants
spoken in Serbia, Croatia, Montenegro and Bosnia and Herzegovina. The mutual intelligibility between their speakers
"exceeds that between the standard variants of English, French, German, or Spanish". Other linguists have argued
that the differences between the variants of Serbo-Croatian are less significant than those between the variants
of English, German, Dutch, and Hindi-Urdu. The opinion of the majority of Croatian linguists[citation needed] is
that there has never been a Serbo-Croatian language, but two different standard languages that overlapped sometime
in the course of history. However, Croatian linguist Snježana Kordić has been leading an academic discussion on that
issue in the Croatian journal Književna republika from 2001 to 2010. In the discussion, she shows that linguistic
criteria such as mutual intelligibility, huge overlap in linguistic system, and the same dialectic basis of standard
language provide evidence that Croatian, Serbian, Bosnian and Montenegrin are four national variants of the pluricentric
Serbo-Croatian language. Igor Mandić states: "During the last ten years, it has been the longest, the most serious
and most acrid discussion (…) in 21st-century Croatian culture". Inspired by that discussion, a monograph on language
and nationalism has been published. The topic of language for writers from Dalmatia and Dubrovnik prior to the 19th
century made a distinction only between speakers of Italian or Slavic, since those were the two main groups that
inhabited Dalmatian city-states at that time. Whether someone spoke Croatian or Serbian was not an important distinction
then, as the two languages were not distinguished by most speakers. This has been used as an argument to state that
Croatian literature includes not just Croatian per se, but also Serbian and other languages that are part of Serbo-Croatian.
These facts undermine the Croatian language proponents' argument that modern-day Croatian is based on a language
called Old Croatian. However, most intellectuals and writers from Dalmatia who used the Štokavian dialect and practiced
the Catholic faith saw themselves as part of a Croatian nation as far back as the mid-16th to 17th centuries, some
300 years before Serbo-Croatian ideology appeared. Their loyalty was first and foremost to Catholic Christendom,
but when they professed an ethnic identity, they referred to themselves as "Slovin" and "Illyrian" (a sort of forerunner
of Catholic baroque pan-Slavism) and Croat – these 30-odd writers over the span of c. 350 years always saw themselves
as Croats first and never as part of a Serbian nation. It should also be noted that, in the pre-national era, Catholic
religious orientation did not necessarily equate with Croat ethnic identity in Dalmatia. A Croatian follower of Vuk
Karadžić, Ivan Broz, noted that for a Dalmatian to identify oneself as a Serb was seen as foreign as identifying
oneself as Macedonian or Greek. Vatroslav Jagić made a similar observation in 1864. The luxurious and ornate representative texts
of Serbo-Croatian Church Slavonic belong to the later era, when they coexisted with the Serbo-Croatian vernacular
literature. The most notable are the "Missal of Duke Novak" from the Lika region in northwestern Croatia (1368),
"Evangel from Reims" (1395, named after the town of its final destination), Hrvoje's Missal from Bosnia and Split
in Dalmatia (1404), and the first printed book in Serbo-Croatian, the Glagolitic Missale Romanum Glagolitice (1483).
In 1954, major Serbian and Croatian writers, linguists and literary critics, backed by Matica srpska and Matica hrvatska
signed the Novi Sad Agreement, which in its first conclusion stated: "Serbs, Croats and Montenegrins share a single
language with two equal variants that have developed around Zagreb (western) and Belgrade (eastern)". The agreement
insisted on the equal status of Cyrillic and Latin scripts, and of Ekavian and Ijekavian pronunciations. It also
specified that Serbo-Croatian should be the name of the language in official contexts, while in unofficial use the
traditional Serbian and Croatian were to be retained. Matica hrvatska and Matica srpska were to work together on
a dictionary, and a committee of Serbian and Croatian linguists was asked to prepare a pravopis. During the sixties
both books were published simultaneously in Ijekavian Latin in Zagreb and Ekavian Cyrillic in Novi Sad. Yet Croatian
linguists claim that it was an act of unitarism. The evidence supporting this claim is patchy: Croatian linguist
Stjepan Babić complained that the television transmission from Belgrade always used the Latin alphabet— which was
true, but was not proof of unequal rights, but of frequency of use and prestige. Babić further complained that the
Novi Sad Dictionary (1967) listed side by side words from both the Croatian and Serbian variants wherever they differed,
which one can view as proof of careful respect for both variants, and not of unitarism. Moreover, Croatian linguists
criticized those parts of the Dictionary for being unitaristic that were written by Croatian linguists. And finally,
Croatian linguists ignored the fact that the material for the Pravopisni rječnik came from the Croatian Philological
Society. Regardless of these facts, Croatian intellectuals brought the Declaration on the Status and Name of the
Croatian Literary Language in 1967. On occasion of the publication’s 45th anniversary, the Croatian weekly journal
Forum published the Declaration again in 2012, accompanied by a critical analysis. In addition, like most Slavic
languages, the Shtokavian verb also has one of two aspects: perfective or imperfective. Most verbs come in pairs,
with the perfective verb being created out of the imperfective by adding a prefix or making a stem change. The imperfective
aspect typically indicates that the action is unfinished, in progress, or repetitive; while the perfective aspect
typically denotes that the action was completed, instantaneous, or of limited duration. Some Štokavian tenses (namely,
aorist and imperfect) favor a particular aspect (but they are rarer or absent in Čakavian and Kajkavian). Actually,
aspects "compensate" for the relative lack of tenses, because aspect of the verb determines whether the act is completed
or in progress at the time referred to. The jat-reflex rules are not without exception. For example, when short jat
is preceded by r, it developed in most Ijekavian dialects into /re/ or, occasionally, /ri/. The prefix prě- ("trans-,
over-") when long became pre- in eastern Ijekavian dialects but prije- in western dialects; in Ikavian pronunciation,
it also evolved into pre- or prije- due to potential ambiguity with pri- ("approach, come close to"). For verbs that
had -ěti in their infinitive, the past participle ending -ěl evolved into -io in Ijekavian Neoštokavian. On the other
hand, the opinion of Jagić from 1864 is argued not to have firm grounds. When Jagić says "Croatian", he refers to
a few cases referring to the Dubrovnik vernacular as ilirski (Illyrian). This was a common name for all Slavic vernaculars
in Dalmatian cities among the Roman inhabitants. In the meantime, other written monuments are found that mention
srpski, lingua serviana (= Serbian), and some that mention Croatian. By far the most competent Serbian scientist
on the Dubrovnik language issue, Milan Rešetar, who was born in Dubrovnik himself, wrote, on the basis of its language characteristics:
"The one who thinks that Croatian and Serbian are two separate languages must confess that Dubrovnik always (linguistically)
used to be Serbian." Nationalists have conflicting views about the language(s). The nationalists among the Croats
conflictingly claim either that they speak an entirely separate language from Serbs and Bosnians or that these two
peoples have, due to the longer lexicographic tradition among Croats, somehow "borrowed" their standard languages
from them.[citation needed] Bosniak nationalists claim that both Croats and Serbs have "appropriated" the Bosnian
language, since Ljudevit Gaj and Vuk Karadžić preferred the Neoštokavian-Ijekavian dialect, widely spoken in Bosnia
and Herzegovina, as the basis for language standardization, whereas the nationalists among the Serbs claim either
that any divergence in the language is artificial, or claim that the Štokavian dialect is theirs and the Čakavian
Croats'— in more extreme formulations Croats have "taken" or "stolen" their language from the Serbs.[citation needed]
In Serbia, the Serbian language is the official one, while both Serbian and Croatian are official in the province
of Vojvodina. A large Bosniak minority is present in the southwest region of Sandžak, but the "official recognition"
of the Bosnian language is moot. Bosnian is an optional course in the 1st and 2nd grades of elementary school, while it
is also in official use in the municipality of Novi Pazar. However, its nomenclature is controversial, as some
insist that it be referred to as "Bosniak" (bošnjački) rather than "Bosnian" (bosanski) (see Bosnian language
for details).
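The numeral–case agreement rule described in the grammar discussion above can be sketched as a small rule function. This is an illustrative sketch: the label strings are our own, and the treatment of the teens (11–14), which pattern with five and up, is an assumption not stated in the text above.

```python
# Sketch of Serbo-Croatian numeral-case agreement for nouns after cardinal numbers.
# After 2-4 (and compounds ending in 2-4) the noun takes a paucal form that
# coincides with the genitive singular; after 5 and up, the genitive plural.
def case_after_numeral(n: int) -> str:
    last_two = n % 100
    last = n % 10
    if last_two in (11, 12, 13, 14):   # assumption: teens pattern with 5+
        return "genitive plural"
    if last == 1:
        return "nominative singular"   # jedan behaves as an adjective
    if last in (2, 3, 4):
        return "genitive singular (paucal)"
    return "genitive plural"

print(case_after_numeral(22))  # -> genitive singular (paucal)
print(case_after_numeral(5))   # -> genitive plural
```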
On October 9, 2006 at 6:00 a.m., the network switched to a 24-hour schedule, becoming one of the last major English-language
broadcasters to transition to such a schedule. Most CBC-owned stations previously signed off the air during the early
morning hours (typically from 1:00 a.m. to 6:00 a.m.). Instead of the infomercials aired by most private stations,
or a simulcast of CBC News Network in the style of BBC One's nightly simulcast of BBC News Channel, the CBC uses
the time to air repeats, including local news, primetime series, movies and other programming from the CBC library.
Its French counterpart, Ici Radio-Canada Télé, still signs off every night. Until 1998, the network carried a variety
of American programs in addition to its core Canadian programming, directly competing with private Canadian broadcasters
such as CTV and Global. Since then, it has restricted itself to Canadian programs, a handful of British programs,
and a few American movies and off-network repeats. Since this change, the CBC has sometimes struggled to maintain
ratings comparable to those it achieved before 1995, although it has seen somewhat of a ratings resurgence in recent
years. In the 2007-08 season, hit series such as Little Mosque on the Prairie and The Border helped the network achieve
its strongest ratings performance in over half a decade. Under the CBC's current arrangement with Rogers Communications
for National Hockey League broadcast rights, Hockey Night in Canada broadcasts on CBC-owned stations and affiliates
are not technically aired over the CBC Television network, but over a separate CRTC-licensed part-time network operated
by Rogers. This was required by the CRTC as Rogers exercises editorial control and sells all advertising time during
the HNIC broadcasts, even though the CBC bug and promos for other CBC Television programs appear throughout HNIC.
The CBC's flagship newscast, The National, airs Sundays through Fridays at 10:00 p.m. EST and Saturdays at 6:00 p.m.
EST. Until October 2006, CBC owned-and-operated stations aired a second broadcast of the program at 11:00 p.m.; this
later broadcast included only the main news portion of the program, and excluded the analysis and documentary segment.
This second airing was later replaced with other programming, and as of the 2012-13 television season, was replaced
on CBC's major market stations by a half-hour late newscast. At most, a short news update airs on late
Saturday evenings. During hockey season, this update is usually found during the first intermission of the second
game of the doubleheader on Hockey Night in Canada. In addition to the mentioned late local newscasts, CBC stations
in most markets fill early evenings with local news programs, generally from 5:00 p.m. to 6:30 p.m., while most stations
also air a single local newscast on weekend evenings (comprising a supper hour broadcast on Saturdays and a late
evening newscast on Sundays). Other newscasts include parts of CBC News Now airing weekdays at 6:00 a.m. and noon.
Weekly newsmagazine the fifth estate is also a CBC mainstay, as are documentary series such as Doc Zone. One of the
most popular shows on CBC Television is the weekly Saturday night broadcast of NHL hockey games, Hockey Night in
Canada. It has been televised by the network since 1952. During the NHL lockout and subsequent cancellation of the
2004-2005 hockey season, CBC instead aired various recent and classic movies, branded as Movie Night in Canada, on
Saturday nights. Many cultural groups criticized this and suggested the CBC air games from minor hockey leagues;
the CBC responded that most such broadcast rights were already held by other groups, but it did base each Movie Night
broadcast from a different Canadian hockey venue. Other than hockey, CBC Sports properties include Toronto Raptors
basketball, Toronto FC Soccer, and various other amateur and professional events. It was also the exclusive carrier
of Canadian Curling Association events during the 2004–2005 season. Due to disappointing results and fan outrage
over many draws being carried on CBC Country Canada (now called Cottage Life Television), the association tried to
cancel its multiyear deal with the CBC signed in 2004. After the CBC threatened legal action, both sides eventually
came to an agreement under which early-round rights reverted to TSN. On June 15, 2006, the CCA announced that TSN
would obtain exclusive rights to curling broadcasts in Canada as of the 2008-09 season, shutting the CBC out of the
championship weekend for the first time in 40-plus years. Many were surprised by these changes to the CBC schedule,
which were apparently intended to attract a younger audience to the network; some suggested they might alienate the
core CBC viewership. Another note of criticism was made when the network decided to move The National in some time
zones to simulcast the American version of The One over the summer. This later became a moot point, as The One was
taken off the air after two weeks after extremely low American and Canadian ratings, and the newscast resumed its
regular schedule. Beginning in 2005, the CBC has contributed production funds for the BBC Wales revival of Doctor
Who, for which it received a special credit at the end of each episode. This arrangement continued until the end
of the fourth season, broadcast in 2008. The CBC similarly contributed to the first season of the spin-off series, Torchwood.
More recently, the network has also begun picking up Canadian rights to some Australian series, including the drama
series Janet King and Love Child, and the comedy-drama series Please Like Me. On March 5, 2005, CBC Television launched
a high definition simulcast of its Toronto (CBLT-DT) and Montreal (CBMT-DT) stations. Since that time, the network
has also launched HD simulcasts in Vancouver (CBUT-DT), Ottawa (CBOT-DT), Edmonton (CBXT-DT), Calgary (CBRT-DT),
Halifax (CBHT-DT), Windsor (CBET-DT), Winnipeg (CBWT-DT) and St. John's (CBNT-DT). CBC HD is available nationally
via satellite and on digital cable as well as for free over-the-air using a regular TV antenna and a digital tuner
(included in most new television sets). Most CBC television stations, including those in
the major cities, are owned and operated by the CBC itself. CBC O&O stations operate as a mostly seamless national
service with few deviations from the main network schedule, although there are some regional differences from time
to time. For on-air identification, most CBC stations use the CBC brand rather than their call letters, not identifying
themselves specifically until sign-on or sign-off (though some, like Toronto's CBLT, do not ID themselves at all
except through PSIP). All CBC O&O stations have a standard call letter naming convention, in that the first two letters
are "CB" (an ITU prefix allocated not to Canada, but to Chile) and the last letter is "T". Only the third letter
varies from market to market; however, that letter is typically the same as the third letter of the CBC Radio One
and CBC Radio 2 stations in the same market. An exception to this rule are the CBC North stations in Yellowknife,
Whitehorse and Iqaluit, whose call signs begin with "CF" due to their historic association with the CBC's Frontier
Coverage Package prior to the advent of microwave and satellite broadcasting. Some stations that broadcast from smaller
cities are private affiliates of the CBC, that is, stations which are owned by commercial broadcasters but predominantly
incorporate CBC programming within their schedules. Such stations generally follow the CBC schedule, airing a minimum
40 hours per week of network programming. However, they may opt out of some CBC programming in order to air locally
produced programs, syndicated series or programs purchased from other broadcasters, such as CTV Two, which do not
have a broadcast outlet in the same market. In these cases, the CBC programming being displaced may be broadcast
at a different time than the network, or may not be broadcast on the station at all. Most private affiliates generally
opt out of CBC's afternoon schedule and Thursday night arts programming. Private affiliates carry the 10 p.m. broadcast
of The National as a core part of the CBC schedule, but generally omitted the 11 p.m. repeat (which is no longer
broadcast). Most private affiliates produce their own local newscasts for a duration of at least 35 minutes. Some
of the private affiliates have begun adding CBC's overnight programming to their schedules since the network began
broadcasting 24 hours a day. Private CBC affiliates are not as common as they were in the past, as many such stations
have been purchased either by the CBC itself or by Canwest Global or CHUM Limited, respectively becoming E! or A-Channel
(later A, now CTV Two) stations. One private CBC affiliate, CHBC-TV in Kelowna, joined E! (then known as CH) on February
27, 2006. When a private CBC affiliate reaffiliates with another network, the CBC has normally added a retransmitter
of its nearest O&O station to ensure that CBC service is continued. However, due to an agreement between CHBC and
CFJC-TV in Kamloops, CFJC also disaffiliated from the CBC on February 27, 2006, but no retransmitters were installed
in the licence area. Former private CBC affiliates CKPG-TV Prince George and CHAT-TV Medicine Hat disaffiliated on
August 31, 2008 and joined E!, but the CBC announced it will not add new retransmitters to these areas. Incidentally,
CFJC, CKPG and CHAT are all owned by an independent media company, Jim Pattison Group. With the closure of E! and
other changes in the media landscape, several former CBC affiliates have since joined City or Global, or closed altogether.
According to filings to the Canadian Radio-television and Telecommunications Commission (CRTC) by Thunder Bay Electronics
(owner of CBC's Thunder Bay affiliate CKPR-DT) and Bell Media (owner of CBC affiliates CFTK-TV in Terrace and CJDC-TV
in Dawson Creek),[citation needed] the CBC informed them that it will not extend its association with any of its
private affiliates beyond August 31, 2011. Incidentally, that was also the date for analogue to digital transition
in Canada. Given recent practice and the CBC's decision not to convert any retransmitters to digital, even in markets
with populations in the hundreds of thousands, it is not expected that the CBC will open new transmitters to replace
its affiliates, and indeed may pare back its existing transmitter network. However, in March 2011, CKPR announced
that it had come to a programming agreement with the CBC, under which the station would continue to provide CBC programming
in Thunder Bay for a period of five years. On March 16, 2012, Astral Media announced the sale of its assets to Bell
Media, owners of CTV and CTV Two, for $3.38 billion with CFTK and CJDC included in the acquisition. Whether the stations
will remain CBC affiliates or become owned-and-operated stations of CTV or CTV Two following the completion of the
merger is undetermined. CBC Television stations can be received in many United States communities along the Canadian
border over-the-air and have a significant audience in those areas. Such a phenomenon can also take place within
Great Lakes communities such as Ashtabula, Ohio, which received programming from the CBC's London, Ontario, transmitter,
based upon prevailing atmospheric conditions over Lake Erie. As of September 2010 CBC shut down its analogue transmitter
and decided not to replace it with a digital transmitter. As a result, there is now a significant gap in CBC's over-the-air coverage in southwestern Ontario: CBC Toronto and CBC Windsor are both over 100 miles from London, Ontario, and out of range for even the largest antennas[citation needed]. CBC's sports coverage has also attained high viewership
in border markets, including its coverage of the NHL's Stanley Cup Playoffs, which was generally considered to be
more complete and consistent than coverage by other networks such as NBC. Its coverage of the Olympic Games also
found a significant audience in border regions, primarily due to the fact that CBC aired more events live than NBC's
coverage, which had been criticized in recent years for tape-delaying events to air in primetime, even when the event was held in a Pacific Time Zone market during Eastern primetime hours (where it would still be delayed
for West coast primetime). While its fellow Canadian broadcasters converted most of their transmitters to digital
by the Canadian digital television transition deadline of August 31, 2011, CBC converted only about half of the analogue
transmitters in mandatory areas to digital (15 of 28 markets with CBC Television stations, and 14 of 28 markets with
Télévision de Radio-Canada stations). Citing financial difficulties, the corporation published digital transition plans under which none of its analogue retransmitters in mandatory markets would be converted to digital by the deadline. Under this plan, communities that receive analogue signals by rebroadcast transmitters in
mandatory markets would lose their over-the-air signals as of the deadline. Rebroadcast transmitters account for
23 of the 48 CBC and Radio-Canada transmitters in mandatory markets. Mandatory markets losing both CBC and Radio-Canada
over-the-air signals include London, Ontario (metropolitan area population 457,000) and Saskatoon, Saskatchewan (metro
area population 257,000). In both of those markets, the corporation's television transmitters are the only ones that
were not planned to be converted to digital by the deadline. Because rebroadcast transmitters were not planned to
be converted to digital, many markets stood to lose over-the-air coverage from CBC or Radio-Canada, or both. As a
result, only seven of the markets subject to the August 31, 2011 transition deadline were planned to have both CBC
and Radio-Canada in digital, and 13 other markets were planned to have either CBC or Radio-Canada in digital. In
mid-August 2011, the CRTC granted the CBC an extension, until August 31, 2012, to continue operating its analogue
transmitters in markets subject to the August 31, 2011 transition deadline. This CRTC decision prevented many markets
subject to the transition deadline from losing signals for CBC or Radio-Canada, or both at the transition deadline.
At the transition deadline, Barrie, Ontario lost both CBC and Radio-Canada signals as the CBC did not request that
the CRTC allow these transmitters to continue operating. In markets where a digital transmitter was installed, existing
coverage areas were not necessarily maintained. For instance, the CBC implemented a digital transmitter covering
Fredericton, New Brunswick in the place of the existing transmitter covering Saint John, New Brunswick and Fredericton,
and decided to maintain analogue service to Saint John. According to CBC's application for this transmitter to the
CRTC, the population served by the digital transmitter would be 113,930 people versus 303,465 served by the existing
analogue transmitter. In Victoria, the replacement of the Vancouver analogue transmitters with digital ones allowed only some northeastern parts of the metropolitan area (total population 330,000) to receive either CBC or
Radio-Canada. CBC announced on April 4, 2012, that it will shut down all of its approximately 620 analogue television
transmitters on July 31, 2012 with no plans to install digital transmitters in their place, thus reducing the total
number of the corporation's television transmitters across the country to 27. According to the CBC, this would
reduce the corporation's yearly costs by $10 million. No plans have been announced to use subchannels to maintain
over-the-air signals for both CBC and Radio-Canada in markets where the corporation has one digital transmitter.
In fact, in its CRTC application to shut down all of its analogue television transmitters, the CBC communicated its
opposition to use of subchannels, citing costs, amongst other reasons. On August 6, 2010, the CBC issued a press
release stating that due to financial reasons, the CBC and Radio-Canada would only transition 27 transmitters total,
one in each market where there was an originating station (i.e. a CBC or Radio-Canada television station located
in that market). Further, the CBC stated in the release that only 15 of the transmitters would be in place by August
31, 2011 due to lack of available funds, and that the remainder would not be on the air until as late as August 31,
2012. Additionally, the CBC stated in the release that it was asking the CRTC for permission to continue broadcasting
in analogue until the identified transmitters for transition were up and running. At the time of the press release,
only eight of the corporation's transmitters (four CBC and four Radio-Canada) were broadcasting in digital. On November
30, 2010, CBC's senior director of regulatory affairs issued a letter to the CRTC regarding CBC's plans for transitioning
to digital. The letter states, "CBC/Radio-Canada will not be converting its analogue retransmitters in mandatory
markets to digital after August 31, 2011." On December 16, 2010, some months after the CRTC issued a bulletin reminding
broadcasters that analog transmitters had to be shut off by the deadline in mandatory markets, the CBC revised the
documents accompanying its August 6, 2010 news release to state that it had the money for and is striving to transition
all 27 transmitters by August 31, 2011. On March 23, 2011, the CRTC rejected an application by the CBC to install
a digital transmitter serving Fredericton, New Brunswick in place of the analogue transmitter serving Fredericton and Saint John, New Brunswick, which would have served only 37.5% of the population served by the existing analogue
transmitter. The CBC issued a press release stating "CBC/Radio-Canada intends to re-file its application with the
CRTC to provide more detailed cost estimates that will allow the Commission to better understand the unfeasibility
of replicating the Corporation’s current analogue coverage." The press release further added that the CBC suggests
coverage could be maintained if the CRTC were to "allow CBC Television to continue providing the analogue service
it offers today – much in the same way the Commission permitted recently in the case of Yellowknife, Whitehorse and
Iqaluit." On August 18, 2011, the CRTC issued a decision that allows CBC's mandatory market rebroadcasting transmitters
in analogue to remain on-air until August 31, 2012. Before that deadline, CBC's licence renewal process would take
place and CBC's digital transition plans would be examined as part of that process. The requirement remains for all
of CBC's full-power transmitters occupying channels 52 to 69 to either relocate to channels 2 to 51 or become low-power
transmitters. In some cases, CBC has opted to reduce the power of existing transmitters to low-power transmitters,
which will result in signal loss for some viewers. On July 17, 2012, the CRTC approved the shut down of CBC's analogue
transmitters, noting that "while the Commission has the discretion to refuse to revoke broadcasting licences, even
on application from a licensee, it cannot direct the CBC or any other broadcaster to continue to operate its stations
and transmitters." On July 31, 2012, at around 11:59 p.m. in each time zone, the remaining 620 analogue transmitters
were shut down, leaving the network with 27 digital television transmitters across the country, and some transmitters
operated by some affiliated stations.
The Appalachian Mountains (/ˌæpəˈleɪʃᵻn/ or /ˌæpəˈlætʃᵻn/,[note 1] French: les Appalaches), often called the Appalachians,
are a system of mountains in eastern North America. The Appalachians first formed roughly 480 million years ago during
the Ordovician Period and once reached elevations similar to those of the Alps and the Rocky Mountains before they
were eroded. The Appalachian chain is a barrier to east-west travel, as it forms a series of alternating ridgelines and valleys oriented across the path of any road running east-west. Definitions vary on the precise boundaries of the
Appalachians. The United States Geological Survey (USGS) defines the Appalachian Highlands physiographic division
as consisting of thirteen provinces: the Atlantic Coast Uplands, Eastern Newfoundland Atlantic, Maritime Acadian
Highlands, Maritime Plain, Notre Dame and Mégantic Mountains, Western Newfoundland Mountains, Piedmont, Blue Ridge,
Valley and Ridge, Saint Lawrence Valley, Appalachian Plateaus, New England province, and the Adirondack provinces.
A common variant definition does not include the Adirondack Mountains, which geologically belong to the Grenville
Orogeny and have a different geological history from the rest of the Appalachians. The range is mostly located in
the United States but extends into southeastern Canada, forming a zone from 100 to 300 mi (160 to 480 km) wide, running
from the island of Newfoundland 1,500 mi (2,400 km) southwestward to central Alabama in the United States.
The range covers parts of the islands of Saint Pierre and Miquelon, which comprise an overseas territory of France.
The system is divided into a series of ranges, with the individual mountains averaging around 3,000 ft (910 m). The
highest of the group is Mount Mitchell in North Carolina at 6,684 feet (2,037 m), which is the highest point in the
United States east of the Mississippi River. The term Appalachian refers to several different regions associated
with the mountain range. Most broadly, it refers to the entire mountain range with its surrounding hills and the
dissected plateau region. The term is often used more restrictively to refer to regions in the central and southern
Appalachian Mountains, usually including areas in the states of Kentucky, Tennessee, Virginia, Maryland, West Virginia,
and North Carolina, as well as sometimes extending as far south as northern Alabama, Georgia and western South Carolina,
and as far north as Pennsylvania, southern Ohio and parts of southern upstate New York. While exploring inland along
the northern coast of Florida in 1528, the members of the Narváez expedition, including Álvar Núñez Cabeza de Vaca,
found a Native American village near present-day Tallahassee, Florida whose name they transcribed as Apalchen or
Apalachen [a.paˈla.tʃɛn]. The name was soon altered by the Spanish to Apalachee and used as a name for the tribe
and region spreading well inland to the north. Pánfilo de Narváez's expedition first entered Apalachee territory
on June 15, 1528, and applied the name. Now spelled "Appalachian," it is the fourth-oldest surviving European place-name
in the US. In addition to the true folded mountains, known as the ridge and valley province, the area of dissected
plateau to the north and west of the mountains is usually grouped with the Appalachians. This includes the Catskill
Mountains of southeastern New York, the Poconos in Pennsylvania, and the Allegheny Plateau of southwestern New York,
western Pennsylvania, eastern Ohio and northern West Virginia. This same plateau is known as the Cumberland Plateau
in southern West Virginia, eastern Kentucky, western Virginia, eastern Tennessee, and northern Alabama. The Appalachian
belt includes, with the ranges enumerated above, the plateaus sloping southward to the Atlantic Ocean in New England,
and south-eastward to the border of the coastal plain through the central and southern Atlantic states; and on the
north-west, the Allegheny and Cumberland plateaus declining toward the Great Lakes and the interior plains. A remarkable
feature of the belt is the longitudinal chain of broad valleys, including The Great Appalachian Valley, which in
the southerly sections divides the mountain system into two unequal portions, but in the northernmost section lies west of
all the ranges possessing typical Appalachian features, and separates them from the Adirondack group. The mountain
system has no axis of dominating altitudes, but in every portion the summits rise to rather uniform heights, and,
especially in the central section, the various ridges and intermontane valleys have the same trend as the system
itself. None of the summits reaches the region of perpetual snow. Mountains of the Long Range in Newfoundland reach
heights of nearly 3,000 ft (900 m). In the Chic-Choc and Notre Dame mountain ranges in Quebec, the higher summits
rise to about 4,000 ft (1,200 m) elevation. Isolated peaks and small ranges in Nova Scotia and New Brunswick vary
from 1,000 to 2,700 ft (300 to 800 m). In Maine several peaks exceed 4,000 ft (1,200 m), including Mount Katahdin
at 5,267 feet (1,605 m). In New Hampshire, many summits rise above 5,000 ft (1,500 m), including Mount Washington
in the White Mountains at 6,288 ft (1,917 m), Adams at 5,771 ft (1,759 m), Jefferson at 5,712 ft (1,741 m), Monroe
at 5,380 ft (1,640 m), Madison at 5,367 ft (1,636 m), Lafayette at 5,249 feet (1,600 m), and Lincoln at 5,089 ft
(1,551 m). In the Green Mountains the highest point, Mt. Mansfield, is 4,393 ft (1,339 m) in elevation; others include
Killington Peak at 4,226 ft (1,288 m), Camel's Hump at 4,083 ft (1,244 m), Mt. Abraham at 4,006 ft (1,221 m), and
a number of other heights exceeding 3,000 ft (900 m). In Pennsylvania, there are over sixty summits that rise over
2,500 ft (800 m); the summits of Mount Davis and Blue Knob rise over 3,000 ft (900 m). In Maryland, Eagle Rock and
Dans Mountain are conspicuous points reaching 3,162 ft (964 m) and 2,882 ft (878 m) respectively. On the same side
of the Great Valley, south of the Potomac, are the Pinnacle 3,007 feet (917 m) and Pidgeon Roost 3,400 ft (1,000
m). In West Virginia, more than 150 peaks rise above 4,000 ft (1,200 m), including Spruce Knob 4,863 ft (1,482 m),
the highest point in the Allegheny Mountains. A number of other points in the state rise above 4,800 ft (1,500 m).
Snowshoe Mountain at Thorny Flat 4,848 ft (1,478 m) and Bald Knob 4,842 ft (1,476 m) are among the more notable peaks
in West Virginia. The Blue Ridge Mountains, rising in southern Pennsylvania and there known as South Mountain, attain
elevations of about 2,000 ft (600 m) in that state. South Mountain achieves its highest point just below the Mason-Dixon
line in Maryland at Quirauk Mountain 2,145 ft (654 m) and then diminishes in height southward to the Potomac River.
Once in Virginia the Blue Ridge again reaches 2,000 ft (600 m) and higher. In the Virginia Blue Ridge, the following
are some of the highest peaks north of the Roanoke River: Stony Man 4,031 ft (1,229 m), Hawksbill Mountain 4,066
ft (1,239 m), Apple Orchard Mountain 4,225 ft (1,288 m) and Peaks of Otter 4,001 and 3,875 ft (1,220 and 1,181 m).
South of the Roanoke River, along the Blue Ridge, are Virginia's highest peaks including Whitetop Mountain 5,520
ft (1,680 m) and Mount Rogers 5,729 ft (1,746 m), the highest point in the Commonwealth. Before the French and Indian
War, the Appalachian Mountains lay along the indeterminate boundary between Britain's colonies along the Atlantic and
French areas centered in the Mississippi basin. After the French and Indian War, the Proclamation of 1763 restricted
settlement for Great Britain's thirteen original colonies in North America to east of the summit line of the mountains
(except in the northern regions where the Great Lakes formed the boundary). Although the line was adjusted several
times to take frontier settlements into account and was impossible to enforce as law, it was strongly resented by
backcountry settlers throughout the Appalachians. The Proclamation Line can be seen as one of the grievances which
led to the American Revolutionary War. Many frontier settlers held that the defeat of the French opened the land
west of the mountains to English settlement, only to find settlement barred by the British King's proclamation. The
backcountry settlers who fought in the Illinois campaign of George Rogers Clark were motivated to secure their settlement
of Kentucky. In eastern Pennsylvania the Great Appalachian Valley, or Great Valley, was accessible by reason of a
broad gateway between the end of South Mountain and the Highlands, and many Germans and Moravians settled here between
the Susquehanna and Delaware Rivers forming the Pennsylvania Dutch community, some of whom even now speak a unique
American dialect of German known as the "Pennsylvania German language" or "Pennsylvania Dutch." These latecomers
to the New World were forced to the frontier to find cheap land. With followers of German, English, and
Scots-Irish origin, they worked their way southward and soon occupied all of the Shenandoah Valley, ceded by the
Iroquois, and the upper reaches of the Great Valley tributaries of the Tennessee River, ceded by the Cherokee. Characteristic
birds of the forest are wild turkey (Meleagris gallopavo silvestris), ruffed grouse (Bonasa umbellus), mourning dove
(Zenaida macroura), common raven (Corvus corax), wood duck (Aix sponsa), great horned owl (Bubo virginianus), barred
owl (Strix varia), screech owl (Megascops asio), red-tailed hawk (Buteo jamaicensis), red-shouldered hawk (Buteo
lineatus), and northern goshawk (Accipiter gentilis), as well as a great variety of "songbirds" (Passeriformes),
particularly the warblers. Animals that characterize the Appalachian forests include five species of tree squirrels.
The most commonly seen is the low to moderate elevation eastern gray squirrel (Sciurus carolinensis). Occupying similar
habitat is the slightly larger fox squirrel (Sciurus niger) and the much smaller southern flying squirrel (Glaucomys
volans). More characteristic of cooler northern and high elevation habitat is the red squirrel (Tamiasciurus hudsonicus),
whereas the Appalachian northern flying squirrel (Glaucomys sabrinus fuscus), which closely resembles the southern
flying squirrel, is confined to northern hardwood and spruce-fir forests. Drier and rockier uplands and ridges are
occupied by oak-chestnut type forests dominated by a variety of oaks (Quercus spp.), hickories (Carya spp.) and,
in the past, by the American chestnut (Castanea dentata). The American chestnut was virtually eliminated as a canopy
species by the introduced fungal chestnut blight (Cryphonectria parasitica), but lives on as sapling-sized sprouts
that originate from roots, which are not killed by the fungus. In present-day forest canopies chestnut has been largely
replaced by oaks. The oak forests of the southern and central Appalachians consist largely of black, northern red,
white, chestnut and scarlet oaks (Quercus velutina, Q. rubra, Q. alba, Q. prinus and Q. coccinea) and hickories,
such as the pignut (Carya glabra) in particular. The richest forests, which grade into mesic types, usually in coves
and on gentle slopes, have dominantly white and northern red oaks, while the driest sites are dominated by chestnut
oak, or sometimes by scarlet or northern red oaks. In the northern Appalachians the oaks, except for white and northern
red, drop out, while the latter extends farthest north. Chief summits in the southern section of the Blue Ridge are
located along two main crests—the Western or Unaka Front along the Tennessee-North Carolina border and the Eastern
Front in North Carolina—or one of several "cross ridges" between the two main crests. Major subranges of the Eastern
Front include the Black Mountains, Great Craggy Mountains, and Great Balsam Mountains, and its chief summits include
Grandfather Mountain 5,964 ft (1,818 m) near the Tennessee-North Carolina border, Mount Mitchell 6,684 ft (2,037
m) in the Blacks, and Black Balsam Knob 6,214 ft (1,894 m) and Cold Mountain 6,030 ft (1,840 m) in the Great Balsams.
The Western Blue Ridge Front is subdivided into the Unaka Range, the Bald Mountains, the Great Smoky Mountains, and
the Unicoi Mountains, and its major peaks include Roan Mountain 6,285 ft (1,916 m) in the Unakas, Big Bald 5,516
ft (1,681 m) and Max Patch 4,616 ft (1,407 m) in the Bald Mountains, Clingmans Dome 6,643 ft (2,025 m), Mount Le
Conte 6,593 feet (2,010 m), and Mount Guyot 6,621 ft (2,018 m) in the Great Smokies, and Big Frog Mountain 4,224
ft (1,287 m) near the Tennessee-Georgia-North Carolina border. Prominent summits in the cross ridges include Waterrock
Knob (6,292 ft (1,918 m)) in the Plott Balsams. Across northern Georgia, numerous peaks exceed 4,000 ft (1,200 m),
including Brasstown Bald, the state's highest, at 4,784 ft (1,458 m) and 4,696 ft (1,431 m) Rabun Bald. There are
many geological issues concerning the rivers and streams of the Appalachians. In spite of the existence of the Great
Appalachian Valley, many of the main rivers are transverse to the mountain system axis. The drainage divide of the
Appalachians follows a tortuous course which crosses the mountainous belt just north of the New River in Virginia.
South of the New River, rivers head into the Blue Ridge, cross the higher Unakas, receive important tributaries from
the Great Valley, and traversing the Cumberland Plateau in spreading gorges (water gaps), escape by way of the Cumberland
River and the Tennessee River to the Ohio River and the Mississippi River, and thence to the Gulf of Mexico.
In the central section, north of the New River, the rivers, rising in or just beyond the Valley Ridges, flow through
great gorges to the Great Valley, and then across the Blue Ridge to tidal estuaries penetrating the coastal plain
via the Roanoke River, James River, Potomac River, and Susquehanna River. A look at rocks exposed in today's Appalachian
mountains reveals elongated belts of folded and thrust faulted marine sedimentary rocks, volcanic rocks and slivers
of ancient ocean floor, providing strong evidence that these rocks were deformed during plate collision. The
birth of the Appalachian ranges, some 480 Ma, marks the first of several mountain-building plate collisions that
culminated in the construction of the supercontinent Pangaea with the Appalachians near the center. Because North
America and Africa were connected, the Appalachians formed part of the same mountain chain as the Little Atlas in
Morocco. This mountain range, known as the Central Pangean Mountains, extended into Scotland, from the North America/Europe
collision (See Caledonian orogeny). During the middle Ordovician Period (about 496-440 Ma), a change in plate motions
set the stage for the first Paleozoic mountain-building event (Taconic orogeny) in North America. The once-quiet
Appalachian passive margin changed to a very active plate boundary when a neighboring oceanic plate, the Iapetus,
collided with and began sinking beneath the North American craton. With the birth of this new subduction zone, the
early Appalachians were born. Along the continental margin, volcanoes grew, coincident with the initiation of subduction.
Thrust faulting uplifted and warped older sedimentary rock laid down on the passive margin. As mountains rose, erosion
began to wear them down. Streams carried rock debris down slope to be deposited in nearby lowlands. The Taconic Orogeny
was just the first of a series of mountain building plate collisions that contributed to the formation of the Appalachians,
culminating in the collision of North America and Africa (see Appalachian orogeny). By the end of the Mesozoic era,
the Appalachian Mountains had been eroded to an almost flat plain. It was not until the region was uplifted during
the Cenozoic Era that the distinctive topography of the present formed. Uplift rejuvenated the streams, which rapidly
responded by cutting downward into the ancient bedrock. Some streams flowed along weak layers that define the folds
and faults created many millions of years earlier. Other streams downcut so rapidly that they cut right across the
resistant folded rocks of the mountain core, carving canyons across rock layers and geologic structures. The Appalachian
Mountains contain major deposits of anthracite coal as well as bituminous coal. In the folded mountains the coal
is in metamorphosed form as anthracite, represented by the Coal Region of northeastern Pennsylvania. The bituminous
coal fields of western Pennsylvania, western Maryland, southeastern Ohio, eastern Kentucky, southwestern Virginia,
and West Virginia contain the sedimentary form of coal. The mountain top removal method of coal mining, in which
entire mountain tops are removed, is currently threatening vast areas and ecosystems of the Appalachian Mountain
region. The dominant northern and high elevation conifer is the red spruce (Picea rubens), which grows from near
sea level to above 4,000 ft (1,200 m) above sea level (asl) in northern New England and southeastern Canada. It also
grows southward along the Appalachian crest to the highest elevations of the southern Appalachians, as in North Carolina
and Tennessee. In the central Appalachians it is usually confined above 3,000 ft (900 m) asl, except for a few cold
valleys in which it reaches lower elevations. In the southern Appalachians it is restricted to higher elevations.
Another species is the black spruce (Picea mariana), which extends farthest north of any conifer in North America,
is found at high elevations in the northern Appalachians, and in bogs as far south as Pennsylvania. The Appalachians
are also home to two species of fir, the boreal balsam fir (Abies balsamea), and the southern high elevation endemic,
Fraser fir (Abies fraseri). Fraser fir is confined to the highest parts of the southern Appalachian Mountains, where
along with red spruce it forms a fragile ecosystem known as the Southern Appalachian spruce-fir forest. Fraser fir
rarely occurs below 5,500 ft (1,700 m), and becomes the dominant tree type at 6,200 ft (1,900 m). By contrast, balsam
fir is found from near sea level to the tree line in the northern Appalachians, but ranges only as far south as Virginia
and West Virginia in the central Appalachians, where it is usually confined above 3,900 ft (1,200 m) asl, except
in cold valleys. Curiously, it is associated with oaks in Virginia. The balsam fir of Virginia and West Virginia
is thought by some to be a natural hybrid between the more northern variety and Fraser fir. While red spruce is common
in both upland and bog habitats, balsam fir, as well as black spruce and tamarack, are more characteristic of the
latter. However, balsam fir also does well in soils with a pH as high as 6. Eastern or Canada hemlock (Tsuga canadensis)
is another important evergreen needle-leaf conifer that grows along the Appalachian chain from north to south, but
is confined to lower elevations than red spruce and the firs. It generally occupies richer and less acidic soils
than the spruce and firs and is characteristic of deep, shaded and moist mountain valleys and coves. It is, unfortunately,
subject to the hemlock woolly adelgid (Adelges tsugae), an introduced insect, that is rapidly extirpating it as a
forest tree. Less abundant, and restricted to the southern Appalachians, is Carolina hemlock (Tsuga caroliniana).
Like Canada hemlock, this tree suffers severely from the hemlock woolly adelgid. Several species of pines characteristic
of the Appalachians are eastern white pine (Pinus strobus), Virginia pine (Pinus virginiana), pitch pine (Pinus rigida), Table Mountain pine (Pinus pungens) and shortleaf pine (Pinus echinata). Red pine (Pinus resinosa) is a
boreal species that forms a few high elevation outliers as far south as West Virginia. All of these species except
white pine tend to occupy sandy, rocky, poor soil sites, which are mostly acidic in character. White pine, a large
species valued for its timber, tends to do best in rich, moist soil, either acidic or alkaline in character. Pitch
pine is also at home in acidic, boggy soil, and Table Mountain pine may occasionally be found in this habitat as
well. Shortleaf pine is generally found in warmer habitats and at lower elevations than the other species. All the
species listed do best in open or lightly shaded habitats, although white pine also thrives in shady coves, valleys,
and on floodplains. The Appalachians are characterized by a wealth of large, beautiful deciduous broadleaf (hardwood)
trees. Their occurrences are best summarized and described in E. Lucy Braun's 1950 classic, Deciduous Forests of
Eastern North America (Macmillan, New York). The most diverse and richest forests are the mixed mesophytic or medium
moisture types, which are largely confined to rich, moist montane soils of the southern and central Appalachians,
particularly in the Cumberland and Allegheny Mountains, but also thrive in the southern Appalachian coves. Characteristic
canopy species are white basswood (Tilia heterophylla), yellow buckeye (Aesculus octandra), sugar maple (Acer saccharum),
American beech (Fagus grandifolia), tuliptree (Liriodendron tulipifera), white ash (Fraxinus americana) and yellow
birch (Betula alleganiensis). Other common trees are red maple (Acer rubrum), shagbark and bitternut hickories (Carya
ovata and C. cordiformis) and black or sweet birch (Betula lenta). Small understory trees and shrubs include flowering
dogwood (Cornus florida), hophornbeam (Ostrya virginiana), witch-hazel (Hamamelis virginiana) and spicebush (Lindera
benzoin). There are also hundreds of perennial and annual herbs, among them such herbal and medicinal plants as American
ginseng (Panax quinquefolius), goldenseal (Hydrastis canadensis), bloodroot (Sanguinaria canadensis) and black cohosh
(Cimicifuga racemosa). The foregoing trees, shrubs and herbs are also more widely distributed in less rich mesic
forests that generally occupy coves, stream valleys and flood plains throughout the southern and central Appalachians
at low and intermediate elevations. In the northern Appalachians and at higher elevations of the central and southern
Appalachians these diverse mesic forests give way to less diverse "northern hardwoods" with canopies dominated only
by American beech, sugar maple, American basswood (Tilia americana) and yellow birch and with far fewer species of
shrubs and herbs. The oak forests generally lack the diverse small tree, shrub and herb layers of mesic forests.
Shrubs are generally ericaceous, and include the evergreen mountain laurel (Kalmia latifolia), various species of
blueberries (Vaccinium spp.), black huckleberry (Gaylussacia baccata), a number of deciduous rhododendrons (azaleas),
and smaller heaths such as teaberry (Gaultheria procumbens) and trailing arbutus (Epigaea repens). The evergreen
great rhododendron (Rhododendron maximum) is characteristic of moist stream valleys. These occurrences are in line
with the prevailing acidic character of most oak forest soils. In contrast, the much rarer chinquapin oak (Quercus
muehlenbergii) demands alkaline soils and generally grows where limestone rock is near the surface. Hence no ericaceous
shrubs are associated with it. Eastern deciduous forests are subject to a number of serious insect and disease outbreaks.
Among the most conspicuous is that of the introduced gypsy moth (Lymantria dispar), which infests primarily oaks,
causing severe defoliation and tree mortality. It can, however, also benefit the forest by eliminating weak individuals and by creating habitat through the accumulation of dead wood. Because
hardwoods sprout so readily, this moth is not as harmful as the hemlock woolly adelgid. Perhaps more serious is the
introduced beech bark disease complex, which includes both a scale insect (Cryptococcus fagisuga) and fungal components.
As familiar as squirrels are the eastern cottontail rabbit (Sylvilagus floridanus) and the white-tailed deer (Odocoileus
virginianus). The latter in particular has greatly increased in abundance as a result of the extirpation of the eastern
wolf (Canis lupus lycaon) and the North American cougar. This has led to the overgrazing and browsing of many plants
of the Appalachian forests, as well as destruction of agricultural crops. Other deer include the moose (Alces alces), found only in the north, and the elk (Cervus canadensis), which, although once extirpated, is now making a comeback,
through transplantation, in the southern and central Appalachians. In Quebec, the Chic-Chocs host the only population
of caribou (Rangifer tarandus) south of the St. Lawrence River. An additional species that is common in the north
but extends its range southward at high elevations to Virginia and West Virginia is the varying or snowshoe hare
(Lepus americanus). However, these central Appalachian populations are scattered and very small. Of great importance
are the many species of salamanders and, in particular, the lungless species (Family Plethodontidae) that live in
great abundance concealed by leaves and debris, on the forest floor. Most frequently seen, however, is the eastern
or red-spotted newt (Notophthalmus viridescens), whose terrestrial eft form is often encountered on the open, dry
forest floor. It has been estimated that salamanders represent the largest class of animal biomass in the Appalachian
forests. Frogs and toads are of lesser diversity and abundance, but the wood frog (Rana sylvatica) is, like the eft,
commonly encountered on the dry forest floor, while a number of species of small frogs, such as spring peepers (Pseudacris
crucifer), enliven the forest with their calls. Salamanders and other amphibians contribute greatly to nutrient cycling
through their consumption of small life forms on the forest floor and in aquatic habitats. Although reptiles are
less abundant and diverse than amphibians, a number of snakes are conspicuous members of the fauna. One of the largest
is the non-venomous black rat snake (Elaphe obsoleta obsoleta), while the common garter snake (Thamnophis sirtalis)
is among the smallest but most abundant. The American copperhead (Agkistrodon contortrix) and the timber rattler
(Crotalus horridus) are venomous pit vipers. There are few lizards, but the broad-headed skink (Eumeces laticeps),
at up to 13 in (33 cm) in length, and an excellent climber and swimmer, is one of the largest and most spectacular
in appearance and action. The most common turtle is the eastern box turtle (Terrapene carolina carolina), which is
found in both upland and lowland forests in the central and southern Appalachians. Prominent among aquatic species
is the large common snapping turtle (Chelydra serpentina), which occurs throughout the Appalachians. For a century,
the Appalachians were a barrier to the westward expansion of the British colonies. The continuity of the mountain
system, the bewildering multiplicity of its succeeding ridges, the tortuous courses and roughness of its transverse
passes, a heavy forest, and dense undergrowth all conspired to hold the settlers on the seaward-sloping plateaus
and coastal plains. Only by way of the Hudson and Mohawk Valleys, Cumberland Gap, the Wachesa Trail, and round about the southern termination of the system were there easy routes to the interior of the country,
and these were long closed by powerful Native American tribes such as the Iroquois, Creek, and Cherokee, among others.
Expansion was also blocked by the alliances the British Empire had forged with Native American tribes, the proximity
of the Spanish colonies in the south and French activity throughout the interior. By 1755, the obstacle to westward
expansion had been thus reduced by half; outposts of the English colonists had penetrated the Allegheny and Cumberland
plateaus, threatening French monopoly in the transmontane region, and a conflict became inevitable. When the colonists made common cause against the French for control of the Ohio valley, their unsuspected strength was revealed, and the successful ending of the French and Indian War extended England's territory to the Mississippi.
To this strength the geographic isolation enforced by the Appalachian mountains had been a prime contributor. The
confinement of the colonies between an ocean and a mountain wall led to the fullest occupation of the coastal border
of the continent, which was possible under existing conditions of agriculture, conducing to a community of purpose,
a political and commercial solidarity, which would not otherwise have been developed. As early as 1700 it was possible
to ride from Portland, Maine, to southern Virginia, sleeping each night at some considerable village. In contrast
to this complete industrial occupation, the French territory was held by a small and very scattered population, its
extent and openness adding materially to the difficulties of a disputed tenure. Bearing the brunt of this contest
as they did, the colonies were undergoing preparation for the subsequent struggle with the home government. Unsupported
by shipping, the American armies fought toward the sea with the mountains at their back protecting them against the British leagued with the Native Americans. The few settlements beyond the Great Valley were left to defend themselves, debarred
from general participation in the conflict by reason of their position.
The company originated in 1911 as the Computing-Tabulating-Recording Company (CTR) through the consolidation of The Tabulating
Machine Company, the International Time Recording Company, the Computing Scale Company and the Bundy Manufacturing
Company. CTR was renamed "International Business Machines" in 1924, a name which Thomas J. Watson first used for
a CTR Canadian subsidiary. The initialism IBM followed. Securities analysts nicknamed the company Big Blue for its
size and common use of the color in products, packaging and its logo. In 2012, Fortune ranked IBM the second largest
U.S. firm in terms of number of employees (435,000 worldwide), the fourth largest in terms of market capitalization,
the ninth most profitable, and the nineteenth largest firm in terms of revenue. Globally, the company was ranked
the 31st largest in terms of revenue by Forbes for 2011. Other rankings for 2011/2012 include №1 company for leaders
(Fortune), №1 green company in the United States (Newsweek), №2 best global brand (Interbrand), №2 most respected
company (Barron's), №5 most admired company (Fortune), and №18 most innovative company (Fast Company). IBM has 12
research laboratories worldwide, bundled into IBM Research. As of 2013, the company held the record for most
patents generated by a business for 22 consecutive years. Its employees have garnered five Nobel Prizes, six Turing
Awards, ten National Medals of Technology and five National Medals of Science. Notable company inventions or developments
include the automated teller machine (ATM), the floppy disk, the hard disk drive, the magnetic stripe card, the relational
database, the Universal Product Code (UPC), the financial swap, the Fortran programming language, SABRE airline reservation
system, dynamic random-access memory (DRAM), copper wiring in semiconductors, the silicon-on-insulator (SOI) semiconductor
manufacturing process, and Watson artificial intelligence. IBM has constantly evolved since its inception. Over the
past decade, it has steadily shifted its business mix by exiting commoditizing markets such as PCs, hard disk drives
and DRAMs and focusing on higher-value, more profitable markets such as business intelligence, data analytics, business
continuity, security, cloud computing, virtualization and green solutions, resulting in a higher quality revenue
stream and higher profit margins. IBM's operating margin expanded from 16.8% in 2004 to 24.3% in 2013, and net profit
margins expanded from 9.0% in 2004 to 16.5% in 2013. IBM acquired Kenexa (2012) and SPSS (2009) and PwC's consulting
business (2002), spinning off companies like printer manufacturer Lexmark (1991), and selling off product lines like
its personal computer and x86 server businesses to Lenovo (2005, 2014). In 2014, IBM announced that it would go "fabless"
by offloading IBM Micro Electronics semiconductor manufacturing to GlobalFoundries, a leader in advanced technology
manufacturing, citing that semiconductor manufacturing is a capital-intensive business which is challenging to operate
without scale. This transition was still underway as of early 2015. On June 16, 1911, the four companies were
consolidated in New York State by Charles Ranlett Flint to form the Computing-Tabulating-Recording Company (CTR).
CTR's business office was in Endicott. The individual companies owned by CTR continued to operate using their established
names until the businesses were integrated in 1933 and the holding company eliminated. The four companies had 1,300
employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington,
D.C.; and Toronto. They manufactured machinery for sale and lease, ranging from commercial scales and industrial
time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National
Cash Register Company by John Henry Patterson, called on Flint and, in 1914, was offered CTR. Watson joined CTR as
General Manager then, 11 months later, was made President when court cases relating to his time at NCR were resolved.
Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies.
He implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed,
dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker". His
favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues
more than doubled to $9 million and the company's operations expanded to Europe, South America, Asia and Australia.
Watson "had never liked the clumsy hyphenated title of the CTR" and chose to replace it with the more expansive title "International Business Machines", first as the name of a 1917 Canadian subsidiary and then as a line in advertisements (McClure's magazine, v. 53, May 1921, for example, carried a full-page ad). In 1937, IBM's tabulating
equipment enabled organizations to process unprecedented amounts of data, its clients including the U.S. Government,
during its first effort to maintain the employment records for 26 million people pursuant to the Social Security
Act, and the Third Reich, largely through the German subsidiary Dehomag. During the Second World War the company
produced small arms for the American war effort (M1 Carbine, and Browning Automatic Rifle). IBM provided translation
services for the Nuremberg Trials. In 1947, IBM opened its first office in Bahrain, as well as an office in Saudi
Arabia to service the needs of the Arabian-American Oil Company that would grow to become Saudi Business Machines
(SBM). In 1952, Thomas Watson, Sr., stepped down after almost 40 years at the company helm; his son, Thomas Watson,
Jr., was named president. In 1956, the company demonstrated the first practical example of artificial intelligence
when Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory programmed an IBM 704 not merely to play checkers
but "learn" from its own experience. In 1957, the FORTRAN (FORmula TRANslation) scientific programming language was
developed. In 1961, Thomas J. Watson, Jr., was elected chairman of the board and Albert L. Williams became company
president. The same year IBM developed the SABRE (Semi-Automated Business Research Environment) reservation system
for American Airlines and introduced the highly successful Selectric typewriter. In 2002, IBM acquired PwC consulting.
In 2003 it initiated a project to redefine company values. Using its Jam technology, it hosted a three-day Internet-based
online discussion of key business issues with 50,000 employees. Results were data mined with sophisticated text analysis
software (eClassifier) for common themes. Three emerged, expressed as: "Dedication to every client's success", "Innovation
that matters—for our company and for the world", and "Trust and personal responsibility in all relationships". Another
three-day Jam took place in 2004, with 52,000 employees discussing ways to implement company values in practice.
In 2005, the company sold its personal computer business to Chinese technology company Lenovo, and in the same year
it agreed to acquire Micromuse. A year later IBM launched Secure Blue, a low-cost hardware design for data encryption
that can be built into a microprocessor. In 2009 it acquired software company SPSS Inc. Later in 2009, IBM's Blue
Gene supercomputing program was awarded the National Medal of Technology and Innovation by U.S. President Barack
Obama. In 2011, IBM gained worldwide attention for its artificial intelligence program Watson, which was exhibited
on Jeopardy! where it won against game-show champions Ken Jennings and Brad Rutter. As of 2012, IBM had been
the top annual recipient of U.S. patents for 20 consecutive years. On October 28, 2015, IBM announced its acquisition
of digital assets from The Weather Company—a holding company of Bain Capital, The Blackstone Group and NBCUniversal
which owns The Weather Channel, including its weather data platforms (such as Weather Services International), websites
(Weather.com and Weather Underground) and mobile apps. The acquisition seeks to use Watson for weather analytics
and predictions. The acquisition does not include The Weather Channel itself, which will enter into a long-term licensing
agreement with IBM for use of its data. The sale closed on January 29, 2016. The company's 14-member Board of Directors is responsible for overall corporate management. As of Cathie Black's resignation in November 2010, its membership
(by affiliation and year of joining) included: Alain J. P. Belda '08 (Alcoa), William R. Brody '07 (Salk Institute
/ Johns Hopkins University), Kenneth Chenault '98 (American Express), Michael L. Eskew '05 (UPS), Shirley Ann Jackson
'05 (Rensselaer Polytechnic Institute), Andrew N. Liveris '10 (Dow Chemical), W. James McNerney, Jr. '09 (Boeing),
James W. Owens '06 (Caterpillar), Samuel J. Palmisano '00 (IBM), Joan Spero '04 (Doris Duke Charitable Foundation),
Sidney Taurel '01 (Eli Lilly), and Lorenzo Zambrano '03 (Cemex). On January 21, 2014 IBM announced that company executives
would forgo bonuses for fiscal year 2013. The move came as the firm reported a 5% drop in sales and 1% decline in
net profit over 2012. It also committed to a $1.2bn plus expansion of its data center and cloud-storage business,
including the development of 15 new data centers. After ten successive quarters of flat or sliding sales under Chief Executive Virginia Rometty, IBM is being forced to look at new approaches. Said Rometty, “We’ve got to reinvent ourselves like we’ve done in prior generations.” Other major campus installations include towers in Montreal, Paris, and Atlanta; software labs in Raleigh-Durham, Rome, Cracow, Toronto, Johannesburg, and Seattle; and facilities in Hakozaki and
Yamato. The company also operates the IBM Scientific Center, Hursley House, the Canada Head Office Building, IBM
Rochester, and the Somers Office Complex. The company's contributions to architecture and design, which include works
by Eero Saarinen, Ludwig Mies van der Rohe, and I.M. Pei, have been recognized. Van der Rohe's 330 North Wabash building
in Chicago, the original center of the company's research division post-World War II, was recognized with the 1990
Honor Award from the National Building Museum. IBM's employee management practices can be traced back to its roots.
In 1914, CEO Thomas J. Watson boosted company spirit by creating employee sports teams, hosting family outings, and
furnishing a company band. IBM sports teams continue to the present day; the IBM Big Blue still exist
as semi-professional company rugby and American football teams. In 1924 the Quarter Century Club, which recognizes
employees with 25 years of service, was organized and the first issue of Business Machines, IBM's internal publication,
was published. In 1925, the first meeting of the Hundred Percent Club, composed of IBM salesmen who meet their quotas,
convened in Atlantic City, New Jersey. IBM was among the first corporations to provide group life insurance (1934),
survivor benefits (1935) and paid vacations (1937). In 1932 IBM created an Education Department to oversee training
for employees, which oversaw the completion of the IBM Schoolhouse at Endicott in 1933. In 1935, the employee magazine
Think was created. Also that year, IBM held its first training class for female systems service professionals. In
1942, IBM launched a program to train and employ disabled people in Topeka, Kansas. The next year classes began in
New York City, and soon the company was asked to join the President's Committee for Employment of the Handicapped.
In 1946, the company hired its first black salesman, 18 years before the Civil Rights Act of 1964. In 1947, IBM announced
a Total and Permanent Disability Income Plan for employees. A vested rights pension was added to the IBM retirement
plan. During IBM's management transformation in the 1990s, revisions were made to these pension plans to reduce IBM's
pension liabilities. In 1952, Thomas J. Watson, Jr., published the company's first written equal opportunity policy
letter, one year before the U.S. Supreme Court decision in Brown v. Board of Education and 11 years before the Civil
Rights Act of 1964. In 1961, IBM's nondiscrimination policy was expanded to include sex, national origin, and age.
The following year, IBM hosted its first Invention Award Dinner honoring 34 outstanding IBM inventors; and in 1963,
the company named the first eight IBM Fellows in a new Fellowship Program that recognizes senior IBM scientists,
engineers and other professionals for outstanding technical achievements. On September 21, 1953, Thomas Watson, Jr.,
the company's president at the time, sent out a controversial letter to all IBM employees stating that IBM needed
to hire the best people, regardless of their race, ethnic origin, or gender. He also publicized the policy so that
in his negotiations to build new manufacturing plants with the governors of two states in the U.S. South, he could
be clear that IBM would not build "separate-but-equal" workplaces. In 1984, IBM added sexual orientation to its nondiscrimination
policy. The company stated that this would give IBM a competitive advantage because IBM would then be able to hire
talented people its competitors would turn down. IBM has been a leading proponent of the Open Source Initiative,
and began supporting Linux in 1998. The company invests billions of dollars in services and software based on Linux
through the IBM Linux Technology Center, which includes over 300 Linux kernel developers. IBM has also released code
under different open source licenses, such as the platform-independent software framework Eclipse (worth approximately
US$40 million at the time of the donation), the three-sentence International Components for Unicode (ICU) license,
and the Java-based relational database management system (RDBMS) Apache Derby. IBM's open source involvement has
not been trouble-free, however (see SCO v. IBM). DeveloperWorks is a website run by IBM for software developers and
IT professionals. It contains how-to articles and tutorials, as well as software downloads and code samples, discussion
forums, podcasts, blogs, wikis, and other resources for developers and technical professionals. Subjects range from
open, industry-standard technologies like Java, Linux, SOA and web services, web development, Ajax, PHP, and XML
to IBM's products (WebSphere, Rational, Lotus, Tivoli and Information Management). In 2007, developerWorks was inducted
into the Jolt Hall of Fame. Virtually all console gaming systems of the previous generation used microprocessors
developed by IBM. The Xbox 360 contains a PowerPC tri-core processor, which was designed and produced by IBM in less
than 24 months. Sony's PlayStation 3 features the Cell BE microprocessor designed jointly by IBM, Toshiba, and Sony.
IBM also provided the microprocessor that serves as the heart of Nintendo's new Wii U system, which debuted in 2012.
The new Power Architecture-based microprocessor includes IBM's latest technology in an energy-saving silicon package.
Nintendo's seventh-generation console, Wii, features an IBM chip codenamed Broadway. The older Nintendo GameCube
utilizes the Gekko processor, also designed by IBM. IBM announced that it would launch new software, called "Open Client Offering", to run on Linux, Microsoft Windows and Apple's Mac OS X. The company stated that the product would allow businesses to offer employees a choice of using the same software on Windows and its alternatives, cutting the cost of managing whether to use Linux or Apple relative to Windows. Companies would no longer need to pay Microsoft for operating-system licenses, since their applications would no longer rely on Windows-based software. One alternative to Microsoft's office document formats is the Open Document Format, whose development IBM supports; it covers tasks such as word processing and presentations, along with collaboration through Lotus Notes, instant messaging and blog tools, as well as an Internet Explorer competitor, the Mozilla Firefox web browser. IBM plans to install Open Client on 5% of its
desktop PCs. The Linux offering has been made available as the IBM Client for Smart Work product on the Ubuntu and
Red Hat Enterprise Linux platforms. In 2006, IBM launched Secure Blue, encryption hardware that can be built into
microprocessors. A year later, IBM unveiled Project Big Green, a re-direction of $1 billion per year across its businesses
to increase energy efficiency. In November 2008, IBM’s CEO, Sam Palmisano, during a speech at the Council on Foreign
Relations, outlined a new agenda for building a Smarter Planet. On March 1, 2011, IBM announced the Smarter Computing
framework to support Smarter Planet. On August 18, 2011, as part of its effort in cognitive computing, IBM produced
chips that imitate neurons and synapses. These microprocessors do not use von Neumann architecture, and they consume
less memory and power. IBM also holds the SmartCamp program globally. The program searches for fresh start-up companies
that IBM can partner with to solve world problems. IBM holds 17 SmartCamp events around the world. Since July 2011,
IBM has partnered with Pennies, the electronic charity box, and produced a software solution for IBM retail customers
that provides an easy way to donate money when paying in-store by credit or debit card. Customers donate just a few
pence (1p–99p) at a time, and every donation goes to UK charities. The birthplace of IBM, Endicott, suffered pollution
for decades, however. IBM used liquid cleaning agents in circuit board assembly operation for more than two decades,
and six spills and leaks were recorded, including one leak in 1979 of 4,100 gallons from an underground tank. These
left behind volatile organic compounds in the town's soil and aquifer. Traces of volatile organic compounds have
been identified in Endicott’s drinking water, but the levels are within regulatory limits. Also, since 1980, IBM has pumped out 78,000 gallons of chemicals, including trichloroethane, freon, benzene and perchloroethene, into the air, allegedly causing several cancer cases among the townspeople. IBM Endicott has been identified by the Department
of Environmental Conservation as the major source of pollution, though traces of contaminants from a local dry cleaner
and other polluters were also found. Remediation and testing are ongoing; however, according to city officials, tests
show that the water is safe to drink.
In physics, energy is a property of objects which can be transferred to other objects or converted into different forms.
The "ability of a system to perform work" is a common description, but it is difficult to give one single comprehensive
definition of energy because of its many forms. For instance, in SI units, energy is measured in joules, and one
joule is defined "mechanically", being the energy transferred to an object by the mechanical work of moving it a
distance of 1 metre against a force of 1 newton. However, there are many other definitions of energy, depending
on the context, such as thermal energy, radiant energy, electromagnetic, nuclear, etc., where definitions are derived
that are the most convenient. Common energy forms include the kinetic energy of a moving object, the potential energy
stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored
by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light,
and the thermal energy due to an object's temperature. All of the many forms of energy are convertible to other kinds
of energy. In Newtonian physics, there is a universal law of conservation of energy which says that energy can be
neither created nor destroyed; however, it can change from one form to another. For "closed systems" with no external
source or sink of energy, the first law of thermodynamics states that a system's energy is constant unless energy
is transferred in or out by mechanical work or heat, and that no energy is lost in transfer. This means that it is
impossible to create or destroy energy. While heat can always be fully converted into work in a reversible isothermal
expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics
states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat
energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy
can be transformed in the other direction into thermal energy without such limitations. The total energy of a system
can be calculated by adding up all forms of energy in the system. Examples of energy transformation include generating
electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy
driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential
energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms
the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground.
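This conversion can be sketched numerically. The values below (a 1 kg mass dropped from 10 m, with g ≈ 9.81 m/s² and air resistance ignored) are hypothetical, chosen only to illustrate that the kinetic energy gained equals the potential energy lost:

```python
import math

m = 1.0   # mass in kg (hypothetical value)
g = 9.81  # gravitational acceleration in m/s^2
h = 10.0  # drop height in m (hypothetical value)

# Lifting the object stores gravitational potential energy: E_p = m * g * h
e_potential = m * g * h

# As it falls, gravity does work that converts E_p into kinetic energy.
# Just before impact, E_k = 1/2 * m * v^2, with v = sqrt(2 * g * h).
v_impact = math.sqrt(2 * g * h)       # roughly 14 m/s
e_kinetic = 0.5 * m * v_impact ** 2

print(e_potential, e_kinetic)         # both about 98 J: energy is conserved
```

The equality of the two printed values is just the conservation law restated: the work done against gravity on the way up is exactly the work gravity does on the way down.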
Our Sun transforms nuclear potential energy to other forms of energy; this transformation does not in itself decrease the Sun's total mass (since it still contains the same total energy, even if in different forms), but its mass does decrease when the energy escapes to its surroundings, largely as radiant energy. The total energy of a system can be subdivided
and classified in various ways. For example, classical mechanics distinguishes between kinetic energy, which is determined
by an object's movement through space, and potential energy, which is a function of the position of an object within
a field. It may also be convenient to distinguish gravitational energy, thermal energy, several types of nuclear
energy (which utilize potentials from the nuclear force and the weak force), electric energy (from the electric field),
and magnetic energy (from the magnetic field), among others. Many of these classifications overlap; for instance,
thermal energy usually consists partly of kinetic and partly of potential energy. Some types of energy are a varying
mix of both potential and kinetic energy. An example is mechanical energy which is the sum of (usually macroscopic)
kinetic and potential energy in a system. Elastic energy in materials is also dependent upon electrical potential
energy (among atoms and molecules), as is chemical energy, which is stored and released from a reservoir of electrical
potential energy between electrons, and the molecules or atomic nuclei that attract them. The
list is also not necessarily complete. Whenever physical scientists discover that a certain phenomenon appears to
violate the law of energy conservation, new forms are typically added that account for the discrepancy. In the late
17th century, Gottfried Leibniz proposed the idea of vis viva (Latin for "living force"), which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for
slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent
parts of matter, a view shared by Isaac Newton, although it would be more than a century until this was generally
accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807,
Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard
Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential
energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any
isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely
a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and
the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William
Thomson (Lord Kelvin) as the field of thermodynamics. Thermodynamics aided the rapid development of explanations
of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical
formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan.
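The radiant-energy law associated with Jožef Stefan is the Stefan–Boltzmann law, which gives the power radiated per unit area of an ideal black body as j = σT⁴. A small sketch (the temperatures are arbitrary illustrative choices):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiant_flux(temperature_kelvin: float) -> float:
    """Power radiated per unit area of an ideal black body, in W/m^2."""
    return SIGMA * temperature_kelvin ** 4

# The fourth-power dependence means doubling the absolute temperature
# multiplies the radiated power by 2**4 = 16.
print(radiant_flux(300.0))                       # ~459 W/m^2 near room temperature
print(radiant_flux(600.0) / radiant_flux(300.0)) # 16.0
```

The steep T⁴ scaling is why radiant losses dominate at high temperatures even though they are modest near everyday ones.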
According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics
do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the
direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. Another
energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as
the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented
in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the
kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than
the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that
any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem
has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the
seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively),
it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous
symmetries need not have a corresponding conservation law. In the context of chemistry, energy is an attribute of
a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is
accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or
decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants
of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the
reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state;
in the case of endergonic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by the Boltzmann population factor e−E/kT, that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for
a chemical reaction can be in the form of thermal energy. In biology, energy is an attribute of all biological systems
from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development
of a biological cell or an organelle of a biological organism. Energy is thus often said to be stored by cells in
the structures of molecules of substances such as carbohydrates (including sugars), lipids, and proteins, which release
energy when reacted with oxygen in respiration. In human terms, the human equivalent (H-e) (Human energy conversion)
indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism,
assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example,
if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents
(100 ÷ 80) i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of
watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate
perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity
kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical
and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount
of energy. Sunlight is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide
and water (two low-energy compounds) are converted into the high-energy compounds carbohydrates, lipids, and proteins.
Plants also release oxygen during photosynthesis, which is utilized by living organisms as an electron acceptor,
to release the energy of carbohydrates, lipids, and proteins. Release of the energy stored during photosynthesis
as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly
for animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action.
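The human-equivalent (H-e) figures quoted earlier are a simple ratio against the assumed 80-watt metabolic baseline. A minimal sketch of that arithmetic (the function name is illustrative, not from the text):

```python
# Human equivalent (H-e): the ratio of a device's power draw to the
# average human metabolic power of 80 watts quoted in the text above.
METABOLIC_POWER_W = 80.0

def human_equivalent(power_watts: float) -> float:
    """Express a power figure in human equivalents (H-e)."""
    return power_watts / METABOLIC_POWER_W

# A 100 W light bulb runs at 100 / 80 = 1.25 H-e; a fit human briefly
# putting out ~1,000 W is working at 12.5 times the resting baseline.
print(human_equivalent(100.0))   # 1.25
print(human_equivalent(1000.0))  # 12.5
```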
Any living organism relies on an external source of energy—radiation from the Sun in the case of green plants, chemical
energy in some form in the case of animals—to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ)
recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates
and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised
to carbon dioxide and water in the mitochondria. It would appear that living organisms are remarkably inefficient
(in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that
most real machines manage higher efficiencies. In growing organisms the energy that is converted to heat serves a
vital purpose, as it allows the organism tissue to be highly ordered with regard to the molecules it is built from.
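The daily intake figures quoted above (1500–2000 Calories, 6–8 MJ) can be cross-checked against the 80-watt basal rate by converting food energy into an average continuous power. A hedged back-of-envelope sketch (1 dietary Calorie = 4.184 kJ is the standard definition; the helper names are illustrative):

```python
# Convert a daily dietary intake (in Calories, i.e. kcal) into total
# energy and into an average continuous power output in watts.
KJ_PER_CALORIE = 4.184          # 1 dietary Calorie (kcal) = 4.184 kJ
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 s

def calories_to_megajoules(calories: float) -> float:
    return calories * KJ_PER_CALORIE / 1000.0

def average_power_watts(calories_per_day: float) -> float:
    return calories_per_day * KJ_PER_CALORIE * 1000.0 / SECONDS_PER_DAY

# 1500-2000 Calories/day corresponds to roughly 6-8 MJ, i.e. an average
# output in the neighbourhood of the basal metabolic figures above.
print(calories_to_megajoules(1500))  # ~6.3 MJ
print(calories_to_megajoules(2000))  # ~8.4 MJ
print(average_power_watts(2000))     # ~97 W averaged over the day
```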
The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the
universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount
of energy (as heat) across the remainder of the universe ("the surroundings").[note 3] Simpler organisms can achieve
higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are
not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step
in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the
first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%)
are used for the metabolism of green plants, i.e. reconverted into carbon dioxide and heat. Sunlight may be stored
as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is
deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or
generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic
events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm
ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement.
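The gravitational storage described above, with water evaporated from oceans and deposited on mountains, is quantified by E = mgh. A minimal sketch (the reservoir mass and height are illustrative assumptions, not figures from the text):

```python
# Gravitational potential energy E = m * g * h: the mechanism behind
# hydroelectric storage of sunlight mentioned above.
G = 9.81  # standard gravity, m/s^2

def potential_energy_joules(mass_kg: float, height_m: float) -> float:
    return mass_kg * G * height_m

# Illustrative: one cubic metre of water (1000 kg) held 500 m above a
# dam's turbines stores about 4.9 MJ, released as the water descends.
print(potential_energy_joules(1000.0, 500.0))  # ~4.9e6 J
```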
In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives
plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential
energy storage of the thermal energy, which may be later released to active kinetic energy in landslides, after a
triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced
ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such
as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational
field or elastic strain (mechanical potential energy) in rocks. Prior to this, they represent release of energy that
has been stored in heavy atoms since the collapse of long-destroyed supernova stars created these atoms. In cosmology
and astronomy the phenomena of stars, nova, supernova, quasars and gamma-ray bursts are the universe's highest-output
energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of
energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular
hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter
elements, primarily hydrogen). The nuclear fusion of hydrogen in the Sun also releases another store of potential
energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe
cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store
of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated
from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed
into sunlight. In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the
wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system.
Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation
describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems.
The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an
energy level) which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator
(vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by
Planck's relation: E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy
states are called quanta of light or photons. For example, consider electron–positron annihilation, in which the
rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its
invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried
off by photons which individually are massless, but as a system retain their mass. This is a reversible process –
the inverse process is called pair creation – in which the rest mass of particles is created from energy of two (or
more) annihilating photons. In this system the matter (electrons and positrons) is destroyed and changed to non-matter
energy (the photons). However, the total system mass and energy do not change during this interaction. There are
strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described
by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient.
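Carnot's theorem, just mentioned, caps the efficiency of any cyclic heat engine at 1 − T_cold/T_hot, with temperatures in kelvin. A minimal sketch (the reservoir temperatures are illustrative assumptions):

```python
# Carnot limit: no cyclic heat engine operating between a hot reservoir
# at t_hot_k and a cold reservoir at t_cold_k (kelvin) can exceed this
# fraction of heat converted into work.
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    if t_cold_k >= t_hot_k:
        raise ValueError("T_hot must exceed T_cold")
    return 1.0 - t_cold_k / t_hot_k

# Illustrative: an engine drawing heat at 600 K and rejecting it at
# 300 K can convert at most half the drawn heat into work.
print(carnot_efficiency(600.0, 300.0))  # 0.5
```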
The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined
by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations
are permitted on a small scale, but certain larger transformations are not permitted because it is statistically
unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations
in the universe over time are characterized by various kinds of potential energy that has been available since the
Big Bang later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when
a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is
released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process
ultimately using the gravitational potential energy released from the gravitational collapse of supernovae, to store
energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth.
This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in
the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in
a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and
the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal
to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses,
the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever.
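The pendulum exchange just described can be checked numerically: with no friction, kinetic plus potential energy stays constant, and the speed at the lowest point follows from mgh = ½mv². A minimal sketch (the bob mass and drop height are illustrative):

```python
import math

# Frictionless pendulum: all gravitational potential energy m*g*h at the
# highest point becomes kinetic energy (1/2)*m*v^2 at the lowest point.
G = 9.81  # m/s^2

def speed_at_bottom(drop_height_m: float) -> float:
    """Speed at the lowest point, from m*g*h = (1/2)*m*v^2 (mass cancels)."""
    return math.sqrt(2.0 * G * drop_height_m)

def total_energy(mass_kg: float, height_m: float, speed_ms: float) -> float:
    return mass_kg * G * height_m + 0.5 * mass_kg * speed_ms ** 2

m, h = 1.0, 0.20                  # 1 kg bob released from a 20 cm rise
v = speed_at_bottom(h)            # ~1.98 m/s at the lowest point
top = total_energy(m, h, 0.0)     # all potential, no kinetic
bottom = total_energy(m, 0.0, v)  # all kinetic, no potential
print(abs(top - bottom) < 1e-9)   # True: total energy is conserved
```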
Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also
equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy,
and likewise always appears associated with it, as described in mass-energy equivalence. The formula E = mc², derived
by Albert Einstein (1905), quantifies the relationship between rest-mass and rest-energy within the concept of special
relativity. In different theoretical frameworks, similar formulas were derived by J. J. Thomson (1881), Henri Poincaré
(1900), Friedrich Hasenöhrl (1904) and others (see Mass-energy equivalence#History for further information). Matter
may be converted to energy (and vice versa), but mass cannot ever be destroyed; rather, mass/energy equivalence remains
a constant for both the matter and the energy, during any process when they are converted into each other. However, since c² is extremely large relative to ordinary human scales, the conversion of an ordinary amount of matter (for example, 1 kg) to other forms of energy (such as heat, light, and other radiation) can liberate tremendous amounts of energy (~9×10^16 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent
of a unit of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to
measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (i.e., kinetic
energy into particles with rest mass) are found in high-energy nuclear physics. Thermodynamics divides energy transformation
into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is
dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated
forms (fewer quantum states), without degradation of even more energy. A reversible process is one in which this
sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another,
is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of
lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy,
from which it cannot be recovered and converted with 100% efficiency into other forms of energy. In this case, part of the energy must remain as heat and cannot be completely recovered as usable energy, except at the price of a corresponding heat-like increase in the disorder of quantum states elsewhere in the universe (such as an expansion of matter, or a randomisation in a crystal). As the universe evolves in time, more and more of its energy
becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred
to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does
not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to
other usable forms of energy (through the use of generators attached to heat engines), grows less and less. According
to conservation of energy, energy can neither be created (produced) nor destroyed by itself. It can only be transformed.
The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change
in the energy contained within the system. Energy is subject to a strict global conservation law; that is, whenever
one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly
on time, it is found that the total energy of the system always remains constant. This law is a fundamental principle
of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of
translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of
their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable.
This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured. In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum and whose exchange with and among real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in the Coulomb law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals forces and some other observable
phenomena. Energy transfer can be considered for the special case of systems which are closed to transfers of matter.
The portion of the energy which is transferred by conservative forces over a distance is measured as the work the
source system does on the receiving system. The portion of the energy which does not do work during the transfer
is called heat.[note 4] Energy can be transferred between systems in a variety of ways. Examples include the transmission
of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive
transfer of thermal energy. The first law of thermodynamics asserts that energy (but not necessarily thermodynamic
free energy) is always conserved and that heat flow is a form of energy transfer. For homogeneous systems, with a
well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only
to pressure forces and heat transfer (e.g., a cylinder-full of gas) without chemical changes, the differential change
in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as dE = T dS − P dV, where T is the temperature, S the entropy, and V the volume. This principle
is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy
is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given
more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total
energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical
result is called the second law of thermodynamics.
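The spreading of energy over degrees of freedom described in this last paragraph can be illustrated with a toy model: when the number of available degrees doubles, the total energy is unchanged but the share per degree halves, with no distinction between "new" and "old" degrees. A minimal sketch (the numbers are illustrative, not from the text):

```python
# Toy model of the second law's statistical content: a fixed total
# energy spreads evenly over however many degrees of freedom are
# available, while the total remains conserved.
def spread(total_energy: float, degrees: int) -> list[float]:
    """Equal share of the total energy for each degree of freedom."""
    return [total_energy / degrees] * degrees

total = 12.0
before = spread(total, 4)  # 4 degrees of freedom -> 3.0 units each
after = spread(total, 8)   # doubling the degrees  -> 1.5 units each
print(before, after)
print(sum(before) == sum(after) == total)  # True: energy is conserved
```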
East Prussia enclosed the bulk of the ancestral lands of the Baltic Old Prussians. During the 13th century, the native Prussians
were conquered by the crusading Teutonic Knights. The indigenous Balts who survived the conquest were gradually converted
to Christianity. Because of Germanization and colonisation over the following centuries, Germans became the dominant
ethnic group, while Poles and Lithuanians formed minorities. From the 13th century, East Prussia was part of the
monastic state of the Teutonic Knights. After the Second Peace of Thorn in 1466 it became a fief of the Kingdom of
Poland. In 1525, with the Prussian Homage, the province became the Duchy of Prussia. The Old Prussian language had
become extinct by the 17th or early 18th century. Because the duchy was outside of the core Holy Roman Empire, the
prince-electors of Brandenburg were able to proclaim themselves King of Prussia beginning in 1701. After the annexation
of most of western Royal Prussia in the First Partition of the Polish-Lithuanian Commonwealth in 1772, eastern (ducal)
Prussia was connected by land with the rest of the Prussian state and was reorganized as a province the following
year (1773). Between 1829 and 1878, the Province of East Prussia was joined with West Prussia to form the Province
of Prussia. The Kingdom of Prussia became the leading state of the German Empire after its creation in 1871. However,
the Treaty of Versailles following World War I granted West Prussia to Poland and made East Prussia an exclave of
Weimar Germany (the new Polish Corridor separating East Prussia from the rest of Germany), while the Memel Territory
was detached and was annexed by Lithuania in 1923. Following Nazi Germany's defeat in World War II in 1945, war-torn
East Prussia was divided at Joseph Stalin's insistence between the Soviet Union (the Kaliningrad Oblast in the Russian
SFSR and the constituent counties of the Klaipėda Region in the Lithuanian SSR) and the People's Republic of Poland
(the Warmian-Masurian Voivodeship). The capital city Königsberg was renamed Kaliningrad in 1946. The German population
of the province was largely evacuated during the war or expelled shortly thereafter in the expulsion of Germans after
World War II. An estimated 300,000 (around one fifth of the population) died either in wartime bombing raids or in the battles to defend the province. Upon the invitation of Duke Konrad I of Masovia, the Teutonic
Knights took possession of Prussia in the 13th century and created a monastic state to administer the conquered Old
Prussians. Local Old-Prussian (north) and Polish (south) toponyms were gradually Germanised. The Knights' expansionist
policies, including occupation of Polish Pomerania with Gdańsk/Danzig and western Lithuania, brought them into conflict
with the Kingdom of Poland and embroiled them in several wars, culminating in the Polish-Lithuanian-Teutonic War,
whereby the united armies of Poland and Lithuania, defeated the Teutonic Order at the Battle of Grunwald (Tannenberg)
in 1410. Its defeat was formalised in the Second Treaty of Thorn in 1466 ending the Thirteen Years' War, and leaving
the former Polish region Pomerania/Pomerelia under Polish control. Together with Warmia it formed the province of
Royal Prussia. Eastern Prussia remained under the Knights, but as a fief of Poland. The arrangements of 1466 and 1525 made by the kings of Poland were never recognized by the Holy Roman Empire, just as the previous gains of the Teutonic Knights had not been recognized. The Teutonic Order lost eastern Prussia when Grand Master Albert of Brandenburg-Ansbach converted
to Lutheranism and secularized the Prussian branch of the Teutonic Order in 1525. Albert established himself as the
first duke of the Duchy of Prussia and a vassal of the Polish crown by the Prussian Homage. Walter von Cronberg,
the next Grand Master, was enfeoffed with the title to Prussia after the Diet of Augsburg in 1530, but the Order
never regained possession of the territory. In 1569 the Hohenzollern prince-electors of the Margraviate of Brandenburg
became co-regents with Albert's son, the feeble-minded Albert Frederick. Maximilian III, son of Emperor Maximilian II, grandmaster of the Teutonic Order and Administrator of Prussia, died in 1618. In the same year Albert's line died out with the death of Albert Frederick, and the Duchy of Prussia passed to the Electors of Brandenburg, forming Brandenburg-Prussia. Taking advantage of the Swedish invasion of Poland in 1655, and instead of fulfilling his vassal's duties towards the Polish Kingdom, Elector and Duke Frederick William joined forces with the Swedes; through the subsequent treaties of Wehlau, Labiau, and Oliva he succeeded in revoking the king of Poland's sovereignty over the Duchy of Prussia in 1660. The absolutist elector
also subdued the noble estates of Prussia. Although Brandenburg was a part of the Holy Roman Empire, the Prussian lands were not; they had previously been administered by the Teutonic Order grandmasters under the jurisdiction of the Emperor. In return for supporting Emperor Leopold I in the War of the Spanish Succession, Elector
Frederick III was allowed to crown himself "King in Prussia" in 1701. The new kingdom ruled by the Hohenzollern dynasty
became known as the Kingdom of Prussia. The designation "Kingdom of Prussia" was gradually applied to the various
lands of Brandenburg-Prussia. To differentiate from the larger entity, the former Duchy of Prussia became known as
Altpreußen ("Old Prussia"), the province of Prussia, or "East Prussia". Approximately one-third of East Prussia's
population died in the plague and famine of 1709–1711, including the last speakers of Old Prussian. The plague, probably
brought by foreign troops during the Great Northern War, killed 250,000 East Prussians, especially in the province's
eastern regions. Crown Prince Frederick William I led the rebuilding of East Prussia, founding numerous towns. Thousands
of Protestants expelled from the Archbishopric of Salzburg were allowed to settle in depleted East Prussia. The province
was overrun by Imperial Russian troops during the Seven Years' War. In the 1772 First Partition of Poland, the Prussian
king Frederick the Great annexed neighboring Royal Prussia, i.e. the Polish voivodeships of Pomerania (Gdańsk Pomerania
or Pomerelia), Malbork, Chełmno and the Prince-Bishopric of Warmia, thereby bridging the "Polish Corridor" between
his Prussian and Farther Pomeranian lands and cutting the remainder of Poland off from the Baltic coast. The territory of Warmia
was incorporated into the lands of former Ducal Prussia, which, by administrative deed of 31 January 1773 were named
East Prussia. The former Polish Pomerelian lands beyond the Vistula River together with Malbork and Chełmno Land
formed the Province of West Prussia with its capital at Marienwerder (Kwidzyn). The Polish Partition Sejm ratified
the cession on 30 September 1773, whereafter Frederick officially went on to call himself a King "of" Prussia. After
the disastrous defeat of the Prussian Army at the Battle of Jena-Auerstedt in 1806, Napoleon occupied Berlin and
had the officials of the Prussian General Directory swear an oath of allegiance to him, while King Frederick William
III and his consort Louise fled via Königsberg and the Curonian Spit to Memel. The French troops immediately took
up pursuit but were delayed in the Battle of Eylau on 9 February 1807 by an East Prussian contingent under General
Anton Wilhelm von L'Estocq. Napoleon had to stay at the Finckenstein Palace, but in May, after a siege of 75 days,
his troops led by Marshal François Joseph Lefebvre were able to capture the city Danzig, which had been tenaciously
defended by General Count Friedrich Adolf von Kalkreuth. On 14 June, Napoleon ended the War of the Fourth Coalition
with his victory at the Battle of Friedland. Frederick William and Queen Louise met with Napoleon for peace negotiations,
and on 9 July the Prussian king signed the Treaty of Tilsit. The succeeding Prussian reforms instigated by Heinrich
Friedrich Karl vom und zum Stein and Karl August von Hardenberg included the implementation of an Oberlandesgericht
appellation court at Königsberg, a municipal corporation, economic freedom as well as emancipation of the serfs and
Jews. In the course of the Prussian restoration by the 1815 Congress of Vienna, the East Prussian territories were
re-arranged in the Regierungsbezirke of Gumbinnen and Königsberg. From 1905, the southern districts of East Prussia
formed the separate Regierungsbezirk of Allenstein. East and West Prussia were first united in personal union in
1824, and then merged in a real union in 1829 to form the Province of Prussia. The united province was again split
into separate East and West Prussian provinces in 1878. The population of the province in 1900 was 1,996,626 people,
with a religious makeup of 1,698,465 Protestants, 269,196 Roman Catholics, and 13,877 Jews. The Low Prussian dialect
predominated in East Prussia, although High Prussian was spoken in Warmia. The numbers of Masurians, Kursenieki and
Prussian Lithuanians decreased over time due to the process of Germanization. The Polish-speaking population concentrated
in the south of the province (Masuria and Warmia) and all German geographic atlases at the start of 20th century
showed the southern part of East Prussia as Polish with the number of Poles estimated at the time to be 300,000.
Kursenieki inhabited the areas around the Curonian lagoon, while Lithuanian-speaking Prussians concentrated in the
northeast in (Lithuania Minor). The Old Prussian ethnic group became completely Germanized over time and the Old
Prussian language died out in the 18th century. At the beginning of World War I, East Prussia became a theatre of
war when the Russian Empire invaded the country. The Russian Army encountered at first little resistance because
the bulk of the German Army had been directed towards the Western Front according to the Schlieffen Plan. Despite
early success and the capture of the towns of Rastenburg and Gumbinnen, in the Battle of Tannenberg in 1914 and the
Second Battle of the Masurian Lakes in 1915, the Russians were decisively defeated and forced to retreat. The Russians
were followed by the German Army advancing into Russian territory. With the forced abdication of Emperor William
II in 1918, Germany became a republic. Most of West Prussia and the former Prussian Province of Posen, territories
annexed by Prussia in the 18th century Partitions of Poland, were ceded to the Second Polish Republic according to
the Treaty of Versailles. East Prussia thus became an exclave, separated from mainland Germany; the Memelland was also detached from the province.
Because most of West Prussia became part of the Second Polish Republic as the Polish Corridor, the formerly West
Prussian Marienwerder region became part of East Prussia (as Regierungsbezirk Westpreußen). The Soldau district in the Allenstein region also became part of the Second Polish Republic. The Seedienst Ostpreußen was established to provide an independent
transport service to East Prussia. Erich Koch headed the East Prussian Nazi party from 1928. He led the district
from 1932. This period was characterized by efforts to collectivize the local agriculture and ruthlessness in dealing
with his critics inside and outside the Party. He also had long-term plans for mass-scale industrialization of the
largely agricultural province. These actions made him unpopular among the local peasants. In 1932 the local paramilitary
SA had already started to terrorise their political opponents. On the night of 31 July 1932 there was a bomb attack
on the headquarters of the Social Democrats in Königsberg, the Otto-Braun-House. The Communist politician Gustav
Sauf was killed; the executive editor of the Social Democrat "Königsberger Volkszeitung", Otto Wyrgatsch, and the
German People's Party politician Max von Bahrfeldt were severely injured. Members of the Reichsbanner were attacked
and the local Reichsbanner Chairman of Lötzen, Kurt Kotzan, was murdered on 6 August 1932. Through publicly funded
emergency relief programs concentrating on agricultural land-improvement projects and road construction, the "Erich
Koch Plan" for East Prussia allegedly made the province free of unemployment; on 16 August 1933 Koch reported to
Hitler that unemployment had been banished entirely from East Prussia, a feat that gained admiration throughout the
Reich. Koch's industrialization plans led him into conflict with R. Walther Darré, who held the office of the Reich
Peasant Leader (Reichsbauernführer) and Minister of Agriculture. Darré, a neopaganist rural romantic, wanted to enforce
his vision of an agricultural East Prussia. When his "Land" representatives challenged Koch's plans, Koch had them
arrested. In 1938 the Nazis altered about one-third of the toponyms of the area, eliminating, Germanizing, or simplifying
a number of Old Prussian names, as well as those Polish or Lithuanian names originating from colonists and refugees
to Prussia during and after the Protestant Reformation. More than 1,500 places were ordered to be renamed by 16 July
1938 following a decree issued by Gauleiter and Oberpräsident Erich Koch and initiated by Adolf Hitler. Many who
would not cooperate with the rulers of Nazi Germany were sent to concentration camps and held prisoner there until
their death or liberation. In 1939 East Prussia had 2.49 million inhabitants, 85% of them ethnic Germans; the others were Poles in the south (who, according to Polish estimates, numbered around 300,000–350,000 in the interwar period), the Latvian-speaking Kursenieki, and the Lithuanian-speaking Lietuvininkai in the northeast. Most German East Prussians, Masurians, Kursenieki, and Lietuvininkai were Lutheran, while the population of Ermland was mainly Roman Catholic
due to the history of its bishopric. The East Prussian Jewish Congregation declined from about 9,000 in 1933 to 3,000
in 1939, as most fled from Nazi rule. Those who remained were later deported and killed in the Holocaust. In 1939
the Regierungsbezirk Zichenau was annexed by Germany and incorporated into East Prussia. Parts of it were transferred
to other regions, e.g. Suwałki to Regierungsbezirk Gumbinnen and Soldau to Regierungsbezirk Allenstein. Despite Nazi
propaganda presenting all of the regions annexed as possessing significant German populations that wanted reunification
with Germany, the Reich's statistics of late 1939 show that only 31,000 out of 994,092 people in this territory were
ethnic Germans. Following Nazi Germany's defeat in World War II in 1945, East Prussia was partitioned
between Poland and the Soviet Union according to the Potsdam Conference. Southern East Prussia was placed under Polish
administration, while northern East Prussia was divided between the Soviet republics of Russia (the Kaliningrad Oblast)
and Lithuania (the constituent counties of the Klaipėda Region). The city of Königsberg was renamed Kaliningrad in
1946. The German population of the province largely evacuated during the war, but several hundreds of thousands died
during the years 1944–46 and the remainder were subsequently expelled. Shortly after the end of the war in May 1945,
Germans who had fled in early 1945 tried to return to their homes in East Prussia. An estimated 800,000 Germans were living in East Prussia during the summer of 1945. Many more were prevented from returning, and the German population of East Prussia was almost completely expelled by the communist regimes. During
the war and for some time thereafter, 45 camps were established for about 200,000–250,000 forced labourers, the vast
majority of whom were deported to the Soviet Union, including the Gulag camp system. The largest camp with about
48,000 inmates was established at Deutsch Eylau (Iława). Orphaned children who were left behind in the zone occupied
by the Soviet Union were referred to as Wolf children. Representatives of the Polish government officially took over
the civilian administration of the southern part of East Prussia on 23 May 1945. Subsequently Polish expatriates
from Polish lands annexed by the Soviet Union as well as Ukrainians and Lemkos from southern Poland, expelled in
Operation Vistula in 1947, were settled in the southern part of East Prussia, now the Polish Warmian-Masurian Voivodeship.
In 1950 the Olsztyn Voivodeship counted 689,000 inhabitants, 22.6% of them coming from areas annexed by the Soviet
Union, 10% Ukrainians, and 18.5% of them pre-war inhabitants. The remaining pre-war population was treated as Germanized
Poles, and a policy of re-Polonization was pursued throughout the country. Most of these "Autochthones" chose to emigrate to West Germany from the 1950s through the 1970s (between 1970 and 1988, 55,227 persons from Warmia and Masuria moved
to Western Germany). Local toponyms were Polonised by the Polish Commission for the Determination of Place Names.
In April 1946, northern East Prussia became an official province of the Russian SFSR as the "Kyonigsbergskaya Oblast",
with the Memel Territory becoming part of the Lithuanian SSR. In June 1946, 114,070 German and 41,029 Soviet citizens were registered in the Oblast, along with an unknown number of unregistered persons. In July of that year,
the historic city of Königsberg was renamed Kaliningrad to honour Mikhail Kalinin and the area named the Kaliningrad
Oblast. Between 24 August and 26 October 1948, 21 transports carrying a total of 42,094 Germans left the Oblast for the Soviet
Occupation Zone (which became East Germany). The last remaining Germans left in November 1949 (1,401 persons) and
January 1950 (7 persons). A similar fate befell the Curonians who lived in the area around the Curonian Lagoon. While
many fled from the Red Army during the evacuation of East Prussia, Curonians that remained behind were subsequently
expelled by the Soviet Union. Only 219 lived along the Curonian Spit in 1955. Many had German names such as Fritz
or Hans, a cause for anti-German discrimination. The Soviet authorities considered the Curonians fascists. Because
of this discrimination, many emigrated to West Germany in 1958, where the majority of Curonians now live. After
the expulsion of the German population ethnic Russians, Belarusians, and Ukrainians were settled in the northern
part. In the Soviet part of the region, a policy of eliminating all remnants of German history was pursued. All German
place names were replaced by new Russian names. The exclave was a military zone, which was closed to foreigners;
Soviet citizens could only enter with special permission. In 1967 the remnants of Königsberg Castle were demolished
on the orders of Leonid Brezhnev to make way for a new "House of the Soviets". Although the 1945–1949 expulsion of
Germans from the northern part of former East Prussia was often conducted in a violent and aggressive way by Soviet
officials, the present Russian inhabitants of the Kaliningrad Oblast have much less animosity towards Germans. German
names have been revived in commercial Russian trade and there is sometimes talk of reverting Kaliningrad's name to
its historic name of Königsberg. The city centre of Kaliningrad was completely rebuilt, as British bombs in 1944
and the Soviet siege in 1945 had left it in ruins. Since 1875, with the strengthening of self-rule, the
urban and rural districts (Kreise) within each province (sometimes within each governorate) formed a corporation
with common tasks and assets (schools, traffic installations, hospitals, cultural institutions, jails etc.) called
the Provinzialverband (provincial association). Initially the assemblies of the urban and rural districts elected
representatives for the provincial diets (Provinziallandtage), which were thus indirectly elected. As of 1919 the
provincial diets (or as to governorate diets, the so-called Kommunallandtage) were directly elected by the citizens
of the provinces (or governorates, respectively). These parliaments legislated within the competences transferred
to the provincial associations. The provincial diet of East Prussia elected a provincial executive body (government),
the provincial committee (Provinzialausschuss), and a head of province, the Landeshauptmann ("Land Captain"; until the 1880s titled Landdirektor, "land director").
The Ottoman Empire (/ˈɒtəmən/; Ottoman Turkish: دَوْلَتِ عَلِيّهٔ عُثمَانِیّه Devlet-i Aliyye-i Osmâniyye, Modern Turkish:
Osmanlı İmparatorluğu or Osmanlı Devleti), also known as the Turkish Empire, Ottoman Turkey or Turkey, was an empire
founded in 1299 by Oghuz Turks under Osman I in northwestern Anatolia. After conquests in the Balkans by Murad I
between 1362 and 1389, the Ottoman sultanate was transformed into a transcontinental empire and claimant to the caliphate.
The Ottomans ended the Byzantine Empire with the 1453 conquest of Constantinople by Mehmed the Conqueror. During
the 16th and 17th centuries, in particular at the height of its power under the reign of Suleiman the Magnificent,
the Ottoman Empire was a multinational, multilingual empire controlling much of Southeast Europe, Western Asia, the
Caucasus, North Africa, and the Horn of Africa. At the beginning of the 17th century the empire contained 32 provinces
and numerous vassal states. Some of these were later absorbed into the Ottoman Empire, while others were granted
various types of autonomy during the course of centuries. With Constantinople as its capital and control of
lands around the Mediterranean basin, the Ottoman Empire was at the centre of interactions between the Eastern and
Western worlds for six centuries. Following a long period of military setbacks against European powers, the Ottoman
Empire gradually declined into the late nineteenth century. The empire allied with Germany in the early 20th century,
with the imperial ambition of recovering its lost territories, joining in World War I to achieve this ambition on
the side of Germany and the Central Powers. While the Empire was able to largely hold its own during the conflict,
it was struggling with internal dissent, especially with the Arab Revolt in its Arabian holdings. Starting before
the war, but growing increasingly common and violent during it, major atrocities were committed by the Ottoman government
against the Armenians, Assyrians and Pontic Greeks. The Empire's defeat and the occupation of part of its territory
by the Allied Powers in the aftermath of World War I resulted in the emergence of a new state, Turkey, in the Ottoman
Anatolian heartland following the Turkish War of Independence, as well as the founding of modern Balkan and Middle
Eastern states and the partitioning of the Ottoman Empire. The word Ottoman is a historical anglicisation of the
name of Osman I, the founder of the Empire and of the ruling House of Osman (also known as the Ottoman dynasty).
Osman's name in turn was derived from the Persian form of the name ʿUthmān عثمان of ultimately Arabic origin. In
Ottoman Turkish, the empire was referred to as Devlet-i ʿAliyye-yi ʿOsmâniyye (دَوْلَتِ عَلِيّهٔ عُثمَانِیّه), (literally
"The Supreme State of the Ottomans") or alternatively Osmanlı Devleti (عثمانلى دولتى). In Modern Turkish, it
is known as Osmanlı İmparatorluğu ("Ottoman Empire") or Osmanlı Devleti ("The Ottoman State"). Ertuğrul, the father
of Osman I (founder of the Ottoman Empire), arrived in Anatolia from Merv (Turkmenistan) with 400 horsemen to aid
the Seljuks of Rum against the Byzantines. After the demise of the Turkish Seljuk Sultanate of Rum in the 14th century,
Anatolia was divided into a patchwork of independent, mostly Turkish states, the so-called Ghazi emirates. One of
the emirates was led by Osman I (1258–1326), from whom the name Ottoman is derived. Osman I extended the frontiers
of Turkish settlement toward the edge of the Byzantine Empire. It is not well understood how the early Ottomans came
to dominate their neighbours, as the history of medieval Anatolia is still little known. In the century after the
death of Osman I, Ottoman rule began to extend over the Eastern Mediterranean and the Balkans. Osman's son, Orhan,
captured the northwestern Anatolian city of Bursa in 1324, and made it the new capital of the Ottoman state. This
Ottoman conquest meant the loss of Byzantine control over northwestern Anatolia. The important city of Thessaloniki
was captured from the Venetians in 1387. The Ottoman victory at Kosovo in 1389 effectively marked the end of Serbian
power in the region, paving the way for Ottoman expansion into Europe. The Battle of Nicopolis in 1396, widely regarded
as the last large-scale crusade of the Middle Ages, failed to stop the advance of the victorious Ottoman Turks. With
the extension of Turkish dominion into the Balkans, the strategic conquest of Constantinople became a crucial objective.
The empire had managed to control nearly all former Byzantine lands surrounding the city, but in 1402 the Byzantines
were temporarily relieved when the Turco-Mongol leader Timur, founder of the Timurid Empire, invaded Anatolia from
the east. In the Battle of Ankara in 1402, Timur defeated the Ottoman forces and took Sultan Bayezid I as a prisoner,
throwing the empire into disorder. The ensuing civil war lasted from 1402 to 1413 as Bayezid's sons fought over succession.
It ended when Mehmed I emerged as the sultan and restored Ottoman power, bringing an end to the Interregnum, also
known as the Fetret Devri. Part of the Ottoman territories in the Balkans (such as Thessaloniki, Macedonia and Kosovo)
were temporarily lost after 1402 but were later recovered by Murad II between the 1430s and 1450s. On 10 November
1444, Murad II defeated the Hungarian, Polish, and Wallachian armies under Władysław III of Poland (also King of
Hungary) and John Hunyadi at the Battle of Varna, the final battle of the Crusade of Varna, although Albanians under
Skanderbeg continued to resist. Four years later, John Hunyadi prepared another army (of Hungarian and Wallachian
forces) to attack the Turks but was again defeated by Murad II at the Second Battle of Kosovo in 1448. The son of
Murad II, Mehmed the Conqueror, reorganized the state and the military, and conquered Constantinople on 29 May 1453.
Mehmed allowed the Orthodox Church to maintain its autonomy and land in exchange for accepting Ottoman authority.
Because of bad relations between the states of western Europe and the later Byzantine Empire, the majority of the
Orthodox population accepted Ottoman rule as preferable to Venetian rule. Albanian resistance was a major obstacle
to Ottoman expansion on the Italian peninsula. Suleiman the Magnificent (1520–1566) captured Belgrade in 1521, conquered
the southern and central parts of the Kingdom of Hungary as part of the Ottoman–Hungarian Wars, and, after his historic victory in the Battle of Mohács in 1526, he established Turkish rule in the territory of
present-day Hungary (except the western part) and other Central European territories. He then laid siege to Vienna
in 1529, but failed to take the city. In 1532, he made another attack on Vienna, but was repulsed in the Siege of
Güns. Transylvania, Wallachia and, intermittently, Moldavia, became tributary principalities of the Ottoman Empire.
In the east, the Ottoman Turks took Baghdad from the Persians in 1535, gaining control of Mesopotamia and naval access
to the Persian Gulf. In 1555, the Caucasus became officially partitioned for the first time between the Safavids
and the Ottomans, a status quo that would remain until the end of the Russo-Turkish War (1768–74). By this partitioning
of the Caucasus, as signed in the Peace of Amasya, Western Armenia and Western Georgia fell into Ottoman hands, while
Dagestan, Eastern Armenia, Eastern Georgia, and Azerbaijan remained Persian. France and the Ottoman Empire, united
by mutual opposition to Habsburg rule, became strong allies. The French conquests of Nice (1543) and Corsica (1553)
occurred as a joint venture between the forces of the French king Francis I and Suleiman, and were commanded by the
Ottoman admirals Barbarossa Hayreddin Pasha and Turgut Reis. A month prior to the siege of Nice, France supported
the Ottomans with an artillery unit during the 1543 Ottoman conquest of Esztergom in northern Hungary. After further
advances by the Turks, the Habsburg ruler Ferdinand officially recognized Ottoman ascendancy in Hungary in 1547.
Stagnation and decline, Stephen Lee argues, were relentless after the death of Suleiman in 1566, interrupted by a few short periods of revival, reform, and recovery. The decline gathered speed, so that the Empire in 1699 was "a mere shadow
of that which intimidated East and West alike in 1566." Although there are dissenting scholars, most historians point
to "degenerate Sultans, incompetent Grand Viziers, debilitated and ill-equipped armies, corrupt officials, avaricious
speculators, grasping enemies, and treacherous friends." The main cause was a failure of leadership: Lee argues that the first 10 sultans from 1292 to 1566, with one exception, had performed admirably. The next 13 sultans from 1566
to 1703, with two exceptions, were lackadaisical or incompetent rulers, says Lee. In a highly centralized system,
the failure at the center proved fatal. A direct result was the strengthening of provincial elites who increasingly
ignored Constantinople. Secondly, the military strength of the European enemies grew steadily, while the Ottoman armies and arms scarcely improved. Finally, the Ottoman economic system grew distorted and impoverished, as
war caused inflation, world trade moved in other directions, and the deterioration of law and order made economic
progress difficult. The effective military and bureaucratic structures of the previous century came under strain
during a protracted period of misrule by weak Sultans. The Ottomans gradually fell behind the Europeans in military
technology as the innovation that fed the Empire's forceful expansion became stifled by growing religious and intellectual
conservatism. But in spite of these difficulties, the Empire remained a major expansionist power until the Battle
of Vienna in 1683, which marked the end of Ottoman expansion into Europe. The discovery of new maritime trade routes
by Western European states allowed them to avoid the Ottoman trade monopoly. The Portuguese discovery of the Cape
of Good Hope in 1488 initiated a series of Ottoman-Portuguese naval wars in the Indian Ocean throughout the 16th
century. The Somali Muslim Ajuran Empire, allied with the Ottomans, defied the Portuguese economic monopoly in the
Indian Ocean by employing a new coinage which followed the Ottoman pattern, thus proclaiming an attitude of economic
independence in regard to the Portuguese. In southern Europe, a Catholic coalition led by Philip II of Spain won
a victory over the Ottoman fleet at the Battle of Lepanto (1571). It was a startling, if mostly symbolic, blow to
the image of Ottoman invincibility, an image which the victory of the Knights of Malta against the Ottoman invaders
in the 1565 Siege of Malta had recently set about eroding. The battle damaged the Ottoman navy far more by sapping its experienced manpower than through the loss of ships, which were rapidly replaced. The Ottoman navy recovered quickly,
persuading Venice to sign a peace treaty in 1573, allowing the Ottomans to expand and consolidate their position
in North Africa. By contrast, the Habsburg frontier had settled somewhat, a stalemate caused by a stiffening of the
Habsburg defences. The Long War against Habsburg Austria (1593–1606) created the need for greater numbers of Ottoman
infantry equipped with firearms, resulting in a relaxation of recruitment policy. This contributed to problems of
indiscipline and outright rebelliousness within the corps, which were never fully solved. Irregular sharpshooters
(Sekban) were also recruited, and on demobilization turned to brigandage in the Jelali revolts (1595–1610), which
engendered widespread anarchy in Anatolia in the late 16th and early 17th centuries. With the Empire's population
reaching 30 million people by 1600, the shortage of land placed further pressure on the government. In spite of
these problems, the Ottoman state remained strong, and its army did not collapse or suffer crushing defeats. The
only exceptions were campaigns against the Safavid dynasty of Persia, where many of the Ottoman eastern provinces
were lost, some permanently. This 1603–1618 war eventually resulted in the Treaty of Nasuh Pasha, which ceded the
entire Caucasus, except westernmost Georgia, back into Iranian Safavid possession. Campaigns during this era became
increasingly inconclusive, even against weaker states with much smaller forces, such as Poland or Austria. During
his brief majority reign, Murad IV (1612–1640) reasserted central authority and recaptured Iraq (1639) from the Safavids.
The resulting Treaty of Zuhab of that same year decisively divided the Caucasus and adjacent regions between the two
neighbouring empires as it had already been defined in the 1555 Peace of Amasya. The Sultanate of Women (1648–1656)
was a period in which the mothers of young sultans exercised power on behalf of their sons. The most prominent women
of this period were Kösem Sultan and her daughter-in-law Turhan Hatice, whose political rivalry culminated in Kösem's
murder in 1651. During the Köprülü Era (1656–1703), effective control of the Empire was exercised by a sequence of
Grand Viziers from the Köprülü family. The Köprülü Vizierate saw renewed military success with authority restored
in Transylvania, the conquest of Crete completed in 1669, and expansion into Polish southern Ukraine, with the strongholds
of Khotyn and Kamianets-Podilskyi and the territory of Podolia ceded to Ottoman control in 1676. This period of
renewed assertiveness came to a calamitous end in May 1683 when Grand Vizier Kara Mustafa Pasha led a huge army to
attempt a second Ottoman siege of Vienna in the Great Turkish War of 1683–1687. With the final assault fatally delayed,
the Ottoman forces were swept away by allied Habsburg, German and Polish forces spearheaded by the Polish king Jan
III Sobieski at the Battle of Vienna. The alliance of the Holy League pressed home the advantage of the defeat at
Vienna, culminating in the Treaty of Karlowitz (26 January 1699), which ended the Great Turkish War. The Ottomans
surrendered control of significant territories, many permanently. Mustafa II (1695–1703) led the counterattack of
1695–96 against the Habsburgs in Hungary, but was undone by the disastrous defeat at Zenta (in modern Serbia) on 11 September 1697. After the Austro-Turkish War of 1716–1718 the Treaty of Passarowitz confirmed the loss of the Banat,
Serbia and "Little Walachia" (Oltenia) to Austria. The Treaty also revealed that the Ottoman Empire was on the defensive
and unlikely to present any further aggression in Europe. The Austro-Russian–Turkish War, which was ended by the
Treaty of Belgrade in 1739, resulted in the recovery of Serbia and Oltenia, but the Empire lost the port of Azov,
north of the Crimean Peninsula, to the Russians. After this treaty the Ottoman Empire was able to enjoy a generation
of peace, as Austria and Russia were forced to deal with the rise of Prussia. Educational and technological reforms
came about, including the establishment of higher education institutions such as the Istanbul Technical University.
In 1734 an artillery school was established to impart Western-style artillery methods, but the Islamic clergy successfully objected on the grounds of theodicy. In 1754 the artillery school was reopened on a semi-secret basis. In 1726, Ibrahim Muteferrika convinced the Grand Vizier Nevşehirli Damat İbrahim Pasha, the Grand Mufti, and the clergy of the usefulness of the printing press, and Muteferrika was later granted permission by Sultan Ahmed III to publish
non-religious books (despite opposition from some calligraphers and religious leaders). Muteferrika's press published
its first book in 1729 and, by 1743, issued 17 works in 23 volumes, each having between 500 and 1,000 copies. In
1768 Russian-backed Ukrainian Haidamaks, pursuing Polish confederates, entered Balta, an Ottoman-controlled town
on the border of Bessarabia in Ukraine, and massacred its citizens and burned the town to the ground. This action
provoked the Ottoman Empire into the Russo-Turkish War of 1768–1774. The Treaty of Küçük Kaynarca of 1774 ended the
war and provided freedom to worship for the Christian citizens of the Ottoman-controlled provinces of Wallachia and
Moldavia. By the late 18th century, a number of defeats in several wars with Russia led some people in the Ottoman
Empire to conclude that the reforms of Peter the Great had given the Russians an edge, and the Ottomans would have
to keep up with Western technology in order to avoid further defeats. The Serbian revolution (1804–1815) marked the
beginning of an era of national awakening in the Balkans during the Eastern Question. Suzerainty of Serbia as a hereditary
monarchy under its own dynasty was acknowledged de jure in 1830. In 1821, the Greeks declared war on the Sultan.
A rebellion that originated in Moldavia as a diversion was followed by the main revolution in the Peloponnese, which,
along with the northern part of the Gulf of Corinth, became the first parts of the Ottoman Empire to achieve independence
(in 1829). By the mid-19th century, the Ottoman Empire was called the "sick man" by Europeans. The suzerain states
– the Principality of Serbia, Wallachia, Moldavia and Montenegro – moved towards de jure independence during the
1860s and 1870s. The Christian population of the empire, owing to their higher educational levels, started to pull
ahead of the Muslim majority, leading to much resentment on the part of the latter. In 1861, there were 571 primary
and 94 secondary schools for Ottoman Christians with 140,000 pupils in total, a figure that vastly exceeded the number
of Muslim children in school at the same time, who were further hindered by the amount of time spent learning Arabic
and Islamic theology. In turn, the higher educational levels of the Christians allowed them to play a large role
in the economy. In 1911, of the 654 wholesale companies in Istanbul, 528 were owned by ethnic Greeks. Of course,
it would be a mistake to ignore the geopolitical dimensions of this dynamic. The preponderance of Christian merchants was not due to any innate business sense on their part, although plenty of European observers were keen on making this point. In fact, in many cases, Christians and also Jews were able to gain protection and citizenship from European consuls, meaning they were protected from Ottoman law and not subject to the same economic regulations as their Muslim counterparts.
The Crimean War (1853–1856) was part of a long-running contest between the major European powers for influence over
territories of the declining Ottoman Empire. The financial burden of the war led the Ottoman state to issue foreign
loans amounting to 5 million pounds sterling on 4 August 1854. The war caused an exodus of the Crimean Tatars, about
200,000 of whom moved to the Ottoman Empire in continuing waves of emigration. Toward the end of the Caucasian Wars,
90% of the Circassians were ethnically cleansed and exiled from their homelands in the Caucasus and fled to the Ottoman
Empire, resulting in the settlement of 500,000 to 700,000 Circassians in Turkey. Some Circassian organisations
give much higher numbers, totaling 1–1.5 million deported or killed. As the Ottoman state attempted to modernize
its infrastructure and army in response to threats from the outside, it also opened itself up to a different kind
of threat: that of creditors. Indeed, as the historian Eugene Rogan has written, "the single greatest threat to the
independence of the Middle East" in the nineteenth century "was not the armies of Europe but its banks." The Ottoman
state, which had begun taking on debt with the Crimean War, was forced to declare bankruptcy in 1875. By 1881, the
Ottoman Empire agreed to have its debt controlled by an institution known as the Ottoman Public Debt Administration,
a council of European men with presidency alternating between France and Britain. The body controlled swaths of the
Ottoman economy, and used its position to ensure that European capital continued to penetrate the empire, often to
the detriment of local Ottoman interests. The Ottoman bashi-bazouks brutally suppressed the Bulgarian uprising of
1876, massacring up to 100,000 people in the process. The Russo-Turkish War (1877–78) ended with a decisive victory
for Russia. As a result, Ottoman holdings in Europe declined sharply; Bulgaria was established as an independent
principality inside the Ottoman Empire, and Romania achieved full independence. Serbia and Montenegro finally gained
complete independence, but with smaller territories. In 1878, Austria-Hungary unilaterally occupied the Ottoman provinces
of Bosnia-Herzegovina and Novi Pazar. As the Ottoman Empire gradually shrank in size, some 7–9 million Turkish-Muslims
from its former territories in the Caucasus, Crimea, Balkans, and the Mediterranean islands migrated to Anatolia
and Eastern Thrace. After the Empire lost the Balkan Wars (1912–13), it lost all its Balkan territories except East
Thrace (European Turkey). This resulted in around 400,000 Muslims fleeing with the retreating Ottoman armies (with
many dying from cholera brought by the soldiers), and with some 400,000 non-Muslims fleeing territory still under
Ottoman rule. Justin McCarthy estimates that during the period 1821 to 1922 several million Muslims died in the Balkans,
and that a similar number were expelled. The defeat and dissolution of the Ottoman Empire (1908–1922) began with the
Second Constitutional Era, a moment of hope and promise established with the Young Turk Revolution. It restored the
Ottoman constitution of 1876 and brought in multi-party politics with a two-stage electoral system (electoral law)
under the Ottoman parliament. The constitution offered hope by freeing the empire’s citizens to modernize the state’s
institutions, rejuvenate its strength, and enable it to hold its own against outside powers. Its guarantee of liberties
promised to dissolve inter-communal tensions and transform the empire into a more harmonious place. Instead, this
period became the story of the twilight struggle of the Empire. Members of the Young Turks movement, once organized underground as committees and groups, now openly declared their political parties. Among them, the Committee of Union and Progress and the Freedom and Accord Party were the major parties. At the other end of the spectrum were ethnic parties, which included Poale Zion, Al-Fatat, and the Armenian national movement organized under the Armenian Revolutionary Federation. Profiting from
the civil strife, Austria-Hungary officially annexed Bosnia and Herzegovina in 1908. The last Ottoman census was performed in 1914. Ottoman military reforms produced the modern Ottoman Army, which fought in the Italo-Turkish War (1911) and the Balkan Wars (1912–1913) amid continuous unrest in the Empire (a countercoup followed by a restoration, and the Saviors episode followed by the Raid on the Porte) up to World War I. The history of the Ottoman Empire during
World War I began with the Ottoman engagement in the Middle Eastern theatre. There were several important Ottoman
victories in the early years of the war, such as the Battle of Gallipoli and the Siege of Kut. The Arab Revolt which
began in 1916 turned the tide against the Ottomans on the Middle Eastern front, where they initially seemed to have
the upper hand during the first two years of the war. The Armistice of Mudros was signed on 30 October 1918, and
set the partition of the Ottoman Empire under the terms of the Treaty of Sèvres. This treaty, as designed at the Conference of London, allowed the Sultan to retain his position and title. The occupation of Constantinople and İzmir
led to the establishment of a Turkish national movement, which won the Turkish War of Independence (1919–22) under
the leadership of Mustafa Kemal (later given the surname "Atatürk"). The sultanate was abolished on 1 November 1922,
and the last sultan, Mehmed VI (reigned 1918–22), left the country on 17 November 1922. The caliphate was abolished
on 3 March 1924. In 1915, as the Russian Caucasus Army continued to advance into eastern Anatolia, the Ottoman government
started the deportation of its ethnic Armenian population, resulting in the death of approximately 1.5 million Armenians
in what became known as the Armenian Genocide. The genocide was carried out during and after World War I and implemented
in two phases: the wholesale killing of the able-bodied male population through massacre and subjection of army conscripts
to forced labour, followed by the deportation of women, children, the elderly and infirm on death marches leading
to the Syrian desert. Driven forward by military escorts, the deportees were deprived of food and water and subjected
to periodic robbery, rape, and systematic massacre. Large-scale massacres were also committed against the Empire's
Greek and Assyrian minorities as part of the same campaign of ethnic cleansing. Before the reforms of the 19th and
20th centuries, the state organisation of the Ottoman Empire was a simple system that had two main dimensions, which
were the military administration and the civil administration. The Sultan was the highest position in the system.
The civil system was based on local administrative units based on the region's characteristics. The Ottomans practiced
a system in which the state (as in the Byzantine Empire) had control over the clergy. Certain pre-Islamic Turkish
traditions that had survived the adoption of administrative and legal practices from Islamic Iran remained important
in Ottoman administrative circles. According to Ottoman understanding, the state's primary responsibility was to
defend and extend the land of the Muslims and to ensure security and harmony within its borders within the overarching
context of orthodox Islamic practice and dynastic sovereignty. The Ottoman Empire or, as a dynastic institution,
the House of Osman was unprecedented and unequaled in the Islamic world for its size and duration. In Europe, only
the House of Habsburg had a similarly unbroken line of sovereigns (kings/emperors) from the same family who ruled
for so long, and during the same period, between the late 13th and early 20th centuries. The Ottoman dynasty was
Turkish in origin. On eleven occasions, the sultan was deposed (replaced by another sultan of the Ottoman dynasty,
who was either the former sultan's brother, son, or nephew) because he was perceived by his enemies as a threat to
the state. There were only two attempts in Ottoman history to unseat the ruling Ottoman dynasty, both failures, which
suggests a political system that for an extended period was able to manage its revolutions without unnecessary instability.
As such, the last Ottoman sultan Mehmed VI (r. 1918–1922) was a direct patrilineal (male-line) descendant of the
first Ottoman sultan Osman I (r. 1299–1326), which was unparallelled in both Europe (e.g. the male line of the House
of Habsburg became extinct in 1740) and in the Islamic world. The primary purpose of the Imperial Harem was to ensure
the birth of male heirs to the Ottoman throne and to secure the continuation of the direct patrilineal (male-line) descent of the Ottoman sultans. The highest position in Islam, the caliphate, was claimed by the sultans starting with Murad I, and was established as the Ottoman Caliphate. The Ottoman sultan, pâdişâh or "lord of kings", served as the Empire's
sole regent and was considered to be the embodiment of its government, though he did not always exercise complete
control. The Imperial Harem was one of the most important powers of the Ottoman court. It was ruled by the Valide
Sultan. On occasion, the Valide Sultan would become involved in state politics. For a time, the women of the Harem
effectively controlled the state in what was termed the "Sultanate of Women". New sultans were always chosen from
the sons of the previous sultan. The strong educational system of the palace school was geared towards eliminating
the unfit potential heirs, and establishing support among the ruling elite for a successor. The palace schools, which
would also educate the future administrators of the state, were not a single track. First, the Madrasa (Ottoman Turkish:
Medrese) was designated for the Muslims, and educated scholars and state officials according to Islamic tradition.
The financial burden of the Medrese was supported by vakifs, allowing children of poor families to move to higher
social levels and income. The second track was a free boarding school for the Christians, the Enderûn, which recruited
3,000 students annually from Christian boys between eight and twenty years old from one in forty families among the
communities settled in Rumelia or the Balkans, a process known as Devshirme (Devşirme). Though the sultan was the
supreme monarch, the sultan's political and executive authority was delegated. The politics of the state were handled by a number of advisors and ministers gathered in a council known as the Divan (after the 17th century it was renamed the "Porte").
The Divan, in the years when the Ottoman state was still a Beylik, was composed of the elders of the tribe. Its composition
was later modified to include military officers and local elites (such as religious and political advisors). Later
still, beginning in 1320, a Grand Vizier was appointed to assume certain of the sultan's responsibilities. The Grand
Vizier had considerable independence from the sultan with almost unlimited powers of appointment, dismissal and supervision.
Beginning with the late 16th century, sultans withdrew from politics and the Grand Vizier became the de facto head
of state. The Ottoman legal system applied religious law to its subjects. At the same time, the Qanun (or Kanun),
a secular legal system, co-existed with religious law or Sharia. The Ottoman Empire was always organized around a
system of local jurisprudence. Legal administration in the Ottoman Empire was part of a larger scheme of balancing
central and local authority. Ottoman power revolved crucially around the administration of the rights to land, which
left room for local authorities to address the needs of the local millet. The jurisdictional complexity of the Ottoman Empire was aimed at permitting the integration of culturally and religiously different groups. The Ottoman system
had three court systems: one for Muslims, one for non-Muslims, involving appointed Jews and Christians ruling over
their respective religious communities, and the "trade court". The entire system was regulated from above by means
of the administrative Qanun, i.e. laws, a system based upon the Turkic Yassa and Töre, which were developed in the
pre-Islamic era.[citation needed] These court categories were not, however, wholly exclusive: for instance, the Islamic
courts—which were the Empire's primary courts—could also be used to settle a trade conflict or disputes between litigants
of differing religions, and Jews and Christians often went to them to obtain a more forceful ruling on an issue.
The Ottoman state tended not to interfere with non-Muslim religious law systems, despite legally having a voice to
do so through local governors. The Islamic Sharia law system had been developed from a combination of the Qur'an;
the Hadīth, or words of the prophet Muhammad; ijmā', or consensus of the members of the Muslim community; qiyas,
a system of analogical reasoning from earlier precedents; and local customs. Both systems were taught at the Empire's
law schools, which were in Istanbul and Bursa. The Ottoman Islamic legal system was set up differently from traditional
European courts. Presiding over Islamic courts would be a Qadi, or judge. Since the closing of the ijtihad, or Gate
of Interpretation, Qadis throughout the Ottoman Empire focused less on legal precedent and more on local customs
and traditions in the areas that they administered. However, the Ottoman court system lacked an appellate structure,
leading to jurisdictional case strategies where plaintiffs could take their disputes from one court system to another
until they achieved a ruling that was in their favor. The 19th-century court reforms were based heavily on French models, as indicated
by the adoption of a three-tiered court system. Referred to as Nizamiye, this system was extended to the local magistrate
level with the final promulgation of the Mecelle, a civil code that regulated marriage, divorce, alimony, wills, and
other matters of personal status. In an attempt to clarify the division of judicial competences, an administrative
council laid down that religious matters were to be handled by religious courts, and statute matters were to be handled
by the Nizamiye courts. The first military unit of the Ottoman State was an army that was organized by Osman I from
the tribesmen inhabiting the hills of western Anatolia in the late 13th century. The military system became an intricate
organization with the advance of the Empire. The Ottoman military was a complex system of recruiting and fief-holding.
The main corps of the Ottoman Army included Janissary, Sipahi, Akıncı and Mehterân. The Ottoman army was once among
the most advanced fighting forces in the world, being one of the first to use muskets and cannons. The Ottoman Turks
began using falconets, which were short but wide cannons, during the Siege of Constantinople. The Ottoman cavalry
depended on high speed and mobility rather than heavy armour, using bows and short swords on fast Turkoman and Arabian
horses (progenitors of the Thoroughbred racing horse), and often applied tactics similar to those of the Mongol Empire,
such as pretending to retreat while surrounding the enemy forces inside a crescent-shaped formation and then making
the real attack. The decline in the army's performance became clear from the mid-17th century and after the Great
Turkish War. The 18th century saw some limited success against Venice, but in the north the European-style Russian
armies forced the Ottomans to concede land. The Ottoman Navy vastly contributed to the expansion of the Empire's
territories on the European continent. It initiated the conquest of North Africa, with the addition of Algeria and
Egypt to the Ottoman Empire in 1517. Starting with the loss of Greece in 1821 and Algeria in 1830, Ottoman naval
power and control over the Empire's distant overseas territories began to decline. Sultan Abdülaziz (reigned 1861–1876)
attempted to reestablish a strong Ottoman navy, building the largest fleet after those of Britain and France. The
shipyard at Barrow, England, built its first submarine in 1886 for the Ottoman Empire. However, the collapsing Ottoman
economy could not sustain the fleet's strength for long. Sultan Abdülhamid II distrusted the admirals who sided
with the reformist Midhat Pasha, and claimed that the large and expensive fleet was of no use against the Russians
during the Russo-Turkish War. He locked most of the fleet inside the Golden Horn, where the ships decayed for the
next 30 years. Following the Young Turk Revolution in 1908, the Committee of Union and Progress sought to develop
a strong Ottoman naval force. The Ottoman Navy Foundation was established in 1910 to buy new ships through public
donations. The establishment of Ottoman military aviation dates back to between June 1909 and July 1911. The Ottoman
Empire started preparing its first pilots and planes, and with the founding of the Aviation School (Tayyare Mektebi)
in Yeşilköy on 3 July 1912, the Empire began to tutor its own flight officers. The founding of the Aviation School
quickened advancement in the military aviation program, increased the number of enlisted persons within it, and gave
the new pilots an active role in the Ottoman Army and Navy. In May 1913 the world's first specialized Reconnaissance
Training Program was started by the Aviation School and the first separate reconnaissance division was established.[citation
needed] In June 1914 a new military academy, the Naval Aviation School (Bahriye Tayyare Mektebi) was founded. With
the outbreak of World War I, the modernization process stopped abruptly. The Ottoman aviation squadrons fought on
many fronts during World War I, from Galicia in the west to the Caucasus in the east and Yemen in the south. The Ottoman government deliberately pursued a policy of developing Bursa, Edirne, and Istanbul, successive Ottoman capitals,
into major commercial and industrial centres, considering that merchants and artisans were indispensable in creating
a new metropolis. To this end, Mehmed and his successor Bayezid also encouraged and welcomed the migration of Jews
from different parts of Europe, who were settled in Istanbul and other port cities like Salonica. In many places
in Europe, Jews were suffering persecution at the hands of their Christian counterparts, such as in Spain after the
conclusion of Reconquista. The tolerance displayed by the Turks was welcomed by the immigrants. The Ottoman economic
mind was closely related to the basic concepts of state and society in the Middle East in which the ultimate goal
of a state was the consolidation and extension of the ruler's power, and the way to reach it was to secure rich sources of revenue by making the productive classes prosperous. The ultimate aim was to increase state revenues without
damaging the prosperity of subjects to prevent the emergence of social disorder and to keep the traditional organization
of the society intact. The organization of the treasury and chancery was developed further under the Ottoman Empire than under any other Islamic government and, until the 17th century, was the leading such organization among all its contemporaries. It developed a scribal bureaucracy (known as "men of the pen") as a distinct group,
partly highly trained ulama, which developed into a professional body. The effectiveness of this professional financial
body stands behind the success of many great Ottoman statesmen. The economic structure of the Empire was defined
by its geopolitical structure. The Ottoman Empire stood between the West and the East, thus blocking the land route
eastward and forcing Spanish and Portuguese navigators to set sail in search of a new route to the Orient. The Empire
controlled the spice route that Marco Polo once used. When Vasco da Gama bypassed Ottoman controlled routes and established
direct trade links with India in 1498, and Christopher Columbus first journeyed to the Bahamas in 1492, the Ottoman
Empire was at its zenith. Modern Ottoman studies suggest that the change in relations between the Ottoman Turks and
central Europe was caused by the opening of the new sea routes. It is possible to see the decline in the significance
of the land routes to the East as Western Europe opened the ocean routes that bypassed the Middle East and Mediterranean
as parallel to the decline of the Ottoman Empire itself. The Anglo-Ottoman Treaty, also known as the Treaty of Balta
Liman, which opened the Ottoman markets directly to English and French competitors, can be seen as one of the staging
posts along this development. By developing commercial centres and routes, encouraging people to extend the area
of cultivated land in the country and international trade through its dominions, the state performed basic economic
functions in the Empire. But in all this the financial and political interests of the state were dominant. Within
the social and political system in which they lived, Ottoman administrators could not have comprehended or seen the
desirability of the dynamics and principles of the capitalist and mercantile economies developing in Western Europe.
The rise of port cities saw the clustering of populations caused by the development of steamships and railroads.
Urbanization increased from 1700 to 1922, with towns and cities growing. Improvements in health and sanitation made
them more attractive places to live and work in. Port cities grew rapidly: Salonica, in Greece, saw its population rise from 55,000 in 1800 to 160,000 in 1912, and İzmir, which had a population of 150,000 in 1800, grew to 300,000 by 1914. Some regions
conversely had population falls – Belgrade saw its population drop from 25,000 to 8,000 mainly due to political strife.
Economic and political migrations made an impact across the empire. For example, the Russian and Austria-Habsburg
annexation of the Crimean and Balkan regions respectively saw large influxes of Muslim refugees – 200,000 Crimean
Tartars fleeing to Dobruja. Between 1783 and 1913, approximately 5–7 million refugees flooded into the Ottoman Empire,
at least 3.8 million of whom were from Russia. Some migrations left indelible marks such as political tension between
parts of the empire (e.g. Turkey and Bulgaria), whereas in other territories centrifugal effects were noticed, with simpler demographics emerging from formerly diverse populations. Economies were also impacted by the loss of artisans, merchants,
manufacturers and agriculturists. Since the 19th century, a large proportion of Muslim peoples from the Balkans emigrated
to present-day Turkey. These people are called Muhacir. By the time the Ottoman Empire came to an end in 1922, half
of the urban population of Turkey was descended from Muslim refugees from Russia. Ottoman Turkish was the official
language of the Empire. It was an Oghuz Turkic language highly influenced by Persian and Arabic. The Ottomans had
several influential languages: Turkish, spoken by the majority of the people in Anatolia and by the majority of Muslims
of the Balkans except in Albania and Bosnia; Persian, only spoken by the educated; Arabic, spoken mainly in Arabia,
North Africa, Iraq, Kuwait, the Levant and parts of the Horn of Africa; and Somali throughout the Horn of Africa.
In the empire's last two centuries, however, usage of these languages became more limited and specific: Persian served mainly as a literary
language for the educated, while Arabic was used for religious rites. Because of a low literacy rate among the public
(about 2–3% until the early 19th century and just about 15% at the end of 19th century),[citation needed] ordinary
people had to hire scribes as "special request-writers" (arzuhâlcis) to be able to communicate with the government.
The ethnic groups continued to speak within their families and neighborhoods (mahalles) with their own languages
(e.g., Jews, Greeks, Armenians, etc.). In villages where two or more populations lived together, the inhabitants
would often speak each other's language. In cosmopolitan cities, people often spoke their family languages; many
of those who were not ethnic Turks spoke Turkish as a second language. Until the second half of the 15th century
the empire had a Christian majority, under the rule of a Muslim minority. In the late 19th century, the non-Muslim
population of the empire began to fall considerably, not only due to secession, but also because of migratory movements.
The proportion of Muslims amounted to 60% in the 1820s, gradually increasing to 69% in the 1870s and then to 76%
in the 1890s. By 1914, only 19.1% of the empire's population was non-Muslim, mostly made up of Christian Greeks,
Assyrians, Armenians, and Jews. Muslim sects regarded as heretical, such as the Druze, Ismailis, Alevis, and Alawites,
ranked below Jews and Christians. In 1514, Sultan Selim I, nicknamed "the Grim" because of his cruelty, ordered the
massacre of 40,000 Anatolian Alevis (Qizilbash), whom he considered heretics, reportedly proclaiming that "the killing
of one Alevi had as much otherworldly reward as killing 70 Christians."[page needed] Selim was also responsible for
an unprecedented and rapid expansion of the Ottoman Empire into the Middle East, especially through his conquest
of the entire Mamluk Sultanate of Egypt, which included much of the region. With these conquests, Selim further solidified
the Ottoman claim for being an Islamic caliphate, although Ottoman sultans had been claiming the title of caliph
since the 14th century starting with Murad I (reigned 1362 to 1389). The caliphate would remain held by Ottoman sultans
for the rest of the office's duration, which ended with its abolition on 3 March 1924 by the Grand National Assembly
of Turkey and the exile of the last caliph, Abdülmecid II, to France. Under the millet system, non-Muslim people
were considered subjects of the Empire, but were not subject to the Muslim faith or Muslim law. The Orthodox millet,
for instance, was still officially legally subject to Justinian's Code, which had been in effect in the Byzantine
Empire for 900 years. Also, as the largest group of non-Muslim subjects (or zimmi) of the Islamic Ottoman state,
the Orthodox millet was granted a number of special privileges in the fields of politics and commerce, and had to
pay higher taxes than Muslim subjects. The Ottomans absorbed some of the traditions, art and institutions of cultures
in the regions they conquered, and added new dimensions to them. Numerous traditions and cultural traits of previous
empires (in fields such as architecture, cuisine, music, leisure and government) were adopted by the Ottoman Turks,
who elaborated them into new forms, which resulted in a new and distinctively Ottoman cultural identity. Despite
newer added amalgamations, the Ottoman dynasty, like their predecessors in the Sultanate of Rum and the Seljuk Empire,
were thoroughly Persianised in their culture, language, habits and customs, and therefore, the empire has been described
as a Persianate empire. Intercultural marriages also played their part in creating the characteristic Ottoman elite
culture. When compared to the Turkish folk culture, the influence of these new cultures in creating the culture of
the Ottoman elite was clear. Ottoman Divan poetry was a highly ritualized and symbolic art form. From the Persian
poetry that largely inspired it, it inherited a wealth of symbols whose meanings and interrelationships—both of similitude
(مراعات نظير mura'ât-i nazîr / تناسب tenâsüb) and opposition (تضاد tezâd)—were more or less prescribed. Divan poetry
was composed through the constant juxtaposition of many such images within a strict metrical framework, thus allowing
numerous potential meanings to emerge. The vast majority of Divan poetry was lyric in nature: either gazels (which
make up the greatest part of the repertoire of the tradition), or kasîdes. There were, however, other common genres,
most particularly the mesnevî, a kind of verse romance and thus a variety of narrative poetry; the two most notable
examples of this form are the Leyli and Majnun of Fuzûlî and the Hüsn ü Aşk of Şeyh Gâlib. Until the 19th century,
Ottoman prose did not develop to the extent that contemporary Divan poetry did. A large part of the reason for this
was that much prose was expected to adhere to the rules of sec (سجع, also transliterated as seci), or rhymed prose,
a type of writing descended from the Arabic saj' and which prescribed that between each adjective and noun in a string
of words, such as a sentence, there must be a rhyme. Nevertheless, there was a tradition of prose in the literature
of the time, though exclusively non-fictional in nature. One apparent exception was Muhayyelât ("Fancies") by Giritli
Ali Aziz Efendi, a collection of stories of the fantastic written in 1796, though not published until 1867. The first
novel published in the Ottoman Empire was by an Armenian named Vartan Pasha. Published in 1851, the novel was entitled
The Story of Akabi (Turkish: Akabi Hikyayesi) and was written in Turkish but with Armenian script. Due to historically
close ties with France, French literature came to constitute the major Western influence on Ottoman literature throughout
the latter half of the 19th century. As a result, many of the same movements prevalent in France during this period
also had their Ottoman equivalents: in the developing Ottoman prose tradition, for instance, the influence of Romanticism
can be seen during the Tanzimat period, and that of the Realist and Naturalist movements in subsequent periods; in
the poetic tradition, on the other hand, it was the influence of the Symbolist and Parnassian movements that became
paramount. Many of the writers in the Tanzimat period wrote in several different genres simultaneously: for instance,
the poet Namik Kemal also wrote the important 1876 novel İntibâh ("Awakening"), while the journalist İbrahim Şinasi
is noted for writing, in 1860, the first modern Turkish play, the one-act comedy "Şair Evlenmesi" ("The Poet's Marriage").
An earlier play, a farce entitled "Vakâyi'-i 'Acibe ve Havâdis-i Garibe-yi Kefşger Ahmed" ("The Strange Events and
Bizarre Occurrences of the Cobbler Ahmed"), dates from the beginning of the 19th century, but there remains some
doubt about its authenticity. In a similar vein, the novelist Ahmed Midhat Efendi wrote important novels in each
of the major movements: Romanticism (Hasan Mellâh yâhud Sırr İçinde Esrâr, 1873; "Hasan the Sailor, or The Mystery
Within the Mystery"), Realism (Henüz On Yedi Yaşında, 1881; "Just Seventeen Years Old"), and Naturalism (Müşâhedât,
1891; "Observations"). This diversity was, in part, due to the Tanzimat writers' wish to disseminate as much of the
new literature as possible, in the hopes that it would contribute to a revitalization of Ottoman social structures.
Examples of Ottoman architecture of the classical period, besides Istanbul and Edirne, can also be seen in Egypt,
Eritrea, Tunisia, Algiers, the Balkans and Romania, where mosques, bridges, fountains and schools were built. The
art of Ottoman decoration developed with a multitude of influences due to the wide ethnic range of the Ottoman Empire.
The greatest of the court artists enriched the Ottoman Empire with many pluralistic artistic influences: such as
mixing traditional Byzantine art with elements of Chinese art. Ottoman illumination covers non-figurative painted
or drawn decorative art in books or on sheets in muraqqa or albums, as opposed to the figurative images of the Ottoman
miniature. It was a part of the Ottoman Book Arts together with the Ottoman miniature (taswir), calligraphy (hat),
Islamic calligraphy, bookbinding (cilt) and paper marbling (ebru). In the Ottoman Empire, illuminated and illustrated
manuscripts were commissioned by the Sultan or the administrators of the court. In Topkapi Palace, these manuscripts
were created by the artists working in Nakkashane, the atelier of the miniature and illumination artists. Both religious
and non-religious books could be illuminated. Sheets for albums (levha) also consisted of illuminated calligraphy (hat)
of tughra, religious texts, verses from poems or proverbs, and purely decorative drawings. The art of carpet weaving
was particularly significant in the Ottoman Empire, carpets having an immense importance both as decorative furnishings,
rich in religious and other symbolism, and as a practical consideration, as it was customary to remove one's shoes
in living quarters. The weaving of such carpets originated in the nomadic cultures of central Asia (carpets being
an easily transportable form of furnishing), and was eventually spread to the settled societies of Anatolia. Turks
used carpets, rugs and kilims not just on the floors of a room, but also as a hanging on walls and doorways, where
they provided additional insulation. They were also commonly donated to mosques, which often amassed large collections
of them. Ottoman classical music was an important part of the education of the Ottoman elite; a number of Ottoman
sultans were accomplished musicians and composers themselves, such as Selim III, whose compositions are often still
performed today. Ottoman classical music arose largely from a confluence of Byzantine music, Armenian music, Arabic
music, and Persian music. Compositionally, it is organised around rhythmic units called usul, which are somewhat
similar to meter in Western music, and melodic units called makam, which bear some resemblance to Western musical
modes. The instruments used are a mixture of Anatolian and Central Asian instruments (the saz, the bağlama, the kemence),
other Middle Eastern instruments (the ud, the tanbur, the kanun, the ney), and—later in the tradition—Western instruments
(the violin and the piano). Because of a geographic and cultural divide between the capital and other areas, two
broadly distinct styles of music arose in the Ottoman Empire: Ottoman classical music, and folk music. In the provinces,
several different kinds of folk music were created. The most dominant regions with their distinguished musical styles
are: Balkan-Thracian Türküs, North-Eastern (Laz) Türküs, Aegean Türküs, Central Anatolian Türküs, Eastern Anatolian
Türküs, and Caucasian Türküs. Some of the distinctive styles were: Janissary Music, Roma music, Belly dance, Turkish
folk music. Ottoman cuisine refers to the cuisine of the capital, Istanbul, and the regional capital cities, where
the melting pot of cultures created a common cuisine that most of the population regardless of ethnicity shared.
This diverse cuisine was honed in the Imperial Palace's kitchens by chefs brought from certain parts of the Empire
to create and experiment with different ingredients. The creations of the Ottoman Palace's kitchens filtered to the
population, for instance through Ramadan events, and through the cooking at the Yalıs of the Pashas, and from there
on spread to the rest of the population. Much of the cuisine of former Ottoman territories today is descended from
a shared Ottoman cuisine, especially Turkish cuisine, and including Greek cuisine, Balkan cuisine, Armenian cuisine,
and Middle Eastern cuisine. Many common dishes in the region, descendants of the once-common Ottoman cuisine, include
yogurt, döner kebab/gyro/shawarma, cacık/tzatziki, ayran, pita bread, feta cheese, baklava, lahmacun, moussaka, yuvarlak,
köfte/keftés/kofta, börek/boureki, rakı/rakia/tsipouro/tsikoudia, meze, dolma, sarma, rice pilaf, Turkish coffee,
sujuk, kashk, keşkek, manti, lavash, kanafeh, and more. Over the course of Ottoman history, the Ottomans managed
to build a large collection of libraries complete with translations of books from other cultures, as well as original
manuscripts. A great part of this desire for local and foreign manuscripts arose in the 15th century. Sultan Mehmet
II ordered Georgios Amiroutzes, a Greek scholar from Trabzon, to translate and make available to Ottoman educational
institutions the geography book of Ptolemy. Another example is Ali Qushji, an astronomer, mathematician, and physicist originally from Samarkand, who became a professor in two madrasas and influenced Ottoman circles through his writings and the activities of his students, even though he spent only two or three years in Istanbul before his death. The main sports in which Ottomans engaged were Turkish wrestling, hunting, Turkish archery, horseback riding,
equestrian javelin throw, arm wrestling, and swimming. European-model sports clubs were formed with the spreading popularity of football matches in 19th-century Constantinople. The leading clubs, in chronological order, were Beşiktaş
Gymnastics Club (1903), Galatasaray Sports Club (1905) and Fenerbahçe Sports Club (1907) in Istanbul. Football clubs
were formed in other provinces too, such as Karşıyaka Sports Club (1912), Altay Sports Club (1914) and Turkish Fatherland
Football Club (later Ülküspor) (1914) of İzmir.
Philosophy of space and time is the branch of philosophy concerned with the issues surrounding the ontology, epistemology,
and character of space and time. While such ideas have been central to philosophy from its inception, the philosophy
of space and time was both an inspiration for and a central aspect of early analytic philosophy. The subject focuses
on a number of basic issues, including whether or not time and space exist independently of the mind, whether they
exist independently of one another, what accounts for time's apparently unidirectional flow, whether times other
than the present moment exist, and questions about the nature of identity (particularly the nature of identity over
time). The earliest recorded Western philosophy of time was expounded by the ancient Egyptian thinker Ptahhotep (c.
2650–2600 BC), who said, "Do not lessen the time of following desire, for the wasting of time is an abomination to
the spirit." The Vedas, the earliest texts on Indian philosophy and Hindu philosophy, dating back to the late 2nd
millennium BC, describe ancient Hindu cosmology, in which the universe goes through repeated cycles of creation,
destruction, and rebirth, with each cycle lasting 4,320,000 years. Ancient Greek philosophers, including Parmenides
and Heraclitus, wrote essays on the nature of time. In Book 11 of St. Augustine's Confessions, he ruminates on the
nature of time, asking, "What then is time? If no one asks me, I know: if I wish to explain it to one that asketh,
I know not." He goes on to comment on the difficulty of thinking about time, pointing out the inaccuracy of common
speech: "For but few things are there of which we speak properly; of most things we speak improperly, still the things
intended are understood." But Augustine presented the first philosophical argument for the reality of Creation (against
Aristotle) in the context of his discussion of time, saying that knowledge of time depends on the knowledge of the
movement of things, and therefore time cannot be where there are no creatures to measure its passing (Confessions
Book XI ¶30; City of God Book XI ch.6). In the early 11th century, the Muslim physicist Ibn al-Haytham (Alhacen or
Alhazen) discussed space perception and its epistemological implications in his Book of Optics (1021). He also rejected
Aristotle's definition of topos (Physics IV) by way of geometric demonstrations and defined place as a mathematical
spatial extension. His experimental proof of the intromission model of vision led to changes in the understanding
of the visual perception of space, contrary to the previous emission theory of vision supported by Euclid and Ptolemy.
In "tying the visual perception of space to prior bodily experience, Alhacen unequivocally rejected the intuitiveness
of spatial perception and, therefore, the autonomy of vision. Without tangible notions of distance and size for correlation,
sight can tell us next to nothing about such things." In 1781, Immanuel Kant published the Critique of Pure Reason,
one of the most influential works in the history of the philosophy of space and time. He describes time as an a priori
notion that, together with other a priori notions such as space, allows us to comprehend sense experience. Kant denies
that either space or time are substance, entities in themselves, or learned by experience; he holds, rather, that
both are elements of a systematic framework we use to structure our experience. Spatial measurements are used to
quantify how far apart objects are, and temporal measurements are used to quantitatively compare the interval between
(or duration of) events. Although space and time are held to be transcendentally ideal in this sense, they are also
empirically real—that is, not mere illusions. Arguing against the absolutist position, Leibniz offers a number of
thought experiments with the purpose of showing that there is contradiction in assuming the existence of facts such
as absolute location and velocity. These arguments trade heavily on two principles central to his philosophy: the
principle of sufficient reason and the identity of indiscernibles. The principle of sufficient reason holds that
for every fact, there is a reason that is sufficient to explain what and why it is the way it is and not otherwise.
The identity of indiscernibles states that if there is no way of telling two entities apart, then they are one and
the same thing. The example Leibniz uses involves two proposed universes situated in absolute space. The only discernible
difference between them is that the second is positioned five feet to the left of the first. The example is only
possible if such a thing as absolute space exists. Such a situation, however, is not possible, according to Leibniz,
for if it were, a universe's position in absolute space would have no sufficient reason, as it might very well have
been anywhere else. Such a situation would therefore contradict the principle of sufficient reason, and there could exist two distinct universes that were in all ways indiscernible, thus contradicting the identity of indiscernibles. Standing out in
Clarke's (and Newton's) response to Leibniz's arguments is the bucket argument: Water in a bucket, hung from a rope
and set to spin, will start with a flat surface. As the water begins to spin in the bucket, the surface of the water
will become concave. If the bucket is stopped, the water will continue to spin, and while the spin continues, the
surface will remain concave. The concave surface is apparently not the result of the interaction of the bucket and
the water, since the surface is flat when the bucket first starts to spin, it becomes concave as the water starts
to spin, and it remains concave as the bucket stops. Leibniz describes a space that exists only as a relation between
objects, and which has no existence apart from the existence of those objects. Motion exists only as a relation between
those objects. Newtonian space provided the absolute frame of reference within which objects can have motion. In
Newton's system, the frame of reference exists independently of the objects contained within it. These objects can
be described as moving in relation to space itself. For many centuries, the evidence of a concave water surface held
authority. Mach suggested that thought experiments like the bucket argument are problematic. If we were to imagine
a universe that only contains a bucket, on Newton's account, this bucket could be set to spin relative to absolute
space, and the water it contained would form the characteristic concave surface. But in the absence of anything else
in the universe, it would be difficult to confirm that the bucket was indeed spinning. It seems equally possible
that the surface of the water in the bucket would remain flat. Mach argued that, in effect, the water in the experiment carried out in an otherwise empty universe would remain flat. But if another object were introduced into this universe, perhaps
a distant star, there would now be something relative to which the bucket could be seen as rotating. The water inside
the bucket could possibly have a slight curve. On this account, to explain the curve that we actually observe, each increase in the number of objects in the universe also increases the curvature of the water. Mach argued that the momentum of an object,
whether angular or linear, exists as a result of the sum of the effects of other objects in the universe (Mach's
Principle). Albert Einstein proposed that the laws of physics should be based on the principle of relativity. This
principle holds that the rules of physics must be the same for all observers, regardless of the frame of reference
that is used, and that light propagates at the same speed in all reference frames. This theory was motivated by Maxwell's
equations, which show that electromagnetic waves propagate in a vacuum at the speed of light. However, Maxwell's
equations give no indication of what this speed is relative to. Prior to Einstein, it was thought that this speed
was relative to a fixed medium, called the luminiferous ether. In contrast, the theory of special relativity postulates
that light propagates at the same speed in all inertial frames, and examines the implications of this postulate.
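The postulate that light propagates at the same speed in every inertial frame can be checked numerically with a minimal sketch of the one-dimensional Lorentz transformation; the event coordinates and boost velocity below are arbitrary illustrative values.

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz boost of the event (t, x) into a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# A light pulse emitted from the origin reaches x = 3 at t = 3 (units with
# c = 1), so its measured speed in this frame is x / t = 1.
t, x = 3.0, 3.0

# View the same event from a frame moving at 60% of the speed of light.
t2, x2 = boost(t, x, v=0.6)

# The coordinates change, but the measured speed of the pulse is still c,
# and the spacetime interval t^2 - x^2 (zero for light) is unchanged.
print(x2 / t2)        # 1.0
print(t2**2 - x2**2)  # 0.0 (up to floating-point error)
```

Any boost velocity with |v| < c gives the same outcome: the coordinates of the pulse change, but the ratio x/t and the interval t^2 - x^2 do not.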
In classical physics, an inertial reference frame is one in which an object that experiences no forces does not accelerate.
In general relativity, an inertial frame of reference is one that is following a geodesic of space-time. An object
that moves against a geodesic experiences a force. An object in free fall does not experience a force, because it
is following a geodesic. An object standing on the earth, however, will experience a force, as it is being held against
the geodesic by the surface of the planet. In light of this, the bucket of water rotating in empty space will experience
a force because it rotates with respect to the geodesic. The water will become concave, not because it is rotating
with respect to the distant stars, but because it is rotating with respect to the geodesic. Einstein partially advocates
Mach's principle in that distant stars explain inertia because they provide the gravitational field against which
acceleration and inertia occur. But contrary to Leibniz's account, this warped space-time is as integral a part of
an object as are its other defining characteristics, such as volume and mass. If one holds, contrary to idealist
beliefs, that objects exist independently of the mind, it seems that relativistic physics commits them to also hold that
space and temporality have exactly the same type of independent existence. Coordinative definition has two major
features. The first has to do with coordinating units of length with certain physical objects. This is motivated
by the fact that we can never directly apprehend length. Instead we must choose some physical object, say the Standard
Metre at the Bureau International des Poids et Mesures (International Bureau of Weights and Measures), or the wavelength
of cadmium to stand in as our unit of length. The second feature deals with separated objects. Although we can, presumably,
directly test the equality of length of two measuring rods when they are next to one another, we can not find out
as much for two rods distant from one another. Even supposing that two rods, whenever brought near to one another
are seen to be equal in length, we are not justified in stating that they are always equal in length. This impossibility
undermines our ability to decide the equality of length of two distant objects. Sameness of length, rather,
must be set by definition. In the classical case, the invariance, or symmetry, group and the covariance group coincide,
but, interestingly enough, they part ways in relativistic physics. The symmetry group of the general theory of relativity
includes all differentiable transformations, i.e., all properties of an object are dynamical, in other words there
are no absolute objects. The formulations of the general theory of relativity, unlike those of classical mechanics,
do not share a standard, i.e., there is no single formulation paired with transformations. As such the covariance
group of the general theory of relativity is just the covariance group of every theory. The problem of the direction
of time arises directly from two contradictory facts. Firstly, the fundamental physical laws are time-reversal invariant;
if a cinematographic film were taken of any process describable by means of the aforementioned laws and then played
backwards, it would still portray a physically possible process. Secondly, our experience of time, at the macroscopic
level, is not time-reversal invariant. Glasses can fall and break, but shards of glass cannot reassemble and fly
up onto tables. We have memories of the past, and none of the future. We feel we can't change the past but can influence
the future. But in statistical mechanics things get more complicated. On one hand, statistical mechanics is far superior
to classical thermodynamics, in that thermodynamic behavior, such as glass breaking, can be explained by the fundamental
laws of physics paired with a statistical postulate. But statistical mechanics, unlike classical thermodynamics,
is time-reversal symmetric. The second law of thermodynamics, as it arises in statistical mechanics, merely states
that it is overwhelmingly likely that net entropy will increase, but it is not an absolute law. A third type of solution
to the problem of the direction of time, although much less represented, argues that the laws are not time-reversal
symmetric. For example, certain processes in quantum mechanics, relating to the weak nuclear force, are not time-reversible, bearing in mind that in quantum mechanics time-reversibility carries a more complex definition.
But this type of solution is insufficient because 1) the time-asymmetric phenomena in quantum mechanics are too few
to account for the uniformity of macroscopic time-asymmetry and 2) it relies on the assumption that quantum mechanics
is the final or correct description of physical processes. One recent proponent of the laws solution is Tim Maudlin, who argues that the fundamental laws of physics are laws of temporal evolution (see Maudlin). However,
elsewhere Maudlin argues: "[the] passage of time is an intrinsic asymmetry in the temporal structure of the world...
It is the asymmetry that grounds the distinction between sequences that runs from past to future and sequences which
run from future to past" [ibid, 2010 edition, p. 108]. Thus it is arguably difficult to assess whether Maudlin is
suggesting that the direction of time is a consequence of the laws or is itself primitive. The problem of the flow
of time, as it has been treated in analytic philosophy, owes its beginning to a paper written by J. M. E. McTaggart.
In this paper McTaggart proposes two "temporal series". The first series, which means to account for our intuitions
about temporal becoming, or the moving Now, is called the A-series. The A-series orders events according to their
being in the past, present or future, simpliciter and in comparison to each other. The B-series eliminates all reference
to the present, and the associated temporal modalities of past and future, and orders all events by the temporal
relations earlier than and later than. According to Presentism, time is an ordering of various realities. At a certain
time some things exist and others do not. This is the only reality we can deal with and we cannot for example say
that Homer exists because at the present time he does not. An Eternalist, on the other hand, holds that time is a
dimension of reality on a par with the three spatial dimensions, and hence that all things—past, present, and future—can
be said to be just as real as things in the present. According to this theory, then, Homer really does exist, though
we must still use special language when talking about somebody who exists at a distant time—just as we would use
special language when talking about something far away (the very words near, far, above, below, and such are directly
comparable to phrases such as in the past, a minute ago, and so on). The positions on the persistence of objects
are somewhat similar. An endurantist holds that for an object to persist through time is for it to exist completely
at different times (each instance of existence we can regard as somehow separate from previous and future instances,
though still numerically identical with them). A perdurantist on the other hand holds that for a thing to exist through
time is for it to exist as a continuous reality, and that when we consider the thing as a whole we must consider
an aggregate of all its "temporal parts" or instances of existing. Endurantism is seen as the conventional view and
flows out of our pre-philosophical ideas (when I talk to somebody I think I am talking to that person as a complete
object, and not just a part of a cross-temporal being), but perdurantists have attacked this position. (An example
of a perdurantist is David Lewis.) One argument perdurantists use to state the superiority of their view is that
perdurantism is able to take account of change in objects. The asymmetry of causation can also be observed in a non-arbitrary
way which is not metaphysical in the case of a human hand dropping a cup of water which smashes into fragments on
a hard floor, spilling the liquid. In this order, the causes of the resultant pattern of cup fragments and water
spill is easily attributable in terms of the trajectory of the cup, irregularities in its structure, angle of its
impact on the floor, etc. However, applying the same event in reverse, it is difficult to explain why the various
pieces of the cup should fly up into the human hand and reassemble precisely into the shape of a cup, or why the
water should position itself entirely within the cup. The causes of the resultant structure and shape of the cup
and the encapsulation of the water by the hand within the cup are not easily attributable, as neither hand nor floor
can achieve such formations of the cup or water. This asymmetry is perceivable on account of two features: i) the
relationship between the agent capacities of the human hand (i.e., what it is and is not capable of and what it is
for) and non-animal agency (i.e., what floors are and are not capable of and what they are for) and ii) that the
pieces of cup came to possess exactly the nature and number of those of a cup before assembling. In short, such asymmetry
is attributable to the relationship between temporal direction on the one hand and the implications of form and functional
capacity on the other.
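The earlier point that statistical mechanics makes net entropy increase overwhelmingly likely, rather than strictly necessary, can be made concrete with a toy simulation. The Ehrenfest urn model below is a standard illustration, not drawn from this article, and its parameters are arbitrary.

```python
import random

random.seed(0)

# Ehrenfest urn model: N particles in a box with two halves. Each step,
# one particle chosen uniformly at random hops to the other half. The
# microscopic rule is reversible, yet the macrostate drifts toward balance.
N, steps = 1000, 20000
left = N                      # start with every particle on the left

history = []
for _ in range(steps):
    if random.random() < left / N:
        left -= 1             # a left-side particle hops right
    else:
        left += 1             # a right-side particle hops left
    history.append(left)

# After many steps the count hovers near the even split (N/2 = 500):
# not because reverse moves are forbidden, but because balanced
# macrostates vastly outnumber lopsided ones.
print(history[-1])
```

Each individual hop is as lawful as its inverse; the drift toward balance, and hence the apparent arrow, comes from counting macrostates, and a brief excursion back toward imbalance remains possible but overwhelmingly unlikely.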
Traditionally considered the last part of the Stone Age, the Neolithic followed the terminal Holocene Epipaleolithic period
and commenced with the beginning of farming, which produced the "Neolithic Revolution". It ended when metal tools
became widespread (in the Copper Age or Bronze Age; or, in some geographical regions, in the Iron Age). The Neolithic
is a progression of behavioral and cultural characteristics and changes, including the use of wild and domestic crops
and of domesticated animals. The beginning of the Neolithic culture is considered to be in the Levant (Jericho, modern-day
West Bank) about 10,200 – 8,800 BC. It developed directly from the Epipaleolithic Natufian culture in the region,
whose people pioneered the use of wild cereals, which then evolved into true farming. The Natufian period was between
12,000 and 10,200 BC, and the so-called "proto-Neolithic" is now included in the Pre-Pottery Neolithic (PPNA) between
10,200 and 8,800 BC. As the Natufians had become dependent on wild cereals in their diet, and a sedentary way of
life had begun among them, the climatic changes associated with the Younger Dryas are thought to have forced people
to develop farming. Not all of these cultural elements characteristic of the Neolithic appeared everywhere in the
same order: the earliest farming societies in the Near East did not use pottery. In other parts of the world, such
as Africa, South Asia and Southeast Asia, independent domestication events led to their own regionally distinctive
Neolithic cultures that arose completely independent of those in Europe and Southwest Asia. Early Japanese societies
and other East Asian cultures used pottery before developing agriculture. The Neolithic 1 (PPNA) period began roughly
10,000 years ago in the Levant. A temple area in southeastern Turkey at Göbekli Tepe dated around 9,500 BC may be
regarded as the beginning of the period. This site was developed by nomadic hunter-gatherer tribes, evidenced by
the lack of permanent housing in the vicinity, and it may be the oldest known human-made place of worship. At least seven
stone circles, covering 25 acres (10 ha), contain limestone pillars carved with animals, insects, and birds. Stone
tools were used by perhaps as many as hundreds of people to create the pillars, which might have supported roofs.
Other early PPNA sites dating to around 9,500 to 9,000 BCE have been found in Jericho in the West Bank, in Israel (notably Ain Mallaha, Nahal Oren, and Kfar HaHoresh), at Gilgal in the Jordan Valley, and at Byblos, Lebanon. The start of Neolithic 1 overlaps
the Tahunian and Heavy Neolithic periods to some degree. The Neolithic 2 (PPNB) began around 8,800 BCE according to the ASPRO chronology in the Levant (Jericho, West Bank). As with the PPNA dates, there are two versions from the same laboratories noted above. This system of terminology, however, is not convenient for southeast Anatolia and settlements of the middle Anatolia basin. In the Levant the Neolithic followed the Epipaleolithic rather than a Mesolithic era. A settlement
of 3,000 inhabitants was found in the outskirts of Amman, Jordan. Considered to be one of the largest prehistoric
settlements in the Near East, called 'Ain Ghazal, it was continuously inhabited from approximately 7,250 to 5,000 BC. Around 10,200 BC the first fully developed Neolithic cultures belonging to the phase Pre-Pottery Neolithic A (PPNA)
appeared in the fertile crescent. Around 10,700 to 9,400 BC a settlement was established in Tell Qaramel, 10 miles
north of Aleppo. The settlement included two temples dating back to 9,650 BC. Around 9000 BC during the PPNA, one of the
world's first towns, Jericho, appeared in the Levant. It was surrounded by a stone and marble wall and contained
a population of 2000–3000 people and a massive stone tower. Around 6,400 BC the Halaf culture appeared in Lebanon,
Israel and Palestine, Syria, Anatolia, and Northern Mesopotamia and subsisted on dryland agriculture. In 1981 a team
of researchers from the Maison de l'Orient et de la Méditerranée, including Jacques Cauvin and Oliver Aurenche divided
Near East neolithic chronology into ten periods (0 to 9) based on social, economic and cultural characteristics.
In 2002 Danielle Stordeur and Frédéric Abbès advanced this system with a division into five periods: (1) Natufian, between 12,000 and 10,200 BC; (2) Khiamian, between 10,200 and 8,800 BC; (3) PPNA (Sultanian at Jericho, Mureybetian) and early PPNB (PPNB ancien), between 8,800 and 7,600 BC, with middle PPNB (PPNB moyen) between 7,600 and 6,900 BC; (4) late PPNB (PPNB récent), between 7,500 and 7,000 BC; and (5) a PPNB (sometimes called PPNC) transitional stage (PPNB final), in which Halaf and dark-faced burnished ware begin to emerge, between 6,900 and 6,400 BC. They also advanced the idea of a transitional stage
between the PPNA and PPNB between 8,800 and 8,600 BC at sites like Jerf el Ahmar and Tell Aswad. Domestication of
sheep and goats reached Egypt from the Near East possibly as early as 6,000 BC. Graeme Barker states "The first indisputable
evidence for domestic plants and animals in the Nile valley is not until the early fifth millennium bc in northern
Egypt and a thousand years later further south, in both cases as part of strategies that still relied heavily on
fishing, hunting, and the gathering of wild plants" and suggests that these subsistence changes were not due to farmers
migrating from the Near East but were an indigenous development, with cereals either indigenous or obtained through
exchange. Other scholars argue that the primary stimulus for agriculture and domesticated animals (as well as mud-brick
architecture and other Neolithic cultural features) in Egypt was from the Middle East. In southeast Europe agrarian
societies first appeared in the 7th millennium BC, attested by one of the earliest farming sites of Europe, discovered
in Vashtëmi, southeastern Albania and dating back to 6,500 BC. Anthropomorphic figurines have been found in the Balkans
from 6000 BC, and in Central Europe by c. 5800 BC (La Hoguette). Among the earliest cultural complexes of this area
are the Sesklo culture in Thessaly, which later expanded in the Balkans giving rise to Starčevo-Körös (Cris), Linearbandkeramik,
and Vinča. Through a combination of cultural diffusion and migration of peoples, the Neolithic traditions spread
west and northwards to reach northwestern Europe by around 4500 BC. The Vinča culture may have created the earliest
system of writing, the Vinča signs, though archaeologist Shan Winn believes they most likely represented pictograms
and ideograms rather than a truly developed form of writing. The Cucuteni-Trypillian culture built enormous settlements
in Romania, Moldova and Ukraine from 5300 to 2300 BC. The megalithic temple complexes of Ġgantija on the Mediterranean
island of Gozo (in the Maltese archipelago) and of Mnajdra (Malta) are notable for their gigantic Neolithic structures,
the oldest of which date back to c. 3600 BC. The Hypogeum of Ħal-Saflieni, Paola, Malta, is a subterranean structure
excavated c. 2500 BC; originally a sanctuary, it became a necropolis, the only prehistoric underground temple in
the world, and showing a degree of artistry in stone sculpture unique in prehistory to the Maltese islands. After
2500 BC, the Maltese Islands were depopulated for several decades until the arrival of a new influx of Bronze Age
immigrants, a culture that cremated its dead and introduced smaller megalithic structures called dolmens to Malta.
In most cases these consist of small chambers, with the cover made of a large slab placed on upright stones. They are claimed to belong to a population certainly different from the one that built the previous megalithic temples. It is
presumed the population arrived from Sicily because of the similarity of Maltese dolmens to some small constructions
found in the largest island of the Mediterranean sea. In 2012, news was released about a new farming site discovered
in Munam-ri, Goseong, Gangwon Province, South Korea, which may be the earliest farmland known to date in east Asia.
"No remains of an agricultural field from the Neolithic period have been found in any East Asian country before,
the institute said, adding that the discovery reveals that the history of agricultural cultivation at least began
during the period on the Korean Peninsula". The farm was dated between 3600 and 3000 B.C. Pottery, stone projectile
points, and possible houses were also found. "In 2002, researchers discovered prehistoric earthenware, jade earrings,
among other items in the area". The research team will perform accelerator mass spectrometry (AMS) dating to retrieve
a more precise date for the site. In Mesoamerica, a similar set of events (i.e., crop domestication and sedentary
lifestyles) occurred by around 4500 BC, but possibly as early as 11,000–10,000 BC. These cultures are usually not
referred to as belonging to the Neolithic; in America different terms are used such as Formative stage instead of
mid-late Neolithic, Archaic Era instead of Early Neolithic and Paleo-Indian for the preceding period. The Formative
stage is equivalent to the Neolithic Revolution period in Europe, Asia, and Africa. In the Southwestern United States
it occurred from 500 to 1200 C.E. when there was a dramatic increase in population and development of large villages
supported by agriculture based on dryland farming of maize, and later, beans, squash, and domesticated turkeys. During
this period the bow and arrow and ceramic pottery were also introduced. During most of the Neolithic age of Eurasia,
people lived in small tribes composed of multiple bands or lineages. There is little scientific evidence of developed
social stratification in most Neolithic societies; social stratification is more associated with the later Bronze
Age. Although some late Eurasian Neolithic societies formed complex stratified chiefdoms or even states, states evolved
in Eurasia only with the rise of metallurgy, and most Neolithic societies on the whole were relatively simple and
egalitarian. Beyond Eurasia, however, states were formed during the local Neolithic in three areas, namely in the
Preceramic Andes with the Norte Chico Civilization, Formative Mesoamerica and Ancient Hawaiʻi. However, most Neolithic
societies were noticeably more hierarchical than the Paleolithic cultures that preceded them and hunter-gatherer
cultures in general. The domestication of large animals (c. 8000 BC) resulted in a dramatic increase in social inequality
in most of the areas where it occurred; New Guinea being a notable exception. Possession of livestock allowed competition
between households and resulted in inherited inequalities of wealth. Neolithic pastoralists who controlled large
herds gradually acquired more livestock, and this made economic inequalities more pronounced. However, evidence of
social inequality is still disputed, as settlements such as Catal Huyuk reveal a striking lack of difference in the
size of homes and burial sites, suggesting a more egalitarian society with no evidence of the concept of capital,
although some homes do appear slightly larger or more elaborately decorated than others. Families and households
were still largely independent economically, and the household was probably the center of life. However, excavations
in Central Europe have revealed that early Neolithic Linear Ceramic cultures ("Linearbandkeramik") were building
large arrangements of circular ditches between 4800 BC and 4600 BC. These structures (and their later counterparts
such as causewayed enclosures, burial mounds, and henges) required considerable time and labour to construct, which
suggests that some influential individuals were able to organise and direct human labour — though non-hierarchical
and voluntary work remain possibilities. There is a large body of evidence for fortified settlements at Linearbandkeramik
sites along the Rhine, as at least some villages were fortified for some time with a palisade and an outer ditch.
Settlements with palisades and weapon-traumatized bones have been discovered; the Talheim Death Pit, for example, demonstrates "...systematic violence between groups", and warfare was probably much more common during the Neolithic than in the
preceding Paleolithic period. This supplanted an earlier view of the Linear Pottery Culture as living a "peaceful,
unfortified lifestyle". Control of labour and inter-group conflict is characteristic of corporate-level or 'tribal'
groups, headed by a charismatic individual; whether a 'big man' or a proto-chief, functioning as a lineage-group
head. Whether a non-hierarchical system of organization existed is debatable, and there is no evidence that explicitly
suggests that Neolithic societies functioned under any dominating class or individual, as was the case in the chiefdoms
of the European Early Bronze Age. Theories to explain the apparent implied egalitarianism of Neolithic (and Paleolithic)
societies have arisen, notably the Marxist concept of primitive communism. The shelter of the early people changed
dramatically from the Paleolithic to the Neolithic era. In the Paleolithic, people did not normally live in permanent constructions. In the Neolithic, mud-brick houses started appearing, coated with plaster. The growth of
agriculture made permanent houses possible. Doorways were made on the roof, with ladders positioned both on the inside
and outside of the houses. The roof was supported by beams from the inside. The rough ground was covered by platforms,
mats, and skins on which residents slept. Stilt-house settlements were common in the Alpine and Pianura Padana (Terramare)
region. Remains have been found at the Ljubljana Marshes in Slovenia and at the Mondsee and Attersee lakes in Upper
Austria, for example. A significant and far-reaching shift in human subsistence and lifestyle was to be brought about
in areas where crop farming and cultivation were first developed: the previous reliance on an essentially nomadic
hunter-gatherer subsistence technique or pastoral transhumance was at first supplemented, and then increasingly replaced
by, a reliance upon the foods produced from cultivated lands. These developments are also believed to have greatly
encouraged the growth of settlements, since it may be supposed that the increased need to spend more time and labor
in tending crop fields required more localized dwellings. This trend would continue into the Bronze Age, eventually
giving rise to permanently settled farming towns, and later cities and states whose larger populations could be sustained
by the increased productivity from cultivated lands. However, early farmers were also adversely affected in times
of famine, such as may be caused by drought or pests. In instances where agriculture had become the predominant way
of life, the sensitivity to these shortages could be particularly acute, affecting agrarian populations to an extent
that otherwise may not have been routinely experienced by prior hunter-gatherer communities. Nevertheless, agrarian
communities generally proved successful, and their growth and the expansion of territory under cultivation continued.
Another significant change undergone by many of these newly agrarian communities was one of diet. Pre-agrarian diets
varied by region, season, available local plant and animal resources and degree of pastoralism and hunting. Post-agrarian
diet was restricted to a limited package of successfully cultivated cereal grains, plants and to a variable extent
domesticated animals and animal products. Supplementation of diet by hunting and gathering was to variable degrees
precluded by the increase in population above the carrying capacity of the land and a high sedentary local population
concentration. In some cultures, there would have been a significant shift toward increased starch and plant protein.
The relative nutritional benefits and drawbacks of these dietary changes and their overall impact on early societal
development is still debated. Neolithic people were skilled farmers, manufacturing a range of tools necessary for
the tending, harvesting and processing of crops (such as sickle blades and grinding stones) and food production (e.g.
pottery, bone implements). They were also skilled manufacturers of a range of other types of stone tools and ornaments,
including projectile points, beads, and statuettes. But above all other tools, it was the polished stone axe that allowed forest clearance on a large scale. Together with the adze, used to fashion wood for shelters, structures, and canoes, it enabled them to exploit their newly won farmland. Neolithic peoples in the Levant, Anatolia, Syria, northern
Mesopotamia and Central Asia were also accomplished builders, utilizing mud-brick to construct houses and villages.
At Çatalhöyük, houses were plastered and painted with elaborate scenes of humans and animals. In Europe, long houses
built from wattle and daub were constructed. Elaborate tombs were built for the dead. These tombs are particularly
numerous in Ireland, where many thousands still exist. Neolithic people in the British Isles built long barrows and chamber tombs for their dead, as well as causewayed camps, henges, flint mines and cursus monuments. It
was also important to figure out ways of preserving food for future months, such as fashioning relatively airtight
containers, and using substances like salt as preservatives. Most clothing appears to have been made of animal skins,
as indicated by finds of large numbers of bone and antler pins which are ideal for fastening leather. Wool cloth
and linen might have become available during the later Neolithic, as suggested by finds of perforated stones which
(depending on size) may have served as spindle whorls or loom weights. The clothing worn in the Neolithic Age might
be similar to that worn by Ötzi the Iceman, although he was not Neolithic (since he belonged to the later Copper Age).
Friedrich Hayek CH (German: [ˈfʁiːdʁɪç ˈaʊ̯ɡʊst ˈhaɪ̯ɛk]; 8 May 1899 – 23 March 1992), born in Austria-Hungary as Friedrich
August von Hayek and frequently referred to as F. A. Hayek, was an Austrian and British economist and philosopher
best known for his defense of classical liberalism. Hayek shared the 1974 Nobel Memorial Prize in Economic Sciences
with Gunnar Myrdal for his "pioneering work in the theory of money and economic fluctuations and ... penetrating
analysis of the interdependence of economic, social and institutional phenomena." In 1984, he was appointed a member
of the Order of the Companions of Honour by Queen Elizabeth II on the advice of Prime Minister Margaret Thatcher
for his "services to the study of economics". He was the first recipient of the Hanns Martin Schleyer Prize in 1984.
He also received the US Presidential Medal of Freedom in 1991 from President George H. W. Bush. In 2011, his article
"The Use of Knowledge in Society" was selected as one of the top 20 articles published in The American Economic Review
during its first 100 years. Friedrich August von Hayek was born in Vienna to August von Hayek and Felicitas Hayek
(née von Juraschek). Friedrich's father, from whom he received his middle name, was also born in Vienna in 1871.
He was a medical doctor employed by the municipal ministry of health, with a passion for botany, on which he wrote a number of monographs. August von Hayek was also a part-time botany lecturer at the University of Vienna. Friedrich's
mother was born in 1875 to a wealthy, conservative, land-owning family. As her mother died several years prior to
Friedrich's birth, Felicitas gained a significant inheritance which provided as much as half of her and August's
income during the early years of their marriage. Hayek was the oldest of three brothers; the younger two, Heinrich (1900–69) and Erich (1904–86), were one-and-a-half and five years younger than him. His father's career as a university professor
influenced Friedrich's goals later in life. Both of his grandfathers, who lived long enough for Friedrich to know
them, were scholars. Franz von Juraschek was a leading economist in Austria-Hungary and a close friend of Eugen Böhm
von Bawerk, one of the founders of the Austrian School of Economics. Von Juraschek was a statistician and was later
employed by the Austrian government. Friedrich's paternal grandfather, Gustav Edler von Hayek, taught natural sciences
at the Imperial Realobergymnasium (secondary school) in Vienna. He wrote systematic works in biology, some of which
are relatively well known. On his mother's side, Hayek was second cousin to the philosopher Ludwig Wittgenstein.
His mother often played with Wittgenstein's sisters, and had known Ludwig well. As a result of their family relationship,
Hayek became one of the first to read Wittgenstein's Tractatus Logico-Philosophicus when the book was published in
its original German edition in 1921. Although Hayek met Wittgenstein on only a few occasions, Hayek said that Wittgenstein's
philosophy and methods of analysis had a profound influence on his own life and thought. In his later years, Hayek
recalled a discussion of philosophy with Wittgenstein, when both were officers during World War I. After Wittgenstein's
death, Hayek had intended to write a biography of Wittgenstein and worked on collecting family materials; and he
later assisted biographers of Wittgenstein. Hayek displayed an intellectual and academic bent from a very young age.
He read fluently and frequently before going to school. At his father's suggestion, Hayek, as a teenager, read the
genetic and evolutionary works of Hugo de Vries and the philosophical works of Ludwig Feuerbach. In school Hayek
was much taken by one instructor's lectures on Aristotle's ethics. In his unpublished autobiographical notes, Hayek
recalled a division between him and his younger brothers, who were only a few years younger than him, but he believed
that they were somehow of a different generation. He preferred to associate with adults. At the University of Vienna,
Hayek earned doctorates in law and political science in 1921 and 1923 respectively; and he also studied philosophy,
psychology, and economics. For a short time, when the University of Vienna closed, Hayek studied in Constantin von
Monakow's Institute of Brain Anatomy, where Hayek spent much of his time staining brain cells. Hayek's time in Monakow's
lab, and his deep interest in the work of Ernst Mach, inspired Hayek's first intellectual project, eventually published
as The Sensory Order (1952). It located connective learning at the physical and neurological levels, rejecting the
"sense data" associationism of the empiricists and logical positivists. Hayek presented his work to the private seminar
he had created with Herbert Furth called the Geistkreis. During Hayek's years at the University of Vienna, Carl Menger's
work on the explanatory strategy of social science and Friedrich von Wieser's commanding presence in the classroom
left a lasting influence on him. Upon the completion of his examinations, Hayek was hired by Ludwig von Mises on
the recommendation of Wieser as a specialist for the Austrian government working on the legal and economic details
of the Treaty of Saint Germain. Between 1923 and 1924 Hayek worked as a research assistant to Prof. Jeremiah Jenks
of New York University, compiling macroeconomic data on the American economy and the operations of the US Federal
Reserve. Initially sympathetic to Wieser's democratic socialism, Hayek's economic thinking shifted away from socialism
and toward the classical liberalism of Carl Menger after reading von Mises' book Socialism. It was sometime after
reading Socialism that Hayek began attending von Mises' private seminars, joining several of his university friends,
including Fritz Machlup, Alfred Schutz, Felix Kaufmann, and Gottfried Haberler, who were also participating in Hayek's
own, more general, private seminar. It was during this time that he also encountered and befriended noted political
philosopher Eric Voegelin, with whom he retained a long-standing relationship. With the help of Mises, in the late
1920s Hayek founded and served as director of the Austrian Institute for Business Cycle Research, before joining
the faculty of the London School of Economics (LSE) in 1931 at the behest of Lionel Robbins. Upon his arrival in
London, Hayek was quickly recognised as one of the leading economic theorists in the world, and his development of
the economics of processes in time and the co-ordination function of prices inspired the ground-breaking work of
John Hicks, Abba Lerner, and many others in the development of modern microeconomics. Hayek was concerned about the
general view in Britain's academia that fascism was a capitalist reaction to socialism and The Road to Serfdom arose
from those concerns. It was written between 1940 and 1943. The title was inspired by the French classical liberal
thinker Alexis de Tocqueville's writings on the "road to servitude." It was first published in Britain by Routledge
in March 1944 and was quite popular, leading Hayek to call it "that unobtainable book," also due in part to wartime
paper rationing. When it was published in the United States by the University of Chicago in September of that year,
it achieved greater popularity than in Britain. At the arrangement of editor Max Eastman, the American magazine Reader's
Digest also published an abridged version in April 1945, enabling The Road to Serfdom to reach a far wider audience
than academics. The book is widely popular among those advocating individualism and classical liberalism. In 1950,
Hayek left the London School of Economics for the University of Chicago, where he became a professor in the Committee
on Social Thought. Hayek's salary was funded not by the university, but by an outside foundation. University of Chicago
President Robert Hutchins was in the midst of a war with the U. of Chicago faculty over departmental autonomy and
control, and Hayek got caught in the middle of that battle. Hutchins had been attempting to force all departments
to adopt the neo-Thomist Great Books program of Mortimer Adler, and the U. of Chicago economists were sick of Hutchins'
meddling. As a result, the Economics department rejected Hutchins' pressure to hire Hayek, and Hayek became a part
of the new Committee on Social Thought. Hayek had made contact with many at the U. of Chicago in the 1940s, with
Hayek's The Road to Serfdom playing a seminal role in transforming how Milton Friedman and others understood how
society works. Hayek conducted a number of influential faculty seminars while at the U. of Chicago, and a number
of academics worked on research projects sympathetic to some of Hayek's own, such as Aaron Director, who was active
in the Chicago School in helping to fund and establish what became the "Law and Society" program in the University
of Chicago Law School. Hayek, Frank Knight, Friedman and George Stigler worked together in forming the Mont Pèlerin
Society, an international forum for libertarian economists. Hayek and Friedman cooperated in support of the Intercollegiate
Society of Individualists, later renamed the Intercollegiate Studies Institute, an American student organisation
devoted to libertarian ideas. After editing a book on John Stuart Mill's letters he planned to publish two books
on the liberal order, The Constitution of Liberty and "The Creative Powers of a Free Civilization" (eventually the
title for the second chapter of The Constitution of Liberty). He completed The Constitution of Liberty in May 1959,
with publication in February 1960. Hayek was concerned "with that condition of men in which coercion of some by others
is reduced as much as is possible in society". Hayek was disappointed that the book did not receive the same enthusiastic
general reception as The Road to Serfdom had sixteen years before. From 1962 until his retirement in 1968, he was
a professor at the University of Freiburg, West Germany, where he began work on his next book, Law, Legislation and
Liberty. Hayek regarded his years at Freiburg as "very fruitful". Following his retirement, Hayek spent a year as
a visiting professor of philosophy at the University of California, Los Angeles, where he continued work on Law,
Legislation and Liberty, teaching a graduate seminar by the same name and another on the philosophy of social science.
Primary drafts of the book were completed by 1970, but Hayek chose to rework his drafts and finally brought the book
to publication in three volumes in 1973, 1976 and 1979. In February 1975, Margaret Thatcher was elected leader of
the British Conservative Party. The Institute of Economic Affairs arranged a meeting between Hayek and Thatcher in
London soon after. During Thatcher's only visit to the Conservative Research Department in the summer of 1975, a
speaker had prepared a paper on why the "middle way" was the pragmatic path the Conservative Party should take, avoiding
the extremes of left and right. Before he had finished, Thatcher "reached into her briefcase and took out a book.
It was Hayek's The Constitution of Liberty. Interrupting our pragmatist, she held the book up for all of us to see.
'This', she said sternly, 'is what we believe', and banged Hayek down on the table". In 1977, Hayek was critical
of the Lib-Lab pact, in which the British Liberal Party agreed to keep the British Labour government in office. Writing
to The Times, Hayek said, "May one who has devoted a large part of his life to the study of the history and the principles
of liberalism point out that a party that keeps a socialist government in power has lost all title to the name 'Liberal'.
Certainly no liberal can in future vote 'Liberal'". Hayek was criticised by Liberal politicians Gladwyn Jebb and
Andrew Phillips, who both claimed that the purpose of the pact was to discourage socialist legislation. In 1978,
Hayek came into conflict with the Liberal Party leader, David Steel, who claimed that liberty was possible only with
"social justice and an equitable distribution of wealth and power, which in turn require a degree of active government
intervention" and that the Conservative Party were more concerned with the connection between liberty and private
enterprise than between liberty and democracy. Hayek claimed that a limited democracy might be better than other
forms of limited government at protecting liberty but that an unlimited democracy was worse than other forms of unlimited
government because "its government loses the power even to do what it thinks right if any group on which its majority
depends thinks otherwise". In 1984, he was appointed as a member of the Order of the Companions of Honour (CH) by
Queen Elizabeth II of the United Kingdom on the advice of the British Prime Minister Margaret Thatcher for his "services
to the study of economics". Hayek had hoped to receive a baronetcy, and after he was awarded the CH he sent a letter
to his friends requesting that he be called the English version of Friedrich (Frederick) from now on. After his 20-minute audience with the Queen, he was "absolutely besotted" with her, according to his daughter-in-law, Esca Hayek.
Hayek said a year later that he was "amazed by her. That ease and skill, as if she'd known me all my life." The audience
with the Queen was followed by a dinner with family and friends at the Institute of Economic Affairs. When, later
that evening, Hayek was dropped off at the Reform Club, he commented: "I've just had the happiest day of my life."
In 1991, US President George H. W. Bush awarded Hayek the Presidential Medal of Freedom, one of the two highest civilian
awards in the United States, for a "lifetime of looking beyond the horizon". Hayek died on 23 March 1992 in Freiburg,
Germany, and was buried on 4 April in the Neustift am Walde cemetery in the northern outskirts of Vienna according
to the Catholic rite. In 2011, his article "The Use of Knowledge in Society" was selected as one of the top 20 articles
published in the American Economic Review during its first 100 years. Hayek's principal investigations in economics
concerned capital, money, and the business cycle. Mises had earlier applied the concept of marginal utility to the
value of money in his Theory of Money and Credit (1912), in which he also proposed an explanation for "industrial
fluctuations" based on the ideas of the old British Currency School and of Swedish economist Knut Wicksell. Hayek
used this body of work as a starting point for his own interpretation of the business cycle, elaborating what later
became known as the "Austrian Theory of the Business Cycle". Hayek spelled out the Austrian approach in more detail
in his book, published in 1929, an English translation of which appeared in 1933 as Monetary Theory and the Trade
Cycle. There he argued for a monetary approach to the origins of the cycle. In his Prices and Production (1931),
Hayek argued that the business cycle resulted from the central bank's inflationary credit expansion and its transmission
over time, leading to a capital misallocation caused by the artificially low interest rates. Hayek claimed that "the
past instability of the market economy is the consequence of the exclusion of the most important regulator of the
market mechanism, money, from itself being regulated by the market process". In 1929, Lionel Robbins assumed the
helm of the London School of Economics (LSE). Eager to promote alternatives to what he regarded as the narrow approach
of the school of economic thought that then dominated the English-speaking academic world (centred at the University
of Cambridge and deriving largely from the work of Alfred Marshall), Robbins invited Hayek to join the faculty at
LSE, which he did in 1931. According to Nicholas Kaldor, Hayek's theory of the time-structure of capital and of the
business cycle initially "fascinated the academic world" and appeared to offer a less "facile and superficial" understanding
of macroeconomics than the Cambridge school's. Also in 1931, Hayek critiqued Keynes's Treatise on Money (1930) in
his "Reflections on the pure theory of Mr. J. M. Keynes" and published his lectures at the LSE in book form as Prices
and Production. Unemployment and idle resources are, for Keynes, caused by a lack of effective demand; for Hayek,
they stem from a previous, unsustainable episode of easy money and artificially low interest rates. Keynes asked
his friend Piero Sraffa to respond. Sraffa elaborated on the effect of inflation-induced "forced savings" on the
capital sector and about the definition of a "natural" interest rate in a growing economy. Others who responded negatively
to Hayek's work on the business cycle included John Hicks, Frank Knight, and Gunnar Myrdal. Kaldor later wrote that
Hayek's Prices and Production had produced "a remarkable crop of critics" and that the total number of pages in British
and American journals dedicated to the resulting debate "could rarely have been equalled in the economic controversies
of the past." Hayek continued his research on monetary and capital theory, revising his theories of the relations
between credit cycles and capital structure in Profits, Interest and Investment (1939) and The Pure Theory of Capital
(1941), but his reputation as an economic theorist had by then fallen so much that those works were largely ignored,
except for scathing critiques by Nicholas Kaldor. Lionel Robbins himself, who had embraced the Austrian theory of
the business cycle in The Great Depression (1934), later regretted having written the book and accepted many of the
Keynesian counter-arguments. Hayek never produced the book-length treatment of "the dynamics of capital" that he
had promised in the Pure Theory of Capital. After 1941, he continued to publish works on the economics of information,
political philosophy, the theory of law, and psychology, but seldom on macroeconomics. At the University of Chicago,
Hayek was not part of the economics department and did not influence the rebirth of neoclassical theory which took
place there (see Chicago school of economics). When, in 1974, he shared the Nobel Memorial Prize in Economics with
Gunnar Myrdal, the latter complained about being paired with an "ideologue". Milton Friedman declared himself "an
enormous admirer of Hayek, but not for his economics. I think Prices and Production is a very flawed book. I think
his [Pure Theory of Capital] is unreadable. On the other hand, The Road to Serfdom is one of the great books of our
time." Building on the earlier work of Ludwig von Mises and others, Hayek also argued that while in centrally planned
economies an individual or a select group of individuals must determine the distribution of resources, these planners
will never have enough information to carry out this allocation reliably. This argument, first proposed by Max Weber,
says that the efficient exchange and use of resources can be maintained only through the price mechanism in free
markets (see economic calculation problem). Some socialists such as H. D. Dickinson and Oskar Lange, responded by
invoking general equilibrium theory, which they argued disproved Mises's thesis. They noted that the difference between
a planned and a free market system lay in who was responsible for solving the equations. They argued, if some of
the prices chosen by socialist managers were wrong, gluts or shortages would appear, signalling them to adjust the
prices up or down, just as in a free market. Through such trial and error, a socialist economy could mimic the efficiency of a free market system while avoiding its many problems. In "The Use of Knowledge in Society" (1945),
Hayek argued that the price mechanism serves to share and synchronise local and personal knowledge, allowing society's
members to achieve diverse, complicated ends through a principle of spontaneous self-organization. He contrasted
the use of the price mechanism with central planning, arguing that the former allows for more rapid adaptation to
changes in particular circumstances of time and place. Thus, he set the stage for Oliver Williamson's later contrast
between markets and hierarchies as alternative co-ordination mechanisms for economic transactions. He used the term
catallaxy to describe a "self-organizing system of voluntary co-operation". Hayek's research into this argument was
specifically cited by the Nobel Committee in its press release awarding Hayek the Nobel prize. Hayek was one of the
leading academic critics of collectivism in the 20th century. Hayek argued that all forms of collectivism (even those
theoretically based on voluntary co-operation) could only be maintained by a central authority of some kind. In Hayek's
view, the central role of the state should be to maintain the rule of law, with as little arbitrary intervention
as possible. In his popular book, The Road to Serfdom (1944) and in subsequent academic works, Hayek argued that
socialism required central economic planning and that such planning in turn leads towards totalitarianism. Hayek
also wrote that the state can play a role in the economy, and specifically, in creating a "safety net". He wrote,
"There is no reason why, in a society which has reached the general level of wealth ours has, the first kind of security
should not be guaranteed to all without endangering general freedom; that is: some minimum of food, shelter and clothing,
sufficient to preserve health. Nor is there any reason why the state should not help to organize a comprehensive
system of social insurance in providing for those common hazards of life against which few can make adequate provision."
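The trial-and-error adjustment that Dickinson and Lange invoked in the calculation debate above is, at bottom, an iterative algorithm: raise a price when it produces a shortage, lower it when it produces a glut. The following is a minimal sketch of that loop, using hypothetical linear demand and supply curves that are not drawn from any of the works discussed here:

```python
# Trial-and-error (tatonnement) price adjustment, as described in the
# socialist calculation debate: a price-setter observes gluts or shortages
# at a posted price and nudges the price toward the market-clearing level.
# The demand and supply curves below are hypothetical linear examples.

def demand(p):
    """Quantity demanded at price p (illustrative curve)."""
    return max(0.0, 100.0 - 2.0 * p)

def supply(p):
    """Quantity supplied at price p (illustrative curve)."""
    return 3.0 * p

def adjust_price(p=1.0, step=0.05, tol=1e-6, max_iters=10_000):
    """Raise p under excess demand, lower p under excess supply."""
    for _ in range(max_iters):
        excess = demand(p) - supply(p)  # positive = shortage, negative = glut
        if abs(excess) < tol:
            break
        p += step * excess              # move the price toward clearing
    return p

# For these curves the equilibrium solves 100 - 2p = 3p, i.e. p = 20,
# and the loop converges to it.
print(round(adjust_price(), 4))
```

Hayek's counter-argument, as the surrounding passages describe, was not that such a loop cannot clear a single market in principle, but that the dispersed local knowledge embodied in real demand and supply can never be assembled by central planners in the first place.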
Hayek's work on the microeconomics of the choice theoretics of investment, non-permanent goods, potential permanent
resources, and economically-adapted permanent resources mark a central dividing point between his work in areas of
macroeconomics and that of almost all other economists. Hayek's work on the macroeconomic subjects of central planning,
trade cycle theory, the division of knowledge, and entrepreneurial adaptation especially, differ greatly from the
opinions of macroeconomic "Marshallian" economists in the tradition of John Maynard Keynes and the microeconomic
"Walrasian" economists in the tradition of Abba Lerner. During World War II, Hayek began the ‘Abuse of Reason’ project.
His goal was to show how a number of then-popular doctrines and beliefs had a common origin in some fundamental misconceptions
about the social science. In his philosophy of science, which has much in common with that of his good friend Karl
Popper, Hayek was highly critical of what he termed scientism: a false understanding of the methods of science that
has been mistakenly forced upon the social sciences, but that is contrary to the practices of genuine science. Usually,
scientism involves combining the philosophers' ancient demand for demonstrative justification with the associationists'
false view that all scientific explanations are simple two-variable linear relationships. In The Sensory Order: An
Inquiry into the Foundations of Theoretical Psychology (1952), Hayek independently developed a "Hebbian learning"
model of learning and memory – an idea which he first conceived in 1920, prior to his study of economics. Hayek's
expansion of the "Hebbian synapse" construction into a global brain theory has received continued attention in neuroscience,
cognitive science, computer science, behavioural science, and evolutionary psychology, by scientists such as Gerald Edelman and Joaquin Fuster. In the latter half of his career Hayek made a number of contributions to social and
political philosophy, which he based on his views on the limits of human knowledge, and the idea of spontaneous order
in social institutions. He argues in favour of a society organised around a market order, in which the apparatus
of state is employed almost (though not entirely) exclusively to enforce the legal order (consisting of abstract
rules, and not particular commands) necessary for a market of free individuals to function. These ideas were informed
by a moral philosophy derived from epistemological concerns regarding the inherent limits of human knowledge. Hayek
argued that his ideal individualistic, free-market polity would be self-regulating to such a degree that it would
be 'a society which does not depend for its functioning on our finding good men for running it'. Hayek disapproved
of the notion of 'social justice'. He compared the market to a game in which 'there is no point in calling the outcome
just or unjust' and argued that 'social justice is an empty phrase with no determinable content'; likewise "the results
of the individual's efforts are necessarily unpredictable, and the question as to whether the resulting distribution
of incomes is just has no meaning". He generally regarded government redistribution of income or capital as an unacceptable
intrusion upon individual freedom: "the principle of distributive justice, once introduced, would not be fulfilled
until the whole of society was organized in accordance with it. This would produce a kind of society which in all
essential respects would be the opposite of a free society." Hayek’s concept of the market as a spontaneous order
has been recently applied to ecosystems to defend a broadly non-interventionist policy. Like the market, ecosystems
contain complex networks of information, involve an ongoing dynamic process, contain orders within orders, and the
entire system operates without being directed by a conscious mind. On this analysis, species take the place of price
as a visible element of the system formed by a complex set of largely unknowable elements. Human ignorance about
the countless interactions between the organisms of an ecosystem limits our ability to manipulate nature. Since humans
rely on the ecosystem to sustain themselves, we have a prima facie obligation to not disrupt such systems. This analysis
of ecosystems as spontaneous orders does not rely on markets qualifying as spontaneous orders. As such, one need
not endorse Hayek’s analysis of markets to endorse ecosystems as spontaneous orders. With regard to a safety net,
Hayek advocated "some provision for those threatened by the extremes of indigence or starvation, be it only in the
interest of those who require protection against acts of desperation on the part of the needy." As referenced in
the section on "The economic calculation problem," Hayek wrote that "there is no reason why... the state should not
help to organize a comprehensive system of social insurance." Summarizing this topic, Wapshott writes: "[Hayek]
advocated mandatory universal health care and unemployment insurance, enforced, if not directly provided, by the
state." Bernard Harcourt says that "Hayek was adamant about this." In the 1973 Law, Legislation, and Liberty, Hayek
wrote: Arthur M. Diamond argues Hayek's problems arise when he goes beyond claims that can be evaluated within economic
science. Diamond argued that: “The human mind, Hayek says, is not just limited in its ability to synthesize a vast
array of concrete facts, it is also limited in its ability to give a deductively sound ground to ethics. Here is
where the tension develops, for he also wants to give a reasoned moral defense of the free market. He is an intellectual
skeptic who wants to give political philosophy a secure intellectual foundation. It is thus not too surprising that
what results is confused and contradictory.” Asked about the liberal, non-democratic rule by a Chilean interviewer,
Hayek is translated from German to Spanish to English as having said, "As long term institutions, I am totally against
dictatorships. But a dictatorship may be a necessary system for a transitional period. [...] Personally I prefer
a liberal dictatorship to democratic government devoid of liberalism. My personal impression – and this is valid
for South America – is that in Chile, for example, we will witness a transition from a dictatorial government to
a liberal government." In a letter to the London Times, he defended the Pinochet regime and said that he had "not
been able to find a single person even in much maligned Chile who did not agree that personal freedom was much greater
under Pinochet than it had been under Allende." Hayek admitted that "it is not very likely that this will succeed, even if, at a particular point in time, it may be the only hope there is." He explained, however: "It is not certain
hope, because it will always depend on the goodwill of an individual, and there are very few individuals one can
trust. But if it is the sole opportunity which exists at a particular moment it may be the best solution despite
this. And only if and when the dictatorial government is visibly directing its steps towards limited democracy".
For Hayek, the supposedly stark difference between authoritarianism and totalitarianism was of great importance, and he placed heavy weight on this distinction in his defence of transitional dictatorship. For example, when Hayek visited
Venezuela in May 1981, he was asked to comment on the prevalence of totalitarian regimes in Latin America. In reply,
Hayek warned against confusing "totalitarianism with authoritarianism," and said that he was unaware of "any totalitarian
governments in Latin America. The only one was Chile under Allende". For Hayek, however, the word 'totalitarian'
signifies something very specific: the desire to "organize the whole of society" to attain a "definite social goal", which stands in stark contrast to "liberalism and individualism". In 1932, Hayek suggested that private investment in
the public markets was a better road to wealth and economic co-ordination in Britain than government spending programs,
as argued in a letter he co-signed with Lionel Robbins and others in an exchange of letters with John Maynard Keynes
in The Times. The nearly decade-long deflationary depression in Britain, dating from Churchill's 1925 decision to return Britain to the gold standard at the old pre-war, pre-inflationary parity, was the public policy backdrop for Hayek's single public engagement with Keynes over British monetary and fiscal policy. Otherwise, Hayek and Keynes agreed on many matters, and their economic disagreements were fundamentally theoretical, having to do almost exclusively with the relation of the economics of extending the length of production to the economics of labour
inputs. Hayek's influence on the development of economics is widely acknowledged. Hayek is the second-most frequently
cited economist (after Kenneth Arrow) in the Nobel lectures of the prize winners in economics, which is particularly
striking since his own lecture was critical of the field of orthodox economics and neoclassical modelling. A
number of Nobel Laureates in economics, such as Vernon Smith and Herbert A. Simon, recognise Hayek as the greatest
modern economist. Another Nobel winner, Paul Samuelson, believed that Hayek was worthy of his award but nevertheless
claimed that "there were good historical reasons for fading memories of Hayek within the mainstream last half of
the twentieth century economist fraternity. In 1931, Hayek's Prices and Production had enjoyed an ultra-short Byronic
success. In retrospect hindsight tells us that its mumbo-jumbo about the period of production grossly misdiagnosed
the macroeconomics of the 1927–1931 (and the 1931–2007) historical scene". Despite this comment, Samuelson spent
the last 50 years of his life obsessed with the problems of capital theory identified by Hayek and Böhm-Bawerk, and
Samuelson flatly judged Hayek to have been right and his own teacher, Joseph Schumpeter, to have been wrong on the
central economic question of the 20th century, the feasibility of socialist economic planning in a production goods
dominated economy. Hayek is widely recognised for having introduced the time dimension to the equilibrium construction
and for his key role in helping inspire the fields of growth theory, information economics, and the theory of spontaneous
order. The "informal" economics presented in Milton Friedman's massively influential popular work Free to Choose (1980) is explicitly Hayekian in its account of the price system as a system for transmitting and co-ordinating
knowledge. This can be explained by the fact that Friedman taught Hayek's famous paper "The Use of Knowledge in Society"
(1945) in his graduate seminars. Hayek had a long-standing and close friendship with philosopher of science Karl
Popper, also from Vienna. In a letter to Hayek in 1944, Popper stated, "I think I have learnt more from you than
from any other living thinker, except perhaps Alfred Tarski." (See Hacohen, 2000). Popper dedicated his Conjectures
and Refutations to Hayek. For his part, Hayek dedicated a collection of papers, Studies in Philosophy, Politics,
and Economics, to Popper and, in 1982, said that "ever since his Logik der Forschung first came out in 1934, I have
been a complete adherent to his general theory of methodology". Popper also participated in the inaugural meeting
of the Mont Pelerin Society. Their friendship and mutual admiration, however, do not change the fact that there are
important differences between their ideas. Hayek's greatest intellectual debt was to Carl Menger, who pioneered an
approach to social explanation similar to that developed in Britain by Bernard Mandeville and the Scottish moral
philosophers in the Scottish Enlightenment. He had a wide-reaching influence on contemporary economics, politics,
philosophy, sociology, psychology and anthropology. For example, Hayek's discussion in The Road to Serfdom (1944)
about truth, falsehood and the use of language influenced some later opponents of postmodernism. Hayek received new
attention in the 1980s and 1990s with the rise of conservative governments in the United States, United Kingdom,
and Canada. After winning the 1979 United Kingdom general election, Margaret Thatcher appointed Keith Joseph, the
director of the Hayekian Centre for Policy Studies, as her secretary of state for industry in an effort to redirect
parliament's economic strategies. Likewise, David Stockman, Ronald Reagan's most influential financial official in
1981 was an acknowledged follower of Hayek. Hayek wrote an essay, "Why I Am Not a Conservative" (included as an appendix
to The Constitution of Liberty), in which he disparaged conservatism for its inability to adapt to changing human
realities or to offer a positive political program, remarking, "Conservatism is only as good as what it conserves."
Although he noted that modern-day conservatism shares many opinions on economics with classical liberals, particularly a belief in the free market, he believed this is because conservatism wants to "stand still," whereas liberalism embraces
the free market because it "wants to go somewhere." Hayek identified himself as a classical liberal but noted that in the United States it had become almost impossible to use "liberal" in its original sense, and the term "libertarian" had been used instead. Hayek, however, found that term "singularly unattractive" and offered "Old Whig" (a phrase borrowed from Edmund Burke) instead; in later life he said, "I am becoming a Burkean Whig." In the same essay, Hayek also criticised conservatism for "its hostility to internationalism and its proneness to a strident nationalism" and for its frequent association with imperialism. However, Whiggery as a political doctrine had little
affinity for classical political economy, the tabernacle of the Manchester School and William Gladstone. His essay
has served as an inspiration to other liberal-minded economists wishing to distinguish themselves from conservative
thinkers, for example James M. Buchanan's essay "Why I, Too, Am Not a Conservative: The Normative Vision of Classical
Liberalism". His opponents have attacked Hayek as a leading promoter of "neoliberalism". A British scholar, Samuel
Brittan, concluded in 2010, "Hayek's book [The Constitution of Liberty] is still probably the most comprehensive
statement of the underlying ideas of the moderate free market philosophy espoused by neoliberals." Brittan adds that
although Raymond Plant (2009) comes out in the end against Hayek's doctrines, Plant gives The Constitution of Liberty
a "more thorough and fair-minded analysis than it has received even from its professed adherents". In Why F A Hayek
is a Conservative, British policy analyst Madsen Pirie claims Hayek mistakes the nature of the conservative outlook.
Conservatives, he says, are not averse to change – but like Hayek, they are highly averse to change being imposed
on the social order by people in authority who think they know how to run things better. They wish to allow the market
to function smoothly and give it the freedom to change and develop. It is an outlook, says Pirie, that Hayek and
conservatives both share.
Diarrhea, also spelled diarrhoea, is the condition of having at least three loose or liquid bowel movements each day. It
often lasts for a few days and can result in dehydration due to fluid loss. Signs of dehydration often begin with
loss of the normal stretchiness of the skin and irritable behaviour. This can progress to decreased urination, loss
of skin color, a fast heart rate, and a decrease in responsiveness as it becomes more severe. Loose but non-watery
stools in babies who are breastfed, however, may be normal. The most common cause is an infection of the intestines due to either a virus, bacterium, or parasite, a condition known as gastroenteritis. These infections are often acquired
from food or water that has been contaminated by stool, or directly from another person who is infected. It may be
divided into three types: short-duration watery diarrhea, short-duration bloody diarrhea, and, if it lasts for more than two weeks, persistent diarrhea. Short-duration watery diarrhea may be due to an infection by cholera, although
this is rare in the developed world. If blood is present it is also known as dysentery. A number of non-infectious
causes may also result in diarrhea, including hyperthyroidism, lactose intolerance, inflammatory bowel disease, a
number of medications, and irritable bowel syndrome. In most cases, stool cultures are not required to confirm the
exact cause. Prevention of infectious diarrhea is by improved sanitation, clean drinking water, and hand washing
with soap. Breastfeeding for at least six months is also recommended as is vaccination against rotavirus. Oral rehydration
solution (ORS), which is clean water with modest amounts of salts and sugar, is the treatment of choice. Zinc tablets
are also recommended. These treatments have been estimated to have saved 50 million children in the past 25 years.
When people have diarrhea it is recommended that they continue to eat healthy food and babies continue to be breastfed.
If commercial ORS are not available, homemade solutions may be used. In those with severe dehydration, intravenous
fluids may be required. Most cases, however, can be managed well with fluids by mouth. Antibiotics, while rarely
used, may be recommended in a few cases such as those who have bloody diarrhea and a high fever, those with severe
diarrhea following travelling, and those who grow specific bacteria or parasites in their stool. Loperamide may help
decrease the number of bowel movements but is not recommended in those with severe disease. About 1.7 to 5 billion
cases of diarrhea occur per year. It is most common in developing countries, where young children get diarrhea on
average three times a year. Total deaths from diarrhea were estimated at 1.26 million in 2013 – down from 2.58 million in 1990. In 2012, it was the second most common cause of death in children younger than five (0.76 million or 11%).
Frequent episodes of diarrhea are also a common cause of malnutrition and the most common cause in those younger
than five years of age. Other long term problems that can result include stunted growth and poor intellectual development.
Secretory diarrhea means that there is an increase in the active secretion, or there is an inhibition of absorption.
There is little to no structural damage. The most common cause of this type of diarrhea is a cholera toxin that stimulates
the secretion of anions, especially chloride ions. Therefore, to maintain a charge balance in the lumen, sodium is
carried with it, along with water. In this type of diarrhea intestinal fluid secretion is isotonic with plasma even
during fasting. It continues even when there is no oral food intake. Osmotic diarrhea occurs when too much water
is drawn into the bowels. If a person drinks solutions with excessive sugar or excessive salt, these can draw water
from the body into the bowel and cause osmotic diarrhea. Osmotic diarrhea can also be the result of maldigestion
(e.g., pancreatic disease or Coeliac disease), in which the nutrients are left in the lumen to pull in water. Or
it can be caused by osmotic laxatives (which work to alleviate constipation by drawing water into the bowels). In
healthy individuals, too much magnesium or vitamin C or undigested lactose can produce osmotic diarrhea and distention
of the bowel. A person who has lactose intolerance can have difficulty absorbing lactose after an extraordinarily
high intake of dairy products. In persons who have fructose malabsorption, excess fructose intake can also cause
diarrhea. High-fructose foods that also have a high glucose content are more absorbable and less likely to cause
diarrhea. Sugar alcohols such as sorbitol (often found in sugar-free foods) are difficult for the body to absorb
and, in large amounts, may lead to osmotic diarrhea. In most of these cases, osmotic diarrhea stops when the offending agent (e.g. milk, sorbitol) is discontinued. Inflammatory diarrhea occurs when there is damage to the mucosal lining or
brush border, which leads to a passive loss of protein-rich fluids and a decreased ability to absorb these lost fluids.
Features of all three of the other types of diarrhea can be found in this type of diarrhea.
It can be caused by bacterial infections, viral infections, parasitic infections, or autoimmune problems such as
inflammatory bowel diseases. It can also be caused by tuberculosis, colon cancer, and enteritis.
Diarrheal disease may have a negative impact on both physical fitness and mental development. "Early childhood malnutrition
resulting from any cause reduces physical fitness and work productivity in adults," and diarrhea is a primary cause
of childhood malnutrition. Further, evidence suggests that diarrheal disease has significant impacts on mental development
and health; it has been shown that, even when controlling for helminth infection and early breastfeeding, children
who had experienced severe diarrhea had significantly lower scores on a series of tests of intelligence. Another
possible cause of diarrhea is irritable bowel syndrome (IBS), which usually presents with abdominal discomfort relieved
by defecation and unusual stool (diarrhea or constipation) for at least 3 days a week over the previous 3 months.
Symptoms of diarrhea-predominant IBS can be managed through a combination of dietary changes, soluble fiber supplements,
and/or medications such as loperamide or codeine. About 30% of patients with diarrhea-predominant IBS have bile acid
malabsorption diagnosed with an abnormal SeHCAT test. Poverty is a good indicator of the rate of infectious diarrhea
in a population. This association does not stem from poverty itself, but rather from the conditions under which impoverished
people live. The absence of certain resources compromises the ability of the poor to defend themselves against infectious
diarrhea. "Poverty is associated with poor housing, crowding, dirt floors, lack of access to clean water or to sanitary
disposal of fecal waste (sanitation), cohabitation with domestic animals that may carry human pathogens, and a lack
of refrigerated storage for food, all of which increase the frequency of diarrhea... Poverty also restricts the ability
to provide age-appropriate, nutritionally balanced diets or to modify diets when diarrhea develops so as to mitigate
and repair nutrient losses. The impact is exacerbated by the lack of adequate, available, and affordable medical
care." Proper nutrition is important for health and functioning, including the prevention of infectious diarrhea.
It is especially important to young children who do not have a fully developed immune system. Zinc deficiency, a
condition often found in children in developing countries can, even in mild cases, have a significant impact on the
development and proper functioning of the human immune system. Indeed, this relationship between zinc deficiency and reduced immune functioning corresponds with an increased severity of infectious diarrhea. Children who have lowered
levels of zinc have a greater number of instances of diarrhea, severe diarrhea, and diarrhea associated with fever.
Similarly, vitamin A deficiency can cause an increase in the severity of diarrheal episodes; however, there is some
discrepancy when it comes to the impact of vitamin A deficiency on the rate of disease. While some argue that a relationship
does not exist between the rate of disease and vitamin A status, others suggest an increase in the rate associated
with deficiency. Given that estimates suggest 127 million preschool children worldwide are vitamin A deficient, this
population has the potential for increased risk of disease contraction. According to two researchers, Nesse and Williams,
diarrhea may function as an evolved expulsion defense mechanism. As a result, if it is stopped, there might be a
delay in recovery. They cite in support of this argument research published in 1973 that found that treating Shigella
with the anti-diarrhea drug (Co-phenotrope, Lomotil) caused people to stay feverish twice as long as those not so
treated. The researchers indeed themselves observed that: "Lomotil may be contraindicated in shigellosis. Diarrhea
may represent a defense mechanism". Basic sanitation techniques can have a profound effect on the transmission of
diarrheal disease. The implementation of hand washing using soap and water, for example, has been experimentally
shown to reduce the incidence of disease by approximately 42–48%. Hand washing in developing countries, however,
is compromised by poverty as acknowledged by the CDC: "Handwashing is integral to disease prevention in all parts
of the world; however, access to soap and water is limited in a number of less developed countries. This lack of
access is one of many challenges to proper hygiene in less developed countries." Solutions to this barrier require
the implementation of educational programs that encourage sanitary behaviours. Given that water contamination is
a major means of transmitting diarrheal disease, efforts to provide clean water supply and improved sanitation have
the potential to dramatically cut the rate of disease incidence. In fact, it has been proposed that improved water sanitation and hygiene could produce an 88% reduction in child mortality from diarrheal disease. Similarly, a meta-analysis of numerous studies on improving water supply and sanitation shows a 22–27% reduction
in disease incidence, and a 21–30% reduction in mortality rate associated with diarrheal disease. Immunization against
the pathogens that cause diarrheal disease is a viable prevention strategy; however, it requires targeting certain
pathogens for vaccination. In the case of Rotavirus, which was responsible for around 6% of diarrheal episodes and
20% of diarrheal disease deaths in the children of developing countries, use of a Rotavirus vaccine in trials in
1985 yielded a slight (2-3%) decrease in total diarrheal disease incidence, while reducing overall mortality by 6-10%.
Similarly, a Cholera vaccine showed a strong reduction in morbidity and mortality, though the overall impact of vaccination
was minimal as Cholera is not one of the major causative pathogens of diarrheal disease. Since this time, more effective
vaccines have been developed that have the potential to save many thousands of lives in developing nations, while
reducing the overall cost of treatment, and the costs to society. Dietary deficiencies in developing countries can
be combated by promoting better eating practices and by supplementation with vitamin A and/or zinc. Zinc supplementation proved successful, showing a significant decrease in the incidence of diarrheal disease compared to a control group.
The majority of the literature suggests that vitamin A supplementation is advantageous in reducing disease incidence.
Development of a supplementation strategy should take into consideration the fact that vitamin A supplementation
was less effective in reducing diarrhea incidence when compared to vitamin A and zinc supplementation, and that the
latter strategy was estimated to be significantly more cost effective. In many cases of diarrhea, replacing lost
fluid and salts is the only treatment needed. This is usually by mouth – oral rehydration therapy – or, in severe
cases, intravenously. Diet restrictions such as the BRAT diet are no longer recommended. Research does not support
the limiting of milk to children as doing so has no effect on duration of diarrhea. To the contrary, WHO recommends
that children with diarrhea continue to eat as sufficient nutrients are usually still absorbed to support continued
growth and weight gain, and that continuing to eat also speeds up recovery of normal intestinal functioning. CDC
recommends that children and adults with cholera also continue to eat. Oral rehydration solution (ORS) (a slightly
sweetened and salty water) can be used to prevent dehydration. Standard home solutions such as salted rice water,
salted yogurt drinks, vegetable and chicken soups with salt can be given. Home solutions such as water in which cereal
has been cooked, unsalted soup, green coconut water, weak tea (unsweetened), and unsweetened fresh fruit juices can
have from half a teaspoon to a full teaspoon of salt (from one-and-a-half to three grams) added per liter. Clean plain
water can also be one of several fluids given. There are commercial solutions such as Pedialyte, and relief agencies
such as UNICEF widely distribute packets of salts and sugar. A WHO publication for physicians recommends a homemade
ORS consisting of one liter of water with one teaspoon of salt (3 grams) and two tablespoons of sugar (18 grams) added (approximately
the "taste of tears"). Rehydration Project recommends adding the same amount of sugar but only one-half a teaspoon
of salt, stating that this more dilute approach is less risky with very little loss of effectiveness. Both agree
that drinks with too much sugar or salt can make dehydration worse. Drinks especially high in simple sugars, such
as soft drinks and fruit juices, are not recommended in children under 5 years of age as they may increase dehydration.
A solution in the gut that is too concentrated draws water from the rest of the body, just as if the person were to drink sea water.
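As a rough check on the recipes above, the solute concentrations implied by the stated gram amounts can be computed directly. This is a minimal sketch: the teaspoon/gram equivalences (1 tsp salt ≈ 3 g, 2 tbsp sugar ≈ 18 g per liter) come from the text, while the molar masses, the function name, and the rounding are our own additions, and homemade kitchen measures are inherently approximate.

```python
# Approximate solute concentrations for the two homemade ORS recipes
# described above. Molar masses (standard values): NaCl ~58.44 g/mol,
# sucrose ~342.3 g/mol.

NACL_G_PER_MOL = 58.44
SUCROSE_G_PER_MOL = 342.3

def ors_concentrations(salt_g_per_l, sugar_g_per_l):
    """Return (sodium mmol/L, sucrose mmol/L) for a homemade recipe."""
    sodium_mmol = salt_g_per_l / NACL_G_PER_MOL * 1000
    # Each mmol of sucrose hydrolyses in the gut to roughly one mmol
    # of glucose plus one of fructose.
    sugar_mmol = sugar_g_per_l / SUCROSE_G_PER_MOL * 1000
    return round(sodium_mmol, 1), round(sugar_mmol, 1)

# WHO-style homemade recipe: 1 tsp salt (3 g) + 2 tbsp sugar (18 g) per liter
print(ors_concentrations(3.0, 18.0))   # ~51 mmol/L sodium, ~53 mmol/L sucrose

# Rehydration Project variant: half the salt, the same sugar
print(ors_concentrations(1.5, 18.0))   # ~26 mmol/L sodium, ~53 mmol/L sucrose
```

The halved-salt variant yields roughly half the sodium concentration, which is consistent with the Rehydration Project's framing that diluting the salt trades a little effectiveness for a lower risk of worsening dehydration.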
Plain water may be used if more specific and effective ORT preparations are unavailable or are not palatable. Additionally,
a mix of both plain water and drinks perhaps too rich in sugar and salt can alternatively be given to the same person,
with the goal of providing a medium amount of sodium overall. A nasogastric tube can be used in young children to
administer fluids if warranted. WHO recommends a child with diarrhea continue to be fed. Continued feeding speeds
the recovery of normal intestinal function. In contrast, children whose food is restricted have diarrhea of longer
duration and recover intestinal function more slowly. A child should also continue to be breastfed. The WHO states
"Food should never be withheld and the child's usual foods should not be diluted. Breastfeeding should always be
continued." And in the specific example of cholera, CDC also makes the same recommendation. In young children who
are not breast-fed and live in the developed world, a lactose-free diet may be useful to speed recovery. While antibiotics
are beneficial in certain types of acute diarrhea, they are usually not used except in specific situations. There
are concerns that antibiotics may increase the risk of hemolytic uremic syndrome in people infected with Escherichia
coli O157:H7. In resource-poor countries, treatment with antibiotics may be beneficial. However, some bacteria are
developing antibiotic resistance, particularly Shigella. Antibiotics can also cause diarrhea, and antibiotic-associated
diarrhea is the most common adverse effect of treatment with general antibiotics.
Madrasa (Arabic: مدرسة, madrasah, pl. مدارس, madāris, Turkish: Medrese) is the Arabic word for any type of educational institution,
whether secular or religious (of any religion). The word is variously transliterated madrasah, madarasaa, medresa,
madrassa, madraza, medrese, etc. In the West, the word usually refers to a specific type of religious school or college
for the study of the Islamic religion, though this may not be the only subject studied. Not all students in madaris
are Muslims; there is also a modern curriculum. The word madrasah derives from the triconsonantal Semitic root د-ر-س
D-R-S 'to learn, study', through the wazn (form/stem) مفعل(ة); mafʻal(ah), meaning "a place where something is done".
Therefore, madrasah literally means "a place where learning and studying take place". The word is also present as
a loanword with the same innocuous meaning in many Arabic-influenced languages, such as: Urdu, Bengali, Hindi, Persian,
Turkish, Azeri, Kurdish, Indonesian, Malay and Bosnian / Croatian. In the Arabic language, the word مدرسة madrasah
simply means the same as school does in the English language, whether that is private, public or parochial school,
as well as for any primary or secondary school whether Muslim, non-Muslim, or secular. Unlike the use of the word
school in British English, the word madrasah more closely resembles the term school in American English, in that
it can refer to a university-level or post-graduate school as well as to a primary or secondary school. For example,
in the Ottoman Empire during the Early Modern Period, madaris had lower schools and specialised schools where the
students became known as danişmends. The usual Arabic word for a university, however, is جامعة (jāmiʻah). The Hebrew
cognate midrasha also connotes the meaning of a place of learning; the related term midrash literally refers to study
or learning, but has acquired mystical and religious connotations. However, in English, the term madrasah usually
refers to the specifically Islamic institutions. A typical Islamic school usually offers two courses of study: a
ḥifẓ course teaching memorization of the Qur'an (the person who commits the entire Qurʼan to memory is called a ḥāfiẓ);
and an ʻālim course leading the candidate to become an accepted scholar in the community. A regular curriculum includes
courses in Arabic, tafsir (Qur'anic interpretation), sharīʻah (Islamic law), hadiths (recorded sayings and deeds
of Muhammad), mantiq (logic), and Muslim history. In the Ottoman Empire, during the Early Modern Period, the study
of hadiths was introduced by Süleyman I. Depending on the educational demands, some madaris also offer additional
advanced courses in Arabic literature, English and other foreign languages, as well as science and world history.
Ottoman madaris along with religious teachings also taught "styles of writing, grammar, syntax, poetry, composition,
natural sciences, political sciences, and etiquette." People of all ages attend, and many often move on to becoming
imams. The certificate of an ʻālim, for example, requires approximately twelve years of study. A good number of the ḥuffāẓ (plural of ḥāfiẓ) are the product of the madaris. The madaris also resemble colleges,
where people take evening classes and reside in dormitories. An important function of the madaris is to admit orphans
and poor children in order to provide them with education and training. Madaris may enroll female students; however,
they study separately from the men. The term "Islamic education" means education in the light of
Islam itself, which is rooted in the teachings of the Quran, the holy book of Muslims. Islamic education and Muslim education are not the same, because Islamic education has an epistemological integration founded on Tawhid (oneness, or monotheism). For details, see "A Qur’anic Methodology for Integrating Knowledge and Education: Implications for Malaysia’s Islamic Education Strategy" and "Knowledge of Shariah and Knowledge to Manage ‘Self’ and ‘System’: Integration of Islamic Epistemology with the Knowledge and Education", both by Tareq M Zayed.
The first institute of madrasa education was at the estate of Hazrat Zaid bin Arkam near a hill called Safa, where
Hazrat Muhammad was the teacher and the students were some of his followers. After Hijrah (migration), the madrasa of "Suffa" was established in Madina on the east side of the Al-Masjid an-Nabawi mosque. Hazrat 'Ubada bin Samit was appointed there as a teacher by Hazrat Muhammad. In the curriculum
of the madrasa were teachings of the Qur'an, the Hadith, fara'iz, tajweed, genealogy, treatises of first aid, and so on. There was also training in horse-riding, the art of war, handwriting and calligraphy, athletics, and martial arts. The first period of madrasa-based education is estimated to run from the first day of "nabuwwat" to the early portion of the "Umaiya" caliphate. During the rule of the Fatimid and Mamluk dynasties and their successor states
in the medieval Middle East, many of the ruling elite founded madaris through a religious endowment known as the
waqf. Not only was the madrasa a potent symbol of status but it was an effective means of transmitting wealth and
status to their descendants. Especially during the Mamlūk period, when only former slaves could assume power, the
sons of the ruling Mamlūk elite were unable to inherit. Guaranteed positions within the new madaris thus allowed
them to maintain status. Madaris built in this period include the Mosque-Madrasah of Sultan Ḥasan in Cairo. At the
beginning of the Caliphate or Islamic Empire, the reliance on courts initially confined sponsorship and scholarly
activities to major centres. Within several centuries, the development of Muslim educational institutions such as
the madrasah and masjid eventually introduced such activities to provincial towns and dispersed them across the Islamic
legal schools and Sufi orders. In addition to religious subjects, they also taught the "rational sciences," as varied
as mathematics, astronomy, astrology, geography, alchemy, philosophy, magic, and occultism, depending on the curriculum
of the specific institution in question. The madaris, however, were not centres of advanced scientific study; scientific
advances in Islam were usually carried out by scholars working under the patronage of royal courts. During this period,
the Caliphate experienced a growth in literacy, having the highest literacy rate of the Middle Ages, comparable to
classical Athens' literacy in antiquity but on a much larger scale. The emergence of the maktab and madrasa institutions
played a fundamental role in the relatively high literacy rates of the medieval Islamic world. In the medieval Islamic
world, an elementary school was known as a maktab, which dates back to at least the 10th century. Like madaris (which
referred to higher education), a maktab was often attached to an endowed mosque. In the 11th century, the famous
Persian Islamic philosopher and teacher Ibn Sīnā (known as Avicenna in the West), in one of his books, wrote a chapter
about the maktab entitled "The Role of the Teacher in the Training and Upbringing of Children," as a guide to teachers
working at maktab schools. He wrote that children can learn better if taught in classes instead of individual tuition
from private tutors, and he gave a number of reasons for why this is the case, citing the value of competition and
emulation among pupils, as well as the usefulness of group discussions and debates. Ibn Sīnā described the curriculum
of a maktab school in some detail, describing the curricula for two stages of education in a maktab school. Ibn Sīnā
refers to the secondary education stage of maktab schooling as a period of specialisation when pupils should begin
to acquire manual skills, regardless of their social status. He writes that children after the age of 14 should be
allowed to choose and specialise in subjects they have an interest in, whether it was reading, manual skills, literature,
preaching, medicine, geometry, trade and commerce, craftsmanship, or any other subject or profession they would be
interested in pursuing for a future career. He wrote that this was a transitional stage and that there needs to be
flexibility regarding the age at which pupils graduate, as the student's emotional development and chosen subjects
need to be taken into account. During its formative period, the term madrasah referred to a higher education institution,
whose curriculum initially included only the "religious sciences", whilst philosophy and the secular sciences were
often excluded. The curriculum slowly began to diversify, with many later madaris teaching both the religious and
the "secular sciences", such as logic, mathematics and philosophy. Some madaris further extended their curriculum
to history, politics, ethics, music, metaphysics, medicine, astronomy and chemistry. The curriculum of a madrasah
was usually set by its founder, but most generally taught both the religious sciences and the physical sciences.
Madaris were established throughout the Islamic world, examples being the 9th century University of al-Qarawiyyin,
the 10th century al-Azhar University (the most famous), the 11th century Niẓāmīyah, as well as 75 madaris in Cairo,
51 in Damascus and up to 44 in Aleppo between 1155 and 1260. Many more were also established in the Andalusian cities
of Córdoba, Seville, Toledo, Granada (Madrasah of Granada), Murcia, Almería, Valencia and Cádiz during the Caliphate
of Córdoba. Madaris were largely centred on the study of fiqh (Islamic jurisprudence). The ijāzat al-tadrīs wa-al-iftāʼ
("licence to teach and issue legal opinions") in the medieval Islamic legal education system had its origins in the
9th century after the formation of the madhāhib (schools of jurisprudence). George Makdisi considers the ijāzah to
be the origin of the European doctorate. However, in an earlier article, he considered the ijāzah to be of "fundamental
difference" to the medieval doctorate, since the former was awarded by an individual teacher-scholar not obliged
to follow any formal criteria, whereas the latter was conferred on the student by the collective authority of the
faculty. To obtain an ijāzah, a student "had to study in a guild school of law, usually four years for the basic
undergraduate course" and ten or more years for a post-graduate course. The "doctorate was obtained after an oral
examination to determine the originality of the candidate's theses", and to test the student's "ability to defend
them against all objections, in disputations set up for the purpose." These were scholarly exercises practised throughout
the student's "career as a graduate student of law." After students completed their post-graduate education, they
were awarded ijazas giving them the status of faqīh 'scholar of jurisprudence', muftī 'scholar competent in issuing
fatwās', and mudarris 'teacher'. The Arabic term ijāzat al-tadrīs was awarded to Islamic scholars who were qualified
to teach. According to Makdisi, the Latin title licentia docendi 'licence to teach' in the European university may
have been a translation of the Arabic, but the underlying concept was very different. A significant difference between
the ijāzat al-tadrīs and the licentia docendi was that the former was awarded by the individual scholar-teacher,
while the latter was awarded by the chief official of the university, who represented the collective faculty, rather
than the individual scholar-teacher. Much of the study in the madrasah college centred on examining whether certain
opinions of law were orthodox. This scholarly process of "determining orthodoxy began with a question which the Muslim
layman, called in that capacity mustaftī, presented to a jurisconsult, called mufti, soliciting from him a response,
called fatwa, a legal opinion (the religious law of Islam covers civil as well as religious matters). The mufti (professor
of legal opinions) took this question, studied it, researched it intensively in the sacred scriptures, in order to
find a solution to it. This process of scholarly research was called ijtihād, literally, the exertion of one's efforts
to the utmost limit." There is disagreement over whether madaris ever became universities. Scholars like Arnold H. Green
and Seyyed Hossein Nasr have argued that starting in the 10th century, some medieval Islamic madaris indeed became
universities. George Makdisi and others, however, argue that the European university has no parallel in the medieval
Islamic world. Darleen Pryds questions this view, pointing out that madaris and European universities in the Mediterranean
region shared similar foundations by princely patrons and were intended to provide loyal administrators to further
the rulers' agenda. Other scholars regard the university as uniquely European in origin and characteristics. al-Qarawīyīn
University in Fez, Morocco is recognised by many historians as the oldest degree-granting university in the world,
having been founded in 859 by Fatima al-Fihri. While the madrasa college could also issue degrees at all levels,
the jāmiʻahs (such as al-Qarawīyīn and al-Azhar University) differed in the sense that they were larger institutions,
more universal in terms of their complete source of studies, had individual faculties for different subjects, and
could house a number of mosques, madaris, and other institutions within them. Such an institution has thus been described
as an "Islamic university". Al-Azhar University, founded in Cairo, Egypt in 975 by the Ismaʻīlī Shīʻī Fatimid dynasty
as a jāmiʻah, had individual faculties for a theological seminary, Islamic law and jurisprudence, Arabic grammar,
Islamic astronomy, early Islamic philosophy and logic in Islamic philosophy. The postgraduate doctorate in law was
only obtained after "an oral examination to determine the originality of the candidate's theses", and to test the
student's "ability to defend them against all objections, in disputations set up for the purpose." ‘Abd al-Laṭīf
al-Baghdādī also delivered lectures on Islamic medicine at al-Azhar, while Maimonides delivered lectures on medicine
and astronomy there during the time of Saladin. Another early jāmiʻah was the Niẓāmīyah of Baghdād (founded 1091),
which has been called the "largest university of the Medieval world." Mustansiriya University, established by the
ʻAbbāsid caliph al-Mustanṣir in 1233, in addition to teaching the religious subjects, offered courses dealing with
philosophy, mathematics and the natural sciences. However, the classification of madaris as "universities" is disputed by those who insist on understanding each institution on its own terms. In madaris, the ijāzahs were only issued in
one field, the Islamic religious law of sharīʻah, and in no other field of learning. Other academic subjects, including
the natural sciences, philosophy and literary studies, were only treated "ancillary" to the study of the Sharia.
For example, a natural science like astronomy was only studied (if at all) to supply religious needs, like the time
for prayer. This is why Ptolemaic astronomy was considered adequate, and is still taught in some modern-day madaris.
The Islamic law undergraduate degree from al-Azhar, the most prestigious madrasa, was traditionally granted without
final examinations, but on the basis of the students' attentive attendance to courses. In contrast to the medieval
doctorate which was granted by the collective authority of the faculty, the Islamic degree was not granted by the
teacher to the pupil based on any formal criteria, but remained a "personal matter, the sole prerogative of the person
bestowing it; no one could force him to give one". Medievalists who define the university as a legally
autonomous corporation disagree with the term "university" for the Islamic madaris and jāmi‘ahs because the medieval
university (from Latin universitas) was structurally different, being a legally autonomous corporation rather than
a waqf institution like the madrasa and jāmiʻah. Despite the many similarities, medieval specialists have coined
the term "Islamic college" for madrasa and jāmiʻah to differentiate them from the legally autonomous corporations
that the medieval European universities were. In a sense, the madrasa resembles a university college in that it has
most of the features of a university, but lacks the corporate element. Toby Huff summarises the difference as follows:
As Muslim institutions of higher learning, the madrasa had the legal designation of waqf. In central and eastern
Islamic lands, the view that the madrasa, as a charitable endowment, would remain under the control of the donor (and their descendants) resulted in a "spurt" in the establishment of madaris in the 11th and 12th centuries. However, in
Western Islamic lands, where the Maliki views prohibited donors from controlling their endowment, madaris were not
as popular. Unlike the corporate designation of Western institutions of higher learning, the waqf designation seemed
to have led to the exclusion of non-orthodox religious subjects such as philosophy and natural science from the curricula.
The madrasa of al-Qarawīyīn, one of the two surviving madaris that predate the founding of the earliest medieval universities and are thus claimed by some authors to be the "first universities", acquired official university status only as late as 1947. The other, al-Azhar, acquired this status in name and essence only in the course of numerous reforms during the 19th and 20th centuries, notably that of 1961, which introduced non-religious subjects to its curriculum, such as economics, engineering, medicine, and agriculture. It should also be noted that many medieval
universities were run for centuries as Christian cathedral schools or monastic schools prior to their formal establishment
as universitas scholarium; evidence of these immediate forerunners of the university dates back to the 6th century
AD, thus well preceding the earliest madaris. George Makdisi, who has published most extensively on the topic, concludes from his comparison that the two institutions were of distinct origins. Nevertheless, Makdisi has asserted that the European university borrowed
many of its features from the Islamic madrasa, including the concepts of a degree and doctorate. Makdisi and Hugh
Goddard have also highlighted other terms and concepts now used in modern universities which most likely have Islamic
origins, including "the fact that we still talk of professors holding the 'Chair' of their subject" being based on
the "traditional Islamic pattern of teaching where the professor sits on a chair and the students sit around him",
the term 'academic circles' being derived from the way in which Islamic students "sat in a circle around their professor",
and terms such as "having 'fellows', 'reading' a subject, and obtaining 'degrees', can all be traced back" to the
Islamic concepts of aṣḥāb ('companions, as of Muhammad'), qirāʼah ('reading aloud the Qur'an') and ijāzah ('licence
[to teach]') respectively. Makdisi has listed eighteen such parallels in terminology which can be traced back to
their roots in Islamic education. Some of the practices now common in modern universities which Makdisi and Goddard
trace back to an Islamic root include "practices such as delivering inaugural lectures, wearing academic robes, obtaining
doctorates by defending a thesis, and even the idea of academic freedom are also modelled on Islamic custom." The
Islamic scholarly system of fatwá and ijmāʻ, meaning opinion and consensus respectively, formed the basis of the
"scholarly system the West has practised in university scholarship from the Middle Ages down to the present day."
According to Makdisi and Goddard, "the idea of academic freedom" in universities was also "modelled on Islamic custom"
as practised in the medieval Madrasa system from the 9th century. Islamic influence was "certainly discernible in
the foundation of the first deliberately planned university" in Europe, the University of Naples Federico II founded
by Frederick II, Holy Roman Emperor in 1224. However, all of these facets of medieval university life are considered
by standard scholarship to be independent medieval European developments with no traceable Islamic influence. Some reviewers have pointed out Makdisi's strong inclination to overstate his case by resting simply on "the accumulation of close parallels" while failing to point to convincing channels of transmission between
the Muslim and Christian world. Norman Daniel points out that the Arab equivalent of the Latin disputation, the taliqa,
was reserved for the ruler's court, not the madrasa, and that the actual differences between Islamic fiqh and medieval
European civil law were profound. The taliqa only reached Islamic Spain, the only likely point of transmission, after
the establishment of the first medieval universities. In fact, there is no Latin translation of the taliqa and, most
importantly, no evidence of Latin scholars ever showing awareness of Arab influence on the Latin method of disputation,
something they would have certainly found noteworthy. Rather, it was the medieval reception of the Greek Organon
which set the scholastic sic et non in motion. Daniel concludes that resemblances in method had more to do with the
two religions having "common problems: to reconcile the conflicting statements of their own authorities, and to safeguard
the data of revelation from the impact of Greek philosophy"; thus Christian scholasticism and similar Arab concepts
should be viewed in terms of a parallel occurrence, not of the transmission of ideas from one to the other, a view
shared by Hugh Kennedy. Prior to the 12th century, women accounted for less than one percent of the world’s Islamic
scholars. However, al-Sakhawi and Mohammad Akram Nadwi have since found evidence of over 8,000 female scholars dating back to the 15th century. al-Sakhawi devotes an entire volume of his 12-volume biographical dictionary al-Ḍawʾ al-lāmiʻ to
female scholars, giving information on 1,075 of them. More recently, the scholar Mohammad Akram Nadwi, currently
a researcher from the Oxford Centre for Islamic Studies, has written 40 volumes on the muḥaddithāt (the women scholars
of ḥadīth), and found at least 8,000 of them. From around 750, during the Abbasid Caliphate, women “became renowned
for their brains as well as their beauty”. In particular, many well known women of the time were trained from childhood
in music, dancing and poetry. Mahbuba was one of these. Another feminine figure to be remembered for her achievements
was Tawaddud, "a slave girl who was said to have been bought at great cost by Hārūn al-Rashīd because she had passed
her examinations by the most eminent scholars in astronomy, medicine, law, philosophy, music, history, Arabic grammar,
literature, theology and chess". Moreover, among the most prominent feminine figures was Shuhda who was known as
"the Scholar" or "the Pride of Women" during the 12th century in Baghdad. Despite the recognition of women's aptitudes
during the Abbasid dynasty, all these came to an end in Iraq with the sack of Baghdad in 1258. According to the Sunni
scholar Ibn ʻAsākir in the 12th century, there were opportunities for female education in the medieval Islamic world,
writing that women could study, earn ijazahs (academic degrees), and qualify as scholars and teachers. This was especially
the case for learned and scholarly families, who wanted to ensure the highest possible education for both their sons
and daughters. Ibn ʻAsakir had himself studied under 80 different female teachers in his time. Female education in
the Islamic world was inspired by Muhammad's wives, such as Khadijah, a successful businesswoman. According to a
hadith attributed to Muhammad, he praised the women of Medina because of their desire for religious knowledge. The first Ottoman medrese was created in İznik in 1331, and most Ottoman medreses followed the traditions of Sunni Islam.
"When an Ottoman sultan established a new medrese, he would invite scholars from the Islamic world—for example, Murad
II brought scholars from Persia, such as ʻAlāʼ al-Dīn and Fakhr al-Dīn who helped enhance the reputation of the Ottoman
medrese". This reveals that the Islamic world was interconnected in the early modern period, as scholars travelled
to other Islamic states exchanging knowledge. This sense that the Ottoman Empire was becoming modernised through
globalization is also recognised by Hamadeh who says: "Change in the eighteenth century as the beginning of a long
and unilinear march toward westernisation reflects the two centuries of reformation in sovereign identity." İnalcık
also mentions that while scholars from for example Persia travelled to the Ottomans in order to share their knowledge,
Ottomans travelled as well to receive education from scholars of these Islamic lands, such as Egypt, Persia and Turkestan.
Hence, this reveals that similar to today's modern world, individuals from the early modern society travelled abroad
to receive education and share knowledge and that the world was more interconnected than it seems. Also, it reveals
how the system of "schooling" was also similar to today's modern world where students travel abroad to different
countries for studies. Examples of Ottoman madaris are those built by Mehmed the Conqueror: eight madaris "on either side of the mosque where there were eight higher madaris for specialised studies and eight lower medreses, which prepared students for these." The fact that they were built around or near mosques reveals
the religious impulses behind madrasa building and it reveals the interconnectedness between institutions of learning
and religion. Students who completed their education in the lower medreses became known as danismends. This shows that, as in education systems today, the Ottoman educational system involved different kinds of schools at different levels: there were lower madaris and specialised ones, and admission to the specialised ones required completing the classes of the lower ones in preparation for higher learning. Although Ottoman madaris had a number of different branches of study, such as calligraphic
sciences, oral sciences, and intellectual sciences, they primarily served the function of an Islamic centre for spiritual
learning. "The goal of all knowledge and in particular, of the spiritual sciences is knowledge of God." Religion,
for the most part, determines the significance and importance of each science. As İnalcık mentions: "Those which
aid religion are good and sciences like astrology are bad." However, even though mathematics, or studies in logic
were part of the madrasa's curriculum, they were all centred around religion. Even mathematics had a religious impulse
behind its teachings. "The Ulema of the Ottoman medreses held the view that hostility to logic and mathematics was
futile since these accustomed the mind to correct thinking and thus helped to reveal divine truths" – key word being
"divine". İnalcık also mentions that even philosophy was only allowed to be studied so that it helped to confirm
the doctrines of Islam. Hence, madaris were basically religious centres for religious teaching and learning
in the Ottoman world. Although scholars such as Goffman have argued that the Ottomans were highly tolerant and lived
in a pluralistic society, it seems that the schools that were the main centres of learning were in fact heavily religious
and were not religiously pluralistic, but centred around Islam. Similarly, in Europe "Jewish children learned the
Hebrew letters and texts of basic prayers at home, and then attended a school organised by the synagogue to study
the Torah." Wiesner-Hanks also says that Protestants also wanted to teach "proper religious values." This shows that
in the early modern period, Ottomans and Europeans were similar in their ideas about how schools should be managed
and what they should be primarily focused on. Thus, Ottoman madaris were very similar to present day schools in the
sense that they offered a wide range of studies; however, these studies, in their ultimate objective, aimed to further
solidify and consolidate Islamic practices and theories. As with any other country during the Early Modern Period,
such as Italy and Spain in Europe, Ottoman social life was interconnected with the medrese. Medreses were built as part of a mosque complex, where many programmes, such as aid to the poor through soup kitchens, were held under
the infrastructure of a mosque, which reveals the interconnectedness of religion and social life during this period.
"The mosques to which medreses were attached, dominated the social life in Ottoman cities." Social life was not dominated
by religion only in the Muslim world of the Ottoman Empire; it was also quite similar to the social life of Europe
during this period. As Goffman says: "Just as mosques dominated social life for the Ottomans, churches and synagogues
dominated life for the Christians and Jews as well." Hence, social life and the medrese were closely linked, since
medreses taught many curricula, such as religion, which highly governed social life in terms of establishing orthodoxy.
"They tried moving their developing state toward Islamic orthodoxy." Overall, the fact that mosques contained medreses
comes to show the relevance of education to religion in the sense that education took place within the framework
of religion and religion established social life by trying to create a common religious orthodoxy. Hence, medreses
were simply part of the social life of society as students came to learn the fundamentals of their societal values
and beliefs. In India the majority of these schools follow the Hanafi school of thought. The religious establishment
forms part of the mainly two large divisions within the country, namely the Deobandis, who dominate in numbers (of
whom the Darul Uloom Deoband constitutes one of the biggest madaris) and the Barelvis, who also make up a sizeable
portion (Sufi-oriented). Some notable establishments include: Al Jamiatul Ashrafia, Mubarakpur, Manzar Islam Bareilly,
Jamia Nizamdina New Delhi, Jamia Nayeemia Muradabad which is one of the largest learning centres for the Barelvis.
The HR ministry of the government of India has recently declared that a Central Madrasa
Board would be set up. This will enhance the education system of madaris in India. Though the madaris impart Quranic
education mainly, efforts are on to include Mathematics, Computers and science in the curriculum. In July 2015, the
state government of Maharashtra created a stir by de-recognising madrasa education, receiving criticism from several political parties, with the NCP accusing the ruling BJP of creating Hindu-Muslim friction in the state, and Kamal Farooqui of the All India Muslim Personal Law Board saying the move was "ill-designed". Today, the system of Arabic and
Islamic education has grown and further integrated with Kerala government administration. In 2005, an estimated 6,000
Muslim Arabic teachers taught in Kerala government schools, with over 500,000 Muslim students. State-appointed committees,
not private mosques or religious scholars outside the government, determine the curriculum and accreditation of new
schools and colleges. Primary education in Arabic and Islamic studies is available to Kerala Muslims almost entirely
in after-school madrasa programs, sharply unlike the full-time madaris common in north India, which may replace formal schooling. Arabic colleges (over eleven of which exist within the state-run University of Calicut and Kannur University) provide B.A. and master's-level degrees. At all levels, instruction is co-educational, with many women
instructors and professors. Islamic education boards are independently run by the following organizations, accredited
by the Kerala state government: Samastha Kerala Islamic Education Board, Kerala Nadvathul Mujahideen, Jamaat-e-Islami
Hind, and Jamiat Ulema-e-Hind. In Southeast Asia, Muslim students have a choice of attending a secular government
or an Islamic school. Madaris or Islamic schools are known as Sekolah Agama (Malay: religious school) in Malaysia
and Indonesia, โรงเรียนศาสนาอิสลาม (Thai: school of Islam) in Thailand and madaris in the Philippines. In countries
where Islam is not the majority or state religion, Islamic schools are found in regions such as southern Thailand
(near the Thai-Malaysian border) and the southern Philippines in Mindanao, where a significant Muslim population
can be found. In Singapore, madrasahs are private schools which are overseen by Majlis Ugama Islam Singapura (MUIS,
English: Islamic Religious Council of Singapore). There are six Madrasahs in Singapore, catering to students from
Primary 1 to Secondary 4. Four Madrasahs are coeducational and two are for girls. Students take a range of Islamic
Studies subjects in addition to mainstream MOE curriculum subjects and sit for the PSLE and GCE 'O' Levels like their
peers. In 2009, MUIS introduced the "Joint Madrasah System" (JMS), a joint collaboration of Madrasah Al-Irsyad Al-Islamiah
primary school and secondary schools Madrasah Aljunied Al-Islamiah (offering the ukhrawi, or religious stream) and
Madrasah Al-Arabiah Al-Islamiah (offering the academic stream). The JMS aims to introduce the International Baccalaureate
(IB) programme into the Madrasah Al-Arabiah Al-Islamiah by 2019. Students attending a madrasah are required to wear
the traditional Malay attire, including the songkok for boys and tudong for girls, in contrast to mainstream government
schools which ban religious headgear as Singapore is officially a secular state. For students who wish to attend
a mainstream school, they may opt to take classes on weekends at the madrasah instead of enrolling full-time. In
2004, madaris were mainstreamed in 16 Regions nationwide, primarily in Muslim-majority areas in Mindanao under the
auspices of the Department of Education (DepEd). The DepEd adopted Department Order No. 51, which instituted Arabic-language
and Islamic Values instruction for Muslim children in state schools, and authorised implementation of the Standard
Madrasa Curriculum (SMC) in private-run madaris. While there are state-recognised Islamic schools, such as Ibn Siena
Integrated School in the Islamic City of Marawi, Sarang Bangun LC in Zamboanga and SMIE in Jolo, their Islamic studies
programmes initially varied in application and content. The first Madressa established in North America, Al-Rashid
Islamic Institute, was established in Cornwall, Ontario in 1983 and has graduates who are Hafiz (Quran) and Ulama.
The seminary was established by Mazhar Alam under the direction of his teacher the leading Indian Tablighi scholar
Muhammad Zakariya Kandhlawi and focuses on the traditional Hanafi school of thought and shuns Salafist / Wahabi teachings.
Due to its proximity to the US border city of Massena, New York, the school has historically had a high ratio of US students.
Their most prominent graduate Shaykh Muhammad Alshareef completed his Hifz in the early 1990s then went on to deviate
from his traditional roots and form the Salafist organization the AlMaghrib Institute. Western commentators post-9/11
often perceive madaris as places of radical revivalism with a connotation of anti-Americanism and radical extremism,
frequently associated in the Western press with Wahhabi attitudes toward non-Muslims. In Arabic the word madrasa
simply means "school" and does not imply a political or religious affiliation, radical or otherwise. Madaris have
varied curricula, and are not all religious. Some madaris in India, for example, have a secularised identity. Although
early madaris were founded primarily to gain "knowledge of God" they also taught subjects such as mathematics and
poetry. For example, in the Ottoman Empire, "Madrasahs had seven categories of sciences that were taught, such as:
styles of writing, oral sciences like the Arabic language, grammar, rhetoric, and history and intellectual sciences,
such as logic." This is similar to the Western world, in which universities began as institutions of the Catholic
church.
Miami (/maɪˈæmi/; Spanish pronunciation: [maiˈami]) is a city located on the Atlantic coast in southeastern Florida and the
seat of Miami-Dade County. The 44th-most populated city proper in the United States, with a population of 430,332,
it is the principal, central, and most populous city of the Miami metropolitan area, and the second most populous
metropolis in the Southeastern United States after Washington, D.C. According to the U.S. Census Bureau, Miami's
metro area is the eighth-most populous and fourth-largest urban area in the United States, with a population of around
5.5 million. Miami is a major center, and a leader in finance, commerce, culture, media, entertainment, the arts,
and international trade. In 2012, Miami was classified as an Alpha−World City in the World Cities Study Group's inventory.
In 2010, Miami ranked seventh in the United States in terms of finance, commerce, culture, entertainment, fashion,
education, and other sectors. It ranked 33rd among global cities. In 2008, Forbes magazine ranked Miami "America's
Cleanest City", for its year-round good air quality, vast green spaces, clean drinking water, clean streets, and
city-wide recycling programs. According to a 2009 UBS study of 73 world cities, Miami was ranked as the richest city
in the United States, and the world's fifth-richest city in terms of purchasing power. Miami is nicknamed the "Capital
of Latin America", is the second largest U.S. city with a Spanish-speaking majority, and the largest city with a
Cuban-American plurality. Downtown Miami is home to the largest concentration of international banks in the United
States, and many large national and international companies. The Civic Center is a major center for hospitals, research
institutes, medical centers, and biotechnology industries. For more than two decades, the Port of Miami, known as
the "Cruise Capital of the World", has been the number one cruise passenger port in the world. It accommodates some
of the world's largest cruise ships and operations, and is the busiest port in both passenger traffic and cruise
lines. Miami is noted as "the only major city in the United States conceived by a woman, Julia Tuttle", a local citrus
grower and a wealthy Cleveland native. The Miami area was better known as "Biscayne Bay Country" in the early years
of its growth. In the late 19th century, reports described the area as a promising wilderness. The area was also
characterized as "one of the finest building sites in Florida." The Great Freeze of 1894–95 hastened Miami's growth,
as the crops of the Miami area were the only ones in Florida that survived. Julia Tuttle subsequently convinced Henry
Flagler, a railroad tycoon, to expand his Florida East Coast Railway to the region, for which she became known as
"the mother of Miami." Miami was officially incorporated as a city on July 28, 1896 with a population of just over
300. It was named for the nearby Miami River, derived from Mayaimi, the historic name of Lake Okeechobee. Black labor
played a crucial role in Miami's early development. At the beginning of the 20th century, migrants from the Bahamas and African-Americans constituted 40 percent of the city's population. Whatever their role in the city's growth, their community was confined to a small space. When landlords began to rent homes to African-Americans in
neighborhoods close to Avenue J (what would later become NW Fifth Avenue), a gang of white men with torches visited
the renting families and warned them to move or be bombed. During the early 20th century, northerners were attracted
to the city, and Miami prospered during the 1920s with an increase in population and infrastructure. The legacy of
Jim Crow was embedded in these developments. Miami's chief of police, H. Leslie Quigg, did not hide the fact that
he, like many other white Miami police officers, was a member of the Ku Klux Klan. Unsurprisingly, these officers
enforced social codes far beyond the written law. Quigg, for example, "personally and publicly beat a colored bellboy
to death for speaking directly to a white woman." After Fidel Castro rose to power in Cuba in 1959, many wealthy
Cubans sought refuge in Miami, further increasing the population. The city developed businesses and cultural amenities
as part of the New South. In the 1980s and 1990s, South Florida weathered social problems related to drug wars, immigration
from Haiti and Latin America, and the widespread destruction of Hurricane Andrew. Racial and cultural tensions were
sometimes sparked, but the city developed in the latter half of the 20th century as a major international, financial,
and cultural center. It is the second-largest U.S. city (after El Paso, Texas) with a Spanish-speaking majority,
and the largest city with a Cuban-American plurality. Miami and its suburbs are located on a broad plain between
the Florida Everglades to the west and Biscayne Bay to the east, which also extends from Florida Bay north to Lake
Okeechobee. The elevation of the area never rises above 40 ft (12 m) and averages around 6 ft (1.8 m) above mean
sea level in most neighborhoods, especially near the coast. The highest undulations are found along the coastal Miami
Rock Ridge, whose substrate underlies most of the eastern Miami metropolitan region. The main portion of the city
lies on the shores of Biscayne Bay which contains several hundred natural and artificially created barrier islands,
the largest of which contains Miami Beach and South Beach. The Gulf Stream, a warm ocean current, runs northward
just 15 miles (24 km) off the coast, allowing the city's climate to stay warm and mild all year. The surface bedrock
under the Miami area is called Miami oolite or Miami limestone. This bedrock, covered by a thin layer of soil, is no more than 50 feet (15 m) thick. Miami limestone formed as a result of the drastic changes in sea level
associated with recent glaciations or ice ages. Beginning some 130,000 years ago the Sangamonian Stage raised sea
levels to approximately 25 feet (8 m) above the current level. All of southern Florida was covered by a shallow sea.
Several parallel lines of reef formed along the edge of the submerged Florida plateau, stretching from the present
Miami area to what is now the Dry Tortugas. The area behind this reef line was in effect a large lagoon, and the
Miami limestone formed throughout the area from the deposition of oolites and the shells of bryozoans. Starting about
100,000 years ago the Wisconsin glaciation began lowering sea levels, exposing the floor of the lagoon. By 15,000
years ago, the sea level had dropped to 300 to 350 feet (90 to 110 m) below the contemporary level. The sea level
rose quickly after that, stabilizing at the current level about 4000 years ago, leaving the mainland of South Florida
just above sea level. Beneath the plain lies the Biscayne Aquifer, a natural underground source of fresh water that
extends from southern Palm Beach County to Florida Bay, with its highest point peaking around the cities of Miami
Springs and Hialeah. Most of the Miami metropolitan area obtains its drinking water from this aquifer. As a result
of the aquifer, it is not possible to dig more than 15 to 20 ft (5 to 6 m) beneath the city without hitting water,
which impedes underground construction, though some underground parking garages exist. For this reason, the mass
transit systems in and around Miami are elevated or at-grade. Miami is partitioned into many different sections, roughly North, South, West and Downtown. The heart of the city is Downtown Miami, which is technically
on the eastern side of the city. This area includes Brickell, Virginia Key, Watson Island, and PortMiami. Downtown
is South Florida's central business district, and Florida's largest and most influential central business district.
Downtown has the largest concentration of international banks in the U.S. along Brickell Avenue. Downtown is home
to many major banks, courthouses, financial headquarters, cultural and tourist attractions, schools, parks and a
large residential population. East of Downtown, across Biscayne Bay is South Beach. Just northwest of Downtown, is
the Civic Center, which is Miami's center for hospitals, research institutes and biotechnology with hospitals such
as Jackson Memorial Hospital, Miami VA Hospital, and the University of Miami's Leonard M. Miller School of Medicine.
The southern side of Miami includes Coral Way, The Roads and Coconut Grove. Coral Way is a historic residential neighborhood
built in 1922 connecting Downtown with Coral Gables, and is home to many old homes and tree-lined streets. Coconut
Grove was established in 1825 and is the location of Miami's City Hall in Dinner Key, the Coconut Grove Playhouse,
CocoWalk, many nightclubs, bars, restaurants and bohemian shops, and as such, is very popular with local college
students. It is a historic neighborhood with narrow, winding roads, and a heavy tree canopy. Coconut Grove has many
parks and gardens such as Villa Vizcaya, The Kampong, The Barnacle Historic State Park, and is the home of the Coconut
Grove Convention Center and numerous historic homes and estates. The northern side of Miami includes Midtown, a diverse district home to many West Indians, Hispanics, European Americans, bohemians, and artists. Edgewater and Wynwood, neighborhoods of Midtown, are made up mostly of high-rise residential towers and are home to the Adrienne Arsht Center for the Performing Arts. The wealthier residents usually live in the northeastern part, in Midtown, the Design District, and the Upper East Side, which has many sought-after 1920s homes and is home to the MiMo Historic District, showcasing a style of architecture that originated in Miami in the 1950s. The northern side of Miami also has
notable African American and Caribbean immigrant communities such as Little Haiti, Overtown (home of the Lyric Theater),
and Liberty City. Miami has a tropical monsoon climate (Köppen climate classification Am) with hot and humid summers
and short, warm winters with a marked dry season. Its sea-level elevation, coastal location, position
just above the Tropic of Cancer, and proximity to the Gulf Stream shape its climate. With January averaging 67.2
°F (19.6 °C), winter features mild to warm temperatures; cool air usually settles after the passage of a cold front,
which produces much of the little amount of rainfall during the season. Lows occasionally fall below 50 °F (10 °C),
but very rarely below 35 °F (2 °C). Highs generally range between 70 and 77 °F (21–25 °C). The wet season begins sometime in May and ends in mid-October. During this period, temperatures are in the mid-80s to low 90s °F (29–35 °C), accompanied by high humidity, though the heat is often relieved by afternoon thunderstorms or a sea breeze that develops off the Atlantic Ocean, which lowers temperatures, though conditions remain very muggy. Much of the year's
55.9 inches (1,420 mm) of rainfall occurs during this period. Dewpoints in the warm months range from 71.9 °F (22.2
°C) in June to 73.7 °F (23.2 °C) in August. The city proper is home to less than one-thirteenth of the population
of South Florida. Miami is the 42nd-most populous city in the United States. The Miami metropolitan area, which includes
Miami-Dade, Broward and Palm Beach counties, had a combined population of more than 5.5 million people, the seventh-largest in the United States, and is the largest metropolitan area in the Southeastern United States. As of 2008,
the United Nations estimates that the Miami Urban Agglomeration is the 44th-largest in the world. In 1960, non-Hispanic
whites represented 80% of Miami-Dade county's population. In 1970, the Census Bureau reported Miami's population
as 45.3% Hispanic, 32.9% non-Hispanic White, and 22.7% Black. Miami's explosive population growth has been driven
by internal migration from other parts of the country, primarily up until the 1980s, as well as by immigration, primarily
from the 1960s to the 1990s. Today, immigration to Miami has slowed significantly, and the city's growth is attributed largely to its rapid urbanization and high-rise construction, which have increased population densities in inner-city neighborhoods such as Downtown, Brickell, and Edgewater; one area in Downtown alone saw a 2,069% increase in population in the 2010 Census. Miami is regarded as more of a multicultural mosaic than a melting pot, with
residents still maintaining much of, or some of their cultural traits. The overall culture of Miami is heavily influenced
by its large population of Hispanics and blacks mainly from the Caribbean islands. Several large companies are headquartered
in or around Miami, including but not limited to: Akerman Senterfitt, Alienware, Arquitectonica, Arrow Air, Bacardi,
Benihana, Brightstar Corporation, Burger King, Celebrity Cruises, Carnival Corporation, Carnival Cruise Lines, Crispin
Porter + Bogusky, Duany Plater-Zyberk & Company, Espírito Santo Financial Group, Fizber.com, Greenberg Traurig, Holland
& Knight, Inktel Direct, Interval International, Lennar, Navarro Discount Pharmacies, Norwegian Cruise Lines, Oceania
Cruises, Perry Ellis International, RCTV International, Royal Caribbean Cruise Lines, Ryder Systems, Seabourn Cruise
Line, Sedano's, Telefónica USA, UniMÁS, Telemundo, Univision, U.S. Century Bank, Vector Group and World Fuel Services.
Because of its proximity to Latin America, Miami serves as the headquarters of Latin American operations for more
than 1400 multinational corporations, including AIG, American Airlines, Cisco, Disney, Exxon, FedEx, Kraft Foods,
LEO Pharma Americas, Microsoft, Yahoo, Oracle, SBC Communications, Sony, Symantec, Visa International, and Wal-Mart.
Miami is a major television production center, and the most important city in the U.S. for Spanish language media.
Univisión, Telemundo and UniMÁS have their headquarters in Miami, along with their production studios. The Telemundo
Television Studios produces much of the original programming for Telemundo, such as their telenovelas and talk shows.
In 2011, 85% of Telemundo's original programming was filmed in Miami. Miami is also a major music recording center,
with the Sony Music Latin and Universal Music Latin Entertainment headquarters in the city, along with many other
smaller record labels. The city also attracts many artists for music video and film shootings. Since 2001, Miami
has been undergoing a large building boom with more than 50 skyscrapers rising over 400 feet (122 m) built or currently
under construction in the city. Miami's skyline is ranked third-most impressive in the U.S., behind New York City
and Chicago, and 19th in the world according to the Almanac of Architecture and Design. The city currently has the
eight tallest (as well as thirteen of the fourteen tallest) skyscrapers in the state of Florida, with the tallest
being the 789-foot (240 m) Four Seasons Hotel & Tower. During the mid-2000s, the city witnessed its largest real
estate boom since the Florida land boom of the 1920s. During this period, the city had well over a hundred approved
high-rise construction projects, of which 50 were actually built. In 2007, however, the housing market crashed, causing widespread foreclosures. This rapid high-rise construction has led to fast population growth in the city's
inner neighborhoods, primarily in Downtown, Brickell and Edgewater, with these neighborhoods becoming the fastest-growing
areas in the city. The Miami area ranks 8th in the nation in foreclosures. In 2011, Forbes magazine named Miami the
second-most miserable city in the United States due to its high foreclosure rate and past decade of corruption among
public officials. In 2012, Forbes magazine named Miami the most miserable city in the United States because of a
crippling housing crisis that has cost multitudes of residents their homes and jobs. The metro area has one of the
highest violent crime rates in the country and workers face lengthy daily commutes. Miami International Airport and
PortMiami are among the nation's busiest ports of entry, especially for cargo from South America and the Caribbean.
The Port of Miami is the world's busiest cruise port, and MIA is the busiest airport in Florida, and the largest
gateway between the United States and Latin America. Additionally, the city has the largest concentration of international
banks in the country, primarily along Brickell Avenue in Brickell, Miami's financial district. Due to its strength
in international business, finance and trade, many international banks have offices in Downtown such as Espírito
Santo Financial Group, which has its U.S. headquarters in Miami. Miami was also the host city of the 2003 Free Trade
Area of the Americas negotiations, and is one of the leading candidates to become the trading bloc's headquarters.
Tourism is also an important industry in Miami. Along with finance and business, the beaches, conventions, festivals
and events draw over 38 million visitors annually into the city, from across the country and around the world, spending
$17.1 billion. The Art Deco District in South Beach is reputed to be one of the most glamorous in the world for its
nightclubs, beaches, historical buildings, and shopping. Annual events such as the Sony Ericsson Open, Art Basel,
Winter Music Conference, South Beach Wine & Food Festival, and Mercedes-Benz Fashion Week Miami attract millions
to the metropolis every year. According to the U.S. Census Bureau, in 2004, Miami had the third highest incidence
of family incomes below the federal poverty line in the United States, making it the third poorest city in the USA,
behind only Detroit, Michigan (ranked #1) and El Paso, Texas (ranked #2). Miami is also one of the very few major U.S. cities whose local government has gone bankrupt, in 2001. However, since that time, Miami has experienced a revival: in
2008, Miami was ranked as "America's Cleanest City" according to Forbes for its year-round good air quality, vast
green spaces, clean drinking water, clean streets and city-wide recycling programs. In a 2009 UBS study of 73 world
cities, Miami was ranked as the richest city in the United States (of four U.S. cities included in the survey) and
the world's fifth-richest city in terms of purchasing power. In addition to annual festivals such as the Calle Ocho
Festival and Carnaval Miami, Miami is home to many entertainment venues, theaters, museums, parks and performing
arts centers. The newest addition to the Miami arts scene is the Adrienne Arsht Center for the Performing Arts, the
second-largest performing arts center in the United States after the Lincoln Center in New York City, and is the
home of the Florida Grand Opera. Within it are the Ziff Ballet Opera House, the center's largest venue, the Knight
Concert Hall, the Carnival Studio Theater and the Peacock Rehearsal Studio. The center attracts many large-scale
operas, ballets, concerts, and musicals from around the world and is Florida's grandest performing arts center. Other
performing arts venues in Miami include the Gusman Center for the Performing Arts, Coconut Grove Playhouse, Colony
Theatre, Lincoln Theatre, New World Center, Actor's Playhouse at the Miracle Theatre, Jackie Gleason Theatre, Manuel
Artime Theater, Ring Theatre, Playground Theatre, Wertheim Performing Arts Center, the Fair Expo Center and the Bayfront
Park Amphitheater for outdoor music events. In the early 1970s, the Miami disco sound came to life with TK Records,
featuring the music of KC and the Sunshine Band, with such hits as "Get Down Tonight", "(Shake, Shake, Shake) Shake
Your Booty" and "That's the Way (I Like It)"; and the Latin-American disco group, Foxy (band), with their hit singles
"Get Off" and "Hot Number". Miami-area natives George McCrae and Teri DeSario were also popular music artists during
the 1970s disco era. The Bee Gees moved to Miami in 1975 and have lived there ever since. The Miami-influenced Gloria Estefan and the Miami Sound Machine hit the popular music scene with their Cuban-oriented sound and had hits in the 1980s with "Conga" and "Bad Boys". Miami is also considered a "hot spot" for Freestyle, a style of dance music popular in the 1980s and 1990s that was heavily influenced by electro, hip-hop, and disco. Many popular Freestyle
acts such as Pretty Tony, Debbie Deb, Stevie B, and Exposé, originated in Miami. Indie/folk acts Cat Power and Iron
& Wine are based in the city, while alternative hip hop artist Sage Francis, electro artist Uffie, and the electroclash
duo Avenue D were born in Miami, but musically based elsewhere. Also, ska punk band Against All Authority is from
Miami, and rock/metal bands Nonpoint and Marilyn Manson each formed in neighboring Fort Lauderdale. Cuban American
female recording artist, Ana Cristina, was born in Miami in 1985. This was also a period of alternatives to nightclubs: the warehouse party, acid house, rave, and outdoor festival scenes of the late 1980s and early 1990s were havens for the latest trends in electronic dance music, especially house and its ever-more hypnotic, synthetic offspring, techno and trance, in clubs like the infamous Warsaw Ballroom (better known as Warsaw) and The Mix, where David Padilla was the resident DJ for both, and on radio. The new sound fed back into mainstream clubs across the country. The
scene in SoBe, along with a bustling secondhand market for electronic instruments and turntables, had a strong democratizing
effect, offering amateur, "bedroom" DJs the opportunity to become proficient and popular as both music players and
producers, regardless of the whims of the professional music and club industries. Some of these notable DJs are John Benitez (better known as Jellybean Benitez), Danny Tenaglia, and David Padilla. Cuban immigrants in the 1960s brought
the Cuban sandwich, medianoche, Cuban espresso, and croquetas, all of which have grown in popularity among Miamians and have become symbols of the city's varied cuisine. Today, these are part of the local culture and can be found throughout the city in window cafés, particularly outside supermarkets and restaurants. Versailles restaurant in Little Havana is a landmark eatery of Miami. Located on the Atlantic Ocean, and with a long history
as a seaport, Miami is also known for its seafood, with many seafood restaurants located along the Miami River, and
in and around Biscayne Bay. Miami is also the home of restaurant chains such as Burger King, Tony Roma's and Benihana.
The Miami area has a unique dialect (commonly called the "Miami accent") which is widely spoken. The dialect developed among second- or third-generation Hispanics, including Cuban-Americans, whose first language was English (though some non-Hispanic white, black, and other residents who were born and raised in the Miami area tend to adopt it as well).
It is based on a fairly standard American accent but with some changes very similar to dialects in the Mid-Atlantic
(especially the New York area dialect, Northern New Jersey English, and New York Latino English.) Unlike Virginia
Piedmont, Coastal Southern American, and Northeast American dialects and Florida Cracker dialect (see section below),
"Miami accent" is rhotic; it also incorporates a rhythm and pronunciation heavily influenced by Spanish (wherein
rhythm is syllable-timed). However, this is a native dialect of English, not learner English or interlanguage; it
is possible to differentiate this variety from an interlanguage spoken by second-language speakers in that "Miami
accent" does not generally display the following features: there is no addition of /ɛ/ before initial consonant clusters with /s/, speakers do not confuse /dʒ/ with /j/ (e.g., Yale with jail), and /r/ and /rr/ are pronounced as the alveolar approximant [ɹ] instead of the alveolar tap [ɾ] or alveolar trill [r] of Spanish. Miami's four main sports teams are
the Miami Dolphins of the National Football League, the Miami Heat of the National Basketball Association, the Miami
Marlins of Major League Baseball, and the Florida Panthers of the National Hockey League. As well as having all four
major professional teams, Miami is also home to the Major League Soccer expansion team led by David Beckham, Sony
Ericsson Open for professional tennis, numerous greyhound racing tracks, marinas, jai alai venues, and golf courses.
The city's streets have hosted professional auto races, the Miami Indy Challenge and later the Grand Prix Americas.
The Homestead-Miami Speedway oval hosts NASCAR national races. Miami's tropical weather allows for year-round outdoors
activities. The city has numerous marinas, rivers, bays, canals, and the Atlantic Ocean, which make boating, sailing,
and fishing popular outdoors activities. Biscayne Bay has numerous coral reefs which make snorkeling and scuba diving
popular. There are over 80 parks and gardens in the city. The largest and most popular parks are Bayfront Park and
Bicentennial Park (located in the heart of Downtown and the location of the American Airlines Arena and Bayside Marketplace),
Tropical Park, Peacock Park, Morningside Park, Virginia Key, and Watson Island. The government of the City of Miami
(proper) uses the mayor-commissioner type of system. The city commission consists of five commissioners who are
elected from single member districts. The city commission constitutes the governing body with powers to pass ordinances,
adopt regulations, and exercise all powers conferred upon the city in the city charter. The mayor is elected at large
and appoints a city manager. The City of Miami is governed by Mayor Tomás Regalado and five city commissioners who oversee the five districts in the city. The commission's regular meetings are held at Miami City Hall, which is located at 3500 Pan American Drive on Dinner Key in the neighborhood of Coconut Grove. Miami has one of the largest television
markets in the nation and the second largest in the state of Florida. Miami has several major newspapers, the main and largest being The Miami Herald, which has over a million readers; El Nuevo Herald is the largest Spanish-language newspaper. Both papers left their longtime home in downtown Miami in 2013 and are now headquartered at the former home of U.S. Southern Command in Doral. Other major newspapers include Miami Today, headquartered in Brickell, Miami New Times, headquartered in Midtown, Miami Sun Post, South Florida Business Journal, Miami Times, and Biscayne Boulevard Times. An additional Spanish-language newspaper, Diario Las Americas, also serves Miami. Several student newspapers are published by the local universities, including the oldest, the University of Miami's The Miami Hurricane, Florida International University's The Beacon, Miami-Dade College's The Metropolis, and Barry University's The Buccaneer, amongst others. Many
neighborhoods and neighboring areas also have their own local newspapers such as the Aventura News, Coral Gables
Tribune, Biscayne Bay Tribune, and the Palmetto Bay News. Miami is also the headquarters and main production city
of many of the world's largest television networks, record label companies, broadcasting companies and production
facilities, such as Telemundo, TeleFutura, Galavisión, Mega TV, Univisión, Univision Communications, Inc., Universal
Music Latin Entertainment, RCTV International and Sunbeam Television. In 2009, Univisión announced plans to build
a new production studio in Miami, dubbed 'Univisión Studios'. Univisión Studios is currently headquartered in Miami,
and will produce programming for all of Univisión Communications' television networks. Miami International Airport
serves as the primary international airport of the Greater Miami Area. One of the busiest international airports
in the world, Miami International Airport caters to over 35 million passengers a year. The airport is a major hub
and the single largest international gateway for American Airlines. Miami International is the busiest airport in
Florida, and is the United States' second-largest international port of entry for foreign air passengers behind New
York's John F. Kennedy International Airport, and is the seventh-largest such gateway in the world. The airport's
extensive international route network includes non-stop flights to over seventy international cities in North and
South America, Europe, Asia, and the Middle East. Miami is home to one of the largest ports in the United States,
the PortMiami. It is the largest cruise ship port in the world. The port is often called the "Cruise Capital of the
World" and the "Cargo Gateway of the Americas". It has retained its status as the number one cruise/passenger port
in the world for well over a decade accommodating the largest cruise ships and the major cruise lines. In 2007, the
port served 3,787,410 passengers. Additionally, the port is one of the nation's busiest cargo ports, importing 7.8
million tons of cargo in 2007. Among North American ports, it ranks second only to the Port of South Louisiana in
New Orleans in terms of cargo tonnage imported/exported from Latin America. The port is on 518 acres (2 km2) and
has 7 passenger terminals. China is the port's number one import country, and Honduras is the number one export country.
Miami is home to more cruise line headquarters than any other city in the world: Carnival Cruise Lines, Celebrity Cruises, Norwegian Cruise Line, Oceania Cruises, and Royal Caribbean International. In 2014, the Port of Miami Tunnel was completed and now serves PortMiami. Miami's heavy-rail rapid transit system, Metrorail, is an elevated system
comprising two lines and 23 stations along 24.4 miles (39.3 km) of track. Metrorail connects the urban western suburbs
of Hialeah, Medley, and inner-city Miami with suburban The Roads, Coconut Grove, Coral Gables, South Miami and urban
Kendall via the central business districts of Miami International Airport, the Civic Center, and Downtown. A free,
elevated people mover, Metromover, operates 21 stations on three different lines in greater Downtown Miami, with
a station at roughly every two blocks of Downtown and Brickell. Several expansion projects are being funded by a
transit development sales tax surcharge throughout Miami-Dade County. Construction is currently underway on the Miami
Intermodal Center and Miami Central Station, a massive transportation hub servicing Metrorail, Amtrak, Tri-Rail,
Metrobus, Greyhound Lines, taxis, rental cars, MIA Mover, private automobiles, bicycles and pedestrians adjacent
to Miami International Airport. The Miami Intermodal Center is expected to be completed by winter 2011,
and will serve over 150,000 commuters and travelers in the Miami area. Phase I of Miami Central Station is scheduled
to begin service in the spring of 2012, and Phase II in 2013. Miami is the southern terminus of Amtrak's Atlantic
Coast services, running two lines, the Silver Meteor and the Silver Star, both terminating in New York City. The
Miami Amtrak Station is located in the suburb of Hialeah near the Tri-Rail/Metrorail Station on NW 79 St and NW 38
Ave. Current construction of the Miami Central Station will move all Amtrak operations from its current out-of-the-way
location to a centralized location with Metrorail, MIA Mover, Tri-Rail, Miami International Airport, and the Miami
Intermodal Center all within the same station closer to Downtown. The station was expected to be completed by 2012,
but experienced several delays and was later expected to be completed in late 2014, again pushed back to early 2015.
Florida High Speed Rail was a proposed government backed high-speed rail system that would have connected Miami,
Orlando, and Tampa. The first phase was planned to connect Orlando and Tampa and was offered federal funding, but
it was turned down by Governor Rick Scott in 2011. The second phase of the line was envisioned to connect Miami.
By 2014, a private project known as All Aboard Florida, by an affiliate of the historic Florida East Coast Railway, began
construction of a higher-speed rail line in South Florida that is planned to eventually terminate at Orlando International
Airport. Miami's road system is based along the numerical "Miami Grid" where Flagler Street forms the east-west baseline
and Miami Avenue forms the north-south meridian. The corner of Flagler Street and Miami Avenue is in the middle of
Downtown in front of the Downtown Macy's (formerly the Burdine's headquarters). The Miami grid is primarily numerical
so that, for example, all street addresses north of Flagler Street and west of Miami Avenue have "NW" in their address.
Because its point of origin is in Downtown, which is close to the coast, the "NW" and "SW" quadrants are
much larger than the "SE" and "NE" quadrants. Many roads, especially major ones, are also named (e.g., Tamiami Trail/SW
8th St), although, with exceptions, the number is in more common usage among locals. Miami has six major causeways that span Biscayne Bay, connecting the western mainland with the eastern barrier islands along the Atlantic
Ocean. The Rickenbacker Causeway is the southernmost causeway and connects Brickell to Virginia Key and Key Biscayne.
The Venetian Causeway and MacArthur Causeway connect Downtown with South Beach. The Julia Tuttle Causeway connects
Midtown and Miami Beach. The 79th Street Causeway connects the Upper East Side with North Beach. The northernmost
causeway, the Broad Causeway, is the smallest of Miami's six causeways, and connects North Miami with Bal Harbour.
In recent years the city government, under Mayor Manny Diaz, has taken an ambitious stance in support of bicycling
in Miami for both recreation and commuting. Every month, the city hosts "Bike Miami", where major streets in Downtown
and Brickell are closed to automobiles, but left open for pedestrians and bicyclists. The event began in November
2008, and has doubled in popularity from 1,500 participants to about 3,000 in the October 2009 Bike Miami. This is
the longest-running such event in the US. In October 2009, the city also approved an extensive 20-year plan for bike
routes and paths around the city. The city has begun construction of bike routes as of late 2009, and ordinances
requiring bike parking in all future construction in the city became mandatory as of October 2009.
In 1682, William Penn founded Philadelphia to serve as the capital of the Pennsylvania Colony. Philadelphia played an instrumental
role in the American Revolution as a meeting place for the Founding Fathers of the United States, who signed the
Declaration of Independence in 1776 and the Constitution in 1787. Philadelphia was one of the nation's capitals in
the Revolutionary War, and served as temporary U.S. capital while Washington, D.C., was under construction. In the
19th century, Philadelphia became a major industrial center and railroad hub that grew from an influx of European
immigrants. It became a prime destination for African-Americans in the Great Migration and surpassed two million
residents by 1950. Reflecting shifts underway in the nation's economy after 1960, Philadelphia experienced
a loss of manufacturing companies and jobs to lower-taxed regions of the United States and often overseas. As a result, the
economic base of Philadelphia, which had historically been manufacturing, declined significantly. In addition, consolidation
in several American industries (retailing, financial services and health care in particular) reduced the number of
companies headquartered in Philadelphia. The economic impact of these changes would reduce Philadelphia's tax base
and the resources of local government. Philadelphia struggled through a long period of adjustment to these economic
changes, coupled with significant demographic change as wealthier residents moved into the nearby suburbs and more
immigrants moved into the city. The city in fact approached bankruptcy in the late 1980s. Revitalization began in
the 1990s, with gentrification turning around many neighborhoods and reversing its decades-long trend of population
loss. The area's many universities and colleges make Philadelphia a top international study destination, as the city
has evolved into an educational and economic hub. With a gross domestic product of $388 billion, Philadelphia ranks
ninth among world cities and fourth in the nation. Philadelphia is the center of economic activity in Pennsylvania
and is home to seven Fortune 1000 companies. The Philadelphia skyline is growing, with several nationally prominent
skyscrapers. The city is known for its arts, culture, and history, attracting over 39 million domestic tourists in
2013. Philadelphia has more outdoor sculptures and murals than any other American city, and Fairmount Park is the
largest landscaped urban park in the world. The 67 National Historic Landmarks in the city helped account for the
$10 billion generated by tourism. Philadelphia is the birthplace of the United States Marine Corps, and is also the
home of many U.S. firsts, including the first library (1731), first hospital (1751) and medical school (1765), first
Capitol (1777), first stock exchange (1790), first zoo (1874), and first business school (1881). Philadelphia is
the only World Heritage City in the United States. Before Europeans arrived, the Philadelphia area was home to the
Lenape (Delaware) Indians in the village of Shackamaxon. The Lenape are a Native American tribe and First Nations
band government. They are also called Delaware Indians and their historical territory was along the Delaware River
watershed, western Long Island and the Lower Hudson Valley.[a] Most Lenape were pushed out of their Delaware homeland
during the 18th century by expanding European colonies, exacerbated by losses from intertribal conflicts. Lenape
communities were weakened by newly introduced diseases, mainly smallpox, and violent conflict with Europeans. Iroquois
people occasionally fought the Lenape. Surviving Lenape moved west into the upper Ohio River basin. The American
Revolutionary War and United States' independence pushed them further west. In the 1860s, the United States government
sent most Lenape remaining in the eastern United States to the Indian Territory (present-day Oklahoma and surrounding
territory) under the Indian removal policy. In the 21st century, most Lenape now reside in the US state of Oklahoma,
with some communities living also in Wisconsin, Ontario (Canada) and in their traditional homelands. Europeans came
to the Delaware Valley in the early 17th century, with the first settlements founded by the Dutch, who in 1623 built
Fort Nassau on the Delaware River opposite the Schuylkill River in what is now Brooklawn, New Jersey. The Dutch considered
the entire Delaware River valley to be part of their New Netherland colony. In 1638, Swedish settlers led by a renegade
Dutchman established the colony of New Sweden at Fort Christina (present-day Wilmington, Delaware) and quickly spread
out in the valley. In 1644, New Sweden supported the Susquehannocks in their military defeat of the English colony
of Maryland. In 1648, the Dutch built Fort Beversreede on the west bank of the Delaware, south of the Schuylkill
near the present-day Eastwick section of Philadelphia, to reassert their dominion over the area. The Swedes responded
by building Fort Nya Korsholm, named New Korsholm after a town that is now in Finland. In 1655, a Dutch military
campaign led by New Netherland Director-General Peter Stuyvesant took control of the Swedish colony, ending its claim
to independence, although the Swedish and Finnish settlers continued to have their own militia, religion, and court,
and to enjoy substantial autonomy under the Dutch. The English conquered the New Netherland colony in 1664, but the
situation did not really change until 1682, when the area was included in William Penn's charter for Pennsylvania.
In 1681, in partial repayment of a debt, Charles II of England granted William Penn a charter for what would become
the Pennsylvania colony. Despite the royal charter, Penn bought the land from the local Lenape to be on good terms
with the Native Americans and ensure peace for his colony. Penn made a treaty of friendship with Lenape chief Tammany
under an elm tree at Shackamaxon, in what is now the city's Fishtown section. Penn named the city Philadelphia, which
is Greek for brotherly love (from philos, "love" or "friendship", and adelphos, "brother"). As a Quaker, Penn had
experienced religious persecution and wanted his colony to be a place where anyone could worship freely. This tolerance,
far greater than that afforded by most other colonies, led to better relations with the local Native tribes and fostered
Philadelphia's rapid growth into America's most important city. Penn planned a city on the Delaware River to serve
as a port and place for government. Hoping that Philadelphia would become more like an English rural town instead
of a city, Penn laid out roads on a grid plan to keep houses and businesses spread far apart, with areas for gardens
and orchards. The city's inhabitants did not follow Penn's plans; they crowded along the Delaware River near the port,
and subdivided and resold their lots. Before Penn left Philadelphia for the last time, he issued the Charter of 1701
establishing it as a city. It became an important trading center, poor at first, but with tolerable living conditions
by the 1750s. Benjamin Franklin, a leading citizen, helped improve city services and founded new ones, such as fire
protection, a library, and one of the American colonies' first hospitals. Philadelphia's importance and central location
in the colonies made it a natural center for America's revolutionaries. By the 1750s, Philadelphia had surpassed
Boston to become the largest city and busiest port in British America, and second in the British Empire, behind London.
The city hosted the First Continental Congress before the American Revolutionary War; the Second Continental Congress,
which signed the United States Declaration of Independence, during the war; and the Constitutional Convention (1787)
after the war. Several battles were fought in and near Philadelphia as well. The state government left Philadelphia
in 1799, and the federal government was moved to Washington, DC in 1800 with completion of the White House and Capitol.
The city remained the young nation's largest with a population of nearly 50,000 at the turn of the 19th century;
it was a financial and cultural center. Before 1800, its free black community founded the African Methodist Episcopal
Church (AME), the first independent black denomination in the country, and the first black Episcopal Church. The
free black community also established many schools for its children, with the help of Quakers. New York City soon
surpassed Philadelphia in population, but with the construction of roads, canals, and railroads, Philadelphia became
the first major industrial city in the United States. Throughout the 19th century, Philadelphia had a variety of
industries and businesses, the largest being textiles. Major corporations in the 19th and early 20th centuries included
the Baldwin Locomotive Works, William Cramp and Sons Ship and Engine Building Company, and the Pennsylvania Railroad.
Industry, along with the U.S. Centennial, was celebrated in 1876 with the Centennial Exposition, the first official
World's Fair in the United States. Immigrants, mostly Irish and German, settled in Philadelphia and the surrounding
districts. The rise in population of the surrounding districts helped lead to the Act of Consolidation of 1854, which
extended the city limits of Philadelphia from the 2 square miles of present-day Center City to the roughly 130 square
miles of Philadelphia County. These immigrants were largely responsible for the first general strike in North America
in 1835, in which workers in the city won the ten-hour workday. The city was a destination for thousands of Irish
immigrants fleeing the Great Famine in the 1840s; housing for them was developed south of South Street, and was later
occupied by succeeding immigrants. They established a network of Catholic churches and schools, and dominated the
Catholic clergy for decades. Anti-Irish, anti-Catholic Nativist riots had erupted in Philadelphia in 1844. In the
latter half of the century, immigrants from Russia, Eastern Europe and Italy; and African Americans from the southern
U.S. settled in the city. Between 1880 and 1930, the African-American population of Philadelphia increased from 31,699
to 219,559. Twentieth-century black newcomers were part of the Great Migration out of the rural South to northern
and midwestern industrial cities. By the 20th century, Philadelphia had become known as "corrupt and contented",
with a complacent population and an entrenched Republican political machine. The first major reform came in 1917
when outrage over the election-year murder of a police officer led to the shrinking of the Philadelphia City Council
from two houses to just one. In July 1919, Philadelphia was one of more than 36 industrial cities nationally to suffer
a race riot of ethnic whites against blacks during Red Summer, in post-World War I unrest, as recent immigrants competed
with blacks for jobs. In the 1920s, the public flouting of Prohibition laws, mob violence, and police involvement
in illegal activities led to the appointment of Brigadier General Smedley Butler of the U.S. Marine Corps as director
of public safety, but political pressure prevented any long-term success in fighting crime and corruption. In 1940,
non-Hispanic whites constituted 86.8% of the city's population. The population peaked at more than two million residents
in 1950, then began to decline with the restructuring of industry, which led to the loss of many middle-class union
jobs. In addition, suburbanization had been drawing off many of the wealthier residents to outlying railroad commuting
towns and newer housing. Revitalization and gentrification of neighborhoods began in the late 1970s and continues
into the 21st century, with much of the development in the Center City and University City areas of the city. After
many of the old manufacturers and businesses left Philadelphia or shut down, the city started attracting service
businesses and began to more aggressively market itself as a tourist destination. Glass-and-granite skyscrapers were
built in Center City. Historic areas such as Independence National Historical Park located in Old City and Society
Hill were renovated during the reformist mayoral era of the 1950s through the 1980s. They are now among the most
desirable living areas of Center City. This has slowed the city's 40-year population decline after it lost nearly
one-quarter of its population. Philadelphia's central city was created in the 17th century following the plan by
William Penn's surveyor Thomas Holme. Center City is structured with long straight streets running east-west and
north-south forming a grid pattern. The original city plan was designed to allow for easy travel and to keep residences
separated by open space that would help prevent the spread of fire. The Delaware and Schuylkill rivers served
as the early boundaries within which the city's street plan was contained. In addition, Penn planned the creation
of five public parks in the city, which were renamed in 1824 (new names in parentheses): Centre Square, North East Publick Square
(Franklin Square), Northwest Square (Logan Square), Southwest Square (Rittenhouse Square), and Southeast Square (Washington
Square). Center City has grown into the second-most populated downtown area in the United States, after Midtown Manhattan
in New York City, with an estimated 183,240 residents in 2015. The City Planning Commission, tasked with guiding
growth and development of the city, has divided the city into 18 planning districts as part of the Philadelphia2035
physical development plan. Much of the city's 1980 zoning code was overhauled from 2007 to 2012 as part of a joint effort
between former mayors John F. Street and Michael Nutter. The zoning changes were intended to correct inaccurate zoning
maps and to streamline development in line with community preferences; the city forecasts an additional
100,000 residents and 40,000 jobs being added to Philadelphia by 2035. In the first decades of the 19th century,
Federal architecture and Greek Revival architecture were dominated by Philadelphia architects such as Benjamin Latrobe,
William Strickland, John Haviland, John Notman, Thomas U. Walter, and Samuel Sloan. Frank Furness is considered Philadelphia's
greatest architect of the second half of the 19th century, but his contemporaries included John McArthur, Jr., Addison
Hutton, Wilson Eyre, the Wilson Brothers, and Horace Trumbauer. In 1871, construction began on the Second Empire-style
Philadelphia City Hall. The Philadelphia Historical Commission was created in 1955 to preserve the cultural and architectural
history of the city. The commission maintains the Philadelphia Register of Historic Places, adding historic buildings,
structures, sites, objects and districts as it sees fit. The 548 ft (167 m) City Hall remained the tallest building
in the city until 1987 when One Liberty Place was constructed. Numerous glass and granite skyscrapers were built
in Philadelphia's Center City from the late 1980s onwards. In 2007, the Comcast Center surpassed One Liberty Place
to become the city's tallest building. The Comcast Innovation and Technology Center is under construction in Center
City and is planned to reach a height of 1,121 feet (342 meters); upon completion, the tower is expected to be the
tallest skyscraper in the United States outside of New York City and Chicago. For much of Philadelphia's history,
the typical home has been the row house. The row house was introduced to the United States via Philadelphia in the
early 19th century and, for a time, row houses built elsewhere in the United States were known as "Philadelphia rows".
A variety of row houses are found throughout the city, from Victorian-style homes in North Philadelphia to twin row
houses in West Philadelphia. While newer homes are scattered throughout the city, much of the housing is from the
early 20th century or older. The great age of the homes has created numerous problems, including blight and vacant
lots in many parts of the city, while other neighborhoods such as Society Hill, which has the largest concentration
of 18th-century architecture in the United States, have been rehabilitated and gentrified. Under the Köppen climate
classification, Philadelphia falls in the northern periphery of the humid subtropical climate zone (Köppen Cfa).
Summers are typically hot and muggy, fall and spring are generally mild, and winter is cold. Snowfall is highly variable,
with some winters bringing only light snow and others bringing several major snowstorms, with the normal seasonal
snowfall standing at 22.4 in (57 cm); snow in November or April is rare, as is a sustained snow cover. Precipitation
is generally spread throughout the year, with eight to twelve wet days per month, at an average annual rate of 41.5
inches (1,050 mm), but historically ranging from 29.31 in (744 mm) in 1922 to 64.33 in (1,634 mm) in 2011. The most
rain recorded in one day occurred on July 28, 2013, when 8.02 in (204 mm) fell at Philadelphia International Airport.
The January daily average is 33.0 °F (0.6 °C), though, in a normal winter, the temperature frequently rises to 50
°F (10 °C) during thaws and dips to 10 °F (−12 °C) for 2 or 3 nights. July averages 78.1 °F (25.6 °C), although heat
waves accompanied by high humidity and heat indices are frequent; highs reach or exceed 90 °F (32 °C) on 27 days
of the year. The average window for freezing temperatures is November 6 through April 2, allowing a growing season of
217 days. Early fall and late winter are generally dry; February's average of 2.64 inches (67 mm) makes it the area's
driest month. The summer dewpoint averages between 59.1 °F (15 °C) and 64.5 °F (18 °C). According to the 2014
United States Census estimates, there were 1,560,297 people residing in the City of Philadelphia, representing a
2.2% increase since 2010. From the 1960s up until 2006, the city's population declined year after year. It eventually
reached a low of 1,488,710 residents in 2006 before beginning to rise again. Since 2006, Philadelphia added 71,587
residents in eight years. A study done by the city projected that the population would increase to about 1,630,000
residents by 2035, an increase of about 100,000 from 2010. In comparison, the 2010 Census Redistricting Data indicated
that the racial makeup of the city was 661,839 (43.4%) African American, 626,221 (41.0%) White, 6,996 (0.5%) Native
American, 96,405 (6.3%) Asian (2.0% Chinese, 1.2% Indian, 0.9% Vietnamese, 0.6% Cambodian, 0.4% Korean, 0.3% Filipino,
0.2% Pakistani, 0.1% Indonesian), 744 (0.0%) Pacific Islander, 90,731 (5.9%) from other races, and 43,070 (2.8%)
from two or more races. Hispanic or Latino of any race were 187,611 persons (12.3%); 8.0% of Philadelphia is Puerto
Rican, 1.0% Dominican, 1.0% Mexican, 0.3% Cuban, and 0.3% Colombian. The racial breakdown of Philadelphia's Hispanic/Latino
population was 63,636 (33.9%) White, 17,552 (9.4%) African American, 3,498 (1.9%) Native American, 884 (0.47%) Asian,
287 (0.15%) Pacific Islander, 86,626 (46.2%) from other races, and 15,128 (8.1%) from two or more races. The five
largest European ancestries reported in the 2010 United States Census included Irish (12.5%), Italian (8.4%),
German (8.1%), Polish (3.6%), and English (3.0%). The average population density was 11,457 people per square mile
(4,405.4/km²). The Census reported that 1,468,623 people (96.2% of the population) lived in households, 38,007 (2.5%)
lived in non-institutionalized group quarters, and 19,376 (1.3%) were institutionalized. In 2013, the city reported
having 668,247 total housing units, down slightly from 670,171 housing units in 2010. As of 2013, 87 percent
of housing units were occupied and 13 percent were vacant, a slight change from 2010, when 89.5 percent of units
(599,736) were occupied and 10.5 percent (70,435) were vacant. As of 2013, 32 percent of the city's residents reported having
no vehicles available, while 23 percent had two or more vehicles available. In 2010, 24.9 percent
of households reported having children under the age of 18 living with them, 28.3 percent were married couples living
together and 22.5 percent had a female householder with no husband present, 6.0 percent had a male householder with
no wife present, and 43.2 percent were non-families. The city reported 34.1 percent of all households were made up
of individuals while 10.5 percent had someone living alone who was 65 years of age or older. The average household
size was 2.45 and the average family size was 3.20. In 2013, 56 percent of women who had given birth in the previous
12 months were unmarried. Of Philadelphia's adults, 31 percent were married or lived as a couple,
55 percent were not married, 11 percent were divorced or separated, and 3 percent were widowed. According to the
Census Bureau, the median household income in 2013 was $36,836, down 7.9 percent from 2008 when the median household
income was $40,008 (in 2013 dollars). For comparison, the median household income among metropolitan areas was $60,482,
down 8.2 percent in the same period, and the national median household income was $55,250, down 7.0 percent from
2008. The city's wealth disparity is evident when neighborhoods are compared. Residents in Society Hill had a median
household income of $93,720 while residents in one of North Philadelphia's districts reported the lowest median household
income, $14,185. During the last decade, Philadelphia experienced a large shift in its age profile. In 2000, the
city's population pyramid had a largely stationary shape. In 2013, the city took on an expansive pyramid shape, with
an increase in the three millennial age groups, 20 to 24, 25 to 29, and 30 to 34. The city's 25- to 29-year-old age
group was the city's largest age cohort. According to the 2010 Census, 343,837 (22.5%) were under the age of 18;
203,697 (13.3%) from 18 to 25; 434,385 (28.5%) from 25 to 44; 358,778 (23.5%) from 45 to 64; and 185,309 (12.1%)
who were 65 years of age or older. The median age was 33.5 years. For every 100 females there were 89.4 males. For
every 100 females age 18 and over, there were 85.7 males. The city had 22,018 births in 2013, down from a peak of 23,689
births in 2008. Philadelphia's death rate was at its lowest in at least a half-century, with 13,691 deaths in 2013. Another
factor contributing to the population increase is Philadelphia's immigration rate. In 2013, 12.7 percent of residents
were foreign-born, just shy of the national average, 13.1 percent. Irish, Italians, Polish, Germans, English, and
Greeks are the largest ethnic European groups in the city. Philadelphia has the second-largest Irish and Italian
populations in the United States, after New York City. South Philadelphia remains one of the largest Italian neighborhoods
in the country and is home to the Italian Market. The Pennsport neighborhood and Gray's Ferry section of South Philadelphia,
home to many Mummer clubs, are well known as Irish neighborhoods. The Kensington section, Port Richmond, and Fishtown
have historically been heavily Irish and Polish. Port Richmond is well known in particular as the center of the Polish
immigrant and Polish-American community in Philadelphia, and it remains a common destination for Polish immigrants.
Northeast Philadelphia, although known for its Irish and Irish-American population, is also home to a large Jewish
and Russian population. Mount Airy in Northwest Philadelphia also contains a large Jewish community, while nearby
Chestnut Hill is historically known as an Anglo-Saxon Protestant stronghold. There has also been an increase of yuppie,
bohemian, and hipster types particularly around Center City, the neighborhood of Northern Liberties, and in the neighborhoods
around the city's universities, such as near Temple in North Philadelphia and particularly near Drexel and University
of Pennsylvania in West Philadelphia. Philadelphia is also home to a significant gay and lesbian population. Philadelphia's
Gayborhood, which is located near Washington Square, is home to a large concentration of gay and lesbian friendly
businesses, restaurants, and bars. As of 2010, 79.12% (1,112,441) of Philadelphia residents age 5 and older
spoke English at home as a primary language, while 9.72% (136,688) spoke Spanish, 1.64% (23,075) Chinese, 0.89% (12,499)
Vietnamese, 0.77% (10,885) Russian, 0.66% (9,240) French, 0.61% (8,639) other Asian languages, 0.58% (8,217) African
languages, 0.56% (7,933) Cambodian (Mon-Khmer), and Italian was spoken as a main language by 0.55% (7,773) of the
population over the age of five. In total, 20.88% (293,544) of Philadelphia's population age 5 and older spoke a
mother language other than English. Philadelphia's annualized unemployment rate was 7.8% in 2014, down from 10.0% the
previous year. This is higher than the national average of 6.2%. Similarly, the rate of new jobs added to the city's
economy lagged behind the national job growth. In 2014, about 8,800 jobs were added to the city's economy. Sectors
with the largest number of jobs added were in education and health services, leisure and hospitality, and professional
and business services. Declines were seen in the city's manufacturing and government sectors. Philadelphia is home
to many national historical sites that relate to the founding of the United States. Independence National Historical
Park, one of the country's 22 UNESCO World Heritage Sites, is the center of these historical landmarks. Independence
Hall, where the Declaration of Independence was signed, and the Liberty Bell are the city's most famous attractions.
Other historic sites include the former homes of Edgar Allan Poe, Betsy Ross, and Thaddeus Kosciuszko, early government buildings
like the First and Second Banks of the United States, Fort Mifflin, and the Gloria Dei (Old Swedes') Church. Philadelphia
alone has 67 National Historic Landmarks, the third most of any city in the country. Philadelphia's major science
museums include the Franklin Institute, which contains the Benjamin Franklin National Memorial; the Academy of Natural
Sciences; the Mütter Museum; and the University of Pennsylvania Museum of Archaeology and Anthropology. History museums
include the National Constitution Center, the Atwater Kent Museum of Philadelphia History, the National Museum of
American Jewish History, the African American Museum in Philadelphia, the Historical Society of Pennsylvania, the
Grand Lodge of Free and Accepted Masons in the state of Pennsylvania and The Masonic Library and Museum of Pennsylvania
and Eastern State Penitentiary. Philadelphia is home to the United States' first zoo and hospital, as well as Fairmount
Park, one of America's oldest and largest urban parks. The Philadelphia dialect, which is spread throughout the Delaware
Valley and South Jersey, is part of Mid-Atlantic American English, and as such it is identical in many ways to the
Baltimore dialect. Unlike the Baltimore dialect, however, the Philadelphia accent also shares many similarities with
the New York accent. Thanks to over a century of linguistic data collected by researchers at the University of Pennsylvania,
including sociolinguist William Labov, the Philadelphia dialect has been one of the best-studied forms of American English.[f]
Areas such as South Street and Old City have a vibrant night life. The Avenue of the Arts in Center City contains
many restaurants and theaters, such as the Kimmel Center for the Performing Arts, which is home to the Philadelphia
Orchestra, generally considered one of the top five orchestras in the United States, and the Academy of Music, the
nation's oldest continually operating opera house, home to the Opera Company of Philadelphia and the Pennsylvania
Ballet. The Wilma Theatre and Philadelphia Theatre Company have new buildings constructed in the last decade on the
avenue. They produce a variety of new works. Several blocks to the east are the Walnut Street Theatre, America's
oldest theatre and the largest subscription theater in the world; as well as the Lantern Theatre at St. Stephens
Church, one of a number of smaller venues. Philadelphia has more public art than any other American city. In 1872,
the Association for Public Art (formerly the Fairmount Park Art Association) was created, the first private association
in the United States dedicated to integrating public art and urban planning. In 1959, lobbying by the Artists Equity
Association helped create the Percent for Art ordinance, the first for a U.S. city. The program, which has funded
more than 200 pieces of public art, is administered by the Philadelphia Office of Arts and Culture, the city's art
agency. Philadelphia artists have had a prominent national role in popular music. In the 1970s, Philadelphia soul
influenced the music of that and later eras. On July 13, 1985, Philadelphia hosted the American end of the Live Aid
concert at John F. Kennedy Stadium. The city reprised this role for the Live 8 concert, bringing some 700,000 people
to the Ben Franklin Parkway on July 2, 2005. Philadelphia is home to the world-renowned Philadelphia Boys Choir &
Chorale, which has performed its music all over the world. Dr. Robert G. Hamilton, founder of the choir, is a notable
native Philadelphian. The Philly Pops is another famous Philadelphia music group. The city has played a major role
in the development and support of American rock music and rap music. Hip-hop/Rap artists such as The Roots, DJ Jazzy
Jeff & The Fresh Prince, The Goats, Freeway, Schoolly D, Eve, and Lisa "Left Eye" Lopes hail from the city. Rowing
has been popular in Philadelphia since the 18th century. Boathouse Row is a symbol of Philadelphia's rich rowing
history, and each Big Five member has its own boathouse. Philadelphia hosts numerous local and collegiate rowing
clubs and competitions, including the annual Dad Vail Regatta, the largest intercollegiate rowing event in the U.S,
the Stotesbury Cup Regatta, and the Head of the Schuylkill Regatta, all of which are held on the Schuylkill River.
The regattas are hosted and organized by the Schuylkill Navy, an association of area rowing clubs that has produced
numerous Olympic rowers. The city uses the strong-mayor version of the mayor-council form of government, which is
headed by one mayor, in whom executive authority is vested. Elected at-large, the mayor is limited to two consecutive
four-year terms under the city's home rule charter, but can run for the position again after an intervening term.
The mayor is Jim Kenney, who replaced Michael Nutter; Nutter served two terms ending in January 2016. Kenney, as
all Philadelphia mayors have been since 1952, is a member of the Democratic Party, which tends to dominate local
politics so thoroughly that the Democratic Mayoral primary is often more widely covered than the general election.
The legislative branch, the Philadelphia City Council, consists of ten council members representing individual districts
and seven members elected at large. Democrats currently hold 14 seats, with Republicans representing two allotted
at-large seats for the minority party, as well as the Northeast-based Tenth District. The current council president
is Darrell Clarke. The Philadelphia County Court of Common Pleas (First Judicial District) is the trial court of
general jurisdiction for Philadelphia, hearing felony-level criminal cases and civil suits above the minimum jurisdictional
limit of $7,000 (excepting small claims cases valued between $7,000 and $12,000 and landlord-tenant issues heard in
the Municipal Court) under its original jurisdiction; it also has appellate jurisdiction over rulings from the Municipal
and Traffic Courts and over decisions of certain Pennsylvania state agencies (e.g. the Pennsylvania Liquor Control
Board). It has 90 legally trained judges elected by the voters. It is funded and operated largely by city resources
and employees. The current District Attorney is Seth Williams, a Democrat. The last Republican to hold the office
was Ron Castille, who left the office in 1991 and is currently the Chief Justice of the Pennsylvania Supreme Court. From the
American Civil War until the mid-20th century, Philadelphia was a bastion of the Republican Party, which arose from
the staunch pro-Northern views of Philadelphia residents during and after the war (Philadelphia was chosen as the
host city for the first Republican National Convention in 1856). After the Great Depression, Democratic registrations
increased, but the city was not carried by Democrat Franklin D. Roosevelt in his landslide victory of 1932 (in which
Pennsylvania was one of the few states won by Republican Herbert Hoover). Four years later, however, voter turnout
surged and the city finally flipped to the Democrats. Roosevelt carried Philadelphia with over 60% of the vote in
1936. The city has remained loyally Democratic in every presidential election since. It is now one of the most Democratic
in the country; in 2008, Democrat Barack Obama drew 83% of the city's vote. Obama's win was even greater in 2012,
capturing 85% of the vote. Philadelphia once comprised six congressional districts. However, as a result of the city's
declining population, it now has only four: the 1st district, represented by Bob Brady; the 2nd, represented by Chaka
Fattah; the 8th, represented by Mike Fitzpatrick; and the 13th, represented by Brendan Boyle. All but Fitzpatrick
are Democrats. Although they are usually swamped by Democrats in city, state and national elections, Republicans
still have some support in the area, primarily in the northeast. A Republican represented a significant portion of
Philadelphia in the House as late as 1983, and Sam Katz ran competitive mayoral races as the Republican nominee in
both 1999 and 2003. Like many American cities, Philadelphia saw a gradual yet pronounced rise in crime in the years
following World War II. There were 525 murders in 1990, a rate of 31.5 per 100,000. There were an average of about
600 murders a year for most of the 1990s. The murder count dropped in 2002 to 288, then rose four years later to
406 in 2006 and 392 in 2007. A few years later, Philadelphia began to see a rapid drop in homicides and violent crime.
In 2013, there were 246 murders, a decrease of over 25% from the previous year and of over 44%
since 2007. In 2014, there were 248 homicides, a slight increase from 2013. The number of shootings in the city has
declined significantly in the last 10 years. Shooting incidents peaked in 2006 when 1,857 shootings were recorded.
That number has dropped 44 percent to 1,047 shootings in 2014. Similarly, major crimes in the city have decreased
gradually in the last ten years since their peak in 2006, when 85,498 major crimes were reported. In the past three
years, the number of reported major crimes fell 11 percent to a total of 68,815. Violent crimes, which include homicide,
rape, aggravated assault, and robbery, decreased 14 percent in the past three years with a reported 15,771 occurrences
in 2014. Based on the rate of violent crimes per 1,000 residents in American cities with 25,000 people or more, Philadelphia
was ranked as the 54th most dangerous city in 2015. The city's K-12 enrollment in district run schools has dropped
in the last five years from 156,211 students in 2010 to 130,104 students in 2015. During the same time period, the
enrollment in charter schools has increased from 33,995 students in 2010 to 62,358 students in 2015. This consistent
drop in enrollment has led the city to close 24 of its public schools in 2013. During the 2014 school year, the city
spent an average of $12,570 per pupil, below the average among comparable urban school districts. Graduation rates
among district-run schools, meanwhile, have steadily increased in the last ten years. In 2005, Philadelphia had a
district graduation rate of 52%. This number has increased to 65% in 2014, still below the national and state averages.
Scores on the state's standardized test, the Pennsylvania System of School Assessment (PSSA) have trended upward
from 2005 to 2011 but have decreased since. In 2005, the district-run schools scored an average of 37.4% on math
and 35.5% on reading. The city's schools reached their peak scores in 2011, with 59.0% on math and 52.3% on reading.
In 2014, the scores dropped significantly to 45.2% on math and 42.0% on reading. The city's largest university
by number of students is Temple University, followed by Drexel University. Along with the University of Pennsylvania,
Temple University and Drexel University make up the city's major research universities. The city is also home to
five schools of medicine: Drexel University College of Medicine, Perelman School of Medicine at the University of
Pennsylvania, Philadelphia College of Osteopathic Medicine, Temple University School of Medicine, and Thomas
Jefferson University. Hospitals, universities, and higher education research institutions in Philadelphia's four
congressional districts received more than $252 million in National Institutes of Health grants in 2015. Philadelphia's
two major daily newspapers are The Philadelphia Inquirer, which is the eighteenth largest newspaper and third-oldest
surviving daily newspaper in the country, and the Philadelphia Daily News. Both newspapers were purchased in 2006 from The
McClatchy Company (which had acquired Knight Ridder) by Philadelphia Media Holdings and operated by the group
until the organization declared bankruptcy in 2010. After two years of financial struggle, the two newspapers were
sold to Interstate General Media in 2012. The two newspapers have a combined circulation of about 500,000 readers.
The city also has a number of other, smaller newspapers and magazines in circulation, such as the Philadelphia Tribune,
which serves the African-American community; Philadelphia, a monthly regional magazine; Philadelphia Weekly,
a weekly alternative newspaper; Philadelphia City Paper, another weekly newspaper; Philadelphia Gay
News, which serves the LGBT community; The Jewish Exponent, a weekly newspaper serving the Jewish community;
Philadelphia Metro, a free daily newspaper; and Al Día, a weekly newspaper serving the Latino community. The first
experimental radio license was issued in Philadelphia in August 1912 to St. Joseph's College. The first commercial
broadcasting radio stations appeared in 1922: first WIP, then owned by Gimbel's department store, on March 17, followed
the same year by WFIL, WOO, WCAU and WDAS. The highest-rated stations in Philadelphia include soft rock WBEB, KYW
Newsradio, and urban adult contemporary WDAS-FM. Philadelphia is served by three major non-commercial public radio
stations, WHYY-FM (NPR), WRTI (jazz, classical), and WXPN-FM (adult alternative music), as well as several smaller
stations. In the 1930s, the experimental station W3XE, owned by Philco, became the first television station in Philadelphia;
it became NBC's first affiliate in 1939, and later became KYW-TV (CBS). WCAU-TV, WPVI-TV, WHYY-TV, WPHL-TV, and WTXF-TV
had all been founded by the 1970s. In 1952, WFIL (now WPVI) premiered the television show Bandstand, which later
became the nationally broadcast American Bandstand hosted by Dick Clark. Today, as in many large metropolitan areas,
each of the commercial networks has an affiliate, and call letters have been replaced by corporate IDs: CBS3, 6ABC,
NBC10, Fox29, Telefutura28, Telemundo62, Univision65, plus My PHL 17 and CW Philly 57. The region is served also
by public broadcasting stations WYBE-TV (Philadelphia), WHYY-TV (Wilmington, Delaware and Philadelphia), WLVT-TV
(Lehigh Valley), and NJTV (New Jersey). In September 2007, Philadelphia approved a public-access cable
television channel. In 1981, large sections of the SEPTA Regional Rail service to the far suburbs of Philadelphia were discontinued
due to lack of funding. Several projects have been proposed to extend rail service back to these areas, but lack
of funding has again been the chief obstacle to implementation. These projects include the proposed Schuylkill Valley
Metro to Wyomissing, PA, and extension of the Media/Elwyn line back to Wawa, PA. SEPTA's Airport Regional Rail Line
offers direct service to Philadelphia International Airport. Two airports serve Philadelphia: the
Philadelphia International Airport (PHL), straddling the southern boundary of the city, and the Northeast Philadelphia
Airport (PNE), a general aviation reliever airport in Northeast Philadelphia. Philadelphia International Airport
provides scheduled domestic and international air service, while Northeast Philadelphia Airport serves general and
corporate aviation. In 2013, Philadelphia International Airport was the 15th busiest airport in the world measured
by traffic movements (i.e. takeoffs and landings). It is also the second largest hub and primary international hub
for American Airlines. Interstate 95 runs through the city along the Delaware River as a main north-south artery
known as the Delaware Expressway. The city is also served by the Schuylkill Expressway, a portion of Interstate 76
that runs along the Schuylkill River. It meets the Pennsylvania Turnpike at King of Prussia, Pennsylvania, providing
access to Harrisburg, Pennsylvania and points west. Interstate 676, the Vine Street Expressway, was completed in
1991 after years of planning. A link between I-95 and I-76, it runs below street level through Center City, connecting
to the Ben Franklin Bridge at its eastern end. Roosevelt Boulevard and the Roosevelt Expressway (U.S. 1) connect
Northeast Philadelphia with Center City. Woodhaven Road (Route 63), built in 1966, and Cottman Avenue (Route 73)
serve the neighborhoods of Northeast Philadelphia, running between Interstate 95 and the Roosevelt Boulevard (U.S.
1). The Fort Washington Expressway (Route 309) extends north from the city's northern border, serving Montgomery
County and Bucks County. U.S. 30, extending east-west from West Philadelphia to Lancaster, is known as Lancaster
Avenue throughout most of the city and through the adjacent Main Line suburbs. Philadelphia is also a major hub for
Greyhound Lines, which operates 24-hour service to points east of the Mississippi River. Most of Greyhound's services
in Philadelphia operate to/from the Philadelphia Greyhound Terminal, located at 1001 Filbert Street in Center City
Philadelphia. In 2006, the Philadelphia Greyhound Terminal was the second busiest Greyhound terminal in the United
States, after the Port Authority Bus Terminal in New York. Besides Greyhound, six other bus operators provide service
to the Center City Greyhound terminal: Bieber Tourways, Capitol Trailways, Martz Trailways, Peter Pan Bus Lines,
Susquehanna Trailways, and the bus division for New Jersey Transit. Other services include Megabus and Bolt Bus.
Since the early days of rail transport in the United States, Philadelphia has served as a hub for several major rail
companies, particularly the Pennsylvania Railroad and the Reading Railroad. The Pennsylvania Railroad first operated
Broad Street Station, then 30th Street Station and Suburban Station, and the Reading Railroad operated out of Reading
Terminal, now part of the Pennsylvania Convention Center. The two companies also operated competing commuter rail
systems in the area, known collectively as the Regional Rail system. The two systems, today for the most part still
intact but now connected, operate as a single system under the control of SEPTA, the regional transit authority.
Additionally, the PATCO Speedline subway system and NJ Transit's Atlantic City Line operate successor services to
southern New Jersey. Historically, Philadelphia sourced its water from the Fairmount Water Works, the nation's first
major urban water supply system. In 1909, the Water Works was decommissioned as the city transitioned to modern sand
filtration methods. Today, the Philadelphia Water Department (PWD) provides drinking water, wastewater collection,
and stormwater services for Philadelphia, as well as surrounding counties. PWD draws about 57 percent of its drinking
water from the Delaware River and the balance from the Schuylkill River. The public wastewater system consists of
three water pollution control plants, 21 pumping stations, and about 3,657 miles of sewers. A 2007 investigation
by the Environmental Protection Agency found elevated levels of Iodine-131 in the city's potable water.[citation
needed] In 2012, the EPA's readings showed that the city had the highest levels of I-131 in the nation. The
city campaigned against an Associated Press report that the high levels of I-131 were the result of local gas drilling
in the Upper Delaware River.[citation needed] Philadelphia Gas Works (PGW), overseen by the Pennsylvania Public Utility
Commission, is the nation's largest municipally owned natural gas utility. It serves over 500,000 homes and businesses
in the Philadelphia area. Founded in 1836, the company came under city ownership in 1987 and has since provided the
majority of the gas distributed within city limits. In 2014, the Philadelphia City Council refused to conduct hearings
on a $1.86 billion sale of PGW, part of a two-year effort that was proposed by the mayor. The refusal led to the
prospective buyer terminating its offer. Southeastern Pennsylvania was assigned the 215 area code in 1947 when the
North American Numbering Plan of the "Bell System" went into effect. The geographic area covered by the code was
split nearly in half in 1994 when area code 610 was created, with the city and its northern suburbs retaining 215.
Overlay area code 267 was added to the 215 service area in 1997, and 484 was added to the 610 area in 1999. A plan
in 2001 to introduce a third overlay code to both service areas (area code 445 to 215, area code 835 to 610) was
delayed and later rescinded. Philadelphia has dedicated landmarks to its sister cities. Dedicated in June 1976, the
Sister Cities Plaza, a site of 0.5 acres (2,000 m2) located at 18th and Benjamin Franklin Parkway, honors Philadelphia's
relationships with Tel Aviv and Florence which were its first sister cities. Another landmark, the Toruń Triangle,
honoring the sister city relationship with Toruń, Poland, was constructed in 1976, west of the United Way building
at 18th Street and the Benjamin Franklin Parkway. In addition, the Triangle contains the Copernicus monument. Renovations
were made to Sister Cities Park in mid-2011, and on May 10, 2012, the park was reopened; it currently features an interactive
fountain honoring Philadelphia's ten sister and friendship cities, a café and visitor's center, children's play area,
outdoor garden, and boat pond, as well as a pavilion built to environmentally friendly standards.
Kerry was born in Aurora, Colorado and attended boarding school in Massachusetts and New Hampshire. He graduated from Yale
University in 1966 with a major in political science. Kerry enlisted in the Naval Reserve in 1966, and during
1968–1969 served an abbreviated four-month tour of duty in South Vietnam as officer-in-charge (OIC) of a Swift Boat.
For that service, he was awarded combat medals that include the Silver Star Medal, Bronze Star Medal, and three Purple
Heart Medals. Securing an early return to the United States, Kerry joined the Vietnam Veterans Against the War organization
in which he served as a nationally recognized spokesman and as an outspoken opponent of the Vietnam War. He appeared
in the Fulbright Hearings before the Senate Foreign Relations Committee, where he deemed United States war policy
in Vietnam to be the cause of war crimes. After receiving his J.D. from Boston College Law School, Kerry worked in
Massachusetts as an Assistant District Attorney. He served as Lieutenant Governor of Massachusetts under Michael
Dukakis from 1983 to 1985, and was elected to the U.S. Senate in 1984, taking office the following January. On the
Senate Foreign Relations Committee, he led a series of hearings from 1987 to 1989 investigating issues arising from
the Iran–Contra affair. Kerry was re-elected to additional terms in 1990, 1996, 2002 and 2008. In 2002, Kerry voted to authorize
the President "to use force, if necessary, to disarm Saddam Hussein", but warned that the administration should exhaust
its diplomatic avenues before launching war. In his 2004 presidential campaign, Kerry criticized George W. Bush for
the Iraq War. He and his running mate, North Carolina Senator John Edwards, lost the election, finishing 35 electoral
votes behind Bush and Vice President Dick Cheney. Kerry returned to the Senate, becoming Chairman of the Senate Committee
on Small Business and Entrepreneurship in 2007 and then of the Foreign Relations Committee in 2009. In January 2013,
Kerry was nominated by President Barack Obama to succeed outgoing Secretary of State Hillary Clinton and then confirmed
by the U.S. Senate, assuming the office on February 1, 2013. John Forbes Kerry was born on December 11, 1943 in Aurora,
Colorado, at Fitzsimons Army Hospital. He was the second oldest of four children born to Richard John Kerry, a Foreign
Service officer and lawyer, and Rosemary Isabel Forbes, a nurse and social activist. His father was raised Catholic
(John's paternal grandparents were Austro-Hungarian Jewish immigrants who converted to Catholicism) and his mother
was Episcopalian. He was raised with an elder sister named Margaret (born 1941), a younger sister named Diana (born
1947) and a younger brother named Cameron (born 1950). The children were raised in their father's faith; John Kerry
served as an altar boy. In his sophomore year, Kerry became the Chairman of the Liberal Party of the Yale Political
Union, and a year later he served as President of the Union. Amongst his influential teachers in this period was
Professor H. Bradford Westerfield, who was himself a former President of the Political Union. His involvement with
the Political Union gave him an opportunity to be involved with important issues of the day, such as the civil rights
movement and the New Frontier program. He also became a member of the secretive Skull and Bones Society, and traveled
to Switzerland through AIESEC Yale. On February 18, 1966, Kerry enlisted in the Naval Reserve. He began his active
duty military service on August 19, 1966. After completing 16 weeks of Officer Candidate School at the U.S. Naval
Training Center in Newport, Rhode Island, Kerry received his officer's commission on December 16, 1966. During the
2004 election, Kerry posted his military records at his website, and permitted reporters to inspect his medical records.
In 2005, Kerry released his military and medical records to the representatives of three news organizations, but
has not authorized full public access to those records. During his tour on the guided missile frigate USS Gridley,
Kerry requested duty in South Vietnam, listing as his first preference a position as the commander of a Fast Patrol
Craft (PCF), also known as a "Swift boat." These 50-foot (15 m) boats have aluminum hulls and have little or no armor,
but are heavily armed and rely on speed. "I didn't really want to get involved in the war", Kerry said in a book
of Vietnam reminiscences published in 1986. "When I signed up for the swift boats, they had very little to do with
the war. They were engaged in coastal patrolling and that's what I thought I was going to be doing." However, his
second choice of billet was on a river patrol boat, or "PBR", which at the time was serving a more dangerous duty
on the rivers of Vietnam. During the night of December 2 and early morning of December 3, 1968, Kerry was in charge
of a small boat operating near a peninsula north of Cam Ranh Bay together with a Swift boat (PCF-60). According to
Kerry and the two crewmen who accompanied him that night, Patrick Runyon and William Zaladonis, they surprised a
group of Vietnamese men unloading sampans at a river crossing, who began running and failed to obey an order to stop.
As the men fled, Kerry and his crew opened fire on the sampans and destroyed them, then rapidly left. During this
encounter, Kerry received a shrapnel wound in the left arm above the elbow. It was for this injury that Kerry received
his first Purple Heart Medal. Kerry received his second Purple Heart for a wound received in action on the Bồ Đề
River on February 20, 1969. The plan had been for the Swift boats to be accompanied by support helicopters. On the
way up the Bo De, however, the helicopters were attacked. As the Swift boats reached the Cửa Lớn River, Kerry's boat
was hit by a B-40 rocket (rocket propelled grenade round), and a piece of shrapnel hit Kerry's left leg, wounding
him. Thereafter, enemy fire ceased and his boat reached the Gulf of Thailand safely. Kerry continues to have shrapnel
embedded in his left thigh because the doctors that first treated him decided to remove the damaged tissue and close
the wound with sutures rather than make a wide opening to remove the shrapnel. Though wounded, like several others
earlier that day, Kerry did not lose any time from duty. Eight days later, on February 28, 1969, came the events
for which Kerry was awarded his Silver Star Medal. On this occasion, Kerry was in tactical command of his Swift boat
and two other Swift boats during a combat operation. Their mission on the Duong Keo River included bringing an underwater
demolition team and dozens of South Vietnamese Marines to destroy enemy sampans, structures and bunkers as described
in the story The Death Of PCF 43. Running into heavy small arms fire from the river banks, Kerry "directed the units
to turn to the beach and charge the Viet Cong positions" and he "expertly directed" his boat's fire causing the enemy
to flee while at the same time coordinating the insertion of the ninety South Vietnamese troops (according to the
original medal citation signed by Admiral Zumwalt). Moving a short distance upstream, Kerry's boat was the target
of a B-40 rocket round; Kerry charged the enemy positions and as his boat hove to and beached, a Viet Cong ("VC")
insurgent armed with a rocket launcher emerged from a spider hole and ran. While the boat's gunner opened fire, wounding
the VC in the leg, and while the other boats approached and offered cover fire, Kerry jumped from the boat to pursue
the VC insurgent, subsequently killing him and capturing his loaded rocket launcher. Kerry's commanding officer,
Lieutenant Commander George Elliott, stated to Douglas Brinkley in 2003 that he did not know whether to court-martial
Kerry for beaching the boat without orders or give him a medal for saving the crew. Elliott recommended Kerry for
the Silver Star, and Zumwalt flew into An Thoi to personally award medals to Kerry and the rest of the sailors involved
in the mission. The Navy's account of Kerry's actions is presented in the original medal citation signed by Zumwalt.
The engagement was documented in an after-action report, a press release written on March 1, 1969, and a historical
summary dated March 17, 1969. On March 13, 1969, on the Bái Háp River, Kerry was in charge of one of five Swift boats
that were returning to their base after performing an Operation Sealords mission to transport South Vietnamese troops
from the garrison at Cái Nước and MIKE Force advisors for a raid on a Vietcong camp located on the Rach Dong Cung
canal. Earlier in the day, Kerry received a slight shrapnel wound in the buttocks from blowing up a rice bunker.
Debarking some but not all of the passengers at a small village, the boats approached a fishing weir; one group of
boats went around to the left of the weir, hugging the shore, and a group with Kerry's PCF-94 boat went around to
the right, along the shoreline. A mine was detonated directly beneath the lead boat, PCF-3, as it crossed the weir
to the left, lifting PCF-3 "about 2-3 ft out of water". James Rassmann, a Green Beret advisor who was aboard Kerry's
PCF-94, was knocked overboard when, according to witnesses and the documentation of the event, a mine or rocket exploded
close to the boat. According to the documentation for the event, Kerry's arm was injured when he was thrown against
a bulkhead during the explosion. PCF 94 returned to the scene and Kerry rescued Rassmann who was receiving sniper
fire from the water. Kerry received the Bronze Star Medal with Combat "V" for "heroic achievement", for his actions
during this incident; he also received his third Purple Heart. After Kerry's third qualifying wound, he was entitled
per Navy regulations to reassignment away from combat duties. Kerry's preferred choice for reassignment was as a
military aide in Boston, New York or Washington, D.C. On April 11, 1969, he reported to the Brooklyn-based Atlantic
Military Sea Transportation Service, where he would remain on active duty for the following year as a personal aide
to an officer, Rear Admiral Walter Schlech. On January 1, 1970 Kerry was temporarily promoted to full Lieutenant.
Kerry had agreed to an extension of his active duty obligation from December 1969 to August 1970 in order to perform
Swift Boat duty. John Kerry was on active duty in the United States Navy from August 1966 until January 1970. He
continued to serve in the Naval Reserve until February 1978. With the continuing controversy that had surrounded
the military service of George W. Bush since the 2000 Presidential election (when he was accused of having used his
father's political influence to gain entrance to the Texas Air National Guard, thereby protecting himself from conscription
into the United States Army, and possible service in the Vietnam War), John Kerry's contrasting status as a decorated
Vietnam War veteran posed a problem for Bush's re-election campaign, which Republicans sought to counter by calling
Kerry's war record into question. As the presidential campaign of 2004 developed, approximately 250 members of a
group called Swift Boat Veterans for Truth (SBVT, later renamed Swift Vets and POWs for Truth) opposed Kerry's campaign.
The group held press conferences, ran ads and endorsed a book questioning Kerry's service record and his military
awards. The group included several members of Kerry's unit, such as Larry Thurlow, who commanded a swift boat alongside
of Kerry's, and Stephen Gardner, who served on Kerry's boat. The campaign inspired the widely used political pejorative
'swiftboating', a term for an unfair or untrue political attack. Most of Kerry's former crewmates have stated that
SBVT's allegations are false. After returning to the United States, Kerry joined the Vietnam Veterans Against the
War (VVAW). Then numbering about 20,000, VVAW was considered by some (including the administration of President Richard
Nixon) to be an effective, if controversial, component of the antiwar movement. Kerry participated in the "Winter
Soldier Investigation" conducted by VVAW of U.S. atrocities in Vietnam, and he appears in a film by that name that
documents the investigation. According to Nixon Secretary of Defense Melvin Laird, "I didn't approve of what he did,
but I understood the protesters quite well", and he declined two requests from the Navy to court martial Reserve
Lieutenant Kerry over his antiwar activity. On April 22, 1971, Kerry appeared before a U.S. Senate committee hearing
on proposals relating to ending the war. The day after this testimony, Kerry participated in a demonstration with
thousands of other veterans in which he and other Vietnam War veterans threw their medals and service ribbons over
a fence erected at the front steps of the United States Capitol building to dramatize their opposition to the war.
Jack Smith, a Marine, read a statement explaining why the veterans were returning their military awards to the government.
For more than two hours, almost 1000 angry veterans tossed their medals, ribbons, hats, jackets, and military papers
over the fence. Each veteran gave his or her name, hometown, branch of service and a statement. Kerry threw some
of his own decorations and awards, as well as some that other veterans had given him to throw. As Kerry threw his decorations
over the fence, his statement was: "I'm not doing this for any violent reasons, but for peace and justice, and to
try and make this country wake up once and for all." Kerry was arrested on May 30, 1971, during a VVAW march to honor
American POWs held captive by North Vietnam. The march was planned as a multi-day event from Concord to Boston, and
while in Lexington, participants tried to camp on the village green. At 2:30 a.m., local and state police arrested
441 demonstrators, including Kerry, for trespassing. All were given the Miranda Warning and were hauled away on school
buses to spend the night at the Lexington Public Works Garage. Kerry and the other protesters later paid a $5 fine,
and were released. The mass arrests caused a community backlash and ended up giving positive coverage to the VVAW.
In 1970, Kerry had considered running for Congress in the Democratic primary against hawkish Democrat Philip J. Philbin
of Massachusetts's 3rd congressional district, but deferred in favor of Robert Drinan, a Jesuit priest and anti-war
activist, who went on to defeat Philbin. In February 1972, Kerry's wife bought a house in Worcester, with Kerry intending
to run against the 4th district's aging thirteen-term incumbent Democrat, Harold Donohue. The couple never moved
in. After Republican Congressman F. Bradford Morse of the neighboring 5th district announced his retirement and
subsequent resignation to become Under-Secretary-General for Political and General Assembly Affairs at the United
Nations, the couple instead rented an apartment in Lowell so that Kerry could run to succeed him. The Democratic
primary race had 10 candidates, including Kerry, attorney Paul J. Sheehy, State Representative Anthony R. DiFruscia, John
J. Desmond and Robert B. Kennedy. Kerry ran a "very expensive, sophisticated campaign", financed by out-of-state
backers and supported by many young volunteers. DiFruscia's campaign headquarters shared the same building as Kerry's.
On the eve of the September 19 primary, police found Kerry's younger brother Cameron and campaign field director
Thomas J. Vallely breaking into the part of the building where the telephone lines were located. They were arrested and charged
with "breaking and entering with the intent to commit grand larceny", but the charges were dropped a year later.
At the time of the incident, DiFruscia alleged that the two were trying to disrupt his get-out-the vote efforts.
Vallely and Cameron Kerry maintained that they were only checking their own telephone lines because they had received
an anonymous call warning that the Kerry lines would be cut. In the general election, Kerry was initially favored
to defeat the Republican candidate, former State Representative Paul W. Cronin, and conservative Democrat Roger P.
Durkin, who ran as an Independent. A week after the primary, one poll put Kerry 26 points ahead of Cronin. His campaign
called for a national health insurance system, discounted prescription drugs for the unemployed, a jobs program
to clean up the Merrimack River and rent controls in Lowell and Lawrence. A major obstacle, however, was the district's
leading newspaper, the conservative The Sun. The paper editorialized against him. It also ran critical news stories
about his out-of-state contributions and his "carpetbagging", because he had only moved into the district in April.
Subsequently released "Watergate" Oval Office tape recordings of the Nixon White House showed that defeating Kerry's
candidacy had attracted the personal attention of President Nixon. Kerry himself asserts that Nixon sent operatives
to Lowell to help derail his campaign. The race was the most expensive for Congress in the country that year and
four days before the general election, Durkin withdrew and endorsed Cronin, hoping to see Kerry defeated. The week
before, a poll had put Kerry 10 points ahead of Cronin, with Durkin at 13%. In the final days of the campaign, Kerry
sensed that it was "slipping away" and Cronin emerged victorious by 110,970 votes (53.45%) to Kerry's 92,847 (44.72%).
After his defeat, Kerry lamented in a letter to supporters that "for two solid weeks, [The Sun] called me un-American,
New Left antiwar agitator, unpatriotic, and labeled me every other 'un-' and 'anti-' that they could find. It's hard
to believe that one newspaper could be so powerful, but they were." He later felt that his failure to respond directly
to The Sun's attacks cost him the race. After Kerry's 1972 defeat, he and his wife bought a house in Belvidere, Lowell,
entering a decade which his brother Cameron later called "the years in exile". He spent some time working as a fundraiser
for the Cooperative for Assistance and Relief Everywhere (CARE), an international humanitarian organization. In September
1973, he entered Boston College Law School. While studying, Kerry worked as a talk radio host on WBZ and, in July
1974, was named executive director of Mass Action, a Massachusetts advocacy association. In January 1977, Middlesex County District Attorney John J. Droney promoted Kerry to First Assistant District Attorney, essentially making him his campaign and media surrogate because
Droney was afflicted with amyotrophic lateral sclerosis (ALS, or Lou Gehrig's Disease). As First Assistant, Kerry
tried cases, which included winning convictions in a high-profile rape case and a murder. He also played a role in
administering the office, including initiating the creation of special white-collar and organized crime units, creating
programs to address the problems of rape and other crime victims and witnesses, and managing trial calendars to reflect
case priorities. It was in this role in 1978 that Kerry announced an investigation into possible criminal charges
against then Senator Edward Brooke, regarding "misstatements" in his first divorce trial. The inquiry ended with
no charges being brought after investigators and prosecutors determined that Brooke's misstatements were pertinent
to the case, but were not material enough to have affected the outcome. Droney's health was poor and Kerry had decided
to run for his position in the 1978 election should Droney drop out. However, Droney was re-elected and his health
improved; he went on to re-assume many of the duties that he had delegated to Kerry. Kerry thus decided to leave,
departing in 1979 with assistant DA Roanne Sragow to set up their own law firm. Kerry also worked as a commentator
for WCVB-TV and co-founded a bakery, Kilvert & Forbes Ltd., with businessman and former Kennedy aide K. Dun Gifford.
The junior U.S. Senator from Massachusetts, Paul Tsongas, announced in 1984 that he would be stepping down for health
reasons. Kerry ran, and as in his 1982 race for Lieutenant Governor, he did not receive the endorsement of the party
regulars at the state Democratic convention. Congressman James Shannon, a favorite of House Speaker Tip O'Neill,
was the early favorite to win the nomination, and he "won broad establishment support and led in early polling."
Again as in 1982, however, Kerry prevailed in a close primary. On April 18, 1985, a few months after taking his Senate
seat, Kerry and Senator Tom Harkin of Iowa traveled to Nicaragua and met the country's president, Daniel Ortega.
Though Ortega had won internationally certified elections, the trip was criticized because Ortega and his leftist
Sandinista government had strong ties to Cuba and the USSR and were accused of human rights abuses. The Sandinista
government was opposed by the right-wing CIA-backed rebels known as the Contras. While in Nicaragua, Kerry and Harkin
talked to people on both sides of the conflict. Through the senators, Ortega offered a cease-fire agreement in exchange
for the U.S. dropping support of the Contras. The offer was denounced by the Reagan administration as a "propaganda
initiative" designed to influence a House vote on a $14 million Contra aid package, but Kerry said "I am willing ...
to take the risk in the effort to put to test the good faith of the Sandinistas." The House voted down the Contra
aid, but Ortega flew to Moscow to accept a $200 million loan the next day, which in part prompted the House to pass
a larger $27 million aid package six weeks later. Meanwhile, Kerry's staff began their own investigations and, on
October 14, issued a report that exposed illegal activities on the part of Lieutenant Colonel Oliver North, who had
set up a private network involving the National Security Council and the CIA to deliver military equipment to right-wing
Nicaraguan rebels (Contras). In effect, North and certain members of the President's administration were accused
by Kerry's report of illegally funding and supplying armed militants without the authorization of Congress. Kerry's
staff investigation, based on a year-long inquiry and interviews with fifty unnamed sources, was said to raise "serious
questions about whether the United States has abided by the law in its handling of the contras over the past three
years." The Kerry Committee report found that "the Contra drug links included ... payments to drug traffickers by
the U.S. State Department of funds authorized by the Congress for humanitarian assistance to the Contras, in some
cases after the traffickers had been indicted by federal law enforcement agencies on drug charges, in others while
traffickers were under active investigation by these same agencies." The U.S. State Department paid over $806,000
to known drug traffickers to carry humanitarian assistance to the Contras. Kerry's findings provoked little reaction
in the media and official Washington. During their investigation of Panamanian military leader Manuel Noriega, Kerry's staff found reason to believe
that the Pakistan-based Bank of Credit and Commerce International (BCCI) had facilitated Noriega's drug trafficking
and money laundering. This led to a separate inquiry into BCCI, and as a result, banking regulators shut down BCCI
in 1991. In December 1992, Kerry and Senator Hank Brown, a Republican from Colorado, released The BCCI Affair, a
report on the BCCI scandal. The report showed that the bank was crooked and was working with terrorists, including
Abu Nidal. It blasted the Department of Justice, the Department of the Treasury, the Customs Service, the Federal
Reserve Bank, as well as influential lobbyists and the CIA. In 1996, Kerry faced a difficult re-election fight against
Governor William Weld, a popular Republican incumbent who had been re-elected in 1994 with 71% of the vote. The race
was covered nationwide as one of the most closely watched Senate races that year. Kerry and Weld held several debates
and negotiated a campaign spending cap of $6.9 million at Kerry's Beacon Hill townhouse. Both candidates spent more
than the cap, with each camp accusing the other of being first to break the agreement. During the campaign, Kerry
spoke briefly at the 1996 Democratic National Convention. Kerry won re-election with 53 percent to Weld's 45 percent.
In October 2006, Kerry drew criticism for telling a group of California college students that if they didn't study hard, they could "get stuck in Iraq." Kerry said that he had intended the remark as a jab at President Bush, and described it as a "botched joke",
having inadvertently left out the key word "us" (which would have been, "If you don't, you get us stuck in Iraq"),
as well as leaving the phrase "just ask President Bush" off the end of the sentence. In Kerry's prepared remarks,
which he released during the ensuing media frenzy, the corresponding line was "... you end up getting us stuck in
a war in Iraq. Just ask President Bush." He also argued that the context of the speech, which prior to the "stuck
in Iraq" line made several specific references to Bush and elements of his biography, showed that he was referring to
President Bush and not American troops in general. Kerry "has emerged in the past few years as an important envoy
for Afghanistan and Pakistan during times of crisis," a Washington Post report stated in May 2011, as Kerry undertook
another trip to the two countries. The killing of Osama bin Laden "has generated perhaps the most important crossroads
yet," the report continued, as the senator spoke at a press conference and prepared to fly from Kabul to Pakistan.
Among matters discussed during the May visit to Pakistan, under the general rubric of "recalibrating" the bilateral
relationship, Kerry sought and retrieved from the Pakistanis the tail-section of the U.S. helicopter which had had
to be abandoned at Abbottabad during the bin Laden strike. In 2013, Kerry met with Pakistan's army chief Gen. Ashfaq
Parvez Kayani to discuss the peace process with the Taliban in Afghanistan. Most analyses place Kerry's voting record
on the left within the Senate Democratic caucus. During the 2004 presidential election he was portrayed as a staunch
liberal by conservative groups and the Bush campaign, who often noted that in 2003 Kerry was rated the National Journal's
top Senate liberal. However, that rating was based only on votes cast in that single year. In fact,
in terms of career voting records, the National Journal found that Kerry is the 11th most liberal member of the Senate.
Most analyses find that Kerry is at least slightly more liberal than the typical Democratic Senator. Kerry has stated
that he opposes privatizing Social Security, supports abortion rights for adult women and minors, supports same-sex
marriage, opposes capital punishment except for terrorists, supports most gun control laws, and is generally a supporter
of trade agreements. Kerry supported the North American Free Trade Agreement and Most Favored Nation status for China,
but opposed the Central American Free Trade Agreement. In the lead-up to the Iraq War, Kerry said
on October 9, 2002: "I will be voting to give the President of the United States the authority to use force, if necessary,
to disarm Saddam Hussein because I believe that a deadly arsenal of weapons of mass destruction in his hands is a
real and grave threat to our security." Bush relied on that resolution in ordering the 2003 invasion of Iraq. Kerry
also gave a January 23, 2003, speech at Georgetown University, saying "Without question, we need to disarm Saddam Hussein.
He is a brutal, murderous dictator, leading an oppressive regime. He presents a particularly grievous threat because
he is so consistently prone to miscalculation. So the threat of Saddam Hussein with weapons of mass destruction is
real." Kerry did, however, warn that the administration should exhaust its diplomatic avenues before launching war:
"Mr. President, do not rush to war, take the time to build the coalition, because it's not winning the war that's
hard, it's winning the peace that's hard." Kerry chaired the Senate Select Committee on POW/MIA Affairs from 1991
to 1993. The committee's report, which Kerry endorsed, stated there was "no compelling evidence that proves that
any American remains alive in captivity in Southeast Asia." In 1994 the Senate passed a resolution, sponsored by
Kerry and fellow Vietnam veteran John McCain, that called for an end to the existing trade embargo against Vietnam;
it was intended to pave the way for normalization. In 1995, President Bill Clinton normalized diplomatic relations
with Vietnam. In the 2004 Democratic presidential primaries, John Kerry defeated several Democratic
rivals, including Sen. John Edwards of North Carolina, former Vermont Governor Howard Dean and retired Army General
Wesley Clark. His victory in the Iowa caucuses is widely believed to be the tipping point that revived his sagging
campaign, carrying him to wins in New Hampshire and in February 3, 2004, primary states such as Arizona, South Carolina
and New Mexico. Kerry then went on to win landslide victories in Nevada and Wisconsin, and thus won the Democratic nomination to
run for President of the United States against incumbent George W. Bush. On July 6, 2004, he announced his selection
of John Edwards as his running mate. Democratic strategist Bob Shrum, who was Kerry's 2004 campaign adviser, wrote
an article in Time magazine claiming that after the election, Kerry had said that he wished he'd never picked Edwards,
and that the two have since stopped speaking to each other. In a subsequent appearance on ABC's This Week, Kerry
refused to respond to Shrum's allegation, calling it a "ridiculous waste of time." During his bid to be elected president
in 2004, Kerry frequently criticized President George W. Bush for the Iraq War. While Kerry had initially voted in
support of authorizing President Bush to use force in dealing with Saddam Hussein, he voted against an $87 billion
supplemental appropriations bill to pay for the subsequent war. His statement on March 16, 2004, "I actually did
vote for the $87 billion before I voted against it," helped the Bush campaign to paint him as a flip-flopper and
has been cited as contributing to Kerry's defeat. Kerry established a separate political action committee, Keeping
America's Promise, which declared as its mandate "A Democratic Congress will restore accountability to Washington
and help change a disastrous course in Iraq", and raised money and channeled contributions to Democratic candidates
in state and federal races. Through Keeping America's Promise in 2005, Kerry raised over $5.5 million for other Democrats
up and down the ballot. Through his campaign account and his political action committee, the Kerry campaign operation
generated more than $10 million for various party committees and 179 candidates for the U.S. House, Senate, state
and local offices in 42 states focusing on the midterm elections during the 2006 election cycle. "Cumulatively, John
Kerry has done as much if not more than any other individual senator", Hassan Nemazee, the national finance chairman
of the DSCC said. On December 15, 2012, several news outlets reported that President Barack Obama would nominate
Kerry to succeed Hillary Clinton as Secretary of State, after Susan Rice, widely seen as Obama's preferred choice,
withdrew her name from consideration citing a politicized confirmation process following criticism of her response
to the 2012 Benghazi attack. On December 21, Obama formally announced the nomination, which received positive commentary. His
confirmation hearing took place on January 24, 2013, before the Senate Foreign Relations Committee, the same panel
where he first testified in 1971. The committee unanimously voted to approve him on January 29, 2013, and the same
day the full Senate confirmed him on a vote of 94–3. In a letter to Massachusetts Governor Deval Patrick, Kerry announced
his resignation from the Senate effective February 1. In the State Department, Kerry quickly earned a reputation
"for being aloof, keeping to himself, and not bothering to read staff memos." Career State Department officials have
complained that power has become too centralized under Kerry's leadership, which slows department operations when
Kerry is on one of his frequent overseas trips. Others in State describe Kerry as having "a kind of diplomatic attention
deficit disorder" as he shifts from topic to topic instead of focusing on long-term strategy. When asked whether
he was traveling too much, he responded, "Hell no. I'm not slowing down." Despite Kerry's early achievements, morale
at State is lower than under Hillary Clinton according to department employees. However, after Kerry's first six
months in the State Department, a Gallup poll found he had high approval ratings among Americans as Secretary of
State. After a year, another poll showed Kerry's favorability continued to rise. Less than two years into Kerry's
term, Foreign Policy magazine's 2014 Ivory Tower survey of international relations scholars asked, "Who was the
most effective U.S. Secretary of State in the past 50 years?"; John Kerry and Lawrence Eagleburger tied for 11th
place out of the 15 confirmed Secretaries of State in that period. In January 2014, having met with the Holy See's Secretary of
State, Archbishop Pietro Parolin, Kerry said "We touched on just about every major issue that we are both working
on, that are issues of concern to all of us. First of all, we talked at great length about Syria, and I was particularly
appreciative for the Archbishop’s raising this issue, and equally grateful for the Holy Father’s comments – the Pope's
comments yesterday regarding his support for the Geneva II process. We welcome that support. It is very important
to have broad support, and I know that the Pope is particularly concerned about the massive numbers of displaced
human beings and the violence that has taken over 130,000 lives". Kerry said on September 9, 2013, in response to a reporter's
question about whether Syrian President Bashar al-Assad could avert a military strike: "He could turn over every
single bit of his chemical weapons to the international community in the next week. Turn it over, all of it, without
delay, and allow a full and total accounting for that. But he isn't about to do it, and it can't be done, obviously."
This unscripted remark initiated a process that would lead to Syria agreeing to relinquish and destroy its chemical
weapons arsenal, as Russia treated Kerry's statement as a serious proposal. Russian Foreign Minister Sergey Lavrov
said Russia would work "immediately" to convince Syria to relinquish and destroy its large chemical weapons arsenal.
Syria quickly welcomed this proposal and on September 14, the UN formally accepted Syria's application to join the
convention banning chemical weapons, and separately, the U.S. and Russia agreed on a plan to eliminate Syria's chemical
weapons by the middle of 2014. On September 28, the UN Security Council passed a resolution ordering the destruction
of Syria's chemical weapons and condemning the August 21 Ghouta attack. In a speech before the Organization of American
States in November 2013, Kerry remarked that the era of the Monroe Doctrine was over. He went on to explain, "The
relationship that we seek and that we have worked hard to foster is not about a United States declaration about how
and when it will intervene in the affairs of other American states. It's about all of our countries viewing one another
as equals, sharing responsibilities, cooperating on security issues, and adhering not to doctrine, but to the decisions
that we make as partners to advance the values and the interests that we share." Kerry's paternal grandparents, shoe
businessman Frederick A. "Fred" Kerry and musician Ida Lowe, were immigrants from the Austro-Hungarian Empire. Fred
was born as "Fritz Kohn" before he and Ida took on the "Kerry" name and moved to the United States. Fred and Ida
were born Jewish, and converted to Catholicism together in Austria. His maternal ancestors were of Scottish and English
descent, and his maternal grandfather James Grant Forbes II was a member of the Forbes family, while his maternal
grandmother Margaret Tyndal Winthrop was a member of the Dudley–Winthrop family. Margaret's paternal grandfather
Robert Charles Winthrop served as the 22nd Speaker of the U.S. House of Representatives. Robert's father was Governor
Thomas Lindall Winthrop. Thomas' father John Still Winthrop was a great-great-grandson of Massachusetts Bay Colony
Governor John Winthrop and great-grandson of Governor Thomas Dudley. Through his mother, Kerry is a first cousin once
removed of French politician Brice Lalonde. Kerry's daughter Alexandra was born days before he began law school. In 1982, his first wife, Julia
Thorne, asked Kerry for a separation while she was suffering from severe depression. They were divorced on July 25, 1988,
and the marriage was formally annulled in 1997. "After 14 years as a political wife, I associated politics only with
anger, fear and loneliness" she wrote in A Change of Heart, her book about depression. Thorne later married Richard
Charlesworth, an architect, and moved to Bozeman, Montana, where she became active in local environmental groups
such as the Greater Yellowstone Coalition. Thorne supported Kerry's 2004 presidential run. She died of cancer on
April 27, 2006. Kerry and his second wife, Mozambican-born businesswoman and philanthropist Maria Teresa Thierstein
Simões Ferreira (known as Teresa), the widow of Kerry's late Pennsylvania Republican Senate colleague Henry John
Heinz III, were introduced to each other by Heinz at an Earth Day rally in 1990. Early the following year, Senator
Heinz was killed in a plane crash near Lower Merion. Teresa has three sons from her previous marriage to Heinz, Henry
John Heinz IV, André Thierstein Heinz, and Christopher Drake Heinz. Heinz and Kerry were married on May 26, 1995,
in Nantucket, Massachusetts. The Forbes 400 survey estimated in 2004 that Teresa Heinz Kerry had a net worth of $750
million. However, estimates have frequently varied, ranging from around $165 million to as high as $3.2 billion,
according to a study in the Los Angeles Times. Regardless of which figure is correct, Kerry was the wealthiest U.S.
Senator while serving in the Senate. Independent of Heinz, Kerry is wealthy in his own right, and is the beneficiary
of at least four trusts inherited from Forbes family relatives, including his mother, Rosemary Forbes Kerry, who
died in 2002. Forbes magazine (named for the Forbes family of publishers, unrelated to Kerry) estimated that if elected,
and if Heinz family assets were included, Kerry would have been the third-richest U.S. President in history, when
adjusted for inflation. This assessment was based on Heinz and Kerry's combined assets, but the couple signed a prenuptial
agreement that keeps their assets separate. Kerry's financial disclosure form for 2011 put his personal assets in
the range of $230,000,000 to $320,000,000, including the assets of his spouse and any dependent children. This included
slightly more than three million dollars worth of H. J. Heinz Company assets, which increased in value by over six
hundred thousand dollars in 2013 when Berkshire Hathaway announced its intention to purchase the company. Kerry
is a Roman Catholic, and is said to have carried a rosary, a prayer book, and a medal of St. Christopher (the patron
saint of travelers) when he campaigned. While Kerry is personally against abortion, he supports a woman's legal right
to have one. Discussing his faith, Kerry said, "I thought of being a priest. I was very religious while at school
in Switzerland. I was an altar boy and prayed all the time. I was very centered around the Mass and the church."
He also said that the letters of the Apostle Paul moved him the most, stating that they taught him to "not feel
sorry for myself." Kerry told Christianity Today in October 2004 "I'm a Catholic and I practice, but at the same
time I have an open-mindedness to many other expressions of spirituality that come through different religions...
I've spent some time reading and thinking about religion and trying to study it, and I've arrived at not so much
a sense of the differences, but a sense of the similarities in so many ways." He said that he believed that the Torah,
the Qur'an, and the Bible all share a fundamental story which connects with readers. In addition to the sports he
played at Yale, Kerry is described by Sports Illustrated, among others, as an "avid cyclist", primarily riding on
a road bike. Prior to his presidential bid, Kerry was known to have participated in several long-distance rides (centuries).
Even during his many campaigns, he was reported to have visited bicycle stores in both his home state and elsewhere.
His staff requested recumbent stationary bikes for his hotel rooms. He has also been a snowboarder, windsurfer, and
sailor. According to the Boston Herald, dated July 23, 2010, Kerry commissioned construction on a new $7 million
yacht (a Friendship 75) in New Zealand and moored it in Portsmouth, Rhode Island, where the Friendship yacht company
is based. The article claimed this allowed him to avoid paying Massachusetts taxes on the property including approximately
$437,500 in sales tax and an annual excise tax of about $500. However, on July 27, 2010, Kerry stated he had yet
to take legal possession of the boat, had not intended to avoid the taxes, and that when he took possession, he would
pay the taxes whether he owed them or not.
Rajasthan (/ˈrɑːdʒəstæn/ Hindustani pronunciation: [raːdʒəsˈt̪ʰaːn] ( listen); literally, "Land of Kings") is India's largest
state by area (342,239 square kilometres (132,139 sq mi) or 10.4% of India's total area). It is located on the western
side of the country, where it comprises most of the wide and inhospitable Thar Desert (also known as the "Rajasthan
Desert" and "Great Indian Desert") and shares a border with the Pakistani provinces of Punjab to the northwest and
Sindh to the west, along the Sutlej-Indus river valley. Elsewhere it is bordered by the other Indian states: Punjab
to the north; Haryana and Uttar Pradesh to the northeast; Madhya Pradesh to the southeast; and Gujarat to the southwest.
Its features include the ruins of the Indus Valley Civilization at Kalibangan; the Dilwara Temples, a Jain pilgrimage
site at Rajasthan's only hill station, Mount Abu, in the ancient Aravalli mountain range; and, in eastern Rajasthan,
the Keoladeo National Park near Bharatpur, a World Heritage Site known for its bird life. Rajasthan is also home
to two national tiger reserves, the Ranthambore National Park in Sawai Madhopur and Sariska Tiger Reserve in Alwar.
The first mention of the name "Rajasthan" appears in James Tod's 1829 publication Annals and Antiquities of Rajast'han
or the Central and Western Rajpoot States of India, while the earliest known record of "Rajputana" as a name for
the region is in George Thomas's 1800 memoir Military Memoirs. John Keay, in his book India: A History, stated that
"Rajputana" was coined by the British in 1829; John Briggs, translating Ferishta's history of early Islamic India,
had used the phrase "Rajpoot (Rajput) princes" rather than "Indian princes". Parts of what is now Rajasthan were part
of the Indus Valley Civilization. Kalibangan, in Hanumangarh district, was a major provincial capital of the Indus
Valley Civilization. It is believed that the Western Kshatrapas (35–405 CE) were Saka rulers of the western part of
India (Saurashtra and Malwa: modern Gujarat, Southern Sindh, Maharashtra, Rajasthan). They were successors to the
Indo-Scythians and were contemporaneous with the Kushans, who ruled the northern part of the Indian subcontinent.
The Indo-Scythians invaded the area of Ujjain and established the Saka era (with their calendar), marking the beginning
of the long-lived Saka Western Satraps state. Matsya, a state of the Vedic civilisation of India, is said to have roughly
corresponded to the former state of Jaipur in Rajasthan and to have included the whole of Alwar with portions of Bharatpur.
The capital of Matsya was at Viratanagar (modern Bairat), which is said to have been named after its founder king
Virata. Traditionally the Rajputs, Jats, Meenas, Gurjars, Bhils, Rajpurohit, Charans, Yadavs, Bishnois, Sermals,
PhulMali (Saini) and other tribes contributed greatly to building the state of Rajasthan. All these tribes
suffered great difficulties protecting their culture and land, and millions were killed in the attempt; many Gurjars
were wiped out in the Bhinmal and Ajmer areas fighting the invaders. Bhils
once ruled Kota. Meenas were rulers of Bundi and the Dhundhar region. The Gurjara-Pratihara Empire acted as a barrier
against Arab invaders from the 8th to the 11th century. The chief accomplishment of the Gurjara-Pratihara Empire lies
in its successful resistance to foreign invasions from the west, starting in the days of Junaid. Historian R. C.
Majumdar says that this was openly acknowledged by the Arab writers. He further notes that historians of India have
wondered at the slow progress of Muslim invaders in India, as compared with their rapid advance in other parts of
the world. Now there seems little doubt that it was the power of the Gurjara Pratihara army that effectively barred
the progress of the Arabs beyond the confines of Sindh, their first conquest for nearly 300 years. Modern Rajasthan
includes most of Rajputana, which comprises the erstwhile nineteen princely states, two chiefships, and the British
district of Ajmer-Merwara. Marwar (Jodhpur), Bikaner, Mewar (Chittorgarh), Alwar and Dhundhar (Jaipur) were some
of the main Rajput princely states. Bharatpur and Dholpur were Jat princely states whereas Tonk was a princely state
under a Muslim Nawab. Rajput families rose to prominence in the 6th century CE. The Rajputs put up a valiant resistance
to the Islamic invasions and protected this land with their warfare and chivalry for more than 500 years. They also
resisted Mughal incursions into India and thus slowed the Mughals' advance into the Indian
subcontinent. Later, the Mughals, through skilled warfare, were able to get a firm grip on northern India, including
Rajasthan. Mewar led other kingdoms in its resistance to outside rule. Most notably, Rana Sanga fought the Battle
of Khanua against Babur, the founder of the Mughal empire. Over the years, the Mughals began to have internal disputes
which greatly distracted them at times. The Mughal Empire continued to weaken, and with the decline of the Mughal
Empire in the 18th century, Rajputana came under the suzerainty of the Marathas. The Marathas, who were Hindus from
the state of what is now Maharashtra, ruled Rajputana for most of the eighteenth century. The Maratha Empire, which
had replaced the Mughal Empire as the overlord of the subcontinent, was finally replaced by the British Empire in
1818. The geographic features of Rajasthan are the Thar Desert and the Aravalli Range, which runs through the state
from southwest to northeast, almost from one end to the other, for more than 850 kilometres (530 mi). Mount Abu lies
at the southwestern end of the range, separated from the main ranges by the West Banas River, although a series of
broken ridges continues into Haryana in the direction of Delhi where it can be seen as outcrops in the form of the
Raisina Hill and the ridges farther north. About three-fifths of Rajasthan lies northwest of the Aravallis, leaving
two-fifths to the east and south. The northwestern portion of Rajasthan is generally sandy and dry; most
of this region is covered by the Thar Desert, which extends into adjoining portions of Pakistan. The Aravalli Range
does not intercept the moisture-giving southwest monsoon winds off the Arabian Sea, as it lies in a direction parallel
to that of the coming monsoon winds, leaving the northwestern region in a rain shadow. The Thar Desert is thinly
populated; the town of Jodhpur, the largest city in the desert, is known as the gateway to the Thar Desert. The desert
covers several major districts, including Jodhpur, Jaisalmer, Barmer, Bikaner and Nagaur. The area is also important
from a defence point of view: the Jodhpur air base is India's largest, military and Border Security Force (BSF) bases
are located here, and Jodhpur also has a civil airport. The Northwestern thorn scrub forests lie in a band around the Thar Desert,
between the desert and the Aravallis. This region receives less than 400 mm of rain in an average year. Temperatures
can exceed 48 °C in the summer months and drop below freezing in the winter. The Godwar, Marwar, and Shekhawati regions
lie in the thorn scrub forest zone, along with the city of Jodhpur. The Luni River and its tributaries are the major
river system of Godwar and Marwar regions, draining the western slopes of the Aravallis and emptying southwest into
the great Rann of Kutch wetland in neighboring Gujarat. This river is saline in the lower reaches and remains potable
only up to Balotara in Barmer district. The Ghaggar River, which originates in Haryana, is an intermittent stream
that disappears into the sands of the Thar Desert in the northern corner of the state and is seen as a remnant of
the ancient Saraswati River. The Aravalli Range and the lands to the east and southeast of the range are generally
more fertile and better watered. This region is home to the Kathiawar-Gir dry deciduous forests ecoregion, with
tropical dry broadleaf forests that include teak, Acacia, and other trees. The hilly Vagad region, home to the cities
of Dungarpur and Banswara, lies in southernmost Rajasthan, on the border with Gujarat and Madhya Pradesh. With the
exception of Mount Abu, Vagad is the wettest region in Rajasthan, and the most heavily forested. North of Vagad lies
the Mewar region, home to the cities of Udaipur and Chittaurgarh. The Hadoti region lies to the southeast, on the
border with Madhya Pradesh. North of Hadoti and Mewar lies the Dhundhar region, home to the state capital of Jaipur.
Mewat, the easternmost region of Rajasthan, borders Haryana and Uttar Pradesh. Eastern and southeastern Rajasthan
is drained by the Banas and Chambal rivers, tributaries of the Ganges. The Aravalli Range runs across the state from
the southwest peak Guru Shikhar (Mount Abu), which is 1,722 m in height, to Khetri in the northeast. This range divides
the state into 60% in the northwest of the range and 40% in the southeast. The northwest tract is sandy and unproductive
with little water but improves gradually from desert land in the far west and northwest to comparatively fertile
and habitable land towards the east. The area includes the Thar Desert. The south-eastern area, higher in elevation
(100 to 350 m above sea level) and more fertile, has a very diversified topography. In the south lies the hilly tract
of Mewar. In the southeast, a large area within the districts of Kota and Bundi forms a tableland. To the northeast
of these districts is a rugged region (badlands) following the line of the Chambal River. Farther north the country
levels out; the flat plains of the northeastern Bharatpur district are part of an alluvial basin. Merta City lies
in the geographical center of Rajasthan. The Desert National Park in Jaisalmer, spread over an area of 3,162 square kilometres (1,221 sq mi), is an excellent example of the ecosystem of the Thar Desert and its diverse fauna. Seashells
and massive fossilised tree trunks in this park record the geological history of the desert. The region is a haven
for migratory and resident birds of the desert. One can see many eagles, harriers, falcons, buzzards, kestrels and
vultures. Short-toed eagles (Circaetus gallicus), tawny eagles (Aquila rapax), spotted eagles (Aquila clanga), laggar
falcons (Falco jugger) and kestrels are the commonest of these. Ranthambore National Park is known worldwide for
its tiger population and is considered by both wilderness lovers and photographers as one of the best places in India
to spot tigers. At one point, due to poaching and negligence, tigers became extinct at Sariska, but five tigers have
been relocated there. Prominent among the wildlife sanctuaries are Mount Abu Sanctuary, Bhensrod Garh Sanctuary,
Darrah Sanctuary, Jaisamand Sanctuary, Kumbhalgarh Wildlife Sanctuary, Jawahar Sagar sanctuary, and Sita Mata Wildlife
Sanctuary. Rajasthan's economy is primarily agricultural and pastoral. Wheat and barley are cultivated over large
areas, as are pulses, sugarcane, and oilseeds. Cotton and tobacco are the state's cash crops. Rajasthan is among
the largest producers of edible oils in India and the second largest producer of oilseeds. Rajasthan is also the
biggest wool-producing state in India and the main opium producer and consumer. There are mainly two crop seasons.
The water for irrigation comes from wells and tanks. The Indira Gandhi Canal irrigates northwestern Rajasthan. The
main industries are mineral based, agriculture based, and textile based. Rajasthan is the second largest producer
of polyester fibre in India. The Pali and Bhilwara districts together produce more cloth than Bhiwandi, Maharashtra; Bhilwara is the largest centre for the production and export of suitings, while Pali leads in the production and export of cotton and polyester blouse pieces and rubia. Several prominent chemical and engineering companies are located in
the city of Kota, in southern Rajasthan. Rajasthan is pre-eminent in quarrying and mining in India. The Taj Mahal was built from white marble mined in the town of Makrana. The state is the second largest source
of cement in India. It has rich salt deposits at Sambhar, copper mines at Khetri, Jhunjhunu, and zinc mines at Dariba,
Zawar mines and Rampura Aghucha (opencast) near Bhilwara. Dimensional stone mining is also undertaken in Rajasthan.
Jodhpur sandstone is mostly used in monuments, important buildings and residential buildings. This stone is known as "chittar patthar". Jodhpur leads in the handicraft and guar gum industries. Rajasthan is also part of the Delhi–Mumbai Industrial Corridor (DMIC) and is set to benefit economically. The state gets 39% of the corridor, with the major districts of Jaipur, Alwar, Kota and Bhilwara benefiting. Rajasthan earns Rs. 150 million (approx. US$2.5 million) per day
as revenue from the crude oil sector. This revenue is expected to reach ₹250 million per day in 2013, an increase of ₹100 million, or more than 66 percent. The government of India has given permission to extract 300,000 barrels of crude per day from the Barmer region, up from the current 175,000 barrels per day. Once this limit is achieved, Rajasthan will become a leader in crude extraction in the country; Bombay High currently leads with a production of 250,000 barrels of crude per day. Once the limit of 300,000 barrels per day is reached, the overall production of the country will increase by 15 percent. Cairn India carries out the exploration and extraction of crude oil in Rajasthan. Rajasthani
cooking was influenced by both the war-like lifestyles of its inhabitants and the availability of ingredients in
this arid region. Food that could last for several days and could be eaten without heating was preferred. The scarcity
of water and fresh green vegetables has had its effect on the cooking. Rajasthan is known for snacks like Bikaneri
Bhujia. Other famous dishes include bajre ki roti (millet bread) and lashun ki chutney (hot garlic paste); mawa kachori, mirchi bada, pyaaj kachori and ghevar from Jodhpur; Alwar ka mawa (milk cake); malpuas from Pushkar; and rasgullas from Bikaner. Originating from the Marwar region of the state is the concept of the Marwari Bhojnalaya, or vegetarian restaurant, today found in many parts of India, offering the vegetarian food of the Marwari people. Dal-bati-churma is very
popular in Rajasthan. A typical Rajasthani dish, dal-bati-churma is a combination of three different food items: daal (lentils), baati, and churma (a sweet). The traditional way to serve it is to first coarsely mash the baati and then pour pure ghee on top of it. It is served with the daal and spicy garlic chutney, and also with besan (gram flour) kadhi. It is commonly served at all festivities, including religious occasions, wedding ceremonies, and birthday parties in Rajasthan. The Ghoomar dance from Jodhpur Marwar and the Kalbeliya dance of Jaisalmer
have gained international recognition. Folk music is a large part of Rajasthani culture. Kathputli, Bhopa, Chang,
Teratali, Ghindr, Kachchhighori, and Tejaji are examples of traditional Rajasthani culture. Folk songs are commonly
ballads relating heroic deeds and love stories. Religious or devotional songs known as bhajans and banis, often accompanied by instruments like the dholak, sitar, and sarangi, are also sung. Rajasthan is known
for its traditional, colorful art. The block prints, tie and dye prints, Bagaru prints, Sanganer prints, and Zari
embroidery are major export products from Rajasthan. Handicraft items like wooden furniture and crafts, carpets,
and blue pottery are commonly found here. Shopping reflects the colorful culture: Rajasthani clothes feature a great deal of mirror work and embroidery. A traditional Rajasthani dress for females comprises an ankle-length skirt and a short
top, also known as a lehenga or a chaniya choli. A piece of cloth is used to cover the head, both for protection
from heat and maintenance of modesty. Rajasthani dresses are usually designed in bright colors like blue, yellow
and orange. Spirit possession has been documented in modern Rajasthan. Some of the spirits possessing Rajasthanis
are seen as good and beneficial while others are seen as malevolent. The good spirits include murdered royalty, the
underworld god Bhaironji, and Muslim saints. Bad spirits include perpetual debtors who die in debt, stillborn infants,
deceased widows, and foreign tourists. The possessed individual is referred to as a ghorala ("mount"). Possession,
even if it is by a benign spirit, is regarded as undesirable, as it entails loss of self-control and violent emotional
outbursts. In recent decades, the literacy rate of Rajasthan has increased significantly. In 1991, the state's literacy
rate was only 38.55% (54.99% male and 20.44% female). In 2001, the literacy rate increased to 60.41% (75.70% male
and 43.85% female). This was the highest leap in the percentage of literacy recorded in India (the rise in female
literacy being 23%). At the Census 2011, Rajasthan had a literacy rate of 67.06% (80.51% male and 52.66% female).
Although Rajasthan's literacy rate is below the national average of 74.04% and although its female literacy rate
is the lowest in the country, the state has been praised for its efforts and achievements in raising male and female
literacy rates. In Rajasthan, Jodhpur and Kota are two major educational hubs. Kota is known for its quality coaching in preparation for various competitive exams, particularly medical and engineering entrance exams, and is popularly referred to as the "coaching capital of India", while Jodhpur is home to many higher educational institutions such as IIT, AIIMS, National Law University, Sardar Patel Police University, National Institute of Fashion Technology, and MBM Engineering College. Other major educational institutions are Birla Institute of Technology and Science Pilani, Malaviya National Institute of Technology Jaipur, IIM Udaipur, and LNMIIT. Rajasthan has nine universities and more than 250 colleges,
55,000 primary and 7,400 secondary schools. There are 41 engineering colleges with an annual enrollment of about
11,500 students. Apart from the above, there are 41 private universities, such as Amity University Rajasthan (Jaipur), Manipal University Jaipur, OPJS University (Churu), Mody University of Technology and Science (a women's university in Lakshmangarh, Sikar), and RNB Global University (Bikaner). The state has 23 polytechnic colleges and 152 Industrial Training Institutes (ITIs)
that impart vocational training. Rajasthan attracted 14 percent of India's total foreign visitors during 2009–2010, the fourth highest share among Indian states; it also ranks fourth in domestic tourist visits. Tourism is a flourishing
industry in Rajasthan. The palaces of Jaipur and Ajmer-Pushkar, the lakes of Udaipur, the desert forts of Jodhpur,
Taragarh Fort (Star Fort) in Ajmer, and Bikaner and Jaisalmer rank among the most preferred destinations in India
for many tourists both Indian and foreign. Tourism accounts for eight percent of the state's domestic product. Many
old and neglected palaces and forts have been converted into heritage hotels. Tourism has increased employment in
the hospitality sector. Rajasthan is famous for its forts, carved temples, and decorated havelis, which were built
by Rajput kings in pre-Muslim era Rajasthan. Rajasthan's Jaipur Jantar Mantar, Mehrangarh Fort and
Stepwell of Jodhpur, Dilwara Temples, Chittorgarh Fort, Lake Palace, miniature paintings in Bundi, and numerous city
palaces and havelis are part of the architectural heritage of India. Jaipur, the Pink City, is noted for its ancient houses made of a type of sandstone dominated by a pink hue. In Jodhpur, most houses are painted blue. At Ajmer, there is a white marble bara-dari on the Anasagar Lake. Jain temples dot Rajasthan from north to south and east to
west. Dilwara Temples of Mount Abu, Ranakpur Temple dedicated to Lord Adinath in Pali District, Jain temples in the
fort complexes of Chittor, Jaisalmer and Kumbhalgarh, Lodurva Jain temples, Mirpur Jain Temple, Sarun Mata Temple
Kotputli, the Bhandasar and Karni Mata temples of Bikaner, and Mandore of Jodhpur are some of the best examples.
Guam (i/ˈɡwɑːm/ or /ˈɡwɒm/; Chamorro: Guåhån; formally the Territory of Guam) is an unincorporated and organized
territory of the United States. Located in the northwestern Pacific Ocean, Guam is one of five American territories
with an established civilian government. The capital city is Hagåtña, and the most populous city is Dededo. In 2015,
161,785 people resided on Guam. Guamanians are American citizens by birth. Guam has an area of 544 km2 (210 sq mi)
and a density of 297/km² (770/sq mi). It is the largest and southernmost of the Mariana Islands, and the largest
island in Micronesia. Among its municipalities, Mongmong-Toto-Maite has the highest density at 1,425/km² (3,691/sq
mi), whereas Inarajan and Umatac have the lowest density at 47/km² (119/sq mi). The highest point is Mount Lamlam
at 406 meters (1,332 ft) above sea level. The Chamorros, Guam's indigenous people, settled the island approximately
4,000 years ago. Portuguese explorer Ferdinand Magellan was the first European to visit the island on March 6, 1521.
Guam was colonized in 1668 by settlers such as Diego Luis de San Vitores, a Catholic missionary. Between the 1500s
and the 1700s, Guam was an important stopover for the Spanish Manila Galleons. During the Spanish–American War, the
United States captured Guam on June 21, 1898. Under the Treaty of Paris, Spain ceded Guam to the United States on
December 10, 1898. Guam is amongst the seventeen Non-Self-Governing Territories of the United Nations. Before World
War II, Guam and three other territories – American Samoa, Hawaii, and the Philippines – were the only American jurisdictions
in the Pacific Ocean. On December 7, 1941, hours after the attack on Pearl Harbor, Guam was captured by the Japanese,
and was occupied for thirty months. During the occupation, Guamanians were subjected to culture alignment, forced
labor, beheadings, rape, and torture. Guam endured hostilities when American forces recaptured the island on July
21, 1944; Liberation Day commemorates the victory. Since the 1960s, the economy has been supported by two industries: tourism
and the United States Armed Forces. The ancient Chamorro society had four classes: chamorri (chiefs), matua (upper class), achaot (middle class), and mana'chang (lower class). The matua were located in the coastal villages,
which meant they had the best access to fishing grounds, whereas the mana'chang were located in the interior of the
island. Matua and mana'chang rarely communicated with each other, and matua often used achaot as intermediaries.
There were also "makåhna" (similar to shamans), skilled in healing and medicine. Belief in spirits of ancient Chamorros
called "Taotao mo'na" still persists as a remnant of pre-European culture. Their society was organized along matrilineal
clans. The first European to reach Guam was Portuguese navigator Ferdinand Magellan, sailing for the King of Spain, who sighted the island on March 6, 1521 during his fleet's circumnavigation of the globe. When Magellan
arrived on Guam, he was greeted by hundreds of small outrigger canoes that appeared to be flying over the water,
due to their considerable speed. These outrigger canoes were called proas, and they led Magellan to name Guam Islas de las Velas Latinas ("Islands of the Lateen Sails"). Antonio Pigafetta, one of Magellan's original 18, used the name "Island of Sails", but he also writes that the inhabitants "entered the ships and stole whatever they could lay their hands on", including "the small boat that was fastened to the poop of the flagship." "Those people are poor, but ingenious and very thievish, on account of which we called those three islands Islas de los Ladrones ("Islands of Thieves")."
Despite Magellan's visit, Guam was not officially claimed by Spain until January 26, 1565 by General Miguel López
de Legazpi. From 1565 to 1815, Guam and the Northern Mariana Islands, the only Spanish outpost in the Pacific Ocean east of the Philippines, were an important resting stop for the Manila galleons, a fleet that covered the Pacific trade route between Acapulco and Manila. To protect these Pacific fleets, Spain built several defensive structures
which are still standing today, such as Fort Nuestra Señora de la Soledad in Umatac. Guam is the biggest single segment of Micronesia, and the largest island between Kyushu (Japan), New Guinea, the Philippines, and the Hawaiian
Islands. Spanish colonization commenced on June 15, 1668 with the arrival of Diego Luis de San Vitores and Pedro
Calungsod, who established the first Catholic church. The islands were part of the Spanish East Indies governed
from the Philippines, which were in turn part of the Viceroyalty of New Spain based in Mexico City. Other reminders
of colonial times include the old Governor's Palace in Plaza de España and the Spanish Bridge, both in Hagatña. Guam's
Cathedral Dulce Nombre de Maria was formally opened on February 2, 1669, as was the Royal College of San Juan de
Letran. Guam, along with the rest of the Mariana and Caroline Islands, was treated as part of Spain's colony
in the Philippines. While Guam's Chamorro culture has indigenous roots, the cultures of both Guam and the Northern
Marianas have many similarities with Spanish and Mexican culture due to three centuries of Spanish rule. Intermittent
warfare lasting from July 23, 1670 until July 1695, plus the typhoons of 1671 and 1693, and in particular the smallpox
epidemic of 1688, reduced the Chamorro population from 50,000 to 10,000 and finally to fewer than 5,000. Precipitated by the
death of Quipuha, and the murder of Father San Vitores and Pedro Calungsod by local rebel chief Matapang, tensions
led to a number of conflicts. Captain Juan de Santiago started a campaign to pacify the island, which was continued
by the successive commanders of the Spanish forces. After his arrival in 1674, Captain Damian de Esplana ordered
the arrest of rebels who attacked the population of certain towns. Hostilities eventually led to the destruction
of villages such as Chochogo, Pepura, Tumon, Sidia-Aty, Sagua, Nagan and Ninca. Starting in June 1676, the
first Spanish Governor of Guam, Capt. Francisco de Irrisarri y Vinar controlled internal affairs more strictly than
his predecessors in order to curb tensions. He also ordered the construction of schools, roads and other infrastructure.
Later, Capt. Jose de Quiroga arrived in 1680 and continued some of the development projects started by his predecessors.
He also continued the search for the rebels who had assassinated Father San Vitores, resulting in campaigns against
the rebels who were hiding out on some islands, eventually leading to the deaths of Matapang, Hurao and Aguarin.
Quiroga brought some natives from the northern islands to Guam, ordering the population to live in a few large villages. These included Jinapsan, Umatac, Pago, Agat and Inarajan, where he built a number of churches. By July 1695, Quiroga had completed the pacification process in Guam, Rota, Tinian and Aguigan. The United States took control of the
island in the 1898 Spanish–American War, as part of the Treaty of Paris. Guam was transferred to U.S. Navy control
on 23 December 1898 by Executive Order 108-A. Guam came to serve as a station for American ships traveling to and
from the Philippines, while the Northern Mariana Islands passed to Germany, and then to Japan. A U.S. Navy yard was
established at Piti in 1899, and a marine barracks at Sumay in 1901. Following the Philippine–American War, Emilio Aguinaldo and Apolinario Mabini were exiled on Guam in 1901. The Northern Mariana Islands had become a Japanese
protectorate before the war. It was the Chamorros from the Northern Marianas who were brought to Guam to serve as
interpreters and in other capacities for the occupying Japanese force. The Guamanian Chamorros were treated as an
occupied enemy by the Japanese military. After the war, this would cause resentment between the Guamanian Chamorros
and the Chamorros of the Northern Marianas. Guam's Chamorros believed their northern brethren should have been compassionate
towards them, whereas the Northern Mariana Chamorros, having been occupied for over 30 years, were loyal to Japan.
After World War II, the Guam Organic Act of 1950 established Guam as an unincorporated organized territory of the
United States, provided for the structure of the island's civilian government, and granted the people U.S. citizenship.
The Governor of Guam was federally appointed until 1968, when the Guam Elective Governor Act provided for the office's
popular election. Since Guam is not a U.S. state, U.S. citizens residing on Guam are not allowed to vote for
president and their congressional representative is a non-voting member. Guam lies between 13.2°N and 13.7°N and
between 144.6°E and 145.0°E, and has an area of 212 square miles (549 km2), making it the 32nd largest island of
the United States. It is the southernmost and largest island in the Mariana island chain and is also the largest
island in Micronesia. This island chain was created by the colliding Pacific and Philippine Sea tectonic plates.
Guam is the closest land mass to the Mariana Trench, a deep subduction zone that lies beside the island chain to the east. Challenger Deep, the deepest surveyed point in the world's oceans, is southwest of Guam at 35,797 feet (10,911
meters) deep. The highest point in Guam is Mount Lamlam at an elevation of 1,334 feet (407 meters). The island of
Guam is 30 miles (50 km) long and 4 to 12 miles (6 to 19 km) wide, 3⁄4 the size of Singapore. The island experiences
occasional earthquakes due to its location on the western edge of the Pacific Plate and near the Philippine Sea Plate.
In recent years, earthquakes with epicenters near Guam have had magnitudes ranging from 5.0 to 8.7. Unlike the Anatahan
volcano in the Northern Mariana Islands, Guam is not volcanically active. However, due to its proximity to Anatahan,
vog (i.e. volcanic smog) does occasionally affect Guam. Guam's climate is characterized as tropical marine moderated
by seasonal northeast trade winds. The weather is generally very warm and humid with little seasonal temperature
variation. The mean high temperature is 86 °F (30 °C) and mean low is 76 °F (24 °C) with an average annual rainfall
of 96 inches (2,180 mm). The dry season runs from December to June. The remaining months (July to November) constitute
the rainy season. The months of January and February are considered the coolest months of the year with overnight
low temperatures of 70–75 °F (21–24 °C) and low humidity levels. The highest temperature ever recorded in Guam was
96 °F (36 °C) on April 18, 1971 and April 1, 1990, and the lowest temperature ever recorded was 65 °F (18 °C) on
February 8, 1973. Post-European-contact Chamorro culture is a combination of American, Spanish, Filipino, other Micronesian
Islander and Mexican traditions, with few remaining indigenous pre-Hispanic customs. These influences are manifested
in the local language, music, dance, sea navigation, cuisine, fishing, games (such as batu, chonka, estuleks, and
bayogu), songs and fashion. During Spanish colonial rule (1668–1898) the majority of the population was converted
to Roman Catholicism and religious festivities such as Easter and Christmas became widespread. Post-contact Chamorro
cuisine is largely based on corn, and includes tortillas, tamales, atole and chilaquiles, which are a clear influence
from Spanish trade between Mesoamerica and Asia. The modern Chamorro language is a Malayo-Polynesian language with
much Spanish and Filipino influence. Many Chamorros also have Spanish surnames because of their conversion to Roman
Catholic Christianity and the adoption of names from the Catálogo alfabético de apellidos, a phenomenon also common
to the Philippines. Two aspects of indigenous pre-Hispanic culture that withstood time are chenchule' and inafa'maolek.
Chenchule' is the intricate system of reciprocity at the heart of Chamorro society. It is rooted in the core value
of inafa'maolek. Historian Lawrence Cunningham in 1992 wrote, "In a Chamorro sense, the land and its produce belong
to everyone. Inafa'maolek, or interdependence, is the key, or central value, in Chamorro culture ... Inafa'maolek
depends on a spirit of cooperation and sharing. This is the armature, or core, that everything in Chamorro culture
revolves around. It is a powerful concern for mutuality rather than individualism and private property rights." The
core culture or Pengngan Chamorro is based on complex social protocol centered upon respect: From sniffing over the
hands of the elders (called mangnginge in Chamorro), the passing down of legends, chants, and courtship rituals,
to a person asking for permission from spiritual ancestors before entering a jungle or ancient battle grounds. Other
practices predating Spanish conquest include galaide' canoe-making, making of the belembaotuyan (a string musical
instrument made from a gourd), fashioning of åcho' atupat slings and slingstones, tool manufacture, Måtan Guma' burial
rituals, and preparation of herbal medicines by Suruhanu. The cosmopolitan and multicultural nature of modern Guam
poses challenges for Chamorros struggling to preserve their culture and identity amidst forces of acculturation.
The increasing number of Chamorros, especially Chamorro youth, relocating to the U.S. mainland has further complicated both the definition and preservation of Chamorro identity. While only a few masters exist to continue
traditional art forms, the resurgence of interest among the Chamorros to preserve the language and culture has resulted
in a growing number of young Chamorros who seek to continue the ancient ways of the Chamorro people. Guam is governed
by a popularly elected governor and a unicameral 15-member legislature, whose members are known as senators. Guam
elects one non-voting delegate, currently Democrat Madeleine Z. Bordallo, to the United States House of Representatives.
U.S. citizens in Guam vote in a straw poll for their choice in the U.S. Presidential general election, but since
Guam has no votes in the Electoral College, the poll has no real effect. However, in sending delegates to the Republican
and Democratic national conventions, Guam does have influence in the national presidential race. These delegates
are elected by local party conventions. In the 1980s and early 1990s, there was a significant movement in favor of
the territory becoming a commonwealth, which would give it a level of self-government similar to Puerto Rico and
the Northern Mariana Islands. However, the federal government rejected the version of a commonwealth that the government
of Guam proposed, due to it having clauses incompatible with the Territorial Clause (Art. IV, Sec. 3, cl. 2) of the
U.S. Constitution. Other movements advocate U.S. statehood for Guam, union with the state of Hawaii, union with the
Northern Mariana Islands as a single territory, or independence. The U.S. military has proposed building a new aircraft
carrier berth on Guam and moving 8,600 Marines, and 9,000 of their dependents, to Guam from Okinawa, Japan. Including
the required construction workers, this buildup would increase Guam's population by 45%. In a February 2010 letter,
the United States Environmental Protection Agency sharply criticized these plans because of a water shortfall, sewage
problems and the impact on coral reefs. By 2012, these plans had been cut to only have a maximum of 4,800 Marines
stationed on the island, two thirds of which would be there on a rotational basis without their dependents. Lying
in the western Pacific, Guam is a popular destination for Japanese tourists. Its tourist hub, Tumon, features over
20 large hotels, a Duty Free Shoppers Galleria, Pleasure Island district, indoor aquarium, Sandcastle Las Vegas–styled
shows and other shopping and entertainment venues. It is a relatively short flight from Asia or Australia compared
to Hawaii, with hotels and seven public golf courses accommodating over a million tourists per year. Although 75%
of the tourists are Japanese, Guam receives a sizable number of tourists from South Korea, the U.S., the Philippines,
and Taiwan. Significant sources of revenue include duty-free designer shopping outlets, and the American-style malls:
Micronesia Mall, Guam Premier Outlets, the Agana Shopping Center, and the world's largest Kmart.
The Compacts of Free Association between the United States, the Federated States of Micronesia, the Republic of the
Marshall Islands and the Republic of Palau accorded the former entities of the Trust Territory of the Pacific Islands
a political status of "free association" with the United States. The Compacts generally allow citizens of these island nations to reside in the United States and its territories without restriction, and many were attracted to Guam due to its proximity and its environmental and cultural familiarity. Over the years, some in Guam have claimed that the territory has had to bear the brunt of this agreement, in the form of public assistance programs and public education for those from the regions involved, and that the federal government should compensate the states and territories affected by this type of migration. Over the years, Congress has appropriated "Compact Impact" aid to Guam,
the Northern Mariana Islands and Hawaii, and eventually this appropriation was written into each renewed Compact.
Some, however, continue to claim the compensation is not enough or that the distribution of actual compensation received
is significantly disproportionate. In 1899, the local postage stamps were overprinted "Guam" as
was done for the other former Spanish colonies, but this was discontinued shortly thereafter and regular U.S. postage
stamps have been used ever since. Because Guam is also part of the U.S. Postal System (postal abbreviation: GU, ZIP
code range: 96910–96932), mail to Guam from the U.S. mainland is considered domestic and no additional charges are
required. Private shipping companies, such as FedEx, UPS, and DHL, however, have no obligation to do so, and do not
regard Guam as domestic. The speed of mail traveling between Guam and the states varies depending on size and time
of year. Light, first-class items generally take less than a week to or from the mainland, but larger first-class
or Priority items can take a week or two. Fourth-class mail, such as magazines, is transported by sea after reaching
Hawaii. Most residents use post office boxes or private mail boxes, although residential delivery is becoming increasingly
available. Incoming mail not from the Americas should be addressed to "Guam" instead of "USA" to avoid being routed
the long way through the U.S. mainland and possibly charged a higher rate (especially from Asia). The Commercial
Port of Guam is the island's lifeline because most products must be shipped into Guam for consumers. It receives
the weekly calls of the Hawaii-based shipping line Matson, Inc. whose container ships connect Guam with Honolulu,
Hawaii, Los Angeles, California, Oakland, California and Seattle, Washington. The port is also the regional transhipment
hub for over 500,000 customers throughout the Micronesian region. The port is the shipping and receiving point for
containers designated for the island's U.S. Department of Defense installations, Andersen Air Force Base and Commander,
Naval Forces Marianas and eventually the Third Marine Expeditionary Force. Guam is served by the Antonio B. Won Pat
International Airport, which is a hub for United Airlines. The island is outside the United States customs zone so
Guam is responsible for establishing and operating its own customs and quarantine agency and jurisdiction. Therefore,
the U.S. Customs and Border Protection carries out only immigration (but not customs) functions. Since Guam is under
federal immigration jurisdiction, passengers arriving directly from the United States skip immigration and proceed
directly to Guam Customs and Quarantine. Believed to be a stowaway on a U.S. military transport near the end of World
War II, the brown tree snake (Boiga irregularis) was accidentally introduced to Guam, which previously had no native
species of snake. It nearly eliminated the native bird population. The problem was exacerbated because the reptile
has no natural predators on the island. The brown tree snake, known locally as the kulebla, is native to northern
and eastern coasts of Australia, Papua New Guinea, and the Solomon Islands. While slightly venomous, the snake is
relatively harmless to human beings. Although some studies have suggested a high density of these serpents on Guam,
residents rarely see the nocturnal creatures. The United States Department of Agriculture has trained detector dogs
to keep the snakes out of the island's cargo flow. The United States Geological Survey also has dogs capable of detecting
snakes in forested environments around the region's islands. Before the introduction of the brown tree snake, Guam
was home to several endemic bird species. Among them were the Guam rail (or ko'ko' bird in Chamorro) and the Guam
flycatcher, both common throughout the island. Today the flycatcher is entirely extinct while the Guam rail is extinct
in the wild but bred in captivity by the Division of Aquatic and Wildlife Resources. The devastation caused by the
snake has been significant over the past several decades. As many as twelve bird species are believed to have been
driven to extinction. According to many elders, ko'ko' birds were common in Guam before World War II. An infestation
of the coconut rhinoceros beetle (CRB), Oryctes rhinoceros, was detected on Guam on September 12, 2007. CRB is not
known to occur in the United States except in American Samoa. Delimiting surveys performed September 13–25, 2007
indicated that the infestation was limited to Tumon Bay and Faifai Beach, an area of approximately 900 acres (3.6
km2). Guam Department of Agriculture (GDA) placed quarantine on all properties within the Tumon area on October 5
and later expanded the quarantine to about 2,500 acres (10 km2) on October 25, covering a radius of approximately
0.5 miles (800 m) in all directions from all known locations of CRB infestation. CRB is native to Southern Asia and distributed throughout
Asia and the Western Pacific including Sri Lanka, Upolu, Samoa, American Samoa, Palau, New Britain, West Irian, New
Ireland, Pak Island and Manus Island (New Guinea), Fiji, Cocos (Keeling) Islands, Mauritius, and Reunion. Wildfires
plague the forested areas of Guam every dry season despite the island's humid climate. Most fires are caused by humans,
with 80% resulting from arson. Poachers often start fires to attract deer to the new growth. Invasive grass species
that rely on fire as part of their natural life cycle grow in many regularly burned areas. Grasslands and "barrens"
have replaced previously forested areas leading to greater soil erosion. During the rainy season sediment is carried
by the heavy rains into the Fena Lake Reservoir and Ugum River, leading to water quality problems for southern Guam.
Eroded silt also destroys the marine life in reefs around the island. Soil stabilization efforts by volunteers and
forestry workers (planting trees) have had little success in preserving natural habitats. Efforts have been made
to protect Guam's coral reef habitats from pollution, eroded silt and overfishing, problems that have led to decreased
fish populations. These efforts are especially important because Guam is a significant vacation destination for scuba divers. In recent years,
the Department of Agriculture, Division of Aquatic and Wildlife Resources has established several new marine preserves
where fish populations are monitored by biologists. Before adopting U.S. Environmental Protection Agency standards,
portions of Tumon Bay were dredged by the hotel chains to provide a better experience for hotel guests. Tumon Bay
has since been made into a preserve. A federal Guam National Wildlife Refuge in northern Guam protects the decimated
sea turtle population in addition to a small colony of Mariana fruit bats. The University of Guam (UOG) and Guam
Community College, both fully accredited by the Western Association of Schools and Colleges, offer courses in higher
education. UOG is one of only 76 land-grant institutions in the United States.
Pacific Islands University is a small Christian liberal arts institution nationally accredited by the Transnational
Association of Christian Colleges and Schools. It offers courses at both the undergraduate and graduate levels.
The Guam Department of Education serves the entire island of Guam. In 2000, 32,000 students attended Guam's public
schools. Guam Public Schools have struggled with problems such as high dropout rates and poor test scores. Guam's
educational system has always faced unique challenges as a small community located 6,000 miles (9,700 km) from the
U.S. mainland with a very diverse student body including many students who come from backgrounds without traditional
American education. An economic downturn in Guam since the mid-1990s has compounded the problems in schools. The
Government of Guam maintains the island's main health care facility, Guam Memorial Hospital, in Tamuning. U.S. board
certified doctors and dentists practice in all specialties. In addition, the U.S. Naval Hospital in Agana Heights
serves active-duty members and dependents of the military community. There is one subscriber-based air ambulance
located on the island, CareJet, which provides emergency patient transportation across Guam and surrounding islands.
A private hospital, the Guam Regional Medical City opened its doors in early 2016.
Philosophical empiricists hold no knowledge to be properly inferred or deduced unless it is derived from one's sense-based
experience. This view is commonly contrasted with rationalism, which states that knowledge may be derived from reason
independently of the senses. For example, John Locke held that some knowledge (e.g. knowledge of God's existence)
could be arrived at through intuition and reasoning alone. Similarly, Robert Boyle, a prominent advocate of the experimental
method, held that we have innate ideas. The main continental rationalists (Descartes, Spinoza, and Leibniz) were
also advocates of the empirical "scientific method". Aristotle's explanation of how this was possible was not strictly
empiricist in a modern sense, but rather based on his theory of potentiality and actuality, and experience of sense
perceptions still requires the help of the active nous. These notions contrasted with Platonic notions of the human
mind as an entity that pre-existed somewhere in the heavens, before being sent down to join a body on Earth (see
Plato's Phaedo and Apology, as well as others). Aristotle was considered to give a more important position to sense
perception than Plato, and commentators in the Middle Ages summarized one of his positions as "nihil in intellectu
nisi prius fuerit in sensu" (Latin for "nothing in the intellect without first being in the senses"). This idea was
later developed in ancient philosophy by the Stoic school. Stoic epistemology generally emphasized that the mind
starts blank, but acquires knowledge as the outside world is impressed upon it. The doxographer Aetius summarizes
this view as "When a man is born, the Stoics say, he has the commanding part of his soul like a sheet of paper ready
for writing upon." Later Stoics, such as Sextus of Chaeronea, continued this empiricist idea in their writings.
As Sextus contends, "For every thought comes from sense-perception or not without sense-perception
and either from direct experience or not without direct experience" (Against the Professors, 8.56-8). During the
Middle Ages Aristotle's theory of tabula rasa was developed by Islamic philosophers starting with Al Farabi, developing
into an elaborate theory by Avicenna and demonstrated as a thought experiment by Ibn Tufail. For Avicenna (Ibn Sina),
for example, the tabula rasa is a pure potentiality that is actualized through education, and knowledge is attained
through "empirical familiarity with objects in this world from which one abstracts universal concepts" developed
through a "syllogistic method of reasoning in which observations lead to propositional statements which when compounded
lead to further abstract concepts". The intellect itself develops from a material intellect (al-'aql al-hayulani),
which is a potentiality "that can acquire knowledge to the active intellect (al-'aql al-fa'il), the state of the
human intellect in conjunction with the perfect source of knowledge". So the immaterial "active intellect", separate
from any individual person, is still essential for understanding to occur. In the 12th century CE the Andalusian
Muslim philosopher and novelist Abu Bakr Ibn Tufail (known as "Abubacer" or "Ebn Tophail" in the West) included the
theory of tabula rasa as a thought experiment in his Arabic philosophical novel, Hayy ibn Yaqdhan in which he depicted
the development of the mind of a feral child "from a tabula rasa to that of an adult, in complete isolation from
society" on a desert island, through experience alone. The Latin translation of his philosophical novel, entitled
Philosophus Autodidactus, published by Edward Pococke the Younger in 1671, had an influence on John Locke's formulation
of tabula rasa in An Essay Concerning Human Understanding. In the late Renaissance various writers began to question
the medieval and classical understanding of knowledge acquisition in a more fundamental way. In political and historical
writing Niccolò Machiavelli and his friend Francesco Guicciardini initiated a new realistic style of writing. Machiavelli
in particular was scornful of writers on politics who judged everything in comparison to mental ideals and demanded
that people should study the "effectual truth" instead. Their contemporary, Leonardo da Vinci (1452–1519) said, "If
you find from your own experience that something is a fact and it contradicts what some authority has written down,
then you must abandon the authority and base your reasoning on your own findings." The decidedly anti-Aristotelian
and anti-clerical music theorist Vincenzo Galilei (ca. 1520–1591), father of Galileo and the inventor of monody,
made use of the method in successfully solving musical problems: firstly, problems of tuning, such as the relationship of pitch
to string tension and mass in stringed instruments, and to the volume of air in wind instruments; and secondly, problems of composition,
by his various suggestions to composers in his Dialogo della musica antica e moderna (Florence, 1581). The Italian
word he used for "experiment" was esperienza. It is known that he was the essential pedagogical influence upon the
young Galileo, his eldest son (cf. Coelho, ed. Music and Science in the Age of Galileo Galilei), arguably one of
the most influential empiricists in history. Vincenzo, through his tuning research, found the underlying truth at
the heart of the misunderstood myth of 'Pythagoras' hammers' (the square of the numbers concerned yielded those musical
intervals, not the actual numbers, as believed), and through this and other discoveries that demonstrated the fallibility
of traditional authorities, a radically empirical attitude developed, passed on to Galileo, which regarded "experience
and demonstration" as the sine qua non of valid rational enquiry. British empiricism, though it was not a term used
at the time, derives from the 17th century period of early modern philosophy and modern science. The term became
useful in order to describe differences perceived between two of its founders Francis Bacon, described as empiricist,
and René Descartes, who is described as a rationalist. Thomas Hobbes and Baruch Spinoza, in the next generation,
are often also described as an empiricist and a rationalist respectively. John Locke, George Berkeley, and David
Hume were the primary exponents of empiricism in the 18th century Enlightenment, with Locke being the person who
is normally known as the founder of empiricism as such. In response to the early-to-mid-17th century "continental
rationalism" John Locke (1632–1704) proposed in An Essay Concerning Human Understanding (1689) a very influential
view wherein the only knowledge humans can have is a posteriori, i.e., based upon experience. Locke is famously credited
with holding the proposition that the human mind is a tabula rasa, a "blank tablet", in Locke's words "white paper",
on which the experiences derived from sense impressions as a person's life proceeds are written. There are two sources
of our ideas: sensation and reflection. In both cases, a distinction is made between simple and complex ideas. The
former are unanalysable, and are broken down into primary and secondary qualities. Primary qualities are essential
for the object in question to be what it is. Without specific primary qualities, an object would not be what it is.
For example, an apple is an apple because of the arrangement of its atomic structure. If an apple was structured
differently, it would cease to be an apple. Secondary qualities are the sensory information we can perceive from
its primary qualities. For example, an apple can be perceived in various colours, sizes, and textures but it is still
identified as an apple. Therefore, its primary qualities dictate what the object essentially is, while its secondary
qualities define its attributes. Complex ideas combine simple ones, and divide into substances, modes, and relations.
According to Locke, our knowledge of things is a perception of ideas that are in accordance or discordance with each
other, which is very different from the quest for certainty of Descartes. A generation later, the Irish Anglican
bishop, George Berkeley (1685–1753), determined that Locke's view immediately opened a door that would lead to eventual
atheism. In response to Locke, he put forth in his Treatise Concerning the Principles of Human Knowledge (1710) an
important challenge to empiricism in which things only exist either as a result of their being perceived, or by virtue
of the fact that they are an entity doing the perceiving. (For Berkeley, God fills in for humans by doing the perceiving
whenever humans are not around to do it.) In his text Alciphron, Berkeley maintained that any order humans may see
in nature is the language or handwriting of God. Berkeley's approach to empiricism would later come to be called
subjective idealism. The Scottish philosopher David Hume (1711–1776) responded to Berkeley's criticisms of Locke,
as well as other differences between early modern philosophers, and moved empiricism to a new level of skepticism.
Hume argued in keeping with the empiricist view that all knowledge derives from sense experience, but he accepted
that this has implications not normally acceptable to philosophers. He wrote for example, "Locke divides all arguments
into demonstrative and probable. On this view, we must say that it is only probable that all men must die or that
the sun will rise to-morrow, because neither of these can be demonstrated. But to conform our language more to common
use, we ought to divide arguments into demonstrations, proofs, and probabilities—by ‘proofs’ meaning arguments from
experience that leave no room for doubt or opposition." And, Hume divided all of human knowledge into two categories:
relations of ideas and matters of fact (see also Kant's analytic-synthetic distinction). Mathematical and logical
propositions (e.g. "that the square of the hypotenuse is equal to the sum of the squares of the two sides") are examples
of the first, while propositions involving some contingent observation of the world (e.g. "the sun rises in the East")
are examples of the second. All of people's "ideas", in turn, are derived from their "impressions". For Hume, an
"impression" corresponds roughly with what we call a sensation. To remember or to imagine such impressions is to
have an "idea". Ideas are therefore the faint copies of sensations. Hume maintained that all knowledge, even the
most basic beliefs about the natural world, cannot be conclusively established by reason. Rather, he maintained,
our beliefs are more a result of accumulated habits, developed in response to accumulated sense experiences. Among
his many arguments Hume also added another important slant to the debate about scientific method — that of the problem
of induction. Hume argued that it requires inductive reasoning to arrive at the premises for the principle of inductive
reasoning, and therefore the justification for inductive reasoning is a circular argument. Among Hume's conclusions
regarding the problem of induction is that there is no certainty that the future will resemble the past. Thus, as
a simple instance posed by Hume, we cannot know with certainty by inductive reasoning that the sun will continue
to rise in the East, but instead come to expect it to do so because it has repeatedly done so in the past. Most of
Hume's followers have disagreed with his conclusion that belief in an external world is rationally unjustifiable,
contending that Hume's own principles implicitly contained the rational justification for such a belief, that is,
beyond being content to let the issue rest on human instinct, custom and habit. According to an extreme empiricist
theory known as phenomenalism, anticipated by the arguments of both Hume and George Berkeley, a physical object is
a kind of construction out of our experiences. Phenomenalism is the view that physical objects, properties, events
(whatever is physical) are reducible to mental objects, properties, events. Ultimately, only mental objects, properties,
events, exist — hence the closely related term subjective idealism. By the phenomenalistic line of thinking, to have
a visual experience of a real physical thing is to have an experience of a certain kind of group of experiences.
This type of set of experiences possesses a constancy and coherence that is lacking in the set of experiences of
which hallucinations, for example, are a part. As John Stuart Mill put it in the mid-19th century, matter is the
"permanent possibility of sensation". Mill's empiricism went a significant step beyond Hume in still another respect:
in maintaining that induction is necessary for all meaningful knowledge including mathematics. As summarized by D.W.
Hamlin: Mill's empiricism thus held that knowledge of any kind is not from direct experience but an inductive inference
from direct experience. The problems other philosophers have had with Mill's position center around the following
issues: Firstly, Mill's formulation encounters difficulty when it describes what direct experience is by differentiating
only between actual and possible sensations. This misses some key discussion concerning conditions under which such
"groups of permanent possibilities of sensation" might exist in the first place. Berkeley put God in that gap; the
phenomenalists, including Mill, essentially left the question unanswered. In the end, lacking an acknowledgement
of an aspect of "reality" that goes beyond mere "possibilities of sensation", such a position leads to a version
of subjective idealism. Questions of how floor beams continue to support a floor while unobserved, how trees continue
to grow while unobserved and untouched by human hands, etc., remain unanswered, and perhaps unanswerable in these
terms. Secondly, Mill's formulation leaves open the unsettling possibility that the "gap-filling entities are purely
possibilities and not actualities at all". Thirdly, Mill's position, by calling mathematics merely another species
of inductive inference, misapprehends mathematics. It fails to fully consider the structure and method of mathematical
science, the products of which are arrived at through an internally consistent deductive set of procedures which
do not, either today or at the time Mill wrote, fall under the agreed meaning of induction. The phenomenalist phase
of post-Humean empiricism ended by the 1940s, for by that time it had become obvious that statements about physical
things could not be translated into statements about actual and possible sense data. If a physical object statement
is to be translatable into a sense-data statement, the former must be at least deducible from the latter. But it
came to be realized that there is no finite set of statements about actual and possible sense-data from which we
can deduce even a single physical-object statement. Remember that the translating or paraphrasing statement must
be couched in terms of normal observers in normal conditions of observation. There is, however, no finite set of
statements that are couched in purely sensory terms and can express the satisfaction of the condition of the presence
of a normal observer. According to phenomenalism, to say that a normal observer is present is to make the hypothetical
statement that were a doctor to inspect the observer, the observer would appear to the doctor to be normal. But,
of course, the doctor himself must be a normal observer. If we are to specify this doctor's normality in sensory
terms, we must make reference to a second doctor who, when inspecting the sense organs of the first doctor, would
himself have to have the sense data a normal observer has when inspecting the sense organs of a subject who is a
normal observer. And if we are to specify in sensory terms that the second doctor is a normal observer, we must refer
to a third doctor, and so on (also see the third man). Logical empiricism (also logical positivism or neopositivism)
was an early 20th-century attempt to synthesize the essential ideas of British empiricism (e.g. a strong emphasis
on sensory experience as the basis for knowledge) with certain insights from mathematical logic that had been developed
by Gottlob Frege and Ludwig Wittgenstein. Some of the key figures in this movement were Otto Neurath, Moritz Schlick
and the rest of the Vienna Circle, along with A.J. Ayer, Rudolf Carnap and Hans Reichenbach. The neopositivists subscribed
to a notion of philosophy as the conceptual clarification of the methods, insights and discoveries of the sciences.
They saw in the logical symbolism elaborated by Frege (1848–1925) and Bertrand Russell (1872–1970) a powerful instrument
that could rationally reconstruct all scientific discourse into an ideal, logically perfect, language that would
be free of the ambiguities and deformations of natural language, which in their view gave rise to metaphysical
pseudoproblems and other conceptual confusions. By combining Frege's thesis that all mathematical truths are logical
with the early Wittgenstein's idea that all logical truths are mere linguistic tautologies, they arrived at a twofold
classification of all propositions: the analytic (a priori) and the synthetic (a posteriori). On this basis, they
formulated a strong principle of demarcation between sentences that have sense and those that do not: the so-called
verification principle. Any sentence that is not purely logical or is unverifiable is devoid of meaning. As a result,
most metaphysical, ethical, aesthetic and other traditional philosophical problems came to be considered pseudoproblems.
In the extreme empiricism of the neopositivists—at least before the 1930s—any genuinely synthetic assertion must
be reducible to an ultimate assertion (or set of ultimate assertions) that expresses direct observations or perceptions.
In later years, Carnap and Neurath abandoned this sort of phenomenalism in favor of a rational reconstruction of
knowledge into the language of an objective spatio-temporal physics. That is, instead of translating sentences about
physical objects into sense-data, such sentences were to be translated into so-called protocol sentences, for example,
"X at location Y and at time T observes such and such." The central theses of logical positivism (verificationism,
the analytic-synthetic distinction, reductionism, etc.) came under sharp attack after World War II by thinkers such
as Nelson Goodman, W.V. Quine, Hilary Putnam, Karl Popper, and Richard Rorty. By the late 1960s, it had become evident
to most philosophers that the movement had pretty much run its course, though its influence is still significant
among contemporary analytic philosophers such as Michael Dummett and other anti-realists. In the late 19th and early
20th century several forms of pragmatic philosophy arose. The ideas of pragmatism, in its various forms, developed
mainly from discussions between Charles Sanders Peirce and William James when both men were at Harvard in the 1870s.
James popularized the term "pragmatism", giving Peirce full credit for its patrimony, but Peirce later demurred from
the tangents that the movement was taking, and redubbed what he regarded as the original idea with the name of "pragmaticism".
Along with its pragmatic theory of truth, this perspective integrates the basic insights of empirical (experience-based)
and rational (concept-based) thinking. Charles Peirce (1839–1914) was highly influential in laying the groundwork
for today's empirical scientific method. Although Peirce severely criticized many elements of Descartes'
peculiar brand of rationalism, he did not reject rationalism outright. Indeed, he concurred with the main ideas of
rationalism, most importantly the idea that rational concepts can be meaningful and the idea that rational concepts
necessarily go beyond the data given by empirical observation. In later years he even emphasized the concept-driven
side of the then ongoing debate between strict empiricism and strict rationalism, in part to counterbalance the excesses
to which some of his cohorts had taken pragmatism under the "data-driven" strict-empiricist view. Among Peirce's
major contributions was to place inductive reasoning and deductive reasoning in a complementary rather than competitive
mode, the latter of which had been the primary trend among the educated since David Hume wrote a century before.
To this, Peirce added the concept of abductive reasoning. The combined three forms of reasoning serve as a primary
conceptual foundation for the empirically based scientific method today. Peirce's approach "presupposes that (1)
the objects of knowledge are real things, (2) the characters (properties) of real things do not depend on our perceptions
of them, and (3) everyone who has sufficient experience of real things will agree on the truth about them. According
to Peirce's doctrine of fallibilism, the conclusions of science are always tentative. The rationality of the scientific
method does not depend on the certainty of its conclusions, but on its self-corrective character: by continued application
of the method science can detect and correct its own mistakes, and thus eventually lead to the discovery of truth".
In his Harvard "Lectures on Pragmatism" (1903), Peirce enumerated what he called the "three cotary propositions of
pragmatism" (from Latin cos, cotis, "whetstone"), saying that they "put the edge on the maxim of pragmatism". First among these
he listed the peripatetic-thomist observation mentioned above, but he further observed that this link between sensory
perception and intellectual conception is a two-way street. That is, it can be taken to say that whatever we find
in the intellect is also incipiently in the senses. Hence, if theories are theory-laden then so are the senses, and
perception itself can be seen as a species of abductive inference, its difference being that it is beyond control
and hence beyond critique – in a word, incorrigible. This in no way conflicts with the fallibility and revisability
of scientific concepts, since it is only the immediate percept in its unique individuality or "thisness" – what the
Scholastics called its haecceity – that stands beyond control and correction. Scientific concepts, on the other hand,
are general in nature, and transient sensations do in another sense find correction within them. This notion of perception
as abduction has received periodic revivals in artificial intelligence and cognitive science research, most recently
for instance with the work of Irvin Rock on indirect perception. Around the beginning of the 20th century, William
James (1842–1910) coined the term "radical empiricism" to describe an offshoot of his form of pragmatism, which he
argued could be dealt with separately from his pragmatism – though in fact the two concepts are intertwined in James's
published lectures. James maintained that the empirically observed "directly apprehended universe needs ... no extraneous
trans-empirical connective support", by which he meant to rule out the perception that there can be any value added
by seeking supernatural explanations for natural phenomena. James's "radical empiricism" is thus not radical in the
context of the term "empiricism", but is instead fairly consistent with the modern use of the term "empirical". (His
method of argument in arriving at this view, however, still readily encounters debate within philosophy even today.)
John Dewey (1859–1952) modified James' pragmatism to form a theory known as instrumentalism. The role of sense experience
in Dewey's theory is crucial, in that he saw experience as unified totality of things through which everything else
is interrelated. Dewey's basic thought, in accordance with empiricism, was that reality is determined by past experience.
Therefore, humans adapt their past experiences of things to perform experiments upon and test the pragmatic values
of such experience. The value of such experience is measured experientially and scientifically, and the results of
such tests generate ideas that serve as instruments for future experimentation, in physical sciences as in ethics.
Thus, ideas in Dewey's system retain their empiricist flavour in that they are only known a posteriori.
In philosophy, idealism is the group of philosophies which assert that reality, or reality as we can know it, is fundamentally
mental, mentally constructed, or otherwise immaterial. Epistemologically, idealism manifests as a skepticism about
the possibility of knowing any mind-independent thing. In a sociological sense, idealism emphasizes how human ideas—especially
beliefs and values—shape society. As an ontological doctrine, idealism goes further, asserting that all entities
are composed of mind or spirit. Idealism thus rejects physicalist and dualist theories that fail to ascribe priority
to the mind. The earliest extant arguments that the world of experience is grounded in the mental derive from India
and Greece. The Hindu idealists in India and the Greek Neoplatonists gave panentheistic arguments for an all-pervading
consciousness as the ground or true nature of reality. In contrast, the Yogācāra school, which arose within Mahayana
Buddhism in India in the 4th century CE, based its "mind-only" idealism to a greater extent on phenomenological analyses
of personal experience. This turn toward the subjective anticipated empiricists such as George Berkeley, who revived
idealism in 18th-century Europe by employing skeptical arguments against materialism. Beginning with Immanuel Kant,
German idealists such as G. W. F. Hegel, Johann Gottlieb Fichte, Friedrich Wilhelm Joseph Schelling, and Arthur Schopenhauer
dominated 19th-century philosophy. This tradition, which emphasized the mental or "ideal" character of all phenomena,
gave birth to idealistic and subjectivist schools ranging from British idealism to phenomenalism to existentialism.
The historical influence of this branch of idealism remains central even to the schools that rejected its metaphysical
assumptions, such as Marxism, pragmatism and positivism. Idealism is a term with several related meanings. It comes
via idea from the Greek idein (ἰδεῖν), meaning "to see". The term entered the English language by 1743. In ordinary
use, as when speaking of Woodrow Wilson's political idealism, it generally suggests the priority of ideals, principles,
values, and goals over concrete realities. Idealists are understood to represent the world as it might or should
be, unlike pragmatists, who focus on the world as it presently is. In the arts, similarly, idealism affirms imagination
and attempts to realize a mental conception of beauty, a standard of perfection, juxtaposed to aesthetic naturalism
and realism. Any philosophy that assigns crucial importance to the ideal or spiritual realm in its account of human
existence may be termed "idealist". Metaphysical idealism is an ontological doctrine that holds that reality itself
is incorporeal or experiential at its core. Beyond this, idealists disagree on which aspects of the mental are more
basic. Platonic idealism affirms that abstractions are more basic to reality than the things we perceive, while subjective
idealists and phenomenalists tend to privilege sensory experience over abstract reasoning. Epistemological idealism
is the view that reality can only be known through ideas, that only psychological experience can be apprehended by
the mind. Subjective idealists like George Berkeley are anti-realists in terms of a mind-independent world, whereas
transcendental idealists like Immanuel Kant are strong skeptics of such a world, affirming epistemological and not
metaphysical idealism. Thus Kant defines idealism as "the assertion that we can never be certain whether all of our
putative outer experience is not mere imagining". He claimed that, according to idealism, "the reality of external
objects does not admit of strict proof. On the contrary, however, the reality of the object of our internal sense
(of myself and my state) is clear immediately through consciousness." However, not all idealists restrict the real or
the knowable to our immediate subjective experience. Objective idealists make claims about a transempirical world,
but simply deny that this world is essentially divorced from or ontologically prior to the mental. Thus Plato and
Gottfried Leibniz affirm an objective and knowable reality transcending our subjective awareness—a rejection of epistemological
idealism—but propose that this reality is grounded in ideal entities, a form of metaphysical idealism. Nor do all
metaphysical idealists agree on the nature of the ideal; for Plato, the fundamental entities were non-mental abstract
forms, while for Leibniz they were proto-mental and concrete monads. Christian theologians have held idealist views,
often based on Neoplatonism, despite the influence of Aristotelian scholasticism from the 12th century onward. Later
western theistic idealism such as that of Hermann Lotze offers a theory of the "world ground" in which all things
find their unity: it has been widely accepted by Protestant theologians. Several modern religious movements, for
example the organizations within the New Thought Movement and the Unity Church, may be said to have a particularly
idealist orientation. The theology of Christian Science includes a form of idealism: it teaches that all that truly
exists is God and God's ideas; that the world as it appears to the senses is a distortion of the underlying spiritual
reality, a distortion that may be corrected (both conceptually and in terms of human experience) through a reorientation
(spiritualization) of thought. Plato's theory of forms or "ideas" describes ideal forms (for example the platonic
solids in geometry or abstracts like Goodness and Justice) as universals existing independently of any particular
instance. Arne Grøn calls this doctrine "the classic example of a metaphysical idealism as a transcendent idealism",
while Simone Klein calls Plato "the earliest representative of metaphysical objective idealism". Nevertheless, Plato
holds that matter is real, though transitory and imperfect, and is perceived by our body and its senses and given
existence by the eternal ideas that are perceived directly by our rational soul. Plato was therefore a metaphysical
and epistemological dualist, an outlook that modern idealism has striven to avoid; his thought thus cannot
be counted as idealist in the modern sense, although quantum physics' assertion that man's consciousness is an immutable
and primary requisite not merely for perceiving but for shaping matter, and thus his reality, would give more credence
to Plato's dualist position. With the neoplatonist Plotinus, wrote Nathaniel Alfred Boll, "there
even appears, probably for the first time in Western philosophy, idealism that had long been current in the East
even at that time, for it taught... that the soul has made the world by stepping from eternity into time...". Similarly,
in regard to passages from the Enneads, "The only space or place of the world is the soul" and "Time must not be
assumed to exist outside the soul", Ludwig Noiré wrote: "For the first time in Western philosophy we find idealism
proper in Plotinus." However, Plotinus does not address whether we know external objects, unlike Schopenhauer and
other modern philosophers. Subjective Idealism (immaterialism or phenomenalism) describes a relationship between
experience and the world in which objects are no more than collections or "bundles" of sense data in the perceiver.
Proponents include Berkeley, Bishop of Cloyne, an Anglo-Irish philosopher who advanced a theory he called immaterialism,
later referred to as "subjective idealism", contending that individuals can only know sensations and ideas of objects
directly, not abstractions such as "matter", and that ideas also depend upon being perceived for their very existence
(esse est percipi, "to be is to be perceived"). Arthur Collier published similar assertions, though there seems to
have been no influence between the two contemporary writers. For Collier, the only knowable reality is the represented image of
an external object. Matter, as a cause of that image, is unthinkable and therefore nothing to us. An external world
as absolute matter unrelated to an observer does not exist as far as we are concerned. The universe cannot exist
as it appears if there is no perceiving mind. Collier was influenced by An Essay Towards the Theory of the Ideal
or Intelligible World by "Cambridge Platonist" John Norris (1701). Alan Musgrave has criticized conceptual idealism
for its proliferation of hyphenated entities such as "thing-in-itself" (Immanuel Kant), "things-as-interacted-by-us"
(Arthur Fine), "table-of-commonsense" and "table-of-physics" (Sir Arthur Eddington), which are "warning signs" for
conceptual idealism according to Musgrave because they allegedly do not exist but only highlight the numerous ways
in which people come to know the world. This argument does not take into account the issues pertaining to hermeneutics,
especially against the backdrop of analytic philosophy. Musgrave
criticized Richard Rorty and Postmodernist philosophy in general for confusion of use and mention. A. A. Luce and
John Foster are other subjectivists. Luce, in Sense without Matter (1954), attempts to bring Berkeley up to date
by modernizing his vocabulary and putting the issues he faced in modern terms, and treats the Biblical account of
matter and the psychology of perception and nature. Foster's The Case for Idealism argues that the physical world
is the logical creation of natural, non-logical constraints on human sense-experience. Foster's latest defense of
his views is in his book A World for Us: The Case for Phenomenalistic Idealism. The second edition (1787) of Kant's
Critique of Pure Reason contained a Refutation of Idealism to distinguish his transcendental idealism from Descartes's
Sceptical Idealism and Berkeley's
anti-realist strain of Subjective Idealism. The section Paralogisms of Pure Reason is an implicit critique of Descartes'
idealism. Kant says that it is not possible to infer the 'I' as an object (Descartes' cogito ergo sum) purely from
"the spontaneity of thought". Kant focused on ideas drawn from British philosophers such as Locke, Berkeley and Hume
but distinguished his transcendental or critical idealism from previous varieties. In the first volume of his Parerga
and Paralipomena, Schopenhauer wrote his "Sketch of a History of the Doctrine of the Ideal and the Real". He defined
the ideal as being mental pictures that constitute subjective knowledge. The ideal, for him, is what can be attributed
to our own minds. The images in our head are what comprise the ideal. Schopenhauer emphasized that we are restricted
to our own consciousness. The world that appears is only a representation or mental picture of objects. We directly
and immediately know only representations. All objects that are external to the mind are known indirectly through
the mediation of our mind. He offered a history of the concept of the "ideal" as "ideational" or "existing in the
mind as an image". Friedrich Nietzsche argued that Kant commits an agnostic tautology and does not offer a satisfactory
answer as to the source of a philosophical right to such-or-other metaphysical claims; he ridicules Kant's pride in
tackling "the most difficult thing that could ever be undertaken on behalf of metaphysics." The famous "thing-in-itself"
was called a product of philosophical habit, which seeks to introduce a grammatical subject: because wherever there
is cognition, there must be a thing that is cognized and allegedly it must be added to ontology as a being (whereas,
to Nietzsche, only the world as ever changing appearances can be assumed). Yet he attacks the idealism of Schopenhauer
and Descartes with an argument similar to Kant's critique of the latter (see above). Absolute idealism is G. W. F.
Hegel's account of how existence is comprehensible as an all-inclusive whole. Hegel called his philosophy "absolute"
idealism in contrast to the "subjective idealism" of Berkeley and the "transcendental idealism" of Kant and Fichte,
which were not based on a critique of the finite and a dialectical philosophy of history as Hegel's idealism was.
The exercise of reason and intellect enables the philosopher to know ultimate historical reality, the phenomenological
constitution of self-determination, the dialectical development of self-awareness and personality in the realm of
History. In his Science of Logic (1812–1814) Hegel argues that finite qualities are not fully "real" because they
depend on other finite qualities to determine them. Qualitative infinity, on the other hand, would be more self-determining
and hence more fully real. Similarly finite natural things are less "real"—because they are less self-determining—than
spiritual things like morally responsible people, ethical communities and God. So any doctrine, such as materialism,
that asserts that finite qualities or natural objects are fully real is mistaken. Hegel certainly intends to preserve
what he takes to be true of German idealism, in particular Kant's insistence that ethical reason can and does go
beyond finite inclinations. For Hegel there must be some identity of thought and being for the "subject" (any human
observer) to be able to know any observed "object" (any external entity, possibly even another human) at all. Under
Hegel's concept of "subject-object identity," subject and object both have Spirit (Hegel's ersatz, redefined, nonsupernatural
"God") as their conceptual (not metaphysical) inner reality—and in that sense are identical. But until Spirit's "self-realization"
occurs and Spirit graduates from Spirit to Absolute Spirit status, subject (a human mind) mistakenly thinks every
"object" it observes is something "alien," meaning something separate or apart from "subject." In Hegel's words,
"The object is revealed to it [to "subject"] by [as] something alien, and it does not recognize itself." Self-realization
occurs when Hegel (part of Spirit's nonsupernatural Mind, which is the collective mind of all humans) arrives on
the scene and realizes that every "object" is himself, because both subject and object are essentially Spirit. When
self-realization occurs and Spirit becomes Absolute Spirit, the "finite" (man, human) becomes the "infinite" ("God,"
divine), replacing the imaginary or "picture-thinking" supernatural God of theism: man becomes God. Robert C. Tucker puts it
this way: "Hegelianism . . . is a religion of self-worship whose fundamental theme is given in Hegel's image of the
man who aspires to be God himself, who demands 'something more, namely infinity.'" The picture Hegel presents is
"a picture of a self-glorifying humanity striving compulsively, and at the end successfully, to rise to divinity."
Kierkegaard criticised Hegel's idealist philosophy in several of his works, particularly his claim to a comprehensive
system that could explain the whole of reality. Where Hegel argues that an ultimate understanding of the logical
structure of the world is an understanding of the logical structure of God's mind, Kierkegaard asserts that for
God reality can be a system but it cannot be so for any human individual because both reality and humans are incomplete
and all philosophical systems imply completeness. A logical system is possible, but an existential system is not.
Hegel had declared that "what is rational is actual; and what is actual is rational". His absolute idealism blurs the distinction between
existence and thought: our mortal nature places limits on our understanding of reality. A major concern of Hegel's
Phenomenology of Spirit (1807) and of the philosophy of Spirit that he lays out in his Encyclopedia of the Philosophical
Sciences (1817–1830) is the interrelation between individual humans, which he conceives in terms of "mutual recognition."
However, what Climacus (Kierkegaard's pseudonym Johannes Climacus) means by the aforementioned statement is that Hegel, in the Philosophy of Right, believed
the best solution was to surrender one's individuality to the customs of the State, identifying right and wrong in
view of the prevailing bourgeois morality. Individual human will ought, at the State's highest level of development,
to properly coincide with the will of the State. Climacus rejects Hegel's suppression of individuality by pointing
out it is impossible to create a valid set of rules or system in any society which can adequately describe existence
for any one individual. Submitting one's will to the State denies personal freedom, choice, and responsibility. In
addition, Hegel does believe we can know the structure of God's mind, or ultimate reality. Hegel agrees with Kierkegaard
that both reality and humans are incomplete, inasmuch as we are in time, and reality develops through time. But the
relation between time and eternity is outside time and this is the "logical structure" that Hegel thinks we can know.
Kierkegaard disputes this assertion, because it eliminates the clear distinction between ontology and epistemology.
Existence and thought are not identical and one cannot possibly think existence. Thought is always a form of abstraction,
and thus not only is pure existence impossible to think, but all forms in existence are unthinkable; thought depends
on language, which merely abstracts from experience, thus separating us from lived experience and the living essence
of all beings. In addition, because we are finite beings, we cannot possibly know or understand anything that is
universal or infinite such as God, so we cannot know God exists, since that which transcends time simultaneously
transcends human understanding. F. H. Bradley was the apparent target of G. E. Moore's radical rejection of idealism. Moore
claimed that Bradley did not understand the statement that something is real. We know for certain, through common
sense and prephilosophical beliefs, that some things are real, whether they are objects of thought or not, according
to Moore. The 1903 article The Refutation of Idealism is one of the first demonstrations of Moore's commitment to
analysis. He examines each of the three terms in the Berkeleian aphorism esse est percipi, "to be is to be perceived",
finding that it must mean that the object and the subject are necessarily connected so that "yellow" and "the sensation
of yellow" are identical: "to be yellow" is "to be experienced as yellow". But it also seems there is a difference
between "yellow" and "the sensation of yellow" and "that esse is held to be percipi, solely because what is experienced
is held to be identical with the experience of it". Though far from a complete refutation, this was the first strong
statement by analytic philosophy against its idealist predecessors, or at any rate against the type of idealism represented
by Berkeley. This argument did not show that the GEM (in post-Stove vernacular) is logically invalid.
Pluralistic idealism such as that of Gottfried Leibniz takes the view that there are many individual minds that together
underlie the existence of the observed world and make possible the existence of the physical universe. Unlike absolute
idealism, pluralistic idealism does not assume the existence of a single ultimate mental reality or "Absolute". Leibniz'
form of idealism, known as panpsychism, views "monads" as the true atoms of the universe and as entities having perception.
The monads are "substantial forms of being": elemental, individual, subject to their own laws, non-interacting, each
reflecting the entire universe. Monads are centers of force, which is substance while space, matter and motion are
phenomenal, and their form and existence depend on the simple and immaterial monads. A pre-established
harmony, instituted by God, the central monad, holds between the world in the minds of the monads and the external world
of objects. Leibniz's cosmology embraced traditional Christian Theism. The English psychologist and philosopher James
Ward inspired by Leibniz had also defended a form of pluralistic idealism. According to Ward the universe is composed
of "psychic monads" of different levels, interacting for mutual self-betterment. George Holmes Howison's personal idealism was
also called "California Personalism" by others to distinguish it from the "Boston Personalism" of Borden Parker Bowne.
Howison maintained that both impersonal, monistic idealism and materialism run contrary to the experience of moral
freedom. To deny freedom to pursue truth, beauty, and "benignant love" is to undermine every profound human venture,
including science, morality, and philosophy. Personalistic idealists Borden Parker Bowne and Edgar S. Brightman and
realistic personal theist Saint Thomas Aquinas address a core issue, namely that of dependence upon an infinite personal
God. J. M. E. McTaggart of Cambridge University, argued that minds alone exist and only relate to each other through
love. Space, time and material objects are unreal. In The Unreality of Time he argued that time is an illusion because
it is impossible to produce a coherent account of a sequence of events. The Nature of Existence (1927) contained
his arguments that space, time, and matter cannot possibly be real. In his Studies in Hegelian Cosmology (Cambridge,
1901, p. 196) he declared that metaphysics is not relevant to social and political action. McTaggart "thought that
Hegel was wrong in supposing that metaphysics could show that the state is more than a means to the good of the individuals
who compose it". For McTaggart "philosophy can give us very little, if any, guidance in action... Why should a Hegelian
citizen be surprised that his belief as to the organic nature of the Absolute does not help him in deciding how to
vote? Would a Hegelian engineer be reasonable in expecting that his belief that all matter is spirit should help
him in planning a bridge?" Thomas Davidson taught a philosophy called "apeirotheism", a "form of pluralistic idealism...coupled
with a stern ethical rigorism" which he defined as "a theory of Gods infinite in number." The theory was indebted
to Aristotle's pluralism and his concepts of Soul, the rational, living aspect of a living substance which cannot
exist apart from the body because it is not a substance but an essence, and nous, rational thought, reflection and
understanding. Although a perennial source of controversy, Aristotle arguably views the latter as both eternal and
immaterial in nature, as exemplified in his theology of unmoved movers. Identifying Aristotle's God with rational
thought, Davidson argued, contrary to Aristotle, that just as the soul cannot exist apart from the body, God cannot
exist apart from the world. Idealist notions took a strong hold among physicists of the early 20th century confronted
with the paradoxes of quantum physics and the theory of relativity. In The Grammar of Science, Preface to the 2nd
Edition, 1900, Karl Pearson wrote, "There are many signs that a sound idealism is surely replacing, as a basis for
natural philosophy, the crude materialism of the older physicists." This book influenced Einstein's regard for the
importance of the observer in scientific measurements. In § 5 of that book, Pearson asserted that
"...science is in reality a classification and analysis of the contents of the mind...." Also, "...the field of science
is much more consciousness than an external world." Arthur Eddington wrote: "The mind-stuff of the world is, of course, something more general
than our individual conscious minds.... The mind-stuff is not spread in space and time; these are part of the cyclic
scheme ultimately derived out of it.... It is necessary to keep reminding ourselves that all knowledge of our environment
from which the world of physics is constructed, has entered in the form of messages transmitted along the nerves
to the seat of consciousness.... Consciousness is not sharply defined, but fades into subconsciousness; and beyond
that we must postulate something indefinite but yet continuous with our mental nature.... It is difficult for the
matter-of-fact physicist to accept the view that the substratum of everything is of mental character. But no one
can deny that mind is the first and most direct thing in our experience, and all else is remote inference."
Czech (/ˈtʃɛk/; čeština Czech pronunciation: [ˈt͡ʃɛʃcɪna]), formerly known as Bohemian (/boʊˈhiːmiən, bə-/; lingua Bohemica
in Latin), is a West Slavic language strongly influenced by Latin and German. It is spoken by over 10 million
people and is the official language of the Czech Republic. Czech's closest relative is Slovak, with which it is
mutually intelligible. It is closely related to other West Slavic languages, such as Silesian and Polish. Although
most Czech vocabulary is based on shared roots with Slavic, Romance, and Germanic languages, many loanwords (most
associated with high culture) have been adopted in recent years. The languages have not undergone the deliberate
highlighting of minor linguistic differences in the name of nationalism as has occurred in the Bosnian, Serbian and
Croatian standards of Serbo-Croatian. However, most Slavic languages (including Czech) have been distanced in this
way from Russian influences because of widespread public resentment against the former Soviet Union (which occupied
Czechoslovakia in 1968). Czech and Slovak form a dialect continuum, with great similarity between neighboring Czech
and Slovak dialects. (See "Dialects" below.) One study showed that Czech and Slovak lexicons differed by 80 percent,
but this high percentage was found to stem primarily from differing orthographies and slight inconsistencies in morphological
formation; Slovak morphology is more regular (when changing from the nominative to the locative case, Praha becomes
Praze in Czech and Prahe in Slovak). The two lexicons are generally considered similar, with most differences in
colloquial vocabulary and some scientific terminology. Slovak has slightly more borrowed words than Czech. The similarities
between Czech and Slovak led to the languages being considered a single language by a group of 19th-century scholars
who called themselves "Czechoslavs" (Čechoslováci), believing that the peoples were connected in a way which excluded
German Bohemians and (to a lesser extent) Hungarians and other Slavs. During the First Czechoslovak Republic (1918–1938),
although "Czechoslovak" was designated as the republic's official language, both Czech and Slovak written standards
were used. Standard written Slovak was partially modeled on literary Czech, and Czech was preferred for some official
functions in the Slovak half of the republic. Czech influence on Slovak was protested by Slovak scholars, and when
Slovakia broke off from Czechoslovakia in 1938 as the Slovak State (which then aligned with Nazi Germany in World
War II) literary Slovak was deliberately distanced from Czech. When the Axis powers lost the war and Czechoslovakia
reformed, Slovak developed somewhat on its own (with Czech influence); during the Prague Spring of 1968, Slovak gained
independence from (and equality with) Czech. Since then, "Czechoslovak" refers to improvised pidgins of the languages
which have arisen from the decrease in mutual intelligibility. Around the sixth century AD, a tribe of Slavs arrived
in a portion of Central Europe. According to legend they were led by a hero named Čech, from whom the word "Czech"
derives. The ninth century brought the state of Great Moravia, whose first ruler (Rastislav of Moravia) invited Byzantine
ruler Michael III to send missionaries in an attempt to reduce the influence of East Francia on religious and political
life in his country. These missionaries, Constantine and Methodius, helped to convert the Czechs from traditional
Slavic paganism to Christianity and established a church system. They also brought the Glagolitic alphabet to the
West Slavs, whose language was previously unwritten. This language, later known as Proto-Czech, was beginning to
separate from its fellow West Slavic languages Proto-Slovak, Proto-Polish and Proto-Sorbian. Among other features,
Proto-Czech was marked by its ephemeral use of the voiced velar fricative consonant (/ɣ/) and consistent stress on
the first syllable. The Czechs' language separated from other Slavic tongues into what would later be called Old
Czech by the thirteenth century, a classification extending through the sixteenth century. Its use of cases differed
from the modern language; although Old Czech did not yet have a vocative case or an animacy distinction, declension
for its six cases and three genders rapidly became complicated (partially to differentiate homophones) and its declension
patterns resembled those of Lithuanian (its Balto-Slavic cousin). While Old Czech had a basic alphabet from which
a general set of orthographical correspondences was drawn, it did not have a standard orthography. It also contained
a number of sound clusters which no longer exist: it allowed ě (/jɛ/) after soft consonants, which has since shifted
to e (/ɛ/), and allowed complex consonant clusters to be pronounced all at once rather than syllabically. A phonological
phenomenon, Havlik's law (which began in Proto-Slavic and took various forms in other Slavic languages), appeared
in Old Czech; counting backwards from the end of a clause, every odd-numbered yer was vocalized as a vowel, while
the other yers disappeared. Bohemia (as Czech civilization was known by then) increased in power over the centuries,
as its language did in regional importance. This growth was expedited during the fourteenth century by Holy Roman
Emperor Charles IV, who founded Charles University in Prague in 1348. Here, early Czech literature (a biblical translation,
hymns and hagiography) flourished. Old Czech texts, including poetry and cookbooks, were produced outside the university
as well. Later in the century Jan Hus contributed significantly to the standardization of Czech orthography, advocated
for widespread literacy among Czech commoners (particularly in religion) and made early efforts to model written
Czech after the spoken language. Czech continued to evolve and gain in regional importance for hundreds of years,
and has been a literary language in the Slovak lands since the early fifteenth century. A biblical translation, the
Kralice Bible, was published during the late sixteenth century (around the time of the King James and Luther versions)
which was more linguistically conservative than either. The publication of the Kralice Bible spawned widespread nationalism,
and in 1615 the government of Bohemia ruled that only Czech-speaking residents would be allowed to become full citizens
or inherit goods or land. This, and the conversion of the Czech upper classes from the Habsburg Empire's Catholicism
to Protestantism, angered the Habsburgs and helped trigger the Thirty Years' War (where the Czechs were defeated
at the Battle of White Mountain). The Czechs became serfs; Bohemia's printing industry (and its linguistic and political
rights) were dismembered, removing official regulation and support from its language. German quickly became the dominant
language in Bohemia. The consensus among linguists is that modern, standard Czech originated during the eighteenth
century. By then the language had developed a literary tradition, and since then it has changed little; journals
from that period have no substantial differences from modern standard Czech, and contemporary Czechs can understand
them with little difficulty. Changes include the morphological shift of í to ej and é to í (although é survives for
some uses) and the merging of í and the former ejí. Sometime before the eighteenth century, the Czech language abandoned
a distinction between phonemic /l/ and /ʎ/ which survives in Slovak. The Czech people gained widespread national
pride during the mid-eighteenth century, inspired by the Age of Enlightenment a half-century earlier. Czech historians
began to emphasize their people's accomplishments from the fifteenth through the seventeenth centuries, rebelling
against the Counter-Reformation (which had denigrated Czech and other non-Latin languages). Czech philologists studied
sixteenth-century texts, advocating the return of the language to high culture. This period is known as the Czech
National Revival (or Renascence). During the revival, in 1809 linguist and historian Josef Dobrovský released a German-language
grammar of Old Czech entitled Ausführliches Lehrgebäude der böhmischen Sprache (Comprehensive Doctrine of the Bohemian
Language). Dobrovský had intended his book to be descriptive, and did not think Czech had a realistic chance of returning
as a major language. However, Josef Jungmann and other revivalists used Dobrovský's book to advocate for a Czech
linguistic revival. Changes during this time included spelling reform (notably, í in place of the former j and j
in place of g), the use of t (rather than ti) to end infinitive verbs and the non-capitalization of nouns (which
had been a late borrowing from German). These changes differentiated Czech from Slovak. Modern scholars disagree
about whether the conservative revivalists were motivated by nationalism or considered contemporary spoken Czech
unsuitable for formal, widespread use. Czech, the official language of the Czech Republic (a member of the European
Union since 2004), is one of the EU's official languages and the 2012 Eurobarometer survey found that Czech was the
foreign language most often used in Slovakia. Economist Jonathan van Parys collected data on language knowledge in
Europe for the 2012 European Day of Languages. The five countries with the greatest use of Czech were the Czech Republic
(98.77 percent), Slovakia (24.86 percent), Portugal (1.93 percent), Poland (0.98 percent) and Germany (0.47 percent).
Immigration of Czechs from Europe to the United States occurred primarily from 1848 to 1914. Czech is a Less Commonly
Taught Language in U.S. schools, and is taught at Czech heritage centers. Large communities of Czech Americans live
in the states of Texas, Nebraska and Wisconsin. In the 2000 United States Census, Czech was reported as the most-common
language spoken at home (besides English) in Valley, Butler and Saunders Counties, Nebraska and Republic County,
Kansas. With the exception of Spanish (the non-English language most commonly spoken at home nationwide), Czech was
the most-common home language in over a dozen additional counties in Nebraska, Kansas, Texas, North Dakota and Minnesota.
As of 2009, 70,500 Americans spoke Czech as their first language (49th place nationwide, behind Turkish and ahead
of Swedish). In addition to a spoken standard and a closely related written standard, Czech has several regional
dialects primarily used in rural areas by speakers less proficient in other dialects or standard Czech. During the
second half of the twentieth century, Czech dialect use began to weaken. By the early 1990s dialect use was stigmatized,
associated with the shrinking lower class and used in literature or other media for comedic effect. Increased travel
and media availability to dialect-speaking populations has encouraged them to shift to standard Czech (or add it to their own dialect). Although Czech has received considerable scholarly interest for a Slavic language, this interest
has focused primarily on modern standard Czech and ancient texts rather than dialects. Standard Czech is still the
norm for politicians, businesspeople and other Czechs in formal situations, but Common Czech is gaining ground in
journalism and the mass media. The Czech dialects spoken in Moravia and Silesia are known as Moravian (moravština).
In the Austro-Hungarian Empire, "Bohemian-Moravian-Slovak" was a language citizens could register as speaking (along with
German, Polish and several others). Of the Czech dialects, only Moravian is distinguished in nationwide surveys by
the Czech Statistical Office. As of 2011, 62,908 Czech citizens spoke Moravian as their first language and 45,561
were diglossal (speaking Moravian and standard Czech as first languages). Czech contains ten basic vowel phonemes,
and three more found only in loanwords. They are /a/, /ɛ/, /ɪ/, /o/, and /u/, their long counterparts /aː/, /ɛː/,
/iː/, /oː/ and /uː/, and three diphthongs, /ou̯/, /au̯/ and /ɛu̯/. The latter two diphthongs and the long /oː/ are
exclusive to loanwords. Vowels are never reduced to schwa sounds when unstressed. Each word usually has primary stress
on its first syllable, except for enclitics (minor, monosyllabic, unstressed words). In all words of more than
two syllables, every odd-numbered syllable receives secondary stress. Stress is unrelated to vowel length, and the
possibility of stressed short vowels and unstressed long vowels can be confusing to students whose native language
combines the features (such as English). Although older German loanwords were colloquial, recent borrowings from
other languages are associated with high culture. During the nineteenth century, words with Greek and Latin roots
were rejected in favor of those based on older Czech words and common Slavic roots; "music" is muzyka in Polish and
музыка (muzyka) in Russian, but in Czech it is hudba. Some Czech words have been borrowed as loanwords into English
and other languages—for example, robot (from robota, "labor") and polka (from polka, "Polish woman", or from půlka, "half"). Because Czech uses grammatical case to convey word function in a sentence (instead of relying on word order,
as English does), its word order is flexible. As a pro-drop language, in Czech an intransitive sentence can consist
of only a verb; information about its subject is encoded in the verb. Enclitics (primarily auxiliary verbs and pronouns)
must appear in the second syntactic slot of a sentence, after the first stressed unit. The first slot must contain
a subject or object, a main form of a verb, an adverb or a conjunction (except for the light conjunctions a, "and",
i, "and even" or ale, "but"). Czech syntax has a subject–verb–object sentence structure. In practice, however, word
order is flexible and used for topicalization and focus. Although Czech has a periphrastic passive construction (like
English), word-order changes frequently replace the passive voice in colloquial usage. For example, to change "Peter killed
Paul" to "Paul was killed by Peter" the order of subject and object is inverted: Petr zabil Pavla ("Peter killed
Paul") becomes "Paul, Peter killed" (Pavla zabil Petr). Pavla is in the accusative case, the grammatical object (in
this case, the victim) of the verb. In Czech, nouns and adjectives are declined into one of seven grammatical cases.
Nouns are inflected to indicate their use in a sentence. A nominative–accusative language, Czech marks subject nouns
with nominative case and object nouns with accusative case. The genitive case marks possessive nouns and some types
of movement. The remaining cases (instrumental, locative, vocative and dative) indicate semantic relationships, such
as secondary objects, movement or position (dative case) and accompaniment (instrumental case). An adjective's case
agrees with that of the noun it describes. When Czech children learn their language's declension patterns, the cases
are referred to by number. Czech distinguishes three genders—masculine, feminine, and neuter—and the masculine gender
is subdivided into animate and inanimate. With few exceptions, feminine nouns in the nominative case end in -a, -e,
or -ost; neuter nouns in -o, -e, or -í, and masculine nouns in a consonant. Adjectives agree in gender and animacy
(for masculine nouns in the accusative or genitive singular and the nominative plural) with the nouns they modify.
The main effect of gender in Czech is the difference in noun and adjective declension, but other effects include
past-tense verb endings: for example, dělal (he did, or made); dělala (she did, or made) and dělalo (it did, or made).
Nouns are also inflected for number, distinguishing between singular and plural. Typical of a Slavic language, Czech
cardinal numbers one through four allow the nouns and adjectives they modify to take any case, but numbers over five
place these nouns and adjectives in the genitive case when the entire expression is in nominative or accusative case.
The Czech koruna is an example of this feature: as the subject of a sentence, it is declined as genitive for numbers five and up. Typical of Slavic languages, Czech marks its verbs for one of two grammatical
aspects: perfective and imperfective. Most verbs are part of inflected aspect pairs—for example, koupit (perfective)
and kupovat (imperfective). Although the verbs' meaning is similar, in perfective verbs the action is completed and
in imperfective verbs it is ongoing. This is distinct from past and present tense, and any Czech verb of either aspect
can be conjugated into any of its three tenses. Aspect describes the state of the action at the time specified by
the tense. The verbs of most aspect pairs differ in one of two ways: by prefix or by suffix. In prefix pairs, the
perfective verb has an added prefix—for example, the imperfective psát (to write, to be writing) compared with the
perfective napsat (to write down, to finish writing). The most common prefixes are na-, o-, po-, s-, u-, vy-, z-
and za-. In suffix pairs, a different infinitive ending is added to the perfective stem; for example, the perfective
verbs koupit (to buy) and prodat (to sell) have the imperfective forms kupovat and prodávat. Imperfective verbs may
undergo further morphology to make other imperfective verbs (iterative and frequentative forms), denoting repeated
or regular action. The verb jít (to go) has the iterative form chodit (to go repeatedly) and the frequentative form
chodívat (to go regularly). The infinitive form ends in t (archaically, ti). It is the form found in dictionaries
and the form that follows auxiliary verbs (for example, můžu tě slyšet—"I can hear you"). Czech verbs have three
grammatical moods: indicative, imperative and conditional. The imperative mood adds specific endings for each of
three person (or number) categories: -Ø/-i/-ej for second-person singular, -te/-ete/-ejte for second-person plural
and -me/-eme/-ejme for first-person plural. The conditional mood is formed with a particle after the past-tense verb.
This mood indicates possible events, expressed in English as "I would" or "I wish". Czech has one of the most phonemic
orthographies of all European languages. Its thirty-one graphemes represent thirty sounds (in most dialects, i and
y have the same sound), and it contains only one digraph: ch, which follows h in the alphabet. As a result, some
of its characters have been used by phonologists to denote corresponding sounds in other languages. The characters
q, w and x appear only in foreign words. The háček (ˇ) is used with certain letters to form new characters: š, ž,
and č, as well as ň, ě, ř, ť, and ď (the latter five uncommon outside Czech). The last two letters are sometimes
written with a comma above (ʼ, an abbreviated háček) because of their height. The character ó exists only in loanwords
and onomatopoeia. Czech typographical features not associated with phonetics generally resemble those of most Latin
European languages, including English. Proper nouns, honorifics, and the first letters of quotations are capitalized,
and punctuation is typical of other Latin European languages. Writing of ordinal numerals is similar to most European
languages. The Czech language uses a decimal comma instead of a decimal point. When writing a long number, spaces between every three digits (e.g. between hundreds and thousands) may be used for better orientation in handwritten texts, but, unlike in English, never within the decimal part. The number 1,234,567.8910 may be written as 1234567,8910 or 1 234 567,8910. Ordinal numbers (1st) use a point, as in German (1.). In proper noun phrases (except personal names),
only the first word is capitalized (Pražský hrad, Prague Castle).
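The number-formatting conventions above (decimal comma, optional spaces grouping the integer part by thousands, no grouping in the decimal part) can be sketched in Python. This is an illustrative helper, not a standard library locale routine; the function name and signature are assumptions.

```python
def format_czech(value: float, decimals: int = 4) -> str:
    """Format a number using Czech conventions: a decimal comma,
    with spaces grouping the integer part into threes."""
    integer, _, fraction = f"{value:.{decimals}f}".partition(".")
    # Group integer digits in threes from the right, e.g. 1234567 -> "1 234 567";
    # the decimal part is never grouped.
    groups = []
    while len(integer) > 3:
        groups.insert(0, integer[-3:])
        integer = integer[:-3]
    groups.insert(0, integer)
    return " ".join(groups) + ("," + fraction if fraction else "")

print(format_czech(1234567.8910))  # -> 1 234 567,8910
```

In production code, the same result is usually obtained through locale-aware formatting (e.g. the `cs_CZ` locale) rather than a hand-rolled helper.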
Education is the process of facilitating learning, or the acquisition of knowledge, skills, values, beliefs, and habits.
Educational methods include storytelling, discussion, teaching, training, and directed research. Education frequently
takes place under the guidance of educators, but learners may also educate themselves. Education can take place in
formal or informal settings and any experience that has a formative effect on the way one thinks, feels, or acts
may be considered educational. The methodology of teaching is called pedagogy. After the Fall of Rome, the Catholic
Church became the sole preserver of literate scholarship in Western Europe. The church established cathedral schools
in the Early Middle Ages as centers of advanced education. Some of these establishments ultimately evolved into medieval
universities and forebears of many of Europe's modern universities. During the High Middle Ages, Chartres Cathedral
operated the famous and influential Chartres Cathedral School. The medieval universities of Western Christendom were
well-integrated across all of Western Europe, encouraged freedom of inquiry, and produced a great variety of fine
scholars and natural philosophers, including Thomas Aquinas of the University of Naples, Robert Grosseteste of the
University of Oxford, an early expositor of a systematic method of scientific experimentation, and Saint Albert the
Great, a pioneer of biological field research. Founded in 1088, the University of Bologna is considered the first,
and the oldest continually operating university. Formal education occurs in a structured environment whose explicit
purpose is teaching students. Usually, formal education takes place in a school environment with classrooms of multiple
students learning together with a trained, certified teacher of the subject. Most school systems are designed around
a set of values or ideals that govern all educational choices in that system. Such choices include curriculum, organizational
models, design of the physical learning spaces (e.g. classrooms), student-teacher interactions, methods of assessment,
class size, educational activities, and more. Preschools provide education from ages approximately three to seven,
depending on the country, when children enter primary education. These are also known as nursery schools and as kindergarten,
except in the US, where kindergarten is a term used for primary education. Kindergarten "provide[s]
a child-centered, preschool curriculum for three- to seven-year-old children that aim[s] at unfolding the child's
physical, intellectual, and moral nature with balanced emphasis on each of them." Primary (or elementary) education
consists of the first five to seven years of formal, structured education. In general, primary education consists
of six to eight years of schooling starting at the age of five or six, although this varies between, and sometimes
within, countries. Globally, around 89% of children aged six to twelve are enrolled in primary education, and this
proportion is rising. Under the Education For All programs driven by UNESCO, most countries have committed to achieving
universal enrollment in primary education by 2015, and in many countries, it is compulsory. The division between
primary and secondary education is somewhat arbitrary, but it generally occurs at about eleven or twelve years of
age. Some education systems have separate middle schools, with the transition to the final stage of secondary education
taking place at around the age of fourteen. Schools that provide primary education are mostly referred to as primary schools or elementary schools. Primary schools are often subdivided into infant schools and junior schools. In most
contemporary educational systems of the world, secondary education comprises the formal education that occurs during
adolescence. It is characterized by transition from the typically compulsory, comprehensive primary education for
minors, to the optional, selective tertiary, "postsecondary", or "higher" education (e.g. university, vocational
school) for adults. Depending on the system, schools for this period, or a part of it, may be called secondary or
high schools, gymnasiums, lyceums, middle schools, colleges, or vocational schools. The exact meaning of any of these
terms varies from one system to another. The exact boundary between primary and secondary education also varies from
country to country and even within them, but is generally around the seventh to the tenth year of schooling. Secondary
education occurs mainly during the teenage years. In the United States, Canada and Australia, primary and secondary
education together are sometimes referred to as K-12 education, and in New Zealand Year 1–13 is used. The purpose
of secondary education can be to give common knowledge, to prepare for higher education, or to train directly in
a profession. Secondary education in the United States did not emerge until 1910, with the rise of large corporations
and advancing technology in factories, which required skilled workers. In order to meet this new job demand, high
schools were created, with a curriculum focused on practical job skills that would better prepare students for white
collar or skilled blue collar work. This proved beneficial for both employers and employees, since the improved human
capital lowered costs for the employer, while skilled employees received higher wages. Higher education, also called
tertiary, third stage, or postsecondary education, is the non-compulsory educational level that follows the completion
of a school such as a high school or secondary school. Tertiary education is normally taken to include undergraduate
and postgraduate education, as well as vocational education and training. Colleges and universities mainly provide
tertiary education. Collectively, these are sometimes known as tertiary institutions. Individuals who complete tertiary
education generally receive certificates, diplomas, or academic degrees. University education includes teaching,
research, and social services activities, and it includes both the undergraduate level (sometimes referred to as
tertiary education) and the graduate (or postgraduate) level (sometimes referred to as graduate school). Universities
are generally composed of several colleges. In the United States, universities can be private and independent like
Yale University; public and state-governed like the Pennsylvania State System of Higher Education; or independent
but state-funded like the University of Virginia. A number of career specific courses are now available to students
through the Internet. In the past, those who were disabled were often not eligible for public education. Children with disabilities were instead often educated by physicians or special tutors. These early physicians (people like Itard, Seguin, Howe, Gallaudet) set the foundation for special education today. They focused on individualized
instruction and functional skills. In its early years, special education was only provided to people with severe
disabilities, but more recently it has been opened to anyone who has experienced difficulty learning. While considered
"alternative" today, most alternative systems have existed since ancient times. After the public school system was
widely developed beginning in the 19th century, some parents found reasons to be discontented with the new system.
Alternative education developed in part as a reaction to perceived limitations and failings of traditional education.
A broad range of educational approaches emerged, including alternative schools, self-learning, homeschooling and
unschooling. Example alternative schools include Montessori schools, Waldorf schools (or Steiner schools), Friends
schools, Sands School, Summerhill School, The Peepal Grove School, Sudbury Valley School, Krishnamurti schools, and
open classroom schools. Charter schools are another example of alternative education; they have in recent years grown in number in the US and gained greater importance in its public education system. In time, some ideas from
these experiments and paradigm challenges may be adopted as the norm in education, just as Friedrich Fröbel's approach
to early childhood education in 19th-century Germany has been incorporated into contemporary kindergarten classrooms.
Other influential writers and thinkers have included the Swiss humanitarian Johann Heinrich Pestalozzi; the American
transcendentalists Amos Bronson Alcott, Ralph Waldo Emerson, and Henry David Thoreau; the founders of progressive
education, John Dewey and Francis Parker; and educational pioneers such as Maria Montessori and Rudolf Steiner, and
more recently John Caldwell Holt, Paul Goodman, Frederick Mayer, George Dennison and Ivan Illich. Indigenous education
refers to the inclusion of indigenous knowledge, models, methods, and content within formal and non-formal educational
systems. Often in a post-colonial context, the growing recognition and use of indigenous education methods can be
a response to the erosion and loss of indigenous knowledge and language through the processes of colonialism. Furthermore,
it can enable indigenous communities to "reclaim and revalue their languages and cultures, and in so doing, improve
the educational success of indigenous students." Informal learning is one of three forms of learning defined by the
Organisation for Economic Co-operation and Development (OECD). Informal learning occurs in a variety of places, such
as at home, work, and through daily interactions and shared relationships among members of society. For many learners
this includes language acquisition, cultural norms and manners. Informal learning for young people is an ongoing
process that also occurs in a variety of places, such as out of school time, in youth programs at community centers
and media labs. Informal learning usually takes place outside educational establishments and does not follow a specified curriculum; it may arise accidentally or sporadically, in association with certain occasions or changing practical requirements. It is not necessarily planned pedagogically, systematically, or by subject, but is instead unconsciously incidental, holistically problem-related, and tied to situation management and fitness for life. It is experienced directly in its "natural" function of everyday life and is often spontaneous. The concept
of 'education through recreation' was applied to childhood development in the 19th century. In the early 20th century,
the concept was broadened to include young adults but the emphasis was on physical activities. L.P. Jacks, also an
early proponent of lifelong learning, described education through recreation: "A master in the art of living draws
no sharp distinction between his work and his play, his labour and his leisure, his mind and his body, his education
and his recreation. He hardly knows which is which. He simply pursues his vision of excellence through whatever he
is doing and leaves others to determine whether he is working or playing. To himself he always seems to be doing
both. Enough for him that he does it well." Education through recreation is the opportunity to learn in a seamless
fashion through all of life's activities. The concept has been revived by the University of Western Ontario to teach
anatomy to medical students. Autodidacticism (also autodidactism) is a contemplative, absorbing process, of "learning
on your own" or "by yourself", or as a self-teacher. Some autodidacts spend a great deal of time reviewing the resources
of libraries and educational websites. One may become an autodidact at nearly any point in one's life. While some
may have been informed in a conventional manner in a particular field, they may choose to inform themselves in other,
often unrelated areas. Notable autodidacts include Abraham Lincoln (U.S. president), Srinivasa Ramanujan (mathematician),
Michael Faraday (chemist and physicist), Charles Darwin (naturalist), Thomas Alva Edison (inventor), Tadao Ando (architect),
George Bernard Shaw (playwright), Frank Zappa (composer, recording engineer, film director), and Leonardo da Vinci
(engineer, scientist, mathematician). In 2012, the modern use of electronic educational technology (also called e-learning)
had grown at 14 times the rate of traditional learning. Open education is fast growing to become
the dominant form of education, for many reasons such as its efficiency and results compared to traditional methods.
Cost of education has been an issue throughout history, and a major political issue in most countries today. Online
courses often can be more expensive than face-to-face classes. Of 182 colleges surveyed in 2009, nearly half said tuition for online courses was higher than for campus-based ones. Many large universities are now starting to offer free or almost-free full courses; for example, Harvard, MIT and Berkeley have teamed up to form edX. Other universities
offering open education are Stanford, Princeton, Duke, Johns Hopkins, Edinburgh, U. Penn, U. Michigan, U. Virginia,
U. Washington, and Caltech. It has been called the biggest change in the way we learn since the printing press. Despite
favorable studies on effectiveness, many people may still desire to choose traditional campus education for social
and cultural reasons. The conventional merit-system degree is currently not as common in open education as it is
in campus universities, although some open universities do already offer conventional degrees such as the Open University
in the United Kingdom. Presently, many of the major open education sources offer their own form of certificate. Due to the popularity of open education, this new kind of academic certificate is gaining respect and "academic value" comparable to that of traditional degrees. Many open universities are working to be able to offer students standardized
testing and traditional degrees and credentials. A culture is beginning to form around distance learning for people who are looking for the social connections enjoyed on traditional campuses. For example, students may create study groups,
meetups and movements such as UnCollege. Universal Primary Education is one of the eight international Millennium
Development Goals, towards which progress has been made in the past decade, though barriers still remain. Securing
charitable funding from prospective donors is one particularly persistent problem. Researchers at the Overseas Development
Institute have indicated that the main obstacles to funding for education include conflicting donor priorities, an
immature aid architecture, and a lack of evidence and advocacy for the issue. Additionally, Transparency International
has identified corruption in the education sector as a major stumbling block to achieving Universal Primary Education
in Africa. Furthermore, demand in the developing world for improved educational access is not as high as foreigners
have expected. Indigenous governments are reluctant to take on the ongoing costs involved. There is also economic
pressure from some parents, who prefer their children to earn money in the short term rather than work towards the
long-term benefits of education. Similarities — in systems or even in ideas — that schools share
internationally have led to an increase in international student exchanges. The European Socrates-Erasmus Program
facilitates exchanges across European universities. The Soros Foundation provides many opportunities for students
from central Asia and eastern Europe. Programs such as the International Baccalaureate have contributed to the internationalization
of education. The global campus online, led by American universities, allows free access to class materials and lecture
files recorded during the actual classes. Research into LCPSs (low-cost private schools) found that over the five years to July 2013, debate around the contribution of LCPSs to achieving Education for All (EFA) objectives was polarised and was receiving growing coverage in international policy. The polarisation was due to disputes around whether the schools are affordable
for the poor, reach disadvantaged groups, provide quality education, support or undermine equality, and are financially
sustainable. The report examined the main challenges encountered by development organisations which support LCPSs.
Surveys suggest these types of schools are expanding across Africa and Asia. This success is attributed to excess demand. These surveys also identified several areas of concern. Some claim that there is education inequality because children did not exceed
the education of their parents. This education inequality is then associated with income inequality. Although critical
thinking is a goal of education, criticism and blame are often the unintended by-products of our current educational
process. Students often blame their teachers and their textbooks, despite the availability of libraries and the internet.
When someone tries to improve education, the educational establishment itself occasionally showers the person with
criticism rather than gratitude. Better by-products of an educational system would be gratitude and determination.
Developed countries have people with more resources (housing, food, transportation, water and sewage treatment, hospitals,
health care, libraries, books, media, schools, the internet, education, etc.) than most of the world's population.
One needs only to see, through travel or the media, how many people in undeveloped countries live in order to sense this.
However, one can also use economic data to gain some insight into this. Yet criticism and blame are common among
people in the developed countries. Gratitude for all these resources and the determination to develop oneself would
be more productive than criticism and blame because the resources are readily available and because, if you blame
others, there is no need for you to do something different tomorrow or for you to change and improve. Where there
is a will, there is a way. People in developed countries have the will and the way to do many things that they want
to do. They sometimes need more determination and will to improve and to educate themselves with the resources that
are abundantly available. They occasionally need more gratitude for the resources they have, including their teachers
and their textbooks. The entire internet is also available to supplement these teachers and textbooks. Educational
psychology is the study of how humans learn in educational settings, the effectiveness of educational interventions,
the psychology of teaching, and the social psychology of schools as organizations. Although the terms "educational
psychology" and "school psychology" are often used interchangeably, researchers and theorists are likely to be identified
as educational psychologists, whereas practitioners in schools or school-related settings are identified as school
psychologists. Educational psychology is concerned with the processes of educational attainment in the general population
and in sub-populations such as gifted children and those with specific disabilities. Educational psychology can in
part be understood through its relationship with other disciplines. It is informed primarily by psychology, bearing
a relationship to that discipline analogous to the relationship between medicine and biology. Educational psychology
in turn informs a wide range of specialties within educational studies, including instructional design, educational
technology, curriculum development, organizational learning, special education and classroom management. Educational
psychology both draws from and contributes to cognitive science and the learning sciences. In universities, departments
of educational psychology are usually housed within faculties of education, possibly accounting for the lack of representation
of educational psychology content in introductory psychology textbooks (Lucas, Blazek, & Raley, 2006). Intelligence
is an important factor in how the individual responds to education. Those who have higher intelligence tend to perform
better at school and go on to higher levels of education. This effect is also observable in the opposite direction,
in that education increases measurable intelligence. Studies have shown that while educational attainment is important
in predicting intelligence in later life, intelligence at 53 is more closely correlated with intelligence at 8 years
old than to educational attainment. Dunn and Dunn focused on identifying relevant stimuli that may influence learning
and manipulating the school environment, at about the same time as Joseph Renzulli recommended varying teaching strategies.
Howard Gardner identified a wide range of modalities in his Multiple Intelligences theories. The Myers-Briggs Type
Indicator and Keirsey Temperament Sorter, based on the works of Jung, focus on understanding how people's personality
affects the way they interact personally, and how this affects the way individuals respond to each other within the
learning environment. The work of David Kolb and Anthony Gregorc's Type Delineator follows a similar but more simplified
approach. Some theories propose that all individuals benefit from a variety of learning modalities, while others
suggest that individuals may have preferred learning styles, learning more easily through visual or kinesthetic experiences.
A consequence of the latter theory is that effective teaching should present a variety of teaching methods which
cover all three learning modalities so that different students have equal opportunities to learn in a way that is
effective for them. Guy Claxton has questioned the extent to which learning styles such as Visual, Auditory and Kinesthetic (VAK) are helpful, particularly as they can tend to label children and therefore restrict learning. Recent research
has argued "there is no adequate evidence base to justify incorporating learning styles assessments into general
educational practice." As an academic field, philosophy of education is "the philosophical study of education and
its problems (...) its central subject matter is education, and its methods are those of philosophy". "The philosophy
of education may be either the philosophy of the process of education or the philosophy of the discipline of education.
That is, it may be part of the discipline in the sense of being concerned with the aims, forms, methods, or results
of the process of educating or being educated; or it may be metadisciplinary in the sense of being concerned with
the concepts, aims, and methods of the discipline." As such, it is both part of the field of education and a field
of applied philosophy, drawing from fields of metaphysics, epistemology, axiology and the philosophical approaches
(speculative, prescriptive, and/or analytic) to address questions in and about pedagogy, education policy, and curriculum,
as well as the process of learning, to name a few. For example, it might study what constitutes upbringing and education,
the values and norms revealed through upbringing and educational practices, the limits and legitimization of education
as an academic discipline, and the relation between education theory and practice. Instruction is the facilitation
of another's learning. Instructors in primary and secondary institutions are often called teachers, and they direct
the education of students and might draw on many subjects like reading, writing, mathematics, science and history.
Instructors in post-secondary institutions might be called teachers, instructors, or professors, depending on the
type of institution; and they primarily teach only their specific discipline. Studies from the United States suggest
that the quality of teachers is the single most important factor affecting student performance, and that countries
which score highly on international tests have multiple policies in place to ensure that the teachers they employ
are as effective as possible. With the passage of the No Child Left Behind Act (NCLB) in the United States, teachers
must be highly qualified. A popular way to gauge teaching performance is to use student evaluations of teachers (SETs),
but these evaluations have been criticized for being counterproductive to learning and inaccurate due to student
bias. It has been argued that high rates of education are essential for countries to be able to achieve high levels
of economic growth. Empirical analyses tend to support the theoretical prediction that poor countries should grow
faster than rich countries because they can adopt cutting-edge technologies already tried and tested by rich countries.
However, technology transfer requires knowledgeable managers and engineers who are able to operate new machines or
production practices borrowed from the leader in order to close the gap through imitation. Therefore, a country's
ability to learn from the leader is a function of its stock of "human capital". Recent studies of the determinants
of aggregate economic growth have stressed the importance of fundamental economic institutions and the role of cognitive
skills. At the level of the individual, there is a large literature, generally related to the work of Jacob Mincer,
on how earnings are related to schooling and other human capital. This work has motivated a large number of studies,
but is also controversial. The chief controversies revolve around how to interpret the impact of schooling. Some
students who have indicated a high potential for learning, by testing with a high intelligence quotient, may not
achieve their full academic potential, due to financial difficulties. The Renaissance in Europe
ushered in a new age of scientific and intellectual inquiry and appreciation of ancient Greek and Roman civilizations.
Around 1450, Johannes Gutenberg developed a printing press, which allowed works of literature to spread more quickly.
The European Age of Empires saw European ideas of education in philosophy, religion, arts and sciences spread out
across the globe. Missionaries and scholars also brought back new ideas from other civilisations — as with the Jesuit
China missions who played a significant role in the transmission of knowledge, science, and culture between China
and Europe, translating works from Europe like Euclid's Elements for Chinese scholars and the thoughts of Confucius
for European audiences. The Enlightenment saw the emergence of a more secular educational outlook in Europe.
Tennessee (/tɛnᵻˈsiː/) (Cherokee: ᏔᎾᏏ, Tanasi) is a state located in the southeastern United States. Tennessee is the 36th
largest and the 17th most populous of the 50 United States. Tennessee is bordered by Kentucky and Virginia to the
north, North Carolina to the east, Georgia, Alabama, and Mississippi to the south, and Arkansas and Missouri to the
west. The Appalachian Mountains dominate the eastern part of the state, and the Mississippi River forms the state's
western border. Tennessee's capital and second largest city is Nashville, which has a population of 601,222. Memphis
is the state's largest city, with a population of 653,450. The state of Tennessee is rooted in the Watauga Association,
a 1772 frontier pact generally regarded as the first constitutional government west of the Appalachians. What is
now Tennessee was initially part of North Carolina, and later part of the Southwest Territory. Tennessee was admitted
to the Union as the 16th state on June 1, 1796. Tennessee was the last state to leave the Union and join the Confederacy
at the outbreak of the U.S. Civil War in 1861. Occupied by Union forces from 1862, it was the first state to be readmitted
to the Union at the end of the war. Tennessee furnished more soldiers for the Confederate Army than any other state,
and more soldiers for the Union Army than any other Southern state. Beginning during Reconstruction, it had competitive
party politics, but a Democratic takeover in the late 1880s resulted in passage of disfranchisement laws that excluded
most blacks and many poor whites from voting. This sharply reduced competition in politics in the state until after
passage of civil rights legislation in the mid-20th century. In the 20th century, Tennessee transitioned from an
agrarian economy to a more diversified economy, aided by massive federal investment in the Tennessee Valley Authority
and, in the early 1940s, the city of Oak Ridge. This city was established to house the Manhattan Project's uranium
enrichment facilities, helping to build the world's first atomic bomb, which was used during World War II. Tennessee
has played a critical role in the development of many forms of American popular music, including rock and roll, blues,
country, and rockabilly. Beale Street in Memphis is considered by many to be the birthplace of the blues, with musicians
such as W.C. Handy performing in its clubs as early as 1909. Memphis is also home to Sun Records, where musicians such
as Elvis Presley, Johnny Cash, Carl Perkins, Jerry Lee Lewis, Roy Orbison, and Charlie Rich began their recording careers,
and where rock and roll took shape in the 1950s. The 1927 Victor recording sessions in Bristol generally mark the beginning
of the country music genre, and the rise of the Grand Ole Opry in the 1930s helped make Nashville the center of the country
music recording industry. Three brick-and-mortar museums recognize Tennessee's role in nurturing various forms of popular
music: the Memphis Rock N' Soul Museum, the Country Music Hall of Fame and Museum in Nashville, and the International
Rock-A-Billy Museum in Jackson. Moreover, the Rockabilly Hall of Fame, an online site recognizing the development of
rockabilly in which Tennessee played a crucial role, is based in Nashville. Tennessee's
major industries include agriculture, manufacturing, and tourism. Poultry, soybeans, and cattle are the state's primary
agricultural products, and major manufacturing exports include chemicals, transportation equipment, and electrical
equipment. The Great Smoky Mountains National Park, the nation's most visited national park, is headquartered in
the eastern part of the state, and a section of the Appalachian Trail roughly follows the Tennessee-North Carolina
border. Other major tourist attractions include the Tennessee Aquarium in Chattanooga; Dollywood in Pigeon Forge;
the Parthenon, the Country Music Hall of Fame and Museum, and Ryman Auditorium in Nashville; the Jack Daniel's Distillery
in Lynchburg; and Elvis Presley's Graceland residence and tomb, the Memphis Zoo, and the National Civil Rights Museum
in Memphis. The earliest variant of the name that became Tennessee was recorded by Captain Juan Pardo, the Spanish
explorer, when he and his men passed through an American Indian village named "Tanasqui" in 1567 while traveling
inland from South Carolina. In the early 18th century, British traders encountered a Cherokee town named Tanasi (or
"Tanase") in present-day Monroe County, Tennessee. The town was located on a river of the same name (now known as
the Little Tennessee River), and appears on maps as early as 1725. It is not known whether this was the same town
as the one encountered by Juan Pardo, although recent research suggests that Pardo's "Tanasqui" was located at the
confluence of the Pigeon River and the French Broad River, near modern Newport. The modern spelling, Tennessee, is
attributed to James Glen, the governor of South Carolina, who used this spelling in his official correspondence during
the 1750s. The spelling was popularized by the publication of Henry Timberlake's "Draught of the Cherokee Country"
in 1765. In 1788, North Carolina created "Tennessee County", the third county to be established in what is now Middle
Tennessee. (Tennessee County was the predecessor to current-day Montgomery County and Robertson County.) When a constitutional
convention met in 1796 to organize a new state out of the Southwest Territory, it adopted "Tennessee" as the name
of the state. Tennessee is known as the "Volunteer State", a nickname some claimed was earned during the War of 1812
because of the prominent role played by volunteer soldiers from Tennessee, especially during the Battle of New Orleans.
Other sources differ on the origin of the state nickname; according to the Columbia Encyclopedia, the name refers
to volunteers for the Mexican–American War. This explanation is more likely, because President Polk's call for 2,600
nationwide volunteers at the beginning of the Mexican-American War resulted in 30,000 volunteers from Tennessee alone,
largely in response to the death of Davy Crockett and appeals by former Tennessee Governor and now Texas politician,
Sam Houston. The highest point in the state is Clingmans Dome at 6,643 feet (2,025 m). Clingmans Dome, which lies
on Tennessee's eastern border, is the highest point on the Appalachian Trail, and is the third highest peak in the
United States east of the Mississippi River. The state line between Tennessee and North Carolina crosses the summit.
The state's lowest point is the Mississippi River at the Mississippi state line (the lowest point in Memphis, nearby,
is at 195 ft (59 m)). The geographical center of the state is located in Murfreesboro. Stretching west from the Blue
Ridge for approximately 55 miles (89 km) is the Ridge and Valley region, in which numerous tributaries join to form
the Tennessee River in the Tennessee Valley. This area of Tennessee is covered by fertile valleys separated by wooded
ridges, such as Bays Mountain and Clinch Mountain. The western section of the Tennessee Valley, where the depressions
become broader and the ridges become lower, is called the Great Valley. In this valley are numerous towns and two
of the region's three urban areas, Knoxville, the 3rd largest city in the state, and Chattanooga, the 4th largest
city in the state. The third urban area, the Tri-Cities, comprising Bristol, Johnson City, and Kingsport and their
environs, is located to the northeast of Knoxville. East Tennessee has several important transportation links with
Middle and West Tennessee, as well as the rest of the nation and the world, including several major airports and
interstates. Knoxville's McGhee Tyson Airport (TYS) and Chattanooga's Chattanooga Metropolitan Airport (CHA), as
well as the Tri-Cities' Tri-Cities Regional Airport (TRI), provide air service to numerous destinations. I-24, I-81,
I-40, I-75, and I-26 along with numerous state highways and other important roads, traverse the Grand Division and
connect Chattanooga, Knoxville, and the Tri-Cities, along with other cities and towns such as Cleveland, Athens,
and Sevierville. The easternmost section, about 10 miles (16 km) in width, consists of hilly land that runs along
the western bank of the Tennessee River. To the west of this narrow strip of land is a wide area of rolling hills
and streams that stretches all the way to the Mississippi River; this area is called the Tennessee Bottoms or bottom
land. In Memphis, the Tennessee Bottoms end in steep bluffs overlooking the river. To the west of the Tennessee Bottoms
is the Mississippi Alluvial Plain, less than 300 feet (90 m) above sea level. This area of lowlands, flood plains,
and swamp land is sometimes referred to as the Delta region. Memphis is the economic center of West Tennessee and
the largest city in the state. Most of the state has a humid subtropical climate, with the exception of some of the
higher elevations in the Appalachians, which are classified as having a mountain temperate climate or a humid continental
climate due to cooler temperatures. The Gulf of Mexico is the dominant factor in the climate of Tennessee, with winds
from the south being responsible for most of the state's annual precipitation. Generally, the state has hot summers
and mild to cool winters with generous precipitation throughout the year, with highest average monthly precipitation
generally in the winter and spring months, between December and April. The driest months, on average, are August
to October. On average the state receives 50 inches (130 cm) of precipitation annually. Snowfall ranges from 5 inches
(13 cm) in West Tennessee to over 16 inches (41 cm) in the higher mountains in East Tennessee. Summers in the state
are generally hot and humid, with most of the state averaging a high of around 90 °F (32 °C) during the summer months.
Winters tend to be mild to cool, increasing in coolness at higher elevations. Generally, for areas outside the highest
mountains, the average overnight lows are near freezing for most of the state. The highest recorded temperature is
113 °F (45 °C) at Perryville on August 9, 1930, while the lowest recorded temperature is −32 °F (−36 °C) at Mountain
City on December 30, 1917. While the state is far enough from the coast to avoid any direct impact from a hurricane,
the location of the state makes it likely to be impacted from the remnants of tropical cyclones which weaken over
land and can cause significant rainfall, such as Tropical Storm Chris in 1982 and Hurricane Opal in 1995. The state
averages around 50 days of thunderstorms per year, some of which can be severe with large hail and damaging winds.
Tornadoes are possible throughout the state, with West and Middle Tennessee the most vulnerable. Occasionally, strong
or violent tornadoes occur, such as the devastating April 2011 tornadoes that killed 20 people in North Georgia and
Southeast Tennessee. On average, the state has 15 tornadoes per year. Tornadoes in Tennessee can be severe, and Tennessee
leads the nation in the percentage of total tornadoes which have fatalities. Winter storms are an occasional problem,
such as the infamous Blizzard of 1993, although ice storms are a more likely occurrence. Fog is a persistent problem
in parts of the state, especially in East Tennessee. The capital is Nashville, though Knoxville, Kingston, and Murfreesboro
have all served as state capitals in the past. Memphis has the largest population of any city in the state. Nashville's
13-county metropolitan area has been the state's largest since c. 1990. Chattanooga and Knoxville, both in the eastern
part of the state near the Great Smoky Mountains, each has approximately one-third of the population of Memphis or
Nashville. The city of Clarksville is a fifth significant population center, some 45 miles (72 km) northwest of Nashville.
Murfreesboro is the sixth-largest city in Tennessee, with some 108,755 residents. The area now known as
Tennessee was first inhabited by Paleo-Indians nearly 12,000 years ago. The names of the cultural groups that inhabited
the area between first settlement and the time of European contact are unknown, but several distinct cultural phases
have been named by archaeologists, including Archaic (8000–1000 BC), Woodland (1000 BC–1000 AD), and Mississippian
(1000–1600 AD), whose chiefdoms were the cultural predecessors of the Muscogee people who inhabited the Tennessee
River Valley before Cherokee migration into the river's headwaters. The first recorded European excursions into what
is now called Tennessee were three expeditions led by Spanish explorers, namely Hernando de Soto in 1540, Tristan
de Luna in 1559, and Juan Pardo in 1567. Pardo recorded the name "Tanasqui" from a local Indian village, which evolved
to the state's current name. At that time, Tennessee was inhabited by tribes of Muscogee and Yuchi people. Possibly
because of European diseases devastating the Indian tribes, which would have left a population vacuum, and also from
expanding European settlement in the north, the Cherokee moved south from the area now called Virginia. As European
colonists spread into the area, the Indian populations were forcibly displaced to the south and west, including all
Muscogee and Yuchi peoples, the Chickasaw and Choctaw, and ultimately, the Cherokee in 1838. The first British settlement
in what is now Tennessee was built in 1756 by settlers from the colony of South Carolina at Fort Loudoun, near present-day
Vonore. Fort Loudoun became the westernmost British outpost to that date. The fort was designed by John William Gerard
de Brahm and constructed by forces under British Captain Raymond Demeré. After its completion, Captain Raymond Demeré
relinquished command on August 14, 1757 to his brother, Captain Paul Demeré. Hostilities erupted between the British
and the neighboring Overhill Cherokees, and a siege of Fort Loudoun ended with its surrender on August 7, 1760. The
following morning, Captain Paul Demeré and a number of his men were killed in an ambush nearby, and most of the rest
of the garrison was taken prisoner. During the American Revolutionary War, Fort Watauga at Sycamore Shoals (in present-day
Elizabethton) was attacked (1776) by Dragging Canoe and his warring faction of Cherokee who were aligned with the
British Loyalists. These renegade Cherokee were referred to by settlers as the Chickamauga. They opposed North Carolina's
annexation of the Washington District and the concurrent settling of the Transylvania Colony further north and west.
The lives of many settlers were spared from the initial warrior attacks through the warnings of Dragging Canoe's
cousin, Nancy Ward. The frontier fort on the banks of the Watauga River later served as a 1780 staging area for the
Overmountain Men in preparation to trek over the Appalachian Mountains, to engage, and to later defeat the British
Army at the Battle of Kings Mountain in South Carolina. Three counties of the Washington District (now part of Tennessee)
broke off from North Carolina in 1784 and formed the State of Franklin. Efforts to obtain admission to the Union
failed, and the counties (now numbering eight) had re-joined North Carolina by 1789. North Carolina ceded the area
to the federal government in 1790, after which it was organized into the Southwest Territory. In an effort to encourage
settlers to move west into the new territory, in 1787 the mother state of North Carolina ordered a road to be cut
to take settlers into the Cumberland Settlements—from the south end of Clinch Mountain (in East Tennessee) to French
Lick (Nashville). The Trace was called the "North Carolina Road" or "Avery's Trace", and sometimes "The Wilderness
Road" (although it should not be confused with Daniel Boone's "Wilderness Road" through the Cumberland Gap). Tennessee
was admitted to the Union on June 1, 1796 as the 16th state. It was the first state created from territory under
the jurisdiction of the United States federal government. Apart from the former Thirteen Colonies only Vermont and
Kentucky predate Tennessee's statehood, and neither was ever a federal territory. According to the Constitution of the
State of Tennessee, Article I, Section 31, the boundary begins at the extreme height of Stone Mountain, at the place
where the line of Virginia intersects it, and runs along the extreme heights of the mountain chains of the Appalachians
separating North Carolina from Tennessee, past the Indian towns of Cowee and Old Chota, thence along the main ridge of
the said mountain (Unicoi Mountain) to the southern boundary of the state; all the territory, lands and waters lying
west of that line are included in the boundaries and limits of the newly formed state of Tennessee. The provision also
stated that the limits and jurisdiction of the state would include any future land acquisitions, referencing possible
land trades with other states or the acquisition of territory west of the Mississippi River. During the administration of U.S.
President Martin Van Buren, nearly 17,000 Cherokees—along with approximately 2,000 black slaves owned by Cherokees—were
uprooted from their homes between 1838 and 1839 and were forced by the U.S. military to march from "emigration depots"
in Eastern Tennessee (such as Fort Cass) toward the more distant Indian Territory west of Arkansas. During this relocation
an estimated 4,000 Cherokees died along the way west. In the Cherokee language, the event is called Nunna daul Isunyi—"the
Trail Where We Cried." The Cherokees were not the only American Indians forced to emigrate as a result of the Indian
removal efforts of the United States, and so the phrase "Trail of Tears" is sometimes used to refer to similar events
endured by other American Indian peoples, especially among the "Five Civilized Tribes". The phrase originated as
a description of the earlier emigration of the Choctaw nation. In February 1861, secessionists in Tennessee's state
government—led by Governor Isham Harris—sought voter approval for a convention to sever ties with the United States,
but Tennessee voters rejected the referendum by a 54–46% margin. The strongest opposition to secession came from
East Tennessee (which later tried to form a separate Union-aligned state). Following the Confederate attack upon
Fort Sumter in April and Lincoln's call for troops from Tennessee and other states in response, Governor Isham Harris
began military mobilization, submitted an ordinance of secession to the General Assembly, and made direct overtures
to the Confederate government. The Tennessee legislature ratified an agreement to enter a military league with the
Confederate States on May 7, 1861. On June 8, 1861, with people in Middle Tennessee having significantly changed
their position, voters approved a second referendum calling for secession, making Tennessee the last state to secede. Many
major battles of the American Civil War were fought in Tennessee—most of them Union victories. Ulysses S. Grant and
the U.S. Navy captured control of the Cumberland and Tennessee rivers in February 1862. They held off the Confederate
counterattack at Shiloh in April. Memphis fell to the Union in June, following a naval battle on the Mississippi
River in front of the city. The capture of Memphis and Nashville gave the Union control of the western and middle
sections; this control was confirmed at the Battle of Murfreesboro in early January 1863 and by the subsequent Tullahoma
Campaign. Confederates held East Tennessee despite the strength of Unionist sentiment there, with the exception of
extremely pro-Confederate Sullivan County. The Confederates, led by General James Longstreet, did attack General
Burnside's Fort Sanders at Knoxville and lost. It was a big blow to East Tennessee Confederate momentum, but Longstreet
won the Battle of Bean's Station a few weeks later. The Confederates besieged Chattanooga during the Chattanooga
Campaign in early fall 1863, but were driven off by Grant in November. Many of the Confederate defeats can be attributed
to the poor strategic vision of General Braxton Bragg, who led the Army of Tennessee from Perryville, Kentucky to
another Confederate defeat at Chattanooga. When the Emancipation Proclamation was announced, Tennessee was mostly
held by Union forces. Thus, Tennessee was not among the states enumerated in the Proclamation, and the Proclamation
did not free any slaves there. Nonetheless, enslaved African Americans escaped to Union lines to gain freedom without
waiting for official action. Old and young, men, women and children camped near Union troops. Thousands of former
slaves ended up fighting on the Union side; nearly 200,000 African Americans served with Union forces in total across
the South. In 1864, Andrew Johnson (a
War Democrat from Tennessee) was elected Vice President under Abraham Lincoln. He became President after Lincoln's
assassination in 1865. Under Johnson's lenient re-admission policy, Tennessee was the first of the seceding states
to have its elected members readmitted to the U.S. Congress, on July 24, 1866. Because Tennessee had ratified the
Fourteenth Amendment, it was the only one of the formerly secessionist states that did not have a military governor
during the Reconstruction period. After the formal end of Reconstruction, the struggle over power in Southern society
continued. Through violence and intimidation against freedmen and their allies, White Democrats regained political
power in Tennessee and other states across the South in the late 1870s and 1880s. Over the next decade, the state
legislature passed increasingly restrictive laws to control African Americans. In 1889 the General Assembly passed
four laws described as electoral reform, with the cumulative effect of essentially disfranchising most African Americans
in rural areas and small towns, as well as many poor Whites. Legislation included implementation of a poll tax, timing
of registration, and recording requirements. Tens of thousands of taxpaying citizens were without representation
for decades into the 20th century. Disfranchising legislation accompanied Jim Crow laws passed in the late 19th century,
which imposed segregation in the state. In 1900, African Americans made up nearly 24% of the state's population,
and numbered 480,430 citizens who lived mostly in the central and western parts of the state. In 2002, businessman
Phil Bredesen was elected as the 48th governor. Also in 2002, Tennessee amended the state constitution to allow for
the establishment of a lottery. Tennessee's Bob Corker was the only freshman Republican elected to the United States
Senate in the 2006 midterm elections. In the same election, the state constitution was amended to reject same-sex marriage. In January
2007, Ron Ramsey became the first Republican elected as Speaker of the State Senate since Reconstruction, as a result
of the realignment of the Democratic and Republican parties in the South since the late 20th century, with Republicans
now elected by conservative voters, who previously had supported Democrats. According to the U.S. Census Bureau,
as of 2015, Tennessee had an estimated population of 6,600,299, an increase of 50,947 from the prior year
and an increase of 254,194, or 4.01%, since 2010. This includes a natural increase since the last census
of 142,266 people (that is 493,881 births minus 351,615 deaths), and an increase from net migration of 219,551 people
into the state. Immigration from outside the United States resulted in a net increase of 59,385 people, and migration
within the country produced a net increase of 160,166 people. Twenty percent of Tennesseans were born outside the
South in 2008, compared to a figure of 13.5% in 1990. In 2000, the five most common self-reported ethnic groups in
the state were: American (17.3%), African American (13.0%), Irish (9.3%), English (9.1%), and German (8.3%). Most
Tennesseans who self-identify as having American ancestry are of English and Scotch-Irish ancestry. An estimated
21–24% of Tennesseans are of predominantly English ancestry. In the 1980 census, 1,435,147 Tennesseans claimed "English"
or "mostly English" ancestry out of a state population of 3,221,354, making them 45% of the state at the time. Tennessee
is home to several Protestant denominations, such as the National Baptist Convention (headquartered in Nashville);
the Church of God in Christ and the Cumberland Presbyterian Church (both headquartered in Memphis); the Church of
God and The Church of God of Prophecy (both headquartered in Cleveland). The Free Will Baptist denomination is headquartered
in Antioch; its main Bible college is in Nashville. The Southern Baptist Convention maintains its general headquarters
in Nashville. Publishing houses of several denominations are located in Nashville. Major outputs for the state include
textiles, cotton, cattle, and electrical power. Tennessee has over 82,000 farms, roughly 59 percent of which accommodate
beef cattle. Although cotton was an early crop in Tennessee, large-scale cultivation of the fiber did not begin until
the 1820s with the opening of the land between the Tennessee and Mississippi Rivers. The upper wedge of the Mississippi
Delta extends into southwestern Tennessee, and it was in this fertile section that cotton took hold. Soybeans are
also heavily planted in West Tennessee, focusing on the northwest corner of the state. Major corporations with headquarters
in Tennessee include FedEx, AutoZone and International Paper, all based in Memphis; Pilot Corporation and Regal Entertainment
Group, based in Knoxville; Eastman Chemical Company, based in Kingsport; the North American headquarters of Nissan
Motor Company, based in Franklin; Hospital Corporation of America and Caterpillar Financial, based in Nashville;
and Unum, based in Chattanooga. Tennessee is also the location of the Volkswagen factory in Chattanooga, a $2 billion
polysilicon production facility by Wacker Chemie in Bradley County, and a $1.2 billion polysilicon production facility
by Hemlock Semiconductor in Clarksville. The Tennessee income tax does not apply to salaries and wages, but most
income from stock, bonds and notes receivable is taxable. Taxable dividends and interest that exceed the $1,250
single exemption or the $2,500 joint exemption are taxed at a rate of 6%. The state's sales and use tax rate
for most items is 7%. Food is taxed at a lower rate of 5.25%, but candy, dietary supplements and prepared food are
taxed at the full 7% rate. Local sales taxes are collected in most jurisdictions, at rates varying from 1.5% to 2.75%,
bringing the total sales tax to between 8.5% and 9.75%, one of the highest levels in the nation. Intangible property
is assessed on the shares of stock held in any loan company, investment company, insurance company or
for-profit cemetery company. The assessment ratio is 40% of the value, multiplied by the tax rate for the jurisdiction.
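The sales and income tax rules above amount to simple arithmetic, which can be sketched as follows. This is a minimal illustration using only the rates stated in the text (the 7% state sales tax, local rates of 1.5%–2.75%, and the 6% tax on dividends and interest above the $1,250 single or $2,500 joint exemption); the function names are illustrative, not drawn from any official source.

```python
# Illustrative sketch of the Tennessee tax arithmetic described in the text.
# Rates and exemptions are those quoted above; names are hypothetical.

STATE_SALES_TAX = 0.07  # general state sales and use tax rate

def combined_sales_tax_rate(local_rate):
    """Total sales tax: the 7% state rate plus a local rate of 1.5%-2.75%."""
    if not 0.015 <= local_rate <= 0.0275:
        raise ValueError("local rates vary from 1.5% to 2.75%")
    return STATE_SALES_TAX + local_rate

def dividend_interest_tax(income, joint=False):
    """6% tax on dividend/interest income above the single or joint exemption."""
    exemption = 2500 if joint else 1250
    return max(0, income - exemption) * 0.06

print(combined_sales_tax_rate(0.0275))           # upper bound, about 9.75%
print(dividend_interest_tax(10000, joint=True))  # 6% of $7,500, about $450
```

As the text notes, the combined sales tax therefore ranges from 8.5% (7% + 1.5%) to 9.75% (7% + 2.75%), and dividend or interest income at or below the exemption owes nothing.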
Tennessee imposes an inheritance tax on decedents' estates that exceed maximum single exemption limits ($1,000,000
for deaths in 2006 and thereafter). Tourism contributes billions of dollars each year to the state's economy and
Tennessee is ranked among the Top 10 destinations in the US. In 2014 a record 100 million people visited the state
resulting in $17.7 billion in tourism related spending within the state, an increase of 6.3% over 2013; tax revenue
from tourism equaled $1.5 billion. Each county in Tennessee saw at least $1 million from tourism while 19 counties
received at least $100 million (Davidson, Shelby, and Sevier counties were the top three). Tourism-generated jobs
for the state reached 152,900, a 2.8% increase. International travelers to Tennessee accounted for $533 million in
spending. In 2013, Tennessee residents accounted for 39.9% of tourists within the state; the second-highest originating location was the state of Georgia, accounting for 8.4% of tourists. Forty-four
percent of stays in the state were "day trips", 25% stayed one night, 15% stayed two nights, and 11% stayed 4 or
more nights. The average stay was 2.16 nights, compared to 2.03 nights for the US as a whole. The average person spent $118 per day: 29% on transportation, 24% on food, 17% on accommodation, and 28% on shopping and entertainment.
Interstate 40 crosses the state in a west-east orientation. Its branch interstate highways include I-240 in Memphis;
I-440 in Nashville; I-140 from Knoxville to Alcoa and I-640 in Knoxville. I-26, although technically an east-west
interstate, runs from the North Carolina border below Johnson City to its terminus at Kingsport. I-24 is an east-west
interstate that runs cross-state from Chattanooga to Clarksville. In a north-south orientation are highways I-55,
I-65, I-75, and I-81. Interstate 65 crosses the state through Nashville, while Interstate 75 serves Chattanooga and
Knoxville and Interstate 55 serves Memphis. Interstate 81 enters the state at Bristol and terminates at its junction
with I-40 near Dandridge. I-155 is a branch highway from I-55. The only spur highway of I-75 in Tennessee is I-275,
which is in Knoxville. When completed, I-69 will travel through the western part of the state, from South Fulton
to Memphis. A branch interstate, I-269, runs from Millington to Collierville. Tennessee politics, like those of most U.S. states, are dominated by the Republican and Democratic parties. Historian Dewey W. Grantham traces
divisions in the state to the period of the American Civil War: for decades afterward, the eastern third of the state
was Republican and the western two thirds voted Democrat. This division was related to the state's pattern of farming,
plantations and slaveholding. The eastern section was made up of yeoman farmers, but Middle and West Tennessee cultivated
crops, such as tobacco and cotton, that were dependent on the use of slave labor. These areas became defined as Democratic
after the war. During Reconstruction, freedmen and former free people of color were granted the right to vote; most
joined the Republican Party. Numerous African Americans were elected to local offices, and some to state office.
Following Reconstruction, Tennessee continued to have competitive party politics. But in the 1880s, the white-dominated
state government passed four laws, the last of which imposed a poll tax requirement for voter registration. These
served to disenfranchise most African Americans, and their power in the Republican Party, the state, and cities where
they had significant population was markedly reduced. In 1900 African Americans comprised 23.8 percent of the state's
population, concentrated in Middle and West Tennessee. In the early 1900s, the state legislature approved a form
of commission government for cities based on at-large voting for a few positions on a Board of Commission; several
adopted this as another means to limit African-American political participation. In 1913 the state legislature enacted
a bill enabling cities to adopt this structure without legislative approval. After disenfranchisement of blacks,
the GOP in Tennessee was historically a sectional party supported by whites only in the eastern part of the state.
In the 20th century, except for two nationwide Republican landslides of the 1920s (in 1920, when Tennessee narrowly
supported Warren G. Harding over Ohio Governor James Cox, and in 1928, when it more decisively voted for Herbert
Hoover over New York Governor Al Smith, a Catholic), the state was part of the Democratic Solid South until the 1950s.
In that postwar decade, it twice voted for Republican Dwight D. Eisenhower, former Allied Commander of the Armed
Forces during World War II. Since then, more of the state's voters have shifted to supporting Republicans, and Democratic
presidential candidates have carried Tennessee only four times. By 1960 African Americans comprised 16.45% of the
state's population. It was not until after the mid-1960s and passage of the Voting Rights Act of 1965 that they were
able to vote in full again, but new devices, such as at-large commission city governments, had been adopted in several
jurisdictions to limit their political participation. The 1970 election victories of Winfield Dunn for governor and Bill Brock for the U.S. Senate helped make the Republican Party competitive among whites statewide. Tennessee has selected
governors from different parties since 1970. Increasingly the Republican Party has become the party of white conservatives.
In the early 21st century, Republican voters control most of the state, especially in the more rural and suburban
areas outside of the cities; Democratic strength is mostly confined to the urban cores of the four major cities,
and is particularly strong in the cities of Nashville and Memphis. The latter area includes a large African-American
population. Historically, Republicans had their greatest strength in East Tennessee before the 1960s. Tennessee's
1st and 2nd congressional districts, based in the Tri-Cities and Knoxville, respectively, are among the few historically
Republican districts in the South. Those districts' residents supported the Union over the Confederacy during the
Civil War; they identified with the GOP after the war and have stayed with that party ever since. The 1st has been
in Republican hands continuously since 1881, and Republicans (or their antecedents) have held it for all but four
years since 1859. The 2nd has been held continuously by Republicans or their antecedents since 1859. In the 2000
presidential election, Vice President Al Gore, a former Democratic U.S. Senator from Tennessee, failed to carry his
home state, an unusual occurrence but indicative of strengthening Republican support. Republican George W. Bush received
increased support in 2004, with his margin of victory in the state increasing from 4% in 2000 to 14% in 2004. Democratic
presidential nominees from Southern states (such as Lyndon B. Johnson, Jimmy Carter, Bill Clinton) usually fare better
than their Northern counterparts do in Tennessee, especially among split-ticket voters outside the metropolitan areas.
The Baker v. Carr (1962) decision of the US Supreme Court established the principle of "one man, one vote", requiring
state legislatures to redistrict to bring Congressional apportionment in line with decennial censuses. It also required
both houses of state legislatures to be based on population for representation and not geographic districts such
as counties. This case arose out of a lawsuit challenging the longstanding rural bias of apportionment of seats in
the Tennessee legislature. After decades in which urban populations had been underrepresented in many state legislatures,
this significant ruling led to an increased (and proportional) prominence in state politics by urban and, eventually,
suburban, legislators and statewide officeholders in relation to their population within the state. The ruling also
applied to numerous other states long controlled by rural minorities, such as Alabama, Vermont, and Montana. The
Highway Patrol is the primary law enforcement entity that concentrates on highway safety regulations and general
non-wildlife state law enforcement and is under the jurisdiction of the Tennessee Department of Safety. The TWRA
is an independent agency tasked with enforcing all wildlife, boating, and fisheries regulations outside of state
parks. The TBI maintains state-of-the-art investigative facilities and is the primary state-level criminal investigative
department. Tennessee State Park Rangers are responsible for all activities and law enforcement inside the Tennessee
State Parks system. Local law enforcement is divided between County Sheriff's Offices and Municipal Police Departments.
Tennessee's Constitution requires that each County have an elected Sheriff. In 94 of the 95 counties the Sheriff
is the chief law enforcement officer in the county and has jurisdiction over the county as a whole. Each Sheriff's
Office is responsible for warrant service, court security, jail operations and primary law enforcement in the unincorporated
areas of a county as well as providing support to the municipal police departments. Incorporated municipalities are
required to maintain a police department to provide police services within their corporate limits. Capital punishment
has existed in Tennessee at various times since statehood. Before 1913 the method of execution was hanging. From
1913 to 1915 there was a hiatus on executions but they were reinstated in 1916 when electrocution became the new
method. From 1972 to 1978, after the Supreme Court's ruling in Furman v. Georgia held capital punishment as then practiced to be unconstitutional, there were no executions. Capital punishment was reinstated in 1978, although most prisoners awaiting execution between 1960 and 1978 had their sentences commuted to life in prison. From 1916 to 1960 the state executed
125 inmates. For a variety of reasons there were no further executions until 2000. Since 2000, Tennessee has executed
six prisoners and has 73 prisoners on death row (as of April 2015). In Knoxville, the Tennessee Volunteers college
team has played in the Southeastern Conference of the National Collegiate Athletic Association since 1932. The football
team has won 13 SEC championships and 25 bowls, including four Sugar Bowls, three Cotton Bowls, an Orange Bowl and
a Fiesta Bowl. Meanwhile, the men's basketball team has won four SEC championships and reached the NCAA Elite Eight
in 2010. In addition, the women's basketball team has won a host of SEC regular-season and tournament titles along
with 8 national titles.
Post-punk is a heterogeneous type of rock music that emerged in the wake of the punk movement of the 1970s. Drawing inspiration
from elements of punk rock while departing from its musical conventions and wider cultural affiliations, post-punk
music was marked by varied, experimentalist sensibilities and its "conceptual assault" on rock tradition. Artists
embraced electronic music, black dance styles and the avant-garde, as well as novel recording technology and production
techniques. The movement also saw the frequent intersection of music with art and politics, as artists liberally
drew on sources such as critical theory, cinema, performance art and modernist literature. Accompanying these musical
developments were subcultures that produced visual art, multimedia performances, independent record labels and fanzines
in conjunction with the music. The term "post-punk" was first used by journalists in the late 1970s to describe groups
moving beyond punk's sonic template into disparate areas. Many of these artists, initially inspired by punk's DIY
ethic and energy, ultimately became disillusioned with the style and movement, feeling that it had fallen into commercial
formula, rock convention and self-parody. They repudiated its populist claims to accessibility and raw simplicity,
instead seeing an opportunity to break with musical tradition, subvert commonplaces and challenge audiences. Artists
moved beyond punk's focus on the concerns of a largely white, male, working-class population and abandoned its continued
reliance on established rock and roll tropes, such as three-chord progressions and Chuck Berry-based guitar riffs.
These artists instead defined punk as "an imperative to constant change", believing that "radical content demands
radical form". Though the music varied widely between regions and artists, the post-punk movement has been characterized
by its "conceptual assault" on rock conventions and rejection of aesthetics perceived as traditionalist, hegemonic
or rockist in favor of experimentation with production techniques and non-rock musical styles such as dub, electronic
music, disco, noise, jazz, krautrock, world music and the avant-garde. While post-punk musicians often avoided or
intentionally obscured conventional influences, previous musical styles did serve as touchstones for the movement,
including particular brands of glam, art rock and "[the] dark undercurrent of '60s music". According to Reynolds,
artists once again approached the studio as an instrument, using new recording methods and pursuing novel sonic territories.
Author Matthew Bannister wrote that post-punk artists rejected the high cultural references of 1960s rock artists
like the Beatles and Bob Dylan as well as paradigms that defined "rock as progressive, as art, as 'sterile' studio
perfectionism ... by adopting an avant-garde aesthetic". Nicholas Lezard described post-punk as "a fusion of art
and music". The era saw the robust appropriation of ideas from literature, art, cinema, philosophy, politics and
critical theory into musical and pop cultural contexts. Artists sought to refuse the common distinction between high
and low culture and returned to the art school tradition found in the work of artists such as Captain Beefheart and
David Bowie. Among major influences on a variety of post-punk artists were writers such as William S. Burroughs and
J.G. Ballard, avant-garde political scenes such as Situationism and Dada, and intellectual movements such as postmodernism.
Many artists viewed their work in explicitly political terms. Additionally, in some locations, the creation of post-punk
music was closely linked to the development of efficacious subcultures, which played important roles in the production
of art, multimedia performances, fanzines and independent labels related to the music. Many post-punk artists maintained
an anti-corporatist approach to recording and instead seized on alternate means of producing and releasing music.
Journalists also became an important element of the culture, and popular music magazines and critics became immersed
in the movement. The scope of the term "post-punk" has been subject to controversy. While some critics, such as AllMusic's
Stephen Thomas Erlewine, have employed the term "post-punk" to denote "a more adventurous and arty form of punk",
others have suggested it pertains to a set of artistic sensibilities and approaches rather than any unifying style.
Music journalist and post-punk scholar Simon Reynolds has advocated that post-punk be conceived as "less a genre
of music than a space of possibility", suggesting that "what unites all this activity is a set of open-ended imperatives:
innovation; willful oddness; the willful jettisoning of all things precedented or 'rock'n'roll'". Nicholas Lezard,
problematizing the categorization of post-punk as a genre, described the movement as "so multifarious that only the
broadest use of the term is possible". Generally, post-punk music is defined as music that emerged from the cultural
milieu of punk rock in the late 1970s, although many groups now categorized as post-punk were initially subsumed
under the broad umbrella of punk or new wave music, only becoming differentiated as the terms came to signify narrower styles. Additionally, the accuracy of the term's chronological prefix "post" has been disputed, as various
groups commonly labeled post-punk in fact predate the punk rock movement. Reynolds defined the post-punk era as occurring
loosely between 1978 and 1984. During the initial punk era, a variety of entrepreneurs interested in local punk-influenced
music scenes began founding independent record labels, including Rough Trade (founded by record shop owner Geoff
Travis) and Factory (founded by Manchester-based television personality Tony Wilson). By 1977, groups began pointedly
pursuing methods of releasing music independently, an idea disseminated in particular by the Buzzcocks' release
of their Spiral Scratch EP on their own label as well as the self-released 1977 singles of Desperate Bicycles. These
DIY imperatives would help form the production and distribution infrastructure of post-punk and the indie music scene
that later blossomed in the mid-1980s. In late 1977, music writers for Sounds first used the terms "New Musick" and
"post punk" to describe British acts such as Siouxsie and the Banshees and Wire, who began experimenting with sounds,
lyrics and aesthetics that differed significantly from their punk contemporaries. Writer Jon Savage described some
of these early developments as exploring "harsh urban scrapings [,] controlled white noise" and "massively accented
drumming". In January 1978, singer John Lydon (then known as Johnny Rotten) announced the break-up of his pioneering
punk band the Sex Pistols, citing his disillusionment with punk's musical predictability and cooption by commercial
interests, as well as his desire to explore more diverse interests. As the initial punk movement dwindled, vibrant
new scenes began to coalesce out of a variety of bands pursuing experimental sounds and wider conceptual territory
in their work. Many of these artists drew on backgrounds in art and viewed their music as invested in particular
political or aesthetic agendas. British music publications such as the NME and Sounds played an influential part
in this nascent post-punk culture, with writers like Jon Savage, Paul Morley and Ian Penman developing a dense (and
often playful) style of criticism that drew on critical theory, radical politics and an eclectic variety of other
sources. Weeks after ending the Sex Pistols, Lydon formed the experimental group Public Image Ltd and declared the
project to be "anti music of any kind". Public Image and other acts such as the Pop Group and the Slits had begun
experimenting with dance music, dub production techniques and the avant-garde, while punk-indebted Manchester acts
such as Joy Division, The Fall and A Certain Ratio developed unique styles which drew on a similarly disparate range
of influences across music and modernist literature. Bands such as Scritti Politti, Gang of Four and This Heat incorporated
Leftist political philosophy and their own art school studies in their work. The innovative production techniques
devised by post-punk producers such as Martin Hannett and Dennis Bovell during this period would become an important
element of the emerging music, with studio experimentation taking a central role. A variety of groups that predated
punk, such as Cabaret Voltaire and Throbbing Gristle, experimented with crude production techniques and electronic
instruments in tandem with performance art methods and influence from transgressive literature, ultimately helping
to pioneer industrial music. Throbbing Gristle's independent label Industrial Records would become a hub for this
scene and provide it with its namesake. In the mid 1970s, various American groups (some with ties to Downtown Manhattan's
punk scene, including Television and Suicide) had begun expanding on the vocabulary of punk music. Midwestern groups
such as Pere Ubu and Devo drew inspiration from the region's derelict industrial environments, employing conceptual
art techniques, musique concrète and unconventional verbal styles that would presage the post-punk movement by several
years. A variety of subsequent groups, including New York-based Talking Heads and Boston-based Mission of Burma,
combined elements of punk with art school sensibilities. In 1978, the former band began a series of collaborations
with British ambient pioneer and ex-Roxy Music member Brian Eno, experimenting with Dada-influenced lyrical techniques,
dance music, and African polyrhythms. San Francisco's vibrant post-punk scene was centered around such groups as
Chrome, the Residents and Tuxedomoon, who incorporated multimedia experimentation, film and ideas from Antonin Artaud's
Theater of Cruelty. Also emerging during this period was New York's no wave movement, a short-lived art and music
scene that began in part as a reaction against punk's recycling of traditionalist rock tropes and often reflected
an abrasive, confrontational and nihilistic worldview. No wave musicians such as the Contortions, Teenage Jesus and
the Jerks, Mars, DNA, Theoretical Girls and Rhys Chatham instead experimented with noise, dissonance and atonality
in addition to non-rock styles. The first four of these groups were included on the Eno-produced No New York compilation,
often considered the quintessential testament to the scene. The no wave-affiliated label ZE Records was founded in
1978, and would also produce acclaimed and influential compilations in subsequent years. British post-punk entered
the 1980s with support from members of the critical community—American critic Greil Marcus characterised "Britain's
postpunk pop avant-garde" in a 1980 Rolling Stone article as "sparked by a tension, humour and sense of paradox plainly
unique in present day pop music"—as well as media figures such as BBC DJ John Peel, while several groups, such as
PiL and Joy Division, achieved some success in the popular charts. The network of supportive record labels that included
Industrial, Fast, E.G., Mute, Axis/4AD and Glass continued to facilitate a large output of music, by artists such
as the Raincoats, Essential Logic, Killing Joke, the Teardrop Explodes, and the Psychedelic Furs. However, during
this period, major figures and artists in the scene began leaning away from underground aesthetics. In the music
press, the increasingly esoteric writing of post-punk publications soon began to alienate their readerships; it is
estimated that within several years, NME suffered the loss of half its circulation. Writers like Paul Morley began
advocating "overground brightness" instead of the experimental sensibilities promoted in early years. Morley's own
musical collaboration with engineer Gary Langan and programmer J. J. Jeczalik, the Art of Noise, would attempt to
bring sampled and electronic sounds to the pop mainstream. A variety of more pop-oriented groups, including ABC,
the Associates, Adam and the Ants and Bow Wow Wow (the latter two managed by former Sex Pistols manager Malcolm McLaren)
emerged in tandem with the development of the New Romantic subcultural scene. Emphasizing glamour, fashion, and escapism
in distinction to the experimental seriousness of earlier post-punk groups, the club-oriented scene drew some suspicion
from denizens of the movement. Artists such as Gary Numan, the Human League, Soft Cell, John Foxx and Visage helped
pioneer a new synthpop style that drew more heavily from electronic and synthesizer music and benefited from the
rise of MTV. Post-punk artists such as Scritti Politti's Green Gartside and Josef K's Paul Haig, previously engaged
in avant-garde practices, turned away from these approaches and pursued mainstream styles and commercial success.
These new developments, in which post-punk artists attempted to bring subversive ideas into the pop mainstream, began
to be categorized under the marketing term new pop. In the early 1980s, Downtown Manhattan's no wave scene transitioned
from its abrasive origins into a more dance-oriented sound, with compilations such as ZE's Mutant Disco (1981) highlighting
a newly playful sensibility borne out of the city's clash of hip hop, disco and punk styles, as well as dub reggae
and world music influences. Artists such as Liquid Liquid, the B-52s, Cristina, Arthur Russell, James White and the
Blacks and Lizzy Mercier Descloux pursued a formula described by Luc Sante as "anything at all + disco bottom". The
decadent parties and art installations of venues such as Club 57 and the Mudd Club became cultural hubs for musicians
and visual artists alike, with figures such as Jean-Michel Basquiat, Keith Haring and Michael Holman frequenting
the scene. Other no wave-indebted groups such as Swans, Glenn Branca, the Lounge Lizards, Bush Tetras and Sonic Youth
instead continued exploring the early scene's forays into noise and more abrasive territory. In Germany, groups such
as Einstürzende Neubauten developed a unique style of industrial music, utilizing avant-garde noise, homemade instruments
and found objects. Members of that group would later go on to collaborate with members of the Birthday Party. In
Brazil, the post-punk scene grew after the generation of Brasilia rock with bands such as Legião Urbana, Capital
Inicial and Plebe Rude and then the opening of the music club Madame Satã in São Paulo, with acts like Cabine C,
Titãs, Patife Band, Fellini and Mercenárias, as documented on compilations like The Sexual Life of the Savages and
the Não Wave/Não São Paulo series, released in the UK, Germany and Brazil, respectively. The original
post-punk movement ended as the bands associated with the movement turned away from its aesthetics, often in favor
of more commercial sounds. Many of these groups would continue recording as part of the new pop movement, with entryism
becoming a popular concept. In the United States, driven by MTV and modern rock radio stations, a number of post-punk
acts had an influence on or became part of the Second British Invasion of "New Music" there. Some shifted to a more
commercial new wave sound (such as Gang of Four), while others were fixtures on American college radio and became
early examples of alternative rock. Perhaps the most successful band to emerge from post-punk was U2, who combined elements of religious imagery with political commentary in their often anthemic music. Until recently,
in most critical writing the post-punk era was "often dismissed as an awkward period in which punk's gleeful ructions
petered out into the vacuity of the Eighties". Contemporary scholars have argued to the contrary, asserting that
the period produced significant innovations and music on its own. Simon Reynolds described the period as "a fair
match for the sixties in terms of the sheer amount of great music created, the spirit of adventure and idealism that
infused it, and the way that the music seemed inextricably connected to the political and social turbulence of its
era". Nicholas Lezard wrote that the music of the period "was avant-garde, open to any musical possibilities that
suggested themselves, united only in the sense that it was very often cerebral, concocted by brainy young men and
women interested as much in disturbing the audience, or making them think, as in making a pop song". Post-punk was
an eclectic genre which resulted in a wide variety of musical innovations and helped merge white and black musical
styles. Out of the post-punk milieu came the beginnings of various subsequent genres, including new wave, dance-rock,
New Pop, industrial music, synthpop, post-hardcore, neo-psychedelia, alternative rock and house music. Bands such
as Joy Division, Siouxsie and the Banshees, Bauhaus and the Cure played in a darker, more morose style of post-punk
that led to the development of the gothic rock genre. At the turn of the 21st century, a post-punk revival developed
in British and American alternative and indie rock, which soon started appearing in other countries, as well. The
earliest sign of a revival was the emergence of various underground bands in the mid-'90s. However, the first commercially
successful bands – the Strokes, Franz Ferdinand, Interpol, Neils Children and Editors – surfaced in the late 1990s
to early 2000s, as did several dance-oriented bands such as the Rapture, Radio 4 and LCD Soundsystem. Additionally,
some darker post-punk bands began to appear in the indie music scene in the 2010s, including Cold Cave, She Wants
Revenge, Eagulls, the Soft Moon, She Past Away and Light Asylum, who were also affiliated with the darkwave revival,
as well as A Place to Bury Strangers, who combined early post-punk and shoegaze. These bands tend to draw a fanbase
who are a combination of the indie music subculture, older post-punk fans and the current goth subculture. In the
2010s, Savages played music reminiscent of early British post-punk bands of the late '70s.
In Canada, the term "football" may refer to Canadian football and American football collectively, or to either sport specifically,
depending on context. The two sports have shared origins and are closely related but have significant differences.
In particular, Canadian football has 12 players on the field per team rather than 11; the field is roughly 10 yards
wider, and 10 yards longer between end-zones that are themselves 10 yards deeper; and a team has only three downs
to gain 10 yards, which results in less offensive rushing than in the American game. In the Canadian game all players
on the defending team, when a down begins, must be at least 1 yard from the line of scrimmage. (The American game
has a similar "neutral zone" but it is only the length of the football.) Canadian football is also played at the
high school, junior, collegiate, and semi-professional levels: the Canadian Junior Football League, formed May 8,
1974, and Quebec Junior Football League are leagues for players aged 18–22, many post-secondary institutions compete
in Canadian Interuniversity Sport for the Vanier Cup, and senior leagues such as the Alberta Football League have
grown in popularity in recent years. Great achievements in Canadian football are enshrined in the Canadian Football
Hall of Fame. The first written account of a game played was on October 15, 1862, on the Montreal Cricket Grounds.
It was between the First Battalion Grenadier Guards and the Second Battalion Scots Fusilier Guards resulting in a
win by the Grenadier Guards 3 goals, 2 rouges to nothing. In 1864, at Trinity College, Toronto,
F. Barlow Cumberland, Frederick A. Bethune, and Christopher Gwynn, one of the founders of Milton, Massachusetts,
devised rules based on rugby football. The game gradually gained a following, with the Hamilton Football Club formed
on November 3, 1869, (the oldest football club in Canada). Montreal formed a team April 8, 1872, Toronto was formed
on October 4, 1873, and the Ottawa FBC on September 20, 1876. The first attempt to establish a proper governing body and to adopt a common set of rugby rules was the Foot Ball Association of Canada, organized on March 24, 1873, followed
by the Canadian Rugby Football Union (CRFU) founded June 12, 1880, which included teams from Ontario and Quebec.
Later both the Ontario and Quebec Rugby Football Union (ORFU and QRFU) were formed (January 1883), and then the Interprovincial
(1907) and Western Interprovincial Football Union (1936) (IRFU and WIFU). The CRFU reorganized into an umbrella organization
forming the Canadian Rugby Union (CRU) in 1891. The original forerunner to the current Canadian Football League was established in 1956, when the IRFU and WIFU formed an umbrella organization, the Canadian Football Council (CFC). In 1958 the CFC left the CRU to become the CFL. The Burnside rules, which closely resembled American football and were adopted in 1903 by the ORFU, were an effort to distinguish the game from a more rugby-oriented one. The
Burnside Rules had teams reduced to 12 men per side, introduced the Snap-Back system, required the offensive team
to gain 10 yards on three downs, eliminated the Throw-In from the sidelines, allowed only six men on the line, stated
that all goals by kicking were to be worth two points and the opposition was to line up 10 yards from the defenders
on all kicks. The rules were an attempt to standardize the rules throughout the country. The CIRFU, QRFU and CRU
refused to adopt the new rules at first. Forward passes were not allowed in the Canadian game until 1929, and touchdowns,
which had been five points, were increased to six points in 1956, in both cases several decades after the Americans
had adopted the same changes. The primary differences between the Canadian and American games stem from rule changes
that the American side of the border adopted but the Canadian side did not (originally, both sides had three downs,
goal posts on the goal lines and unlimited forward motion, but the American side modified these rules and the Canadians
did not). The Canadian field width was one rule not based on American practice, as the Canadian game was played
on wider fields, in stadiums that were not as narrow as American ones. The Grey Cup was established in 1909
after being donated by Albert Grey, 4th Earl Grey, the Governor General of Canada, as the trophy for the Rugby
Football Championship of Canada, contested by teams under the CRU. Initially an amateur competition, it eventually became dominated
by professional teams in the 1940s and early 1950s. The Ontario Rugby Football Union, the last amateur organization
to compete for the trophy, withdrew from competition in 1954. The move ushered in the modern era of Canadian professional
football. Canadian football has mostly been confined to Canada, with the United States being the only other country
to have hosted a high-level Canadian football game. The CFL's controversial "South Division", as it would come to
be officially known, attempted to put CFL teams in the United States playing under Canadian rules between 1992 and
1995. The move was aborted after three years; the Baltimore Stallions were the most successful of the numerous American
teams to play in the CFL, winning the 83rd Grey Cup. Continuing financial losses, a lack of proper Canadian football
venues, a pervasive belief that the American teams were simply pawns to provide the struggling Canadian teams with
expansion fee revenue, and the return of the NFL to Baltimore prompted the end of Canadian football on the American
side of the border. Amateur football is governed by Football Canada. At the university level, 26 teams play in four
conferences under the auspices of Canadian Interuniversity Sport; the CIS champion is awarded the Vanier Cup. Junior
football is played by many after high school before joining the university ranks. There are 20 junior teams in three
divisions in the Canadian Junior Football League competing for the Canadian Bowl. The Quebec Junior Football League
includes teams from Ontario and Quebec who battle for the Manson Cup. The Canadian football field is 150 yards (137
m) long and 65 yards (59 m) wide with end zones 20 yards (18 m) deep, and goal lines 110 yards (101 m) apart. At
each goal line is a set of 40-foot-high (12 m) goalposts, which consist of two uprights joined by an 18 1⁄2-foot-long
(5.6 m) crossbar which is 10 feet (3 m) above the goal line. The goalposts may be H-shaped (both posts fixed in the
ground) although in the higher-calibre competitions the tuning-fork design (supported by a single curved post behind
the goal line, so that each post starts 10 feet (3 m) above the ground) is preferred. The sides of the field are
marked by white sidelines, the goal line is marked in white, and white lines are drawn laterally across the field
every 5 yards (4.6 m) from the goal line. These lateral lines are called "yard lines" and are often marked with the distance
in yards from, and an arrow pointing toward, the nearest goal line. In previous decades, arrows were not used and every
yard line was usually marked with the distance to the goal line, including the goal line itself which was marked
with a "0"; in most stadiums today, the 10-, 20-, 30-, 40-, and 50-yard lines are marked with numbers, with the goal
line sometimes being marked with a "G". The centre (55-yard) line usually is marked with a "C". "Hash marks" are
painted in white, parallel to the yardage lines, at 1 yard (0.9 m) intervals, 24 yards (21.9 m) from the sidelines.
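As a quick arithmetic check, the dimensions above fit together as follows (a minimal sketch; the constant names are invented for this illustration and do not come from any official rulebook):

```python
# Canadian football field dimensions as described above, in yards.
FIELD_LENGTH = 150
FIELD_WIDTH = 65
END_ZONE_DEPTH = 20
GOAL_LINE_SEPARATION = 110
HASH_MARK_OFFSET = 24  # distance of hash marks from each sideline

# Total length = distance between goal lines plus both end zones.
assert GOAL_LINE_SEPARATION + 2 * END_ZONE_DEPTH == FIELD_LENGTH

# Yard lines run every 5 yards from goal line to goal line: 0, 5, ..., 110.
yard_lines = list(range(0, GOAL_LINE_SEPARATION + 1, 5))
assert len(yard_lines) == 23
assert yard_lines[len(yard_lines) // 2] == 55  # the centre ("C") line

# Hash marks 24 yards in from each sideline leave 65 - 2*24 = 17 yards between them.
assert FIELD_WIDTH - 2 * HASH_MARK_OFFSET == 17
```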
On fields that have a surrounding running track, such as Commonwealth Stadium, Molson Stadium, and many universities,
the end zones are often cut off in the corners to accommodate the track. In 2014, Edmonton placed turf over the track
to create full end zones. Truncated end zones were particularly common among U.S.-based teams during the CFL's American expansion,
as few American stadiums were able to accommodate the much longer and noticeably wider CFL field. At the beginning
of a match, an official tosses a coin and allows the captain of the visiting team to call heads or tails. The captain
of the team winning the coin toss is given the option of having first choice, or of deferring first choice to the
other captain. The captain making first choice may either choose a) to kick off or receive the kick at the beginning
of the half, or b) which direction of the field to play in. The remaining choice is given to the opposing captain.
Before the resumption of play in the second half, the captain that did not have first choice in the first half is
given first choice. Teams usually choose to defer, so it is typical for the team that wins the coin toss to kick
to begin the first half and receive to begin the second. Play stops when the ball carrier's knee, elbow, or any other
body part aside from the feet and hands, is forced to the ground (a tackle); when a forward pass is not caught on
the fly (during a scrimmage); when a touchdown (see below) or a field goal is scored; when the ball leaves the playing
area by any means (being carried, thrown, or fumbled out of bounds); or when the ball carrier is in a standing position
but can no longer move forwards (called forward progress). If no score has been made, the next play starts from scrimmage.
Before scrimmage, an official places the ball at the spot where it was when play stopped, but no nearer than 24 yards
from the sideline or 1 yard from the goal line. The line parallel to the goal line passing through the ball (line
from sideline to sideline for the length of the ball) is referred to as the line of scrimmage. This line is similar
to "no-man's land"; players must stay on their respective sides of this line until the play has begun again. For
a scrimmage to be valid the team in possession of the football must have seven players, excluding the quarterback,
within one yard of the line of scrimmage. The defending team must stay a yard or more back from the line of scrimmage.
On the field at the beginning of a play are two teams of 12 (unlike 11 in American football). The team in possession
of the ball is the offence and the team defending is referred to as the defence. Play begins with a backwards pass
through the legs (the snap) by a member of the offensive team, to another member of the offensive team. This is usually
the quarterback or punter, but a "direct snap" to a running back is also not uncommon. If the quarterback or punter
receives the ball, he may then run with it, hand it off, pass it, or kick it. Each play constitutes a down. The offence must advance the
ball at least ten yards towards the opponents' goal line within three downs or forfeit the ball to their opponents.
Once ten yards have been gained the offence gains a new set of three downs (rather than the four downs given in American
football). Downs do not accumulate. If the offensive team gains ten yards on its first play, the remaining two
downs are discarded and a new set of three is granted. If a team fails to gain ten yards in two downs, they usually punt
the ball on third down or try to kick a field goal (see below), depending on their position on the field. The team
may, however, use its third down in an attempt to advance the ball and gain a cumulative 10 yards. There are many
rules to contact in this type of football. First, the only player on the field who may be legally tackled is the
player currently in possession of the football (the ball carrier). Second, a receiver, that is to say, an offensive
player sent down the field to receive a pass, may not be interfered with (have his motion impeded, be blocked, etc.)
unless he is within one yard of the line of scrimmage (instead of 5 yards (4.6 m) in American football). Any player
may block another player's passage, so long as he does not hold or trip the player he intends to block. The kicker
may not be contacted after the kick but before his kicking leg returns to the ground (this rule is not enforced upon
a player who has blocked a kick), and the quarterback, having already thrown the ball, may not be hit or tackled.
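The three-down system described earlier (ten yards in three downs, with a fresh set once ten yards are gained and downs never accumulating) can be sketched as a small state machine. This is an illustrative model only; the class and method names are invented, and it does not attempt to cover penalties or kicks:

```python
class DownTracker:
    """Minimal model of Canadian football's three-down system."""
    YARDS_TO_GO = 10
    MAX_DOWNS = 3

    def __init__(self):
        self.down = 1
        self.yards_needed = self.YARDS_TO_GO

    def advance(self, yards_gained):
        """Record a play; return 'first down', 'continue', or 'turnover'."""
        self.yards_needed -= yards_gained
        if self.yards_needed <= 0:
            # Ten yards gained: remaining downs are discarded and a new
            # set of three begins (downs do not accumulate).
            self.down = 1
            self.yards_needed = self.YARDS_TO_GO
            return "first down"
        if self.down == self.MAX_DOWNS:
            # Failed to gain ten yards in three downs: possession is forfeited.
            return "turnover"
        self.down += 1
        return "continue"

tracker = DownTracker()
assert tracker.advance(12) == "first down"   # new set of downs right away
assert tracker.advance(4) == "continue"      # 2nd down and 6
assert tracker.advance(3) == "continue"      # 3rd down and 3
assert tracker.advance(2) == "turnover"      # short of the line to gain
```

In practice a team rarely lets the tracker reach "turnover": as the text notes, on third down it usually punts or attempts a field goal instead.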
Infractions of the rules are punished with penalties, typically a loss of yardage of 5, 10 or 15 yards against the
penalized team. Minor violations such as offside (a player from either side encroaching into scrimmage zone before
the play starts) are penalized five yards, more serious penalties (such as holding) are penalized 10 yards, and severe
violations of the rules (such as face-masking) are typically penalized 15 yards. Depending on the penalty, the penalty
yardage may be assessed from the original line of scrimmage, from where the violation occurred (for example, for
a pass interference infraction), or from where the ball ended after the play. Penalties on the offence may, or may
not, result in a loss of down; penalties on the defence may result in a first down being automatically awarded to
the offence. For particularly severe conduct, the game official(s) may eject players (ejected players may be substituted
for), or in exceptional cases, declare the game over and award victory to one side or the other. Penalties do not
affect the yard line which the offence must reach to gain a first down (unless the penalty results in a first down
being awarded); if a penalty against the defence results in the first down yardage being attained, then the offence
is awarded a first down. Penalties never result in a score for the offence. For example, a point-of-foul infraction
committed by the defence in their end zone is not ruled a touchdown, but instead advances the ball to the one-yard
line with an automatic first down. For a distance penalty, if the yardage is greater than half the distance to the
goal line, then the ball is advanced half the distance to the goal line, though only up to the one-yard line (unlike
American football, in Canadian football no scrimmage may start inside either one-yard line). If the original penalty
yardage would have resulted in a first down or moving the ball past the goal line, a first down is awarded. In most
cases, the non-penalized team will have the option of declining the penalty; in which case the results of the previous
play stand as if the penalty had not been called. One notable exception to this rule is if the kicking team on a
3rd down punt play is penalized before the kick occurs: the receiving team may not decline the penalty and take over
on downs. After the kick is made, change of possession occurs and subsequent penalties are assessed against either
the spot where the ball is caught, or the runback. During the last three minutes of a half, the penalty for failure
to place the ball in play within the 20-second play clock, known as "time count" (this foul is known as "delay of
game" in American football), is dramatically different from during the first 27 minutes. Instead of the penalty being
5 yards with the down repeated, the base penalty (except during convert attempts) becomes loss of down on first or
second down, and 10 yards on third down with the down repeated. In addition, as noted previously, the referee can
give possession to the defence for repeated deliberate time count violations on third down. The clock does not run
during convert attempts in the last three minutes of a half. If the 15 minutes of a quarter expire while the ball
is live, the quarter is extended until the ball becomes dead. If a quarter's time expires while the ball is dead,
the quarter is extended for one more scrimmage. A quarter cannot end while a penalty is pending: after the penalty
yardage is applied, the quarter is extended one scrimmage. Note that the non-penalized team has the option to decline
any penalty it considers disadvantageous, so a losing team cannot indefinitely prolong a game by repeatedly committing
infractions. In the CFL, if the game is tied at the end of regulation play, then each team is given an equal number
of chances to break the tie. A coin toss is held to determine which team will take possession first; the first team
scrimmages the ball at the opponent's 35-yard line and advances through a series of downs until it scores or loses
possession. If the team scores a touchdown, starting with the 2010 season, it is required to attempt a 2-point conversion.
The other team then scrimmages the ball at the same 35-yard line and has the same opportunity to score. After the
teams have completed their possessions, if one team is ahead, then it is declared the winner; otherwise, the two
teams each get another chance to score, scrimmaging from the other 35-yard line. After this second round, if there
is still no winner, during the regular season the game ends as a tie. In a playoff or championship game, the teams
continue to attempt to score from alternating 35-yard lines, until one team is leading after both have had an equal
number of possessions.
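The CFL overtime procedure above (alternating possessions from the 35-yard line, two rounds in the regular season, extra rounds in the playoffs) can be sketched as follows. This is a simplified illustrative model with an invented function name; it takes each round's points as given inputs rather than simulating play:

```python
def cfl_overtime(round_scores, playoff=False, max_regular_rounds=2):
    """Decide a tied game from alternating-possession overtime rounds.

    round_scores: sequence of (team_a_points, team_b_points), one pair
    per overtime round. Returns 'A', 'B', or 'tie'.
    """
    total_a = total_b = 0
    for round_number, (a, b) in enumerate(round_scores, start=1):
        total_a += a
        total_b += b
        # After both teams have had an equal number of possessions,
        # a lead decides the game.
        if total_a != total_b:
            return "A" if total_a > total_b else "B"
        # Regular-season games end as a tie after the second round;
        # playoff games continue until one team leads.
        if not playoff and round_number >= max_regular_rounds:
            return "tie"
    return "tie"

assert cfl_overtime([(7, 7), (3, 0)]) == "A"    # decided in round two
assert cfl_overtime([(3, 3), (0, 0)]) == "tie"  # regular season: still tied
assert cfl_overtime([(3, 3), (0, 0), (0, 3)], playoff=True) == "B"
```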
The Seven Years' War was fought between 1754 and 1763, the main conflict occurring in the seven-year period from 1756 to
1763. It involved every great power of the time except the Ottoman Empire, and affected Europe, the Americas, West
Africa, India, and the Philippines. Considered a prelude to the two world wars and the greatest European war since
the Thirty Years War of the 17th century, it once again split Europe into two coalitions, led by Great Britain on
one side and France on the other. For the first time, aiming to curtail Britain and Prussia's ever-growing might,
France formed a grand coalition of its own, which ended in failure as Britain rose to become the world's predominant power,
altering the European balance of power. Realizing that war was imminent, Prussia preemptively struck Saxony and quickly
overran it. The result caused uproar across Europe. Because of Prussia's alliance with Britain, Austria formed an
alliance with France, seeing an opportunity to recapture Silesia, which had been lost in a previous war. Reluctantly,
following the imperial diet, most of the states of the empire joined Austria's cause. The Anglo-Prussian alliance
was joined by smaller German states (especially Hanover). Sweden, fearing Prussia's expansionist tendencies, went
to war in 1757 to protect its Baltic dominions, seeing its chance when virtually all of Europe opposed Prussia. Spain,
bound by the Pacte de Famille, intervened on behalf of France and together they launched an utterly unsuccessful
invasion of Portugal in 1762. The Russian Empire was originally aligned with Austria, fearing Prussia's ambition
on the Polish-Lithuanian Commonwealth, but switched sides upon the succession of Tsar Peter III in 1762. Many middle
and small powers in Europe, unlike in previous wars, tried to steer clear of the escalating conflict,
even though they had interests in it or ties to the belligerents, like Denmark-Norway. The Dutch Republic,
a long-time British ally, kept its neutrality, fearing the odds against Britain and Prussia in a fight with the great
powers of Europe, and even tried to prevent Britain's domination of India. Naples, Sicily, and Savoy, although siding
with the Franco-Spanish party, declined to join the coalition for fear of British power. The taxation needed for
war caused the Russian people considerable hardship, coming on top of the taxes on salt and alcohol that Empress
Elizabeth had introduced in 1759 to fund her addition to the Winter Palace. Like Sweden, Russia concluded a separate peace with
Prussia. The war was successful for Great Britain, which gained the bulk of New France in North America, Spanish
Florida, some individual Caribbean islands in the West Indies, the colony of Senegal on the West African coast, and
superiority over the French trading outposts on the Indian subcontinent. The Native American tribes were excluded
from the settlement; a subsequent conflict, known as Pontiac's War, was also unsuccessful in returning them to their
pre-war status. In Europe, the war began disastrously for Prussia, but a combination of good luck and successful
strategy saw King Frederick the Great manage to retrieve the Prussian position and retain the status quo ante bellum.
Prussia emerged as a new European great power. Although Austria failed to retrieve the territory of Silesia from
Prussia (its original goal) its military prowess was also noted by the other powers. The involvement of Portugal,
Spain and Sweden did not return them to their former status as great powers. France was deprived of many of its colonies
and had saddled itself with heavy war debts that its inefficient financial system could barely handle. Spain lost
Florida but gained French Louisiana and regained control of its colonies, e.g., Cuba and the Philippines, which had
been captured by the British during the war. France and other European powers avenged their defeat in 1778 when the
American Revolutionary War broke out, with hopes of destroying Britain's dominance once and for all. The war has
been described as the first "world war", although this label was also given to various earlier conflicts like the
Eighty Years' War, the Thirty Years' War, the War of the Spanish Succession and the War of the Austrian Succession,
and to later conflicts like the Napoleonic Wars. The term "Second Hundred Years' War" has been used in order to describe
the almost continuous level of world-wide conflict during the entire 18th century, reminiscent of the more famous
and compact struggle of the 14th century. The War of the Austrian Succession had seen the belligerents aligned on a time-honoured
basis. France’s traditional enemies, Great Britain and Austria, had coalesced just as they had done against Louis
XIV. Prussia, the leading anti-Austrian state in Germany, had been supported by France. Neither group, however, found
much reason to be satisfied with its partnership: British subsidies to Austria had produced nothing of much help
to the British, while the British military effort had not saved Silesia for Austria. Prussia, having secured Silesia,
had come to terms with Austria in disregard of French interests. Even so, France had concluded a defensive alliance
with Prussia in 1747, and the maintenance of the Anglo-Austrian alignment after 1748 was deemed essential by the
Duke of Newcastle, British secretary of state in the ministry of his brother Henry Pelham. The collapse of that system
and the aligning of France with Austria and of Great Britain with Prussia constituted what is known as the “diplomatic
revolution” or the “reversal of alliances.” In 1756 Austria was making military preparations for war with Prussia
and pursuing an alliance with Russia for this purpose. On June 2, 1746, Austria and Russia concluded a defensive
alliance that covered their own territory and Poland against attack by Prussia or the Ottoman Empire. They also agreed
to a secret clause that promised the restoration of Silesia and the countship of Glatz (now Kłodzko, Poland) to Austria
in the event of hostilities with Prussia. Their real desire, however, was to destroy Frederick’s power altogether,
reducing his sway to his electorate of Brandenburg and giving East Prussia to Poland, an exchange that would be accompanied
by the cession of the Polish Duchy of Courland to Russia. Aleksey Petrovich, Graf (count) Bestuzhev-Ryumin, grand
chancellor of Russia under Empress Elizabeth, was hostile to both France and Prussia, but he could not persuade Austrian
statesman Wenzel Anton von Kaunitz to commit to offensive designs against Prussia so long as Prussia was able to
rely on French support. The Hanoverian king George II of Great Britain was passionately devoted to his family’s continental
holdings, but his commitments in Germany were counterbalanced by the demands of the British colonies overseas. If
war against France for colonial expansion was to be resumed, then Hanover had to be secured against Franco-Prussian
attack. France was very much interested in colonial expansion and was willing to exploit the vulnerability of Hanover
in war against Great Britain, but it had no desire to divert forces to central Europe for Prussia's interest. French
policy was, moreover, complicated by the existence of the le Secret du roi—a system of private diplomacy conducted
by King Louis XV. Unbeknownst to his foreign minister, Louis had established a network of agents throughout Europe
with the goal of pursuing personal political objectives that were often at odds with France’s publicly stated policies.
Louis’s goals for le Secret du roi included an attempt to win the Polish crown for his kinsman Louis François de
Bourbon, prince de Conti, and the maintenance of Poland, Sweden, and Turkey as French client states in opposition
to Russian and Austrian interests. Frederick saw Saxony and Polish west Prussia as potential fields for expansion
but could not expect French support if he started an aggressive war for them. If he joined the French against the
British in the hope of annexing Hanover, he might fall victim to an Austro-Russian attack. The hereditary elector
of Saxony, Augustus III, was also elective King of Poland as Augustus III, but the two territories were physically
separated by Brandenburg and Silesia. Neither state could pose as a great power. Saxony was merely a buffer between
Prussia and Austrian Bohemia, whereas Poland, despite its union with the ancient lands of Lithuania, was prey to
pro-French and pro-Russian factions. A Prussian scheme for compensating Frederick Augustus with Bohemia in exchange
for Saxony obviously presupposed further spoliation of Austria. In an attempt to satisfy Austria, Britain
cast Hanover's electoral vote for the candidacy of Maria Theresa's son Joseph as Holy Roman Emperor,
much to the dismay of Frederick and Prussia. Britain also moved to join the Austro-Russian alliance,
but complications arose: Britain's basic aim in the alliance was to protect Hanover's interests against
France, while at the same time Kaunitz kept approaching the French in the hope of establishing such an alliance with
Austria. France, for its part, had no intention of allying with Russia, which had meddled in its affairs during the War of the Austrian
Succession years earlier, and saw the complete dismemberment of Prussia as unacceptable to the stability of
Central Europe. Years later, Kaunitz was still trying to establish France's alliance with Austria. He tried as hard as
he could for Austria to not get entangled in Hanover's political affairs, and was even willing to trade Austrian
Netherlands for France's aid in recapturing Silesia. Frustrated by this decision and by the Dutch Republic's insistence
on neutrality, Britain soon turned to Russia. On September 30, 1755, Britain pledged financial aid to Russia in order
to station 50,000 troops on the Livonian-Lithuanian border, so that they could defend Britain's interests in Hanover immediately.
Bestuzhev, assuming the preparation was directed against Prussia, was more than happy to oblige the
British request. Unbeknownst to the other powers, King George II also made overtures to the Prussian king; Frederick, who
had begun to fear Austro-Russian intentions, was eager to welcome a rapprochement with Britain. On January
16, 1756, the Convention of Westminster was signed wherein Britain and Prussia promised to aid one another in order
to achieve lasting peace and stability in Europe. The carefully worded agreement proved no less catalytic
for the other European powers; the result was chaos. Empress Elizabeth of Russia was outraged at the duplicity
of Britain's position, and France was both enraged and alarmed by the sudden betrayal of its only ally.
Austria, particularly Kaunitz, used this situation to their utmost advantage. The now-isolated France was forced
to accede to the Austro-Russian alliance or face ruin. Thereafter, on May 1, 1756, the First Treaty of Versailles
was signed, in which both nations pledged 24,000 troops to defend each other in the case of an attack. This diplomatic
revolution proved to be an important cause of the war; although both treaties were self-defensive in nature, the
actions of both coalitions made the war virtually inevitable. The most important French fort planned was intended
to occupy a position at "the Forks" where the Allegheny and Monongahela Rivers meet to form the Ohio River (present
day Pittsburgh, Pennsylvania). Peaceful British attempts to halt this fort construction were unsuccessful, and the
French proceeded to build the fort they named Fort Duquesne. British colonial militia from Virginia were then sent
to drive them out. Led by George Washington, they ambushed a small French force at Jumonville Glen on 28 May 1754
killing ten, including commander Jumonville. The French retaliated by attacking Washington's army at Fort Necessity
on 3 July 1754 and forced Washington to surrender. News of this arrived in Europe, where Britain and France unsuccessfully
attempted to negotiate a solution. The two nations eventually dispatched regular troops to North America to enforce
their claims. The first British action was the assault on Acadia on 16 June 1755 in the Battle of Fort Beauséjour,
which was immediately followed by their expulsion of the Acadians. In July British Major General Edward Braddock
led about 2,000 army troops and provincial militia on an expedition to retake Fort Duquesne, but the expedition ended
in disastrous defeat. In further action, Admiral Edward Boscawen fired on the French ship Alcide on 8 June 1755,
capturing it and two troop ships. In September 1755, French and British troops met in the inconclusive Battle of
Lake George. For much of the eighteenth century, France approached its wars in the same way. It would let colonies
defend themselves or would offer only minimal help (sending them limited numbers of troops or inexperienced soldiers),
anticipating that fights for the colonies would most likely be lost anyway. This strategy was to a degree forced
upon France: geography, coupled with the superiority of the British navy, made it difficult for the French navy to
provide significant supplies and support to French colonies. Similarly, several long land borders made an effective
domestic army imperative for any French ruler. Given these military necessities, the French government, unsurprisingly,
based its strategy overwhelmingly on the army in Europe: it would keep most of its army on the continent, hoping
for victories closer to home. The plan was to fight to the end of hostilities and then, in treaty negotiations, to
trade territorial acquisitions in Europe to regain lost overseas possessions. This approach did not serve France
well in the war, as the colonies were indeed lost, but although much of the European war went well, by its end France
had few counterbalancing European successes. The British—by inclination as well as for practical reasons—had tended
to avoid large-scale commitments of troops on the Continent. They sought to offset the disadvantage of this in Europe
by allying themselves with one or more Continental powers whose interests were antithetical to those of their enemies,
particularly France. By subsidising the armies of continental allies, Britain could turn London's enormous
financial power to military advantage. In the Seven Years' War, the British chose as their principal partner the
greatest general of the day, Frederick the Great of Prussia, then the rising power in central Europe, and paid Frederick
substantial subsidies for his campaigns. This was accomplished in the Diplomatic Revolution of 1756, in which
Britain ended its long-standing alliance with Austria in favor of Prussia, leaving Austria to side with France. In
marked contrast to France, Britain strove to prosecute the war actively in the colonies, taking full advantage of
its naval power. The British pursued a dual strategy – naval blockade and bombardment of enemy ports, and
rapid movement of troops by sea. They harassed enemy shipping and attacked enemy colonies, frequently using colonists
from nearby British colonies in the effort. William Pitt, who entered the cabinet in 1756, had a grand vision for
the war that made it entirely different from previous wars with France. As prime minister Pitt committed Britain
to a grand strategy of seizing the entire French Empire, especially its possessions in North America and India. Britain's
main weapon was the Royal Navy, which could control the seas and bring as many invasion troops as were needed. He
also planned to use colonial forces from the Thirteen American colonies, working under the command of British regulars,
to invade New France. In order to tie the French army down, he subsidized his European allies. Pitt was head of the government
from 1756 to 1761, and even after that the British continued his strategy. It proved completely successful. Pitt
had a clear appreciation of the enormous value of imperial possessions, and realized how vulnerable the French
Empire was. The British Prime Minister, the Duke of Newcastle, was optimistic that the new series of alliances could
prevent war from breaking out in Europe. However, a large French force was assembled at Toulon, and the French opened
the campaign against the British by an attack on Minorca in the Mediterranean. A British attempt at relief was foiled
at the Battle of Minorca, and the island was captured on 28 June (for which Admiral Byng was court-martialed and
executed). War between Britain and France had been formally declared on 18 May, nearly two years after fighting had
broken out in the Ohio Country. Frederick II of Prussia had received reports of the clashes in North America and
had formed an alliance with Great Britain. On 29 August 1756, he led Prussian troops across the border of Saxony,
one of the small German states in league with Austria. He intended this as a bold pre-emption of an anticipated Austro-French
invasion of Silesia. He had three goals in his new war on Austria. First, he would seize Saxony and eliminate it
as a threat to Prussia, then use the Saxon army and treasury to aid the Prussian war effort. His second goal was
to advance into Bohemia where he might set up winter quarters at Austria's expense. Thirdly, he wanted to invade
Moravia from Silesia, seize the fortress at Olmütz, and advance on Vienna to force an end to the war. Accordingly,
leaving Field Marshal Count Kurt von Schwerin in Silesia with 25,000 soldiers to guard against incursions from Moravia
or Hungary, and leaving Field Marshal Hans von Lehwaldt in East Prussia to guard against Russian invasion from the
east, Frederick set off with his army for Saxony. The Prussian army marched in three columns. On the right was a
column of about 15,000 men under the command of Prince Ferdinand of Brunswick. On the left was a column of 18,000
men under the command of the Duke of Brunswick-Bevern. In the centre was Frederick II, himself with Field Marshal
James Keith commanding a corps of 30,000 troops. Ferdinand of Brunswick was to close in on the town of Chemnitz.
The Duke of Brunswick-Bevern was to traverse Lusatia to close in on Bautzen. Meanwhile, Frederick and Field Marshal
Keith would make for Dresden. The Saxon and Austrian armies were unprepared, and their forces were scattered. Frederick
occupied Dresden with little or no opposition from the Saxons. At the Battle of Lobositz on 1 October 1756, Frederick
prevented the isolated Saxon army from being reinforced by an Austrian army under General Browne. The Prussians then
occupied Saxony; after the Siege of Pirna, the Saxon army surrendered in October 1756, and was forcibly incorporated
into the Prussian army. The attack on neutral Saxony caused outrage across Europe and led to the strengthening of
the anti-Prussian coalition. The only significant Austrian success was the partial occupation of Silesia. Far from
being easy, Frederick's early successes proved indecisive and very costly for Prussia's smaller army. This led him
to remark that he did not fight the same Austrians as he had during the previous war. Britain had been surprised
by the sudden Prussian offensive but now began shipping supplies and £670,000 (equivalent to £89.9 million in 2015)
to its new ally. A combined force of allied German states was organised by the British to protect Hanover from French
invasion, under the command of the Duke of Cumberland. The British attempted to persuade the Dutch Republic to join
the alliance, but the request was rejected, as the Dutch wished to remain fully neutral. Despite the huge disparity
in numbers, the year had been successful for the Prussian-led forces on the continent, in contrast to disappointing
British campaigns in North America. In early 1757, Frederick II again took the initiative by marching into the Kingdom
of Bohemia, hoping to inflict a decisive defeat on Austrian forces. After winning the bloody Battle of Prague on
6 May 1757, in which both forces suffered major casualties, the Prussians forced the Austrians back into the fortifications
of Prague. The Prussian army then laid siege to the city. Following the battle at Prague, Frederick took 5,000 troops
from the siege at Prague and sent them to reinforce the 19,000-man army under the Duke of Brunswick-Bevern at Kolin
in Bohemia. Austrian Marshal Daun arrived too late to participate in the battle of Prague, but picked up 16,000 men
who had escaped from the battle. With this army he slowly moved to relieve Prague. The Prussian army was too weak
to simultaneously besiege Prague and keep Daun away, and Frederick was forced to attack prepared positions. The resulting
Battle of Kolin was a sharp defeat for Frederick, his first military defeat. His losses further forced him to lift
the siege and withdraw from Bohemia altogether. Later that summer, the Russians invaded Memel with 75,000 troops.
Memel had one of the strongest fortresses in Prussia. However, after five days of artillery bombardment the Russian
army was able to storm it. The Russians then used Memel as a base to invade East Prussia and defeated a smaller Prussian
force in the fiercely contested Battle of Gross-Jägersdorf on 30 August 1757. However, it was not yet able to take
Königsberg and retreated soon afterward. Still, it was a new threat to Prussia. Not only was Frederick forced to
break off his invasion of Bohemia, he was now forced to withdraw further into Prussian-controlled territory. His
defeats on the battlefield brought still more opportunist nations into the war. Sweden declared war on Prussia and
invaded Pomerania with 17,000 men. Sweden believed this small army was all that was needed to occupy Pomerania and that
it would not need to engage the Prussians, who were occupied on so many other
fronts. Things were looking grim for Prussia now, with the Austrians mobilising to attack Prussian-controlled soil
and a French army under Soubise approaching from the west. However, in November and December of 1757, the whole situation
in Germany was reversed. First, Frederick devastated Prince Soubise's French force at the Battle of Rossbach on 5
November 1757 and then routed a vastly superior Austrian force at the Battle of Leuthen on 5 December 1757. With these
victories, Frederick once again established himself as Europe's premier general and his men as Europe's most accomplished
soldiers. In spite of this, the Prussians were now facing the prospect of four major powers attacking on four fronts
(France from the West, Austria from the South, Russia from the East and Sweden from the North). Meanwhile, a combined
force from a number of smaller German states such as Bavaria had been established under Austrian leadership, thus
threatening Prussian control of Saxony. This problem was compounded when the main Hanoverian army under Cumberland
was defeated at the Battle of Hastenbeck and forced to surrender entirely at the Convention of Klosterzeven following
a French invasion of Hanover. The Convention removed Hanover and Brunswick from the war, leaving the western approach
to Prussian territory extremely vulnerable. Frederick sent urgent requests to Britain for more substantial assistance,
as he was now without any outside military support for his forces in Germany. Calculating that no further Russian
advance was likely until 1758, Frederick moved the bulk of his eastern forces to Pomerania under the command of Marshal
Lehwaldt where they were to repel the Swedish invasion. In short order, the Prussian army drove the Swedes back,
occupied most of Swedish Pomerania, and blockaded its capital Stralsund. George II of Great Britain, on the advice
of his British ministers, revoked the Convention of Klosterzeven, and Hanover reentered the war. Over the winter
the new commander of the Hanoverian forces, Duke Ferdinand of Brunswick, regrouped his army and launched a series
of offensives that drove the French back across the River Rhine. The British had suffered further defeats in North
America, particularly at Fort William Henry. At home, however, stability had been established. Since 1756, successive
governments led by Newcastle and Pitt had fallen. In August 1757, the two men agreed to a political partnership and
formed a coalition government that gave new, firmer direction to the war effort. The new strategy emphasised both
Newcastle's commitment to British involvement on the Continent, particularly in defence of Germany, and William Pitt's
determination to use naval power to seize French colonies around the globe. This "dual strategy" would dominate British
policy for the next five years. Between 10 and 17 October 1757, a Hungarian general, Count András Hadik, serving
in the Austrian army, executed what may be the most famous hussar action in history. When the Prussian King Frederick
was marching south with his powerful armies, the Hungarian general unexpectedly swung his force of 5,000, mostly
hussars, around the Prussians and occupied part of their capital, Berlin, for one night. The city was spared for
a negotiated ransom of 200,000 thalers. When Frederick heard about this humiliating occupation, he immediately sent
a larger force to free the city. Hadik, however, left the city with his Hussars and safely reached the Austrian lines.
Subsequently, Hadik was promoted to the rank of Marshal in the Austrian army. In early 1758, Frederick launched an
invasion of Moravia, and laid siege to Olmütz (now Olomouc, Czech Republic). Following an Austrian victory at the
Battle of Domstadtl that wiped out a supply convoy destined for Olmütz, Frederick broke off the siege and withdrew
from Moravia. It marked the end of his final attempt to launch a major invasion of Austrian territory. East Prussia
had been occupied by Russian forces over the winter and would remain under their control until 1762, although Frederick
did not see the Russians as an immediate threat and instead entertained hopes of first fighting a decisive battle
against Austria that would knock them out of the war. In April 1758, the British concluded the Anglo-Prussian Convention
with Frederick in which they committed to pay him an annual subsidy of £670,000. Britain also dispatched 9,000 troops
to reinforce Ferdinand's Hanoverian army, the first British troop commitment on the continent and a reversal in the
policy of Pitt. Ferdinand had succeeded in driving the French from Hanover and Westphalia and re-captured the port
of Emden in March 1758 before crossing the Rhine with his own forces, which caused alarm in France. Despite Ferdinand's
victory over the French at the Battle of Krefeld and the brief occupation of Düsseldorf, he was compelled by the
successful manoeuvering of larger French forces to withdraw across the Rhine. By this point Frederick was increasingly
concerned by the Russian advance from the east and marched to counter it. Just east of the Oder in Brandenburg-Neumark,
at the Battle of Zorndorf (now Sarbinowo, Poland), a Prussian army of 35,000 men under Frederick on 25 August 1758,
fought a Russian army of 43,000 commanded by Count William Fermor. Both sides suffered heavy casualties – the Prussians
12,800, the Russians 18,000 – but the Russians withdrew, and Frederick claimed victory. In the undecided Battle of
Tornow on 25 September, a Swedish army repulsed six assaults by a Prussian army but did not push on Berlin following
the Battle of Fehrbellin. The war was continuing indecisively when on 14 October Marshal Daun's Austrians surprised
the main Prussian army at the Battle of Hochkirch in Saxony. Frederick lost much of his artillery but retreated in
good order, helped by dense woods. The Austrians had ultimately made little progress in the campaign in Saxony despite
Hochkirch and had failed to achieve a decisive breakthrough. After a thwarted attempt to take Dresden, Daun's troops
were forced to withdraw to Austrian territory for the winter, so that Saxony remained under Prussian occupation.
At the same time, the Russians failed in an attempt to take Kolberg in Pomerania (now Kołobrzeg, Poland) from the
Prussians. The year 1759 saw several Prussian defeats. At the Battle of Kay, or Paltzig, the Russian Count Saltykov
with 47,000 Russians defeated 26,000 Prussians commanded by General Carl Heinrich von Wedel. Though the Hanoverians
defeated an army of 60,000 French at Minden, Austrian general Daun forced the surrender of an entire Prussian corps
of 13,000 in the Battle of Maxen. Frederick himself lost half his army in the Battle of Kunersdorf (now Kunowice,
Poland), the worst defeat in his military career and one that drove him to the brink of abdication and thoughts of
suicide. The disaster resulted partly from his misjudgment of the Russians, who had already demonstrated their strength
at Zorndorf and at Gross-Jägersdorf (now Motornoye, Russia), and partly from good cooperation between the Russian
and Austrian forces. The French planned to invade the British Isles during 1759 by accumulating troops near the mouth
of the Loire and concentrating their Brest and Toulon fleets. However, two sea defeats prevented this. In August,
the Mediterranean fleet under Jean-François de La Clue-Sabran was scattered by a larger British fleet under Edward
Boscawen at the Battle of Lagos. In the Battle of Quiberon Bay on 20 November, the British admiral Edward Hawke with
23 ships of the line caught the French Brest fleet with 21 ships of the line under Marshal de Conflans and sank,
captured, or forced many of them aground, putting an end to the French plans. Despite this, the Austrians, under
the command of General Laudon, captured Glatz (now Kłodzko, Poland) in Silesia. In the Battle of Liegnitz Frederick
scored a strong victory despite being outnumbered three to one. The Russians under General Saltykov and Austrians
under General Lacy briefly occupied his capital, Berlin, in October, but could not hold it for long. The end of that
year saw Frederick once more victorious, defeating the able Daun in the Battle of Torgau; but he suffered very heavy
casualties, and the Austrians retreated in good order. 1762 brought two new countries into the war. Britain declared
war against Spain on 4 January 1762; Spain reacted by issuing its own declaration of war against Britain on 18
January. Portugal followed by joining the war on Britain's side. Spain, aided by the French, launched an invasion
of Portugal and succeeded in capturing Almeida. The arrival of British reinforcements stalled a further Spanish advance,
and the Battle of Valencia de Alcántara saw British-Portuguese forces overrun a major Spanish supply base. The invaders
were stopped on the heights in front of Abrantes (called the pass to Lisbon) where the Anglo-Portuguese were entrenched.
Eventually the Anglo-Portuguese army, aided by guerrillas and practicing a scorched earth strategy, chased the greatly
reduced Franco-Spanish army back to Spain, recovering almost all the lost towns, among them the Spanish headquarters
in Castelo Branco full of wounded and sick that had been left behind. On the eastern front, progress was very slow.
The Russian army was heavily dependent upon its main magazines in Poland, and the Prussian army launched several
successful raids against them. One of them, led by general Platen in September resulted in the loss of 2,000 Russians,
mostly captured, and the destruction of 5,000 wagons. Deprived of men, the Prussians had to resort to this new sort
of warfare, raiding, to delay the advance of their enemies. Nonetheless, at the end of the year, they suffered two
critical setbacks. The Russians under Zakhar Chernyshev and Pyotr Rumyantsev stormed Kolberg in Pomerania, while
the Austrians captured Schweidnitz. The loss of Kolberg cost Prussia its last port on the Baltic Sea. In Britain,
it was speculated that a total Prussian collapse was now imminent. Britain now threatened to withdraw its subsidies
if Prussia did not consider offering concessions to secure peace. As the Prussian armies had dwindled to just 60,000
men and with Berlin itself under siege, Frederick's survival was severely threatened. Then on 5 January 1762 the
Russian Empress Elizabeth died. Her Prussophile successor, Peter III, at once recalled Russian armies from Berlin
(see: the Treaty of Saint Petersburg (1762)) and mediated Frederick's truce with Sweden. He also placed a corps of
his own troops under Frederick's command. This turn of events has become known as the Miracle of the House of Brandenburg.
Frederick was then able to muster a larger army of 120,000 men and concentrate it against Austria. He drove them
from much of Saxony, while his brother Henry won a victory in Silesia in the Battle of Freiberg (29 October 1762).
At the same time, his Brunswick allies captured the key town of Göttingen and compounded this by taking Cassel. By
1763, the war in Central Europe was essentially a stalemate. Frederick had retaken most of Silesia and Saxony but
not the latter's capital, Dresden. His financial situation was not dire, but his kingdom was devastated and his army
severely weakened. His manpower had dramatically decreased, and he had lost so many effective officers and generals
that a new offensive was perhaps impossible. British subsidies had been stopped by the new Prime Minister Lord Bute,
and the Russian Emperor had been overthrown by his wife, Catherine, who ended Russia's alliance with Prussia and
withdrew from the war. Austria, however, like most participants, was facing a severe financial crisis and had to
decrease the size of its army, something which greatly affected its offensive power. Indeed, after having effectively
sustained a long war, its administration was in disarray. By that time, it still held Dresden, the southeastern parts
of Saxony, the county of Glatz, and southern Silesia, but the prospect of victory was dim without Russian support.
In 1763 a peace settlement was reached at the Treaty of Hubertusburg, ending the war in central Europe. Despite the
debatable strategic success and the operational failure of the descent on Rochefort, William Pitt—who saw purpose
in this type of asymmetric enterprise—prepared to continue such operations. An army was assembled under the command
of Charles Spencer, 3rd Duke of Marlborough; he was aided by Lord George Sackville. The naval squadron and transports
for the expedition were commanded by Richard Howe. The army landed on 5 June 1758 at Cancalle Bay, proceeded to St.
Malo, and, finding that it would take a prolonged siege to capture it, instead attacked the nearby port of St. Servan.
It burned shipping in the harbor, roughly 80 French privateers and merchantmen, as well as four warships which were
under construction. The force then re-embarked under threat of the arrival of French relief forces. An attack on
Havre de Grace was called off, and the fleet sailed on to Cherbourg; the weather being bad and provisions low, that
too was abandoned, and the expedition returned having damaged French privateering and provided further strategic
demonstration against the French coast. Pitt now prepared to send troops into Germany; and both Marlborough and Sackville,
disgusted by what they perceived as the futility of the "descents", obtained commissions in that army. The elderly
General Bligh was appointed to command a new "descent", escorted by Howe. The campaign began propitiously with the
Raid on Cherbourg. Covered by naval bombardment, the army drove off the French force detailed to oppose their landing,
captured Cherbourg, and destroyed its fortifications, docks, and shipping. The troops were reembarked and moved to
the Bay of St. Lunaire in Brittany where, on 3 September, they were landed to operate against St. Malo; however,
this action proved impractical. Worsening weather forced the two armies to separate: the ships sailed for the safer
anchorage of St. Cast, while the army proceeded overland. The tardiness of Bligh in moving his forces allowed a French
force of 10,000 from Brest to catch up with him and open fire on the re-embarking troops. A rear-guard of 1,400
under General Dury held off the French while the rest of the army embarked. The rear-guard could not be saved; 750 men, including
Dury, were killed and the rest captured. Great Britain lost Minorca in the Mediterranean to the French in 1756 but
captured the French colonies in Senegal in 1758. The British Royal Navy took the French sugar colonies of Guadeloupe
in 1759 and Martinique in 1762 as well as the Spanish cities of Havana in Cuba, and Manila in the Philippines, both
prominent Spanish colonial cities. However, expansion into the hinterlands of both cities met with stiff resistance.
In the Philippines, the British were confined to Manila until their agreed-upon withdrawal at the war's end. During
the war, the Seven Nations of Canada were allied with the French. These were Native Americans of the Laurentian valley—the
Algonquin, the Abenaki, the Huron, and others. Although the Algonquin tribes and the Seven Nations were not directly
concerned with the fate of the Ohio River Valley, they had been victims of the Iroquois Confederation. The Iroquois
had encroached on Algonquin territory and pushed the Algonquins west beyond Lake Michigan. Therefore, the Algonquin
and the Seven Nations were interested in fighting against the Iroquois. Throughout New England, New York, and the
North-west, Native American tribes formed differing alliances with the major belligerents. The Iroquois, dominant
in what is now Upstate New York, sided with the British but did not play a large role in the war. In 1756 and 1757
the French captured forts Oswego and William Henry from the British. The latter victory was marred when France's
native allies broke the terms of capitulation and attacked the retreating British column, which was under French
guard, slaughtering and scalping soldiers and taking captive many men, women and children while the French refused
to protect their captives. French naval deployments in 1757 also successfully defended the key Fortress of Louisbourg
on Cape Breton Island, securing the seaward approaches to Quebec. British Prime Minister William Pitt's focus on
the colonies for the 1758 campaign paid off with the taking of Louisbourg, after French reinforcements were blocked
by the British naval victory in the Battle of Cartagena, and with the successful capture of Fort Duquesne and Fort Frontenac.
The British also continued the process of deporting the Acadian population with a wave of major operations against
Île Saint-Jean (present-day Prince Edward Island), the St. John River valley, and the Petitcodiac River valley. The
celebration of these successes was dampened by their embarrassing defeat in the Battle of Carillon (Ticonderoga),
in which 4,000 French troops repulsed 16,000 British. All of Britain's campaigns against New France succeeded in
1759, part of what became known as an Annus Mirabilis. Fort Niagara and Fort Carillon on 8 July 1759 fell to sizable
British forces, cutting off French frontier forts further west. On 13 September 1759, following a three-month siege
of Quebec, General James Wolfe defeated the French on the Plains of Abraham outside the city. The French staged a
counteroffensive in the spring of 1760, with initial success at the Battle of Sainte-Foy, but they were unable to
retake Quebec, due to British naval superiority following the battle of Neuville. The French forces retreated to
Montreal, where on 8 September they surrendered to overwhelming British numerical superiority. In 1762, towards the
end of the war, French forces attacked St. John's, Newfoundland. If successful, the expedition would have strengthened
France's hand at the negotiating table. Although they took St. John's and raided nearby settlements, the French forces
were eventually defeated by British troops at the Battle of Signal Hill. This was the final battle of the war in
North America, and it forced the French to surrender to Lieutenant Colonel William Amherst. The victorious British
now controlled all of eastern North America. The history of the Seven Years' War in North America, particularly the
expulsion of the Acadians, the siege of Quebec, the death of Wolfe, and the Battle of Fort William Henry generated
a vast number of ballads, broadsides, images, and novels (see Longfellow's Evangeline, Benjamin West's The Death
of General Wolfe, James Fenimore Cooper's The Last of the Mohicans), maps and other printed materials, which testify
to how this event held the imagination of the British and North American public long after Wolfe's death in 1759.
The Anglo-French hostilities were ended in 1763 by the Treaty of Paris, which involved a complex series of land exchanges,
the most important being France's cession to Spain of Louisiana, and to Great Britain the rest of New France except
for the islands of St. Pierre and Miquelon. Faced with the choice of retrieving either New France or its Caribbean
island colonies of Guadeloupe and Martinique, France chose the latter to retain these lucrative sources of sugar,
writing off New France as an unproductive, costly territory. France also returned Minorca to the British. Spain lost
control of Florida to Great Britain, but it received from the French the Île d'Orléans and all of the former French
holdings west of the Mississippi River. The exchanges suited the British as well, as their own Caribbean islands
already supplied ample sugar, and, with the acquisition of New France and Florida, they now controlled all of North
America east of the Mississippi. In India, the British retained the Northern Circars, but returned all the French
trading ports. The treaty, however, required that the fortifications of these settlements be destroyed and never
rebuilt, while only minimal garrisons could be maintained there, thus rendering them worthless as military bases.
Combined with the loss of France's ally in Bengal and the defection of Hyderabad to the British as a result of the
war, this effectively brought French power in India to an end, making way for British hegemony and eventual control
of the subcontinent. The Treaty of Hubertusburg, between Austria, Prussia, and Saxony, was signed on February 15,
1763, at a hunting lodge between Dresden and Leipzig. Negotiations had started there on December 31, 1762. Frederick,
who had considered ceding East Prussia to Russia if Peter III helped him secure Saxony, finally insisted on excluding
Russia (in fact, no longer a belligerent) from the negotiations. At the same time, he refused to evacuate Saxony
until its elector had renounced any claim to reparation. The Austrians wanted at least to retain Glatz, which they
had in fact reconquered, but Frederick would not allow it. The treaty simply restored the status quo of 1748, with
Silesia and Glatz reverting to Frederick and Saxony to its own elector. The only concession that Prussia made to
Austria was to consent to the election of Archduke Joseph as Holy Roman emperor. Austria was not able to retake Silesia
or make any significant territorial gain. However, it did prevent Prussia from invading parts of Saxony. More significantly,
its military performance proved far better than during the War of the Austrian Succession and seemed to vindicate
Maria Theresa's administrative and military reforms. Hence, Austria's prestige was restored in great part and the
empire secured its position as a major player in the European system. Also, by promising to vote for Joseph II in
the Imperial elections, Frederick II accepted the Habsburg preeminence in the Holy Roman Empire. The survival of
Prussia as a first-rate power and the enhanced prestige of its king and its army, however, were potentially damaging
in the long run to Austria's influence in Germany. Not only that, Austria now found itself estranged from the new
developments within the empire itself. Besides the rise of Prussia, Augustus III, although ineffective, could muster
an army not only from Saxony but also from Poland, since the elector was also the King of Poland. Bavaria's
growing power and independence were also apparent, as it had a greater say in the course its army should take
and managed to withdraw from the war at will. Most importantly, Hanover, now united in personal union
under George III of Great Britain, could amass considerable power and even draw Britain into
future conflicts. This power dynamic was important to the later conflicts of the empire. The war also
proved that Maria Theresa's reforms were still not enough to compete with Prussia: unlike its enemy, the Austrians
went almost bankrupt at the end of war. Hence, she dedicated the next two decades to the consolidation of her administration.
Prussia emerged from the war as a great power whose importance could no longer be challenged. Frederick the Great’s
personal reputation was enormously enhanced, as his debt to fortune (Russia’s volte-face after Elizabeth’s death)
and to the British subsidy were soon forgotten while the memory of his energy and his military genius was strenuously
kept alive. Russia, on the other hand, made one great invisible gain from the war: the elimination of French influence
in Poland. The First Partition of Poland (1772) was to be a Russo-Prussian transaction, with Austria only reluctantly
involved and with France simply ignored. The British government was close to bankruptcy, and Britain now faced the
delicate task of pacifying its new French-Canadian subjects as well as the many American Indian tribes who had supported
France. George III's Proclamation of 1763, which forbade white settlement beyond the crest of the Appalachians, was
intended to appease the latter but led to considerable outrage in the Thirteen Colonies, whose inhabitants were eager
to acquire native lands. The Quebec Act of 1774, similarly intended to win over the loyalty of French Canadians,
also spurred resentment among American colonists. The act protected Catholic religion and French language, which
enraged the Americans, but the Québécois remained loyal and did not rebel. The war had also brought to an end the
"Old System" of alliances in Europe. In the years after the war, under the direction of Lord Sandwich, the British
did try to re-establish this system. But after Britain's surprising success against a coalition of great powers,
European states such as Austria, the Dutch Republic, Sweden, Denmark-Norway, the Ottoman Empire, and Russia now saw Britain
as a greater threat than France and did not join it, while the Prussians were angered by what they considered a
British betrayal in 1762. Consequently, when the American War of Independence turned into a global war between 1778 and 1783,
Britain found itself opposed by a strong coalition of European powers, and lacking any substantial ally.
Richard Phillips Feynman (/ˈfaɪnmən/; May 11, 1918 – February 15, 1988) was an American theoretical physicist known for his
work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics
of the superfluidity of supercooled liquid helium, as well as in particle physics for which he proposed the parton
model. For his contributions to the development of quantum electrodynamics, Feynman, jointly with Julian Schwinger
and Sin-Itiro Tomonaga, received the Nobel Prize in Physics in 1965. He developed a widely used pictorial representation
scheme for the mathematical expressions governing the behavior of subatomic particles, which later became known as
Feynman diagrams. During his lifetime, Feynman became one of the best-known scientists in the world. In a 1999 poll
of 130 leading physicists worldwide by the British journal Physics World he was ranked as one of the ten greatest
physicists of all time. Feynman was a keen popularizer of physics through both books and lectures, including a 1959
talk on top-down nanotechnology called There's Plenty of Room at the Bottom, and the three-volume publication of
his undergraduate lectures, The Feynman Lectures on Physics. Feynman also became known through his semi-autobiographical
books Surely You're Joking, Mr. Feynman! and What Do You Care What Other People Think? and books written about him,
such as Tuva or Bust! and Genius: The Life and Science of Richard Feynman by James Gleick. Richard Phillips Feynman
was born on May 11, 1918, in Queens, New York City, the son of Lucille (née Phillips), a homemaker, and Melville
Arthur Feynman, a sales manager. His family originated from Russia and Poland; both of his parents were Ashkenazi
Jews. They were not religious, and by his youth Feynman described himself as an "avowed atheist". Later in his life,
during a visit to the Jewish Theological Seminary, he encountered the Talmud for the first time, and remarked that
he found it a "wonderful book" and "valuable". The young Feynman was heavily influenced by his father, who encouraged
him to ask questions to challenge orthodox thinking, and who was always ready to teach Feynman something new. From
his mother he gained the sense of humor that he had throughout his life. As a child, he had a talent for engineering,
maintained an experimental laboratory in his home, and delighted in repairing radios. When he was in grade school,
he created a home burglar alarm system while his parents were out for the day running errands. When Richard was five
years old, his mother gave birth to a younger brother, but this brother died at four weeks of age. Four years later,
Richard gained a sister, Joan, and the family moved to Far Rockaway, Queens. Though separated by nine years, Joan
and Richard were close, as they both shared a natural curiosity about the world. Their mother, however, thought that women
did not have the cranial capacity to comprehend such things. Despite their mother's disapproval of Joan's desire
to study astronomy, Richard encouraged his sister to explore the universe. Joan eventually became an astrophysicist
specializing in interactions between the Earth and the solar wind. Feynman attended Far Rockaway High School, a school
in Far Rockaway, Queens also attended by fellow Nobel laureates Burton Richter and Baruch Samuel Blumberg. Upon starting
high school, Feynman was quickly promoted into a higher math class. An unspecified school-administered IQ test estimated
his IQ at 123—high, but "merely respectable" according to biographer James Gleick. When he turned 15, he taught himself
trigonometry, advanced algebra, infinite series, analytic geometry, and both differential and integral calculus.
In high school he was developing the mathematical intuition behind his Taylor series of mathematical operators. Before
entering college, he was experimenting with and deriving mathematical topics such as the half-derivative using his
own notation. He attained a perfect score on the graduate school entrance exams to Princeton University in mathematics
and physics—an unprecedented feat—but did rather poorly on the history and English portions. Attendees at Feynman's
first seminar included Albert Einstein, Wolfgang Pauli, and John von Neumann. He received a PhD from Princeton in
1942; his thesis advisor was John Archibald Wheeler. Feynman's thesis applied the principle of stationary action
to problems of quantum mechanics, inspired by a desire to quantize the Wheeler–Feynman absorber theory of electrodynamics,
laying the groundwork for the "path integral" approach and Feynman diagrams, and was titled "The Principle of Least
Action in Quantum Mechanics". At Princeton, the physicist Robert R. Wilson encouraged Feynman to participate in the
Manhattan Project—the wartime U.S. Army project at Los Alamos developing the atomic bomb. Feynman said he was persuaded
to join the effort to build the bomb before Nazi Germany developed its own. He was assigned to Hans Bethe's theoretical
division and impressed Bethe enough to be made a group leader. He and Bethe developed the Bethe–Feynman formula for
calculating the yield of a fission bomb, which built upon previous work by Robert Serber. He immersed himself in
work on the project, and was present at the Trinity bomb test. Feynman claimed to be the only person to see the explosion
without the very dark glasses or welder's lenses provided, reasoning that it was safe to look through a truck windshield,
as it would screen out the harmful ultraviolet radiation. On witnessing the blast, Feynman ducked towards the floor
of his truck because of the immense brightness of the explosion, where he saw a temporary "purple splotch" afterimage
of the event. Feynman's other work at Los Alamos included calculating neutron equations for the Los Alamos "Water
Boiler", a small nuclear reactor, to measure how close an assembly of fissile material was to criticality. On completing
this work he was transferred to the Oak Ridge facility, where he aided engineers in devising safety procedures for
material storage so that criticality accidents (for example, due to sub-critical amounts of fissile material inadvertently
stored in proximity on opposite sides of a wall) could be avoided. He also did theoretical work and calculations
on the proposed uranium hydride bomb, which later proved not to be feasible. Due to the top secret nature of the
work, Los Alamos was isolated. In Feynman's own words, "There wasn't anything to do there." Bored, he indulged his
curiosity by learning to pick the combination locks on cabinets and desks used to secure papers. Feynman played many
jokes on colleagues. In one case he found the combination to a locked filing cabinet by trying the numbers he thought
a physicist would use (it proved to be 27–18–28 after the base of natural logarithms, e = 2.71828...), and found
that the three filing cabinets where a colleague kept a set of atomic bomb research notes all had the same combination.
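The digit trick behind that guess is easy to verify: the first six significant digits of e, taken two at a time, give exactly 27–18–28. A minimal sketch (variable names are illustrative):

```python
import math

# e = 2.71828..., so formatting to five decimal places and dropping the
# decimal point leaves the six digits "271828".
digits = f"{math.e:.5f}".replace(".", "")

# Split the digit string into three two-digit groups: the combination.
combo = [digits[i:i + 2] for i in range(0, 6, 2)]
print(combo)  # ['27', '18', '28']
```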
He left a series of notes in the cabinets as a prank, which initially spooked his colleague, Frederic de Hoffmann,
into thinking a spy or saboteur had gained access to atomic bomb secrets. On several occasions, Feynman drove to
Albuquerque to see his ailing wife in a car borrowed from Klaus Fuchs, who was later discovered to be a real spy
for the Soviets, transporting nuclear secrets in his car to Santa Fe. Feynman alludes to his thoughts on the justification
for getting involved in the Manhattan project in The Pleasure of Finding Things Out. He felt the possibility of Nazi
Germany developing the bomb before the Allies was a compelling reason to help with its development for the U.S. He
goes on to say that it was an error on his part not to reconsider the situation once Germany was defeated. In the
same publication, Feynman also talks about his worries in the atomic bomb age, feeling for some considerable time
that there was a high risk that the bomb would be used again soon, so that it was pointless to build for the future.
Later he describes this period as a "depression". Following the completion of his PhD in 1942, Feynman held an appointment
at the University of Wisconsin–Madison as an assistant professor of physics. The appointment was spent on leave for
his involvement in the Manhattan project. In 1945, he received a letter from Dean Mark Ingraham of the College of
Letters and Science requesting his return to UW to teach in the coming academic year. His appointment was not extended
when he did not commit to return. In a talk given several years later at UW, Feynman quipped, "It's great to be back
at the only university that ever had the good sense to fire me." After the war, Feynman declined an offer from the
Institute for Advanced Study in Princeton, New Jersey, despite the presence there of such distinguished faculty members
as Albert Einstein, Kurt Gödel and John von Neumann. Feynman followed Hans Bethe, instead, to Cornell University,
where Feynman taught theoretical physics from 1945 to 1950. During a temporary depression following the destruction
of Hiroshima by the bomb produced by the Manhattan Project, he focused on complex physics problems, not for utility,
but for self-satisfaction. One of these was analyzing the physics of a twirling, nutating dish as it is moving through
the air. His work during this period, which used equations of rotation to express various spinning speeds, proved
important to his Nobel Prize–winning work, yet because he felt burned out and had turned his attention to less immediately
practical problems, he was surprised by the offers of professorships from other renowned universities. Despite yet
another offer from the Institute for Advanced Study, Feynman rejected the Institute on the grounds that there were
no teaching duties: Feynman felt that students were a source of inspiration and teaching was a diversion during uncreative
spells. Because of this, the Institute for Advanced Study and Princeton University jointly offered him a package
whereby he could teach at the university and also be at the institute. Feynman instead accepted
an offer from the California Institute of Technology (Caltech)—and as he says in his book Surely You're Joking, Mr.
Feynman!—because a desire to live in a mild climate had firmly fixed itself in his mind while he was installing tire
chains on his car in the middle of a snowstorm in Ithaca. Feynman has been called the "Great Explainer". He gained
a reputation for taking great care when giving explanations to his students and for making it a moral duty to make
the topic accessible. His guiding principle was that, if a topic could not be explained in a freshman lecture, it
was not yet fully understood. Feynman gained great pleasure from coming up with such a "freshman-level" explanation,
for example, of the connection between spin and statistics. What he said was that groups of particles with spin ½
"repel", whereas groups with integer spin "clump". This was a brilliantly simplified way of demonstrating how Fermi–Dirac
statistics and Bose–Einstein statistics evolved as a consequence of studying how fermions and bosons behave under
a rotation of 360°. This was also a question he pondered in his more advanced lectures, and to which he demonstrated
the solution in the 1986 Dirac memorial lecture. In the same lecture, he further explained that antiparticles must
exist, for if particles had only positive energies, they would not be restricted to a so-called "light cone". He
also developed Feynman diagrams, a bookkeeping device that helps in conceptualizing and calculating interactions
between particles in spacetime, including the interactions between electrons and their antimatter counterparts, positrons.
This device allowed him, and later others, to approach time reversibility and other fundamental processes. Feynman's
mental picture for these diagrams started with the hard sphere approximation, and the interactions could be thought
of as collisions at first. It was not until decades later that physicists thought of analyzing the nodes of the Feynman
diagrams more closely. Feynman painted Feynman diagrams on the exterior of his van. From his diagrams of a small
number of particles interacting in spacetime, Feynman could then model all of physics in terms of the spins of those
particles and the range of coupling of the fundamental forces. Feynman attempted an explanation of the strong interactions
governing nucleon scattering, called the parton model. The parton model emerged as a complement to the quark model
developed by his Caltech colleague Murray Gell-Mann. The relationship between the two models was murky; Gell-Mann
referred to Feynman's partons derisively as "put-ons". In the mid-1960s, physicists believed that quarks were just
a bookkeeping device for symmetry numbers, not real particles, as the statistics of the Omega-minus particle, if
it were interpreted as three identical strange quarks bound together, seemed impossible if quarks were real. The
Stanford linear accelerator deep inelastic scattering experiments of the late 1960s showed, analogously to Ernest
Rutherford's experiment of scattering alpha particles on gold nuclei in 1911, that nucleons (protons and neutrons)
contained point-like particles that scattered electrons. It was natural to identify these with quarks, but Feynman's
parton model attempted to interpret the experimental data in a way that did not introduce additional hypotheses.
For example, the data showed that some 45% of the energy-momentum was carried by electrically neutral particles in
the nucleon. These electrically neutral particles are now seen to be the gluons that carry the forces between the
quarks and carry also the three-valued color quantum number that solves the Omega-minus problem. Feynman did not
dispute the quark model; for example, when the fifth quark was discovered in 1977, Feynman immediately pointed out
to his students that the discovery implied the existence of a sixth quark, which was discovered in the decade after
his death. After the success of quantum electrodynamics, Feynman turned to quantum gravity. By analogy with the photon,
which has spin 1, he investigated the consequences of a free massless spin 2 field, and derived the Einstein field
equation of general relativity, but little more. The computational device that Feynman discovered then for gravity,
"ghosts", which are "particles" in the interior of his diagrams that have the "wrong" connection between spin and
statistics, have proved invaluable in explaining the quantum particle behavior of the Yang–Mills theories, for example,
QCD and the electro-weak theory. Feynman was elected a Foreign Member of the Royal Society (ForMemRS) in 1965. At
this time in the early 1960s, Feynman exhausted himself by working on multiple major projects at the same time, including
a request, while at Caltech, to "spruce up" the teaching of undergraduates. After three years devoted to the task,
he produced a series of lectures that eventually became The Feynman Lectures on Physics. He wanted a picture of a
drumhead sprinkled with powder to show the modes of vibration at the beginning of the book. Concerned over the connections
to drugs and rock and roll that could be made from the image, the publishers changed the cover to plain red, though
they included a picture of him playing drums in the foreword. The Feynman Lectures on Physics occupied two physicists,
Robert B. Leighton and Matthew Sands, as part-time co-authors for several years. Even though the books were not adopted
by most universities as textbooks, they continue to sell well because they provide a deep understanding of physics.
Many of his lectures and miscellaneous talks were turned into other books, including The Character of Physical Law,
QED: The Strange Theory of Light and Matter, Statistical Mechanics, Lectures on Gravitation, and the Feynman Lectures
on Computation. In 1974, Feynman delivered the Caltech commencement address on the topic of cargo cult science, which
has the semblance of science, but is only pseudoscience due to a lack of "a kind of scientific integrity, a principle
of scientific thought that corresponds to a kind of utter honesty" on the part of the scientist. He instructed the
graduating class that "The first principle is that you must not fool yourself—and you are the easiest person to fool.
So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists.
You just have to be honest in a conventional way after that." In the late 1980s, according to "Richard Feynman and
the Connection Machine", Feynman played a crucial role in developing the first massively parallel computer, and in
finding innovative uses for it in numerical computations, in building neural networks, as well as physical simulations
using cellular automata (such as turbulent fluid flow), working with Stephen Wolfram at Caltech. His son Carl also
played a role in the engineering of the original Connection Machine, with Feynman influencing the interconnects
while his son worked on the software. Feynman diagrams are now fundamental for string theory and M-theory, and have
even been extended topologically. The world-lines of the diagrams have developed to become tubes to allow better
modeling of more complicated objects such as strings and membranes. Shortly before his death, Feynman criticized
string theory in an interview: "I don't like that they're not calculating anything," he said. "I don't like that
they don't check their ideas. I don't like that for anything that disagrees with an experiment, they cook up an explanation—a
fix-up to say, 'Well, it still might be true.'" These words have since been much-quoted by opponents of the string-theoretic
direction for particle physics. Feynman devoted the latter half of his book What Do You Care What Other People Think?
to his experience on the Rogers Commission, straying from his usual convention of brief, light-hearted anecdotes
to deliver an extended and sober narrative. Feynman's account reveals a disconnect between NASA's engineers and executives
that was far more striking than he expected. His interviews of NASA's high-ranking managers revealed startling misunderstandings
of elementary concepts. For instance, NASA managers claimed that there was a 1 in 100,000 chance of a catastrophic
failure aboard the shuttle, but Feynman discovered that NASA's own engineers estimated the chance of a catastrophe
at closer to 1 in 200. He concluded that the space shuttle reliability estimate by NASA management was fantastically
unrealistic, and he was particularly angered that NASA used these figures to recruit Christa McAuliffe into the Teacher-in-Space
program. He warned in his appendix to the commission's report (which was included only after he threatened not to
sign the report), "For a successful technology, reality must take precedence over public relations, for nature cannot
be fooled." Although born to and raised by parents who were Ashkenazi, Feynman was not only an atheist, but declined
to be labelled Jewish. He routinely refused to be included in lists or books that classified people by race. He asked
to not be included in Tina Levitan's The Laureates: Jewish Winners of the Nobel Prize, writing, "To select, for approbation
the peculiar elements that come from some supposedly Jewish heredity is to open the door to all kinds of nonsense
on racial theory," and adding "... at thirteen I was not only converted to other religious views, but I also stopped
believing that the Jewish people are in any way 'the chosen people'." While pursuing his PhD at Princeton, Feynman
married his high school sweetheart, Arline Greenbaum (often misspelled "Arlene"), despite the knowledge that she
was seriously ill with tuberculosis—an incurable disease at the time. She died in 1945. In 1946, Feynman wrote a
letter to her, expressing his deep love and heartbreak, that he kept for the rest of his life. ("Please excuse my
not mailing this," the letter concluded, "but I don't know your new address.") This portion of Feynman's life was
portrayed in the 1996 film Infinity, which featured Feynman's daughter, Michelle, in a cameo role. Feynman had a
great deal of success teaching Carl, using, for example, discussions about ants and Martians as a device for gaining
perspective on problems and issues. He was surprised to learn that the same teaching devices were not useful with
Michelle. Mathematics was a common interest for father and son; they both entered the computer field as consultants
and were involved in advancing a new method of using multiple computers to solve complex problems—later known as
parallel computing. The Jet Propulsion Laboratory retained Feynman as a computational consultant during critical
missions. One co-worker characterized Feynman as akin to Don Quixote at his desk, rather than at a computer workstation,
ready to do battle with the windmills. Feynman traveled to Brazil, where he gave courses at the CBPF (Brazilian Center
for Physics Research) and near the end of his life schemed to visit the Russian land of Tuva, a dream that, because
of Cold War bureaucratic problems, never became reality. The day after he died, a letter arrived for him from the
Soviet government, giving him authorization to travel to Tuva. Feynman's daughter Michelle later made the journey
herself. Out of his enthusiastic interest in reaching Tuva came the phrase "Tuva or Bust" (also the title of a book
about his efforts to get there), which was tossed about frequently amongst his circle of friends in hope that they,
one day, could see it firsthand. The documentary movie, Genghis Blues, mentions some of his attempts to communicate
with Tuva and chronicles the successful journey there by his friends. Responding to Hubert Humphrey's congratulations
for his Nobel Prize, Feynman admitted to a long admiration for the then vice president. According to Genius, the
James Gleick-authored biography, Feynman tried LSD during his professorship at Caltech. Somewhat embarrassed by his
actions, he largely sidestepped the issue when dictating his anecdotes; he mentions it in passing in the "O Americano,
Outra Vez" section, while the "Altered States" chapter in Surely You're Joking, Mr. Feynman! describes only marijuana
and ketamine experiences at John Lilly's famed sensory deprivation tanks, as a way of studying consciousness. Feynman
gave up alcohol when he began to show vague, early signs of alcoholism, as he did not want to do anything that could
damage his brain—the same reason given in "O Americano, Outra Vez" for his reluctance to experiment with LSD. In
Surely You're Joking, Mr. Feynman!, he gives advice on the best way to pick up a girl in a hostess bar. At Caltech,
he used a nude or topless bar as an office away from his usual office, making sketches or writing physics equations
on paper placemats. When the county officials tried to close the place, all visitors except Feynman refused to testify
in favor of the bar, fearing that their families or patrons would learn about their visits. Only Feynman accepted,
and in court, he affirmed that the bar was a public need, stating that craftsmen, technicians, engineers, common
workers, "and a physics professor" frequented the establishment. While the bar lost the court case, it was allowed
to remain open as a similar case was pending appeal. In a 1992 New York Times article on Feynman and his legacy,
James Gleick recounts the story of how Murray Gell-Mann described what has become known as "The Feynman Algorithm"
or "The Feynman Problem-Solving Algorithm" to a student: "The student asks Gell-Mann about Feynman's notes. Gell-Mann
says no, Dick's methods are not the same as the methods used here. The student asks, well, what are Feynman's methods?
Gell-Mann leans coyly against the blackboard and says: Dick's method is this. You write down the problem. You think
very hard. (He shuts his eyes and presses his knuckles parodically to his forehead.) Then you write down the answer."
The Feynman Lectures on Physics is perhaps his most accessible work for anyone with an interest in physics, compiled
from lectures to Caltech undergraduates in 1961–64. As news of the lectures' lucidity grew, professional physicists
and graduate students began to drop in to listen. Co-authors Robert B. Leighton and Matthew Sands, colleagues of
Feynman, edited and illustrated them into book form. The work has endured and is useful to this day. They were edited
and supplemented in 2005 with "Feynman's Tips on Physics: A Problem-Solving Supplement to the Feynman Lectures on
Physics" by Michael Gottlieb and Ralph Leighton (Robert Leighton's son), with support from Kip Thorne and other physicists.
Muammar Muhammad Abu Minyar al-Gaddafi (Arabic: معمر محمد أبو منيار القذافي Arabic pronunciation: [muʕamar al.qaðaːfiː];
/ˈmoʊ.əmɑːr ɡəˈdɑːfi/; audio (help·info); c. 1942 – 20 October 2011), commonly known as Colonel Gaddafi,[b] was a
Libyan revolutionary, politician, and political theorist. He governed Libya as Revolutionary Chairman of the Libyan
Arab Republic from 1969 to 1977 and then as the "Brotherly Leader" of the Great Socialist People's Libyan Arab Jamahiriya
from 1977 to 2011. Initially ideologically committed to Arab nationalism and Arab socialism, he came to rule according
to his own Third International Theory before embracing Pan-Africanism and serving as Chairperson of the African Union
from 2009 to 2010. The son of an impoverished Bedouin goat herder, Gaddafi became involved in politics while at school
in Sabha, subsequently enrolling in the Royal Military Academy, Benghazi. Founding a revolutionary cell within the
military, he and his fellow officers seized power from the absolute monarchy of King Idris in a bloodless 1969 coup. Becoming Chairman
of the governing Revolutionary Command Council (RCC), Gaddafi abolished the monarchy and proclaimed the Republic.
Ruling by decree, he implemented measures to remove what he viewed as foreign imperialist influence from Libya, and
strengthened ties to Arab nationalist governments. Intent on pushing Libya towards "Islamic socialism", he introduced
sharia as the basis for the legal system and nationalized the oil industry, using the increased revenues to bolster
the military, implement social programs and fund revolutionary militants across the world. In 1973 he initiated a
"Popular Revolution" with the formation of General People's Committees (GPCs), purported to be a system of direct
democracy, but retained personal control over major decisions. He outlined his Third International Theory that year,
publishing these ideas in The Green Book. In 1977, Gaddafi dissolved the Republic and created a new socialist state,
the Jamahiriya ("state of the masses"). Officially adopting a symbolic role in governance, he retained power as military
commander-in-chief and head of the Revolutionary Committees responsible for policing and suppressing opponents. Overseeing
unsuccessful border conflicts with Egypt and Chad, Gaddafi's support for foreign militants and alleged responsibility
for the Lockerbie bombing led to Libya's label of "international pariah". A particularly hostile relationship developed
with the United States and United Kingdom, resulting in the 1986 U.S. bombing of Libya and United Nations-imposed
economic sanctions. Rejecting his earlier ideological commitments, from 1999 Gaddafi encouraged economic privatization
and sought rapprochement with Western nations, also embracing Pan-Africanism and helping to establish the African
Union. Amid the Arab Spring, in 2011 an anti-Gaddafist uprising led by the National Transitional Council (NTC) broke
out, resulting in the Libyan Civil War. NATO intervened militarily on the side of the NTC, bringing about the government's
downfall. Retreating to Sirte, Gaddafi was captured and killed by NTC militants. Muammar Gaddafi was born in a tent
near Qasr Abu Hadi, a rural area outside the town of Sirte in the deserts of western Libya. His family came from
a small, relatively un-influential tribal group called the Qadhadhfa, who were Arabized Berber in heritage. His father,
Mohammad Abdul Salam bin Hamed bin Mohammad, was known as Abu Meniar (died 1985), and his mother was named Aisha
(died 1978); Abu Meniar earned a meager subsistence as a goat and camel herder. Nomadic Bedouins, they were illiterate
and kept no birth records. As such, Gaddafi's date of birth is not known with certainty, and sources have set it
in 1942 or in the spring of 1943, although biographers Blundy and Lycett noted that it could have been pre-1940.
His parents' only surviving son, he had three older sisters. Gaddafi's upbringing in Bedouin culture influenced his
personal tastes for the rest of his life. He repeatedly expressed a preference for the desert over the city and retreated
to the desert to meditate. From childhood, Gaddafi was aware of the involvement of European colonialists in Libya;
his nation was occupied by Italy, and during the North African Campaign of World War II it witnessed conflict between
Italian and British troops. According to later claims, Gaddafi's paternal grandfather, Abdessalam Bouminyar, was
killed by the Italian Army during the Italian invasion of 1911. At World War II's end in 1945, Libya was occupied
by British and French forces. Although Britain and France intended on dividing the nation between their empires,
the General Assembly of the United Nations (UN) declared that the country be granted political independence. In 1951,
the UN created the United Kingdom of Libya, a federal state under the leadership of a pro-western monarch, Idris,
who banned political parties and established an absolute monarchy. Gaddafi's earliest education was of a religious
nature, imparted by a local Islamic teacher. Subsequently moving to nearby Sirte to attend elementary school, he
progressed through six grades in four years. Education in Libya was not free, but his father thought it would greatly
benefit his son despite the financial strain. During the week Gaddafi slept in a mosque, and at weekends walked 20
miles to visit his parents. Bullied for being a Bedouin, he was proud of his identity and encouraged pride in other
Bedouin children. From Sirte, he and his family moved to the market town of Sabha in Fezzan, south-central Libya,
where his father worked as a caretaker for a tribal leader while Muammar attended secondary school, something neither
parent had done. Gaddafi was popular at school; some friends made there received significant jobs in his later administration,
most notably his best friend Abdul Salam Jalloud. Gaddafi organized demonstrations and distributed posters criticizing
the monarchy. In October 1961, he led a demonstration protesting Syria's secession from the United Arab Republic.
During this they broke windows of a local hotel accused of serving alcohol. This caught the authorities' attention,
and they expelled his family from Sabha. Gaddafi moved to Misrata, there attending Misrata Secondary School. Maintaining
his interest in Arab nationalist activism, he refused to join any of the banned political parties active in the city
– including the Arab Nationalist Movement, the Arab Socialist Ba'ath Party, and the Muslim Brotherhood – claiming
he rejected factionalism. He read voraciously on the subjects of Nasser and the French Revolution of 1789, as well
as the works of Syrian political theorist Michel Aflaq and biographies of Abraham Lincoln, Sun Yat-sen, and Mustafa
Kemal Atatürk. Gaddafi briefly studied History at the University of Libya in Benghazi, before dropping out to join
the military. Despite his police record, in 1963 he began training at the Royal Military Academy, Benghazi, alongside
several like-minded friends from Misrata. The armed forces offered the only opportunity for upward social mobility
for underprivileged Libyans, and Gaddafi recognised it as a potential instrument of political change. Under Idris,
Libya's armed forces were trained by the British military; this angered Gaddafi, who viewed the British as imperialists,
and accordingly he refused to learn English and was rude to the British officers, ultimately failing his exams. British
trainers reported him for insubordination and abusive behaviour, stating their suspicion that he was involved in
the assassination of the military academy's commander in 1963. Such reports were ignored and Gaddafi quickly progressed through the course. In 1966, he was sent to Britain for further training, including a signals instructors' course at Bovington Camp in Dorset. The Bovington signal course's director reported that Gaddafi successfully overcame problems learning
English, displaying a firm command of voice procedure. Noting that Gaddafi's favourite hobbies were reading and playing
football, he thought him an "amusing officer, always cheerful, hard-working, and conscientious." Gaddafi disliked
England, claiming British Army officers racially insulted him and finding it difficult adjusting to the country's
culture; asserting his Arab identity in London, he walked around Piccadilly wearing traditional Libyan robes. He
later related that while he travelled to England believing it more advanced than Libya, he returned home "more confident
and proud of our values, ideals and social character." Many of Gaddafi's teachers at Sabha had been Egyptian, and there, for the first time, he had access to pan-Arab newspapers and radio broadcasts, most notably the Cairo-based Voice of the Arabs. Growing up, Gaddafi witnessed significant events that rocked the Arab world, including the 1948 Arab–Israeli War, the Egyptian Revolution of 1952, the Suez Crisis of 1956, and the short-lived existence of the United Arab Republic between 1958
and 1961. Gaddafi admired the political changes implemented in the Arab Republic of Egypt under his hero, President
Gamal Abdel Nasser. Nasser argued for Arab nationalism; the rejection of Western colonialism, neo-colonialism, and
Zionism; and a transition from capitalism to socialism. Nasser's book, Philosophy of the Revolution, was a key influence
on Gaddafi; outlining how to initiate a coup, it has been described as "the inspiration and blueprint of [Gaddafi's]
revolution." Idris' government was increasingly unpopular by the late 1960s; it had exacerbated Libya's traditional
regional and tribal divisions by centralising the country's federal system in order to take advantage of the country's
oil wealth, while corruption and entrenched systems of patronage were widespread throughout the oil industry. Arab
nationalism was increasingly popular, and protests flared up following Egypt's 1967 defeat in the Six-Day War with
Israel; allied to the western powers, Idris' administration was seen as pro-Israeli. Anti-western riots broke out
in Tripoli and Benghazi, while Libyan workers shut down oil terminals in solidarity with Egypt. By 1969, the U.S.
Central Intelligence Agency was expecting segments of Libya's armed forces to launch a coup. Although claims have been made that the agency knew of Gaddafi's Free Officers Movement, the CIA has since claimed ignorance, stating that it was instead monitoring Abdul Aziz Shalhi's Black Boots revolutionary group. In mid-1969, Idris travelled abroad to spend
the summer in Turkey and Greece. Gaddafi's Free Officers recognized this as their chance to overthrow the monarchy,
initiating "Operation Jerusalem". On 1 September, they occupied airports, police depots, radio stations and government
offices in Tripoli and Benghazi. Gaddafi took control of the Berka barracks in Benghazi, while Omar Meheisha occupied
Tripoli barracks and Jalloud seized the city's anti-aircraft batteries. Khweldi Hameidi was sent to arrest crown
prince Sayyid Hasan ar-Rida al-Mahdi as-Sanussi, and force him to relinquish his claim to the throne. They met no
serious resistance, and used little violence against the monarchists. Having removed the monarchical government,
Gaddafi proclaimed the foundation of the Libyan Arab Republic. Addressing the populace by radio, he proclaimed an
end to the "reactionary and corrupt" regime, "the stench of which has sickened and horrified us all." Due to the
coup's bloodless nature, it was initially labelled the "White Revolution", although it was later renamed the "One September
Revolution" after the date on which it occurred. Gaddafi insisted that the Free Officers' coup represented a revolution,
marking the start of widespread change in the socio-economic and political nature of Libya. He proclaimed that the
revolution meant "freedom, socialism, and unity", and over the coming years implemented measures to achieve this.
Although the Revolutionary Command Council (RCC) was theoretically a collegial body operating through consensus building, Gaddafi dominated it, though some of the other members attempted to constrain what they saw as his excesses. Gaddafi remained the government's public
face, with the identities of the other RCC members only being publicly revealed on 10 January 1970. All were young men from (typically rural) working- and middle-class backgrounds, and none had university degrees; in this way they were distinct
from the wealthy, highly educated conservatives who previously governed the country. The coup completed, the RCC
proceeded with their intentions of consolidating the revolutionary government and modernizing the country. They purged
monarchists and members of Idris' Senussi clan from Libya's political world and armed forces; Gaddafi believed this
elite were opposed to the will of the Libyan people and had to be expunged. "People's Courts" were founded to try
various monarchist politicians and journalists, and though many were imprisoned, none were executed. Idris was sentenced
to execution in absentia. In May 1970, the Revolutionary Intellectuals Seminar was held to bring intellectuals in
line with the revolution, while that year's Legislative Review and Amendment united secular and religious law codes,
introducing sharia into the legal system. Ruling by decree, the RCC maintained the monarchy's ban on political parties,
in May 1970 banned trade unions, and in 1972 outlawed workers' strikes and suspended newspapers. In September 1971,
Gaddafi resigned, claiming to be dissatisfied with the pace of reform, but returned to his position within a month.
In February 1973, he resigned again, once more returning the following month. With crude oil as the country's primary
export, Gaddafi sought to improve Libya's oil sector. In October 1969, he proclaimed the current trade terms unfair,
benefiting foreign corporations more than the Libyan state, and by threatening to reduce production, in December
Jalloud successfully increased the price of Libyan oil. In 1970, other OPEC states followed suit, leading to a global
increase in the price of crude oil. The RCC followed with the Tripoli Agreement, in which they secured income tax,
back-payments and better pricing from the oil corporations; these measures brought Libya an estimated $1 billion
in additional revenues in its first year. Increasing state control over the oil sector, the RCC began a program of
nationalization, starting with the expropriation of British Petroleum's share of the British Petroleum-N.B. Hunt
Sarir Field in December 1971. In September 1973, it was announced that all foreign oil producers active in Libya
were to be nationalized. For Gaddafi, this was an important step towards socialism. It proved an economic success;
while gross domestic product had been $3.8 billion in 1969, it had risen to $13.7 billion in 1974, and $24.5 billion
in 1979. In turn, Libyans' standard of living greatly improved over the first decade of Gaddafi's administration, and by 1979 the average per-capita income was $8,170, up from $40 in 1951; this was above the average of many
industrialized countries like Italy and the U.K. The RCC attempted to suppress regional and tribal affiliation, replacing
it with a unified pan-Libyan identity. In doing so, they tried discrediting tribal leaders as agents of the old regime,
and in August 1971 a Sabha military court tried many of them for counter-revolutionary activity. Long-standing administrative
boundaries were re-drawn, crossing tribal boundaries, while pro-revolutionary modernizers replaced traditional leaders,
but the communities they served often rejected them. Realizing the failures of the modernizers, Gaddafi created the
Arab Socialist Union (ASU), a mass mobilization vanguard party of which he was president. The ASU recognized the
RCC as its "Supreme Leading Authority", and was designed to further revolutionary enthusiasm throughout the country.
The RCC implemented measures for social reform, adopting sharia as a basis. The consumption of alcohol was banned,
night clubs and Christian churches were shut down, traditional Libyan dress was encouraged, while Arabic was decreed
as the only language permitted in official communications and on road signs. From 1969 to 1973, the RCC introduced
social welfare programs funded with oil money, which led to house-building projects and improved healthcare and education.
In doing so, they greatly expanded the public sector, providing employment for thousands. The influence of Nasser's
Arab nationalism over the RCC was immediately apparent. The administration was instantly recognized by the neighbouring
Arab nationalist regimes in Egypt, Syria, Iraq and Sudan, with Egypt sending experts to aid the inexperienced RCC.
Gaddafi propounded Pan-Arab ideas, proclaiming the need for a single Arab state stretching across North Africa and
the Middle East. In December 1969, Libya founded the Arab Revolutionary Front with Egypt and Sudan as a step towards
political unification, and in 1970 Syria stated its intention to join. After Nasser died in November 1970, his successor,
Anwar Sadat, suggested that rather than a unified state, they create a political federation, implemented in April
1971; in doing so, Egypt, Syria and Sudan got large grants of Libyan oil money. In February 1972, Gaddafi and Sadat
signed an unofficial charter of merger, but it was never implemented as relations broke down the following year.
Sadat became increasingly wary of Libya's radical direction, and the September 1973 deadline for implementing the
Federation passed by with no action taken. After the 1969 coup, representatives of the Four Powers – France, the
United Kingdom, the United States and the Soviet Union – were called to meet RCC representatives. The U.K. and U.S.
quickly extended diplomatic recognition, hoping to secure the position of their military bases in Libya and fearing
further instability. Hoping to ingratiate themselves with Gaddafi, in 1970 the U.S. informed him of at least one
planned counter-coup. Such attempts to form a working relationship with the RCC failed; Gaddafi was determined to
reassert national sovereignty and expunge what he described as foreign colonial and imperialist influences. His administration
insisted that the U.S. and U.K. remove their military bases from Libya, with Gaddafi proclaiming that "the armed
forces which rose to express the people's revolution [will not] tolerate living in their shacks while the bases of
imperialism exist in Libyan territory." The British left in March and the Americans in June 1970. Moving to reduce
Italian influence, in October 1970 all Italian-owned assets were expropriated and the 12,000-strong Italian community
expelled from Libya alongside a smaller number of Jews. The day became a national holiday. Aiming to reduce NATO
power in the Mediterranean, in 1971 Libya requested that Malta cease to allow NATO to use its land for a military
base, in turn offering them foreign aid. Compromising, Malta's government continued allowing NATO use of the island,
but only on the condition that they would not use it for launching attacks on Arab territory. Orchestrating a military
build-up, the RCC began purchasing weapons from France and the Soviet Union. The commercial relationship with the
latter led to an increasingly strained relationship with the U.S., which was then engaged in the Cold War with the
Soviets. Gaddafi's relationship with Palestinian leader Yasser Arafat of Fatah was strained, with Gaddafi considering him too moderate and calling for more violent action. Instead, he supported militias such as the Popular Front for the Liberation
of Palestine, Popular Front for the Liberation of Palestine – General Command, the Democratic Front for the Liberation
of Palestine, As-Sa'iqa, the Palestinian Popular Struggle Front, and the Abu Nidal Organization. He funded the Black
September Organization who perpetrated the 1972 Munich massacre of Israeli athletes in West Germany, and had the
killed militants' bodies flown to Libya for a hero's funeral. Gaddafi also welcomed the three surviving attackers
in Tripoli following their release in exchange for the hostages of hijacked Lufthansa Flight 615 a few weeks later
and allowed them to go into hiding. Gaddafi financially supported other militant groups across the world, including
the Black Panther Party, Nation of Islam, Tupamaros, 19th of April Movement and Sandinista National Liberation Front
in the Americas, the ANC among other liberation movements in the fight against Apartheid in South Africa, the Provisional
Irish Republican Army, ETA, Sardinian nationalists, Action directe, the Red Brigades, and the Red Army Faction in
Europe, and the Armenian Secret Army, Japanese Red Army, Free Aceh Movement, and Moro National Liberation Front in
Asia. Gaddafi was indiscriminate in the causes he funded, sometimes switching from supporting one side in a conflict
to the other, as in the Eritrean War of Independence. Throughout the 1970s these groups received financial support
from Libya, which came to be seen as a leader in the Third World's struggle against colonialism and neocolonialism.
Though many of these groups were labelled "terrorists" by critics of their activities, Gaddafi rejected such a characterisation,
instead considering them revolutionaries engaged in liberation struggles. On 16 April 1973, Gaddafi proclaimed the
start of a "Popular Revolution" in a Zuwarah speech. He initiated this with a five-point plan, the first point of which
dissolved all existing laws, to be replaced by revolutionary enactments. The second point proclaimed that all opponents
of the revolution had to be removed, while the third initiated an administrative revolution that Gaddafi proclaimed
would remove all traces of bureaucracy and the bourgeoisie. The fourth point announced that the population must form
People's Committees and be armed to defend the revolution, while the fifth proclaimed the beginning of a cultural
revolution to expunge Libya of "poisonous" foreign influences. He began to lecture on this new phase of the revolution
in Libya, Egypt, and France. As part of this Popular Revolution, Gaddafi invited Libya's people to found General
People's Committees as conduits for raising political consciousness. Although offering little guidance for how to
set up these councils, Gaddafi claimed that they would offer a form of direct political participation that was more
democratic than a traditional party-based representative system. He hoped that the councils would mobilize the people
behind the RCC, erode the power of the traditional leaders and the bureaucracy, and allow for a new legal system
chosen by the people. The People's Committees led to a high degree of public involvement in decision-making,
within the limits permitted by the RCC, but exacerbated tribal divisions. They also served as a surveillance system,
aiding the security services in locating individuals with views critical of the RCC, leading to the arrest of Ba'athists,
Marxists and Islamists. Operating in a pyramid structure, the base form of these Committees were local working groups,
who sent elected representatives to the district level, and from there to the national level, divided between the
General People's Congress and the General People's Committee. Above these remained Gaddafi and the RCC, who remained
responsible for all major decisions. In June 1973, Gaddafi created a political ideology as a basis for the Popular
Revolution. Third International Theory considered both the U.S. and the Soviet Union imperialist, and thus rejected Western capitalism as well as the atheism of Eastern bloc communism. In this respect it was similar to the Three Worlds Theory
developed by China's political leader Mao Zedong. As part of this theory, Gaddafi praised nationalism as a progressive
force and advocated the creation of a pan-Arab state which would lead the Islamic and Third Worlds against imperialism.
Gaddafi summarized Third International Theory in three short volumes published between 1975 and 1979, collectively
known as The Green Book. Volume one was devoted to the issue of democracy, outlining the flaws of representative
systems in favour of direct, participatory GPCs. The second dealt with Gaddafi's beliefs regarding socialism, while
the third explored social issues regarding the family and the tribe. While the first two volumes advocated radical
reform, the third adopted a socially conservative stance, proclaiming that while men and women were equal, they were
biologically designed for different roles in life. During the years that followed, Gaddafists adopted quotes from
The Green Book, such as "Representation is Fraud", as slogans. Meanwhile, in September 1975, Gaddafi implemented further measures to increase popular mobilization, introducing objectives to improve the relationship between the Councils and the ASU. That same month, he purged the army, arresting around 200 senior officers, and in October
he founded the clandestine Office for the Security of the Revolution. In 1976, student demonstrations broke out in
Tripoli and Benghazi, and were attacked by police and Gaddafist students. The RCC responded with mass arrests, and
introduced compulsory national service for young people. Dissent also arose from conservative clerics and the Muslim
Brotherhood, who were persecuted as anti-revolutionary. In January 1977, two dissenting students and a number of
army officers were publicly hanged; Amnesty International condemned the executions, noting that it was the first time in Gaddafist Libya that
dissenters had been executed for purely political crimes. Following Anwar Sadat's ascension to the Egyptian presidency,
Libya's relations with Egypt deteriorated. Sadat was perturbed by Gaddafi's unpredictability and insistence that
Egypt required a cultural revolution. In February 1973, Israeli forces shot down Libyan Arab Airlines Flight 114,
which had strayed from Egyptian airspace into Israeli-held territory during a sandstorm. Gaddafi was infuriated that
Egypt had not done more to prevent the incident, and in retaliation planned to destroy the RMS Queen Elizabeth 2,
a British ship chartered by American Jews to sail to Haifa for Israel's 25th anniversary. Gaddafi ordered an Egyptian
submarine to target the ship, but Sadat cancelled the order, fearing a military escalation. Gaddafi was later infuriated
when Egypt and Syria planned the Yom Kippur War against Israel without consulting him, and was angered when Egypt
conceded to peace talks rather than continuing the war. Gaddafi became openly hostile to Egypt's leader, calling
for Sadat's overthrow, and when Sudanese President Gaafar Nimeiry took Sadat's side, Gaddafi by 1975 sponsored the
Sudan People's Liberation Army to overthrow Nimeiry. Focusing his attention elsewhere in Africa, in late 1972 and
early 1973, Libya invaded Chad to annex the uranium-rich Aouzou Strip. Offering financial incentives, Gaddafi successfully convinced eight African states to break off diplomatic relations with Israel in 1973. Intent on propagating Islam, in
1973 Gaddafi founded the Islamic Call Society, which had opened 132 centres across Africa within a decade. In 1973
he converted Gabonese President Omar Bongo, an action which he repeated three years later with Jean-Bédel Bokassa,
president of the Central African Republic. Gaddafi sought to develop closer links in the Maghreb; in January 1974
Libya and Tunisia announced a political union, the Arab Islamic Republic. Although advocated by Gaddafi and Tunisian
President Habib Bourguiba, the move was deeply unpopular in Tunisia and soon abandoned. Retaliating, Gaddafi sponsored
anti-government militants in Tunisia into the 1980s. Turning his attention to Algeria, in 1975 Libya signed the Hassi
Messaoud defence agreement allegedly to counter "Moroccan expansionism", also funding the Polisario Front of Western
Sahara in their independence struggle against Morocco. Seeking to diversify Libya's economy, Gaddafi's government
began purchasing shares in major European corporations like Fiat as well as buying real estate in Malta and Italy,
which would become a valuable source of income during the 1980s oil slump. On 2 March 1977 the General People's Congress
adopted the "Declaration of the Establishment of the People's Authority" at Gaddafi's behest. Dissolving the Libyan
Arab Republic, it was replaced by the Great Socialist People's Libyan Arab Jamahiriya (Arabic: الجماهيرية العربية
الليبية الشعبية الاشتراكية, al-Jamāhīrīyah al-‘Arabīyah al-Lībīyah ash-Sha‘bīyah al-Ishtirākīyah), a "state of the
masses" conceptualized by Gaddafi. Officially, the Jamahiriya was a direct democracy in which the people ruled themselves
through the 187 Basic People's Congresses, where all adult Libyans participated and voted on national decisions.
These then sent members to the annual General People's Congress, which was broadcast live on television. In principle,
the People's Congresses were Libya's highest authority, with major decisions proposed by government officials or by Gaddafi himself requiring the consent of the People's Congresses. Debate remained limited, and major decisions
regarding the economy and defence were avoided or dealt with cursorily; the GPC largely remained "a rubber stamp"
for Gaddafi's policies. On rare occasions, the GPC opposed Gaddafi's suggestions, sometimes successfully; notably,
when Gaddafi called on primary schools to be abolished, believing that home schooling was healthier for children,
the GPC rejected the idea. In other instances, Gaddafi pushed through laws without the GPC's support, such as when
he desired to allow women into the armed forces. Gaddafi proclaimed that the People's Congresses provided for Libya's
every political need, rendering other political organizations unnecessary; all non-authorized groups, including political
parties, professional associations, independent trade unions and women's groups, were banned. With preceding legal
institutions abolished, Gaddafi envisioned the Jamahiriya as following the Qur'an for legal guidance, adopting sharia
law; he proclaimed "man-made" laws unnatural and dictatorial, only permitting Allah's law. Within a year he was backtracking,
announcing that sharia was inappropriate for the Jamahiriya because it guaranteed the protection of private property,
contravening The Green Book's socialism. His emphasis on placing his own work on a par with the Qur'an led conservative
clerics to accuse him of shirk, furthering their opposition to his regime. In July 1977, a border war broke out with Egypt, in which the Egyptians defeated Libya despite their own technological inferiority. The conflict lasted one week before
both sides agreed to sign a peace treaty that was brokered by several Arab states. That year, Gaddafi was invited
to Moscow by the Soviet government in recognition of their increasing commercial relationship. In December 1978,
Gaddafi stepped down as Secretary-General of the GPC, announcing his new focus on revolutionary rather than governmental
activities; this was part of his new emphasis on separating the apparatus of the revolution from the government.
Although no longer in a formal governmental post, he adopted the title of "Leader of the Revolution" and continued
as commander-in-chief of the armed forces. He continued exerting considerable influence over Libya, with many critics
insisting that the structure of Libya's direct democracy gave him "the freedom to manipulate outcomes". Libya began
to turn towards socialism. In March 1978, the government issued guidelines for housing redistribution, attempting to ensure that every adult Libyan owned his own home and that nobody was enslaved to paying rent. Most families were banned from owning more than one house, while former rental properties were seized and sold
to the tenants at a heavily subsidized price. In September, Gaddafi called for the People's Committees to eliminate
the "bureaucracy of the public sector" and the "dictatorship of the private sector"; the People's Committees took
control of several hundred companies, converting them into worker cooperatives run by elected representatives. On
2 March 1979, the GPC announced the separation of government and revolution, the latter being represented by new
Revolutionary Committees, who operated in tandem with the People's Committees in schools, universities, unions, the
police force and the military. Dominated by revolutionary zealots, the Revolutionary Committees were led by Mohammad
Maghgoub and a Central Coordinating Office, and met with Gaddafi annually. They published a weekly magazine, The Green March (al-Zahf al-Akhdar), and in October 1980 took control of the press. Responsible for perpetuating revolutionary
fervour, they performed ideological surveillance, later adopting a significant security role, making arrests and
putting people on trial according to the "law of the revolution" (qanun al-thawra). With no legal code or safeguards,
the administration of revolutionary justice was largely arbitrary and resulted in widespread abuses and the suppression
of civil liberties: the "Green Terror." In 1979, the committees began the redistribution of land in the Jefara plain,
continuing through 1981. In May 1980, measures to redistribute and equalize wealth were implemented; anyone with
over 1,000 dinars in his bank account saw the excess expropriated. The following year, the GPC announced that
the government would take control of all import, export and distribution functions, with state supermarkets replacing
privately owned businesses; this led to a decline in the availability of consumer goods and the development of a
thriving black market. The Jamahiriya's radical direction earned the government many enemies. In February 1978, Gaddafi
discovered that his head of military intelligence was plotting to kill him, and began to increasingly entrust security
to his Qaddadfa tribe. Many who had seen their wealth and property confiscated turned against the administration,
and a number of western-funded opposition groups were founded by exiles. Most prominent was the National Front for
the Salvation of Libya (NFSL), founded in 1981 by Mohammed Magariaf, which orchestrated militant attacks against
Libya's government, while another, al-Borkan, began killing Libyan diplomats abroad. Following Gaddafi's command
to kill these "stray dogs", under Colonel Younis Bilgasim's leadership, the Revolutionary Committees set up overseas
branches to suppress counter-revolutionary activity, assassinating various dissidents. Although nearby nations like
Syria also used hit squads, Gaddafi was unusual in publicly bragging about his administration's use of them; in June
1980, he ordered all dissidents to return home or be "liquidated wherever you are." In 1979, the U.S. placed Libya
on its list of "State Sponsors of Terrorism", while at the end of the year demonstrators torched the U.S. embassy
in Tripoli in solidarity with the perpetrators of the Iran hostage crisis. The following year, Libyan fighters began
intercepting U.S. fighter jets flying over the Mediterranean, signalling the collapse of relations between the two
countries. Libyan relations with Lebanon and Shi'ite communities across the world also deteriorated due to the August
1978 disappearance of imam Musa al-Sadr when visiting Libya; the Lebanese accused Gaddafi of having him killed or
imprisoned, a charge he denied. Relations with Syria improved, as Gaddafi and Syrian President Hafez al-Assad shared
an enmity with Israel and Egypt's Sadat. In 1980, they proposed a political union, with Libya paying off Syria's
£1 billion debt to the Soviet Union; although pressures led Assad to pull out, they remained allies. Another key
ally was Uganda, and in 1979, Gaddafi sent 2,500 troops into Uganda to defend the regime of President Idi Amin from
Tanzanian invaders. The mission failed; 400 Libyans were killed and they were forced to retreat. Gaddafi later came
to regret his alliance with Amin, openly criticising him. The early and mid-1980s saw economic trouble for Libya;
from 1982 to 1986, the country's annual oil revenues dropped from $21 billion to $5.4 billion. Focusing on irrigation projects, in 1983 the government began construction of Gaddafi's pet project, the Great Man-Made River; although designed to
be finished by the end of the decade, it remained incomplete at the start of the 21st century. Military spending
increased, while other administrative budgets were cut back. Libya had long supported the FROLINAT militia in neighbouring
Chad, and in December 1980, re-invaded Chad at the request of the FROLINAT-controlled GUNT government to aid in the
civil war; in January 1981, Gaddafi suggested a political merger. The Organisation of African Unity (OAU) rejected
this, and called for a Libyan withdrawal, which came about in November 1981. The civil war resumed, and so Libya
sent troops back in, clashing with French forces who supported the southern Chadian forces. Many African nations
had tired of Libya's policies of interference in foreign affairs; by 1980, nine African states had cut off diplomatic
relations with Libya, while in 1982 the OAU cancelled its scheduled conference in Tripoli in order to prevent Gaddafi
gaining chairmanship. Proposing political unity with Morocco, in August 1984, Gaddafi and Moroccan monarch Hassan
II signed the Oujda Treaty, forming the Arab-African Union; such a union was considered surprising due to the strong
political differences and longstanding enmity that existed between the two governments. Relations remained strained,
particularly due to Morocco's friendly relations with the U.S. and Israel; in August 1986, Hassan abolished the union.
Domestic threats continued to plague Gaddafi; in May 1984, his Bab al-Azizia home was unsuccessfully attacked by
a joint NFSL–Muslim Brotherhood militia, and in the aftermath 5,000 dissidents were arrested. In 1981, the new U.S.
President Ronald Reagan pursued a hard line approach to Libya, erroneously considering it a puppet regime of the
Soviet Union. In turn, Gaddafi played up his commercial relationship with the Soviets, visiting Moscow again in April
1981 and 1985, and threatening to join the Warsaw Pact. The Soviets were nevertheless cautious of Gaddafi, seeing
him as an unpredictable extremist. The U.S. began military exercises in the Gulf of Sirte – an area of sea that Libya claimed as a part of its territorial waters – and in August 1981 shot down two Libyan Su-22 planes monitoring
them. Closing down Libya's embassy in Washington, D.C., Reagan advised U.S. companies operating in the country to
reduce the number of American personnel stationed there. In March 1982, the U.S. implemented an embargo of Libyan
oil, and in January 1986 ordered all U.S. companies to cease operating in the country, although several hundred workers
remained. Diplomatic relations also broke down with the U.K., after Libyan diplomats were accused in the shooting
death of Yvonne Fletcher, a British policewoman stationed outside their London embassy, in April 1984. In spring
1986, the U.S. Navy again began performing exercises in the Gulf of Sirte; the Libyan military retaliated, but failed
as the U.S. sank several Libyan ships. After the U.S. accused Libya of orchestrating the 1986 Berlin discotheque
bombing, in which two American soldiers died, Reagan decided to retaliate militarily. The Central Intelligence Agency
was critical of the move, believing that Syria was a greater threat and that an attack would strengthen Gaddafi's reputation; however, Libya was recognised as a "soft target." Reagan was supported by the U.K. but opposed by other
European allies, who argued that it would contravene international law. In Operation El Dorado Canyon, orchestrated
on 15 April 1986, U.S. military planes launched a series of air-strikes on Libya, bombing military installations
in various parts of the country, killing around 100 Libyans, including several civilians. One of the targets had
been Gaddafi's home. Gaddafi himself was unharmed, but two of his sons were injured, and he claimed that his four-year-old adopted daughter Hanna had been killed, although her existence has since been questioned. In the immediate aftermath,
Gaddafi retreated to the desert to meditate, while there were sporadic clashes between Gaddafists and army officers
who wanted to overthrow the government. Although the U.S. was condemned internationally, Reagan received a popularity
boost at home. Publicly lambasting U.S. imperialism, Gaddafi's reputation as an anti-imperialist was strengthened
both domestically and across the Arab world, and in June 1986, he ordered the names of the month to be changed in
Libya. The late 1980s saw a series of liberalising economic reforms within Libya designed to cope with the decline
in oil revenues. In May 1987, Gaddafi announced the start of the "Revolution within a Revolution", which began with
reforms to industry and agriculture and saw the re-opening of small business. Restrictions were placed on the activities
of the Revolutionary Committees; in March 1988, their role was narrowed by the newly created Ministry for Mass Mobilization
and Revolutionary Leadership to restrict their violence and judicial role, while in August 1988 Gaddafi publicly
criticised them, asserting that "they deviated, harmed, tortured" and that "the true revolutionary does not practise
repression." In March, hundreds of political prisoners were freed, with Gaddafi falsely claiming that there were
no further political prisoners in Libya. In June, Libya's government issued the Great Green Charter on Human Rights
in the Era of the Masses, in which 27 articles laid out goals, rights and guarantees to improve the situation of
human rights in Libya, restricting the use of the death penalty and calling for its eventual abolition. Many of the
measures suggested in the charter would be implemented the following year, although others remained inactive. Also
in 1989, the government founded the Al-Gaddafi International Prize for Human Rights, to be awarded to figures from
the Third World who had struggled against colonialism and imperialism; the first year's winner was South African
anti-apartheid activist Nelson Mandela. From 1994 to 1997, the government initiated cleansing committees
to root out corruption, particularly in the economic sector. In the aftermath of the 1986 U.S. attack, the army was
purged of perceived disloyal elements, and in 1988, Gaddafi announced the creation of a popular militia to replace
the army and police. In 1987, Libya began production of mustard gas at a facility in Rabta, although publicly denying
it was stockpiling chemical weapons, and unsuccessfully attempted to develop nuclear weapons. The period also saw
a growth in domestic Islamist opposition, formulated into groups like the Muslim Brotherhood and the Libyan Islamic
Fighting Group. A number of assassination attempts against Gaddafi were foiled, and in turn, 1989 saw the security
forces raid mosques believed to be centres of counter-revolutionary preaching. In October 1993, elements of the increasingly
marginalised army initiated a failed coup in Misrata, while in September 1995, Islamists launched an insurgency in
Benghazi, and in July 1996 an anti-Gaddafist football riot broke out in Tripoli. The Revolutionary Committees experienced
a resurgence to combat these Islamists. In 1989, Gaddafi was overjoyed by the foundation of the Arab Maghreb Union,
uniting Libya in an economic pact with Mauritania, Morocco, Tunisia and Algeria, viewing it as the beginnings of a new
Pan-Arab union. Meanwhile, Libya stepped up its support for anti-western militants such as the Provisional IRA, and
in 1988, Pan Am Flight 103 was blown up over Lockerbie in Scotland, killing 243 passengers and 16 crew members, plus
11 people on the ground. British police investigations identified two Libyans – Abdelbaset al-Megrahi and Lamin Khalifah
Fhimah – as the chief suspects, and in November 1991 issued a declaration demanding that Libya hand them over. When
Gaddafi refused, citing the Montreal Convention, the United Nations (UN) passed Resolution 748 in March 1992, initiating
economic sanctions against Libya which had deep repercussions for the country's economy. The country suffered an
estimated $900 million financial loss as a result. Further problems arose with the west when in January 1989, two
Libyan warplanes were shot down by the U.S. off the Libyan coast. Many African states opposed the UN sanctions, with
Mandela criticising them on a visit to Gaddafi in October 1997, when he praised Libya for its work in fighting apartheid
and awarded Gaddafi the Order of Good Hope. The sanctions were only suspended in 1998, when Libya agreed to allow the extradition
of the suspects to the Scottish Court in the Netherlands, in a process overseen by Mandela. As the 20th century came
to a close, Gaddafi increasingly rejected Arab nationalism, frustrated by the failure of his Pan-Arab ideals; instead
he turned to Pan-Africanism, emphasising Libya's African identity. From 1997 to 2000, Libya initiated cooperative
agreements or bilateral aid arrangements with 10 African states, and in 1999 joined the Community of Sahel-Saharan
States. In June 1999, Gaddafi visited Mandela in South Africa, and the following month attended the OAU summit in
Algiers, calling for greater political and economic integration across the continent and advocating the foundation
of a United States of Africa. He became one of the founders of the African Union (AU), initiated in July 2002 to
replace the OAU; at the opening ceremonies, he proclaimed that African states should reject conditional aid from
the developed world, a direct contrast to the message of South African President Thabo Mbeki. At the third AU summit,
held in Libya in July 2005, he called for a greater level of integration, advocating a single AU passport, a common
defence system and a single currency, utilising the slogan: "The United States of Africa is the hope." In June 2005,
Libya joined the Common Market for Eastern and Southern Africa (COMESA), and in August 2008 Gaddafi was proclaimed
"King of Kings" by an assembled committee of traditional African leaders. On 1 February 2009, his "coronation ceremony"
was held in Addis Ababa, Ethiopia, coinciding with Gaddafi's election as AU chairman for a year. The era saw Libya's
return to the international arena. In 1999, Libya began secret talks with the British government to normalise relations.
In 2001, Gaddafi condemned the September 11 attacks on the U.S. by al-Qaeda, expressing sympathy with the victims
and calling for Libyan involvement in the War on Terror against militant Islamism. His government continued suppressing
domestic Islamism, at the same time as Gaddafi called for the wider application of sharia law. Libya also cemented
connections with China and North Korea, being visited by Chinese President Jiang Zemin in April 2002. Influenced
by the events of the Iraq War, in December 2003, Libya renounced its possession of weapons of mass destruction, decommissioning
its chemical and nuclear weapons programs. Relations with the U.S. improved as a result, while UK Prime Minister
Tony Blair met with Gaddafi in the Libyan desert in March 2004. The following month, Gaddafi travelled to the headquarters
of the European Union (EU) in Brussels, signifying improved relations between Libya and the EU, the latter ending
its remaining sanctions in October. In October 2010, the EU paid Libya €50 million to stop African migrants passing
into Europe; Gaddafi encouraged the move, saying that it was necessary to prevent the loss of European cultural identity
to a new "Black Europe". Removed from the U.S. list of state sponsors of terrorism in 2006, Gaddafi nevertheless
continued his anti-western rhetoric, and at the Second Africa-South America Summit in Venezuela in September 2009,
joined Venezuelan President Hugo Chávez in calling for an "anti-imperialist" front across Africa and Latin America.
Gaddafi proposed the establishment of a South Atlantic Treaty Organization to rival NATO. That month he also addressed
the United Nations General Assembly in New York for the first time, using it to condemn "western aggression". In
spring 2010, Gaddafi proclaimed jihad against Switzerland after Swiss police accused two of his family members of
criminal activity in the country, resulting in the breakdown of bilateral relations. Libya's economy witnessed increasing
privatization; although the reforms rejected the socialist policies of nationalized industry advocated in The Green Book, government figures asserted that they were forging "people's socialism" rather than capitalism. Gaddafi welcomed these reforms,
calling for wide-scale privatization in a March 2003 speech. In 2003, the oil industry was largely sold to private
corporations, and by 2004, there was $40 billion of direct foreign investment in Libya, a sixfold rise over 2003.
Sectors of Libya's population reacted against these reforms with public demonstrations, and in March 2006, revolutionary
hard-liners took control of the GPC cabinet; although scaling back the pace of the changes, they did not halt them.
In 2010, plans were announced that would have seen half the Libyan economy privatized over the following decade.
There was no accompanying political liberalization, with Gaddafi retaining predominant control, although in March 2010 the government devolved further powers to the municipal councils. Rising numbers of reformist technocrats attained
positions in the country's governance; best known was Gaddafi's son and heir apparent Saif al-Islam Gaddafi, who
was openly critical of Libya's human rights record. He led a group that proposed the drafting of a new constitution,
although it was never adopted, and in October 2009 was appointed to head the PSLC. Involved in encouraging tourism,
Saif founded several privately run media channels in 2008, but after criticising the government they were nationalised
in 2009. In October 2010, Gaddafi apologized to African leaders on behalf of Arab nations for their involvement in
the African slave trade. Following the start of the Arab Spring in 2011, Gaddafi spoke out in favour of Tunisian
President Zine El Abidine Ben Ali, then threatened by the Tunisian Revolution. He suggested that Tunisia's people
would be satisfied if Ben Ali introduced a Jamahiriyah system there. Fearing domestic protest, Libya's government
implemented preventative measures, reducing food prices, purging the army leadership of potential defectors and releasing
several Islamist prisoners. They proved ineffective, and on 17 February 2011, major protests broke out against Gaddafi's
government. Unlike Tunisia or Egypt, Libya was largely religiously homogeneous and had no strong Islamist movement,
but there was widespread dissatisfaction with the corruption and entrenched systems of patronage, while unemployment
had reached around 30%. Accusing the rebels of being "drugged" and linked to al-Qaeda, Gaddafi proclaimed that he
would die a martyr rather than leave Libya. As he announced that the rebels would be "hunted down street by street,
house by house and wardrobe by wardrobe", the army opened fire on protests in Benghazi, killing hundreds. Shocked
at the government's response, a number of senior politicians resigned or defected to the protesters' side. The uprising
spread quickly through Libya's less economically developed eastern half. By February's end, eastern cities like Benghazi,
Misrata, al-Bayda and Tobruk were controlled by rebels, and the Benghazi-based National Transitional Council (NTC)
had been founded to represent them. In the conflict's early months it appeared that Gaddafi's government – with its
greater firepower – would be victorious. Both sides disregarded the laws of war, committing human rights abuses,
including arbitrary arrests, torture, extrajudicial executions and revenge attacks. On 26 February the United Nations
Security Council passed Resolution 1970, suspending Libya from the UN Human Rights Council, implementing sanctions
and calling for an International Criminal Court (ICC) investigation into the killing of unarmed civilians. In March,
the Security Council declared a no-fly zone to protect the civilian population from aerial bombardment, calling on
foreign nations to enforce it; it also specifically prohibited foreign occupation. Ignoring this, Qatar sent hundreds
of troops to support the dissidents, and along with France and the United Arab Emirates provided the NTC with weaponry
and training. A week after the implementation of the no-fly zone, NATO announced that it would be enforced. On 30
April a NATO airstrike killed Gaddafi's sixth son and three of his grandsons in Tripoli, though Gaddafi and his wife
were unharmed. Western officials remained divided over whether Gaddafi was a legitimate military target under the
U.N. Security Council resolution. U.S. Secretary of Defense Robert Gates said that NATO was "not targeting Gaddafi
specifically" but that his command-and-control facilities were legitimate targets—including a facility inside his
sprawling Tripoli compound that was hit with airstrikes on 25 April. On 27 June, the ICC issued arrest warrants for
Gaddafi, his son Saif al-Islam, and his brother-in-law Abdullah Senussi, head of state security, for charges concerning
crimes against humanity. Libyan officials rejected the ICC, claiming that it had "no legitimacy whatsoever" and highlighting
that "all of its activities are directed at African leaders". That month, Amnesty International published their findings,
in which they asserted that many of the accusations of mass human rights abuses made against Gaddafist forces lacked
credible evidence, and were instead fabrications of the rebel forces which had been readily adopted by the western
media. Amnesty International did however still accuse Gaddafi forces of numerous war crimes. On 15 July 2011, at
a meeting in Istanbul, over 30 governments recognised the NTC as the legitimate government of Libya. Gaddafi responded
to the announcement with a speech on Libyan national television, in which he called on supporters to "Trample on
those recognitions, trample on them under your feet ... They are worthless". Now with NATO support in the form of
air cover, the rebel militia pushed westward, defeating loyalist armies and securing control of the centre of the
country. Gaining the support of Amazigh (Berber) communities of the Nafusa Mountains, who had long been persecuted
as non-Arabic speakers under Gaddafi, the NTC armies surrounded Gaddafi loyalists in several key areas of western
Libya. In August, the rebels seized Zliten and Tripoli, ending the last vestiges of Gaddafist power. On 25 August,
the Arab League recognised the NTC as "the legitimate representative of the Libyan state", on which basis Libya
would resume its membership in the League. Only a few towns in western Libya—such as Bani Walid, Sebha and Sirte—remained
Gaddafist strongholds. Retreating to Sirte after Tripoli's fall, Gaddafi announced his willingness to negotiate for
a handover to a transitional government, a suggestion rejected by the NTC. Surrounding himself with bodyguards, he
continually moved residences to escape NTC shelling, devoting his days to prayer and reading the Qur'an. On 20 October,
Gaddafi broke out of Sirte's District 2 in a joint civilian-military convoy, hoping to take refuge in the Jarref
Valley. At around 8.30am, NATO bombers attacked, destroying at least 14 vehicles and killing at least 53. The convoy
scattered, and Gaddafi and those closest to him fled to a nearby villa, which was shelled by rebel militia from Misrata.
Fleeing to a construction site, Gaddafi and his inner cohort hid inside drainage pipes while his bodyguards battled
the rebels; in the conflict, Gaddafi suffered head injuries from a grenade blast while defence minister Abu-Bakr
Yunis Jabr was killed. A Misratan militia took Gaddafi prisoner, beating him and causing serious injuries; the events were filmed on a mobile phone. A video appears to show Gaddafi being poked or stabbed in the rear end "with some kind of stick or knife", or possibly a bayonet. Pulled onto the front of a pick-up truck, he fell off as it drove
away. His semi-naked, lifeless body was then placed into an ambulance and taken to Misrata; upon arrival, he was
found to be dead. Official NTC accounts claimed that Gaddafi was caught in a cross-fire and died from his bullet
wounds. Other eye-witness accounts claimed that rebels had fatally shot Gaddafi in the stomach; a rebel identifying
himself as Senad el-Sadik el-Ureybi later claimed responsibility. Gaddafi's son Mutassim, who had also been in the convoy, was captured and found dead several hours later, most probably from an extrajudicial execution.
Around 140 Gaddafi loyalists were rounded up from the convoy; tied up and abused, the corpses of 66 were found at
the nearby Mahari Hotel, victims of extrajudicial execution. Libya's chief forensic pathologist, Dr. Othman al-Zintani,
carried out the autopsies of Gaddafi, his son and Jabr in the days following their deaths; although the pathologist
initially told the press that Gaddafi had died from a gunshot wound to the head, the autopsy report was not made
public. On the afternoon of Gaddafi's death, NTC Prime Minister Mahmoud Jibril publicly revealed the news. Gaddafi's
corpse was placed in the freezer of a local market alongside the corpses of Yunis Jabr and Mutassim; the bodies were
publicly displayed for four days, with Libyans from all over the country coming to view them. In response to international
calls, on 24 October Jibril announced that a commission would investigate Gaddafi's death. On 25 October, the NTC
announced that Gaddafi had been buried at an unidentified location in the desert; Al Aan TV showed amateur video
footage of the funeral. Seeking vengeance for the killing, Gaddafist sympathisers fatally wounded one of those who
had captured Gaddafi, Omran Shaaban, near Bani Walid in September 2012. As a schoolboy, Gaddafi adopted the ideologies
of Arab nationalism and Arab socialism, influenced in particular by Nasserism, the thought of Egyptian revolutionary
and president Gamal Abdel Nasser, whom Gaddafi adopted as his hero. During the early 1970s, Gaddafi formulated his
own particular approach to Arab nationalism and socialism, known as Third International Theory, which has been described
as a combination of "utopian socialism, Arab nationalism, and the Third World revolutionary theory that was in vogue
at the time". He laid out the principles of this Theory in the three volumes of The Green Book, in which he sought
to "explain the structure of the ideal society." His Arab nationalist views led him to believe that there needed
to be unity across the Arab world, combining the Arab nation under a single nation-state. He described his approach
to economics as "Islamic socialism", although biographers Blundy and Lycett noted that Gaddafi's socialism had a
"curiously Marxist undertone", with political scientist Sami Hajjar arguing that Gaddafi's model of socialism offered
a simplification of Karl Marx and Friedrich Engels' theories. Gaddafi saw his socialist Jamahiriyah as a model for
the Arab, Islamic, and non-aligned worlds to follow. Gaddafi's ideological worldview was moulded by his environment,
namely his Islamic faith, his Bedouin upbringing, and his disgust at the actions of European colonialists in Libya.
He was driven by a sense of "divine mission", believing himself a conduit of Allah's will, and thought that he must
achieve his goals "no matter what the cost". Raised within the Sunni branch of Islam, Gaddafi called for the implementation
of sharia within Libya. He desired unity across the Islamic world, and encouraged the propagation of the faith elsewhere.
On a 2010 visit to Italy, he paid a modelling agency to find 200 young Italian women for a lecture he gave urging
them to convert. He also funded the construction and renovation of two mosques in Africa, including Uganda's Kampala
Mosque. He nevertheless clashed with conservative Libyan clerics as to his interpretation of Islam. Many criticised
his attempts to encourage women to enter traditionally male-only sectors of society, such as the armed forces. Gaddafi
was keen to improve women's status, though saw the sexes as "separate but equal" and therefore felt women should
usually remain in traditional roles. A fundamental part of Gaddafi's ideology was anti-Zionism. He believed that
the state of Israel should not exist, and that any Arab compromise with the Israeli government was a betrayal of
the Arab people. In large part due to their support of Israel, Gaddafi despised the United States, considering the
country to be imperialist and lambasting it as "the embodiment of evil." Rallying against Jews in many of his speeches,
his anti-Semitism has been described as "almost Hitlerian" by Blundy and Lycett. From the late 1990s onward, his
view seemed to become more moderate. In 2007, he advocated the Isratin single-state solution to the Israeli–Palestinian
conflict, stating that "the [Israel-Palestine] solution is to establish a democratic state for the Jews and the Palestinians...
This is the fundamental solution, or else the Jews will be annihilated in the future, because the Palestinians have
[strategic] depth." Two years later he argued that a single-state solution would "move beyond old conflicts and look
to a unified future based on shared culture and respect." Gaddafi was a very private individual, who described himself
as a "simple revolutionary" and "pious Muslim" called upon by Allah to continue Nasser's work. Reporter Mirella Bianco
found that his friends considered him particularly loyal and generous, and asserted that he adored children. She
was told by Gaddafi's father that even as a child he had been "always serious, even taciturn", a trait he also exhibited
in adulthood. His father said that he was courageous, intelligent, pious, and family oriented. In the 1970s and 1980s
there were reports of his making sexual advances toward female reporters and members of his entourage. After the
civil war, more serious charges came to light. Annick Cojean, a journalist for Le Monde, wrote in her book Gaddafi's Harem that Gaddafi had raped, tortured, and imprisoned hundreds or thousands of women, usually very young, and had subjected them to urolagnia. Another source, Libyan psychologist Seham Sergewa, reported that several of his female bodyguards claimed
to have been raped by Gaddafi and senior officials. After the civil war, Luis Moreno Ocampo, prosecutor for the International
Criminal Court, said there was evidence that Gaddafi told soldiers to rape women who had spoken out against his regime.
In 2011 Amnesty International questioned this and other claims used to justify NATO's war in Libya. Following his
ascension to power, Gaddafi moved into the Bab al-Azizia barracks, a six-mile-long fortified compound located two
miles from the center of Tripoli. His home and office at Azizia was a bunker designed by West German engineers, while
the rest of his family lived in a large two-story building. Within the compound were also two tennis courts, a soccer
field, several gardens, camels, and a Bedouin tent in which he entertained guests. In the 1980s, his lifestyle was
considered modest in comparison to those of many other Arab leaders. Gaddafi allegedly worked for years with Swiss
banks to launder international banking transactions. In November 2011, The Sunday Times identified property worth
£1 billion in the UK that Gaddafi allegedly owned. Gaddafi had an Airbus A340 private jet, which he bought from Prince
Al-Waleed bin Talal of Saudi Arabia for $120 million in 2003. Operated by Tripoli-based Afriqiyah Airways and decorated
externally in their colours, it had various luxuries including a jacuzzi. Gaddafi married his first wife, Fatiha
al-Nuri, in 1969. She was the daughter of General Khalid, a senior figure in King Idris' administration, and was
from a middle-class background. Although they had one son, Muhammad Gaddafi (b. 1970), their relationship was strained,
and they divorced in 1970. Gaddafi's second wife was Safia Farkash, née el-Brasai, a former nurse from the Obeidat tribe
born in Bayda. They met in 1969, following his ascension to power, when he was hospitalized with appendicitis; he
claimed that it was love at first sight. The couple remained married until his death. Together they had seven biological
children: Saif al-Islam Gaddafi (b. 1972), Al-Saadi Gaddafi (b. 1973), Mutassim Gaddafi (1974–2011), Hannibal Muammar
Gaddafi (b. 1975), Ayesha Gaddafi (b. 1976), Saif al-Arab Gaddafi (1982–2011), and Khamis Gaddafi (1983–2011). He
also adopted two children, Hanna Gaddafi and Milad Gaddafi. Biographers Blundy and Lycett believed that he was "a
populist at heart." Throughout Libya, crowds of supporters would turn up to public events at which he appeared; although the government described these as "spontaneous demonstrations", there are recorded instances of groups being coerced or paid to attend. He was typically late to public events, and would sometimes not show up at all. Although Bianco thought he
had a "gift for oratory", he was considered a poor orator by biographers Blundy and Lycett. Biographer Daniel Kawczynski
noted that Gaddafi was famed for his "lengthy, wandering" speeches, which typically involved criticising Israel and
the U.S. Gaddafi was notably confrontational in his approach to foreign powers, and generally shunned western ambassadors
and diplomats, believing them to be spies. He once said that HIV was "a peaceful virus, not an aggressive virus"
and assured attendees at the African Union that "if you are straight you have nothing to fear from AIDS". He also
said that the H1N1 influenza virus was a biological weapon manufactured by a foreign military, and he assured Africans
that the tsetse fly and mosquito were "God's armies which will protect us against colonialists". Should these 'enemies'
come to Africa, "they will get malaria and sleeping sickness". Starting in the 1980s, he travelled with his all-female
Amazonian Guard, who were allegedly sworn to a life of celibacy. However, according to psychologist Seham Sergewa,
after the civil war several of the guards told her they had been pressured into joining and raped by Gaddafi and
senior officials. He hired several Ukrainian nurses to care for him and his family's health, and traveled everywhere
with his trusted Ukrainian nurse Halyna Kolotnytska. Kolotnytska's daughter denied the suggestion that the relationship
was anything but professional. Gaddafi remained a controversial and divisive figure on the world stage throughout
his life and after death. Supporters praised Gaddafi's administration for the creation of an almost classless society
through domestic reform. They stress the regime's achievements in combating homelessness and ensuring access to food
and safe drinking water. Highlighting that under Gaddafi, all Libyans enjoyed free education to a university level,
they point to the dramatic rise in literacy rates after the 1969 revolution. Supporters have also applauded achievements
in medical care, praising the universal free healthcare provided under the Gaddafist administration, with diseases
like cholera and typhoid being contained and life expectancy raised. Biographers Blundy and Lycett believed that
under the first decade of Gaddafi's leadership, life for most Libyans "undoubtedly changed for the better" as material
conditions and wealth drastically improved, while Libyan studies specialist Lillian Craig Harris remarked that in
the early years of his administration, Libya's "national wealth and international influence soared, and its national
standard of living has risen dramatically." Such high standards declined during the 1980s, as a result of economic
stagnation. Gaddafi claimed that his Jamahiriya was a "concrete utopia", and that he had been appointed by "popular
assent", with some Islamic supporters believing that he exhibited barakah. His opposition to Western governments
earned him the respect of many in the Euro-American far right. Critics labelled Gaddafi "despotic, cruel, arrogant,
vain and stupid", with western governments and press presenting him as the "vicious dictator of an oppressed people".
During the Reagan administration, the United States regarded him as "Public Enemy No. 1" and Reagan famously dubbed
him the "mad dog of the Middle East". According to critics, the Libyan people lived in a climate of fear under Gaddafi's
administration, due to his government's pervasive surveillance of civilians. Gaddafi's Libya was typically described
by western commentators as "a police state". Opponents were critical of Libya's human rights abuses; according to
Human Rights Watch (HRW) and others, hundreds of arrested political opponents often failed to receive a fair trial,
and were sometimes subjected to torture or extrajudicial execution, most notably in the Abu Salim prison, including
an alleged massacre on 29 June 1996 in which HRW estimated that 1,270 prisoners were killed. Dissidents abroad
or "stray dogs" were also publicly threatened with death and sometimes killed by government hit squads. His government's
treatment of non-Arab Libyans has also come in for criticism from human rights activists, with native Berbers, Italians,
Jews, refugees, and foreign workers all facing persecution in Gaddafist Libya. According to journalist Annick Cojean
and psychologist Seham Sergewa, Gaddafi and senior officials raped and imprisoned hundreds or thousands of young
women and reportedly raped several of his female bodyguards. Gaddafi's government was frequently criticized for not
being democratic, with Freedom House consistently giving Libya under Gaddafi the "Not Free" ranking for civil liberties
and political rights. International reactions to Gaddafi's death were divided. U.S. President Barack Obama stated
that it meant that "the shadow of tyranny over Libya has been lifted," while UK Prime Minister David Cameron stated
that he was "proud" of his country's role in overthrowing "this brutal dictator". Contrastingly, former Cuban President
Fidel Castro commented that in defying the rebels, Gaddafi would "enter history as one of the great figures of the
Arab nations", while Venezuelan President Hugo Chávez described him as "a great fighter, a revolutionary and a martyr."
Nelson Mandela expressed sadness at the news, praising Gaddafi for his anti-apartheid stance, remarking that he backed
the African National Congress during "the darkest moments of our struggle". Gaddafi was mourned by many as a hero
across Sub-Saharan Africa; for instance, a vigil was held by Muslims in Sierra Leone. The Daily Times of Nigeria
stated that while undeniably a dictator, Gaddafi was the most benevolent in a region that only knew dictatorship,
and that he was "a great man that looked out for his people and made them the envy of all of Africa." The Nigerian
newspaper Leadership reported that while many Libyans and Africans would mourn Gaddafi, this would be ignored by
western media and that as such it would take 50 years before historians decided whether he was "martyr or villain."
Following his defeat in the civil war, Gaddafi's system of governance was dismantled and replaced under the interim
government of the NTC, which legalised trade unions and freedom of the press. In July 2012, elections were held to form a new General National Congress (GNC), which officially took over governance from the NTC in August. The GNC proceeded
to elect Mohammed Magariaf as president of the chamber, and then voted in Mustafa A.G. Abushagur as Prime Minister; when Abushagur failed to gain congressional approval, the GNC instead elected Ali Zeidan to the position. In January
2013, the GNC officially renamed the Jamahiriyah as the "State of Libya".
Cyprus (i/ˈsaɪprəs/; Greek: Κύπρος IPA: [ˈcipros]; Turkish: Kıbrıs IPA: [ˈkɯbɾɯs]), officially the Republic of Cyprus (Greek:
Κυπριακή Δημοκρατία; Turkish: Kıbrıs Cumhuriyeti), is an island country in the Eastern Mediterranean Sea, off the
coasts of Syria and Turkey. Cyprus is the third largest and third most populous island in the Mediterranean, and
a member state of the European Union. It is located south of Turkey, west of Syria and Lebanon, northwest of Israel
and Palestine, north of Egypt and east of Greece. The earliest known human activity on the island dates to around
the 10th millennium BC. Archaeological remains from this period include the well-preserved Neolithic village of Khirokitia,
and Cyprus is home to some of the oldest water wells in the world. Cyprus was settled by Mycenaean Greeks in two
waves in the 2nd millennium BC. As a strategic location in the Middle East, it was subsequently occupied by several
major powers, including the empires of the Assyrians, Egyptians and Persians, from whom the island was seized in
333 BC by Alexander the Great. Subsequent rule by Ptolemaic Egypt, the Classical and Eastern Roman Empire, Arab caliphates
for a short period, the French Lusignan dynasty and the Venetians, was followed by over three centuries of Ottoman
rule between 1571 and 1878 (de jure until 1914). Cyprus was placed under British administration under the Cyprus Convention
of 1878 and was formally annexed by Britain in 1914. Even though Turkish Cypriots made up only 18% of the population,
the partition of Cyprus and creation of a Turkish state in the north became a policy of Turkish Cypriot leaders and
Turkey in the 1950s. Turkish leaders for a period advocated the annexation of Cyprus to Turkey as Cyprus was considered
an "extension of Anatolia" by them; while since the 19th century, the majority Greek Cypriot population and its Orthodox
church had been pursuing union with Greece, which became a Greek national policy in the 1950s. Following nationalist
violence in the 1950s, Cyprus was granted independence in 1960. In 1963, 11 years of intercommunal violence between
Greek Cypriots and Turkish Cypriots began, displacing more than 25,000 Turkish Cypriots and bringing an end
to Turkish Cypriot representation in the republic. On 15 July 1974, a coup d'état was staged by Greek Cypriot nationalists
and elements of the Greek military junta in an attempt at enosis, the incorporation of Cyprus into Greece. This action
precipitated the Turkish invasion of Cyprus, which led to the capture of the present-day territory of Northern Cyprus
the following month, after a ceasefire collapsed, and the displacement of over 150,000 Greek Cypriots and 50,000
Turkish Cypriots. A separate Turkish Cypriot state in the north was established in 1983. These events and the resulting
political situation are matters of a continuing dispute. The Republic of Cyprus has de jure sovereignty over the
island of Cyprus and its surrounding waters, according to international law, except for the British Overseas Territory
of Akrotiri and Dhekelia, administered as Sovereign Base Areas. However, the Republic of Cyprus is de facto partitioned
into two main parts; the area under the effective control of the Republic, comprising about 59% of the island's area,
and the north, administered by the self-declared Turkish Republic of Northern Cyprus, which is recognised only by
Turkey, covering about 36% of the island's area. The international community considers the northern part of the island
as territory of the Republic of Cyprus occupied by Turkish forces. The occupation is viewed as illegal under international
law, amounting to illegal occupation of EU territory since Cyprus became a member of the European Union. During the
late Bronze Age the island experienced two waves of Greek settlement. The first wave consisted of Mycenaean Greek
traders who started visiting Cyprus around 1400 BC. A major wave of Greek settlement is believed to have taken place
following the Bronze Age collapse of Mycenaean Greece from 1100 to 1050 BC, with the island's predominantly Greek
character dating from this period. Cyprus plays an important role in Greek mythology, being the birthplace of Aphrodite
and Adonis, and home to King Cinyras, Teucer and Pygmalion. Beginning in the 8th century BC, Phoenician colonies were
founded on the south coast of Cyprus, near present-day Larnaca and Salamis. Following the death in 1473 of James
II, the last Lusignan king, the Republic of Venice assumed control of the island, while the late king's Venetian
widow, Queen Catherine Cornaro, reigned as figurehead. Venice formally annexed the Kingdom of Cyprus in 1489, following
the abdication of Catherine. The Venetians fortified Nicosia by building the Venetian Walls, and used it as an important
commercial hub. Throughout Venetian rule, the Ottoman Empire frequently raided Cyprus. In 1539 the Ottomans destroyed
Limassol, and so, fearing the worst, the Venetians also fortified Famagusta and Kyrenia. In 1570, a full-scale Ottoman
assault with 60,000 troops brought the island under Ottoman control, despite stiff resistance by the inhabitants
of Nicosia and Famagusta. The Ottoman forces that captured Cyprus massacred many Greek and Armenian Christian inhabitants.
The previous Latin elite were destroyed and the first significant demographic change since antiquity took place with
the formation of a Muslim community. Soldiers who fought in the conquest settled on the island and Turkish peasants
and craftsmen were brought to the island from Anatolia. This new community also included banished Anatolian tribes,
"undesirable" persons and members of various "troublesome" Muslim sects, as well as a number of new converts on the
island. The Ottomans abolished the feudal system previously in place and applied the millet system to Cyprus, under
which non-Muslim peoples were governed by their own religious authorities. In a reversal from the days of Latin rule,
the head of the Church of Cyprus was invested as leader of the Greek Cypriot population and acted as mediator between
Christian Greek Cypriots and the Ottoman authorities. This status ensured that the Church of Cyprus was in a position
to end the constant encroachments of the Roman Catholic Church. Ottoman rule of Cyprus was at times indifferent,
at times oppressive, depending on the temperaments of the sultans and local officials, and the island entered over
250 years of economic decline. The ratio of Muslims to Christians fluctuated throughout the period of Ottoman domination.
In 1777–78, 47,000 Muslims constituted a majority over the island's 37,000 Christians. By 1872, the population of
the island had risen to 144,000, comprising 44,000 Muslims and 100,000 Christians. The Muslim population included
numerous crypto-Christians, including the Linobambaki, a crypto-Catholic community that arose due to religious persecution
of the Catholic community by the Ottoman authorities; this community would assimilate into the Turkish Cypriot community
during British rule. As soon as the Greek War of Independence broke out in 1821, several Greek Cypriots left for
Greece to join the Greek forces. In response, the Ottoman governor of Cyprus arrested and executed 486 prominent
Greek Cypriots, including the Archbishop of Cyprus, Kyprianos and four other bishops. In 1828, modern Greece's first
president Ioannis Kapodistrias called for union of Cyprus with Greece, and numerous minor uprisings took place. Reaction
to Ottoman misrule led to uprisings by both Greek and Turkish Cypriots, although none were successful. Centuries
of neglect by the Turks, the unrelenting poverty of most of the people, and the ever-present tax collectors all fuelled
Greek nationalism, and by the 20th century the idea of enosis, or union, with newly independent Greece was firmly rooted
among Greek Cypriots. The island would serve Britain as a key military base for its colonial routes. By 1906, when
the Famagusta harbour was completed, Cyprus was a strategic naval outpost overlooking the Suez Canal, the crucial
main route to India which was then Britain's most important overseas possession. Following the outbreak of the First
World War and the decision of the Ottoman Empire to join the war on the side of the Central Powers, on 5 November
1914 the British Empire formally annexed Cyprus and declared the Ottoman Khedivate of Egypt and Sudan a Sultanate
and British protectorate. The Greek Cypriot population, meanwhile, had become hopeful that the British administration
would lead to enosis. The idea of enosis was historically part of the Megali Idea, a greater political ambition of
a Greek state encompassing the territories with Greek inhabitants in the former Ottoman Empire, including Cyprus
and Asia Minor with a capital in Constantinople, and was actively pursued by the Cypriot Orthodox Church, which had
its members educated in Greece. These religious officials, together with Greek military officers and professionals,
some of whom still pursued the Megali Idea, would later found the guerrilla organisation Ethniki Organosis Kyprion
Agoniston or National Organisation of Cypriot Fighters (EOKA). The Greek Cypriots viewed the island as historically
Greek and believed that union with Greece was a natural right. In the 1950s, the pursuit of enosis became a part
of Greek national policy. Initially, the Turkish Cypriots favoured the continuation of British rule. However,
they were alarmed by the Greek Cypriot calls for enosis as they saw the union of Crete with Greece, which led to
the exodus of Cretan Turks, as a precedent to be avoided, and they took a pro-partition stance in response to the
militant activity of EOKA. The Turkish Cypriots also viewed themselves as a distinct ethnic group of the island and
believed in their having a separate right to self-determination from Greek Cypriots. Meanwhile, in the 1950s, Turkish
leader Menderes considered Cyprus an "extension of Anatolia", rejected the partition of Cyprus along ethnic lines
and favoured the annexation of the whole island to Turkey. Nationalistic slogans centred on the idea that "Cyprus
is Turkish" and the ruling party declared Cyprus to be a part of the Turkish homeland that was vital to its security.
When it was realised that, with Turkish Cypriots constituting only 20% of the islanders, annexation was unfeasible,
the national policy was changed to favour partition. The slogan "Partition or Death" was frequently used in Turkish
Cypriot and Turkish protests starting in the late 1950s and continuing throughout the 1960s. Although after the Zürich
and London conferences Turkey seemed to accept the existence of the Cypriot state and to distance itself from its
policy of favouring the partition of the island, the goal of the Turkish and Turkish Cypriot leaders remained that
of creating an independent Turkish state in the northern part of the island. In January 1950, the Church of Cyprus
organised a referendum under the supervision of clerics and with no Turkish Cypriot participation, where 96% of the
participating Greek Cypriots voted in favour of enosis. Greeks made up 80.2% of the island's total population at
the time (1946 census). Restricted autonomy under a constitution was proposed by the British administration but eventually
rejected. In 1955 the EOKA organisation was founded, seeking union with Greece through armed struggle. At the same
time the Turkish Resistance Organisation (TMT), calling for Taksim, or partition, was established by the Turkish
Cypriots as a counterweight. The British had also adopted at the time a policy of "divide and rule". Woodhouse, a
British official in Cyprus, revealed that then British Foreign Secretary Harold Macmillan "urged the Britons in Cyprus
to stir up the Turks in order to neutralise Greek agitation". British officials also tolerated the creation of the
Turkish underground organisation T.M.T. The Secretary of State for the Colonies in a letter dated 15 July 1958 had
advised the Governor of Cyprus not to act against T.M.T despite its illegal actions so as not to harm British relations
with the Turkish government. On 16 August 1960, Cyprus attained independence after the Zürich and London Agreement
between the United Kingdom, Greece and Turkey. Cyprus had a total population of 573,566, of whom 442,138 (77.1%)
were Greeks, 104,320 (18.2%) Turks, and 27,108 (4.7%) others. The UK retained the two Sovereign Base Areas of Akrotiri
and Dhekelia, while government posts and public offices were allocated by ethnic quotas, giving the minority Turkish
Cypriots a permanent veto, 30% in parliament and administration, and granting the three mother-states guarantor rights.
However, the division of power as foreseen by the constitution soon resulted in legal impasses and discontent on
both sides, and nationalist militants started training again, with the military support of Greece and Turkey respectively.
The Greek Cypriot leadership believed that the rights given to Turkish Cypriots under the 1960 constitution were
too extensive and designed the Akritas plan, which was aimed at reforming the constitution in favour of Greek Cypriots,
persuading the international community about the correctness of the changes and violently subjugating Turkish Cypriots
in a few days should they not accept the plan. Tensions were heightened when Cypriot President Archbishop Makarios
III called for constitutional changes, which were rejected by Turkey and opposed by Turkish Cypriots. Intercommunal
violence erupted on December 21, 1963, when two Turkish Cypriots were killed in an incident involving the Greek Cypriot
police. The violence resulted in the death of 364 Turkish and 174 Greek Cypriots, destruction of 109 Turkish Cypriot
or mixed villages and the displacement of 25,000–30,000 Turkish Cypriots. The crisis resulted in the end of Turkish
Cypriot involvement in the administration and in their claim that it had lost its legitimacy; the nature of this
event is still controversial. In some areas, Greek Cypriots prevented Turkish Cypriots from travelling and entering
government buildings, while some Turkish Cypriots willingly withdrew due to the calls of the Turkish Cypriot administration.
Turkish Cypriots started living in enclaves; the republic's structure was changed unilaterally by Makarios and Nicosia
was divided by the Green Line, with the deployment of UNFICYP troops. In 1964, Turkey tried to invade Cyprus in response
to the continuing Cypriot intercommunal violence, but was stopped by a strongly worded telegram from US
President Lyndon B. Johnson on 5 June, warning that the US would not stand beside Turkey in case of a consequential
Soviet invasion of Turkish territory. Meanwhile, by 1964, enosis was a Greek policy that could not be abandoned;
Makarios and the Greek prime minister Georgios Papandreou agreed that enosis should be the ultimate aim and King
Constantine wished Cyprus "a speedy union with the mother country". Greece dispatched 10,000 troops to Cyprus to
counter a possible Turkish invasion. On 15 July 1974, the Greek military junta under Dimitrios Ioannides carried
out a coup d'état in Cyprus, to unite the island with Greece. The coup ousted president Makarios III and replaced
him with pro-enosis nationalist Nikos Sampson. In response to the coup, five days later, on 20 July 1974, the Turkish
army invaded the island, citing a right to intervene to restore the constitutional order from the 1960 Treaty of
Guarantee. This justification has been rejected by the United Nations and the international community. Three days
later, by the time a ceasefire was agreed, Turkey had landed 30,000 troops on the island and captured Kyrenia, the
corridor linking Kyrenia to Nicosia, and the Turkish Cypriot quarter of Nicosia itself. The junta in Athens, and
then the Sampson regime in Cyprus fell from power. In Nicosia, Glafkos Clerides assumed the presidency and constitutional
order was restored, removing the pretext for the Turkish invasion. But after the peace negotiations in Geneva, the
Turkish government reinforced their Kyrenia bridgehead and started a second invasion on 14 August. The invasion resulted
in the seizure of Morphou, Karpass, Famagusta and the Mesaoria. International pressure led to a ceasefire, and by
then 37% of the island had been taken over by the Turks and 180,000 Greek Cypriots had been evicted from their homes
in the north. At the same time, around 50,000 Turkish Cypriots moved to the areas under the control of the Turkish
Forces and settled in the properties of the displaced Greek Cypriots. Among a variety of sanctions against Turkey,
in mid-1975 the US Congress imposed an arms embargo on Turkey for using American-supplied equipment during the Turkish
invasion of Cyprus in 1974. There are 1,534 Greek Cypriots and 502 Turkish Cypriots missing as a result of the fighting.
The events of the summer of 1974 dominate the politics on the island, as well as Greco-Turkish relations. Around
150,000 settlers from Turkey are believed to be living in the north—many of whom were forced from Turkey by the Turkish
government—in violation of the Geneva Convention and various UN resolutions. Following the invasion and the capture
of its northern territory by Turkish troops, the Republic of Cyprus announced that all of its ports of entry in the
north were closed, as they were effectively not under its control. The Turkish invasion, followed
by the occupation and the declaration of independence of the TRNC, has been condemned by United Nations resolutions,
which are reaffirmed by the Security Council every year. The last major effort to settle the Cyprus dispute was the
Annan Plan in 2004, drafted by the then Secretary General, Kofi Annan. The plan was put to a referendum in both Northern
Cyprus and the Republic of Cyprus. 65% of Turkish Cypriots voted in support of the plan and 74% of Greek Cypriots voted
against the plan, claiming that it disproportionately favoured the Turkish side. In total, 66.7% of the voters rejected
the Annan Plan V. On 1 May 2004 Cyprus joined the European Union, together with nine other countries. Cyprus was
accepted into the EU as a whole, although EU legislation is suspended in the territory occupied by Turkey (the TRNC),
until a final settlement of the Cyprus problem. In July 2006, the island served as a haven for people fleeing Lebanon,
due to the conflict between Israel and Hezbollah (also called "The July War"). The physical relief of the island
is dominated by two mountain ranges, the Troodos Mountains and the smaller Kyrenia Range, and the central plain they
encompass, the Mesaoria. The Mesaoria plain is drained by the Pedieos River, the longest on the island. The Troodos
Mountains cover most of the southern and western portions of the island and account for roughly half its area. The
highest point on Cyprus is Mount Olympus at 1,952 m (6,404 ft), located in the centre of the Troodos range. The narrow
Kyrenia Range, extending along the northern coastline, occupies substantially less area, and elevations are lower,
reaching a maximum of 1,024 m (3,360 ft). The island lies within the Anatolian Plate. Cyprus has one of the warmest
climates in the Mediterranean part of the European Union. The average annual temperature on the
coast is around 24 °C (75 °F) during the day and 14 °C (57 °F) at night. Generally, summers last about eight months,
beginning in April with average temperatures of 21–23 °C (70–73 °F) during the day and 11–13 °C (52–55 °F) at night,
and ending in November with average temperatures of 22–23 °C (72–73 °F) during the day and 12–14 °C (54–57 °F) at
night, although in the remaining four months temperatures sometimes exceed 20 °C (68 °F). Among all cities in the
Mediterranean part of the European Union, Limassol has one of the warmest winters, in the period January – February
average temperature is 17–18 °C (63–64 °F) during the day and 7–8 °C (45–46 °F) at night; in other coastal locations
in Cyprus it is generally 16–17 °C (61–63 °F) during the day and 6–8 °C (43–46 °F) at night. During March, Limassol
has average temperatures of 19–20 °C (66–68 °F) during the day and 9–11 °C (48–52 °F) at night; in other coastal
locations in Cyprus it is generally 17–19 °C (63–66 °F) during the day and 8–10 °C (46–50 °F) at night. The middle of
summer is hot – in July and August on the coast the average temperature is usually around 33 °C (91 °F) during the
day and around 22 °C (72 °F) at night (inland, in the highlands, the average temperature exceeds 35 °C (95 °F)), while
in June and September on the coast the average temperature is usually around 30 °C (86 °F) during the day and
around 20 °C (68 °F) at night in Limassol, and usually around 28 °C (82 °F) during the day and around 18 °C
(64 °F) at night in Paphos. Large fluctuations in temperature are rare. Inland temperatures are more extreme, with
colder winters and hotter summers compared with the coast of the island. Cyprus suffers from a chronic shortage of
water. The country relies heavily on rain to provide household water, but in the past 30 years average yearly precipitation
has decreased. Between 2001 and 2004, exceptionally heavy annual rainfall pushed water reserves up, with supply exceeding
demand, allowing total storage in the island's reservoirs to rise to an all-time high by the start of 2005. However,
since then demand has increased annually – a result of local population growth, foreigners moving to Cyprus and the
number of visiting tourists – while supply has fallen as a result of more frequent droughts. Dams remain the principal
source of water both for domestic and agricultural use; Cyprus has a total of 107 dams (plus one currently under
construction) and reservoirs, with a total water storage capacity of about 330,000,000 m³ (1.2×10¹⁰ cu ft). Water
desalination plants are gradually being constructed to deal with recent years of prolonged drought. The government
has invested heavily in these plants, which have supplied almost 50 per cent of domestic
water since 2001. Efforts have also been made to raise public awareness of the situation and to encourage domestic
water users to take more responsibility for the conservation of this increasingly scarce commodity. The 1960 Constitution
provided for a presidential system of government with independent executive, legislative and judicial branches as
well as a complex system of checks and balances including a weighted power-sharing ratio designed to protect the
interests of the Turkish Cypriots. The executive was led by a Greek Cypriot president and a Turkish Cypriot vice-president
elected by their respective communities for five-year terms and each possessing a right of veto over certain types
of legislation and executive decisions. Legislative power rested with the House of Representatives, whose members were also elected
on the basis of separate voters' rolls. Since 1965, following clashes between the two communities, the Turkish Cypriot
seats in the House have remained vacant. In 1974 Cyprus was divided de facto when the Turkish army occupied the northern
third of the island. The Turkish Cypriots subsequently declared independence in 1983 as the Turkish Republic of Northern
Cyprus, but the new state was recognised only by Turkey. In 1985 the TRNC adopted a constitution and held its first elections.
The United Nations recognises the sovereignty of the Republic of Cyprus over the entire island of Cyprus. The House
of Representatives currently has 59 members elected for a five-year term, 56 members by proportional representation
and 3 observer members representing the Armenian, Latin and Maronite minorities. 24 seats are allocated to the Turkish
community but have remained vacant since 1964. The political environment is dominated by the communist AKEL, the liberal
conservative Democratic Rally, the centrist Democratic Party, the social-democratic EDEK and the centrist EURO.KO.
In 2008, Dimitris Christofias became the country's first Communist head of state. Due to his involvement in the 2012–13
Cypriot financial crisis, Christofias did not run for re-election in 2013. The Presidential election in 2013 resulted
in Democratic Rally candidate Nicos Anastasiades winning 57.48% of the vote. As a result, Anastasiades was sworn
in on 28 February 2013 and has been President since. In "Freedom in the World 2011", Freedom House rated Cyprus as
"free". In January 2011, the Report of the Office of the United Nations High Commissioner for Human Rights on the
question of Human Rights in Cyprus noted that the ongoing division of Cyprus continues to affect human rights throughout
the island "... including freedom of movement, human rights pertaining to the question of missing persons, discrimination,
the right to life, freedom of religion, and economic, social and cultural rights." The constant focus on the division
of the island can sometimes mask other human rights issues. In 2014, Turkey was ordered by the European
Court of Human Rights to pay well over $100m in compensation to Cyprus for the invasion; Ankara announced that it
would ignore the judgment. In 2014, a group of Cypriot refugees and a European parliamentarian, later joined by the
Cypriot government, filed a complaint to the International Court of Justice, accusing Turkey of violating the Geneva
Conventions by directly or indirectly transferring its civilian population into occupied territory. Over the preceding
ten years, civilian transfer by Turkey had "reached new heights", in the words of one US ambassador. Other violations
of the Geneva and the Hague Conventions—both ratified by Turkey—amount to what archaeologist Sophocles Hadjisavvas
called "the organized destruction of Greek and Christian heritage in the north". These violations include looting
of cultural treasures, deliberate destruction of churches, neglect of works of art, and altering the names of important
historical sites, which was condemned by the International Council on Monuments and Sites. Hadjisavvas has asserted
that these actions are motivated by a Turkish policy of erasing the Greek presence in Northern Cyprus within a framework
of ethnic cleansing, as well as by greed and profit-seeking on the part of the individuals involved. The air force
includes the 449th Helicopter Gunship Squadron (449 ΜΑΕ) – operating Aérospatiale SA-342L and Bell 206 and the 450th
Helicopter Gunship Squadron (450 ME/P) – operating Mi-35P helicopters and the Britten-Norman BN-2B and Pilatus PC-9
fixed-wing aircraft. Current senior officers include Supreme Commander, Cypriot National Guard, Lt. General Stylianos
Nasis, and Chief of Staff, Cypriot National Guard, Maj. General Michalis Flerianos. The Evangelos
Florakis Naval Base explosion, which occurred on 11 July 2011, was the most deadly military accident ever recorded
in Cyprus. In the early 21st century the Cypriot economy has diversified and become prosperous. However, in 2012
it became affected by the Eurozone financial and banking crisis. In June 2012, the Cypriot government announced it
would need €1.8 billion in foreign aid to support the Cyprus Popular Bank, and this was followed by Fitch downgrading
Cyprus's credit rating to junk status. Fitch said Cyprus would need an additional €4 billion to support its banks
and the downgrade was mainly due to the exposure of Bank of Cyprus, Cyprus Popular Bank and Hellenic Bank, Cyprus's
three largest banks, to the Greek financial crisis. The 2012–2013 Cypriot financial crisis led to an agreement with
the Eurogroup in March 2013 to split the country's second largest bank, the Cyprus Popular Bank (also known as Laiki
Bank), into a "bad" bank which would be wound down over time and a "good" bank which would be absorbed by the Bank
of Cyprus. In return for a €10 billion bailout from the European Commission, the European Central Bank and the International
Monetary Fund, often referred to as the "troika", the Cypriot government was required to impose a significant haircut
on uninsured deposits, a large proportion of which were held by wealthy Russians who used Cyprus as a tax haven.
Insured deposits of €100,000 or less were not affected. According to the latest International Monetary Fund estimates,
its per capita GDP (adjusted for purchasing power) at $30,769 is just above the average of the European Union.
Cyprus has been sought out as a base for several offshore businesses because of its low tax rates. Tourism, financial
services and shipping are significant parts of the economy. Economic policy of the Cyprus government has focused
on meeting the criteria for admission to the European Union. The Cypriot government adopted the euro as the national
currency on 1 January 2008. In recent years significant quantities of offshore natural gas have been discovered in
the area known as Aphrodite in Cyprus' exclusive economic zone (EEZ), about 175 kilometres (109 miles) south of Limassol
at 33°5′40″N and 32°59′0″E. However, Turkey's offshore drilling companies have accessed both natural gas and oil
resources since 2013. Cyprus demarcated its maritime border with Egypt in 2003, and with Lebanon in 2007. Cyprus
and Israel demarcated their maritime border in 2010, and in August 2011, the US-based firm Noble Energy entered into
a production-sharing agreement with the Cypriot government regarding the block's commercial development. Available
modes of transport are by road, sea and air. Of the 10,663 km (6,626 mi) of roads in the Republic of Cyprus in 1998,
6,249 km (3,883 mi) were paved, and 4,414 km (2,743 mi) were unpaved. In 1996 the Turkish-occupied area had a similar
ratio of paved to unpaved, with approximately 1,370 km (850 mi) of paved road and 980 km (610 mi) unpaved.
Cyprus is one of only four EU nations in which vehicles drive on the left-hand side of the road, a remnant
of British colonisation (the others being Ireland, Malta and the United Kingdom). A series of motorways runs along
the coast from Paphos east to Ayia Napa, with two motorways running inland to Nicosia, one from Limassol and one
from Larnaca. Due to the inter-communal ethnic tensions between 1963 and 1974, an island-wide census was regarded
as impossible. Nevertheless, the Greek Cypriots conducted one in 1973, without the Turkish Cypriot populace. According
to this census, the Greek Cypriot population was 482,000. One year later, in 1974, the Cypriot government's Department
of Statistics and Research estimated the total population of Cyprus at 641,000, of whom 506,000 (78.9%) were Greeks
and 118,000 (18.4%) Turks. After the partition of the island in 1974, the Greek Cypriots conducted four more censuses: in 1976,
1982, 1992 and 2001; these excluded the Turkish population which was resident in the northern part of the island.
According to the 2006 census carried out by Northern Cyprus, there were 256,644 (de jure) people living in Northern
Cyprus. 178,031 were citizens of Northern Cyprus, of whom 147,405 were born in Cyprus (112,534 from the north; 32,538
from the south; 371 did not indicate what part of Cyprus they were from); 27,333 born in Turkey; 2,482 born in the
UK and 913 born in Bulgaria. Of the 147,405 citizens born in Cyprus, 120,031 say both parents were born in Cyprus;
16,824 say both parents born in Turkey; 10,361 have one parent born in Turkey and one parent born in Cyprus. The
majority of Greek Cypriots identify as Greek Orthodox, whereas most Turkish Cypriots are adherents of Sunni Islam.
According to Eurobarometer 2005, Cyprus was the second most religious state in the European Union at that time, after
Malta (Romania was not yet an EU member in 2005; it is currently the most religious state in the European
Union) (see Religion in the European Union). The first President of Cyprus, Makarios III, was an archbishop.
The current leader of the Greek Orthodox Church of Cyprus is Archbishop Chrysostomos II. Cyprus has two official
languages, Greek and Turkish. Armenian and Cypriot Maronite Arabic are recognised as minority languages. Although
without official status, English is widely spoken and features prominently on road signs, public notices, in advertisements,
etc. English was the sole official language during British colonial rule and the lingua franca until 1960, and continued
to be used (de facto) in courts of law until 1989 and in legislation until 1996. 80.4% of Cypriots are proficient in English as a second language. Russian is widely spoken among the country's minorities, residents
and citizens of post-Soviet countries, and Pontic Greeks. Russian, after English and Greek, is the third language
used on many signs of shops and restaurants, particularly in Limassol and Paphos. In addition to these languages,
12% speak French and 5% speak German. State schools are generally seen as equivalent in quality of education to private-sector
institutions. However, the value of a state high-school diploma is limited by the fact that the grades obtained account
for only around 25% of the final grade for each topic, with the remaining 75% assigned by the teacher during the
semester, in a minimally transparent way. Cypriot universities (like universities in Greece) ignore high school grades
almost entirely for admissions purposes. While a high-school diploma is mandatory for university attendance, admissions
are decided almost exclusively on the basis of scores at centrally administered university entrance examinations
that all university candidates are required to take. The majority of Cypriots receive their higher education at Greek,
British, Turkish, other European and North American universities. Cyprus currently has the EU's highest percentage of working-age citizens with higher-level education, at 30%, ahead of Finland's 29.5%. In addition, 47% of its population aged 25–34 have tertiary education, the highest share in the
EU. The body of Cypriot students is highly mobile, with 78.7% studying at a university outside Cyprus. Greek Cypriots and Turkish Cypriots have much in common in their culture, but also differences. Many traditional foods (such as souvla and halloumi) and beverages are shared, as are many expressions and ways of life. Hospitality, and buying or offering food and drinks for guests or others, is common in both communities. In both communities, music, dance and art are integral parts of social life; the two share many artistic, verbal and nonverbal expressions, traditional dances such as the tsifteteli, similar dance costumes, and the importance placed on social activities.
However, the two communities have distinct religions and religious cultures, with the Greek Cypriots traditionally
being Greek Orthodox and Turkish Cypriots traditionally being Sunni Muslims, which has partly hindered cultural exchange.
Greek Cypriots have influences from Greece and Christianity, while Turkish Cypriots have influences from Turkey and
Islam. In modern times Cypriot art history begins with the painter Vassilis Vryonides (1883–1958) who studied at
the Academy of Fine Arts in Venice. Arguably the two founding fathers of modern Cypriot art were Adamantios Diamantis
(1900–1994) who studied at London's Royal College of Art and Christopheros Savva (1924–1968) who also studied in
London, at Saint Martin's School of Art. In many ways these two artists set the template for subsequent Cypriot art
and both their artistic styles and the patterns of their education remain influential to this day. In particular
the majority of Cypriot artists still train in England while others train at art schools in Greece and local art
institutions such as the Cyprus College of Art, University of Nicosia and the Frederick Institute of Technology.
One of the features of Cypriot art is a tendency towards figurative painting, although conceptual art is being rigorously
promoted by a number of art "institutions" and most notably the Nicosia Municipal Art Centre. Municipal art galleries
exist in all the main towns and there is a large and lively commercial art scene. Cyprus was due to host the international
art festival Manifesta in 2006 but this was cancelled at the last minute following a dispute between the Dutch organizers
of Manifesta and the Cyprus Ministry of Education and Culture over the location of some of the Manifesta events in
the Turkish sector of the capital Nicosia. The traditional folk music of Cyprus has several common elements with
Greek, Turkish, and Arabic music including Greco-Turkish dances such as the sousta, syrtos, zeibekikos, tatsia, and
karsilamas as well as the Middle Eastern-inspired tsifteteli and arapies. There is also a form of musical poetry
known as chattista which is often performed at traditional feasts and celebrations. The instruments commonly associated
with Cyprus folk music are the bouzouki, oud ("outi"), violin ("fkiolin"), lute ("laouto"), accordion, Cyprus flute
("pithkiavlin") and percussion (including the "toumperleki"). Composers associated with traditional Cypriot music
include Evagoras Karageorgis, Marios Tokas, Solon Michaelides and Savvas Salides. Among musicians are also the acclaimed
pianist Cyprien Katsaris and composer and artistic director of the European Capital of Culture initiative Marios
Joannou Elia. Popular music in Cyprus is generally influenced by the Greek Laïka scene; artists who play in this
genre include international platinum star Anna Vissi, Evridiki, and Sarbel. Hip Hop, R&B and reggae have been supported
by the emergence of Cypriot rap and the urban music scene at Ayia Napa. Cypriot rock music and Éntekhno rock is often
associated with artists such as Michalis Hatzigiannis and Alkinoos Ioannidis. Metal also has a small following in
Cyprus represented by bands such as Armageddon (rev.16:16), Blynd, Winter's Verge and Quadraphonic. Epic poetry,
notably the "acritic songs", flourished during the Middle Ages. Two chronicles, one written by Leontios Machairas and the other by Georgios Voustronios, cover the entire Middle Ages up to the end of Frankish rule (4th century–1489). Love poems written in medieval Cypriot Greek date from the 16th century; some are actual translations of poems by Petrarch, Bembo, Ariosto and G. Sannazzaro. Many Cypriot scholars fled Cyprus in troubled times; Ioannis Kigalas (c. 1622–1687), for example, migrated from Cyprus to Italy in the 17th century, and several of his works survive in the books of other scholars. Modern Greek Cypriot literary figures include the poet and writer Kostas
Montis, poet Kyriakos Charalambides, poet Michalis Pasiardis, writer Nicos Nicolaides, Stylianos Atteshlis, Altheides,
Loukis Akritas and Demetris Th. Gotsis. Dimitris Lipertis, Vasilis Michaelides and Pavlos Liasides are folk poets
who wrote poems mainly in the Cypriot-Greek dialect. Among leading Turkish Cypriot writers are Osman Türkay, twice
nominated for the Nobel Prize in Literature, Özker Yaşın, Neriman Cahit, Urkiye Mine Balman, Mehmet Yaşın and Neşe
Yaşın. Examples of Cyprus in foreign literature include the works of Shakespeare: the majority of his play Othello is set on the island. British writer Lawrence Durrell lived in Cyprus from 1952 until 1956, while working for the British colonial government on the island, and wrote the book Bitter Lemons about his time there; it won the second Duff Cooper Prize in 1957. More recently, British writer
Victoria Hislop used Cyprus as the setting for her 2014 novel The Sunrise. Local television companies in Cyprus include
the state owned Cyprus Broadcasting Corporation which runs two television channels. In addition on the Greek side
of the island there are the private channels ANT1 Cyprus, Plus TV, Mega Channel, Sigma TV, Nimonia TV (NTV) and New
Extra. In Northern Cyprus, the local channels are BRT, the Turkish Cypriot equivalent to the Cyprus Broadcasting
Corporation, and a number of private channels. The majority of local arts and cultural programming is produced by
the Cyprus Broadcasting Corporation and BRT, with local arts documentaries, review programmes and filmed drama series.
In 1994, cinematographic production received a boost with the establishment of the Cinema Advisory Committee. As of 2000, the annual amount set aside in the national budget stood at CY£500,000 (about €850,000).
In addition to government grants, Cypriot co-productions are eligible for funding from the Council of Europe's Eurimages
Fund, which finances European film co-productions. To date, four feature-length films in which a Cypriot was executive
producer have received funding from Eurimages. The first was I Sphagi tou Kokora (1992), completed in 1996; it was followed by Hellados (And the Trains Fly to the Sky, 1995), which is currently in post-production, and Costas Demetriou's O Dromos gia tin Ithaki (The Road to Ithaka, 1997), which premiered in March 2000. The theme song to The Road to Ithaka was composed
by Costas Cacoyannis and sung by Alexia Vassiliou. In September 1999, To Tama (The Promise) by Andreas Pantzis also
received funding from the Eurimages Fund. In 2009 the Greek director, writer and producer Vassilis Mazomenos filmed Guilt in Cyprus. In 2012 the film won the Best Screenwriting and Best Photography awards at the London Greek Film Festival (UK), and it was an official selection at the Montreal World Film Festival, the Cairo International Film Festival, the India International Film Festival, the Tallinn Black Nights Film Festival and Fantasporto, as well as the opening film of the Panorama of European Cinema in Athens. In 2010 the film was nominated for Best Film by the Hellenic Film Academy. Seafood
and fish dishes include squid, octopus, red mullet, and sea bass. Cucumber and tomato are used widely in salads.
Common vegetable preparations include potatoes in olive oil and parsley, pickled cauliflower and beets, asparagus
and taro. Other traditional delicacies are meats marinated in dried coriander seeds and wine, then dried
and smoked, such as lountza (smoked pork loin), charcoal-grilled lamb, souvlaki (pork and chicken cooked over charcoal),
and sheftalia (minced meat wrapped in mesentery). Pourgouri (bulgur, cracked wheat) is the traditional source of
carbohydrate other than bread, and is used to make the delicacy koubes. Fresh vegetables and fruits are common ingredients.
Frequently used vegetables include courgettes, green peppers, okra, green beans, artichokes, carrots, tomatoes, cucumbers,
lettuce and grape leaves, and pulses such as beans, broad beans, peas, black-eyed beans, chick-peas and lentils.
The most common fruits and nuts are pears, apples, grapes, oranges, mandarines, nectarines, medlar, blackberries,
cherries, strawberries, figs, watermelon, melon, avocado, lemon, pistachio, almond, chestnut, walnut, and hazelnut.
Tennis player Marcos Baghdatis was ranked 8th in the world, was a finalist at the Australian Open, and reached the
Wimbledon semi-final, all in 2006. High jumper Kyriakos Ioannou achieved a jump of 2.35 m at the 11th IAAF World
Championships in Athletics in Osaka, Japan, in 2007, winning the bronze medal. He has been ranked third in the world.
In motorsports, Tio Ellinas is a successful race car driver, currently racing in the GP3 Series for Marussia Manor
Motorsport. There is also mixed martial artist Costas Philippou, who competes in the Ultimate Fighting Championship
promotion's middleweight division. Costas holds a 6–3 record in UFC bouts, and recently defeated "The Monsoon" Lorenz Larkin with a knockout in the first round.
In a career spanning more than four decades, Spielberg's films have covered many themes and genres. Spielberg's early science-fiction
and adventure films were seen as archetypes of modern Hollywood blockbuster filmmaking. In later years, his films
began addressing humanistic issues such as the Holocaust (in Schindler's List), the transatlantic slave trade (in
Amistad), war (in Empire of the Sun, Saving Private Ryan, War Horse and Bridge of Spies) and terrorism (in Munich).
His other films include Close Encounters of the Third Kind, the Indiana Jones film series, and A.I. Artificial Intelligence.
Spielberg was born in Cincinnati, Ohio, to an Orthodox Jewish family. His mother, Leah (Adler) Posner (born 1920),
was a restaurateur and concert pianist, and his father, Arnold Spielberg (born 1917), was an electrical engineer
involved in the development of computers. His paternal grandparents were immigrants from Ukraine who settled in Cincinnati
in the first decade of the 1900s. In 1950, his family moved to Haddon Township, New Jersey when his father took a
job with RCA. Three years later, the family moved to Phoenix, Arizona. Spielberg attended Hebrew school from
1953 to 1957, in classes taught by Rabbi Albert L. Lewis. Spielberg won the Academy Award for Best Director for Schindler's
List (1993) and Saving Private Ryan (1998). Three of Spielberg's films—Jaws (1975), E.T. the Extra-Terrestrial (1982),
and Jurassic Park (1993)—achieved box office records, originating and coming to epitomize the blockbuster film. The
unadjusted gross of all Spielberg-directed films exceeds $9 billion worldwide, making him the highest-grossing director
in history. His personal net worth is estimated to be more than $3 billion. Since 1974 he has been associated with composer John Williams, who has composed music for all but five of Spielberg's feature films. As a child, Spielberg
faced difficulty reconciling being an Orthodox Jew with the perception of him by other children he played with. "It
isn't something I enjoy admitting," he once said, "but when I was seven, eight, nine years old, God forgive me, I
was embarrassed because we were Orthodox Jews. I was embarrassed by the outward perception of my parents' Jewish
practices. I was never really ashamed to be Jewish, but I was uneasy at times." Spielberg also said he suffered from
acts of anti-Semitic prejudice and bullying: "In high school, I got smacked and kicked around. Two bloody noses.
It was horrible." In 1958, he became a Boy Scout and fulfilled a requirement for the photography merit badge by making
a nine-minute 8 mm film entitled The Last Gunfight. Years later, Spielberg recalled to a magazine interviewer, "My
dad's still-camera was broken, so I asked the scoutmaster if I could tell a story with my father's movie camera.
He said yes, and I got an idea to do a Western. I made it and got my merit badge. That was how it all started." At
age thirteen, while living in Phoenix, Spielberg won a prize for a 40-minute war film he titled Escape to Nowhere,
using a cast composed of other high school friends. That motivated him to make 15 more amateur 8mm films. In
1963, at age sixteen, Spielberg wrote and directed his first independent film, a 140-minute science fiction adventure
called Firelight, which would later inspire Close Encounters. The film was made for $500, most of which came from
his father, and was shown in a local cinema for one evening, which earned back its cost. While still a student, he
was offered a small unpaid internship in the editing department at Universal Studios. He was later given the opportunity
to make a short film for theatrical release, the 26-minute, 35mm, Amblin', which he wrote and directed. Studio vice
president Sidney Sheinberg was impressed by the film, which had won a number of awards, and offered Spielberg a seven-year
directing contract. It made him the youngest director ever to be signed for a long-term deal with a major Hollywood
studio. He subsequently dropped out of college to begin professionally directing TV productions with Universal.
His first professional TV job came when he was hired to direct one of the segments for the 1969 pilot episode of
Night Gallery. The segment, "Eyes," starred Joan Crawford; she and Spielberg were reportedly close friends until
her death. The episode is unusual in his body of work, in that the camerawork is more highly stylized than his later,
more "mature" films. After this, and an episode of Marcus Welby, M.D., Spielberg got his first feature-length assignment:
an episode of The Name of the Game called "L.A. 2017". This futuristic science fiction episode impressed Universal
Studios and they signed him to a short contract. He did another segment on Night Gallery and did some work for shows
such as Owen Marshall: Counselor at Law and The Psychiatrist, before landing the first series episode of Columbo
(previous episodes were actually TV films). Based on the strength of his work, Universal signed Spielberg to do four
TV films. The first was a Richard Matheson adaptation called Duel. The film is about the psychotic driver of a Peterbilt 281 tanker truck who chases the terrified driver (Dennis Weaver) of a small Plymouth Valiant and tries to run him off
the road. Special praise of this film by the influential British critic Dilys Powell was highly significant to Spielberg's
career. Another TV film (Something Evil) was made and released to capitalize on the popularity of The Exorcist, then
a major best-selling book which had not yet been released as a film. He fulfilled his contract by directing the TV
film-length pilot of a show called Savage, starring Martin Landau. Spielberg's debut full-length feature film was
The Sugarland Express, about a married couple who are chased by police as the couple tries to regain custody of their
baby. Spielberg's cinematography for the police chase was praised by reviewers, and The Hollywood Reporter stated
that "a major new director is on the horizon." However, the film fared poorly at the box office and received
a limited release. Studio producers Richard D. Zanuck and David Brown offered Spielberg the director's chair for
Jaws, a thriller-horror film based on the Peter Benchley novel about an enormous killer shark. Spielberg has often
referred to the gruelling shoot as his professional crucible. Despite the film's ultimate, enormous success, it was
nearly shut down due to delays and budget overruns. But Spielberg persevered and finished the film. It was an enormous
hit, winning three Academy Awards (for editing, original score and sound) and grossing more than $470 million worldwide
at the box office. It also set the domestic record for box office gross, leading to what the press described as "Jawsmania."
Jaws made Spielberg a household name and one of America's youngest multi-millionaires, allowing him a great deal
of autonomy for his future projects. It was nominated for Best Picture and featured Spielberg's first of three collaborations with actor Richard Dreyfuss. Rejecting offers to direct Jaws 2, King Kong and Superman, Spielberg and Dreyfuss re-convened to work on a film about UFOs, which became Close Encounters of the Third Kind
(1977). One of the rare films both written and directed by Spielberg, Close Encounters was a critical and box office
hit, giving Spielberg his first Best Director nomination from the Academy as well as earning six other Academy Awards
nominations. It won Oscars in two categories (Cinematography, Vilmos Zsigmond, and a Special Achievement Award for
Sound Effects Editing, Frank E. Warner). This second blockbuster helped to secure Spielberg's rise. His next film,
1941, a big-budget World War II farce, was not nearly as successful: though it grossed over $92.4 million worldwide (and did make a small profit for co-producing studios Columbia and Universal), it was seen as a disappointment, mainly by the critics. Spielberg then revisited his Close Encounters project and, with financial backing from Columbia
Pictures, released Close Encounters: The Special Edition in 1980. For this, Spielberg fixed some of the flaws he
thought impeded the original 1977 version of the film and also, at the behest of Columbia, and as a condition of
Spielberg revising the film, shot additional footage showing the audience the interior of the mothership seen at
the end of the film (a decision Spielberg would later regret as he felt the interior of the mothership should have
remained a mystery). Nevertheless, the re-release was a moderate success, while the 2001 DVD release of the film
restored the original ending. Next, Spielberg teamed with Star Wars creator and friend George Lucas on an action
adventure film, Raiders of the Lost Ark, the first of the Indiana Jones films. The archaeologist and adventurer hero
Indiana Jones was played by Harrison Ford (whom Lucas had previously cast in his Star Wars films as Han Solo). The
film was considered an homage to the cliffhanger serials of the Golden Age of Hollywood. It became the biggest film
at the box office in 1981, and the recipient of numerous Oscar nominations including Best Director (Spielberg's second
nomination) and Best Picture (the second Spielberg film to be nominated for Best Picture). Raiders is still considered
a landmark example of the action-adventure genre. The film also led to Ford's casting in Ridley Scott's Blade Runner.
His next directorial feature was the Raiders prequel Indiana Jones and the Temple of Doom. Teaming up once again
with Lucas and Ford, Spielberg made a film plagued by uncertainty over its material and script. This film and the Spielberg-produced
Gremlins led to the creation of the PG-13 rating due to the high level of violence in films targeted at younger audiences.
In spite of this, Temple of Doom is rated PG by the MPAA, even though it is the darkest and, possibly, most violent
Indy film. Nonetheless, the film was still a huge blockbuster hit in 1984. It was on this project that Spielberg
also met his future wife, actress Kate Capshaw. In 1985, Spielberg released The Color Purple, an adaptation of Alice
Walker's Pulitzer Prize-winning novel of the same name, about a generation of empowered African-American women in Depression-era America. Starring Whoopi Goldberg and future talk-show superstar Oprah Winfrey, the film was a box
office smash and critics hailed Spielberg's successful foray into the dramatic genre. Roger Ebert proclaimed it the
best film of the year and later entered it into his Great Films archive. The film received eleven Academy Award nominations,
including two for Goldberg and Winfrey. However, much to the surprise of many, Spielberg did not get a Best Director
nomination. In 1987, as China began opening to Western capital investment, Spielberg shot the first American film
in Shanghai since the 1930s, an adaptation of J. G. Ballard's autobiographical novel Empire of the Sun, starring
John Malkovich and a young Christian Bale. The film garnered much praise from critics and was nominated for several
Oscars, but did not yield substantial box office revenues. Reviewer Andrew Sarris called it the best film of the
year and later included it among the best films of the decade. Spielberg was also a co-producer of the 1987 film
*batteries not included. After two forays into more serious dramatic films, Spielberg then directed the third Indiana
Jones film, 1989's Indiana Jones and the Last Crusade. Once again teaming up with Lucas and Ford, Spielberg also
cast actor Sean Connery in a supporting role as Indy's father. The film earned generally positive reviews and was
another box office success, becoming the highest grossing film worldwide that year; its total box office receipts
even topped those of Tim Burton's much-anticipated film Batman, which had been the bigger hit domestically. Also
in 1989, he re-united with actor Richard Dreyfuss for the romantic comedy-drama Always, about a daredevil pilot who
extinguishes forest fires. Spielberg's first romantic film, Always was only a moderate success and had mixed reviews.
Spielberg's next film, Schindler's List, was based on the true story of Oskar Schindler, a man who risked his life
to save 1,100 Jews from the Holocaust. Schindler's List earned Spielberg his first Academy Award for Best Director
(it also won Best Picture). With the film a huge success at the box office, Spielberg used the profits to set up
the Shoah Foundation, a non-profit organization that archives filmed testimony of Holocaust survivors. In 1997, the
American Film Institute listed it ninth among the 10 greatest American films ever made, a ranking that moved up to eighth when the list was remade in 2007. His next theatrical release, in 1998, was the World War II film Saving Private
Ryan, about a group of U.S. soldiers led by Capt. Miller (Tom Hanks) sent to bring home a paratrooper whose three
older brothers were killed within the same twenty-four hours, June 5–6, during the Normandy landings. The film was a huge
box office success, grossing over $481 million worldwide and was the biggest film of the year at the North American
box office (worldwide it ranked second, behind Michael Bay's Armageddon). Spielberg won his second Academy Award
for his direction. The film's graphic, realistic depiction of combat violence influenced later war films such as
Black Hawk Down and Enemy at the Gates. The film was also the first major hit for DreamWorks, which co-produced the
film with Paramount Pictures (as such, it was Spielberg's first release from the latter that was not part of the
Indiana Jones series). Later, Spielberg and Tom Hanks produced a TV mini-series based on Stephen Ambrose's book Band
of Brothers. The ten-part HBO mini-series follows Easy Company of the 101st Airborne Division's 506th Parachute Infantry
Regiment. The series won a number of awards at the Golden Globes and the Emmys. Spielberg and actor Tom Cruise collaborated
for the first time for the futuristic neo-noir Minority Report, based upon the science fiction short story written
by Philip K. Dick about a Washington D.C. police captain in the year 2054 who has been foreseen to murder a man he
has not yet met. The film received strong reviews, with the review-aggregation website Rotten Tomatoes giving it a 92% approval rating based on 206 positive reviews out of the 225 tallied. The film earned over $358
million worldwide. Roger Ebert, who named it the best film of 2002, praised its breathtaking vision of the future
as well as the way Spielberg blended CGI with live action. In 2005, Spielberg directed a modern adaptation
of War of the Worlds (a co-production of Paramount and DreamWorks), based on the H. G. Wells book of the same name
(Spielberg had been a huge fan of the book and the original 1953 film). It starred Tom Cruise and Dakota Fanning,
and, as with past Spielberg films, Industrial Light & Magic (ILM) provided the visual effects. Unlike E.T. and Close
Encounters of the Third Kind, which depicted friendly alien visitors, War of the Worlds featured violent invaders.
The film was another huge box office smash, grossing over $591 million worldwide. Spielberg's film Munich, about
the events following the 1972 Munich Massacre of Israeli athletes at the Olympic Games, was his second film essaying
Jewish relations in the world (the first being Schindler's List). The film is based on Vengeance, a book by Canadian
journalist George Jonas. It was previously adapted into the 1986 made-for-TV film Sword of Gideon. The film received
strong critical praise, but underperformed at the U.S. and world box-office; it remains one of Spielberg's most controversial
films to date. Munich received five Academy Awards nominations, including Best Picture, Film Editing, Original Music
Score (by John Williams), Best Adapted Screenplay, and Best Director for Spielberg. It was Spielberg's sixth Best
Director nomination and fifth Best Picture nomination. In June 2006, Steven Spielberg announced he would direct a
scientifically accurate film about "a group of explorers who travel through a worm hole and into another dimension",
from a treatment by Kip Thorne and producer Lynda Obst. In January 2007, screenwriter Jonathan Nolan met with them
to discuss adapting Obst and Thorne's treatment into a narrative screenplay. The screenwriter suggested the addition
of a "time element" to the treatment's basic idea, which was welcomed by Obst and Thorne. In March of that year,
Paramount hired Nolan, as well as scientists from Caltech, forming a workshop to adapt the treatment under the title
Interstellar. The following July, Kip Thorne said there was a push by people for him to portray himself in the film.
Spielberg later abandoned Interstellar, which was eventually directed by Christopher Nolan. In early 2009, Spielberg
shot the first film in a planned trilogy of motion capture films based on The Adventures of Tintin, written by Belgian
artist Hergé, with Peter Jackson. The Adventures of Tintin: The Secret of the Unicorn, was not released until October
2011, due to the complexity of the computer animation involved. The world premiere took place on October 22, 2011
in Brussels, Belgium. The film was released in North American theaters on December 21, 2011, in Digital 3D and IMAX.
It received generally positive reviews from critics, and grossed over $373 million worldwide. The Adventures of Tintin
won the award for Best Animated Feature Film at the Golden Globe Awards that year, becoming the first non-Pixar film to win the award since the category was introduced. Jackson has been announced to direct the second film. Spielberg
followed with War Horse, shot in England in the summer of 2010. It was released just four days after The Adventures
of Tintin, on December 25, 2011. The film, based on the novel of the same name written by Michael Morpurgo and published
in 1982, follows the long friendship between a British boy and his horse Joey before and during World War I – the
novel was also adapted into a hit play in London which is still running there, as well as on Broadway. The film was
released and distributed by Disney, with whom DreamWorks made a distribution deal in 2009. War Horse received generally
positive reviews from critics, and was nominated for six Academy Awards, including Best Picture. Spielberg next directed
the historical drama film Lincoln, starring Daniel Day-Lewis as United States President Abraham Lincoln and Sally
Field as Mary Todd Lincoln. Based on Doris Kearns Goodwin's bestseller Team of Rivals: The Political Genius of Abraham
Lincoln, the film covered the final four months of Lincoln's life. Written by Tony Kushner, the film was shot in
Richmond, Virginia, in late 2011, and was released in the United States by Disney in November 2012. The film's international
distribution was handled by 20th Century Fox. Upon release, Lincoln received widespread critical acclaim, and was
nominated for twelve Academy Awards (the most of any film that year) including Best Picture and Best Director for
Spielberg. It won the award for Best Production Design and Day-Lewis won the Academy Award for Best Actor for his
portrayal of Lincoln, becoming the first three-time winner in that category as well as the first to win for a performance
directed by Spielberg. Spielberg directed 2015's Bridge of Spies, a Cold War thriller based on the 1960 U-2 incident,
and focusing on James B. Donovan's negotiations with the Soviets for the release of pilot Gary Powers after his aircraft
was shot down over Soviet territory. The film starred Tom Hanks as Donovan, as well as Mark Rylance, Amy Ryan, and
Alan Alda, with a script by the Coen brothers. The film was shot from September to December 2014 on location in New
York City, Berlin and Wroclaw, Poland (which doubled for East Berlin), and was released by Disney on October 16,
2015. Bridge of Spies received positive reviews from critics, and was nominated for six Academy Awards, including
Best Picture. Since the mid-1980s, Spielberg has increased his role as a film producer. He headed up the production
team for several cartoons, including the Warner Bros. hits Tiny Toon Adventures, Animaniacs, Pinky and the Brain,
Toonsylvania, and Freakazoid!, for which he collaborated with Jean MacCurdy and Tom Ruegger. Reflecting his work on these series, most of their official titles carry the credit "Steven Spielberg presents", and he made numerous cameos on the shows. Spielberg also produced the Don Bluth animated features An American Tail and The Land Before Time, which
were released by Universal Studios. He also served as one of the executive producers of Who Framed Roger Rabbit and
its three related shorts (Tummy Trouble, Roller Coaster Rabbit, Trail Mix-Up), which were all released by Disney,
under both the Walt Disney Pictures and the Touchstone Pictures banners. He was furthermore, for a short time, the
executive producer of the long-running medical drama ER. In 1989, he brought the concept of The Dig to LucasArts.
He contributed to the project from that time until 1995, when the game was released. He also collaborated with software publisher Knowledge Adventure on the multimedia game Steven Spielberg's Director's Chair, which was released in
1996. Spielberg appears, as himself, in the game to direct the player. The Spielberg name provided branding for a
Lego Moviemaker kit, the proceeds of which went to the Starbright Foundation. Spielberg served as an uncredited executive
producer on The Haunting, The Prince of Egypt, Just Like Heaven, Shrek, Road to Perdition, and Evolution. He served
as an executive producer for the 1997 film Men in Black, and its sequels, Men in Black II and Men in Black III. In
2005, he served as a producer of Memoirs of a Geisha, an adaptation of the novel by Arthur Golden, a film to which
he was previously attached as director. In 2006, Spielberg co-executive produced with famed filmmaker Robert Zemeckis
a CGI children's film called Monster House, marking their eighth collaboration since 1990's Back to the Future Part
III. He also teamed with Clint Eastwood for the first time in their careers, co-producing Eastwood's Flags of Our
Fathers and Letters from Iwo Jima with Robert Lorenz and Eastwood himself. He earned his twelfth Academy Award nomination
for the latter film as it was nominated for Best Picture. Spielberg served as executive producer for Disturbia and
the Transformers live action film with Brian Goldner, an employee of Hasbro. The film was directed by Michael Bay
and written by Roberto Orci and Alex Kurtzman, and Spielberg continued to collaborate on the sequels, Transformers:
Revenge of the Fallen and Transformers: Dark of the Moon. In 2011, he produced the J. J. Abrams science fiction thriller
film Super 8 for Paramount Pictures. Other major television series Spielberg produced were Band of Brothers, Taken
and The Pacific. He was an executive producer on the critically acclaimed 2005 TV miniseries Into the West which
won two Emmy awards, including one for Geoff Zanelli's score. For his 2010 miniseries The Pacific he teamed up once
again with co-producer Tom Hanks, with Gary Goetzman also co-producing. The miniseries is believed to have cost
$250 million and is a 10-part war miniseries centered on the battles in the Pacific Theater during World War II.
Writer Bruce McKenna, who penned several installments of Band of Brothers, was the head writer. In 2011, Spielberg
launched Falling Skies, a science fiction television series, on the TNT network. He developed the series with Robert
Rodat and is credited as an executive producer. Spielberg is also producing the Fox TV series Terra Nova, which begins in the year 2149, when all life on Earth is threatened with extinction, prompting scientists to open a portal that allows people to travel 85 million years back to prehistoric times. Spielberg also produced The River,
Smash, Under the Dome, Extant and The Whispers, as well as a TV adaptation of Minority Report. Apart from being an ardent gamer, Spielberg has had a long history of involvement in video games. He is thanked in the credits of many games from his division DreamWorks Interactive, most notably Someone's in the Kitchen (with a script by Animaniacs' Paul Rugg), Goosebumps: Escape from HorrorLand, The Neverhood (all 1996), Skullmonkeys, Dilbert's Desktop Games, Goosebumps: Attack of the Mutant (all 1997), Boombots (1999), T'ai Fu: Wrath of the Tiger (1999), and Clive Barker's Undying (2001). In 2005 the director signed with Electronic Arts to collaborate on three games, including an action game and an award-winning puzzle game for the Wii called Boom Blox (and its 2009 sequel, Boom Blox Bash Party). Previously,
he was involved in creating the scenario for the adventure game The Dig. In 1996, Spielberg worked on and shot original
footage for a movie-making simulation game called Steven Spielberg's Director's Chair. He is the creator of the Medal
of Honor series by Electronic Arts. He is credited in the special thanks section of the 1998 video game Trespasser.
In 2013, Spielberg announced that he was collaborating with 343 Industries on a live-action TV show based on Halo. Spielberg
has filmed and is currently in post-production on an adaptation of Roald Dahl's celebrated children's story The BFG.
Spielberg's DreamWorks bought the rights in 2010, originally intending John Madden to direct. The film was written
by E.T. screenwriter Melissa Mathison and is co-produced by Walt Disney Pictures, marking the first Disney-branded
film to be directed by Spielberg. The BFG is set to premiere out of competition at the Cannes Film Festival in May
2016, before its wide release in the US on July 1, 2016. After completing filming on Ready Player One, while it is
in its lengthy, effects-heavy post-production, he will film his long-planned adaptation of David Kertzer's acclaimed
The Kidnapping of Edgardo Mortara. The book follows the true story of a young Jewish boy in 1858 Italy who was secretly
baptized by a family servant and then kidnapped from his family by the Papal States, where he was raised and trained
as a priest, causing international outrage and becoming a media sensation. First announced in 2014, the book has
been adapted by Tony Kushner and the film will again star Mark Rylance, as Pope Pius IX. It will be filmed in early
2017 for release at the end of that year, before Ready Player One is completed and released in 2018. Spielberg was
scheduled to shoot a $200 million adaptation of Daniel H. Wilson's novel Robopocalypse, adapted for the screen by
Drew Goddard. The film would follow a global human war against a robot uprising about 15–20 years in the future.
Like Lincoln, it was to be released by Disney in the United States and Fox overseas. It was set for release on April
25, 2014, with Anne Hathaway and Chris Hemsworth set to star, but Spielberg postponed production indefinitely in
January 2013, just before it had been set to begin. Spielberg's films often deal with several recurring themes. Most
of his films deal with ordinary characters searching for or coming in contact with extraordinary beings or finding
themselves in extraordinary circumstances. In an AFI interview in August 2000 Spielberg commented on his interest
in the possibility of extra terrestrial life and how it has influenced some of his films. Spielberg described himself
as feeling like an alien during childhood, and his interest came from his father, a science fiction fan, and his
opinion that aliens would not travel light years for conquest, but instead curiosity and sharing of knowledge. A
strong consistent theme in his family-friendly work is a childlike, even naïve, sense of wonder and faith, as attested
by works such as Close Encounters of the Third Kind, E.T. the Extra-Terrestrial, Hook, A.I. Artificial Intelligence
and The BFG. According to Warren Buckland, these themes are portrayed through the use of low height camera tracking
shots, which have become one of Spielberg's directing trademarks. In the cases when his films include children (E.T.
the Extra-Terrestrial, Empire of the Sun, Jurassic Park, etc.), this type of shot is more apparent, but it is also
used in films like Munich, Saving Private Ryan, The Terminal, Minority Report, and Amistad. This shot recurs throughout his filmography; notably, the water scenes in Jaws are filmed from the low-angle perspective of someone swimming. Another child-oriented theme in Spielberg's films is that of loss of innocence
and coming-of-age. In Empire of the Sun, Jim, a well-groomed and spoiled English youth, loses his innocence as he
suffers through World War II China. Similarly, in Catch Me If You Can, Frank naively and foolishly believes that
he can reclaim his shattered family if he accumulates enough money to support them. The most persistent theme throughout
his films is tension in parent-child relationships. Parents (often fathers) are reluctant, absent or ignorant. Peter
Banning in Hook starts off in the beginning of the film as a reluctant married-to-his-work parent who through the
course of the film regains the respect of his children. The notable absence of Elliott's father in E.T., is the most
famous example of this theme. In Indiana Jones and the Last Crusade, it is revealed that Indy has always had a very
strained relationship with his father, who is a professor of medieval literature, as his father always seemed more
interested in his work, specifically in his studies of the Holy Grail, than in his own son, although his father does
not seem to realize or understand the negative effect that his aloof nature had on Indy (he even believes he was
a good father in the sense that he taught his son "self reliance," which is not how Indy saw it). Even Oskar Schindler,
from Schindler's List, is reluctant to have a child with his wife. Munich depicts Avner as a man away from his wife
and newborn daughter. There are of course exceptions; Brody in Jaws is a committed family man, while John Anderton
in Minority Report is a shattered man after the disappearance of his son. This theme is arguably the most autobiographical
aspect of Spielberg's films, since Spielberg himself was affected by his parents' divorce as a child and by the absence
of his father. Extending this theme, protagonists in his films often come from families with divorced parents,
most notably E.T. the Extra-Terrestrial (protagonist Elliot's mother is divorced) and Catch Me If You Can (Frank
Abagnale's mother and father split early on in the film). A lesser-known example is Tim in Jurassic Park (early in the film, a secondary character mentions Tim and Lex's parents' divorce), and the divided family is often reconciled by the ending. Following this theme of reluctant fathers and father figures, Tim looks to Dr. Alan
Grant as a father figure. Initially, Dr. Grant is reluctant to return those paternal feelings to Tim. However, by
the end of the film, he has changed, and the kids even fall asleep with their heads on his shoulders. In terms of
casting and production itself, Spielberg has a known penchant for working with actors and production members from
his previous films. For instance, he has cast Richard Dreyfuss in several films: Jaws, Close Encounters of the Third
Kind, and Always. Aside from his role as Indiana Jones, Spielberg also cast Harrison Ford as a headteacher in E.T.
the Extra-Terrestrial (though the scene was ultimately cut). Although Spielberg directed veteran voice actor Frank
Welker only once (in Raiders of the Lost Ark, for which he voiced many of the animals), Welker has lent his voice
in a number of productions Spielberg has executive produced from Gremlins to its sequel Gremlins 2: The New Batch,
as well as The Land Before Time, Who Framed Roger Rabbit, and television shows such as Tiny Toons, Animaniacs, and
SeaQuest DSV. Spielberg has used Tom Hanks on several occasions and has cast him in Saving Private Ryan, Catch Me
If You Can, The Terminal, and Bridge of Spies. Spielberg has collaborated with Tom Cruise twice on Minority Report
and War of the Worlds, and cast Shia LaBeouf in five films: Transformers, Eagle Eye, Indiana Jones and the Kingdom
of the Crystal Skull, Transformers: Revenge of the Fallen, and Transformers: Dark of the Moon. Spielberg prefers
working with production members with whom he has developed an existing working relationship. An example of this is
his production relationship with Kathleen Kennedy who has served as producer on all his major films from E.T. the
Extra-Terrestrial to the recent Lincoln. In cinematography, Allen Daviau, a childhood friend, shot the early Spielberg film Amblin and most of his films up to Empire of the Sun, while Janusz Kamiński has shot every Spielberg film since Schindler's List (see List of film director and cinematographer collaborations); the film editor Michael Kahn has edited every film directed by Spielberg from Close Encounters to Munich (except E.T. the Extra-Terrestrial). Most of the DVDs of Spielberg's films have documentaries by Laurent Bouzereau. A famous
example of Spielberg working with the same professionals is his long-time collaboration with John Williams and the
use of his musical scores in all of his films since The Sugarland Express (except Bridge of Spies, The Color Purple
and Twilight Zone: The Movie). One of Spielberg's trademarks is his use of music by Williams to add to the visual
impact of his scenes and to create a lasting picture and sound of the film in the memory of the audience. These scenes often use images of the sun (e.g. Empire of the Sun, Saving Private Ryan, the final scene of
Jurassic Park, and the end credits of Indiana Jones and the Last Crusade (where they ride into the sunset)), of which
the last two feature a Williams score at that end scene. Spielberg is a contemporary of filmmakers George Lucas,
Francis Ford Coppola, Martin Scorsese, John Milius, and Brian De Palma, collectively known as the "Movie Brats".
Aside from his principal role as a director, Spielberg has acted as a producer for a considerable number of films,
including early hits for Joe Dante and Robert Zemeckis. Spielberg has rarely worked with the same screenwriter on multiple films, Tony Kushner and David Koepp being exceptions, each having written several of his films. Spielberg
first met actress Amy Irving in 1976 at the suggestion of director Brian De Palma, who knew he was looking for an
actress to play in Close Encounters. After meeting her, Spielberg told his co-producer Julia Phillips, "I met a real
heartbreaker last night." Although she was too young for the role, she and Spielberg began dating and she eventually
moved into what she described as his "bachelor funky" house. They lived together for four years, but the stresses
of their professional careers took a toll on their relationship. Irving wanted to be certain that whatever success
she attained as an actress would be her own: "I don't want to be known as Steven's girlfriend," she said, and chose
not to be in any of his films during those years. As a result, they broke up in 1979, but remained close friends.
Then in 1984 they renewed their romance, and in November 1985, they married, already having had a son, Max Samuel.
After three and a half years of marriage, however, many of the same competing stresses of their careers caused them
to divorce in 1989. They agreed to maintain homes near each other so as to facilitate the shared custody and parenting of their son. Their divorce was recorded as the third most costly celebrity divorce in history. In 2002, Spielberg
was one of eight flagbearers who carried the Olympic Flag into Rice-Eccles Stadium at the Opening Ceremonies of the
2002 Winter Olympic Games in Salt Lake City. In 2006, Premiere listed him as the most powerful and influential figure
in the motion picture industry. Time listed him as one of the 100 Most Important People of the Century. At the end
of the 20th century, Life named him the most influential person of his generation. In 2009, Boston University presented
him an honorary Doctor of Humane Letters degree. According to Forbes' Most Influential Celebrities 2014 list, Spielberg
was listed as the most influential celebrity in America. The annual list is conducted by E-Poll Market Research, which scored more than 6,600 celebrities on 46 different personality attributes, each score representing "how that person is perceived as influencing the public, their peers, or both." Spielberg received a score of 47, meaning 47% of the
US believes he is influential. Gerry Philpott, president of E-Poll Market Research, supported Spielberg's score by
stating, "If anyone doubts that Steven Spielberg has greatly influenced the public, think about how many will think
for a second before going into the water this summer." A collector of film memorabilia, Spielberg purchased a balsa
Rosebud sled from Citizen Kane (1941) in 1982. He bought Orson Welles's own directorial copy of the script for the
radio broadcast The War of the Worlds (1938) in 1994. Spielberg has purchased Academy Award statuettes being sold
on the open market and donated them to the Academy of Motion Picture Arts and Sciences, to prevent their further
commercial exploitation. His donations include the Oscars that Bette Davis received for Dangerous (1935) and Jezebel
(1938), and Clark Gable's Oscar for It Happened One Night (1934). Since playing Pong while filming Jaws in 1974,
Spielberg has been an avid video gamer. Spielberg played many of LucasArts' adventure games, including the first Monkey Island games. He owns a Wii, a PlayStation 3, a PSP, and an Xbox 360, and enjoys playing first-person shooters such
as the Medal of Honor series and Call of Duty 4: Modern Warfare. He has also criticized the use of cut scenes in
games, calling them intrusive, and feels making story flow naturally into the gameplay is a challenge for future
game developers. Drawing from his own experiences in Scouting, Spielberg helped the Boy Scouts of America develop
a merit badge in cinematography in order to help promote filmmaking as a marketable skill. The badge was launched
at the 1989 National Scout Jamboree, which Spielberg attended, and where he personally counseled many boys in their
work on requirements. That same year, 1989, saw the release of Indiana Jones and the Last Crusade. The opening scene
shows a teenage Indiana Jones in scout uniform bearing the rank of a Life Scout. Spielberg stated he made Indiana
Jones a Boy Scout in honor of his experience in Scouting. For his career accomplishments, service to others, and
dedication to a new merit badge Spielberg was awarded the Distinguished Eagle Scout Award. In 2004 he was admitted
as knight of the Légion d'honneur by president Jacques Chirac. On July 15, 2006, Spielberg was also awarded the Gold
Hugo Lifetime Achievement Award at the Summer Gala of the Chicago International Film Festival, and also was awarded
a Kennedy Center honor on December 3. The tribute to Spielberg featured a short, filmed biography narrated by Tom
Hanks and included thank-yous from World War II veterans for Saving Private Ryan, as well as a performance of the
finale to Leonard Bernstein's Candide, conducted by John Williams (Spielberg's frequent composer).
The Science Fiction Hall of Fame inducted Spielberg in 2005, the first year it considered non-literary contributors.
In November 2007, he was chosen for a Lifetime Achievement Award to be presented at the sixth annual Visual Effects
Society Awards in February 2009. He was set to be honored with the Cecil B. DeMille Award at the January 2008 Golden
Globes; however, because the ceremony's format was scaled back amid the 2007–08 writers' strike, the HFPA postponed the honor to the 2009 ceremony. In 2008, Spielberg was awarded the Légion d'honneur.
Starting in the coal mines, elevators were operated with steam power by the mid-19th century and were used for moving goods
in bulk in mines and factories. These steam driven devices were soon being applied to a diverse set of purposes -
in 1823, two architects working in London, Burton and Hormer, built and operated a novel tourist attraction, which
they called the "ascending room". It elevated paying customers to a considerable height in the center of London,
allowing them a magnificent panoramic view of downtown. The hydraulic crane was invented by Sir William Armstrong
in 1846, primarily for use at the Tyneside docks for loading cargo. These quickly supplanted the earlier steam driven
elevators: exploiting Pascal's law, they provided a much greater force. A water pump supplied a variable level of
water pressure to a plunger encased inside a vertical cylinder, allowing the level of the platform (carrying a heavy
load) to be raised and lowered. Counterweights and balances were also used to increase the lifting power of the apparatus.
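Armstrong's hydraulic design works by Pascal's law: pressure applied to the water is transmitted undiminished to the plunger, whose cross-sectional area converts it into lifting force. A minimal sketch of that arithmetic (the pressure and plunger dimensions below are invented for illustration, not taken from the text):

```python
import math

def hydraulic_lift_force(pressure_pa: float, plunger_diameter_m: float) -> float:
    """Pascal's law: force on the plunger = pressure x plunger cross-sectional area."""
    area = math.pi * (plunger_diameter_m / 2) ** 2
    return pressure_pa * area

# Hypothetical figures: 800 kPa of water pressure on a 0.25 m diameter plunger.
force_n = hydraulic_lift_force(800_000, 0.25)
print(f"Lifting force: {force_n / 1000:.1f} kN")  # roughly 39 kN, enough to raise about 4 tonnes
```

A wider plunger or higher pump pressure scales the force linearly, which is why these lifts easily outmuscled the earlier steam-driven machines.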
In 1845, the Neapolitan architect Gaetano Genovese installed in the Royal Palace of Caserta the "Flying Chair", an
elevator ahead of its time, covered with chestnut wood outside and with maple wood inside. It included a light, two
benches and a hand operated signal, and could be activated from the outside, without any effort on the part of the
occupants. Traction was controlled by a motor mechanic utilizing a system of toothed wheels. A safety system was
designed to take effect if the cords broke. It consisted of a beam pushed outwards by a steel spring. In 1852, Elisha
Otis introduced the safety elevator, which prevented the fall of the cab if the cable broke. The design of the Otis
safety elevator is somewhat similar to one type still used today. A governor device engages knurled roller(s), locking
the elevator to its guides should the elevator descend at excessive speed. He demonstrated it at the New York exposition
in the Crystal Palace in a dramatic, death-defying presentation in 1854, and the first such passenger elevator was
installed at 488 Broadway in New York City on March 23, 1857. The first elevator shaft preceded the first elevator
by four years. Construction for Peter Cooper's Cooper Union Foundation building in New York began in 1853. An elevator
shaft was included in the design, because Cooper was confident that a safe passenger elevator would soon be invented.
The shaft was cylindrical because Cooper thought it was the most efficient design. Later, Otis designed a special
elevator for the building. Today the Otis Elevator Company, now a subsidiary of United Technologies Corporation,
is the world's largest manufacturer of vertical transport systems. The first electric elevator was built by Werner
von Siemens in 1880 in Germany. The inventor Anton Freissler developed the ideas of von Siemens and built up a successful
enterprise in Austria-Hungary. The safety and speed of electric elevators were significantly enhanced by Frank Sprague
who added floor control, automatic elevators, acceleration control of cars, and safeties. His elevator ran faster
and with larger loads than hydraulic or steam elevators, and 584 electric elevators were installed before Sprague
sold his company to the Otis Elevator Company in 1895. Sprague also developed the idea and technology for multiple
elevators in a single shaft. Some people argue that elevators began as simple rope or chain hoists (see Traction
elevators below). An elevator is essentially a platform that is either pulled or pushed up by a mechanical means.
A modern-day elevator consists of a cab (also called a "cage", "carriage" or "car") mounted on a platform within
an enclosed space called a shaft or sometimes a "hoistway". In the past, elevator drive mechanisms were powered by
steam and water hydraulic pistons or by hand. In a "traction" elevator, cars are pulled up by means of rolling steel
ropes over a deeply grooved pulley, commonly called a sheave in the industry. The weight of the car is balanced by
a counterweight. Sometimes two elevators are built so that their cars always move synchronously in opposite directions,
and are each other's counterweight. Elevator doors protect riders from falling into the shaft. The most common configuration
is to have two panels that meet in the middle, and slide open laterally. In a cascading telescopic configuration
(potentially allowing wider entryways within limited space), the doors roll on independent tracks so that while open,
they are tucked behind one another, and while closed, they form cascading layers on one side. This can be configured
so that two sets of such cascading doors operate like the center opening doors described above, allowing for a very
wide elevator cab. In less expensive installations the elevator can also use one large "slab" door: a single panel
door the width of the doorway that opens to the left or right laterally. Some buildings have elevators with the single
door on the shaft way, and double cascading doors on the cab. Historically, AC motors were used for single- or double-speed elevator machines on the grounds of cost, in lower-usage applications where car speed and passenger comfort were less of an issue; but for higher-speed, larger-capacity elevators, infinitely variable speed control over the traction machine becomes necessary. Therefore, DC machines powered by an AC/DC motor generator were the preferred
solution. The MG set also typically powered the relay controller of the elevator, which has the added advantage of
electrically isolating the elevators from the rest of a building's electrical system, thus eliminating the transient
power spikes in the building's electrical supply caused by the motors starting and stopping (causing lighting to
dim every time the elevators are used for example), as well as interference to other electrical equipment caused
by the arcing of the relay contactors in the control system. Gearless traction machines are low-speed (low-RPM),
high-torque electric motors powered either by AC or DC. In this case, the drive sheave is directly attached to the
end of the motor. Gearless traction elevators can reach speeds of up to 20 m/s (4,000 ft/min). A brake is mounted
between the motor and gearbox or between the motor and drive sheave or at the end of the drive sheave to hold the
elevator stationary at a floor. This brake is usually an external drum type and is actuated by spring force and held
open electrically; a power failure will cause the brake to engage and prevent the elevator from falling (see inherent
safety and safety engineering). The brake can also be a disc type: one or more calipers over a disc at one end of the motor shaft or drive sheave, used in high-speed, high-rise, large-capacity elevators with machine rooms for braking power, compactness, and redundancy (assuming at least two calipers on the disc). An exception is the Kone MonoSpace's EcoDisc, which is not high-speed, high-rise, or large-capacity and is machine-room-less, but uses the same design as a thinner version of a conventional gearless traction machine. Alternatively, one or more disc brakes with a single caliper at one end of the motor shaft or drive sheave are used in machine-room-less elevators for compactness, braking power, and redundancy (assuming two or more brakes). In each case,
cables are attached to a hitch plate on top of the cab or may be "underslung" below a cab, and then looped over the
drive sheave to a counterweight attached to the opposite end of the cables, which reduces the amount of power needed
to move the cab. The counterweight is located in the hoist-way and rides a separate railway system; as the car goes
up, the counterweight goes down, and vice versa. This action is powered by the traction machine which is directed
by the controller, typically a relay logic or computerized device that directs starting, acceleration, deceleration
and stopping of the elevator cab. The weight of the counterweight is typically equal to the weight of the elevator
cab plus 40-50% of the capacity of the elevator. The grooves in the drive sheave are specially designed to prevent
the cables from slipping. "Traction" is provided to the ropes by the grip of the grooves in the sheave, hence the name. As the ropes age and the traction grooves wear, some traction is lost and the ropes must be replaced and the
sheave repaired or replaced. Sheave and rope wear may be significantly reduced by ensuring that all ropes have equal
tension, thus sharing the load evenly. Rope tension equalization may be achieved using a rope tension gauge, and
is a simple way to extend the lifetime of the sheaves and ropes. Elevators with more than 30 m (98 ft) of travel
have a system called compensation. This is a separate set of cables or a chain attached to the bottom of the counterweight
and the bottom of the elevator cab. This makes it easier to control the elevator, as it compensates for the differing
weight of cable between the hoist and the cab. If the elevator cab is at the top of the hoist-way, there is a short
length of hoist cable above the car and a long length of compensating cable below the car and vice versa for the
counterweight. If the compensation system uses cables, there will be an additional sheave in the pit below the elevator,
to guide the cables. If the compensation system uses chains, the chain is guided by a bar mounted between the counterweight
railway lines. The low mechanical complexity of hydraulic elevators in comparison to traction elevators makes them
ideal for low rise, low traffic installations. They are less energy efficient as the pump works against gravity to
push the car and its passengers upwards; this energy is lost when the car descends on its own weight. The high current
draw of the pump when starting up also places higher demands on a building’s electrical system. There are also environmental
concerns should the lifting cylinder leak fluid into the ground. A climbing elevator is a self-ascending elevator
with its own propulsion. The propulsion can be done by an electric or a combustion engine. Climbing elevators are
used in guyed masts or towers, to provide easy access to parts of these structures, such as flight safety
lamps for maintenance. An example would be the Moonlight towers in Austin, Texas, where the elevator holds only one
person and equipment for maintenance. The Glasgow Tower — an observation tower in Glasgow, Scotland — also makes
use of two climbing elevators. An elevator of this kind uses a vacuum on top of the cab and a valve on the top of the "shaft" to move the cab upwards, closing the valve to hold the cab at the same level. A diaphragm or a piston is used as a "brake" if there is a sudden increase in pressure above the cab. To go down, the system opens the valve so that air can pressurize the top of the "shaft", allowing the cab to descend under its own weight.
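The pressure difference such a vacuum (pneumatic) elevator needs in order to support its cab is modest, as a rough sketch shows (the cab mass and tube diameter here are assumed for illustration, not from the text):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def required_pressure_drop(total_mass_kg: float, tube_diameter_m: float) -> float:
    """Pressure reduction above the cab needed to support it: dP = m*g / A."""
    area = math.pi * (tube_diameter_m / 2) ** 2
    return total_mass_kg * G / area

# Assumed figures: a 240 kg cab plus one 80 kg passenger in a 0.95 m diameter tube.
dp = required_pressure_drop(240 + 80, 0.95)
print(f"Required pressure drop: {dp / 1000:.1f} kPa "
      f"({dp / 101_325:.1%} of atmospheric pressure)")
```

Only a few percent of atmospheric pressure suffices for a light cab, which also explains the low weight capacity noted below: heavier loads would demand a pressure differential the acrylic tube and seals cannot sustain.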
This also means that in case of a power failure, the cab will automatically go down. The "shaft" is made of acrylic and is always round, due to the shape of the vacuum pump turbine. Rubber seals are used to keep the air inside the cab. Due to technical limitations, these elevators have a low capacity; they usually allow 1-3 passengers and up to 525 lbs. In the first half of the twentieth century, almost all elevators had no automatic positioning
of the floor on which the cab would stop. Some of the older freight elevators were controlled by switches operated
by pulling on adjacent ropes. In general, most elevators before WWII were manually controlled by elevator operators
using a rheostat connected to the motor. This rheostat (see picture) was enclosed within a cylindrical container
about the size and shape of a cake. This was mounted upright or sideways on the cab wall and operated via a projecting
handle, which was able to slide around the top half of the cylinder. The elevator motor was located at the top of
the shaft or beside the bottom of the shaft. Pushing the handle forward would cause the cab to rise; backwards would
make it sink. The harder the pressure, the faster the elevator would move. The handle also served as a dead man switch:
if the operator let go of the handle, it would return to its upright position, causing the elevator cab to stop.
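The operator's handle behaviour amounts to a simple control mapping, which can be sketched as follows (the deflection range and maximum speed are illustrative assumptions, not historical specifications):

```python
def cab_speed(handle_deflection: float, max_speed: float = 2.0) -> float:
    """Map handle position to cab speed, dead-man-switch style.

    handle_deflection: -1.0 (full back) .. +1.0 (full forward);
    0.0 is the spring-returned upright position, which stops the cab.
    Returns a signed speed in m/s (positive = up), proportional to pressure.
    """
    d = max(-1.0, min(1.0, handle_deflection))  # clamp to the handle's travel
    return d * max_speed

# A released handle springs back to upright, so the cab stops.
print(cab_speed(0.0), cab_speed(0.5), cab_speed(-1.0))
```

The dead-man property falls out of the spring return: any input the operator is not actively holding decays to the stop position.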
In time, safety interlocks would ensure that the inner and outer doors were closed before the elevator was allowed
to move. Some skyscraper buildings and other types of installation feature a destination operating panel where a
passenger registers their floor calls before entering the car. The system lets them know which car to wait for, instead
of everyone boarding the next car. In this way, travel time is reduced as the elevator makes fewer stops for individual
passengers, and the computer distributes adjacent stops to different cars in the bank. Although travel time is reduced,
passenger waiting times may be longer as they will not necessarily be allocated the next car to depart. During the
down peak period the benefit of destination control will be limited as passengers have a common destination. However,
performance enhancements cannot be generalized as the benefits and limitations of the system are dependent on many
factors. One problem is that the system is subject to gaming. Sometimes, one person enters the destination for a
large group of people going to the same floor. The dispatching algorithm is usually unable to completely cater for
the variation, and latecomers may find the elevator they are assigned to is already full. Also, occasionally, one
person may press the floor multiple times. This is common with up/down buttons when people believe this to be an
effective way to hurry elevators. However, this will make the computer think multiple people are waiting and will
allocate empty cars to serve this one person. To prevent this problem, in one implementation of destination control,
every user gets an RFID card to identify themselves, so the system knows every user's call and can cancel the first call if the passenger decides to travel to another destination, preventing empty calls. The newest systems can even determine where people are located, and how many are on each floor, from their identification, either for the purposes of evacuating the building or for security reasons. Another way to prevent this issue is to treat everyone travelling
from one floor to another as one group and to allocate only one car for that group. During up-peak mode (also called
moderate incoming traffic), elevator cars in a group are recalled to the lobby to provide expeditious service to
passengers arriving at the building, most typically in the morning as people arrive for work or at the conclusion
of a lunch-time period. Elevators are dispatched one-by-one when they reach a pre-determined passenger load, or when
they have had their doors opened for a certain period of time. The next elevator to be dispatched usually has its
hall lantern or a "this car leaving next" sign illuminated to encourage passengers to make maximum use of the available
elevator system capacity. Some elevator banks are programmed so that at least one car will always return to the lobby
floor and park whenever it becomes free. Independent service is a special service mode found on most elevators. It
is activated by a key switch either inside the elevator itself or on a centralized control panel in the lobby. When
an elevator is placed on independent service, it will no longer respond to hall calls. (In a bank of elevators, traffic
is rerouted to the other elevators, while in a single elevator, the hall buttons are disabled). The elevator will
remain parked on a floor with its doors open until a floor is selected and the door close button is held until the
elevator starts to travel. Independent service is useful when transporting large goods or moving groups of people
between certain floors. Inspection service is designed to provide access to the hoistway and car top for inspection
and maintenance purposes by qualified elevator mechanics. It is first activated by a key switch on the car operating
panel usually labeled 'Inspection', 'Car Top', 'Access Enable' or 'HWENAB'. When this switch is activated the elevator
will come to a stop if moving, car calls will be canceled (and the buttons disabled), and hall calls will be assigned
to other elevator cars in the group (or canceled in a single elevator configuration). The elevator can now only be
moved by the corresponding 'Access' key switches, usually located at the highest (to access the top of the car) and
lowest (to access the elevator pit) landings. The access key switches will allow the car to move at reduced inspection
speed with the hoistway door open. This speed can range from anywhere up to 60% of normal operating speed on most
controllers, and is usually defined by local safety codes. Phase one mode is activated by a corresponding smoke sensor
or heat sensor in the building. Once an alarm has been activated, the elevator will automatically go into phase one.
The elevator will wait a set amount of time, then go into nudging mode to warn occupants that the elevator is leaving
the floor. Once the elevator has left the floor, depending on where the alarm was set off, the elevator will go to
the fire-recall floor. However, if the alarm was activated on the fire-recall floor, the elevator will have an alternate
floor to recall to. When the elevator is recalled, it proceeds to the recall floor and stops with its doors open.
The elevator will no longer respond to calls or move in any direction. Located on the fire-recall floor is a fire-service
key switch. The fire-service key switch has the ability to turn fire service off, turn fire service on or to bypass
fire service. The only way to return the elevator to normal service is to switch it to bypass after the alarms have
reset. Phase-two mode can only be activated by a key switch located inside the elevator on the centralized control
panel. This mode was created for firefighters so that they may rescue people from a burning building. The phase-two
key switch located on the COP has three positions: off, on, and hold. By turning phase two on, the firefighter enables
the car to move. However, like independent-service mode, the car will not respond to a car call unless the firefighter
manually pushes and holds the door close button. Once the elevator gets to the desired floor it will not open its
doors unless the firefighter holds the door open button. This is in case the floor is burning and the firefighter
can feel the heat and knows not to open the door. The firefighter must hold door open until the door is completely
opened. If for any reason the firefighter wishes to leave the elevator, they will use the hold position on the key
switch to make sure the elevator remains at that floor. If the firefighter wishes to return to the recall floor,
they simply turn the key off and close the doors. Hospital elevators often provide a code-blue service that summons the car to a given floor for a medical emergency; once the elevator arrives at that floor, it will park with its doors open and the car buttons will be disabled to prevent a passenger from taking control of the elevator. Medical personnel
must then activate the code-blue key switch inside the car, select their floor and close the doors with the door
close button. The elevator will then travel non-stop to the selected floor, and will remain in code-blue service
until switched off in the car. Some hospital elevators will feature a 'hold' position on the code-blue key switch
(similar to fire service) which allows the elevator to remain at a floor locked out of service until code blue is
deactivated. When power is lost in a traction elevator system, all elevators will initially come to a halt. One by
one, each car in the group will return to the lobby floor, open its doors and shut down. People in the remaining
elevators may see an indicator light or hear a voice announcement informing them that the elevator will return to
the lobby shortly. Once all cars have successfully returned, the system will then automatically select one or more
cars to be used for normal operations and these cars will return to service. The car(s) selected to run under emergency
power can be manually overridden by a key or strip switch in the lobby. In order to help prevent entrapment, when
the system detects that it is running low on power, it will bring the running cars to the lobby or nearest floor,
open the doors and shut down. In hydraulic elevator systems, emergency power will lower the elevators to the lowest
landing and open the doors to allow passengers to exit. The doors then close after an adjustable time period and
the car remains unusable until reset, usually by cycling the elevator main power switch. Typically, due to the high
current draw when starting the pump motor, hydraulic elevators are not run using standard emergency power systems.
Buildings like hospitals and nursing homes usually size their emergency generators to accommodate this draw. However,
the increasing use of current-limiting motor starters, commonly known as "soft-start" contactors, avoid much of this
problem, and the current draw of the pump motor is less of a limiting concern. Statistically speaking, cable-borne
elevators are extremely safe. Their safety record is unsurpassed by any other vehicle system. In 1998, it was estimated
that approximately eight millionths of one percent (1 in 12 million) of elevator rides result in an anomaly, and
the vast majority of these were minor things such as the doors failing to open. Of the 20 to 30 elevator-related
deaths each year, most of them are maintenance-related — for example, technicians leaning too far into the shaft
or getting caught between moving parts, and most of the rest are attributed to other kinds of accidents, such as
people stepping blindly through doors that open into empty shafts or being strangled by scarves caught in the doors.
In fact, prior to the September 11th terrorist attacks, the only known free-fall incident in a modern cable-borne
elevator happened in 1945 when a B-25 bomber struck the Empire State Building in fog, severing the cables of an elevator
cab, which fell from the 75th floor all the way to the bottom of the building, seriously injuring (though not killing)
the sole occupant — the elevator operator. However, there was an incident in 2007 at a Seattle children's hospital,
where a ThyssenKrupp ISIS machine-room-less elevator free-fell until the safety brakes were engaged. This was due
to a flaw in the design where the cables were connected at one common point, and the Kevlar ropes had a tendency
to overheat and cause slipping (or, in this case, a free-fall). While it is possible (though extraordinarily unlikely)
for an elevator's cable to snap, all elevators in the modern era have been fitted with several safety devices which
prevent the elevator from simply free-falling and crashing. An elevator cab is typically borne by 2 to 6 (up to 12
or more in high rise installations) hoist cables or belts, each of which is capable on its own of supporting the
full load of the elevator plus twenty-five percent more weight. In addition, there is a device which detects whether
the elevator is descending faster than its maximum designed speed; if this happens, the device causes copper (or
silicon nitride in high rise installations) brake shoes to clamp down along the vertical rails in the shaft, stopping
the elevator quickly, but not so abruptly as to cause injury. This device is called the governor, and was invented
by Elisha Graves Otis. In addition, an oil/hydraulic, spring, polyurethane, or telescopic oil/hydraulic buffer, or a combination of these (depending on the travel height and speed), is installed at the bottom of the shaft (or in the bottom of the cab, and sometimes also at the top of the cab or shaft) to somewhat cushion any impact. However, in Thailand in November 2012, a woman was killed in a free-falling elevator, in what was reported as the "first legally
recognised death caused by a falling lift". Past problems with hydraulic elevators include underground electrolytic
destruction of the cylinder and bulkhead, pipe failures, and control failures. Single bulkhead cylinders, typically
built prior to a 1972 ASME A17.1 Elevator Safety Code change requiring a second dished bulkhead, were subject to
possible catastrophic failure. The code previously permitted only single-bottom hydraulic cylinders. In the event
of a cylinder breach, the fluid loss results in uncontrolled down movement of the elevator. This creates two significant
hazards: being subject to an impact at the bottom when the elevator stops suddenly and being in the entrance for
a potential shear if the rider is partly in the elevator. Because it is impossible to verify the system at all times,
the code requires periodic testing of the pressure capability. Another solution to protect against a cylinder blowout
is to install a plunger gripping device. One commercially available is known by the marketing name "LifeJacket".
This is a device which, in the event of an uncontrolled downward acceleration, nondestructively grips the plunger
and stops the car. A device known as an overspeed or rupture valve is attached to the hydraulic inlet/outlet of the
cylinder and is adjusted for a maximum flow rate. If a pipe or hose were to break (rupture), the flow rate of the
rupture valve will surpass a set limit and mechanically stop the outlet flow of hydraulic fluid, thus stopping the
plunger and the car in the down direction. Safety testing of mine shaft elevator rails is routinely undertaken. The
method involves destructive testing of a segment of the cable. The ends of the segment are frayed, then set in conical
zinc molds. Each end of the segment is then secured in a large, hydraulic stretching machine. The segment is then
placed under increasing load to the point of failure. Data about elasticity, load, and other factors is compiled
and a report is produced. The report is then analyzed to determine whether or not the entire rail is safe to use.
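The report described above can be reduced to a simple pass/fail check: compare the measured breaking load of the tested segment against the rope's rated working load multiplied by a required safety factor. The loads and the factor of 8 below are illustrative assumptions, not values from this article:

```python
def cable_passes(breaking_load_kn: float, working_load_kn: float,
                 required_factor: float = 8.0) -> bool:
    """True if the destructively tested segment broke at a load at least
    `required_factor` times the rope's rated working load."""
    return breaking_load_kn >= required_factor * working_load_kn

# Hypothetical test: segment failed at 410 kN; rope is rated for 45 kN.
# The required minimum is 8 * 45 = 360 kN, so this sample passes.
print(cable_passes(410, 45))
```

Because only a sample is tested, the factor has to absorb variation along the rest of the rope as well as wear between inspections, which is why hoisting codes demand such large margins.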
Passenger elevator capacity is related to the available floor space. Generally, passenger elevators are available
in capacities from 500 to 2,700 kg (1,000–6,000 lb) in 230 kg (500 lb) increments.[citation needed] Generally passenger
elevators in buildings of eight floors or fewer are hydraulic or electric, which can reach speeds up to 1 m/s (200 ft/min) hydraulic and up to 2.5 m/s (500 ft/min) electric. In buildings up to ten floors, electric and gearless elevators are likely to have speeds up to 3 m/s (590 ft/min), and above ten floors speeds range from 3 to 10 m/s (590–2,000
ft/min).[citation needed] Sometimes passenger elevators are used as a city transport along with funiculars. For example,
there is a 3-station underground public elevator in Yalta, Ukraine, which takes passengers from the top of a hill
above the Black Sea on which hotels are perched, to a tunnel located on the beach below. At Casco Viejo station in
the Bilbao Metro, the elevator that provides access to the station from a hilltop neighborhood doubles as city transportation:
the station's ticket barriers are set up in such a way that passengers can pay to reach the elevator from the entrance
in the lower city, or vice versa. See also the Elevators for urban transport section. A freight elevator, or goods
lift, is an elevator designed to carry goods, rather than passengers. Freight elevators are generally required to
display a written notice in the car that the use by passengers is prohibited (though not necessarily illegal), though
certain freight elevators allow dual use through the use of an inconspicuous riser. In order for an elevator to be
legal to carry passengers in some jurisdictions it must have a solid inner door. Freight elevators are typically
larger and capable of carrying heavier loads than a passenger elevator, generally from 2,300 to 4,500 kg. Freight
elevators may have manually operated doors, and often have rugged interior finishes to prevent damage while loading
and unloading. Although hydraulic freight elevators exist, electric elevators are more energy efficient for the work
of freight lifting.[citation needed] Stage lifts and orchestra lifts are specialized elevators, typically powered
by hydraulics, that are used to raise and lower entire sections of a theater stage. For example, Radio City Music
Hall has four such elevators: an orchestra lift that covers a large area of the stage, and three smaller lifts near
the rear of the stage. In this case, the orchestra lift is powerful enough to raise an entire orchestra, or an entire
cast of performers (including live elephants) up to stage level from below. A residential elevator
is often permitted to be of lower cost and complexity than full commercial elevators. They may have unique design
characteristics suited for home furnishings, such as hinged wooden shaft-access doors rather than the typical metal
sliding doors of commercial elevators. Construction may be less robust than in commercial designs with shorter maintenance
periods, but safety systems such as locks on shaft access doors, fall arrestors, and emergency phones must still
be present in the event of malfunction. Some types of residential elevators do not use a traditional elevator shaft,
machine room, and elevator hoistway. This allows an elevator to be installed where a traditional elevator may not
fit, and simplifies installation. The ASME board first approved machine-room-less systems in a revision of the ASME
A17.1 in 2007. Machine-room-less elevators have been available commercially since the mid-1990s; however, cost and overall size prevented their adoption in the residential elevator market until around 2010. Dumbwaiters are small
freight elevators that are intended to carry food, books or other small freight loads rather than passengers. They
often connect kitchens to rooms on other floors. They usually do not have the same safety features found in passenger elevators, such as multiple ropes for redundancy. They have a lower capacity, and they can be up to 1 metre (3 ft) tall. There is a control panel at every stop that mimics the ones found in passenger elevators, with calling, door control, and floor selection. Material transport elevators generally consist of an inclined plane on which a conveyor belt runs.
The conveyor often includes partitions to ensure that the material moves forward. These elevators are often used
in industrial and agricultural applications. When such mechanisms (or spiral screws or pneumatic transport) are used
to elevate grain for storage in large vertical silos, the entire structure is called a grain elevator. Belt elevators
are often used in docks for loading loose materials such as coal, iron ore and grain into the holds of bulk carriers.
Elevators necessitated new social protocols. When Nicholas II of Russia visited the Hotel Adlon in Berlin, his courtiers
panicked about who would enter the elevator first, and who would press the buttons. In Lifted: A Cultural History
of the Elevator, author Andreas Bernard documents other social impacts caused by the modern elevator, including thriller
movies about stuck elevators, casual encounters and sexual tension on elevators, the reduction of personal space,
and concerns about personal hygiene. In addition to the call buttons, elevators usually have floor indicators (often
illuminated by LED) and direction lanterns. The former are almost universal in cab interiors with more than two stops
and may be found outside the elevators as well on one or more of the floors. Floor indicators can consist of a dial
with a rotating needle, but the most common types are those with successively illuminated floor indications or LCDs.
Likewise, a change of floors or an arrival at a floor is indicated by a sound, depending on the elevator. Direction
lanterns are also found both inside and outside elevator cars, but they should always be visible from outside because
their primary purpose is to help people decide whether or not to get on the elevator. If somebody waiting for the
elevator wants to go up, but a car comes first that indicates that it is going down, then the person may decide not
to get on the elevator. If the person waits, a car travelling up will still stop for them. Direction indicators are sometimes
etched with arrows or shaped like arrows and/or use the convention that one that lights up red means "down" and green
means "up". Since the color convention is often undermined or overridden by systems that do not invoke it, it is
usually used only in conjunction with other differentiating factors. An example of a place whose elevators use only
the color convention to differentiate between directions is the Museum of Contemporary Art in Chicago, where a single
circle can be made to light up green for "up" and red for "down". Sometimes directions must be inferred by the position
of the indicators relative to one another. There are several technologies aimed at providing a better experience for passengers
suffering from claustrophobia, anthropophobia or social anxiety. Israeli startup DigiGage uses motion sensors to
scroll the pre-rendered images, building and floor-specific content on a screen embedded into the wall as the cab
moves up and down. British company LiftEye provides a virtual-window technology to turn a common elevator into a panoramic one. It creates a 3D video panorama using live feeds from cameras placed vertically along the facade and synchronizes it with cab movement. The video is projected on wall-sized screens, making it look like the walls are made of glass.
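Synchronizing a projected panorama with cab movement amounts to mapping the cab's height in the shaft onto an offset in a pre-stitched facade video. The function below is a hypothetical sketch of such a mapping, not LiftEye's actual implementation:

```python
def frame_index(cab_height_m: float, shaft_height_m: float,
                total_frames: int) -> int:
    """Pick the pre-stitched panorama frame that matches the cab's height.

    Frame 0 shows the view at the bottom of the shaft; the last frame
    shows the view at the top. Heights outside the shaft are clamped.
    """
    fraction = max(0.0, min(1.0, cab_height_m / shaft_height_m))
    return round(fraction * (total_frames - 1))

# A cab halfway up a 60 m shaft selects a frame near the middle
# of a 1000-frame panorama.
print(frame_index(30.0, 60.0, 1000))
```

Driving the frame index from the cab's measured position (rather than elapsed time) keeps the image in step even when the cab accelerates, decelerates, or stops at a floor.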
In most US and Canadian jurisdictions, passenger elevators are required to conform to the American Society of Mechanical
Engineers' Standard A17.1, Safety Code for Elevators and Escalators. As of 2006, all states except Kansas, Mississippi,
North Dakota, and South Dakota have adopted some version of ASME codes, though not necessarily the most recent. In
Canada the document is the CAN/CSA B44 Safety Standard, which was harmonized with the US version in the 2000 edition.[citation
needed] In addition, passenger elevators may be required to conform to the requirements of A17.3 for existing elevators
where referenced by the local jurisdiction. Passenger elevators are tested using the ASME A17.2 Standard. The frequency
of these tests is mandated by the local jurisdiction, which may be a town, city, state or provincial standard. Most
elevators have a location in which the permit for the building owner to operate the elevator is displayed. While
some jurisdictions require the permit to be displayed in the elevator cab, other jurisdictions allow for the operating
permit to be kept on file elsewhere – such as the maintenance office – and to be made available for inspection on
demand. In such cases instead of the permit being displayed in the elevator cab, often a notice is posted in its
place informing riders of where the actual permits are kept. As of January 2008, Spain is the nation with the most
elevators installed in the world, with 950,000 elevators installed that run more than one hundred million lifts every
day, followed by United States with 700,000 elevators installed and China with 610,000 elevators installed since
1949. In Brazil, it is estimated that there are approximately 300,000 elevators currently in operation. The world's
largest market for elevators is Italy, with more than 1,629 million euros of sales and 1,224 million euros of internal
market. Double deck elevators are used in the Taipei 101 office tower. Tenants of even-numbered floors first take
an escalator (or an elevator from the parking garage) to the 2nd level, where they will enter the upper deck and
arrive at their floors. The lower deck is turned off during low-volume hours, and the upper deck can act as a single-level
elevator stopping at all adjacent floors. For example, the 85th floor restaurants can be accessed from the 60th floor
sky-lobby. Restaurant customers must clear their reservations at the reception counter on the 2nd floor. A bank of
express elevators stop only on the sky lobby levels (36 and 60, upper-deck car), where tenants can transfer to "local"
elevators. The high-speed observation deck elevators accelerate to a world-record certified speed of 1,010 metres
per minute (61 km/h) in 16 seconds, then slow down for arrival with subtle air-pressure sensations. The doors open 37 seconds after departure from the 5th floor. Special features include aerodynamic cars and counterweights, and cabin pressure control to help passengers adapt smoothly to pressure changes. The downward journey is completed at a reduced speed of 600 metres per minute, with the doors opening 52 seconds after departure. The Twilight Zone Tower of Terror is the
common name for a series of elevator attractions at the Disney's Hollywood Studios park in Orlando, the Disney California
Adventure Park park in Anaheim, the Walt Disney Studios Park in Paris and the Tokyo DisneySea park in Tokyo. The
central element of this attraction is a simulated free-fall achieved through the use of a high-speed elevator system.
For safety reasons, passengers are seated and secured in their seats rather than standing. Unlike most traction elevators,
the elevator car and counterweight are joined using a rail system in a continuous loop running through both the top
and the bottom of the drop shaft. This allows the drive motor to pull down on the elevator car from underneath, resulting
in downward acceleration greater than that of normal gravity. The high-speed drive motor is used to rapidly lift
the elevator as well. The passenger cabs are mechanically separated from the lift mechanism, thus allowing the elevator shafts to be used continuously while passengers board and disembark from the cabs, as well as move through show scenes
on various floors. The passenger cabs, which are automated guided vehicles or AGVs, move into the vertical motion
shaft and lock themselves in before the elevator starts moving vertically. Multiple elevator shafts are used to further
improve passenger throughput. The doorways of the top few "floors" of the attraction are open to the outdoor environment,
thus allowing passengers to look out from the top of the structure. Guests ascending to the 67th, 69th, and 70th
level observation decks (dubbed "Top of the Rock") atop the GE Building at Rockefeller Center in New York City ride
a high-speed glass-top elevator. On entering, the cab appears to be that of an ordinary elevator. However, once
the cab begins moving, the interior lights turn off and a special blue light above the cab turns on. This lights
the entire shaft, so riders can see the moving cab through its glass ceiling as it rises and lowers through the shaft.
Music plays and various animations are also displayed on the ceiling. The entire ride takes about 60 seconds. Part
of the Haunted Mansion attraction at Disneyland in Anaheim, California, and Disneyland in Paris, France, takes place
on an elevator. The "stretching room" on the ride is actually an elevator that travels downwards, giving access to
a short underground tunnel which leads to the rest of the attraction. The elevator has no ceiling and its shaft is
decorated to look like walls of a mansion. Because there is no roof, passengers are able to see the walls of the
shaft by looking up, which gives the illusion of the room stretching.
Neptune is the eighth and farthest known planet from the Sun in the Solar System. It is the fourth-largest planet by diameter
and the third-largest by mass. Among the giant planets in the Solar System, Neptune is the most dense. Neptune is
17 times the mass of Earth and is slightly more massive than its near-twin Uranus, which is 15 times the mass of
Earth and slightly larger than Neptune.[c] Neptune orbits the Sun once every 164.8 years at an average distance of
30.1 astronomical units (4.50×109 km). Named after the Roman god of the sea, its astronomical symbol is ♆, a stylised
version of the god Neptune's trident. Neptune is not visible to the unaided eye and is the only planet in the Solar
System found by mathematical prediction rather than by empirical observation. Unexpected changes in the orbit of
Uranus led Alexis Bouvard to deduce that its orbit was subject to gravitational perturbation by an unknown planet.
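The 164.8-year period and 30.1 AU distance quoted above are consistent with Kepler's third law, which for bodies orbiting the Sun reduces to T² = a³ when T is in years and a is in astronomical units:

```python
# Kepler's third law: T (years) = a (AU) ** 1.5 for orbits around the Sun.
a = 30.1            # Neptune's semi-major axis in AU, from the text
T = a ** 1.5        # predicted orbital period in years
print(f"{T:.1f} years")  # close to the quoted 164.8 years
```

The small residual comes from rounding in the quoted semi-major axis, not from any failure of the law.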
Neptune was subsequently observed with a telescope on 23 September 1846 by Johann Galle within a degree of the position
predicted by Urbain Le Verrier. Its largest moon, Triton, was discovered shortly thereafter, though none of the planet's remaining 13 known moons was located telescopically until the 20th century. The planet's distance from Earth gives
it a very small apparent size, making it challenging to study with Earth-based telescopes. Neptune was visited by
Voyager 2, when it flew by the planet on 25 August 1989. The advent of Hubble Space Telescope and large ground-based
telescopes with adaptive optics has recently allowed for additional detailed observations from afar. Neptune is similar
in composition to Uranus, and both have compositions that differ from those of the larger gas giants, Jupiter and
Saturn. Like Jupiter and Saturn, Neptune's atmosphere is composed primarily of hydrogen and helium, along with traces
of hydrocarbons and possibly nitrogen, but contains a higher proportion of "ices" such as water, ammonia, and methane.
However, its interior, like that of Uranus, is primarily composed of ices and rock, and hence Uranus and Neptune
are normally considered "ice giants" to emphasise this distinction. Traces of methane in the outermost regions in
part account for the planet's blue appearance. In contrast to the hazy, relatively featureless atmosphere of Uranus,
Neptune's atmosphere has active and visible weather patterns. For example, at the time of the Voyager 2 flyby in
1989, the planet's southern hemisphere had a Great Dark Spot comparable to the Great Red Spot on Jupiter. These weather
patterns are driven by the strongest sustained winds of any planet in the Solar System, with recorded wind speeds
as high as 2,100 kilometres per hour (580 m/s; 1,300 mph). Because of its great distance from the Sun, Neptune's
outer atmosphere is one of the coldest places in the Solar System, with temperatures at its cloud tops approaching
55 K (−218 °C). Temperatures at the planet's centre are approximately 5,400 K (5,100 °C). Neptune has a faint and
fragmented ring system (labelled "arcs"), which was first detected during the 1960s and confirmed by Voyager 2. Some
of the earliest recorded observations ever made through a telescope, Galileo's drawings on 28 December 1612 and 27
January 1613, contain plotted points that match up with what is now known to be the position of Neptune. On both
occasions, Galileo seems to have mistaken Neptune for a fixed star when it appeared close—in conjunction—to Jupiter
in the night sky; hence, he is not credited with Neptune's discovery. At his first observation in December 1612,
Neptune was almost stationary in the sky because it had just turned retrograde that day. This apparent backward motion
is created when Earth's orbit takes it past an outer planet. Because Neptune was only beginning its yearly retrograde
cycle, the motion of the planet was far too slight to be detected with Galileo's small telescope. In July 2009, University
of Melbourne physicist David Jamieson announced new evidence suggesting that Galileo was at least aware that the
'star' he had observed had moved relative to the fixed stars. In 1821, Alexis Bouvard published astronomical tables
of the orbit of Neptune's neighbour Uranus. Subsequent observations revealed substantial deviations from the tables,
leading Bouvard to hypothesise that an unknown body was perturbing the orbit through gravitational interaction. In
1843, John Couch Adams began work on the orbit of Uranus using the data he had. Via Cambridge Observatory director
James Challis, he requested extra data from Sir George Airy, the Astronomer Royal, who supplied it in February 1844.
Adams continued to work in 1845–46 and produced several different estimates of a new planet. Meanwhile, the French
astronomer Urbain Le Verrier, working independently on the same problem, by letter urged Berlin Observatory astronomer
Johann Gottfried Galle to search with the observatory's refractor.
Heinrich d'Arrest, a student at the observatory, suggested to Galle that they could compare a recently drawn chart
of the sky in the region of Le Verrier's predicted location with the current sky to seek the displacement characteristic
of a planet, as opposed to a fixed star. On the evening of 23 September 1846, the day Galle received the letter,
he discovered Neptune within 1° of where Le Verrier had predicted it to be, about 12° from Adams' prediction. Challis
later realised that he had observed the planet twice, on 4 and 12 August, but did not recognise it as a planet because
he lacked an up-to-date star map and was distracted by his concurrent work on comet observations. In the wake of
the discovery, there was much nationalistic rivalry between the French and the British over who deserved credit for
the discovery. Eventually, an international consensus emerged that both Le Verrier and Adams jointly deserved credit.
Since 1966, Dennis Rawlins has questioned the credibility of Adams's claim to co-discovery, and the issue was re-evaluated
by historians with the return in 1998 of the "Neptune papers" (historical documents) to the Royal Observatory, Greenwich.
After reviewing the documents, they suggest that "Adams does not deserve equal credit with Le Verrier for the discovery
of Neptune. That credit belongs only to the person who succeeded both in predicting the planet's place and in convincing
astronomers to search for it." Claiming the right to name his discovery, Le Verrier quickly proposed the name Neptune
for this new planet, though falsely stating that this had been officially approved by the French Bureau des Longitudes.
In October, he sought to name the planet Le Verrier, after himself, and he had loyal support in this from the observatory
director, François Arago. This suggestion met with stiff resistance outside France. French almanacs quickly reintroduced
the name Herschel for Uranus, after that planet's discoverer Sir William Herschel, and Leverrier for the new planet.
Most languages today, even in countries that have no direct link to Greco-Roman culture, use some variant of the
name "Neptune" for the planet. However, in Chinese, Japanese, and Korean, the planet's name was translated as "sea
king star" (海王星), because Neptune was the god of the sea. In Mongolian, Neptune is called Dalain Van (Далайн ван),
reflecting its namesake god's role as the ruler of the sea. In modern Greek the planet is called Poseidon (Ποσειδώνας,
Poseidonas), the Greek counterpart of Neptune. In Hebrew, "Rahab" (רהב), from a Biblical sea monster mentioned in
the Book of Psalms, was selected in a vote managed by the Academy of the Hebrew Language in 2009 as the official
name for the planet, even though the existing name "Neptun" (נפטון) remains in common use. In Māori, the planet
is called Tangaroa, named after the Māori god of the sea. In Nahuatl, the planet is called Tlāloccītlalli, named
after the rain god Tlāloc. From its discovery in 1846 until the subsequent discovery of Pluto in 1930, Neptune was
the farthest known planet. When Pluto was discovered it was considered a planet, and Neptune thus became the penultimate
known planet, except for a 20-year period between 1979 and 1999 when Pluto's elliptical orbit brought it closer to
the Sun than Neptune. The discovery of the Kuiper belt in 1992 led many astronomers to debate whether Pluto should
be considered a planet or as part of the Kuiper belt. In 2006, the International Astronomical Union defined the word
"planet" for the first time, reclassifying Pluto as a "dwarf planet" and making Neptune once again the outermost
known planet in the Solar System. Neptune's mass of 1.0243×10²⁶ kg is intermediate between Earth and the larger
gas giants: it is 17 times that of Earth but just 1/19th that of Jupiter.[d] Its gravity at 1 bar is 11.15 m/s²,
1.14 times the surface gravity of Earth, and surpassed only by Jupiter. Neptune's equatorial radius of 24,764 km
is nearly four times that of Earth. Neptune, like Uranus, is an ice giant, a subclass of giant planet, due to their
smaller size and higher concentrations of volatiles relative to Jupiter and Saturn. In the search for extrasolar
planets, Neptune has been used as a metonym: discovered bodies of similar mass are often referred to as "Neptunes",
just as scientists refer to various extrasolar bodies as "Jupiters". The mantle is equivalent to 10 to 15 Earth masses
and is rich in water, ammonia and methane. As is customary in planetary science, this mixture is referred to as icy
even though it is a hot, dense fluid. This fluid, which has a high electrical conductivity, is sometimes called a
water–ammonia ocean. The mantle may consist of a layer of ionic water in which the water molecules break down into
a soup of hydrogen and oxygen ions, and deeper down superionic water in which the oxygen crystallises but the hydrogen
ions float around freely within the oxygen lattice. At a depth of 7000 km, the conditions may be such that methane
decomposes into diamond crystals that rain downwards like hailstones. Very-high-pressure experiments at the Lawrence
Livermore National Laboratory suggest that the base of the mantle may comprise an ocean of liquid carbon with floating
solid 'diamonds'. At high altitudes, Neptune's atmosphere is 80% hydrogen and 19% helium. A trace amount of methane
is also present. Prominent absorption bands of methane exist at wavelengths above 600 nm, in the red and infrared
portion of the spectrum. As with Uranus, this absorption of red light by the atmospheric methane is part of what
gives Neptune its blue hue, although Neptune's vivid azure differs from Uranus's milder cyan. Because Neptune's atmospheric
methane content is similar to that of Uranus, some unknown atmospheric constituent is thought to contribute to Neptune's
colour. Models suggest that Neptune's troposphere is banded by clouds of varying compositions depending on altitude.
The upper-level clouds lie at pressures below one bar, where the temperature is suitable for methane to condense.
For pressures between one and five bars (100 and 500 kPa), clouds of ammonia and hydrogen sulfide are thought to
form. Above a pressure of five bars, the clouds may consist of ammonia, ammonium sulfide, hydrogen sulfide and water.
Deeper clouds of water ice should be found at pressures of about 50 bars (5.0 MPa), where the temperature reaches
273 K (0 °C). Underneath, clouds of ammonia and hydrogen sulfide may be found. High-altitude clouds on Neptune have
been observed casting shadows on the opaque cloud deck below. There are also high-altitude cloud bands that wrap
around the planet at constant latitude. These circumferential bands have widths of 50–150 km and lie about 50–110
km above the cloud deck. These altitudes are in the layer where weather occurs, the troposphere. Weather does not
occur in the higher stratosphere or thermosphere. Unlike Uranus, Neptune's composition includes a proportionally
larger ocean, whereas Uranus has a smaller mantle. For reasons that remain obscure, the planet's thermosphere is at an anomalously
high temperature of about 750 K. The planet is too far from the Sun for this heat to be generated by ultraviolet
radiation. One candidate for a heating mechanism is atmospheric interaction with ions in the planet's magnetic field.
Other candidates are gravity waves from the interior that dissipate in the atmosphere. The thermosphere contains
traces of carbon dioxide and water, which may have been deposited from external sources such as meteorites and dust.
Neptune also resembles Uranus in its magnetosphere, with a magnetic field strongly tilted relative to its rotational
axis at 47° and offset at least 0.55 radii, or about 13,500 km from the planet's physical centre. Before Voyager 2's
arrival at Neptune, it was hypothesised that Uranus's tilted magnetosphere was the result of its sideways rotation.
In comparing the magnetic fields of the two planets, scientists now think the extreme orientation may be characteristic
of flows in the planets' interiors. This field may be generated by convective fluid motions in a thin spherical shell
of electrically conducting liquids (probably a combination of ammonia, methane and water) resulting in a dynamo action.
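The two forms in which the dipole moment is quoted just below can be checked against each other; a minimal sketch, using the equatorial radius given earlier in the article:

```python
# Convert the dipole moment quoted in planet-radius units (14 uT * R_N^3)
# to SI units and compare with the quoted ~2.2e17 T*m^3.
R_N = 24_764e3                 # Neptune's equatorial radius in metres
moment_si = 14e-6 * R_N**3     # 14 microteslas times R_N cubed
print(f"{moment_si:.2e} T*m^3")  # ~2.13e+17, consistent with ~2.2e17
```

Rounding in the quoted figures accounts for the small difference.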
The dipole component of the magnetic field at the magnetic equator of Neptune is about 14 microteslas (0.14 G). The
dipole magnetic moment of Neptune is about 2.2 × 10¹⁷ T·m³ (14 μT·Rₙ³, where Rₙ is the radius of Neptune). Neptune's
magnetic field has a complex geometry that includes relatively large contributions from non-dipolar components, including
a strong quadrupole moment that may exceed the dipole moment in strength. By contrast, Earth, Jupiter and Saturn
have only relatively small quadrupole moments, and their fields are less tilted from the polar axis. The large quadrupole
moment of Neptune may be the result of offset from the planet's centre and geometrical constraints of the field's
dynamo generator. Neptune has a planetary ring system, though one much less substantial than that of Saturn. The
rings may consist of ice particles coated with silicates or carbon-based material, which most likely gives them a
reddish hue. The three main rings are the narrow Adams Ring, 63,000 km from the centre of Neptune, the Le Verrier
Ring, at 53,000 km, and the broader, fainter Galle Ring, at 42,000 km. A faint outward extension to the Le Verrier
Ring has been named Lassell; it is bounded at its outer edge by the Arago Ring at 57,000 km. Neptune's weather is
characterised by extremely dynamic storm systems, with winds reaching speeds of almost 600 m/s (2,200 km/h; 1,300
mph)—nearly reaching supersonic flow. More typically, by tracking the motion of persistent clouds, wind speeds have
been shown to vary from 20 m/s in the easterly direction to 325 m/s westward. At the cloud tops, the prevailing winds
range in speed from 400 m/s along the equator to 250 m/s at the poles. Most of the winds on Neptune move in a direction
opposite the planet's rotation. The general pattern of winds shows prograde rotation at high latitudes versus retrograde
rotation at lower latitudes. The difference in flow direction is thought to be a "skin effect" and not due to any
deeper atmospheric processes. At 70° S latitude, a high-speed jet travels at a speed of 300 m/s. In 2007, it was
discovered that the upper troposphere of Neptune's south pole was about 10 K warmer than the rest of its atmosphere,
which averages approximately 73 K (−200 °C). The temperature differential is enough to let methane, which elsewhere
is frozen in the troposphere, escape into the stratosphere near the pole. The relative "hot spot" is due to Neptune's
axial tilt, which has exposed the south pole to the Sun for the last quarter of Neptune's year, or roughly 40 Earth
years. As Neptune slowly moves towards the opposite side of the Sun, the south pole will be darkened and the north
pole illuminated, causing the methane release to shift to the north pole. The Scooter is another storm, a white cloud
group farther south than the Great Dark Spot. This nickname first arose during the months leading up to the Voyager
2 encounter in 1989, when the cloud group was observed moving at speeds faster than the Great Dark Spot (images
acquired later would reveal clouds moving even faster than those initially detected by Voyager 2). The Small Dark
Spot is a southern cyclonic storm, the second-most-intense storm observed during the
1989 encounter. It was initially completely dark, but as Voyager 2 approached the planet, a bright core developed
and could be seen in most of the highest-resolution images. Neptune's dark spots are thought to occur in the troposphere
at lower altitudes than the brighter cloud features, so they appear as holes in the upper cloud decks. As they are
stable features that can persist for several months, they are thought to be vortex structures. Often associated with
dark spots are brighter, persistent methane clouds that form around the tropopause layer. The persistence of companion
clouds shows that some former dark spots may continue to exist as cyclones even though they are no longer visible
as a dark feature. Dark spots may dissipate when they migrate too close to the equator or possibly through some other
unknown mechanism. Neptune's more varied weather when compared to Uranus is due in part to its higher internal heating.
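The distance and sunlight figures in the comparison that follows can be sanity-checked with the inverse-square law; a sketch, with mean orbital distances (19.2 and 30.1 AU) taken as assumed textbook values rather than from this article:

```python
# Inverse-square check: sunlight per unit area falls off as 1/d^2.
d_uranus, d_neptune = 19.2, 30.1   # assumed mean distances from the Sun, AU
extra_distance = d_neptune / d_uranus - 1     # how much further Neptune lies
flux_ratio = (d_uranus / d_neptune) ** 2      # fraction of Uranus's sunlight
print(f"{extra_distance:.0%} further, {flux_ratio:.0%} of the sunlight")
# -> 57% further, 41% of the sunlight: "over 50%" and "about 40%" as stated
```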
Although Neptune lies over 50% further from the Sun than Uranus, and receives only about 40% of the sunlight Uranus does, the
two planets' surface temperatures are roughly equal. The upper regions of Neptune's troposphere reach a low temperature
of 51.8 K (−221.3 °C). At a depth where the atmospheric pressure equals 1 bar (100 kPa), the temperature is 72.00
K (−201.15 °C). Deeper inside the layers of gas, the temperature rises steadily. As with Uranus, the source of this
heating is unknown, but the discrepancy is larger: Uranus only radiates 1.1 times as much energy as it receives from
the Sun; whereas Neptune radiates about 2.61 times as much energy as it receives from the Sun. Neptune is the farthest
planet from the Sun, yet its internal energy is sufficient to drive the fastest planetary winds seen in the Solar
System. Depending on the thermal properties of its interior, the heat left over from Neptune's formation may be sufficient
to explain its current heat flow, though it is more difficult to simultaneously explain Uranus's lack of internal
heat while preserving the apparent similarity between the two planets. On 11 July 2011, Neptune completed its first
full barycentric orbit since its discovery in 1846, although it did not appear at its exact discovery position in
the sky, because Earth was in a different location in its 365.26-day orbit. Because of the motion of the Sun in relation
to the barycentre of the Solar System, on 11 July Neptune was also not at its exact discovery position in relation
to the Sun; if the more common heliocentric coordinate system is used, the discovery longitude was reached on 12
July 2011. Neptune's orbit has a profound impact on the region directly beyond it, known as the Kuiper belt. The
Kuiper belt is a ring of small icy worlds, similar to the asteroid belt but far larger, extending from Neptune's
orbit at 30 AU out to about 55 AU from the Sun. Much in the same way that Jupiter's gravity dominates the asteroid
belt, shaping its structure, so Neptune's gravity dominates the Kuiper belt. Over the age of the Solar System, certain
regions of the Kuiper belt became destabilised by Neptune's gravity, creating gaps in the Kuiper belt's structure.
The region between 40 and 42 AU is an example. There do exist orbits within these otherwise empty regions where objects
can survive for the age of the Solar System: orbital resonances, which occur when Neptune's orbital period is a precise
fraction of that of the object, such as 1:2 or 3:4. If, say, an object orbits the Sun once for every two Neptune orbits,
it will only complete half an orbit by the time Neptune returns to its original position. The most heavily populated
resonance in the Kuiper belt, with over 200 known objects, is the 2:3 resonance. Objects in this resonance complete
2 orbits for every 3 of Neptune, and are known as plutinos because the largest of the known Kuiper belt objects,
Pluto, is among them. Although Pluto crosses Neptune's orbit regularly, the 2:3 resonance ensures they can never
collide. The 3:4, 3:5, 4:7 and 2:5 resonances are less populated. Neptune has a number of known trojan objects occupying
both the Sun–Neptune L4 and L5 Lagrangian points—gravitationally stable regions leading and trailing Neptune in its
orbit, respectively. Neptune trojans can be viewed as being in a 1:1 resonance with Neptune. Some Neptune trojans
are remarkably stable in their orbits, and are likely to have formed alongside Neptune rather than being captured.
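The resonance ratios above map directly onto orbital periods. A minimal sketch, assuming Neptune's ~164.8-year orbital period (a textbook value not stated in this article):

```python
P_NEPTUNE = 164.8  # years, assumed textbook value

def resonant_period(obj_orbits: int, neptune_orbits: int) -> float:
    """Period of a body completing obj_orbits per neptune_orbits of Neptune."""
    return P_NEPTUNE * neptune_orbits / obj_orbits

plutino = resonant_period(2, 3)  # the 2:3 resonance occupied by Pluto
trojan = resonant_period(1, 1)   # trojans share Neptune's own period
print(f"plutino: {plutino:.0f} yr, trojan: {trojan:.0f} yr")
# plutino: 247 yr -- close to Pluto's actual ~248-year orbit
```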
The first and so far only object identified as associated with Neptune's trailing L5 Lagrangian point is 2008 LC18.
Neptune also has a temporary quasi-satellite, (309239) 2007 RW10. The object has been a quasi-satellite of Neptune
for about 12,500 years and it will remain in that dynamical state for another 12,500 years. The formation of the
ice giants, Neptune and Uranus, has proven difficult to model precisely. Current models suggest that the matter density
in the outer regions of the Solar System was too low to account for the formation of such large bodies from the traditionally
accepted method of core accretion, and various hypotheses have been advanced to explain their formation. One is that
the ice giants were not formed by core accretion but from instabilities within the original protoplanetary disc and
later had their atmospheres blasted away by radiation from a nearby massive OB star. An alternative concept is that
they formed closer to the Sun, where the matter density was higher, and then subsequently migrated to their current
orbits after the removal of the gaseous protoplanetary disc. This hypothesis of migration after formation is favoured,
due to its ability to better explain the occupancy of the populations of small objects observed in the trans-Neptunian
region. The current most widely accepted explanation of the details of this hypothesis is known as the Nice model,
which explores the effect of a migrating Neptune and the other giant planets on the structure of the Kuiper belt.
Neptune has 14 known moons. Triton is the largest Neptunian moon, comprising more than 99.5% of the mass in orbit
around Neptune,[e] and it is the only one massive enough to be spheroidal. Triton was discovered by William Lassell
just 17 days after the discovery of Neptune itself. Unlike all other large planetary moons in the Solar System, Triton
has a retrograde orbit, indicating that it was captured rather than forming in place; it was probably once a dwarf
planet in the Kuiper belt. It is close enough to Neptune to be locked into a synchronous rotation, and it is slowly
spiralling inward because of tidal acceleration. It will eventually be torn apart, in about 3.6 billion years, when
it reaches the Roche limit. In 1989, Triton was the coldest object that had yet been measured in the Solar System,
with estimated temperatures of 38 K (−235 °C). From July to September 1989, Voyager 2 discovered six moons of Neptune.
Of these, the irregularly shaped Proteus is notable for being as large as a body of its density can be without being
pulled into a spherical shape by its own gravity. Although the second-most-massive Neptunian moon, it is only 0.25%
the mass of Triton. Neptune's innermost four moons—Naiad, Thalassa, Despina and Galatea—orbit close enough to be
within Neptune's rings. The next-farthest out, Larissa, was originally discovered in 1981 when it occulted a
star. This occultation had been attributed to ring arcs, but when Voyager 2 observed Neptune in 1989, Larissa was
found to have caused it. Five new irregular moons discovered between 2002 and 2003 were announced in 2004. A new
moon, the smallest yet, S/2004 N 1, was found in 2013. Because Neptune was the Roman god of the sea, Neptune's
moons have been named after lesser sea gods. Because of the distance of Neptune from Earth, its angular diameter
only ranges from 2.2 to 2.4 arcseconds, the smallest of the Solar System planets. Its small apparent size makes it
challenging to study visually. Most telescopic data remained fairly limited until the advent of the Hubble Space Telescope
(HST) and large ground-based telescopes with adaptive optics (AO). The first scientifically useful ground-based
observation of Neptune using adaptive optics began in 1997 from Hawaii. Neptune is currently entering its spring and
summer season and has been shown to be heating up, with increased atmospheric activity and brightness as a consequence.
Combined with technological advancements, ground-based telescopes with adaptive optics are recording increasingly
detailed images of this outer planet. Both the HST and ground-based AO telescopes have made many new discoveries within
the Solar System since the mid-1990s, including a large increase in the number of known satellites around the outer
planets. In 2004 and 2005, five new small satellites of Neptune
with diameters between 38 and 61 kilometres were discovered. Voyager 2 is the only spacecraft that has visited Neptune.
The spacecraft's closest approach to the planet occurred on 25 August 1989. Because this was the last major planet
the spacecraft could visit, it was decided to make a close flyby of the moon Triton, regardless of the consequences
to the trajectory, similarly to what was done for Voyager 1's encounter with Saturn and its moon Titan. The images
relayed back to Earth from Voyager 2 became the basis of a 1989 PBS all-night program, Neptune All Night. After the
Voyager 2 flyby mission, the next step in scientific exploration of the Neptunian system is considered to be a Flagship
orbital mission. Such a hypothetical mission is envisioned to be possible in the late 2020s or early 2030s. However,
there have been discussions about launching Neptune missions sooner. In 2003, there was a proposal in NASA's
"Vision Missions Studies" for a "Neptune Orbiter with Probes" mission that would do Cassini-level science. Another, more
recent proposal was for Argo, a flyby spacecraft to be launched in 2019 that would visit Jupiter, Saturn, Neptune,
and a Kuiper belt object. The focus would be on Neptune and its largest moon Triton, to be investigated around 2029.
The proposed New Horizons 2 mission (which was later scrapped) might also have done a close flyby of the Neptunian
system.
A railway electrification system supplies electric power to railway trains and trams without an on-board prime mover or local
fuel supply. Electrification has many advantages but requires significant capital expenditure. Selection of an electrification
system is based on economics of energy supply, maintenance, and capital cost compared to the revenue obtained for
freight and passenger traffic. Different systems are used for urban and intercity areas; some electric locomotives
can switch to different supply voltages to allow flexibility in operation. Electric railways use electric locomotives
to haul passengers or freight in separate cars or electric multiple units, passenger cars with their own motors.
Electricity is typically generated in large and relatively efficient generating stations, transmitted to the railway
network and distributed to the trains. Some electric railways have their own dedicated generating stations and transmission
lines but most purchase power from an electric utility. The railway usually provides its own distribution lines,
switches and transformers. In comparison to the principal alternative, the diesel engine, electric railways offer
substantially better energy efficiency, lower emissions and lower operating costs. Electric locomotives are usually
quieter, more powerful, and more responsive and reliable than diesels. They have no local emissions, an important
advantage in tunnels and urban areas. Some electric traction systems provide regenerative braking that turns the
train's kinetic energy back into electricity and returns it to the supply system to be used by other trains or the
general utility grid. While diesel locomotives burn petroleum, electricity is generated from diverse sources including
many that do not produce carbon dioxide such as nuclear power and renewable forms including hydroelectric, geothermal,
wind and solar. Disadvantages of electric traction include high capital costs that may be uneconomic on lightly trafficked
routes; a relative lack of flexibility since electric trains need electrified tracks or onboard supercapacitors and
charging infrastructure at stations; and a vulnerability to power interruptions. Different regions may use different
supply voltages and frequencies, complicating through service. The limited clearances available under catenaries
may preclude efficient double-stack container service. The lethal voltages on contact wires and third rails are a
safety hazard to track workers, passengers and trespassers. Overhead wires are safer than third rails, but they are
often considered unsightly. Railways must operate at variable speeds. Until the mid 1980s this was only practical
with the brush-type DC motor, although such DC can be supplied from an AC catenary via on-board electric power conversion.
Since such conversion was not well developed in the late 19th century and early 20th century, most early electrified
railways used DC and many still do, particularly rapid transit (subways) and trams. Speed was controlled by connecting
the traction motors in various series-parallel combinations, by varying the traction motors' fields, and by inserting
and removing starting resistances to limit motor current. Motors have very little room for electrical insulation
so they generally have low voltage ratings. Because transformers (prior to the development of power electronics)
cannot step down DC voltages, trains were supplied with a relatively low DC voltage that the motors can use directly.
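The series-parallel control described above amounts to dividing the supply voltage among strings of motors; a minimal sketch with an assumed 1,500 V supply and four identical motors (values illustrative, not from the text):

```python
SUPPLY = 1500.0  # volts DC, assumed line voltage

def volts_per_motor(n_motors: int, parallel_strings: int) -> float:
    """Voltage across each motor when n_motors identical motors are wired
    as parallel_strings equal series strings across the full supply."""
    per_string = n_motors // parallel_strings
    return SUPPLY / per_string

print(volts_per_motor(4, 1))  # 375.0 V each: all four in series (starting)
print(volts_per_motor(4, 2))  # 750.0 V each: two series pairs in parallel
```

Stepping from the first arrangement to the second doubles the voltage, and hence roughly the speed, of each motor without resistive loss.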
The most common DC voltages are listed in the previous section. Third (and fourth) rail systems almost always use
voltages below 1 kV for safety reasons while overhead wires usually use higher voltages for efficiency. ("Low" voltage
is relative; even 600 V can be instantly lethal when touched.) There has, however, been interest among railroad operators
in returning to DC use at higher voltages than previously used. At the same voltage, DC often has less loss than
AC, and for this reason high-voltage direct current is already used on some bulk power transmission lines. DC avoids
the electromagnetic radiation inherent with AC, and on a railway this also reduces interference with signalling and
communications and mitigates hypothetical EMF risks. DC also avoids the power factor problems of AC. Of particular
interest to railroading is that DC can supply constant power with a single ungrounded wire. Constant power with AC
requires three-phase transmission with at least two ungrounded wires. Another important consideration is that mains-frequency
3-phase AC must be carefully planned to avoid unbalanced phase loads. Parts of the system are supplied from different
phases on the assumption that the total loads of the 3 phases will even out. At the phase break points between regions
supplied from different phases, long insulated supply breaks are required to avoid them being shorted by rolling
stock using more than one pantograph at a time. A few railroads have tried 3-phase but its substantial complexity
has made single-phase standard practice despite the interruption in power flow that occurs twice every cycle. An
experimental 6 kV DC railway was built in the Soviet Union. 1,500 V DC is used in the Netherlands, Japan, Indonesia,
Hong Kong (parts), the Republic of Ireland, Australia (parts), India (formerly around the Mumbai area alone, since
converted to 25 kV AC like the rest of India), France (which also uses 25 kV 50 Hz AC), New Zealand (Wellington)
and the United States (the Chicago area on the Metra Electric district and the South Shore Line interurban line). In
Slovakia, there are two narrow-gauge lines in the High Tatras (one a cog railway). In Portugal, it is used in the
Cascais Line and in Denmark on the suburban S-train system. 3 kV DC is used in Belgium, Italy, Spain, Poland, the
northern Czech Republic, Slovakia, Slovenia, South Africa, Chile, and former Soviet Union countries (also using 25
kV 50 Hz AC). It was formerly used by the Milwaukee Road from Harlowton, Montana to Seattle-Tacoma, across the Continental
Divide and including extensive branch and loop lines in Montana, and by the Delaware, Lackawanna & Western Railroad
(now New Jersey Transit, converted to 25 kV AC) in the United States, and the Kolkata suburban railway (Bardhaman
Main Line) in India, before it was converted to 25 kV 50 Hz AC. Most electrification systems use overhead wires,
but third rail is an option up to about 1,200 V. Third rail systems exclusively use DC distribution. The use of AC
is not feasible because the dimensions of a third rail are physically very large compared with the skin depth to which
alternating current penetrates (0.3 millimetres or 0.012 inches in a steel rail). This effect makes the resistance
per unit length unacceptably high compared with the use of DC. Third rail is more compact than overhead wires and
can be used in smaller-diameter tunnels, an important factor for subway systems. DC systems (especially third-rail
systems) are limited to relatively low voltages, which can limit the size and speed of trains, rule out the use of
low-level platforms, and limit the amount of air-conditioning that the trains can provide. This may be a factor favouring
overhead wires and high-voltage AC, even for urban usage. In practice, the top speed of trains on third-rail systems
is limited to 100 mph (160 km/h) because above that speed reliable contact between the shoe and the rail cannot be
maintained. Some street trams (streetcars) used conduit third-rail current collection. The third rail was below street
level. The tram picked up the current through a plough (U.S. "plow") accessed through a narrow slot in the road.
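Returning to the skin-depth limitation cited earlier for AC in a third rail, the standard formula δ = √(2ρ/ωμ) gives the order of magnitude; a sketch with assumed rail-steel properties (the resistivity and relative permeability are illustrative values, not from the text):

```python
import math

def skin_depth(resistivity: float, mu_r: float, freq_hz: float) -> float:
    """AC skin depth in metres: sqrt(2*rho / (omega * mu0 * mu_r))."""
    mu0 = 4 * math.pi * 1e-7          # permeability of free space
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity / (omega * mu0 * mu_r))

# Assumed values: rho ~ 2e-7 ohm*m, mu_r ~ 2000 for rail steel, at 50 Hz
delta = skin_depth(2e-7, 2000, 50)
print(f"{delta * 1000:.2f} mm")  # ~0.71 mm: the same sub-millimetre order
                                 # as the 0.3 mm quoted in the text
```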
In the United States, much (though not all) of the former streetcar system in Washington, D.C. (discontinued in 1962)
was operated in this manner to avoid the unsightly wires and poles associated with electric traction. The same was
true with Manhattan's former streetcar system. The evidence of this mode of running can still be seen on the track
down the slope on the northern access to the abandoned Kingsway Tramway Subway in central London, United Kingdom,
where the slot between the running rails is clearly visible, and on P and Q Streets west of Wisconsin Avenue in the
Georgetown neighborhood of Washington DC, where the abandoned tracks have not been paved over. The slot can easily
be confused with the similar looking slot for cable trams/cars (in some cases, the conduit slot was originally a
cable slot). The disadvantage of conduit collection included much higher initial installation costs, higher maintenance
costs, and problems with leaves and snow getting in the slot. For this reason, in Washington, D.C., cars on some lines
converted to overhead wire on leaving the city center: a worker in a "plough pit" disconnected the plough while
another raised the trolley pole (hitherto hooked down to the roof) to the overhead wire. In New York City, for the
same reasons of cost and operating efficiency, overhead wire was used outside of Manhattan. A similar system of changeover
from conduit to overhead wire was also used on the London tramways, notably on the southern side; a typical changeover
point was at Norwood, where the conduit snaked sideways from between the running rails, to provide a park for detached
shoes or ploughs. A new approach to avoiding overhead wires is taken by the "second generation" tram/streetcar system
in Bordeaux, France (entry into service of the first line in December 2003; original system discontinued in 1958)
with its APS (alimentation par sol – ground current feed). This involves a third rail which is flush with the surface
like the tops of the running rails. The circuit is divided into segments with each segment energized in turn by sensors
from the car as it passes over it, the remainder of the third rail remaining "dead". Since each energized segment
is completely covered by the lengthy articulated cars, and goes dead before being "uncovered" by the passage of the
vehicle, there is no danger to pedestrians. This system has also been adopted in some sections of the new tram systems
in Reims, France (opened 2011) and Angers, France (also opened 2011). Proposals are in place for a number of other
new services including Dubai, UAE; Barcelona, Spain; Florence, Italy; Marseille, France; Gold Coast, Australia; Washington,
D.C., U.S.A.; Brasília, Brazil and Tours, France. The London Underground in England is one of the few networks that
uses a four-rail system. The additional rail carries the electrical return that, on third rail and overhead networks,
is provided by the running rails. On the London Underground, a top-contact third rail is beside the track, energized
at +420 V DC, and a top-contact fourth rail is located centrally between the running rails at −210 V DC, which combine to provide a traction voltage of 630 V DC. London Underground is now upgrading its fourth rail system to 750 V DC, with a positive conductor rail energised to +500 V DC and a negative conductor rail energised to −250 V DC. However, many older sections in tunnels are still energised at 630 V DC. The same system was used for Milan's earliest underground
line, Milan Metro's line 1, whose more recent lines use an overhead catenary or a third rail. The key advantage of
the four-rail system is that neither running rail carries any current. This scheme was introduced because of the
problems of return currents, intended to be carried by the earthed (grounded) running rail, flowing through the iron
tunnel linings instead. This can cause electrolytic damage and even arcing if the tunnel segments are not electrically
bonded together. The problem was exacerbated because the return current also had a tendency to flow through nearby
iron pipes forming the water and gas mains. Some of these, particularly Victorian mains that predated London's underground
railways, were not constructed to carry currents and had no adequate electrical bonding between pipe segments. The
four-rail system solves the problem. Although the supply has an artificially created earth point, this connection
is derived by using resistors, which ensure that stray earth currents are kept to manageable levels. Power-only rails
can be mounted on strongly insulating ceramic chairs to minimise current leak, but this is not possible for running
rails which have to be seated on stronger metal chairs to carry the weight of trains. However, elastomeric rubber
pads placed between the rails and chairs can now solve part of the problem by insulating the running rails from the
current return should there be a leakage through the running rails. On tracks that London Underground shares with
National Rail third-rail stock (the Bakerloo and District lines both have such sections), the centre rail is connected
to the running rails, allowing both types of train to operate, at a compromise voltage of 660 V. Underground trains
pass from one section to the other at speed; lineside electrical connections and resistances separate the two types
of supply. These routes were originally solely electrified on the four-rail system by the LNWR before National Rail
trains were rewired to their standard three-rail system to simplify rolling stock use. A few lines of the Paris Métro
in France operate on a four-rail power scheme because they run on rubber tyres which run on a pair of narrow roadways
made of steel and, in some places, concrete. Since the tyres do not conduct the return current, the two guide rails
provided outside the running 'roadways' double up as conductor rails, so at least electrically it is a four-rail
scheme. One of the guide rails is bonded to the return conventional running rails situated inside the roadway so
a single polarity supply is required. The trains are designed to operate from either polarity of supply, because
some lines use reversing loops at one end, causing the train to be reversed during every complete journey. The loop
was originally provided to save the original steam locomotives from having to 'run around' the rest of the train, saving much time. Today, the driver does not have to change ends at termini provided with such a loop, but the time saving
is not so significant as it takes almost as long to drive round the loop as it does to change ends. Many of the original
loops have been lost as lines were extended. An early advantage of AC is that the power-wasting resistors used in
DC locomotives for speed control were not needed in an AC locomotive: multiple taps on the transformer can supply
a range of voltages. Separate low-voltage transformer windings supply lighting and the motors driving auxiliary machinery.
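The advantage of transformer taps over series resistors can be made concrete with a small sketch. All numbers below (supply voltage, motor voltage, current) are illustrative assumptions, not data from any particular locomotive:

```python
# Sketch: power wasted by resistor speed control on a DC locomotive,
# versus selecting a lower voltage via transformer taps on an AC one.
# All numbers are illustrative assumptions, not real locomotive data.

SUPPLY_V = 1500.0   # fixed DC line voltage (assumed)
MOTOR_V = 750.0     # voltage the motor needs at the desired reduced speed
CURRENT_A = 200.0   # traction current drawn at that operating point (assumed)

# DC with a series resistor: the resistor drops (SUPPLY_V - MOTOR_V)
# at the full traction current, and all of that power becomes heat.
resistor_drop = SUPPLY_V - MOTOR_V
wasted_w = resistor_drop * CURRENT_A       # 750 V * 200 A = 150 kW of heat
delivered_w = MOTOR_V * CURRENT_A          # 150 kW reaches the motor
wasted_fraction = wasted_w / (wasted_w + delivered_w)

# AC with transformer taps: the transformer supplies MOTOR_V directly,
# so (ignoring small transformer losses) nothing is burned in a resistor.
print(f"resistor control wastes {wasted_fraction:.0%} of input power")
# At half voltage, half the input power is dissipated as heat.
```

At this assumed operating point, fully half the power drawn from the line is dissipated in the resistor, which is why tap changing was such a significant advantage for AC traction.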
More recently, the development of very high power semiconductors has caused the classic "universal" AC/DC motor to
be largely replaced with the three-phase induction motor fed by a variable frequency drive, a special inverter that
varies both frequency and voltage to control motor speed. These drives can run equally well on DC or AC of any frequency,
and many modern electric locomotives are designed to handle different supply voltages and frequencies to simplify
cross-border operation. DC commutating electric motors, if fitted with laminated pole pieces, become universal motors
because they can also operate on AC; reversing the current in both stator and rotor does not reverse the motor. But
the now-standard AC distribution frequencies of 50 and 60 Hz caused difficulties with inductive reactance and eddy
current losses. Many railways chose low AC frequencies to overcome these problems. They must be converted from utility
power by motor-generators or static inverters at the feeding substations, or generated at dedicated traction power stations.
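The reactance problem scales linearly with frequency (X_L = 2πfL), which is the quantitative reason low frequencies such as 16.7 Hz eased commutator-motor design. A quick sketch, with the winding inductance chosen purely for illustration:

```python
import math

# Inductive reactance X_L = 2*pi*f*L grows linearly with frequency, so a
# motor winding that is manageable at 16.7 Hz presents roughly three times
# the reactance at 50 Hz. The inductance value is an assumption chosen
# only for illustration.
L_HENRY = 0.05  # assumed winding inductance

def reactance(freq_hz: float, inductance_h: float = L_HENRY) -> float:
    """Inductive reactance in ohms at the given supply frequency."""
    return 2 * math.pi * freq_hz * inductance_h

x_16_7 = reactance(16.7)
x_50 = reactance(50.0)
print(f"16.7 Hz: {x_16_7:.2f} ohms, 50 Hz: {x_50:.2f} ohms "
      f"(ratio {x_50 / x_16_7:.2f})")
```

Whatever inductance is assumed, the ratio of reactances is simply the ratio of frequencies, about 3:1 between 50 Hz and 16.7 Hz.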
High-voltage AC overhead systems are not only for standard gauge national networks. The meter gauge Rhaetian Railway
(RhB) and the neighbouring Matterhorn Gotthard Bahn (MGB) operate on 11 kV at 16.7 Hz frequency. Practice has proven
that both Swiss and German 15 kV trains can operate under these lower voltages. The RhB started trials of the 11
kV system in 1913 on the Engadin line (St. Moritz-Scuol/Tarasp). The MGB constituents Furka-Oberalp-Bahn (FO) and
Brig-Visp-Zermatt Bahn (BVZ) introduced their electric services in 1941 and 1929 respectively, adopting the already
proven RhB system. In the United States, 25 Hz, a once-common industrial power frequency is used on Amtrak's 25 Hz
traction power system at 12 kV on the Northeast Corridor between Washington, D.C. and New York City and on the Keystone
Corridor between Harrisburg, Pennsylvania and Philadelphia. SEPTA's 25 Hz traction power system uses the same 12
kV voltage on the catenary in Northeast Philadelphia. This allows for the trains to operate on both the Amtrak and
SEPTA power systems. Apart from having an identical catenary voltage, the power distribution systems of Amtrak and
SEPTA are very different. The Amtrak power distribution system has a 138 kV transmission network that provides power
to substations which then transform the voltage to 12 kV to feed the catenary system. The SEPTA power distribution
system uses a 2:1 ratio autotransformer system, with the catenary fed at 12 kV and a return feeder wire fed at 24
kV. The New York, New Haven and Hartford Railroad used an 11 kV system between New York City and New Haven, Connecticut
which was converted to 12.5 kV 60 Hz in 1987. In the UK, the London, Brighton and South Coast Railway pioneered overhead
electrification of its suburban lines in London, London Bridge to Victoria being opened to traffic on 1 December
1909. Victoria to Crystal Palace via Balham and West Norwood opened in May 1911. Peckham Rye to West Norwood opened
in June 1912. Further extensions were not made owing to the First World War. Two lines opened in 1925 under the Southern
Railway, serving Coulsdon North and Sutton railway stations. The lines were electrified at 6.7 kV 25 Hz. It was announced
in 1926 that all lines were to be converted to DC third rail and the last overhead electric service ran in September
1929. Three-phase AC railway electrification was used in Italy, Switzerland and the United States in the early twentieth
century. Italy was the major user, for lines in the mountainous regions of northern Italy from 1901 until 1976. The
first lines were the Burgdorf-Thun line in Switzerland (1899), and the lines of the Ferrovia Alta Valtellina from
Colico to Chiavenna and Tirano in Italy, which were electrified in 1901 and 1902. Other lines where the three-phase system was used were the Simplon Tunnel in Switzerland from 1906 to 1930, and the Cascade Tunnel of the Great Northern
Railway in the United States from 1909 to 1927. The first attempts to use standard-frequency single-phase AC were
made in Hungary as far back as 1923, by the Hungarian Kálmán Kandó on the line between Budapest-Nyugati and Alag,
using 16 kV at 50 Hz. The locomotives carried a four-pole rotating phase converter feeding a single traction motor
of the polyphase induction type at 600 to 1,100 V. The number of poles on the 2,500 hp motor could be changed using
slip rings to run at one of four synchronous speeds. The tests were a success so, from 1932 until the 1960s, trains
on the Budapest-Hegyeshalom line (towards Vienna) regularly used the same system. A few decades after the Second
World War, the 16 kV was changed to the Russian and later French 25 kV system. To prevent the risk of out-of-phase
supplies mixing, sections of line fed from different feeder stations must be kept strictly isolated. This is achieved
by Neutral Sections (also known as Phase Breaks), usually provided at feeder stations and midway between them although,
typically, only half are in use at any time, the others being provided to allow a feeder station to be shut down
and power provided from adjacent feeder stations. Neutral Sections usually consist of an earthed section of wire
which is separated from the live wires on either side by insulating material, typically ceramic beads, designed so
that the pantograph will smoothly run from one section to the other. The earthed section prevents an arc being drawn
from one live section to the other, as the voltage difference may be higher than the normal system voltage if the
live sections are on different phases and the protective circuit breakers may not be able to safely interrupt the
considerable current that would flow. To prevent the risk of an arc being drawn across from one section of wire to
earth, when passing through the neutral section, the train must be coasting and the circuit breakers must be open.
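The voltage hazard across a phase break can be quantified: for two feeder sections supplied from different phases of the same three-phase grid, the voltage between them is √3 times the single-phase voltage. A short sketch, assuming 25 kV feeds (the common European AC electrification voltage) and worst-case adjacent phases 120° apart:

```python
import math

# If two adjacent feeder sections are supplied from different phases of a
# three-phase grid, the RMS voltage *between* them exceeds the normal
# single-phase system voltage. The 25 kV figure and the 120-degree phase
# separation are illustrative assumptions for the worst case.

def between_phase_rms(phase_rms_kv: float, phase_angle_deg: float) -> float:
    """RMS of the difference of two equal-magnitude sinusoids at this angle."""
    theta = math.radians(phase_angle_deg)
    # |V1 - V2| for equal-magnitude phasors separated by theta: 2*V*sin(theta/2)
    return 2 * phase_rms_kv * math.sin(theta / 2)

v_diff = between_phase_rms(25.0, 120.0)
print(f"across a 120-degree phase break: {v_diff:.1f} kV RMS")  # ~43.3 kV
```

An arc struck across the break would thus see roughly 43 kV rather than 25 kV, which is why the earthed neutral section and the open circuit breaker are both required.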
In many cases, this is done manually by the drivers. To help them, a warning board is provided just before both the
neutral section and an advance warning some distance before. A further board is then provided after the neutral section
to tell drivers to re-close the circuit breaker, although drivers must not do this until the rear pantograph has
passed this board. In the UK, a system known as Automatic Power Control (APC) automatically opens and closes the
circuit breaker, this being achieved by using sets of permanent magnets alongside the track communicating with a
detector on the train. The only action needed by the driver is to shut off power and coast and therefore warning
boards are still provided at and on the approach to neutral sections. Modern electrification systems take AC energy
from a power grid which is delivered to a locomotive and converted to a DC voltage to be used by traction motors.
These motors may either be DC motors which directly use the DC or they may be 3-phase AC motors which require further
conversion of the DC to 3-phase AC (using power electronics). Thus both systems are faced with the same task: converting
and transporting high-voltage AC from the power grid to low-voltage DC in the locomotive. Where should this conversion
take place and at what voltage and current (AC or DC) should the power flow to the locomotive? And how does all this
relate to energy-efficiency? Both the transmission and conversion of electric energy involve losses: ohmic losses
in wires and power electronics, magnetic field losses in transformers and smoothing reactors (inductors). Power conversion
for a DC system takes place mainly in a railway substation where large, heavy, and more efficient hardware can be
used as compared to an AC system where conversion takes place aboard the locomotive where space is limited and losses
are significantly higher. Also, the energy used to blow air to cool transformers, power electronics (including rectifiers),
and other conversion hardware must be accounted for. In the Soviet Union, in the 1970s, a comparison was made between
systems electrified at 3 kV DC and 25 kV AC (50 Hz). The results showed that percentage losses in the overhead wires (catenary and contact wires) were over three times greater for 3 kV DC than for 25 kV AC. But when the conversion losses were all taken into account and added to the overhead wire losses (including cooling blower energy), the 25 kV AC system lost a somewhat higher percentage of energy than the 3 kV DC system. Thus, in spite of the much higher losses in the catenary, the
3 kV DC was a little more energy efficient than AC in providing energy from the USSR power grid to the terminals
of the traction motors (all DC at that time). While both systems use energy in converting higher voltage AC from
the USSR's power grid to lower voltage DC, the conversions for the DC system all took place (at higher efficiency)
in the railway substation, while most of the conversion for the AC system took place inside the locomotive (at lower
efficiency). Consider also that it takes energy to constantly move this mobile conversion hardware over the rails
while the stationary hardware in the railway substation doesn't incur this energy cost. Newly electrified lines often show a "sparks effect", whereby electrification in passenger
rail systems leads to significant jumps in patronage / revenue. The reasons may include electric trains being seen
as more modern and attractive to ride, faster and smoother service, and the fact that electrification often goes
hand in hand with a general infrastructure and rolling stock overhaul / replacement, which leads to better service
quality (in a way that theoretically could also be achieved by doing similar upgrades yet without electrification).
Whatever the causes of the sparks effect, it is well established for numerous routes that have electrified over decades.
Network effects are a large factor with electrification. When converting lines to electric, the connections with
other lines must be considered. Some electrifications have subsequently been removed because of the through traffic
to non-electrified lines. If through traffic is to have any benefit, time-consuming engine switches must occur to
make such connections or expensive dual mode engines must be used. This is mostly an issue for long distance trips,
but many lines come to be dominated by through traffic from long-haul freight trains (usually running coal, ore,
or containers to or from ports). In theory, these trains could enjoy dramatic savings through electrification, but
it can be too costly to extend electrification to isolated areas, and unless an entire network is electrified, companies
often find that they need to continue use of diesel trains even if sections are electrified. The increasing demand
for container traffic which is more efficient when utilizing the double-stack car also has network effect issues
with existing electrifications due to insufficient clearance of overhead electrical lines for these trains, but electrification
can be built or modified to have sufficient clearance, at additional cost. Additionally, there are issues of connections
between different electrical services, particularly connecting intercity lines with sections electrified for commuter
traffic, but also between commuter lines built to different standards. This can cause electrification of certain
connections to be very expensive simply because of the implications on the sections it is connecting. Many lines
have come to be overlaid with multiple electrification standards for different trains to avoid having to replace
the existing rolling stock on those lines. Obviously, this requires that the economics of a particular connection
must be more compelling and this has prevented complete electrification of many lines. In a few cases, there are
diesel trains running along completely electrified routes and this can be due to incompatibility of electrification
standards along the route. Central station electricity can often be generated with higher efficiency than a mobile
engine/generator. While the efficiencies of power-plant generation and diesel-locomotive generation are roughly the same in the nominal regime, diesel engines lose efficiency in non-nominal, low-power regimes, whereas an electric power plant that needs to generate less power can shut down its least efficient generators, thereby increasing efficiency. The electric train can save energy (as compared to diesel) by regenerative braking and by not needing
to consume energy by idling as diesel locomotives do when stopped or coasting. However, electric rolling stock may
run cooling blowers when stopped or coasting, thus consuming energy. Energy sources unsuitable for mobile power plants,
such as nuclear power, renewable hydroelectricity, or wind power can be used. According to widely accepted global
energy reserve statistics, the reserves of liquid fuel are much less than gas and coal (at 42, 167 and 416 years
respectively). Most countries with large rail networks do not have significant oil reserves and those that did, like
the United States and Britain, have exhausted much of their reserves and have suffered declining oil output for decades.
Therefore, there is also a strong economic incentive to substitute other fuels for oil. Rail electrification is often
considered an important route towards consumption pattern reform. However, there are no reliable, peer-reviewed studies
available to assist in rational public debate on this critical issue, although there are untranslated Soviet studies
from the 1980s. In the former Soviet Union, electric traction eventually became somewhat more energy-efficient than
diesel. Partly due to inefficient generation of electricity in the USSR (only 20.8% thermal efficiency in 1950 vs.
36.2% in 1975), in 1950 diesel traction was about twice as energy efficient as electric traction (in terms of net
tonne-km of freight per kg of fuel). But as efficiency of electricity generation (and thus of electric traction)
improved, by about 1965 electric railways had become more efficient than diesel. After the mid-1970s, electrics used about 25% less fuel per tonne-km. However, diesels were mainly used on single-track lines with a fair amount of traffic, so
that the lower fuel consumption of electrics may be in part due to better operating conditions on electrified lines
(such as double tracking) rather than inherent energy efficiency. Nevertheless, the cost of diesel fuel was about
1.5 times more (per unit of heat energy content) than that of the fuel used in electric power plants (that generated
electricity), thus making electric railways even more energy-cost effective. Besides increased efficiency of power
plants, there was an increase in efficiency (between 1950 and 1973) of the railway utilization of this electricity
with energy-intensity dropping from 218 to 124 kWh/10,000 gross tonne-km (for both passenger and freight trains), or
a 43% drop. Since energy-intensity is the inverse of energy-efficiency it drops as efficiency goes up. But most of
this 43% decrease in energy-intensity also benefited diesel traction. The conversion of wheel bearings from plain
to roller, increase of train weight, converting single track lines to double track (or partially double track), and
the elimination of obsolete 2-axle freight cars increased the energy-efficiency of all types of traction: electric,
diesel, and steam. However, there remained a 12–15% reduction of energy-intensity that only benefited electric traction
(and not diesel). This was due to improvements in locomotives, more widespread use of regenerative braking (which in 1989 recycled 2.65% of the electric energy used for traction), remote control of substations, better handling
of the locomotive by the locomotive crew, and improvements in automation. Thus the overall efficiency of electric
traction as compared to diesel more than doubled between 1950 and the mid-1970s in the Soviet Union. But from 1974 through 1980 there was no improvement in energy-intensity (Wh/tonne-km), in part due to increasing speeds of passenger
and freight trains.
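The energy-intensity figures quoted above can be checked directly. A small sketch reproducing the 1950–1973 Soviet numbers from the text, and the efficiency gain they imply:

```python
# Verify the energy-intensity arithmetic quoted in the text: a drop from
# 218 to 124 kWh per 10,000 gross tonne-km between 1950 and 1973. Since
# energy-intensity is the inverse of energy-efficiency, the same numbers
# also give the corresponding efficiency improvement.
intensity_1950 = 218.0  # kWh / 10,000 gross tonne-km (from the text)
intensity_1973 = 124.0

drop = (intensity_1950 - intensity_1973) / intensity_1950
efficiency_gain = intensity_1950 / intensity_1973 - 1

print(f"energy-intensity fell {drop:.0%}")                 # 43%
print(f"i.e. efficiency rose about {efficiency_gain:.0%}")  # ~76%
```

The 43% fall in intensity matches the figure in the text; note that because intensity is the inverse of efficiency, a 43% intensity drop corresponds to an efficiency rise of about 76%, not 43%.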
The Spanish language is the second most spoken language in the United States. There are 45 million Hispanophones who speak
Spanish as a first or second language in the United States, as well as six million Spanish language students. Together,
this makes the United States the second-largest Hispanophone country in the world after Mexico; the United States has more Spanish speakers than Colombia or Spain, though fewer first-language speakers. Spanish
is the Romance language and the Indo-European language with the largest number of native speakers in the world. Roughly
half of all American Spanish-speakers also speak English "very well," based on their self-assessment in the U.S.
Census. The Spanish language has been present in what is now the United States since the 16th and 17th centuries,
with the arrival of Spanish colonization in North America that would later become the states of Florida, Texas, Colorado,
New Mexico, Arizona, Nevada, Utah, and California. The Spanish explorers explored areas of 42 future U.S. states
leaving behind a varying range of Hispanic legacy in the North American continent. Additionally, western regions
of the Louisiana Territory were under Spanish rule between 1763 and 1800, after the French and Indian War, further
extending the Spanish influence throughout modern-day United States of America. Spanish was the language spoken by
the first permanent European settlers in North America. Spanish arrived in the territory of the modern United States
with Ponce de León in 1513. In 1565, the Spaniards, led by Pedro Menéndez de Avilés, founded St. Augustine, Florida,
and as of the early 1800s, it became the oldest continuously occupied European settlement in the continental United
States. The oldest city in all U.S. territory, as of 1898, is San Juan, the capital of Puerto Rico, where Juan Ponce de León was the first governor. In 1821, after Mexico's War of Independence from Spain, Texas was part of the
United Mexican States as the state of Coahuila y Tejas. A large influx of Americans soon followed, originally with
the approval of Mexico's president. In 1836, the now largely "American" Texans, fought a war of independence from
the central government of Mexico and established the Republic of Texas. In 1846, the Republic dissolved when Texas
entered the United States of America as a state. Per the 1850 U.S. census, fewer than 16,000 Texans were of Mexican
descent, and nearly all were Spanish-speaking people (both Mexicans and non-Spanish European settlers, including German Texans) who were outnumbered six to one by English-speaking settlers (both Americans and other immigrant Europeans). After the Mexican War of Independence from Spain, California, Nevada, Arizona, Utah, western Colorado, and southwestern Wyoming also became part of the Mexican territory of Alta California, and most
of New Mexico, western Texas, southern Colorado, southwestern Kansas, and Oklahoma panhandle were part of the territory
of Santa Fe de Nuevo México. The geographical isolation and unique political history of this territory led to New
Mexican Spanish differing notably from both Spanish spoken in other parts of the United States of America and Spanish
spoken in the present-day United Mexican States. Through the force of sheer numbers, the English-speaking American
settlers entering the Southwest established their language, culture, and law as dominant, to the extent that they fully displaced Spanish in the public sphere; this is why the United States never developed bilingualism as Canada did.
For example, the California constitutional convention of 1849 had eight Californio participants; the resulting state
constitution was produced in English and Spanish, and it contained a clause requiring all published laws and regulations
to be published in both languages. The constitutional convention of 1872 had no Spanish-speaking participants; the
convention's English-speaking participants felt that the state's remaining minority of Spanish-speakers should simply
learn English; and the convention ultimately voted 46-39 to revise the earlier clause so that all official proceedings
would henceforth be published only in English. For decades, the U.S. federal government strenuously tried to force
Puerto Ricans to adopt English, to the extent of making them use English as the primary language of instruction in
their high schools. It was completely unsuccessful, and retreated from that policy in 1948. Puerto Rico was able
to maintain its Spanish language, culture, and identity because the relatively small, densely populated island was
already home to nearly a million people at the time of the U.S. takeover, all of whom spoke Spanish, and the territory
was never hit with a massive influx of millions of English speakers like the vast territory acquired from Mexico
50 years earlier. At over 5 million, Puerto Ricans are easily the second-largest Hispanic group. Of all major Hispanic
groups, Puerto Ricans are the least likely to be proficient in Spanish, but millions of Puerto Rican Americans living
in the U.S. mainland nonetheless are fluent in Spanish. Puerto Ricans are natural-born U.S. citizens, and many Puerto
Ricans have migrated to New York City, Orlando, Philadelphia, and other areas of the Eastern United States, increasing
the Spanish-speaking populations and in some areas being the majority of the Hispanophone population, especially
in Central Florida. In Hawaii, where Puerto Rican farm laborers and Mexican ranchers have settled since the late
19th century, 7.0 per cent of the islands' people are either Hispanic or Hispanophone or both. Immigration to the
United States of Spanish-speaking Cubans began because of Cuba's political instability upon achieving independence.
The deposition of Fulgencio Batista's dictatorship and the ascension of Fidel Castro's government in 1959 increased
Cuban immigration to the United States, hence there are some one million Cubans in the United States, most settled
in southern and central Florida, while other Cubans live in the Northeastern United States; most are fluent in Spanish.
In the city of Miami today, Spanish is the first language, mostly due to Cuban immigration. Likewise, the migration of Spanish-speaking Nicaraguans began as a result of political instability at the end of the 1970s and during the 1980s. The Sandinista revolution, which toppled the Somoza dictatorship in 1979, caused many Nicaraguans to migrate, particularly those opposing the Sandinistas. Throughout the 1980s, with the U.S.-supported Contra War, which continued until 1988, and the economic collapse of the country, many more Nicaraguans migrated to the United States, among other countries. The states of the United States where
most Nicaraguans migrated to include Florida, California and Texas. The exodus of Salvadorans was a result of both
economic and political problems. The largest immigration wave occurred as a result of the Salvadoran Civil War in
the 1980s, in which 20–30% of El Salvador's population emigrated. About 50%, or up to 500,000 of those who escaped
headed to the United States, which was already home to over 10,000 Salvadorans, making Salvadoran Americans the
fourth-largest Hispanic and Latino American group, after the Mexican-American majority, stateside Puerto Ricans,
and Cubans. As civil wars engulfed several Central American countries in the 1980s, hundreds of thousands of Salvadorans
fled their country and came to the United States. Between 1980 and 1990, the Salvadoran immigrant population in the
United States increased nearly fivefold from 94,000 to 465,000. The number of Salvadoran immigrants in the United
States continued to grow in the 1990s and 2000s as a result of family reunification and new arrivals fleeing a series
of natural disasters that hit El Salvador, including earthquakes and hurricanes. By 2008, there were about 1.1 million
Salvadoran immigrants in the United States. Until the 20th century, there was no clear record of the number of Venezuelans
who emigrated to the United States. Between the 18th and early 19th centuries, there were many European immigrants
who went to Venezuela, only to later migrate to the United States along with their children and grandchildren who were born and/or grew up in Venezuela speaking Spanish. From 1910 to 1930, it is estimated that over 4,000 South Americans each year emigrated to the United States; however, there are few specific figures documenting this. Many Venezuelans settled in the United States with hopes of receiving a better education, only to remain there following graduation. They are frequently joined by relatives. However, since the early 1980s, the reasons for Venezuelan emigration have changed to include hopes of earning a higher salary and the economic fluctuations in Venezuela, which also prompted a significant migration of Venezuelan professionals to the US. In the 2000s, more Venezuelans opposing the economic and political policies of President Hugo Chávez migrated to the United States (mostly to Florida, but
New York City and Houston are other destinations). The largest concentration of Venezuelans in the United States
is in South Florida, especially the suburbs of Doral and Weston. Other main states with Venezuelan American populations
are, according to the 1990 census, New York, California, Texas (adding to their existing Hispanic populations), New
Jersey, Massachusetts and Maryland. Some of the urban areas with a high Venezuelan community include Miami, New York
City, Los Angeles, and Washington, D.C. Although the United States has no de jure official language, English is the
dominant language of business, education, government, religion, media, culture, civil society, and the public sphere.
Virtually all state and federal government agencies and large corporations use English as their internal working
language, especially at the management level. Some states, such as New Mexico, provide bilingual legislated notices
and official documents, in Spanish and English, and other commonly used languages. By 2015, the trend was for most Americans and American residents of Hispanic descent to speak only English in the home. Arizona, like its southwestern neighbors, has had close linguistic and cultural ties with Mexico. The state outside the Gadsden
Purchase of 1853 was part of the New Mexico Territory until 1863, when the western half was made into the Arizona
Territory. The area of the former Gadsden Purchase contained a majority of Spanish-speakers until the 1940s, although
the Tucson area had a higher ratio of anglophones (including Mexican Americans who were fluent in English); the continuous arrival of Mexican settlers increased the number of Spanish speakers. New Mexico is commonly thought to have Spanish
as an official language alongside English because of its wide usage and legal promotion of Spanish in the state;
however, the state has no official language. New Mexico's laws are promulgated bilingually in Spanish and English.
Although English is the state government's paper working language, government business is often conducted in Spanish,
particularly at the local level. Spanish has been spoken in the New Mexico-Colorado border and the contemporary U.S.–Mexico
border since the 16th century.[citation needed] Because of its relative isolation from other Spanish-speaking areas
over most of its 400-year existence, New Mexico Spanish, and in particular the Spanish of northern New Mexico and
Colorado has retained many elements of 16th- and 17th-century Spanish and has developed its own vocabulary. In addition,
it contains many words from Nahuatl, the language spoken by the ancient Aztecs of Mexico. New Mexican Spanish also
contains loan words from the Pueblo languages of the upper Rio Grande Valley, Mexican-Spanish words (mexicanismos),
and borrowings from English. Grammatical changes include the loss of the second person verb form, changes in verb
endings, particularly in the preterite, and partial merging of the second and third conjugations. In Texas, English
is the state's de facto official language (though it lacks de jure status) and is used in government. However, the
continual influx of Spanish-speaking immigrants has increased the importance of Spanish in Texas. Texas's counties bordering
Mexico are mostly Hispanic, and consequently, Spanish is commonly spoken in the region. The Government of Texas,
through Section 2054.116 of the Government Code, mandates that state agencies provide information on their websites
in Spanish to assist residents who have limited English proficiency. Spanish is currently the most widely taught
non-English language in American secondary schools and institutions of higher education. More than 1.4 million university students
were enrolled in language courses in the autumn of 2002; Spanish was the most widely taught language in American colleges
and universities, accounting for 53 percent of total enrollment, followed by French (14.4%), German (7.1%),
Italian (4.5%), American Sign Language (4.3%), Japanese (3.7%), and Chinese (2.4%), although the totals remain relatively
small in relation to the total U.S. population. The State of the Union Addresses and other presidential speeches are
translated into Spanish, following the precedent set by the Bill Clinton administration. Official Spanish translations
are available at WhiteHouse.gov. Moreover, politicians of non-Hispanic origin who are fluent in Spanish speak Spanish
to Hispanic-majority constituencies. There are 500 Spanish-language newspapers, 152 magazines, and 205 publishers in the United
States; magazine and local television advertising expenditures for the Hispanic market grew substantially from 1999
to 2003, by 58 percent and 43 percent, respectively. Calvin Veltman undertook, for the National Center
for Education Statistics and for the Hispanic Policy Development Project, the most complete study of English language
adoption by Hispanophone immigrants. Veltman's language-shift studies document high rates of bilingualism and the subsequent
adoption of English as the preferred language of Hispanics, particularly among the young and the native-born. The complete
set of these studies' demographic projections postulates the near-complete assimilation of a given Hispanophone immigrant
cohort within two generations. Although his study was based on a large 1976 sample from the Bureau of the Census
(a survey that has not been repeated), data from the 1990 Census tend to confirm the extensive Anglicization of the U.S. Hispanic
population. After the incorporation of these states into the United States in the first half of the
19th century, the Spanish language was later reinforced in the country by the acquisition of Puerto Rico in 1898.
Later waves of emigration from Mexico, Cuba, El Salvador and elsewhere in Hispanic America to the United States beginning
in the second half of the 19th century to the present-day have strengthened the role of the Spanish language in the
country. Today, Hispanics are one of the fastest growing demographics in the United States, thus increasing the use
and importance of American Spanish in the United States.
Charleston is the oldest and second-largest city in the U.S. state of South Carolina, the county seat of Charleston County,
and the principal city in the Charleston–North Charleston–Summerville Metropolitan Statistical Area. The city lies
just south of the geographical midpoint of South Carolina's coastline and is located on Charleston Harbor, an inlet
of the Atlantic Ocean formed by the confluence of the Ashley and Cooper Rivers, or, as is locally expressed, "where
the Cooper and Ashley Rivers come together to form the Atlantic Ocean." Founded in 1670 as Charles Town in honor
of King Charles II of England, Charleston adopted its present name in 1783. It moved to its present location on Oyster
Point in 1680 from a location on the west bank of the Ashley River known as Albemarle Point. By 1690, Charles Town
was the fifth-largest city in North America, and it remained among the 10 largest cities in the United States through
the 1840 census. With a 2010 census population of 120,083 (and a 2014 estimate of 130,113), current trends put Charleston
as the fastest-growing municipality in South Carolina. The population of the Charleston metropolitan area, comprising
Berkeley, Charleston, and Dorchester Counties, was counted by the 2014 estimate at 727,689 – the third-largest in
the state – and the 78th-largest metropolitan statistical area in the United States. According to the United States
Census Bureau, the city has a total area of 127.5 square miles (330.2 km2), of which 109.0 square miles (282.2 km2)
is land and 18.5 square miles (47.9 km2) is covered by water. The old city is located on a peninsula at the point
where, as Charlestonians say, "The Ashley and the Cooper Rivers come together to form the Atlantic Ocean." The entire
peninsula is very low, some is landfill material, and as such, frequently floods during heavy rains, storm surges,
and unusually high tides. The city limits have expanded across the Ashley River from the peninsula, encompassing
the majority of West Ashley as well as James Island and some of Johns Island. The city limits also have expanded
across the Cooper River, encompassing Daniel Island and the Cainhoy area. North Charleston blocks any expansion up
the peninsula, and Mount Pleasant occupies the land directly east of the Cooper River. Charleston has a humid subtropical
climate (Köppen climate classification Cfa), with mild winters, hot, humid summers, and significant rainfall all
year long. Summer is the wettest season; almost half of the annual rainfall occurs from June to September in the
form of thundershowers. Fall remains relatively warm through November. Winter is short and mild, and is characterized
by occasional rain. Measurable snow (≥0.1 in or 0.25 cm) occurs at most several times per decade, with the
last such event occurring December 26, 2010. However, 6.0 in (15 cm) fell at the airport on December 23, 1989, the
largest single-day fall on record, contributing to a single-storm and seasonal record of 8.0 in (20 cm) snowfall.
The highest temperature recorded within city limits was 104 °F (40 °C), on June 2, 1985, and June 24, 1944, and the
lowest was 7 °F (−14 °C) on February 14, 1899, although at the airport, where official records are kept, the historical
range is 105 °F (41 °C) on August 1, 1999 down to 6 °F (−14 °C) on January 21, 1985. Hurricanes are a major threat
to the area during the summer and early fall, with several severe hurricanes hitting the area – most notably Hurricane
Hugo on September 21, 1989 (a category 4 storm). Dewpoint in the summer ranges from 67.8 to 71.4 °F (20 to 22 °C).
The Charleston-North Charleston-Summerville Metropolitan Statistical Area consists of three counties: Charleston,
Berkeley, and Dorchester. As of the 2013 U.S. Census, the metropolitan statistical area had a total population of
712,239 people. North Charleston is the second-largest city in the Charleston-North Charleston-Summerville Metropolitan
Statistical Area and ranks as the third-largest city in the state; Mount Pleasant and Summerville are the next-largest
cities. These cities combined with other incorporated and unincorporated areas along with the city of Charleston
form the Charleston-North Charleston Urban Area with a population of 548,404 as of 2010. The metropolitan statistical
area also includes a separate and much smaller urban area within Berkeley County, Moncks Corner (with a 2000 population
of 9,123). The traditional parish system persisted until the Reconstruction Era, when counties were imposed.[citation
needed] Nevertheless, traditional parishes still exist in various capacities, mainly as public service districts.
When the city of Charleston was formed, it was defined by the limits of the Parish of St. Philip and St. Michael;
it now also includes parts of St. James' Parish, St. George's Parish, St. Andrew's Parish, and St. John's Parish, although
the last two are still mostly incorporated rural parishes. After Charles II of England (1630–1685) was restored to
the English throne in 1660 following Oliver Cromwell's Protectorate, he granted the chartered Province of Carolina
to eight of his loyal friends, known as the Lords Proprietors, on March 24, 1663. It took seven years before the
group arranged for settlement expeditions. The first of these founded Charles Town, in 1670. Governance, settlement,
and development were to follow a visionary plan known as the Grand Model prepared for the Lords Proprietors by John
Locke. The community was established by several shiploads of settlers from Bermuda (which lies due east of South
Carolina, although at 1,030 km or 640 mi, it is closest to Cape Hatteras, North Carolina), under the leadership of
governor William Sayle, on the west bank of the Ashley River, a few miles northwest of the present-day city center.
It was soon predicted by the Earl of Shaftesbury, one of the Lords Proprietors, to become a "great port towne", a
destiny the city quickly fulfilled. In 1680, the settlement was moved east of the Ashley River to the peninsula between
the Ashley and Cooper Rivers. Not only was this location more defensible, but it also offered access to a fine natural
harbor. The first settlers primarily came from England, its Caribbean colony of Barbados, and its Atlantic colony
of Bermuda. Among these were free people of color, born in the West Indies of unions and marriages between Africans
and the English, at a time when color lines were looser among the working class in the early colonial years and some wealthy
whites took black consorts or concubines. Charles Town attracted a mixture of ethnic and religious groups. French,
Scottish, Irish, and Germans migrated to the developing seacoast town, representing numerous Protestant denominations.
Because of the battles between English "royalty" and the Roman Catholic Church, practicing Catholics were not allowed
to settle in South Carolina until after the American Revolution. Jews were allowed, and Sephardic Jews migrated to
the city in such numbers that by the beginning of the 19th century, the city was home to the largest and wealthiest
Jewish community in North America—a status it held until about 1830. The early settlement was often subject to attack
from sea and land, including periodic assaults from Spain and France (both of whom contested England's claims to
the region), and pirates. These were combined with raids by Native Americans, who tried to protect themselves from
so-called European "settlers," who in turn wanted to expand the settlement. The heart of the city was fortified according
to a 1704 plan by Governor Johnson. Except for those fronting the Cooper River, the walls were largely removed during the
1720s. Africans were brought to Charles Town on the Middle Passage, first as "servants", then as slaves. Ethnic groups
transported here included especially Wolof, Yoruba, Fulani, Igbo, Malinke, and other people of the Windward Coast.
An estimated 40% of the approximately 400,000 Africans transported and sold into slavery in North America
landed at Sullivan's Island, just off the port of Charles Town; it has been described as a "hellish Ellis Island of
sorts .... Today nothing commemorates that ugly fact but a simple bench, established by the author Toni Morrison
using private funds." Colonial Lowcountry landowners experimented with cash crops ranging from tea to silkworms.
African slaves brought knowledge of rice cultivation, which plantation owners cultivated and developed as a successful
commodity crop by 1700. With the coerced help of African slaves from the Caribbean, Eliza Lucas, daughter of plantation
owner George Lucas, learned how to raise and use indigo in the Lowcountry in 1747. Supported with subsidies from
Britain, indigo was a leading export by 1750. Those and naval stores were exported in an extremely profitable shipping
industry. By the mid-18th century, Charles Town had become a bustling trade center, the hub of the Atlantic trade
for the southern colonies. Charles Town was also the wealthiest and largest city south of Philadelphia, in part
because of the lucrative slave trade. By 1770, it was the fourth-largest port in the colonies, after Boston, New
York, and Philadelphia, with a population of 11,000—slightly more than half of them slaves. By 1708, the majority
of the colony's population was enslaved, and the future state would continue to have a majority population of African descent until
after the Great Migration of the early 20th century. Charles Town was a hub of the deerskin trade, the basis of its
early economy. Trade alliances with the Cherokee and Creek nations ensured a steady supply of deer hides. Between
1699 and 1715, colonists exported an average of 54,000 deer skins annually to Europe through Charles Town. Between
1739 and 1761, the height of the deerskin trade era, an estimated 500,000 to 1,250,000 deer were slaughtered. During
the same period, Charles Town records show an export of 5,239,350 pounds of deer skins. Deer skins were used in the
production of men's fashionable and practical buckskin pantaloons, gloves, and book bindings. As Charles Town grew,
so did the community's cultural and social opportunities, especially for the elite merchants and planters. The first
theatre building in America was built in 1736 on the site of today's Dock Street Theatre. Benevolent societies were
formed by different ethnic groups, from French Huguenots to free people of color to Germans to Jews. The Charles
Town Library Society was established in 1748 by well-born young men who wanted to share the financial cost of keeping
up with the scientific and philosophical issues of the day. This group also helped establish the College of Charles
Town in 1770, the oldest college in South Carolina. Until its transition to state ownership in 1970, it was the
oldest municipally supported college in the United States. On June 28, 1776, General Sir Henry Clinton along with
2,000 men and a naval squadron tried to seize Charles Town, hoping for a simultaneous Loyalist uprising in South
Carolina. The fleet's cannonballs failed to penetrate Fort Sullivan's unfinished yet thick palmetto-log
walls. No local Loyalists attacked the town from the mainland side, as the British had hoped they would do. Col.
Moultrie's men returned fire and inflicted heavy damage on several of the British ships. The British were forced
to withdraw their forces, and the Americans renamed the defensive installation as Fort Moultrie in honor of its commander.
Clinton returned in 1780 with 14,000 soldiers. American General Benjamin Lincoln was trapped and surrendered his
entire 5,400-man force after a long fight; the Siege of Charleston was the greatest American defeat of the
war. Several Americans who escaped the carnage joined other militias, including those of Francis Marion, the "Swamp
Fox"; and Andrew Pickens. The British retained control of the city until December 1782. After the British left, the
city's name was officially changed to Charleston in 1783. Although the city lost the status of state capital to Columbia
in 1786, Charleston became even more prosperous in the plantation-dominated economy of the post-Revolutionary years.
The invention of the cotton gin in 1793 revolutionized the processing of this crop, making short-staple cotton profitable.
It was more easily grown in the upland areas, and cotton quickly became South Carolina's major export commodity.
The Piedmont region was developed into cotton plantations, to which the sea islands and Lowcountry were already devoted.
Slaves were also the primary labor force within the city, working as domestics, artisans, market workers, and laborers.
The city also had a large class of free people of color. By 1860, 3,785 free people of color were in Charleston,
nearly 18% of the city's black population, and 8% of the total population. Free people of color were far more likely
to be of mixed racial background than slaves. Many were educated, practiced skilled crafts, and some even owned substantial
property, including slaves. In 1790, they established the Brown Fellowship Society for mutual aid, initially as a
burial society. It continued until 1945. By 1820, Charleston's population had grown to 23,000, maintaining its black
(and mostly slave) majority. When a massive slave revolt planned by Denmark Vesey, a free black, was revealed in
May 1822, whites reacted with intense fear, as they were well aware of the violent retribution of slaves against
whites during the Haitian Revolution. Soon after, Vesey was tried and executed, hanged in early July with five slaves.
Another 28 slaves were later hanged. Later, the state legislature passed laws requiring individual legislative approval
for manumission (the freeing of a slave) and regulating activities of free blacks and slaves. In Charleston, the
African American population increased as freedmen moved from rural areas to the major city: from 17,000 in 1860 to
over 27,000 in 1880. Historian Eric Foner noted that blacks were glad to be relieved of the many regulations of slavery
and to operate outside of white surveillance. Among other changes, most blacks quickly left the Southern Baptist
Church, setting up their own black Baptist congregations or joining new African Methodist Episcopal Church and AME
Zion churches, both independent black denominations first established in the North. Freedmen "acquired dogs, guns,
and liquor (all barred to them under slavery), and refused to yield the sidewalks to whites". Industries slowly brought
the city and its inhabitants back to a renewed vitality and jobs attracted new residents. As the city's commerce
improved, residents worked to restore or create community institutions. In 1865, the Avery Normal Institute was established
by the American Missionary Association as the first free secondary school for Charleston's African American population.
General William T. Sherman lent his support to the conversion of the United States Arsenal into the Porter Military
Academy, an educational facility for former soldiers and boys left orphaned or destitute by the war. Porter Military
Academy later joined with Gaud School and is now a university-preparatory school, Porter-Gaud School. In 1875, blacks
made up 57% of the city's population, and 73% of Charleston County. With leadership by members of the antebellum
free black community, historian Melinda Meeks Hennessy described the community as "unique" in being able to defend
themselves without provoking "massive white retaliation", as occurred in numerous other areas during Reconstruction.
In the 1876 election cycle, two major riots between black Republicans and white Democrats occurred in the city, in
September and the day after the election in November, as well as a violent incident in Cainhoy at an October joint
discussion meeting. Violent incidents occurred throughout the Piedmont of the state as white insurgents struggled
to maintain white supremacy in the face of social changes after the war and granting of citizenship to freedmen by
federal constitutional amendments. After former Confederates were allowed to vote again, election campaigns from
1872 on were marked by violent intimidation of blacks and Republicans by white Democratic paramilitary groups, known
as the Red Shirts. Violent incidents took place in Charleston on King Street on September 6 and in nearby Cainhoy
on October 15, both in association with political meetings before the 1876 election. The Cainhoy incident was the
only one statewide in which more whites were killed than blacks. The Red Shirts were instrumental in suppressing
the black Republican vote in some areas in 1876 and narrowly electing Wade Hampton as governor, and taking back control
of the state legislature. Another riot occurred in Charleston the day after the election, when a prominent Republican
leader was mistakenly reported killed. On August 31, 1886, Charleston was nearly destroyed by an earthquake. The
shock was estimated to have a moment magnitude of 7.0 and a maximum Mercalli intensity of X (Extreme). It was felt
as far away as Boston to the north, Chicago and Milwaukee to the northwest, as far west as New Orleans, as far south
as Cuba, and as far east as Bermuda. It damaged 2,000 buildings in Charleston and caused $6 million worth of damage
($133 million in 2006 dollars), at a time when all the city's buildings were valued at around $24 million ($531 million
in 2006 dollars). Investment in the city continued. The William Enston Home, a planned community for the city's aged
and infirm, was built in 1889. An elaborate public building, the United States Post Office and Courthouse, was completed
by the federal government in 1896 in the heart of the city. The Democrat-dominated state legislature passed a new
constitution in 1895 that disfranchised blacks, effectively excluding them entirely from the political process, a
second-class status that was maintained for more than six decades in a state that was majority black until about
1930. On June 17, 2015, 21-year-old Dylann Roof entered the historic Emanuel African Methodist Episcopal Church during
a Bible study and killed nine people. Senior pastor Clementa Pinckney, who also served as a state senator, was among
those killed during the attack. The deceased also included congregation members Susie Jackson, 87; Rev. Daniel Simmons
Sr., 74; Ethel Lance, 70; Myra Thompson, 59; Cynthia Hurd, 54; Rev. Depayne Middleton-Doctor, 49; Rev. Sharonda Coleman-Singleton,
45; and Tywanza Sanders, 26. The attack garnered national attention, and sparked a debate on historical racism, Confederate
symbolism in Southern states, and gun violence. On July 10, 2015, the Confederate battle flag was removed from the
South Carolina State House. A memorial service on the campus of the College of Charleston was attended by President
Barack Obama, Michelle Obama, Vice President Joe Biden, Jill Biden, and Speaker of the House John Boehner. Charleston
is known for its unique culture, which blends traditional Southern U.S., English, French, and West African elements.
The downtown peninsula has gained a reputation for its art, music, local cuisine, and fashion. Spoleto Festival USA,
held annually in late spring, has become one of the world's major performing arts festivals. It was founded in 1977
by Pulitzer Prize-winning composer Gian Carlo Menotti, who sought to establish a counterpart to the Festival dei
Due Mondi (the Festival of Two Worlds) in Spoleto, Italy. Charleston's oldest community theater group, the Footlight
Players, has provided theatrical productions since 1931. A variety of performing arts venues includes the historic
Dock Street Theatre. The annual Charleston Fashion Week held each spring in Marion Square brings in designers, journalists,
and clients from across the nation. Charleston is known for its local seafood, which plays a key role in the city's
renowned cuisine, which includes staple dishes such as gumbo, she-crab soup, fried oysters, Lowcountry boil, deviled
crab cakes, red rice, and shrimp and grits. Rice is the staple in many dishes, reflecting the rice culture of the
Low Country. The cuisine in Charleston is also strongly influenced by British and French elements. The traditional
Charleston accent has long been noted in the state and throughout the South. It is typically heard in wealthy white
families who trace their ancestry back generations in the city. It has ingliding or monophthongal long mid-vowels,
raises /ay/ and /aw/ in certain environments, and is nonrhotic. Sylvester Primer of the College of Charleston wrote about
aspects of the local dialect in his late 19th-century works: "Charleston Provincialisms" (1887) and "The Huguenot
Element in Charleston's Provincialisms", published in a German journal. He believed the accent was based on the English
as it was spoken by the earliest settlers, therefore derived from Elizabethan England and preserved with modifications
by Charleston speakers. The rapidly disappearing "Charleston accent" is still noted in the local pronunciation of
the city's name. Some elderly (and usually upper-class) Charleston natives drop the 'r' and elongate the first
vowel, pronouncing the name as "Chah-l-ston". Some observers attribute these unique features of Charleston's speech
to its early settlement by French Huguenots and Sephardic Jews (who were primarily English speakers from London),
both of whom played influential roles in Charleston's early development and history.[citation needed] Charleston
annually hosts Spoleto Festival USA founded by Gian Carlo Menotti, a 17-day art festival featuring over 100 performances
by individual artists in a variety of disciplines. The Spoleto Festival is internationally recognized as America's
premier performing arts festival. The annual Piccolo Spoleto festival takes place at the same time and features local
performers and artists, with hundreds of performances throughout the city. Other festivals and events include Historic
Charleston Foundation's Festival of Houses and Gardens and Charleston Antiques Show, the Taste of Charleston, The
Lowcountry Oyster Festival, the Cooper River Bridge Run, The Charleston Marathon, Southeastern Wildlife Exposition
(SEWE), Charleston Food and Wine Festival, Charleston Fashion Week, the MOJA Arts Festival, and the Holiday Festival
of Lights (at James Island County Park), and the Charleston International Film Festival. As it has on every aspect
of Charleston culture, the Gullah community has had a tremendous influence on music in Charleston, especially when
it comes to the early development of jazz music. In turn, the music of Charleston has had an influence on that of
the rest of the country. The Geechee dances that accompanied the music of the dock workers in Charleston followed
a rhythm that inspired Eubie Blake's "Charleston Rag" and later James P. Johnson's "The Charleston", as well as the
dance craze that defined a nation in the 1920s. "Ballin' the Jack", which was a popular dance in the years before
"The Charleston", was written by native Charlestonian Chris Smith. The Jenkins Orphanage was established in 1891
by the Rev. Daniel J. Jenkins in Charleston. The orphanage accepted donations of musical instruments and Rev. Jenkins
hired local Charleston musicians and Avery Institute Graduates to tutor the boys in music. As a result, Charleston
musicians became proficient on a variety of instruments and were able to read music expertly. These traits set Jenkins
musicians apart and helped land some of them positions in big bands with Duke Ellington and Count Basie. William
"Cat" Anderson, Jabbo Smith, and Freddie Green are but a few of the alumni from the Jenkins Orphanage band who became
professional musicians in some of the best bands of the day. Orphanages around the country began to develop brass
bands in the wake of the Jenkins Orphanage Band's success. At the Colored Waif's Home Brass Band in New Orleans,
for example, a young trumpeter named Louis Armstrong first began to draw attention. As many as five bands were on
tour during the 1920s. The Jenkins Orphanage Band played in the inaugural parades of Presidents Theodore Roosevelt
and William Taft and toured the USA and Europe. The band also played on Broadway for the play "Porgy" by DuBose and
Dorothy Heyward, a stage version of their novel of the same title. The story was based in Charleston and featured
the Gullah community. The Heywards insisted on hiring the real Jenkins Orphanage Band to portray themselves on stage.
Only a few years later, DuBose Heyward collaborated with George and Ira Gershwin to turn his novel into the now famous
opera, Porgy and Bess (so named to distinguish it from the play). George Gershwin and Heyward spent the summer
of 1934 at Folly Beach outside of Charleston writing this "folk opera", as Gershwin called it. Porgy and Bess is
considered the Great American Opera[citation needed] and is widely performed. The City of Charleston Fire Department
consists of over 300 full-time firefighters. These firefighters operate out of 19 companies located throughout the city:
16 engine companies, two tower companies, and one ladder company. The department's divisions are Training, Fire Marshal, Operations, and Administration.
The department operates on a 24/48 schedule and had a Class 1 ISO rating until
late 2008, when ISO officially lowered it to Class 3. Russell (Rusty) Thomas served as Fire Chief until June 2008,
and was succeeded by Chief Thomas Carr in November 2008. The City of Charleston Police Department, with a total of
452 sworn officers, 137 civilians, and 27 reserve police officers, is South Carolina's largest police department.
Its procedures for cracking down on drug use and gang violence in the city are used as models by other
cities.[citation needed] According to the final 2005 FBI Crime Reports, Charleston's crime level is worse than
the national average in almost every major category. Greg Mullen, the former Deputy Chief of the Virginia Beach,
Virginia Police Department, serves as the current Chief of the Charleston Police Department. The former Charleston
police chief was Reuben Greenberg, who resigned on August 12, 2005. Greenberg was credited with creating a polite police
force that kept police brutality well in check, even as it developed a visible presence in community policing and
achieved a significant reduction in crime rates. Charleston is the primary medical center for the eastern portion of the state.
The city has several major hospitals located in the downtown area: Medical University of South Carolina Medical Center
(MUSC), Ralph H. Johnson VA Medical Center, and Roper Hospital. MUSC is the state's first school of medicine, the
largest medical university in the state, and the sixth-oldest continually operating school of medicine in the United
States. The downtown medical district is experiencing rapid growth of biotechnology and medical research industries
coupled with substantial expansions of all the major hospitals. Additionally, more expansions are planned or underway
at another major hospital located in the West Ashley portion of the city: Bon Secours-St Francis Xavier Hospital.
The Trident Regional Medical Center located in the City of North Charleston and East Cooper Regional Medical Center
located in Mount Pleasant also serve the needs of residents of the city of Charleston. The City of Charleston is
served by Charleston International Airport (IATA: CHS, ICAO: KCHS), located in the City of North Charleston
about 12 miles (20 km) northwest of downtown Charleston. It is the busiest passenger airport in South Carolina.
The airport shares runways with the adjacent Charleston Air Force Base. Charleston Executive Airport is a
smaller airport located in the John's Island section of the city of Charleston and is used by noncommercial aircraft.
Both airports are owned and operated by the Charleston County Aviation Authority. Interstate 26 begins in downtown
Charleston, with exits to the Septima Clark Expressway, the Arthur Ravenel, Jr. Bridge and Meeting Street. Heading
northwest, it connects the city to North Charleston, the Charleston International Airport, Interstate 95, and Columbia.
The Arthur Ravenel, Jr. Bridge and Septima Clark Expressway are part of U.S. Highway 17, which travels east-west
through the cities of Charleston and Mount Pleasant. The Mark Clark Expressway, or Interstate 526, is the bypass
around the city and begins and ends at U.S. Highway 17. U.S. Highway 52 is Meeting Street and its spur is East Bay
Street, which becomes Morrison Drive after leaving the east side. This highway merges with King Street in the city's
Neck area (industrial district). U.S. Highway 78 is King Street in the downtown area, eventually merging with Meeting
Street. The Arthur Ravenel Jr. Bridge across the Cooper River opened on July 16, 2005, and was the second-longest
cable-stayed bridge in the Americas at the time of its construction. The bridge links Mount Pleasant
with downtown Charleston, and has eight lanes plus a 12-foot lane shared by pedestrians and bicycles. It replaced
the Grace Memorial Bridge (built in 1929) and the Silas N. Pearman Bridge (built in 1966). They were considered two
of the more dangerous bridges in America and were demolished after the Ravenel Bridge opened. The Roman Catholic
Diocese of Charleston Office of Education also operates out of the city and oversees several K-8 parochial schools,
such as Blessed Sacrament School, Christ Our King School, Charleston Catholic School, Nativity School, and Divine
Redeemer School, all of which are "feeder" schools into Bishop England High School, a diocesan high school within
the city. Bishop England, Porter-Gaud School, and Ashley Hall are the city's oldest and most prominent private schools,
and are a significant part of Charleston history, dating back some 150 years. Public institutions of higher education
in Charleston include the College of Charleston (the nation's 13th-oldest university), The Citadel (The Military College of South Carolina), and the Medical University of South Carolina. The city is also home to private universities, including the Charleston School of Law. Charleston is also home to the Roper Hospital School of Practical Nursing,
and the city has a downtown satellite campus for the region's technical school, Trident Technical College. Charleston
is also the location for the only college in the country that offers bachelor's degrees in the building arts, The
American College of the Building Arts. The Art Institute of Charleston, located downtown on North Market Street,
opened in 2007. Charleston has one official sister city, Spoleto, Umbria, Italy. The relationship between the two
cities began when Pulitzer Prize-winning Italian composer Gian Carlo Menotti selected Charleston as the city to host
the American version of Spoleto's annual Festival of Two Worlds. "Looking for a city that would provide the charm
of Spoleto, as well as its wealth of theaters, churches, and other performance spaces, they selected Charleston,
South Carolina, as the ideal location. The historic city provided a perfect fit: intimate enough that the Festival
would captivate the entire city, yet cosmopolitan enough to provide an enthusiastic audience and robust infrastructure."
During this period, the Weapons Station was the Atlantic Fleet's loadout base for all nuclear ballistic missile submarines.
Two SSBN "Boomer" squadrons and a submarine tender were homeported at the Weapons Station, while one SSN attack squadron,
Submarine Squadron 4, and a submarine tender were homeported at the Naval Base. Until the 1996 closure of the station's Polaris Missile Facility Atlantic (POMFLANT), over 2,500 nuclear warheads and their UGM-27 Polaris, UGM-73 Poseidon, and UGM-96 Trident I delivery missiles (SLBMs) were stored and maintained there, guarded by a U.S. Marine Corps security force company. In 1832, South Carolina passed an ordinance of nullification, a procedure by which a state could,
in effect, repeal a federal law; it was directed against the most recent tariff acts. Soon, federal soldiers were
dispatched to Charleston's forts, and five United States Coast Guard cutters were detached to Charleston Harbor "to
take possession of any vessel arriving from a foreign port, and defend her against any attempt to dispossess the
Customs Officers of her custody until all the requirements of law have been complied with." This federal action became
known as the Charleston incident. The state's politicians worked on a compromise law in Washington to gradually reduce
the tariffs. By 1840, the Market Hall and Sheds, where fresh meat and produce were brought daily, became a hub of
commercial activity. The slave trade also depended on the port of Charleston, where ships could be unloaded and the
slaves bought and sold. The legal importation of African slaves had ended in 1808, although smuggling was significant.
However, the domestic trade was booming. More than one million slaves were transported from the Upper South to the
Deep South in the antebellum years, as cotton plantations were widely developed through what became known as the
Black Belt. Many slaves were transported in the coastwise slave trade, with slave ships stopping at ports such as
Charleston. After the defeat of the Confederacy, federal forces remained in Charleston during the city's reconstruction.
The war had shattered the prosperity of the antebellum city. Freed slaves were faced with poverty and discrimination,
but a large community of free people of color had been well-established in the city before the war and became the
leaders of the postwar Republican Party and its legislators. Men who had been free people of color before the war
comprised 26% of those elected to state and federal office in South Carolina from 1868 to 1876.
Between 7 September 1940 and 21 May 1941, 16 British cities suffered aerial raids in which at least 100 long tons of high explosive were dropped. Over a period of 267 days, London was attacked 71 times; Birmingham, Liverpool and Plymouth eight times; Bristol six; Glasgow five; Southampton four; Portsmouth and Hull three; and at least one large raid fell on eight other cities.
This was the result of a rapid escalation that began on 24 August 1940, when German night bombers aiming for RAF airfields drifted off course and accidentally destroyed several London homes, killing civilians, and of UK Prime Minister Winston Churchill's retaliatory bombing of Berlin on the following night. From 7 September
1940, one year into the war, London was bombed by the Luftwaffe for 57 consecutive nights. More than one million houses in London were destroyed or damaged, and more than 40,000 British civilians were killed, almost half of them in London.
Ports and industrial centres outside London were also attacked. The main Atlantic sea port of Liverpool was bombed,
causing nearly 4,000 deaths within the Merseyside area during the war. The North Sea port of Hull, a convenient and
easily found target or secondary target for bombers unable to locate their primary targets, was subjected to 86 raids
in the Hull Blitz during the war, with a conservative estimate of 1,200 civilians killed and 95 percent of its housing
stock destroyed or damaged. Other ports including Bristol, Cardiff, Portsmouth, Plymouth, Southampton and Swansea
were also bombed, as were the industrial cities of Birmingham, Belfast, Coventry, Glasgow, Manchester and Sheffield.
Birmingham and Coventry were chosen because of the Spitfire and tank factories in Birmingham and the many munitions
factories in Coventry. The city centre of Coventry was almost destroyed, as was Coventry Cathedral. The bombing failed
to demoralise the British into surrender or significantly damage the war economy. The eight months of bombing never
seriously hampered British production and the war industries continued to operate and expand. The Blitz was only
authorised when the Luftwaffe had failed to meet preconditions for a 1940 launch of Operation Sea Lion, the provisionally
planned German invasion of Britain. By May 1941 the threat of an invasion of Britain had passed, and Hitler's attention
had turned to Operation Barbarossa in the East. In comparison to the later Allied bombing campaign against Germany,
the Blitz resulted in relatively few casualties; the British bombing of Hamburg in July 1943 inflicted some 42,000
civilian deaths, about the same as the entire Blitz. In the 1920s and 1930s, air power theorists Giulio Douhet and
Billy Mitchell espoused the idea that air forces could win wars by themselves, without a need for land and sea fighting.
It was thought there was no defence against air attack, particularly at night. Enemy industry, their seats of government,
factories and communications could be destroyed, effectively taking away their means to resist. It was also thought
the bombing of residential centres would cause a collapse of civilian will, which might have led to the collapse
of production and civil life. Democracies, where the populace was allowed to show overt disapproval of the ruling
government, were thought particularly vulnerable. This thinking was prevalent in both the RAF and what was then known
as the United States Army Air Corps (USAAC) between the two world wars. RAF Bomber Command's policy in particular
aimed to achieve victory through the destruction of civilian will, communications and industry. Within the
Luftwaffe, there was a more muted view of strategic bombing. The OKL did not oppose the strategic bombardment of
enemy industries or cities, and believed it could greatly affect the balance of power on the battlefield in Germany's
favour by disrupting production and damaging civilian morale, but they did not believe that air power alone could
be decisive. Contrary to popular belief, the Luftwaffe did not have a systematic policy of what became known as "terror
bombing". Evidence suggests that the Luftwaffe did not adopt an official bombing policy in which civilians became
the primary target until 1942. Walther Wever, the Luftwaffe's first Chief of the General Staff, argued that the Luftwaffe General Staff should not be solely educated in tactical
and operational matters. He argued they should be educated in grand strategy, war economics, armament production,
and the mentality of potential opponents (also known as mirror imaging). Wever's vision was not realised; the General
Staff studies in those subjects fell by the wayside, and the Air Academies focused on tactics, technology, and operational
planning, rather than on independent strategic air offensives. In 1936, Wever was killed in an air crash. The failure
to implement his vision for the new Luftwaffe was largely attributable to his immediate successors. Ex-Army personnel
Albert Kesselring and Hans-Jürgen Stumpff are usually blamed for the turning away from strategic planning and focusing
on close air support. However, it would seem the two most prominent enthusiasts for the focus on ground-support operations
(direct or indirect) were actually Hugo Sperrle and Hans Jeschonnek. These men were long-time professional airmen
involved in German air services since early in their careers. The Luftwaffe was not pressured into ground support
operations because of pressure from the army, or because it was led by ex-army personnel. It was instead a mission
that suited the Luftwaffe's existing approach to warfare; a culture of joint inter-service operations, rather than
independent strategic air campaigns. Adolf Hitler failed to pay as much attention to bombing the enemy as he did
to protection from enemy bombing, although he had promoted the development of a bomber force in the 1930s and understood
that it was possible to use bombers for major strategic purposes. He told the OKL in 1939 that ruthless employment
of the Luftwaffe against the heart of the British will to resist could and would follow when the moment was right;
however, he quickly developed a lively scepticism toward strategic bombing, confirmed by the results of the Blitz.
He frequently complained of the Luftwaffe's inability to damage industries sufficiently, saying, "The munitions industry
cannot be interfered with effectively by air raids ... usually the prescribed targets are not hit". Ultimately, Hitler
was trapped within his own vision of bombing as a terror weapon, formed in the 1930s when he threatened smaller nations
into accepting German rule rather than submit to air bombardment. This fact had important implications. It showed
the extent to which Hitler personally mistook Allied strategy for one of morale breaking instead of one of economic
warfare, with the collapse of morale as an additional bonus. Hitler was much more attracted to the political aspects
of bombing. As the mere threat of it had produced diplomatic results in the 1930s, he expected that the threat of
German retaliation would persuade the Allies to adopt a policy of moderation and not to begin a policy of unrestricted
bombing. His hope was, for reasons of political prestige within Germany itself, that the German population would
be protected from the Allied bombings. When this proved impossible, he began to fear that popular feeling would turn
against his regime, and he redoubled efforts to mount a similar "terror offensive" against Britain in order to produce
a stalemate in which both sides would hesitate to use bombing at all. A major problem in the managing of the Luftwaffe
was Hermann Göring. Hitler believed the Luftwaffe was "the most effective strategic weapon", and in reply to repeated
requests from the Kriegsmarine for control over aircraft insisted, "We should never have been able to hold our own
in this war if we had not had an undivided Luftwaffe". Such principles made it much harder to integrate the air force
into the overall strategy and produced in Göring a jealous and damaging defence of his "empire" while removing Hitler
voluntarily from the systematic direction of the Luftwaffe at either the strategic or operational level. When Hitler
tried to intervene more in the running of the air force later in the war, he was faced with a political conflict
of his own making between himself and Göring, which was not fully resolved until the war was almost over. In 1940
and 1941, Göring's refusal to cooperate with the Kriegsmarine denied the Wehrmacht, the entire military forces of the Reich, the chance to strangle British sea communications, which might have had a strategic or even decisive effect in the
war against the British Empire. The deliberate separation of the Luftwaffe from the rest of the military structure
encouraged the emergence of a major "communications gap" between Hitler and the Luftwaffe, which other factors helped
to exacerbate. For one thing, Göring's fear of Hitler led him to falsify or misrepresent what information was available
in the direction of an uncritical and over-optimistic interpretation of air strength. When Göring decided against
continuing Wever's original heavy bomber programme in 1937, the Reichsmarschall's own explanation was that Hitler
wanted to know only how many bombers there were, not how many engines each had. In July 1939, Göring arranged a display
of the Luftwaffe's most advanced equipment at Rechlin, to give the impression the air force was more prepared for
a strategic air war than was actually the case. Within hours of the UK and France declaring war on Germany on 3 September
1939, the RAF bombed German warships along the German coast at Wilhelmshaven. Thereafter bombing operations were directed against ports and shipping, together with propaganda leaflet drops. Operations were planned to minimise civilian casualties.
From 15 May 1940 – the day after the Luftwaffe destroyed the centre of Rotterdam – the RAF also carried out operations
east of the Rhine, attacking industrial and transportation targets. Operations were carried out every night thereafter.
Although not specifically prepared to conduct independent strategic air operations against an opponent, the Luftwaffe
was expected to do so over Britain. From July until September 1940 the Luftwaffe attacked RAF Fighter Command to
gain air superiority as a prelude to invasion. This involved the bombing of English Channel convoys, ports, and RAF
airfields and supporting industries. Destroying RAF Fighter Command would allow the Germans to gain control of the
skies over the invasion area. It was supposed that Bomber Command, RAF Coastal Command and the Royal Navy could not
operate under conditions of German air superiority. The Luftwaffe's poor intelligence meant that their aircraft were
not always able to locate their targets, and thus attacks on factories and airfields failed to achieve the desired
results. British fighter aircraft production continued at a rate surpassing Germany's by 2 to 1. The British produced
10,000 aircraft in 1940, in comparison to Germany's 8,000. The replacement of pilots and aircrew was more difficult.
Both the RAF and Luftwaffe struggled to replace manpower losses, though the Germans had larger reserves of trained
aircrew. The circumstances affected the Germans more than the British. Operating over home territory, British flyers
could fly again if they survived being shot down. German crews, even if they survived, faced capture. Moreover, bombers
had four to five crewmen on board, representing a greater loss of manpower. On 7 September, the Germans shifted away
from the destruction of the RAF's supporting structures. German intelligence suggested Fighter Command was weakening,
and an attack on London would force it into a final battle of annihilation while compelling the British Government
to surrender. The decision to change strategy is sometimes claimed as a major mistake by the Oberkommando der Luftwaffe
(OKL). It is argued that persisting with attacks on RAF airfields might have won air superiority for the Luftwaffe.
Others argue that the Luftwaffe made little impression on Fighter Command in the last week of August and first week
of September and that the shift in strategy was not decisive. It has also been argued that it was doubtful the Luftwaffe
could have won air superiority before the "weather window" began to deteriorate in October. It was also possible,
if RAF losses became severe, that they could pull out to the north, wait for the German invasion, then redeploy southward
again. Other historians argue that the outcome of the air battle was irrelevant; the massive numerical superiority
of British naval forces and the inherent weakness of the Kriegsmarine would have made the projected German invasion,
Unternehmen Seelöwe (Operation Sea Lion), a disaster with or without German air superiority. Regardless of the ability
of the Luftwaffe to win air superiority, Adolf Hitler was frustrated that it was not happening quickly enough. With
no sign of the RAF weakening, and Luftwaffe air fleets (Luftflotten) taking punishing losses, the OKL was keen for
a change in strategy. To reduce losses further, a shift to night attacks was also favoured, giving the bombers greater protection under cover of darkness. On 4 September 1940, in a long address at the Sportpalast,
Hitler declared: "And should the Royal Air Force drop two thousand, or three thousand [kilograms ...] then we will
now drop [...] 300,000, 400,000, yes one million kilograms in a single night. And should they declare they will greatly
increase their attacks on our cities, then we will erase their cities." It was decided to focus on bombing Britain's
industrial cities in daylight to begin with. The main focus of the bombing operations was against the city of London.
The first major raid in this regard took place on 7 September. On 15 September, on a date known as the Battle of
Britain Day, a large-scale raid was launched in daylight, but suffered significant loss for no lasting gain. Although
there were a few large air battles fought in daylight later in the month and into October, the Luftwaffe switched
its main effort to night attacks in order to reduce losses. This became official policy on 7 October. The air campaign
soon got underway against London and other British cities. However, the Luftwaffe faced limitations. Its aircraft—Dornier
Do 17, Junkers Ju 88, and Heinkel He 111s—were capable of carrying out strategic missions, but were incapable of
doing greater damage because of bomb-load limitations. The Luftwaffe's decision in the interwar period to concentrate
on medium bombers can be attributed to several reasons: Hitler did not intend or foresee a war with Britain in 1939;
the OKL believed a medium bomber could carry out strategic missions just as well as a heavy bomber force; and Germany
did not possess the resources or technical ability to produce four-engined bombers before the war. Although it had
equipment capable of doing serious damage, the problem for the Luftwaffe was its unclear strategy and poor intelligence.
OKL had not been informed that Britain was to be considered a potential opponent until early 1938. It had no time
to gather reliable intelligence on Britain's industries. Moreover, OKL could not settle on an appropriate strategy.
German planners had to decide whether the Luftwaffe should deliver the weight of its attacks against a specific segment
of British industry such as aircraft factories, or against a system of interrelated industries such as Britain's
import and distribution network, or even in a blow aimed at breaking the morale of the British population. The Luftwaffe's
strategy became increasingly aimless over the winter of 1940–1941. Disputes among the OKL staff revolved more around
tactics than strategy. This indecision condemned the offensive over Britain to failure before it began. In an operational
capacity, limitations in weapons technology and quick British reactions were making it more difficult to achieve
strategic effect. Attacking ports, shipping and imports as well as disrupting rail traffic in the surrounding areas,
especially the distribution of coal, an important fuel in all industrial economies of the Second World War, promised better results. However, the use of delayed-action bombs, while initially very effective, gradually had less impact, partly because they failed to detonate. Moreover, the British had anticipated the change in strategy and dispersed their production facilities, making them less vulnerable to a concentrated attack. Regional commissioners
were given plenipotentiary powers to restore communications and organise the distribution of supplies to keep the
war economy moving. Based on experience with German strategic bombing during World War I against the United Kingdom,
the British government estimated after the war that 50 casualties (about one third of them killed) would result for
every tonne of bombs dropped on London. The estimate of tonnes of bombs an enemy could drop per day grew as aircraft
technology advanced, from 75 in 1922, to 150 in 1934, to 644 in 1937. That year the Committee on Imperial Defence
estimated that an attack of 60 days would result in 600,000 dead and 1,200,000 wounded. News reports of the Spanish
Civil War, such as the bombing of Barcelona, supported the 50-casualties-per-tonne estimate. By 1938 experts generally
expected that Germany would attempt to drop as much as 3,500 tonnes in the first 24 hours of war and average 700
tonnes a day for several weeks. In addition to high explosive and incendiary bombs the enemy would possibly use poison
gas and even bacteriological warfare, all with a high degree of accuracy. In 1939 military theorist Basil Liddell Hart
predicted that 250,000 deaths and injuries in Britain could occur in the first week of war. In addition to the dead
and wounded, government leaders feared mass psychological trauma from aerial attack and a resulting collapse of civil
society. A committee of psychiatrists reported to the government in 1938 that there would be three times as many
mental as physical casualties from aerial bombing, implying three to four million psychiatric patients. Winston Churchill
told Parliament in 1934, "We must expect that, under the pressure of continuous attack upon London, at least three
or four million people would be driven out into the open country around the metropolis." Panicked reactions during
the Munich crisis, such as the migration by 150,000 to Wales, contributed to fear of societal chaos. The government
planned the voluntary evacuation of four million people, mostly women and children, from urban areas, including 1.4 million
from London. It expected about 90% of evacuees to stay in private homes, and conducted an extensive survey to determine
available space. Detailed preparations for transporting them were developed. A trial blackout was held on 10 August
1939, and when Germany invaded Poland on 1 September a blackout began at sunset. Lights would not be allowed after
dark for almost six years, and the blackout became by far the most unpopular aspect of the war for civilians, more
than rationing. The relocation of the government and the civil service was also planned, but would only have occurred if necessary, so as not to damage civilian morale. Much civil-defence preparation in the form of shelters
was left in the hands of local authorities, and many areas such as Birmingham, Coventry, Belfast and the East End
of London did not have enough shelters. The Phoney War, however, and the unexpected delay of civilian bombing permitted
the shelter programme to finish in June 1940. The programme favoured backyard Anderson shelters and small brick
surface shelters; many of the latter were soon abandoned in 1940 as unsafe. In addition, authorities expected that
the raids would be brief and during the day. Few predicted that attacks by night would force Londoners to sleep in
shelters. Very deeply buried shelters provided the most protection against a direct hit. The government did not build
them for large populations before the war because of cost, time to build, and fears that their very safety would
cause occupants to refuse to leave them and return to work, or that anti-war sentiment would develop in large groups. The
government saw the Communist Party's leading role in advocating for building deep shelters as an attempt to damage
civilian morale, especially after the Molotov-Ribbentrop Pact of August 1939. The most important existing communal
shelters were the London Underground stations. Although many civilians had used them as such during the First World
War, the government in 1939 refused to allow the stations to be used as shelters, so as not to interfere with commuter and troop travel, and because of fears that occupants might refuse to leave. Underground officials were ordered to lock
station entrances during raids; but by the second week of heavy bombing the government relented and ordered the stations
to be opened. Each day orderly lines of people queued until 4 pm, when they were allowed to enter the stations. In
mid-September 1940 about 150,000 a night slept in the Underground, although by the winter and spring months the numbers
had declined to 100,000 or less. Noises of battle were muffled and sleep was easier in the deepest stations, but
many were killed by direct hits on several stations. Communal shelters never housed more than one seventh of Greater
London residents, however. Peak use of the Underground as shelter was 177,000 on 27 September 1940, and a November
1940 census of London found that about 4% of residents used the Tube and other large shelters; 9% in public surface
shelters; and 27% in private home shelters, implying that the remaining 60% of the city likely stayed at home. The
government distributed Anderson shelters until 1941 and that year began distributing the Morrison shelter, which
could be used inside homes. Public demand caused the government in October 1940 to begin building new deep shelters within the Underground to hold 80,000 people, but they were not completed until the period of heaviest bombing had passed.
By the end of 1940 significant improvements had been made in the Underground and in many other large shelters. Authorities
provided stoves and bathrooms and canteen trains provided food. Tickets were issued for bunks in large shelters to
reduce the amount of time spent queuing. Committees quickly formed within shelters as informal governments, and organisations
such as the British Red Cross and the Salvation Army worked to improve conditions. Entertainment included concerts,
films, plays and books from local libraries. Although the intensity of the bombing was not as great as prewar expectations, so an exact comparison is impossible, no psychiatric crisis occurred because of the Blitz, even during the period of greatest bombing in September 1940. An American witness wrote, "By every test and measure I am able to apply, these
people are staunch to the bone and won't quit ... the British are stronger and in a better position than they were
at its beginning". People referred to raids as if they were weather, stating that a day was "very blitzy".
However, another American who visited Britain, the publisher Ralph Ingersoll, wrote soon after the Blitz eased on 15 September that, according to Anna Freud and Edward Glover, London civilians surprisingly did not suffer from widespread shell shock, unlike the soldiers in the Dunkirk evacuation. The psychoanalysts
were correct, and the special network of psychiatric clinics opened to receive mental casualties of the attacks closed
due to lack of need. Although the stress of the war resulted in many anxiety attacks, eating disorders, fatigue,
weeping, miscarriages, and other physical and mental ailments, society did not collapse. Suicides and drunkenness declined, and London recorded only about two cases of "bomb neuroses" per week in the first three months
of bombing. Many civilians found that the best way to retain mental stability was to be with family, and after the
first few weeks of bombing, avoidance of the evacuation programmes grew. Glover speculated that the knowledge
that the entire country was being attacked, that there was no way to escape the bombs, forced people to accept and
deal with the situation. The cheerful crowds visiting bomb sites were so large they interfered with rescue work,
pub visits increased in number (beer was never rationed), and 13,000 attended cricket at Lord's. People left shelters
when told instead of refusing to leave, although many housewives reportedly enjoyed the break from housework. Some
people even told government surveyors that they enjoyed air raids if they occurred occasionally, perhaps once a week.
Despite the attacks, defeat in Norway and France, and the threat of invasion, overall morale remained high; a Gallup
poll found only 3% of Britons expected to lose the war in May 1940, another found an 88% approval rating for Churchill
in July, and a third found 89% support for his leadership in October. Support for peace negotiations declined from
29% in February. Each setback caused more civilians to volunteer to become unpaid Local Defence Volunteers, workers
worked longer shifts and over weekends, contributions rose to the "Spitfire Funds" that raised £5,000 per fighter, and the number of work days lost to strikes in 1940 was the lowest in history. The civilians
of London had an enormous role to play in the protection of their city. Many civilians who were unwilling or unable
to join the military became members of the Home Guard, the Air Raid Precautions service (ARP), the Auxiliary Fire
Service, and many other organisations. The AFS had 138,000 personnel by July 1939. Only one year earlier, there had been just 6,600 full-time and 13,800 part-time firemen in the entire country. During the Blitz, The Scout Association
guided fire engines to where they were most needed, and became known as the "Blitz Scouts". Many unemployed were
drafted into the Royal Army Pay Corps. These personnel, along with others from the Pioneer Corps, were charged with
the task of salvage and clean-up. The WVS (Women's Voluntary Services for Civil Defence) was set up in 1938 under
the direction of Samuel Hoare, the Home Secretary, specifically to assist in the event of air raids. Hoare considered it the female branch
of the ARP. They organised the evacuation of children, established centres for those displaced by bombing, and operated
canteens, salvage and recycling schemes. By the end of 1941, the WVS had one million members. Prior to the outbreak
of war, civilians were issued with 50 million respirators (gas masks), in case gas bombing took place before
evacuation could be carried out. In the inter-war years and after 1940, Hugh Dowding, Air Officer Commanding
Fighter Command, received credit for the defence of British air space and the failure of the Luftwaffe to achieve
air superiority. However, Dowding had spent so much effort preparing day fighter defences that there was little to prevent
the Germans carrying out an alternative strategy by bombing at night. When the Luftwaffe struck at British cities
for the first time on 7 September 1940, a number of civic and political leaders were worried by Dowding's apparent
lack of reaction to the new crisis. Dowding accepted that as AOC, he was responsible for the day and night defence
of Britain, and the blame, should he fail, would be laid at his door. When urgent changes and improvements needed
to be made, Dowding seemed reluctant to act quickly. The Air Staff felt that this was due to his stubborn nature
and reluctance to cooperate. Dowding's opponents in the Air Ministry, already critical of his handling of the day
battle (see Battle of Britain Day and the Big Wing controversy), were ready to use these failings as a cudgel with
which to attack him and his abilities. Dowding was summoned to an Air Ministry conference on 17 October 1940 to explain
the poor state of night defences and the supposed "failure" of his daytime strategy, which had in fact been successful.
The criticism of his leadership extended far beyond the Air Council, and the Minister of Aircraft Production, Lord
Beaverbrook, and Churchill themselves intimated their support was waning. While the failure of night defence preparation
was undeniable, it was not the AOC's responsibility to accrue resources. The general neglect of the RAF until the
late spurt in 1938 had left sparse resources to build defences. While it was permissible to disagree with Dowding's
operational and tactical deployment of forces, the failure of the Government and Air Ministry to allot resources
was ultimately the responsibility of the civil and military institutions at large. In the pre-war period, the Chamberlain
Government stated that night defence from air attack should not take up much of the national effort and, along with
the Air Ministry, did not make it a priority. The attitude of the Air Ministry was in contrast to the experiences
of the First World War when a few German bombers caused physical and psychological damage out of all proportion to
their numbers. Around 280 short tons (250 t) (9,000 bombs) had been dropped, killing 1,413 people and injuring 3,500
more. Most people aged 35 or over remembered the threat and greeted the bombings with great trepidation. From 1916
to 1918, German raids had diminished against countermeasures, which demonstrated that defence against night air raids was possible.
Although night air defence was causing greater concern before the war, it was not at the forefront of RAF planning.
Most of the resources went into planning for daylight fighter defences. The difficulty RAF bombers had navigating
in darkness led the British to believe German bombers would suffer the same problems and would be unable to reach
and identify their targets. There was also a mentality in all air forces that, if they could carry out effective
operations by day, night missions and their disadvantages could be avoided. British air doctrine, since the time
of Chief of the Air Staff Hugh Trenchard in the early 1920s, had stressed offence was the best means of defence.
British defensive strategy revolved around offensive action, in what became known as the cult of the offensive. To prevent
German formations from hitting targets in Britain, the RAF's Bomber Command would destroy Luftwaffe aircraft on their
own bases, aircraft in their factories and fuel reserves by attacking oil plants. This philosophy was impractical
as Bomber Command lacked the technology and equipment and needed several years to develop it. This strategy retarded
the development of fighter defences in the 1930s. Dowding agreed air defence would require some offensive action,
and fighters could not defend Britain alone. Until September 1940, the RAF lacked specialist night-fighting aircraft
and relied on anti-aircraft units which were poorly equipped and lacking in numbers. German bomber crews already had
some experience with radio navigation aids through the Lorenz beam, a commercial blind-landing aid which
allowed aircraft to land at night or in bad weather. The Germans developed the short-range Lorenz system into the
Knickebein aid, a system which used two Lorenz beams with much stronger signal transmissions. The concept was the
same as the Lorenz system. Two aerials were rotated for the two converging beams which were pointed to cross directly
over the target. The German bombers would attach themselves to either beam and fly along it until they started to
pick up the signal from the other beam. When a continuous sound was heard from the second beam the crew knew they
were above the target and began dropping their bombs. While Knickebein was used by German crews en masse, X-Gerät
use was limited to specially trained pathfinder crews. Special receivers were mounted in He 111s, with a radio mast
on the bomber's fuselage. The system worked on a higher frequency (66–77 MHz, compared to Knickebein's 30–33 MHz).
Transmitters on the ground sent pulses at a rate of 180 per minute. X-Gerät received and analysed the pulses, giving
the pilot both visual and aural "on course" signals. Three cross-beams intersected the approach beam along the He 111's flight path.
The first cross-beam warned the bomb-aimer to ready the bombing-clock, which he started only when the second
cross-beam was reached. When the third cross-beam was reached the bomb-aimer activated a third trigger,
which stopped the first hand of the equipment's clock, with the second hand continuing. When the second hand re-aligned
with the first, the bombs were released. The clock's timing mechanism was co-ordinated with the distances of the
intersecting beams from the target so the target was directly below when the bomb release occurred. Y-Gerät was the
most complex system of the three. It was, in effect, an automatic beam-tracking system, operated through the bomber's
autopilot. The single approach beam along which the bomber tracked was monitored by a ground controller. The signals
from the station were retransmitted by the bomber's equipment. This way the distance the bomber travelled along the
beam could be precisely verified. Direction-finding checks also enabled the controller to keep the crew on an exact
course. The crew would be ordered to drop their bombs either by issue of a code word by the ground controller, or
at the conclusion of the signal transmissions which would stop. Although its maximum usable range was similar to
the previous systems, it was not unknown for specific buildings to be hit. In June 1940, a German prisoner of war
was overheard boasting that the British would never find the Knickebein, even though it was under their noses. The
details of the conversation were passed to an RAF Air Staff technical advisor, Dr. R. V. Jones, who started an in-depth
investigation which discovered that the Luftwaffe's Lorenz receivers were more than blind-landing devices. Jones
therefore began a search for the German beams. Avro Ansons of the Beam Approach Training Development Unit (BATDU)
were flown up and down Britain fitted with a 30 MHz receiver to detect them. Soon a beam was traced to Derby (which
had been mentioned in Luftwaffe transmissions). The first jamming operations were carried out using requisitioned
hospital electrocautery machines. A subtle form of distortion was introduced. Up to nine special transmitters directed
their signals at the beams in a manner that widened their path, negating their ability to locate targets accurately.
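The effect of widening a beam's path on bombing accuracy can be illustrated with simple geometry; the function and all figures below are illustrative assumptions, not values from the historical record:

```python
import math

def cross_track_error_km(range_km, beam_width_deg):
    """Approximate positional uncertainty at the target.

    The equisignal zone of a Lorenz-type beam is a narrow wedge; the
    cross-track uncertainty at the end of the flight grows with both
    the distance flown and the angular width of that wedge, so widening
    the beam (as the jammers did) directly degrades accuracy.
    All values here are hypothetical.
    """
    return range_km * math.tan(math.radians(beam_width_deg))

# A 0.1-degree wedge over a 400 km approach gives roughly 0.7 km of
# uncertainty; widening it to 1 degree inflates that to roughly 7 km.
```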
Confidence in the device was diminished by the time the Luftwaffe decided to launch large-scale raids. The counter
operations were carried out by British Electronic Counter Measures (ECM) units under Wing Commander Edward Addison,
No. 80 Wing RAF. The production of false radio navigation signals by re-transmitting the originals was a technique
known as masking beacons (meacons). German beacons operated on the medium-frequency band and the signals involved
a two-letter Morse identifier followed by a lengthy time-lapse which enabled the Luftwaffe crews to determine the
signal's bearing. The Meacon system involved separate locations for a receiver with a directional aerial and a transmitter.
The receipt of the German signal by the receiver was duly passed to the transmitter, the signal to be repeated. The
action did not guarantee automatic success. If the German bomber flew closer to its own beam than to the meacon,
then the beam's signal would come through more strongly on the direction finder; the reverse applied if the meacon
were closer. In general, German bombers were likely to get through to their targets without too much difficulty.
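The direction-finding ambiguity described above can be sketched as a toy model; the positions, units, and the equal-power assumption are all hypothetical simplifications, not details from the source:

```python
import math

def apparent_source(bomber, beacon, meacon):
    """Return which transmitter a bomber's direction finder favours.

    Toy model of meaconing: assuming the genuine beacon and the meacon
    repeat the same signal at equal power, received strength falls off
    with distance, so the direction finder locks onto the nearer
    station. Positions are hypothetical (x, y) pairs in kilometres.
    """
    if math.dist(bomber, beacon) <= math.dist(bomber, meacon):
        return "beacon"
    return "meacon"

# A bomber 60 km from its own beacon but only 25 km from a meacon
# is steered by the false signal.
```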
It was to be some months before an effective night fighter force would be ready, and anti-aircraft defences only
became adequate after the Blitz was over, so ruses were created to lure German bombers away from their targets. Throughout
1940, dummy airfields were prepared, good enough to stand up to skilled observation. A number of bombs fell on
these diversionary ("Starfish") targets. The use of diversionary techniques such as fires had to
be made carefully. The fake fires could only begin when the bombing started over an adjacent target and its effects
were brought under control. Too early and the chances of success receded; too late and the real conflagration at
the target would exceed the diversionary fires. Another innovation was the boiler fire. These units were fed from
two adjacent tanks containing oil and water. The oil-fed fires were then injected with water from time to time; the
flashes produced were similar to those of the German C-250 and C-500 Flammbomben. The hope was that, if it could
deceive German bombardiers, it would draw more bombers away from the real target. Initially the change in strategy
caught the RAF off-guard, and caused extensive damage and civilian casualties. Some 107,400 long tons (109,100 t)
of shipping was damaged in the Thames Estuary and 1,600 civilians were casualties. Of this total around 400 were
killed. The fighting in the air was more intense in daylight. Overall Loge had cost the Luftwaffe 41 aircraft: 14
bombers, 16 Messerschmitt Bf 109s, seven Messerschmitt Bf 110s and four reconnaissance aircraft. Fighter Command
lost 23 fighters, with six pilots killed and another seven wounded. Another 247 bombers from Sperrle's Luftflotte
3 (Air Fleet 3) attacked that night. On 8 September, the Luftwaffe returned. This time 412 people were killed and
747 severely wounded. On 9 September the OKL appeared to be backing two strategies. Its round-the-clock bombing of
London was an immediate attempt to force the British government to capitulate, but it was also striking at Britain's
vital sea communications to achieve a victory through siege. Although the weather was poor, heavy raids took place
that afternoon on the London suburbs and the airfield at Farnborough. The day's fighting cost Kesselring and Luftflotte
2 (Air Fleet 2) 24 aircraft, including 13 Bf 109s. Fighter Command lost 17 fighters and six pilots. Over the next
few days weather was poor and the next main effort would not be made until 15 September 1940. On 15 September the
Luftwaffe made two large daylight attacks on London along the Thames Estuary, targeting the docks and rail communications
in the city. Its hope was to destroy its targets and draw the RAF into defending them, allowing the Luftwaffe to
destroy their fighters in large numbers, thereby achieving air superiority. Large air battles broke out, lasting
for most of the day. The first attack merely damaged the rail network for three days, and the second attack failed
altogether. The air battle was later commemorated by Battle of Britain Day. The Luftwaffe lost 18 percent of the
bombers sent on the operations that day, and failed to gain air superiority. While Göring was optimistic the Luftwaffe
could prevail, Hitler was not. On 17 September he postponed Operation Sea Lion (as it turned out, indefinitely) rather
than gamble Germany's newly gained military prestige on a risky cross-Channel operation, particularly in the face
of a sceptical Joseph Stalin in the Soviet Union. In the last days of the battle, the bombers became lures in an
attempt to draw the RAF into combat with German fighters. But their operations were to no avail; the worsening weather
and unsustainable attrition in daylight gave the OKL an excuse to switch to night attacks on 7 October. On 14 October,
the heaviest night attack to date saw 380 German bombers from Luftflotte 3 hit London. Around 200 people were killed
and another 2,000 injured. British anti-aircraft defences, commanded by General Frederick Alfred Pile, fired 8,326 rounds and
shot down only two bombers. On 15 October, the bombers returned and about 900 fires were started by the mix of 415
short tons (376 t) of high explosive and 11 short tons (10.0 t) of incendiaries dropped. Five main rail lines were
cut in London and rolling stock damaged. Loge continued during October. According to German sources, 9,000 short
tons (8,200 t) of bombs were dropped in that month, of which about 10 percent was dropped in daylight. Over
6,000 short tons (5,400 t) were aimed at London during the night. Birmingham and Coventry were subjected
to 500 short tons (450 t) of bombs between them in the last 10 days of October. Liverpool suffered 200 short tons
(180 t) of bombs dropped. Hull and Glasgow were attacked, but 800 short tons (730 t) of bombs were spread out all
over Britain. The Metropolitan-Vickers works in Manchester was targeted and 12 short tons (11 t) of bombs dropped
against it. Little tonnage was dropped on Fighter Command airfields; Bomber Command airfields were hit instead. Luftwaffe
policy at this point was primarily to continue progressive attacks on London, chiefly by night attack; second, to
interfere with production in the vast industrial arms factories of the West Midlands, again chiefly by night attack;
and third to disrupt plants and factories during the day by means of fighter-bombers. Kesselring, commanding Luftflotte
2, was ordered to send 50 sorties per night against London and attack eastern harbours in daylight. Sperrle, commanding
Luftflotte 3, was ordered to dispatch 250 sorties per night including 100 against the West Midlands. Seeschlange
would be carried out by Fliegerkorps X (10th Air Corps) which concentrated on mining operations against shipping.
It also took part in the bombing over Britain. By 19/20 April 1941, it had dropped 3,984 mines, a third of the overall total.
The mines' ability to destroy entire streets earned them respect in Britain, but several fell unexploded into British
hands allowing counter-measures to be developed which damaged the German anti-shipping campaign. By mid-November
1940, when the Germans adopted a changed plan, more than 13,000 short tons (12,000 t) of high explosive and nearly
1,000,000 incendiaries had fallen on London. Outside the capital, there had been widespread harassing activity by
single aircraft, as well as fairly strong diversionary attacks on Birmingham, Coventry and Liverpool, but no major
raids. The London docks and railway communications had taken a heavy pounding, and much damage had been done to
the railway system outside. In September, there had been no less than 667 hits on railways in Great Britain, and
at one period, between 5,000 and 6,000 wagons were standing idle from the effect of delayed action bombs. But the
great bulk of the traffic went on; and Londoners—though they glanced apprehensively each morning at the list of closed
stretches of line displayed at their local station, or made strange detours round back streets in the buses—still
got to work. For all the destruction of life and property, the observers sent out by the Ministry of Home Security
failed to discover the slightest sign of a break in morale. More than 13,000 civilians had been killed, and almost
20,000 injured, in September and October alone, but the death toll was much less than expected. In late 1940, Churchill
credited the shelters. The American observer Ingersoll reported at this time that "as to the accuracy of the bombing
of military objectives, here I make no qualifications. The aim is surprisingly, astonishingly, amazingly inaccurate
... The physical damage to civilian London, to sum up, was more general and more extensive than I had imagined. The
damage to military targets much less", and stated that he had seen numerous examples of untouched targets surrounded
by buildings destroyed by errant bombs. For example, in two months of bombing, Battersea Power Station, perhaps the
largest single target in London, had only received one minor hit ("a nick"). No bridge over the Thames had been hit,
and the docks were still functioning despite great damage. An airfield was hit 56 times but the runways were never
damaged and the field was never out of operation, despite German pilots' familiarity with it from prewar commercial
flights. Ingersoll wrote that the difference between the failure of the German campaign against military targets
versus its success in continental Europe was the RAF retaining control of the air. British night air defences
were in a poor state. Few anti-aircraft guns had fire-control systems, and the underpowered searchlights were usually
ineffective against aircraft at altitudes above 12,000 ft (3,700 m). In July 1940, only 1,200 heavy and 549 light
guns were deployed in the whole of Britain. Of the "heavies", some 200 were of the obsolescent 3 in (76 mm) type;
the remainder were the effective 4.5 in (110 mm) and 3.7 in (94 mm) guns, with a theoretical "ceiling" of over 30,000
ft (9,100 m) but a practical limit of 25,000 ft (7,600 m) because the predictor in use could not accept greater heights.
The light guns, about half of which were of the excellent Bofors 40 mm, dealt with aircraft only up to 6,000 ft (1,800
m). Although the use of the guns improved civilian morale, with the knowledge the German bomber crews were facing
the barrage, it is now believed that the anti-aircraft guns achieved little and in fact the falling shell fragments
caused more British casualties on the ground. London's defences were rapidly reorganised by General Pile, the Commander-in-Chief
of Anti-Aircraft Command. The difference this made to the effectiveness of air defences is questionable. The British
were still one-third below the establishment of heavy anti-aircraft artillery (AAA, or "ack-ack") in May 1941, with
only 2,631 weapons available. Dowding had to rely on night fighters. From 1940 to 1941, the most successful night-fighter
was the Boulton Paul Defiant; its four squadrons shot down more enemy aircraft than any other type. AA defences improved
by better use of radar and searchlights. Over several months, the 20,000 shells spent per raider shot down in
September 1940 fell to 4,087 in January 1941 and to 2,963 in February 1941. Airborne Interception radar (AI)
was unreliable. The heavy fighting in the Battle of Britain had eaten up most of Fighter Command's resources, so
there was little investment in night fighting. Bombers were flown with airborne searchlights out of desperation,
but to little avail. Of greater potential was the GL (Gunlaying) radar and searchlights with fighter direction
from RAF fighter control rooms to begin a GCI system (Ground-Controlled Interception) under Group-level control
(No. 10 Group RAF, No. 11 Group RAF and No. 12 Group RAF). Whitehall's disquiet at the failures of the RAF led to
the replacement of Dowding (who was already due for retirement) with Sholto Douglas on 25 November. Douglas set about
introducing more squadrons and dispersing the few GL sets to create a carpet effect in the southern counties. Still,
in February 1941, there remained only seven squadrons with 87 pilots, under half the required strength. The GL carpet
was supported by six GCI sets controlling radar-equipped night-fighters. By the height of the Blitz, they were becoming
more successful. The number of contacts and combats rose in 1941, from 44 and two in 48 sorties in January 1941,
to 204 and 74 in May (643 sorties). But even in May, 67% of the sorties were visual cat's-eye missions. Curiously,
while 43% of the contacts in May 1941 were by visual sightings, they accounted for 61% of the combats. Yet when compared
with Luftwaffe daylight operations, there was a sharp decline in German losses to 1%. If a vigilant bomber crew could
spot the fighter first, they had a decent chance of evading it. Nevertheless, it was radar that proved to be the
critical weapon in the night battles over Britain from this point onward. Dowding had introduced the concept of airborne radar
and encouraged its usage. Eventually it would become a success. On the night of 22/23 July 1940, Flying Officer Cyril
Ashfield (pilot), Pilot Officer Geoffrey Morris (Observer) and Flight Sergeant Reginald Leyland (Air Intercept radar
operator) of the Fighter Interception Unit became the first pilot and crew to intercept and destroy an enemy aircraft
using onboard radar to guide them to a visual interception, when their AI night fighter brought down a Do 17 off
Sussex. On 19 November 1940 the famous RAF night fighter ace John Cunningham shot down a Ju 88 bomber using airborne
radar, just as Dowding had predicted. From November 1940 to February 1941, the Luftwaffe shifted its strategy and
attacked other industrial cities. In particular, the West Midlands were targeted. On the night of 13/14 November,
77 He 111s of Kampfgeschwader 26 (26th Bomber Wing, or KG 26) bombed London while 63 from KG 55 hit Birmingham. The
next night, a large force hit Coventry. Twelve "pathfinders" from Kampfgruppe 100 (Bomb Group 100, or KGr 100) led 437
bombers from KG 1, KG 3, KG 26, KG 27, KG 55 and Lehrgeschwader 1 (1st Training Wing, or LG 1) which dropped 394
short tons (357 t) of high explosive, 56 short tons (51 t) of incendiaries, and 127 parachute mines. Other sources
say 449 bombers and a total of 530 short tons (480 t) of bombs were dropped. The raid against Coventry was particularly
devastating, and led to widespread use of the phrase "to coventrate". Over 10,000 incendiaries were dropped. Around
21 factories were seriously damaged in Coventry, and loss of public utilities stopped work at nine others, disrupting
industrial output for several months. Only one bomber was lost, to anti-aircraft fire, despite the RAF flying 125
night sorties. No follow up raids were made, as OKL underestimated the British power of recovery (as Bomber Command
would do over Germany from 1943 to 1945). The Germans were surprised by the success of the attack. The concentration
had been achieved by accident. The strategic effect of the raid was a brief 20 percent dip in aircraft production.
Five nights later, Birmingham was hit by 369 bombers from KG 54, KG 26, and KG 55. By the end of November, 1,100
bombers were available for night raids. An average of 200 were able to strike per night. This weight of attack went
on for two months, with the Luftwaffe dropping 13,900 short tons (12,600 t) of bombs. In November 1940, 6,000 sorties
and 23 major attacks (more than 100 tons of bombs dropped) were flown. Two heavy (50 short tons (45 t) of bombs)
attacks were also flown. In December, only 11 major and five heavy attacks were made. Probably the most devastating
strike occurred on the evening of 29 December, when German aircraft attacked the City of London itself with incendiary
and high explosive bombs, causing a firestorm that has been called the Second Great Fire of London. The first group
to use these incendiaries was Kampfgruppe 100 which despatched 10 "pathfinder" He 111s. At 18:17, it released the
first of 10,000 fire bombs, eventually amounting to 300 dropped per minute. Altogether, 130 German bombers destroyed
the historic centre of London. Civilian casualties in London throughout the Blitz amounted to 28,556 killed, and
25,578 wounded. The Luftwaffe had dropped 18,291 short tons (16,593 t) of bombs. Not all of the Luftwaffe's effort
was made against inland cities. Port cities were also attacked to try to disrupt trade and sea communications. In
January, Swansea was bombed four times, very heavily. On 17 January, around 100 bombers dropped a high concentration
of incendiaries, some 32,000 in all. The main damage was inflicted on the commercial and domestic areas. Four days
later, 230 tons were dropped, including 60,000 incendiaries. In Portsmouth, Southsea and Gosport, waves of 150 bombers
destroyed vast swaths of the city with 40,000 incendiaries. Warehouses, rail lines and houses were destroyed and
damaged, but the docks were largely untouched. Although official German air doctrine did target civilian morale,
it did not espouse attacking civilians directly. It hoped to destroy morale by destroying the enemy's factories
and public utilities as well as its food stocks (by attacking shipping). Nevertheless, its official opposition to
attacks on civilians became an increasingly moot point when large-scale raids were conducted in November and December
1940. Although not encouraged by official policy, the use of mines and incendiaries, for tactical expediency, came
close to indiscriminate bombing. Locating targets in skies obscured by industrial haze meant they needed to be illuminated
"without regard for the civilian population". Special units, such as KGr 100, became the Beleuchtergruppe (Firelighter
Group), which used incendiaries and high explosive to mark the target area. The tactic was expanded into Feuerleitung
(Blaze Control) with the creation of Brandbombenfelder (Incendiary Fields) to mark targets. These were marked out
by parachute flares. Then bombers carrying SC 1000 (1,000 kg (2,205 lb)), SC 1400 (1,400 kg (3,086 lb)), and SC 1800
(1,800 kg (3,968 lb)) "Satan" bombs were used to level streets and residential areas. By December, the SC 2500 (2,500
kg (5,512 lb)) "Max" bomb was used. These decisions, apparently taken at the Luftflotte or Fliegerkorps level (see
Organisation of the Luftwaffe (1933–1945)), meant attacks on individual targets were gradually replaced by what was,
for all intents and purposes, an unrestricted area attack or Terrorangriff (Terror Attack). Part of the reason for
this was inaccuracy of navigation. The effectiveness of British countermeasures against Knickebein, which had made
it possible to avoid area attacks, forced the Luftwaffe to resort to these methods. The shift from precision bombing to area
attack is indicated in the tactical methods and weapons dropped. KGr 100 increased its use of incendiaries from 13
to 28 percent. By December, this had increased to 92 percent. Use of incendiaries, which were inherently inaccurate, indicated
much less care was taken to avoid civilian property close to industrial sites. Other units ceased using parachute
flares and opted for explosive target markers. Captured German air crews also indicated the homes of industrial workers
were deliberately targeted. In 1941, the Luftwaffe shifted strategy again. Erich Raeder—commander-in-chief of the
Kriegsmarine—had long argued the Luftwaffe should support the German submarine force (U-Bootwaffe) in the Battle
of the Atlantic by attacking shipping in the Atlantic Ocean and British ports. Eventually, given the high success
rates of the U-boat force during this period of the war, Raeder convinced Hitler that attacking British port
facilities was the right course of action. Hitler correctly
noted that the greatest damage to the British war economy had been done through submarines and air attacks by small
numbers of Focke-Wulf Fw 200 naval aircraft. He ordered attacks to be carried out on those targets which were also
the target of the Kriegsmarine. This meant that British coastal centres and shipping at sea west of Ireland were
the prime targets. Hitler's interest in this strategy forced Göring and Jeschonnek to review the air war against
Britain in January 1941. This led to Göring and Jeschonnek agreeing to Hitler's Directive 23, Directions for operations
against the British War Economy, which was published on 6 February 1941 and gave aerial interdiction of British imports
by sea top priority. This strategy had been recognised before the war, but Operation Eagle Attack and the following
Battle of Britain had got in the way of striking at Britain's sea communications and diverted German air strength
to the campaign against the RAF and its supporting structures. The OKL had always regarded the interdiction of sea
communications as less important than bombing land-based aircraft industries. Directive 23 was the only concession
made by Göring to the Kriegsmarine over the strategic bombing strategy of the Luftwaffe against Britain. Thereafter,
he would refuse to make available any air units to destroy British dockyards, ports, port facilities, or shipping
in dock or at sea, lest the Kriegsmarine gain control of more Luftwaffe units. Raeder's successor, Karl Dönitz,
would, on the intervention of Hitler, gain control of one unit (KG 40), but Göring would soon regain it. Göring's lack of cooperation
was detrimental to the one air strategy with potentially decisive strategic effect on Britain. Instead, he wasted
the aircraft of Fliegerführer Atlantik (Flying Command Atlantic) on bombing mainland Britain rather than on attacks
against convoys. Göring's prestige had been damaged by the defeat in the Battle of Britain, and he wanted to regain
it by subduing Britain by air power alone. He was always reluctant to cooperate with Raeder. Even so, the decision
by OKL to support the strategy in Directive 23 was instigated by two considerations, both of which had little to
do with wanting to destroy Britain's sea communications in conjunction with the Kriegsmarine. First, the difficulty
in estimating the impact of bombing upon war production was becoming apparent, and second, the conclusion British
morale was unlikely to break led OKL to adopt the naval option. The indifference displayed by OKL to Directive 23
was perhaps best demonstrated in operational directives which diluted its effect. They emphasised the core strategic
interest was attacking ports but they insisted on maintaining pressure, or diverting strength, onto industries building
aircraft, anti-aircraft guns, and explosives. Other targets would be considered if the primary ones could not be
attacked because of weather conditions. A further line in the directive stressed the need to inflict the heaviest
losses possible, but also to intensify the air war in order to create the impression an amphibious assault on Britain
was planned for 1941. However, meteorological conditions over Britain were not favourable for flying and prevented
an escalation in air operations. Airfields became water-logged and the 18 Kampfgruppen (bomber groups) of the Luftwaffe's
Kampfgeschwadern (bomber wings) were relocated to Germany for rest and re-equipment. From the German point of view,
March 1941 saw an improvement. The Luftwaffe flew 4,000 sorties that month, including 12 major and three heavy attacks.
The electronic war intensified but the Luftwaffe flew major inland missions only on moonlit nights. Ports were easier
to find and made better targets. To confuse the British, radio silence was observed until the bombs fell. X- and
Y-Gerät beams were placed over false targets and switched only at the last minute. Rapid frequency changes were introduced
for X-Gerät, whose wider band of frequencies and greater tactical flexibility ensured it remained effective at a
time when British selective jamming was degrading the effectiveness of Y-Gerät. The attacks were focused against
western ports in March. These attacks produced some breaks in morale, with civil leaders fleeing the cities before
the offensive reached its height. But the Luftwaffe's effort eased in the last 10 attacks as seven Kampfgruppen moved
to Austria in preparation for the Balkans Campaign in Yugoslavia and Greece. The shortage of bombers caused the OKL
to improvise. Some 50 Junkers Ju 87 Stuka dive-bombers and Jabos (fighter-bombers) were used, officially classed
as Leichte Kampfflugzeuge ("light bombers") and sometimes called Leichte Kesselringe ("Light Kesselrings"). The defences
failed to prevent widespread damage but on some occasions did prevent German bombers concentrating on their targets.
On occasion, only one-third of German bombs hit their targets. The diversion of heavier bombers to the Balkans meant
that the crews and units left behind were asked to fly two or three sorties per night. Bombers were noisy, cold,
and vibrated badly. Added to the tension of the missions, which exhausted and drained crews, tiredness caught up with and killed many. In one incident on 28/29 April, Peter Stahl of KG 30 was flying his 50th mission. He fell asleep
at the controls of his Ju 88 and woke up to discover the entire crew asleep. He roused them, ensured they took oxygen
and Dextro-Energen tablets, then completed the mission. Regardless, the Luftwaffe could still inflict huge damage.
With the German occupation of Western Europe, the intensification of submarine and air attack on Britain's sea communications
was feared by the British. Such an event would have serious consequences on the future course of the war, should
the Germans succeed. Liverpool and its port became an important destination for convoys heading through the Western
Approaches from North America, bringing supplies and materials. The considerable rail network distributed to the
rest of the country. Operations against Liverpool in the Liverpool Blitz were successful. Air attacks sank 39,126
long tons (39,754 t) of shipping, with another 111,601 long tons (113,392 t) damaged. Minister of Home Security Herbert
Morrison was also worried morale was breaking, noting the defeatism expressed by civilians. Other sources point to
half of the port's 144 berths rendered unusable, while cargo unloading capability was reduced by 75%. Roads and railways
were blocked and ships could not leave harbour. On 8 May 1941, 57 ships were destroyed, sunk or damaged amounting
to 80,000 long tons (81,000 t). Around 66,000 houses were destroyed, 77,000 people made homeless, and 1,900 people
killed and 1,450 seriously hurt on one night. Operations against London up until May 1941 could also have a severe
impact on morale. The populace of the port of Hull became 'trekkers', people who underwent a mass exodus from cities
before, during, and after attacks. However, the attacks failed to knock out or damage railways, or port facilities
for long, even in the Port of London, a target of many attacks. The Port of London in particular was an important
target, bringing in one-third of overseas trade. On 13 March, the upper Clyde port of Clydebank near Glasgow was
bombed. All but seven of its 12,000 houses were damaged. Many more ports were attacked. Plymouth was attacked five
times before the end of the month while Belfast, Hull, and Cardiff were hit. Cardiff was bombed on three nights,
Portsmouth centre was devastated by five raids. The rate of civilian housing loss averaged 40,000 people dehoused per week in September 1940. In March 1941, two raids on Plymouth and London dehoused 148,000 people. Still,
while heavily damaged, British ports continued to support war industry and supplies from North America continued
to pass through them while the Royal Navy continued to operate in Plymouth, Southampton, and Portsmouth. Plymouth
in particular, because of its vulnerable position on the south coast and close proximity to German air bases, was
subjected to the heaviest attacks. On 10/11 March, 240 bombers dropped 193 tons of high explosives and 46,000 incendiaries.
Many houses and commercial centres were heavily damaged, the electrical supply was knocked out, and five oil tanks
and two magazines exploded. Nine days later, two waves of 125 and 170 bombers dropped heavy bombs, including 160
tons of high explosive and 32,000 incendiaries. Much of the city centre was destroyed. Damage was inflicted on the
port installations, but many bombs fell on the city itself. On 17 April 346 tons of explosives and 46,000 incendiaries
were dropped from 250 bombers led by KG 26. The damage was considerable, and the Germans also used aerial mines.
Over 2,000 AAA shells were fired, destroying two Ju 88s. By the end of the air campaign over Britain, only eight
percent of the German effort against British ports was made using mines. In the north, substantial efforts were made
against Newcastle-upon-Tyne and Sunderland, which were large ports on the English east coast. On 9 April 1941 Luftflotte
2 dropped 150 tons of high explosives and 50,000 incendiaries from 120 bombers in a five-hour attack. Sewer, rail,
docklands, and electric installations were damaged. In Sunderland on 25 April, Luftflotte 2 sent 60 bombers which
dropped 80 tons of high explosive and 9,000 incendiaries. Much damage was done. A further attack on the Clyde, this
time at Greenock, took place on 6 and 7 May. However, as with the attacks in the south, the Germans failed to prevent
maritime movements or cripple industry in the regions. The last major attack on London was on 10/11 May 1941, on
which the Luftwaffe flew 571 sorties and dropped 800 tonnes of bombs. This caused more than 2,000 fires. 1,436 people
were killed and 1,792 seriously injured, which affected morale badly. Another raid was carried out on 11/12 May 1941.
Westminster Abbey and the Law Courts were damaged, while the Chamber of the House of Commons was destroyed. One-third
of London's streets were impassable. All but one railway station was blocked for several weeks. This raid was
significant, as 63 German fighters were sent with the bombers, indicating the growing effectiveness of RAF night
fighter defences. German air supremacy at night was also now under threat. British night-fighter operations out over
the Channel were proving highly successful. This was not immediately apparent. The Bristol Blenheim F.1 was undergunned,
with just four .303 in (7.7 mm) machine guns which struggled to down the Do 17, Ju 88, or Heinkel He 111. Moreover,
the Blenheim struggled to reach the speed of the German bombers. Since interception relied on visual sighting, a kill was elusive even under a moonlit sky. The Boulton Paul Defiant, despite its
poor performance during daylight engagements, was a much better night fighter. It was faster, able to catch the bombers
and its configuration of four machine guns in a turret could (much like German night fighters in 1943–1945 with Schräge
Musik) engage the unsuspecting German bomber from beneath. Attacks from below offered a larger target, compared to
attacking tail-on, as well as a better chance of not being seen by the bomber (so less chance of evasion), as well
as greater likelihood of detonating its bombload. In subsequent months a steady number of German bombers would fall
to night fighters. Improved aircraft designs were in the offing with the Bristol Beaufighter, then under development.
It would prove formidable, but its development was slow. The Beaufighter had a maximum speed of 320 mph (510 km/h),
an operational ceiling of 26,000 ft (7,900 m) and a climb rate of 2,500 ft (760 m) per minute. Its armament of four
20 mm (0.79 in) Hispano cannon and six .303 in Browning machine guns offered a serious threat to German bombers.
On 19 November, John Cunningham of No. 604 Squadron RAF, flying an AI-equipped Beaufighter, shot down a bomber. It was the first air victory for airborne radar. By April and May 1941, the Luftwaffe was still getting through
to their targets, taking no more than one- to two-percent losses on any given mission. On 19/20 April 1941, in honour
of Hitler's 52nd birthday, 712 bombers hit Plymouth with a record 1,000 tons of bombs. Losses were minimal. In the
following month, 22 German bombers were lost with 13 confirmed to have been shot down by night fighters. On 3/4 May,
nine were shot down in one night. On 10/11 May, London suffered severe damage, but 10 German bombers were downed.
In May 1941, RAF night fighters shot down 38 German bombers. The military effectiveness of bombing varied. The Luftwaffe
dropped around 45,000 short tons (41,000 t) of bombs during the Blitz disrupting production and transport, reducing
food supplies and shaking British morale. It also helped to support the U-boat blockade by sinking some 58,000 long tons (59,000 t) of shipping and damaging another 450,000 long tons (460,000 t). Yet overall British production rose steadily throughout this period, although there was a significant fall during April 1941, probably influenced by the departure of workers for the Easter holidays, according to the British official history. The British official history of war production noted that the greatest impact was upon the supply of components rather than complete equipment. In aircraft
production, the British were denied the opportunity to reach the planned target of 2,500 aircraft in a month, arguably
the greatest achievement of the bombing, as it forced the dispersal of industry. In April 1941, when the targets
were British ports, rifle production fell by 25%, filled-shell production by 4.6%, and small-arms production by 4.5%
overall. The strategic impact on industrial cities was varied; most took from 10–15 days to recover from heavy raids,
although Belfast and Liverpool took longer. The attacks against Birmingham took war industries some three months
to recover fully from. The exhausted population took three weeks to overcome the effects of an attack. The air offensive
against the RAF and British industry failed to have the desired effect. More might have been achieved had the OKL
exploited their enemy's weak spot, the vulnerability of British sea communications. The Allies did so later when
Bomber Command attacked rail communications and the United States Army Air Forces targeted oil, but that would have
required an economic-industrial analysis of which the Luftwaffe was incapable. The OKL instead sought clusters of
targets that suited the latest policy (which changed frequently), and disputes within the leadership were about tactics
rather than strategy. Though militarily ineffective, the Blitz caused enormous damage to Britain's infrastructure
and housing stock. It cost around 41,000 lives, and may have injured another 139,000. The relieved British began
to assess the impact of the Blitz in August 1941, and the RAF Air Staff used the German experience to improve Bomber
Command's offensives. They concluded bombers should strike a single target each night and use more incendiaries because
they had a greater impact on production than high explosives. They also noted regional production was severely disrupted
when city centres were devastated through the loss of administrative offices, utilities and transport. They believed
the Luftwaffe had failed in precision attack, and concluded the German example of area attack using incendiaries
was the way forward for operations over Germany. Some writers claim the Air Staff ignored a critical lesson, however:
British morale did not break. Targeting German morale, as Bomber Command would do, was no more successful. Aviation
strategists dispute that morale was ever a major consideration for Bomber Command. Throughout 1933–39 none of the
16 Western Air Plans drafted mentioned morale as a target. The first three directives in 1940 did not mention civilian
populations or morale in any way. Morale was not mentioned until the ninth wartime directive on 21 September 1940.
The 10th directive in October 1940 mentioned morale by name. However, industrial cities were only to be targeted
if weather denied strikes on Bomber Command's main concern, oil. AOC Bomber Command Arthur Harris did see German
morale as a major objective. However, he did not believe that the morale-collapse could occur without the destruction
of the German economy. The primary goal of Bomber Command's offensives was to destroy the German industrial base
(economic warfare), and in doing so reduce morale. In late 1943, just before the Battle of Berlin, he declared the
power of Bomber Command would enable it to achieve "a state of devastation in which surrender is inevitable", a clear summary of his strategic intentions. A converse popular image arose of the British people in the Second World War: a collection of people locked in national solidarity. This image entered the historiography of the Second World War
in the 1980s and 1990s, especially after the publication of Angus Calder's book The Myth of the Blitz (1991). It
was evoked by both the right and left political factions in Britain during the Falklands War when it was embedded
in a nostalgic narrative in which the Second World War represented aggressive British patriotism successfully defending
democracy. This imagery of people in the Blitz was and is powerfully portrayed in film, radio, newspapers and magazines.
At the time it was a useful propaganda tool for home and foreign consumption. Historians' critical response to this
construction focused on what were seen as over-emphasised claims of righteous nationalism and national unity. In
the Myth of the Blitz, Calder exposed some of the counter-evidence of anti-social and divisive behaviours. What he
saw as the myth—serene national unity—became "historical truth". In particular, class division was most evident.
In the wake of the Coventry Blitz, there was widespread agitation from the Communist Party over the need for bomb-proof
shelters. Many Londoners, in particular, took to using the Underground railway system, without authority, for shelter
and sleeping through the night there until the following morning. So worried were the Government over the sudden
campaign of leaflets and posters distributed by the Communist Party in Coventry and London, that the Police were
sent in to seize their production facilities. The Government, up until November 1940, was opposed to the centralised
organisation of shelter. Home Secretary Sir John Anderson was replaced by Morrison soon afterwards, in the wake of
a Cabinet reshuffle as the dying Neville Chamberlain resigned. Morrison warned that he could not counter the Communist
unrest unless provision of shelters were made. He recognised the right of the public to seize tube stations and authorised
plans to improve their condition and expand them by tunnelling. Still, many British citizens, who had been members
of the Labour Party, itself inert over the issue, turned to the Communist Party. The Communists attempted to blame
the damage and casualties of the Coventry raid on the rich factory owners, big business and landowning interests
and called for a negotiated peace. Though they failed to make a large gain in influence, the membership of the Party
had doubled by June 1941. The "Communist threat" was deemed important enough for Herbert Morrison to order, with
the support of the Cabinet, the stoppage of the Daily Worker and The Week, the Communist newspaper and journal respectively. The brief success of the Communists also played into the hands of the British Union of Fascists (BUF). Anti-Semitic attitudes
became widespread, particularly in London. Rumours that Jewish support was underpinning the Communist surge were
frequent. Rumours that Jews were inflating prices, were responsible for the Black Market, were the first to panic
under attack (even the cause of the panic), and secured the best shelters via underhanded methods, were also widespread.
Moreover, there was also racial antagonism between the small Black, Indian and Jewish communities. However, the feared
race riots did not transpire despite the mixing of different peoples into confined areas. In other cities, class
conflict was more evident. Over a quarter of London's population had left the city by November 1940. Civilians left
for more remote areas of the country. Upsurges in population in south Wales and Gloucester indicated where these displaced people went. Other factors, including the dispersal of industry, may also have played a part. However, resentment of rich self-evacuees and hostile treatment of poor ones were signs of persistent class resentment, although these factors did not appear
to threaten social order. The evacuees totalled 1.4 million, including a high proportion from the
poorest inner-city families. Reception committees were completely unprepared for the condition of some of the children.
Far from displaying the nation's unity in time of war, the scheme backfired, often aggravating class antagonism and
bolstering prejudice about the urban poor. Within four months, 88% of evacuated mothers, 86% of small children, and
43% of school children had been returned home. The lack of bombing in the Phoney War contributed significantly to
the return of people to the cities, but class conflict was not eased a year later when evacuation operations had
to be put into effect again. In recent years a large number of wartime recordings relating to the Blitz have been
made available on audiobooks such as The Blitz, The Home Front and British War Broadcasting. These collections include
period interviews with civilians, servicemen, aircrew, politicians and Civil Defence personnel, as well as Blitz
actuality recordings, news bulletins and public information broadcasts. Notable interviews include Thomas Alderson,
the first recipient of the George Cross, John Cormack, who survived eight days trapped beneath rubble on Clydeside,
and Herbert Morrison's famous "Britain shall not burn" appeal for more fireguards in December 1940.
The Endangered Species Act of 1973 (ESA; 16 U.S.C. § 1531 et seq.) is one of the dozens of United States environmental laws passed in the 1970s, and serves as the enacting legislation to carry out the provisions outlined in the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). The ESA was signed into law by President Richard Nixon on December 28, 1973. It was designed to protect critically imperiled species from extinction as a "consequence of economic growth and development untempered by adequate concern and conservation." The U.S. Supreme
Court found that "the plain intent of Congress in enacting" the ESA "was to halt and reverse the trend toward species
extinction, whatever the cost." The Act is administered by two federal agencies, the United States Fish and Wildlife
Service (FWS) and the National Oceanic and Atmospheric Administration (NOAA). One species in particular received
widespread attention—the whooping crane. The species' historical range extended from central Canada south to Mexico,
and from Utah to the Atlantic coast. Unregulated hunting and habitat loss contributed to a steady decline in the
whooping crane population until, by 1890, it had disappeared from its primary breeding range in the north central
United States. It would be another eight years before the first national law regulating wildlife commerce was signed,
and another two years before the first version of the endangered species act was passed. The whooping crane population
by 1941 was estimated at only about 16 birds still in the wild. The Lacey Act of 1900 was the first federal law that
regulated commercial animal markets. It prohibited interstate commerce of animals killed in violation of state game
laws, and covered all fish and wildlife and their parts or products, as well as plants. Other legislation followed,
including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales,
and the Bald Eagle Protection Act of 1940. These later laws had a low cost to society, as the species were relatively rare, and little opposition was raised. The Endangered Species Preservation Act of 1966 authorized the Secretary of the Interior to list endangered domestic fish and wildlife and allowed the United States Fish and Wildlife Service to spend up to $15 million per year to buy habitats for listed species. It also directed federal land agencies to preserve habitat on their lands. The Act also consolidated
and even expanded authority for the Secretary of the Interior to manage and administer the National Wildlife Refuge
System. Other public agencies were encouraged, but not required, to protect species. The act did not address the
commerce in endangered species and parts. The first list of endangered species, issued in 1967, is referred to as the "Class of '67" in The Endangered Species Act at Thirty, Volume 1, which concludes that habitat destruction, the biggest threat to those 78 species, remains the chief threat to the currently listed species. It included only vertebrates because the Department of
Interior's definition of "fish and wildlife" was limited to vertebrates. However, with time, researchers noticed
that the animals on the endangered species list still were not getting enough protection, thus further threatening
their extinction. The endangered species program was expanded by the Endangered Species Conservation Act of 1969 (P.L. 91-135), passed in December 1969, which amended the original law to provide additional
protection to species in danger of "worldwide extinction" by prohibiting their importation and subsequent sale in
the United States. It expanded the Lacey Act's ban on interstate commerce to include mammals, reptiles, amphibians,
mollusks and crustaceans. Reptiles were added mainly to reduce the rampant poaching of alligators and crocodiles.
This law was the first time that invertebrates were included for protection. President Richard Nixon declared current
species conservation efforts to be inadequate and called on the 93rd United States Congress to pass comprehensive
endangered species legislation. Congress responded with a completely rewritten law, the Endangered Species Act of
1973, which was signed by Nixon on December 28, 1973 (Pub.L. 93–205). It was written by a team of lawyers and scientists, including Dr. Russell E. Train, the first appointed head of the Council on Environmental Quality (CEQ), an outgrowth of the National Environmental Policy Act of 1969 (NEPA). Dr. Train was assisted by a core group of staffers, including Dr. Earl Baysinger at EPA (currently Assistant Chief, Office of Endangered Species and International Activities), Dick Gutting (a U.S. Commerce Department lawyer who had joined NOAA the previous year, 1972), and Dr. Gerard A. "Jerry" Bertrand, a marine biologist (Ph.D., Oregon State University) by training, who had transferred from his post as Scientific Adviser to the U.S. Army Corps of Engineers, office of the Commandant of the Corps, to join the newly formed White House office. The staff, under Dr. Train's leadership, incorporated dozens of new principles and ideas into the landmark legislation, crafting a document that completely changed the direction of environmental conservation in the United States. Dr. Bertrand is credited with writing the most challenged section of the Act, the "takings" clause, Section 2. A species can be listed in two ways. The United States Fish and Wildlife Service (FWS) or NOAA
Fisheries (also called the National Marine Fisheries Service) can directly list a species through its candidate assessment
program, or an individual or organizational petition may request that the FWS or NMFS list a species. A "species"
under the act can be a true taxonomic species, a subspecies, or in the case of vertebrates, a "distinct population
segment." The procedures are the same for both routes, except that with a person/organization petition there is a 90-day screening period. During the listing process, economic factors cannot be considered; the decision must be "based solely on the best scientific and commercial data available." The 1982 amendment to the ESA added the word "solely" to prevent
any consideration other than the biological status of the species. Congress rejected President Ronald Reagan's Executive Order 12291, which required economic analysis of all government agency actions. The House committee's statement was
"that economic considerations have no relevance to determinations regarding the status of species." Public notice
is given through legal notices in newspapers, and communicated to state and county agencies within the species' area.
Foreign nations may also receive notice of a listing. A public hearing is mandatory if any person has requested one
within 45 days of the published notice. "The purpose of the notice and comment requirement is to provide for meaningful
public participation in the rulemaking process." summarized the Ninth Circuit court in the case of Idaho Farm Bureau
Federation v. Babbitt. The provision of the law in Section 4 that establishes critical habitat is a regulatory link
between habitat protection and recovery goals, requiring the identification and protection of all lands, water and
air necessary to recover endangered species. To determine what exactly is critical habitat, the needs of open space
for individual and population growth, food, water, light or other nutritional requirements, breeding sites, seed
germination and dispersal needs, and lack of disturbances are considered. All federal agencies are prohibited from
authorizing, funding or carrying out actions that "destroy or adversely modify" critical habitats (Section 7(a)(2)).
While the regulatory aspect of critical habitat does not apply directly to private and other non-federal landowners,
large-scale development, logging and mining projects on private and state land typically require a federal permit
and thus become subject to critical habitat regulations. Outside or in parallel with regulatory processes, critical
habitats also focus and encourage voluntary actions such as land purchases, grant making, restoration, and establishment
of reserves. The ESA requires that critical habitat be designated at the time of or within one year of a species
being placed on the endangered list. In practice, most designations occur several years after listing. Between 1978
and 1986 the FWS regularly designated critical habitat. In 1986 the Reagan Administration issued a regulation limiting
the protective status of critical habitat. As a result, few critical habitats were designated between 1986 and the
late 1990s. In the late 1990s and early 2000s, a series of court orders invalidated the Reagan regulations and forced
the FWS and NMFS to designate several hundred critical habitats, especially in Hawaii, California and other western
states. Midwest and Eastern states received less critical habitat, primarily on rivers and coastlines. As of December 2006, the Reagan regulation had not yet been replaced, though its use has been suspended. Nonetheless, the agencies
have generally changed course and since about 2005 have tried to designate critical habitat at or near the time of
listing. Fish and Wildlife Service (FWS) and National Marine Fisheries Service (NMFS) are required to create an Endangered
Species Recovery Plan outlining the goals, tasks required, likely costs, and estimated timeline to recover endangered
species (i.e., increase their numbers and improve their management to the point where they can be removed from the
endangered list). The ESA does not specify when a recovery plan must be completed. The FWS has a policy specifying
completion within three years of the species being listed, but the average time to completion is approximately six
years. The annual rate of recovery plan completion increased steadily from the Ford administration (4) through Carter
(9), Reagan (30), Bush I (44), and Clinton (72), but declined under Bush II (16 per year as of 9/1/06). When a federal agency proposes an action, the question to be answered is whether a listed species will be harmed by that action and, if so, how the harm can be minimized.
If harm cannot be avoided, the project agency can seek an exemption from the Endangered Species Committee, an ad
hoc panel composed of members from the executive branch and at least one appointee from the state where the project
is to occur. Five of the seven committee members must vote for the exemption to allow taking (to harass, harm, pursue,
hunt, shoot, wound, kill, trap, capture, or collect, or significant habitat modification, or to attempt to engage
in any such conduct) of listed species. Long before the exemption is considered by the Endangered Species Committee, the action agency (for example, the Forest Service in the case of a timber harvest) and either the FWS or the NMFS will have consulted on the biological implications of the proposed action. The consultation can be informal, to determine if harm may occur, and then formal if the harm is believed
to be likely. The questions to be answered in these consultations are whether the species will be harmed, whether
the habitat will be harmed and if the action will aid or hinder the recovery of the listed species. There have been
six instances as of 2009 in which the exemption process was initiated. Of these six, one was granted, one was partially
granted, one was denied and three were withdrawn. Donald Baur, in The Endangered Species Act: Law, Policy, and Perspectives, concluded, "... the exemption provision is basically a nonfactor in the administration of the ESA. A major reason,
of course, is that so few consultations result in jeopardy opinions, and those that do almost always result in the
identification of reasonable and prudent alternatives to avoid jeopardy." More than half of habitat for listed species
is on non-federal property, owned by citizens, states, local governments, tribal governments and private organizations.
Before the law was amended in 1982, a listed species could be taken only for scientific or research purposes. The
amendment created a permit process, called a Habitat Conservation Plan (HCP), to circumvent the take prohibition, giving incentives to non-federal land managers and private landowners to help protect listed and unlisted species while allowing economic development that may harm ("take") the species. The person or organization submits an HCP
and if approved by the agency (FWS or NMFS), will be issued an Incidental Take Permit (ITP) which allows a certain
number of "takes" of the listed species. The permit may be revoked at any time and can allow incidental takes for
varying amounts of time. For instance, the San Bruno Habitat Conservation Plan/Incidental Take Permit is good for 30 years, while the Wal-Mart store (in Florida) permit expires after one year. Because the permit is issued by a federal agency to a private party, it is a federal action, which means other federal laws can apply, such as the National
Environmental Policy Act or NEPA. A notice of the permit application action is published in the Federal Register
and a public comment period of 30 to 90 days begins. The US Congress was urged to create the exemption by proponents
of a conservation plan on San Bruno Mountain, California that was drafted in the early 1980s and is the first HCP
in the nation. In the conference report on the 1982 amendments, Congress specified that it intended the San Bruno
plan to act "as a model" for future conservation plans developed under the incidental take exemption provision and
that "the adequacy of similar conservation plans should be measured against the San Bruno plan". Congress further
noted that the San Bruno plan was based on "an independent exhaustive biological study" and protected at least 87%
of the habitat of the listed butterflies that led to the development of the HCP. Growing scientific recognition of
the role of private lands for endangered species recovery and the landmark 1981 court decision in Palila v. Hawaii
Department of Land and Natural Resources both contributed to making Habitat Conservation Plans/Incidental Take Permits
"a major force for wildlife conservation and a major headache to the development community", wrote Robert D. Thornton
in the 1991 Environmental Law article, Searching for Consensus and Predictability: Habitat Conservation Planning
under the Endangered Species Act of 1973. The "No Surprises" rule is meant to protect the landowner if "unforeseen
circumstances" occur which make the landowner's efforts to prevent or mitigate harm to the species fall short. The
"No Surprises" policy may be the most controversial of the recent reforms of the law, because once an Incidental
Take Permit is granted, the Fish and Wildlife Service (FWS) loses much ability to further protect a species if the
mitigation measures by the landowner prove insufficient. The landowner or permittee would not be required to set
aside additional land or pay more in conservation money. The federal government would have to pay for additional
protection measures. The "Safe Harbor" agreement is a voluntary agreement between the private landowner and FWS.
The landowner agrees to alter the property to benefit or even attract a listed or proposed species in exchange for
assurances that the FWS will permit future "takes" above a pre-determined level. The policy relies on the "enhancement
of survival" provision of §1539(a)(1)(A). A landowner can have either a "Safe Harbor" agreement or an Incidental
Take Permit, or both. The policy was developed by the Clinton Administration in 1999. The Candidate Conservation
Agreement is closely related to the "Safe Harbor" agreement; the main difference is that Candidate Conservation
Agreements With Assurances (CCA) are meant to protect unlisted species by providing incentives to private landowners
and land managing agencies to restore, enhance or maintain habitat of unlisted species which are declining and have
the potential to become threatened or endangered if critical habitat is not protected. The FWS will then assure that
if, in the future the unlisted species becomes listed, the landowner will not be required to do more than already
agreed upon in the CCA. Two examples of animal species recently delisted are: the Virginia northern flying squirrel
(subspecies) in August 2008, which had been listed since 1985, and the gray wolf (Northern Rocky Mountain DPS).
On April 15, 2011, President Obama signed the Department of Defense and Full-Year Appropriations Act of 2011. A section
of that Appropriations Act directed the Secretary of the Interior to reissue within 60 days of enactment the final
rule published on April 2, 2009, that identified the Northern Rocky Mountain population of gray wolf (Canis lupus)
as a distinct population segment (DPS) and to revise the List of Endangered and Threatened Wildlife by removing most
of the gray wolves in the DPS. As of September 2012, fifty-six species have been delisted: twenty-eight due to recovery,
ten due to extinction (seven of which are believed to have been extinct prior to being listed), ten due to changes
in taxonomic classification practices, six due to discovery of new populations, one due to an error in the listing
rule, and one due to an amendment to the Endangered Species Act specifically requiring the species delisting. Twenty-five
others have been downlisted from "endangered" to "threatened" status. Opponents of the Endangered Species Act argue
that with over 2,000 endangered species listed and only 28 delisted due to recovery, a success rate of roughly 1% over
nearly three decades shows that the Act's methods need serious reform if they are actually to help endangered
animals and plants. Others argue that the ESA may encourage preemptive habitat destruction by landowners who fear
losing the use of their land because of the presence of an endangered species, a practice known colloquially as "Shoot, Shovel
and Shut Up." One example of such perverse incentives is the case of a forest owner who, in response to ESA listing
of the red-cockaded woodpecker, increased harvesting and shortened the age at which he harvests his trees to ensure
that they do not become old enough to become suitable habitat. While no studies have shown that the Act's negative
effects, in total, exceed the positive effects, many economists believe that finding a way to reduce such perverse
incentives would lead to more effective protection of endangered species. According to research published in 1999
by Alan Green and the Center for Public Integrity (CPI), loopholes in the ESA are commonly exploited in the exotic
pet trade. Although the legislation prohibits interstate and foreign transactions for listed species, no provisions
are made for in-state commerce, allowing these animals to be sold to roadside zoos and private collectors. Additionally,
the ESA allows listed species to be shipped across state lines as long as they are not sold. According to Green and
the CPI, this allows dealers to "donate" listed species through supposed "breeding loans" to anyone, and in return
they can legally receive a reciprocal monetary "donation" from the receiving party. Furthermore, an interview with
an endangered species specialist at the US Fish and Wildlife Service revealed that the agency does not have sufficient
staff to perform undercover investigations, which would catch these false "donations" and other mislabeled transactions.
Green and the CPI further noted another exploit of the ESA in their discussion of the critically endangered cotton-top
tamarin (Saguinus oedipus). Not only had they found documentation that 151 of these primates had inadvertently made
their way from the Harvard-affiliated New England Regional Primate Research Center into the exotic pet trade through
the aforementioned loophole, but in October 1976, over 800 cotton-top tamarins were imported into the United States
in order to beat the official listing of the species under the ESA. Section 6 of the Endangered Species Act provided
funding for development of programs for management of threatened and endangered species by state wildlife agencies.
Subsequently, lists of endangered and threatened species within their boundaries have been prepared by each state.
These state lists often include species which are considered endangered or threatened within a specific state but
not within all states, and which therefore are not included on the national list of endangered and threatened species.
Examples include Florida, Minnesota, Maine, and California. A reward will be paid to any person who furnishes information
which leads to an arrest, conviction, or revocation of a license, so long as they are not a local, state, or federal
employee in the performance of official duties. The Secretary may also pay the reasonable and necessary costs incurred
for the care of fish, wildlife, and plants pending the disposition of any civil or criminal proceeding arising from
the violation. If the balance ever exceeds $500,000, the Secretary of the Treasury is required to deposit an amount equal to the excess
into the cooperative endangered species conservation fund.
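The delisting tally and the critics' "1% success rate" quoted above are simple arithmetic, and can be cross-checked with a short sketch; the category counts below come from the September 2012 figures in the text, not from an official dataset.

```python
# Cross-check of the September 2012 delisting tally quoted above.
delisting_reasons = {
    "recovery": 28,
    "extinction": 10,                 # 7 believed extinct before listing
    "taxonomic revision": 10,
    "new populations discovered": 6,
    "error in listing rule": 1,
    "ESA amendment requiring delisting": 1,
}

total = sum(delisting_reasons.values())
print(total)  # 56, matching the stated number of delisted species

# The success rate argued by opponents: recoveries vs. "over 2,000" listed species.
success_rate = delisting_reasons["recovery"] / 2000
print(f"{success_rate:.1%}")  # 1.4%, i.e. on the order of the quoted 1%
```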
Vacuum is space void of matter. The word stems from the Latin adjective vacuus for "vacant" or "void". An approximation to
such vacuum is a region with a gaseous pressure much less than atmospheric pressure. Physicists often discuss ideal
test results that would occur in a perfect vacuum, which they sometimes simply call "vacuum" or free space, and use
the term partial vacuum to refer to an actual imperfect vacuum as one might have in a laboratory or in space. In
engineering and applied physics on the other hand, vacuum refers to any space in which the pressure is lower than
atmospheric pressure. The Latin term in vacuo is used to describe an object as being in what would otherwise be a
vacuum. The quality of a partial vacuum refers to how closely it approaches a perfect vacuum. Other things equal,
lower gas pressure means higher-quality vacuum. For example, a typical vacuum cleaner produces enough suction to
reduce air pressure by around 20%. Much higher-quality vacuums are possible. Ultra-high vacuum chambers, common in
chemistry, physics, and engineering, operate below one trillionth (10−12) of atmospheric pressure (100 nPa), and
can reach around 100 particles/cm3. Outer space is an even higher-quality vacuum, with the equivalent of just a few
hydrogen atoms per cubic meter on average. According to modern understanding, even if all matter could be removed
from a volume, it would still not be "empty" due to vacuum fluctuations, dark energy, transiting gamma rays, cosmic
rays, neutrinos, and other phenomena in quantum physics. In 19th-century electromagnetism, the vacuum was
thought to be filled with a medium called aether. In modern particle physics, the vacuum state is considered the
ground state of matter. Historically, there has been much dispute over whether such a thing as a vacuum can exist.
Ancient Greek philosophers debated the existence of a vacuum, or void, in the context of atomism, which posited void
and atom as the fundamental explanatory elements of physics. Following Plato, even the abstract concept of a featureless
void faced considerable skepticism: it could not be apprehended by the senses, it could not, itself, provide additional
explanatory power beyond the physical volume with which it was commensurate and, by definition, it was quite literally
nothing at all, which cannot rightly be said to exist. Aristotle believed that no void could occur naturally, because
the denser surrounding material continuum would immediately fill any incipient rarity that might give rise to a void.
In his Physics, book IV, Aristotle offered numerous arguments against the void: for example, that motion through
a medium which offered no impediment could continue ad infinitum, there being no reason that something would come
to rest anywhere in particular. Although Lucretius argued for the existence of vacuum in the first century BC and
Hero of Alexandria tried unsuccessfully to create an artificial vacuum in the first century AD, it was European scholars
such as Roger Bacon, Blasius of Parma and Walter Burley in the 13th and 14th century who focused considerable attention
on these issues. Eventually following Stoic physics in this instance, scholars from the 14th century onward increasingly
departed from the Aristotelian perspective in favor of a supernatural void beyond the confines of the cosmos itself,
a conclusion widely acknowledged by the 17th century, which helped to segregate natural and theological concerns.
Rapid decompression can be much more dangerous than vacuum exposure itself. Even if the victim does not hold his
or her breath, venting through the windpipe may be too slow to prevent the fatal rupture of the delicate alveoli
of the lungs. Eardrums and sinuses may be ruptured by rapid decompression, soft tissues may bruise and seep blood,
and the stress of shock will accelerate oxygen consumption leading to hypoxia. Injuries caused by rapid decompression
are called barotrauma. A pressure drop of 13 kPa (100 Torr), which produces no symptoms if it is gradual, may be
fatal if it occurs suddenly. Almost two thousand years after Plato, René Descartes also proposed a geometrically
based alternative theory of atomism, without the problematic nothing–everything dichotomy of void and atom. Although
Descartes agreed with the contemporary position, that a vacuum does not occur in nature, the success of his namesake
coordinate system and more implicitly, the spatial–corporeal component of his metaphysics would come to define the
philosophically modern notion of empty space as a quantified extension of volume. By the ancient definition however,
directional information and magnitude were conceptually distinct. With the acquiescence of Cartesian mechanical philosophy
to the "brute fact" of action at a distance, and at length, its successful reification by force fields and ever more
sophisticated geometric structure, the anachronism of empty space widened until "a seething ferment" of quantum activity
in the 20th century filled the vacuum with a virtual pleroma. In 1930, Paul Dirac proposed a model of the vacuum
as an infinite sea of particles possessing negative energy, called the Dirac sea. This theory helped refine the predictions
of his earlier formulated Dirac equation, and successfully predicted the existence of the positron, confirmed two
years later. Werner Heisenberg's uncertainty principle, formulated in 1927, predicts a fundamental limit within which
instantaneous position and momentum, or energy and time can be measured. This has far reaching consequences on the
"emptiness" of space between particles. In the late 20th century, so-called virtual particles that arise spontaneously
from empty space were confirmed. In general relativity, a vanishing stress-energy tensor implies, through Einstein
field equations, the vanishing of all the components of the Ricci tensor. Vacuum does not mean that the curvature
of space-time is necessarily flat: the gravitational field can still produce curvature in a vacuum in the form of
tidal forces and gravitational waves (technically, these phenomena are the components of the Weyl tensor). The black
hole (with zero electric charge) is an elegant example of a region completely "filled" with vacuum, but still showing
a strong curvature. Although near-Earth space meets the definition of outer space, the atmospheric density within the first
few hundred kilometers above the Kármán line is still sufficient to produce significant drag on satellites. Most
artificial satellites operate in this region called low Earth orbit and must fire their engines every few days to
maintain orbit. The drag here is low enough that it could theoretically be overcome by radiation
pressure on solar sails, a proposed propulsion system for interplanetary travel. Planets are too
massive for their trajectories to be significantly affected by these forces, although their atmospheres are eroded
by the solar winds. In the medieval Middle Eastern world, the physicist and Islamic scholar, Al-Farabi (Alpharabius,
872–950), conducted a small experiment concerning the existence of vacuum, in which he investigated handheld plungers
in water. He concluded that air's volume can expand to fill available space, and he suggested
that the concept of perfect vacuum was incoherent. However, according to Nader El-Bizri, the physicist Ibn al-Haytham
(Alhazen, 965–1039) and the Mu'tazili theologians disagreed with Aristotle and Al-Farabi, and they supported the
existence of a void. Using geometry, Ibn al-Haytham mathematically demonstrated that place (al-makan) is the imagined
three-dimensional void between the inner surfaces of a containing body. According to Ahmad Dallal, Abū Rayhān al-Bīrūnī
also states that "there is no observable evidence that rules out the possibility of vacuum". The suction pump later
appeared in Europe from the 15th century. Medieval thought experiments into the idea of a vacuum considered whether
a vacuum was present, if only for an instant, between two flat plates when they were rapidly separated. There was
much discussion of whether the air moved in quickly enough as the plates were separated, or, as Walter Burley postulated,
whether a 'celestial agent' prevented the vacuum arising. The commonly held view that nature abhorred a vacuum was
called horror vacui. Speculation that even God could not create a vacuum if he wanted to was rejected
by the 1277 Paris condemnations of Bishop Etienne Tempier, which required there to be no restrictions on
the powers of God, which led to the conclusion that God could create a vacuum if he so wished. Jean Buridan reported
in the 14th century that teams of ten horses could not pull open bellows when the port was sealed. In 1654, Otto
von Guericke invented the first vacuum pump and conducted his famous Magdeburg hemispheres experiment, showing that
teams of horses could not separate two hemispheres from which the air had been partially evacuated. Robert Boyle
improved Guericke's design and with the help of Robert Hooke further developed vacuum pump technology. Thereafter,
research into the partial vacuum lapsed until 1850 when August Toepler invented the Toepler Pump and Heinrich Geissler
invented the mercury displacement pump in 1855, achieving a partial vacuum of about 10 Pa (0.1 Torr). A number of
electrical properties become observable at this vacuum level, which renewed interest in further research. While outer
space provides the most rarefied example of a naturally occurring partial vacuum, the heavens were originally thought
to be seamlessly filled by a rigid indestructible material called aether. Borrowing somewhat from the pneuma of Stoic
physics, aether came to be regarded as the rarefied air from which it took its name, (see Aether (mythology)). Early
theories of light posited a ubiquitous terrestrial and celestial medium through which light propagated. Additionally,
the concept informed Isaac Newton's explanations of both refraction and of radiant heat. 19th century experiments
into this luminiferous aether attempted to detect a minute drag on the Earth's orbit. While the Earth does, in fact,
move through a relatively dense medium in comparison to that of interstellar space, the drag is so minuscule that
it could not be detected. In 1912, astronomer Henry Pickering commented: "While the interstellar absorbing medium
may be simply the ether, [it] is characteristic of a gas, and free gaseous molecules are certainly there". The quality
of a vacuum is indicated by the amount of matter remaining in the system, so that a high quality vacuum is one with
very little matter left in it. Vacuum is primarily measured by its absolute pressure, but a complete characterization
requires further parameters, such as temperature and chemical composition. One of the most important parameters is
the mean free path (MFP) of residual gases, which indicates the average distance that molecules will travel between
collisions with each other. As the gas density decreases, the MFP increases, and when the MFP is longer than the
chamber, pump, spacecraft, or other objects present, the continuum assumptions of fluid mechanics do not apply. This
vacuum state is called high vacuum, and the study of fluid flows in this regime is called particle gas dynamics.
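The inverse relation between pressure and MFP can be sketched with the standard kinetic-theory estimate λ = kT / (√2 · π · d² · p). The effective molecular diameter for air used below is an assumed textbook-style value, not a figure from this article; the sketch only aims to reproduce the orders of magnitude involved.

```python
import math

def mean_free_path(pressure_pa, temperature_k=293.0, diameter_m=3.7e-10):
    """Kinetic-theory estimate of the mean free path of a gas.

    diameter_m is an assumed effective molecular diameter for air.
    """
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temperature_k / (math.sqrt(2) * math.pi * diameter_m**2 * pressure_pa)

# At atmospheric pressure the MFP is tens of nanometres...
print(mean_free_path(101_325))   # ~6.6e-8 m, on the order of 70 nm
# ...but at 100 mPa it grows to everyday length scales.
print(mean_free_path(0.1))       # ~6.7e-2 m, on the order of 100 mm
```

Because λ scales as 1/p, every factor-of-ten drop in pressure stretches the MFP tenfold, which is why the continuum picture fails once chamber dimensions are exceeded.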
The MFP of air at atmospheric pressure is very short, 70 nm, but at 100 mPa (~1×10−3 Torr) the
MFP of room temperature air is roughly 100 mm, which is on the order of everyday objects such as vacuum tubes. The
Crookes radiometer turns when the MFP is larger than the size of the vanes. The SI unit of pressure is the pascal
(symbol Pa), but vacuum is often measured in torrs, named for Torricelli, an early Italian physicist (1608–1647).
A torr is equal to the displacement of a millimeter of mercury (mmHg) in a manometer with 1 torr equaling 133.3223684
pascals above absolute zero pressure. Vacuum is often also measured on the barometric scale or as a percentage of
atmospheric pressure in bars or atmospheres. Low vacuum is often measured in millimeters of mercury (mmHg) or pascals
(Pa) below standard atmospheric pressure. "Below atmospheric" means that the absolute pressure is equal to the current
atmospheric pressure minus the reported vacuum reading. Hydrostatic gauges (such as the mercury column manometer) consist of a vertical column of liquid
in a tube whose ends are exposed to different pressures. The column will rise or fall until its weight is in equilibrium
with the pressure differential between the two ends of the tube. The simplest design is a closed-end U-shaped tube,
one side of which is connected to the region of interest. Any fluid can be used, but mercury is preferred for its
high density and low vapour pressure. Simple hydrostatic gauges can measure pressures ranging from 1 torr (100 Pa)
to above atmospheric. An important variation is the McLeod gauge which isolates a known volume of vacuum and compresses
it to multiply the height variation of the liquid column. The McLeod gauge can measure vacuums as high as 10−6 torr
(0.1 mPa), which is the lowest direct measurement of pressure that is possible with current technology. Other vacuum
gauges can measure lower pressures, but only indirectly by measurement of other pressure-controlled properties. These
indirect measurements must be calibrated via a direct measurement, most commonly a McLeod gauge. Thermal conductivity
gauges rely on the fact that the ability of a gas to conduct heat decreases with pressure. In this type of gauge,
a wire filament is heated by running current through it. A thermocouple or Resistance Temperature Detector (RTD)
can then be used to measure the temperature of the filament. This temperature is dependent on the rate at which the
filament loses heat to the surrounding gas, and therefore on the thermal conductivity. A common variant is the Pirani
gauge which uses a single platinum filament as both the heated element and RTD. These gauges are accurate from 10
torr to 10−3 torr, but they are sensitive to the chemical composition of the gases being measured. Ion gauges are
used in ultrahigh vacuum. They come in two types: hot cathode and cold cathode. In the hot cathode version an electrically
heated filament produces an electron beam. The electrons travel through the gauge and ionize gas molecules around
them. The resulting ions are collected at a negative electrode. The current depends on the number of ions, which
depends on the pressure in the gauge. Hot cathode gauges are accurate from 10−3 torr to 10−10 torr. The principle
behind the cold cathode version is the same, except that electrons are produced by a high-voltage
electrical discharge. Cold cathode gauges are accurate from 10−2 torr to 10−9 torr. Ionization gauge calibration
is very sensitive to construction geometry, chemical composition of gases being measured, corrosion and surface deposits.
Their calibration can be invalidated by activation at atmospheric pressure or low vacuum. The composition of gases
at high vacuums will usually be unpredictable, so a mass spectrometer must be used in conjunction with the ionization
gauge for accurate measurement. Cold or oxygen-rich atmospheres can sustain life at pressures much lower than atmospheric,
as long as the density of oxygen is similar to that of standard sea-level atmosphere. The colder air temperatures
found at altitudes of up to 3 km generally compensate for the lower pressures there. Above this altitude, oxygen
enrichment is necessary to prevent altitude sickness in humans who have not undergone prior acclimatization, and spacesuits
are necessary to prevent ebullism above 19 km. Most spacesuits use only 20 kPa (150 Torr) of pure oxygen. This pressure
is high enough to prevent ebullism, but decompression sickness and gas embolisms can still occur if decompression
rates are not managed. Humans and animals exposed to vacuum will lose consciousness after a few seconds and die of
hypoxia within minutes, but the symptoms are not nearly as graphic as commonly depicted in media and popular culture.
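The 20 kPa pure-oxygen spacesuit figure quoted above can be compared with the oxygen partial pressure of ordinary sea-level air. The 21% oxygen fraction used here is the standard atmospheric composition, an assumption not stated in the text.

```python
SEA_LEVEL_PRESSURE_PA = 101_325
O2_FRACTION = 0.21  # approximate oxygen fraction of air (assumed standard value)

sea_level_o2 = SEA_LEVEL_PRESSURE_PA * O2_FRACTION
print(round(sea_level_o2))  # ~21278 Pa of oxygen at sea level

# A suit at 20 kPa of *pure* oxygen therefore delivers roughly the same
# oxygen partial pressure as ordinary sea-level air.
suit_o2 = 20_000
print(suit_o2 / sea_level_o2)  # ~0.94, within a few percent of sea level
```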
The reduction in pressure lowers the temperature at which blood and other body fluids boil, but the elastic pressure
of blood vessels ensures that this boiling point remains above the internal body temperature of 37 °C. Although the
blood will not boil, the formation of gas bubbles in bodily fluids at reduced pressures, known as ebullism, is still
a concern. The gas may bloat the body to twice its normal size and slow circulation, but tissues are elastic and
porous enough to prevent rupture. Swelling and ebullism can be restrained by containment in a flight suit. Shuttle
astronauts wore a fitted elastic garment called the Crew Altitude Protection Suit (CAPS) which prevents ebullism
at pressures as low as 2 kPa (15 Torr). Rapid boiling will cool the skin and create frost, particularly in the mouth,
but this is not a significant hazard. In ultra high vacuum systems, some very "odd" leakage paths and outgassing
sources must be considered. The water absorption of aluminium and palladium becomes an unacceptable source of outgassing,
and even the adsorptivity of hard metals such as stainless steel or titanium must be considered. Some oils and greases
will boil off in extreme vacuums. The permeability of the metallic chamber walls may have to be considered, and the
grain direction of the metallic flanges should be parallel to the flange face. In quantum mechanics and quantum field
theory, the vacuum is defined as the state (that is, the solution to the equations of the theory) with the lowest
possible energy (the ground state of the Hilbert space). In quantum electrodynamics this vacuum is referred to as
'QED vacuum' to distinguish it from the vacuum of quantum chromodynamics, denoted as QCD vacuum. QED vacuum is a
state with no matter particles (hence the name), and also no photons. As described above, this state is impossible
to achieve experimentally. (Even if every matter particle could somehow be removed from a volume, it would be impossible
to eliminate all the blackbody photons.) Nonetheless, it provides a good model for realizable vacuum, and agrees
with a number of experimental observations as described next. QED vacuum has interesting and complex properties.
In QED vacuum, the electric and magnetic fields have zero average values, but their variances are not zero. As a
result, QED vacuum contains vacuum fluctuations (virtual particles that hop into and out of existence), and a finite
energy called vacuum energy. Vacuum fluctuations are an essential and ubiquitous part of quantum field theory. Some
experimentally verified effects of vacuum fluctuations include spontaneous emission and the Lamb shift. Coulomb's
law and the electric potential in vacuum near an electric charge are modified. Stars, planets, and moons keep their
atmospheres by gravitational attraction, and as such, atmospheres have no clearly delineated boundary: the density
of atmospheric gas simply decreases with distance from the object. The Earth's atmospheric pressure drops to about
3.2×10−2 Pa at 100 kilometres (62 mi) of altitude, the Kármán line, which is a common definition
of the boundary with outer space. Beyond this line, isotropic gas pressure rapidly becomes insignificant when compared
to radiation pressure from the Sun and the dynamic pressure of the solar winds, so the definition of pressure becomes
difficult to interpret. The thermosphere in this range has large gradients of pressure, temperature and composition,
and varies greatly due to space weather. Astrophysicists prefer to use number density to describe these environments,
in units of particles per cubic centimetre. Vacuum is useful in a variety of processes and devices. Its first widespread
use was in the incandescent light bulb to protect the filament from chemical degradation. The chemical inertness
produced by a vacuum is also useful for electron beam welding, cold welding, vacuum packing and vacuum frying. Ultra-high
vacuum is used in the study of atomically clean substrates, as only a very good vacuum preserves atomic-scale clean
surfaces for a reasonably long time (on the order of minutes to days). High to ultra-high vacuum removes the obstruction
of air, allowing particle beams to deposit or remove materials without contamination. This is the principle behind
chemical vapor deposition, physical vapor deposition, and dry etching which are essential to the fabrication of semiconductors
and optical coatings, and to surface science. The reduction of convection provides the thermal insulation of thermos
bottles. Deep vacuum lowers the boiling point of liquids and promotes low temperature outgassing which is used in
freeze drying, adhesive preparation, distillation, metallurgy, and process purging. The electrical properties of
vacuum make electron microscopes and vacuum tubes possible, including cathode ray tubes. The elimination of air friction
is useful for flywheel energy storage and ultracentrifuges. Manifold vacuum can be used to drive accessories on automobiles.
The best-known application is the vacuum servo, used to provide power assistance for the brakes. Obsolete applications
include vacuum-driven windscreen wipers and Autovac fuel pumps. Some aircraft instruments (Attitude Indicator (AI)
and the Heading Indicator (HI)) are typically vacuum-powered, as protection against loss of all (electrically powered)
instruments, since early aircraft often did not have electrical systems, and since there are two readily available
sources of vacuum on a moving aircraft—the engine and an external venturi. Vacuum induction melting uses electromagnetic
induction within a vacuum. Evaporation and sublimation into a vacuum is called outgassing. All materials, solid or
liquid, have a small vapour pressure, and their outgassing becomes important when the vacuum pressure falls below
this vapour pressure. In man-made systems, outgassing has the same effect as a leak and can limit the achievable
vacuum. Outgassing products may condense on nearby colder surfaces, which can be troublesome if they obscure optical
instruments or react with other materials. This is of great concern to space missions, where an obscured telescope
or solar cell can ruin an expensive mission. To continue evacuating a chamber indefinitely without requiring infinite
growth, a compartment of the vacuum can be repeatedly closed off, exhausted, and expanded again. This is the principle
behind positive displacement pumps, like the manual water pump for example. Inside the pump, a mechanism expands
a small sealed cavity to create a vacuum. Because of the pressure differential, some fluid from the chamber (or the
well, in our example) is pushed into the pump's small cavity. The pump's cavity is then sealed from the chamber,
opened to the atmosphere, and squeezed back to a minute size. The above explanation is merely a simple introduction
to vacuum pumping, and is not representative of the entire range of pumps in use. Many variations of the positive
displacement pump have been developed, and many other pump designs rely on fundamentally different principles. Momentum
transfer pumps, which bear some similarities to dynamic pumps used at higher pressures, can achieve much higher quality
vacuums than positive displacement pumps. Entrapment pumps can capture gases in a solid or absorbed state, often
with no moving parts, no seals and no vibration. None of these pumps are universal; each type has important performance
limitations. They all share a difficulty in pumping low molecular weight gases, especially hydrogen, helium, and
neon. The lowest pressure that can be attained in a system is also dependent on many things other than the nature
of the pumps. Multiple pumps may be connected in series, called stages, to achieve higher vacuums. The choice of
seals, chamber geometry, materials, and pump-down procedures will all have an impact. Collectively, these are called
vacuum technique. And sometimes, the final pressure is not the only relevant characteristic. Pumping systems differ
in oil contamination, vibration, preferential pumping of certain gases, pump-down speeds, intermittent duty cycle,
reliability, or tolerance to high leakage rates. Fluids cannot generally be pulled, so a vacuum cannot be created
by suction. Suction can spread and dilute a vacuum by letting a higher pressure push fluids into it, but the vacuum
has to be created first before suction can occur. The easiest way to create an artificial vacuum is to expand the
volume of a container. For example, the diaphragm muscle expands the chest cavity, which causes the volume of the
lungs to increase. This expansion reduces the pressure and creates a partial vacuum, which is soon filled by air
pushed in by atmospheric pressure.
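The seal-exhaust-expand cycle described above can be sketched numerically. Under idealized assumptions (isothermal expansion per Boyle's law, no dead volume, no leaks or outgassing; the volumes and cycle count below are purely illustrative), each cycle leaves the chamber with the fraction V_chamber / (V_chamber + V_cavity) of its previous pressure, so pressure falls geometrically:

```python
def pump_down(p_chamber, v_chamber, v_cavity, cycles):
    """Ideal positive displacement pump model.

    Each cycle: the evacuated pump cavity is opened to the chamber, the
    gas redistributes isothermally over both volumes (Boyle's law), then
    the cavity is sealed off and its contents expelled to the atmosphere.
    """
    for _ in range(cycles):
        # Chamber keeps only its proportional share of the gas.
        p_chamber *= v_chamber / (v_chamber + v_cavity)
    return p_chamber

# Illustrative numbers: a 10 L chamber, a 0.5 L pump cavity,
# starting at one atmosphere (101325 Pa).
final = pump_down(101325.0, 10.0, 0.5, cycles=100)
print(f"{final:.0f} Pa")  # ≈ 771 Pa after 100 cycles
```

In a real pump the decay levels off long before zero: residual dead volume, seal leaks, outgassing, and pump-fluid vapor pressure set an ultimate pressure, which is why the momentum transfer and entrapment pumps mentioned above are needed for higher-quality vacuums.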
The Han dynasty (Chinese: 漢朝; pinyin: Hàn cháo) was the second imperial dynasty of China, preceded by the Qin dynasty (221–206 BC) and succeeded by the Three Kingdoms period (220–280 AD). Spanning over four centuries, the Han period is considered
a golden age in Chinese history. To this day, China's majority ethnic group refers to itself as the "Han people"
and the Chinese script is referred to as "Han characters". It was founded by the rebel leader Liu Bang, known posthumously
as Emperor Gaozu of Han, and briefly interrupted by the Xin dynasty (9–23 AD) of the former regent Wang Mang. This
interregnum separates the Han dynasty into two periods: the Western Han or Former Han (206 BC – 9 AD) and the Eastern
Han or Later Han (25–220 AD). The emperor was at the pinnacle of Han society. He presided over the Han government
but shared power with both the nobility and appointed ministers who came largely from the scholarly gentry class.
The Han Empire was divided into areas directly controlled by the central government using an innovation inherited
from the Qin known as commanderies, and a number of semi-autonomous kingdoms. These kingdoms gradually lost all vestiges
of their independence, particularly following the Rebellion of the Seven States. From the reign of Emperor Wu onward,
the Chinese court officially sponsored Confucianism in education and court politics, synthesized with the cosmology
of later scholars such as Dong Zhongshu. This policy endured until the fall of the Qing dynasty in AD 1911. The Han
dynasty was an age of economic prosperity and saw a significant growth of the money economy first established during
the Zhou dynasty (c. 1050–256 BC). The coinage issued by the central government mint in 119 BC remained the standard
coinage of China until the Tang dynasty (618–907 AD). The period saw a number of limited institutional innovations.
To pay for its military campaigns and the settlement of newly conquered frontier territories, the government nationalized
the private salt and iron industries in 117 BC, but these government monopolies were repealed during the Eastern
Han period. Science and technology during the Han period saw significant advances, including papermaking, the nautical
steering rudder, the use of negative numbers in mathematics, the raised-relief map, the hydraulic-powered armillary
sphere for astronomy, and a seismometer employing an inverted pendulum. The Xiongnu, a nomadic steppe confederation,
defeated the Han in 200 BC and forced the Han to submit as a de facto inferior partner, but continued their raids
on the Han borders. Emperor Wu of Han (r. 141–87 BC) launched several military campaigns against them. The ultimate
Han victory in these wars eventually forced the Xiongnu to accept vassal status as Han tributaries. These campaigns
expanded Han sovereignty into the Tarim Basin of Central Asia, divided the Xiongnu into two separate confederations,
and helped establish the vast trade network known as the Silk Road, which reached as far as the Mediterranean world.
The territories north of Han's borders were quickly overrun by the nomadic Xianbei confederation. Emperor Wu also
launched successful military expeditions in the south, annexing Nanyue in 111 BC and Dian in 109 BC, and in the Korean
Peninsula where the Xuantu and Lelang Commanderies were established in 108 BC. After 92 AD, the palace eunuchs increasingly
involved themselves in court politics, engaging in violent power struggles between the various consort clans of the
empresses and empress dowagers, causing the Han's ultimate downfall. Imperial authority was also seriously challenged
by large Daoist religious societies which instigated the Yellow Turban Rebellion and the Five Pecks of Rice Rebellion.
Following the death of Emperor Ling (r. 168–189 AD), the palace eunuchs suffered wholesale massacre by military officers,
allowing members of the aristocracy and military governors to become warlords and divide the empire. When Cao Pi,
King of Wei, usurped the throne from Emperor Xian, the Han dynasty ceased to exist. China's first imperial dynasty
was the Qin dynasty (221–206 BC). The Qin unified the Chinese Warring States by conquest, but their empire became
unstable after the death of the first emperor Qin Shi Huangdi. Within four years, the dynasty's authority had collapsed
in the face of rebellion. Two former rebel leaders, Xiang Yu (d. 202 BC) of Chu and Liu Bang (d. 195 BC) of Han,
engaged in a war to decide who would become hegemon of China, which had fissured into 18 kingdoms, each claiming
allegiance to either Xiang Yu or Liu Bang. Although Xiang Yu proved to be a capable commander, Liu Bang defeated
him at the Battle of Gaixia (202 BC), in modern-day Anhui. Liu Bang assumed the title "emperor" (huangdi) at the urging
of his followers and is known posthumously as Emperor Gaozu (r. 202–195 BC). Chang'an was chosen as the new capital
of the reunified empire under Han. At the beginning of the Western Han dynasty, thirteen centrally controlled commanderies—including
the capital region—existed in the western third of the empire, while the eastern two-thirds were divided into ten
semi-autonomous kingdoms. To placate his prominent commanders from the war with Chu, Emperor Gaozu enfeoffed some
of them as kings. By 157 BC, the Han court had replaced all of these kings with royal Liu family members, since the
loyalty of non-relatives to the throne was questioned. After several insurrections by Han kings—the largest being
the Rebellion of the Seven States in 154 BC—the imperial court enacted a series of reforms beginning in 145 BC limiting
the size and power of these kingdoms and dividing their former territories into new centrally controlled commanderies.
Kings were no longer able to appoint their own staff; this duty was assumed by the imperial court. Kings became nominal
heads of their fiefs and collected a portion of tax revenues as their personal incomes. The kingdoms were never entirely
abolished and existed throughout the remainder of Western and Eastern Han. To the north of China proper, the nomadic
Xiongnu chieftain Modu Chanyu (r. 209–174 BC) conquered various tribes inhabiting the eastern portion of the Eurasian
Steppe. By the end of his reign, he controlled Manchuria, Mongolia, and the Tarim Basin, subjugating over twenty
states east of Samarkand. Emperor Gaozu was troubled by the abundant Han-manufactured iron weapons traded to the
Xiongnu along the northern borders, and he established a trade embargo against the group. Although the embargo was
in place, the Xiongnu found traders willing to supply their needs. Chinese forces also mounted surprise attacks against
Xiongnu who traded at the border markets. In retaliation, the Xiongnu invaded what is now Shanxi province, where
they defeated the Han forces at Baideng in 200 BC. After negotiations, the heqin agreement in 198 BC nominally held
the leaders of the Xiongnu and the Han as equal partners in a royal marriage alliance, but the Han were forced to
send large amounts of tribute items such as silk clothes, food, and wine to the Xiongnu. Despite the tribute and
a negotiation between Laoshang Chanyu (r. 174–160 BC) and Emperor Wen (r. 180–157 BC) to reopen border markets, many
of the Chanyu's Xiongnu subordinates chose not to obey the treaty and periodically raided Han territories south of
the Great Wall for additional goods. In a court conference assembled by Emperor Wu (r. 141–87 BC) in 135 BC, the
majority consensus of the ministers was to retain the heqin agreement. Emperor Wu accepted this, despite continuing
Xiongnu raids. However, a court conference the following year convinced the majority that a limited engagement at
Mayi involving the assassination of the Chanyu would throw the Xiongnu realm into chaos and benefit the Han. When
this plot failed in 133 BC, Emperor Wu launched a series of massive military invasions into Xiongnu territory. Chinese
armies captured one stronghold after another and established agricultural colonies to strengthen their hold. The
assault culminated in 119 BC at the Battle of Mobei, where the Han commanders Huo Qubing (d. 117 BC) and Wei Qing
(d. 106 BC) forced the Xiongnu court to flee north of the Gobi Desert. In 121 BC, Han forces expelled the Xiongnu
from a vast territory spanning the Hexi Corridor to Lop Nur. They repelled a joint Xiongnu-Qiang invasion of this
northwestern territory in 111 BC. In that year, the Han court established four new frontier commanderies in this
region: Jiuquan, Zhangye, Dunhuang, and Wuwei. The majority of people on the frontier were soldiers. On occasion,
the court forcibly moved peasant farmers to new frontier settlements, along with government-owned slaves and convicts
who performed hard labor. The court also encouraged commoners, such as farmers, merchants, landowners, and hired
laborers, to voluntarily migrate to the frontier. Even before Han's expansion into Central Asia, diplomat Zhang Qian's
travels from 139 to 125 BC had established Chinese contacts with many surrounding civilizations. Zhang encountered
Dayuan (Fergana), Kangju (Sogdiana), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom); he also gathered information
on Shendu (Indus River valley of North India) and Anxi (the Parthian Empire). All of these countries eventually received
Han embassies. These connections marked the beginning of the Silk Road trade network that extended to the Roman Empire,
bringing Han items like silk to Rome and Roman goods such as glasswares to China. From roughly 115 to 60 BC, Han
forces fought the Xiongnu over control of the oasis city-states in the Tarim Basin. Han was eventually victorious
and established the Protectorate of the Western Regions in 60 BC, which dealt with the region's defense and foreign
affairs. The Han also expanded southward. The naval conquest of Nanyue in 111 BC expanded the Han realm into what
are now modern Guangdong, Guangxi, and northern Vietnam. Yunnan was brought into the Han realm with the conquest
of the Dian Kingdom in 109 BC, followed by parts of the Korean Peninsula with the colonial establishments of Xuantu
Commandery and Lelang Commandery in 108 BC. In China's first known nationwide census taken in 2 AD, the population
was registered as having 57,671,400 individuals in 12,366,470 households. To pay for his military campaigns and colonial
expansion, Emperor Wu nationalized several private industries. He created central government monopolies administered
largely by former merchants. These monopolies included salt, iron, and liquor production, as well as bronze-coin
currency. The liquor monopoly lasted only from 98 to 81 BC, and the salt and iron monopolies were eventually abolished
in early Eastern Han. The issuing of coinage remained a central government monopoly throughout the rest of the Han
dynasty. The government monopolies were eventually repealed when a political faction known as the Reformists gained
greater influence in the court. The Reformists opposed the Modernist faction that had dominated court politics in
Emperor Wu's reign and during the subsequent regency of Huo Guang (d. 68 BC). The Modernists argued for an aggressive
and expansionary foreign policy supported by revenues from heavy government intervention in the private economy.
The Reformists, however, overturned these policies, favoring a cautious, non-expansionary approach to foreign policy,
frugal budget reform, and lower tax-rates imposed on private entrepreneurs. Wang Mang initiated a series of major
reforms that were ultimately unsuccessful. These reforms included outlawing slavery, nationalizing land to equally
distribute between households, and introducing new currencies, a change which debased the value of coinage. Although
these reforms provoked considerable opposition, Wang's regime met its ultimate downfall with the massive floods of
c. 3 AD and 11 AD. Gradual silt buildup in the Yellow River had raised its water level and overwhelmed the flood
control works. The Yellow River split into two new branches: one emptying to the north and the other to the south
of the Shandong Peninsula, though Han engineers managed to dam the southern branch by 70 AD. The period between the
foundation of the Han dynasty and Wang Mang's reign is known as the Western Han dynasty (simplified Chinese: 西汉;
traditional Chinese: 西漢; pinyin: Xī Hàn) or Former Han dynasty (simplified Chinese: 前汉; traditional Chinese: 前漢;
pinyin: Qiánhàn) (206 BC – 9 AD). During this period the capital was at Chang'an (modern Xi'an). Under Emperor Guangwu the capital was moved eastward to Luoyang. The era from his reign until the fall of Han is known as the Eastern
Han dynasty (simplified Chinese: 东汉; traditional Chinese: 東漢; pinyin: Dōng Hàn) or the Later Han dynasty (simplified
Chinese: 后汉; traditional Chinese: 後漢; pinyin: Hòu Hàn) (25–220 AD). The Eastern Han, also known as the Later Han,
formally began on 5 August 25, when Liu Xiu became Emperor Guangwu of Han. During the widespread rebellion against
Wang Mang, the state of Goguryeo was free to raid Han's Korean commanderies; Han did not reaffirm its control over
the region until AD 30. The Trưng Sisters of Vietnam rebelled against Han in AD 40. Their rebellion was crushed by
Han general Ma Yuan (d. AD 49) in a campaign from AD 42–43. Wang Mang renewed hostilities against the Xiongnu, who
were estranged from Han until their leader Bi (比), a rival claimant to the throne against his cousin Punu (蒲奴), submitted
to Han as a tributary vassal in AD 50. This created two rival Xiongnu states: the Southern Xiongnu led by Bi, an
ally of Han, and the Northern Xiongnu led by Punu, an enemy of Han. During the turbulent reign of Wang Mang, Han
lost control over the Tarim Basin, which was conquered by the Northern Xiongnu in AD 63 and used as a base to invade
Han's Hexi Corridor in Gansu. Dou Gu (d. 88 AD) defeated the Northern Xiongnu at the Battle of Yiwulu in AD 73, evicting
them from Turpan and chasing them as far as Lake Barkol before establishing a garrison at Hami. After the new Protector
General of the Western Regions Chen Mu (d. AD 75) was killed by allies of the Xiongnu in Karasahr and Kucha, the
garrison at Hami was withdrawn. At the Battle of Ikh Bayan in AD 89, Dou Xian (d. AD 92) defeated the Northern Xiongnu
chanyu who then retreated into the Altai Mountains. After the Northern Xiongnu fled into the Ili River valley in
AD 91, the nomadic Xianbei occupied the area from the borders of the Buyeo Kingdom in Manchuria to the Ili River
of the Wusun people. The Xianbei reached their apogee under Tanshihuai (檀石槐) (d. AD 180), who consistently defeated
Chinese armies. However, Tanshihuai's confederation disintegrated after his death. Ban Chao (d. AD 102) enlisted
the aid of the Kushan Empire, which occupied the area of modern India, Pakistan, Afghanistan, and Tajikistan, to subdue
Kashgar and its ally Sogdiana. When a request by Kushan ruler Vima Kadphises (r. c. 90–c. 100 AD) for a marriage
alliance with the Han was rejected in AD 90, he sent his forces to Wakhan (Afghanistan) to attack Ban Chao. The conflict
ended with the Kushans withdrawing because of lack of supplies. In AD 91, the office of Protector General of the
Western Regions was reinstated when it was bestowed on Ban Chao. In addition to tributary relations with the Kushans,
the Han Empire received gifts from the Parthian Empire, from a king in modern Burma, from a ruler in Japan, and initiated
an unsuccessful mission to Daqin (Rome) in AD 97 with Gan Ying as emissary. A Roman embassy of Emperor Marcus Aurelius
(r. 161–180 AD) is recorded in the Hou Hanshu to have reached the court of Emperor Huan of Han (r. AD 146–168) in
AD 166, yet Rafe de Crespigny asserts that this was most likely a group of Roman merchants. Other travelers to Eastern-Han
China included Buddhist monks who translated works into Chinese, such as An Shigao of Parthia, and Lokaksema from
Kushan-era Gandhara, India. Emperor Zhang's (r. 75–88 AD) reign came to be viewed by later Eastern Han scholars as
the high point of the dynastic house. Subsequent reigns were increasingly marked by eunuch intervention in court
politics and their involvement in the violent power struggles of the imperial consort clans. With the aid of the
eunuch Zheng Zhong (d. 107 AD), Emperor He (r. 88–105 AD) had Empress Dowager Dou (d. 97 AD) put under house arrest
and her clan stripped of power. This was in revenge for Dou's purging of the clan of his natural mother—Consort Liang—and
then concealing her identity from him. After Emperor He's death, his wife Empress Deng Sui (d. 121 AD) managed state
affairs as the regent empress dowager during a turbulent financial crisis and widespread Qiang rebellion that lasted
from 107 to 118 AD. When Empress Dowager Deng died, Emperor An (r. 106–125 AD) was convinced by the accusations of
the eunuchs Li Run (李閏) and Jiang Jing (江京) that Deng and her family had planned to depose him. An dismissed Deng's
clan members from office, exiled them and forced many to commit suicide. After An's death, his wife, Empress Dowager
Yan (d. 126 AD) placed the child Marquess of Beixiang on the throne in an attempt to retain power within her family.
However, palace eunuch Sun Cheng (d. 132 AD) masterminded a successful overthrow of her regime to enthrone Emperor
Shun of Han (r. 125–144 AD). Yan was placed under house arrest, her relatives were either killed or exiled, and her
eunuch allies were slaughtered. The regent Liang Ji (d. 159 AD), brother of Empress Liang Na (d. 150 AD), had the
brother-in-law of Consort Deng Mengnü (later empress) (d. 165 AD) killed after Deng Mengnü resisted Liang Ji's attempts
to control her. Afterward, Emperor Huan employed eunuchs to depose Liang Ji, who was then forced to commit suicide.
Students from the Imperial University organized a widespread student protest against the eunuchs of Emperor Huan's
court. Huan further alienated the bureaucracy when he initiated grandiose construction projects and hosted thousands
of concubines in his harem at a time of economic crisis. Palace eunuchs imprisoned the official Li Ying (李膺) and
his associates from the Imperial University on a dubious charge of treason. In 167 AD, the Grand Commandant Dou Wu
(d. 168 AD) convinced his son-in-law, Emperor Huan, to release them. However, the emperor permanently barred Li Ying
and his associates from serving in office, marking the beginning of the Partisan Prohibitions. Following Huan's death,
Dou Wu and the Grand Tutor Chen Fan (陳蕃) (d. 168 AD) attempted a coup d'état against the eunuchs Hou Lan (d. 172
AD), Cao Jie (d. 181 AD), and Wang Fu (王甫). When the plot was uncovered, the eunuchs arrested Empress Dowager Dou
(d. 172 AD) and Chen Fan. General Zhang Huan (張奐) favored the eunuchs. He and his troops confronted Dou Wu and his
retainers at the palace gate where each side shouted accusations of treason against the other. When the retainers
gradually deserted Dou Wu, he was forced to commit suicide. The Partisan Prohibitions were repealed during the Yellow
Turban Rebellion and Five Pecks of Rice Rebellion in 184 AD, largely because the court did not want to continue to
alienate a significant portion of the gentry class who might otherwise join the rebellions. The Yellow Turbans and
Five-Pecks-of-Rice adherents belonged to two different hierarchical Daoist religious societies led by faith healers
Zhang Jue (d. 184 AD) and Zhang Lu (d. 216 AD), respectively. Zhang Lu's rebellion, in modern northern Sichuan and
southern Shaanxi, was not quelled until 215 AD. Zhang Jue's massive rebellion across eight provinces was annihilated
by Han forces within a year; however, the following decades saw much smaller recurrent uprisings. Although the Yellow
Turbans were defeated, many generals appointed during the crisis never disbanded their assembled militia forces and
used these troops to amass power outside of the collapsing imperial authority. General-in-Chief He Jin (d. 189 AD),
half-brother to Empress He (d. 189 AD), plotted with Yuan Shao (d. 202 AD) to overthrow the eunuchs by having several
generals march to the outskirts of the capital. There, in a written petition to Empress He, they demanded the eunuchs'
execution. After a period of hesitation, Empress He consented. When the eunuchs discovered this, however, they had
her brother He Miao (何苗) rescind the order. The eunuchs assassinated He Jin on September 22, 189 AD. Yuan Shao then
besieged Luoyang's Northern Palace while his brother Yuan Shu (d. 199 AD) besieged the Southern Palace. On September
25 both palaces were breached and approximately two thousand eunuchs were killed. Zhang Rang had previously fled
with Emperor Shao (r. 189 AD) and his brother Liu Xie—the future Emperor Xian of Han (r. 189–220 AD). While being
pursued by the Yuan brothers, Zhang committed suicide by jumping into the Yellow River. General Dong Zhuo (d. 192
AD) found the young emperor and his brother wandering in the countryside. He escorted them safely back to the capital
and was made Minister of Works, taking control of Luoyang and forcing Yuan Shao to flee. After Dong Zhuo demoted
Emperor Shao and promoted his brother Liu Xie as Emperor Xian, Yuan Shao led a coalition of former officials and
officers against Dong, who burned Luoyang to the ground and resettled the court at Chang'an in May 191 AD. Dong Zhuo
later poisoned Emperor Shao. After Cao Cao's defeat at the naval Battle of Red Cliffs in 208 AD, China was divided into
three spheres of influence, with Cao Cao dominating the north, Sun Quan (182–252 AD) dominating the south, and Liu
Bei (161–223 AD) dominating the west. Cao Cao died in March 220 AD. By December his son Cao Pi (187–226 AD) had Emperor
Xian relinquish the throne to him and is known posthumously as Emperor Wen of Wei. This formally ended the Han dynasty
and initiated an age of conflict between three states: Cao Wei, Eastern Wu, and Shu Han. Each successive rank gave
its holder greater pensions and legal privileges. The highest rank, of full marquess, came with a state pension and
a territorial fiefdom. Holders of the rank immediately below, that of ordinary marquess, received a pension, but
had no territorial rule. Officials who served in government belonged to the wider commoner social class and were
ranked just below nobles in social prestige. The highest government officials could be enfeoffed as marquesses. By
the Eastern Han period, local elites of unattached scholars, teachers, students, and government officials began to
identify themselves as members of a larger, nationwide gentry class with shared values and a commitment to mainstream
scholarship. When the government became noticeably corrupt in mid-to-late Eastern Han, many gentrymen even considered
the cultivation of morally grounded personal relationships more important than serving in public office. The farmer,
or specifically the small landowner-cultivator, was ranked just below scholars and officials in the social hierarchy.
Other agricultural cultivators were of a lower status, such as tenants, wage laborers, and in rare cases slaves.
Artisans and craftsmen had a legal and socioeconomic status between that of owner-cultivator farmers and common merchants.
State-registered merchants, who were forced by law to wear white-colored clothes and pay high commercial taxes, were
considered by the gentry as social parasites with a contemptible status. These were often petty shopkeepers of urban
marketplaces; merchants such as industrialists and itinerant traders working between a network of cities could avoid
registering as merchants and were often wealthier and more powerful than the vast majority of government officials.
Wealthy landowners, such as nobles and officials, often provided lodging for retainers who provided valuable work
or duties, sometimes including fighting bandits or riding into battle. Unlike slaves, retainers could come and go
from their master's home as they pleased. Medical physicians, pig breeders, and butchers had a fairly high social
status, while occultist diviners, runners, and messengers had low status. The Han-era family was patrilineal and
typically had four to five nuclear family members living in one household. Multiple generations of extended family
members did not occupy the same house, unlike families of later dynasties. According to Confucian family norms, various
family members were treated with different levels of respect and intimacy. For example, there were different accepted
time frames for mourning the death of a father versus a paternal uncle. Arranged marriages were normal, with the
father's input on his offspring's spouse being considered more important than the mother's. Monogamous marriages
were also normal, although nobles and high officials were wealthy enough to afford and support concubines as additional
lovers. Under certain conditions dictated by custom, not law, both men and women were able to divorce their spouses
and remarry. Apart from the passing of noble titles or ranks, inheritance practices did not involve primogeniture;
each son received an equal share of the family property. Unlike the practice in later dynasties, the father usually
sent his adult married sons away with their portions of the family fortune. Daughters received a portion of the family
fortune through their marriage dowries, though this was usually much less than the shares of sons. A different distribution
of the remainder could be specified in a will, but it is unclear how common this was. Women were expected to obey
the will of their father, then their husband, and then their adult son in old age. However, it is known from contemporary
sources that there were many deviations from this rule, especially in regard to mothers over their sons, and empresses
who ordered around and openly humiliated their fathers and brothers. Women were exempt from the annual corvée labor
duties, but often engaged in a range of income-earning occupations aside from their domestic chores of cooking and
cleaning. The early Western Han court simultaneously accepted the philosophical teachings of Legalism, Huang-Lao
Daoism, and Confucianism in making state decisions and shaping government policy. However, the Han court under Emperor
Wu gave Confucianism exclusive patronage. He abolished all academic chairs or erudites (bóshì 博士) not dealing with
the Confucian Five Classics in 136 BC and encouraged nominees for office to receive a Confucian-based education at
the Imperial University that he established in 124 BC. Unlike the original ideology espoused by Confucius, or Kongzi
(551–479 BC), Han Confucianism in Emperor Wu's reign was the creation of Dong Zhongshu (179–104 BC). Dong was a scholar
and minor official who aggregated the ethical Confucian ideas of ritual, filial piety, and harmonious relationships
with five phases and yin-yang cosmologies. Much to the interest of the ruler, Dong's synthesis justified the imperial
system of government within the natural order of the universe. The Imperial University grew in importance as its student body swelled to over 30,000 by the 2nd century AD. A Confucian-based education was also made available at commandery-level
schools and private schools opened in small towns, where teachers earned respectable incomes from tuition payments.
Some important texts were created and studied by scholars. Philosophical works written by Yang Xiong (53 BC – 18
AD), Huan Tan (43 BC – 28 AD), Wang Chong (27–100 AD), and Wang Fu (78–163 AD) questioned whether human nature was
innately good or evil and posed challenges to Dong's universal order. The Records of the Grand Historian by Sima
Tan (d. 110 BC) and his son Sima Qian (145–86 BC) established the standard model for all of imperial China's Standard
Histories, such as the Book of Han written by Ban Biao (3–54 AD), his son Ban Gu (32–92 AD), and his daughter Ban
Zhao (45–116 AD). There were dictionaries such as the Shuowen Jiezi by Xu Shen (c. 58 – c. 147 AD) and the Fangyan
by Yang Xiong. Biographies on important figures were written by various gentrymen. Han dynasty poetry was dominated
by the fu genre, which achieved its greatest prominence during the reign of Emperor Wu. Various cases of rape, physical abuse, and murder were prosecuted in court. Women, although usually having fewer rights by custom, were allowed to
level civil and criminal charges against men. While suspects were jailed, convicted criminals were never imprisoned.
Instead, punishments were commonly monetary fines, periods of forced hard labor for convicts, and the penalty of
death by beheading. Early Han punishments of torturous mutilation were borrowed from Qin law. A series of reforms replaced mutilation punishments with progressively less-severe beatings by the bastinado. The most common staple
crops consumed during Han were wheat, barley, foxtail millet, proso millet, rice, and beans. Commonly eaten fruits
and vegetables included chestnuts, pears, plums, peaches, melons, apricots, strawberries, red bayberries, jujubes,
calabash, bamboo shoots, mustard plant and taro. Domesticated animals that were also eaten included chickens, Mandarin
ducks, geese, cows, sheep, pigs, camels and dogs (various types were bred specifically for food, while most were
used as pets). Turtles and fish were taken from streams and lakes. Commonly hunted game, such as owl, pheasant, magpie,
sika deer, and Chinese bamboo partridge were consumed. Seasonings included sugar, honey, salt and soy sauce. Beer
and wine were regularly consumed. Families throughout Han China made ritual sacrifices of animals and food to deities,
spirits, and ancestors at temples and shrines, in the belief that these items could be utilized by those in the spiritual
realm. It was thought that each person had a two-part soul: the spirit-soul (hun 魂) which journeyed to the afterlife
paradise of immortals (xian), and the body-soul (po 魄) which remained in its grave or tomb on earth and was only
reunited with the spirit-soul through a ritual ceremony. These tombs were commonly adorned with uniquely decorated
hollow clay tiles that also functioned as a doorjamb to the tomb. Otherwise known as tomb tiles, these artifacts feature
holes in the top and bottom of the tile allowing it to pivot. Similar tiles have been found in the Chengdu area of
Sichuan province in south-central China. In addition to his many other roles, the emperor acted as the highest priest
in the land who made sacrifices to Heaven, the main deities known as the Five Powers, and the spirits (shen 神) of
mountains and rivers. It was believed that the three realms of Heaven, Earth, and Mankind were linked by natural
cycles of yin and yang and the five phases. If the emperor did not behave according to proper ritual, ethics, and
morals, he could disrupt the fine balance of these cosmological cycles and cause calamities such as earthquakes,
floods, droughts, epidemics, and swarms of locusts. It was believed that immortality could be achieved if one reached
the lands of the Queen Mother of the West or Mount Penglai. Han-era Daoists assembled into small groups of hermits
who attempted to achieve immortality through breathing exercises, sexual techniques and use of medical elixirs. By
the 2nd century AD, Daoists formed large hierarchical religious societies such as the Way of the Five Pecks of Rice.
Its followers believed that the sage-philosopher Laozi (fl. 6th century BC) was a holy prophet who would offer salvation
and good health if his devout followers would confess their sins, ban the worship of unclean gods who accepted meat
sacrifices and chant sections of the Daodejing. Buddhism first entered China during the Eastern Han and was first
mentioned in 65 AD. Liu Ying (d. 71 AD), a half-brother to Emperor Ming of Han (r. 57–75 AD), was one of its earliest
Chinese adherents, although Chinese Buddhism at this point was heavily associated with Huang-Lao Daoism. China's
first known Buddhist temple, the White Horse Temple, was erected during Ming's reign. Important Buddhist canons were
translated into Chinese during the 2nd century AD, including the Sutra of Forty-two Chapters, Perfection of Wisdom,
Shurangama Sutra, and Pratyutpanna Sutra. In Han government, the emperor was the supreme judge and lawgiver, the
commander-in-chief of the armed forces and sole designator of official nominees appointed to the top posts in central
and local administrations, namely those who earned a salary-rank of 600 dan or higher. Theoretically, there were no limits
to his power. However, state organs with competing interests and institutions such as the court conference (tingyi
廷議)—where ministers were convened to reach majority consensus on an issue—pressured the emperor to accept the advice
of his ministers on policy decisions. If the emperor rejected a court conference decision, he risked alienating his
high ministers. Nevertheless, emperors sometimes did reject the majority opinion reached at court conferences. Ranked
below the Three Councillors of State were the Nine Ministers, who each headed a specialized ministry. The Minister
of Ceremonies was the chief official in charge of religious rites, rituals, prayers and the maintenance of ancestral
temples and altars. The Minister of the Household was in charge of the emperor's security within the palace grounds,
external imperial parks and wherever the emperor made an outing by chariot. The Minister of the Guards was responsible
for securing and patrolling the walls, towers, and gates of the imperial palaces. The Minister Coachman was responsible
for the maintenance of imperial stables, horses, carriages and coach-houses for the emperor and his palace attendants,
as well as the supply of horses for the armed forces. The Minister of Justice was the chief official in charge of
upholding, administering, and interpreting the law. The Minister Herald was the chief official in charge of receiving
honored guests at the imperial court, such as nobles and foreign ambassadors. The Minister of the Imperial Clan oversaw
the imperial court's interactions with the empire's nobility and extended imperial family, such as granting fiefs
and titles. The Minister of Finance was the treasurer for the official bureaucracy and the armed forces who handled
tax revenues and set standards for units of measurement. The Minister Steward served the emperor exclusively, providing
him with entertainment and amusements, proper food and clothing, medicine and physical care, valuables and equipment.
A commandery consisted of a group of counties, and was headed by an Administrator. He was the top civil and military
leader of the commandery and handled defense, lawsuits, seasonal instructions to farmers and recommendations of nominees
for office sent annually to the capital in a quota system first established by Emperor Wu. The head of a large county
of about 10,000 households was called a Prefect, while the heads of smaller counties were called Chiefs, and both
could be referred to as Magistrates. A Magistrate maintained law and order in his county, registered the populace
for taxation, mobilized commoners for annual corvée duties, repaired schools and supervised public works. At the
beginning of the Han dynasty, every male commoner aged twenty-three was liable for conscription into the military.
The minimum age for the military draft was reduced to twenty after Emperor Zhao's (r. 87–74 BC) reign. Conscripted
soldiers underwent one year of training and one year of service as non-professional soldiers. The year of training
was served in one of three branches of the armed forces: infantry, cavalry or navy. The year of active service was
served either on the frontier, in a king's court or under the Minister of the Guards in the capital. A small professional
(paid) standing army was stationed near the capital. During the Eastern Han, conscription could be avoided if one
paid a commutable tax. The Eastern Han court favored the recruitment of a volunteer army. The volunteer army comprised
the Southern Army (Nanjun 南軍), while the standing army stationed in and near the capital was the Northern Army (Beijun
北軍). Led by Colonels (Xiaowei 校尉), the Northern Army consisted of five regiments, each composed of several thousand
soldiers. When central authority collapsed after 189 AD, wealthy landowners, members of the aristocracy/nobility,
and regional military-governors relied upon their retainers to act as their own personal troops (buqu 部曲). The Han
dynasty inherited the ban liang coin type from the Qin. In the beginning of the Han, Emperor Gaozu closed the government
mint in favor of private minting of coins. This decision was reversed in 186 BC by his widow Grand Empress Dowager
Lü Zhi (d. 180 BC), who abolished private minting. In 182 BC, Lü Zhi issued a bronze coin that was much lighter in
weight than previous coins. This caused widespread inflation that was not reduced until 175 BC when Emperor Wen allowed
private minters to manufacture coins that were precisely 2.6 g (0.09 oz) in weight. In 144 BC Emperor Jing abolished
private minting in favor of central-government and commandery-level minting; he also introduced a new coin. Emperor
Wu introduced another in 120 BC, but a year later he abandoned the ban liangs entirely in favor of the wuzhu (五銖)
coin, weighing 3.2 g (0.11 oz). The wuzhu became China's standard coin until the Tang dynasty (618–907 AD). Its use
was interrupted briefly by several new currencies introduced during Wang Mang's regime until it was reinstated in
40 AD by Emperor Guangwu. The small landowner-cultivators formed the majority of the Han tax base; this revenue was
threatened during the latter half of Eastern Han when many peasants fell into debt and were forced to work as farming
tenants for wealthy landlords. The Han government enacted reforms in order to keep small landowner-cultivators out
of debt and on their own farms. These reforms included reducing taxes, temporary remissions of taxes, granting loans
and providing landless peasants temporary lodging and work in agricultural colonies until they could recover from
their debts. In the early Western Han, a wealthy salt or iron industrialist, whether a semi-autonomous king or wealthy
merchant, could boast funds that rivaled the imperial treasury and amass a peasant workforce of over a thousand.
This kept many peasants away from their farms and denied the government a significant portion of its land tax revenue.
To eliminate the influence of such private entrepreneurs, Emperor Wu nationalized the salt and iron industries in
117 BC and allowed many of the former industrialists to become officials administering the monopolies. By Eastern
Han times, the central government monopolies were repealed in favor of production by commandery and county administrations,
as well as private businessmen. Liquor was another profitable private industry nationalized by the central government
in 98 BC. However, this was repealed in 81 BC and a property tax rate of two coins for every 0.2 L (0.05 gallons)
was levied on those who traded it privately. By 110 BC Emperor Wu also interfered with the profitable trade in grain
when he eliminated speculation by selling government-stored grain at a lower price than demanded by merchants. Apart
from Emperor Ming's creation of a short-lived Office for Price Adjustment and Stabilization, which was abolished
in 68 AD, central-government price control regulations were largely absent during the Eastern Han. Evidence suggests
that blast furnaces, which convert raw iron ore into pig iron that can be remelted in a cupola furnace to produce cast iron by means of a cold blast or hot blast, were operational in China by the late Spring and Autumn period
(722–481 BC). The bloomery was nonexistent in ancient China; however, the Han-era Chinese produced wrought iron by
injecting excess oxygen into a furnace and causing decarburization. Cast iron and pig iron could be converted into
wrought iron and steel using a fining process. The Han-era Chinese used bronze and iron to make a range of weapons,
culinary tools, carpenters' tools and domestic wares. A significant product of these improved iron-smelting techniques
was the manufacture of new agricultural tools. The three-legged iron seed drill, invented by the 2nd century BC,
enabled farmers to carefully plant crops in rows instead of casting seeds out by hand. The heavy moldboard iron plow,
also invented during the Han dynasty, required only one man to control it and two oxen to pull it. It had three plowshares, a seed box for the drills, and a tool that turned down the soil, and it could sow roughly 45,730 m2 (11.3 acres) of land
in a single day. To protect crops from wind and drought, the Grain Intendant Zhao Guo (趙過) created the alternating
fields system (daitianfa 代田法) during Emperor Wu's reign. This system switched the positions of furrows and ridges
between growing seasons. Once experiments with this system yielded successful results, the government officially
sponsored it and encouraged peasants to use it. Han farmers also used the pit field system (aotian 凹田) for growing
crops, which involved heavily fertilized pits that did not require plows or oxen and could be placed on sloping terrain.
In southern and small parts of central Han-era China, paddy fields were chiefly used to grow rice, while farmers
along the Huai River used transplantation methods of rice production. Timber was the chief building material during
the Han dynasty; it was used to build palace halls, multi-story residential towers and halls and single-story houses.
Because wood decays rapidly, the only remaining evidence of Han wooden architecture is a collection of scattered
ceramic roof tiles. The oldest surviving wooden halls in China date to the Tang dynasty (618–907 AD). Architectural
historian Robert L. Thorp points out the scarcity of Han-era archaeological remains, and claims that often unreliable
Han-era literary and artistic sources are used by historians for clues about lost Han architecture. Though Han wooden
structures decayed, some Han-dynasty ruins made of brick, stone, and rammed earth remain intact. This includes stone
pillar-gates, brick tomb chambers, rammed-earth city walls, rammed-earth and brick beacon towers, rammed-earth sections
of the Great Wall, rammed-earth platforms where elevated halls once stood, and two rammed-earth castles in Gansu.
The ruins of rammed-earth walls that once surrounded the capitals Chang'an and Luoyang still stand, along with their
drainage systems of brick arches, ditches, and ceramic water pipes. Monumental stone pillar-gates, twenty-nine of
which survive from the Han period, formed entrances of walled enclosures at shrine and tomb sites. These pillars
feature artistic imitations of wooden and ceramic building components such as roof tiles, eaves, and balustrades.
Evidence of Han-era mechanical engineering comes largely from the incidental observations of Confucian scholars, who were often uninterested in such matters. Professional artisan-engineers (jiang 匠) did not leave behind detailed records of their work.
Han scholars, who often had little or no expertise in mechanical engineering, sometimes provided insufficient information
on the various technologies they described. Nevertheless, some Han literary sources provide crucial information.
For example, in 15 BC the philosopher Yang Xiong described the invention of the belt drive for a quilling machine,
which was of great importance to early textile manufacturing. The inventions of the artisan-engineer Ding Huan (丁緩)
are mentioned in the Miscellaneous Notes on the Western Capital. Around 180 AD, Ding created a manually operated
rotary fan used for air conditioning within palace buildings. Ding also used gimbals as pivotal supports for one
of his incense burners and invented the world's first known zoetrope lamp. Modern archaeology has led to the discovery
of Han artwork portraying inventions which were otherwise absent in Han literary sources. As observed in Han miniature
tomb models, but not in literary sources, the crank handle was used to operate the fans of winnowing machines that
separated grain from chaff. The odometer cart, invented during Han, measured journey lengths, using mechanical figures
banging drums and gongs to indicate each distance traveled. This invention is depicted in Han artwork by the 2nd
century AD, yet detailed written descriptions were not offered until the 3rd century AD. Modern archaeologists have
also unearthed specimens of devices used during the Han dynasty, for example a pair of sliding metal calipers used
by craftsmen for making minute measurements. These calipers contain inscriptions of the exact day and year they were
manufactured. These tools are not mentioned in any Han literary sources. The waterwheel appeared in Chinese records
during the Han. As mentioned by Huan Tan in about 20 AD, they were used to turn gears that lifted iron trip hammers,
and were used in pounding, threshing and polishing grain. However, there is no sufficient evidence for the watermill
in China until about the 5th century. The Nanyang Commandery Administrator Du Shi (d. 38 AD) created a waterwheel-powered
reciprocator that worked the bellows for the smelting of iron. Waterwheels were also used to power chain pumps that
lifted water to raised irrigation ditches. The chain pump was first mentioned in China by the philosopher Wang Chong
in his 1st-century-AD Balanced Discourse. The armillary sphere, a three-dimensional representation of the movements
in the celestial sphere, was invented in Han China by the 1st century BC. Using a water clock, waterwheel and a series
of gears, the Court Astronomer Zhang Heng (78–139 AD) was able to mechanically rotate his metal-ringed armillary
sphere. To address the problem of slowed timekeeping in the pressure head of the inflow water clock, Zhang was the
first in China to install an additional tank between the reservoir and inflow vessel. Zhang also invented a seismometer
(Houfeng didong yi 候风地动仪) in 132 AD to detect the exact cardinal or ordinal direction of earthquakes from hundreds
of kilometers away. This employed an inverted pendulum that, when disturbed by ground tremors, would trigger a set
of gears that dropped a metal ball from one of eight dragon mouths (representing all eight directions) into a metal
toad's mouth. Three Han mathematical treatises still exist. These are the Book on Numbers and Computation, the Arithmetical
Classic of the Gnomon and the Circular Paths of Heaven and the Nine Chapters on the Mathematical Art. Han-era mathematical
achievements include solving problems with right-angle triangles, square roots, cube roots, and matrix methods, finding
more accurate approximations for pi, providing mathematical proof of the Pythagorean theorem, use of the decimal
fraction, Gaussian elimination to solve linear equations, and continued fractions to find the roots of equations.
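The elimination procedure recorded in the Nine Chapters on the Mathematical Art (its fangcheng, "rectangular array", chapter) is essentially what is now called Gaussian elimination. A minimal modern sketch, using the treatise's well-known three-grades-of-grain problem; the code and variable names are illustrative, not drawn from any Han source:

```python
def solve(aug):
    """Solve a linear system given as an augmented matrix [A | b]
    by forward elimination and back-substitution (the fangcheng method
    in modern form). Assumes the system has a unique solution."""
    n = len(aug)
    for col in range(n):
        # Bring a row with a nonzero leading entry into pivot position.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            # Eliminate the leading entry of row r.
            factor = aug[r][col] / aug[col][col]
            aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # Back-substitution on the triangular system.
    xs = [0.0] * n
    for i in reversed(range(n)):
        xs[i] = (aug[i][n] - sum(aug[i][j] * xs[j]
                                 for j in range(i + 1, n))) / aug[i][i]
    return xs

# Nine Chapters, chapter 8, problem 1: three grades of grain yielding
# 3x + 2y + z = 39,  2x + 3y + z = 34,  x + 2y + 3z = 26 (in dou).
print(solve([[3, 2, 1, 39], [2, 3, 1, 34], [1, 2, 3, 26]]))  # ≈ [9.25, 4.25, 2.75]
```

The original text performed the same eliminations on columns of counting rods laid out on a board, with black rods for the negative intermediate values that the procedure inevitably produces.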
One of the Han's greatest mathematical advancements was the world's first use of negative numbers. Negative numbers
first appeared in the Nine Chapters on the Mathematical Art as black counting rods, where positive numbers were represented
by red counting rods. Negative numbers are used in the Bakhshali manuscript of ancient India, but its exact date
of compilation is unknown. Negative numbers were also used by the Greek mathematician Diophantus in about 275 AD,
but were not widely accepted in Europe until the 16th century AD. Han-era astronomers adopted a geocentric model
of the universe, theorizing that it was shaped like a sphere surrounding the earth in the center. They assumed that
the Sun, Moon, and planets were spherical and not disc-shaped. They also thought that the illumination of the Moon
and planets was caused by sunlight, that lunar eclipses occurred when the Earth obstructed sunlight falling onto
the Moon, and that a solar eclipse occurred when the Moon obstructed sunlight from reaching the Earth. Although others
disagreed with his model, Wang Chong accurately described the water cycle of the evaporation of water into clouds.
Evidence found in Chinese literature, and archaeological evidence, show that cartography existed in China before
the Han. Some of the earliest Han maps discovered were ink-penned silk maps found amongst the Mawangdui Silk Texts
in a 2nd-century-BC tomb. The general Ma Yuan created the world's first known raised-relief map from rice in the
1st century AD. This date could be revised if the tomb of Qin Shi Huang is excavated and the account in the Records
of the Grand Historian concerning a model map of the empire is proven to be true. The Han-era Chinese sailed in a
variety of ships differing from those of previous eras, such as the tower ship. The junk design was developed and
realized during the Han. Junks featured a square-ended bow and stern, a flat-bottomed or carvel-shaped hull with
no keel or sternpost, and solid transverse bulkheads in the place of structural ribs found in Western vessels. Moreover,
Han ships were the first in the world to be steered using a rudder at the stern, in contrast to the simpler steering
oar used for riverine transport, allowing them to sail on the high seas. Han-era medical physicians believed that
the human body was subject to the same forces of nature that governed the greater universe, namely the cosmological
cycles of yin and yang and the five phases. Each organ of the body was associated with a particular phase. Illness
was viewed as a sign that qi or "vital energy" channels leading to a certain organ had been disrupted. Thus, Han-era
physicians prescribed medicine that was believed to counteract this imbalance. For example, since the wood phase
was believed to promote the fire phase, medicinal ingredients associated with the wood phase could be used to heal
an organ associated with the fire phase. To this end, the physician Zhang Zhongjing (c. 150–c. 219 AD) prescribed
regulated diets rich in certain foods that were thought to cure specific illnesses. These are now known to be nutrition
disorders caused by the lack of certain vitamins consumed in one's diet. Besides dieting, Han physicians also prescribed
moxibustion, acupuncture, and calisthenics as methods of maintaining one's health. When surgery was performed by
the physician Hua Tuo (d. 208 AD), he used anesthesia to numb his patients' pain and prescribed a rubbing ointment
that allegedly sped the process of healing surgical wounds.
Muslims believe the Quran was verbally revealed by God to Muhammad through the angel Gabriel (Jibril), gradually over a period
of approximately 23 years, beginning on 22 December 609 CE, when Muhammad was 40, and concluding in 632, the year
of his death. Muslims regard the Quran as the most important miracle of Muhammad, a proof of his prophethood, and
the culmination of a series of divine messages that started with the messages revealed to Adam and ended with Muhammad.
The word "Quran" occurs some 70 times in the text of the Quran, although different names and words are also said
to be references to the Quran. According to the traditional narrative, several companions of Muhammad served as scribes
and were responsible for writing down the revelations. Shortly after Muhammad's death, the Quran was compiled by
his companions who wrote down and memorized parts of it. These codices had differences that motivated the Caliph
Uthman to establish a standard version now known as Uthman's codex, which is generally considered the archetype of
the Quran known today. There are, however, variant readings, with mostly minor differences in meaning. The Quran
assumes familiarity with major narratives recounted in the Biblical scriptures. It summarizes some, dwells at length
on others and, in some cases, presents alternative accounts and interpretations of events. The Quran describes itself
as a book of guidance. It sometimes offers detailed accounts of specific historical events, and it often emphasizes
the moral significance of an event over its narrative sequence. The Quran is used along with the hadith to interpret
sharia law. During prayers, the Quran is recited only in Arabic. The word qurʼān appears about 70 times in the Quran
itself, assuming various meanings. It is a verbal noun (maṣdar) of the Arabic verb qaraʼa (قرأ), meaning "he read"
or "he recited". The Syriac equivalent is (ܩܪܝܢܐ) qeryānā, which refers to "scripture reading" or "lesson". While
some Western scholars consider the word to be derived from the Syriac, the majority of Muslim authorities hold the
origin of the word is qaraʼa itself. Regardless, it had become an Arabic term by Muhammad's lifetime. An important
meaning of the word is the "act of reciting", as reflected in an early Quranic passage: "It is for Us to collect
it and to recite it (qurʼānahu)." The term also has closely related synonyms that are employed throughout the Quran.
Each synonym possesses its own distinct meaning, but its use may converge with that of qurʼān in certain contexts.
Such terms include kitāb (book); āyah (sign); and sūrah (scripture). The latter two terms also denote units of revelation.
In the large majority of contexts, usually with a definite article (al-), the word is referred to as the "revelation"
(waḥy), that which has been "sent down" (tanzīl) at intervals. Other related words are: dhikr (remembrance), used
to refer to the Quran in the sense of a reminder and warning, and ḥikmah (wisdom), sometimes referring to the revelation
or part of it. The Quran describes itself as "the discernment" (al-furqān), "the mother book" (umm al-kitāb), "the
guide" (huda), "the wisdom" (hikmah), "the remembrance" (dhikr) and "the revelation" (tanzīl; something sent down,
signifying the descent of an object from a higher place to lower place). Another term is al-kitāb (The Book), though
it is also used in the Arabic language for other scriptures, such as the Torah and the Gospels. The adjective of
"Quran" has multiple transliterations including "quranic", "koranic", and "qur'anic", or capitalised as "Qur'anic",
"Koranic", and "Quranic". The term mus'haf ('written work') is often used to refer to particular Quranic manuscripts
but is also used in the Quran to identify earlier revealed books. Other transliterations of "Quran" include "al-Coran",
"Coran", "Kuran", and "al-Qurʼan". Islamic tradition relates that Muhammad received his first revelation in the Cave
of Hira during one of his isolated retreats to the mountains. Thereafter, he received revelations over a period of
23 years. According to hadith and Muslim history, after Muhammad immigrated to Medina and formed an independent Muslim
community, he ordered many of his companions to recite the Quran and to learn and teach the laws, which were revealed
daily. It is related that some of the Quraysh who were taken prisoners at the battle of Badr regained their freedom
after they had taught some of the Muslims the simple writing of the time. Thus a group of Muslims gradually became
literate. As it was initially spoken, the Quran was recorded on tablets, bones, and the wide, flat ends of date palm
fronds. Most suras were in use amongst early Muslims since they are mentioned in numerous sayings by both Sunni and
Shia sources, relating Muhammad's use of the Quran as a call to Islam, the making of prayer and the manner of recitation.
However, the Quran did not exist in book form at the time of Muhammad's death in 632. There is agreement among scholars
that Muhammad himself did not write down the revelation. The Quran describes Muhammad as "ummi", which is traditionally
interpreted as "illiterate," but the meaning is rather more complex. Medieval commentators such as Al-Tabari maintained
that the term carried two meanings: first, the inability to read or write in general; second, the inexperience or
ignorance of the previous books or scriptures (but they gave priority to the first meaning). Muhammad's illiteracy
was taken as a sign of the genuineness of his prophethood. For example, according to Fakhr al-Din al-Razi, if Muhammad
had mastered writing and reading he possibly would have been suspected of having studied the books of the ancestors.
Some scholars, such as Watt, prefer the second meaning of "ummi": they take it to indicate unfamiliarity with earlier
sacred texts. According to earlier transmitted reports, after the death of Muhammad in 632, a number of his companions who knew the Quran by heart were killed in battle against Musaylimah, and so the first caliph Abu Bakr (d. 634)
decided to collect the book in one volume so that it could be preserved. Zayd ibn Thabit (d. 655) was the person
to collect the Quran since "he used to write the Divine Inspiration for Allah's Apostle". Thus, a group of scribes,
most importantly Zayd, collected the verses and produced a hand-written manuscript of the complete book. The manuscript
according to Zayd remained with Abu Bakr until he died. Zayd's reaction to the task and the difficulties in collecting
the Quranic material from parchments, palm-leaf stalks, thin stones and from men who knew it by heart is recorded
in earlier narratives. After Abu Bakr, Hafsa bint Umar, Muhammad's widow, was entrusted with the manuscript. In about
650, the third Caliph Uthman ibn Affan (d. 656) began noticing slight differences in pronunciation of the Quran as
Islam expanded beyond the Arabian Peninsula into Persia, the Levant, and North Africa. In order to preserve the sanctity
of the text, he ordered a committee headed by Zayd to use Abu Bakr's copy and prepare a standard copy of the Quran.
Thus, within 20 years of Muhammad's death, the Quran was committed to written form. That text became the model from
which copies were made and promulgated throughout the urban centers of the Muslim world, and other versions are believed
to have been destroyed. The present form of the Quran text is accepted by Muslim scholars to be the original version
compiled by Abu Bakr. In 1972, in a mosque in the city of Sana'a, Yemen, manuscripts were discovered that were later
proved to be the most ancient Quranic text known to exist at the time. The Sana'a manuscripts contain palimpsests,
manuscript pages from which the text has been washed off to make the parchment reusable, a practice that was
common in ancient times due to scarcity of writing material. However, the faint washed-off underlying text (scriptio
inferior) is still barely visible and believed to be "pre-Uthmanic" Quranic content, while the text written on top
(scriptio superior) is believed to belong to Uthmanic time. Studies using radiocarbon dating indicate that the parchments
are dated to the period before 671 AD with a 99 percent probability. In 2015, fragments of a very early Quran, dating
back approximately 1,370 years, were discovered in the library of the University of Birmingham, England. According to the
tests carried out by Oxford University Radiocarbon Accelerator Unit, "with a probability of more than 95%, the parchment
was from between 568 and 645". The manuscript is written in Hijazi script, an early form of written Arabic. This
is possibly the earliest extant exemplar of the Quran, but as the tests allow a range of possible dates, it cannot
be said with certainty which of the existing versions is the oldest. Saudi scholar Saud al-Sarhan has expressed doubt
over the age of the fragments as they contain dots and chapter separators that are believed to have originated later.
Respect for the written text of the Quran is an important element of religious faith by many Muslims, and the Quran
is treated with reverence. Based on tradition and a literal interpretation of Quran 56:79 ("none shall touch but
those who are clean"), some Muslims believe that they must perform a ritual cleansing with water before touching
a copy of the Quran, although this view is not universal. Worn-out copies of the Quran are wrapped in a cloth and
stored indefinitely in a safe place, buried in a mosque or a Muslim cemetery, or burned and the ashes buried or scattered
over water. Inimitability of the Quran (or "I'jaz") is the belief that no human speech can match the Quran in its
content and form. The Quran is considered an inimitable miracle by Muslims, effective until the Day of Resurrection—and,
thereby, the central proof granted to Muhammad in authentication of his prophetic status. The concept of inimitability
originates in the Quran where in five different verses opponents are challenged to produce something like the Quran:
"If men and sprites banded together to produce the like of this Quran they would never produce its like not though
they backed one another." So the suggestion is that if there are doubts concerning the divine authorship of the Quran,
come forward and create something like it. From the ninth century, numerous works appeared which studied the Quran
and examined its style and content. Medieval Muslim scholars including al-Jurjani (d. 1078) and al-Baqillani (d.
1013) have written treatises on the subject, discussed its various aspects, and used linguistic approaches to study
the Quran. Others argue that the Quran contains noble ideas, has inner meanings, maintained its freshness through
the ages, and has caused great transformations at the individual level and in history. Some scholars state that the
Quran contains scientific information that agrees with modern science. The doctrine of miraculousness of the Quran
is further emphasized by Muhammad's illiteracy since the unlettered prophet could not have been suspected of composing
the Quran. The Quran consists of 114 chapters of varying lengths, each known as a sura. Suras are classified as Meccan
or Medinan, depending on whether the verses were revealed before or after the migration of Muhammad to the city of
Medina. However, a sura classified as Medinan may contain Meccan verses in it and vice versa. Sura titles are derived
from a name or quality discussed in the text, or from the first letters or words of the sura. Suras are arranged
roughly in order of decreasing size. The sura arrangement is thus not connected to the sequence of revelation. Each
sura except the ninth starts with the Bismillah (بسم الله الرحمن الرحيم), an Arabic phrase meaning "In the name of
God". There are, however, still 114 occurrences of the Bismillah in the Quran, due to its presence in Quran 27:30
as the opening of Solomon's letter to the Queen of Sheba. In addition to and independent of the division into suras,
there are various ways of dividing the Quran into parts of approximately equal length for convenience in reading.
The 30 juz' (plural ajzāʼ) can be used to read through the entire Quran in a month. Some of these parts are known
by names—which are the first few words by which the juzʼ starts. A juz' is sometimes further divided into two ḥizb
(plural aḥzāb), and each hizb subdivided into four rubʻ al-ahzab. The Quran is also divided into seven approximately
equal parts, manzil (plural manāzil), for it to be recited in a week. The Quranic content is concerned with basic
Islamic beliefs including the existence of God and the resurrection. Narratives of the early prophets, ethical and
legal subjects, historical events of Muhammad's time, charity and prayer also appear in the Quran. The Quranic verses
contain general exhortations regarding right and wrong and historical events are related to outline general moral
lessons. Verses pertaining to natural phenomena have been interpreted by Muslims as an indication of the authenticity
of the Quranic message. The doctrine of the last day and eschatology (the final fate of the universe) may be reckoned
as the second great doctrine of the Quran. It is estimated that approximately one-third of the Quran is eschatological,
dealing with the afterlife in the next world and with the day of judgment at the end of time. There is a reference
to the afterlife on most pages of the Quran and belief in the afterlife is often referred to in conjunction with
belief in God as in the common expression: "Believe in God and the last day". A number of suras such as 44, 56, 75,
78, 81 and 101 are directly related to the afterlife and its preparations. Some suras indicate the closeness of the
event and warn people to be prepared for the imminent day. For instance, the first verses of Sura 22, which deal
with the mighty earthquake and the situations of people on that day, represent this style of divine address: "O People!
Be respectful to your Lord. The earthquake of the Hour is a mighty thing." According to the Quran, God communicated
with man and made his will known through signs and revelations. Prophets, or 'Messengers of God', received revelations
and delivered them to humanity. The message has been identical and for all humankind. "Nothing is said to you that
was not said to the messengers before you, that your lord has at his Command forgiveness as well as a most Grievous
Penalty." The revelation does not come directly from God to the prophets. Angels acting as God's messengers deliver
the divine revelation to them. This comes out in Quran 42:51, in which it is stated: "It is not for any mortal that
God should speak to them, except by revelation, or from behind a veil, or by sending a messenger to reveal by his
permission whatsoever He will." Belief is a fundamental aspect of morality in the Quran, and scholars have tried
to determine the semantic contents of "belief" and "believer" in the Quran. The ethico-legal concepts and exhortations
dealing with righteous conduct are linked to a profound awareness of God, thereby emphasizing the importance of faith,
accountability, and the belief in each human's ultimate encounter with God. People are invited to perform acts of
charity, especially for the needy. Believers who "spend of their wealth by night and by day, in secret and in public"
are promised that they "shall have their reward with their Lord; on them shall be no fear, nor shall they grieve".
It also affirms family life by legislating on matters of marriage, divorce, and inheritance. A number of practices,
such as usury and gambling, are prohibited. The Quran is one of the fundamental sources of Islamic law (sharia).
Some formal religious practices receive significant attention in the Quran including the formal prayers (salat) and
fasting in the month of Ramadan. As for the manner in which the prayer is to be conducted, the Quran refers to prostration.
The term for charity, zakat, literally means purification. Charity, according to the Quran, is a means of self-purification.
The astrophysicist Nidhal Guessoum, while highly critical of pseudo-scientific claims made about the Quran,
has highlighted the encouragement for the sciences that the Quran provides by developing "the concept of knowledge".
He writes: "The Qur'an draws attention to the danger of conjecturing without evidence (And follow not that of which
you have not the (certain) knowledge of... 17:36) and in several different verses asks Muslims to require proofs
(Say: Bring your proof if you are truthful 2:111), both in matters of theological belief and in natural science."
Guessoum cites Ghaleb Hasan on the definition of "proof" according to the Quran being "clear and strong... convincing
evidence or argument." Also, such a proof cannot rely on an argument from authority, citing verse 5:104. Lastly,
both assertions and rejections require a proof, according to verse 4:174. Ismail al-Faruqi and Taha Jabir Alalwani
are of the view that any reawakening of the Muslim civilization must start with the Quran; however, the biggest obstacle
on this route is the "centuries old heritage of tafseer (exegesis) and other classical disciplines" which inhibit
a "universal, epistemological and systematic conception" of the Quran's message. The philosopher Muhammad Iqbal
considered the Quran's methodology and epistemology to be empirical and rational. It is generally accepted that there
are around 750 verses in the Quran dealing with natural phenomena. In many of these verses the study of nature is
"encouraged and highly recommended," and historical Islamic scientists like Al-Biruni and Al-Battani derived their
inspiration from verses of the Quran. Mohammad Hashim Kamali has stated that "scientific observation, experimental
knowledge and rationality" are the primary tools with which humanity can achieve the goals laid out for it in the
Quran. Ziauddin Sardar built a case for Muslims having developed the foundations of modern science, by highlighting
the repeated calls of the Quran to observe and reflect upon natural phenomena. "The 'scientific method,' as it is
understood today, was first developed by Muslim scientists" like Ibn al-Haytham and Al-Biruni, along with numerous
other Muslim scientists. The physicist Abdus Salam, in his Nobel Prize banquet address, quoted a well known verse
from the Quran (67:3-4) and then stated: "This in effect is the faith of all physicists: the deeper we seek, the
more is our wonder excited, the more is the dazzlement of our gaze". One of Salam's core beliefs was that there is
no contradiction between Islam and the discoveries that science allows humanity to make about nature and the universe.
Salam also held the opinion that the Quran and the Islamic spirit of study and rational reflection was the source
of extraordinary civilizational development. Salam highlights, in particular, the work of Ibn al-Haytham and Al-Biruni
as the pioneers of empiricism who introduced the experimental approach, breaking away from Aristotle's influence,
and thus giving birth to modern science. Salam was also careful to differentiate between metaphysics and physics,
and advised against empirically probing certain matters on which "physics is silent and will remain so," such as
the doctrine of "creation from nothing" which in Salam's view is outside the limits of science and thus "gives way"
to religious considerations. The language of the Quran has been described as "rhymed prose" as it partakes of both
poetry and prose; however, this description runs the risk of failing to convey the rhythmic quality of Quranic language,
which is more poetic in some parts and more prose-like in others. Rhyme, while found throughout the Quran, is conspicuous
in many of the earlier Meccan suras, in which relatively short verses throw the rhyming words into prominence. The
effectiveness of such a form is evident for instance in Sura 81, and there can be no doubt that these passages impressed
the conscience of the hearers. Frequently a change of rhyme from one set of verses to another signals a change in
the subject of discussion. Later sections also preserve this form but the style is more expository. The Quranic text
seems to have no beginning, middle, or end, its nonlinear structure being akin to a web or net. The textual arrangement
is sometimes considered to exhibit lack of continuity, absence of any chronological or thematic order and repetitiousness.
Michael Sells, citing the work of the critic Norman O. Brown, acknowledges Brown's observation that the seeming disorganization
of Quranic literary expression – its scattered or fragmented mode of composition in Sells's phrase – is in fact a
literary device capable of delivering profound effects as if the intensity of the prophetic message were shattering
the vehicle of human language in which it was being communicated. Sells also addresses the much-discussed repetitiveness
of the Quran, seeing this, too, as a literary device. A text is self-referential when it speaks about itself and
makes reference to itself. According to Stefan Wild, the Quran demonstrates this metatextuality by explaining, classifying,
interpreting and justifying the words to be transmitted. Self-referentiality is evident in those passages where the
Quran refers to itself as revelation (tanzil), remembrance (dhikr), news (naba'), criterion (furqan) in a self-designating
manner (explicitly asserting its Divinity, "And this is a blessed Remembrance that We have sent down; so are you
now denying it?"), or in the frequent appearance of the "Say" tags, when Muhammad is commanded to speak (e.g., "Say:
'God's guidance is the true guidance' ", "Say: 'Would you then dispute with us concerning God?' "). According to
Wild the Quran is highly self-referential. The feature is more evident in early Meccan suras. Tafsir is one of the
earliest academic activities of Muslims. According to the Quran, Muhammad was the first person who described the
meanings of verses for early Muslims. Other early exegetes included a few Companions of Muhammad, like ʻAli ibn Abi
Talib, ʻAbdullah ibn Abbas, ʻAbdullah ibn Umar and Ubayy ibn Kaʻb. Exegesis in those days was confined to the explanation
of literary aspects of the verse, the background of its revelation and, occasionally, interpretation of one verse
with the help of the other. If the verse was about a historical event, then sometimes a few traditions (hadith) of
Muhammad were narrated to make its meaning clear. Because the Quran is spoken in classical Arabic, many of the later
converts to Islam (mostly non-Arabs) did not always understand Quranic Arabic; they did not catch allusions that
were clear to early Muslims fluent in Arabic, and they were concerned with reconciling apparent conflicts of themes
in the Quran. Commentators erudite in Arabic explained the allusions and, perhaps most importantly, explained which
Quranic verses had been revealed early in Muhammad's prophetic career, as being appropriate to the very earliest
Muslim community, and which had been revealed later, canceling out or "abrogating" (nāsikh) the earlier text (mansūkh).
Other scholars, however, maintain that no abrogation has taken place in the Quran. The Ahmadiyya Muslim Community
has published a ten-volume Urdu commentary on the Quran, with the name Tafseer e Kabir. Esoteric or Sufi interpretation
attempts to unveil the inner meanings of the Quran. Sufism moves beyond the apparent (zahir) point of the verses
and instead relates Quranic verses to the inner or esoteric (batin) and metaphysical dimensions of consciousness
and existence. According to Sands, esoteric interpretations are more suggestive than declarative: they are allusions
(isharat) rather than explanations (tafsir). They indicate possibilities as much as they demonstrate the insights
of each writer. Moses, in 7:143, comes the way of those who are in love; he asks for a vision, but his desire is denied,
and he is made to suffer by being commanded to look at other than the Beloved while the mountain is able to see God.
The mountain crumbles and Moses faints at the sight of God's manifestation upon the mountain. In Qushayri's words,
Moses came like thousands of men who traveled great distances, and there was nothing left to Moses of Moses. In that
state of annihilation from himself, Moses was granted the unveiling of the realities. From the Sufi point of view,
God is always the Beloved, and the wayfarer's longing and suffering lead to realization of the truths. One of
the notable authors of esoteric interpretation prior to the 12th century is Sulami (d. 1021) without whose work the
majority of very early Sufi commentaries would not have been preserved. Sulami's major commentary is a book named
haqaiq al-tafsir ("Truths of Exegesis") which is a compilation of commentaries of earlier Sufis. From the 11th century
onwards several other works appear, including commentaries by Qushayri (d. 1074), Daylami (d. 1193), Shirazi (d.
1209) and Suhrawardi (d. 1234). These works include material from Sulami's books plus the author's contributions.
Many works are written in Persian, such as Maybudi's (d. 1135) Kashf al-asrar ("The Unveiling of the Secrets").
Rumi (d. 1273) wrote a vast amount of mystical poetry in his book Mathnawi. Rumi makes heavy use of the Quran in
his poetry, a feature that is sometimes omitted in translations of Rumi's work. A large number of Quranic passages
can be found in the Mathnawi, which some consider a kind of Sufi interpretation of the Quran. Rumi's book is not exceptional
for containing citations from and elaboration on the Quran; Rumi does, however, mention the Quran more frequently. Simnani
(d. 1336) wrote two influential works of esoteric exegesis on the Quran. He reconciled notions of God's manifestation
through and in the physical world with the sentiments of Sunni Islam. Comprehensive Sufi commentaries appear in the
18th century such as the work of Ismail Hakki Bursevi (d. 1725). His work ruh al-Bayan (the Spirit of Elucidation)
is a voluminous exegesis. Written in Arabic, it combines the author's own ideas with those of his predecessors (notably
Ibn Arabi and Ghazali), all woven together with verses of the Persian poet Hafiz. Commentaries dealing with the zahir (outward
aspects) of the text are called tafsir, and hermeneutic and esoteric commentaries dealing with the batin are called
ta'wil ("interpretation" or "explanation"), which involves taking the text back to its beginning. Commentators with
an esoteric slant believe that the ultimate meaning of the Quran is known only to God. In contrast, Quranic literalism,
followed by Salafis and Zahiris, is the belief that the Quran should only be taken at its apparent meaning.[citation
needed] The first fully attested complete translations of the Quran were done between the 10th and 12th centuries
in Persian. The Samanid king, Mansur I (961-976), ordered a group of scholars from Khorasan to translate the Tafsir
al-Tabari, originally in Arabic, into Persian. Later in the 11th century, one of the students of Abu Mansur Abdullah
al-Ansari wrote a complete tafsir of the Quran in Persian. In the 12th century, Najm al-Din Abu Hafs al-Nasafi translated
the Quran into Persian. The manuscripts of all three books have survived and have been published several times.[citation
needed] Robert of Ketton's 1143 translation of the Quran for Peter the Venerable, Lex Mahumet pseudoprophete, was
the first into a Western language (Latin). Alexander Ross offered the first English version in 1649, from the French
translation of L'Alcoran de Mahomet (1647) by Andre du Ryer. In 1734, George Sale produced the first scholarly translation
of the Quran into English; another was produced by Richard Bell in 1937, and yet another by Arthur John Arberry in
1955. All these translators were non-Muslims. There have been numerous translations by Muslims. The Ahmadiyya Muslim
Community has published translations of the Quran in 50 different languages besides a five-volume English commentary
and an English translation of the Quran. The proper recitation of the Quran is the subject of a separate discipline
named tajwid which determines in detail how the Quran should be recited, how each individual syllable is to be pronounced,
the need to pay attention to the places where there should be a pause, to elisions, where the pronunciation should
be long or short, where letters should be sounded together and where they should be kept separate, etc. It may be
said that this discipline studies the laws and methods of the proper recitation of the Quran and covers three main
areas: the proper pronunciation of consonants and vowels (the articulation of the Quranic phonemes), the rules of
pause in recitation and of resumption of recitation, and the musical and melodious features of recitation. Vocalization
markers indicating specific vowel sounds were introduced into the Arabic language by the end of the 9th century.
The first Quranic manuscripts lacked these marks, therefore several recitations remain acceptable. The variation
in readings of the text permitted by the nature of the defective vocalization led to an increase in the number of
readings during the 10th century. The 10th-century Muslim scholar from Baghdad, Ibn Mujāhid, is famous for establishing
seven acceptable textual readings of the Quran. He studied various readings and their trustworthiness and chose seven
8th-century readers from the cities of Mecca, Medina, Kufa, Basra and Damascus. Ibn Mujahid did not explain why he
chose seven readers, rather than six or ten, but this may be related to a prophetic tradition (Muhammad's saying)
reporting that the Quran had been revealed in seven "ahruf" (meaning seven letters or modes). Today, the most popular
readings are those transmitted by Ḥafṣ (d.796) and Warsh (d. 812) which are according to two of Ibn Mujahid's reciters,
Aasim ibn Abi al-Najud (Kufa, d. 745) and Nafi‘ al-Madani (Medina, d. 785), respectively. The influential standard
Quran of Cairo (1924) uses an elaborate system of modified vowel-signs and a set of additional symbols for minute
details and is based on ʻAsim's recitation, the 8th-century recitation of Kufa. This edition has become the standard
for modern printings of the Quran. Before printing was widely adopted in the 19th century, the Quran was transmitted
in manuscripts made by calligraphers and copyists. The earliest manuscripts were written in Ḥijāzī-type script. The
Hijazi-style manuscripts confirm that transmission of the Quran in writing began at an early stage.
Probably in the ninth century, scripts began to feature thicker strokes, which are traditionally known as Kufic scripts.
Toward the end of the ninth century, new scripts began to appear in copies of the Quran and replace earlier scripts.
The reason for discontinuation in the use of the earlier style was that it took too long to produce and the demand
for copies was increasing. Copyists therefore chose simpler writing styles. Beginning in the 11th century,
the styles of writing employed were primarily the naskh, muhaqqaq, rayḥānī and, on rarer occasions, the thuluth script.
Naskh was in very widespread use. In North Africa and Spain, the Maghribī style was popular. More distinct is the
Bihari script, which was used solely in the north of India. The Nastaʻlīq style was also rarely used in the Persian world.
In the beginning, the Quran did not have vocalization markings. The system of vocalization, as we know it today,
seems to have been introduced towards the end of the ninth century. Since it would have been too costly for most
Muslims to purchase a manuscript, copies of the Quran were held in mosques in order to make them accessible to people.
These copies frequently took the form of a series of 30 parts or juzʼ. In terms of productivity, the Ottoman copyists
provide the best example. This was in response to widespread demand, the unpopularity of printing methods, and aesthetic
reasons. According to Sahih al-Bukhari, the Quran was recited among Levantines and Iraqis, and discussed by Christians
and Jews, before it was standardized. Its language was similar to the Syriac language.[citation needed] The Quran
recounts stories of many of the people and events recounted in Jewish and Christian sacred books (Tanakh, Bible)
and devotional literature (Apocrypha, Midrash), although it differs in many details. Adam, Enoch, Noah, Eber, Shelah,
Abraham, Lot, Ishmael, Isaac, Jacob, Joseph, Job, Jethro, David, Solomon, Elijah, Elisha, Jonah, Aaron, Moses, Zechariah,
John the Baptist and Jesus are mentioned in the Quran as prophets of God (see Prophets of Islam). In fact, Moses
is mentioned more in the Quran than any other individual. Jesus is mentioned more often in the Quran than Muhammad,
while Mary is mentioned in the Quran more often than in the New Testament. Muslims believe the common elements or resemblances
between the Bible and other Jewish and Christian writings and Islamic dispensations are due to their common divine
source,[citation needed] and that the original Christian or Jewish texts were authentic divine revelations given
to prophets. According to Tabatabaei, there are acceptable and unacceptable esoteric interpretations. Acceptable
ta'wil refers not to the literal meaning of a verse but to its implicit meaning, which ultimately
is known only to God and cannot be comprehended directly through human thought alone. The verses in question here
refer to the human qualities of coming, going, sitting, satisfaction, anger and sorrow, which are apparently attributed
to God. Unacceptable ta'wil is where one "transfers" the apparent meaning of a verse to a different meaning by means
of a proof; this method is not without obvious inconsistencies. Although this unacceptable ta'wil has gained considerable
acceptance, it is incorrect and cannot be applied to the Quranic verses. The correct interpretation is the reality
a verse refers to. It is found in all verses, the decisive and the ambiguous alike; it is not a sort of meaning
of the word; it is a fact that is too sublime for words. God has dressed them with words to bring them a bit nearer
to our minds; in this respect they are like proverbs that are used to create a picture in the mind, and thus help
the hearer to clearly grasp the intended idea. The Quran most likely existed in scattered written form during Muhammad's
lifetime. Several sources indicate that during Muhammad's lifetime a large number of his companions had memorized
the revelations. Early commentaries and Islamic historical sources support the above-mentioned understanding of the
Quran's early development. The Quran in its present form is generally considered by academic scholars to record the
words spoken by Muhammad because the search for variants has not yielded any differences of great significance.[page
needed] University of Chicago professor Fred Donner states that "...there was a very early attempt to establish a
uniform consonantal text of the Qurʾān from what was probably a wider and more varied group of related texts in early
transmission. [...] After the creation of this standardized canonical text, earlier authoritative texts were suppressed,
and all extant manuscripts—despite their numerous variants—seem to date to a time after this standard consonantal
text was established." Although most variant readings of the text of the Quran have ceased to be transmitted, some
still are. There has been no critical text produced on which a scholarly reconstruction of the Quranic text could
be based. Historically, controversy over the Quran's content has rarely become an issue, although debates continue
on the subject. Sahih al-Bukhari narrates Muhammad describing the revelations as, "Sometimes it is (revealed) like
the ringing of a bell" and Aisha reported, "I saw the Prophet being inspired Divinely on a very cold day and noticed
the sweat dropping from his forehead (as the Inspiration was over)." Muhammad's first revelation, according to the
Quran, was accompanied with a vision. The agent of revelation is mentioned as the "one mighty in power", the one
who "grew clear to view when he was on the uppermost horizon. Then he drew nigh and came down till he was (distant)
two bows' length or even nearer." The Islamic studies scholar Welch states in the Encyclopaedia of Islam that he
believes the graphic descriptions of Muhammad's condition at these moments may be regarded as genuine, because he
was severely disturbed after these revelations. According to Welch, these seizures would have been seen by those
around him as convincing evidence for the superhuman origin of Muhammad's inspirations. However, Muhammad's critics
accused him of being a possessed man, a soothsayer or a magician since his experiences were similar to those claimed
by such figures well known in ancient Arabia. Welch additionally states that it remains uncertain whether these experiences
occurred before or after Muhammad's initial claim of prophethood. Muhammad Husayn Tabatabaei says that according
to the popular explanation among the later exegetes, ta'wil indicates the particular meaning a verse is directed
towards. The meaning of revelation (tanzil), as opposed to ta'wil, is clear in its accordance to the obvious meaning
of the words as they were revealed. But this explanation has become so widespread that, at present, it has become
the primary meaning of ta'wil, which originally meant "to return" or "the returning place". In Tabatabaei's view,
what has been rightly called ta'wil, or hermeneutic interpretation of the Quran, is not concerned simply with the
denotation of words. Rather, it is concerned with certain truths and realities that transcend the comprehension of
the common run of men; yet it is from these truths and realities that the principles of doctrine and the practical
injunctions of the Quran issue forth. Interpretation is not the meaning of the verse—rather it transpires through
that meaning, in a special sort of transpiration. There is a spiritual reality—which is the main objective of ordaining
a law, or the basic aim in describing a divine attribute—and then there is an actual significance that a Quranic
story refers to. According to Shia beliefs, those who are firmly rooted in knowledge like Muhammad and the imams
know the secrets of the Quran. According to Tabatabaei, the statement "none knows its interpretation except God"
remains valid, without any opposing or qualifying clause. Therefore, so far as this verse is concerned, the knowledge
of the Quran's interpretation is reserved for God. But Tabatabaei uses other verses and concludes that those who
are purified by God know the interpretation of the Quran to a certain extent.
From 1989 through 1996, the total area of the US was listed as 9,372,610 km2 (3,618,780 sq mi) (land + inland water only).
The listed total area changed to 9,629,091 km2 (3,717,813 sq mi) in 1997 (Great Lakes area and coastal waters added),
to 9,631,418 km2 (3,718,711 sq mi) in 2004, to 9,631,420 km2 (3,718,710 sq mi) in 2006, and to 9,826,630 km2 (3,794,080
sq mi) in 2007 (territorial waters added). Currently, the CIA World Factbook gives 9,826,675 km2 (3,794,100 sq mi),
the United Nations Statistics Division gives 9,629,091 km2 (3,717,813 sq mi), and the Encyclopædia Britannica gives
9,522,055 km2 (3,676,486 sq mi) (Great Lakes area included but not coastal waters). These sources consider only the
50 states and the Federal District, and exclude overseas territories. By total area (water as well as land), the
United States is either slightly larger or smaller than the People's Republic of China, making it the world's third
or fourth largest country. China and the United States are smaller than Russia and Canada in total area, but are
larger than Brazil. By land area only (exclusive of waters), the United States is the world's third largest country,
after Russia and China, with Canada in fourth. Whether the US or China is the third largest country by total area
depends on two factors: (1) The validity of China's claim on Aksai Chin and Trans-Karakoram Tract. Both these territories
are also claimed by India, so are not counted; and (2) how the US calculates its own surface area. Since the initial
publication of the World Factbook, the CIA has updated the total area of the United States a number of times. The United
States shares land borders with Canada (to the north) and Mexico (to the south), and a territorial water border with
Russia in the northwest, and two territorial water borders in the southeast between Florida and Cuba, and Florida
and the Bahamas. The contiguous forty-eight states are otherwise bounded by the Pacific Ocean on the west, the Atlantic
Ocean on the east, and the Gulf of Mexico to the southeast. Alaska borders the Pacific Ocean to the south, the Bering
Strait to the west, and the Arctic Ocean to the north, while Hawaii lies far to the southwest of the mainland in
the Pacific Ocean. The capital city, Washington, District of Columbia, is a federal district located on land donated
by the state of Maryland. (Virginia had also donated land, but it was returned in 1849.) The United States also has
overseas territories with varying levels of independence and organization: in the Caribbean the territories of Puerto
Rico and the U.S. Virgin Islands, and in the Pacific the inhabited territories of Guam, American Samoa, and the Northern
Mariana Islands, along with a number of uninhabited island territories. The five Great Lakes are located in the north-central
portion of the country, four of them forming part of the border with Canada, with only Lake Michigan situated entirely
within the United States. The southeastern United States contains subtropical forests and, near the Gulf Coast, mangrove
wetlands, especially in Florida. West of the Appalachians lies the Mississippi River basin and two large eastern
tributaries, the Ohio River and the Tennessee River. The Ohio and Tennessee Valleys and the Midwest consist largely
of rolling hills and productive farmland, stretching south to the Gulf Coast. The Great Plains lie west of the Mississippi
River and east of the Rocky Mountains. A large portion of the country's agricultural products are grown in the Great
Plains. Before their general conversion to farmland, the Great Plains were noted for their extensive grasslands,
from tallgrass prairie in the eastern plains to shortgrass steppe in the western High Plains. Elevation rises gradually
from less than a few hundred feet near the Mississippi River to more than a mile high in the High Plains. The generally
low relief of the plains is broken in several places, most notably in the Ozark and Ouachita Mountains, which form
the U.S. Interior Highlands, the only major mountainous region between the Rocky Mountains and the Appalachian Mountains.
The Great Plains come to an abrupt end at the Rocky Mountains. The Rocky Mountains form a large portion of the Western
U.S., entering from Canada and stretching nearly to Mexico. The Rocky Mountain region is the highest region of the
United States by average elevation. The Rocky Mountains generally contain fairly mild slopes and wider peaks compared
to some of the other great mountain ranges, with a few exceptions (such as the Teton Mountains in Wyoming and the
Sawatch Range in Colorado). The highest peaks of the Rockies are found in Colorado, the tallest peak being Mount
Elbert at 14,440 ft (4,400 m). The Rocky Mountains contain some of the most spectacular and well-known scenery in
the world. In addition, instead of forming one generally continuous and solid mountain range, the Rockies are broken up into
a number of smaller, intermittent mountain ranges, creating a large series of basins and valleys. West of the Rocky
Mountains lie the Intermontane Plateaus (also known as the Intermountain West), a large, arid desert region lying between
the Rockies and the Cascades and Sierra Nevada ranges. The large southern portion, known as the Great Basin, consists
of salt flats, drainage basins, and many small north-south mountain ranges. The Southwest is predominantly a low-lying
desert region. A portion known as the Colorado Plateau, centered around the Four Corners region, is considered to
have some of the most spectacular scenery in the world. It is accentuated in such national parks as Grand Canyon,
Arches, Mesa Verde National Park and Bryce Canyon, among others. Other smaller Intermontane areas include the Columbia
Plateau covering eastern Washington, western Idaho and northeast Oregon and the Snake River Plain in Southern Idaho.
The Intermontane Plateaus come to an end at the Cascade Range and the Sierra Nevada. The Cascades consist of largely
intermittent, volcanic mountains, many rising prominently from the surrounding landscape. The Sierra Nevada, further
south, is a high, rugged, and dense mountain range. It contains the highest point in the contiguous 48 states, Mount
Whitney (14,505 ft or 4,421 m). It is located on the boundary between California's Inyo and Tulare counties, just
84.6 mi or 136.2 km west-northwest of the lowest point in North America at the Badwater Basin in Death Valley National
Park at 279 ft or 85 m below sea level. These areas contain some spectacular scenery as well, as evidenced by such
national parks as Yosemite and Mount Rainier. West of the Cascades and Sierra Nevada is a series of valleys, such
as the Central Valley in California and the Willamette Valley in Oregon. Along the coast is a series of low mountain
ranges known as the Pacific Coast Ranges. Much of the Pacific Northwest coast is inhabited by some of the densest
vegetation outside of the Tropics, and also the tallest trees in the world (the Redwoods). The Atlantic coast of
the United States is low, with minor exceptions. The Appalachian Highland owes its oblique northeast-southwest trend
to crustal deformations which, in very early geological time, gave rise to what later became the Appalachian
mountain system. This system reached its climax of deformation so long ago (probably in Permian time) that it has since
then been very generally reduced to moderate or low relief. It owes its present-day altitude either to renewed elevations
along the earlier lines or to the survival of the most resistant rocks as residual mountains. The oblique trend of
this coast would be even more pronounced but for a comparatively modern crustal movement, causing a depression in
the northeast resulting in an encroachment of the sea upon the land. Additionally, the southeastern section has undergone
an elevation resulting in the advance of the land upon the sea. The east coast Appalachian system, originally forest
covered, is relatively low and narrow and is bordered on the southeast and south by an important coastal plain. The
Cordilleran system on the western side of the continent is lofty, broad, and complicated, having two branches, the
Rocky Mountain System and the Pacific Mountain System. Between these mountain systems lie the Intermontane Plateaus.
Both the Columbia River and Colorado River rise far inland near the easternmost members of the Cordilleran system,
and flow through plateaus and intermontane basins to the ocean. Heavy forests cover the northwest coast, but elsewhere
trees are found only on the higher ranges below the Alpine region. The intermontane valleys, plateaus and basins
range from treeless to desert with the most arid region being in the southwest. The Laurentian Highlands, the Interior
Plains and the Interior Highlands lie between the two coasts, stretching from the Gulf of Mexico northward, far beyond
the national boundary, to the Arctic Ocean. The central plains are divided by a hardly perceptible height of land
into a Canadian and a United States portion. It is from the United States side that the great Mississippi system
discharges southward to the Gulf of Mexico. The upper Mississippi and some of the Ohio basin is the semi-arid prairie
region, with trees originally only along the watercourses. The uplands towards the Appalachians were included in
the great eastern forested area, while the western part of the plains has so dry a climate that its native plant
life is scanty, and in the south it is practically barren. Due to its large size and wide range of geographic features,
the United States contains examples of nearly every global climate. The climate is temperate in most areas, subtropical
in the Southern United States, tropical in Hawaii and southern Florida, polar in Alaska, semiarid in the Great Plains
west of the 100th meridian, Mediterranean in coastal California and arid in the Great Basin. Its comparatively favorable
agricultural climate contributed (in part) to the country's rise as a world power, with infrequent severe drought
in the major agricultural regions, a general lack of widespread flooding, and a mainly temperate climate that receives
adequate precipitation. The Great Basin and Columbia Plateau (the Intermontane Plateaus) are arid or semiarid regions
that lie in the rain shadow of the Cascades and Sierra Nevada. Precipitation averages less than 15 inches (38 cm).
The Southwest is a hot desert, with temperatures exceeding 100 °F (37.8 °C) for several weeks at a time in summer.
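The paired customary and metric figures quoted throughout this section (for example, 100 °F and 37.8 °C, or 15 inches and 38 cm) can be checked with one-line conversions; this is just an illustrative sketch, not part of any cited source:

```python
# Simple unit conversions for checking the paired figures in the text.

def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32) * 5 / 9

def inches_to_cm(inches):
    """Convert inches to centimetres (1 in = 2.54 cm exactly)."""
    return inches * 2.54

print(round(f_to_c(100), 1))       # 37.8, matching the quoted Southwest summer figure
print(round(inches_to_cm(15), 1))  # 38.1, matching the "15 inches (38 cm)" figure
```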
The Southwest and the Great Basin are also affected by the monsoon from the Gulf of California from July to September,
which brings localized but often severe thunderstorms to the region. Much of California consists of a Mediterranean
climate, with sometimes heavy rainfall from October to April and nearly no rain the rest of the year. In the Pacific
Northwest rain falls year-round, but is much heavier during winter and spring. The mountains of the west receive
abundant precipitation and very heavy snowfall. The Cascades are one of the snowiest places in the world, with some
places averaging over 600 inches (1,524 cm) of snow annually, but the lower elevations closer to the coast receive
very little snow. On average, the mountains of the western states receive the highest levels of snowfall on Earth.
The greatest annual snowfall level is at Mount Rainier in Washington, at 692 inches (1,758 cm); the record there
was 1,122 inches (2,850 cm) in the winter of 1971–72. This record was broken by the Mt. Baker Ski Area in northwestern
Washington, which reported 1,140 inches (2,896 cm) of snowfall for the 1998–99 season. Other places with
significant snowfall outside the Cascade Range are the Wasatch Mountains, near the Great Salt Lake, the San Juan
Mountains in Colorado, and the Sierra Nevada, near Lake Tahoe. In the east, while snowfall does not approach western
levels, the region near the Great Lakes and the mountains of the Northeast receive the most. Along the northwestern
Pacific coast, rainfall is greater than anywhere else in the continental U.S., with Quinault Rainforest in Washington
having an average of 137 inches (348 cm). Hawaii receives even more, with 460 inches (1,168 cm) measured annually
on Mount Waialeale, in Kauai. The Mojave Desert, in the southwest, is home to the driest locale in the U.S. Yuma,
Arizona, has an average of 2.63 inches (6.7 cm) of precipitation each year. In central portions of the U.S., tornadoes
are more common than anywhere else on Earth and touch down most commonly in the spring and summer. Deadly and destructive
hurricanes occur almost every year along the Atlantic seaboard and the Gulf of Mexico. The Appalachian region and
the Midwest experience the worst floods, though virtually no area in the U.S. is immune to flooding. The Southwest
has the worst droughts; one is thought to have lasted over 500 years and to have hurt Ancestral Pueblo peoples. The
West is affected by large wildfires each year. Occasional severe flooding also occurs, as in the Great Mississippi
Flood of 1927, the Great Flood of 1993, and the widespread flooding and mudslides caused by the 1982–83 El Niño event
in the western United States. Localized flooding can, however, occur anywhere, and mudslides from heavy rain can
cause problems in any mountainous area, particularly the Southwest. Large stretches of desert shrub in the west can
fuel the spread of wildfires. The narrow canyons of many mountain areas in the west and severe thunderstorm activity
during the summer lead to sometimes devastating flash floods as well, while Nor'Easter snowstorms can bring activity
to a halt throughout the Northeast (although heavy snowstorms can occur almost anywhere). The West Coast of the continental
United States and areas of Alaska (including the Aleutian Islands, the Alaskan Peninsula and southern Alaskan coast)
make up part of the Pacific Ring of Fire, an area of heavy tectonic and volcanic activity that is the source of 90%
of the world's earthquakes. The American Northwest sees the highest concentration of active volcanoes
in the United States, in Washington, Oregon and northern California along the Cascade Mountains. There are several
active volcanoes located in the islands of Hawaii, including Kilauea in ongoing eruption since 1983, but they do
not typically adversely affect the inhabitants of the islands. There has not been a major life-threatening eruption
on the Hawaiian islands since the 17th century. Volcanic eruptions can occasionally be devastating, such as in the
1980 eruption of Mount St. Helens in Washington.
Compact Disc (CD) is a digital optical disc data storage format. The format was originally developed to store and play only
sound recordings but was later adapted for storage of data (CD-ROM). Several other formats were further derived from
these, including write-once audio and data storage (CD-R), rewritable media (CD-RW), Video Compact Disc (VCD), Super
Video Compact Disc (SVCD), Photo CD, PictureCD, CD-i, and Enhanced Music CD. Audio CDs and audio CD players have
been commercially available since October 1982. In 2004, worldwide sales of audio CDs, CD-ROMs and CD-Rs reached
about 30 billion discs. By 2007, 200 billion CDs had been sold worldwide. CDs are increasingly being replaced by
other forms of digital storage and distribution, with the result that audio CD sales rates in the U.S. have dropped
about 50% from their peak; however, they remain one of the primary distribution methods for the music industry. In
2014, revenues from digital music services matched those from physical format sales for the first time. The Compact
Disc is an evolution of LaserDisc technology, where a focused laser beam is used that enables the high information
density required for high-quality digital audio signals. Prototypes were developed by Philips and Sony independently
in the late 1970s. In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio
disc. After a year of experimentation and discussion, the Red Book CD-DA standard was published in 1980. After their
commercial release in 1982, compact discs and their players were extremely popular. Despite costing up to $1,000,
over 400,000 CD players were sold in the United States between 1983 and 1984. The success of the compact disc has
been credited to the cooperation between Philips and Sony, who came together to agree upon and develop compatible
hardware. The unified design of the compact disc allowed consumers to purchase any disc or player from any company,
and allowed the CD to dominate the at-home music market unchallenged. In 1974, Lou Ottens, director of the audio division
of Philips, started a small group with the aim to develop an analog optical audio disc with a diameter of 20 cm and
a sound quality superior to that of the vinyl record. However, due to the unsatisfactory performance of the analog
format, two Philips research engineers recommended a digital format in March 1974. In 1977, Philips then established
a laboratory with the mission of creating a digital audio disc. The diameter of Philips's prototype compact disc
was set at 11.5 cm, the diagonal of an audio cassette. Heitaro Nakajima, who developed an early digital audio recorder
within Japan's national public broadcasting organization NHK in 1970, became general manager of Sony's audio department
in 1971. His team developed a digital PCM adaptor audio tape recorder using a Betamax video recorder in 1973. After
this, in 1974 the leap to storing digital audio on an optical disc was easily made. Sony first publicly demonstrated
an optical digital audio disc in September 1976. A year later, in September 1977, Sony showed the press a 30 cm disc
that could play 60 minutes of digital audio (44,100 Hz sampling rate and 16-bit resolution) using MFM modulation.
In September 1978, the company demonstrated an optical digital audio disc with a 150-minute playing time, 44,056
Hz sampling rate, 16-bit linear resolution, and cross-interleaved error correction code—specifications similar to
those later settled upon for the standard Compact Disc format in 1980. Technical details of Sony's digital audio
disc were presented during the 62nd AES Convention, held on 13–16 March 1979, in Brussels. Sony's AES technical paper
was published on 1 March 1979. A week later, on 8 March, Philips publicly demonstrated a prototype of an optical
digital audio disc at a press conference called "Philips Introduce Compact Disc" in Eindhoven, Netherlands. As a
result, in 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. Led
by engineers Kees Schouhamer Immink and Toshitada Doi, the research pushed forward laser and optical disc technology.
After a year of experimentation and discussion, the task force produced the Red Book CD-DA standard. First published
in 1980, the standard was formally adopted by the IEC as an international standard in 1987, with various amendments
becoming part of the standard in 1996. The format's October 1982 launch in Japan was followed in March 1983 by the introduction of CD players
and discs to Europe and North America (where CBS Records released sixteen titles). This event is often seen as the
"Big Bang" of the digital audio revolution. The new audio disc was enthusiastically received, especially in the early-adopting
classical music and audiophile communities, and its handling quality received particular praise. As the price of
players gradually came down, and with the introduction of the portable Walkman the CD began to gain popularity in
the larger popular and rock music markets. The first artist to sell a million copies on CD was Dire Straits, with
their 1985 album Brothers in Arms. The first major artist to have his entire catalogue converted to CD was David
Bowie, whose 15 studio albums were made available by RCA Records in February 1985, along with four greatest hits
albums. In 1988, 400 million CDs were manufactured by 50 pressing plants around the world. The CD was planned to
be the successor of the gramophone record for playing music, rather than primarily as a data storage medium. From
its origins as a musical format, CDs have grown to encompass other applications. In 1983, following the CD's introduction,
Immink and Braat presented the first experiments with erasable compact discs during the 73rd AES Convention. In June
1985, the computer-readable CD-ROM (read-only memory) and, in 1990, CD-Recordable were introduced, also developed
by both Sony and Philips. Recordable CDs were a new alternative to tape for recording music and copying music albums
without the defects introduced by the compression used in other digital recording methods. Other newer video formats such
as DVD and Blu-ray use the same physical geometry as CD, and most DVD and Blu-ray players are backward compatible
with audio CD. Meanwhile, with the advent and popularity of Internet-based distribution of files in lossily compressed
audio formats such as MP3, sales of CDs began to decline in the 2000s. For example, between 2000 and 2008, despite
overall growth in music sales and one anomalous year of increase, major-label CD sales declined overall by 20%, although
independent and DIY music sales may have fared better according to figures released on 30 March 2009, and CDs continued
to sell in large numbers. As of 2012, CDs and DVDs made up only 34 percent of music sales in the United States. In
Japan, however, over 80 percent of music was bought on CDs and other physical formats as of 2015. Replicated CDs
are mass-produced initially using a hydraulic press. Small granules of heated raw polycarbonate plastic are fed into
the press. A screw forces the liquefied plastic into the mold cavity. The mold closes with a metal stamper in contact
with the disc surface. The plastic is allowed to cool and harden. Once opened, the disc substrate is removed from
the mold by a robotic arm, and a 15 mm diameter center hole (called a stacking ring) is created. The time it takes
to "stamp" one CD is usually two to three seconds. This method produces the clear plastic blank part of the disc.
After a metallic reflecting layer (usually aluminium, but sometimes gold or other metal) is applied to the clear
blank substrate, the disc goes under a UV light for curing and it is ready to go to press. To prepare to press a
CD, a glass master is made, using a high-powered laser on a device similar to a CD writer. The glass master is a
positive image of the desired CD surface (with the desired microscopic pits and lands). After testing, it is used
to make a die by pressing it against a metal disc. The die is a negative image of the glass master: typically, several
are made, depending on the number of pressing mills that are to make the CD. The die then goes into a press, and
the physical image is transferred to the blank CD, leaving a final positive image on the disc. A small amount of
lacquer is applied as a ring around the center of the disc, and rapid spinning spreads it evenly over the surface.
Edge protection lacquer is applied before the disc is finished. The disc can then be printed and packed. The most
expensive part of a CD is the jewel case. In 1995, material costs were 30 cents for the jewel case and 10 to 15 cents
for the CD. CDs wholesaled for $0.75 to $1.15 and retailed for $16.98. On average, the store received 35
percent of the retail price, the record company 27 percent, the artist 16 percent, the manufacturer 13 percent, and
the distributor 9 percent. When 8-track tapes, cassette tapes, and CDs were introduced, each was marketed at a higher
price than the format it succeeded, even though the cost of producing the media fell; each new format carried
a higher perceived value. The pattern held from vinyl through CDs but was broken when Apple marketed MP3s for $0.99
per song and albums for $9.99, even though the incremental cost of producing an MP3 is very small. CD-R recordings are designed
to be permanent. Over time, the dye's physical characteristics may change, causing read errors and data loss, until
the reading device cannot recover with error correction methods. The design life is from 20 to 100 years, depending
on the quality of the discs, the quality of the writing drive, and storage conditions. However, testing has demonstrated
such degradation of some discs in as little as 18 months under normal storage conditions. This failure is known as
disc rot, for which there are several, mostly environmental, reasons. The ReWritable Audio CD is designed to be used
in a consumer audio CD recorder, which will not (without modification) accept standard CD-RW discs. These consumer
audio CD recorders use the Serial Copy Management System (SCMS), an early form of digital rights management (DRM),
to conform to the United States' Audio Home Recording Act (AHRA). The ReWritable Audio CD is typically somewhat more
expensive than CD-RW due to (a) lower volume and (b) a 3% AHRA royalty used to compensate the music industry for
the making of a copy. Due to technical limitations, the original ReWritable CD could be written no faster than 4x
speed. High Speed ReWritable CD has a different design, which permits writing at speeds ranging from 4x to 12x. Original
CD-RW drives can only write to original ReWritable CDs. High Speed CD-RW drives can typically write to both original
ReWritable CDs and High Speed ReWritable CDs. Both types of CD-RW discs can be read in most CD drives. Higher speed
CD-RW discs, Ultra Speed (16x to 24x write speed) and Ultra Speed+ (32x write speed) are now available. A CD is read
by focusing a 780 nm wavelength (near infrared) semiconductor laser housed within the CD player, through the bottom
of the polycarbonate layer. The change in height between pits and lands results in a difference in the way the light
is reflected. By measuring the intensity change with a photodiode, the data can be read from the disc. In order to
accommodate the spiral pattern of data, the semiconductor laser is placed on a swing arm within the disc tray of
any CD player. This swing arm allows the laser to read information from the centre to the edge of a disc, without
having to interrupt the spinning of the disc itself. The pits and lands themselves do not directly represent the
zeros and ones of binary data. Instead, non-return-to-zero, inverted encoding is used: a change from pit to land
or land to pit indicates a one, while no change indicates a series of zeros. There must be at least two and no more
than ten zeros between each one, a constraint enforced by the lengths of the pits and lands. This in turn is decoded by reversing
the eight-to-fourteen modulation used in mastering the disc, and then reversing the cross-interleaved Reed–Solomon
coding, finally revealing the raw data stored on the disc. These encoding techniques (defined in the Red Book) were
originally designed for CD Digital Audio, but they later became a standard for almost all CD formats (such as CD-ROM).
CDs are susceptible to damage during handling and from environmental exposure. Pits are much closer to the label
side of a disc, enabling defects and contaminants on the clear side to be out of focus during playback. Consequently,
CDs are more likely to suffer damage on the label side of the disc. Scratches on the clear side can be repaired by
refilling them with similar refractive plastic or by careful polishing. The edges of CDs are sometimes incompletely
sealed, allowing gases and liquids to corrode the metal reflective layer and to interfere with the focus of the laser
on the pits. The fungus Geotrichum candidum, found in Belize, has been found to consume the polycarbonate plastic
and aluminium found in CDs. The digital data on a CD begins at the center of the disc and proceeds toward the edge,
which allows adaptation to the different size formats available. Standard CDs are available in two sizes. By far,
the most common is 120 millimetres (4.7 in) in diameter, with a 74- or 80-minute audio capacity and a 650 or 700
MiB (737,280,000-byte) data capacity. This capacity was reportedly specified by Sony executive Norio Ohga in May
1980 so as to be able to contain the entirety of the London Philharmonic Orchestra's recording of Beethoven's Ninth
Symphony on one disc. This is a myth according to Kees Immink, as the code format had not yet been decided in May
1980. The adoption of EFM one month later would have allowed a playing time of 97 minutes for 120 mm diameter or
74 minutes for a disc as small as 100 mm. The 120 mm diameter has been adopted by subsequent formats, including Super
Audio CD, DVD, HD DVD, and Blu-ray Disc. Eighty-millimeter discs ("Mini CDs") were originally designed for CD singles
and can hold up to 24 minutes of music or 210 MiB of data but never became popular. Today, nearly
every single is released on a 120 mm CD, called a Maxi single. The logical format of an audio CD
(officially Compact Disc Digital Audio or CD-DA) is described in a document produced in 1980 by the format's joint
creators, Sony and Philips. The document is known colloquially as the Red Book CD-DA after the colour of its cover.
The format is a two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate per channel. Four-channel sound was to
be an allowable option within the Red Book format, but has never been implemented. Monaural audio has no existing
standard on a Red Book CD; thus, mono source material is usually presented as two identical channels in a standard
Red Book stereo track (i.e., mirrored mono); an MP3 CD, however, can have audio file formats with mono sound. Compact
Disc + Graphics is a special audio compact disc that contains graphics data in addition to the audio data on the
disc. The disc can be played on a regular audio CD player, but when played on a special CD+G player, it can output
a graphics signal (typically, the CD+G player is hooked up to a television set or a computer monitor); these graphics
are almost exclusively used to display lyrics on a television set for karaoke performers to sing along with. The
CD+G format takes advantage of the subcode channels R through W; these six bits per frame store the graphics information. Super
Video CD (SVCD) has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD. One CD-R disc can hold up to 60 minutes
of standard quality SVCD-format video. While no specific limit on SVCD video length is mandated by the specification,
one must lower the video bit rate, and therefore quality, to accommodate very long videos. It is usually difficult
to fit much more than 100 minutes of video onto one SVCD without incurring significant quality loss, and many hardware
players are unable to play video with an instantaneous bit rate lower than 300 to 600 kilobits per second. Photo
CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed
to hold nearly 100 high-quality images, scanned prints and slides using special proprietary encoding. Photo CDs are
defined in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended
to play on CD-i players, Photo CD players and any computer with the suitable software irrespective of the operating
system. The images can also be printed out on photographic paper with a special Kodak machine. This format is not
to be confused with Kodak Picture CD, which is a consumer product in CD-ROM format. The Red Book audio specification,
except for a simple "anti-copy" statement in the subcode, does not include any copy protection mechanism. Beginning
at least as early as 2001, record companies attempted to market "copy-protected" non-standard compact discs,
which cannot be ripped, or copied, to hard drives or easily converted to MP3s. One major drawback of these copy-protected
discs is that most will not play on computer CD-ROM drives, or on some standalone CD players that use CD-ROM mechanisms.
Philips has stated that such discs are not permitted to bear the trademarked Compact Disc Digital Audio logo because
they violate the Red Book specifications. Numerous copy-protection systems have been countered by readily available,
often free, software.
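The disc capacities quoted earlier follow directly from the CD's sector layout: audio plays at 75 sectors per second with 2,352 audio-payload bytes per sector, while a CD-ROM Mode 1 sector carries 2,048 user-data bytes after extra error correction. A quick arithmetic check using those standard figures:

```python
SECTORS_PER_SECOND = 75
AUDIO_BYTES_PER_SECTOR = 2352   # 98 frames x 24 bytes of audio payload
DATA_BYTES_PER_SECTOR = 2048    # CD-ROM Mode 1 user data after extra error correction

def capacity(minutes, bytes_per_sector):
    """Total bytes on a disc of the given playing time."""
    return minutes * 60 * SECTORS_PER_SECOND * bytes_per_sector

print(capacity(80, DATA_BYTES_PER_SECTOR))           # -> 737280000, the quoted byte figure
print(capacity(74, DATA_BYTES_PER_SECTOR) / 2**20)   # ~650 MiB for a 74-minute disc
print(capacity(80, DATA_BYTES_PER_SECTOR) / 2**20)   # ~703 MiB, marketed as "700 MB"
print(capacity(74, AUDIO_BYTES_PER_SECTOR))          # -> 783216000 raw audio bytes
```

The audio capacity exceeds the data capacity for the same playing time because data sectors spend 304 of their 2,352 bytes on sync, header, and additional error correction.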
A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is composed
of semiconductor material with at least three terminals for connection to an external circuit. A voltage or current
applied to one pair of the transistor's terminals changes the current through another pair of terminals. Because
the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal.
Today, some transistors are packaged individually, but many more are found embedded in integrated circuits. The transistor
is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. First
conceived by Julius Lilienfeld in 1926 and practically implemented in 1947 by American physicists John Bardeen, Walter
Brattain, and William Shockley, the transistor revolutionized the field of electronics, and paved the way for smaller
and cheaper radios, calculators, and computers, among other things. The transistor is on the list of IEEE milestones
in electronics, and Bardeen, Brattain, and Shockley shared the 1956 Nobel Prize in Physics for their achievement.
The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony.
The triode, however, was a fragile device that consumed a lot of power. Physicist Julius Edgar Lilienfeld filed a
patent for a field-effect transistor (FET) in Canada in 1925, which was intended to be a solid-state replacement
for the triode. Lilienfeld also filed identical patents in the United States in 1926 and 1928. However, Lilienfeld
did not publish any research articles about his devices nor did his patents cite any specific examples of a working
prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state
amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built.
In 1934, German inventor Oskar Heil patented a similar device. From November 17, 1947 to December 23, 1947, John
Bardeen and Walter Brattain at AT&T's Bell Labs in the United States performed experiments and observed that when
two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater
than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few
months worked to greatly expand the knowledge of semiconductors. The term transistor was coined by John R. Pierce
as a contraction of the term transresistance. According to Lillian Hoddeson and Vicki Daitch, authors of a biography
of John Bardeen, Shockley had proposed that Bell Labs' first patent for a transistor should be based on the field-effect
and that he be named as the inventor. Having unearthed Lilienfeld's patents, which had fallen into obscurity years earlier,
lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used
an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the
first point-contact transistor. In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly
awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor
effect." In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and
Heinrich Welker while working at the Compagnie des Freins et Signaux, a Westinghouse subsidiary located in Paris.
Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort
during World War II. Using this knowledge, he began researching the phenomenon of "interference" in 1947. By June
1948, witnessing currents flowing through point-contacts, Mataré produced consistent results using samples of germanium
produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947. Realizing that
Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron"
into production for amplified use in France's telephone network. Although several companies each produce over a billion
individually packaged (known as discrete) transistors every year, the vast majority of transistors are now produced
in integrated circuits (often shortened to IC, microchips or simply chips), along with diodes, resistors, capacitors
and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty
transistors whereas an advanced microprocessor, as of 2009, can use as many as 3 billion transistors (MOSFETs). "About
60 million transistors were built in 2002… for [each] man, woman, and child on Earth." The essential usefulness of
a transistor comes from its ability to use a small signal applied between one pair of its terminals to control a
much larger signal at another pair of terminals. This property is called gain. It can produce a stronger output signal,
a voltage or current, which is proportional to a weaker input signal; that is, it can act as an amplifier. Alternatively,
the transistor can be used to turn current on or off in a circuit as an electrically controlled switch, where the
amount of current is determined by other circuit elements. There are two types of transistors, which have slight
differences in how they are used in a circuit. A bipolar transistor has terminals labeled base, collector, and emitter.
A small current at the base terminal (that is, flowing between the base and the emitter) can control or switch a
much larger current between the collector and emitter terminals. For a field-effect transistor, the terminals are
labeled gate, source, and drain, and a voltage at the gate can control a current between source and drain. In a grounded-emitter
transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector
currents rise exponentially. The collector voltage drops because of reduced resistance from collector to emitter.
If the voltage difference between the collector and emitter were zero (or near zero), the collector current would
be limited only by the load resistance (light bulb) and the supply voltage. This is called saturation because current
is flowing from collector to emitter freely. When saturated, the switch is said to be on. Providing sufficient base
drive current is a key problem in the use of bipolar transistors as switches. The transistor provides current gain,
allowing a relatively large current in the collector to be switched by a much smaller current into the base terminal.
The ratio of these currents varies depending on the type of transistor, and even for a particular type, varies depending
on the collector current. In the example light-switch circuit shown, the resistor is chosen to provide enough base
current to ensure the transistor will be saturated. In a switching circuit, the idea is to simulate, as near as possible,
the ideal switch having the properties of open circuit when off, short circuit when on, and an instantaneous transition
between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small
to affect connected circuitry; the resistance of the transistor in the "on" state is too small to affect circuitry;
and the transition between the two states is fast enough not to have a detrimental effect. Bipolar transistors are
so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the
first type of transistor to be mass-produced, is a combination of two junction diodes, and is formed of either a
thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin
layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction
produces two p–n junctions: a base–emitter junction and a base–collector junction, separated by a thin region of
semiconductor known as the base region (two junction diodes wired together without sharing an intervening semiconducting
region will not make a transistor). BJTs have three terminals, corresponding to the three layers of semiconductor—an
emitter, a base, and a collector. They are useful in amplifiers because the currents at the emitter and collector
are controllable by a relatively small base current. In an n–p–n transistor operating in the active region, the emitter–base
junction is forward biased (electrons and holes recombine at the junction), and electrons are injected into the base
region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased (electrons and holes
are formed at, and move away from the junction) base–collector junction and be swept into the collector; perhaps
one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. By
controlling the number of electrons that can leave the base, the number of electrons entering the collector can be
controlled. Collector current is approximately β (common-emitter current gain) times the base current. It is typically
greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications.
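As a rough illustration of the current-gain relation above (collector current ≈ β times base current) and of sizing the base drive for a saturated switch, here is a minimal Python sketch. The supply voltage, load resistance, β, and saturation voltage are assumed illustrative values, not figures from the text or any datasheet.

```python
# Sketch of the BJT current-gain relation described above, with
# illustrative (not datasheet) values: beta = 100, a 5 V supply,
# and a 50-ohm load standing in for the light bulb.

BETA = 100          # common-emitter current gain (I_C / I_B), typical small-signal value
V_SUPPLY = 5.0      # supply voltage, volts
R_LOAD = 50.0       # load (bulb) resistance, ohms
V_CE_SAT = 0.2      # collector-emitter voltage when saturated, volts (typical)

def collector_current(i_base):
    """Active-region approximation: I_C = beta * I_B."""
    return BETA * i_base

def saturation_current():
    """When fully on, the supply and load set the current, not the transistor."""
    return (V_SUPPLY - V_CE_SAT) / R_LOAD

def base_current_for_saturation(overdrive=2.0):
    """Base drive needed to guarantee saturation, with a safety margin."""
    return overdrive * saturation_current() / BETA

i_sat = saturation_current()
i_b = base_current_for_saturation()
print(f"I_C(sat) = {i_sat*1000:.1f} mA, required I_B = {i_b*1000:.2f} mA")
```

With these assumed values the bulb draws roughly 96 mA when the switch is on, while only about 2 mA of base current is needed to hold the transistor in saturation, which is the current gain at work.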
In a FET, the drain-to-source current flows via a conducting channel that connects the source region to the drain
region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate
and source terminals; hence the current flowing between the drain and source is controlled by the voltage applied
between the gate and source. As the gate–source voltage (VGS) is increased, the drain–source current (IDS) increases
exponentially for VGS below threshold, and then at a roughly quadratic rate (IDS ∝ (VGS − VT)²), where VT is the threshold voltage at which drain current begins, in the "space-charge-limited" region above threshold. A quadratic
behavior is not observed in modern devices, for example, at the 65 nm technology node. FETs are divided into two
families: junction FET (JFET) and insulated gate FET (IGFET). The IGFET is more commonly known as a metal–oxide–semiconductor
FET (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor.
Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain. Functionally,
this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode
between its grid and cathode. Also, both devices operate in the depletion mode; both have a high input impedance,
and they both conduct current under the control of an input voltage. FETs are further divided into depletion-mode
and enhancement-mode types, depending on whether the channel is turned on or off with zero gate-to-source voltage.
For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For the
depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the
channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel
devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions
would forward bias and conduct if they were enhancement-mode devices; most IGFETs are enhancement-mode types. The
bipolar junction transistor (BJT) was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs
became widely available, the BJT remained the transistor of choice for many analog circuits such as amplifiers because
of their greater linearity and ease of manufacture. In integrated circuits, the desirable properties of MOSFETs allowed
them to capture nearly all market share for digital circuits. Discrete MOSFETs can be applied in transistor applications,
including analog circuits, voltage regulators, amplifiers, power transmitters and motor drivers. The Pro Electron
standard, the European Electronic Component Manufacturers Association part numbering scheme, begins with two letters:
the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second
letter denotes the intended use (A for diode, C for general-purpose transistor, etc.). A 3-digit sequence number
(or one letter then 2 digits, for industrial types) follows. With early devices this indicated the case type. Suffixes
may be used, with a letter (e.g. "C" often means high hFE, such as in: BC549C) or other codes may follow to show
gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A). The JEDEC EIA370 transistor
device numbers usually start with "2N", indicating a three-terminal device (dual-gate field-effect transistors are
four-terminal devices, so begin with 3N), then a 2, 3 or 4-digit sequential number with no significance as to device
properties (although early devices with low numbers tend to be germanium). For example, 2N3055 is a silicon n–p–n
power transistor, 2N1301 is a p–n–p germanium switching transistor. A letter suffix (such as "A") is sometimes used
to indicate a newer variant, but rarely gain groupings. Manufacturers of devices may have their own proprietary numbering
system, for example CK722. Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which
originally would denote a Motorola FET) now is an unreliable indicator of who made the device. Some proprietary naming
schemes adopt parts of other naming schemes, for example a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A
in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other
xx100 devices). The junction forward voltage is the voltage applied to the emitter–base junction of a BJT in order
to make the base conduct a specified current. The current increases exponentially as the junction forward voltage
is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor
diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive"
the transistor. The junction forward voltage for a given current decreases with increase in temperature. For a typical
silicon junction the change is −2.1 mV/°C. In some circuits special compensating elements (sensistors) must be used
to compensate for such changes. Because the electron mobility is higher than the hole mobility for all semiconductor
materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor. GaAs has the
highest electron mobility of the three semiconductors. It is for this reason that GaAs is used in high-frequency
applications. A relatively recent FET development, the high-electron-mobility transistor (HEMT), has a heterostructure
(junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)-gallium arsenide (GaAs)
which has twice the electron mobility of a GaAs-metal barrier junction. Because of their high speed and low noise,
HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminium
gallium nitride (AlGaN/GaN HEMTs) provide a still higher electron mobility and are being developed for various applications.
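The temperature dependence of the junction forward voltage noted earlier (about −2.1 mV/°C for a typical silicon junction at fixed current) can be sketched with a linear approximation in Python. The 0.65 V baseline at 25 °C is an assumed typical value, not one quoted in the text.

```python
# Illustration of the temperature dependence of the silicon junction
# forward voltage noted above: about -2.1 mV per degree C at a fixed current.

V_F_25C = 0.65      # assumed forward voltage at 25 C and 1 mA (typical for silicon)
TEMPCO = -2.1e-3    # volts per degree Celsius

def forward_voltage(temp_c):
    """Linear approximation of V_F versus temperature around 25 C."""
    return V_F_25C + TEMPCO * (temp_c - 25.0)

for t in (0, 25, 75, 125):
    print(f"{t:>4} C: V_F ~ {forward_voltage(t)*1000:.0f} mV")
```

The roughly 0.2 V drop between room temperature and 125 °C is why bias networks may need compensating elements such as the sensistors mentioned above.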
Discrete transistors are individually packaged transistors. Transistors come in many different semiconductor packages
(see image). The two main categories are through-hole (or leaded), and surface-mount, also known as surface-mount
device (SMD). The ball grid array (BGA) is the latest surface-mount package (currently only for large integrated
circuits). It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections,
SMDs have better high-frequency characteristics but lower power rating.
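The gate-voltage-to-drain-current relation described above for FETs (exponential below threshold, roughly quadratic above it) can be sketched with the textbook long-channel model. K, VT, and the subthreshold slope here are assumed illustrative values, not those of any particular device; as noted above, modern short-channel devices deviate from the quadratic law.

```python
import math

# Textbook long-channel MOSFET sketch of the gate-voltage/drain-current
# relation described above. K, V_T, and the subthreshold slope are
# illustrative assumptions; the two regions are not stitched smoothly at V_T.

V_T = 0.7           # threshold voltage, volts (assumed)
K = 2e-3            # transconductance parameter, A/V^2 (assumed)
N_VT = 0.026 * 1.5  # subthreshold slope factor n*kT/q at room temperature

def drain_current(v_gs):
    """Saturation-region drain current versus gate-source voltage."""
    if v_gs < V_T:
        # Subthreshold: current falls off exponentially below threshold.
        return K * N_VT**2 * math.exp((v_gs - V_T) / N_VT)
    # Above threshold: square law, I_DS = K * (V_GS - V_T)^2.
    return K * (v_gs - V_T) ** 2

for v in (0.5, 0.7, 1.0, 1.5):
    print(f"V_GS = {v:.1f} V -> I_DS = {drain_current(v)*1e3:.4f} mA")
```

Running the sketch shows the qualitative behavior the text describes: currents of nanoamps well below threshold, rising steeply through it, then growing quadratically with the gate overdrive.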
In the Pre-Modern era, many people's sense of self and purpose was often expressed via a faith in some form of deity, be
that in a single God or in many gods. Pre-modern cultures are not, however, thought to have developed a sense of distinct individuality. Religious officials, who often held positions of power, were the spiritual intermediaries to the common person. It was only through these intermediaries that the general masses had access to the divine. Tradition was sacred and unchanging to ancient cultures, and the social order of ceremony and morals could be strictly enforced.
The term "modern" was coined in the 16th century to indicate present or recent times (ultimately derived from the
Latin adverb modo, meaning "just now"). The European Renaissance (about 1420–1630), which marked the transition between
the Late Middle Ages and Early Modern times, started in Italy and was spurred in part by the rediscovery of classical
art and literature, as well as the new perspectives gained from the Age of Discovery and the invention of the telescope
and microscope, expanding the borders of thought and knowledge. The term "Early Modern" was introduced in the English
language in the 1930s to distinguish the time between what we call the Middle Ages and the time of the late Enlightenment (1800), when the meaning of the term Modern Ages was developing its contemporary form. It is important to note that
these terms stem from European history. In usage in other parts of the world, such as in Asia, and in Muslim countries,
the terms are applied in a very different way, but often in the context of their contact with European culture
in the Age of Discovery. In the Contemporary era, there were various socio-technological trends. In the 21st century and the late modern world, the Information Age and computers were at the forefront of use, not completely ubiquitous but often present in daily life. The development of Eastern powers was of note, with China and India becoming more
powerful. In the Eurasian theater, the European Union and Russian Federation were two forces recently developed.
A concern for the Western world, if not the whole world, was the late modern form of terrorism and the warfare that has
resulted from the contemporary terrorist acts. In Asia, various Chinese dynasties and Japanese shogunates controlled
the Asian sphere. In Japan, the Edo period from 1600 to 1868 is also referred to as the early modern period. In Korea, the period from the rise of the Joseon Dynasty to the enthronement of King Gojong is referred to as the early modern period. In the Americas, Native Americans had built large and varied civilizations, including the Aztec Empire and
alliance, the Inca civilization, the Mayan Empire and cities, and the Chibcha Confederation. In the West, the European kingdoms were in a period of reformation and expansion. Russia reached the Pacific coast in 1647
and consolidated its control over the Russian Far East in the 19th century. In China, urbanization increased as the
population grew and as the division of labor grew more complex. Large urban centers, such as Nanjing and Beijing,
also contributed to the growth of private industry. In particular, small-scale industries grew up, often specializing
in paper, silk, cotton, and porcelain goods. For the most part, however, relatively small urban centers with markets
proliferated around the country. Town markets mainly traded food, with some necessary manufactures such as pins or
oil. Despite the xenophobia and intellectual introspection characteristic of the increasingly popular new school
of neo-Confucianism, China under the early Ming dynasty was not isolated. Foreign trade and other contacts with the
outside world, particularly Japan, increased considerably. Chinese merchants explored all of the Indian Ocean, reaching
East Africa with the treasure voyages of Zheng He. The Qing dynasty (1644–1911) was founded after the fall of the
Ming, the last Han Chinese dynasty, by the Manchus. The Manchus were formerly known as the Jurchens. When Beijing
was captured by Li Zicheng's peasant rebels in 1644, the Chongzhen Emperor, the last Ming emperor, committed suicide.
The Manchus then allied with former Ming general Wu Sangui and seized control of Beijing, which became the new capital
of the Qing dynasty. The Manchus adopted the Confucian norms of traditional Chinese government in their rule of China
proper. Schoppa, the editor of The Columbia Guide to Modern Chinese History argues, "A date around 1780 as the beginning
of modern China is thus closer to what we know today as historical 'reality'. It also allows us to have a better
baseline to understand the precipitous decline of the Chinese polity in the nineteenth and twentieth centuries."
Society in the Japanese "Tokugawa period" (Edo society), unlike the shogunates before it, was based on the strict
class hierarchy originally established by Toyotomi Hideyoshi. The daimyo, or lords, were at the top, followed by
the warrior-caste of samurai, with the farmers, artisans, and traders ranking below. In some parts of the country,
particularly smaller regions, daimyo and samurai were more or less identical, since daimyo might be trained as samurai,
and samurai might act as local lords. Otherwise, the largely inflexible nature of this social stratification system
unleashed disruptive forces over time. Taxes on the peasantry were set at fixed amounts which did not account for
inflation or other changes in monetary value. As a result, the tax revenues collected by the samurai landowners were
worth less and less over time. This often led to numerous confrontations between noble but impoverished samurai and
well-to-do peasants, ranging from simple local disturbances to much bigger rebellions. None, however, proved compelling
enough to seriously challenge the established order until the arrival of foreign powers. On the Indian subcontinent,
the Mughal Empire ruled most of India in the early 18th century. The "classic period" ended with the death and defeat
of Emperor Aurangzeb in 1707 by the rising Hindu Maratha Empire, although the dynasty continued for another 150 years.
During this period, the Empire was marked by a highly centralized administration connecting the different regions.
All the significant monuments of the Mughals, their most visible legacy, date to this period which was characterised
by the expansion of Persian cultural influence in the Indian subcontinent, with brilliant literary, artistic, and
architectural results. The Maratha Empire was located in the south west of present-day India and expanded greatly
under the rule of the Peshwas, the prime ministers of the Maratha empire. In 1761, the Maratha army lost the Third
Battle of Panipat which halted imperial expansion and the empire was then divided into a confederacy of Maratha states.
The development of New Imperialism saw the conquest of nearly all eastern hemisphere territories by colonial powers.
The commercial colonization of India commenced in 1757, after the Battle of Plassey, when the Nawab of Bengal surrendered
his dominions to the British East India Company, in 1765, when the Company was granted the diwani, or the right to
collect revenue, in Bengal and Bihar, or in 1772, when the Company established a capital in Calcutta, appointed its
first Governor-General, Warren Hastings, and became directly involved in governance. The Maratha states, following
the Anglo-Maratha wars, eventually lost to the British East India Company in 1818 with the Third Anglo-Maratha War.
The rule lasted until 1858, when, after the Indian rebellion of 1857 and the consequent Government of India Act
1858, the British government assumed the task of directly administering India in the new British Raj. In 1819 Stamford
Raffles established Singapore as a key trading post for Britain in their rivalry with the Dutch. However, their rivalry
cooled in 1824 when an Anglo-Dutch treaty demarcated their respective interests in Southeast Asia. From the 1850s
onwards, the pace of colonization shifted to a significantly higher gear. The Dutch East India Company (1800) and
British East India Company (1858) were dissolved by their respective governments, who took over the direct administration
of the colonies. Only Thailand was spared the experience of foreign rule, although Thailand itself was also greatly
affected by the power politics of the Western powers. Colonial rule had a profound effect on Southeast Asia. While
the colonial powers profited much from the region's vast resources and large market, colonial rule did develop the
region to a varying extent. Many major events caused Europe to change around the start of the 16th century, starting
with the Fall of Constantinople in 1453, the fall of Muslim Spain and the discovery of the Americas in 1492, and
Martin Luther's Protestant Reformation in 1517. In England the modern period is often dated to the start of the Tudor
period with the victory of Henry VII over Richard III at the Battle of Bosworth in 1485. Early modern European history
is usually seen to span from the start of the 15th century, through the Age of Reason and the Age of Enlightenment
in the 17th and 18th centuries, until the beginning of the Industrial Revolution in the late 18th century. Russia
experienced territorial growth through the 17th century, which was the age of Cossacks. Cossacks were warriors organized
into military communities, resembling pirates and pioneers of the New World. In 1648, the peasants of Ukraine joined
the Zaporozhian Cossacks in rebellion against Poland-Lithuania during the Khmelnytsky Uprising, because of the social
and religious oppression they suffered under Polish rule. In 1654 the Ukrainian leader, Bohdan Khmelnytsky, offered
to place Ukraine under the protection of the Russian Tsar, Aleksey I. Aleksey's acceptance of this offer led to another
Russo-Polish War (1654–1667). Finally, Ukraine was split along the river Dnieper, leaving the western part (or Right-bank
Ukraine) under Polish rule and eastern part (Left-bank Ukraine and Kiev) under Russian. Later, in 1670–71 the Don
Cossacks led by Stenka Razin initiated a major uprising in the Volga region, but the Tsar's troops were successful
in defeating the rebels. In the east, the rapid Russian exploration and colonisation of the huge territories of Siberia
was led mostly by Cossacks hunting for valuable furs and ivory. Russian explorers pushed eastward primarily along
the Siberian river routes, and by the mid-17th century there were Russian settlements in the Eastern Siberia, on
the Chukchi Peninsula, along the Amur River, and on the Pacific coast. In 1648 the Bering Strait between Asia and
North America was passed for the first time by Fedot Popov and Semyon Dezhnyov. Traditionally, the European intellectual
transformation of and after the Renaissance bridged the Middle Ages and the Modern era. The Age of Reason in the
Western world is generally regarded as being the start of modern philosophy, and a departure from the medieval approach,
especially Scholasticism. Early 17th-century philosophy is often called the Age of Rationalism and is considered
to succeed Renaissance philosophy and precede the Age of Enlightenment, but some consider it as the earliest part
of the Enlightenment era in philosophy, extending that era to two centuries. The 18th century saw the beginning of
secularization in Europe, rising to notability in the wake of the French Revolution. The Age of Enlightenment is
a time in Western philosophy and cultural life centered upon the 18th century in which reason was advocated as the
primary source and legitimacy for authority. Enlightenment gained momentum more or less simultaneously in many parts
of Europe and America. Developing during the Renaissance, humanism as an intellectual movement
spread across Europe. The basic training of the humanist was to speak well and write (typically, in the form of a
letter). The term umanista comes from the latter part of the 15th century. Its practitioners were associated with the studia
humanitatis, a novel curriculum that was competing with the quadrivium and scholastic logic. Renaissance humanism
took a close study of the Latin and Greek classical texts, and was antagonistic to the values of scholasticism with
its emphasis on the accumulated commentaries; and humanists were involved in the sciences, philosophies, arts and
poetry of classical antiquity. They self-consciously imitated classical Latin and deprecated the use of medieval
Latin. By analogy with the perceived decline of Latin, they applied the principle of ad fontes, or back to the sources,
across broad areas of learning. The quarrel of the Ancients and the Moderns was a literary and artistic quarrel that
heated up in the early 1690s and shook the Académie française. The two opposing sides were the Ancients (Anciens), who constrained the choice of subjects to those drawn from the literature of Antiquity, and the Moderns (Modernes), who supported the merits of the authors of the century of Louis XIV. Fontenelle quickly followed with his Digression
sur les anciens et les modernes (1688), in which he took the Modern side, pressing the argument that modern scholarship
allowed modern man to surpass the ancients in knowledge. The Scientific Revolution was a period when European ideas in classical physics, astronomy, biology, human anatomy, chemistry, and other classical sciences were rejected, and new doctrines supplanted those that had prevailed from Ancient Greece to the Middle Ages, leading to a transition to modern science. This period saw a fundamental transformation in scientific ideas across physics,
astronomy, and biology, in institutions supporting scientific investigation, and in the more widely held picture
of the universe. Individuals started to question all manner of things, and it was this questioning that led to the
Scientific Revolution, which in turn formed the foundations of contemporary sciences and the establishment of several
modern scientific fields. In France, the Revolution of 1789 brought sweeping political and social change, accompanied by violent turmoil which included the trial and execution
of the king, vast bloodshed and repression during the Reign of Terror, and warfare involving every other major European
power. Subsequent events that can be traced to the Revolution include the Napoleonic Wars, two separate restorations
of the monarchy, and two additional revolutions as modern France took shape. In the following century, France would
be governed at one point or another as a republic, constitutional monarchy, and two different empires. The campaigns
of French Emperor and General Napoleon Bonaparte characterized the Napoleonic Era. Born on Corsica as the French invaded, and dying suspiciously on the tiny British island of St. Helena, this brilliant commander controlled a French Empire that, at its height, ruled a large portion of Europe directly from Paris, while many of his friends and family ruled countries such as Spain, Poland, and several parts of Italy, as well as many other kingdoms, republics, and dependencies.
The Napoleonic Era changed the face of Europe forever, and old Empires and Kingdoms fell apart as a result of the
mighty and "Glorious" surge of Republicanism. Italian unification was the political and social movement that annexed
different states of the Italian peninsula into the single state of Italy in the 19th century. There is a lack of
consensus on the exact dates for the beginning and the end of this period, but many scholars agree that the process
began with the end of Napoleonic rule and the Congress of Vienna in 1815, and approximately ended with the Franco-Prussian
War in 1871, though the last città irredente did not join the Kingdom of Italy until after World War I. Beginning
the Age of Revolution, the American Revolution and the ensuing political upheaval during the last half of the 18th
century saw the Thirteen Colonies of North America overthrow the governance of the Parliament of Great Britain, and
then reject the British monarchy itself to become the sovereign United States of America. In this period the colonies
first rejected the authority of the Parliament to govern them without representation, and formed self-governing independent
states. The Second Continental Congress then joined together against the British to defend that self-governance in
the armed conflict from 1775 to 1783 known as the American Revolutionary War (also called American War of Independence).
The American Revolution began with fighting at Lexington and Concord. On July 4, 1776, the Congress issued the Declaration of Independence, which proclaimed the states' independence from Great Britain and their formation of a cooperative union.
In June 1776, Benjamin Franklin was appointed a member of the Committee of Five that drafted the Declaration of Independence.
Although he was temporarily disabled by gout and unable to attend most meetings of the Committee, Franklin made several
small changes to the draft sent to him by Thomas Jefferson. The decolonization of the Americas was the process by
which the countries in the Americas gained their independence from European rule. Decolonization began with a series
of revolutions in the late 18th and early-to-mid-19th centuries. The Spanish American wars of independence were the
numerous wars against Spanish rule in Spanish America that took place during the early 19th century, from 1808 until
1829, directly related to the Napoleonic French invasion of Spain. The conflict started with short-lived governing
juntas established in Chuquisaca and Quito opposing the composition of the Supreme Central Junta of Seville. When
the Central Junta fell to the French, numerous new Juntas appeared all across the Americas, eventually resulting
in a chain of newly independent countries stretching from Argentina and Chile in the south, to Mexico in the north.
After the death of the king Ferdinand VII, in 1833, only Cuba and Puerto Rico remained under Spanish rule, until
the Spanish–American War in 1898. Unlike the Spanish, the Portuguese did not divide their colonial territory in America.
The captaincies they created were subordinated to a centralized administration in Salvador (later relocated to Rio de
Janeiro) which reported directly to the Portuguese Crown until its independence in 1822, becoming the Empire of Brazil.
The first Industrial Revolution merged into the Second Industrial Revolution around 1850, when technological and
economic progress gained momentum with the development of steam-powered ships and railways, and later in the 19th
century with the internal combustion engine and electric power generation. The Second Industrial Revolution was a
phase of the Industrial Revolution, sometimes labeled as the separate Technical Revolution. From a technological and a social
point of view there is no clean break between the two. Major innovations during the period occurred in the chemical,
electrical, petroleum, and steel industries. Specific advancements included the introduction of oil-fired steam-turbine and internal-combustion-driven steel ships, the development of the airplane, the practical commercialization of the
automobile, mass production of consumer goods, the perfection of canning, mechanical refrigeration and other food
preservation techniques, and the invention of the telephone. Industrialization is the process of social and economic
change whereby a human group is transformed from a pre-industrial society into an industrial one. It is a subdivision
of a more general modernization process, where social change and economic development are closely related with technological
innovation, particularly with the development of large-scale energy and metallurgy production. It is the extensive
organization of an economy for the purpose of manufacturing. Industrialization also introduces a form of philosophical
change, where people obtain a different attitude towards their perception of nature. The modern petroleum industry
started in 1846 with the discovery of the process of refining kerosene from coal by Nova Scotian Abraham Pineo Gesner.
Ignacy Łukasiewicz improved Gesner's method to develop a means of refining kerosene from the more readily available
"rock oil" ("petr-oleum") seeps in 1852 and the first rock oil mine was built in Bóbrka, near Krosno in Galicia in
the following year. In 1854, Benjamin Silliman, a science professor at Yale University in New Haven, was the first
to fractionate petroleum by distillation. These discoveries rapidly spread around the world. Engineering achievements
of the revolution ranged from electrification to developments in materials science. The advancements made a great
contribution to the quality of life. In the first revolution, Lewis Paul was the original inventor of roller spinning,
the basis of the water frame for spinning cotton in a cotton mill. Matthew Boulton and James Watt's improvements
to the steam engine were fundamental to the changes brought by the Industrial Revolution in both the Kingdom of Great
Britain and the world. In the latter part of the second revolution, Thomas Alva Edison developed many devices that
greatly influenced life around the world and is often credited with the creation of the first industrial research
laboratory. In 1882, Edison switched on the world's first large-scale electrical supply network that provided 110
volts direct current to fifty-nine customers in lower Manhattan. Also toward the end of the second industrial revolution,
Nikola Tesla made many contributions in the field of electricity and magnetism in the late 19th and early 20th centuries.
The European Revolutions of 1848, known in some countries as the Spring of Nations or the Year of Revolution, were
a series of political upheavals throughout the European continent. Described as a revolutionary wave, the period
of unrest began in France and then, further propelled by the French Revolution of 1848, soon spread to the rest of
Europe. Although most of the revolutions were quickly put down, there was a significant amount of violence in many
areas, with tens of thousands of people tortured and killed. While the immediate political effects of the revolutions
were reversed, the long-term reverberations of the events were far-reaching. Following the Enlightenment's ideas,
the reformers looked to the Scientific Revolution and industrial progress to solve the social problems which arose
with the Industrial Revolution. Newton's natural philosophy combined a mathematics of axiomatic proof with the mechanics
of physical observation, yielding a coherent system of verifiable predictions and replacing a previous reliance on
revelation and inspired truth. Applied to public life, this approach yielded several successful campaigns for changes
in social policy. Under Peter I (the Great), Russia was proclaimed an Empire in 1721 and became recognized as a world
power. Ruling from 1682 to 1725, Peter defeated Sweden in the Great Northern War, forcing it to cede West Karelia
and Ingria (two regions lost by Russia in the Time of Troubles), as well as Estland and Livland, securing Russia's
access to the sea and sea trade. On the Baltic Sea Peter founded a new capital called Saint Petersburg, later known
as Russia's Window to Europe. Peter the Great's reforms brought considerable Western European cultural influences
to Russia. Catherine II (the Great), who ruled in 1762–96, extended Russian political control over the Polish-Lithuanian
Commonwealth and incorporated most of its territories into Russia during the Partitions of Poland, pushing the Russian
frontier westward into Central Europe. In the south, after successful Russo-Turkish Wars against the Ottoman Empire,
Catherine advanced Russia's boundary to the Black Sea, defeating the Crimean khanate. The Victorian era of the United
Kingdom was the period of Queen Victoria's reign from June 1837 to January 1901. This was a long period of prosperity
for the British people, as profits gained from the overseas British Empire, as well as from industrial improvements
at home, allowed a large, educated middle class to develop. Some scholars would extend the beginning of the period—as
defined by a variety of sensibilities and political concerns that have come to be associated with the Victorians—back
five years to the passage of the Reform Act 1832. In Britain's "imperial century", victory over Napoleon left Britain
without any serious international rival, other than Russia in central Asia. Unchallenged at sea, Britain adopted
the role of global policeman, a state of affairs later known as the Pax Britannica, and a foreign policy of "splendid
isolation". Alongside the formal control it exerted over its own colonies, Britain's dominant position in world trade
meant that it effectively controlled the economies of many nominally independent countries, such as China, Argentina
and Siam, which has been generally characterized as "informal empire". Of note during this time was the Anglo-Zulu
War, which was fought in 1879 between the British Empire and the Zulu Empire. British imperial strength was underpinned
by the steamship and the telegraph, new technologies invented in the second half of the 19th century, allowing it
to control and defend the Empire. By 1902, the British Empire was linked together by a network of telegraph cables,
the so-called All Red Line. By 1922, the British Empire had grown to encompass around 13,000,000 square miles (34,000,000 km2) of territory and roughly 458 million people. The British established colonies in Australia in 1788,
New Zealand in 1840 and Fiji in 1872, with much of Oceania becoming part of the British Empire. The Bourbon Restoration
followed the ousting of Napoleon I of France in 1814. The Allies restored the Bourbon Dynasty to the French throne.
The ensuing period is called the Restoration, following French usage, and is characterized by a sharp conservative
reaction and the re-establishment of the Roman Catholic Church as a power in French politics. The July Monarchy was
a period of liberal constitutional monarchy in France under King Louis-Philippe starting with the July Revolution
(or Three Glorious Days) of 1830 and ending with the Revolution of 1848. The Second Empire was the Imperial Bonapartist
regime of Napoleon III from 1852 to 1870, between the Second Republic and the Third Republic, in France. The Franco-Prussian
War was a conflict between France and Prussia, while Prussia was backed up by the North German Confederation, of
which it was a member, and the South German states of Baden, Württemberg and Bavaria. The complete Prussian and German
victory brought about the final unification of Germany under King Wilhelm I of Prussia. It also marked the downfall
of Napoleon III and the end of the Second French Empire, which was replaced by the Third Republic. As part of the
settlement, almost all of the territory of Alsace-Lorraine was taken by Prussia to become a part of Germany, which
it would retain until the end of World War I. The major European powers laid claim to the areas of Africa where they
could exhibit a sphere of influence over the area. These claims did not need to be backed by substantial land holdings or treaties to be considered legitimate. The European power that demonstrated its control over a territory accepted the mandate to rule that region as a national colony. The European nation that held the claim developed and benefited from its colony's commercial interests without having to fear rival European competition. With the colonial claim came the underlying assumption that the European power that exerted control would use its mandate to offer protection and provide welfare for its colonial peoples; however, this principle remained more theory than practice. There were
many documented instances of material and moral conditions deteriorating for native Africans in the late nineteenth
and early twentieth centuries under European colonial rule, to the point where the colonial experience for them has
been described as "hell on earth." At the time of the Berlin Conference, Africa contained one-fifth of the world’s
population living in one-quarter of the world’s land area. However, from Europe's perspective, they were dividing
an unknown continent. European countries established a few coastal colonies in Africa by the mid-nineteenth century,
which included Cape Colony (Great Britain), Angola (Portugal), and Algeria (France), but until the late nineteenth
century Europe largely traded with free African states without feeling the need for territorial possession. Until
the 1880s most of Africa remained uncharted, with western maps from the period generally showing blank spaces for
the continent’s interior. From the 1880s to 1914, the European powers expanded their control across the African continent,
competing with each other for Africa’s land and resources. Great Britain controlled various colonial holdings in
East Africa that spanned the length of the African continent from Egypt in the north to South Africa. The French
gained major ground in West Africa, and the Portuguese held colonies in southern Africa. Germany, Italy, and Spain
established a small number of colonies at various points throughout the continent, which included German East Africa
(Tanganyika) and German Southwest Africa for Germany, Eritrea and Libya for Italy, and the Canary Islands and Rio
de Oro in northwestern Africa for Spain. Finally, for King Leopold (ruled from 1865–1909), there was the large “piece
of that great African cake” known as the Congo, which, unfortunately for the native Congolese, became his personal
fiefdom to do with as he pleased in Central Africa. By 1914, almost the entire continent was under European control.
Liberia, which was settled by freed American slaves in the 1820s, and Abyssinia (Ethiopia) in eastern Africa were
the last remaining independent African states. (John Merriman, A History of Modern Europe, Volume Two: From the French
Revolution to the Present, Third Edition (New York: W. W. Norton & Company, 2010), pp. 819–859). Around the end of
the 19th century and into the 20th century, the Meiji era was marked by the reign of the Meiji Emperor. During this
time, Japan started its modernization and rose to world power status. This era name means "Enlightened Rule". In
Japan, the Meiji Restoration started in the 1860s, marking the rapid modernization by the Japanese themselves along
European lines. Much research has focused on the issues of discontinuity versus continuity with the previous Tokugawa
Period. In the 1960s, younger Japanese scholars led by Irokawa Daikichi reacted against the bureaucratic superstate and began searching for the historic role of the common people. They avoided the elite, and focused not on political
events but on social forces and attitudes. They rejected both Marxism and modernization theory as alien and confining.
They stressed the importance of popular energies in the development of modern Japan. They enlarged history by using
the methods of social history. It was not until the beginning of the Meiji Era that the Japanese government began
taking modernization seriously. Japan expanded its military production base by opening arsenals in various locations.
The hyobusho (war office) was replaced with a War Department and a Naval Department. The samurai class suffered great
disappointment in the following years. Laws were instituted that required every able-bodied male Japanese citizen, regardless
of class, to serve a mandatory term of three years with the first reserves and two additional years with the second
reserves. This action, the deathblow for the samurai warriors and their daimyo feudal lords, initially met resistance
from both the peasant and warrior alike. The peasant class interpreted the term for military service, ketsu-eki (blood
tax) literally, and attempted to avoid service by any means necessary. The Japanese government began modelling their
ground forces after the French military. The French government contributed greatly to the training of Japanese officers.
Many were employed at the military academy in Kyoto, and many more still were feverishly translating French field
manuals for use in the Japanese ranks. The Antebellum Age was a period of increasing division in the country based
on the growth of slavery in the American South and in the western territories of Kansas and Nebraska that eventually
led to the Civil War in 1861. The Antebellum Period is often considered to have begun with the Kansas–Nebraska Act of 1854, although it may have begun as early as 1812. This period is also significant because it marked the transition of American manufacturing to the industrial revolution. Northern leaders agreed
that victory would require more than the end of fighting. Secession and Confederate nationalism had to be totally
repudiated and all forms of slavery or quasi-slavery had to be eliminated. Lincoln proved effective in mobilizing
support for the war goals, raising large armies and supplying them, avoiding foreign interference, and making the
end of slavery a war goal. The Confederacy had a larger area than it could defend, and it failed to keep its ports
open and its rivers clear. The North kept up the pressure as the South could barely feed and clothe its soldiers.
Its soldiers, especially those in the East under the command of General Robert E. Lee, proved highly resourceful until they finally were overwhelmed by Generals Ulysses S. Grant and William T. Sherman in 1864–65. The Reconstruction Era (1863–77) began with the Emancipation Proclamation in 1863, and included freedom, full citizenship and the vote
for the Southern blacks. It was followed by a reaction that left the blacks in a second-class status legally, politically,
socially and economically until the 1960s. During the Gilded Age, there was substantial growth in population in the
United States and extravagant displays of wealth and excess of America's upper-class during the post-Civil War and
post-Reconstruction era, in the late 19th century. The wealth polarization derived primarily from industrial and
population expansion. The businessmen of the Second Industrial Revolution created industrial towns and cities in
the Northeast with new factories, and contributed to the creation of an ethnically diverse industrial working class
which produced the wealth owned by rising super-rich industrialists and financiers called the "robber barons". An
example is the company of John D. Rockefeller, who was an important figure in shaping the new oil industry. Using
highly effective tactics and aggressive practices, later widely criticized, Standard Oil absorbed or destroyed most
of its competition. The creation of a modern industrial economy took place. With the creation of a transportation
and communication infrastructure, the corporation became the dominant form of business organization and a managerial
revolution transformed business operations. In 1890, Congress passed the Sherman Antitrust Act—the source of all
American anti-monopoly laws. The law forbade every contract, scheme, deal, or conspiracy to restrain trade, though
the phrase "restraint of trade" remained subjective. By the beginning of the 20th century, per capita income and
industrial production in the United States exceeded that of any other country except Britain. Long hours and hazardous
working conditions led many workers to attempt to form labor unions despite strong opposition from industrialists
and the courts. But the courts did protect the marketplace, declaring the Standard Oil group to be an "unreasonable"
monopoly under the Sherman Antitrust Act in 1911. It ordered Standard to break up into 34 independent companies with
different boards of directors. Replacing the classical physics in use since the end of the scientific revolution,
modern physics arose in the early 20th century with the advent of quantum physics, substituting mathematical studies
for experimental studies and examining equations to build a theoretical structure. The old quantum
theory was a collection of results which predate modern quantum mechanics, but were never complete or self-consistent.
The collection of heuristic prescriptions for quantum mechanics were the first corrections to classical mechanics.
Outside the realm of quantum physics, the various aether theories in classical physics, which supposed a "fifth element"
such as the Luminiferous aether, were nullified by the Michelson-Morley experiment—an attempt to detect the motion
of earth through the aether. In biology, Darwinism gained acceptance, promoting the concept of adaptation in the
theory of natural selection. The fields of geology, astronomy and psychology also made strides and gained new insights.
In medicine, there were advances in medical theory and treatments. The assertions of Chinese philosophy began to
integrate concepts of Western philosophy, as steps toward modernization. By the time of the Xinhai Revolution in
1911, there were many calls, such as the May Fourth Movement, to completely abolish the old imperial institutions
and practices of China. There were attempts to incorporate democracy, republicanism, and industrialism into Chinese
philosophy, notably by Sun Yat-Sen (Sūn yì xiān, in one Mandarin form of the name) at the beginning of the 20th century.
Mao Zedong (Máo zé dōng) added Marxist-Leninist thought. When the Communist Party of China took over power, previous
schools of thought, excepting notably Legalism, were denounced as backward, and later even purged during the Cultural
Revolution. The spiritual philosophy of the Enlightenment, which had taken shape a century earlier, was challenged in various quarters around the 1900s. Developed from earlier secular traditions, modern Humanist ethical philosophies
affirmed the dignity and worth of all people, based on the ability to determine right and wrong by appealing to universal
human qualities, particularly rationality, without resorting to the supernatural or alleged divine authority from
religious texts. For liberal humanists such as Rousseau and Kant, the universal law of reason guided the way toward
total emancipation from any kind of tyranny. These ideas were challenged, for example by the young Karl Marx, who
criticized the project of political emancipation (embodied in the form of human rights), asserting it to be symptomatic
of the very dehumanization it was supposed to oppose. For Friedrich Nietzsche, humanism was nothing more than a secular
version of theism. In his Genealogy of Morals, he argues that human rights exist as a means for the weak to collectively
constrain the strong. On this view, such rights do not facilitate emancipation of life, but rather deny it. In the
20th century, the notion that human beings are rationally autonomous was challenged by the concept that humans were
driven by unconscious irrational desires. Albert Einstein is known for his theories of special relativity and general
relativity. He also made important contributions to statistical mechanics, especially his mathematical treatment
of Brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation.
Despite his reservations about its interpretation, Einstein also made contributions to quantum mechanics and, indirectly,
quantum field theory, primarily through his theoretical studies of the photon. The Federation of Australia in 1901 was the process by which the six separate British self-governing colonies of New South Wales, Queensland, South Australia,
Tasmania, Victoria and Western Australia formed one nation. They kept the systems of government that they had developed
as separate colonies but also would have a federal government that was responsible for matters concerning the whole
nation. When the Constitution of Australia came into force, the colonies collectively became states of the Commonwealth
of Australia. The last days of the Qing Dynasty were marked by civil unrest and foreign invasions. Responding to
these civil failures and discontent, the Qing Imperial Court did attempt to reform the government in various ways,
such as the decision to draft a constitution in 1906, the establishment of provincial legislatures in 1909, and the
preparation for a national parliament in 1910. However, many of these measures were opposed by the conservatives
of the Qing Court, and many reformers were either imprisoned or executed outright. The failures of the Imperial Court
to enact such reforming measures of political liberalization and modernization caused the reformists to steer toward
the road of revolution. In 1912, the Republic of China was established and Sun Yat-sen was inaugurated in Nanjing
as the first Provisional President. But power in Beijing already had passed to Yuan Shikai, who had effective control
of the Beiyang Army, the most powerful military force in China at the time. To prevent civil war and possible foreign
intervention from undermining the infant republic, leaders agreed to the Army's demand that China be united under a Beijing government. On March 10, in Beijing, Yuan Shikai was sworn in as the second Provisional President of the Republic of China.
After the early 20th century revolutions, shifting alliances of China's regional warlords waged war for control of
the Beijing government. Despite the fact that various warlords gained control of the government in Beijing during
the warlord era, this did not constitute a new era of control or governance, because other warlords did not acknowledge
the transitory governments in this period and were a law unto themselves. These military-dominated governments were
collectively known as the Beiyang government. The warlord era ended around 1927. The fourth year of the 20th century saw the Russo-Japanese War, which opened with the Battle of Port Arthur and established the Empire of Japan as a world power. The
Russians were in constant pursuit of a warm water port on the Pacific Ocean, for their navy as well as for maritime
trade. The Manchurian Campaign of the Russian Empire was fought against the Japanese over Manchuria and Korea. The
major theatres of operations were Southern Manchuria, specifically the area around the Liaodong Peninsula and Mukden,
and the seas around Korea, Japan, and the Yellow Sea. The resulting campaigns, in which the fledgling Japanese military
consistently attained victory over the Russian forces arrayed against them, were unexpected by world observers. These
victories, as time transpired, would dramatically transform the distribution of power in East Asia, resulting in
a reassessment of Japan's recent entry onto the world stage. The embarrassing string of defeats increased Russian
popular dissatisfaction with the inefficient and corrupt Tsarist government. The Edwardian era in the United Kingdom
is the period spanning the reign of King Edward VII, sometimes extended to the start of the First World War, including the years surrounding
the sinking of the RMS Titanic. In the early years of the period, the Second Boer War in South Africa split the country
into anti- and pro-war factions. The imperial policies of the Conservatives eventually proved unpopular and in the
general election of 1906 the Liberals won a huge landslide. The Liberal government was unable to proceed with all
of its radical programme without the support of the House of Lords, which was largely Conservative. Conflict between
the two Houses of Parliament over the People's Budget led to a reduction in the power of the peers in 1910. The general
election in January that year returned a hung parliament with the balance of power held by Labour and Irish Nationalist
members. The causes of World War I included many factors, including the conflicts and antagonisms of the four decades
leading up to the war. The Triple Entente was the name given to the loose alignment between the United Kingdom, France,
and Russia after the signing of the Anglo-Russian Entente in 1907. The alignment of the three powers, supplemented
by various agreements with Japan, the United States, and Spain, constituted a powerful counterweight to the Triple
Alliance of Germany, Austria-Hungary, and Italy, the third having concluded an additional secret agreement with France
effectively nullifying her Alliance commitments. Militarism, alliances, imperialism, and nationalism played major
roles in the conflict. The immediate origins of the war lay in the decisions taken by statesmen and generals during
the July Crisis of 1914, the spark (or casus belli) for which was the assassination of Archduke Franz Ferdinand of
Austria. However, the crisis did not exist in a void; it came after a long series of diplomatic clashes between the
Great Powers over European and colonial issues in the decade prior to 1914 which had left tensions high. The diplomatic
clashes can be traced to changes in the balance of power in Europe since 1870. An example is the Baghdad Railway
which was planned to connect the Ottoman Empire cities of Konya and Baghdad with a line through modern-day Turkey,
Syria and Iraq. The railway became a source of international disputes during the years immediately preceding World
War I. Although it has been argued that they were resolved in 1914 before the war began, it has also been argued
that the railroad was a cause of the First World War. Fundamentally the war was sparked by tensions over territory
in the Balkans. Austria-Hungary competed with Serbia and Russia for territory and influence in the region and they
pulled the rest of the great powers into the conflict through their various alliances and treaties. The Balkan Wars
were two wars in South-eastern Europe in 1912–1913 in the course of which the Balkan League (Bulgaria, Montenegro,
Greece, and Serbia) first captured the Ottoman-held remaining parts of Thessaly, Macedonia, Epirus, Albania and most of Thrace, and then fell out over the division of the spoils, with Romania joining the fighting this time. The First World
War began in 1914 and lasted to the final Armistice in 1918. The Allied Powers, led by the British Empire, France,
Russia until March 1918, Japan and the United States after 1917, defeated the Central Powers, led by the German Empire,
Austro-Hungarian Empire and the Ottoman Empire. The war caused the disintegration of four empires—the Austro-Hungarian,
German, Ottoman, and Russian ones—as well as radical change in the European and West Asian maps. The Allied powers
before 1917 are referred to as the Triple Entente, and the Central Powers are referred to as the Triple Alliance.
Much of the fighting in World War I took place along the Western Front, within a system of opposing manned trenches
and fortifications (separated by a "No man's land") running from the North Sea to the border of Switzerland. On the
Eastern Front, the vast eastern plains and limited rail network prevented a trench warfare stalemate from developing,
although the scale of the conflict was just as large. Hostilities also occurred on and under the sea and—for the
first time—from the air. More than 9 million soldiers died on the various battlefields, and nearly that many more
on the participating countries' home fronts on account of food shortages and genocide committed under the cover of
various civil wars and internal conflicts. Notably, more people died of the worldwide influenza outbreak at the end
of the war and shortly after than died in the hostilities. The unsanitary conditions engendered by the war, severe
overcrowding in barracks, wartime propaganda interfering with public health warnings, and migration of so many soldiers
around the world helped the outbreak become a pandemic. Ultimately, World War I created a decisive break with the
old world order that had emerged after the Napoleonic Wars, which was modified by the mid-19th century's nationalistic
revolutions. The results of World War I would be important factors in the development of World War II approximately
20 years later. More immediate to the time, the partitioning of the Ottoman Empire was a political event that redrew
the political boundaries of West Asia. The huge conglomeration of territories and peoples formerly ruled by the Sultan
of the Ottoman Empire was divided into several new nations. The partitioning brought the creation of the modern Arab
world and the Republic of Turkey. The League of Nations granted France mandates over Syria and Lebanon and granted
the United Kingdom mandates over Mesopotamia and Palestine (which was later divided into two regions: Palestine and
Transjordan). Parts of the Ottoman Empire on the Arabian Peninsula became parts of what are today Saudi Arabia and
Yemen. The Russian Revolution is the series of revolutions in Russia in 1917, which destroyed the Tsarist autocracy
and led to the creation of the Soviet Union. Following the abdication of Nicholas II of Russia, the Russian Provisional
Government was established. In October 1917, a red faction revolution occurred in which the Red Guard, armed groups
of workers and deserting soldiers directed by the Bolshevik Party, seized control of Saint Petersburg (then known
as Petrograd) and began an immediate armed takeover of cities and villages throughout the former Russian Empire.
Another notable action in 1917 was the armistice signed between Russia and the Central Powers at Brest-Litovsk. As a condition for peace, the subsequent treaty imposed by the Central Powers ceded huge portions of the former Russian Empire to Imperial Germany and the Ottoman Empire, greatly upsetting nationalists and conservatives. The Bolsheviks made peace
with the German Empire and the Central Powers, as they had promised the Russian people prior to the Revolution. Vladimir
Lenin's decision has been attributed to his sponsorship by the foreign office of Wilhelm II, German Emperor, offered
by the latter in hopes that with a revolution, Russia would withdraw from World War I. This suspicion was bolstered
by the German Foreign Ministry's sponsorship of Lenin's return to Petrograd. The Western Allies expressed their dismay
at the Bolsheviks. The Russian Civil War was a multi-party war that occurred within the former Russian
Empire after the Russian provisional government collapsed and the Soviets under the domination of the Bolshevik party
assumed power, first in Petrograd (St. Petersburg) and then in other places. In the wake of the October Revolution,
the old Russian Imperial Army had been demobilized; the volunteer-based Red Guard was the Bolsheviks' main military
force, augmented by an armed military component of the Cheka, the Bolshevik state security apparatus. Mandatory conscription of the rural peasantry into the Red Army was instituted. Opposition of rural Russians to Red Army
conscription units was overcome by taking hostages and shooting them when necessary in order to force compliance.
Former Tsarist officers were utilized as "military specialists" (voenspetsy), taking their families hostage in order
to ensure loyalty. At the start of the war, three-fourths of the Red Army officer corps was composed of former Tsarist
officers. By its end, 83% of all Red Army divisional and corps commanders were ex-Tsarist soldiers. The principal
fighting occurred between the Bolshevik Red Army and the forces of the White Army. Many foreign armies warred against
the Red Army, notably the Allied Forces, yet many volunteer foreigners fought in both sides of the Russian Civil
War. Other nationalist and regional political groups also participated in the war, including the Ukrainian nationalist
Green Army, the Ukrainian anarchist Black Army and Black Guards, and warlords such as Ungern von Sternberg. The most
intense fighting took place from 1918 to 1920. Major military operations ended on 25 October 1922 when the Red Army
occupied Vladivostok, previously held by the Provisional Priamur Government. The last enclave of the White Forces
was the Ayano-Maysky District on the Pacific coast. The majority of the fighting ended in 1920 with the defeat of
General Pyotr Wrangel in the Crimea, but a notable resistance in certain areas continued until 1923 (e.g., Kronstadt
Uprising, Tambov Rebellion, Basmachi Revolt, and the final resistance of the White movement in the Far East). The
May Fourth Movement helped to rekindle the then-fading cause of republican revolution. In 1917 Sun Yat-sen had become
commander-in-chief of a rival military government in Guangzhou in collaboration with southern warlords. Sun's efforts
to obtain aid from the Western democracies were ignored, however, and in 1920 he turned to the Soviet Union, which
had recently achieved its own revolution. The Soviets sought to befriend the Chinese revolutionists by offering scathing
attacks on Western imperialism. But for political expediency, the Soviet leadership initiated a dual policy of support
for both Sun and the newly established Chinese Communist Party (CCP). In North America, especially the first half
of this period, people experienced considerable prosperity in the Roaring Twenties. The social and societal upheaval
known as the Roaring Twenties began in North America and spread to Europe in the aftermath of World War I. The Roaring
Twenties, often called "The Jazz Age", saw an exposition of social, artistic, and cultural dynamism. 'Normalcy' returned
to politics, jazz music blossomed, the flapper redefined modern womanhood, and Art Deco peaked. The spirit of the Roaring
Twenties was marked by a general feeling of discontinuity associated with modernity, a break with traditions. Everything
seemed to be feasible through modern technology. New technologies, especially automobiles, movies and radio, spread 'modernity' to a large part of the population. The 1920s saw a general favor of practicality, in architecture as
well as in daily life. The 1920s was further distinguished by several inventions and discoveries, extensive industrial
growth and the rise in consumer demand and aspirations, and significant changes in lifestyle. Europe spent these
years rebuilding and coming to terms with the vast human cost of the conflict. The economy of the United States became
increasingly intertwined with that of Europe. In Germany, the Weimar Republic endured episodes of political and
economic turmoil, which culminated with the German hyperinflation of 1923 and the failed Beer Hall Putsch of that
same year. When Germany could no longer afford reparations payments, Wall Street invested heavily in European debt to keep
the European economy afloat as a large consumer market for American mass-produced goods. By the middle of the decade,
economic development soared in Europe, and the Roaring Twenties broke out in Germany, Britain and France, the second
half of the decade becoming known as the "Golden Twenties". In France and francophone Canada, they were also called
the "années folles" ("Crazy Years"). Worldwide prosperity changed dramatically with the onset of the Great Depression
in 1929. The Wall Street Crash of 1929 served to punctuate the end of the previous era, as The Great Depression set
in. The Great Depression was a worldwide economic downturn starting in most places in 1929 and ending at different
times in the 1930s or early 1940s for different countries. It was the largest and most important economic depression
in the 20th century, and is used in the 21st century as an example of how far the world's economy can fall. The depression
had devastating effects in virtually every country, rich or poor. International trade plunged by half to two-thirds,
as did personal income, tax revenue, prices and profits. Cities all around the world were hit hard, especially those
dependent on heavy industry. Construction was virtually halted in many countries. Farming and rural areas suffered
as crop prices fell by roughly 60 percent. Facing plummeting demand with few alternate sources of jobs, areas dependent
on primary sector industries suffered the most. The Great Depression ended at different times in different countries
with the effect lasting into the next era. America's Great Depression ended in 1941 with America's entry into World
War II. The majority of countries set up relief programs, and most underwent some sort of political upheaval, pushing
them to the left or right. In some countries, desperate citizens turned toward nationalist demagogues—the
most infamous being Adolf Hitler—setting the stage for the next era of war. The convulsion brought on by the worldwide
depression resulted in the rise of Nazism. In Asia, Japan became an ever more assertive power, especially with regards
to China. The interwar period was also marked by a radical change in the international order, away from the balance
of power that had dominated pre–World War I Europe. One main institution that was meant to bring stability was the
League of Nations, which was created after the First World War with the intention of maintaining world security and
peace and encouraging economic growth between member countries. The League was undermined by the bellicosity of Nazi
Germany, Imperial Japan, the Soviet Union, and Mussolini's Italy, and by the non-participation of the United States,
leading many to question its effectiveness and legitimacy. A series of international crises strained the League to
its limits, the earliest being the invasion of Manchuria by Japan and the Abyssinian crisis of 1935/36 in which Italy
invaded Abyssinia, one of the only free African nations at that time. The League tried to enforce economic sanctions
upon Italy, but to no avail. The incident highlighted French and British weakness, exemplified by their reluctance
to alienate Italy and lose her as their ally. The limited actions taken by the Western powers pushed Mussolini's
Italy towards alliance with Hitler's Germany anyway. The Abyssinian war showed Hitler how weak the League was and
encouraged the remilitarization of the Rhineland in flagrant disregard of the Treaty of Versailles. This was the
first in a series of provocative acts culminating in the invasion of Poland in September 1939 and the beginning of
the Second World War. Few Chinese had any illusions about Japanese designs on China. Hungry for raw materials and
pressed by a growing population, Japan initiated the seizure of Manchuria in September 1931 and established ex-Qing
emperor Puyi as head of the puppet state of Manchukuo in 1932. During the Sino-Japanese War (1937–1945), the loss
of Manchuria, and its vast potential for industrial development and war industries, was a blow to the Kuomintang
economy. The League of Nations, established at the end of World War I, was unable to act in the face of the Japanese
defiance. After 1940, conflicts between the Kuomintang and Communists became more frequent in the areas not under
Japanese control. The Communists expanded their influence wherever opportunities presented themselves through mass
organizations, administrative reforms, and the land- and tax-reform measures favoring the peasants—while the Kuomintang
attempted to neutralize the spread of Communist influence. The Second Sino-Japanese War had seen tensions rise between
Imperial Japan and the United States; events such as the Panay incident and the Nanking Massacre turned American
public opinion against Japan. With the occupation of French Indochina in the years of 1940–41, and with the continuing
war in China, the United States placed embargoes on Japan of strategic materials such as scrap metal and oil, which
were vitally needed for the war effort. The Japanese were faced with the option of either withdrawing from China
and losing face or seizing and securing new sources of raw materials in the resource-rich, European-controlled colonies
of South East Asia—specifically British Malaya and the Dutch East Indies (modern-day Indonesia). In 1940, Imperial
Japan signed the Tripartite Pact with Nazi Germany and Fascist Italy. On December 7, 1941, Japan attacked the United
States at Pearl Harbor, bringing it too into the war on the Allied side. China also joined the Allies, as eventually
did most of the rest of the world. China was in turmoil at the time, and attacked Japanese armies through guerrilla-type
warfare. By the beginning of 1942, the major combatants were aligned as follows: the British Commonwealth, the United
States, and the Soviet Union were fighting Germany and Italy; and the British Commonwealth, China, and the United
States were fighting Japan. The United Kingdom, the United States, the Soviet Union, and China were referred to as a
"trusteeship of the powerful" during World War II and were recognized as the Allied "Big Four" in the Declaration
by United Nations. These four countries were considered the "Four Policemen" or "Four Sheriffs" of the Allied powers
and the primary victors of World War II. From then through August 1945, battles raged across all of Europe, in the North
Atlantic Ocean, across North Africa, throughout Southeast Asia, throughout China, across the Pacific Ocean and in
the air over Japan. It is possible that around 62 million people died in the war; estimates vary greatly. About 60%
of all casualties were civilians, who died as a result of disease, starvation, genocide (in particular, the Holocaust),
and aerial bombing. The former Soviet Union and China suffered the most casualties. Estimates place deaths in the
Soviet Union at around 23 million, while China suffered about 10 million. No country lost a greater portion of its
population than Poland: approximately 5.6 million, or 16%, of its pre-war population of 34.8 million died. The Holocaust
(which roughly means "burnt whole") was the deliberate and systematic murder of millions of Jews and other "unwanted"
during World War II by the Nazi regime in Germany. Several differing views exist regarding whether it was intended
to occur from the war's beginning, or if the plans for it came about later. Regardless, persecution of Jews extended
well before the war even started, such as in the Kristallnacht (Night of Broken Glass). The Nazis used propaganda
to great effect to stir up anti-Semitic feelings within ordinary Germans. After World War II, Europe was informally
split into Western and Soviet spheres of influence. Western Europe later aligned as the North Atlantic Treaty Organization
(NATO) and Eastern Europe as the Warsaw Pact. There was a shift in power from Western Europe and the British Empire
to the two new superpowers, the United States and the Soviet Union. These two rivals would later face off in the
Cold War. In Asia, the defeat of Japan led to its democratization. China's civil war continued through and after
the war, resulting eventually in the establishment of the People's Republic of China. The former colonies of the
European powers began their road to independence. Over the course of the 20th century, the world's per-capita gross
domestic product grew by a factor of five, much more than all earlier centuries combined (including the 19th with
its Industrial Revolution). Many economists make the case that this understates the magnitude of growth, as many
of the goods and services consumed at the end of the 20th century, such as improved medicine (causing world life
expectancy to increase by more than two decades) and communications technologies, were not available at any price
at its beginning. However, the gulf between the world's rich and poor grew wider, and the majority of the global
population remained on the poor side of the divide. Still, advancing technology and medicine have had a great impact
even in the Global South. Large-scale industry and more centralized media made brutal dictatorships possible on an
unprecedented scale in the middle of the century, leading to wars that were also unprecedented. However, the increased
communications contributed to democratization. Technological developments included the development of airplanes and
space exploration, nuclear technology, advancement in genetics, and the dawning of the Information Age. The Soviet
Union created the Eastern Bloc of countries that it occupied, annexing some as Soviet Socialist Republics and maintaining
others as satellite states that would later form the Warsaw Pact. The United States and various western European
countries began a policy of "containment" of communism and forged myriad alliances to this end, including NATO. Several
of these western countries also coordinated efforts regarding the rebuilding of western Europe, including western
Germany, which the Soviets opposed. In other regions of the world, such as Latin America and Southeast Asia, the
Soviet Union fostered communist revolutionary movements, which the United States and many of its allies opposed and,
in some cases, attempted to "roll back". Many countries were prompted to align themselves with the nations that would
later form either NATO or the Warsaw Pact, though other movements would also emerge. The Cold War saw periods of
both heightened tension and relative calm. International crises arose, such as the Berlin Blockade (1948–1949), the
Korean War (1950–1953), the Berlin Crisis of 1961, the Vietnam War (1959–1975), the Cuban Missile Crisis (1962),
the Soviet war in Afghanistan (1979–1989) and NATO exercises in November 1983. There were also periods of reduced
tension as both sides sought détente. Direct military attacks on adversaries were deterred by the potential for mutual
assured destruction using deliverable nuclear weapons. In the Cold War era, the Generation of Love and the rise of
computers changed society in very different, complex ways, including higher social and local mobility. The Cold War
drew to a close in the late 1980s and the early 1990s. The United States under President Ronald Reagan increased
diplomatic, military, and economic pressure on the Soviet Union, which was already suffering from severe economic
stagnation. In the second half of the 1980s, newly appointed Soviet leader Mikhail Gorbachev introduced the perestroika
and glasnost reforms. The Soviet Union collapsed in 1991, leaving the United States as the dominant military power,
though Russia retained much of the massive Soviet nuclear arsenal. In Latin America in the 1970s, leftists acquired
a significant political influence which prompted the right-wing, ecclesiastical authorities and a large portion of
the individual country's upper class to support coup d'états to avoid what they perceived as a communist threat.
This was further fueled by Cuban and United States intervention which led to a political polarization. Most South
American countries were in some periods ruled by military dictatorships that were supported by the United States
of America. In the 1970s, the regimes of the Southern Cone collaborated in Operation Condor killing many leftist
dissidents, including some urban guerrillas. However, by the early 1990s all countries had restored their democracies.
The Space Age is a period encompassing the activities related to the Space Race, space exploration, space technology,
and the cultural developments influenced by these events. The Space Age began with the development of several technologies
that culminated with the launch of Sputnik 1 by the Soviet Union. This was the world's first artificial satellite,
orbiting the Earth in 98.1 minutes and weighing in at 83 kg. The launch of Sputnik 1 ushered a new era of political,
scientific and technological achievements that became known as the Space Age. The Space Age was characterized by
rapid development of new technology in a close race mostly between the United States and the Soviet Union. The Space
Age brought the first human spaceflight during the Vostok programme and reached its peak with the Apollo program
which captured the imagination of much of the world's population. The landing of Apollo 11 was an event watched by
over 500 million people around the world and is widely recognized as one of the defining moments of the 20th century.
Since then and with the end of the space race due to the dissolution of the Soviet Union, public attention has largely
moved to other areas.
The phrase "51st state" can be used in a positive sense, meaning that a region or territory is so aligned, supportive, and
conducive with the United States, that it is like a U.S. state. It can also be used in a pejorative sense, meaning
an area or region is perceived to be under excessive American cultural or military influence or control. In various
countries around the world, people who believe their local or national culture has become too Americanized sometimes
use the term "51st state" in reference to their own countries. Under Article IV, Section Three of the United States
Constitution, which outlines the relationship among the states, Congress has the power to admit new states to the
union. The states are required to give "full faith and credit" to the acts of each other's legislatures and courts,
which is generally held to include the recognition of legal contracts, marriages, and criminal judgments. The states
are guaranteed military and civil defense by the federal government, which is also obliged by Article IV, Section
Four, to "guarantee to every state in this union a republican form of government". Puerto Rico has been discussed
as a potential 51st state of the United States. In a 2012 status referendum a majority of voters, 54%, expressed
dissatisfaction with the current political relationship. In a separate question, 61% of voters supported statehood
(excluding the 26% of voters who left this question blank). On December 11, 2012, Puerto Rico's legislature resolved
to request that the President and the U.S. Congress act on the results, end the current form of territorial status
and begin the process of admitting Puerto Rico to the Union as a state. Since 1898, Puerto Rico has had limited representation
in the Congress in the form of a Resident Commissioner, a nonvoting delegate. The 110th Congress returned the Commissioner's
power to vote in the Committee of the Whole, but not on matters where its vote would be decisive.
Puerto Rico holds presidential primaries or caucuses for the Democratic Party and the Republican
Party to select delegates to the respective parties' national conventions, although it is not granted
electors in the Electoral College. As American citizens, Puerto Ricans can vote in U.S. presidential elections, provided
they reside in one of the 50 states or the District of Columbia and not in Puerto Rico itself. Residents of Puerto
Rico pay U.S. federal taxes, including import/export taxes, federal commodity taxes, and Social Security taxes, thereby
contributing to the federal government. Most Puerto Rico residents do not pay federal income tax but do pay federal payroll taxes
(Social Security and Medicare). However, federal employees, those who do business with the federal government, Puerto
Rico–based corporations that intend to send funds to the U.S. and others do pay federal income taxes. Puerto Ricans
may enlist in the U.S. military. Puerto Ricans have participated in all American wars since 1898; 52 Puerto Ricans
had been killed in the Iraq War and War in Afghanistan by November 2012. Puerto Rico has been under U.S. sovereignty
for over a century, since it was ceded to the U.S. by Spain following the end of the Spanish–American War, and Puerto
Ricans have been U.S. citizens since 1917. The island's ultimate status had not been determined as of 2012,
and its residents do not have voting representation in their federal government. Puerto Rico has limited representation
in the U.S. Congress in the form of a Resident Commissioner, a delegate with limited voting rights. Like the states,
Puerto Rico has self-rule, a republican form of government organized pursuant to a constitution adopted by its people,
and a bill of rights. This constitution was created when the U.S. Congress directed local government to organize
a constitutional convention to write the Puerto Rico Constitution in 1951. The acceptance of that constitution by
Puerto Rico's electorate, the U.S. Congress, and the U.S. president occurred in 1952. In addition, the rights, privileges
and immunities attendant to United States citizens are "respected in Puerto Rico to the same extent as though Puerto
Rico were a state of the union" through the express extension of the Privileges and Immunities Clause of the U.S.
Constitution by the U.S. Congress in 1948. Puerto Rico is designated in its constitution as the "Commonwealth of
Puerto Rico". The Constitution of Puerto Rico which became effective in 1952 adopted the name of Estado Libre Asociado
(literally translated as "Free Associated State"), officially translated into English as Commonwealth, for its body
politic. The island is under the jurisdiction of the Territorial Clause of the U.S. Constitution, which has led to
doubts about the finality of the Commonwealth status for Puerto Rico. In addition, all people born in Puerto Rico
become citizens of the U.S. at birth (under provisions of the Jones–Shafroth Act in 1917), but citizens residing
in Puerto Rico cannot vote for president nor for full members of either house of Congress. Statehood would grant
island residents full voting rights at the Federal level. The Puerto Rico Democracy Act (H.R. 2499) was approved
on April 29, 2010, by the United States House of Representatives 223–169, but was not approved by the Senate before
the end of the 111th Congress. It would have provided for a federally sanctioned self-determination process for the
people of Puerto Rico. This act would provide for referendums to be held in Puerto Rico to determine the island's
ultimate political status. It had also been introduced in 2007. In November 2012, a referendum resulted in 54 percent
of respondents voting to reject the current status under the territorial clause of the U.S. Constitution, while a
second question resulted in 61 percent of voters identifying statehood as the preferred alternative to the current
territorial status. The 2012 referendum was by far the most successful referendum for statehood advocates and support
for statehood has risen in each successive popular referendum. However, more than one in four voters abstained from
answering the question on the preferred alternative status. Statehood opponents have argued that, with abstentions
included, the statehood option garnered only about 44 to 45 percent of the votes, a share that falls short of a 50
percent majority. The
Washington Post, The New York Times and the Boston Herald have published opinion pieces expressing support for the
statehood of Puerto Rico. On November 8, 2012, Washington, D.C. newspaper The Hill published an article saying that
Congress would likely ignore the results of the referendum due to the circumstances behind the votes. U.S. Congressman
Luis Gutiérrez and U.S. Congresswoman Nydia Velázquez, both of Puerto Rican ancestry, agreed with The Hill's statements.
Shortly after the results were published Puerto Rico-born U.S. Congressman José Enrique Serrano commented "I was
particularly impressed with the outcome of the 'status' referendum in Puerto Rico. A majority of those voting signaled
the desire to change the current territorial status. In a second question an even larger majority asked to become
a state. This is an earthquake in Puerto Rican politics. It will demand the attention of Congress, and a definitive
answer to the Puerto Rican request for change. This is a history-making moment where voters asked to move forward."
Several days after the referendum, the Resident Commissioner Pedro Pierluisi, Governor Luis Fortuño, and Governor-elect
Alejandro García Padilla wrote separate letters to the President of the United States Barack Obama addressing the
results of the voting. Pierluisi urged Obama to begin legislation in favor of the statehood of Puerto Rico, in light
of its win in the referendum. Fortuño urged him to move the process forward. García Padilla asked him to reject the
results because of their ambiguity. The White House stance related to the November 2012 plebiscite was that the results
were clear, the people of Puerto Rico want the issue of status resolved, and a majority chose statehood in the second
question. Former White House director of Hispanic media stated, "Now it is time for Congress to act and the administration
will work with them on that effort, so that the people of Puerto Rico can determine their own future." On May 15,
2013, Resident Commissioner Pierluisi introduced H.R. 2000 to Congress to "set forth the process for Puerto Rico
to be admitted as a state of the Union," asking for Congress to vote on ratifying Puerto Rico as the 51st state.
On February 12, 2014, Senator Martin Heinrich introduced a bill in the US Senate. The bill would require a binding
referendum to be held in Puerto Rico asking whether the territory wants to be admitted as a state. In the event of
a yes vote, the president would be asked to submit legislation to Congress to admit Puerto Rico as a state. Washington,
D.C. is often mentioned as a candidate for statehood. In Federalist No. 43 of The Federalist Papers, James Madison
considered the implications of the definition of the "seat of government" found in the United States Constitution.
Although he noted potential conflicts of interest, and the need for a "municipal legislature for local purposes,"
Madison did not address the district's role in national voting. Legal scholars disagree on whether a simple act of
Congress can admit the District as a state, due to its status as the seat of government of the United States, which
Article I, Section 8 of the Constitution requires to be under the exclusive jurisdiction of Congress; depending on
the interpretation of this text, admission of the full District as a state may require a Constitutional amendment,
which is much more difficult to enact. However, the Constitution does not set a minimum size for the District. Its
size has already changed once before, when Virginia reclaimed the portion of the District south of the Potomac. So
the constitutional requirement for a federal district can be satisfied by reducing its size to the small central
core of government buildings and monuments, giving the rest of the territory to the new state. Washington, D.C. residents
who support the statehood movement sometimes use a shortened version of the Revolutionary War protest motto "No taxation
without representation", omitting the initial "No", denoting their lack of Congressional representation; the phrase
is now printed on newly issued Washington, D.C. license plates (although a driver may choose to have the Washington,
D.C. website address instead). President Bill Clinton's presidential limousine had the "Taxation without representation"
license plate late in his term, while President George W. Bush had the vehicle's plates changed shortly after beginning
his term in office. President Barack Obama had the license plates changed back to the protest style at the beginning
of his second term. This position was carried by the D.C. Statehood Party, a political party; it has since merged
with the local Green Party affiliate to form the D.C. Statehood Green Party. The nearest this movement ever came
to success was in 1978, when Congress passed the District of Columbia Voting Rights Amendment. Two years later in
1980, local citizens passed an initiative calling for a constitutional convention for a new state. In 1982, voters
ratified the constitution of the state, which was to be called New Columbia. The drive for statehood stalled in 1985,
however, when the Washington, D.C. Voting Rights Amendment failed because not enough states ratified the amendment
within the seven-year span specified. Other less likely contenders are Guam and the United States Virgin Islands,
both of which are unincorporated organized territories of the United States. Also, the Northern Mariana Islands and
American Samoa, an unorganized, unincorporated territory, could both attempt to gain statehood. Some proposals call
for the Virgin Islands to be admitted with Puerto Rico as one state (often known as the proposed "Commonwealth of
Prusvi", for Puerto Rico/U.S. Virgin Islands, or as "Puerto Virgo"), and for the amalgamation of U.S. territories
or former territories in the Pacific Ocean, in the manner of the "Greater Hawaii" concept of the 1960s. Guam and
the Northern Mariana Islands would be admitted as one state, along with Palau, the Federated States of Micronesia,
and the Marshall Islands (although these latter three entities are now separate sovereign nations, which have Compact
of Free Association relationships with the United States). Such a state would have a population of 412,381 (slightly
lower than Wyoming's population) and a land area of 911.82 square miles (2,361.6 km2) (slightly smaller than Rhode
Island). American Samoa could possibly be part of such a state, increasing the population to 467,900 and the area
to 988.65 square miles (2,560.6 km2). In late May 2008, Radio Australia reported signs of Guam and the Northern Mariana
Islands reunifying and becoming the 51st state. The Philippines has had small grassroots movements for U.S.
statehood. Statehood was originally part of the platform of the Progressive Party, then known as the Federalista Party;
the party dropped it in 1907, coinciding with its change of name. As recently as 2004, the concept of the Philippines becoming
a U.S. state has been part of a political platform in the Philippines. Supporters of this movement include Filipinos
who believe that the quality of life in the Philippines would be higher and that there would be less poverty there
if the Philippines were an American state or territory. Supporters also include Filipinos that had fought as members
of the United States Armed Forces in various wars during the Commonwealth period. In Canada, "the 51st state" is
a phrase generally used in such a way as to imply that if a certain political course is taken, Canada's destiny will
be as little more than a part of the United States. Examples include the Canada-United States Free Trade Agreement
in 1988, the debate over the creation of a common defense perimeter, and as a potential consequence of not adopting
proposals intended to resolve the issue of Quebec sovereignty, the Charlottetown Accord in 1992 and the Clarity Act
in 1999. The phrase is usually used in local political debates, in polemic writing or in private conversations. It
is rarely used by politicians themselves in a public context, although at certain times in Canadian history political
parties have used other similarly loaded imagery. In the 1988 federal election, the Liberals asserted that the proposed
Free Trade Agreement amounted to an American takeover of Canada—notably, the party ran an ad in which Progressive
Conservative (PC) strategists, upon the adoption of the agreement, slowly erased the Canada-U.S. border from a desktop
map of North America. Within days, however, the PCs responded with an ad which featured the border being drawn back
on with a permanent marker, as an announcer intoned "Here's where we draw the line." The implication has historical
basis and dates to the breakup of British America during the American Revolution. The colonies that had confederated
to form the United States invaded Canada (at the time a term referring specifically to the modern-day provinces of
Quebec and Ontario, which had only been in British hands since 1763) at least twice, neither time succeeding in taking
control of the territory. The first invasion was during the Revolution, under the assumption that French-speaking
Canadians' presumed hostility towards British colonial rule combined with the Franco-American alliance would make
them natural allies to the American cause; the Continental Army successfully recruited two Canadian regiments for
the invasion. That invasion's failure forced the members of those regiments into exile, and they settled mostly in
upstate New York. The Articles of Confederation, written during the Revolution, included a provision for Canada to
join the United States, should they ever decide to do so, without needing to seek U.S. permission as other states
would. The United States again invaded Canada during the War of 1812, but this effort was made more difficult due
to the large number of Loyalist Americans that had fled to what is now Ontario and still resisted joining the republic.
The Hunter Patriots in the 1830s and the Fenian raids after the American Civil War were private attacks on Canada
from the U.S. Several U.S. politicians in the 19th century also spoke in favour of annexing Canada. In the late 1940s,
during the last days of the Dominion of Newfoundland (at the time a dominion-dependency in the Commonwealth and independent
of Canada), there was mainstream, though not majority, support for Newfoundland to form an economic union with
the United States, thanks to the efforts of the Economic Union Party and significant U.S. investment in Newfoundland
stemming from the U.S.-British alliance in World War II. The movement ultimately failed when, in a 1948 referendum,
voters narrowly chose to confederate with Canada (the Economic Union Party supported an independent "responsible
government" that they would then push toward their goals). In the United States, the term "the 51st state" when applied
to Canada can serve to highlight the similarities and close relationship between the United States and Canada. Sometimes
the term is used disparagingly, intended to deride Canada as an unimportant neighbor. In the Quebec general election,
1989, the political party Parti 51 ran 11 candidates on a platform of Quebec seceding from Canada to join the United
States (with its leader, André Perron, claiming Quebec could not survive as an independent nation). The party attracted
just 3,846 votes across the province, 0.11% of the total votes cast. In comparison, the other parties in favour of
sovereignty of Quebec in that election got 40.16% (PQ) and 1.22% (NPDQ). Due to the geographical proximity of the Central
American countries to the U.S., which exerts powerful military, economic, and political influence over the region, there were several
movements and proposals by the United States during the 19th and 20th centuries to annex some or all of the Central
American republics (Costa Rica, El Salvador, Guatemala, Honduras with the formerly British-ruled Bay Islands, Nicaragua,
Panama which had the U.S.-ruled Canal Zone territory from 1903 to 1979, and formerly British Honduras or Belize since
1981). However, the U.S. never acted on these proposals, some of which were never formally put forward
or seriously considered. In 2001, El Salvador adopted the U.S. dollar as its currency, while Panama has used it for
decades due to its ties to the Canal Zone. Cuba, like many Spanish territories, wanted to break free from Spain.
A pro-independence movement in Cuba was supported by the U.S., and Cuban guerrilla leaders wanted annexation to the
United States, but Cuban revolutionary leader José Martí called for Cuban nationhood. When the U.S. battleship Maine
sank in Havana Harbor, the U.S. blamed Spain and the Spanish–American War broke out in 1898. After the U.S. won,
Spain relinquished claim of sovereignty over territories, including Cuba. The U.S. administered Cuba as a protectorate
until 1902. Several decades later in 1959, the corrupt Cuban government of U.S.-backed Fulgencio Batista was overthrown
by Fidel Castro. Castro installed a Marxist–Leninist government allied with the Soviet Union, which has been in power
ever since. Several websites assert that Israel is the 51st state due to the annual funding and defense support it
receives from the United States. An example of this concept can be found in 2003 when Martine Rothblatt published
a book called Two Stars for Peace that argued for the addition of Israel and the Palestinian territories surrounding
it as the 51st state in the Union. The American State of Canaan is a book published in March 2009 by Prof. Alfred de Grazia,
a political scientist and sociologist, proposing the creation of the 51st and 52nd states from Israel and the Palestinian
territories. In Article 3 of the Treaty of San Francisco between the Allied Powers and Japan, which came into force
in April 1952, the U.S. put the outlying islands of the Ryukyus, including the island of Okinawa—home to over 1 million
Okinawans related to the Japanese—and the Bonin Islands, the Volcano Islands, and Iwo Jima into U.S. trusteeship.
All these trusteeships were slowly returned to Japanese rule. Okinawa was returned on May 15, 1972, but the U.S.
stations troops in the island's bases as a defense for Japan. In 2010 there was an attempt to register a 51st State
Party with the New Zealand Electoral Commission. The party advocates New Zealand becoming the 51st state of the United
States of America. The party's secretary is Paulus Telfer, a former Christchurch mayoral candidate. On February 5,
2010, the party applied to register a logo with the Electoral Commission. The logo – a US flag with 51 stars – was
rejected by the Electoral Commission on the grounds that it was likely to cause confusion or mislead electors. As
of 2014, the party remains unregistered and cannot appear on a ballot. Albania has often been called the
51st state for its perceived strongly pro-American positions, mainly because of the United States' policies towards
it. In reference to President George W. Bush's 2007 European tour, Edi Rama, Tirana's mayor and leader of the opposition
Socialists, said: "Albania is for sure the most pro-American country in Europe, maybe even in the world ... Nowhere
else can you find such respect and hospitality for the President of the United States. Even in Michigan, he wouldn't
be as welcome." At the time of ex-Secretary of State James Baker's visit in 1992, there was even a move to hold a
referendum declaring the country as the 51st American state. In addition to Albania, Kosovo, which is predominantly
Albanian, is seen as a 51st state due to the heavy presence and influence of the United States. The US has stationed troops
and its largest base outside US territory, Camp Bondsteel, in the territory since 1999. During World War II, when
Denmark was occupied by Nazi Germany, the United States briefly took control of Greenland for its protection and for use as a military staging ground.
In 1946, the United States offered to buy Greenland from Denmark for $100 million ($1.2 billion today) but Denmark
refused to sell it. Several politicians and others have in recent years argued that Greenland could hypothetically
be in a better financial situation as part of the United States; professor Gudmundur Alfredsson of the University of
Akureyri, for instance, suggested as much in 2014. One of the actual reasons behind US interest in Greenland could be
the vast natural resources of the island. According to Wikileaks, the U.S. appears to be highly interested in investing
in the resource base of the island and in tapping the vast expected hydrocarbons off the Greenlandic coast. Poland
has historically been staunchly pro-American, dating back to General Tadeusz Kościuszko and Casimir Pulaski's involvement
in the American Revolution. This pro-American stance was reinforced following favorable American intervention in
World War I (leading to the creation of an independent Poland) and the Cold War (culminating in a Polish state independent
of Soviet influence). Poland contributed a large force to the "Coalition of the Willing" in Iraq. A quote referring
to Poland as "the 51st state" has been attributed to James Pavitt, then Central Intelligence Agency Deputy Director
for Operations, especially in connection to extraordinary rendition. The Party of Reconstruction in Sicily, which
claimed 40,000 members in 1944, campaigned for Sicily to be admitted as a U.S. state. This party was one of several
Sicilian separatist movements active after the downfall of Italian Fascism. Sicilians felt neglected or underrepresented
by the Italian government after the annexation of 1861 that ended the rule of the Kingdom of the Two Sicilies based
in Naples. The large population of Sicilians in America and the American-led Allied invasion of Sicily in July–August
1943 may have contributed to the sentiment. There are four categories of terra nullius, land that is unclaimed by
any state: the small unclaimed territory of Bir Tawil between Egypt and Sudan, Antarctica, the oceans, and celestial
bodies such as the Moon or Mars. In the last three of these, international treaties (the Antarctic Treaty, the United
Nations Convention on the Law of the Sea, and the Outer Space Treaty respectively) prevent colonization and potential
statehood of any of these uninhabited (and, given current technology, not permanently inhabitable) territories.
An antenna (plural antennae or antennas), or aerial, is an electrical device which converts electric power into radio waves,
and vice versa. It is usually used with a radio transmitter or radio receiver. In transmission, a radio transmitter
supplies an electric current oscillating at radio frequency (i.e. a high frequency alternating current (AC)) to the
antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves).
In reception, an antenna intercepts some of the power of an electromagnetic wave in order to produce a tiny voltage
at its terminals, that is applied to a receiver to be amplified. Typically an antenna consists of an arrangement
of metallic conductors (elements), electrically connected (often through a transmission line) to the receiver or
transmitter. An oscillating current of electrons forced through the antenna by a transmitter will create an oscillating
magnetic field around the antenna elements, while the charge of the electrons also creates an oscillating electric
field along the elements. These time-varying fields radiate away from the antenna into space as a moving transverse
electromagnetic field wave. Conversely, during reception, the oscillating electric and magnetic fields of an incoming
radio wave exert force on the electrons in the antenna elements, causing them to move back and forth, creating oscillating
currents in the antenna. The words antenna (plural: antennas in US English, although both "antennas" and "antennae"
are used in International English) and aerial are used interchangeably. Occasionally the term "aerial" is used to
mean specifically a wire antenna. The professional plural, however, is "antennas", as in the title of the major international
technical journal, the IEEE Transactions on Antennas and Propagation. In the United Kingdom and other areas where British
English is used, the term aerial is sometimes used, although 'antenna' has been universal in professional use for many years. The origin of the word antenna relative
to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began
testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire
"aerials". Marconi discovered that by raising the "aerial" wire above the ground and connecting the other side of
his transmitter to ground, the transmission range was increased. Soon he was able to transmit signals over a hill,
a distance of approximately 2.4 kilometres (1.5 mi). In Italian a tent pole is known as l'antenna centrale, and the
pole with the wire was simply called l'antenna. Until then, the radiating transmitting and receiving elements of wireless apparatus
were known simply as aerials or terminals. Antennas are required by any radio receiver or transmitter to couple its
electrical connection to the electromagnetic field. Radio waves are electromagnetic waves which carry signals through
the air (or through space) at the speed of light with almost no transmission loss. Radio transmitters and receivers
are used to convey signals (information) in systems including broadcast (audio) radio, television, mobile telephones,
Wi-Fi (WLAN) data networks, trunk lines and point-to-point communications links (telephone, data networks), satellite
links, many remote controlled devices such as garage door openers, and wireless remote sensors, among many others.
Radio waves are also used directly for measurements in technologies including radar, GPS, and radio astronomy. In
every case, the transmitters and receivers involved require antennas, although these are sometimes hidden
(such as the antenna inside an AM radio or inside a laptop computer equipped with Wi-Fi). One example of omnidirectional
antennas is the very common vertical antenna or whip antenna consisting of a metal rod (often, but not always, a
quarter of a wavelength long). A dipole antenna is similar but consists of two such conductors extending in opposite
directions, with a total length that is often, but not always, a half of a wavelength long. Dipoles are typically
oriented horizontally in which case they are weakly directional: signals are reasonably well radiated toward or received
from all directions with the exception of the direction along the conductor itself; this region is called the antenna
blind cone or null. Both the vertical and dipole antennas are simple in construction and relatively inexpensive.
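The wavelength fractions quoted above translate directly into physical element sizes via λ = c/f. A minimal sketch of that arithmetic (the function names are my own; real elements are typically a few percent shorter than these idealized free-space figures):

```python
# Idealized free-space element lengths for the two basic antenna types.
C = 299_792_458  # speed of light, m/s

def wavelength(freq_hz):
    """Free-space wavelength in metres: lambda = c / f."""
    return C / freq_hz

def quarter_wave_monopole(freq_hz):
    """Length of a quarter-wave vertical (whip) element."""
    return wavelength(freq_hz) / 4

def half_wave_dipole(freq_hz):
    """Total length of a half-wave dipole (two quarter-wave arms)."""
    return wavelength(freq_hz) / 2

# At 100 MHz (FM broadcast) the wavelength is about 3 m, so a whip is
# roughly 0.75 m and a half-wave dipole roughly 1.5 m end to end.
print(quarter_wave_monopole(100e6))  # ~0.75 m
print(half_wave_dipole(100e6))       # ~1.5 m
```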
The dipole antenna, which is the basis for most antenna designs, is a balanced component, with equal but opposite
voltages and currents applied at its two terminals through a balanced transmission line (or to a coaxial transmission
line through a so-called balun). The vertical antenna, on the other hand, is a monopole antenna. It is typically
connected to the inner conductor of a coaxial transmission line (or a matching network); the shield of the transmission
line is connected to ground. In this way, the ground (or any large conductive surface) plays the role of the second
conductor of a dipole, thereby forming a complete circuit. Since monopole antennas rely on a conductive ground, a
so-called grounding structure may be employed to provide a better ground contact to the earth, or to itself act
as a ground plane that performs that function regardless of (or in the absence of) an actual contact with the earth. Antennas
more complex than the dipole or vertical designs are usually intended to increase the directivity and consequently
the gain of the antenna. This can be accomplished in many different ways leading to a plethora of antenna designs.
The vast majority of designs are fed with a balanced line (unlike a monopole antenna) and are based on the dipole
antenna with additional components (or elements) which increase its directionality. Antenna "gain" in this instance
describes the concentration of radiated power into a particular solid angle of space, as opposed to the spherically
uniform radiation of the ideal radiator. The increased power in the desired direction is at the expense of that in
the undesired directions. Power is conserved, and there is no net power increase over that delivered from the power
source (the transmitter). For instance, a phased array consists of two or more simple antennas which are connected
together through an electrical network. This often involves a number of parallel dipole antennas with a certain spacing.
Depending on the relative phase introduced by the network, the same combination of dipole antennas can operate as
a "broadside array" (directional normal to a line connecting the elements) or as an "end-fire array" (directional
along the line connecting the elements). Antenna arrays may employ any basic (omnidirectional or weakly directional)
antenna type, such as dipole, loop or slot antennas. These elements are often identical. However a log-periodic dipole
array consists of a number of dipole elements of different lengths in order to obtain a somewhat directional antenna
having an extremely wide bandwidth: these are frequently used for television reception in fringe areas. The dipole
antennas composing it are all considered "active elements" since they are all electrically connected together (and
to the transmission line). On the other hand, a superficially similar dipole array, the Yagi-Uda Antenna (or simply
"Yagi"), has only one dipole element with an electrical connection; the other so-called parasitic elements interact
with the electromagnetic field in order to realize a fairly directional antenna but one which is limited to a rather
narrow bandwidth. The Yagi antenna has similar looking parasitic dipole elements but which act differently due to
their somewhat different lengths. There may be a number of so-called "directors" in front of the active element in
the direction of propagation, and usually a single (but possibly more) "reflector" on the opposite side of the active
element. At low frequencies (such as AM broadcast), arrays of vertical towers are used to achieve directionality
and they will occupy large areas of land. For reception, a long Beverage antenna can have significant directivity.
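The broadside/end-fire behaviour described above comes entirely from the phase with which the element signals combine. A sketch for the simplest case of two isotropic elements (a real dipole array would multiply this by the element's own pattern; the spacings and phases are illustrative):

```python
import cmath
import math

def array_factor(theta, d_over_lambda, delta):
    """Magnitude of the sum of two unit phasors for elements spaced
    d_over_lambda wavelengths apart, fed with relative phase delta
    (radians). theta is measured from the line connecting the elements.
    Ranges from 0 (null) to 2 (full reinforcement)."""
    psi = 2 * math.pi * d_over_lambda * math.cos(theta) + delta
    return abs(1 + cmath.exp(1j * psi))

# Half-wave spacing, in-phase feed: a broadside array.
# Maximum normal to the line of elements, null along it.
print(array_factor(math.pi / 2, 0.5, 0.0))  # ~2 (broadside maximum)
print(array_factor(0.0, 0.5, 0.0))          # ~0 (null along the line)

# Same two elements fed 180 degrees apart: an end-fire array.
print(array_factor(0.0, 0.5, -math.pi))     # ~2 (end-fire maximum)
```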
For non-directional portable use, a short vertical antenna or small loop antenna works well, with the main design
challenge being that of impedance matching. With a vertical antenna a loading coil at the base of the antenna may
be employed to cancel the reactive component of impedance; small loop antennas are tuned with parallel capacitors
for this purpose. An antenna lead-in is the transmission line (or feed line) which connects the antenna to a transmitter
or receiver. The antenna feed may refer to all components connecting the antenna to the transmitter or receiver,
such as an impedance matching network in addition to the transmission line. In a so-called aperture antenna, such
as a horn or parabolic dish, the "feed" may also refer to a basic antenna inside the entire system (normally at the
focus of the parabolic dish or at the throat of a horn) which could be considered the one active element in that
antenna system. A microwave antenna may also be fed directly from a waveguide in place of a (conductive) transmission
line. Monopole antennas consist of a single radiating element such as a metal rod, often mounted over a conducting
surface, a ground plane. One side of the feedline from the receiver or transmitter is connected to the rod, and the
other side to the ground plane, which may be the Earth. The most common form is the quarter-wave monopole which is
one-quarter of a wavelength long and has a gain of 5.12 dBi when mounted over a ground plane. Monopoles have an omnidirectional
radiation pattern, so they are used for broad coverage of an area, and have vertical polarization. The ground waves
used for broadcasting at low frequencies must be vertically polarized, so large vertical monopole antennas are used
for broadcasting in the MF, LF, and VLF bands. Small monopoles are used as nondirectional antennas on portable radios
in the HF, VHF, and UHF bands. The most widely used class of antenna, a dipole antenna consists of two symmetrical
radiators such as metal rods or wires, with one side of the balanced feedline from the transmitter or receiver attached
to each. A horizontal dipole radiates in two lobes perpendicular to the antenna's axis. A half-wave dipole, the most
common type, has two collinear elements, each a quarter wavelength long, and a gain of 2.15 dBi. Used individually
as low gain antennas, dipoles are also used as driven elements in many more complicated higher gain types of antennas.
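The 2.15 dBi figure quoted for the half-wave dipole is a logarithmic power ratio relative to an ideal isotropic radiator; converting between the two forms is a one-liner (the function names are my own):

```python
import math

def dbi_to_linear(gain_dbi):
    """Linear power ratio vs. an isotropic radiator for a gain in dBi."""
    return 10 ** (gain_dbi / 10)

def linear_to_dbi(ratio):
    """Gain in dBi for a linear power ratio."""
    return 10 * math.log10(ratio)

# 2.15 dBi corresponds to about 1.64x the power density of an isotropic
# radiator in the dipole's favoured directions.
print(round(dbi_to_linear(2.15), 2))  # 1.64
print(round(linear_to_dbi(2.0), 2))   # 3.01 (doubling power is ~+3 dB)
```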
A necessary condition for the aforementioned reciprocity property is that the materials in the antenna and transmission
medium are linear and reciprocal. Reciprocal (or bilateral) means that the material has the same response to an electric
current or magnetic field in one direction, as it has to the field or current in the opposite direction. Most materials
used in antennas meet these conditions, but some microwave antennas use high-tech components such as isolators and
circulators, made of nonreciprocal materials such as ferrite. These can be used to give the antenna a different behavior
on receiving than it has on transmitting, which can be useful in applications like radar. Antennas are characterized
by a number of performance measures which a user would be concerned with in selecting or designing an antenna for
a particular application. Chief among these relate to the directional characteristics (as depicted in the antenna's
radiation pattern) and the resulting gain. Even in omnidirectional (or weakly directional) antennas, the gain can
often be increased by concentrating more of its power in the horizontal directions, sacrificing power radiated toward
the sky and ground. The antenna's power gain (or simply "gain") also takes into account the antenna's efficiency,
and is often the primary figure of merit. Resonant antennas are expected to be used around a particular resonant
frequency; an antenna must therefore be built or ordered to match the frequency range of the intended application.
A particular antenna design will present a particular feedpoint impedance. While this may affect the choice of an
antenna, an antenna's impedance can also be adapted to the desired impedance level of a system using a matching network
while maintaining the other characteristics (except for a possible loss of efficiency). An antenna transmits and
receives radio waves with a particular polarization which can be reoriented by tilting the axis of the antenna in
many (but not all) cases. The physical size of an antenna is often a practical issue, particularly at lower frequencies
(longer wavelengths). Highly directional antennas need to be significantly larger than the wavelength. Resonant antennas
usually use a linear conductor (or element), or pair of such elements, each of which is about a quarter of the wavelength
in length (an odd multiple of quarter wavelengths will also be resonant). Antennas that are required to be small
compared to the wavelength sacrifice efficiency and cannot be very directional. Fortunately at higher frequencies
(UHF, microwaves) trading off performance to obtain a smaller physical size is usually not required. The majority
of antenna designs are based on the resonance principle. This relies on the behaviour of moving electrons, which
reflect off surfaces where the dielectric constant changes, in a fashion similar to the way light reflects when optical
properties change. In these designs, the reflective surface is created by the end of a conductor, normally a thin
metal wire or rod, which in the simplest case has a feed point at one end where it is connected to a transmission
line. The conductor, or element, is aligned with the electrical field of the desired signal, normally meaning it
is perpendicular to the line from the antenna to the source (or receiver in the case of a broadcast antenna). The
radio signal's electrical component induces a voltage in the conductor. This causes an electrical current to begin
flowing in the direction of the signal's instantaneous field. When the resulting current reaches the end of the conductor,
it reflects, which is equivalent to a 180 degree change in phase. If the conductor is 1⁄4 of a wavelength long, current
from the feed point will undergo 90 degree phase change by the time it reaches the end of the conductor, reflect
through 180 degrees, and then another 90 degrees as it travels back. That means it has undergone a total 360 degree
phase change, returning it to the original signal. The current in the element thus adds to the current being created
from the source at that instant. This process creates a standing wave in the conductor, with the maximum current
at the feed. The ordinary half-wave dipole is probably the most widely used antenna design. This consists of two
1⁄4-wavelength elements arranged end-to-end, and lying along essentially the same axis (or collinear), each feeding
one side of a two-conductor transmission wire. The physical arrangement of the two elements places them 180 degrees
out of phase, which means that at any given instant one of the elements is driving current into the transmission
line while the other is pulling it out. The monopole antenna is essentially one half of the half-wave dipole, a single
1⁄4-wavelength element with the other side connected to ground or an equivalent ground plane (or counterpoise). Monopoles,
which are one-half the size of a dipole, are common for long-wavelength radio signals where a dipole would be impractically
large. Another common design is the folded dipole, which is essentially two dipoles placed side-by-side and connected
at their ends to make a single one-wavelength antenna. The standing wave forms with this desired pattern at the design
frequency, f0, and antennas are normally designed to be this size. However, feeding that element with 3f0 (whose
wavelength is 1⁄3 that of f0) will also lead to a standing wave pattern. Thus, an antenna element is also resonant
when its length is 3⁄4 of a wavelength. This is true for all odd multiples of 1⁄4 wavelength. This allows some flexibility
of design in terms of antenna lengths and feed points. Antennas used in such a fashion are said to be harmonically
operated. The quarter-wave elements imitate a series-resonant electrical element due to the standing wave present
along the conductor. At the resonant frequency, the standing wave has a current peak and voltage node (minimum) at
the feed. In electrical terms, this means the element has minimum reactance, generating the maximum current for minimum
voltage. This is the ideal situation, because it produces the maximum output for the minimum input, producing the
highest possible efficiency. Contrary to an ideal (lossless) series-resonant circuit, a finite resistance remains
(corresponding to the relatively small voltage at the feed-point) due to the antenna's radiation resistance as well
as any actual electrical losses. It is possible to use the impedance matching concepts to construct vertical antennas
substantially shorter than the 1⁄4 wavelength at which the antenna is resonant. By adding an inductance in series
with the antenna, a so-called loading coil, the capacitive reactance of this antenna can be cancelled leaving a pure
resistance which can then be matched to the transmission line. Sometimes the resulting resonant frequency of such
a system (antenna plus matching network) is described using the construct of electrical length and the use of a shorter
antenna at a lower frequency than its resonant frequency is termed electrical lengthening. The end result is that
the resonant antenna will efficiently feed a signal into the transmission line only when the source signal's frequency
is close to that of the design frequency of the antenna, or one of the resonant multiples. This makes resonant antenna
designs inherently narrowband, and they are most commonly used with a single target signal. They are particularly
common on radar systems, where the same antenna is used for both broadcast and reception, or for radio and television
broadcasts, where the antenna is working with a single frequency. They are less commonly used for reception where
multiple channels are present, in which case additional modifications are used to increase the bandwidth, or entirely
different antenna designs are used. The amount of signal received from a distant transmission source is essentially
geometric in nature due to the inverse square law, and this leads to the concept of effective area. This measures
the performance of an antenna by comparing the amount of power it generates to the amount of power in the original
signal, measured in terms of the signal's power density in Watts per square metre. A half-wave dipole has an effective
area of 0.13 λ². If more performance is needed, one cannot simply make the antenna larger. Although this would intercept
more energy from the signal, due to the considerations above, it would decrease the output significantly due to it
moving away from the resonant length. In roles where higher performance is needed, designers often use multiple elements
combined together. Returning to the basic concept of current flows in a conductor, consider what happens if a half-wave
dipole is not connected to a feed point, but instead shorted out. Electrically this forms a single 1⁄2-wavelength
element. But the overall current pattern is the same; the current will be zero at the two ends, and reach a maximum
in the center. Thus signals near the design frequency will continue to create a standing wave pattern. Any varying
electrical current, like the standing wave in the element, will radiate a signal. In this case, aside from resistive
losses in the element, the rebroadcast signal will be significantly similar to the original signal in both magnitude
and shape. If this element is placed so its signal reaches the main dipole in-phase, it will reinforce the original
signal, and increase the current in the dipole. Elements used in this way are known as passive elements. A Yagi-Uda
array uses passive elements to greatly increase gain. It is built along a support boom that is pointed toward the
signal; the boom, lying along the direction of propagation, sees no induced signal and does not contribute to the antenna's operation. The end closer to the
source is referred to as the front. Near the rear is a single active element, typically a half-wave dipole or folded
dipole. Passive elements are arranged in front (directors) and behind (reflectors) the active element along the boom.
The Yagi has the inherent quality that it becomes increasingly directional, and thus has higher gain, as the number
of elements increases. However, this also makes it increasingly sensitive to changes in frequency; if the signal
frequency changes, not only does the active element receive less energy directly, but all of the passive elements
adding to that signal also decrease their output as well and their signals no longer reach the active element in-phase.
It is also possible to use multiple active elements and combine them together with transmission lines to produce
a similar system where the phases add up to reinforce the output. The antenna array and very similar reflective array
antenna consist of multiple elements, often half-wave dipoles, spaced out on a plane and wired together with transmission
lines with specific phase lengths to produce a single in-phase signal at the output. The log-periodic antenna is
a more complex design that uses multiple in-line elements similar in appearance to the Yagi-Uda but using transmission
lines between the elements to produce the output. Reflection of the original signal also occurs when it hits an extended
conductive surface, in a fashion similar to a mirror. This effect can also be used to increase signal through the
use of a reflector, normally placed behind the active element and spaced so the reflected signal reaches the element
in-phase. Generally the reflector will remain highly reflective even if it is not solid; gaps less than 1⁄10 of a wavelength generally
have little effect on the outcome. For this reason, reflectors often take the form of wire meshes or rows of passive
elements, which makes them lighter and less subject to wind. The parabolic reflector is perhaps the best known example
of a reflector-based antenna, which has an effective area far greater than the active element alone. Another extreme
case of impedance matching occurs when using a small loop antenna (usually, but not always, for receiving) at a relatively
low frequency where it appears almost as a pure inductor. Resonating such an inductor with a capacitor at the frequency
of operation not only cancels the reactance but greatly magnifies the very small radiation resistance of such a loop.
This is implemented in most AM broadcast receivers, with a small ferrite loop antenna resonated by a capacitor
which is varied along with the receiver tuning in order to maintain resonance over the AM broadcast band. Antenna
tuning generally refers to cancellation of any reactance seen at the antenna terminals, leaving only a resistive
impedance which might or might not be exactly the desired impedance (that of the transmission line). Although an
antenna may be designed to have a purely resistive feedpoint impedance (such as a dipole 97% of a half wavelength
long) this might not be exactly true at the frequency that it is eventually used at. In some cases the physical length
of the antenna can be "trimmed" to obtain a pure resistance. On the other hand, the addition of a series inductance
or parallel capacitance can be used to cancel a residual capacitative or inductive reactance, respectively. Although
a resonant antenna has a purely resistive feed-point impedance at a particular frequency, many (if not most) applications
require using an antenna over a range of frequencies. An antenna's bandwidth specifies the range of frequencies over
which its performance does not suffer due to a poor impedance match. Also in the case of a Yagi-Uda array, the use
of the antenna very far away from its design frequency reduces the antenna's directivity, thus reducing the usable
bandwidth regardless of impedance matching. Instead, it is often desired to have an antenna whose impedance does
not vary so greatly over a certain bandwidth. It turns out that the amount of reactance seen at the terminals of
a resonant antenna when the frequency is shifted, say, by 5%, depends very much on the diameter of the conductor
used. A long thin wire used as a half-wave dipole (or quarter wave monopole) will have a reactance significantly
greater than the resistive impedance it has at resonance, leading to a poor match and generally unacceptable performance.
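The quality of the match described here is conventionally expressed through the reflection coefficient and the standing-wave ratio (SWR), both standard transmission-line quantities. A sketch (the 50-ohm line and the example impedances are illustrative assumptions, not figures from the text):

```python
def reflection_coefficient(z_load, z0=50.0):
    """Gamma = (Z_load - Z0) / (Z_load + Z0) for a load on a line of
    characteristic impedance Z0 (here assumed 50-ohm coax)."""
    return (z_load - z0) / (z_load + z0)

def vswr(z_load, z0=50.0):
    """Voltage standing-wave ratio: 1.0 is a perfect match."""
    gamma = abs(reflection_coefficient(z_load, z0))
    return (1 + gamma) / (1 - gamma)

# A resonant dipole near 73 ohms (purely resistive) is a mild mismatch...
print(round(vswr(73 + 0j), 2))  # 1.46
# ...while a thin element shifted off resonance picks up a large
# reactance, and the match degrades badly.
print(round(vswr(73 + 300j), 1))
```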
Making the element using a tube of a diameter perhaps 1/50 of its length, however, results in a reactance at this
altered frequency which is not so great, and a much less serious mismatch which will only modestly damage the antenna's
net performance. Thus rather thick tubes are typically used for the solid elements of such antennas, including Yagi-Uda
arrays. Rather than just using a thick tube, there are similar techniques used to the same effect such as replacing
thin wire elements with cages to simulate a thicker element. This widens the bandwidth of the resonance. On the other
hand, amateur radio antennas need to operate over several bands which are widely separated from each other. This
can often be accomplished simply by connecting resonant elements for the different bands in parallel. Most of the
transmitter's power will flow into the resonant element while the others present a high (reactive) impedance and
draw little current from the same voltage. A popular solution uses so-called traps consisting of parallel resonant
circuits which are strategically placed in breaks along each antenna element. When used at one particular frequency
band the trap presents a very high impedance (parallel resonance) effectively truncating the element at that length,
making it a proper resonant antenna. At a lower frequency the trap allows the full length of the element to be employed,
albeit with a shifted resonant frequency due to the inclusion of the trap's net reactance at that lower frequency.
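As a numerical sketch of trap behaviour, a parallel L-C trap resonates at the usual f = 1/(2π√(LC)); the component values below are hypothetical, chosen only for illustration:

```python
import math

def parallel_resonance_hz(L_henries: float, C_farads: float) -> float:
    """Resonant frequency of an ideal parallel L-C trap."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_henries * C_farads))

# Hypothetical trap values: 1.0 uH in parallel with 25 pF.
f = parallel_resonance_hz(1.0e-6, 25.0e-12)
print(f"{f/1e6:.1f} MHz")  # ~31.8 MHz: near this frequency the trap presents a
                           # very high impedance, truncating the element there.
```

Near resonance the trap's impedance is very high (truncating the element); well below resonance it contributes a net reactance, shifting the full element's resonant frequency as described above.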
Gain is a parameter which measures the degree of directivity of the antenna's radiation pattern. A high-gain antenna
will radiate most of its power in a particular direction, while a low-gain antenna will radiate over a wider angle.
The antenna gain, or power gain of an antenna is defined as the ratio of the intensity (power per unit surface area)
radiated by the antenna in the direction of its maximum output, at an arbitrary distance, divided by the intensity
radiated at the same distance by a hypothetical isotropic antenna which radiates equal power in all directions. This
dimensionless ratio is usually expressed logarithmically in decibels; these units are called "decibels-isotropic"
(dBi). High-gain antennas have the advantage of longer range and better signal quality, but must be aimed carefully
at the other antenna. An example of a high-gain antenna is a parabolic dish such as a satellite television antenna.
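The dBi figure is simply 10·log10 of the power ratio relative to an isotropic radiator; a minimal sketch (the example gains are illustrative, not measured values for any product):

```python
import math

def ratio_to_dbi(gain_ratio: float) -> float:
    """Gain as decibels relative to an isotropic radiator."""
    return 10.0 * math.log10(gain_ratio)

def dbi_to_ratio(gain_dbi: float) -> float:
    """Inverse conversion: dBi back to a linear power ratio."""
    return 10.0 ** (gain_dbi / 10.0)

# A dish radiating 1000x the isotropic intensity in its main lobe:
print(round(ratio_to_dbi(1000.0), 1))    # 30.0 dBi
# An antenna quoted at 2.15 dBi (the gain of an ideal half-wave dipole):
print(round(dbi_to_ratio(2.15), 2))      # ~1.64x isotropic
```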
Low-gain antennas have shorter range, but the orientation of the antenna is relatively unimportant. An example of
a low-gain antenna is the whip antenna found on portable radios and cordless phones. Antenna gain should not be confused
with amplifier gain, a separate parameter measuring the increase in signal power due to an amplifying device. Due
to reciprocity (discussed above) the gain of an antenna used for transmitting must be proportional to its effective
area when used for receiving. Consider an antenna with no loss, that is, one whose electrical efficiency is 100%.
It can be shown that its effective area averaged over all directions must be equal to λ²/4π, the wavelength squared
divided by 4π. Gain is defined such that the average gain over all directions for an antenna with 100% electrical
efficiency is equal to 1. Therefore, the effective area Aeff in terms of the gain G in a given direction is given
by Aeff = Gλ²/4π. The radiation pattern of an antenna is a plot of the relative field strength of the radio waves emitted by the
antenna at different angles. It is typically represented by a three-dimensional graph, or polar plots of the horizontal
and vertical cross sections. The pattern of an ideal isotropic antenna, which radiates equally in all directions,
would look like a sphere. Many nondirectional antennas, such as monopoles and dipoles, emit equal power in all horizontal
directions, with the power dropping off at higher and lower angles; this is called an omnidirectional pattern and
when plotted looks like a torus or donut. The radiation of many antennas shows a pattern of maxima or "lobes" at
various angles, separated by "nulls", angles where the radiation falls to zero. This is because the radio waves emitted
by different parts of the antenna typically interfere, causing maxima at angles where the radio waves arrive at distant
points in phase, and zero radiation at other angles where the radio waves arrive out of phase. In a directional antenna
designed to project radio waves in a particular direction, the lobe in that direction is designed larger than the
others and is called the "main lobe". The other lobes usually represent unwanted radiation and are called "sidelobes".
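The lobe-and-null structure arises from path-difference phasing, as described above. A toy sketch of two equal in-phase isotropic sources a half wavelength apart (a hypothetical configuration, not any particular antenna) shows a maximum broadside to the pair and a null along the line joining them:

```python
import math

def array_factor(theta_deg: float, spacing_wl: float = 0.5) -> float:
    """Relative field of two equal in-phase point sources separated by
    spacing_wl wavelengths; theta is measured from the line joining them."""
    # The path difference d*cos(theta) produces a phase difference psi
    # between the two arriving waves.
    psi = 2.0 * math.pi * spacing_wl * math.cos(math.radians(theta_deg))
    return abs(math.cos(psi / 2.0))

print(array_factor(90.0))  # 1.0: broadside, waves arrive in phase (maximum)
print(array_factor(0.0) < 1e-9)  # True: end-fire, half-wave path difference
                                 # puts the waves out of phase (a null)
```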
The axis through the main lobe is called the "principal axis" or "boresight axis". As an electro-magnetic wave travels
through the different parts of the antenna system (radio, feed line, antenna, free space) it may encounter differences
in impedance (E/H, V/I, etc.). At each interface, depending on the impedance match, some fraction of the wave's energy
will reflect back to the source, forming a standing wave in the feed line. The ratio of maximum power to minimum
power in the wave can be measured and is called the standing wave ratio (SWR). A SWR of 1:1 is ideal. A SWR of 1.5:1
is considered to be marginally acceptable in low power applications where power loss is more critical, although an
SWR as high as 6:1 may still be usable with the right equipment. Minimizing impedance differences at each interface
(impedance matching) will reduce SWR and maximize power transfer through each part of the antenna system. Efficiency
of a transmitting antenna is the ratio of power actually radiated (in all directions) to the power absorbed by the
antenna terminals. The power supplied to the antenna terminals which is not radiated is converted into heat. This
is usually through loss resistance in the antenna's conductors, but can also be due to dielectric or magnetic core
losses in antennas (or antenna systems) using such components. Such loss effectively robs power from the transmitter,
requiring a stronger transmitter in order to transmit a signal of a given strength. For instance, if a transmitter
delivers 100 W into an antenna having an efficiency of 80%, then the antenna will radiate 80 W as radio waves and
produce 20 W of heat. In order to radiate 100 W of power, one would need to use a transmitter capable of supplying
125 W to the antenna. Note that antenna efficiency is a separate issue from impedance matching, which may also reduce
the amount of power radiated using a given transmitter. If an SWR meter reads 150 W of incident power and 50 W of
reflected power, that means that 100 W have actually been absorbed by the antenna (ignoring transmission line losses).
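The SWR-meter arithmetic above can be checked directly: the reflected-to-forward power ratio gives the squared magnitude of the reflection coefficient |Γ|², from which SWR = (1 + |Γ|)/(1 − |Γ|). A short sketch using the same 150 W / 50 W figures:

```python
import math

def swr_from_powers(p_forward_w: float, p_reflected_w: float) -> float:
    """Standing wave ratio implied by measured forward and reflected power."""
    gamma = math.sqrt(p_reflected_w / p_forward_w)  # |reflection coefficient|
    return (1.0 + gamma) / (1.0 - gamma)

p_fwd, p_ref = 150.0, 50.0
print(p_fwd - p_ref)                            # 100.0 W absorbed by the antenna
print(round(swr_from_powers(p_fwd, p_ref), 2))  # 3.73
```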
How much of that power has actually been radiated cannot be directly determined through electrical measurements at
(or before) the antenna terminals, but would require (for instance) careful measurement of field strength. Fortunately
the loss resistance of antenna conductors such as aluminum rods can be calculated and the efficiency of an antenna
using such materials predicted. However loss resistance will generally affect the feedpoint impedance, adding to
its resistive (real) component. That resistance will consist of the sum of the radiation resistance Rr and the loss
resistance Rloss. If an rms current I is delivered to the terminals of an antenna, then a power of I²Rr will be radiated
and a power of I²Rloss will be lost as heat. Therefore, the efficiency of an antenna is equal to Rr / (Rr + Rloss).
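A quick numerical check of this efficiency relation, using illustrative resistances (not values for any specific antenna) chosen to reproduce the earlier 100 W / 80 W / 20 W example:

```python
def antenna_efficiency(r_radiation: float, r_loss: float) -> float:
    """Fraction of accepted power that is radiated: Rr / (Rr + Rloss)."""
    return r_radiation / (r_radiation + r_loss)

# Illustrative: Rr = 20 ohm, Rloss = 5 ohm -> 80% efficient.
eff = antenna_efficiency(20.0, 5.0)
i_rms = 2.0  # amperes delivered to the terminals (100 W total into 25 ohm)
print(eff)              # 0.8
print(i_rms**2 * 20.0)  # 80.0 W radiated (I^2 * Rr)
print(i_rms**2 * 5.0)   # 20.0 W lost as heat (I^2 * Rloss)
```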
Of course only the total resistance Rr + Rloss can be directly measured. According to reciprocity, the efficiency
of an antenna used as a receiving antenna is identical to the efficiency as defined above. The power that an antenna
will deliver to a receiver (with a proper impedance match) is reduced by the same amount. In some receiving applications,
the very inefficient antennas may have little impact on performance. At low frequencies, for example, atmospheric
or man-made noise can mask antenna inefficiency. For example, CCIR Rep. 258-3 indicates man-made noise in a residential
setting at 40 MHz is about 28 dB above the thermal noise floor. Consequently, an antenna with a 20 dB loss (due to
inefficiency) would have little impact on system noise performance. The loss within the antenna will affect the intended
signal and the noise/interference identically, leading to no reduction in signal to noise ratio (SNR). The definition
of antenna gain or power gain already includes the effect of the antenna's efficiency. Therefore, if one is trying
to radiate a signal toward a receiver using a transmitter of a given power, one need only compare the gain of various
antennas rather than considering the efficiency as well. This is likewise true for a receiving antenna at very high
(especially microwave) frequencies, where the point is to receive a signal which is strong compared to the receiver's
noise temperature. However, in the case of a directional antenna used for receiving signals with the intention of
rejecting interference from different directions, one is no longer concerned with the antenna efficiency, as discussed
above. In this case, rather than quoting the antenna gain, one would be more concerned with the directive gain which
does not include the effect of antenna (in)efficiency. The directive gain of an antenna can be computed from the
published gain divided by the antenna's efficiency. This is fortunate, since antennas at lower frequencies which
are not rather large (a good fraction of a wavelength in size) are inevitably inefficient (due to the small radiation
resistance Rr of small antennas). Most AM broadcast radios (except for car radios) take advantage of this principle
by including a small loop antenna for reception which has an extremely poor efficiency. Using such an inefficient
antenna at this low frequency (530–1650 kHz) thus has little effect on the receiver's net performance, but simply
requires greater amplification by the receiver's electronics. Contrast this tiny component to the massive and very
tall towers used at AM broadcast stations for transmitting at the very same frequency, where every percentage point
of reduced antenna efficiency entails a substantial cost. The polarization of an antenna refers to the orientation
of the electric field (E-plane) of the radio wave with respect to the Earth's surface and is determined by the physical
structure of the antenna and by its orientation; note that this designation is totally distinct from the antenna's
directionality. Thus, a simple straight wire antenna will have one polarization when mounted vertically, and a different
polarization when mounted horizontally. As a transverse wave, the magnetic field of a radio wave is at right angles
to that of the electric field, but by convention, talk of an antenna's "polarization" is understood to refer to the
direction of the electric field. Reflections generally affect polarization. For radio waves, one important reflector
is the ionosphere which can change the wave's polarization. Thus for signals received following reflection by the
ionosphere (a skywave), a consistent polarization cannot be expected. For line-of-sight communications or ground
wave propagation, horizontally or vertically polarized transmissions generally remain in about the same polarization
state at the receiving location. Matching the receiving antenna's polarization to that of the transmitter can make
a very substantial difference in received signal strength. Polarization is predictable from an antenna's geometry,
although in some cases it is not at all obvious (such as for the quad antenna). An antenna's linear polarization
is generally along the direction (as viewed from the receiving location) of the antenna's currents when such a direction
can be defined. For instance, a vertical whip antenna or Wi-Fi antenna vertically oriented will transmit and receive
in the vertical polarization. Antennas with horizontal elements, such as most rooftop TV antennas in the United States,
are horizontally polarized (broadcast TV in the U.S. usually uses horizontal polarization). Even when the antenna
system has a vertical orientation, such as an array of horizontal dipole antennas, the polarization is in the horizontal
direction corresponding to the current flow. The polarization of a commercial antenna is an essential specification.
Polarization is the sum of the E-plane orientations over time projected onto an imaginary plane perpendicular to
the direction of motion of the radio wave. In the most general case, polarization is elliptical, meaning that the
polarization of the radio waves varies over time. Two special cases are linear polarization (the ellipse collapses
into a line) as we have discussed above, and circular polarization (in which the two axes of the ellipse are equal).
In linear polarization the electric field of the radio wave oscillates back and forth along one direction; this can
be affected by the mounting of the antenna but usually the desired direction is either horizontal or vertical polarization.
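For two linear polarizations offset by an angle, the received power fraction is the standard polarization loss factor cos²θ, and a circular-to-linear link keeps half the power (the 3 dB case discussed below). A brief sketch:

```python
import math

def linear_mismatch_fraction(angle_deg: float) -> float:
    """Power fraction coupled between two linear polarizations
    offset by angle_deg (the polarization loss factor cos^2)."""
    return math.cos(math.radians(angle_deg)) ** 2

print(linear_mismatch_fraction(0.0))             # 1.0: aligned, no loss
print(round(linear_mismatch_fraction(45.0), 2))  # 0.5: intermediate match
print(linear_mismatch_fraction(90.0) < 1e-12)    # True: cross-polarized,
                                                 # essentially complete mismatch
print(round(10 * math.log10(0.5), 1))            # -3.0 dB: the half-power case
```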
In circular polarization, the electric field (and magnetic field) of the radio wave rotates at the radio frequency
circularly around the axis of propagation. Circular or elliptically polarized radio waves are designated as right-handed
or left-handed using the "thumb in the direction of the propagation" rule. Note that for circular polarization, optical
researchers use the opposite right hand rule from the one used by radio engineers. It is best for the receiving antenna
to match the polarization of the transmitted wave for optimum reception. Intermediate matchings will lose some signal
strength, but not as much as a complete mismatch. A circularly polarized antenna can be used to equally well match
vertical or horizontal linear polarizations. Transmission from a circularly polarized antenna received by a linearly
polarized antenna (or vice versa) entails a 3 dB reduction in signal-to-noise ratio as the received power has thereby
been cut in half. Maximum power transfer requires matching the impedance of an antenna system (as seen looking into
the transmission line) to the complex conjugate of the impedance of the receiver or transmitter. In the case of a
transmitter, however, the desired matching impedance might not correspond to the dynamic output impedance of the
transmitter as analyzed as a source impedance but rather the design value (typically 50 ohms) required for efficient
and safe operation of the transmitting circuitry. The intended impedance is normally resistive but a transmitter
(and some receivers) may have additional adjustments to cancel a certain amount of reactance in order to "tweak"
the match. When a transmission line is used in between the antenna and the transmitter (or receiver) one generally
would like an antenna system whose impedance is resistive and near the characteristic impedance of that transmission
line in order to minimize the standing wave ratio (SWR) and the increase in transmission line losses it entails,
in addition to supplying a good match at the transmitter or receiver itself. In some cases this is done in a more
extreme manner, not simply to cancel a small amount of residual reactance, but to resonate an antenna whose resonance
frequency is quite different from the intended frequency of operation. For instance, a "whip antenna" can be made
significantly shorter than 1/4 wavelength long, for practical reasons, and then resonated using a so-called loading
coil. This physically large inductor at the base of the antenna has an inductive reactance which is the opposite
of the capacitive reactance that such a vertical antenna has at the desired operating frequency. The result is
a pure resistance seen at the feedpoint of the loading coil; unfortunately that resistance is somewhat lower than would
be desired to match commercial coax. So an additional problem beyond canceling the unwanted reactance
is of matching the remaining resistive impedance to the characteristic impedance of the transmission line. In principle
this can always be done with a transformer, however the turns ratio of a transformer is not adjustable. A general
matching network with at least two adjustments can be made to correct both components of impedance. Matching networks
using discrete inductors and capacitors will have losses associated with those components, and will have power restrictions
when used for transmitting. Avoiding these difficulties, commercial antennas are generally designed with fixed matching
elements or feeding strategies to get an approximate match to standard coax, such as 50 or 75 Ohms. Antennas based
on the dipole (rather than vertical antennas) should include a balun in between the transmission line and antenna
element, which may be integrated into any such matching network. Unlike the above antennas, traveling wave antennas
are nonresonant so they have inherently broad bandwidth. They are typically wire antennas multiple wavelengths long,
through which the voltage and current waves travel in one direction, instead of bouncing back and forth to form standing
waves as in resonant antennas. They have linear polarization (except for the helical antenna). Unidirectional traveling
wave antennas are terminated by a resistor at one end equal to the antenna's characteristic resistance, to absorb
the waves from one direction. This makes them inefficient as transmitting antennas. The radiation pattern and even
the driving point impedance of an antenna can be influenced by the dielectric constant and especially conductivity
of nearby objects. For a terrestrial antenna, the ground is usually one such object of importance. The antenna's
height above the ground, as well as the electrical properties (permittivity and conductivity) of the ground, can
then be important. Also, in the particular case of a monopole antenna, the ground (or an artificial ground plane)
serves as the return connection for the antenna current thus having an additional effect, particularly on the impedance
seen by the feed line. The net quality of a ground reflection depends on the topography of the surface. When the
irregularities of the surface are much smaller than the wavelength, we are in the regime of specular reflection,
and the receiver sees both the real antenna and an image of the antenna under the ground due to reflection. But if
the ground has irregularities not small compared to the wavelength, reflections will not be coherent but shifted
by random phases. With shorter wavelengths (higher frequencies), this is generally the case. The phase of reflection
of electromagnetic waves depends on the polarization of the incident wave. Given the larger refractive index of the
ground (typically n=2) compared to air (n=1), the phase of horizontally polarized radiation is reversed upon reflection
(a phase shift of π radians or 180°). On the other hand, the vertical component of the wave's electric field is reflected
at grazing angles of incidence approximately in phase. These phase shifts apply as well to a ground modelled as a
good electrical conductor. When an electromagnetic wave strikes a plane surface such as the ground, part of the wave
is transmitted into the ground and part of it is reflected, according to the Fresnel coefficients. If the ground
is a very good conductor then almost all of the wave is reflected (180° out of phase), whereas a ground modeled as
a (lossy) dielectric can absorb a large amount of the wave's power. The power remaining in the reflected wave, and
the phase shift upon reflection, strongly depend on the wave's angle of incidence and polarization. The dielectric
constant and conductivity (or simply the complex dielectric constant) is dependent on the soil type and is a function
of frequency. The effective area or effective aperture of a receiving antenna expresses the portion of the power
of a passing electromagnetic wave which it delivers to its terminals, expressed in terms of an equivalent area. For
instance, if a radio wave passing a given location has a flux of 1 pW/m² (10⁻¹² watts per square meter) and an
antenna has an effective area of 12 m², then the antenna would deliver 12 pW of RF power to the receiver (30 microvolts
rms at 75 ohms). Since the receiving antenna is not equally sensitive to signals received from all directions, the
effective area is a function of the direction to the source. The bandwidth characteristics of a resonant antenna
element can be characterized according to its Q, just as one uses to characterize the sharpness of an L-C resonant
circuit. However it is often assumed that there is an advantage in an antenna having a high Q. After all, Q is short
for "quality factor" and a low Q typically signifies excessive loss (due to unwanted resistance) in a resonant L-C
circuit. However this understanding does not apply to resonant antennas where the resistance involved is the radiation
resistance, a desired quantity which removes energy from the resonant element in order to radiate it (the purpose
of an antenna, after all!). The Q is a measure of the ratio of reactance to resistance, so with a fixed radiation
resistance (an element's radiation resistance is almost independent of its diameter) a greater reactance off-resonance
corresponds to the poorer bandwidth of a very thin conductor. The Q of such a narrowband antenna can be as high as
15. On the other hand, a thick element presents less reactance at an off-resonant frequency, and consequently a Q
as low as 5. These two antennas will perform equivalently at the resonant frequency, but the second antenna will
perform over a bandwidth 3 times as wide as the "hi-Q" antenna consisting of a thin conductor. For example, at 30
MHz (10 m wavelength) a true resonant 1⁄4-wavelength monopole would be almost 2.5 meters long, and using an antenna
only 1.5 meters tall would require the addition of a loading coil. Then it may be said that the coil has lengthened
the antenna to achieve an electrical length of 2.5 meters. However, the resulting resistive impedance achieved will
be quite a bit lower than the impedance of a resonant monopole, likely requiring further impedance matching. In addition
to a lower radiation resistance, the reactance becomes higher as the antenna size is reduced, and the resonant circuit
formed by the antenna and the tuning coil has a Q factor that rises and eventually causes the bandwidth of the antenna
to be inadequate for the signal being transmitted. This is the major factor that sets the size of antennas at 1 MHz
and lower frequencies. Consider a half-wave dipole designed to work with signals of 1 m wavelength, meaning the antenna
would be approximately 50 cm across. If the element has a length-to-diameter ratio of 1000, it will have an inherent
resistance of about 63 ohms. Using the appropriate transmission wire or balun, we match that resistance to ensure
minimum signal loss. Feeding that antenna with a current of 1 ampere will require 63 volts of RF, and the antenna
will radiate 63 watts (ignoring losses) of radio frequency power. Now consider the case when the antenna is fed a
signal with a wavelength of 1.25 m; in this case the reflected current would arrive at the feed out-of-phase with
the signal, causing the net current to drop while the voltage remains the same. Electrically this appears to be a
very high impedance. The antenna and transmission line no longer have the same impedance, and the signal will be
reflected back into the antenna, reducing output. This could be addressed by changing the matching system between
the antenna and transmission line, but that solution only works well at the new design frequency. Recall that a current
will reflect when there are changes in the electrical properties of the material. In order to efficiently send the
signal into the transmission line, it is important that the transmission line has the same impedance as the elements,
otherwise some of the signal will be reflected back into the antenna. This leads to the concept of impedance matching,
the design of the overall system of antenna and transmission line so the impedance is as close as possible, thereby
reducing these losses. Impedance matching between antennas and transmission lines is commonly handled through the
use of a balun, although other solutions are also used in certain roles. An important measure of this basic concept
is the standing wave ratio, which measures the magnitude of the reflected signal. An electromagnetic wave refractor
in some aperture antennas is a component which due to its shape and position functions to selectively delay or advance
portions of the electromagnetic wavefront passing through it. The refractor alters the spatial characteristics of
the wave on one side relative to the other side. It can, for instance, bring the wave to a focus or alter the wave
front in other ways, generally in order to maximize the directivity of the antenna system. This is the radio equivalent
of an optical lens. The actual antenna which is transmitting the original wave then also may receive a strong signal
from its own image from the ground. This will induce an additional current in the antenna element, changing the current
at the feedpoint for a given feedpoint voltage. Thus the antenna's impedance, given by the ratio of feedpoint voltage
to current, is altered due to the antenna's proximity to the ground. This can be quite a significant effect when
the antenna is within a wavelength or two of the ground. But as the antenna height is increased, the reduced power
of the reflected wave (due to the inverse square law) allows the antenna to approach its asymptotic feedpoint impedance
given by theory. At lower heights, the effect on the antenna's impedance is very sensitive to the exact distance
from the ground, as this affects the phase of the reflected wave relative to the currents in the antenna. Changing
the antenna's height by a quarter wavelength, then changes the phase of the reflection by 180°, with a completely
different effect on the antenna's impedance. For horizontal propagation between transmitting and receiving antennas
situated near the ground reasonably far from each other, the distances traveled by the direct and reflected rays
are nearly the same. There is almost no relative phase shift. If the emission is polarized vertically, the two fields
(direct and reflected) add and there is a maximum of received signal. If the signal is polarized horizontally, the
two signals subtract and the received signal is largely cancelled. The vertical plane radiation patterns are shown
in the image at right. With vertical polarization there is always a maximum for θ=0, horizontal propagation (left
pattern). For horizontal polarization, there is cancellation at that angle. Note that the above formulae and these
plots assume the ground as a perfect conductor. These plots of the radiation pattern correspond to a distance between
the antenna and its image of 2.5λ. As the antenna height is increased, the number of lobes increases as well. On
the other hand, classical (analog) television transmissions are usually horizontally polarized, because in urban
areas buildings can reflect the electromagnetic waves and create ghost images due to multipath propagation. Using
horizontal polarization, ghosting is reduced because the amount of reflection of electromagnetic waves in the p polarization
(horizontal polarization off the side of a building) is generally less than s (vertical, in this case) polarization.
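The claim that the p reflection (horizontal polarization off a vertical wall) is weaker than the s reflection can be checked with the standard Fresnel coefficients. The refractive index n = 2 below is the rough ground/building value mentioned earlier, and the 45° angle of incidence is an arbitrary illustration:

```python
import math

def fresnel_rs_rp(n1: float, n2: float, theta_i_deg: float):
    """Amplitude reflection coefficients for s and p polarization
    at a plane interface, from the Fresnel equations."""
    ti = math.radians(theta_i_deg)
    tt = math.asin(n1 * math.sin(ti) / n2)  # Snell's law for the refracted angle
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n2 * math.cos(ti) - n1 * math.cos(tt)) / (n2 * math.cos(ti) + n1 * math.cos(tt))
    return rs, rp

rs, rp = fresnel_rs_rp(1.0, 2.0, 45.0)
print(round(abs(rs), 2), round(abs(rp), 2))  # 0.45 0.2: |rp| < |rs|
# The p reflection is weaker, which is why horizontal polarization
# reduced multipath ghosting in analog TV broadcasting.
```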
Vertically polarized analog television has nevertheless been used in some rural areas. In digital terrestrial television
such reflections are less problematic, due to robustness of binary transmissions and error correction. Current circulating
in one antenna generally induces a voltage across the feedpoint of nearby antennas or antenna elements. The mathematics
presented below are useful in analyzing the electrical behaviour of antenna arrays, where the properties of the individual
array elements (such as half wave dipoles) are already known. If those elements were widely separated and driven
in a certain amplitude and phase, then each would act independently as that element is known to. However, because
of the mutual interaction between their electric and magnetic fields due to proximity, the currents in each element
are not simply a function of the applied voltage (according to its driving point impedance), but depend on the currents
in the other nearby elements. Note that this now is a near field phenomenon which could not be properly accounted
for using the Friis transmission equation for instance. This is a consequence of Lorentz reciprocity. For an antenna
element not connected to anything (open circuited) one can write I = 0. But for an element which is short circuited, a
current is generated across that short but no voltage is allowed, so the corresponding V = 0. This is the case, for instance,
with the so-called parasitic elements of a Yagi-Uda antenna where the solid rod can be viewed as a dipole antenna
shorted across its feedpoint. Parasitic elements are unpowered elements that absorb and reradiate RF energy according
to the induced current calculated using such a system of equations. The difference in the above factors for the case
of θ=0 is the reason that most broadcasting (transmissions intended for the public) uses vertical polarization. For
receivers near the ground, horizontally polarized transmissions suffer cancellation. For best reception the receiving
antennas for these signals are likewise vertically polarized. In some applications where the receiving antenna must
work in any position, as in mobile phones, the base station antennas use mixed polarization, such as linear polarization
at an angle (with both vertical and horizontal components) or circular polarization. Loop antennas consist of a loop
or coil of wire. Loops with circumference of a wavelength or larger act similarly to dipole antennas. However loops
small in comparison to a wavelength act differently. They interact with the magnetic field of the radio wave instead
of the electric field as other antennas do, and so are relatively insensitive to nearby electrical noise. However
they have low radiation resistance, and so are inefficient for transmitting. They are used as receiving antennas
at low frequencies, and also as direction finding antennas. It is a fundamental property of antennas that the electrical
characteristics of an antenna described in the next section, such as gain, radiation pattern, impedance, bandwidth,
resonant frequency and polarization, are the same whether the antenna is transmitting or receiving. For example,
the "receiving pattern" (sensitivity as a function of direction) of an antenna when used for reception is identical
to the radiation pattern of the antenna when it is driven and functions as a radiator. This is a consequence of the
reciprocity theorem of electromagnetics. Therefore, in discussions of antenna properties no distinction is usually
made between receiving and transmitting terminology, and the antenna can be viewed as either transmitting or receiving,
whichever is more convenient.
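The quantities discussed above (gain, effective area, reciprocity) combine in the Friis transmission equation mentioned earlier, Pr = Pt·Gt·Gr·(λ/4πd)² for a free-space link. A sketch with purely illustrative link numbers:

```python
import math

def friis_received_power_w(p_tx_w: float, gain_tx: float, gain_rx: float,
                           wavelength_m: float, distance_m: float) -> float:
    """Free-space received power via the Friis transmission equation."""
    return p_tx_w * gain_tx * gain_rx * (wavelength_m / (4 * math.pi * distance_m)) ** 2

# Illustrative 2.4 GHz link: 0.1 W transmitter, unity-gain antennas, 100 m apart.
wavelength = 3e8 / 2.4e9  # 0.125 m
p_rx = friis_received_power_w(0.1, 1.0, 1.0, wavelength, 100.0)
print(f"{10 * math.log10(p_rx / 1e-3):.1f} dBm")  # ~-60.0 dBm received
```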
The flowering plants (angiosperms), also known as Angiospermae or Magnoliophyta, are the most diverse group of land plants,
with about 350,000 species. Like gymnosperms, angiosperms are seed-producing plants; they are distinguished from
gymnosperms by characteristics including flowers, endosperm within the seeds, and the production of fruits that contain
the seeds. Etymologically, angiosperm means a plant that produces seeds within an enclosure, in other words, a fruiting
plant. The term "angiosperm" comes from the Greek composite word (angeion-, "case" or "casing", and sperma, "seed")
meaning "enclosed seeds", after the enclosed condition of the seeds. Fossilized spores suggest that higher plants
(embryophytes) have lived on land for at least 475 million years. Early land plants reproduced sexually with flagellated,
swimming sperm, like the green algae from which they evolved. An adaptation to terrestrialization was the development
of upright meiosporangia for dispersal by spores to new habitats. This feature is lacking in the descendants of their
nearest algal relatives, the Charophycean green algae. A later terrestrial adaptation took place with retention of
the delicate, avascular sexual stage, the gametophyte, within the tissues of the vascular sporophyte. This occurred
by spore germination within sporangia rather than spore release, as in non-seed plants. A current example of how
this might have happened can be seen in the precocious spore germination in Selaginella, the spike-moss. The result
for the ancestors of angiosperms was the enclosure of the female gametophyte in a case, the seed. The first seed-bearing plants, like the
ginkgo, and conifers (such as pines and firs), did not produce flowers. The pollen grains (male gametophytes) of Ginkgo and cycads
produce a pair of flagellated, mobile sperm cells that "swim" down the developing pollen tube to the female and her
eggs. The apparently sudden appearance of nearly modern flowers in the fossil record initially posed such a problem
for the theory of evolution that it was called an "abominable mystery" by Charles Darwin. However, the fossil record
has considerably grown since the time of Darwin, and recently discovered angiosperm fossils such as Archaefructus,
along with further discoveries of fossil gymnosperms, suggest how angiosperm characteristics may have been acquired
in a series of steps. Several groups of extinct gymnosperms, in particular seed ferns, have been proposed as the
ancestors of flowering plants, but there is no continuous fossil evidence showing exactly how flowers evolved. Some
older fossils, such as the upper Triassic Sanmiguelia, have also been suggested as possible early angiosperms. Based on current evidence, some propose
that the ancestors of the angiosperms diverged from an unknown group of gymnosperms in the Triassic period (245–202
million years ago). Fossil angiosperm-like pollen from the Middle Triassic (247.2–242.0 Ma) suggests an older date
for their origin. A close relationship between angiosperms and gnetophytes, proposed on the basis of morphological
evidence, has more recently been disputed on the basis of molecular evidence that suggests gnetophytes are instead
more closely related to other gymnosperms.[citation needed] The evolution of seed plants and later angiosperms appears
to be the result of two distinct rounds of whole genome duplication events. These occurred at 319 million years ago
and 192 million years ago. Another possible whole genome duplication event at 160 million years ago perhaps created
the ancestral line that led to all modern flowering plants. That event was studied by sequencing the genome of an
ancient flowering plant, Amborella trichopoda, and directly addresses Darwin's "abominable mystery." The earliest
known macrofossil confidently identified as an angiosperm, Archaefructus liaoningensis, is dated to about 125 million
years BP (the Cretaceous period), whereas pollen considered to be of angiosperm origin takes the fossil record back
to about 130 million years BP. However, one study has suggested that the early-middle Jurassic plant Schmeissneria,
traditionally considered a type of ginkgo, may be the earliest known angiosperm, or at least a close relative. In
addition, circumstantial chemical evidence has been found for the existence of angiosperms as early as 250 million
years ago. Oleanane, a secondary metabolite produced by many flowering plants, has been found in Permian deposits
of that age together with fossils of gigantopterids. Gigantopterids are a group of extinct seed plants that share
many morphological traits with flowering plants, although they are not known to have been flowering plants themselves.
The great angiosperm radiation, when a great diversity of angiosperms appears in the fossil record, occurred in the
mid-Cretaceous (approximately 100 million years ago). However, a study in 2007 estimated that the five most recent
of the eight main groups (the genus Ceratophyllum, the family Chloranthaceae, the eudicots, the magnoliids, and the monocots)
diverged around 140 million years ago. By the late Cretaceous, angiosperms appear to have
dominated environments formerly occupied by ferns and cycadophytes, but large canopy-forming trees replaced conifers
as the dominant trees only close to the end of the Cretaceous 66 million years ago or even later, at the beginning
of the Tertiary. The radiation of herbaceous angiosperms occurred much later. Yet, many fossil plants recognizable
as belonging to modern families (including beech, oak, maple, and magnolia) had already appeared by the late Cretaceous.
Island genetics provides one proposed explanation for the sudden, fully developed appearance of flowering plants.
Island genetics is believed to be a common source of speciation in general, especially when it comes to radical adaptations
that seem to have required inferior transitional forms. Flowering plants may have evolved in an isolated setting
like an island or island chain, where flower-bearing plants were able to develop a highly specialized relationship
with some specific animal (a wasp, for example). Such a relationship, with a hypothetical wasp carrying pollen from
one plant to another much the way fig wasps do today, could result in the development of a high degree of specialization
in both the plant(s) and their partners. Note that the wasp example is not incidental; bees, which are postulated to have
evolved specifically because of mutualistic plant relationships, are descended from wasps. Animals are also involved
in the distribution of seeds. Fruit, which is formed by the enlargement of flower parts, is frequently a seed-dispersal
tool that attracts animals to eat or otherwise disturb it, incidentally scattering the seeds it contains (see frugivory).
Although many such mutualistic relationships remain too fragile to survive competition and to spread widely, flowering
proved to be an unusually effective means of reproduction, spreading (whatever its origin) to become the dominant
form of land plant life. Flower ontogeny uses a combination of genes normally responsible for forming new shoots.
The most primitive flowers probably had a variable number of flower parts, often separate from (but in contact with)
each other. The flowers tended to grow in a spiral pattern, to be bisexual (in plants, this means both male and female
parts on the same flower), and to be dominated by the ovary (female part). As flowers evolved, some variations developed
parts fused together, with a much more specific number and design, and with either specific sexes per flower or plant
or at least "ovary-inferior". Flower evolution continues to the present day; modern flowers have been so profoundly
influenced by humans that some of them cannot be pollinated in nature. Many modern domesticated flower species were
formerly simple weeds, which sprouted only when the ground was disturbed. Some of them tended to grow with human
crops, perhaps already having symbiotic companion plant relationships with them, and the prettiest did not get plucked
because of their beauty, developing a dependence upon and special adaptation to human affection. The exact relationship
among the eight main groups of angiosperms is not yet clear, although there is agreement that the first three groups to diverge from
the ancestral angiosperm were Amborellales, Nymphaeales, and Austrobaileyales. The term basal angiosperms refers
to these three groups. Among the rest, the relationship between the three broadest of these groups (magnoliids, monocots,
and eudicots) remains unclear. Some analyses make the magnoliids the first to diverge, others the monocots. Ceratophyllum
seems to group with the eudicots rather than with the monocots. The botanical term "Angiosperm", from the Ancient
Greek αγγείον, angeíon (bottle, vessel) and σπέρμα, sperma (seed), was coined in the form Angiospermae by Paul Hermann in
1690, as the name of one of his primary divisions of the plant kingdom. This included flowering plants possessing
seeds enclosed in capsules, distinguished from his Gymnospermae, or flowering plants with achenial or schizocarpic
fruits, the whole fruit or each of its pieces being here regarded as a seed and naked. The term and its antonym were
maintained by Carl Linnaeus with the same sense, but with restricted application, in the names of the orders of his
class Didynamia. Its use with any approach to its modern scope became possible only after 1827, when Robert Brown
established the existence of truly naked ovules in the Cycadeae and Coniferae, and applied to them the name Gymnosperms.[citation
needed] From that time onward, as long as these Gymnosperms were, as was usual, reckoned as dicotyledonous flowering
plants, the term Angiosperm was used antithetically by botanical writers, with varying scope, as a group-name for
other dicotyledonous plants. In most taxonomies, the flowering plants are treated as a coherent group. The most popular
descriptive name has been Angiospermae (Angiosperms), with Anthophyta ("flowering plants") a second choice. These
names are not linked to any rank. The Wettstein system and the Engler system use the name Angiospermae, at the assigned
rank of subdivision. The Reveal system treated flowering plants as subdivision Magnoliophytina (Frohne & U. Jensen
ex Reveal, Phytologia 79: 70 1996), but later split it to Magnoliopsida, Liliopsida, and Rosopsida. The Takhtajan
system and Cronquist system treat this group at the rank of division, leading to the name Magnoliophyta (from the
family name Magnoliaceae). The Dahlgren system and Thorne system (1992) treat this group at the rank of class, leading
to the name Magnoliopsida. The APG system of 1998, and the later 2003 and 2009 revisions, treat the flowering plants
as a clade called angiosperms without a formal botanical name. However, a formal classification was published alongside
the 2009 revision in which the flowering plants form the Subclass Magnoliidae. The internal classification of this
group has undergone considerable revision. The Cronquist system, proposed by Arthur Cronquist in 1968 and published
in its full form in 1981, is still widely used but is no longer believed to accurately reflect phylogeny. A consensus
about how the flowering plants should be arranged has recently begun to emerge through the work of the Angiosperm
Phylogeny Group (APG), which published an influential reclassification of the angiosperms in 1998. Updates incorporating
more recent research were published as APG II in 2003 and as APG III in 2009. Recent studies, as by the APG, show
that the monocots form a monophyletic group (clade) but that the dicots do not (they are paraphyletic). Nevertheless,
the majority of dicot species do form a monophyletic group, called the eudicots or tricolpates. Of the remaining
dicot species, most belong to a third major clade known as the magnoliids, containing about 9,000 species. The rest
include a paraphyletic grouping of primitive species known collectively as the basal angiosperms, plus the families
Ceratophyllaceae and Chloranthaceae. The number of species of flowering plants is estimated to be in the range of
250,000 to 400,000. This compares to around 12,000 species of moss or 11,000 species of pteridophytes, showing that
the flowering plants are much more diverse. The number of families in APG (1998) was 462. In APG II (2003) it is
not settled; at maximum it is 457, but within this number there are 55 optional segregates, so that the minimum number
of families in this system is 402. In APG III (2009) there are 415 families. In the dicotyledons, the bundles in
the very young stem are arranged in an open ring, separating a central pith from an outer cortex. In each bundle,
separating the xylem and phloem, is a layer of meristem or active formative tissue known as cambium. By the formation
of a layer of cambium between the bundles (interfascicular cambium), a complete ring is formed, and a regular periodical
increase in thickness results from the development of xylem on the inside and phloem on the outside. The soft phloem
becomes crushed, but the hard wood persists and forms the bulk of the stem and branches of the woody perennial. Owing
to differences in the character of the elements produced at the beginning and end of the season, the wood is marked
out in transverse section into concentric rings, one for each season of growth, called annual rings. The characteristic
feature of angiosperms is the flower. Flowers show remarkable variation in form and elaboration, and provide the
most trustworthy external characteristics for establishing relationships among angiosperm species. The function of
the flower is to ensure fertilization of the ovule and development of fruit containing seeds. The floral apparatus
may arise terminally on a shoot or from the axil of a leaf (where the petiole attaches to the stem). Occasionally,
as in violets, a flower arises singly in the axil of an ordinary foliage-leaf. More typically, the flower-bearing
portion of the plant is sharply distinguished from the foliage-bearing or vegetative portion, and forms a more or
less elaborate branch-system called an inflorescence. The flower may consist only of these parts, as in willow, where
each flower comprises only a few stamens or two carpels. Usually, other structures are present and serve to protect
the sporophylls and to form an envelope attractive to pollinators. The individual members of these surrounding structures
are known as sepals and petals (or tepals in flowers such as Magnolia where sepals and petals are not distinguishable
from each other). The outer series (calyx of sepals) is usually green and leaf-like, and functions to protect the
rest of the flower, especially the bud. The inner series (corolla of petals) is, in general, white or brightly colored,
and is more delicate in structure. It functions to attract insect or bird pollinators. Attraction is effected by
color, scent, and nectar, which may be secreted in some part of the flower. The characteristics that attract pollinators
account for the popularity of flowers and flowering plants among humans. While the majority of flowers are perfect
or hermaphrodite (having both pollen and ovule producing parts in the same flower structure), flowering plants have
developed numerous morphological and physiological mechanisms to reduce or prevent self-fertilization. Heteromorphic
flowers have short carpels and long stamens, or vice versa, so animal pollinators cannot easily transfer pollen to
the pistil (receptive part of the carpel). Homomorphic flowers may employ a biochemical (physiological) mechanism
called self-incompatibility to discriminate between self and non-self pollen grains. In other species, the male and
female parts are morphologically separated, developing on different flowers. Double fertilization refers to a process
in which two sperm cells fertilize cells in the ovary. This process begins when a pollen grain adheres to the stigma
of the pistil (female reproductive structure), germinates, and grows a long pollen tube. While this pollen tube is
growing, a haploid generative cell travels down the tube behind the tube nucleus. The generative cell divides by
mitosis to produce two haploid (n) sperm cells. As the pollen tube grows, it makes its way from the stigma, down
the style and into the ovary. Here the pollen tube reaches the micropyle of the ovule and digests its way into one
of the synergids, releasing its contents (which include the sperm cells). The synergid that the cells were released
into degenerates and one sperm makes its way to fertilize the egg cell, producing a diploid (2n) zygote. The second
sperm cell fuses with both central cell nuclei, producing a triploid (3n) cell. As the zygote develops into an embryo,
the triploid cell develops into the endosperm, which serves as the embryo's food supply. The ovary will now develop
into fruit and the ovule into seed. The character of the seed coat bears a definite relation to that
of the fruit. Seed coats protect the embryo and aid in dissemination; they may also directly promote germination. Among
plants with indehiscent fruits, in general, the fruit provides protection for the embryo and secures dissemination.
In this case, the seed coat is only slightly developed. If the fruit is dehiscent and the seed is exposed, in general,
the seed-coat is well developed, and must discharge the functions otherwise executed by the fruit. Agriculture is
almost entirely dependent on angiosperms, which provide virtually all plant-based food, and also provide a significant
amount of livestock feed. Of all the families of plants, the Poaceae, or grass family (grains), is by far the most
important, providing the bulk of all feedstocks (rice, maize (corn), wheat, barley, rye, oats, pearl millet, sugar
cane, sorghum). The Fabaceae, or legume family, comes in second place. Also of high importance are the Solanaceae,
or nightshade family (potatoes, tomatoes, and peppers, among others), the Cucurbitaceae, or gourd family (also including
pumpkins and melons), the Brassicaceae, or mustard plant family (including rapeseed and the innumerable varieties
of the cabbage species Brassica oleracea), and the Apiaceae, or parsley family. Many of our fruits come from the
Rutaceae, or rue family (including oranges, lemons, grapefruits, etc.), and the Rosaceae, or rose family (including
apples, pears, cherries, apricots, plums, etc.). Traditionally, the flowering plants are divided into two groups,
which in the Cronquist system are called Magnoliopsida (at the rank of class, formed from the family name Magnoliaceae)
and Liliopsida (at the rank of class, formed from the family name Liliaceae). Other descriptive names allowed by
Article 16 of the ICBN include Dicotyledones or Dicotyledoneae, and Monocotyledones or Monocotyledoneae, which have
a long history of use. In English a member of either group may be called a dicotyledon (plural dicotyledons) and
monocotyledon (plural monocotyledons), or abbreviated, as dicot (plural dicots) and monocot (plural monocots). These
names derive from the observation that the dicots most often have two cotyledons, or embryonic leaves, within each
seed. The monocots usually have only one, but the rule is not absolute either way. From a diagnostic point of view,
the number of cotyledons is neither a particularly handy nor a reliable character.
Hyderabad (i/ˈhaɪdərəˌbæd/ HY-dər-ə-bad; often /ˈhaɪdrəˌbæd/) is the capital of the southern Indian state of Telangana and
de jure capital of Andhra Pradesh.[A] Occupying 650 square kilometres (250 sq mi) along the banks of the Musi River,
it has a population of about 6.7 million and a metropolitan population of about 7.75 million, making it the fourth
most populous city and sixth most populous urban agglomeration in India. At an average altitude of 542 metres (1,778
ft), much of Hyderabad is situated on hilly terrain around artificial lakes, including Hussain Sagar—predating the
city's founding—north of the city centre. Established in 1591 by Muhammad Quli Qutb Shah, Hyderabad remained under
the rule of the Qutb Shahi dynasty for nearly a century before the Mughals captured the region. In 1724, Mughal viceroy
Asif Jah I declared his sovereignty and created his own dynasty, known as the Nizams of Hyderabad. The Nizam's dominions
became a princely state during the British Raj, and remained so for 150 years, with the city serving as its capital.
The Nizami influence can still be seen in the culture of the Hyderabadi Muslims. The city continued as the capital
of Hyderabad State after it was brought into the Indian Union in 1948, and became the capital of Andhra Pradesh after
the States Reorganisation Act, 1956. Since 1956, Rashtrapati Nilayam in the city has been the winter office of the
President of India. In 2014, the newly formed state of Telangana split from Andhra Pradesh and the city became joint
capital of the two states, a transitional arrangement scheduled to end by 2025. Relics of Qutb Shahi and Nizam rule
remain visible today, with the Charminar—commissioned by Muhammad Quli Qutb Shah—coming to symbolise Hyderabad. Golconda
fort is another major landmark. The influence of Mughlai culture is also evident in the city's distinctive cuisine,
which includes Hyderabadi biryani and Hyderabadi haleem. The Qutb Shahis and Nizams established Hyderabad as a cultural
hub, attracting men of letters from different parts of the world. Hyderabad emerged as the foremost centre of culture
in India with the decline of the Mughal Empire in the mid-19th century, with artists migrating to the city from the
rest of the Indian subcontinent. While Hyderabad is losing its cultural pre-eminence, it is today, due to the Telugu
film industry, the country's second-largest producer of motion pictures. Hyderabad was historically known as a pearl
and diamond trading centre, and it continues to be known as the City of Pearls. Many of the city's traditional bazaars,
including Laad Bazaar, Begum Bazaar and Sultan Bazaar, have remained open for centuries. However, industrialisation
throughout the 20th century attracted major Indian manufacturing, research and financial institutions, including
Bharat Heavy Electricals Limited, the National Geophysical Research Institute and the Centre for Cellular and Molecular
Biology. Special economic zones dedicated to information technology have encouraged companies from across India and
around the world to set up operations and the emergence of pharmaceutical and biotechnology industries in the 1990s
led to the area's naming as India's "Genome Valley". With an output of US$74 billion, Hyderabad is the fifth-largest
contributor to India's overall gross domestic product. According to John Everett-Heath, the author of Oxford Concise
Dictionary of World Place Names, Hyderabad means "Haydar's city" or "lion city", from haydar (lion) and ābād (city).
It was named to honour the Caliph Ali Ibn Abi Talib, who was also known as Haydar because of his lion-like valour
in battles. Andrew Petersen, a scholar of Islamic architecture, says the city was originally called Baghnagar (city
of gardens). One popular theory suggests that Muhammad Quli Qutb Shah, the founder of the city, named it "Bhagyanagar"
or "Bhāgnagar" after Bhagmati, a local nautch (dancing) girl with whom he had fallen in love. She converted to Islam
and adopted the title Hyder Mahal. The city was renamed Hyderabad in her honour. According to another source, the
city was named after Haidar, the son of Quli Qutb Shah. Archaeologists excavating near the city have unearthed Iron
Age sites that may date from 500 BCE. The region comprising modern Hyderabad and its surroundings was known as Golkonda
(Golla Konda, "shepherd's hill"), and was ruled by the Chalukya dynasty from 624 CE to 1075 CE. Following the dissolution
of the Chalukya empire into four parts in the 11th century, Golkonda came under the control of the Kakatiya dynasty
from 1158, whose seat of power was at Warangal, 148 km (92 mi) northeast of modern Hyderabad. The Kakatiya dynasty
was reduced to a vassal of the Khilji dynasty in 1310 after its defeat by Sultan Alauddin Khilji of the Delhi Sultanate.
This lasted until 1321, when the Kakatiya dynasty was annexed by Malik Kafur, Allaudin Khilji's general. During this
period, Alauddin Khilji took the Koh-i-Noor diamond, which is said to have been mined from the Kollur Mines of Golkonda,
to Delhi. Muhammad bin Tughluq succeeded to the Delhi sultanate in 1325, bringing Warangal under the rule of the
Tughlaq dynasty until 1347 when Ala-ud-Din Bahman Shah, a governor under bin Tughluq, rebelled against Delhi and
established the Bahmani Sultanate in the Deccan Plateau, with Gulbarga, 200 km (124 mi) west of Hyderabad, as its
capital. The Bahmani kings ruled the region until 1518 and were the first independent Muslim rulers of the Deccan.
Sultan Quli, a governor of Golkonda, revolted against the Bahmani Sultanate and established the Qutb Shahi dynasty
in 1518; he rebuilt the mud-fort of Golconda and named the city "Muhammad nagar". The fifth sultan, Muhammad Quli
Qutb Shah, established Hyderabad on the banks of the Musi River in 1591, to avoid the water shortages experienced
at Golkonda. During his rule, he had the Charminar and Mecca Masjid built in the city. On 21 September 1687, the
Golkonda Sultanate came under the rule of the Mughal emperor Aurangzeb after a year-long siege of the Golkonda fort.
The annexed area was renamed Deccan Suba (Deccan province) and the capital was moved from Golkonda to Aurangabad,
about 550 km (342 mi) northwest of Hyderabad. In 1713 Farrukhsiyar, the Mughal emperor, appointed Asif Jah I to be
Viceroy of the Deccan, with the title Nizam-ul-Mulk (Administrator of the Realm). In 1724, Asif Jah I defeated Mubariz
Khan to establish autonomy over the Deccan Suba, named the region Hyderabad Deccan, and started what came to be known
as the Asif Jahi dynasty. Subsequent rulers retained the title Nizam ul-Mulk and were referred to as Asif Jahi Nizams,
or Nizams of Hyderabad. The death of Asif Jah I in 1748 resulted in a period of political unrest as his sons, backed
by opportunistic neighbouring states and colonial foreign forces, contended for the throne. The accession of Asif
Jah II, who reigned from 1762 to 1803, ended the instability. In 1768 he signed the treaty of Masulipatnam, surrendering
the coastal region to the East India Company in return for a fixed annual rent. In 1769 Hyderabad city became the
formal capital of the Nizams. In response to regular threats from Hyder Ali (Dalwai of Mysore), Baji Rao I (Peshwa
of the Maratha Empire), and Basalath Jung (Asif Jah II's elder brother, who was supported by the Marquis de Bussy-Castelnau),
the Nizam signed a subsidiary alliance with the East India Company in 1798, allowing the British Indian Army to occupy
Bolarum (modern Secunderabad) to protect the state's borders, for which the Nizams paid an annual maintenance to
the British. After India gained independence, the Nizam declared his intention to remain independent rather than
become part of the Indian Union. The Hyderabad State Congress, with the support of the Indian National Congress and
the Communist Party of India, began agitating against Nizam VII in 1948. On 17 September that year, the Indian Army
took control of Hyderabad State after an invasion codenamed Operation Polo. With the defeat of his forces, Nizam
VII capitulated to the Indian Union by signing an Instrument of Accession, which made him the Rajpramukh (Princely
Governor) of the state until 31 October 1956. Between 1946 and 1951, the Communist Party of India fomented the Telangana
uprising against the feudal lords of the Telangana region. The Constitution of India, which became effective on 26
January 1950, made Hyderabad State one of the part B states of India, with Hyderabad city continuing to be the capital.
In his 1955 report Thoughts on Linguistic States, B. R. Ambedkar, then chairman of the Drafting Committee of the
Indian Constitution, proposed designating the city of Hyderabad as the second capital of India because of its amenities
and strategic central location. Since 1956, the Rashtrapati Nilayam in Hyderabad has been the second official residence
and business office of the President of India; the President stays once a year in winter and conducts official business
particularly relating to Southern India. On 1 November 1956 the states of India were reorganised by language. Hyderabad
state was split into three parts, which were merged with neighbouring states to form the modern states of Maharashtra,
Karnataka and Andhra Pradesh. The nine Telugu- and Urdu-speaking districts of Hyderabad State in the Telangana region
were merged with the Telugu-speaking Andhra State to create Andhra Pradesh, with Hyderabad as its capital. Several
protests, known collectively as the Telangana movement, attempted to invalidate the merger and demanded the creation
of a new Telangana state. Major actions took place in 1969 and 1972, and a third began in 2010. The city suffered
several explosions: one at Dilsukhnagar in 2002 claimed two lives; terrorist bombs in May and August 2007 caused
communal tension and riots; and two bombs exploded in February 2013. On 30 July 2013 the government of India declared
that part of Andhra Pradesh would be split off to form a new Telangana state, and that Hyderabad city would be the
capital city and part of Telangana, while the city would also remain the capital of Andhra Pradesh for no more than
ten years. On 3 October 2013 the Union Cabinet approved the proposal, and in February 2014 both houses of Parliament
passed the Telangana Bill. With the final assent of the President of India in June 2014, Telangana state was formed.
Situated in the southern part of Telangana in southeastern India, Hyderabad is 1,566 kilometres (973 mi) south of
Delhi, 699 kilometres (434 mi) southeast of Mumbai, and 570 kilometres (350 mi) north of Bangalore by road. It lies
on the banks of the Musi River, in the northern part of the Deccan Plateau. Greater Hyderabad covers 650 km2 (250
sq mi), making it one of the largest metropolitan areas in India. With an average altitude of 542 metres (1,778 ft),
Hyderabad lies on predominantly sloping terrain of grey and pink granite, dotted with small hills, the highest being
Banjara Hills at 672 metres (2,205 ft). The city has numerous lakes referred to as sagar, meaning "sea". Examples
include artificial lakes created by dams on the Musi, such as Hussain Sagar (built in 1562 near the city centre),
Osman Sagar and Himayat Sagar. As of 1996, the city had 140 lakes and 834 water tanks (ponds). Hyderabad has a tropical
wet and dry climate (Köppen Aw) bordering on a hot semi-arid climate (Köppen BSh). The annual mean temperature is
26.6 °C (79.9 °F); monthly mean temperatures are 21–33 °C (70–91 °F). Summers (March–June) are hot and humid, with
average highs in the mid-to-high 30s Celsius; maximum temperatures often exceed 40 °C (104 °F) between April and
June. The coolest temperatures occur in December and January, when the lowest temperature occasionally dips to 10
°C (50 °F). May is the hottest month, when daily temperatures range from 26 to 39 °C (79–102 °F); December, the coldest,
has temperatures varying from 14.5 to 28 °C (57–82 °F). Hyderabad's lakes and the sloping terrain of its low-lying
hills provide habitat for an assortment of flora and fauna. The forest region in and around the city encompasses
areas of ecological and biological importance, which are preserved in the form of national parks, zoos, mini-zoos
and a wildlife sanctuary. Nehru Zoological Park, the city's only large zoo, is the first in India to have a lion and
tiger safari park. Hyderabad has three national parks (Mrugavani National Park, Mahavir Harina Vanasthali National
Park and Kasu Brahmananda Reddy National Park), and the Manjira Wildlife Sanctuary is about 50 km (31 mi) from the
city. Hyderabad's other environmental reserves are: Kotla Vijayabhaskara Reddy Botanical Gardens, Shamirpet Lake,
Hussain Sagar, Fox Sagar Lake, Mir Alam Tank and Patancheru Lake, which is home to regional birds and attracts seasonal
migratory birds from different parts of the world. Organisations engaged in environmental and wildlife preservation
include the Telangana Forest Department, Indian Council of Forestry Research and Education, the International Crops
Research Institute for the Semi-Arid Tropics (ICRISAT), the Animal Welfare Board of India, the Blue Cross of Hyderabad
and the University of Hyderabad. The Greater Hyderabad Municipal Corporation (GHMC) oversees the civic infrastructure
of the city's 18 "circles", which together encompass 150 municipal wards. Each ward is represented by a corporator,
elected by popular vote. The corporators elect the Mayor, who is the titular head of GHMC; executive powers rest
with the Municipal Commissioner, appointed by the state government. The GHMC carries out the city's infrastructural
work such as building and maintenance of roads and drains, town planning including construction regulation, maintenance
of municipal markets and parks, solid waste management, the issuing of birth and death certificates, the issuing
of trade licences, collection of property tax, and community welfare services such as mother and child healthcare,
and pre-school and non-formal education. The GHMC was formed in April 2007 by merging the Municipal Corporation of
Hyderabad (MCH) with 12 municipalities of the Hyderabad, Ranga Reddy and Medak districts covering a total area of
650 km2 (250 sq mi). In the 2016 municipal election, the Telangana Rashtra Samithi won a majority, and the
present Mayor is Bonthu Ram Mohan. The Secunderabad Cantonment Board is a civic administration agency overseeing
an area of 40.1 km2 (15.5 sq mi), where there are several military camps. The Osmania University campus is administered
independently by the university authority. The jurisdictions of the city's administrative agencies are, in ascending
order of size: the Hyderabad Police area, Hyderabad district, the GHMC area ("Hyderabad city") and the area under
the Hyderabad Metropolitan Development Authority (HMDA). The HMDA is an apolitical urban planning agency that covers
the GHMC and its suburbs, extending to 54 mandals in five districts encircling the city. It coordinates the development
activities of GHMC and suburban municipalities and manages the administration of bodies such as the Hyderabad Metropolitan
Water Supply and Sewerage Board (HMWSSB). The HMWSSB regulates rainwater harvesting, sewerage services and water
supply, which is sourced from several dams located in the suburbs. In 2005, the HMWSSB started operating a 116-kilometre-long
(72 mi) water supply pipeline from Nagarjuna Sagar Dam to meet increasing demand. The Telangana Southern Power Distribution
Company Limited manages electricity supply. As of October 2014, there were 15 fire stations in the city, operated
by the Telangana State Disaster and Fire Response Department. The government-owned India Post has five head post
offices and many sub-post offices in Hyderabad, which are complemented by private courier services. Hyderabad produces
around 4,500 tonnes of solid waste daily, which is transported from collection units in Imlibun, Yousufguda and Lower
Tank Bund to the dumpsite in Jawaharnagar. Disposal is managed by the Integrated Solid Waste Management project which
was started by the GHMC in 2010. Rapid urbanisation and increased economic activity have also led to increased industrial
waste, air, noise and water pollution, which is regulated by the Telangana Pollution Control Board (TPCB). The contribution
of different sources to air pollution in 2006 was: 20–50% from vehicles, 40–70% from a combination of vehicle discharge
and road dust, 10–30% from industrial discharges and 3–10% from the burning of household rubbish. Deaths resulting
from atmospheric particulate matter are estimated at 1,700–3,000 each year. Ground water, the city's main source of
drinking water, has a hardness of up to 1,000 ppm, around three times the desirable level; the growing population
and the consequent rise in demand have caused not only ground water but also river and lake levels to decline. The
shortage is further exacerbated by inadequately treated effluent from industrial treatment plants polluting the city's
water sources. The Commissionerate of Health and Family Welfare
is responsible for planning, implementation and monitoring of all facilities related to health and preventive services.
As of 2010–11, the city had 50 government hospitals, 300 private and charity hospitals and 194 nursing homes
providing around 12,000 hospital beds, fewer than half the required 25,000. For every 10,000 people in the city,
there are 17.6 hospital beds, 9 specialist doctors, 14 nurses and 6 physicians. The city also has about 4,000 individual
clinics and 500 medical diagnostic centres. Many residents prefer private clinics because government facilities are
distant, offer poor quality of care and have long waiting times, even though a high proportion of the city's residents
are covered by government health insurance: 24% according to a 2005 National Family Health Survey. As of 2012, many
new private hospitals of various sizes had opened or were being built. Hyderabad
also has outpatient and inpatient facilities that use Unani, homeopathic and Ayurvedic treatments. In the 2005 National
Family Health Survey, the city's total fertility rate was reported as 1.8, below the replacement
rate. Only 61% of children had been provided with all basic vaccines (BCG, measles and full courses of polio and
DPT), fewer than in all other surveyed cities except Meerut. The infant mortality rate was 35 per 1,000 live births,
and the mortality rate for children under five was 41 per 1,000 live births. The survey also reported that a third
of women and a quarter of men were overweight or obese, 49% of children below five years were anaemic, and up to 20% of
children were underweight, while more than 2% of women and 3% of men suffered from diabetes. When the GHMC
was created in 2007, the area occupied by the municipality increased from 175 km2 (68 sq mi) to 650 km2 (250 sq mi).
Consequently, the population increased by 87%, from 3,637,483 in the 2001 census to 6,809,970 in the 2011 census,
24% of which are migrants from elsewhere in India, making Hyderabad the nation's fourth most populous city. As
of 2011, the population density was 18,480/km2 (47,900/sq mi). At the same 2011 census, the Hyderabad Urban
Agglomeration had a population of 7,749,334, making it the sixth most populous urban agglomeration in the country.
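As a quick sanity check, the decadal growth figure quoted above is consistent with the two census counts — a minimal sketch in Python, using only the numbers stated in the text:

```python
# Census figures for the GHMC area, as quoted in the text above.
pop_2001 = 3_637_483   # 2001 census (pre-expansion comparison base)
pop_2011 = 6_809_970   # 2011 census (after the April 2007 GHMC expansion)

# Decadal growth as a percentage of the 2001 population.
growth = (pop_2011 - pop_2001) / pop_2001 * 100
print(round(growth))   # prints 87, matching the stated 87% increase
```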
The population of the Hyderabad urban agglomeration has since been estimated by electoral officials to be 9.1 million
as of early 2013, and was expected to exceed 10 million by the end of that year. There are 3,500,802 male and 3,309,168
female citizens—a sex ratio of 945 females per 1000 males, higher than the national average of 926 per 1000. Among
children aged 0–6 years, 373,794 are boys and 352,022 are girls—a ratio of 942 per 1000. Literacy stands at 82.96%
(male 85.96%; female 79.79%), higher than the national average of 74.04%. The socio-economic strata consist of 20%
upper class, 50% middle class and 30% working class. Referred to as "Hyderabadi", the residents of Hyderabad are
predominantly Telugu and Urdu speaking people, with minority Bengali, Gujarati (including Memon), Kannada (including
Nawayathi), Malayalam, Marathi, Marwari, Odia, Punjabi, Tamil and Uttar Pradeshi communities. Hyderabad is home to
a unique dialect of Urdu called Hyderabadi Urdu, a type of Dakhini that is the mother tongue of most Hyderabadi
Muslims, a community who owe much of their history, language, cuisine and culture to Hyderabad and the various
dynasties that previously ruled it. Hadhrami Arabs, African Arabs, Armenians, Abyssinians, Iranians, Pathans and Turkish
people are also present; these communities, of which the Hadhrami are the largest, declined after Hyderabad State
became part of the Indian Union, as they lost the patronage of the Nizams. In the greater metropolitan area, 13%
of the population live below the poverty line. According to a 2012 report submitted by GHMC to the World Bank, Hyderabad
has 1,476 slums with a total population of 1.7 million, of whom 66% live in 985 slums in the "core" of the city (the
part that formed Hyderabad before the April 2007 expansion) and the remaining 34% live in 491 suburban tenements.
About 22% of the slum-dwelling households had migrated from different parts of India in the last decade of the 20th
century, and 63% claimed to have lived in the slums for more than 10 years. Overall literacy in the slums is 60–80%
and female literacy is 52–73%. A third of the slums have basic service connections, and the remainder depend on general
public services provided by the government. There are 405 government schools, 267 government aided schools, 175 private
schools and 528 community halls in the slum areas. According to a 2008 survey by the Centre for Good Governance,
87.6% of the slum-dwelling households are nuclear families; 18% are very poor, with an income of up to ₹20,000 (US$300)
per annum; 73% live below the poverty line (the standard poverty line recognised by the Andhra Pradesh Government is
₹24,000 (US$360) per annum); 27% of the chief wage earners (CWE) are casual labourers; and 38% of the CWE are illiterate.
About 3.72% of slum children aged 5–14 do not go to school and 3.17% work as child labourers, of whom 64% are boys
and 36% are girls. The largest employers of child labour are street shops and construction sites. Among the working
children, 35% are engaged in hazardous jobs. Many historic and tourist sites lie in south central Hyderabad, such
as the Charminar, the Mecca Masjid, the Salar Jung Museum, the Nizam's Museum, the Falaknuma Palace, and the traditional
retail corridor comprising the Pearl Market, Laad Bazaar and Madina Circle. North of the river are hospitals, colleges,
major railway stations and business areas such as Begum Bazaar, Koti, Abids, Sultan Bazaar and Moazzam Jahi Market,
along with administrative and recreational establishments such as the Reserve Bank of India, the Telangana Secretariat,
the Hyderabad Mint, the Telangana Legislature, the Public Gardens, the Nizam Club, the Ravindra Bharathi, the State
Museum, the Birla Temple and the Birla Planetarium. North of central Hyderabad lie Hussain Sagar, Tank Bund Road,
Rani Gunj and the Secunderabad Railway Station. Most of the city's parks and recreational centres, such as Sanjeevaiah
Park, Indira Park, Lumbini Park, NTR Gardens, the Buddha statue and Tankbund Park are located here. In the northwest
part of the city there are upscale residential and commercial areas such as Banjara Hills, Jubilee Hills, Begumpet,
Khairatabad and Miyapur. The northern end contains industrial areas such as Sanathnagar, Moosapet, Balanagar, Patancheru
and Chanda Nagar. The northeast end is dotted with residential areas. In the eastern part of the city lie many defence
research centres and Ramoji Film City. The "Cyberabad" area in the southwest and west of the city has grown rapidly
since the 1990s. It is home to information technology and bio-pharmaceutical companies and to landmarks such as Hyderabad
Airport, Osman Sagar, Himayath Sagar and Kasu Brahmananda Reddy National Park. Heritage buildings constructed during
the Qutb Shahi and Nizam eras showcase Indo-Islamic architecture influenced by Medieval, Mughal and European styles.
After the 1908 flooding of the Musi River, the city was expanded and civic monuments constructed, particularly during
the rule of Mir Osman Ali Khan (the seventh Nizam), whose patronage of architecture led to him being referred to as
the maker of modern Hyderabad. In 2012, the government of India declared Hyderabad the first "Best heritage city
of India". Qutb Shahi architecture of the 16th and early 17th centuries followed classical Persian architecture featuring
domes and colossal arches. The oldest surviving Qutb Shahi structure in Hyderabad is the ruins of Golconda fort built
in the 16th century. The Charminar, Mecca Masjid, Charkaman and Qutb Shahi tombs are other existing structures of
this period. Among these the Charminar has become an icon of the city; located in the centre of old Hyderabad, it
is a square structure with sides 20 m (66 ft) long and four grand arches each facing a road. At each corner stands
a 56 m (184 ft)-high minaret. Most of the historical bazaars that still exist were constructed on the street north
of Charminar towards Golconda fort. The Charminar, Qutb Shahi tombs and Golconda fort are considered to be monuments
of national importance in India; in 2010 the Indian government proposed that the sites be listed for UNESCO World
Heritage status. Among the oldest surviving examples of Nizam architecture in Hyderabad is the Chowmahalla
Palace, which was the seat of royal power. It showcases a diverse array of architectural styles, from the Baroque
Harem to its Neoclassical royal court. The other palaces include Falaknuma Palace (inspired by the style of Andrea
Palladio), Purani Haveli, King Kothi and Bella Vista Palace, all of which were built at the peak of Nizam rule in
the 19th century. During Mir Osman Ali Khan's rule, European styles, along with Indo-Islamic, became prominent. These
styles are reflected in the Falaknuma Palace and many civic monuments such as the Hyderabad High Court, Osmania Hospital,
Osmania University, the State Central Library, City College, the Telangana Legislature, the State Archaeology Museum,
Jubilee Hall, and Hyderabad and Kachiguda railway stations. Other landmarks of note are Paigah Palace, Asman Garh
Palace, Basheer Bagh Palace, Errum Manzil and the Spanish Mosque, all constructed by the Paigah family. Hyderabad
is the largest contributor to Telangana's gross domestic product (GDP) and to its tax and other revenues, and is the sixth
largest deposit centre and fourth largest credit centre nationwide, as ranked by the Reserve Bank of India (RBI)
in June 2012. Its US$74 billion GDP made it the fifth-largest contributor city to India's overall GDP in 2011–12.
Its per capita annual income in 2011 was ₹44,300 (US$660). As of 2006, the largest employers in the city were
the governments of Andhra Pradesh (113,098 employees) and India (85,155). According to a 2005 survey, 77% of males
and 19% of females in the city were employed. The service industry remains dominant in the city, and 90% of the employed
workforce is engaged in this sector. Hyderabad's role in the pearl trade has given it the name "City of Pearls" and
up until the 18th century, the city was also the only global trading centre for large diamonds. Industrialisation
began under the Nizams in the late 19th century, helped by railway expansion that connected the city with major ports.
From the 1950s to the 1970s, Indian enterprises, such as Bharat Heavy Electricals Limited (BHEL), Nuclear Fuel Complex
(NFC), National Mineral Development Corporation (NMDC), Bharat Electronics (BEL), Electronics Corporation of India
Limited (ECIL), Defence Research and Development Organisation (DRDO), Hindustan Aeronautics Limited (HAL), Centre
for Cellular and Molecular Biology (CCMB), Centre for DNA Fingerprinting and Diagnostics (CDFD), State Bank of Hyderabad
(SBH) and Andhra Bank (AB) were established in the city. The city is home to Hyderabad Securities formerly known
as Hyderabad Stock Exchange (HSE), and houses the regional office of the Securities and Exchange Board of India (SEBI).
In 2013, the Bombay Stock Exchange (BSE) facility in Hyderabad was forecast to provide operations and transactions
services to BSE-Mumbai by the end of 2014. The growth of the financial services sector has helped Hyderabad evolve
from a traditional manufacturing city to a cosmopolitan industrial service centre. Since the 1990s, the growth of
information technology (IT), IT-enabled services (ITES), insurance and financial institutions has expanded the service
sector, and these primary economic activities have boosted the ancillary sectors of trade and commerce, transport,
storage, communication, real estate and retail. The establishment of Indian Drugs and Pharmaceuticals Limited (IDPL),
a public sector undertaking, in 1961 was followed over the decades by many national and global companies opening
manufacturing and research facilities in the city. As of 2010, the city manufactured one third of India's
bulk drugs and 16% of biotechnology products, contributing to its reputation as "India's pharmaceutical capital"
and the "Genome Valley of India". Hyderabad is a global centre of information technology, for which it is known as
Cyberabad (Cyber City). As of 2013, it contributed 15% of India's and 98% of Andhra Pradesh's exports in
IT and ITES sectors and 22% of NASSCOM's total membership is from the city. The development of HITEC City, a township
with extensive technological infrastructure, prompted multinational companies to establish facilities in Hyderabad.
The city is home to more than 1300 IT and ITES firms, including global conglomerates such as Microsoft (operating
its largest R&D campus outside the US), Google, IBM, Yahoo!, Dell, Facebook, and major Indian firms including Tech
Mahindra, Infosys, Tata Consultancy Services (TCS), Polaris and Wipro. In 2009 the World Bank Group ranked the
city as the second best Indian city for doing business. The city and its suburbs contain the highest number of special
economic zones of any Indian city. Like the rest of India, Hyderabad has a large informal economy that employs 30%
of the labour force. According to a survey published in 2007, it had 40,000–50,000 street vendors, and their numbers
were increasing. Among the street vendors, 84% are male and 16% female, and four fifths are "stationary vendors"
operating from a fixed pitch, often with their own stall. Most are financed through personal savings; only
8% borrow from moneylenders. Vendor earnings vary from ₹50 (74¢ US) to ₹800 (US$12) per day. Other unorganised
economic sectors include dairy, poultry farming, brick manufacturing, casual labour and domestic help. Those involved
in the informal economy constitute a major portion of the urban poor. Hyderabad emerged as the foremost centre of
culture in India with the decline of the Mughal Empire. After the fall of Delhi in 1857, the migration of performing
artists to the city, particularly from the north and west of the Indian subcontinent, under the patronage of the
Nizam, enriched the cultural milieu. This migration resulted in a mingling of North and South Indian languages, cultures
and religions, which has since led to a co-existence of Hindu and Muslim traditions, for which the city has become
noted. A further consequence of this north–south mix is that both Telugu and Urdu are official languages of
Telangana. The mixing of religions has also resulted in many festivals being celebrated in Hyderabad, such as the
Hindu festivals Ganesh Chaturthi, Diwali and Bonalu, and the Muslim festivals Eid ul-Fitr and Eid al-Adha. In the past, Qutb Shahi
rulers and Nizams attracted artists, architects and men of letters from different parts of the world through patronage.
The resulting ethnic mix popularised cultural events such as mushairas (poetic symposia). The Qutb Shahi dynasty
particularly encouraged the growth of Deccani Urdu literature leading to works such as the Deccani Masnavi and Diwan
poetry, which are among the earliest available manuscripts in Urdu. Lazzat Un Nisa, a book compiled in the 15th century
at Qutb Shahi courts, contains erotic paintings with diagrams for secret medicines and stimulants in the eastern
form of ancient sexual arts. The reign of the Nizams saw many literary reforms and the introduction of Urdu as a
language of court, administration and education. In 1824, a collection of Urdu Ghazal poetry, named Gulzar-e-Mahlaqa,
authored by Mah Laqa Bai—the first female Urdu poet to produce a Diwan—was published in Hyderabad. Hyderabad has
continued with these traditions in its annual Hyderabad Literary Festival, held since 2010, showcasing the city's
literary and cultural creativity. Organisations engaged in the advancement of literature include the Sahitya Akademi,
the Urdu Academy, the Telugu Academy, the National Council for Promotion of Urdu Language, the Comparative Literature
Association of India, and the Andhra Saraswata Parishad. Literary development is further aided by state institutions
such as the State Central Library, the largest public library in the state which was established in 1891, and other
major libraries including the Sri Krishna Devaraya Andhra Bhasha Nilayam, the British Library and the Sundarayya
Vignana Kendram. South Indian music and dances such as the Kuchipudi and Bharatanatyam styles are popular in the
Deccan region. As a result of their cultural policies, North Indian music and dance gained popularity during the rule
of the Mughals and Nizams, and it was also during their reign that it became a tradition among the nobility to associate
themselves with tawaif (courtesans). These courtesans were revered as the epitome of etiquette and culture, and were
appointed to teach singing, poetry and classical dance to many children of the aristocracy. This gave rise to certain
styles of court music, dance and poetry. Besides western and Indian popular music genres such as filmi music, the
residents of Hyderabad play city-based marfa music, dholak ke geet (household songs based on local folklore), and
qawwali, especially at weddings, festivals and other celebratory events. The state government organises the Golconda
Music and Dance Festival, the Taramati Music Festival and the Premavathi Dance Festival to further encourage the
development of music. Although the city is not particularly noted for theatre and drama, the state government promotes
theatre with multiple programmes and festivals in such venues as the Ravindra Bharati, Shilpakala Vedika and Lalithakala
Thoranam. Although not a purely music oriented event, Numaish, a popular annual exhibition of local and national
consumer products, does feature some musical performances. The city is home to the Telugu film industry, popularly
known as Tollywood, which as of 2012 produced the second-largest number of films in India, behind Bollywood.
Films in the local Hyderabadi dialect are also produced and have been gaining popularity since 2005. The city has
also hosted international film festivals such as the International Children's Film Festival and the Hyderabad International
Film Festival. In 2005, Guinness World Records declared Ramoji Film City to be the world's largest film studio. The
region is well known for its Golconda and Hyderabad painting styles which are branches of Deccani painting. Developed
during the 16th century, the Golconda style is a native style blending foreign techniques and bears some similarity
to the Vijayanagara paintings of neighbouring Mysore. A significant use of luminous gold and white colours is generally
found in the Golconda style. The Hyderabad style originated in the 17th century under the Nizams. Highly influenced
by Mughal painting, this style makes use of bright colours and mostly depicts regional landscape, culture, costumes
and jewellery. Although not a centre for handicrafts itself, the patronage of the arts by the Mughals and Nizams
attracted artisans from the region to Hyderabad. Such crafts include: Bidriware, a metalwork handicraft from neighbouring
Karnataka, which was popularised during the 18th century and has since been granted a Geographical Indication (GI)
tag under the auspices of the WTO act; and Zari and Zardozi, embroidery works on textile that involve making elaborate
designs using gold, silver and other metal threads. Another example of a handicraft drawn to Hyderabad is Kalamkari,
a hand-painted or block-printed cotton textile that comes from cities in Andhra Pradesh. This craft is distinguished
in having both a Hindu style, known as Srikalahasti, done entirely by hand, and an Islamic style, known as Machilipatnam,
that uses both hand and block techniques. Examples of Hyderabad's arts and crafts are housed in various museums including
the Salar Jung Museum (housing "one of the largest one-man-collections in the world"), the AP State Archaeology Museum,
the Nizam Museum, the City Museum and the Birla Science Museum. Hyderabadi cuisine comprises a broad repertoire of
rice, wheat and meat dishes and the skilled use of various spices. Hyderabadi biryani and Hyderabadi haleem, with
their blend of Mughlai and Arab cuisines, have become iconic dishes of India. Hyderabadi cuisine is highly influenced
by Mughlai and to some extent by French, Arabic, Turkish, Iranian and native Telugu and Marathwada cuisines. Other
popular native dishes include nihari, chakna, baghara baingan and the desserts qubani ka meetha, double ka meetha
and kaddu ki kheer (a sweet porridge made with sweet gourd). One of Hyderabad's earliest newspapers, The Deccan Times,
was established in the 1780s. In modern times, the major Telugu dailies published in Hyderabad are Eenadu, Andhra
Jyothy, Sakshi and Namaste Telangana, while the major English papers are The Times of India, The Hindu and The Deccan
Chronicle. The major Urdu papers include The Siasat Daily, The Munsif Daily and Etemaad. Many coffee table magazines,
professional magazines and research journals are also regularly published. The Secunderabad Cantonment Board established
the first radio station in Hyderabad State around 1919. Deccan Radio was the first radio public broadcast station
in the city starting on 3 February 1935, with FM broadcasting beginning in 2000. The available channels in Hyderabad
include All India Radio, Radio Mirchi, Radio City, Red FM and Big FM. Television broadcasting in Hyderabad began
in 1974 with the launch of Doordarshan, the Government of India's public service broadcaster, which transmits two
free-to-air terrestrial television channels and one satellite channel. Private satellite channels started in July
1992 with the launch of Star TV. Satellite TV channels are accessible via cable subscription, direct-broadcast satellite
services or internet-based television. Hyderabad's first dial-up internet access became available in the early 1990s
and was limited to software development companies. The first public internet access service began in 1995, with the
first private sector internet service provider (ISP) starting operations in 1998. In 2015, high-speed public WiFi
was introduced in parts of the city. Public and private schools in Hyderabad are governed by the Central Board of
Secondary Education and follow a "10+2+3" plan. About two-thirds of pupils attend privately run institutions. Languages
of instruction include English, Hindi, Telugu and Urdu. Depending on the institution, students are required to sit
the Secondary School Certificate or the Indian Certificate of Secondary Education. After completing secondary education,
students enroll in schools or junior colleges with a higher secondary facility. Admission to professional graduation
colleges in Hyderabad, many of which are affiliated with either Jawaharlal Nehru Technological University Hyderabad
(JNTUH) or Osmania University (OU), is through the Engineering Agricultural and Medical Common Entrance Test (EAMCET).
There are 13 universities in Hyderabad: two private universities, two deemed universities, six state universities
and three central universities. The central universities are the University of Hyderabad, Maulana Azad National Urdu
University and the English and Foreign Languages University. Osmania University, established in 1918, was the first
university in Hyderabad and as of 2012 is India's second most popular institution for international students.
The Dr. B. R. Ambedkar Open University, established in 1982, is the first distance learning open university in India.
Hyderabad is also home to a number of centres specialising in particular fields such as biomedical sciences, biotechnology
and pharmaceuticals, such as the National Institute of Pharmaceutical Education and Research (NIPER) and National
Institute of Nutrition (NIN). Hyderabad has five major medical schools—Osmania Medical College, Gandhi Medical College,
Nizam's Institute of Medical Sciences, Deccan College of Medical Sciences and Shadan Institute of Medical Sciences—and
many affiliated teaching hospitals. The Government Nizamia Tibbi College is a college of Unani medicine. Hyderabad
is also the headquarters of the Indian Heart Association, a non-profit foundation for cardiovascular education. Institutes
in Hyderabad include the National Institute of Rural Development, the Indian School of Business, the Institute of
Public Enterprise, the Administrative Staff College of India and the Sardar Vallabhbhai Patel National Police Academy.
Technical and engineering schools include the International Institute of Information Technology, Hyderabad (IIITH),
Birla Institute of Technology and Science, Pilani – Hyderabad (BITS Hyderabad) and Indian Institute of Technology,
Hyderabad (IIT-H) as well as agricultural engineering institutes such as the International Crops Research Institute
for the Semi-Arid Tropics (ICRISAT) and the Acharya N. G. Ranga Agricultural University. Hyderabad also has schools
of fashion design including Raffles Millennium International, NIFT Hyderabad and Wigan and Leigh College. The National
Institute of Design, Hyderabad (NID-H), will offer undergraduate and postgraduate courses from 2015. The most popular
sports played in Hyderabad are cricket and association football. At the professional level, the city has hosted national
and international sports events such as the 2002 National Games of India, the 2003 Afro-Asian Games, the 2004 AP
Tourism Hyderabad Open women's tennis tournament, the 2007 Military World Games, the 2009 World Badminton Championships
and the 2009 IBSF World Snooker Championship. The city hosts a number of venues suitable for professional competition
such as the Swarnandhra Pradesh Sports Complex for field hockey, the G. M. C. Balayogi Stadium in Gachibowli for
athletics and football, and for cricket, the Lal Bahadur Shastri Stadium and Rajiv Gandhi International Cricket Stadium,
home ground of the Hyderabad Cricket Association. Hyderabad has hosted many international cricket matches, including
matches in the 1987 and the 1996 ICC Cricket World Cups. The Hyderabad cricket team represents the city in the Ranji
Trophy—a first-class cricket tournament among India's states and cities. Hyderabad is also home to the Indian Premier
League franchise Sunrisers Hyderabad. A previous franchise was the Deccan Chargers, which won the 2009 Indian Premier
League held in South Africa. During British rule, Secunderabad became a well-known sporting centre and many race
courses, parade grounds and polo fields were built. Many elite clubs formed by the Nizams and the British, such
as the Secunderabad Club, the Nizam Club and the Hyderabad Race Club (known for its horse racing, especially
the annual Deccan Derby), still exist. In more recent times, motorsport has become popular, with the Andhra Pradesh
Motor Sports Club organising popular events such as the Deccan ¼ Mile Drag, TSD Rallies and 4x4 off-road rallying.
International-level sportspeople from Hyderabad include: cricketers Ghulam Ahmed, M. L. Jaisimha, Mohammed Azharuddin,
V. V. S. Laxman, Venkatapathy Raju, Shivlal Yadav, Arshad Ayub, Syed Abid Ali and Noel David; football players Syed
Abdul Rahim, Syed Nayeemuddin and Shabbir Ali; tennis player Sania Mirza; badminton players S. M. Arif, Pullela Gopichand,
Saina Nehwal, P. V. Sindhu, Jwala Gutta and Chetan Anand; hockey players Syed Mohammad Hadi and Mukesh Kumar; rifle
shooters Gagan Narang and Asher Noria, and bodybuilder Mir Mohtesham Ali Khan. The most commonly used forms of medium-distance
transport in Hyderabad include government-owned services such as light railways and buses, as well as privately
operated taxis and auto rickshaws. Bus services operate from the Mahatma Gandhi Bus Station in the city centre and
carry over 130 million passengers daily across the entire network. Hyderabad's light rail transportation system,
the Multi-Modal Transport System (MMTS), is a three line suburban rail service used by over 160,000 passengers daily.
Complementing these government services are minibus routes operated by Setwin (Society for Employment Promotion &
Training in Twin Cities). Intercity rail services also operate from Hyderabad; the main, and largest, station is
Secunderabad Railway Station, which serves as Indian Railways' South Central Railway zone headquarters and a hub
for both buses and MMTS light rail services connecting Secunderabad and Hyderabad. Other major railway stations in
Hyderabad are Hyderabad Deccan Station, Kachiguda Railway Station, Begumpet Railway Station, Malkajgiri Railway Station
and Lingampally Railway Station. The Hyderabad Metro, a new rapid transit system, is to be added to the existing
public transport infrastructure and is scheduled to operate three lines by 2015. As of 2012, there are over
3.5 million vehicles operating in the city, of which 74% are two-wheelers, 15% cars and 3% three-wheelers. The remaining
8% include buses, goods vehicles and taxis. The large number of vehicles coupled with relatively low road coverage—roads
occupy only 9.5% of the total city area—has led to widespread traffic congestion, especially since 80% of passengers
and 60% of freight are transported by road. The Inner Ring Road, the Outer Ring Road, the Hyderabad Elevated Expressway,
the longest flyover in India, and various interchanges, overpasses and underpasses were built to ease the congestion.
Maximum speed limits within the city are 50 km/h (31 mph) for two-wheelers and cars, 35 km/h (22 mph) for auto rickshaws
and 40 km/h (25 mph) for light commercial vehicles and buses. Hyderabad sits at the junction of three National Highways
linking it to six other states: NH-7 runs 2,369 km (1,472 mi) from Varanasi, Uttar Pradesh, in the north to Kanyakumari,
Tamil Nadu, in the south; NH-9 runs 841 km (523 mi) east-west between Machilipatnam, Andhra Pradesh, and Pune, Maharashtra;
and the 280 km (174 mi) NH-163 links Hyderabad to Bhopalpatnam, Chhattisgarh. NH-765 links Hyderabad to Srisailam.
Five state highways, SH-1, SH-2, SH-4, SH-5 and SH-6, either start from, or pass through, Hyderabad.

Santa Monica is a beachfront city in western Los Angeles County, California, United States. The city is named after the Christian
saint, Monica. Situated on Santa Monica Bay, it is bordered on three sides by the city of Los Angeles – Pacific Palisades
to the north, Brentwood on the northeast, Sawtelle on the east, Mar Vista on the southeast, and Venice on the south.
Santa Monica is well known for its affluent single-family neighborhoods but also has many neighborhoods consisting
primarily of condominiums and apartments. Over two-thirds of Santa Monica's residents are renters. The Census Bureau
population for Santa Monica in 2010 was 89,736. Santa Monica was long inhabited by the Tongva people. Santa Monica
was called Kecheek in the Tongva language. The first non-indigenous group to set foot in the area was the party of
explorer Gaspar de Portolà, who camped near the present day intersection of Barrington and Ohio Avenues on August
3, 1769. There are two different versions of the naming of the city. One says that it was named in honor of the feast
day of Saint Monica (mother of Saint Augustine), but her feast day is actually May 4. Another version says that it
was named by Juan Crespí on account of a pair of springs, the Kuruvungna Springs (Serra Springs), that were reminiscent
of the tears that Saint Monica shed over her son's early impiety. Around the start of the 20th century, a growing
population of Asian Americans lived in or near Santa Monica and Venice. A Japanese fishing village was located near
the Long Wharf while small numbers of Chinese lived or worked in both Santa Monica and Venice. The two ethnic minorities
were often viewed differently by White Americans who were often well-disposed towards the Japanese but condescending
towards the Chinese. The Japanese village fishermen were an integral economic part of the Santa Monica Bay community.
Donald Wills Douglas, Sr. built a plant in 1922 at Clover Field (Santa Monica Airport) for the Douglas Aircraft Company.
In 1924, four Douglas-built planes took off from Clover Field to attempt the first aerial circumnavigation of the
world. Two planes made it back, after having covered 27,553 miles (44,342 km) in 175 days, and were greeted on their
return September 23, 1924, by a crowd of 200,000 (generously estimated). The Douglas Company (later McDonnell Douglas)
kept facilities in the city until the 1960s. Douglas's business grew astronomically with the onset of World War II,
employing as many as 44,000 people in 1943. To defend against air attack, set designers from the Warner Brothers Studios
prepared elaborate camouflage that disguised the factory and airfield. The RAND Corporation began as a project of
the Douglas Company in 1945, and spun off into an independent think tank on May 14, 1948. RAND eventually acquired
a 15-acre (61,000 m²) campus centrally located between the Civic Center and the pier entrance. The Santa Monica Looff
Hippodrome (carousel) is a National Historic Landmark. It sits on the Santa Monica Pier, which was built in 1909.
The La Monica Ballroom on the pier was once the largest ballroom in the US and the source for many New Year's Eve
national network broadcasts. The Santa Monica Civic Auditorium was an important music venue for several decades and
hosted the Academy Awards in the 1960s. McCabe's Guitar Shop is still a leading acoustic performance space as well
as a retail outlet. Bergamot Station is a city-owned art gallery compound that includes the Santa Monica Museum of
Art. The city is also home to the California Heritage Museum and the Angels Attic dollhouse and toy museum. The Downtown
District is the home of the Third Street Promenade, a major outdoor pedestrian-only shopping district that stretches
for three blocks between Wilshire Blvd. and Broadway (not the same Broadway as in downtown and south Los Angeles). Third
Street is closed to vehicles for those three blocks to allow people to stroll, congregate, shop and enjoy street
performers. Santa Monica Place, featuring Bloomingdale's and Nordstrom in a three-level outdoor environment, is located
at the south end of the Promenade. After a period of redevelopment, the mall reopened in the fall of 2010 as a modern
shopping, entertainment and dining complex with more outdoor space. Every fall the Santa Monica Chamber of Commerce
hosts The Taste of Santa Monica on the Santa Monica Pier. Visitors can sample food and drinks from Santa Monica restaurants.
Other annual events include the Business and Consumer Expo, Sustainable Quality Awards, Santa Monica Cares Health
and Wellness Festival, and the State of the City. The Shutters on the Beach Hotel offers guests a trip to the
Santa Monica Farmers Market to help select the ingredients for that evening's special "Market
Dinner." Classified as a subtropical Mediterranean climate (Köppen Csb), Santa Monica enjoys an average of 310 days
of sunshine a year. It is located in USDA plant hardiness zone 11a. Because of its location, nestled on the vast
and open Santa Monica Bay, morning fog is a common phenomenon in May, June and early July (caused by ocean temperature
variations and currents). Like other inhabitants of the greater Los Angeles area, residents have a particular terminology
for this phenomenon: the "May Gray" and the "June Gloom". Overcast skies are common during June mornings, but usually
the strong sun burns the fog off by noon. Daily fog also occurs in late winter and early summer, arriving suddenly
and sometimes lasting for hours or until after sunset. Nonetheless, it can stay cloudy and cool all
day during June, even as other parts of the Los Angeles area enjoy sunny skies and warmer temperatures. At times,
the sun can be shining east of 20th Street, while the beach area is overcast. As a general rule, the beach temperature
is from 5 to 10 degrees Fahrenheit (3 to 6 degrees Celsius) cooler than it is inland during summer days, and 5–10
degrees warmer during winter nights. Santa Monica is one of the most environmentally activist municipalities in the
nation. The city first proposed its Sustainable City Plan in 1992 and in 1994, was one of the first cities in the
nation to formally adopt a comprehensive sustainability plan, setting waste reduction and water conservation policies
for both public and private sector through its Office of Sustainability and the Environment. Eighty-two percent of
the city's public works vehicles now run on alternative fuels, including nearly 100% of the municipal bus system,
making it among the largest such fleets in the country. Santa Monica fleet vehicles and buses now source their natural
gas from Redeem, a Southern California-based supplier of renewable and sustainable natural gas obtained from non-fracked
methane biogas generated from organic landfill waste. An urban runoff facility (SMURFF), the first of its kind in
the US, catches and treats 3.5 million US gallons (13,000 m³) of water each week that would otherwise flow into the
bay via storm-drains and sells it back to end-users within the city for reuse as gray-water, while bio-swales throughout
the city allow rainwater to percolate into and replenish the groundwater supply. The groundwater supply in turn plays
an important role in the city's Sustainable Water Master Plan, whereby Santa Monica has set a goal of attaining 100%
water independence by 2020. The city has numerous programs designed to promote water conservation among residents,
including a rebate of $1.50 per square foot for those who convert water intensive lawns to more local drought-tolerant
gardens that require less water. The city is also in the process of implementing a 5-year and 20-year Bike Action
Plan with a goal of attaining 14 to 35% bicycle transportation mode share by 2030 through the installation of enhanced
bicycle infrastructure throughout the city. Other environmentally focused initiatives include curbside recycling,
curbside composting bins (in addition to trash, yard-waste, and recycle bins), farmers' markets, community gardens,
garden-share, an urban forest initiative, a hazardous materials home-collection service, green business certification,
and a municipal bus system which is currently being revamped to integrate with the soon-to-open Expo Line. There
were 46,917 households, out of which 7,835 (16.7%) had children under the age of 18 living in them, 13,092 (27.9%)
were opposite-sex married couples living together, 3,510 (7.5%) had a female householder with no husband present,
1,327 (2.8%) had a male householder with no wife present. There were 2,867 (6.1%) unmarried opposite-sex partnerships,
and 416 (0.9%) same-sex married couples or partnerships. 22,716 households (48.4%) were made up of individuals and
5,551 (11.8%) had someone living alone who was 65 years of age or older. The average household size was 1.87. There
were 17,929 families (38.2% of all households); the average family size was 2.79. As of the census of 2000, there
were 84,084 people, 44,497 households, and 16,775 families in the city. The population density was 10,178.7 inhabitants
per square mile (3,930.4/km²). There were 47,863 housing units at an average density of 5,794.0 per square mile (2,237.3/km²).
The racial makeup of the city was 78.29% White, 7.25% Asian, 3.78% African American, 0.47% Native American, 0.10%
Pacific Islander, 5.97% from other races, and 4.13% from two or more races. 13.44% of the population were Hispanic
or Latino of any race. There were 44,497 households, out of which 15.8% had children under the age of 18, 27.5% were
married couples living together, 7.5% had a female householder with no husband present, and 62.3% were non-families.
51.2% of all households were made up of individuals and 10.6% had someone living alone who was 65 years of age or
older. The average household size was 1.83 and the average family size was 2.80. Santa Monica College is a junior college
originally founded in 1929. Many SMC graduates transfer to the University of California system. It occupies 35 acres
(14 hectares) and enrolls 30,000 students annually. The Frederick S. Pardee RAND Graduate School, associated with
the RAND Corporation, is the U.S.'s largest producer of public policy PhDs. The Art Institute of California – Los
Angeles is also located in Santa Monica near the Santa Monica Airport. Universities and colleges within a 22-mile
(35 km) radius from Santa Monica include Santa Monica College, Antioch University Los Angeles, Loyola Marymount University,
Mount St. Mary's College, Pepperdine University, California State University, Northridge, California State University,
Los Angeles, UCLA, USC, West Los Angeles College, California Institute of Technology (Caltech), Occidental College
(Oxy), Los Angeles City College, Los Angeles Southwest College, Los Angeles Valley College, and Emperor's College
of Traditional Oriental Medicine. Santa Monica has a bike action plan and launched a bicycle-sharing system
in November 2015. The city is traversed by the Marvin Braude Bike Trail. Santa Monica has received the Bicycle Friendly
Community Award (Bronze in 2009, Silver in 2013) by the League of American Bicyclists. Local bicycle advocacy organizations
include Santa Monica Spoke, a local chapter of the Los Angeles County Bicycle Coalition. Santa Monica is thought
to be one of the leaders for bicycle infrastructure and programming in Los Angeles County. The Santa
Monica Freeway (Interstate 10) begins in Santa Monica near the Pacific Ocean and heads east. The Santa Monica Freeway
between Santa Monica and downtown Los Angeles is one of the busiest highways in
North America. After traversing Los Angeles County, I-10 crosses seven more states, terminating at Jacksonville,
Florida. In Santa Monica, there is a road sign designating this route as the Christopher Columbus Transcontinental
Highway. State Route 2 (Santa Monica Boulevard) begins in Santa Monica, barely grazing State Route 1 at Lincoln Boulevard,
and continues northeast across Los Angeles County, through the Angeles National Forest, crossing the San Gabriel
Mountains as the Angeles Crest Highway, ending in Wrightwood. Santa Monica is also the western (Pacific) terminus
of historic U.S. Route 66. Close to the eastern boundary of Santa Monica, Sepulveda Boulevard reaches from Long Beach
at the south, to the northern end of the San Fernando Valley. Just east of Santa Monica is Interstate 405, the "San
Diego Freeway", a major north-south route in Los Angeles County and Orange County, California. Historical aspects
of the Expo line route are noteworthy. It uses the right-of-way for the Santa Monica Air Line that provided electric-powered
freight and passenger service between Los Angeles and Santa Monica beginning in the 1920s. Service was discontinued
in 1953 but diesel-powered freight deliveries to warehouses along the route continued until March 11, 1988. The abandonment
of the line spurred concerns within the community and the entire right-of-way was purchased from Southern Pacific
by Los Angeles Metropolitan Transportation Authority. The line was built in 1875 as the steam-powered Los Angeles
and Independence Railroad to bring mining ore to ships in Santa Monica harbor and as a passenger excursion train
to the beach. In 2006, crime in Santa Monica affected 4.41% of the population, slightly lower than the national average
crime rate that year of 4.48%. The majority of this was property crime, which affected 3.74% of Santa Monica's population
in 2006; this was higher than the rates for Los Angeles County (2.76%) and California (3.17%), but lower than the
national average (3.91%). These per-capita crime rates are computed based on Santa Monica's full-time population
of about 85,000. However, the Santa Monica Police Department has suggested the actual per-capita crime rate is much
lower, as tourists, workers, and beachgoers can increase the city's daytime population to between 250,000 and 450,000
people. In 1999, there was a double homicide in the Westside Clothing store on Lincoln Boulevard. During the incident,
Culver City gang members David "Puppet" Robles and Jesse "Psycho" Garcia entered the store masked and began opening
fire, killing Anthony and Michael Juarez. They then ran outside to a getaway vehicle driven by a third Culver City
gang member, who is now also in custody. The clothing store was believed to be a local hang out for Santa Monica
gang members. The dead included two men from Northern California who had merely been visiting the store's owner,
their cousin, to see if they could open a similar store in their area. Police say the incident was in retaliation
for a shooting committed by the Santa Monica 13 gang days before the Juarez brothers were gunned down. Hundreds of
movies have been shot or set in part within the city of Santa Monica. One of the oldest exterior shots in Santa Monica
is Buster Keaton's Spite Marriage (1929) which shows much of 2nd Street. The comedy It's a Mad, Mad, Mad, Mad World
(1963) included several scenes shot in Santa Monica, including those along the California Incline, which led to the
movie's treasure spot, "The Big W". The Sylvester Stallone film Rocky III (1982) shows Rocky Balboa and Apollo Creed
training to fight Clubber Lang by running on the Santa Monica Beach, and Stallone's Demolition Man (1993) includes
Santa Monica settings. Henry Jaglom's indie Someone to Love (1987), the last film in which Orson Welles appeared,
takes place in Santa Monica's venerable Mayfair Theatre. Heathers (1989) used Santa Monica's John Adams Middle School
for many exterior shots. The Truth About Cats & Dogs (1996) is set entirely in Santa Monica, particularly the Palisades
Park area, and features a radio station that resembles KCRW at Santa Monica College. 17 Again (2009) was shot at
Santa Monica High School ("Samohi"). Other films that show significant exterior shots of Santa Monica include Fletch (1985), Species (1995), Get
Shorty (1995), and Ocean's Eleven (2001). Richard Rossi's biopic Aimee Semple McPherson opens and closes at the beach
in Santa Monica. Iron Man features the Santa Monica pier and surrounding communities as Tony Stark tests his experimental
flight suit. Santa Monica is featured in the video games True Crime: Streets of LA (2003), Vampire: The Masquerade
– Bloodlines (2004), Grand Theft Auto: San Andreas (2004) as a fictional district – Santa Maria Beach, Destroy All
Humans! (2004), Tony Hawk's American Wasteland (2005), L.A. Rush (2005), Midnight Club: Los Angeles (2008), Cars
Race-O-Rama (2009), Grand Theft Auto V (2013) as a fictional district – Del Perro, Call of Duty: Ghosts (2013) as
a fictional U.S. military base – Fort Santa Monica, The Crew (2014), Need for Speed (2015)

Washington University in St. Louis (Wash. U., or WUSTL) is a private research university located in St. Louis, Missouri,
United States. Founded in 1853, and named after George Washington, the university has students and faculty from all
50 U.S. states and more than 120 countries. Twenty-five Nobel laureates have been affiliated with Washington University,
nine having done the major part of their pioneering research at the university. Washington University's undergraduate
program is ranked 15th by U.S. News and World Report. The university is ranked 32nd in the world by the Academic
Ranking of World Universities. The university's first chancellor was Joseph Gibson Hoyt. Wayman Crow secured the university
charter from the Missouri General Assembly in 1853, and William Greenleaf Eliot was named President of the Board of Trustees. Early
on, Eliot solicited support from members of the local business community, including John O'Fallon, but Eliot failed
to secure a permanent endowment. Washington University is unusual among major American universities in not having
had a prior financial endowment. The institution had no backing of a religious organization, single wealthy patron,
or earmarked government support. During the three years following its inception, the university bore three different
names. The board first approved "Eliot Seminary," but William Eliot was uncomfortable with naming a university after
himself and objected to the establishment of a seminary, which would implicitly be charged with teaching a religious
faith. He favored a nonsectarian university. In 1854, the Board of Trustees changed the name to "Washington Institute"
in honor of George Washington. Naming the University after the nation's first president, only seven years before
the American Civil War and during a time of bitter national division, was no coincidence. During this time of conflict,
Americans universally admired George Washington as the father of the United States and a symbol of national unity.
The Board of Trustees believed that the university should be a force of unity in a strongly divided Missouri. In
1856, the University amended its name to "Washington University." The university amended its name once more in 1976,
when the Board of Trustees voted to add the suffix "in St. Louis" to distinguish the university from the nearly two
dozen other universities bearing Washington's name. Although chartered as a university, for many years Washington
University functioned primarily as a night school located on 17th Street and Washington Avenue in the heart of downtown
St. Louis. Owing to limited financial resources, Washington University initially used public buildings. Classes began
on October 22, 1854, at the Benton School building. At first the university paid for the evening classes, but as
their popularity grew, their funding was transferred to the St. Louis Public Schools. Eventually the board secured
funds for the construction of Academic Hall and a half dozen other buildings. Later the university divided into three
departments: the Manual Training School, Smith Academy, and the Mary Institute. In 1867, the university opened the
first private nonsectarian law school west of the Mississippi River. By 1882, Washington University had expanded
to numerous departments, which were housed in various buildings across St. Louis. Medical classes were first held
at Washington University in 1891 after the St. Louis Medical College decided to affiliate with the University, establishing
the School of Medicine. During the 1890s, Robert Sommers Brookings, the president of the Board of Trustees, undertook
the tasks of reorganizing the university's finances, putting them onto a sound foundation, and buying land for a
new campus. Washington University spent its first half century in downtown St. Louis bounded by Washington Ave.,
Lucas Place, and Locust Street. By the 1890s, owing to the dramatic expansion of the Manual School and a new benefactor
in Robert Brookings, the University began to move west. The University Board of Directors began a process to find
suitable ground and hired the landscape architecture firm Olmsted, Olmsted & Eliot of Boston. A committee of Robert
S. Brookings, Henry Ware Eliot, and William Huse found a site of 103 acres (41.7 ha) just beyond Forest Park, located
west of the city limits in St. Louis County. The elevation of the land was thought to resemble the Acropolis and
inspired the nickname of "Hilltop" campus, renamed the Danforth campus in 2006 to honor former chancellor William
H. Danforth. In 1899, the university opened a national design contest for the new campus. The renowned Philadelphia
firm Cope & Stewardson won unanimously with its plan for a row of Collegiate Gothic quadrangles inspired by Oxford
and Cambridge Universities. The cornerstone of the first building, Busch Hall, was laid on October 20, 1900. The
construction of Brookings Hall, Ridgley, and Cupples began shortly thereafter. The school delayed occupying these
buildings until 1905 to accommodate the 1904 World's Fair and Olympics. The delay allowed the university to construct
ten buildings instead of the seven originally planned. This original cluster of buildings set a precedent for the
development of the Danforth Campus; Cope & Stewardson’s original plan and its choice of building materials have,
with few exceptions, guided the construction and expansion of the Danforth Campus to the present day. After working
for many years at the University of Chicago, Arthur Holly Compton returned to St. Louis in 1946 to serve as Washington
University's ninth chancellor. Compton reestablished the Washington University football team, making the declaration
that athletics were to be henceforth played on a "strictly amateur" basis with no athletic scholarships. Under Compton’s
leadership, enrollment at the University grew dramatically, fueled primarily by World War II veterans' use of their
GI Bill benefits. The process of desegregation at Washington University began in 1947 with the School of Medicine
and the School of Social Work. During the mid and late 1940s, the University was the target of critical editorials
in the local African American press, letter-writing campaigns by churches and the local Urban League, and legal briefs
by the NAACP intended to strip its tax-exempt status. In spring 1949, a Washington University student group, the
Student Committee for the Admission of Negroes (SCAN), began campaigning for full racial integration. In May 1952,
the Board of Trustees passed a resolution desegregating the school's undergraduate divisions. During the latter half
of the 20th century, Washington University transitioned from a strong regional university to a national research
institution. In 1957, planning began for the construction of the “South 40,” a complex of modern residential halls.
With the additional on-campus housing, Washington University, which had been predominantly a “streetcar college”
of commuter students, began to attract a more national pool of applicants. By 1964, over two-thirds of incoming students
came from outside the St. Louis area. Washington University has been selected by the Commission on Presidential Debates
to host more presidential and vice-presidential debates than any other institution in history. United States presidential
election debates were held at the Washington University Athletic Complex in 1992, 2000, 2004, and 2016. A presidential
debate was planned to occur in 1996, but owing to scheduling difficulties between the candidates, the debate was
canceled. The university hosted the only 2008 vice presidential debate, between Republican Sarah Palin and Democrat
Joe Biden, on October 2, 2008, also at the Washington University Athletic Complex. Although Chancellor Wrighton had
noted after the 2004 debate that it would be "improbable" that the university would host another debate and was not
eager to commit to the possibility, he subsequently changed his view and the university submitted a bid for the 2008
debates. "These one-of-a-kind events are great experiences for our students, they contribute to a national understanding
of important issues, and they allow us to help bring national and international attention to the St. Louis region
as one of America's great metropolitan areas," said Wrighton. In 2013, Washington University received a record 30,117
applications for a freshman class of 1,500 with an acceptance rate of 13.7%. More than 90% of incoming freshmen whose
high schools reported class rank were in the top 10% of their classes. In 2006, the university ranked fourth
overall and second among private universities in the number of enrolled National Merit Scholar freshmen, according
to the National Merit Scholar Corporation's annual report. In 2008, Washington University was ranked #1 for quality
of life according to The Princeton Review, among other top rankings. In addition, the Olin Business School's undergraduate
program is among the top 4 in the country. The Olin Business School's undergraduate program is also among the country's
most competitive, admitting only 14% of applicants in 2007 and ranking #1 in SAT scores with an average composite
of 1492 M+CR according to BusinessWeek. Graduate schools include the School of Medicine, currently ranked sixth in
the nation, and the George Warren Brown School of Social Work, currently ranked first. The program in occupational
therapy at Washington University currently occupies the first spot for the 2016 U.S. News & World Report rankings,
and the program in physical therapy is ranked first as well. For the 2015 edition, the School of Law is ranked 18th
and the Olin Business School is ranked 19th. Additionally, the Graduate School of Architecture and Urban Design was
ranked ninth in the nation by the journal DesignIntelligence in its 2013 edition of "America's Best Architecture
& Design Schools." Washington University's North Campus and West Campus principally house administrative functions
that are not student-focused. North Campus lies in St. Louis City near the Delmar Loop. The University acquired the
building and adjacent property in 2004, formerly home to the Angelica Uniform Factory. Several University administrative
departments are located at the North Campus location, including offices for Quadrangle Housing, Accounting and Treasury
Services, Parking and Transportation Services, Army ROTC, and Network Technology Services. The North Campus location
also provides off-site storage space for the Performing Arts Department. Renovations are still ongoing; recent additions
to the North Campus space include a small eatery operated by Bon Appétit Management Company, the University's on-campus
food provider, completed during spring semester 2007, as well as the Family Learning Center, operated by Bright Horizons
and opened in September 2010. The West Campus is located about one mile (1.6 km) to the west of the Danforth Campus
in Clayton, Missouri, and primarily consists of a four-story former department store building housing mostly administrative
space. The West Campus building was home to the Clayton branch of the Famous-Barr department store until 1990, when
the University acquired the property and adjacent parking and began a series of renovations. Today, the basement
level houses the West Campus Library, the University Archives, the Modern Graphic History Library, and conference
space. The ground level still remains a retail space. The upper floors house consolidated capital gifts, portions
of alumni and development, and information systems offices from across the Danforth and Medical School campuses.
There is also a music rehearsal room on the second floor. The West Campus is also home to the Center for the Application
of Information Technologies (CAIT), which provides IT training services. Tyson Research Center is a 2,000-acre (809
ha) field station located west of St. Louis on the Meramec River. Washington University obtained Tyson as surplus
property from the federal government in 1963. It is used by the University as a biological field station and research/education
center. In 2010 the Living Learning Center was named one of the first two buildings accredited nationwide as a "living
building" under the Living Building Challenge; it opened to serve as a biological research station and classroom for
summer students. Arts & Sciences at Washington University comprises three divisions: the College of Arts & Sciences,
the Graduate School of Arts & Sciences, and University College in Arts & Sciences. Barbara Schaal is Dean of the
Faculty of Arts & Sciences. James E. McLeod was the Vice Chancellor for Students and Dean of the College of Arts
& Sciences; according to a University news release, he died of renal failure at the University's Barnes-Jewish Hospital
on September 6, 2011, after a two-year struggle with cancer. Richard J. Smith is Dean
of the Graduate School of Arts & Sciences. Founded as the School of Commerce and Finance in 1917, the Olin Business
School was named after entrepreneur John M. Olin in 1988. The school's academic programs include BSBA, MBA, Professional
MBA (PMBA), Executive MBA (EMBA), MS in Finance, MS in Supply Chain Management, MS in Customer Analytics, Master
of Accounting, Global Master of Finance Dual Degree program, and Doctorate programs, as well as non-degree executive
education. In 2002, an Executive MBA program was established in Shanghai, in cooperation with Fudan University. Olin
has a network of more than 16,000 alumni worldwide. Over the last several years, the school’s endowment has increased
to $213 million (as of 2004), and annual gifts average $12 million.[citation needed] Simon Hall was opened in 1986
after a donation from John E. Simon. On May 2, 2014, the $90 million conjoined Knight and Bauer Halls were dedicated,
following a $15 million gift from Charles F. Knight and Joanne Knight and a $10 million gift from George and Carol
Bauer through the Bauer Foundation. Undergraduate BSBA students take 40–60% of their courses within the business
school and are able to formally declare majors in nine areas: accounting, entrepreneurship, finance, healthcare
management, marketing, managerial economics and strategy, organization and human resources, international business,
and operations and supply chain management. Graduate students are able to pursue an MBA either full-time or part-time.
Students may also take elective courses from other disciplines at Washington University, including law and many other
fields. Mahendra R. Gupta is the Dean of the Olin Business School. Washington University School of Law offers joint-degree
programs with the Olin Business School, the Graduate School of Arts and Sciences, the School of Medicine, and the
School of Social Work. It also offers an LLM in Intellectual Property and Technology Law, an LLM in Taxation, an
LLM in US Law for Foreign Lawyers, a Master of Juridical Studies (MJS), and a Juris Scientiae Doctoris (JSD). The
law school offers courses in three semesters (spring, summer, and fall) and requires at least 85 hours of coursework
for the JD. In the 2015 US News & World Report America's Best Graduate Schools, the law school is ranked 18th nationally,
out of over 180 law schools. In particular, its Clinical Education Program is currently ranked 4th in the nation.
The median LSAT score of the entering class placed the average student in the 96th percentile of test takers. The law school offers
a full-time day program, beginning in August, for the J.D. degree. The law school is located in a state-of-the-art
building, Anheuser-Busch Hall (opened in 1997). The building combines traditional architecture, a five-story open-stacks
library, an integration of indoor and outdoor spaces, and the latest wireless and other technologies. National Jurist
ranked Washington University 4th among the "25 Most Wired Law Schools." The Washington University School of Medicine,
founded in 1891, is highly regarded as one of the world's leading centers for medical research and training. The
School ranks first in the nation in student selectivity. Among its many recent initiatives, The Genome Center at
Washington University (directed by Richard K. Wilson) played a leading role in the Human Genome Project, having contributed
25% of the finished sequence. The School pioneered bedside teaching and led in the transformation of empirical knowledge
into scientific medicine. The medical school partners with St. Louis Children's Hospital and Barnes-Jewish Hospital
(part of BJC HealthCare), where all physicians are members of the school's faculty. With roots dating back to 1909
in the university's School of Social Economy, the George Warren Brown School of Social Work (commonly called the
Brown School or Brown) was founded in 1925. Brown's academic degree offerings include a Master of Social Work (MSW),
a Master of Public Health (MPH), a PhD in Social Work, and a PhD in Public Health Sciences. It is currently ranked
first among Master of Social Work programs in the United States. The school was endowed by Bettie Bofinger Brown
and named for her husband, George Warren Brown, a St. Louis philanthropist and co-founder of the Brown Shoe Company.
The school was the first in the country to have a building for the purpose of social work education, and it is also
a founding member of the Association of Schools and Programs of Public Health. The school is housed within Brown
and Goldfarb Halls, but a third building expansion is currently in progress and slated to be completed in summer
2015. The new building, adjacent to Brown and Goldfarb Halls, targets LEED Gold certification and will add approximately
105,000 square feet, more than doubling the school's teaching, research, and program space. The school has many nationally
and internationally acclaimed scholars in social security, health care, health disparities, communication, social
and health policy, and individual and family development. Many of the faculty have training in both social work and
public health. The school's current dean is Edward F. Lawlor. In addition to affiliation with the university-wide
Institute of Public Health, Brown houses 12 research centers. The Brown School Library collects materials on many
topics, with specific emphasis on: children, youth, and families; gerontology; health; mental health; social and
economic development; family therapy; and management. The library maintains subscriptions to over 450 academic journals.
The Mildred Lane Kemper Art Museum, established in 1881, is one of the oldest teaching museums in the country. The
collection includes works from 19th, 20th, and 21st century American and European artists, including George Caleb
Bingham, Thomas Cole, Pablo Picasso, Max Ernst, Alexander Calder, Jackson Pollock, Rembrandt, Robert Rauschenberg,
Barbara Kruger, and Christian Boltanski. Also in the complex is the 3,000 sq ft (300 m2) Newman Money Museum. In
October 2006, the Kemper Art Museum moved from its previous location, Steinberg Hall, into a new facility designed
by former faculty member Fumihiko Maki. The new Kemper Art Museum is located directly across from Steinberg Hall, which was Maki's first commission, in 1959. Virtually all faculty members at Washington University
engage in academic research,[citation needed] offering opportunities for both undergraduate and graduate students
across the university's seven schools. Known for its interdisciplinarity and departmental collaboration, many of
Washington University's research centers and institutes are collaborative efforts between many areas on campus.[citation
needed] More than 60% of undergraduates are involved in faculty research across all areas; it is an institutional
priority for undergraduates to be allowed to participate in advanced research. According to the Center for Measuring
University Performance, it ranks among the top 10 private research universities in the nation. A dedicated
Office of Undergraduate Research is located on the Danforth Campus and serves as a resource to post research opportunities,
advise students in finding appropriate positions matching their interests, publish undergraduate research journals,
and award research grants to make it financially possible to perform research. During fiscal year 2007, $537.5 million
was received in total research support, including $444 million in federal obligations. The University has over 150
National Institutes of Health funded inventions, with many of them licensed to private companies. Governmental agencies
and non-profit foundations such as the NIH, United States Department of Defense, National Science Foundation, and
NASA provide the majority of research grant funding, with Washington University being one of the top recipients in
NIH grants from year-to-year. Nearly 80% of NIH grants to institutions in the state of Missouri went to Washington
University alone in 2007. Washington University and its Medical School played a large part in the Human Genome Project, contributing approximately 25% of the finished sequence. The Genome Sequencing Center has decoded the genomes of many animals, plants, and cellular organisms, including the platypus, chimpanzee, cat, and corn. Washington University
has over 300 undergraduate student organizations on campus. Most are funded by the Washington University Student
Union, which has a $2 million plus annual budget that is completely student-controlled and is one of the largest
student government budgets in the country. Known as SU for short, the Student Union sponsors large-scale campus programs
including WILD (a semesterly concert in the quad) and free copies of the New York Times, USA Today, and the St. Louis
Post-Dispatch through The Collegiate Readership Program; it also contributes to the Assembly Series, a weekly lecture
series produced by the University, and funds the campus television station, WUTV, and the radio station, KWUR. KWUR
was named the best radio station in St. Louis for 2003 by the Riverfront Times, even though its signal reaches
only a few blocks beyond the boundaries of the campus. There are 11 fraternities and 9 sororities, with approximately
35% of the student body being involved in Greek life. The Congress of the South 40 (CS40) is a Residential Life and
Events Programming Board, which operates outside of the SU sphere. CS40's funding comes from the Housing Activities
Fee of each student living on the South 40. Washington University has a large number of student-run musical groups
on campus, including 12 official a cappella groups. The Pikers, an all-male group, is the oldest such group on campus.
The Greenleafs, an all-female group, is the oldest (and only) such group on campus. The Mosaic Whispers, founded
in 1991, is the oldest co-ed group on campus. They have produced 9 albums and have appeared on a number of compilation
albums, including Ben Folds' Ben Folds Presents: University A Cappella! The Amateurs, who also appeared on this album,
is another co-ed a cappella group on campus, founded in 1991. They have recorded seven albums and toured extensively.
After Dark is a co-ed a cappella group founded in 2001. It has released three albums and has won several Contemporary
A Capella Recording (CARA) awards. In 2008 the group performed on MSNBC during coverage of the vice presidential
debate with specially written songs about Joe Biden and Sarah Palin. The Ghost Lights, founded in 2010, is the campus's
newest and only Broadway, Movies, and Television soundtrack group. They have performed multiple philanthropic concerts
in the greater St. Louis area and were honored in November 2010 with the opportunity to perform for Nobel Laureate
Douglass North at his birthday celebration. Over 50% of undergraduate students live on campus. Most of the residence
halls on campus are located on the South 40, named because of its adjacent location on the south side of the Danforth
Campus and its size of 40 acres (160,000 m2). It is the location of all the freshman buildings as well as several
upperclassman buildings, which are set up in the traditional residential college system. All of the residential halls
are co-ed. The South 40 is organized as a pedestrian-friendly environment wherein residences surround a central recreational
lawn known as the Swamp. Bear's Den (the largest dining hall on campus), the Habif Health and Wellness Center (Student
Health Services), the Residential Life Office, University Police Headquarters, various student-owned businesses (e.g.
the laundry service, Wash U Wash), and the baseball, softball, and intramural fields are also located on the South
40. Another group of residences, known as the Village, is located in the northwest corner of the Danforth Campus. Open only to upperclassmen and January Scholars, the North Side consists of Millbrook Apartments, The Village, Village
East on-campus apartments, and all fraternity houses except the Zeta Beta Tau house, which is off campus and located
just northwest of the South 40. By their own choice, sororities at Washington University do not have houses. The Village
is a group of residences where students who have similar interests or academic goals apply as small groups of 4 to
24, known as BLOCs, to live together in clustered suites along with non-BLOCs. Like the South 40, the residences
around the Village also surround a recreational lawn. Washington University supports four major student-run media
outlets. The university's student newspaper, Student Life, is available for students. KWUR (90.3 FM) serves as the
students' official radio station; the station also attracts an audience in the immediately surrounding community
due to its eclectic and free-form musical programming. WUTV is the university's closed-circuit television channel.
The university's main student-run political publication is the Washington University Political Review (nicknamed
"WUPR"), a self-described "multipartisan" monthly magazine. Washington University undergraduates publish two literary
and art journals, The Eliot Review and Spires Intercollegiate Arts and Literary Magazine. A variety of other publications
also serve the university community, ranging from in-house academic journals to glossy alumni magazines to WUnderground,
campus' student-run satirical newspaper. Washington University's sports teams are called the Bears. They are members
of the National Collegiate Athletic Association and participate in the University Athletic Association at the Division
III level. The Bears have won 19 NCAA Division III Championships – one in women's cross country (2011), one in men's
tennis (2008), two in men's basketball (2008, 2009), five in women's basketball (1998–2001, 2010), and ten in women's
volleyball (1989, 1991–1996, 2003, 2007, 2009) – and 144 UAA titles in 15 different sports. The Athletic Department
is headed by John Schael who has served as director of athletics since 1978. The 2000 Division III Central Region
winner of the National Association of Collegiate Directors of Athletics/Continental Airlines Athletics Director of
the Year award, Schael has helped orchestrate the Bears athletics transformation into one of the top departments
in Division III.
Unlike the Federal Bureau of Investigation (FBI), which is a domestic security service, CIA has no law enforcement function
and is mainly focused on overseas intelligence gathering, with only limited domestic collection. Though it is not
the only U.S. government agency specializing in HUMINT, CIA serves as the national manager for coordination and deconfliction
of HUMINT activities across the entire intelligence community. Moreover, CIA is the only agency authorized by law
to carry out and oversee covert action on behalf of the President, unless the President determines that another agency
is better suited for carrying out such action. It can, for example, exert foreign political influence through its
tactical divisions, such as the Special Activities Division. The Executive Office also supports the U.S. military
by providing it with information it gathers, receiving information from military intelligence organizations, and
cooperating on field activities. The Executive Director is in charge of the day-to-day operation of the CIA, and
each branch of the service has its own Director. The Associate Director of military affairs, a senior military officer,
manages the relationship between the CIA and the Unified Combatant Commands, who produce regional/operational intelligence
and consume national intelligence. The Directorate of Analysis produces all-source intelligence analysis on key foreign issues. It has four regional analytic groups, six groups for transnational issues, and three that focus on policy, collection, and staff support. There is an office dedicated to Iraq, along with regional analytical offices: the Office of Near Eastern and South Asian Analysis, the Office of Russian and European Analysis, and the Office of Asian Pacific, Latin American, and African Analysis. The Directorate of Operations is responsible for collecting
foreign intelligence, mainly from clandestine HUMINT sources, and covert action. The name reflects its role as the
coordinator of human intelligence activities among other elements of the wider U.S. intelligence community with their
own HUMINT operations. This Directorate was created in an attempt to end years of rivalry over influence, philosophy
and budget between the United States Department of Defense (DOD) and the CIA. In spite of this, the Department of
Defense recently organized its own global clandestine intelligence service, the Defense Clandestine Service (DCS),
under the Defense Intelligence Agency (DIA). The CIA established its first training facility, the Office of Training
and Education, in 1950. Following the end of the Cold War, the CIA's training budget was slashed, which had a negative
effect on employee retention. In response, Director of Central Intelligence George Tenet established CIA University
in 2002. CIA University holds between 200 and 300 courses each year, training both new hires and experienced intelligence
officers, as well as CIA support staff. The facility works in partnership with the National Intelligence University,
and includes the Sherman Kent School for Intelligence Analysis, the Directorate of Analysis' component of the university.
Details of the overall United States intelligence budget are classified. Under the Central Intelligence Agency Act
of 1949, the Director of Central Intelligence is the only federal government employee who can spend "un-vouchered"
government money. The government has disclosed a total figure for all non-military intelligence spending since 2007;
the fiscal 2013 figure is $52.6 billion. According to the 2013 mass surveillance disclosures, the CIA's fiscal 2013
budget is $14.7 billion, 28% of the total and almost 50% more than the budget of the National Security Agency. CIA's
HUMINT budget is $2.3 billion, the SIGINT budget is $1.7 billion, and spending for security and logistics of CIA
missions is $2.5 billion. "Covert action programs", including a variety of activities such as the CIA's drone fleet
and anti-Iranian nuclear program activities, account for $2.6 billion. There have been numerous previous attempts to
obtain general information about the budget. As a result, it was revealed that CIA's annual budget in Fiscal Year
1963 was US $550 million (inflation-adjusted US$ 4.3 billion in 2016), and the overall intelligence budget in FY
1997 was US $26.6 billion (inflation-adjusted US$ 39.2 billion in 2016). There have been accidental disclosures;
for instance, Mary Margaret Graham, a former CIA official and deputy director of national intelligence for collection
in 2005, said that the annual intelligence budget was $44 billion, and in 1994 Congress accidentally published a
budget of $43.4 billion (in 2012 dollars) in 1994 for the non-military National Intelligence Program, including $4.8
billion for the CIA. After the Marshall Plan was approved, appropriating $13.7 billion over five years, 5% of those
funds or $685 million were made available to the CIA. The role and functions of the CIA are roughly equivalent to
those of the United Kingdom's Secret Intelligence Service (the SIS or MI6), the Australian Secret Intelligence Service
(ASIS), the Egyptian General Intelligence Service, the Russian Foreign Intelligence Service (Sluzhba Vneshney Razvedki)
(SVR), the Indian Research and Analysis Wing (RAW), the Pakistani Inter-Services Intelligence (ISI), the French foreign
intelligence service Direction Générale de la Sécurité Extérieure (DGSE) and Israel's Mossad. While the preceding
agencies both collect and analyze information, some, like the U.S. State Department's Bureau of Intelligence and Research,
are purely analytical agencies.[citation needed] The closest links of the U.S. IC to other foreign intelligence agencies
are to Anglophone countries: Australia, Canada, New Zealand, and the United Kingdom. There is a special communications
marking that signals that intelligence-related messages can be shared with these four countries. An indication of
the United States' close operational cooperation is the creation of a new message distribution label within the main
U.S. military communications network. Previously, the marking of NOFORN (i.e., No Foreign Nationals) required the
originator to specify which, if any, non-U.S. countries could receive the information. A new handling caveat, USA/AUS/CAN/GBR/NZL
Five Eyes, used primarily on intelligence messages, gives an easier way to indicate that the material can be shared
with Australia, Canada, United Kingdom, and New Zealand. The success of the British Commandos during World War II
prompted U.S. President Franklin D. Roosevelt to authorize the creation of an intelligence service modeled after
the British Secret Intelligence Service (MI6), and Special Operations Executive. This led to the creation of the
Office of Strategic Services (OSS). On September 20, 1945, shortly after the end of World War II, Harry S. Truman
signed an executive order dissolving the OSS, and by October 1945 its functions had been divided between the Departments
of State and War. The division lasted only a few months. The first public mention of the "Central Intelligence Agency"
appeared in a command-restructuring proposal presented by Jim Forrestal and Arthur Radford to the U.S. Senate Military
Affairs Committee at the end of 1945. Despite opposition from the military establishment, the United States Department
of State and the Federal Bureau of Investigation (FBI), Truman established the National Intelligence Authority in
January 1946, which was the direct predecessor of the CIA. Its operational extension was known as the Central Intelligence
Group (CIG). Lawrence Houston, head counsel of the SSU, CIG, and, later, the CIA, was a principal draftsman of the National Security Act of 1947, which dissolved the NIA and the CIG and established both the National Security Council and the Central Intelligence Agency. In 1949, Houston helped draft the Central Intelligence Agency Act (Public Law 81-110), which authorized the agency to use confidential fiscal and administrative procedures and exempted it from
most limitations on the use of Federal funds. It also exempted the CIA from having to disclose its "organization,
functions, officials, titles, salaries, or numbers of personnel employed." It created the program "PL-110", to handle
defectors and other "essential aliens" who fell outside normal immigration procedures. At the outset of the Korean
War the CIA still only had a few thousand employees, a thousand of whom worked in analysis. Intelligence primarily
came from the Office of Reports and Estimates, which drew its reports from a daily take of State Department telegrams,
military dispatches, and other public documents. The CIA still lacked its own intelligence gathering abilities. On
21 August 1950, shortly after the invasion of South Korea, Truman announced Walter Bedell Smith as the new Director
of the CIA to correct what was seen as a grave failure of Intelligence.[clarification needed] The CIA had different
demands placed on it by the different bodies overseeing it. Truman wanted a centralized group to organize the information
that reached him, the Department of Defense wanted military intelligence and covert action, and the State Department
wanted to create global political change favorable to the US. Thus the two areas of responsibility for the CIA were
covert action and covert intelligence. One of the main targets for intelligence gathering was the Soviet Union, which
had also been a priority of the CIA's predecessors. US army general Hoyt Vandenberg, the CIG's second director, created
the Office of Special Operations (OSO), as well as the Office of Reports and Estimates (ORE). Initially the OSO was
tasked with spying and subversion overseas with a budget of $15 million, the largesse of a small number of patrons
in Congress. Vandenberg's goals were much like those set out by his predecessor: finding out "everything about the Soviet forces in Eastern and Central Europe – their movements, their capabilities, and their intentions." This
task fell to the 228 overseas personnel covering Germany, Austria, Switzerland, Poland, Czechoslovakia, and Hungary.
On 18 June 1948, the National Security Council issued Directive 10/2 calling for covert action against the USSR,
and granting the authority to carry out covert operations against "hostile foreign states or groups" that could,
if needed, be denied by the U.S. government. To this end, the Office of Policy Coordination was created inside the
new CIA. The OPC was unique; Frank Wisner, its head, answered not to the CIA Director but to the
secretaries of defense, state, and the NSC, and the OPC's actions were a secret even from the head of the CIA. Most
CIA stations had two station chiefs, one working for the OSO, and one working for the OPC. The early track record
of the CIA was poor, with the agency unable to provide sufficient intelligence about the Soviet takeovers of Romania
and Czechoslovakia, the Soviet blockade of Berlin, and the Soviet atomic bomb project. In particular, the agency
failed to predict the Chinese entry into the Korean War with 300,000 troops. The famous double agent Kim Philby was
the British liaison to American Central Intelligence. Through him the CIA coordinated hundreds of airdrops inside
the Iron Curtain, all compromised by Philby. Arlington Hall, the nerve center of CIA cryptanalysis, was compromised by Bill Weisband, a Russian translator and Soviet spy. The CIA would reuse the tactic of parachuting agents behind enemy lines in China and North Korea; these operations, too, proved fruitless.
Pain is a distressing feeling often caused by intense or damaging stimuli, such as stubbing a toe, burning a finger, putting
alcohol on a cut, and bumping the "funny bone". Because it is a complex, subjective phenomenon, defining pain has
been a challenge. The International Association for the Study of Pain's widely used definition states: "Pain is an
unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms
of such damage." In medical diagnosis, pain is a symptom. Pain is the most common reason for physician consultation
in most developed countries. It is a major symptom in many medical conditions, and can interfere with a person's
quality of life and general functioning. Psychological factors such as social support, hypnotic suggestion, excitement,
or distraction can significantly affect pain's intensity or unpleasantness. In debates over physician-assisted suicide and euthanasia, pain has been cited as an argument for permitting terminally ill patients to end their lives.
In 1994, responding to the need for a more useful system for describing chronic pain, the International Association
for the Study of Pain (IASP) classified pain according to specific characteristics: (1) region of the body involved
(e.g. abdomen, lower limbs), (2) system whose dysfunction may be causing the pain (e.g., nervous, gastrointestinal),
(3) duration and pattern of occurrence, (4) intensity and time since onset, and (5) etiology. However, this system
has been criticized by Clifford J. Woolf and others as inadequate for guiding research and treatment. Woolf suggests
three classes of pain: (1) nociceptive pain; (2) inflammatory pain, which is associated with tissue damage and the infiltration of immune cells; and (3) pathological pain, which is a disease state caused by damage to the nervous system or by its abnormal function (e.g. fibromyalgia, irritable bowel syndrome, tension-type headache). Pain
is usually transitory, lasting only until the noxious stimulus is removed or the underlying damage or pathology has
healed, but some painful conditions, such as rheumatoid arthritis, peripheral neuropathy, cancer and idiopathic pain,
may persist for years. Pain that lasts a long time is called chronic or persistent, and pain that resolves quickly
is called acute. Traditionally, the distinction between acute and chronic pain has relied upon an arbitrary interval
of time from onset; the two most commonly used markers being 3 months and 6 months since the onset of pain, though
some theorists and researchers have placed the transition from acute to chronic pain at 12 months. Others apply
acute to pain that lasts less than 30 days, chronic to pain of more than six months' duration, and subacute to pain
that lasts from one to six months. A popular alternative definition of chronic pain, involving no arbitrarily fixed
durations, is "pain that extends beyond the expected period of healing". Chronic pain may be classified as cancer
pain or else as benign. Nociceptive pain is caused by stimulation of peripheral nerve fibers that respond to stimuli
approaching or exceeding harmful intensity (nociceptors), and may be classified according to the mode of noxious
stimulation. The most common categories are "thermal" (e.g. heat or cold), "mechanical" (e.g. crushing, tearing,
shearing, etc.) and "chemical" (e.g. iodine in a cut or chemicals released during inflammation). Some nociceptors
respond to more than one of these modalities and are consequently designated polymodal. Nociceptive pain may also
be divided into "visceral", "deep somatic" and "superficial somatic" pain. Visceral structures are highly sensitive
to stretch, ischemia and inflammation, but relatively insensitive to other stimuli that normally evoke pain in other
structures, such as burning and cutting. Visceral pain is diffuse, difficult to locate and often referred to a distant,
usually superficial, structure. It may be accompanied by nausea and vomiting and may be described as sickening, deep,
squeezing, and dull. Deep somatic pain is initiated by stimulation of nociceptors in ligaments, tendons, bones, blood
vessels, fasciae and muscles, and is dull, aching, poorly-localized pain. Examples include sprains and broken bones.
Superficial pain is initiated by activation of nociceptors in the skin or other superficial tissue, and is sharp,
well-defined and clearly located. Examples of injuries that produce superficial somatic pain include minor wounds
and minor (first degree) burns. The prevalence of phantom pain in upper limb amputees is nearly 82%, and in lower
limb amputees is 54%. One study found that eight days after amputation, 72% of patients had phantom limb pain, and six months later, 65% reported it. Some amputees experience continuous pain that varies in intensity or
quality; others experience several bouts a day, or it may occur only once every week or two. It is often described
as shooting, crushing, burning or cramping. If the pain is continuous for a long period, parts of the intact body
may become sensitized, so that touching them evokes pain in the phantom limb, or phantom limb pain may accompany
urination or defecation. Local anesthetic injections into the nerves or sensitive areas of the stump may relieve
pain for days, weeks, or sometimes permanently, despite the drug wearing off in a matter of hours; and small injections
of hypertonic saline into the soft tissue between vertebrae produce local pain that radiates into the phantom limb
for ten minutes or so and may be followed by hours, weeks or even longer of partial or total relief from phantom
pain. Vigorous vibration or electrical stimulation of the stump, or current from electrodes surgically implanted
onto the spinal cord, all produce relief in some patients. Paraplegia, the loss of sensation and voluntary motor
control after serious spinal cord damage, may be accompanied by girdle pain at the level of the spinal cord damage,
visceral pain evoked by a filling bladder or bowel, or, in five to ten per cent of paraplegics, phantom body pain
in areas of complete sensory loss. This phantom body pain is initially described as burning or tingling but may evolve
into severe crushing or pinching pain, or the sensation of fire running down the legs or of a knife twisting in the
flesh. Onset may be immediate or may not occur until years after the disabling injury. Surgical treatment rarely
provides lasting relief. People with long-term pain frequently display psychological disturbance, with elevated scores
on the Minnesota Multiphasic Personality Inventory scales of hysteria, depression and hypochondriasis (the "neurotic
triad"). Some investigators have argued that it is this neuroticism that causes acute pain to turn chronic, but clinical
evidence points the other way, to chronic pain causing neuroticism. When long-term pain is relieved by therapeutic
intervention, scores on the neurotic triad and anxiety fall, often to normal levels. Self-esteem, often low in chronic
pain patients, also shows improvement once pain has resolved. Breakthrough pain is transitory acute pain that comes
on suddenly and is not alleviated by the patient's normal pain management. It is common in cancer patients who often
have background pain that is generally well-controlled by medications, but who also sometimes experience bouts of
severe pain that from time to time "breaks through" the medication. The characteristics of breakthrough cancer pain
vary from person to person and according to the cause. Management of breakthrough pain can entail intensive use of
opioids, including fentanyl. Although unpleasantness is an essential part of the IASP definition of pain, it is possible
to induce a state described as intense pain devoid of unpleasantness in some patients, with morphine injection or
psychosurgery. Such patients report that they have pain but are not bothered by it; they recognize the sensation
of pain but suffer little, or not at all. Indifference to pain can also rarely be present from birth; these people
have normal nerves on medical investigations, and find pain unpleasant, but do not avoid repetition of the pain stimulus.
A much smaller number of people are insensitive to pain due to an inborn abnormality of the nervous system, known
as "congenital insensitivity to pain". Children with this condition incur carelessly repeated damage to their tongues,
eyes, joints, skin, and muscles. Some die before adulthood, and others have a reduced life expectancy.[citation needed]
Most people with congenital insensitivity to pain have one of five hereditary sensory and autonomic neuropathies
(which includes familial dysautonomia and congenital insensitivity to pain with anhidrosis). These conditions feature
decreased sensitivity to pain together with other neurological abnormalities, particularly of the autonomic nervous
system. A very rare syndrome with isolated congenital insensitivity to pain has been linked with mutations in the
SCN9A gene, which codes for a sodium channel (Nav1.7) necessary in conducting pain nerve stimuli. In 1644, René Descartes
theorized that pain was a disturbance that passed down along nerve fibers until the disturbance reached the brain,
a development that transformed the perception of pain from a spiritual, mystical experience to a physical, mechanical
sensation[citation needed]. Descartes's work, along with Avicenna's, prefigured the 19th-century development of specificity
theory. Specificity theory saw pain as "a specific sensation, with its own sensory apparatus independent of touch
and other senses". Another theory that came to prominence in the 18th and 19th centuries was intensive theory, which
conceived of pain not as a unique sensory modality, but an emotional state produced by stronger than normal stimuli
such as intense light, pressure or temperature. By the mid-1890s, specificity theory was backed mostly by physiologists
and physicians, and the intensive theory was mostly backed by psychologists. However, after a series of clinical
observations by Henry Head and experiments by Max von Frey, the psychologists migrated to specificity almost en masse,
and by century's end, most textbooks on physiology and psychology were presenting pain specificity as fact. In 1955,
D. C. Sinclair and G. Weddell developed peripheral pattern theory, based on a 1934 suggestion by John Paul Nafe. They
proposed that all skin fiber endings (with the exception of those innervating hair cells) are identical, and that
pain is produced by intense stimulation of these fibers. Another 20th-century theory was gate control theory, introduced
by Ronald Melzack and Patrick Wall in the 1965 Science article "Pain Mechanisms: A New Theory". The authors proposed
that both thin (pain) and large diameter (touch, pressure, vibration) nerve fibers carry information from the site
of injury to two destinations in the dorsal horn of the spinal cord, and that the more large fiber activity relative
to thin fiber activity at the inhibitory cell, the less pain is felt. Both peripheral pattern theory and gate control
theory have been superseded by more modern theories of pain[citation needed]. In 1968 Ronald Melzack and Kenneth
Casey described pain in terms of its three dimensions: "sensory-discriminative" (sense of the intensity, location,
quality and duration of the pain), "affective-motivational" (unpleasantness and urge to escape the unpleasantness),
and "cognitive-evaluative" (cognitions such as appraisal, cultural values, distraction and hypnotic suggestion).
They theorized that pain intensity (the sensory discriminative dimension) and unpleasantness (the affective-motivational
dimension) are not simply determined by the magnitude of the painful stimulus, but "higher" cognitive activities
can influence perceived intensity and unpleasantness. Cognitive activities "may affect both sensory and affective
experience or they may modify primarily the affective-motivational dimension. Thus, excitement in games or war appears
to block both dimensions of pain, while suggestion and placebos may modulate the affective-motivational dimension
and leave the sensory-discriminative dimension relatively undisturbed." (p. 432) The paper ends with a call to action:
"Pain can be treated not only by trying to cut down the sensory input by anesthetic block, surgical intervention
and the like, but also by influencing the motivational-affective and cognitive factors as well." (p. 435) Wilhelm
Erb's (1874) "intensive" theory, that a pain signal can be generated by intense enough stimulation of any sensory
receptor, has been soundly disproved. Some sensory fibers do not differentiate between noxious and non-noxious stimuli,
while others, nociceptors, respond only to noxious, high intensity stimuli. At the peripheral end of the nociceptor,
noxious stimuli generate currents that, above a given threshold, begin to send signals along the nerve fiber to the
spinal cord. The "specificity" (whether it responds to thermal, chemical or mechanical features of its environment)
of a nociceptor is determined by which ion channels it expresses at its peripheral end. Dozens of different types
of nociceptor ion channels have so far been identified, and their exact functions are still being determined. The
pain signal travels from the periphery to the spinal cord along an A-delta or C fiber. Because the A-delta fiber
is thicker than the C fiber, and is thinly sheathed in an electrically insulating material (myelin), it carries its
signal faster (5–30 m/s) than the unmyelinated C fiber (0.5–2 m/s). Pain evoked by the (faster) A-delta fibers is
described as sharp and is felt first. This is followed by a duller pain, often described as burning, carried by the
C fibers. These first order neurons enter the spinal cord via Lissauer's tract. Spinal cord fibers dedicated to carrying
A-delta fiber pain signals, and others that carry both A-delta and C fiber pain signals up the spinal cord to the
thalamus in the brain have been identified. Other spinal cord fibers, known as wide dynamic range neurons, respond
to A-delta and C fibers, but also to the large A-beta fibers that carry touch, pressure and vibration signals. Pain-related
activity in the thalamus spreads to the insular cortex (thought to embody, among other things, the feeling that distinguishes
pain from other homeostatic emotions such as itch and nausea) and anterior cingulate cortex (thought to embody, among
other things, the motivational element of pain); and pain that is distinctly located also activates the primary and
secondary somatosensory cortices. Melzack and Casey's 1968 picture of the dimensions of pain is as influential today
as ever, firmly framing theory and guiding research in the functional neuroanatomy and psychology of pain. In his
book, The Greatest Show on Earth: The Evidence for Evolution, biologist Richard Dawkins grapples with the question
of why pain has to be so very painful. He describes the alternative as a simple, mental raising of a "red flag".
To argue why that red flag might be insufficient, Dawkins explains that drives must compete with each other within
living beings. The most fit creature would be the one whose pains are well balanced. Those pains which mean certain
death when ignored will become the most powerfully felt. The relative intensities of pain, then, may resemble the
relative importance of that risk to our ancestors (lack of food, too much cold, or serious injuries are felt as agony,
whereas minor damage is felt as mere discomfort). This resemblance will not be perfect, however, because natural
selection can be a poor designer. The result is often glitches in animals, including supernormal stimuli. Such glitches
help explain pains which are not, or at least no longer directly adaptive (e.g. perhaps some forms of toothache,
or injury to fingernails). Differences in pain perception and tolerance thresholds are associated with, among other
factors, ethnicity, genetics, and sex. People of Mediterranean origin report as painful some radiant heat intensities
that northern Europeans describe as nonpainful. And Italian women tolerate less intense electric shock than Jewish
or Native American women. Some individuals in all cultures have significantly higher than normal pain perception
and tolerance thresholds. For instance, patients who experience painless heart attacks have higher pain thresholds
for electric shock, muscle cramp and heat. A person's self-report is the most reliable measure of pain, with health
care professionals tending to underestimate severity. A definition of pain widely employed in nursing, emphasizing
its subjective nature and the importance of believing patient reports, was introduced by Margo McCaffery in 1968:
"Pain is whatever the experiencing person says it is, existing whenever he says it does". To assess intensity, the
patient may be asked to locate their pain on a scale of 0 to 10, with 0 being no pain at all, and 10 the worst pain
they have ever felt. Quality can be established by having the patient complete the McGill Pain Questionnaire indicating
which words best describe their pain. The Multidimensional Pain Inventory (MPI) is a questionnaire designed to assess
the psychosocial state of a person with chronic pain. Analysis of MPI results by Turk and Rudy (1988) found three
classes of chronic pain patient: "(a) dysfunctional, people who perceived the severity of their pain to be high,
reported that pain interfered with much of their lives, reported a higher degree of psychological distress caused
by pain, and reported low levels of activity; (b) interpersonally distressed, people with a common perception that
significant others were not very supportive of their pain problems; and (c) adaptive copers, patients who reported
high levels of social support, relatively low levels of pain and perceived interference, and relatively high levels
of activity." Combining the MPI characterization of the person with their IASP five-category pain profile is recommended
for deriving the most useful case description. When a person is non-verbal and cannot self-report pain, observation
becomes critical, and specific behaviors can be monitored as pain indicators. Behaviors such as facial grimacing
and guarding can indicate pain, as can an increase or decrease in vocalizations, changes in routine behavior patterns
and mental status changes. Patients experiencing pain may exhibit withdrawn social behavior and possibly experience
a decreased appetite and decreased nutritional intake. A change in condition that deviates from baseline such as
moaning with movement or when manipulating a body part, and limited range of motion are also potential pain indicators.
In patients who possess language but are incapable of expressing themselves effectively, such as those with dementia,
an increase in confusion or display of aggressive behaviors or agitation may signal that discomfort exists, and further
assessment is necessary. The experience of pain has many cultural dimensions. For instance, the way in which one
experiences and responds to pain is related to sociocultural characteristics, such as gender, ethnicity, and age.
An aging adult may not respond to pain in the way that a younger person would. Their ability to recognize pain may
be blunted by illness or the use of multiple prescription drugs. Depression may also keep the older adult from reporting
they are in pain. The older adult may also quit doing activities they love because it hurts too much. Decline in
self-care activities (dressing, grooming, walking, etc.) may also be indicators that the older adult is experiencing
pain. The older adult may refrain from reporting pain because they are afraid they will have to have surgery or will
be put on a drug they might become addicted to. They may not want others to see them as weak, or may feel there is
something impolite or shameful in complaining about pain, or they may feel the pain is deserved punishment for past
transgressions. Cultural barriers can also keep a person from telling someone they are in pain. Religious beliefs
may prevent the individual from seeking help. They may feel certain pain treatment is against their religion. They
may not report pain because they feel it is a sign that death is near. Many people fear the stigma of addiction and
avoid pain treatment so as not to be prescribed potentially addicting drugs. In some Asian cultures, admitting pain
and needing help is seen as losing respect in society, and pain is expected to be borne in silence, while people in
other cultures feel they should report pain right away and get immediate relief. Gender can also be a factor in reporting
pain. Sexual differences can be the result of social and cultural expectations, with women expected to be emotional
and show pain and men stoic, keeping pain to themselves. The International Association for the Study of Pain advocates
that the relief of pain should be recognized as a human right, that chronic pain should be considered a disease in
its own right, and that pain medicine should have the full status of a specialty. It is a specialty only in China
and Australia at this time. Elsewhere, pain medicine is a subspecialty under disciplines such as anesthesiology,
physiatry, neurology, palliative medicine and psychiatry. In 2011, Human Rights Watch alerted that tens of millions
of people worldwide are still denied access to inexpensive medications for severe pain. Sugar taken orally reduces
the total crying time but not the duration of the first cry in newborns undergoing a painful procedure (a single
lancing of the heel). It does not moderate the effect of pain on heart rate and a recent single study found that
sugar did not significantly affect pain-related electrical activity in the brains of newborns one second after the
heel lance procedure. Sweet oral liquid moderately reduces the incidence and duration of crying caused by immunization
injection in children between one and twelve months of age. A number of meta-analyses have found clinical hypnosis
to be effective in controlling pain associated with diagnostic and surgical procedures in both adults and children,
as well as pain associated with cancer and childbirth. A 2007 review of 13 studies found evidence for the efficacy
of hypnosis in the reduction of chronic pain in some conditions, though the number of patients enrolled in the studies
was low, bringing up issues of power to detect group differences, and most lacked credible controls for placebo and/or
expectation. The authors concluded that "although the findings provide support for the general applicability of hypnosis
in the treatment of chronic pain, considerably more research will be needed to fully determine the effects of hypnosis
for different chronic-pain conditions." Pain is the most common reason for people to use complementary and alternative
medicine. An analysis of the 13 highest quality studies of pain treatment with acupuncture, published in January
2009, concluded there is little difference in the effect of real, sham and no acupuncture. However, other reviews
have found benefit. Additionally, there is tentative evidence for a few herbal medicines. There is interest in the
relationship between vitamin D and pain, but the evidence so far from controlled trials for such a relationship,
other than in osteomalacia, is unconvincing. Physical pain is an important political topic in relation to various
issues, including pain management policy, drug control, animal rights or animal welfare, torture, and pain compliance.
In various contexts, the deliberate infliction of pain in the form of corporal punishment is used as retribution
for an offence, or for the purpose of disciplining or reforming a wrongdoer, or to deter attitudes or behaviour deemed
unacceptable. In some cultures, extreme practices such as mortification of the flesh or painful rites of passage
are highly regarded. The most reliable method for assessing pain in most humans is by asking a question: a person
may report pain that cannot be detected by any known physiological measure. However, like infants (Latin infans meaning
"unable to speak"), animals cannot answer questions about whether they feel pain; thus the defining criterion for
pain in humans cannot be applied to them. Philosophers and scientists have responded to this difficulty in a variety
of ways. René Descartes for example argued that animals lack consciousness and therefore do not experience pain and
suffering in the way that humans do. Bernard Rollin of Colorado State University, the principal author of two U.S.
federal laws regulating pain relief for animals, writes that researchers remained unsure into the 1980s as to whether
animals experience pain, and that veterinarians trained in the U.S. before 1989 were simply taught to ignore animal
pain. In his interactions with scientists and other veterinarians, he was regularly asked to "prove" that animals
are conscious, and to provide "scientifically acceptable" grounds for claiming that they feel pain. Carbone writes
that the view that animals feel pain differently is now a minority view. Academic reviews of the topic are more equivocal,
noting that although the argument that animals have at least simple conscious thoughts and feelings has strong support,
some critics continue to question how reliably animal mental states can be determined. The ability of invertebrate
species of animals, such as insects, to feel pain and suffering is also unclear. The presence of pain in an animal
cannot be known for certain, but it can be inferred through physical and behavioral reactions. Specialists currently
believe that all vertebrates can feel pain, and that certain invertebrates, like the octopus, might too. As for other
animals, plants, or other entities, their ability to feel physical pain is at present a question beyond scientific
reach, since no mechanism is known by which they could have such a feeling. In particular, there are no known nociceptors
in groups such as plants, fungi, and most insects; the fruit fly is a notable exception.
A database management system (DBMS) is a computer software application that interacts with the user, other applications,
and the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition,
creation, querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, Microsoft
SQL Server, Oracle, Sybase, SAP HANA, and IBM DB2. A database is not generally portable across different DBMSs, but
different DBMSs can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to
work with more than one DBMS. Database management systems are often classified according to the database model that
they support; the most popular database systems since the 1980s have all supported the relational model as represented
by the SQL language.[disputed – discuss] Sometimes a DBMS is loosely referred to as a 'database'. Formally, a "database"
refers to a set of related data and the way it is organized. Access to these data is usually provided by a "database
management system" (DBMS) consisting of an integrated set of computer software that allows users to interact with
one or more databases and provides access to all of the data contained in the database (although restrictions may
exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval
of large quantities of information and provides ways to manage how that information is organized. Physically, database
servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database
servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage.
RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to one or more
servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found
at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in
networking support, but modern DBMSs typically rely on a standard operating system to provide these functions. Data
was retrieved from databases before the inception of Structured Query Language (SQL), but what was recovered was disparate,
redundant and disorderly, since there was no proper method to fetch it and arrange it in a concrete structure.[citation needed]
A DBMS has evolved into a complex software system and its development typically requires thousands of human years
of development effort.[a] Some general-purpose DBMSs such as Adabas, Oracle and DB2 have been undergoing upgrades
since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the
complexity. However, the fact that their development cost can be spread over a large number of users means that they
are often the most cost-effective approach. However, a general-purpose DBMS is not always the optimal solution: in
some cases a general-purpose DBMS may introduce unnecessary overhead. Therefore, there are many examples of systems
that use special-purpose databases. A common example is an email system that performs many of the functions of a
general-purpose DBMS such as the insertion and deletion of messages composed of various items of data or associating
messages with a particular email address; but these functions are limited to what is required to handle email and
don't provide the user with all of the functionality that would be available using a general-purpose DBMS. Many other
databases have application software that accesses the database on behalf of end-users, without exposing the DBMS
interface directly. Application programmers may use a wire protocol directly, or more likely through an application
programming interface. Database designers and database administrators interact with the DBMS through dedicated interfaces
to build and maintain the applications' databases, and thus need some more knowledge and understanding about how
DBMSs operate and the DBMSs' external interfaces and tuning parameters. The relational model, first proposed in 1970
by Edgar F. Codd, departed from the navigational tradition by insisting that applications should search for data by content,
rather than by following links. The relational model employs sets of ledger-style tables, each used for a different
type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment
of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all
large-scale data processing applications, and as of 2015[update] they remain dominant: IBM DB2, Oracle, MySQL and
Microsoft SQL Server are the top DBMS. The dominant database language, standardised SQL for the relational model,
has influenced database languages for other data models.[citation needed] As computers grew in speed and capability,
a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial
use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store
(IDS), founded the "Database Task Group" within CODASYL, the group responsible for the creation and standardization
of COBOL. In 1971 the Database Task Group delivered their standard, which generally became known as the "CODASYL
approach", and soon a number of commercial products based on this approach entered the market. IBM also had their
own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the
Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for
its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational
databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The Programmer as Navigator.
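The "navigational" style of access described above can be sketched in a few lines; the records and field names here are hypothetical illustrations, not drawn from any actual CODASYL or IMS system. A program reaches related records by following explicit links from one record to the next, rather than by searching on content.

```python
# A toy sketch of navigational access (hypothetical data): each record
# holds explicit links to related records, as in CODASYL-style network
# or IMS-style hierarchical databases.
records = {
    "dept:sales": {"name": "Sales", "first_employee": "emp:1"},
    "emp:1": {"name": "Ada", "next": "emp:2"},
    "emp:2": {"name": "Grace", "next": None},
}

def employees_of(dept_key):
    # Navigate the chain of links rather than searching by content.
    names = []
    link = records[dept_key]["first_employee"]
    while link is not None:
        rec = records[link]
        names.append(rec["name"])
        link = rec["next"]
    return names

print(employees_of("dept:sales"))   # ['Ada', 'Grace']
```

The point of the sketch is that the application, not the database, encodes the access path: adding a new way to reach employees would mean rewriting the traversal code, which is the inflexibility the relational model later addressed.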
IMS is classified[by whom?] as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as
network databases. IMS remains in use as of 2014[update]. In his 1970 paper, Codd described a new system for storing and
working with large databases. Instead of records being stored in some sort of linked list of free-form records as
in CODASYL, Codd's idea was to use a "table" of fixed-length records, with each table used for a different type of
entity. A linked-list system would be very inefficient when storing "sparse" databases where some of the data for
any one record could be left empty. The relational model solved this by splitting the data into a series of normalized
tables (or relations), with optional elements being moved out of the main table to where they would take up room
only if needed. Data may be freely inserted, deleted and edited in these tables, with the DBMS doing whatever maintenance
needed to present a table view to the application/user. The relational model also allowed the content of the database
to evolve without constant rewriting of links and pointers. The relational part comes from entities referencing other
entities in what is known as one-to-many relationship, like a traditional hierarchical model, and many-to-many relationship,
like a navigational (network) model. Thus, a relational model can express both hierarchical and navigational models,
as well as its native tabular model, allowing for pure or combined modeling in terms of these three models, as the
application requires. For instance, a common use of a database system is to track information about users, their
name, login information, various addresses and phone numbers. In the navigational approach all of this data would
be placed in a single record, and unused items would simply not be placed in the database. In the relational approach,
the data would be normalized into a user table, an address table and a phone number table (for instance). Records
would be created in these optional tables only if the address or phone numbers were actually provided. Linking the
information back together is the key to this system. In the relational model, some piece of information is used as
a "key", uniquely defining a particular record. When information is being collected about a user, information stored
in the optional tables would be found by searching for this key. For instance, if the login name of a user is unique,
addresses and phone numbers for that user would be recorded with the login name as its key. This simple "re-linking"
of related data back into a single collection is something that traditional computer languages are not designed for.
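The user/address/phone normalization and key-based "re-linking" described above can be sketched with SQL, run here through Python's sqlite3 module; the table and column names are illustrative inventions, not from the original text.

```python
import sqlite3

# Illustrative normalized schema (names are hypothetical): a user table
# plus optional address and phone tables, each keyed by the unique login.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE user    (login TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE address (login TEXT REFERENCES user(login), street TEXT);
    CREATE TABLE phone   (login TEXT REFERENCES user(login), number TEXT);
""")
con.execute("INSERT INTO user VALUES ('ewong', 'Eugene')")
con.execute("INSERT INTO address VALUES ('ewong', '1 Main St')")
# No phone row is created: optional data simply takes up no room.

# "Re-linking" the related data back together is a single set-oriented
# query on the key, not a hand-written loop over records.
rows = con.execute("""
    SELECT u.name, a.street, p.number
    FROM user u
    LEFT JOIN address a ON a.login = u.login
    LEFT JOIN phone   p ON p.login = u.login
""").fetchall()
print(rows)   # [('Eugene', '1 Main St', None)]
```

Note how the missing phone number surfaces as NULL in the joined result rather than as an empty slot reserved in every record, which is the space saving the relational model offered over linked-list storage of sparse data.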
Just as the navigational approach would require programs to loop in order to collect records, the relational approach
would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented
language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known as tuple calculus,
he demonstrated that such a system could support all the operations of normal databases (inserting, updating etc.)
as well as providing a simple system for finding and returning sets of data in a single operation. Codd's paper was
picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES
using funding that had already been allocated for a geographical database project and student programmers to produce
code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in
1979. INGRES was similar to IBM's System R in a number of ways, including the use of a "language" for data access, known
as QUEL. Over time, INGRES moved to the emerging SQL standard. Another approach to hardware support for database
management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long
term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the
rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems
running on general-purpose hardware, using general-purpose computer data storage. However this idea is still pursued
for certain applications by some companies like Netezza and Oracle (Exadata). IBM started working on a prototype
system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and
work then started on multi-table systems in which the data could be split so that all of the data for a record (some
of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested
by customers in 1978 and 1979, by which time a standardized query language – SQL[citation needed] – had been added.
Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true
production version of System R, known as SQL/DS, and, later, Database 2 (DB2). The 1980s ushered in the age of desktop
computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like
dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff,
the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of
the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user
can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing
files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
The 1990s, along with a rise in object-oriented programming, saw a shift in how data in various databases was handled.
Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's
data were in a database, that person's attributes, such as their address, phone number, and age, were now considered
to belong to that person instead of being extraneous data. This allows relations between data to be expressed as relations
between objects and their attributes, rather than between individual fields. The term "object-relational impedance mismatch" described
the inconvenience of translating between programmed objects and database tables. Object databases and object-relational
databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL)
that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object-relational
mappings (ORMs) attempt to solve the same problem.
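A minimal sketch of what such a mapping layer does, flattening an object into a row and rebuilding the object from a row; this is a toy illustration, not the API of any real ORM:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:                      # the "object" side of the mapping
    name: str
    phone: str
    age: int

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (name TEXT, phone TEXT, age INTEGER)")

def save(p: Person) -> None:
    # object -> row: the mapping layer flattens attributes into columns
    con.execute("INSERT INTO person VALUES (?, ?, ?)", (p.name, p.phone, p.age))

def load(name: str) -> Person:
    # row -> object: columns become attributes belonging to the person
    row = con.execute(
        "SELECT name, phone, age FROM person WHERE name = ?", (name,)
    ).fetchone()
    return Person(*row)

save(Person("Ada", "555-0100", 36))
print(load("Ada"))  # Person(name='Ada', phone='555-0100', age=36)
```

The "impedance mismatch" is everything this toy glosses over: object identity, inheritance, and relationships between objects do not map cleanly onto flat rows.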
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes.
XML databases are mostly used in enterprise database management, where XML is being used as the machine-to-machine
data interoperability standard. XML database management systems include commercial software MarkLogic and Oracle
Berkeley DB XML, and the free-to-use Clusterpoint Distributed XML/JSON Database. All are enterprise database platforms that support industry-standard ACID-compliant transaction processing with strong consistency characteristics and a high level of database security. In recent years there has been high demand for massively distributed
databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system
to simultaneously provide consistency, availability and partition tolerance guarantees. A distributed system can
satisfy any two of these guarantees at the same time, but not all three. For that reason many NoSQL databases are
using what is called eventual consistency to provide both availability and partition tolerance guarantees with a
reduced level of data consistency.
The first task of a database designer is to produce a conceptual data model that
reflects the structure of the information to be held in the database. A common approach to this is to develop an
entity-relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling
Language. A successful data model will accurately reflect the possible state of the external world being modeled:
for example, if people can have more than one phone number, it will allow this information to be captured. Designing
a good conceptual data model requires a good understanding of the application domain; it typically involves asking
deep questions about the things of interest to an organisation, like "can a customer also be a supplier?", or "if
a product is sold with two different forms of packaging, are those the same product or different products?", or "if
a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers
to these questions establish definitions of the terminology used for entities (customers, products, flights, flight
segments) and their relationships and attributes. Having produced a conceptual data model that users are happy with,
the next stage is to translate this into a schema that implements the relevant data structures within the database.
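As a hedged illustration of this translation step (all entity and table names invented), the questions above can be answered directly in the schema, e.g. by modeling "customer" and "supplier" as roles of one party and multiple phone numbers as a separate table:

```python
import sqlite3

# Illustrative translation of a conceptual model into a schema.
# A single "party" entity answers "can a customer also be a supplier?"
# affirmatively, by modeling customer/supplier as roles, not separate entities.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE party    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customer (party_id INTEGER REFERENCES party(id));
CREATE TABLE supplier (party_id INTEGER REFERENCES party(id));
-- "people can have more than one phone number" becomes its own table
CREATE TABLE phone    (party_id INTEGER REFERENCES party(id), number TEXT);
""")
con.execute("INSERT INTO party VALUES (1, 'Acme')")
con.execute("INSERT INTO customer VALUES (1)")
con.execute("INSERT INTO supplier VALUES (1)")   # same party, both roles
con.execute("INSERT INTO phone VALUES (1, '555-0100')")
con.execute("INSERT INTO phone VALUES (1, '555-0101')")
n = con.execute("SELECT COUNT(*) FROM phone WHERE party_id = 1").fetchone()[0]
print(n)  # 2
```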
This process is often called logical database design, and the output is a logical data model expressed in the form
of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology,
the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The
terms data model and database model are often used interchangeably, but in this article we use data model for the
design of a specific database, and database model for the modelling notation used to express that design.) The final
stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the
like. This is often called physical database design. A key goal during this stage is data independence, meaning that
the decisions made for performance optimization purposes should be invisible to end-users and applications. Physical
design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access
patterns, and a deep understanding of the features offered by the chosen DBMS. While there is typically only one
conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external
views. This allows users to see database information in a more business-related way rather than from a technical,
processing viewpoint. For example, a financial department of a company needs the payment details of all employees
as part of the company's expenses, but does not need details about employees that are of interest to the human resources
department. Thus different departments need different views of the company's database. The conceptual view provides
a level of indirection between internal and external. On one hand it provides a common view of the database, independent
of different external view structures, and on the other hand it abstracts away details of how the data are stored
or managed (internal level). In principle every level, and even every external view, can be presented by a different
data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels
(e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation,
requires a different level of detail and uses its own types of data structures.
Database storage is the container
of the physical materialization of a database. It comprises the internal (physical) level in the database architecture.
It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures)
to reconstruct the conceptual level and external level from the internal level when needed. Putting data into permanent
storage is generally the responsibility of the database engine, a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often utilizing the operating system's file systems as intermediaries for storage layout), storage properties and configuration settings are extremely important for the efficient operation
of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its
database residing in several types of storage (e.g., memory and external storage). The database data and the additional
needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in
structures that look completely different from the way the data look in the conceptual and external levels, but in
ways that attempt to optimize, as far as possible, the reconstruction of these levels when needed by users and programs, as well as the computation of additional kinds of needed information from the data (e.g., when querying the database).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access
what information in the database. The information may comprise specific database objects (e.g., record types, specific
records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or
utilizing specific access paths to the former (e.g., using specific indexes or other data structures to access information).
Database access controls are set by personnel specially authorized by the database owner, using dedicated, protected DBMS security interfaces. Access may be managed directly on an individual basis, by assigning individuals and privileges to groups, or (in the most elaborate models) by assigning individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database.
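The assignment chain described here, individuals to groups or roles which are then granted entitlements, can be sketched with plain data structures; a toy model, not the interface of any particular DBMS:

```python
# Toy role-based access control: users belong to roles, roles hold entitlements.
roles = {
    "payroll_clerk": {"read:payroll"},
    "hr_staff":      {"read:work_history", "read:medical"},
}
user_roles = {"alice": {"payroll_clerk"}, "bob": {"hr_staff"}}

def allowed(user: str, entitlement: str) -> bool:
    # A request is granted if any of the user's roles carries the entitlement.
    return any(entitlement in roles[r] for r in user_roles.get(user, ()))

print(allowed("alice", "read:payroll"))   # True
print(allowed("alice", "read:medical"))   # False
```

Granting entitlements to roles rather than individuals means a change of duties is a one-line reassignment instead of an audit of every permission.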
Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example,
an employee database can contain all the data about an individual employee, but one group of users may be authorized
to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides
a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing
personal databases. Database transactions can be used to introduce some level of fault tolerance and data integrity
after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations
over a database (e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and other systems. Each transaction has well-defined boundaries in terms of which program/code executions are
included in that transaction (determined by the transaction's programmer via special transaction commands).
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations it is desirable to move (migrate) a database from one DBMS to another. The reasons are primarily economical (different
DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have
different capabilities). The migration involves the database's transformation from one DBMS type to another. The
transformation should maintain (if possible) the database related application (i.e., all related application programs)
intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation.
It may also be desirable to maintain some aspects of the internal level of the architecture. A complex or large database migration may be a complicated and costly (one-time) project in itself, which should be factored into the decision to migrate, even though tools may exist to help migration between specific DBMSs. Typically a
DBMS vendor provides tools to help importing databases from other popular DBMSs.
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., when the database is found corrupted due to a software error, or when it has been updated with erroneous data). To achieve this, a backup operation is performed occasionally or continuously, in which each desired database state (i.e., the values of its data and their embedding in the database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When a database administrator decides to bring the database back to a chosen state (e.g., by specifying a desired point in time), these backup files are used to restore that state.
Static analysis techniques for software verification can also be applied to query languages. In particular, the abstract interpretation framework has been extended to query languages for relational databases as a way to support sound approximation techniques. The semantics of a query language can be tuned according to suitable abstractions of the concrete domain of the data. The abstraction of a relational database system has many interesting applications, in particular for security purposes such as fine-grained access control and watermarking.
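As a toy illustration of such sound approximation (all names invented): abstract an integer column to an interval and evaluate a selection predicate on the interval; the abstract result safely over-approximates the concrete one.

```python
# Toy abstract interpretation of a selection query: the concrete domain is a
# list of integer values, the abstract domain an interval (lo, hi) covering them.
def alpha(values):                      # abstraction function
    return (min(values), max(values))

def select_gt_concrete(values, k):      # concrete query: WHERE v > k
    return [v for v in values if v > k]

def select_gt_abstract(interval, k):    # sound abstract counterpart (integers)
    lo, hi = interval
    if hi <= k:
        return None                     # result is definitely empty
    return (max(lo, k + 1), hi)         # over-approximates surviving values

ages = [23, 35, 61]
concrete = select_gt_concrete(ages, 40)         # [61]
abstract = select_gt_abstract(alpha(ages), 40)  # (41, 61)
# Soundness: every concrete survivor lies inside the abstract interval.
assert all(abstract[0] <= v <= abstract[1] for v in concrete)
```

Soundness here means the abstract answer may be loose but never misses a concrete result, which is what makes such analyses usable for access-control checks.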
Tucson (/ˈtuːsɒn/ or /tuːˈsɒn/) is a city and the county seat of Pima County, Arizona, United States, and home to the University
of Arizona. The 2010 United States Census put the population at 520,116, while the 2013 estimated population of the
entire Tucson metropolitan statistical area (MSA) was 996,544. The Tucson MSA forms part of the larger Tucson-Nogales
combined statistical area (CSA), with a total population of 980,263 as of the 2010 Census. Tucson is the second most populous city in Arizona behind Phoenix, both of which anchor the Arizona Sun Corridor. The city is located 108
miles (174 km) southeast of Phoenix and 60 mi (97 km) north of the U.S.-Mexico border. Tucson is the 33rd largest
city and the 59th largest metropolitan area in the United States. Roughly 150 Tucson companies are involved in the
design and manufacture of optics and optoelectronics systems, earning Tucson the nickname Optics Valley. Tucson was
probably first visited by Paleo-Indians, known to have been in southern Arizona about 12,000 years ago. Recent archaeological
excavations near the Santa Cruz River have located a village site dating from 2100 BC. The floodplain
of the Santa Cruz River was extensively farmed during the Early Agricultural period, circa 1200 BC to AD 150. These
people constructed irrigation canals and grew corn, beans, and other crops while gathering wild plants and hunting.
The Early Ceramic period occupation of Tucson saw the first extensive use of pottery vessels for cooking and storage.
The groups designated as the Hohokam lived in the area from AD 600 to 1450 and are known for their vast irrigation
canal systems and their red-on-brown pottery. Jesuit missionary Eusebio Francisco Kino visited the
Santa Cruz River valley in 1692, and founded the Mission San Xavier del Bac in 1700 about 7 mi (11 km) upstream from
the site of the settlement of Tucson. A separate Convento settlement was founded downstream along the Santa Cruz
River, near the base of what is now "A" Mountain. Hugo O'Conor, the founding father of the city of Tucson, Arizona, authorized the construction of a military fort at that location, Presidio San Agustín del Tucsón, on August 20, 1775
(near the present downtown Pima County Courthouse). During the Spanish period of the presidio, attacks such as the
Second Battle of Tucson were repeatedly mounted by Apaches. Eventually the town came to be called "Tucson" and became
a part of Sonora after Mexico gained independence from Spain in 1821. Arizona south of the Gila River was legally bought from Mexico in the Gadsden Purchase on June 8, 1854. Tucson thus became a part of the United States of America,
although the American military did not formally take over control until March 1856. In 1857 Tucson became a stage
station on the San Antonio-San Diego Mail Line and in 1858 became 3rd division headquarters of the Butterfield Overland
Mail until the line shut down in March 1861. The Overland Mail Corporation attempted to continue running; however, following the Bascom Affair, devastating Apache attacks on the stations and coaches ended operations in August 1861. From 1877 to 1878, the area suffered a rash of stagecoach robberies. Most notable were the two
holdups committed by masked road-agent William Whitney Brazelton. Brazelton held up two stages in the summer of 1878
near Point of Mountain Station, approximately 17 mi (27 km) northwest of Tucson. John Clum, of Tombstone, Arizona, fame was one of the passengers. Brazelton was eventually tracked down and killed on Monday, August 19, 1878, in a
mesquite bosque along the Santa Cruz River 3 miles (5 km) south of Tucson by Pima County Sheriff Charles A. Shibell
and his citizens' posse. Brazelton had been suspected of highway robbery not only in the Tucson area, but also in the Prescott region and the Silver City, New Mexico, area. Brazelton's crimes prompted John J. Valentine, Sr.
of Wells, Fargo & Co. to send special agent and future Pima County sheriff Bob Paul to investigate. Fort Lowell,
then east of Tucson, was established to help protect settlers from Apache attacks. In 1882, Frank Stilwell was implicated
in the murder of Morgan Earp by Cowboy Pete Spence's wife, Marietta, at the coroner's inquest on Morgan Earp's shooting.
The coroner's jury concluded that Spence, Stilwell, Frederick Bode, and Florentino "Indian Charlie" Cruz were the
prime suspects in the assassination of Morgan Earp. Deputy U.S. Marshal Wyatt Earp gathered a few trusted friends
and accompanied Virgil Earp and his family as they traveled to Benson for a train ride to California. They found
Stilwell lying in wait for Virgil in the Tucson station and killed him on the tracks. After killing Stilwell, Wyatt
deputized others and rode on a vendetta, killing three more cowboys over the next few days before leaving the state.
By 1900, 7,531 people lived in the city. The population increased gradually to 13,913 in 1910. At about this time,
the U.S. Veterans Administration had begun construction on the present Veterans Hospital. Many veterans who had been
gassed in World War I and were in need of respiratory therapy began coming to Tucson after the war, due to the clean
dry air. Over the following years the city continued to grow, with the population increasing to 20,292 in 1920 and
36,818 in 1940. In 2006 the population of Pima County, in which Tucson is located, passed one million while the City
of Tucson's population was 535,000. The city's elevation is 2,643 ft (806 m) above sea level (as measured at the
Tucson International Airport). Tucson is situated on an alluvial plain in the Sonoran desert, surrounded by five
minor ranges of mountains: the Santa Catalina Mountains and the Tortolita Mountains to the north, the Santa Rita
Mountains to the south, the Rincon Mountains to the east, and the Tucson Mountains to the west. The high point of
the Santa Catalina Mountains is 9,157 ft (2,791 m) Mount Lemmon, the southernmost ski destination in the continental
U.S., while the Tucson Mountains include 4,687 ft (1,429 m) Wasson Peak. The highest point in the area is Mount Wrightson,
found in the Santa Rita Mountains at 9,453 ft (2,881 m) above sea level. Tucson is located 118 mi (190 km) southeast
of Phoenix and 60 mi (97 km) north of the U.S.-Mexico border. The 2010 United States Census puts the city's
population at 520,116 with a metropolitan area population at 980,263. In 2009, Tucson ranked as the 32nd largest
city and 52nd largest metropolitan area in the United States. A major city in the Arizona Sun Corridor, Tucson is
the largest city in southern Arizona, the second largest in the state after Phoenix. It is also the largest city
in the area of the Gadsden Purchase. As of 2015, the greater Tucson metropolitan area had exceeded a population of one million.
Interstate 10, which runs southeast to northwest through town, connects Tucson to Phoenix to the northwest on the
way to its western terminus in Santa Monica, California, and to Las Cruces, New Mexico and El Paso, Texas toward
its eastern terminus in Jacksonville, Florida. I-19 runs south from Tucson toward Nogales and the U.S.-Mexico border.
I-19 is the only Interstate highway that uses "kilometer posts" instead of "mileposts", although the speed limits
are marked in miles per hour instead of kilometers per hour. At the end of the first decade of the 21st century,
downtown Tucson underwent a revitalization effort by city planners and the business community. The primary project
was Rio Nuevo, a large retail and community center that has been stalled in planning for more than ten years. Downtown
is generally regarded as the area bordered by 17th Street to the south, I-10 to the west, and 6th Street to the north,
and Toole Avenue and the Union Pacific (formerly Southern Pacific) railroad tracks, site of the historic train depot
and "Locomotive #1673", built in 1900. Downtown is divided into the Presidio District, the Barrio Viejo, and the
Congress Street Arts and Entertainment District. Some authorities include the 4th Avenue shopping district, which
is set just northeast of the rest of downtown and connected by an underpass beneath the UPRR tracks. As one of the
oldest parts of town, Central Tucson is anchored by the Broadway Village shopping center designed by local architect
Josias Joesler at the intersection of Broadway Boulevard and Country Club Road. The 4th Avenue Shopping District
between downtown and the University, and the Lost Barrio just east of downtown, also have many unique and popular
stores. Local retail business in Central Tucson is densely concentrated along Fourth Avenue and the Main Gate Square
on University Boulevard near the UA campus. The El Con Mall is also located in the eastern part of midtown. Tucson's
largest park, Reid Park, is located in midtown and includes Reid Park Zoo and Hi Corbett Field. Speedway Boulevard,
a major east-west arterial road in central Tucson, was named the "ugliest street in America" by Life magazine in
the early 1970s, quoting Tucson Mayor James Corbett. Despite this, Speedway Boulevard was awarded "Street of the
Year" by Arizona Highways in the late 1990s. According to David Leighton, historical writer for the Arizona Daily
Star newspaper, Speedway Boulevard derives its name from an old horse racetrack, known as "The Harlem River Speedway,"
more commonly called "The Speedway," in New York City. The street was called "The Speedway" from 1904 to about 1906, before the word "The" was dropped.
Central Tucson is bicycle-friendly. To the east of the University of Arizona,
Third Street is bike-only except for local traffic and passes by the historic homes of the Sam Hughes neighborhood.
To the west, E. University Boulevard leads to the Fourth Avenue Shopping District. To the North, N. Mountain Avenue
has a full bike-only lane for half of the 3.5 miles (5.6 km) to the Rillito River Park bike and walk multi-use path.
To the south, N. Highland Avenue leads to the Barraza-Aviation Parkway bicycle path. South Tucson is actually the
name of an independent, incorporated town of 1 sq mi (2.6 km2), completely surrounded by the city of Tucson, sitting
just south of downtown. South Tucson has a colorful, dynamic history. It was first incorporated in 1936, and later
reincorporated in 1940. The population consists of about 83% Mexican-American and 10% Native American residents.
South Tucson is widely known for its many Mexican restaurants and the architectural styles which include bright outdoor
murals, many of which have been painted over due to city policy. A combination of urban and suburban development,
the West Side is generally defined as the area west of I-10. Western Tucson encompasses the banks of the Santa Cruz
River and the foothills of the Tucson Mountains, and includes the International Wildlife Museum, Sentinel Peak, and
the Marriott Starr Pass Resort & Spa, located in the wealthy enclave known as Starr Pass. Moving past the Tucson
Mountains, travelers find themselves in the area commonly referred to as "west of" Tucson or "Old West Tucson". On this large undulating plain, which extends south into the Altar Valley, rural residential development predominates, but the area also contains major attractions including Saguaro National Park West, the Arizona-Sonora Desert Museum, and the Old Tucson Studios movie set/theme park. On Sentinel Peak (also known as "'A' Mountain"), just west of downtown,
there is a giant "A" in honor of the University of Arizona. Starting in about 1916, a yearly tradition developed
for freshmen to whitewash the "A", which was visible for miles. However, at the beginning of the Iraq War, anti-war
activists painted it black. This was followed by a paint scuffle where the "A" was painted various colors until the
city council intervened. It is now red, white and blue except when it is white or another color decided by a biennial
election. Because of the three-color paint scheme often used, the shape of the A can be vague and indistinguishable
from the rest of the peak. The top of Sentinel Peak, which is accessible by road, offers an outstanding scenic view
of the city looking eastward. A parking lot located near the summit of Sentinel Peak was formerly a popular place
to watch sunsets or view the city lights at night. Also on the north side is the suburban community of Catalina Foothills,
located in the foothills of the Santa Catalina Mountains just north of the city limits. This community includes some of the area's most expensive homes, including multimillion-dollar estates. The Foothills area is generally defined as
north of River Road, east of Oracle Road, and west of Sabino Creek. Some of the Tucson area's major resorts are located
in the Catalina Foothills, including the Hacienda Del Sol, Westin La Paloma Resort, Loews Ventana Canyon Resort and
Canyon Ranch Resort. La Encantada, an upscale outdoor shopping mall, is also in the Foothills. The expansive area
northwest of the city limits is diverse, ranging from the rural communities of Catalina and parts of the town of Marana to the small suburb of Picture Rocks, the affluent town of Oro Valley in the western foothills of the Santa Catalina Mountains, and residential areas in the northeastern foothills of the Tucson Mountains. Continental Ranch
(Marana), Dove Mountain (Marana), and Rancho Vistoso (Oro Valley) are all masterplanned communities located in the
Northwest, where thousands of residents live. The community of Casas Adobes is also on the Northwest Side, with the
distinction of being Tucson's first suburb, established in the late 1940s. Casas Adobes is centered on the historic
Casas Adobes Plaza (built in 1948). Casas Adobes is also home to Tohono Chul Park (a nature preserve) near the intersection
of North Oracle Road and West Ina Road. The attempted assassination of Representative Gabrielle Giffords and the murders of John Roll, chief judge of the U.S. District Court for Arizona, and five other people on January 8, 2011, occurred at the La Toscana Village in Casas Adobes. The Foothills Mall is also located on the northwest side in Casas
Adobes. East Tucson is relatively new compared to other parts of the city, developed between the 1950s and the 1970s, with developments such as Desert Palms Park. It is generally classified as the area of the city east of Swan
Road, with above-average real estate values relative to the rest of the city. The area includes urban and suburban
development near the Rincon Mountains. East Tucson includes Saguaro National Park East. Tucson's "Restaurant Row"
is also located on the east side, along with a significant corporate and financial presence. Restaurant Row is sandwiched
by three of Tucson's storied Neighborhoods: Harold Bell Wright Estates, named after the famous author's ranch which
occupied some of that area prior to the depression; the Tucson Country Club (the third to bear the name Tucson Country
Club), and the Dorado Country Club. Tucson's largest office building is 5151 East Broadway in east Tucson, completed
in 1975. The first phases of Williams Centre, a mixed-use, master-planned development on Broadway near Craycroft
Road, were opened in 1987. Park Place, a recently renovated shopping center, is also located along Broadway (west
of Wilmot Road). Near the intersection of Craycroft and Ft. Lowell Roads are the remnants of the Historic Fort Lowell.
This area has become one of Tucson's iconic neighborhoods. In 1891, the Fort was abandoned, much of the interior was stripped of its useful components, and it quickly fell into ruin. In 1900, three of the officers' buildings were
purchased for use as a sanitarium. The sanitarium was then sold to Harvey Adkins in 1928. The Bolsius family (Pete, Nan, and Charles Bolsius) purchased and renovated surviving adobe buildings of the Fort, transforming them into striking examples of artistic Southwestern architecture. Their woodwork, plaster treatment, and sense of proportion drew on their
Dutch heritage and New Mexican experience. Other artists and academics throughout the middle of the 20th century,
including Win Ellis, Jack Maul, Madame Cheruy, Giorgio Belloli, Charles Bode, Veronica Hughart, Edward and Rosamond
Spicer, Hazel Larson Archer and Ruth Brown, renovated adobes, built homes and lived in the area. The artist colony
attracted writers and poets, including Beat Generation figures Alan Harrington and Jack Kerouac, whose visit is documented in his iconic book On the Road. This rural pocket in the middle of the city is listed on the National Register of
Historic Places. Each year in February the neighborhood celebrates its history at the San Pedro Chapel, a City Landmark that it owns and has restored.
Southeast Tucson continues to experience rapid residential development. The area, generally considered to be south of Golf Links Road, includes Davis-Monthan Air Force Base. It is the home of Santa Rita High School,
Chuck Ford Park (Lakeside Park), Lakeside Lake, Lincoln Park (upper and lower), The Lakecrest Neighborhoods, and
Pima Community College East Campus. The Atterbury Wash with its access to excellent bird watching is also located
in the Southeast Tucson area. The suburban community of Rita Ranch houses many of the military families from Davis-Monthan,
and is near the southeastern-most expansion of the current city limits. Close by Rita Ranch and also within the city
limits lies Civano, a planned development meant to showcase ecologically sound building practices and lifestyles.
Catalina Highway stretches 25 miles (40 km) and the entire mountain range is one of Tucson's most popular vacation
spots for cycling, hiking, rock climbing, camping, birding, and wintertime snowboarding and skiing. Near the top
of Mt. Lemmon is the town of Summerhaven. In Summerhaven, visitors will find log houses and cabins, a general store,
and various shops, as well as numerous hiking trails. Near Summerhaven is the road to Ski Valley which hosts a ski
lift, several runs, a giftshop, and nearby restaurant. Tucson has a desert climate (Köppen BWh), with two major seasons,
summer and winter; plus three minor seasons: fall, spring, and the monsoon. Tucson averages 11.8 inches (299.7 mm)
of precipitation per year, more than most other locations with desert climates, but it still qualifies due to its
high evapotranspiration; in other words, it experiences a high net loss of water. A similar scenario is seen in Alice
Springs, Australia, which averages 11 inches (279.4 mm) a year, but has a desert climate. The monsoon can begin any
time from mid-June to late July, with an average start date around July 3. It typically continues through August
and sometimes into September. During the monsoon, the humidity is much higher than the rest of the year. It begins
with clouds building up from the south in the early afternoon followed by intense thunderstorms and rainfall, which
can cause flash floods. The evening sky at this time of year is often pierced with dramatic lightning strikes. Large
areas of the city do not have storm sewers, so monsoon rains flood the main thoroughfares, usually for no longer
than a few hours. A few underpasses in Tucson have "feet of water" scales painted on their supports to discourage
fording by automobiles during a rainstorm. Arizona traffic code Title 28-910, the so-called "Stupid Motorist Law",
was instituted in 1995 to discourage people from entering flooded roadways. If the road is flooded and a barricade
is in place, motorists who drive around the barricade can be charged up to $2000 for costs involved in rescuing them.
Despite all warnings and precautions, three Tucson drivers drowned between 2004 and 2010. Winters in
Tucson are mild relative to other parts of the United States. Daytime highs in the winter range between 64 and 75
°F (18 and 24 °C), with overnight lows between 30 and 44 °F (−1 and 7 °C). Tucson typically averages one hard freeze
per winter season, with temperatures dipping to the mid or low-20s (−7 to −4 °C), but this is typically limited to
only a very few nights. Although rare, snow has been known to fall in Tucson, usually a light dusting that melts
within a day. The most recent snowfall was on February 20, 2013, when 2.0 inches (5.1 cm) of snow blanketed the city, the largest
snowfall since 1987. At the University of Arizona, where records have been kept since 1894, the record maximum temperature
was 115 °F (46 °C) on June 19, 1960, and July 28, 1995, and the record minimum temperature was 6 °F (−14 °C) on January
7, 1913. There are an average of 150.1 days annually with highs of 90 °F (32 °C) or higher and an average of 26.4
days with lows reaching or below the freezing mark. Average annual precipitation is 11.15 in (283 mm). There is an
average of 49 days with measurable precipitation. The wettest year was 1905 with 24.17 in (614 mm) and the driest
year was 1924 with 5.07 in (129 mm). The most precipitation in one month was 7.56 in (192 mm) in July 1984. The most
precipitation in 24 hours was 4.16 in (106 mm) on October 1, 1983. Annual snowfall averages 0.7 in (1.8 cm). The
most snow in one year was 7.2 in (18 cm) in 1987. The most snow in one month was 6.0 in (15 cm) in January 1898 and
March 1922. At the airport, where records have been kept since 1930, the record maximum temperature was 117 °F (47
°C) on June 26, 1990, and the record minimum temperature was 16 °F (−9 °C) on January 4, 1949. There is an average
of 145.0 days annually with highs of 90 °F (32 °C) or higher and an average of 16.9 days with lows reaching or below
the freezing mark. Measurable precipitation falls on an average of 53 days. The wettest year was 1983 with 21.86
in (555 mm) of precipitation, and the driest year was 1953 with 5.34 in (136 mm). The most rainfall in one month
was 7.93 in (201 mm) in August 1955. The most rainfall in 24 hours was 3.93 in (100 mm) on July 29, 1958. Snow at
the airport averages only 1.1 in (2.8 cm) annually. The most snow received in one year was 8.3 in (21 cm) and the
most snow in one month was 6.8 in (17 cm) in December 1971. As of the census of 2010, there were 520,116 people,
229,762 households, and 112,455 families residing in the city. The population density was 2,500.1 inhabitants per
square mile (965.3/km²). There were 209,609 housing units at an average density of 1,076.7 per square mile (415.7/km²).
The racial makeup of the city was 69.7% White (down from 94.8% in 1970), 5.0% Black or African-American, 2.7% Native
American, 2.9% Asian, 0.2% Pacific Islander, 16.9% from other races, and 3.8% from two or more races. Hispanic or
Latino of any race were 41.6% of the population. Non-Hispanic Whites were 47.2% of the population in 2010, down from
72.8% in 1970. Much of Tucson's economic development has been centered on the development of the University of Arizona,
which is currently the second largest employer in the city. Davis-Monthan Air Force Base, located on the southeastern
edge of the city, also provides many jobs for Tucson residents. Its presence, as well as the presence of the US Army
Intelligence Center (Fort Huachuca, the largest employer in the region in nearby Sierra Vista), has led to the development
of a significant number of high-tech industries, including government contractors, in the area. The city of Tucson
is also a major hub for the Union Pacific Railroad's Sunset Route that links the Los Angeles ports with the South/Southeast
regions of the country. The City of Tucson, Pima County, the State of Arizona, and the private sector have all made
commitments to create a growing, healthy economy[citation needed] with advanced technology industry sectors as its
foundation. Raytheon Missile Systems (formerly Hughes Aircraft Co.), Texas Instruments, IBM, Intuit Inc., Universal
Avionics, Honeywell Aerospace, Sunquest Information Systems, Sanofi-Aventis, Ventana Medical Systems, Inc., and Bombardier
Aerospace all have a significant presence in Tucson. Roughly 150 Tucson companies are involved in the design and
manufacture of optics and optoelectronics systems, earning Tucson the nickname "Optics Valley". Since 2009, the Tucson
Festival of Books has been held annually over a two-day period in March at the University of Arizona. By 2010 it
had become the fourth largest book festival in the United States, with 450 authors and 80,000 attendees. In addition
to readings and lectures, it features a science fair, varied entertainment, food, and exhibitors ranging from local
retailers and publishers to regional and national nonprofit organizations. In 2011, the Festival began presenting
a Founder's Award; recipients include Elmore Leonard and R.L. Stine. For the past 25 years, the Tucson Folk Festival
has taken place the first Saturday and Sunday of May in downtown Tucson's El Presidio Park. In addition to nationally
known headline acts each evening, the Festival highlights over 100 local and regional musicians on five stages and is
one of the largest free festivals in the country. All stages are within easy walking distance. Organized by the Tucson
Kitchen Musicians Association, volunteers make this festival possible. KXCI 91.3-FM, Arizona's only community radio
station, is a major partner, broadcasting from the Plaza Stage throughout the weekend. In addition, there are numerous
workshops, events for children, sing-alongs, and a popular singer/songwriter contest. Musicians typically play 30-minute
sets, supported by professional audio staff volunteers. A variety of food and crafts are available at the festival,
as well as local micro-brews. All proceeds from sales go to fund future festivals. Another popular event held in
February, which is early spring in Tucson, is the Fiesta de los Vaqueros, or rodeo week, founded by winter visitor
Leighton Kramer. While at its heart the Fiesta is a sporting event, it includes what is billed as "the world's largest
non-mechanized parade". The Rodeo Parade is a popular event as most schools give two rodeo days off instead of Presidents
Day. The exception to this is Presidio High (a non-public charter school), which doesn't get either. Western wear
is seen throughout the city as corporate dress codes are cast aside during the Fiesta. The Fiesta de los Vaqueros
marks the beginning of the rodeo season in the United States. The All Souls Procession, held at sundown, consists of a non-motorized
parade through downtown Tucson featuring many floats, sculptures, and memorials, in which the community is encouraged
to participate. The parade is followed by performances on an outdoor stage, culminating in the burning of an urn
in which written prayers have been collected from participants and spectators. The event is organized and funded
by the non-profit arts organization Many Mouths One Stomach, with the assistance of many volunteers and donations
from the public and local businesses. The accomplished and awarded writers (poets, novelists, dramatists, nonfiction
writers) who have lived in Tucson include Edward Abbey, Erskine Caldwell, Barbara Kingsolver and David Foster Wallace.
Some were associated with the University of Arizona, but many were independent writers who chose to make Tucson their
home. The city is particularly active in publishing and presenting contemporary innovative poetry in various ways.
Examples are the Chax Press, a publisher of poetry books in trade and book arts editions, and the University of Arizona
Poetry Center, which has a sizable poetry library and presents readings, conferences, and workshops. Tucson is commonly
known as "The Old Pueblo". While the exact origin of this nickname is uncertain, it is commonly traced back to Mayor
R. N. "Bob" Leatherwood. When rail service was established to the city on March 20, 1880, Leatherwood celebrated
the fact by sending telegrams to various leaders, including the President of the United States and the Pope, announcing
that the "ancient and honorable pueblo" of Tucson was now connected by rail to the outside world. The term became
popular with newspaper writers who often abbreviated it as "A. and H. Pueblo". This in turn transformed into the
current form of "The Old Pueblo". The University of Arizona Wildcats sports teams, most notably the men's basketball
and women's softball teams, have strong local interest. The men's basketball team, formerly coached by Hall of Fame
head coach Lute Olson and currently coached by Sean Miller, has made 25 straight NCAA Tournaments and won the 1997
National Championship. Arizona's Softball team has reached the NCAA National Championship game 12 times and has won
8 times, most recently in 2007. The university's swim teams have gained international recognition, with swimmers
coming from as far as Japan and Africa to train with the coach Frank Busch who has also worked with the U.S. Olympic
swim team for a number of years. Both the men's and women's swim teams have recently[when?] won NCAA National Championships.
The Tucson Padres played at Kino Veterans Memorial Stadium from 2011 to 2013. They served as the AAA affiliate of
the San Diego Padres. The team, formerly known as the Portland Beavers, was temporarily relocated to Tucson from
Portland while awaiting the building of a new stadium in Escondido. Legal issues derailed the plans to build the
Escondido stadium, so they moved to El Paso, Texas for the 2014 season. Previously, the Tucson Sidewinders, a triple-A
affiliate of the Arizona Diamondbacks, won the Pacific Coast League championship and unofficial AAA championship
in 2006. The Sidewinders played in Tucson Electric Park and were in the Pacific Conference South of the PCL. The
Sidewinders were sold in 2007 and moved to Reno, Nevada after the 2008 season. They now compete as the Reno Aces.
Tracks include Tucson Raceway Park and Rillito Downs. Tucson Raceway Park hosts NASCAR-sanctioned auto racing events
and is one of only two asphalt short tracks in Arizona. Rillito Downs is an in-town destination on weekends in January
and February each year. This historic track held the first organized quarter horse races in the world, and they are
still racing there. The racetrack is threatened by development. The Moltacqua racetrack was another historic horse
racetrack located on what is now Sabino Canyon Road and Vactor Ranch Trail, but it no longer exists. The League of
American Bicyclists gave Tucson a gold rating for bicycle friendliness in late April 2007. Tucson hosts the largest
perimeter cycling event in the United States, El Tour de Tucson, held in November on the Saturday
before Thanksgiving. Produced and promoted by Perimeter Bicycling, the ride draws as many as 10,000 participants
from around the world annually. Tucson is one of only nine cities in the U.S. to receive a gold rating or higher
for cycling friendliness from the League of American Bicyclists. The city is known for its winter cycling opportunities.
Both road and mountain biking are popular in and around Tucson with trail areas including Starr Pass and Fantasy
Island. In general, Tucson and Pima County support the Democratic Party, as opposed to the state's largest metropolitan
area, Phoenix, which usually supports the Republican Party. Congressional redistricting in 2013, following the publication
of the 2010 Census, divided the Tucson area into three Federal Congressional districts (the first, second and third
of Arizona). The city center is in the 3rd District, represented by Raul Grijalva, a Democrat, since 2003, while
the more affluent residential areas to the south and east are in the 2nd District, represented by Republican Martha
McSally since 2015, and the exurbs north and west between Tucson and Phoenix in the 1st District are represented
by Democrat Ann Kirkpatrick since 2008. The United States Postal Service operates post offices in Tucson. The Tucson
Main Post Office is located at 1501 South Cherrybell Stravenue. Both the council members and the mayor serve four-year
terms; none face term limits. Council members are nominated by their wards via a ward-level primary held in September.
The top vote-earners from each party then compete at-large for their ward's seat on the November ballot. In other
words, on election day the whole city votes on all the council races up for that year. Council elections are staggered:
Wards 1, 2, and 4 (as well as the mayor) are up for election in the same year (most recently 2011), while Wards 3,
5, and 6 share another year (most recently 2013). Tucson is known for being a trailblazer in voluntary, partially
publicly financed campaigns. Since 1985, both mayoral and council candidates have been eligible to receive matching public
funds from the city. To become eligible, council candidates must receive 200 donations of $10 or more (300 for a
mayoral candidate). Candidates must then agree to spending limits equal to 33¢ for every registered Tucson voter,
or $79,222 in 2005 (the corresponding figures for mayor are 64¢ per registered voter, or $142,271 in 2003). In return,
candidates receive matching funds from the city at a 1:1 ratio of public money to private donations. The only other
limitation is that candidates may not exceed 75% of the limit by the date of the primary. Many cities, such as San
Francisco and New York City, have copied this system, albeit with more complex spending and matching formulas. Tucson
has one daily newspaper, the morning Arizona Daily Star. Wick Communications publishes the daily legal paper The
Daily Territorial, while Boulder, Colo.-based 10/13 Communications publishes Tucson Weekly (an "alternative" publication),
Inside Tucson Business and the Explorer. TucsonSentinel.com is a nonprofit independent online news organization.
Tucson Lifestyle Magazine, Lovin' Life News, DesertLeaf, and Zócalo Magazine are monthly publications covering arts,
architecture, decor, fashion, entertainment, business, history, and other events. The Arizona Daily Wildcat is the
University of Arizona's student newspaper, and the Aztec News is the Pima Community College student newspaper. The
New Vision is the newspaper for the Roman Catholic Diocese of Tucson, and the Arizona Jewish Post is the newspaper
of the Jewish Federation of Southern Arizona. The Tucson metro area is served by many local television stations and
is the 68th largest designated market area (DMA) in the U.S. with 433,310 homes (0.39% of the total U.S.). It is
limited to the three counties of southeastern Arizona (Pima, Santa Cruz, and Cochise). The major television networks
serving Tucson are: KVOA 4 (NBC), KGUN 9 (ABC), KMSB-TV 11 (Fox), KOLD-TV 13 (CBS), KTTU 18 (My Network TV) and KWBA
58 (The CW). KUAT-TV 6 is a PBS affiliate run by the University of Arizona (as is sister station KUAS 27). Tucson's
primary electrical power source is a coal- and natural gas-fired power plant managed by Tucson Electric Power, situated
within the city limits on the southwestern boundary of Davis-Monthan Air Force Base adjacent to Interstate 10. The
air pollution generated has raised some concerns, as the Sundt generating station has been online since 1962 and is
exempt from many pollution standards and controls due to its age. Solar power has been gaining ground in Tucson, whose
climate offers more than 300 days of sunshine a year. Federal, state, and even local utility credits and incentives have also
enticed residents to equip homes with solar systems. Davis-Monthan AFB has a 3.3 Megawatt (MW) ground-mounted solar
photovoltaic (PV) array and a 2.7 MW rooftop-mounted PV array, both of which are located in the Base Housing area.
The base will soon have the largest solar-generating capacity in the United States Department of Defense after awarding
a contract on September 10, 2010, to SunEdison to construct a 14.5 MW PV field on the northwestern side of the base.
Perhaps the biggest sustainability problem in Tucson, with its high desert climate, is potable water supply. The
state manages all water in Arizona through its Arizona Department of Water Resources (ADWR). The primary consumer
of water is agriculture (including golf courses), which consumes about 69% of all water. Municipal (which includes
residential use) accounts for about 25% of use. Energy consumption and availability is another sustainability issue.
However, with over 300 days of full sun a year, Tucson has demonstrated its potential to be an ideal solar energy
producer. In an effort to conserve water, Tucson is recharging groundwater supplies by running part of its share
of CAP water into various open portions of local rivers to seep into their aquifer. Additional study is scheduled
to determine the amount of water that is lost through evaporation from the open areas, especially during the summer.
The City of Tucson already provides reclaimed water to its inhabitants, but it is only used for "applications such
as irrigation, dust control, and industrial uses." These resources have been in place for more than 27 years, and
deliver to over 900 locations. To prevent further loss of groundwater, Tucson has been involved in water conservation
and groundwater preservation efforts, shifting away from its reliance on a series of Tucson area wells in favor of
conservation, consumption-based pricing for residential and commercial water use, and new wells in the more sustainable
Avra Valley aquifer, northwest of the city. An allocation from the Central Arizona Project Aqueduct (CAP), which
passes more than 300 mi (480 km) across the desert from the Colorado River, has been incorporated into the city's
water supply, annually providing over 20 million gallons of "recharged" water which is pumped into the ground to
replenish water pumped out. Since 2001, CAP water has allowed the city to remove or turn off over 80 wells. Tucson's
Sun Tran bus system serves greater Tucson with standard, express, regional shuttle, and on-demand shuttle bus service.
It was awarded Best Transit System in 1988 and 2005. A 3.9-mile streetcar line, Sun Link, connects the University
of Arizona campus with 4th Avenue, downtown, and the Mercado District west of Interstate 10 and the Santa Cruz River.
Ten-minute headway passenger service began July 25, 2014. The streetcar utilizes Sun Tran's card payment and transfer
system, connecting with the University of Arizona's CatTran shuttles, Amtrak, and Greyhound intercity bus service.
Cycling is popular in Tucson due to its flat terrain and dry climate. Tucson and Pima County maintain an extensive
network of marked bike routes, signal crossings, on-street bike lanes, mountain-biking trails, and dedicated shared-use
paths. The Loop is a network of seven linear parks comprising over 100 mi (160 km) of paved, vehicle-free trails
that encircles the majority of the city with links to Marana and Oro Valley. The Tucson-Pima County Bicycle Advisory
Committee (TPCBAC) serves in an advisory capacity to local governments on issues relating to bicycle recreation,
transportation, and safety. Tucson was awarded a gold rating for bicycle-friendliness by the League of American Bicyclists
in 2006.
Armenia is a unitary, multi-party, democratic nation-state with an ancient cultural heritage. Urartu was established in 860
BC and by the 6th century BC it was replaced by the Satrapy of Armenia. In the 1st century BC the Kingdom of Armenia
reached its height under Tigranes the Great. Between the late 3rd century and the early 4th century, Armenia became
the first state in the world to adopt Christianity as its official religion; the traditional date of adoption is
301 AD. The ancient Armenian kingdom was
split between the Byzantine and Sasanian Empires around the early 5th century. Between the 16th century and 19th
century, the traditional Armenian homeland, comprising Eastern Armenia and Western Armenia, came under the rule of
the Ottoman and successive Iranian empires, repeatedly ruled by either of the two over the centuries. By the 19th
century, Eastern Armenia had been conquered by the Russian Empire, while most of the western parts of the traditional
Armenian homeland remained under Ottoman rule. During World War I, Armenians living in their ancestral lands in the
Ottoman Empire were systematically exterminated in the Armenian Genocide. In 1918, after the Russian Revolution,
all non-Russian countries declared their independence from the Russian empire, leading to the establishment of the
First Republic of Armenia. By 1920, the state was incorporated into the Transcaucasian Socialist Federative Soviet
Republic, and in 1922 became a founding member of the Soviet Union. In 1936, the Transcaucasian state was dissolved,
transforming its constituent states, including the Armenian Soviet Socialist Republic, into full Union republics.
The modern Republic of Armenia became independent in 1991 during the dissolution of the Soviet Union. The exonym
Armenia is attested in the Old Persian Behistun Inscription (515 BC) as Armina. The ancient Greek terms Ἀρμενία
(Armenía) and Ἀρμένιοι (Arménioi, "Armenians") are first mentioned by Hecataeus of Miletus (c. 550 BC – c. 476 BC).
Xenophon, a Greek general serving in some of the Persian expeditions, describes many aspects of Armenian village
life and hospitality in around 401 BC. He relates that the people spoke a language that to his ear sounded like the
language of the Persians. According to the histories of both Moses of Chorene and Michael Chamchian, the name Armenia
derives from that of Aram, a lineal descendant of Hayk. Several Bronze Age states flourished in the area of Greater Armenia,
including the Hittite Empire (at the height of its power), Mitanni (South-Western historical Armenia), and Hayasa-Azzi
(1500–1200 BC). The Nairi people (12th to 9th centuries BC) and the Kingdom of Urartu (1000–600 BC) successively
established their sovereignty over the Armenian Highland. Each of the aforementioned nations and tribes participated
in the ethnogenesis of the Armenian people. A large cuneiform lapidary inscription found in Yerevan established that
the modern capital of Armenia was founded in the summer of 782 BC by King Argishti I. Yerevan is the world's oldest
city to have documented the exact date of its foundation. During the late 6th century BC, the first geographical
entity that was called Armenia by neighboring populations was established under the Orontid Dynasty within the Achaemenid
Empire, as part of the latter's territories. The kingdom became fully sovereign from the sphere of influence of the
Seleucid Empire in 190 BC under King Artaxias I, beginning the rule of the Artaxiad dynasty. Armenia reached its height
between 95 and 66 BC under Tigranes the Great, becoming the most powerful kingdom of its time east of the Roman Republic.
In the next centuries, Armenia was in the Persian Empire's sphere of influence during the reign of Tiridates I, the
founder of the Arsacid dynasty of Armenia, which itself was a branch of the eponymous Arsacid dynasty of Parthia.
Throughout its history, the kingdom of Armenia enjoyed both periods of independence and periods of autonomy subject
to contemporary empires. Its strategic location between two continents has subjected it to invasions by many peoples,
including the Assyrians (under Ashurbanipal, at around 669–627 BC, the boundaries of the Assyrian Empire reached
as far as Armenia & the Caucasus Mountains), Medes, Achaemenid Persians, Greeks, Parthians, Romans, Sassanid Persians,
Byzantines, Arabs, Seljuks, Mongols, Ottomans, successive Iranian Safavids, Afsharids, and Qajars, and the Russians.
After the Marzpanate period (428–636), Armenia emerged as the Emirate of Armenia, an autonomous principality within
the Arabic Empire, reuniting Armenian lands previously taken by the Byzantine Empire as well. The principality was
ruled by the Prince of Armenia, and recognized by the Caliph and the Byzantine Emperor. It was part of the administrative
division/emirate Arminiya created by the Arabs, which also included parts of Georgia and Caucasian Albania, and had
its center in the Armenian city, Dvin. The Principality of Armenia lasted until 884, when it regained its independence
from the weakened Arab Empire under King Ashot I Bagratuni. In 1045, the Byzantine Empire conquered Bagratid Armenia.
Soon, the other Armenian states fell under Byzantine control as well. The Byzantine rule was short lived, as in 1071
Seljuk Turks defeated the Byzantines and conquered Armenia at the Battle of Manzikert, establishing the Seljuk Empire.
To escape death or servitude at the hands of those who had assassinated his relative, Gagik II, King of Ani, an Armenian
named Roupen, went with some of his countrymen into the gorges of the Taurus Mountains and then into Tarsus of Cilicia.
The Byzantine governor of the palace gave them shelter where the Armenian Kingdom of Cilicia was eventually established
on 6 January 1198 under King Leo I, a descendant of Prince Roupen. The Seljuk Empire soon started to collapse. In
the early 12th century, Armenian princes of the Zakarid noble family drove out the Seljuk Turks and established a
semi-independent Armenian principality in Northern and Eastern Armenia, known as Zakarid Armenia, which lasted under
the patronage of the Georgian Kingdom. The noble family of Orbelians shared control with the Zakarids in various
parts of the country, especially in Syunik and Vayots Dzor, while the Armenian family of Hasan-Jalalians controlled
provinces of Artsakh and Utik as the Kingdom of Artsakh. In the 16th century, the Ottoman Empire and Safavid Empire
divided Armenia. From the early 16th century, both Western Armenia and Eastern Armenia fell under Iranian Safavid
rule. Owing to the centuries-long Turco-Iranian geopolitical rivalry in Western Asia, significant
parts of the region were frequently fought over by the two rival empires. From the mid-16th century with
the Peace of Amasya, and decisively from the first half of the 17th century with the Treaty of Zuhab until the first
half of the 19th century, Eastern Armenia was ruled by the successive Iranian Safavid, Afsharid and Qajar empires,
while Western Armenia remained under Ottoman rule. While Western Armenia still remained under Ottoman rule, the Armenians
were granted considerable autonomy within their own enclaves and lived in relative harmony with other groups in the
empire (including the ruling Turks). However, as Christians under a strict Muslim social system, Armenians faced
pervasive discrimination. When they began pushing for more rights within the Ottoman Empire, Sultan Abdul Hamid
II, in response, organized state-sponsored massacres against the Armenians between 1894 and 1896, resulting in an
estimated death toll of 80,000 to 300,000 people. The Hamidian massacres, as they came to be known, gave Hamid international
infamy as the "Red Sultan" or "Bloody Sultan." During the 1890s, the Armenian Revolutionary Federation, commonly
known as Dashnaktsutyun, became active within the Ottoman Empire with the aim of unifying the various small groups
in the empire that were advocating for reform and defending Armenian villages from massacres that were widespread
in some of the Armenian-populated areas of the empire. Dashnaktsutyun members also formed fedayi groups that defended
Armenian civilians through armed resistance. The Dashnaks also worked for the wider goal of creating a "free, independent
and unified" Armenia, although they sometimes set aside this goal in favor of a more realistic approach, such as
advocating autonomy. The Ottoman Empire began to collapse, and in 1908, the Young Turk Revolution overthrew the government
of Sultan Hamid. In April 1909, the Adana massacre occurred in the Adana Vilayet of the Ottoman Empire resulting
in the deaths of as many as 20,000–30,000 Armenians. The Armenians living in the empire hoped that the Committee
of Union and Progress would change their second-class status. The Armenian reform package (1914) was presented as a solution
by appointing an inspector general over Armenian issues. When World War I broke out leading to confrontation between
the Ottoman Empire and the Russian Empire in the Caucasus and Persian Campaigns, the new government in Istanbul began
to look on the Armenians with distrust and suspicion. This was because the Imperial Russian Army contained a contingent
of Armenian volunteers. On 24 April 1915, Armenian intellectuals were arrested by Ottoman authorities and, with the
Tehcir Law (29 May 1915), eventually a large proportion of Armenians living in Anatolia perished in what has become
known as the Armenian Genocide. The genocide was implemented in two phases: the wholesale killing of the able-bodied
male population through massacre and subjection of army conscripts to forced labour, followed by the deportation
of women, children, the elderly and infirm on death marches leading to the Syrian desert. Driven forward by military
escorts, the deportees were deprived of food and water and subjected to periodic robbery, rape, and massacre. There
was local Armenian resistance in the region, which developed against the activities of the Ottoman Empire. The events of
1915 to 1917 are regarded by Armenians and the vast majority of Western historians to have been state-sponsored mass
killings, or genocide. To this day, Turkish authorities deny that a genocide took place. The Armenian Genocide is acknowledged
to have been one of the first modern genocides. According to the research conducted by Arnold J. Toynbee, an estimated
600,000 Armenians died during the deportations of 1915–16. This figure, however, accounts solely for the first year
of the Genocide and does not take into account those who died or were killed after the report was compiled on
24 May 1916. The International Association of Genocide Scholars places the death toll at "more than a million".
The total number of people killed has been most widely estimated at between 1 and 1.5 million. Although the Russian
Caucasus Army of Imperial forces commanded by Nikolai Yudenich and Armenians in volunteer units and Armenian militia
led by Andranik Ozanian and Tovmas Nazarbekian succeeded in gaining most of Ottoman Armenia during World War I, their
gains were lost with the Bolshevik Revolution of 1917.[citation needed] At the time, Russian-controlled Eastern Armenia,
Georgia, and Azerbaijan attempted to bond together in the Transcaucasian Democratic Federative Republic. This federation,
however, lasted only from February to May 1918, when all three parties decided to dissolve it. As a result, the Dashnaktsutyun
government of Eastern Armenia declared its independence on 28 May as the First Republic of Armenia under the leadership
of Aram Manukian. At the end of the war, the victorious powers sought to divide up the Ottoman Empire. Signed between
the Allied and Associated Powers and Ottoman Empire at Sèvres on 10 August 1920, the Treaty of Sèvres promised to
maintain the existence of the Armenian republic and to attach the former territories of Ottoman Armenia to it. Because
the new borders of Armenia were to be drawn by United States President Woodrow Wilson, Ottoman Armenia was also referred
to as "Wilsonian Armenia." In addition, just days prior, on 5 August 1920, Mihran Damadian of the Armenian National
Union, the de facto Armenian administration in Cilicia, declared the independence of Cilicia as an Armenian autonomous
republic under French protectorate. In 1920, Turkish nationalist forces invaded the fledgling Armenian republic from
the east. Turkish forces under the command of Kazım Karabekir captured Armenian territories that Russia had annexed
in the aftermath of the 1877–1878 Russo-Turkish War and occupied the old city of Alexandropol (present-day Gyumri).
The violent conflict finally concluded with the Treaty of Alexandropol on 2 December 1920. The treaty forced Armenia
to disarm most of its military forces and cede all former Ottoman territory granted to it by the Treaty of Sèvres,
including "Wilsonian Armenia". Simultaneously, the Soviet Eleventh Army,
under the command of Grigoriy Ordzhonikidze, invaded Armenia at Karavansarai (present-day Ijevan) on 29 November.
By 4 December, Ordzhonikidze's forces entered Yerevan and the short-lived Armenian republic collapsed. Armenia was
annexed by Bolshevist Russia and along with Georgia and Azerbaijan, it was incorporated into the Soviet Union as
part of the Transcaucasian SFSR (TSFSR) on 4 March 1922. With this annexation, the Treaty of Alexandropol was superseded
by the Turkish-Soviet Treaty of Kars. In the agreement, Turkey allowed the Soviet Union to assume control over Adjara
with the port city of Batumi in return for sovereignty over the cities of Kars, Ardahan, and Iğdır, all of which
were part of Russian Armenia. The TSFSR existed from 1922 to 1936, when it was divided up into three separate entities
(Armenian SSR, Azerbaijan SSR, and Georgian SSR). Armenians enjoyed a period of relative stability under Soviet rule.
They received medicine, food, and other provisions from Moscow, and communist rule proved to be a soothing balm in
contrast to the turbulent final years of the Ottoman Empire. The situation was difficult for the church, which struggled
under Soviet rule. After the death of Vladimir Lenin, Joseph Stalin took the reins of power and began an era of renewed
fear and terror for Armenians. Fears decreased when Stalin died in 1953 and Nikita Khrushchev emerged as the Soviet
Union's new leader. Soon, life in Soviet Armenia began to see rapid improvement. The church, which suffered greatly
under Stalin, was revived when Catholicos Vazgen I assumed the duties of his office in 1955. In 1967, a memorial
to the victims of the Armenian Genocide was built at the Tsitsernakaberd hill above the Hrazdan gorge in Yerevan.
This occurred after mass demonstrations took place on the tragic event's fiftieth anniversary in 1965. During the
Gorbachev era of the 1980s, with the reforms of Glasnost and Perestroika, Armenians began to demand better environmental
care for their country, opposing the pollution that Soviet-built factories brought. Tensions also developed between
Soviet Azerbaijan and its autonomous district of Nagorno-Karabakh, a majority-Armenian region separated by Stalin
from Armenia in 1923. About 484,000 Armenians lived in Azerbaijan in 1970. The Armenians of Karabakh demanded unification
with Soviet Armenia. Peaceful protests in Yerevan supporting the Karabakh Armenians were met with anti-Armenian pogroms
in the Azerbaijani city of Sumgait. Compounding Armenia's problems was a devastating earthquake in 1988 with a moment
magnitude of 7.2. Gorbachev's inability to alleviate any of Armenia's problems created disillusionment among the
Armenians and fed a growing hunger for independence. In May 1990, the New Armenian Army (NAA) was established, serving
as a defence force separate from the Soviet Red Army. Clashes soon broke out between the NAA and Soviet Internal
Security Forces (MVD) troops based in Yerevan when Armenians decided to commemorate the establishment of the 1918
First Republic of Armenia. The violence resulted in the deaths of five Armenians in a shootout with the MVD
at the railway station. Witnesses there claimed that the MVD used excessive force and that they had instigated the
fighting. Further firefights between Armenian militiamen and Soviet troops occurred in Sovetashen, near the capital
and resulted in the deaths of over 26 people, mostly Armenians. The pogrom of Armenians in Baku in January 1990 forced
almost all of the 200,000 Armenians in the Azerbaijani capital Baku to flee to Armenia. On 17 March 1991, Armenia,
along with the Baltic states, Georgia and Moldova, boycotted a nationwide referendum in which 78% of all voters voted
for the retention of the Soviet Union in a reformed form. Ter-Petrosyan led Armenia alongside Defense Minister Vazgen
Sargsyan through the Nagorno-Karabakh War with neighboring Azerbaijan. The initial post-Soviet years were marred
by economic difficulties, which had their roots early in the Karabakh conflict when the Azerbaijani Popular Front
managed to pressure the Azerbaijan SSR to instigate a railway and air blockade against Armenia. This move effectively
crippled Armenia's economy, as 85% of its cargo and goods arrived by rail. In 1993, Turkey joined the
blockade against Armenia in support of Azerbaijan. The Karabakh war ended after a Russian-brokered cease-fire was
put in place in 1994. The war was a success for the Karabakh Armenian forces who managed to capture 16% of Azerbaijan's
internationally recognised territory including Nagorno-Karabakh itself. Since then, Armenia and Azerbaijan have held
peace talks, mediated by the Organisation for Security and Co-operation in Europe (OSCE). The status of Karabakh
has yet to be determined. The economies of both countries have been hurt in the absence of a complete resolution
and Armenia's borders with Turkey and Azerbaijan remain closed. By the time both Azerbaijan and Armenia had finally
agreed to a ceasefire in 1994, an estimated 30,000 people had been killed and over a million had been displaced.
International observers from the Council of Europe and the US Department of State have questioned the fairness of Armenia's
parliamentary and presidential elections and constitutional referendum since 1995, citing polling deficiencies, lack
of cooperation by the Electoral Commission, and poor maintenance of electoral lists and polling places. Freedom House
categorized Armenia in its 2008 report as a "Semi-consolidated Authoritarian Regime" (along with Moldova, Kosovo,
Kyrgyzstan, and Russia) and ranked Armenia 20th among 29 nations in transition, with a Democracy Score of 5.21 out
of 7 (7 represents the lowest democratic progress). Armenia presently maintains good relations with almost every
country in the world, with two major exceptions being its immediate neighbours, Turkey and Azerbaijan. Tensions were
running high between Armenians and Azerbaijanis during the final years of the Soviet Union. The Nagorno-Karabakh
War dominated the region's politics throughout the 1990s. The border between the two rival countries remains closed
up to this day, and a permanent solution for the conflict has not been reached despite the mediation provided by
organisations such as the OSCE. Turkey also has a long history of poor relations with Armenia over its refusal to
acknowledge the Armenian Genocide. Turkey was one of the first countries to recognize the Republic of Armenia (the
3rd republic) after its independence from the USSR in 1991. Despite this, for most of the 20th and early 21st centuries, relations have remained tense, and there are no formal diplomatic relations between the two countries due to
Turkey's refusal to establish them for numerous reasons. Citing the Nagorno-Karabakh War as the reason, Turkey illegally closed its land border with Armenia in 1993. It has not lifted its blockade despite pressure from
the powerful Turkish business lobby interested in Armenian markets. On 10 October 2009, Armenia and Turkey signed
protocols on the normalisation of relations, which set a timetable for restoring diplomatic ties and reopening their joint border. The protocols had to be ratified by the national parliaments. In Armenia, they received the approval of the Constitutional Court required by legislation and were sent to the parliament for final ratification. The President made multiple public announcements, both in Armenia and abroad, that as the leader of the political majority of Armenia he would assure ratification of the protocols if Turkey also ratified them. Despite this, the
process stopped, as Turkey continuously added more preconditions to its ratification and also "delayed it beyond
any reasonable time-period". Due to its position between two unfriendly neighbours, Armenia has close security ties
with Russia. At the request of the Armenian government, Russia maintains a military base in the northwestern Armenian
city of Gyumri as a deterrent against Turkey. Despite this, Armenia has also been looking toward
Euro-Atlantic structures in recent years. It maintains good relations with the United States especially through its
Armenian diaspora. According to the US Census Bureau, there are 427,822 Armenians living in the country. Armenia
is also a member of the Council of Europe, maintaining friendly relations with the European Union, especially with
its member states such as France and Greece. A 2005 survey reported that 64% of Armenia's population would be in
favor of joining the EU. Several Armenian officials have also expressed the desire for their country to eventually
become an EU member state, some predicting that it will make an official bid for membership in a few years. In 2004 its forces joined KFOR, a NATO-led international force in Kosovo. It is also an observer member of
the Eurasian Economic Community and the Non-Aligned Movement. The Armenian Army, Air Force, Air Defence, and Border
Guard comprise the four branches of the Armed Forces of the Republic of Armenia. The Armenian military was formed
after the collapse of the Soviet Union in 1991 and with the establishment of the Ministry of Defence in 1992. The
Commander-in-Chief of the military is the President of Armenia, Serzh Sargsyan. The Ministry of Defence is in charge
of political leadership, currently headed by Colonel General Seyran Ohanyan, while military command remains in the
hands of the General Staff, headed by the Chief of Staff, who is currently Colonel General Yuri Khatchaturov. Armenia
is a member of the Collective Security Treaty Organisation (CSTO) along with Belarus, Kazakhstan, Kyrgyzstan, Russia, Tajikistan and Uzbekistan. It participates in NATO's Partnership for Peace (PfP) program and is a member of the Euro-Atlantic Partnership Council (EAPC). Armenia has engaged in a peacekeeping mission in Kosovo as part of non-NATO
KFOR troops under Greek command. Armenia also had 46 members of its military peacekeeping forces as a part of the
Coalition Forces in the Iraq War until October 2008. Within each province are communities (hamaynkner, singular hamaynk).
Each community is self-governing and consists of one or more settlements (bnakavayrer, singular bnakavayr). Settlements
are classified as either towns (kaghakner, singular kaghak) or villages (gyugher, singular gyugh). As of 2007,
Armenia includes 915 communities, of which 49 are considered urban and 866 are considered rural. The capital, Yerevan,
also has the status of a community. Additionally, Yerevan is divided into twelve semi-autonomous districts. The economy
relies heavily on investment and support from Armenians abroad. Before independence, Armenia's economy was largely
industry-based – chemicals, electronics, machinery, processed food, synthetic rubber, and textiles – and highly dependent
on outside resources. The republic had developed a modern industrial sector, supplying machine tools, textiles, and
other manufactured goods to sister republics in exchange for raw materials and energy. Recently, the Intel Corporation, along with other technology companies, agreed to open a research center in Armenia, signalling the growth of the country's technology industry. Agriculture accounted for less than 20% of both net material product and total
employment before the dissolution of the Soviet Union in 1991. After independence, the importance of agriculture
in the economy increased markedly, its share at the end of the 1990s rising to more than 30% of GDP and more than
40% of total employment. This increase in the importance of agriculture was attributable to food security needs of
the population in the face of uncertainty during the first phases of transition and the collapse of the non-agricultural
sectors of the economy in the early 1990s. As the economic situation stabilized and growth resumed, the share of
agriculture in GDP dropped to slightly over 20% (2006 data), although the share of agriculture in employment remained
more than 40%. Like other newly independent states of the former Soviet Union, Armenia's economy suffers from the
breakdown of former Soviet trading patterns. Soviet investment in and support of Armenian industry has virtually
disappeared, so that few major enterprises are still able to function. In addition, the effects of the 1988 Spitak
earthquake, which killed more than 25,000 people and made 500,000 homeless, are still being felt. The conflict with
Azerbaijan over Nagorno-Karabakh has not been resolved. The closure of Azerbaijani and Turkish borders has devastated
the economy, because Armenia depends on outside supplies of energy and most raw materials. Land routes through Georgia
and Iran are inadequate or unreliable. The GDP fell nearly 60% between 1989 and 1993, but then resumed robust growth.
The national currency, the dram, suffered hyperinflation for the first years after its introduction in 1993. Nevertheless,
the government was able to make wide-ranging economic reforms that paid off in dramatically lower inflation and steady
growth. The 1994 cease-fire in the Nagorno-Karabakh conflict has also helped the economy. Armenia has had strong
economic growth since 1995, building on the turnaround that began the previous year, and inflation has been negligible
for the past several years. New sectors, such as precious-stone processing and jewellery making, information and
communication technology, and even tourism are beginning to supplement more traditional sectors of the economy, such
as agriculture. This steady economic progress has earned Armenia increasing support from international institutions.
The International Monetary Fund (IMF), World Bank, European Bank for Reconstruction and Development (EBRD), and other
international financial institutions (IFIs) and foreign countries are extending considerable grants and loans. Loans
to Armenia since 1993 exceed $1.1 billion. These loans are targeted at reducing the budget deficit and stabilizing
the currency; developing private businesses; energy; agriculture; food processing; transportation; the health and
education sectors; and ongoing rehabilitation in the earthquake zone. The government joined the World Trade Organization
on 5 February 2003. However, one of the main sources of foreign direct investment remains the Armenian diaspora, which finances major parts of the reconstruction of infrastructure and other public projects. As a growing democratic state, Armenia also hopes to receive more financial aid from the Western world. A liberal foreign investment law was
approved in June 1994, and a law on privatisation was adopted in 1997, as well as a program of state property privatisation.
Continued progress will depend on the ability of the government to strengthen its macroeconomic management, including
increasing revenue collection, improving the investment climate, and making strides against corruption. However,
unemployment, which currently stands at around 15%, remains a major problem due to the influx of thousands
of refugees from the Karabakh conflict. In the 1988–89 school year, 301 students per 10,000 population were in specialized
secondary or higher education, a figure slightly lower than the Soviet average. In 1989 some 58% of Armenians over
age fifteen had completed their secondary education, and 14% had a higher education. In the 1990–91 school year,
the estimated 1,307 primary and secondary schools were attended by 608,800 students. Another seventy specialized
secondary institutions had 45,900 students, and 68,400 students were enrolled in a total of ten postsecondary institutions
that included universities. In addition, 35% of eligible children attended preschools. In 1992 Armenia's largest
institution of higher learning, Yerevan State University, had eighteen departments, including ones for social sciences,
sciences, and law. Its faculty numbered about 1,300 teachers and its student population about 10,000 students. The
National Polytechnic University of Armenia has been operating since 1933. A number of independent institutions of higher education were formed on the basis of the expansion and development of Yerevan State University, including the Medical Institute, which separated in 1930 and was established on the basis of the medical faculty. In 1980, Yerevan State Medical University was awarded one of the principal honours of the former USSR, the Order of the Red Banner of Labour, for training qualified specialists in health care and for valuable service in the development of medical science. In 1995 the YSMI was renamed the YSMU, and since 1989 it has borne the name of Mkhitar Heratsi, the famous medieval physician. Mkhitar Heratsi was the founder of the Armenian medical school in Cilician Armenia, playing the same role in Armenian medical science as Hippocrates in Western, Galen in Roman, and Ibn Sīnā in Arabic medicine. A department for foreign students of the Armenian diaspora, established in 1957, was later enlarged, and the enrollment of other foreign students began. Today the YSMU is a medical institution meeting international requirements; it trains medical staff not only for Armenia and neighbouring countries such as Iran, Syria, Lebanon, and Georgia, but also for many other countries. A great number of foreign students from India, Nepal, Sri Lanka, the United States, and the Russian Federation study alongside Armenian students. The university is ranked among well-known medical institutions and is listed in the World Directory of Medical Schools published by the WHO. Other educational institutions in Armenia include the American University
of Armenia and the QSI International School of Yerevan. The American University of Armenia has graduate programs
in Business and Law, among others. The institution owes its existence to the combined efforts of the Government of
Armenia, the Armenian General Benevolent Union, U.S. Agency for International Development, and the University of
California. The extension programs and the library at AUA form a new focal point for English-language intellectual
life in the city. Armenia also hosts a deployment of One Laptop per Child (OLPC) XO laptop-tablets in schools.
Instruments like the duduk, the dhol, the zurna, and the kanun are commonly found in Armenian folk music. Artists
such as Sayat Nova are famous due to their influence in the development of Armenian folk music. One of the oldest
types of Armenian music is the Armenian chant which is the most common kind of religious music in Armenia. Many of
these chants are ancient in origin, extending to pre-Christian times, while others are relatively modern, including
several composed by Saint Mesrop Mashtots, the inventor of the Armenian alphabet. Under Soviet rule, the Armenian classical composer Aram Khachaturian became internationally renowned for his music, including various ballets and the Sabre Dance from his ballet Gayane. The Armenian Genocide caused widespread emigration
that led to the settlement of Armenians in various countries in the world. Armenians kept to their traditions and
certain diasporans rose to fame with their music. In the post-Genocide Armenian community of the United States, the
so-called "kef" style Armenian dance music, using Armenian and Middle Eastern folk instruments (often electrified/amplified)
and some western instruments, was popular. This style preserved the folk songs and dances of Western Armenia, and
many artists also played the contemporary popular songs of Turkey and other Middle Eastern countries from which the
Armenians emigrated. Richard Hagopian is perhaps the most famous artist of the traditional "kef" style and the Vosbikian
Band was notable in the 1940s and 1950s for developing their own style of "kef music" heavily influenced by the popular
American Big Band Jazz of the time. Later, stemming from the Middle Eastern Armenian diaspora and influenced by Continental
European (especially French) pop music, the Armenian pop music genre grew to fame in the 1960s and 1970s with artists
such as Adiss Harmandian and Harout Pamboukjian performing to the Armenian diaspora and Armenia; also with artists
such as Sirusho, performing pop music combined with Armenian folk music in today's entertainment industry. Other
Armenian diasporans that rose to fame in classical or international music circles are world-renowned French-Armenian
singer and composer Charles Aznavour, pianist Sahan Arzruni, prominent opera sopranos such as Hasmik Papian and more
recently Isabel Bayrakdarian and Anna Kasyan. Certain Armenians have risen to fame performing non-Armenian music, such as the heavy metal band System of a Down (which nonetheless often incorporates traditional Armenian instrumentals and styling into their songs) or pop star Cher. In the Armenian diaspora, Armenian revolutionary songs are popular with the youth.
These songs encourage Armenian patriotism and are generally about Armenian history and national heroes. Yerevan Vernissage
(arts and crafts market), close to Republic Square, bustles with hundreds of vendors selling a variety of crafts
on weekends and Wednesdays (though the selection is much reduced mid-week). The market offers woodcarving, antiques,
fine lace, and the hand-knotted wool carpets and kilims that are a Caucasus specialty. Obsidian, which is found locally,
is crafted into an assortment of jewellery and ornamental objects. Armenian goldsmithing enjoys a long tradition, populating
one corner of the market with a selection of gold items. Soviet relics and souvenirs of recent Russian manufacture
– nesting dolls, watches, enamel boxes and so on – are also available at the Vernisage. The National Art Gallery
in Yerevan has more than 16,000 works dating back to the Middle Ages, reflecting Armenia's rich history and storytelling traditions. It also houses paintings by many European masters. The Modern Art Museum, the Children’s
Picture Gallery, and the Martiros Saryan Museum are only a few of the other noteworthy collections of fine art on
display in Yerevan. Moreover, many private galleries are in operation, with many more opening every year, featuring
rotating exhibitions and sales. A wide array of sports are played in Armenia, the most popular among them being wrestling,
weightlifting, judo, association football, chess, and boxing. Armenia's mountainous terrain provides great opportunities
for the practice of sports like skiing and climbing. Because Armenia is landlocked, water sports can only be practiced on lakes, notably Lake Sevan. Competitively, Armenia has been successful in chess, weightlifting and wrestling at
the international level. Armenia is also an active member of the international sports community, with full membership
in the Union of European Football Associations (UEFA) and International Ice Hockey Federation (IIHF). It also hosts
the Pan-Armenian Games. Prior to 1992, Armenians would participate in the Olympics representing the USSR. As part
of the Soviet Union, Armenia was very successful, winning many medals and helping the USSR top the medal standings
at the Olympics on numerous occasions. The first medal won by an Armenian in modern Olympic history was by Hrant
Shahinyan (sometimes spelled as Grant Shaginyan), who won two golds and two silvers in gymnastics at the 1952 Summer
Olympics in Helsinki. Shahinyan himself spoke of the level of success achieved by Armenians in the Olympics.
Football is also popular in Armenia. The most successful team was the FC Ararat Yerevan team of the 1970s who won
the Soviet Cup in 1973 and 1975 and the Soviet Top League in 1973. The latter achievement saw FC Ararat gain entry
to the European Cup where – despite a home victory in the second leg – they lost on aggregate at the quarter final
stage to eventual winner FC Bayern Munich. Armenia competed internationally as part of the USSR national football
team until the Armenian national football team was formed in 1992 after the dissolution of the Soviet Union. Armenia has never qualified for a major tournament, although recent improvements saw the team achieve 44th position in the FIFA World Rankings in September 2011. The national team is controlled by the Football Federation of Armenia. The
Armenian Premier League is the highest level football competition in Armenia, and has been dominated by FC Pyunik
in recent seasons. The league currently consists of eight teams and relegates teams to the Armenian First League. Due to its recent lack of success at the international level, Armenia has rebuilt 16 Soviet-era sports schools and furnished them with new equipment for a total cost of $1.9 million. The rebuilding of the regional schools
was financed by the Armenian government. $9.3 million has been invested in the resort town of Tsaghkadzor to improve
the winter sports infrastructure because of dismal performances at recent winter sports events. In 2005, a cycling
center was opened in Yerevan with the aim of helping produce world class Armenian cyclists. The government has also
promised a cash reward of $700,000 to Armenians who win a gold medal at the Olympics. Armenian cuisine is as ancient as the history of Armenia, a combination of different tastes and aromas, and the food often has quite a distinct aroma. Closely related to Eastern and Mediterranean cuisines, Armenian cooking combines various spices, vegetables, fish, and fruits to present unique dishes. The main characteristics of Armenian cuisine are a reliance on the quality of the ingredients rather
than heavily spicing food, the use of herbs, the use of wheat in a variety of forms, of legumes, nuts, and fruit
(as a main ingredient as well as to sour food), and the stuffing of a wide variety of leaves.
Bacteria (/bækˈtɪəriə/; singular: bacterium) constitute a large domain of prokaryotic microorganisms. Typically a few micrometres
in length, bacteria have a number of shapes, ranging from spheres to rods and spirals. Bacteria were among the first
life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot
springs, radioactive waste, and the deep portions of Earth's crust. Bacteria also live in symbiotic and parasitic
relationships with plants and animals. They are also known to have flourished in manned spacecraft. There are typically
40 million bacterial cells in a gram of soil and a million bacterial cells in a millilitre of fresh water. There
are approximately 5×10³⁰ bacteria on Earth, forming a biomass which exceeds that of all plants and animals. Bacteria
are vital in recycling nutrients, with many of the stages in nutrient cycles dependent on these organisms, such as
the fixation of nitrogen from the atmosphere and putrefaction. In the biological communities surrounding hydrothermal
vents and cold seeps, bacteria provide the nutrients needed to sustain life by converting dissolved compounds, such
as hydrogen sulphide and methane, to energy. On 17 March 2013, researchers reported data that suggested bacterial
life forms thrive in the Mariana Trench, which with a depth of up to 11 kilometres is the deepest part of the Earth's
oceans. Other researchers reported related studies that microbes thrive inside rocks up to 580 metres below the sea
floor under 2.6 kilometres of ocean off the coast of the northwestern United States. According to one of the researchers,
"You can find microbes everywhere — they're extremely adaptable to conditions, and survive wherever they are." There
are approximately ten times as many bacterial cells in the human flora as there are human cells in the body, with
the largest number of the human flora being in the gut flora, and a large number on the skin. The vast majority of
the bacteria in the body are rendered harmless by the protective effects of the immune system, and some are beneficial.
However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax,
leprosy, and bubonic plague. The most common fatal bacterial diseases are respiratory infections, with tuberculosis
alone killing about 2 million people per year, mostly in sub-Saharan Africa. In developed countries, antibiotics
are used to treat bacterial infections and are also used in farming, making antibiotic resistance a growing problem.
In industry, bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese
and yogurt through fermentation, and the recovery of gold, palladium, copper and other metals in the mining sector,
as well as in biotechnology, and the manufacture of antibiotics and other chemicals. Once regarded as plants constituting
the class Schizomycetes, bacteria are now classified as prokaryotes. Unlike cells of animals and other eukaryotes,
bacterial cells do not contain a nucleus and rarely harbour membrane-bound organelles. Although the term bacteria
traditionally included all prokaryotes, the scientific classification changed after the discovery in the 1990s that
prokaryotes consist of two very different groups of organisms that evolved from an ancient common ancestor. These
evolutionary domains are called Bacteria and Archaea. The ancestors of modern bacteria were unicellular microorganisms
that were the first forms of life to appear on Earth, about 4 billion years ago. For about 3 billion years, most
organisms were microscopic, and bacteria and archaea were the dominant forms of life. In 2008, fossils of macroorganisms were discovered and named the Francevillian biota. Although bacterial fossils exist, such as stromatolites, their
lack of distinctive morphology prevents them from being used to examine the history of bacterial evolution, or to
date the time of origin of a particular bacterial species. However, gene sequences can be used to reconstruct the
bacterial phylogeny, and these studies indicate that bacteria diverged first from the archaeal/eukaryotic lineage.
Bacteria were also involved in the second great evolutionary divergence, that of the archaea and eukaryotes. Here,
eukaryotes resulted from the entering of ancient bacteria into endosymbiotic associations with the ancestors of eukaryotic
cells, which were themselves possibly related to the Archaea. This involved the engulfment by proto-eukaryotic cells
of alphaproteobacterial symbionts to form either mitochondria or hydrogenosomes, which are still found in all known
Eukarya (sometimes in highly reduced form, e.g. in ancient "amitochondrial" protozoa). Later on, some eukaryotes
that already contained mitochondria also engulfed cyanobacterial-like organisms. This led to the formation of chloroplasts
in algae and plants. There are also some algae that originated from even later endosymbiotic events. Here, eukaryotes engulfed eukaryotic algae that developed into "second-generation" plastids. This is known as secondary endosymbiosis.
Bacteria display a wide diversity of shapes and sizes, called morphologies. Bacterial cells are about one-tenth the
size of eukaryotic cells and are typically 0.5–5.0 micrometres in length. However, a few species are visible to the
unaided eye — for example, Thiomargarita namibiensis is up to half a millimetre long and Epulopiscium fishelsoni
reaches 0.7 mm. Among the smallest bacteria are members of the genus Mycoplasma, which measure only 0.3 micrometres,
as small as the largest viruses. Some bacteria may be even smaller, but these ultramicrobacteria are not well-studied.
Most bacterial species are either spherical, called cocci (sing. coccus, from Greek kókkos, grain, seed), or rod-shaped,
called bacilli (sing. bacillus, from Latin baculus, stick). Elongation is associated with swimming. Some bacteria,
called vibrio, are shaped like slightly curved rods or comma-shaped; others can be spiral-shaped, called spirilla,
or tightly coiled, called spirochaetes. A small number of species even have tetrahedral or cuboidal shapes. More
recently, some bacteria were discovered deep under Earth's crust that grow as branching filamentous types with a
star-shaped cross-section. The large surface area to volume ratio of this morphology may give these bacteria an advantage
in nutrient-poor environments. This wide variety of shapes is determined by the bacterial cell wall and cytoskeleton,
and is important because it can influence the ability of bacteria to acquire nutrients, attach to surfaces, swim
through liquids and escape predators. Many bacterial species exist simply as single cells, while others associate in characteristic patterns: Neisseria form diploids (pairs), Streptococcus form chains, and Staphylococcus group together in "bunch
of grapes" clusters. Bacteria can also be elongated to form filaments, for example the Actinobacteria. Filamentous
bacteria are often surrounded by a sheath that contains many individual cells. Certain types, such as species of
the genus Nocardia, even form complex, branched filaments, similar in appearance to fungal mycelia. Bacteria often
attach to surfaces and form dense aggregations called biofilms or bacterial mats. These films can range from a few
micrometres in thickness to half a metre in depth, and may contain multiple species of bacteria, protists and
archaea. Bacteria living in biofilms display a complex arrangement of cells and extracellular components, forming
secondary structures, such as microcolonies, through which there are networks of channels to enable better diffusion
of nutrients. In natural environments, such as soil or the surfaces of plants, the majority of bacteria are bound
to surfaces in biofilms. Biofilms are also important in medicine, as these structures are often present during chronic
bacterial infections or in infections of implanted medical devices, and bacteria protected within biofilms are much
harder to kill than individual isolated bacteria. Even more complex morphological changes are sometimes possible.
For example, when starved of amino acids, Myxobacteria detect surrounding cells in a process known as quorum sensing,
migrate toward each other, and aggregate to form fruiting bodies up to 500 micrometres long and containing approximately
100,000 bacterial cells. In these fruiting bodies, the bacteria perform separate tasks; this type of cooperation
is a simple type of multicellular organisation. About one in 10 cells migrate to the top of these fruiting
bodies and differentiate into a specialised dormant state called myxospores, which are more resistant to drying and
other adverse environmental conditions than are ordinary cells. The bacterial cell is surrounded by a cell membrane
(also known as a lipid, cytoplasmic or plasma membrane). This membrane encloses the contents of the cell and acts
as a barrier to hold nutrients, proteins and other essential components of the cytoplasm within the cell. As they
are prokaryotes, bacteria do not usually have membrane-bound organelles in their cytoplasm, and thus contain few
large intracellular structures. They lack a true nucleus, mitochondria, chloroplasts and the other organelles present
in eukaryotic cells. Bacteria were once seen as simple bags of cytoplasm, but structures such as the prokaryotic cytoskeleton, and the localization of proteins to specific sites within the cytoplasm, have since been discovered and give bacteria some complexity. These subcellular levels of organization have been called "bacterial hyperstructures".
Many important biochemical reactions, such as energy generation, use concentration gradients across membranes. The
general lack of internal membranes in bacteria means reactions such as electron transport occur across the cell membrane
between the cytoplasm and the periplasmic space. However, in many photosynthetic bacteria the plasma membrane is
highly folded and fills most of the cell with layers of light-gathering membrane. These light-gathering complexes
may even form lipid-enclosed structures called chlorosomes in green sulfur bacteria. Other proteins import nutrients
across the cell membrane, or expel undesired molecules from the cytoplasm. Bacteria do not have a membrane-bound
nucleus, and their genetic material is typically a single circular DNA chromosome located in the cytoplasm in an
irregularly shaped body called the nucleoid. The nucleoid contains the chromosome with its associated proteins and
RNA. The phylum Planctomycetes and candidate phylum Poribacteria may be exceptions to the general absence of internal
membranes in bacteria, because they appear to have a double membrane around their nucleoids and contain other membrane-bound
cellular structures. Like all living organisms, bacteria contain ribosomes, often grouped in chains called polyribosomes,
for the production of proteins, but the structure of the bacterial ribosome is different from that of eukaryotes
and Archaea. Bacterial ribosomes have a sedimentation rate of 70S (measured in Svedberg units): their subunits have
rates of 30S and 50S. Some antibiotics bind specifically to 70S ribosomes and inhibit bacterial protein synthesis.
Those antibiotics kill bacteria without affecting the larger 80S ribosomes of eukaryotic cells and without harming
the host. Some bacteria produce intracellular nutrient storage granules for later use, such as glycogen, polyphosphate,
sulfur or polyhydroxyalkanoates. Certain bacterial species, such as the photosynthetic Cyanobacteria, produce internal
gas vesicles, which they use to regulate their buoyancy – allowing them to move up or down into water layers with
different light intensities and nutrient levels. Intracellular membranes called chromatophores are also found in
membranes of phototrophic bacteria. Used primarily for photosynthesis, they contain bacteriochlorophyll pigments
and carotenoids. An early idea was that bacteria might contain membrane folds termed mesosomes, but these were later
shown to be artifacts produced by the chemicals used to prepare the cells for electron microscopy. Inclusions are
considered to be nonliving components of the cell that do not possess metabolic activity and are not bounded by membranes.
The most common inclusions are glycogen, lipid droplets, crystals, and pigments. Volutin granules are cytoplasmic
inclusions of complexed inorganic polyphosphate. These granules are called metachromatic granules because they display the metachromatic effect: they appear red or blue when stained with the blue dyes methylene blue or toluidine blue.
Gas vacuoles, which are freely permeable to gas, are protein-bound vesicles present in some species of Cyanobacteria.
They allow the bacteria to control their buoyancy. Microcompartments are widespread organelles that are bounded not by a lipid membrane but by a protein shell, which surrounds and encloses various enzymes. Carboxysomes are bacterial microcompartments
that contain enzymes involved in carbon fixation. Magnetosomes are bacterial microcompartments, present in magnetotactic
bacteria, that contain magnetic crystals. In most bacteria, a cell wall is present on the outside of the cell membrane.
The cell membrane and cell wall comprise the cell envelope. A common bacterial cell wall material is peptidoglycan
(called "murein" in older sources), which is made from polysaccharide chains cross-linked by peptides containing
D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi, which are made of cellulose
and chitin, respectively. The cell wall of bacteria is also distinct from that of Archaea, which do not contain peptidoglycan.
The cell wall is essential to the survival of many bacteria, and the antibiotic penicillin is able to kill bacteria
by inhibiting a step in the synthesis of peptidoglycan. Gram-positive bacteria possess a thick cell wall containing
many layers of peptidoglycan and teichoic acids. In contrast, gram-negative bacteria have a relatively thin cell
wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides
and lipoproteins. Lipopolysaccharides, also called endotoxins, are composed of polysaccharides and lipid A that is
responsible for much of the toxicity of gram-negative bacteria. Most bacteria have the gram-negative cell wall, and
only the Firmicutes and Actinobacteria have the alternative gram-positive arrangement. These two groups were previously
known as the low G+C and high G+C Gram-positive bacteria, respectively. These differences in structure can produce
differences in antibiotic susceptibility; for instance, vancomycin can kill only gram-positive bacteria and is ineffective
against gram-negative pathogens, such as Haemophilus influenzae or Pseudomonas aeruginosa. If the bacterial cell
wall is entirely removed, it is called a protoplast, whereas if it is partially removed, it is called a spheroplast.
β-Lactam antibiotics, such as penicillin, inhibit the formation of peptidoglycan cross-links in the bacterial cell
wall. The enzyme lysozyme, found in human tears, also digests the cell wall of bacteria and is the body's main defense
against eye infections. Acid-fast bacteria, such as Mycobacteria, are resistant to decolorization by acids during
staining procedures. The high mycolic acid content of Mycobacteria is responsible for the staining pattern of poor
absorption followed by high retention. The most common staining technique used to identify acid-fast bacteria is
the Ziehl-Neelsen stain or acid-fast stain, in which the acid-fast bacilli are stained bright-red and stand out clearly
against a blue background. L-form bacteria are strains of bacteria that lack cell walls. The main pathogenic bacterium in this class is Mycoplasma (not to be confused with Mycobacteria). Fimbriae (sometimes called "attachment pili")
are fine filaments of protein, usually 2–10 nanometres in diameter and up to several micrometres in length. They
are distributed over the surface of the cell, and resemble fine hairs when seen under the electron microscope. Fimbriae
are believed to be involved in attachment to solid surfaces or to other cells, and are essential for the virulence
of some bacterial pathogens. Pili (sing. pilus) are cellular appendages, slightly larger than fimbriae, that can
transfer genetic material between bacterial cells in a process called conjugation where they are called conjugation
pili or "sex pili" (see bacterial genetics, below). They can also generate movement where they are called type IV
pili (see movement, below). Certain genera of Gram-positive bacteria, such as Bacillus, Clostridium, Sporohalobacter,
Anaerobacter, and Heliobacterium, can form highly resistant, dormant structures called endospores. In almost all
cases, one endospore is formed and this is not a reproductive process, although Anaerobacter can make up to seven
endospores in a single cell. Endospores have a central core of cytoplasm containing DNA and ribosomes surrounded
by a cortex layer and protected by an impermeable and rigid coat. Dipicolinic acid is a chemical compound that makes up 5% to 15% of the dry weight of bacterial spores and is thought to be responsible for the heat resistance of the endospore.
Endospores show no detectable metabolism and can survive extreme physical and chemical stresses, such as high levels
of UV light, gamma radiation, detergents, disinfectants, heat, freezing, pressure, and desiccation. In this dormant
state, these organisms may remain viable for millions of years, and endospores even allow bacteria to survive exposure
to the vacuum and radiation in space. According to the scientist Steinn Sigurdsson, "There are viable bacterial spores
that have been found that are 40 million years old on Earth — and we know they're very hardened to radiation." Endospore-forming
bacteria can also cause disease: for example, anthrax can be contracted by the inhalation of Bacillus anthracis endospores,
and contamination of deep puncture wounds with Clostridium tetani endospores causes tetanus. Bacteria exhibit an
extremely wide variety of metabolic types. The distribution of metabolic traits within a group of bacteria has traditionally
been used to define their taxonomy, but these traits often do not correspond with modern genetic classifications.
Bacterial metabolism is classified into nutritional groups on the basis of three major criteria: the kind of energy
used for growth, the source of carbon, and the electron donors used for growth. An additional criterion for respiratory microorganisms is the electron acceptor used for aerobic or anaerobic respiration. Carbon metabolism in bacteria
is either heterotrophic, where organic carbon compounds are used as carbon sources, or autotrophic, meaning that
cellular carbon is obtained by fixing carbon dioxide. Heterotrophic bacteria include parasitic types. Typical autotrophic
bacteria are phototrophic cyanobacteria, green sulfur-bacteria and some purple bacteria, but also many chemolithotrophic
species, such as nitrifying or sulfur-oxidising bacteria. Energy metabolism of bacteria is either based on phototrophy,
the use of light through photosynthesis, or based on chemotrophy, the use of chemical substances for energy, which
are mostly oxidised at the expense of oxygen or alternative electron acceptors (aerobic/anaerobic respiration). Bacteria
are further divided into lithotrophs that use inorganic electron donors and organotrophs that use organic compounds
as electron donors. Chemotrophic organisms use the respective electron donors for energy conservation (by aerobic/anaerobic
respiration or fermentation) and biosynthetic reactions (e.g., carbon dioxide fixation), whereas phototrophic organisms
use them only for biosynthetic purposes. Respiratory organisms use chemical compounds as a source of energy by taking
electrons from the reduced substrate and transferring them to a terminal electron acceptor in a redox reaction. This
reaction releases energy that can be used to synthesise ATP and drive metabolism. In aerobic organisms, oxygen is
used as the electron acceptor. In anaerobic organisms other inorganic compounds, such as nitrate, sulfate or carbon
dioxide are used as electron acceptors. This leads to the ecologically important processes of denitrification, sulfate
reduction, and acetogenesis, respectively. These processes are also important in biological responses to pollution;
for example, sulfate-reducing bacteria are largely responsible for the production of the highly toxic forms of mercury
(methyl- and dimethylmercury) in the environment. Non-respiratory anaerobes use fermentation to generate energy and
reducing power, secreting metabolic by-products (such as ethanol in brewing) as waste. Facultative anaerobes can
switch between fermentation and different terminal electron acceptors depending on the environmental conditions in
which they find themselves. Lithotrophic bacteria can use inorganic compounds as a source of energy. Common inorganic
electron donors are hydrogen, carbon monoxide, ammonia (leading to nitrification), ferrous iron and other reduced
metal ions, and several reduced sulfur compounds. In unusual circumstances, the gas methane can be used by methanotrophic
bacteria as both a source of electrons and a substrate for carbon anabolism. In both aerobic phototrophy and chemolithotrophy,
oxygen is used as a terminal electron acceptor, whereas under anaerobic conditions inorganic compounds are used instead.
Most lithotrophic organisms are autotrophic, whereas organotrophic organisms are heterotrophic. Regardless of the
type of metabolic process they employ, the majority of bacteria are able to take in raw materials only in the form
of relatively small molecules, which enter the cell by diffusion or through molecular channels in cell membranes.
The Planctomycetes are the exception (as they are in possessing membranes around their nuclear material). It has
recently been shown that Gemmata obscuriglobus is able to take in large molecules via a process that in some ways
resembles endocytosis, the process used by eukaryotic cells to engulf external items. Unlike in multicellular organisms,
increases in cell size (cell growth) and reproduction by cell division are tightly linked in unicellular organisms.
Bacteria grow to a fixed size and then reproduce through binary fission, a form of asexual reproduction. Under optimal
conditions, bacteria can grow and divide extremely rapidly, and bacterial populations can double as quickly as every
9.8 minutes. In cell division, two identical clonal daughter cells are produced. Some bacteria, while still reproducing
asexually, form more complex reproductive structures that help disperse the newly formed daughter cells. Examples
include fruiting body formation by Myxobacteria and aerial hyphae formation by Streptomyces, or budding. Budding
involves a cell forming a protrusion that breaks away and produces a daughter cell. In the laboratory, bacteria are
usually grown using solid or liquid media. Solid growth media, such as agar plates, are used to isolate pure cultures
of a bacterial strain. However, liquid growth media are used when measurement of growth or large volumes of cells
are required. Growth in stirred liquid media occurs as an even cell suspension, making the cultures easy to divide
and transfer, although isolating single bacteria from liquid media is difficult. The use of selective media (media
with specific nutrients added or deficient, or with antibiotics added) can help identify specific organisms. Most
laboratory techniques for growing bacteria use high levels of nutrients to produce large amounts of cells cheaply
and quickly. However, in natural environments, nutrients are limited, meaning that bacteria cannot continue to reproduce
indefinitely. This nutrient limitation has led to the evolution of different growth strategies (see r/K selection theory).
Some organisms can grow extremely rapidly when nutrients become available, such as the formation of algal (and cyanobacterial)
blooms that often occur in lakes during the summer. Other organisms have adaptations to harsh environments, such
as the production of multiple antibiotics by Streptomyces that inhibit the growth of competing microorganisms. In
nature, many organisms live in communities (e.g., biofilms) that may allow for increased supply of nutrients and
protection from environmental stresses. These relationships can be essential for growth of a particular organism
or group of organisms (syntrophy). Bacterial growth follows four phases. When a population of bacteria first enters a high-nutrient environment that allows growth, the cells need to adapt to their new environment. The first phase
of growth is the lag phase, a period of slow growth when the cells are adapting to the high-nutrient environment
and preparing for fast growth. The lag phase has high biosynthesis rates, as proteins necessary for rapid growth
are produced. The second phase of growth is the log phase, also known as the logarithmic or exponential phase. The
log phase is marked by rapid exponential growth. The rate at which cells grow during this phase is known as the growth
rate (k), and the time it takes the cells to double is known as the generation time (g). During log phase, nutrients
are metabolised at maximum speed until one of the nutrients is depleted and starts limiting growth. The third phase
of growth is the stationary phase and is caused by depleted nutrients. The cells reduce their metabolic activity
and consume non-essential cellular proteins. The stationary phase is a transition from rapid growth to a stress response
state and there is increased expression of genes involved in DNA repair, antioxidant metabolism and nutrient transport.
The final phase is the death phase where the bacteria run out of nutrients and die. Most bacteria have a single circular
chromosome that can range in size from only 160,000 base pairs in the endosymbiotic bacterium Candidatus Carsonella ruddii, to 12,200,000 base pairs in the soil-dwelling bacterium Sorangium cellulosum. Spirochaetes of the genus Borrelia
are a notable exception to this arrangement, with bacteria such as Borrelia burgdorferi, the cause of Lyme disease,
containing a single linear chromosome. The genes in bacterial genomes are usually a single continuous stretch of
DNA and although several different types of introns do exist in bacteria, these are much rarer than in eukaryotes.
Bacteria, as asexual organisms, inherit identical copies of their parent's genes (i.e., they are clonal). However,
all bacteria can evolve by selection on changes to their genetic material (DNA) caused by genetic recombination or
mutations. Mutations come from errors made during the replication of DNA or from exposure to mutagens. Mutation rates
vary widely among different species of bacteria and even among different clones of a single species of bacteria.
Genetic changes in bacterial genomes come from either random mutation during replication or "stress-directed mutation",
where genes involved in a particular growth-limiting process have an increased mutation rate. Transduction of bacterial
genes by bacteriophage appears to be a consequence of infrequent errors during intracellular assembly of virus particles,
rather than a bacterial adaptation. Conjugation, in the much-studied E. coli system, is determined by plasmid genes, and is an adaptation for transferring copies of the plasmid from one bacterial host to another. Only seldom does a conjugative plasmid integrate into the host bacterial chromosome and subsequently transfer part of the host
bacterial DNA to another bacterium. Plasmid-mediated transfer of host bacterial DNA also appears to be an accidental
process rather than a bacterial adaptation. Transformation, unlike transduction or conjugation, depends on numerous
bacterial gene products that specifically interact to perform this complex process, and thus transformation is clearly
a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up and recombine donor DNA into its
own chromosome, it must first enter a special physiological state termed competence (see Natural competence). In
Bacillus subtilis, about 40 genes are required for the development of competence. The length of DNA transferred during
B. subtilis transformation can be between a third of a chromosome up to the whole chromosome. Transformation appears
to be common among bacterial species, and thus far at least 60 species are known to have the natural ability to become
competent for transformation. The development of competence in nature is usually associated with stressful environmental
conditions, and seems to be an adaptation for facilitating repair of DNA damage in recipient cells. In ordinary circumstances,
transduction, conjugation, and transformation involve transfer of DNA between individual bacteria of the same species,
but occasionally transfer may occur between individuals of different bacterial species and this may have significant
consequences, such as the transfer of antibiotic resistance. In such cases, gene acquisition from other bacteria
or the environment is called horizontal gene transfer and may be common under natural conditions. Gene transfer is
particularly important in antibiotic resistance as it allows the rapid transfer of resistance genes between different
pathogens. Bacteriophages are viruses that infect bacteria. Many types of bacteriophage exist, some simply infect
and lyse their host bacteria, while others insert into the bacterial chromosome. A bacteriophage can contain genes
that contribute to its host's phenotype: for example, in the evolution of Escherichia coli O157:H7 and Clostridium
botulinum, the toxin genes in an integrated phage converted a harmless ancestral bacterium into a lethal pathogen.
Bacteria resist phage infection through restriction modification systems that degrade foreign DNA, and a system that
uses CRISPR sequences to retain fragments of the genomes of phage that the bacteria have come into contact with in
the past, which allows them to block virus replication through a form of RNA interference. This CRISPR system provides
bacteria with acquired immunity to infection. Bacterial species differ in the number and arrangement of flagella on their surface; some have a single flagellum (monotrichous), some a flagellum at each end (amphitrichous), some clusters of flagella at the poles of the cell (lophotrichous), while others have flagella distributed over the entire surface of the cell (peritrichous). The bacterial flagellum is the best-understood motility structure in any organism and is made of about 20 proteins, with approximately another 30 proteins required for its regulation and assembly. The
flagellum is a rotating structure driven by a reversible motor at the base that uses the electrochemical gradient
across the membrane for power. This motor drives the motion of the filament, which acts as a propeller. Classification
seeks to describe the diversity of bacterial species by naming and grouping organisms based on similarities. Bacteria
can be classified on the basis of cell structure, cellular metabolism or on differences in cell components, such
as DNA, fatty acids, pigments, antigens and quinones. While these schemes allowed the identification and classification
of bacterial strains, it was unclear whether these differences represented variation between distinct species or
between strains of the same species. This uncertainty was due to the lack of distinctive structures in most bacteria,
as well as lateral gene transfer between unrelated species. Due to lateral gene transfer, some closely related bacteria
can have very different morphologies and metabolisms. To overcome this uncertainty, modern bacterial classification
emphasizes molecular systematics, using genetic techniques such as guanine cytosine ratio determination, genome-genome
hybridization, as well as sequencing genes that have not undergone extensive lateral gene transfer, such as the rRNA
gene. Classification of bacteria is determined by publication in the International Journal of Systematic Bacteriology,
and Bergey's Manual of Systematic Bacteriology. The International Committee on Systematic Bacteriology (ICSB) maintains
international rules for the naming of bacteria and taxonomic categories and for the ranking of them in the International
Code of Nomenclature of Bacteria. The term "bacteria" was traditionally applied to all microscopic, single-cell prokaryotes.
However, molecular systematics showed prokaryotic life to consist of two separate domains, originally called Eubacteria
and Archaebacteria, but now called Bacteria and Archaea that evolved independently from an ancient common ancestor.
The archaea and eukaryotes are more closely related to each other than either is to the bacteria. These two domains,
along with Eukarya, are the basis of the three-domain system, which is currently the most widely used classification
system in microbiology. However, due to the relatively recent introduction of molecular systematics and a rapid
increase in the number of genome sequences that are available, bacterial classification remains a changing and expanding
field. For example, a few biologists argue that the Archaea and Eukaryotes evolved from Gram-positive bacteria. The
Gram stain, developed in 1884 by Hans Christian Gram, characterises bacteria based on the structural characteristics
of their cell walls. The thick layers of peptidoglycan in the "Gram-positive" cell wall stain purple, while the thin
"Gram-negative" cell wall appears pink. By combining morphology and Gram-staining, most bacteria can be classified
as belonging to one of four groups (Gram-positive cocci, Gram-positive bacilli, Gram-negative cocci and Gram-negative
bacilli). Some organisms are best identified by stains other than the Gram stain, particularly mycobacteria or Nocardia,
which show acid-fastness on Ziehl–Neelsen or similar stains. Other organisms may need to be identified by their growth
in special media, or by other techniques, such as serology. As with bacterial classification, identification of bacteria
is increasingly using molecular methods. Diagnostics using DNA-based tools, such as polymerase chain reaction, are
increasingly popular due to their specificity and speed, compared to culture-based methods. These methods also allow
the detection and identification of "viable but nonculturable" cells that are metabolically active but non-dividing.
However, even using these improved methods, the total number of bacterial species is not known and cannot even be
estimated with any certainty. Following present classification, there are a little fewer than 9,300 known species of prokaryotes, which includes bacteria and archaea; but attempts to estimate the true extent of bacterial diversity have ranged from 10⁷ to 10⁹ total species – and even these diverse estimates may be off by many orders of magnitude.
Some species of bacteria kill and then consume other microorganisms; these species are called predatory bacteria.
These include organisms such as Myxococcus xanthus, which forms swarms of cells that kill and digest any bacteria
they encounter. Other bacterial predators either attach to their prey in order to digest them and absorb nutrients,
such as Vampirovibrio chlorellavorus, or invade another cell and multiply inside the cytosol, such as Daptobacter.
These predatory bacteria are thought to have evolved from saprophages that consumed dead microorganisms, through
adaptations that allowed them to entrap and kill other organisms. Certain bacteria form close spatial associations
that are essential for their survival. One such mutualistic association, called interspecies hydrogen transfer, occurs
between clusters of anaerobic bacteria that consume organic acids, such as butyric acid or propionic acid, and produce
hydrogen, and methanogenic Archaea that consume hydrogen. The bacteria in this association are unable to consume
the organic acids as this reaction produces hydrogen that accumulates in their surroundings. Only the intimate association
with the hydrogen-consuming Archaea keeps the hydrogen concentration low enough to allow the bacteria to grow. In
soil, microorganisms that reside in the rhizosphere (a zone that includes the root surface and the soil that adheres
to the root after gentle shaking) carry out nitrogen fixation, converting nitrogen gas to nitrogenous compounds.
This serves to provide an easily absorbable form of nitrogen for many plants, which cannot fix nitrogen themselves.
Many other bacteria are found as symbionts in humans and other organisms. For example, the presence of over 1,000
bacterial species in the normal human gut flora of the intestines can contribute to gut immunity, synthesise vitamins,
such as folic acid, vitamin K and biotin, convert sugars to lactic acid (see Lactobacillus), and ferment complex indigestible carbohydrates. The presence of this gut flora also inhibits the growth of potentially pathogenic
bacteria (usually through competitive exclusion) and these beneficial bacteria are consequently sold as probiotic
dietary supplements. If bacteria form a parasitic association with other organisms, they are classed as pathogens.
Pathogenic bacteria are a major cause of human death and disease and cause infections such as tetanus, typhoid fever,
diphtheria, syphilis, cholera, foodborne illness, leprosy and tuberculosis. A pathogenic cause for a known medical
disease may only be discovered many years after, as was the case with Helicobacter pylori and peptic ulcer disease.
Bacterial diseases are also important in agriculture, with bacteria causing leaf spot, fire blight and wilts in plants,
as well as Johne's disease, mastitis, salmonellosis and anthrax in farm animals. Each species of pathogen has a characteristic
spectrum of interactions with its human hosts. Some organisms, such as Staphylococcus or Streptococcus, can cause
skin infections, pneumonia, meningitis and even overwhelming sepsis, a systemic inflammatory response producing shock,
massive vasodilation and death. Yet these organisms are also part of the normal human flora and usually exist on
the skin or in the nose without causing any disease at all. Other organisms invariably cause disease in humans, such
as the Rickettsia, which are obligate intracellular parasites able to grow and reproduce only within the cells of
other organisms. One species of Rickettsia causes typhus, while another causes Rocky Mountain spotted fever. Chlamydia,
another phylum of obligate intracellular parasites, contains species that can cause pneumonia, or urinary tract infection
and may be involved in coronary heart disease. Finally, some species, such as Pseudomonas aeruginosa, Burkholderia
cenocepacia, and Mycobacterium avium, are opportunistic pathogens and cause disease mainly in people suffering from
immunosuppression or cystic fibrosis. Bacterial infections may be treated with antibiotics, which are classified
as bactericidal if they kill bacteria, or bacteriostatic if they merely prevent bacterial growth. There are many types
of antibiotics, and each class inhibits a process that is different in the pathogen from that found in the host. Examples
of how antibiotics produce selective toxicity are chloramphenicol and puromycin, which inhibit the bacterial
ribosome but not the structurally different eukaryotic ribosome. Antibiotics are used both in treating human disease
and in intensive farming to promote animal growth, where they may be contributing to the rapid development of antibiotic
resistance in bacterial populations. Infections can be prevented by antiseptic measures such as sterilizing the skin
prior to piercing it with the needle of a syringe, and by proper care of indwelling catheters. Surgical and dental
instruments are also sterilized to prevent contamination by bacteria. Disinfectants such as bleach are used to kill
bacteria or other pathogens on surfaces to prevent contamination and further reduce the risk of infection. The ability
of bacteria to degrade a variety of organic compounds is remarkable and has been used in waste processing and bioremediation.
Bacteria capable of digesting the hydrocarbons in petroleum are often used to clean up oil spills. Fertilizer was
added to some of the beaches in Prince William Sound in an attempt to promote the growth of these naturally occurring
bacteria after the 1989 Exxon Valdez oil spill. These efforts were effective on beaches that were not too thickly
covered in oil. Bacteria are also used for the bioremediation of industrial toxic wastes. In the chemical industry,
bacteria are most important in the production of enantiomerically pure chemicals for use as pharmaceuticals or agrichemicals.
Because of their ability to quickly grow and the relative ease with which they can be manipulated, bacteria are the
workhorses for the fields of molecular biology, genetics and biochemistry. By making mutations in bacterial DNA and
examining the resulting phenotypes, scientists can determine the function of genes, enzymes and metabolic pathways
in bacteria, then apply this knowledge to more complex organisms. This aim of understanding the biochemistry of a
cell reaches its most complex expression in the synthesis of huge amounts of enzyme kinetic and gene expression data
into mathematical models of entire organisms. This is achievable in some well-studied bacteria, with models of Escherichia
coli metabolism now being produced and tested. This understanding of bacterial metabolism and genetics allows the
use of biotechnology to bioengineer bacteria for the production of therapeutic proteins, such as insulin, growth
factors, or antibodies. Bacteria were first observed by the Dutch microscopist Antonie van Leeuwenhoek in 1676, using
a single-lens microscope of his own design. He then published his observations in a series of letters to the Royal
Society of London. Bacteria were Leeuwenhoek's most remarkable microscopic discovery. They were just at the limit
of what his simple lenses could make out and, in one of the most striking hiatuses in the history of science, no
one else would see them again for over a century. Only then were his by-then-largely-forgotten observations of bacteria
— as opposed to his famous "animalcules" (spermatozoa) — taken seriously. Though it was known in the nineteenth century
that bacteria are the cause of many diseases, no effective antibacterial treatments were available. In 1910, Paul
Ehrlich developed the first antibiotic, by changing dyes that selectively stained Treponema pallidum — the spirochaete
that causes syphilis — into compounds that selectively killed the pathogen. Ehrlich had been awarded a 1908 Nobel
Prize for his work on immunology, and pioneered the use of stains to detect and identify bacteria, with his work
being the basis of the Gram stain and the Ziehl–Neelsen stain.
When the board has no embedded components it is more correctly called a printed wiring board (PWB) or etched wiring board.
However, the term printed wiring board has fallen into disuse. A PCB populated with electronic components is called
a printed circuit assembly (PCA), printed circuit board assembly or PCB assembly (PCBA). The IPC preferred term for
assembled boards is circuit card assembly (CCA), and for assembled backplanes it is backplane assemblies. The term
PCB is used informally both for bare and assembled boards. Initially PCBs were designed manually by creating a photomask
on a clear mylar sheet, usually at two or four times the true size. Starting from the schematic diagram the component
pin pads were laid out on the mylar and then traces were routed to connect the pads. Rub-on dry transfers of common
component footprints increased efficiency. Traces were made with self-adhesive tape. Pre-printed non-reproducing
grids on the mylar assisted in layout. To fabricate the board, the finished photomask was photolithographically reproduced
onto a photoresist coating on the blank copper-clad boards. Panelization is a procedure whereby a number of PCBs
are grouped for manufacturing onto a larger board - the panel. Usually a panel consists of a single design but sometimes
multiple designs are mixed on a single panel. There are two types of panels: assembly panels - often called arrays
- and bare board manufacturing panels. The assemblers often mount components on panels rather than single PCBs because
this is efficient. The bare board manufacturers always use panels, not only for efficiency, but because of the requirements
of the plating process. Thus a manufacturing panel can consist of a grouping of individual PCBs or of arrays, depending
on what must be delivered. The panel is eventually broken apart into individual PCBs; this is called depaneling.
Separating the individual PCBs is frequently aided by drilling or routing perforations along the boundaries of the
individual circuits, much like a sheet of postage stamps. Another method, which takes less space, is to cut V-shaped
grooves across the full dimension of the panel. The individual PCBs can then be broken apart along this line of weakness.
Today depaneling is often done by lasers which cut the board with no contact. Laser panelization reduces stress on
the fragile circuits. Subtractive methods remove copper from an entirely copper-coated board to leave only the desired
copper pattern. In additive methods the pattern is electroplated onto a bare substrate using a complex process. The
advantage of the additive method is that less material is needed and less waste is produced. In the full additive
process the bare laminate is covered with a photosensitive film which is imaged (exposed to light through a mask
and then developed, which removes the unexposed film). The exposed areas are sensitized in a chemical bath, usually
containing palladium, similar to that used for through-hole plating, which makes the exposed areas capable of bonding
metal ions. The laminate is then plated with copper in the sensitized areas. When the mask is stripped, the PCB is
finished. Semi-additive is the most common process: The unpatterned board has a thin layer of copper already on it.
A reverse mask is then applied. (Unlike a subtractive process mask, this mask exposes those parts of the substrate
that will eventually become the traces.) Additional copper is then plated onto the board in the unmasked areas; copper
may be plated to any desired weight. Tin-lead or other surface platings are then applied. The mask is stripped away
and a brief etching step removes the now-exposed bare original copper laminate from the board, isolating the individual
traces. Some single-sided boards which have plated-through holes are made in this way. General Electric made consumer
radio sets in the late 1960s using additive boards. The simplest method, used for small-scale production and often
by hobbyists, is immersion etching, in which the board is submerged in etching solution such as ferric chloride.
Compared with methods used for mass production, the etching time is long. Heat and agitation can be applied to the
bath to speed the etching rate. In bubble etching, air is passed through the etchant bath to agitate the solution
and speed up etching. Splash etching uses a motor-driven paddle to splash boards with etchant; the process has become
commercially obsolete since it is not as fast as spray etching. In spray etching, the etchant solution is distributed
over the boards by nozzles, and recirculated by pumps. Adjustment of the nozzle pattern, flow rate, temperature,
and etchant composition gives predictable control of etching rates and high production rates. Multi-layer printed
circuit boards have trace layers inside the board. This is achieved by laminating a stack of materials in a press
by applying pressure and heat for a period of time. This results in an inseparable one-piece product. For example,
a four-layer PCB can be fabricated by starting from a two-sided copper-clad laminate, etching the circuitry on both
sides, then laminating pre-preg and copper foil to the top and bottom. It is then drilled, plated, and etched again
to get traces on top and bottom layers. Holes through a PCB are typically drilled with small-diameter drill bits
made of solid coated tungsten carbide. Coated tungsten carbide is recommended since many board materials are very
abrasive and drilling must be at high RPM and high feed rate to be cost effective. Drill bits must also remain sharp so as
not to mar or tear the traces. Drilling with high-speed steel is simply not feasible since the drill bits will dull
quickly and thus tear the copper and ruin the boards. The drilling is performed by automated drilling machines with
placement controlled by a drill tape or drill file. These computer-generated files are also called numerically controlled
drill (NCD) files or "Excellon files". The drill file describes the location and size of each drilled hole. The hole
walls for boards with two or more layers can be made conductive and then electroplated with copper to form plated-through
holes. These holes electrically connect the conducting layers of the PCB. For multi-layer boards, those with three
layers or more, drilling typically produces a smear of the high temperature decomposition products of bonding agent
in the laminate system. Before the holes can be plated through, this smear must be removed by a chemical de-smear
process, or by plasma-etch. The de-smear process ensures that a good connection is made to the copper layers when
the hole is plated through. On high reliability boards a process called etch-back is performed chemically with a
potassium permanganate based etchant or plasma. The etch-back removes resin and the glass fibers so that the copper
layers extend into the hole and as the hole is plated become integral with the deposited copper. Matte solder is
usually fused to provide a better bonding surface or stripped to bare copper. Treatments, such as benzimidazolethiol,
prevent surface oxidation of bare copper. The places to which components will be mounted are typically plated, because
untreated bare copper oxidizes quickly, and therefore is not readily solderable. Traditionally, any exposed copper
was coated with solder by hot air solder levelling (HASL). The HASL finish prevents oxidation from the underlying
copper, thereby guaranteeing a solderable surface. This solder was a tin-lead alloy, however new solder compounds
are now used to achieve compliance with the RoHS directive in the EU and US, which restricts the use of lead. One
of these lead-free compounds is SN100CL, made up of 99.3% tin, 0.7% copper, 0.05% nickel, and a nominal 60 ppm of
germanium. Other platings used are OSP (organic surface protectant), immersion silver (IAg), immersion tin, electroless
nickel with immersion gold coating (ENIG), electroless nickel electroless palladium immersion gold (ENEPIG) and direct
gold plating (over nickel). Edge connectors, placed along one edge of some boards, are often nickel plated then gold
plated. Another coating consideration is the rapid diffusion of coating metal into tin solder. Tin forms intermetallics
such as Cu6Sn5 and Ag3Sn that dissolve into the tin liquidus or solidus (at about 50 °C), stripping the surface coating or leaving
voids. Electrochemical migration (ECM) is the growth of conductive metal filaments on or in a printed circuit board
(PCB) under the influence of a DC voltage bias. Silver, zinc, and aluminum are known to grow whiskers under the influence
of an electric field. Silver also grows conducting surface paths in the presence of halide and other ions, making
it a poor choice for electronics use. Tin will grow "whiskers" due to tension in the plated surface. Tin-lead or
solder plating also grows whiskers, reduced only in proportion to the percentage of tin replaced. Reflowing to melt the solder or tin plate
to relieve surface stress lowers whisker incidence. Another coating issue is tin pest, the transformation of tin
to a powdery allotrope at low temperature. Areas that should not be soldered may be covered with solder resist (solder
mask). One of the most common solder resists used today is called "LPI" (liquid photoimageable solder mask). A photo-sensitive
coating is applied to the surface of the PWB, then exposed to light through the solder mask image film, and finally
developed where the unexposed areas are washed away. Dry film solder mask is similar to the dry film used to image
the PWB for plating or etching. After being laminated to the PWB surface it is imaged and developed as with LPI. Screen-printing
epoxy ink was once common, but is no longer widely used because of its low accuracy and resolution. Solder resist
also provides protection from the environment. Unpopulated boards are usually bare-board tested for "shorts" and
"opens". A short is a connection between two points that should not be connected. An open is a missing connection
between points that should be connected. For high-volume production a fixture or a rigid needle adapter is used to
make contact with copper lands on the board. Building the adapter is a significant fixed cost and is only economical
for high-volume or high-value production. For small or medium volume production flying probe testers are used where
test probes are moved over the board by an XY drive to make contact with the copper lands. The CAM system instructs
the electrical tester to apply a voltage to each contact point as required and to check that this voltage appears
on the appropriate contact points and only on these. Often, through-hole and surface-mount construction must be combined
in a single assembly because some required components are available only in surface-mount packages, while others
are available only in through-hole packages. Another reason to use both methods is that through-hole mounting can
provide needed strength for components likely to endure physical stress, while components that are expected to go
untouched will take up less space using surface-mount techniques. For further comparison, see the SMT page. In boundary
scan testing, test circuits integrated into various ICs on the board form temporary connections between the PCB traces
to test that the ICs are mounted correctly. Boundary scan testing requires that all the ICs to be tested use a standard
test configuration procedure, the most common one being the Joint Test Action Group (JTAG) standard. The JTAG test
architecture provides a means to test interconnects between integrated circuits on a board without using physical
test probes. JTAG tool vendors provide various types of stimulus and sophisticated algorithms, not only to detect
the failing nets, but also to isolate the faults to specific nets, devices, and pins. PCBs intended for extreme environments
often have a conformal coating, which is applied by dipping or spraying after the components have been soldered.
The coat prevents corrosion and leakage currents or shorting due to condensation. The earliest conformal coats were
wax; modern conformal coats are usually dips of dilute solutions of silicone rubber, polyurethane, acrylic, or epoxy.
Another technique for applying a conformal coating is for plastic to be sputtered onto the PCB in a vacuum chamber.
The chief disadvantage of conformal coatings is that servicing of the board is rendered extremely difficult. Many
assembled PCBs are static sensitive, and therefore must be placed in antistatic bags during transport. When handling
these boards, the user must be grounded (earthed). Improper handling techniques might transmit an accumulated static
charge through the board, damaging or destroying components. Even bare boards are sometimes static sensitive. Traces
have become so fine that it is quite possible to blow an etch off the board (or change its characteristics) with a
static charge. This is especially true on non-traditional PCBs such as MCMs and microwave PCBs. The first PCBs used
through-hole technology, mounting electronic components by leads inserted through holes on one side of the board
and soldered onto copper traces on the other side. Boards may be single-sided, with an unplated component side, or
more compact double-sided boards, with components soldered on both sides. Horizontal installation of through-hole
parts with two axial leads (such as resistors, capacitors, and diodes) is done by bending the leads 90 degrees in
the same direction, inserting the part in the board (often bending leads located on the back of the board in opposite
directions to improve the part's mechanical strength), soldering the leads, and trimming off the ends. Leads may
be soldered either manually or by a wave soldering machine. Through-hole manufacture adds to board cost by requiring
many holes to be drilled accurately, and limits the available routing area for signal traces on layers immediately
below the top layer on multi-layer boards since the holes must pass through all layers to the opposite side. Once
surface-mounting came into use, small-sized SMD components were used where possible, with through-hole mounting only
of components unsuitably large for surface-mounting due to power requirements or mechanical limitations, or subject
to mechanical stress which might damage the PCB. Surface-mount technology emerged in the 1960s, gained momentum in
the early 1980s and became widely used by the mid-1990s. Components were mechanically redesigned to have small metal
tabs or end caps that could be soldered directly onto the PCB surface, instead of wire leads to pass through holes.
Components became much smaller and component placement on both sides of the board became more common than with through-hole
mounting, allowing much smaller PCB assemblies with much higher circuit densities. Surface mounting lends itself
well to a high degree of automation, reducing labor costs and greatly increasing production rates. Components can
be supplied mounted on carrier tapes. Surface mount components can be about one-quarter to one-tenth of the size
and weight of through-hole components, and passive components much cheaper; prices of semiconductor surface mount
devices (SMDs) are determined more by the chip itself than the package, with little price advantage over larger packages.
Some wire-ended components, such as 1N4148 small-signal switch diodes, are actually significantly cheaper than SMD
equivalents. Each trace consists of a flat, narrow part of the copper foil that remains after etching. The resistance,
determined by width and thickness, of the traces must be sufficiently low for the current the conductor will carry.
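As a rough illustration of how width and thickness govern a trace, the sketch below computes the DC resistance of a copper trace from R = ρL/(w·t) and estimates an allowable current using the older IPC-2221 approximation I = k·ΔT^0.44·A^0.725 (A in square mils, k ≈ 0.048 for external layers). The trace dimensions are arbitrary example values, not figures from this article.

```python
# Rough sketch: DC resistance and an approximate current limit for a PCB trace.
# Dimensions below are arbitrary example values, not taken from the article.

RHO_CU = 1.68e-8  # resistivity of copper, ohm*m, at room temperature

def trace_resistance(length_m, width_m, thickness_m):
    """R = rho * L / (w * t): resistance falls as the trace gets wider or thicker."""
    return RHO_CU * length_m / (width_m * thickness_m)

def ipc2221_max_current(width_m, thickness_m, delta_t_c=10.0, k=0.048):
    """Older IPC-2221 estimate for external traces: I = k * dT^0.44 * A^0.725,
    with the cross-sectional area A expressed in square mils (1 mil = 25.4 um)."""
    area_sq_mils = (width_m / 25.4e-6) * (thickness_m / 25.4e-6)
    return k * delta_t_c**0.44 * area_sq_mils**0.725

# Example: a 100 mm long, 0.25 mm wide trace in 1 oz copper (~35 um thick).
r = trace_resistance(0.100, 0.25e-3, 35e-6)
i_max = ipc2221_max_current(0.25e-3, 35e-6)
print(f"resistance ~ {r*1000:.0f} mOhm, allowable current ~ {i_max:.1f} A")
```

The IPC-2221 expression is a legacy curve fit; the IPC-2152 standard mentioned later in this article supersedes it with measured charts, but the sketch shows why power traces are made wider than signal traces.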
Power and ground traces may need to be wider than signal traces. In a multi-layer board one entire layer may be mostly
solid copper to act as a ground plane for shielding and power return. For microwave circuits, transmission lines
can be laid out in the form of stripline and microstrip with carefully controlled dimensions to assure a consistent
impedance. In radio-frequency and fast switching circuits the inductance and capacitance of the printed circuit board
conductors become significant circuit elements, usually undesired; but they can be used as a deliberate part of the
circuit design, obviating the need for additional discrete components. The cloth or fiber material used, resin material,
and the cloth to resin ratio determine the laminate's type designation (FR-4, CEM-1, G-10, etc.) and therefore the
characteristics of the laminate produced. Important characteristics are the level to which the laminate is fire retardant,
the dielectric constant (er), the loss factor (tδ), the tensile strength, the shear strength, the glass transition
temperature (Tg), and the Z-axis expansion coefficient (how much the thickness changes with temperature). There are
quite a few different dielectrics that can be chosen to provide different insulating values depending on the requirements
of the circuit. Some of these dielectrics are polytetrafluoroethylene (Teflon), FR-4, FR-1, CEM-1 or CEM-3. Well
known pre-preg materials used in the PCB industry are FR-2 (phenolic cotton paper), FR-3 (cotton paper and epoxy),
FR-4 (woven glass and epoxy), FR-5 (woven glass and epoxy), FR-6 (matte glass and polyester), G-10 (woven glass and
epoxy), CEM-1 (cotton paper and epoxy), CEM-2 (cotton paper and epoxy), CEM-3 (non-woven glass and epoxy), CEM-4
(woven glass and epoxy), CEM-5 (woven glass and polyester). Thermal expansion is an important consideration especially
with ball grid array (BGA) and naked die technologies, and glass fiber offers the best dimensional stability. The
reinforcement type defines two major classes of materials - woven and non-woven. Woven reinforcements are cheaper,
but the high dielectric constant of glass may not be favorable for many higher-frequency applications. The spatially
nonhomogeneous structure also introduces local variations in electrical parameters, due to different resin/glass
ratio at different areas of the weave pattern. Nonwoven reinforcements, or materials with low or no reinforcement,
are more expensive but more suitable for some RF/analog applications. At the glass transition temperature the resin
in the composite softens and significantly increases thermal expansion; exceeding Tg then exerts mechanical overload
on the board components - e.g. the joints and the vias. Below Tg the thermal expansion of the resin roughly matches
that of copper and glass; above it, expansion becomes significantly higher. As the reinforcement and copper confine the board along the
plane, virtually all volume expansion projects to the thickness and stresses the plated-through holes. Repeated soldering
or other exposure to higher temperatures can cause failure of the plating, especially with thicker boards; thick
boards therefore require a high-Tg matrix. Moisture absorption occurs when the material is exposed to high humidity
or water. Both the resin and the reinforcement may absorb water; water may be also soaked by capillary forces through
voids in the materials and along the reinforcement. Epoxies of the FR-4 materials aren't too susceptible, with absorption
of only 0.15%. Teflon has a very low absorption of 0.01%. Polyimides and cyanate esters, on the other hand, suffer
from high water absorption. Absorbed water can lead to significant degradation of key parameters; it impairs tracking
resistance, breakdown voltage, and dielectric parameters. The relative dielectric constant of water is about 73, compared
to about 4 for common circuit board materials. Absorbed moisture can also vaporize on heating and cause cracking and
delamination, the same effect responsible for "popcorning" damage on wet packaging of electronic parts. Careful baking
of the substrates may be required. The printed circuit board industry defines heavy copper as layers exceeding three
ounces of copper, or approximately 0.0042 inches (4.2 mils, 105 μm) thick. PCB designers and fabricators often use
heavy copper when designing and manufacturing circuit boards in order to increase current-carrying capacity as well
as resistance to thermal strains. Heavy copper plated vias transfer heat to external heat sinks. IPC 2152 is a standard
for determining current-carrying capacity of printed circuit board traces. Since it was quite easy to stack interconnections
(wires) inside the embedding matrix, the Multiwire approach allowed designers to forget completely about the routing of wires
(usually a time-consuming operation of PCB design): anywhere the designer needs a connection, the machine will draw
a wire in a straight line from one location/pin to another. This led to very short design times (no complex algorithms
to use even for high density designs) as well as reduced crosstalk (which is worse when wires run parallel to each
other—which almost never happens in Multiwire), though the cost is too high to compete with cheaper PCB technologies
when large quantities are needed. Cordwood construction can save significant space and was often used with wire-ended
components in applications where space was at a premium (such as missile guidance and telemetry systems) and in high-speed
computers, where short traces were important. In cordwood construction, axial-leaded components were mounted between
two parallel planes. The components were either soldered together with jumper wire, or they were connected to other
components by thin nickel ribbon welded at right angles onto the component leads. To avoid shorting together different
interconnection layers, thin insulating cards were placed between them. Perforations or holes in the cards allowed
component leads to project through to the next interconnection layer. One disadvantage of this system was that special
nickel-leaded components had to be used to allow the interconnecting welds to be made. Differential thermal expansion
of the component could put pressure on the leads of the components and the PCB traces and cause physical damage (as
was seen in several modules on the Apollo program). Additionally, components located in the interior are difficult
to replace. Some versions of cordwood construction used soldered single-sided PCBs as the interconnection method
(as pictured), allowing the use of normal-leaded components. Development of the methods used in modern printed circuit
boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors
laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating
conductors onto linen paper in 1904. Arthur Berry in 1913 patented a print-and-etch method in Britain, and in the
United States Max Schoop obtained a patent to flame-spray metal onto a board through a patterned mask. Charles Ducas
in 1927 patented a method of electroplating circuit patterns. The Austrian engineer Paul Eisler invented the printed
circuit as part of a radio set while working in England around 1936. Around 1943 the USA began to use the technology
on a large scale to make proximity fuses for use in World War II. After the war, in 1948, the USA released the invention
for commercial use. Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after
the Auto-Sembly process was developed by the United States Army. At around the same time in Britain work along similar
lines was carried out by Geoffrey Dummer, then at the RRDE. During World War II, the development of the anti-aircraft
proximity fuse required an electronic circuit that could withstand being fired from a gun, and could be produced
in quantity. The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate
would be screenprinted with metallic paint for conductors and carbon material for resistors, with ceramic disc capacitors
and subminiature vacuum tubes soldered in place. The technique proved viable, and the resulting patent on the process,
which was classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical
and Electronics Engineers (IEEE) awarded Mr. Harry W. Rubinstein, the former head of Globe Union's Centralab Division,
its coveted Cledo Brunetti Award for early key contributions to the development of printed components and conductors
on a common insulating substrate. As well, Mr. Rubinstein was honored in 1984 by his alma mater, the University of
Wisconsin-Madison, for his innovations in the technology of printed electronic circuits and the fabrication of capacitors.
Originally, every electronic component had wire leads, and the PCB had holes drilled for each wire of each component.
The components' leads were then passed through the holes and soldered to the PCB trace. This method of assembly is
called through-hole construction. In 1949, Moe Abramson and Stanislaus F. Danko of the United States Army Signal
Corps developed the Auto-Sembly process in which component leads were inserted into a copper foil interconnection
pattern and dip soldered. The patent they obtained in 1956 was assigned to the U.S. Army. With the development of
board lamination and etching techniques, this concept evolved into the standard printed circuit board fabrication
process in use today. Soldering could be done automatically by passing the board over a ripple, or wave, of molten
solder in a wave-soldering machine. However, the wires and holes are wasteful since drilling holes is expensive and
the protruding wires are merely cut off.
Greek colonies and communities have been historically established on the shores of the Mediterranean Sea and Black Sea, but
the Greek people have always been centered around the Aegean and Ionian seas, where the Greek language has been spoken
since the Bronze Age. Until the early 20th century, Greeks were distributed between the Greek peninsula, the western
coast of Asia Minor, the Black Sea coast, Cappadocia in central Anatolia, Egypt, the Balkans, Cyprus, and Constantinople.
Many of these regions coincided to a large extent with the borders of the Byzantine Empire of the late 11th century
and the Eastern Mediterranean areas of ancient Greek colonization. The cultural centers of the Greeks have included
Athens, Thessalonica, Alexandria, Smyrna, and Constantinople at various periods. The evolution of Proto-Greek should
be considered within the context of an early Paleo-Balkan sprachbund that makes it difficult to delineate exact boundaries
between individual languages. The characteristically Greek representation of word-initial laryngeals by prothetic
vowels is shared, for one, by the Armenian language, which also seems to share some other phonological and morphological
peculiarities of Greek; this has led some linguists to propose a hypothetical closer relationship between Greek and
Armenian, although evidence remains scant. Around 1200 BC, the Dorians, another Greek-speaking people, migrated from
Epirus. Traditionally, historians have believed that the Dorian invasion caused the collapse of the Mycenaean civilization,
but it is likely the main attack was made by seafaring raiders (sea peoples) who sailed into the eastern Mediterranean
around 1180 BC. The Dorian invasion was followed by a poorly attested period of migrations, appropriately called
the Greek Dark Ages, but by 800 BC the landscape of Archaic and Classical Greece was discernible. The Greeks of classical
antiquity idealized their Mycenaean ancestors and the Mycenaean period as a glorious era of heroes, closeness of
the gods and material wealth. The Homeric Epics (i.e. the Iliad and Odyssey) were generally accepted as
part of the Greek past and it was not until the 19th century that scholars began to question Homer's historicity.
As part of the Mycenaean heritage that survived, the names of the gods and goddesses of Mycenaean Greece (e.g. Zeus,
Poseidon and Hades) became major figures of the Olympian Pantheon of later antiquity. The ethnogenesis of the Greek
nation is linked to the development of Pan-Hellenism in the 8th century BC. According to some scholars, the foundational
event was the Olympic Games in 776 BC, when the idea of a common Hellenism among the Greek tribes was first translated
into a shared cultural experience and Hellenism was primarily a matter of common culture. The works of Homer (i.e.
Iliad and Odyssey) and Hesiod (i.e. Theogony) were written in the 8th century BC, becoming the basis of the national
religion, ethos, history and mythology. The Oracle of Apollo at Delphi was established in this period. The classical
period of Greek civilization covers a time spanning from the early 5th century BC to the death of Alexander the Great,
in 323 BC (some authors prefer to split this period into 'Classical', from the end of the Persian wars to the end
of the Peloponnesian War, and 'Fourth Century', up to the death of Alexander). It is so named because it set the
standards by which Greek civilization would be judged in later eras. The Classical period is also described as the
"Golden Age" of Greek civilization, and its art, philosophy, architecture and literature would be instrumental in
the formation and development of Western culture. In any case, Alexander's toppling of the Achaemenid Empire, after
his victories at the battles of the Granicus, Issus and Gaugamela, and his advance as far as modern-day Pakistan
and Tajikistan, provided an important outlet for Greek culture, via the creation of colonies and trade routes along
the way. While the Alexandrian empire did not survive its creator's death intact, the cultural implications of the
spread of Hellenism across much of the Middle East and Asia were to prove long lived as Greek became the lingua franca,
a position it retained even in Roman times. Many Greeks settled in Hellenistic cities like Alexandria, Antioch and
Seleucia. Two thousand years later, there are still communities in Pakistan and Afghanistan, like the Kalash, who
claim to be descended from Greek settlers. This age saw the Greeks move towards larger cities and a reduction in
the importance of the city-state. These larger cities were parts of the still larger Kingdoms of the Diadochi. Greeks,
however, remained aware of their past, chiefly through the study of the works of Homer and the classical authors.
An important factor in maintaining Greek identity was contact with barbarian (non-Greek) peoples, which was deepened
in the new cosmopolitan environment of the multi-ethnic Hellenistic kingdoms. This led to a strong desire among Greeks
to organize the transmission of the Hellenic paideia to the next generation. Greek science, technology and mathematics
are generally considered to have reached their peak during the Hellenistic period. In the religious sphere, this
was a period of profound change. The spiritual revolution that took place saw a waning of the old Greek religion, whose decline, beginning in the 3rd century BC, continued with the introduction of new religious movements from the
East. The cults of deities like Isis and Mithra were introduced into the Greek world. Greek-speaking communities
of the Hellenized East were instrumental in the spread of early Christianity in the 2nd and 3rd centuries, and Christianity's
early leaders and writers (notably St Paul) were generally Greek-speaking, though none were from Greece. However,
Greece itself had a tendency to cling to paganism and was not one of the influential centers of early Christianity:
in fact, some ancient Greek religious practices remained in vogue until the end of the 4th century, with some areas
such as the southeastern Peloponnese remaining pagan until well into the 10th century AD. Of the new eastern religions
introduced into the Greek world, the most successful was Christianity. From the early centuries of the Common Era,
the Greeks identified as Romaioi ("Romans"), since by that time the name "Hellenes" denoted pagans. While ethnic distinctions
still existed in the Roman Empire, they became secondary to religious considerations and the renewed empire used
Christianity as a tool to support its cohesion and promoted a robust Roman national identity. Concurrently the secular,
urban civilization of late antiquity survived in the Eastern Mediterranean along with the Greco-Roman educational system,
although it was from Christianity that the culture's essential values were drawn. The Eastern Roman Empire – today
conventionally named the Byzantine Empire, a name not in use during its own time – became increasingly influenced
by Greek culture after the 7th century, when Emperor Heraclius (AD 575–641) decided to make Greek the empire's
official language. Certainly from then on, but likely earlier, the Roman and Greek cultures were virtually fused
into a single Greco-Roman world. Although the Latin West recognized the Eastern Empire's claim to the Roman legacy
for several centuries, after Pope Leo III crowned Charlemagne, king of the Franks, as the "Roman Emperor" on 25 December
800, an act which eventually led to the formation of the Holy Roman Empire, the Latin West started to favour the
Franks and began to refer to the Eastern Roman Empire largely as the Empire of the Greeks (Imperium Graecorum). A
distinct Greek political identity re-emerged in the 11th century in educated circles and became more forceful after
the fall of Constantinople to the Crusaders of the Fourth Crusade in 1204, so that when the empire was revived in
1261, it became in many ways a Greek national state. That new notion of nationhood engendered a deep interest in
the classical past culminating in the ideas of the Neoplatonist philosopher Gemistus Pletho, who abandoned Christianity.
However, it was the combination of Orthodox Christianity with a specifically Greek identity that shaped the Greeks'
notion of themselves in the empire's twilight years. The interest in the Classical Greek heritage was complemented
by a renewed emphasis on Greek Orthodox identity, which was reinforced in the late Medieval and Ottoman Greeks' links
with their fellow Orthodox Christians in the Russian Empire. These were further strengthened following the fall of
the Empire of Trebizond in 1461, after which and until the second Russo-Turkish War of 1828-29 hundreds of thousands
of Pontic Greeks fled or migrated from the Pontic Alps and Armenian Highlands to southern Russia and the Russian
South Caucasus (see also Greeks in Russia, Greeks in Armenia, Greeks in Georgia, and Caucasian Greeks). Following
the Fall of Constantinople on 29 May 1453, many Greeks sought better employment and education opportunities by leaving
for the West, particularly Italy, Central Europe, Germany and Russia. Greeks are widely credited with contributing to the European cultural revival later called the Renaissance. In Greek-inhabited territory itself, Greeks came to play a leading
role in the Ottoman Empire, due in part to the fact that the central hub of the empire, politically, culturally,
and socially, was based on Western Thrace and Greek Macedonia, both in Northern Greece, and of course was centred
on the mainly Greek-populated, former Byzantine capital, Constantinople. As a direct consequence of this situation,
Greek-speakers came to play a hugely important role in the Ottoman trading and diplomatic establishment, as well
as in the church. Added to this, in the first half of the Ottoman period men of Greek origin made up a significant
proportion of the Ottoman army, navy, and state bureaucracy, having been levied as adolescents (along with especially
Albanians and Serbs) into Ottoman service through the devshirme. Many Ottomans of Greek (or Albanian or Serb) origin
were therefore to be found within the Ottoman forces which governed the provinces, from Ottoman Egypt to Ottoman-occupied Yemen and Algeria, frequently as provincial governors. For those who remained under the Ottoman Empire's
millet system, religion was the defining characteristic of national groups (milletler), so the exonym "Greeks" (Rumlar
from the name Rhomaioi) was applied by the Ottomans to all members of the Orthodox Church, regardless of their language
or ethnic origin. The Greek speakers were the only ethnic group to actually call themselves Romioi (as opposed to being so named by others), and at least the educated among them considered their ethnicity (genos) to be Hellenic. There
were, however, many Greeks who escaped the second-class status of Christians inherent in the Ottoman millet system,
according to which Muslims were explicitly awarded senior status and preferential treatment. These Greeks either
emigrated, particularly to their fellow Greek Orthodox protector, the Russian Empire, or simply converted to Islam,
often only very superficially and whilst remaining crypto-Christian. The most notable examples of large-scale conversion
to Turkish Islam among those today defined as Greek Muslims - excluding those who had to convert as a matter of course
on being recruited through the devshirme - were to be found in Crete (Cretan Turks), Greek Macedonia (for example
among the Vallahades of western Macedonia), and among Pontic Greeks in the Pontic Alps and Armenian Highlands. Several
Ottoman sultans and princes were also of part Greek origin, with mothers who were either Greek concubines or princesses
from Byzantine noble families, one famous example being sultan Selim the Grim, whose mother Gülbahar Hatun was a
Pontic Greek. The roots of Greek success in the Ottoman Empire can be traced to the Greek tradition of education
and commerce. It was the wealth of the extensive merchant class that provided the material basis for the intellectual
revival that was the prominent feature of Greek life in the half century and more leading to the outbreak of the
Greek War of Independence in 1821. Not coincidentally, on the eve of 1821, the three most important centres of Greek
learning were situated in Chios, Smyrna and Aivali, all three major centres of Greek commerce. Greek success was
also favoured by Greek domination of the Christian Orthodox church. The relationship between ethnic Greek identity
and Greek Orthodox religion continued after the creation of the Modern Greek state in 1830. According to the second
article of the first Greek constitution of 1822, a Greek was defined as any Christian resident of the Kingdom of
Greece, a clause removed by 1840. A century later, when the Treaty of Lausanne was signed between Greece and Turkey
in 1923, the two countries agreed to use religion as the determinant for ethnic identity for the purposes of population
exchange, although most of the Greeks displaced (over a million of the total 1.5 million) had already been driven
out by the time the agreement was signed. The Greek genocide, in particular the harsh removal of Pontian
Greeks from the southern shore area of the Black Sea, contemporaneous with and following the failed Greek Asia Minor
Campaign, was part of this process of Turkification of the Ottoman Empire and the placement of its economy and trade,
then largely in Greek hands, under ethnic Turkish control. The terms used to define Greekness have varied throughout
history but were never limited or completely identified with membership to a Greek state. By Western standards, the
term Greeks has traditionally referred to any native speakers of the Greek language, whether Mycenaean, Byzantine
or modern Greek. Byzantine Greeks called themselves Romioi and considered themselves the political heirs of Rome,
but at least by the 12th century a growing number of the educated deemed themselves the heirs of ancient Greece
as well, although for most of the Greek speakers, "Hellene" still meant pagan. On the eve of the Fall of Constantinople
the Last Emperor urged his soldiers to remember that they were the descendants of Greeks and Romans. Before the establishment
of the Modern Greek state, the link between ancient and modern Greeks was emphasized by the scholars of Greek Enlightenment
especially by Rigas Feraios. In his "Political Constitution", he addresses the nation as "the people descendant
of the Greeks". The modern Greek state was created in 1829, when the Greeks liberated a part of their historic homelands, the Peloponnese, from the Ottoman Empire. The large Greek diaspora and merchant class were instrumental in transmitting
the ideas of western romantic nationalism and philhellenism, which together with the conception of Hellenism, formulated
during the last centuries of the Byzantine Empire, formed the basis of the Diafotismos and the current conception
of Hellenism. Homer refers to the "Hellenes" (/ˈhɛliːnz/) as a relatively small tribe settled in Thessalic Phthia,
with its warriors under the command of Achilleus. The Parian Chronicle says that Phthia was the homeland of the Hellenes
and that this name was given to those previously called Greeks (Γραικοί). In Greek mythology, Hellen, the patriarch
of Hellenes, was son of Pyrrha and Deucalion, who ruled around Phthia, the only survivors after the great deluge.
It seems that the myth was invented when the Greek tribes started to separate from each other in certain areas of
Greece and it indicates their common origin. Aristotle names ancient Hellas as an area in Epirus between Dodona and
the Achelous river, the location of the great deluge of Deucalion, a land occupied by the Selloi and the "Greeks"
who later came to be known as "Hellenes". Selloi were the priests of Dodonian Zeus and the word probably means "sacrificers"
(compare Gothic saljan, "present, sacrifice"). There is currently no satisfactory etymology of the name Hellenes.
Some scholars assert that the name Selloi changed to Sellanes and then to Hellanes-Hellenes. However, this etymology connects the name Hellenes with the Dorians who occupied Epirus, and its relation to the name Greeks given by the Romans becomes uncertain. The name Hellenes seems to be older, and it was probably used by the Greeks with the establishment
of the Great Amphictyonic League. This was an ancient association of Greek tribes with twelve founders which was
organized to protect the great temples of Apollo in Delphi (Phocis) and of Demeter near Thermopylae (Locris). According
to the legend it was founded after the Trojan War by the eponymous Amphictyon, brother of Hellen. In the Hesiodic
Catalogue of Women, Graecus is presented as the son of Zeus and Pandora II, sister of Hellen the patriarch of Hellenes.
Hellen was the son of Deucalion who ruled around Phthia in central Greece. The Parian Chronicle mentions that when
Deucalion became king of Phthia, the previously called Graikoi were named Hellenes. Aristotle notes that the Hellenes
were related to the Grai/Greeks (Meteorologica I.xiv), a native name of a Dorian tribe in Epirus which was used by the
Illyrians. He also claims that the great deluge must have occurred in the region around Dodona, where the Selloi
dwelt. However, according to the Greek tradition, it is more likely that the homeland of the Greeks was originally
in central Greece. A modern theory derives the name Greek (Latin Graeci) from Graikos, "inhabitant of Graia/Graea,"
a town on the coast of Boeotia. Greek colonists from Graia helped to found Cumae (900 BC) in Italy, where they were
called Graeces. When the Romans encountered them they used this name for the colonists and then for all Greeks (Graeci).
The word γραῖα graia "old woman" comes from the PIE root *ǵerh2-/*ǵreh2-, "to grow old" via Proto-Greek *gera-/grau-iu;
the same root later gave γέρας geras, "gift of honour", already attested in Mycenaean Greek. The Germanic languages borrowed
the word Greeks with an initial "k" sound which probably was their initial sound closest to the Latin "g" at the
time (Goth. Kreks). The area beyond ancient Attica, including Boeotia, was called Graïke and is connected with the
older deluge of Ogyges, the mythological ruler of Boeotia. The region was originally occupied by the Minyans who
were autochthonous or Proto-Greek speaking people. In ancient Greek the name Ogygios came to mean "from earliest
days". Homer uses the terms Achaeans and Danaans (Δαναοί) as generic terms for the Greeks in the Iliad, and they were probably
a part of the Mycenean civilization. The names Achaioi and Danaoi seem to be pre-Dorian belonging to the people who
were overthrown. They were forced to the region that later bore the name Achaea after the Dorian invasion. In the
5th century BC, they were redefined as contemporary speakers of Aeolic Greek which was spoken mainly in Thessaly,
Boeotia and Lesbos. There are many controversial theories on the origin of the Achaeans. According to one view, the
Achaeans were one of the fair-haired tribes of upper Europe, who pressed down over the Alps during the early Iron Age (1300 BC) to southern Europe. Another theory suggests that the Peloponnesian Dorians were the Achaeans. These
theories are rejected by other scholars who, based on linguistic criteria, suggest that the Achaeans were mainland
pre-Dorian Greeks. There is also the theory that there was an Achaean ethnos that migrated from Asia Minor to lower
Thessaly prior to 2000 BC. Some Hittite texts mention a nation lying to the west called Ahhiyava or Ahhiya. Egyptian
documents refer to the Ekwesh, one of the groups of sea peoples who attacked Egypt during the reign of Merneptah (1213-1203
BCE), who may have been Achaeans. In Homer's Iliad, the names Danaans (Δαναοί) and Argives (Αργείοι)
are used to designate the Greek forces opposed to the Trojans. The myth of Danaus, whose origin is Egypt, is a foundation
legend of Argos. His daughters, the Danaides, were condemned in Tartarus to carry water in jugs to fill a bottomless bathtub. This myth is connected with a task that can never be fulfilled (compare Sisyphus), and the name can be derived from the PIE root *danu: "river". There is no satisfactory theory on their origin. Some scholars connect the Danaans with the
Denyen, one of the groups of the sea peoples who attacked Egypt during the reign of Ramesses III (1187-1156 BCE).
The same inscription mentions the Weshesh who might have been the Achaeans. The Denyen seem to have been inhabitants
of the city Adana in Cilicia. Pottery similar to that of Mycenae itself has been found in Tarsus of Cilicia and it
seems that some refugees from the Aegean went there after the collapse of the Mycenean civilization. These Cilicians
seem to have been called Dananiyim, the same word as Danaoi who attacked Egypt in 1191 BC along with the Quaouash
(or Weshesh) who may be Achaeans. They were also called Danuna according to a Hittite inscription and the same name
is mentioned in the Amarna letters. Julius Pokorny reconstructs the name from the PIE root *dā-: "flow, river", *dā-nu: "any moving liquid, drops", *dā-navo: "people living by the river; Scythian nomadic people (in the Rigveda water-demons, fem. Dānu a primordial goddess); in Greek Danaoi, in Egyptian Danuna". It is also possible that the name Danaans is pre-Greek.
A country Danaja with a city Mukana (probably Mycenae) is mentioned in inscriptions from Egypt from Amenophis III (1390-1352 BC) and Thutmosis III (1437 BC). The most obvious link between modern and ancient Greeks is their language,
which has a documented tradition from at least the 14th century BC to the present day, albeit with a break during
the Greek Dark Ages (lasting from the 11th to the 8th century BC). Scholars compare its continuity of tradition only to that of Chinese. Since its inception, Hellenism was primarily a matter of common culture, and the national continuity of the Greek world is far more certain than its demographic continuity. Yet, Hellenism also embodied an ancestral dimension
through aspects of Athenian literature that developed and influenced ideas of descent based on autochthony. During
the later years of the Eastern Roman Empire, areas such as Ionia and Constantinople experienced a Hellenic revival
in language, philosophy, and literature and on classical models of thought and scholarship. This revival provided
a powerful impetus to the sense of cultural affinity with ancient Greece and its classical heritage. The cultural
changes undergone by the Greeks are, despite a surviving common sense of ethnicity, undeniable. At the same time,
the Greeks have retained their language and alphabet, certain values and cultural traditions, customs, a sense of
religious and cultural difference and exclusion (the word barbarian was used by 12th-century historian Anna Komnene
to describe non-Greek speakers), a sense of Greek identity and common sense of ethnicity despite the global political
and social changes of the past two millennia. Today, Greeks are the majority ethnic group in the Hellenic Republic,
where they constitute 93% of the country's population, and the Republic of Cyprus, where they make up 78% of the island's
population (excluding Turkish settlers in the occupied part of the country). Greek populations have not traditionally
exhibited high rates of growth; nonetheless, the population of Greece has shown regular increase since the country's
first census in 1828. A large percentage of the population growth since the state's foundation has resulted from
annexation of new territories and the influx of 1.5 million Greek refugees after the 1923 population exchange between
Greece and Turkey. About 80% of the population of Greece is urban, with 28% concentrated in the city of Athens. Greeks
from Cyprus have a similar history of emigration, usually to the English-speaking world because of the island's colonization
by the British Empire. Waves of emigration followed the Turkish invasion of Cyprus in 1974, while the population
decreased between mid-1974 and 1977 as a result of emigration, war losses, and a temporary decline in fertility.
After the ethnic cleansing of a third of the Greek population of the island in 1974, there was also an increase in
the number of Greek Cypriots leaving, especially for the Middle East, which contributed to a decrease in population
that tapered off in the 1990s. Today more than two-thirds of the Greek population in Cyprus is urban. There is a
sizeable Greek minority of about 105,000 people (disputed; some sources claim higher figures) in Albania. The Greek minority
of Turkey, which numbered upwards of 200,000 people after the 1923 exchange, has now dwindled to a few thousand,
after the 1955 Constantinople Pogrom and other state sponsored violence and discrimination. This effectively ended,
though not entirely, the three-thousand-year-old presence of Hellenism in Asia Minor. There are smaller Greek minorities
in the rest of the Balkan countries, the Levant and the Black Sea states, remnants of the Old Greek Diaspora (pre-19th
century). The total number of Greeks living outside Greece and Cyprus today is a contentious issue. Where census
figures are available, they show around 3 million Greeks outside Greece and Cyprus. Estimates provided by the SAE
- World Council of Hellenes Abroad put the figure at around 7 million worldwide. According to George Prevelakis of
Sorbonne University, the number is closer to just below 5 million. Integration, intermarriage, and loss of the Greek
language influence the self-identification of the Omogeneia. Important centres of the New Greek Diaspora today are
London, New York, Melbourne and Toronto. In 2010, the Hellenic Parliament introduced a law that enables Diaspora
Greeks in Greece to vote in the elections of the Greek state. This law was later repealed in early 2014. In ancient
times, the trading and colonizing activities of the Greek tribes and city states spread the Greek culture, religion
and language around the Mediterranean and Black Sea basins, especially in Sicily and southern Italy (also known as
Magna Graecia), Spain, the south of France and the Black Sea coasts. Under Alexander the Great's empire and successor
states, Greek and Hellenizing ruling classes were established in the Middle East, India and in Egypt. The Hellenistic
period is characterized by a new wave of Greek colonization that established Greek cities and kingdoms in Asia and
Africa. Under the Roman Empire, easier movement of people spread Greeks across the Empire and in the eastern territories,
Greek became the lingua franca rather than Latin. The modern-day Griko community of southern Italy, numbering about
60,000, may represent a living remnant of the ancient Greek populations of Italy. During and after the Greek War
of Independence, Greeks of the diaspora were important in establishing the fledgling state, raising funds and awareness
abroad. Greek merchant families already had contacts in other countries and during the disturbances many set up home
around the Mediterranean (notably Marseilles in France, Livorno in Italy, Alexandria in Egypt), Russia (Odessa and
Saint Petersburg), and Britain (London and Liverpool) from where they traded, typically in textiles and grain. Businesses
frequently comprised the extended family, and with them they brought schools teaching Greek and the Greek Orthodox
Church. Greek culture has evolved over thousands of years, with its beginning in the Mycenaean civilization, continuing
through the Classical period, the Roman and Eastern Roman periods and was profoundly affected by Christianity, which
it in turn influenced and shaped. Ottoman Greeks endured several centuries of adversity that culminated in genocide in the 20th century, a period that nevertheless included cultural exchanges and enriched both cultures. The Diafotismos
is credited with revitalizing Greek culture and giving birth to the synthesis of ancient and medieval elements that
characterize it today. Greek demonstrates several linguistic features that are shared with other Balkan languages,
such as Albanian, Bulgarian and Eastern Romance languages (see Balkan sprachbund), and has absorbed many foreign
words, primarily of Western European and Turkish origin. Because of the movements of Philhellenism and the Diafotismos
in the 19th century, which emphasized the modern Greeks' ancient heritage, these foreign influences were excluded
from official use via the creation of Katharevousa, a somewhat artificial form of Greek purged of all foreign influence
and words, as the official language of the Greek state. In 1976, however, the Hellenic Parliament voted to make the
spoken Dimotiki the official language, making Katharevousa obsolete. Modern Greek has, in addition to Standard Modern
Greek or Dimotiki, a wide variety of dialects of varying levels of mutual intelligibility, including Cypriot, Pontic,
Cappadocian, Griko and Tsakonian (the only surviving representative of ancient Doric Greek). Yevanic is the language
of the Romaniotes, and survives in small communities in Greece, New York and Israel. In addition to Greek, many Greeks
in Greece and the Diaspora are bilingual in other languages or dialects such as English, Arvanitika/Albanian, Aromanian,
Macedonian Slavic, Russian and Turkish. Most Greeks are Christians, belonging to the Greek Orthodox Church. In the first centuries after Christ, the New Testament was written in Koine Greek, which remains the
liturgical language of the Greek Orthodox Church, and most of the early Christians and Church Fathers were Greek-speaking.
There are small groups of ethnic Greeks adhering to other Christian denominations like Greek Catholics, Greek Evangelicals,
Pentecostals, and groups adhering to other religions including Romaniot and Sephardic Jews and Greek Muslims. About
2,000 Greeks are members of Hellenic Polytheistic Reconstructionism congregations. Greek art has a long and varied
history. Greeks have contributed to the visual, literary and performing arts. In the West, ancient Greek art was
influential in shaping the Roman and later the modern western artistic heritage. Following the Renaissance in Europe,
the humanist aesthetic and the high technical standards of Greek art inspired generations of European artists. Well
into the 19th century, the classical tradition derived from Greece played an important role in the art of the western
world. In the East, Alexander the Great's conquests initiated several centuries of exchange between Greek, Central
Asian and Indian cultures, resulting in Greco-Buddhist art, whose influence reached as far as Japan. Notable modern
Greek artists include Renaissance painter Dominikos Theotokopoulos (El Greco), Panagiotis Doxaras, Nikolaos Gyzis,
Nikiphoros Lytras, Yannis Tsarouchis, Nikos Engonopoulos, Constantine Andreou, Jannis Kounellis, sculptors such as
Leonidas Drosis, Georgios Bonanos, Yannoulis Chalepas and Joannis Avramidis, conductor Dimitri Mitropoulos, soprano
Maria Callas, composers such as Mikis Theodorakis, Nikos Skalkottas, Iannis Xenakis, Manos Hatzidakis, Eleni Karaindrou,
Yanni and Vangelis, one of the best-selling singers worldwide Nana Mouskouri and poets such as Kostis Palamas, Dionysios
Solomos, Angelos Sikelianos and Yannis Ritsos. Alexandrian Constantine P. Cavafy and Nobel laureates Giorgos Seferis
and Odysseas Elytis are among the most important poets of the 20th century. The novel is also represented by Alexandros Papadiamantis and Nikos Kazantzakis. The Greeks of the Classical era made several notable contributions to science
and helped lay the foundations of several western scientific traditions, like philosophy, historiography and mathematics.
The scholarly tradition of the Greek academies was maintained during Roman times with several academic institutions
in Constantinople, Antioch, Alexandria and other centres of Greek learning while Eastern Roman science was essentially
a continuation of classical science. Greeks have a long tradition of valuing and investing in paideia (education).
Paideia was one of the highest societal values in the Greek and Hellenistic world while the first European institution
described as a university was founded in 5th-century Constantinople and operated in various incarnations until the
city's fall to the Ottomans in 1453. The University of Constantinople was Christian Europe's first secular institution
of higher learning, since no theological subjects were taught, and, considering the original meaning of the word university as a corporation of students, the world's first university as well. As of 2007, Greece had the eighth highest percentage
of tertiary enrollment in the world (with the percentages for female students being higher than for male) while Greeks
of the Diaspora are equally active in the field of education. Hundreds of thousands of Greek students attend western
universities every year while the faculty lists of leading Western universities contain a striking number of Greek
names. Notable Greek scientists of modern times include Dimitrios Galanos, Georgios Papanikolaou (inventor
of the Pap test), Nicholas Negroponte, Constantin Carathéodory, Manolis Andronikos, Michael Dertouzos, John Argyris,
Panagiotis Kondylis, John Iliopoulos (2007 Dirac Prize for his contributions to the physics of the charm quark, a
major contribution to the birth of the Standard Model, the modern theory of Elementary Particles), Joseph Sifakis
(2007 Turing Award, the "Nobel Prize" of Computer Science), Christos Papadimitriou (2002 Knuth Prize, 2012 Gödel
Prize), Mihalis Yannakakis (2005 Knuth Prize) and Dimitri Nanopoulos. The most widely used symbol is the flag of
Greece, which features nine equal horizontal stripes of blue alternating with white representing the nine syllables
of the Greek national motto Eleftheria i thanatos (freedom or death), which was the motto of the Greek War of Independence.
The blue square in the upper hoist-side corner bears a white cross, which represents Greek Orthodoxy. The Greek flag
is widely used by the Greek Cypriots, although Cyprus has officially adopted a neutral flag to ease ethnic tensions with the Turkish Cypriot minority (see flag of Cyprus). Greek surnames were widely in use by the 9th century, supplanting the ancient tradition of using the father's name; however, Greek surnames are most commonly patronymics. Commonly,
Greek male surnames end in -s, which is the common ending for Greek masculine proper nouns in the nominative case.
Exceptionally, some end in -ou, indicating the genitive case of this proper noun for patronymic reasons. Although
surnames in mainland Greece are static today, dynamic and changing patronymic usage survives in middle names where
the genitive of father's first name is commonly the middle name (this usage having been passed on to the Russians).
In Cyprus, by contrast, surnames follow the ancient tradition of being given according to the father’s name. Finally,
in addition to Greek-derived surnames, many have surnames of Latin, Turkish or Italian origin. The traditional Greek homelands
have been the Greek peninsula and the Aegean Sea, Southern Italy (Magna Graecia), the Black Sea, the Ionian coasts
of Asia Minor and the islands of Cyprus and Sicily. In Plato's Phaidon, Socrates remarks, "we (Greeks) live around
a sea like frogs around a pond" when describing to his friends the Greek cities of the Aegean. This image is attested
by the map of the Old Greek Diaspora, which corresponded to the Greek world until the creation of the Greek state
in 1832. The sea and trade were natural outlets for Greeks since the Greek peninsula is rocky and does not offer
good prospects for agriculture. Notable Greek seafarers include people such as Pytheas of Marseilles, Scylax of Caryanda
who sailed to Iberia and beyond, Nearchus, the 6th century merchant and later monk Cosmas Indicopleustes (Cosmas
who sailed to India) and the explorer of the Northwestern Passage Juan de Fuca. In later times, the Romioi plied
the sea-lanes of the Mediterranean and controlled trade until an embargo imposed by the Roman Emperor on trade with
the Caliphate opened the door for the later Italian pre-eminence in trade. The Greek shipping tradition recovered
during Ottoman rule when a substantial merchant middle class developed, which played an important part in the Greek
War of Independence. Today, Greek shipping continues to prosper to the extent that Greece has the largest merchant
fleet in the world, while many more ships under Greek ownership fly flags of convenience. The most notable shipping
magnate of the 20th century was Aristotle Onassis, others being Yiannis Latsis, George Livanos, and Stavros Niarchos.
Another study from 2012 included 150 dental school students from the University of Athens; the results showed that light
hair colour (blonde/light ash brown) was predominant in 10.7% of the students, 36% had medium hair colour (light
brown/medium darkest brown), 32% had darkest brown and 21% black (15.3% off-black, 6% midnight black). In conclusion,
the hair colour of young Greeks is mostly brown, ranging from light to dark, with significant minorities having
black and blonde hair. The same study also showed that the eye colour of the students was 14.6% blue/green, 28% medium
(light brown) and 57.4% dark brown. The history of the Greek people is closely associated with the history of Greece,
Cyprus, Constantinople, Asia Minor and the Black Sea. During the Ottoman rule of Greece, a number of Greek enclaves
around the Mediterranean were cut off from the core, notably in Southern Italy, the Caucasus, Syria and Egypt. By
the early 20th century, over half of the overall Greek-speaking population was settled in Asia Minor (now Turkey),
while later that century a huge wave of migration to the United States, Australia, Canada and elsewhere created the
modern Greek diaspora.
The Premier League is a corporation in which the 20 member clubs act as shareholders. Seasons run from August to May. Teams
play 38 matches each (playing each team in the league twice, home and away), totalling 380 matches in the season.
Most games are played on Saturday and Sunday afternoons; others during weekday evenings. It is currently sponsored
by Barclays Bank and thus officially known as the Barclays Premier League; colloquially it is known as the Premiership.
Outside the UK it is commonly referred to as the English Premier League (EPL). The competition formed as the FA Premier
League on 20 February 1992 following the decision of clubs in the Football League First Division to break away from
the Football League, which was originally founded in 1888, and take advantage of a lucrative television rights deal.
The deal was worth £1 billion a year domestically as of 2013–14, with BSkyB and BT Group securing the domestic rights
to broadcast 116 and 38 games respectively. The league generates €2.2 billion per year in domestic and international
television rights. In 2014–15, teams were apportioned revenues of £1.6 billion. The Premier League is the most-watched
football league in the world, broadcast in 212 territories to 643 million homes and a potential TV audience of 4.7
billion people. In the 2014–15 season, the average Premier League match attendance exceeded 36,000, second highest
of any professional football league behind the Bundesliga's 43,500. Most stadium occupancies are near capacity. The
Premier League ranks second in the UEFA coefficients of leagues based on performances in European competitions over
the past five seasons. Despite significant European success during the 1970s and early 1980s, the late '80s had marked
a low point for English football. Stadiums were crumbling, supporters endured poor facilities, hooliganism was rife,
and English clubs were banned from European competition for five years following the Heysel Stadium disaster in 1985.
The Football League First Division, which had been the top level of English football since 1888, was well behind
leagues such as Italy's Serie A and Spain's La Liga in attendances and revenues, and several top English players
had moved abroad. However, by the turn of the 1990s the downward trend was starting to reverse; England had been
successful in the 1990 FIFA World Cup, reaching the semi-finals. UEFA, European football's governing body, lifted
the five-year ban on English clubs playing in European competitions in 1990 (resulting in Manchester United lifting
the UEFA Cup Winners' Cup in 1991) and the Taylor Report on stadium safety standards, which proposed expensive upgrades
to create all-seater stadiums in the aftermath of the Hillsborough disaster, was published in January of that year.
Television money had also become much more important; the Football League received £6.3 million for a two-year agreement
in 1986, but when that deal was renewed in 1988, the price rose to £44 million over four years. The 1988 negotiations
were the first signs of a breakaway league; ten clubs threatened to leave and form a "super league", but were eventually
persuaded to stay. As stadiums improved and match attendance and revenues rose, the country's top teams again considered
leaving the Football League in order to capitalise on the growing influx of money being pumped into the sport. At
the close of the 1991 season, a proposal was tabled for the establishment of a new league that would bring more money
into the game overall. The Founder Members Agreement, signed on 17 July 1991 by the game's top-flight clubs, established
the basic principles for setting up the FA Premier League. The newly formed top division would have commercial independence
from The Football Association and the Football League, giving the FA Premier League licence to negotiate its own
broadcast and sponsorship agreements. The argument given at the time was that the extra income would allow English
clubs to compete with teams across Europe. The managing director of London Weekend Television (LWT), Greg Dyke, met
with the representatives of the "big five" football clubs in England in 1990. The meeting was to pave the way for
a break away from The Football League. Dyke believed that it would be more lucrative for LWT if only the larger clubs
in the country were featured on national television and wanted to establish whether the clubs would be interested
in a larger share of television rights money. The five clubs decided it was a good idea and decided to press ahead
with it; however, the league would have no credibility without the backing of The Football Association and so David
Dein of Arsenal held talks to see whether the FA were receptive to the idea. The FA did not enjoy an amicable relationship
with the Football League at the time and considered it as a way to weaken the Football League's position. In 1992,
the First Division clubs resigned from the Football League en masse and on 27 May 1992 the FA Premier League was
formed as a limited company working out of an office at the Football Association's then headquarters in Lancaster
Gate. This meant a break-up of the 104-year-old Football League that had operated until then with four divisions;
the Premier League would operate with a single division and the Football League with three. There was no change in
competition format; the same number of teams competed in the top flight, and promotion and relegation between the
Premier League and the new First Division remained the same as the old First and Second Divisions with three teams
relegated from the league and three promoted. The league held its first season in 1992–93 and was originally composed
of 22 clubs. The first ever Premier League goal was scored by Brian Deane of Sheffield United in a 2–1 win against
Manchester United. The 22 inaugural members of the new Premier League were Arsenal, Aston Villa, Blackburn Rovers,
Chelsea, Coventry City, Crystal Palace, Everton, Ipswich Town, Leeds United, Liverpool, Manchester City, Manchester
United, Middlesbrough, Norwich City, Nottingham Forest, Oldham Athletic, Queens Park Rangers, Sheffield United, Sheffield
Wednesday, Southampton, Tottenham Hotspur, and Wimbledon. Luton Town, Notts County and West Ham United were the three
teams relegated from the old first division at the end of the 1991–92 season, and did not take part in the inaugural
Premier League season. One significant feature of the Premier League in the mid-2000s was the dominance of the so-called
"Big Four" clubs: Arsenal, Chelsea, Liverpool and Manchester United. During this decade, and particularly from 2002
to 2009, they dominated the top four spots, which came with UEFA Champions League qualification, taking all top four
places in 5 out of 6 seasons from 2003–04 to 2008–09 inclusive, with Arsenal going as far as winning the league without
losing a single game in 2003–04, the only time it has ever happened in the Premier League. In May 2008 Kevin Keegan
stated that "Big Four" dominance threatened the division, "This league is in danger of becoming one of the most boring
but great leagues in the world." Premier League chief executive Richard Scudamore said in defence: "There are a lot
of different tussles that go on in the Premier League depending on whether you're at the top, in the middle or at
the bottom that make it interesting." The years following 2009 marked a shift in the structure of the "Big Four"
with Tottenham Hotspur and Manchester City both breaking into the top four. In the 2009–10 season, Tottenham finished
fourth and became the first team to break the top four since Everton in 2005. Criticism of the gap between an elite
group of "super clubs" and the majority of the Premier League has continued, nevertheless, due to their increasing
ability to spend more than the other Premier League clubs. Manchester City won the title in the 2011–12 season, becoming
the first club outside the "Big Four" to win since 1994–95. That season also saw two of the Big Four (Chelsea and
Liverpool) finish outside the top four places for the first time since 1994–95. Due to insistence by the International
Federation of Association Football (FIFA), the international governing body of football, that domestic leagues reduce
the number of games clubs played, the number of clubs was reduced to 20 in 1995 when four teams were relegated from
the league and only two teams promoted. On 8 June 2006, FIFA requested that all major European leagues, including
Italy's Serie A and Spain's La Liga be reduced to 18 teams by the start of the 2007–08 season. The Premier League
responded by announcing their intention to resist such a reduction. Ultimately, the 2007–08 season kicked off again
with 20 teams. The Football Association Premier League Ltd (FAPL) is operated as a corporation and is owned by the
20 member clubs. Each club is a shareholder, with one vote each on issues such as rule changes and contracts. The
clubs elect a chairman, chief executive, and board of directors to oversee the daily operations of the league. The
current chairman is Sir Dave Richards, who was appointed in April 1999, and the chief executive is Richard Scudamore,
appointed in November 1999. The former chairman and chief executive, John Quinton and Peter Leaver, were forced to
resign in March 1999 after awarding consultancy contracts to former Sky executives Sam Chisholm and David Chance.
The Football Association is not directly involved in the day-to-day operations of the Premier League, but has veto
power as a special shareholder during the election of the chairman and chief executive and when new rules are adopted
by the league. The Premier League sends representatives to UEFA's European Club Association, the number of clubs
and the clubs themselves chosen according to UEFA coefficients. For the 2012–13 season the Premier League has 10
representatives in the Association: Arsenal, Aston Villa, Chelsea, Everton, Fulham, Liverpool, Manchester City, Manchester
United, Newcastle United and Tottenham Hotspur. The European Club Association is responsible for electing three members
to UEFA's Club Competitions Committee, which is involved in the operations of UEFA competitions such as the Champions
League and UEFA Europa League. There are 20 clubs in the Premier League. During the course of a season (from August
to May) each club plays the others twice (a double round-robin system), once at their home stadium and once at that
of their opponents, for a total of 38 games. Teams receive three points for a win and one point for a draw. No points
are awarded for a loss. Teams are ranked by total points, then goal difference, and then goals scored. If still equal,
teams are deemed to occupy the same position. If there is a tie for the championship, for relegation, or for qualification
to other competitions, a play-off match at a neutral venue decides rank. The three lowest placed teams are relegated
into the Football League Championship, and the top two teams from the Championship, together with the winner of play-offs
involving the third to sixth placed Championship clubs, are promoted in their place. The team placed fifth in the
Premier League automatically qualifies for the UEFA Europa League, and the sixth and seventh-placed teams can also
qualify, depending on the winners of the two domestic cup competitions, i.e. the FA Cup and the Capital One Cup (League
Cup). Two Europa League places are reserved for the winners of each tournament; if the winner of either the FA Cup
or League Cup qualifies for the Champions League, then that place will go to the next-best placed finisher in the
Premier League. A further place in the UEFA Europa League is also available via the Fair Play initiative. If the
Premier League has one of the three highest Fair Play rankings in Europe, the highest ranked team in the Premier
League Fair Play standings which has not already qualified for Europe will automatically qualify for the UEFA Europa
League first qualifying round. An exception to the usual European qualification system happened in 2005, after Liverpool
won the Champions League the year before, but did not finish in a Champions League qualification place in the Premier
League that season. UEFA gave special dispensation for Liverpool to enter the Champions League, giving England five
qualifiers. UEFA subsequently ruled that the defending champions qualify for the competition the following year regardless
of their domestic league placing. However, for those leagues with four entrants in the Champions League, this meant
that if the Champions League winner finished outside the top four in its domestic league, it would qualify at the
expense of the fourth-placed team in the league. No association can have more than four entrants in the Champions
League. This occurred in 2012, when Chelsea – who had won the Champions League the previous year, but finished sixth
in the league – qualified for the Champions League in place of Tottenham Hotspur, who went into the Europa League.
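The points and tie-breaking rules described earlier in this section (three points for a win, one for a draw, then goal difference, then goals scored) can be sketched as a sorting key. The `Record` type below is illustrative only, not from any real league API:

```python
from dataclasses import dataclass

@dataclass
class Record:
    # Hypothetical per-club season record; field names are illustrative.
    name: str
    wins: int
    draws: int
    losses: int
    goals_for: int
    goals_against: int

    @property
    def points(self) -> int:
        # Three points for a win, one for a draw, none for a loss.
        return 3 * self.wins + self.draws

    @property
    def goal_difference(self) -> int:
        return self.goals_for - self.goals_against

def rank_table(records: list) -> list:
    # Rank by points, then goal difference, then goals scored, all descending.
    # Clubs still level on all three are deemed to occupy the same position;
    # a play-off applies only when a title, relegation place or European
    # qualification is at stake.
    return sorted(records,
                  key=lambda r: (r.points, r.goal_difference, r.goals_for),
                  reverse=True)
```

For example, two clubs level on points and goal difference are separated by goals scored.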
Between the 1992–93 season and the 2012–13 season, Premier League clubs had won the UEFA Champions League four times
(as well as supplying five of the runners-up), behind Spain's La Liga with six wins, and Italy's Serie A with five
wins, and ahead of, among others, Germany's Bundesliga with three wins (see table here). The FIFA Club World Cup
(or the FIFA Club World Championship, as it was originally called) has been won by Premier League clubs once (Manchester
United in 2008), and they have also been runners-up twice, behind Brazil's Brasileirão with four wins, and Spain's
La Liga and Italy's Serie A with two wins each (see table here). The Premier League has the highest revenue of any
football league in the world, with total club revenues of €2.48 billion in 2009–10. In 2013–14, due to improved television
revenues and cost controls, the Premier League had net profits in excess of £78 million, exceeding all other football
leagues. In 2010 the Premier League was awarded the Queen's Award for Enterprise in the International Trade category
for its outstanding contribution to international trade and the value it brings to English football and the United
Kingdom's broadcasting industry. In 2011, a Welsh club participated in the Premier League for the first time after
Swansea City gained promotion. The first Premier League match to be played outside England was Swansea City's home
match at the Liberty Stadium against Wigan Athletic on 20 August 2011. In 2012–13, Swansea qualified for the Europa
League by winning the League Cup. The number of Welsh clubs in the Premier League increased to two for the first
time in 2013–14, as Cardiff City gained promotion, but Cardiff City was relegated after its maiden season. Participation
in the Premier League by some Scottish or Irish clubs has sometimes been discussed, but without result. The idea
came closest to reality in 1998, when Wimbledon received Premier League approval to relocate to Dublin, Ireland,
but the move was blocked by the Football Association of Ireland. Additionally, the media occasionally discusses the
idea that Scotland's two biggest teams, Celtic and Rangers, should or will take part in the Premier League, but nothing
has come of these discussions. Television has played a major role in the history of the Premier League. The League's
decision to assign broadcasting rights to BSkyB in 1992 was at the time a radical decision, but one that has paid
off. At the time pay television was an almost untested proposition in the UK market, as was charging fans to watch
live televised football. However, a combination of Sky's strategy, the quality of Premier League football and the
public's appetite for the game has seen the value of the Premier League's TV rights soar. The Premier League sells
its television rights on a collective basis. This is in contrast to some other European Leagues, including La Liga,
in which each club sells its rights individually, leading to a much higher share of the total income going to the
top few clubs. The money is divided into three parts: half is divided equally between the clubs; one quarter is awarded
on a merit basis based on final league position, the top club getting twenty times as much as the bottom club, and
equal steps all the way down the table; the final quarter is paid out as facilities fees for games that are shown
on television, with the top clubs generally receiving the largest shares of this. The income from overseas rights
is divided equally between the twenty clubs. The first Sky television rights agreement was worth £304 million over
five seasons. The next contract, negotiated to start from the 1997–98 season, rose to £670 million over four seasons.
The third contract was a £1.024 billion deal with BSkyB for the three seasons from 2001–02 to 2003–04. The league
brought in £320 million from the sale of its international rights for the three-year period from 2004–05 to 2006–07.
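The three-way domestic split described above (half shared equally, a quarter awarded by final league position in equal steps so the top club's merit payment is twenty times the bottom club's, and a quarter paid as facility fees for televised games) can be sketched as follows. The pot size and facility-fee shares in the example are hypothetical:

```python
def split_domestic_tv_money(pot: float, facility_shares: dict) -> dict:
    """Sketch of the described split for a hypothetical domestic TV pot.

    pot: total domestic TV revenue for the season (hypothetical figure).
    facility_shares: fraction of the facility-fee quarter per club, keyed by
        final league position (1..20); values must sum to 1. In practice this
        reflects how often each club's games were televised.
    Returns a dict mapping league position -> total payout.
    """
    n = 20
    equal_share = pot / 2 / n        # half the pot, split equally
    merit_pot = pot / 4              # a quarter, by final league position
    facility_pot = pot / 4           # a quarter, as facility fees
    # Merit payments descend in equal steps: 20 units for 1st place down to
    # 1 unit for 20th, so the top club gets twenty times the bottom club.
    total_units = sum(range(1, n + 1))  # 210
    payouts = {}
    for pos in range(1, n + 1):
        merit = merit_pot * (n + 1 - pos) / total_units
        payouts[pos] = equal_share + merit + facility_pot * facility_shares[pos]
    return payouts
```

With a pot of 2,100 units and facility fees split evenly for simplicity, each club's equal share is 52.5, the champion's merit payment is 50, and the bottom club's is 2.5.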
It sold the rights itself on a territory-by-territory basis. Sky's monopoly was broken from August 2006 when Setanta
Sports was awarded rights to show two out of the six packages of matches available. This occurred following an insistence
by the European Commission that exclusive rights should not be sold to one television company. Sky and Setanta paid
a total of £1.7 billion, a two-thirds increase which took many commentators by surprise as it had been widely assumed
that the value of the rights had levelled off following many years of rapid growth. Setanta also hold rights to a
live 3 pm match solely for Irish viewers. The BBC has retained the rights to show highlights for the same three seasons
(on Match of the Day) for £171.6 million, a 63 per cent increase on the £105 million it paid for the previous three-year
period. Sky and BT have agreed to jointly pay £84.3 million for delayed television rights to 242 games (that is the
right to broadcast them in full on television and over the internet) in most cases for a period of 50 hours after
10 pm on matchday. Overseas television rights fetched £625 million, nearly double the previous contract. The total
raised from these deals is more than £2.7 billion, giving Premier League clubs an average media income from league
games of around £40 million-a-year from 2007 to 2010. The TV rights agreement between the Premier League and Sky
has faced accusations of being a cartel, and a number of court cases have arisen as a result. An investigation by
the Office of Fair Trading in 2002 found BSkyB to be dominant within the pay TV sports market, but concluded that
there were insufficient grounds for the claim that BSkyB had abused its dominant position. In July 1999 the Premier
League's method of selling rights collectively for all member clubs was investigated by the UK Restrictive Practices
Court, who concluded that the agreement was not contrary to the public interest. The BBC's highlights package on
Saturday and Sunday nights, as well as other evenings when fixtures justify, will run until 2016. Television rights
alone for the period 2010 to 2013 have been purchased for £1.782 billion. On 22 June 2009, due to troubles encountered
by Setanta Sports after it failed to meet a final deadline over a £30 million payment to the Premier League, ESPN
was awarded two packages of UK rights containing a total of 46 matches that were available for the 2009–10 season
as well as a package of 23 matches per season from 2010–11 to 2012–13. On 13 June 2012, the Premier League announced
that BT had been awarded 38 games a season for the 2013–14 through 2015–16 seasons at £246 million-a-year. The remaining
116 games were retained by Sky, who paid £760 million-a-year. The total domestic rights raised £3.018 billion,
an increase of 70.2% over the 2010–11 to 2012–13 rights. The value of the licensing deal rose by another 70.2% in
2015, when Sky and BT paid a total of £5.136 billion to renew their contracts with the Premier League for another
three years up to the 2018–19 season. The Premier League is particularly popular in Asia, where it is the most widely
distributed sports programme. In Australia, Fox Sports broadcasts almost all of the season's 380 matches live, and
Foxtel gives subscribers the option of selecting which Saturday 3pm match to watch. In India, the matches are broadcast
live on STAR Sports. In China, the broadcast rights were awarded to Super Sports in a six-year agreement that began
in the 2013–14 season. As of the 2013–14 season, Canadian broadcast rights to the Premier League are jointly owned
by Sportsnet and TSN, with both rival networks holding rights to 190 matches per season. The Premier League is broadcast
in the United States through NBC Sports. Premier League viewership has increased rapidly, with NBC and NBCSN averaging
a record 479,000 viewers in the 2014–15 season, up 118% from 2012–13 when coverage still aired on Fox Soccer and
ESPN/ESPN2 (220,000 viewers), and NBC Sports has been widely praised for its coverage. NBC Sports reached a six-year
extension with the Premier League in 2015 to broadcast the league through the 2021–22 season in a deal valued at
$1 billion (£640 million). There has been an increasing gulf between the Premier League and the Football League.
Since its split with the Football League, many established clubs in the Premier League have managed to distance themselves
from their counterparts in lower leagues. Owing in large part to the disparity in revenue from television rights
between the leagues, many newly promoted teams have found it difficult to avoid relegation in their first season
in the Premier League. In every season except 2001–02 and 2011–12, at least one Premier League newcomer has been
relegated back to the Football League. In 1997–98 all three promoted clubs were relegated at the end of the season.
The Premier League distributes a portion of its television revenue to clubs that are relegated from the league in
the form of "parachute payments". Starting with the 2013–14 season, these payments are in excess of £60 million over
four seasons. Though designed to help teams adjust to the loss of television revenues (the average Premier League
team receives £55 million while the average Football League Championship club receives £2 million), critics maintain
that the payments actually widen the gap between teams that have reached the Premier League and those that have not,
leading to the common occurrence of teams "bouncing back" soon after their relegation. For some clubs who have failed
to win immediate promotion back to the Premier League, financial problems, including in some cases administration
or even liquidation have followed. Further relegations down the footballing ladder have ensued for several clubs
unable to cope with the gap. As of the 2015–16 season, Premier League football has been played in 53 stadiums since
the formation of the Premier League in 1992. The Hillsborough disaster in 1989 and the subsequent Taylor Report saw
a recommendation that standing terraces should be abolished; as a result all stadiums in the Premier League are all-seater.
Since the formation of the Premier League, football grounds in England have seen constant improvements to capacity
and facilities, with some clubs moving to new-build stadiums. Nine stadiums that have seen Premier League football
have now been demolished. The stadiums for the 2010–11 season show a large disparity in capacity: Old Trafford, the
home of Manchester United, has a capacity of 75,957, while Bloomfield Road, the home of Blackpool, has a capacity
of 16,220. The combined total capacity of the Premier League in the 2010–11 season is 770,477 with an average capacity
of 38,523. Stadium attendances are a significant source of regular income for Premier League clubs. For the 2009–10
season, average attendances across the league clubs were 34,215 for Premier League matches with a total aggregate
attendance figure of 13,001,616. This represents an increase of 13,089 from the average attendance of 21,126 recorded
in the league's first season (1992–93). However, during the 1992–93 season the capacities of most stadiums were reduced
as clubs replaced terraces with seats in order to meet the Taylor Report's 1994–95 deadline for all-seater stadiums.
The Premier League's record average attendance of 36,144 was set during the 2007–08 season. This record was then
beaten in the 2013–14 season recording an average attendance of 36,695 with a total attendance of just under 14 million,
the highest average in England's top flight since 1950. Managers in the Premier League are involved in the day-to-day
running of the team, including the training, team selection, and player acquisition. Their influence varies from
club-to-club and is related to the ownership of the club and the relationship of the manager with fans. Managers
are required to have a UEFA Pro Licence which is the final coaching qualification available, and follows the completion
of the UEFA 'B' and 'A' Licences. The UEFA Pro Licence is required by every person who wishes to manage a club in
the Premier League on a permanent basis (i.e. more than 12 weeks – the amount of time an unqualified caretaker manager
is allowed to take control). Caretaker appointments are managers that fill the gap between a managerial departure
and a new appointment. Several caretaker managers have gone on to secure a permanent managerial post after performing
well as a caretaker; examples include Paul Hart at Portsmouth and David Pleat at Tottenham Hotspur. At the inception
of the Premier League in 1992–93, just eleven players named in the starting line-ups for the first round of matches
hailed from outside of the United Kingdom or Ireland. By 2000–01, the number of foreign players participating in
the Premier League was 36 per cent of the total. In the 2004–05 season the figure had increased to 45 per cent. On
26 December 1999, Chelsea became the first Premier League side to field an entirely foreign starting line-up, and
on 14 February 2005 Arsenal were the first to name a completely foreign 16-man squad for a match. By 2009, under
40% of the players in the Premier League were English. In response to concerns that clubs were increasingly passing
over young English players in favour of foreign players, in 1999, the Home Office tightened its rules for granting
work permits to players from countries outside of the European Union. A non-EU player applying for the permit must
have played for his country in at least 75 per cent of its competitive 'A' team matches for which he was available
for selection during the previous two years, and his country must have averaged at least 70th place in the official
FIFA world rankings over the previous two years. If a player does not meet those criteria, the club wishing to sign
him may appeal. Players may only be transferred during transfer windows that are set by the Football Association.
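The two work-permit criteria above reduce to a simple boolean check. This sketch uses hypothetical names and treats the thresholds exactly as stated in the text:

```python
def eligible_for_work_permit(pct_competitive_matches: float,
                             avg_fifa_rank: float) -> bool:
    """Sketch of the 1999 Home Office criteria described above.

    pct_competitive_matches: share of the country's competitive 'A' team
        matches the player appeared in while available for selection over
        the previous two years (0.0 to 1.0).
    avg_fifa_rank: the country's average official FIFA ranking over the
        previous two years (lower numbers are better; "at least 70th place"
        is read here as an average rank of 70 or better).
    """
    return pct_competitive_matches >= 0.75 and avg_fifa_rank <= 70
```

A player who fails either test can still be signed if the club's appeal succeeds, which the check above does not model.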
The two transfer windows run from the last day of the season to 31 August and from 31 December to 31 January. Player
registrations cannot be exchanged outside these windows except under specific licence from the FA, usually on an
emergency basis. As of the 2010–11 season, the Premier League introduced new rules mandating that each club must
register a maximum 25-man squad of players aged over 21, with the squad list only allowed to be changed in transfer
windows or in exceptional circumstances. This was to enable the 'home-grown' rule to be enacted, whereby from 2010 the
League would also require at least eight of the named 25-man squad to be made up of 'home-grown players'. The record
transfer fee for a Premier League player has risen steadily over the lifetime of the competition. Prior to the start
of the first Premier League season Alan Shearer became the first British player to command a transfer fee of more
than £3 million. The record rose steadily in the Premier League's first few seasons, until Alan Shearer made a record-breaking
£15 million move to Newcastle United in 1996. The three highest transfers in the sport's history had a Premier
League club on the selling end, with Tottenham Hotspur selling Gareth Bale to Real Madrid for £85 million in 2013,
Manchester United's sale of Cristiano Ronaldo to Real Madrid for £80 million in 2009, and Liverpool selling Luis
Suárez to Barcelona for £75 million in 2014. The Golden Boot is awarded to the top Premier League scorer at the end
of each season. Former Blackburn Rovers and Newcastle United striker Alan Shearer holds the record for most Premier
League goals with 260. Twenty-four players have reached the 100-goal mark. Since the first Premier League season
in 1992–93, 14 different players from 10 different clubs have won or shared the top scorers title. Thierry Henry
won his fourth overall scoring title by scoring 27 goals in the 2005–06 season. Andrew Cole and Alan Shearer hold
the record for most goals in a season (34) – for Newcastle and Blackburn respectively. Ryan Giggs of Manchester United
holds the record for scoring goals in consecutive seasons, having scored in the first 21 seasons of the league.
The Premier League trophy's main body is solid sterling silver and silver gilt, while its plinth is made of malachite, a semi-precious stone.
The plinth has a silver band around its circumference, upon which the names of the title-winning clubs are listed.
Malachite's green colour is also representative of the green field of play. The design of the trophy is based on
the heraldry of Three Lions that is associated with English football. Two of the lions are found above the handles
on either side of the trophy – the third is symbolised by the captain of the title winning team as he raises the
trophy, and its gold crown, above his head at the end of the season. The ribbons that drape the handles are presented
in the team colours of the league champions that year.
The Roman Republic (Latin: Res publica Romana; Classical Latin: [ˈreːs ˈpuːb.lɪ.ka roːˈmaː.na]) was the period of ancient
Roman civilization beginning with the overthrow of the Roman Kingdom, traditionally dated to 509 BC, and ending in
27 BC with the establishment of the Roman Empire. It was during this period that Rome's control expanded from the
city's immediate surroundings to hegemony over the entire Mediterranean world. During the first two centuries of
its existence, the Roman Republic expanded through a combination of conquest and alliance, from central Italy to
the entire Italian peninsula. By the following century, it included North Africa, Spain, and what is now southern
France. Two centuries after that, towards the end of the 1st century BC, it included the rest of modern France, Greece,
and much of the eastern Mediterranean. By this time, internal tensions led to a series of civil wars, culminating
with the assassination of Julius Caesar, which led to the transition from republic to empire. The exact date of transition
can be a matter of interpretation. Historians have variously proposed Julius Caesar's crossing of the Rubicon River
in 49 BC, Caesar's appointment as dictator for life in 44 BC, and the defeat of Mark Antony and Cleopatra at the
Battle of Actium in 31 BC. However, most use the same date as did the ancient Romans themselves, the Roman Senate's
grant of extraordinary powers to Octavian and his adopting the title Augustus in 27 BC, as the defining event ending
the Republic. Roman government was headed by two consuls, elected annually by the citizens and advised by a senate
composed of appointed magistrates. As Roman society was very hierarchical by modern standards, the evolution of the
Roman government was heavily influenced by the struggle between the patricians, Rome's land-holding aristocracy,
who traced their ancestry to the founding of Rome, and the plebeians, the far more numerous citizen-commoners. Over
time, the laws that gave patricians exclusive rights to Rome's highest offices were repealed or weakened, and leading
plebeian families became full members of the aristocracy. The leaders of the Republic developed a strong tradition
and morality requiring public service and patronage in peace and war, making military and political success inextricably
linked. Many of Rome's legal and legislative structures (later codified into the Justinian Code, and again into the
Napoleonic Code) can still be observed throughout Europe and much of the world in modern nation states and international
organizations. The exact causes and motivations for Rome's military conflicts and expansions during the republic
are subject to wide debate. While they can be seen as motivated by outright aggression and imperialism, historians
typically take a much more nuanced view. They argue that Rome's expansion was driven by short-term defensive and
inter-state factors (that is, relations with city-states and kingdoms outside Rome's hegemony), and the new contingencies
that these decisions created. In its early history, as Rome successfully defended itself against foreign threats
in central and then northern Italy, neighboring city-states sought the protection a Roman alliance would bring. As
such, early republican Rome was not an "empire" or "state" in the modern sense, but an alliance of independent city-states
(similar to the Greek hegemonies of the same period) with varying degrees of genuine independence (which itself changed
over time) engaged in an alliance of mutual self-protection, but led by Rome. With some important exceptions, successful
wars in early republican Rome generally led not to annexation or military occupation, but to the restoration of the
way things were. But the defeated city would be weakened (sometimes with outright land concessions) and thus less
able to resist Romanizing influences, such as Roman settlers seeking land or trade with the growing Roman confederacy.
It was also less able to defend itself against its non-Roman enemies, which made attack by these enemies more likely.
It was, therefore, more likely to seek an alliance of protection with Rome. This growing coalition expanded the potential
enemies that Rome might face, and moved Rome closer to confrontation with major powers. The result was more alliance-seeking,
on the part of both the Roman confederacy and city-states seeking membership (and protection) within that confederacy.
While there were exceptions to this (such as military rule of Sicily after the First Punic War), it was not until
after the Second Punic War that these alliances started to harden into something more like an empire, at least in
certain locations. This shift mainly took place in parts of the west, such as the southern Italian towns that sided
with Hannibal. In contrast, Roman expansion into Spain and Gaul occurred as a mix of alliance-seeking and military
occupation. In the 2nd century BC, Roman involvement in the Greek east remained a matter of alliance-seeking, but
this time in the face of major powers that could rival Rome. According to Polybius, who sought to trace how Rome
came to dominate the Greek east in less than a century, this was mainly a matter of several Greek city-states seeking
Roman protection against the Macedonian kingdom and Seleucid Empire in the face of destabilisation created by the
weakening of Ptolemaic Egypt. In contrast to the west, the Greek east had been dominated by major empires for centuries,
and Roman influence and alliance-seeking led to wars with these empires that further weakened them and therefore
created an unstable power vacuum that only Rome could fill. This had some important similarities to (and important
differences from) the events in Italy centuries earlier, but this time on a global scale. Historians see the growing
Roman influence over the east, as with the west, as not a matter of intentional empire-building, but constant crisis
management narrowly focused on short-term goals within a highly unstable, unpredictable, and inter-dependent network
of alliances and dependencies. With some major exceptions of outright military rule, the Roman Republic remained
an alliance of independent city-states and kingdoms (with varying degrees of independence, both de jure and de facto)
until it transitioned into the Roman Empire. It was not until the time of the Roman Empire that the entire Roman
world was organized into provinces under explicit Roman control. The first Roman republican wars were wars of both
expansion and defence, aimed at protecting Rome itself from neighbouring cities and nations and establishing its
territory in the region. Initially, Rome's immediate neighbours were either Latin towns and villages, or else tribal
Sabines from the Apennine hills beyond. One by one Rome defeated both the persistent Sabines and the local cities,
both those under Etruscan control and those that had cast off their Etruscan rulers. Rome defeated Latin cities in
the Battle of Lake Regillus in 496 BC, the Battle of Mons Algidus in 458 BC, the Battle of Corbione in 446 BC, and the
Battle of Aricia; at the Battle of the Cremera in 477 BC it fought the most important Etruscan city, Veii. By 390 BC,
several Gallic tribes were invading Italy from the north as their culture expanded
throughout Europe. The Romans were alerted to this when a particularly warlike tribe invaded two Etruscan towns close
to Rome's sphere of influence. These towns, overwhelmed by the enemy's numbers and ferocity, called on Rome for help.
The Romans met the Gauls in pitched battle at the Battle of the Allia around 390–387 BC. The Gauls, led by chieftain
Brennus, defeated the Roman army of approximately 15,000 troops, pursued the fleeing Romans back to Rome, and sacked
the city before being either driven off or bought off. Romans and Gauls continued to war intermittently in Italy
for more than two centuries. After recovering surprisingly fast from the sack of Rome, the Romans
immediately resumed their expansion within Italy. The First Samnite War from 343 BC to 341 BC was relatively short:
the Romans beat the Samnites in two battles, but were forced to withdraw before they could pursue the conflict further
due to the revolt of several of their Latin allies in the Latin War. Rome beat the Latins in the Battle of Vesuvius
and again in the Battle of Trifanum, after which the Latin cities were obliged to submit to Roman rule. In the Pyrrhic
War (280–275 BC), King Pyrrhus of Epirus invaded southern Italy in support of the Greek cities there. Despite early
victories, Pyrrhus found his position in Italy untenable. Rome steadfastly refused to negotiate with Pyrrhus as long
as his army remained in Italy. Facing unacceptably heavy losses from each encounter with the Roman army, Pyrrhus
withdrew from the peninsula (hence the term "Pyrrhic victory"). In 275 BC, Pyrrhus again met the Roman army at the
Battle of Beneventum. While Beneventum was indecisive, Pyrrhus realised his army had been exhausted and reduced by
years of foreign campaigns. Seeing little hope for further gains, he withdrew completely from Italy. In the First Punic
War (264–241 BC), Rome fought Carthage for control of Sicily. The first few naval battles were disasters for Rome.
However, after training more sailors and inventing a grappling engine, a Roman
naval force was able to defeat a Carthaginian fleet, and further naval victories followed. The Carthaginians then
hired Xanthippus of Carthage, a Spartan mercenary general, to reorganise and lead their army. He cut off the Roman
army from its base by re-establishing Carthaginian naval supremacy. The Romans then again defeated the Carthaginians
in naval battle at the Battle of the Aegates Islands and left Carthage with neither a fleet nor sufficient coin to
raise one. For a maritime power the loss of their access to the Mediterranean stung financially and psychologically,
and the Carthaginians sued for peace. In the Second Punic War (218–201 BC), the Carthaginian general Hannibal crossed
the Alps and invaded Italy. The Romans held off Hannibal in three battles, but then Hannibal smashed a
succession of Roman consular armies. By this time Hannibal's brother Hasdrubal Barca sought to cross the Alps into
Italy and join his brother with a second army. Hasdrubal managed to break through into Italy only to be defeated
decisively on the Metaurus River. Unable to defeat Hannibal on Italian soil, the Romans boldly sent an army to Africa
under Scipio Africanus to threaten the Carthaginian capital. Hannibal was recalled to Africa, and defeated at the
Battle of Zama. Carthage never recovered militarily after the Second Punic War, but it quickly recovered economically,
and the Third Punic War that followed was in reality a simple punitive mission after the neighbouring Numidians, allied
to Rome, raided Carthaginian merchants. Treaties had forbidden any war with Roman allies, and defence against such raids
was considered a "war action": Rome decided to annihilate the city of Carthage. Carthage was almost defenceless, and
submitted when besieged. However, the Romans demanded complete surrender and the removal of the city to the desert
inland, far from any coast or harbour, and the Carthaginians refused. The city was besieged, stormed, and completely
destroyed. Ultimately, all of Carthage's North African and Iberian territories were acquired by Rome. Carthage itself
was not an empire but a league of Punic colonies (port cities in the western Mediterranean), like the first and second
Athenian leagues, under the leadership of Carthage. Punic Carthage was gone, but the other Punic cities in the western
Mediterranean flourished under Roman rule. Rome's preoccupation with its war with Carthage
provided an opportunity for Philip V of the kingdom of Macedonia, located in the north of the Greek peninsula, to
attempt to extend his power westward. Philip sent ambassadors to Hannibal's camp in Italy, to negotiate an alliance
as common enemies of Rome. However, Rome discovered the agreement when Philip's emissaries were captured by a Roman
fleet. The First Macedonian War saw the Romans involved directly in only limited land operations, but they ultimately
achieved their objective of keeping Philip occupied and preventing him from aiding Hannibal. The past century had seen
the Greek world dominated by the three primary successor kingdoms of Alexander the Great's empire: Ptolemaic Egypt,
Macedonia and the Seleucid Empire. In 202 BC, internal problems led to a weakening of Egypt's position, thereby disrupting
the power balance among the successor states. Macedonia and the Seleucid Empire agreed to an alliance to conquer
and divide Egypt. Fearing this increasingly unstable situation, several small Greek kingdoms sent delegations to
Rome to seek an alliance. The delegation succeeded, even though prior Greek attempts to involve Rome in Greek affairs
had been met with Roman apathy. Our primary source for these events, the surviving work of Polybius, does not state
Rome's reason for getting involved. Rome gave Philip an ultimatum to cease his campaigns against Rome's new Greek
allies. Doubting Rome's strength (a reasonable doubt, given Rome's performance in the First Macedonian War) Philip
ignored the request, and Rome sent an army of Romans and Greek allies, beginning the Second Macedonian War. Despite
his recent successes against the Greeks and earlier successes against Rome, Philip's army buckled under the pressure
from the Roman-Greek army. In 197 BC, the Romans decisively defeated Philip at the Battle of Cynoscephalae, and Philip
was forced to give up his recent Greek conquests. The Romans declared the "Peace of the Greeks", believing that Philip's
defeat now meant that Greece would be stable. They pulled out of Greece entirely, maintaining minimal contacts with
their Greek allies. With Egypt and Macedonia weakened, the Seleucid Empire made increasingly aggressive and successful
attempts to conquer the entire Greek world. Now not only Rome's allies against Philip, but even Philip himself, sought
a Roman alliance against the Seleucids. The situation was made worse by the fact that Hannibal was now a chief military
advisor to the Seleucid emperor, and the two were believed to be planning an outright conquest not just of Greece,
but of Rome itself. The Seleucids were much stronger than the Macedonians had ever been, because they controlled
much of the former Persian Empire, and by now had almost entirely reassembled Alexander the Great's former empire.
Fearing the worst, the Romans began a major mobilization, all but pulling out of recently pacified Spain and Gaul.
They even established a major garrison in Sicily in case the Seleucids ever got to Italy. This fear was shared by
Rome's Greek allies, who had largely ignored Rome in the years after the Second Macedonian War, but now followed
Rome again for the first time since that war. A major Roman-Greek force was mobilized under the command of the great
hero of the Second Punic War, Scipio Africanus, and set out for Greece, beginning the Roman-Syrian War. After initial
fighting that revealed serious Seleucid weaknesses, the Seleucids tried to block the Roman advance at the narrow pass
of Thermopylae (as they believed the 300 Spartans had done centuries earlier). Like the Spartans, the
Seleucids lost the battle, and were forced to evacuate Greece. The Romans pursued the Seleucids by crossing the Hellespont,
which marked the first time a Roman army had ever entered Asia. The decisive engagement was fought at the Battle
of Magnesia, resulting in a complete Roman victory. The Seleucids sued for peace, and Rome forced them to give up
their recent Greek conquests. Although they still controlled a great deal of territory, this defeat marked the decline
of their empire, as they were to begin facing increasingly aggressive subjects in the east (the Parthians) and the
west (the Greeks). Their empire disintegrated into a rump state over the course of the next century, until it was eclipsed
by Pontus. Following Magnesia, Rome again withdrew from Greece, assuming (or hoping) that the lack of a major Greek
power would ensure a stable peace. In fact, it did the opposite. In 179 BC Philip died. His talented and ambitious
son, Perseus, took the throne and showed a renewed interest in conquering Greece. With her Greek allies facing a
major new threat, Rome declared war on Macedonia again, starting the Third Macedonian War. Perseus initially had
some success against the Romans. However, Rome responded by sending a stronger army. This second consular army decisively
defeated the Macedonians at the Battle of Pydna in 168 BC and the Macedonians duly capitulated, ending the war. Convinced
now that the Greeks (and therefore the rest of the region) would not have peace if left alone, Rome decided to establish
its first permanent foothold in the Greek world, and divided the Kingdom of Macedonia into four client republics.
Yet, Macedonian agitation continued. The Fourth Macedonian War, 150 to 148 BC, was fought against a Macedonian pretender
to the throne who was again destabilizing Greece by trying to re-establish the old kingdom. The Romans swiftly defeated
the Macedonians at the Second Battle of Pydna. The Jugurthine War of 111–104 BC was fought between Rome and Jugurtha
of the North African kingdom of Numidia. It constituted the final Roman pacification of Northern Africa, after which
Rome largely ceased expansion on the continent after reaching natural barriers of desert and mountain. Following
Jugurtha's usurpation of the throne of Numidia, a loyal ally of Rome since the Punic Wars, Rome felt compelled to
intervene. Jugurtha at first impudently bribed the Romans into accepting his usurpation, but after war broke out he was
finally captured not in battle but by treachery. In 121 BC, Rome came into contact with two Celtic tribes (from a region in modern France),
both of which they defeated with apparent ease. The Cimbrian War (113–101 BC) was a far more serious affair than
the earlier clashes of 121 BC. The Germanic tribes of the Cimbri and the Teutons migrated from northern Europe into
Rome's northern territories, and clashed with Rome and her allies. At the Battle of Aquae Sextiae and the Battle
of Vercellae both tribes were virtually annihilated, which ended the threat. The extensive campaigning abroad by
Roman generals, and the rewarding of soldiers with plunder on these campaigns, led to a general trend of soldiers
becoming increasingly loyal to their generals rather than to the state. Rome was also plagued by several slave uprisings
during this period, in part because vast tracts of land had been given over to slave farming in which the slaves
greatly outnumbered their Roman masters. In the 1st century BC at least twelve civil wars and rebellions occurred.
This pattern continued until 27 BC, when Octavian (later Augustus) successfully challenged the Senate's authority,
and was made princeps (first citizen). Between 135 BC and 71 BC there were three "Servile Wars" involving slave uprisings
against the Roman state. The third and final uprising was the most serious, involving ultimately between 120,000
and 150,000 slaves under the command of the gladiator Spartacus. In 91 BC the Social War broke out between Rome and
its former allies in Italy when the allies complained that they shared the risk of Rome's military campaigns, but
not its rewards. Although they lost militarily, the allies achieved their objectives with legal proclamations which
granted citizenship to more than 500,000 Italians. The internal unrest reached its most serious state, however, in
the two civil wars that were caused by the clash between generals Gaius Marius and Lucius Cornelius Sulla starting
from 88 BC. In the Battle of the Colline Gate at the very door of the city of Rome, a Roman army under Sulla bested
an army of the Marius supporters and entered the city. Sulla's actions marked a watershed in the willingness of Roman
troops to wage war against one another that was to pave the way for the wars which ultimately overthrew the Republic,
and caused the founding of the Roman Empire. Mithridates the Great was the ruler of Pontus, a large kingdom in Asia
Minor (modern Turkey), from 120 to 63 BC. Mithridates antagonised Rome by seeking to expand his kingdom, and Rome
for her part seemed equally eager for war and the spoils and prestige that it might bring. In 88 BC, Mithridates
ordered the killing of a majority of the 80,000 Romans living in his kingdom. The massacre was the official reason
given for the commencement of hostilities in the First Mithridatic War. The Roman general Lucius Cornelius Sulla
forced Mithridates out of Greece proper, but then had to return to Italy to answer the internal threat posed by his
rival, Gaius Marius. A peace was made between Rome and Pontus, but this proved only a temporary lull. During his
term as praetor in the Iberian Peninsula (modern Portugal and Spain), Pompey's contemporary Julius Caesar defeated
two local tribes in battle. After his term as consul in 59 BC, he was appointed to a five-year term as the proconsular
Governor of Cisalpine Gaul (part of current northern Italy), Transalpine Gaul (current southern France) and Illyria
(part of the modern Balkans). Not content with an idle governorship, Caesar strove to find reason to invade Gaul
(modern France and Belgium), which would give him the dramatic military success he sought. When two local tribes
began to migrate on a route that would take them near (not into) the Roman province of Transalpine Gaul, Caesar had
the barely sufficient excuse he needed for his Gallic Wars, fought between 58 BC and 49 BC. By 59 BC an unofficial
political alliance known as the First Triumvirate was formed between Gaius Julius Caesar, Marcus Licinius Crassus,
and Gnaeus Pompeius Magnus ("Pompey the Great") to share power and influence. In 53 BC, Crassus launched a Roman
invasion of the Parthian Empire (modern Iraq and Iran). After initial successes, he marched his army deep into the
desert; but here his army was cut off deep in enemy territory, surrounded and slaughtered at the Battle of Carrhae
in which Crassus himself perished. The death of Crassus removed some of the balance in the Triumvirate and, consequently,
Caesar and Pompey began to move apart. While Caesar was fighting in Gaul, Pompey proceeded with a legislative agenda
for Rome that revealed that he was at best ambivalent towards Caesar and perhaps now covertly allied with Caesar's
political enemies. In 51 BC, some Roman senators demanded that Caesar not be permitted to stand for consul unless
he turned over control of his armies to the state, which would have left Caesar defenceless before his enemies. Caesar
chose civil war over laying down his command and facing trial. By the spring of 49 BC, the hardened legions of Caesar
crossed the river Rubicon, the legal boundary of Roman Italy beyond which no commander might bring his army, and
swept down the Italian peninsula towards Rome, while Pompey ordered the abandonment of Rome. Afterwards Caesar first
turned his attention to the Pompeian stronghold of Hispania (modern Spain), then tackled Pompey himself in Greece.
Pompey initially defeated Caesar, but failed to follow up on the victory, and was decisively defeated at the Battle
of Pharsalus in 48 BC, despite outnumbering Caesar's forces two to one, albeit with inferior quality troops. Pompey
fled again, this time to Egypt, where he was murdered. Caesar was now the primary figure of the Roman state, enforcing
and entrenching his powers. His enemies feared that he had ambitions to become an autocratic ruler. Arguing that
the Roman Republic was in danger, a group of senators hatched a conspiracy and assassinated Caesar at a meeting of
the Senate in March 44 BC. Mark Antony, Caesar's lieutenant, condemned Caesar's assassination, and war broke out
between the two factions. Antony was denounced as a public enemy, and Caesar's adopted son and chosen heir, Gaius
Octavianus, was entrusted with the command of the war against him. At the Battle of Mutina Mark Antony was defeated
by the consuls Hirtius and Pansa, who were both killed. However, civil war flared again when the Second Triumvirate
of Octavian, Lepidus and Mark Antony failed. The ambitious Octavian built a power base of patronage and then launched
a campaign against Mark Antony. At the naval Battle of Actium off the coast of Greece, Octavian decisively defeated
Antony and Cleopatra. Octavian was granted a series of special powers including sole "imperium" within the city of
Rome, permanent consular powers and credit for every Roman military victory, since all future generals were assumed
to be acting under his command. In 27 BC Octavian was granted the use of the names "Augustus" and "Princeps", indicating
his primary status above all other Romans, and he adopted the title "Imperator Caesar", making him the first Roman Emperor.
The last king of the Roman Kingdom, Lucius Tarquinius Superbus, was overthrown in 509 BC by a group of noblemen
led by Lucius Junius Brutus. Tarquin made a number of attempts to retake the throne, including the Tarquinian conspiracy,
the war with Veii and Tarquinii and finally the war between Rome and Clusium, all of which failed to achieve Tarquin's
objectives. The most important constitutional change during the transition from kingdom to republic concerned the
chief magistrate. Before the revolution, a king would be elected by the senators for a life term. Now, two consuls
were elected by the citizens for an annual term. Each consul would check his colleague, and their limited term in
office would open them up to prosecution if they abused the powers of their office. Consular political powers, when
exercised conjointly with a consular colleague, were no different from those of the old king. In 494 BC, the city
was at war with two neighboring tribes. The plebeian soldiers refused to march against the enemy, and instead seceded
to the Aventine Hill. The plebeians demanded the right to elect their own officials. The patricians agreed, and the
plebeians returned to the battlefield. The plebeians called these new officials "plebeian tribunes". The tribunes
would have two assistants, called "plebeian aediles". During the 5th century BC, a series of reforms were passed.
The result of these reforms was that any law passed by the Plebeian Council would have the full force of law. In 443 BC,
the censorship was created. From 375 BC to 371 BC, the republic experienced a constitutional crisis during which
the tribunes used their vetoes to prevent the election of senior magistrates. After the consulship had been opened
to the plebeians, the plebeians were able to hold both the dictatorship and the censorship. Plebiscites of 342 BC
placed limits on political offices; an individual could hold only one office at a time, and ten years must elapse
between the end of his official term and his re-election. Further laws attempted to relieve the burden of debt from
plebeians by banning interest on loans. In 337 BC, the first plebeian praetor was elected. During these years, the
tribunes and the senators grew increasingly close. The senate realised the need to use plebeian officials to accomplish
desired goals. To win over the tribunes, the senators gave the tribunes a great deal of power and the tribunes began
to feel obligated to the senate. As the tribunes and the senators grew closer, plebeian senators were often able
to secure the tribunate for members of their own families. In time, the tribunate became a stepping stone to higher
office. Shortly before 312 BC, the Plebeian Council enacted the Plebiscitum Ovinium. During the early republic,
only consuls could appoint new senators. This initiative, however, transferred this power to the censors. It also
required the censor to appoint any newly elected magistrate to the senate. By this point, plebeians were already
holding a significant number of magisterial offices. Thus, the number of plebeian senators probably increased quickly.
However, it remained difficult for a plebeian to enter the senate if he was not from a well-known political family,
as a new patrician-like plebeian aristocracy emerged. The old nobility existed through the force of law, because
only patricians were allowed to stand for high office. The new nobility existed due to the organization of society.
As such, only a revolution could overthrow this new structure. By 287 BC, the economic condition of the average plebeian
had become poor. The problem appears to have centered around widespread indebtedness. The plebeians demanded relief,
but the senators refused to address their situation. The result was the final plebeian secession. The plebeians seceded
to the Janiculum hill. To end the secession, a dictator was appointed. The dictator passed a law (the Lex Hortensia),
which ended the requirement that the patrician senators must agree before any bill could be considered by the Plebeian
Council. This was not the first law to require that an act of the Plebeian Council have the full force of law. The
Plebeian Council acquired this power during a modification to the original Valerian law in 449 BC. The significance
of this law was in the fact that it robbed the patricians of their final weapon over the plebeians. The result was
that control over the state fell, not onto the shoulders of voters, but to the new plebeian nobility. The plebeians
had finally achieved political equality with the patricians. However, the plight of the average plebeian had not
changed. A small number of plebeian families achieved the same standing that the old aristocratic patrician families
had always had, but the new plebeian aristocrats became as uninterested in the plight of the average plebeian as
the old patrician aristocrats had always been. The plebeians rebelled by leaving Rome and refusing to return until
they had more rights. The patricians then noticed how much they needed the plebeians and accepted their terms. The
plebeians then returned to Rome and continued their work. The Hortensian Law deprived the patricians of their last
weapon against the plebeians, and thus resolved the last great political question of the era. No such important political
changes occurred between 287 BC and 133 BC. The important laws of this era were still enacted by the senate. In effect,
the plebeians were satisfied with the possession of power, but did not care to use it. The senate was supreme during
this era because the era was dominated by questions of foreign and military policy. This was the most militarily
active era of the Roman Republic. In the final decades of this era many plebeians grew poorer. The long military
campaigns had forced citizens to leave their farms to fight, while their farms fell into disrepair. The landed aristocracy
began buying bankrupted farms at discounted prices. As commodity prices fell, many farmers could no longer operate
their farms at a profit. The result was the ultimate bankruptcy of countless farmers. Masses of unemployed plebeians
soon began to flood into Rome, and thus into the ranks of the legislative assemblies. Their poverty usually led them
to vote for the candidate who offered them the most. A new culture of dependency was emerging, in which citizens
would look to any populist leader for relief. Tiberius Gracchus was elected tribune in 133 BC. He attempted to enact
a law which would have limited the amount of land that any individual could own. The aristocrats, who stood to lose
an enormous amount of money, were bitterly opposed to this proposal. Tiberius submitted this law to the Plebeian
Council, but the law was vetoed by a tribune named Marcus Octavius. Tiberius then used the Plebeian Council to impeach
Octavius. The theory that a representative of the people ceases to be one when he acts against the wishes of the people was counter to Roman constitutional theory. If carried to its logical end, this theory would remove all constitutional
restraints on the popular will, and put the state under the absolute control of a temporary popular majority. His
law was enacted, but Tiberius was murdered with 300 of his associates when he stood for reelection to the tribunate.
Tiberius' brother Gaius was elected tribune in 123 BC. Gaius Gracchus' ultimate goal was to weaken the senate and
to strengthen the democratic forces. In the past, for example, the senate would eliminate political rivals either
by establishing special judicial commissions or by passing a senatus consultum ultimum ("ultimate decree of the senate").
Both devices would allow the Senate to bypass the ordinary due process rights that all citizens had. Gaius outlawed
the judicial commissions, and declared the senatus consultum ultimum to be unconstitutional. Gaius then proposed
a law which would grant citizenship rights to Rome's Italian allies. This last proposal was not popular with the
plebeians and he lost much of his support. He stood for election to a third term in 121 BC, but was defeated and
then murdered by representatives of the senate with 3,000 of his supporters on Capitoline Hill in Rome. Though the
senate retained control, the Gracchi had strengthened the political influence of the plebeians. In 118 BC, King Micipsa
of Numidia (current-day Algeria and Tunisia) died. He was succeeded by two legitimate sons, Adherbal and Hiempsal,
and an illegitimate son, Jugurtha. Micipsa divided his kingdom between these three sons. Jugurtha, however, turned
on his brothers, killing Hiempsal and driving Adherbal out of Numidia. Adherbal fled to Rome for assistance, and
initially Rome mediated a division of the country between the two brothers. Eventually, Jugurtha renewed his offensive,
leading to a long and inconclusive war with Rome. He also bribed several Roman commanders, and at least two tribunes,
before and during the war. His nemesis, Gaius Marius, a legate from a virtually unknown provincial family, returned
from the war in Numidia and was elected consul in 107 BC over the objections of the aristocratic senators. Marius
invaded Numidia and brought the war to a quick end, capturing Jugurtha in the process. The apparent incompetence
of the Senate, and the brilliance of Marius, had been put on full display. The populares party took full advantage
of this opportunity by allying itself with Marius. Several years later, in 88 BC, a Roman army was sent to put down
an emerging Asian power, king Mithridates of Pontus. The army, however, was defeated. One of Marius' old quaestors,
Lucius Cornelius Sulla, had been elected consul for the year, and was ordered by the senate to assume command of
the war against Mithridates. Marius, a member of the "populares" party, had a tribune revoke Sulla's command of the
war against Mithridates. Sulla, a member of the aristocratic ("optimates") party, brought his army back to Italy
and marched on Rome. Sulla was so angry at Marius' tribune that he passed a law intended to permanently weaken the
tribunate. He then returned to his war against Mithridates. With Sulla gone, the populares under Marius and Lucius
Cornelius Cinna soon took control of the city. During the period in which the populares party controlled the city,
they flouted convention by re-electing Marius consul several times without observing the customary ten-year interval
between offices. They also transgressed the established oligarchy by advancing unelected individuals to magisterial
office, and by substituting magisterial edicts for popular legislation. Sulla soon made peace with Mithridates. In
83 BC, he returned to Rome, overcame all resistance, and recaptured the city. Sulla and his supporters then slaughtered
most of Marius' supporters. Sulla, having observed the violent results of radical popular reforms, was naturally
conservative. As such, he sought to strengthen the aristocracy, and by extension the senate. Sulla made himself dictator,
passed a series of constitutional reforms, resigned the dictatorship, and served one last term as consul. He died
in 78 BC. In 77 BC, the senate sent one of Sulla's former lieutenants, Gnaeus Pompeius Magnus ("Pompey the Great"),
to put down an uprising in Spain. By 71 BC, Pompey returned to Rome after having completed his mission. Around the
same time, another of Sulla's former lieutenants, Marcus Licinius Crassus, had just put down the Spartacus-led gladiator/slave
revolt in Italy. Upon their return, Pompey and Crassus found the populares party fiercely attacking Sulla's constitution.
They attempted to forge an agreement with the populares party. If both Pompey and Crassus were elected consul in
70 BC, they would dismantle the more obnoxious components of Sulla's constitution. The two were soon elected, and
quickly dismantled most of Sulla's constitution. Around 66 BC, a movement began that sought to use constitutional, or at least peaceful, means to address the plight of various classes. After several failures, the movement's leaders decided to use
any means that were necessary to accomplish their goals. The movement coalesced under an aristocrat named Lucius
Sergius Catilina. The movement was based in the town of Faesulae, which was a natural hotbed of agrarian agitation.
The rural malcontents were to advance on Rome, and be aided by an uprising within the city. After assassinating the
consuls and most of the senators, Catiline would be free to enact his reforms. The conspiracy was set in motion in
63 BC. The consul for the year, Marcus Tullius Cicero, intercepted messages that Catiline had sent in an attempt
to recruit more members. As a result, the top conspirators in Rome (including at least one former consul) were executed
by authorisation (of dubious constitutionality) of the senate, and the planned uprising was disrupted. Cicero then
sent an army, which cut Catiline's forces to pieces. In 62 BC, Pompey returned victorious from Asia. The Senate,
elated by its successes against Catiline, refused to ratify the arrangements that Pompey had made. Pompey, in effect,
became powerless. Thus, when Julius Caesar returned from a governorship in Spain in 61 BC, he found it easy to make
an arrangement with Pompey. Caesar and Pompey, along with Crassus, established a private agreement, now known as
the First Triumvirate. Under the agreement, Pompey's arrangements would be ratified. Caesar would be elected consul
in 59 BC, and would then serve as governor of Gaul for five years. Crassus was promised a future consulship. Caesar
became consul in 59 BC. His colleague, Marcus Calpurnius Bibulus, was an extreme aristocrat. Caesar submitted the
laws that he had promised Pompey to the assemblies. Bibulus attempted to obstruct the enactment of these laws, and
so Caesar used violent means to ensure their passage. Caesar was then made governor of three provinces. He facilitated
the election of the former patrician Publius Clodius Pulcher to the tribunate for 58 BC. Clodius set about depriving
Caesar's senatorial enemies of two of their more obstinate leaders in Cato and Cicero. Clodius was a bitter opponent
of Cicero because Cicero had testified against him in a sacrilege case. Clodius attempted to try Cicero for executing
citizens without a trial during the Catiline conspiracy, resulting in Cicero going into self-imposed exile and his
house in Rome being burnt down. Clodius also passed a bill that forced Cato to lead the invasion of Cyprus which
would keep him away from Rome for some years. Clodius also passed a law to expand the previous partial grain subsidy
to a fully free grain dole for citizens. Clodius formed armed gangs that terrorised the city and eventually began
to attack Pompey's followers, who in response funded counter-gangs formed by Titus Annius Milo. The political alliance
of the triumvirate was crumbling. Domitius Ahenobarbus ran for the consulship in 55 BC promising to take Caesar's
command from him. Eventually, the triumvirate was renewed at Lucca. Pompey and Crassus were promised the consulship
in 55 BC, and Caesar's term as governor was extended for five years. Crassus led an ill-fated expedition with legions
led by his son, Caesar's lieutenant, against the Kingdom of Parthia. This resulted in his defeat and death at the
Battle of Carrhae. Finally, Pompey's wife, Julia, who was Caesar's daughter, died in childbirth. This event severed
the last remaining bond between Pompey and Caesar. Beginning in the summer of 54 BC, a wave of political corruption
and violence swept Rome. This chaos reached a climax in January of 52 BC, when Clodius was murdered in a gang war
by Milo. On 1 January 49 BC, an agent of Caesar presented an ultimatum to the senate. The ultimatum was rejected,
and the senate then passed a resolution which declared that if Caesar did not lay down his arms by July of that year,
he would be considered an enemy of the Republic. Meanwhile, the senators adopted Pompey as their new champion against
Caesar. On 7 January of 49 BC, the senate passed a senatus consultum ultimum, which vested Pompey with dictatorial
powers. Pompey's army, however, was composed largely of untested conscripts. On 10 January, Caesar crossed the Rubicon
with his veteran army (in violation of Roman laws) and marched towards Rome. Caesar's rapid advance forced Pompey,
the consuls and the senate to abandon Rome for Greece. Caesar entered the city unopposed. Caesar held both the dictatorship
and the tribunate, and alternated between the consulship and the proconsulship. In 48 BC, Caesar was given permanent
tribunician powers. This made his person sacrosanct, gave him the power to veto the senate, and allowed him to dominate
the Plebeian Council. In 46 BC, Caesar was given censorial powers, which he used to fill the senate with his own
partisans. Caesar then raised the membership of the Senate to 900. This robbed the senatorial aristocracy of its
prestige, and made it increasingly subservient to him. While the assemblies continued to meet, he submitted all candidates
to the assemblies for election, and all bills to the assemblies for enactment. Thus, the assemblies became powerless
and were unable to oppose him. Caesar was assassinated on March 15, 44 BC. The assassination was led by Gaius Cassius
and Marcus Brutus. Most of the conspirators were senators, who had a variety of economic, political, or personal
motivations for carrying out the assassination. Many were afraid that Caesar would soon resurrect the monarchy and
declare himself king. Others feared loss of property or prestige as Caesar carried out his land reforms in favor
of the landless classes. Virtually all the conspirators fled the city after Caesar's death in fear of retaliation.
The civil war that followed destroyed what was left of the Republic. After the assassination, Mark Antony formed
an alliance with Caesar's adopted son and great-nephew, Gaius Octavian. Along with Marcus Lepidus, they formed an
alliance known as the Second Triumvirate. They held powers that were nearly identical to the powers that Caesar had
held under his constitution. As such, the Senate and assemblies remained powerless, even after Caesar had been assassinated.
The conspirators were then defeated at the Battle of Philippi in 42 BC. Eventually, however, Antony and Octavian
fought against each other in one last battle. Antony was defeated in the naval Battle of Actium in 31 BC, and he
committed suicide with his lover, Cleopatra. In 29 BC, Octavian returned to Rome as the unchallenged master of the
Empire and later accepted the title of Augustus ("Exalted One"). He was convinced that only a single strong ruler
could restore order in Rome. As with most ancient civilizations, Rome's military served the triple purposes of securing
its borders, exploiting peripheral areas through measures such as imposing tribute on conquered peoples, and maintaining
internal order. From the outset, Rome's military typified this pattern and the majority of Rome's wars were characterized
by one of two types. The first is the foreign war, normally begun as a counter-offensive or defense of an ally. The
second is the civil war, which plagued the Roman Republic in its final century. Roman armies were not invincible,
despite their formidable reputation and host of victories. Over the centuries the Romans "produced their share of
incompetents" who led Roman armies into catastrophic defeats. Nevertheless, it was generally the fate of the greatest
of Rome's enemies, such as Pyrrhus and Hannibal, to win early battles but lose the war. The history of Rome's campaigning
is, if nothing else, a history of obstinate persistence overcoming appalling losses. During this period, Roman soldiers
seem to have been modelled after those of the Etruscans to the north, who themselves seem to have copied their style
of warfare from the Greeks. Traditionally, the introduction of the phalanx formation into the Roman army is ascribed
to the city's penultimate king, Servius Tullius (ruled 578 to 534 BC). According to Livy and Dionysius of Halicarnassus,
the front rank was composed of the wealthiest citizens, who were able to purchase the best equipment. Each subsequent
rank consisted of those with less wealth and poorer equipment than the one before it. One disadvantage of the phalanx
was that it was only effective when fighting in large, open spaces, which left the Romans at a disadvantage when
fighting in the hilly terrain of the central Italian peninsula. In the 4th century BC, the Romans abandoned the phalanx
in favour of the more flexible manipular formation. This change is sometimes attributed to Marcus Furius Camillus
and placed shortly after the Gallic invasion of 390 BC; it is more likely, however, that it was copied from Rome's
Samnite enemies to the south, possibly as a result of Samnite victories during the Second Samnite War (326 to 304
BC). The heavy infantry of the maniples were supported by a number of light infantry and cavalry troops, typically
300 horsemen per manipular legion. The cavalry was drawn primarily from the richest class of equestrians. There was
an additional class of troops who followed the army without specific martial roles and were deployed to the rear
of the third line. Their role in accompanying the army was primarily to supply any vacancies that might occur in
the maniples. The light infantry consisted of 1,200 unarmoured skirmishing troops drawn from the youngest and lower
social classes. They were armed with a sword and a small shield, as well as several light javelins. Rome's military
confederation with the other peoples of the Italian peninsula meant that half of Rome's army was provided by the
Socii, such as the Etruscans, Umbrians, Apulians, Campanians, Samnites, Lucani, Bruttii, and the various southern
Greek cities. Polybius states that Rome could draw on 770,000 men at the beginning of the Second Punic War, of which
700,000 were infantry and 70,000 met the requirements for cavalry. Rome's Italian allies would be organized in alae,
or wings, roughly equal in manpower to the Roman legions, though with 900 cavalry instead of 300. The extraordinary
demands of the Punic Wars, in addition to a shortage of manpower, exposed the tactical weaknesses of the manipular
legion, at least in the short term. In 217 BC, near the beginning of the Second Punic War, Rome was forced to effectively
ignore its long-standing principle that its soldiers must be both citizens and property owners. During the 2nd century
BC, Roman territory saw an overall decline in population, partially due to the huge losses incurred during various
wars. This was accompanied by severe social stresses and the greater collapse of the middle classes. As a result,
the Roman state was forced to arm its soldiers at the expense of the state, which it had not had to do in the past.
In a process known as the Marian reforms, Roman consul Gaius Marius carried out a programme of reform of the Roman
military. In 107 BC, all citizens, regardless of their wealth or social class, were made eligible for entry into
the Roman army. This move formalised and concluded a gradual, centuries-long process of removing
property requirements for military service. The distinction between the three heavy infantry classes, which had already
become blurred, had collapsed into a single class of heavy legionary infantry. The heavy infantry legionaries were
drawn from citizen stock, while non-citizens came to dominate the ranks of the light infantry. The army's higher-level
officers and commanders were still drawn exclusively from the Roman aristocracy. The legions of the late Republic
were, structurally, almost entirely heavy infantry. The legion's main sub-unit was called a cohort and consisted
of approximately 480 infantrymen. The cohort was therefore a much larger unit than the earlier maniple sub-unit,
and was divided into six centuries of 80 men each. Each century was separated further into 10 "tent groups" of 8
men each. The cavalry troops were used as scouts and dispatch riders rather than battlefield cavalry. Legions also
contained a dedicated group of artillery crew of perhaps 60 men. Each legion was normally partnered with an approximately
equal number of allied (non-Roman) troops. After having declined in size following the subjugation of the Mediterranean,
the Roman navy underwent short-term upgrading and revitalisation in the late Republic to meet several new demands.
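The unit arithmetic in the legion description above can be checked in a few lines. This is a minimal sketch; the ten-cohorts-per-legion figure is a conventional assumption, not something the text states:

```python
# Late-Republic legion hierarchy, using the approximate figures from the text:
# a century = 10 tent groups of 8 men; a cohort = 6 centuries (~480 men).
# COHORTS_PER_LEGION = 10 is the standard figure, assumed here for illustration.

MEN_PER_TENT_GROUP = 8
TENT_GROUPS_PER_CENTURY = 10
CENTURIES_PER_COHORT = 6
COHORTS_PER_LEGION = 10  # assumption, not in the text

men_per_century = MEN_PER_TENT_GROUP * TENT_GROUPS_PER_CENTURY  # 80 men
men_per_cohort = men_per_century * CENTURIES_PER_COHORT         # ~480 men, as stated
men_per_legion = men_per_cohort * COHORTS_PER_LEGION            # ~4,800 heavy infantry

print(men_per_century, men_per_cohort, men_per_legion)  # 80 480 4800
```

The stated 480-man cohort is thus consistent with the 80-man century figure.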
Under Caesar, an invasion fleet was assembled in the English Channel to allow the invasion of Britannia; under Pompey,
a large fleet was raised in the Mediterranean Sea to clear the sea of Cilician pirates. During the civil war that
followed, as many as a thousand ships were either constructed or pressed into service from Greek cities. The senate's
ultimate authority derived from the esteem and prestige of the senators. This esteem and prestige was based on both
precedent and custom, as well as the caliber and reputation of the senators. The senate passed decrees, which were
called senatus consulta. These were officially "advice" from the senate to a magistrate. In practice, however, they
were usually followed by the magistrates. The focus of the Roman senate was usually directed towards foreign policy.
Though it technically had no official role in the management of military conflict, the senate ultimately was the
force that oversaw such affairs. The power of the senate expanded over time as the power of the legislative assemblies
declined, and the senate took a greater role in ordinary law-making. Its members were usually appointed by Roman
Censors, who ordinarily selected newly elected magistrates for membership in the senate, making the senate a partially
elected body. During times of military emergency, such as the civil wars of the 1st century BC, this practice became
less prevalent, as the Roman Dictator, Triumvir or the senate itself would select its members. The legal status of
Roman citizenship was limited and was a vital prerequisite to possessing many important legal rights such as the
right to trial and appeal, to marry, to vote, to hold office, to enter binding contracts, and to special tax exemptions.
An adult male citizen with the full complement of legal and political rights was called "optimo jure." The optimo
jure elected their assemblies, whereupon the assemblies elected magistrates, enacted legislation, presided over trials
in capital cases, declared war and peace, and forged or dissolved treaties. There were two types of legislative assemblies.
The first was the comitia ("committees"), which were assemblies of all optimo jure. The second was the concilia ("councils"),
which were assemblies of specific groups of optimo jure. Citizens were organized on the basis of centuries and tribes,
which would each gather into their own assemblies. The Comitia Centuriata ("Centuriate Assembly") was the assembly
of the centuries (i.e. soldiers). The president of the Comitia Centuriata was usually a consul. The centuries would
vote, one at a time, until a measure received support from a majority of the centuries. The Comitia Centuriata would
elect magistrates who had imperium powers (consuls and praetors). It also elected censors. Only the Comitia Centuriata
could declare war, and ratify the results of a census. It also served as the highest court of appeal in certain judicial
cases. The assembly of the tribes (i.e. the citizens of Rome), the Comitia Tributa, was presided over by a consul,
and was composed of 35 tribes. The tribes were not ethnic or kinship groups, but rather geographical subdivisions.
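The group-by-group voting procedure described for the centuries, and used again for the tribes, amounts to a short-circuiting tally that stops as soon as an absolute majority of groups agrees. A minimal sketch, assuming nothing about Roman procedure beyond what the text states (the function name and return format are illustrative):

```python
def assembly_vote(group_votes):
    """Sequential group voting as described for the comitia: groups cast
    their votes one at a time, and counting stops as soon as an absolute
    majority of all groups has voted the same way.
    `group_votes` is a list of booleans (True = aye) in voting order."""
    total = len(group_votes)
    needed = total // 2 + 1          # absolute majority of groups
    ayes = nays = 0
    for i, vote in enumerate(group_votes, start=1):
        ayes += vote
        nays += not vote
        if ayes >= needed:
            return True, i           # measure carried; later groups never vote
        if nays >= needed:
            return False, i          # measure defeated early
    return ayes >= needed, total

# 35 tribes; the first 18 to vote all favour the measure,
# so voting ends after only 18 of the 35 tribes have voted.
votes = [True] * 18 + [False] * 17
print(assembly_vote(votes))  # (True, 18)
```

One consequence of this stopping rule is that groups voting late in the order often never voted at all.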
The order that the thirty-five tribes would vote in was selected randomly by lot. Once a measure received support
from a majority of the tribes, the voting would end. While it did not pass many laws, the Comitia Tributa did elect
quaestors, curule aediles, and military tribunes. The Plebeian Council was identical to the assembly of the tribes,
but excluded the patricians (the elite who could trace their ancestry to the founding of Rome). They elected their
own officers, plebeian tribunes and plebeian aediles. Usually a plebeian tribune would preside over the assembly.
This assembly passed most laws, and could also act as a court of appeal. Each republican magistrate held certain
constitutional powers. Only the People of Rome (both plebeians and patricians) had the right to confer these powers
on any individual magistrate. The most powerful constitutional power was imperium. Imperium was held by both consuls
and praetors. Imperium gave a magistrate the authority to command a military force. All magistrates also had the
power of coercion. This was used by magistrates to maintain public order. While in Rome, all citizens were protected against coercion. This protection was called provocatio (see below). Magistrates also had both the power and the
duty to look for omens. This power would often be used to obstruct political opponents. One check on a magistrate's
power was his collegiality. Each magisterial office would be held concurrently by at least two people. Another such
check was provocatio. Provocatio was a primordial form of due process. It was a precursor to habeas corpus. If any
magistrate tried to use the powers of the state against a citizen, that citizen could appeal the decision of the
magistrate to a tribune. In addition, once a magistrate's one-year term of office expired, he would have to wait
ten years before serving in that office again. This created problems for some consuls and praetors, and these magistrates
would occasionally have their imperium extended. In effect, they would retain the powers of the office (as a promagistrate),
without officially holding that office. The consuls of the Roman Republic were the highest ranking ordinary magistrates;
each consul served for one year. Consuls had supreme power in both civil and military matters. While in the city
of Rome, the consuls were the head of the Roman government. They would preside over the senate and the assemblies.
While abroad, each consul would command an army. His authority abroad would be nearly absolute. Praetors administered
civil law and commanded provincial armies. Every five years, two censors were elected for an 18-month term, during
which they would conduct a census. During the census, they could enroll citizens in the senate, or purge them from
the senate. Aediles were officers elected to conduct domestic affairs in Rome, such as managing public games and
shows. The quaestors would usually assist the consuls in Rome, and the governors in the provinces. Their duties were
often financial. Since the tribunes were considered to be the embodiment of the plebeians, they were sacrosanct.
Their sacrosanctity was enforced by a pledge, taken by the plebeians, to kill any person who harmed or interfered
with a tribune during his term of office. All of the powers of the tribune derived from their sacrosanctity. One
consequence was that it was considered a capital offense to harm a tribune, to disregard his veto, or to interfere
with a tribune. In times of military emergency, a dictator would be appointed for a term of six months. Constitutional
government would be dissolved, and the dictator would be the absolute master of the state. When the dictator's term
ended, constitutional government would be restored. Life in the Roman Republic revolved around the city of Rome,
and its famed seven hills. The city also had several theatres, gymnasiums, and many taverns, baths and brothels.
Throughout the territory under Rome's control, residential architecture ranged from very modest houses to country
villas, and in the capital city of Rome, to the residences on the elegant Palatine Hill, from which the word "palace"
is derived. The vast majority of the population lived in the city center, packed into apartment blocks. Many aspects of Roman culture were borrowed from the Greeks. In architecture and sculpture, the differences between Greek models and Roman paintings are apparent. The chief Roman contributions to architecture were the arch
and the dome. Rome has also had a tremendous impact on European cultures following it. Its significance is perhaps
best reflected in its endurance and influence, as is seen in the longevity and lasting importance of works of Virgil
and Ovid. Latin, the Republic's primary language, remains used for liturgical purposes by the Roman Catholic Church,
and up to the 19th century was used extensively in scholarly writings in, for example, science and mathematics. Roman
law laid the foundations for the laws of many European countries and their colonies. Slavery and
slaves were part of the social order; there were slave markets where they could be bought and sold. Many slaves were
freed by the masters for services rendered; some slaves could save money to buy their freedom. Generally, mutilation
and murder of slaves was prohibited by legislation. However, Rome did not have a law enforcement arm. All actions
were treated as "torts," which were brought by an accuser who was forced to prove the entire case himself. If the
accused were a noble and the victim was not, the likelihood of a finding against the accused was small. At most, the
accused might have to pay a fine for killing a slave. It is estimated that over 25% of the Roman population was enslaved.
Men typically wore a toga, and women a stola. The woman's stola differed in looks from a toga, and was usually brightly
coloured. The cloth and the dress distinguished one class of people from the other class. The tunic worn by plebeians,
or common people, like shepherds and slaves, was made from coarse and dark material, whereas the tunic worn by patricians
was of linen or white wool. A knight or magistrate would wear an angusticlavia, a tunic bearing narrow purple stripes. Senators wore tunics with broad purple stripes, called tunica laticlavia. Military tunics were shorter than the ones
worn by civilians. Boys, up until the festival of Liberalia, wore the toga praetexta, which was a toga with a crimson
or purple border. The toga virilis (or toga pura) was worn by men over the age of 16 to signify their citizenship in Rome. The toga picta was worn by triumphant generals and was embroidered with scenes of their deeds on the battlefield. The toga pulla was worn when in mourning. The staple foods were generally consumed around 11 o'clock,
and consisted of bread, lettuce, cheese, fruits, nuts, and cold meat left over from the dinner the night before. The Roman poet Horace mentions another Roman favorite, the olive, in reference to his own diet, which he
describes as very simple: "As for me, olives, endives, and smooth mallows provide sustenance." The family ate together,
sitting on stools around a table. Fingers were used to eat solid foods and spoons were used for soups.
Wine was considered the basic drink, consumed at all meals and occasions by all classes and was quite inexpensive.
Cato the Elder once advised cutting his rations in half to conserve wine for the workforce. Many types of drinks
involving grapes and honey were consumed as well. Drinking on an empty stomach was regarded as boorish and a sure
sign of alcoholism, the debilitating physical and psychological effects of which were known to the Romans. A credible
accusation of being an alcoholic was an effective way to discredit political rivals. Prominent Roman alcoholics included
Mark Antony, and Cicero's own son Marcus (Cicero Minor). Even Cato the Younger was known to be a heavy drinker. Following various military conquests in the Greek East, Romans adapted a number of Greek educational precepts
to their own fledgling system. They began physical training to prepare the boys to grow as Roman citizens and for
eventual recruitment into the army. Conforming to discipline was a point of great emphasis. Girls generally received
instruction from their mothers in the art of spinning, weaving, and sewing. Schooling in a more formal sense was
begun around 200 BC. Education began at the age of around six, and in the next six to seven years, boys and girls
were expected to learn the basics of reading, writing and counting. By the age of twelve, they would be learning
Latin, Greek, grammar and literature, followed by training for public speaking. Oratory was an art to be practiced
and learnt, and good orators commanded respect. The native language of the Romans was Latin. Although
surviving Latin literature consists almost entirely of Classical Latin, an artificial and highly stylised and polished
literary language from the 1st century BC, the actual spoken language was Vulgar Latin, which significantly differed
from Classical Latin in grammar, vocabulary, and eventually pronunciation. Rome's expansion spread Latin throughout
Europe, and over time Vulgar Latin evolved and dialectised in different locations, gradually shifting into a number
of distinct Romance languages. Many of these languages, including French, Italian, Portuguese, Romanian and Spanish,
flourished, the differences between them growing greater over time. Although English is Germanic rather than Romance in origin, English borrows heavily from Latin and Latin-derived words. Roman literature was from
its very inception influenced heavily by Greek authors. Some of the earliest works we possess are of historical epics
telling the early military history of Rome. As the republic expanded, authors began to produce poetry, comedy, history,
and tragedy. Virgil represents the pinnacle of Roman epic poetry. His Aeneid tells the story of the flight of Aeneas from Troy and his settlement of the city that would become Rome. Lucretius, in his On the Nature
of Things, attempted to explicate science in an epic poem. The genre of satire was common in Rome, and satires were
written by, among others, Juvenal and Persius. The rhetorical works of Cicero are considered[by whom?] to be some
of the best bodies of correspondence recorded in antiquity.[citation needed] Music was a major part of everyday life.[citation
needed] The word itself derives from Greek μουσική (mousike), "(art) of the Muses". Many private and public events
were accompanied by music, ranging from nightly dining to military parades and manoeuvres. In a discussion of any
ancient music, however, non-specialists and even many musicians have to be reminded that much of what makes our modern
music familiar to us is the result of developments only within the last 1,000 years; thus, our ideas of melody, scales,
harmony, and even the instruments we use may not have been familiar to Romans who made and listened to music many
centuries earlier.[citation needed] Over time, Roman architecture was modified as their urban requirements changed,
and civil engineering and building construction technology was developed and refined. Roman concrete has remained a riddle, and even after more than 2,000 years some Roman structures still stand magnificently. The architectural
style of the capital city was emulated by other urban centers under Roman control and influence. Roman cities were
well planned, efficiently managed and neatly maintained.[citation needed] The city of Rome had a place called the
Campus Martius ("Field of Mars"), which was a sort of drill ground for Roman soldiers. Later, the Campus became Rome's
track and field playground. In the campus, the youth assembled to play and exercise, which included jumping, wrestling,
boxing and racing.[citation needed] Equestrian sports, throwing, and swimming were also preferred physical activities.[citation
needed] In the countryside, pastimes included fishing and hunting.[citation needed] Board games played in Rome included
dice (Tesserae or Tali), Roman Chess (Latrunculi), Roman Checkers (Calculi), Tic-tac-toe (Terni Lapilli), and Ludus
duodecim scriptorum and Tabula, predecessors of backgammon. Other activities included chariot races, and musical
and theatrical performances.[citation needed] Roman religious beliefs date back to the founding of Rome, around 800
BC. However, the Roman religion commonly associated with the republic and early empire did not begin until around
500 BC, when Romans came into contact with Greek culture and adopted many Greek religious beliefs. Private
and personal worship was an important aspect of religious practices. In a sense, each household was a temple to the
gods. Each household had an altar (lararium), at which the family members would offer prayers, perform rites, and
interact with the household gods. Many of the gods that Romans worshiped came from the Proto-Indo-European pantheon,
others were based on Greek gods. The two most famous deities were Jupiter (the king of the gods) and Mars (the god of war).
With its cultural influence spreading over most of the Mediterranean, Romans began accepting foreign gods into their
own culture, as well as other philosophical traditions such as Cynicism and Stoicism.
It is generally considered that the Pacific War began on 7/8 December 1941, on which date Japan invaded Thailand and attacked
the British possessions of Malaya, Singapore, and Hong Kong as well as the United States military bases in Hawaii,
Wake Island, Guam and the Philippines. Some historians contend that the conflict in Asia can be dated back to 7 July
1937 with the beginning of the Second Sino-Japanese War between the Empire of Japan and the Republic of China, or
possibly 19 September 1931, beginning with the Japanese invasion of Manchuria. However, it is more widely accepted
that the Pacific War itself started in early December 1941, with the Sino-Japanese War then becoming part of it as
a theater of the greater World War II.[nb 9] The Pacific War saw the Allied powers pitted against the Empire of Japan,
the latter briefly aided by Thailand and to a much lesser extent by its Axis allies, Germany and Italy. The war culminated
in the atomic bombings of Hiroshima and Nagasaki, and other large aerial bomb attacks by the United States Army Air
Forces, accompanied by the Soviet invasion of Manchuria on 8 August 1945, resulting in the Japanese announcement
of intent to surrender on 15 August 1945. The formal and official surrender of Japan took place aboard the battleship
USS Missouri in Tokyo Bay on 2 September 1945. Following its defeat, Japan's emperor renounced his status as divine leader through the Shinto Directive, because the Allied Powers believed State Shinto was a major political cause of Japan's military aggression; a reconstruction process soon followed, giving the Japanese public a new liberal-democratic constitution, the current Constitution of Japan. Japan used the name Greater East Asia War (大東亜戦争, Dai Tō-A
Sensō?), as chosen by a cabinet decision on 10 December 1941, to refer to both the war with the Western Allies and
the ongoing war in China. This name was released to the public on 12 December, with an explanation that it involved
Asian nations achieving their independence from the Western powers through armed forces of the Greater East Asia
Co-Prosperity Sphere. Japanese officials integrated what they called the Japan–China Incident (日支事変, Nisshi Jihen?)
into the Greater East Asia War. The Axis states which assisted Japan included the authoritarian government of Thailand
in World War II, which quickly formed a temporary alliance with the Japanese in 1941, as the Japanese forces were
already invading the peninsula of southern Thailand. The Phayap Army sent troops to invade and occupy northeastern
Burma, which was former Thai territory that had been annexed by Britain much earlier. Also involved were the Japanese
puppet states of Manchukuo and Mengjiang (consisting of most of Manchuria and parts of Inner Mongolia respectively),
and the collaborationist Wang Jingwei regime (which controlled the coastal regions of China). The official policy
of the U.S. Government is that Thailand was not an ally of the Axis, and that the United States was not at war with
Thailand. The policy of the U.S. Government ever since 1945 has been to treat Thailand not as a former enemy, but
rather as a country which had been forced into certain actions by Japanese blackmail, before being occupied by Japanese
troops. Thailand has been treated by the United States in the same way as such other Axis-occupied countries as Belgium,
Czechoslovakia, Denmark, Greece, Norway, Poland, and the Netherlands. Japan conscripted many soldiers from its colonies
of Korea and Formosa (Taiwan). To a small extent, some Vichy French, Indian National Army, and Burmese National Army
forces were active in the area of the Pacific War. Collaborationist units from Hong Kong (reformed ex-colonial police),
Philippines, Dutch East Indies (the PETA) and Dutch New Guinea, British Malaya and British Borneo, Inner Mongolia and
former French Indochina (after the overthrow of Vichy French regime) as well as Timorese militia also assisted Japanese
war efforts. The major Allied participants were the United States, the Republic of China, the United Kingdom (including
the armed forces of British India, the Fiji Islands, Samoa, etc.), Australia, the Commonwealth of the Philippines,
the Netherlands (as the possessor of the Dutch East Indies and the western part of New Guinea), New Zealand, and
Canada, all of whom were members of the Pacific War Council. Mexico, Free France and many other countries also took
part, especially forces from other British colonies. By 1937, Japan controlled Manchuria and was ready to move deeper
into China. The Marco Polo Bridge Incident on 7 July 1937 provoked full-scale war between China and Japan. The Nationalist
and Communist Chinese suspended their civil war to form a nominal alliance against Japan, and the Soviet Union quickly
lent support by providing large amounts of materiel to Chinese troops. In August 1937, Generalissimo Chiang Kai-shek
deployed his best army to fight about 300,000 Japanese troops in Shanghai, but, after three months of fighting, Shanghai
fell. The Japanese continued to push the Chinese forces back, capturing the capital, Nanking, in December 1937 and committing what became known as the Nanking Massacre. In March 1938, Nationalist forces won their first victory at Taierzhuang, but the city of Xuzhou was then taken by the Japanese in May. In June 1938, Japan deployed about 350,000 troops to invade
Wuhan and captured it in October. The Japanese achieved major military victories, but world opinion—in particular
in the United States—condemned Japan, especially after the Panay Incident. In September 1940, Japan decided to cut
China's only land line to the outside world by seizing Indochina, which was controlled at the time by Vichy France.
Japanese forces broke their agreement with the Vichy administration and fighting broke out, ending in a Japanese
victory. On 27 September Japan signed a military alliance with Germany and Italy, becoming one of the three Axis
Powers. In practice, there was little coordination between Japan and Germany until 1944, by which time the U.S. was
deciphering their secret diplomatic correspondence. The war entered a new phase with the unprecedented defeat of
the Japanese at the Battle of Suixian–Zaoyang and the First Battle of Changsha. After these victories, Chinese Nationalist forces launched a large-scale counter-offensive in early 1940; however, owing to China's low military-industrial capacity, it was repulsed by the Japanese army in late March 1940. In August 1940, Chinese communists launched an offensive
in Central China; in retaliation, Japan instituted the "Three Alls Policy" ("Kill all, Burn all, Loot all") in occupied
areas to reduce human and material resources for the communists. By 1941 the conflict had become a stalemate. Although
Japan had occupied much of northern, central, and coastal China, the Nationalist Government had retreated to the
interior with a provisional capital set up at Chungking while the Chinese communists remained in control of base
areas in Shaanxi. In addition, Japanese control of northern and central China was somewhat tenuous, in that Japan
was usually able to control railroads and the major cities ("points and lines"), but did not have a major military
or administrative presence in the vast Chinese countryside. The Japanese found their advance against the retreating and regrouping Chinese army stalled by the mountainous terrain in southwestern China, while the Communists organised
widespread guerrilla and saboteur activities in northern and eastern China behind the Japanese front line. Japan
sponsored several puppet governments, one of which was headed by Wang Jingwei. However, its policies of brutality
toward the Chinese population, of not yielding any real power to these regimes, and of supporting several rival governments
failed to make any of them a viable alternative to the Nationalist government led by Chiang Kai-shek. Conflicts between
Chinese communist and nationalist forces vying for territory control behind enemy lines culminated in a major armed
clash in January 1941, effectively ending their co-operation. From as early as 1935 Japanese military strategists
had concluded the Dutch East Indies were, because of their oil reserves, of considerable importance to Japan. By
1940 they had expanded this to include Indo-China, Malaya, and the Philippines within their concept of the Greater
East Asia Co-Prosperity Sphere. Japanese troop build-ups in Hainan, Taiwan, and Haiphong were noted, Japanese Army
officers were openly talking about an inevitable war, and Admiral Sankichi Takahashi was reported as saying a showdown
with the United States was necessary. In an effort to discourage Japanese militarism, Western powers including Australia,
the United States, Britain, and the Dutch government in exile, which controlled the petroleum-rich Dutch East Indies,
stopped selling oil, iron ore, and steel to Japan, denying it the raw materials needed to continue its activities
in China and French Indochina. In Japan, the government and nationalists viewed these embargos as acts of aggression;
imported oil made up about 80% of domestic consumption, without which Japan's economy, let alone its military, would
grind to a halt. The Japanese media, influenced by military propagandists,[nb 10] began to refer to the embargoes
as the "ABCD ("American-British-Chinese-Dutch") encirclement" or "ABCD line". The Japanese leadership was aware that
a total military victory in a traditional sense against the USA was impossible; the alternative would be negotiating
for peace after their initial victories, which would recognize Japanese hegemony in Asia. In fact, the Imperial GHQ
noted, should acceptable negotiations be reached with the Americans, the attacks were to be canceled—even if the
order to attack had already been given. The Japanese leadership looked to base the conduct of the war against America
on the historical experiences of the successful wars against China (1894–95) and Russia (1904–05), in both of which
a strong continental power was defeated by reaching limited military objectives, not by total conquest. In the early
hours of 7 December (Hawaiian time), Japan launched a major surprise carrier-based air strike on Pearl Harbor without
explicit warning, which crippled the U.S. Pacific Fleet, leaving eight American battleships out of action, 188 American
aircraft destroyed, and 2,403 American citizens dead. At the time of the attack, the U.S. was not officially at war
anywhere in the world, meaning that the people killed and the property destroyed at Pearl Harbor by the Japanese attack had non-combatant status.[nb 11] The Japanese had gambled that the United States, when faced with such a sudden
and massive blow, would agree to a negotiated settlement and allow Japan free rein in Asia. This gamble did not pay
off. American losses were less serious than initially thought: The American aircraft carriers, which would prove
to be more important than battleships, were at sea, and vital naval infrastructure (fuel oil tanks, shipyard facilities,
and a power station), submarine base, and signals intelligence units were unscathed. Japan's fallback strategy, relying
on a war of attrition to make the U.S. come to terms, was beyond the IJN's capabilities. Before the attack on Pearl
Harbor, the 800,000-member America First Committee vehemently opposed any American intervention in the European conflict,
even as America sold military aid to Britain and the Soviet Union through the Lend-Lease program. Opposition to war
in the U.S. vanished after the attack. On 8 December, the United States, the United Kingdom, Canada, and the Netherlands
declared war on Japan, followed by China and Australia the next day. Four days after Pearl Harbor, Nazi Germany and
Fascist Italy declared war on the United States, drawing the country into a two-theater war. This is widely agreed to have been a grand strategic blunder: a declaration that both Congress and Hitler had managed to avoid during over a year of mutual provocation, it forfeited the benefit Germany had gained from Japan's distraction of the U.S. (predicted months before in a memo by Commander Arthur McCollum)[nb 12] and from the reduction in aid to Britain that would otherwise have resulted. Following
the Declaration by United Nations (the first official use of the term United Nations) on 1 January 1942, the Allied
governments appointed the British General Sir Archibald Wavell to the American-British-Dutch-Australian Command (ABDACOM),
a supreme command for Allied forces in Southeast Asia. This gave Wavell nominal control of a huge force, albeit thinly
spread over an area from Burma to the Philippines to northern Australia. Other areas, including India, Hawaii, and
the rest of Australia remained under separate local commands. On 15 January Wavell moved to Bandung in Java to assume
control of ABDACOM. In January, Japan invaded Burma, the Dutch East Indies, New Guinea, the Solomon Islands and captured
Manila, Kuala Lumpur and Rabaul. After being driven out of Malaya, Allied forces in Singapore attempted to resist
the Japanese during the Battle of Singapore but surrendered to the Japanese on 15 February 1942; about 130,000 Indian,
British, Australian and Dutch personnel became prisoners of war. The pace of conquest was rapid: Bali and Timor also
fell in February. The rapid collapse of Allied resistance had left the "ABDA area" split in two. Wavell resigned
from ABDACOM on 25 February, handing control of the ABDA Area to local commanders and returning to the post of Commander-in-Chief,
India. In Burma the British, under intense pressure, made a fighting retreat from Rangoon to the Indo-Burmese border.
This cut the Burma Road which was the western Allies' supply line to the Chinese Nationalists. In March 1942, Chinese
Expeditionary Force started to attack Japanese forces in northern Burma. On 16 April, 7,000 British soldiers were
encircled by the Japanese 33rd Division during the Battle of Yenangyaung and rescued by the Chinese 38th Division
led by Sun Li-jen. Cooperation between the Chinese Nationalists and the Communists had waned from its zenith at the
Battle of Wuhan, and the relationship between the two had gone sour as both attempted to expand their area of operations
in occupied territories. Most of the Nationalist guerrilla areas were eventually overtaken by the Communists. On
the other hand, some Nationalist units were deployed to blockade the Communists and not the Japanese. Furthermore,
many of the forces of the Chinese Nationalists were warlords allied to Chiang Kai-Shek, but not directly under his
command. "Of the 1,200,000 troops under Chiang's control, only 650,000 were directly controlled by his generals,
and another 550,000 controlled by warlords who claimed loyalty to his government; the strongest force was the Szechuan
army of 320,000 men. The defeat of this army would do much to end Chiang's power." The Japanese exploited this lack
of unity to press ahead in their offensives. Two battle-hardened Australian divisions were steaming from the Middle East for Singapore. Churchill wanted them diverted to Burma, but Australian Prime Minister John Curtin insisted on their return to Australia. In early 1942
elements of the Imperial Japanese Navy proposed an invasion of Australia. The Japanese Army opposed the plan and
it was rejected in favour of a policy of isolating Australia from the United States via blockade by advancing through
the South Pacific. The Japanese decided upon a seaborne invasion of Port Moresby, capital of the Australian Territory
of Papua, which would put Northern Australia within range of Japanese bomber aircraft. President Franklin Roosevelt
ordered General Douglas MacArthur in the Philippines to formulate a Pacific defence plan with Australia in March
1942. Curtin agreed to place Australian forces under the command of MacArthur who became Supreme Commander, South
West Pacific. MacArthur moved his headquarters to Melbourne in March 1942 and American troops began massing in Australia.
Enemy naval activity reached Sydney in late May 1942, when Japanese midget submarines launched a daring raid on Sydney
Harbour. On 8 June 1942, two Japanese submarines briefly shelled Sydney's eastern suburbs and the city of Newcastle.
In early 1942, the governments of smaller powers began to push for an inter-governmental Asia-Pacific war council,
based in Washington, D.C. A council was established in London, with a subsidiary body in Washington. However, the
smaller powers continued to push for an American-based body. The Pacific War Council was formed in Washington, on
1 April 1942, with President Franklin D. Roosevelt, his key advisor Harry Hopkins, and representatives from Britain,
China, Australia, the Netherlands, New Zealand, and Canada. Representatives from India and the Philippines were later
added. The council never had any direct operational control, and any decisions it made were referred to the U.S.-UK
Combined Chiefs of Staff, which was also in Washington. Allied resistance, at first symbolic, gradually began to
stiffen. Australian and Dutch forces led civilians in a prolonged guerilla campaign in Portuguese Timor. By mid-1942,
the Japanese found themselves holding a vast area from the Indian Ocean to the Central Pacific, even though they
lacked the resources to defend or sustain it. Moreover, Combined Fleet doctrine was inadequate to execute the proposed
"barrier" defence. Instead, Japan decided on additional attacks in both the south and central Pacific. While Japan had the element of surprise at Pearl Harbor, Allied codebreakers had now turned the tables. They discovered an attack
was planned against Port Moresby; if it fell, Japan would control the seas to the north and west of Australia and
could isolate the country. The carrier USS Lexington under Admiral Fletcher joined USS Yorktown and an American-Australian
task force to stop the Japanese advance. The resulting Battle of the Coral Sea, fought in May 1942, was the first
naval battle in which ships involved never sighted each other and only aircraft were used to attack opposing forces.
Although Lexington was sunk and Yorktown seriously damaged, the Japanese lost the carrier Shōhō, and suffered extensive
damage to Shōkaku and heavy losses to the air wing of Zuikaku, both of which missed the operation against Midway
the following month. Although Allied losses were heavier than Japanese, the attack on Port Moresby was thwarted and
the Japanese invasion force turned back, a strategic victory for the Allies. The Japanese were forced to abandon
their attempts to isolate Australia. Moreover, Japan lacked the capacity to replace losses in ships, planes and trained
pilots. A Japanese force was sent north to attack the Aleutian Islands. The next stage of the plan called for the
capture of Midway, which would give Admiral Yamamoto an opportunity to destroy Nimitz's remaining carriers. In May, Allied codebreakers
discovered his intentions. Nagumo was again in tactical command but was focused on the invasion of Midway; Yamamoto's
complex plan had no provision for intervention by Nimitz before the Japanese expected him. Planned surveillance of
the U.S. fleet by long range seaplane did not happen (as a result of an abortive identical operation in March), so
Fletcher's carriers were able to proceed to a flanking position without being detected. Nagumo had 272 planes operating
from his four carriers, the U.S. 348 (115 land-based). As anticipated by Nimitz, the Japanese fleet arrived off Midway
on 4 June and was spotted by PBY patrol aircraft. Nagumo executed a first strike against Midway, while Fletcher launched
his aircraft, bound for Nagumo's carriers. At 09:20 the first U.S. carrier aircraft arrived, TBD Devastator torpedo
bombers from Hornet, but their attacks were poorly coordinated and ineffectual; thanks in part to faulty aerial torpedoes,
they failed to score a single hit and all 15 were wiped out by defending Zero fighters. At 09:35, 15 additional TBDs
from Enterprise attacked, of which 14 were lost, again with no hits. Thus far, Fletcher's attacks had been disorganized
and seemingly ineffectual, but they succeeded in drawing Nagumo's defensive fighters down to sea level where they
expended much of their fuel and ammunition repulsing the two waves of torpedo bombers. As a result, when U.S. dive
bombers arrived at high altitude, the Zeros were poorly positioned to defend. To make matters worse, Nagumo's four
carriers had drifted out of formation in their efforts to avoid torpedoes, reducing the concentration of their anti-aircraft
fire. Nagumo's indecision had also created confusion aboard his carriers. Alerted to the need for a second strike
on Midway, but also wary of the need to deal with the American carriers that he now knew were in the vicinity, Nagumo
twice changed the arming orders for his aircraft. As a result, the American dive bombers found the Japanese carriers
with their decks cluttered with munitions as the crews worked hastily to properly re-arm their air groups. With the
Japanese CAP out of position and the carriers at their most vulnerable, SBD Dauntlesses from Enterprise and Yorktown
appeared at an altitude of 10,000 feet (3,000 m) and commenced their attack, quickly dealing fatal blows to three
fleet carriers: Sōryū, Kaga, and Akagi. Within minutes, all three were ablaze and had to be abandoned with great
loss of life. Hiryū managed to survive the wave of dive bombers and launched a counter-attack against the American
carriers which caused severe damage to Yorktown (which was later finished off by a Japanese submarine). However,
a second attack from the U.S. carriers a few hours later found and destroyed Hiryū, the last remaining fleet carrier
available to Nagumo. With his carriers lost and the Americans withdrawn out of range of his powerful battleships,
Yamamoto was forced to call off the operation, leaving Midway in American hands. The battle proved to be a decisive
victory for the Allies. For the second time, Japanese expansion had been checked and its formidable Combined Fleet
was significantly weakened by the loss of four fleet carriers and many highly trained, virtually irreplaceable, personnel.
Japan would be largely on the defensive for the rest of the war. Japanese land forces continued to advance in the
Solomon Islands and New Guinea. From July 1942, a few Australian reserve battalions, many of them very young and
untrained, fought a stubborn rearguard action in New Guinea, against a Japanese advance along the Kokoda Track, towards
Port Moresby, over the rugged Owen Stanley Ranges. The militia, worn out and severely depleted by casualties, were
relieved in late August by regular troops from the Second Australian Imperial Force, returning from action in the
Mediterranean theater. In early September 1942 Japanese marines attacked a strategic Royal Australian Air Force base
at Milne Bay, near the eastern tip of New Guinea. They were beaten back by Allied (primarily Australian Army) forces.
With Japanese and Allied forces occupying various parts of Guadalcanal, over the following six months both sides poured
resources into an escalating battle of attrition on land, at sea, and in the sky. Most of the Japanese aircraft based
in the South Pacific were redeployed to the defense of Guadalcanal. Many were lost in numerous engagements with the
Allied air forces based at Henderson Field as well as carrier based aircraft. Meanwhile, Japanese ground forces launched
repeated attacks on heavily defended US positions around Henderson Field, in which they suffered appalling casualties.
To sustain these offensives, resupply was carried out by Japanese convoys, termed the "Tokyo Express" by the Allies.
The convoys often faced night battles with enemy naval forces in which they expended destroyers that the IJN could
ill afford to lose. Later fleet battles involving heavier ships and even daytime carrier battles resulted in a stretch
of water near Guadalcanal becoming known as "Ironbottom Sound" from the multitude of ships sunk on both sides. However,
the Allies were much better able to replace these losses. Finally recognizing that the campaign to recapture Henderson
Field and secure Guadalcanal had simply become too costly to continue, the Japanese evacuated the island and withdrew
in February 1943. In this six-month war of attrition, the Japanese had lost as a result of failing to commit enough
forces in sufficient time. In mainland China, the Japanese 3rd, 6th, and 40th Divisions massed at Yueyang and advanced
southward in three columns and crossed the Xinqiang River, and tried again to cross the Miluo River to reach Changsha.
In January 1942, Chinese forces won a victory at Changsha, the first Allied success against Japan. After
the Doolittle Raid, the Japanese army conducted a massive sweep through Zhejiang and Jiangxi of China, now known
as the Zhejiang-Jiangxi Campaign, with the goal of searching out the surviving American airmen, applying retribution
on the Chinese who aided them and destroying air bases. This operation started on 15 May 1942 with 40 infantry battalions
and 15–16 artillery battalions but was repelled by Chinese forces in September. During this campaign, the Imperial Japanese Army left behind a trail of devastation and also spread cholera, typhoid, plague, and dysentery pathogens. Chinese estimates put the death toll at 250,000 civilians. Around 1,700 Japanese troops died, out of a total of 10,000 Japanese soldiers who fell ill, when their own biological weapons attack rebounded on their forces.
On 2 November 1943, Isamu Yokoyama, commander of the Imperial Japanese 11th Army, deployed the 39th, 58th, 13th,
3rd, 116th and 68th divisions, a grand total of around 100,000 troops, to attack Changde of China. During the seven-week
Battle of Changde, the Chinese forced Japan to fight a costly war of attrition. Although the Japanese army initially
successfully captured the city, the Chinese 57th division was able to pin them down long enough for reinforcements
to arrive and encircle the Japanese. The Chinese army then cut off the Japanese supply lines, forcing them into retreat,
whereupon the Chinese pursued their enemy. During the battle, in an act of desperation, Japan used chemical weapons.
In the aftermath of the Japanese conquest of Burma, there was widespread disorder in eastern India, and a disastrous
famine in Bengal, which ultimately caused up to 3 million deaths. In spite of these problems, and of inadequate lines of communication,
British and Indian forces attempted limited counter-attacks in Burma in early 1943. An offensive in Arakan failed,
while a long distance raid mounted by the Chindits under Brigadier Orde Wingate suffered heavy losses, but was publicized
to bolster Allied morale. It also provoked the Japanese to mount major offensives themselves the following year.
In August 1943 the Allies formed a new South East Asia Command (SEAC) to take over strategic responsibilities for
Burma and India from the British India Command, under Wavell. In October 1943 Winston Churchill appointed Admiral
Lord Louis Mountbatten as its Supreme Commander. The British and Indian Fourteenth Army was formed to face the Japanese
in Burma. Under Lieutenant General William Slim, its training, morale and health greatly improved. The American General
Joseph Stilwell, who also was deputy commander to Mountbatten and commanded U.S. forces in the China Burma India
Theater, directed aid to China and prepared to construct the Ledo Road to link India and China by land. Midway proved
to be the last great naval battle for two years. The United States used the ensuing period to turn its vast industrial
potential into increased numbers of ships, planes, and trained aircrew. At the same time, Japan, lacking an adequate
industrial base or technological strategy, a good aircrew training program, or adequate naval resources and commerce
defense, fell further and further behind. In strategic terms the Allies began a long movement across the Pacific,
seizing one island base after another. Not every Japanese stronghold had to be captured; some, like Truk, Rabaul,
and Formosa, were neutralized by air attack and bypassed. The goal was to get close to Japan itself, then launch
massive strategic air attacks, improve the submarine blockade, and finally (only if necessary) execute an invasion.
U.S. submarines, as well as some British and Dutch vessels, operating from bases at Cavite in the Philippines (1941–42);
Fremantle and Brisbane, Australia; Pearl Harbor; Trincomalee, Ceylon; Midway; and later Guam, played a major role
in defeating Japan, even though submarines made up a small proportion of the Allied navies—less than two percent
in the case of the US Navy. Submarines strangled Japan by sinking its merchant fleet, intercepting many troop transports,
and cutting off nearly all the oil imports essential to weapons production and military operations. By early 1945
Japanese oil supplies were so limited that its fleet was virtually stranded. U.S. submarines accounted for 56% of
the Japanese merchantmen sunk; mines or aircraft destroyed most of the rest. US submariners also claimed 28% of Japanese
warships destroyed. Furthermore, they played important reconnaissance roles, as at the battles of the Philippine
Sea (June 1944) and Leyte Gulf (October 1944) (and, coincidentally, at Midway in June 1942),
when they gave accurate and timely warning of the approach of the Japanese fleet. Submarines also rescued hundreds
of downed fliers, including future U.S. president George H.W. Bush. Allied submarines did not adopt a defensive posture
and wait for the enemy to attack. Within hours after the Pearl Harbor attack, in retribution against Japan, Roosevelt
promulgated a new doctrine: unrestricted submarine warfare against Japan. This meant sinking any warship, commercial
vessel, or passenger ship in Axis-controlled waters, without warning and without help to survivors. At the
outbreak of the war in the Pacific the Dutch Admiral in charge of the naval defense of the East Indies, Conrad Helfrich,
gave instructions to wage war aggressively. His small force of submarines sank more Japanese ships in the first weeks
of the war than the entire British and US navies together, an exploit which earned him the nickname "Ship-a-day Helfrich".
The Dutch force was in fact the first to sink an enemy warship: on 24 December 1941, HNLMS K XVI torpedoed and sank
the Japanese destroyer Sagiri. While Japan had a large number of submarines, they did not make a significant impact
on the war. In 1942, the Japanese fleet subs performed well, knocking out or damaging many Allied warships. However,
Imperial Japanese Navy (and pre-war U.S.) doctrine stipulated that only fleet battles, not guerre de course (commerce
raiding) could win naval campaigns. So, while the US had an unusually long supply line between its west coast and
frontline areas, leaving it vulnerable to submarine attack, Japan used its submarines primarily for long-range reconnaissance
and only occasionally attacked U.S. supply lines. The Japanese submarine offensive against Australia in 1942 and
1943 also achieved little. The U.S. Navy, by contrast, relied on commerce raiding from the outset. However, the problem
of Allied forces surrounded in the Philippines during the early part of 1942 led to the diversion of boats to "guerrilla submarine" missions. In addition, basing in Australia placed boats under Japanese aerial threat while en route to patrol
areas, reducing their effectiveness, and Nimitz relied on submarines for close surveillance of enemy bases. Furthermore,
the standard-issue Mark 14 torpedo and its Mark VI exploder both proved defective, problems which were not corrected
until September 1943. Worst of all, before the war, an uninformed US Customs officer had seized a copy of the Japanese
merchant marine code (called the "maru code" in the USN), not knowing that the Office of Naval Intelligence (ONI)
had broken it. The Japanese promptly changed it, and the new code was not broken again by OP-20G until 1943. Thus,
only in 1944 did the U.S. Navy begin to use its 150 submarines to maximum effect: installing effective shipboard
radar, replacing commanders deemed lacking in aggression, and fixing the faults in the torpedoes. Japanese commerce
protection was "shiftless beyond description," and convoys were poorly organized and defended compared to
Allied ones, a product of flawed IJN doctrine and training – errors concealed by American faults as much as Japanese
overconfidence. The number of U.S. submarine patrols (and sinkings) rose steeply: 350 patrols (180 ships sunk) in
1942, 350 (335) in 1943, and 520 (603) in 1944. By 1945, sinkings of Japanese vessels had decreased because so few
targets dared to venture out on the high seas. In all, Allied submarines destroyed 1,200 merchant ships – about five
million tons of shipping. Most were small cargo-carriers, but 124 were tankers bringing desperately needed oil from
the East Indies. Another 320 were passenger ships and troop transports. At critical stages of the Guadalcanal, Saipan,
and Leyte campaigns, thousands of Japanese troops were killed or diverted from where they were needed. Over 200 warships
were sunk, ranging from many auxiliaries and destroyers to one battleship and no fewer than eight carriers. In mid-1944
Japan mobilized over 500,000 men and launched a massive operation across China under the code name Operation Ichi-Go,
their largest offensive of World War II, with the goal of connecting Japanese-controlled territory in China and French
Indochina and capturing airbases in southeastern China where American bombers were based. During this time, many
of the newest American-trained Chinese units and their supplies were held in the Burmese theater under Joseph Stilwell, as required by the terms of the Lend-Lease Agreement. Though Japan suffered about 100,000 casualties, these attacks,
the biggest in several years, gained much ground for Japan before Chinese forces stopped the incursions in Guangxi.
Despite major tactical victories, the operation overall failed to provide Japan with any significant strategic gains.
A great majority of the Chinese forces were able to retreat out of the area, and later come back to attack Japanese
positions, as in the Battle of West Hunan. Japan was no closer to defeating China after this operation, and the
constant defeats the Japanese suffered in the Pacific meant that Japan never got the time and resources needed to
achieve final victory over China. Operation Ichi-go created a great sense of social confusion in the areas of China
that it affected. Chinese Communist guerrillas were able to exploit this confusion to gain influence and control
of greater areas of the countryside in the aftermath of Ichi-go. After the Allied setbacks in 1943, the South East
Asia command prepared to launch offensives into Burma on several fronts. In the first months of 1944, the Chinese
and American troops of the Northern Combat Area Command (NCAC), commanded by the American Joseph Stilwell, began
extending the Ledo Road from India into northern Burma, while the XV Corps began an advance along the coast in the
Arakan Province. In February 1944 the Japanese mounted a local counter-attack in the Arakan. After early Japanese
success, this counter-attack was defeated when the Indian divisions of XV Corps stood firm, relying on aircraft to
drop supplies to isolated forward units until reserve divisions could relieve them. The Japanese responded to the
Allied attacks by launching an offensive of their own into India in the middle of March, across the mountainous and
densely forested frontier. This attack, codenamed Operation U-Go, was advocated by Lieutenant General Renya Mutaguchi,
the recently promoted commander of the Japanese Fifteenth Army; Imperial General Headquarters permitted it to proceed,
despite misgivings at several intervening headquarters. Although several units of the British Fourteenth Army had
to fight their way out of encirclement, by early April they had concentrated around Imphal in Manipur state. A Japanese
division which had advanced to Kohima in Nagaland cut the main road to Imphal, but failed to capture the whole of
the defences at Kohima. During April, the Japanese attacks against Imphal failed, while fresh Allied formations drove
the Japanese from the positions they had captured at Kohima. As many Japanese had feared, Japan's supply arrangements
could not maintain her forces. Once Mutaguchi's hopes for an early victory were thwarted, his troops, particularly
those at Kohima, starved. During May, while Mutaguchi continued to order attacks, the Allies advanced southwards
from Kohima and northwards from Imphal. The two Allied attacks met on 22 June, breaking the Japanese siege of Imphal.
The Japanese finally broke off the operation on 3 July. They had lost over 50,000 troops, mainly to starvation and
disease. This represented the worst defeat suffered by the Japanese Army to that date. Although
the advance in the Arakan had been halted to release troops and aircraft for the Battle of Imphal, the Americans
and Chinese had continued to advance in northern Burma, aided by the Chindits operating against the Japanese lines
of communication. In the middle of 1944 the Chinese Expeditionary Force invaded northern Burma from Yunnan province.
They captured a fortified position at Mount Song. By the time campaigning ceased during the monsoon rains, the NCAC
had secured a vital airfield at Myitkyina (August 1944), which eased the problems of air resupply from India to China
over "The Hump". It was imperative for Japanese commanders to hold Saipan. The only way to do this was to destroy
the U.S. Fifth Fleet, which had 15 fleet carriers and 956 planes, 7 battleships, 28 submarines, and 69 destroyers,
as well as several light and heavy cruisers. Vice Admiral Jisaburo Ozawa attacked with nine-tenths of Japan's fighting
fleet, which included nine carriers with 473 planes, 5 battleships, several cruisers, and 28 destroyers. Ozawa's
pilots were outnumbered 2:1 and their aircraft were becoming or were already obsolete. The Japanese had considerable
antiaircraft defenses but lacked proximity fuzes or good radar. With the odds against him, Ozawa devised an appropriate
strategy. His planes had greater range because they were not weighed down with protective armor; they could attack
at about 480 km (300 mi), and could search a radius of 900 km (560 mi). U.S. Navy Hellcat fighters could only attack within 200 miles (320 km) and only search within a 325-mile (523 km) radius. Ozawa planned to use this advantage by positioning his fleet 300 miles (480 km)
out. The Japanese planes would hit the U.S. carriers, land at Guam to refuel, then hit the enemy again when returning
to their carriers. Ozawa also counted on about 500 land-based planes at Guam and other islands. The forces converged
in the largest sea battle of World War II up to that point. Over the previous month American destroyers had destroyed
17 of 25 submarines out of Ozawa's screening force. Repeated U.S. raids destroyed the Japanese land-based planes.
Ozawa's main attack lacked coordination, with the Japanese planes arriving at their targets in a staggered sequence.
Following a directive from Nimitz, the U.S. carriers all had combat information centers, which interpreted the flow
of radar data and radioed interception orders to the Hellcats. The result was later dubbed the Great Marianas Turkey
Shoot. The few attackers to reach the U.S. fleet encountered massive AA fire with proximity fuzes. Only one American
warship was slightly damaged. On the second day, U.S. reconnaissance planes located Ozawa's fleet, 275 miles (443 km) away, and submarines sank two Japanese carriers. Mitscher launched 230 torpedo planes and dive bombers. He then discovered the enemy was actually another 60 miles (97 km) further off, out of
aircraft range (based on a roundtrip flight). Mitscher decided this chance to destroy the Japanese fleet was worth
the risk of aircraft losses due to running out of fuel on the return flight. Overall, the U.S. lost 130 planes and
76 aircrew; however, Japan lost 450 planes, three carriers, and 445 aircrew. The Imperial Japanese Navy's carrier
force was effectively destroyed. The Battle of Leyte Gulf was arguably the largest naval battle in history, and certainly the largest of World War II. It was a series of four distinct engagements fought off the Philippine
island of Leyte from 23 to 26 October 1944. Leyte Gulf featured the largest battleships ever built, was the last
time in history that battleships engaged each other, and was also notable as the first time that kamikaze aircraft
were used. Allied victory in the Philippine Sea established Allied air and sea superiority in the western Pacific.
Nimitz favored blockading the Philippines and landing on Formosa. This would give the Allies control of the sea routes
to Japan from southern Asia, cutting off substantial Japanese garrisons. MacArthur favored an invasion of the Philippines,
which also lay across the supply lines to Japan. Roosevelt adjudicated in favor of the Philippines. Meanwhile, Japanese
Combined Fleet Chief Toyoda Soemu prepared four plans to cover all Allied offensive scenarios. On 12 October Nimitz
launched a carrier raid against Formosa to make sure that planes based there could not intervene in the landings
on Leyte. Toyoda put Plan Sho-2 into effect, launching a series of air attacks against the U.S. carriers. However,
the Japanese lost 600 planes in three days, leaving them without air cover. Sho-1 called for V. Adm. Jisaburo Ozawa's
force to use an apparently vulnerable carrier force to lure the U.S. 3rd Fleet away from Leyte and remove air cover
from the Allied landing forces, which would then be attacked from the west by three Japanese forces: V. Adm. Takeo
Kurita's force would enter Leyte Gulf and attack the landing forces; R. Adm. Shoji Nishimura's force and V. Adm.
Kiyohide Shima's force would act as mobile strike forces. The plan was likely to result in the destruction of one
or more of the Japanese forces, but Toyoda justified it by saying that there would be no sense in saving the fleet
and losing the Philippines. Kurita's "Center Force" consisted of five battleships, 12 cruisers and 13 destroyers.
It included the two largest battleships ever built: Yamato and Musashi. As they passed Palawan Island after midnight
on 23 October the force was spotted, and U.S. submarines sank two cruisers. On 24 October, as Kurita's force entered
the Sibuyan Sea, USS Intrepid and USS Cabot launched 260 planes, which scored hits on several ships. A second wave
of planes scored many direct hits on Musashi. A third wave, from USS Enterprise and USS Franklin hit Musashi with
11 bombs and eight torpedoes. Kurita retreated but in the evening turned around to head for San Bernardino Strait.
Musashi sank at about 19:30. Nishimura's force consisted of two battleships, one cruiser and four destroyers. Because
they were observing radio silence, Nishimura was unable to synchronize with Shima and Kurita. Nishimura and Shima
had failed to even coordinate their plans before the attacks – they were long-time rivals and neither wished to have
anything to do with the other. When Nishimura entered the narrow Surigao Strait at about 02:00, Shima was 22 miles (40 km)
behind him, and Kurita was still in the Sibuyan Sea, several hours from the beaches at Leyte. As they passed Panaon
Island, Nishimura's force ran into a trap set for them by the U.S.-Australian 7th Fleet Support Force. R. Adm. Jesse
Oldendorf had six battleships, four heavy cruisers, four light cruisers, 29 destroyers and 39 PT boats. To pass the
strait and reach the landings, Nishimura had to run the gauntlet. At about 03:00 the Japanese battleship Fusō and
three destroyers were hit by torpedoes and Fusō broke in two. At 03:50 the U.S. battleships opened fire. Radar fire
control meant they could hit targets from a much greater distance than the Japanese. The battleship Yamashiro, a
cruiser and a destroyer were crippled by 16-inch (406 mm) shells; Yamashiro sank at 04:19. Only one of Nishimura's
force of seven ships survived the engagement. At 04:25 Shima's force of two cruisers and eight destroyers reached
the battle. Seeing the two halves of Fusō and believing them to be the wrecks of two battleships, Shima ordered a retreat, ending the
last battleship-vs-battleship action in history. Ozawa's "Northern Force" had four aircraft carriers, two obsolete
battleships partly converted to carriers, three cruisers and nine destroyers. The carriers had only 108 planes. The
force was not spotted by the Allies until 16:40 on 24 October. At 20:00 Toyoda ordered all remaining Japanese forces
to attack. Halsey saw an opportunity to destroy the remnants of the Japanese carrier force. The U.S. Third Fleet
was formidable – nine large carriers, eight light carriers, six battleships, 17 cruisers, 63 destroyers and 1,000
planes – and completely outgunned Ozawa's force. Halsey's ships set out in pursuit of Ozawa just after midnight.
U.S. commanders ignored reports that Kurita had turned back towards San Bernardino Strait. They had taken the bait
set by Ozawa. On the morning of 25 October Ozawa launched 75 planes. Most were shot down by U.S. fighter patrols.
By 08:00 U.S. fighters had destroyed the screen of Japanese fighters and were hitting ships. By evening, they had
sunk the carriers Zuikaku, Zuihō, and Chiyoda, and a destroyer. The fourth carrier, Chitose, and a cruiser were disabled
and later sank. Kurita passed through San Bernardino Strait at 03:00 on 25 October and headed along the coast of
Samar. The only thing standing in his path were three groups (Taffy 1, 2 and 3) of the Seventh Fleet, commanded by
Admiral Thomas Kinkaid. Each group had six escort carriers, with a total of more than 500 planes, and seven or eight
destroyers or destroyer escorts (DE). Kinkaid still believed that Lee's force was guarding the north, so the Japanese
had the element of surprise when they attacked Taffy 3 at 06:45. Kurita mistook the Taffy carriers for large fleet
carriers and thought he had the whole Third Fleet in his sights. Since escort carriers stood little chance against
a battleship, Adm. Clifton Sprague directed the carriers of Taffy 3 to turn and flee eastward, hoping that bad visibility
would reduce the accuracy of Japanese gunfire, and used his destroyers to divert the Japanese battleships. The destroyers
made harassing torpedo attacks against the Japanese. For ten minutes Yamato was caught up in evasive action. Two
U.S. destroyers and a DE were sunk, but they had bought enough time for the Taffy groups to launch planes. Taffy
3 turned and fled south, with shells scoring hits on some of its carriers and sinking one of them. The superior speed
of the Japanese force allowed it to draw closer and fire on the other two Taffy groups. However, at 09:20 Kurita
suddenly turned and retreated north. Signals had disabused him of the notion that he was attacking the Third Fleet,
and the longer Kurita continued to engage, the greater the risk of major air strikes. Destroyer attacks had broken
the Japanese formations, shattering tactical control. Three of Kurita's heavy cruisers had been sunk and another
was too damaged to continue the fight. The Japanese retreated through the San Bernardino Strait, under continuous
air attack. The Battle of Leyte Gulf was over, and a large part of the Japanese surface fleet had been destroyed. The battle
secured the beachheads of the U.S. Sixth Army on Leyte against attack from the sea, broke the back of Japanese naval
power and opened the way for an advance to the Ryukyu Islands in 1945. The only significant Japanese naval operation
afterwards was the disastrous Operation Ten-Go in April 1945. Kurita's force had begun the battle with five battleships;
when he returned to Japan, only Yamato was combat-worthy. Nishimura's sunken Yamashiro was the last battleship in
history to engage another in combat. On 20 October 1944 the U.S. Sixth Army, supported by naval and air bombardment,
landed on the favorable eastern shore of Leyte, north of Mindanao. The U.S. Sixth Army continued its advance from
the east, as the Japanese rushed reinforcements to the Ormoc Bay area on the western side of the island. While the
Sixth Army was reinforced successfully, the U.S. Fifth Air Force was able to devastate the Japanese attempts to resupply.
In torrential rains and over difficult terrain, the advance continued across Leyte and the neighboring island of
Samar to the north. On 7 December U.S. Army units landed at Ormoc Bay and, after a major land and air battle, cut
off the Japanese ability to reinforce and supply Leyte. Although fierce fighting continued on Leyte for months, the
U.S. Army was in control. On 15 December 1944 landings against minimal resistance were made on the southern beaches
of the island of Mindoro, a key location in the planned Lingayen Gulf operations, in support of major landings scheduled
on Luzon. On 9 January 1945, on the south shore of Lingayen Gulf on the western coast of Luzon, General Krueger's
Sixth Army landed his first units. Almost 175,000 men followed across the twenty-mile (32 km) beachhead within a
few days. With heavy air support, Army units pushed inland, taking Clark Field, 40 miles (64 km) northwest of Manila,
in the last week of January. Palawan Island, between Borneo and Mindoro, the fifth largest and western-most Philippine
Island, was invaded on 28 February with landings of the Eighth Army at Puerto Princesa. The Japanese put up little
direct defense of Palawan, but cleaning up pockets of Japanese resistance lasted until late April, as the Japanese
used their common tactic of withdrawing into the mountain jungles, dispersed as small units. Throughout the Philippines,
U.S. forces were aided by Filipino guerrillas to find and dispatch the holdouts. The battle of Iwo Jima ("Operation
Detachment") in February 1945 was one of the bloodiest battles fought by the Americans in the Pacific War. Iwo Jima
was an 8 sq mile (21 km2) island situated halfway between Tokyo and the Mariana Islands. Holland Smith, the commander
of the invasion force, aimed to capture the island, and utilize its three airfields as bases to carry out air attacks
against the Home Islands. Lt. General Tadamichi Kuribayashi, the commander of the island's defense, knew that he
could not win the battle, but he hoped to make the Americans suffer far more than they could endure. From early 1944
until the days leading up to the invasion, Kuribayashi transformed the island into a massive network of bunkers,
hidden guns, and 11 mi (18 km) of underground tunnels. The heavy American naval and air bombardment did little but
drive the Japanese further underground, making their positions impervious to enemy fire. Their pillboxes and bunkers
were all connected so that if one was knocked out, it could be reoccupied again. The network of bunkers and pillboxes
greatly favored the defender. Starting in mid-June 1944, Iwo Jima came under sustained aerial bombardment and naval
artillery fire. However, Kuribayashi's hidden guns and defenses survived the constant bombardment virtually unscathed.
On 19 February 1945, some 30,000 men of the 3rd, 4th, and 5th Marine Divisions landed on the southeast coast of Iwo,
just under Mount Suribachi, where most of the island's defenses were concentrated. For some time, they did not come
under fire. This was part of Kuribayashi's plan to hold fire until the landing beaches were full. As soon as the
Marines pushed inland to a line of enemy bunkers, they came under devastating machine gun and artillery fire which
cut down many of the men. By the end of the day, the Marines reached the west coast of the island, but their losses
were appalling; almost 2,000 men killed or wounded. On 23 February, the 28th Marine Regiment reached the summit of
Suribachi, prompting the now famous Raising the Flag on Iwo Jima picture. Navy Secretary James Forrestal, upon seeing
the flag, remarked "there will be a Marine Corps for the next 500 years." The flag raising is often cited as the
most reproduced photograph of all time and became the archetypal representation not only of that battle, but of the
entire Pacific War. For the rest of February, the Americans pushed north, and by 1 March, had taken two-thirds of
the island. But it was not until 26 March that the island was finally secured. The Japanese fought to the last man,
killing 6,800 Marines and wounding nearly 20,000 more. The Japanese losses totaled well over 20,000 men killed, and
only 1,083 prisoners were taken. Historians debate whether it was strategically worth the casualties sustained. During
April, Fourteenth Army advanced 300 miles (480 km) south towards Rangoon, the capital and principal port of Burma,
but was delayed by Japanese rearguards 40 miles (64 km) north of Rangoon at the end of the month. Slim feared that
the Japanese would defend Rangoon house-to-house during the monsoon, placing his army in a disastrous supply situation,
and in March he had asked that a plan to capture Rangoon by an amphibious force, Operation Dracula, which had been
abandoned earlier, be reinstated. Dracula was launched on 1 May, but Rangoon was found to have been abandoned. The
troops which occupied Rangoon linked up with Fourteenth Army five days later, securing the Allies' lines of communication.
Although the Borneo campaign was criticized in Australia at the time, and in subsequent years, as pointless or a "waste"
of the lives of soldiers, it did achieve a number of objectives, such as increasing the isolation of significant
Japanese forces occupying the main part of the Dutch East Indies, capturing major oil supplies and freeing Allied
prisoners of war, who were being held in deteriorating conditions. At one of the very worst sites, around Sandakan
in Borneo, only six of some 2,500 British and Australian prisoners survived. By April 1945, China had already been
at war with Japan for more than seven years. Both nations were exhausted by years of battles, bombings and blockades.
Despite its victories in Operation Ichi-Go, Japan was losing the battle in Burma and facing constant attacks from Chinese Nationalist forces and Communist guerrillas in the countryside. The Japanese army began preparations for the Battle of West Hunan in March 1945. It mobilized the 34th, 47th, 64th, 68th and 116th Divisions, as well
as the 86th Independent Brigade, for a total of 80,000 men to seize Chinese airfields and secure railroads in West
Hunan by early April. In response, the Chinese National Military Council dispatched the 4th Front Army and the 10th
and 27th Army Groups with He Yingqin as commander-in-chief. At the same time, it airlifted the entire Chinese New
6th Corps, an American-equipped corps and veterans of the Burma Expeditionary Force, from Kunming to Zhijiang. Chinese
forces totaled 110,000 men in 20 divisions. They were supported by about 400 aircraft from Chinese and American air
forces. Chinese forces achieved a decisive victory and launched a large counterattack in this campaign. Concurrently,
the Chinese managed to repel a Japanese offensive in Henan and Hubei. Afterwards, Chinese forces retook Hunan and
Hubei provinces in South China. The Chinese then launched a counter-offensive to retake Guangxi, the last major Japanese stronghold in South China. In August 1945, Chinese forces successfully retook the province. The largest
and bloodiest American battle came at Okinawa, as the U.S. sought airbases for 3,000 B-29 bombers and 240 squadrons
of B-17 bombers for the intense bombardment of Japan's home islands in preparation for a full-scale invasion in late
1945. The Japanese, with 115,000 troops augmented by thousands of civilians on the heavily populated island, did
not resist on the beaches—their strategy was to maximize the number of soldier and Marine casualties, and naval losses
from Kamikaze attacks. After an intense bombardment the Americans landed on 1 April 1945 and declared victory on
21 June. The supporting naval forces were the targets for 4,000 sorties, many by Kamikaze suicide planes. U.S. losses
totaled 38 ships of all types sunk and 368 damaged with 4,900 sailors killed. The Americans suffered 75,000 casualties
on the ground; 94% of the Japanese soldiers died, along with many civilians. Hard-fought battles on Iwo Jima, Okinawa, and other islands resulted in horrific casualties on both sides but finally produced a Japanese defeat. Faced with the loss of most of their experienced
pilots, the Japanese increased their use of kamikaze tactics in an attempt to create unacceptably high casualties
for the Allies. The U.S. Navy proposed to force a Japanese surrender through a total naval blockade and air raids.
Towards the end of the war as the role of strategic bombing became more important, a new command for the U.S. Strategic
Air Forces in the Pacific was created to oversee all U.S. strategic bombing in the hemisphere, under United States
Army Air Forces General Curtis LeMay. Japanese industrial production plunged as nearly half of the built-up areas
of 67 cities were destroyed by B-29 firebombing raids. On 9–10 March 1945 alone, about 100,000 people were killed
in a conflagration caused by an incendiary attack on Tokyo. LeMay also oversaw Operation Starvation, in which the
inland waterways of Japan were extensively mined by air, which disrupted the small amount of remaining Japanese coastal
sea traffic. On 26 July 1945, the President of the United States Harry S. Truman, the President of the Nationalist
Government of China Chiang Kai-shek and the Prime Minister of Great Britain Winston Churchill issued the Potsdam
Declaration, which outlined the terms of surrender for the Empire of Japan as agreed upon at the Potsdam Conference.
This ultimatum stated that, if Japan did not surrender, it would face "prompt and utter destruction." On 6 August
1945, the U.S. dropped an atomic bomb on the Japanese city of Hiroshima in the first nuclear attack in history. In
a press release issued after the atomic bombing of Hiroshima, Truman warned Japan to surrender or "...expect a rain
of ruin from the air, the like of which has never been seen on this earth." Three days later, on 9 August, the U.S.
dropped another atomic bomb on Nagasaki, the last nuclear attack in history. Between 140,000 and 240,000 people died
as a direct result of these two bombings. The necessity of the atomic bombings has long been debated, with detractors
claiming that a naval blockade and aerial bombing campaign had already made invasion, hence the atomic bomb, unnecessary.
However, other scholars have argued that the bombings shocked the Japanese government into surrender, with the Emperor
finally indicating his wish to stop the war. Another argument in favor of the atomic bombs is that they helped avoid
Operation Downfall, or a prolonged blockade and bombing campaign, any of which would have exacted much higher casualties
among Japanese civilians. Historian Richard B. Frank wrote that a Soviet invasion of Japan was never likely because
the Soviets had insufficient naval capability to mount an amphibious invasion of Hokkaidō. The Manchurian Strategic Offensive
Operation began on 9 August 1945, with the Soviet invasion of the Japanese puppet state of Manchukuo and was the
last campaign of the Second World War and the largest of the 1945 Soviet–Japanese War which resumed hostilities between
the Soviet Union and the Empire of Japan after almost six years of peace. Soviet gains on the continent were Manchukuo,
Mengjiang (Inner Mongolia) and northern Korea. The rapid defeat of Japan's Kwantung Army has been argued to be a
significant factor in the Japanese surrender and the end of World War II, as Japan realized the Soviets were willing
and able to take the cost of invasion of its Home Islands, after their rapid conquest of Manchuria and Invasion of
South Sakhalin island. The effects of the "Twin Shocks"—the Soviet entry and the atomic bombing—were profound. On
10 August the "sacred decision" was made by the Japanese Cabinet to accept the Potsdam terms on one condition: the "prerogative
of His Majesty as a Sovereign Ruler". At noon on 15 August, after the American government's intentionally ambiguous
reply, stating that the "authority" of the emperor "shall be subject to the Supreme Commander of the Allied Powers",
the Emperor broadcast to the nation and to the world at large the rescript of surrender, ending the Second World
War. In Japan, 14 August is considered to be the day that the Pacific War ended. However, as Imperial Japan actually surrendered on 15 August, that day became known in English-speaking countries as "V-J Day" (Victory over Japan).
The formal Japanese Instrument of Surrender was signed on 2 September 1945, on the battleship USS Missouri, in Tokyo
Bay. The surrender was accepted by General Douglas MacArthur as Supreme Commander for the Allied Powers, with representatives
of several Allied nations, from a Japanese delegation led by Mamoru Shigemitsu and Yoshijiro Umezu. A widely publicised example of institutionalised sexual slavery is that of the "comfort women", a euphemism for the 200,000 women, mostly from Korea and China, who served in the Japanese army's camps during World War II. Some 35 Dutch comfort women brought a successful
case before the Batavia Military Tribunal in 1948. In 1993, Chief Cabinet Secretary Yohei Kono said that women were
coerced into brothels run by Japan's wartime military. Other Japanese leaders have apologized, including former Prime
Minister Junichiro Koizumi in 2001. In 2007, then-Prime Minister Shinzō Abe claimed: "The fact is, there is no evidence
to prove there was coercion." Australia had been shocked by the speedy collapse of British Malaya and the Fall of Singapore, in which around 15,000 Australian soldiers became prisoners of war. Prime Minister John Curtin predicted that the "battle for Australia"
would now follow. The Japanese established a major base in the Australian Territory of New Guinea in early 1942.
On 19 February, Darwin suffered a devastating air raid, the first time the Australian mainland had been attacked.
Over the following 19 months, Australia was attacked from the air almost 100 times.
With an estimated population of 1,381,069 as of July 1, 2014, San Diego is the eighth-largest city in the United States and
second-largest in California. It is part of the San Diego–Tijuana conurbation, the second-largest transborder agglomeration
between the US and a bordering country after Detroit–Windsor, with a population of 4,922,723 people. San Diego is
the birthplace of California and is known for its mild year-round climate, natural deep-water harbor, extensive beaches,
long association with the United States Navy and recent emergence as a healthcare and biotechnology development center.
Historically home to the Kumeyaay people, San Diego was the first site visited by Europeans on what is now the West
Coast of the United States. Upon landing in San Diego Bay in 1542, Juan Rodríguez Cabrillo claimed the entire area
for Spain, forming the basis for the settlement of Alta California 200 years later. The Presidio and Mission San
Diego de Alcalá, founded in 1769, formed the first European settlement in what is now California. In 1821, San Diego
became part of the newly independent Mexico, which reformed as the First Mexican Republic two years later. In 1850,
it became part of the United States following the Mexican–American War and the admission of California to the union.
The first European to visit the region was the Portuguese-born explorer Juan Rodríguez Cabrillo, sailing under the flag
of Castile. Sailing his flagship San Salvador from Navidad, New Spain, Cabrillo claimed the bay for the Spanish Empire
in 1542, and named the site 'San Miguel'. In November 1602, Sebastián Vizcaíno was sent to map the California coast.
Arriving on his flagship San Diego, Vizcaíno surveyed the harbor and what are now Mission Bay and Point Loma and
named the area for the Catholic Saint Didacus, a Spaniard more commonly known as San Diego de Alcalá. On November
12, 1602, the first Christian religious service of record in Alta California was conducted by Friar Antonio de la
Ascensión, a member of Vizcaíno's expedition, to celebrate the feast day of San Diego. In May 1769, Gaspar de Portolà
established the Fort Presidio of San Diego on a hill near the San Diego River. It was the first settlement by Europeans
in what is now the state of California. In July of the same year, Mission San Diego de Alcalá was founded by Franciscan
friars under Junípero Serra. By 1797, the mission boasted the largest native population in Alta California, with
over 1,400 neophytes living in and around the mission proper. Mission San Diego was the southern anchor in California
of the historic mission trail El Camino Real. Both the Presidio and the Mission are National Historic Landmarks.
In 1821, Mexico won its independence from Spain, and San Diego became part of the Mexican territory of Alta California.
In 1822, Mexico began attempting to extend its authority over the coastal territory of Alta California. The fort
on Presidio Hill was gradually abandoned, while the town of San Diego grew up on the level land below Presidio Hill.
The Mission was secularized by the Mexican government in 1833, and most of the Mission lands were sold to wealthy
Californio settlers. The 432 residents of the town petitioned the governor to form a pueblo, and Juan María Osuna
was elected the first alcalde ("municipal magistrate"), defeating Pío Pico in the vote. (See List of pre-statehood mayors of San Diego.) However, San Diego had been losing population throughout the 1830s, and in 1838 the town lost
its pueblo status because its size dropped to an estimated 100 to 150 residents. Beyond the town, Mexican land grants expanded the number of California ranchos, which modestly added to the local economy. In 1846, the United States went
to war against Mexico and sent a naval and land expedition to conquer Alta California. At first the Americans easily captured the major ports, including San Diego, but the Californios in southern Alta California struck
back. Following the successful revolt in Los Angeles, the American garrison at San Diego was driven out without firing
a shot in early October 1846. Mexican partisans held San Diego for three weeks until October 24, 1846, when the Americans
recaptured it. For the next several months the Americans were blockaded inside the pueblo. Skirmishes occurred daily
and snipers shot into the town every night. The Californios drove cattle away from the pueblo hoping to starve the
Americans and their Californio supporters out. On December 1 the American garrison learned that the dragoons of General Stephen W. Kearny were at Warner's Ranch. Commodore Robert F. Stockton sent a mounted force of fifty under
Captain Archibald Gillespie to march north to meet him. Their joint command of 150 men, returning to San Diego, encountered
about 93 Californios under Andrés Pico. In the ensuing Battle of San Pasqual, fought in the San Pasqual Valley which
is now part of the city of San Diego, the Americans suffered their worst losses in the campaign. Subsequently, a column led by Lieutenant Gray arrived from San Diego, rescuing Kearny's battered and blockaded command. Stockton and Kearny
went on to recover Los Angeles and force the capitulation of Alta California with the "Treaty of Cahuenga" on January
13, 1847. As a result of the Mexican–American War of 1846–48, the territory of Alta California, including San Diego,
was ceded to the United States by Mexico, under the terms of the Treaty of Guadalupe Hidalgo in 1848. The Mexican
negotiators of that treaty tried to retain San Diego as part of Mexico, but the Americans insisted that San Diego
was "for every commercial purpose of nearly equal importance to us with that of San Francisco," and the Mexican-American
border was eventually established to be one league south of the southernmost point of San Diego Bay, so as to include
the entire bay within the United States. The state of California was admitted to the United States in 1850. That
same year San Diego was designated the seat of the newly established San Diego County and was incorporated as a city.
Joshua H. Bean, the last alcalde of San Diego, was elected the first mayor. Two years later the city was bankrupt;
the California legislature revoked the city's charter and placed it under control of a board of trustees, where it
remained until 1889. A city charter was re-established in 1889 and today's city charter was adopted in 1931. The
original town of San Diego was located at the foot of Presidio Hill, in the area which is now Old Town San Diego
State Historic Park. The location was not ideal, being several miles away from navigable water. In 1850, William
Heath Davis promoted a new development by the Bay shore called "New San Diego", several miles south of the original settlement; however, for several decades the new development consisted of only a few houses, a pier, and an Army depot.
In the late 1860s, Alonzo Horton promoted a move to the bayside area, which he called "New Town" and which became
Downtown San Diego. Horton promoted the area heavily, and people and businesses began to relocate to New Town because
of its location on San Diego Bay convenient to shipping. New Town soon eclipsed the original settlement, known to
this day as Old Town, and became the economic and governmental heart of the city. Still, San Diego remained a relative
backwater town until the arrival of a railroad connection in 1878. In the early part of the 20th century, San Diego
hosted two World's Fairs: the Panama-California Exposition in 1915 and the California Pacific International Exposition
in 1935. Both expositions were held in Balboa Park, and many of the Spanish/Baroque-style buildings that were built
for those expositions remain to this day as central features of the park. The buildings were intended to be temporary
structures, but most remained in continuous use until they progressively fell into disrepair. Most were eventually
rebuilt, using castings of the original façades to retain the architectural style. The menagerie of exotic animals
featured at the 1915 exposition provided the basis for the San Diego Zoo. During the 1950s there was a citywide festival
called Fiesta del Pacifico highlighting the area's Spanish and Mexican past. In the 2010s there was a proposal for
a large-scale celebration of the 100th anniversary of Balboa Park, but the plans were abandoned when the organization
tasked with putting on the celebration went out of business. The southern portion of the Point Loma peninsula was
set aside for military purposes as early as 1852. Over the next several decades the Army set up a series of coastal
artillery batteries and named the area Fort Rosecrans. Significant U.S. Navy presence began in 1901 with the establishment
of the Navy Coaling Station in Point Loma, and expanded greatly during the 1920s. By 1930, the city was host to Naval
Base San Diego, Naval Training Center San Diego, San Diego Naval Hospital, Camp Matthews, and Camp Kearny (now Marine
Corps Air Station Miramar). The city was also an early center for aviation: as early as World War I, San Diego was
proclaiming itself "The Air Capital of the West". The city was home to important airplane developers and manufacturers
like Ryan Airlines (later Ryan Aeronautical), founded in 1925, and Consolidated Aircraft (later Convair), founded
in 1923. Charles A. Lindbergh's plane The Spirit of St. Louis was built in San Diego in 1927 by Ryan Airlines. During
World War II, San Diego became a major hub of military and defense activity, due to the presence of so many military
installations and defense manufacturers. The city's population grew rapidly during and after World War II, more than
doubling between 1930 (147,995) and 1950 (333,865). During the final months of the war, the Japanese had a plan to
target multiple U.S. cities for biological attack, starting with San Diego. The plan was called "Operation Cherry
Blossoms at Night" and called for kamikaze planes filled with fleas infected with plague (Yersinia pestis) to crash
into civilian population centers in the city, hoping to spread plague in the city and effectively kill tens of thousands
of civilians. The plan was scheduled to launch on September 22, 1945, but was not carried out because Japan surrendered
five weeks earlier. From the start of the 20th century through the 1970s, the American tuna fishing fleet and tuna
canning industry were based in San Diego, "the tuna capital of the world". San Diego's first tuna cannery was founded
in 1911, and by the mid-1930s the canneries employed more than 1,000 people. A large fishing fleet supported the
canneries, mostly staffed by immigrant fishermen from Japan, and later from the Portuguese Azores and Italy whose
influence is still felt in neighborhoods like Little Italy and Point Loma. Due to rising costs and foreign competition,
the last of the canneries closed in the early 1980s. The city is built on approximately 200 deep canyons and hills separating
its mesas, creating small pockets of natural open space scattered throughout the city and giving it a hilly geography.
Traditionally, San Diegans have built their homes and businesses on the mesas, while leaving the urban canyons relatively
wild. Thus, the canyons give parts of the city a segmented feel, creating gaps between otherwise proximate neighborhoods
and contributing to a low-density, car-centered environment. The San Diego River runs through the middle of San Diego
from east to west, creating a river valley which serves to divide the city into northern and southern segments. The
river used to flow into San Diego Bay, and its fresh water was the focus of the earliest Spanish explorers. Several reservoirs and Mission Trails Regional Park also lie between and separate developed areas of the
city. Downtown San Diego is located on San Diego Bay. Balboa Park encompasses several mesas and canyons to the northeast,
surrounded by older, dense urban communities including Hillcrest and North Park. To the east and southeast lie City
Heights, the College Area, and Southeast San Diego. To the north lies Mission Valley and Interstate 8. The communities
north of the valley and freeway, and south of Marine Corps Air Station Miramar, include Clairemont, Kearny Mesa,
Tierrasanta, and Navajo. Stretching north from Miramar are the northern suburbs of Mira Mesa, Scripps Ranch, Rancho
Peñasquitos, and Rancho Bernardo. The far northeast portion of the city encompasses Lake Hodges and the San Pasqual
Valley, which holds an agricultural preserve. Carmel Valley and Del Mar Heights occupy the northwest corner of the
city. To their south are Torrey Pines State Reserve and the business center of the Golden Triangle. Further south
are the beach and coastal communities of La Jolla, Pacific Beach, Mission Beach, and Ocean Beach. Point Loma occupies
the peninsula across San Diego Bay from downtown. The communities of South San Diego, such as San Ysidro and Otay
Mesa, are located next to the Mexico–United States border, and are physically separated from the rest of the city
by the cities of National City and Chula Vista. A narrow strip of land at the bottom of San Diego Bay connects these
southern neighborhoods with the rest of the city. The development of skyscrapers over 300 feet (91 m) in San Diego
is attributed to the construction of the El Cortez Hotel in 1927, the tallest building in the city from 1927 to 1963.
As time went on, multiple buildings claimed the title of San Diego's tallest skyscraper, including the Union Bank
of California Building and Symphony Towers. Currently the tallest building in San Diego is One America Plaza, standing
500 feet (150 m) tall, which was completed in 1991. The downtown skyline contains no supertalls, as a regulation put in place by the Federal Aviation Administration in the 1970s set a 500-foot (152 m) limit on the height of buildings due to the proximity of San Diego International Airport. An iconic description of the skyline compares its skyscrapers to the tools of a toolbox. San Diego has one of the top ten best climates according to the Farmers' Almanac, and one of the two best summer climates in America as scored by The Weather Channel. Under the Köppen–Geiger climate
classification system, the San Diego area has been variously categorized as having either a semi-arid climate (BSh
in the original classification and BSkn in modified Köppen classification) or a Mediterranean climate (Csa and Csb).
San Diego's climate is characterized by warm, dry summers and mild winters with most of the annual precipitation
falling between December and March. The city has a mild climate year-round, with an average of 201 days above 70
°F (21 °C) and low rainfall (9–13 inches [230–330 mm] annually). Dewpoints in the summer months range from 57.0 °F
(13.9 °C) to 62.4 °F (16.9 °C). The climate in San Diego, like most of Southern California, often varies significantly
over short geographical distances resulting in microclimates. In San Diego, this is mostly because of the city's
topography (the Bay, and the numerous hills, mountains, and canyons). Frequently, particularly during the "May gray/June
gloom" period, a thick "marine layer" cloud cover will keep the air cool and damp within a few miles of the coast,
but will yield to bright cloudless sunshine approximately 5–10 miles (8.0–16.1 km) inland. Sometimes the June gloom
can last into July, causing cloudy skies over most of San Diego for the entire day. Even in the absence of June gloom,
inland areas tend to experience much more significant temperature variations than coastal areas, where the ocean
serves as a moderating influence. Thus, for example, downtown San Diego averages January lows of 50 °F (10 °C) and
August highs of 78 °F (26 °C). The city of El Cajon, just 10 miles (16 km) inland from downtown San Diego, averages
January lows of 42 °F (6 °C) and August highs of 88 °F (31 °C). Rainfall along the coast averages about 10 inches
(250 mm) of precipitation annually. The average (mean) rainfall is 10.65 inches (271 mm) and the median is 9.6 inches
(240 mm). Most of the rainfall occurs during the cooler months. The months of December through March supply most
of the rain, with February the only month averaging 2 inches (51 mm) or more of rain. The months of May through September
tend to be almost completely dry. Though there are few wet days per month during the rainy period, rainfall can be
heavy when it does fall. Rainfall is usually greater in the higher elevations of San Diego; some of the higher elevation
areas of San Diego can receive 11–15 inches (280–380 mm) of rain a year. Variability of rainfall can be extreme:
in the wettest years of 1883/1884 and 1940/1941 more than 24 inches (610 mm) fell in the city, whilst in the driest
years as little as 3.2 inches (80 mm) has fallen for a full year. The wettest month on record has been December 1921
with 9.21 inches (234 mm). Like most of southern California, the majority of San Diego's current area was originally
occupied by chaparral, a plant community made up mostly of drought-resistant shrubs. The endangered Torrey pine has
the bulk of its population in San Diego in a stretch of protected chaparral along the coast. The steep and varied
topography and proximity to the ocean create a number of different habitats within the city limits, including tidal
marsh and canyons. The chaparral and coastal sage scrub habitats in low elevations along the coast are prone to wildfire,
and the rates of fire have increased in the 20th century, due primarily to fires starting near the borders of urban
and wild areas. San Diego County has one of the highest counts of animal and plant species that appear on the endangered
species list among counties in the United States. Because of its diversity of habitat and its position on the Pacific
Flyway, San Diego County has recorded the presence of 492 bird species, more than any other region in the country.
San Diego always scores very high in the number of bird species observed in the annual Christmas Bird Count, sponsored
by the Audubon Society, and it is known as one of the "birdiest" areas in the United States. San Diego and its backcountry
are subject to periodic wildfires. In October 2003, San Diego was the site of the Cedar Fire, which has been called
the largest wildfire in California over the past century. The fire burned 280,000 acres (1,100 km2), killed 15 people,
and destroyed more than 2,200 homes. In addition to damage caused by the fire, smoke resulted in a significant increase
in emergency room visits due to asthma, respiratory problems, eye irritation, and smoke inhalation; the poor air
quality caused San Diego County schools to close for a week. Wildfires four years later destroyed some areas, particularly
within the communities of Rancho Bernardo, Rancho Santa Fe, and Ramona. The city had a population of 1,307,402 according
to the 2010 census, distributed over a land area of 372.1 square miles (963.7 km2). The urban area of San Diego extends
beyond the administrative city limits and had a total population of 2,956,746, making it the third-largest urban
area in the state, after those of Los Angeles and San Francisco. Those two, along with Riverside–San Bernardino, are the only metropolitan areas in California larger than the San Diego metropolitan area, which had a total population of 3,095,313 at the 2010 census. As of the Census of 2010, there were 1,307,402 people
living in the city of San Diego. That represents a population increase of just under 7% from the 1,223,400 people,
450,691 households, and 271,315 families reported in 2000. The estimated city population in 2009 was 1,306,300. The
population density was 3,771.9 people per square mile (1,456.4/km2). The racial makeup of San Diego was 45.1% White,
6.7% African American, 0.6% Native American, 15.9% Asian (5.9% Filipino, 2.7% Chinese, 2.5% Vietnamese, 1.3% Indian,
1.0% Korean, 0.7% Japanese, 0.4% Laotian, 0.3% Cambodian, 0.1% Thai), 0.5% Pacific Islander (0.2% Guamanian, 0.1%
Samoan, 0.1% Native Hawaiian), 12.3% from other races, and 5.1% from two or more races. The ethnic makeup of the
city was 28.8% Hispanic or Latino (of any race); 24.9% of the total population were Mexican American, and 0.6% were
Puerto Rican. As of January 1, 2008 estimates by the San Diego Association of Governments revealed that the household
median income for San Diego rose to $66,715, up from $45,733, and that the city population rose to 1,336,865, up
9.3% from 2000. The population was 45.3% non-Hispanic whites, down from 78.9% in 1970, 27.7% Hispanics, 15.6% Asians/Pacific
Islanders, 7.1% blacks, 0.4% American Indians, and 3.9% from other races. Median age of Hispanics was 27.5 years,
compared to 35.1 years overall and 41.6 years among non-Hispanic whites; Hispanics were the largest group in all
ages under 18, and non-Hispanic whites constituted 63.1% of population 55 and older. The U.S. Census Bureau reported
that in 2000, 24.0% of San Diego residents were under 18, and 10.5% were 65 and over. As of 2011, the median
age was 35.6; more than a quarter of residents were under age 20 and 11% were over age 65. Millennials (ages 18 through
34) constitute 27.1% of San Diego's population, the second-highest percentage in a major U.S. city. The San Diego
County regional planning agency, SANDAG, provides tables and graphs breaking down the city population into 5-year
age groups. In 2000, the median income for a household in the city was $45,733, and the median income for a family
was $53,060. Males had a median income of $36,984 versus $31,076 for females. The per capita income for the city
was $23,609. According to Forbes in 2005, San Diego was the fifth-wealthiest U.S. city, but about 10.6% of families
and 14.6% of the population were below the poverty line, including 20.0% of those under age 18 and 7.6% of those
age 65 or over. Nonetheless, San Diego was rated the fifth-best place to live in the United States in 2006 by Money
magazine. Tourism is a major industry owing to the city's climate, its beaches, and numerous tourist attractions
such as Balboa Park, Belmont amusement park, San Diego Zoo, San Diego Zoo Safari Park, and SeaWorld San Diego. San
Diego's Spanish and Mexican heritage is reflected in the many historic sites across the city, such as Mission San
Diego de Alcala and Old Town San Diego State Historic Park. Also, the local craft brewing industry attracts an increasing
number of visitors for "beer tours" and the annual San Diego Beer Week in November; San Diego has been called "America's
Craft Beer Capital." The city shares a 15-mile (24 km) border with Mexico that includes two border crossings. San
Diego hosts the busiest international border crossing in the world, in the San Ysidro neighborhood at the San Ysidro
Port of Entry. A second, primarily commercial border crossing operates in the Otay Mesa area; it is the largest commercial
crossing on the California-Baja California border and handles the third-highest volume of trucks and dollar value
of trade among all United States-Mexico land crossings. San Diego hosts several major producers of wireless cellular
technology. Qualcomm was founded and is headquartered in San Diego, and is one of the largest private-sector employers
in San Diego. Other wireless industry manufacturers headquartered here include Nokia, LG Electronics, Kyocera International,
Cricket Communications and Novatel Wireless. The largest software company in San Diego is security software company
Websense Inc. San Diego also has the U.S. headquarters for the Slovakian security company ESET. San Diego has been
designated as an iHub Innovation Center for collaboration potentially between wireless and life sciences. The presence
of the University of California, San Diego and other research institutions has helped to fuel biotechnology growth.
In 2013, San Diego had the second-largest biotech cluster in the United States, behind the Boston area and ahead of the San Francisco Bay Area. There are more than 400 biotechnology companies in the area. In particular, the La Jolla
and nearby Sorrento Valley areas are home to offices and research facilities for numerous biotechnology companies.
Major biotechnology companies like Illumina and Neurocrine Biosciences are headquartered in San Diego, while many
biotech and pharmaceutical companies have offices or research facilities in San Diego. San Diego is also home to
more than 140 contract research organizations (CROs) that provide a variety of contract services for pharmaceutical
and biotechnology companies. Many popular museums, such as the San Diego Museum of Art, the San Diego Natural History
Museum, the San Diego Museum of Man, the Museum of Photographic Arts, and the San Diego Air & Space Museum are located
in Balboa Park, which is also the location of the San Diego Zoo. The Museum of Contemporary Art San Diego (MCASD)
is located in La Jolla and has a branch located at the Santa Fe Depot downtown. The downtown branch consists of two buildings on two opposite streets. The Columbia district downtown is home to historic ship exhibits belonging to the
San Diego Maritime Museum, headlined by the Star of India, as well as the unrelated San Diego Aircraft Carrier Museum
featuring the USS Midway aircraft carrier. The San Diego Symphony at Symphony Towers performs on a regular basis
and is directed by Jahja Ling. The San Diego Opera at Civic Center Plaza, directed by Ian Campbell, was ranked by
Opera America as one of the top 10 opera companies in the United States. Old Globe Theatre at Balboa Park produces
about 15 plays and musicals annually. The La Jolla Playhouse at UCSD is directed by Christopher Ashley. Both the
Old Globe Theatre and the La Jolla Playhouse have produced the world premieres of plays and musicals that have gone
on to win Tony Awards or nominations on Broadway. The Joan B. Kroc Theatre at Kroc Center's Performing Arts Center
is a 600-seat state-of-the-art theatre that hosts music, dance, and theatre performances. The San Diego Repertory
Theatre at the Lyceum Theatres in Horton Plaza produces a variety of plays and musicals. Hundreds of movies and a
dozen TV shows have been filmed in San Diego, a tradition going back as far as 1898. The San Diego Surf of the American
Basketball Association is located in the city. The annual Farmers Insurance Open golf tournament (formerly the Buick
Invitational) on the PGA Tour occurs at Torrey Pines Golf Course. This course was also the site of the 2008 U.S.
Open Golf Championship. The San Diego Yacht Club hosted the America's Cup yacht races three times during the period
1988 to 1995. The amateur beach sport Over-the-line was invented in San Diego, and the world Over-the-line championships are held at Mission Bay every year. The city is governed by a mayor and a nine-member city council. In
2006, the city's form of government changed from a council–manager government to a strong mayor government. The change
was brought about by a citywide vote in 2004. The mayor is in effect the chief executive officer of the city, while
the council is the legislative body. The City of San Diego is responsible for police, public safety, streets, water
and sewer service, planning and zoning, and similar services within its borders. San Diego is a sanctuary city; however, San Diego County participates in the Secure Communities program. As of 2011, the city had one employee
for every 137 residents, with a payroll greater than $733 million. The members of the city council are each elected
from single member districts within the city. The mayor and city attorney are elected directly by the voters of the
entire city. The mayor, city attorney, and council members are elected to four-year terms, with a two-term limit.
Elections are held on a non-partisan basis per California state law; nevertheless, most officeholders do identify
themselves as either Democrats or Republicans. In 2007, registered Democrats outnumbered Republicans by about 7 to
6 in the city, and Democrats currently (as of 2015) hold a 5-4 majority in the city council. The current
mayor, Kevin Faulconer, is a Republican. In 2005 two city council members, Ralph Inzunza and Deputy Mayor Michael Zucchet – who briefly took over as acting mayor when Mayor Dick Murphy resigned – were convicted of extortion, wire fraud, and
conspiracy to commit wire fraud for taking campaign contributions from a strip club owner and his associates, allegedly
in exchange for trying to repeal the city's "no touch" laws at strip clubs. Both subsequently resigned. Inzunza was
sentenced to 21 months in prison. In 2009, a judge acquitted Zucchet on seven out of the nine counts against him,
and granted his petition for a new trial on the other two charges; the remaining charges were eventually dropped.
In July 2013, three former supporters of Mayor Bob Filner asked him to resign because of allegations of repeated
sexual harassment. Over the ensuing six weeks, 18 women came forward to publicly claim that Filner had sexually harassed
them, and multiple individuals and groups called for him to resign. On August 19 Filner and city representatives
entered a mediation process, as a result of which Filner agreed to resign, effective August 30, 2013, while the city
agreed to limit his legal and financial exposure. Filner subsequently pleaded guilty to one felony count of false
imprisonment and two misdemeanor battery charges, and was sentenced to house arrest and probation. San Diego was
ranked as the 20th-safest city in America in 2013 by Business Insider. According to Forbes magazine, San Diego was
the ninth-safest city in the top 10 list of safest cities in the U.S. in 2010. Like most major cities, San Diego
had a declining crime rate from 1990 to 2000. Crime in San Diego increased in the early 2000s. In 2004, San Diego
had the sixth lowest crime rate of any U.S. city with over half a million residents. From 2002 to 2006, the crime
rate overall dropped 0.8%, though not evenly by category. While violent crime decreased 12.4% during this period,
property crime increased 1.1%. Total property crimes per 100,000 people were lower than the national average in 2008.
San Diego's first television station was KFMB, which began broadcasting on May 16, 1949. Since the Federal Communications
Commission (FCC) licensed seven television stations in Los Angeles, two VHF channels were available for San Diego
because of its relative proximity to the larger city. In 1952, however, the FCC began licensing UHF channels, making
it possible for cities such as San Diego to acquire more stations. Stations based in Mexico (with ITU prefixes of
XE and XH) also serve the San Diego market. Television stations today include XHTJB 3 (Once TV), XETV 6 (CW), KFMB
8 (CBS), KGTV 10 (ABC), XEWT 12 (Televisa Regional), KPBS 15 (PBS), KBNT-CD 17 (Univision), XHTIT-TDT 21 (Azteca
7), XHJK-TDT 27 (Azteca 13), XHAS 33 (Telemundo), K35DG-D 35 (UCSD-TV), KDTF-LD 51 (Telefutura), KNSD 39 (NBC), KZSD-LP
41 (Azteca America), KSEX-CD 42 (Infomercials), XHBJ-TDT 45 (Gala TV), XHDTV 49 (MNTV), KUSI 51 (Independent), XHUAA-TDT
57 (Canal de las Estrellas), and KSWB-TV 69 (Fox). San Diego has an 80.6 percent cable penetration rate. Due to the
ratio of U.S. and Mexican-licensed stations, San Diego is the largest media market in the United States that is legally
unable to support a television station duopoly between two full-power stations under FCC regulations, which disallow
duopolies in metropolitan areas with fewer than nine full-power television stations and require that eight unique
station owners remain once a duopoly is formed (there are only seven full-power stations on the
California side of the San Diego-Tijuana market).[citation needed] Though the E. W. Scripps Company owns KGTV and
KZSD-LP, they are not considered a duopoly under the FCC's legal definition as common ownership between full-power
and low-power television stations in the same market is permitted regardless of the number of stations licensed to
the area. As a whole, the Mexico side of the San Diego-Tijuana market has two duopolies and one triopoly (Entravision
Communications owns both XHAS-TV and XHDTV-TV, Azteca owns XHJK-TV and XHTIT-TV, and Grupo Televisa owns XHUAA-TV
and XHWT-TV along with being the license holder for XETV-TV, which is run by California-based subsidiary Bay City
Television). Radio stations in San Diego include those of the nationwide broadcaster Clear Channel Communications, as
well as CBS Radio, Midwest Television, Lincoln Financial Media, Finest City Broadcasting, and many other smaller stations and networks.
Stations include: KOGO AM 600, KFMB AM 760, KCEO AM 1000, KCBQ AM 1170, K-Praise, KLSD AM 1360 Air America, KFSD
1450 AM, KPBS-FM 89.5, Channel 933, Star 94.1, FM 94/9, FM News and Talk 95.7, Q96 96.1, KyXy 96.5, Free Radio San
Diego (AKA Pirate Radio San Diego) 96.9FM FRSD, KSON 97.3/92.1, KXSN 98.1, Jack-FM 100.7, 101.5 KGB-FM, KLVJ 102.1,
Rock 105.3, and another Pirate Radio station at 106.9FM, as well as a number of local Spanish-language radio stations.
With the automobile being the primary means of transportation for over 80 percent of its residents, San Diego is
served by a network of freeways and highways. This includes Interstate 5, which runs south to Tijuana and north to
Los Angeles; Interstate 8, which runs east to Imperial County and the Arizona Sun Corridor; Interstate 15, which
runs northeast through the Inland Empire to Las Vegas and Salt Lake City; and Interstate 805, which splits from I-5
near the Mexican border and rejoins I-5 at Sorrento Valley. Major state highways include SR 94, which connects downtown
with I-805, I-15 and East County; SR 163, which connects downtown with the northeast part of the city, intersects
I-805 and merges with I-15 at Miramar; SR 52, which connects La Jolla with East County through Santee and SR 125;
SR 56, which connects I-5 with I-15 through Carmel Valley and Rancho Peñasquitos; SR 75, which spans San Diego Bay
as the San Diego-Coronado Bridge, and also passes through South San Diego as Palm Avenue; and SR 905, which connects
I-5 and I-805 to the Otay Mesa Port of Entry. San Diego's roadway system provides an extensive network of routes
for travel by bicycle. The dry and mild climate of San Diego makes cycling a convenient and pleasant year-round option.
At the same time, the city's hilly, canyon-like terrain and significantly long average trip distances—brought about
by strict low-density zoning laws—somewhat restrict cycling for utilitarian purposes. Older and denser neighborhoods
around the downtown tend to be utility cycling oriented. This is partly because of the grid street patterns now absent
in newer developments farther from the urban core, where suburban style arterial roads are much more common. As a
result, a vast majority of cycling-related activities are recreational. In testament to San Diego's cycling efforts,
in 2006 it was rated the best city for cycling among U.S. cities with a population over 1 million. San Diego
is served by the San Diego Trolley light rail system, by the SDMTS bus system, and by Coaster and Amtrak Pacific
Surfliner commuter rail; northern San Diego county is also served by the Sprinter light rail line. The Trolley primarily
serves downtown and surrounding urban communities, Mission Valley, east county, and coastal south bay. A planned
Mid-Coast extension of the Trolley will operate from Old Town to University City and the University of California,
San Diego along the I-5 Freeway, with planned operation by 2018. The Amtrak and Coaster trains currently run along
the coastline and connect San Diego with Los Angeles, Orange County, Riverside, San Bernardino, and Ventura via Metrolink
and the Pacific Surfliner. There are two Amtrak stations in San Diego, in Old Town and the Santa Fe Depot downtown.
San Diego transit information about public transportation and commuting is available on the Web and by dialing "511"
from any phone in the area. The city's primary commercial airport is the San Diego International Airport (SAN), also
known as Lindbergh Field. It is the busiest single-runway airport in the United States. It served over 17 million
passengers in 2005, and handles an increasing number every year. It is located on San Diego Bay
three miles (4.8 km) from downtown. San Diego International Airport maintains scheduled flights to the rest of the
United States including Hawaii, as well as to Mexico, Canada, Japan, and the United Kingdom. It is operated by an
independent agency, the San Diego Regional Airport Authority. In addition, the city itself operates two general-aviation
airports, Montgomery Field (MYF) and Brown Field (SDM). By 2015, the Tijuana Cross-border Terminal in Otay Mesa will
give direct access to Tijuana International Airport, with passengers walking across the U.S.–Mexico border on a footbridge
to catch their flight on the Mexican side. Numerous regional transportation projects have occurred in recent years
to mitigate congestion in San Diego. Notable efforts are improvements to San Diego freeways, expansion of San Diego
Airport, and doubling the capacity of the cruise ship terminal of the port. Freeway projects included expansion of
Interstates 5 and 805 around "The Merge," a rush-hour spot where the two freeways meet. Also, an expansion of Interstate
15 through the North County is underway with the addition of high-occupancy-vehicle (HOV) "managed lanes". There
is a tollway (The South Bay Expressway) connecting SR 54 and Otay Mesa, near the Mexican border. According to a 2007
assessment, 37 percent of streets in San Diego were in acceptable driving condition. The proposed budget fell $84.6
million short of bringing the city's streets to an acceptable level. Port expansions included a second cruise terminal
on Broadway Pier which opened in 2010. Airport projects include expansion of Terminal 2, currently under construction
and slated for completion in summer 2013. Private colleges and universities in the city include University of San
Diego (USD), Point Loma Nazarene University (PLNU), Alliant International University (AIU), National University,
California International Business University (CIBU), San Diego Christian College, John Paul the Great Catholic University,
California College San Diego, Coleman University, University of Redlands School of Business, Design Institute of
San Diego (DISD), Fashion Institute of Design & Merchandising's San Diego campus, NewSchool of Architecture and Design,
Pacific Oaks College San Diego Campus, Chapman University's San Diego Campus, The Art Institute of California – San
Diego, Platt College, Southern States University (SSU), UEI College, and Woodbury University School of Architecture's
satellite campus.
The term Muslim world, also known as the Islamic world or the Ummah (Arabic: أمة, meaning "nation" or "community"), has different
meanings. In a religious sense, the Islamic Ummah refers to those who adhere to the teachings of Islam, referred
to as Muslims. In a cultural sense, the Muslim Ummah refers to Islamic civilization, exclusive of non-Muslims living
in that civilization. In a modern geopolitical sense, the term "Islamic Nation" usually refers collectively to Muslim-majority
countries, states, districts, or towns. The Islamic Golden Age coincided with the Middle Ages in the Muslim world,
starting with the rise of Islam and establishment of the first Islamic state in 622. The end of the age is variously
given as 1258 with the Mongolian Sack of Baghdad, or 1492 with the completion of the Christian Reconquista of the
Emirate of Granada in Al-Andalus, Iberian Peninsula. During the reign of the Abbasid caliph Harun ar-Rashid (786
to 809), the legendary House of Wisdom was inaugurated in Baghdad where scholars from various parts of the world
sought to translate and gather all the known world's knowledge into Arabic. The Abbasids were influenced by the Quranic
injunctions and hadiths, such as "the ink of a scholar is more holy than the blood of a martyr," that stressed the
value of knowledge. The major Islamic capital cities of Baghdad, Cairo, and Córdoba became the main intellectual
centers for science, philosophy, medicine, and education. During this period, the Muslim world was a collection of
cultures; they drew together and advanced the knowledge gained from the ancient Greek, Roman, Persian, Chinese, Indian,
Egyptian, and Phoenician civilizations. Between the 8th and 18th centuries, the use of glazed ceramics was prevalent
in Islamic art, usually assuming the form of elaborate pottery. Tin-opacified glazing was one of the earliest new
technologies developed by the Islamic potters. The first Islamic opaque glazes can be found as blue-painted ware
in Basra, dating to around the 8th century. Another contribution was the development of stone-paste ceramics, originating
from 9th century Iraq. Other centers for innovative ceramic pottery in the Old world included Fustat (from 975 to
1075), Damascus (from 1100 to around 1600) and Tabriz (from 1470 to 1550). The best known work of fiction from the
Islamic world is One Thousand and One Nights (Persian: hezār-o-yek šab; Arabic: ʔalf layla wa-layla, literally "one
thousand nights and a night"), or the Arabian Nights, a name invented by early Western translators, which is a compilation
of folk tales from Sanskrit, Persian, and later Arabian fables. The original concept is derived from a pre-Islamic
Persian prototype Hezār Afsān (Thousand Fables) that relied on particular Indian elements. It reached its final form
by the 14th century; the number and type of tales have varied from one manuscript to another. All Arabian fantasy
tales tend to be called Arabian Nights stories when translated into English, regardless of whether they appear in
The Book of One Thousand and One Nights or not. This work has been very influential in the West since it was translated
in the 18th century, first by Antoine Galland. Imitations were written, especially in France. Various characters
from this epic have themselves become cultural icons in Western culture, such as Aladdin, Sinbad the Sailor and Ali
Baba. A famous example of Arabic poetry and Persian poetry on romance (love) is Layla and Majnun, dating back to
the Umayyad era in the 7th century. It is a tragic story of undying love much like the later Romeo and Juliet, which
was itself said to have been inspired by a Latin version of Layla and Majnun to an extent. Ferdowsi's Shahnameh,
the national epic of Iran, is a mythical and heroic retelling of Persian history. Amir Arsalan was also a popular
mythical Persian story, which has influenced some modern works of fantasy fiction, such as The Heroic Legend of Arslan.
Ibn Tufail (Abubacer) and Ibn al-Nafis were pioneers of the philosophical novel. Ibn Tufail wrote the first Arabic
novel Hayy ibn Yaqdhan (Philosophus Autodidactus) as a response to Al-Ghazali's The Incoherence of the Philosophers,
and then Ibn al-Nafis also wrote a novel Theologus Autodidactus as a response to Ibn Tufail's Philosophus Autodidactus.
Both of these narratives had protagonists (Hayy in Philosophus Autodidactus and Kamil in Theologus Autodidactus)
who were autodidactic feral children living in seclusion on a desert island, both being the earliest examples of
a desert island story. However, while Hayy lives alone with animals on the desert island for the rest of the story
in Philosophus Autodidactus, the story of Kamil extends beyond the desert island setting in Theologus Autodidactus,
developing into the earliest known coming of age plot and eventually becoming the first example of a science fiction
novel. Theologus Autodidactus, written by the Arabian polymath Ibn al-Nafis (1213–1288), deals with various science
fiction elements such as spontaneous generation, futurology,
the end of the world and doomsday, resurrection, and the afterlife. Rather than giving supernatural or mythological
explanations for these events, Ibn al-Nafis attempted to explain these plot elements using the scientific knowledge
of biology, astronomy, cosmology and geology known in his time. Ibn al-Nafis' fiction explained Islamic religious
teachings via science and Islamic philosophy. A Latin translation of Ibn Tufail's work, Philosophus Autodidactus,
first appeared in 1671, prepared by Edward Pococke the Younger, followed by an English translation by Simon Ockley
in 1708, as well as German and Dutch translations. These translations might have later inspired Daniel Defoe to write
Robinson Crusoe, regarded as the first novel in English. Philosophus Autodidactus, continuing the thoughts of philosophers
such as Aristotle from earlier ages, inspired Robert Boyle to write his own philosophical novel set on an island,
The Aspiring Naturalist. Dante Alighieri's Divine Comedy derived features of and episodes about Bolgia from Arabic
works on Islamic eschatology: the Hadith and the Kitab al-Miraj (translated into Latin in 1264 or shortly before
as Liber Scale Machometi) concerning the ascension to Heaven of Muhammad, and the spiritual writings of Ibn Arabi.
The Moors also had a noticeable influence on the works of George Peele and William Shakespeare. Some of their works
featured Moorish characters, such as Peele's The Battle of Alcazar and Shakespeare's The Merchant of Venice, Titus
Andronicus and Othello, which featured a Moorish Othello as its title character. These works are said to have been
inspired by several Moorish delegations from Morocco to Elizabethan England at the beginning of the 17th century.
One of the common definitions for "Islamic philosophy" is "the style of philosophy produced within the framework
of Islamic culture." Islamic philosophy, in this definition is neither necessarily concerned with religious issues,
nor is exclusively produced by Muslims. The Persian scholar Ibn Sina (Avicenna) (980–1037) had more than 450 books
attributed to him. His writings were concerned with various subjects, most notably philosophy and medicine. His medical
textbook The Canon of Medicine was used as the standard text in European universities for centuries. He also wrote
The Book of Healing, an influential scientific and philosophical encyclopedia. Another philosopher with a lasting
influence on modern philosophy was Ibn Tufail. His philosophical novel, Hayy ibn Yaqdhan, translated into
Latin as Philosophus Autodidactus in 1671, developed the themes of empiricism, tabula rasa, nature versus nurture,
condition of possibility, materialism, and Molyneux's problem. European scholars and writers influenced by this novel
include John Locke, Gottfried Leibniz, Melchisédech Thévenot, John Wallis, Christiaan Huygens, George Keith, Robert
Barclay, the Quakers, and Samuel Hartlib. Other influential Muslim philosophers include al-Jahiz, a pioneer in evolutionary
thought; Ibn al-Haytham (Alhazen), a pioneer of phenomenology and the philosophy of science and a critic of Aristotelian
natural philosophy and Aristotle's concept of place (topos); Al-Biruni, a critic of Aristotelian natural philosophy;
Ibn Tufail and Ibn al-Nafis, pioneers of the philosophical novel; Shahab al-Din Suhrawardi, founder of Illuminationist
philosophy; Fakhr al-Din al-Razi, a critic of Aristotelian logic and a pioneer of inductive logic; and Ibn Khaldun,
a pioneer in the philosophy of history. Muslim scientists contributed to advances in the sciences. They placed far
greater emphasis on experiment than had the Greeks. This led to an early scientific method being developed in the
Muslim world, where progress in methodology was made, beginning with the experiments of Ibn al-Haytham (Alhazen)
on optics from circa 1000, in his Book of Optics. The most important development of the scientific method was the
use of experiments to distinguish between competing scientific theories set within a generally empirical orientation,
which began among Muslim scientists. Ibn al-Haytham is also regarded as the father of optics, especially for his
empirical proof of the intromission theory of light. Some have also described Ibn al-Haytham as the "first scientist."
Al-Khwarizmi, from whose name the term "algorithm" derives, made foundational contributions to algebra and trigonometry.
Recent studies show that it is very likely that medieval Muslim artists were aware of advanced
decagonal quasicrystal geometry (discovered half a millennium later in the 1970s and 1980s in the West) and used
it in intricate decorative tilework in the architecture. Muslim physicians contributed to the field of medicine,
including the subjects of anatomy and physiology: such as in the 15th century Persian work by Mansur ibn Muhammad
ibn al-Faqih Ilyas entitled Tashrih al-badan (Anatomy of the body) which contained comprehensive diagrams of the
body's structural, nervous and circulatory systems; or in the work of the Egyptian physician Ibn al-Nafis, who proposed
the theory of pulmonary circulation. Avicenna's The Canon of Medicine remained an authoritative medical textbook
in Europe until the 18th century. Abu al-Qasim al-Zahrawi (also known as Abulcasis) contributed to the discipline
of medical surgery with his Kitab al-Tasrif ("Book of Concessions"), a medical encyclopedia which was later translated
to Latin and used in European and Muslim medical schools for centuries. Other medical advancements came in the fields
of pharmacology and pharmacy. In astronomy, Muḥammad ibn Jābir al-Ḥarrānī al-Battānī improved the precision of the
measurement of the precession of the Earth's axis. The corrections made to the geocentric model by al-Battani, Averroes,
Nasir al-Din al-Tusi, Mu'ayyad al-Din al-'Urdi and Ibn al-Shatir were later incorporated into the Copernican heliocentric
model. Heliocentric theories were also discussed by several other Muslim astronomers such as Al-Biruni, Al-Sijzi,
Qotb al-Din Shirazi, and Najm al-Dīn al-Qazwīnī al-Kātibī. The astrolabe, though originally developed by the Greeks,
was perfected by Islamic astronomers and engineers, and was subsequently brought to Europe. Advances were made in
irrigation and farming, using new technology such as the windmill. Crops such as almonds and citrus fruit were brought
to Europe through al-Andalus, and sugar cultivation was gradually adopted by the Europeans. Arab merchants dominated
trade in the Indian Ocean until the arrival of the Portuguese in the 16th century. Hormuz was an important center
for this trade. There was also a dense network of trade routes in the Mediterranean, along which Muslim countries
traded with each other and with European powers such as Venice, Genoa and Catalonia. The Silk Road crossing Central
Asia passed through Muslim states between China and Europe. Muslim engineers in the Islamic world made a number of
innovative industrial uses of hydropower, and early industrial uses of tidal power and wind power, fossil fuels such
as petroleum, and early large factory complexes (tiraz in Arabic). The industrial uses of watermills in the Islamic
world date back to the 7th century, while horizontal-wheeled and vertical-wheeled water mills were both in widespread
use since at least the 9th century. A variety of industrial mills were being employed in the Islamic world, including
early fulling mills, gristmills, hullers, sawmills, ship mills, stamp mills, steel mills, sugar mills, tide mills
and windmills. By the 11th century, every province throughout the Islamic world had these industrial mills in operation,
from al-Andalus and North Africa to the Middle East and Central Asia. Muslim engineers also invented crankshafts
and water turbines, employed gears in mills and water-raising machines, and pioneered the use of dams as a source
of water power, used to provide additional power to watermills and water-raising machines. Such advances made it
possible for industrial tasks that were previously driven by manual labour in ancient times to be mechanized and
driven by machinery instead in the medieval Islamic world. The transfer of these technologies to medieval Europe
had an influence on the Industrial Revolution. More than 20% of the world's population is Muslim. Current estimates
conclude that the number of Muslims in the world is around 1.5 billion. Muslims are the majority in 49 countries;
they speak hundreds of languages and come from diverse ethnic backgrounds. Major languages spoken by Muslims include
Arabic, Urdu, Bengali, Punjabi, Malay, Javanese, Sundanese, Swahili, Hausa, Fula, Berber, Tuareg, Somali, Albanian,
Bosnian, Russian, Turkish, Azeri, Kazakh, Uzbek, Tatar, Persian, Kurdish, Pashto, Balochi, Sindhi and Kashmiri, among
many others. The two main denominations of Islam are the Sunni and Shia sects. They differ primarily over how the
life of the ummah ("faithful") should be governed, and over the role of the imam. These differences stem from
differing views on which hadith are to be used in interpreting the Quran. Sunnis believe that the true political
successor of the Prophet, according to the Sunnah, is determined by Shura (consultation), as at the Saqifah, which
selected Abu Bakr, father of the Prophet's favourite wife, 'A'ishah, to lead the Islamic community, while religious
succession ceased to exist on account
of finality of Prophethood. Shia on the other hand believe that the true political as well as religious successor
is Ali ibn Abi Talib, husband of the Prophet's daughter Fatimah (designated by the Prophet). Literacy rates in the
Muslim world vary. Some countries, such as Kuwait, Kazakhstan, Tajikistan, and Turkmenistan, have literacy rates
above 97%, whereas literacy rates are lowest in Mali, Afghanistan, Chad, and parts of Africa. In 2015, the International
Islamic News Agency reported that nearly 37% of the population of the Muslim world is unable to read or write, basing
that figure on reports from the Organisation of Islamic Cooperation and the Islamic Educational, Scientific and Cultural
Organization. Several Muslim countries, such as Turkey and Iran, exhibit high scientific publication output. Some
countries have tried to encourage scientific research. In Pakistan, the establishment of the Higher Education Commission in 2002 resulted
in a 5-fold increase in the number of PhDs and a 10-fold increase in the number of scientific research papers in
10 years with the total number of universities increasing from 115 in 2001 to over 400 in 2012.[citation needed]
Saudi Arabia has established the King Abdullah University of Science and Technology. The United Arab Emirates has
invested in Zayed University, United Arab Emirates University, and the Masdar Institute of Science and Technology.
Islamic architecture encompasses both secular and religious styles; the buildings and structures of Islamic culture
include such architectural types as the mosque, the tomb, the palace, and the fort. Perhaps the most important
expression of Islamic art is architecture, particularly that of the mosque.
Through Islamic architecture, effects of varying cultures within Islamic civilization can be illustrated. Generally,
the use of Islamic geometric patterns and foliage based arabesques were striking. There was also the use of decorative
calligraphy instead of pictures which were haram (forbidden) in mosque architecture. Note that in secular architecture,
human and animal representation was indeed present. No Islamic visual images or depictions of God are meant to exist
because it is believed that such artistic depictions may lead to idolatry. Moreover, Muslims believe that God is
incorporeal, making any two- or three- dimensional depictions impossible. Instead, Muslims describe God by the names
and attributes that, according to Islam, he revealed to his creation. All but one sura of the Quran begins with the
phrase "In the name of God, the Beneficent, the Merciful". Images of Mohammed are likewise prohibited. Such aniconism
and iconoclasm can also be found in Jewish and some Christian theology. Islamic art frequently adopts the use of
geometrical floral or vegetal designs in a repetition known as arabesque. Such designs are highly nonrepresentational,
as Islam forbids representational depictions as found in pre-Islamic pagan religions. Despite this, there is a presence
of depictional art in some Muslim societies, notably the miniature style made famous in Persia and under the Ottoman
Empire which featured paintings of people and animals, and also depictions of Quranic stories and Islamic traditional
narratives. Another reason why Islamic art is usually abstract is to symbolize the transcendent, indivisible and
infinite nature of God, an objective achieved by arabesque. Islamic calligraphy is an omnipresent decoration in Islamic
art, and is usually expressed in the form of Quranic verses. Two of the main scripts involved are the symbolic kufic
and naskh scripts, which can be found adorning the walls and domes of mosques, the sides of minbars, and so on. Distinguishing
motifs of Islamic architecture have always been ordered repetition, radiating structures, and rhythmic, metric patterns.
In this respect, fractal geometry has been a key utility, especially for mosques and palaces. Other features employed
as motifs include columns, piers and arches, organized and interwoven with alternating sequences of niches and colonnettes.
The role of domes in Islamic architecture has been considerable. Their usage spans centuries, first appearing in 691
with the construction of the Dome of the Rock, and recurring even up until the 17th century with the Taj Mahal.
And as late as the 19th century, Islamic domes had been incorporated into European architecture. The Solar Hijri
calendar, also called the Shamsi Hijri calendar, and abbreviated as SH, is the official calendar of Iran and Afghanistan.
It begins on the vernal equinox. Each of the twelve months corresponds with a zodiac sign. The first six months have
31 days, the next five have 30 days, and the last month has 29 days in usual years but 30 days in leap years. The
year of Prophet Muhammad's migration to Medina (622 CE) is fixed as the first year of the calendar, and the New Year's
Day always falls on the March equinox. In a small minority of Muslim countries, the law requires women to cover either
just the legs, shoulders, and head, or the whole body apart from the face. In the strictest forms, the face as well
must be covered, leaving just a mesh to see through. These dress rules cause tensions, particularly concerning Muslims
living in Western countries, where such restrictions are considered both sexist and oppressive. Some Muslims reject
this charge, arguing instead that the media in these countries pressures women to reveal too much in order to be
deemed attractive, and that this is itself sexist and oppressive.
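The Solar Hijri month-length rule described a few paragraphs above (six months of 31 days, then five of 30, then a final month of 29 or 30 days) can be sketched in a few lines of Python. The function name and the leap_year flag are illustrative only; determining which years are leap years, which follows the astronomical equinox, is not modeled here.

```python
def solar_hijri_month_length(month: int, leap_year: bool = False) -> int:
    """Days in a Solar Hijri month: 31 for months 1-6, 30 for 7-11,
    and 29 (common year) or 30 (leap year) for month 12."""
    if not 1 <= month <= 12:
        raise ValueError("month must be in 1..12")
    if month <= 6:
        return 31
    if month <= 11:
        return 30
    return 30 if leap_year else 29

# A common year therefore totals 365 days and a leap year 366.
common = sum(solar_hijri_month_length(m) for m in range(1, 13))
leap = sum(solar_hijri_month_length(m, leap_year=True) for m in range(1, 13))
print(common, leap)  # 365 366
```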
Iran (/aɪˈræn/ or i/ɪˈrɑːn/; Persian: Irān – ایران [ʔiːˈɾɒːn] ( listen)), also known as Persia (/ˈpɜːrʒə/ or /ˈpɜːrʃə/),
officially the Islamic Republic of Iran (جمهوری اسلامی ایران – Jomhuri ye Eslāmi ye Irān [d͡ʒomhuːˌɾije eslɒːˌmije
ʔiːˈɾɒːn]), is a sovereign state in Western Asia. It is bordered to the northwest by Armenia, the de facto Nagorno-Karabakh
Republic, and Azerbaijan; to the north by Kazakhstan and Russia across the Caspian Sea; to the northeast by Turkmenistan;
to the east by Afghanistan and Pakistan; to the south by the Persian Gulf and the Gulf of Oman; and to the west by
Turkey and Iraq. Comprising a land area of 1,648,195 km2 (636,372 sq mi), it is the second-largest country in the
Middle East and the 18th-largest in the world. With 78.4 million inhabitants, Iran is the world's 17th-most-populous
country. It is the only country that has both a Caspian Sea and an Indian Ocean coastline. Iran has long been of
geostrategic importance because of its central location in Eurasia and Western Asia, and its proximity to the Strait
of Hormuz. Iran is home to one of the world's oldest civilizations, beginning with the formation of the Proto-Elamite
and Elamite kingdoms in 3200–2800 BC. The Iranian Medes unified the area into the first of many empires in 625 BC,
after which it became the dominant cultural and political power in the region. Iran reached the pinnacle of its power
during the Achaemenid Empire founded by Cyrus the Great in 550 BC, which at its greatest extent comprised major portions
of the ancient world, stretching from parts of the Balkans (Thrace-Macedonia, Bulgaria-Paeonia) and Eastern Europe
proper in the west, to the Indus Valley in the east, making it the largest empire the world had yet seen. The empire
collapsed in 330 BC following the conquests of Alexander the Great. The Parthian Empire emerged from the ashes and
was succeeded by the Sassanid Dynasty in 224 AD, under which Iran again became one of the leading powers in the world,
along with the Roman-Byzantine Empire, for a period of more than four centuries. In 633 AD, Rashidun Arabs invaded
Iran and conquered it by 651 AD, largely converting Iranian people from their indigenous faiths of Manichaeism and
Zoroastrianism to Sunni Islam. Arabic replaced Persian as the official language, while Persian remained the language
of both ordinary people and of literature. Iran became a major contributor to the Islamic Golden Age, producing many
influential scientists, scholars, artists, and thinkers. The establishment of the Safavid Dynasty in 1501 converted
the Iranian people from Sunni Islam to Twelver Shia Islam, and made Twelver Shia Islam the official religion of Iran.
The Safavid conversion of Iran from Sunnism to Shiism marked one of the most important turning points in Iranian and
Muslim history. Starting in 1736 under Nader Shah, Iran reached its greatest territorial extent since the Sassanid
Empire, briefly possessing what was arguably the most powerful empire at the time. During the 19th century, Iran
irrevocably lost swaths of its territories in the Caucasus, which had been part of the concept of Iran for centuries,
to neighboring Imperial Russia. Popular unrest culminated in the Persian Constitutional Revolution of 1906, which
established a constitutional monarchy and the country's first Majles (parliament). Following a coup d'état instigated
by the U.K. and the U.S. in 1953, Iran gradually became a close ally of the United States and the rest of the West; it remained secular but grew increasingly autocratic. Growing dissent against foreign influence and political repression
culminated in the 1979 Revolution, which led to the establishment of an Islamic republic on 1 April 1979. Tehran
is the country's capital and largest city, as well as its leading cultural and economic center. Iran is a major regional
and middle power, exerting considerable influence in international energy security and the world economy through
its large reserves of fossil fuels, which include the largest natural gas supply in the world and the fourth-largest
proven oil reserves. Iran's rich cultural legacy is reflected in part by its 19 UNESCO World Heritage Sites, the
fourth-largest number in Asia and 12th-largest in the world. The term Iran derives directly from Middle Persian Ērān,
first attested in a 3rd-century inscription at Rustam Relief, with the accompanying Parthian inscription using the
term Aryān, in reference to Iranians. The Middle Iranian ērān and aryān are oblique plural forms of gentilic ēr-
(Middle Persian) and ary- (Parthian), both deriving from Proto-Iranian *arya- (meaning "Aryan," i.e., "of the Iranians"),
argued to descend from Proto-Indo-European *ar-yo-, meaning "skillful assembler." In Iranian languages, the gentilic
is attested as a self-identifier included in ancient inscriptions and the literature of Avesta,[a] and remains also
in other Iranian ethnic names such as Alans (Ossetic: Ир – Ir) and Iron (Ossetic: Ирон – Iron). Historically, Iran
has been referred to as Persia by the West, due mainly to the writings of Greek historians who called Iran Persis
(Greek: Περσίς), meaning "land of the Persians." As the most extensive interactions the Ancient Greeks had with any outsider were with the Persians, the term persisted, even long after Persian rule in Greece. However, Persis (Old Persian: Pārśa; Modern Persian: Pārse) originally referred to a region settled by Persians on the west shore of Lake Urmia, in the 9th century BC. The settlement then shifted to the southern end of the Zagros Mountains,
and is today defined as Fars Province. In 1935, Reza Shah requested that the international community refer to the country
by its native name, Iran. As the New York Times explained at the time, "At the suggestion of the Persian Legation
in Berlin, the Tehran government, on the Persian New Year, Nowruz, March 21, 1935, substituted Iran for Persia as
the official name of the country." Opposition to the name change led to the reversal of the decision, and Professor
Ehsan Yarshater, editor of Encyclopædia Iranica, propagated a move to use Persia and Iran interchangeably. Today,
both Persia and Iran are used in cultural contexts; although, Iran is the name used officially in political contexts.
The earliest archaeological artifacts in Iran, like those excavated at the Kashafrud and Ganj Par sites, attest to
a human presence in Iran since the Lower Paleolithic era, c. 800,000–200,000 BC. Iran's Neanderthal artifacts from
the Middle Paleolithic period, c. 200,000–40,000 BC, have been found mainly in the Zagros region, at sites such as
Warwasi and Yafteh Cave. Around the 10th to 8th millennium BC, early agricultural communities such as Chogha Golan and Chogha Bonut began to flourish in Iran, while settlements such as Susa and Chogha Mish developed in and around the Zagros region. The emergence of Susa as a city, as determined by radiocarbon dating, dates back to c. 4395 BC. There are dozens of prehistoric sites across the Iranian plateau, pointing to the existence of ancient cultures
and urban settlements in the 4th millennium BC. During the Bronze Age, Iran was home to several civilizations including
Elam, Jiroft, and Zayande River. Elam, the most prominent of these civilizations, developed in the southwest of Iran,
alongside those in Mesopotamia. The emergence of writing in Elam paralleled that in Sumer, and Elamite cuneiform was developed from the 3rd millennium BC onward. From the late 10th to late 7th centuries BC, the Iranian peoples, together
with the pre-Iranian kingdoms, fell under the domination of the Assyrian Empire, based in northern Mesopotamia. Under
king Cyaxares, the Medes and Persians entered into an alliance with Nabopolassar of Babylon, as well as the Scythians
and the Cimmerians, and together they attacked the Assyrian Empire. Civil war ravaged the Assyrian Empire between 616 BC and 605 BC, freeing their respective peoples from three centuries of Assyrian rule. The unification of the Median tribes under a single ruler in 728 BC led to the foundation of the Median Empire which, by 612 BC, controlled the whole of Iran and eastern Anatolia. This marked the end of the Kingdom of Urartu as well, which was subsequently
conquered and dissolved. In 550 BC, Cyrus the Great, son of Mandane and Cambyses I, took over the Median Empire,
and founded the Achaemenid Empire by unifying other city states. The conquest of Media was a result of what is called
the Persian Revolt. The revolt was initially triggered by the actions of the Median ruler Astyages, and quickly spread to other provinces as they allied with the Persians. Later conquests under Cyrus and his successors expanded
the empire to include Lydia, Babylon, Egypt, parts of the Balkans and Eastern Europe proper, as well as the lands
to the west of the Indus and Oxus rivers. At its greatest extent, the Achaemenid Empire included the modern territories
of Iran, Azerbaijan, Armenia, Georgia, Turkey, much of the Black Sea coastal regions, northeastern Greece and southern
Bulgaria (Thrace), northern Greece and Macedonia (Paeonia and Ancient Macedon), Iraq, Syria, Lebanon, Jordan, Israel,
Palestine, all significant ancient population centers of ancient Egypt as far west as Libya, Kuwait, northern Saudi
Arabia, parts of the UAE and Oman, Pakistan, Afghanistan, and much of Central Asia, making it the first world government
and the largest empire the world had yet seen. It is estimated that in 480 BC, 50 million people lived in the Achaemenid
Empire. The empire at its peak ruled over 44% of the world's population, the highest such figure for any empire in
history. In Greek history, the Achaemenid Empire is considered the antagonist of the Greek city states. The empire is noted for the emancipation of slaves, including the Jewish exiles in Babylon, for building infrastructure such as road and postal systems, and for the use of an official language, Imperial Aramaic, throughout its territories. The empire had a centralized,
bureaucratic administration under the emperor, a large professional army, and civil services, inspiring similar developments
in later empires. Furthermore, one of the Seven Wonders of the Ancient World, the Mausoleum at Halicarnassus, was
built in the empire between 353 and 350 BC. In 334 BC, Alexander the Great invaded the Achaemenid Empire, defeating
the last Achaemenid emperor, Darius III, at the Battle of Issus. Following the premature death of Alexander, Iran
came under the control of the Hellenistic Seleucid Empire. In the middle of the 2nd century BC, the Parthian Empire
rose to become the main power in Iran, and the centuries-long geopolitical arch-rivalry between Romans and Parthians
began, culminating in the Roman–Parthian Wars. The Parthian Empire continued as a feudal monarchy for nearly five
centuries, until 224 CE, when it was succeeded by the Sassanid Empire. Together with their neighboring arch-rival,
the Roman-Byzantines, they made up the world's two most dominant powers at the time, for over four centuries. The
prolonged Byzantine-Sassanid Wars, most importantly the climactic Byzantine-Sassanid War of 602-628, as well as the
social conflict within the Sassanid Empire, opened the way for an Arab invasion of Iran in the 7th century. Initially
defeated by the Arab Rashidun Caliphate, Iran came under the rule of the Arab caliphates of Umayyad and Abbasid.
The prolonged and gradual process of the Islamization of Iran began following the conquest. Under the new Arab elite
of the Rashidun and later the Umayyad caliphates, both converted (mawali) and non-converted (dhimmi) Iranians were
discriminated against, being excluded from the government and military, and having to pay a special tax called Jizya.
Gunde Shapur, home of the Academy of Gunde Shapur which was the most important medical center of the world at the
time, survived after the invasion, but became known as an Islamic institute thereafter. The blossoming literature,
philosophy, medicine, and art of Iran became major elements in the formation of a new age for the Iranian civilization,
during the period known as the Islamic Golden Age. The Islamic Golden Age reached its peak by the 10th and 11th centuries, during which Iran was the main theater of scientific activity. After the 10th century, the Persian language, alongside Arabic, was used for scientific, philosophical, historical, musical, and medical works, and important Iranian writers, such as Tusi, Avicenna, Qotb od Din Shirazi, and Biruni, made major contributions to scientific writing. The 10th century saw a mass migration of Turkic tribes from Central Asia into the Iranian plateau. Turkic
tribesmen were first used in the Abbasid army as mamluks (slave-warriors), replacing Iranian and Arab elements within
the army. As a result, the mamluks gained significant political power. In 999, large portions of Iran came briefly under the rule of the Ghaznavids, whose rulers were of mamluk Turk origin, and subsequently, for a longer period, under the Turkish Seljuk and Khwarezmian empires. These Turks had been Persianized and had adopted Persian models of administration
and rulership. The Seljuks subsequently gave rise to the Sultanate of Rum in Anatolia, while taking their thoroughly
Persianized identity with them. The result of the adoption and patronage of Persian culture by Turkish rulers was
the development of a distinct Turko-Persian tradition. Following the fracture of the Mongol Empire in 1256, Hulagu
Khan, grandson of Genghis Khan, established the Ilkhanate in Iran. In 1370, yet another conqueror, Timur, followed
the example of Hulagu, establishing the Timurid Empire which lasted for another 156 years. In 1387, Timur ordered
the complete massacre of Isfahan, reportedly killing 70,000 citizens. The Ilkhans and the Timurids soon came to adopt
the ways and customs of the Iranians, choosing to surround themselves with a culture that was distinctively Iranian.
By the 1500s, Ismail I from Ardabil had established the Safavid Dynasty, with Tabriz as the capital. Beginning with Azerbaijan, he subsequently extended his authority over all of the Iranian territories, and established an intermittent Iranian hegemony over the vast surrounding regions, reasserting the Iranian identity within large parts of Greater Iran. Iran was predominantly Sunni, but Ismail instigated a forced conversion to the Shia branch of Islam, by which
Shia Islam spread throughout the Safavid territories in the Caucasus, Iran, Anatolia, and Mesopotamia. As a result, modern-day Iran is the only official Shia nation in the world; Shia Islam holds an absolute majority in Iran and the Republic of Azerbaijan, the countries with the 1st and 2nd highest percentages of Shia inhabitants in the world. The centuries-long geopolitical and ideological rivalry between Safavid Iran and the neighboring
Ottoman Empire, led to numerous Ottoman–Persian Wars. The Safavid Era peaked in the reign of Abbas the Great, 1587–1629,
surpassing their Ottoman arch rivals in strength, and making the empire a leading hub in Western Eurasia for the
sciences and arts. The Safavid Era saw the start of mass integration from Caucasian populations into new layers of
the society of Iran, as well as mass resettlement of them within the heartlands of Iran, playing a pivotal role in
the history of Iran for centuries onwards. Following a gradual decline in the late 1600s and early 1700s, caused by internal conflicts, continuous wars with the Ottomans, and foreign interference (most notably Russian), Safavid rule was ended by Pashtun rebels who besieged Isfahan and defeated Soltan Hosein in 1722. In 1729, Nader Shah, a chieftain and military genius from Khorasan, successfully drove out the Pashtun invaders. He subsequently took back the annexed Caucasian territories, which had been divided among the Ottoman and Russian authorities amid the ongoing chaos in Iran. During the reign of Nader Shah, Iran reached its greatest extent
since the Sassanid Empire, reestablishing the Iranian hegemony all over the Caucasus, as well as other major parts
of the west and central Asia, and briefly possessing what was arguably the most powerful empire at the time. Another
civil war ensued after the death of Karim Khan in 1779, out of which Aqa Mohammad Khan emerged, founding the Qajar
Dynasty in 1794. In 1795, following the disobedience of the Georgian subjects and their alliance with the Russians,
the Qajars captured Tbilisi in the Battle of Krtsanisi, and drove the Russians out of the entire Caucasus, reestablishing
a short-lived Iranian suzerainty over the region. The Russo-Persian wars of 1804–1813 and 1826–1828 resulted in large, irrevocable territorial losses for Iran in the Caucasus, comprising all of Transcaucasia and Dagestan, which had been part of the very concept of Iran for centuries, and thus substantial gains for the neighboring Russian Empire. These losses, confirmed by the treaties of Gulistan and Turkmenchay, covered the area north of the river Aras (the contemporary Republic of Azerbaijan, eastern Georgia, Dagestan, and Armenia), which had been Iranian territory until occupied by Russia over the course of the 19th century. Between 1872 and 1905, a series of protests took place
in response to the sale of concessions to foreigners by Nasser od Din and Mozaffar od Din shahs of Qajar, and led
to the Iranian Constitutional Revolution. The first Iranian Constitution and the first national parliament of Iran
were founded in 1906, through the ongoing revolution. The Constitution included the official recognition of Iran's
three religious minorities, namely Christians, Zoroastrians, and Jews, which has remained a basis in the legislation
of Iran since then. The struggle related to the constitutional movement continued until 1911, when Mohammad Ali Shah
was defeated and forced to abdicate. On the pretext of restoring order, the Russians occupied Northern Iran in 1911,
and maintained a military presence in the region for years to come. During World War I, the British occupied much
of Western Iran, and fully withdrew in 1921. The Persian Campaign also commenced during World War I, in Northwestern Iran after an Ottoman invasion, as part of the Middle Eastern Theatre of the war. As a result of Ottoman hostilities across the border, a large number of the Assyrians of Iran were massacred by the Ottoman armies, notably in and around
Urmia. Apart from the rule of Aqa Mohammad Khan, the Qajar rule is characterized as a century of misrule. In 1941, during the Anglo-Soviet invasion of Iran, Reza Shah was forced to abdicate in favor of his son, Mohammad Reza Pahlavi, and the Allies established the Persian Corridor, a massive supply route that would last until the end of the ongoing war. The presence of so many foreign troops in
the nation also culminated in the Soviet-backed establishment of two puppet regimes in the nation; the Azerbaijan
People's Government, and the Republic of Mahabad. As the Soviet Union refused to relinquish the occupied Iranian territory, the Iran crisis of 1946 followed, resulting in the dissolution of both puppet states and the withdrawal of the Soviets. Due to the 1973 spike in oil prices, the economy of Iran was flooded with foreign
currency, which caused inflation. By 1974, the economy of Iran was experiencing double digit inflation, and despite
many large projects to modernize the country, corruption was rampant and caused large amounts of waste. By 1975 and
1976, an economic recession led to increased unemployment, especially among millions of youth who had migrated to
the cities of Iran looking for construction jobs during the boom years of the early 1970s. By the late 1970s, many
of these people opposed the Shah's regime and began to organize and join the protests against it. Immediate nationwide uprisings against the new government began with the 1979 Kurdish rebellion and the Khuzestan uprisings, along with uprisings in Sistan and Baluchestan Province and other areas. Over the next several years, these uprisings were
subdued in a violent manner by the new Islamic government. The new government went about purging itself of the non-Islamist
political opposition. Although both nationalists and Marxists had initially joined with Islamists to overthrow the
Shah, tens of thousands were executed by the Islamic government afterward. On November 4, 1979, a group of students seized the United States Embassy and took 52 of its personnel and citizens hostage, after the United States refused to return Mohammad Reza Pahlavi to Iran to face trial in the court of the new regime. Attempts by the Jimmy
Carter administration to negotiate for the release of the hostages, and a failed rescue attempt, helped force Carter
out of office and brought Ronald Reagan to power. On Jimmy Carter's final day in office, the last hostages were finally
set free as a result of the Algiers Accords. On September 22, 1980, the Iraqi army invaded the Iranian Khuzestan,
and the Iran–Iraq War began. Although the forces of Saddam Hussein made several early advances, by mid 1982, the
Iranian forces successfully managed to drive the Iraqi army back into Iraq. In July 1982, with Iraq thrown on the
defensive, Iran took the decision to invade Iraq and conducted countless offensives in a bid to conquer Iraqi territory
and capture cities, such as Basra. The war continued until 1988, when the Iraqi army defeated the Iranian forces
inside Iraq and pushed the remaining Iranian troops back across the border. Subsequently, Khomeini accepted a truce
mediated by the UN. The total Iranian casualties in the war were estimated to be 123,220–160,000 KIA, 60,711 MIA,
and 11,000–16,000 civilians killed. Iran has an area of 1,648,195 km2 (636,372 sq mi). Iran lies between latitudes
24° and 40° N, and longitudes 44° and 64° E. Its borders are with Azerbaijan (611 km or 380 mi, with Azerbaijan-Naxcivan
exclave, 179 km or 111 mi) and Armenia (35 km or 22 mi) to the north-west; the Caspian Sea to the north; Turkmenistan
(992 km or 616 mi) to the north-east; Pakistan (909 km or 565 mi) and Afghanistan (936 km or 582 mi) to the east;
Turkey (499 km or 310 mi) and Iraq (1,458 km or 906 mi) to the west; and finally the waters of the Persian Gulf and
the Gulf of Oman to the south. Iran consists of the Iranian Plateau with the exception of the coasts of the Caspian
Sea and Khuzestan Province. It is one of the world's most mountainous countries, its landscape dominated by rugged
mountain ranges that separate various basins or plateaux from one another. The populous western part is the most
mountainous, with ranges such as the Caucasus, Zagros and Alborz Mountains; the last contains Iran's highest point,
Mount Damavand at 5,610 m (18,406 ft), which is also the highest mountain on the Eurasian landmass west of the Hindu
Kush. Iran's climate ranges from arid or semiarid, to subtropical along the Caspian coast and the northern forests.
On the northern edge of the country (the Caspian coastal plain) temperatures rarely fall below freezing and the area
remains humid for the rest of the year. Summer temperatures rarely exceed 29 °C (84.2 °F). Annual precipitation is
680 mm (26.8 in) in the eastern part of the plain and more than 1,700 mm (66.9 in) in the western part. United Nations
Resident Coordinator for Iran Gary Lewis has said that "Water scarcity poses the most severe human security challenge
in Iran today". To the west, settlements in the Zagros basin experience lower temperatures, severe winters with below
zero average daily temperatures and heavy snowfall. The eastern and central basins are arid, with less than 200 mm
(7.9 in) of rain, and have occasional deserts. Average summer temperatures rarely exceed 38 °C (100.4 °F). The coastal
plains of the Persian Gulf and Gulf of Oman in southern Iran have mild winters, and very humid and hot summers. The
annual precipitation ranges from 135 to 355 mm (5.3 to 14.0 in). At least 74 species of Iranian wildlife are on the
Red List of the International Union for Conservation of Nature, a sign of serious threats against the country's
biodiversity. The Iranian Parliament has been showing disregard for wildlife by passing laws and regulations such
as the act that lets the Ministry of Industries and Mines exploit mines without the involvement of the Department
of Environment, and by approving large national development projects without demanding comprehensive study of their
impact on wildlife habitats. Shiraz, with a population of around 1.4 million (2011 census), is the sixth most populous city
of Iran. It is the capital of Fars Province, and was also a former capital of Iran. The area was greatly influenced
by the Babylonian civilization, and after the emergence of the ancient Persians, soon came to be known as Persis.
Persians were present in the region since the 9th century BC, and became rulers of a large empire under the reign
of the Achaemenid Dynasty in the 6th century BC. The ruins of Persepolis and Pasargadae, two of the four capitals
of the Achaemenid Empire, are located around the modern-day city of Shiraz. The political system of the Islamic Republic
is based on the 1979 Constitution, and comprises several intricately connected governing bodies. The Leader of the
Revolution ("Supreme Leader") is responsible for delineation and supervision of the general policies of the Islamic
Republic of Iran. The Supreme Leader is Commander-in-Chief of the armed forces, controls the military intelligence
and security operations, and has sole power to declare war or peace. The heads of the judiciary, state radio and
television networks, the commanders of the police and military forces and six of the twelve members of the Guardian
Council are appointed by the Supreme Leader. The Assembly of Experts elects and dismisses the Supreme Leader on the
basis of qualifications and popular esteem. The President is responsible for the implementation of the Constitution
and for the exercise of executive powers, except for matters directly related to the Supreme Leader, who has the
final say in all matters. The President appoints and supervises the Council of Ministers, coordinates government
decisions, and selects government policies to be placed before the legislature. Eight Vice-Presidents serve under
the President, as well as a cabinet of twenty-two ministers, who must all be approved by the legislature. The Guardian
Council comprises twelve jurists including six appointed by the Supreme Leader. The others are elected by the Iranian
Parliament from among the jurists nominated by the Head of the Judiciary. The Council interprets the constitution
and may veto Parliament. If a law is deemed incompatible with the constitution or Sharia (Islamic law), it is referred
back to Parliament for revision. The Expediency Council has the authority to mediate disputes between Parliament
and the Guardian Council, and serves as an advisory body to the Supreme Leader, making it one of the most powerful
governing bodies in the country. Local city councils are elected by public vote to four-year terms in all cities
and villages of Iran. The Special Clerical Court handles crimes allegedly committed by clerics, although it has also
taken on cases involving lay people. The Special Clerical Court functions independently of the regular judicial framework
and is accountable only to the Supreme Leader. The Court's rulings are final and cannot be appealed. The Assembly
of Experts, which meets for one week annually, comprises 86 "virtuous and learned" clerics elected by adult suffrage
for eight-year terms. As with the presidential and parliamentary elections, the Guardian Council determines candidates'
eligibility. The Assembly elects the Supreme Leader and has the constitutional authority to remove the Supreme Leader
from power at any time. It has not challenged any of the Supreme Leader's decisions. Since 2005, Iran's nuclear program
has become the subject of contention with the international community following earlier quotes of Iranian leadership
favoring the use of an atomic bomb against Iran's enemies and in particular Israel. Many countries have expressed
concern that Iran's nuclear program could divert civilian nuclear technology into a weapons program. This has led
the UN Security Council to impose sanctions against Iran, which have further isolated the country politically and economically
from the rest of the global community. In 2009, the US Director of National Intelligence said that Iran, if choosing
to, would not be able to develop a nuclear weapon until 2013. Iran has a paramilitary, volunteer militia force within
the IRGC, called the Basij, which includes about 90,000 full-time, active-duty uniformed members. Up to 11 million
men and women are members of the Basij who could potentially be called up for service; GlobalSecurity.org estimates
Iran could mobilize "up to one million men". This would be among the largest troop mobilizations in the world. In
2007, Iran's military spending represented 2.6% of GDP, or $102 per capita, the lowest figure among the Persian Gulf nations. Iran's military doctrine is based on deterrence. In 2014, the country spent $15 billion on arms and was outspent by the states of the Gulf Cooperation Council by a factor of 13. Since the 1979 Revolution, to overcome
foreign embargoes, Iran has developed its own military industry, producing its own tanks, armored personnel carriers, guided missiles, submarines, military vessels, a guided-missile destroyer, radar systems, helicopters, and fighter planes.
In recent years, official announcements have highlighted the development of weapons such as the Hoot, Kowsar, Zelzal,
Fateh-110, Shahab-3 and Sejjil missiles, and a variety of unmanned aerial vehicles (UAVs). The Fajr-3 (MIRV) is currently Iran's most advanced ballistic missile; it is a liquid-fuel missile with an undisclosed range, developed and produced domestically. In 2006, about 45% of the government's budget came from oil and natural gas revenues,
and 31% came from taxes and fees. As of 2007, Iran had earned $70 billion in foreign exchange reserves, mostly
(80%) from crude oil exports. Iranian budget deficits have been a chronic problem, mostly due to large-scale state
subsidies, that include foodstuffs and especially gasoline, totaling more than $84 billion in 2008 for the energy
sector alone. In 2010, the economic reform plan was approved by parliament to cut subsidies gradually and replace
them with targeted social assistance. The objective is to move towards free market prices in a 5-year period and
increase productivity and social justice. The administration continues to follow the market reform plans of the previous
one and indicated that it will diversify Iran's oil-reliant economy. Iran has also developed a biotechnology, nanotechnology,
and pharmaceuticals industry. However, nationalized industries such as the bonyads have often been managed badly, making them ineffective and uncompetitive over the years. Currently, the government is trying to privatize these industries, and, despite successes, several problems remain to be overcome, such as lingering corruption in the public sector and lack of competitiveness. In 2010, Iran was ranked 69th out of 139 nations in the Global Competitiveness
Report. Economic sanctions against Iran, such as the embargo against Iranian crude oil, have affected the economy.
Sanctions have led to a steep fall in the value of the rial; as of April 2013, one US dollar was worth 36,000 rials,
compared with 16,000 in early 2012. Even following a successful implementation of the 2015 nuclear and sanctions relief deal, the resulting benefits might not be distributed evenly across the Iranian economy, as political elites such as the Islamic Revolutionary Guard Corps have garnered more resources and economic interests. Alongside the capital,
the most popular tourist destinations are Isfahan, Mashhad and Shiraz. In the early 2000s, the industry faced serious
limitations in infrastructure, communications, industry standards and personnel training. The majority of the 300,000
tourist visas granted in 2003 were obtained by Asian Muslims, who presumably intended to visit important pilgrimage
sites in Mashhad and Qom. Several organized tours from Germany, France and other European countries come to Iran
annually to visit archaeological sites and monuments. In 2003, Iran ranked 68th in tourism revenues worldwide. According
to UNESCO and the deputy head of research for Iran Travel and Tourism Organization (ITTO), Iran is rated 4th among
the top 10 destinations in the Middle East. Domestic tourism in Iran is one of the largest in the world. Weak advertising,
unstable regional conditions, a poor public image in some parts of the world, and absence of efficient planning schemes
in the tourism sector have all hindered the growth of tourism. Iran has the second largest proved gas reserves in
the world after Russia, with 33.6 trillion cubic metres, and third largest natural gas production in the world after
Indonesia and Russia. It also ranks fourth in oil reserves, with an estimated 153.6 billion barrels. It is OPEC's
2nd largest oil exporter and is an energy superpower. In 2005, Iran spent US$4 billion on fuel imports, because of
contraband and inefficient domestic use. Oil industry output averaged 4 million barrels per day (640,000 m3/d) in
2005, compared with the peak of six million barrels per day reached in 1974. In the early 2000s,
industry infrastructure was increasingly inefficient because of technological lags. Few exploratory wells were drilled
in 2005. In 2004, a large share of natural gas reserves in Iran were untapped. The addition of new hydroelectric
stations and the streamlining of conventional coal and oil-fired stations increased installed capacity to 33,000
megawatts. Of that amount, about 75% was based on natural gas, 18% on oil, and 7% on hydroelectric power. In 2004,
Iran opened its first wind-powered and geothermal plants, and the first solar thermal plant is to come online in
2009. Iran is the third country in the world to have developed gas-to-liquids (GTL) technology. Iranian scientists outside Iran have
also made some major contributions to science. In 1960, Ali Javan co-invented the first gas laser, and fuzzy set
theory was introduced by Lotfi Zadeh. The Iranian cardiologist Tofy Mussivand invented and developed the first artificial cardiac pump, the precursor of the artificial heart. Samuel Rahbar discovered HbA1c, furthering research into and treatment of diabetes. Iranian physics is especially strong in string theory, with many papers being published in Iran.
Iranian-American string theorist Cumrun Vafa proposed the Vafa–Witten theorem together with Edward Witten. In August
2014, Maryam Mirzakhani became the first-ever woman, as well as the first-ever Iranian, to receive the Fields Medal,
the highest prize in mathematics. As with the spoken languages, the ethnic group composition also remains a point
of debate, mainly regarding the largest and second largest ethnic groups, the Persians and Azerbaijanis, due to the
lack of Iranian state censuses based on ethnicity. The CIA's World Factbook has estimated that around 79% of the
population of Iran are a diverse Indo-European ethno-linguistic group that comprise the speakers of Iranian languages,
with Persians constituting 53% of the population, Gilaks and Mazanderanis 7%, Kurds 10%, Lurs 6%, and Balochs 2%.
Peoples of the other ethnicities in Iran make up the remaining 22%, with Azerbaijanis constituting 16%, Arabs 2%,
Turkmens and Turkic tribes 2%, and others 2% (such as Armenians, Talysh, Georgians, Circassians, Assyrians). Christianity,
Judaism, Zoroastrianism, and the Sunni branch of Islam are officially recognized by the government, and have reserved
seats in the Iranian Parliament. But the Bahá'í Faith, which is said to be the largest non-Muslim religious minority
in Iran, is not officially recognized, and has been persecuted during its existence in Iran since the 19th century.
Since the 1979 Revolution, the persecution of Bahá'ís has increased with executions, the denial of civil rights and
liberties, and the denial of access to higher education and employment. The earliest examples of visual representations
in Iranian history are traced back to the bas-reliefs of Persepolis, c. 500 BC. Persepolis was the ritual center
of the ancient kingdom of Achaemenids, and the figures at Persepolis remain bound by the rules of grammar and syntax
of visual language. The Iranian visual arts reached a pinnacle by the Sassanid Era. A bas-relief from this period
in Taq Bostan depicts a complex hunting scene. Similar works from the period have been found to articulate movements
and actions in a highly sophisticated manner. It is even possible to see a progenitor of the cinema close-up in one
of these works of art, which shows a wounded wild pig escaping from the hunting ground. The 1960s was a significant
decade for Iranian cinema, with 25 commercial films produced annually on average throughout the early 1960s, increasing
to 65 by the end of the decade. The majority of production focused on melodrama and thrillers. With the screening
of the films Kaiser and The Cow, directed by Masoud Kimiai and Dariush Mehrjui respectively in 1969, alternative
films established their status in the film industry. Attempts to organize a film festival that had begun in 1954
within the framework of the Golrizan Festival, bore fruit in the form of the Sepas Festival in 1969. The endeavors
also resulted in the formation of the Tehran World Festival in 1973. After the Revolution of 1979, as the new government
imposed new laws and standards, a new age in Iranian cinema emerged, starting with Viva... by Khosrow Sinai and followed
by many other directors, such as Abbas Kiarostami and Jafar Panahi. Kiarostami, an admired Iranian director, planted
Iran firmly on the map of world cinema when he won the Palme d'Or for Taste of Cherry in 1997. The continuous presence
of Iranian films in prestigious international festivals, such as the Cannes Film Festival, the Venice Film Festival,
and the Berlin International Film Festival, attracted world attention to Iranian masterpieces. In 2006, six Iranian
films, of six different styles, represented Iranian cinema at the Berlin International Film Festival. Critics considered
this a remarkable event in the history of Iranian cinema. Iran received access to the Internet in 1993. According
to the 2014 census, around 40% of the population of Iran are Internet users. Iran ranks 24th among countries by number
of Internet users. According to statistics provided by the web information company Alexa, Google Search and
Yahoo! are the most used search engines in Iran. Over 80% of the users of Telegram, a cloud-based instant messaging
service, are from Iran. Instagram is the most popular online social networking service in Iran. Direct access to
Facebook has been blocked in Iran since the 2009 Iranian presidential election protests, due to organization of the
opposition movements on the website; however, Facebook has around 12 to 17 million users in Iran who use
virtual private networks and proxy servers to access the website. Around 90% of Iran's e-commerce takes place on
the Iranian online store Digikala, which has around 750,000 visitors per day and more than 2.3 million subscribers.
Digikala is the most visited online store in the Middle East, and ranks 4th among the most visited websites in Iran.
Iranian cuisine is diverse due to its variety of ethnic groups and the influence of other cultures. Herbs are frequently
used along with fruits such as plums, pomegranates, quince, prunes, apricots, and raisins. Iranians usually eat plain
yogurt with lunch and dinner; it is a staple of the diet in Iran. To achieve a balanced taste, characteristic flavourings
such as saffron, dried limes, cinnamon, and parsley are mixed delicately and used in some special dishes. Onions
and garlic are normally used in the preparation of the accompanying course, but are also served separately during
meals, either raw or pickled. Iran is also famous for its caviar. Other non-governmental estimates of
the groups other than the Persians and Azerbaijanis are roughly congruent with those of the World Factbook and the Library of
Congress. However, many scholarly and organisational estimates of the size of these two groups differ
significantly from those figures. According to many of them, ethnic Azerbaijanis comprise
between 21.6% and 30% of the total population, with most estimates placing the figure at around 25%. In any case, the largest population
of Azerbaijanis in the world lives in Iran. Iran has the Middle East's leading manufacturing industries in the fields of automobile
manufacture and transportation, construction materials, home appliances, food and agricultural goods, armaments, pharmaceuticals,
information technology, power, and petrochemicals. According to the FAO, Iran was among the world's top five producers
of the following agricultural products in 2012: apricots, cherries, sour cherries, cucumbers and gherkins,
dates, eggplants, figs, pistachios, quinces, walnuts, and watermelons. Iranian art encompasses many disciplines,
including architecture, painting, weaving, pottery, calligraphy, metalworking, and stonemasonry. The Median and Achaemenid
empires left a significant classical art scene which remained a basic influence on the art of later eras.
Art of the Parthians was a mixture of Iranian and Hellenistic artworks, with their main motifs being scenes of royal
hunting expeditions and investitures. Sassanid art played a prominent role in the formation of both European
and Asian medieval art, which carried forward to the Islamic world, and much of what later became known as Islamic
learning, such as philology, literature, jurisprudence, philosophy, medicine, architecture, and science, had a
Sassanid basis.
The British Isles are a group of islands off the north-western coast of continental Europe that consist of the islands of
Great Britain, Ireland and over six thousand smaller isles. Situated in the North Atlantic, the islands have a total
area of approximately 315,159 km2, and a combined population of just under 70 million. Two sovereign states are located
on the islands: Ireland (which covers roughly five-sixths of the island of the same name) and the United Kingdom
of Great Britain and Northern Ireland. The British Isles also include three Crown Dependencies: the Isle of Man and,
by tradition, the Bailiwick of Jersey and the Bailiwick of Guernsey in the Channel Islands, although the latter are
not physically a part of the archipelago. The oldest rocks in the group are in the north west of Scotland, Ireland
and North Wales and are 2,700 million years old. During the Silurian period the north-western regions collided with
the south-east, which had been part of a separate continental landmass. The topography of the islands is modest in
scale by global standards. Ben Nevis rises to an elevation of only 1,344 metres (4,409 ft), and Lough Neagh, which
is notably larger than other lakes on the isles, covers 390 square kilometres (151 sq mi). The climate is temperate
marine, with mild winters and warm summers. The North Atlantic Drift brings significant moisture and raises temperatures
11 °C (20 °F) above the global average for the latitude. This led to a landscape which was long dominated by temperate
rainforest, although human activity has since cleared the vast majority of forest cover. The region was re-inhabited
after the last glacial period of Quaternary glaciation, by 12,000 BC when Great Britain was still a peninsula of
the European continent. Ireland, which became an island by 12,000 BC, was not inhabited until after 8000 BC. Great
Britain became an island by 5600 BC. Tribes of the Hiberni (Ireland), the Picts (northern Britain) and the Britons (southern
Britain), all speaking Insular Celtic, inhabited the islands at the beginning of the 1st millennium AD. Much of Brittonic-controlled
Britain was conquered by the Roman Empire from AD 43. The first Anglo-Saxons arrived as Roman power waned in the
5th century and eventually dominated the bulk of what is now England. Viking invasions began in the 9th century,
followed by more permanent settlements and political change—particularly in England. The subsequent Norman conquest
of England in 1066 and the later Angevin partial conquest of Ireland from 1169 led to the imposition of a new Norman
ruling elite across much of Britain and parts of Ireland. By the Late Middle Ages, Great Britain was separated into
the Kingdoms of England and Scotland, while control in Ireland fluxed between Gaelic kingdoms, Hiberno-Norman lords
and the English-dominated Lordship of Ireland, soon restricted only to The Pale. The 1603 Union of the Crowns, Acts
of Union 1707 and Acts of Union 1800 attempted to consolidate Britain and Ireland into a single political unit, the
United Kingdom of Great Britain and Ireland, with the Isle of Man and the Channel Islands remaining as Crown Dependencies.
The expansion of the British Empire and migrations following the Irish Famine and Highland Clearances resulted in
the distribution of the islands' population and culture throughout the world and a rapid de-population of Ireland
in the second half of the 19th century. Most of Ireland seceded from the United Kingdom after the Irish War of Independence
and the subsequent Anglo-Irish Treaty (1919–1922), with six counties remaining in the UK as Northern Ireland. The
term British Isles is controversial in Ireland, where there are objections to its usage due to the association of
the word British with Ireland. The Government of Ireland does not recognise or use the term and its embassy in London
discourages its use. As a result, Britain and Ireland is used as an alternative description, and Atlantic Archipelago
has had limited use among a minority in academia, although British Isles is still commonly employed. Within the islands
themselves, they are also sometimes referred to simply as these islands. The earliest known references to the islands as a group appeared
in the writings of sea-farers from the ancient Greek colony of Massalia. The original records have been lost; however,
later writings, e.g. Avienus's Ora maritima, that quoted from the Massaliote Periplus (6th century BC) and from Pytheas's
On the Ocean (circa 325–320 BC) have survived. In the 1st century BC, Diodorus Siculus has Prettanikē nēsos, "the
British Island", and Prettanoi, "the Britons". Strabo used Βρεττανική (Brettanike), and Marcian of Heraclea, in his
Periplus maris exteri, used αἱ Πρεττανικαί νῆσοι (the Prettanic Isles) to refer to the islands. Historians today,
though not in absolute agreement, largely agree that these Greek and Latin names were probably drawn from native
Celtic-language names for the archipelago. Along these lines, the inhabitants of the islands were called the Πρεττανοί
(Priteni or Pretani). The shift from the "P" of Pretannia to the "B" of Britannia by the Romans occurred during the
time of Julius Caesar. The Greco-Egyptian scientist Claudius Ptolemy referred to the larger island as great Britain
(μεγάλης Βρεττανίας - megális Brettanias) and to Ireland as little Britain (μικρής Βρεττανίας - mikris Brettanias)
in his work Almagest (147–148 AD). In his later work, Geography (c. 150 AD), he gave these islands the names Alwion,
Iwernia, and Mona (the Isle of Man), suggesting these may have been names of the individual islands not known to
him at the time of writing Almagest. The name Albion appears to have fallen out of use sometime after the Roman conquest
of Great Britain, after which Britain became the more commonplace name for the island called Great Britain. The earliest
known use of the phrase Brytish Iles in the English language is dated 1577 in a work by John Dee. Today, this name
is seen by some as carrying imperialist overtones although it is still commonly used. Other names used to describe
the islands include the Anglo-Celtic Isles, Atlantic archipelago, British-Irish Isles, Britain and Ireland, UK and
Ireland, and British Isles and Ireland. Owing to political and national associations with the word British, the Government
of Ireland does not use the term British Isles and in documents drawn up jointly between the British and Irish governments,
the archipelago is referred to simply as "these islands". Nonetheless, British Isles is still the most widely accepted
term for the archipelago. The British Isles lie at the juncture of several regions with past episodes of tectonic
mountain building. These orogenic belts form a complex geology that records a huge and varied span of Earth's history.
Of particular note was the Caledonian Orogeny during the Ordovician Period, c. 488–444 Ma and early Silurian period,
when the craton Baltica collided with the terrane Avalonia to form the mountains and hills in northern Britain and
Ireland. Baltica formed roughly the northwestern half of Ireland and Scotland. Further collisions caused the Variscan
orogeny in the Devonian and Carboniferous periods, forming the hills of Munster, southwest England, and southern
Wales. Over the last 500 million years the land that forms the islands has drifted northwest from around 30°S, crossing
the equator around 370 million years ago to reach its present northern latitude. The islands have been shaped by
numerous glaciations during the Quaternary Period, the most recent being the Devensian. As this
ended, the central Irish Sea was deglaciated and the English Channel flooded, with sea levels rising to current levels
some 4,000 to 5,000 years ago, leaving the British Isles in their current form. Whether or not there was a land bridge
between Great Britain and Ireland at this time is somewhat disputed, though there was certainly a single ice sheet
covering the entire sea. The islands are at relatively low altitudes, with central Ireland and southern Great Britain
particularly low lying: the lowest point in the islands is Holme, Cambridgeshire at −2.75 m (−9.02 ft). The Scottish
Highlands in the northern part of Great Britain are mountainous, with Ben Nevis being the highest point on the islands
at 1,344 m (4,409 ft). Other mountainous areas include Wales and parts of Ireland; however, only seven peaks in these
areas reach above 1,000 m (3,281 ft). Lakes on the islands are generally not large, although Lough Neagh in Northern
Ireland is an exception, covering 150 square miles (390 km2). The largest freshwater body in Great
Britain by area is Loch Lomond at 27.5 square miles (71 km2), and the largest by volume is Loch Ness, whilst Loch Morar is the
deepest freshwater body in the British Isles, with a maximum depth of 310 m (1,017 ft). There are a number of major
rivers within the British Isles. The longest is the Shannon in Ireland at 224 mi (360 km). The river
Severn, at 220 mi (354 km), is the longest in Great Britain. The isles have a temperate marine climate.
The North Atlantic Drift ("Gulf Stream") which flows from the Gulf of Mexico brings with it significant moisture
and raises temperatures 11 °C (20 °F) above the global average for the islands' latitudes. Winters are cool and wet,
with summers mild and also wet. Most Atlantic depressions pass to the north of the islands; combined with the general
westerly circulation and interactions with the landmass, this imposes an east-west variation in climate. The islands
enjoy a mild climate and varied soils, giving rise to a diverse pattern of vegetation. Animal and plant life is similar
to that of the northwestern European continent. There are, however, fewer species, with Ireland having
fewer still. All native flora and fauna in Ireland consists of species that migrated from elsewhere in Europe, and
Great Britain in particular. The only window when this could have occurred was between the end of the last Ice Age
(about 12,000 years ago) and when the land bridge connecting the two islands was flooded by sea (about 8,000 years
ago). As with most of Europe, prehistoric Britain and Ireland were covered with forest and swamp. Clearing began
around 6000 BC and accelerated in medieval times. Despite this, Britain retained its primeval forests longer than
most of Europe due to a small population and later development of trade and industry, and wood shortages were not
a problem until the 17th century. By the 18th century, most of Britain's forests were consumed for shipbuilding or
manufacturing charcoal and the nation was forced to import lumber from Scandinavia, North America, and the Baltic.
Most forest land in Ireland is maintained by state forestation programmes. Almost all land outside urban areas is
farmland. However, relatively large areas of forest remain in east and north Scotland and in southeast England. Oak,
elm, ash and beech are amongst the most common trees in England. In Scotland, pine and birch are most common. Natural
forests in Ireland are mainly oak, ash, wych elm, birch and pine. Beech and lime, though not native to Ireland, are
also common there. Farmland hosts a variety of semi-natural vegetation of grasses and flowering plants. Woods, hedgerows,
mountain slopes and marshes host heather, wild grasses, gorse and bracken. Many larger animals, such as the wolf, bear
and European elk, are today extinct. However, some species such as red deer are protected. Other small mammals,
such as rabbits, foxes, badgers, hares, hedgehogs, and stoats, are very common and the European beaver has been reintroduced
in parts of Scotland. Wild boar have also been reintroduced to parts of southern England, following escapes from
boar farms and illegal releases. Many rivers contain otters and seals are common on coasts. Over 200 species of bird
reside permanently and another 200 migrate. Common types are the common chaffinch, common blackbird, house sparrow
and common starling; all small birds. Large birds are declining in number, except for those kept for game such as
pheasant, partridge, and red grouse. Fish are abundant in the rivers and lakes, in particular salmon, trout, perch
and pike. Sea fish include dogfish, cod, sole, pollock and bass, as well as mussels, crab and oysters along the coast.
There are more than 21,000 species of insects. Few species of reptiles or amphibians are found in Great Britain or
Ireland. Only three snakes are native to Great Britain: the common European adder, the grass snake and the smooth
snake; none are native to Ireland. In general, Great Britain has slightly more variation and native wildlife, with
weasels, polecats, wildcats, most shrews, moles, water voles, roe deer and common toads also being absent from Ireland.
This pattern is also true for birds and insects. Notable exceptions include the Kerry slug and certain species of
wood lice native to Ireland but not Great Britain. The demographics of the British Isles today are characterised
by a generally high density of population in England, which accounts for almost 80% of the total population of the
islands. Elsewhere in Great Britain and Ireland, high population density is limited to areas around, or
close to, a few large cities. The largest urban area by far is the Greater London Urban Area with 9 million inhabitants.
Other major population centres include the Greater Manchester Urban Area (2.4 million), the West Midlands conurbation (2.4
million), and the West Yorkshire Urban Area (1.6 million) in England, Greater Glasgow (1.2 million) in Scotland and the Greater
Dublin Area (1.1 million) in Ireland. The population of England rose rapidly during the 19th and
20th centuries whereas the populations of Scotland and Wales have shown little increase during the 20th century,
with the population of Scotland remaining unchanged since 1951. For most of its history, Ireland had a population
proportionate to its land area (about one third of the total). However, since the Great Irish Famine,
the population of Ireland has fallen to less than one tenth of the population of the British Isles. The famine, which
caused a century-long population decline, drastically reduced the Irish population and permanently altered the demographic
make-up of the British Isles. On a global scale, this disaster led to the creation of an Irish diaspora that numbers
fifteen times the current population of the island. The linguistic heritage of the British Isles is rich, with twelve
languages from six groups across four branches of the Indo-European family. The Insular Celtic languages of the Goidelic
sub-group (Irish, Manx and Scottish Gaelic) and the Brittonic sub-group (Cornish, Welsh and Breton, spoken in north-western
France) are the only remaining Celtic languages—the last of their continental relations becoming extinct before the
7th century. The Norman languages of Guernésiais, Jèrriais and Sarkese spoken in the Channel Islands are similar
to French. A cant, called Shelta, is spoken by Irish Travellers, often as a means to conceal meaning from those outside
the group. However, English, sometimes in the form of Scots, is the dominant language, with few monoglots remaining
in the other languages of the region. The Norn language of Orkney and Shetland became extinct around 1880. At the
end of the last ice age, what are now the British Isles were joined to the European mainland as a mass of land extending
north west from the modern-day northern coastline of France, Belgium and the Netherlands. Ice covered almost all
of what is now Scotland, most of Ireland and Wales, and the hills of northern England. From 14,000 to 10,000 years
ago, as the ice melted, sea levels rose separating Ireland from Great Britain and also creating the Isle of Man.
About two to four millennia later, Great Britain became separated from the mainland. Britain was probably repopulated
before the ice age ended, and certainly before it became separated from the mainland. It is likely that
Ireland became settled by sea after it had already become an island. At the time of the Roman Empire, about two thousand
years ago, various tribes, which spoke Celtic dialects of the Insular Celtic group, were inhabiting the islands.
The Romans expanded their civilisation to control southern Great Britain but were impeded in advancing any further,
building Hadrian's Wall to mark the northern frontier of their empire in 122 AD. At that time, Ireland was populated
by a people known as Hiberni, the northern third or so of Great Britain by a people known as Picts and the southern
two thirds by Britons. Anglo-Saxons arrived as Roman power waned in the 5th century AD. Initially, their arrival
seems to have been at the invitation of the Britons as mercenaries to repulse incursions by the Hiberni and Picts.
In time, Anglo-Saxon demands on the British became so great that they came to culturally dominate the bulk of southern
Great Britain, though recent genetic evidence suggests Britons still formed the bulk of the population. This dominance
created what is now England, leaving culturally British enclaves only in the north of what is now England, in
Cornwall and in what is now known as Wales. Ireland had been unaffected by the Romans except, significantly, in having
been Christianised, traditionally by the Romano-Briton, Saint Patrick. As Europe, including Britain, descended into
turmoil following the collapse of Roman civilisation, an era known as the Dark Ages, Ireland entered a golden age
and responded with missions (first to Great Britain and then to the continent), the founding of monasteries and universities.
These were later joined by Anglo-Saxon missions of a similar nature. Viking invasions began in the 9th century, followed
by more permanent settlements, particularly along the east coast of Ireland, the west coast of modern-day Scotland
and the Isle of Man. Though the Vikings were eventually neutralised in Ireland, their influence remained in the cities
of Dublin, Cork, Limerick, Waterford and Wexford. England, however, was slowly conquered around the turn of the first
millennium AD, and eventually became a feudal possession of Denmark. The relations between the descendants of Vikings
in England and counterparts in Normandy, in northern France, lay at the heart of a series of events that led to the
Norman conquest of England in 1066. The remnants of the Duchy of Normandy, which conquered England, remain associated
to the English Crown as the Channel Islands to this day. A century later the marriage of the future Henry II of England
to Eleanor of Aquitaine created the Angevin Empire, partially under the French Crown. At the invitation of a provincial
king and under the authority of Pope Adrian IV (the only Englishman to be elected pope), the Angevins invaded Ireland
in 1169. Though initially intended to be kept as an independent kingdom, the failure of the Irish High King to ensure
the terms of the Treaty of Windsor led Henry II, as King of England, to rule as effective monarch under the title
of Lord of Ireland. This title was granted to his younger son but when Henry's heir unexpectedly died the title of
King of England and Lord of Ireland became entwined in one person. By the Late Middle Ages, Great Britain was separated
into the Kingdoms of England and Scotland. Power in Ireland fluxed between Gaelic kingdoms, Hiberno-Norman lords
and the English-dominated Lordship of Ireland. A similar situation existed in the Principality of Wales, which was
slowly being annexed into the Kingdom of England by a series of laws. During the course of the 15th century, the
Crown of England would assert a claim to the Crown of France, thereby also releasing the King of England from
being a vassal of the King of France. In 1534, King Henry VIII, at first having been a strong defender of Roman Catholicism
in the face of the Reformation, separated from the Roman Church after failing to secure a divorce from the Pope.
His response was to place the King of England as "the only Supreme Head in Earth of the Church of England", thereby
removing the authority of the Pope from the affairs of the English Church. Ireland, which had been held by the King
of England as Lord of Ireland but which, strictly speaking, had been a feudal possession of the Pope since the Norman
invasion, was declared a separate kingdom in personal union with England. Scotland, meanwhile, had remained an independent
kingdom. In 1603, that changed when the King of Scotland inherited the Crown of England, and consequently the Crown
of Ireland also. The subsequent 17th century was one of political upheaval, religious division and war. English colonialism
in Ireland of the 16th century was extended by large-scale Scottish and English colonies in Ulster. Religious division
heightened and the King in England came into conflict with parliament. A prime issue, inter alia, was his policy
of tolerance towards Catholicism. The resulting English Civil War or War of the Three Kingdoms led to a revolutionary
republic in England. Ireland, largely Catholic, was mainly loyal to the king. Following defeat by the parliamentary
army, large-scale land transfers from loyalist Irish nobility to English commoners in the service of the parliamentary
army created the beginnings of a new Ascendancy class, which over the next hundred years would obliterate the English
(Hiberno-Norman) and Gaelic Irish nobility in Ireland. The new ruling class was Protestant and British, whilst the
common people were largely Catholic and Irish. This theme would influence Irish politics for centuries to come. When
the monarchy was restored in England, the king found it politically impossible to restore all the lands of former
land-owners in Ireland. The "Glorious Revolution" of 1688 repeated similar themes: a Catholic king pushing for religious
tolerance in opposition to a Protestant parliament in England. The king's army was defeated at the Battle of the
Boyne and at the militarily crucial Battle of Aughrim in Ireland. Resistance held out, and a guarantee of religious
tolerance was a cornerstone of the Treaty of Limerick. However, in the evolving political climate, the terms of Limerick
were superseded, a new monarchy was installed, and the new Irish parliament was packed with the new elite, which legislated
increasingly intolerant Penal Laws that discommoded both Dissenters and Catholics. The Kingdoms of England and Scotland
were unified in 1707 creating the Kingdom of Great Britain. Following an attempted republican revolution in Ireland
in 1798, the Kingdoms of Ireland and Great Britain were unified in 1801, creating the United Kingdom. The Isle of
Man and the Channel Islands remained outside the United Kingdom, but with their ultimate good governance being
the responsibility of the British Crown (effectively the British government). Although the colonies in North America
that would become the United States of America were lost by the start of the 19th century, the British Empire expanded
rapidly elsewhere. A century later it would cover one third of the globe. Poverty in the United Kingdom, however, remained
desperate, and industrialisation in England led to terrible conditions for the working class. Mass migrations
following the Irish Famine and Highland Clearances resulted in the distribution of the islands' population and culture
throughout the world and a rapid de-population of Ireland in the second-half of the 19th century. Most of Ireland
seceded from the United Kingdom after the Irish War of Independence and the subsequent Anglo-Irish Treaty (1919–1922),
with the six counties that formed Northern Ireland remaining as an autonomous region of the UK. There are two sovereign
states in the isles: Ireland and the United Kingdom of Great Britain and Northern Ireland. Ireland, sometimes called
the Republic of Ireland, governs five sixths of the island of Ireland, with the remainder of the island forming Northern
Ireland. Northern Ireland is a part of the United Kingdom of Great Britain and Northern Ireland, usually shortened
to simply the United Kingdom, which governs the remainder of the archipelago with the exception of the Isle of Man
and the Channel Islands. The Isle of Man and the two states of the Channel Islands, Jersey and Guernsey, are known
as the Crown Dependencies. They exercise constitutional rights of self-government and judicial independence; responsibility
for international representation rests largely upon the UK (in consultation with the respective governments); and
responsibility for defence is reserved by the UK. The United Kingdom is made up of four constituent parts: England,
Scotland and Wales, forming Great Britain, and Northern Ireland in the north-east of the island of Ireland. Of these,
Scotland, Wales and Northern Ireland have "devolved" governments meaning that they have their own parliaments/assemblies
and are self-governing with respect to certain areas set down by law. For judicial purposes, Scotland, Northern Ireland
and England and Wales (the latter being one entity) form separate legal jurisdictions, with there being no single
law for the UK as a whole. Ireland, the United Kingdom and the three Crown Dependencies are all parliamentary democracies,
with their own separate parliaments. All parts of the United Kingdom return members to parliament in London. In addition
to this, voters in Scotland, Wales and Northern Ireland return members to a parliament in Edinburgh and to assemblies
in Cardiff and Belfast respectively. Governance is normally by majority rule; however, Northern Ireland uses a
system of power sharing whereby unionists and nationalists share executive posts proportionately and where the assent
of both groups is required for the Northern Ireland Assembly to make certain decisions. (In the context of Northern
Ireland, unionists are those who want Northern Ireland to remain a part of the United Kingdom and nationalists are
those who want Northern Ireland to join with the rest of Ireland.) The British monarch is the head of state for all
parts of the isles except for the Republic of Ireland, where the head of state is the President of Ireland. Ireland
and the United Kingdom are both part of the European Union (EU). The Crown Dependencies are not part of the EU
but do participate in certain aspects that were negotiated as a part of the UK's accession to the EU. Neither
the United Kingdom nor Ireland is part of the Schengen Area, which allows passport-free travel between EU member states.
However, since the partition of Ireland, an informal free-travel area has existed across the region. In 1997, this
area gained formal recognition during the course of negotiations for the Amsterdam Treaty of the European Union
and is now known as the Common Travel Area. Reciprocal arrangements grant British and Irish citizens full voting
rights in the two states. Exceptions to this are presidential elections and constitutional referendums in the Republic
of Ireland, for which there is no comparable franchise in the other states. In the United Kingdom, these pre-date
European Union law, and in both jurisdictions go further than is required by European Union law. Other EU nationals
may only vote in local and European Parliament elections while resident in either the UK or Ireland. In 2008, a UK
Ministry of Justice report investigating how to strengthen the British sense of citizenship proposed to end this
arrangement arguing that, "the right to vote is one of the hallmarks of the political status of citizens; it is not
a means of expressing closeness between countries." The Northern Ireland Peace Process has led to a number of unusual
arrangements between the Republic of Ireland, Northern Ireland and the United Kingdom. For example, citizens of Northern
Ireland are entitled to the choice of Irish or British citizenship or both and the Governments of Ireland and the
United Kingdom consult on matters not devolved to the Northern Ireland Executive. The Northern Ireland Executive
and the Government of Ireland also meet as the North/South Ministerial Council to develop policies common across
the island of Ireland. These arrangements were made following the 1998 Good Friday Agreement. Another body established
under the Good Friday Agreement, the British–Irish Council, is made up of all of the states and territories of the
British Isles. The British–Irish Parliamentary Assembly (Irish: Tionól Pharlaiminteach na Breataine agus na hÉireann)
predates the British–Irish Council and was established in 1990. Originally it comprised 25 members of the Oireachtas,
the Irish parliament, and 25 members of the parliament of the United Kingdom, with the purpose of building mutual
understanding between members of both legislatures. Since then the role and scope of the body has been expanded to
include representatives from the Scottish Parliament, the National Assembly for Wales, the Northern Ireland Assembly,
the States of Jersey, the States of Guernsey and the High Court of Tynwald (Isle of Man). The Council does not have
executive powers but meets biannually to discuss issues of mutual importance. Similarly, the Parliamentary Assembly
has no legislative powers but investigates and collects witness evidence from the public on matters of mutual concern
to its members. Reports on its findings are presented to the Governments of Ireland and the United Kingdom. During
the February 2008 meeting of the British–Irish Council, it was agreed to set up a standing secretariat that would
serve as a permanent 'civil service' for the Council. Leading on from developments in the British–Irish Council,
the chair of the British–Irish Inter-Parliamentary Assembly, Niall Blaney, has suggested that the body should shadow
the British–Irish Council's work. The United Kingdom and Ireland have separate media, although British television,
newspapers and magazines are widely available in Ireland, giving people in Ireland a high level of familiarity with
cultural matters in the United Kingdom. Irish newspapers are also available in the UK, and Irish state and private
television is widely available in Northern Ireland.[citation needed] Certain reality TV shows have embraced the whole
of the islands, for example The X Factor, seasons 3, 4 and 7 of which featured auditions in Dublin and were open
to Irish voters, whilst the show previously known as Britain's Next Top Model became Britain and Ireland's Next Top
Model in 2011. A few cultural events are organised for the island group as a whole. For example, the Costa Book Awards
are awarded to authors resident in the UK or Ireland. The Man Booker Prize is awarded to authors from the Commonwealth
of Nations and Ireland. The Mercury Music Prize is handed out every year to the best album from a British or Irish
musician or group. Many globally popular sports had modern rules codified in the British Isles, including golf, association
football, cricket, rugby, snooker and darts, as well as many minor sports such as croquet, bowls, pitch and putt,
water polo and handball. A number of sports are popular throughout the British Isles, the most prominent of which
is association football. While this is organised separately in different national associations, leagues and national
teams, even within the UK, it is a common passion in all parts of the islands. Rugby union is also widely enjoyed
across the islands with four national teams from England, Ireland, Scotland and Wales. The British and Irish Lions
is a team chosen from each national team and undertakes tours of the southern hemisphere rugby playing nations every
four years. Ireland play as a united team, represented by players from both Northern Ireland and the Republic. These
national rugby teams play each other each year for the Triple Crown as part of the Six Nations Championship. Also
since 2001 the professional club teams of Ireland, Scotland, Wales and Italy compete against each other in the RaboDirect
Pro12. The idea of building a tunnel under the Irish Sea has been raised since 1895, when it was first investigated.
Several potential Irish Sea tunnel projects have been proposed, most recently the Tusker Tunnel between the ports
of Rosslare and Fishguard proposed by The Institute of Engineers of Ireland in 2004. A rail tunnel was proposed in
1997 on a different route, between Dublin and Holyhead, by British engineering firm Symonds. Either tunnel, at 50
mi (80 km), would be by far the longest in the world, and would cost an estimated £15 billion or €20 billion. A proposal
in 2007 estimated the cost of building a bridge from County Antrim in Northern Ireland to Galloway in Scotland at
£3.5bn (€5bn).
Phaininda and episkyros were Greek ball games. An image of an episkyros player depicted in low relief on a vase at the National
Archaeological Museum of Athens appears on the UEFA European Championship Cup. Athenaeus, writing in 228 AD, referenced
the Roman ball game harpastum. Phaininda, episkyros and harpastum were all played with the hands and involved violence. They
all appear to have resembled rugby football, wrestling and volleyball more than what is recognizable as modern football.
As with pre-codified "mob football", the antecedent of all modern football codes, these three games involved more
handling of the ball than kicking. Non-competitive games included kemari in Japan, chuk-guk in Korea and woggabaliri
in Australia. The goalkeepers are the only players allowed to touch the ball with their hands or arms while it is
in play and only in their penalty area. Outfield players mostly use their feet to strike or pass the ball, but may
also use their head or torso to do so instead. The team that scores the most goals by the end of the match wins.
If the score is level at the end of the game, either a draw is declared or the game goes into extra time and/or a
penalty shootout depending on the format of the competition. The Laws of the Game were originally codified in England
by The Football Association in 1863. Association football is governed internationally by the International Federation
of Association Football (FIFA; French: Fédération Internationale de Football Association), which organises World
Cups for both men and women every four years. Association football in itself does not have a classical history. Notwithstanding
any similarities to other ball games played around the world, FIFA have recognised that no historical connection exists
with any game played in antiquity outside Europe. The modern rules of association football are based on the mid-19th
century efforts to standardise the widely varying forms of football played in the public schools of England. The
history of football in England dates back to at least the eighth century AD. The Cambridge Rules, first drawn up
at Cambridge University in 1848, were particularly influential in the development of subsequent codes, including
association football. The Cambridge Rules were written at Trinity College, Cambridge, at a meeting attended by representatives
from Eton, Harrow, Rugby, Winchester and Shrewsbury schools. They were not universally adopted. During the 1850s,
many clubs unconnected to schools or universities were formed throughout the English-speaking world, to play various
forms of football. Some came up with their own distinct codes of rules, most notably the Sheffield Football Club,
formed by former public school pupils in 1857, which led to the formation of a Sheffield FA in 1867. In 1862, John Charles
Thring of Uppingham School also devised an influential set of rules. At a professional level, most matches produce
only a few goals. For example, the 2005–06 season of the English Premier League produced an average of 2.48 goals
per match. The Laws of the Game do not specify any player positions other than goalkeeper, but a number of specialised
roles have evolved. Broadly, these include three main categories: strikers, or forwards, whose main task is to score
goals; defenders, who specialise in preventing their opponents from scoring; and midfielders, who dispossess the
opposition and keep possession of the ball to pass it to the forwards on their team. Players in these positions are
referred to as outfield players, to distinguish them from the goalkeeper. These positions are further subdivided
according to the area of the field in which the player spends most time. For example, there are central defenders,
and left and right midfielders. The ten outfield players may be arranged in any combination. The number of players
in each position determines the style of the team's play; more forwards and fewer defenders creates a more aggressive
and offensive-minded game, while the reverse creates a slower, more defensive style of play. While players typically
spend most of the game in a specific position, there are few restrictions on player movement, and players can switch
positions at any time. The layout of a team's players is known as a formation. Defining the team's formation and
tactics is usually the prerogative of the team's manager. These ongoing efforts contributed to the formation of The
Football Association (The FA) in 1863, which first met on the morning of 26 October 1863 at the Freemasons' Tavern
in Great Queen Street, London. The only school to be represented on this occasion was Charterhouse. The Freemason's
Tavern was the setting for five more meetings between October and December, which eventually produced the first comprehensive
set of rules. At the final meeting, the first FA treasurer, the representative from Blackheath, withdrew his club
from the FA over the removal of two draft rules at the previous meeting: the first allowed for running with the ball
in hand; the second for obstructing such a run by hacking (kicking an opponent in the shins), tripping and holding.
Other English rugby clubs followed this lead and did not join the FA and instead in 1871 formed the Rugby Football
Union. The eleven remaining clubs, under the charge of Ebenezer Cobb Morley, went on to ratify the original thirteen
laws of the game. These rules included handling of the ball by "marks" and the lack of a crossbar, rules which made
it remarkably similar to Victorian rules football being developed at that time in Australia. The Sheffield FA played
by its own rules until the 1870s with the FA absorbing some of its rules until there was little difference between
the games. The world's oldest football competition is the FA Cup, which was founded by C. W. Alcock and has been
contested by English teams since 1872. The first official international football match also took place in 1872, between
Scotland and England in Glasgow, again at the instigation of C. W. Alcock. England is also home to the world's first
football league, which was founded in Birmingham in 1888 by Aston Villa director William McGregor. The original format
contained 12 clubs from the Midlands and Northern England. The laws of the game are determined by the International
Football Association Board (IFAB). The Board was formed in 1886 after a meeting in Manchester of The Football Association,
the Scottish Football Association, the Football Association of Wales, and the Irish Football Association. FIFA, the
international football body, was formed in Paris in 1904 and declared that they would adhere to Laws of the Game
of the Football Association. The growing popularity of the international game led to the admittance of FIFA representatives
to the International Football Association Board in 1913. The board consists of four representatives from FIFA and
one representative from each of the four British associations. In many parts of the world football evokes great passions
and plays an important role in the life of individual fans, local communities, and even nations. Ryszard Kapuściński says
that Europeans who are polite, modest, or humble fall easily into rage when playing or watching football games. The
Côte d'Ivoire national football team helped secure a truce to the nation's civil war in 2006 and it helped further
reduce tensions between government and rebel forces in 2007 by playing a match in the rebel capital of Bouaké, an
occasion that brought both armies together peacefully for the first time. By contrast, football is widely considered
to have been the final proximate cause for the Football War in June 1969 between El Salvador and Honduras. The sport
also exacerbated tensions at the beginning of the Yugoslav Wars of the 1990s, when a match between Dinamo Zagreb
and Red Star Belgrade degenerated into rioting in May 1990. The growth in women's football has seen major competitions
being launched at both national and international level mirroring the male competitions. Women's football has faced
many struggles. It had a "golden age" in the United Kingdom in the early 1920s when crowds reached 50,000 at some
matches; this was stopped on 5 December 1921 when England's Football Association voted to ban the game from grounds
used by its member clubs. The FA's ban was rescinded in December 1969 with UEFA voting to officially recognise women's
football in 1971. The FIFA Women's World Cup was inaugurated in 1991 and has been held every four years since, while
women's football has been an Olympic event since 1996. Association football is played in accordance with a set of
rules known as the Laws of the Game. The game is played using a spherical ball of 68.5–69.5 cm (27.0–27.4 in) circumference,
known as the football (or soccer ball). Two teams of eleven players each compete to get the ball into the other team's
goal (between the posts and under the bar), thereby scoring a goal. The team that has scored more goals at the end
of the game is the winner; if both teams have scored an equal number of goals then the game is a draw. Each team
is led by a captain who has only one official responsibility as mandated by the Laws of the Game: to be involved
in the coin toss prior to kick-off or penalty kicks. The primary law is that players other than goalkeepers may not
deliberately handle the ball with their hands or arms during play, though they do use their hands during a throw-in
restart. Although players usually use their feet to move the ball around, they may use any part of their body (notably,
"heading" with the forehead) other than their hands or arms. Within normal play, all players are free to play the
ball in any direction and move throughout the pitch, though the ball cannot be received in an offside position. In
game play, players attempt to create goal-scoring opportunities through individual control of the ball, such as by
dribbling, passing the ball to a team-mate, and by taking shots at the goal, which is guarded by the opposing goalkeeper.
Opposing players may try to regain control of the ball by intercepting a pass or through tackling the opponent in
possession of the ball; however, physical contact between opponents is restricted. Football is generally a free-flowing
game, with play stopping only when the ball has left the field of play or when play is stopped by the referee for
an infringement of the rules. After a stoppage, play recommences with a specified restart. There are 17 laws in the
official Laws of the Game, each containing a collection of stipulations and guidelines. The same laws are designed
to apply to all levels of football, although certain modifications for groups such as juniors, seniors, women and
people with physical disabilities are permitted. The laws are often framed in broad terms, which allow flexibility
in their application depending on the nature of the game. The Laws of the Game are published by FIFA, but are maintained
by the International Football Association Board (IFAB). In addition to the seventeen laws, numerous IFAB decisions
and other directives contribute to the regulation of football. Each team consists of a maximum of eleven players
(excluding substitutes), one of whom must be the goalkeeper. Competition rules may state a minimum number of players
required to constitute a team, which is usually seven. Goalkeepers are the only players allowed to play the ball
with their hands or arms, provided they do so within the penalty area in front of their own goal. Though there are
a variety of positions in which the outfield (non-goalkeeper) players are strategically placed by a coach, these
positions are not defined or required by the Laws. The basic equipment or kit players are required to wear includes
a shirt, shorts, socks, footwear and adequate shin guards. An athletic supporter and protective cup is highly recommended
for male players by medical experts and professionals. Headgear is not a required piece of basic equipment, but players
today may choose to wear it to protect themselves from head injury. Players are forbidden to wear or use anything
that is dangerous to themselves or another player, such as jewellery or watches. The goalkeeper must wear clothing
that is easily distinguishable from that worn by the other players and the match officials. As the Laws were formulated
in England, and were initially administered solely by the four British football associations within IFAB, the standard
dimensions of a football pitch were originally expressed in imperial units. The Laws now express dimensions with
approximate metric equivalents (followed by traditional units in brackets), though use of imperial units remains
popular in English-speaking countries with a relatively recent history of metrication (or only partial metrication),
such as Britain. The length of the pitch for international adult matches is in the range of 100–110 m (110–120 yd)
and the width is in the range of 64–75 m (70–80 yd). Fields for non-international matches may be 90–120 m (100–130
yd) length and 45–90 m (50–100 yd) in width, provided that the pitch does not become square. In 2008, the IFAB initially
approved a fixed size of 105 m (344 ft) long and 68 m (223 ft) wide as a standard pitch dimension for international
matches; however, this decision was later put on hold and was never actually implemented. In front of the goal is
the penalty area. This area is marked by the goal line, two lines starting on the goal line 16.5 m (18 yd) from the
goalposts and extending 16.5 m (18 yd) into the pitch perpendicular to the goal line, and a line joining them. This
area has a number of functions, the most prominent being to mark where the goalkeeper may handle the ball and where
a penalty foul by a member of the defending team becomes punishable by a penalty kick. Other markings define the
position of the ball or players at kick-offs, goal kicks, penalty kicks and corner kicks. A standard adult football
match consists of two periods of 45 minutes each, known as halves. Each half runs continuously, meaning that the
clock is not stopped when the ball is out of play. There is usually a 15-minute half-time break between halves. The
end of the match is known as full-time. The referee is the official timekeeper for the match, and may make an allowance
for time lost through substitutions, injured players requiring attention, or other stoppages. This added time is
called additional time in FIFA documents, but is most commonly referred to as stoppage time or injury time, while
lost time can also be used as a synonym. The duration of stoppage time is at the sole discretion of the referee.
The referee alone signals the end of the match. In matches where a fourth official is appointed, toward the end of
the half the referee signals how many minutes of stoppage time he intends to add. The fourth official then informs
the players and spectators by holding up a board showing this number. The signalled stoppage time may be further
extended by the referee. Added time was introduced because of an incident which happened in 1891 during a match between
Stoke and Aston Villa. Trailing 1–0 and with just two minutes remaining, Stoke were awarded a penalty. Villa's goalkeeper
kicked the ball out of the ground, and by the time the ball had been recovered, the 90 minutes had elapsed and the
game was over. The same law also states that the duration of either half is extended until the penalty kick to be
taken or retaken is completed, thus no game shall end with a penalty to be taken. In league competitions, games may
end in a draw. In knockout competitions where a winner is required, various methods may be employed to break such
a deadlock; some competitions may invoke replays. A game tied at the end of regulation time may go into extra time,
which consists of two further 15-minute periods. If the score is still tied after extra time, some competitions allow
the use of penalty shootouts (known officially in the Laws of the Game as "kicks from the penalty mark") to determine
which team will progress to the next stage of the tournament. Goals scored during extra time periods count toward
the final score of the game, but kicks from the penalty mark are only used to decide the team that progresses to
the next part of the tournament (with goals scored in a penalty shootout not making up part of the final score).
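The tie-breaking sequence described above (goals in extra time count toward the final score, while kicks from the penalty mark decide only who progresses) can be sketched as a small decision function. This is an illustrative sketch, not anything from the Laws of the Game themselves; the function name and return format are invented for the example.

```python
def resolve_tie(score, extra_time=None, shootout=None):
    """Decide a tied match per the sequence described above.

    score: (home, away) goals after normal time.
    extra_time: (home, away) goals in the two 15-minute periods,
        or None if the competition does not use extra time.
    shootout: (home, away) successful kicks from the penalty mark,
        or None if no shootout took place.
    Returns (final_score, winner) where winner is "home", "away",
    or None (a draw, e.g. in league play or before a replay).
    """
    home, away = score
    if extra_time is not None:
        # Goals scored in extra time count toward the final score.
        home += extra_time[0]
        away += extra_time[1]
    if home != away:
        return (home, away), ("home" if home > away else "away")
    if shootout is not None:
        # Shootout kicks decide who progresses but are NOT part
        # of the final score.
        sh, sa = shootout
        return (home, away), ("home" if sh > sa else "away")
    return (home, away), None
```

For example, a match level at 1–1 that produces one further home goal in extra time ends 2–1; a goalless game settled 4–3 on penalties is still recorded as 0–0, with the home side progressing.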
In the late 1990s and early 2000s, the IFAB experimented with ways of creating a winner without requiring a penalty
shootout, which was often seen as an undesirable way to end a match. These involved rules ending a game in extra
time early, either when the first goal in extra time was scored (golden goal), or if one team held a lead at the
end of the first period of extra time (silver goal). Golden goal was used at the World Cup in 1998 and 2002. The
first World Cup game decided by a golden goal was France's victory over Paraguay in 1998. Germany was the first nation
to score a golden goal in a major competition, beating the Czech Republic in the final of Euro 1996. Silver goal was
used in Euro 2004. Both these experiments have been discontinued by IFAB. The referee may punish a player's or substitute's
misconduct by a caution (yellow card) or dismissal (red card). A second yellow card in the same game leads to a red
card, and therefore to a dismissal. A player given a yellow card is said to have been "booked", the referee writing
the player's name in his official notebook. If a player has been dismissed, no substitute can be brought on in their
place. Misconduct may occur at any time, and while the offences that constitute misconduct are listed, the definitions
are broad. In particular, the offence of "unsporting behaviour" may be used to deal with most events that violate
the spirit of the game, even if they are not listed as specific offences. A referee can show a yellow or red card
to a player, substitute or substituted player. Non-players such as managers and support staff cannot be shown the
yellow or red card, but may be expelled from the technical area if they fail to conduct themselves in a responsible
manner. Along with the general administration of the sport, football associations and competition organisers also
enforce good conduct in wider aspects of the game, dealing with issues such as comments to the press, clubs' financial
management, doping, age fraud and match fixing. Most competitions enforce mandatory suspensions for players who are
sent off in a game. Some on-field incidents, if considered very serious (such as allegations of racial abuse), may
result in competitions deciding to impose heavier sanctions than those normally associated with a red card. Some
associations allow for appeals against player suspensions incurred on-field if clubs feel a referee was incorrect
or unduly harsh. There has been a football tournament at every Summer Olympic Games since 1900, except at the 1932
games in Los Angeles. Before the inception of the World Cup, the Olympics (especially during the 1920s) had the same
status as the World Cup. Originally, the event was for amateurs only; however, since the 1984 Summer Olympics, professional
players have been permitted, albeit with certain restrictions which prevent countries from fielding their strongest
sides. The Olympic men's tournament is played at Under-23 level. In the past the Olympics have allowed a restricted
number of over-age players per team. A women's tournament was added in 1996; in contrast to the men's event, full
international sides without age restrictions play the women's Olympic tournament. After the World Cup, the most important
international football competitions are the continental championships, which are organised by each continental confederation
and contested between national teams. These are the European Championship (UEFA), the Copa América (CONMEBOL), African
Cup of Nations (CAF), the Asian Cup (AFC), the CONCACAF Gold Cup (CONCACAF) and the OFC Nations Cup (OFC). The FIFA
Confederations Cup is contested by the winners of all six continental championships, the current FIFA World Cup champions
and the country which is hosting the Confederations Cup. This is generally regarded as a warm-up tournament for the
upcoming FIFA World Cup and does not carry the same prestige as the World Cup itself. The most prestigious competitions
in club football are the respective continental championships, which are generally contested between national champions,
for example the UEFA Champions League in Europe and the Copa Libertadores in South America. The winners of each continental
competition contest the FIFA Club World Cup. The governing bodies in each country operate league systems in a domestic
season, normally comprising several divisions, in which the teams gain points throughout the season depending on
results. Teams are placed into tables, placing them in order according to points accrued. Most commonly, each team
plays every other team in its league at home and away in each season, in a round-robin tournament. At the end of
a season, the top team is declared the champion. The top few teams may be promoted to a higher division, and one
or more of the teams finishing at the bottom are relegated to a lower division. A number of players may be replaced
by substitutes during the course of the game. The maximum number of substitutions permitted in most competitive international
and domestic league games is three, though the permitted number may vary in other competitions or in friendly matches.
Common reasons for a substitution include injury, tiredness, ineffectiveness, a tactical switch, or timewasting at
the end of a finely poised game. In standard adult matches, a player who has been substituted may not take further
part in a match. IFAB recommends "that a match should not continue if there are fewer than seven players in either
team." Any decision regarding points awarded for abandoned games is left to the individual football associations.
Georgian architecture is the name given in most English-speaking countries to the set of architectural styles current between
1714 and 1830. It is named after the first four British monarchs of the House of Hanover—George I, George II, George
III, and George IV—who reigned in continuous succession from August 1714 to June 1830. The style was revived in the
late 19th century in the United States as Colonial Revival architecture and in the early 20th century in Great Britain
as Neo-Georgian architecture; in both it is also called Georgian Revival architecture. In America the term "Georgian"
is generally used to describe all buildings from the period, regardless of style; in Britain it is generally restricted
to buildings that are "architectural in intention", and have stylistic characteristics that are typical of the period,
though that covers a wide range. The style of Georgian buildings is very variable, but marked by a taste for symmetry
and proportion based on the classical architecture of Greece and Rome, as revived in Renaissance architecture. Ornament
is also normally in the classical tradition, but typically rather restrained, and sometimes almost completely absent
on the exterior. The period brought the vocabulary of classical architecture to smaller and more modest buildings
than had been the case before, replacing English vernacular architecture (or becoming the new vernacular style) for
almost all new middle-class homes and public buildings by the end of the period. In towns, which expanded greatly
during the period, landowners turned into property developers, and rows of identical terraced houses became the norm.
Even the wealthy were persuaded to live in these in town, especially if provided with a square of garden in front
of the house. There was an enormous amount of building in the period, all over the English-speaking world, and the
standards of construction were generally high. Where they have not been demolished, large numbers of Georgian buildings
have survived two centuries or more, and they still form large parts of the core of cities such as London, Edinburgh,
Dublin and Bristol. The period saw the growth of a distinct and trained architectural profession; before the mid-century
"the high-sounding title, 'architect' was adopted by anyone who could get away with it". But most buildings were
still designed by builders and landlords together, and the wide spread of Georgian architecture, and the Georgian
styles of design more generally, came from dissemination through pattern books and inexpensive suites of engravings.
This contrasted with earlier styles, which were primarily disseminated among craftsmen through the direct experience
of the apprenticeship system. Authors such as the prolific William Halfpenny (active 1723–1755) saw editions published
in America as well as Britain. From the mid-18th century, Georgian styles were assimilated into an architectural
vernacular that became part and parcel of the training of every architect, designer, builder, carpenter, mason and
plasterer, from Edinburgh to Maryland. Georgian succeeded the English Baroque of Sir Christopher Wren, Sir John Vanbrugh,
Thomas Archer, William Talman, and Nicholas Hawksmoor; this in fact continued into at least the 1720s, overlapping
with a more restrained Georgian style. The architect James Gibbs was a transitional figure, his earlier buildings
are Baroque, reflecting the time he spent in Rome in the early 18th century, but he adjusted his style after 1720.
Major architects to promote the change in direction from baroque were Colen Campbell, author of the influential book
Vitruvius Britannicus (1715-1725); Richard Boyle, 3rd Earl of Burlington and his protégé William Kent; Isaac Ware;
Henry Flitcroft and the Venetian Giacomo Leoni, who spent most of his career in England. Other prominent architects
of the early Georgian period include James Paine, Robert Taylor, and John Wood, the Elder. The European Grand Tour
became very common for wealthy patrons in the period, and Italian influence remained dominant, though at the start
of the period Hanover Square, Westminster (1713 on), developed and occupied by Whig supporters of the new dynasty,
seems to have deliberately adopted German stylistic elements in their honour, especially vertical bands connecting
the windows. The styles that resulted fall within several categories. In the mainstream of Georgian style were both
Palladian architecture and its whimsical alternatives, Gothic and Chinoiserie, which were the English-speaking world's
equivalent of European Rococo. From the mid-1760s a range of Neoclassical modes were fashionable, associated with
the British architects Robert Adam, James "Athenian" Stuart, Sir William Chambers, James Wyatt, George Dance the Younger, Henry
Holland and Sir John Soane. John Nash was one of the most prolific architects of the late Georgian era, known as the
Regency; he was responsible for designing large areas of London. Greek Revival architecture was added to the
repertory, beginning around 1750, but increasing in popularity after 1800. Leading exponents were William Wilkins
and Robert Smirke. Georgian architecture is characterized by its proportion and balance; simple mathematical ratios
were used to determine the height of a window in relation to its width or the shape of a room as a double cube. Regularity,
as with ashlar (uniformly cut) stonework, was strongly approved, imbuing symmetry and adherence to classical rules:
a lack of symmetry, where Georgian additions made to earlier structures remained visible, was deeply felt
as a flaw, at least before Nash began to introduce it in a variety of styles. Regularity of housefronts along a street
was a desirable feature of Georgian town planning. Until the start of the Gothic Revival in the early 19th century,
Georgian designs usually lay within the Classical orders of architecture and employed a decorative vocabulary derived
from ancient Rome or Greece. Versions of revived Palladian architecture dominated English country house architecture.
Houses were increasingly placed in grand landscaped settings, and large houses were generally made wide and relatively
shallow, largely to look more impressive from a distance. The height was usually highest in the centre, and the Baroque
emphasis on corner pavilions often found on the continent was generally avoided. In grand houses, an entrance hall led
to steps up to a piano nobile or mezzanine floor where the main reception rooms were. Typically the basement area
or "rustic", with kitchens, offices and service areas, as well as male guests with muddy boots, came some way above
ground, and was lit by windows that were high on the inside, but just above ground level outside. A single block
was typical, with perhaps a small court for carriages at the front marked off by railings and a gate, but rarely
a stone gatehouse, or side wings around the court. Windows in all types of buildings were large and regularly placed
on a grid; this was partly to minimize window tax, which was in force throughout the period in the United Kingdom.
Some windows were subsequently bricked in. Their height increasingly varied between the floors, and they increasingly
began below waist-height in the main rooms, making a small balcony desirable. Before this, the internal plan and function
of the rooms could generally not be deduced from the outside. To open these large windows, the sash window, already
developed by the 1670s, became very widespread. Corridor plans became universal inside larger houses. Internal courtyards
became more rare, except beside the stables, and the functional parts of the building were placed at the sides, or
in separate buildings nearby hidden by trees. Attention was concentrated on the views to and from the front and rear
of the main block, with the side approaches usually much less important. The roof was typically invisible from the ground, though
domes were sometimes visible in grander buildings. The roofline was generally clear of ornament except for a balustrade
or the top of a pediment. Columns or pilasters, often topped by a pediment, were popular for ornament inside and
out, and other ornament was generally geometrical or plant-based, rather than using the human figure. Inside ornament
was far more generous, and could sometimes be overwhelming. The chimneypiece continued to be the usual main focus
of rooms, and was now given a classical treatment, and increasingly topped by a painting or a mirror. Plasterwork
ceilings, carved wood, and bold schemes of wallpaint formed a backdrop to increasingly rich collections of furniture,
paintings, porcelain, mirrors, and objets d'art of all kinds. Wood-panelling, very common since about 1500, fell
from favour around the mid-century, and wallpaper included very expensive imports from China. In towns even most
better-off people lived in terraced houses, which typically opened straight onto the street, often with a few steps
up to the door. There was often an open space, protected by iron railings, dropping down to the basement level, with
a discreet entrance down steps off the street for servants and deliveries; this is known as the "area". This meant
that the ground floor front was now set back and protected from the street, which encouraged the main reception rooms
to move there from the floor above. Where, as often, a new street or set of streets was developed, the road and pavements
were raised up, and the gardens or yards behind the houses at a lower level, usually representing the original one.
Town terraced houses for all social classes remained resolutely tall and narrow, each dwelling occupying the whole
height of the building. This contrasted with well-off continental dwellings, which had already begun to be formed
of wide apartments occupying only one or two floors of a building; such arrangements were only typical in England
when housing groups of bachelors, as in Oxbridge colleges, the lawyers in the Inns of Court, or The Albany after
it was converted in 1802. In the period in question, only in Edinburgh were working-class purpose-built tenements
common, though lodgers were common in other cities. A curving crescent, often looking out at gardens or a park, was
popular for terraces where space allowed. In early and central schemes of development, plots were sold and built
on individually, though there was often an attempt to enforce some uniformity, but as development reached further
out, developments were increasingly built as a uniform scheme and then sold. The late Georgian period saw the birth of
the semi-detached house, planned systematically, as a suburban compromise between the terraced houses of the city
and the detached "villas" further out, where land was cheaper. There had been occasional examples in town centres
going back to medieval times. Most early suburban examples are large, and in what are now the outer fringes of Central
London, but were then in areas being built up for the first time. Blackheath, Chalk Farm and St John's Wood are among
the areas that contest being the original home of the semi. Sir John Summerson gave primacy to the Eyre Estate of St
John's Wood. A plan for this exists dated 1794, in which "the whole development consists of pairs of semi-detached houses.
So far as I know, this is the first recorded scheme of the kind". In fact the French Wars put an end to this scheme,
but when the development was finally built it retained the semi-detached form, "a revolution of striking significance
and far-reaching effect". Until the Church Building Act of 1818, the period saw relatively few churches built in
Britain, which was already well-supplied, although in the later years of the period the demand for Non-conformist
and Roman Catholic places of worship greatly increased. Anglican churches that were built were designed internally
to allow maximum audibility, and visibility, for preaching, so the main nave was generally wider and shorter than
in medieval plans, and often there were no side-aisles. Galleries were common in new churches. Especially in country
parishes, the external appearance generally retained the familiar signifiers of a Gothic church, with a tower or
spire, a large west front with one or more doors, and very large windows along the nave, but all with any ornament
drawn from the classical vocabulary. Where funds permitted, a classical temple portico with columns and a pediment
might be used at the west front. Decoration inside was very limited, but churches filled up with monuments to the
prosperous. Public buildings generally varied between the extremes of plain boxes with grid windows and Italian Late
Renaissance palaces, depending on budget. Somerset House in London, designed by Sir William Chambers in 1776 for
government offices, was as magnificent as any country house, though never quite finished, as funds ran out. Barracks
and other less prestigious buildings could be as functional as the mills and factories that were growing increasingly
large by the end of the period. But as the period came to an end many commercial projects were becoming sufficiently
large, and well-funded, to become "architectural in intention", rather than having their design left to the lesser
class of "surveyors". Georgian architecture was widely disseminated in the English colonies during the Georgian era.
American buildings of the Georgian period were very often constructed of wood with clapboards; even columns were
made of timber, framed up, and turned on an over-sized lathe. At the start of the period the difficulties of obtaining
and transporting brick or stone made them a common alternative only in the larger cities, or where they were obtainable
locally. Dartmouth College, Harvard University, and the College of William and Mary, offer leading examples of Georgian
architecture in the Americas. Unlike the Baroque style that it replaced, which was mostly used for palaces and churches,
and had little representation in the British colonies, simpler Georgian styles were widely used by the upper and
middle classes. Perhaps the best remaining house is the pristine Hammond-Harwood House (1774) in Annapolis, Maryland,
designed by the colonial architect William Buckland and modelled on the Villa Pisani at Montagnana, Italy as depicted
in Andrea Palladio's I quattro libri dell'architettura ("Four Books of Architecture"). After about 1840, Georgian
conventions were slowly abandoned as a number of revival styles, including Gothic Revival, that had originated in
the Georgian period, developed and contested in Victorian architecture, and in the case of Gothic became better researched,
and closer to their originals. Neoclassical architecture remained popular, and was the opponent of Gothic in the
Battle of the Styles of the early Victorian period. In the United States the Federal style contained many elements
of Georgian style, but incorporated revolutionary symbols. In the early decades of the twentieth century when there
was a growing nostalgia for its sense of order, the style was revived and adapted and in the United States came to
be known as the Colonial Revival. In Canada the United Empire Loyalists embraced Georgian architecture as a sign
of their fealty to Britain, and the Georgian style was dominant in the country for most of the first half of the
19th century. The Grange, for example, a manor built in Toronto, was built in 1817. In Montreal, English born architect
John Ostell worked on a significant number of remarkable constructions in the Georgian style such as the Old Montreal
Custom House and the Grand séminaire de Montréal. The revived Georgian style that emerged in Britain at the beginning
of the 20th century is usually referred to as Neo-Georgian; the work of Edwin Lutyens includes many examples. Versions
of the Neo-Georgian style were commonly used in Britain for certain types of urban architecture until the late 1950s,
Bradshaw Gass & Hope's Police Headquarters in Salford of 1958 being a good example. In both the United States and
Britain, the Georgian style is still employed by architects such as Quinlan Terry, Julian Bicknell, and Fairfax and Sammons
for private residences.
The Republic of Liberia, beginning as a settlement of the American Colonization Society (ACS), declared its independence
on July 26, 1847. The United States did not recognize Liberia's independence until February 5, 1862, during the American
Civil War. Between January 7, 1822 and the American Civil War, more than 15,000 freed and free-born Black
Americans from the United States and 3,198 Afro-Caribbeans relocated to the settlement. The Black American settlers carried
their culture with them to Liberia. The Liberian constitution and flag were modeled after those of the United States. On January
3, 1848, Joseph Jenkins Roberts, a wealthy free-born Black American from Virginia who settled in Liberia, was elected
as Liberia's first president after the people proclaimed independence. Longstanding political tensions from the 27-year
rule of William Tubman resulted in a military coup in 1980 that overthrew the leadership soon after his death,
marking the beginning of political instability. Five years of military rule by the People's Redemption Council and
five years of civilian rule by the National Democratic Party of Liberia were followed by the First and Second Liberian
Civil Wars. These resulted in the deaths and displacement of more than half a million people and devastated Liberia's
economy. A peace agreement in 2003 led to democratic elections in 2005. Recovery proceeds, but about 85% of the population
lives below the international poverty line. An earlier influx of peoples into the region had been compounded by the decline of the Western Sudanic Mali
Empire in 1375 and the Songhai Empire in 1591. Additionally, as inland regions underwent desertification, inhabitants
moved to the wetter coast. These new inhabitants brought skills such as cotton spinning, cloth weaving, iron smelting,
rice and sorghum cultivation, and social and political institutions from the Mali and Songhai empires. Shortly after
the Mane conquered the region, the Vai people of the former Mali Empire immigrated into the Grand Cape Mount region.
The ethnic Kru opposed the influx of Vai, forming an alliance with the Mane to stop further influx of Vai.
In the United States, there was a movement to resettle American free blacks and freed slaves in Africa. The
American Colonization Society was founded in 1816 in Washington, DC for this purpose, by a group of prominent politicians
and slaveholders. But its membership grew to include mostly people who supported abolition of slavery. Slaveholders
wanted to get free people of color out of the South, where they were thought to threaten the stability of the slave
societies. Some abolitionists collaborated on relocation of free blacks, as they were discouraged by discrimination
against them in the North and believed they would never be accepted in the larger society. Most African Americans,
who were native-born by this time, wanted to improve conditions in the United States rather than emigrate. Leading
activists in the North strongly opposed the ACS, but some free blacks were ready to try a different environment.
In 1822, the American Colonization Society began sending African-American volunteers to the Pepper Coast to establish
a colony for freed African Americans. By 1867, the ACS (and state-related chapters) had assisted in the migration
of more than 13,000 African Americans to Liberia. These free African Americans and their descendants married within
their community and came to identify as Americo-Liberians. Many were of mixed race and educated in American culture;
they did not identify with the indigenous natives of the tribes they encountered. They intermarried largely within
the colonial community, developing an ethnic group that had a cultural tradition infused with American notions of
political republicanism and Protestant Christianity. The Americo-Liberian settlers did not identify with the indigenous
peoples they encountered, especially those in communities of the more isolated "bush." They knew nothing of their
cultures, languages or animist religion. Encounters with tribal Africans in the bush often developed as violent confrontations.
The colonial settlements were raided by the Kru and Grebo people from their inland chiefdoms. Feeling set apart from
and superior to the indigenous peoples by their culture and education, the Americo-Liberians developed
into a small elite that held on to political power. It excluded the indigenous tribesmen from birthright citizenship
in their own lands until 1904, in a repetition of the United States' treatment of Native Americans. Because of the
cultural gap between the groups and assumption of superiority of western culture, the Americo-Liberians envisioned
creating a western-style state to which the tribesmen should assimilate. They encouraged religious organizations
to set up missions and schools to educate the indigenous peoples. On April 12, 1980, a military coup led by Master
Sergeant Samuel Doe of the Krahn ethnic group overthrew and killed President William R. Tolbert, Jr. Doe and the
other plotters later executed a majority of Tolbert's cabinet and other Americo-Liberian government officials and
True Whig Party members. The coup leaders formed the People's Redemption Council (PRC) to govern the country. A strategic
Cold War ally of the West, Doe received significant financial backing from the United States while critics condemned
the PRC for corruption and political repression. In 1989, rebels led by Charles Taylor invaded the country, starting the First Liberian Civil War; the rebels soon split into various factions fighting one another.
The Economic Community Monitoring Group under the Economic Community of West African States organized a military
task force to intervene in the crisis. From 1989 to 1996 one of Africa's bloodiest civil wars ensued, claiming the
lives of more than 200,000 Liberians and displacing a million others into refugee camps in neighboring countries.
A peace deal between warring parties was reached in 1995, leading to Taylor's election as president in 1997. In March
2003, a second rebel group, Movement for Democracy in Liberia, began launching attacks against Taylor from the southeast.
Peace talks between the factions began in Accra in June of that year, and Taylor was indicted by the Special Court
for Sierra Leone for crimes against humanity that same month. By July 2003, the rebels had launched an assault on
Monrovia. Under heavy pressure from the international community and the domestic Women of Liberia Mass Action for
Peace movement, Taylor resigned in August 2003 and went into exile in Nigeria. The subsequent 2005 elections were
internationally regarded as the most free and fair in Liberian history. Ellen Johnson Sirleaf, a Harvard-trained
economist and former Minister of Finance, was elected as the first female president in Africa. Upon her inauguration,
Sirleaf requested the extradition of Taylor from Nigeria and transferred him to the SCSL for trial in The Hague.
In 2006, the government established a Truth and Reconciliation Commission to address the causes and crimes of the
civil war. Liberia is divided into fifteen counties, which, in turn, are subdivided into a total of 90 districts
and further subdivided into clans. The oldest counties are Grand Bassa and Montserrado, both founded in 1839 prior
to Liberian independence. Gbarpolu is the newest county, created in 2001. Nimba is the largest of the counties in
size at 11,551 km2 (4,460 sq mi), while Montserrado is the smallest at 1,909 km2 (737 sq mi). Montserrado is also
the most populous county with 1,144,806 residents as of the 2008 census. The Legislature is composed of the Senate
and the House of Representatives. The House, led by a speaker, has 73 members apportioned among the 15 counties on
the basis of the national census, with each county receiving a minimum of two members. Each House member represents
an electoral district within a county as drawn by the National Elections Commission and is elected by a plurality
of the popular vote of their district to a six-year term. The Senate is made up of two senators from each county
for a total of 30 senators. Senators serve nine-year terms and are elected at-large by a plurality of the popular
vote. The vice president serves as the President of the Senate, with a President pro tempore serving in their absence.
Liberia's highest judicial authority is the Supreme Court, made up of five members and headed by the Chief Justice
of Liberia. Members are nominated to the court by the president and are confirmed by the Senate, serving until the
age of 70. The judiciary is further divided into circuit and speciality courts, magistrate courts and justices of
the peace. The judicial system is a blend of common law, based on Anglo-American law, and customary law. An informal
system of traditional courts still exists within the rural areas of the country, with trial by ordeal remaining common
despite being officially outlawed. Liberia scored a 3.3 on a scale from 10 (highly clean) to 0 (highly corrupt) on
the 2010 Corruption Perceptions Index. This gave it a ranking 87th of 178 countries worldwide and 11th of 47 in Sub-Saharan
Africa. This score represented a significant improvement since 2007, when the country scored 2.1 and ranked 150th
of 180 countries. When dealing with a selection of service providers, 89% of Liberians reported
having to pay a bribe, the highest national percentage in the world according to Transparency International's 2010 Global Corruption
Barometer. The Central Bank of Liberia is responsible for printing and maintaining the Liberian dollar, which is
the primary form of currency in Liberia. Liberia is one of the world's poorest countries, with a formal employment
rate of 15%. GDP per capita peaked in 1980 at US$496, when it was comparable to Egypt's. In 2011, the
country's nominal GDP was US$1.154 billion, while nominal GDP per capita stood at US$297, the third-lowest in the
world. Historically, the Liberian economy has depended heavily on foreign aid, foreign direct investment and exports
of natural resources such as iron ore, rubber and timber. Following a peak in growth in 1979, the Liberian economy
began a steady decline due to economic mismanagement following the 1980 coup. This decline was accelerated by the
outbreak of civil war in 1989; GDP was reduced by an estimated 90% between 1989 and 1995, one of the fastest declines
in history. Upon the end of the war in 2003, GDP growth began to accelerate, reaching 9.4% in 2007. The global financial
crisis slowed GDP growth to 4.6% in 2009, though a strengthening agricultural sector led by rubber and timber exports
increased growth to 5.1% in 2010 and an expected 7.3% in 2011, making the economy one of the 20 fastest growing in
the world. In 2003, additional UN sanctions were placed on Liberian timber exports, which had risen from US$5 million
in 1997 to over US$100 million in 2002 and were believed to be funding rebels in Sierra Leone. These sanctions were
lifted in 2006. Due in large part to foreign aid and investment inflow following the end of the war, Liberia maintains
a large account deficit, which peaked at nearly 60% of GDP in 2008. Liberia gained observer status with the World Trade
Organization in 2010 and is in the process of acquiring full member status. Liberia has the highest ratio of foreign
direct investment to GDP in the world, with US$16 billion in investment since 2006. Following the inauguration of
the Sirleaf administration in 2006, Liberia signed several multibillion-dollar concession agreements in the iron
ore and palm oil industries with numerous multinational corporations, including BHP Billiton, ArcelorMittal, and
Sime Darby. Especially palm oil companies like Sime Darby (Malaysia) and Golden Veroleum (USA) are being accused
by critics of the destruction of livelihoods and the displacement of local communities, enabled through government
concessions. The Firestone Tire and Rubber Company has operated the world's largest rubber plantation in Liberia
since 1926. The Kpelle comprise more than 20% of the population and are the largest ethnic group in Liberia, residing
mostly in Bong County and adjacent areas in central Liberia. Americo-Liberians, who are descendants of African American
and West Indian, mostly Barbadian settlers, make up 2.5%. Congo people, descendants of repatriated Congo and Afro-Caribbean
slaves who arrived in 1825, make up an estimated 2.5%. These latter two groups established political control in the
19th century which they kept well into the 20th century. Numerous immigrants have come as merchants and become a
major part of the business community, including Lebanese, Indians, and other West African nationals. There is a high
percentage of interracial marriage between ethnic Liberians and the Lebanese, resulting in a significant mixed-race
population, especially in and around Monrovia. A small minority of Liberians of European descent reside in the country.
The Liberian constitution restricts citizenship to people of Black African descent. In 2010, the literacy
rate of Liberia was estimated at 60.8% (64.8% for males and 56.8% for females). In some areas primary and secondary
education is free and compulsory from the ages of 6 to 16, though enforcement of attendance is lax. In other areas
children are required to pay a tuition fee to attend school. On average, children attain 10 years of education (11
for boys and 8 for girls). The country's education sector is hampered by inadequate schools and supplies, as well
as a lack of qualified teachers. Hospitals in Liberia include the John F. Kennedy Medical Center in Monrovia and
several others. Life expectancy in Liberia was estimated at 57.4 years in 2012. The fertility rate stood at 5.9 births
per woman, and the maternal mortality rate at 990 per 100,000 births in 2010. A number of highly communicable diseases
are widespread, including tuberculosis, diarrheal diseases and malaria. In 2007, the HIV infection rates stood at
2% of the population aged 15–49 whereas the incidence of tuberculosis was 420 per 100,000 people in 2008. Approximately
58.2–66% of women are estimated to have undergone female genital mutilation. Liberia has a long, rich history
in textile arts and quilting, as the settlers brought with them their sewing and quilting skills. Liberia hosted
National Fairs in 1857 and 1858 in which prizes were awarded for various needle arts. One of the most well-known
Liberian quilters was Martha Ann Ricks, who presented a quilt featuring the famed Liberian coffee tree to Queen Victoria
in 1892. When President Ellen Johnson Sirleaf moved into the Executive Mansion, she reportedly had a Liberian-made
quilt installed in her presidential office.
Beginning in the late 1910s and early 1920s, Whitehead gradually turned his attention from mathematics to philosophy of science,
and finally to metaphysics. He developed a comprehensive metaphysical system which radically departed from most of
western philosophy. Whitehead argued that reality consists of processes rather than material objects, and that processes
are best defined by their relations with other processes, thus rejecting the theory that reality is fundamentally
constructed by bits of matter that exist independently of one another. Today Whitehead's philosophical works – particularly
Process and Reality – are regarded as the foundational texts of process philosophy. Alfred North Whitehead was born
in Ramsgate, Kent, England, in 1861. His father, Alfred Whitehead, was a minister and schoolmaster of Chatham House
Academy, a successful school for boys established by Thomas Whitehead, Alfred North's grandfather. Whitehead himself
recalled both of them as being very successful schoolmasters, but that his grandfather was the more extraordinary
man. Whitehead's mother was Maria Sarah Whitehead, formerly Maria Sarah Buckmaster. Whitehead was apparently not
particularly close with his mother, as he never mentioned her in any of his writings, and there is evidence that
Whitehead's wife, Evelyn, had a low opinion of her. In 1918 Whitehead's academic responsibilities began to seriously
expand as he accepted a number of high administrative positions within the University of London system, of which
Imperial College London was a member at the time. He was elected Dean of the Faculty of Science at the University
of London in late 1918 (a post he held for four years), a member of the University of London's Senate in 1919, and
chairman of the Senate's Academic (leadership) Council in 1920, a post which he held until he departed for America
in 1924. Whitehead was able to exert his newfound influence to successfully lobby for a new history of science department,
help establish a Bachelor of Science degree (previously only Bachelor of Arts degrees had been offered), and make
the school more accessible to less wealthy students. The two-volume biography of Whitehead by Victor Lowe is the
most definitive presentation of the life of Whitehead. However, many details of Whitehead's life remain obscure because
he left no Nachlass; his family carried out his instructions that all of his papers be destroyed after his death.
Additionally, Whitehead was known for his "almost fanatical belief in the right to privacy", and for writing very
few personal letters of the kind that would help to gain insight on his life. This led to Lowe himself remarking
on the first page of Whitehead's biography, "No professional biographer in his right mind would touch him." In addition
to numerous articles on mathematics, Whitehead wrote three major books on the subject: A Treatise on Universal Algebra
(1898), Principia Mathematica (co-written with Bertrand Russell and published in three volumes between 1910 and 1913),
and An Introduction to Mathematics (1911). The first two books were aimed exclusively at professional mathematicians,
while the latter book was intended for a larger audience, covering the history of mathematics and its philosophical
foundations. Principia Mathematica in particular is regarded as one of the most important works in mathematical logic
of the 20th century. At the time A Treatise on Universal Algebra was written, structures such as Lie algebras and hyperbolic quaternions were drawing attention to the need to expand algebraic structures beyond the associatively multiplicative class. In a review of that book, Alexander Macfarlane
wrote: "The main idea of the work is not unification of the several methods, nor generalization of ordinary algebra
so as to include them, but rather the comparative study of their several structures." In a separate review, G. B.
Mathews wrote, "It possesses a unity of design which is really remarkable, considering the variety of its themes."
Whitehead and Russell had thought originally that Principia Mathematica would take a year to complete; it ended up
taking them ten years. To add insult to injury, when it came time for publication, the three-volume work was so massive
(more than 2,000 pages) and its audience so narrow (professional mathematicians) that it was initially published
at a loss of 600 pounds, 300 of which was paid by Cambridge University Press, 200 by the Royal Society of London,
and 50 apiece by Whitehead and Russell themselves. Despite the initial loss, today there is likely no major academic
library in the world which does not hold a copy of Principia Mathematica. The ultimate substantive legacy of Principia
Mathematica is mixed. It is generally accepted that Kurt Gödel's incompleteness theorem of 1931 definitively demonstrated
that for any set of axioms and inference rules proposed to encapsulate mathematics, there would in fact be some truths
of mathematics which could not be deduced from them, and hence that Principia Mathematica could never achieve its
aims. However, Gödel could not have come to this conclusion without Whitehead and Russell's book. In this way, Principia
Mathematica's legacy might be described as its key role in disproving the possibility of achieving its own stated
goals. But beyond this somewhat ironic legacy, the book popularized modern mathematical logic and drew important
connections between logic, epistemology, and metaphysics. Whitehead's most complete work on education is the 1929
book The Aims of Education and Other Essays, which collected numerous essays and addresses by Whitehead on the subject
published between 1912 and 1927. The essay from which Aims of Education derived its name was delivered as an address
in 1916 when Whitehead was president of the London Branch of the Mathematical Association. In it, he cautioned against
the teaching of what he called "inert ideas" – ideas that are disconnected scraps of information, with no application
to real life or culture. He opined that "education with inert ideas is not only useless: it is, above all things,
harmful." Rather than teach small parts of a large number of subjects, Whitehead advocated teaching a relatively
few important concepts that the student could organically link to many different areas of knowledge, discovering
their application in actual life. For Whitehead, education should be the exact opposite of the multidisciplinary,
value-free school model – it should be transdisciplinary, and laden with values and general principles that provide
students with a bedrock of wisdom and help them to make connections between areas of knowledge that are usually regarded
as separate. Whitehead did not begin his career as a philosopher. In fact, he never had any formal training in philosophy
beyond his undergraduate education. Early in his life he showed great interest in and respect for philosophy and
metaphysics, but it is evident that he considered himself a rank amateur. In one letter to his friend and former
student Bertrand Russell, after discussing whether science aimed to be explanatory or merely descriptive, he wrote:
"This further question lands us in the ocean of metaphysic, onto which my profound ignorance of that science forbids
me to enter." Ironically, in later life Whitehead would become one of the 20th century's foremost metaphysicians.
Whitehead was unimpressed by such objections to metaphysics. In the notes of one of his students for a 1927 class, Whitehead was quoted
as saying: "Every scientific man in order to preserve his reputation has to say he dislikes metaphysics. What he
means is he dislikes having his metaphysics criticized." In Whitehead's view, scientists and philosophers make metaphysical
assumptions about how the universe works all the time, but such assumptions are not easily seen precisely because
they remain unexamined and unquestioned. While Whitehead acknowledged that "philosophers can never hope finally to
formulate these metaphysical first principles," he argued that people need to continually re-imagine their basic
assumptions about how the universe works if philosophy and science are to make any real progress, even if that progress
remains permanently asymptotic. For this reason Whitehead regarded metaphysical investigations as essential to both
good science and good philosophy. Perhaps foremost among what Whitehead considered faulty metaphysical assumptions
was the Cartesian idea that reality is fundamentally constructed of bits of matter that exist totally independently
of one another, which he rejected in favor of an event-based or "process" ontology in which events are primary and
are fundamentally interrelated and dependent on one another. He also argued that the most basic elements of reality
can all be regarded as experiential, indeed that everything is constituted by its experience. He used the term "experience"
very broadly, so that even inanimate processes such as electron collisions are said to manifest some degree of experience.
In this, he went against Descartes' separation of two different kinds of real existence, either exclusively material
or else exclusively mental. Whitehead referred to his metaphysical system as "philosophy of organism", but it would
become known more widely as "process philosophy." This is not to say that Whitehead's thought was widely accepted
or even well-understood. His philosophical work is generally considered to be among the most difficult to understand
in all of the western canon. Even professional philosophers struggled to follow Whitehead's writings. The difficulty was famously on display during the delivery of Whitehead's Gifford lectures in 1927–28 – following Arthur Eddington's lectures of the previous year – which he would later publish as Process and Reality. Shailer Mathews, dean of Chicago's Divinity School, confessed frustration with Whitehead's books, but this did not negatively affect his interest.
In fact, there were numerous philosophers and theologians at Chicago's Divinity School that perceived the importance
of what Whitehead was doing without fully grasping all of the details and implications. In 1927 they invited one
of America's only Whitehead experts – Henry Nelson Wieman – to Chicago to give a lecture explaining Whitehead's thought.
Wieman's lecture was so brilliant that he was promptly hired to the faculty and taught there for twenty years, and
for at least thirty years afterward Chicago's Divinity School was closely associated with Whitehead's thought. Though Process and Reality has been called "arguably the most impressive single metaphysical
text of the twentieth century," it has been little-read and little-understood, partly because it demands – as Isabelle
Stengers puts it – "that its readers accept the adventure of the questions that will separate them from every consensus."
Whitehead questioned western philosophy's most dearly held assumptions about how the universe works, but in doing
so he managed to anticipate a number of 21st century scientific and philosophical problems and provide novel solutions.
In Whitehead's view, then, concepts such as "quality", "matter", and "form" are problematic. These "classical" concepts
fail to adequately account for change, and overlook the active and experiential nature of the most basic elements
of the world. They are useful abstractions, but are not the world's basic building blocks. What is ordinarily conceived
of as a single person, for instance, is philosophically described as a continuum of overlapping events. After all,
people change all the time, if only because they have aged by another second and had some further experience. These
occasions of experience are logically distinct, but are progressively connected in what Whitehead calls a "society"
of events. By assuming that enduring objects are the most real and fundamental things in the universe, materialists
have mistaken the abstract for the concrete (what Whitehead calls the "fallacy of misplaced concreteness"). To put
it another way, a thing or person is often seen as having a "defining essence" or a "core identity" that is unchanging,
and describes what the thing or person really is. In this way of thinking, things and people are seen as fundamentally
the same through time, with any changes being qualitative and secondary to their core identity (e.g. "Mark's hair
has turned gray as he has gotten older, but he is still the same person"). But in Whitehead's cosmology, the only
fundamentally existent things are discrete "occasions of experience" that overlap one another in time and space,
and jointly make up the enduring person or thing. On the other hand, what ordinary thinking often regards as "the
essence of a thing" or "the identity/core of a person" is an abstract generalization of what is regarded as that
person or thing's most important or salient features across time. Identities do not define people; people define
identities. Everything changes from moment to moment, and to think of anything as having an "enduring essence" misses
the fact that "all things flow", though it is often a useful way of speaking. Whitehead pointed to the limitations
of language as one of the main culprits in maintaining a materialistic way of thinking, and acknowledged that it
may be difficult to ever wholly move past such ideas in everyday speech. After all, each moment of each person's
life can hardly be given a different proper name, and it is easy and convenient to think of people and objects as
remaining fundamentally the same things, rather than constantly keeping in mind that each thing is a different thing
from what it was a moment ago. Yet the limitations of everyday living and everyday speech should not prevent people
from realizing that "material substances" or "essences" are a convenient generalized description of a continuum of
particular, concrete processes. No one questions that a ten-year-old person is quite different by the time he or
she turns thirty years old, and in many ways is not the same person at all; Whitehead points out that it is not philosophically
or ontologically sound to think that a person is the same from one second to the next. A second problem with materialism
is that it obscures the importance of relations. It sees every object as distinct and discrete from all other objects.
Each object is simply an inert clump of matter that is only externally related to other things. The idea of matter
as primary makes people think of objects as being fundamentally separate in time and space, and not necessarily related
to anything. But in Whitehead's view, relations take a primary role, perhaps even more important than the relata
themselves. A student taking notes in one of Whitehead's fall 1924 classes wrote that: In fact, Whitehead describes
any entity as in some sense nothing more and nothing less than the sum of its relations to other entities – its synthesis
of and reaction to the world around it. A real thing is just that which forces the rest of the universe to in some
way conform to it; that is to say, if theoretically a thing made strictly no difference to any other entity (i.e.
it was not related to any other entity), it could not be said to really exist. Relations are not secondary to what
a thing is, they are what the thing is. It must be emphasized, however, that an entity is not merely a sum of its
relations, but also a valuation of them and reaction to them. For Whitehead, creativity is the absolute principle
of existence, and every entity (whether it is a human being, a tree, or an electron) has some degree of novelty in
how it responds to other entities, and is not fully determined by causal or mechanistic laws. Of course, most entities
do not have consciousness. But just as a human being's actions cannot always be predicted, the same can be said of where a tree's roots will grow, how an electron will move, or whether it will rain tomorrow. Moreover, the inability to predict
an electron's movement (for instance) is not due to faulty understanding or inadequate technology; rather, the fundamental
creativity/freedom of all entities means that there will always remain phenomena that are unpredictable. Since Whitehead's
metaphysics described a universe in which all entities experience, he needed a new way of describing perception that
was not limited to living, self-conscious beings. The term he coined was "prehension", which comes from the Latin
prehensio, meaning "to seize." The term is meant to indicate a kind of perception that can be conscious or unconscious,
applying to people as well as electrons. It is also intended to make clear Whitehead's rejection of the theory of
representative perception, in which the mind only has private ideas about other entities. For Whitehead, the term
"prehension" indicates that the perceiver actually incorporates aspects of the perceived thing into itself. In this
way, entities are constituted by their perceptions and relations, rather than being independent of them. Further,
Whitehead regards perception as occurring in two modes, causal efficacy (or "physical prehension") and presentational
immediacy (or "conceptual prehension"). Whitehead describes causal efficacy as "the experience dominating the primitive
living organisms, which have a sense for the fate from which they have emerged, and the fate towards which they go."
It is, in other words, the sense of causal relations between entities, a feeling of being influenced and affected
by the surrounding environment, unmediated by the senses. Presentational immediacy, on the other hand, is what is
usually referred to as "pure sense perception", unmediated by any causal or symbolic interpretation, even unconscious
interpretation. In other words, it is pure appearance, which may or may not be delusive (e.g. mistaking an image
in a mirror for "the real thing"). In higher organisms (like people), these two modes of perception combine into
what Whitehead terms "symbolic reference", which links appearance with causation in a process that is so automatic
that both people and animals have difficulty refraining from it. By way of illustration, Whitehead uses the example
of a person's encounter with a chair. An ordinary person looks up, sees a colored shape, and immediately infers that
it is a chair. However, an artist, Whitehead supposes, "might not have jumped to the notion of a chair", but instead
"might have stopped at the mere contemplation of a beautiful color and a beautiful shape." This is not the normal
human reaction; most people place objects in categories by habit and instinct, without even thinking about it. Moreover,
animals do the same thing. Using the same example, Whitehead points out that a dog "would have acted immediately
on the hypothesis of a chair and would have jumped onto it by way of using it as such." In this way symbolic reference
is a fusion of pure sense perceptions on the one hand and causal relations on the other; it is in fact the causal relations that dominate the more basic mentality (as the dog illustrates), while it is the sense perceptions that indicate a higher-grade mentality (as the artist illustrates). Whitehead makes the startling observation that
"life is comparatively deficient in survival value." If humans can only exist for about a hundred years, and rocks
for eight hundred million, then one is forced to ask why complex organisms ever evolved in the first place; as Whitehead
humorously notes, "they certainly did not appear because they were better at that game than the rocks around them."
He then observes that the mark of higher forms of life is that they are actively engaged in modifying their environment,
an activity which he theorizes is directed toward the three-fold goal of living, living well, and living better.
In other words, Whitehead sees life as directed toward the purpose of increasing its own satisfaction. Without such
a goal, he sees the rise of life as totally unintelligible. Whitehead's idea of God differs from traditional monotheistic
notions. Perhaps his most famous and pointed criticism of the Christian conception of God is that "the Church gave
unto God the attributes which belonged exclusively to Caesar." Here Whitehead is criticizing Christianity for defining
God as primarily a divine king who imposes his will on the world, and whose most important attribute is power. As
opposed to the most widely accepted forms of Christianity, Whitehead emphasized an idea of God that he called "the
brief Galilean vision of humility." It should be emphasized, however, that for Whitehead God is not necessarily tied
to religion. Rather than springing primarily from religious faith, Whitehead saw God as necessary for his metaphysical
system. His system required that an order exist among possibilities, an order that allowed for novelty in the world
and provided an aim to all entities. Whitehead posited that these ordered potentials exist in what he called the
primordial nature of God. However, Whitehead was also interested in religious experience. This led him to reflect
more intensively on what he saw as the second nature of God, the consequent nature. Whitehead's conception of God as a "dipolar" entity has called for fresh theological thinking. Whereas the primordial nature is eternal and unchanging, God's consequent nature is anything
but unchanging – it is God's reception of the world's activity. As Whitehead puts it, "[God] saves the world as it
passes into the immediacy of his own life. It is the judgment of a tenderness which loses nothing that can be saved."
In other words, God saves and cherishes all experiences forever, and those experiences go on to change the way God
interacts with the world. In this way, God is really changed by what happens in the world and the wider universe,
lending the actions of finite creatures an eternal significance. Whitehead thus sees God and the world as fulfilling
one another. He sees entities in the world as fluent and changing things that yearn for a permanence which only God
can provide by taking them into God's self, thereafter changing God and affecting the rest of the universe throughout
time. On the other hand, he sees God as permanent but as deficient in actuality and change: alone, God is merely
eternally unrealized possibilities, and requires the world to actualize them. God gives creatures permanence, while
the creatures give God actuality and change. For Whitehead the
core of religion was individual. While he acknowledged that individuals cannot ever be fully separated from their
society, he argued that life is an internal fact for its own sake before it is an external fact relating to others.
His most famous remark on religion is that "religion is what the individual does with his own solitariness ... and
if you are never solitary, you are never religious." Whitehead saw religion as a system of general truths that transformed
a person's character. He took special care to note that while religion is often a good influence, it is not necessarily good – the assumption that it must be, he called a "dangerous delusion" (e.g., a religion might encourage the violent extermination
of a rival religion's adherents). However, while Whitehead saw religion as beginning in solitariness, he also saw
religion as necessarily expanding beyond the individual. In keeping with his process metaphysics in which relations
are primary, he wrote that religion necessitates the realization of "the value of the objective world which is a
community derivative from the interrelations of its component individuals." In other words, the universe is a community
which makes itself whole through the relatedness of each individual entity to all the others – meaning and value
do not exist for the individual alone, but only in the context of the universal community. Whitehead writes further
that each entity "can find no such value till it has merged its individual claim with that of the objective universe.
Religion is world-loyalty. The spirit at once surrenders itself to this universal claim and appropriates it for itself."
In this way the individual and universal/social aspects of religion are mutually dependent. Whitehead also described
religion more technically as "an ultimate craving to infuse into the insistent particularity of emotion that non-temporal
generality which primarily belongs to conceptual thought alone." In other words, religion takes deeply felt emotions
and contextualizes them within a system of general truths about the world, helping people to identify their wider
meaning and significance. For Whitehead, religion served as a kind of bridge between philosophy and the emotions
and purposes of a particular society. It is the task of religion to make philosophy applicable to the everyday lives
of ordinary people. Isabelle Stengers wrote that "Whiteheadians are recruited among both philosophers and theologians,
and the palette has been enriched by practitioners from the most diverse horizons, from ecology to feminism, practices
that unite political struggle and spirituality with the sciences of education." Indeed, in recent decades attention
to Whitehead's work has become more widespread, with interest extending to intellectuals in Europe and China, and
coming from such diverse fields as ecology, physics, biology, education, economics, and psychology. One of the first
theologians to attempt to interact with Whitehead's thought was the future Archbishop of Canterbury, William Temple.
In Temple's Gifford Lectures of 1932–1934 (subsequently published as Nature, Man and God), Whitehead is one of a number of philosophers of the emergent-evolution approach with whom Temple engages. However, it was not until the
1970s and 1980s that Whitehead's thought drew much attention outside of a small group of philosophers and theologians,
primarily Americans, and even today he is not considered especially influential outside of relatively specialized
circles. Early followers of Whitehead were found primarily at the University of Chicago's Divinity School, where
Henry Nelson Wieman initiated an interest in Whitehead's work that would last for about thirty years. Professors
such as Wieman, Charles Hartshorne, Bernard Loomer, Bernard Meland, and Daniel Day Williams made Whitehead's philosophy
arguably the most important intellectual thread running through the Divinity School. They taught generations of Whitehead
scholars, the most notable of whom is John B. Cobb, Jr., who went on to found the Center for Process Studies at Claremont. But while Claremont remains the most concentrated hub of Whiteheadian activity, the place where Whitehead's thought currently seems to be growing most quickly is China.
In order to address the challenges of modernization and industrialization, China has begun to blend traditions of
Taoism, Buddhism, and Confucianism with Whitehead's "constructive post-modern" philosophy in order to create an "ecological
civilization." To date, the Chinese government has encouraged the building of twenty-three university-based centers
for the study of Whitehead's philosophy, and books by process philosophers John Cobb and David Ray Griffin are becoming
required reading for Chinese graduate students. Cobb has attributed China's interest in process philosophy partly
to Whitehead's stress on the mutual interdependence of humanity and nature, as well as his emphasis on an educational
system that includes the teaching of values rather than simply bare facts. Overall, however, Whitehead's influence
is very difficult to characterize. In English-speaking countries, his primary works are little-studied outside of
Claremont and a select number of liberal graduate-level theology and philosophy programs. Outside of these circles
his influence is relatively small and diffuse, and has tended to come chiefly through the work of his students and
admirers rather than Whitehead himself. For instance, Whitehead was a teacher and long-time friend and collaborator
of Bertrand Russell, and he also taught and supervised the dissertation of Willard Van Orman Quine, both of whom
are important figures in analytic philosophy – the dominant strain of philosophy in English-speaking countries in
the 20th century. Whitehead has also had high-profile admirers in the continental tradition, such as French post-structuralist
philosopher Gilles Deleuze, who once dryly remarked of Whitehead that "he stands provisionally as the last great
Anglo-American philosopher before Wittgenstein's disciples spread their misty confusion, sufficiency, and terror."
French sociologist and anthropologist Bruno Latour even went so far as to call Whitehead "the greatest philosopher
of the 20th century." Deleuze's and Latour's opinions, however, are minority ones, as Whitehead has not been recognized
as particularly influential within the most dominant philosophical schools. It is impossible to say exactly why Whitehead's
influence has not been more widespread, but it may be partly due to his metaphysical ideas seeming somewhat counter-intuitive
(such as his assertion that matter is an abstraction), or his inclusion of theistic elements in his philosophy, or
the perception of metaphysics itself as passé, or simply the sheer difficulty and density of his prose. Historically
Whitehead's work has been most influential in the field of American progressive theology. The most important early
proponent of Whitehead's thought in a theological context was Charles Hartshorne, who spent a semester at Harvard
as Whitehead's teaching assistant in 1925, and is widely credited with developing Whitehead's process philosophy
into a full-blown process theology. Other notable process theologians include John B. Cobb, Jr., David Ray Griffin,
Marjorie Hewitt Suchocki, C. Robert Mesle, Roland Faber, and Catherine Keller. Process theology typically stresses
God's relational nature. Rather than seeing God as impassive or emotionless, process theologians view God as "the
fellow sufferer who understands", and as the being who is supremely affected by temporal events. Hartshorne points
out that people would not praise a human ruler who was unaffected by either the joys or sorrows of his followers
– so why would this be a praise-worthy quality in God? Instead, as the being who is most affected by the world, God
is the being who can most appropriately respond to the world. However, process theology has been formulated in a
wide variety of ways. C. Robert Mesle, for instance, advocates a "process naturalism", i.e. a process theology without
God. In fact, process theology is difficult to define because process theologians are so diverse and transdisciplinary
in their views and interests. John B. Cobb, Jr. is a process theologian who has also written books on biology and
economics. Roland Faber and Catherine Keller integrate Whitehead with poststructuralist, postcolonialist, and feminist
theory. Charles Birch was both a theologian and a geneticist. Franklin I. Gamwell writes on theology and political
theory. In Syntheism - Creating God in The Internet Age, futurologists Alexander Bard and Jan Söderqvist repeatedly
credit Whitehead for the process theology they see rising out of the participatory culture expected to dominate the
digital era. One philosophical school which has historically had a close relationship with process philosophy is
American pragmatism. Whitehead himself thought highly of William James and John Dewey, and acknowledged his indebtedness
to them in the preface to Process and Reality. Charles Hartshorne (along with Paul Weiss) edited the collected papers
of Charles Sanders Peirce, one of the founders of pragmatism. Noted neopragmatist Richard Rorty was in turn a student
of Hartshorne. Today, Nicholas Rescher is one example of a philosopher who advocates both process philosophy and
pragmatism. In physics, Whitehead's thought has had some influence. He articulated a view that might perhaps be regarded as dual to Einstein's general relativity (see Whitehead's theory of gravitation), and it has been severely criticized. Yutaka Tanaka, who suggests that the gravitational constant disagrees with experimental findings, nevertheless proposes that Einstein's work does not actually refute Whitehead's formulation. Whitehead's view has now been rendered obsolete by the discovery of gravitational waves, phenomena observed locally that largely violate the kind of local flatness of space that Whitehead assumes. Consequently, Whitehead's cosmology must be regarded as a local approximation, and his assumption of a uniform spatio-temporal geometry, Minkowskian in particular, as an often-locally-adequate approximation. An exact replacement of Whitehead's cosmology would need to admit a Riemannian geometry. Also, although Whitehead
himself gave only secondary consideration to quantum theory, his metaphysics of processes has proved attractive to
some physicists in that field. Whitehead's thought has also been applied in ecology and environmental ethics. This work has been pioneered by John B. Cobb, Jr., whose book Is It Too Late? A Theology of Ecology (1971) was the
first single-authored book in environmental ethics. Cobb also co-authored a book with economist Herman Daly entitled
For the Common Good: Redirecting the Economy toward Community, the Environment, and a Sustainable Future (1989),
which applied Whitehead's thought to economics, and received the Grawemeyer Award for Ideas Improving World Order.
Cobb followed this with a second book, Sustaining the Common Good: A Christian Perspective on the Global Economy
(1994), which aimed to challenge "economists' zealous faith in the great god of growth." In education, Whitehead's thought has informed the FEELS
model developed by Xie Bangxiu and deployed successfully in China. "FEELS" stands for five things in curriculum and
education: Flexible-goals, Engaged-learner, Embodied-knowledge, Learning-through-interactions, and Supportive-teacher.
It is used for understanding and evaluating educational curriculum under the assumption that the purpose of education
is to "help a person become whole." This work is in part the product of cooperation between Chinese government organizations
and the Institute for the Postmodern Development of China. Whitehead has had some influence on philosophy of business
administration and organizational theory. This has led in part to a focus on identifying and investigating the effect
of temporal events (as opposed to static things) within organizations through an "organization studies" discourse that accommodates a variety of "weak" and "strong" process perspectives from a number of philosophers. One of the
leading figures having an explicitly Whiteheadian and panexperientialist stance towards management is Mark Dibben,
who works in what he calls "applied process thought" to articulate a philosophy of management and business administration
as part of a wider examination of the social sciences through the lens of process metaphysics. For Dibben, this allows
"a comprehensive exploration of life as perpetually active experiencing, as opposed to occasional – and thoroughly
passive – happening." Dibben has published two books on applied process thought, Applied Process Thought I: Initial
Explorations in Theory and Research (2008), and Applied Process Thought II: Following a Trail Ablaze (2009), as well
as other papers in this vein in the fields of philosophy of management and business ethics. Margaret Stout and Carrie
M. Staton have also written recently on the mutual influence of Whitehead and Mary Parker Follett, a pioneer in the
fields of organizational theory and organizational behavior. Stout and Staton see both Whitehead and Follett as sharing
an ontology that "understands becoming as a relational process; difference as being related, yet unique; and the
purpose of becoming as harmonizing difference." This connection is further analyzed by Stout and Jeannine M. Love
in Integrative Process: Follettian Thinking from Ontology to Administration.
Antibiotics revolutionized medicine in the 20th century, and have together with vaccination led to the near eradication of
diseases such as tuberculosis in the developed world. Their effectiveness and easy access led to overuse, especially
in livestock raising, prompting bacteria to develop resistance. This has led to widespread problems with antimicrobial
and antibiotic resistance, so much so as to prompt the World Health Organization to classify antimicrobial resistance
as a "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of
the world and has the potential to affect anyone, of any age, in any country". In empirical therapy, a patient has a proven or suspected infection, but the responsible microorganism has not yet been identified. While the microorganism is being identified, the doctor will usually administer the antibiotic likely to be most active against the probable cause of the infection, usually a broad-spectrum antibiotic. Empirical therapy is usually initiated before the exact microorganism causing the infection is known, as the identification process may take several days in the laboratory. Antibiotics are screened for any negative effects on humans or other mammals
before approval for clinical use, and are usually considered safe and most are well tolerated. However, some antibiotics
have been associated with a range of adverse side effects. Side-effects range from mild to very serious depending
on the antibiotics used, the microbial organisms targeted, and the individual patient. Side effects may reflect the
pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity reactions or anaphylaxis.
Safety profiles of newer drugs are often not as well established as for those that have a long history of use. Adverse
effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis. Common
side-effects include diarrhea, resulting from disruption of the species composition of the intestinal flora and leading, for example, to overgrowth of pathogenic bacteria such as Clostridium difficile. Antibacterials can also affect
the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional
side-effects can result from interaction with other drugs, such as elevated risk of tendon damage from administration
of a quinolone antibiotic with a systemic corticosteroid. Some scientists have hypothesized that the indiscriminate
use of antibiotics alters the host microbiota, and this has been associated with chronic disease. Exposure to antibiotics
early in life is associated with increased body mass in humans and mouse models. Early life is a critical period
for the establishment of the intestinal microbiota and for metabolic development. Mice exposed to subtherapeutic
antibiotic treatment (STAT), with either penicillin, vancomycin, penicillin and vancomycin, or chlortetracycline, had altered composition of the gut microbiota as well as its metabolic capabilities. Moreover, research has shown
that mice given low-dose penicillin (1 μg/g body weight) around birth and throughout the weaning process had an increased
body mass and fat mass, accelerated growth, and increased hepatic expression of genes involved in adipogenesis, compared
to control mice. In addition, penicillin in combination with a high-fat diet increased fasting insulin levels
in mice. However, it is unclear whether or not antibiotics cause obesity in humans. Studies have found a correlation
between early exposure to antibiotics (<6 months of age) and increased body mass (at 10 and 20 months). Another study found that the type of antibiotic exposure was also
significant, with the highest risk of being overweight in those given macrolides compared with penicillin and cephalosporin.
Therefore, there is a correlation between antibiotic exposure in early life and obesity in humans, but whether or not
there is a causal relationship remains unclear. Although there is a correlation between antibiotic use in early life
and obesity, the effect of antibiotics on obesity in humans needs to be weighed against the beneficial effects of
clinically indicated treatment with antibiotics in infancy. The majority of studies indicate that antibiotics do not interfere with contraceptive pills; clinical studies suggest that the failure rate of contraceptive pills caused by antibiotics is very low (about 1%). In cases where antibacterials have been suggested to affect the efficiency of
birth control pills, such as for the broad-spectrum antibacterial rifampicin, these cases may be due to an increase
in the activity of hepatic enzymes, causing increased breakdown of the pill's active ingredients. Effects
on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested,
but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive
measures be applied during therapies using antibacterials that are suspected to interact with oral contraceptives.
Interactions between alcohol and certain antibiotics may occur and may cause side-effects and decreased effectiveness
of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics,
there are specific types of antibiotics with which alcohol consumption may cause serious side-effects. Therefore,
potential risks of side-effects and effectiveness depend on the type of antibiotic administered. Despite the lack
of a categorical contraindication, the belief that alcohol and antibiotics should never be mixed is widespread.
The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include
host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the
antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires
ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and in
clinical settings this bactericidal action has also been shown to eliminate bacterial infection. Since the activity of an antibacterial frequently depends on its concentration, in vitro characterization of antibacterial activity commonly includes the determination
of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial. To predict clinical
outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and
several pharmacological parameters are used as markers of drug efficacy. Antibacterial antibiotics are commonly classified
based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions
or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane
(polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides)
have bactericidal activities. Those that target protein synthesis (macrolides, lincosamides and tetracyclines) are
usually bacteriostatic (with the exception of bactericidal aminoglycosides). Further categorization is based on their
target specificity. "Narrow-spectrum" antibacterial antibiotics target specific types of bacteria, such as Gram-negative
or Gram-positive bacteria, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year
hiatus in discovering new classes of antibacterial compounds, four new classes of antibacterial antibiotics have
been brought into clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines
(such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin). With advances
in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds.
These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the
genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms
are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are
produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular
weight of less than 2000 atomic mass units. The emergence of resistance of bacteria to antibiotics
is a common phenomenon. Emergence of resistance often reflects evolutionary processes that take place during antibiotic
therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity
to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant
bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for
strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück
experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial
species and strains, have become less effective, due to the increased resistance of many bacterial strains. Several
molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic
makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired
resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing
bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred
to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission
of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial
resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance
genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance
to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance
to more than one antibacterial compound. Antibacterial-resistant strains and species, sometimes referred to as
"superbugs", now contribute to the emergence of diseases that were for a while well controlled. For example, emergent bacterial
strains causing tuberculosis (TB) that are resistant to previously effective antibacterial treatments pose many therapeutic
challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated
to occur worldwide. For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range
of beta-lactam antibacterials. The United Kingdom
's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections."
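The selection dynamics described above, in which resistant mutants arise before antibiotic exposure and are then favored by treatment, were what the Luria–Delbrück experiment distinguished from resistance induced by the drug itself. A minimal simulation can illustrate the statistical signature involved; this is a deliberately simplified sketch, and the mutation rate, culture size, and generation count below are arbitrary illustrative values, not experimental data. If mutations arise randomly during growth before exposure, resistant counts vary wildly between parallel cultures (occasional early "jackpots"), whereas resistance induced at exposure would give Poisson-distributed counts with variance close to the mean:

```python
import random

random.seed(1)

MU = 2e-4         # illustrative mutation probability per cell division (made up)
GENERATIONS = 12  # each culture grows from 1 cell to 2**12 = 4096 cells
CULTURES = 300

def grow_culture():
    """Grow one culture from a single sensitive cell.

    Mutations occur at division, and every descendant of a mutant stays
    resistant, so an early mutation yields a 'jackpot' culture."""
    sensitive, resistant = 1, 0
    for _ in range(GENERATIONS):
        mutants = sum(1 for _ in range(2 * sensitive) if random.random() < MU)
        sensitive = 2 * sensitive - mutants
        resistant = 2 * resistant + mutants
    return resistant

def fano(counts):
    """Variance-to-mean ratio; ~1 for a Poisson distribution."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return var / mean

# Hypothesis 1: mutations arise during growth, before exposure (Luria-Delbruck).
pre_existing = [grow_culture() for _ in range(CULTURES)]

# Hypothesis 2: resistance is induced at exposure; each final cell mutates
# independently with a probability matched to the same mean count.
mean_mutants = sum(pre_existing) / len(pre_existing)
p_induced = mean_mutants / 2 ** GENERATIONS
induced = [sum(1 for _ in range(2 ** GENERATIONS) if random.random() < p_induced)
           for _ in range(CULTURES)]

fano_pre, fano_induced = fano(pre_existing), fano(induced)
print(f"variance/mean, pre-existing mutations: {fano_pre:.1f}")   # well above 1
print(f"variance/mean, induced at exposure:    {fano_induced:.1f}")  # near 1
```

The large variance-to-mean ratio under the first hypothesis is the fluctuation that Luria and Delbrück observed experimentally, indicating that resistance pre-dates selection.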
Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. Self-prescription of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics.
Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of
the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. An example of inappropriate antibiotic treatment is the prescription of antibiotics for viral infections such as the common cold. One study on respiratory
tract infections found
"physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics.
Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics.
The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force
on Antimicrobial Resistance. This task force aims to actively address antimicrobial resistance, and is coordinated
by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes
of Health (NIH), as well as other US agencies. One NGO campaign group is Keep Antibiotics Working. In France, an
"Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially
in children. The emergence of antibiotic resistance prompted restrictions on antibiotic use in the UK in 1970 (Swann
report 1969), and the EU has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several
organizations (e.g., The American Society for Microbiology (ASM), American Public Health Association (APHA) and the
American Medical Association (AMA)) have called for restrictions on antibiotic use in food animal production and
an end to all nontherapeutic uses. However, there are often delays in regulatory and legislative
actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries
using or selling antibiotics, and to the time required for research to test causal links between their use and resistance
to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food
animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations,
including the American Holistic Nurses
' Association, the American Medical Association, and the American Public Health Association (APHA).
There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations.
Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2000 years ago. Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials and extracts to treat infections. More recent observations made in the laboratory of antibiosis between microorganisms led to the discovery of natural antibacterials produced by microorganisms. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics". The term 'antibiosis
', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1942. Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would color human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the synthetic antibacterial salvarsan now called arsphenamine.
The effects of some types of mold on infection had been noticed many times over the course of history (see: History of penicillin). In 1928, Alexander Fleming noticed the same effect in a Petri dish, where a number of disease-causing bacteria were killed by a fungus of the genus Penicillium. Fleming postulated that the effect is mediated by an antibacterial compound he named penicillin, and that its antibacterial properties could be exploited for chemotherapy. He initially characterized some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists.
The first sulfonamide and first commercially available antibacterial, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 at the Bayer Laboratories of the IG Farben conglomerate in Germany. Domagk received the 1939 Nobel Prize for Medicine for his efforts. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials.
In 1939, coinciding with the start of World War II, Rene Dubos reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from B. brevis. It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during the war.
Florey and Chain succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was determined by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides. The discovery of such a powerful antibiotic was unprecedented, and the development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Ernst Chain and Howard Florey shared the 1945 Nobel Prize in Medicine with Fleming. Florey credited Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. Vaccines rely on immune modulation or augmentation. Vaccination either excites or reinforces
the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of
antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a
drastic reduction in global bacterial diseases. Vaccines made from attenuated whole cells or lysates have been replaced
largely by less reactogenic, cell-free vaccines consisting of purified components, including capsular polysaccharides
and their conjugates to protein carriers, as well as inactivated toxins (toxoids) and proteins. Phage therapy is another option being investigated for treating resistant strains of bacteria. Researchers infect pathogenic bacteria with their own viruses, specifically bacteriophages. Bacteriophages, also known simply as phages, are viruses that infect bacteria. A lytic phage inserts its DNA into the bacterium, where it is transcribed; the cell then proceeds to make new phages, and as soon as they are ready to be released, the cell lyses and dies. One of the worries about using phages to fight pathogens is that the phages will infect "good" bacteria, the bacteria that are important in the everyday functioning of human beings. However, studies have shown that phages are highly specific in the bacteria they target, which makes researchers hopeful that bacteriophage therapy may be an effective route to defeating antibiotic-resistant bacteria.
In April 2013, the Infectious Diseases Society of America (IDSA) reported that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. Since 2009, only two new antibiotics have been approved in the United States.
The number of new antibiotics approved for marketing per year has declined continuously. The report identified seven
antibiotics against the Gram-negative bacilli (GNB) currently in phase 2 or phase 3 clinical trials. However, these
drugs do not address the entire spectrum of resistance of GNB. Some of these antibiotics are combinations of existing treatments. Possible improvements include clarification of clinical trial regulations by the FDA. Furthermore, appropriate
economic incentives could persuade pharmaceutical companies to invest in this endeavor. The Antibiotic Development to Advance Patient Treatment (ADAPT) Act aims to fast-track drug development to combat the growing threat of 'superbugs'. Under this Act, the FDA can approve antibiotics and antifungals for treating life-threatening infections based on smaller
clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The
FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or
'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health
programs at The Pew Charitable Trusts,
"By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."
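The "breakpoints" in the FDA labeling process above are interpretive thresholds applied to the minimum inhibitory concentration (MIC) described earlier: in a serial dilution series, the MIC is simply the lowest tested concentration at which no visible growth occurs, and breakpoints translate that number into a susceptibility call. The sketch below illustrates only this logic; the concentrations, growth readings, and breakpoint values are invented for illustration, not real clinical criteria:

```python
def mic(growth_by_concentration):
    """Return the MIC: the lowest tested concentration (mg/L) at which no
    visible growth was observed, or None if growth occurred at every level."""
    inhibitory = [c for c, grew in growth_by_concentration.items() if not grew]
    return min(inhibitory) if inhibitory else None

def interpret(mic_value, susceptible_breakpoint, resistant_breakpoint):
    """Classify an isolate against breakpoints (mg/L); values here are made up."""
    if mic_value is None or mic_value > resistant_breakpoint:
        return "resistant"
    if mic_value <= susceptible_breakpoint:
        return "susceptible"
    return "intermediate"

# Invented readings from a two-fold dilution series (True = visible growth):
readings = {0.25: True, 0.5: True, 1.0: True, 2.0: False, 4.0: False, 8.0: False}
m = mic(readings)
print(m)                       # 2.0
print(interpret(m, 1.0, 4.0))  # intermediate
```

In practice the minimum bactericidal concentration is determined analogously by subculturing the clear wells, and the resulting MIC is combined with pharmacokinetic parameters, as the article notes, to predict clinical outcome.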
Windows 8 introduced major changes to the operating system's platform and user interface to improve its user experience on
tablets, where Windows was now competing with mobile operating systems, including Android and iOS. In particular,
these changes included a touch-optimized Windows shell based on Microsoft's "Metro" design language, the Start screen
(which displays programs and dynamically updated content on a grid of tiles), a new platform for developing apps
with an emphasis on touchscreen input, integration with online services (including the ability to sync apps and settings
between devices), and Windows Store, an online store for downloading and purchasing new software. Windows 8 added
support for USB 3.0, Advanced Format hard drives, near field communications, and cloud computing. Additional security
features were introduced, such as built-in antivirus software, integration with Microsoft SmartScreen phishing filtering
service and support for UEFI Secure Boot on supported devices with UEFI firmware, to prevent malware from infecting
the boot process. Windows 8 was released to a mixed critical reception. Although reaction towards its performance
improvements, security enhancements, and improved support for touchscreen devices was positive, the new user interface
of the operating system was widely criticized for being potentially confusing and difficult to learn (especially
when used with a keyboard and mouse instead of a touchscreen). Despite these shortcomings, 60 million Windows 8 licenses
had been sold through January 2013, a number which included both upgrades and sales to OEMs for new PCs. Windows
8 development started before Windows 7 had shipped in 2009. At the Consumer Electronics Show in January 2011, it
was announced that the next version of Windows would add support for ARM system-on-chips alongside the existing x86
processors produced by vendors, especially AMD and Intel. Windows division president Steven Sinofsky demonstrated
an early build of the port on prototype devices, while Microsoft CEO Steve Ballmer announced the company's goal for
Windows to be "everywhere on every kind of device without compromise." Details also began to surface about a new
application framework for Windows 8 codenamed "Jupiter", which would be used to make "immersive" applications using
XAML (similarly to Windows Phone and Silverlight) that could be distributed via a new packaging system and a rumored
application store. Three milestone releases of Windows 8 leaked to the general public. Milestone 1, Build 7850, was
leaked on April 12, 2011. It was the first build where the text of a window was written centered instead of aligned
to the left. It was also probably the first appearance of the Metro-style font, and its wallpaper had the text "shhh... let's not leak our hard work". However, its detailed build number reveals that the build was created on September 22, 2010. The leaked copy was the Enterprise edition, and the OS still identified itself as "Windows 7". Milestone 2, Build 7955,
was leaked on April 25, 2011. The traditional Blue Screen of Death (BSoD) was replaced by a new Black screen, although
this was later scrapped. This build introduced a new ribbon in Windows Explorer. Build 7959, with minor changes but
the first 64-bit version, was leaked on May 1, 2011. The "Windows 7" logo was temporarily replaced with text displaying
"Microsoft Confidential". On June 17, 2011, build 7989 64-bit edition was leaked. It introduced a new boot screen
featuring the same fish as the default Windows 7 Beta wallpaper, which was later scrapped, and the circling dots
as featured in the final version (though the final uses a smaller circling-dot throbber). It also had the text "Welcome" below them, although this was also scrapped. On September 13, 2011, Microsoft released the Windows 8 Developer Preview (build 8102) publicly; the build was available for download later in the day in
standard 32-bit and 64-bit versions, plus a special 64-bit version which included SDKs and developer tools (Visual
Studio Express and Expression Blend) for developing Metro-style apps. The Windows Store was announced during the
presentation, but was not available in this build. According to Microsoft, there were about 535,000 downloads of
the developer preview within the first 12 hours of its release. Originally set to expire on March 11, 2012, in February
2012 the Developer Preview's expiry date was changed to January 15, 2013. On February 29, 2012, Microsoft released
Windows 8 Consumer Preview, the beta version of Windows 8, build 8250. Alongside other changes, the build removed
the Start button from the taskbar for the first time since its debut on Windows 95; according to Windows manager
Chaitanya Sareen, the Start button was removed to reflect their view that on Windows 8, the desktop was an "app"
itself, and not the primary interface of the operating system. Windows president Steven Sinofsky said more than 100,000
changes had been made since the developer version went public. The day after its release, Windows 8 Consumer Preview
had been downloaded over one million times. Like the Developer Preview, the Consumer Preview expired on January 15,
2013. Many other builds were released until Japan's Developers Day conference, when Steven Sinofsky announced
that Windows 8 Release Preview (build 8400) would be released during the first week of June. On May 28, 2012, Windows
8 Release Preview (Standard Simplified Chinese x64 edition, not China-specific version, build 8400) was leaked online
on various Chinese and BitTorrent websites. On May 31, 2012, Windows 8 Release Preview was released to the public
by Microsoft. Major items in the Release Preview included the addition of Sports, Travel, and News apps, along with
an integrated version of Adobe Flash Player in Internet Explorer. Like the Developer Preview and the Consumer Preview,
the release preview expired on January 15, 2013. On August 1, 2012, Windows 8 (build 9200) was released to manufacturing
with the build number 6.2.9200.16384. Microsoft planned to hold a launch event on October 25, 2012, and release Windows
8 for general availability on the next day. However, only a day after its release to manufacturing, a copy of the
final version of Windows 8 Enterprise N (a version for European markets lacking bundled media players to comply with
a court ruling) leaked online, followed by leaks of the final versions of Windows 8 Pro and Enterprise a few days
later. On August 15, 2012, Windows 8 was made available to download for MSDN and TechNet subscribers. Windows 8 was
made available to Software Assurance customers on August 16, 2012. Windows 8 was made available for students with
a DreamSpark Premium subscription on August 22, 2012, earlier than advertised. Relatively few changes were made from
the Release Preview to the final version; these included updated versions of its pre-loaded apps, the renaming of
Windows Explorer to File Explorer, the replacement of the Aero Glass theme from Windows Vista and 7 with a new flat
and solid-colored theme, and the addition of new background options for the Start screen, lock screen, and desktop.
Prior to its general availability on October 26, 2012, updates were released for some of Windows 8's bundled apps,
and a "General Availability Cumulative Update" (which included fixes to improve performance, compatibility, and battery
life) was released on Tuesday, October 9, 2012. Microsoft indicated that due to improvements to its testing infrastructure,
general improvements of this nature are to be released more frequently through Windows Update instead of being relegated
to OEMs and service packs only. Microsoft began an advertising campaign centered around Windows 8 and its Surface
tablet in October 2012, starting with its first television advertisement premiering on October 14, 2012. Microsoft's
advertising budget of US$1.5–1.8 billion was significantly larger than the US$200 million campaign used to promote
Windows 95. As part of its campaign, Microsoft set up 34 pop-up stores inside malls (primarily focusing on Surface),
provided training for retail employees in partnership with Intel, and collaborated with the electronics store chain
Best Buy to design expanded spaces to showcase devices. In an effort to make retail displays of Windows 8 devices
more "personal", Microsoft also developed a character known in English-speaking markets as "Allison Brown", whose
fictional profile (including personal photos, contacts, and emails) is also featured on demonstration units of Windows
8 devices. In May 2013, Microsoft launched a new television campaign for Windows 8 illustrating the capabilities
and pricing of Windows 8 tablets in comparison to the iPad, which featured the voice of Siri remarking on the iPad's
limitations in a parody of Apple's "Get a Mac" advertisements. On June 12, 2013, during game 1 of the 2013 Stanley
Cup Finals, Microsoft premiered the first ad in its "Windows Everywhere" campaign, which promoted Windows 8, Windows
Phone 8, and the company's suite of online services as an interconnected platform. New features and functionality
in Windows 8 include a faster startup through UEFI integration and the new "Hybrid Boot" mode (which hibernates the
Windows kernel on shutdown to speed up the subsequent boot), a new lock screen with a clock and notifications, and
the ability for enterprise users to create live USB versions of Windows (known as Windows To Go). Windows 8 also
adds native support for USB 3.0 devices, which allow for faster data transfers and improved power management with
compatible devices, and hard disk 4KB Advanced Format support, as well as support for near field communication to
facilitate sharing and communication between devices. Windows Explorer, which has been renamed File Explorer, now
includes a ribbon in place of the command bar. File operation dialog boxes have been updated to provide more detailed
statistics, the ability to pause file transfers, and improvements in the ability to manage conflicts when copying
files. A new "File History" function allows incremental revisions of files to be backed up to and restored from a
secondary storage device, while Storage Spaces allows users to combine different sized hard disks into virtual drives
and specify mirroring, parity, or no redundancy on a folder-by-folder basis. Task Manager has been redesigned, including
a new processes tab with the option to display fewer or more details of running applications and background processes,
a heat map using different colors indicating the level of resource usage, network and disk counters, grouping by
process type (e.g. applications, background processes and Windows processes), friendly names for processes and a
new option which allows users to search the web to find information about obscure processes. Additionally, the Blue
Screen of Death has been updated with a simpler and modern design with less technical information displayed. New
security features in Windows 8 include two new authentication methods tailored towards touchscreens (PINs and picture passwords), the addition of antivirus capabilities to Windows Defender (bringing it to parity with Microsoft Security Essentials), SmartScreen filtering integrated into Windows, and Family Safety, which offers parental controls that allow parents to monitor and manage their children's activities on a device with activity reports and safety controls.
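The PIN and picture-password methods described above replace typed passwords with shorter, touch-friendly secrets; the underlying safe-storage pattern for any such secret is to keep only a salted, hardened digest rather than the secret itself. The following is a minimal, illustrative Python sketch of that general pattern, not Microsoft's actual implementation (the function names are hypothetical):

```python
import hashlib
import hmac
import os

def enroll_pin(pin: str) -> tuple[bytes, bytes]:
    """Store a random salt plus a hardened digest of the PIN, never the PIN itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return salt, digest

def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest for the candidate PIN and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll_pin("1234")
print(verify_pin("1234", salt, digest))  # True
print(verify_pin("4321", salt, digest))  # False
```

The key-stretching step (PBKDF2 with many iterations) matters precisely because PINs are short: it slows down brute-force guessing against a stolen digest.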
Windows 8 also provides integrated system recovery through the new "Refresh" and "Reset" functions, including system recovery from a USB drive. Windows 8's first security patches were released on November 13, 2012; they contained three fixes deemed "critical" by the company. Windows 8 supports a feature of the UEFI specification known as "Secure Boot", which uses a public-key infrastructure to verify the integrity of the operating system and prevent unauthorized
programs such as bootkits from infecting the device's boot process. Some pre-built devices may be described as "certified"
by Microsoft; these must have secure boot enabled by default, and provide ways for users to disable or re-configure
the feature. ARM-based Windows RT devices must have secure boot permanently enabled. Windows 8 provides heavier integration
with online services from Microsoft and others. A user can now log in to Windows with a Microsoft account, which
can be used to access services and synchronize applications and settings between devices. Windows 8 also ships with
a client app for Microsoft's SkyDrive cloud storage service, which also allows apps to save files directly to SkyDrive.
A SkyDrive client for the desktop and File Explorer is not included in Windows 8, and must be downloaded separately.
Bundled multimedia apps are provided under the Xbox brand, including Xbox Music, Xbox Video, and the Xbox SmartGlass
companion for use with an Xbox 360 console. Games can integrate into an Xbox Live hub app, which also allows users
to view their profile and gamerscore. Other bundled apps provide the ability to link Flickr and Facebook. Due to
Facebook Connect service changes, Facebook support is disabled in all bundled apps effective June 8, 2015. Internet
Explorer 10 is included as both a desktop program and a touch-optimized app, and includes increased support for HTML5,
CSS3, and hardware acceleration. The Internet Explorer app does not support plugins or ActiveX components, but includes
a version of Adobe Flash Player that is optimized for touch and low power usage. Initially, Adobe Flash would only
work on sites included on a "Compatibility View" whitelist; however, after feedback from users and additional compatibility
tests, an update in March 2013 changed this behavior to use a smaller blacklist of sites with known compatibility
issues instead, allowing Flash to be used on most sites by default. The desktop version does not contain these limitations.
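The March 2013 change described above amounts to inverting the default decision: from "block Flash unless a site is approved" to "allow Flash unless a site is known to break". A small illustrative sketch of that inversion follows; the list contents and function names are hypothetical, not Internet Explorer's actual logic:

```python
# Hypothetical compatibility lists; the real lists were maintained by
# Microsoft and far larger.
FLASH_WHITELIST = {"example-video.test", "example-games.test"}
FLASH_BLACKLIST = {"broken-flash-site.test"}

def flash_allowed_pre_update(domain: str) -> bool:
    """Original behavior: Flash runs only on explicitly approved sites."""
    return domain in FLASH_WHITELIST

def flash_allowed_post_update(domain: str) -> bool:
    """Post-March-2013 behavior: Flash runs everywhere except known-bad sites."""
    return domain not in FLASH_BLACKLIST

print(flash_allowed_pre_update("random-site.test"))   # False
print(flash_allowed_post_update("random-site.test"))  # True
```

The practical difference is maintenance cost: a whitelist must enumerate the (huge) set of working sites, while a blacklist only needs the much smaller set of broken ones.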
Windows 8 also incorporates improved support for mobile broadband; the operating system can now detect the insertion
of a SIM card and automatically configure connection settings (including APNs and carrier branding), and reduce its
internet usage in order to conserve bandwidth on metered networks. Windows 8 also adds an integrated airplane mode
setting to globally disable all wireless connectivity as well. Carriers can also offer account management systems
through Windows Store apps, which can be automatically installed as a part of the connection process and offer usage
statistics on their respective tile. Windows 8 introduces a new style of application, Windows Store apps. According
to Microsoft developer Jensen Harris, these apps are to be optimized for touchscreen environments and are more specialized
than current desktop applications. Apps can run either in a full-screen mode, or be snapped to the side of a screen.
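The snapped mode just mentioned divided the screen into a fixed-width strip for one app and a fill region for the other. As a rough sketch of that geometry, assuming the commonly cited figures of a 320-pixel snapped strip and a 22-pixel splitter (neither stated in this article), the arithmetic also suggests why a 1366-pixel-wide screen leaves a 1024-pixel fill area:

```python
SNAP_WIDTH = 320  # assumed width of the snapped strip in the initial release
DIVIDER = 22      # assumed width of the splitter between the two regions

def snap_layout(screen_width: int, snap_side: str = "left"):
    """Return ((x, width) of snapped strip, (x, width) of fill region)."""
    fill_width = screen_width - SNAP_WIDTH - DIVIDER
    if snap_side == "left":
        return (0, SNAP_WIDTH), (SNAP_WIDTH + DIVIDER, fill_width)
    return (fill_width + DIVIDER, SNAP_WIDTH), (0, fill_width)

snapped, fill = snap_layout(1366)
print(snapped, fill)  # (0, 320) (342, 1024)
```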
Apps can provide toast notifications on screen or animate their tiles on the Start screen with dynamic content. Apps
can use "contracts"; a collection of hooks to provide common functionality that can integrate with other apps, including
search and sharing. Apps can also provide integration with other services; for example, the People app can connect
to a variety of different social networks and services (such as Facebook, Skype, and the People service), while the Photos
app can aggregate photos from services such as Facebook and Flickr. Windows Store apps run within a new set of APIs
known as Windows Runtime, which supports programming languages such as C, C++, Visual Basic .NET, C#, along with
HTML5 and JavaScript. Apps written in high-level languages can run on both the Intel and ARM versions of Windows, while those compiled to native code are not binary-compatible across the two architectures. Components may be compiled
as Windows Runtime Components, permitting consumption by all compatible languages. To ensure stability and security,
apps run within a sandboxed environment, and require permissions to access certain functionality, such as accessing
the Internet or a camera. Retail versions of Windows 8 are only able to install these apps through Windows Store—a
namesake distribution platform which offers both apps and listings for desktop programs certified for compatibility
with Windows 8. A method to sideload apps from outside Windows Store is available to devices running Windows 8 Enterprise
and joined to a domain; Windows 8 Pro and Windows RT devices that are not part of a domain can also sideload apps,
but only after special product keys are obtained through volume licensing. The term "Immersive app" had been used
internally by Microsoft developers to refer to the apps prior to the first official presentation of Windows 8, after
which they were referred to as "Metro-style apps" in reference to the Metro design language. The term was phased
out in August 2012; a Microsoft spokesperson denied rumors that the change was related to a potential trademark issue,
and stated that "Metro" was only a codename that would be replaced prior to Windows 8's release. Following these
reports, the terms "Modern UI-style apps", "Windows 8-style apps" and "Windows Store apps" began to be used by various
Microsoft documents and material to refer to the new apps. In an interview on September 12, 2012, Soma Somasegar
(vice president of Microsoft's development software division) confirmed that "Windows Store apps" would be the official
term for the apps. An MSDN page explaining the Metro design language uses the term "Modern design" to refer to the
language as a whole. Exceptions to the restrictions faced by Windows Store apps are given to web browsers. The user's
default browser can distribute a Metro-style web browser in the same package as the desktop version, which has access
to functionality unavailable to other apps, such as being able to permanently run in the background, use multiple
background processes, and use Windows API code instead of WinRT (allowing for code to be re-used with the desktop
version, while still taking advantage of features available to Windows Store apps, such as charms). Microsoft advertises
this exception privilege as "New experience enabled" (formerly "Metro-style enabled"). The developers of both Chrome
and Firefox committed to developing Metro-style versions of their browsers; while Chrome's "Windows 8 mode" uses
a full-screen version of the existing desktop interface, Firefox's version (which was first made available on the
"Aurora" release channel in September 2013) uses a touch-optimized interface inspired by the Android version of Firefox.
In October 2013, Chrome's app was changed to mimic the desktop environment used by Chrome OS. Mozilla has since cancelled development of its Firefox app for Windows 8, citing a lack of user adoption of the beta versions. Windows
8 introduces significant changes to the operating system's user interface, many of which are aimed at improving its
experience on tablet computers and other touchscreen devices. The new user interface is based on Microsoft's Metro
design language, and uses a Start screen similar to that of Windows Phone 7 as the primary means of launching applications.
The Start screen displays a customizable array of tiles linking to various apps and desktop programs, some of which
can display constantly updated information and content through "live tiles". As a form of multi-tasking, apps can
be snapped to the side of a screen. Alongside the traditional Control Panel, a new simplified and touch-optimized
settings app known as "PC Settings" is used for basic configuration and user settings. It does not include many of
the advanced options still accessible from the normal Control Panel. A vertical toolbar known as the charms (accessed
by swiping from the right edge of a touchscreen, or pointing the cursor at hotspots in the right corners of a screen)
provides access to system and app-related functions, such as search, sharing, device management, settings, and a
Start button. The traditional desktop environment for running desktop applications is accessed via a tile on the
Start screen. The Start button on the taskbar from previous versions of Windows has been converted into a hotspot
in the lower-left corner of the screen, which displays a large tooltip displaying a thumbnail of the Start screen.
Swiping from the left edge of a touchscreen or clicking in the top-left corner of the screen allows one to switch
between apps and the desktop. Pointing the cursor in the top-left corner of the screen and moving down reveals a thumbnail
list of active apps. Aside from the removal of the Start button and the replacement of the Aero Glass theme with
a flatter and solid-colored design, the desktop interface on Windows 8 is similar to that of Windows 7. Several notable
features have been removed in Windows 8, beginning with the traditional Start menu. Support for playing DVD-Video
was removed from Windows Media Player due to the cost of licensing the necessary decoders (especially for devices
which do not include optical disc drives at all) and the prevalence of online streaming services. For the same reasons,
Windows Media Center is not included by default on Windows 8, but Windows Media Center and DVD playback support can
be purchased in the "Pro Pack" (which upgrades the system to Windows 8 Pro) or "Media Center Pack" add-on for Windows
8 Pro. As with prior versions, third-party DVD player software can still be used to enable DVD playback. Backup and
Restore, the backup component of Windows, is deprecated. It still ships with Windows 8 and continues to work on preset
schedules, but is pushed to the background and can only be accessed through a Control Panel applet called "Windows
7 File Recovery".:76 Shadow Copy, a component of Windows Explorer that once saved previous versions of changed files,
no longer protects local files and folders. It can only access previous versions of shared files stored on a Windows
Server computer. The subsystem on which these components worked, however, is still available for other software
to use. Microsoft released minimum hardware requirements for tablet and laplet devices to be "certified" for Windows
8, and defined a convertible form factor as a standalone device that combines the PC, display and rechargeable power
source with a mechanically attached keyboard and pointing device in a single chassis. A convertible can be transformed
into a tablet where the attached input devices are hidden or removed leaving the display as the only input mechanism.
On March 12, 2013, Microsoft amended its certification requirements to only require that screens on tablets have
a minimum resolution of 1024×768 (down from the previous 1366×768). The amended requirement is intended to allow
"greater design flexibility" for future products. Windows 8 is available in three different editions, of which the
lowest version, branded simply as Windows 8, and Windows 8 Pro, were sold at retail in most countries, and as pre-loaded
software on new computers. Each edition of Windows 8 includes all of the capabilities and features of the edition
below it, and add additional features oriented towards their market segments. For example, Pro added BitLocker, Hyper-V,
the ability to join a domain, and the ability to install Windows Media Center as a paid add-on. Users of Windows
8 can purchase a "Pro Pack" license that upgrades their system to Windows 8 Pro through Add features to Windows.
This license also includes Windows Media Center. Windows 8 Enterprise contains additional features aimed towards
business environments, and is only available through volume licensing. A port of Windows 8 for ARM architecture,
Windows RT, is marketed as an edition of Windows 8, but was only included as pre-loaded software on devices specifically
developed for it. Windows 8 was distributed as a retail box product on DVD, and through a digital download that could
be converted into DVD or USB install media. As part of a launch promotion, Microsoft offered Windows 8 Pro upgrades
at a discounted price of US$39.99 online, or $69.99 for a retail box, from its launch until January 31, 2013; afterward the Windows 8 price was US$119.99 and the Pro price US$199.99. Those who purchased new PCs pre-loaded with Windows
7 Home Basic, Home Premium, Professional, or Ultimate between June 2, 2012 and January 31, 2013 could digitally purchase
a Windows 8 Pro upgrade for US$14.99. Several PC manufacturers offered rebates and refunds on Windows 8 upgrades
obtained through the promotion on select models, such as Hewlett-Packard (in the U.S. and Canada on select models),
and Acer (in Europe on selected Ultrabook models). During these promotions, the Windows Media Center add-on for Windows
8 Pro was also offered for free. Unlike previous versions of Windows, Windows 8 was distributed at retail in "Upgrade"
licenses only, which require an existing version of Windows to install. The "full version software" SKU, which was
more expensive but could be installed on computers without an existing eligible operating system, was discontinued. In lieu of the full version, a specialized "System Builder" SKU was introduced. The "System Builder" SKU replaced the original
equipment manufacturer (OEM) SKU, which was only allowed to be used on PCs meant for resale, but added a "Personal
Use License" exemption that officially allowed its purchase and personal use by users on homebuilt computers. Retail
distribution of Windows 8 has since been discontinued in favor of Windows 8.1. Unlike Windows 8, Windows 8.1 is available as "full version software" both at retail and as an online download, which does not require a previous version of Windows in order to be installed. Pricing for these new copies remains identical. With the retail release returning to full version
software for Windows 8.1, the "Personal Use License" exemption was removed from the OEM SKU, meaning that end users
building their own PCs for personal use must use the full retail version in order to satisfy the Windows 8.1 licensing
requirements. Windows 8.1 with Bing is a special OEM-specific SKU of Windows 8.1 subsidized by Microsoft's Bing search
engine. The three desktop editions of Windows 8 support 32-bit and 64-bit architectures; retail copies of Windows
8 include install DVDs for both architectures, while the online installer automatically installs the version corresponding
with the architecture of the system's existing Windows installation. The 32-bit version runs on CPUs compatible with
the third generation of the x86 architecture (known as IA-32) or newer, and can run 32-bit and 16-bit applications, although 16-bit support must be enabled first. (16-bit applications were developed for CPUs compatible with the second generation of the x86 architecture, first conceived in 1978; Microsoft started moving away from this architecture after Windows 95.) Windows RT, the only edition
of Windows 8 for systems with ARM processors, only supports applications included with the system (such as a special
version of Office 2013), supplied through Windows Update, or Windows Store apps, to ensure that the system only runs
applications that are optimized for the architecture. Windows RT does not support running IA-32 or x64 applications.
Windows Store apps can either support both the x86 and ARM architectures, or be compiled to support a specific architecture.
Following the unveiling of Windows 8, Microsoft faced criticism (particularly from free software supporters) for
mandating that devices receiving its optional certification for Windows 8 have secure boot enabled by default using
a key provided by Microsoft. Concerns were raised that secure boot could prevent or hinder the use of alternate operating
systems such as Linux. In a post discussing secure boot on the Building Windows 8 blog, Microsoft developer Tony
Mangefeste indicated that vendors would provide means to customize secure boot, stating that "At the end of the day,
the customer is in control of their PC. Microsoft's philosophy is to provide customers with the best experience first,
and allow them to make decisions themselves." Microsoft's certification guidelines for Windows 8 ultimately revealed
that vendors would be required to provide means for users to re-configure or disable secure boot in their device's
UEFI firmware. It also revealed that ARM devices (Windows RT) would be required to have secure boot permanently enabled,
with no way for users to disable it. However, Tom Warren of The Verge noted that other vendors have implemented similar
hardware restrictions on their own ARM-based tablet and smartphone products (including those running Microsoft's
own Windows Phone platform), but still argued that Microsoft should "keep a consistent approach across ARM and x86,
though, not least because of the number of users who'd love to run Android alongside Windows 8 on their future tablets."
No mandate is made regarding the installation of third-party certificates that would enable running alternative programs.
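The secure boot mechanism discussed above reduces to a simple rule: the firmware refuses to run any boot-stage component whose identity is not vouched for by keys it trusts. The following is a heavily simplified Python sketch of that rule, substituting a trusted-digest database for the real RSA signature checks and key hierarchy of UEFI Secure Boot:

```python
import hashlib

# Simplified stand-in for the firmware's key/signature databases; real
# Secure Boot verifies RSA signatures against keys enrolled in UEFI firmware.
TRUSTED_DIGESTS = set()

def enroll(component: bytes) -> None:
    """Firmware vendor (or the user, on x86 devices) marks a component as trusted."""
    TRUSTED_DIGESTS.add(hashlib.sha256(component).hexdigest())

def verify_boot_chain(components: list[bytes]) -> bool:
    """Refuse to boot if any stage of the chain is not in the trusted database."""
    return all(hashlib.sha256(c).hexdigest() in TRUSTED_DIGESTS
               for c in components)

enroll(b"bootloader v1")
enroll(b"kernel v1")
print(verify_boot_chain([b"bootloader v1", b"kernel v1"]))        # True
print(verify_boot_chain([b"bootloader v1", b"bootkit payload"]))  # False
```

This also illustrates the controversy: whoever controls the trusted database controls which operating systems can boot, which is why the certification requirements for user-accessible key configuration on x86 (and the lack of it on ARM) mattered to critics.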
Several notable video game developers criticized Microsoft for making its Windows Store a closed platform subject
to its own regulations, as it conflicted with their view of the PC as an open platform. Markus "Notch" Persson (creator
of the indie game Minecraft), Gabe Newell (co-founder of Valve Corporation and developer of software distribution
platform Steam), and Rob Pardo from Activision Blizzard voiced concern about the closed nature of the Windows Store.
However, Tom Warren of The Verge stated that Microsoft's addition of the Store was simply responding to the success
of both Apple and Google in pursuing the "curated application store approach." Reviews of the various editions of
Windows 8 have been mixed. Tom Warren of The Verge said that although Windows 8's emphasis on touch computing was
significant and risked alienating desktop users, a "tablet PC with Windows 8 makes an iPad feel immediately out of
date" due to the capabilities of the operating system's hybrid model and increased focus on cloud services. David
Pierce of The Verge described Windows 8 as "the first desktop operating system that understands what a computer is
supposed to do in 2012" and praised Microsoft's "no compromise" approach and the operating system's emphasis on Internet
connectivity and cloud services. Pierce also considered the Start Screen to be a "brilliant innovation for desktop
computers" when compared with "folder-littered desktops on every other OS" because it allows users to interact with
dynamic information. In contrast, an ExtremeTech article said it was Microsoft "flailing" and a review in PC Magazine
condemned the Metro-style user interface. Some of the included apps in Windows 8 were considered to be basic and
lacking in functionality, but the Xbox apps were praised for their promotion of a multi-platform entertainment experience.
Other improvements and features (such as File History, Storage Spaces, and the updated Task Manager) were also regarded
as positive changes. Peter Bright of Ars Technica wrote that while its user interface changes may overshadow them,
Windows 8's improved performance, updated file manager, new storage functionality, expanded security features, and
updated Task Manager were still positive improvements for the operating system. Bright also said that Windows 8's
duality towards tablets and traditional PCs was an "extremely ambitious" aspect of the platform as well, but criticized
Microsoft for emulating Apple's model of a closed distribution platform when implementing the Windows Store. The
interface of Windows 8 has been the subject of mixed reaction. Bright wrote that its system of hot corners and edge
swiping "wasn't very obvious" due to the lack of instructions provided by the operating system on the functions accessed
through the user interface, even with the video tutorial added in the RTM release (which only instructed users to point
at corners of the screen or swipe from its sides). Despite this "stumbling block", Bright said that Windows 8's interface
worked well in some places, but began to feel incoherent when switching between the "Metro" and desktop environments,
sometimes through inconsistent means. Tom Warren of The Verge wrote that the new interface was "as stunning as it
is surprising", contributing to an "incredibly personal" experience once it is customized by the user, but had a
steep learning curve, and was awkward to use with a keyboard and mouse. He noted that while forcing all users to
use the new touch-oriented interface was a risky move for Microsoft as a whole, it was necessary in order to push
development of apps for the Windows Store. Others, such as Adrian Kingsley-Hughes from ZDNet, considered the interface
to be "clumsy and impractical" due to its inconsistent design (going as far as considering it "two operating systems
unceremoniously bolted together"), and concluded that "Windows 8 wasn't born out of a need or demand; it was born
out of a desire on Microsoft's part to exert its will on the PC industry and decide to shape it in a direction—touch
and tablets—that allows it to compete against, and remain relevant in the face of Apple's iPad." However, according
to research firm NPD, sales of devices running Windows in the United States declined 21 percent compared to
the same time period in 2011. As the holiday shopping season wrapped up, Windows 8 sales continued to lag, even as
Apple reported brisk sales. The market research firm IDC reported an overall drop in PC sales for the quarter, and
said the drop may have been partly due to consumer reluctance to embrace the new features of the OS and poor support
from OEMs for these features. This capped the first year of declining PC sales in the Asia-Pacific region, as consumers
bought more mobile devices than Windows PCs. Windows 8 surpassed Windows Vista in market share with a 5.1% usage
rate according to numbers posted in July 2013 by Net Applications, with usage on a steady upward trajectory. However,
intake of Windows 8 still lags behind that of Windows Vista and Windows 7 at the same point in their release cycles.
Windows 8's tablet market share has also been growing steadily, with 7.4% of tablets running Windows in Q1 2013 according
to Strategy Analytics, up from nothing just a year before. However, this is still well below Android and iOS, which
posted 43.4% and 48.2% market share respectively, although both operating systems have been on the market much longer
than Windows 8. Strategy Analytics also noted "a shortage of top tier apps" for Windows tablets despite Microsoft's strategy of paying developers to create apps for the operating system (in addition to Windows Phone). In March
2013, Microsoft also amended its certification requirements to allow tablets to use the 1024×768 resolution as a
minimum; this change is expected to allow the production of certified Windows 8 tablets in smaller form factors—a
market which is currently dominated by Android-based tablets. Despite the reaction of industry experts, Microsoft
reported that it had sold 100 million licenses in the first six months. This matched sales of Windows 7 over a
similar period. This statistic includes shipments to channel warehouses which now need to be sold in order to make
way for new shipments. In February 2014, Bloomberg reported that Microsoft would be lowering the price of Windows
8 licenses by 70% for devices that retail under US$250; alongside the announcement that an update to the operating
system would allow OEMs to produce devices with as little as 1 GB of RAM and 16 GB of storage, critics felt that
these changes would help Windows compete against Linux-based devices in the low-end market, particularly those running
Chrome OS. Microsoft had similarly cut the price of Windows XP licenses to compete against the early waves of Linux-based
netbooks. Reports also indicated that Microsoft was planning to offer cheaper Windows 8 licenses to OEMs in exchange
for setting Internet Explorer's default search engine to Bing. Some media outlets falsely reported that the SKU associated
with this plan, "Windows 8.1 with Bing", was a variant which would be a free or low-cost version of Windows 8 for
consumers using older versions of Windows. On April 2, 2014, Microsoft ultimately announced that it would be removing
license fees entirely for devices with screens smaller than 9 inches, and officially confirmed the rumored "Windows
8.1 with Bing" OEM SKU on May 23, 2014. In May 2014, the Government of China banned the internal purchase of Windows
8-based products under government contracts requiring "energy-efficient" devices. The Xinhua News Agency claimed
that Windows 8 was being banned in protest of Microsoft's support lifecycle policy and the end of support for Windows
XP (which, as of January 2014, had a market share of 49% in China), as the government "obviously cannot ignore the
risks of running OS [sic] without guaranteed technical support." However, Ni Guangnan of the Chinese Academy of Sciences
had also previously warned that Windows 8 could allegedly expose users to surveillance by the United States government
due to its heavy use of internet-based services. In June 2014, state broadcaster China Central Television (CCTV)
broadcast a news story further characterizing Windows 8 as a threat to national security. The story featured an interview
with Ni Guangnan, who stated that operating systems could aggregate "sensitive user information" that could be used
to "understand the conditions and activities of our national economy and society", and alleged that per documents
leaked by Edward Snowden, the U.S. government had worked with Microsoft to retrieve encrypted information. Yang Min,
a computer scientist at Fudan University, also stated that "the security features of Windows 8 are basically to the
benefit of Microsoft, allowing them control of the users' data, and that poses a big challenge to the national strategy
for information security." Microsoft denied the claims in a number of posts on the Chinese social network Sina Weibo,
which stated that the company had never "assisted any government in an attack of another government or clients" or
provided client data to the U.S. government, never "provided any government the authority to directly visit" or placed
any backdoors in its products and services, and that it had never concealed government requests for client data.
An upgrade to Windows 8 known as Windows 8.1 was officially announced by Microsoft on May 14, 2013. Following a presentation
devoted to the upgrade at Build 2013, a public beta version of the upgrade was released on June 26, 2013. Windows
8.1 was released to OEM hardware partners on August 27, 2013, and released publicly as a free download through Windows
Store on October 17, 2013. Volume license customers and subscribers to MSDN Plus and TechNet Plus were initially
unable to obtain the RTM version upon its release; a spokesperson said the policy was intended to allow Microsoft
to work with OEMs "to ensure a quality experience at general availability." However, after criticism, Microsoft reversed
its decision and released the RTM build on MSDN and TechNet on September 9, 2013. The upgrade addressed a number
of criticisms faced by Windows 8 upon its release, with additional customization options for the Start screen, the
restoration of a visible Start button on the desktop, the ability to snap up to four apps on a single display, and
the ability to boot to the desktop instead of the Start screen. Windows 8's stock apps were also updated, a new Bing-based
unified search system was added, SkyDrive was given deeper integration with the operating system, and a number of
new stock apps, along with a tutorial, were added. Windows 8.1 also added support for 3D printing, Miracast media
streaming, NFC printing, and Wi-Fi Direct.
At no more than 200 kilometres (120 mi) north to south and 130 kilometres (81 mi) east to west, Swaziland is one of the smallest
countries in Africa. Despite its size, however, its climate and topography are diverse, ranging from a cool and mountainous
highveld to a hot and dry lowveld. The population is primarily ethnic Swazis whose language is siSwati. They established
their kingdom in the mid-18th century under the leadership of Ngwane III; the present boundaries were drawn up in
1881. After the Anglo-Boer War, Swaziland was a British protectorate from 1903 until 1967. It regained its independence
on 6 September 1968. Swaziland is a developing country with a small economy. Its GDP per capita of $9,714 classifies
it as a lower-middle-income country. As a member of the Southern African Customs Union (SACU) and
Common Market for Eastern and Southern Africa (COMESA), its main local trading partner is South Africa. Swaziland's
currency, the lilangeni, is pegged to the South African rand. Swaziland's major overseas trading partners are the
United States and the European Union. The majority of the country's employment is provided by its agricultural and
manufacturing sectors. Swaziland is a member of the Southern African Development Community (SADC), the African Union,
the Commonwealth of Nations and the United Nations. Swaziland derives its name from a later king named Mswati II.
KaNgwane, named for Ngwane III, is an alternative name for Swaziland; the surname of its royal house remains Nkhosi
Dlamini. Nkhosi literally means "king". Mswati II was the greatest of the fighting kings of Swaziland, and he greatly
extended the area of the country to twice its current size. The Emakhandzambili clans were initially incorporated
into the kingdom with wide autonomy, often including grants of special ritual and political status. The extent of
their autonomy however was drastically curtailed by Mswati, who attacked and subdued some of them in the 1850s. In
1903, after British victory in the Anglo-Boer war, Swaziland became a British protectorate. Much of its early administration
(for example, postal services) was carried out from South Africa until 1906, when the Transvaal colony was granted
self-government. Following this, Swaziland was partitioned into European and non-European (or native reserves) areas
with the former comprising two-thirds of the total land. Sobhuza's official coronation took place in December 1921, after
the regency of Labotsibeni; in 1922 he led an unsuccessful deputation to the Privy Council in London regarding
the issue of the land. The constitution for independent Swaziland was promulgated by Britain in November 1963, under
the terms of which legislative and executive councils were established. This development was opposed by the Swazi
National Council (liqoqo). Despite such opposition, elections took place and the first Legislative Council of Swaziland
was constituted on 9 September 1964. Changes to the original constitution proposed by the Legislative Council were
accepted by Britain and a new constitution providing for a House of Assembly and Senate was drawn up. Elections under
this constitution were held in 1967. Following the elections of 1973, the constitution of Swaziland was suspended
by King Sobhuza II who thereafter ruled the country by decree until his death in 1982. At this point Sobhuza II had
ruled Swaziland for 61 years, making him the longest-reigning monarch in history. A regency followed his death, with
Queen Regent Dzeliwe Shongwe being head of state until 1984 when she was removed by Liqoqo and replaced by Queen
Mother Ntfombi Tfwala. Mswati III, the son of Ntfombi, was crowned king on 25 April 1986 as King and Ingwenyama of
Swaziland. The king appoints the prime minister from the legislature and also appoints a minority of legislators
to both chambers of the Libandla (parliament) with help from an advisory council. The king is allowed by the constitution
to appoint some members to parliament for special interests. These special interests are citizens who might have
been left out by the electorate during the course of elections or did not enter as candidates. This is done to balance
views in parliament. Special-interest representatives may be chosen on grounds of gender, race or disability, or drawn from the business community, civic
society, scholars, chiefs and so on. The Swazi bicameral Parliament or Libandla consists of the Senate (30 seats;
10 members appointed by the House of Assembly and 20 appointed by the monarch; to serve five-year terms) and the
House of Assembly (65 seats; 10 members appointed by the monarch and 55 elected by popular vote; to serve five-year
terms). The elections are held every five years after dissolution of parliament by the king. The last elections were
held on 20 September 2013. The balloting is done on a non-party basis in all categories. All election procedures
are overseen by the Elections and Boundaries Commission. In 2005, the constitution was put into effect. There is
still much debate in the country about the constitutional reforms. From the early seventies, there was active resistance
to the royal hegemony. Despite complaints from progressive formations, support for the monarchy and the current political
system remains strong among the majority of the population.[citation needed] Submissions were made by citizens around
the country to commissions, including the constitutional draft committee, indicating that they would prefer to maintain
the current situation. Nominations take place at the chiefdoms. On the day of nomination, the name of the nominee
is raised by a show of hand and the nominee is given an opportunity to indicate whether he or she accepts the nomination.
If he or she accepts it, he or she must be supported by at least ten members of that chiefdom. The nominations are
for the position of Member of Parliament, Constituency Headman (Indvuna) and the Constituency Executive Committee
(Bucopho). The minimum number of nominees is four and the maximum is ten. As noted above, there are 55 tinkhundla
in Swaziland and each elects one representative to the House of Assembly of Swaziland. Each inkhundla has a development
committee (bucopho) elected from the various constituency chiefdoms in its area for a five-year term. Bucopho bring
to the inkhundla all matters of interest and concern to their various chiefdoms, and take back to the chiefdoms the
decisions of the inkhundla. The chairman of the bucopho is elected at the inkhundla and is called indvuna ye nkhundla.
A small, landlocked kingdom, Swaziland is bordered to the north, west and south by the Republic of South Africa, and
to the east by Mozambique. Swaziland has a land area of 17,364 km2 and four separate geographical regions,
which run from north to south and are determined by altitude. Swaziland is located at approximately 26°30'S, 31°30'E.
Swaziland has a wide variety of landscapes, from the mountains along the Mozambican border to savannas in the east
and rain forest in the northwest. Several rivers flow through the country, such as the Great Usutu River. About 75%
of the population is employed in subsistence agriculture upon Swazi Nation Land (SNL). In contrast with the commercial
farms, Swazi Nation Land suffers from low productivity and investment. This dual nature of the Swazi economy, with
high productivity in textile manufacturing and in the industrialised agricultural TDLs on the one hand, and declining
productivity in subsistence agriculture (on SNL) on the other, may well explain the country's overall low growth, high
inequality and unemployment. Economic growth in Swaziland has lagged behind that of its neighbours. Real GDP growth
since 2001 has averaged 2.8%, nearly 2 percentage points lower than growth in other Southern African Customs Union
(SACU) member countries. Low agricultural productivity in the SNLs, repeated droughts, the devastating effect of
HIV/AIDS and an overly large and inefficient government sector are likely contributing factors. Swaziland's public
finances deteriorated in the late 1990s following sizeable surpluses a decade earlier. A combination of declining
revenues and increased spending led to significant budget deficits. The considerable spending did not lead to more
growth and did not benefit the poor. Much of the increased spending has gone to current expenditures related to wages,
transfers, and subsidies. The wage bill today constitutes over 15% of GDP and 55% of total public spending; these
are some of the highest levels on the African continent. The recent rapid growth in SACU revenues has, however, reversed
the fiscal situation, and sizeable surpluses have been recorded since 2006. SACU revenues today account for over 60% of
total government revenues. On the positive side, the external debt burden has declined markedly over the last 20
years, and domestic debt is almost negligible; external debt as a percent of GDP was less than 20% in 2006. The Swazi
economy is very closely linked to the economy of South Africa, from which it receives over 90% of its imports and
to which it sends about 70% of its exports. Swaziland's other key trading partners are the United States and the
EU, from whom the country has received trade preferences for apparel exports (under the African Growth and Opportunity
Act – AGOA – to the US) and for sugar (to the EU). Under these agreements, both apparel and sugar exports did well,
with rapid growth and a strong inflow of foreign direct investment. Textile exports grew by over 200% between 2000
and 2005, and sugar exports increased by more than 50% over the same period. Swaziland's currency is pegged to the
South African rand, subordinating Swaziland's monetary policy to South Africa's. Customs duties from the Southern African
Customs Union, which can amount to as much as 70% of government revenue in a given year, and worker remittances from South
Africa substantially supplement domestically earned income. Swaziland is not poor enough to merit an IMF program;
however, the country is struggling to reduce the size of the civil service and control costs at public enterprises.
The government is trying to improve the atmosphere for foreign direct investment. 83% of the total population adheres
to Christianity, making it the most common religion in Swaziland. Anglican, Protestant and indigenous African churches,
including African Zionist, constitute the majority of the Christians (40%), followed by Roman Catholicism at 20%
of the population. On 18 July 2012, Ellinah Wamukoya was elected Anglican Bishop of Swaziland, becoming the first
woman to be a bishop in Africa. 15% of the population follows traditional religions; other non-Christian religions
practised in the country include Islam (1%), the Bahá'í Faith (0.5%), and Hinduism (0.2%). There are 14 Jewish families.
In 2004, the Swaziland government acknowledged for the first time that it suffered an AIDS crisis, with 38.8% of
tested pregnant women infected with HIV (see AIDS in Africa). The then Prime Minister Themba Dlamini declared a humanitarian
crisis due to the combined effect of drought, land degradation, increased poverty, and HIV/AIDS. According to the
2011 UNAIDS Report, Swaziland is close to achieving universal access to HIV/AIDS treatment, defined as 80% coverage
or greater. Estimates of treatment coverage range from 70% to 80% of those infected. Life expectancy fell from
61 years in 2000 to 32 years in 2009. Tuberculosis is also a significant problem, with an 18% mortality rate. Many
patients have a multi-drug resistant strain, and 83% are co-infected with HIV. Education in Swaziland comprises
pre-school education for infants; primary, secondary and high school education for general education and training
(GET); and universities and colleges at tertiary level. Pre-school education is usually for children aged five or younger;
after that, students can enroll in a primary school anywhere in the country. In Swaziland, early childhood care
and education (ECCE) centres are in the form of preschools or neighbourhood care points (NCPs). In the country 21.6%
of preschool age children have access to early childhood education. The secondary and high school education system
in Swaziland is a five-year programme divided into three years of junior secondary and two years of senior secondary. There
is an external public examination (Junior Certificate) at the end of the junior secondary that learners have to pass
to progress to the senior secondary level. The Examination Council of Swaziland (ECOS) administers this examination.
At the end of the senior secondary level, learners sit for a public examination, the Swaziland General Certificate
of Secondary Education (SGCSE) and the International General Certificate of Secondary Education (IGCSE), which are accredited
by Cambridge International Examinations (CIE). A few schools offer the Advanced Studies (AS) programme in their
curriculum. The University of Swaziland, Southern African Nazarene University and Swaziland Christian University are
the institutions that offer university education in the country. A campus of Limkokwing University of Creative Technology
can be found at Sidvwashini, a suburb of the capital Mbabane. There are some teaching and nursing assistant colleges
around the country. Ngwane Teacher's College and William Pitcher College are the country's teaching colleges. The
Good Shepherd Hospital in Siteki is home to the College for Nursing Assistants. The main centre for technical training
in Swaziland is the Swaziland College of Technology, which is slated to become a full university. It aims to provide
and facilitate high-quality training and learning in technology and business studies in collaboration with the
commercial, industrial and public sectors. Other technical and vocational institutions are the Gwamile Vocational
and Commercial Training Institute located in Matsapha and the Manzini Industrial and Training Centre (MITC) in Manzini.
Other vocational institutions include Nhlangano Agricultural Skills Training Center and Siteki Industrial Training
Centre. In addition to these institutions, Swaziland also has the Swaziland Institute of Management and Public Administration
(SIMPA) and Institute of Development Management (IDM). SIMPA is a government owned management and development institute
and IDM is a regional organisation in Botswana, Lesotho and Swaziland that provides training, consultancy, and research
in management. The Mananga management centre, located at Ezulwini, was established in 1972 as the Mananga Agricultural
Management Centre, an international management development centre catering for middle and senior managers.
The Sangoma is a traditional diviner chosen by the ancestors of that particular family. The training of the Sangoma
is called "kwetfwasa". At the end of the training, a graduation ceremony takes place where all the local sangoma
come together for feasting and dancing. The diviner is consulted for various reasons, such as the cause of sickness
or even death. His diagnosis is based on "kubhula", a process of communication, through trance, with supernatural
powers. The Inyanga (a medical and pharmaceutical specialist in western terms) possesses the bone throwing skill
("kushaya ematsambo") used to determine the cause of the sickness. Swaziland's most well-known cultural event is
the annual Umhlanga Reed Dance. In the eight-day ceremony, girls cut reeds and present them to the queen mother and
then dance. (There is no formal competition.) It is done in late August or early September. Only childless, unmarried
girls can take part. The aims of the ceremony are to preserve girls' chastity, to provide tribute labour for the queen
mother, and to encourage solidarity by working together. The royal family appoints a commoner maiden to be "induna"
(captain) of the girls and she announces over the radio the dates of the ceremony. She will be an expert dancer and
knowledgeable on royal protocol. One of the King's daughters will be her counterpart. The Reed Dance today is not
an ancient ceremony but a development of the old "umchwasho" custom. In "umchwasho", all young girls were placed
in a female age-regiment. If any girl became pregnant outside of marriage, her family paid a fine of one cow to the
local chief. After a number of years, when the girls had reached a marriageable age, they would perform labour service
for the Queen Mother, ending with dancing and feasting. The country was under the chastity rite of "umchwasho" until
19 August 2005.
The English word "translation" derives from the Latin translatio (which itself comes from trans- and from fero, the supine
form of which is latum—together meaning "a carrying across" or "a bringing across"). The modern Romance languages
use equivalents of the English term "translation" that are derived from that same Latin source or from the alternative
Latin traducere ("to lead across" or "to bring across"). The Slavic and Germanic languages (except in the case of
the Dutch equivalent, "vertaling"—a "re-language-ing") likewise use calques of these Latin sources. Despite occasional
theoretical diversity, the actual practice of translation has hardly changed since antiquity. Except for some extreme
metaphrasers in the early Christian period and the Middle Ages, and adapters in various periods (especially pre-Classical
Rome, and the 18th century), translators have generally shown prudent flexibility in seeking equivalents — "literal"
where possible, paraphrastic where necessary — for the original meaning and other crucial "values" (e.g., style,
verse form, concordance with musical accompaniment or, in films, with speech articulatory movements) as determined
from context. In general, translators have sought to preserve the context itself by reproducing the original order
of sememes, and hence word order — when necessary, reinterpreting the actual grammatical structure, for example,
by shifting from active to passive voice, or vice versa. The grammatical differences between "fixed-word-order" languages
(e.g. English, French, German) and "free-word-order" languages (e.g., Greek, Latin, Polish, Russian) have been no
impediment in this regard. The particular syntax (sentence-structure) characteristics of a text's source language
are adjusted to the syntactic requirements of the target language. Generally, the greater the contact and exchange
that have existed between two languages, or between those languages and a third one, the greater is the ratio of
metaphrase to paraphrase that may be used in translating among them. However, due to shifts in ecological niches
of words, a common etymology is sometimes misleading as a guide to current meaning in one or the other language.
For example, the English actual should not be confused with the cognate French actuel ("present", "current"), the
Polish aktualny ("present", "current," "topical," "timely," "feasible"), the Swedish aktuell ("topical", "presently
of importance"), the Russian актуальный ("urgent", "topical") or the Dutch actueel. The translator's role as a bridge
for "carrying across" values between cultures has been discussed at least since Terence, the 2nd-century-BCE Roman
adapter of Greek comedies. The translator's role is, however, by no means a passive, mechanical one, and so has also
been compared to that of an artist. The main ground seems to be the concept of parallel creation found in critics
such as Cicero. Dryden observed that "Translation is a type of drawing after life..." Comparison of the translator
with a musician or actor goes back at least to Samuel Johnson’s remark about Alexander Pope playing Homer on a flageolet,
while Homer himself used a bassoon. Though earlier approaches to translation are less commonly used today, they retain
importance when dealing with their products, as when historians view ancient or medieval records to piece together
events which took place in non-Western or pre-Western environments. Also, though heavily influenced by Western traditions
and practiced by translators taught in Western-style educational systems, Chinese and related translation traditions
retain some theories and philosophies unique to the Chinese tradition. Translation of material into Arabic expanded
after the creation of Arabic script in the 5th century, and gained great importance with the rise of Islam and Islamic
empires. Arab translation initially focused primarily on politics, rendering Persian, Greek, even Chinese and Indic
diplomatic materials into Arabic. It later focused on translating classical Greek and Persian works, as well as some
Chinese and Indian texts, into Arabic for scholarly study at major Islamic learning centers, such as the Al-Karaouine
(Fes, Morocco), Al-Azhar (Cairo, Egypt), and the Al-Nizamiyya of Baghdad. In terms of theory, Arabic translation
drew heavily on earlier Near Eastern traditions as well as more contemporary Greek and Persian traditions. Arabic
translation efforts and techniques are important to Western translation traditions due to centuries of close contacts
and exchanges. Especially after the Renaissance, Europeans began more intensive study of Arabic and Persian translations
of classical works as well as scientific and philosophical works of Arab and oriental origins. Arabic and, to a lesser
degree, Persian became important sources of material and perhaps of techniques for revitalized Western traditions,
which in time would overtake the Islamic and oriental traditions. The movement to translate English and European
texts transformed the Arabic and Ottoman Turkish languages, and new words, simplified syntax, and directness came
to be valued over the previous convolutions. Educated Arabs and Turks in the new professions and the modernized civil
service expressed skepticism, writes Christopher de Bellaigue, "with a freedom that is rarely witnessed today....
No longer was legitimate knowledge defined by texts in the religious schools, interpreted for the most part with
stultifying literalness. It had come to include virtually any intellectual production anywhere in the world." One
of the neologisms that, in a way, came to characterize the infusion of new ideas via translation was "darwiniya",
or "Darwinism". After World War I, when Britain and France divided up the Middle East's countries, apart from Turkey,
between them, pursuant to the Sykes-Picot agreement—in violation of solemn wartime promises of postwar Arab autonomy—there
came an immediate reaction: the Muslim Brotherhood emerged in Egypt, the House of Saud took over the Hijaz, and regimes
led by army officers came to power in Iran and Turkey. "[B]oth illiberal currents of the modern Middle East," writes
de Bellaigue, "Islamism and militarism, received a major impetus from Western empire-builders." As often happens
in countries undergoing social crisis, the aspirations of the Muslim world's translators and modernizers, such as
Muhammad Abduh, largely had to yield to retrograde currents. Many non-transparent-translation theories draw on concepts
from German Romanticism, the most obvious influence being the German theologian and philosopher Friedrich Schleiermacher.
In his seminal lecture "On the Different Methods of Translation" (1813) he distinguished between translation methods
that move "the writer toward [the reader]", i.e., transparency, and those that move the "reader toward [the author]",
i.e., an extreme fidelity to the foreignness of the source text. Schleiermacher favored the latter approach; he was
motivated, however, not so much by a desire to embrace the foreign, as by a nationalist desire to oppose France's
cultural domination and to promote German literature. Comparison of a back-translation with the original text is
sometimes used as a check on the accuracy of the original translation, much as the accuracy of a mathematical operation
is sometimes checked by reversing the operation. But the results of such reverse-translation operations, while useful
as approximate checks, are not always precisely reliable. Back-translation must in general be less accurate than
back-calculation because linguistic symbols (words) are often ambiguous, whereas mathematical symbols are intentionally
unequivocal. Mark Twain provided humorously telling evidence for the frequent unreliability of back-translation when
he issued his own back-translation of a French translation of his short story, "The Celebrated Jumping Frog of Calaveras
County". He published his back-translation in a 1903 volume together with his English-language original, the French
translation, and a "Private History of the 'Jumping Frog' Story". The latter included a synopsized adaptation of
his story that Twain stated had appeared, unattributed to Twain, in a Professor Sidgwick’s Greek Prose Composition
(p. 116) under the title, "The Athenian and the Frog"; the adaptation had for a time been taken for an independent
ancient Greek precursor to Twain's "Jumping Frog" story. When a historic document survives only in translation, the
original having been lost, researchers sometimes undertake back-translation in an effort to reconstruct the original
text. An example involves the novel The Saragossa Manuscript by the Polish aristocrat Jan Potocki (1761–1815), who
wrote the novel in French and anonymously published fragments in 1804 and 1813–14. Portions of the original French-language
manuscript were subsequently lost; however, the missing fragments survived in a Polish translation that was made
by Edmund Chojecki in 1847 from a complete French copy, now lost. French-language versions of the complete Saragossa
Manuscript have since been produced, based on extant French-language fragments and on French-language versions that
have been back-translated from Chojecki’s Polish version. Translation has served as a school of writing for many
authors. Translators, including monks who spread Buddhist texts in East Asia, and the early modern European translators
of the Bible, in the course of their work have shaped the very languages into which they have translated. They have
acted as bridges for conveying knowledge between cultures; and along with ideas, they have imported from the source
languages, into their own languages, loanwords and calques of grammatical structures, idioms and vocabulary. Interpreters
have sometimes played crucial roles in history. A prime example is La Malinche, also known as Malintzin, Malinalli
and Doña Marina, an early-16th-century Nahua woman from the Mexican Gulf Coast. As a child she had been sold or given
to Maya slave-traders from Xicalango, and thus had become bilingual. Subsequently given along with other women to
the invading Spaniards, she became instrumental in the Spanish conquest of Mexico, acting as interpreter, adviser,
intermediary and lover to Hernán Cortés. Web-based human translation is generally favored by companies and individuals
that wish to secure more accurate translations. In view of the frequent inaccuracy of machine translations, human
translation remains the most reliable, most accurate form of translation available. With the recent emergence of
translation crowdsourcing, translation-memory techniques, and internet applications, translation agencies have been
able to provide on-demand human-translation services to businesses, individuals, and enterprises. Relying exclusively
on unedited machine translation, however, ignores the fact that communication in human language is context-embedded
and that it takes a person to comprehend the context of the original text with a reasonable degree of probability.
It is certainly true that even purely human-generated translations are prone to error; therefore, to ensure that
a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved,
such translations must be reviewed and edited by a human. In Asia, the spread of Buddhism led to large-scale ongoing
translation efforts spanning well over a thousand years. The Tangut Empire was especially efficient in such efforts;
exploiting the then newly invented block printing, and with the full support of the government (contemporary sources
describe the Emperor and his mother personally contributing to the translation effort, alongside sages of various
nationalities), the Tanguts took mere decades to translate volumes that had taken the Chinese centuries to render.[citation
needed] The Arabs undertook large-scale efforts at translation. Having conquered the Greek world, they made Arabic
versions of its philosophical and scientific works. During the Middle Ages, translations of some of these Arabic
versions were made into Latin, chiefly at Córdoba in Spain. King Alfonso X el Sabio (Alphonse the Wise) of Castile
in the 13th century promoted this effort by founding a Schola Traductorum (School of Translation) in Toledo. There
Arabic texts, Hebrew texts, and Latin texts were translated into the other tongues by Muslim, Jewish and Christian
scholars, who also argued the merits of their respective religions. Latin translations of Greek and original Arab
works of scholarship and science helped advance European Scholasticism, and thus European science and culture. The
first great English translation was the Wycliffe Bible (ca. 1382), which showed the weaknesses of an underdeveloped
English prose. Only at the end of the 15th century did the great age of English prose translation begin with Thomas
Malory's Le Morte Darthur—an adaptation of Arthurian romances so free that it can, in fact, hardly be called a true
translation. The first great Tudor translations are, accordingly, the Tyndale New Testament (1525), which influenced
the Authorized Version (1611), and Lord Berners' version of Jean Froissart's Chronicles (1523–25). Meanwhile, in
Renaissance Italy, a new period in the history of translation had opened in Florence with the arrival, at the court
of Cosimo de' Medici, of the Byzantine scholar Georgius Gemistus Pletho shortly before the fall of Constantinople
to the Turks (1453). A Latin translation of Plato's works was undertaken by Marsilio Ficino. This and Erasmus' Latin
edition of the New Testament led to a new attitude to translation. For the first time, readers demanded rigor of
rendering, as philosophical and religious beliefs depended on the exact words of Plato, Aristotle and Jesus. Throughout
the 18th century, the watchword of translators was ease of reading. Whatever they did not understand in a text, or
thought might bore readers, they omitted. They cheerfully assumed that their own style of expression was the best,
and that texts should be made to conform to it in translation. For scholarship they cared no more than had their
predecessors, and they did not shrink from making translations from translations in third languages, or from languages
that they hardly knew, or—as in the case of James Macpherson's "translations" of Ossian—from texts that were actually
of the "translator's" own composition. The 19th century brought new standards of accuracy and style. In regard to
accuracy, observes J.M. Cohen, the policy became "the text, the whole text, and nothing but the text", except for
any bawdy passages and the addition of copious explanatory footnotes. In regard to style, the Victorians' aim, achieved
through far-reaching metaphrase (literality) or pseudo-metaphrase, was to constantly remind readers that they were
reading a foreign classic. An exception was the outstanding translation in this period, Edward FitzGerald's Rubaiyat
of Omar Khayyam (1859), which achieved its Oriental flavor largely by using Persian names and discreet Biblical echoes
and actually drew little of its material from the Persian original. Translation of a text that is sung in vocal music
for the purpose of singing in another language—sometimes called "singing translation"—is closely linked to translation
of poetry because most vocal music, at least in the Western tradition, is set to verse, especially verse in regular
patterns with rhyme. (Since the late 19th century, musical setting of prose and free verse has also been practiced
in some art music, though popular music tends to remain conservative in its retention of stanzaic forms with or without
refrains.) A rudimentary example of translating poetry for singing is church hymns, such as the German chorales translated
into English by Catherine Winkworth. Translation of sung texts is generally much more restrictive than translation
of poetry, because in the former there is little or no freedom to choose between a versified translation and a translation
that dispenses with verse structure. One might modify or omit rhyme in a singing translation, but the assignment
of syllables to specific notes in the original musical setting places great challenges on the translator. There is
the option in prose sung texts, less so in verse, of adding or deleting a syllable here and there by subdividing
or combining notes, respectively, but even with prose the process is almost like strict verse translation because
of the need to stick as closely as possible to the original prosody of the sung melodic line. Translations of sung
texts—whether of the above type meant to be sung or of a more or less literal type meant to be read—are also used
as aids to audiences, singers and conductors, when a work is being sung in a language not known to them. The most
familiar types are translations presented as subtitles or surtitles projected during opera performances, those inserted
into concert programs, and those that accompany commercial audio CDs of vocal music. In addition, professional and
amateur singers often sing works in languages they do not know (or do not know well), and translations are then used
to enable them to understand the meaning of the words they are singing. One of the first recorded instances of translation
in the West was the rendering of the Old Testament into Greek in the 3rd century BCE. The translation is known as
the "Septuagint", a name that refers to the seventy translators (seventy-two, in some versions) who were commissioned
to translate the Bible at Alexandria, Egypt. Each translator worked in solitary confinement in his own cell, and
according to legend all seventy versions proved identical. The Septuagint became the source text for later translations
into many languages, including Latin, Coptic, Armenian and Georgian. The period preceding, and contemporary with,
the Protestant Reformation saw the translation of the Bible into local European languages—a development that contributed
to Western Christianity's split into Roman Catholicism and Protestantism due to disparities between Catholic and
Protestant versions of crucial words and passages (although the Protestant movement was largely based on other things,
such as a perceived need for reformation of the Roman Catholic Church to eliminate corruption). Lasting effects on
the religions, cultures and languages of their respective countries have been exerted by such Bible translations
as Martin Luther's into German, Jakub Wujek's into Polish, and the King James Bible's translators' into English.
Debate and religious schism over different translations of religious texts remain to this day, as demonstrated by,
for example, the King James Only movement.
An airport is an aerodrome with facilities for flights to take off and land. Airports often have facilities to store and
maintain aircraft, and a control tower. An airport consists of a landing area, which comprises an aerially accessible
open space including at least one operationally active surface, such as a runway for planes to take off and land or a helipad,
and often includes adjacent utility buildings such as control towers, hangars and terminals. Larger airports may
have fixed base operator services, airport aprons, air traffic control centres, passenger facilities such as restaurants
and lounges, and emergency services. The majority of the world's airports are non-towered, with no air traffic control
presence. Busy airports have an air traffic control (ATC) system. All airports use a traffic pattern to assure smooth
traffic flow between departing and arriving aircraft. There are a number of aids available to pilots, though not
all airports are equipped with them. Many airports have lighting that helps guide planes using the runways and taxiways
at night or in rain, snow, or fog. In the U.S. and Canada, the vast majority of airports, large and small, will either
have some form of automated airport weather station, a human observer or a combination of the two. Air safety is
an important concern in the operation of an airport, and airports often have their own safety services. Most of the
world's airports are owned by local, regional, or national government bodies who then lease the airport to private
corporations who oversee the airport's operation. For example, in the United Kingdom the state-owned British Airports
Authority originally operated eight of the nation's major commercial airports; it was subsequently privatized in
the late 1980s, and following its takeover by the Spanish Ferrovial consortium in 2006, has been further divested
and downsized to operating just five. Germany's Frankfurt Airport is managed by the quasi-private firm Fraport. In
India, GMR Group operates Indira Gandhi International Airport and Rajiv Gandhi International Airport through joint
ventures, while Bengaluru International Airport and Chhatrapati Shivaji International Airport are controlled by GVK Group.
The rest of India's airports are managed by the Airports Authority of India. Airports are divided into landside and
airside areas. Landside areas include parking lots, public transportation train stations and access roads. Airside
areas include all areas accessible to aircraft, including runways, taxiways and aprons. Access from landside areas
to airside areas is tightly controlled at most airports. Passengers on commercial flights access airside areas through
terminals, where they can purchase tickets, clear security checks, claim luggage, and board aircraft through gates.
The waiting areas which provide passenger access to aircraft are typically called concourses, although this term
is often used interchangeably with terminal. Most major airports provide commercial outlets for products and services.
Most of these companies, many of which are internationally known brands, are located within the departure areas.
These include clothing boutiques and restaurants. Prices charged for items sold at these outlets are generally higher
than those outside the airport. However, some airports now regulate costs to keep them comparable to "street prices".
This term is misleading as prices often match the manufacturers' suggested retail price (MSRP) but are almost never
discounted. Airports may also contain premium and VIP services. The premium and VIP services may
include express check-in and dedicated check-in counters. These services are usually reserved for First and Business
class passengers, premium frequent flyers, and members of the airline's clubs. Premium services may sometimes be
open to passengers who are members of a different airline's frequent flyer program. This can sometimes be part of
a reciprocal deal, as when multiple airlines are part of the same alliance, or as a ploy to attract premium customers
away from rival airlines. Many large airports are located near railway trunk routes for seamless connection of multimodal
transport, for instance Frankfurt Airport, Amsterdam Airport Schiphol, London Heathrow Airport, London Gatwick Airport
and London Stansted Airport. It is also common to connect an airport and a city with rapid transit, light rail lines
or other non-road public transport systems. Some examples of this would include the AirTrain JFK at John F. Kennedy
International Airport in New York, Link Light Rail that runs from the heart of downtown Seattle to Seattle–Tacoma
International Airport, and the Silver Line T at Boston's Logan International Airport by the Massachusetts Bay Transportation
Authority (MBTA). Such a connection lowers risk of missed flights due to traffic congestion. Large airports usually
also have access through controlled-access highways ('freeways' or 'motorways') from which motor vehicles enter either
the departure loop or the arrival loop. The distances passengers need to move within a large airport can be substantial.
It is common for airports to provide moving walkways and buses. The Hartsfield–Jackson Atlanta International Airport
has a tram that takes people through the concourses and baggage claim. Major airports with more than one terminal
offer inter-terminal transportation, such as Mexico City International Airport, where the domestic building of Terminal
1 is connected by Aerotrén to Terminal 2, on the other side of the airport. The title of "world's oldest airport"
is disputed, but College Park Airport in Maryland, US, established in 1909 by Wilbur Wright, is generally agreed
to be the world's oldest continually operating airfield, although it serves only general aviation traffic. Bisbee-Douglas
International Airport in Arizona was declared "the first international airport of the Americas" by US president Franklin
D. Roosevelt in 1943. Pearson Field Airport in Vancouver, Washington had a dirigible land in 1905 and planes in 1911
and is still in use. Bremen Airport opened in 1913 and remains in use, although it served as an American military
field between 1945 and 1949. Amsterdam Airport Schiphol opened on September 16, 1916 as a military airfield, but
only accepted civil aircraft from December 17, 1920, allowing Sydney Airport in Sydney, Australia—which started operations
in January 1920—to claim to be one of the world's oldest continually operating commercial airports. Minneapolis-Saint
Paul International Airport in Minneapolis-Saint Paul, Minnesota, opened in 1920 and has been in continuous commercial
service since. It serves about 35,000,000 passengers each year and continues to expand, recently opening a new
11,000-foot (3,355-meter) runway. Of the airports constructed during this early period in aviation, it is one of the largest
and busiest that is still currently operating. Rome Ciampino Airport, opened 1916, is also a contender, as well as
the Don Mueang International Airport near Bangkok, Thailand, which opened in 1914. Increased aircraft traffic during
World War I led to the construction of landing fields. Aircraft had to approach these from certain directions and
this led to the development of aids for directing the approach and landing slope. Following the war, some of these
military airfields added civil facilities for handling passenger traffic. One of the earliest such fields was Paris
– Le Bourget Airport at Le Bourget, near Paris. The first airport to operate scheduled international commercial services
was Hounslow Heath Aerodrome in August 1919, but it was closed and supplanted by Croydon Airport in March 1920. In
1922, the first permanent airport and commercial terminal solely for commercial aviation was opened at Flughafen
Devau near what was then Königsberg, East Prussia. The airports of this era used a paved "apron", which permitted
night flying as well as landing heavier aircraft. The first lighting used on an airport was during the latter part
of the 1920s; in the 1930s approach lighting came into use. These indicated the proper direction and angle of descent.
The colours and flash intervals of these lights became standardized under the International Civil Aviation Organization
(ICAO). In the 1940s, the slope-line approach system was introduced. This consisted of two rows of lights that formed
a funnel indicating an aircraft's position on the glideslope. Additional lights indicated incorrect altitude and
direction. Airport construction boomed during the 1960s with the increase in jet aircraft traffic. Runways were extended
out to 3,000 m (9,800 ft). The fields were constructed out of reinforced concrete using a slip-form machine that
produces a continuous slab with no disruptions along its length. The early 1960s also saw the introduction of jet
bridge systems to modern airport terminals, an innovation which eliminated outdoor passenger boarding. These systems
became commonplace in the United States by the 1970s. The majority of the world's airports are non-towered, with
no air traffic control presence. However, at particularly busy airports, or airports with other special requirements,
there is an air traffic control (ATC) system whereby controllers (usually ground-based) direct aircraft movements
via radio or other communications links. This coordinated oversight facilitates safety and speed in complex operations
where traffic moves in all three dimensions. Air traffic control responsibilities at airports are usually divided
into at least two main areas: ground and tower, though a single controller may work both stations. The busiest airports
also have clearance delivery, apron control, and other specialized ATC stations. Ground Control is responsible for
directing all ground traffic in designated "movement areas", except the traffic on runways. This includes planes,
baggage trains, snowplows, grass cutters, fuel trucks, stair trucks, airline food trucks, conveyor belt vehicles
and other vehicles. Ground Control will instruct these vehicles on which taxiways to use, which runway they will
use (in the case of planes), where they will park, and when it is safe to cross runways. When a plane is ready to
take off, it will stop short of the runway, at which point it will be turned over to Tower Control. After a plane has
landed, it will depart the runway and be returned to Ground Control. Tower Control controls aircraft on the runway
and in the controlled airspace immediately surrounding the airport. Tower controllers may use radar to locate an
aircraft's position in three-dimensional space, or they may rely on pilot position reports and visual observation.
They coordinate the sequencing of aircraft in the traffic pattern and direct aircraft on how to safely join and leave
the circuit. Aircraft which are only passing through the airspace must also contact Tower Control in order to be
sure that they remain clear of other traffic. At all airports the use of a traffic pattern (often called a traffic
circuit outside the U.S.) is possible. It helps to assure smooth traffic flow between departing and arriving aircraft.
In modern aviation there is no technical need to fly this pattern when there is no queue, and because of so-called
slot times, overall traffic planning tends to ensure that landing queues are avoided. If, for instance, an aircraft
approaches runway 17 (which has a heading of approximately 170 degrees) from the north (flying a heading of 180 degrees),
it will land as directly as possible by simply turning 10 degrees and following the glidepath, rather than orbiting
the runway, whenever this is possible. For smaller piston-engined airplanes at smaller airfields without ILS equipment,
however, things are very different. Generally, this
pattern is a circuit consisting of five "legs" that form a rectangle (two legs and the runway form one side, with
the remaining legs forming three more sides). Each leg is named (see diagram), and ATC directs pilots on how to join
and leave the circuit. Traffic patterns are flown at one specific altitude, usually 800 or 1,000 ft (244 or 305 m)
above ground level (AGL). Standard traffic patterns are left-handed, meaning all turns are made to the left. One
of the main reasons for this is that pilots sit on the left side of the airplane, and a left-hand pattern improves
their visibility of the airport and pattern. Right-handed patterns do exist, usually because of obstacles such as
a mountain, or to reduce noise for local residents. The predetermined circuit helps traffic flow smoothly because
all pilots know what to expect, and helps reduce the chance of a mid-air collision. At extremely large airports,
a circuit is in place but not usually used. Rather, aircraft (usually only commercial with long routes) request approach
clearance while they are still hours away from the airport, often before they even take off from their departure
point. Large airports have a frequency called Clearance Delivery which is used by departing aircraft specifically
for this purpose. This then allows aircraft to take the most direct approach path to the runway and land without
worrying about interference from other aircraft. While this system keeps the airspace free and is simpler for pilots,
it requires detailed knowledge of how aircraft are planning to use the airport ahead of time and is therefore only
possible with large commercial airliners on pre-scheduled flights. The system has recently become so advanced that
controllers can predict whether an aircraft will be delayed on landing before it even takes off; that aircraft can
then be delayed on the ground, rather than wasting expensive fuel waiting in the air. There are a number of aids
available to pilots, though not all airports are equipped with them. A visual approach slope indicator (VASI) helps
pilots fly the approach for landing. Some airports are equipped with a VHF omnidirectional range (VOR) to help pilots
find the direction to the airport. VORs are often accompanied by a distance measuring equipment (DME) to determine
the distance to the VOR. VORs are also located off airports, where they serve to provide airways for aircraft to
navigate upon. In poor weather, pilots will use an instrument landing system (ILS) to find the runway and fly the
correct approach, even if they cannot see the ground. The number of instrument approaches based on the use of the
Global Positioning System (GPS) is rapidly increasing and may eventually be the primary means for instrument landings.
On runways, green lights indicate the beginning of the runway for landing, while red lights indicate the end of the
runway. Runway edge lighting consists of white lights spaced out on both sides of the runway, indicating the edge.
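The runway numbering convention used earlier (runway 17 having a heading of roughly 170 degrees) follows a simple rule: the designator is the magnetic heading rounded to the nearest ten degrees, with the trailing zero dropped. The sketch below is purely illustrative, not drawn from any aviation library, and the function names are invented for this example:

```python
def runway_heading(designator: int) -> int:
    # Runway designators encode the magnetic heading in tens of degrees,
    # so runway 17 points roughly 170 degrees and runway 9 points 90.
    return (designator * 10) % 360

def turn_to_runway(current_heading: int, runway: int) -> int:
    # Smallest signed turn, in degrees, from the current heading onto the
    # runway heading; a negative result means a turn to the left.
    return (runway_heading(runway) - current_heading + 180) % 360 - 180

# An aircraft flying heading 180 toward runway 17 needs only a 10-degree turn:
print(turn_to_runway(180, 17))  # -10
```

This mirrors the approach example above: rather than flying a full circuit, the aircraft makes a small correcting turn onto the glidepath.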
Some airports have more complicated lighting on the runways including lights that run down the centerline of the
runway and lights that help indicate the approach (an approach lighting system, or ALS). Low-traffic airports may
use pilot controlled lighting to save electricity and staffing costs. Hazards to aircraft include debris, nesting
birds, and reduced friction levels due to environmental conditions such as ice, snow, or rain. Part of runway maintenance
is airfield rubber removal which helps maintain friction levels. The fields must be kept clear of debris using cleaning
equipment so that loose material does not become a projectile and enter an engine duct (see foreign object damage).
In adverse weather conditions, ice and snow clearing equipment can be used to improve traction on the landing strip.
For waiting aircraft, equipment is used to spray special deicing fluids on the wings. Many ground crew at the airport
work at the aircraft. A tow tractor pulls the aircraft to one of the airbridges, and the ground power unit is plugged
in. It keeps the electricity running in the plane while it stands at the terminal; the engines are shut down and therefore
do not generate electricity as they do in flight. The passengers disembark using the airbridge. Mobile
stairs can give the ground crew more access to the aircraft's cabin. There is a cleaning service to clean the aircraft
after the aircraft lands. Flight catering provides the food and drinks on flights. A toilet waste truck removes the
human waste from the tank which holds the waste from the toilets in the aircraft. A water truck fills the water tanks
of the aircraft. A fuel transfer vehicle transfers aviation fuel from fuel tanks underground, to the aircraft tanks.
A tractor and its dollies bring in luggage from the terminal to the aircraft. They also carry luggage to the terminal
if the aircraft has landed, and is being unloaded. Hi-loaders lift the heavy luggage containers to the gate of the
cargo hold. The ground crew push the luggage containers into the hold. When an aircraft has landed, the hi-loaders
rise to the cargo hold, the ground crew push the containers onto them, and the hi-loaders carry the containers down.
Each container is then pushed onto one of the tractor's dollies. The conveyor, which is a conveyor belt on a truck, brings in awkwardly shaped or late
luggage. The airbridge is used again by the new passengers to board the aircraft. The tow tractor pushes the aircraft
away from the terminal to a taxi area. The aircraft should be clear of the airport and in the air within 90 minutes. The
airport charges the airline for the time the aircraft spends at the airport. An airbase, sometimes referred to as
an air station or airfield, provides basing and support of military aircraft. Some airbases, known as military airports,
provide facilities similar to their civilian counterparts. For example, RAF Brize Norton in the UK has a terminal
which caters to passengers for the Royal Air Force's scheduled TriStar flights to the Falkland Islands. Some airbases
are co-located with civilian airports, sharing the same ATC facilities, runways, taxiways and emergency services,
but with separate terminals, parking areas and hangars. Bardufoss Airport/Bardufoss Air Station in Norway and Pune
Airport in India are examples of this. Airports have played major roles in films and television programs due to their
very nature as a transport and international hub, and sometimes because of distinctive architectural features of
particular airports. One such example of this is The Terminal, a film about a man who becomes permanently grounded
in an airport terminal and must survive only on the food and shelter provided by the airport. They are also one of
the major elements in movies such as The V.I.P.s, Airplane!, Airport (1970), Die Hard 2, Soul Plane, Jackie Brown,
Get Shorty, Home Alone, Liar Liar, Passenger 57, Final Destination (2000), Unaccompanied Minors, Catch Me If You
Can, Rendition and The Langoliers. They have also played important parts in television series like Lost, The Amazing
Race, and America's Next Top Model (Cycle 10), which have significant parts of their stories set within airports. In other
programmes and films, airports are merely indicative of journeys, e.g. Good Will Hunting. Most airports welcome filming
on site, although it must be agreed in advance and may be subject to a fee. Landside, filming can take place in all
public areas. Airside, however, filming is heavily restricted; the only airside locations where filming is permitted
are the Departure Lounge and some outside areas. To film in an airside location, all visitors must go through security,
the same as passengers, and be accompanied by a full airside pass holder and have their passport with them at all
times. Filming cannot be undertaken in Security, at Immigration/Customs, or in Baggage Reclaim.
Kievan Rus' begins with the rule (882–912) of Prince Oleg, who extended his control from Novgorod south along the Dnieper
river valley in order to protect trade from Khazar incursions from the east and moved his capital to the more strategic
Kiev. Sviatoslav I (died 972) achieved the first major expansion of Kievan Rus' territorial control, fighting a war
of conquest against the Khazar Empire. Vladimir the Great (980–1015) introduced Christianity with his own baptism
and, by decree, that of all the inhabitants of Kiev and beyond. Kievan Rus' reached its greatest extent under Yaroslav
I (1019–1054); his sons assembled and issued its first written legal code, the Rus' Justice, shortly after his death.
The term "Kievan Rus'" (Ки́евская Русь Kievskaya Rus’) was coined in the 19th century in Russian historiography to
refer to the period when the centre was in Kiev. In English, the term was introduced in the early 20th century, when
it was found in the 1913 English translation of Vasily Klyuchevsky's A History of Russia, to distinguish the early
polity from successor states, which were also named Rus. Later, the Russian term was rendered into Belarusian and
Ukrainian as Кіеўская Русь Kijeŭskaja Rus’ and Ки́ївська Русь Kyivs'ka Rus’, respectively. Prior to the emergence
of Kievan Rus' in the 9th century AD, the lands between the Baltic Sea and Black Sea were primarily populated by
eastern Slavic tribes. In the northern region around Novgorod were the Ilmen Slavs and neighboring Krivichi, who
occupied territories surrounding the headwaters of the West Dvina, Dnieper, and Volga Rivers. To their north, in
the Ladoga and Karelia regions, were the Finnic Chud tribe. In the south, in the area around Kiev, were the Poliane,
a group of Slavicized tribes with Iranian origins, the Drevliane to the west of the Dnieper, and the Severiane to
the east. To their north and east were the Vyatichi, and to their south was forested land settled by Slav farmers,
giving way to steppelands populated by nomadic herdsmen. Controversy persists over whether the Rus’ were Varangians
(Vikings) or Slavs. This uncertainty is due largely to a paucity of contemporary sources. Attempts to address this
question instead rely on archaeological evidence, the accounts of foreign observers, legends and literature from
centuries later. To some extent the controversy is related to the foundation myths of modern states in the region.
According to the "Normanist" view, the Rus' were Scandinavians, while Russian and Ukrainian nationalist historians
generally argue that the Rus' were themselves Slavs. Normanist theories focus on the earliest written source for
the East Slavs, the Russian Primary Chronicle, although even this account was not produced until the 12th century.
Nationalist accounts have suggested that the Rus' were present before the arrival of the Varangians, noting that
only a handful of Scandinavian words can be found in modern Russian and that Scandinavian names in the early chronicles
were soon replaced by Slavic names. Nevertheless, archaeological evidence from the area suggests that a Scandinavian
population was present during the 10th century at the latest. On balance, it seems likely that the Rus' proper were
a small minority of Scandinavians who formed an elite ruling class, while the great majority of their subjects were
Slavs. Considering the linguistic arguments mounted by nationalist scholars, if the proto-Rus' were Scandinavians,
they must have quickly become nativized, adopting Slavic languages and other cultural practices. Ahmad ibn Fadlan,
an Arab traveler during the 10th century, provided one of the earliest written descriptions of the Rus': "They are
as tall as a date palm, blond and ruddy, so that they do not need to wear a tunic nor a cloak; rather the men among
them wear garments that only cover half of his body and leaves one of his hands free." Liutprand of Cremona, who
was twice an envoy to the Byzantine court (949 and 968), identifies the "Russi" with the Norse ("the Russi, whom
we call Norsemen by another name") but explains the name as a Greek term referring to their physical traits ("A certain
people made up of a part of the Norse, whom the Greeks call [...] the Russi on account of their physical features,
we designate as Norsemen because of the location of their origin."). Leo the Deacon, a 10th-century Byzantine historian
and chronicler, refers to the Rus' as "Scythians" and notes that they tended to adopt Greek rituals and customs.
According to the Primary Chronicle, the territories of the East Slavs in the 9th century were divided between the
Varangians and the Khazars. The Varangians are first mentioned as imposing tribute on Slavic and Finnic tribes in
859. In 862, the Finnic and Slavic tribes in the area of Novgorod rebelled against the Varangians, driving them "back
beyond the sea and, refusing them further tribute, set out to govern themselves." The tribes had no laws, however,
and soon began to make war with one another, prompting them to invite the Varangians back to rule them and bring
peace to the region: The three brothers—Rurik, Sineus, and Truvor—established themselves in Novgorod, Beloozero,
and Izborsk, respectively. Two of the brothers died, and Rurik became the sole ruler of the territory and progenitor
of the Rurik Dynasty. A short time later, two of Rurik’s men, Askold and Dir, asked him for permission to go to Tsargrad
(Constantinople). On their way south, they discovered "a small city on a hill," Kiev, captured it and the surrounding
country from the Khazars, populated the region with more Varangians, and "established their dominion over the country
of the Polyanians." The Chronicle reports that Askold and Dir continued to Constantinople with a navy to attack the
city in 863–66, catching the Byzantines by surprise and ravaging the surrounding area, though other accounts date
the attack in 860. Patriarch Photius vividly describes the "universal" devastation of the suburbs and nearby islands,
and another account further details the destruction and slaughter of the invasion. The Rus' turned back before attacking
the city itself, due either to a storm dispersing their boats, the return of the Emperor, or in a later account,
due to a miracle after a ceremonial appeal by the Patriarch and the Emperor to the Virgin. The attack was the first
encounter between the Rus' and Byzantines and led the Patriarch to send missionaries north to engage and attempt
to convert the Rus' and the Slavs. Rurik led the Rus' until his death in about 879, bequeathing his kingdom to his
kinsman, Prince Oleg, as regent for his young son, Igor. In 880-82, Oleg led a military force south along the Dnieper
river, capturing Smolensk and Lyubech before reaching Kiev, where he deposed and killed Askold and Dir, proclaimed
himself prince, and declared Kiev the "mother of Rus' cities." Oleg set about consolidating his power over the surrounding
region and the riverways north to Novgorod, imposing tribute on the East Slav tribes. In 883, he conquered the Drevlians,
imposing a fur tribute on them. By 885 he had subjugated the Poliane, Severiane, Vyatichi, and Radimichs, forbidding
them to pay further tribute to the Khazars. Oleg continued to develop and expand a network of Rus' forts in Slav
lands, begun by Rurik in the north. The new Kievan state prospered due to its abundant supply of furs, beeswax, honey,
and slaves for export, and because it controlled three main trade routes of Eastern Europe. In the north, Novgorod
served as a commercial link between the Baltic Sea and the Volga trade route to the lands of the Volga Bulgars, the
Khazars, and across the Caspian Sea as far as Baghdad, providing access to markets and products from Central Asia
and the Middle East. Trade from the Baltic also moved south on a network of rivers and short portages along the Dnieper
known as the "route from the Varangians to the Greeks," continuing to the Black Sea and on to Constantinople. Kiev
was a central outpost along the Dnieper route and a hub with the east-west overland trade route between the Khazars
and the Germanic lands of Central Europe. These commercial connections enriched Rus' merchants and princes, funding
military forces and the construction of churches, palaces, fortifications, and further towns. Demand for luxury goods
fostered production of expensive jewelry and religious wares, allowing their export, and an advanced credit and money-lending
system may have also been in place. The rapid expansion of the Rus' to the south led to conflict and volatile relationships
with the Khazars and other neighbors on the Pontic steppe. The Khazars dominated the Black Sea steppe during the
8th century, trading and frequently allying with the Byzantine Empire against Persians and Arabs. In the late 8th
century, the collapse of the Göktürk Khaganate led the Magyars and the Pechenegs, Ugric and Turkic peoples from Central
Asia, to migrate west into the steppe region, leading to military conflict, disruption of trade, and instability
within the Khazar Khaganate. The Rus' and Slavs had earlier allied with the Khazars against Arab raids on the Caucasus,
but they increasingly worked against them to secure control of the trade routes. The Byzantine Empire was able to
take advantage of the turmoil to expand its political influence and commercial relationships, first with the Khazars
and later with the Rus' and other steppe groups. The Byzantines established the Theme of Cherson, formally known
as Klimata, in the Crimea in the 830s to defend against raids by the Rus' and to protect vital grain shipments supplying
Constantinople. Cherson also served as a key diplomatic link with the Khazars and others on the steppe, and it became
the centre of Black Sea commerce. The Byzantines also helped the Khazars build a fortress at Sarkel on the Don river
to protect their northwest frontier against incursions by the Turkic migrants and the Rus', and to control caravan
trade routes and the portage between the Don and Volga rivers. The expansion of the Rus' put further military and
economic pressure on the Khazars, depriving them of territory, tributaries, and trade. In around 890, Oleg waged
an indecisive war in the lands of the lower Dniester and Dnieper rivers with the Tivertsi and the Ulichs, who were
likely acting as vassals of the Magyars, blocking Rus' access to the Black Sea. In 894, the Magyars and Pechenegs
were drawn into the wars between the Byzantines and the Bulgarian Empire. The Byzantines arranged for the Magyars
to attack Bulgarian territory from the north, and Bulgaria in turn persuaded the Pechenegs to attack the Magyars
from their rear. Boxed in, the Magyars were forced to migrate further west across the Carpathian Mountains into the
Hungarian plain, depriving the Khazars of an important ally and a buffer from the Rus'. The migration of the Magyars
allowed Rus' access to the Black Sea, and they soon launched excursions into Khazar territory along the sea coast,
up the Don river, and into the lower Volga region. The Rus' had been raiding and plundering in the Caspian Sea region since 864, with the first large-scale expedition in 913, when they extensively raided Baku, Gilan, and Mazandaran and
penetrated into the Caucasus. As the 10th century progressed, the Khazars were no longer able to command tribute
from the Volga Bulgars, and their relationship with the Byzantines deteriorated, as Byzantium increasingly allied
with the Pechenegs against them. The Pechenegs were thus secure to raid the lands of the Khazars from their base
between the Volga and Don rivers, allowing them to expand to the west. Rus' relations with the Pechenegs were complex,
as the groups alternately formed alliances with and against one another. The Pechenegs were nomads roaming the steppe
raising livestock which they traded with the Rus' for agricultural goods and other products. The lucrative Rus' trade
with the Byzantine Empire had to pass through Pecheneg-controlled territory, so the need for generally peaceful relations
was essential. Nevertheless, while the Primary Chronicle reports the Pechenegs entering Rus' territory in 915 and
then making peace, they were waging war with one another again in 920. Pechenegs are reported assisting the Rus'
in later campaigns against the Byzantines, yet allied with the Byzantines against the Rus' at other times. After
the Rus' attack on Constantinople in 860, the Byzantine Patriarch Photius sent missionaries north to convert the
Rus' and the Slavs. Prince Rastislav of Moravia had requested the Emperor to provide teachers to interpret the holy
scriptures, so in 863 the brothers Cyril and Methodius were sent as missionaries, due to their knowledge of the Slavonic
language. The Slavs had no written language, so the brothers devised the Glagolitic alphabet, later developed into
Cyrillic, and standardized the language of the Slavs, later known as Old Church Slavonic. They translated portions
of the Bible and drafted the first Slavic civil code and other documents, and the language and texts spread throughout
Slavic territories, including Kievan Rus’. The mission of Cyril and Methodius served both evangelical and diplomatic
purposes, spreading Byzantine cultural influence in support of imperial foreign policy. In 867 the Patriarch announced
that the Rus' had accepted a bishop, and in 874 he spoke of an "Archbishop of the Rus'." Relations between the Rus'
and Byzantines became more complex after Oleg took control over Kiev, reflecting commercial, cultural, and military
concerns. The wealth and income of the Rus' depended heavily upon trade with Byzantium. Constantine Porphyrogenitus
described the annual course of the princes of Kiev, collecting tribute from client tribes, assembling the product
into a flotilla of hundreds of boats, conducting them down the Dnieper to the Black Sea, and sailing to the estuary
of the Dniester, the Danube delta, and on to Constantinople. On their return trip they would carry silk fabrics,
spices, wine, and fruit. The importance of this trade relationship led to military action when disputes arose. The
Primary Chronicle reports that the Rus' attacked Constantinople again in 907, probably to secure trade access. The
Chronicle glorifies the military prowess and shrewdness of Oleg, an account imbued with legendary detail. Byzantine
sources do not mention the attack, but a pair of treaties in 907 and 911 set forth a trade agreement with the Rus',
the terms suggesting pressure on the Byzantines, who granted the Rus' quarters and supplies for their merchants and
tax-free trading privileges in Constantinople. The Chronicle provides a mythic tale of Oleg's death. A sorcerer prophesies
that the death of the Grand Prince would be associated with a certain horse. Oleg has the horse sequestered, and
it later dies. Oleg goes to visit the horse and stands over the carcass, gloating that he had outlived the threat,
when a snake strikes him from among the bones, and he soon becomes ill and dies. The Chronicle reports that Prince
Igor succeeded Oleg in 913, and after some brief conflicts with the Drevlians and the Pechenegs, a period of peace
ensued for over twenty years. In 941, Igor led another major Rus' attack on Constantinople, probably over trading
rights again. A navy of 10,000 vessels, including Pecheneg allies, landed on the Bithynian coast and devastated the
Asiatic shore of the Bosphorus. The attack was well-timed, perhaps due to intelligence, as the Byzantine fleet was
occupied with the Arabs in the Mediterranean, and the bulk of its army was stationed in the east. The Rus’ burned
towns, churches, and monasteries, butchering the people and amassing booty. The emperor arranged for a small group
of retired ships to be outfitted with Greek fire throwers and sent them out to meet the Rus’, luring them into surrounding
the contingent before unleashing the Greek fire. Liutprand of Cremona wrote that "the Rus', seeing the flames, jumped
overboard, preferring water to fire. Some sank, weighed down by the weight of their breastplates and helmets; others
caught fire." Those captured were beheaded. The ploy dispersed the Rus’ fleet, but their attacks continued into the
hinterland as far as Nicomedia, with many atrocities reported as victims were crucified and set up for use as targets.
At last a Byzantine army arrived from the Balkans to drive the Rus' back, and a naval contingent reportedly destroyed
much of the Rus' fleet on its return voyage (possibly an exaggeration since the Rus' soon mounted another attack).
The outcome indicates increased military might by Byzantium since 911, suggesting a shift in the balance of power.
Igor returned to Kiev keen for revenge. He assembled a large force of warriors from among neighboring Slavs and Pecheneg
allies, and sent for reinforcements of Varangians from “beyond the sea.” In 944 the Rus' force advanced again on
the Greeks, by land and sea, and a Byzantine force from Cherson responded. The Emperor sent gifts and offered tribute
in lieu of war, and the Rus' accepted. Envoys were sent between the Rus’, the Byzantines, and the Bulgarians in 945,
and a peace treaty was completed. The agreement again focused on trade, but this time with terms less favorable to
the Rus’, including stringent regulations on the conduct of Rus’ merchants in Cherson and Constantinople and specific
punishments for violations of the law. The Byzantines may have been motivated to enter the treaty out of concern over a prolonged alliance of the Rus', Pechenegs, and Bulgarians against them, though the terms, more favorable to Byzantium, further suggest a shift in power. Following the death of Grand Prince Igor in 945, his wife Olga ruled as regent in Kiev
until their son Sviatoslav reached maturity (ca. 963). His decade-long reign over Rus' was marked by rapid expansion
through the conquest of the Khazars of the Pontic steppe and the invasion of the Balkans. By the end of his short
life, Sviatoslav carved out for himself the largest state in Europe, eventually moving his capital from Kiev to Pereyaslavets
on the Danube in 969. In contrast with his mother's conversion to Christianity, Sviatoslav, like his druzhina, remained
a staunch pagan. Due to his abrupt death in an ambush in 972, Sviatoslav's conquests, for the most part, were not
consolidated into a functioning empire, while his failure to establish a stable succession led to a fratricidal feud
among his sons, which resulted in two of his three sons being killed. It is not clearly documented when the title
of the Grand Duke was first introduced, but the importance of the Kiev principality was recognized after the death
of Sviatoslav I in 972 and the ensuing struggle between Vladimir the Great and Yaropolk I. The region of Kiev dominated
the state of Kievan Rus' for the next two centuries. The Grand Prince ("velikiy kniaz'") of Kiev controlled the lands
around the city, and his formally subordinate relatives ruled the other cities and paid him tribute. The zenith of
the state's power came during the reigns of Vladimir the Great (980–1015) and Prince Yaroslav I the Wise (1019–1054).
Both rulers continued the steady expansion of Kievan Rus' that had begun under Oleg. Vladimir had been prince of
Novgorod when his father Sviatoslav I died in 972. He was forced to flee to Scandinavia in 976 after his half-brother
Yaropolk had murdered his other brother Oleg and taken control of Rus'. In Scandinavia, with the help of his relative
Earl Håkon Sigurdsson, ruler of Norway, Vladimir assembled a Viking army and reconquered Novgorod and Kiev from Yaropolk.
As Prince of Kiev, Vladimir's most notable achievement was the Christianization of Kievan Rus', a process that began
in 988. The Primary Chronicle states that when Vladimir had decided to accept a new faith instead of the traditional
idol-worship (paganism) of the Slavs, he sent out some of his most valued advisors and warriors as emissaries to
different parts of Europe. They visited the Christians of the Latin Rite, the Jews, and the Muslims before finally
arriving in Constantinople. They rejected Islam because, among other things, it prohibited the consumption of alcohol,
and Judaism because the god of the Jews had permitted his chosen people to be deprived of their country. They found
the ceremonies in the Roman church to be dull. But at Constantinople, they were so astounded by the beauty of the
cathedral of Hagia Sophia and the liturgical service held there that they made up their minds there and then about
the faith they would like to follow. Upon their arrival home, they convinced Vladimir that the faith of the Byzantine
Rite was the best choice of all, upon which Vladimir made a journey to Constantinople and arranged to marry Princess
Anna, the sister of Byzantine emperor Basil II. Vladimir's choice of Eastern Christianity may also have reflected
his close personal ties with Constantinople, which dominated the Black Sea and hence trade on Kiev's most vital commercial
route, the Dnieper River. Adherence to the Eastern Church had long-range political, cultural, and religious consequences.
The church had a liturgy written in Cyrillic and a corpus of translations from Greek that had been produced for the
Slavic peoples. This literature facilitated the conversion to Christianity of the Eastern Slavs and introduced them
to rudimentary Greek philosophy, science, and historiography without the necessity of learning Greek (there were
some merchants who did business with Greeks and likely had an understanding of contemporary business Greek). In contrast,
educated people in medieval Western and Central Europe learned Latin. Enjoying independence from the Roman authority
and free from tenets of Latin learning, the East Slavs developed their own literature and fine arts, quite distinct
from those of other Eastern Orthodox countries.[citation needed] (See Old East Slavic language and Architecture of
Kievan Rus' for details.) Following the Great Schism of 1054, the Rus' church maintained communion with both Rome
and Constantinople for some time, but along with most of the Eastern churches it eventually split to follow the Eastern
Orthodox. That being said, unlike other parts of the Greek world, Kievan Rus' did not have a strong hostility to
the Western world. Yaroslav, known as "the Wise", struggled for power with his brothers. A son of Vladimir the Great,
he was vice-regent of Novgorod at the time of his father's death in 1015. Subsequently, his eldest surviving brother,
Svyatopolk the Accursed, killed three of his other brothers and seized power in Kiev. Yaroslav, with the active support
of the Novgorodians and the help of Viking mercenaries, defeated Svyatopolk and became the grand prince of Kiev in
1019. Although he first established his rule over Kiev in 1019, he did not have uncontested rule of all of Kievan
Rus' until 1036. Like Vladimir, Yaroslav was eager to improve relations with the rest of Europe, especially the Byzantine
Empire. Yaroslav's granddaughter, Eupraxia the daughter of his son Vsevolod I, Prince of Kiev, was married to Henry
III, Holy Roman Emperor. Yaroslav also arranged marriages for his sister and three daughters to the kings of Poland,
France, Hungary and Norway. Yaroslav promulgated the first East Slavic law code, Russkaya Pravda; built Saint Sophia
Cathedral in Kiev and Saint Sophia Cathedral in Novgorod; patronized local clergy and monasticism; and is said to
have founded a school system. Yaroslav's sons developed the great Kiev Pechersk Lavra (monastery), which functioned
in Kievan Rus' as an ecclesiastical academy. An unconventional succession system (the rota system) was established, whereby power passed to the eldest member of the ruling dynasty rather than from father to son, i.e. in most cases to the ruler's eldest brother, fomenting constant hatred and rivalry within the royal family.[citation needed] Familicide was frequently used to obtain power, particularly during the rule of the Yaroslavichi (the sons of Yaroslav), when the established system was bypassed to install Vladimir II Monomakh as Grand Prince of Kiev,[clarification needed] in turn creating major feuds between the Olegovichi of Chernihiv, the Monomakhs of Pereyaslav, the Izyaslavichi of Turov/Volhynia, and the princes of Polotsk.[citation needed] The
most prominent struggle for power was the conflict that erupted after the death of Yaroslav the Wise. The rivaling
Principality of Polotsk was contesting the power of the Grand Prince by occupying Novgorod, while Rostislav Vladimirovich
was fighting for the Black Sea port of Tmutarakan, which belonged to Chernihiv.[citation needed] Three of Yaroslav's sons, who had at first allied with one another, found themselves fighting each other, especially after their defeat by the Cuman forces in 1068 at the Battle of the Alta River. At the same time an uprising took place in Kiev, bringing to power Vseslav of Polotsk, who supported traditional Slavic paganism.[citation needed] The ruling Grand Prince Iziaslav fled to Poland to ask for support and returned within a few years to restore order.[citation needed] Affairs became even more complicated by the end of the 11th century, driving the state into chaos and constant warfare. On the initiative of Vladimir II Monomakh, the first federal council of Kievan Rus' took place in 1097 in the city of Liubech, near Chernihiv, with the main intention of reaching an understanding among the warring sides. Although it did not entirely stop the fighting, it did calm the conflict for a time.[citation needed] The decline of Constantinople
– a main trading partner of Kievan Rus' – played a significant role in its decline. The trade route from the Varangians to the Greeks, along which goods moved from the Black Sea (mainly Byzantine) through eastern Europe to the Baltic, was a cornerstone of Kiev's wealth and prosperity. Kiev was the main power and initiator in this relationship, but once the Byzantine Empire fell into turmoil and supplies became erratic, profits dried up and Kiev lost its appeal.[citation needed] The last ruler to maintain a united state was Mstislav the Great. After his death in 1132, Kievan Rus' fell into a rapid decline, and Mstislav's successor Yaropolk II of Kiev, instead of focusing on the external threat of the Cumans, became embroiled in conflicts with the growing power of the Novgorod Republic. In 1169, as the Kievan Rus' state was riven by internal conflict, Andrei Bogolyubsky
of Vladimir sacked the city of Kiev. The sack of the city fundamentally changed the perception of Kiev and was evidence
of the fragmentation of the Kievan Rus'. By the end of the 12th century, the Kievan state became even further fragmented
and had been divided into roughly twelve different principalities. The Crusades brought a shift in European trade
routes that accelerated the decline of Kievan Rus'. In 1204 the forces of the Fourth Crusade sacked Constantinople,
making the Dnieper trade route marginal. At the same time the Teutonic Knights (of the Northern Crusades) were conquering
the Baltic region and threatening the lands of Novgorod. Concurrently, the Ruthenian federation of Kievan Rus' began to disintegrate into smaller principalities as the Rurik dynasty grew. The local Orthodox Christianity of Kievan Rus', still struggling to establish itself in a predominantly pagan state and having lost its main base in Constantinople, was on the brink of extinction. Some of the main regional centres that developed later were Novgorod,
Chernigov, Galich, Kiev, Ryazan, Vladimir-upon-Klyazma, Vladimir of Volyn and Polotsk. In the north, the Republic
of Novgorod prospered because it controlled trade routes from the River Volga to the Baltic Sea. As Kievan Rus' declined,
Novgorod became more independent. A local oligarchy ruled Novgorod; major government decisions were made by a town
assembly, which also elected a prince as the city's military leader. In 1169, Novgorod acquired its own archbishop, Ilya, a sign of its increased importance and political independence; about 30 years earlier, in 1136, a republican form of government, an elective monarchy, had been established in Novgorod. From then on, Novgorod enjoyed a wide degree of autonomy, although it remained closely associated with Kievan Rus'. In the northeast, Slavs
from the Kievan region colonized the territory that later would become the Grand Duchy of Moscow by subjugating and
merging with the Finnic tribes already occupying the area. The city of Rostov, the oldest centre of the northeast,
was supplanted first by Suzdal and then by the city of Vladimir, which became the capital of Vladimir-Suzdal'. The
combined principality of Vladimir-Suzdal asserted itself as a major power in Kievan Rus' in the late 12th century.
In 1169 Prince Andrey Bogolyubskiy of Vladimir-Suzdal sacked the city of Kiev and transferred the title of Grand Prince or Grand Duke (Великий Князь/Velikiy Knyaz) to Vladimir, thereby claiming primacy in Rus'. Prince Andrey
then installed his younger brother, who ruled briefly in Kiev while Andrey continued to rule his realm from Suzdal.
In 1299, in the wake of the Mongol invasion, the metropolitan moved from Kiev to the city of Vladimir in Vladimir-Suzdal.
To the southwest, the principality of Halych had developed trade relations with its Polish, Hungarian and Lithuanian
neighbours and emerged as the local successor to Kievan Rus'. In 1199, Prince Roman Mstislavich united the two previously
separate principalities. In 1202 he conquered Kiev and assumed the title of Grand Duke of Kievan Rus', which had been held by the rulers of Vladimir-Suzdal since 1169. His son, Prince Daniil (r. 1238–1264), looked for support from the
West. He accepted a crown as a "Rex Rusiae" ("King of Russia") from the Roman papacy, apparently doing so without
breaking with Constantinople. In 1370, the patriarch of the Eastern Orthodox Church in Constantinople granted the
King of Poland a metropolitan for his Russian subjects. Lithuanian rulers also requested and received a metropolitan
for Novagrudok shortly afterwards. Cyprian, a candidate pushed by the Lithuanian rulers, became Metropolitan of Kiev
in 1375 and Metropolitan of Moscow in 1382; in this way the church in the Russian lands was reunited for a time.
In 1439, Kiev became the seat of a separate "Metropolitan of Kiev, Galich and all Rus'" for all Greek Orthodox Christians
under Polish-Lithuanian rule. Due to the expansion of trade and its geographical proximity, Kiev became the most
important trade centre and chief among the communes; therefore the leader of Kiev gained political "control" over
the surrounding areas. This princedom emerged from a coalition of traditional patriarchal family communes that banded together to increase the available workforce and expand the productivity of the land. This union developed the first major cities in the Rus' and was the first notable form of self-government. As these communes became larger, the emphasis shifted from family holdings to the surrounding territory. This shift in organization became known as the verv'. In the 11th and 12th centuries, the princes and their retinues, which were a
mixture of Slavic and Scandinavian elites, dominated the society of Kievan Rus'. Leading soldiers and officials received
income and land from the princes in return for their political and military services. Kievan society lacked the class
institutions and autonomous towns that were typical of Western European feudalism. Nevertheless, urban merchants,
artisans and labourers sometimes exercised political influence through a city assembly, the veche (council), which
included all the adult males in the population. In some cases, the veche either made agreements with their rulers
or expelled them and invited others to take their place. At the bottom of society was a stratum of slaves. More important
was a class of tribute-paying peasants, who owed labour duty to the princes. The widespread personal serfdom characteristic
of Western Europe did not exist in Kievan Rus'. The change in political structure led to the inevitable development
of the peasant class, or smerdy. The smerdy were free landless people who found work by labouring for wages on the manors that began to develop around 1031, as the verv' began to dominate the socio-political structure. The smerdy were initially granted equality in the Kievan law code; they were theoretically equal to the prince, so they enjoyed as much freedom as can be expected of manual labourers. However, in the 13th century, they slowly began to lose their rights and became less equal in the eyes of the law. Kievan Rus', although sparsely populated compared to Western
Europe, was not only the largest contemporary European state in terms of area but also culturally advanced. Literacy
in Kiev, Novgorod and other large cities was high. As birch bark documents attest, they exchanged love letters and
prepared cheat sheets for schools. Novgorod had a sewage system and wood paving not often found in other cities at
the time. The Russkaya Pravda confined punishments to fines and generally did not use capital punishment. Certain
rights were accorded to women, such as property and inheritance rights. Kievan Rus' also played an important genealogical
role in European politics. Yaroslav the Wise, whose stepmother belonged to the Macedonian dynasty, the greatest one
to rule Byzantium, married the only legitimate daughter of the king who Christianized Sweden. His daughters became
queens of Hungary, France and Norway, his sons married the daughters of a Polish king and a Byzantine emperor (not
to mention a niece of the Pope), while his granddaughters were a German Empress and (according to one theory) the
queen of Scotland. A grandson married the only daughter of the last Anglo-Saxon king of England. Thus the Rurikids
were a well-connected royal family of the time. From the 9th century, the Pecheneg nomads began an uneasy relationship
with Kievan Rus′. For over two centuries they launched sporadic raids into the lands of Rus′, which sometimes escalated
into full-scale wars (such as the 920 war on the Pechenegs by Igor of Kiev reported in the Primary Chronicle), but
there were also temporary military alliances (e.g. the 943 Byzantine campaign by Igor). In 968, the Pechenegs attacked
and besieged the city of Kiev. Some speculation exists that the Pechenegs drove off the Tivertsi and the Ulichs to
the regions of the upper Dniester river in Bukovina. The Byzantine Empire was known to support the Pechenegs in their
military campaigns against the Eastern Slavic states.[citation needed] The exact date of creation of the Kiev Metropolis
is uncertain, as is the identity of the first leader of the church. The first head is generally considered to have been Michael I of Kiev, though some sources instead name Leontiy, who is often placed after Michael, or Anastas Chersonesos, who became the first bishop of the Church of the Tithes. The first metropolitan to be confirmed by historical sources
is Theopemp, who was appointed by Patriarch Alexius of Constantinople in 1038. Before 1015 there were five dioceses:
Kiev, Chernihiv, Bilhorod, Volodymyr, Novgorod, and soon thereafter Yuriy-upon-Ros. The Kiev Metropolitan sent his
own delegation to the Council of Bari in 1098.
The Super Nintendo Entertainment System (officially abbreviated the Super NES[b] or SNES[c], and commonly shortened to Super
Nintendo[d]) is a 16-bit home video game console developed by Nintendo that was released in 1990 in Japan and South
Korea, 1991 in North America, 1992 in Europe and Australasia (Oceania), and 1993 in South America. In Japan, the
system is called the Super Famicom (Japanese: スーパーファミコン, Hepburn: Sūpā Famikon, officially adopting the abbreviated
name of its predecessor, the Family Computer), or SFC for short. In South Korea, it is known as the Super Comboy
(슈퍼 컴보이 Syupeo Keomboi) and was distributed by Hyundai Electronics. Although each version is essentially the same,
several forms of regional lockout prevent the different versions from being compatible with one another. It was released
in Brazil on September 2, 1992, by Playtronic. To compete with the popular Family Computer in Japan, NEC Home Electronics
launched the PC Engine in 1987, and Sega Enterprises followed suit with the Mega Drive in 1988. The two platforms
were later launched in North America in 1989 as the TurboGrafx-16 and the Genesis respectively. Both systems were
built on 16-bit architectures and offered improved graphics and sound over the 8-bit NES. However, it took several
years for Sega's system to become successful. Nintendo executives were in no rush to design a new system, but they
reconsidered when they began to see their dominance in the market slipping. Designed by Masayuki Uemura, the designer
of the original Famicom, the Super Famicom was released in Japan on Wednesday, November 21, 1990 for ¥25,000 (US$210).
It was an instant success; Nintendo's initial shipment of 300,000 units sold out within hours, and the resulting
social disturbance led the Japanese government to ask video game manufacturers to schedule future console releases
on weekends. The system's release also gained the attention of the Yakuza, leading to a decision to ship the devices
at night to avoid robbery. On August 23, 1991,[a] Nintendo released the Super Nintendo Entertainment System, a redesigned
version of the Super Famicom, in North America for US$199. The SNES was released in the United Kingdom and Ireland
in April 1992 for GB£150, with a German release following a few weeks later. Most of the PAL region versions of the
console use the Japanese Super Famicom design, except for labeling and the length of the joypad leads. The Playtronic
Super NES in Brazil, although PAL, uses the North American design. Both the NES and SNES were released in Brazil
in 1993 by Playtronic, a joint venture between the toy company Estrela and consumer electronics company Gradiente.
The rivalry between Nintendo and Sega resulted in what has been described as one of the most notable console wars
in video game history, in which Sega positioned the Genesis as the "cool" console, with more mature titles aimed
at older gamers, and edgy advertisements that occasionally attacked the competition. Nintendo, however, scored an
early public relations advantage by securing the first console conversion of Capcom's arcade classic Street Fighter
II for SNES, which took over a year to make the transition to Genesis. Despite the Genesis's head start, much larger
library of games, and lower price point, the Genesis only represented an estimated 60% of the American 16-bit console
market in June 1992, and neither console could maintain a definitive lead for several years. Donkey Kong Country
is said to have helped establish the SNES's market prominence in the latter years of the 16-bit generation and, for a time, to maintain it against the PlayStation and Saturn. According to Nintendo, the company had sold more than 20 million
SNES units in the U.S. According to a 2014 Wedbush Securities report based on NPD sales data, the SNES ultimately
outsold the Genesis in the U.S. market. During the NES era, Nintendo maintained exclusive control over titles released
for the system—the company had to approve every game, each third-party developer could only release up to five games
per year (but some third parties got around this by using different names, for example Konami's "Ultra Games" brand),
those games could not be released on another console within two years, and Nintendo was the exclusive manufacturer
and supplier of NES cartridges. However, competition from Sega's console brought an end to this practice; in 1991,
Acclaim began releasing games for both platforms, with most of Nintendo's other licensees following suit over the
next several years; Capcom (which licensed some games to Sega instead of producing them directly) and Square were
the most notable holdouts. The company continued to carefully review submitted titles, giving them scores using a
40-point scale and allocating Nintendo's marketing resources accordingly. Each region performed separate evaluations.
Nintendo of America also maintained a policy that, among other things, limited the amount of violence in the games
on its systems. One game, Mortal Kombat, would challenge this policy. A surprise hit in arcades in 1992, Mortal Kombat
features splashes of blood and finishing moves that often depict one character dismembering the other. Because the
Genesis version retained the gore while the SNES version did not, it outsold the SNES version by a ratio of three
or four-to-one. Game players were not the only ones to notice the violence in this game; US Senators Herb Kohl and
Joe Lieberman convened a Congressional hearing on December 9, 1993 to investigate the marketing of violent video
games to children.[e] While Nintendo took the high ground with moderate success, the hearings led to the creation
of the Interactive Digital Software Association and the Entertainment Software Rating Board, and the inclusion of
ratings on all video games. With these ratings in place, Nintendo decided its censorship policies were no longer
needed. While other companies were moving on to 32-bit systems, Rare and Nintendo proved that the SNES was still
a strong contender in the market. In November 1994, Rare released Donkey Kong Country, a platform game featuring
3D models and textures pre-rendered on SGI workstations. With its detailed graphics, fluid animation and high-quality
music, Donkey Kong Country rivaled the aesthetic quality of games that were being released on newer 32-bit CD-based
consoles. In the last 45 days of 1994, the game sold 6.1 million units, making it the fastest-selling video game
in history to that date. This game sent a message that early 32-bit systems had little to offer over the SNES, and
helped make way for the more advanced consoles on the horizon. In October 1997, Nintendo released a redesigned model
of the SNES (the SNS-101 model) in North America for US$99, which sometimes included the pack-in game Super Mario
World 2: Yoshi's Island. Like the earlier redesign of the NES (the NES-101 model), the new model was slimmer and
lighter than its predecessor, but it lacked S-Video and RGB output, and it was among the last major SNES-related
releases in the region. A similarly redesigned Super Famicom Jr. was released in Japan at around the same time. Internally,
a regional lockout chip (CIC) within the console and in each cartridge prevents PAL region games from being played
on Japanese or North American consoles and vice versa. The Japanese and North American machines have the same region
chip. This can be overcome through the use of adapters, typically by inserting the imported cartridge in one slot
and a cartridge with the correct region chip in a second slot. Alternatively, disconnecting one pin of the console's
lockout chip will prevent it from locking the console; hardware in later games can detect this situation, so it later
became common to install a switch to reconnect the lockout chip as needed. PAL consoles face another incompatibility
when playing out-of-region cartridges: the NTSC video standard specifies video at 60 Hz while PAL operates at 50
Hz, resulting in approximately 16.7% slower gameplay. Additionally, PAL's higher resolution results in letterboxing
of the output image. Some commercial PAL region releases were never retimed for 50 Hz and so exhibit this slowdown; they can therefore be played in NTSC systems without issue, while releases that were retimed face a 20% speedup if played in an NTSC console. To mostly correct
this issue, a switch can be added to place the SNES PPU into a 60 Hz mode supported by most newer PAL televisions.
Later games will detect this setting and refuse to run, requiring the switch to be thrown only after the check completes.
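The 16.7% and 20% figures above follow directly from the ratio of the two refresh rates; a minimal sketch of the arithmetic (illustrative only, not actual console code):

```python
# NTSC video runs at 60 Hz, PAL at 50 Hz.
NTSC_HZ = 60
PAL_HZ = 50

# An NTSC-timed game on a PAL console runs at 50/60 of its intended speed:
slowdown = 1 - PAL_HZ / NTSC_HZ   # ~0.167, i.e. approximately 16.7% slower

# A PAL-timed game on an NTSC console runs at 60/50 of its intended speed:
speedup = NTSC_HZ / PAL_HZ - 1    # 0.20, i.e. a 20% speedup

print(f"NTSC title on PAL hardware: {slowdown:.1%} slower")
print(f"PAL title on NTSC hardware: {speedup:.0%} faster")
```

Note the asymmetry: the same 10 Hz gap is 1/6 of 60 but 1/5 of 50, which is why the slowdown and speedup percentages differ.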
All versions of the SNES are predominantly gray, although the exact shade may differ. The original North American
version, designed by Nintendo of America industrial designer Lance Barr (who previously redesigned the Famicom to
become the NES), has a boxy design with purple sliding switches and a dark gray eject lever. The loading bay surface
is curved, both to invite interaction and to prevent food or drinks from being placed on the console and spilling
as had happened with the flat surfaced NES. The Japanese and European versions are more rounded, with darker gray
accents and buttons. The North American SNS-101 model and the Japanese Super Famicom Jr. (the SHVC-101 model), all
designed by Barr, are both smaller with a rounded contour; however, the SNS-101 buttons are purple where the Super
Famicom Jr. buttons are gray. The European and American versions of the SNES controllers have much longer cables
compared to the Japanese Super Famicom controllers. All versions incorporate a top-loading slot for game cartridges,
although the shape of the slot differs between regions to match the different shapes of the cartridges. The MULTI
OUT connector (later used on the Nintendo 64 and GameCube) can output composite video, S-Video and RGB signals, as
well as RF with an external RF modulator. Original versions additionally include a 28-pin expansion port under a
small cover on the bottom of the unit and a standard RF output with channel selection switch on the back; the redesigned
models output composite video only, requiring an external modulator for RF. The ABS plastic used in the casing of
some older SNES and Super Famicom consoles is particularly susceptible to oxidization on exposure to air, likely
due to an incorrect mixture of the stabilizing or flame retarding additives. This, along with the particularly light
color of the original plastic, causes affected consoles to quickly become yellow; if the sections of the casing came
from different batches of plastic, a "two-tone" effect results. The color can sometimes be restored with UV light
and a hydrogen peroxide solution. The console's cartridge medium is officially referred to as the Game Pak in most Western regions, and as the Cassette (カセット, Kasetto) in Japan and parts of Latin America. While the SNES can address
128 Mbit,[f] only 117.75 Mbit are actually available for cartridge use. A fairly normal mapping could easily address
up to 95 Mbit of ROM data (48 Mbit at FastROM speed) with 8 Mbit of battery-backed RAM. However, most available memory
access controllers only support mappings of up to 32 Mbit. The largest games released (Tales of Phantasia and Star
Ocean) contain 48 Mbit of ROM data, while the smallest games contain only 2 Mbit. The standard SNES controller adds
two additional face buttons (X and Y) to the design of the NES iteration, arranging the four in a diamond shape,
and introduces two shoulder buttons. It also features an ergonomic design by Lance Barr, later used for the NES-102
model controllers, also designed by Barr. The Japanese and PAL region versions incorporate the colors of the four
action buttons into the system's logo. The North American version's buttons are colored to match the redesigned console;
the X and Y buttons are lavender with concave faces, and the A and B buttons are purple with convex faces. Several
later consoles derive elements of their controller design from the SNES, including the PlayStation, Dreamcast, Xbox,
and Wii Classic Controller. Throughout the course of its life, a number of peripherals were released which added
to the functionality of the SNES. Many of these devices were modeled after earlier add-ons for the NES: the Super
Scope is a light gun functionally similar to the NES Zapper (though the Super Scope features wireless capabilities)
and the Super Advantage is an arcade-style joystick with adjustable turbo settings akin to the NES Advantage. Nintendo
also released the SNES Mouse in conjunction with its Mario Paint title. Hudson Soft, under license from Nintendo,
released the Super Multitap, a multiplayer adapter for use with its popular series of Bomberman games. Some of the
more unusual controllers include the BatterUP baseball bat, the Life Fitness Entertainment System (an exercise bike
controller with built-in monitoring software), and the TeeV Golf golf club. While Nintendo never released an adapter
for playing NES games on the SNES (though the instructions included a way to connect both consoles to the same TV
by either daisy chaining the RF switches or using AV outputs for one or both systems), the Super Game Boy adapter
cartridge allows games designed for Nintendo's portable Game Boy system to be played on the SNES. The Super Game
Boy touted several feature enhancements over the Game Boy, including palette substitution, custom screen borders,
and (for specially enhanced games) access to the SNES console. Japan also saw the release of the Super Game Boy 2,
which added a communication port to enable a second Game Boy to connect for multiplayer games. Japan saw the release
of the Satellaview, a modem which attached to the Super Famicom's expansion port and connected to the St.GIGA satellite
radio station. Users of the Satellaview could download gaming news and specially designed games, which were frequently
either remakes of or sequels to older Famicom titles, released in installments. Satellaview signals were broadcast
from April 23, 1995 through June 30, 2000. In the United States, the similar but relatively short-lived XBAND allowed
users to connect to a network via a dial-up modem to compete against other players around the country. During the
SNES's life, Nintendo contracted with two different companies to develop a CD-ROM-based peripheral for the console
to compete with Sega's CD-ROM-based add-on, the Mega-CD. Ultimately, deals with both Sony and Philips fell through (although a prototype console was produced by Sony), with Philips gaining the right to release a series of titles based on Nintendo
franchises for its CD-i multimedia player and Sony going on to develop its own console based on its initial dealings
with Nintendo (the PlayStation). Nintendo of America took the same stance against the distribution of SNES ROM image
files and the use of emulators as it did with the NES, insisting that they represented flagrant software piracy.
Proponents of SNES emulation cite the discontinued production of the SNES as conferring abandonware status, the right
of the owner of the respective game to make a personal backup via devices such as the Retrode, space shifting for
private use, the desire to develop homebrew games for the system, the frailty of SNES ROM cartridges and consoles,
and the lack of certain foreign imports. Emulation of the SNES is now available on handheld units, such as Android
devices, Apple's iPhone and iPad, Sony's PlayStation Portable (PSP), the Nintendo DS and Game Boy Advance, the Gizmondo,
the Dingoo and the GP2X by GamePark Holdings, as well as PDAs. While individual games have been included with emulators
on some GameCube discs, Nintendo's Virtual Console service for the Wii marks the introduction of officially sanctioned
general SNES emulation, though SNES9x GX, a port of SNES9x, has been made for the Wii. In 2007, GameTrailers named
the SNES as the second-best console of all time in their list of top ten consoles that "left their mark on the history
of gaming", citing its graphics, sound, and library of top-quality games. In 2015, they also named it the best Nintendo
console of all time, saying, "The list of games we love from this console completely annihilates any other roster
from the Big N." Technology columnist Don Reisinger proclaimed "The SNES is the greatest console of all time" in
January 2008, citing the quality of the games and the console's dramatic improvement over its predecessor; fellow
technology columnist Will Greenwald replied with a more nuanced view, giving the SNES top marks with his heart, the
NES with his head, and the PlayStation (for its controller) with his hands. GamingExcellence also gave the SNES first
place in 2008, declaring it "simply the most timeless system ever created" with many games that stand the test of
time and citing its innovation in controller design, graphics capabilities, and game storytelling. At the same time,
GameDaily rated it fifth of ten for its graphics, audio, controllers, and games. In 2009, IGN named the Super Nintendo
Entertainment System the fourth best video game console, complimenting its audio and "concentration of AAA titles".

Some scholars contest the idea of a Proto-Euphratean language or a single substrate language. It has been suggested, by them and others, that the Sumerian language was originally that of the hunter and fisher peoples, who lived in the
marshland and the Eastern Arabia littoral region, and were part of the Arabian bifacial culture. Reliable historical
records begin much later; there are none in Sumer of any kind that have been dated before Enmebaragesi (c. 26th century
BC). Professor Juris Zarins believes the Sumerians were settled along the coast of Eastern Arabia, today's Persian
Gulf region, before it flooded at the end of the Ice Age. Native Sumerian rule re-emerged for about a century in
the Neo-Sumerian Empire or Third Dynasty of Ur (Sumerian Renaissance) approximately 2100-2000 BC, but the Akkadian
language also remained in use. The Sumerian city of Eridu, on the coast of the Persian Gulf, is considered to have
been the world's first city, where three separate cultures may have fused — that of peasant Ubaidian farmers, living
in mud-brick huts and practicing irrigation; that of mobile nomadic Semitic pastoralists living in black tents and
following herds of sheep and goats; and that of fisher folk, living in reed huts in the marshlands, who may have
been the ancestors of the Sumerians. The term "Sumerian" is the common name given to the ancient non-Semitic inhabitants
of Mesopotamia, Sumer, by the Semitic Akkadians. The Sumerians referred to themselves as ùĝ saĝ gíg-ga (cuneiform:
𒌦 𒊕 𒈪 𒂵), phonetically /uŋ saŋ giga/, literally meaning "the black-headed people", and to their land as ki-en-gi(-r)
('place' + 'lords' + 'noble'), meaning "place of the noble lords". The Akkadian word Shumer may represent the geographical
name in dialect, but the phonological development leading to the Akkadian term šumerû is uncertain. Hebrew Shinar,
Egyptian Sngr, and Hittite Šanhar(a), all referring to southern Mesopotamia, could be western variants of Shumer.
The Sumerian city-states rose to power during the prehistoric Ubaid and Uruk periods. Sumerian written history reaches
back to the 27th century BC and before, but the historical record remains obscure until the Early Dynastic III period,
c. 23rd century BC, when a now-deciphered syllabary writing system was developed, which has allowed archaeologists
to read contemporary records and inscriptions. Classical Sumer ends with the rise of the Akkadian Empire in the 23rd
century BC. Following the Gutian period, there is a brief Sumerian Renaissance in the 21st century BC, cut short
in the 20th century BC by Semitic Amorite invasions. The Amorite "dynasty of Isin" persisted until c. 1700 BC, when
Mesopotamia was united under Babylonian rule. The Sumerians were eventually absorbed into the Akkadian (Assyro-Babylonian)
population. The Ubaid period is marked by a distinctive style of fine quality painted pottery which spread throughout
Mesopotamia and the Persian Gulf. During this time, the first settlement in southern Mesopotamia was established
at Eridu (Cuneiform: NUN.KI), c. 5300 BC, by farmers who brought with them the Hadji Muhammed culture, which first
pioneered irrigation agriculture. It appears that this culture was derived from the Samarran culture from northern
Mesopotamia. It is not known whether or not these were the actual Sumerians who are identified with the later Uruk
culture. Eridu remained an important religious center when it was gradually surpassed in size by the nearby city
of Uruk. The story of the passing of the me (gifts of civilization) to Inanna, goddess of Uruk and of love and war,
by Enki, god of wisdom and chief god of Eridu, may reflect this shift in hegemony. By the time of the Uruk period
(c. 4100–2900 BC calibrated), the volume of trade goods transported along the canals and rivers of southern Mesopotamia
facilitated the rise of many large, stratified, temple-centered cities (with populations of over 10,000 people) where
centralized administrations employed specialized workers. It is fairly certain that it was during the Uruk period
that Sumerian cities began to make use of slave labor captured from the hill country, and there is ample evidence
for captured slaves as workers in the earliest texts. Artifacts, and even colonies of this Uruk civilization have
been found over a wide area—from the Taurus Mountains in Turkey, to the Mediterranean Sea in the west, and as far
east as central Iran. Sumerian cities during the Uruk period were probably theocratic and were most likely headed
by a priest-king (ensi), assisted by a council of elders, including both men and women. It is quite possible that
the later Sumerian pantheon was modeled upon this political structure. There was little evidence of organized warfare
or professional soldiers during the Uruk period, and towns were generally unwalled. During this period Uruk became
the most urbanized city in the world, surpassing for the first time 50,000 inhabitants. The earliest dynastic king
on the Sumerian king list whose name is known from any other legendary source is Etana, 13th king of the first dynasty
of Kish. The earliest king authenticated through archaeological evidence is Enmebaragesi of Kish (c. 26th century
BC), whose name is also mentioned in the Gilgamesh epic—leading to the suggestion that Gilgamesh himself might have
been a historical king of Uruk. As the Epic of Gilgamesh shows, this period was associated with increased war. Cities
became walled, and increased in size as undefended villages in southern Mesopotamia disappeared. (Gilgamesh is credited
with having built the walls of Uruk). Although short-lived, one of the first empires known to history was that of
Eannatum of Lagash, who annexed practically all of Sumer, including Kish, Uruk, Ur, and Larsa, and reduced to tribute
the city-state of Umma, arch-rival of Lagash. In addition, his realm extended to parts of Elam and along the Persian
Gulf. He seems to have used terror as a matter of policy. Eannatum's Stele of the Vultures depicts vultures pecking
at the severed heads and other body parts of his enemies. His empire collapsed shortly after his death. The Semitic
Akkadian language is first attested in proper names of the kings of Kish c. 2800 BC, preserved in later king lists.
There are texts written entirely in Old Akkadian dating from c. 2500 BC. Use of Old Akkadian was at its peak during
the rule of Sargon the Great (c. 2270–2215 BC), but even then most administrative tablets continued to be written
in Sumerian, the language used by the scribes. Gelb and Westenholz differentiate three stages of Old Akkadian: that
of the pre-Sargonic era, that of the Akkadian empire, and that of the "Neo-Sumerian Renaissance" that followed it.
Akkadian and Sumerian coexisted as vernacular languages for about one thousand years, but by around 1800 BC, Sumerian
was becoming more of a literary language familiar mainly only to scholars and scribes. Thorkild Jacobsen has argued
that there is little break in historical continuity between the pre- and post-Sargon periods, and that too much emphasis
has been placed on the perception of a "Semitic vs. Sumerian" conflict. However, it is certain that Akkadian was
also briefly imposed on neighboring parts of Elam that were previously conquered by Sargon. Later, the 3rd dynasty
of Ur under Ur-Nammu and Shulgi, whose power extended as far as southern Assyria, was the last great "Sumerian renaissance",
but already the region was becoming more Semitic than Sumerian, with the rise in power of the Akkadian speaking Semites
in Assyria and elsewhere, and the influx of waves of Semitic Martu (Amorites) who were to found several competing
local powers including Isin, Larsa, Eshnunna and eventually Babylon. The last of these eventually came to dominate
the south of Mesopotamia as the Babylonian Empire, just as the Old Assyrian Empire had already done in the north
from the late 21st century BC. The Sumerian language continued as a sacerdotal language taught in schools in Babylonia
and Assyria, much as Latin was used in the Medieval period, for as long as cuneiform was utilized. This period is
generally taken to coincide with a major shift in population from southern Mesopotamia toward the north. Ecologically,
the agricultural productivity of the Sumerian lands was being compromised as a result of rising salinity. Soil salinity
in this region had long been recognized as a major problem. Poorly drained irrigated soils, in an arid climate with
high levels of evaporation, led to the buildup of dissolved salts in the soil, eventually reducing agricultural yields
severely. During the Akkadian and Ur III phases, there was a shift from the cultivation of wheat to the more salt-tolerant
barley, but this was insufficient, and during the period from 2100 BC to 1700 BC, it is estimated that the population
in this area declined by nearly three fifths. This greatly upset the balance of power within the region, weakening
the areas where Sumerian was spoken, and comparatively strengthening those where Akkadian was the major language.
Henceforth Sumerian would remain only a literary and liturgical language, similar to the position occupied by Latin
in medieval Europe. The Sumerians were a non-Semitic caucasoid people, and spoke a language isolate; a number of
linguists believed they could detect a substrate language beneath Sumerian, because names of some of Sumer's major
cities are not Sumerian, revealing influences of earlier inhabitants. However, the archaeological record shows clear
uninterrupted cultural continuity from the time of the early Ubaid period (5300 – 4700 BC C-14) settlements in southern
Mesopotamia. The Sumerian people who settled here farmed the lands in this region that were made fertile by silt
deposited by the Tigris and the Euphrates rivers. It is speculated by some archaeologists that Sumerian speakers
were farmers who moved down from the north, after perfecting irrigation agriculture there. The Ubaid pottery of southern
Mesopotamia has been connected via Choga Mami transitional ware to the pottery of the Samarra period culture (c.
5700 – 4900 BC C-14) in the north, who were the first to practice a primitive form of irrigation agriculture along
the middle Tigris River and its tributaries. The connection is most clearly seen at Tell Awayli (Oueilli, Oueili)
near Larsa, excavated by the French in the 1980s, where eight levels yielded pre-Ubaid pottery resembling Samarran
ware. According to this theory, farming peoples spread down into southern Mesopotamia because they had developed
a temple-centered social organization for mobilizing labor and technology for water control, enabling them to survive
and prosper in a difficult environment.[citation needed] Though women were protected by late Sumerian law and were
able to achieve a higher status in Sumer than in other contemporary civilizations, the culture was male-dominated.
The Code of Ur-Nammu, the oldest such codification yet discovered, dating to the Ur-III "Sumerian Renaissance", reveals
a glimpse at societal structure in late Sumerian law. Beneath the lu-gal ("great man" or king), all members of society
belonged to one of two basic strata: the "lu" or free person, and the slave (male, arad; female, geme). The son of
a lu was called a dumu-nita until he married. A woman (munus) went from being a daughter (dumu-mi), to a wife (dam),
then if she outlived her husband, a widow (numasu) and she could then remarry. The most important archaeological
discoveries in Sumer are a large number of tablets written in cuneiform. Sumerian writing, though proven not to be the oldest example of writing on earth, is considered a great milestone in man's ability not only to create historical records but also to create pieces of literature, both in the form of poetic epics and stories and in prayers and laws. Although pictures — that is, hieroglyphs — were first used, cuneiform and
then ideograms (where symbols were made to represent ideas) soon followed. Triangular or wedge-shaped reeds were
used to write on moist clay. A large body of hundreds of thousands of texts in the Sumerian language have survived,
such as personal or business letters, receipts, lexical lists, laws, hymns, prayers, stories, daily records, and
even libraries full of clay tablets. Monumental inscriptions and texts on different objects like statues or bricks
are also very common. Many texts survive in multiple copies because they were repeatedly transcribed by scribes-in-training.
Sumerian continued to be the language of religion and law in Mesopotamia long after Semitic speakers had become dominant.
The Sumerian language is generally regarded as a language isolate in linguistics because it belongs to no known language
family; Akkadian, by contrast, belongs to the Semitic branch of the Afroasiatic languages. There have been many failed
attempts to connect Sumerian to other language groups. It is an agglutinative language; in other words, morphemes
("units of meaning") are strung together to create words, unlike analytic languages, in which separate words are combined to form sentences. Some authors have proposed that there may be evidence of a substratum or adstratum
language for geographic features and various crafts and agricultural activities, called variously Proto-Euphratean
or Proto Tigrean, but this is disputed by others. Sumerian religion seems to have been founded upon two separate
cosmogonic myths. The first saw creation as the result of a series of hieroi gamoi or sacred marriages, involving the reconciliation of opposites, postulated as a coming together of male and female divine beings: the gods. This
continued to influence the whole Mesopotamian mythos. Thus in the Enuma Elish the creation was seen as the union
of fresh and salt water; as male Abzu, and female Tiamat. The product of that union, Lahmu and Lahamu, "the muddy ones",
were titles given to the gate keepers of the E-Abzu temple of Enki, in Eridu, the first Sumerian city. Describing
the way that muddy islands emerge from the confluence of fresh and salty water at the mouth of the Euphrates, where
the river deposited its load of silt, a second hieros gamos supposedly created Anshar and Kishar, the "sky-pivot"
or axle, and the "earth pivot", parents in turn of Anu (the sky) and Ki (the earth). Another important Sumerian hieros
gamos was that between Ki, here known as Ninhursag or "Lady Sacred Mountain", and Enki of Eridu, the god of fresh
water which brought forth greenery and pasture. These deities formed a core pantheon; there were additionally hundreds
of minor ones. Sumerian gods could thus have associations with different cities, and their religious importance often
waxed and waned with those cities' political power. The gods were said to have created human beings from clay for
the purpose of serving them. The temples organized the mass labour projects needed for irrigation agriculture. Citizens
had a labor duty to the temple, though they could avoid it by a payment of silver. Ziggurats (Sumerian temples) each
had an individual name and consisted of a forecourt, with a central pond for purification. The temple itself had
a central nave with aisles along either side. Flanking the aisles would be rooms for the priests. At one end would
stand the podium and a mudbrick table for animal and vegetable sacrifices. Granaries and storehouses were usually
located near the temples. After a time the Sumerians began to place the temples on top of multi-layered square constructions
built as a series of rising terraces, giving rise to the Ziggurat style. It was believed that when people died, they
would be confined to a gloomy world of Ereshkigal, whose realm was guarded by gateways with various monsters designed
to prevent people entering or leaving. The dead were buried outside the city walls in graveyards where a small mound
covered the corpse, along with offerings to monsters and a small amount of food. Those who could afford it sought
burial at Dilmun. Human sacrifice was found in the death pits at the Ur royal cemetery where Queen Puabi was accompanied
in death by her servants. It is also said that the Sumerians invented the first oboe-like instrument, and used it
at royal funerals. In the early Sumerian Uruk period, the primitive pictograms suggest that sheep, goats, cattle,
and pigs were domesticated. They used oxen as their primary beasts of burden and donkeys or equids as their primary
transport animal and "woollen clothing as well as rugs were made from the wool or hair of the animals. ... By the
side of the house was an enclosed garden planted with trees and other plants; wheat and probably other cereals were
sown in the fields, and the shaduf was already employed for the purpose of irrigation. Plants were also grown in
pots or vases." The Sumerians were one of the first known beer drinking societies. Cereals were plentiful and were
the key ingredient in their early brew. They brewed multiple kinds of beer consisting of wheat, barley, and mixed
grain beers. Beer brewing was very important to the Sumerians. It was referenced in the Epic of Gilgamesh when Enkidu
was introduced to the food and beer of Gilgamesh's people: "Drink the beer, as is the custom of the land... He drank
the beer-seven jugs! and became expansive and sang with joy!" As is known from the "Sumerian Farmer's Almanac", after
the flood season and after the Spring Equinox and the Akitu or New Year Festival, using the canals, farmers would
flood their fields and then drain the water. Next they made oxen stomp the ground and kill weeds. They then dragged
the fields with pickaxes. After drying, they plowed, harrowed, and raked the ground three times, and pulverized it
with a mattock, before planting seed. Unfortunately the high evaporation rate resulted in a gradual increase in the
salinity of the fields. By the Ur III period, farmers had switched from wheat to the more salt-tolerant barley as
their principal crop. According to Archibald Sayce, the primitive pictograms of the early Sumerian (i.e. Uruk) era
suggest that "Stone was scarce, but was already cut into blocks and seals. Brick was the ordinary building material,
and with it cities, forts, temples and houses were constructed. The city was provided with towers and stood on an
artificial platform; the house also had a tower-like appearance. It was provided with a door which turned on a hinge,
and could be opened with a sort of key; the city gate was on a larger scale, and seems to have been double. The foundation
stones — or rather bricks — of a house were consecrated by certain objects that were deposited under them." The most
impressive and famous of Sumerian buildings are the ziggurats, large layered platforms which supported temples. Sumerian
cylinder seals also depict houses built from reeds not unlike those built by the Marsh Arabs of Southern Iraq until
as recently as 400 CE. The Sumerians also developed the arch, which enabled them to develop a strong type of dome.
They built this by constructing and linking several arches. Sumerian temples and palaces made use of more advanced
materials and techniques,[citation needed] such as buttresses, recesses, half columns, and clay nails. The Sumerians
developed a complex system of metrology c. 4000 BC. This advanced metrology resulted in the creation of arithmetic,
geometry, and algebra. From c. 2600 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt
with geometrical exercises and division problems. The earliest traces of the Babylonian numerals also date back to
this period. The period c. 2700 – 2300 BC saw the first appearance of the abacus, and a table of successive columns
which delimited the successive orders of magnitude of their sexagesimal number system. The Sumerians were the first
to use a place-value numeral system. There is also anecdotal evidence that the Sumerians may have used a type of slide
rule in astronomical calculations. They were the first to find the area of a triangle and the volume of a cube. Commercial
credit and agricultural consumer loans were the main types of loans. The trade credit was usually extended by temples
in order to finance trade expeditions and was denominated in silver. The interest rate was set at 1/60 a month (one
shekel per mina) some time before 2000 BC and it remained at that level for about two thousand years. Rural loans
commonly arose as a result of unpaid obligations due to an institution (such as a temple), in this case the arrears
were considered to be lent to the debtor. They were denominated in barley or other crops and the interest rate was
typically much higher than for commercial loans and could amount to 1/3 to 1/2 of the loan principal. Periodically
"clean slate" decrees were signed by rulers which cancelled all the rural (but not commercial) debt and allowed bondservants
to return to their homes. Rulers customarily issued such decrees at the beginning of the first full year of their reign, but they could also be proclaimed at times of military conflict or crop failure. The first known ones were made by Enmetena
and Urukagina of Lagash in 2400-2350 BC. According to Hudson, the purpose of these decrees was to prevent debts from mounting to a degree that they threatened the fighting force, which could happen if peasants lost their subsistence land or became bondservants through inability to repay their debts. The almost constant wars among the Sumerian city-states for
2000 years helped to develop the military technology and techniques of Sumer to a high level. The first war recorded in any detail, fought between Lagash and Umma in c. 2525 BC, is depicted on a stele called the Stele of the Vultures. It shows the
king of Lagash leading a Sumerian army consisting mostly of infantry. The infantrymen carried spears, wore copper
helmets, and carried rectangular shields. The spearmen are shown arranged in what resembles the phalanx formation,
which requires training and discipline; this implies that the Sumerians may have made use of professional soldiers.
Evidence of wheeled vehicles appeared in the mid 4th millennium BC, near-simultaneously in Mesopotamia, the Northern
Caucasus (Maykop culture) and Central Europe. The wheel initially took the form of the potter's wheel. The new concept
quickly led to wheeled vehicles and mill wheels. The Sumerians' cuneiform writing system is the oldest (or second
oldest after the Egyptian hieroglyphs) which has been deciphered (the status of even older inscriptions such as the
Jiahu symbols and Tartaria tablets is controversial). The Sumerians were among the first astronomers, mapping the
stars into sets of constellations, many of which survived in the zodiac and were also recognized by the ancient Greeks.
They were also aware of the five planets that are easily visible to the naked eye. They invented and developed arithmetic
by using several different number systems including a mixed radix system with an alternating base 10 and base 6.
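The place-value idea behind such a base-60 system can be illustrated in modern terms. The sketch below (function names and representation are mine, not Sumerian notation) decomposes an integer into base-60 digits and recombines them, the same place values that survive in modern hours, minutes, and seconds:

```python
def to_sexagesimal(n):
    """Decompose a non-negative integer into base-60 digits, most significant first."""
    digits = []
    while True:
        n, r = divmod(n, 60)  # peel off the least significant base-60 digit
        digits.append(r)
        if n == 0:
            break
    return digits[::-1]

def from_sexagesimal(digits):
    """Recombine base-60 digits into an integer using place value."""
    value = 0
    for d in digits:
        value = value * 60 + d  # each position is worth 60x the one to its right
    return value

# 3661 seconds is 1 hour, 1 minute, 1 second -- three base-60 places:
print(to_sexagesimal(3661))        # [1, 1, 1]
print(from_sexagesimal([1, 1, 1]))  # 3661
```

The key property of a place-value system, as opposed to additive notations, is that the same digit symbol carries a different magnitude depending on its position.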
This sexagesimal system became the standard number system in Sumer and Babylonia. They may have invented military
formations and introduced the basic divisions between infantry, cavalry, and archers. They developed the first known
codified legal and administrative systems, complete with courts, jails, and government records. The first true city-states
arose in Sumer, roughly contemporaneously with similar entities in what are now Syria and Lebanon. Several centuries
after the invention of cuneiform, the use of writing expanded beyond debt/payment certificates and inventory lists
to be applied for the first time, about 2600 BC, to messages and mail delivery, history, legend, mathematics, astronomical
records, and other pursuits. Conjointly with the spread of writing, the first formal schools were established, usually
under the auspices of a city-state's primary temple.
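The commercial interest rate quoted earlier (1/60 a month, one shekel per mina) can be checked with simple-interest arithmetic, given that a mina was divided into 60 shekels. This is an illustrative sketch under that assumption, not a reconstruction of Sumerian accounting practice:

```python
SHEKELS_PER_MINA = 60  # a mina was divided into 60 shekels

def monthly_interest_shekels(principal_minas, months, rate_per_month=1/60):
    """Simple interest, in shekels, on a principal given in minas."""
    return principal_minas * SHEKELS_PER_MINA * rate_per_month * months

# 1/60 per month on one mina is exactly one shekel per month:
print(monthly_interest_shekels(1, 1))   # 1.0
# Over twelve months that is a 20% simple annual rate:
print(monthly_interest_shekels(1, 12) / SHEKELS_PER_MINA)  # 0.2
```

The rate of one shekel per mina per month thus works out to 20% per year, which is why it is often cited as the standard Mesopotamian commercial rate.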
Tuvalu (i/tuːˈvɑːluː/ too-VAH-loo or /ˈtuːvəluː/ TOO-və-loo), formerly known as the Ellice Islands, is a Polynesian island
nation located in the Pacific Ocean, midway between Hawaii and Australia. It comprises three reef islands and six
true atolls spread out between latitudes of 5° and 10° south and longitudes of 176° and 180°, west of the International
Date Line. Its nearest neighbours are Kiribati, Nauru, Samoa and Fiji. Tuvalu has a population of 10,640 (2012 census).
The total land area of the islands of Tuvalu is 26 square kilometres (10 sq mi). In 1568, Spanish navigator Álvaro
de Mendaña was the first European to sail through the archipelago, sighting the island of Nui during his expedition
in search of Terra Australis. In 1819 the island of Funafuti was named Ellice's Island; the name Ellice was applied
to all nine islands after the work of English hydrographer Alexander George Findlay. The islands came under Britain's
sphere of influence in the late 19th century, when each of the Ellice Islands was declared a British Protectorate
by Captain Gibson of HMS Curacoa between 9 and 16 October 1892. The Ellice Islands were administered as British protectorate
by a Resident Commissioner from 1892 to 1916 as part of the British Western Pacific Territories (BWPT), and then
as part of the Gilbert and Ellice Islands colony from 1916 to 1974. The origins of the people of Tuvalu are addressed
in the theories regarding migration into the Pacific that began about 3000 years ago. During pre-European-contact
times there was frequent canoe voyaging between the nearer islands including Samoa and Tonga. Eight of the nine islands
of Tuvalu were inhabited; thus the name, Tuvalu, means "eight standing together" in Tuvaluan (compare to *walo meaning
"eight" in Proto-Austronesian). Possible evidence of fire in the Caves of Nanumanga may indicate human occupation
for thousands of years. An important creation myth of the islands of Tuvalu is the story of the te Pusi mo te Ali
(the Eel and the Flounder) who created the islands of Tuvalu; te Ali (the flounder) is believed to be the origin
of the flat atolls of Tuvalu and te Pusi (the Eel) is the model for the coconut palms that are important in
the lives of Tuvaluans. The stories as to the ancestors of the Tuvaluans vary from island to island. On Niutao, Funafuti
and Vaitupu the founding ancestor is described as being from Samoa; whereas on Nanumea the founding ancestor is described
as being from Tonga. Captain John Byron passed through the islands of Tuvalu in 1764 during his circumnavigation
of the globe as captain of the Dolphin (1751). Byron charted the atolls as Lagoon Islands. Keith S. Chambers and
Doug Munro (1980) identified Niutao as the island that Francisco Mourelle de la Rúa sailed past on 5 May 1781, thus
solving what Europeans had called The Mystery of Gran Cocal. Mourelle's map and journal named the island El Gran
Cocal ('The Great Coconut Plantation'); however, the latitude and longitude was uncertain. Longitude could only be
reckoned crudely as accurate chronometers were unavailable until the late 18th century. The next European to visit
was Arent Schuyler de Peyster, of New York, captain of the armed brigantine or privateer Rebecca, sailing under British
colours, which passed through the southern Tuvaluan waters in May 1819; de Peyster sighted Nukufetau and Funafuti,
which he named Ellice's Island after the English politician Edward Ellice, the Member of Parliament for Coventry
and the owner of the Rebecca's cargo. The name Ellice was applied to all nine islands after the work of English hydrographer
Alexander George Findlay. For less than a year between 1862 and 1863, Peruvian ships, engaged in what came to be called
the "blackbirding" trade, combed the smaller islands of Polynesia from Easter Island in the eastern Pacific to Tuvalu
and the southern atolls of the Gilbert Islands (now Kiribati), seeking recruits to fill the extreme labour shortage
in Peru. While some islanders were voluntary recruits the "blackbirders" were notorious for enticing islanders on
to ships with tricks, such as pretending to be Christian missionaries, as well as kidnapping islanders at gun point.
The Rev. A. W. Murray, the earliest European missionary in Tuvalu, reported that in 1863 about 170 people were taken from Funafuti and about 250 from Nukulaelae; fewer than 100 of the 300 people recorded on Nukulaelae in 1861 remained. Christianity came to Tuvalu in 1861 when Elekana, a deacon of a Congregational church in
Manihiki, Cook Islands became caught in a storm and drifted for 8 weeks before landing at Nukulaelae on 10 May 1861.
Elekana began proselytising Christianity. He was trained at Malua Theological College, a London Missionary Society
(LMS) school in Samoa, before beginning his work in establishing the Church of Tuvalu. In 1865 the Rev. A. W. Murray
of the LMS – a Protestant congregationalist missionary society – arrived as the first European missionary where he
too proselytised among the inhabitants of Tuvalu. By 1878 Protestantism was well established with preachers on each
island. In the later 19th and early 20th centuries the ministers of what became the Church of Tuvalu (Te Ekalesia
Kelisiano Tuvalu) were predominantly Samoans, who influenced the development of the Tuvaluan language and the music
of Tuvalu. Trading companies became active in Tuvalu in the mid-19th century; the trading companies engaged palagi
traders who lived on the islands. John (also known as Jack) O'Brien was the first European to settle in Tuvalu; he became a trader on Funafuti in the 1850s. He married Salai, the daughter of the paramount chief of Funafuti. Louis
Becke, who later found success as a writer, was a trader on Nanumanga from April 1880 until the trading-station was
destroyed later that year in a cyclone. He then became a trader on Nukufetau. In 1892 Captain Davis of the HMS Royalist
reported on trading activities and traders on each of the islands visited. Captain Davis identified the following
traders in the Ellice Group: Edmund Duffy (Nanumea); Jack Buckland (Niutao); Harry Nitz (Vaitupu); John (also known
as Jack) O'Brien (Funafuti); Alfred Restieaux and Emile Fenisot (Nukufetau); and Martin Kleis (Nui). During this
time, the greatest number of palagi traders lived on the atolls, acting as agents for the trading companies. Some
islands would have competing traders while drier islands might only have a single trader. In the later 1890s and into the first decade of the 20th century, structural changes occurred in the operation of the Pacific trading companies;
they moved from a practice of having traders resident on each island to instead becoming a business operation where
the supercargo (the cargo manager of a trading ship) would deal directly with the islanders when a ship visited an
island. From 1900 the numbers of palagi traders in Tuvalu declined and the last of the palagi traders were Fred Whibley
on Niutao, Alfred Restieaux on Nukufetau, and Martin Kleis on Nui. By 1909 there were no more resident palagi traders
representing the trading companies, although both Whibley and Restieaux remained in the islands until their deaths.
In 1890 Robert Louis Stevenson, his wife Fanny Vandegrift Stevenson and her son Lloyd Osbourne sailed on the Janet
Nicoll, a trading steamer owned by Henderson and Macfarlane of Auckland, New Zealand, which operated between Sydney
and Auckland and into the central Pacific. The Janet Nicoll visited three of the Ellice Islands; while Fanny records
that they made landfall at Funafuti, Niutao and Nanumea, Jane Resture suggests that it was more likely they landed
at Nukufetau rather than Funafuti. An account of this voyage was written by Fanny Stevenson and published under the
title The Cruise of the Janet Nichol, together with photographs taken by Robert Louis Stevenson and Lloyd Osbourne.
The boreholes on Funafuti, at the site now called Darwin's Drill, are the result of drilling conducted by the Royal
Society of London for the purpose of investigating the formation of coral reefs to determine whether traces of shallow
water organisms could be found at depth in the coral of Pacific atolls. This investigation followed the work on The
Structure and Distribution of Coral Reefs conducted by Charles Darwin in the Pacific. Drilling occurred in 1896,
1897 and 1898. Professor Edgeworth David of the University of Sydney was a member of the 1896 "Funafuti Coral Reef
Boring Expedition of the Royal Society", under Professor William Sollas, and led the expedition in 1897. Photographers
on these trips recorded people, communities, and scenes at Funafuti. Charles Hedley, a naturalist at the Australian
Museum, accompanied the 1896 expedition and during his stay on Funafuti collected invertebrate and ethnological objects.
The descriptions of these were published in Memoir III of the Australian Museum Sydney between 1896 and 1900. Hedley
also wrote the General Account of the Atoll of Funafuti, The Ethnology of Funafuti, and The Mollusca of Funafuti.
Edgar Waite was also part of the 1896 expedition and published an account of The mammals, reptiles, and fishes of
Funafuti. William Rainbow described the spiders and insects collected at Funafuti in The insect fauna of Funafuti.
During the Pacific War Funafuti was used as a base to prepare for the subsequent seaborne attacks on the Gilbert Islands
(Kiribati) that were occupied by Japanese forces. The United States Marine Corps landed on Funafuti on 2 October
1942 and on Nanumea and Nukufetau in August 1943. The Japanese had already occupied Tarawa and other islands in what
is now Kiribati, but were delayed by the losses at the Battle of the Coral Sea. The islanders assisted the American
forces to build airfields on Funafuti, Nanumea and Nukufetau and to unload supplies from ships. On Funafuti the islanders
shifted to the smaller islets so as to allow the American forces to build the airfield and to build naval bases and
port facilities on Fongafale. A Naval Construction Battalion (Seabees) built a seaplane ramp on the lagoon side
of Fongafale islet for seaplane operations by both short and long range seaplanes and a compacted coral runway was
also constructed on Fongafale, with runways also constructed to create Nanumea Airfield and Nukufetau Airfield. USN
Patrol Torpedo Boats (PTs) were based at Funafuti from 2 November 1942 to 11 May 1944. In 1974 ministerial government
was introduced to the Gilbert and Ellice Islands Colony through a change to the Constitution. In that year a general
election was held; and a referendum was held in December 1974 to determine whether the Gilbert Islands and Ellice
Islands should each have their own administration. As a consequence of the referendum, separation occurred in two
stages. The Tuvaluan Order 1975, which took effect on 1 October 1975, recognised Tuvalu as a separate British dependency
with its own government. The second stage occurred on 1 January 1976 when separate administrations were created out
of the civil service of the Gilbert and Ellice Islands Colony. From 1974 (the creation of the British colony of Tuvalu)
until independence, the legislative body of Tuvalu was called the House of the Assembly or Fale I Fono. Following
independence in October 1978 the House of the Assembly was renamed the Parliament of Tuvalu or Palamene o Tuvalu.
The unicameral Parliament has 15 members with elections held every four years. The members of parliament select the
Prime Minister (who is the head of government) and the Speaker of Parliament. The ministers that form the Cabinet
are appointed by the Governor General on the advice of the Prime Minister. There are eight Island Courts and Lands
Courts; appeals in relation to land disputes are made to the Lands Courts Appeal Panel. Appeals from the Island Courts
and the Lands Courts Appeal Panel are made to the Magistrates Court, which has jurisdiction to hear civil cases involving
up to $T10,000. The superior court is the High Court of Tuvalu as it has unlimited original jurisdiction to determine
the Law of Tuvalu and to hear appeals from the lower courts. Sir Gordon Ward is the current Chief Justice of Tuvalu.
Rulings of the High Court can be appealed to the Court of Appeal of Tuvalu. From the Court of Appeal there is a right
of appeal to Her Majesty in Council, i.e., the Privy Council in London. Tuvalu participates in the work of Secretariat
of the Pacific Community, or SPC (sometimes Pacific Community) and is a member of the Pacific Islands Forum, the
Commonwealth of Nations and the United Nations. Tuvalu has maintained a mission at the United Nations in New York
City since 2000. Tuvalu is a member of the World Bank and the Asian Development Bank. On 18 February 2016 Tuvalu
signed the Pacific Islands Development Forum Charter and formally joined the Pacific Islands Development Forum (PIDF).
A major international priority for Tuvalu in the UN, at the 2002 Earth Summit in Johannesburg, South Africa and in
other international fora, is promoting concern about global warming and the possible sea level rising. Tuvalu advocates
ratification and implementation of the Kyoto Protocol. In December 2009 the islands stalled talks on climate change
at the United Nations Climate Change Conference in Copenhagen, fearing some other developing countries were not committing
fully to binding deals on a reduction in carbon emissions. Their chief negotiator stated, "Tuvalu is one of the most
vulnerable countries in the world to climate change and our future rests on the outcome of this meeting." Tuvalu
participates in the Alliance of Small Island States (AOSIS), which is a coalition of small island and low-lying coastal
countries that have concerns about their vulnerability to the adverse effects of global climate change. Under the
Majuro Declaration, which was signed on 5 September 2013, Tuvalu has committed to generating 100% of its electrical power from renewable energy (between 2013 and 2020), which is proposed to be achieved using solar PV (95% of demand) and biodiesel (5% of demand). The feasibility of wind power generation will be considered. Tuvalu participates in
the operations of the Pacific Islands Applied Geoscience Commission (SOPAC) and the Secretariat of the Pacific Regional
Environment Programme (SPREP). Tuvalu participates in the operations of the Pacific Island Forum Fisheries Agency
(FFA) and the Western and Central Pacific Fisheries Commission (WCPFC). The Tuvaluan government, the US government,
and the governments of other Pacific islands, are parties to the South Pacific Tuna Treaty (SPTT), which entered
into force in 1988. Tuvalu is also a member of the Nauru Agreement which addresses the management of tuna purse seine
fishing in the tropical western Pacific. In May 2013 representatives from the United States and the Pacific Islands
countries agreed to sign interim arrangement documents to extend the Multilateral Fisheries Treaty (which encompasses
the South Pacific Tuna Treaty) to confirm access to the fisheries in the Western and Central Pacific for US tuna
boats for 18 months. Tuvalu and the other members of the Pacific Island Forum Fisheries Agency (FFA) and the United
States have settled a tuna fishing deal for 2015; a longer term deal will be negotiated. The treaty is an extension
of the Nauru Agreement and provides for US flagged purse seine vessels to fish 8,300 days in the region in return
for a payment of US$90 million made up of tuna fishing industry and US Government contributions. In 2015 Tuvalu refused to sell fishing days to certain nations and fleets that have blocked Tuvaluan initiatives to develop and
sustain their own fishery. In July 2013 Tuvalu signed the Memorandum of Understanding (MOU) to establish the Pacific Regional Trade and Development Facility, which originated in 2006 in the context of negotiations for an Economic Partnership Agreement (EPA) between Pacific ACP States and the European Union. The rationale for creating the Facility was to improve the delivery of aid to Pacific island countries in support of their Aid-for-Trade (AfT) requirements. The Pacific ACP States are the countries in the Pacific that are signatories to the Cotonou Agreement
with the European Union. Each island has its own high-chief, or ulu-aliki, and several sub-chiefs (alikis). The community
council is the Falekaupule (the traditional assembly of elders) or te sina o fenua (literally: "grey-hairs of the
land"). In the past, another caste, the priests (tofuga), were also amongst the decision-makers. The ulu-aliki and
aliki exercise informal authority at the local level. Ulu-aliki are always chosen based on ancestry. Under the Falekaupule
Act (1997), the powers and functions of the Falekaupule are now shared with the pule o kaupule (elected village presidents;
one on each atoll). In 2014 attention was drawn to an appeal to the New Zealand Immigration and Protection Tribunal
against the deportation of a Tuvaluan family on the basis that they were "climate change refugees", who would suffer
hardship resulting from the environmental degradation of Tuvalu. However the subsequent grant of residence permits
to the family was made on grounds unrelated to the refugee claim. The family was successful in their appeal because,
under the relevant immigration legislation, there were "exceptional circumstances of a humanitarian nature" that
justified the grant of resident permits as the family was integrated into New Zealand society with a sizeable extended
family which had effectively relocated to New Zealand. Indeed, in 2013 a claim of a Kiribati man of being a "climate
change refugee" under the Convention relating to the Status of Refugees (1951) was determined by the New Zealand
High Court to be untenable as there was no persecution or serious harm related to any of the five stipulated Refugee
Convention grounds. Permanent migration to Australia and New Zealand, such as for family reunification, requires
compliance with the immigration legislation of those countries. New Zealand has an annual quota of 75 Tuvaluans granted
work permits under the Pacific Access Category, as announced in 2001. The applicants register for the Pacific Access
Category (PAC) ballots; the primary criterion is that the principal applicant must have a job offer from a New Zealand
employer. Tuvaluans also have access to seasonal employment in the horticulture and viticulture industries in New
Zealand under the Recognised Seasonal Employer (RSE) Work Policy introduced in 2007 allowing for employment of up
to 5,000 workers from Tuvalu and other Pacific islands. Tuvaluans can participate in the Australian Pacific Seasonal
Worker Program, which allows Pacific Islanders to obtain seasonal employment in the Australian agriculture industry,
in particular cotton and cane operations; fishing industry, in particular aquaculture; and with accommodation providers
in the tourism industry. The Tuvaluan language and English are the national languages of Tuvalu. Tuvaluan is of the
Ellicean group of Polynesian languages, distantly related to all other Polynesian languages such as Hawaiian, Māori,
Tahitian, Samoan and Tongan. It is most closely related to the languages spoken on the Polynesian outliers in Micronesia
and northern and central Melanesia. The language has borrowed from the Samoan language, as a consequence of Christian
missionaries in the late 19th and early 20th centuries being predominantly Samoan. The Princess Margaret Hospital
on Funafuti is the only hospital in Tuvalu. The Tuvaluan medical staff at PMH in 2011 comprised the Director of Health
& Surgeon, the Chief Medical Officer Public Health, an anaesthetist, a paediatric medical officer and an obstetrics
and gynaecology medical officer. Allied health staff include two radiographers, two pharmacists, three laboratory
technicians, two dieticians and 13 nurses with specialised training in fields including surgical nursing, anaesthesia
nursing/ICU, paediatric nursing and midwifery. PMH also employs a dentist. The Department of Health also employs
nine or ten nurses on the outer islands to provide general nursing and midwifery services. Fetuvalu offers the Cambridge
syllabus. Motufoua offers the Fiji Junior Certificate (FJC) at year 10, Tuvaluan Certificate at Year 11 and the Pacific
Senior Secondary Certificate (PSSC) at Year 12, set by the Fiji-based exam board SPBEA. Sixth form students who pass
their PSSC go on to the Augmented Foundation Programme, funded by the government of Tuvalu. This program is required
for tertiary education programmes outside of Tuvalu and is available at the University of the South Pacific (USP)
Extension Centre in Funafuti. Required attendance at school is 10 years for males and 11 years for females (2001).
The adult literacy rate is 99.0% (2002). In 2010, there were 1,918 students who were taught by 109 teachers (98 certified
and 11 uncertified). The teacher-pupil ratio for primary schools in Tuvalu is around 1:18 for all schools with the
exception of Nauti School, which has a teacher-student ratio of 1:27. Nauti School on Funafuti is the largest primary
in Tuvalu with more than 900 students (45 percent of the total primary school enrolment). The pupil-teacher ratio
for Tuvalu is low compared to the Pacific region (ratio of 1:29). Community Training Centres (CTCs) have been established
within the primary schools on each atoll. The CTCs provide vocational training to students who do not progress beyond
Class 8 because they failed the entry qualifications for secondary education. The CTCs offer training in basic carpentry,
gardening and farming, sewing and cooking. At the end of their studies the graduates can apply to continue studies
either at Motufoua Secondary School or the Tuvalu Maritime Training Institute (TMTI). Adults can also attend courses
at the CTCs. The traditional buildings of Tuvalu used plants and trees from the native broadleaf forest, including
timber from Pouka (Hernandia peltata); Ngia or Ingia bush (Pemphis acidula); Miro (Thespesia populnea); Tonga (Rhizophora mucronata); and Fau or Fo fafini, the woman's fibre tree (Hibiscus tiliaceus); and fibre from coconut; Ferra, the native fig (Ficus aspem); and Fala, screw pine or Pandanus. The buildings were constructed without nails and were lashed
and tied together with a plaited sennit rope that was handmade from dried coconut fibre. The women of Tuvalu use
cowrie and other shells in traditional handicrafts. The artistic traditions of Tuvalu have traditionally been expressed
in the design of clothing and traditional handicrafts such as the decoration of mats and fans. Crochet (kolose) is
one of the art forms practiced by Tuvaluan women. The material culture of Tuvalu uses traditional design elements
in artefacts used in everyday life such as the design of canoes and fish hooks made from traditional materials. The
design of women's skirts (titi), tops (teuga saka), headbands, armbands, and wristbands, which continue to be used
in performances of the traditional dance songs of Tuvalu, represents contemporary Tuvaluan art and design. The cuisine
of Tuvalu is based on the staple of coconut and the many species of fish found in the ocean and lagoons of the atolls.
Desserts made on the islands include coconut and coconut milk, instead of animal milk. The traditional foods eaten
in Tuvalu are pulaka, taro, bananas, breadfruit and coconut. Tuvaluans also eat seafood, including coconut crab and
fish from the lagoon and ocean. A traditional food source is seabirds (taketake or black noddy and akiaki or white
tern), with pork being eaten mostly at fateles (or parties with dancing to celebrate events). Another important building
is the falekaupule or maneapa the traditional island meeting hall, where important matters are discussed and which
is also used for wedding celebrations and community activities such as a fatele involving music, singing and dancing.
Falekaupule is also used as the name of the council of elders – the traditional decision making body on each island.
Under the Falekaupule Act, Falekaupule means "traditional assembly in each island...composed in accordance with the
Aganu of each island". Aganu means traditional customs and culture. A traditional sport played in Tuvalu is kilikiti,
which is similar to cricket. A popular sport specific to Tuvalu is Ano, which is played with two round balls of 12
cm (5 in) diameter. Ano is a localised version of volleyball, in which the two hard balls made from pandanus leaves
are volleyed at great speed with the team members trying to stop the Ano hitting the ground. Traditional sports in
the late 19th century were foot racing, lance throwing, quarterstaff fencing and wrestling, although the Christian
missionaries disapproved of these activities. The popular sports in Tuvalu include kilikiti, Ano, football, futsal,
volleyball, handball, basketball and rugby union. Tuvalu has sports organisations for athletics, badminton, tennis,
table tennis, volleyball, football, basketball, rugby union, weightlifting and powerlifting. At the 2013 Pacific
Mini Games, Tuau Lapua Lapua won Tuvalu's first gold medal in an international competition in the weightlifting 62
kilogram male snatch. (He also won bronze in the clean and jerk, and obtained the silver medal overall for the combined
event.) In 2015 Telupe Iosefa received the first gold medal won by Tuvalu at the Pacific Games in the powerlifting
120 kg male division. A major sporting event is the "Independence Day Sports Festival" held annually on 1 October.
The most important sports event within the country is arguably the Tuvalu Games, which have been held yearly since 2008.
Tuvalu first participated in the Pacific Games in 1978 and in the Commonwealth Games in 1998, when a weightlifter
attended the games held at Kuala Lumpur, Malaysia. Two table tennis players attended the 2002 Commonwealth Games
in Manchester, England; Tuvalu entered competitors in shooting, table tennis and weightlifting at the 2006 Commonwealth
Games in Melbourne, Australia; three athletes participated in the 2010 Commonwealth Games in Delhi, India, entering
the discus, shot put and weightlifting events; and a team of 3 weightlifters and 2 table tennis players attended
the 2014 Commonwealth Games in Glasgow. Tuvaluan athletes have also participated in the men's and women's 100 metre
sprints at the World Championships in Athletics from 2009. From 1996 to 2002, Tuvalu was one of the best-performing
Pacific Island economies and achieved an average real gross domestic product (GDP) growth rate of 5.6% per annum.
Since 2002 economic growth has slowed, with GDP growth of 1.5% in 2008. Tuvalu was exposed to rapid rises in world
prices of fuel and food in 2008, with the level of inflation peaking at 13.4%. The International Monetary Fund 2010
Report on Tuvalu estimates that Tuvalu experienced zero growth in its 2010 GDP, after the economy contracted by about
2% in 2009. On 5 August 2012, the Executive Board of the International Monetary Fund (IMF) concluded the Article
IV consultation with Tuvalu, and assessed the economy of Tuvalu: "A slow recovery is underway in Tuvalu, but there
are important risks. GDP grew in 2011 for the first time since the global financial crisis, led by the private retail
sector and education spending. We expect growth to rise slowly". The IMF 2014 Country Report noted that real GDP
growth in Tuvalu had been volatile averaging only 1 percent in the past decade. The 2014 Country Report describes
economic growth prospects as generally positive as the result of large revenues from fishing licenses, together with
substantial foreign aid. Banking services are provided by the National Bank of Tuvalu. Public sector workers make
up about 65% of those formally employed. Remittances from Tuvaluans living in Australia and New Zealand, and remittances
from Tuvaluan sailors employed on overseas ships are important sources of income for Tuvaluans. Approximately 15%
of adult males work as seamen on foreign-flagged merchant ships. Agriculture in Tuvalu is focused on coconut trees
and growing pulaka in large pits of composted soil below the water table. Tuvaluans are otherwise involved in traditional
subsistence agriculture and fishing. Tuvaluans are well known for their seafaring skills, with the Tuvalu Maritime
Training Institute on Amatuku motu (island), Funafuti, providing training to approximately 120 marine cadets each
year so that they have the skills necessary for employment as seafarers on merchant shipping. The Tuvalu Overseas
Seamen's Union (TOSU) is the only registered trade union in Tuvalu. It represents workers on foreign ships. The Asian
Development Bank (ADB) estimates that 800 Tuvaluan men are trained, certified and active as seafarers. The ADB estimates
that, at any one time, about 15% of the adult male population works abroad as seafarers. Job opportunities also exist
as observers on tuna boats where the role is to monitor compliance with the boat's tuna fishing licence. Government
revenues largely come from sales of fishing licenses, income from the Tuvalu Trust Fund, and from the lease of its
highly fortuitous .tv Internet Top Level Domain (TLD). In 1998, Tuvalu began deriving revenue from the use of its
area code for premium-rate telephone numbers and from the commercialisation of its ".tv" Internet domain name, which
is now managed by Verisign until 2021. The ".tv" domain name generates around $2.2 million each year from royalties,
which is about ten per cent of the government's total revenue. Domain name income paid most of the cost of paving
the streets of Funafuti and installing street lighting in mid-2002. Tuvalu also generates income from stamps by the
Tuvalu Philatelic Bureau and income from the Tuvalu Ship Registry. The United Nations designates Tuvalu as a least
developed country (LDC) because of its limited potential for economic development, absence of exploitable resources
and its small size and vulnerability to external economic and environmental shocks. Tuvalu participates in the Enhanced
Integrated Framework for Trade-Related Technical Assistance to Least Developed Countries (EIF), which was established
in October 1997 under the auspices of the World Trade Organisation. In 2013 Tuvalu deferred its graduation from least
developed country (LDC) status to a developing country to 2015. Prime Minister Enele Sopoaga said that this deferral
was necessary to maintain access by Tuvalu to the funds provided by the United Nations' National Adaptation Programme
of Action (NAPA), as "Once Tuvalu graduates to a developed country, it will not be considered for funding assistance
for climate change adaptation programmes like NAPA, which only goes to LDCs". Tuvalu had met the targets required to graduate from LDC status. Prime Minister Enele Sopoaga wants the United Nations to reconsider its criteria for graduation from LDC status, as not enough weight is given to the environmental plight of small island states like Tuvalu in the application of the Environmental Vulnerability Index (EVI). The Tuvalu Media Department of the Government
of Tuvalu operates Radio Tuvalu which broadcasts from Funafuti. In 2011 the Japanese government provided financial
support to construct a new AM broadcast studio. The installation of upgraded transmission equipment allows Radio
Tuvalu to be heard on all nine islands of Tuvalu. The new AM radio transmitter on Funafuti replaced the FM radio
service to the outer islands and freed up satellite bandwidth for mobile services. Fenui – news from Tuvalu, a free digital publication of the Tuvalu Media Department, is emailed to subscribers and maintains a Facebook page, publishing news about government activities and Tuvaluan events, such as a special edition covering
the results of the 2015 general election. Funafuti is the only port but there is a deep-water berth in the harbour
at Nukufetau. The merchant marine fleet consists of two passenger/cargo ships, Nivaga III and Manu Folau. These ships
carry cargo and passengers between the main atolls and travel between Suva, Fiji and Funafuti 3 to 4 times a year.
The Nivaga III and Manu Folau provide round trip visits to the outer islands every three or four weeks. The Manu
Folau is a 50-metre vessel that was a gift from Japan to the people of Tuvalu. In 2015 the United Nations Development
Program (UNDP) assisted the government of Tuvalu to acquire MV Talamoana, a 30-metre vessel that will be used to
implement Tuvalu's National Adaptation Programme of Action (NAPA) to transport government officials and project personnel
to the outer islands. In 2015 the Nivaga III was donated by the government of Japan; it replaced the Nivaga II, which
had serviced Tuvalu from 1989. Tuvalu consists of three reef islands and six true atolls. Its small, scattered group
of atolls has poor soil and a total land area of only about 26 square kilometres (10 square miles), making it the
fourth smallest country in the world. The islets that form the atolls are very low-lying. Nanumanga, Niutao and Niulakita are reef islands, and the six true atolls are Funafuti, Nanumea, Nui, Nukufetau, Nukulaelae and Vaitupu. Tuvalu's
Exclusive Economic Zone (EEZ) covers an oceanic area of approximately 900,000 km2. Funafuti is the largest atoll
of the nine low reef islands and atolls that form the Tuvalu volcanic island chain. It comprises numerous islets
around a central lagoon that is approximately 25.1 kilometres (15.6 miles) (N–S) by 18.4 kilometres (11.4 miles)
(W–E), centred on 179°7'E and 8°30'S. On the atolls, an annular reef rim surrounds the lagoon with several natural
reef channels. Surveys were carried out in May 2010 of the reef habitats of Nanumea, Nukulaelae and Funafuti and
a total of 317 fish species were recorded during this Tuvalu Marine Life study. The surveys identified 66 species
that had not previously been recorded in Tuvalu, which brings the total number of identified species to 607. Tuvalu
experiences the effects of El Niño and La Niña caused by changes in ocean temperatures in the equatorial and central
Pacific. El Niño effects increase the chances of tropical storms and cyclones, while La Niña effects increase the
chances of drought. Typically the islands of Tuvalu receive between 200 and 400 mm (8 to 16 in) of rainfall per month. However, in 2011 a weak La Niña effect caused a drought by cooling the surface of the sea around Tuvalu. A state of emergency was declared on 28 September 2011, with rationing of fresh water on the islands of Funafuti and Nukulaelae.
Households on Funafuti and Nukulaelae were restricted to two buckets of fresh water per day (40 litres). The governments
of Australia and New Zealand responded to the 2011 fresh-water crisis by supplying temporary desalination plants,
and assisted in the repair of the existing desalination unit that was donated by Japan in 2006. In response to the
2011 drought, Japan funded the purchase of a 100 m3/d desalination plant and two portable 10 m3/d plants as part
of its Pacific Environment Community (PEC) program. Aid programs from the European Union and Australia also provided
water tanks as part of the longer term solution for the storage of available fresh water. The eastern shoreline of
Funafuti Lagoon was modified during World War II when the airfield (what is now Funafuti International Airport) was
constructed. The coral base of the atoll was used as fill to create the runway. The resulting borrow pits impacted
the fresh-water aquifer. In the low areas of Funafuti the sea water can be seen bubbling up through the porous coral
rock to form pools with each high tide. Since 1994 a project has been in development to assess the environmental
impact of transporting sand from the lagoon to fill all the borrow pits and low-lying areas on Fongafale. In 2014
the Tuvalu Borrow Pits Remediation (BPR) project was approved in order to fill 10 borrow pits, leaving Tafua Pond,
which is a natural pond. The New Zealand Government funded the BPR project. The project was carried out in 2015, with 365,000 cubic metres of sand dredged from the lagoon to fill the holes and improve living conditions on the island. This project increased the usable land space on Fongafale by eight per cent. The reefs at Funafuti have suffered damage,
with 80 per cent of the coral becoming bleached as a consequence of the increase in ocean temperatures and ocean
acidification. The coral bleaching, which includes staghorn corals, is attributed to the increase in water temperature during the El Niño events of 1998–2000 and 2000–2001. A reef restoration project has
investigated reef restoration techniques; and researchers from Japan have investigated rebuilding the coral reefs
through the introduction of foraminifera. The project of the Japan International Cooperation Agency is designed to
increase the resilience of the Tuvalu coast against sea level rise through ecosystem rehabilitation and regeneration
and through support for sand production. The rising population has resulted in an increased demand on fish stocks,
which are under stress; although the creation of the Funafuti Conservation Area has provided a fishing exclusion
area to help sustain the fish population across the Funafuti lagoon. Population pressure on the resources of Funafuti
and inadequate sanitation systems have resulted in pollution. The Waste Operations and Services Act of 2009 provides
the legal framework for waste management and pollution control projects funded by the European Union directed at
organic waste composting in eco-sanitation systems. The Environment Protection (Litter and Waste Control) Regulation
2013 is intended to improve the management of the importation of non-biodegradable materials. In Tuvalu plastic waste
is a problem as much imported food and other commodities are supplied in plastic containers or packaging. Reverse
osmosis (R/O) desalination units supplement rainwater harvesting on Funafuti. The 65 m3 desalination plant operates
at a real production level of around 40 m3 per day. R/O water is only intended to be produced when storage falls
below 30%, however demand to replenish household storage supplies with tanker-delivered water means that the R/O
desalination units are continually operating. Water is delivered at a cost of A$3.50 per m3. Cost of production and
delivery has been estimated at A$6 per m3, with the difference subsidised by the government. In July 2012 a United
Nations Special Rapporteur called on the Tuvalu Government to develop a national water strategy to improve access
to safe drinking water and sanitation. In 2012, Tuvalu developed a National Water Resources Policy under the Integrated
Water Resource Management (IWRM) Project and the Pacific Adaptation to Climate Change (PACC) Project, which are sponsored
by the Global Environment Fund/SOPAC. Government water planning has established a target of between 50 and 100L of
water per person per day, accounting for drinking water, cleaning, community and cultural activities. Because of their low elevation, the islands that make up this nation are vulnerable to the effects of tropical cyclones and to the threat of current and future sea level rise. The highest elevation is 4.6 metres (15 ft) above sea level on Niulakita,
which gives Tuvalu the second-lowest maximum elevation of any country (after the Maldives). The highest elevations
are typically in narrow storm dunes on the ocean side of the islands which are prone to overtopping in tropical cyclones,
as occurred with Cyclone Bebe, which was a very early-season storm that passed through the Tuvaluan atolls in October
1972. Cyclone Bebe submerged Funafuti, eliminating 90% of structures on the island. Sources of drinking water were
contaminated as a result of the system's storm surge and the flooding of the sources of fresh water. Funafuti's Tepuka Vili Vili islet was devastated by Cyclone Meli in 1979,
with all its vegetation and most of its sand swept away during the cyclone. Along with a tropical depression that
affected the islands a few days later, Severe Tropical Cyclone Ofa had a major impact on Tuvalu with most islands
reporting damage to vegetation and crops. Cyclone Gavin was first identified during 2 March 1997, and was the first
of three tropical cyclones to affect Tuvalu during the 1996–97 cyclone season with Cyclones Hina and Keli following
later in the season. In March 2015, the winds and storm surge created by Cyclone Pam resulted in waves of 3 metres (9.8 ft) to 5 metres (16 ft) breaking over the reef of the outer islands, causing damage to houses, crops and infrastructure.
On Nui the sources of fresh water were destroyed or contaminated. The flooding in Nui and Nukufetau caused many families
to shelter in evacuation centres or with other families. Nui suffered the most damage of the three central islands
(Nui, Nukufetau and Vaitupu); with both Nui and Nukufetau suffering the loss of 90% of the crops. Of the three northern
islands (Nanumanga, Niutao, Nanumea), Nanumanga suffered the most damage, with 60–100 houses flooded and the waves also damaging the health facility. Vasafua islet, part of the Funafuti Conservation Area, was severely damaged
by Cyclone Pam. The coconut palms were washed away, leaving the islet as a sand bar. The Tuvalu Government carried
out assessments of the damage caused by Cyclone Pam to the islands and has provided medical aid and food, as well as assistance for the clean-up of storm debris. Governmental and non-governmental organisations provided technical, funding and material support to Tuvalu to assist with recovery, including WHO, UNICEF, UNDP, OCHA, World Bank, DFAT,
New Zealand Red Cross & IFRC, Fiji National University and governments of New Zealand, Netherlands, UAE, Taiwan and
the United States. Whether there are measurable changes in the sea level relative to the islands of Tuvalu is a contentious
issue. There were problems associated with the pre-1993 sea level records from Funafuti which resulted in improvements
in the recording technology to provide more reliable data for analysis. The degree of uncertainty as to estimates
of sea level change relative to the islands of Tuvalu was reflected in the conclusions made in 2002 from the available
data. The 2011 report of the Pacific Climate Change Science Program published by the Australian Government, concludes:
"The sea-level rise near Tuvalu measured by satellite altimeters since 1993 is about 5 mm (0.2 in) per year." The
atolls have shown resilience to gradual sea-level rise, with atolls and reef islands being able to grow under current
climate conditions by generating sufficient sand and coral debris that accumulates and gets dumped on the islands
during cyclones. Gradual sea-level rise also allows for coral polyp activity to increase the reefs. However, if sea level rises at a faster rate than coral growth, or if polyp activity is damaged by ocean acidification, then the resilience of the atolls and reef islands is less certain. The 2011 report of the Pacific Climate Change Science Program of Australia also sets out conclusions, in relation to Tuvalu, covering the course of the 21st century. While some commentators have called for the relocation of Tuvalu's population to Australia,
New Zealand or Kioa in Fiji, in 2006 Maatia Toafa (Prime Minister from 2004 to 2006) said his government did not regard
rising sea levels as such a threat that the entire population would need to be evacuated. In 2013 Enele Sopoaga,
the prime minister of Tuvalu, said that relocating Tuvaluans to avoid the impact of sea level rise "should never
be an option because it is self defeating in itself. For Tuvalu I think we really need to mobilise public opinion
in the Pacific as well as in the [rest of] world to really talk to their lawmakers to please have some sort of moral
obligation and things like that to do the right thing."
The defined dogma of the Immaculate Conception regards original sin only, saying that Mary was preserved from any stain (in
Latin, macula or labes, the second of these two synonymous words being the one used in the formal definition). The
proclaimed Roman Catholic dogma states "that the most Blessed Virgin Mary, in the first instance of her conception,
by a singular grace and privilege granted by Almighty God, in view of the merits of Jesus Christ, the Saviour of
the human race, was preserved free from all stain of original sin." Therefore, being always free from original sin,
the doctrine teaches that from her conception Mary received the sanctifying grace that would normally come with baptism
after birth. The definition makes no declaration about the Church's belief that the Blessed Virgin was sinless in
the sense of freedom from actual or personal sin. However, the Church holds that Mary was also sinless personally,
"free from all sin, original or personal". The Council of Trent decreed: "If anyone shall say that a man once justified
can sin no more, nor lose grace, and that therefore he who falls and sins was never truly justified; or, on the contrary,
that throughout his whole life he can avoid all sins even venial sins, except by a special privilege of God, as the
Church holds in regard to the Blessed Virgin: let him be anathema." The doctrine of the immaculate conception (Mary
being conceived free from original sin) is not to be confused with her virginal conception of her son Jesus. This
misunderstanding of the term immaculate conception is frequently met in the mass media. Catholics believe that Mary
was not the product of a virginal conception herself but was the daughter of a human father and mother, traditionally
known by the names of Saint Joachim and Saint Anne. In 1677, the Holy See condemned the belief that Mary was virginally
conceived, which had been a belief surfacing occasionally since the 4th century. The Church celebrates the Feast
of the Immaculate Conception (when Mary was conceived free from original sin) on 8 December, exactly nine months
before celebrating the Nativity of Mary. The feast of the Annunciation (which commemorates the virginal conception
and the Incarnation of Jesus) is celebrated on 25 March, nine months before Christmas Day. Another misunderstanding
is that, by her immaculate conception, Mary did not need a saviour. When defining the dogma in Ineffabilis Deus,
Pope Pius IX explicitly affirmed that Mary was redeemed in a manner more sublime. He stated that Mary, rather than
being cleansed after sin, was completely prevented from contracting Original Sin in view of the foreseen merits of
Jesus Christ, the Savior of the human race. In Luke 1:47, Mary proclaims: "My spirit has rejoiced in God my Saviour."
This is referred to as Mary's pre-redemption by Christ. Since the Second Council of Orange against semi-pelagianism,
the Catholic Church has taught that even had man never sinned in the Garden of Eden and remained sinless, he would still require God's grace to remain sinless. Mary's complete sinlessness and concomitant exemption from any taint from
the first moment of her existence was a doctrine familiar to Greek theologians of Byzantium. Beginning with St. Gregory
Nazianzen, his explanation of the "purification" of Jesus and Mary at the circumcision (Luke 2:22) prompted him to
consider the primary meaning of "purification" in Christology (and by extension in Mariology) to refer to a perfectly
sinless nature that manifested itself in glory in a moment of grace (e.g., Jesus at his Baptism). St. Gregory Nazianzen
designated Mary as "prokathartheisa (prepurified)." Gregory likely attempted to solve the riddle of the Purification
of Jesus and Mary in the Temple through considering the human natures of Jesus and Mary as equally holy and therefore
both purified in this manner of grace and glory. Gregory's doctrines surrounding Mary's purification were likely
related to the burgeoning commemoration of the Mother of God in and around Constantinople very close to the date
of Christmas. Nazianzen's title of Mary at the Annunciation as "prepurified" was subsequently adopted by all theologians
interested in his Mariology to justify the Byzantine equivalent of the Immaculate Conception. This is especially
apparent in the Fathers St. Sophronios of Jerusalem and St. John Damascene, who will be treated below in this article
at the section on Church Fathers. About the time of Damascene, the public celebration of the "Conception of St. Ann
[i.e., of the Theotokos in her womb]" was becoming popular. After this period, the "purification" of the perfect
natures of Jesus and Mary would not only mean moments of grace and glory at the Incarnation and Baptism and other
public Byzantine liturgical feasts, but purification was eventually associated with the feast of Mary's very conception
(along with her Presentation in the Temple as a toddler) by Orthodox authors of the 2nd millennium (e.g., St. Nicholas
Cabasilas and Joseph Bryennius). It is admitted that the doctrine as defined by Pius IX was not explicitly mooted
before the 12th century. It is also agreed that "no direct or categorical and stringent proof of the dogma can be
brought forward from Scripture". But it is claimed that the doctrine is implicitly contained in the teaching of the
Fathers. Their expressions on the subject of the sinlessness of Mary are, it is pointed out, so ample and so absolute
that they must be taken to include original sin as well as actual. Thus in the first five centuries such epithets
as "in every respect holy", "in all things unstained", "super-innocent", and "singularly holy" are applied to her;
she is compared to Eve before the fall, as ancestress of a redeemed people; she is "the earth before it was accursed".
The well-known words of St. Augustine (d. 430) may be cited: "As regards the mother of God," he says, "I will not
allow any question whatever of sin." It is true that he is here speaking directly of actual or personal sin. But
his argument is that all men are sinners; that they are so through original depravity; that this original depravity
may be overcome by the grace of God, and he adds that he does not know but that Mary may have had sufficient grace
to overcome sin "of every sort" (omni ex parte). Although the doctrine of Mary's Immaculate Conception appears only
later among Latin (and particularly Frankish) theologians, it became ever more manifest among Byzantine theologians
reliant on Gregory Nazianzen's Mariology in the Medieval or Byzantine East. Although hymnographers and scholars,
like the Emperor Justinian I, were accustomed to call Mary "prepurified" in their poetic and credal statements, the
first point of departure for more fully commenting on Nazianzen's meaning occurs in Sophronius of Jerusalem. In other
places Sophronius explains that the Theotokos was already immaculate, when she was "purified" at the Annunciation
and goes so far as to note that John the Baptist is literally "holier than all 'Men' born of woman" since Mary's
surpassing holiness signifies that she was holier than even John after his sanctification in utero. Sophronius' teaching
is augmented and incorporated by St. John Damascene (d. 749/750). John, besides many passages wherein he extolls
the Theotokos for her purification at the Annunciation, grants her the unique honor of "purifying the waters of baptism
by touching them." This honor was first and most famously attributed to Christ, especially in the legacy of Nazianzen.
As such, Nazianzen's assertion of parallel holiness between the prepurified Mary and purified Jesus of the New Testament
is made even more explicit in Damascene in his discourse on Mary's holiness to also imitate Christ's baptism at the
Jordan. The Damascene's hymnography and De fide Orthodoxa explicitly use Mary's "pre-purification" as a key to understanding
her absolute holiness and unsullied human nature. In fact, Damascene (along with Nazianzen) serves as the source
for nearly all subsequent promotion of Mary's complete holiness from her Conception by the "all pure seed" of Joachim
and the womb "wider than heaven" of St. Ann. By 750, the feast of her conception was widely celebrated in the Byzantine
East, under the name of the Conception (active) of Saint Anne. In the West it was known as the feast of the Conception
(passive) of Mary, and was associated particularly with the Normans, whether these introduced it directly from the
East or took it from English usage. The spread of the feast, by now with the adjective "Immaculate" attached to its
title, met opposition on the part of some, on the grounds that sanctification was possible only after conception.
Critics included Saints Bernard of Clairvaux, Albertus Magnus and Thomas Aquinas. Other theologians defended the
expression "Immaculate Conception", pointing out that sanctification could be conferred at the first moment of conception
in view of the foreseen merits of Christ, a view held especially by Franciscans. On 28 February 1476, Pope Sixtus
IV authorized those dioceses that wished to introduce the feast to do so, and introduced it to his own diocese of
Rome in 1477, with a specially composed Mass and Office of the feast. With his bull Cum praeexcelsa of 28 February
1477, in which he referred to the feast as that of the Conception of Mary, without using the word "Immaculate", he
granted indulgences to those who would participate in the specially composed Mass or Office on the feast itself or
during its octave, and he used the word "immaculate" of Mary, but applied instead the adjective "miraculous" to her
conception. On 4 September 1483, referring to the feast as that of "the Conception of Immaculate Mary ever Virgin",
he condemned both those who called it mortally sinful and heretical to hold that the "glorious and immaculate mother
of God was conceived without the stain of original sin" and those who called it mortally sinful and heretical to
hold that "the glorious Virgin Mary was conceived with original sin", since, he said, "up to this time there has
been no decision made by the Roman Church and the Apostolic See." This decree was reaffirmed by the Council of Trent.
In 1839 Mariano Spada (1796–1872), professor of theology at the Roman College of Saint Thomas, published Esame
Critico sulla dottrina dell’ Angelico Dottore S. Tommaso di Aquino circa il Peccato originale, relativamente alla
Beatissima Vergine Maria [A critical examination of the doctrine of St. Thomas Aquinas, the Angelic Doctor, regarding
original sin with respect to the Most Blessed Virgin Mary], in which Aquinas is interpreted not as treating the question
of the Immaculate Conception later formulated in the papal bull Ineffabilis Deus but rather the sanctification of
the fetus within Mary's womb. Spada furnished an interpretation whereby Pius IX was relieved of the problem of seeming
to foster a doctrine not in agreement with Aquinas' teaching. Pope Pius IX would later appoint Spada Master of
the Sacred Palace in 1867. It seems to have been St Bernard of Clairvaux who, in the 12th century, explicitly raised
the question of the Immaculate Conception. A feast of the Conception of the Blessed Virgin had already begun to be
celebrated in some churches of the West. St Bernard blames the canons of the metropolitan church of Lyon for instituting
such a festival without the permission of the Holy See. In doing so, he takes occasion to repudiate altogether the
view that the conception of Mary was sinless. It is doubtful, however, whether he was using the term "conception"
in the same sense in which it is used in the definition of Pope Pius IX. Bernard would seem to have been speaking
of conception in the active sense of the mother's cooperation, for in his argument he says: "How can there be absence
of sin where there is concupiscence (libido)?" and stronger expressions follow, showing that he is speaking of the
mother and not of the child. The celebrated John Duns Scotus (d. 1308), a Friar Minor like Saint Bonaventure, argued,
on the contrary, that from a rational point of view it was certainly as little derogatory to the merits of Christ
to assert that Mary was by him preserved from all taint of sin, as to say that she first contracted it and then was
delivered. Proposing a solution to the theological problem of reconciling the doctrine with that of universal redemption
in Christ, he argued that Mary's immaculate conception did not remove her from redemption by Christ; rather it was
the result of a more perfect redemption granted her because of her special role in salvation history. Popular opinion
remained firmly behind the celebration of Mary's conception. In 1439, the Council of Basel, which is not reckoned
an ecumenical council, stated that belief in the immaculate conception of Mary is in accord with the Catholic faith.
By the end of the 15th century the belief was widely professed and taught in many theological faculties, but such
was the influence of the Dominicans, and the weight of the arguments of Thomas Aquinas (who had been canonised in
1323 and declared "Doctor Angelicus" of the Church in 1567) that the Council of Trent (1545–63)—which might have
been expected to affirm the doctrine—instead declined to take a position. The papal bull defining the dogma, Ineffabilis
Deus, mentioned in particular the patristic interpretation of Genesis 3:15 as referring to a woman, Mary, who
would be eternally at enmity with the evil serpent and completely triumphing over him. It said the Fathers saw foreshadowings
of Mary's "wondrous abundance of divine gifts and original innocence" "in that ark of Noah, which was built by divine
command and escaped entirely safe and sound from the common shipwreck of the whole world; in the ladder which Jacob
saw reaching from the earth to heaven, by whose rungs the angels of God ascended and descended, and on whose top
the Lord himself leaned; in that bush which Moses saw in the holy place burning on all sides, which was not consumed
or injured in any way but grew green and blossomed beautifully; in that impregnable tower before the enemy, from
which hung a thousand bucklers and all the armor of the strong; in that garden enclosed on all sides, which cannot
be violated or corrupted by any deceitful plots; in that resplendent city of God, which has its foundations on the
holy mountains; in that most august temple of God, which, radiant with divine splendours, is full of the glory of
God; and in very many other biblical types of this kind." Contemporary Eastern Orthodox Christians often object to
the dogmatic declaration of her immaculate conception as an "over-elaboration" of the faith and because they see
it as too closely connected with a particular interpretation of the doctrine of ancestral sin. All the same, the
historical and authentic tradition of Mariology in Byzantium took its historical point of departure from Sophronios,
Damascene, and their imitators. The most famous Eastern Orthodox theologian to imply Mary's Immaculate Conception
was St. Gregory Palamas. Though many passages from his works were long known to extol and attribute to Mary a Christlike
holiness in her human nature, traditional objections to Palamas' disposition toward the Immaculate Conception typically
rely on a poor understanding of his doctrine of "the purification of Mary" at the Annunciation. Not only did he explicitly
cite St. Gregory Nazianzen for his understanding of Jesus' purification at His baptism and Mary's at the Annunciation,
but Theophanes of Nicaea, Joseph Bryennius, and Gennadios Scholarios all explicitly placed Mary's Conception as the
first moment of her all-immaculate participation in the divine energies to such a degree that she was always completely
without spot and graced. In addition to Emperor Manuel II and Gennadius Scholarius, St. Mark of Ephesus also fervently
defended Mary's title as "prepurified" against the Dominican, Manuel Calecas, who was perhaps promoting Thomistic
Mariology that denied Mary's all-holiness from the first moment of her existence. Martin Luther, who initiated the
Protestant Reformation, said: "Mother Mary, like us, was born in sin of sinful parents, but the Holy Spirit covered
her, sanctified and purified her so that this child was born of flesh and blood, but not with sinful flesh and blood.
The Holy Spirit permitted the Virgin Mary to remain a true, natural human being of flesh and blood, just as we. However,
he warded off sin from her flesh and blood so that she became the mother of a pure child, not poisoned by sin as
we are. For in that moment when she conceived, she was a holy mother filled with the Holy Spirit and her fruit is
a holy pure fruit, at once God and truly man, in one person." Some Lutherans, such as the members of the Anglo-Lutheran
Catholic Church, support the doctrine. The report "Mary: Faith and Hope in Christ", by the Anglican-Roman Catholic
International Commission, concluded that the teaching about Mary in the two definitions of the Assumption and the
Immaculate Conception can be said to be consonant with the teaching of the Scriptures and the ancient common traditions.
But the report expressed concern that the Roman Catholic dogmatic definitions present these concepts as "revealed by God", stating: "The question arises for Anglicans, however, as to whether these doctrines concerning
Mary are revealed by God in a way which must be held by believers as a matter of faith." Some Western writers claim
that the immaculate conception of Mary is a teaching of Islam. Thus, commenting in 1734 on the passage in the Qur'an,
"I have called her Mary; and I commend her to thy protection, and also her issue, against Satan driven away with
stones", George Sale stated: "It is not improbable that the pretended immaculate conception of the virgin Mary is
intimated in this passage. For according to a tradition of Mohammed, every person that comes into the world, is touched
at his birth by the devil, and therefore cries out, Mary and her son only excepted; between whom, and the evil spirit
God placed a veil, so that his touch did not reach them. And for this reason they say, neither of them were guilty
of any sin, like the rest of the children of Adam." Others have rejected the claim that the doctrine of the Immaculate Conception exists in Islam: the Quranic account does not confirm the Immaculate Conception exclusively for Mary, since in Islam every human child is born pure and immaculate. Her sinless birth is thus independent of the Christian doctrine of original sin, as no such doctrine exists in Islam. Moreover, Hannah's prayer in the Quran for her child to remain protected from Satan (Shayṭān) was said after the child had already been born, not before, and expresses a natural concern any righteous parent would have. The Muslim tradition, or hadith, which states that the only children born without the "touch of Satan" were Mary and Jesus, should therefore not be taken in isolation from the Quran, and is to be
interpreted within the specific context of exonerating Mary and her child from the charges that were made against
them and is not a general statement. The specific mention of Mary and Jesus in this hadith may also be taken to represent
a class of people, in keeping with the Arabic language and the Quranic verse "[O Satan] surely thou shalt have no power over My servants, except such of the erring ones as choose to follow thee" (15:42). Further claims were made
that the Roman Catholic Church derives its doctrine from the Islamic teaching. In volume 5 of his Decline and Fall
of the Roman Empire, published in 1788, Edward Gibbon wrote: "The Latin Church has not disdained to borrow from the
Koran the immaculate conception of his virgin mother." That he was speaking of her immaculate conception by her mother,
not of her own virginal conception of Jesus, is shown by his footnote: "In the xiith century the immaculate conception
was condemned by St. Bernard as a presumptuous novelty." In the aftermath of the definition of the dogma in 1854,
this charge was repeated: "Strange as it may appear, that the doctrine which the church of Rome has promulgated,
with so much pomp and ceremony, 'for the destruction of all heresies, and the confirmation of the faith of her adherents',
should have its origin in the Mohametan Bible; yet the testimony of such authorities as Gibbon, and Sale, and Forster,
and Gagnier, and Maracci, leave no doubt as to the marvellous fact." The Roman Missal and the Roman Rite Liturgy of the Hours naturally include references to Mary's immaculate conception in the feast of the Immaculate Conception.
An example is the antiphon that begins: "Tota pulchra es, Maria, et macula originalis non est in te" (You are all
beautiful, Mary, and the original stain [of sin] is not in you. Your clothing is white as snow, and your face is
like the sun. You are all beautiful, Mary, and the original stain [of sin] is not in you. You are the glory of Jerusalem,
you are the joy of Israel, you give honour to our people. You are all beautiful, Mary.) On the basis of the original
Gregorian chant music, polyphonic settings have been composed by Anton Bruckner, Pablo Casals, Maurice Duruflé, Grzegorz
Gerwazy Gorczycki, Ola Gjeilo, José Maurício Nunes Garcia, and Nikolaus Schapfl. The popularity of this particular
representation of The Immaculate Conception spread across the rest of Europe, and has since remained the best known
artistic depiction of the concept: in a heavenly realm, moments after her creation, the spirit of Mary (in the form
of a young woman) looks up in awe at (or bows her head to) God. The moon is under her feet and a halo of twelve stars
surrounds her head, possibly a reference to "a woman clothed with the sun" from Revelation 12:1-2. Additional imagery
may include clouds, a golden light, and cherubs. In some paintings the cherubim are holding lilies and roses, flowers
often associated with Mary.
Namibia has free education at both the primary and secondary levels. Grades 1–7 are primary level, grades 8–12 secondary.
In 1998, there were 400,325 Namibian students in primary school and 115,237 students in secondary schools. The pupil-teacher
ratio in 1999 was estimated at 32:1, with about 8% of the GDP being spent on education. Curriculum development, educational
research, and professional development of teachers are centrally organised by the National Institute for Educational
Development (NIED) in Okahandja. Namibia (i/nəˈmɪbiə/, /næˈ-/), officially the Republic of Namibia (German: Republik
Namibia; Afrikaans: Republiek van Namibië) is a country in southern Africa whose western border is the
Atlantic Ocean. It shares land borders with Zambia and Angola to the north, Botswana to the east and South Africa
to the south and east. Although it does not border Zimbabwe, a stretch of the Zambezi River less than 200 metres wide (essentially a small bulge in Botswana creating a Botswana/Zambia micro-border) separates it from that country.
Namibia gained independence from South Africa on 21 March 1990, following the Namibian War of Independence. Its capital
and largest city is Windhoek, and it is a member state of the United Nations (UN), the Southern African Development
Community (SADC), the African Union (AU), and the Commonwealth of Nations. The dry lands of Namibia have been inhabited since early times by the San, Damara, and Namaqua, and since about the 14th century AD by immigrating Bantu who came
with the Bantu expansion. Most of the territory became a German Imperial protectorate in 1884 and remained a German
colony until the end of World War I. In 1920, the League of Nations mandated the country to South Africa, which imposed
its laws and, from 1948, its apartheid policy. The port of Walvis Bay and the offshore Penguin Islands had been annexed
by the Cape Colony under the British crown by 1878 and had become an integral part of the new Union of South Africa
at its creation in 1910. Uprisings and demands by African leaders led the UN to assume direct responsibility over
the territory. It recognised the South West Africa People's Organisation (SWAPO) as the official representative of
the Namibian people in 1973. Namibia, however, remained under South African administration during this time as South-West
Africa. Following internal violence, South Africa installed an interim administration in Namibia in 1985. Namibia
obtained full independence from South Africa in 1990, with the exception of Walvis Bay and the Penguin Islands, which
remained under South African control until 1994. The dry lands of Namibia have been inhabited since early times by the San,
Damara, Nama and, since about the 14th century AD, by immigrating Bantu who came with the Bantu expansion from central
Africa. From the late 18th century onwards, Orlam clans from the Cape Colony crossed the Orange River and moved into
the area that today is southern Namibia. Their encounters with the nomadic Nama tribes were largely peaceful. The
missionaries accompanying the Orlams were well received by them, the right to use waterholes and grazing was granted
against an annual payment. On their way further northwards, however, the Orlams encountered clans of the Herero tribe
at Windhoek, Gobabis, and Okahandja which were less accommodating. The Nama-Herero War broke out in 1880, with hostilities
ebbing only when Imperial Germany deployed troops to the contested places and cemented the status quo between Nama,
Orlams, and Herero. The first Europeans to disembark and explore the region were the Portuguese navigators Diogo
Cão in 1485 and Bartolomeu Dias in 1486; still, the region was not claimed by the Portuguese crown. However, like
most of Sub-Saharan Africa, Namibia was not extensively explored by Europeans until the 19th century, when traders
and settlers arrived, principally from Germany and Sweden. In the late 19th century Dorsland trekkers crossed the
area on their way from the Transvaal to Angola. Some of them settled in Namibia instead of continuing their journey.
From 1904 to 1907, the Herero and the Namaqua took up arms against the Germans, and in a calculated punitive action by the German occupiers, the 'first genocide of the twentieth century' was committed. In the Herero and Namaqua genocide,
10,000 Nama (half the population) and approximately 65,000 Hereros (about 80% of the population) were systematically
murdered. The survivors, when finally released from detention, were subjected to a policy of dispossession, deportation,
forced labour, racial segregation and discrimination in a system that in many ways anticipated apartheid. South Africa
occupied the colony in 1915 after defeating the German force during World War I and administered it from 1919 onward
as a League of Nations mandate territory. Although the South African government desired to incorporate 'South-West
Africa' into its territory, it never officially did so, although it was administered as the de facto 'fifth province',
with the white minority having representation in the whites-only Parliament of South Africa, as well as electing
their own local administration, the SWA Legislative Assembly. The South African government also appointed the SWA
administrator, who had extensive powers. Following the League's replacement by the United Nations in 1946, South
Africa refused to surrender its earlier mandate to be replaced by a United Nations Trusteeship agreement, requiring
closer international monitoring of the territory's administration (along with a definite independence schedule).
The Herero Chief's Council submitted a number of petitions to the UN calling for it to grant Namibia independence
during the 1950s. During the 1960s, when European powers granted independence to their colonies and trust territories
in Africa, pressure mounted on South Africa to do so in Namibia. In 1966 the International Court of Justice dismissed
a complaint brought by Ethiopia and Liberia against South Africa's continued presence in the territory, but the U.N.
General Assembly subsequently revoked South Africa's mandate, while in 1971 the International Court of Justice issued
an "advisory opinion" declaring South Africa's continued administration to be illegal. In response to the 1966 ruling
by the International Court of Justice, the South-West Africa People's Organisation's (SWAPO) military wing, the People's Liberation Army of Namibia, a guerrilla group, began its armed struggle for independence, but it was not until 1988 that South
Africa agreed to end its occupation of Namibia, in accordance with a UN peace plan for the entire region. During
the South African occupation of Namibia, white commercial farmers, most of whom came as settlers from South Africa
and represented 0.2% of the national population, owned 74% of the arable land. Outside the central-southern area
of Namibia (known as the "Police Zone" since the German era and which contained the main towns, industries, mines
and best arable land), the country was divided into "homelands", the version of South African bantustan applied to
Namibia, although only a few were actually established because indigenous Namibians often did not cooperate. South
West Africa became known as Namibia by the UN when the General Assembly changed the territory's name by Resolution
2372 (XXII) of 12 June 1968. In 1978 the UN Security Council passed UN Resolution 435 which planned a transition
toward independence for Namibia. Attempts to persuade South Africa to agree to the plan's implementation were not
successful until 1988 when the transition to independence finally started under a diplomatic agreement between South
Africa, Angola and Cuba, with the USSR and the USA as observers, under which South Africa agreed to withdraw and
demobilise its forces in Namibia. As a result, Cuba agreed to pull back its troops in southern Angola sent to support
the MPLA in its war for control of Angola with UNITA. A combined UN civilian and peace-keeping force called UNTAG
(United Nations Transition Assistance Group) under Finnish diplomat Martti Ahtisaari was deployed from April 1989
to March 1990 to monitor the peace process, elections and supervise military withdrawals. As UNTAG began to deploy
peacekeepers, military observers, police, and political workers, hostilities were briefly renewed on the day the
transition process was supposed to begin. After a new round of negotiations, a second date was set and the elections
process began in earnest. After the return of over 46,000 SWAPO exiles, Namibia's first one-person one-vote
elections for the constitutional assembly took place in November 1989. The official election slogan was "Free and
Fair Elections". This was won by SWAPO although it did not gain the two-thirds majority it had hoped for; the South
African-backed Democratic Turnhalle Alliance (DTA) became the official opposition. The elections were peaceful and
declared free and fair. The Namibian Constitution adopted in February 1990 incorporated protection for human rights,
compensation for state expropriations of private property, an independent judiciary and an executive presidency (the
constituent assembly became the national assembly). The country officially became independent on 21 March 1990. Sam
Nujoma was sworn in as the first President of Namibia watched by Nelson Mandela (who had been released from prison
the previous month) and representatives from 147 countries, including 20 heads of state. Walvis Bay was ceded to
Namibia in 1994 upon the end of apartheid in South Africa. Since independence Namibia has successfully
completed the transition from white minority apartheid rule to parliamentary democracy. Multiparty democracy was
introduced and has been maintained, with local, regional and national elections held regularly. Several registered
political parties are active and represented in the National Assembly, although the Swapo Party has won every election
since independence. The transition from the 15-year rule of President Sam Nujoma to his successor Hifikepunye Pohamba
in 2005 went smoothly. The Kalahari Desert, an arid region shared with South Africa and Botswana, is one of Namibia's
well-known geographical features. The Kalahari, while popularly known as a desert, has a variety of localised environments,
including some verdant and technically non-desert areas. One of these, known as the Succulent Karoo, is home to over
5,000 species of plants, nearly half of them endemic; approximately 10 percent of the world's succulents are found
in the Karoo. The reason behind this high productivity and endemism may be the relatively stable nature of precipitation.
Namibia extends from 17°S to 25°S, climatically the range of the subtropical high-pressure belt. The overall climate description is arid, descending from the sub-humid (mean rainfall above 500 mm) through the semi-arid between 300 and 500 mm (embracing most of the waterless Kalahari) and the arid from 150 to 300 mm (these three regions lie inland from the western escarpment) to the hyper-arid coastal plain with a mean of less than 100 mm. Temperature maxima are limited
by the overall elevation of the entire region: only in the far south, Warmbad for instance, are mid-40 °C maxima
recorded. Typically the sub-Tropical High Pressure Belt, with frequent clear skies, provides more than 300 days of
sunshine per year. It is situated at the southern edge of the tropics; the Tropic of Capricorn cuts the country about
in half. The winter (June–August) is generally dry; both rainy seasons occur in summer, the small one between September and November and the big one between February and April. Humidity is low, and average rainfall varies
from almost zero in the coastal desert to more than 600 mm in the Caprivi Strip. Rainfall is however highly variable,
and droughts are common. The last bad rainy season with rainfall far below the annual average occurred in
summer 2006/07. Weather and climate in the coastal area are dominated by the cold, north-flowing Benguela current
of the Atlantic Ocean which accounts for very low precipitation (50 mm per year or less), frequent dense fog, and
overall lower temperatures than in the rest of the country. In winter, a condition known as Bergwind (German: mountain wind) or Oosweer (Afrikaans: east weather) occasionally occurs: a hot dry wind blowing from the inland to
the coast. As the area behind the coast is a desert, these winds can develop into sand storms with sand deposits
in the Atlantic Ocean visible on satellite images. Namibia is the driest country in sub-Saharan Africa and depends
largely on groundwater. With an average rainfall of about 350 mm per annum, the highest rainfall occurs in the Caprivi
in the northeast (about 600 mm per annum) and decreases in a westerly and southwesterly direction to as little as
50 mm and less per annum at the coast. The only perennial rivers are found on the national borders with South Africa,
Angola, Zambia, and the short border with Botswana in the Caprivi. In the interior of the country surface water is
available only in the summer months when rivers are in flood after exceptional rainfalls. Otherwise, surface water
is restricted to a few large storage dams retaining and damming up these seasonal floods and their runoff. Thus,
where people don’t live near perennial rivers or make use of the storage dams, they are dependent on groundwater.
The advantage of using groundwater sources is that even isolated communities and those economic activities located
far from good surface water sources such as mining, agriculture, and tourism can be supplied from groundwater over
nearly 80% of the country. Namibia is one of the few countries in the world to specifically address conservation and
protection of natural resources in its constitution. Article 95 states, "The State shall actively promote and maintain
the welfare of the people by adopting international policies aimed at the following: maintenance of ecosystems, essential
ecological processes, and biological diversity of Namibia, and utilisation of living natural resources on a sustainable
basis for the benefit of all Namibians, both present and future." In 1993, the newly formed government of Namibia
received funding from the United States Agency for International Development (USAID) through its Living in a Finite
Environment (LIFE) Project. The Ministry of Environment and Tourism with the financial support from organisations
such as USAID, Endangered Wildlife Trust, WWF, and Canadian Ambassador's Fund, together form a Community Based Natural
Resource Management (CBNRM) support structure. The main goal of this project is to promote sustainable natural resource
management by giving local communities rights to wildlife management and tourism. Namibia follows a largely independent
foreign policy, with persisting affiliations with states that aided the independence struggle, including Cuba. With
a small army and a fragile economy, the Namibian Government's principal foreign policy concern is developing strengthened
ties within the Southern African region. A dynamic member of the Southern African Development Community, Namibia
is a vocal advocate for greater regional integration. Namibia became the 160th member of the UN on 23 April 1990.
On its independence it became the fiftieth member of the Commonwealth of Nations. According to the Namibia Labour
Force Survey Report 2012, conducted by the Namibia Statistics Agency, the country's unemployment rate is 27.4%. "Strict
unemployment" (people actively seeking a full-time job) stood at 20.2% in 2000, 21.9% in 2004 and spiraled to 29.4%
in 2008. Under a broader definition (including people that have given up searching for employment) unemployment rose
to 36.7% in 2004. This estimate considers people in the informal economy as employed. Labour and Social Welfare Minister
Immanuel Ngatjizeko praised the 2008 study as "by far superior in scope and quality to any that has been available
previously", but its methodology has also received criticism. In 2013, global business and financial news provider,
Bloomberg, named Namibia the top emerging market economy in Africa and the 13th best in the world. Only four African
countries made the Top 20 Emerging Markets list in the March 2013 issue of Bloomberg Markets magazine, and Namibia
was rated ahead of Morocco (19th), South Africa (15th) and Zambia (14th). Worldwide, Namibia also fared better than
Hungary, Brazil and Mexico. Bloomberg Markets magazine ranked the top 20 based on more than a dozen criteria. The
data came from Bloomberg's own financial-market statistics, IMF forecasts and the World Bank. The countries were
also rated on areas of particular interest to foreign investors: the ease of doing business, the perceived level
of corruption and economic freedom. In order to attract foreign investment, the government has made progress in reducing red tape resulting from excessive government regulations, making the country one of the least bureaucratic
places to do business in the region. However, facilitation payments are occasionally demanded by customs due to cumbersome
and costly customs procedures. Namibia is also classified as an Upper Middle Income country by the World Bank, and
ranks 87th out of 185 economies in terms of ease of doing business. About half of the population depends on agriculture
(largely subsistence agriculture) for its livelihood, but Namibia must still import some of its food. Although per
capita GDP is five times the per capita GDP of Africa's poorest countries, the majority of Namibia's people live
in rural areas and exist on a subsistence way of life. Namibia has one of the highest rates of income inequality
in the world, due in part to the fact that there is an urban economy and a more rural cash-less economy. The inequality
figures thus take into account people who do not actually rely on the formal economy for their survival. Although
arable land accounts for only 1% of Namibia, nearly half of the population is employed in agriculture. Providing
25% of Namibia's revenue, mining is the single most important contributor to the economy. Namibia is the fourth largest
exporter of non-fuel minerals in Africa and the world's fourth largest producer of uranium. There has been significant
investment in uranium mining and Namibia is set to become the largest exporter of uranium by 2015. Rich alluvial
diamond deposits make Namibia a primary source for gem-quality diamonds. While Namibia is known predominantly for
its gem diamond and uranium deposits, a number of other minerals are extracted industrially such as lead, tungsten,
gold, tin, fluorspar, manganese, marble, copper and zinc. There are offshore gas deposits in the Atlantic Ocean that
are planned to be extracted in the future. According to "The Diamond Investigation", a book about the global diamond
market, from 1978, De Beers, the largest diamond company, bought most of the Namibian diamonds, and would continue
to do so, because "whatever government eventually comes to power they will need this revenue to survive". There are
many lodges and reserves to accommodate eco-tourists. Sport hunting is also a large, and growing component of the
Namibian economy, accounting for 14% of total tourism in the year 2000, or US$19.6 million, with Namibia
boasting numerous species sought after by international sport hunters. In addition, extreme sports such as sandboarding,
skydiving and 4x4ing have become popular, and many cities have companies that provide tours. The
most visited places include the capital city of Windhoek, Caprivi Strip, Fish River Canyon, Sossusvlei, the Skeleton
Coast Park, Sesriem, Etosha Pan and the coastal towns of Swakopmund, Walvis Bay and Lüderitz. The capital city of
Windhoek plays a very important role in Namibia's tourism due to its central location and close proximity to Hosea
Kutako International Airport. According to The Namibia Tourism Exit Survey, which was produced by the Millennium
Challenge Corporation for the Namibian Directorate of Tourism, 56% of all tourists visiting Namibia during the period 2012–2013 visited Windhoek. Many of Namibia's tourism-related parastatals and governing bodies such as
Namibia Wildlife Resorts, Air Namibia and the Namibia Tourism Board as well as Namibia's tourism related trade associations
such as the Hospitality Association of Namibia are also all headquartered in Windhoek. There are also a number of
notable hotels in Windhoek such as Windhoek Country Club Resort and some international hotel chains also operate
in Windhoek, such as Avani Hotels and Resorts and Hilton Hotels and Resorts. Namibia's primary tourism related governing
body, the Namibia Tourism Board (NTB), was established by an Act of Parliament: the Namibia Tourism Board Act, 2000
(Act 21 of 2000). Its primary objectives are to regulate the tourism industry and to market Namibia as a tourist
destination. There are also a number of trade associations that represent the tourism sector in Namibia, such as
the Federation of Namibia Tourism Associations (the umbrella body for all tourism associations in Namibia), the Hospitality
Association of Namibia, the Association of Namibian Travel Agents, Car Rental Association of Namibia and the Tour
and Safari Association of Namibia. Apart from residences for upper and middle class households, sanitation is insufficient
in most residential areas. Private flush toilets are too expensive for virtually all residents in townships due to
their water consumption and installation cost. As a result, access to improved sanitation has not increased much
since independence: In Namibia's rural areas 13% of the population had more than basic sanitation, up from 8% in
1990. Many of Namibia's inhabitants have to resort to "flying toilets": plastic bags used for defecation which are then flung into the bush. The use of open areas close to residential land to urinate and defecate is very common and
has been identified as a major health hazard. Whites (mainly of Afrikaner, German, British and Portuguese origin)
make up between 4.0 and 7.0% of the population. Although their percentage of population is decreasing due to emigration
and lower birth rates they still form the second-largest population of European ancestry, both in terms of percentage
and actual numbers, in Sub-Saharan Africa (after South Africa). The majority of Namibian whites and nearly all those
who are mixed race speak Afrikaans and share similar origins, culture, and religion as the white and coloured populations
of South Africa. A large minority of whites (around 30,000) trace their family origins back to the German settlers
who colonized Namibia prior to the British confiscation of German lands after World War I, and they maintain German
cultural and educational institutions. Nearly all Portuguese settlers came to the country from the former Portuguese
colony of Angola. The 1960 census reported 526,004 persons in what was then South-West Africa, including 73,464 whites
(14%). Namibia conducts a census every ten years. After independence the first Population and Housing Census was
carried out in 1991, further rounds followed in 2001 and 2011. The data collection method is to count every person
resident in Namibia on the census reference night, wherever they happen to be. This is called the de facto method.
For enumeration purposes the country is demarcated into 4,042 enumeration areas. These areas do not overlap with
constituency boundaries, so that reliable data can also be obtained for election purposes. Up to 1990, English, German and Afrikaans
were official languages. Long before Namibia's independence from South Africa, SWAPO was of the opinion that the
country should become officially monolingual, choosing this approach in contrast to that of its neighbour South Africa
(which granted all 11 of its major languages official status), which was seen by them as "a deliberate policy of
ethnolinguistic fragmentation." Consequently, SWAPO instituted English as the sole official language of Namibia though
only about 3% of the population speaks it as a home language. Its implementation is focused on the civil service,
education and the broadcasting system. Some other languages have received semi-official recognition by being allowed
as a medium of instruction in primary schools. Private schools are expected to follow the same policy as state
schools, and "English language" is a compulsory subject. As in other postcolonial African societies, the push for
monolingual instruction and policy has resulted in a high rate of school drop-outs and of individuals whose academic
competence in any language is low. Inline hockey was first played in 1995 and has become increasingly popular in recent years. The women's inline hockey national team participated in the 2008 FIRS World Championships. Namibia
is the home for one of the toughest footraces in the world, the Namibian ultra marathon. The most famous athlete
from Namibia is Frankie Fredericks, sprinter (100 and 200 m). He won four Olympic silver medals (1992, 1996) and
also has medals from several World Athletics Championships. He is also known for humanitarian activities in Namibia
and beyond. The first newspaper in Namibia was the German-language Windhoeker Anzeiger, founded 1898. Radio was introduced
in 1969, TV in 1981. During German rule, the newspapers mainly reflected the living reality and the view of the white
German-speaking minority. The black majority was ignored or depicted as a threat. During South African rule, the
white bias continued, with notable influence from the Pretoria government on the "South West African" media system. Independent newspapers were seen as a menace to the existing order, and critical journalists were threatened. Other notable
newspapers are the tabloid Informanté owned by TrustCo, the weekly Windhoek Observer, the weekly Namibia Economist,
as well as the regional Namib Times. Current affairs magazines include Insight Namibia, Vision2030 Focus magazine and Prime FOCUS. Sister Namibia Magazine stands out as the longest-running NGO magazine in Namibia, while
Namibia Sport is the only national sport magazine. Furthermore, the print market is complemented with party publications,
student newspapers and PR publications. Compared to neighbouring countries, Namibia has a large degree of media freedom.
In recent years, the country has usually ranked in the upper quarter of the Press Freedom Index of Reporters Without Borders, reaching position 21 in 2010, on par with Canada and the best-positioned African country. The African Media Barometer shows similarly positive results. However, as in other countries, representatives of the state and of business still exert notable influence on the media in Namibia. In 2009, Namibia dropped to position 36 on the Press Freedom Index. In 2013, it was 19th; in 2014 it ranked 22nd. Life expectancy at birth is estimated
to be 52.2 years in 2012 – among the lowest in the world. The AIDS epidemic is a large problem in Namibia. Though
its rate of infection is substantially lower than that of its eastern neighbour, Botswana, approximately 13.1% of
the adult population is infected with HIV. In 2001, there were an estimated 210,000 people living with HIV/AIDS,
and the estimated death toll in 2003 was 16,000. According to the 2011 UNAIDS Report, the epidemic in Namibia "appears
to be leveling off." As the HIV/AIDS epidemic has reduced the working-aged population, the number of orphans has
increased. It falls to the government to provide education, food, shelter and clothing for these orphans. The malaria
problem seems to be compounded by the AIDS epidemic. Research has shown that in Namibia the risk of contracting malaria
is 14.5% greater if a person is also infected with HIV. The risk of death from malaria is also raised by approximately
50% with a concurrent HIV infection. Given infection rates this large, as well as a looming malaria problem, it may
be very difficult for the government to deal with both the medical and economic impacts of this epidemic. The country
had only 598 physicians in 2002.
Russian (ру́сский язы́к, russkiy yazyk, pronounced [ˈruskʲɪj jɪˈzɨk]) is an East Slavic language and an official
language in Russia, Belarus, Kazakhstan, Kyrgyzstan and many minor or unrecognised territories. It is an unofficial
but widely-spoken language in Ukraine, Latvia, Estonia, and to a lesser extent, the other countries that were once
constituent republics of the Soviet Union and former participants of the Eastern Bloc. Russian belongs to the family
of Indo-European languages and is one of the three living members of the East Slavic languages. Written examples
of Old East Slavonic are attested from the 10th century onwards. Russian distinguishes between consonant phonemes
with palatal secondary articulation and those without, the so-called soft and hard sounds. This distinction is found
between pairs of almost all consonants and is one of the most distinguishing features of the language. Another important
aspect is the reduction of unstressed vowels. Stress, which is unpredictable, is not normally indicated orthographically
though an optional acute accent (знак ударения, znak udareniya) may be used to mark stress, such as to distinguish
between homographic words, for example замо́к (zamok, meaning a lock) and за́мок (zamok, meaning a castle), or to
indicate the proper pronunciation of uncommon words or names. Russian is a Slavic language of the Indo-European family.
It is a lineal descendant of the language used in Kievan Rus'. From the point of view of the spoken language, its
closest relatives are Ukrainian, Belarusian, and Rusyn, the other three languages in the East Slavic group. In many
places in eastern and southern Ukraine and throughout Belarus, these languages are spoken interchangeably, and in
certain areas traditional bilingualism resulted in language mixtures, e.g. Surzhyk in eastern Ukraine and Trasianka
in Belarus. The East Slavic Old Novgorod dialect, although it vanished during the 15th or 16th century, is sometimes considered to have played a significant role in the formation of modern Russian. Russian also has notable lexical
similarities with Bulgarian due to a common Church Slavonic influence on both languages, as well as because of later
interaction in the 19th–20th centuries, although Bulgarian grammar differs markedly from Russian. In the 19th century,
the language was often called "Great Russian" to distinguish it from Belarusian, then called "White Russian", and Ukrainian, then called "Little Russian". The vocabulary (mainly abstract and literary words), principles of word formation, and, to some extent, the inflections and literary style of Russian have also been influenced by Church Slavonic,
a developed and partly russified form of the South Slavic Old Church Slavonic language used by the Russian Orthodox
Church. However, the East Slavic forms have tended to be used exclusively in the various dialects that are experiencing
a rapid decline. In some cases, both the East Slavic and the Church Slavonic forms are in use, with many different
meanings. For details, see Russian phonology and History of the Russian language. Until the 20th century, the language's
spoken form was the language of only the upper noble classes and urban population, as Russian peasants from the countryside
continued to speak in their own dialects. By the mid-20th century, such dialects were forced out with the introduction
of the compulsory education system that was established by the Soviet government. Despite the formalization of Standard
Russian, some nonstandard dialectal features (such as fricative [ɣ] in Southern Russian dialects) are still observed
in colloquial speech. In Estonia, ethnic Russians constitute 25.5% of the current population, and 58.6% of the native Estonian population is also able to speak Russian. In all, 67.8% of Estonia's population can speak Russian. Command
of Russian language, however, is rapidly decreasing among younger Estonians (primarily being replaced by the command
of English). For example, while 53% of ethnic Estonians between 15 and 19 claim to speak some Russian, among the 10- to 14-year-old group command of Russian has fallen to 19% (about one-third the percentage of those
who claim to have command of English in the same age group). As the Grand Duchy of Finland was part of the Russian
Empire from 1809 to 1918, a number of Russian speakers have remained in Finland. There are 33,400 Russian-speaking
Finns, amounting to 0.6% of the population. Five thousand (0.1%) of them are late 19th century and 20th century immigrants
or their descendants, and the remaining majority are recent immigrants who moved there in the 1990s and later. Russian is spoken by 1.4% of the population of Finland according to a 2014 estimate from the World Factbook.
In Ukraine, Russian is seen as a language of inter-ethnic communication, and a minority language, under the 1996
Constitution of Ukraine. According to estimates from Demoskop Weekly, in 2004 there were 14,400,000 native speakers
of Russian in the country, and 29 million active speakers. 65% of the population was fluent in Russian in 2006, and
38% used it as the main language with family, friends or at work. Russian is spoken by 29.6% of the population according
to a 2001 estimate from the World Factbook. 20% of school students receive their education primarily in Russian.
In the 20th century, Russian was mandatorily taught in the schools of the members of the old Warsaw Pact and in other
countries that used to be satellites of the USSR. In particular, these countries include Poland, Bulgaria, the Czech
Republic, Slovakia, Hungary, Albania, former East Germany and Cuba. However, younger generations are usually not
fluent in it, because Russian is no longer mandatory in the school system. According to the Eurobarometer 2005 survey,
though, fluency in Russian remains fairly high (20–40%) in some countries, in particular those where the people speak
a Slavic language and thereby have an edge in learning Russian (namely, Poland, Czech Republic, Slovakia, and Bulgaria).
Significant Russian-speaking groups also exist in Western Europe. These have been fed by several waves of immigrants
since the beginning of the 20th century, each with its own flavor of language. The United Kingdom, Spain, Portugal,
France, Italy, Belgium, Greece, Brazil, Norway, and Austria have significant Russian-speaking communities. According
to the 2011 Census of Ireland, there were 21,639 people in the nation who use Russian as a home language. However,
of these, only 13% were Russian nationals. 20% held Irish citizenship, while 27% and 14% held the passports of Latvia and Lithuania respectively. In Armenia, Russian has no official status, but it is recognised as a minority
language under the Framework Convention for the Protection of National Minorities. According to estimates from Demoskop
Weekly, in 2004 there were 15,000 native speakers of Russian in the country, and 1 million active speakers. 30% of
the population was fluent in Russian in 2006, and 2% used it as the main language with family, friends or at work.
Russian is spoken by 1.4% of the population according to a 2009 estimate from the World Factbook. In Georgia, Russian has no official status, but it is recognised as a minority language under the Framework Convention for the Protection
of National Minorities. According to estimates from Demoskop Weekly, in 2004 there were 130,000 native speakers of
Russian in the country, and 1.7 million active speakers. 27% of the population was fluent in Russian in 2006, and
1% used it as the main language with family, friends or at work. Russian is the language of 9% of the population
according to the World Factbook. Ethnologue cites Russian as the country's de facto working language. In Kazakhstan,
Russian is not a state language, but according to article 7 of the Constitution of Kazakhstan its usage enjoys equal
status to that of the Kazakh language in state and local administration. According to estimates from Demoskop Weekly,
in 2004 there were 4,200,000 native speakers of Russian in the country, and 10 million active speakers. 63% of the
population was fluent in Russian in 2006, and 46% used it as the main language with family, friends or at work. According
to a 2001 estimate from the World Factbook, 95% of the population can speak Russian. Large Russian-speaking communities
still exist in northern Kazakhstan, and ethnic Russians comprise 25.6% of Kazakhstan's population. The 2009 census
reported that 10,309,500 people, or 84.8% of the population aged 15 and above, could read and write well in Russian,
as well as understand the spoken language. The language was first introduced in North America when Russian explorers
voyaged into Alaska and claimed it for Russia during the 1700s. Although most colonists left after the United States
bought the land in 1867, a handful stayed and preserved the Russian language in this region to this day, although
only a few elderly speakers of this unique dialect are left. Sizable Russian-speaking communities also exist in North
America, especially in large urban centers of the U.S. and Canada, such as New York City, Philadelphia, Boston, Los
Angeles, Nashville, San Francisco, Seattle, Spokane, Toronto, Baltimore, Miami, Chicago, Denver and Cleveland. In
a number of locations they issue their own newspapers, and live in ethnic enclaves (especially the generation of
immigrants who started arriving in the early 1960s). Only about 25% of them are ethnic Russians, however. Before
the dissolution of the Soviet Union, the overwhelming majority of Russophones in Brighton Beach, Brooklyn in New
York City were Russian-speaking Jews. Afterward, the influx from the countries of the former Soviet Union changed
the statistics somewhat, with ethnic Russians and Ukrainians immigrating along with some more Russian Jews and Central
Asians. According to the United States Census, in 2007 Russian was the primary language spoken in the homes of over
850,000 individuals living in the United States. Russian is one of the official languages (or has similar status
and interpretation must be provided into Russian) of the United Nations, International Atomic Energy Agency, World
Health Organization, International Civil Aviation Organization, UNESCO, World Intellectual Property Organization,
International Telecommunication Union, World Meteorological Organization, Food and Agriculture Organization, International
Fund for Agricultural Development, International Criminal Court, International Monetary Fund, International Olympic
Committee, Universal Postal Union, World Bank, Commonwealth of Independent States, Organization for Security and
Co-operation in Europe, Shanghai Cooperation Organisation, Eurasian Economic Community, Collective Security Treaty
Organization, Antarctic Treaty Secretariat, International Organization for Standardization, GUAM Organization for
Democracy and Economic Development, and the International Mathematical Olympiad. The Russian language is also one of two
official languages aboard the International Space Station; NASA astronauts who serve alongside Russian cosmonauts
usually take Russian language courses. This practice goes back to the Apollo-Soyuz mission, which first flew in 1975.
In March 2013 it was announced that Russian is now the second-most used language on the Internet after English. People
use the Russian language on 5.9% of all websites, slightly ahead of German and far behind English (54.7%). Russian
is used not only on 89.8% of .ru sites, but also on 88.7% of sites with the former Soviet Union domain .su. The websites
of former Soviet Union nations also use high levels of Russian: 79.0% in Ukraine, 86.9% in Belarus, 84.0% in Kazakhstan,
79.6% in Uzbekistan, 75.9% in Kyrgyzstan and 81.8% in Tajikistan. However, Russian is the sixth-most used language
on the top 1,000 sites, behind English, Chinese, French, German and Japanese. Despite leveling after 1900, especially
in matters of vocabulary and phonetics, a number of dialects still exist in Russia. Some linguists divide the dialects
of Russian into two primary regional groupings, "Northern" and "Southern", with Moscow lying on the zone of transition
between the two. Others divide the language into three groupings, Northern, Central (or Middle) and Southern, with
Moscow lying in the Central region. All dialects are also divided into two main chronological categories: the dialects of primary formation (the territory of Eastern Rus' or Muscovy, roughly corresponding to the modern Central and Northwestern Federal Districts) and those of secondary formation (other territories). Dialectology within Russia recognizes dozens of smaller-scale
variants. The dialects often show distinct and non-standard features of pronunciation and intonation, vocabulary
and grammar. Some of these are relics of ancient usage now completely discarded by the standard language. The Northern
Russian dialects and those spoken along the Volga River typically pronounce unstressed /o/ clearly (the phenomenon
called okanye/оканье). Besides the absence of vowel reduction, some dialects have high or diphthongal /e~i̯ɛ/ in
the place of Proto-Slavic *ě and /o~u̯ɔ/ in stressed closed syllables (as in Ukrainian) instead of Standard Russian
/e/ and /o/. An interesting morphological feature is a post-posed definite article -to, -ta, -te, similar to that
existing in Bulgarian and Macedonian. In the Southern Russian dialects, instances of unstressed /e/ and /a/ following
palatalized consonants and preceding a stressed syllable are not reduced to [ɪ] (as occurs in the Moscow dialect),
being instead pronounced [a] in such positions (e.g. несли is pronounced [nʲaˈslʲi], not [nʲɪsˈlʲi]) – this is called
yakanye/яканье. Consonants include a fricative /ɣ/, a semivowel /w~u̯/ and /x~xv~xw/, whereas the Standard and Northern
dialects have the consonants /ɡ/, /v/, and final /l/ and /f/, respectively. The morphology features a palatalized
final /tʲ/ in 3rd person forms of verbs (this is unpalatalized in the Standard and Northern dialects). Some of these
features such as akanye/yakanye, a debuccalized or lenited /ɡ/, a semivowel /w~u̯/ and palatalized final /tʲ/ in
3rd person forms of verbs are also present in modern Belarusian and some dialects of Ukrainian (Eastern Polesian),
indicating a linguistic continuum. Among the first to study Russian dialects was Lomonosov in the 18th century. In
the 19th, Vladimir Dal compiled the first dictionary that included dialectal vocabulary. Detailed mapping of Russian
dialects began at the turn of the 20th century. In modern times, the monumental Dialectological Atlas of the Russian
Language (Диалектологический атлас русского языка [dʲɪɐˌlʲɛktəlɐˈɡʲitɕɪskʲɪj ˈatləs ˈruskəvə jɪzɨˈka]), was published
in three folio volumes 1986–1989, after four decades of preparatory work. Older letters of the Russian alphabet include
⟨ѣ⟩, which merged to ⟨е⟩ (/je/ or /ʲe/); ⟨і⟩ and ⟨ѵ⟩, which both merged to ⟨и⟩ (/i/); ⟨ѳ⟩, which merged to ⟨ф⟩ (/f/);
⟨ѫ⟩, which merged to ⟨у⟩ (/u/); ⟨ѭ⟩, which merged to ⟨ю⟩ (/ju/ or /ʲu/); and ⟨ѧ⟩ and ⟨ѩ⟩, which later were graphically
reshaped into ⟨я⟩ and merged phonetically to /ja/ or /ʲa/. While these older letters have been abandoned at one time or another, they may still be encountered in pre-reform texts. The yers ⟨ъ⟩ and ⟨ь⟩ originally indicated the pronunciation
of ultra-short or reduced /ŭ/, /ĭ/. Because of many technical restrictions in computing and also because of the unavailability
of Cyrillic keyboards abroad, Russian is often transliterated using the Latin alphabet. For example, мороз ('frost')
is transliterated moroz, and мышь ('mouse'), mysh or myš'. Once commonly used by the majority of those living outside
Russia, transliteration is being used less frequently by Russian-speaking typists in favor of Unicode character encoding, which fully incorporates the Russian alphabet. Free programs leveraging Unicode
are available which allow users to type Russian characters, even on Western 'QWERTY' keyboards. The Russian alphabet
has many systems of character encoding. KOI8-R was designed by the Soviet government and was intended to serve as
the standard encoding. This encoding was and still is widely used in UNIX-like operating systems. Nevertheless, the
spread of MS-DOS and OS/2 (IBM866), traditional Macintosh (ISO/IEC 8859-5) and Microsoft Windows (CP1251) created
chaos and ended by establishing different encodings as de facto standards, with Windows-1251 becoming a de facto
standard in Russian Internet and e-mail communication during the period of roughly 1995–2005. According to the Institute
of Russian Language of the Russian Academy of Sciences, an optional acute accent (знак ударения) may, and sometimes
should, be used to mark stress. For example, it is used to distinguish between otherwise identical words, especially
when context does not make it obvious: замо́к/за́мок (lock/castle), сто́ящий/стоя́щий (worthwhile/standing), чудно́/чу́дно
(this is odd/this is marvelous), молоде́ц/мо́лодец (attaboy/fine young man), узна́ю/узнаю́ (I shall learn it/I recognize
it), отреза́ть/отре́зать (to be cutting/to have cut); to indicate the proper pronunciation of uncommon words, especially
personal and family names (афе́ра, гу́ру, Гарси́я, Оле́ша, Фе́рми), and to show which is the stressed word in a sentence
(Ты́ съел печенье?/Ты съе́л печенье?/Ты съел пече́нье? – Was it you who ate the cookie?/Did you eat the cookie?/Was
it the cookie that you ate?). Stress marks are mandatory in lexical dictionaries and books for children or Russian
learners. The language possesses five vowels (or six, under the St. Petersburg Phonological School), which are written
with different letters depending on whether or not the preceding consonant is palatalized. The consonants typically
come in plain vs. palatalized pairs, which are traditionally called hard and soft. (The hard consonants are often
velarized, especially before front vowels, as in Irish). The standard language, based on the Moscow dialect, possesses
heavy stress and moderate variation in pitch. Stressed vowels are somewhat lengthened, while unstressed vowels tend
to be reduced to near-close vowels or an unclear schwa. (See also: vowel reduction in Russian.) Russian is notable
for its distinction based on palatalization of most of the consonants. While /k/, /ɡ/, /x/ do have palatalized allophones
[kʲ, ɡʲ, xʲ], only /kʲ/ might be considered a phoneme, though it is marginal and generally not considered distinctive
(the only native minimal pair which argues for /kʲ/ to be a separate phoneme is "это ткёт" ([ˈɛtə tkʲɵt], 'it weaves')/"этот
кот" ([ˈɛtət kot], 'this cat')). Palatalization means that the center of the tongue is raised during and after the
articulation of the consonant. In the case of /tʲ/ and /dʲ/, the tongue is raised enough to produce slight frication
(affricate sounds). These sounds: /t, d, ts, s, z, n and rʲ/ are dental, that is pronounced with the tip of the tongue
against the teeth rather than against the alveolar ridge. Judging by the historical records, by approximately 1000
AD the predominant ethnic group over much of modern European Russia, Ukraine and Belarus was the Eastern branch of
the Slavs, speaking a closely related group of dialects. The political unification of this region into Kievan Rus'
in about 880, from which modern Russia, Ukraine and Belarus trace their origins, established Old East Slavic as a
literary and commercial language. It was soon followed by the adoption of Christianity in 988 and the introduction
of the South Slavic Old Church Slavonic as the liturgical and official language. Borrowings and calques from Byzantine
Greek began to enter the Old East Slavic and spoken dialects at this time, which in their turn modified the Old Church
Slavonic as well. The political reforms of Peter the Great (Пётр Вели́кий, Pyótr Velíkiy) were accompanied by a reform
of the alphabet, and achieved their goal of secularization and Westernization. Blocks of specialized vocabulary were
adopted from the languages of Western Europe. By 1800, a significant portion of the gentry spoke French daily, and
German sometimes. Many Russian novels of the 19th century, e.g. Leo Tolstoy's (Лев Толсто́й) War and Peace, contain
entire paragraphs and even pages in French with no translation given, with an assumption that educated readers would
not need one. The modern literary language is usually considered to date from the time of Alexander Pushkin (Алекса́ндр
Пу́шкин) in the first third of the 19th century. Pushkin revolutionized Russian literature by rejecting archaic grammar
and vocabulary (so-called "высо́кий стиль" — "high style") in favor of grammar and vocabulary found in the spoken
language of the time. Even younger modern readers experience only slight difficulty understanding some words in Pushkin's texts, since relatively few words used by Pushkin have become archaic or changed meaning. In fact,
many expressions used by Russian writers of the early 19th century, in particular Pushkin, Mikhail Lermontov (Михаи́л
Ле́рмонтов), Nikolai Gogol (Никола́й Го́голь), Aleksander Griboyedov (Алекса́ндр Грибое́дов), became proverbs or
sayings which can be frequently found even in modern Russian colloquial speech. During the Soviet period, the policy
toward the languages of the various other ethnic groups fluctuated in practice. Though each of the constituent republics
had its own official language, the unifying role and superior status was reserved for Russian, although it was declared
the official language only in 1990. Following the break-up of the USSR in 1991, several of the newly independent
states have encouraged their native languages, which has partly reversed the privileged status of Russian, though
its role as the language of post-Soviet national discourse throughout the region has continued. According to figures published in 2006 in the journal Demoskop Weekly by A. L. Arefyev, deputy research director of the Sociological Research Center of the Russian Ministry of Education and Science, the Russian language is gradually losing its position in the world in general, and in Russia in particular. In 2012, Arefyev published a new study, "Russian Language at the Turn of the 20th–21st Centuries", in which he confirmed his conclusion about the trend of further weakening of the Russian language in all regions of the world (findings published in 2013 in Demoskop Weekly).
In the countries of the former Soviet Union the Russian language is gradually being replaced by local languages.
Currently, the number of Russian speakers in the world depends on the number of ethnic Russians worldwide (the main carriers of the language) and on the total population of Russia (where Russian is an official language).
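The transliteration examples given earlier (мороз → moroz, мышь → mysh') and the pre-Unicode "encoding chaos" can be illustrated with a short sketch. The TRANSLIT table below is a hypothetical simplified mapping covering only the sample words, not an official romanization standard such as ISO 9 or BGN/PCGN:

```python
# Sketch of (1) ad-hoc Latin transliteration of Cyrillic and (2) how the
# same word is byte-encoded differently under legacy encodings vs. UTF-8.

# Hypothetical, simplified transliteration table (real schemes differ).
TRANSLIT = {
    "м": "m", "о": "o", "р": "r", "з": "z", "ы": "y", "ш": "sh", "ь": "'",
}

def translit(word: str) -> str:
    """Transliterate Cyrillic to Latin, passing unknown characters through."""
    return "".join(TRANSLIT.get(ch, ch) for ch in word.lower())

print(translit("мороз"))   # moroz
print(translit("мышь"))    # mysh'

# The same word maps to different byte sequences under each legacy
# encoding (KOI8-R, Windows-1251), which is why mixed-encoding text
# was garbled before Unicode became the de facto standard.
word = "мороз"
for enc in ("koi8_r", "cp1251", "utf_8"):
    print(enc, word.encode(enc).hex())
```

Because each legacy code page assigns Cyrillic letters to different byte values, text saved in one encoding and read in another comes out as mojibake; UTF-8 avoids this by giving every character a single universal code point.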
The United States Air Force (USAF) is the aerial warfare service branch of the United States Armed Forces and one of the
seven American uniformed services. Initially part of the United States Army, the USAF was formed as a separate branch
of the military on 18 September 1947 under the National Security Act of 1947. It is the most recent branch of the
U.S. military to be formed, and is the largest and one of the world's most technologically advanced air forces. The
USAF articulates its core functions as Nuclear Deterrence Operations, Special Operations, Air Superiority, Global
Integrated ISR, Space Superiority, Command and Control, Cyberspace Superiority, Personnel Recovery, Global Precision
Attack, Building Partnerships, Rapid Global Mobility and Agile Combat Support. The U.S. Air Force is a military service
organized within the Department of the Air Force, one of the three military departments of the Department of Defense.
The Air Force is headed by the civilian Secretary of the Air Force, who reports to the Secretary of Defense, and
is appointed by the President with Senate confirmation. The highest-ranking military officer in the Department of
the Air Force is the Chief of Staff of the Air Force, who exercises supervision over Air Force units, and serves
as a member of the Joint Chiefs of Staff. Air Force combat and mobility forces are assigned, as directed by the Secretary
of Defense, to the Combatant Commanders, and neither the Secretary of the Air Force nor the Chief of Staff has operational
command authority over them. Recently, the Air Force refined its understanding of the core duties and responsibilities
it performs as a Military Service Branch, streamlining what previously were six distinctive capabilities and seventeen
operational functions into twelve core functions to be used across the doctrine, organization, training, equipment, leadership and education, personnel, and facilities spectrum. These core functions express the ways in which the
Air Force is particularly and appropriately suited to contribute to national security, but they do not necessarily
express every aspect of what the Air Force contributes to the nation. It should be emphasized that the core functions,
by themselves, are not doctrinal constructs. Assure/Dissuade/Deter is a mission set derived from the Air Force's
readiness to carry out the nuclear strike operations mission as well as from specific actions taken to assure allies
as a part of extended deterrence. Dissuading others from acquiring or proliferating WMD, and the means to deliver
them, contributes to promoting security and is also an integral part of this mission. Moreover, different deterrence
strategies are required to deter various adversaries, whether a nation state or a non-state/transnational
actor. The Air Force maintains and presents credible deterrent capabilities through successful visible demonstrations
and exercises which assure allies, dissuade proliferation, deter potential adversaries from actions that threaten
US national security or the populations and deployed military forces of the US, its allies and friends. Nuclear strike
is the ability of nuclear forces to rapidly and accurately strike targets which the enemy holds dear in a devastating
manner. If a crisis occurs, rapid generation and, if necessary, deployment of nuclear strike capabilities will demonstrate
US resolve and may prompt an adversary to alter the course of action deemed threatening to our national interest.
Should deterrence fail, the President may authorize a precise, tailored response to terminate the conflict at the
lowest possible level and lead to a rapid cessation of hostilities. Post-conflict, regeneration of a credible nuclear
deterrent capability will deter further aggression. The Air Force may present a credible force posture in either
the Continental United States, within a theater of operations, or both to effectively deter the range of potential
adversaries envisioned in the 21st century. This requires the ability to engage targets globally using a variety
of methods; therefore, the Air Force should possess the ability to induct, train, assign, educate and exercise individuals
and units to rapidly and effectively execute missions that support US NDO objectives. Finally, the Air Force regularly
exercises and evaluates all aspects of nuclear operations to ensure high levels of performance. Nuclear surety ensures
the safety, security and effectiveness of nuclear operations. Because of their political and military importance,
destructive power, and the potential consequences of an accident or unauthorized act, nuclear weapons and nuclear
weapon systems require special consideration and protection against risks and threats inherent in their peacetime
and wartime environments. The Air Force, in conjunction with other entities within the Departments of Defense or
Energy, achieves a high standard of protection through a stringent nuclear surety program. This program applies to
materiel, personnel, and procedures that contribute to the safety, security, and control of nuclear weapons, thus
assuring no nuclear accidents, incidents, loss, or unauthorized or accidental use (a Broken Arrow incident). The
Air Force continues to pursue safe, secure and effective nuclear weapons consistent with operational requirements.
Adversaries, allies, and the American people must be highly confident of the Air Force's ability to secure nuclear
weapons from accidents, theft, loss, and accidental or unauthorized use. This day-to-day commitment to precise and
reliable nuclear operations is the cornerstone of the credibility of the NDO mission. Positive nuclear command, control,
communications; effective nuclear weapons security; and robust combat support are essential to the overall NDO function.
Offensive Counterair (OCA) is defined as "offensive operations to destroy, disrupt, or neutralize enemy aircraft,
missiles, launch platforms, and their supporting structures and systems both before and after launch, but as close
to their source as possible" (JP 1-02). OCA is the preferred method of countering air and missile threats, since
it attempts to defeat the enemy closer to its source and typically enjoys the initiative. OCA comprises attack operations,
sweep, escort, and suppression/destruction of enemy air defense. Defensive Counterair (DCA) is defined as "all the
defensive measures designed to detect, identify, intercept, and destroy or negate enemy forces attempting to penetrate
or attack through friendly airspace" (JP 1-02). A major goal of DCA operations, in concert with OCA operations, is
to provide an area from which forces can operate, secure from air and missile threats. The DCA mission comprises
both active and passive defense measures. Active defense is "the employment of limited offensive action and counterattacks
to deny a contested area or position to the enemy" (JP 1-02). It includes both ballistic missile defense and air
breathing threat defense, and encompasses point defense, area defense, and high value airborne asset defense. Passive
defense is "measures taken to reduce the probability of and to minimize the effects of damage caused by hostile action
without the intention of taking the initiative" (JP 1-02). It includes detection and warning; chemical, biological,
radiological, and nuclear defense; camouflage, concealment, and deception; hardening; reconstitution; dispersion;
redundancy; and mobility, counter-measures, and stealth. Space superiority is "the degree of dominance in space of
one force over another that permits the conduct of operations by the former and its related land, sea, air, space,
and special operations forces at a given time and place without prohibitive interference by the opposing force" (JP
1-02). Space superiority may be localized in time and space, or it may be broad and enduring. Space superiority provides
freedom of action in space for friendly forces and, when directed, denies the same freedom to the adversary. Space
Control is defined as "operations to ensure freedom of action in space for the US and its allies and, when directed,
deny an adversary freedom of action in space. This mission area includes: operations conducted to protect friendly
space capabilities from attack, interference, or unintentional hazards (defensive space control); operations to deny
an adversary's use of space capabilities (offensive space control); and the requisite current and predictive knowledge
of the space environment and the operational environment upon which space operations depend (space situational awareness)"
(JP 1-02). Cyberspace defense is the passive, active, and dynamic employment of capabilities to respond to imminent or ongoing actions against Air Force or Air Force-protected networks, the Air Force's portion of the Global Information Grid, or expeditionary communications assigned to the Air Force. Cyberspace defense incorporates computer network exploitation (CNE), computer network defense (CND), and computer network attack (CNA) techniques and may be a contributor to influence operations. It is highly dependent upon ISR,
fused all-source intelligence, automated indications and warning, sophisticated attribution/characterization, situational
awareness, assessment, and responsive C2. Cyberspace Support is foundational, continuous, or responsive operations
ensuring information integrity and availability in, through, and from Air Force-controlled infrastructure and its
interconnected analog and digital portion of the battle space. Inherent in this mission is the ability to establish,
extend, secure, protect, and defend in order to sustain assigned networks and missions. This includes protection measures for supply chain components as well as critical C2 networks/communications links and nuclear C2 networks.
The cyberspace support mission incorporates CNE and CND techniques. It incorporates all elements of Air Force Network
Operations, information transport, enterprise management, and information assurance, and is dependent on ISR and
all-source intelligence. Command and control is "the exercise of authority and direction by a properly designated
commander over assigned and attached forces in the accomplishment of the mission. Command and control functions are
performed through an arrangement of personnel, equipment, communications, facilities, and procedures employed by
a commander in planning, directing, coordinating, and controlling forces and operations in the accomplishment of
the mission" (JP 1-02). This core function includes all of the C2-related capabilities and activities associated
with air, space, cyberspace, nuclear, and agile combat support operations to achieve strategic, operational, and
tactical objectives. Planning and Directing is "the determination of intelligence requirements, development of appropriate
intelligence architecture, preparation of a collection plan, and issuance of orders and requests to information collection
agencies" (JP 2-01, Joint and National Intelligence Support to Military Operations). These activities enable the
synchronization and integration of collection, processing, exploitation, analysis, and dissemination activities/resources
to meet information requirements of national and military decision makers. Special Operations are "operations conducted
in hostile, denied, or politically sensitive environments to achieve military, diplomatic, informational, and/or
economic objectives employing military capabilities for which there is no broad conventional force requirement. These
operations may require covert, clandestine, or low-visibility capabilities. Special operations are applicable across
the ROMO. They can be conducted independently or in conjunction with operations of conventional forces or other government
agencies and may include operations through, with, or by indigenous or surrogate forces. Special operations differ
from conventional operations in degree of physical and political risk, operational techniques, mode of employment,
independence from friendly support, and dependence on detailed operational intelligence and indigenous assets" (JP
1-02). Airlift is "operations to transport and deliver forces and materiel through the air in support of strategic,
operational, or tactical objectives" (AFDD 3–17, Air Mobility Operations). The rapid and flexible options afforded
by airlift allow military forces and national leaders the ability to respond and operate in a variety of situations
and time frames. The global reach capability of airlift provides the ability to apply US power worldwide by delivering
forces to crisis locations. It serves as a US presence that demonstrates resolve and compassion in humanitarian crises.
Aeromedical Evacuation is "the movement of patients under medical supervision to and between medical treatment facilities
by air transportation" (JP 1-02). JP 4-02, Health Service Support, further defines it as "the fixed wing movement
of regulated casualties to and between medical treatment facilities, using organic and/or contracted mobility airframes,
with aircrew trained explicitly for this mission." Aeromedical evacuation forces can operate as far forward as fixed-wing
aircraft are able to conduct airland operations. Personnel Recovery (PR) is defined as "the sum of military, diplomatic,
and civil efforts to prepare for and execute the recovery and reintegration of isolated personnel" (JP 1-02). It
is the ability of the US government and its international partners to effect the recovery of isolated personnel across
the ROMO and return those personnel to duty. PR also enhances the development of an effective, global capacity to
protect and recover isolated personnel wherever they are placed at risk; deny an adversary's ability to exploit a
nation through propaganda; and develop joint, interagency, and international capabilities that contribute to crisis
response and regional stability. Humanitarian Assistance Operations are "programs conducted to relieve or reduce
the results of natural or manmade disasters or other endemic conditions such as human pain, disease, hunger, or privation
that might present a serious threat to life or that can result in great damage to or loss of property. Humanitarian
assistance provided by US forces is limited in scope and duration. The assistance provided is designed to supplement
or complement the efforts of the host nation civil authorities or agencies that may have the primary responsibility
for providing humanitarian assistance" (JP 1-02). Building Partnerships is described as airmen interacting with international
airmen and other relevant actors to develop, guide, and sustain relationships for mutual benefit and security. Building
Partnerships is about interacting with others and is therefore an inherently inter-personal and cross-cultural undertaking.
Through both words and deeds, the majority of interaction is devoted to building trust-based relationships for mutual
benefit. It includes both foreign partners as well as domestic partners and emphasizes collaboration with foreign
governments, militaries and populations as well as US government departments, agencies, industry, and NGOs. To better
facilitate partnering efforts, Airmen should be competent in the relevant language, region, and culture. The U.S.
War Department created the first antecedent of the U.S. Air Force in 1907, which through a succession of changes
of organization, titles, and missions advanced toward eventual separation 40 years later. In World War II, almost
68,000 U.S. airmen died helping to win the war; only the infantry suffered more enlisted casualties. In practice,
the U.S. Army Air Forces (USAAF) was virtually independent of the Army during World War II, but officials wanted
formal independence. The National Security Act of 1947 was signed on 26 July 1947 by President Harry S. Truman, which established the Department of the Air Force, but it was not until 18 September 1947, when the first Secretary of the Air Force, W. Stuart Symington, was sworn into office, that the Air Force was officially formed. The act created
the National Military Establishment (renamed Department of Defense in 1949), which was composed of three subordinate
Military Departments, namely the Department of the Army, the Department of the Navy, and the newly created Department
of the Air Force. Prior to 1947, the responsibility for military aviation was shared between the Army (for land-based
operations), the Navy (for sea-based operations from aircraft carriers and amphibious aircraft), and the Marine Corps
(for close air support of infantry operations). The 1940s proved to be important in other ways as well. In 1947,
Captain Chuck Yeager broke the sound barrier in his X-1 rocket-powered aircraft, beginning a new era of aeronautics
in America. During the early 2000s, the USAF fumbled several high-profile aircraft procurement projects, such as the KC-X tanker program. Winslow Wheeler has written that this pattern represents "failures of intellect
and – much more importantly – ethics." As a result, the USAF fleet is setting new records for average aircraft age
and needs to replace its fleets of fighters, bombers, airborne tankers, and airborne warning aircraft, in an age
of restrictive defense budgets. Finally, in the midst of scandal and failure in maintaining its nuclear arsenal, the civilian and military leaders of the Air Force were replaced in 2008. Since 2005, the USAF has placed a strong focus
on the improvement of Basic Military Training (BMT) for enlisted personnel. While the intense training has become
longer, it has also shifted to include a deployment phase. This deployment phase, now called the BEAST, places the trainees in a simulated environment like one they may experience once they deploy. While trainees tackle massive obstacle courses during the BEAST, the other portions include defending and protecting their base of operations, forming a leadership structure, directing search and recovery, and basic self-aid and buddy care. During this event,
the Military Training Instructors (MTI) act as mentors and enemy forces in a deployment exercise. In 2007, the USAF
undertook a Reduction-in-Force (RIF). Because of budget constraints, the USAF planned to reduce the service's size
from 360,000 active duty personnel to 316,000. The active duty force in 2007 was roughly 64% of its size at the end of the first Gulf War in 1991. However, the reduction was halted at approximately
330,000 personnel in 2008 in order to meet the demand signal of combatant commanders and associated mission requirements.
These same constraints have seen a sharp reduction in flight hours for crew training since 2005 and the Deputy Chief
of Staff for Manpower and Personnel directing Airmen's Time Assessments. On 5 June 2008, Secretary of Defense Robert
Gates, accepted the resignations of both the Secretary of the Air Force, Michael Wynne, and the Chief of Staff of
the United States Air Force, General T. Michael Moseley. Gates in effect fired both men for "systemic issues associated
with declining Air Force nuclear mission focus and performance." This followed an investigation into two embarrassing
incidents involving mishandling of nuclear weapons: specifically a nuclear weapons incident aboard a B-52 flight
between Minot AFB and Barksdale AFB, and an accidental shipment of nuclear weapons components to Taiwan. The resignations were also the culmination of disputes between Gates and the Air Force leadership, populated primarily by fighter pilots without nuclear backgrounds. To put more emphasis on nuclear assets, the USAF established the nuclear-focused Air
Force Global Strike Command on 24 October 2008. Due to the budget sequestration in 2013, the USAF was forced to ground many of its squadrons. The Commander of Air Combat Command, General Mike Hostage, indicated that the USAF must reduce
its F-15 and F-16 fleets and eliminate platforms like the A-10 in order to focus on a fifth-generation jet fighter
future. In response to squadron groundings and flight time reductions, many Air Force pilots have opted to resign
from active duty and enter the Air Force Reserve and Air National Guard while pursuing careers in the commercial
airlines where they can find flight hours on more modern aircraft. Specific concerns include a compounded inability
for the Air Force to replace its aging fleet, and an overall reduction of strength and readiness. The USAF attempted
to make these adjustments by primarily cutting the Air National Guard and Air Force Reserve aircraft fleets and their
associated manpower, but Congress reversed this initiative and the majority of the lost manpower will come from the
active forces. However, Congress did allow $208 million to be reprogrammed from fleet modernization to enable a portion of the grounded third of the fleet to resume operations. The Department of the Air Force is one of three
military departments within the Department of Defense, and is managed by the civilian Secretary of the Air Force,
under the authority, direction, and control of the Secretary of Defense. The senior officials in the Office of the
Secretary are the Under Secretary of the Air Force, four Assistant Secretaries of the Air Force and the General Counsel,
all of whom are appointed by the President with the advice and consent of the Senate. The senior uniformed leadership
in the Air Staff is made up of the Chief of Staff of the Air Force and the Vice Chief of Staff of the Air Force.
The organizational structure as shown above is responsible for the peacetime organization, equipping, and training
of aerospace units for operational missions. When required to support operational missions, the Secretary of Defense
(SECDEF) directs the Secretary of the Air Force (SECAF) to execute a Change in Operational Control (CHOP) of these
units from their administrative alignment to the operational command of a Regional Combatant Commander (CCDR). In
the case of AFSPC, AFSOC, PACAF, and USAFE units, forces are normally employed in-place under their existing CCDR.
Likewise, AMC forces operating in support roles retain their componency to USTRANSCOM unless chopped to a Regional
CCDR. "Chopped" units are referred to as forces. The top-level structure of these forces is the Air and Space Expeditionary
Task Force (AETF). The AETF is the Air Force presentation of forces to a CCDR for the employment of Air Power. Each
CCDR is supported by a standing Component Numbered Air Force (C-NAF) to provide planning and execution of aerospace
forces in support of CCDR requirements. Each C-NAF consists of a Commander, Air Force Forces (COMAFFOR) and AFFOR/A-staff,
and an Air Operations Center (AOC). As needed to support multiple Joint Force Commanders (JFC) in the COCOM's Area
of Responsibility (AOR), the C-NAF may deploy Air Component Coordination Elements (ACCE) to liaise with the JFC. If
the Air Force possesses the preponderance of air forces in a JFC's area of operations, the COMAFFOR will also serve as the Joint Force Air Component Commander (JFACC). Air Force Specialty Codes (AFSCs) range from officer specialties such as pilot, combat systems
officer, missile launch officer, intelligence officer, aircraft maintenance officer, judge advocate general (JAG),
medical doctor, nurse or other fields, to various enlisted specialties. The latter range from flight combat operations
such as a gunner, to working in a dining facility to ensure that members are properly fed. There are additional occupational
fields such as computer specialties, mechanic specialties, enlisted aircrew, communication systems, cyberspace operations,
avionics technicians, medical specialties, civil engineering, public affairs, hospitality, law, drug counseling,
mail operations, security forces, and search and rescue specialties. Beyond combat flight crew personnel, perhaps the most dangerous USAF jobs are Explosive Ordnance Disposal (EOD), Combat Rescue Officer, Pararescue, Security Forces, Combat Control, Combat Weather, Tactical Air Control Party, and AFOSI agents, who deploy with infantry and special operations units to disarm bombs, rescue downed or isolated personnel, call in air strikes, and set up landing zones in forward locations. Most of these are enlisted positions, augmented by a smaller number of commissioned officers.
Other career fields that have seen increasing exposure to combat include civil engineers, vehicle operators, and
Air Force Office of Special Investigations (AFOSI) personnel. Training programs vary in length; for example, 3M0X1 (Services) requires 31 days of technical school training, while 3E8X1 (Explosive Ordnance Disposal) requires a year of training, with a preliminary school and a main school consisting of over 10 separate divisions, sometimes taking students close
to two years to complete. Officer technical training conducted by Second Air Force can also vary by AFSC, while flight
training for aeronautically-rated officers conducted by AETC's Nineteenth Air Force can last well in excess of one
year. USAF rank is divided between enlisted airmen, non-commissioned officers, and commissioned officers, and ranges
from the enlisted Airman Basic (E-1) to the commissioned officer rank of General (O-10). Enlisted promotions are
granted based on a combination of test scores, years of experience, and selection board approval while officer promotions
are based on time-in-grade and a promotion selection board. Promotions among enlisted personnel and non-commissioned
officers are generally designated by increasing numbers of insignia chevrons. Commissioned officer rank is designated
by bars, oak leaves, a silver eagle, and anywhere from one to four stars (one to five stars in war-time). Air Force officer promotions are governed by the Defense Officer Personnel Management Act of 1980 and its
companion Reserve Officer Personnel Management Act (ROPMA) for officers in the Air Force Reserve and the Air National
Guard. DOPMA also establishes limits on the number of officers that can serve at any given time in the Air Force.
Currently, promotion from second lieutenant to first lieutenant is virtually guaranteed after two years of satisfactory
service. The promotion from first lieutenant to captain is competitive after successfully completing another two
years of service, with a selection rate varying between 99% and 100%. Promotion to major through major general is
through a formal selection board process, while promotions to lieutenant general and general are contingent upon
nomination to specific general officer positions and subject to U.S. Senate approval. During the board process an
officer's record is reviewed by a selection board at the Air Force Personnel Center at Randolph Air Force Base in
San Antonio, Texas. At the 10-to-11-year mark, captains meet a selection board for promotion to major. If not selected, they will meet a follow-on board to determine whether they will be allowed to remain in the Air Force. Promotion from
major to lieutenant colonel is similar, occurring approximately between the thirteen-year mark (for officers who were promoted to major early, "below the zone") and the fifteen-year mark, where a certain percentage of majors will be selected below zone (i.e., "early"), in zone (i.e., "on time"), or above zone (i.e., "late") for promotion to lieutenant colonel. This process repeats from the sixteen-year mark (for officers previously promoted early to major and lieutenant colonel) to the twenty-one-year mark for promotion to full colonel. Although provision is made in Title 10 of the United
States Code for the Secretary of the Air Force to appoint warrant officers, the Air Force does not currently use
warrant officer grades, and is the only one of the U.S. Armed Services not to do so. The Air Force inherited warrant
officer ranks from the Army at its inception in 1947, but their place in the Air Force structure was never made clear. When Congress authorized the creation of two new senior enlisted ranks in 1958, Air Force officials privately
concluded that these two new "super grades" could fill all Air Force needs then performed at the warrant officer
level, although this was not publicly acknowledged until years later. The Air Force stopped appointing
warrant officers in 1959, the same year the first promotions were made to the new top enlisted grade, Chief Master
Sergeant. Most of the existing Air Force warrant officers entered the commissioned officer ranks during the 1960s,
but small numbers continued to exist in the warrant officer grades for the next 21 years. Enlisted members of the
USAF have pay grades from E-1 (entry level) to E-9 (senior enlisted). While all USAF military personnel are referred
to as Airmen, the term also refers to the pay grades of E-1 through E-4, which are below the level of non-commissioned
officers (NCOs). Above the pay grade of E-4 (i.e., pay grades E-5 through E-9) all ranks fall into the category of
NCO and are further subdivided into "NCOs" (pay grades E-5 and E-6) and "Senior NCOs" (pay grades E-7 through E-9);
the term "Junior NCO" is sometimes used to refer to staff sergeants and technical sergeants (pay grades E-5 and E-6).
The USAF is the only branch of the U.S. military where NCO status is achieved when an enlisted person reaches the
pay grade of E-5. In all other branches, NCO status is generally achieved at the pay grade of E-4 (e.g., a Corporal
in the Army and Marine Corps, Petty Officer Third Class in the Navy and Coast Guard). The Air Force mirrored the Army from 1976 to 1991, with an E-4 being either a Senior Airman, wearing three stripes without a star, or a Sergeant (referred to as a "Buck Sergeant"), denoted by a central star and considered an NCO. Despite not being an NCO, a Senior Airman who has completed Airman Leadership School can be a supervisor according to
AFI 36-2618. The first USAF dress uniform, in 1947, was dubbed and patented "Uxbridge Blue" after "Uxbridge 1683
Blue", developed at the former Bachman-Uxbridge Worsted Company. The current Service Dress Uniform, which was adopted
in 1993 and standardized in 1995, consists of a three-button, pocketless coat, similar to that of a men's "sport
jacket" (with silver "U.S." pins on the lapels, with a silver ring surrounding on those of enlisted members), matching
trousers, and either a service cap or flight cap, all in Shade 1620, "Air Force Blue" (a darker purplish-blue). This
is worn with a light blue shirt (Shade 1550) and Shade 1620 herringbone patterned necktie. Enlisted members wear
sleeve insignia on both the jacket and shirt, while officers wear metal rank insignia pinned onto the coat, and Air
Force Blue slide-on epaulet loops on the shirt. USAF personnel assigned to Base Honor Guard duties wear, for certain
occasions, a modified version of the standard service dress uniform, but with silver trim on the sleeves and trousers,
with the addition of a ceremonial belt (if necessary), a wheel cap with silver trim and Hap Arnold Device, a silver aiguillette placed on the left shoulder seam, and all devices and accoutrements. In addition to basic uniform clothing,
various badges are used by the USAF to indicate a billet assignment or qualification-level for a given assignment.
Badges can also be used as merit-based or service-based awards. Over time, various badges have been discontinued
and are no longer distributed. Authorized badges include the shields of USAF Fire Protection and Security Forces, and the Missile Badge (or "pocket rocket"), which is earned after working in a missile system maintenance or missile
operations capacity for at least one year. Officers may be commissioned upon graduation from the United States Air
Force Academy, upon graduation from another college or university through the Air Force Reserve Officer Training
Corps (AFROTC) program, or through the Air Force Officer Training School (OTS). OTS, located at Lackland AFB, Texas, until 1993 and at Maxwell Air Force Base in Montgomery, Alabama, since then, in turn encompasses
two separate commissioning programs: Basic Officer Training (BOT), which is for line-officer candidates of the active-duty
Air Force and the U.S. Air Force Reserve; and the Academy of Military Science (AMS), which is for line-officer candidates
of the Air National Guard. (The term "line officer" derives from the concept of the line of battle and refers to
an officer whose role falls somewhere within the "Line of the Air", meaning combat or combat-support operations within
the scope of legitimate combatants as defined by the Geneva Conventions.) The Air Force also provides Commissioned
Officer Training (COT) for officers of all three components who are direct-commissioned to non-line positions due
to their credentials in medicine, law, religion, biological sciences, or healthcare administration. Originally viewed
as a "knife and fork school" that covered little beyond basic wear of the uniform, COT in recent years has been fully
integrated into the OTS program and today encompasses extensive coursework as well as field exercises in leadership,
confidence, fitness, and deployed-environment operations. The US Air Force Fitness Test (AFFT) is designed to test the abdominal circumference, muscular strength/endurance, and cardiovascular-respiratory fitness of airmen in the USAF. As part of the Fit to Fight program, the USAF adopted a more stringent physical fitness assessment; the new
fitness program was put into effect on 1 June 2010. The annual ergo-cycle test, which the USAF had used for several years, had been replaced in 2004. In the AFFT, Airmen are given a performance score consisting of four components: waist circumference, the sit-up, the push-up, and a 1.5-mile (2.4 km) run. Airmen can potentially earn a score of 100, with the run counting as 60%, waist circumference as 20%, and each strength test as 10%. A passing
score is 75 points. Effective 1 July 2010, the AFFT is administered by the base Fitness Assessment Cell (FAC), and
is required twice a year; personnel who earn a score above 90 may test once a year. Additionally, merely meeting the minimum standard on each component will not yield a passing score of 75 points, and failing any
one component will result in a failure for the entire test. The ground-attack aircraft of the USAF are designed to
attack targets on the ground and are often deployed as close air support for, and in proximity to, U.S. ground forces.
The proximity to friendly forces requires precision strikes from these aircraft that are not possible with the bomber aircraft listed below. Their role is tactical rather than strategic, operating at the front of the battle rather than against targets deeper in the enemy's rear.
In the US Air Force, the distinction between bombers, fighters that are actually fighter-bombers, and attack aircraft
has become blurred. Many attack aircraft, even ones that look like fighters, are optimized to drop bombs, with very
little ability to engage in aerial combat. Many fighter aircraft, such as the F-16, are often used as 'bomb trucks',
despite being designed for aerial combat. Perhaps the one meaningful distinction at present is the question of range:
a bomber is generally a long-range aircraft capable of striking targets deep within enemy territory, whereas fighter
bombers and attack aircraft are limited to 'theater' missions in and around the immediate area of battlefield combat.
Even that distinction is muddied by the availability of aerial refueling, which greatly increases the potential radius
of combat operations. The US, Russia, and the People's Republic of China operate strategic bombers. The service's
B-2A aircraft entered service in the 1990s, its B-1B aircraft in the 1980s and its current B-52H aircraft in the
early 1960s. The B-52 Stratofortress airframe design is over 60 years old and the B-52H aircraft currently in the
active inventory were all built between 1960 and 1962. The B-52H is scheduled to remain in service for another 30
years, which would keep the airframe in service for nearly 90 years, an unprecedented length of service for any aircraft.
The B-21 is projected to replace the B-52 and parts of the B-1B force by the mid-2020s. Cargo and transport aircraft
are typically used to deliver troops, weapons and other military equipment by a variety of methods to any area of
military operations around the world, usually outside of the commercial flight routes in uncontrolled airspace. The
workhorses of the USAF Air Mobility Command are the C-130 Hercules, C-17 Globemaster III, and C-5 Galaxy. These aircraft
are largely defined in terms of their range capability as strategic airlift (C-5), strategic/tactical (C-17), and
tactical (C-130) airlift to reflect the needs of the land forces they most often support. The CV-22 is used by the
Air Force for the U.S. Special Operations Command (USSOCOM). It conducts long-range, special operations missions,
and is equipped with extra fuel tanks and terrain-following radar. Some aircraft serve specialized transportation
roles such as executive/embassy support (C-12), Antarctic Support (LC-130H), and USSOCOM support (C-27J, C-145A,
and C-146A). The WC-130H aircraft are former weather reconnaissance aircraft, now reverted to the transport mission.
The purpose of electronic warfare is to deny the opponent an advantage in the electromagnetic spectrum (EMS) and ensure friendly, unimpeded
access to the EM spectrum portion of the information environment. Electronic warfare aircraft are used to keep airspaces
friendly, and send critical information to anyone who needs it. They are often called "The Eye in the Sky." The roles
of the aircraft vary greatly among the different variants to include Electronic Warfare/Jamming (EC-130H), Psychological
Operations/Communications (EC-130J), Airborne Early Warning and Control (E-3), Airborne Command Post (E-4B), ground
targeting radar (E-8C), range control (E-9A), and communications relay (E-11A). The fighter aircraft of the USAF are
small, fast, and maneuverable military aircraft primarily used for air-to-air combat. Many of these fighters have
secondary ground-attack capabilities, and some are dual-roled as fighter-bombers (e.g., the F-16 Fighting Falcon);
the term "fighter" is also sometimes used colloquially for dedicated ground-attack aircraft. Other missions include
interception of bombers and other fighters, reconnaissance, and patrol. The F-16 is currently used by the USAF Air
Demonstration squadron, the Thunderbirds, while a small number of both man-rated and non-man-rated F-4 Phantom II
are retained as QF-4 aircraft for use as Full Scale Aerial Targets (FSAT) or as part of the USAF Heritage Flight
program. These extant QF-4 aircraft are being replaced in the FSAT role by early model F-16 aircraft converted to
QF-16 configuration. The USAF has 2,025 fighters in service as of September 2012. The USAF's KC-135 and KC-10 aerial
refueling aircraft are based on civilian jets. The USAF aircraft are equipped primarily for providing the fuel via
a tail-mounted refueling boom, and can be equipped with "probe and drogue" refueling systems. Air-to-air refueling
is extensively used in large-scale operations and also used in normal operations; fighters, bombers, and cargo aircraft
rely heavily on the lesser-known "tanker" aircraft. This makes these aircraft an essential part of the Air Force's
global mobility and the U.S. force projection. The KC-46A Pegasus is undergoing testing and is projected to be delivered
to USAF units starting in 2017. In response to the 2007 United States Air Force nuclear weapons incident, Secretary
of Defense Robert Gates accepted in June 2009 the resignations of Secretary of the Air Force Michael Wynne and the
Chief of Staff of the Air Force General T. Michael Moseley. Moseley's successor, General Norton A. Schwartz, a former
tactical airlift and special operations pilot, was the first officer appointed to that position who did not have a
background as a fighter or bomber pilot. The Washington Post reported in 2010 that General Schwartz began to dismantle
the rigid class system of the USAF, particularly in the officer corps. Daniel L. Magruder, Jr. defines USAF culture
as a combination of the rigorous application of advanced technology, individualism and progressive airpower theory.
Major General Charles J. Dunlap, Jr. adds that the U.S. Air Force's culture also includes an egalitarianism bred
from officers perceiving themselves as their service's principal "warriors" working with small groups of enlisted
airmen either as the service crew or the onboard crew of their aircraft. Air Force officers have never felt they
needed the formal social "distance" from their enlisted force that is common in the other U.S. armed services. Although
the paradigm is changing, for most of its history, the Air Force, completely unlike its sister services, has been
an organization in which mostly its officers fought, not its enlisted force, the latter being primarily a rear echelon
support force. When the enlisted force did go into harm's way, such as members of multi-crewed aircraft, the close
comradeship of shared risk in tight quarters created traditions that shaped a somewhat different kind of officer/enlisted
relationship than exists elsewhere in the military. Cultural and career issues in the U.S. Air Force have been cited
as one of the reasons for the shortfall in needed UAV operators. In spite of an urgent need for UAVs or drones to
provide round-the-clock coverage for American troops during the Iraq War, the USAF did not establish a new career field for piloting them until the last year of that war. In 2014 it changed its RPA training syllabus again, in the face of large aircraft losses in training and in response to a GAO report critical of its handling of drone programs.
Paul Scharre has reported that the cultural divide between the USAF and US Army has kept both services from adopting
each other's drone handling innovations. Many of the U.S. Air Force's formal and informal traditions are an amalgamation
of those taken from the Royal Air Force (e.g., dining-ins/mess nights) or the experiences of its predecessor organizations
such as the U.S. Army Air Service, U.S. Army Air Corps and the U.S. Army Air Forces. Some of these traditions range
from "Friday Name Tags" in flying units to an annual "Mustache Month." The use of "challenge coins" is a recent innovation
that was adopted from the U.S. Army while another cultural tradition unique to the Air Force is the "roof stomp",
practiced by Air Force members to welcome a new commander or to commemorate another event, such as a retirement.
The United States Air Force has had numerous recruiting slogans including "No One Comes Close" and Uno Ab Alto ("One
From On High"). For many years, the U.S. Air Force used "Aim High" as its recruiting slogan; more recently, they
have used "Cross into the Blue", "We've been waiting for you", "Do Something Amazing", and "Above All"; the newest, as of 7 October 2010, is a call and response: "Aim High", answered with "Fly-Fight-Win".
Each wing, group, or squadron usually has its own slogan(s). Information and logos can usually be found on the wing,
group, or squadron websites.
Recent developments in LEDs permit them to be used in environmental and task lighting. LEDs have many advantages over incandescent
light sources including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and
faster switching. Light-emitting diodes are now used in applications as diverse as aviation lighting, automotive
headlamps, advertising, general lighting, traffic signals, camera flashes and lighted wallpaper. As of 2015,
LEDs powerful enough for room lighting remain somewhat more expensive, and require more precise current and heat
management, than compact fluorescent lamp sources of comparable output. Electroluminescence as a phenomenon was discovered
in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker
detector. Soviet inventor Oleg Losev reported creation of the first LED in 1927. His research was distributed in
Soviet, German and British scientific journals, but no practical use was made of the discovery for several decades.
Kurt Lehovec, Carl Accardo and Edward Jamgochian explained these first light-emitting diodes in 1951, using an apparatus employing SiC crystals with a battery or pulse-generator current source, and again in 1953 with a comparison to a pure crystal variant. In 1957, Braunstein further demonstrated that the rudimentary devices could be used for non-radio
communication across a short distance. As noted by Kroemer, Braunstein "... had set up a simple optical communications
link: Music emerging from a record player was used via suitable electronics to modulate the forward current of a
GaAs diode. The emitted light was detected by a PbS diode some distance away. This signal was fed into an audio amplifier,
and played back by a loudspeaker. Intercepting the beam stopped the music. We had a great deal of fun playing with
this setup." This setup presaged the use of LEDs for optical communication applications. In September 1961, while
working at Texas Instruments in Dallas, Texas, James R. Biard and Gary Pittman discovered near-infrared (900 nm)
light emission from a tunnel diode they had constructed on a GaAs substrate. By October 1961, they had demonstrated
efficient light emission and signal coupling between a GaAs p-n junction light emitter and an electrically-isolated
semiconductor photodetector. On August 8, 1962, Biard and Pittman filed a patent titled "Semiconductor Radiant Diode"
based on their findings, which described a zinc diffused p–n junction LED with a spaced cathode contact to allow
for efficient emission of infrared light under forward bias. After establishing the priority of their work based
on engineering notebooks predating submissions from G.E. Labs, RCA Research Labs, IBM Research Labs, Bell Labs, and
Lincoln Lab at MIT, the U.S. patent office issued the two inventors the patent for the GaAs infrared (IR) light-emitting
diode (U.S. Patent US3293513), the first practical LED. Immediately after filing the patent, Texas Instruments (TI)
began a project to manufacture infrared diodes. In October 1962, TI announced the first LED commercial product (the
SNX-100), which employed a pure GaAs crystal to emit an 890 nm light output. In October 1963, TI announced the first
commercial hemispherical LED, the SNX-110. The first visible-spectrum (red) LED was developed in 1962 by Nick Holonyak,
Jr., while working at General Electric Company. Holonyak first reported his LED in the journal Applied Physics Letters
on December 1, 1962. M. George Craford, a former graduate student of Holonyak, invented the first yellow LED
and improved the brightness of red and red-orange LEDs by a factor of ten in 1972. In 1976, T. P. Pearsall created
the first high-brightness, high-efficiency LEDs for optical fiber telecommunications by inventing new semiconductor
materials specifically adapted to optical fiber transmission wavelengths. The first commercial LEDs were commonly
used as replacements for incandescent and neon indicator lamps, and in seven-segment displays, first in expensive
equipment such as laboratory and electronics test equipment, then later in such appliances as TVs, radios, telephones,
calculators, as well as watches (see list of signal uses). Until 1968, visible and infrared LEDs were extremely costly,
in the order of US$200 per unit, and so had little practical use. The Monsanto Company was the first organization
to mass-produce visible LEDs, using gallium arsenide phosphide (GaAsP) in 1968 to produce red LEDs suitable for indicators.
Hewlett Packard (HP) introduced LEDs in 1968, initially using GaAsP supplied by Monsanto. These red LEDs were bright
enough only for use as indicators, as the light output was not enough to illuminate an area. Readouts in calculators
were so small that plastic lenses were built over each digit to make them legible. Later, other colors became widely
available and appeared in appliances and equipment. In the 1970s commercially successful LED devices at less than
five cents each were produced by Fairchild Optoelectronics. These devices employed compound semiconductor chips fabricated
with the planar process invented by Dr. Jean Hoerni at Fairchild Semiconductor. The combination of planar processing
for chip fabrication and innovative packaging methods enabled the team at Fairchild led by optoelectronics pioneer
Thomas Brandt to achieve the needed cost reductions. These methods continue to be used by LED producers. The first
high-brightness blue LED was demonstrated by Shuji Nakamura of Nichia Corporation in 1994 and was based on InGaN.
In parallel, Isamu Akasaki and Hiroshi Amano in Nagoya were working on developing the important GaN nucleation on
sapphire substrates and the demonstration of p-type doping of GaN. Nakamura, Akasaki and Amano were awarded the 2014
Nobel prize in physics for their work. In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated
the efficiency and reliability of high-brightness LEDs and demonstrated a "transparent contact" LED using indium
tin oxide (ITO) on (AlGaInP/GaAs). The attainment of high efficiency in blue LEDs was quickly followed by the development
of the first white LED. In this device a Y3Al5O12:Ce (known as "YAG") phosphor coating on the emitter absorbs some
of the blue emission and produces yellow light through fluorescence. The combination of that yellow with remaining
blue light appears white to the eye. However, using different phosphors (fluorescent materials), it also became possible
to instead produce green and red light through fluorescence. The resulting mixture of red, green and blue is not
only perceived by humans as white light but is superior for illumination in terms of color rendering, whereas one
cannot appreciate the color of red or green objects illuminated only by the yellow (and remaining blue) wavelengths
from the YAG phosphor. A P-N junction can convert absorbed light energy into a proportional electric current. The
same process is reversed here (i.e. the P-N junction emits light when electrical energy is applied to it). This phenomenon
is generally called electroluminescence, which can be defined as the emission of light from a semi-conductor under
the influence of an electric field. The charge carriers recombine in a forward-biased P-N junction as the electrons
cross from the N-region and recombine with the holes existing in the P-region. Free electrons are in the conduction band, while holes are in the valence band, so the holes sit at a lower energy level than the electrons. When an electron and a hole recombine, the energy difference must be released; it is emitted in the form of heat and light. In September 2003, a new type of blue
LED was demonstrated by Cree that consumes 24 mW at 20 milliamperes (mA). This produced a commercially packaged white
light giving 65 lm/W at 20 mA, becoming the brightest white LED commercially available at the time, and more than
four times as efficient as standard incandescents. In 2006, they demonstrated a prototype with a record white LED
luminous efficacy of 131 lm/W at 20 mA. Nichia Corporation has developed a white LED with luminous efficacy of 150
lm/W at a forward current of 20 mA. Cree's XLamp XM-L LEDs, commercially available in 2011, produce 100 lm/W at their
full power of 10 W, and up to 160 lm/W at around 2 W input power. In 2012, Cree announced a white LED giving 254
lm/W, and 303 lm/W in March 2014. Practical general lighting needs high-power LEDs, of one watt or more. Typical
operating currents for such devices begin at 350 mA. The most common symptom of LED (and diode laser) failure is
the gradual lowering of light output and loss of efficiency. Sudden failures, although rare, can also occur. Early
red LEDs were notable for their short service life. With the development of high-power LEDs the devices are subjected
to higher junction temperatures and higher current densities than traditional devices. This causes stress on the
material and may cause early light-output degradation. To quantitatively classify useful lifetime in a standardized
manner it has been suggested to use L70 or L50, which are the runtimes (typically given in thousands of hours) at
which a given LED reaches 70% and 50% of initial light output, respectively. Since LED efficacy falls as operating temperature rises, LED technology is well suited for supermarket freezer lighting. Because LEDs produce less
waste heat than incandescent lamps, their use in freezers can save on refrigeration costs as well. However, they
may be more susceptible to frost and snow buildup than incandescent lamps, so some LED lighting systems have been
designed with an added heating circuit. Additionally, research has developed heat sink technologies that will transfer
heat produced within the junction to appropriate areas of the light fixture. The first blue-violet LED using magnesium-doped
gallium nitride was made at Stanford University in 1972 by Herb Maruska and Wally Rhines, doctoral students in materials
science and engineering. At the time Maruska was on leave from RCA Laboratories, where he collaborated with Jacques
Pankove on related work. In 1971, the year after Maruska left for Stanford, his RCA colleagues Pankove and Ed Miller
demonstrated the first blue electroluminescence from zinc-doped gallium nitride, though the subsequent device Pankove
and Miller built, the first actual gallium nitride light-emitting diode, emitted green light. In 1974 the U.S. Patent
Office awarded Maruska, Rhines and Stanford professor David Stevenson a patent for their work in 1972 (U.S. Patent
US3819974 A) and today magnesium-doping of gallium nitride continues to be the basis for all commercial blue LEDs
and laser diodes. These devices built in the early 1970s had too little light output to be of practical use and research
into gallium nitride devices slowed. In August 1989, Cree introduced the first commercially available blue LED based
on the indirect bandgap semiconductor, silicon carbide (SiC). SiC LEDs had very low efficiency, no more than about
0.03%, but did emit in the blue portion of the visible light spectrum.[citation needed] In the late 1980s, key breakthroughs
in GaN epitaxial growth and p-type doping ushered in the modern era of GaN-based optoelectronic devices. Building
upon this foundation, Dr. Moustakas at Boston University patented a method for producing high-brightness blue LEDs
using a new two-step process. Two years later, in 1993, high-brightness blue LEDs were demonstrated again by Shuji
Nakamura of Nichia Corporation using a gallium nitride growth process similar to Dr. Moustakas's. Both Dr. Moustakas
and Mr. Nakamura were issued separate patents, which confused the issue of who was the original inventor (partly
because although Dr. Moustakas invented his first, Dr. Nakamura filed first).[citation needed] This new development
revolutionized LED lighting, making high-power blue light sources practical, leading to the development of technologies
like Blu-ray, as well as allowing the bright high-resolution screens of modern tablets and phones.[citation needed]
Nakamura was awarded the 2006 Millennium Technology Prize for his invention. Nakamura, Hiroshi Amano and Isamu Akasaki
were awarded the Nobel Prize in Physics in 2014 for the invention of the blue LED. In 2015, a US court ruled that
three companies (i.e. the litigants who had not previously settled out of court) that had licensed Mr. Nakamura's
patents for production in the United States had infringed Dr. Moustakas's prior patent, and ordered them to pay licensing
fees of not less than 13 million USD. By the late 1990s, blue LEDs became widely available. They have an active region
consisting of one or more InGaN quantum wells sandwiched between thicker layers of GaN, called cladding layers. By
varying the relative In/Ga fraction in the InGaN quantum wells, the light emission can in theory be varied from violet
to amber. Aluminium gallium nitride (AlGaN) of varying Al/Ga fraction can be used to manufacture the cladding and
quantum well layers for ultraviolet LEDs, but these devices have not yet reached the level of efficiency and technological
maturity of InGaN/GaN blue/green devices. If un-alloyed GaN is used in this case to form the active quantum well
layers, the device will emit near-ultraviolet light with a peak wavelength centred around 365 nm. Green LEDs manufactured
from the InGaN/GaN system are far more efficient and brighter than green LEDs produced with non-nitride material
systems, but practical devices still exhibit efficiency too low for high-brightness applications.[citation needed]
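The violet-to-amber tuning by indium fraction described above can be illustrated with a rough calculation. This sketch assumes a standard quadratic interpolation (Vegard's law with a bowing term) between the GaN and InN bandgaps; the parameter values (Eg ≈ 3.4 eV for GaN, ≈ 0.7 eV for InN, bowing ≈ 1.4 eV) are illustrative textbook figures, not taken from this article:

```python
# Rough sketch: peak emission wavelength of an InGaN quantum well vs. indium
# fraction x, using a quadratic bandgap interpolation (illustrative parameters).

E_GAN = 3.4   # eV, bandgap of GaN (approximate)
E_INN = 0.7   # eV, bandgap of InN (approximate)
BOWING = 1.4  # eV, bowing parameter (assumed illustrative value)

def ingan_bandgap(x):
    """Bandgap of In(x)Ga(1-x)N in eV via Vegard's law with a bowing term."""
    return x * E_INN + (1 - x) * E_GAN - BOWING * x * (1 - x)

def emission_wavelength_nm(x):
    """Approximate peak emission wavelength: lambda (nm) = 1239.84 / Eg (eV)."""
    return 1239.84 / ingan_bandgap(x)

for x in (0.0, 0.1, 0.2, 0.3):
    print(f"x = {x:.1f}: Eg = {ingan_bandgap(x):.2f} eV, "
          f"lambda = {emission_wavelength_nm(x):.0f} nm")
```

At x = 0 this model reproduces the ~365 nm near-ultraviolet figure quoted above for un-alloyed GaN wells, and increasing x pushes the emission toward blue and green, consistent with the tuning range the text describes.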
With nitrides containing aluminium, most often AlGaN and AlGaInN, even shorter wavelengths are achievable. Ultraviolet
LEDs in a range of wavelengths are becoming available on the market. Near-UV emitters at wavelengths around 375–395
nm are already cheap and often encountered, for example, as black light lamp replacements for inspection of anti-counterfeiting
UV watermarks in some documents and paper currencies. Shorter-wavelength diodes, while substantially more expensive,
are commercially available for wavelengths down to 240 nm. As the photosensitivity of microorganisms approximately
matches the absorption spectrum of DNA, with a peak at about 260 nm, UV LEDs emitting at 250–270 nm are expected
in prospective disinfection and sterilization devices. Recent research has shown that commercially available UVA
LEDs (365 nm) are already effective disinfection and sterilization devices. UV-C wavelengths were obtained in laboratories
using aluminium nitride (210 nm), boron nitride (215 nm) and diamond (235 nm). White light can be formed by mixing
differently colored lights; the most common method is to use red, green, and blue (RGB). LEDs using this method are called multi-color white LEDs (sometimes referred to as RGB LEDs). Because these need electronic circuits to control the
blending and diffusion of different colors, and because the individual color LEDs typically have slightly different
emission patterns (leading to variation of the color depending on direction) even if they are made as a single unit,
these are seldom used to produce white lighting. Nonetheless, this method has many applications because of the flexibility
of mixing different colors, and in principle, this mechanism also has higher quantum efficiency in producing white
light.[citation needed] There are several types of multi-color white LEDs: di-, tri-, and tetrachromatic white LEDs.
Several key factors vary among these different methods, including color stability, color rendering capability,
and luminous efficacy. Often, higher efficiency will mean lower color rendering, presenting a trade-off between the
luminous efficacy and color rendering. For example, the dichromatic white LEDs have the best luminous efficacy (120
lm/W), but the lowest color rendering capability. However, although tetrachromatic white LEDs have excellent color
rendering capability, they often have poor luminous efficacy. Trichromatic white LEDs are in between, having both
good luminous efficacy (>70 lm/W) and fair color rendering capability. Multi-color LEDs offer not merely another
means to form white light but a new means to form light of different colors. Most perceivable colors can be formed
by mixing different amounts of three primary colors. This allows precise dynamic color control. As more effort is
devoted to investigating this method, multi-color LEDs should have profound influence on the fundamental method that
we use to produce and control light color. However, before this type of LED can play a role on the market, several
technical problems must be solved. One is that this type of LED's emission power decays exponentially with
rising temperature, resulting in a substantial change in color stability. Such problems inhibit and may preclude
industrial use. Thus, many new package designs aimed at solving this problem have been proposed and their results
are now being reproduced by researchers and scientists. This method involves coating LEDs of one color (mostly blue
LEDs made of InGaN) with phosphors of different colors to form white light; the resultant LEDs are called phosphor-based
or phosphor-converted white LEDs (pcLEDs). A fraction of the blue light undergoes the Stokes shift being transformed
from shorter wavelengths to longer. Depending on the color of the original LED, phosphors of different colors can
be employed. If several phosphor layers of distinct colors are applied, the emitted spectrum is broadened, effectively
raising the color rendering index (CRI) value of a given LED. Phosphor-based LED efficiency losses are due to the
heat loss from the Stokes shift and also other phosphor-related degradation issues. Their luminous efficacies compared
to normal LEDs depend on the spectral distribution of the resultant light output and the original wavelength of the
LED itself. For example, the luminous efficacy of a typical YAG yellow phosphor based white LED ranges from 3 to
5 times the luminous efficacy of the original blue LED because of the human eye's greater sensitivity to yellow than
to blue (as modeled in the luminosity function). Due to its simplicity of manufacture, the phosphor method is still
the most popular method for making high-intensity white LEDs. The design and production of a light source or light
fixture using a monochrome emitter with phosphor conversion is simpler and cheaper than a complex RGB system, and
the majority of high-intensity white LEDs presently on the market are manufactured using phosphor light conversion.
Among the challenges being faced to improve the efficiency of LED-based white light sources is the development of
more efficient phosphors. As of 2010, the most efficient yellow phosphor is still the YAG phosphor, with less than
10% Stokes shift loss. Internal optical losses due to re-absorption in the LED chip and in the LED packaging itself typically account for another 10% to 30% of efficiency loss. Currently, in the area of phosphor
LED development, much effort is being spent on optimizing these devices to higher light output and higher operation
temperatures. For instance, the efficiency can be raised by adopting a better package design or by using a more suitable type of phosphor. A conformal coating process is frequently used to address the issue of varying phosphor thickness.
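The loss figures quoted above (under 10% Stokes shift loss for YAG, plus 10% to 30% optical and packaging losses) can be combined into a simple multiplicative loss budget. The 50% blue-pump wall-plug efficiency below is an assumed illustrative value, not from this article:

```python
# Loss-budget sketch for a phosphor-converted white LED, multiplying the pump
# efficiency by the Stokes-shift and packaging loss factors quoted in the text.

def white_led_optical_efficiency(pump_efficiency, stokes_loss, package_loss):
    """Fraction of electrical input emerging as useful white light (rough model)."""
    return pump_efficiency * (1 - stokes_loss) * (1 - package_loss)

# Assumed blue-pump wall-plug efficiency of 50% (illustrative assumption).
for package_loss in (0.10, 0.30):
    eff = white_led_optical_efficiency(0.50, 0.10, package_loss)
    print(f"package loss {package_loss:.0%}: overall efficiency ~{eff:.0%}")
```

The spread between the two package-loss cases shows why, as the text notes, much current effort goes into better package design as well as better phosphors.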
White LEDs can also be made by coating near-ultraviolet (NUV) LEDs with a mixture of high-efficiency europium-based
phosphors that emit red and blue, plus copper and aluminium-doped zinc sulfide (ZnS:Cu, Al) that emits green. This
is a method analogous to the way fluorescent lamps work. This method is less efficient than blue LEDs with YAG:Ce
phosphor, as the Stokes shift is larger, so more energy is converted to heat, but yields light with better spectral
characteristics, which render color better. Due to the higher radiative output of the ultraviolet LEDs than of the
blue ones, both methods offer comparable brightness. A concern is that UV light may leak from a malfunctioning light
source and cause harm to human eyes or skin. A new style of wafers composed of gallium-nitride-on-silicon (GaN-on-Si)
is being used to produce white LEDs using 200-mm silicon wafers. This avoids the typical costly sapphire substrate
in relatively small 100- or 150-mm wafer sizes. The sapphire apparatus must be coupled with a mirror-like collector
to reflect light that would otherwise be wasted. It is predicted that by 2020, 40% of all GaN LEDs will be made with
GaN-on-Si. Manufacturing large sapphire material is difficult, while large silicon material is cheaper and more abundant.
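The economic pull toward larger silicon wafers follows from a first-order area argument, using the wafer diameters quoted above (this sketch ignores edge exclusion and yield):

```python
# Rough sketch: how candidate die count scales with wafer diameter
# (usable-area ratio only; real yields depend on edge exclusion and defects).

def area_ratio(d_new_mm, d_old_mm):
    """Usable-area ratio of two wafer diameters, to first order."""
    return (d_new_mm / d_old_mm) ** 2

print(f"200 mm vs 100 mm wafer: ~{area_ratio(200, 100):.1f}x the dies per wafer")
print(f"200 mm vs 150 mm wafer: ~{area_ratio(200, 150):.1f}x the dies per wafer")
```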
For LED companies, shifting from sapphire to silicon should require only a minimal investment. Quantum dots (QD) are semiconductor
nanocrystals that possess unique optical properties. Their emission color can be tuned from the visible throughout
the infrared spectrum. This allows quantum dot LEDs to create almost any color on the CIE diagram. This provides
more color options and better color rendering than white LEDs since the emission spectrum is much narrower, characteristic
of quantum confined states. There are two types of schemes for QD excitation. One uses photo excitation with a primary
light source LED (typically blue or UV LEDs are used). The other is direct electrical excitation first demonstrated
by Alivisatos et al. The structure of QD-LEDs used for the electrical-excitation scheme is similar to basic design
of OLEDs. A layer of quantum dots is sandwiched between layers of electron-transporting and hole-transporting materials.
An applied electric field causes electrons and holes to move into the quantum dot layer and recombine forming an
exciton that excites a QD. This scheme is commonly studied for quantum dot display. The tunability of emission wavelengths
and narrow bandwidth is also beneficial as excitation sources for fluorescence imaging. Fluorescence near-field scanning
optical microscopy (NSOM) utilizing an integrated QD-LED has been demonstrated. High-power LEDs (HP-LEDs) or high-output
LEDs (HO-LEDs) can be driven at currents from hundreds of mA to more than an ampere, compared with the tens of mA
for other LEDs. Some can emit over a thousand lumens. LED power densities up to 300 W/cm2 have been achieved. Since
overheating is destructive, the HP-LEDs must be mounted on a heat sink to allow for heat dissipation. If the heat
from a HP-LED is not removed, the device will fail in seconds. One HP-LED can often replace an incandescent bulb
in a flashlight, or be set in an array to form a powerful LED lamp. LEDs have been developed by Seoul Semiconductor
that can operate on AC power without the need for a DC converter. For each half-cycle, part of the LED emits light
and part is dark, and this is reversed during the next half-cycle. The efficacy of this type of HP-LED is typically
40 lm/W. A large number of LED elements in series may be able to operate directly from line voltage. In 2009, Seoul
Semiconductor released a high-voltage DC LED named 'Acrich MJT', capable of being driven from AC power with a
simple controlling circuit. The low-power dissipation of these LEDs affords them more flexibility than the original
AC LED design. Alphanumeric LEDs are available in seven-segment, starburst and dot-matrix format. Seven-segment displays
handle all numbers and a limited set of letters. Starburst displays can display all letters. Dot-matrix displays
typically use 5x7 pixels per character. Seven-segment LED displays were in widespread use in the 1970s and 1980s,
but rising use of liquid crystal displays, with their lower power needs and greater display flexibility, has reduced
the popularity of numeric and alphanumeric LED displays. Digital-RGB LEDs are RGB LEDs that contain their own "smart"
control electronics. In addition to power and ground, these provide connections for data-in, data-out, and sometimes
a clock or strobe signal. These are connected in a daisy chain, with the data-in of the first LED driven by a microprocessor, which can control the brightness and color of each LED independently of the others. They are used where a combination of maximum control and minimum visible electronics is needed, such as Christmas light strings and LED matrices. Some
even have refresh rates in the kHz range, allowing for basic video applications. An LED filament consists of multiple
LED dice connected in series on a common longitudinal substrate that form a thin rod reminiscent of a traditional
incandescent filament. These are being used as a low cost decorative alternative for traditional light bulbs that
are being phased out in many countries. The filaments require a rather high voltage to light to nominal brightness,
allowing them to work efficiently and simply with mains voltages. Often a simple rectifier and capacitive current
limiting are employed to create a low-cost replacement for a traditional light bulb without the complexity of creating
a low voltage, high current converter which is required by single die LEDs. Usually they are packaged in a sealed
enclosure with a shape similar to lamps they were designed to replace (e.g. a bulb), and filled with inert nitrogen
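The "capacitive current limiting" just mentioned works because a series capacitor presents reactance Xc = 1/(2πfC) to the mains. A back-of-envelope sizing sketch, in which the mains figures, filament forward voltage, and target current are all assumed for illustration, and voltage drops are treated as simply additive (real designs account for phase and rectification):

```python
import math

# Sketch: sizing the series capacitor of a capacitive dropper for an LED
# filament lamp. All numeric values below are illustrative assumptions.

MAINS_V = 230.0    # V RMS (assumed European mains)
MAINS_HZ = 50.0    # Hz
FILAMENT_V = 75.0  # V, combined forward drop of the series LED dice (assumed)
TARGET_MA = 15.0   # mA, desired filament current (assumed)

def dropper_capacitance_uF(v_mains, f, v_load, i_mA):
    """Capacitance whose reactance drops the excess voltage at the target current."""
    x_c = (v_mains - v_load) / (i_mA / 1000.0)  # required reactance, ohms
    c = 1.0 / (2 * math.pi * f * x_c)           # C = 1 / (2*pi*f*Xc)
    return c * 1e6                              # farads -> microfarads

c_uF = dropper_capacitance_uF(MAINS_V, MAINS_HZ, FILAMENT_V, TARGET_MA)
print(f"series capacitor: about {c_uF:.2f} uF")
```

Because the limiting element is a reactance rather than a resistor, it dissipates little power, which is why this approach is cheaper and cooler-running than the low-voltage, high-current converter a single-die LED would need.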
or carbon dioxide gas to remove heat efficiently. The current–voltage characteristic of an LED is similar to other
diodes, in that the current is dependent exponentially on the voltage (see Shockley diode equation). This means that
a small change in voltage can cause a large change in current. If the applied voltage exceeds the LED's forward voltage
drop by a small amount, the current rating may be exceeded by a large amount, potentially damaging or destroying
the LED. The typical solution is to use constant-current power supplies to keep the current below the LED's maximum
current rating. Since most common power sources (batteries, mains) are constant-voltage sources, most LED fixtures must include a power converter, or at least a current-limiting resistor. However, the high internal resistance of three-volt
coin cells combined with the high differential resistance of nitride-based LEDs makes it possible to power such an
LED from such a coin cell without an external resistor. The vast majority of devices containing LEDs are "safe under
all conditions of normal use", and so are classified as "Class 1 LED product"/"LED Klasse 1". At present, only a
few LEDs—extremely bright LEDs that also have a tightly focused viewing angle of 8° or less—could, in theory, cause
temporary blindness, and so are classified as "Class 2". The opinion of the French Agency for Food, Environmental
and Occupational Health & Safety (ANSES) of 2010, on the health issues concerning LEDs, suggested banning public
use of lamps which were in the moderate Risk Group 2, especially those with a high blue component in places frequented
by children. In general, laser safety regulations—and the "Class 1", "Class 2", etc. system—also apply to LEDs. While
LEDs have the advantage over fluorescent lamps that they do not contain mercury, they may contain other hazardous
metals such as lead and arsenic. Regarding the toxicity of LEDs when treated as waste, a study published in 2011
stated: "According to federal standards, LEDs are not hazardous except for low-intensity red LEDs, which leached
Pb [lead] at levels exceeding regulatory limits (186 mg/L; regulatory limit: 5). However, according to California
regulations, excessive levels of copper (up to 3892 mg/kg; limit: 2500), lead (up to 8103 mg/kg; limit: 1000), nickel
(up to 4797 mg/kg; limit: 2000), or silver (up to 721 mg/kg; limit: 500) render all except low-intensity yellow LEDs
hazardous." One-color light is well suited for traffic lights and signals, exit signs, emergency vehicle lighting,
ships' navigation lights or lanterns (chromaticity and luminance standards being set under the Convention on the International
Regulations for Preventing Collisions at Sea 1972, Annex I and the CIE) and LED-based Christmas lights. In cold climates,
LED traffic lights may remain snow-covered, as they emit too little waste heat to melt the snow. Red or yellow LEDs are used in indicator and alphanumeric displays in
environments where night vision must be retained: aircraft cockpits, submarine and ship bridges, astronomy observatories,
and in the field, e.g. night time animal watching and military field use. Because of their long life, fast switching
times, and their ability to be seen in broad daylight due to their high output and focus, LEDs have for some time
been used in high-mounted brake lights on cars, trucks, and buses, and in turn signals; many vehicles now use LEDs
for their entire rear light clusters. The use in brake lights improves safety through a much faster rise time, up
to 0.5 second faster[citation needed] than an incandescent bulb, giving drivers behind more time to react. In a
dual-intensity circuit (rear markers and brakes), if the
LEDs are not pulsed at a fast enough frequency, they can create a phantom array, where ghost images of the LED will
appear if the eyes quickly scan across the array. White LED headlamps are starting to be used. Using LEDs has styling
advantages because LEDs can form much thinner lights than incandescent lamps with parabolic reflectors. Assistive
listening devices in many theaters and similar spaces use arrays of infrared LEDs to send sound to listeners' receivers.
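The braking-safety figure cited earlier (an LED brake light reaching full brightness up to 0.5 second sooner than an incandescent bulb) can be turned into a rough warning-distance estimate. A minimal sketch, in which the highway speed is an illustrative assumption rather than a figure from the text:

```python
# Back-of-the-envelope check of the benefit of the ~0.5 s faster rise
# time of LED brake lights. The speed value is an illustrative
# assumption, not taken from the article.

def extra_warning_distance(speed_m_s: float, rise_time_advantage_s: float = 0.5) -> float:
    """Distance a following car covers during the extra warning time."""
    return speed_m_s * rise_time_advantage_s

# At roughly highway speed (30 m/s, about 108 km/h or 67 mph), the
# following driver gains about 15 m of warning distance.
print(extra_warning_distance(30.0))  # 15.0
```

This is why the effect matters most at speed: the extra reaction distance scales linearly with how fast the following vehicle is travelling.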
Light-emitting diodes (as well as semiconductor lasers) are used to send data over many types of fiber optic cable,
from digital audio over TOSLINK cables to the very high bandwidth fiber links that form the Internet backbone. For
some time, computers were commonly equipped with IrDA interfaces, which allowed them to send and receive data to
nearby machines via infrared. In the US, one kilowatt-hour (3.6 MJ) of electricity currently causes an average of
1.34 pounds (610 g) of CO2 emissions. Assuming the average light bulb is on for 10 hours a day, a 40-watt bulb will
cause 196 pounds (89 kg) of CO2 emissions per year, while the 6-watt LED equivalent will cause only 30 pounds (14 kg)
of CO2 over the same time span. A building's carbon footprint from lighting can therefore be reduced by 85% by exchanging
all incandescent bulbs for new LEDs if a building previously used only incandescent bulbs. Machine vision systems
often require bright and homogeneous illumination, so features of interest are easier to process. LEDs are often
used for this purpose, and this is likely to remain one of their major uses until the price drops low enough to make
signaling and illumination uses more widespread. Barcode scanners are the most common example of machine vision,
and many low-cost products use red LEDs instead of lasers. Optical computer mice are another example of LEDs in machine
vision, where an LED provides an even light source on the surface for the miniature camera within the mouse; these
properties make LEDs a nearly ideal light source for machine vision systems. The light from LEDs can be
modulated very quickly, so they are used extensively in optical fiber and free-space optics communications. This includes
remote controls, such as for TVs and VCRs, where infrared LEDs are often used. Opto-isolators use
an LED combined with a photodiode or phototransistor to provide a signal path with electrical isolation between two
circuits. This is especially useful in medical equipment where the signals from a low-voltage sensor circuit (usually
battery-powered) in contact with a living organism must be electrically isolated from any possible electrical failure
in a recording or monitoring device operating at potentially dangerous voltages. An optoisolator also allows information
to be transferred between circuits not sharing a common ground potential. Many sensor systems rely on light as the
signal source. LEDs are often ideal as a light source due to the requirements of the sensors. LEDs are used as motion
sensors, for example in optical computer mice. The Nintendo Wii's sensor bar uses infrared LEDs. Pulse oximeters
use them for measuring oxygen saturation. Some flatbed scanners use arrays of RGB LEDs rather than the typical cold-cathode
fluorescent lamp as the light source. Having independent control of three illuminated colors allows the scanner to
calibrate itself for more accurate color balance, and there is no need for warm-up. Further, its sensors only need
be monochromatic, since at any one time the page being scanned is only lit by one color of light. Since LEDs can
also be used as photodiodes, they can be used for both photo emission and detection. This could be used, for example,
in a touchscreen that registers reflected light from a finger or stylus. Many materials and biological systems are
sensitive to, or dependent on, light. Grow lights use LEDs to increase photosynthesis in plants, and bacteria and
viruses can be removed from water and other substances using UV LEDs for sterilization. LEDs have also been used
as a medium-quality voltage reference in electronic circuits. The forward voltage drop (e.g. about 1.7 V for a normal
red LED) can be used instead of a Zener diode in low-voltage regulators. Red LEDs have the flattest I/V curve above
the knee. Nitride-based LEDs have a fairly steep I/V curve and are useless for this purpose. Although LED forward
voltage is far more current-dependent than a Zener diode, Zener diodes with breakdown voltages below 3 V are not
widely available.
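The voltage-reference behaviour described above can be illustrated with the Shockley diode equation, under which forward voltage grows only logarithmically with current. A minimal sketch; the ideality factor and saturation current below are illustrative assumptions chosen to give a red-LED-like drop of about 1.7 V, not measured values:

```python
import math

# Sketch of why a red LED's forward drop can serve as a rough voltage
# reference. N and I_S are illustrative assumptions, not data sheet values.
N = 1.8          # ideality factor (assumed)
VT = 0.02585     # thermal voltage at ~300 K, in volts
I_S = 1e-18      # saturation current in amperes (assumed)

def forward_voltage(current_a: float) -> float:
    """Forward voltage from the Shockley equation: V = n*Vt*ln(I/Is + 1)."""
    return N * VT * math.log(current_a / I_S + 1.0)

v10 = forward_voltage(0.010)   # drop at 10 mA
v20 = forward_voltage(0.020)   # drop at 20 mA
# Doubling the current raises the drop by only ~n*Vt*ln(2), about 32 mV
# here, which is why the I/V curve is so flat above the knee.
print(round(v10, 2), round((v20 - v10) * 1000, 1))
```

In this model, the logarithmic dependence means even large current swings move the forward voltage by only tens of millivolts, which is what makes the drop usable as a medium-quality reference.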
The term "great power" was first used to represent the most important powers in Europe during the post-Napoleonic era. The
"Great Powers" constituted the "Concert of Europe" and claimed the right to joint enforcement of the postwar treaties.
The formalization of the division between small powers and great powers came about with the signing of the Treaty
of Chaumont in 1814. Since then, the international balance of power has shifted numerous times, most dramatically
during World War I and World War II. While some nations are widely considered to be great powers, there is no definitive
list of them. In literature, alternative terms for great power are often world power or major power, but these terms
can also be interchangeable with superpower. A great power is a sovereign state that is recognized as having the
ability and expertise to exert its influence on a global scale. Great powers characteristically possess military
and economic strength, as well as diplomatic and soft power influence, which may cause middle or small powers to
consider the great powers' opinions before taking actions of their own. International relations theorists have posited
that great power status can be characterized into power capabilities, spatial aspects, and status dimensions. Sometimes
the status of great powers is formally recognized in conferences such as the Congress of Vienna or an international
structure such as the United Nations Security Council (China, France, Russia, the United Kingdom and the United States
serve as the body's five permanent members). At the same time the status of great powers can be informally recognized
in a forum such as the G7 which consists of Canada, France, Germany, Italy, Japan, the United Kingdom and the United
States of America. Early writings on the subject tended to judge states by the realist criterion, as expressed by
the historian A. J. P. Taylor when he noted that "The test of a great power is the test of strength for war." Later
writers have expanded this test, attempting to define power in terms of overall military, economic, and political
capacity. Kenneth Waltz, the founder of the neorealist theory of international relations, uses a set of five criteria
to determine great power: population and territory; resource endowment; economic capability; political stability
and competence; and military strength. These expanded criteria can be divided into three heads: power capabilities,
spatial aspects, and status. All states have a geographic scope of interests, actions, or projected power. This is
a crucial factor in distinguishing a great power from a regional power; by definition the scope of a regional power
is restricted to its region. It has been suggested that a great power should be possessed of actual influence throughout
the scope of the prevailing international system. Arnold J. Toynbee, for example, observes that "Great power may
be defined as a political force exerting an effect co-extensive with the widest range of the society in which it
operates. The Great powers of 1914 were 'world-powers' because Western society had recently become 'world-wide'."
Other important criteria throughout history are that great powers should have enough influence to be included in
discussions of political and diplomatic questions of the day, and have influence on the final outcome and resolution.
Historically, when major political questions were addressed, several great powers met to discuss them. Before the
era of groups like the United Nations, participants of such meetings were not officially named, but were decided
based on their great power status. These were conferences which settled important questions based on major historical
events. This might mean deciding the political resolution of various geographical and nationalist claims following
a major conflict, or other contexts. Lord Castlereagh, the British Foreign Secretary, first used the term in its
diplomatic context, in a letter sent on February 13, 1814: "It affords me great satisfaction to acquaint you that
there is every prospect of the Congress terminating with a general accord and Guarantee between the Great powers
of Europe, with a determination to support the arrangement agreed upon, and to turn the general influence and if
necessary the general arms against the Power that shall first attempt to disturb the Continental peace." Of the five
original great powers recognised at the Congress of Vienna, only France and the United Kingdom have maintained that
status continuously to the present day, although France was defeated in the Franco-Prussian War and occupied during
World War II. After the Congress of Vienna, the British Empire emerged as the pre-eminent power, due to its navy
and the extent of its territories, which signalled the beginning of the Pax Britannica and of the Great Game between
the UK and Russia. The balance of power between the Great Powers became a major influence in European politics, prompting
Otto von Bismarck to say "All politics reduces itself to this formula: try to be one of three, as long as the world
is governed by the unstable equilibrium of five great powers." Over time, the relative power of these five nations
fluctuated, which by the dawn of the 20th century had served to create an entirely different balance of power. Some,
such as the United Kingdom and Prussia (as the founder of the newly formed German state), experienced continued economic
growth and political power. Others, such as Russia and Austria-Hungary, stagnated. At the same time, other states
were emerging and expanding in power, largely through the process of industrialization. These countries seeking to
attain great power status were: Italy after the Risorgimento, Japan after the Meiji Restoration, and the United States
after its civil war. By the dawn of the 20th century, the balance of world power had changed substantially since
the Congress of Vienna. The Eight-Nation Alliance was a belligerent alliance of eight nations against the Boxer Rebellion
in China. It formed in 1900 and consisted of the five Congress powers plus Italy, Japan, and the United States, representing
the great powers at the beginning of 20th century. Shifts of international power have most notably occurred through
major conflicts. The conclusion of the Great War and the resulting treaties of Versailles, St-Germain, Neuilly, Trianon
and Sèvres witnessed the United Kingdom, France, Italy, Japan and the United States as the chief arbiters of the
new world order. In the aftermath of World War I the German Empire was defeated, the Austria-Hungarian empire was
divided into new, less powerful states and the Russian Empire fell to a revolution. During the Paris Peace Conference,
the "Big Four"—France, Italy, United Kingdom and the United States—held noticeably more power and influence on the
proceedings and outcome of the treaties than Japan. The Big Four were leading architects of the Treaty of Versailles
which was signed by Germany; the Treaty of St. Germain, with Austria; the Treaty of Neuilly, with Bulgaria; the Treaty
of Trianon, with Hungary; and the Treaty of Sèvres, with the Ottoman Empire. During the decision-making of the Treaty
of Versailles, Italy pulled out of the conference because a part of its demands were not met and temporarily left
the other three countries as the sole major architects of that treaty, referred to as the "Big Three". The victorious
great powers also gained an acknowledgement of their status through permanent seats at the League of Nations Council,
where they acted as a type of executive body directing the Assembly of the League. However, the Council began with
only four permanent members—the United Kingdom, France, Italy, and Japan—because the United States, meant to be the
fifth permanent member, left because the US Senate voted on 19 March 1920 against the ratification of the Treaty
of Versailles, thus preventing American participation in the League. When World War II started in 1939, it divided
the world into two alliances—the Allies (the United Kingdom and France at first in Europe, China in Asia since 1937,
followed in 1941 by the Soviet Union, the United States); and the Axis powers consisting of Germany, Italy and Japan.[nb
1] During World War II, the United States, United Kingdom, and Soviet Union controlled Allied policy and emerged
as the "Big Three". The Republic of China and the Big Three were referred to as a "trusteeship of the powerful" and
were recognized as the Allied "Big Four" in the Declaration by United Nations in 1942. These four countries were referred
to as the "Four Policemen" of the Allies and considered the primary victors of World War II. The importance of France
was acknowledged by their inclusion, along with the other four, in the group of countries allotted permanent seats
in the United Nations Security Council. Since the end of the World Wars, the term "great power" has been joined by
a number of other power classifications. Foremost among these is the concept of the superpower, used to describe
those nations with overwhelming power and influence in the rest of the world. It was first coined in 1944 by William
T.R. Fox and according to him, there were three superpowers: the British Empire, the United States, and the Soviet
Union. But by the mid-1950s the British Empire lost its superpower status, leaving the United States and the Soviet
Union as the world's superpowers.[nb 2] The term middle power has emerged for those nations which exercise a degree
of global influence, but are insufficient to be decisive on international affairs. Regional powers are those whose
influence is generally confined to their region of the world. During the Cold War, the Asian power of Japan and the
European powers of the United Kingdom, France, and West Germany rebuilt their economies. France and the United Kingdom
maintained technologically advanced armed forces with power projection capabilities and maintain large defence budgets
to this day. Yet, as the Cold War continued, authorities began to question if France and the United Kingdom could
retain their long-held statuses as great powers. China, with the world's largest population, has slowly risen to
great power status, with large growth in economic and military power in the post-war period. After 1949, the Republic
of China began to lose its recognition as the sole legitimate government of China by the other great powers, in favour
of the People's Republic of China. Subsequently, in 1971, it lost its permanent seat at the UN Security Council to
the People's Republic of China. According to Joshua Baron – a "researcher, lecturer, and consultant on international
conflict" – since the early 1960s direct military conflicts and major confrontations have "receded into the background"
with regard to relations among the great powers. Baron offers several reasons why this is the case, citing the unprecedented
rise of the United States and its predominant position as the key reason. Baron highlights that since World War Two
no other great power has been able to achieve parity or near parity with the United States, with the exception of
the Soviet Union for a brief time. This position is unique among the great powers since the start of the modern era
(the 16th century), where there has traditionally always been "tremendous parity among the great powers". This unique
period of American primacy has been an important factor in maintaining a condition of peace between the great powers.
Another important factor is the apparent consensus among Western great powers that military force is no longer an
effective tool of resolving disputes among their peers. This "subset" of great powers – France, Germany, Japan, the
United Kingdom and the United States – consider maintaining a "state of peace" as desirable. As evidence, Baron outlines
that since the Cuban missile crisis (1962) during the Cold War, these influential Western nations have resolved all
disputes among the great powers peacefully at the United Nations and other forums of international discussion. China,
France, Russia, the United Kingdom and the United States are often referred to as great powers by academics due to
"their political and economic dominance of the global arena". These five nations are the only states to have permanent
seats with veto power on the UN Security Council. They are also the only recognized "Nuclear Weapons States" under
the Nuclear Non-Proliferation Treaty, and maintain military expenditures which are among the largest in the world.
However, there is no unanimous agreement among authorities as to the current status of these powers or what precisely
defines a great power. For example, sources have at times referred to China, France, Russia and the United Kingdom
as middle powers. Japan and Germany are also considered great powers, though their status rests on their large advanced
economies (the third and fourth largest, respectively) rather than on strategic and hard power capabilities (i.e.,
they lack permanent seats and veto power on the UN Security Council and strategic military reach). Germany has been
a member together with the five permanent Security Council members in the P5+1 grouping of world powers. Like China,
France, Russia and the United Kingdom, Germany and Japan have also been referred to as middle powers. In addition
to those contemporary great powers mentioned above, Zbigniew Brzezinski and Malik Mohan consider India to be a great
power too. Unlike the contemporary great powers, which have long been considered so, India's recognition among
authorities as a great power is comparatively recent. There is, however, no collective agreement among observers as
to India's status: a number of academics believe that India is emerging as a great power, while
some believe that it remains a middle power. Milena Sterio, an American expert in international law, includes the
former axis powers (Germany, Italy and Japan) and India among the great powers along with the permanent members of
the UNSC. She considers Germany, Japan and Italy to be great powers due to their G7 membership and because of their
influence in regional and international organizations. Various authors describe Italy as an equal major power, while
others view Italy as an "intermittent great power" or as "the least of the great powers". With continuing European
integration, the European Union is increasingly being seen as a great power in its own right, with representation
at the WTO and at G8 and G-20 summits. This is most notable in areas where the European Union has exclusive competence
(i.e. economic affairs). It also reflects a non-traditional conception of Europe's world role as a global "civilian
power", exercising collective influence in the functional spheres of trade and diplomacy, as an alternative to military
dominance. The European Union is a supranational union and not a sovereign state, and has limited scope in the areas
of foreign affairs and defence policy. These remain largely with the member states of the European Union, which include
the three great powers of France, Germany and the United Kingdom (referred to as the "EU three"). Referring to great
power relations pre-1960, Joshua Baron highlights that starting from around the 16th century and the rise of several
European great powers, military conflicts and confrontations were the defining characteristic of diplomacy and relations
between such powers. "Between 1500 and 1953, there were 64 wars in which at least one great power was opposed to
another, and they averaged little more than five years in length. In approximately a 450-year time frame, on average
at least two great powers were fighting one another in each and every year." Even during the period of Pax Britannica
(or "the British Peace") between 1815 and 1914, war and military confrontations among the great powers were still
a frequent occurrence. In fact, Joshua Baron points out that, in terms of militarized conflicts or confrontations,
the UK led the way in this period with nineteen such instances: against Russia (8), France (5), Germany/Prussia (5),
and Italy (1).
Birds (Aves) are a group of endothermic vertebrates, characterised by feathers, toothless beaked jaws, the laying of hard-shelled
eggs, a high metabolic rate, a four-chambered heart, and a lightweight but strong skeleton. Birds live worldwide
and range in size from the 5 cm (2 in) bee hummingbird to the 2.75 m (9 ft) ostrich. They rank as the class of tetrapods
with the most living species, at approximately ten thousand, with more than half of these being passerines, sometimes
known as perching birds or, less accurately, as songbirds. The fossil record indicates that birds are the last surviving
dinosaurs, having evolved from feathered ancestors within the theropod group of saurischian dinosaurs. True birds
first appeared during the Cretaceous period, around 100 million years ago. DNA-based evidence finds that birds diversified
dramatically around the time of the Cretaceous–Paleogene extinction event that killed off all other dinosaurs. Birds
in South America survived this event and then migrated to other parts of the world via multiple land bridges while
diversifying during periods of global cooling. Primitive bird-like dinosaurs that lie outside class Aves proper,
in the broader group Avialae, have been found dating back to the mid-Jurassic period. Many of these early "stem-birds",
such as Archaeopteryx, were not yet capable of fully powered flight, and many retained primitive characteristics
like toothy jaws in place of beaks, and long bony tails. Birds have wings which are more or less developed depending
on the species; the only known groups without wings are the extinct moas and elephant birds. Wings, which evolved
from forelimbs, give most birds the ability to fly, although further speciation has led to some flightless birds,
including ratites, penguins, and diverse endemic island species of birds. The digestive and respiratory systems of
birds are also uniquely adapted for flight. Some bird species of aquatic environments, particularly the aforementioned
flightless penguins, and also members of the duck family, have also evolved for swimming. Birds, specifically Darwin's
finches, played an important part in the inception of Darwin's theory of evolution by natural selection. Some birds,
especially corvids and parrots, are among the most intelligent animals; several bird species make and use tools,
and many social species pass on knowledge across generations, which is considered a form of culture. Many species
annually migrate great distances. Birds are social, communicating with visual signals, calls, and bird songs, and
participating in such social behaviours as cooperative breeding and hunting, flocking, and mobbing of predators.
The vast majority of bird species are socially monogamous, usually for one breeding season at a time, sometimes for
years, but rarely for life. Other species have polygynous ("many females") or, rarely, polyandrous ("many males")
breeding systems. Birds produce offspring by laying eggs which are fertilized through sexual reproduction. They are
usually laid in a nest and incubated by the parents. Most birds have an extended period of parental care after hatching.
Some birds, such as hens, lay eggs even when not fertilized, though unfertilized eggs do not produce offspring. Many
species of birds are economically important. Domesticated and undomesticated birds (poultry and game) are important
sources of eggs, meat, and feathers. Songbirds, parrots, and other species are popular as pets. Guano (bird excrement)
is harvested for use as a fertilizer. Birds prominently figure throughout human culture. About 120–130 species have
become extinct due to human activity since the 17th century, and hundreds more before then. Human activity threatens
about 1,200 bird species with extinction, though efforts are underway to protect them. Recreational birdwatching
is an important part of the ecotourism industry. Aves and a sister group, the clade Crocodilia, contain the only
living representatives of the reptile clade Archosauria. During the late 1990s, Aves was most commonly defined phylogenetically
as all descendants of the most recent common ancestor of modern birds and Archaeopteryx lithographica. However, an
earlier definition proposed by Jacques Gauthier gained wide currency in the 21st century, and is used by many scientists
including adherents of the PhyloCode system. Gauthier defined Aves to include only the crown group of the set of
modern birds. This was done by excluding most groups known only from fossils, and assigning them, instead, to the
Avialae, in part to avoid the uncertainties about the placement of Archaeopteryx in relation to animals traditionally
thought of as theropod dinosaurs. Based on fossil and biological evidence, most scientists accept that birds are
a specialized subgroup of theropod dinosaurs, and more specifically, they are members of Maniraptora, a group of
theropods which includes dromaeosaurs and oviraptorids, among others. As scientists have discovered more theropods
closely related to birds, the previously clear distinction between non-birds and birds has become blurred. Recent
discoveries in the Liaoning Province of northeast China, which demonstrate many small theropod feathered dinosaurs,
contribute to this ambiguity. The consensus view in contemporary paleontology is that the flying theropods, or avialans,
are the closest relatives of the deinonychosaurs, which include dromaeosaurids and troodontids. Together, these form
a group called Paraves. Some basal members of this group, such as Microraptor, have features which may have enabled
them to glide or fly. The most basal deinonychosaurs were very small. This evidence raises the possibility that the
ancestor of all paravians may have been arboreal, have been able to glide, or both. Unlike Archaeopteryx and the
non-avialan feathered dinosaurs, which primarily ate meat, recent studies suggest that the first avialans were omnivores.
The Late Jurassic Archaeopteryx is well known as one of the first transitional fossils to be found, and it provided
support for the theory of evolution in the late 19th century. Archaeopteryx was the first fossil to display both
clearly traditional reptilian characteristics (teeth, clawed fingers, and a long, lizard-like tail) and wings
with flight feathers similar to those of modern birds. It is not considered a direct ancestor of birds, though it
is possibly closely related to the true ancestor. The earliest known avialan fossils come from the Tiaojishan Formation
of China, which has been dated to the late Jurassic period (Oxfordian stage), about 160 million years ago. The avialan
species from this time period include Anchiornis huxleyi, Xiaotingia zhengi, and Aurornis xui. The well-known early
avialan, Archaeopteryx, dates from slightly later Jurassic rocks (about 155 million years old) from Germany. Many
of these early avialans shared unusual anatomical features that may be ancestral to modern birds, but were later
lost during bird evolution. These features include enlarged claws on the second toe which may have been held clear
of the ground in life, and long feathers or "hind wings" covering the hind limbs and feet, which may have been used
in aerial maneuvering. Avialans diversified into a wide variety of forms during the Cretaceous Period. Many groups
retained primitive characteristics, such as clawed wings and teeth, though the latter were lost independently in
a number of avialan groups, including modern birds (Aves). While the earliest forms, such as Archaeopteryx and Jeholornis,
retained the long bony tails of their ancestors, the tails of more advanced avialans were shortened with the advent
of the pygostyle bone in the group Pygostylia. In the late Cretaceous, around 95 million years ago, the ancestor
of all modern birds also evolved a better sense of smell. The first large, diverse lineage of short-tailed avialans
to evolve were the enantiornithes, or "opposite birds", so named because the construction of their shoulder bones
was in reverse to that of modern birds. Enantiornithes occupied a wide array of ecological niches, from sand-probing
shorebirds and fish-eaters to tree-dwelling forms and seed-eaters. While they were the dominant group of avialans
during the Cretaceous period, enantiornithes became extinct along with many other dinosaur groups at the end of the
Mesozoic era. Many species of the second major avialan lineage to diversify, the Euornithes (meaning "true birds",
because they include the ancestors of modern birds), were semi-aquatic and specialized in eating fish and other small
aquatic organisms. Unlike the enantiornithes, which dominated land-based and arboreal habitats, most early euornithes
lacked perching adaptations and seem to have included shorebird-like species, waders, and swimming and diving species.
The latter included the superficially gull-like Ichthyornis and the Hesperornithiformes, which became so well adapted
to hunting fish in marine environments that they lost the ability to fly and became primarily aquatic. The early
euornithes also saw the development of many traits associated with modern birds, such as strongly keeled breastbones
and toothless, beaked portions of their jaws (though most non-avian euornithes retained teeth in other parts of the jaws).
Euornithes also included the first avialans to develop a true pygostyle and a fully mobile fan of tail feathers, which
may have replaced the "hind wing" as the primary mode of aerial maneuverability and braking in flight. All modern
birds lie within the crown group Aves (alternately Neornithes), which has two subdivisions: the Palaeognathae, which
includes the flightless ratites (such as the ostriches) and the weak-flying tinamous, and the extremely diverse Neognathae,
containing all other birds. These two subdivisions are often given the rank of superorder, although Livezey and Zusi
assigned them "cohort" rank. Depending on the taxonomic viewpoint, the number of known living bird species varies
anywhere from 9,800 to 10,050. The earliest divergence within the Neognathes was that of the Galloanserae, the superorder
containing the Anseriformes (ducks, geese, swans and screamers) and the Galliformes (the pheasants, grouse, and their
allies, together with the mound builders and the guans and their allies). The earliest fossil remains of true birds
come from the possible galliform Austinornis lentus, dated to about 85 million years ago, but the dates for the actual
splits are much debated by scientists. The Aves are agreed to have evolved in the Cretaceous, and the split of the
Galloanserae from the other Neognathae occurred before the Cretaceous–Paleogene extinction event, but there are different
opinions about whether the radiation of the remaining Neognathae occurred before or after the extinction of the other
dinosaurs. This disagreement is in part caused by a divergence in the evidence; molecular dating suggests a Cretaceous
radiation, while fossil evidence supports a Cenozoic radiation. Attempts to reconcile the molecular and fossil evidence
have proved controversial, but recent results show that all the extant groups of birds originated from only a small
handful of species that survived the Cretaceous–Paleogene extinction. The classification of birds is a contentious
issue. Sibley and Ahlquist's Phylogeny and Classification of Birds (1990) is a landmark work on the classification
of birds, although it is frequently debated and constantly revised. Most evidence seems to suggest the assignment
of orders is accurate, but scientists disagree about the relationships between the orders themselves; evidence from
modern bird anatomy, fossils and DNA have all been brought to bear on the problem, but no strong consensus has emerged.
More recently, new fossil and molecular evidence is providing an increasingly clear picture of the evolution of modern
bird orders. The most recent effort is drawn above and is based on whole genome sequencing of 48 representative species.
Birds live and breed in most terrestrial habitats and on all seven continents, reaching their southern extreme in
the snow petrel's breeding colonies up to 440 kilometres (270 mi) inland in Antarctica. The highest bird diversity
occurs in tropical regions. It was earlier thought that this high diversity was the result of higher speciation rates
in the tropics; however, recent studies found higher speciation rates in the high latitudes that were offset by greater
extinction rates than in the tropics. Several families of birds have adapted to life both on the world's oceans and
in them, with some seabird species coming ashore only to breed and some penguins having been recorded diving up to
300 metres (980 ft). Many bird species have established breeding populations in areas to which they have been introduced
by humans. Some of these introductions have been deliberate; the ring-necked pheasant, for example, has been introduced
around the world as a game bird. Others have been accidental, such as the establishment of wild monk parakeets in
several North American cities after their escape from captivity. Some species, including cattle egret, yellow-headed
caracara and galah, have spread naturally far beyond their original ranges as agricultural practices created suitable
new habitat. The skeleton consists of very lightweight bones. They have large air-filled cavities (called pneumatic
cavities) which connect with the respiratory system. The skull bones in adults are fused and do not show cranial
sutures. The orbits are large and separated by a bony septum. The spine has cervical, thoracic, lumbar and caudal
regions with the number of cervical (neck) vertebrae highly variable and especially flexible, but movement is reduced
in the anterior thoracic vertebrae and absent in the later vertebrae. The last few are fused with the pelvis to form
the synsacrum. The ribs are flattened and the sternum is keeled for the attachment of flight muscles except in the
flightless bird orders. The forelimbs are modified into wings. Like the reptiles, birds are primarily uricotelic,
that is, their kidneys extract nitrogenous waste from their bloodstream and excrete it as uric acid instead of urea
or ammonia through the ureters into the intestine. Birds do not have a urinary bladder or external urethral opening
and (with the exception of the ostrich) uric acid is excreted along with feces as a semisolid waste. However, birds such
as hummingbirds can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. They also
excrete creatine, rather than creatinine like mammals. This material, as well as the output of the intestines, emerges
from the bird's cloaca. The cloaca is a multi-purpose opening: waste is expelled through it, most birds mate by joining
cloacae, and females lay eggs from it. In addition, many species of birds regurgitate pellets. Males of the Palaeognathae
(with the exception of the kiwis) and the Anseriformes (with the exception of screamers) possess a penis, which occurs
in rudimentary form in the Galliformes (but is fully developed in the Cracidae) and is never present in Neoaves. The length is thought
to be related to sperm competition. When not copulating, it is hidden within the proctodeum compartment within the
cloaca, just inside the vent. The digestive system of birds is unique, with a crop for storage and a gizzard that
contains swallowed stones for grinding food to compensate for the lack of teeth. Most birds are highly adapted for
rapid digestion to aid with flight. Some migratory birds have adapted to use protein from many parts of their bodies,
including protein from the intestines, as additional energy during migration. Birds have one of the most complex
respiratory systems of all animal groups. Upon inhalation, 75% of the fresh air bypasses the lungs and flows directly
into a posterior air sac which extends from the lungs and connects with air spaces in the bones and fills them with
air. The other 25% of the air goes directly into the lungs. When the bird exhales, the used air flows out of the
lung and the stored fresh air from the posterior air sac is simultaneously forced into the lungs. Thus, a bird's
lungs receive a constant supply of fresh air during both inhalation and exhalation. Sound production is achieved
using the syrinx, a muscular chamber incorporating multiple tympanic membranes which diverges from the lower end
of the trachea; the trachea being elongated in some species, increasing the volume of vocalizations and the perception
of the bird's size. The avian circulatory system is driven by a four-chambered, myogenic heart contained in a fibrous
pericardial sac. This pericardial sac is filled with a serous fluid for lubrication. The heart itself is divided
into a right and left half, each with an atrium and ventricle. The atrium and ventricles of each side are separated
by atrioventricular valves which prevent back flow from one chamber to the next during contraction. Being myogenic,
the heart's pace is maintained by pacemaker cells found in the sinoatrial node, located on the right atrium. The
sinoatrial node uses calcium to cause a depolarizing signal transduction pathway from the atrium through the right and
left atrioventricular bundles, which communicate contraction to the ventricles. The avian heart also contains muscular
arches that are made up of thick bundles of muscular layers. Much like a mammalian heart, the avian heart is composed
of endocardial, myocardial and epicardial layers. The atrium walls tend to be thinner than the ventricle walls, due
to the intense ventricular contraction used to pump oxygenated blood throughout the body. Avian hearts are generally
larger than mammalian hearts when compared to body mass. This adaptation allows more blood to be pumped to meet the
high metabolic need associated with flight. Birds have a very efficient system for diffusing oxygen into the blood;
birds have a ten times greater surface-area-to-volume ratio for gas exchange than mammals. As a result, birds have more blood
in their capillaries per unit of volume of lung than a mammal. The arteries are composed of thick elastic muscles
to withstand the pressure of the ventricular constriction, and become more rigid as they move away from the heart.
Blood moves through the arteries, which undergo vasoconstriction, and into arterioles which act as a transportation
system to distribute primarily oxygen as well as nutrients to all tissues of the body. As the arterioles move away
from the heart and into individual organs and tissues they are further divided to increase surface area and slow
blood flow. Travelling through the arterioles blood moves into the capillaries where gas exchange can occur. Capillaries
are organized into capillary beds in tissues; it is here that blood exchanges oxygen for carbon dioxide waste. In
the capillary beds blood flow is slowed to allow maximum diffusion of oxygen into the tissues. Once the blood has
become deoxygenated it travels through venules then veins and back to the heart. Veins, unlike arteries, are thin
and rigid as they do not need to withstand extreme pressure. As blood travels through the venules to the veins, a
funneling called vasodilation occurs, bringing blood back to the heart. Once the blood reaches the heart it moves
first into the right atrium, then the right ventricle to be pumped through the lungs for further gas exchange of
carbon dioxide waste for oxygen. Oxygenated blood then flows from the lungs through the left atrium to the left ventricle
where it is pumped out to the body. The nervous system is large relative to the bird's size. The most developed part
of the brain is the one that controls the flight-related functions, while the cerebellum coordinates movement and
the cerebrum controls behaviour patterns, navigation, mating and nest building. Most birds have a poor sense of smell
with notable exceptions including kiwis, New World vultures and tubenoses. The avian visual system is usually highly
developed. Water birds have special flexible lenses, allowing accommodation for vision in air and water. Some species
also have dual fovea. Birds are tetrachromatic, possessing ultraviolet (UV) sensitive cone cells in the eye as well
as green, red and blue ones. This allows them to perceive ultraviolet light, which is involved in courtship. Birds
have specialized light-sensing cells deep in their brains that respond to light without input from eyes or other
sensory neurons. These photo-receptive cells in the hypothalamus are involved in detecting the longer days of spring,
and thus regulate breeding activities. Many birds show plumage patterns in ultraviolet that are invisible to the
human eye; some birds whose sexes appear similar to the naked eye are distinguished by the presence of ultraviolet
reflective patches on their feathers. Male blue tits have an ultraviolet reflective crown patch which is displayed
in courtship by posturing and raising of their nape feathers. Ultraviolet light is also used in foraging—kestrels
have been shown to search for prey by detecting the UV reflective urine trail marks left on the ground by rodents.
The eyelids of a bird are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third
eyelid that moves horizontally. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic
birds. The bird retina has a fan shaped blood supply system called the pecten. Most birds cannot move their eyes,
although there are exceptions, such as the great cormorant. Birds with eyes on the sides of their heads have a wide
visual field, while birds with eyes on the front of their heads, such as owls, have binocular vision and can estimate
the depth of field. The avian ear lacks external pinnae but is covered by feathers, although in some birds, such
as the Asio, Bubo and Otus owls, these feathers form tufts which resemble ears. The inner ear has a cochlea, but
it is not spiral as in mammals. A dearth of field observations limits our knowledge, but intraspecific conflicts are
known to sometimes result in injury or death. The screamers (Anhimidae), some jacanas (Jacana, Hydrophasianus), the
spur-winged goose (Plectropterus), the torrent duck (Merganetta) and nine species of lapwing (Vanellus) use a sharp
spur on the wing as a weapon. The steamer ducks (Tachyeres), geese and swans (Anserinae), the solitaire (Pezophaps),
sheathbills (Chionis), some guans (Crax) and stone curlews (Burhinus) use a bony knob on the alular metacarpal to
punch and hammer opponents. The jacanas Actophilornis and Irediparra have an expanded, blade-like radius. The extinct
Xenicibis was unique in having an elongate forelimb and massive hand which likely functioned in combat or defence
as a jointed club or flail. Swans, for instance, may strike with the bony spurs and bite when defending eggs or young.
Feathers are a feature characteristic of birds (though also present in some dinosaurs not currently considered to
be true birds). They facilitate flight, provide insulation that aids in thermoregulation, and are used in display,
camouflage, and signaling. There are several types of feathers, each serving its own set of purposes. Feathers are
epidermal growths attached to the skin and arise only in specific tracts of skin called pterylae. The distribution
pattern of these feather tracts (pterylosis) is used in taxonomy and systematics. The arrangement and appearance
of feathers on the body, called plumage, may vary within species by age, social status, and sex. Plumage is regularly
moulted; the standard plumage of a bird that has moulted after breeding is known as the "non-breeding" plumage, or—in
the Humphrey-Parkes terminology—"basic" plumage; breeding plumages or variations of the basic plumage are known under
the Humphrey-Parkes system as "alternate" plumages. Moulting is annual in most species, although some may have two
moults a year, and large birds of prey may moult only once every few years. Moulting patterns vary across species.
In passerines, flight feathers are replaced one at a time with the innermost primary being the first. When the fifth
or sixth primary is replaced, the outermost tertiaries begin to drop. After the innermost tertiaries are moulted,
the secondaries starting from the innermost begin to drop and this proceeds to the outer feathers (centrifugal moult).
The greater primary coverts are moulted in synchrony with the primary that they overlap. A small number of species,
such as ducks and geese, lose all of their flight feathers at once, temporarily becoming flightless. As a general
rule, the tail feathers are moulted and replaced starting with the innermost pair. Centripetal moults of tail feathers
are however seen in the Phasianidae. The centrifugal moult is modified in the tail feathers of woodpeckers and treecreepers,
in that it begins with the second innermost pair of feathers and finishes with the central pair of feathers so that
the bird maintains a functional climbing tail. The general pattern seen in passerines is that the primaries are replaced
outward, secondaries inward, and the tail from center outward. Before nesting, the females of most bird species gain
a bare brood patch by losing feathers close to the belly. The skin there is well supplied with blood vessels and
helps the bird in incubation. Feathers require maintenance and birds preen or groom them daily, spending an average
of around 9% of their daily time on this. The bill is used to brush away foreign particles and to apply waxy secretions
from the uropygial gland; these secretions protect the feathers' flexibility and act as an antimicrobial agent, inhibiting
the growth of feather-degrading bacteria. This may be supplemented with the secretions of formic acid from ants,
which birds receive through a behaviour known as anting, to remove feather parasites. Most birds can fly, which distinguishes
them from almost all other vertebrate classes. Flight is the primary means of locomotion for most bird species and
is used for breeding, feeding, and predator avoidance and escape. Birds have various adaptations for flight, including
a lightweight skeleton, two large flight muscles, the pectoralis (which accounts for 15% of the total mass of the
bird) and the supracoracoideus, as well as a modified forelimb (wing) that serves as an aerofoil. Wing shape and
size generally determine a bird species' type of flight; many birds combine powered, flapping flight with less energy-intensive
soaring flight. About 60 extant bird species are flightless, as were many extinct birds. Flightlessness often arises
in birds on isolated islands, probably due to limited resources and the absence of land predators. Though flightless,
penguins use similar musculature and movements to "fly" through the water, as do auks, shearwaters and dippers. Birds
that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others
that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists.
Birds' feeding strategies vary by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt
insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological
control agents' and their presence encouraged in biological pest control programs. Nectar feeders such as hummingbirds,
sunbirds, lories, and lorikeets amongst others have specially adapted brushy tongues and in many cases bills designed
to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill
lengths and feeding methods result in the separation of ecological niches. Loons, diving ducks, penguins and auks
pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers
and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. Geese
and dabbling ducks are primarily grazers. Some species, including frigatebirds, gulls, and skuas, engage in kleptoparasitism,
stealing food items from other birds. Kleptoparasitism is thought to be a supplement to food obtained by hunting,
rather than a significant part of any species' diet; a study of great frigatebirds stealing from masked boobies estimated
that the frigatebirds stole at most 40% of their food and on average stole only 5%. Other birds are scavengers; some
of these, like vultures, are specialised carrion eaters, while others, like gulls, corvids, or other birds of prey,
are opportunists. Most birds scoop water in their beaks and raise their head to let water run down the throat. Some
species, especially of arid zones, belonging to the pigeon, finch, mousebird, button-quail and bustard families are
capable of sucking up water without the need to tilt back their heads. Some desert birds depend on water sources
and sandgrouse are particularly well known for their daily congregations at waterholes. Nesting sandgrouse and many
plovers carry water to their young by wetting their belly feathers. Some birds carry water for chicks at the nest
in their crop or regurgitate it along with food. The pigeon family, flamingos and penguins have adaptations to produce
a nutritive fluid called crop milk that they provide to their chicks. Feathers, being critical to the survival of
a bird, require maintenance. Apart from physical wear and tear, feathers face the onslaught of fungi, ectoparasitic
feather mites and birdlice. The physical condition of feathers is maintained by preening, often with the application
of secretions from the preen gland. Birds also bathe in water or dust themselves. While some birds dip into shallow
water, more aerial species may make aerial dips into water and arboreal species often make use of dew or rain that
collects on leaves. Birds of arid regions make use of loose soil to dust-bathe. A behaviour termed anting, in which
the bird encourages ants to run through its plumage, is also thought to help reduce the ectoparasite load in
feathers. Many species will spread out their wings and expose them to direct sunlight and this too is thought to
help in reducing fungal and ectoparasitic activity that may lead to feather damage. Many bird species migrate to
take advantage of global differences of seasonal temperatures, therefore optimising availability of food sources
and breeding habitat. These migrations vary among the different groups. Many landbirds, shorebirds, and waterbirds
undertake annual long distance migrations, usually triggered by the length of daylight as well as weather conditions.
These birds are characterised by a breeding season spent in the temperate or polar regions and a non-breeding season
in the tropical regions or opposite hemisphere. Before migration, birds substantially increase body fats and reserves
and reduce the size of some of their organs. Migration is highly demanding energetically, particularly as birds need
to cross deserts and oceans without refuelling. Landbirds have a flight range of around 2,500 km (1,600 mi) and shorebirds
can fly up to 4,000 km (2,500 mi), although the bar-tailed godwit is capable of non-stop flights of up to 10,200
km (6,300 mi). Seabirds also undertake long migrations, the longest annual migration being those of sooty shearwaters,
which nest in New Zealand and Chile and spend the northern summer feeding in the North Pacific off Japan, Alaska
and California, an annual round trip of 64,000 km (39,800 mi). Other seabirds disperse after breeding, travelling
widely but having no set migration route. Albatrosses nesting in the Southern Ocean often undertake circumpolar trips
between breeding seasons. Some bird species undertake shorter migrations, travelling only as far as is required to
avoid bad weather or obtain food. Irruptive species such as the boreal finches are one such group and can commonly
be found at a location in one year and absent the next. This type of migration is normally associated with food availability.
Species may also travel shorter distances over part of their range, with individuals from higher latitudes travelling
into the existing range of conspecifics; others undertake partial migrations, where only a fraction of the population,
usually females and subdominant males, migrates. Partial migration can form a large percentage of the migration behaviour
of birds in some regions; in Australia, surveys found that 44% of non-passerine birds and 32% of passerines were
partially migratory. Altitudinal migration is a form of short distance migration in which birds spend the breeding
season at higher elevations and move to lower ones during suboptimal conditions. It is most often triggered
by temperature changes and usually occurs when the normal territories also become inhospitable due to lack of food.
Some species may also be nomadic, holding no fixed territory and moving according to weather and food availability.
Parrots as a family are overwhelmingly neither migratory nor sedentary but are considered to be either dispersive,
irruptive or nomadic, or to undertake small and irregular migrations. The ability of birds to return to precise locations across vast
distances has been known for some time; in an experiment conducted in the 1950s a Manx shearwater released in Boston
returned to its colony in Skomer, Wales, within 13 days, a distance of 5,150 km (3,200 mi). Birds navigate during
migration using a variety of methods. For diurnal migrants, the sun is used to navigate by day, and a stellar compass
is used at night. Birds that use the sun compensate for the changing position of the sun during the day by the use
of an internal clock. Orientation with the stellar compass depends on the position of the constellations surrounding
Polaris. These are backed up in some species by their ability to sense the Earth's geomagnetism through specialised
photoreceptors. Birds sometimes use plumage to assess and assert social dominance, to display breeding condition
in sexually selected species, or to make threatening displays, as in the sunbittern's mimicry of a large predator
to ward off hawks and protect young chicks. Variation in plumage also allows for the identification of birds, particularly
between species. Visual communication among birds may also involve ritualised displays, which have developed from
non-signalling actions such as preening, the adjustments of feather position, pecking, or other behaviour. These
displays may signal aggression or submission or may contribute to the formation of pair-bonds. The most elaborate
displays occur during courtship, where "dances" are often formed from complex combinations of many possible component
movements; males' breeding success may depend on the quality of such displays. Calls are used for a variety of purposes,
including mate attraction, evaluation of potential mates, bond formation, the claiming and maintenance of territories,
the identification of other individuals (such as when parents look for chicks in colonies or when mates reunite at
the start of breeding season), and the warning of other birds of potential predators, sometimes with specific information
about the nature of the threat. Some birds also use mechanical sounds for auditory communication. The Coenocorypha
snipes of New Zealand drive air through their feathers, woodpeckers drum territorially, and palm cockatoos use tools
to drum. While some birds are essentially territorial or live in small family groups, other birds may form large
flocks. The principal benefits of flocking are safety in numbers and increased foraging efficiency. Defence against
predators is particularly important in closed habitats like forests, where ambush predation is common and multiple
eyes can provide a valuable early warning system. This has led to the development of many mixed-species feeding flocks,
which are usually composed of small numbers of many species; these flocks provide safety in numbers but increase
potential competition for resources. Costs of flocking include bullying of socially subordinate birds by more dominant
birds and the reduction of feeding efficiency in certain cases. The high metabolic rates of birds during the active
part of the day are supplemented by rest at other times. Sleeping birds often use a type of sleep known as vigilant
sleep, where periods of rest are interspersed with quick eye-opening "peeks", allowing them to be sensitive to disturbances
and enable rapid escape from threats. Swifts are believed to be able to sleep in flight and radar observations suggest
that they orient themselves to face the wind in their roosting flight. It has been suggested that there may be certain
kinds of sleep which are possible even when in flight. Some birds have also demonstrated the capacity to fall into
slow-wave sleep one hemisphere of the brain at a time. Birds tend to exercise this ability depending upon their
position relative to the outside of the flock. This may allow the eye opposite the sleeping hemisphere to remain
vigilant for predators by viewing the outer margins of the flock. This adaptation is also known from marine mammals.
Communal roosting is common because it lowers the loss of body heat and decreases the risks associated with predators.
Roosting sites are often chosen with regard to thermoregulation and safety. Many sleeping birds bend their heads
over their backs and tuck their bills in their back feathers, although others place their beaks among their breast
feathers. Many birds rest on one leg, while some may pull up their legs into their feathers, especially in cold weather.
Perching birds have a tendon locking mechanism that helps them hold on to the perch when they are asleep. Many ground
birds, such as quails and pheasants, roost in trees. A few parrots of the genus Loriculus roost hanging upside down.
Some hummingbirds go into a nightly state of torpor accompanied with a reduction of their metabolic rates. This physiological
adaptation is seen in nearly a hundred other species, including owlet-nightjars, nightjars, and woodswallows. One species,
the common poorwill, even enters a state of hibernation. Birds do not have sweat glands, but they may cool themselves
by moving to shade, standing in water, panting, increasing their surface area, fluttering their throat or by using
special behaviours like urohidrosis to cool themselves. Ninety-five percent of bird species are socially monogamous.
These species pair for at least the length of the breeding season or—in some cases—for several years or until the
death of one mate. Monogamy allows for both paternal care and biparental care, which is especially important for
species in which females require males' assistance for successful brood-rearing. Among many socially monogamous species,
extra-pair copulation (infidelity) is common. Such behaviour typically occurs between dominant males and females
paired with subordinate males, but may also be the result of forced copulation in ducks and other anatids. Female
birds have sperm storage mechanisms that allow sperm from males to remain viable long after copulation, a hundred
days in some species. Sperm from multiple males may compete through this mechanism. For females, possible benefits
of extra-pair copulation include getting better genes for her offspring and insuring against the possibility of infertility
in her mate. Males of species that engage in extra-pair copulations will closely guard their mates to ensure the
parentage of the offspring that they raise. Breeding usually involves some form of courtship display, typically performed
by the male. Most displays are rather simple and involve some type of song. Some displays, however, are quite elaborate.
Depending on the species, these may include wing or tail drumming, dancing, aerial flights, or communal lekking.
Females are generally the ones that drive partner selection, although in the polyandrous phalaropes, this is reversed:
plainer males choose brightly coloured females. Courtship feeding, billing and allopreening are commonly performed
between partners, generally after the birds have paired and mated. All birds lay amniotic eggs with hard shells made
mostly of calcium carbonate. Hole and burrow nesting species tend to lay white or pale eggs, while open nesters lay
camouflaged eggs. There are many exceptions to this pattern, however; the ground-nesting nightjars have pale eggs,
and camouflage is instead provided by their plumage. Species that are victims of brood parasites have varying egg
colours to improve the chances of spotting a parasite's egg, which forces female parasites to match their eggs to
those of their hosts. Bird eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can
be cups, domes, plates, beds, scrapes, mounds, or burrows. Some bird nests, however, are extremely primitive; albatross
nests are no more than a scrape on the ground. Most birds build nests in sheltered, hidden areas to avoid predation,
but large or colonial birds—which are more capable of defence—may build more open nests. During nest construction,
some species seek out plant matter from plants with parasite-reducing toxins to improve chick survival, and feathers
are often used for nest insulation. Some bird species have no nests; the cliff-nesting common guillemot lays its
eggs on bare rock, and male emperor penguins keep eggs between their body and feet. The absence of nests is especially
prevalent in ground-nesting species where the newly hatched young are precocial. Incubation, which optimises temperature
for chick development, usually begins after the last egg has been laid. In monogamous species incubation duties are
often shared, whereas in polygamous species one parent is wholly responsible for incubation. Warmth from parents
passes to the eggs through brood patches, areas of bare skin on the abdomen or breast of the incubating birds. Incubation
can be an energetically demanding process; adult albatrosses, for instance, lose as much as 83 grams (2.9 oz) of
body weight per day of incubation. The warmth for the incubation of the eggs of megapodes comes from the sun, decaying
vegetation or volcanic sources. Incubation periods range from 10 days (in woodpeckers, cuckoos and passerine birds)
to over 80 days (in albatrosses and kiwis). The length and nature of parental care varies widely amongst different
orders and species. At one extreme, parental care in megapodes ends at hatching; the newly hatched chick digs itself
out of the nest mound without parental assistance and can fend for itself immediately. At the other extreme, many
seabirds have extended periods of parental care, the longest being that of the great frigatebird, whose chicks take
up to six months to fledge and are fed by the parents for up to an additional 14 months. The chick guard stage describes
the period of breeding during which one of the adult birds is permanently present at the nest after chicks have hatched.
The main purpose of the guard stage is to aid offspring to thermoregulate and protect them from predation. In some
species, both parents care for nestlings and fledglings; in others, such care is the responsibility of only one sex.
In some species, other members of the same species—usually close relatives of the breeding pair, such as offspring
from previous broods—will help with the raising of the young. Such alloparenting is particularly common among the
Corvida, which includes such birds as the true crows, Australian magpie and fairy-wrens, but has been observed in
species as different as the rifleman and red kite. Among most groups of animals, male parental care is rare. In birds,
however, it is quite common—more so than in any other vertebrate class. Though territory and nest site defence, incubation,
and chick feeding are often shared tasks, there is sometimes a division of labour in which one mate undertakes all
or most of a particular duty. The point at which chicks fledge varies dramatically. The chicks of the Synthliboramphus
murrelets, like the ancient murrelet, leave the nest the night after they hatch, following their parents out to sea,
where they are raised away from terrestrial predators. Some other species, such as ducks, move their chicks away
from the nest at an early age. In most species, chicks leave the nest just before, or soon after, they are able to
fly. The amount of parental care after fledging varies; albatross chicks leave the nest on their own and receive
no further help, while other species continue some supplementary feeding after fledging. Chicks may also follow their
parents during their first migration. Brood parasitism, in which an egg-layer leaves her eggs with another individual's
brood, is more common among birds than any other type of organism. After a parasitic bird lays her eggs in another
bird's nest, they are often accepted and raised by the host at the expense of the host's own brood. Brood parasites
may be either obligate brood parasites, which must lay their eggs in the nests of other species because they are
incapable of raising their own young, or non-obligate brood parasites, which sometimes lay eggs in the nests of conspecifics
to increase their reproductive output even though they could have raised their own young. One hundred bird species,
including honeyguides, icterids, and ducks, are obligate parasites, though the most famous are the cuckoos. Some
brood parasites are adapted to hatch before their host's young, which allows them to destroy the host's eggs by pushing
them out of the nest or to kill the host's chicks; this ensures that all food brought to the nest will be fed to
the parasitic chicks. Birds have evolved a variety of mating behaviors, with the peacock tail being perhaps the most
famous example of sexual selection and the Fisherian runaway. Commonly occurring sexual dimorphisms such as size
and color differences are energetically costly attributes that signal competitive breeding situations. Many types
of avian sexual selection have been identified: intersexual selection, also known as female choice, and intrasexual
competition, where individuals of the more abundant sex compete with each other for the privilege to mate. Sexually
selected traits often evolve to become more pronounced in competitive breeding situations until the trait begins
to limit the individual's fitness. Conflicts between an individual's fitness and signaling adaptations ensure that
sexually selected ornaments such as plumage coloration and courtship behavior are "honest" traits. Signals must be
costly to ensure that only good-quality individuals can present these exaggerated sexual ornaments and behaviors.
Incestuous matings by the purple-crowned fairy wren Malurus coronatus result in severe fitness costs due to inbreeding
depression (greater than 30% reduction in hatchability of eggs). Females paired with related males may undertake
extra pair matings, reported in as many as 90% of avian species, that can reduce the negative
effects of inbreeding. However, there are ecological and demographic constraints on extra pair matings. Nevertheless,
43% of broods produced by incestuously paired females contained extra pair young. Cooperative breeding in birds typically
occurs when offspring, usually males, delay dispersal from their natal group in order to remain with the family to
help rear younger kin. Female offspring rarely stay at home, dispersing over distances that allow them to breed independently,
or to join unrelated groups. In general, inbreeding is avoided because it leads to a reduction in progeny fitness
(inbreeding depression) due largely to the homozygous expression of deleterious recessive alleles. Cross-fertilization
between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny. Birds
occupy a wide range of ecological positions. While some birds are generalists, others are highly specialised in their
habitat or food requirements. Even within a single habitat, such as a forest, the niches occupied by different species
of birds vary, with some species feeding in the forest canopy, others beneath the canopy, and still others on the
forest floor. Forest birds may be insectivores, frugivores, and nectarivores. Aquatic birds generally feed by fishing,
plant eating, and piracy or kleptoparasitism. Birds of prey specialise in hunting mammals or other birds, while vultures
are specialised scavengers. Avivores are animals that are specialized at preying on birds. Birds are often important
to island ecology. Birds have frequently reached islands that mammals have not; on those islands, birds may fulfill
ecological roles typically played by larger animals. For example, in New Zealand the moas were important browsers,
as are the kereru and kokako today. Today the plants of New Zealand retain the defensive adaptations evolved to protect
them from the extinct moa. Nesting seabirds may also affect the ecology of islands and surrounding seas, principally
through the concentration of large quantities of guano, which may enrich the local soil and the surrounding seas.
Since birds are highly visible and common animals, humans have had a relationship with them since the dawn of man.
Sometimes, these relationships are mutualistic, like the cooperative honey-gathering among honeyguides and African
peoples such as the Borana. Other times, they may be commensal, as when species such as the house sparrow have benefited
from human activities. Several bird species have become commercially significant agricultural pests, and some pose
an aviation hazard. Human activities can also be detrimental, and have threatened numerous bird species with extinction
(hunting, avian lead poisoning, pesticides, roadkill, and predation by pet cats and dogs are common sources of death
for birds). Domesticated birds raised for meat and eggs, called poultry, are the largest source of animal protein
eaten by humans; in 2003, 76 million tons of poultry and 61 million tons of eggs were produced worldwide. Chickens
account for much of human poultry consumption, though domesticated turkeys, ducks, and geese are also relatively
common. Many species of birds are also hunted for meat. Bird hunting is primarily a recreational activity except
in extremely undeveloped areas. The most important birds hunted in North and South America are waterfowl; other widely
hunted birds include pheasants, wild turkeys, quail, doves, partridge, grouse, snipe, and woodcock. Muttonbirding
is also popular in Australia and New Zealand. Though some hunting, such as that of muttonbirds, may be sustainable,
hunting has led to the extinction or endangerment of dozens of species. Birds have been domesticated by humans both
as pets and for practical purposes. Colourful birds, such as parrots and mynas, are bred in captivity or kept as
pets, a practice that has led to the illegal trafficking of some endangered species. Falcons and cormorants have
long been used for hunting and fishing, respectively. Messenger pigeons, used since at least 1 AD, remained important
as recently as World War II. Today, such activities are more common either as hobbies, for entertainment and tourism,
or for sports such as pigeon racing. Birds play prominent and diverse roles in religion and mythology. In religion,
birds may serve as either messengers or priests and leaders for a deity, such as in the Cult of Makemake, in which
the Tangata manu of Easter Island served as chiefs or as attendants, as in the case of Hugin and Munin, the two common
ravens who whispered news into the ears of the Norse god Odin. In several civilizations of ancient Italy, particularly
Etruscan and Roman religion, priests were involved in augury, or interpreting the words of birds while the "auspex"
(from which the word "auspicious" is derived) watched their activities to foretell events. They may also serve as
religious symbols, as when Jonah (Hebrew: יוֹנָה, dove) embodied the fright, passivity, mourning, and beauty traditionally
associated with doves. Birds have themselves been deified, as in the case of the common peacock, which is perceived
as Mother Earth by the Dravidians of India. In religious images preserved from the Inca and Tiwanaku empires, birds
are depicted in the process of transgressing boundaries between earthly and underground spiritual realms. Indigenous
peoples of the central Andes maintain legends of birds passing to and from metaphysical worlds. The mythical chullumpi
bird is said to mark the existence of a portal between such worlds, and to transform itself into a llama. Birds have
featured in culture and art since prehistoric times, when they were represented in early cave paintings. Some birds
have been perceived as monsters, including the mythological Roc and the Māori's legendary Pouākai, a giant bird capable
of snatching humans. Birds were later used as symbols of power, as in the magnificent Peacock Throne of the Mughal
and Persian emperors. With the advent of scientific interest in birds, many paintings of birds were commissioned
for books. Among the most famous of these bird artists was John James Audubon, whose paintings of North American
birds were a great commercial success in Europe and who later lent his name to the National Audubon Society. Birds
are also important figures in poetry; for example, Homer incorporated nightingales into his Odyssey, and Catullus
used a sparrow as an erotic symbol in his Catullus 2. The relationship between an albatross and a sailor is the central
theme of Samuel Taylor Coleridge's The Rime of the Ancient Mariner, which led to the use of the term as a metaphor
for a 'burden'. Other English metaphors derive from birds; vulture funds and vulture investors, for instance, take
their name from the scavenging vulture. Though human activities have allowed the expansion of a few species, such
as the barn swallow and European starling, they have caused population decreases or extinction in many other species.
Over a hundred bird species have gone extinct in historical times, although the most dramatic human-caused avian
extinctions, eradicating an estimated 750–1800 species, occurred during the human colonisation of Melanesian, Polynesian,
and Micronesian islands. Many bird populations are declining worldwide, with 1,227 species listed as threatened by
BirdLife International and the IUCN in 2009.
The Qing dynasty (Chinese: 清朝; pinyin: Qīng Cháo; Wade–Giles: Ch'ing Ch'ao; IPA: [tɕʰíŋ tʂʰɑ̌ʊ̯]), officially the Great Qing
(Chinese: 大清; pinyin: Dà Qīng), also called the Empire of the Great Qing, or the Manchu dynasty, was the last imperial
dynasty of China, ruling from 1644 to 1912 with a brief, abortive restoration in 1917. It was preceded by the Ming
dynasty and succeeded by the Republic of China. The Qing multi-cultural empire lasted almost three centuries and
formed the territorial base for the modern Chinese state. The dynasty was founded by the Jurchen Aisin Gioro clan
in Manchuria. In the late sixteenth century, Nurhaci, originally a Ming vassal, began organizing Jurchen clans into
"Banners", military-social units. Nurhaci formed these clans into a unified entity, the subjects of which became
known collectively as the Manchu people. By 1636, his son Hong Taiji began driving Ming forces out of Liaodong and
declared a new dynasty, the Qing. In 1644, peasant rebels led by Li Zicheng conquered the Ming capital Beijing. Rather
than serve them, Ming general Wu Sangui made an alliance with the Manchus and opened the Shanhai Pass to the Banner
Armies led by Prince Dorgon, who defeated the rebels and seized Beijing. The conquest of China proper was not completed
until 1683 under the Kangxi Emperor (r. 1661–1722). The Ten Great Campaigns of the Qianlong Emperor from the 1750s
to the 1790s extended Qing control into Central Asia. The early rulers maintained their Manchu ways; although their official title was Emperor, they were known as khans to the Mongols and patronized Tibetan Buddhism. Nevertheless, they governed using Confucian styles and institutions of bureaucratic government. They retained the imperial examinations to recruit
Han Chinese to work under or in parallel with Manchus. They also adapted the ideals of the tributary system in international
relations, and in places such as Taiwan, the Qing's so-called internal foreign policy closely resembled colonial policy
and control. The reign of the Qianlong Emperor (1735–1796) saw the apogee and initial decline in prosperity and imperial
control. The population rose to some 400 million, but taxes and government revenues were fixed at a low rate, virtually
guaranteeing eventual fiscal crisis. Corruption set in, rebels tested government legitimacy, and ruling elites did
not change their mindsets in the face of changes in the world system. Following the Opium War, European powers imposed
unequal treaties, free trade, extraterritoriality and treaty ports under foreign control. The Taiping Rebellion (1850–64)
and Dungan Revolt (1862–77) in Central Asia led to the deaths of some 20 million people. In spite of these disasters,
in the Tongzhi Restoration of the 1860s, Han Chinese elites rallied to the defense of the Confucian order and the
Qing rulers. The initial gains in the Self-Strengthening Movement were destroyed in the First Sino-Japanese War of
1895, in which the Qing lost its influence over Korea and the possession of Taiwan. New Armies were organized, but
the ambitious Hundred Days' Reform of 1898 was turned back by Empress Dowager Cixi, a ruthless but capable leader.
When, in response to the violently anti-foreign Yihetuan ("Boxers"), foreign powers invaded China, the Empress Dowager
declared war on them, leading to defeat and the flight of the Imperial Court to Xi'an. After agreeing to sign the
Boxer Protocol the government then initiated unprecedented fiscal and administrative reforms, including elections,
a new legal code, and abolition of the examination system. Sun Yat-sen and other revolutionaries competed with reformers
such as Liang Qichao and monarchists such as Kang Youwei to transform the Qing empire into a modern nation. After
the death of Empress Dowager Cixi and the Guangxu Emperor in 1908, the hardline Manchu court alienated reformers
and local elites alike. Local uprisings starting on October 11, 1911 led to the Xinhai Revolution. Puyi, the last
emperor, abdicated on February 12, 1912. Nurhaci declared himself the "Bright Khan" of the Later Jin (lit. "gold")
state in honor both of the 12–13th century Jurchen Jin dynasty and of his Aisin Gioro clan (Aisin being Manchu for
the Chinese 金 (jīn, "gold")). His son Hong Taiji renamed the dynasty Great Qing in 1636. There are competing explanations
on the meaning of Qīng (lit. "clear" or "pure"). The name may have been selected in reaction to the name of the Ming
dynasty (明), which consists of the Chinese characters for "sun" (日) and "moon" (月), both associated with the fire
element of the Chinese zodiacal system. The character Qīng (清) is composed of "water" (氵) and "azure" (青), both associated
with the water element. This association would justify the Qing conquest as defeat of fire by water. The water imagery
of the new name may also have had Buddhist overtones of perspicacity and enlightenment and connections with the Bodhisattva
Manjusri. The Manchu name daicing, which sounds like a phonetic rendering of Dà Qīng or Dai Ching, may in fact have
been derived from a Mongolian word that means "warrior". Daicing gurun may therefore have meant "warrior state",
a pun that was only intelligible to Manchu and Mongol people. In the later part of the dynasty, however, even the
Manchus themselves had forgotten this possible meaning. After conquering "China proper", the Manchus identified their
state as "China" (中國, Zhōngguó; "Middle Kingdom"), and referred to it as Dulimbai Gurun in Manchu (Dulimbai means
"central" or "middle," gurun means "nation" or "state"). The emperors equated the lands of the Qing state (including
present day Northeast China, Xinjiang, Mongolia, Tibet and other areas) as "China" in both the Chinese and Manchu
languages, defining China as a multi-ethnic state, and rejecting the idea that "China" only meant Han areas. The
Qing emperors proclaimed that both Han and non-Han peoples were part of "China." They used both "China" and "Qing"
to refer to their state in official documents, international treaties (as the Qing was known internationally as "China"
or the "Chinese Empire") and foreign affairs, and "Chinese language" (Dulimbai gurun i bithe) included Chinese, Manchu,
and Mongol languages, and "Chinese people" (中國之人 Zhōngguó zhī rén; Manchu: Dulimbai gurun i niyalma) referred to
all subjects of the empire. In the Chinese-language versions of its treaties and its maps of the world, the Qing
government used "Qing" and "China" interchangeably. The Qing dynasty was founded not by Han Chinese, who constitute
the majority of the Chinese population, but by a sedentary farming people known as the Jurchen, a Tungusic people
who lived around the region now comprising the Chinese provinces of Jilin and Heilongjiang. The Manchus are sometimes
mistaken for a nomadic people, which they were not. What was to become the Manchu state was founded by Nurhaci, the
chieftain of a minor Jurchen tribe – the Aisin Gioro – in Jianzhou in the early 17th century. Originally a vassal
of the Ming emperors, Nurhaci embarked on an intertribal feud in 1582 that escalated into a campaign to unify the
nearby tribes. By 1616, he had sufficiently consolidated Jianzhou so as to be able to proclaim himself Khan of the
Great Jin in reference to the previous Jurchen dynasty. Relocating his court from Jianzhou to Liaodong provided Nurhaci
access to more resources; it also brought him in close contact with the Khorchin Mongol domains on the plains of
Mongolia. Although by this time the once-united Mongol nation had long since fragmented into individual and hostile
tribes, these tribes still presented a serious security threat to the Ming borders. Nurhaci's policy towards the
Khorchins was to seek their friendship and cooperation against the Ming, securing his western border from a powerful
potential enemy. There were too few ethnic Manchus to conquer China, so they gained strength by defeating and absorbing Mongols and, more importantly, by adding Han Chinese to the Eight Banners. The Manchus had to create an entire "Jiu Han jun" (Old Han Army) because of the massive number of Han Chinese soldiers absorbed into the Eight Banners by both capture and defection. Ming artillery had been responsible for many victories against the Manchus, so in 1641 the Manchus established an artillery corps made up of Han Chinese soldiers. The first two Han Banners were created in 1637, and the swelling of Han Chinese numbers led to all eight Han Banners being established by 1642. It was defecting Ming Han Chinese armies that later conquered southern China for the Qing. Together these military reforms enabled Hong Taiji to defeat Ming forces resoundingly in a series
of battles from 1640 to 1642 for the territories of Songshan and Jinzhou. This final victory resulted in the surrender
of many of the Ming dynasty's most battle-hardened troops, the death of Yuan Chonghuan at the hands of the Chongzhen
Emperor (who thought Yuan had betrayed him), and the complete and permanent withdrawal of the remaining Ming forces
north of the Great Wall. Hong Taiji's bureaucracy was staffed with many Han Chinese, including many newly surrendered
Ming officials. The Manchus' continued dominance was ensured by an ethnic quota for top bureaucratic appointments.
Hong Taiji's reign also saw a fundamental change of policy towards his Han Chinese subjects. Nurhaci had treated
Han in Liaodong differently according to how much grain they had: those with less than 5 to 7 sin were treated like chattel, while those with more were rewarded with property. After a revolt by Han in Liaodong in 1623, Nurhaci, who had previously given concessions to conquered Han subjects there, turned against them and ordered that they no longer be trusted; he enacted discriminatory policies and killings against them, while ordering that Han who had assimilated to the Jurchen (in Jilin) before 1619 be treated as the equals of Jurchens, not like the conquered Han in Liaodong. Hong Taiji instead incorporated them into the Jurchen "nation" as full (if not first-class) citizens,
obligated to provide military service. By 1648, less than one-sixth of the bannermen were of Manchu ancestry. This
change of policy not only increased Hong Taiji's manpower and reduced his military dependence on banners not under
his personal control, it also greatly encouraged other Han Chinese subjects of the Ming dynasty to surrender and
accept Jurchen rule when they were defeated militarily. Through these and other measures Hong Taiji was able to centralize
power unto the office of the Khan, which in the long run prevented the Jurchen federation from fragmenting after
his death. Hong Taiji died suddenly in September 1643 without a designated heir. As the Jurchens had traditionally
"elected" their leader through a council of nobles, the Qing state did not have in place a clear succession system
until the reign of the Kangxi Emperor. The leading contenders for power at this time were Hong Taiji's oldest son
Hooge and Hong Taiji's half brother Dorgon. A compromise candidate in the person of Hong Taiji's five-year-old son,
Fulin, was installed as the Shunzhi Emperor, with Dorgon as regent and de facto leader of the Manchu nation. Ming
government officials fought against each other, against fiscal collapse, and against a series of peasant rebellions.
They were unable to capitalise on the Manchu succession dispute and installation of a minor as emperor. In April
1644, the capital at Beijing was sacked by a coalition of rebel forces led by Li Zicheng, a former minor Ming official,
who established a short-lived Shun dynasty. The last Ming ruler, the Chongzhen Emperor, committed suicide when the
city fell, marking the official end of the dynasty. Li Zicheng then led a coalition of rebel forces numbering 200,000
to confront Wu Sangui, the general commanding the Ming garrison at Shanhai Pass. Shanhai Pass is a pivotal pass of
the Great Wall, located fifty miles northeast of Beijing, and for years its defenses kept the Manchus from directly
raiding the Ming capital. Wu Sangui, caught between a rebel army twice his size and a foreign enemy he had fought
for years, decided to cast his lot with the Manchus, with whom he was familiar. Wu Sangui may have been influenced
by Li Zicheng's mistreatment of his family and other wealthy and cultured officials; it was said that Li also took
Wu's concubine Chen Yuanyuan for himself. Wu and Dorgon allied in the name of avenging the death of the Chongzhen
Emperor. Together, the two former enemies met and defeated Li Zicheng's rebel forces in battle on May 27, 1644. The
newly allied armies captured Beijing on June 6. The Shunzhi Emperor was invested as the "Son of Heaven" on October
30. The Manchus, who had positioned themselves as political heir to the Ming emperor by defeating the rebel Li Zicheng,
completed the symbolic transition by holding a formal funeral for the Chongzhen Emperor. However the process of conquering
the rest of China took another seventeen years of battling Ming loyalists, pretenders and rebels. The last Ming pretender,
Prince Gui, sought refuge with the King of Burma, but was turned over to a Qing expeditionary army commanded by Wu
Sangui, who had him brought back to Yunnan province and executed in early 1662. Han Chinese Banners were made up
of Han Chinese who defected to the Qing up to 1644 and joined the Eight Banners, giving them social and legal privileges
in addition to being acculturated to Manchu culture. So many Han defected to the Qing and swelled the ranks of the
Eight Banners that ethnic Manchus became a minority, making up only 16% in 1648, with Han Bannermen dominating at
75% and Mongol Bannermen making up the rest. This multi-ethnic force in which Manchus were only a minority conquered
China for the Qing. The Qing showed that the Manchus valued military skills in propaganda targeted towards the Ming
military to get them to defect to the Qing, since the Ming civilian political system discriminated against the military.
The three Liaodong Han Bannermen officers who played a massive role in the conquest of southern China from the Ming
were Shang Kexi, Geng Zhongming, and Kong Youde and they governed southern China autonomously as viceroys for the
Qing after their conquests. Normally the Manchu Bannermen acted only as reserve forces or in the rear and were used
predominantly for quick strikes with maximum impact, so as to minimize ethnic Manchu losses; instead, the Qing used
defected Han Chinese troops to fight as the vanguard during the entire conquest of China. First, the Manchus had
entered "China proper" because Dorgon responded decisively to Wu Sangui's appeal. Then, after capturing Beijing,
instead of sacking the city as the rebels had done, Dorgon insisted, over the protests of other Manchu princes, on
making it the dynastic capital and reappointing most Ming officials. Choosing Beijing as the capital had not been
a straightforward decision, since no major Chinese dynasty had directly taken over its immediate predecessor's capital.
Keeping the Ming capital and bureaucracy intact helped quickly stabilize the regime and sped up the conquest of the
rest of the country. However, not all of Dorgon's policies were equally popular nor easily implemented. Dorgon's
controversial July 1645 edict (the "haircutting order") forced adult Han Chinese men to shave the front of their
heads and comb the remaining hair into the queue hairstyle which was worn by Manchu men, on pain of death. The popular
description of the order was: "To keep the hair, you lose the head; To keep your head, you cut the hair." To the
Manchus, this policy was a test of loyalty and an aid in distinguishing friend from foe. For the Han Chinese, however,
it was a humiliating reminder of Qing authority that challenged traditional Confucian values. The Classic of Filial
Piety (Xiaojing) held that "a person's body and hair, being gifts from one's parents, are not to be damaged." Under
the Ming dynasty, adult men did not cut their hair but instead wore it in the form of a top-knot. The order triggered
strong resistance to Qing rule in Jiangnan and massive killing of ethnic Han Chinese. It was Han Chinese defectors
who carried out massacres against people refusing to wear the queue. Li Chengdong, a Han Chinese general who had
served the Ming but surrendered to the Qing, ordered his Han troops to carry out three separate massacres in the
city of Jiading within a month, resulting in tens of thousands of deaths. By the end of the third massacre, hardly anyone in the city was left alive. Jiangyin also held out against about 10,000 Han Chinese Qing troops
for 83 days. When the city wall was finally breached on 9 October 1645, the Han Chinese Qing army led by the Han
Chinese Ming defector Liu Liangzuo (劉良佐), who had been ordered to "fill the city with corpses before you sheathe
your swords," massacred the entire population, killing between 74,000 and 100,000 people. The queue was the only
aspect of Manchu culture which the Qing forced on the common Han population. The Qing required people serving as
officials to wear Manchu clothing, but allowed non-official Han civilians to continue wearing Hanfu (Han clothing).
Although his support had been essential to Shunzhi's ascent, Dorgon had through the years centralised so much power
in his hands as to become a direct threat to the throne. So much so that upon his death he was extraordinarily bestowed
the posthumous title of Emperor Yi (Chinese: 義皇帝), the only instance in Qing history in which a Manchu "prince of
the blood" (Chinese: 親王) was so honored. Two months into Shunzhi's personal rule, Dorgon was not only stripped of
his titles, but his corpse was disinterred and mutilated to atone for multiple "crimes", one of which was persecuting to death Shunzhi's agnate eldest brother, Hooge. More importantly, Dorgon's symbolic fall from grace also signaled
a political purge of his family and associates at court, thus reverting power back to the person of the emperor.
After a promising start, Shunzhi's reign was cut short by his early death in 1661 at the age of twenty-four from
smallpox. He was succeeded by his third son Xuanye, who reigned as the Kangxi Emperor. The Manchus sent Han Bannermen
to fight against Koxinga's Ming loyalists in Fujian. The Qing carried out a massive depopulation policy and sea ban, forcing people to evacuate the coast in order to deprive Koxinga's loyalists of resources; this gave rise to a myth that the Manchus were "afraid of water". In Fujian, however, it was Han Bannermen who carried out the fighting and killing for the Qing, which undermines the claim that an alleged Manchu fear of the water lay behind the coastal evacuation and sea ban. Even though a poem refers to the soldiers carrying out massacres in Fujian as "barbarian", both the Han Green Standard Army and Han Bannermen fought for the Qing and carried out the worst of the slaughter. Some 400,000 Green Standard Army soldiers were deployed against the Three Feudatories, in addition to 200,000 Bannermen. The sixty-one-year reign of the Kangxi Emperor was the longest
of any Chinese emperor. Kangxi's reign is also celebrated as the beginning of an era known as the "High Qing", during
which the dynasty reached the zenith of its social, economic and military power. Kangxi's long reign started when
he was eight years old upon the untimely demise of his father. To prevent a repeat of Dorgon's dictatorial monopolizing
of power during the regency, the Shunzhi Emperor, on his deathbed, hastily appointed four senior cabinet ministers
to govern on behalf of his young son. The four ministers — Sonin, Ebilun, Suksaha, and Oboi — were chosen for their
long service, but also to counteract each other's influences. Most important, the four were not closely related to
the imperial family and laid no claim to the throne. However, as time passed, through chance and machination, Oboi,
the most junior of the four, achieved such political dominance as to be a potential threat. Even though Oboi's loyalty
was never an issue, his personal arrogance and political conservatism led him into an escalating conflict with the
young emperor. In 1669 Kangxi, through trickery, disarmed and imprisoned Oboi — a significant victory for a fifteen-year-old
emperor over a wily politician and experienced commander. The early Manchu rulers also established two foundations
of legitimacy which help to explain the stability of their dynasty. The first was the bureaucratic institutions and
the neo-Confucian culture which they adopted from earlier dynasties. Manchu rulers and Han Chinese scholar-official
elites gradually came to terms with each other. The examination system offered a path for ethnic Han to become officials.
Imperial patronage of Kangxi Dictionary demonstrated respect for Confucian learning, while the Sacred Edict of 1670
effectively extolled Confucian family values. The second major source of stability was the Central Asian aspect of
their Manchu identity which allowed them to appeal to Mongol, Tibetan and Uighur constituents. The Qing used the
title of Emperor (Huangdi) in Chinese while among Mongols the Qing monarch was referred to as Bogda khan (wise Khan),
and as Gong Ma in Tibet. Qianlong propagated an image of himself as a Buddhist sage ruler and patron of
Tibetan Buddhism. In the Manchu language, the Qing monarch was alternately referred to as either Huwangdi (Emperor)
or Khan with no special distinction between the two usages. The Kangxi Emperor also welcomed to his court Jesuit
missionaries, who had first come to China under the Ming. Missionaries including Tomás Pereira, Martino Martini,
Johann Adam Schall von Bell, Ferdinand Verbiest and Antoine Thomas held significant positions as military weapons
experts, mathematicians, cartographers, astronomers and advisers to the emperor. The relationship of trust was however
lost in the later Chinese Rites controversy. Yet controlling the "Mandate of Heaven" was a daunting task. The vastness
of China's territory meant that there were only enough banner troops to garrison key cities forming the backbone
of a defense network that relied heavily on surrendered Ming soldiers. In addition, three surrendered Ming generals
were singled out for their contributions to the establishment of the Qing dynasty, ennobled as feudal princes (藩王),
and given governorships over vast territories in Southern China. The chief of these was Wu Sangui, who was given
the provinces of Yunnan and Guizhou, while generals Shang Kexi and Geng Jingzhong were given Guangdong and Fujian
provinces respectively. As the years went by, the three feudal lords and their extensive territories became increasingly
autonomous. Finally, in 1673, Shang Kexi petitioned Kangxi for permission to retire to his hometown in Liaodong province
and nominated his son as his successor. The young emperor granted his retirement, but denied the heredity of his
fief. In reaction, the two other generals decided to petition for their own retirements to test Kangxi's resolve,
thinking that he would not risk offending them. The move backfired as the young emperor called their bluff by accepting
their requests and ordering that all three fiefdoms be reverted to the crown. Faced with the stripping of their
powers, Wu Sangui, later joined by Geng Jingzhong and by Shang Kexi's son Shang Zhixin, felt they had no choice but
to revolt. The ensuing Revolt of the Three Feudatories lasted for eight years. Wu attempted, ultimately in vain,
to fan the embers of south China Ming loyalty by restoring Ming customs, ordering that the resented queues be cut,
and declaring himself emperor of a new dynasty. At the peak of the rebels' fortunes, they extended their control
as far north as the Yangtze River, nearly establishing a divided China. Wu then hesitated to go further north, not
being able to coordinate strategy with his allies, and Kangxi was able to unify his forces for a counterattack led
by a new generation of Manchu generals. By 1681, the Qing government had established control over a ravaged southern
China which took several decades to recover. Manchu generals and Bannermen were initially put to shame by the better performance of the Han Chinese Green Standard Army against the rebels. Kangxi noted this and tasked the generals Sun Sike, Wang Jinbao, and Zhao Liangdong with leading Green Standard soldiers to crush the rebels. Believing that Han Chinese were superior at battling other Han people, the Qing used the Green Standard Army, rather than Bannermen, as the dominant and majority force in crushing the revolt. Similarly, in
northwestern China against Wang Fuchen, the Qing used Han Chinese Green Standard Army soldiers and Han Chinese Generals
such as Zhao Liangdong, Wang Jinbao, and Zhang Yong as the primary military forces. This choice was due to the rocky
terrain, which favoured infantry troops over cavalry, to the desire to keep Bannermen in the reserves, and, again,
to the belief that Han troops were better at fighting other Han people. These Han generals achieved victory over
the rebels. Also due to the mountainous terrain, Sichuan and southern Shaanxi were also retaken by the Han Chinese
Green Standard Army under Wang Jinbao and Zhao Liangdong in 1680, with Manchus only participating in dealing with
logistics and provisions. Some 400,000 Green Standard Army soldiers and 150,000 Bannermen served on the Qing side during the war; 213 Han Chinese Banner companies and 527 companies of Mongol and Manchu Banners were mobilized during the revolt. Although Wu crushed the Qing forces sent against him in 1673–1674, the Qing retained the support of the majority of Han Chinese soldiers and of the Han elite, who refused to join Wu Sangui in the revolt. Because the Eight Banners and Manchu officers fared poorly against Wu Sangui, the Qing responded with a massive army of more than 900,000 Han Chinese (non-Banner) troops instead of the Eight Banners; Wu Sangui's forces were ultimately crushed by the Green Standard Army, made up of defected Ming soldiers. To extend and consolidate
the dynasty's control in Central Asia, the Kangxi Emperor personally led a series of military campaigns against the
Dzungars in Outer Mongolia. The Kangxi Emperor was able to successfully expel Galdan's invading forces from these
regions, which were then incorporated into the empire. Galdan was eventually killed in the Dzungar–Qing War. In 1683,
Qing forces received the surrender of Taiwan from Zheng Keshuang, grandson of Koxinga, who had conquered Taiwan from
the Dutch colonists as a base against the Qing. Zheng Keshuang was awarded the title "Duke Haicheng" (海澄公) and was
inducted into the Han Chinese Plain Red Banner of the Eight Banners when he moved to Beijing. Several Ming princes
had accompanied Koxinga to Taiwan in 1661-1662, including the Prince of Ningjing Zhu Shugui and Prince Zhu Honghuan
(朱弘桓), son of Zhu Yihai, where they lived in the Kingdom of Tungning. In 1683 the Qing sent the 17 Ming princes still living on Taiwan back to mainland China, where, spared from execution, they spent the rest of their lives in exile. Winning Taiwan freed Kangxi's forces for a series of battles over Albazin, the far eastern outpost
of the Tsardom of Russia. Zheng's former soldiers on Taiwan like the rattan shield troops were also inducted into
the Eight Banners and used by the Qing against Russian Cossacks at Albazin. The 1689 Treaty of Nerchinsk was China's
first formal treaty with a European power and kept the border peaceful for the better part of two centuries. After
Galdan's death, his followers, as adherents to Tibetan Buddhism, attempted to control the choice of the next Dalai
Lama. Kangxi dispatched two armies to Lhasa, the capital of Tibet, and installed a Dalai Lama sympathetic to the
Qing. After the Kangxi Emperor's death in the winter of 1722, his fourth son, Prince Yong (雍親王), became the Yongzheng
Emperor. In the later years of Kangxi's reign, Yongzheng and his brothers had fought, and there were rumours that he had usurped the throne. Most versions of the rumour held that the Kangxi Emperor's intended successor was Yongzheng's brother Yinti, the emperor's fourteenth son, and that Yinti was denied the throne because Yongzheng and his confidant Longkodo tampered with Kangxi's testament on the night the emperor died. There is little evidence for this charge. In fact, his father had trusted him with delicate political issues and discussed
state policy with him. When Yongzheng came to power at the age of 45, he felt a sense of urgency about the problems
which had accumulated in his father's later years and did not need instruction in how to exercise power. In the words
of one recent historian, he was "severe, suspicious, and jealous, but extremely capable and resourceful," and in
the words of another, turned out to be an "early modern state-maker of the first order." He moved rapidly. First,
he promoted Confucian orthodoxy and reversed what he saw as his father's laxness by cracking down on unorthodox sects
and by decapitating an anti-Manchu writer his father had pardoned. In 1723 he outlawed Christianity and expelled
Christian missionaries, though some were allowed to remain in the capital. Next, he moved to control the government.
He expanded his father's system of Palace Memorials which brought frank and detailed reports on local conditions
directly to the throne without being intercepted by the bureaucracy, and created a small Grand Council of personal
advisors which eventually grew into the emperor's de facto cabinet for the rest of the dynasty. He shrewdly filled
key positions with Manchu and Han Chinese officials who depended on his patronage. When he began to realize that
the financial crisis was even greater than he had thought, Yongzheng rejected his father's lenient approach to local
landowning elites and mounted a campaign to enforce collection of the land tax. The increased revenues were to be
used for "money to nourish honesty" among local officials and for local irrigation, schools, roads, and charity.
Although these reforms were effective in the north, in the south and lower Yangzi valley, where Kangxi had wooed
the elites, there were long established networks of officials and landowners. Yongzheng dispatched experienced Manchu
commissioners to penetrate the thickets of falsified land registers and coded account books, but they were met with
tricks, passivity, and even violence. The fiscal crisis persisted. In 1725 Yongzheng bestowed the hereditary title
of Marquis on a descendant of the Ming dynasty Imperial family, Zhu Zhiliang, who received a salary from the Qing
government and whose duty was to perform rituals at the Ming tombs, and who was also inducted into the Chinese Plain White Banner of the Eight Banners. In 1750 the Qianlong Emperor posthumously bestowed on Zhu Zhiliang the title Marquis of Extended Grace, and the title passed on through twelve generations of Ming descendants until the end of
the Qing dynasty. Yongzheng also inherited diplomatic and strategic problems. A team made up entirely of Manchus
drew up the Treaty of Kyakhta (1727) to solidify the diplomatic understanding with Russia. In exchange for territory
and trading rights, the Qing would have a free hand dealing with the situation in Mongolia. Yongzheng then turned
to that situation, where the Zunghars threatened to re-emerge, and to the southwest, where local Miao chieftains
resisted Qing expansion. These campaigns drained the treasury but established the emperor's control of the military
and military finance. Qianlong's reign saw the launch of several ambitious cultural projects, including the compilation
of the Siku Quanshu, or Complete Repository of the Four Branches of Literature. With a total of over 3,400 books,
79,000 chapters, and 36,304 volumes, the Siku Quanshu is the largest collection of books in Chinese history. Nevertheless,
Qianlong used Literary Inquisition to silence opposition. The accusation of individuals began with the emperor's
own interpretation of the true meaning of the corresponding words. If the emperor decided these were derogatory or
cynical towards the dynasty, persecution would begin. Literary inquisition began with isolated cases at the time
of Shunzhi and Kangxi, but became a pattern under Qianlong's rule, during which there were 53 cases of literary persecution.
China also began suffering from mounting overpopulation during this period. Population growth was stagnant for the
first half of the 17th century due to civil wars and epidemics, but prosperity and internal stability gradually reversed
this trend. The introduction of new crops from the Americas such as the potato and peanut allowed an improved food
supply as well, so that the total population of China during the 18th century ballooned from 100 million to 300 million
people. Soon all available farmland was used up, forcing peasants to work ever-smaller and more intensely worked
plots. The Qianlong Emperor once bemoaned the country's situation by remarking "The population continues to grow,
but the land does not." The only remaining part of the empire that had arable farmland was Manchuria, where the provinces
of Jilin and Heilongjiang had been walled off as a Manchu homeland. The emperor decreed for the first time that Han
Chinese civilians were forbidden to settle. The Qing forbade Mongols from crossing the borders of their banners, even into other Mongol banners, and from crossing into neidi (the eighteen Han Chinese provinces), imposing serious punishments on those who did, in order to keep the Mongols divided against each other to the Qing's benefit. Nevertheless, Qing rule saw a massive and growing number of Han Chinese streaming into Manchuria, both legally and illegally, to settle and cultivate land. Because Manchu landlords wanted Han Chinese peasants to rent their land and grow grain, most Han Chinese migrants who crossed the Great Wall and the Willow Palisade were not evicted. During the eighteenth century, Han Chinese farmed 500,000 hectares of privately owned land in Manchuria and 203,583 hectares of land belonging to courier stations, noble estates, and Banner lands; in the garrisons and towns of Manchuria, Han Chinese made up 80% of the population. Han Chinese farmers were resettled from north China by the Qing to the area along
the Liao River in order to restore the land to cultivation. Wasteland was reclaimed by Han Chinese squatters in addition
to other Han who rented land from Manchu landlords. Despite officially prohibiting Han Chinese settlement on the
Manchu and Mongol lands, by the 18th century the Qing decided to settle Han refugees from northern China who were
suffering from famine, floods, and drought into Manchuria and Inner Mongolia so that Han Chinese farmed 500,000 hectares
in Manchuria and tens of thousands of hectares in Inner Mongolia by the 1780s. Qianlong allowed Han Chinese peasants suffering from drought to move into Manchuria despite having issued edicts banning them from doing so between 1740 and 1776.
Chinese tenant farmers rented or even claimed title to land from the "imperial estates" and Manchu Bannerlands in
the area. Besides moving into the Liao area in southern Manchuria, the path linking Jinzhou, Fengtian, Tieling, Changchun,
Hulun, and Ningguta was settled by Han Chinese during the Qianlong Emperor's rule, and Han Chinese were the majority
in urban areas of Manchuria by 1800. To increase the Imperial Treasury's revenue, the Qing sold formerly Manchu-only lands along the Sungari to Han Chinese at the beginning of the Daoguang Emperor's reign, and Han Chinese filled up most of Manchuria's towns by the 1840s according to Abbé Huc. However, the 18th century saw the European empires
gradually expand across the world, as European states developed economies built on maritime trade. The dynasty was
confronted with newly developing concepts of the international system and state to state relations. European trading
posts expanded into territorial control in nearby India and on the islands that are now Indonesia. The Qing response,
successful for a time, was in 1756 to establish the Canton System, which restricted maritime trade to that city and
gave monopoly trading rights to private Chinese merchants. The British East India Company and the Dutch East India
Company had long before been granted similar monopoly rights by their governments. Demand in Europe for Chinese goods
such as silk, tea, and ceramics could only be met if European companies funneled their limited supplies of silver
into China. In the late 1700s, the governments of Britain and France were deeply concerned about the imbalance of
trade and the drain of silver. To meet the growing Chinese demand for opium, the British East India Company greatly
expanded its production in Bengal. Since China's economy was essentially self-sufficient, the country had little
need to import goods or raw materials from the Europeans, so the usual way of payment was through silver. The Daoguang
Emperor, concerned both over the outflow of silver and the damage that opium smoking was causing to his subjects,
ordered Lin Zexu to end the opium trade. Lin confiscated the stocks of opium without compensation in 1839, leading
Britain to send a military expedition the following year. The First Opium War revealed the outdated state of the
Chinese military. The Qing navy, composed entirely of wooden sailing junks, was severely outclassed by the modern
tactics and firepower of the British Royal Navy. British soldiers, using advanced muskets and artillery, easily outmaneuvered
and outgunned Qing forces in ground battles. The Qing surrender in 1842 marked a decisive, humiliating blow to China.
The Treaty of Nanjing, the first of the unequal treaties, demanded war reparations, forced China to open up the five
ports of Canton, Amoy, Fuchow, Ningpo and Shanghai to western trade and missionaries, and to cede Hong Kong Island
to Britain. It revealed many inadequacies in the Qing government and provoked widespread rebellions against the already
hugely unpopular regime. The Taiping Rebellion in the mid-19th century was the first major instance of anti-Manchu
sentiment threatening the stability of the dynasty. Hong Xiuquan, a failed civil service candidate, led the Taiping
Rebellion, amid widespread social unrest and worsening famine. In 1851 Hong Xiuquan and others launched an uprising
in Guangxi province, established the Taiping Heavenly Kingdom with Hong himself as king, claiming he often had visions
of God and that he was the brother of Jesus Christ. Slavery, concubinage, arranged marriage, opium smoking, footbinding,
judicial torture, and the worship of idols were all banned. However, success and subsequent authority and power led
to internal feuds, defections and corruption. In addition, British and French troops, equipped with modern weapons,
had come to the assistance of the Qing imperial army. It was not until 1864 that Qing armies under Zeng Guofan succeeded
in crushing the revolt. The rebellion not only posed the most serious threat to the Qing rulers; it was also the "bloodiest civil war of all time." Between 20 and 30 million people died during its fourteen-year course from 1850 to 1864.
After the outbreak of this rebellion, there were also revolts by the Muslims and Miao people of China against the
Qing dynasty, most notably in the Dungan Revolt (1862–77) in the northwest and the Panthay Rebellion (1856–1873)
in Yunnan. The Western powers, largely unsatisfied with the Treaty of Nanjing, gave grudging support to the Qing
government during the Taiping and Nian Rebellions. China's income fell sharply during the wars as vast areas of farmland
were destroyed, millions of lives were lost, and countless armies were raised and equipped to fight the rebels. In
1854, Britain tried to re-negotiate the Treaty of Nanjing, inserting clauses allowing British commercial access to
Chinese rivers and the creation of a permanent British embassy at Beijing. Ratification of the treaty the following
year led to resumption of hostilities and in 1860, with Anglo-French forces marching on Beijing, the emperor and
his court fled the capital for the imperial hunting lodge at Rehe. Once in Beijing, the Anglo-French forces looted
the Old Summer Palace, and in an act of revenge for the arrest of several Englishmen, burnt it to the ground. Prince
Gong, a younger half-brother of the emperor, who had been left as his brother's proxy in the capital, was forced
to sign the Convention of Beijing. Meanwhile, the humiliated emperor died the following year at Rehe. Chinese generals
and officials such as Zuo Zongtang led the suppression of rebellions and stood behind the Manchus. When the Tongzhi
Emperor came to the throne at the age of five in 1861, these officials rallied around him in what was called the
Tongzhi Restoration. Their aim was to adopt western military technology in order to preserve Confucian values. Zeng
Guofan, in alliance with Prince Gong, sponsored the rise of younger officials such as Li Hongzhang, who put the dynasty
back on its feet financially and instituted the Self-Strengthening Movement. The reformers then proceeded with institutional
reforms, including China's first unified ministry of foreign affairs, the Zongli Yamen; allowing foreign diplomats
to reside in the capital; establishment of the Imperial Maritime Customs Service; the formation of modernized armies,
such as the Beiyang Army, as well as a navy; and the purchase from Europeans of armament factories. The dynasty lost
control of peripheral territories bit by bit. In return for promises of support against the British and the French,
the Russian Empire took large chunks of territory in the Northeast in 1860. The period of cooperation between the
reformers and the European powers ended with the Tientsin Massacre of 1870, in which French nuns were murdered by a mob whose anger had been set off in part by the belligerence of local French diplomats. Starting with the Cochinchina Campaign in 1858, France
expanded control of Indochina. By 1883, France was in full control of the region and had reached the Chinese border.
The Sino-French War began with a surprise attack by the French on the Chinese southern fleet at Fuzhou. After that
the Chinese declared war on the French. A French invasion of Taiwan was halted and the French were defeated on land
in Tonkin at the Battle of Bang Bo. However Japan threatened to enter the war against China due to the Gapsin Coup
and China chose to end the war with negotiations. The war ended in 1885 with the Treaty of Tientsin (1885) and the
Chinese recognition of the French protectorate in Vietnam. Historians have judged the Qing dynasty's vulnerability to foreign imperialism in the 19th century to be based mainly on its maritime weakness, even as it achieved military success against Westerners on land. The historian Edward L. Dreyer wrote that "China's nineteenth-century humiliations were strongly related to her weakness and failure at sea. At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go... In the Arrow War (1856–60), the Chinese had no way to prevent the Anglo-French expedition
of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly
modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed
frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884–85). But the defeat
of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable
terms." In 1884, pro-Japanese Koreans in Seoul led the Gapsin Coup. Tensions between China and Japan rose after China
intervened to suppress the uprising. Japanese Prime Minister Itō Hirobumi and Li Hongzhang signed the Convention
of Tientsin, an agreement to withdraw troops simultaneously, but the First Sino-Japanese War of 1894–1895 was a military
humiliation. The Treaty of Shimonoseki recognized Korean independence and ceded Taiwan and the Pescadores to Japan.
The terms might have been harsher, but when a Japanese citizen attacked and wounded Li Hongzhang, an international
outcry shamed the Japanese into revising them. The original agreement stipulated the cession of Liaodong Peninsula
to Japan, but Russia, with its own designs on the territory, along with Germany and France, in what was known as
the Triple Intervention, successfully put pressure on the Japanese to abandon the peninsula. These years saw an evolution
in the participation of Empress Dowager Cixi (Wade–Giles: Tz'u-Hsi) in state affairs. She entered the imperial palace
in the 1850s as a concubine to the Xianfeng Emperor (r. 1850–1861) and came to power in 1861 after her five-year-old
son, the Tongzhi Emperor ascended the throne. She, the Empress Dowager Ci'an (who had been Xianfeng's empress), and
Prince Gong (a son of the Daoguang Emperor), staged a coup that ousted several regents for the boy emperor. Between
1861 and 1873, she and Ci'an served as regents, choosing the reign title "Tongzhi" (ruling together). Following the
emperor's death in 1875, Cixi's nephew, the Guangxu Emperor, took the throne, in violation of the dynastic custom
that the new emperor be of the next generation, and another regency began. In the spring of 1881, Ci'an suddenly
died, aged only forty-three, leaving Cixi as sole regent. From 1889, when Guangxu began to rule in his own right,
to 1898, the Empress Dowager lived in semi-retirement, spending the majority of the year at the Summer Palace. On
November 1, 1897, two German Roman Catholic missionaries were murdered in the southern part of Shandong Province
(the Juye Incident). In response, Germany used the murders as a pretext for a naval occupation of Jiaozhou Bay. The
occupation prompted a "scramble for concessions" in 1898, which included the German lease of Jiaozhou Bay, the Russian
acquisition of Liaodong, and the British lease of the New Territories of Hong Kong. In the wake of these external
defeats, the Guangxu Emperor initiated the Hundred Days' Reform of 1898. Newer, more radical advisers such as Kang
Youwei were given positions of influence. The emperor issued a series of edicts and plans were made to reorganize
the bureaucracy, restructure the school system, and appoint new officials. Opposition from the bureaucracy was immediate
and intense. Although she had been involved in the initial reforms, the empress dowager stepped in to call them off,
arrested and executed several reformers, and took over day-to-day control of policy. Yet many of the plans stayed
in place, and the goals of reform were implanted. Widespread drought in North China, combined with the imperialist
designs of European powers and the instability of the Qing government, created conditions that led to the emergence
of the Righteous and Harmonious Fists, or "Boxers." In 1900, local groups of Boxers proclaiming support for the Qing
dynasty murdered foreign missionaries and large numbers of Chinese Christians, then converged on Beijing to besiege
the Foreign Legation Quarter. A coalition of European, Japanese, and Russian armies (the Eight-Nation Alliance) then
entered China without diplomatic notice, much less permission. Cixi declared war on all of these nations, only to
lose control of Beijing after a short, but hard-fought campaign. She fled to Xi'an. The victorious allies drew up
scores of demands on the Qing government, including compensation for their expenses in invading China and execution
of complicit officials. By the early 20th century, mass civil disorder had begun in China, and it was growing continuously.
To overcome such problems, Empress Dowager Cixi issued an imperial edict in 1901 calling for reform proposals from
the governors-general and governors and initiated the era of the dynasty's "New Policies", also known as the "Late
Qing Reform". The edict paved the way for the most far-reaching reforms in terms of their social consequences, including
the creation of a national education system and the abolition of the imperial examinations in 1905. The Guangxu Emperor
died on November 14, 1908, and on November 15, 1908, Cixi also died. Rumors held that she or Yuan Shikai ordered
trusted eunuchs to poison the Guangxu Emperor, and an autopsy conducted nearly a century later confirmed lethal levels
of arsenic in his corpse. Puyi, the oldest son of Zaifeng, Prince Chun, and nephew to the childless Guangxu Emperor,
was appointed successor at the age of two, leaving Zaifeng with the regency. This was followed by the dismissal of
General Yuan Shikai from his former positions of power. In April 1911 Zaifeng created a cabinet in which there were
two vice-premiers. Nonetheless, this cabinet was also known by contemporaries as "The Royal Cabinet" because among
the thirteen cabinet members, five were members of the imperial family or Aisin Gioro relatives. This brought a wide
range of negative opinions from senior officials like Zhang Zhidong. The Wuchang Uprising of October 10, 1911, led
to the creation of a new central government, the Republic of China, in Nanjing with Sun Yat-sen as its provisional
head. Many provinces soon began "separating" from Qing control. Seeing a desperate situation unfold, the Qing government
brought Yuan Shikai back to military power. He took control of his Beiyang Army to crush the revolution in Wuhan
at the Battle of Yangxia. After taking the position of Prime Minister and creating his own cabinet, Yuan Shikai went
as far as to ask for the removal of Zaifeng from the regency. This removal later proceeded with directions from Empress
Dowager Longyu. With Zaifeng gone, Yuan Shikai and his Beiyang commanders effectively dominated Qing politics. He
concluded that going to war would be unreasonable and costly, especially when noting that the Qing government had
a goal for constitutional monarchy. Similarly, Sun Yat-sen's government wanted a republican constitutional reform,
both aiming for the benefit of China's economy and populace. With permission from Empress Dowager Longyu, Yuan Shikai
began negotiating with Sun Yat-sen, who decided that his goal had been achieved in forming a republic, and that therefore
he could allow Yuan to step into the position of President of the Republic of China. On 12 February 1912, after rounds
of negotiations, Longyu issued an imperial edict bringing about the abdication of the child emperor Puyi. This brought
an end to over 2,000 years of Imperial China and began an extended period of instability of warlord factionalism.
The unorganized political and economic systems combined with a widespread criticism of Chinese culture led to questioning
and doubt about the future. In the 1930s, the Empire of Japan invaded Northeast China and founded Manchukuo in 1932,
with Puyi as its emperor. After the invasion by the Soviet Union, Manchukuo collapsed in 1945. The early Qing emperors
adopted the bureaucratic structures and institutions from the preceding Ming dynasty but split rule between Han Chinese
and Manchus, with some positions also given to Mongols. Like previous dynasties, the Qing recruited officials via
the imperial examination system, until the system was abolished in 1905. The Qing divided official posts into civil and military positions, each having nine grades or ranks, and each grade subdivided into a and b categories. Civil appointments
ranged from attendant to the emperor or a Grand Secretary in the Forbidden City (highest) to being a prefectural
tax collector, deputy jail warden, deputy police commissioner or tax examiner. Military appointments ranged from
being a field marshal or chamberlain of the imperial bodyguard to a third class sergeant, corporal or a first or
second class private. The formal structure of the Qing government centered on the Emperor as the absolute ruler,
who presided over six Boards (Ministries[c]), each headed by two presidents[d] and assisted by four vice presidents.[e]
In contrast to the Ming system, however, Qing ethnic policy dictated that appointments were split between Manchu
noblemen and Han officials who had passed the highest levels of the state examinations. The Grand Secretariat,[f]
which had been an important policy-making body under the Ming, lost its importance during the Qing and evolved into
an imperial chancery. The institutions which had been inherited from the Ming formed the core of the Qing "Outer
Court," which handled routine matters and was located in the southern part of the Forbidden City. In order not to
let the routine administration take over the running of the empire, the Qing emperors made sure that all important
matters were decided in the "Inner Court," which was dominated by the imperial family and Manchu nobility and which
was located in the northern part of the Forbidden City. The core institution of the inner court was the Grand Council.[g]
It emerged in the 1720s under the reign of the Yongzheng Emperor as a body charged with handling Qing military campaigns
against the Mongols, but it soon took over other military and administrative duties and served to centralize authority
under the crown. The Grand Councillors[h] served as a sort of privy council to the emperor. From the early Qing,
the central government was characterized by a system of dual appointments by which each position in the central government
had a Manchu and a Han Chinese assigned to it. The Han Chinese appointee was required to do the substantive work
and the Manchu to ensure Han loyalty to Qing rule. The distinction between Han Chinese and Manchus extended to their
court costumes. During the Qianlong Emperor's reign, for example, members of his family were distinguished by garments
with a small circular emblem on the back, whereas Han officials wore clothing with a square emblem. In addition to
the six boards, there was a Lifan Yuan unique to the Qing government. This institution was established to supervise
the administration of Tibet and the Mongol lands. As the empire expanded, it took over administrative responsibility
of all minority ethnic groups living in and around the empire, including early contacts with Russia — then seen as
a tribute nation. The office had the status of a full ministry and was headed by officials of equal rank. However,
appointees were at first restricted to candidates of Manchu and Mongol ethnicity, until the office was later opened to Han Chinese
as well. Even though the Board of Rites and Lifan Yuan performed some duties of a foreign office, they fell short
of developing into a professional foreign service. It was not until 1861 — a year after losing the Second Opium War
to the Anglo-French coalition — that the Qing government bowed to foreign pressure and created a proper foreign affairs
office known as the Zongli Yamen. The office was originally intended to be temporary and was staffed by officials
seconded from the Grand Council. However, as dealings with foreigners became increasingly complicated and frequent,
the office grew in size and importance, aided by revenue from customs duties which came under its direct jurisdiction.
There was also another government institution, the Imperial Household Department, which was unique to the Qing dynasty.
It was established before the fall of the Ming, but it became mature only after 1661, following the death of the
Shunzhi Emperor and the accession of his son, the Kangxi Emperor. The department's original purpose was to manage
the internal affairs of the imperial family and the activities of the inner palace (in which tasks it largely replaced
eunuchs), but it also played an important role in Qing relations with Tibet and Mongolia, engaged in trading activities
(jade, ginseng, salt, furs, etc.), managed textile factories in the Jiangnan region, and even published books. Relations
with the Salt Superintendents and salt merchants, such as those at Yangzhou, were particularly lucrative, especially
since they were direct, and did not go through absorptive layers of bureaucracy. The department was manned by booi,[o]
or "bondservants," from the Upper Three Banners. By the 19th century, it managed the activities of at least 56 subagencies.
Qing China reached its largest extent during the 18th century, when it ruled China proper (eighteen provinces) as
well as the areas of present-day Northeast China, Inner Mongolia, Outer Mongolia, Xinjiang and Tibet, at approximately
13 million km2 in size. There were originally 18 provinces, all of them in China proper, but this number was later increased to 22, with Manchuria and Xinjiang being divided into or made provinces. Taiwan, originally part of Fujian
province, became a province of its own in the late 19th century, but was ceded to the Empire of Japan in 1895 following
the First Sino-Japanese War. In addition, many surrounding countries, such as Korea (Joseon dynasty) and Vietnam, frequently paid tribute to China during much of this period. The Khanate of Kokand was forced to submit as a protectorate and pay tribute to the Qing dynasty between 1774 and 1798. The Qing organization of provinces was based on the fifteen
administrative units set up by the Ming dynasty, later made into eighteen provinces by splitting, for example, Huguang
into Hubei and Hunan provinces. The provincial bureaucracy continued the Yuan and Ming practice of three parallel
lines, civil, military, and censorate, or surveillance. Each province was administered by a governor (巡撫, xunfu)
and a provincial military commander (提督, tidu). Below the province were prefectures (府, fu) operating under a prefect
(知府, zhīfǔ), followed by subprefectures under a subprefect. The lowest unit was the county, overseen by a county
magistrate. The eighteen provinces are also known as "China proper". The position of viceroy or governor-general
(總督, zongdu) was the highest rank in the provincial administration. There were eight regional viceroys in China proper,
each usually taking charge of two or three provinces. The Viceroy of Zhili, who was responsible for the area surrounding the capital Beijing, was usually considered the most honorable and powerful viceroy of the eight. By the mid-18th
century, the Qing had successfully put outer regions such as Inner and Outer Mongolia, Tibet and Xinjiang under its
control. Imperial commissioners and garrisons were sent to Mongolia and Tibet to oversee their affairs. These territories
were also under supervision of a central government institution called Lifan Yuan. Qinghai was also put under direct
control of the Qing court. Xinjiang, also known as Chinese Turkestan, was subdivided into the regions north and south
of the Tian Shan mountains, also known today as Dzungaria and Tarim Basin respectively, but the post of Ili General
was established in 1762 to exercise unified military and administrative jurisdiction over both regions. Dzungaria
was fully opened to Han migration by the Qianlong Emperor from the beginning. Han migrants were at first forbidden
from permanently settling in the Tarim Basin, but the ban was lifted after the invasion by Jahangir Khoja in
the 1820s. Likewise, Manchuria was also governed by military generals until its division into provinces, though some
areas of Xinjiang and Northeast China were lost to the Russian Empire in the mid-19th century. Manchuria was originally
separated from China proper by the Inner Willow Palisade, a ditch and embankment planted with willows intended to
restrict the movement of the Han Chinese, as the area was off-limits to civilian Han Chinese until the government
started colonizing the area, especially since the 1860s. With respect to these outer regions, the Qing maintained
imperial control, with the emperor acting as Mongol khan, patron of Tibetan Buddhism and protector of Muslims. However,
Qing policy changed with the establishment of Xinjiang province in 1884. During the Great Game era, taking advantage
of the Dungan revolt in northwest China, Yaqub Beg invaded Xinjiang from Central Asia with support from the British
Empire, and made himself the ruler of the kingdom of Kashgaria. The Qing court sent forces to defeat Yaqub Beg and
Xinjiang was reconquered, and the political system of China proper was then formally applied to Xinjiang. The Kumul Khanate, which had been incorporated into the Qing empire as a vassal after helping the Qing defeat the Zunghars in 1757, maintained its status after Xinjiang became a province, lasting beyond the end of the dynasty in the Xinhai Revolution until 1930. In the early 20th century, Britain sent an expeditionary force to Tibet and forced the Tibetans to sign a treaty.
The Qing court responded by asserting Chinese sovereignty over Tibet, resulting in the 1906 Anglo-Chinese Convention
signed between Britain and China. The British agreed not to annex Tibetan territory or to interfere in the administration
of Tibet, while China engaged not to permit any other foreign state to interfere with the territory or internal administration
of Tibet. Furthermore, similar to Xinjiang which was converted into a province earlier, the Qing government also
turned Manchuria into three provinces in the early 20th century, officially known as the "Three Northeast Provinces",
and established the post of Viceroy of the Three Northeast Provinces to oversee them, bringing the total number of regional viceroys to nine. The early Qing military was rooted in the Eight Banners, first developed by Nurhaci
to organize Jurchen society beyond petty clan affiliations. There were eight banners in all, differentiated by color.
The yellow, bordered yellow, and white banners were known as the "Upper Three Banners" and were under the direct
command of the emperor. Only Manchus belonging to the Upper Three Banners, and selected Han Chinese who had passed
the highest level of martial exams could serve as the emperor's personal bodyguards. The remaining Banners were known
as the "Lower Five Banners". They were commanded by hereditary Manchu princes descended from Nurhaci's immediate family, known informally as the "Iron cap princes". Together they formed the ruling council of the Manchu nation as well as the high command of the army. Nurhaci's son Hong Taiji expanded the system to include mirrored Mongol and
Han Banners. After capturing Beijing in 1644, the relatively small Banner armies were further augmented by the Green
Standard Army, made up of those Ming troops who had surrendered to the Qing, which eventually outnumbered Banner
troops three to one. They maintained their Ming era organization and were led by a mix of Banner and Green Standard
officers. Banner Armies were organized along ethnic lines, namely Manchu and Mongol, but included
non-Manchu bondservants registered under the household of their Manchu masters. The years leading up to the conquest
increased the number of Han Chinese under Manchu rule, leading Hong Taiji to create the Eight Han Banners, and
around the time of the Qing takeover of Beijing, their numbers rapidly swelled. Han Bannermen held high status and
power in the early Qing period, especially immediately after the conquest during Shunzhi and Kangxi's reign where
they dominated Governor-Generalships and Governorships across China at the expense of both Manchu Bannermen and Han
civilians. Han also numerically dominated the Banners up until the mid-18th century. European visitors in Beijing called them "Tartarized Chinese" or "Tartarified Chinese". During his reign, the Qianlong Emperor, concerned about maintaining Manchu identity, re-emphasized Manchu ethnicity, ancestry, language, and culture in the
Eight Banners and started a mass discharge of Han Bannermen from the Eight Banners, either asking them to voluntarily
resign from the Banner rolls or striking their names off. This led to a change from Han majority to a Manchu majority
within the Banner system, and previous Han Bannermen garrisons in southern China such as at Fuzhou, Zhenjiang, Guangzhou,
were replaced by Manchu Bannermen in the purge, which started in 1754. Qianlong's turnover most heavily affected Han Banner garrisons stationed in the provinces and less so Han Bannermen in Beijing, leaving a larger proportion of remaining Han Bannermen in Beijing than in the provinces. From that point on, the status of Han Bannermen declined while that of the Manchu Banners rose. Han Bannermen made up 75% of the Banners in 1648, during the Shunzhi reign, and 72% in 1723, during the Yongzheng reign, but their share fell to 43% by 1796, the first year of the Jiaqing reign, after Qianlong's purge. The mass discharge was known as the Disbandment of the Han Banners. Qianlong directed most of his ire
at those Han Bannermen descended from defectors who joined the Qing after the Qing passed through the Great Wall
at Shanhai Pass in 1644, deeming their ancestors as traitors to the Ming and therefore untrustworthy, while retaining
Han Bannermen who were descended from defectors who joined the Qing before 1644 in Liaodong and marched through Shanhai
pass, also known as those who "followed the Dragon through the pass" (從龍入關; cong long ru guan). Early during the
Taiping Rebellion, Qing forces suffered a series of disastrous defeats culminating in the loss of the regional capital
city of Nanjing in 1853. Shortly thereafter, a Taiping expeditionary force penetrated as far north as the suburbs
of Tianjin, the imperial heartlands. In desperation the Qing court ordered a Chinese official, Zeng Guofan, to organize
regional and village militias into an emergency army called tuanlian. Zeng Guofan's strategy was to rely on local
gentry to raise a new type of military organization from those provinces that the Taiping rebels directly threatened.
This new force became known as the Xiang Army, named after the Hunan region where it was raised. The Xiang Army was
a hybrid of local militia and a standing army. It was given professional training, but was paid for out of regional
coffers and funds its commanders — mostly members of the Chinese gentry — could muster. The Xiang Army and its successor,
the Huai Army, created by Zeng Guofan's colleague and mentee Li Hongzhang, were collectively called the "Yong Ying"
(Brave Camp). Zeng Guofan had no prior military experience. Being a classically educated official, he took his blueprint
for the Xiang Army from the Ming general Qi Jiguang, who, because of the weakness of regular Ming troops, had decided
to form his own "private" army to repel raiding Japanese pirates in the mid-16th century. Qi Jiguang's doctrine was
based on Neo-Confucian ideas of binding troops' loyalty to their immediate superiors and also to the regions in which
they were raised. Zeng Guofan's original intention for the Xiang Army was simply to eradicate the Taiping rebels.
However, the success of the Yongying system led to its becoming a permanent regional force within the Qing military,
which in the long run created problems for the beleaguered central government. First, the Yongying system signaled
the end of Manchu dominance in Qing military establishment. Although the Banners and Green Standard armies lingered
on as a drain on resources, henceforth the Yongying corps became the Qing government's de facto first-line troops.
Second, the Yongying corps were financed through provincial coffers and were led by regional commanders, weakening
the central government's grip on the whole country. Finally, the nature of the Yongying command structure fostered nepotism and cronyism amongst its commanders, sowing the seeds of the regional warlordism of the first half of the 20th century.
By the late 19th century, the most conservative elements within the Qing court could no longer ignore China's military
weakness. In 1860, during the Second Opium War, the capital Beijing was captured and the Summer Palace sacked by
a relatively small Anglo-French coalition force numbering 25,000. The advent of modern weaponry resulting from the
European Industrial Revolution had rendered China's traditionally trained and equipped army and navy obsolete. The
government's attempts to modernize during the Self-Strengthening Movement were initially successful, but yielded few
lasting results because of the central government's lack of funds, lack of political will, and unwillingness to depart
from tradition. Losing the First Sino-Japanese War of 1894–1895 was a watershed. Japan, a country long regarded by
the Chinese as little more than an upstart nation of pirates, annihilated the Qing government's modernized Beiyang
Fleet, then deemed to be the strongest naval force in Asia. The Japanese victory occurred a mere three decades after
the Meiji Restoration set a feudal Japan on course to emulate the Western nations in their economic and technological
achievements. Finally, in December 1894, the Qing government took concrete steps to reform military institutions
and to re-train selected units in westernized drills, tactics and weaponry. These units were collectively called
the New Army. The most successful of these was the Beiyang Army under the overall supervision and control of a former
Huai Army commander, General Yuan Shikai, who used his position to build networks of loyal officers and eventually
become President of the Republic of China. The most significant fact of early and mid-Qing social history was population
growth. The population doubled during the 18th century. People in this period were also remarkably on the move. There
is evidence suggesting that the empire's rapidly expanding population was geographically mobile on a scale which, in terms of its volume and its protracted and routinized nature, was unprecedented in Chinese history. Indeed, the Qing government did far more to encourage mobility than to discourage it. Migration took several different forms, though it might be divided into two varieties: permanent migration for resettlement, and relocation conceived by the party
(in theory at least) as a temporary sojourn. Parties to the latter would include the empire's increasingly large
and mobile manual workforce, as well as its densely overlapping internal diaspora of local-origin-based merchant
groups. It would also include the patterned movement of Qing subjects overseas, largely to Southeast Asia, in
search of trade and other economic opportunities. According to statute, Qing society was divided into relatively
closed estates, of which in most general terms there were five. Apart from the estates of the officials, the comparatively
minuscule aristocracy, and the degree-holding literati, there also existed a major division among ordinary Chinese
between commoners and people with inferior status. They were divided into two categories: the good "commoner" people and the "mean" people. The majority of the population belonged to the first category and were described
as liangmin, a legal term meaning good people, as opposed to jianmin meaning the mean (or ignoble) people. Qing law
explicitly stated that the traditional four occupational groups of scholars, farmers, artisans and merchants were
"good", or having a status of commoners. On the other hand, slaves or bondservants, entertainers (including prostitutes
and actors), and those low-level employees of government officials were the "mean people". Mean people were considered
legally inferior to commoners and suffered unequal treatment; they were forbidden to take the imperial examination. By the
end of the 17th century, the Chinese economy had recovered from the devastation caused by the wars in which the Ming
dynasty was overthrown, and from the resulting breakdown of order. In the following century, markets continued to expand
as in the late Ming period, but with more trade between regions, a greater dependence on overseas markets and a greatly
increased population. After the re-opening of the southeast coast, which had been closed in the late 17th century,
foreign trade was quickly re-established, and was expanding at 4% per annum throughout the latter part of the 18th
century. China continued to export tea, silk and manufactures, creating a large, favorable trade balance with the
West. The resulting inflow of silver expanded the money supply, facilitating the growth of competitive and stable
markets. The government broadened land ownership by returning land that had been sold to large landowners in the
late Ming period by families unable to pay the land tax. To give people more incentives to participate in the market,
it reduced the tax burden in comparison with the late Ming, and replaced the corvée system with a head tax used
to hire laborers. The administration of the Grand Canal was made more efficient, and transport opened to private
merchants. A system of monitoring grain prices eliminated severe shortages, and enabled the price of rice to rise
slowly and smoothly through the 18th century. Wary of the power of wealthy merchants, Qing rulers limited their trading
licenses and usually refused them permission to open new mines, except in poor areas. These restrictions on domestic
resource exploration, as well as on foreign trade, are held by some scholars as a cause of the Great Divergence,
by which the Western world overtook China economically. By the end of the 18th century the population had risen to
300 million from approximately 150 million during the late Ming dynasty. The dramatic rise in population was due
to several reasons, including the long period of peace and stability in the 18th century and the import of new crops
China received from the Americas, including peanuts, sweet potatoes and maize. New species of rice from Southeast
Asia led to a huge increase in production. Merchant guilds proliferated in all of the growing Chinese cities and
often acquired great social and even political influence. Rich merchants with official connections built up huge
fortunes and patronized literature, theater and the arts. Textile and handicraft production boomed. The Qing emperors
were generally adept at poetry and often skilled in painting, and offered their patronage to Confucian culture. The
Kangxi and Qianlong Emperors, for instance, embraced Chinese traditions both to control them and to proclaim their
own legitimacy. The Kangxi Emperor sponsored the Peiwen Yunfu, a rhyme dictionary published in 1711, and the Kangxi
Dictionary published in 1716, which remains to this day an authoritative reference. The Qianlong Emperor sponsored
the largest collection of writings in Chinese history, the Siku Quanshu, completed in 1782. Court painters made new
versions of the Song masterpiece, Zhang Zeduan's Along the River During the Qingming Festival whose depiction of
a prosperous and happy realm demonstrated the beneficence of the emperor. The emperors undertook tours of the south
and commissioned monumental scrolls to depict the grandeur of the occasion. Imperial patronage also encouraged the
industrial production of ceramics and Chinese export porcelain. Yet the most impressive aesthetic works were done
among the scholars and urban elite. Calligraphy and painting remained a central interest to both court painters and
scholar-gentry who considered the Four Arts part of their cultural identity and social standing. The painting of
the early years of the dynasty included such painters as the orthodox Four Wangs and the individualists Bada Shanren
(1626–1705) and Shitao (1641–1707). The nineteenth century saw such innovations as the Shanghai School and the Lingnan
School which used the technical skills of tradition to set the stage for modern painting. Literature grew to new
heights in the Qing period. Poetry continued as a mark of the cultivated gentleman, but women wrote in larger and
larger numbers and poets came from all walks of life. The poetry of the Qing dynasty is a lively field of research,
being studied (along with the poetry of the Ming dynasty) for its association with Chinese opera, developmental trends
of Classical Chinese poetry, the transition to a greater role for vernacular language, and for poetry by women in
Chinese culture. The Qing dynasty was a period of much literary collection and criticism, and many of the modern
popular versions of Classical Chinese poems were transmitted through Qing dynasty anthologies, such as the Quantangshi
and the Three Hundred Tang Poems. Pu Songling brought the short story form to a new level in his Strange Stories
from a Chinese Studio, published in the mid-18th century, and Shen Fu demonstrated the charm of the informal memoir
in Six Chapters of a Floating Life, written in the early 19th century but published only in 1877. The art of the
novel reached a pinnacle in Cao Xueqin's Dream of the Red Chamber, but its combination of social commentary and psychological
insight were echoed in highly skilled novels such as Wu Jingzi's The Scholars (1750) and Li Ruzhen's Flowers in the
Mirror (1827). Cuisine aroused a cultural pride in the accumulated richness of a long and varied past. The gentleman
gourmet, such as Yuan Mei, applied aesthetic standards to the art of cooking, eating, and appreciation of tea at
a time when New World crops and products entered everyday life. His Suiyuan Shidan detailed the culinary aesthetics and theory of the day, along with a wide range of recipes from the Qianlong era of the Qing dynasty.
The Manchu Han Imperial Feast originated at the court. Although this banquet was probably never common, it reflected
an appreciation by Han Chinese for Manchu culinary customs. Nevertheless, culinary traditionalists such as Yuan Mei
lambasted the opulent culinary rituals of the Manchu Han Imperial Feast, saying that they were caused in part by "...the vulgar habits of bad chefs" and that "displays so trite are useful only for welcoming new relations through one's
gates or when the boss comes to visit." (皆惡廚陋習。只可用之於新親上門,上司入境)
The indigenous peoples of the Americas are the descendants of the pre-Columbian inhabitants of the Americas. Pueblos indígenas
(indigenous peoples) is a common term in Spanish-speaking countries. Aborigen (aboriginal/native) is used in Argentina,
whereas "Amerindian" is used in Quebec, The Guianas, and the English-speaking Caribbean. Indigenous peoples are commonly
known in Canada as Aboriginal peoples, which include First Nations, Inuit, and Métis peoples. Indigenous peoples
of the United States are commonly known as Native Americans or American Indians, and Alaska Natives. According to
the prevailing theories of the settlement of the Americas, migrations of humans from Asia (in particular North Asia)
to the Americas took place via Beringia, a land bridge which connected the two continents across what is now the
Bering Strait. The majority of experts agree that the earliest pre-modern human migration via Beringia took place
at least 13,500 years ago, with disputed evidence that people had migrated into the Americas much earlier, up to
40,000 years ago. These early Paleo-Indians spread throughout the Americas, diversifying into many hundreds of culturally
distinct nations and tribes. According to the oral histories of many of the indigenous peoples of the Americas, they
have been living there since their genesis, described by a wide range of creation myths. Application of the term
"Indian" originated with Christopher Columbus, who, in his search for Asia, thought that he had arrived in the East
Indies. The Americas came to be known as the "West Indies", a name still used to refer to the islands of the Caribbean
Sea. This led to the names "Indies" and "Indian", which implied some kind of racial or cultural unity among the aboriginal
peoples of the Americas. This unifying concept, codified in law, religion, and politics, was not originally accepted
by indigenous peoples but has been embraced by many over the last two centuries. Even though the
term "Indian" does not include the Aleuts, Inuit, or Yupik peoples, these groups are considered indigenous peoples
of the Americas. Although some indigenous peoples of the Americas were traditionally hunter-gatherers—and many, especially
in Amazonia, still are—many groups practiced aquaculture and agriculture. The impact of their agricultural endowment
to the world is a testament to their time and work in reshaping and cultivating the flora indigenous to the Americas.
Although some societies depended heavily on agriculture, others practiced a mix of farming, hunting, and gathering.
In some regions the indigenous peoples created monumental architecture, large-scale organized cities, chiefdoms,
states, and empires. Many parts of the Americas are still populated by indigenous peoples; some countries have sizable
populations, especially Belize, Bolivia, Chile, Ecuador, Greenland, Guatemala, Mexico, and Peru. At least a thousand
different indigenous languages are spoken in the Americas. Some, such as the Quechuan languages, Aymara, Guaraní,
Mayan languages, and Nahuatl, count their speakers in millions. Many also maintain aspects of indigenous cultural
practices to varying degrees, including religion, social organization, and subsistence practices. Like most cultures, those of many indigenous peoples have evolved over time to retain traditional aspects while also catering to modern needs. Some indigenous peoples still live in relative isolation from Western culture, and a few are
still counted as uncontacted peoples. The specifics of Paleo-Indian migration to and throughout the Americas, including
the exact dates and routes traveled, provide the subject of ongoing research and discussion. According to archaeological
and genetic evidence, North and South America were the last continents in the world with human habitation. During
the Wisconsin glaciation, 50,000–17,000 years ago, falling sea levels allowed people to move across the land bridge of Beringia that joined Siberia to northwest North America (Alaska). Alaska was a glacial refugium because it had low snowfall, allowing a small population to exist. The Laurentide Ice Sheet covered most of North America, blocking
nomadic inhabitants and confining them to Alaska (East Beringia) for thousands of years. Indigenous genetic studies
suggest that the first inhabitants of the Americas share a single ancestral population, one that developed in isolation,
conjectured to have been in Beringia. The isolation of these peoples in Beringia might have lasted 10,000–20,000 years. Around
16,500 years ago, the glaciers began melting, allowing people to move south and east into Canada and beyond. These
people are believed to have followed herds of now-extinct Pleistocene megafauna along ice-free corridors that stretched
between the Laurentide and Cordilleran Ice Sheets. Genetic data from one ancient individual indicate that this individual was from a population directly
ancestral to present South American and Central American Native American populations, and closely related to present
North American Native American populations. The implication is that there was an early divergence between North American
and Central American plus South American populations. Hypotheses which posit that invasions subsequent to the Clovis
culture overwhelmed or assimilated previous migrants into the Americas were ruled out. While the term "pre-Columbian" technically
refers to the era before Christopher Columbus' voyages of 1492 to 1504, in practice it usually covers the history
of American indigenous cultures until Europeans either conquered or significantly influenced them, even if this happened
decades or even centuries after Columbus' initial landing. "Pre-Columbian" is used especially often in the context
of discussing the great indigenous civilizations of the Americas, such as those of Mesoamerica (the Olmec, the Toltec,
the Teotihuacano, the Zapotec, the Mixtec, the Aztec, and the Maya civilizations) and those of the Andes (Inca Empire,
Moche culture, Muisca Confederation, Cañaris). Many pre-Columbian civilizations established characteristics and hallmarks
which included permanent or urban settlements, agriculture, civic and monumental architecture, and complex societal
hierarchies. Some of these civilizations had long faded by the time of the first significant European and African
arrivals (ca. late 15th–early 16th centuries), and are known only through oral history and through archaeological
investigations. Others were contemporary with this period, and are also known from historical accounts of the time.
A few, such as the Maya, Olmec, Mixtec, and Nahua peoples, had their own written records. However, the European
colonists of the time worked to eliminate non-Christian beliefs, and Christian pyres destroyed many pre-Columbian
written records. Only a few documents remained hidden and survived, leaving contemporary historians with glimpses
of ancient culture and knowledge. According to both indigenous American and European accounts and documents, American
civilizations at the time of European encounter had achieved many accomplishments. For instance, the Aztecs built
one of the largest cities in the world, Tenochtitlan, the ancient site of Mexico City, with an estimated population
of 200,000. American civilizations also displayed impressive accomplishments in astronomy and mathematics. The domestication
of maize or corn required thousands of years of selective breeding. The European colonization of the Americas forever
changed the lives and cultures of the peoples of the continents. Although the exact pre-contact population of the
Americas is unknown, scholars estimate that Native American populations diminished by between 80 and 90% within the
first centuries of contact with Europeans. The leading cause was disease. The continent was ravaged by epidemics
of diseases such as smallpox, measles, and cholera, which were brought from Europe by the early explorers and spread
quickly into new areas even before later explorers and colonists reached them. Native Americans suffered high mortality
rates due to their lack of prior exposure to these diseases. The loss of lives was exacerbated by conflict between
colonists and indigenous people. Colonists also frequently perpetrated massacres on the indigenous groups and enslaved
them. According to the U.S. Bureau of the Census (1894), the North American Indian Wars of the 19th century cost
the lives of about 19,000 whites and 30,000 Native Americans. The first indigenous group encountered by Columbus
were the 250,000 Taínos of Hispaniola who represented the dominant culture in the Greater Antilles and the Bahamas.
Within thirty years about 70% of the Taínos had died. They had no immunity to European diseases, so outbreaks of
measles and smallpox ravaged their population. Increasing punishment of the Taínos for revolting against forced labour,
despite measures put in place by the encomienda, which included religious education and protection from warring tribes,
eventually led to the last great Taíno rebellion. Following years of mistreatment, the Taínos began to adopt suicidal
behaviors, with women aborting or killing their infants and men jumping from cliffs or ingesting untreated cassava,
which is violently poisonous. Eventually, a Taíno cacique named Enriquillo managed to hold out in the Baoruco Mountain Range
for thirteen years, causing serious damage to Spanish- and Carib-held plantations and to their Indian auxiliaries. Hearing
of the seriousness of the revolt, Emperor Charles V (also King of Spain) sent captain Francisco Barrionuevo to negotiate
a peace treaty with the ever-increasing number of rebels. Two months later, after consultation with the Audiencia
of Santo Domingo, Enriquillo was offered any part of the island to live in peace. Various theories for the decline
of the Native American populations emphasize epidemic diseases, conflicts with Europeans, and conflicts among warring
tribes. Scholars now believe that, among the various contributing factors, epidemic disease was the overwhelming
cause of the population decline of the American natives. Some believe that after first contacts with Europeans and
Africans, Old World diseases caused the death of 90 to 95% of the native population of the New World in the following
150 years. Smallpox killed up to one third of the native population of Hispaniola in 1518. By killing the Incan ruler
Huayna Capac, smallpox caused the Inca Civil War. Smallpox was only the first epidemic. Typhus (probably) in 1546,
influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, measles in 1618—all ravaged
the remains of Inca culture. Contact with European diseases such as smallpox and measles killed between 50 and 67
per cent of the Aboriginal population of North America in the first hundred years after the arrival of Europeans.
Some 90 per cent of the native population near Massachusetts Bay Colony died of smallpox in an epidemic in 1617–1619.
In 1633, in Plymouth, the Native Americans there were exposed to smallpox because of contact with Europeans. As it
had done elsewhere, the virus wiped out entire population groups of Native Americans. It reached Lake Ontario in
1636, and the lands of the Iroquois by 1679. During the 1770s, smallpox killed at least 30% of the West Coast Native
Americans. The 1775–82 North American smallpox epidemic and 1837 Great Plains smallpox epidemic brought devastation
and drastic population depletion among the Plains Indians. In 1832, the federal government of the United States established
a smallpox vaccination program for Native Americans (The Indian Vaccination Act of 1832). The Spanish Empire and
other Europeans brought horses to the Americas. Some of these animals escaped and began to breed and increase their
numbers in the wild. The re-introduction of the horse, extinct in the Americas for over 7500 years, had a profound
impact on Native American culture in the Great Plains of North America and of Patagonia in South America. By domesticating
horses, some tribes had great success: horses enabled them to expand their territories, exchange more goods with
neighboring tribes, and more easily capture game, especially bison. Over the course of thousands of years, American
indigenous peoples domesticated, bred and cultivated a large array of plant species. These species now constitute
50–60% of all crops in cultivation worldwide. In certain cases, the indigenous peoples developed entirely new species
and strains through artificial selection, as was the case in the domestication and breeding of maize from wild teosinte
grasses in the valleys of southern Mexico. Numerous such agricultural products retain their native names in the English
and Spanish lexicons. The South American highlands were a center of early agriculture. Genetic testing of the wide
variety of cultivars and wild species suggests that the potato has a single origin in the area of southern Peru,
from a species in the Solanum brevicaule complex. Over 99% of all modern cultivated potatoes worldwide are descendants
of a subspecies indigenous to south-central Chile, Solanum tuberosum ssp. tuberosum, where it was cultivated as long
as 10,000 years ago. According to George Raudzens, "It is clear that in pre-Columbian times some groups struggled
to survive and often suffered food shortages and famines, while others enjoyed a varied and substantial diet." The
persistent drought around 850 AD coincided with the collapse of Classic Maya civilization, and the famine of One
Rabbit (AD 1454) was a major catastrophe in Mexico. Natives of North America began practicing farming approximately
4,000 years ago, late in the Archaic period of North American cultures. Technology had advanced to the point that
pottery was becoming common and the small-scale felling of trees had become feasible. Concurrently, the Archaic Indians
began using fire in a controlled manner. Intentional burning of vegetation was used to mimic the effects of natural
fires that tended to clear forest understories. It made travel easier and facilitated the growth of herbs and berry-producing
plants, which were important for both food and medicines. Many crops first domesticated by indigenous Americans are
now produced and used globally. Chief among these is maize or "corn", arguably the most important crop in the world.
Other significant crops include cassava, chia, squash (pumpkins, zucchini, marrow, acorn squash, butternut squash),
the pinto bean, Phaseolus beans including most common beans, tepary beans and lima beans, tomatoes, potatoes, avocados,
peanuts, cocoa beans (used to make chocolate), vanilla, strawberries, pineapples, peppers (species and varieties
of Capsicum, including bell peppers, jalapeños, paprika and chili peppers), sunflower seeds, rubber, brazilwood, chicle,
tobacco, coca, manioc and some species of cotton. Cultural practices in the Americas seem to have been shared mostly
within geographical zones where unrelated peoples adopted similar technologies and social organizations. An example
of such a cultural area is Mesoamerica, where millennia of coexistence and shared development among the peoples of
the region produced a fairly homogeneous culture with complex agricultural and social patterns. Another well-known
example is the North American plains where until the 19th century several peoples shared the traits of nomadic hunter-gatherers
based primarily on buffalo hunting. The development of writing is counted among the many achievements and innovations
of pre-Columbian American cultures. Independent from the development of writing in other areas of the world, the
Mesoamerican region produced several indigenous writing systems beginning in the 1st millennium BCE. What may be
the earliest-known example in the Americas of an extensive text thought to be writing is the Cascajal Block. This
Olmec hieroglyph tablet has been indirectly dated, from ceramic shards found in the same context, to approximately
900 BCE, around the time that Olmec occupation of San Lorenzo Tenochtitlán began to wane. The Maya writing system
(often called hieroglyphs from a superficial resemblance to the Ancient Egyptian writing) was a combination of phonetic
symbols and logograms. It is most often classified as a logographic or (more properly) a logosyllabic writing system,
in which syllabic signs play a significant role. It is the only pre-Columbian writing system known to represent completely
the spoken language of its community. In total, the script has more than one thousand different glyphs although a
few are variations of the same sign or meaning and many appear only rarely or are confined to particular localities.
At any one time, no more than about five hundred glyphs were in use, some two hundred of which (including variations)
had a phonetic or syllabic interpretation. Spanish mendicants in the sixteenth century taught indigenous scribes
in their communities to write their languages in Latin letters and there is a large number of local-level documents
in Nahuatl, Zapotec, Mixtec, and Yucatec Maya from the colonial era, many of which were part of lawsuits and other
legal matters. Although Spaniards initially taught indigenous scribes alphabetic writing, the tradition became self-perpetuating
at the local level. The Spanish crown gathered such documentation and contemporary Spanish translations were made
for legal cases. Scholars have translated and analyzed these documents in what is called the New Philology to write
histories of indigenous peoples from indigenous viewpoints. Native American music in North America is almost entirely
monophonic, but there are notable exceptions. Traditional Native American music often centers around drumming. Rattles,
clappersticks, and rasps are also popular percussive instruments. Flutes are made of rivercane, cedar, and other
woods. The tuning of these flutes is not precise and depends on the length of the wood used and the hand span of
the intended player, but the finger holes are most often around a whole step apart and, at least in Northern California,
a flute was not used if it turned out to have an interval close to a half step. The Apache fiddle is a single stringed
instrument. The music of the indigenous peoples of Central Mexico and Central America was often pentatonic. Before
the arrival of the Spaniards and other Europeans, music was inseparable from religious festivities and included a
large variety of percussion and wind instruments such as drums, flutes, sea snail shells (used as a trumpet) and
"rain" tubes. No remnants of pre-Columbian stringed instruments were found until archaeologists discovered a jar
in Guatemala, attributed to the Maya of the Late Classic Era (600–900 CE), which depicts a stringed musical instrument
which has since been reproduced. This instrument is one of the very few stringed instruments known in the Americas
prior to the introduction of European musical instruments; when played it produces a sound virtually identical to
a jaguar's growl. Visual arts by indigenous peoples of the Americas comprise a major category in the world art collection.
Contributions include pottery, paintings, jewellery, weavings, sculptures, basketry, carvings, and beadwork. Because
too many artists were posing as Native Americans and Alaska Natives in order to profit from the cachet of Indigenous
art in the United States, the U.S. passed the Indian Arts and Crafts Act of 1990, requiring artists to prove that
they are enrolled in a state or federally recognized tribe. To support the ongoing practice of American Indian, Alaska
Native and Native Hawaiian arts and cultures in the United States, the Ford Foundation, arts advocates and American
Indian tribes created an endowment seed fund and established a national Native Arts and Cultures Foundation in 2007.
In 2005, Argentina's indigenous population (known as pueblos originarios) numbered about 600,329 (1.6% of total population);
this figure includes 457,363 people who self-identified as belonging to an indigenous ethnic group and 142,966 who
identified themselves as first-generation descendants of an indigenous people. The ten most populous indigenous peoples
are the Mapuche (113,680 people), the Kolla (70,505), the Toba (69,452), the Guaraní (68,454), the Wichi (40,036),
the Diaguita-Calchaquí (31,753), the Mocoví (15,837), the Huarpe (14,633), the Comechingón (10,863) and the Tehuelche
(10,590). Minor but important peoples are the Quechua (6,739), the Charrúa (4,511), the Pilagá (4,465), the Chané
(4,376), and the Chorote (2,613). The Selknam (Ona) people are now virtually extinct in their pure form. The languages
of the Diaguita, Tehuelche, and Selknam nations have become extinct or virtually extinct: the Cacán language (spoken
by Diaguitas) in the 18th century and the Selknam language in the 20th century; one Tehuelche language (Southern
Tehuelche) is still spoken by a handful of elderly people. In Bolivia, a 62% majority of residents over the age of
15 self-identify as belonging to an indigenous people, while another 3.7% grew up with an indigenous mother tongue
yet do not self-identify as indigenous. Including both of these categories, and children under 15, some 66.4% of
Bolivia's population was registered as indigenous in the 2001 Census. The largest indigenous ethnic groups are: Quechua,
about 2.5 million people; Aymara, 2.0 million; Chiquitano, 181,000; Guaraní, 126,000; and Mojeño, 69,000. Some 124,000
belong to smaller indigenous groups. The Constitution of Bolivia, enacted in 2009, recognizes 36 cultures, each with
its own language, as part of a plurinational state. Some groups, including CONAMAQ (the National Council of Ayllus
and Markas of Qullasuyu) draw ethnic boundaries within the Quechua- and Aymara-speaking population, resulting in
a total of fifty indigenous peoples native to Bolivia. Large numbers of Bolivian highland peasants retained indigenous
language, culture, customs, and communal organization throughout the Spanish conquest and the post-independence period.
They mobilized to resist various attempts at the dissolution of communal landholdings and used legal recognition
of "empowered caciques" to further communal organization. Indigenous revolts took place frequently until 1953. While
the National Revolutionary Movement government begun in 1952 discouraged self-identification as indigenous (reclassifying
rural people as campesinos, or peasants), renewed ethnic and class militancy re-emerged in the Katarista movement
beginning in the 1970s. Lowland indigenous peoples, mostly in the east, entered national politics through the 1990
March for Territory and Dignity organized by the CIDOB confederation. That march successfully pressured the national
government to sign the ILO Convention 169 and to begin the still-ongoing process of recognizing and titling indigenous
territories. The 1994 Law of Popular Participation granted "grassroots territorial organizations" that are recognized
by the state certain rights to govern local areas. Morales began work on his "indigenous autonomy" policy, which
he launched in the eastern lowlands department on August 3, 2009, making Bolivia the first country in the history
of South America to affirm the right of indigenous people to govern themselves. Speaking in Santa Cruz Department,
the President called it "a historic day for the peasant and indigenous movement", saying that, though he might make
errors, he would "never betray the fight started by our ancestors and the fight of the Bolivian people". A vote on
further autonomy will take place in referendums which are expected to be held in December 2009. The issue has divided
the country. Indigenous peoples of Brazil make up 0.4% of Brazil's population, or about 700,000 people, even though
millions of Brazilians have some indigenous ancestry. Indigenous peoples are found in the entire territory of Brazil,
although the majority of them live in Indian reservations in the North and Center-Western part of the country. On
January 18, 2007, FUNAI reported that it had confirmed the presence of 67 different uncontacted tribes in Brazil,
up from 40 in 2005. With this addition, Brazil has overtaken the island of New Guinea as the region with the
largest number of uncontacted tribes. Aboriginal peoples in Canada comprise the First Nations, Inuit and Métis; the
descriptors "Indian" and "Eskimo" are falling into disuse. Other than in neighboring Alaska, "Eskimo" is considered
derogatory in many places because it was given by non-Inuit people and was said to mean "eater of raw meat."
Hundreds of Aboriginal nations evolved trade, spiritual and social hierarchies. The Métis culture of mixed ancestry
originated in the mid-17th century when First Nations and Inuit people married European settlers. The Inuit had more
limited interaction with European settlers during that early period. Various laws, treaties, and legislation have
been enacted between European immigrants and First Nations across Canada. Aboriginal Right to Self-Government provides
opportunity to manage historical, cultural, political, health care and economic control aspects within first people's
communities. Although not without conflict, European/Canadian early interactions with First Nations and Inuit populations
were relatively peaceful compared to the experience of native peoples in the United States. Combined with a late
economic development in many regions, this relatively peaceful history has allowed Canadian Indigenous peoples to
have a fairly strong influence on the early national culture while preserving their own identity. From the late 18th
century, European Canadians encouraged Aboriginals to assimilate into their own culture, referred to as "Canadian
culture". These attempts reached a climax in the late 19th and early 20th centuries with forced integration. National
Aboriginal Day recognises the cultures and contributions of Aboriginal peoples of Canada. There are currently over
600 recognized First Nations governments or bands, encompassing 1,172,790 people (2006 Census) spread across Canada with distinctive
Aboriginal cultures, languages, art, and music. According to the 2002 Census, 4.6% of the Chilean population, including
the Rapanui (a Polynesian people) of Easter Island, was indigenous, although most show varying degrees of mixed heritage.
Many are descendants of the Mapuche, and live in Santiago, Araucanía and the lake district. The Mapuche successfully
resisted conquest during the first 300–350 years of Spanish rule in the Arauco War. Relations with the new Chilean
Republic were good until the Chilean state decided to occupy their lands. During the Occupation of Araucanía the
Mapuche surrendered to the country's army in the 1880s. Their land was opened to settlement by Chileans and Europeans.
Conflict over Mapuche land rights continues to the present. Ecuador was the site of many indigenous cultures, and
civilizations of different proportions. An early sedentary culture, known as the Valdivia culture, developed in the
coastal region, while the Caras and the Quitus unified to form an elaborate civilization that ended at the birth
of the Capital Quito. The Cañaris near Cuenca were the most advanced, and most feared by the Inca, due to their fierce
resistance to the Incan expansion. The remains of their architecture were later destroyed by the Incas and the Spaniards. Approximately
96.4% of Ecuador's Indigenous population are Highland Quichuas living in the valleys of the Sierra region. Primarily
consisting of the descendants of Incans, they are Kichwa speakers and include the Caranqui, the Otavalos, the Cayambi,
the Quitu-Caras, the Panzaleo, the Chimbuelo, the Salasacan, the Tugua, the Puruhá, the Cañari, and the Saraguro.
Linguistic evidence suggests that the Salasacan and the Saraguro may have been the descendants of Bolivian ethnic
groups transplanted to Ecuador as mitimaes. Much of El Salvador was home to the Pipil, the Lenca, Xinca, and Kakawira.
The Pipil lived in western El Salvador, spoke Nawat, and had many settlements there, most noticeably Cuzcatlan. The
Pipil had no precious mineral resources, but they did have rich and fertile land that was good for farming. The Spaniards
were disappointed not to find gold or jewels in El Salvador as they had in other lands like Guatemala or Mexico,
but upon learning of the fertile land in El Salvador, they attempted to conquer it. Noted Meso-American indigenous
warriors to rise militarily against the Spanish included Princes Atonal and Atlacatl of the Pipil people in central
El Salvador and Princess Antu Silan Ulap of the Lenca people in eastern El Salvador, who saw the Spanish not as gods
but as barbaric invaders. After fierce battles, the Pipil successfully fought off the Spanish army led by Pedro de
Alvarado along with their Mexican Indian allies (the Tlaxcalas), sending them back to Guatemala. After many other
attacks with an army reinforced with Guatemalan Indian allies, the Spanish were able to conquer Cuzcatlan. After
further attacks, the Spanish also conquered the Lenca people. Eventually, the Spaniards intermarried with Pipil and
Lenca women, resulting in the Mestizo population which would become the majority of the Salvadoran people. Today
many Pipil and other indigenous populations live in the many small towns of El Salvador like Izalco, Panchimalco,
Sacacoyo, and Nahuizalco. About five percent of the population are of full indigenous descent, while upwards of
eighty percent of Hondurans, the majority, are mestizo or part-indigenous with European admixture, and about
ten percent are of indigenous or African descent. The main concentrations of indigenous people in Honduras are in the rural
westernmost areas facing Guatemala, along the Caribbean Sea coastline, and on the Nicaraguan border. The majority
of indigenous people are Lencas, Miskitos to the east, Mayans, Pech, Sumos, and Tolupan. The territory of modern-day
Mexico was home to numerous indigenous civilizations prior to the arrival of the Spanish conquistadores: The Olmecs,
who flourished from between 1200 BCE to about 400 BCE in the coastal regions of the Gulf of Mexico; the Zapotecs
and the Mixtecs, who held sway in the mountains of Oaxaca and the Isthmus of Tehuantepec; the Maya in the Yucatan
(and into neighbouring areas of contemporary Central America); the Purépecha in present-day Michoacán and surrounding
areas, and the Aztecs/Mexica, who, from their central capital at Tenochtitlan, dominated much of the centre and south
of the country (and the non-Aztec inhabitants of those areas) when Hernán Cortés first landed at Veracruz. The "General
Law of Linguistic Rights of the Indigenous Peoples" grants all indigenous languages spoken in Mexico, regardless
of the number of speakers, the same validity as Spanish in all territories in which they are spoken, and indigenous
peoples are entitled to request some public services and documents in their native languages. Along with Spanish,
the law has granted all of them (more than 60 languages) the status of "national languages". The law includes all indigenous
languages of the Americas regardless of origin; that is, it includes the indigenous languages of ethnic groups non-native
to the territory. As such the National Commission for the Development of Indigenous Peoples recognizes the language
of the Kickapoo, who immigrated from the United States, and recognizes the languages of the Guatemalan indigenous
refugees. The Mexican government has promoted and established bilingual primary and secondary education in some indigenous
rural communities. Nonetheless, of the indigenous peoples in Mexico, only about 67% of them (or 5.4% of the country's
population) speak an indigenous language and about a sixth do not speak Spanish (1.2% of the country's population).
About 5% of the Nicaraguan population are indigenous. The largest indigenous group in Nicaragua is the Miskito people.
Their territory extended from Cape Camarón, Honduras, to Rio Grande, Nicaragua along the Mosquito Coast. There is
a native Miskito language, but large groups speak Miskito Coast Creole, Spanish, Rama and other languages. The Creole
English came about through frequent contact with the British who colonized the area. Many are Christians. Traditional
Miskito society was highly structured with a defined political structure. There was a king, but he did not have total
power. Instead, the power was split between himself, a governor, a general, and by the 1750s, an admiral. Historical
information on kings is often obscured by the fact that many of the kings were semi-mythical. Another major group
is the Mayangna (or Sumu) people, numbering some 10,000. The indigenous population of Peru makes up around 45% of the total population.
Native Peruvian traditions and customs have shaped the way Peruvians live and see themselves today. Cultural citizenship—or
what Renato Rosaldo has called, "the right to be different and to belong, in a democratic, participatory sense" (1996:243)—is
not yet very well developed in Peru. This is perhaps no more apparent than in the country's Amazonian regions where
indigenous societies continue to struggle against state-sponsored economic abuses, cultural discrimination, and pervasive
violence. Indigenous peoples in what is now the contiguous United States, including their descendants, are commonly
called "American Indians", or simply "Indians" domestically, or "Native Americans" by the U.S. Census Bureau. In Alaska, indigenous
peoples belong to 11 cultures with 11 languages. These include the St. Lawrence Island Yupik, Iñupiat, Athabaskan,
Yup'ik, Cup'ik, Unangax, Alutiiq, Eyak, Haida, Tsimshian, and Tlingit, who are collectively called Alaska Natives.
Indigenous Polynesian peoples, which include Marshallese, Samoan, Tahitian, and Tongan, are politically considered
Pacific Islands American but are geographically and culturally distinct from indigenous peoples of the Americas.
Native Americans in the United States make up 0.97% to 2% of the population. In the 2010 census, 2.9 million people
self-identified as Native American, Native Hawaiian, and Alaska Native alone, and 5.2 million people identified as
U.S. Native Americans, either alone or in combination with one or more other ethnicities or races. About 1.8 million are
recognized as enrolled tribal members. Tribes have established their own criteria for membership,
which are often based on blood quantum, lineal descent, or residency. A minority of US Native Americans live in land
units called Indian reservations. Some California and Southwestern tribes, such as the Kumeyaay, Cocopa, Pascua Yaqui
and Apache span both sides of the US–Mexican border. Haudenosaunee people have the legal right to freely cross the
US–Canadian border. Athabascan, Tlingit, Haida, Tsimshian, Iñupiat, Blackfeet, Nakota, Cree, Anishinaabe, Huron,
Lenape, Mi'kmaq, Penobscot, and Haudenosaunee, among others live in both Canada and the US. Most Venezuelans have
some indigenous heritage, but the indigenous population make up only around 2% of the total population. They speak
around 29 different languages and many more dialects, but some of the ethnic groups are very small and their languages
are in danger of becoming extinct in the next decades. The most important indigenous groups are the Ye'kuana, the
Wayuu, the Pemon and the Warao. The most advanced native people to have lived in present-day Venezuela are thought
to have been the Timoto-cuicas, who mainly lived in the Venezuelan Andes. In total, it is estimated that there were
between 350,000 and 500,000 inhabitants, the most densely populated area being the Andean region (home of the Timoto-cuicas),
thanks to the advanced agricultural techniques used. The Native American name controversy is an ongoing dispute over
the acceptable ways to refer to the indigenous peoples of the Americas and to broad subsets thereof, such as those
living in a specific country or sharing certain cultural attributes. When discussing broader subsets of peoples,
naming may be based on shared language, region, or historical relationship. Many English exonyms have been used to
refer to the indigenous peoples of the Americas. Some of these names were based on foreign-language terms used by
earlier explorers and colonists, while others resulted from the colonists' attempt to translate endonyms from the
native language into their own, and yet others were pejorative terms arising out of prejudice and fear, during periods
of conflict. However, since the 20th century, indigenous peoples in the Americas have been more vocal about the ways
they wish to be referred to, pressing for the elimination of terms widely considered to be obsolete, inaccurate,
or racist. During the latter half of the 20th century and the rise of the Indian rights movement, the United States
government responded by proposing the use of the term "Native American," to recognize the primacy of indigenous peoples'
tenure in the nation, but this term was not fully accepted. Other naming conventions have been proposed and used,
but none are accepted by all indigenous groups. In recent years, there has been a rise of indigenous movements in
the Americas (mainly South America). These are rights-driven groups that organize themselves in order to achieve
some sort of self-determination and the preservation of their culture for their peoples. Organizations like the Coordinator
of Indigenous Organizations of the Amazon River Basin and the Indian Council of South America are examples of movements
that are breaking the barrier of borders in order to obtain rights for Amazonian indigenous populations everywhere.
Similar movements for indigenous rights can also be seen in Canada and the United States, with movements like the
International Indian Treaty Council and the accession of native Indian groups into the Unrepresented Nations and Peoples
Organization. Representatives from indigenous and rural organizations from major South American countries, including
Bolivia, Ecuador, Colombia, Chile and Brazil, started a forum in support of Morales' legal process of change. The
meeting condemned plans by the European "foreign power elite" to destabilize the country. The forum also expressed
solidarity with Morales and his economic and social changes in the interest of historically marginalized majorities.
Furthermore, in a cathartic blow to the US-backed elite, it questioned US interference through diplomats and NGOs.
The forum was suspicious of plots against Bolivia and other countries, including Cuba, Venezuela, Ecuador, Paraguay
and Nicaragua. The genetic history of the indigenous peoples of the Americas primarily focuses on human Y-chromosome DNA haplogroups
and human mitochondrial DNA haplogroups. "Y-DNA" is passed solely along the patrilineal line, from father to son,
while "mtDNA" is passed down the matrilineal line, from mother to offspring of both sexes. Neither recombines, and
thus Y-DNA and mtDNA change only by chance mutation at each generation with no intermixture between parents' genetic
material. Autosomal "atDNA" markers are also used, but differ from mtDNA or Y-DNA in that they overlap significantly.
AtDNA is generally used to measure the average continent-of-ancestry genetic admixture in the entire human genome
and related isolated populations. Genetic studies of mitochondrial DNA (mtDNA) of Amerindians and some Siberian and
Central Asian peoples also revealed that the gene pool of the Turkic-speaking peoples of Siberia such as Altaians,
Khakas, Shors and Soyots, living between the Altai and Lake Baikal along the Sayan mountains, are genetically closest
to Amerindians.[citation needed] This view is shared by other researchers who argue that "the ancestors of the American
Indians were the first to separate from the great Asian population in the Middle Paleolithic." Research in 2012 found
evidence for a recent common ancestry between Native Americans and indigenous Altaians based on mitochondrial DNA
and Y-Chromosome analysis. The paternal lineages of Altaians mostly belong to the subclades of haplogroup P-M45 (xR1a
38-93%; xQ1a 4-32%). Human settlement of the New World occurred in stages from the Bering Sea coastline, with an
initial 15,000 to 20,000-year layover on Beringia for the small founding population. The micro-satellite diversity
and distributions of the Y lineage specific to South America indicate that certain indigenous peoples of the Americas
populations have been isolated since the initial colonization of the region. The Na-Dené, Inuit and Indigenous Alaskan
populations exhibit haplogroup Q (Y-DNA) mutations; however, they are distinct from other indigenous peoples of the Americas
in various mtDNA and atDNA mutations. This suggests that the earliest migrants into the northern extremes of North
America and Greenland derived from later migrant populations. A 2013 study in Nature reported that DNA found in the
24,000-year-old remains of a young boy from the archaeological Mal'ta-Buret' culture suggest that up to one-third
of the indigenous Americans may have ancestry that can be traced back to western Eurasians, who may have "had a more
north-easterly distribution 24,000 years ago than commonly thought". "We estimate that 14 to 38 percent of Native
American ancestry may originate through gene flow from this ancient population," the authors wrote. A route through
Beringia is seen as more likely than the Solutrean hypothesis. Kashani et al. (2012) state
that "The similarities in ages and geographical distributions for C4c and the previously analyzed X2a lineage provide
support to the scenario of a dual origin for Paleo-Indians. Taking into account that C4c is deeply rooted in the
Asian portion of the mtDNA phylogeny and is indubitably of Asian origin, the finding that C4c and X2a are characterized
by parallel genetic histories definitively dismisses the controversial hypothesis of an Atlantic glacial entry route
into North America."
Red is the color at the end of the spectrum of visible light next to orange and opposite violet. It has a dominant
wavelength of roughly 620–740 nanometres. Red is one of the additive primary colors of visible light, along
with green and blue, which in Red Green Blue (RGB) color systems are combined to create all the colors on a computer
monitor or television screen. Red is also one of the subtractive primary colors, along with yellow and blue, of the
RYB color space and traditional color wheel used by painters and artists. In nature, the red color of blood comes
from hemoglobin, the iron-containing protein found in the red blood cells of all vertebrates. The red color of the
Grand Canyon and other geological features is caused by hematite or red ochre, both forms of iron oxide. It also
causes the red color of the planet Mars. The red sky at sunset and sunrise is caused by an optical effect known as
Rayleigh scattering, which, when the sun is low or below the horizon, increases the red-wavelength light that reaches
the eye. The color of autumn leaves is caused by pigments called anthocyanins, which are produced towards the end
of summer, when the green chlorophyll is no longer produced. One to two percent of the human population has red hair;
the color is produced by high levels of the reddish pigment pheomelanin (which also accounts for the red color of
the lips) and relatively low levels of the dark pigment eumelanin. A red dye called Kermes was made beginning in
the Neolithic Period by drying and then crushing the bodies of the females of a tiny scale insect in the genus Kermes,
primarily Kermes vermilio. The insects live on the sap of certain trees, especially Kermes oak trees near the Mediterranean
region. Jars of kermes have been found in a Neolithic cave-burial at Adaoutse, Bouches-du-Rhône. Kermes from oak
trees was later used by Romans, who imported it from Spain. A different variety of dye was made from Porphyrophora
hamelii (Armenian cochineal) scale insects that lived on the roots and stems of certain herbs. It was mentioned in
texts as early as the 8th century BC, and it was used by the ancient Assyrians and Persians. Kermes is also mentioned
in the Bible. In the Book of Exodus, God instructs Moses to have the Israelites bring him an offering including cloth
"of blue, and purple, and scarlet." The term used for scarlet in the 4th century Latin Vulgate version of the Bible
passage is coccumque bis tinctum, meaning "colored twice with coccus." Coccus, from the ancient Greek Kokkos, means
a tiny grain and is the term that was used in ancient times for the Kermes vermilio insect used to make the Kermes
dye. This was also the origin of the expression "dyed in the grain." But, like many colors, it also had a negative
association, with heat, destruction and evil. A prayer to the goddess Isis said: "Oh Isis, protect me from all things evil
and red." The ancient Egyptians began manufacturing pigments in about 4000 BC. Red ochre was widely used as a pigment
for wall paintings, particularly as the skin color of men. An ivory painter's palette found inside the tomb of King
Tutankhamun had small compartments with pigments of red ochre and five other colors. The Egyptians used the root
of the rubia, or madder plant, to make a dye, later known as alizarin, and also used it to color a white powder to use
as a pigment, which became known as madder lake, alizarin or alizarin crimson. In Ancient Rome, Tyrian purple was
the color of the Emperor, but red had an important religious symbolism. Romans wore togas with red stripes on holidays,
and the bride at a wedding wore a red shawl, called a flammeum. Red was used to color statues and the skin of gladiators.
Red was also the color associated with the army; Roman soldiers wore red tunics, and officers wore a cloak called a paludamentum
which, depending upon the quality of the dye, could be crimson, scarlet or purple. In Roman mythology red is associated
with the god of war, Mars. The vexilloid of the Roman Empire had a red background with the letters SPQR in gold.
A Roman general receiving a triumph had his entire body painted red in honor of his achievement. The Romans liked
bright colors, and many Roman villas were decorated with vivid red murals. The pigment used for many of the murals
was called vermilion, and it came from the mineral cinnabar, a common ore of mercury. It was one of the finest reds
of ancient times – the paintings have retained their brightness for more than twenty centuries. The source of cinnabar
for the Romans was a group of mines near Almadén, southwest of Madrid, in Spain. Working in the mines was extremely
dangerous, since mercury is highly toxic; the miners were slaves or prisoners, and being sent to the cinnabar mines
was a virtual death sentence. Red was the color of the banner of the Byzantine emperors. In Western Europe, Emperor
Charlemagne painted his palace red as a very visible symbol of his authority, and wore red shoes at his coronation.
Kings, princes and, beginning in 1295, Roman Catholic cardinals began to wear red costumes. When Abbé Suger rebuilt
Saint Denis Basilica outside Paris in the early 12th century, he added stained glass windows colored with blue cobalt
glass and red glass tinted with copper. Together they flooded the basilica with a mystical light. Soon stained glass
windows were being added to cathedrals all across France, England and Germany. In Medieval painting red was used
to attract attention to the most important figures; both Christ and the Virgin Mary were commonly painted wearing
red mantles. Red clothing was a sign of status and wealth. It was worn not only by cardinals and princes, but also
by merchants, artisans and townspeople, particularly on holidays or special occasions. Red dye for the clothing of
ordinary people was made from the roots of the rubia tinctorum, the madder plant. This color leaned toward brick-red,
and faded easily in the sun or during washing. The wealthy and aristocrats wore scarlet clothing dyed with kermes,
or carmine, made from the carminic acid in tiny female scale insects, which lived on the leaves of oak trees in Eastern
Europe and around the Mediterranean. The insects were gathered, dried, crushed, and boiled with different ingredients
in a long and complicated process, which produced a brilliant scarlet. Red played an important role in Chinese philosophy.
It was believed that the world was composed of five elements: metal, wood, water, fire and earth, and that each had
a color. Red was associated with fire. Each Emperor chose the color that his fortune-tellers believed would bring
the most prosperity and good fortune to his reign. During the Zhou, Han, Jin, Song and Ming Dynasties, red was considered
a noble color, and it was featured in all court ceremonies, from coronations to sacrificial offerings, and weddings.
Red was also a badge of rank. During the Song dynasty (960–1279), officials of the top three ranks wore purple clothes;
those of the fourth and fifth wore bright red; those of the sixth and seventh wore green; and the eighth and ninth
wore blue. Red was the color worn by the royal guards of honor, and the color of the carriages of the imperial family.
When the imperial family traveled, their servants and accompanying officials carried red and purple umbrellas. Of
an official who had talent and ambition, it was said "he is so red he becomes purple." Red was also featured in Chinese
Imperial architecture. In the Tang and Song Dynasties, gates of palaces were usually painted red, and nobles often
painted their entire mansion red. One of the most famous works of Chinese literature, A Dream of Red Mansions by
Cao Xueqin (1715–1763), was about the lives of noble women who passed their lives out of public sight within the
walls of such mansions. In later dynasties red was reserved for the walls of temples and imperial residences. When
the Manchu rulers of the Qing Dynasty conquered the Ming and took over the Forbidden City and Imperial Palace in
Beijing, all the walls, gates, beams and pillars were painted in red and gold. There were guilds of dyers who specialized
in red in Venice and other large European cities. The Rubia plant was used to make the most common dye; it produced
an orange-red or brick red color used to dye the clothes of merchants and artisans. For the wealthy, the dye used
was Kermes, made from a tiny scale insect which fed on the branches and leaves of the oak tree. For those with even
more money there was Polish Cochineal, also known as Kermes vermilio or "Blood of Saint John", which was made from
a related insect, the Margodes polonicus. It made a more vivid red than ordinary Kermes. The finest and most expensive
variety of red made from insects was the "Kermes" of Armenia (Armenian cochineal, also known as Persian kirmiz),
made by collecting and crushing Porphyrophora hamelii, an insect which lived on the roots and stems of certain grasses.
The pigment and dye merchants of Venice imported and sold all of these products and also manufactured their own color,
called Venetian red, which was considered the most expensive and finest red in Europe. Its secret ingredient was
arsenic, which brightened the color. But early in the 16th century, a brilliant new red appeared in Europe. When
the Spanish conquistador Hernán Cortés and his soldiers conquered the Aztec Empire in 1519–1521, they slowly discovered
that the Aztecs had another treasure besides silver and gold: the tiny cochineal, a parasitic scale
insect which lived on cactus plants, which, when dried and crushed, made a magnificent red. The cochineal in Mexico
was closely related to the Kermes varieties of Europe, but unlike European Kermes, it could be harvested several
times a year, and it was ten times stronger than the Kermes of Poland. It worked particularly well on silk, satin
and other luxury textiles. In 1523 Cortes sent the first shipment to Spain. Soon cochineal began to arrive in European
ports aboard convoys of Spanish galleons. The painters of the early Renaissance used two traditional lake pigments,
made from mixing dye with either chalk or alum, kermes lake, made from kermes insects, and madder lake, made from
the rubia tinctorum plant. With the arrival of cochineal, they had a third, carmine, which made a very fine crimson,
though it had a tendency to change color if not used carefully. It was used by almost all the great painters of the
15th and 16th centuries, including Rembrandt, Vermeer, Rubens, Anthony van Dyck, Diego Velázquez and Tintoretto.
Later it was used by Thomas Gainsborough, Seurat and J.M.W. Turner. During the French Revolution, red became a symbol
of liberty and personal freedom used by the Jacobins and other more radical parties. Many of them wore a red Phrygian
cap, or liberty cap, modeled after the caps worn by freed slaves in Ancient Rome. During the height of the Reign
of Terror, women wearing red caps gathered around the guillotine to celebrate each execution. They were called the
"Furies of the guillotine". The guillotines used during the Reign of Terror in 1792 and 1793 were painted red, or
made of red wood. During the Reign of Terror a statue of a woman titled Liberty, painted red, was placed in the square
in front of the guillotine. After the end of the Reign of Terror, France went back to the blue, white and red tricolor,
whose red was taken from the traditional color of Saint Denis, the Christian martyr and patron saint of Paris. As
the Industrial Revolution spread across Europe, chemists and manufacturers sought new red dyes that could be used
for large-scale manufacture of textiles. One popular color imported into Europe from Turkey and India in the 18th
and early 19th century was Turkey red, known in France as rouge d'Adrinople. Beginning in the 1740s, this bright
red color was used to dye or print cotton textiles in England, the Netherlands and France. Turkey red used madder
as the colorant, but the process was longer and more complicated, involving multiple soaking of the fabrics in lye,
olive oil, sheep's dung, and other ingredients. The fabric was more expensive but resulted in a fine bright and lasting
red, similar to carmine, perfectly suited to cotton. The fabric was widely exported from Europe to Africa, the Middle
East and America. In 19th century America, it was widely used in making the traditional patchwork quilt. The 19th
century also saw the use of red in art to create specific emotions, not just to imitate nature. It saw the systematic
study of color theory, and particularly the study of how complementary colors such as red and green reinforced each
other when they were placed next to each other. These studies were avidly followed by artists such as Vincent van
Gogh. Describing his painting, The Night Cafe, to his brother Theo in 1888, Van Gogh wrote: "I sought to express
with red and green the terrible human passions. The hall is blood red and pale yellow, with a green billiard table
in the center, and four lamps of lemon yellow, with rays of orange and green. Everywhere it is a battle and antithesis
of the most different reds and greens." Matisse was also one of the first 20th-century artists to make color the
central element of the painting, chosen to evoke emotions. "A certain blue penetrates your soul", he wrote. "A certain
red affects your blood pressure." He also was familiar with the way that complementary colors, such as red and green,
strengthened each other when they were placed next to each other. He wrote, "My choice of colors is not based on
scientific theory; it is based on observation, upon feelings, upon the real nature of each experience ... I just
try to find a color which corresponds to my feelings." Rothko also began using the new synthetic pigments, but not
always with happy results. In 1962 he donated to Harvard University a series of large murals of the Passion of Christ
whose predominant colors were dark pink and deep crimson. He mixed mostly traditional colors to make the pink and
crimson: synthetic ultramarine, cerulean blue, and titanium white, but he also used two new organic reds, Naphtol
and Lithol. The Naphtol did well, but the Lithol slowly changed color when exposed to light. Within five years the
deep pinks and reds had begun to turn light blue, and by 1979 the paintings were ruined and had to be taken down.
Unlike vermilion or red ochre, made from minerals, red lake pigments are made by mixing organic dyes, made from insects
or plants, with white chalk or alum. Red lac was made from the gum lac, the dark red resinous substance secreted
by various scale insects, particularly the Laccifer lacca from India. Carmine lake was made from the cochineal insect
from Central and South America, Kermes lake came from a different scale insect, kermes vermilio, which thrived on
oak trees around the Mediterranean. Other red lakes were made from the rose madder plant and from the brazilwood
tree. In modern color theory, also known as the RGB color model, red, green and blue are additive primary colors.
Red, green and blue light combined together makes white light, and these three colors, combined in different mixtures,
can produce nearly any other color. This is the principle that is used to make all of the colors on your computer
screen and your television. For example, purple on a computer screen is made by a formula similar to that used by Cennino
Cennini in the Renaissance to make violet, but using additive colors and light instead of pigment: it is created
by combining red and blue light at equal intensity on a black screen. Violet is made on a computer screen in a similar
way, but with a greater amount of blue light and less red light. So that the maximum number of colors can be accurately
reproduced on your computer screen, each color has been given an sRGB code number, which tells your computer the
intensity of the red, green and blue components of that color. The intensity of each component is measured on a scale
of zero to 255, which means the complete list includes 16,777,216 distinct colors and shades. The sRGB number of
pure red, for example, is 255, 0, 0, which means the red component is at its maximum intensity, and there is no
green or blue. The sRGB number for crimson is 220, 20, 60, which means that the red is slightly less intense and
therefore darker; there is some green, which leans it toward orange; and there is a larger amount of blue, which makes
it slightly blue-violet. As a ray of white sunlight travels through the atmosphere to the eye, some of the colors
are scattered out of the beam by air molecules and airborne particles due to Rayleigh scattering, changing the final
color of the beam that is seen. Colors with a shorter wavelength, such as blue and green, scatter more strongly,
and are removed from the light that finally reaches the eye. At sunrise and sunset, when the path of the sunlight
through the atmosphere to the eye is longest, the blue and green components are removed almost completely, leaving
the longer wavelength orange and red light. The remaining reddened sunlight can also be scattered by cloud droplets
and other relatively large particles, which give the sky above the horizon its red glow. Lasers emitting in the red
region of the spectrum have been available since the invention of the ruby laser in 1960. In 1962 the red helium–neon
laser was invented, and these two types of lasers were widely used in many scientific applications including holography,
and in education. Red helium–neon lasers were used commercially in LaserDisc players. The use of red laser diodes
became widespread with the commercial success of modern DVD players, which use 660 nm laser diode technology. Today,
red and red-orange laser diodes are widely available to the public in the form of extremely inexpensive laser pointers.
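The sRGB arithmetic described earlier is easy to check in a few lines of code. This is a minimal sketch, not tied to any particular library; the helper name srgb_to_hex is my own, and the #RRGGBB hex notation is simply the conventional web form of the same 0–255 triples:

```python
# Each sRGB color is a triple of red, green and blue intensities,
# each measured on a scale of 0 to 255, as described in the text.

def srgb_to_hex(r, g, b):
    """Pack an (r, g, b) triple into conventional #RRGGBB notation."""
    for component in (r, g, b):
        assert 0 <= component <= 255
    return f"#{r:02X}{g:02X}{b:02X}"

# 256 choices per component gives the complete list of colors:
print(256 ** 3)                  # 16777216 distinct colors and shades

print(srgb_to_hex(255, 0, 0))    # pure red -> #FF0000
print(srgb_to_hex(220, 20, 60))  # crimson  -> #DC143C
```

The two example triples are the ones given in the text: pure red at maximum red intensity with no green or blue, and crimson with slightly less red plus small amounts of green and blue.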
Portable, high-powered versions are also available for various applications. More recently, 671 nm diode-pumped solid
state (DPSS) lasers have been introduced to the market for all-DPSS laser display systems, particle image velocimetry,
Raman spectroscopy, and holography. During the summer growing season, phosphate is at a high level. It has a vital
role in the breakdown of the sugars manufactured by chlorophyll. But in the fall, phosphate, along with the other
chemicals and nutrients, moves out of the leaf into the stem of the plant. When this happens, the sugar-breakdown
process changes, leading to the production of anthocyanin pigments. The brighter the light during this period, the
greater the production of anthocyanins and the more brilliant the resulting color display. When the days of autumn
are bright and cool, and the nights are chilly but not freezing, the brightest colorations usually develop. Red hair
varies from a deep burgundy through burnt orange to bright copper. It is characterized by high levels of the reddish
pigment pheomelanin (which also accounts for the red color of the lips) and relatively low levels of the dark pigment
eumelanin. The term redhead (originally redd hede) has been in use since at least 1510. Cultural reactions have varied
from ridicule to admiration; many common stereotypes exist regarding redheads and they are often portrayed as fiery-tempered.
(See red hair). Red is associated with dominance in a number of animal species. For example, in mandrills, red coloration
of the face is greatest in alpha males, increasingly less prominent in lower ranking subordinates, and directly correlated
with levels of testosterone. Red can also affect the perception of dominance by others, leading to significant differences
in mortality, reproductive success and parental investment between individuals displaying red and those not. In humans,
wearing red has been linked with increased performance in competitions, including professional sport and multiplayer
video games. Controlled tests have demonstrated that wearing red does not increase performance or levels of testosterone
during exercise, so the effect is likely to be produced by perceived rather than actual performance. Judges of tae
kwon do have been shown to favor competitors wearing red protective gear over blue, and, when asked, a significant
majority of people say that red abstract shapes are more "dominant", "aggressive", and "likely to win a physical
competition" than blue shapes. In contrast to its positive effect in physical competition and dominance behavior,
exposure to red decreases performance in cognitive tasks and elicits aversion in psychological tests where subjects
are placed in an "achievement" context (e.g. taking an IQ test). Surveys show that red is the color most associated
with courage. In western countries red is a symbol of martyrs and sacrifice, particularly because of its association
with blood. Beginning in the Middle Ages, the Pope and Cardinals of the Roman Catholic Church wore red to symbolize
the blood of Christ and the Christian martyrs. The banner of the Christian soldiers in the First Crusade was a red
cross on a white field, the St. George's Cross. According to Christian tradition, Saint George was a Roman soldier
who was a member of the guards of the Emperor Diocletian, who refused to renounce his Christian faith and was martyred.
The Saint George's Cross became the Flag of England in the 16th century, and now is part of the Union Flag of the
United Kingdom, as well as the Flag of the Republic of Georgia. Saint Valentine, a Roman Catholic Bishop or priest
who was martyred in about 296 AD, seems to have had no known connection with romantic love, but the day of his martyrdom
on the Roman Catholic calendar, Saint Valentine's Day (February 14), became, in the 14th century, an occasion for
lovers to send messages to each other. In recent years the celebration of Saint Valentine's Day has spread beyond
Christian countries to Japan and China and other parts of the world. The celebration of Saint Valentine's Day is
forbidden or strongly condemned in many Islamic countries, including Saudi Arabia, Pakistan and Iran. In Saudi Arabia,
in 2002 and 2011, religious police banned the sale of all Valentine's Day items, telling shop workers to remove any
red items, as the day is considered a Christian holiday. Red is the color most commonly associated with joy and
well-being. It is the color of celebration and ceremony. A red carpet is often used to welcome distinguished guests. Red
is also the traditional color of seats in opera houses and theaters. Scarlet academic gowns are worn by new Doctors
of Philosophy at degree ceremonies at Oxford University and other schools. In China, it is considered the color of
good fortune and prosperity, and it is the color traditionally worn by brides. In Christian countries, it is the
color traditionally worn at Christmas by Santa Claus, because in the 4th century the historic Saint Nicholas was
the Greek Christian Bishop of Myra, in modern-day Turkey, and bishops then dressed in red. Red is the traditional
color of warning and danger. In the Middle Ages, a red flag announced that the defenders of a town or castle would
fight to defend it, and a red flag hoisted by a warship meant they would show no mercy to their enemy. In Britain,
in the early days of motoring, motor cars had to follow a man with a red flag who would warn horse-drawn vehicles,
before the Locomotives on Highways Act 1896 abolished this law. In automobile races, the red flag is raised if there
is danger to the drivers. In international football, a player who has made a serious violation of the rules is shown
a red penalty card and ejected from the game. Red is the international color of stop signs and stop lights on highways
and intersections. It was standardized as the international color at the Vienna Convention on Road Signs and Signals
of 1968. It was chosen partly because red is the brightest color in daytime (next to orange), though it is less visible
at twilight, when green is the most visible color. Red also stands out more clearly against a cool natural backdrop
of blue sky, green trees or gray buildings. But it was mostly chosen as the color for stoplights and stop signs because
of its universal association with danger and warning. Red is used in modern fashion much as it was used in Medieval
painting; to attract the eyes of the viewer to the person who is supposed to be the center of attention. People wearing
red seem to be closer than those dressed in other colors, even if they are actually the same distance away. Monarchs,
wives of Presidential candidates and other celebrities often wear red to be visible from a distance in a crowd. It
is also commonly worn by lifeguards and others whose job requires them to be easily found. "So he carried me away
in the spirit into the wilderness: and I saw a woman sit upon a scarlet coloured beast, full of names of blasphemy,
having seven heads and ten horns. "And the woman was arrayed in purple and scarlet colour, and decked with gold and
precious stones and pearls, having a golden cup in her hand full of abominations and filthiness of her fornication:
"And upon her forehead was a name written, a mystery: Babylon the Great, the Mother of Harlots and of all the abominations
of the earth: And I saw the woman drunken with the blood of the saints, and with the blood of the martyrs of Jesus.
In China, red (simplified Chinese: 红; traditional Chinese: 紅; pinyin: hóng) is the symbol of fire and the south (both
south in general and Southern China specifically). It carries a largely positive connotation, being associated with
courage, loyalty, honor, success, fortune, fertility, happiness, passion, and summer. In Chinese cultural traditions,
red is associated with weddings (where brides traditionally wear red dresses) and red paper is frequently used to
wrap gifts of money or other objects. Special red packets (simplified Chinese: 红包; traditional Chinese: 紅包; pinyin:
hóng bāo in Mandarin or lai see in Cantonese) are specifically used during Chinese New Year celebrations for giving
monetary gifts. On the more negative side, obituaries are traditionally written in red ink, and to write someone's
name in red signals either cutting them out of one's life, or that they have died. Red is also associated with either
the feminine or the masculine (yin and yang respectively), depending on the source. The Little Red Book, a collection
of quotations from Chairman Mao Tse-Tung, founding father of the People's Republic of China (PRC), was published
in 1966 and widely distributed thereafter. In Central Africa, Ndembu warriors rub themselves with red paint during
celebrations. Since their culture sees the color as a symbol of life and health, sick people are also painted with
it. Like most Central African cultures, the Ndembu see red as ambivalent, better than black but not as good as white.
In other parts of Africa, however, red is a color of mourning, representing death. Because red is associated
with death in many parts of Africa, the Red Cross has changed its colors to green and white in parts of the continent.
Major League Baseball is especially well known for red teams. The Cincinnati Red Stockings are the oldest professional
baseball team, dating back to 1869. The franchise soon relocated to Boston and is now the Atlanta Braves, but its
name survives as the origin for both the Cincinnati Reds and Boston Red Sox. During the 1950s when red was strongly
associated with communism, the modern Cincinnati team was known as the "Redlegs" and the term was used on baseball
cards. After the red scare faded, the team was known as the "Reds" again. The Los Angeles Angels of Anaheim are also
known for their color red, as are the St. Louis Cardinals, Arizona Diamondbacks, and the Philadelphia Phillies. In
association football, teams such as Manchester United, Bayern Munich, Liverpool, Arsenal, Toronto FC, and S.L. Benfica
primarily wear red jerseys. Other teams that prominently feature red on their kits include A.C. Milan (nicknamed
i rossoneri for their red and black shirts), AFC Ajax, Olympiacos, River Plate, Atlético Madrid, and Flamengo. A
red penalty card is issued to a player who commits a serious infraction: the player is immediately disqualified from
further play and his team must continue with one less player for the game's duration. Red is one of the most common
colors used on national flags. The use of red has similar connotations from country to country: the blood, sacrifice,
and courage of those who defended their country; the sun and the hope and warmth it brings; and the sacrifice of
Christ's blood (in some historically Christian nations) are a few examples. Red is the color of the flags of several
countries that once belonged to the former British Empire. The British flag bears the colors red, white, and blue;
it includes the cross of Saint George, patron saint of England, and the saltire of Saint Patrick, patron saint of
Ireland, both of which are red on white. The flag of the United States bears the colors of Britain; the colors of
the French tricolore include red as part of the old Paris coat of arms; and other countries' flags, such as those
of Australia, New Zealand, and Fiji, carry a small inset of the British flag in memory of their ties to that country.
Many former colonies of Spain, such as Mexico, Colombia, Ecuador, Cuba, Puerto Rico, Peru, and Venezuela, also feature
red, one of the colors of the Spanish flag, on their own banners. Red flags are also used to symbolize storms, bad
water conditions, and many other dangers. Navy flags are often red and yellow. Red is prominently featured in the
flag of the United States Marine Corps. Red, blue, and white are also the Pan-Slavic colors adopted by the Slavic
solidarity movement of the late nineteenth century. Initially these were the colors of the Russian flag; as the Slavic
movement grew, they were adopted by other Slavic peoples including Slovaks, Slovenes, and Serbs. The flags of the
Czech Republic and Poland use red for historic heraldic reasons (see Coat of arms of Poland and Coat of arms of the
Czech Republic) and not due to Pan-Slavic connotations. In 2004 Georgia adopted a new white flag, which consists of
four small red crosses and one large red cross in the middle touching all four sides. Red, white, and black were the colors of
the German Empire from 1870 to 1918, and as such they came to be associated with German nationalism. In the 1920s
they were adopted as the colors of the Nazi flag. In Mein Kampf, Hitler explained that they were "revered colors
expressive of our homage to the glorious past." The red part of the flag was also chosen to attract attention; Hitler
wrote: "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really
striking emblem may be the first cause of awakening interest in a movement." The red also symbolized the social program
of the Nazis, aimed at German workers. Several designs by a number of different authors were considered, but the
one adopted in the end was Hitler's personal design. The red flag appeared as a political symbol during the French
Revolution, after the fall of the Bastille. A law adopted by the new government on October 20, 1789 authorized the Garde
Nationale to raise the red flag in the event of a riot, to signal that the Garde would imminently intervene. During
a demonstration on the Champs de Mars on July 17, 1791, the Garde Nationale fired on the crowd, killing up to fifty
people. The government was denounced by the more radical revolutionaries. In the words of Rouget de Lisle's famous
hymn, the Marseillaise: "Against us they have raised the bloody flag of tyranny!" (Contre nous de la tyrannie, l'étendard
sanglant est levé). Beginning in 1790, the most radical revolutionaries adopted the red flag themselves, to symbolize
the blood of those killed in the demonstrations, and to call for the repression of those they considered counter-revolutionary.
Karl Marx published the Communist Manifesto in February 1848, to little initial attention. However, a few days later the
French Revolution of 1848 broke out, which replaced the monarchy of Louis Philippe with the Second French Republic.
In June 1848, Paris workers, disenchanted with the new government, built barricades and raised red flags. The new
government called in the French Army to put down the uprising, the first of many such confrontations between the
army and the new worker's movements in Europe. In 1870, following the stunning defeat of the French Army by the Germans
in the Franco-Prussian War, French workers and socialist revolutionaries seized Paris and created the Paris Commune.
The Commune lasted for two months before it was crushed by the French Army, with much bloodshed. The original red
banners of the Commune became icons of the socialist revolution; in 1921 members of the French Communist Party came
to Moscow and presented the new Soviet government with one of the original Commune banners; it was placed (and is
still in place) in the tomb of Vladimir Lenin, next to his open coffin. In the United States, political commentators
often refer to the "red states", which traditionally vote for Republican candidates in presidential elections, and
"blue states", which vote for the Democratic candidate. This convention is relatively recent: before the 2000 presidential
election, media outlets assigned red and blue to both parties, sometimes alternating the allocation for each election.
Fixed usage was established during the 39-day recount following the 2000 election, when the media began to discuss
the contest in terms of "red states" versus "blue states". The Communist Party of China, founded in 1921, adopted
the red flag and hammer and sickle emblem of the Soviet Union, which became the national symbols when the Party took
power in China in 1949. Under Party leader Mao Zedong, the Party anthem became "The East Is Red", and Mao Zedong
himself was sometimes referred to as a "red sun". During the Cultural Revolution in China, Party ideology was enforced
by the Red Guards, and the sayings of Mao Zedong were published as a small red book in hundreds of millions of copies.
Today the Communist Party of China claims to be the largest political party in the world, with eighty million members.
After the Communist Party of China took power in 1949, the flag of China became a red flag with a large star symbolizing
the Communist Party, and smaller stars symbolizing workers, peasants, the urban middle class and rural middle class.
The flag of the Communist Party of China became a red banner with a hammer and sickle, similar to that on the Soviet
flag. In the 1950s and 1960s, other Communist regimes such as Vietnam and Laos also adopted red flags. Some Communist
countries, such as Cuba, chose to keep their old flags; and other countries used red flags which had nothing to do
with Communism or socialism; the red flag of Nepal, for instance, represents the national flower.
Egypt (/ˈiːdʒɪpt/; Arabic: مِصر Miṣr, Egyptian Arabic: مَصر Maṣr, Coptic: Ⲭⲏⲙⲓ Khemi), officially the Arab Republic of
Egypt, is a transcontinental country spanning the northeast corner of Africa and southwest corner of Asia, via a
land bridge formed by the Sinai Peninsula. It is the world's only contiguous Eurafrasian nation. Most of Egypt's
territory of 1,010,408 square kilometres (390,000 sq mi) lies within North Africa. Egypt is a Mediterranean country.
It is bordered by the Gaza Strip and Israel to the northeast, the Gulf of Aqaba to the east, the Red Sea to the east
and south, Sudan to the south and Libya to the west. Egypt has one of the longest histories of any modern country,
arising in the tenth millennium BC as one of the world's first nation states. Considered a cradle of civilisation,
Ancient Egypt experienced some of the earliest developments of writing, agriculture, urbanisation, organised religion
and central government. Iconic monuments such as the Giza Necropolis and its Great Sphinx, as well the ruins of Memphis,
Thebes, Karnak, and the Valley of the Kings, reflect this legacy and remain a significant focus of archaeological
study and popular interest worldwide. Egypt's rich cultural heritage is an integral part of its national identity,
having endured, and at times assimilated, various foreign influences, including Greek, Persian, Roman, Arab, Ottoman,
and European. Although Christianised in the first century of the Common Era, it was subsequently Islamised due to
the Islamic conquests of the seventh century. With over 90 million inhabitants, Egypt is the most populous country
in North Africa and the Arab World, the third-most populous in Africa (after Nigeria and Ethiopia), and the fifteenth-most
populous in the world. The great majority of its people live near the banks of the Nile River, an area of about 40,000
square kilometres (15,000 sq mi), where the only arable land is found. The large regions of the Sahara desert, which
constitute most of Egypt's territory, are sparsely inhabited. About half of Egypt's residents live in urban areas,
with most spread across the densely populated centres of greater Cairo, Alexandria and other major cities in the
Nile Delta. Modern Egypt is considered to be a regional and middle power, with significant cultural, political, and
military influence in North Africa, the Middle East and the Muslim world. Its economy is one of the largest and most
diversified in the Middle East, with sectors such as tourism, agriculture, industry and services at almost equal
production levels. In 2011, longtime President Hosni Mubarak stepped down amid mass protests. Later elections saw
the rise of the Muslim Brotherhood, which was ousted by the army a year later amid mass protests. Miṣr (IPA: [mi̠sˤr]
or Egyptian Arabic pronunciation: [mesˤɾ]; Arabic: مِصر) is the Classical Quranic Arabic and modern official name
of Egypt, while Maṣr (IPA: [mɑsˤɾ]; Egyptian Arabic: مَصر) is the local pronunciation in Egyptian Arabic. The name
is of Semitic origin, directly cognate with other Semitic words for Egypt such as the Hebrew מִצְרַיִם (Mitzráyim).
The oldest attestation of this name for Egypt is the Akkadian 𒆳 𒈪 𒄑 𒊒 KURmi-iṣ-ru miṣru, related to miṣru/miṣirru/miṣaru,
meaning "border" or "frontier". By about 6000 BC, a Neolithic culture had taken root in the Nile Valley. During the Neolithic
era, several predynastic cultures developed independently in Upper and Lower Egypt. The Badarian culture and the
successor Naqada series are generally regarded as precursors to dynastic Egypt. The earliest known Lower Egyptian
site, Merimda, predates the Badarian by about seven hundred years. Contemporaneous Lower Egyptian communities coexisted
with their southern counterparts for more than two thousand years, remaining culturally distinct, but maintaining
frequent contact through trade. The earliest known evidence of Egyptian hieroglyphic inscriptions appeared during
the predynastic period on Naqada III pottery vessels, dated to about 3200 BC. The First Intermediate Period ushered
in a time of political upheaval for about 150 years. Stronger Nile floods and stabilisation of government, however,
brought renewed prosperity to the country in the Middle Kingdom c. 2040 BC, reaching a peak during the reign
of Pharaoh Amenemhat III. A second period of disunity heralded the arrival of the first foreign ruling dynasty in
Egypt, that of the Semitic Hyksos. The Hyksos invaders took over much of Lower Egypt around 1650 BC and founded a
new capital at Avaris. They were driven out by an Upper Egyptian force led by Ahmose I, who founded the Eighteenth
Dynasty and relocated the capital from Memphis to Thebes. The New Kingdom (c. 1550–1070 BC) began with the Eighteenth
Dynasty, marking the rise of Egypt as an international power that, at its greatest extent, expanded into an empire
stretching as far south as Tombos in Nubia and including parts of the Levant in the east. This period is noted for some of the
most well known Pharaohs, including Hatshepsut, Thutmose III, Akhenaten and his wife Nefertiti, Tutankhamun and Ramesses
II. The first historically attested expression of monotheism came during this period as Atenism. Frequent contacts
with other nations brought new ideas to the New Kingdom. The country was later invaded and conquered by Libyans,
Nubians and Assyrians, but native Egyptians eventually drove them out and regained control of their country. In 525
BC, the powerful Achaemenid Persians, led by Cambyses II, began their conquest of Egypt, eventually capturing the
pharaoh Psamtik III at the battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt
from his home of Susa in Persia (modern Iran), leaving Egypt under the control of a satrapy. The Twenty-seventh
Dynasty of Egypt, from 525 BC to 402 BC, was a Persian-ruled period, save for the revolt of Petubastis III, with the
Achaemenid kings all being granted the title of pharaoh. A few temporarily successful revolts against the Persians
marked the fifth century BC, but Egypt was never able to permanently overthrow the Persians. The Ptolemaic Kingdom
was a powerful Hellenistic state, extending from southern Syria in the east, to Cyrene to the west, and south to
the frontier with Nubia. Alexandria became the capital city and a centre of Greek culture and trade. To gain recognition
by the native Egyptian populace, the Ptolemies named themselves successors to the Pharaohs. The later Ptolemies took
on Egyptian traditions, had themselves portrayed on public monuments in Egyptian style and dress, and participated
in Egyptian religious life. The last ruler from the Ptolemaic line was Cleopatra VII, who committed suicide following
the burial of her lover Mark Antony who had died in her arms (from a self-inflicted stab wound), after Octavian had
captured Alexandria and her mercenary forces had fled. The Ptolemies faced rebellions of native Egyptians often caused
by an unwanted regime and were involved in foreign and civil wars that led to the decline of the kingdom and its
annexation by Rome. Nevertheless, Hellenistic culture continued to thrive in Egypt well after the Muslim conquest.
The Byzantines were able to regain control of the country after a brief Sasanian Persian invasion early in the 7th
century, amidst the Byzantine–Sasanian War of 602–628, during which the Sasanians established a short-lived province,
known as Sasanian Egypt, that lasted ten years. Byzantine rule ended in 639–42, when Egypt was invaded and conquered
by the Muslim Arabs of the Islamic Empire. When they defeated the Byzantine armies in Egypt, the Arabs brought Sunni Islam to the country. Early
in this period, Egyptians began to blend their new faith with indigenous beliefs and practices, leading to various
Sufi orders that have flourished to this day. These earlier rites had survived the period of Coptic Christianity.
Muhammad Ali Pasha transformed the military from a force raised under the tradition of the corvée into a great modernised
army. He introduced conscription of the male peasantry in 19th-century Egypt and took a novel approach to creating
his great army, strengthening it in numbers and in skill. Education and training of the new soldiers was not optional,
and the new concepts were furthermore enforced by isolation: the men were held in barracks to avoid distraction
from their growth as a military unit to be reckoned with. The men's resentment of the military way of life eventually faded,
and a new ideology of nationalism and pride took hold. It was with the help of this reborn
martial unit that Muhammad Ali imposed his rule over Egypt. The Suez Canal, built in partnership with the French,
was completed in 1869. Its construction led to enormous debt to European banks, and caused popular discontent because
of the onerous taxation it required. In 1875 Ismail was forced to sell Egypt's share in the canal to the British
Government. Within three years this led to the imposition of British and French controllers who sat in the Egyptian
cabinet, and, "with the financial power of the bondholders behind them, were the real power in the Government." The
new government drafted and implemented a constitution in 1923 based on a parliamentary system. Saad Zaghlul was popularly
elected as Prime Minister of Egypt in 1924. In 1936, the Anglo-Egyptian Treaty was concluded. Continued instability
due to remaining British influence and increasing political involvement by the king led to the dissolution of the
parliament in a military coup d'état known as the 1952 Revolution. The Free Officers Movement forced King Farouk
to abdicate in favour of his son Fuad. British military presence in Egypt lasted until 1954. In 1958, Egypt and
Syria formed a sovereign union known as the United Arab Republic. The union was short-lived, ending in 1961 when
Syria seceded. During most of its existence, the United Arab Republic was also in a loose
confederation with North Yemen (or the Mutawakkilite Kingdom of Yemen), known as the United Arab States. In 1959,
the All-Palestine Government of the Gaza Strip, an Egyptian client state, was absorbed into the United Arab Republic
under the pretext of Arab union, and was never restored. In mid-May 1967, the Soviet Union issued warnings to Nasser
of an impending Israeli attack on Syria. Although the chief of staff Mohamed Fawzi verified them as "baseless", Nasser
took three successive steps that made the war virtually inevitable: On 14 May he deployed his troops in Sinai near
the border with Israel, on 19 May he expelled the UN peacekeepers stationed in the Sinai Peninsula border with Israel,
and on 23 May he closed the Straits of Tiran to Israeli shipping. On 26 May Nasser declared, "The battle will be
a general one and our basic objective will be to destroy Israel". At the time of the fall of the Egyptian monarchy
in the early 1950s, less than half a million Egyptians were considered upper class and rich, four million middle
class and 17 million lower class and poor. Fewer than half of all primary-school-age children attended school, most
of them being boys. Nasser's policies changed this. Land reform and distribution, the dramatic growth in university
education, and government support to national industries greatly improved social mobility and flattened the social
curve. From academic year 1953-54 through 1965-66, overall public school enrolments more than doubled. Millions of
previously poor Egyptians, through education and jobs in the public sector, joined the middle class. Doctors, engineers,
teachers, lawyers, and journalists constituted the bulk of the swelling middle class in Egypt under Nasser. During the
1960s, the Egyptian economy went from sluggish to the verge of collapse, the society became less free, and Nasser's
appeal waned considerably. In 1970, President Nasser died and was succeeded by Anwar Sadat. Sadat switched Egypt's
Cold War allegiance from the Soviet Union to the United States, expelling Soviet advisors in 1972. He launched the
Infitah economic reform policy, while clamping down on religious and secular opposition. In 1973, Egypt, along with
Syria, launched the October War, a surprise attack to regain part of the Sinai territory that Israel had captured six
years earlier. The war presented Sadat with a victory that allowed him to regain the Sinai later in return for peace with Israel.
In 1975, Sadat shifted Nasser's economic policies and sought to use his popularity to reduce government regulations
and encourage foreign investment through his program of Infitah. Through this policy, incentives such as reduced
taxes and import tariffs attracted some investors, but investments were mainly directed at low risk and profitable
ventures like tourism and construction, abandoning Egypt's infant industries. Even though Sadat's policy was intended
to modernise Egypt and assist the middle class, it mainly benefited the higher class, and, because of the elimination
of subsidies on basic foodstuffs, led to the 1977 Egyptian Bread Riots. During Mubarak's reign, the political scene
was dominated by the National Democratic Party, which was created by Sadat in 1978. It passed the 1993 Syndicates
Law, 1995 Press Law, and 1999 Nongovernmental Associations Law which hampered freedoms of association and expression
by imposing new regulations and draconian penalties on violations. As a result, by the late 1990s
parliamentary politics had become virtually irrelevant and alternative avenues for political expression were curtailed
as well. Constitutional changes voted on 19 March 2007 prohibited parties from using religion as a basis for political
activity, allowed the drafting of a new anti-terrorism law, authorised broad police powers of arrest and surveillance,
and gave the president power to dissolve parliament and end judicial election monitoring. In 2009, Dr. Ali El Deen
Hilal Dessouki, Media Secretary of the National Democratic Party (NDP), described Egypt as a "pharaonic" political
system, and democracy as a "long-term goal". Dessouki also stated that "the real center of power in Egypt is the
military". On 18 January 2014, the interim government instituted a new constitution following a referendum in which
98.1% of voters were supportive. Participation was low, with only 38.6% of registered voters taking part, although
this was higher than the 33% who voted in a referendum during Morsi's tenure. On 26 March 2014, Abdel Fattah el-Sisi,
the head of the Egyptian Armed Forces, who at this time was in control of the country, resigned from the military,
announcing he would stand as a candidate in the 2014 presidential election. The poll, held between 26 and 28 May
2014, resulted in a landslide victory for el-Sisi. Sisi was sworn into office as President of Egypt on 8 June 2014.
The Muslim Brotherhood and some liberal and secular activist groups boycotted the vote. Even though the military-backed
authorities extended voting to a third day, the 46% turnout was lower than the 52% turnout in the 2012 election.
Most of Egypt's rain falls in the winter months. South of Cairo, rainfall averages only around 2 to 5 mm (0.1 to
0.2 in) per year and at intervals of many years. On a very thin strip of the northern coast the rainfall can be as
high as 410 mm (16.1 in), mostly between October and March. Snow falls on Sinai's mountains and some of the north
coastal cities such as Damietta, Baltim, Sidi Barrany, etc. and rarely in Alexandria. A very small amount of snow
fell on Cairo on 13 December 2013, the first time Cairo received snowfall in many decades. Frost is also known in
mid-Sinai and mid-Egypt. Egypt is the driest and the sunniest country in the world, and most of its land surface
is desert. The plan stated that the following numbers of species of different groups had been recorded from Egypt:
algae (1483 species), animals (about 15,000 species of which more than 10,000 were insects), fungi (more than 627
species), monera (319 species), plants (2426 species), protozoans (371 species). For some major groups, for example
lichen-forming fungi and nematode worms, the number was not known. Apart from small and well-studied groups like
amphibians, birds, fish, mammals and reptiles, many of those numbers are likely to increase as further species
are recorded from Egypt. For the fungi, including lichen-forming species, for example, subsequent work has shown
that over 2200 species have been recorded from Egypt, and the final figure of all fungi actually occurring in the
country is expected to be much higher. The House of Representatives, whose members are elected to serve five-year
terms, specialises in legislation. Elections were last held between November 2011 and January 2012; the resulting
parliament was later dissolved. The next parliamentary election will be held within six months of the constitution's ratification on 18
January 2014. Originally, the parliament was to be formed before the president was elected, but interim president
Adly Mansour pushed the date. The Egyptian presidential election, 2014, took place on 26–28 May 2014. Official figures
showed a turnout of 25,578,233 or 47.5%, with Abdel Fattah el-Sisi winning with 23.78 million votes, or 96.91% compared
to 757,511 (3.09%) for Hamdeen Sabahi. In the 1980s, 1990s, and 2000s, terrorist attacks in Egypt became numerous
and severe, and began to target Christian Copts, foreign tourists and government officials. In the 1990s an Islamist
group, Al-Gama'a al-Islamiyya, engaged in an extended campaign of violence, from the murders and attempted murders
of prominent writers and intellectuals, to the repeated targeting of tourists and foreigners. Serious damage was
done to the largest sector of Egypt's economy—tourism—and in turn to the government, but it also devastated the livelihoods
of many of the people on whom the group depended for support. On 18 January 2014, the interim government successfully
institutionalised a more secular constitution. The president is elected to a four-year term and may serve two terms.
The parliament may impeach the president. Under the constitution, there is a guarantee of gender equality and absolute
freedom of thought. The military retains the ability to appoint the national Minister of Defence for the next 8 years.
Under the constitution, political parties may not be based on "religion, race, gender or geography". The Pew Forum
on Religion & Public Life ranks Egypt as the fifth worst country in the world for religious freedom. The United States
Commission on International Religious Freedom, a bipartisan independent agency of the US government, has placed Egypt
on its watch list of countries that require close monitoring due to the nature and extent of violations of religious
freedom engaged in or tolerated by the government. According to a 2010 Pew Global Attitudes survey, 84% of Egyptians
polled supported the death penalty for those who leave Islam; 77% supported whippings and cutting off of hands for
theft and robbery; and 82% supported stoning a person who commits adultery. As a result of modernisation efforts over
the years, Egypt's healthcare system has made great strides forward. Access to healthcare in both urban and rural
areas greatly improved and immunisation programs are now able to cover 98% of the population. Life expectancy increased
from 44.8 years during the 1960s to 72.12 years in 2009. There was a noticeable decline of the infant mortality rate
(during the 1970s to the 1980s the infant mortality rate was 101-132/1000 live births, in 2000 the rate was 50-60/1000,
and in 2008 it was 28-30/1000). Cairo University is ranked as 401-500 according to the Academic Ranking of World
Universities (Shanghai Ranking) and 551-600 according to QS World University Rankings. American University in Cairo
is ranked as 360 according to QS World University Rankings and Al-Azhar University, Alexandria University and Ain
Shams University fall in the 701+ range. Egypt is currently opening new research institutes with the aim of modernising
research in the nation, the most recent example of which is Zewail City of Science and Technology. Coptic Christians
face discrimination at multiple levels of the government, ranging from disproportionate representation in government
ministries to laws that limit their ability to build or repair churches. Intolerance of Bahá'ís and non-orthodox
Muslim sects, such as Sufis, Shi'a and Ahmadis, also remains a problem. When the government moved to computerise
identification cards, members of religious minorities, such as Bahá'ís, could not obtain identification documents.
An Egyptian court ruled in early 2008 that members of other faiths may obtain identity cards without listing their
faiths, and without becoming officially recognised. Egypt actively practices capital punishment. Egypt's authorities
do not release figures on death sentences and executions, despite repeated requests over the years by human rights
organisations. The United Nations human rights office and various NGOs expressed "deep alarm" after an Egyptian Minya
Criminal Court sentenced 529 people to death in a single hearing on 25 March 2014. Sentenced supporters of former
President Mohamed Morsi were to be executed for their alleged role in violence following his ousting in July 2013. The
judgment was condemned as a violation of international law. By May 2014, approximately 16,000 people (more than 40,000
by one independent count), mostly Brotherhood members or supporters, had been imprisoned since the coup, after the
Muslim Brotherhood was labelled a terrorist organisation by the post-coup interim Egyptian government.
After Morsi was ousted by the military, the judiciary aligned itself with the new government, actively supporting
the repression of Muslim Brotherhood members. This resulted in a sharp increase in mass death sentences that drew
criticism from US president Barack Obama and the Secretary-General of the UN, Ban Ki-moon. In April 2014, one
judge of the Minya governorate of Upper Egypt sentenced 1,212 people to death. In December 2014 the judge Mohammed
Nagi Shahata, notorious for his fierceness in passing death sentences, condemned to death 188 members
of the Muslim Brotherhood for assaulting a police station. Various Egyptian and international human rights organisations
have pointed out the lack of fair trials, which often last only a few minutes and disregard
procedural standards. The United States provides Egypt with annual military assistance, which
in 2015 amounted to US$1.3 billion. In 1989, Egypt was designated as a major non-NATO ally of the United States.
Nevertheless, ties between the two countries have partially soured since the July 2013 military coup that deposed
Islamist president Mohamed Morsi, with the Obama administration condemning Egypt's violent crackdown on the Muslim
Brotherhood and its supporters, and cancelling future military exercises involving the two countries. There have
been recent attempts, however, to normalise relations between the two, with both governments frequently calling for
mutual support in the fight against regional and international terrorism. The Egyptian military has dozens of factories
manufacturing weapons as well as consumer goods. The Armed Forces' inventory includes equipment from different countries
around the world. Equipment from the former Soviet Union is being progressively replaced by more modern US, French,
and British equipment, a significant portion of which is built under license in Egypt, such as the M1 Abrams
tank. Relations with Russia have improved significantly following Mohamed Morsi's removal and both countries have
worked since then to strengthen military and trade ties among other aspects of bilateral co-operation. Relations
with China have also improved considerably. In 2014, Egypt and China established a bilateral "comprehensive
strategic partnership". The permanent headquarters of the Arab League are located in Cairo and the body's secretary
general has traditionally been Egyptian. This position is currently held by former foreign minister Nabil el-Araby.
The Arab League briefly moved from Egypt to Tunis in 1978 to protest the Egypt-Israel Peace Treaty, but it later
returned to Cairo in 1989. Gulf monarchies, including the United Arab Emirates and Saudi Arabia, have pledged billions
of dollars to help Egypt overcome its economic difficulties since the July 2013 coup. Following the 1973 war and
the subsequent peace treaty, Egypt became the first Arab nation to establish diplomatic relations with Israel. Despite
that, Israel is still widely considered a hostile state by the majority of Egyptians. Egypt has played a historical
role as a mediator in resolving various disputes in the Middle East, most notably its handling of the Israeli-Palestinian
conflict and the peace process. Egypt's ceasefire and truce brokering efforts in Gaza have hardly been challenged
following Israel's evacuation of its settlements from the strip in 2005, despite increasing animosity towards the
Hamas government in Gaza following the ouster of Mohamed Morsi, and despite recent attempts by countries like Turkey
and Qatar to take over this role. Egypt's economy depends mainly on agriculture, media, petroleum imports, natural
gas, and tourism; there are also more than three million Egyptians working abroad, mainly in Saudi Arabia, the Persian
Gulf and Europe. The completion of the Aswan High Dam in 1970 and the resultant Lake Nasser have altered the time-honored
place of the Nile River in the agriculture and ecology of Egypt. A rapidly growing population, limited arable land,
and dependence on the Nile all continue to overtax resources and stress the economy. Egypt has a developed energy
market based on coal, oil, natural gas, and hydro power. Substantial coal deposits in the northeast Sinai are mined
at the rate of about 600,000 tonnes (590,000 long tons; 660,000 short tons) per year. Oil and gas are produced in
the western desert regions, the Gulf of Suez, and the Nile Delta. Egypt has huge reserves of gas, estimated at 2,180
cubic kilometres (520 cu mi), and until 2012 it exported LNG to many countries. In 2013, the Egyptian General Petroleum
Co (EGPC) said the country would cut exports of natural gas and tell major industries to slow output that summer to
avoid an energy crisis and stave off political unrest, Reuters reported. Egypt counted on top liquefied natural
gas (LNG) exporter Qatar to obtain additional gas volumes in summer, while encouraging factories to plan their annual
maintenance for those months of peak demand, said EGPC chairman Tarek El Barkatawy. Egypt produces its own energy,
but has been a net oil importer since 2008 and is rapidly becoming a net importer of natural gas. Economic conditions
have started to improve considerably, after a period of stagnation, due to the adoption of more liberal economic
policies by the government as well as increased revenues from tourism and a booming stock market. In its annual report,
the International Monetary Fund (IMF) has rated Egypt as one of the top countries in the world undertaking economic
reforms. Some major economic reforms undertaken by the government since 2003 include a dramatic slashing of customs
and tariffs. A new taxation law implemented in 2005 decreased corporate taxes from 40% to the current 20%, resulting
in a stated 100% increase in tax revenue by the year 2006. Foreign direct investment (FDI) in Egypt increased considerably
before the removal of Hosni Mubarak, exceeding $6 billion in 2006, due to economic liberalisation and privatisation
measures taken by minister of investment Mahmoud Mohieddin.[citation needed] Since the fall of Hosni Mubarak in 2011,
Egypt has experienced a drastic fall in both foreign investment and tourism revenues, followed by a 60% drop in foreign
exchange reserves, a 3% drop in growth, and a rapid devaluation of the Egyptian pound. One of the main obstacles
still facing the Egyptian economy is the limited trickle-down of wealth to the average population; many Egyptians
criticise their government for higher prices of basic goods while their standard of living or purchasing power remains
relatively stagnant. Corruption is often cited by Egyptians as the main impediment to further economic growth. The
government promised major reconstruction of the country's infrastructure, using money paid for the newly acquired
third mobile license ($3 billion) by Etisalat in 2006. In the Corruption Perceptions Index 2013, Egypt was ranked
114 out of 177. Egypt's most prominent multinational companies are the Orascom Group and Raya Contact Center. The
information technology (IT) sector has expanded rapidly in the past few years, with many start-ups selling outsourcing
services to North America and Europe, operating with companies such as Microsoft, Oracle and other major corporations,
as well as many small and medium size enterprises. Some of these companies are the Xceed Contact Center, Raya, E
Group Connections and C3. The IT sector has been stimulated by new Egyptian entrepreneurs with government encouragement.[citation
needed] Egypt has a wide range of beaches situated on the Mediterranean and the Red Sea that extend to over 3,000
km. The Red Sea has serene waters, coloured coral reefs, rare fish and beautiful mountains. The Aqaba Gulf beaches
also provide facilities for practising sea sports. Safaga tops the Red Sea zone with its beautiful location on the
Gulf of Suez. Last but not least, Sharm el-Sheikh (or City of Peace), Hurghada, Luxor (known as the world's greatest
open-air museum, said to hold a third of the world's monuments), Dahab, Ras Sidr, Marsa Alam, Safaga and the northern
coast of the Mediterranean are major destinations for recreational tourism. Egypt was producing 691,000 bbl/d of oil and 2,141.05
Tcf of natural gas (in 2013), making Egypt the largest oil producer that is not a member of the Organization of the
Petroleum Exporting Countries (OPEC) and the second-largest dry natural gas producer in Africa. In 2013, Egypt was
the largest consumer of oil and natural gas in Africa, accounting for more than 20% of total oil consumption and more
than 40% of total dry natural gas consumption on the continent. Egypt also possesses the largest oil refining capacity
in Africa, at 726,000 bbl/d (in 2012). Egypt is currently planning to build its first nuclear power plant in El Dabaa city, northern
Egypt. The Suez Canal is an artificial sea-level waterway in Egypt considered the most important centre of the maritime
transport in the Middle East, connecting the Mediterranean Sea and the Red Sea. Opened in November 1869 after 10
years of construction work, it allows ship transport between Europe and Asia without navigation around Africa. The
northern terminus is Port Said and the southern terminus is Port Tawfiq at the city of Suez. Ismailia lies on its
west bank, 3 km (1.9 mi) from the half-way point. The canal is 193.30 km (120.11 mi) long, 24 m (79 ft) deep and
205 metres (673 ft) wide as of 2010. It consists of the northern access channel of 22 km (14 mi), the canal itself
of 162.25 km (100.82 mi) and the southern access channel of 9 km (5.6 mi). The canal is a single lane with passing
places in the "Ballah By-Pass" and the Great Bitter Lake. It contains no locks; seawater flows freely through the
canal. In general, the canal north of the Bitter Lakes flows north in winter and south in summer. The current south
of the lakes changes with the tide at Suez. Drinking water supply and sanitation in Egypt is characterised by both
achievements and challenges. Among the achievements are an increase of piped water supply between 1990 and 2010 from
89% to 100% in urban areas and from 39% to 93% in rural areas despite rapid population growth, the elimination of
open defecation in rural areas during the same period, and in general a relatively high level of investment in infrastructure.
Access to an improved water source in Egypt is now practically universal with a rate of 99%. About one half of the
population is connected to sanitary sewers. Partly because of low sanitation coverage about 17,000 children die each
year because of diarrhoea. Another challenge is low cost recovery due to water tariffs that are among the lowest
in the world. This in turn requires government subsidies even for operating costs, a situation that has been aggravated
by salary increases without tariff increases after the Arab Spring. Poor operation of facilities, such as water and
wastewater treatment plants, as well as limited government accountability and transparency, are also issues. Ethnic
Egyptians are by far the largest ethnic group in the country, constituting 91% of the total population. Ethnic minorities
include the Abazas, Turks, Greeks, Bedouin Arab tribes living in the eastern deserts and the Sinai Peninsula, the
Berber-speaking Siwis (Amazigh) of the Siwa Oasis, and the Nubian communities clustered along the Nile. There are
also tribal Beja communities concentrated in the south-eastern-most corner of the country, and a number of Dom clans
mostly in the Nile Delta and Faiyum who are progressively becoming assimilated as urbanisation increases. Egypt also
hosts an unknown number of refugees and asylum seekers, estimated to be between 500,000 and 3 million. There are
some 70,000 Palestinian refugees, and about 150,000 recently arrived Iraqi refugees, but the number of the largest
group, the Sudanese, is contested.[nb 1] The once-vibrant and ancient Greek and Jewish communities in Egypt have
almost disappeared, with only a small number remaining in the country, but many Egyptian Jews visit on religious
occasions and for tourism. Several important Jewish archaeological and historical sites are found in Cairo,
Alexandria and other cities. The official language of the Republic is Modern Standard Arabic. Arabic was adopted
by the Egyptians after the Arab invasion of Egypt. The spoken languages are: Egyptian Arabic (68%), Sa'idi Arabic
(29%), Eastern Egyptian Bedawi Arabic (1.6%), Sudanese Arabic (0.6%), Domari (0.3%), Nobiin (0.3%), Beja (0.1%),
Siwi and others. Additionally, Greek, Armenian and Italian are the main languages of immigrants. In Alexandria in
the 19th century there was a large community of Italian Egyptians and Italian was the "lingua franca" of the city.
Although Egypt was a majority Christian country before the 7th century, after Islam arrived the country was gradually
Islamised into a majority Muslim country. Egypt emerged as a centre of politics and culture in the Muslim world.
Under Anwar Sadat, Islam became the official state religion and Sharia the main source of law. It is estimated that
15 million Egyptians follow native Sufi orders, with the Sufi leadership asserting that the numbers are much greater
as many Egyptian Sufis are not officially registered with a Sufi order. Of the Christian minority in Egypt over 90%
belong to the native Coptic Orthodox Church of Alexandria, an Oriental Orthodox Christian Church. Other native Egyptian
Christians are adherents of the Coptic Catholic Church, the Evangelical Church of Egypt and various other Protestant
denominations. Non-native Christian communities are largely found in the urban regions of Cairo and Alexandria, such
as the Syro-Lebanese, who belong to Greek Catholic, Greek Orthodox, and Maronite Catholic denominations. Egypt recognises
only three religions: Islam, Christianity, and Judaism. Other faiths and minority Muslim sects practised by Egyptians,
such as the small Bahá'í and Ahmadi communities, are not recognised by the state and face persecution, since they are
labelled as far right groups that threaten Egypt's national security. Individuals, particularly Baha'is and atheists,
wishing to include their religion (or lack thereof) on their mandatory state issued identification cards are denied
this ability (see Egyptian identification card controversy), and are put in the position of either not obtaining
required identification or lying about their faith. A 2008 court ruling allowed members of unrecognised faiths to
obtain identification and leave the religion field blank. The Egyptians were one of the first major civilisations
to codify design elements in art and architecture. Egyptian blue, also known as calcium copper silicate is a pigment
used by Egyptians for thousands of years. It is considered to be the first synthetic pigment. The wall paintings
done in the service of the Pharaohs followed a rigid code of visual rules and meanings. Egyptian civilisation is
renowned for its colossal pyramids, temples and monumental tombs. Well-known examples are the Pyramid of Djoser designed
by ancient architect and engineer Imhotep, the Sphinx, and the temple of Abu Simbel. Modern and contemporary Egyptian
art can be as diverse as any works in the world art scene, from the vernacular architecture of Hassan Fathy and Ramses
Wissa Wassef, to Mahmoud Mokhtar's sculptures, to the distinctive Coptic iconography of Isaac Fanous. The Cairo Opera
House serves as the main performing arts venue in the Egyptian capital. Egyptian literature traces its beginnings
to ancient Egypt and is some of the earliest known literature. Indeed, the Egyptians were the first culture to develop
literature as we know it today, that is, the book. It is an important cultural element in the life of Egypt. Egyptian
novelists and poets were among the first to experiment with modern styles of Arabic literature, and the forms they
developed have been widely imitated throughout the Middle East. The first modern Egyptian novel Zaynab by Muhammad
Husayn Haykal was published in 1913 in the Egyptian vernacular. Egyptian novelist Naguib Mahfouz was the first Arabic-language
writer to win the Nobel Prize in Literature. Egyptian women writers include Nawal El Saadawi, well known for her
feminist activism, and Alifa Rifaat who also writes about women and tradition. Egyptian cinema became a regional
force with the coming of sound. In 1936, Studio Misr, financed by industrialist Talaat Harb, emerged as the leading
Egyptian studio, a role the company retained for three decades. For over 100 years, more than 4000 films have been
produced in Egypt, three quarters of the total Arab production.[citation needed] Egypt is considered the leading
country in the field of cinema in the Middle East. Actors from all over the Arab World seek to appear in the Egyptian
cinema for the sake of fame. The Cairo International Film Festival has been rated as one of 11 festivals with a top
class rating worldwide by the International Federation of Film Producers' Associations. Egyptian music is a rich
mixture of indigenous, Mediterranean, African and Western elements. It has been an integral part of Egyptian culture
since antiquity. The ancient Egyptians credited one of their gods, Hathor, with the invention of music, which Osiris
in turn used as part of his effort to civilise the world. Egyptians have used musical instruments ever since. Contemporary
Egyptian music traces its beginnings to the creative work of people such as Abdu El Hamouli, Almaz and Mahmoud Osman,
who influenced the later work of Sayed Darwish, Umm Kulthum, Mohammed Abdel Wahab and Abdel Halim Hafez whose age
is considered the golden age of music in Egypt and the whole Middle East and North-Africa. Prominent contemporary
Egyptian pop singers include Amr Diab and Mohamed Mounir. Egypt has one of the oldest civilisations in the world.
It has been in contact with many other civilisations and nations and has passed through many eras, from the
prehistoric age to the modern age, including the Pharaonic, Greek, Roman and Islamic periods. Because of this wide
variety of eras, the continuous contact with other nations and the many conflicts Egypt has been through, at least
60 museums may be found in Egypt, mainly covering these eras and conflicts. Some consider koshari (a mixture of rice,
lentils, and macaroni) to be the national dish. Fried
onions can also be added to koshari. In addition, ful medames (mashed fava beans) is one of the most popular dishes.
Fava bean is also used in making falafel (also known as "ta'meyya"), which may have originated in Egypt and spread
to other parts of the Middle East. Garlic fried with coriander is added to mulukhiyya, a popular green soup made
from finely chopped jute leaves, sometimes with chicken or rabbit. Football is the most popular national sport of
Egypt. The Cairo Derby is one of the fiercest derbies in Africa, and the BBC picked it as one of the 7 toughest derbies
in the world. Al Ahly is the most successful club of the 20th century in the African continent according to CAF,
closely followed by their rivals Zamalek SC. Al Ahly was named in 2000 by the Confederation of African Football as
the "African Club of the Century". With twenty titles, Al Ahly is currently the world's most successful club in terms
of international trophies, surpassing Italy's A.C. Milan and Argentina's Boca Juniors, both having eighteen. Egypt
has hosted several international competitions; the most recent was the 2009 FIFA U-20 World Cup, held from
24 September to 16 October 2009. On 19 September 2014, Guinness World Records announced that
Egyptian scuba diver Ahmed Gabr had become the new title holder for the deepest salt-water scuba dive, at 332.35 metres,
a depth of more than 1,000 feet. The 14-hour feat took Gabr into the abyss near the Egyptian town of Dahab in the Red Sea,
where he works as a diving instructor.
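The metric record above can be cross-checked against the imperial figure with a one-line unit conversion (a minimal sketch; the foot is defined as exactly 0.3048 metres):

```python
# Convert the record dive depth from metres to feet.
METRES_PER_FOOT = 0.3048  # exact by international definition

depth_m = 332.35
depth_ft = depth_m / METRES_PER_FOOT

# The metric record corresponds to roughly 1,090 feet,
# consistent with the "more than 1,000 feet" figure above.
print(f"{depth_m} m = {depth_ft:.0f} ft")
```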
Mosaic has a long history, starting in Mesopotamia in the 3rd millennium BC. Pebble mosaics were made in Tiryns in Mycenean
Greece; mosaics with patterns and pictures became widespread in classical times, both in Ancient Greece and Ancient
Rome. Early Christian basilicas from the 4th century onwards were decorated with wall and ceiling mosaics. Mosaic
art flourished in the Byzantine Empire from the 6th to the 15th centuries; that tradition was adopted by the Norman
kingdom in Sicily in the 12th century, by eastern-influenced Venice, and among the Rus in Ukraine. Mosaic fell out
of fashion in the Renaissance, though artists like Raphael continued to practise the old technique. Roman and Byzantine
influence led Jews to decorate 5th and 6th century synagogues in the Middle East with floor mosaics. Bronze age pebble
mosaics have been found at Tiryns; mosaics of the 4th century BC are found in the Macedonian palace-city of Aegae,
and the 4th-century BC mosaic of The Beauty of Durrës discovered in Durrës, Albania in 1916, is an early figural
example; the Greek figural style was mostly formed in the 3rd century BC. Mythological subjects, or scenes of hunting
or other pursuits of the wealthy, were popular as the centrepieces of a larger geometric design, with strongly emphasized
borders. Pliny the Elder mentions the artist Sosus of Pergamon by name, describing his mosaics of the food left on
a floor after a feast and of a group of doves drinking from a bowl. Both of these themes were widely copied. Greek
figural mosaics could have been copied or adapted paintings, a far more prestigious artform, and the style was enthusiastically
adopted by the Romans so that large floor mosaics enriched the floors of Hellenistic villas and Roman dwellings from
Britain to Dura-Europos. Most recorded names of Roman mosaic workers are Greek, suggesting they dominated high quality
work across the empire; no doubt most ordinary craftsmen were slaves. Splendid mosaic floors are found in Roman villas
across North Africa, in places such as Carthage, and can still be seen in the extensive collection in Bardo Museum
in Tunis, Tunisia. There were two main techniques in Greco-Roman mosaic: opus vermiculatum used tiny tesserae, typically
cubes of 4 millimeters or less, and was produced in workshops in relatively small panels which were transported to
the site glued to some temporary support. The tiny tesserae allowed very fine detail, and an approach to the illusionism
of painting. Often small panels called emblemata were inserted into walls or as the highlights of larger floor-mosaics
in coarser work. The normal technique was opus tessellatum, using larger tesserae, which was laid on site. There
was a distinct native Italian style using black on a white background, which was no doubt cheaper than fully coloured
work. In Rome, Nero and his architects used mosaics to cover some surfaces of walls and ceilings in the Domus Aurea,
built 64 AD, and wall mosaics are also found at Pompeii and neighbouring sites. However it seems that it was not
until the Christian era that figural wall mosaics became a major form of artistic expression. The Roman church of
Santa Costanza, which served as a mausoleum for one or more of the Imperial family, has both religious mosaic and
decorative secular ceiling mosaics on a round vault, which probably represent the style of contemporary palace decoration.
The mosaics of the Villa Romana del Casale near Piazza Armerina in Sicily are the largest collection of late Roman
mosaics in situ in the world, and are protected as a UNESCO World Heritage Site. The large villa rustica, which was
probably owned by Emperor Maximian, was built largely in the early 4th century. The mosaics were covered and protected
for 700 years by a landslide that occurred in the 12th Century. The most important pieces are the Circus Scene, the
64m long Great Hunting Scene, the Little Hunt, the Labours of Hercules and the famous Bikini Girls, showing women
undertaking a range of sporting activities in garments that resemble 20th Century bikinis. The peristyle, the imperial
apartments and the thermae were also decorated with ornamental and mythological mosaics. Other important examples
of Roman mosaic art in Sicily were unearthed on the Piazza Vittoria in Palermo where two houses were discovered.
The most important scenes there depicted Orpheus, Alexander the Great's Hunt and the Four Seasons. In 1913 the Zliten
mosaic, a Roman mosaic famous for its many scenes from gladiatorial contests, hunting and everyday life, was discovered
in the Libyan town of Zliten. In 2000 archaeologists working in Leptis Magna, Libya, uncovered a 30 ft length of
five colorful mosaics created during the 1st or 2nd century AD. The mosaics show a warrior in combat with a deer,
four young men wrestling a wild bull to the ground, and a gladiator resting in a state of fatigue, staring at his
slain opponent. The mosaics decorated the walls of a cold plunge pool in a bath house within a Roman villa. The gladiator
mosaic is noted by scholars as one of the finest examples of mosaic art ever seen — a "masterpiece comparable in
quality with the Alexander Mosaic in Pompeii." With the building of Christian basilicas in the late 4th century,
wall and ceiling mosaics were adopted for Christian uses. The earliest examples of Christian basilicas have not survived,
but the mosaics of Santa Costanza and Santa Pudenziana, both from the 4th century, still exist. The winemaking putti
in the ambulatory of Santa Costanza still follow the classical tradition in that they represent the feast of Bacchus,
which symbolizes transformation or change, and are thus appropriate for a mausoleum, the original function of this
building. In another great Constantinian basilica, the Church of the Nativity in Bethlehem the original mosaic floor
with typical Roman geometric motifs is partially preserved. The so-called Tomb of the Julii, near the crypt beneath
St Peter's Basilica, is a 4th-century vaulted tomb with wall and ceiling mosaics that are given Christian interpretations.
The Rotunda of Galerius in Thessaloniki, converted into a Christian church during the course of the 4th century,
was embellished with very high artistic quality mosaics. Only fragments survive of the original decoration, especially
a band depicting saints with hands raised in prayer, in front of complex architectural fantasies. In the following
century Ravenna, the capital of the Western Roman Empire, became the center of late Roman mosaic art (see details
in Ravenna section). Milan also served as the capital of the western empire in the 4th century. In the St Aquilinus
Chapel of the Basilica of San Lorenzo, mosaics executed in the late 4th and early 5th centuries depict Christ with
the Apostles and the Abduction of Elijah; these mosaics are outstanding for their bright colors, naturalism and adherence
to the classical canons of order and proportion. The surviving apse mosaic of the Basilica of Sant'Ambrogio, which
shows Christ enthroned between Saint Gervasius and Saint Protasius and angels before a golden background, dates back
to the 5th and 8th centuries, although it was restored many times later. The baptistery of the basilica, which
was demolished in the 15th century, had a vault covered with gold-leaf tesserae, large quantities of which were found
when the site was excavated. In the small shrine of San Vittore in ciel d'oro, now a chapel of Sant'Ambrogio, every
surface is covered with mosaics from the second half of the 5th century. Saint Victor is depicted in the center of
the golden dome, while figures of saints are shown on the walls before a blue background. The low spandrels give
space for the symbols of the four Evangelists. In the 5th century Ravenna, the capital of the Western Roman Empire,
became the center of late Roman mosaic art. The Mausoleum of Galla Placidia was decorated with mosaics of high artistic
quality in 425–430. The vaults of the small, cross-shaped structure are clad with mosaics on blue background. The
central motif above the crossing is a golden cross in the middle of the starry sky. Another great building established
by Galla Placidia was the church of San Giovanni Evangelista. She erected it in fulfillment of a vow that she made
having escaped from a deadly storm in 425 on the sea voyage from Constantinople to Ravenna. The mosaics depicted
the storm, portraits of members of the western and eastern imperial family and the bishop of Ravenna, Peter Chrysologus.
They are known only from Renaissance sources because almost all were destroyed in 1747. After 539 Ravenna was reconquered
by the Romans in the form of the Eastern Roman Empire (Byzantine Empire) and became the seat of the Exarchate of
Ravenna. The greatest development of Christian mosaics unfolded in the second half of the 6th century. Outstanding
examples of Byzantine mosaic art are the later phase mosaics in the Basilica of San Vitale and Basilica of Sant'Apollinare
Nuovo. The mosaics depicting Emperor Saint Justinian I and Empress Theodora in the Basilica of San Vitale were executed
shortly after the Byzantine conquest. The mosaics of the Basilica of Sant'Apollinare in Classe were made around 549.
The anti-Arian theme is obvious in the apse mosaic of San Michele in Affricisco, executed in 545–547 (largely destroyed;
the remains in Berlin). The mosaic pavement of the Vrina Plain basilica of Butrint, Albania appears to pre-date that
of the Baptistery by almost a generation, dating to the last quarter of the 5th or the first years of the 6th century.
The mosaic displays a variety of motifs including sea-creatures, birds, terrestrial beasts, fruits, flowers, trees
and abstracts – designed to depict a terrestrial paradise of God’s creation. Superimposed on this scheme are two
large tablets, tabulae ansatae, carrying inscriptions. A variety of fish, a crab, a lobster, shrimps, mushrooms,
flowers, a stag and two cruciform designs surround the smaller of the two inscriptions, which reads: In fulfilment
of the vow (prayer) of those whose names God knows. This anonymous dedicatory inscription is a public demonstration
of the benefactors’ humility and an acknowledgement of God’s omniscience. The abundant variety of natural life depicted
in the Butrint mosaics celebrates the richness of God’s creation; some elements also have specific connotations.
The kantharos vase and vine refer to the eucharist, the symbol of the sacrifice of Christ leading to salvation. Peacocks
are symbols of paradise and resurrection; shown eating or drinking from the vase they indicate the route to eternal
life. Deer or stags were commonly used as images of the faithful aspiring to Christ: "As the hart desireth the water
brooks, so my soul longs for thee, O God." Water-birds and fish and other sea-creatures can indicate baptism as well
as the members of the Church who are christened. Christian mosaic art also flourished in Rome, gradually declining
as conditions became more difficult in the Early Middle Ages. 5th century mosaics can be found over the triumphal
arch and in the nave of the basilica of Santa Maria Maggiore. The 27 surviving panels of the nave are the most important
mosaic cycle in Rome of this period. Two other important 5th century mosaics are lost but we know them from 17th-century
drawings. In the apse mosaic of Sant'Agata dei Goti (462–472, destroyed in 1589) Christ was seated on a globe with
the twelve Apostles flanking him, six on either side. At Sant'Andrea in Catabarbara (468–483, destroyed in 1686)
Christ appeared in the center, flanked on either side by three Apostles. Four streams flowed from the little mountain
supporting Christ. The original 5th-century apse mosaic of the Santa Sabina was replaced by a very similar fresco
by Taddeo Zuccari in 1559. The composition probably remained unchanged: Christ flanked by male and female saints,
seated on a hill while lambs drink from a stream at its feet. All three mosaics had a similar iconography. In
the 7th–9th centuries Rome fell under the influence of Byzantine art, noticeable on the mosaics of Santa Prassede,
Santa Maria in Domnica, Sant'Agnese fuori le Mura, Santa Cecilia in Trastevere, Santi Nereo e Achilleo and the San
Venanzio chapel of San Giovanni in Laterano. The great dining hall of Pope Leo III in the Lateran Palace was also
decorated with mosaics. They were all destroyed later except for one example, the so-called Triclinio Leoniano of
which a copy was made in the 18th century. Another great work of Pope Leo, the apse mosaic of Santa Susanna, depicted
Christ with the Pope and Charlemagne on one side, and SS. Susanna and Felicity on the other. It was plastered over
during a renovation in 1585. Pope Paschal I (817–824) embellished the church of Santo Stefano del Cacco with an apsidal
mosaic which depicted the pope with a model of the church (destroyed in 1607). Important fragments survived from
the mosaic floor of the Great Palace of Constantinople which was commissioned during Justinian's reign. The figures,
animals and plants are all entirely classical, but they are scattered against a plain background. The portrait of a moustached
man, probably a Gothic chieftain, is considered the most important surviving mosaic of the Justinianian age. The
so-called small sekreton of the palace was built during Justin II's reign around 565–577. Some fragments survive
from the mosaics of this vaulted room. The vine scroll motifs are very similar to those in the Santa Costanza and
they still closely follow the Classical tradition. There are remains of floral decoration in the Church of the Acheiropoietos
in Thessaloniki (5th–6th centuries). Very few early Byzantine mosaics survived the Iconoclastic destruction of the
8th century. Among the rare examples are the 6th-century Christ in majesty (or Ezekiel's Vision) mosaic in the apse
of the Church of Hosios David in Thessaloniki that was hidden behind mortar during those dangerous times. Nine mosaic
panels in the Hagios Demetrios Church, which were made between 634 and 730, also escaped destruction. Unusually almost
all represent Saint Demetrius of Thessaloniki, often with suppliants before him. In the Iconoclastic era, figural
mosaics were also condemned as idolatry. The Iconoclastic churches were embellished with plain gold mosaics with
only one great cross in the apse like the Hagia Irene in Constantinople (after 740). There were similar crosses in
the apses of the Hagia Sophia Church in Thessaloniki and in the Church of the Dormition in Nicaea. The crosses were
replaced with the image of the Theotokos in both churches after the victory of the Iconodules (in 787–797 and in the
8th–9th centuries respectively; the Dormition church was totally destroyed in 1922). The Nea Moni Monastery on Chios
was established by Constantine Monomachos in 1043–1056. The exceptional mosaic decoration of the dome showing probably
the nine orders of the angels was destroyed in 1822 but other panels survived (Theotokos with raised hands, four
evangelists with seraphim, scenes from Christ's life and an interesting Anastasis where King Solomon bears resemblance
to Constantine Monomachos). In comparison with Hosios Loukas, the Nea Moni mosaics contain more figures, detail, landscape
and setting. Another great undertaking by Constantine Monomachos was the restoration of the Church of the Holy Sepulchre
in Jerusalem between 1042 and 1048. Nothing survived of the mosaics which covered the walls and the dome of the edifice
but the Russian abbot Daniel, who visited Jerusalem in 1106–1107, left a description: "Lively mosaics of the holy
prophets are under the ceiling, over the tribune. The altar is surmounted by a mosaic image of Christ. In the main
altar one can see the mosaic of the Exaltation of Adam. In the apse the Ascension of Christ. The Annunciation occupies
the two pillars next to the altar." The 9th- and 10th-century mosaics of the Hagia Sophia in Constantinople are truly
classical Byzantine artworks. The north and south tympana beneath the dome were decorated with figures of prophets,
saints and patriarchs. Above the principal door from the narthex we can see an Emperor kneeling before Christ (late
9th or early 10th century). Above the door from the southwest vestibule to the narthex another mosaic shows the Theotokos
with Justinian and Constantine. Justinian I is offering the model of the church to Mary while Constantine is holding
a model of the city in his hand. Both emperors are beardless; this is an example of conscious archaization, as contemporary
Byzantine rulers were bearded. A mosaic panel on the gallery shows Christ with Constantine Monomachos and Empress
Zoe (1042–1055). The emperor gives a bulging money sack to Christ as a donation for the church. There are very few
existing mosaics from the Komnenian period but this paucity must be due to accidents of survival and gives a misleading
impression. The only surviving 12th-century mosaic work in Constantinople is a panel in Hagia Sophia depicting Emperor
John II and Empress Eirene with the Theotokos (1122–34). The empress with her long braided hair and rosy cheeks is
especially captivating. It must be a lifelike portrayal, because Eirene really was a redhead, as her original Hungarian
name, Piroska (from piros, "red"), shows. The adjacent portrait of Emperor Alexios I Komnenos on a pier (from 1122) is similarly personal.
The imperial mausoleum of the Komnenos dynasty, the Pantokrator Monastery was certainly decorated with great mosaics
but these were later destroyed. The lack of Komnenian mosaics outside the capital is even more apparent. There is
only a "Communion of the Apostles" in the apse of the cathedral of Serres. A striking technical innovation of the
Komnenian period was the production of very precious, miniature mosaic icons. In these icons the small tesserae (with
sides of 1 mm or less) were set on wax or resin on a wooden panel. These products of extraordinary craftsmanship were
intended for private devotion. The Louvre Transfiguration is a very fine example from the late 12th century. The
miniature mosaic of Christ in the Museo Nazionale at Florence illustrates the more gentle, humanistic conception
of Christ which appeared in the 12th century. The Church of the Holy Apostles in Thessaloniki was built in 1310–14.
Although vandals systematically removed the gold tesserae of the background, it can be seen that the Pantokrator
and the prophets in the dome follow the traditional Byzantine pattern. Many details are similar to the Pammakaristos
mosaics so it is supposed that the same team of mosaicists worked in both buildings. Another building with a related
mosaic decoration is the Theotokos Paregoritissa Church in Arta. The church was established by the Despot of Epirus
in 1294–96. In the dome is the traditional stern Pantokrator, with prophets and cherubim below. The greatest mosaic
work of the Palaeologan renaissance in art is the decoration of the Chora Church in Constantinople. Although the
mosaics of the naos have not survived except for three panels, the decoration of the exonarthex and the esonarthex constitutes
the most important full-scale mosaic cycle in Constantinople after the Hagia Sophia. They were executed around 1320
by the command of Theodore Metochites. The esonarthex has two fluted domes, specially created to provide the ideal
setting for the mosaic images of the ancestors of Christ. The southern one is called the Dome of the Pantokrator
while the northern one is the Dome of the Theotokos. The most important panel of the esonarthex depicts Theodore
Metochites wearing a huge turban, offering the model of the church to Christ. The walls of both narthexes are decorated
with mosaic cycles from the life of the Virgin and the life of Christ. These panels show the influence of the Italian
trecento on Byzantine art, especially in the more natural settings, landscapes and figures. The last great period of Roman
mosaic art was the 12th–13th century when Rome developed its own distinctive artistic style, free from the strict
rules of eastern tradition and with a more realistic portrayal of figures in the space. Well-known works of this
period are the floral mosaics of the Basilica di San Clemente, the façade of Santa Maria in Trastevere and San Paolo
fuori le Mura. The beautiful apse mosaic of Santa Maria in Trastevere (1140) depicts Christ and Mary sitting next
to each other on the heavenly throne, the first example of this iconographic scheme. A similar mosaic, the Coronation
of the Virgin, decorates the apse of Santa Maria Maggiore. It is a work of Jacopo Torriti from 1295. The mosaics
of Torriti and Jacopo da Camerino in the apse of San Giovanni in Laterano from 1288–94 were thoroughly restored in
1884. The apse mosaic of San Crisogono is attributed to Pietro Cavallini, the greatest Roman painter of the 13th
century. Six scenes from the life of Mary in Santa Maria in Trastevere were also executed by Cavallini in 1290. These
mosaics are praised for their realistic portrayal and attempts at perspective. There is an interesting mosaic medallion
from 1210 above the gate of the church of San Tommaso in Formis showing Christ enthroned between a white and a black
slave. The church belonged to the Order of the Trinitarians which was devoted to ransoming Christian slaves. The
great Navicella mosaic (1305–1313) in the atrium of the Old St. Peter's is attributed to Giotto di Bondone. The giant
mosaic, commissioned by Cardinal Jacopo Stefaneschi, was originally situated on the eastern porch of the old basilica
and occupied the whole wall above the entrance arcade facing the courtyard. It depicted St. Peter walking on the
waters. This extraordinary work was mainly destroyed during the construction of the new St. Peter's in the 17th century.
Navicella means "little ship" referring to the large boat which dominated the scene, and whose sail, filled by the
storm, loomed over the horizon. Such a natural representation of a seascape was known only from ancient works of
art. The heyday of mosaic making in Sicily was the age of the independent Norman kingdom in the 12th century. The
Norman kings adopted the Byzantine tradition of mosaic decoration to enhance the somewhat dubious legality of their
rule. Greek masters working in Sicily developed their own style, which shows the influence of Western European and
Islamic artistic tendencies. Best examples of Sicilian mosaic art are the Cappella Palatina of Roger II, the Martorana
church in Palermo and the cathedrals of Cefalù and Monreale. The Martorana church (decorated around 1143) originally
looked even more Byzantine, although important parts were later demolished. The dome mosaic is similar to that
of the Cappella Palatina, with Christ enthroned in the middle and four bowed, elongated angels. The Greek inscriptions,
decorative patterns, and evangelists in the squinches were obviously executed by the same Greek masters who worked
on the Cappella Palatina. The mosaic depicting Roger II of Sicily, dressed in Byzantine imperial robes and receiving
the crown from Christ, was originally in the demolished narthex together with another panel, the Theotokos with Georgios
of Antiochia, the founder of the church. The Monreale mosaics constitute the largest decoration of this kind in Italy,
covering 0.75 hectares with at least 100 million glass and stone tesserae. This huge work was executed between 1176
and 1186 by the order of King William II of Sicily. The iconography of the mosaics in the presbytery is similar to
Cefalù, while the pictures in the nave are almost the same as the narrative scenes in the Cappella Palatina. The Martorana
mosaic of Roger II blessed by Christ was repeated with the figure of King William II instead of his predecessor.
Another panel shows the king offering the model of the cathedral to the Theotokos. Southern Italy was also part of
the Norman kingdom but great mosaics did not survive in this area except the fine mosaic pavement of the Otranto
Cathedral from 1166, with mosaics tied into a tree of life, mostly still preserved. The scenes depict biblical characters,
warrior kings, medieval beasts, allegories of the months and working activity. Only fragments survived from the original
mosaic decoration of Amalfi's Norman Cathedral. The mosaic ambos in the churches of Ravello prove that mosaic art
was widespread in Southern Italy during the 11th–13th centuries. In parts of Italy, which were under eastern artistic
influences, like Sicily and Venice, mosaic making never went out of fashion in the Middle Ages. The whole interior
of the St Mark's Basilica in Venice is clad with elaborate, golden mosaics. The oldest scenes were executed by Greek
masters in the late 11th century but the majority of the mosaics are works of local artists from the 12th–13th centuries.
The decoration of the church was finished only in the 16th century. One hundred and ten scenes of mosaics in the
atrium of St Mark's were based directly on the miniatures of the Cotton Genesis, a Byzantine manuscript that was
brought to Venice after the sack of Constantinople (1204). The mosaics were executed in the 1220s. Other important
Venetian mosaics can be found in the Cathedral of Santa Maria Assunta in Torcello from the 12th century, and in the
Basilica of Santi Maria e Donato in Murano with a restored apse mosaic from the 12th century and a beautiful mosaic
pavement (1140). The apse of the San Cipriano Church in Murano was decorated with an impressive golden mosaic from
the early 13th century showing Christ enthroned with Mary, St John and the two patron saints, Cipriano and Cipriana.
When the church was demolished in the 19th century, the mosaic was bought by Frederick William IV of Prussia. It
was reassembled in the Friedenskirche of Potsdam in the 1840s. Desiderius, the Abbot of Monte Cassino, sent envoys
to Constantinople some time after 1066 to hire expert Byzantine mosaicists for the decoration of the rebuilt abbey
church. According to chronicler Leo of Ostia the Greek artists decorated the apse, the arch and the vestibule of
the basilica. Their work was admired by contemporaries but was totally destroyed in later centuries except two fragments
depicting greyhounds (now in the Monte Cassino Museum). "The abbot in his wisdom decided that a great number of young
monks in the monastery should be thoroughly initiated in these arts" – says the chronicler about the role of the
Greeks in the revival of mosaic art in medieval Italy. In Italy, not only church interiors but sometimes also façades
were decorated with mosaics, as in the case of St Mark's Basilica in Venice (mainly from the 17th–19th centuries,
but the oldest one from 1270–75, "The burial of St Mark in the first basilica"), the Cathedral of Orvieto (golden
Gothic mosaics from the 14th century, many times redone) and the Basilica di San Frediano in Lucca (huge, striking
golden mosaic representing the Ascension of Christ with the apostles below, designed by Berlinghiero Berlinghieri
in the 13th century). The Cathedral of Spoleto is also decorated on the upper façade with a huge mosaic portraying
the Blessing Christ (signed by one Solsternus from 1207). Only scant remains prove that mosaics were still used in
the Early Middle Ages. The Abbey of Saint-Martial in Limoges, originally an important place of pilgrimage, was totally
demolished during the French Revolution except its crypt which was rediscovered in the 1960s. A mosaic panel was
unearthed which was dated to the 9th century. It somewhat incongruously uses cubes of gilded glass and deep green
marble, probably taken from antique pavements. This could also be the case with the early 9th century mosaic found
under the Basilica of Saint-Quentin in Picardy, where antique motifs are copied but using only simple colors. The
mosaics in the Cathedral of Saint-Jean at Lyon have been dated to the 11th century because they employ the same non-antique
simple colors. More fragments were found on the site of Saint-Croix at Poitiers which might be from the 6th or 9th
century. Later, fresco replaced the more labor-intensive technique of mosaic in Western Europe, although mosaics were
sometimes used as decoration on medieval cathedrals. The Royal Basilica of the Hungarian kings in Székesfehérvár
(Alba Regia) had a mosaic decoration in the apse. It was probably a work of Venetian or Ravennese craftsmen, executed
in the first decades of the 11th century. The mosaic was almost totally destroyed together with the basilica in the
17th century. The Golden Gate of the St. Vitus Cathedral in Prague got its name from the golden 14th-century mosaic
of the Last Judgement above the portal. It was executed by Venetian craftsmen. The Crusaders in the Holy Land also
adopted mosaic decoration under local Byzantine influence. During their 12th-century reconstruction of the Church
of the Holy Sepulchre in Jerusalem they complemented the existing Byzantine mosaics with new ones. Almost nothing
of them survived except the "Ascension of Christ" in the Latin Chapel (now confusingly surrounded by many 20th-century
mosaics). More substantial fragments were preserved from the 12th-century mosaic decoration of the Church of the
Nativity in Bethlehem. The mosaics in the nave are arranged in five horizontal bands with the figures of the ancestors
of Christ, Councils of the Church and angels. In the apses the Annunciation, the Nativity, Adoration of the Magi
and Dormition of the Blessed Virgin can be seen. The program of redecoration of the church was completed in 1169
as a unique collaboration of the Byzantine emperor, the king of Jerusalem and the Latin Church. In 2003, the remains
of a mosaic pavement were discovered under the ruins of the Bizere Monastery near the River Mureş in present-day
Romania. The panels depict real or fantastic animal, floral, solar and geometric representations. Some archeologists
supposed that it was the floor of an Orthodox church, built some time between the 10th and 11th centuries. Other experts
claim that it was part of the later Catholic monastery on the site because it shows the signs of strong Italianate
influence. The monastery was situated at that time in the territory of the Kingdom of Hungary. The mosaics of St. Peter's
often show lively Baroque compositions based on designs or canvases by artists such as Ciro Ferri, Guido Reni, Domenichino,
Carlo Maratta, and many others. Raphael is represented by a mosaic replica of his last painting, the Transfiguration.
Many different artists contributed to the 17th- and 18th-century mosaics in St. Peter's, including Giovanni Battista
Calandra, Fabio Cristofari (died 1689), and Pietro Paolo Cristofari (died 1743). Works of the Fabbrica were often
used as papal gifts. The single most important piece of Byzantine Christian mosaic art in the East is the Madaba
Map, made between 542 and 570 as the floor of the church of Saint George at Madaba, Jordan. It was rediscovered in
1894. The Madaba Map is the oldest surviving cartographic depiction of the Holy Land. It depicts an area from Lebanon
in the north to the Nile Delta in the south, and from the Mediterranean Sea in the west to the Eastern Desert. The
largest and most detailed element of the topographic depiction is Jerusalem, at the center of the map. The map is
enriched with many naturalistic features, like animals, fishing boats, bridges and palm trees. Important Justinian-era
mosaics decorated Saint Catherine's Monastery on Mount Sinai in Egypt. Generally, wall mosaics have not survived
in the region because of the destruction of buildings but the St. Catherine's Monastery is exceptional. On the upper
wall Moses is shown in two panels on a landscape background. In the apse we can see the Transfiguration of Jesus
on a golden background. The apse is surrounded with bands containing medallions of apostles and prophets, and two
contemporary figures, "Abbot Longinos" and "John the Deacon". The mosaic was probably created in 565/6. Jerusalem
with its many holy places probably had the highest concentration of mosaic-covered churches but very few of them
survived the subsequent waves of destruction. The present remains do not do justice to the original richness of
the city. The most important is the so-called "Armenian Mosaic" which was discovered in 1894 on the Street of the
Prophets near Damascus Gate. It depicts a vine with many branches and grape clusters, which springs from a vase.
Populating the vine's branches are peacocks, ducks, storks, pigeons, an eagle, a partridge, and a parrot in a cage.
The inscription reads: "For the memory and salvation of all those Armenians whose name the Lord knows." Beneath a
corner of the mosaic is a small, natural cave which contained human bones dating to the 5th or 6th centuries. The
symbolism of the mosaic and the presence of the burial cave indicates that the room was used as a mortuary chapel.
An exceptionally well preserved, carpet-like mosaic floor was uncovered in 1949 in Bethany, the early Byzantine church
of the Lazarium which was built between 333 and 390. Because of its purely geometrical pattern, the church floor
is to be grouped with other mosaics of the time in Palestine and neighboring areas, especially the Constantinian
mosaics in the central nave at Bethlehem. A second church was built above the older one during the 6th century with
another, simpler geometric mosaic floor. The monastic communities of the Judean Desert also decorated their monasteries
with mosaic floors. The Monastery of Martyrius was founded at the end of the 5th century and was rediscovered
in 1982–85. The most important work of art here is the intact geometric mosaic floor of the refectory although the
severely damaged church floor was similarly rich. The mosaics in the church of the nearby Monastery of Euthymius
are of later date (discovered in 1930). They were laid down in the Umayyad era, after a devastating earthquake in
659. Two six-pointed stars and a red chalice are the most important surviving features. Mosaic art also flourished
in Christian Petra where three Byzantine churches were discovered. The most important one was uncovered in 1990.
It is known that the walls were also covered with golden glass mosaics but, as usual, only the floor panels survived.
The mosaic of the seasons in the southern aisle is from this first building period from the middle of the 5th century.
In the first half of the 6th century the mosaics of the northern aisle and the eastern end of the southern aisle
were installed. They depict native as well as exotic or mythological animals, and personifications of the Seasons,
Ocean, Earth and Wisdom. The mosaics of the Church of St Stephen in ancient Kastron Mefaa (now Umm ar-Rasas) were
made in 785 (discovered after 1986). The perfectly preserved mosaic floor is the largest one in Jordan. On the central
panel hunting and fishing scenes are depicted while another panel illustrates the most important cities of the region.
The frame of the mosaic is especially decorative. Six mosaic masters signed the work: Staurachios from Esbus, Euremios,
Elias, Constantinus, Germanus and Abdela. It overlays another, damaged, mosaic floor of the earlier (587) "Church
of Bishop Sergius." Another four churches were excavated nearby with traces of mosaic decoration. The craft has also
been popular in early medieval Rus, inherited as part of the Byzantine tradition. Yaroslav, the Grand Prince of the
Kievan Rus' built a large cathedral in his capital, Kiev. The model of the church was the Hagia Sophia in Constantinople,
and it was also called Saint Sophia Cathedral. It was built mainly by Byzantine master craftsmen, sent by Constantine
Monomachos, between 1037 and 1046. Naturally the more important surfaces in the interior were decorated with golden
mosaics. In the dome we can see the traditional stern Pantokrator supported by angels. Between the 12 windows of
the drum were apostles, and on the pendentives the four evangelists. The apse is dominated by an orant Theotokos with
a Deesis in three medallions above. Below is a Communion of the Apostles. The apse mosaic of the Gelati Monastery
is a rare example of mosaic use in Georgia. Begun by King David IV and completed by his son Demetrius I of Georgia,
the fragmentary panel depicts Theotokos flanked by two archangels. The use of mosaic in Gelati attests to some Byzantine
influence in the country and was a demonstration of the imperial ambitions of the Bagrationids. The mosaic-covered
church could compete in magnificence with the churches of Constantinople. Gelati is one of the few mosaic creations which
survived in Georgia but fragments prove that the early churches of Pitsunda and Tsromi were also decorated with mosaic
as well as other, lesser-known sites. The destroyed 6th-century mosaic floors in the Pitsunda Cathedral were
inspired by Roman prototypes. In Tsromi the tesserae are still visible on the walls of the 7th-century church but
only faint lines hint at the original scheme. Its central figure was Christ standing and displaying a scroll with
Georgian text. The remains of a 6th-century synagogue have been uncovered in Sepphoris, which was an important centre
of Jewish culture between the 3rd–7th centuries and a multicultural town inhabited by Jews, Christians and pagans.
The mosaic reflects an interesting fusion of Jewish and pagan beliefs. In the center of the floor the zodiac wheel
was depicted. Helios sits in the middle, in his sun chariot, and each zodiac sign is matched with a Jewish month. Along
the sides of the mosaic are strips depicting Biblical scenes, such as the binding of Isaac, as well as traditional
rituals, including a burnt sacrifice and the offering of fruits and grains. The synagogue in Eshtemoa (As-Samu) was
built around the 4th century. The mosaic floor is decorated with only floral and geometric patterns. The synagogue
in Khirbet Susiya (excavated in 1971–72, founded in the end of the 4th century) has three mosaic panels, the eastern
one depicting a Torah shrine, two menorahs, a lulav and an etrog with columns, deer and rams. The central panel is
geometric while the western one is seriously damaged but it has been suggested that it depicted Daniel in the lion’s
den. The Roman synagogue in Ein Gedi was remodeled in the Byzantine era and a more elaborate mosaic floor was laid
down above the older white panels. The usual geometric design was enriched with birds in the center. It includes
the names of the signs of the zodiac and important figures from the Jewish past but not their images, suggesting that
it served a rather conservative community. The ban on figurative depiction was not taken so seriously by the Jews
living in Byzantine Gaza. In 1966 remains of a synagogue were found in the ancient harbour area. Its mosaic floor
depicts King David as Orpheus, identified by his name in Hebrew letters. Near him were lion cubs, a giraffe and a
snake listening to him playing a lyre. A further portion of the floor was divided by medallions formed by vine leaves,
each of which contains an animal: a lioness suckling her cub, a giraffe, peacocks, panthers, bears, a zebra and so
on. The floor was paved in 508/509. It is very similar to that of the synagogue at Maon (Menois) and the Christian
church at Shellal, suggesting that the same artist most probably worked at all three places. A 5th-century building
in Huldah may be a Samaritan synagogue. Its mosaic floor contains typical Jewish symbols (menorah, lulav, etrog)
but the inscriptions are Greek. Another Samaritan synagogue with a mosaic floor was located in Bet She'an (excavated
in 1960). The floor had only decorative motifs and an aedicule (shrine) with cultic symbols. The ban on human or
animal images was more strictly observed by the Samaritans than by their Jewish neighbours in the same town (see above).
The mosaic was laid by the same masters who made the floor of the Beit Alfa synagogue. One of the inscriptions was
written in Samaritan script. Islamic architecture used mosaic technique to decorate religious buildings and palaces
after the Muslim conquests of the eastern provinces of the Byzantine Empire. In Syria and Egypt the Arabs were influenced
by the great tradition of Roman and Early Christian mosaic art. During the Umayyad Dynasty, mosaic making remained
a flourishing art form in Islamic culture, and it continues in the art of zellige and azulejo in various parts
of the Arab world, although tile was to become the main Islamic form of wall decoration. The most important early
Islamic mosaic work is the decoration of the Umayyad Mosque in Damascus, then capital of the Arab Caliphate. The
mosque was built between 706 and 715. The caliph obtained 200 skilled workers from the Byzantine Emperor to decorate
the building. This is evidenced by the partly Byzantine style of the decoration. The mosaics of the inner courtyard
depict Paradise with beautiful trees, flowers and small hill towns and villages in the background. The mosaics include
no human figures, which makes them different from the otherwise similar contemporary Byzantine works. The biggest
continuous section survives under the western arcade of the courtyard, called the "Barada Panel" after the river
Barada. It is thought that the mosque used to have the largest gold mosaic in the world, at over 4,000 m². In 1893 a
fire damaged the mosque extensively, and many mosaics were lost, although some have been restored since. Non-religious
Umayyad mosaic works were mainly floor panels which decorated the palaces of the caliphs and other high-ranking officials.
They were closely modeled after the mosaics of the Roman country villas, once common in the Eastern Mediterranean.
The most superb example can be found in the bath house of Hisham's Palace, Palestine which was made around 744. The
main panel depicts a large tree and underneath it a lion attacking a deer (right side) and two deer peacefully grazing
(left side). The panel probably represents good and bad governance. Mosaics with classical geometric motifs survived
in the bath area of the 8th-century Umayyad palace complex in Anjar, Lebanon. The luxurious desert residence of Al-Walid
II in Qasr al-Hallabat (in present-day Jordan) was also decorated with floor mosaics that show a high level of technical
skill. The best preserved panel at Hallabat is divided by a Tree of Life flanked by "good" animals on one side and
"bad" animals on the other. Among the Hallabat representations are vine scrolls, grapes, pomegranates, oryx, wolves,
hares, a leopard, pairs of partridges, fish, bulls, ostriches, rabbits, rams, goats, lions and a snake. At Qastal,
near Amman, excavations in 2000 uncovered the earliest known Umayyad mosaics in present-day Jordan, dating probably
from the caliphate of Abd al-Malik ibn Marwan (685–705). They cover much of the floor of a finely decorated building
that probably served as the palace of a local governor. The Qastal mosaics depict geometrical patterns, trees, animals,
fruits and rosettes. Except for the open courtyard, entrance and staircases, the floors of the entire palace were
covered in mosaics. Some of the best examples of later Islamic mosaics were produced in Moorish Spain. The golden
mosaics in the mihrab and the central dome of the Great Mosque in Córdoba have a decidedly Byzantine character. They
were made between 965 and 970 by local craftsmen, supervised by a master mosaicist from Constantinople, who was sent
by the Byzantine Emperor to the Umayyad Caliph of Spain. The decoration is composed of colorful floral arabesques
and wide bands of Arabic calligraphy. The mosaics were purported to evoke the glamour of the Great Mosque in Damascus,
which had been lost to the Umayyad family. Noted 19th-century mosaics include those by Edward Burne-Jones at St Paul's
within the Walls in Rome. Another modern mosaic of note is the world's largest mosaic installation located at the
Cathedral Basilica of St. Louis, located in St. Louis, Missouri. A modern example of mosaic is the Museum of Natural
History station of the New York City Subway (there are many such works of art scattered throughout the New York City
subway system, though many IND stations are designed with bland mosaics). Another example of mosaics in ordinary
surroundings is the use of locally themed mosaics in some restrooms in the rest areas along some Texas interstate
highways. In styles that owe as much to video game pixel art and pop culture as to traditional mosaic, street art has
seen a novel reinvention and expansion of mosaic artwork. The most prominent artist working with mosaics in street
art is the French artist Invader. He has done almost all his work in two very distinct mosaic styles, the first of which
is small "traditional" tile mosaics of 8-bit video game characters, installed in cities across the globe, and the
second of which is a style he refers to as "Rubikcubism", which uses a kind of dual-layer mosaic via grids of scrambled
Rubik's Cubes. Although he is the most prominent, other street and urban artists work in mosaic styles as well.
Portuguese pavement (in Portuguese, Calçada Portuguesa) is a kind of two-tone stone mosaic paving created in Portugal,
and common throughout the Lusosphere. Most commonly taking the form of geometric patterns from the simple to the
complex, it also is used to create complex pictorial mosaics in styles ranging from iconography to classicism and
even modern design. In Portuguese-speaking countries, many cities have a large amount of their sidewalks and even,
though far more occasionally, streets done in this mosaic form. Lisbon in particular maintains almost all walkways
in this style. The indirect method of applying tesserae is often used for very large projects, projects with repetitive
elements or for areas needing site-specific shapes. Tiles are applied face-down to a backing paper using an adhesive,
and later transferred onto walls, floors or craft projects. This method is most useful for extremely large projects
as it gives the maker time to rework areas, allows the cementing of the tiles to the backing panel to be carried
out quickly in one operation and helps ensure that the front surfaces of the mosaic tiles and mosaic pieces are flat
and in the same plane on the front, even when using tiles and pieces of differing thicknesses. Mosaic murals, benches
and tabletops are some of the items usually made using the indirect method, as it results in a smoother and more
even surface. The double indirect method can be used when it is important to see the work during the creation process
as it will appear when completed. The tesserae are placed face-up on a medium (often adhesive-backed paper, sticky
plastic or soft lime or putty) as it will appear when installed. When the mosaic is complete, a similar medium is
placed atop it. The piece is then turned over, the original underlying material is carefully removed, and the piece
is installed as in the indirect method described above. In comparison to the indirect method, this is a complex system
to use and requires great skill on the part of the operator, to avoid damaging the work. Its greatest advantage lies
in the possibility of the operator directly controlling the final result of the work, which is important e.g. when
the human figure is involved. This method was created in 1989 by Maurizio Placuzzi and registered for industrial
use (patent n. 0000222556) under the name of his company, Sicis International Srl, now Sicis The Art Mosaic Factory
Srl. A tile mosaic is a digital image made up of individual tiles arranged in a non-overlapping fashion, for example to make a static image on a shower room or bathing pool floor. The image is broken down into square pixels, each formed from a ceramic tile; a typical size is 1 in × 1 in (25 mm × 25 mm), as on the floor of the University of Toronto pool, though larger tiles such as 2 in × 2 in (51 mm × 51 mm) are sometimes used. These digital images
are coarse in resolution and often simply express text, such as the depth of the pool in various places, but some
such digital images are used to show a sunset or other beach theme. With high cost of labor in developed countries,
production automation has become increasingly popular. Rather than being assembled by hand, mosaics designed using
computer aided design (CAD) software can be assembled by a robot. Production can be greater than 10 times faster
with higher accuracy. But these "computer" mosaics have a different look than hand-made "artisanal" mosaics. With
robotic production, colored tiles are loaded into buffers, and then the robot picks and places tiles individually
according to a command file from the design software.
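The pixel-to-tile mapping and pick-and-place workflow described above can be sketched in Python. This is a minimal illustration only: the three-colour palette, the Euclidean colour-distance metric, and the (row, column, tile) command format are assumptions for the sake of the example, not the output of any real mosaic CAD package.

```python
# Hypothetical sketch: quantize a coarse image to a fixed tile palette and
# emit a pick-and-place command list, as a robotic assembler might consume.
# Palette and command format are illustrative assumptions.

PALETTE = {
    "white": (255, 255, 255),
    "black": (0, 0, 0),
    "blue":  (0, 0, 200),
}

def nearest_tile(rgb):
    """Return the name of the palette tile closest to rgb (squared Euclidean distance)."""
    return min(PALETTE,
               key=lambda name: sum((a - b) ** 2 for a, b in zip(PALETTE[name], rgb)))

def mosaic_commands(image):
    """image: 2-D list of RGB tuples, one per tile cell.
    Returns pick-and-place commands as (row, col, tile_name) triples."""
    return [(r, c, nearest_tile(px))
            for r, row in enumerate(image)
            for c, px in enumerate(row)]

# A tiny 2x2 source image: light ground with dark and blue cells,
# like a coarse pool-depth marker.
img = [[(250, 250, 250), (10, 10, 10)],
       [(20, 20, 180), (240, 240, 240)]]
print(mosaic_commands(img))
```

In a real pipeline the source image would first be downscaled so that one pixel corresponds to one physical tile; the command list is then simply replayed by the robot, one tile per buffer pick.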
The original Latin word "universitas" refers in general to "a number of persons associated into one body, a society, company,
community, guild, corporation, etc." At the time of the emergence of urban town life and medieval guilds, specialised
"associations of students and teachers with collective legal rights usually guaranteed by charters issued by princes,
prelates, or the towns in which they were located" came to be denominated by this general term. Like other guilds,
they were self-regulating and determined the qualifications of their members. An important idea in the definition
of a university is the notion of academic freedom. The first documentary evidence of this comes from early in the
life of the first university. The University of Bologna adopted an academic charter, the Constitutio Habita, in 1158
or 1155, which guaranteed the right of a traveling scholar to unhindered passage in the interests of education. Today
this is claimed as the origin of "academic freedom". This is now widely recognised internationally: on 18 September
1988, 430 university rectors signed the Magna Charta Universitatum, marking the 900th anniversary of Bologna's foundation.
The number of universities signing the Magna Charta Universitatum continues to grow, drawing from all parts of the
world. European higher education took place for hundreds of years in Christian cathedral schools or monastic schools
(scholae monasticae), in which monks and nuns taught classes; evidence of these immediate forerunners of the later
university at many places dates back to the 6th century. The earliest universities were developed under the aegis
of the Latin Church by papal bull as studia generalia and perhaps from cathedral schools. It is possible, however,
that the development of cathedral schools into universities was quite rare, with the University of Paris being an
exception. Later they were also founded by Kings (University of Naples Federico II, Charles University in Prague,
Jagiellonian University in Kraków) or municipal administrations (University of Cologne, University of Erfurt). In
the early medieval period, most new universities were founded from pre-existing schools, usually when these schools
were deemed to have become primarily sites of higher education. Many historians state that universities and cathedral
schools were a continuation of the interest in learning promoted by monasteries. All over Europe rulers and city
governments began to create universities to satisfy a European thirst for knowledge, and the belief that society
would benefit from the scholarly expertise generated from these institutions. Princes and leaders of city governments
perceived the potential benefits of having a scholarly expertise develop with the ability to address difficult problems
and achieve desired ends. The emergence of humanism was essential to this understanding of the possible utility of
universities as well as the revival of interest in knowledge gained from ancient Greek texts. The rediscovery of
Aristotle's works (more than 3,000 pages of which would eventually be translated) fuelled a spirit of inquiry into natural
processes that had already begun to emerge in the 12th century. Some scholars believe that these works represented
one of the most important document discoveries in Western intellectual history. Richard Dales, for instance, calls
the discovery of Aristotle's works "a turning point in the history of Western thought." After Aristotle re-emerged,
a community of scholars, primarily communicating in Latin, accelerated the process and practice of attempting to
reconcile the thoughts of Greek antiquity, and especially ideas related to understanding the natural world, with
those of the church. The efforts of this "scholasticism" were focused on applying Aristotelian logic and thoughts
about natural processes to biblical passages and attempting to prove the viability of those passages through reason.
This became the primary mission of lecturers, and the expectation of students. The university culture developed differently
in northern Europe than it did in the south, although the northern (primarily Germany, France and Great Britain)
and southern universities (primarily Italy) did have many elements in common. Latin was the language of the university,
used for all texts, lectures, disputations and examinations. Professors lectured on the books of Aristotle for logic,
natural philosophy, and metaphysics; while Hippocrates, Galen, and Avicenna were used for medicine. Outside of these
commonalities, great differences separated north and south, primarily in subject matter. Italian universities focused
on law and medicine, while the northern universities focused on the arts and theology. There were distinct differences
in the quality of instruction in these areas which were congruent with their focus, so scholars would travel north
or south based on their interests and means. There was also a difference in the types of degrees awarded at these
universities. English, French and German universities usually awarded bachelor's degrees, with the exception of degrees
in theology, for which the doctorate was more common. Italian universities awarded primarily doctorates. The distinction
can be attributed to the intent of the degree holder after graduation – in the north the focus tended to be on acquiring
teaching positions, while in the south students often went on to professional positions. The structure of northern
universities tended to be modeled after the system of faculty governance developed at the University of Paris. Southern
universities tended to be patterned after the student-controlled model begun at the University of Bologna. Among
the southern universities, a further distinction has been noted between those of northern Italy, which followed the
pattern of Bologna as a "self-regulating, independent corporation of scholars" and those of southern Italy and Iberia,
which were "founded by royal and imperial charter to serve the needs of government." Their endowment by a prince
or monarch and their role in training government officials made these Mediterranean universities similar to Islamic
madrasas, although madrasas were generally smaller and individual teachers, rather than the madrasa itself, granted
the license or degree. Scholars like Arnold H. Green and Hossein Nasr have argued that starting in the 10th century,
some medieval Islamic madrasahs became universities. George Makdisi and others, however, argue that the European
university has no parallel in the medieval Islamic world. Other scholars regard the university as uniquely European
in origin and characteristics. Many scholars (including Makdisi) have argued that early medieval universities were
influenced by the religious madrasahs in Al-Andalus, the Emirate of Sicily, and the Middle East (during the Crusades).
Other scholars see this argument as overstated. Lowe and Yasuhara have recently drawn on the well-documented influences
of scholarship from the Islamic world on the universities of Western Europe to call for a reconsideration of the
development of higher education, turning away from a concern with local institutional structures to a broader consideration
within a global context. During the Early Modern period (approximately late 15th century to 1800), the universities
of Europe would see a tremendous amount of growth, productivity and innovative research. At the end of the Middle
Ages, about 400 years after the first university was founded, there were twenty-nine universities spread throughout
Europe. In the 15th century, twenty-eight new ones were created, with another eighteen added between 1500 and 1625.
This pace continued until by the end of the 18th century there were approximately 143 universities in Europe and
Eastern Europe, with the highest concentrations in the German Empire (34), Italian countries (26), France (25), and
Spain (23) – this was close to a 500% increase over the number of universities toward the end of the Middle Ages.
This number does not include the numerous universities that disappeared, or institutions that merged with other universities
during this time. The identification of a university was not necessarily obvious during the Early Modern period, as the term was applied to a burgeoning number of institutions. In fact, the term "university"
was not always used to designate a higher education institution. In Mediterranean countries, the term studium generale
was still often used, while "Academy" was common in Northern European countries. The propagation of universities
was not necessarily a steady progression, as the 17th century was rife with events that adversely affected university
expansion. Many wars, and especially the Thirty Years' War, disrupted the university landscape throughout Europe
at different times. War, plague, famine, regicide, and changes in religious power and structure often adversely affected
the societies that provided support for universities. Internal strife within the universities themselves, such as
student brawling and absentee professors, acted to destabilize these institutions as well. Universities were also
reluctant to give up older curricula, and the continued reliance on the works of Aristotle defied contemporary advancements
in science and the arts. This era was also affected by the rise of the nation-state. As universities increasingly
came under state control, or formed under the auspices of the state, the faculty governance model (begun by the University
of Paris) became more and more prominent. Although the older student-controlled universities still existed, they
slowly started to move toward this structural organization. Control of universities still tended to be independent,
although university leadership was increasingly appointed by the state. Although the structural model provided by
the University of Paris, where student members are controlled by faculty "masters," provided a standard for universities,
the application of this model took at least three different forms. There were universities that had a system of faculties
whose teaching addressed a very specific curriculum; this model tended to train specialists. There was a collegiate
or tutorial model based on the system at University of Oxford where teaching and organization was decentralized and
knowledge was more of a generalist nature. There were also universities that combined these models, using the collegiate
model but having a centralized organization. Early Modern universities initially continued the curriculum and research
of the Middle Ages: natural philosophy, logic, medicine, theology, mathematics, astronomy (and astrology), law, grammar
and rhetoric. Aristotle was prevalent throughout the curriculum, while medicine also depended on Galen and Arabic
scholarship. The importance of humanism in changing this state of affairs cannot be overestimated. Once humanist
professors joined the university faculty, they began to transform the study of grammar and rhetoric through the studia
humanitatis. Humanist professors focused on the ability of students to write and speak with distinction, to translate
and interpret classical texts, and to live honorable lives. Other scholars within the university were affected by
the humanist approaches to learning and their linguistic expertise in relation to ancient texts, as well as the ideology
that advocated the ultimate importance of those texts. Professors of medicine such as Niccolò Leoniceno, Thomas Linacre
and William Cop were often trained in and taught from a humanist perspective as well as translated important ancient
medical texts. The critical mindset imparted by humanism was imperative for changes in universities and scholarship.
For instance, Andreas Vesalius was educated in a humanist fashion before producing a translation of Galen, whose
ideas he verified through his own dissections. In law, Andreas Alciatus infused the Corpus Juris with a humanist
perspective, while Jacques Cujas's humanist writings were paramount to his reputation as a jurist. Philipp Melanchthon
cited the works of Erasmus as a highly influential guide for connecting theology back to original texts, which was
important for the reform at Protestant universities. Galileo Galilei, who taught at the Universities of Pisa and
Padua, and Martin Luther, who taught at the University of Wittenberg (as did Melanchthon), also had humanist training.
The task of the humanists was to slowly permeate the university; to increase the humanist presence in professorships
and chairs, syllabi and textbooks so that published works would demonstrate the humanistic ideal of science and scholarship.
Although the initial focus of the humanist scholars in the university was the discovery, exposition and insertion
of ancient texts and languages into the university, and the ideas of those texts into society generally, their influence
was ultimately quite progressive. The emergence of classical texts brought new ideas and led to a more creative university
climate (as the notable list of scholars above attests to). A focus on knowledge coming from self, from the human,
has a direct implication for new forms of scholarship and instruction, and was the foundation for what is commonly
known as the humanities. This disposition toward knowledge manifested in not simply the translation and propagation
of ancient texts, but also their adaptation and expansion. For instance, Vesalius was instrumental in advocating the
use of Galen, but he also invigorated this text with experimentation, disagreements and further research. The propagation
of these texts, especially within the universities, was greatly aided by the emergence of the printing press and
the beginning of the use of the vernacular, which allowed for the printing of relatively large texts at reasonable
prices. There are several major exceptions concerning tuition fees. In many European countries, it is possible to study without
tuition fees. Public universities in Nordic countries were entirely without tuition fees until around 2005. Denmark,
Sweden and Finland then moved to put in place tuition fees for foreign students. Citizens of EU and EEA member states
and citizens from Switzerland remain exempt from tuition fees, and the public grants awarded to promising
foreign students were increased to offset some of the impact. Colloquially, the term university may be used to describe
a phase in one's life: "When I was at university..." (in the United States and Ireland, college is often used instead:
"When I was in college..."). In Australia, Canada, New Zealand, the United Kingdom, Nigeria, the Netherlands, Spain
and the German-speaking countries university is often contracted to uni. In Ghana, New Zealand and in South Africa
it is sometimes called "varsity" (although this has become uncommon in New Zealand in recent years). "Varsity" was
also common usage in the UK in the 19th century. "Varsity" is still in common usage in Scotland.
In Canada, "college" generally refers to a two-year, non-degree-granting institution, while "university" connotes
a four-year, degree-granting institution. Universities may be sub-classified (as in the Maclean's rankings) into large
research universities with many PhD granting programs and medical schools (for example, McGill University); "comprehensive"
universities that have some PhDs but aren't geared toward research (such as Waterloo); and smaller, primarily undergraduate
universities (such as St. Francis Xavier). Although each institution is organized differently, nearly all universities
have a board of trustees; a president, chancellor, or rector; at least one vice president, vice-chancellor, or vice-rector;
and deans of various divisions. Universities are generally divided into a number of academic departments, schools
or faculties. Public university systems are governed by government-run higher education boards. They review financial
requests and budget proposals and then allocate funds for each university in the system. They also approve new programs
of instruction and cancel or make changes in existing programs. In addition, they plan for the further coordinated
growth and development of the various institutions of higher education in the state or country. However, many public
universities in the world have a considerable degree of financial, research and pedagogical autonomy. Private universities
are privately funded and generally have broader independence from state policies. However, they may have less independence
from business corporations depending on the source of their finances. The funding and organization of universities
varies widely between different countries around the world. In some countries universities are predominantly funded
by the state, while in others funding may come from donors or from fees which students attending the university must
pay. In some countries the vast majority of students attend university in their local town, while in other countries
universities attract students from all over the world, and may provide university accommodation for their students.
Universities created by bilateral or multilateral treaties between states are intergovernmental. An example is the
Academy of European Law, which offers training in European law to lawyers, judges, barristers, solicitors, in-house
counsel and academics. EUCLID (Pôle Universitaire Euclide, Euclid University) is chartered as a university and umbrella
organisation dedicated to sustainable development in signatory countries, and the United Nations University engages
in efforts to resolve the pressing global problems that are of concern to the United Nations, its peoples and member
states. The European University Institute, a post-graduate university specialised in the social sciences, is officially
an intergovernmental organisation, set up by the member states of the European Union. A national university is generally a university created or run by a national state, but one that at the same time represents an autonomous institution functioning as a completely independent body within that state. Some national universities are closely associated
with national cultural or political aspirations; for instance, in the early days of Irish independence the National University of Ireland collected a large amount of information on the Irish language and Irish culture. Reforms in
Argentina were the result of the University Revolution of 1918 and its subsequent reforms, which incorporated values that sought a more equal and secular higher education system. In 1963, the Robbins Report on universities in the
United Kingdom concluded that such institutions should have four main "objectives essential to any properly balanced
system: instruction in skills; the promotion of the general powers of the mind so as to produce not mere specialists
but rather cultivated men and women; to maintain research in balance with teaching, since teaching should not be
separated from the advancement of learning and the search for truth; and to transmit a common culture and common
standards of citizenship." Until the 19th century, religion played a significant role in university curriculum; however,
the role of religion in research universities decreased in the 19th century, and by the end of the 19th century,
the German university model had spread around the world. Universities concentrated on science in the 19th and 20th
centuries and became increasingly accessible to the masses. In Britain, the move from the Industrial Revolution to modernity
saw the arrival of new civic universities with an emphasis on science and engineering, a movement initiated in 1960
by Sir Keith Murray (chairman of the University Grants Committee) and Sir Samuel Curran, with the formation of the
University of Strathclyde. The British also established universities worldwide, making higher education available to the masses beyond Europe as well. By the end of the early modern period, the structure and orientation of higher
education had changed in ways that are eminently recognizable for the modern context. Aristotle was no longer a force
providing the epistemological and methodological focus for universities and a more mechanistic orientation was emerging.
The hierarchical place of theological knowledge had for the most part been displaced and the humanities had become
a fixture, and a new openness was beginning to take hold in the construction and dissemination of knowledge that was to become imperative for the formation of the modern state. The epistemological tensions between scientists
and universities were also heightened by the economic realities of research during this time, as individual scientists,
associations and universities were vying for limited resources. There was also competition from the formation of
new colleges funded by private benefactors and designed to provide free education to the public, or established by
local governments to provide a knowledge hungry populace with an alternative to traditional universities. Even when
universities supported new scientific endeavors, and the university provided foundational training and authority
for the research and conclusions, they could not compete with the resources available through private benefactors.
Other historians find incongruity in the proposition that the very place where the vast number of the scholars that
influenced the scientific revolution received their education should also be the place that inhibits their research
and the advancement of science. In fact, more than 80% of the European scientists between 1450 and 1650 included in the
Dictionary of Scientific Biography were university trained, of which approximately 45% held university posts. It
was the case that the academic foundations remaining from the Middle Ages were stable, and they did provide for an
environment that fostered considerable growth and development. There was considerable reluctance on the part of universities
to relinquish the symmetry and comprehensiveness provided by the Aristotelian system, which was effective as a coherent
system for understanding and interpreting the world. However, university professors still exercised some autonomy,
at least in the sciences, to choose epistemological foundations and methods. For instance, Melanchthon and his disciples
at University of Wittenberg were instrumental for integrating Copernican mathematical constructs into astronomical
debate and instruction. Another example was the short-lived but fairly rapid adoption of Cartesian epistemology and
methodology in European universities, and the debates surrounding that adoption, which led to more mechanistic approaches to scientific problems and demonstrated an openness to change. There are many examples which belie the commonly
perceived intransigence of universities. Although universities may have been slow to accept new sciences and methodologies
as they emerged, when they did accept new ideas it helped to convey legitimacy and respectability, and supported
the scientific changes through providing a stable environment for instruction and material resources. Regardless
of the way the tension between universities, individual scientists, and the scientific revolution itself is perceived,
there was a discernible impact on the way that university education was constructed. Aristotelian epistemology provided
a coherent framework not simply for knowledge and knowledge construction, but also for the training of scholars within
the higher education setting. The creation of new scientific constructs during the scientific revolution, and the
epistemological challenges that were inherent within this creation, initiated the idea of both the autonomy of science
and the hierarchy of the disciplines. Instead of entering higher education to become a "general scholar" immersed
in becoming proficient in the entire curriculum, there emerged a type of scholar that put science first and viewed
it as a vocation in itself. The divergence between those focused on science and those still entrenched in the idea
of a general scholar exacerbated the epistemological tensions that were already beginning to emerge. Examining the
influence of humanism on scholars in medicine, mathematics, astronomy and physics may suggest that humanism and universities
were a strong impetus for the scientific revolution. Although the connection between humanism and the scientific
discovery may very well have begun within the confines of the university, the connection has been commonly perceived
as having been severed by the changing nature of science during the scientific revolution. Historians such as Richard
S. Westfall have argued that the overt traditionalism of universities inhibited attempts to re-conceptualize nature
and knowledge and caused an indelible tension between universities and scientists. This resistance to changes in
science may have been a significant factor in driving many scientists away from the university and toward private
benefactors, usually in princely courts, and associations with newly forming scientific societies.
The priesthoods of public religion were held by members of the elite classes. There was no principle analogous to separation
of church and state in ancient Rome. During the Roman Republic (509–27 BC), the same men who were elected public
officials might also serve as augurs and pontiffs. Priests married, raised families, and led politically active lives.
Julius Caesar became pontifex maximus before he was elected consul. The augurs read the will of the gods and supervised
the marking of boundaries as a reflection of universal order, thus sanctioning Roman expansionism as a matter of
divine destiny. The Roman triumph was at its core a religious procession in which the victorious general displayed
his piety and his willingness to serve the public good by dedicating a portion of his spoils to the gods, especially
Jupiter, who embodied just rule. As a result of the Punic Wars (264–146 BC), when Rome struggled to establish itself
as a dominant power, many new temples were built by magistrates in fulfillment of a vow to a deity for assuring their
military success. Roman religion was thus practical and contractual, based on the principle of do ut des, "I give
that you might give." Religion depended on knowledge and the correct practice of prayer, ritual, and sacrifice, not
on faith or dogma, although Latin literature preserves learned speculation on the nature of the divine and its relation
to human affairs. Even the most skeptical among Rome's intellectual elite such as Cicero, who was an augur, saw religion
as a source of social order. For ordinary Romans, religion was a part of daily life. Each home had a household shrine
at which prayers and libations to the family's domestic deities were offered. Neighborhood shrines and sacred places
such as springs and groves dotted the city. The Roman calendar was structured around religious observances. Women,
slaves, and children all participated in a range of religious activities. Some public rituals could be conducted
only by women, and women formed what is perhaps Rome's most famous priesthood, the state-supported Vestals, who tended
Rome's sacred hearth for centuries, until disbanded under Christian domination. The Romans are known for the great
number of deities they honored, a capacity that earned the mockery of early Christian polemicists. The presence of
Greeks on the Italian peninsula from the beginning of the historical period influenced Roman culture, introducing
some religious practices that became as fundamental as the cult of Apollo. The Romans looked for common ground between
their major gods and those of the Greeks (interpretatio graeca), adapting Greek myths and iconography for Latin literature
and Roman art. Etruscan religion was also a major influence, particularly on the practice of augury. Imported mystery
religions, which offered initiates salvation in the afterlife, were a matter of personal choice for an individual,
practiced in addition to carrying on one's family rites and participating in public religion. The mysteries, however,
involved exclusive oaths and secrecy, conditions that conservative Romans viewed with suspicion as characteristic
of "magic", conspiratorial (coniuratio), or subversive activity. Sporadic and sometimes brutal attempts were made
to suppress religionists who seemed to threaten traditional morality and unity, as with the senate's efforts to restrict
the Bacchanals in 186 BC. As the Romans extended their dominance throughout the Mediterranean world, their policy
in general was to absorb the deities and cults of other peoples rather than try to eradicate them, since they believed
that preserving tradition promoted social stability. One way that Rome incorporated diverse peoples was by supporting
their religious heritage, building temples to local deities that framed their theology within the hierarchy of Roman
religion. Inscriptions throughout the Empire record the side-by-side worship of local and Roman deities, including
dedications made by Romans to local gods. By the height of the Empire, numerous international deities were cultivated
at Rome and had been carried to even the most remote provinces, among them Cybele, Isis, Epona, and gods of solar
monism such as Mithras and Sol Invictus, found as far north as Roman Britain. Because Romans had never been obligated
to cultivate one god or one cult only, religious tolerance was not an issue in the sense that it is for competing
monotheistic systems. The monotheistic rigor of Judaism posed difficulties for Roman policy that led at times to
compromise and the granting of special exemptions, but sometimes to intractable conflict. For example, religious
disputes helped cause the First Jewish–Roman War and the Bar Kokhba revolt.

In the wake of the Republic's collapse, state religion had adapted to support the new regime of the emperors.
Augustus, the first Roman emperor, justified
the novelty of one-man rule with a vast program of religious revivalism and reform. Public vows formerly made for
the security of the republic now were directed at the wellbeing of the emperor. So-called "emperor worship" expanded
on a grand scale the traditional Roman veneration of the ancestral dead and of the Genius, the divine tutelary of
every individual. Imperial cult became one of the major ways in which Rome advertised its presence in the provinces
and cultivated shared cultural identity and loyalty throughout the Empire. Rejection of the state religion was tantamount
to treason. This was the context for Rome's conflict with Christianity, which Romans variously regarded as a form
of atheism and novel superstitio.

Rome had a semi-divine ancestor in the Trojan refugee Aeneas, son of Venus, who
was said to have established the nucleus of Roman religion when he brought the Palladium, Lares and Penates from
Troy to Italy. These objects were believed in historical times to remain in the keeping of the Vestals, Rome's female
priesthood. Aeneas had been given refuge by King Evander, a Greek exile from Arcadia, to whom were attributed other
religious foundations: he established the Ara Maxima, "Greatest Altar," to Hercules at the site that would become
the Forum Boarium, and he was the first to celebrate the Lupercalia, an archaic festival in February that was celebrated
as late as the 5th century of the Christian era. The myth of a Trojan founding with Greek influence was reconciled
through an elaborate genealogy (the Latin kings of Alba Longa) with the well-known legend of Rome's founding by Romulus
and Remus. The most common version of the twins' story displays several aspects of hero myth. Their mother, Rhea
Silvia, had been ordered by her uncle the king to remain a virgin, in order to preserve the throne he had usurped
from her father. Through divine intervention, the rightful line was restored when Rhea Silvia was impregnated by
the god Mars. She gave birth to twins, who were duly exposed by order of the king but saved through a series of miraculous
events. Romulus was credited with several religious institutions. He founded the Consualia festival, inviting the
neighbouring Sabines to participate; the ensuing rape of the Sabine women by Romulus's men further embedded both
violence and cultural assimilation in Rome's myth of origins. As a successful general, Romulus is also supposed to
have founded Rome's first temple to Jupiter Feretrius and offered the spolia opima, the prime spoils taken in war,
in the celebration of the first Roman triumph. Spared a mortal's death, Romulus was mysteriously spirited away and
deified. Each of Rome's legendary or semi-legendary kings was associated with one or more religious institutions
still known to the later Republic. Tullus Hostilius and Ancus Marcius instituted the fetial priests. The first "outsider"
Etruscan king, Lucius Tarquinius Priscus, founded a Capitoline temple to the triad Jupiter, Juno and Minerva which
served as the model for the highest official cult throughout the Roman world. The benevolent, divinely fathered Servius
Tullius established the Latin League, its Aventine Temple to Diana, and the Compitalia to mark his social reforms.
Servius Tullius was murdered and succeeded by the arrogant Tarquinius Superbus, whose expulsion marked the beginning
of Rome as a republic with annually elected magistrates.

Rome offers no native creation myth, and little mythography
to explain the character of its deities, their mutual relationships or their interactions with the human world, but
Roman theology acknowledged that di immortales (immortal gods) ruled all realms of the heavens and earth. There were
gods of the upper heavens, gods of the underworld and a myriad of lesser deities between. Some evidently favoured
Rome because Rome honoured them, but none were intrinsically, irredeemably foreign or alien. The political, cultural
and religious coherence of an emergent Roman super-state required a broad, inclusive and flexible network of lawful
cults. At different times and in different places, the sphere of influence, character and functions of a divine being
could expand, overlap with those of others, and be redefined as Roman. Change was embedded within existing traditions.
Several versions of a semi-official, structured pantheon were developed during the political, social and religious
instability of the Late Republican era. Jupiter, the most powerful of all gods and "the fount of the auspices upon
which the relationship of the city with the gods rested", consistently personified the divine authority of Rome's
highest offices, internal organization and external relations. During the archaic and early Republican eras, he shared
his temple, some aspects of cult and several divine characteristics with Mars and Quirinus, who were later replaced
by Juno and Minerva. A conceptual tendency toward triads may be indicated by the later agricultural or plebeian triad
of Ceres, Liber and Libera, and by some of the complementary threefold deity-groupings of Imperial cult. Other major
and minor deities could be single, coupled, or linked retrospectively through myths of divine marriage and sexual
adventure. These later Roman pantheistic hierarchies are part literary and mythographic, part philosophical creations,
and often Greek in origin. The Hellenization of Latin literature and culture supplied literary and artistic models
for reinterpreting Roman deities in light of the Greek Olympians, and promoted a sense that the two cultures had
a shared heritage.

The impressive, costly, and centralised rites to the deities of the Roman state were vastly outnumbered
in everyday life by commonplace religious observances pertaining to an individual's domestic and personal deities,
the patron divinities of Rome's various neighborhoods and communities, and the often idiosyncratic blends of official,
unofficial, local and personal cults that characterised lawful Roman religion. In this spirit, a provincial Roman
citizen who made the long journey from Bordeaux to Italy to consult the Sibyl at Tibur did not neglect his devotion
to his own goddess from home.

Roman calendars show roughly forty annual religious festivals. Some lasted several days, others a single day or less.
Days open to ordinary public business (dies fasti) outnumbered days reserved on religious grounds (dies nefasti). A comparison
of surviving Roman religious calendars suggests that official festivals were organized according to broad seasonal
groups that allowed for different local traditions. Some of the most ancient and popular festivals incorporated ludi
("games," such as chariot races and theatrical performances), with examples including those held at Palestrina in
honour of Fortuna Primigenia during Compitalia, and the Ludi Romani in honour of Liber. Other festivals may have
required only the presence and rites of their priests and acolytes, or particular groups, such as women at the Bona
Dea rites. Other public festivals were not required by the calendar, but occasioned by events. The triumph of a Roman
general was celebrated as the fulfillment of religious vows, though these tended to be overshadowed by the political
and social significance of the event. During the late Republic, the political elite competed to outdo each other
in public display, and the ludi attendant on a triumph were expanded to include gladiator contests. Under the Principate,
all such spectacular displays came under Imperial control: the most lavish were subsidised by emperors, and lesser
events were provided by magistrates as a sacred duty and privilege of office. Additional festivals and games celebrated
Imperial accessions and anniversaries. Others, such as the traditional Republican Secular Games to mark a new era
(saeculum), became imperially funded to maintain traditional values and a common Roman identity. That the spectacles
retained something of their sacral aura even in late antiquity is indicated by the admonitions of the Church Fathers
that Christians should not take part. The meaning and origin of many archaic festivals baffled even Rome's intellectual
elite, but the more obscure they were, the greater the opportunity for reinvention and reinterpretation — a fact
lost neither on Augustus in his program of religious reform, which often cloaked autocratic innovation, nor on his
only rival as mythmaker of the era, Ovid. In his Fasti, a long-form poem covering Roman holidays from January to
June, Ovid presents a unique look at Roman antiquarian lore, popular customs, and religious practice that is by turns
imaginative, entertaining, high-minded, and scurrilous; not a priestly account, despite the speaker's pose as a vates
or inspired poet-prophet, but a work of description, imagination and poetic etymology that reflects the broad humor
and burlesque spirit of such venerable festivals as the Saturnalia, Consualia, and feast of Anna Perenna on the Ides
of March, where Ovid treats the assassination of the newly deified Julius Caesar as utterly incidental to the festivities
among the Roman people. But official calendars preserved from different times and places also show a flexibility
in omitting or expanding events, indicating that there was no single static and authoritative calendar of required
observances. In the later Empire under Christian rule, the new Christian festivals were incorporated into the existing
framework of the Roman calendar, alongside at least some of the traditional festivals.

The Latin word templum originally
referred not to the temple building itself, but to a sacred space surveyed and plotted ritually through augury: "The
architecture of the ancient Romans was, from first to last, an art of shaping space around ritual." The Roman architect
Vitruvius always uses the word templum to refer to this sacred precinct, and the more common Latin words aedes, delubrum,
or fanum for a temple or shrine as a building. The ruins of temples are among the most visible monuments of ancient
Roman culture.

All sacrifices and offerings required an accompanying prayer to be effective. Pliny the Elder declared
that "a sacrifice without prayer is thought to be useless and not a proper consultation of the gods." Prayer by itself,
however, had independent power. The spoken word was thus the single most potent religious action, and knowledge of
the correct verbal formulas the key to efficacy. Accurate naming was vital for tapping into the desired powers of
the deity invoked, hence the proliferation of cult epithets among Roman deities. Public prayers (prex) were offered
loudly and clearly by a priest on behalf of the community. Public religious ritual had to be enacted by specialists
and professionals faultlessly; a mistake might require that the action, or even the entire festival, be repeated
from the start. The historian Livy reports an occasion when the presiding magistrate at the Latin festival forgot
to include the "Roman people" among the list of beneficiaries in his prayer; the festival had to be started over.
Even private prayer by an individual was formulaic, a recitation rather than a personal expression, though selected
by the individual for a particular purpose or occasion.

Sacrifice to deities of the heavens (di superi, "gods above")
was performed in daylight, and under the public gaze. Deities of the upper heavens required white, infertile victims
of their own sex: Juno a white heifer (possibly a white cow); Jupiter a white, castrated ox (bos mas) for the annual
oath-taking by the consuls. Di superi with strong connections to the earth, such as Mars, Janus, Neptune and various
genii – including the Emperor's – were offered fertile victims. After the sacrifice, a banquet was held; in state
cults, the images of honoured deities took pride of place on banqueting couches and by means of the sacrificial fire
consumed their proper portion (exta, the innards). Rome's officials and priests reclined alongside them in order of precedence
and ate the meat; lesser citizens may have had to provide their own. Chthonic gods such as Dis pater, the di inferi
("gods below"), and the collective shades of the departed (di Manes) were given dark, fertile victims in nighttime
rituals. Animal sacrifice usually took the form of a holocaust or burnt offering, and there was no shared banquet,
as "the living cannot share a meal with the dead". Ceres and other underworld goddesses of fruitfulness were sometimes
offered pregnant female animals; Tellus was given a pregnant cow at the Fordicidia festival. Color had a general
symbolic value for sacrifices. Demigods and heroes, who belonged to the heavens and the underworld, were sometimes
given black-and-white victims. Robigo (or Robigus) was given red dogs and libations of red wine at the Robigalia
for the protection of crops from blight and red mildew. The same divine agencies who caused disease or harm also
had the power to avert it, and so might be placated in advance. Divine consideration might be sought to avoid the
inconvenient delays of a journey, or encounters with banditry, piracy and shipwreck, with due gratitude to be rendered
on safe arrival or return. In times of great crisis, the Senate could decree collective public rites, in which Rome's
citizens, including women and children, moved in procession from one temple to the next, supplicating the gods. Extraordinary
circumstances called for extraordinary sacrifice: in one of the many crises of the Second Punic War, Jupiter Capitolinus
was promised every animal born that spring (see ver sacrum), to be rendered after five more years of protection from
Hannibal and his allies. The "contract" with Jupiter is exceptionally detailed. All due care would be taken of the
animals. If any died or were stolen before the scheduled sacrifice, they would count as already sacrificed, since
they had already been consecrated. Normally, if the gods failed to keep their side of the bargain, the offered sacrifice
would be withheld. In the imperial period, sacrifice was withheld following Trajan's death because the gods had not
kept the Emperor safe for the stipulated period. In Pompeii, the Genius of the living emperor was offered a bull:
presumably a standard practice in Imperial cult, though minor offerings (incense and wine) were also made. The exta
were the entrails of a sacrificed animal, comprising in Cicero's enumeration the gall bladder (fel), liver (iecur),
heart (cor), and lungs (pulmones). The exta were exposed for litatio (divine approval) as part of Roman liturgy,
but were "read" in the context of the disciplina Etrusca. As a product of Roman sacrifice, the exta and blood are
reserved for the gods, while the meat (viscera) is shared among human beings in a communal meal. The exta of bovine
victims were usually stewed in a pot (olla or aula), while those of sheep or pigs were grilled on skewers. When the
deity's portion was cooked, it was sprinkled with mola salsa (ritually prepared salted flour) and wine, then placed
in the fire on the altar for the offering; the technical verb for this action was porricere.

Human sacrifice in ancient Rome was rare but documented. After the Roman defeat at Cannae, two Gauls and two Greeks were buried under the Forum
Boarium, in a stone chamber "which had on a previous occasion [228 BC] also been polluted by human victims, a practice
most repulsive to Roman feelings". Livy avoids the word "sacrifice" in connection with this bloodless human life-offering;
Plutarch does not. The rite was apparently repeated in 113 BC, preparatory to an invasion of Gaul. Its religious
dimensions and purpose remain uncertain. In the early stages of the First Punic War (264 BC) the first known Roman
gladiatorial munus was held, described as a funeral blood-rite to the manes of a Roman military aristocrat. The gladiator
munus was never explicitly acknowledged as a human sacrifice, probably because death was not its inevitable outcome
or purpose. Even so, the gladiators swore their lives to the infernal gods, and the combat was dedicated as an offering
to the di manes or other gods. The event was therefore a sacrificium in the strict sense of the term, and Christian
writers later condemned it as human sacrifice. The small woolen dolls called Maniae, hung on the Compitalia shrines,
were thought a symbolic replacement for child-sacrifice to Mania, as Mother of the Lares. The Junii took credit for
its abolition by their ancestor L. Junius Brutus, traditionally Rome's Republican founder and first consul. Political
or military executions were sometimes conducted in such a way that they evoked human sacrifice, whether deliberately
or in the perception of witnesses; Marcus Marius Gratidianus was a gruesome example. Officially, human sacrifice
was obnoxious "to the laws of gods and men." The practice was a mark of the "Other", attributed to Rome's traditional
enemies such as the Carthaginians and Gauls. Rome banned it on several occasions under extreme penalty. A law passed
in 81 BC characterised human sacrifice as murder committed for magical purposes. Pliny saw the ending of human sacrifice
conducted by the druids as a positive consequence of the conquest of Gaul and Britain. Despite an empire-wide ban
under Hadrian, human sacrifice may have continued covertly in North Africa and elsewhere.

A pater familias was the
senior priest of his household. He offered daily cult to his lares and penates, and to his di parentes/divi parentes
at his domestic shrines and in the fires of the household hearth. His wife (mater familias) was responsible for the
household's cult to Vesta. In rural estates, bailiffs seem to have been responsible for at least some of the household
shrines (lararia) and their deities. Household cults had state counterparts. In Vergil's Aeneid, Aeneas brought the
Trojan cult of the lares and penates from Troy, along with the Palladium which was later installed in the temple
of Vesta.

Religious law centered on the ritualised system of honours and sacrifice that brought divine blessings,
according to the principle do ut des ("I give, that you might give"). Proper, respectful religio brought social harmony
and prosperity. Religious neglect was a form of atheism: impure sacrifice and incorrect ritual were vitia (impious
errors). Excessive devotion, fearful grovelling to deities and the improper use or seeking of divine knowledge were
superstitio. Any of these moral deviations could cause divine anger (ira deorum) and therefore harm the State. The
official deities of the state were identified with its lawful offices and institutions, and Romans of every class
were expected to honour the beneficence and protection of mortal and divine superiors. Participation in public rites
showed a personal commitment to their community and its values. Official cults were state funded as a "matter of
public interest" (res publica). Non-official but lawful cults were funded by private individuals for the benefit
of their own communities. The difference between public and private cult is often unclear. Individuals or collegial
associations could offer funds and cult to state deities. The public Vestals prepared ritual substances for use in
public and private cults, and held the state-funded (thus public) opening ceremony for the Parentalia festival, which
was otherwise a private rite to household ancestors. Some rites of the domus (household) were held in public places
but were legally defined as privata in part or whole. All cults were ultimately subject to the approval and regulation
of the censor and pontifices.

Rome had no separate priestly caste or class. The highest authority within a community
usually sponsored its cults and sacrifices, officiated as its priest and promoted its assistants and acolytes. Specialists
from the religious colleges and professionals such as haruspices and oracles were available for consultation. In
household cult, the paterfamilias functioned as priest, and members of his familia as acolytes and assistants. Public
cults required greater knowledge and expertise. The earliest public priesthoods were probably the flamines (the singular
is flamen), attributed to king Numa: the major flamines, dedicated to Jupiter, Mars and Quirinus, were traditionally
drawn from patrician families. Twelve lesser flamines were each dedicated to a single deity, whose archaic nature
is indicated by the relative obscurity of some. Flamines were constrained by the requirements of ritual purity; Jupiter's
flamen in particular had virtually no simultaneous capacity for a political or military career. In the Regal era,
a rex sacrorum (king of the sacred rites) supervised regal and state rites in conjunction with the king (rex) or
in his absence, and announced the public festivals. He had little or no civil authority. With the abolition of monarchy,
the collegial power and influence of the Republican pontifices increased. By the late Republican era, the flamines
were supervised by the pontifical collegia. The rex sacrorum had become a relatively obscure priesthood with an entirely
symbolic title: his religious duties still included the daily, ritual announcement of festivals and priestly duties
within two or three of the latter, but his most important priestly role – the supervision of the Vestals and their
rites – fell to the more politically powerful and influential pontifex maximus. Public priests were appointed by
the collegia. Once elected, a priest held permanent religious authority from the eternal divine, which offered him
lifetime influence, privilege and immunity. Therefore, civil and religious law limited the number and kind of religious
offices allowed an individual and his family. Religious law was collegial and traditional; it informed political
decisions, could overturn them, and was difficult to exploit for personal gain. Priesthood was a costly honour: in
traditional Roman practice, a priest drew no stipend. Cult donations were the property of the deity, whose priest
must provide cult regardless of shortfalls in public funding – this could mean subsidy of acolytes and all other
cult maintenance from personal funds. For those who had reached their goal in the Cursus honorum, permanent priesthood
was best sought or granted after a lifetime's service in military or political life, or preferably both: it was a
particularly honourable and active form of retirement which fulfilled an essential public duty. For a freedman or
slave, promotion as one of the Compitalia seviri offered a high local profile, opportunities in local politics,
and therefore in business. During the Imperial era, priesthood of the Imperial cult offered provincial elites full Roman
citizenship and public prominence beyond their single year in religious office; in effect, it was the first step
in a provincial cursus honorum. In Rome, the same Imperial cult role was performed by the Arval Brethren, once an
obscure Republican priesthood dedicated to several deities, then co-opted by Augustus as part of his religious reforms.
The Arvals offered prayer and sacrifice to Roman state gods at various temples for the continued welfare of the Imperial
family on their birthdays and accession anniversaries, and to mark extraordinary events such as the quashing of conspiracy
or revolt. Every January 3 they consecrated the annual vows and rendered any sacrifice promised in the previous year,
provided the gods had kept the Imperial family safe for the contracted time.

The Vestals were a public priesthood
of six women devoted to the cultivation of Vesta, goddess of the hearth of the Roman state and its vital flame. A
girl chosen to be a Vestal achieved unique religious distinction, public status and privileges, and could exercise
considerable political influence. Upon entering her office, a Vestal was emancipated from her father's authority.
In archaic Roman society, these priestesses were the only women not required to be under the legal guardianship of
a man, instead answering directly to the Pontifex Maximus. A Vestal's dress represented her status outside the usual
categories that defined Roman women, with elements of both virgin bride and daughter, and Roman matron and wife.
Unlike male priests, Vestals were freed of the traditional obligations of marrying and producing children, and were
required to take a vow of chastity that was strictly enforced: a Vestal polluted by the loss of her chastity while
in office was buried alive. Thus the exceptional honor accorded a Vestal was religious rather than personal or social;
her privileges required her to be fully devoted to the performance of her duties, which were considered essential
to the security of Rome. The Vestals embody the profound connection between domestic cult and the religious life
of the community. Any householder could rekindle their own household fire from Vesta's flame. The Vestals cared for
the Lares and Penates of the state that were the equivalent of those enshrined in each home. Besides their own festival
of Vestalia, they participated directly in the rites of Parilia, Parentalia and Fordicidia. Indirectly, they played
a role in every official sacrifice; among their duties was the preparation of the mola salsa, the salted flour that
was sprinkled on every sacrificial victim as part of its immolation. Augustus' religious reforms raised the
funding and public profile of the Vestals. They were given high-status seating at games and theatres. The emperor
Claudius appointed them as priestesses to the cult of the deified Livia, wife of Augustus. They seem to have retained
their religious and social distinctions well into the 4th century, after political power within the Empire had shifted
to the Christians. When the Christian emperor Gratian refused the office of pontifex maximus, he took steps toward
the dissolution of the order. His successor Theodosius I extinguished Vesta's sacred fire and vacated her temple.

Public religion took place within a sacred precinct that had been marked out ritually by an augur. The original meaning
of the Latin word templum was this sacred space, and only later referred to a building. Rome itself was an intrinsically
sacred space; its ancient boundary (pomerium) had been marked by Romulus himself with oxen and plough; what lay within
was the earthly home and protectorate of the gods of the state. In Rome, the central references for the establishment
of an augural templum appear to have been the Via Sacra (Sacred Way) and the pomerium. Magistrates sought divine
opinion of proposed official acts through an augur, who read the divine will through observations made within the
templum before, during and after an act of sacrifice. Divine disapproval could arise through unfit sacrifice, errant
rites (vitium) or an unacceptable plan of action. If an unfavourable sign was given, the magistrate could repeat
the sacrifice until favourable signs were seen, consult with his augural colleagues, or abandon the project. Magistrates
could use their right of augury (ius augurum) to adjourn and overturn the process of law, but were obliged to base
their decision on the augur's observations and advice. For Cicero, himself an augur, this made the augur the most
powerful authority in the Late Republic. By his time (mid 1st century BC) augury was supervised by the college of
pontifices, whose powers were increasingly woven into the magistracies of the cursus honorum. Haruspicy was also
used in public cult, under the supervision of the augur or presiding magistrate. The haruspices divined the will
of the gods through examination of entrails after sacrifice, particularly the liver. They also interpreted omens,
prodigies and portents, and formulated their expiation. Most Roman authors describe haruspicy as an ancient, ethnically
Etruscan "outsider" religious profession, separate from Rome's internal and largely unpaid priestly hierarchy, essential
but never quite respectable. During the mid-to-late Republic, the reformist Gaius Gracchus, the populist politician-general
Gaius Marius and his antagonist Sulla, and the "notorious Verres" justified their very different policies by the
divinely inspired utterances of private diviners. The senate and armies used the public haruspices: at some time
during the late Republic, the Senate decreed that Roman boys of noble family be sent to Etruria for training in haruspicy
and divination. Being of independent means, they would be better motivated to maintain a pure, religious practice
for the public good. The motives of private haruspices – especially females – and their clients were officially suspect:
none of this seems to have troubled Marius, who employed a Syrian prophetess.

Prodigies were transgressions in the
natural, predictable order of the cosmos – signs of divine anger that portended conflict and misfortune. The Senate
decided whether a reported prodigy was false, or genuine and in the public interest, in which case it was referred
to the public priests, augurs and haruspices for ritual expiation. In 207 BC, during one of the Punic Wars' worst
crises, the Senate dealt with an unprecedented number of confirmed prodigies whose expiation would have involved
"at least twenty days" of dedicated rites. Livy presents these as signs of widespread failure in Roman religio. The
major prodigies included the spontaneous combustion of weapons, the apparent shrinking of the sun's disc, two moons
in a daylit sky, a cosmic battle between sun and moon, a rain of red-hot stones, a bloody sweat on statues, and blood
in fountains and on ears of corn: all were expiated by sacrifice of "greater victims". The minor prodigies were less
warlike but equally unnatural: sheep became goats, and a hen became a cock (and vice versa); these were expiated with
"lesser victims". The discovery of an androgynous four-year-old child was expiated by its drowning and the holy procession
of 27 virgins to the temple of Juno Regina, singing a hymn to avert disaster: a lightning strike during the hymn
rehearsals required further expiation. Religious restitution was proved only by Rome's victory.

Roman beliefs about
an afterlife varied, and are known mostly for the educated elite who expressed their views in terms of their chosen
philosophy. The traditional care of the dead, however, and the perpetuation after death of their status in life were
part of the most archaic practices of Roman religion. Ancient votive deposits to the noble dead of Latium and Rome
suggest elaborate and costly funeral offerings and banquets in the company of the deceased, an expectation of afterlife
and their association with the gods. As Roman society developed, its Republican nobility tended to invest less in
spectacular funerals and extravagant housing for their dead, and more on monumental endowments to the community,
such as the donation of a temple or public building whose donor was commemorated by his statue and inscribed name.
Persons of low or negligible status might receive simple burial, with such grave goods as relatives could afford.
Funeral and commemorative rites varied according to wealth, status and religious context. In Cicero's time, the better-off
sacrificed a sow at the funeral pyre before cremation. The dead consumed their portion in the flames of the pyre,
Ceres her portion through the flame of her altar, and the family at the site of the cremation. For the less well-off,
inhumation with "a libation of wine, incense, and fruit or crops was sufficient". Ceres functioned as an intermediary
between the realms of the living and the dead: the deceased had not yet fully passed to the world of the dead and
could share a last meal with the living. The ashes (or body) were entombed or buried. On the eighth day of mourning,
the family offered further sacrifice, this time on the ground; the shade of the departed was assumed to have passed
entirely into the underworld. They had become one of the di Manes, who were collectively celebrated and appeased
at the Parentalia, a multi-day festival of remembrance in February. In the later Imperial era, the burial and commemorative
practices of Christians and non-Christians overlapped. Tombs were shared by Christian and non-Christian family members,
and the traditional funeral rites and feast of novemdialis found a partial parallel in the Christian Constitutio Apostolica.
The customary offerings of wine and food to the dead continued; St Augustine (following St Ambrose) feared that this
invited the "drunken" practices of Parentalia but commended funeral feasts as a Christian opportunity to give alms
of food to the poor. Christians attended Parentalia and its accompanying Feralia and Caristia in sufficient numbers
for the Council of Tours to forbid them in AD 567. Other funerary and commemorative practices were very different.
Traditional Roman practice spurned the corpse as a ritual pollution; inscriptions noted the day of birth and duration
of life. The Christian Church fostered the veneration of saintly relics, and inscriptions marked the day of death
as a transition to "new life". Roman camps followed a standard pattern for defense and religious ritual; in effect
they were Rome in miniature. The commander's headquarters stood at the centre; he took the auspices on a dais in
front. A small building behind housed the legionary standards, the divine images used in religious rites and in the
Imperial era, the image of the ruling emperor. In one camp, this shrine is even called Capitolium. The most important
camp-offering appears to have been the suovetaurilia performed before a major, set battle. A ram, a boar and a bull
were ritually garlanded, led around the outer perimeter of the camp (a lustratio exercitus) and in through a gate,
then sacrificed: Trajan's column shows three such events from his Dacian wars. The perimeter procession and sacrifice
suggest the entire camp as a divine templum; all within are purified and protected. Each camp had its own religious
personnel; standard bearers, priestly officers and their assistants, including a haruspex, and housekeepers of shrines
and images. A senior magistrate-commander (sometimes even a consul) headed it, his chain of subordinates ran it and
a ferocious system of training and discipline ensured that every citizen-soldier knew his duty. As in Rome, whatever
gods he served in his own time seem to have been his own business; legionary forts and vici included shrines to household
gods, personal deities and deities otherwise unknown. From the earliest Imperial era, citizen legionaries and provincial
auxiliaries gave cult to the emperor and his familia on Imperial accessions, anniversaries and their renewal of annual
vows. They celebrated Rome's official festivals in absentia, and had the official triads appropriate to their function
– in the Empire, Jupiter, Victoria and Concordia were typical. By the early Severan era, the military also offered
cult to the Imperial divi, the current emperor's numen, genius and domus (or familia), and special cult to the Empress
as "mother of the camp." The near-ubiquitous legionary shrines to Mithras of the later Imperial era were not part
of official cult until Mithras was absorbed into Solar and Stoic Monism as a focus of military concordia and Imperial
loyalty. The devotio was the most extreme offering a Roman general could make, promising to offer his own life in
battle along with the enemy as an offering to the underworld gods. Livy offers a detailed account of the devotio
carried out by Decius Mus; family tradition maintained that his son and grandson, all bearing the same name, also
devoted themselves. Before the battle, Decius is granted a prescient dream that reveals his fate. When he offers
sacrifice, the victim's liver appears "damaged where it refers to his own fortunes". Otherwise, the haruspex tells
him, the sacrifice is entirely acceptable to the gods. In a prayer recorded by Livy, Decius commits himself and the
enemy to the dii Manes and Tellus, charges alone and headlong into the enemy ranks, and is killed; his action cleanses
the sacrificial offering. Had he failed to die, his sacrificial offering would have been tainted and therefore void,
with possibly disastrous consequences. The act of devotio is a link between military ethics and those of the Roman
gladiator. The efforts of military commanders to channel the divine will were on occasion less successful. In the
early days of Rome's war against Carthage, the commander Publius Claudius Pulcher (consul 249 BC) launched a sea
campaign "though the sacred chickens would not eat when he took the auspices." In defiance of the omen, he threw
them into the sea, "saying that they might drink, since they would not eat. He was defeated, and on being bidden
by the senate to appoint a dictator, he appointed his messenger Glycias, as if again making a jest of his country's
peril." His impiety not only lost the battle but ruined his career. Roman women were present at most festivals and
cult observances. Some rituals specifically required the presence of women, but their active participation was limited.
As a rule women did not perform animal sacrifice, the central rite of most major public ceremonies. In addition to
the public priesthood of the Vestals, some cult practices were reserved for women only. The rites of the Bona Dea
excluded men entirely. Because women enter the public record less frequently than men, their religious practices
are less known, and even family cults were headed by the paterfamilias. A host of deities, however, are associated
with motherhood. Juno, Diana, Lucina, and specialized divine attendants presided over the life-threatening act of
giving birth and the perils of caring for a baby at a time when the infant mortality rate was as high as 40 percent.
Excessive devotion and enthusiasm in religious observance were superstitio, in the sense of "doing or believing more
than was necessary", to which women and foreigners were considered particularly prone. The boundaries between religio
and superstitio are perhaps indefinite. The famous tirade of Lucretius, the Epicurean rationalist, against what is
usually translated as "superstition" was in fact aimed at excessive religio. Roman religion was based on knowledge
rather than faith, but superstitio was viewed as an "inappropriate desire for knowledge"; in effect, an abuse of
religio. In the everyday world, many individuals sought to divine the future, influence it through magic, or seek
vengeance with help from "private" diviners. The state-sanctioned taking of auspices was a form of public divination
with the intent of ascertaining the will of the gods, not foretelling the future. Secretive consultations between
private diviners and their clients were thus suspect. So were divinatory techniques such as astrology when used for
illicit, subversive or magical purposes. Astrologers and magicians were officially expelled from Rome at various
times, notably in 139 BC and 33 BC. In AD 16 Tiberius expelled them under extreme penalty because an astrologer had
predicted his death. "Egyptian rites" were particularly suspect: Augustus banned them within the pomerium to doubtful
effect; Tiberius repeated and extended the ban with extreme force in AD 19. Despite several Imperial bans, magic
and astrology persisted among all social classes. In the late 1st century AD, Tacitus observed that astrologers "would
always be banned and always retained at Rome". In the Graeco-Roman world, practitioners of magic were known as magi
(singular magus), a "foreign" title of Persian priests. Apuleius, defending himself against accusations of casting
magic spells, defined the magician as "in popular tradition (more vulgari)... someone who, because of his community
of speech with the immortal gods, has an incredible power of spells (vi cantaminum) for everything he wishes to."
Pliny the Elder offers a thoroughly skeptical "History of magical arts" from their supposed Persian origins to Nero's
vast and futile expenditure on research into magical practices in an attempt to control the gods. Philostratus takes
pains to point out that the celebrated Apollonius of Tyana was definitely not a magus, "despite his special knowledge
of the future, his miraculous cures, and his ability to vanish into thin air". Lucan depicts Sextus Pompeius, the
doomed son of Pompey the Great, as convinced "the gods of heaven knew too little" and awaiting the Battle of Pharsalus
by consulting with the Thessalian witch Erichtho, who practices necromancy and inhabits deserted graves, feeding
on rotting corpses. Erichtho, it is said, can arrest "the rotation of the heavens and the flow of rivers" and make
"austere old men blaze with illicit passions". She and her clients are portrayed as undermining the natural order
of gods, mankind and destiny. A female foreigner from Thessaly, notorious for witchcraft, Erichtho is the stereotypical
witch of Latin literature, along with Horace's Canidia. The Twelve Tables forbade any harmful incantation (malum
carmen, or 'noisome metrical charm'); this included the "charming of crops from one field to another" (excantatio
frugum) and any rite that sought harm or death to others. Chthonic deities functioned at the margins of Rome's divine
and human communities; although sometimes the recipients of public rites, these were conducted outside the sacred
boundary of the pomerium. Individuals seeking their aid did so away from the public gaze, during the hours of darkness.
Burial grounds and isolated crossroads were among the likely portals. The barrier between private religious practices
and "magic" is permeable, and Ovid gives a vivid account of rites at the fringes of the public Feralia festival that
are indistinguishable from magic: an old woman squats among a circle of younger women, sews up a fish-head, smears
it with pitch, then pierces and roasts it to "bind hostile tongues to silence". By this she invokes Tacita, the "Silent
One" of the underworld. Archaeology confirms the widespread use of binding spells (defixiones), magical papyri and
so-called "voodoo dolls" from a very early era. Around 250 defixiones have been recovered just from Roman Britain,
in both urban and rural settings. Some seek straightforward, usually gruesome revenge, often for a lover's offense
or rejection. Others appeal for divine redress of wrongs, in terms familiar to any Roman magistrate, and promise
a portion of the value (usually small) of lost or stolen property in return for its restoration. None of these defixiones
seem to have been produced by, or on behalf of, the elite, who had more immediate recourse to human law and justice. Similar traditions
existed throughout the empire, persisting until around the 7th century AD, well into the Christian era. Rome's government,
politics and religion were dominated by an educated, male, landowning military aristocracy. Approximately half of Rome's
population were slaves or free non-citizens. Most others were plebeians, the lowest class of Roman citizens. Less
than a quarter of adult males had voting rights; far fewer could actually exercise them. Women had no vote. However,
all official business was conducted under the divine gaze and auspices, in the name of the senate and people of Rome.
"In a very real sense the senate was the caretaker of the Romans’ relationship with the divine, just as it was the
caretaker of their relationship with other humans". The links between religious and political life were vital to
Rome's internal governance, diplomacy and development from kingdom, to Republic and to Empire. Post-regal politics
dispersed the civil and religious authority of the kings more or less equitably among the patrician elite: kingship
was replaced by two annually elected consular offices. In the early Republic, as presumably in the regal era, plebeians
were excluded from high religious and civil office, and could be punished for offenses against laws of which they
had no knowledge. They resorted to strikes and violence to break the oppressive patrician monopolies of high office,
public priesthood, and knowledge of civil and religious law. The senate appointed Camillus as dictator to handle
the emergency; he negotiated a settlement, and sanctified it by the dedication of a temple to Concordia. The religious
calendars and laws were eventually made public. Plebeian tribunes were appointed, with sacrosanct status and the
right of veto in legislative debate. In principle, the augural and pontifical colleges were now open to plebeians.
In reality, the patrician and to a lesser extent, plebeian nobility dominated religious and civil office throughout
the Republican era and beyond. While the new plebeian nobility made social, political and religious inroads on traditionally
patrician preserves, their electorate maintained their distinctive political traditions and religious cults. During
the Punic crisis, popular cult to Dionysus emerged from southern Italy; Dionysus was equated with Father Liber, the
inventor of plebeian augury and personification of plebeian freedoms, and with Roman Bacchus. Official consternation
at these enthusiastic, unofficial Bacchanalia cults was expressed as moral outrage at their supposed subversion,
and was followed by ferocious suppression. Much later, a statue of Marsyas, the silen of Dionysus flayed by Apollo,
became a focus of brief symbolic resistance to Augustus' censorship. Augustus himself claimed the patronage of Venus
and Apollo; but his settlement appealed to all classes. Where loyalty was implicit, no divine hierarchy need be politically
enforced; Liber's festival continued. The Augustan settlement built upon a cultural shift in Roman society. In the
middle Republican era, even Scipio's tentative hints that he might be Jupiter's special protege sat ill with his
colleagues. Politicians of the later Republic were less equivocal; both Sulla and Pompey claimed special relationships
with Venus. Julius Caesar went further, and claimed her as his ancestress. Such claims suggested personal character
and policy as divinely inspired; an appointment to priesthood offered divine validation. In 63 BC, Julius Caesar's
appointment as pontifex maximus "signaled his emergence as a major player in Roman politics". Likewise, political
candidates could sponsor temples, priesthoods and the immensely popular, spectacular public ludi and munera whose
provision became increasingly indispensable to the factional politics of the Late Republic. Under the principate,
such opportunities were limited by law; priestly and political power were consolidated in the person of the princeps
("first citizen"). By the end of the regal period Rome had developed into a city-state, with a large plebeian, artisan
class excluded from the old patrician gentes and from the state priesthoods. The city had commercial and political
treaties with its neighbours; according to tradition, Rome's Etruscan connections established a temple to Minerva
on the predominantly plebeian Aventine; she became part of a new Capitoline triad of Jupiter, Juno and Minerva, installed
in a Capitoline temple, built in an Etruscan style and dedicated in a new September festival, Epulum Jovis. These
are supposedly the first Roman deities whose images were adorned, as if noble guests, at their own inaugural banquet.
Rome's diplomatic agreement with her neighbours of Latium confirmed the Latin league and brought the cult of Diana
from Aricia to the Aventine, where it was established as the commune Latinorum Dianae templum, the common temple of
Diana of the Latins. At about the same time, the temple of Jupiter Latiaris was built on the Alban mount, its stylistic resemblance to the new
Capitoline temple pointing to Rome's inclusive hegemony. Rome's affinity to the Latins allowed two Latin cults within
the pomerium: the cult to Hercules at the ara maxima in the Forum Boarium, established through commercial
connections with Tibur, and the Tusculan cult of Castor as the patron of cavalry, which found a home close to the Forum
Romanum. Juno Sospita and Juno Regina were brought from Italy, and Fortuna Primigenia from Praeneste. In 217 BC, Venus
was brought from Sicily and installed in a temple on the Capitoline hill. The introduction of new or equivalent deities
coincided with Rome's most significant aggressive and defensive military forays. In 206 BC the Sibylline books commended
the introduction of cult to the aniconic Magna Mater (Great Mother) from Pessinus, installed on the Palatine in 191
BC. The mystery cult to Bacchus followed; it was suppressed as subversive and unruly by decree of the Senate in 186
BC. Greek deities were brought within the sacred pomerium: temples were dedicated to Juventas (Hebe) in 191 BC, Diana
(Artemis) in 179 BC, Mars (Ares) in 138 BC, and to Bona Dea, equivalent to Fauna, the female counterpart of the
rural Faunus, supplemented by the Greek goddess Damia. Further Greek influences on cult images and types represented
the Roman Penates as forms of the Greek Dioscuri. The military-political adventurers of the Later Republic introduced
the Phrygian goddess Ma (identified with Roman Bellona), the Egyptian mystery-goddess Isis and the Persian Mithras. The
spread of Greek literature, mythology and philosophy offered Roman poets and antiquarians a model for the interpretation
of Rome's festivals and rituals, and the embellishment of its mythology. Ennius translated the work of Graeco-Sicilian
Euhemerus, who explained the genesis of the gods as apotheosized mortals. In the last century of the Republic, Epicurean
and particularly Stoic interpretations were a preoccupation of the literate elite, most of whom held, or had held,
high office and traditional Roman priesthoods; notably, Scaevola and the polymath Varro. For Varro, well versed
in Euhemerus' theory, popular religious observance was based on a necessary fiction; what the people believed was
not itself the truth, but their observance led them toward as much of the higher truth as their limited capacity
could grasp. Whereas in popular belief deities held power over mortal lives, the skeptic might say that mortal devotion
had made gods of mortals, and these same gods were only sustained by devotion and cult. Just as Rome itself claimed
the favour of the gods, so did some individual Romans. In the mid-to-late Republican era, and probably much earlier,
many of Rome's leading clans acknowledged a divine or semi-divine ancestor and laid personal claim to their favour
and cult, along with a share of their divinity. Most notably in the very late Republic, the Julii claimed Venus Genetrix
as ancestor; this would be one of many foundations for the Imperial cult. The claim was further elaborated and justified
in Vergil's poetic, Imperial vision of the past. Towards the end of the Republic, religious and political offices
became more closely intertwined; the office of pontifex maximus became a de facto consular prerogative. Augustus
was personally vested with an extraordinary breadth of political, military and priestly powers; at first temporarily,
then for his lifetime. He acquired or was granted an unprecedented number of Rome's major priesthoods, including
that of pontifex maximus; as he invented none, he could claim them as traditional honours. His reforms were represented
as adaptive, restorative and regulatory, rather than innovative; most notably his elevation (and membership) of the
ancient Arvales, his timely promotion of the plebeian Compitalia shortly before his election and his patronage of
the Vestals as a visible restoration of Roman morality. Augustus obtained the pax deorum, maintained it for the rest
of his reign and adopted a successor to ensure its continuation. This remained a primary religious and social duty
of emperors. The Roman Empire expanded to include different peoples and cultures; in principle, Rome followed the
same inclusionist policies that had recognised Latin, Etruscan and other Italian peoples, cults and deities as Roman.
Those who acknowledged Rome's hegemony retained their own cult and religious calendars, independent of Roman religious
law. Newly municipal Sabratha built a Capitolium near its existing temple to Liber Pater and Serapis. Autonomy and
concord were official policy, but new foundations by Roman citizens or their Romanised allies were likely to follow
Roman cultic models. Romanisation offered distinct political and practical advantages, especially to local elites.
All the known effigies from the 2nd century AD forum at Cuicul are of emperors or Concordia. By the middle of the
1st century AD, Gaulish Vertault seems to have abandoned its native cultic sacrifice of horses and dogs in favour
of a newly established, Romanised cult nearby: by the end of that century, Sabratha’s so-called tophet was no longer
in use. Colonial and later Imperial provincial dedications to Rome's Capitoline Triad were a logical choice, not
a centralised legal requirement. Major cult centres to "non-Roman" deities continued to prosper: notable examples
include the magnificent Alexandrian Serapium, the temple of Aesculapeus at Pergamum and Apollo's sacred wood at Antioch.
Military settlement within the empire and at its borders broadened the context of Romanitas. Rome's citizen-soldiers
set up altars to multiple deities, including their traditional gods, the Imperial genius and local deities – sometimes
with the usefully open-ended dedication to the diis deabusque omnibus (all the gods and goddesses). They also brought
Roman "domestic" deities and cult practices with them. By the same token, the later granting of citizenship to provincials
and their conscription into the legions brought their new cults into the Roman military. The first and last Roman
known as a living divus was Julius Caesar, who seems to have aspired to divine monarchy; he was murdered soon after.
Greek allies had their own traditional cults to rulers as divine benefactors, and offered similar cult to Caesar's
successor, Augustus, who accepted with the cautious proviso that expatriate Roman citizens refrain from such worship;
it might prove fatal. By the end of his reign, Augustus had appropriated Rome's political apparatus – and most of
its religious cults – within his "reformed" and thoroughly integrated system of government. Towards the end of his
life, he cautiously allowed cult to his numen. By then the Imperial cult apparatus was fully developed, first in
the Eastern Provinces, then in the West. Provincial cult centres offered the amenities and opportunities of a major
Roman town within a local context; bathhouses, shrines and temples to Roman and local deities, amphitheatres and
festivals. In the early Imperial period, the promotion of local elites to Imperial priesthood gave them Roman citizenship.
In Rome, state cult to a living emperor acknowledged his rule as divinely approved and constitutional. As princeps
(first citizen) he must respect traditional Republican mores; given virtually monarchic powers, he must restrain
them. He was not a living divus but father of his country (pater patriae), its pontifex maximus (greatest priest)
and at least notionally, its leading Republican. When he died, his ascent to heaven, or his descent to join the dii
manes was decided by a vote in the Senate. As a divus, he could receive much the same honours as any other state
deity – libations of wine, garlands, incense, hymns and sacrificial oxen at games and festivals. What he did in return
for these favours is unknown, but literary hints and the later adoption of divus as a title for Christian Saints
suggest him as a heavenly intercessor. In Rome, official cult to a living emperor was directed to his genius; a small
number refused this honour and there is no evidence of any emperor receiving more than that. In the crises leading
up to the Dominate, Imperial titles and honours multiplied, reaching a peak under Diocletian. Emperors before him
had attempted to guarantee traditional cults as the core of Roman identity and well-being; refusal of cult undermined
the state and was treasonous. For at least a century before the establishment of the Augustan principate, Jews and
Judaism were tolerated in Rome by diplomatic treaty with Judaea's Hellenised elite. Diaspora Jews had much in common
with the overwhelmingly Hellenic or Hellenised communities that surrounded them. Early Italian synagogues have left
few traces; but one was dedicated in Ostia around the mid-1st century BC and several more are attested during the
Imperial period. Judaea's enrollment as a client kingdom in 63 BC increased the Jewish diaspora; in Rome, this led
to closer official scrutiny of their religion. Their synagogues were recognised as legitimate collegia by Julius
Caesar. By the Augustan era, the city of Rome was home to several thousand Jews. In some periods under Roman rule,
Jews were legally exempt from official sacrifice, under certain conditions. Judaism was a superstitio to Cicero,
but the Church Father Tertullian described it as religio licita (an officially permitted religion) in contrast to
Christianity. After the Great Fire of Rome in AD 64, Emperor Nero blamed the Christians as convenient scapegoats;
they were subsequently persecuted and killed. From that point on, official Roman policy towards Christianity tended towards
persecution. During the various Imperial crises of the 3rd century, “contemporaries were predisposed to decode any
crisis in religious terms”, regardless of their allegiance to particular practices or belief systems. Christianity
drew its traditional base of support from the powerless, who seemed to have no religious stake in the well-being
of the Roman State, and it therefore appeared to threaten the state's existence. The majority of Rome’s elite continued to observe various
forms of inclusive Hellenistic monism; Neoplatonism in particular accommodated the miraculous and the ascetic within
a traditional Graeco-Roman cultic framework. Christians saw these ungodly practices as a primary cause of economic
and political crisis. In the wake of religious riots in Egypt, the emperor Decius decreed that all subjects of the
Empire must actively seek to benefit the state through witnessed and certified sacrifice to "ancestral gods" or suffer
a penalty: only Jews were exempt. Decius' edict appealed to whatever common mos maiores might reunite a politically
and socially fractured Empire and its multitude of cults; no ancestral gods were specified by name. The fulfillment
of sacrificial obligation by loyal subjects would define them and their gods as Roman. Roman oaths of loyalty were
traditionally collective; the Decian oath has been interpreted as a design to root out individual subversives and
suppress their cults, but apostasy was sought, rather than capital punishment. A year after its due deadline, the
edict expired. Valerian's first religious edict singled out Christianity as a particularly self-interested and subversive
foreign cult, outlawed its assemblies and urged Christians to sacrifice to Rome's traditional gods. His second edict
acknowledged a Christian threat to the Imperial system – not yet at its heart but close to it, among Rome’s equites
and Senators. Christian apologists interpreted his disgraceful capture and death as divine judgement. The next forty
years were peaceful; the Christian church grew stronger and its literature and theology gained a higher social and
intellectual profile, due in part to its own search for political toleration and theological coherence. Origen discussed
theological issues with traditionalist elites in a common Neoplatonist frame of reference – he had written to Decius'
predecessor Philip the Arab in similar vein – and Hippolytus recognised a “pagan” basis in Christian heresies. The
Christian churches were disunited; Paul of Samosata, Bishop of Antioch was deposed by a synod of 268 for "dogmatic
reasons – his doctrine on the human nature of Christ was rejected – and for his lifestyle, which reminded his brethren
of the habits of the administrative elite". The reasons for his deposition were widely circulated among the churches.
Meanwhile, Aurelian (270-75) appealed for harmony among his soldiers (concordia militum), stabilised the Empire and
its borders and successfully established an official, Hellenic form of unitary cult to the Palmyrene Sol Invictus
in Rome's Campus Martius. In 295, a certain Maximilian refused military service; in 298 Marcellus renounced his military
oath. Both were executed for treason; both were Christians. At some time around 302, a report of ominous haruspicy
in Diocletian's domus and a subsequent (but undated) diktat of placatory sacrifice by the entire military triggered
a series of edicts against Christianity. The first (303 AD) "ordered the destruction of church buildings and Christian
texts, forbade services to be held, degraded officials who were Christians, re-enslaved imperial freedmen who were
Christians, and reduced the legal rights of all Christians... [Physical] or capital punishments were not imposed
on them" but soon after, several Christians suspected of attempted arson in the palace were executed. The second
edict threatened Christian priests with imprisonment and the third offered them freedom if they performed sacrifice.
An edict of 304 enjoined universal sacrifice to traditional gods, in terms that recall the Decian edict. In some
cases and in some places the edicts were strictly enforced: some Christians resisted and were imprisoned or martyred.
Others complied. Some local communities were not only predominantly Christian, but also powerful and influential; and
some provincial authorities were lenient, notably the Caesar in Gaul, Constantius Chlorus, the father of Constantine
I. Diocletian's successor Galerius maintained anti-Christian policy until his deathbed revocation in 311, when he
asked Christians to pray for him. "This meant an official recognition of their importance in the religious world of
the Roman empire, although one of the tetrarchs, Maximinus Daia, still oppressed Christians in his part of the empire
up to 313." With the abatement of persecution, St. Jerome acknowledged the Empire as a bulwark against evil but insisted
that "imperial honours" were contrary to Christian teaching. His was an authoritative but minority voice: most Christians
showed no qualms in the veneration of even "pagan" emperors. The peace of the emperors was the peace of God; as far
as the Church was concerned, internal dissent and doctrinal schism were a far greater problem. The solution came
from a hitherto unlikely source: as pontifex maximus Constantine I favoured the "Catholic Church of the Christians"
against the Donatists. Constantine successfully balanced his own role as an instrument of the pax deorum
with the power of the Christian priesthoods in determining what was (in traditional Roman terms) auspicious - or
in Christian terms, what was orthodox. The edict of Milan (313) redefined Imperial ideology as one of mutual toleration.
Constantine had triumphed under the signum (sign) of the Christ: Christianity was therefore officially embraced along
with traditional religions and from his new Eastern capital, Constantine could be seen to embody both Christian and
Hellenic religious interests. He may have officially ended – or attempted to end – blood sacrifices to the genius
of living emperors but his Imperial iconography and court ceremonial outstripped Diocletian's in their supra-human
elevation of the Imperial hierarch. His later direct intervention in Church affairs proved a political masterstroke.
Constantine united the empire as an absolute head of state, and on his death he was honoured both as a Christian
and as an Imperial divus. At the time, there were many varying opinions about Christian doctrine, and no centralized way of enforcing
orthodoxy. Constantine called all the Christian bishops throughout the Roman Empire to a meeting, and some 318 bishops
(very few from the Western Empire) attended the First Council of Nicaea. The purpose of this meeting was to define
Christian orthodoxy and clearly differentiate it from Christian heresies. The meeting reached consensus on the Nicene
Creed and other statements. Later, Philostorgius criticized Christians who offered sacrifice at statues of the divus
Constantine. Constantine nevertheless took great pains to assuage traditionalist and Christian anxieties. The emperor
Julian made a short-lived attempt to revive traditional and Hellenistic religion and to affirm the special status
of Judaism, but in 380 under Theodosius I, Nicene Christianity became the official state religion of the Roman Empire.
Pleas for religious tolerance from traditionalists such as the senator Symmachus (d. 402) were rejected. Christianity
became increasingly popular. Heretics as well as non-Christians were subject to exclusion from public life or persecution,
but Rome's original religious hierarchy and many aspects of its ritual influenced Christian forms, and many pre-Christian
beliefs and practices survived in Christian festivals and local traditions. Constantine's nephew Julian rejected
the "Galilean madness" of his upbringing for an idiosyncratic synthesis of neo-Platonism, Stoic asceticism and universal
solar cult. Julian became Augustus in 361 and actively but vainly fostered a religious and cultural pluralism, attempting
a restitution of non-Christian practices and rights. He proposed the rebuilding of Jerusalem's temple as an Imperial
project and argued against the "irrational impieties" of Christian doctrine. His attempt to restore an Augustan form
of principate, with himself as primus inter pares ended with his death in 363 in Persia, after which his reforms
were reversed or abandoned. The empire once again fell under Christian control, this time permanently. The Western
emperor Gratian refused the office of pontifex maximus, and against the protests of the senate, removed the altar
of Victory from the senate house and began the disestablishment of the Vestals. Theodosius I briefly re-united the
Empire: in 391 he officially adopted Nicene Christianity as the Imperial religion and ended official support for
all other creeds and cults. He not only refused to restore Victory to the senate-house, but extinguished the Sacred
fire of the Vestals and vacated their temple: the senatorial protest was expressed in a letter by Quintus Aurelius
Symmachus to the Western and Eastern emperors. Ambrose, the influential Bishop of Milan and future saint, wrote urging
the rejection of Symmachus's request for tolerance. Yet Theodosius accepted comparison with Hercules and Jupiter
as a living divinity in the panegyric of Pacatus, and despite his active dismantling of Rome's traditional cults
and priesthoods could commend his heirs to its overwhelmingly Hellenic senate in traditional Hellenic terms. He was the last emperor of both East and West.
YouTube is a global video-sharing website headquartered in San Bruno, California, United States. The service was created
by three former PayPal employees in February 2005. In November 2006, it was bought by Google for US$1.65 billion.
YouTube now operates as one of Google's subsidiaries. The site allows users to upload, view, rate, share, and comment
on videos, and it makes use of WebM, H.264/MPEG-4 AVC, and Adobe Flash Video technology to display a wide variety
of user-generated and corporate media video. Available content includes video clips, TV clips, music videos, movie
trailers, and other content such as video blogging, short original videos, and educational videos. According to a
story that has often been repeated in the media, Hurley and Chen developed the idea for YouTube during the early
months of 2005, after they had experienced difficulty sharing videos that had been shot at a dinner party at Chen's
apartment in San Francisco. Karim did not attend the party and denied that it had occurred, but Chen commented that
the idea that YouTube was founded after a dinner party "was probably very strengthened by marketing ideas around
creating a story that was very digestible". YouTube offered the public a beta test of the site in May 2005. The first
video to reach one million views was a Nike advertisement featuring Ronaldinho in September 2005. Following a $3.5
million investment from Sequoia Capital in November, the site launched officially on December 15, 2005, by which
time the site was receiving 8 million views a day. The site grew rapidly, and in July 2006 the company announced
that more than 65,000 new videos were being uploaded every day, and that the site was receiving 100 million video
views per day. According to data published by market research company comScore, YouTube is the dominant provider
of online video in the United States, with a market share of around 43% and more than 14 billion views of videos
in May 2010. In 2014 YouTube said that 300 hours of new videos were uploaded to the site every minute, three times
more than one year earlier and that around three quarters of the material comes from outside the U.S. The site has
800 million unique users a month. It is estimated that in 2007 YouTube consumed as much bandwidth as the entire Internet
in 2000. According to third-party web analytics providers, Alexa and SimilarWeb, YouTube is the third most visited
website in the world, as of June 2015; SimilarWeb also lists YouTube as the top TV and video website globally, attracting
more than 15 billion visitors per month. On March 31, 2010, the YouTube website launched a new design, with the aim
of simplifying the interface and increasing the time users spend on the site. Google product manager Shiva Rajaraman
commented: "We really felt like we needed to step back and remove the clutter." In May 2010, it was reported that
YouTube was serving more than two billion videos a day, which it described as "nearly double the prime-time audience
of all three major US television networks combined". In May 2011, YouTube reported in its company blog that the site
was receiving more than three billion views per day. In January 2012, YouTube stated that the figure had increased
to four billion videos streamed per day. In February 2015, YouTube announced the launch of a new app specifically
for use by children visiting the site, called YouTube Kids. It allows parental controls and restrictions on who can
upload content, and is available for both Android and iOS devices. Later on August 26, 2015, YouTube Gaming was launched,
a platform for video gaming enthusiasts intended to compete with Twitch.tv. 2015 also saw the announcement of a premium
YouTube service titled YouTube Red, which provides users with both ad-free content as well as the ability to download
videos among other features. In January 2010, YouTube launched an experimental version of the site that used the
built-in multimedia capabilities of web browsers supporting the HTML5 standard. This allowed videos to be viewed
without requiring Adobe Flash Player or any other plug-in to be installed. The YouTube site had a page that allowed
supported browsers to opt into the HTML5 trial. Only browsers that supported HTML5 Video using the H.264 or WebM
formats could play the videos, and not all videos on the site were available. All YouTube users can upload videos
up to 15 minutes each in duration. Users who have a good track record of complying with the site's Community Guidelines
may be offered the ability to upload videos up to 12 hours in length, which requires verifying the account, normally
through a mobile phone. When YouTube was launched in 2005, it was possible to upload long videos, but a ten-minute
limit was introduced in March 2006 after YouTube found that the majority of videos exceeding this length were unauthorized
uploads of television shows and films. The 10-minute limit was increased to 15 minutes in July 2010. If an up-to-date
browser version is used, videos greater than 20 GB can be uploaded. In November 2008, 720p HD support was added.
At the time of the 720p launch, the YouTube player was changed from a 4:3 aspect ratio to a widescreen 16:9. With
this new feature, YouTube began a switchover to H.264/MPEG-4 AVC as its default video compression format. In November
2009, 1080p HD support was added. In July 2010, YouTube announced that it had launched a range of videos in 4K format,
which allows a resolution of up to 4096×3072 pixels. In June 2015, support for 8K resolution was added, with the
videos playing at 7680×4320 pixels. In a video posted on July 21, 2009, YouTube software engineer Peter Bradshaw
announced that YouTube users can now upload 3D videos. The videos can be viewed in several different ways, including
the common anaglyph (cyan/red lens) method which utilizes glasses worn by the viewer to achieve the 3D effect. The
YouTube Flash player can display stereoscopic content interleaved in rows, columns or a checkerboard pattern, side-by-side
or anaglyph using a red/cyan, green/magenta or blue/yellow combination. In May 2011, an HTML5 version of the YouTube
player began supporting side-by-side 3D footage that is compatible with Nvidia 3D Vision. YouTube offers users the
ability to view its videos on web pages outside their website. Each YouTube video is accompanied by a piece of HTML
that can be used to embed it on any page on the Web. This functionality is often used to embed YouTube videos in
social networking pages and blogs. Users wishing to post a video discussing, inspired by or related to another user's
video are able to make a "video response". On August 27, 2013, YouTube announced that it would remove video responses
for being an underused feature. Embedding, rating, commenting and response posting can be disabled by the video owner.
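The embedding described above works by pasting a small HTML fragment into another page. A minimal sketch of what such a fragment looks like, assuming the commonly documented iframe form with a `youtube.com/embed/` URL (the helper name and default dimensions are illustrative, not quoted from this article):

```python
def embed_snippet(video_id: str, width: int = 560, height: int = 315) -> str:
    """Build an HTML fragment that embeds a YouTube video in another web page.

    The iframe-based markup here is the widely documented embed form; the
    exact HTML YouTube provides may differ.
    """
    return (
        f'<iframe width="{width}" height="{height}" '
        f'src="https://www.youtube.com/embed/{video_id}" '
        f'frameborder="0" allowfullscreen></iframe>'
    )

print(embed_snippet("dQw4w9WgXcQ"))
```

A blog or social-networking page that includes this fragment will render the video player inline, which is exactly the use case the passage describes.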
YouTube does not usually offer a download link for its videos, and intends for them to be viewed through its website
interface. A small number of videos, such as the weekly addresses by President Barack Obama, can be downloaded as
MP4 files. Numerous third-party web sites, applications and browser plug-ins allow users to download YouTube videos.
In February 2009, YouTube announced a test service, allowing some partners to offer video downloads for free or for
a fee paid through Google Checkout. In June 2012, Google sent cease and desist letters threatening legal action against
several websites offering online download and conversion of YouTube videos. In response, Zamzar removed the ability
to download YouTube videos from its site. The default settings when uploading a video to YouTube will retain a copyright
on the video for the uploader, but since July 2012 it has been possible to select a Creative Commons license as the
default, allowing other users to reuse and remix the material if it is free of copyright. Since June 2007, YouTube's
videos have been available for viewing on a range of Apple products. This required YouTube's content to be transcoded
into Apple's preferred video standard, H.264, a process that took several months. YouTube videos can be viewed on
devices including Apple TV, iPod Touch and the iPhone. In July 2010, the mobile version of the site was relaunched
based on HTML5, avoiding the need to use Adobe Flash Player and optimized for use with touch screen controls. The
mobile version is also available as an app for the Android platform. In September 2012, YouTube launched its first
app for the iPhone, following the decision to drop YouTube as one of the preloaded apps in the iPhone 5 and iOS 6
operating system. According to GlobalWebIndex, YouTube was used by 35% of smartphone users between April and June
2013, making it the third most used app. A TiVo service update in July 2008 allowed the system to search and play
YouTube videos. In January 2009, YouTube launched "YouTube for TV", a version of the website tailored for set-top
boxes and other TV-based media devices with web browsers, initially allowing its videos to be viewed on the PlayStation
3 and Wii video game consoles. In June 2009, YouTube XL was introduced, which has a simplified interface designed
for viewing on a standard television screen. YouTube is also available as an app on Xbox Live. On November 15, 2012,
Google launched an official app for the Wii, allowing users to watch YouTube videos from the Wii channel. An app
is also available for Wii U and Nintendo 3DS, and videos can be viewed on the Wii U Internet Browser using HTML5.
Google made YouTube available on the Roku player on December 17, 2013 and in October 2014, the Sony PlayStation 4.
YouTube Red is YouTube's premium subscription service. It offers advertising-free streaming, access to exclusive
content, background and offline video playback on mobile devices, and access to the Google Play Music "All Access"
service. YouTube Red was originally announced on November 12, 2014, as "Music Key", a subscription music streaming
service, and was intended to integrate with and replace the existing Google Play Music "All Access" service. On October
28, 2015, the service was re-launched as YouTube Red, offering ad-free streaming of all videos, as well as access
to exclusive original content. Both private individuals and large production companies have used YouTube to grow
audiences. Independent content creators have built grassroots followings numbering in the thousands at very little
cost or effort, while mass retail and radio promotion proved problematic. Concurrently, old media celebrities moved
into the website at the invitation of a YouTube management that witnessed early content creators accruing substantial
followings, and perceived audience sizes potentially larger than those attainable by television. While YouTube's revenue-sharing "Partner Program" made it possible to earn a substantial living as a video producer (its top five hundred partners each earned more than $100,000 annually, and its ten highest-earning channels grossed from $2.5 million to $12 million), in 2012 a CMU business editor characterized YouTube as "a free-to-use... promotional platform for the music labels". In
2013 Forbes' Katheryn Thayer asserted that digital-era artists' work must not only be of high quality, but must elicit
reactions on the YouTube platform and social media. In 2013, videos of the 2.5% of artists categorized as "mega",
"mainstream" and "mid-sized" received 90.3% of the relevant views on YouTube and Vevo. By early 2013 Billboard had
announced that it was factoring YouTube streaming data into calculation of the Billboard Hot 100 and related genre
charts. Observing that face-to-face communication of the type that online videos convey has been "fine-tuned by millions
of years of evolution", TED curator Chris Anderson referred to several YouTube contributors and asserted that "what
Gutenberg did for writing, online video can now do for face-to-face communication". Anderson asserted that it's not
far-fetched to say that online video will dramatically accelerate scientific advance, and that video contributors
may be about to launch "the biggest learning cycle in human history." In education, for example, the Khan Academy
grew from YouTube video tutoring sessions for founder Salman Khan's cousin into what Forbes' Michael Noer called
"the largest school in the world", with technology poised to disrupt how people learn. YouTube has enabled people
to more directly engage with government, such as in the CNN/YouTube presidential debates (2007) in which ordinary
people submitted questions to U.S. presidential candidates via YouTube video, with a techPresident co-founder saying
that Internet video was changing the political landscape. Describing the Arab Spring (beginning in 2010), sociologist Philip
N. Howard quoted an activist's succinct description that organizing the political unrest involved using "Facebook
to schedule the protests, Twitter to coordinate, and YouTube to tell the world." In 2012, more than a third of the
U.S. Senate introduced a resolution condemning Joseph Kony 16 days after the "Kony 2012" video was posted to YouTube,
with resolution co-sponsor Senator Lindsey Graham remarking that the video "will do more to lead to (Kony's) demise
than all other action combined." Conversely, YouTube has also allowed government to more easily engage with citizens,
the White House's official YouTube channel being the seventh top news organization producer on YouTube in 2012 and
in 2013 a healthcare exchange commissioned Obama impersonator Iman Crosson's YouTube music video spoof to encourage
young Americans to enroll in the Affordable Care Act (Obamacare)-compliant health insurance. In February 2014, U.S.
President Obama held a meeting at the White House with leading YouTube content creators to not only promote awareness
of Obamacare but more generally to develop ways for government to better connect with the "YouTube Generation". Whereas
YouTube's inherent ability to allow presidents to directly connect with average citizens was noted, the YouTube content
creators' new media savvy was perceived necessary to better cope with the website's distracting content and fickle
audience. The anti-bullying It Gets Better Project expanded from a single YouTube video directed to discouraged or
suicidal LGBT teens, which within two months drew video responses from hundreds of people, including U.S. President Barack Obama,
Vice President Biden, White House staff, and several cabinet secretaries. Similarly, in response to fifteen-year-old
Amanda Todd's video "My story: Struggling, bullying, suicide, self-harm", legislative action was undertaken almost
immediately after her suicide to study the prevalence of bullying and form a national anti-bullying strategy. Google
does not provide detailed figures for YouTube's running costs, and YouTube's revenues in 2007 were noted as "not
material" in a regulatory filing. In June 2008, a Forbes magazine article projected the 2008 revenue at $200 million,
noting progress in advertising sales. In January 2012, it was estimated that visitors to YouTube spent an average
of 15 minutes a day on the site, in contrast to the four or five hours a day spent by a typical U.S. citizen watching
television. In 2012, YouTube's revenue from its ads program was estimated at $3.7 billion. In 2013 it nearly doubled, with eMarketer estimating it would reach $5.6 billion; others estimated $4.7 billion. YouTube entered into a marketing and advertising partnership with NBC in June 2006. In November 2008, YouTube reached an agreement with
MGM, Lions Gate Entertainment, and CBS, allowing the companies to post full-length films and television episodes
on the site, accompanied by advertisements in a section for US viewers called "Shows". The move was intended to create
competition with websites such as Hulu, which features material from NBC, Fox, and Disney. In November 2009, YouTube
launched a version of "Shows" available to UK viewers, offering around 4,000 full-length shows from more than 60
partners. In January 2010, YouTube introduced an online film rentals service, which is available only to users in
the US, Canada and the UK as of 2010. The service offers over 6,000 films. In May 2007, YouTube launched its Partner
Program, a system based on AdSense which allows the uploader of the video to share the revenue produced by advertising
on the site. YouTube typically takes 45 percent of the advertising revenue from videos in the Partner Program, with
55 percent going to the uploader. There are over a million members of the YouTube Partner Program. According to TubeMogul,
in 2013 a pre-roll advertisement on YouTube (one that is shown before the video starts) cost advertisers on average
$7.60 per 1000 views. Usually no more than half of eligible videos have a pre-roll advertisement, due to a lack of
interested advertisers. Assuming pre-roll advertisements on half of videos, a YouTube partner would earn 0.5 × $7.60 × 55% = $2.09 per 1000 views in 2013. Much of YouTube's revenue goes to the copyright holders of the videos. In 2010
it was reported that nearly a third of the videos with advertisements were uploaded without permission of the copyright
holders. YouTube gives an option for copyright holders to locate and remove their videos or to have them continue
running for revenue. In May 2013, Nintendo began enforcing its copyright ownership and claiming the advertising revenue
from video creators who posted screenshots of its games. In February 2015, Nintendo agreed to share the revenue with
the video creators. At the time of uploading a video, YouTube users are shown a message asking them not to violate
copyright laws. Despite this advice, there are still many unauthorized clips of copyrighted material on YouTube.
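The partner-earnings estimate a few sentences above (a $7.60 pre-roll CPM, ads on roughly half of eligible videos, and a 55 percent uploader share) can be checked with a short calculation; the function name and parameter defaults are illustrative, with the figures taken from the text:

```python
def partner_earnings_per_1000_views(cpm: float = 7.60,
                                    ad_fill_rate: float = 0.5,
                                    uploader_share: float = 0.55) -> float:
    """Estimated uploader earnings per 1000 views:
    advertiser CPM x fraction of views carrying an ad x uploader revenue share."""
    return cpm * ad_fill_rate * uploader_share

print(round(partner_earnings_per_1000_views(), 2))  # 2.09
```

The same formula shows why the figure is sensitive to ad fill: if every view carried a pre-roll ad, the estimate would double to about $4.18 per 1000 views.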
YouTube does not view videos before they are posted online, and it is left to copyright holders to issue a DMCA takedown
notice pursuant to the terms of the Online Copyright Infringement Liability Limitation Act. Three successful complaints
for copyright infringement against a user account will result in the account and all of its uploaded videos being
deleted. Organizations including Viacom, Mediaset, and the English Premier League have filed lawsuits against YouTube,
claiming that it has done too little to prevent the uploading of copyrighted material. Viacom, demanding $1 billion
in damages, said that it had found more than 150,000 unauthorized clips of its material on YouTube that had been
viewed "an astounding 1.5 billion times". YouTube responded by stating that it "goes far beyond its legal obligations
in assisting content owners to protect their works". During the same court battle, Viacom won a court ruling requiring
YouTube to hand over 12 terabytes of data detailing the viewing habits of every user who has watched videos on the
site. The decision was criticized by the Electronic Frontier Foundation, which called the court ruling "a setback
to privacy rights". In June 2010, Viacom's lawsuit against Google was rejected in a summary judgment, with U.S. federal
Judge Louis L. Stanton stating that Google was protected by provisions of the Digital Millennium Copyright Act. Viacom
announced its intention to appeal the ruling. In June 2007, YouTube began trials of a system for automatic detection
of uploaded videos that infringe copyright. Google CEO Eric Schmidt regarded this system as necessary for resolving
lawsuits such as the one from Viacom, which alleged that YouTube profited from content that it did not have the right
to distribute. The system, which became known as Content ID, creates an ID File for copyrighted audio and video material,
and stores it in a database. When a video is uploaded, it is checked against the database, and flags the video as
a copyright violation if a match is found. An independent test in 2009 uploaded multiple versions of the same song
to YouTube, and concluded that while the system was "surprisingly resilient" in finding copyright violations in the
audio tracks of videos, it was not infallible. The use of Content ID to remove material automatically has led to
controversy in some cases, as the videos have not been checked by a human for fair use. If a YouTube user disagrees
with a decision by Content ID, it is possible to fill in a form disputing the decision. YouTube has cited the effectiveness
of Content ID as one of the reasons why the site's rules were modified in December 2010 to allow some users to upload
videos of unlimited length. YouTube relies on its users to flag the content of videos as inappropriate, and a YouTube
employee will view a flagged video to determine whether it violates the site's terms of service. In July 2008, the
Culture and Media Committee of the House of Commons of the United Kingdom stated that it was "unimpressed" with YouTube's
system for policing its videos, and argued that "proactive review of content should be standard practice for sites
hosting user-generated content". YouTube responded with a statement. Most videos enable users to leave comments, and these
have attracted attention for the negative aspects of both their form and content. In 2006, Time praised Web 2.0 for
enabling "community and collaboration on a scale never seen before", and added that YouTube "harnesses the stupidity
of crowds as well as its wisdom. Some of the comments on YouTube make you weep for the future of humanity just for
the spelling alone, never mind the obscenity and the naked hatred". The Guardian in 2009 was similarly critical of users' comments on YouTube. On November 6, 2013, Google implemented a new comment system that requires all YouTube users to use a Google+ account in order to comment on videos, making the comment system Google+ oriented. The changes are in
large part an attempt to address the frequent criticisms of the quality and tone of YouTube comments. They give creators
more power to moderate and block comments, and add new sorting mechanisms to ensure that better, more relevant discussions
appear at the top. The new system restored the ability to include URLs in comments, which had previously been removed
due to problems with abuse. In response, YouTube co-founder Jawed Karim posted the question "why the fuck do I need
a google+ account to comment on a video?" on his YouTube channel to express his negative opinion of the change. The
official YouTube announcement received 20,097 "thumbs down" votes and generated more than 32,000 comments in two
days. Writing in the Newsday blog Silicon Island, Chase Melvin noted that "Google+ is nowhere near as popular a social
media network as Facebook, but it's essentially being forced upon millions of YouTube users who don't want to lose
their ability to comment on videos" and "Discussion forums across the Internet are already bursting with outcry against
the new comment system". Melvin expanded on this criticism in the same article. In some countries, YouTube is completely blocked, either through a long-standing ban or for more limited periods of time, such as during periods of unrest, the
run-up to an election, or in response to upcoming political anniversaries. In other countries access to the website
as a whole remains open, but access to specific videos is blocked. In cases where the entire site is banned due to
one particular video, YouTube will often agree to remove or limit access to that video in order to restore service.
In May 2014, prior to the launch of YouTube's subscription-based Music Key service, the independent music trade organization
Worldwide Independent Network alleged that YouTube was using non-negotiable contracts with independent labels that
were "undervalued" in comparison to other streaming services, and that YouTube would block all music content from
labels who do not reach a deal to be included on the paid service. In a statement to the Financial Times in June
2014, Robert Kyncl confirmed that YouTube would block the content of labels who do not negotiate deals to be included
in the paid service "to ensure that all content on the platform is governed by its new contractual terms." Stating
that 90% of labels had reached deals, he went on to say that "while we wish that we had [a] 100% success rate, we
understand that is not likely an achievable goal and therefore it is our responsibility to our users and the industry
to launch the enhanced music experience." The Financial Times later reported that YouTube had reached an aggregate
deal with Merlin Network, a trade group representing over 20,000 independent labels, for their inclusion in the service.
However, YouTube itself has not confirmed the deal.
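The Content ID workflow described earlier in this article (fingerprint reference material supplied by rights holders, store the fingerprints in a database, and check each upload against it) can be sketched in miniature. The class below is a toy stand-in: a plain hash only matches byte-identical copies, whereas YouTube's proprietary fingerprinting is designed to survive re-encoding and editing.

```python
import hashlib


class ContentIdRegistry:
    """Toy model of the Content ID flow: rights holders register reference
    material; each upload is checked against the stored fingerprints."""

    def __init__(self) -> None:
        self._db: dict[str, str] = {}  # fingerprint -> rights holder

    @staticmethod
    def _fingerprint(media_bytes: bytes) -> str:
        # Stand-in for real audio/video fingerprinting, which is robust
        # to re-encoding; SHA-256 matches only exact copies.
        return hashlib.sha256(media_bytes).hexdigest()

    def register(self, media_bytes: bytes, rights_holder: str) -> None:
        """Store a reference fingerprint, as in building the ID File database."""
        self._db[self._fingerprint(media_bytes)] = rights_holder

    def check_upload(self, media_bytes: bytes):
        """Return the matching rights holder, or None if the upload is clear."""
        return self._db.get(self._fingerprint(media_bytes))


registry = ContentIdRegistry()
registry.register(b"reference-song-audio", "Example Label")
print(registry.check_upload(b"reference-song-audio"))  # Example Label
print(registry.check_upload(b"original-footage"))      # None
```

On a match, the real system lets the rights holder choose between removal and monetization, as the article notes; the sketch only models the lookup step.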
Jefferson's metaphor of a wall of separation has been cited repeatedly by the U.S. Supreme Court. In Reynolds v. United States
(1879) the Court wrote that Jefferson's comments "may be accepted almost as an authoritative declaration of the scope
and effect of the [First] Amendment." In Everson v. Board of Education (1947), Justice Hugo Black wrote: "In the
words of Thomas Jefferson, the clause against establishment of religion by law was intended to erect a wall of separation
between church and state." Many early immigrant groups traveled to America to worship freely, particularly after
the English Civil War and religious conflict in France and Germany. They included nonconformists like the Puritans,
who were Protestant Christians fleeing religious persecution from the Anglican King of England. Despite a common
background, the groups' views on religious toleration were mixed. While some such as Roger Williams of Rhode Island
and William Penn of Pennsylvania ensured the protection of religious minorities within their colonies, others like
the Plymouth Colony and Massachusetts Bay Colony had established churches. The Dutch colony of New Netherland established
the Dutch Reformed Church and outlawed all other worship, though enforcement was sparse. Religious conformity was
desired partly for financial reasons: the established Church was responsible for poverty relief, putting dissenting
churches at a significant disadvantage. Note 2: in 1789 the Georgia Constitution was amended as follows: "Article
IV. Section 10. No person within this state shall, upon any pretense, be deprived of the inestimable privilege of
worshipping God in any manner agreeable to his own conscience, nor be compelled to attend any place of worship contrary
to his own faith and judgment; nor shall he ever be obliged to pay tithes, taxes, or any other rate, for the building
or repairing any place of worship, or for the maintenance of any minister or ministry, contrary to what he believes
to be right, or hath voluntarily engaged to do. No one religious society shall ever be established in this state,
in preference to another; nor shall any person be denied the enjoyment of any civil right merely on account of his
religious principles." Note 5: The North Carolina Constitution of 1776 disestablished the Anglican church, but until 1835 the NC Constitution allowed only Protestants to hold public office. From 1835 to 1876 it allowed only Christians
(including Catholics) to hold public office. Article VI, Section 8 of the current NC Constitution forbids only atheists
from holding public office. Such clauses were held by the United States Supreme Court to be unenforceable in the
1961 case of Torcaso v. Watkins, when the court ruled unanimously that such clauses constituted a religious test
incompatible with First and Fourteenth Amendment protections. The Flushing Remonstrance shows support for separation
of church and state as early as the mid-17th century, stating their opposition to religious persecution of any sort:
"The law of love, peace and liberty in the states extending to Jews, Turks and Egyptians, as they are considered
sons of Adam, which is the glory of the outward state of Holland, so love, peace and liberty, extending to all in
Christ Jesus, condemns hatred, war and bondage." The document was signed December 27, 1657 by a group of English
citizens in America who were affronted by persecution of Quakers and the religious policies of the Governor of New
Netherland, Peter Stuyvesant. Stuyvesant had formally banned all religions other than the Dutch Reformed Church from
being practiced in the colony, in accordance with the laws of the Dutch Republic. The signers indicated their "desire
therefore in this case not to judge lest we be judged, neither to condemn least we be condemned, but rather let every
man stand or fall to his own Master." Stuyvesant fined the petitioners and threw them in prison until they recanted.
However, John Bowne allowed the Quakers to meet in his home. Bowne was arrested, jailed, and sent to the Netherlands
for trial; the Dutch court exonerated Bowne. There were also opponents to the support of any established church even
at the state level. In 1773, Isaac Backus, a prominent Baptist minister in New England, wrote against a state sanctioned
religion, saying: "Now who can hear Christ declare, that his kingdom is, not of this world, and yet believe that
this blending of church and state together can be pleasing to him?" He also observed that when "church and state
are separate, the effects are happy, and they do not at all interfere with each other: but where they have been confounded
together, no tongue nor pen can fully describe the mischiefs that have ensued." Thomas Jefferson's influential Virginia
Statute for Religious Freedom was enacted in 1786, five years before the Bill of Rights. The phrase "[A] hedge or
wall of separation between the garden of the church and the wilderness of the world" was first used by Baptist theologian
Roger Williams, the founder of the colony of Rhode Island, in his 1644 book The Bloody Tenent of Persecution. The
phrase was later used by Thomas Jefferson as a description of the First Amendment and its restriction on the legislative
branch of the federal government, in an 1802 letter to the Danbury Baptists (a religious minority concerned about
the dominant position of the Congregationalist church in Connecticut). Jefferson and James Madison's conceptions
of separation have long been debated. Jefferson refused to issue Proclamations of Thanksgiving sent to him by Congress
during his presidency, though he did issue a Thanksgiving and Prayer proclamation as Governor of Virginia. Madison
issued four religious proclamations while President, but vetoed two bills on the grounds they violated the first
amendment. On the other hand, both Jefferson and Madison attended religious services at the Capitol. Years before
the ratification of the Constitution, Madison contended "Because if Religion be exempt from the authority of the
Society at large, still less can it be subject to that of the Legislative Body." After retiring from the presidency,
Madison wrote of "total separation of the church from the state." "Strongly guarded as is the separation between Religion & Govt in the Constitution of the United States," he wrote, declaring that the "practical distinction between Religion and Civil Government is essential to the purity of both, and as guaranteed by the Constitution of the United States." In a letter to Edward Livingston, Madison further expanded: "We are teaching the world the great
truth that Govts. do better without Kings & Nobles than with them. The merit will be doubled by the other lesson
that Religion flourishes in greater purity, without than with the aid of Govt." Madison's original draft of the Bill
of Rights had included provisions barring the States, as well as the Federal Government, from an establishment of
religion, but the House did not pass them.[citation needed] Jefferson's opponents said his position was the destruction
and the governmental rejection of Christianity, but this was a caricature. In setting up the University of Virginia,
Jefferson encouraged all the separate sects to have preachers of their own, though there was a constitutional ban
on the State supporting a Professorship of Divinity, arising from his own Virginia Statute for Religious Freedom.
Some have argued that this arrangement was "fully compatible with Jefferson's views on the separation of church and
state;" however, others point to Jefferson's support for a scheme in which students at the University would attend
religious worship each morning as evidence that his views were not consistent with strict separation. Still other
scholars, such as Mark David Hall, attempt to sidestep the whole issue by arguing that American jurisprudence focuses
too narrowly on this one Jeffersonian letter while failing to account for other relevant history. Jefferson's letter
entered American jurisprudence in the 1878 Mormon polygamy case Reynolds v. U.S., in which the court cited Jefferson
and Madison, seeking a legal definition for the word religion. Writing for the majority, Chief Justice Morrison Waite cited Jefferson's Letter to the Danbury Baptists to state that "Congress was deprived of all legislative power
over mere opinion, but was left free to reach actions which were in violation of social duties or subversive of good
order." Considering this, the court ruled that outlawing polygamy was constitutional. Jefferson and Madison's approach
was not the only one taken in the eighteenth century. Jefferson's Statute of Religious Freedom was drafted in opposition
to a bill, chiefly supported by Patrick Henry, which would permit any Virginian to belong to any denomination, but
which would require him to belong to some denomination and pay taxes to support it. Similarly, the Constitution of
Massachusetts originally provided that "no subject shall be hurt, molested, or restrained, in his person, liberty,
or estate, for worshipping God in the manner and season most agreeable to the dictates of his own conscience... provided
he doth not disturb the public peace, or obstruct others in their religious worship" (Article II), but it also provided for the public support of Protestant worship (Article III). The Duke of York had required that every community in his new lands of New York and New Jersey support some church,
but this was more often Dutch Reformed, Quaker or Presbyterian, than Anglican. Some chose to support more than one
church. He also ordained that the tax-payers were free, having paid his local tax, to choose their own church. The
terms for the surrender of New Amsterdam had provided that the Dutch would have liberty of conscience, and the Duke,
as an openly divine-right Catholic, was no friend of Anglicanism. The first Anglican minister in New Jersey arrived
in 1698, though Anglicanism was more popular in New York. The original charter of the Province of East Jersey had
restricted membership in the Assembly to Christians; the Duke of York was fervently Catholic, and the proprietors
of Perth Amboy, New Jersey were Scottish Catholic peers. The Province of West Jersey had declared, in 1681, that
there should be no religious test for office. An oath had also been imposed on the militia during the French and
Indian War requiring them to abjure the pretensions of the Pope, which may or may not have been applied during the
Revolution. That law was replaced by 1799. The First Amendment to the US Constitution states, "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof." The two parts, known as
the "establishment clause" and the "free exercise clause" respectively, form the textual basis for the Supreme Court's
interpretations of the "separation of church and state" doctrine. Three central concepts were derived from the First Amendment, which became America's doctrine for church-state separation: no coercion in religious matters, no expectation
to support a religion against one's will, and religious liberty encompasses all religions. In sum, citizens are free
to embrace or reject a faith, any support for religion - financial or physical - must be voluntary, and all religions
are equal in the eyes of the law with no special preference or favoritism. Some legal scholars, such as John Baker
of LSU, theorize that Madison's initial proposed language—that Congress should make no law regarding the establishment
of a "national religion"—was rejected by the House, in favor of the more general "religion" in an effort to appease
the Anti-Federalists. To both the Anti-Federalists and the Federalists, the very word "national" was a cause for
alarm because of the experience under the British crown. During the debate over the establishment clause, Rep. Elbridge
Gerry of Massachusetts took issue with Madison's language regarding whether the government was a national or federal
government (in which the states retained their individual sovereignty), which Baker suggests compelled Madison to
withdraw his language from the debate. Others, such as Rep. Roger Sherman of Connecticut, believed the clause was
unnecessary because the original Constitution only gave Congress stated powers, which did not include establishing
a national religion. Anti-Federalists such as Rep. Thomas Tucker of South Carolina moved to strike the establishment
clause completely because it could preempt the religious clauses in the state constitutions. However, the Anti-Federalists
were unsuccessful in persuading the House of Representatives to drop the clause from the first amendment. The Fourteenth
Amendment to the United States Constitution (Amendment XIV) is one of the post-Civil War amendments, intended to
secure rights for former slaves. It includes the due process and equal protection clauses among others. The amendment
introduces the concept of incorporation of all relevant federal rights against the states. While it has not been
fully implemented, the doctrine of incorporation has been used to ensure, through the Due Process Clause and Privileges
and Immunities Clause, the application of most of the rights enumerated in the Bill of Rights to the states. The
incorporation of the First Amendment establishment clause in the landmark case of Everson v. Board of Education has
impacted the subsequent interpretation of the separation of church and state in regard to the state governments.
Although upholding the state law in that case, which provided for public busing to private religious schools, the
Supreme Court held that the First Amendment establishment clause was fully applicable to the state governments. A
more recent case involving the application of this principle against the states was Board of Education of Kiryas
Joel Village School District v. Grumet (1994). Jefferson's concept of "separation of church and state" first became
a part of Establishment Clause jurisprudence in Reynolds v. U.S., 98 U.S. 145 (1878). In that case, the court examined
the history of religious liberty in the US, determining that while the constitution guarantees religious freedom,
"The word 'religion' is not defined in the Constitution. We must go elsewhere, therefore, to ascertain its meaning,
and nowhere more appropriately, we think, than to the history of the times in the midst of which the provision was
adopted." The court found that the leaders in advocating and formulating the constitutional guarantee of religious
liberty were James Madison and Thomas Jefferson. Quoting the "separation" paragraph from Jefferson's letter to the
Danbury Baptists, the court concluded that, "coming as this does from an acknowledged leader of the advocates of
the measure, it may be accepted almost as an authoritative declaration of the scope and effect of the amendment thus
secured." The centrality of the "separation" concept to the Religion Clauses of the Constitution was made explicit
in Everson v. Board of Education, 330 U.S. 1 (1947), a case dealing with a New Jersey law that allowed government
funds to pay for transportation of students to both public and Catholic schools. This was the first case in which
the court applied the Establishment Clause to the laws of a state, having interpreted the due process clause of the
Fourteenth Amendment as applying the Bill of Rights to the states as well as the federal legislature. Citing Jefferson,
the court concluded that "The First Amendment has erected a wall between church and state. That wall must be kept
high and impregnable. We could not approve the slightest breach." While the decision (with four dissents) ultimately
upheld the state law allowing the funding of transportation of students to religious schools, the majority opinion
(by Justice Hugo Black) and the dissenting opinions (by Justice Wiley Blount Rutledge and Justice Robert H. Jackson)
each explicitly stated that the Constitution has erected a "wall between church and state" or a "separation of Church
from State": their disagreement was limited to whether this case of state funding of transportation to religious
schools breached that wall. Rutledge, on behalf of the four dissenting justices, took the position that the majority
had indeed permitted a violation of the wall of separation in this case: "Neither so high nor so impregnable today
as yesterday is the wall raised between church and state by Virginia's great statute of religious freedom and the
First Amendment, now made applicable to all the states by the Fourteenth." Writing separately, Justice Jackson argued
that "[T]here are no good grounds upon which to support the present legislation. In fact, the undertones of the opinion,
advocating complete and uncompromising separation of Church from State, seem utterly discordant with its conclusion
yielding support to their commingling in educational matters." In 1962, the Supreme Court addressed the issue of
officially-sponsored prayer or religious recitations in public schools. In Engel v. Vitale, 370 U.S. 421 (1962),
the Court, by a vote of 6-1, determined it unconstitutional for state officials to compose an official school prayer
and require its recitation in public schools, even when the prayer is non-denominational and students may excuse
themselves from participation. (The prayer required by the New York State Board of Regents prior to the Court's decision
consisted of: "Almighty God, we acknowledge our dependence upon Thee, and we beg Thy blessings upon us, our parents,
our teachers, and our country. Amen.") The court noted that it "is a matter of history that
this very practice of establishing governmentally composed prayers for religious services was one of the reasons
which caused many of our early colonists to leave England and seek religious freedom in America." The lone dissenter,
Justice Potter Stewart, objected to the court's embrace of the "wall of separation" metaphor: "I think that the Court's
task, in this as in all areas of constitutional adjudication, is not responsibly aided by the uncritical invocation
of metaphors like the 'wall of separation,' a phrase nowhere to be found in the Constitution." In Epperson v. Arkansas,
393 U.S. 97 (1968), the Supreme Court considered an Arkansas law that made it a crime "to teach the theory or doctrine
that mankind ascended or descended from a lower order of animals," or "to adopt or use in any such institution a
textbook that teaches" this theory in any school or university that received public funds. The court's opinion, written
by Justice Abe Fortas, ruled that the Arkansas law violated "the constitutional prohibition of state laws respecting
an establishment of religion or prohibiting the free exercise thereof. The overriding fact is that Arkansas' law
selects from the body of knowledge a particular segment which it proscribes for the sole reason that it is deemed
to conflict with a particular religious doctrine; that is, with a particular interpretation of the Book of Genesis
by a particular religious group." The court held that the Establishment Clause prohibits the state from advancing
any religion, and that "[T]he state has no legitimate interest in protecting any or all religions from views distasteful
to them." In Lemon v. Kurtzman, 403 U.S. 602 (1971), the court determined that a Pennsylvania state policy of reimbursing
the salaries and related costs of teachers of secular subjects in private religious schools violated the Establishment
Clause. The court's decision argued that the separation of church and state could never be absolute: "Our prior holdings
do not call for total separation between church and state; total separation is not possible in an absolute sense.
Some relationship between government and religious organizations is inevitable," the court wrote. "Judicial caveats
against entanglement must recognize that the line of separation, far from being a 'wall,' is a blurred, indistinct,
and variable barrier depending on all the circumstances of a particular relationship." Subsequent to this decision,
the Supreme Court has applied a three-pronged test to determine whether government action comports with the Establishment
Clause, known as the "Lemon Test". First, the law or policy must have been adopted with a neutral or non-religious
purpose. Second, the principle or primary effect must be one that neither advances nor inhibits religion. Third,
the statute or policy must not result in an "excessive entanglement" of government with religion. (The decision in
Lemon v. Kurtzman hinged upon the conclusion that the government benefits were flowing disproportionately to Catholic
schools, and that Catholic schools were an integral component of the Catholic Church's religious mission, thus the
policy involved the state in an "excessive entanglement" with religion.) Failure to meet any of these criteria is
proof that the statute or policy in question violates the Establishment Clause. In 2002, a three-judge panel on
the Ninth Circuit Court of Appeals held that classroom recitation of the Pledge of Allegiance in a California public
school was unconstitutional, even when students were not compelled to recite it, due to the inclusion of the phrase
"under God." In reaction to the case, Elk Grove Unified School District v. Newdow, both houses of Congress passed
measures reaffirming their support for the pledge, and condemning the panel's ruling. The case was appealed to the
Supreme Court, where the case was ultimately overturned in June 2004, solely on procedural grounds not related to
the substantive constitutional issue. Rather, a five-justice majority held that Newdow, a non-custodial parent suing
on behalf of his daughter, lacked standing to sue. On December 20, 2005, the United States Court of Appeals for the
Sixth Circuit ruled in the case of ACLU v. Mercer County that the continued display of the Ten Commandments as part
of a larger display on American legal traditions in a Kentucky courthouse was allowed, because the purpose of the
display (educating the public on American legal traditions) was secular in nature. In ruling on the Mount Soledad
cross controversy on May 3, 2006, however, a federal judge ruled that the cross on public property on Mount Soledad
must be removed. In Town of Greece v. Galloway, 12-696, the Supreme Court agreed to hear a case regarding whether prayers at town meetings, which are permitted, must allow various faiths to lead prayer, or whether the prayers can be predominantly Christian. On May 5, 2014, the U.S. Supreme Court ruled 5-4 in favor of the Town of Greece, holding that the U.S. Constitution allows not only prayer at government meetings but also sectarian prayers, such as predominantly Christian ones. Some scholars and organizations disagree with the notion
of "separation of church and state", or the way the Supreme Court has interpreted the constitutional limitation on
religious establishment. Such critics generally argue that the phrase misrepresents the textual requirements of the
Constitution, while noting that many aspects of church and state were intermingled at the time the Constitution was
ratified. These critics argue that the prevalent degree of separation of church and state could not have been intended
by the constitutional framers. Some of the intermingling between church and state include religious references in
official contexts, and such other founding documents as the United States Declaration of Independence, which references
the idea of a "Creator" and "Nature's God", though these references did not ultimately appear in the Constitution
nor do they mention any particular religious view of a "Creator" or "Nature's God." These critics of the modern separation
of church and state also note the official establishment of religion in several of the states at the time of ratification,
to suggest that the modern incorporation of the Establishment Clause as to state governments goes against the original
constitutional intent.[citation needed] The issue is complex, however, as incorporation ultimately rests on the passage of the 14th Amendment in 1868, at which point the First Amendment's application to the state governments was
recognized. Many of these constitutional debates relate to the competing interpretive theories of originalism versus
modern, progressivist theories such as the doctrine of the Living Constitution. Other debates center on the principle
of the law of the land in America being defined not just by the Constitution's Supremacy Clause, but also by legal
precedence, making an accurate reading of the Constitution subject to the mores and values of a given era, and rendering
the concept of historical revisionism irrelevant when discussing the Constitution. The "religious test" clause has
been interpreted to cover both elected officials and appointed ones, career civil servants as well as political appointees.
Religious beliefs or the lack of them have therefore not been permissible tests or qualifications with regard to
federal employees since the ratification of the Constitution. Seven states, however, have language included in their
Bill of Rights, Declaration of Rights, or in the body of their constitutions that require state office-holders to
have particular religious beliefs, though some of these have been successfully challenged in court. These states
are Texas, Massachusetts, Maryland, North Carolina, Pennsylvania, South Carolina, and Tennessee. The required beliefs
of these clauses include belief in a Supreme Being and belief in a future state of rewards and punishments. (Tennessee
Constitution Article IX, Section 2 is one such example.) Some of these same states specify that the oath of office
include the words "so help me God." In some cases these beliefs (or oaths) were historically required of jurors and
witnesses in court. At one time, such restrictions were allowed under the doctrine of states' rights; today they
are deemed to be in violation of the federal First Amendment, as applied to the states via the 14th amendment, and
hence unconstitutional and unenforceable. Relaxed zoning rules and special parking privileges for churches, the tax-free
status of church property, the fact that Christmas is a federal holiday, etc., have also been questioned, but have
been considered examples of the governmental prerogative in deciding practical and beneficial arrangements for the
society. The national motto "In God We Trust" has been challenged as a violation, but the Supreme Court has ruled
that ceremonial deism is not religious in nature. A circuit court ruling affirmed Ohio's right to use as its motto
a passage from the Bible, "With God, all things are possible", because it displayed no preference for a particular
religion. Jeffries and Ryan (2001) argue that the modern concept of separation of church and state dates from the
mid-twentieth century rulings of the Supreme Court. The central point, they argue, was a constitutional ban against
aid to religious schools, followed by a later ban on religious observance in public education. Jeffries and Ryan
argue that these two propositions—that public aid should not go to religious schools and that public schools should
not be religious—make up the separationist position of the modern Establishment Clause. Jeffries and Ryan argue that
the no-aid position drew support from a coalition of separationist opinion. Most important was "the pervasive secularism
that came to dominate American public life," which sought to confine religion to a private sphere. Further, the ban
against government aid to religious schools was supported before 1970 by most Protestants (and most Jews), who opposed
aid to religious schools, which were mostly Catholic at the time. After 1980, however, anti-Catholic sentiment
diminished among mainline Protestants, and the crucial coalition of public secularists and Protestant churches has
collapsed. While mainline Protestant denominations are more inclined towards strict separation of church and state,
much evangelical opinion has now largely deserted that position. As a consequence, strict separationism is opposed
today by members of many Protestant faiths, even perhaps eclipsing the opposition of Roman Catholics.[citation needed]
Critics of the modern concept of the "separation of church and state" argue that it is untethered to anything in
the text of the constitution and is contrary to the conception of the phrase as the Founding Fathers understood it.
Philip Hamburger, Columbia Law School professor and prominent critic of the modern understanding of the concept,
maintains that the modern concept, which deviates from the constitutional establishment clause jurisprudence, is
rooted in American anti-Catholicism and Nativism.[citation needed] Briefs before the Supreme Court, including by
the U.S. government, have argued that some state constitutional amendments relating to the modern conception of separation
of church and state (Blaine Amendments) were motivated by and intended to enact anti-Catholicism. J. Brent Walker,
Executive Director of the Baptist Joint Committee, responded to Hamburger's claims, noting: "The fact that the separation
of church and state has been supported by some who exhibited an anti-Catholic animus or a secularist bent does not
impugn the validity of the principle. Champions of religious liberty have argued for the separation of church and
state for reasons having nothing to do with anti-Catholicism or desire for a secular culture. Of course, separationists
have opposed the Catholic Church when it has sought to tap into the public till to support its parochial schools
or to argue for on-campus released time in the public schools. But that principled debate on the issues does not
support a charge of religious bigotry." Steven Waldman notes: "The evangelicals provided the political muscle
for the efforts of Madison and Jefferson, not merely because they wanted to block official churches but because they
wanted to keep the spiritual and secular worlds apart." "Religious freedom resulted from an alliance of unlikely
partners," writes the historian Frank Lambert in his book The Founding Fathers and the Place of Religion in America.
"New Light evangelicals such as Isaac Backus and John Leland joined forces with Deists and skeptics such as James
Madison and Thomas Jefferson to fight for a complete separation of church and state." Robert N. Bellah has argued in his writings that although the separation of church and state is grounded firmly in the Constitution of the United States,
this does not mean that there is no religious dimension in the political society of the United States. He used the
term "Civil Religion" to describe the specific relation between politics and religion in the United States. His 1967
article analyzes the inaugural speech of John F. Kennedy: "Considering the separation of church and state, how is
a president justified in using the word 'God' at all? The answer is that the separation of church and state has not
denied the political realm a religious dimension." Robert S. Wood has argued that the United States is a model for
the world in terms of how a separation of church and state—no state-run or state-established church—is good for both
the church and the state, allowing a variety of religions to flourish. Speaking at the Toronto-based Center for New
Religions, Wood said that the freedom of conscience and assembly allowed under such a system has led to a "remarkable
religiosity" in the United States that isn't present in other industrialized nations. Wood believes that the U.S.
operates on "a sort of civic religion," which includes a generally shared belief in a creator who "expects better
of us." Beyond that, individuals are free to decide how they want to believe and fill in their own creeds and express
their conscience. He calls this approach the "genius of religious sentiment in the United States."
Protestantism is a form of Christian faith and practice which originated with the Protestant Reformation,[a] a movement against
what its followers considered to be errors in the Roman Catholic Church. It is one of the three major divisions of
Christendom, together with Roman Catholicism and Eastern Orthodoxy. Anglicanism is sometimes considered to be independent
from Protestantism.[b] The term derives from the letter of protestation from Lutheran princes in 1529 against an
edict condemning the teachings of Martin Luther as heretical. All Protestant denominations reject the notion of papal
supremacy over the Church universal and generally deny the Roman Catholic doctrine of transubstantiation, but they
disagree among themselves regarding the real presence of Christ in the Eucharist. The various denominations generally
emphasize the priesthood of all believers, the doctrine of justification by faith alone (sola fide) rather than by
or with good works, and a belief in the Bible alone (rather than with Catholic tradition) as the highest authority
in matters of faith and morals (sola scriptura). The "Five solae" summarize the reformers' basic differences in theological
beliefs in opposition to the teaching of the Roman Catholic Church of the day. Protestantism spread in Europe during
the 16th century. Lutheranism spread from Germany into its surrounding areas,[c] Denmark,[d] Norway,[e] Sweden,[f]
Finland,[g] Prussia,[h] Latvia,[i] Estonia,[j] and Iceland,[k] as well as other smaller territories. Reformed churches
were founded primarily in Germany and its adjacent regions,[l] Hungary,[m] the Netherlands,[n] Scotland,[o] Switzerland,[p]
and France[q] by such reformers as John Calvin, Huldrych Zwingli, and John Knox. Arminianism[r] gained supporters
in the Netherlands and parts of Germany. In 1534, King Henry VIII put an end to all papal jurisdiction in England[s]
after the Pope failed to annul his marriage to Catherine of Aragon; this opened the door to reformational ideas,
notably during the following reign of Edward VI, through Thomas Cranmer, Richard Hooker, Matthew Parker and other
theologians. There were also reformational efforts throughout continental Europe known as the Radical Reformation—a
response to perceived corruption in both the Roman Catholic Church and the expanding Magisterial Reformation led
by Luther and various other reformers—which gave rise to Anabaptist, Moravian, and other Pietistic movements. In
later centuries, Protestants developed their own culture, which made major contributions in education, the humanities
and sciences, the political and social order, the economy and the arts, and other fields. Collectively encompassing
more than 900 million adherents, or nearly forty percent of Christians worldwide, Protestantism is present on all
populated continents.[t] The movement is more divided theologically and ecclesiastically than either Eastern Orthodoxy
or Roman Catholicism, lacking both structural unity and central human authority. Some Protestant churches do have
a worldwide scope and distribution of membership (notably, the Anglican Communion), while others are confined to
a single country, or even are solitary church bodies or congregations (such as the former Prussian Union of churches).
Nondenominational, evangelical, independent and other churches are on the rise, and constitute a significant part
of Protestant Christianity. During the Reformation, the term was hardly used outside of German politics. The
word evangelical (German: evangelisch), which refers to the gospel, was much more widely used for those involved
in the religious movement. Nowadays, this word is still preferred among some of the historical Protestant denominations,
above all the ones in the German-speaking area such as the EKD. The German word evangelisch means Protestant, and
is different from the German evangelikal, which refers to churches shaped by Evangelicalism. The English word evangelical
usually refers to Evangelical Protestant churches, and therefore not to Protestantism as a whole. It traces its roots
back to the Puritans in England, where Evangelicalism originated, and then was brought to the United States. The
word reformatorisch is used as an alternative for evangelisch in German, and is different from English reformed (German:
reformiert), which refers to churches shaped by ideas of John Calvin, Huldrych Zwingli and other Reformed theologians.
The use of the phrases as summaries of teaching emerged over time during the Reformation, based on the overarching
principle of sola scriptura (by scripture alone). This idea contains the four main doctrines on the Bible: that its
teaching is needed for salvation (necessity); that all the doctrine necessary for salvation comes from the Bible
alone (sufficiency); that everything taught in the Bible is correct (inerrancy); and that, by the Holy Spirit overcoming
sin, believers may read and understand truth from the Bible itself, though understanding is difficult, so the means
used to guide individual believers to the true teaching is often mutual discussion within the church (clarity). The
necessity and inerrancy were well-established ideas, garnering little criticism, though they later came under debate
from outside during the Enlightenment. The most contentious idea at the time though was the notion that anyone could
simply pick up the Bible and learn enough to gain salvation. Though the reformers were concerned with ecclesiology
(the doctrine of how the church as a body works), they had a different understanding of the process in which truths
in scripture were applied to life of believers, compared to the Catholics' idea that certain people within the church,
or ideas that were old enough, had a special status in giving understanding of the text. The second main principle,
sola fide (by faith alone), states that faith in Christ is sufficient alone for eternal salvation. Though argued
from scripture, and hence logically consequent to sola scriptura, this is the guiding principle of the work of Luther
and the later reformers. Because sola scriptura placed the Bible as the only source of teaching, sola fide epitomises
the main thrust of the teaching the reformers wanted to get back to, namely the direct, close, personal connection
between Christ and the believer, hence the reformers' contention that their work was Christocentric. The Protestant
movement began to diverge into several distinct branches in the mid-to-late 16th century. One of the central points
of divergence was controversy over the Eucharist. Early Protestants rejected the Roman Catholic dogma of transubstantiation,
which teaches that the bread and wine used in the sacrificial rite of the Mass lose their natural substance by being
transformed into the body, blood, soul, and divinity of Christ. They disagreed with one another concerning the presence
of Christ and his body and blood in Holy Communion. In the late 1130s, Arnold of Brescia, an Italian canon regular,
became one of the first theologians to attempt to reform the Roman Catholic Church. After his death, his teachings
on apostolic poverty gained currency among Arnoldists, and later more widely among Waldensians and the Spiritual
Franciscans, though no written word of his has survived the official condemnation. In the early 1170s, Peter Waldo
founded the Waldensians. He advocated an interpretation of the Gospel that led to conflicts with the Roman Catholic
Church. By 1215, the Waldensians were declared heretical and subject to persecution. Despite that, the movement continues
to exist to this day in Italy, as a part of the wider Reformed tradition. Beginning in the first decade of the 15th century,
Jan Hus—a Roman Catholic priest, Czech reformist and professor—influenced by John Wycliffe's writings, founded the
Hussite movement. He strongly advocated his reformist Bohemian religious denomination. He was excommunicated and
burned at the stake in Constance, Bishopric of Constance in 1415 by secular authorities for unrepentant and persistent
heresy. After his execution, a revolt erupted. Hussites defeated five continuous crusades proclaimed against them
by the Pope. On 31 October 1517, Martin Luther supposedly nailed his 95 theses against the selling of indulgences
at the door of All Saints' Church, the Castle Church in Wittenberg. The theses debated and criticised the Church and
the papacy, but concentrated upon the selling of indulgences and doctrinal policies about purgatory, particular judgment,
and the authority of the pope. He would later write works on the Catholic devotion to the Virgin Mary, the intercession
of and devotion to the saints, mandatory clerical celibacy, monasticism, the authority
of the pope, ecclesiastical law, censure and excommunication, the role of secular rulers in religious matters,
the relationship between Christianity and the law, good works, and the sacraments. Following the excommunication
of Luther and condemnation of the Reformation by the Pope, the work and writings of John Calvin were influential
in establishing a loose consensus among various groups in Switzerland, Scotland, Hungary, Germany and elsewhere.
After the expulsion of its Bishop in 1526, and the unsuccessful attempts of the Bern reformer William Farel, Calvin
was asked to use the organisational skill he had gathered as a student of law to discipline the "fallen city" of
Geneva. His Ordinances of 1541 involved a collaboration of Church affairs with the City council and consistory to
bring morality to all areas of life. After the establishment of the Geneva academy in 1559, Geneva became the unofficial
capital of the Protestant movement, providing refuge for Protestant exiles from all over Europe and educating them
as Calvinist missionaries. The faith continued to spread after Calvin's death in 1564. Protestantism also spread
from the German lands into France, where the Protestants were nicknamed Huguenots. Calvin continued to take an interest
in French religious affairs from his base in Geneva. He regularly trained pastors to lead congregations there.
Despite heavy persecution, the Reformed tradition made steady progress across large sections of the nation, appealing
to people alienated by the obduracy and the complacency of the Catholic establishment. French Protestantism came
to acquire a distinctly political character, made all the more obvious by the conversions of nobles during the 1550s.
This established the preconditions for a series of conflicts, known as the French Wars of Religion. The civil wars
gained impetus with the sudden death of Henry II of France in 1559. Atrocity and outrage became the defining characteristics
of the time, illustrated at their most intense in the St. Bartholomew's Day massacre of August 1572, when the Roman
Catholic party annihilated between 30,000 and 100,000 Huguenots across France. The wars only concluded when Henry
IV of France issued the Edict of Nantes, promising official toleration of the Protestant minority, but under highly
restricted conditions. Roman Catholicism remained the official state religion, and the fortunes of French Protestants
gradually declined over the next century, culminating in Louis XIV's Edict of Fontainebleau which revoked the Edict
of Nantes and made Roman Catholicism the sole legal religion once again. In response to the Edict of Fontainebleau,
Frederick William I, Elector of Brandenburg declared the Edict of Potsdam, giving free passage to Huguenot refugees.
In the late 17th century many Huguenots fled to England, the Netherlands, Prussia, Switzerland, and the English and
Dutch overseas colonies. A significant community in France remained in the Cévennes region. Parallel to events in
Germany, a movement began in Switzerland under the leadership of Huldrych Zwingli. Zwingli was a scholar and preacher,
who in 1518 moved to Zurich. Although the two movements agreed on many issues of theology, some unresolved differences
kept them separate. A long-standing resentment between the German states and the Swiss Confederation led to heated
debate over how much Zwingli owed his ideas to Lutheranism. The German Prince Philip of Hesse saw potential in creating
an alliance between Zwingli and Luther. A meeting was held in his castle in 1529, now known as the Colloquy of Marburg,
which has become infamous for its failure. The two men could not come to any agreement due to their disputation over
one key doctrine. The political separation of the Church of England from Rome under Henry VIII brought England alongside
this broad Reformation movement. Reformers in the Church of England alternated between sympathies for ancient Catholic
tradition and more Reformed principles, gradually developing into a tradition considered a middle way (via media)
between the Roman Catholic and Protestant traditions. The English Reformation followed a particular course. The different
character of the English Reformation came primarily from the fact that it was driven initially by the political necessities
of Henry VIII. King Henry decided to remove the Church of England from the authority of Rome. In 1534, the Act of
Supremacy recognized Henry as the only Supreme Head on earth of the Church of England. Between 1535 and 1540, under
Thomas Cromwell, the policy known as the Dissolution of the Monasteries was put into effect. Following a brief Roman
Catholic restoration during the reign of Mary I, a loose consensus developed during the reign of Elizabeth I. The
Elizabethan Religious Settlement largely formed Anglicanism into a distinctive church tradition. The compromise was
uneasy and was capable of veering between extreme Calvinism on the one hand and Roman Catholicism on the other. It
was relatively successful until the Puritan Revolution or English Civil War in the 17th century. The success of the
Counter-Reformation on the Continent and the growth of a Puritan party dedicated to further Protestant reform polarised
the Elizabethan Age. The early Puritan movement was a movement for reform in the Church of England. The desire was
for the Church of England to resemble more closely the Protestant churches of Europe, especially Geneva. The later
Puritan movement, often referred to as dissenters and nonconformists, eventually led to the formation of various
Reformed denominations. The Scottish Reformation of 1560 decisively shaped the Church of Scotland. The Reformation
in Scotland culminated ecclesiastically in the establishment of a church along Reformed lines, and politically in
the triumph of English influence over that of France. John Knox is regarded as the leader of the Scottish Reformation.
The Scottish Reformation Parliament of 1560 repudiated the pope's authority by the Papal Jurisdiction Act 1560, forbade
the celebration of the Mass and approved a Protestant Confession of Faith. It was made possible by a revolution against
French hegemony under the regime of the regent Mary of Guise, who had governed Scotland in the name of her absent
daughter. In the course of this religious upheaval, the German Peasants' War of 1524–25 swept through the Bavarian,
Thuringian and Swabian principalities. After the Eighty Years' War in the Low Countries and the French Wars of Religion,
the confessional division of the states of the Holy Roman Empire eventually erupted in the Thirty Years' War between
1618 and 1648. It devastated much of Germany, killing between 25% and 40% of its population. The Thirty Years' War
was ended by the Peace of Westphalia. The First Great Awakening was an evangelical and revitalization
movement that swept through Protestant Europe and British America, especially the American colonies in the 1730s
and 1740s, leaving a permanent impact on American Protestantism. It resulted from powerful preaching that gave listeners
a sense of deep personal revelation of their need of salvation by Jesus Christ. Pulling away from ritual, ceremony,
sacramentalism and hierarchy, it made Christianity intensely personal to the average person by fostering a deep sense
of spiritual conviction and redemption, and by encouraging introspection and a commitment to a new standard of personal
morality. The Second Great Awakening began around 1790. It gained momentum by 1800. After 1820, membership rose rapidly
among Baptist and Methodist congregations, whose preachers led the movement. It was past its peak by the late 1840s.
It has been described as a reaction against skepticism, deism, and rationalism, although why those forces became
pressing enough at the time to spark revivals is not fully understood. It enrolled millions of new members in existing
evangelical denominations and led to the formation of new denominations. The Third Great Awakening refers to a hypothetical
historical period that was marked by religious activism in American history and spans the late 1850s to the early
20th century. It affected pietistic Protestant denominations and had a strong element of social activism. It gathered
strength from the postmillennial belief that the Second Coming of Christ would occur after mankind had reformed the
entire earth. It was affiliated with the Social Gospel Movement, which applied Christianity to social issues and
gained its force from the Awakening, as did the worldwide missionary movement. New groupings emerged, such as the
Holiness, Nazarene, and Christian Science movements. A noteworthy development in 20th-century Protestant Christianity
was the rise of the modern Pentecostal movement. Sprung from Methodist and Wesleyan roots, it arose out of meetings
at an urban mission on Azusa Street in Los Angeles. From there it spread around the world, carried by those who experienced
what they believed to be miraculous moves of God there. These Pentecost-like manifestations have steadily been in
evidence throughout history, as seen in the two Great Awakenings. Pentecostalism, which in turn birthed
the Charismatic movement within already established denominations, continues to be an important force in Western
Christianity. In the United States and elsewhere in the world, there has been a marked rise in the evangelical wing
of Protestant denominations, especially those that are more exclusively evangelical, and a corresponding decline
in the mainstream liberal churches. In the post–World War I era, Liberal Christianity was on the rise, and a considerable
number of seminaries held and taught from a liberal perspective as well. In the post–World War II era, the trend
began to swing back towards the conservative camp in America's seminaries and church structures. In Europe, there
has been a general move away from religious observance and belief in Christian teachings and a move towards secularism.
The Enlightenment is largely responsible for the spread of secularism. Several scholars have argued for a link between
the rise of secularism and Protestantism, attributing it to the wide-ranging freedom in the Protestant countries.
In North America, South America and Australia, Christian religious observance is much higher than in Europe. The United
States remains particularly religious in comparison to other developed countries. South America, historically Roman
Catholic, has experienced a large Evangelical and Pentecostal infusion in the 20th and 21st centuries. In the view
of many associated with the Radical Reformation, the Magisterial Reformation had not gone far enough. The Radical Reformer
Andreas von Bodenstein Karlstadt, for example, referred to the Lutheran theologians at Wittenberg as the "new papists".
Since the term "magister" also means "teacher", the Magisterial Reformation is also characterized by an emphasis
on the authority of a teacher. This is made evident in the prominence of Luther, Calvin, and Zwingli as leaders of
the reform movements in their respective areas of ministry. Because of their authority, they were often criticized
by Radical Reformers as being too much like the Roman Popes. A more political side of the Radical Reformation can
be seen in the thought and practice of Hans Hut, although typically Anabaptism has been associated with pacifism.
Protestants reject the Roman Catholic Church's doctrine that it is the one true church, believing in the invisible
church, which consists of all who profess faith in Jesus Christ. Some Protestant denominations are less accepting
of other denominations, and the basic orthodoxy of some is questioned by most of the others. Individual denominations
also have formed over very subtle theological differences. Other denominations are simply regional or ethnic expressions
of the same beliefs. Because the five solas are the main tenets of the Protestant faith, non-denominational groups
and organizations are also considered Protestant. Various ecumenical movements have attempted cooperation or reorganization
of the various divided Protestant denominations, according to various models of union, but divisions continue to
outpace unions, as there is no overarching authority to which any of the churches owe allegiance, which can authoritatively
define the faith. Most denominations share common beliefs in the major aspects of the Christian faith while differing
in many secondary doctrines, although what is major and what is secondary is a matter of idiosyncratic belief. Several
countries have established their national churches, linking the ecclesiastical structure with the state. Jurisdictions
where a Protestant denomination has been established as a state religion include several Nordic countries; Denmark
(including Greenland), the Faroe Islands (its church being independent since 2007), Iceland and Norway have established
Evangelical Lutheran churches. Tuvalu has the only established church in the Reformed tradition in the world, while Tonga's
established church is in the Methodist tradition. The Church of England is the officially established religious institution in England, and
also the Mother Church of the worldwide Anglican Communion. Protestants can be differentiated according to how they
have been influenced by important movements since the Reformation, today regarded as branches. Some of these movements
have a common lineage, sometimes directly spawning individual denominations. Due to the earlier stated multitude
of denominations, this section discusses only the largest denominational families, or branches, widely considered
to be a part of Protestantism. These are, in alphabetical order: Adventist, Anglican, Baptist, Calvinist (Reformed),
Lutheran, Methodist and Pentecostal. A small but historically significant Anabaptist branch is also discussed. Although
the Adventist churches hold much in common, their theologies differ on whether the intermediate state is unconscious
sleep or consciousness, whether the ultimate punishment of the wicked is annihilation or eternal torment, the nature
of immortality, whether or not the wicked are resurrected after the millennium, and whether the sanctuary of Daniel
8 refers to the one in heaven or one on earth. The movement has encouraged the examination of the whole Bible, leading
Seventh-day Adventists and some smaller Adventist groups to observe the Sabbath. The General Conference of Seventh-day
Adventists has compiled that church's core beliefs in the 28 Fundamental Beliefs (1980 and 2005), which use Biblical
references as justification. The name Anabaptist, meaning "one who baptizes again", was given to them by their persecutors
in reference to the practice of re-baptizing converts who already had been baptized as infants. Anabaptists required
that baptismal candidates be able to make their own confessions of faith and so rejected baptism of infants. The
early members of this movement did not accept the name Anabaptist, claiming that since infant baptism was unscriptural
and null and void, the baptizing of believers was not a re-baptism but in fact their first real baptism. As a result
of their views on the nature of baptism and other issues, Anabaptists were heavily persecuted during the 16th century
and into the 17th by both Magisterial Protestants and Roman Catholics.[aa] While most Anabaptists adhered to a literal
interpretation of the Sermon on the Mount, which precluded taking oaths, participating in military actions, and participating
in civil government, some who practiced re-baptism felt otherwise.[ab] They were thus technically Anabaptists, even
though conservative Amish, Mennonites, and Hutterites and some historians tend to consider them as outside of true
Anabaptism. Anabaptist reformers of the Radical Reformation are divided into the Radicals and the so-called Second Front.
Some important Radical Reformation theologians were John of Leiden, Thomas Müntzer, Kaspar Schwenkfeld, Sebastian
Franck, Menno Simons. Second Front Reformers included Hans Denck, Conrad Grebel, Balthasar Hubmaier and Felix Manz.
Anglicanism comprises the Church of England and churches which are historically tied to it or hold similar beliefs,
worship practices and church structures. The word Anglican originates in ecclesia anglicana, a medieval Latin phrase
dating to at least 1246 that means the English Church. There is no single "Anglican Church" with universal juridical
authority, since each national or regional church has full autonomy. As the name suggests, the communion is an association
of churches in full communion with the Archbishop of Canterbury. The great majority of Anglicans are members of churches
which are part of the international Anglican Communion, which has 80 million adherents. The Church of England declared
its independence from the Catholic Church at the time of the Elizabethan Religious Settlement. Many of the new Anglican
formularies of the mid-16th century corresponded closely to those of contemporary Reformed tradition. These reforms
were understood by one of those most responsible for them, the then Archbishop of Canterbury Thomas Cranmer, as navigating
a middle way between two of the emerging Protestant traditions, namely Lutheranism and Calvinism. By the end of the
century, the retention in Anglicanism of many traditional liturgical forms and of the episcopate was already seen
as unacceptable by those promoting the most developed Protestant principles. Baptists subscribe to a doctrine that
baptism should be performed only for professing believers (believer's baptism, as opposed to infant baptism), and
that it must be done by complete immersion (as opposed to affusion or sprinkling). Other tenets of Baptist churches
include soul competency (liberty), salvation through faith alone, Scripture alone as the rule of faith and practice,
and the autonomy of the local congregation. Baptists recognize two ministerial offices, pastors and deacons. Baptist
churches are widely considered to be Protestant churches, though some Baptists disavow this identity. Historians
trace the earliest church labeled Baptist back to 1609 in Amsterdam, with English Separatist John Smyth as its pastor.
In accordance with his reading of the New Testament, he rejected baptism of infants and instituted baptism only of
believing adults. Baptist practice spread to England, where the General Baptists considered Christ's atonement to
extend to all people, while the Particular Baptists believed that it extended only to the elect. In 1638, Roger Williams
established the first Baptist congregation in the North American colonies. In the mid-18th century, the First Great
Awakening increased Baptist growth in both New England and the South. The Second Great Awakening in the South in
the early 19th century increased church membership, as did the preachers' lessening of support for abolition and
the manumission of slaves, which had been part of the 18th-century teachings. Baptist missionaries have spread their
church to every continent. Today, Calvinism also refers to the doctrines and practices of the Reformed churches of
which Calvin was an early leader. Less commonly, it can refer to the individual teaching of Calvin himself. The particulars
of Calvinist theology may be stated in a number of ways. Perhaps the best known summary is contained in the five
points of Calvinism, though these points identify the Calvinist view on soteriology rather than summarizing the system
as a whole. Broadly speaking, Calvinism stresses the sovereignty or rule of God in all things — in salvation but
also in all of life. This concept is seen clearly in the doctrines of predestination and total depravity. Today,
Lutheranism is one of the largest branches of Protestantism. With approximately 80 million adherents, it constitutes
the third most common Protestant confession after historically Pentecostal denominations and Anglicanism. The Lutheran
World Federation, the largest global communion of Lutheran churches, represents over 72 million people. Additionally,
there are also many smaller bodies such as the International Lutheran Council and the Confessional Evangelical Lutheran
Conference, as well as independent churches. Methodism identifies principally with the theology of John Wesley—an
Anglican priest and evangelist. This evangelical movement originated as a revival within the 18th-century Church
of England and became a separate Church following Wesley's death. Because of vigorous missionary activity, the movement
spread throughout the British Empire, the United States, and beyond, today claiming approximately 80 million adherents
worldwide. Originally it appealed especially to laborers, agricultural workers, and slaves. Soteriologically, most
Methodists are Arminian, emphasizing that Christ accomplished salvation for every human being, and that humans must
exercise an act of the will to receive it (as opposed to the traditional Calvinist doctrine of monergism). Methodism
is traditionally low church in liturgy, although this varies greatly between individual congregations; the Wesleys
themselves greatly valued the Anglican liturgy and tradition. Methodism is known for its rich musical tradition;
John Wesley's brother, Charles, was instrumental in writing much of the hymnody of the Methodist Church, and many
other eminent hymn writers come from the Methodist tradition. This branch of Protestantism is distinguished by belief
in the baptism with the Holy Spirit as an experience separate from conversion that enables a Christian to live a
Holy Spirit–filled and empowered life. This empowerment includes the use of spiritual gifts such as speaking in tongues
and divine healing—two other defining characteristics of Pentecostalism. Because of their commitment to biblical
authority, spiritual gifts, and the miraculous, Pentecostals tend to see their movement as reflecting the same kind
of spiritual power and teachings that were found in the Apostolic Age of the early church. For this reason, some
Pentecostals also use the term Apostolic or Full Gospel to describe their movement. Pentecostalism eventually spawned
hundreds of new denominations, including large groups such as the Assemblies of God and the Church of God in Christ,
both in the United States and elsewhere. There are over 279 million Pentecostals worldwide, and the movement is growing
in many parts of the world, especially the global South. Since the 1960s, Pentecostalism has increasingly gained
acceptance from other Christian traditions, and Pentecostal beliefs concerning Spirit baptism and spiritual gifts
have been embraced by non-Pentecostal Christians in Protestant and Catholic churches through the Charismatic Movement.
Together, Pentecostal and Charismatic Christianity numbers over 500 million adherents. There are many other Protestant
denominations that do not fit neatly into the mentioned branches, and are far smaller in membership. Some groups
of individuals who hold basic Protestant tenets identify themselves simply as "Christians" or "born-again Christians".
They typically distance themselves from the confessionalism and/or creedalism of other Christian communities by calling
themselves "non-denominational" or "evangelical". Often founded by individual pastors, they have little affiliation
with historic denominations. The Plymouth Brethren are a conservative, low church, evangelical movement, whose history
can be traced to Dublin, Ireland, in the late 1820s, originating from Anglicanism. Among other beliefs, the group
emphasizes sola scriptura. Brethren generally see themselves not as a denomination, but as a network, or even as
a collection of overlapping networks, of like-minded independent churches. Although the group refused for many years
to take any denominational name to itself—a stance that some of them still maintain—the title The Brethren is one
that many of their number are comfortable with in that the Bible designates all believers as brethren. Quakers, or
Friends, are members of a family of religious movements collectively known as the Religious Society of Friends. The
central unifying doctrine of these movements is the priesthood of all believers. Many Friends view themselves as
members of a Christian denomination. They include those with evangelical, holiness, liberal, and traditional conservative
Quaker understandings of Christianity. Unlike many other groups that emerged within Christianity, the Religious Society
of Friends has actively tried to avoid creeds and hierarchical structures. There are also Christian movements which
cross denominational lines and even branches, and cannot be classified on the same level as the previously mentioned forms.
Evangelicalism is a prominent example. Some of those movements are active exclusively within Protestantism, some
are Christian-wide. Transdenominational movements are sometimes capable of affecting parts of the Roman Catholic
Church, as does the Charismatic Movement, which aims to incorporate beliefs and practices similar to those of Pentecostals
into the various branches of Christianity. Neo-charismatic churches are sometimes regarded as a subgroup of the Charismatic
Movement. Nondenominational churches often adopt, or are akin to, one of these movements. Evangelicalism gained great momentum
in the 18th and 19th centuries with the emergence of Methodism and the Great Awakenings in Britain and North America.
The origins of Evangelicalism are usually traced back to the English Methodist movement, Nicolaus Zinzendorf, the
Moravian Church, Lutheran pietism, Presbyterianism and Puritanism. Among leaders and major figures of the Evangelical
Protestant movement were John Wesley, George Whitefield, Jonathan Edwards, Billy Graham, Harold John Ockenga, John
Stott and Martyn Lloyd-Jones. In America, the Episcopalian Dennis Bennett is sometimes cited as one of the charismatic
movement's seminal influences. In the United Kingdom, Colin Urquhart, Michael Harper, David Watson and others were
in the vanguard of similar developments. The 1964 Massey conference in New Zealand was attended by several Anglicans,
including the Rev. Ray Muller, who went on to invite Bennett to New Zealand in 1966, and played a leading role in
developing and promoting the Life in the Spirit seminars. Other Charismatic movement leaders in New Zealand include
Bill Subritzky. Larry Christenson, a Lutheran theologian based in San Pedro, California, did much in the 1960s and
1970s to interpret the charismatic movement for Lutherans. A very large annual conference regarding that matter was
held in Minneapolis. Charismatic Lutheran congregations in Minnesota became especially large and influential; especially
"Hosanna!" in Lakeville, and North Heights in St. Paul. The next generation of Lutheran charismatics cluster around
the Alliance of Renewal Churches. There is considerable charismatic activity among young Lutheran leaders in California
centered around an annual gathering at Robinwood Church in Huntington Beach. Richard A. Jensen's Touched by the Spirit,
published in 1974, played a major role in shaping the Lutheran understanding of the charismatic movement. In Congregational
and Presbyterian churches which profess a traditionally Calvinist or Reformed theology there are differing views
regarding present-day continuation or cessation of the gifts (charismata) of the Spirit. Generally, however, Reformed
charismatics distance themselves from renewal movements with tendencies which could be perceived as overemotional,
such as Word of Faith, Toronto Blessing, Brownsville Revival and Lakeland Revival. Prominent Reformed charismatic
denominations are the Sovereign Grace Churches and the Every Nation Churches in the USA; in Great Britain there is
the Newfrontiers network of churches, whose leading figure is Terry Virgo. Puritans were blocked from changing
the established church from within, and were severely restricted in England by laws controlling the practice of religion.
Their beliefs, however, were transported by the emigration of congregations to the Netherlands (and later to New
England), and by evangelical clergy to Ireland (and later into Wales), and were spread into lay society and parts
of the educational system, particularly certain colleges of the University of Cambridge. They took on distinctive
beliefs about clerical dress and in opposition to the episcopal system, particularly after the 1619 conclusions of
the Synod of Dort, when they were resisted by the English bishops. They largely adopted Sabbatarianism in the 17th century,
and were influenced by millennialism. They formed, and identified with various religious groups advocating greater
purity of worship and doctrine, as well as personal and group piety. Puritans adopted a Reformed theology, but they
also took note of radical criticisms of Zwingli in Zurich and Calvin in Geneva. In church polity, some advocated
for separation from all other Christians, in favor of autonomous gathered churches. These separatist and independent
strands of Puritanism became prominent in the 1640s, when the supporters of a Presbyterian polity in the Westminster
Assembly were unable to forge a new English national church. Although the Reformation was a religious movement, it
also had a strong impact on all other aspects of life: marriage and family, education, the humanities and sciences,
the political and social order, the economy, and the arts. Protestant churches reject the idea of a celibate priesthood
and thus allow their clergy to marry. Many of their families contributed to the development of intellectual elites
in their countries. Since about 1950, women have entered the ministry, and some have assumed leading positions (e.g.
bishops), in most Protestant churches. As the Reformers wanted all members of the church to be able to read the Bible,
education on all levels got a strong boost. By the middle of the eighteenth century, the literacy rate in England
was about 60 per cent, in Scotland 65 per cent, and in Sweden eight of ten men and women were able to read and to
write. Colleges and universities were founded. For example, the Puritans who established Massachusetts Bay Colony
in 1628 founded Harvard College only eight years later. About a dozen other colleges followed in the 18th century,
including Yale (1701). Pennsylvania also became a centre of learning. The Protestant concept of God and man allows
believers to use all their God-given faculties, including the power of reason. That means that they are allowed to
explore God's creation and, according to Genesis 2:15, make use of it in a responsible and sustainable way. Thus
a cultural climate was created that greatly enhanced the development of the humanities and the sciences. Another
consequence of the Protestant understanding of man is that the believers, in gratitude for their election and redemption
in Christ, are to follow God's commandments. Industry, frugality, calling, discipline, and a strong sense of responsibility
are at the heart of their moral code. In particular, Calvin rejected luxury. Therefore, craftsmen, industrialists,
and other businessmen were able to reinvest the greater part of their profits in the most efficient machinery and
the most modern production methods that were based on progress in the sciences and technology. As a result, productivity
grew, which led to increased profits and enabled employers to pay higher wages. In this way, the economy, the sciences,
and technology reinforced each other. The chance to participate in the economic success of technological inventions
was a strong incentive to both inventors and investors. The Protestant work ethic was an important force behind the
unplanned and uncoordinated mass action that influenced the development of capitalism and the Industrial Revolution.
This idea is also known as the "Protestant ethic thesis." In a factor analysis of the latest wave of World Values
Survey data, Arno Tausch (Corvinus University of Budapest) found that Protestantism emerges as very close to combining
religion and the traditions of liberalism. The Global Value Development Index, calculated by Tausch, relies on the
World Values Survey dimensions such as trust in the state of law, no support for shadow economy, postmaterial activism,
support for democracy, a non-acceptance of violence, xenophobia and racism, trust in transnational capital and Universities,
confidence in the market economy, supporting gender justice, and engaging in environmental activism, etc. Episcopalians
and Presbyterians, as well as other WASPs, tend to be considerably wealthier and better educated (having graduate
and post-graduate degrees per capita) than most other religious groups in United States, and are disproportionately
represented in the upper reaches of American business, law and politics, especially the Republican Party. Many of
the wealthiest and most affluent American families, such as the Vanderbilts, the Astors, the Rockefellers, the Du Ponts,
the Roosevelts, the Forbeses, the Whitneys, the Morgans, and the Harrimans, are Mainline Protestant families. Protestantism has had an important
influence on science. According to the Merton Thesis, there was a positive correlation between the rise of English
Puritanism and German Pietism on the one hand and early experimental science on the other. The Merton Thesis has
two separate parts: Firstly, it presents a theory that science changes due to an accumulation of observations and
improvement in experimental technique and methodology; secondly, it puts forward the argument that the popularity
of science in 17th-century England and the religious demography of the Royal Society (English scientists of that
time were predominantly Puritans or other Protestants) can be explained by a correlation between Protestantism and
scientific values. Merton focused on English Puritanism and German Pietism as having been responsible for the
development of the scientific revolution of the 17th and 18th centuries. He explained that the connection between
religious affiliation and interest in science was the result of a significant synergy between the ascetic Protestant
values and those of modern science. Protestant values encouraged scientific research by allowing science to identify
God's influence on the world—his creation—and thus providing a religious justification for scientific research. In
the Middle Ages, the Church and the worldly authorities were closely related. Martin Luther separated the religious
and the worldly realms in principle (doctrine of the two kingdoms). The believers were obliged to use reason to govern
the worldly sphere in an orderly and peaceful way. Luther's doctrine of the priesthood of all believers upgraded
the role of laymen in the church considerably. The members of a congregation had the right to elect a minister and,
if necessary, to vote for his dismissal (Treatise On the right and authority of a Christian assembly or congregation
to judge all doctrines and to call, install and dismiss teachers, as testified in Scripture; 1523). Calvin strengthened
this basically democratic approach by including elected laymen (church elders, presbyters) in his representative
church government. The Huguenots added regional synods and a national synod, whose members were elected by the congregations,
to Calvin's system of church self-government. This system was taken over by the other reformed churches. Politically,
Calvin favoured a mixture of aristocracy and democracy. He appreciated the advantages of democracy: "It is an invaluable
gift, if God allows a people to freely elect its own authorities and overlords." Calvin also thought that earthly
rulers lose their divine right and must be put down when they rise up against God. To further protect the rights
of ordinary people, Calvin suggested separating political powers in a system of checks and balances (separation of
powers). Thus he and his followers resisted political absolutism and paved the way for the rise of modern democracy.
Besides England, the Netherlands were, under Calvinist leadership, the freest country in Europe in the seventeenth
and eighteenth centuries. It granted asylum to philosophers like Baruch Spinoza and Pierre Bayle. Hugo Grotius was
able to teach his natural-law theory and a relatively liberal interpretation of the Bible. Consistent with Calvin's
political ideas, Protestants created both the English and the American democracies. In seventeenth-century England,
the most important persons and events in this process were the English Civil War, Oliver Cromwell, John Milton, John
Locke, the Glorious Revolution, the English Bill of Rights, and the Act of Settlement. Later, the British took their
democratic ideals to their colonies, e.g. Australia, New Zealand, and India. In North America, Plymouth Colony (Pilgrim
Fathers; 1620) and Massachusetts Bay Colony (1628) practised democratic self-rule and separation of powers. These
Congregationalists were convinced that the democratic form of government was the will of God. The Mayflower Compact
was a social contract. Protestants also took the initiative in advocating for religious freedom. Freedom of conscience
had high priority on the theological, philosophical, and political agendas since Luther refused to recant his beliefs
before the Diet of the Holy Roman Empire at Worms (1521). In his view, faith was a free work of the Holy Spirit and
could, therefore, not be forced on a person. The persecuted Anabaptists and Huguenots demanded freedom of conscience,
and they practised separation of church and state. In the early seventeenth century, Baptists like John Smyth and
Thomas Helwys published tracts in defense of religious freedom. Their thinking influenced John Milton and John Locke's
stance on tolerance. Under the leadership of Baptist Roger Williams, Congregationalist Thomas Hooker, and Quaker
William Penn, respectively, Rhode Island, Connecticut, and Pennsylvania combined democratic constitutions with freedom
of religion. These colonies became safe havens for persecuted religious minorities, including Jews. The United States
Declaration of Independence, the United States Constitution, and the American Bill of Rights with its fundamental
human rights made this tradition permanent by giving it a legal and political framework. The great majority of American
Protestants, both clergy and laity, strongly supported the independence movement. All major Protestant churches were
represented in the First and Second Continental Congresses. In the nineteenth and twentieth centuries, the American
democracy became a model for numerous other countries and regions throughout the world (e.g., Latin America, Japan,
and Germany). The strongest link between the American and French Revolutions was Marquis de Lafayette, an ardent
supporter of the American constitutional principles. The French Declaration of the Rights of Man and of the Citizen
was mainly based on Lafayette's draft of this document. The United Nations Declaration and Universal Declaration
of Human Rights also echo the American constitutional tradition. Democracy, social-contract theory, separation of
powers, religious freedom, separation of church and state – these achievements of the Reformation and early Protestantism
were elaborated on and popularized by Enlightenment thinkers. Some of the philosophers of the English, Scottish,
German, and Swiss Enlightenment - Thomas Hobbes, John Locke, John Toland, David Hume, Gottfried Wilhelm Leibniz,
Christian Wolff, Immanuel Kant, and Jean-Jacques Rousseau - had Protestant backgrounds. For example, John Locke,
whose political thought was based on "a set of Protestant Christian assumptions", derived the equality of all humans,
including the equality of the genders ("Adam and Eve"), from Genesis 1:26–28. As all persons were created equally
free, all governments needed "the consent of the governed." These Lockean ideas were fundamental to the United States
Declaration of Independence, which also deduced human rights from the biblical belief in creation: "We hold these
truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable
Rights, that among these are Life, Liberty, and the pursuit of Happiness." Also, other human rights were advocated
for by some Protestants. For example, torture was abolished in Prussia in 1740, slavery in Britain in 1834 and in
the United States in 1865 (William Wilberforce, Harriet Beecher Stowe, Abraham Lincoln - against Southern Protestants).
Hugo Grotius and Samuel Pufendorf were among the first thinkers who made significant contributions to international
law. The Geneva Convention, an important part of humanitarian international law, was largely the work of Henry Dunant,
a reformed pietist. He also founded the Red Cross. Protestants have founded hospitals, homes for disabled or elderly
people, educational institutions, organizations that give aid to developing countries, and other social welfare agencies.
In the nineteenth century, throughout the Anglo-American world, numerous dedicated members of all Protestant denominations
were active in social reform movements such as the abolition of slavery, prison reforms, and woman suffrage. As an
answer to the "social question" of the nineteenth century, Germany under Chancellor Otto von Bismarck introduced
insurance programs that led the way to the welfare state (health insurance, accident insurance, disability insurance,
old-age pensions). To Bismarck this was "practical Christianity". These programs, too, were copied by many other
nations, particularly in the Western world. World literature was enriched by the works of Edmund Spenser, John Milton,
John Bunyan, John Donne, John Dryden, Daniel Defoe, William Wordsworth, Jonathan Swift, Johann Wolfgang Goethe, Friedrich
Schiller, Samuel Taylor Coleridge, Edgar Allan Poe, Matthew Arnold, Conrad Ferdinand Meyer, Theodor Fontane, Washington
Irving, Robert Browning, Emily Dickinson, Emily Brontë, Charles Dickens, Nathaniel Hawthorne, Thomas Stearns Eliot,
John Galsworthy, Thomas Mann, William Faulkner, John Updike, and many others. The view of the Roman Catholic Church
is that Protestant denominations cannot be considered churches but rather that they are ecclesial communities or
specific faith-believing communities because their ordinances and doctrines are not historically the same as the
Catholic sacraments and dogmas, and the Protestant communities have no sacramental ministerial priesthood and therefore
lack true apostolic succession. According to Bishop Hilarion (Alfeyev) the Eastern Orthodox Church shares the same
view on the subject. Contrary to how the Protestant Reformers were often characterized, the concept of a catholic
or universal Church was not brushed aside during the Protestant Reformation. On the contrary, the visible unity of
the catholic or universal church was seen by the Protestant reformers as an important and essential doctrine of the
Reformation. The Magisterial reformers, such as Martin Luther, John Calvin, and Huldrych Zwingli, believed that they
were reforming the Roman Catholic Church, which they viewed as having become corrupted. Each of them took very seriously
the charges of schism and innovation, denying these charges and maintaining that it was the Roman Catholic Church
that had left them. In order to justify their departure from the Roman Catholic Church, Protestants often posited
a new argument, saying that there was no real visible Church with divine authority, only a spiritual, invisible,
and hidden church—this notion began in the early days of the Protestant Reformation. Wherever the Magisterial Reformation,
which received support from the ruling authorities, took place, the result was a reformed national Protestant church
envisioned to be a part of the whole invisible church, but disagreeing, in certain important points of doctrine and
doctrine-linked practice, with what had until then been considered the normative reference point on such matters,
namely the Papacy and central authority of the Roman Catholic Church. The Reformed churches thus believed in some
form of Catholicity, founded on their doctrines of the five solas and a visible ecclesiastical organization based
on the 14th and 15th century Conciliar movement, rejecting the papacy and papal infallibility in favor of ecumenical
councils, but rejecting the latest ecumenical council, the Council of Trent. Religious unity therefore became not
one of doctrine and identity but one of invisible character, wherein the unity was one of faith in Jesus Christ,
not common identity, doctrine, belief, and collaborative action. The ecumenical movement has had an influence on
mainline churches, beginning at least in 1910 with the Edinburgh Missionary Conference. Its origins lay in the recognition
of the need for cooperation on the mission field in Africa, Asia and Oceania. Since 1948, the World Council of Churches
has been influential, but ineffective in creating a united church. There are also ecumenical bodies at regional,
national and local levels across the globe; but schisms still far outnumber unifications. One expression of the ecumenical
movement, though not the only one, has been the move to form united churches, such as the Church of South India, the Church
of North India, the US-based United Church of Christ, the United Church of Canada, the Uniting Church in Australia
and the United Church of Christ in the Philippines which have rapidly declining memberships. There has been a strong
engagement of Orthodox churches in the ecumenical movement, though the reaction of individual Orthodox theologians
has ranged from tentative approval of the aim of Christian unity to outright condemnation of the perceived effect
of watering down Orthodox doctrine. A Protestant baptism is held to be valid by the Catholic Church if given with
the trinitarian formula and with the intent to baptize. However, as the ordination of Protestant ministers is not
recognized due to the lack of apostolic succession and the disunity from the Catholic Church, all other sacraments (except
marriage) performed by Protestant denominations and ministers are not recognized as valid. Therefore, Protestants
desiring full communion with the Catholic Church are not re-baptized (although they are confirmed) and Protestant
ministers who become Catholics may be ordained to the priesthood after a period of study. In 1999, the representatives
of Lutheran World Federation and Catholic Church signed the Joint Declaration on the Doctrine of Justification, apparently
resolving the conflict over the nature of justification which was at the root of the Protestant Reformation, although
Confessional Lutherans reject this statement. This is understandable, since there is no authority within Confessional
Lutheranism that could compel acceptance of it. On 18 July 2006, delegates to the World Methodist Conference voted unanimously to adopt the Joint Declaration.
There are more than 900 million Protestants worldwide, among approximately 2.4 billion Christians. In 2010,
the total of more than 800 million Protestants included 300 million in Sub-Saharan Africa, 260 million in the Americas, 140 million
in Asia-Pacific region, 100 million in Europe and 2 million in Middle East-North Africa. Protestants account for
nearly forty percent of Christians worldwide and more than one tenth of the total human population. Various estimates
put the percentage of Protestants in relation to the total number of world's Christians at 33%, 36%, 36.7%, and 40%,
while in relation to the world's population at 11.6% and 13%. In European countries which were most profoundly influenced
by the Reformation, Protestantism still remains the most practiced religion. These include the Nordic countries and
the United Kingdom. In other historical Protestant strongholds such as Germany, the Netherlands, Switzerland, Latvia,
Estonia and Hungary, it remains one of the most popular religions. Although the Czech Republic was the site of one of
the most significant pre-Reformation movements, it now has only a few Protestant adherents, mainly for historical
reasons such as the persecution of Protestants by the Catholic Habsburgs, restrictions during Communist rule, and
ongoing secularization. Over the last several decades, religious practice has been declining as secularization
has increased. According to a 2012 Eurobarometer study on religiosity in the European Union, Protestants
made up 12% of the EU population. According to Pew Research Center, Protestants constituted nearly one fifth (or
17.8%) of the continent's Christian population in 2010. Clarke and Beyer estimate that Protestants constituted 15%
of all Europeans in 2009, while Noll claims that less than 12% of them lived in Europe in 2010. Changes in worldwide
Protestantism over the last century have been significant. Since 1900, Protestantism has spread rapidly in Africa,
Asia, Oceania and Latin America. That caused Protestantism to be called a primarily non-Western religion. Much of
the growth has occurred after World War II, when decolonization of Africa and abolition of various restrictions against
Protestants in Latin American countries occurred. According to one source, Protestants constituted 2.5% of Latin
Americans, 2% of Africans, and 0.5% of Asians. By 2000, the corresponding percentages were 17%, more than 27%, and
5.5%, respectively. According to Mark A. Noll, 79% of Anglicans lived in the United Kingdom
in 1910, while most of the remainder was found in the United States and across the British Commonwealth. By 2010,
59% of Anglicans were found in Africa. In 2010, more Protestants lived in India than in the UK or Germany, while
Protestants in Brazil accounted for as many people as Protestants in the UK and Germany combined. Almost as many
lived in each of Nigeria and China as in all of Europe. China is home to the world's largest Protestant minority.
Brasília (Portuguese pronunciation: [bɾaˈziljɐ]) is the federal capital of Brazil and seat of government of the Federal District.
The city is located atop the Brazilian highlands in the country's center-western region. It was founded on April
21, 1960, to serve as the new national capital. Brasília and its metro (encompassing the whole of the Federal District)
had a population of 2,556,149 in 2011, making it the 4th most populous city in Brazil. Among major Latin American
cities, Brasília has the highest GDP per capita at R$61,915 (US$36,175). The city has a unique status in Brazil,
as it is an administrative division rather than a legal municipality like other cities in Brazil. The name 'Brasília'
is commonly used as a synonym for the Federal District through synecdoche; however, the Federal District is composed
of 31 administrative regions, only one of which is Brasília proper, with a population of 209,926 in a 2011 survey.
Demographic publications generally do not make this distinction and list the population of Brasília as synonymous
with the population of the Federal District, considering the whole of it as its metropolitan area. The city was one
of the main host cities of the 2014 FIFA World Cup. Additionally, Brasília hosted the 2013 FIFA Confederations Cup.
Juscelino Kubitschek, President of Brazil from 1956 to 1961, ordered the construction of Brasília, fulfilling the
promise of the Constitution and his own political campaign promise. Building Brasília was part of Juscelino's "fifty
years of prosperity in five" plan. Lúcio Costa won the 1957 urban-design contest, in which 5,550 competitors took part,
and became the city's main urban planner. Oscar Niemeyer, a close friend, was the chief architect of most public buildings and Roberto Burle Marx
was the landscape designer. Brasília was built in 41 months, from 1956 to April 21, 1960, when it was officially
inaugurated. Until the 1980s, the governor of the Federal District was appointed by the Federal Government, and the
laws of Brasília were issued by the Brazilian Federal Senate. With the Constitution of 1988 Brasília gained the right
to elect its Governor, and a District Assembly (Câmara Legislativa) was elected to exercise legislative power. The
Federal District does not have a Judicial Power of its own. The Judicial Power which serves the Federal District
also serves federal territories. Currently, Brazil does not have any territories; therefore, for now, the courts serve
only cases from the Federal District. Brasília has a tropical savanna climate (Aw) according to the Köppen system,
with two distinct seasons: the rainy season, from October to April, and a dry season, from May to September. The
average temperature is 20.6 °C (69.1 °F). September, at the end of the dry season, has the highest average maximum
temperature, 28.3 °C (82.9 °F), while the lowest average maximum and minimum temperatures, 25.1 °C (77.2 °F) and 12.9
°C (55.2 °F) respectively, occur in the middle of the dry season. Average temperatures from September through March are a consistent 22 °C (72 °F). With
247.4 mm (9.7 in), January is the month with the highest rainfall of the year, while June is the lowest, with only
8.7 mm (0.3 in). The Portuguese language is the official national language and the primary language taught in schools.
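The paired Celsius and Fahrenheit figures in the climate description above can be checked with the standard conversion formula; a minimal Python sketch (the helper name `c_to_f` is illustrative, not from any source):

```python
# Celsius-to-Fahrenheit conversion, used to verify the figures quoted above.
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

# Average annual temperature: 20.6 degrees C, quoted as 69.1 degrees F.
print(round(c_to_f(20.6), 1))  # 69.1
# Highest average maximum (September): 28.3 degrees C, quoted as 82.9 degrees F.
print(round(c_to_f(28.3), 1))  # 82.9
```

Rounding each result to one decimal place reproduces every Fahrenheit value given in the text.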
English and Spanish are also part of the official curriculum. The city has six international schools: American School
of Brasília, Brasília International School (BIS), Escola das Nações, Swiss International School (SIS), Lycée français
François-Mitterrand (LfFM) and Maple Bear Canadian School. August 2016 will see the opening of a new international
school - The British School of Brasilia. Brasília has two universities, three university centers, and many private
colleges. The Cathedral of Brasília, in the capital of the Federative Republic of Brazil, is an expression of the
architect Oscar Niemeyer. This concrete-framed hyperboloid structure, with its glass roof, seems to reach up, open,
to the heavens. On 31 May 1970, the Cathedral’s structure was finished, and only the 70 m (229.66 ft) diameter of
the circular area was visible. Niemeyer's design for the Cathedral of Brasília is based on a hyperboloid of revolution
whose sections are asymmetric. The hyperboloid structure itself is a result of 16 identical assembled concrete columns.
These columns, having hyperbolic section and weighing 90 t, represent two hands moving upwards to heaven. The Cathedral
was dedicated on 31 May 1970. A series of low-lying annexes (largely hidden) flank both ends. Also in the square
are the glass-faced Planalto Palace housing the presidential offices, and the Palace of the Supreme Court. Farther
east, on a triangle of land jutting into the lake, is the Palace of the Dawn (Palácio da Alvorada; the presidential
residence). Between the federal and civic buildings on the Monumental Axis is the city's cathedral, considered by
many to be Niemeyer's finest achievement (see photographs of the interior). The parabolically shaped structure is
characterized by its 16 gracefully curving supports, which join in a circle 115 feet (35 meters) above the floor
of the nave; stretched between the supports are translucent walls of tinted glass. The nave is entered via a subterranean
passage rather than conventional doorways. Other notable buildings are Buriti Palace, Itamaraty Palace, the National
Theater, and several foreign embassies that creatively embody features of their national architecture. The Brazilian
landscape architect Roberto Burle Marx designed landmark modernist gardens for some of the principal buildings. Both
low-cost and luxury housing were built by the government in Brasília. The residential zones of the inner city
are arranged into superquadras ("superblocks"): groups of apartment buildings along with a prescribed number and
type of schools, retail stores, and open spaces. At the northern end of Lake Paranoá, separated from the inner city,
is a peninsula with many fashionable homes, and a similar city exists on the southern lakeshore. Originally the city
planners envisioned extensive public areas along the shores of the artificial lake, but during early development
private clubs, hotels, and upscale residences and restaurants gained footholds around the water. Set well apart from
the city are satellite cities, including Gama, Ceilândia, Taguatinga, Núcleo Bandeirante, Sobradinho, and Planaltina.
These cities, with the exception of Gama and Sobradinho, were not planned. After a visit to Brasília, the French writer
Simone de Beauvoir complained that all of its superquadras exuded "the same air of elegant monotony," and other observers
have equated the city's large open lawns, plazas, and fields to wastelands. As the city has matured, some of these
have gained adornments, and many have been improved by landscaping, giving some observers a sense of "humanized"
spaciousness. Although not fully accomplished, the "Brasília utopia" has produced a city of relatively high quality
of life, in which the citizens live in forested areas with sporting and leisure structure (the superquadras) flanked
by small commercial areas, bookstores and cafes; the city is famous for its cuisine and efficiency of transit. The
major roles of construction and of services (government, communications, banking and finance, food production, entertainment,
and legal services) in Brasília's economy reflect the city's status as a governmental rather than an industrial center.
Industries connected with construction, food processing, and furnishings are important, as are those associated with
publishing, printing, and computer software. GDP is divided into public administration (54.8%), services (28.7%),
industry (10.2%), commerce (6.1%), and agribusiness (0.2%). Besides being the political center, Brasília is an important economic center.
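As a quick sanity check on the sector breakdown above, the shares should sum to 100%, and applying each share to the city's stated GDP of 99.5 billion reais gives each sector's absolute size. A minimal Python sketch (variable names are illustrative, not from any official source):

```python
# Brasília GDP sector shares (percent), as quoted in the text.
GDP_BILLION_REAIS = 99.5
shares = {
    "public administration": 54.8,
    "services": 28.7,
    "industry": 10.2,
    "commerce": 6.1,
    "agribusiness": 0.2,
}

# The shares should cover the whole economy.
assert round(sum(shares.values()), 6) == 100.0

# Absolute size of each sector in billions of reais.
for sector, pct in shares.items():
    print(f"{sector}: {GDP_BILLION_REAIS * pct / 100:.1f} bn R$")
```

Public administration alone, at 54.8%, works out to roughly 54.5 billion reais of the total.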
Brasília has the highest city gross domestic product (GDP) of 99.5 billion reais, representing 3.76% of the total
Brazilian GDP. The main economic activity of the federal capital results from its administrative function. Its industrial
planning is studied carefully by the Government of the Federal District. Because the city is listed by UNESCO as a
World Heritage Site, the government in Brasília has opted to encourage the development of non-polluting industries such as software, film, video, and
gemology among others, with emphasis on environmental preservation and maintaining ecological balance, preserving
the city property. The city's planned design included specific areas for almost everything, including accommodation:
the Hotel Sectors North and South. New hotel facilities are being developed elsewhere, such as the hotels and tourism
Sector North, located on the shores of Lake Paranoá. Brasília has a range of tourist accommodation from inns, pensions
and hostels to larger international chain hotels. The city's restaurants cater to a wide range of foods from local
and regional Brazilian dishes to international cuisine. Brasília has also been the focus of modern-day literature.
Published in 2008, The World In Grey: Dom Bosco's Prophecy, by author Ryan J. Lucero, tells an apocalyptic story
based on the famous prophecy from the late 19th century by the Italian saint Don Bosco. According to Don Bosco's
prophecy: "Between parallels 15 and 20, around a lake which shall be formed; A great civilization will thrive, and
that will be the Promised Land." Brasília lies between the parallels 15° S and 20° S, where an artificial lake (Paranoá
Lake) was formed. Don Bosco is Brasília's patron saint. Praça dos Três Poderes (Portuguese for Square of the Three
Powers) is a plaza in Brasília. The name is derived from the encounter of the three federal branches around the plaza:
the Executive, represented by the Palácio do Planalto (presidential office); the Legislative, represented by the
National Congress (Congresso Nacional); and the Judicial branch, represented by the Supreme Federal Court (Supremo
Tribunal Federal). It is a tourist attraction in Brasília, designed by Lúcio Costa and Oscar Niemeyer as a place
where the three branches would meet harmoniously. The Palácio da Alvorada is the official residence of the President
of Brazil. The palace was designed, along with the rest of the city of Brasília, by Oscar Niemeyer and inaugurated
in 1958. One of the first structures built in the republic's new capital city, the "Alvorada" lies on a peninsula
at the margins of Lake Paranoá. The principles of simplicity and modernity that characterized the great works of architecture in the past motivated Niemeyer. The viewer has the impression of looking at a glass box, softly landed
on the ground with the support of thin external columns. The building has an area of 7,000 m2 with three floors consisting
of the basement, landing, and second floor. The auditorium, kitchen, laundry, medical center, and administration
offices are at basement level. The rooms used by the presidency for official receptions are on the landing. The second
floor has four suites, two apartments, and various private rooms which make up the residential part of the palace.
The building also has a library, a heated Olympic-sized swimming pool, a music room, two dining rooms and various
meeting rooms. A chapel and heliport are in adjacent buildings. The Palácio do Planalto is the official workplace
of the President of Brazil. It is located at the Praça dos Três Poderes in Brasília. As the seat of government, the
term "Planalto" is often used as a metonym for the executive branch of government. The main working office of the
President of the Republic is in the Palácio do Planalto. The President and his or her family do not live in it, rather
in the official residence, the Palácio da Alvorada. Besides the President, senior advisors also have offices in the
"Planalto," including the Vice-President of Brazil and the Chief of Staff. The other Ministries are along the Esplanada
dos Ministérios. The architect of the Palácio do Planalto was Oscar Niemeyer, creator of most of the important buildings
in Brasília. The idea was to project an image of simplicity and modernity using fine lines and waves to compose the
columns and exterior structures. The Palace is four stories high and has an area of 36,000 m2. Four other adjacent buildings are also part of the complex. Brasília International Airport handles a large number of takeoffs and landings, and it is not unusual for flights to be delayed in the holding pattern before landing. Following the airport's master plan, Infraero built
a second runway, which was finished in 2006. In 2007, the airport handled 11,119,872 passengers. The main building's
third floor, with 12,000 square meters, has a panoramic deck, a food court, shops, four movie theatres with
total capacity of 500 people, and space for exhibitions. Brasília Airport has 136 vendor spaces. The airport is located
about 11 km (6.8 mi) from the central area of Brasília, outside the metro system. The area outside the airport's
main gate is lined with taxis as well as several bus line services which connect the airport to Brasília's central
district. The parking lot accommodates 1,200 cars. The airport is serviced by domestic and regional airlines (TAM,
GOL, Azul, WebJET, Trip and Avianca), in addition to a number of international carriers. In 2012, the concession to operate Brasília International Airport was won by the InfrAmerica consortium, formed by the Brazilian engineering company ENGEVIX and the Argentine
Corporacion America holding company, with a 50% stake each. During the 25-year concession, the airport may be expanded
to up to 40 million passengers a year. In 2014, the airport received 15 new boarding bridges, totalling 28 in all.
This was the main requirement made by the federal government, which transferred the operation of the terminal to
the Inframerica Group after an auction. The group invested R$750 million in the project. In the same year, the number
of parking spaces doubled, reaching three thousand. The airport's entrance received a new rooftop cover and a new access road. Furthermore, a VIP room was created on Terminal 1's third floor. The investments increased the capacity of Brasília's airport from approximately 15 million passengers per year to 21 million by 2014. Brasília has direct
flights to all states of Brazil and direct international flights to Atlanta, Buenos Aires, Lisbon, Miami, Panama
City, and Paris. The Juscelino Kubitschek bridge, also known as the 'President JK Bridge' or the 'JK Bridge', crosses
Lake Paranoá in Brasília. It is named after Juscelino Kubitschek de Oliveira, former president of Brazil. It was
designed by architect Alexandre Chan and structural engineer Mário Vila Verde. Chan won the Gustav Lindenthal Medal
for this project at the 2003 International Bridge Conference in Pittsburgh due to "...outstanding achievement demonstrating
harmony with the environment, aesthetic merit and successful community participation". The metro leaves the Rodoviária
(bus station) and goes south, avoiding most of the political and tourist areas. The main purpose of the metro is
to serve cities, such as Samambaia, Taguatinga and Ceilândia, as well as Guará and Águas Claras. The satellite cities
served are more populated in total than the Plano Piloto itself (the census of 2000 indicated that Ceilândia had
344,039 inhabitants, Taguatinga had 243,575, whereas the Plano Piloto had approximately 400,000 inhabitants), and
most residents of the satellite cities depend on public transportation. In the original city plan, the interstate buses were also supposed to stop at the Central Station. Because of the growth of Brasília (and corresponding growth in the
bus fleet), today the interstate buses leave from the older interstate station (called Rodoferroviária), located
at the western end of the Eixo Monumental. The Central Bus Station also contains a main metro station. A new bus
station was opened in July 2010. It is on Saída Sul (South Exit) near the Parkshopping Mall, alongside its metro station, and it is also an interstate bus station, used only to leave the Federal District. Brasília is known as a departure point for the practice of unpowered air sports, which may be practiced with hang gliding or paragliding wings. Practitioners of such sports say that, because of the city's dry weather, the city offers strong thermal winds and great "cloud streets", which is also the name of a manoeuvre much appreciated by practitioners. In 2003, Brasília
hosted the 14th Hang Gliding World Championship, one of the categories of free flying. In August 2005, the city hosted
the 2nd stage of the Brazilian Hang Gliding Championship.
Greece is a developed country with an economy based on the service (82.8%) and industrial sectors (13.3%). The agricultural
sector contributed 3.9% of national economic output in 2015. Important Greek industries include tourism and shipping.
With 18 million international tourists in 2013, Greece was the 7th most visited country in the European Union and
16th in the world. The Greek Merchant Navy is the largest in the world, with Greek-owned vessels accounting for 15%
of global deadweight tonnage as of 2013. The increased demand for international maritime transportation between Greece
and Asia has resulted in unprecedented investment in the shipping industry. The country is a significant agricultural
producer within the EU. Greece has the largest economy in the Balkans and is an important regional investor. Greece
was the largest foreign investor in Albania in 2013, the third-largest in Bulgaria, among the top three in Romania and Serbia
and the most important trading partner and largest foreign investor in the former Yugoslav Republic of Macedonia.
The Greek telecommunications company OTE has become a strong investor in former Yugoslavia and in other Balkan countries.
Greece is classified as an advanced, high-income economy, and was a founding member of the Organisation for Economic
Co-operation and Development (OECD) and of the Organization of the Black Sea Economic Cooperation (BSEC). The country
joined what is now the European Union in 1981. In 2001 Greece adopted the euro as its currency, replacing the Greek
drachma at an exchange rate of 340.75 drachmae per euro. Greece is a member of the International Monetary Fund and
of the World Trade Organization, and ranked 34th on Ernst & Young's Globalization Index 2011. World War II (1939-1945)
devastated the country's economy, but the high levels of economic growth that followed from 1950 to 1980 have been
called the Greek economic miracle. From 2000 Greece saw high levels of GDP growth above the Eurozone average, peaking
at 5.8% in 2003 and 5.7% in 2006. The subsequent Great Recession and Greek government-debt crisis, a central focus
of the wider European debt crisis, plunged the economy into a sharp downturn, with real GDP growth rates of −0.3%
in 2008, −4.3% in 2009, −5.5% in 2010, −9.1% in 2011, −7.3% in 2012 and −3.2% in 2013. In 2011, the country's public
debt reached €356 billion (172% of nominal GDP). After negotiating the biggest debt restructuring in history with
the private sector, Greece reduced its sovereign debt burden to €280 billion (137% of GDP) in the first quarter of
2012. Greece achieved a real GDP growth rate of 0.7% in 2014 after 6 years of economic decline, but fell back into
recession in 2015. The evolution of the Greek economy during the 19th century (a period that transformed a large
part of the world because of the Industrial Revolution) has been little researched. Recent research from 2006 examines
the gradual development of industry and further development of shipping in a predominantly agricultural economy,
calculating an average rate of per capita GDP growth between 1833 and 1911 that was only slightly lower than that
of the other Western European nations. Industrial activity (including heavy industry such as shipbuilding) was evident,
mainly in Ermoupolis and Piraeus. Nonetheless, Greece faced economic hardships and defaulted on its external loans
in 1826, 1843, 1860 and 1894. After fourteen consecutive years of economic growth, Greece went into recession in
2008. By the end of 2009, the Greek economy faced the highest budget deficit and government debt-to-GDP ratios in
the EU. After several upward revisions, the 2009 budget deficit is now estimated at 15.7% of GDP. This, combined
with rapidly rising debt levels (127.9% of GDP in 2009) led to a precipitous increase in borrowing costs, effectively
shutting Greece out of the global financial markets and resulting in a severe economic crisis. Greece was accused
of trying to cover up the extent of its massive budget deficit in the wake of the global financial crisis. The allegation
was prompted by the massive revision of the 2009 budget deficit forecast by the new PASOK government elected in October
2009, from "6–8%" (estimated by the previous New Democracy government) to 12.7% (later revised to 15.7%). However,
the accuracy of the revised figures has also been questioned, and in February 2012 the Hellenic Parliament voted
in favor of an official investigation following accusations by a former member of the Hellenic Statistical Authority
that the deficit had been artificially inflated in order to justify harsher austerity measures. Most of the differences
in the revised budget deficit numbers were due to a temporary change of accounting practices by the new government,
i.e., recording expenses when military material was ordered rather than received. However, it was the retroactive
application of ESA95 methodology (applied since 2000) by Eurostat, that finally raised the reference year (1999)
budget deficit to 3.38% of GDP, thus exceeding the 3% limit. This led to claims that Greece (similar claims have
been made about other European countries like Italy) had not actually met all five accession criteria, and the common
perception that Greece entered the Eurozone through "falsified" deficit numbers. In the 2005 OECD report for Greece,
it was clearly stated that "the impact of new accounting rules on the fiscal figures for the years 1997 to 1999 ranged
from 0.7 to 1 percentage point of GDP; this retroactive change of methodology was responsible for the revised deficit
exceeding 3% in 1999, the year of [Greece's] EMU membership qualification". The above led the Greek minister of finance
to clarify that the 1999 budget deficit was below the prescribed 3% limit when calculated with the ESA79 methodology
in force at the time of Greece's application, and thus the criteria had been met. A common error is to confuse the discussion of Greece's Eurozone entry with the controversy over the use of derivatives deals with U.S. banks by Greece and other Eurozone countries to artificially reduce their reported budget deficits. A currency
swap arranged with Goldman Sachs allowed Greece to "hide" 2.8 billion Euros of debt, however, this affected deficit
values after 2001 (when Greece had already been admitted into the Eurozone) and is not related to Greece’s Eurozone
entry. According to Der Spiegel, credits given to European governments were disguised as "swaps" and consequently
did not get registered as debt because Eurostat at the time ignored statistics involving financial derivatives. A
German derivatives dealer had commented to Der Spiegel that "The Maastricht rules can be circumvented quite legally
through swaps," and "In previous years, Italy used a similar trick to mask its true debt with the help of a different
US bank." These conditions had enabled Greek as well as many other European governments to spend beyond their means,
while meeting the deficit targets of the European Union and the monetary union guidelines. In May 2010, the Greek
government deficit was again revised and estimated to be 13.6% which was the second highest in the world relative
to GDP with Iceland in first place at 15.7% and Great Britain third with 12.6%. Public debt was forecast, according
to some estimates, to hit 120% of GDP during 2010. As a consequence, there was a crisis in international confidence
in Greece's ability to repay its sovereign debt, as reflected by the rise of the country's borrowing rates (although
their slow rise – the 10-year government bond yield only exceeded 7% in April 2010 – coinciding with a large number
of negative articles, has led to arguments about the role of international news media in the evolution of the crisis).
In order to avert a default (as high borrowing rates effectively prohibited access to the markets), in May 2010 the
other Eurozone countries, and the IMF, agreed to a "rescue package" which involved giving Greece an immediate €45
billion in bail-out loans, with more funds to follow, totaling €110 billion. In order to secure the funding, Greece
was required to adopt harsh austerity measures to bring its deficit under control. Their implementation will be monitored
and evaluated by the European Commission, the European Central Bank and the IMF. Between 2005 and 2011, Greece had the highest percentage increase in industrial output relative to 2005 levels of all European Union members,
with an increase of 6%. Eurostat statistics show that the industrial sector was hit by the Greek financial crisis
throughout 2009 and 2010, with domestic output decreasing by 5.8% and industrial production in general by 13.4%.
Currently, Greece is ranked third in the European Union in the production of marble (over 920,000 tons), after Italy
and Spain. Greece has the largest merchant navy in the world, accounting for more than 15% of the world's total deadweight
tonnage (dwt) according to the United Nations Conference on Trade and Development. The Greek merchant navy's total
dwt of nearly 245 million is comparable only to Japan's, which is ranked second with almost 224 million. Additionally,
Greece represents 39.52% of all of the European Union's dwt. However, today's fleet roster is smaller than an all-time
high of 5,000 ships in the late 1970s. In terms of ship categories, Greek companies have 22.6% of the world's tankers
and 16.1% of the world's bulk carriers (in dwt). An additional equivalent of 27.45% of the world's tanker dwt is
on order, with another 12.7% of bulk carriers also on order. Shipping accounts for an estimated 6% of Greek GDP,
employs about 160,000 people (4% of the workforce), and represents 1/3 of the country's trade deficit. Earnings from
shipping amounted to €14.1 billion in 2011, while between 2000 and 2010 Greek shipping contributed a total of €140
billion (half of the country's public debt in 2009 and 3.5 times the receipts from the European Union in the period
2000–2013). The 2011 ECSA report showed that there are approximately 750 Greek shipping companies in operation. Counting
shipping as quasi-exports and in terms of monetary value, Greece ranked 4th globally in 2011 having "exported" shipping
services worth US$17,704.132 million; only Denmark, Germany and South Korea ranked higher during that year. Similarly
counting shipping services provided to Greece by other countries as quasi-imports and the difference between "exports"
and "imports" as a "trade balance", Greece in 2011 ranked in the latter second behind Germany, having "imported"
shipping services worth 7,076.605 million US$ and having run a "trade surplus" of 10,712.342 million US$. Between
1949 and the 1980s, telephone communications in Greece were a state monopoly by the Hellenic Telecommunications Organization,
better known by its acronym, OTE. Despite the liberalization of telephone communications in the country in the 1980s,
OTE still dominates the Greek market in its field and has emerged as one of the largest telecommunications companies
in Southeast Europe. Since 2011, the company's major shareholder is Deutsche Telekom with a 40% stake, while the
Greek state continues to own 10% of the company's shares. OTE owns several subsidiaries across the Balkans, including
Cosmote, Greece's top mobile telecommunications provider, Cosmote Romania and Albanian Mobile Communications. Greece
has tended to lag behind its European Union partners in terms of Internet use, with the gap closing rapidly in recent
years. The percentage of households with access to the Internet more than doubled between 2006 and 2013, from 23%
to 56% respectively (compared with an EU average of 49% and 79%). At the same time, there has been a massive increase
in the proportion of households with a broadband connection, from 4% in 2006 to 55% in 2013 (compared with an EU
average of 30% and 76%). However, Greece also has the EU's third highest percentage of people who have never used
the Internet: 36% in 2013, down from 65% in 2006 (compared with an EU average of 21% and 42%). Greece attracts more than 16 million tourists each year; according to an OECD report, tourism contributed 18.2% of the nation's GDP in 2008.
The same survey showed that the average tourist expenditure while in Greece was $1,073, ranking Greece 10th in the
world. The number of jobs directly or indirectly related to the tourism sector was 840,000 in 2008 and represented
19% of the country's total labor force. In 2009, Greece welcomed over 19.3 million tourists, a major increase from
the 17.7 million tourists the country welcomed in 2008. In recent years a number of well-known tourism-related organizations
have placed Greek destinations in the top of their lists. In 2009 Lonely Planet ranked Thessaloniki, the country's
second-largest city, the world's fifth best "Ultimate Party Town", alongside cities such as Montreal and Dubai, while
in 2011 the island of Santorini was voted as the best island in the world by Travel + Leisure. The neighbouring island
of Mykonos was ranked as the 5th best island in Europe. Thessaloniki was the European Youth Capital in 2014. Between
1975 and 2009, Olympic Airways (known after 2003 as Olympic Airlines) was the country’s state-owned flag carrier,
but financial problems led to its privatization and relaunch as Olympic Air in 2009. Both Aegean Airlines and Olympic
Air have won awards for their services; in 2009 and 2011, Aegean Airlines was awarded the "Best regional airline
in Europe" award by Skytrax, and also has two gold and one silver awards by the ERA, while Olympic Air holds one
silver ERA award for "Airline of the Year" as well as a "Condé Nast Traveller 2011 Readers Choice Awards: Top Domestic
Airline" award. Greece's rail network is estimated at 2,548 km. Rail transport in Greece is operated by TrainOSE,
a subsidiary of the Hellenic Railways Organization (OSE). Most of the country's network is standard gauge (1,565
km), while the country also has 983 km of narrow gauge. A total of 764 km of rail are electrified. Greece has rail
connections with Bulgaria, the Republic of Macedonia and Turkey. A total of three suburban railway systems (Proastiakos)
are in operation (in Athens, Thessaloniki and Patras), while one metro system is operational in Athens with another
under construction. According to Eurostat, Greece's largest port by tons of goods transported in 2010 is the port
of Aghioi Theodoroi, with 17.38 million tons. The Port of Thessaloniki comes second with 15.8 million tons, followed
by the Port of Piraeus, with 13.2 million tons, and the port of Eleusis, with 12.37 million tons. The total number
of goods transported through Greece in 2010 amounted to 124.38 million tons, a considerable drop from the 164.3 million
tons transported through the country in 2007. Since then, Piraeus has grown to become the Mediterranean's third-largest
port thanks to heavy investment by Chinese logistics giant COSCO. In 2013, Piraeus was declared the fastest-growing
port in the world. In 2010 Piraeus handled 513,319 TEUs, followed by Thessaloniki, which handled 273,282 TEUs. In
the same year, 83.9 million people passed through Greece's ports, 12.7 million through the port of Paloukia in Salamis,
another 12.7 through the port of Perama, 9.5 million through Piraeus and 2.7 million through Igoumenitsa. In 2013,
Piraeus handled a record 3.16 million TEUs, the third-largest figure in the Mediterranean, of which 2.52 million
were transported through Pier II, owned by COSCO and 644,000 were transported through Pier I, owned by the Greek
state. Energy production in Greece is dominated by the Public Power Corporation (known mostly by its acronym ΔΕΗ,
or in English DEI). In 2009 DEI supplied 85.6% of all energy demand in Greece; the figure fell to 77.3% in 2010. Almost half (48%) of DEI's power output is generated using lignite, a drop from 51.6% in 2009. Another 12% comes from hydroelectric power plants and another 20% from natural gas. Between 2009 and 2010, independent companies'
energy production increased by 56%, from 2,709 Gigawatt hour in 2009 to 4,232 GWh in 2010. In 2008 renewable energy
accounted for 8% of the country's total energy consumption, a rise from the 7.2% it accounted for in 2006, but still
below the EU average of 10% in 2008. 10% of the country's renewable energy comes from solar power, while most comes
from biomass and waste recycling. In line with the European Commission's Directive on Renewable Energy, Greece aims
to get 18% of its energy from renewable sources by 2020. In 2013 and for several months, Greece produced more than
20% of its electricity from renewable energy sources and hydroelectric power plants. Greece currently has no nuclear power plants in operation; however, in 2009 the Academy of Athens suggested that research into the possibility of Greek nuclear power plants begin. In addition, Greece is also set to start oil and gas exploration in
other locations in the Ionian Sea, as well as the Libyan Sea, within the Greek exclusive economic zone, south of
Crete. The Ministry of the Environment, Energy and Climate Change announced that there was interest from various
countries (including Norway and the United States) in exploration, and the first results regarding the amount of
oil and gas in these locations were expected in the summer of 2012. In November 2012, a report published by Deutsche
Bank estimated the value of natural gas reserves south of Crete at €427 billion. Between 1832 and 2002 the currency
of Greece was the drachma. After signing the Maastricht Treaty, Greece applied to join the eurozone. The two main
convergence criteria were a maximum budget deficit of 3% of GDP and a declining public debt if it stood above 60%
of GDP. Greece met the criteria as shown in its 1999 annual public account. On 1 January 2001, Greece joined the
eurozone, with the adoption of the euro at the fixed exchange rate ₯340.75 to €1. However, in 2001 the euro only
existed electronically, so the physical exchange from drachma to euro only took place on 1 January 2002. This was
followed by a ten-year period for eligible exchange of drachma to euro, which ended on 1 March 2012. The IMF forecast that Greece's unemployment rate would peak at 14.8 percent in 2012 and decrease to 14.1 percent in 2014. In fact, the Greek economy suffered prolonged high unemployment: the figure was between 9 and 11 per cent in 2009, soared to 28 per cent in 2013, and stood at around 24 per cent in 2015.
It is thought that Greece's potential output has been eroded by this prolonged massive unemployment due to the associated
hysteresis effects.
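The fixed drachma-to-euro conversion rate mentioned above (₯340.75 to €1) is simple arithmetic; as an illustrative sketch (the function names here are ours, not any official formula):

```python
# Illustrative sketch: converting Greek drachmas to euros at the fixed
# rate of 340.75 drachmae per euro adopted on 1 January 2001.
# (Function and constant names are illustrative assumptions.)

FIXED_RATE = 340.75  # drachmae per euro

def drachma_to_euro(drachmae: float) -> float:
    """Convert a drachma amount to euros at the fixed conversion rate."""
    return drachmae / FIXED_RATE

def euro_to_drachma(euros: float) -> float:
    """Convert a euro amount back to drachmae at the fixed rate."""
    return euros * FIXED_RATE

if __name__ == "__main__":
    print(round(drachma_to_euro(340.75), 2))  # 1.0
    print(round(euro_to_drachma(1.0), 2))     # 340.75
```

The same fixed rate governed the physical exchange of banknotes from 1 January 2002 until the exchange window closed on 1 March 2012.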
Unlike in Westminster-style legislatures, and unlike the Senate Majority Leader, the House Majority Leader's duties and prominence
vary depending upon the style and power of the Speaker of the House. Typically, the Speaker does not participate
in debate and rarely votes on the floor. In some cases, Majority Leaders have been more influential than the Speaker;
notably Tom DeLay who was more prominent than Speaker Dennis Hastert. In addition, Speaker Newt Gingrich delegated
to Dick Armey an unprecedented level of authority over scheduling legislation on the House floor. The current Minority Leader, Nancy Pelosi, serves as floor leader of the opposition party in the United States House of Representatives and is the counterpart to the Majority Leader. Unlike the Majority Leader, the Minority Leader is on the ballot for
Speaker of the House during the convening of the Congress. If the Minority Leader's party takes control of the House,
and the party officers are all re-elected to their seats, the Minority Leader is usually the party's top choice for
Speaker for the next Congress, while the Minority Whip is typically in line to become Majority Leader. The Minority
Leader usually meets with the Majority Leader and the Speaker to discuss agreements on controversial issues. Like
the Speaker of the House, the Minority Leaders are typically experienced lawmakers when they win election to this
position. When Nancy Pelosi, D-CA, became Minority Leader in the 108th Congress, she had served in the House nearly
20 years and had served as minority whip in the 107th Congress. When her predecessor, Richard Gephardt, D-MO, became
minority leader in the 104th House, he had been in the House for almost 20 years, had served as chairman of the Democratic
Caucus for four years, had been a 1988 presidential candidate, and had been majority leader from June 1989 until
Republicans captured control of the House in the November 1994 elections. Gephardt's predecessor in the minority
leadership position was Robert Michel, R-IL, who became GOP Leader in 1981 after spending 24 years in the House.
Michel's predecessor, Republican John Rhodes of Arizona, was elected Minority Leader in 1973 after 20 years of House
service. Starting with Republican Nicholas Longworth in 1925, and continuing through the Democrats' control of the House from 1931 to 1995, save for Republican majorities in 1947–49 and 1953–55, all majority leaders ascended directly to the Speakership upon the retirement of the incumbent. The only exceptions during this period
were Charles A. Halleck, who became Republican House leader and Minority Leader from 1959 to 1965; Hale Boggs, who died in a plane crash; and Dick Gephardt, who became the Democrats' House leader but only as Minority Leader, since his party lost control in the 1994 midterm elections. Since 1995, the only Majority Leader to become Speaker is John
Boehner, though indirectly as his party lost control in the 2006 midterm elections. He subsequently served as Republican
House leader and Minority Leader from 2007 to 2011 and then was elected Speaker when the House reconvened in 2011.
In 1998, with Speaker Newt Gingrich announcing his resignation, both Majority Leader Dick Armey and Majority Whip
Tom DeLay did not contest the Speakership which eventually went to Chief Deputy Whip Dennis Hastert. Traditionally,
the Speaker is reckoned as the leader of the majority party in the House, with the Majority Leader as second-in-command.
For instance, when the Republicans gained the majority in the House after the 2010 elections, Eric Cantor succeeded
Boehner as Majority Leader. Despite this, Cantor and his successor, Kevin McCarthy, have been reckoned as the second-ranking
Republicans in the House, since Boehner is still reckoned as the leader of the House Republicans. However, there
have been some exceptions. The most recent exception to this rule came when Majority Leader Tom DeLay generally overshadowed
Speaker Dennis Hastert from 2003 to 2006. In contrast, the Minority Leader is the undisputed leader of the minority
party. When the Majority Leader's party loses control of the House, and if the Speaker and Majority Leader both remain
in the leadership hierarchy, convention suggests that they would become the Minority Leader and Minority Whip, respectively.
As the minority party has one less leadership position after losing the speaker's chair, there may be a contest for
the remaining leadership positions. Nancy Pelosi is the most recent example of an outgoing Speaker seeking the Minority
Leader post to retain the House party leadership, as the Democrats lost control of the House in the 2010 elections.
Outgoing Speaker Nancy Pelosi ran successfully for Minority Leader in the 112th Congress. From an institutional perspective,
the rules of the House assign a number of specific responsibilities to the minority leader. For example, Rule XII,
clause 6, grants the minority leader (or his designee) the right to offer a motion to recommit with instructions;
Rule II, clause 6, states the Inspector General shall be appointed by joint recommendation of the Speaker, majority
leader, and minority leader; and Rule XV, clause 6, provides that the Speaker, after consultation with the minority
leader, may place legislation on the Corrections Calendar. The minority leader also has other institutional duties,
such as appointing individuals to certain federal entities. The roles and responsibilities of the minority leader
are not well-defined. To a large extent, the functions of the minority leader are defined by tradition and custom.
A minority leader from 1931 to 1939, Representative Bertrand Snell, R-N.Y., provided this "job description": "He
is spokesman for his party and enunciates its policies. He is required to be alert and vigilant in defense of the
minority's rights. It is his function and duty to criticize constructively the policies and programs of the majority,
and to this end employ parliamentary tactics and give close attention to all proposed legislation." To a large extent,
the minority leader's position is a 20th-century innovation. Prior to this time congressional parties were often
relatively disorganized, so it was not always evident who functioned as the opposition floor leader. Decades went
by before anything like the modern two-party congressional system emerged on Capitol Hill with official titles for
those who were its official leaders. However, from the beginning days of Congress, various House members intermittently
assumed the role of "opposition leader." Some scholars suggest that Representative James Madison of Virginia informally
functioned as the first "minority leader" because in the First Congress he led the opposition to Treasury Secretary
Alexander Hamilton's fiscal policies. During this early period, it was more usual that neither major party grouping
(Federalists and Democratic-Republicans) had an official leader. In 1813, for instance, a scholar recounts that the
Federalist minority of 36 Members needed a committee of 13 "to represent a party comprising a distinct minority"
and "to coordinate the actions of men who were already partisans in the same cause." In 1828, a foreign observer
of the House offered this perspective on the absence of formal party leadership on Capitol Hill: Internal party disunity
compounded the difficulty of identifying lawmakers who might have informally functioned as a minority leader. For
instance, "seven of the fourteen speakership elections from 1834 through 1859 had at least twenty different candidates
in the field. Thirty-six competed in 1839, ninety-seven in 1849, ninety-one in 1859, and 138 in 1855." With so many
candidates competing for the speakership, it is not at all clear that one of the defeated lawmakers then assumed
the mantle of "minority leader." The Democratic minority from 1861 to 1875 was so completely disorganized that they
did not "nominate a candidate for Speaker in two of these seven Congresses and nominated no man more than once in
the other five. The defeated candidates were not automatically looked to for leadership." In the judgment of political
scientist Randall Ripley, since 1883 "the candidate for Speaker nominated by the minority party has clearly been
the Minority Leader." However, this assertion is subject to dispute. On December 3, 1883, the House elected Democrat
John G. Carlisle of Kentucky as Speaker. Republicans placed in nomination for the speakership J. Warren Keifer of
Ohio, who was Speaker the previous Congress. Clearly, Keifer was not the Republicans' minority leader. He was a discredited
leader in part because as Speaker he arbitrarily handed out "choice jobs to close relatives ... all at handsome salaries."
Keifer received "the empty honor of the minority nomination. But with it came a sting -- for while this naturally
involves the floor leadership, he was deserted by his [partisan] associates and his career as a national figure terminated
ingloriously." Representative Thomas Reed, R-ME, who later became Speaker, assumed the de facto role of minority
floor leader in Keifer's stead. "[A]lthough Keifer was the minority's candidate for Speaker, Reed became its acknowledged
leader, and ever after, so long as he served in the House, remained the most conspicuous member of his party." Another
scholar contends that the minority leader position emerged even before 1883. On the Democratic side, "there were
serious caucus fights for the minority speakership nomination in 1871 and 1873," indicating that the "nomination
carried with it some vestige of leadership." Further, when Republicans were in the minority, the party nominated
for Speaker a series of prominent lawmakers, including ex-Speaker James Blaine of Maine in 1875, former Appropriations
Chairman James A. Garfield of Ohio, in 1876, 1877, and 1879, and ex-Speaker Keifer in 1883. "It is hard to believe
that House partisans would place a man in the speakership when in the majority, and nominate him for this office
when in the minority, and not look to him for legislative guidance." This was not the case, according to some observers,
with respect to ex-Speaker Keifer. In brief, there is disagreement among historical analysts as to the exact time
period when the minority leadership emerged officially as a party position. Nonetheless, it seems safe to conclude
that the position emerged during the latter part of the 19th century, a period of strong party organization and professional
politicians. This era was "marked by strong partisan attachments, resilient patronage-based party organizations,
and...high levels of party voting in Congress." Plainly, these were conditions conducive to the establishment of
a more highly differentiated House leadership structure. Historically, Democrats have always elevated their minority floor
leader to the speakership upon reclaiming majority status. Republicans have not always followed this leadership succession
pattern. In 1919, for instance, Republicans bypassed James R. Mann, R-IL, who had been minority leader for eight
years, and elected Frederick Gillett, R-MA, to be Speaker. Mann "had angered many Republicans by objecting to their
private bills on the floor;" also he was a protégé of autocratic Speaker Joseph Cannon, R-IL (1903–1911), and many
Members "suspected that he would try to re-centralize power in his hands if elected Speaker." More recently, although
Robert H. Michel was the Minority Leader in 1994 when the Republicans regained control of the House in the 1994 midterm
elections, he had already announced his retirement and had little or no involvement in the campaign, including the
Contract with America, which was unveiled six weeks before voting day. When the Presidency and both Houses of Congress are controlled by one party, the Speaker normally assumes a lower profile and defers to the President. In that situation the House Minority Leader can play the role of a de facto "leader of the opposition", often more so than the Senate Minority Leader, owing to the more partisan nature of the House and the greater role of its leadership.
Minority Leaders who have played prominent roles in opposing the incumbent President have included Gerald Ford, Richard
Gephardt, Nancy Pelosi, and John Boehner. The style and role of any minority leader is influenced by a variety of
elements, including personality and contextual factors, such as the size and cohesion of the minority party, whether
his or her party controls the White House, the general political climate in the House, and the controversy that is
sometimes associated with the legislative agenda. Despite the variability of these factors, there are a number of
institutional obligations associated with this position. Many of these assignments or roles are spelled out in the
House rule book. Others have devolved upon the position in other ways. To be sure, the minority leader is provided
with extra staff resources—beyond those accorded him or her as a Representative—to assist in carrying out diverse
leadership functions. It is worth emphasizing that there are limits on the institutional role of the minority leader, because
the majority party exercises disproportionate influence over the agenda, partisan ratios on committees, staff resources,
administrative operations, and the day-to-day schedule and management of floor activities. In addition, the minority
leader has a number of other institutional functions. For instance, the minority leader is sometimes statutorily
authorized to appoint individuals to certain federal entities; he or she and the majority leader each name three
Members to serve as Private Calendar objectors; he or she is consulted with respect to reconvening the House per
the usual formulation of conditional concurrent adjournment resolutions; he or she is a traditional member of the
House Office Building Commission; he or she is a member of the United States Capitol Preservation Commission; and
he or she may, after consultation with the Speaker, convene an early organizational party caucus or conference. Informally,
the minority leader maintains ties with majority party leaders to learn about the schedule and other House matters
and forges agreements or understandings with them insofar as feasible. The minority leader has a number of formal
and informal party responsibilities. Formally, the rules of each party specify certain roles and responsibilities
for their leader. For example, under Democratic rules for the 106th Congress, the minority leader may call meetings
of the Democratic Caucus. He or she is a member of the Democratic Congressional Campaign Committee; names the members
of the Democratic Leadership Council; chairs the Policy Committee; and heads the Steering Committee. Examples of
other assignments are making "recommendations to the Speaker on all Democratic Members who shall serve as conferees"
and nominating party members to the Committees on Rules and House Administration. Republican rules identify generally
comparable functions for their top party leader. A party's floor leader, in conjunction with other party leaders,
plays an influential role in the formulation of party policy and programs. He is instrumental in guiding legislation
favored by his party through the House, or in resisting those programs of the other party that are considered undesirable
by his own party. He is instrumental in devising and implementing his party's strategy on the floor with respect
to promoting or opposing legislation. He is kept constantly informed as to the status of legislative business and
as to the sentiment of his party respecting particular legislation under consideration. Such information is derived
in part from the floor leader's contacts with his party's members serving on House committees, and with the members
of the party's whip organization. Provide Campaign Assistance. Minority leaders are typically energetic and aggressive
campaigners for partisan incumbents and challengers. There is hardly any major aspect of campaigning that does not
engage their attention. For example, they assist in recruiting qualified candidates; they establish "leadership PACs"
to raise and distribute funds to House candidates of their party; they try to persuade partisan colleagues not to
retire or run for other offices so as to hold down the number of open seats the party would need to defend; they
coordinate their campaign activities with congressional and national party campaign committees; they encourage outside
groups to back their candidates; they travel around the country to speak on behalf of party candidates; and they
encourage incumbent colleagues to make significant financial contributions to the party's campaign committee. "The
amount of time that [Minority Leader] Gephardt is putting in to help the DCCC [Democratic Congressional Campaign
Committee] is unheard of," noted a Democratic lobbyist. "No DCCC chairman has ever had that kind of support." Devise
Minority Party Strategies. The minority leader, in consultation with other party colleagues, has a range of strategic
options that he or she can employ to advance minority party objectives. The options selected depend on a wide range
of circumstances, such as the visibility or significance of the issue and the degree of cohesion within the majority
party. For instance, a majority party riven by internal dissension, as occurred during the early 1900s when Progressive
and "regular" Republicans were at loggerheads, may provide the minority leader with greater opportunities to achieve
his or her priorities than if the majority party exhibited high degrees of party cohesion. Among the variable strategies
available to the minority party, which can vary from bill to bill and be used in combination or at different stages
of the lawmaking process, are the following: A look at one minority leadership strategy—partisan opposition—may suggest
why it might be employed in specific circumstances. The purposes of obstruction are several, such as frustrating
the majority party's ability to govern or attracting press and media attention to the alleged ineffectiveness of
the majority party. "We know how to delay," remarked Minority Leader Gephardt. Dilatory motions to adjourn, appeals
of the presiding officer's ruling, or numerous requests for roll call votes are standard time-consuming parliamentary
tactics. By stalling action on the majority party's agenda, the minority leader may be able to launch a campaign
against a "do-nothing Congress" and convince enough voters to put his party back in charge of the House. To be sure,
the minority leader recognizes that "going negative" carries risks and may not be a winning strategy if his party
fails to offer policy alternatives that appeal to broad segments of the general public. Promote and Publicize the
Party's Agenda. An important aim of the minority leader is to develop an electorally attractive agenda of ideas and
proposals that unites his or her own House members and that energizes and appeals to core electoral supporters as
well as independents and swing voters. Despite the minority leader's restricted ability to set the House's agenda,
there are still opportunities for him to raise minority priorities. For example, the minority leader may employ,
or threaten to use, discharge petitions to try to bring minority priorities to the floor. If he or she is able to
attract the required 218 signatures on a discharge petition by attracting majority party supporters, he or she can
force minority initiatives to the floor over the opposition of the majority leadership. As a GOP minority leader
once said, the challenges he confronted are to "keep our people together, and to look for votes on the other side."
Minority leaders may engage in numerous activities to publicize their party's priorities and to criticize the opposition's.
For instance, to keep their party colleagues "on message," they ensure that partisan colleagues are sent packets
of suggested press releases or "talking points" for constituent meetings in their districts; they help to organize
"town meetings" in Members' districts around the country to publicize the party's agenda or a specific priority,
such as health care or education; they sponsor party "retreats" to discuss issues and assess the party's public image;
they create "theme teams" to craft party messages that might be raised during the one-minute, morning hour, or special
order period in the House; they conduct surveys of party colleagues to discern their policy preferences; they establish
websites that highlight and distribute party images and issues to users; and they organize task forces or issue teams
to formulate party programs and to develop strategies for communicating these programs to the public. House minority
leaders also hold joint news conferences and consult with their counterparts in the Senate—and with the president
if their party controls the White House. The overall objectives are to develop a coordinated communications strategy,
to share ideas and information, and to present a united front on issues. Minority leaders also make floor speeches
and close debate on major issues before the House; they deliver addresses in diverse forums across the country; and
they write books or articles that highlight minority party goals and achievements. They must also be prepared "to
debate on the floor, ad lib, no notes, on a moment's notice," remarked Minority Leader Michel. In brief, minority
leaders are key strategists in developing and promoting the party's agenda and in outlining ways to neutralize the
opposition's arguments and proposals. Confer With the White House. If his or her party controls the White House,
the minority leader confers regularly with the President and his aides about issues before Congress, the Administration's
agenda, and political events generally. Strategically, the role of the minority leader will vary depending on whether
the President is of the same party or the other party. In general, minority leaders will often work to advance the
goals and aspirations of their party's President in Congress. When Robert Michel, R-IL, was minority leader (1981–1995),
he typically functioned as the "point man" for Republican presidents. President Ronald Reagan's 1981 policy successes
in the Democratic-controlled House were due in no small measure to Minority Leader Michel's effectiveness in wooing
so-called "Reagan Democrats" to support, for instance, the Administration's landmark budget reconciliation bill.
There are occasions, of course, when minority leaders will fault the legislative initiatives of their President.
On an administration proposal that could adversely affect his district, Michel stated that he might "abdicate my
leadership role [on this issue] since I can't harmonize my own views with the administration's." Minority Leader
Gephardt, as another example, has publicly opposed a number of President Clinton's legislative initiatives from "fast
track" trade authority to various budget issues. When the White House is controlled by the House majority party,
then the House minority leader assumes a larger role in formulating alternatives to executive branch initiatives
and in acting as a national spokesperson for his or her party. "As Minority Leader during [President Lyndon Johnson's]
Democratic administration, my responsibility has been to propose Republican alternatives," said Minority Leader Gerald
Ford, R-MI. Greatly outnumbered in the House, Minority Leader Ford devised a political strategy that allowed Republicans
to offer their alternatives in a manner that provided them political protection. As Ford explained: "We used a technique of laying our program out in general debate. When we got to the amendment phase, we would offer our program as a substitute for the Johnson proposal. If we lost in the Committee of the Whole, then we would usually offer it as a motion to recommit and get a vote on that. And if we lost on the motion to recommit, our Republican members had a choice: They could vote against the Johnson program and say we did our best to come up with a better alternative. Or they could vote for it and make the same argument. Usually we lost; but when you're only 140 out of 435, you don't expect to win many." Gephardt added that "inclusion and empowerment of the people on the line have to be done to get
the best performance" from the minority party. Other techniques for fostering party harmony include the appointment
of task forces composed of partisan colleagues with conflicting views to reach consensus on issues; the creation
of new leadership positions as a way to reach out and involve a greater diversity of partisans in the leadership
structure; and daily meetings in the Leader's office (or at breakfast, lunch, or dinner) to lay out floor strategy
or political objectives for the minority party. The Chief Deputy Whip is the primary assistant to the whip, who is
the chief vote counter for his or her party. The current chief deputy majority whip is Republican Patrick McHenry.
Within the House Republican Conference, the chief deputy whip is the highest appointed position and often a launching
pad for future positions in the House Leadership. The House Democratic Conference has multiple chief deputy whips,
led by a Senior Chief Deputy Whip, which is the highest appointed position within the House Democratic Caucus. The
current senior chief deputy minority whip, John Lewis, has held his post since 1991.
Armenians constitute the main population of Armenia and the de facto independent Nagorno-Karabakh Republic. There is a wide-ranging
diaspora of around 5 million people of full or partial Armenian ancestry living outside of modern Armenia. The largest
Armenian populations today exist in Russia, the United States, France, Georgia, Iran, Ukraine, Lebanon, and Syria.
With the exceptions of Iran and the former Soviet states, the present-day Armenian diaspora was formed mainly as
a result of the Armenian Genocide. Historically, the name Armenian has come to internationally designate this group
of people. It was first used by neighbouring countries of ancient Armenia. The earliest attestations of the exonym
Armenia date around the 6th century BC. In his trilingual Behistun Inscription dated to 517 BC, Darius I the Great
of Persia refers to Urashtu (in Babylonian) as Armina (in Old Persian) and Harminuya (in Elamite). In
Greek, Αρμένιοι "Armenians" is attested from about the same time, perhaps the earliest reference being a fragment
attributed to Hecataeus of Miletus (476 BC). Xenophon, a Greek general serving in some of the Persian expeditions,
describes many aspects of Armenian village life and hospitality in around 401 BC. He relates that the people spoke
a language that to his ear sounded like the language of the Persians. The Armenian Highland lies in the highlands
surrounding Mount Ararat, the highest peak of the region. In the Bronze Age, several states flourished in the area
of Greater Armenia, including the Hittite Empire (at the height of its power), Mitanni (South-Western historical
Armenia), and Hayasa-Azzi (1600–1200 BC). Soon after Hayasa-Azzi came Arme-Shupria (1300s–1190 BC), the Nairi (1400–1000
BC) and the Kingdom of Urartu (860–590 BC), who successively established their sovereignty over the Armenian Highland.
Each of the aforementioned nations and tribes participated in the ethnogenesis of the Armenian people. Under Ashurbanipal
(669–627 BC), the Assyrian empire reached the Caucasus Mountains (modern Armenia, Georgia and Azerbaijan). Eric P.
Hamp in his 2012 Indo-European family tree, groups the Armenian language along with Greek and Ancient Macedonian
("Helleno-Macedonian") in the Pontic Indo-European (also called Helleno-Armenian) subgroup. In Hamp's view the homeland
of this subgroup is the northeast coast of the Black Sea and its hinterlands. He assumes that they migrated from
there southeast through the Caucasus with the Armenians remaining after Batumi while the pre-Greeks proceeded westwards
along the southern coast of the Black Sea. The first geographical entity that was called Armenia by neighboring peoples
(such as by Hecataeus of Miletus and on the Achaemenid Behistun Inscription) was established in the late 6th century
BC under the Orontid dynasty within the Achaemenid Persian Empire as part of the latter's territories, and which
later became a kingdom. At its zenith (95–65 BC), the state extended from the Caucasus all the way to what is now
central Turkey, Lebanon, and northern Iran. The imperial reign of Tigranes the Great is thus the span of time during
which Armenia itself conquered areas populated by other peoples. The Arsacid Kingdom of Armenia, itself a branch
of the Arsacid dynasty of Parthia, was the first state to adopt Christianity as its religion, in the early years of the 4th century (likely AD 301), partly, it seems, in defiance of the Sassanids. It had formerly been adherent to Armenian paganism, which was influenced by Zoroastrianism, while later on adopting a few elements identifying its pantheon with Greco-Roman deities. In the late Parthian period, Armenia was a predominantly Zoroastrian-adhering land, but with the Christianisation, the previously predominant Zoroastrianism and paganism gradually declined.
Later on, in order to further strengthen Armenian national identity, Mesrop Mashtots invented the Armenian alphabet,
in 405 AD. This event ushered in the Golden Age of Armenia, during which many foreign books and manuscripts were translated into Armenian by Mesrop's pupils. Armenia lost its sovereignty again in 428 AD to the rivalling Byzantine and Sassanid Persian empires, which ruled it until the Muslim conquest of Persia also overran the regions in which Armenians lived. In 885 AD
the Armenians reestablished themselves as a sovereign kingdom under the leadership of Ashot I of the Bagratid Dynasty.
A considerable portion of the Armenian nobility and peasantry fled the Byzantine occupation of Bagratid Armenia in
1045, and the subsequent invasion of the region by Seljuk Turks in 1064. They settled in large numbers in Cilicia,
an Anatolian region where Armenians were already established as a minority since Roman times. In 1080, they founded
an independent Armenian Principality then Kingdom of Cilicia, which became the focus of Armenian nationalism. The
Armenians developed close social, cultural, military, and religious ties with nearby Crusader States, but eventually
succumbed to Mamluk invasions. In the next few centuries, Genghis Khan, the Timurids, and the tribal Turkic federations
of the Ak Koyunlu and the Kara Koyunlu ruled over the Armenians. From the early 16th century, both Western Armenia
and Eastern Armenia fell under Iranian Safavid rule. Owing to the centuries-long Turco-Iranian geopolitical rivalry in Western Asia, significant parts of the region were frequently fought over between the two rival empires. From the mid-16th century with the Peace of Amasya, and decisively from the first half of the 17th century
with the Treaty of Zuhab until the first half of the 19th century, Eastern Armenia was ruled by the successive Iranian
Safavid, Afsharid and Qajar empires, while Western Armenia remained under Ottoman rule. In the late 1820s, the parts
of historic Armenia under Iranian control centering on Yerevan and Lake Sevan (all of Eastern Armenia) were incorporated
into the Russian Empire following Iran's forced ceding of the territories after its loss in the Russo-Persian War
(1826–1828) and the resulting Treaty of Turkmenchay. Western Armenia, however, remained in Ottoman hands. Governments of the Republic of Turkey since that time have consistently rejected charges of genocide, typically arguing either that
those Armenians who died were simply in the way of a war or that killings of Armenians were justified by their individual
or collective support for the enemies of the Ottoman Empire. Passage of legislation in various foreign countries
condemning the persecution of the Armenians as genocide has often provoked diplomatic conflict. (See Recognition
of the Armenian Genocide) Following the breakup of the Russian Empire in the aftermath of World War I for a brief
period, from 1918 to 1920, Armenia was an independent republic. In late 1920, the communists came to power following
an invasion of Armenia by the Red Army, and in 1922, Armenia became part of the Transcaucasian SFSR of the Soviet
Union, later forming the Armenian Soviet Socialist Republic (1936 to September 21, 1991). In 1991, Armenia declared
independence from the USSR and established the second Republic of Armenia. Armenians have had a presence in the Armenian
Highland for over four thousand years, since the time when Hayk, the legendary patriarch and founder of the first
Armenian nation, led them to victory over Bel of Babylon. Today, with a population of 3.5 million, they not only
constitute an overwhelming majority in Armenia, but also in the disputed region of Nagorno-Karabakh. Armenians in
the diaspora informally refer to them as Hayastantsis (Հայաստանցի), meaning those who are from Armenia (that is,
those born and raised in Armenia). They, as well as the Armenians of Iran and Russia speak the Eastern dialect of
the Armenian language. The country itself is secular as a result of Soviet domination, but most of its citizens identify
themselves as Apostolic Armenian Christian. Small Armenian trading and religious communities have existed outside
of Armenia for centuries. For example, a community has existed for over a millennium in the Holy Land, and one of
the four quarters of the walled Old City of Jerusalem has been called the Armenian Quarter. An Armenian Catholic
monastic community of 35 founded in 1717 exists on an island near Venice, Italy. There are also remnants of formerly
populous communities in India, Myanmar, Thailand, Belgium, Portugal, Italy, Poland, Austria, Hungary, Bulgaria, Romania,
Serbia, Ethiopia, Sudan and Egypt. Within the diasporan Armenian community, there is an unofficial
classification of the different kinds of Armenians. For example, Armenians who originate from Iran are referred to
as Parskahay (Պարսկահայ), while Armenians from Lebanon are usually referred to as Lipananahay (Լիբանանահայ). Armenians
of the Diaspora are the primary speakers of the Western dialect of the Armenian language. This dialect differs considerably from Eastern Armenian, but speakers of either of the two variants can usually understand each other.
Eastern Armenian in the diaspora is primarily spoken in Iran and European countries such as Ukraine, Russia, and
Georgia (where Armenians form a majority in the Samtskhe-Javakheti province). In diverse communities (such as in Canada
and the U.S.) where many different kinds of Armenians live together, there is a tendency for the different groups
to cluster together. Armenia established a Church that still exists independently of both the Catholic and the Eastern
Orthodox churches, having become independent in 451 AD as a result of its stance regarding the Council of Chalcedon. Today
this church is known as the Armenian Apostolic Church, which is a part of the Oriental Orthodox communion, not to
be confused with the Eastern Orthodox communion. During its later political eclipses, Armenia depended on the church
to preserve and protect its unique identity. The original location of the Armenian Catholicosate is Echmiadzin. However,
the continuous upheavals, which characterized the political scenes of Armenia, made the political power move to safer
places. The Church center moved as well to different locations together with the political authority. Therefore,
it eventually moved to Cilicia as the Holy See of Cilicia. The Armenians collective has, at times, constituted a
Christian "island" in a mostly Muslim region. There is, however, a minority of ethnic Armenian Muslims, known as
Hamshenis but many Armenians view them as a separate race, while the history of the Jews in Armenia dates back 2,000
years. The Armenian Kingdom of Cilicia had close ties to European Crusader States. Later on, the deteriorating situation
in the region led the bishops of Armenia to elect a Catholicos in Etchmiadzin, the original seat of the Catholicosate.
In 1441, a new Catholicos was elected in Etchmiadzin in the person of Kirakos Virapetsi, while Krikor Moussapegiants
preserved his title as Catholicos of Cilicia. Therefore, since 1441, there have been two Catholicosates in the Armenian
Church with equal rights and privileges, and with their respective jurisdictions. The primacy of honor of the Catholicosate
of Etchmiadzin has always been recognized by the Catholicosate of Cilicia. While the Armenian Apostolic Church remains
the most prominent church in the Armenian community throughout the world, Armenians (especially in the diaspora)
subscribe to any number of other Christian denominations. These include the Armenian Catholic Church (which follows
its own liturgy but recognizes the Roman Catholic Pope), the Armenian Evangelical Church, which started as a reformation
in the Mother church but later broke away, and the Armenian Brotherhood Church, which was born in the Armenian Evangelical
Church, but later broke apart from it. There are other numerous Armenian churches belonging to Protestant denominations
of all kinds. Armenian literature dates back to 400 AD, when Mesrop Mashtots first invented the Armenian alphabet.
This period of time is often viewed as the Golden Age of Armenian literature. Early Armenian literature was written
by the "father of Armenian history", Moses of Chorene, who authored The History of Armenia. The book covers the time-frame
from the formation of the Armenian people to the fifth century AD. The nineteenth century beheld a great literary
movement that was to give rise to modern Armenian literature. This period of time, during which Armenian culture
flourished, is known as the Revival period (Zartonki sherchan). The Revivalist authors of Constantinople and Tiflis,
almost identical to the Romanticists of Europe, were interested in encouraging Armenian nationalism. Most of them
adopted the newly created Eastern or Western variants of the Armenian language depending on the targeted audience,
and preferred them over classical Armenian (grabar). This period ended after the Hamidian massacres, when Armenians
experienced turbulent times. As Armenian history of the 1920s and of the Genocide came to be more openly discussed,
writers like Paruyr Sevak, Gevork Emin, Silva Kaputikyan and Hovhannes Shiraz began a new era of literature. The
first Armenian churches were built between the 4th and 7th century, beginning when Armenia converted to Christianity,
and ending with the Arab invasion of Armenia. The early churches were mostly simple basilicas, though some had side
apses. By the fifth century the typical cupola cone in the center had become widely used. By the seventh century,
centrally planned churches had been built and a more complicated niched buttress and radiating Hrip'simé style had
formed. By the time of the Arab invasion, most of what we now know as classical Armenian architecture had formed.
From the 9th to 11th century, Armenian architecture underwent a revival under the patronage of the Bagratid Dynasty
with a great deal of building done in the area of Lake Van; this included both traditional styles and new innovations.
Ornately carved Armenian Khachkars were developed during this time. Many new cities and churches were built during
this time, including a new capital at Lake Van and a new Cathedral on Akdamar Island to match. The Cathedral of Ani
was also completed during this dynasty. It was during this time that the first major monasteries, such as Haghpat
and Haritchavank were built. This period was ended by the Seljuk invasion. During Soviet rule, Armenian athletes
rose to prominence, winning many medals and helping the USSR top the medal standings at the Olympics on numerous
occasions. The first medal won by an Armenian in modern Olympic history was by Hrant Shahinyan, who won two golds
and two silvers in gymnastics at the 1952 Summer Olympics in Helsinki. In football, their most successful team was
Yerevan's FC Ararat, which claimed most of the Soviet championships in the 70s and went on to post victories
against professional clubs like FC Bayern Munich in the European Cup. Instruments like the duduk, the dhol, the zurna
and the kanun are commonly found in Armenian folk music. Artists such as Sayat Nova are famous due to their influence
in the development of Armenian folk music. One of the oldest types of Armenian music is the Armenian chant which
is the most common kind of religious music in Armenia. Many of these chants are ancient in origin, extending to pre-Christian
times, while others are relatively modern, including several composed by Saint Mesrop Mashtots, the inventor of the
Armenian alphabet. Whilst under Soviet rule, the Armenian classical composer Aram Khachaturian became internationally
known for his music, including his ballets and the Sabre Dance from his ballet Gayane. The
Armenian Genocide caused widespread emigration that led to the settlement of Armenians in various countries in the
world. Armenians kept to their traditions and certain diasporans rose to fame with their music. In the post-Genocide
Armenian community of the United States, the so-called "kef" style Armenian dance music, using Armenian and Middle
Eastern folk instruments (often electrified/amplified) and some western instruments, was popular. This style preserved
the folk songs and dances of Western Armenia, and many artists also played the contemporary popular songs of Turkey
and other Middle Eastern countries from which the Armenians emigrated. Richard Hagopian is perhaps the most famous
artist of the traditional "kef" style and the Vosbikian Band was notable in the 40s and 50s for developing their
own style of "kef music" heavily influenced by the popular American Big Band Jazz of the time. Later, stemming from
the Middle Eastern Armenian diaspora and influenced by Continental European (especially French) pop music, the Armenian
pop music genre grew to fame in the 60s and 70s with artists such as Adiss Harmandian and Harout Pamboukjian performing
to the Armenian diaspora and Armenia. Today, artists such as Sirusho perform pop music combined with Armenian
folk music. Other Armenian diasporans who rose to fame in classical or international
music circles are world-renowned French-Armenian singer and composer Charles Aznavour, pianist Sahan Arzruni, prominent
opera sopranos such as Hasmik Papian and more recently Isabel Bayrakdarian and Anna Kasyan. Some Armenian artists
have made their names performing non-Armenian music, such as the heavy metal band System of a Down (which nonetheless
often incorporates traditional Armenian instrumentals and styling into its songs) or pop star Cher. Ruben Hakobyan
(Ruben Sasuntsi) is a well-recognized Armenian ethnographic and patriotic folk singer who has achieved widespread
national recognition due to
his devotion to Armenian folk music and exceptional talent. In the Armenian diaspora, Armenian revolutionary songs
are popular with the youth. These songs encourage Armenian patriotism and are generally about Armenian
history and national heroes. Carpet-weaving has historically been a major traditional occupation for Armenian
women, and many Armenian families were involved in the craft; in Karabakh, prominent carpet weavers included men as well. The oldest extant
Armenian carpet from the region, referred to as Artsakh (see also Karabakh carpet) during the medieval era, is from
the village of Banants (near Gandzak) and dates to the early 13th century. The first time that the Armenian word
for carpet, gorg, was used in historical sources was in a 1242–1243 Armenian inscription on the wall of the Kaptavan
Church in Artsakh. Art historian Hravard Hakobyan notes that "Artsakh carpets occupy a special place in the history
of Armenian carpet-making." Common themes and patterns found on Armenian carpets were the depiction of dragons and
eagles. They were diverse in style, rich in color and ornamental motifs, and were even separated into categories depending
on what sort of animals were depicted on them, such as artsvagorgs (eagle-carpets), vishapagorgs (dragon-carpets)
and otsagorgs (serpent-carpets). The rug mentioned in the Kaptavan inscriptions is composed of three arches, "covered
with vegetative ornaments", and bears an artistic resemblance to the illuminated manuscripts produced in Artsakh.
Armenians enjoy many different native and foreign foods. Arguably the favorite food is khorovats, an Armenian-style
barbecue. Lavash is a very popular Armenian flat bread, and Armenian paklava is a popular dessert made from filo
dough. Other famous Armenian foods include the kabob (a skewer of marinated roasted meat and vegetables), various
dolmas (minced lamb, or beef meat and rice wrapped in grape leaves, cabbage leaves, or stuffed into hollowed vegetables),
and pilaf, a rice dish. Also, ghapama, a rice-stuffed pumpkin dish, and many different salads are popular in Armenian
culture. Fruits play a large part in the Armenian diet. Apricots (Prunus armeniaca, also known as Armenian Plum)
have been grown in Armenia for centuries and have a reputation for having an especially good flavor. Peaches are
popular as well, as are grapes, figs, pomegranates, and melons. Preserves are made from many fruits, including cornelian
cherries, young walnuts, sea buckthorn, mulberries, sour cherries, and many others.
Jehovah's Witnesses is a millenarian restorationist Christian denomination with nontrinitarian beliefs distinct from mainstream
Christianity. The group claims a worldwide membership of more than 8.2 million adherents involved in evangelism,
convention attendance figures of more than 15 million, and an annual Memorial attendance of more than 19.9 million.
Jehovah's Witnesses are directed by the Governing Body of Jehovah's Witnesses, a group of elders in Brooklyn, New
York, which establishes all doctrines based on its interpretations of the Bible. They prefer to use their own translation,
the New World Translation of the Holy Scriptures, although their literature occasionally quotes and cites other translations.
They believe that the destruction of the present world system at Armageddon is imminent, and that the establishment
of God's kingdom over the earth is the only solution for all problems faced by humanity. Jehovah's Witnesses are
best known for their door-to-door preaching, distributing literature such as The Watchtower and Awake!, and refusing
military service and blood transfusions. They consider use of the name Jehovah vital for proper worship. They reject
Trinitarianism, inherent immortality of the soul, and hellfire, which they consider to be unscriptural doctrines.
They do not observe Christmas, Easter, birthdays or other holidays and customs they consider to have pagan origins
incompatible with Christianity. Adherents commonly refer to their body of beliefs as "the truth" and consider themselves
to be "in the truth". They consider secular society to be morally corrupt and under the influence of Satan, and most
limit their social interaction with non-Witnesses. Congregational disciplinary actions include disfellowshipping,
their term for formal expulsion and shunning. Baptized individuals who formally leave are considered disassociated
and are also shunned. Disfellowshipped and disassociated individuals may eventually be reinstated if deemed repentant.
In 1870, Charles Taze Russell and others formed a group in Pittsburgh, Pennsylvania, to study the Bible. During the
course of his ministry, Russell disputed many beliefs of mainstream Christianity including immortality of the soul,
hellfire, predestination, the fleshly return of Jesus Christ, the Trinity, and the burning up of the world. In 1876,
Russell met Nelson H. Barbour; later that year they jointly produced the book Three Worlds, which combined restitutionist
views with end time prophecy. The book taught that God's dealings with humanity were divided dispensationally, each
ending with a "harvest," that Christ had returned as an invisible spirit being in 1874, inaugurating the "harvest
of the Gospel age," and that 1914 would mark the end of a 2520-year period called "the Gentile Times," at which time
world society would be replaced by the full establishment of God's kingdom on earth. Beginning in 1878 Russell and
Barbour jointly edited a religious journal, Herald of the Morning. In June 1879 the two split over doctrinal differences,
and in July, Russell began publishing the magazine Zion's Watch Tower and Herald of Christ's Presence, stating that
its purpose was to demonstrate that the world was in "the last days," and that a new age of earthly and human restitution
under the reign of Christ was imminent. From 1879, Watch Tower supporters gathered as autonomous congregations to
study the Bible topically. Thirty congregations were founded, and during 1879 and 1880, Russell visited each to provide
the format he recommended for conducting meetings. As congregations continued to form during Russell's ministry,
they each remained self-administrative, functioning under the congregationalist style of church governance. In 1881,
Zion's Watch Tower Tract Society was presided over by William Henry Conley, and in 1884, Charles Taze Russell incorporated
the society as a non-profit business to distribute tracts and Bibles. By about 1900, Russell had organized thousands
of part- and full-time colporteurs, and was appointing foreign missionaries and establishing branch offices. By the
1910s, Russell's organization maintained nearly a hundred "pilgrims," or traveling preachers. Russell engaged in
significant global publishing efforts during his ministry, and by 1912, he was the most distributed Christian author
in the United States. Russell moved the Watch Tower Society's headquarters to Brooklyn, New York, in 1909, combining
printing and corporate offices with a house of worship; volunteers were housed in a nearby residence he named Bethel.
He identified the religious movement as "Bible Students," and more formally as the International Bible Students Association.
By 1910, about 50,000 people worldwide were associated with the movement and congregations re-elected him annually
as their "pastor." Russell died October 31, 1916, at the age of 64 while returning from a ministerial speaking tour.
In January 1917, the Watch Tower Society's legal representative, Joseph Franklin Rutherford, was elected as its next
president. His election was disputed, and members of the Board of Directors accused him of acting in an autocratic
and secretive manner. The divisions between his supporters and opponents triggered a major turnover of members over
the next decade. In June 1917, he released The Finished Mystery as a seventh volume of Russell's Studies in the Scriptures
series. The book, published as the posthumous work of Russell, was a compilation of his commentaries on the Bible
books of Ezekiel and Revelation, plus numerous additions by Bible Students Clayton Woodworth and George Fisher. It
strongly criticized Catholic and Protestant clergy and Christian involvement in the Great War. As a result, Watch
Tower Society directors were jailed for sedition under the Espionage Act in 1918 and members were subjected to mob
violence; charges against the directors were dropped in 1920. Rutherford centralized organizational control of the
Watch Tower Society. In 1919, he instituted the appointment of a director in each congregation, and a year later
all members were instructed to report their weekly preaching activity to the Brooklyn headquarters. At an international
convention held at Cedar Point, Ohio, in September 1922, a new emphasis was made on house-to-house preaching. Significant
changes in doctrine and administration were regularly introduced during Rutherford's twenty-five years as president,
including the 1920 announcement that the Jewish patriarchs (such as Abraham and Isaac) would be resurrected in 1925,
marking the beginning of Christ's thousand-year Kingdom. Disappointed by the changes, tens of thousands of members
defected during the first half of Rutherford's tenure, leading to the formation of several Bible Student organizations
independent of the Watch Tower Society, most of which still exist. By mid-1919, as many as one in seven of Russell-era
Bible Students had ceased their association with the Society, and as many as two-thirds by the end of the 1920s.
On July 26, 1931, at a convention in Columbus, Ohio, Rutherford introduced the new name—Jehovah's witnesses—based
on Isaiah 43:10: "Ye are my witnesses, saith Jehovah, and my servant whom I have chosen"—which was adopted by resolution.
The name was chosen to distinguish his group of Bible Students from other independent groups that had severed ties
with the Society, as well as symbolize the instigation of new outlooks and the promotion of fresh evangelizing methods.
In 1932, Rutherford eliminated the system of locally elected elders and in 1938, introduced what he called a "theocratic"
(literally, God-ruled) organizational system, under which appointments in congregations worldwide were made from
the Brooklyn headquarters. From 1932, it was taught that the "little flock" of 144,000 would not be the only people
to survive Armageddon. Rutherford explained that in addition to the 144,000 "anointed" who would be resurrected—or
transferred at death—to live in heaven to rule over earth with Christ, a separate class of members, the "great multitude,"
would live in a paradise restored on earth; from 1935, new converts to the movement were considered part of that
class. By the mid-1930s, the timing of the beginning of Christ's presence (Greek: parousía), his enthronement as
king, and the start of the "last days" were each moved to 1914. Nathan Knorr was appointed as third president of
the Watch Tower Bible and Tract Society in 1942. Knorr commissioned a new translation of the Bible, the New World
Translation of the Holy Scriptures, the full version of which was released in 1961. He organized large international
assemblies, instituted new training programs for members, and expanded missionary activity and branch offices throughout
the world. Knorr's presidency was also marked by an increasing use of explicit instructions guiding Witnesses in
their lifestyle and conduct, and a greater use of congregational judicial procedures to enforce a strict moral code.
From 1966, Witness publications and convention talks built anticipation of the possibility that Christ's thousand-year
reign might begin in late 1975 or shortly thereafter. The number of baptisms increased significantly, from about
59,000 in 1966 to more than 297,000 in 1974. By 1975, the number of active members exceeded two million. Membership
declined during the late 1970s after expectations for 1975 were proved wrong. Watch Tower Society literature did
not state dogmatically that 1975 would definitely mark the end, but in 1980 the Watch Tower Society admitted its
responsibility in building up hope regarding that year. The offices of elder and ministerial servant were restored
to Witness congregations in 1972, with appointments made from headquarters (and later, also by branch committees).
It was announced that, starting in September 2014, appointments would be made by traveling overseers. In a major
organizational overhaul in 1976, the power of the Watch Tower Society president was diminished, with authority for
doctrinal and organizational decisions passed to the Governing Body. Since Knorr's death in 1977, the position of
president has been occupied by Frederick Franz (1977–1992) and Milton Henschel (1992–2000), both members of the Governing
Body, and since 2000 by Don A. Adams, not a member of the Governing Body. In 1995, Jehovah's Witnesses abandoned
the idea that Armageddon must occur during the lives of the generation that was alive in 1914 and in 2013 changed
their teaching on the "generation". Jehovah's Witnesses are organized hierarchically, in what the leadership calls
a "theocratic organization", reflecting their belief that it is God's "visible organization" on earth. The organization
is led by the Governing Body—an all-male group that varies in size, but since early 2014 has comprised seven members,
all of whom profess to be of the "anointed" class with a hope of heavenly life—based in the Watch Tower Society's
Brooklyn headquarters. There is no election for membership; new members are selected by the existing body. Until
late 2012, the Governing Body described itself as the representative and "spokesman" for God's "faithful and discreet
slave class" (approximately 10,000 self-professed "anointed" Jehovah's Witnesses). At the 2012 Annual Meeting of
the Watch Tower Society, the "faithful and discreet slave" was defined as referring to the Governing Body only. The
Governing Body directs several committees that are responsible for administrative functions, including publishing,
assembly programs and evangelizing activities. It appoints all branch committee members and traveling overseers,
after they have been recommended by local branches, with traveling overseers supervising circuits of congregations
within their jurisdictions. Traveling overseers appoint local elders and ministerial servants, while branch offices
may appoint regional committees for matters such as Kingdom Hall construction or disaster relief. Each congregation
has a body of appointed unpaid male elders and ministerial servants. Elders maintain general responsibility for congregational
governance, setting meeting times, selecting speakers and conducting meetings, directing the public preaching work,
and creating "judicial committees" to investigate and decide disciplinary action for cases involving sexual misconduct
or doctrinal breaches. New elders are appointed by a traveling overseer after recommendation by the existing body
of elders. Ministerial servants—appointed in a similar manner to elders—fulfill clerical and attendant duties, but
may also teach and conduct meetings. Witnesses do not use elder as a title to signify a formal clergy-laity division,
though elders may employ ecclesiastical privilege such as confession of sins. Baptism is a requirement for being
considered a member of Jehovah's Witnesses. Jehovah's Witnesses do not practice infant baptism, and previous baptisms
performed by other denominations are not considered valid. Individuals undergoing baptism must affirm publicly that
dedication and baptism identify them "as one of Jehovah's Witnesses in association with God's spirit-directed organization,"
though Witness publications say baptism symbolizes personal dedication to God and not "to a man, work or organization."
Their literature emphasizes the need for members to be obedient and loyal to Jehovah and to "his organization,"
stating that individuals must remain part of it to receive God's favor and to survive Armageddon. Jehovah's Witnesses
believe their religion is a restoration of first-century Christianity. Doctrines of Jehovah's Witnesses are established
by the Governing Body, which assumes responsibility for interpreting and applying scripture. The Governing Body does
not issue any single, comprehensive "statement of faith", but prefers to express its doctrinal position in a variety
of ways through publications published by the Watch Tower Society. Their publications teach that doctrinal changes
and refinements result from a process of progressive revelation, in which God gradually reveals his will and purpose,
and that such enlightenment or "new light" results from the application of reason and study, the guidance of the
holy spirit, and direction from Jesus Christ and angels. The Society also teaches that members of the Governing Body
are helped by the holy spirit to discern "deep truths", which are then considered by the entire Governing Body before
it makes doctrinal decisions. The religion's leadership, while disclaiming divine inspiration and infallibility,
is said to provide "divine guidance" through its teachings described as "based on God's Word thus ... not from men,
but from Jehovah." The entire Protestant canon of scripture is considered the inspired, inerrant word of God. Jehovah's
Witnesses consider the Bible to be scientifically and historically accurate and reliable and interpret much of it
literally, but accept parts of it as symbolic. They consider the Bible to be the final authority for all their beliefs,
although sociologist Andrew Holden's ethnographic study of the religion concluded that pronouncements of the Governing
Body, through Watch Tower Society publications, carry almost as much weight as the Bible. Regular personal Bible
reading is frequently recommended; Witnesses are discouraged from formulating doctrines and "private ideas" reached
through Bible research independent of Watch Tower Society publications, and are cautioned against reading other religious
literature. Adherents are told to have "complete confidence" in the leadership, avoid skepticism about what is taught
in the Watch Tower Society's literature, and "not advocate or insist on personal opinions or harbor private ideas
when it comes to Bible understanding." The religion makes no provision for members to criticize or contribute to
official teachings and all Witnesses must abide by its doctrines and organizational requirements. Jehovah's Witnesses
believe that Jesus is God's only direct creation, that everything else was created by means of Christ, and that the
initial unassisted act of creation uniquely identifies Jesus as God's "only-begotten Son". Jesus served as a redeemer
and a ransom sacrifice to pay for the sins of humanity. They believe Jesus died on a single upright post rather than
the traditional cross. They believe that references in the Bible to the Archangel Michael, Abaddon (Apollyon), and
the Word all refer to Jesus. Jesus is considered to be the only intercessor and high priest between God and humanity,
and appointed by God as the king and judge of his kingdom. His role as a mediator (referred to in 1 Timothy 2:5)
is applied to the 'anointed' class, though the 'other sheep' are said to also benefit from the arrangement. Witnesses
believe that a "little flock" go to heaven, but that the hope for life after death for the majority of "other sheep"
involves being resurrected by God to a cleansed earth after Armageddon. They interpret Revelation 14:1–5 to mean
that the number of Christians going to heaven is limited to exactly 144,000, who will rule with Jesus as kings and
priests over earth. Jehovah's Witnesses teach that only they meet scriptural requirements for surviving Armageddon,
but that God is the final judge. During Christ's millennial reign, most people who died prior to Armageddon will
be resurrected with the prospect of living forever; they will be taught the proper way to worship God to prepare
them for their final test at the end of the millennium. Jehovah's Witnesses believe that God's kingdom is a literal
government in heaven, ruled by Jesus Christ and 144,000 "spirit-anointed" Christians drawn from the earth, which
they associate with Jesus' reference to a "new covenant". The kingdom is viewed as the means by which God will accomplish
his original purpose for the earth, transforming it into a paradise without sickness or death. It is said to have
been the focal point of Jesus' ministry on earth. They believe the kingdom was established in heaven in 1914, and
that Jehovah's Witnesses serve as representatives of the kingdom on earth. A central teaching of Jehovah's Witnesses
is that the current world era, or "system of things", entered the "last days" in 1914 and faces imminent destruction
through intervention by God and Jesus Christ, leading to deliverance for those who worship God acceptably. They consider
all other present-day religions to be false, identifying them with "Babylon the Great", or the "harlot", of Revelation
17, and believe that they will soon be destroyed by the United Nations, which they believe is represented in scripture
by the scarlet-colored wild beast of Revelation chapter 17. This development will mark the beginning of the "great
tribulation". Satan will subsequently attack Jehovah's Witnesses, an action that will prompt God to begin the war
of Armageddon, during which all forms of government and all people not counted as Christ's "sheep", or true followers,
will be destroyed. After Armageddon, God will extend his heavenly kingdom to include earth, which will be transformed
into a paradise similar to the Garden of Eden. After Armageddon, most of those who had died before God's intervention
will gradually be resurrected during "judgment day" lasting for one thousand years. This judgment will be based on
their actions after resurrection rather than past deeds. At the end of the thousand years, Christ will hand all authority
back to God. Then a final test will take place when Satan is released to mislead perfect mankind. Those who fail
will be destroyed, along with Satan and his demons. The end result will be a fully tested, glorified human race.
Jehovah's Witnesses believe that Jesus Christ began to rule in heaven as king of God's kingdom in October 1914, and
that Satan was subsequently ousted from heaven to the earth, resulting in "woe" to humanity. They believe that Jesus
rules invisibly, from heaven, perceived only as a series of "signs". They base this belief on a rendering of the
Greek word parousia—usually translated as "coming" when referring to Christ—as "presence". They believe Jesus' presence
includes an unknown period beginning with his inauguration as king in heaven in 1914, and ending when he comes to
bring a final judgment against humans on earth. They thus depart from the mainstream Christian belief that the "second
coming" of Matthew 24 refers to a single moment of arrival on earth to judge humans. Meetings for worship and study
are held at Kingdom Halls, which are typically functional in character, and do not contain religious symbols. Witnesses
are assigned to a congregation in whose "territory" they usually reside and attend weekly services they refer to
as "meetings" as scheduled by congregation elders. The meetings are largely devoted to study of Watch Tower Society
literature and the Bible. The format of the meetings is established by the religion's headquarters, and the subject
matter for most meetings is the same worldwide. Congregations meet for two sessions each week comprising five distinct
meetings that total about three-and-a-half hours, typically gathering mid-week (three meetings) and on the weekend
(two meetings). Prior to 2009, congregations met three times each week; these meetings were condensed, with the intention
that members dedicate an evening for "family worship". Gatherings are opened and closed with kingdom songs (hymns)
and brief prayers. Twice each year, Witnesses from a number of congregations that form a "circuit" gather for a one-day
assembly. Larger groups of congregations meet once a year for a three-day "regional convention", usually at rented
stadiums or auditoriums. Their most important and solemn event is the commemoration of the "Lord's Evening Meal",
or "Memorial of Christ's Death" on the date of the Jewish Passover. Jehovah's Witnesses are perhaps best known for
their efforts to spread their beliefs, most notably by visiting people from house to house, distributing literature
published by the Watch Tower Society in 700 languages. The objective is to start a regular "Bible study" with any
person who is not already a member, with the intention that the student be baptized as a member of the group; Witnesses
are advised to consider discontinuing Bible studies with students who show no interest in becoming members. Witnesses
are taught they are under a biblical command to engage in public preaching. They are instructed to devote as much
time as possible to their ministry and are required to submit an individual monthly "Field Service Report". Baptized
members who fail to report a month of preaching are termed "irregular" and may be counseled by elders; those who
do not submit reports for six consecutive months are termed "inactive". Divorce is discouraged, and remarriage is
forbidden unless a divorce is obtained on the grounds of adultery, which they refer to as "a scriptural divorce".
If a divorce is obtained for any other reason, remarriage is considered adulterous unless the prior spouse has died
or has since been considered to have committed sexual immorality. Extreme physical abuse, willful non-support of one's
family, and what the religion terms "absolute endangerment of spirituality" are considered grounds for legal separation.
Formal discipline is administered by congregation elders. When a baptized member is accused of committing a serious
sin—usually cases of sexual misconduct or charges of apostasy for disputing Jehovah's Witness doctrines—a judicial
committee is formed to determine guilt, provide help and possibly administer discipline. Disfellowshipping, a form
of shunning, is the strongest form of discipline, administered to an offender deemed unrepentant. Contact with disfellowshipped
individuals is limited to direct family members living in the same home, and with congregation elders who may invite
disfellowshipped persons to apply for reinstatement; formal business dealings may continue if contractually or financially
obliged. Witnesses are taught that avoiding social and spiritual interaction with disfellowshipped individuals keeps
the congregation free from immoral influence and that "losing precious fellowship with loved ones may help [the shunned
individual] to come 'to his senses,' see the seriousness of his wrong, and take steps to return to Jehovah." The
practice of shunning may also serve to deter other members from dissident behavior. Members who disassociate (formally
resign) are described in Watch Tower Society literature as wicked and are also shunned. Expelled individuals may
eventually be reinstated to the congregation if deemed repentant by elders in the congregation in which the disfellowshipping
was enforced. Reproof is a lesser form of discipline given formally by a judicial committee to a baptized Witness
who is considered repentant of serious sin; the reproved person temporarily loses conspicuous privileges of service,
but suffers no restriction of social or spiritual fellowship. Marking, a curtailing of social but not spiritual fellowship,
is practiced if a baptized member persists in a course of action regarded as a violation of Bible principles but
not a serious sin.[note 4] Jehovah's Witnesses believe that the Bible condemns the mixing of religions, on the basis
that there can only be one truth from God, and therefore reject interfaith and ecumenical movements. They believe
that only their religion represents true Christianity, and that other religions fail to meet all the requirements
set by God and will soon be destroyed. Jehovah's Witnesses are taught that it is vital to remain "separate from the
world." The Witnesses' literature defines the "world" as "the mass of mankind apart from Jehovah's approved servants"
and teaches that it is morally contaminated and ruled by Satan. Witnesses are taught that association with "worldly"
people presents a "danger" to their faith, and are instructed to minimize social contact with non-members to better
maintain their own standards of morality. Jehovah's Witnesses believe their highest allegiance belongs to God's kingdom,
which is viewed as an actual government in heaven, with Christ as king. They remain politically neutral, do not seek
public office, and are discouraged from voting, though individual members may participate in uncontroversial community
improvement issues. Although they do not take part in politics, they respect the authority of the governments under
which they live. They do not celebrate religious holidays such as Christmas and Easter, nor do they observe birthdays,
nationalistic holidays, or other celebrations they consider to honor people other than Jesus. They feel that these
and many other customs have pagan origins or reflect a nationalistic or political spirit. Their position is that
these traditional holidays reflect Satan's control over the world. Witnesses are told that spontaneous giving at
other times can help their children not to feel deprived of birthdays or other celebrations. They do not work in
industries associated with the military, do not serve in the armed services, and refuse national military service,
which in some countries may result in their arrest and imprisonment. They do not salute or pledge allegiance to flags
or sing national anthems or patriotic songs. Jehovah's Witnesses see themselves as a worldwide brotherhood that transcends
national boundaries and ethnic loyalties. Sociologist Ronald Lawson has suggested the religion's intellectual and
organizational isolation, coupled with the intense indoctrination of adherents, rigid internal discipline and considerable
persecution, has contributed to the consistency of its sense of urgency in its apocalyptic message. Jehovah's Witnesses
refuse blood transfusions, which they consider a violation of God's law based on their interpretation of Acts 15:28,
29 and other scriptures. Since 1961 the willing acceptance of a blood transfusion by an unrepentant member has been
grounds for expulsion from the religion. Members are directed to refuse blood transfusions, even in "a life-or-death
situation". Jehovah's Witnesses accept non-blood alternatives and other medical procedures in lieu of blood transfusions,
and their literature provides information about non-blood medical procedures. Though Jehovah's Witnesses do not accept
blood transfusions of whole blood, they may accept some blood plasma fractions at their own discretion. The Watch
Tower Society provides pre-formatted durable power of attorney documents prohibiting major blood components, in which
members can specify which allowable fractions and treatments they will personally accept. Jehovah's Witnesses have
established Hospital Liaison Committees as a cooperative arrangement between individual Jehovah's Witnesses and medical
professionals and hospitals. As of August 2015, Jehovah's Witnesses report an average of 8.2 million publishers—the
term they use for members actively involved in preaching—in 118,016 congregations. In 2015, these reports indicated
over 1.93 billion hours spent in preaching and "Bible study" activity; in the same year, they conducted "Bible studies" with over 9.7 million individuals, including those conducted by Witness parents with their children. Since the mid-1990s, the number of peak publishers has increased from 4.5 million to 8.2 million. Jehovah's Witnesses estimate their
current worldwide growth rate to be 1.5% per year. The official published membership statistics, such as those mentioned
above, include only those who submit reports for their personal ministry; official statistics do not include inactive
and disfellowshipped individuals or others who might attend their meetings. As a result, only about half of those
who self-identified as Jehovah's Witnesses in independent demographic studies are considered active by the faith
itself. The 2008 US Pew Forum on Religion & Public Life survey found a low retention rate among members of the religion:
about 37% of people raised in the religion continued to identify themselves as Jehovah's Witnesses. Sociologist James
A. Beckford, in his 1975 study of Jehovah's Witnesses, classified the religion's organizational structure as Totalizing,
characterized by an assertive leadership, specific and narrow objectives, control over competing demands on members'
time and energy, and control over the quality of new members. Other characteristics of the classification include
likelihood of friction with secular authorities, reluctance to co-operate with other religious organizations, a high
rate of membership turnover, a low rate of doctrinal change, and strict uniformity of beliefs among members. Beckford
identified the religion's chief characteristics as historicism (identifying historical events as relating to the
outworking of God's purpose), absolutism (conviction that Jehovah's Witness leaders dispense absolute truth), activism
(capacity to motivate members to perform missionary tasks), rationalism (conviction that Witness doctrines have a
rational basis devoid of mystery), authoritarianism (rigid presentation of regulations without the opportunity for
criticism) and world indifference (rejection of certain secular requirements and medical treatments). A sociological
comparative study by the Pew Research Center found that Jehovah's Witnesses in the United States ranked highest in
statistics for getting no further than high school graduation, belief in God, importance of religion in one's life,
frequency of religious attendance, frequency of prayers, frequency of Bible reading outside of religious services,
belief their prayers are answered, belief that their religion can only be interpreted one way, belief that theirs
is the only one true faith leading to eternal life, opposition to abortion, and opposition to homosexuality. In the
study, Jehovah's Witnesses ranked lowest in statistics for having earned a graduate degree and interest in politics.
Political and religious animosity against Jehovah's Witnesses has at times led to mob action and government oppression
in various countries. Their doctrine of political neutrality and their refusal to serve in the military have led to
imprisonment of members who refused conscription during World War II and at other times where national service has
been compulsory. In 1933, there were approximately 20,000 Jehovah's Witnesses in Germany, of whom about 10,000 were
later imprisoned. Of those, 2,000 were sent to Nazi concentration camps, where they were identified by purple triangles; as many as 1,200 died, including 250 who were executed. In Canada, Jehovah's Witnesses were interned in camps along
with political dissidents and people of Chinese and Japanese descent. In the former Soviet Union, about 9,300 Jehovah's
Witnesses were deported to Siberia as part of Operation North in April 1951. Their religious activities are currently
banned or restricted in some countries, including China, Vietnam and some Islamic states. Authors including William
Whalen, Shawn Francis Peters and former Witnesses Barbara Grizzuti Harrison, Alan Rogerson and William Schnell have
claimed the arrests and mob violence in the United States in the 1930s and 1940s were the consequence of what appeared
to be a deliberate course of provocation of authorities and other religions by Jehovah's Witnesses. Whalen, Harrison
and Schnell have suggested Rutherford invited and cultivated opposition for publicity purposes in a bid to attract
dispossessed members of society, and to convince members that persecution from the outside world was evidence of
the truth of their struggle to serve God. Watch Tower Society literature of the period directed that Witnesses should
"never seek a controversy" nor resist arrest, but also advised members not to co-operate with police officers or
courts that ordered them to stop preaching, and to prefer jail rather than pay fines. In the United States, their
persistent legal challenges prompted a series of state and federal court rulings that reinforced judicial protections
for civil liberties. Among the rights strengthened by Witness court victories in the United States are the protection
of religious conduct from federal and state interference, the right to abstain from patriotic rituals and military
service, the right of patients to refuse medical treatment, and the right to engage in public discourse. Similar
cases in their favor have been heard in Canada. Doctrines of Jehovah's Witnesses are established by the Governing
Body. The religion does not tolerate dissent over doctrines and practices; members who openly disagree with the religion's
teachings are expelled and shunned. Witness publications strongly discourage followers from questioning doctrine
and counsel received from the Governing Body, reasoning that it is to be trusted as part of "God's organization".
They also warn members to "avoid independent thinking", claiming such thinking "was introduced by Satan the Devil"
and would "cause division". Those who openly disagree with official teachings are condemned as "apostates" who are
"mentally diseased". Former members Heather and Gary Botting compare the cultural paradigms of the religion to George
Orwell's Nineteen Eighty-Four, and Alan Rogerson describes the religion's leadership as totalitarian. Other critics
charge that by disparaging individual decision-making, the religion's leaders cultivate a system of unquestioning
obedience in which Witnesses abrogate all responsibility and rights over their personal lives. Critics also accuse
the religion's leaders of exercising "intellectual dominance" over Witnesses, controlling information and creating
"mental isolation", which former Governing Body member Raymond Franz argued were all elements of mind control. Sociologist
Rodney Stark states that Jehovah's Witness leaders are "not always very democratic" and that members "are expected
to conform to rather strict standards," but adds that "enforcement tends to be very informal, sustained by the close
bonds of friendship within the group", and that Jehovah's Witnesses see themselves as "part of the power structure
rather than subject to it." Sociologist Andrew Holden states that most members who join millenarian movements such
as Jehovah's Witnesses have made an informed choice. However, he also states that defectors "are seldom allowed a
dignified exit", and describes the administration as autocratic. On the other hand, in his study of nine of "the
Bibles most widely in use in the English-speaking world", Bible scholar Jason BeDuhn, Professor of Religious Studies
at Northern Arizona University, wrote: "The NW [New World Translation] emerges as the most accurate of the translations compared." Although the general public and many Bible scholars assume that the differences in the New World Translation are the result of religious bias on the part of its translators, BeDuhn stated: "Most of the differences are due
to the greater accuracy of the NW as a literal, conservative translation of the original expressions of the New Testament
writers.” He added however that the insertion of the name Jehovah in the New Testament "violate[s] accuracy in favor
of denominationally preferred expressions for God". Watch Tower Society publications have claimed that God has used
Jehovah's Witnesses (and formerly, the International Bible Students) to declare his will and has provided advance
knowledge about Armageddon and the establishment of God's kingdom. Some publications also claimed that God has used
Jehovah's Witnesses and the International Bible Students as a modern-day prophet.[note 5] Jehovah's Witnesses' publications
have made various predictions about world events they believe were prophesied in the Bible. Failed predictions have
led to the alteration or abandonment of some doctrines. Some failed predictions had been presented as "beyond doubt"
or "approved by God". The Watch Tower Society rejects accusations that it is a false prophet, stating that its teachings
are not inspired or infallible, and that it has not claimed its predictions were "the words of Jehovah." George D.
Chryssides has suggested that with the exception of statements about 1914, 1925 and 1975, the changing views and
dates of the Jehovah's Witnesses are more attributable to changed understandings of biblical chronology than to
failed predictions. Chryssides further states, "it is therefore simplistic and naïve to view the Witnesses as a group
that continues to set a single end-date that fails and then devise a new one, as many counter-cultists do." However,
sociologist Andrew Holden states that since the foundation of the movement around 140 years ago, "Witnesses have
maintained that we are living on the precipice of the end of time." Jehovah's Witnesses have been accused of having
policies and culture that help to conceal cases of sexual abuse within the organization. The religion has been criticized
for its "two witness rule" for church discipline, based on its application of scriptures at Deuteronomy 19:15 and
Matthew 18:15-17, which requires sexual abuse to be substantiated by secondary evidence if the accused person denies
any wrongdoing. In cases where corroboration is lacking, the Watch Tower Society's instruction is that "the elders
will leave the matter in Jehovah's hands". A former member of the church's headquarters staff, Barbara Anderson,
says the policy effectively requires that there be another witness to an act of molestation, "which is an impossibility".
Anderson says the policies "protect pedophiles rather than protect the children." Jehovah's Witnesses maintain that
they have a strong policy to protect children, adding that the best way to protect children is by educating parents;
they also state that they do not sponsor activities that separate children from parents. The religion's failure to
report abuse allegations to authorities has also been criticized. The Watch Tower Society's policy is that elders
inform authorities when required by law to do so, but otherwise leave that action up to the victim and his or her
family. The Australian Royal Commission into Institutional Responses to Child Sexual Abuse found that of 1,006 alleged
perpetrators of child sexual abuse identified by the Jehovah's Witnesses within their organization since 1950, "not
one was reported by the church to secular authorities." William Bowen, a former Jehovah's Witness elder who established
the Silentlambs organization to assist sex abuse victims within the religion, has claimed Witness leaders discourage
followers from reporting incidents of sexual misconduct to authorities, and other critics claim the organization
is reluctant to alert authorities in order to protect its "crime-free" reputation. In court cases in the United Kingdom
and the United States the Watch Tower Society has been found to have been negligent in its failure to protect children
from known sex offenders within the congregation and the Society has settled other child abuse lawsuits out of court,
reportedly paying as much as $780,000 to one plaintiff without admitting wrongdoing.
Dwight David "Ike" Eisenhower (/ˈaɪzənˌhaʊ.ər/ EYE-zən-HOW-ər; October 14, 1890 – March 28, 1969) was an American politician
and general who served as the 34th President of the United States from 1953 until 1961. He was a five-star general
in the United States Army during World War II and served as Supreme Commander of the Allied Forces in Europe. He
was responsible for planning and supervising the invasion of North Africa in Operation Torch in 1942–43 and the successful
invasion of France and Germany in 1944–45 from the Western Front. In 1951, he became the first Supreme Commander
of NATO. Eisenhower's main goals in office were to keep pressure on the Soviet Union and reduce federal deficits.
In the first year of his presidency, he threatened the use of nuclear weapons in an effort to conclude the Korean
War; his New Look policy of nuclear deterrence prioritized inexpensive nuclear weapons while reducing funding for
conventional military forces. He ordered coups in Iran and Guatemala. Eisenhower refused to give major aid to help France in Vietnam, but gave strong financial support to the new nation of South Vietnam. Congress agreed to his request
in 1955 for the Formosa Resolution, which obliged the U.S. to militarily support the pro-Western Republic of China
in Taiwan and continue the isolation of the People's Republic of China. After the Soviet Union launched the world's
first artificial satellite in 1957, Eisenhower authorized the establishment of NASA, which led to the space race.
During the Suez Crisis of 1956, Eisenhower condemned the Israeli, British and French invasion of Egypt, and forced
them to withdraw. He also condemned the Soviet invasion during the Hungarian Revolution of 1956 but took no action.
In 1958, Eisenhower sent 15,000 U.S. troops to Lebanon to prevent the pro-Western government from falling to a Nasser-inspired
revolution. Near the end of his term, his efforts to set up a summit meeting with the Soviets collapsed because of
the U-2 incident. In his January 17, 1961 farewell address to the nation, Eisenhower expressed his concerns about
the dangers of massive military spending, particularly deficit spending and government contracts to private military
manufacturers, and coined the term "military–industrial complex". On the domestic front, he covertly opposed Joseph
McCarthy and contributed to the end of McCarthyism by openly invoking the modern expanded version of executive privilege.
He otherwise left most political activity to his Vice President, Richard Nixon. He was a moderate conservative who
continued New Deal agencies and expanded Social Security. He also launched the Interstate Highway System and the Defense Advanced Research Projects Agency (DARPA), established strong science education via the National Defense Education Act, and encouraged peaceful use of nuclear power via amendments to the Atomic Energy Act. His parents
set aside specific times at breakfast and at dinner for daily family Bible reading. Chores were regularly assigned
and rotated among all the children, and misbehavior was met with unequivocal discipline, usually from David. His
mother, previously a member (with David) of the River Brethren sect of the Mennonites, joined the International Bible
Students Association, later known as Jehovah's Witnesses. The Eisenhower home served as the local meeting hall from
1896 to 1915, though Eisenhower never joined the International Bible Students. His later decision to attend West
Point saddened his mother, who felt that warfare was "rather wicked," but she did not overrule him. While speaking
of himself in 1948, Eisenhower said he was "one of the most deeply religious men I know" though unattached to any
"sect or organization". He was baptized in the Presbyterian Church in 1953. Eisenhower attended Abilene High School
and graduated with the class of 1909. As a freshman, he injured his knee and developed a leg infection that extended
into his groin, and which his doctor diagnosed as life-threatening. The doctor insisted that the leg be amputated
but Dwight refused to allow it and ultimately recovered, though he had to repeat his freshman year. He and brother
Edgar both wanted to attend college, though they lacked the funds. They made a pact to take alternate years at college
while the other worked to earn the tuitions. Edgar took the first turn at school, and Dwight was employed as a night
supervisor at the Belle Springs Creamery. Edgar asked for a second year, Dwight consented and worked for a second
year. At that time, a friend, "Swede" Hazlett, was applying to the Naval Academy and urged Dwight to apply to the school,
since no tuition was required. Eisenhower requested consideration for either Annapolis or West Point with his U.S.
Senator, Joseph L. Bristow. Though Eisenhower was among the winners of the entrance-exam competition, he was beyond
the age limit for the Naval Academy. He then accepted an appointment to West Point in 1911. The Eisenhowers had two
sons. Doud Dwight "Icky" Eisenhower was born September 24, 1917, and died of scarlet fever on January 2, 1921, at
the age of three; Eisenhower was mostly reticent to discuss his death. Their second son, John Eisenhower (1922–2013),
was born in Denver, Colorado. John served in the United States Army, retired as a brigadier general, became an author
and served as U.S. Ambassador to Belgium from 1969 to 1971. Coincidentally, John graduated from West Point on D-Day,
June 6, 1944. He married Barbara Jean Thompson on June 10, 1947. John and Barbara had four children: David, Barbara
Ann, Susan Elaine and Mary Jean. David, after whom Camp David is named, married Richard Nixon's daughter Julie in
1968. John died on December 21, 2013. Eisenhower was a golf enthusiast later in life, and joined the Augusta National
Golf Club in 1948. He played golf frequently during and after his presidency and was unreserved in expressing his
passion for the game, to the point of golfing during winter; he ordered his golf balls painted black so he could
see them better against snow on the ground. He had a small, basic golf facility installed at Camp David, and became
close friends with the Augusta National Chairman Clifford Roberts, inviting Roberts to stay at the White House on
several occasions. Roberts, an investment broker, also handled the Eisenhower family's investments. Roberts also
advised Eisenhower on tax aspects of publishing his memoirs, which proved financially lucrative. After golf, oil
painting was Eisenhower's second hobby. While at Columbia University, Eisenhower began the art after watching Thomas
E. Stephens paint Mamie's portrait. Eisenhower painted about 260 oils during the last 20 years of his life to relax,
mostly landscapes but also portraits of subjects such as Mamie, their grandchildren, General Montgomery, George Washington,
and Abraham Lincoln. Wendy Beckett stated that Eisenhower's work, "simple and earnest, rather cause us to wonder
at the hidden depths of this reticent president". A conservative in both art and politics, in a 1962 speech he denounced modern art as "a piece of canvas that looks like a broken-down Tin Lizzie, loaded with paint, has been driven over it." Angels in the Outfield was Eisenhower's favorite movie. His favorite reading material for relaxation was the
Western novels of Zane Grey. With his excellent memory and ability to focus, Eisenhower was skilled at card games.
He learned poker, which he called his "favorite indoor sport," in Abilene. Eisenhower recorded West Point classmates'
poker losses for payment after graduation, and later stopped playing because his opponents resented having to pay
him. A classmate reported that after learning to play contract bridge at West Point, Eisenhower played the game six
nights a week for five months. When the U.S. entered World War I he immediately requested an overseas assignment
but was again denied and then assigned to Fort Leavenworth, Kansas. In February 1918 he was transferred to Camp Meade
in Maryland with the 65th Engineers. His unit was later ordered to France but to his chagrin he received orders for
the new tank corps, where he was promoted to brevet Lieutenant Colonel in the National Army. He commanded a unit
that trained tank crews at Camp Colt – his first command – at the site of "Pickett's Charge" on the Gettysburg, Pennsylvania
Civil War battleground. Though Eisenhower and his tank crews never saw combat, he displayed excellent organizational
skills, as well as an ability to accurately assess junior officers' strengths and make optimal placements of personnel.
Once again his spirits were raised when the unit under his command received orders overseas to France. This time
his wishes were thwarted when the armistice was signed, just a week before departure. Completely missing out on the
warfront left him depressed and bitter for a time, despite being given the Distinguished Service Medal for his work
at home. In World War II, rivals who had combat service in the First World War (led by Gen. Bernard
Montgomery) sought to denigrate Eisenhower for his previous lack of combat duty, despite his stateside experience
establishing a camp, completely equipped, for thousands of troops, and developing a full combat training schedule.
He assumed duties again at Camp Meade, Maryland, commanding a battalion of tanks, where he remained until 1922. His
schooling continued, focused on the nature of the next war and the role of the tank in it. His new expertise in tank
warfare was strengthened by a close collaboration with George S. Patton, Sereno E. Brett, and other senior tank leaders.
Their leading-edge ideas of speed-oriented offensive tank warfare were strongly discouraged by superiors, who considered
the new approach too radical and preferred to continue using tanks in a strictly supportive role for the infantry.
Eisenhower was even threatened with court-martial for continued publication of these proposed methods of tank deployment,
and he relented. From 1920, Eisenhower served under a succession of talented generals – Fox Conner, John J. Pershing,
Douglas MacArthur and George Marshall. He first became executive officer to General Conner in the Panama Canal Zone,
where, joined by Mamie, he served until 1924. Under Conner's tutelage, he studied military history and theory (including
Carl von Clausewitz's On War), and later cited Conner's enormous influence on his military thinking, saying in 1962
that "Fox Conner was the ablest man I ever knew." Conner's comment on Eisenhower was, "[He] is one of the most capable,
efficient and loyal officers I have ever met." On Conner's recommendation, in 1925–26 he attended the Command and
General Staff College at Fort Leavenworth, Kansas, where he graduated first in a class of 245 officers. He then served
as a battalion commander at Fort Benning, Georgia, until 1927. During the late 1920s and early 1930s, Eisenhower's
career in the post-war army stalled somewhat, as military priorities diminished; many of his friends resigned for
high-paying business jobs. He was assigned to the American Battle Monuments Commission directed by General Pershing,
and with the help of his brother Milton Eisenhower, then a journalist at the Agriculture Department, he produced
a guide to American battlefields in Europe. He then was assigned to the Army War College and graduated in 1928. After
a one-year assignment in France, Eisenhower served as executive officer to General George V. Moseley, Assistant Secretary
of War, from 1929 to February 1933. Major Dwight D. Eisenhower graduated from the Army Industrial College (Washington,
DC) in 1933 and later served on the faculty (it was later expanded to become the Industrial College of the Armed
Services and is now known as the Dwight D. Eisenhower School for National Security and Resource Strategy). His primary
duty was planning for the next war, which proved most difficult in the midst of the Great Depression. He then was
posted as chief military aide to General MacArthur, Army Chief of Staff. In 1932, he participated in the clearing
of the Bonus March encampment in Washington, D.C. Although he was against the actions taken against the veterans
and strongly advised MacArthur against taking a public role in it, he later wrote the Army's official incident report,
endorsing MacArthur's conduct. Historians have concluded that this assignment provided valuable preparation for handling
the challenging personalities of Winston Churchill, George S. Patton, George Marshall, and General Montgomery during
World War II. Eisenhower later emphasized that too much had been made of the disagreements with MacArthur, and that
a positive relationship endured. While in Manila, Mamie suffered a life-threatening stomach ailment but recovered
fully. Eisenhower was promoted to the rank of permanent lieutenant colonel in 1936. He also learned to fly, making
a solo flight over the Philippines in 1937 and obtaining his private pilot's license in 1939 at Fort Lewis. Also around
this time, on MacArthur's recommendation, then Philippine President Manuel L. Quezon offered him a post with the Philippine Commonwealth Government as chief of police of a new capital being planned, now named
Quezon City, but he declined the offer. Eisenhower returned to the U.S. in December 1939 and was assigned as a battalion
commander and regimental executive officer of the 15th Infantry at Fort Lewis, Washington. In March 1941 he was promoted
to colonel and assigned as chief of staff of the newly activated IX Corps under Major General Kenyon Joyce. In June
1941, he was appointed Chief of Staff to General Walter Krueger, Commander of the 3rd Army, at Fort Sam Houston in
San Antonio, Texas. After successfully participating in the Louisiana Maneuvers, he was promoted to brigadier general
on October 3, 1941. Although his administrative abilities had been noticed, on the eve of the U.S. entry into World
War II he had never held an active command above a battalion and was far from being considered by many as a potential
commander of major operations. After the Japanese attack on Pearl Harbor, Eisenhower was assigned to the General
Staff in Washington, where he served until June 1942 with responsibility for creating the major war plans to defeat
Japan and Germany. He was appointed Deputy Chief in charge of Pacific Defenses under the Chief of War Plans Division
(WPD), General Leonard T. Gerow, and then succeeded Gerow as Chief of the War Plans Division. Next, he was appointed
Assistant Chief of Staff in charge of the new Operations Division (which replaced WPD) under Chief of Staff General
George C. Marshall, who spotted talent and promoted accordingly. At the end of May 1942, Eisenhower accompanied Lt.
Gen. Henry H. Arnold, commanding general of the Army Air Forces, to London to assess the effectiveness of the theater
commander in England, Maj. Gen. James E. Chaney. He returned to Washington on June 3 with a pessimistic assessment,
stating he had an "uneasy feeling" about Chaney and his staff. On June 23, 1942, he returned to London as Commanding
General, European Theater of Operations (ETOUSA), based in London and with a house on Coombe, Kingston upon Thames,
and replaced Chaney. He was promoted to lieutenant general on July 7. In November 1942, he was also appointed Supreme
Commander Allied Expeditionary Force of the North African Theater of Operations (NATOUSA) through the new operational
Headquarters Allied (Expeditionary) Force Headquarters (A(E)FHQ). The word "expeditionary" was dropped soon after
his appointment for security reasons. The campaign in North Africa was designated Operation Torch and was planned
underground within the Rock of Gibraltar. Eisenhower was the first non-British person to command Gibraltar in 200
years. French cooperation was deemed necessary to the campaign, and Eisenhower encountered a "preposterous situation"
with the multiple rival factions in France. His primary objective was to move forces successfully into Tunisia, and to facilitate that objective he gave his support to François Darlan as High Commissioner in North Africa,
despite Darlan's previous high offices of state in Vichy France and his continued role as commander-in-chief of the
French armed forces. The Allied leaders were "thunderstruck" by this from a political standpoint, though none of
them had offered Eisenhower guidance with the problem in the course of planning the operation. Eisenhower was severely
criticized for the move. Darlan was assassinated on December 24 by Fernand Bonnier de La Chapelle. Eisenhower did
not take action to prevent the arrest and extrajudicial execution of Bonnier de La Chapelle by associates of Darlan
acting without authority from either Vichy or the Allies, considering it a criminal rather than a military matter.
Eisenhower later appointed as High Commissioner General Henri Giraud, who had been installed by the Allies as Darlan's commander-in-chief and who had refused to postpone the execution. Operation Torch also served as a valuable training
ground for Eisenhower's combat command skills; during the initial phase of Generalfeldmarschall Erwin Rommel's move
into the Kasserine Pass, Eisenhower created some confusion in the ranks by interfering with his subordinates' execution of battle plans. He was also initially indecisive in removing Lloyd Fredendall, commanding
U.S. II Corps. He became more adroit in such matters in later campaigns. In February 1943, his authority was extended
as commander of AFHQ across the Mediterranean basin to include the British Eighth Army, commanded by General Sir
Bernard Montgomery. The Eighth Army had advanced across the Western Desert from the east and was ready for the start
of the Tunisia Campaign. Eisenhower gained his fourth star and gave up command of ETOUSA to become commander of NATOUSA.
After the capitulation of Axis forces in North Africa, Eisenhower oversaw the highly successful invasion of Sicily.
Once Mussolini, the Italian leader, had fallen in Italy, the Allies switched their attention to the mainland with
Operation Avalanche. But while Eisenhower argued with President Roosevelt and British Prime Minister Churchill, who
both insisted on unconditional terms of surrender in exchange for helping the Italians, the Germans pursued an aggressive
buildup of forces in the country, making the job more difficult by adding 19 divisions and initially outnumbering the Allied forces two to one; nevertheless, the invasion of Italy was highly successful. In December 1943, President
Roosevelt decided that Eisenhower – not Marshall – would be Supreme Allied Commander in Europe. The following month,
he resumed command of ETOUSA, and a month after that he was officially designated Supreme Allied Commander of the Allied Expeditionary Force, heading the Supreme Headquarters Allied Expeditionary Force (SHAEF) and serving in this dual role until the end of hostilities in Europe in May 1945.
He was charged in these positions with planning and carrying out the Allied assault on the coast of Normandy in June
1944 under the code name Operation Overlord, the liberation of Western Europe and the invasion of Germany. Eisenhower,
as well as the officers and troops under him, had learned valuable lessons in their previous operations, and their
skills had all strengthened in preparation for the next most difficult campaign against the Germans—a beach landing
assault. His first struggles, however, were with Allied leaders and officers on matters vital to the success of the
Normandy invasion; he argued with Roosevelt over an essential agreement with De Gaulle to use French resistance forces
in covert and sabotage operations against the Germans in advance of Overlord. Admiral Ernest J. King fought with
Eisenhower over King's refusal to provide additional landing craft from the Pacific. He also insisted that the British
give him exclusive command over all strategic air forces to facilitate Overlord, to the point of threatening to resign
unless Churchill relented, as he did. Eisenhower then designed a bombing plan in France in advance of Overlord and
argued with Churchill over the latter's concern with civilian casualties; de Gaulle interjected that the casualties
were justified in shedding the yoke of the Germans, and Eisenhower prevailed. He also had to manage skillfully to retain the services of the often unruly George S. Patton, severely reprimanding him first when Patton slapped a subordinate and again when Patton gave a speech making improper comments about postwar policy. The D-Day
Normandy landings on June 6, 1944, were costly but successful. A month later, the invasion of Southern France took
place, and control of forces in the southern invasion passed from the AFHQ to the SHAEF. Many prematurely expected that victory in Europe would come by summer's end; however, the Germans did not capitulate for almost a year. From
then until the end of the war in Europe on May 8, 1945, Eisenhower, through SHAEF, commanded all Allied forces, and
through his command of ETOUSA had administrative command of all U.S. forces on the Western Front north of the Alps.
He was ever mindful of the inevitable loss of life and suffering that would be experienced on an individual level
by the troops under his command and their families. This prompted him to make a point of visiting every division
involved in the invasion. Eisenhower's sense of responsibility was underscored by his draft of a statement to be
issued if the invasion failed, a draft that has been called one of the great speeches of history. Once the coastal assault
had succeeded, Eisenhower insisted on retaining personal control over the land battle strategy, and was immersed
in the command and supply of multiple assaults through France on Germany. Field Marshal Montgomery insisted priority
be given to his 21st Army Group's attack being made in the north, while Generals Bradley (12th U.S. Army Group) and
Devers (Sixth U.S. Army Group) insisted they be given priority in the center and south of the front (respectively).
Eisenhower worked tirelessly to address the demands of the rival commanders to optimize Allied forces, often by giving
them tactical, though sometimes ineffective, latitude; many historians conclude this delayed the Allied victory in
Europe. However, due to Eisenhower's persistence, the pivotal supply port at Antwerp was successfully, albeit belatedly,
opened in late 1944, and victory became a more distinct probability. In recognition of his senior position in the
Allied command, on December 20, 1944, he was promoted to General of the Army, equivalent to the rank of Field Marshal
in most European armies. In this and the previous high commands he held, Eisenhower showed his great talents for
leadership and diplomacy. Although he had never seen action himself, he won the respect of front-line commanders.
He interacted adeptly with allies such as Winston Churchill, Field Marshal Bernard Montgomery and General Charles
de Gaulle. He had serious disagreements with Churchill and Montgomery over questions of strategy, but these rarely
upset his relationships with them. He dealt with Soviet Marshal Zhukov, his Russian counterpart, and they became
good friends. The Germans launched a surprise counter offensive, in the Battle of the Bulge in December 1944, which
the Allies turned back in early 1945 after Eisenhower repositioned his armies and improved weather allowed the Air
Force to engage. German defenses continued to deteriorate on both the eastern front with the Soviets and the western
front with the Allies. The British wanted Berlin, but Eisenhower decided it would be a military mistake for him to
attack Berlin, and said orders to that effect would have to be explicit. The British backed down, but then wanted
Eisenhower to move into Czechoslovakia for political reasons. Washington refused to support Churchill's plan to use
Eisenhower's army for political maneuvers against Moscow. The actual division of Germany followed the lines that
Roosevelt, Churchill and Stalin had previously agreed upon. The Soviet Red Army captured Berlin in a very large-scale
bloody battle, and the Germans finally surrendered on May 7, 1945. Following the German unconditional surrender,
Eisenhower was appointed Military Governor of the U.S. Occupation Zone, based at the IG Farben Building in Frankfurt
am Main. He had no responsibility for the other three zones, controlled by Britain, France and the Soviet Union,
except for the city of Berlin, which was managed by the Four-Power Authorities through the Allied Kommandatura as
the governing body. Upon discovery of the Nazi concentration camps, he ordered camera crews to document evidence
of the atrocities in them for use in the Nuremberg Trials. He reclassified German prisoners of war (POWs) in U.S.
custody as Disarmed Enemy Forces (DEFs), who were no longer subject to the Geneva Convention. Eisenhower followed
the orders laid down by the Joint Chiefs of Staff (JCS) in directive JCS 1067, but softened them by bringing in 400,000
tons of food for civilians and allowing more fraternization. In response to the devastation in Germany, including
food shortages and an influx of refugees, he arranged distribution of American food and medical equipment. His actions
reflected the new American attitude toward the German people as victims of the Nazis rather than villains, even as he aggressively purged
the ex-Nazis. In November 1945, Eisenhower returned to Washington to replace Marshall as Chief of Staff of the Army.
His main role was rapid demobilization of millions of soldiers, a slow job that was delayed by lack of shipping.
Eisenhower was convinced in 1946 that the Soviet Union did not want war and that friendly relations could be maintained;
he strongly supported the new United Nations and favored its involvement in the control of atomic bombs. However,
in formulating policies regarding the atomic bomb and relations with the Soviets, Truman was guided by the U.S. State
Department and ignored Eisenhower and the Pentagon. Indeed, Eisenhower had opposed the use of the atomic bomb against
the Japanese, writing, "First, the Japanese were ready to surrender and it wasn't necessary to hit them with that
awful thing. Second, I hated to see our country be the first to use such a weapon." Initially, Eisenhower was characterized
by hopes for cooperation with the Soviets. He even visited Warsaw in 1945. Invited by Bolesław Bierut and decorated
with Poland's highest military decoration, he was shocked by the scale of destruction in the city. However, by mid-1947,
as East–West tensions over economic recovery in Germany and the Greek Civil War escalated, Eisenhower abandoned those hopes and
agreed with a containment policy to stop Soviet expansion. In June 1943 a visiting politician had suggested to Eisenhower
that he might become President of the United States after the war. Eisenhower believed that a general should not participate in politics; one author later wrote that "figuratively speaking, [Eisenhower] kicked his political-minded visitor out of his office". As others asked him about his political future, Eisenhower told one that he could not imagine
wanting to be considered for any political job "from dogcatcher to Grand High Supreme King of the Universe", and
another that he could not serve as Army Chief of Staff if others believed he had political ambitions. In 1945 Truman
told Eisenhower during the Potsdam Conference that, if Eisenhower desired it, the president would help him win the 1948
election, and in 1947 he offered to run as Eisenhower's running mate on the Democratic ticket if MacArthur won the
Republican nomination. As the election approached, other prominent citizens and politicians from both parties urged
Eisenhower to run for president. In January 1948, after learning of plans in New Hampshire to elect delegates supporting
him for the forthcoming Republican National Convention, Eisenhower stated through the Army that he was "not available
for and could not accept nomination to high political office"; "life-long professional soldiers", he wrote, "in the
absence of some obvious and overriding reason, [should] abstain from seeking high political office". Eisenhower maintained
no political party affiliation during this time. Many believed he was forgoing his only opportunity to be president;
Republican Thomas E. Dewey was considered the probable winner in 1948 and would presumably serve two terms, meaning that Eisenhower, at age 66 in 1956, would by then be too old. In 1948, Eisenhower became President of Columbia University, an Ivy League
university in New York City. The assignment was described as not being a good fit in either direction. During that
year Eisenhower's memoir, Crusade in Europe, was published. Critics regarded it as one of the finest U.S. military
memoirs, and it was a major financial success as well. Eisenhower's profit on the book was substantially aided by
an unprecedented ruling by the U.S. Department of the Treasury that Eisenhower was not a professional writer, but
rather, marketing the lifetime asset of his experiences, and thus he only had to pay capital gains tax on his $635,000
advance instead of the much higher personal tax rate. This ruling saved Eisenhower about $400,000. Eisenhower's stint
as the president of Columbia University was punctuated by his activity within the Council on Foreign Relations, a
study group he led as president concerning the political and military implications of the Marshall Plan, and The
American Assembly, Eisenhower's "vision of a great cultural center where business, professional and governmental
leaders could meet from time to time to discuss and reach conclusions concerning problems of a social and political
nature". His biographer Blanche Wiesen Cook suggested that this period served as "the political education of General
Eisenhower", since he had to prioritize wide-ranging educational, administrative, and financial demands for the university.
Through his involvement in the Council on Foreign Relations, he also gained exposure to economic analysis, which
would become the bedrock of his understanding in economic policy. "Whatever General Eisenhower knows about economics,
he has learned at the study group meetings," one Aid to Europe member claimed. Within months of beginning his tenure
as the president of the university, Eisenhower was requested to advise U.S. Secretary of Defense James Forrestal
on the unification of the armed services. About six months after his appointment, he became the informal Chairman
of the Joint Chiefs of Staff in Washington. Two months later he fell ill, and he spent over a month in recovery at
the Augusta National Golf Club. He returned to his post in New York in mid-May, and in July 1949 took a two-month
vacation out-of-state. Because the American Assembly had begun to take shape, he traveled around the country during
mid-to-late 1950, building financial support from Columbia Associates, an alumni association. The contacts gained
through university and American Assembly fund-raising activities would later become important supporters in Eisenhower's
bid for the Republican party nomination and the presidency. Meanwhile, Columbia University's liberal faculty members
became disenchanted with the university president's ties to oilmen and businessmen, including Leonard McCollum, the
president of Continental Oil; Frank Abrams, the chairman of Standard Oil of New Jersey; Bob Kleberg, the president
of the King Ranch; H. J. Porter, a Texas oil executive; Bob Woodruff, the president of the Coca-Cola Corporation;
and Clarence Francis, the chairman of General Foods. The trustees of Columbia University refused to accept Eisenhower's
resignation in December 1950, when he took an extended leave from the university to become the Supreme Commander
of the North Atlantic Treaty Organization (NATO), and he was given operational command of NATO forces in Europe.
Eisenhower retired from active service as an Army general on May 31, 1952, and he resumed his presidency of Columbia.
He held this position until January 20, 1953, when he became the President of the United States. President Truman,
symbolizing a broad-based desire for an Eisenhower candidacy, pressed him again in 1951 to run for
the office as a Democrat. It was at this time that Eisenhower voiced his disagreements with the Democratic party
and declared himself and his family to be Republicans. A "Draft Eisenhower" movement in the Republican Party persuaded
him to declare his candidacy in the 1952 presidential election to counter the candidacy of non-interventionist Senator
Robert A. Taft. The effort was a long struggle; Eisenhower had to be convinced that political circumstances had created
a genuine duty for him to offer himself as a candidate, and that there was a mandate from the populace for him to
be their President. Henry Cabot Lodge, who served as his campaign manager, and others succeeded in convincing him,
and in June 1952 he resigned his command at NATO to campaign full-time. Eisenhower defeated Taft for the nomination,
having won critical delegate votes from Texas. Eisenhower's campaign was noted for the simple but effective slogan,
"I Like Ike". It was essential to his success that Eisenhower express opposition to Roosevelt's policy at Yalta and
against Truman's policies in Korea and China—matters in which he had once participated. In defeating Taft for the
nomination, Eisenhower found it necessary to appease the right-wing Old Guard of the Republican Party; his selection
of Richard M. Nixon as the Vice-President on the ticket was designed in part for that purpose. Nixon also provided
a strong anti-communist presence as well as some youth to counter Ike's more advanced age. In the general election,
against the advice of his advisors, Eisenhower insisted on campaigning in the South, refusing to surrender the region
to the Democratic Party. The campaign strategy, dubbed "K1C2", was to focus on attacking the Truman and Roosevelt
administrations on three issues: Korea, Communism and corruption. In an effort to accommodate the right, he stressed
that the liberation of Eastern Europe should be by peaceful means only; he also distanced himself from his former
boss President Truman. Two controversies during the campaign tested him and his staff, but did not affect the campaign.
One involved a report that Nixon had improperly received funds from a secret trust. Nixon spoke out adroitly to avoid
potential damage, but the matter permanently alienated the two candidates. The second issue centered on Eisenhower's
decision, later abandoned, to confront the controversial methods of Joseph McCarthy on his home turf in a Wisconsin appearance.
Just two weeks prior to the election, Eisenhower vowed to go to Korea and end the war there. He promised to maintain
a strong commitment against Communism while avoiding the topic of NATO; finally, he stressed a corruption-free, frugal
administration at home. Eisenhower was the last president born in the 19th century, and at age 62, was the oldest
man elected President since James Buchanan in 1856 (President Truman, the incumbent, was 64 at his election in 1948). Eisenhower was the only general to serve as President in the 20th
century and the most recent President to have never held elected office prior to the Presidency (The other Presidents
who did not have prior elected office were Zachary Taylor, Ulysses S. Grant, William Howard Taft and Herbert Hoover).
Due to a complete estrangement between the two as a result of campaigning, Truman and Eisenhower had minimal discussions
about the transition of administrations. After selecting his budget director, Joseph M. Dodge, Eisenhower asked Herbert
Brownell and Lucius Clay to make recommendations for his cabinet appointments. He accepted their recommendations
without exception; they included John Foster Dulles and George M. Humphrey with whom he developed his closest relationships,
and one woman, Oveta Culp Hobby. Eisenhower's cabinet, consisting of several corporate executives and one labor leader,
was dubbed by one journalist, "Eight millionaires and a plumber." The cabinet was notable for its lack of personal
friends, office seekers, or experienced government administrators. He also upgraded the role of the National Security
Council in planning all phases of the Cold War. Prior to his inauguration, Eisenhower led a meeting of advisors at
Pearl Harbor to address foremost issues; the agreed objectives were to balance the budget during his term, to bring the
Korean War to an end, to defend vital interests at lower cost through nuclear deterrent, and to end price and wage
controls. Eisenhower also conducted the first pre-inaugural cabinet meeting in history in late 1952; he used this
meeting to articulate his anti-communist policy toward Russia. His inaugural address, as well, was exclusively devoted to
foreign policy and included this same philosophy, as well as a commitment to foreign trade and the United Nations.
Throughout his presidency, Eisenhower adhered to a political philosophy of dynamic conservatism. A self-described
"progressive conservative," he continued all the major New Deal programs still in operation, especially Social Security.
He expanded its programs and rolled them into a new cabinet-level agency, the Department of Health, Education and
Welfare, while extending benefits to an additional ten million workers. He implemented integration in the Armed Services
in two years, which had not been completed under Truman. As the 1954 congressional elections approached, and it became
evident that the Republicans were in danger of losing their thin majority in both houses, Eisenhower was among those
blaming the Old Guard for the losses, and took up the charge to stop suspected efforts by the right wing to take
control of the GOP. Eisenhower then articulated his position as a moderate, progressive Republican: "I have just
one purpose ... and that is to build up a strong progressive Republican Party in this country. If the right wing
wants a fight, they are going to get it ... before I end up, either this Republican Party will reflect progressivism
or I won't be with them anymore." Initially Eisenhower planned on serving only one term, but as with other decisions,
he maintained a position of maximum flexibility in case leading Republicans wanted him to run again. During his recovery
from a heart attack late in 1955, he huddled with his closest advisors to evaluate the GOP's potential candidates;
the group, in addition to his doctor, concluded a second term was well advised, and he announced in February 1956
he would run again. Eisenhower was publicly noncommittal about Nixon's repeating as the Vice President on his ticket;
the question was an especially important one in light of his heart condition. He personally favored Robert B. Anderson,
a Democrat, who rejected his offer; Eisenhower then resolved to leave the matter in the hands of the party. In 1956,
Eisenhower faced Adlai Stevenson again and won by an even larger landslide, with 457 of 531 electoral votes and 57.6%
of the popular vote. The level of campaigning was curtailed out of health considerations. Eisenhower's goal to create
improved highways was influenced by difficulties encountered during his involvement in the U.S. Army's 1919 Transcontinental
Motor Convoy. He was assigned as an observer for the mission, which involved sending a convoy of U.S. Army vehicles
coast to coast. His subsequent experience with encountering German autobahn limited-access road systems during the
concluding stages of World War II convinced him of the benefits of an Interstate Highway System. Noticing the improved
ability to move logistics throughout the country, he thought an Interstate Highway System in the U.S. would not only
be beneficial for military operations, but provide a measure of continued economic growth. The legislation initially
stalled in the Congress over the issuance of bonds to finance the project, but the legislative effort was renewed
and the law was signed by Eisenhower in June 1956. In 1953, the Republican Party's Old Guard presented Eisenhower
with a dilemma by insisting he disavow the Yalta Agreements as beyond the constitutional authority of the Executive
Branch; however, the death of Joseph Stalin in March 1953 rendered the matter moot in practice. At this time Eisenhower
gave his Chance for Peace speech in which he attempted, unsuccessfully, to forestall the nuclear arms race with the
Soviet Union by suggesting multiple opportunities presented by peaceful uses of nuclear materials. Biographer Stephen
Ambrose opined that this was the best speech of Eisenhower's presidency. The speech was well received but the
Soviets never acted upon it, due to an overarching concern for the greater stockpiles of nuclear weapons in the U.S.
arsenal. Indeed, Eisenhower embarked upon a greater reliance on the use of nuclear weapons, while reducing conventional
forces, and with them the overall defense budget, a policy formulated as a result of Project Solarium and expressed
in NSC 162/2. This approach became known as the "New Look", and was initiated with defense cuts in late 1953. In
1955 American nuclear arms policy became one aimed primarily at arms control as opposed to disarmament. The failure
of negotiations over arms until 1955 was due mainly to the refusal of the Russians to permit any sort of inspections.
In talks held in London that year, they expressed a willingness to discuss inspections; the tables were then turned on Eisenhower when the U.S. proved unwilling to permit inspections of its own. In May of that
year the Russians agreed to sign a treaty giving independence to Austria, and paved the way for a Geneva summit with
the U.S., U.K. and France. At the Geneva Conference Eisenhower presented a proposal called "Open Skies" to facilitate
disarmament, which included plans for Russia and the U.S. to provide mutual access to each other's skies for open
surveillance of military infrastructure. Russian leader Nikita Khrushchev dismissed the proposal out of hand. In
1954, Eisenhower articulated the domino theory in his outlook towards communism in Southeast Asia and also in Central
America. He believed that if the communists were allowed to prevail in Vietnam, this would cause a succession of
countries to fall to communism, from Laos through Malaysia and Indonesia ultimately to India. Likewise, the fall
of Guatemala would end with the fall of neighboring Mexico. That year the loss of North Vietnam to the communists
and the rejection of his proposed European Defence Community (EDC) were serious defeats, but he remained optimistic
in his opposition to the spread of communism, saying "Long faces don't win wars". As he had threatened the French he would do upon their rejection of the EDC, he afterwards moved to restore West Germany as a full NATO partner. With Eisenhower's
leadership and Dulles' direction, CIA activities increased under the pretense of resisting the spread of communism
in poorer countries; the CIA in part deposed the leaders of Iran in Operation Ajax, of Guatemala through Operation
PBSuccess, and possibly the newly independent Republic of the Congo (Léopoldville). In 1954 Eisenhower wanted to
increase surveillance inside the Soviet Union. With Dulles' recommendation, he authorized the deployment of thirty
Lockheed U-2s at a cost of $35 million. The Eisenhower administration also planned the Bay of Pigs Invasion to overthrow Fidel Castro in Cuba, which John F. Kennedy was left to carry out. Over New York City in 1953, Eastern Airlines
Flight 8610, a commercial flight, had a near miss with Air Force Flight 8610, a Lockheed C-121 Constellation known
as Columbine II, while the latter was carrying President Eisenhower. This prompted the adoption of the unique call
sign Air Force One, to be used whenever the president is on board any US Air Force aircraft. Columbine II is the
only presidential aircraft to have ever been sold to the public and is the only remaining presidential aircraft left
unrestored and not on public display. On the whole, Eisenhower's support of the nation's fledgling space program
was officially modest until the Soviet launch of Sputnik in 1957 gained the Cold War enemy enormous prestige around
the world. He then launched a national campaign that funded not just space exploration but a major strengthening
of science and higher education. His Open Skies policy attempted to legitimize illegal Lockheed U-2 flyovers and Project Genetrix while paving the way for spy satellite technology to orbit over sovereign territory; he also created NASA as a civilian space agency, signed a landmark science education law, and fostered improved relations with American
scientists. In late 1952 Eisenhower went to Korea and discovered a military and political stalemate. Once in office,
when the Chinese began a buildup in the Kaesong sanctuary, he threatened to use nuclear force if an armistice was
not concluded. His earlier military reputation in Europe was effective with the Chinese. The National Security Council,
the Joint Chiefs of Staff, and the Strategic Air Command (SAC) devised detailed plans for nuclear war against China.
With the death of Stalin in early March 1953, Russian support for a Chinese hard-line weakened and China decided
to compromise on the prisoner issue. In July 1953, an armistice took effect with Korea divided along approximately
the same boundary as in 1950. The armistice and boundary remain in effect today, with American soldiers stationed
there to guarantee it. The armistice, concluded despite opposition from Secretary Dulles, South Korean President
Syngman Rhee, and also within Eisenhower's party, has been described by biographer Ambrose as the greatest achievement
of the administration. Eisenhower had the insight to realize that unlimited war in the nuclear age was unthinkable,
and limited war unwinnable. In November 1956, Eisenhower forced an end to the combined British, French and Israeli
invasion of Egypt in response to the Suez Crisis, receiving praise from Egyptian president Gamal Abdel Nasser. Simultaneously
he condemned the brutal Soviet invasion of Hungary in response to the Hungarian Revolution of 1956. He publicly disavowed
his allies at the United Nations, and used financial and diplomatic pressure to make them withdraw from Egypt. Eisenhower
explicitly defended his strong position against Britain and France in his memoirs, which were published in 1965.
Early in 1953, the French asked Eisenhower for help in French Indochina against the Communists, supplied from China,
who were fighting the First Indochina War. Eisenhower sent Lt. General John W. "Iron Mike" O'Daniel to Vietnam to
study and assess the French forces there. Chief of Staff Matthew Ridgway dissuaded the President from intervening
by presenting a comprehensive estimate of the massive military deployment that would be necessary. Eisenhower stated
prophetically that "this war would absorb our troops by divisions." Eisenhower did provide France with bombers and
non-combat personnel. After a few months with no success by the French, he added other aircraft to drop napalm for
clearing purposes. Further requests for assistance from the French were agreed to but only on conditions Eisenhower
knew were impossible to meet – allied participation and congressional approval. When the French fortress of Dien
Bien Phu fell to the Vietnamese Communists in May 1954, Eisenhower refused to intervene despite urgings from the
Chairman of the Joint Chiefs, the Vice President and the head of the NSC. Eisenhower responded to the French defeat with
the formation of the SEATO (Southeast Asia Treaty Organization) Alliance with the U.K., France, New Zealand and Australia
in defense of Vietnam against communism. At that time the French and Chinese reconvened Geneva peace talks; Eisenhower
agreed the U.S. would participate only as an observer. After France and the Communists agreed to a partition of Vietnam,
Eisenhower rejected the agreement, offering military and economic aid to southern Vietnam. Ambrose argues that Eisenhower,
by not participating in the Geneva agreement, had kept the U.S. out of Vietnam; nevertheless, with the formation of
SEATO, he had in the end put the U.S. back into the conflict. In late 1954, Gen. J. Lawton Collins was made ambassador
to "Free Vietnam" (the term South Vietnam came into use in 1955), effectively elevating the country to sovereign
status. Collins' instructions were to support the leader Ngo Dinh Diem in subverting communism, by helping him to
build an army and wage a military campaign. In February 1955, Eisenhower dispatched the first American soldiers to
Vietnam as military advisors to Diem's army. After Diem announced the formation of the Republic of Vietnam (RVN,
commonly known as South Vietnam) in October, Eisenhower immediately recognized the new state and offered military,
economic, and technical assistance. In the years that followed, Eisenhower increased the number of U.S. military
advisors in South Vietnam to 900 men. This was due to North Vietnam's support of "uprisings" in the south and concern that
the nation would fall. In May 1957 Diem, then President of South Vietnam, made a state visit to the United States
for ten days. President Eisenhower pledged his continued support, and a parade was held in Diem's honor in New York
City. Although Diem was publicly praised, in private Secretary of State John Foster Dulles conceded that Diem had
been selected because there were no better alternatives. On May 1, 1960, a one-man U.S. U-2 spy plane was shot down
at high altitude in Soviet airspace. The flight was made to gain photo intelligence before the opening of an East–West
summit conference, scheduled to begin in Paris 15 days later. Captain Francis
Gary Powers had bailed out of his aircraft and was captured after parachuting down onto Russian soil. Four days after
Powers disappeared, the Eisenhower Administration had NASA issue a very detailed press release noting that an aircraft
had "gone missing" north of Turkey. It speculated that the pilot might have fallen unconscious while the autopilot
was still engaged, and falsely claimed that "the pilot reported over the emergency frequency that he was experiencing
oxygen difficulties." Soviet Premier Nikita Khrushchev announced that a "spy-plane" had been shot down but intentionally
made no reference to the pilot. As a result, the Eisenhower Administration, thinking the pilot had died in the crash,
authorized the release of a cover story claiming that the plane was a "weather research aircraft" which had unintentionally
strayed into Soviet airspace after the pilot had radioed "difficulties with his oxygen equipment" while flying over
Turkey. The Soviets put Captain Powers on trial and displayed parts of the U-2, which had been recovered almost fully
intact. The 1960 Four Power Paris Summit between President Dwight Eisenhower, Nikita Khrushchev, Harold Macmillan
and Charles de Gaulle collapsed because of the incident. Eisenhower refused to accede to Khrushchev's demand that
he apologize, so Khrushchev would not take part in the summit. Up until this event, Eisenhower felt he had
been making progress towards better relations with the Soviet Union. Nuclear arms reduction and Berlin were to have
been discussed at the summit. Eisenhower stated it had all been ruined because of that "stupid U-2 business". While
President Truman had begun the process of desegregating the Armed Forces in 1948, actual implementation had been
slow. Eisenhower made clear his stance in his first State of the Union address in February 1953, saying "I propose
to use whatever authority exists in the office of the President to end segregation in the District of Columbia, including
the Federal Government, and any segregation in the Armed Forces". When he encountered opposition from the services,
he used government control of military spending to force the change through, stating "Wherever Federal Funds are
expended ..., I do not see how any American can justify ... a discrimination in the expenditure of those funds".
Eisenhower told District of Columbia officials to make Washington a model for the rest of the country in integrating
black and white public school children. He proposed to Congress the Civil Rights Act of 1957 and of 1960 and signed
those acts into law. The 1957 act for the first time established a permanent civil rights office inside the Justice
Department and a Civil Rights Commission to hear testimony about abuses of voting rights. Although both acts were
much weaker than subsequent civil rights legislation, they constituted the first significant civil rights acts since
1875. In 1957, the state of Arkansas refused to honor a federal court order, stemming from the Brown decision, to
integrate its public school system. Eisenhower demanded that Arkansas governor Orval Faubus obey the court order. When
Faubus balked, the president placed the Arkansas National Guard under federal control and sent in the 101st Airborne
Division. They escorted and protected nine black students' entry to Little Rock Central High School, an all-white
public school, for the first time since the Reconstruction Era. Martin Luther King Jr. wrote to Eisenhower to thank
him for his actions, writing "The overwhelming majority of southerners, Negro and white, stand firmly behind your
resolute action to restore law and order in Little Rock". Eisenhower's desire to maintain good relations with Congress
kept him from openly condemning Joseph McCarthy's highly criticized methods against communism. He chose instead to
ignore McCarthy's controversies, thereby depriving them of the additional energy that White House involvement would
have supplied. This position drew criticism from a number of quarters. In late 1953 McCarthy declared on national television that the
employment of communists within the government was a menace and would be a pivotal issue in the 1954 Senate elections.
Eisenhower was urged to respond directly and specify the various measures he had taken to purge the government of
communists, but he refused. One of Ike's objectives in not confronting McCarthy directly was to prevent
McCarthy from dragging the Atomic Energy Commission (AEC) into his witch hunt for communists, which would
interfere with, and perhaps delay, the AEC's important work on H-bombs. The administration had discovered through
its own investigations that one of the leading scientists on the AEC, J. Robert Oppenheimer, had urged that the H-bomb
work be delayed. Eisenhower removed him from the agency and revoked his security clearance, though he knew this would
create fertile ground for McCarthy. In May 1954, McCarthy threatened to issue subpoenas to White House personnel.
Eisenhower was furious, and issued an order as follows: "It is essential to efficient and effective administration
that employees of the Executive Branch be in a position to be completely candid in advising with each other on official
matters ... it is not in the public interest that any of their conversations or communications, or any documents
or reproductions, concerning such advice be disclosed." This was an unprecedented step by Eisenhower to protect communication
beyond the confines of a cabinet meeting, and soon became a tradition known as executive privilege. Ike's denial
of McCarthy's access to his staff reduced McCarthy's hearings to rants about trivial matters, and contributed to
his ultimate downfall. The Democrats gained a majority in both houses in the 1954 election. Eisenhower had to work
with the Democratic Majority Leader Lyndon B. Johnson (later U.S. president) in the Senate and Speaker Sam Rayburn
in the House, both from Texas. Joe Martin, the Republican Speaker from 1947 to 1949 and again from 1953 to 1955,
wrote that Eisenhower "never surrounded himself with assistants who could solve political problems with professional
skill. There were exceptions, Leonard W. Hall, for example, who as chairman of the Republican National Committee
tried to open the administration's eyes to the political facts of life, with occasional success. However, these exceptions
were not enough to right the balance." Speaker Martin concluded that Eisenhower worked too much through subordinates
in dealing with Congress, with results, "often the reverse of what he has desired" because Members of Congress, "resent
having some young fellow who was picked up by the White House without ever having been elected to office himself
coming around and telling them 'The Chief wants this'. The administration never made use of many Republicans of consequence
whose services in one form or another would have been available for the asking." Eisenhower appointed five justices to the Supreme Court: Earl Warren, John Marshall Harlan II, William J. Brennan Jr., Charles Evans Whittaker, and Potter Stewart. Whittaker was unsuited for the role
and soon retired. Stewart and Harlan were conservative Republicans, while Brennan was a Democrat who became a leading
voice for liberalism. In selecting a Chief Justice, Eisenhower looked for an experienced jurist who could appeal
to liberals in the party as well as law-and-order conservatives, noting privately that Warren "represents the kind
of political, economic, and social thinking that I believe we need on the Supreme Court ... He has a national name
for integrity, uprightness, and courage that, again, I believe we need on the Court". In the next few years Warren
led the Court in a series of liberal decisions that revolutionized the role of the Court. Eisenhower began smoking
cigarettes at West Point, often two or three packs a day. Eisenhower stated that he "gave [himself] an order" to
stop cold turkey in March 1949 while at Columbia. He was probably the first president to release information about
his health and medical records while in office. On September 24, 1955, while vacationing in Colorado, he had a serious
heart attack that required six weeks' hospitalization, during which time Nixon, Dulles, and Sherman Adams assumed
administrative duties and provided communication with the President. He was treated by Dr. Paul Dudley White, a cardiologist
with a national reputation, who regularly informed the press of the President's progress. Rather than ruling
him out as a candidate for a second term as President, his physician recommended a second term as essential to his recovery.
As a consequence of his heart attack, Eisenhower developed a left ventricular aneurysm, which was in turn the cause
of a mild stroke on November 25, 1957. This incident occurred during a cabinet meeting when Eisenhower suddenly found
himself unable to speak or move his right hand. The stroke caused aphasia. The president also suffered from
Crohn's disease, a chronic inflammatory condition of the intestine, which necessitated surgery for a bowel obstruction
on June 9, 1956. To treat the intestinal block, surgeons bypassed about ten inches of his small intestine. His scheduled
meeting with Indian Prime Minister Jawaharlal Nehru was postponed so he could recover from surgery at his farm in
Gettysburg, Pennsylvania. He was still recovering from this operation during the Suez Crisis. Eisenhower's health
issues forced him to give up smoking and make some changes to his dietary habits, but he still indulged in alcohol.
During a visit to England he complained of dizziness and had to have his blood pressure checked on August 29, 1959;
however, before dinner at Chequers on the next day his doctor General Howard Snyder recalled Eisenhower "drank several
gin-and-tonics, and one or two gins on the rocks ... three or four wines with the dinner". The last three years of
Eisenhower's second term in office were ones of relatively good health. Eventually after leaving the White House,
he suffered several additional and ultimately crippling heart attacks. A severe heart attack in August 1965 largely
ended his participation in public affairs. In August 1966 he began to show symptoms of cholecystitis, for which he
underwent surgery on December 12, 1966, when his gallbladder, which contained 16 gallstones, was removed. After Eisenhower's
death in 1969 (see below), an autopsy unexpectedly revealed an adrenal pheochromocytoma, a benign adrenaline-secreting
tumor that may have made the President more vulnerable to heart disease. Eisenhower suffered seven heart attacks
in total from 1955 until his death. In the 1960 election to choose his successor, Eisenhower endorsed his own Vice
President, Republican Richard Nixon against Democrat John F. Kennedy. He told friends, "I will do almost anything
to avoid turning my chair and country over to Kennedy." He actively campaigned for Nixon in the final days, although
he may have done Nixon some harm. When asked by reporters at the end of a televised press conference to list one
of Nixon's policy ideas he had adopted, Eisenhower joked, "If you give me a week, I might think of one. I don't remember."
Kennedy's campaign used the quote in one of its campaign commercials. Nixon narrowly lost to Kennedy. Eisenhower,
then at age 70 the oldest president in history, was succeeded by the youngest elected president, 43-year-old
Kennedy. On January 17, 1961, Eisenhower gave his final televised Address to the Nation from the Oval Office.
In his farewell speech, Eisenhower raised the issue of the Cold War and role of the U.S. armed forces. He described
the Cold War: "We face a hostile ideology global in scope, atheistic in character, ruthless in purpose and insidious
in method ..." and warned about what he saw as unjustified government spending proposals and continued with a warning
that "we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military–industrial
complex." Eisenhower retired to the place where he and Mamie had spent much of their post-war time, a working farm
adjacent to the battlefield at Gettysburg, Pennsylvania, only 70 miles from his ancestral home in Elizabethville,
Dauphin County, Pennsylvania. In 1967 the Eisenhowers donated the farm to the National Park Service. In retirement,
the former president did not completely retreat from political life; he spoke at the 1964 Republican National Convention
and appeared with Barry Goldwater in a Republican campaign commercial from Gettysburg. However, his endorsement came
somewhat reluctantly because Goldwater had attacked the former president as "a dime-store New Dealer". On the morning
of March 28, 1969, at the age of 78, Eisenhower died in Washington, D.C. of congestive heart failure at Walter Reed
Army Medical Center. The following day his body was moved to the Washington National Cathedral's Bethlehem Chapel,
where he lay in repose for 28 hours. On March 30, his body was brought by caisson to the United States Capitol, where
he lay in state in the Capitol Rotunda. On March 31, Eisenhower's body was returned to the National Cathedral, where
he was given an Episcopal Church funeral service. That evening, Eisenhower's body was placed onto a train en route
to Abilene, Kansas, the last time a funeral train has been used as part of funeral proceedings of an American president.
His body arrived on April 2, and was interred later that day in a small chapel on the grounds of the Eisenhower Presidential
Library. The president's body was buried as a General of the Army. The family used an $80 standard soldier's casket,
and dressed Eisenhower's body in his famous short green jacket. His only medals worn were: the Army Distinguished
Service Medal with three oak leaf clusters, the Navy Distinguished Service Medal, and the Legion of Merit. Eisenhower
is buried alongside his son Doud, who died at age 3 in 1921. His wife Mamie was buried next to him after her death
in 1979. In the immediate years after Eisenhower left office, his reputation declined. He was widely seen by critics
as an inactive, uninspiring, golf-playing president compared to his vigorous young successor. Despite his unprecedented
use of Army troops to enforce a federal desegregation order at Central High School in Little Rock, Eisenhower was
criticized for his reluctance to support the civil rights movement to the degree that activists wanted. Eisenhower
also attracted criticism for his handling of the 1960 U-2 incident and the associated international embarrassment,
for the Soviet Union's perceived leadership in the nuclear arms race and the Space Race, and for his failure to publicly
oppose McCarthyism. Since the 19th century, many if not all presidents have been assisted by a central figure or "gatekeeper",
sometimes described as the President's Private Secretary, sometimes with no official title at all. Eisenhower formalized
this role, introducing the office of White House Chief of Staff – an idea he borrowed from the United States Army.
Every president after Lyndon Johnson has also appointed staff to this position. Initially, Gerald Ford and Jimmy
Carter tried to operate without a chief of staff, but each eventually appointed one. The development of the appreciation
medals was initiated by the White House and executed by the Bureau of the Mint through the U.S. Mint in Philadelphia.
The medals were struck from September 1958 through October 1960. A total of twenty designs are cataloged with a total
mintage of 9,858. Each of the designs incorporates the text "with appreciation" or "with personal and official gratitude"
accompanied with Eisenhower's initials "D.D.E." or facsimile signature. The design also incorporates location, date,
and/or significant event. Prior to the end of his second term as President, 1,451 medals were turned in to the Bureau
of the Mint and destroyed. The Eisenhower appreciation medals are part of the Presidential Medal of Appreciation
Award Medal Series. The Interstate Highway System is officially known as the 'Dwight D. Eisenhower National System
of Interstate and Defense Highways' in his honor. It was inspired in part by Eisenhower's own Army experiences in
World War II, where he recognized the advantages of the autobahn systems in Germany, Austria, and Switzerland. Commemorative
signs reading "Eisenhower Interstate System" and bearing Eisenhower's permanent 5-star rank insignia were introduced
in 1993 and are currently displayed throughout the Interstate System. Several highways are also named for him, including
the Eisenhower Expressway (Interstate 290) near Chicago and the Eisenhower Tunnel on Interstate 70 west of Denver.
A loblolly pine, known as the "Eisenhower Pine", was located on Augusta's 17th hole, approximately 210 yards (192
m) from the Masters tee. President Dwight D. Eisenhower, an Augusta National member, hit the tree so many times that,
at a 1956 club meeting, he proposed that it be cut down. Not wanting to offend the president, the club's chairman,
Clifford Roberts, immediately adjourned the meeting rather than reject the request. The tree was removed in February
2014 after an ice storm caused it significant damage.
The Bronx /ˈbrɒŋks/ is the northernmost of the five boroughs (counties) of New York City in the state of New York, located
south of Westchester County. Many bridges and tunnels link the Bronx to the island and borough of Manhattan to the
west over and under the narrow Harlem River, as well as three longer bridges south over the East River to the borough
of Queens. Of the five boroughs, the Bronx is the only one on the U.S. mainland and, with a land area of 42 square
miles (109 km2) and a population of 1,438,159 in 2014, has the fourth-largest land area, the fourth-highest population,
and the third-highest population density. The Bronx is named after Jonas Bronck, who created the first settlement
as part of the New Netherland colony in 1639. The native Lenape were displaced after 1643 by settlers. In the 19th
and 20th centuries, the Bronx received many immigrant groups as it was transformed into an urban community, first
from various European countries (particularly Ireland, Germany and Italy) and later from the Caribbean region (particularly
Puerto Rico, Jamaica and the Dominican Republic), as well as African American migrants from the American South. This
cultural mix has made the Bronx a wellspring of both Latin music and hip hop. The Bronx contains one of the five
poorest Congressional Districts in the United States, the 15th, but its wide diversity also includes affluent, upper-income
and middle-income neighborhoods such as Riverdale, Fieldston, Spuyten Duyvil, Schuylerville, Pelham Bay, Pelham Gardens,
Morris Park and Country Club. The Bronx, particularly the South Bronx, saw a sharp decline in population, livable
housing, and the quality of life in the late 1960s and the 1970s, culminating in a wave of arson. Since the late
1980s, the borough's communities have shown significant redevelopment, which picked up pace in the 1990s and continues
today. Jonas Bronck (c. 1600–43) was a Swedish-born emigrant from Komstad, Norra Ljunga parish in Småland, Sweden
who arrived in New Netherland during the spring of 1639. He became the first recorded European settler in the area
now known as the Bronx. He leased land from the Dutch West India Company on the neck of the mainland immediately
north of the Dutch settlement in Harlem (on Manhattan island), and bought additional tracts from the local tribes.
He eventually accumulated 500 acres (about 2 square km, or 3/4 of a square mile) between the Harlem River and the
Aquahung, which became known as Bronck's River, or The Bronx. Dutch and English settlers referred to the area as
Bronck's Land. The American poet William Bronk was a descendant of Pieter Bronck, either Jonas Bronck's son or his
younger brother. The Bronx is referred to, both legally and colloquially, with a definite article, as the Bronx.
(The County of Bronx, unlike the coextensive Borough of the Bronx, does not place "the" immediately before "Bronx" in
formal references, nor does the United States Postal Service in its database of Bronx addresses.) The name for this
region, apparently after the Bronx River, first appeared in the Annexed District of the Bronx created in 1874 out
of part of Westchester County and was continued in the Borough of the Bronx, which included a larger annexation from
Westchester County in 1898. The use of the definite article is attributed to the style of referring to rivers. Another
explanation for the use of the definite article in the borough's name is that the original form of the name was a
possessive or collective one referring to the family, as in visiting The Broncks, The Bronck's or The Broncks'. The
development of the Bronx is directly connected to its strategic location between New England and New York (Manhattan).
Disputes over control of the bridges across the Harlem River plagued the period of British colonial rule. Kingsbridge, built
in 1693 where Broadway reached the Spuyten Duyvil Creek, was a possession of Frederick Philipse, lord of Philipse
Manor. The tolls were resented by local farmers on both sides of the creek. In 1759, the farmers led by Jacobus Dyckman
and Benjamin Palmer built a "free bridge" across the Harlem River which led to the abandonment of tolls altogether.
The territory now contained within Bronx County was originally part of Westchester County, one of the 12 original
counties of the English Province of New York. The present Bronx County was contained in the town of Westchester and
parts of the towns of Yonkers, Eastchester, and Pelham. In 1846, a new town, West Farms, was created by division
of Westchester; in turn, in 1855, the town of Morrisania was created from West Farms. In 1873, the town of Kingsbridge
(roughly corresponding to the modern Bronx neighborhoods of Kingsbridge, Riverdale, and Woodlawn) was established
within the former borders of Yonkers. The consolidation of the Bronx into New York City proceeded in two stages.
In 1873, the state legislature annexed Kingsbridge, West Farms and Morrisania to New York, effective in 1874; the
three towns were abolished in the process. In 1895, three years before New York's consolidation with Brooklyn, Queens
and Staten Island, the whole of the territory east of the Bronx River, including the Town of Westchester (which had
voted in 1894 against consolidation) and portions of Eastchester and Pelham, were annexed to the city. City Island,
a nautical community, voted to join the city in 1896. The history of the Bronx during the 20th century may be divided
into four periods: a boom period during 1900–29, when the population grew sixfold from 200,000 in 1900 to 1.3 million
in 1930; the Great Depression and post–World War II years, which saw a slowing of growth leading into an eventual
decline; the hard times of 1950–85, when the Bronx declined from a predominantly moderate-income to a predominantly
lower-income area with high rates of violent crime and poverty; and an economic and developmental resurgence that
began in the late 1980s and continues today. The Bronx underwent rapid urban
growth after World War I. Extensions of the New York City Subway contributed to the increase in population as thousands
of immigrants came to the Bronx, resulting in a major boom in residential construction. Among these groups, many
Irish Americans, Italian Americans and especially Jewish Americans settled here. In addition, French, German, Polish
and other immigrants moved into the borough. The Jewish population also increased notably during this time. In 1937,
according to Jewish organizations, 592,185 Jews lived in The Bronx (43.9% of the borough's population), while only
54,000 Jews lived in the borough in 2011. Many synagogues still stand in the Bronx, but most have been converted
to other uses. One suggested cause of the borough's mid-century decline may have been a reduction in the real-estate listings and property-related financial services
(such as mortgage loans or insurance policies) offered in some areas of the Bronx — a process known as redlining.
Others have suggested a "planned shrinkage" of municipal services, such as fire-fighting. There was also much debate
as to whether rent control laws had made it less profitable (or more costly) for landlords to maintain existing buildings
with their existing tenants than to abandon or destroy those buildings. In the 1970s, the Bronx was plagued by a
wave of arson. The burning of buildings was predominantly in the poorest communities, like the South Bronx. The most
common explanation of what occurred was that landlords decided to burn their low property-value buildings and take
the insurance money as it was more lucrative to get insurance money than to refurbish or sell a building in a severely
distressed area. The Bronx became identified with a high rate of poverty and unemployment, which was mainly a persistent
problem in the South Bronx. Since the late 1980s, significant development has occurred in the Bronx, first stimulated
by the city's "Ten-Year Housing Plan" and community members working to rebuild the social, economic and environmental
infrastructure by creating affordable housing. Groups affiliated with churches in the South Bronx erected the Nehemiah
Homes with about 1,000 units. Melrose Commons, an endeavor of the grassroots organization Nos Quedamos, began
to rebuild areas in the South Bronx. The IRT White Plains Road Line (2 and 5 trains) began to show an increase in riders.
Chains such as Marshalls, Staples, and Target opened stores in the Bronx. More bank branches opened in the Bronx
as a whole (rising from 106 in 1997 to 149 in 2007), although not primarily in poor or minority neighborhoods, while
the Bronx still has fewer branches per person than other boroughs. In 1997, the Bronx was designated an All America
City by the National Civic League, acknowledging its comeback from the decline of the mid-century. In 2006, The New
York Times reported that "construction cranes have become the borough's new visual metaphor, replacing the window
decals of the 1980s in which pictures of potted plants and drawn curtains were placed in the windows of abandoned
buildings." The borough has experienced substantial new building construction since 2002. Between 2002 and June 2007,
33,687 new units of housing were built or were under way and $4.8 billion has been invested in new housing. In the
first six months of 2007 alone total investment in new residential development was $965 million and 5,187 residential
units were scheduled to be completed. Much of the new development is springing up in formerly vacant lots across
the South Bronx. Several boutique and chain hotels have opened in recent years in the South Bronx; in addition, a
La Quinta Inn has been proposed for the Mott Haven waterfront. The Kingsbridge Armory, often cited as the largest
armory in the world, is scheduled for redevelopment as the Kingsbridge National Ice Center. Under consideration for
future development is the construction of a platform over the New York City Subway's Concourse Yard adjacent to Lehman
College. The construction would permit approximately 2,000,000 square feet (190,000 m2) of development and would
cost US$350–500 million. The Bronx is almost entirely situated on the North American mainland. The Hudson River separates
the Bronx on the west from Alpine, Tenafly and Englewood Cliffs in Bergen County, New Jersey; the Harlem River separates
it from the island of Manhattan to the southwest; the East River separates it from Queens to the southeast; and to
the east, Long Island Sound separates it from Nassau County in western Long Island. Directly north of the Bronx are
(from west to east) the adjoining Westchester County communities of Yonkers, Mount Vernon, Pelham Manor and New Rochelle.
(There is also a short southern land boundary with Marble Hill in the Borough of Manhattan, over the filled-in former
course of the Spuyten Duyvil Creek. Marble Hill's postal ZIP code, telephonic Area Code and fire service, however,
are shared with the Bronx and not Manhattan.) The Bronx's highest elevation at 280 feet (85 m) is in the northwest
corner, west of Van Cortlandt Park and in the Chapel Farm area near the Riverdale Country School. The opposite (southeastern)
side of the Bronx has four large low peninsulas or "necks" of low-lying land that jut into the waters of the East
River and were once salt marsh: Hunt's Point, Clason's Point, Screvin's Neck and Throg's Neck. Further up the coastline,
Rodman's Neck lies between Pelham Bay Park in the northeast and City Island. The Bronx's irregular shoreline extends
for 75 square miles (194 km2). The northern side of the borough includes the largest park in New York City—Pelham
Bay Park, which includes Orchard Beach—and the fourth largest, Van Cortlandt Park, which is west of Woodlawn Cemetery
and borders Yonkers. Also in the northern Bronx, Wave Hill, the former estate of George W. Perkins—known for a historic
house, gardens, changing site-specific art installations and concerts—overlooks the New Jersey Palisades from a promontory
on the Hudson in Riverdale. Nearer the borough's center, and along the Bronx River, is Bronx Park; its northern end
houses the New York Botanical Garden, which preserves the last patch of the original hemlock forest that once covered
the entire county, and its southern end the Bronx Zoo, the largest urban zoological gardens in the United States.
Just south of Van Cortlandt Park is the Jerome Park Reservoir, surrounded by 2 miles (3 km) of stone walls and bordering
several small parks in the Bedford Park neighborhood; the reservoir was built in the 1890s on the site of the former
Jerome Park Racetrack. Further south is Crotona Park, home to a 3.3-acre (1.3 ha) lake, 28 species of trees, and
a large swimming pool. The land for these parks, and many others, was bought by New York City in 1888, while land
was still open and inexpensive, in anticipation of future needs and future pressures for development. East of the
Bronx River, the borough is relatively flat and includes four large low peninsulas, or 'necks,' of low-lying land
which jut into the waters of the East River and were once saltmarsh: Hunts Point, Clason's Point, Screvin's Neck
(Castle Hill Point) and Throgs Neck. The East Bronx has older tenement buildings, low income public housing complexes,
and multifamily homes, as well as single family homes. It includes New York City's largest park: Pelham Bay Park
along the Westchester-Bronx border. The western parts of the Bronx are hillier and are dominated by a series of parallel
ridges, running south to north. The West Bronx has older apartment buildings, low income public housing complexes,
multifamily homes in its lower income areas as well as larger single family homes in more affluent areas such as
Riverdale and Fieldston. It includes New York City's fourth largest park: Van Cortlandt Park along the Westchester-Bronx
border. The Grand Concourse, a wide boulevard, runs through it, north to south. Like other neighborhoods in New York
City, the South Bronx has no official boundaries. The name has been used to represent poverty in the Bronx and applied
to progressively more northern places so that by the 2000s Fordham Road was often used as a northern limit. The Bronx
River more consistently forms an eastern boundary. The South Bronx has many high-density apartment buildings, low
income public housing complexes, and multi-unit homes. The South Bronx is home to the Bronx County Courthouse, Borough
Hall, and other government buildings, as well as Yankee Stadium. The Cross Bronx Expressway bisects it, east to west.
The South Bronx has some of the poorest neighborhoods in the country, as well as very high crime areas. There are
three primary shopping centers in the Bronx: The Hub, Gateway Center and Southern Boulevard. The Hub–Third Avenue
Business Improvement District (B.I.D.), in The Hub, is the retail heart of the South Bronx, located where four roads
converge: East 149th Street, Willis, Melrose and Third Avenues. It is primarily located inside the neighborhood of
Melrose but also lines the northern border of Mott Haven. The Hub has been called "the Broadway of the Bronx", being
likened to the real Broadway in Manhattan and the northwestern Bronx. It is the site of both maximum traffic and
architectural density. In configuration, it resembles a miniature Times Square, a spatial "bow-tie" created by the
geometry of the street. The Hub is part of Bronx Community Board 1. The Gateway Center at Bronx Terminal Market,
in the West Bronx, is a shopping center that encompasses less than one million square feet of retail space, built
on a 17 acres (7 ha) site that formerly held the Bronx Terminal Market, a wholesale fruit and vegetable market as
well as the former Bronx House of Detention, south of Yankee Stadium. The $500 million shopping center, which was
completed in 2009, saw the construction of two main buildings and two smaller buildings, one new and the other a renovation
of an existing building that was part of the original market. The two main buildings are linked by a six-level garage
for 2,600 cars. The center earned a LEED "Silver" designation for its design. The Bronx street grid is
irregular. Like the northernmost part of upper Manhattan, the West Bronx's hilly terrain leaves a relatively free-style
street grid. Much of the West Bronx's street numbering carries over from upper Manhattan, but does not match it exactly;
East 132nd Street is the lowest numbered street in the Bronx. This dates from the mid-19th century when the southwestern
area of Westchester County, west of the Bronx River, was incorporated into New York City and known as the Northside.
According to the 2010 Census, 53.5% of the Bronx's population was of Hispanic, Latino, or Spanish origin (they may be
of any race); 30.1% non-Hispanic Black or African American, 10.9% of the population was non-Hispanic White, 3.4%
non-Hispanic Asian, 0.6% from some other race (non-Hispanic) and 1.2% of two or more races (non-Hispanic). The U.S.
Census Bureau considers the Bronx to be the most diverse area in the country. There is an 89.7 percent chance that any two
residents, chosen at random, would be of different race or ethnicity. As of 2010, 46.29% (584,463) of Bronx residents
aged five and older spoke Spanish at home, while 44.02% (555,767) spoke English, 2.48% (31,361) African languages,
0.91% (11,455) French, 0.90% (11,355) Italian, 0.87% (10,946) various Indic languages, 0.70% (8,836) other Indo-European
languages, and Chinese was spoken at home by 0.50% (6,610) of the population over the age of five. In total, 55.98%
(706,783) of the Bronx's population age five and older spoke a language at home other than English. A Garifuna-speaking
community from Honduras and Guatemala also makes the Bronx its home. According to the 2009 American Community Survey,
White Americans of both Hispanic and non-Hispanic origin represented over one-fifth (22.9%) of the Bronx's population.
However, non-Hispanic whites formed under one-eighth (12.1%) of the population, down from 34.4% in 1980. Out of all
five boroughs, the Bronx has the lowest number and percentage of white residents. 320,640 whites called the Bronx
home, of which 168,570 were non-Hispanic whites. The majority of the non-Hispanic European American population is
of Italian and Irish descent. People of Italian descent numbered over 55,000 individuals and made up 3.9% of the
population. People of Irish descent numbered over 43,500 individuals and made up 3.1% of the population. German Americans
and Polish Americans made up 1.4% and 0.8% of the population respectively. At the 2009 American Community Survey,
Black Americans made up the second largest group in the Bronx after Hispanics and Latinos. Blacks of both Hispanic and
non-Hispanic origin represented over one-third (35.4%) of the Bronx's population. Blacks of non-Hispanic origin made
up 30.8% of the population. Over 495,200 blacks resided in the borough, of which 430,600 were non-Hispanic blacks.
Over 61,000 people identified themselves as "Sub-Saharan African" in the survey, making up 4.4% of the population.
In 2009, Hispanic and Latino Americans represented 52.0% of the Bronx's population. Puerto Ricans represented 23.2%
of the borough's population. Over 72,500 Mexicans lived in the Bronx, and they formed 5.2% of the population. Cubans
numbered over 9,640 members and formed 0.7% of the population. In addition, over 319,000 people were of various Hispanic
and Latino groups, such as Dominican, Salvadoran, and so on. These groups collectively represented 22.9% of the population.
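The Census Bureau's diversity figure cited earlier (an 89.7 percent chance that two randomly chosen residents differ in race or ethnicity) is a Simpson-style index: one minus the sum of the squared population shares of each group. A minimal illustrative sketch follows; note that the Bureau computes its published figure over much finer race-by-ethnicity categories, so plugging in only the six coarse 2010 buckets quoted above yields a lower value, roughly 61%:

```python
# Simpson-style diversity index: the probability that two residents chosen at
# random (with replacement) fall into different race/ethnicity categories.
# Illustrative sketch only -- the Census Bureau's published 89.7% figure for
# the Bronx uses far finer race-by-ethnicity categories than the six coarse
# 2010 buckets quoted in the text.

def diversity_index(shares):
    """Return 1 - sum(p_i^2) for a list of group population shares."""
    return 1.0 - sum(p * p for p in shares)

# Coarse 2010 Census buckets for the Bronx (shares of total population):
# Hispanic/Latino, non-Hispanic Black, non-Hispanic White, non-Hispanic
# Asian, some other race, and two or more races.
bronx_2010 = [0.535, 0.301, 0.109, 0.034, 0.006, 0.012]
print(round(diversity_index(bronx_2010), 3))  # ~0.61, well below 0.897
```

The gap between ~0.61 and 0.897 shows how much of the borough's measured diversity lives inside these coarse buckets, e.g. the many distinct Hispanic-origin groups counted separately by the Bureau.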
Asian Americans are a small but sizable minority in the borough. Roughly 49,600 Asians make up 3.6% of the population.
Roughly 13,600 Indians call the Bronx home, along with 9,800 Chinese, 6,540 Filipinos, 2,260 Vietnamese, 2,010 Koreans,
and 1,100 Japanese. Multiracial Americans are also a sizable minority in the Bronx. People of multiracial heritage
number over 41,800 individuals and represent 3.0% of the population. People of mixed Caucasian and African American
heritage number over 6,850 members and form 0.5% of the population. People of mixed Caucasian and Native American
heritage number over 2,450 members and form 0.2% of the population. People of mixed Caucasian and Asian heritage
number over 880 members and form 0.1% of the population. People of mixed African American and Native American heritage
number over 1,220 members and form 0.1% of the population. The office of Borough President was created in the consolidation
of 1898 to balance centralization with local authority. Each borough president had a powerful administrative role
derived from having a vote on the New York City Board of Estimate, which was responsible for creating and approving
the city's budget and proposals for land use. In 1989 the Supreme Court of the United States declared the Board of
Estimate unconstitutional on the grounds that Brooklyn, the most populous borough, had no greater effective representation
on the Board than Staten Island, the least populous borough, a violation of the Fourteenth Amendment's Equal Protection
Clause pursuant to the high court's 1964 "one man, one vote" decision. Until March 1, 2009, the Borough President
of the Bronx was Adolfo Carrión Jr., elected as a Democrat in 2001 and 2005 before retiring early to direct the White
House Office of Urban Affairs Policy. His successor, Democratic New York State Assembly member Rubén Díaz, Jr., who
won a special election on April 21, 2009 by a vote of 86.3% (29,420) on the "Bronx Unity" line to 13.3% (4,646) for
the Republican district leader Anthony Ribustello on the "People First" line, became Borough President on May 1.
In the Presidential primary elections of February 5, 2008, Sen. Clinton won 61.2% of the Bronx's 148,636 Democratic
votes against 37.8% for Barack Obama and 1.0% for the other four candidates combined (John Edwards, Dennis Kucinich,
Bill Richardson and Joe Biden). On the same day, John McCain won 54.4% of the borough's 5,643 Republican votes, Mitt
Romney 20.8%, Mike Huckabee 8.2%, Ron Paul 7.4%, Rudy Giuliani 5.6%, and the other candidates (Fred Thompson, Duncan
Hunter and Alan Keyes) 3.6% between them. The Bronx has supported the Democratic Party's nominee for President in
every election since 1928, starting with a 2-1 vote for the unsuccessful Al Smith that year, followed by four 2-1 votes for
the successful Franklin D. Roosevelt. (Both had been Governors of New York, but Republican former Gov. Thomas E.
Dewey won only 28% of the Bronx's vote in 1948 against 55% for Pres. Harry Truman, the winning Democrat, and 17%
for Henry A. Wallace of the Progressives. It was only 32 years earlier, by contrast, that another Republican former
Governor who narrowly lost the Presidency, Charles Evans Hughes, had won 42.6% of the Bronx's 1916 vote against Democratic
President Woodrow Wilson's 49.8% and Socialist candidate Allan Benson's 7.3%.) The Bronx has often shown striking
differences from other boroughs in elections for Mayor. The only Republican to carry the Bronx since 1914 was Fiorello
La Guardia in 1933, 1937 and 1941 (and in the latter two elections, only because his 30-32% vote on the American
Labor Party line was added to 22-23% as a Republican). The Bronx was thus the only borough not carried by the successful
Republican re-election campaigns of Mayors Rudolph Giuliani in 1997 and Michael Bloomberg in 2005. The anti-war Socialist
campaign of Morris Hillquit in the 1917 mayoral election won over 31% of the Bronx's vote, putting him second and
well ahead of the 20% won by the incumbent pro-war Fusion Mayor John P. Mitchel, who came in second (ahead of Hillquit)
everywhere else and outpolled Hillquit city-wide by 23.2% to 21.7%. Education in the Bronx is provided by a large
number of public and private institutions, many of which draw students who live beyond the Bronx. The New York City
Department of Education manages public noncharter schools in the borough. In 2000, public schools enrolled nearly
280,000 of the Bronx's residents over 3 years old (out of 333,100 enrolled in all pre-college schools). There are
also several public charter schools. Private schools range from elite independent schools to religiously affiliated
schools run by the Roman Catholic Archdiocese of New York and Jewish organizations. Educational attainment: In 2000,
according to the U.S. Census, out of the nearly 800,000 people in the Bronx who were then at least 25 years old,
62.3% had graduated from high school and 14.6% held a bachelor's or higher college degree. These percentages were
lower than those for New York's other boroughs, which ranged from 68.8% (Brooklyn) to 82.6% (Staten Island) for high
school graduates over 24, and from 21.8% (Brooklyn) to 49.4% (Manhattan) for college graduates. (The respective state
and national percentages were [NY] 79.1% & 27.4% and [US] 80.4% & 24.4%.) Many public high schools are located in
the borough including the elite Bronx High School of Science, Celia Cruz Bronx High School of Music, DeWitt Clinton
High School, High School for Violin and Dance, Bronx Leadership Academy 2, Bronx International High School, the School
for Excellence, the Morris Academy for Collaborative Study, Wings Academy for young adults, The Bronx School for
Law, Government and Justice, Validus Preparatory Academy, The Eagle Academy For Young Men, Bronx Expeditionary Learning
High School, Bronx Academy of Letters, Herbert H. Lehman High School and High School of American Studies. The Bronx
is also home to three of New York City's most prestigious private, secular schools: Fieldston, Horace Mann, and Riverdale
Country School. In the 1990s, New York City began closing the large, public high schools in the Bronx and replacing
them with small high schools. Among the reasons cited for the changes were poor graduation rates and concerns about
safety. Schools that have been closed or reduced in size include John F. Kennedy, James Monroe, Taft, Theodore Roosevelt,
Adlai Stevenson, Evander Childs, Christopher Columbus, Morris, Walton, and South Bronx High Schools. More recently
the City has started phasing out large middle schools, also replacing them with smaller schools. The Bronx's evolution
from a hot bed of Latin jazz to an incubator of hip hop was the subject of an award-winning documentary, produced
by City Lore and broadcast on PBS in 2006, "From Mambo to Hip Hop: A South Bronx Tale". Hip Hop first emerged in
the South Bronx in the early 1970s. The New York Times has identified 1520 Sedgwick Avenue, "an otherwise unremarkable
high-rise just north of the Cross Bronx Expressway and hard along the Major Deegan Expressway", as a starting point,
where DJ Kool Herc presided over parties in the community room. Beginning with the advent of beat-match DJing, in
which Bronx DJs including Grandmaster Flash, Afrika Bambaataa and DJ Kool Herc extended the breaks
of funk records, a major new musical genre emerged that sought to isolate the percussion breaks of hit funk, disco
and soul songs. As hip hop's popularity grew, performers began speaking ("rapping") in sync with the beats, and became
known as MCs or emcees. The Herculoids, made up of Herc, Coke La Rock, and DJ Clark Kent, were the earliest to gain
major fame. The Bronx is referred to in hip-hop slang as "The Boogie Down Bronx", or just "The Boogie Down". This
was hip-hop pioneer KRS-One's inspiration for his thought-provoking group BDP, or Boogie Down Productions, which
included DJ Scott La Rock. Newer hip hop artists from the Bronx include Big Pun, Lord Toriq and Peter Gunz, Camp
Lo, Swizz Beatz, Drag-On, Fat Joe, Terror Squad and Corey Gunz. The Bronx is the home of the New York Yankees of
Major League Baseball. The original Yankee Stadium opened in 1923 on 161st Street and River Avenue, a year that saw
the Yankees bring home their first of 27 World Series Championships. With the famous facade, the short right field
porch and Monument Park, Yankee Stadium has been home to many of baseball's greatest players including Babe Ruth,
Lou Gehrig, Joe DiMaggio, Whitey Ford, Yogi Berra, Mickey Mantle, Reggie Jackson, Derek Jeter and Mariano Rivera.
The Bronx is home to several Off-Off-Broadway theaters, many staging new works by immigrant playwrights from Latin
America and Africa. The Pregones Theater, which produces Latin American work, opened a new 130-seat theater in 2005
on Walton Avenue in the South Bronx. Some artists from elsewhere in New York City have begun to converge on the area,
and housing prices have nearly quadrupled in the area since 2002. However, rising prices directly correlate to a housing
shortage across the city and the entire metro area. The Bronx Museum of the Arts, founded in 1971, exhibits 20th
century and contemporary art through its central museum space and 11,000 square feet (1,000 m2) of galleries. Many
of its exhibitions are on themes of special interest to the Bronx. Its permanent collection features more than 800
works of art, primarily by artists from Africa, Asia and Latin America, including paintings, photographs, prints,
drawings, and mixed media. The museum was temporarily closed in 2006 while it underwent a major expansion designed
by the architectural firm Arquitectonica. The Bronx has also become home to a peculiar poetic tribute, in the form
of the Heinrich Heine Memorial, better known as the Lorelei Fountain from one of Heine's best-known works (1838).
After Heine's German birthplace of Düsseldorf had rejected, allegedly for anti-Semitic motives, a centennial monument
to the radical German-Jewish poet (1797–1856), his incensed German-American admirers, including Carl Schurz, started
a movement to place one instead in Midtown Manhattan, at Fifth Avenue and 59th Street. However, this intention was
thwarted by a combination of ethnic antagonism, aesthetic controversy and political struggles over the institutional
control of public art. In 1899, the memorial, by the Berlin sculptor Ernst Gustav Herter (1846–1917), finally came
to rest, although subject to repeated vandalism, in the Bronx, at 164th Street and the Grand Concourse, or Joyce
Kilmer Park near today's Yankee Stadium. (In 1999, it was moved to 161st Street and the Concourse.) In 2007, Christopher
Gray of The New York Times described it as "a writhing composition in white Tyrolean marble depicting Lorelei, the
mythical German figure, surrounded by mermaids, dolphins and seashells." The peninsular borough's maritime heritage
is acknowledged in several ways. The City Island Historical Society and Nautical Museum occupies a former public school
designed by the New York City school system's turn-of-the-last-century master architect C. B. J. Snyder. The state's
Maritime College in Fort Schuyler (on the southeastern shore) houses the Maritime Industry Museum. In addition, the
Harlem River is reemerging as "Scullers' Row" due in large part to the efforts of the Bronx River Restoration Project,
a joint public-private endeavor of the city's parks department. Canoeing and kayaking on the borough's namesake river
have been promoted by the Bronx River Alliance. The river is also straddled by the New York Botanical Garden, its
neighbor, the Bronx Zoo, and a little further south, on the west shore, Bronx River Art Center. The Bronx has several
local newspapers, including The Bronx News, Parkchester News, City News, The Riverdale Press, Riverdale Review, The
Bronx Times Reporter, Inner City Press (which now has more of a focus on national issues) and Co-Op City Times. Four
non-profit news outlets, Norwood News, Mount Hope Monitor, Mott Haven Herald and The Hunts Point Express serve the
borough's poorer communities. The editor and co-publisher of The Riverdale Press, Bernard Stein, won the Pulitzer
Prize for Editorial Writing for his editorials about Bronx and New York City issues in 1998. (Stein graduated from
the Bronx High School of Science in 1959.) The City of New York has an official television station run by the NYC
Media Group and broadcasting from Bronx Community College, and Cablevision operates News 12 The Bronx, both of which
feature programming based in the Bronx. Co-op City was the first area in the Bronx, and the first in New York beyond
Manhattan, to have its own cable television provider. The local public-access television station BronxNet originates
from Herbert H. Lehman College, the borough's only four-year CUNY school, and provides government-access television
(GATV) public affairs programming in addition to programming produced by Bronx residents. Mid-20th century movies
set in the Bronx portrayed densely settled, working-class, urban culture. Hollywood films such as From This Day Forward
(1946), set in Highbridge, occasionally delved into Bronx life. Paddy Chayefsky's Academy Award-winning Marty was
the most notable examination of working-class Bronx life. Bronx life was also explored by Chayefsky in his 1956 film
The Catered Affair; in the 1993 Robert De Niro/Chazz Palminteri film A Bronx Tale; in Spike Lee's 1999 movie Summer
of Sam, centered in an Italian-American Bronx community; in 1994's I Like It Like That, which takes place in the
predominantly Puerto Rican neighborhood of the South Bronx; and in Doughboys, the story of two Italian-American brothers
in danger of losing their bakery thanks to one brother's gambling debts. Starting in the 1970s, the Bronx often symbolized
violence, decay, and urban ruin. The wave of arson in the South Bronx in the 1960s and 1970s inspired the observation
that "The Bronx is burning": in 1974 it was the title of both a New York Times editorial and a BBC documentary film.
The line entered the pop-consciousness with Game Two of the 1977 World Series, when a fire broke out near Yankee
Stadium as the team was playing the Los Angeles Dodgers. Numerous fires had broken out in the Bronx prior
to this one. As the fire was captured on live television, announcer Howard Cosell is wrongly remembered to have
said something like, "There it is, ladies and gentlemen: the Bronx is burning". Historians of New York City frequently
point to Cosell's remark as an acknowledgement of both the city and the borough's decline. A feature-length documentary
film by Edwin Pagan, Bronx Burning, went into production in 2006, chronicling what led up to the numerous arson-for-insurance
fraud fires of the 1970s in the borough. Bronx gang life was depicted in the 1974 novel The Wanderers by Bronx native
Richard Price and the 1979 movie of the same name. They are set in the heart of the Bronx, showing apartment life
and the then-landmark Krums ice cream parlor. In the 1979 film The Warriors, the eponymous gang go to a meeting in
Van Cortlandt Park in the Bronx, and have to fight their way out of the borough and get back to Coney Island in Brooklyn.
A Bronx Tale (1993) depicts gang activities in the Belmont "Little Italy" section of the Bronx. The 2005 video game
adaptation of The Warriors features levels called Pelham, Tremont, and "Gunhill" (a play on the name Gun Hill Road). This theme
lends itself to the title of The Bronx Is Burning, an eight-part ESPN TV mini-series (2007) about the New York Yankees'
drive to win baseball's 1977 World Series. The TV series emphasizes the boisterous nature of the team, led by
manager Billy Martin, catcher Thurman Munson and outfielder Reggie Jackson, as well as the malaise of the Bronx and
New York City in general during that time, such as the blackout, the city's serious financial woes and near bankruptcy,
the arson for insurance payments, and the election of Ed Koch as mayor. The 1981 film Fort Apache, The Bronx is another
film that used the Bronx's gritty image for its storyline. The movie's title comes from "Fort Apache", the nickname
of the 41st Police Precinct in the South Bronx. Also from 1981 is the horror film Wolfen, which makes use
of the rubble of the Bronx as a home for werewolf-type creatures. Knights of the South Bronx, the true story of a teacher
who worked with disadvantaged children, is another film set in the Bronx, released in 2005. The Bronx was the
setting for the 1983 film Fuga dal Bronx, also known as Bronx Warriors 2 and Escape 2000, an Italian B-movie best
known for its appearance on the television series Mystery Science Theater 3000. The plot revolves around a sinister
construction corporation's plans to depopulate, destroy and redevelop the Bronx, and a band of rebels who are out
to expose the corporation's murderous ways and save their homes. The film is memorable for its almost incessant use
of the phrase, "Leave the Bronx!" Many of the movie's scenes were filmed in Queens, substituting as the Bronx. Rumble
in the Bronx was a 1995 Jackie Chan kung-fu film that further popularized the Bronx to international audiences.
Last Bronx, a 1996 Sega game, played on the bad reputation of the Bronx, lending its name to an alternate version of
post-bubble Tokyo where crime and gang warfare are rampant. Bronx native Nancy Savoca's 1989 comedy, True
Love, explores two Italian-American Bronx sweethearts in the days before their wedding. The film, which debuted Annabella
Sciorra and Ron Eldard as the betrothed couple, won the Grand Jury Prize at that year's Sundance Film Festival. The
CBS television sitcom Becker, 1998–2004, was more ambiguous. The show starred Ted Danson as Dr. John Becker, a doctor
who operated a small practice and was constantly annoyed by his patients, co-workers, friends, and practically everything
and everybody else in his world. It showed his everyday life as a doctor working in a small clinic in the Bronx.
Penny Marshall's 1990 film Awakenings, which was nominated for several Oscars, is based on neurologist Oliver Sacks'
1973 account of his psychiatric patients at Beth Abraham Hospital in the Bronx who were paralyzed by a form of encephalitis
but briefly responded to the drug L-dopa. Robin Williams played the physician; Robert De Niro was one of the patients
who emerged from a catatonic (frozen) state. The home of Williams' character was shot not far from Sacks' actual
City Island residence. A 1973 Yorkshire Television documentary and "A Kind of Alaska", a 1985 play by Harold Pinter,
were also based on Sacks' book. The Bronx has been featured significantly in fiction literature. All of the characters
in Herman Wouk's City Boy: The Adventures of Herbie Bookbinder (1948) live in the Bronx, and about half of the action
is set there. Kate Simon's Bronx Primitive: Portraits of a Childhood is directly autobiographical, a warm account
of a Polish-Jewish girl in an immigrant family growing up before World War II, and living near Arthur Avenue and
Tremont Avenue. In Jacob M. Appel's short story, "The Grand Concourse" (2007), a woman who grew up in the iconic
Lewis Morris Building returns to the Morrisania neighborhood with her adult daughter. Similarly, in Avery Corman's
book The Old Neighborhood (1980), an upper-middle class white protagonist returns to his birth neighborhood (Fordham
Road and the Grand Concourse), and learns that even though the folks are poor, Hispanic and African-American, they
are good people. By contrast, Tom Wolfe's Bonfire of the Vanities (1987) portrays a wealthy, white protagonist, Sherman
McCoy, getting lost off the Major Deegan Expressway in the South Bronx and having an altercation with locals. A substantial
piece of the last part of the book is set in the resulting riotous trial at the Bronx County Courthouse. However,
times change, and in 2007, the New York Times reported that "the Bronx neighborhoods near the site of Sherman's accident
are now dotted with townhouses and apartments." In the same article, the Reverend Al Sharpton (whose fictional analogue
in the novel is "Reverend Bacon") asserts that "twenty years later, the cynicism of The Bonfire of the Vanities is
as out of style as Tom Wolfe's wardrobe."
The financial crisis of 2007–2008 threatened the collapse of large financial institutions, which was prevented by the bailout of banks by national governments,
but stock markets still dropped worldwide. In many areas, the housing market also suffered, resulting in evictions,
foreclosures and prolonged unemployment. The crisis played a significant role in the failure of key businesses, declines
in consumer wealth estimated in trillions of U.S. dollars, and a downturn in economic activity leading to the 2008–2012
global recession and contributing to the European sovereign-debt crisis. The active phase of the crisis, which manifested
as a liquidity crisis, can be dated from August 9, 2007, when BNP Paribas terminated withdrawals from three hedge
funds citing "a complete evaporation of liquidity". The bursting of the U.S. housing bubble, which
peaked in 2006, caused the values of securities tied to U.S. real estate pricing to plummet, damaging financial institutions
globally. The financial crisis was triggered by a complex interplay of policies that encouraged home ownership, providing
easier access to loans for subprime borrowers, overvaluation of bundled subprime mortgages based on the theory that
housing prices would continue to escalate, questionable trading practices on behalf of both buyers and sellers, compensation
structures that prioritized short-term deal flow over long-term value creation, and a lack of adequate capital holdings
from banks and insurance companies to back the financial commitments they were making. Questions regarding bank solvency,
declines in credit availability and damaged investor confidence had an impact on global stock markets, where securities
suffered large losses during 2008 and early 2009. Economies worldwide slowed during this period, as credit tightened
and international trade declined. Governments and central banks responded with unprecedented fiscal stimulus, monetary
policy expansion and institutional bailouts. In the U.S., Congress passed the American Recovery and Reinvestment
Act of 2009. Many causes for the financial crisis have been suggested, with varying weight assigned by experts. The
U.S. Senate's Levin–Coburn Report concluded that the crisis was the result of "high risk, complex financial products;
undisclosed conflicts of interest; the failure of regulators, the credit rating agencies, and the market itself to
rein in the excesses of Wall Street." The Financial Crisis Inquiry Commission concluded that the financial crisis
was avoidable and was caused by "widespread failures in financial regulation and supervision", "dramatic failures
of corporate governance and risk management at many systemically important financial institutions", "a combination
of excessive borrowing, risky investments, and lack of transparency" by financial institutions, ill preparation and
inconsistent action by government that "added to the uncertainty and panic", a "systemic breakdown in accountability
and ethics", "collapsing mortgage-lending standards and the mortgage securitization pipeline", deregulation of over-the-counter
derivatives, especially credit default swaps, and "the failures of credit rating agencies" to correctly price risk.
The 1999 repeal of the Glass-Steagall Act effectively removed the separation between investment banks and depository
banks in the United States. Critics argued that credit rating agencies and investors failed to accurately price the
risk involved with mortgage-related financial products, and that governments did not adjust their regulatory practices
to address 21st-century financial markets. Research into the causes of the financial crisis has also focused on the
role of interest rate spreads. As part of the housing and credit booms, the number of financial agreements called
mortgage-backed securities (MBS) and collateralized debt obligations (CDO), which derived their value from mortgage
payments and housing prices, greatly increased. Such financial innovation enabled institutions and investors around
the world to invest in the U.S. housing market. As housing prices declined, major global financial institutions that
had borrowed and invested heavily in subprime MBS reported significant losses. Falling prices also resulted in homes
worth less than the mortgage loan, providing a financial incentive to enter foreclosure. The ongoing foreclosure
epidemic that began in late 2006 in the U.S. continues to drain wealth from consumers and erode the financial strength
of banking institutions. Defaults and losses on other loan types also increased significantly as the crisis expanded
from the housing market to other parts of the economy. Total losses are estimated in the trillions of U.S. dollars
globally. While the housing and credit bubbles were building, a series of factors caused the financial system to
both expand and become increasingly fragile, a process called financialization. U.S. Government policy from the 1970s
onward has emphasized deregulation to encourage business, which resulted in less oversight of activities and less
disclosure of information about new activities undertaken by banks and other evolving financial institutions. Thus,
policymakers did not immediately recognize the increasingly important role played by financial institutions such
as investment banks and hedge funds, also known as the shadow banking system. Some experts believe these institutions
had become as important as commercial (depository) banks in providing credit to the U.S. economy, but they were not
subject to the same regulations. These institutions, as well as certain regulated banks, had also assumed significant
debt burdens while providing the loans described above and did not have a financial cushion sufficient to absorb
large loan defaults or MBS losses. These losses impacted the ability of financial institutions to lend, slowing economic
activity. Concerns regarding the stability of key financial institutions drove central banks to provide funds to
encourage lending and restore faith in the commercial paper markets, which are integral to funding business operations.
Governments also bailed out key financial institutions and implemented economic stimulus programs, assuming significant
additional financial commitments. The U.S. Financial Crisis Inquiry Commission reported its findings in January 2011.
It concluded that "the crisis was avoidable and was caused by: widespread failures in financial regulation, including
the Federal Reserve’s failure to stem the tide of toxic mortgages; dramatic breakdowns in corporate governance including
too many financial firms acting recklessly and taking on too much risk; an explosive mix of excessive borrowing and
risk by households and Wall Street that put the financial system on a collision course with crisis; key policy makers
ill prepared for the crisis, lacking a full understanding of the financial system they oversaw; and systemic breaches
in accountability and ethics at all levels". During a period of tough competition between mortgage lenders for revenue
and market share, and when the supply of creditworthy borrowers was limited, mortgage lenders relaxed underwriting
standards and originated riskier mortgages to less creditworthy borrowers. In the view of some analysts, the relatively
conservative government-sponsored enterprises (GSEs) policed mortgage originators and maintained relatively high
underwriting standards prior to 2003. However, as market power shifted from securitizers to originators and as intense
competition from private securitizers undermined GSE power, mortgage standards declined and risky loans proliferated.
The worst loans were originated in 2004–2007, the years of the most intense competition between securitizers and
the lowest market share for the GSEs. The majority report of the Financial Crisis Inquiry Commission, written by
the six Democratic appointees, the minority report, written by three of the four Republican appointees, studies by Federal
Reserve economists, and the work of several independent scholars generally contend that government affordable housing
policy was not the primary cause of the financial crisis. Although they concede that governmental policies had some
role in causing the crisis, they contend that GSE loans performed better than loans securitized by private investment
banks, and performed better than some loans originated by institutions that held loans in their own portfolios. Paul
Krugman has even claimed that the GSEs never purchased subprime loans – a claim that is widely disputed. In his dissent
to the majority report of the Financial Crisis Inquiry Commission, American Enterprise Institute fellow Peter J.
Wallison stated his belief that the roots of the financial crisis can be traced directly and primarily to affordable
housing policies initiated by HUD in the 1990s and to massive risky loan purchases by government-sponsored entities
Fannie Mae and Freddie Mac. Later, based upon information in the SEC's December 2011 securities fraud case against
six ex-executives of Fannie and Freddie, Peter Wallison and Edward Pinto estimated that, in 2008, Fannie and Freddie
held 13 million substandard loans totaling over $2 trillion. In the early and mid-2000s, the Bush administration
called numerous times for investigation into the safety and soundness of the GSEs and their swelling portfolio of
subprime mortgages. On September 10, 2003, the House Financial Services Committee held a hearing at the urging of
the administration to assess safety and soundness issues and to review a recent report by the Office of Federal Housing
Enterprise Oversight (OFHEO) that had uncovered accounting discrepancies within the two entities. The hearings never
resulted in new legislation or formal investigation of Fannie Mae and Freddie Mac, as many of the committee members
refused to accept the report and instead rebuked OFHEO for its attempt at regulation. Some believe this was an
unheeded early warning of the systemic risk that the growing market in subprime mortgages posed to the U.S. financial
system. A 2000 United States Department of the Treasury study of lending trends for 305 cities from 1993
to 1998 showed that $467 billion of mortgage lending was made by Community Reinvestment Act (CRA)-covered lenders
into low and mid level income (LMI) borrowers and neighborhoods, representing 10% of all U.S. mortgage lending during
the period. The majority of these were prime loans. Sub-prime loans made by CRA-covered institutions constituted
a 3% market share of LMI loans in 1998, but in the run-up to the crisis, fully 25% of all sub-prime lending occurred
at CRA-covered institutions and another 25% of sub-prime loans had some connection with CRA. However, a 2009 analysis
by the Federal Reserve Bank of Dallas concluded that the CRA was not responsible for the mortgage
loan crisis, pointing out that CRA rules have been in place since 1995 whereas the poor lending emerged only a decade
later. Furthermore, most sub-prime loans were not made to the LMI borrowers targeted by the CRA, especially in the
years 2005–2006 leading up to the crisis. Nor did it find any evidence that lending under the CRA rules increased
delinquency rates or that the CRA indirectly influenced independent mortgage lenders to ramp up sub-prime lending.
To other analysts the delay between CRA rule changes (in 1995) and the explosion of subprime lending is not surprising,
and does not exonerate the CRA. They contend that there were two, connected causes to the crisis: the relaxation
of underwriting standards in 1995 and the ultra-low interest rates initiated by the Federal Reserve after the terrorist
attack on September 11, 2001. Both causes had to be in place before the crisis could take place. Critics also point
out that publicly announced CRA loan commitments were massive, totaling $4.5 trillion in the years between 1994 and
2007. They also argue that the Federal Reserve’s classification of CRA loans as “prime” is based on the faulty and
self-serving assumption that high-interest-rate loans (3 percentage points over average) equal “subprime” loans.
Others have pointed out that there were not enough of these loans made to cause a crisis of this magnitude. In an
article in Portfolio Magazine, Michael Lewis spoke with one trader who noted that "There weren’t enough Americans
with [bad] credit taking out [bad loans] to satisfy investors' appetite for the end product." Essentially, investment
banks and hedge funds used financial innovation to enable large wagers to be made, far beyond the actual value of
the underlying mortgage loans, using derivatives called credit default swaps, collateralized debt obligations and
synthetic CDOs. Countering Krugman, Peter J. Wallison wrote: "It is not true that every bubble—even a large bubble—has
the potential to cause a financial crisis when it deflates." Wallison notes that other developed countries had "large
bubbles during the 1997–2007 period" but "the losses associated with mortgage delinquencies and defaults when these
bubbles deflated were far lower than the losses suffered in the United States when the 1997–2007 [bubble] deflated."
According to Wallison, the reason the U.S. residential housing bubble (as opposed to other types of bubbles) led
to financial crisis was that it was supported by a huge number of substandard loans – generally with low or no downpayments.
Krugman's contention (that the growth of a commercial real estate bubble indicates that U.S. housing policy was not
the cause of the crisis) is challenged by additional analysis. After researching the default of commercial loans
during the financial crisis, Xudong An and Anthony B. Sanders reported (in December 2010): "We find limited evidence
that substantial deterioration in CMBS [commercial mortgage-backed securities] loan underwriting occurred prior to
the crisis." Other analysts support the contention that the crisis in commercial real estate and related lending
took place after the crisis in residential real estate. Business journalist Kimberly Amadeo reports: "The first signs
of decline in residential real estate occurred in 2006. Three years later, commercial real estate started feeling
the effects." Real estate attorney and CPA Denice A. Gierach made a similar observation. In a Peabody Award-winning program, NPR correspondents
argued that a "Giant Pool of Money" (represented by $70 trillion in worldwide fixed income investments) sought higher
yields than those offered by U.S. Treasury bonds early in the decade. This pool of money had roughly doubled in size
from 2000 to 2007, yet the supply of relatively safe, income generating investments had not grown as fast. Investment
banks on Wall Street answered this demand with products such as the mortgage-backed security and the collateralized
debt obligation that were assigned safe ratings by the credit rating agencies. The collateralized debt obligation
in particular enabled financial institutions to obtain investor funds to finance subprime and other lending, extending
or increasing the housing bubble and generating large fees. This essentially places cash payments from multiple mortgages
or other debt obligations into a single pool from which specific securities draw in a specific sequence of priority.
Those securities first in line received investment-grade ratings from rating agencies. Securities with lower priority
had lower credit ratings but theoretically a higher rate of return on the amount invested. By September 2008, average
U.S. housing prices had declined by over 20% from their mid-2006 peak. As prices declined, borrowers with adjustable-rate
mortgages could not refinance to avoid the higher payments associated with rising interest rates and began to default.
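The pooling-and-priority mechanism described above, in which securities draw from a single pool of mortgage cash flows in a fixed order of seniority, can be sketched as a toy payment waterfall. This is illustrative only; real deals have far more elaborate terms, and the function name and figures here are hypothetical:

```python
def waterfall(pool_cash, tranche_claims):
    """Distribute one period's pool cash to tranches in order of seniority.

    pool_cash: cash collected from the underlying mortgages this period.
    tranche_claims: amount owed to each tranche, most senior first.
    Returns the payment each tranche actually receives.
    """
    payments = []
    for owed in tranche_claims:
        paid = min(owed, pool_cash)  # senior claims are satisfied first
        payments.append(paid)
        pool_cash -= paid            # whatever remains flows down the stack
    return payments

# If the pool collects only 70 against claims of [50, 30, 20], the senior
# tranche is paid in full, the mezzanine takes a partial loss, and the
# most junior tranche gets nothing.
print(waterfall(70, [50, 30, 20]))  # [50, 20, 0]
```

This ordering is why the securities first in line could receive investment-grade ratings: they lose money only after every junior tranche has been wiped out.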
During 2007, lenders began foreclosure proceedings on nearly 1.3 million properties, a 79% increase over 2006. This
increased to 2.3 million in 2008, an 81% increase vs. 2007. By August 2008, 9.2% of all U.S. mortgages outstanding
were either delinquent or in foreclosure. By September 2009, this had risen to 14.4%. Lower interest rates encouraged
borrowing. From 2000 to 2003, the Federal Reserve lowered the federal funds rate target from 6.5% to 1.0%. This was
done to soften the effects of the collapse of the dot-com bubble and the September 2001 terrorist attacks, as well
as to combat a perceived risk of deflation. As early as 2002 it was apparent that credit was fueling housing instead
of business investment, and some economists went so far as to advocate that the Fed "needs to create a housing bubble
to replace the Nasdaq bubble". Moreover, empirical studies using data from advanced countries show that excessive
credit growth contributed greatly to the severity of the crisis. Bernanke explained that between 1996 and 2004, the
U.S. current account deficit increased by $650 billion, from 1.5% to 5.8% of GDP. Financing these deficits required
the country to borrow large sums from abroad, much of it from countries running trade surpluses. These were mainly
the emerging economies in Asia and oil-exporting nations. The balance of payments identity requires that a country
(such as the U.S.) running a current account deficit also have a capital account (investment) surplus of the same
amount. Hence large and growing amounts of foreign funds (capital) flowed into the U.S. to finance its imports. The
Fed then raised the federal funds rate significantly between July 2004 and July 2006. This contributed to an increase
in 1-year and 5-year adjustable-rate mortgage (ARM) rates, making ARM interest rate resets more expensive for homeowners.
This may have also contributed to the deflating of the housing bubble, as asset prices generally move inversely to
interest rates, and it became riskier to speculate in housing. U.S. housing and financial assets dramatically declined
in value after the housing bubble burst. Testimony given to the Financial Crisis Inquiry Commission by Richard M.
Bowen III on events during his tenure as the Business Chief Underwriter for Correspondent Lending in the Consumer
Lending Group for Citigroup (where he was responsible for over 220 professional underwriters) suggests that by the
final years of the U.S. housing bubble (2006–2007), the collapse of mortgage underwriting standards was endemic.
His testimony stated that by 2006, 60% of mortgages purchased by Citi from some 1,600 mortgage companies were "defective"
(were not underwritten to policy, or did not contain all policy-required documents) – this, despite the fact that
each of these 1,600 originators was contractually responsible (certified via representations and warranties) that
its mortgage originations met Citi's standards. Moreover, during 2007, "defective mortgages (from mortgage originators
contractually bound to perform underwriting to Citi's standards) increased... to over 80% of production". In separate
testimony to the Financial Crisis Inquiry Commission, officers of Clayton Holdings—the largest residential loan due diligence
and securitization surveillance company in the United States and Europe—testified that Clayton's review of over 900,000
mortgages issued from January 2006 to June 2007 revealed that only 54% of the loans met their originators’ underwriting
standards. The analysis (conducted on behalf of 23 investment and commercial banks, including 7 "too big to fail"
banks) additionally showed that 28% of the sampled loans did not meet the minimal standards of any issuer. Clayton's
analysis further showed that 39% of these loans (i.e. those not meeting any issuer's minimal underwriting standards)
were subsequently securitized and sold to investors. Predatory lending refers to the practice of unscrupulous lenders
enticing borrowers to enter into "unsafe" or "unsound" secured loans for inappropriate purposes. A classic bait-and-switch
method was used by Countrywide Financial, advertising low interest rates for home refinancing. Such loans were written
into extensively detailed contracts, and swapped for more expensive loan products on the day of closing. Whereas
the advertisement might state that 1% or 1.5% interest would be charged, the consumer would be put into an adjustable
rate mortgage (ARM) in which the interest charged would be greater than the amount of interest paid. This created
negative amortization, which the credit consumer might not notice until long after the loan transaction had been
consummated. Countrywide, sued by California Attorney General Jerry Brown for "unfair business practices" and "false
advertising", was making high-cost mortgages "to homeowners with weak credit, adjustable rate mortgages (ARMs) that
allowed homeowners to make interest-only payments". When housing prices decreased, homeowners in ARMs then had little
incentive to pay their monthly payments, since their home equity had disappeared. This caused Countrywide's financial
condition to deteriorate, ultimately resulting in a decision by the Office of Thrift Supervision to seize the lender.
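The negative-amortization mechanics described above can be illustrated with a toy calculation; the loan size, rate, and teaser-level payment below are hypothetical:

```python
def next_balance(balance, annual_rate, payment):
    """Advance a loan one month: interest accrues on the balance, and any
    shortfall between the payment and the interest due is added to principal."""
    interest_due = balance * annual_rate / 12
    return balance + interest_due - payment

# Hypothetical ARM: $300,000 accruing 7%, with a teaser-level payment of
# $1,000 per month -- less than the ~$1,750 of monthly interest due -- so
# the balance grows even though the borrower pays every month.
balance = 300_000.0
for _ in range(12):
    balance = next_balance(balance, 0.07, 1_000.0)
print(round(balance))  # after a year the borrower owes more than at closing
```

Because the unpaid interest compounds, the consumer can be thousands of dollars deeper in debt after a year of on-time payments, which is why the negative amortization might go unnoticed for so long.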
Critics such as economist Paul Krugman and U.S. Treasury Secretary Timothy Geithner have argued that the regulatory
framework did not keep pace with financial innovation, such as the increasing importance of the shadow banking system,
derivatives and off-balance sheet financing. A recent OECD study suggests that bank regulation based on the Basel
accords encouraged unconventional business practices and contributed to or even reinforced the financial crisis. In
other cases, laws were changed or enforcement was weakened in parts of the financial system. Prior
to the crisis, financial institutions became highly leveraged, increasing their appetite for risky investments and
reducing their resilience in case of losses. Much of this leverage was achieved using complex financial instruments
such as off-balance sheet securitization and derivatives, which made it difficult for creditors and regulators to
monitor and try to reduce financial institution risk levels. These instruments also made it virtually impossible
to reorganize financial institutions in bankruptcy, and contributed to the need for government bailouts. From 2004
to 2007, the top five U.S. investment banks each significantly increased their financial leverage (see diagram),
which increased their vulnerability to a financial shock. Changes in capital requirements, intended to keep U.S.
banks competitive with their European counterparts, allowed lower risk weightings for AAA securities. The shift from
first-loss tranches to AAA tranches was seen by regulators as a risk reduction that compensated for the higher leverage.
These five institutions reported over $4.1 trillion in debt for fiscal year 2007, about 30% of U.S. nominal GDP for
2007. Lehman Brothers went bankrupt and was liquidated, Bear Stearns and Merrill Lynch were sold at fire-sale prices,
and Goldman Sachs and Morgan Stanley became commercial banks, subjecting themselves to more stringent regulation.
With the exception of Lehman, these companies required or received government support. Lehman reported that it had
been in talks with Bank of America and Barclays for the company's possible sale. However, both Barclays and Bank
of America ultimately declined to purchase the entire company. Behavior that may be optimal for an individual (e.g.,
saving more during adverse economic conditions) can be detrimental if too many individuals pursue the same behavior,
as ultimately one person's consumption is another person's income. Too many consumers attempting to save (or pay
down debt) simultaneously is called the paradox of thrift and can cause or deepen a recession. Economist Hyman Minsky
also described a "paradox of deleveraging" as financial institutions that have too much leverage (debt relative to
equity) cannot all de-leverage simultaneously without significant declines in the value of their assets. During April
2009, U.S. Federal Reserve vice-chair Janet Yellen discussed these paradoxes: "Once this massive credit crunch hit,
it didn’t take long before we were in a recession. The recession, in turn, deepened the credit crunch as demand and
employment fell, and credit losses of financial institutions surged. Indeed, we have been in the grips of precisely
this adverse feedback loop for more than a year. A process of balance sheet deleveraging has spread to nearly every
corner of the economy. Consumers are pulling back on purchases, especially on durable goods, to build their savings.
Businesses are cancelling planned investments and laying off workers to preserve cash. And, financial institutions
are shrinking assets to bolster capital and improve their chances of weathering the current storm. Once again, Minsky
understood this dynamic. He spoke of the paradox of deleveraging, in which precautions that may be smart for individuals
and firms—and indeed essential to return the economy to a normal state—nevertheless magnify the distress of the economy
as a whole." The term financial innovation refers to the ongoing development of financial products designed to achieve
particular client objectives, such as offsetting a particular risk exposure (such as the default of a borrower) or
to assist with obtaining financing. Examples pertinent to this crisis included: the adjustable-rate mortgage; the
bundling of subprime mortgages into mortgage-backed securities (MBS) or collateralized debt obligations (CDO) for
sale to investors, a type of securitization; and a form of credit insurance called credit default swaps (CDS). The
usage of these products expanded dramatically in the years leading up to the crisis. These products vary in complexity
and the ease with which they can be valued on the books of financial institutions. CDO issuance grew from an estimated
$20 billion in Q1 2004 to its peak of over $180 billion by Q1 2007, then declined back under $20 billion by Q1 2008.
Further, the credit quality of CDOs declined from 2000 to 2007, as the level of subprime and other non-prime mortgage
debt increased from 5% to 36% of CDO assets. As described in the section on subprime lending, the CDS and portfolio
of CDSs called synthetic CDOs enabled a theoretically infinite amount to be wagered on the finite value of housing
loans outstanding, provided that buyers and sellers of the derivatives could be found. For example, buying a CDS
to insure a CDO ended up giving the seller the same risk as if they owned the CDO, when those CDOs became worthless.
This boom in innovative financial products went hand in hand with more complexity. It multiplied the number of actors
connected to a single mortgage (including mortgage brokers, specialized originators, the securitizers and their due
diligence firms, managing agents and trading desks, and finally investors, insurances and providers of repo funding).
With increasing distance from the underlying asset these actors relied more and more on indirect information (including
FICO scores on creditworthiness, appraisals and due diligence checks by third party organizations, and most importantly
the computer models of rating agencies and risk management desks). Instead of spreading risk this provided the ground
for fraudulent acts, misjudgments and finally market collapse. In 2005 a group of computer scientists built a computational
model of the mechanism of biased ratings produced by rating agencies, which proved consistent with what actually
happened in 2006–2008. The pricing of risk refers to the incremental compensation required by investors
for taking on additional risk, which may be measured by interest rates or fees. Several scholars have argued that
a lack of transparency about banks' risk exposures prevented markets from correctly pricing risk before the crisis,
enabled the mortgage market to grow larger than it otherwise would have, and made the financial crisis far more disruptive
than it would have been if risk levels had been disclosed in a straightforward, readily understandable format. For
a variety of reasons, market participants did not accurately measure the risk inherent with financial innovation
such as MBS and CDOs or understand its impact on the overall stability of the financial system. For example, the
pricing model for CDOs clearly did not reflect the level of risk they introduced into the system. Banks estimated
that $450 billion of CDOs were sold between "late 2005 to the middle of 2007"; among the $102 billion of those that had been liquidated,
JPMorgan estimated that the average recovery rate for "high quality" CDOs was approximately 32 cents on the dollar,
while the recovery rate for mezzanine CDOs was approximately five cents on the dollar. Another example relates
to AIG, which insured obligations of various financial institutions through the usage of credit default swaps. The
basic CDS transaction involved AIG receiving a premium in exchange for a promise to pay money to party A in the event
party B defaulted. However, AIG did not have the financial strength to support its many CDS commitments as the crisis
progressed and was taken over by the government in September 2008. U.S. taxpayers provided over $180 billion in government
support to AIG during 2008 and early 2009, through which the money flowed to various counterparties to CDS transactions,
including many large global financial institutions. As financial assets became more and more complex, and harder
and harder to value, investors were reassured by the fact that both the international bond rating agencies and bank
regulators, who came to rely on them, accepted as valid some complex mathematical models which theoretically showed
the risks were much smaller than they actually proved to be. George Soros commented that "The super-boom got out
of hand when the new products became so complicated that the authorities could no longer calculate the risks and
started relying on the risk management methods of the banks themselves. Similarly, the rating agencies relied on
the information provided by the originators of synthetic products. It was a shocking abdication of responsibility."
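The basic CDS transaction described earlier, in which a seller such as AIG collects premiums in exchange for a promise to pay if a reference borrower defaults, can be sketched from the seller's side. The function, figures, and 40% recovery assumption are hypothetical:

```python
def cds_cash_flows(notional, spread, periods, default_period=None, recovery=0.4):
    """Cash flows of a toy single-name CDS from the protection seller's side.

    Each period the seller collects spread * notional in premium; if the
    reference entity defaults, the seller instead pays the loss given
    default, (1 - recovery) * notional, and the contract terminates.
    """
    flows = []
    for t in range(1, periods + 1):
        if t == default_period:
            flows.append(-notional * (1 - recovery))  # protection payout
            break
        flows.append(notional * spread)               # premium received
    return flows

# Selling protection on $1 million at a 1% running spread: small, steady
# premiums until a default in period 3 triggers a payout that dwarfs them.
print(cds_cash_flows(1_000_000, 0.01, 5, default_period=3))
```

The asymmetry is the point: premiums are small and regular, but the payout on default is a large fraction of the notional, which is why AIG's many commitments became unsupportable once defaults mounted.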
Moreover, a conflict of interest between professional investment managers and their institutional clients, combined
with a global glut in investment capital, led to bad investments by asset managers in over-priced credit assets.
Professional investment managers generally are compensated based on the volume of client assets under management.
There is, therefore, an incentive for asset managers to expand their assets under management in order to maximize
their compensation. As the glut in global investment capital caused the yields on credit assets to decline, asset
managers were faced with the choice of either investing in assets where returns did not reflect true credit risk
or returning funds to clients. Many asset managers chose to continue to invest client funds in over-priced (under-yielding)
investments, to the detriment of their clients, in order to maintain their assets under management. This choice was
supported by a "plausible deniability" of the risks associated with subprime-based credit assets because the loss
experience with early "vintages" of subprime loans was so low. Despite the dominance of the Gaussian copula formula, there
are documented attempts of the financial industry, occurring before the crisis, to address the formula limitations,
specifically the lack of dependence dynamics and the poor representation of extreme events. The volume "Credit Correlation:
Life After Copulas", published in 2007 by World Scientific, summarizes a 2006 conference held by Merrill Lynch in
London where several practitioners attempted to propose models rectifying some of the copula limitations. See also
the article by Donnelly and Embrechts and the book by Brigo, Pallavicini and Torresetti, which reports that relevant warnings
and research on CDOs appeared in 2006. In a June 2008 speech, President and CEO of the New York Federal Reserve Bank
Timothy Geithner—who in 2009 became Secretary of the United States Treasury—placed significant blame for the freezing
of credit markets on a "run" on the entities in the "parallel" banking system, also called the shadow banking system.
These entities became critical to the credit markets underpinning the financial system, but were not subject to the
same regulatory controls. Further, these entities were vulnerable because of maturity mismatch, meaning that they
borrowed short-term in liquid markets to purchase long-term, illiquid and risky assets. This meant that disruptions
in credit markets would make them subject to rapid deleveraging, selling their long-term assets at depressed prices.
He described the significance of these entities. The securitization markets supported by the shadow banking system
started to close down in the spring of 2007 and nearly shut down in the fall of 2008. More than a third of the private
credit markets thus became unavailable as a source of funds. According to the Brookings Institution, as of June 2009
the traditional banking system did not have the capital to close this gap: "It would take a number of years of strong
profits to generate sufficient capital to support that additional lending volume." The authors also indicate that
some forms of securitization are "likely to vanish forever, having been an artifact of excessively loose credit conditions."
Economist Mark Zandi testified to the Financial Crisis Inquiry Commission in January 2010: "The securitization markets
also remain impaired, as investors anticipate more loan losses. Investors are also uncertain about coming legal and
accounting rule changes and regulatory reforms. Private bond issuance of residential and commercial mortgage-backed
securities, asset-backed securities, and CDOs peaked in 2006 at close to $2 trillion...In 2009, private issuance
was less than $150 billion, and almost all of it was asset-backed issuance supported by the Federal Reserve's TALF
program to aid credit card, auto and small-business lenders. Issuance of residential and commercial mortgage-backed
securities and CDOs remains dormant." Rapid increases in a number of commodity prices followed the collapse in the
housing bubble. The price of oil nearly tripled from $50 to $147 from early 2007 to 2008, before plunging as the
financial crisis began to take hold in late 2008. Experts debate the causes, with some attributing it to speculative
flow of money from housing and other investments into commodities, some to monetary policy, and some to the increasing
feeling of raw-materials scarcity in a fast-growing world, leading to long positions taken in those markets, such
as China's increasing presence in Africa. An increase in oil prices tends to divert a larger share of consumer spending
into gasoline, which creates downward pressure on economic growth in oil importing countries, as wealth flows to
oil-producing states. A pattern of spiking instability in the price of oil over the decade leading up to the price
high of 2008 has recently been identified. The destabilizing effects of this price variance have been proposed as
a contributory factor in the financial crisis. In testimony before the Senate Committee on Commerce, Science, and
Transportation on June 3, 2008, former director of the CFTC Division of Trading & Markets (responsible for enforcement)
Michael Greenberger specifically named the Atlanta-based IntercontinentalExchange, founded by Goldman Sachs, Morgan
Stanley and BP as playing a key role in speculative run-up of oil futures prices traded off the regulated futures
exchanges in London and New York. However, the IntercontinentalExchange (ICE) had been regulated by both European
and U.S. authorities since its purchase of the International Petroleum Exchange in 2001. Mr Greenberger was later
corrected on this matter. Feminist economists Ailsa McKay and Margunn Bjørnholt argue that the financial crisis and
the response to it revealed a crisis of ideas in mainstream economics and within the economics profession, and call
for a reshaping of the economy, economic theory, and the economics profession. They argue that such a reshaping
should include new advances within feminist economics and ecological economics that take as their starting point
the socially responsible, sensible and accountable subject in creating an economy and economic theories that fully
acknowledge care for each other as well as the planet. Current Governor of the Reserve Bank of India Raghuram Rajan
had predicted the crisis in 2005 when he became chief economist at the International Monetary Fund. That year, at a
celebration honouring Alan Greenspan, who was about to retire as chairman of the US Federal Reserve, Rajan delivered
a controversial paper that was critical of the financial sector. In that paper, "Has Financial Development Made the
World Riskier?", Rajan "argued that disaster might loom." Rajan argued that financial sector managers were encouraged
to "take risks that generate severe adverse consequences with small probability but, in return, offer generous compensation
the rest of the time. These risks are known as tail risks. But perhaps the most important concern is whether banks
will be able to provide liquidity to financial markets so that if the tail risk does materialise, financial positions
can be unwound and losses allocated so that the consequences to the real economy are minimised." Apart from Rajan, the financial
crisis was not widely predicted by mainstream economists, who instead spoke of the Great Moderation.
A number of heterodox economists predicted the crisis, with varying arguments. Dirk Bezemer in his research credits
(with supporting argument and estimates of timing) 12 economists with predicting the crisis: Dean Baker (US), Wynne
Godley (UK), Fred Harrison (UK), Michael Hudson (US), Eric Janszen (US), Steve Keen (Australia), Jakob Brøchner Madsen
& Jens Kjaer Sørensen (Denmark), Kurt Richebächer (US), Nouriel Roubini (US), Peter Schiff (US), and Robert Shiller
(US). Examples of other experts who gave indications of a financial crisis have also been given. Unsurprisingly, the
Austrian school of economics regarded the crisis as a vindication: a classic, predictable example of a credit-fueled bubble,
the disregarded but inevitable effect of an artificial, manufactured laxity in the money supply, a perspective to which
even former Fed Chair Alan Greenspan, in Congressional testimony, confessed himself forced to return. A cover story
in BusinessWeek magazine claims that economists mostly failed to predict the worst international
economic crisis since the Great Depression of the 1930s. The Wharton School of the University of Pennsylvania's online
business journal examines why economists failed to predict a major global financial crisis. Popular articles published
in the mass media have led the general public to believe that the majority of economists have failed in their obligation
to predict the financial crisis. For example, an article in the New York Times reports that economist Nouriel Roubini
warned of such a crisis as early as September 2006, and the article goes on to state that the profession of economics
is bad at predicting recessions. According to The Guardian, Roubini was ridiculed for predicting a collapse of the
housing market and worldwide recession, while The New York Times labelled him "Dr. Doom". Stock trader and financial
risk engineer Nassim Nicholas Taleb, author of the 2007 book The Black Swan, spent years warning against the breakdown
of the banking system in particular and the economy in general, owing to their use of bad risk models and their reliance
on forecasting, and framed the problem as part of "robustness and fragility". He
also took action against the establishment view by making a big financial bet on banking stocks and making a fortune
from the crisis ("They didn't listen, so I took their money"). According to David Brooks from the New York Times,
"Taleb not only has an explanation for what’s happening, he saw it coming." Market strategist Phil Dow believes distinctions
exist "between the current market malaise" and the Great Depression. He says the Dow Jones average's fall of more
than 50% over a period of 17 months is similar to a 54.7% fall in the Great Depression, followed by a total drop
of 89% over the following 16 months. "It's very troubling if you have a mirror image," said Dow. Floyd Norris, the
chief financial correspondent of The New York Times, wrote in a blog entry in March 2009 that the decline had not
been a mirror image of the Great Depression, explaining that although the decline amounts were nearly the same at
the time, the rates of decline had started much faster in 2007, and that the past year had only ranked eighth among
the worst recorded years of percentage drops in the Dow. The past two years ranked third, however. One of the first
victims was Northern Rock, a medium-sized British bank. The highly leveraged nature of its business led the bank
to request security from the Bank of England. This in turn led to investor panic and a bank run in mid-September
2007. Calls by Liberal Democrat Treasury Spokesman Vince Cable to nationalise the institution were initially ignored;
in February 2008, however, the British government (having failed to find a private sector buyer) relented, and the
bank was taken into public hands. Northern Rock's problems proved to be an early indication of the troubles that
would soon befall other banks and financial institutions. The first visible institution to run into trouble in the
United States was the Southern California–based IndyMac, a spin-off of Countrywide Financial. Before its failure,
IndyMac Bank was the largest savings and loan association in the Los Angeles market and the seventh largest mortgage
originator in the United States. The failure of IndyMac Bank on July 11, 2008, was the fourth largest bank failure
in United States history up until the crisis precipitated even larger failures, and the second largest failure of
a regulated thrift. IndyMac Bank's parent corporation was IndyMac Bancorp until the FDIC seized IndyMac Bank. IndyMac
Bancorp filed for Chapter 7 bankruptcy in July 2008. IndyMac often made loans without verification of the borrower’s
income or assets, and to borrowers with poor credit histories. Appraisals obtained by IndyMac on underlying collateral
were often questionable as well. As an Alt-A lender, IndyMac’s business model was to offer loan products to fit the
borrower’s needs, using an extensive array of risky option-adjustable-rate-mortgages (option ARMs), subprime loans,
80/20 loans, and other nontraditional products. Ultimately, loans were made to many borrowers who simply could not
afford to make their payments. The thrift remained profitable only as long as it was able to sell those loans in
the secondary mortgage market. IndyMac resisted efforts to regulate its involvement in those loans or tighten their
issuing criteria: see the comment by Ruthann Melbourne, Chief Risk Officer, to the regulating agencies. IndyMac reported
that during April 2008, Moody's and Standard & Poor's downgraded the ratings on a significant number of mortgage-backed
security (MBS) bonds, including $160 million of those issued by IndyMac and retained by the bank in its MBS portfolio.
IndyMac concluded that these downgrades would have negatively impacted the Company's risk-based capital ratio as
of June 30, 2008. Had these lowered ratings been in effect at March 31, 2008, IndyMac concluded that the bank's capital
ratio would have been 9.27% total risk-based. IndyMac warned that if its regulators found its capital position to
have fallen from "well capitalized" (minimum 10% risk-based capital ratio) to "adequately capitalized" (8–10% risk-based
capital ratio) the bank might no longer be able to use brokered deposits as a source of funds. Senator Charles Schumer
(D-NY) would later point out that brokered deposits made up more than 37 percent of IndyMac's total deposits and
ask the Federal Deposit Insurance Corporation (FDIC) whether it had considered ordering IndyMac to reduce its reliance
on these deposits. With $18.9 billion in total deposits reported on March 31, Senator Schumer would have been referring
to a little over $7 billion in brokered deposits. While the breakout of maturities of these deposits is not known
exactly, a simple averaging would have put the threat of brokered-deposit loss to IndyMac at $500 million a month,
had the regulator disallowed IndyMac from acquiring new brokered deposits on June 30. When home prices declined in
the latter half of 2007 and the secondary mortgage market collapsed, IndyMac was forced to hold $10.7 billion of
loans it could not sell in the secondary market. Its reduced liquidity was further exacerbated in late June 2008
when account holders withdrew $1.55 billion or about 7.5% of IndyMac's deposits. This “run” on the thrift followed
the public release of a letter from Senator Charles Schumer to the FDIC and OTS. The letter outlined the Senator’s
concerns with IndyMac. While the run was a contributing factor in the timing of IndyMac’s demise, the underlying
cause of the failure was the unsafe and unsound manner in which the thrift was operated. On July 11, 2008, citing
liquidity concerns, the FDIC put IndyMac Bank into conservatorship. A bridge bank, IndyMac Federal Bank, FSB, was
established to assume control of IndyMac Bank's assets, its secured liabilities, and its insured deposit accounts.
The FDIC announced plans to open IndyMac Federal Bank, FSB on July 14, 2008. Until then, depositors would have access to
their insured deposits through ATMs, their existing checks, and their existing debit cards. Telephone and Internet
account access was restored when the bank reopened. The FDIC guarantees the funds of all insured accounts up to US$100,000,
and has declared a special advance dividend to the roughly 10,000 depositors with funds in excess of the insured
amount, guaranteeing 50% of any amounts in excess of $100,000. Yet, even with the pending sale of Indymac to IMB
Management Holdings, an estimated 10,000 uninsured depositors of Indymac are still at a loss of over $270 million.
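The recovery arrangement described above, full FDIC insurance up to $100,000 plus a 50% advance dividend on any uninsured excess, can be sketched as a small calculation. The balance figure below is purely illustrative, not one of the actual IndyMac accounts:

```python
INSURED_LIMIT = 100_000   # FDIC insurance limit in effect at the time
ADVANCE_RATE = 0.50       # special advance dividend on uninsured balances

def recovered(balance):
    """Amount a depositor recovers: everything up to the insured limit,
    plus a 50% advance on any amount above it."""
    if balance <= INSURED_LIMIT:
        return balance
    return INSURED_LIMIT + ADVANCE_RATE * (balance - INSURED_LIMIT)

# A hypothetical depositor with $250,000 recovers $175,000,
# leaving $75,000 of the uninsured excess at risk.
print(recovered(250_000))
```

Summed over the roughly 10,000 depositors above the limit, the unrecovered halves of the excess balances are what make up the $270 million loss cited above.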
Initially the companies affected were those directly involved in home construction and mortgage lending, such as Northern
Rock and Countrywide Financial, as they could no longer obtain financing through the credit markets. Over 100 mortgage
lenders went bankrupt during 2007 and 2008. Concerns that investment bank Bear Stearns would collapse in March 2008
resulted in its fire-sale to JP Morgan Chase. The financial institution crisis hit its peak in September and October
2008. Several major institutions either failed, were acquired under duress, or were subject to government takeover.
These included Lehman Brothers, Merrill Lynch, Fannie Mae, Freddie Mac, Washington Mutual, Wachovia, Citigroup, and
AIG. On October 6, 2008, three weeks after Lehman Brothers filed the largest bankruptcy in U.S. history, Lehman's former
CEO, Richard Fuld, found himself before Representative Henry A. Waxman, the California Democrat who chaired the House
Committee on Oversight and Government Reform. Fuld said he was a victim of the collapse, blaming a "crisis of confidence" in
the markets for dooming his firm. In September 2008, the crisis hit its most critical stage. There was the equivalent
of a bank run on the money market funds, which frequently invest in commercial paper issued by corporations to fund
their operations and payrolls. Withdrawals from money market funds were $144.5 billion during one week, versus $7.1 billion
the week prior. This interrupted the ability of corporations to rollover (replace) their short-term debt. The U.S.
government responded by extending insurance for money market accounts analogous to bank deposit insurance via a temporary
guarantee and with Federal Reserve programs to purchase commercial paper. The TED spread, an indicator of perceived
credit risk in the general economy, spiked up in July 2007, remained volatile for a year, then spiked even higher
in September 2008, reaching a record 4.65% on October 10, 2008. Economist Paul Krugman and U.S. Treasury Secretary
Timothy Geithner explain the credit crisis via the implosion of the shadow banking system, which had grown to nearly
equal the importance of the traditional commercial banking sector as described above. Without the ability to obtain
investor funds in exchange for most types of mortgage-backed securities or asset-backed commercial paper, investment
banks and other entities in the shadow banking system could not provide funds to mortgage firms and other corporations.
This meant that nearly one-third of the U.S. lending mechanism was frozen and continued to be frozen into June 2009.
According to the Brookings Institution, the traditional banking system does not have the capital to close this gap
as of June 2009: "It would take a number of years of strong profits to generate sufficient capital to support that
additional lending volume". The authors also indicate that some forms of securitization are "likely to vanish forever,
having been an artifact of excessively loose credit conditions". While traditional banks have raised their lending
standards, it was the collapse of the shadow banking system that was the primary cause of the reduction in funds available
for borrowing. There is a direct relationship between declines in wealth and declines in consumption and business
investment, which along with government spending, represent the economic engine. Between June 2007 and November 2008,
Americans lost an estimated average of more than a quarter of their collective net worth. By early
November 2008, a broad U.S. stock index, the S&P 500, was down 45% from its 2007 high. Housing prices had dropped
20% from their 2006 peak, with futures markets signaling a 30–35% potential drop. Total home equity in the United
States, which was valued at $13 trillion at its peak in 2006, had dropped to $8.8 trillion by mid-2008 and was still
falling in late 2008. Total retirement assets, Americans' second-largest household asset, dropped by 22%, from $10.3
trillion in 2006 to $8 trillion in mid-2008. During the same period, savings and investment assets (apart from retirement
savings) lost $1.2 trillion and pension assets lost $1.3 trillion. Taken together, these losses total a staggering
$8.3 trillion. Since peaking in the second quarter of 2007, household wealth is down $14 trillion. In November 2008,
economist Dean Baker observed: "There is a really good reason for tighter credit. Tens of millions of homeowners
who had substantial equity in their homes two years ago have little or nothing today. Businesses are facing the worst
downturn since the Great Depression. This matters for credit decisions. A homeowner with equity in her home is very
unlikely to default on a car loan or credit card debt. They will draw on this equity rather than lose their car and/or
have a default placed on their credit record. On the other hand, a homeowner who has no equity is a serious default
risk. In the case of businesses, their creditworthiness depends on their future profits. Profit prospects look much
worse in November 2008 than they did in November 2007... While many banks are obviously at the brink, consumers and
businesses would be facing a much harder time getting credit right now even if the financial system were rock solid.
The problem with the economy is the loss of close to $6 trillion in housing wealth and an even larger amount of stock
wealth." Several commentators have suggested that if the liquidity crisis continues, an extended recession or worse
could occur. The continuing development of the crisis prompted fears of a global economic collapse, although there
are now many cautiously optimistic forecasters in addition to some prominent sources who remain negative. The financial
crisis is likely to yield the biggest banking shakeout since the savings-and-loan meltdown. Investment bank UBS stated
on October 6 that 2008 would see a clear global recession, with recovery unlikely for at least two years. Three days
later UBS economists announced that the "beginning of the end" of the crisis had begun, with the world starting to
take the necessary actions to fix the crisis: capital injection by governments, made systemically; interest
rate cuts to help borrowers. The United Kingdom had started systemic injection, and the world's central banks were
now cutting interest rates. UBS emphasized the United States needed to implement systemic injection. UBS further
emphasized that this fixes only the financial crisis, but that in economic terms "the worst is still to come". UBS
quantified their expected recession durations on October 16: the Eurozone's would last two quarters, the United States'
would last three quarters, and the United Kingdom's would last four quarters. The economic crisis in Iceland involved
all three of the country's major banks. Relative to the size of its economy, Iceland’s banking collapse is the largest
suffered by any country in economic history. The Brookings Institution reported in June 2009 that U.S. consumption
accounted for more than a third of the growth in global consumption between 2000 and 2007. "The US economy has been
spending too much and borrowing too much for years and the rest of the world depended on the U.S. consumer as a source
of global demand." With a recession in the U.S. and the increased savings rate of U.S. consumers, declines in growth
elsewhere have been dramatic. For the first quarter of 2009, the annualized rate of decline in GDP was 14.4% in Germany,
15.2% in Japan, 7.4% in the UK, 18% in Latvia, 9.8% in the Euro area and 21.5% for Mexico. Some developing countries
that had seen strong economic growth saw significant slowdowns. For example, growth forecasts in Cambodia show a
fall from more than 10% in 2007 to close to zero in 2009, and Kenya may achieve only 3–4% growth in 2009, down from
7% in 2007. According to the research by the Overseas Development Institute, reductions in growth can be attributed
to falls in trade, commodity prices, investment and remittances sent from migrant workers (which reached a record
$251 billion in 2007, but have fallen in many countries since). This has stark implications and has led to a dramatic
rise in the number of households living below the poverty line, be it 300,000 in Bangladesh or 230,000 in Ghana.
States with fragile political systems in particular fear that Western investors will withdraw their
money because of the crisis. Bruno Wenn of the German DEG recommends sound economic policymaking and
good governance to attract new investors. The World Bank reported in February 2009 that the Arab World was far less
severely affected by the credit crunch. With generally good balance of payments positions coming into the crisis
or with alternative sources of financing for their large current account deficits, such as remittances, Foreign Direct
Investment (FDI) or foreign aid, Arab countries were able to avoid going to the market in the latter part of 2008.
This group is in the best position to absorb the economic shocks. They entered the crisis in exceptionally strong
positions. This gives them a significant cushion against the global downturn. The greatest impact of the global economic
crisis will come in the form of lower oil prices, which remains the single most important determinant of economic
performance. Steadily declining oil prices would force them to draw down reserves and cut down on investments. Significantly
lower oil prices could cause a reversal of economic performance as has been the case in past oil shocks. Initial
impact will be seen on public finances and employment for foreign workers. Real gross domestic product (GDP), the output
of goods and services produced by labor and property located in the United States, decreased at an annual rate of approximately 6% in the fourth
quarter of 2008 and first quarter of 2009, versus activity in the year-ago periods. The U.S. unemployment rate increased
to 10.1% by October 2009, the highest rate since 1983 and roughly twice the pre-crisis rate. The average hours per
work week declined to 33, the lowest level since the government began collecting the data in 1964. With the decline
of gross domestic product came the decline in innovation. With fewer resources to risk in creative destruction, the
number of patent applications flat-lined. Compared to the previous five years of exponential increases in patent applications,
this stagnation correlated with the similar drop in GDP during the same time period. Typical American families did
not fare as well, nor did those "wealthy-but-not wealthiest" families just beneath the pyramid's top. On the other
hand, half of the poorest families did not have wealth declines at all during the crisis. The Federal Reserve surveyed
4,000 households between 2007 and 2009, and found that the total wealth of 63 percent of all Americans declined in
that period. 77 percent of the richest families had a decrease in total wealth, while only 50 percent of those on
the bottom of the pyramid suffered a decrease. On November 3, 2008, the European Commission at Brussels predicted
for 2009 extremely weak GDP growth of 0.1% for the countries of the Eurozone (France, Germany, Italy, Belgium,
etc.) and even negative figures for the UK (−1.0%), Ireland and Spain. On November 6, the IMF at Washington, D.C.,
released forecasts predicting a worldwide recession of −0.3% for 2009, averaged over the developed economies. On the
same day, the Bank of England and the European Central Bank, respectively, reduced their interest rates from 4.5%
down to 3%, and from 3.75% down to 3.25%. As a consequence, starting from November 2008, several countries launched
large "help packages" for their economies. The U.S. Federal Reserve and central banks around the world have taken
steps to expand money supplies to avoid the risk of a deflationary spiral, in which lower wages and higher unemployment
lead to a self-reinforcing decline in global consumption. In addition, governments have enacted large fiscal stimulus
packages, by borrowing and spending to offset the reduction in private sector demand caused by the crisis. The U.S.
Federal Reserve's new and expanded liquidity facilities were intended to enable the central bank to fulfill its traditional
lender-of-last-resort role during the crisis while mitigating stigma, broadening the set of institutions with access
to liquidity, and increasing the flexibility with which institutions could tap such liquidity. This credit freeze
brought the global financial system to the brink of collapse. The response of the Federal Reserve, the European Central
Bank, the Bank of England and other central banks was immediate and dramatic. During the last quarter of 2008, these
central banks purchased US$2.5 trillion of government debt and troubled private assets from banks. This was the largest
liquidity injection into the credit market, and the largest monetary policy action, in world history. Following a
model initiated by the United Kingdom bank rescue package, the governments of European nations and the USA guaranteed
the debt issued by their banks and raised the capital of their national banking systems, ultimately purchasing $1.5
trillion newly issued preferred stock in their major banks. In October 2010, Nobel laureate Joseph Stiglitz explained
how the U.S. Federal Reserve was implementing another monetary policy, creating currency, as a method to combat the
liquidity trap. By creating $600 billion and inserting this directly into banks, the Federal
Reserve intended to spur banks to finance more domestic loans and refinance mortgages. However, banks instead were
spending the money in more profitable areas by investing internationally in emerging markets. Banks were also investing
in foreign currencies, which Stiglitz and others point out may lead to currency wars while China redirects its currency
holdings away from the United States. United States President Barack Obama and key advisers introduced a series of
regulatory proposals in June 2009. The proposals address consumer protection, executive pay, bank financial cushions
or capital requirements, expanded regulation of the shadow banking system and derivatives, and enhanced authority
for the Federal Reserve to safely wind-down systemically important institutions, among others. In January 2010, Obama
proposed additional regulations limiting the ability of banks to engage in proprietary trading. The proposals were
dubbed "The Volcker Rule", in recognition of Paul Volcker, who has publicly argued for the proposed changes. The
U.S. Senate passed a reform bill in May 2010, following the House which passed a bill in December 2009. These bills
must now be reconciled. The New York Times provided a comparative summary of the features of the two bills, which
address to varying extent the principles enumerated by the Obama administration. For instance, the Volcker Rule against
proprietary trading is not part of the legislation, though in the Senate bill regulators have the discretion but
not the obligation to prohibit these trades. European regulators introduced Basel III regulations for banks. These increased
capital ratios, imposed limits on leverage, narrowed the definition of capital (to exclude subordinated debt), limited counterparty
risk, and added new liquidity requirements. Critics argue that Basel III doesn’t address the problem of faulty risk-weightings.
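The risk-weighting critique can be made concrete with a toy calculation: a risk-based capital ratio divides capital by risk-weighted assets, so an exposure carrying a zero risk weight, such as AA-rated sovereign debt under these rules, can grow without moving the ratio at all. The balance-sheet numbers below are hypothetical, chosen only to show the mechanism:

```python
def risk_based_capital_ratio(capital, exposures):
    """exposures: list of (amount, risk_weight) pairs.
    Ratio = capital / sum of risk-weighted assets."""
    rwa = sum(amount * weight for amount, weight in exposures)
    return capital / rwa

corporate_only = [(1_000, 1.00)]                   # loans at a 100% risk weight
plus_sovereigns = corporate_only + [(5_000, 0.0)]  # AA sovereign debt, 0% weight

# Adding $5,000 of zero-weighted sovereign lending leaves the ratio unchanged.
print(risk_based_capital_ratio(100, corporate_only))   # 0.1
print(risk_based_capital_ratio(100, plus_sovereigns))  # 0.1
```

This is the incentive the critics point to: a bank can expand sovereign lending sixfold here without raising a single unit of additional capital.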
Major banks suffered losses from AAA-rated securities created by financial engineering (which creates apparently risk-free assets
out of high-risk collateral) that required less capital according to Basel II. Lending to AA-rated sovereigns has
a risk-weight of zero, thus increasing lending to governments and leading to the next crisis. Johan Norberg argues
that regulations (Basel III among others) have indeed led to excessive lending to risky governments (see European
sovereign-debt crisis) and the ECB pursues even more lending as the solution. The U.S. recession that began in December
2007 ended in June 2009, according to the U.S. National Bureau of Economic Research (NBER) and the financial crisis
appears to have ended about the same time. In April 2009 TIME magazine declared "More Quickly Than It Began, The
Banking Crisis Is Over." The United States Financial Crisis Inquiry Commission dates the crisis to 2008. President
Barack Obama declared on January 27, 2010, "the markets are now stabilized, and we've recovered most of the money
we spent on the banks." Advanced economies led global economic growth prior to the financial crisis with "emerging"
and "developing" economies lagging behind. The crisis completely overturned this relationship. The International
Monetary Fund found that "advanced" economies accounted for only 31% of global GDP growth, while emerging and developing
economies accounted for 69% of global GDP growth from 2007 to 2014. In the tables, the names of emergent economies are shown
in boldface type, while the names of developed economies are in Roman (regular) type.
Portugal (Portuguese: [puɾtuˈɣaɫ]), officially the Portuguese Republic (Portuguese: República Portuguesa), is a country on
the Iberian Peninsula, in Southwestern Europe. It is the westernmost country of mainland Europe, being bordered by
the Atlantic Ocean to the west and south and by Spain to the north and east. The Portugal–Spain border is 1,214 km
(754 mi) long and considered the longest uninterrupted border within the European Union. The republic also includes
the Atlantic archipelagos of the Azores and Madeira, both autonomous regions with their own regional governments.
The land within the borders of current Portugal has been continuously settled and fought over since prehistoric times.
The Celts and the Romans were followed by the Visigothic and the Suebi Germanic peoples, who were themselves later
invaded by the Moors. These Muslim peoples were eventually expelled during the Christian Reconquista of the peninsula.
By 1139, Portugal had established itself as a kingdom independent from León. In the 15th and 16th centuries, as the
result of pioneering the Age of Discovery, Portugal expanded Western influence and established the first global empire,
becoming one of the world's major economic, political and military powers. Portugal lost much of its wealth and status
with the destruction of Lisbon in a 1755 earthquake, occupation during the Napoleonic Wars, and the independence
of Brazil, its wealthiest colony, in 1822. After the 1910 revolution deposed the monarchy, the democratic but unstable
Portuguese First Republic was established, later being superseded by the "Estado Novo" right-wing authoritarian regime.
Democracy was restored after the Portuguese Colonial War and the Carnation Revolution in 1974. Shortly after, independence
was granted to all its colonies, with the exception of Macau, which was handed over to China in 1999. This marked
the end of the longest-lived European colonial empire, leaving a profound cultural and architectural influence across
the globe and a legacy of over 250 million Portuguese speakers today. Portugal maintains a unitary semi-presidential
republican form of government and is a developed country with an advanced economy, and a high living standard, having
the 18th-highest score on the Social Progress Index in the world, putting it ahead of other Western European countries like France,
Spain and Italy. It is a member of numerous international organizations, including the United Nations, the European
Union, the Eurozone, OECD, NATO and the Community of Portuguese Language Countries. Portugal is also known for having
decriminalized the usage of all common drugs in 2001, the first country in the world to do so. Drugs nevertheless remain
illegal in Portugal; decriminalization removed criminal penalties for personal use, not the prohibition itself. The early history of Portugal is shared with the rest of the Iberian Peninsula located
in South Western Europe. The name of Portugal derives from the joined Romano-Celtic name Portus Cale. The region
was settled by Pre-Celts and Celts, giving origin to peoples like the Gallaeci, Lusitanians, Celtici and Cynetes,
visited by Phoenicians and Carthaginians, incorporated in the Roman Republic dominions as Lusitania and part of Gallaecia,
after 45 BC until 298 AD, settled again by Suebi, Buri, and Visigoths, and conquered by Moors. Other influences include
some 5th-century vestiges of Alan settlement, found in Alenquer (from old Germanic Alankerk, Alan + kerk,
meaning "church of the Alans"), Coimbra and Lisbon. In 27 BC, Lusitania gained the status of a Roman province.
Later, a northern province of Lusitania was formed, known as Gallaecia, with capital in Bracara Augusta, today's
Braga. There are still many ruins of castros (hill forts) all over modern Portugal and remains of Castro culture.
Numerous Roman sites are scattered around present-day Portugal; some urban remains are quite large, like Conímbriga
and Mirobriga. The former, beyond being one of the largest Roman settlements in Portugal, is also classified as a
National Monument. Conímbriga lies 16 km from Coimbra, which in its turn was the ancient Aeminium. The site also
has a museum that displays objects found by archaeologists during their excavations. After defeating the Visigoths
in only a few months, the Umayyad Caliphate started expanding rapidly in the peninsula. Beginning in 711, the land
that is now Portugal became part of the vast Umayyad Caliphate's empire of Damascus, which stretched from the Indus
river in the Indian sub-continent (now Pakistan) up to the South of France, until its collapse in 750. That year
the west of the empire gained its independence under Abd-ar-Rahman I with the establishment of the Emirate of Córdoba.
After almost two centuries, the Emirate became the Caliphate of Córdoba in 929, until its dissolution a century later
in 1031 into no less than 23 small kingdoms, called Taifa kingdoms. The governors of the taifas each proclaimed themselves
Emir of their provinces and established diplomatic relations with the Christian kingdoms of the north. Most of Portugal
fell into the hands of the Taifa of Badajoz of the Aftasid Dynasty, and after a short spell of an ephemeral Taifa
of Lisbon in 1022, fell under the dominion of the Taifa of Seville of the Abbadid dynasty of poets. The Taifa period ended
with the conquest by the Almoravids, who came from Morocco in 1086, winning a decisive victory at the Battle of Sagrajas,
followed a century later in 1147, after the second period of Taifa, by the Almohads, also from Marrakesh. The Muslim
population of the region consisted mainly of native Iberian converts to Islam (the so-called Muwallad or Muladi)
and to a lesser extent Berbers and Arabs. The Arabs were principally noblemen from Oman; and though few in numbers,
they constituted the elite of the population. The Berbers were originally from the Atlas mountains and Rif mountains
of North Africa and were essentially nomads. In Portugal, the Muslim population (or "Moors"), relatively small in
numbers, stayed in the Algarve region, and south of the Tagus. Today, there are approximately 800 words in the Portuguese
language of Arabic origin. The Muslims were expelled from Portugal some 300 years earlier than from neighbouring Spain,
a difference reflected both in Portuguese culture and in the language, which draws mostly on Celtiberian and Vulgar Latin roots. Pelayo's
plan was to use the Cantabrian mountains as a place of refuge and protection from the invading Moors. He then aimed
to regroup the Iberian Peninsula's Christian armies and use the Cantabrian mountains as a springboard from which
to regain their lands from the Moors. In the process, after defeating the Moors in the Battle of Covadonga in 722
AD, Pelayo was proclaimed king, thus founding the Christian Kingdom of Asturias and starting the war of Christian
reconquest known in Portuguese as the Reconquista Cristã. The County of Portugal became one of the several counties
that made up the Kingdom of Asturias when King Alfonso III of Asturias knighted Vimara Peres, in 868 AD, as
the First Count of Portus Cale (Portugal). The region became known as Portucale, Portugale, and simultaneously Portugália
— the County of Portugal. Later the Kingdom of Asturias was divided into a number of Christian Kingdoms in Northern
Spain due to dynastic divisions of inheritance among the kings' offspring. With the forced abdication of Alfonso III
"the Great" of Asturias by his sons in 910, the Kingdom of Asturias split into three separate kingdoms of León, Galicia
and Asturias. The three kingdoms were eventually reunited in 924 (León and Galicia in 914, Asturias later) under
the crown of León. A year before the death of Alfonso III "the Great" of Asturias, three of Alfonso's sons rose in rebellion
and forced him to abdicate, partitioning the kingdom among them. The eldest son, García, became king of León. The
second son, Ordoño, reigned in Galicia, while the third, Fruela, received Asturias with Oviedo as his capital. Alfonso
died in Zamora, probably in 910. His former realm would be reunited when first García died childless and León passed
to Ordoño. He in turn died when his children were too young to ascend; Fruela became king of a reunited crown. His
death the next year initiated a series of internecine struggles that led to unstable succession for over a century.
The Kingdom of León continued under that name until it was incorporated into the Kingdom of Castile in 1230, after
Ferdinand III became joint king of the two kingdoms. This was done to avoid dynastic feuds and to maintain the Christian
kingdoms strong enough to prevent a complete Muslim takeover of the Iberian Peninsula and to further the Reconquista
of Iberia by Christian armies. During the century of internecine struggles for dominance among the northern Christian
kingdoms, the County of Portugal formed the southern portion of the Kingdom of Galicia. At times the Kingdom of
Galicia existed independently for short periods, but usually formed an important part of the Kingdom of Leon. Throughout
this period, the people of the County of Portugal, as Galicians, found themselves struggling to maintain the autonomy of
Galicia with its distinct language and culture (Galician-Portuguese) from the Leonese culture, whenever the status
of the Kingdom of Galicia changed in relation to the Kingdom of Leon. As a result of political division, Galician-Portuguese
lost its unity when the County of Portugal separated from the Kingdom of Galicia (a dependent kingdom of Leon) to
establish the Kingdom of Portugal. The Galician and Portuguese versions of the language then diverged over time as
they followed independent evolutionary paths. This began when the Kingdom of León and the Kingdom of Castile united
and Castilian (known as Spanish) slowly, over the centuries, began to influence and then displace Galician. The same
happened to the Astur-Leonese language, which today has been greatly reduced or entirely replaced by
Castilian. In 1348 and 1349 Portugal, like the rest of Europe,
was devastated by the Black Death. In 1373, Portugal made an alliance with England, which is the longest-standing
alliance in the world. This alliance served both nations' interests throughout history and is regarded by many as
the predecessor to NATO. Over time this went way beyond geo-political and military cooperation (protecting both nations'
interests in Africa, the Americas and Asia against French, Spanish and Dutch rivals) and maintained strong trade
and cultural ties between the two old European allies. Particularly in the Oporto region, there is visible English
influence to this day. Portugal spearheaded European exploration of the world and the Age of Discovery. Prince Henry
the Navigator, son of King João I, became the main sponsor and patron of this endeavour. During this period, Portugal
explored the Atlantic Ocean, discovering several Atlantic archipelagos like the Azores, Madeira, and Cape Verde,
explored the African coast, colonized selected areas of Africa, discovered an eastern route to India via the Cape
of Good Hope, discovered Brazil, explored the Indian Ocean, established trading routes throughout most of southern
Asia, and sent the first direct European maritime trade and diplomatic missions to China and Japan. In 1738, Sebastião
José de Carvalho e Melo, 1st Marquis of Pombal, began a diplomatic career as the Portuguese Ambassador in London
and later in Vienna. The Queen consort of Portugal, Archduchess Maria Anne Josefa of Austria, was fond of Melo; and
after his first wife died, she arranged the widowed de Melo's second marriage to the daughter of the Austrian Field
Marshal Leopold Josef, Count von Daun. King John V of Portugal, however, was not pleased and recalled Melo to Portugal
in 1749. John V died the following year and his son, Joseph I of Portugal, was crowned. In contrast to his father,
Joseph I was fond of de Melo, and with the Queen Mother's approval, he appointed Melo as Minister of Foreign Affairs.
As the King's confidence in de Melo increased, the King entrusted him with more control of the state. By 1755, Sebastião
de Melo was made Prime Minister. Impressed by the British economic success he had witnessed while Ambassador,
he successfully implemented similar economic policies in Portugal. He abolished slavery in Portugal and in the Portuguese
colonies in India; reorganized the army and the navy; restructured the University of Coimbra, and ended discrimination
against different Christian sects in Portugal. But Sebastião de Melo's greatest reforms were economic and financial,
with the creation of several companies and guilds to regulate every commercial activity. He demarcated the region
for production of Port to ensure the wine's quality, and this was the first attempt to control wine quality and production
in Europe. He ruled with a strong hand by imposing strict law upon all classes of Portuguese society from the high
nobility to the poorest working class, along with a widespread review of the country's tax system. These reforms
gained him enemies in the upper classes, especially among the high nobility, who despised him as a social upstart.
On 1 November 1755, Lisbon was struck by a devastating earthquake, followed by a tsunami and fires. Despite the calamity and huge death toll, Lisbon suffered no epidemics and within less than one year was already
being rebuilt. The new city centre of Lisbon was designed to resist subsequent earthquakes. Architectural models
were built for tests, and the effects of an earthquake were simulated by marching troops around the models. The buildings
and big squares of the Pombaline City Centre still remain as one of Lisbon's tourist attractions. Sebastião de Melo
also made an important contribution to the study of seismology by designing an inquiry that was sent to every parish
in the country. Following the earthquake, Joseph I gave his Prime Minister even more power, and Sebastião de Melo
became a powerful, progressive dictator. As his power grew, his enemies increased in number, and bitter disputes
with the high nobility became frequent. In 1758 Joseph I was wounded in an attempted assassination. The Távora family
and the Duke of Aveiro were implicated and executed after a quick trial. The Jesuits were expelled from the country
and their assets confiscated by the crown. Sebastião de Melo prosecuted every person involved, even women and children.
This was the final stroke that broke the power of the aristocracy. Joseph I made his loyal minister Count of Oeiras
in 1759. Following the Távora affair, the new Count of Oeiras knew no opposition. Made "Marquis of Pombal" in 1770,
he effectively ruled Portugal until Joseph I's death in 1779. However, historians also argue that Pombal’s "enlightenment,"
while far-reaching, was primarily a mechanism for enhancing autocracy at the expense of individual liberty and especially
an apparatus for crushing opposition, suppressing criticism, and furthering colonial economic exploitation as well
as intensifying book censorship and consolidating personal control and profit. With the occupation by Napoleon, Portugal
began a slow but inexorable decline that lasted until the 20th century. This decline was hastened by the independence
in 1822 of the country's largest colonial possession, Brazil. In 1807, as Napoleon's army closed in on Lisbon, the
Prince Regent João VI of Portugal transferred his court to Brazil and established Rio de Janeiro as the capital of
the Portuguese Empire. In 1815, Brazil was declared a Kingdom and the Kingdom of Portugal was united with it, forming
a pluricontinental State, the United Kingdom of Portugal, Brazil and the Algarves. As a result of the change in its
status and the arrival of the Portuguese royal family, Brazilian administrative, civic, economic, military, educational,
and scientific institutions were expanded and highly modernized. Portuguese troops and their British allies fought against
the French Invasion of Portugal and by 1815 the situation in Europe had cooled down sufficiently that João VI would
have been able to return safely to Lisbon. However, the King of Portugal remained in Brazil until the Liberal Revolution
of 1820, which started in Porto, demanded his return to Lisbon in 1821. With the Berlin Conference of 1884, Portugal's
African territories had their borders formally established at Portugal's request, in order to protect its centuries-long
Portuguese interests in the continent from rivalries enticed by the Scramble for Africa. Portuguese Africa's cities
and towns like Nova Lisboa, Sá da Bandeira, Silva Porto, Malanje, Tete, Vila Junqueiro, Vila Pery and Vila Cabral
were founded or redeveloped inland during this period and beyond. New coastal towns like Beira, Moçâmedes, Lobito,
João Belo, Nacala and Porto Amélia were also founded. Even before the turn of the 20th century, railways such as
the Benguela railway in Angola and the Beira railway in Mozambique started to be built to link coastal areas and
selected inland regions. On 1 February 1908, King Dom Carlos I of Portugal and his heir apparent, Prince Royal
Dom Luís Filipe, Duke of Braganza, were murdered in Lisbon. Under his rule, Portugal had twice been declared bankrupt
– on 14 June 1892, and again on 10 May 1902 – causing social turmoil, economic disturbances, protests, revolts and
criticism of the monarchy. Manuel II of Portugal became the new king, but was eventually overthrown by the 5 October
1910 revolution, which abolished the regime and instated republicanism in Portugal. Political instability and economic
weaknesses were fertile ground for chaos and unrest during the Portuguese First Republic. These conditions would
lead to the failed Monarchy of the North, 28 May 1926 coup d'état, and the creation of the National Dictatorship
(Ditadura Nacional). This in turn led to the establishment of the right-wing dictatorship of the Estado Novo under
António de Oliveira Salazar in 1933. Portugal was one of only five European countries to remain neutral in World
War II. Portugal was a founding member of NATO (1949), the OECD (1961) and the European Free Trade Association
(EFTA, 1960). Gradually, new economic development projects and relocation of mainland Portuguese citizens into the overseas
provinces in Africa were initiated, with Angola and Mozambique, as the largest and richest overseas territories,
being the main targets of those initiatives. These actions were used to affirm Portugal's status as a transcontinental
nation and not as a colonial empire. The Portuguese government and army successfully resisted the decolonization
of its overseas territories until April 1974, when a bloodless left-wing military coup in Lisbon, known as the Carnation
Revolution, led the way for the independence of the overseas territories in Africa and Asia, as well as for the restoration
of democracy after two years of a transitional period known as PREC (Processo Revolucionário Em Curso). This period
was characterized by social turmoil and power disputes between left- and right-wing political forces. The retreat
from the overseas territories and the independence terms accepted by Portugal's negotiators,
which created independent states in 1975, prompted a mass exodus of Portuguese citizens
from Portugal's African territories (mostly from Portuguese Angola and Mozambique). The country continued to be governed
by a Junta de Salvação Nacional until the Portuguese legislative election of 1976. It was won by the Portuguese Socialist
Party (PS), and its leader, Mário Soares, became Prime Minister of the 1st Constitutional Government on 23 July. Mário
Soares would be Prime Minister from 1976 to 1978 and again from 1983 to 1985. In this capacity Soares tried to resume
the economic growth and development record that had been achieved before the Carnation Revolution, during the last
decade of the previous regime. He initiated the process of accession to the European Economic Community (EEC) by
starting accession negotiations as early as 1977. The country bounced between socialism and adherence to the neoliberal
model. Land reform and nationalizations were enforced; the Portuguese Constitution (approved in 1976) was rewritten
in order to accommodate socialist and communist principles. Until the constitutional revisions of 1982 and 1989,
the constitution was a highly charged ideological document with numerous references to socialism, the rights of workers,
and the desirability of a socialist economy. Portugal's economic situation after its transition to democracy obliged
the government to pursue International Monetary Fund (IMF)-monitored stabilization programs in 1977–78 and 1983–85.
Portugal has a mainly Mediterranean climate (Csa in the south, the interior and the Douro region; Csb in the north, central
Portugal and coastal Alentejo), with a mixed oceanic climate along the northern half of the coastline and a semi-arid
or steppe climate (BSk) in certain parts of the Beja district in the far south, according to the Köppen-Geiger
classification. It is one of the warmest European countries: the annual average temperature in mainland Portugal
varies from 8–12 °C (46.4–53.6 °F) in the mountainous interior north to 16–19 °C (60.8–66.2 °F) in the south and
on the Guadiana river basin. The Algarve, separated from the Alentejo region by mountains reaching up to 900 metres
(3,000 ft) in Alto de Fóia, has a climate similar to that of the southern coastal areas of Spain or Southwest Australia.
Both the archipelagos of the Azores and Madeira have a subtropical climate, although variations between islands exist,
making weather predictions very difficult (owing to rough topography). The Madeira and Azorean archipelagos have
a narrower temperature range, with annual average temperatures exceeding 20 °C (68 °F) along the coast (according
to the Portuguese Meteorological Institute). Some islands in the Azores do have drier months in the summer. Consequently,
islands of the Azores have been identified as having a Mediterranean climate (both Csa and Csb types), while some
islands (such as Flores or Corvo) are classified as maritime temperate (Cfb) or humid subtropical (Cfa),
according to the Köppen-Geiger classification. Although humans have occupied the territory of Portugal for
thousands of years, something still remains of the original vegetation. In Gerês, both deciduous and coniferous forests
can be found; an extremely rare mature Mediterranean forest remains in parts of the Arrábida mountains;
and a subtropical laurissilva forest, dating back to the Tertiary period, covers its largest continuous area in the
world on the main island of Madeira. Due to population decrease and rural exodus, Pyrenean oak and other local
native trees are colonizing many abandoned areas. Boar, Iberian red deer, roe deer and Iberian wild goat, for example,
are reported to have expanded greatly during recent decades. Boar have recently been found roaming at night in
large urban areas, such as Setúbal. Protected areas of Portugal include one national park (Portuguese: Parque Nacional),
12 natural parks (Portuguese: Parque Natural), nine natural reserves (Portuguese: Reserva Natural), five natural
monuments (Portuguese: Monumento Natural), and seven protected landscapes (Portuguese: Paisagem Protegida), which
include the Parque Nacional da Peneda-Gerês, the Parque Natural da Serra da Estrela and the Paul d'Arzila. Laurisilva
is a unique type of subtropical rainforest found in few areas of Europe and the world: in the Azores, and in particular
on the island of Madeira, there are large areas of endemic laurisilva forest (the latter protected as a natural
heritage preserve). The fauna is diverse, including the fox, badger, Iberian lynx,
Iberian wolf, wild goat (Capra pyrenaica), wild cat (Felis silvestris), hare, weasel, polecat, chameleon, mongoose,
civet, brown bear (spotted near the Rio Minho, close to Peneda-Gerês) and many others. Portugal is an
important stopover for migratory birds, in places such as Cape St. Vincent or the Monchique mountains, where thousands
of birds cross from Europe to Africa during the autumn or in the spring (return migration). There are more than 100
freshwater fish species, varying from the giant European catfish (in the Tagus International Natural Park) to some
small and endemic species that live only in small lakes (along the western portion of country, for example). Some
of these rare and specific species are highly endangered because of habitat loss, pollution and drought. Up-welling
along the west coast of Portugal makes the sea extremely rich in nutrients and diverse species of marine fish; the
Portuguese marine waters are one of the richest in the world. Marine fish species are more common, and include thousands
of species, such as the sardine (Sardina pilchardus), tuna and Atlantic mackerel. Bioluminescent species are also
well represented (including species of different colours and forms), such as the glowing plankton that can be observed
on some beaches. The President, who is elected to a five-year term, has an executive role: the current
President is Aníbal Cavaco Silva. The Assembly of the Republic is a single chamber parliament composed of 230 deputies
elected for a four-year term. The Government is headed by the Prime Minister (currently António Costa) and includes
Ministers and Secretaries of State. The Courts are organized into several levels, among the judicial, administrative
and fiscal branches. The Supreme Courts are institutions of last resort/appeal. A thirteen-member Constitutional
Court oversees the constitutionality of the laws. Portugal operates a multi-party system of competitive legislatures and local
administrative governments at the national, regional and local levels. The Assembly of the Republic, Regional Assemblies
and local municipalities and parishes, are dominated by two political parties, the Socialist Party and the Social
Democratic Party, in addition to the Unitary Democratic Coalition (Portuguese Communist Party and Ecologist Party
"The Greens"), the Left Bloc and the Democratic and Social Centre – People's Party, which garner between 5 and 15%
of the vote regularly. The Head of State of Portugal is the President of the Republic, elected to a five-year term
by direct, universal suffrage. He or she also has supervision and reserve powers. These powers are often compared
with the "moderating power" held by the King under the Portuguese constitutional monarchy.
Presidential powers include the appointment of the Prime Minister and the other members of the Government (where
the President takes into account the results of legislative elections); dismissing the Prime Minister; dissolving
the Assembly of the Republic (to call early elections); vetoing legislation (which may be overridden by the Assembly
with a supermajority); and declaring a state of war or siege. The President is also the ex officio Commander-in-Chief
of the Armed Forces. The Council of Ministers – chaired by the Prime Minister (or by the President of Portugal
at the Prime Minister's request) and comprising the Ministers (and possibly one or more Deputy Prime Ministers) – acts as the cabinet.
Each government is required to define the broad outline of its policies in a programme, and present it to the Assembly
for a mandatory period of debate. The failure of the Assembly to reject the government programme by an absolute majority
of deputies confirms the cabinet in office. Portuguese law was applied in the former colonies and territories and continues
to be the major influence for those countries. Portugal's main police organizations are the Guarda Nacional Republicana
– GNR (National Republican Guard), a gendarmerie; the Polícia de Segurança Pública – PSP (Public Security Police),
a civilian police force that works in urban areas; and the Polícia Judiciária – PJ (Judicial Police), a highly specialized
criminal investigation police that is overseen by the Public Ministry. Portugal has arguably the most liberal laws
concerning possession of illicit drugs in the Western world. In 2001, Portugal decriminalized possession of effectively
all drugs that are still illegal in other developed nations including, but not limited to, cannabis, cocaine, heroin,
and LSD. While possession is decriminalized, trafficking and possession of more than a "10-day supply for personal use" are still
punishable by jail time and fines. People caught with small amounts of any drug are given the choice to go to a rehab
facility, and may refuse treatment without consequences. Despite criticism from other European nations, who stated
Portugal's drug consumption would tremendously increase, overall drug use has declined along with the number of HIV
infection cases, which had dropped 50 percent by 2009. Drug use among 16- to 18-year-olds also declined, although
marijuana use rose slightly among that age group. Administratively, Portugal is divided into 308 municipalities
(Portuguese: municípios or concelhos), which after a reform in 2013 are subdivided into 3,092 civil parishes (Portuguese:
freguesias). Operationally, the municipality and the civil parish, along with the national government, are the only
legally recognized administrative units in Portugal (cities, towns and villages, for example,
have no standing in law, although they may serve as catchments for the delivery of services). For statistical purposes the
Portuguese government also identifies NUTS regions, inter-municipal communities and, informally, the district system, which was
used until European integration and is being phased out by the national government. Continental Portugal
is agglomerated into 18 districts, while the archipelagos of the Azores and Madeira are governed as autonomous regions;
the largest units, established since 1976, are mainland Portugal (Portuguese: Portugal Continental) and the
autonomous regions of Portugal (Azores and Madeira). The armed forces have three branches: Navy, Army and Air Force.
They serve primarily as a self-defense force whose mission is to protect the territorial integrity of the country
and provide humanitarian assistance and security at home and abroad. As of 2008, the three branches numbered 39,200
active personnel including 7,500 women. Portuguese military expenditure in 2009 was $5.2 billion, representing 2.1
percent of GDP. Military conscription was abolished in 2004. The minimum age for voluntary recruitment is 18 years.
The Army (21,000 personnel) comprises three brigades and other small units. An infantry brigade (mainly equipped
with Pandur II APC), a mechanized brigade (mainly equipped with Leopard 2 A6 tanks and M113 APC) and a Rapid Reaction
Brigade (consisting of paratroopers, commandos and rangers). The Navy (10,700 personnel, of which 1,580 are marines)
has five frigates, seven corvettes, two submarines, and 28 patrol and auxiliary vessels. The Air Force (7,500 personnel)
has the Lockheed F-16 Fighting Falcon and the Dassault/Dornier Alpha Jet as the main combat aircraft. In the 20th
century, Portugal engaged in two major conflicts: World War I and the Portuguese Colonial War (1961–1974). After
the end of the Portuguese Empire in 1975, the Portuguese Armed Forces have participated in peacekeeping missions
in East Timor, Bosnia, Kosovo, Afghanistan, Somalia, Iraq (Nasiriyah) and Lebanon. Portugal also conducted several
independent unilateral military operations abroad, as were the cases of the interventions of the Portuguese Armed
Forces in Angola in 1992 and in Guinea-Bissau in 1998 with the main objectives of protecting and withdrawing of Portuguese
and foreign citizens threatened by local civil conflicts. After the bailout was announced, the Portuguese government
headed by Pedro Passos Coelho implemented measures intended to improve the State's financial situation,
including tax hikes, a freeze on lower civil-service wages and cuts of 14.3% to higher wages, on top of the
government's spending cuts. The Portuguese government also agreed to eliminate its golden share in Portugal Telecom
which gave it veto power over vital decisions. In 2012, all public servants had already seen an average wage cut
of 20% relative to their 2010 baseline, with cuts reaching 25% for those earning more than €1,500 per month.
A report released in January 2011 by the Diário de Notícias and published in Portugal by Gradiva demonstrated
that in the period between the Carnation Revolution in 1974 and 2010, the democratic Portuguese Republic governments
encouraged over-expenditure and investment bubbles through unclear Public–private partnerships and funding of numerous
ineffective and unnecessary external consultancy and advisory of committees and firms. This allowed considerable
slippage in state-managed public works and inflated top management and head officer bonuses and wages. Persistent
and lasting recruitment policies boosted the number of redundant public servants. Risky credit, public debt creation,
and European structural and cohesion funds were mismanaged across almost four decades. After the financial crisis
of 2007–08, it was known in 2008–2009 that two Portuguese banks (Banco Português de Negócios (BPN) and Banco Privado
Português (BPP)) had been accumulating losses for years due to bad investments, embezzlement and accounting fraud.
The case of BPN was particularly serious because of its size, market share and political implications: Portugal's
then President, Cavaco Silva, and some of his political allies maintained personal and business relationships
with the bank and its CEO, who was eventually charged and arrested for fraud and other crimes. On the grounds of
avoiding a potentially serious financial crisis in the Portuguese economy, the Portuguese government decided to give
them a bailout, eventually at a future loss to taxpayers and to the Portuguese people in general. The Portuguese
currency is the euro (€), which replaced the Portuguese Escudo, and the country was one of the original member states
of the eurozone. Portugal's central bank is the Banco de Portugal, an integral part of the European System of Central
Banks. Most industries, businesses and financial institutions are concentrated in the Lisbon and Porto metropolitan
areas—the Setúbal, Aveiro, Braga, Coimbra and Leiria districts are the biggest economic centres outside these two
main areas. According to the World Travel Awards, Portugal was Europe's Leading Golf Destination in
2012 and 2013. Since the Carnation Revolution of 1974, which culminated in the end of one of Portugal's most notable
phases of economic expansion (that started in the 1960s), a significant change has occurred in the nation's annual
economic growth. After the turmoil of the 1974 revolution and the PREC period, Portugal tried to
adapt to a changing modern global economy, a process that continues in 2013. Since the 1990s, Portugal's public consumption-based
economic development model has been slowly changing to a system that is focused on exports, private investment and
the development of the high-tech sector. Consequently, business services have overtaken more traditional industries
such as textiles, clothing, footwear and cork (Portugal is the world's leading cork producer), wood products and
beverages. In the second decade of the 21st century the Portuguese economy suffered its most severe recession since
the 1970s resulting in the country having to be bailed out by the European Commission, European Central Bank and
International Monetary Fund. The bailout, agreed to in 2011, required Portugal to enter into a range of austerity
measures in exchange for funding support of €78 billion. In May 2014 the country exited the bailout but reaffirmed
its commitment to maintaining its reformist momentum. At the time of exiting the bailout the economy had contracted
by 0.7% in the first quarter of 2014; unemployment, while still high, had fallen to 15.3 percent. Agriculture
in Portugal is based on small to medium-sized family-owned dispersed units. However, the sector also includes larger
scale intensive farming export-oriented agrobusinesses backed by companies (like Grupo RAR's Vitacress, Sovena, Lactogal,
Vale da Rosa, Companhia das Lezírias and Valouro). The country produces a wide variety of crops and livestock products,
including tomatoes, citrus, green vegetables, rice, corn, barley, olives, oilseeds, nuts, cherries, bilberry, table
grapes, edible mushrooms, dairy products, poultry and beef. Traditionally a sea-power, Portugal has had a strong
tradition in the Portuguese fishing sector and is one of the countries with the highest fish consumption per capita.
The main landing sites in Portugal (including Azores and Madeira), according to total landings in weight by year,
are the harbours of Matosinhos, Peniche, Olhão, Sesimbra, Figueira da Foz, Sines, Portimão and Madeira. Portuguese
processed fish products are exported through several companies, under a number of different brands and registered
trademarks, such as Ramirez (the world’s oldest active canned fish producer), Bom Petisco, Nero, Combate, Comur,
General, Líder, Manná, Murtosa, Pescador, Pitéu, Tenório, Torreira and Vasco da Gama. Portugal is
a significant European minerals producer and is ranked among Europe's leading copper producers. The nation is also
a notable producer of tin, tungsten and uranium. However, the country lacks the potential for hydrocarbon
exploration and aluminium production, a limitation that has hindered the development of Portugal's mining and metallurgy sectors.
Although the country has vast iron and coal reserves—mainly in the north—after the 1974 revolution and the consequent
economic globalization, low competitiveness forced a decrease in the extraction activity for these minerals. The
Panasqueira and Neves-Corvo mines are among the most recognised Portuguese mines that are still in operation.[citation
needed] Industry is diversified, ranging from automotive (Volkswagen Autoeuropa and Peugeot Citroën), aerospace (Embraer
and OGMA), electronics and textiles, to food, chemicals, cement and wood pulp. Volkswagen Group's Autoeuropa motor
vehicle assembly plant in Palmela is among the largest foreign direct investment projects in Portugal. Modern non-traditional
technology-based industries, such as aerospace, biotechnology and information technology, have been developed in
several locations across the country. Alverca, Covilhã, Évora, and Ponte de Sor are the main centres of the Portuguese
aerospace industry, which is led by Brazil-based company Embraer and the Portuguese company OGMA. Following the turn
of the 21st century, many major biotechnology and information technology industries have been founded, and are concentrated
in the metropolitan areas of Lisbon, Porto, Braga, Coimbra and Aveiro.[citation needed] Travel and tourism continue
to be extremely important for Portugal, with visitor numbers forecast to increase significantly in the future.[citation
needed] However, competition from Eastern European destinations such as Croatia, which offer similar and often cheaper attractions, continues to grow. Consequently, the country has had to focus on its niche attractions, such as health, nature and rural tourism, to stay ahead of its competitors.
The poor performance of the Portuguese economy was explored in April 2007 by The Economist, which described Portugal
as "a new sick man of Europe". From 2002 to 2007, the number of unemployed increased by 65% (270,500 unemployed citizens
in 2002, 448,600 unemployed citizens in 2007). By early December 2009, the unemployment rate had reached 10.2% –
a 23-year record high. In December 2009, ratings agency Standard & Poor's lowered its long-term credit assessment
of Portugal to "negative" from "stable," voicing pessimism on the country's structural weaknesses in the economy
and weak competitiveness that would hamper growth and the capacity to strengthen its public finances and reduce debt.
In July 2011, ratings agency Moody's downgraded its long-term credit assessment of Portugal after warning of deteriorating
risk of default in March 2011. On 6 April 2011, after his proposed "Plan for Stability and Growth IV" (PEC IV) was
rejected by the Parliament, Prime Minister José Sócrates announced on national television that the country would
request financial assistance from the IMF and the European Financial Stability Facility, as Greece and the Republic
of Ireland had done previously. It was the third time that the Portuguese government had requested external financial
aid from the IMF—the first occasion occurred in the late 1970s following the Carnation Revolution. In October 2011,
Moody's Investor Services downgraded nine Portuguese banks due to financial weakness. In 2005, the number of public
employees per thousand inhabitants in the Portuguese government (70.8) was above the European Union average (62.4
per thousand inhabitants). By EU and US standards, Portugal's justice system is internationally known for being slow and inefficient; by 2011 it was the second slowest in Western Europe (after Italy). Conversely, Portugal has one of the highest rates of judges and prosecutors—over 30 per 100,000 people. The entire Portuguese public service
has been known for its mismanagement, useless redundancies, waste, excess of bureaucracy and a general lack of productivity
in certain sectors, particularly in justice. In the first week of May 2013, Prime Minister Passos Coelho announced a significant government plan for the public sector, whereby 30,000 jobs would be cut and weekly working hours would be increased from 35 to 40. Coelho explained that such austerity measures were necessary if Portugal was to avoid another bailout from the European Commission, European Central Bank and International Monetary Fund; the overall plan intends to enact further cuts of €4.8 billion over a three-year period. Passos Coelho also announced that the retirement age would be increased from 65 to 66, announced cuts to pensions, unemployment benefits, and health, education and science spending, and abolished compulsory English classes in basic education, while leaving the pensions of judges and diplomats untouched and not raising the retirement age of the military and police forces. He did, however, meaningfully cut politicians' salaries. These policies have
led to social unrest and to confrontations between several institutions, namely between the Government and the Constitutional
Court. Several prominent figures from the parties supporting the government have also spoken out against the policies adopted to address the financial crisis. After years of sharp increases, unemployment in Portugal has fallen continuously since the third quarter of 2014, decreasing from a peak of 17.7% in early 2013 to 11.9% in the second quarter of 2015. It remains high, however, compared with Portugal's historical average. In the second quarter of 2008 the unemployment rate was 7.3%, but it rose immediately in the following period. By December 2009, unemployment had surpassed the 10% mark nationwide in the wake of worldwide events; by 2010 the rate was around 11%, and in 2011 it was above 12%.[citation needed] The first quarter of 2013 set a new unemployment record for Portugal, at 17.7%—up from 17% in the previous quarter—and the Government predicted an 18.5% unemployment rate for 2014. However, in the third quarter of that year it surprisingly declined to 15.6%. From then on, the downtrend continued, with unemployment falling to 13.9% in the second half of 2014 and to 11.9% in the second quarter of 2015. Tourist hotspots in Portugal are Lisbon, the Algarve, Madeira, Porto and the city of Coimbra. In addition, between four and five million religious pilgrims visit Fátima each year, where apparitions of the Blessed Virgin Mary
to three shepherd children reportedly took place in 1917. The Sanctuary of Fátima is one of the largest Roman Catholic
shrines in the world. The Portuguese government continues to promote and develop new tourist destinations, such as
the Douro Valley, the island of Porto Santo, and the Alentejo. Lisbon is the 16th most-visited city in Europe, with seven million tourists staying in the city's hotels in 2006, a number that grew 11.8% over the previous year. In recent years Lisbon has surpassed the Algarve as the leading tourist region in Portugal. Porto and Northern Portugal, especially the urban areas north of the Douro River valley, were the fastest-growing tourist destination (11.9%) in 2006, and in 2010 surpassed Madeira as the third most visited destination.[citation needed] By the early 1970s,
Portugal's fast economic growth with increasing consumption and purchase of new automobiles set the priority for
improvements in transportation. Again in the 1990s, after joining the European Economic Community, the country built
many new motorways. Today, the country has a 68,732 km (42,708 mi) road network, of which almost 3,000 km (1,864 mi) are part of a system of 44 motorways. Opened in 1944, the first motorway (which linked Lisbon to the National Stadium) was an innovative project that made Portugal one of the first countries in the world to establish a motorway (this roadway eventually became the Lisbon–Cascais highway, or A5). Although a few other stretches were built in the 1960s and 1970s, large-scale motorway construction began only in the early 1980s. In 1972, Brisa, the highway concessionaire, was founded to manage many of the region's motorways.
Tolls are charged on many highways (see Via Verde). The Vasco da Gama Bridge is the longest bridge in Europe. Continental
Portugal's 89,015 km2 (34,369 sq mi) territory is serviced by four international airports located near the principal
cities of Lisbon, Porto, Faro and Beja. Lisbon's geographical position makes it a stopover for many foreign airlines
at several airports within the country. The primary flag carrier is TAP Portugal, although many other domestic airlines provide services within and beyond the country. The government decided to build a new airport outside Lisbon, in Alcochete, to replace Lisbon Portela Airport, though this plan has been stalled due to austerity. Currently,
the most important airports are in Lisbon, Porto, Faro, Funchal (Madeira), and Ponta Delgada (Azores), managed by
the national airport authority group ANA – Aeroportos de Portugal. A national railway system extends throughout the country and into Spain. Rail transport of passengers and goods is provided over the 2,791 km (1,734 mi) of railway lines currently in service, of which 1,430 km (889 mi) are electrified and about 900 km (559 mi) allow train speeds greater than 120 km/h (75 mph). The railway network is managed by REFER, while the transport of passengers and goods is the responsibility of Comboios de Portugal (CP); both are public companies. In 2006 CP carried 133 million passengers and 9,750,000 t (9,600,000 long tons;
10,700,000 short tons) of goods. The two largest metropolitan areas have subway systems: Lisbon Metro and Metro Sul
do Tejo in the Lisbon Metropolitan Area and Porto Metro in the Porto Metropolitan Area, each with more than 35 km
(22 mi) of lines. In Lisbon, tram services have been supplied by the Companhia de Carris de Ferro de Lisboa (Carris) for over a century. In Porto, a tram network, of which only a tourist line on the shores of the Douro remains, began operating on 12 September 1895 (a first for the Iberian Peninsula). All major cities and towns have their
own local urban transport network, as well as taxi services. Scientific and technological research activities in
Portugal are mainly conducted within a network of R&D units belonging to public universities and state-managed autonomous
research institutions like the INETI – Instituto Nacional de Engenharia, Tecnologia e Inovação and the INRB – Instituto
Nacional dos Recursos Biológicos. The funding and management of this research system is mainly conducted under the
authority of the Ministry of Science, Technology and Higher Education (MCTES) itself and the MCTES's Fundação para
a Ciência e Tecnologia (FCT). The largest R&D units of the public universities by volume of research grants and peer-reviewed
publications, include biosciences research institutions like the Instituto de Medicina Molecular, the Centre for
Neuroscience and Cell Biology, the IPATIMUP, the Instituto de Biologia Molecular e Celular and the Abel Salazar Biomedical
Sciences Institute. Among the largest non-state-run research institutions in Portugal are the Instituto Gulbenkian
de Ciência and the Champalimaud Foundation, a neuroscience and oncology research centre which also awards annually one of the largest monetary science prizes in the world. A number of national and multinational high-tech and industrial companies are also responsible for research and development projects. One of the oldest
learned societies of Portugal is the Sciences Academy of Lisbon, founded in 1779. Portugal has the largest aquarium
in Europe, the Lisbon Oceanarium, and the Portuguese have several other notable organizations focused on science-related exhibits and outreach, such as the state agency Ciência Viva, a programme of the Portuguese Ministry of Science and Technology for the promotion of a scientific and technological culture among the Portuguese population, the Science
Museum of the University of Coimbra, the National Museum of Natural History at the University of Lisbon, and the
Visionarium. With the emergence and growth of several science parks throughout the world that helped create many
thousands of scientific, technological and knowledge-based businesses, Portugal started to develop several science
parks across the country. These include the Taguspark (in Oeiras), the Coimbra iParque (in Coimbra), Biocant
(in Cantanhede), the Madeira Tecnopolo (in Funchal), Sines Tecnopolo (in Sines), Tecmaia (in Maia) and Parkurbis
(in Covilhã). Companies locate in the Portuguese science parks to take advantage of a variety of services ranging
from financial and legal advice through to marketing and technological support. Portugal has considerable resources
of wind and river power, the two most cost-effective renewable sources. Since the turn of the 21st century, there
has been a trend towards the development of a renewable resource industry and reduction of both consumption and use
of fossil fuel resources. In 2006, the world's largest solar power plant at that date, the Moura Photovoltaic Power
Station, began operating near Moura, in the south, while the world's first commercial wave power farm, the Aguçadoura
Wave Farm, opened in the Norte region (2008). By the end of 2006, 66% of the country's electrical production came from coal and fuel power plants, while 29% was derived from hydroelectric dams and 6% from wind energy. Portugal's
national energy transmission company, Redes Energéticas Nacionais (REN), uses sophisticated modeling to predict weather,
especially wind patterns, and computer programs to calculate energy from the various renewable-energy plants. Before
the solar/wind revolution, Portugal had generated electricity from hydropower plants on its rivers for decades. New
programs combine wind and water: wind-driven turbines pump water uphill at night, the most blustery period; then
the water flows downhill by day, generating electricity, when consumer demand is highest. Portugal’s distribution
system is also now a two-way street. Instead of just delivering electricity, it draws electricity from even the smallest
generators, like rooftop solar panels. The government aggressively encouraged such contributions by setting a premium price for rooftop-generated solar electricity. Statistics Portugal (Portuguese: INE – Instituto
Nacional de Estatística) estimates that, according to the 2011 census, the population was 10,562,178 (of which 52%
was female, 48% was male). This population has been relatively homogeneous for most of its history: a single religion
(Catholicism) and a single language have contributed to this ethnic and national unity, particularly after the expulsion
of the Moors and Jews. A considerable number of Moors and Jews, nevertheless, stayed in Portugal, under the condition
that they converted to Catholicism, and afterwards they were known as Mouriscos (former Muslims) and Cristãos Novos
(New Christians or former Jews) some of whom may have continued to observe rabbinic Judaism in secret, as in the
case of the secret Jews of Belmonte, who now observe the Jewish faith openly. After 1772 the distinction between
Old and New Christians was abolished by decree. Some famous Portuguese New Christians were the mathematician Pedro
Nunes and the physician and naturalist Garcia de Orta. The most important demographic influence in the modern Portuguese
seems to be the oldest one; current interpretation of Y-chromosome and mtDNA data suggests that the Portuguese have
their origin in Paleolithic peoples that began arriving in Europe around 45,000 years ago. All subsequent
migrations did leave an impact, genetically and culturally, but the main population source of the Portuguese is still
Paleolithic. Genetic studies show Portuguese populations not to be significantly different from other European populations.
Portugal's colonial history has long been a cornerstone of its national identity, as has its geographic position
at the south-western corner of Europe, looking out into the Atlantic Ocean. It was one of the last western colonial
European powers to give up its overseas territories (among them Angola and Mozambique in 1975), turning over the
administration of Macau to the People's Republic of China at the end of 1999. Consequently, it has both influenced
and been influenced by cultures from former colonies or dependencies, resulting in immigration from these former territories for economic or personal reasons. Portugal, long a country of emigration (the vast majority
of Brazilians have Portuguese ancestry), has now become a country of net immigration, and not just from the last
Indian (Portuguese until 1961), African (Portuguese until 1975), and Far East Asian (Portuguese until 1999) overseas
territories. An estimated 800,000 Portuguese returned to Portugal as the country's African possessions gained independence
in 1975. By 2007, Portugal had 10,617,575 inhabitants of whom about 332,137 were legal immigrants. According to the
2011 Census, 81.0% of the Portuguese population are Roman Catholic. The country has small Protestant, Latter-day
Saint, Muslim, Hindu, Sikh, Eastern Orthodox Church, Jehovah's Witnesses, Baha'i, Buddhist, Jewish and Spiritist
communities. Influences from African traditional religion and Chinese traditional religion are also felt among many people, particularly in fields related to traditional Chinese medicine and African witch doctors. Some 6.8% of
the population declared themselves to be non-religious, and 8.3% did not give any answer about their religion. Many
Portuguese holidays, festivals and traditions have a Christian origin or connotation. Although relations between
the Portuguese state and the Roman Catholic Church were generally amiable and stable since the earliest years of
the Portuguese nation, their relative power fluctuated. In the 13th and 14th centuries, the church enjoyed both riches
and power stemming from its role in the reconquest, its close identification with early Portuguese nationalism and
the foundation of the Portuguese educational system, including the first university. The growth of the Portuguese
overseas empire made its missionaries important agents of colonization, with important roles in the education and
evangelization of people from all the inhabited continents. The growth of liberal and nascent republican movements
during the eras leading to the formation of the First Portuguese Republic (1910–26) changed the role and importance
of organized religion. Within the white inescutcheon, the five quinas (small blue shields) with their five white bezants represent the five wounds of Christ (Portuguese: Cinco Chagas) at his crucifixion and are popularly associated with the "Miracle of Ourique". The story associated with this miracle tells that before the Battle of Ourique (25
July 1139), an old hermit appeared before Count Afonso Henriques (future Afonso I) as a divine messenger. He foretold
Afonso's victory and assured him that God was watching over him and his peers. The messenger advised him to walk away from his camp, alone, the following night if he heard a nearby chapel bell tolling. In doing so, he witnessed
an apparition of Jesus on the cross. Ecstatic, Afonso heard Jesus promising victories for the coming battles, as
well as God's wish to act through Afonso, and his descendants, in order to create an empire which would carry His
name to unknown lands, thus choosing the Portuguese to perform great tasks. Portuguese is the official language of Portugal. It is a Romance language that originated in what is now Galicia and Northern Portugal, deriving from Galician-Portuguese, the common language of the Galician and Portuguese people until the independence of Portugal. Particularly in the North of Portugal, there are still many similarities between the Galician culture
and the Portuguese culture. Galicia is a consultative observer of the Community of Portuguese Language Countries.
According to Ethnologue, Portuguese and Spanish have a lexical similarity of 89%; educated speakers
of each language can communicate easily with one another. The Portuguese language is derived from the Latin spoken
by the romanized Pre-Roman peoples of the Iberian Peninsula around 2000 years ago—particularly the Celts, Tartessians,
Lusitanians and Iberians. In the 15th and 16th centuries, the language spread worldwide as Portugal established a
colonial and commercial empire between 1415 and 1999. Portuguese is now spoken as a native language on five continents, with Brazil accounting for the largest number of native Portuguese speakers of any country (200 million
speakers in 2012). The total adult literacy rate is 99 percent. Portuguese primary school enrollments are close to
100 percent. According to the OECD's Programme for International Student Assessment (PISA) 2009, the average Portuguese
15-year-old student, when rated in terms of reading literacy, mathematics and science knowledge, is placed at the
same level as those students from the United States, Sweden, Germany, Ireland, France, Denmark, United Kingdom, Hungary
and Taipei, with 489 points (493 is the average). Over 35% of college-age citizens (20 years old) attend one of the
country's higher education institutions (compared with 50% in the United States and 35% in the OECD countries). In
addition to being a destination for international students, Portugal is also among the top places of origin for international
students. All higher education students, both domestic and international, totaled 380,937 in 2005. Portuguese universities
have existed since 1290. The oldest Portuguese university was first established in Lisbon before moving to Coimbra.
Historically, within the scope of the Portuguese Empire, the Portuguese founded the oldest engineering school of
the Americas (the Real Academia de Artilharia, Fortificação e Desenho of Rio de Janeiro) in 1792, as well as the
oldest medical college in Asia (the Escola Médico-Cirúrgica of Goa) in 1842. The largest university in Portugal is
the University of Lisbon. Since 2006, the Bologna process has been adopted by Portuguese universities and polytechnic institutes. Higher education in state-run educational establishments is provided on a competitive basis; a system of numerus clausus is enforced through a national database on student admissions. However, every higher education institution also offers a number of additional vacant places through other extraordinary admission processes for
sportsmen, mature applicants (over 23 years old), international students, foreign students from the Lusosphere, degree
owners from other institutions, students from other institutions (academic transfer), former students (readmission),
and course change, which are subject to specific standards and regulations set by each institution or course department.
Most student costs are supported with public money. However, with the increasing tuition fees students must pay to attend a Portuguese state-run higher education institution, and with the attraction of new types of students (many enrolled part-time or in evening classes), such as employees, businesspeople, parents, and pensioners, many departments make a substantial profit from each additional student enrolled, boosting the college or university's gross tuition revenue without loss of educational quality (in teachers per student, computers per student, classroom size per student, etc.). The Ministry of Health is responsible for developing health policy as well as managing the
SNS. Five regional health administrations are in charge of implementing the national health policy objectives, developing
guidelines and protocols and supervising health care delivery. Decentralization efforts have aimed at shifting financial
and management responsibility to the regional level. In practice, however, the autonomy of regional health administrations
over budget setting and spending has been limited to primary care. Similar to the other Eur-A countries, most Portuguese
die from noncommunicable diseases. Mortality from cardiovascular diseases (CVD) is higher than in the eurozone, but
its two main components, ischaemic heart disease and cerebrovascular disease, display inverse trends compared with
the Eur-A, with cerebrovascular disease being the single biggest killer in Portugal (17%). Mortality from cancer is 12% lower in Portugal than in the Eur-A, but it is not declining as rapidly. Cancer is more frequent among children as well as among women younger than 44 years. Although lung cancer (slowly increasing among women) and breast cancer (decreasing rapidly) are scarcer, cancers of the cervix and the prostate are more frequent.
Portugal has the highest mortality rate for diabetes in the Eur-A, with a sharp increase since the 1980s. People
are usually well informed about their health status, the positive and negative effects of their behaviour on their
health and their use of health care services. Yet their perceptions of their health can differ from what administrative
and examination-based data show about levels of illness within populations. Thus, survey results based on self-reporting
at the household level complement other data on health status and the use of services. Only one third of adults rated
their health as good or very good in Portugal (Kasmel et al., 2004). This is the lowest of the Eur-A countries reporting
and reflects the relatively adverse situation of the country in terms of mortality and selected morbidity. Portugal
has developed a specific culture while being influenced by various civilizations that have crossed the Mediterranean
and the European continent, or were introduced when it played an active role during the Age of Discovery. In the
1990s and 2000s, Portugal modernized its public cultural facilities, in addition to the Calouste Gulbenkian
Foundation established in 1956 in Lisbon. These include the Belém Cultural Centre in Lisbon, Serralves Foundation
and the Casa da Música, both in Porto, as well as new public cultural facilities like municipal libraries and concert
halls that were built or renovated in many municipalities across the country. Portugal is home to fifteen UNESCO
World Heritage Sites, ranking it 8th in Europe and 17th in the world. Traditional architecture is distinctive and includes the Manueline, also known as Portuguese late Gothic, a sumptuous, composite Portuguese style of architectural
ornamentation of the first decades of the 16th century. A 20th-century interpretation of traditional architecture,
Soft Portuguese style, appears extensively in major cities, especially Lisbon. Modern Portugal has given the world
renowned architects like Eduardo Souto de Moura, Álvaro Siza Vieira (both Pritzker Prize winners) and Gonçalo Byrne.
In Portugal Tomás Taveira is also noteworthy, particularly for stadium design. Portuguese cinema has a long tradition,
reaching back to the birth of the medium in the late 19th century. Portuguese film directors such as Arthur Duarte,
António Lopes Ribeiro, António Reis, Pedro Costa, Manoel de Oliveira, João César Monteiro, António-Pedro Vasconcelos,
Fernando Lopes, João Botelho and Leonel Vieira, are among those that gained notability. Noted Portuguese film actors
include Joaquim de Almeida, Daniela Ruah, Maria de Medeiros, Diogo Infante, Soraia Chaves, Ribeirinho, Lúcia Moniz,
and Diogo Morgado. Adventurer and poet Luís de Camões (c. 1524–1580) wrote the epic poem "Os Lusíadas" (The Lusiads),
with Virgil's Aeneid as his main influence. Modern Portuguese poetry is rooted in neoclassic and contemporary styles,
as exemplified by Fernando Pessoa (1888–1935). Modern Portuguese literature is represented by authors such as Almeida
Garrett, Camilo Castelo Branco, Eça de Queiroz, Fernando Pessoa, Sophia de Mello Breyner Andresen, António Lobo Antunes
and Miguel Torga. Particularly popular and distinguished is José Saramago, recipient of the 1998 Nobel Prize in Literature.
Portuguese cuisine is diverse. The Portuguese consume a lot of dried cod (bacalhau in Portuguese), for which there are hundreds of recipes, reputedly more than enough for each day of the year. Two other popular fish
recipes are grilled sardines and caldeirada, a potato-based stew that can be made from several types of fish. Typical
Portuguese meat recipes, that may be made out of beef, pork, lamb, or chicken, include cozido à portuguesa, feijoada,
frango de churrasco, leitão (roast suckling pig) and carne de porco à alentejana. A very popular northern dish is
the arroz de sarrabulho (rice stewed in pig's blood) or the arroz de cabidela (rice and chicken stewed in chicken's blood). Typical fast food dishes include the Francesinha (Frenchie) from Porto, and bifanas (grilled pork) or prego
(grilled beef) sandwiches, which are well known around the country. The Portuguese art of pastry has its origins
in the many medieval Catholic monasteries spread widely across the country. These monasteries, using very few ingredients
(mostly almonds, flour, eggs and some liquor), managed to create a spectacular wide range of different pastries,
of which pastéis de Belém (or pastéis de nata) originally from Lisbon, and ovos moles from Aveiro are examples. Portuguese
cuisine is very diverse, with different regions having their own traditional dishes. The Portuguese have a culture
of good food, and throughout the country there are myriads of good restaurants and typical small tasquinhas. Portuguese
wines have enjoyed international recognition since the times of the Romans, who associated Portugal with their god
Bacchus. Today, the country is known to wine lovers, and its wines have won several international prizes. Some of the best Portuguese wines are Vinho Verde, Vinho Alvarinho, Vinho do Douro, Vinho do Alentejo, Vinho do Dão, Vinho da Bairrada and the sweet wines Port, Madeira, and the Moscatel from Setúbal and Favaios. Port and Madeira are particularly
appreciated in a wide range of places around the world. Portugal has several summer music festivals, such as Festival
Sudoeste in Zambujeira do Mar, Festival de Paredes de Coura in Paredes de Coura, Festival Vilar de Mouros near Caminha,
Boom Festival in Idanha-a-Nova Municipality, Optimus Alive!, Sumol Summer Fest in Ericeira, Rock in Rio Lisboa and
Super Bock Super Rock in Greater Lisbon. Outside the summer season, Portugal has a large number of festivals aimed more at an urban audience, like Flowfest or Hip Hop Porto. Furthermore, one of the largest international Goa trance
festivals takes place in central Portugal every two years, the Boom Festival, that is also the only festival in Portugal
to win international awards: European Festival Award 2010 – Green'n'Clean Festival of the Year and the Greener Festival
Award Outstanding 2008 and 2010. The student festivals of the Queima das Fitas are also major events in a number of cities across Portugal. In 2005, Portugal held the MTV Europe Music Awards in the Pavilhão Atlântico, Lisbon. In the classical music domain, Portugal is represented by names such as the pianists Artur Pizarro, Maria João Pires and Sequeira
Costa, the violinists Carlos Damas, Gerardo Ribeiro and in the past by the great cellist Guilhermina Suggia. Notable
composers include José Vianna da Motta, Carlos Seixas, João Domingos Bomtempo, João de Sousa Carvalho, Luís de Freitas
Branco and his student Joly Braga Santos, Fernando Lopes-Graça, Emmanuel Nunes and Sérgio Azevedo. Similarly, contemporary
composers such as Nuno Malo and Miguel d'Oliveira have achieved some international success writing original music
for film and television. The 20th century saw the arrival of Modernism, and along with it came some of the most prominent Portuguese painters. Amadeo de Souza-Cardoso was heavily influenced by French painters, particularly by the Delaunays; among his best-known works is Canção Popular a Russa e o Fígaro. Other great modernist painter-writers were Carlos Botelho and Almada Negreiros, a friend of the poet Fernando Pessoa, whose portrait he painted. Negreiros was deeply influenced by both Cubist and Futurist trends. Prominent international figures in visual arts nowadays
include painters Vieira da Silva, Júlio Pomar, Helena Almeida, Joana Vasconcelos, Julião Sarmento and Paula Rego.
Football is the most popular sport in Portugal. There are several football competitions ranging from local amateur
to world-class professional level. The legendary Eusébio is still a major symbol of Portuguese football history.
FIFA World Player of the Year winners Luís Figo and Cristiano Ronaldo, who won the FIFA Ballon d'Or for 2013 and 2014, are among the numerous world-class football players born in Portugal and noted worldwide. Portuguese
football managers are also noteworthy, with José Mourinho, André Villas-Boas, Fernando Santos, Carlos Queiroz and
Manuel José among the most renowned. SL Benfica, FC Porto, and Sporting CP are the largest sports clubs by popularity
and by number of trophies won, often known as "os três grandes" ("the big three"). They have won eight titles in European UEFA club competitions, have appeared in many finals, and have been regular contenders in the final stages almost every season. Other than football, many Portuguese sports clubs, including the "big three", compete in several other sports with varying levels of success and popularity; these include roller hockey, basketball,
futsal, handball, and volleyball. The Portuguese Football Federation (FPF) – Federação Portuguesa de Futebol – annually hosts the Algarve Cup, a prestigious women's football tournament held in the Algarve region of Portugal.
Humanism is a philosophical and ethical stance that emphasizes the value and agency of human beings, individually and collectively,
and generally prefers critical thinking and evidence (rationalism, empiricism) over acceptance of dogma or superstition.
The meaning of the term humanism has fluctuated according to the successive intellectual movements which have identified
with it. Generally, however, humanism refers to a perspective that affirms some notion of human freedom and progress.
In modern times, humanist movements are typically aligned with secularism, and today humanism typically refers to
a non-theistic life stance centred on human agency and looking to science rather than revelation from a supernatural
source to understand the world. Gellius says that in his day humanitas is commonly used as a synonym for philanthropy
– or kindness and benevolence toward one's fellow human being. Gellius maintains that this common usage is wrong,
and that model writers of Latin, such as Cicero and others, used the word only to mean what we might call "humane"
or "polite" learning, or the Greek equivalent Paideia. Gellius became a favorite author in the Italian Renaissance,
and, in fifteenth-century Italy, teachers and scholars of philosophy, poetry, and rhetoric were called and called
themselves "humanists". Modern scholars, however, point out that Cicero (106 – 43 BCE), who was most responsible
for defining and popularizing the term humanitas, in fact frequently used the word in both senses, as did his near
contemporaries. For Cicero, a lawyer, what most distinguished humans from brutes was speech, which, allied to reason,
could (and should) enable them to settle disputes and live together in concord and harmony under the rule of law.
Thus humanitas included two meanings from the outset and these continue in the modern derivative, humanism, which
even today can refer to both humanitarian benevolence and to scholarship. During the French Revolution, and soon
after, in Germany (by the Left Hegelians), humanism began to refer to an ethical philosophy centered on humankind,
without attention to the transcendent or supernatural. The designation Religious Humanism refers to organized groups
that sprang up during the late-nineteenth and early twentieth centuries. It is similar to Protestantism, although
centered on human needs, interests, and abilities rather than the supernatural. In the Anglophone world, such modern,
organized forms of humanism, which are rooted in the 18th-century Enlightenment, have to a considerable extent more
or less detached themselves from the historic connection of humanism with classical learning and the liberal arts.
In 1808 Bavarian educational commissioner Friedrich Immanuel Niethammer coined the term Humanismus to describe the
new classical curriculum he planned to offer in German secondary schools, and by 1836 the word "humanism" had been
absorbed into the English language in this sense. The coinage gained universal acceptance in 1856, when German historian
and philologist Georg Voigt used humanism to describe Renaissance humanism, the movement that flourished in the Italian
Renaissance to revive classical learning, a use which won wide acceptance among historians in many nations, especially
Italy. But in the mid-18th century, during the French Enlightenment, a more ideological sense of the term had come
into use. In 1765, the author of an anonymous article in a French Enlightenment periodical spoke of "The general
love of humanity ... a virtue hitherto quite nameless among us, and which we will venture to call 'humanism', for
the time has come to create a word for such a beautiful and necessary thing". The latter part of the 18th and the
early 19th centuries saw the creation of numerous grass-roots "philanthropic" and benevolent societies dedicated
to human betterment and the spreading of knowledge (some Christian, some not). After the French Revolution, the idea
that human virtue could be created by human reason alone independently from traditional religious institutions, attributed
by opponents of the Revolution to Enlightenment philosophes such as Rousseau, was violently attacked by influential
religious and political conservatives, such as Edmund Burke and Joseph de Maistre, as a deification or idolatry of
humanity. Humanism began to acquire a negative sense. The Oxford English Dictionary records the use of the word "humanism"
by an English clergyman in 1812 to indicate those who believe in the "mere humanity" (as opposed to the divine nature)
of Christ, i.e., Unitarians and Deists. In this polarised atmosphere, in which established ecclesiastical bodies
tended to circle the wagons and reflexively oppose political and social reforms like extending the franchise, universal
schooling, and the like, liberal reformers and radicals embraced the idea of Humanism as an alternative religion
of humanity. The anarchist Proudhon (best known for declaring that "property is theft") used the word "humanism"
to describe a "culte, déification de l’humanité" ("worship, deification of humanity") and Ernest Renan in L’avenir
de la science: pensées de 1848 ("The Future of Knowledge: Thoughts on 1848") (1848–49), states: "It is my deep conviction
that pure humanism will be the religion of the future, that is, the cult of all that pertains to humanity—all of
life, sanctified and raised to the level of a moral value." At about the same time, the word "humanism" as a philosophy
centred on humankind (as opposed to institutionalised religion) was also being used in Germany by the so-called Left
Hegelians, Arnold Ruge, and Karl Marx, who were critical of the close involvement of the church in the repressive
German government. There has been a persistent confusion between the several uses of the terms: philanthropic humanists
look to what they consider their antecedents in critical thinking and human-centered philosophy among the Greek philosophers
and the great figures of Renaissance history; and scholarly humanists stress the linguistic and cultural disciplines
needed to understand and interpret these philosophers and artists. Another instance of ancient humanism as an organised
system of thought is found in the Gathas of Zarathustra, composed between 1000 and 600 BCE in Greater Iran. Zarathustra's
philosophy in the Gathas lays out a conception of humankind as thinking beings dignified with choice and agency according
to the intellect which each receives from Ahura Mazda (God in the form of supreme wisdom). The idea of Ahura Mazda as a non-intervening, deistic divine Grand Architect of the universe, tied to a unique eschatology and ethical system, implies that each person is held morally responsible in the afterlife for choices made freely in this present life. The importance placed on thought, action, responsibility, and a non-intervening creator appealed to, and inspired, a number of Enlightenment humanist thinkers in Europe such as Voltaire and Montesquieu. In China,
the Yellow Emperor is regarded as the humanistic primogenitor, and sage kings such as Yao and Shun are recorded as humanistic figures. King Wu of Zhou is credited with the famous saying: "Humanity is the Ling (efficacious essence) of the world (among all)." Among them the Duke of Zhou, respected as a founder of Rujia (Confucianism), is especially prominent and pioneering in humanistic thought; his words were recorded in the Book of History. In the 6th century BCE, Taoist teacher Lao Tzu espoused a series of naturalistic concepts with some elements
of humanistic philosophy. The Silver Rule of Confucianism, from Analects XV.24, is an example of ethical philosophy
based on human values rather than the supernatural. Humanistic thought is also contained in other Confucian classics,
e.g., as recorded in Zuo Zhuan, Ji Liang says, "People is the zhu (master, lord, dominance, owner or origin) of gods.
So, to sage kings, people first, gods second"; Neishi Guo says, "Gods, clever, righteous and wholehearted, comply
with human." Taoist and Confucian secularism contain elements of moral thought devoid of religious authority or deism; however, they only partly resembled the modern concept of secularism. The 6th-century BCE pre-Socratic Greek philosophers Thales of Miletus and Xenophanes of Colophon were the first in the region to attempt to explain the world in terms of human reason rather than myth and tradition, and thus can be said to be the first Greek humanists. Thales questioned
the notion of anthropomorphic gods and Xenophanes refused to recognise the gods of his time and reserved the divine
for the principle of unity in the universe. These Ionian Greeks were the first thinkers to assert that nature is
available to be studied separately from the supernatural realm. Anaxagoras brought philosophy and the spirit of rational
inquiry from Ionia to Athens. Pericles, the leader of Athens during the period of its greatest glory, was an admirer
of Anaxagoras. Other influential pre-Socratics or rational philosophers include Protagoras (like Anaxagoras a friend
of Pericles), known for his famous dictum "man is the measure of all things" and Democritus, who proposed that matter
was composed of atoms. Little of the written work of these early philosophers survives and they are known mainly
from fragments and quotations in other writers, principally Plato and Aristotle. The historian Thucydides, noted
for his scientific and rational approach to history, is also much admired by later humanists. In the 3rd century
BCE, Epicurus became known for his concise phrasing of the problem of evil, lack of belief in the afterlife, and
human-centred approaches to achieving eudaimonia. He was also the first Greek philosopher to admit women to his school
as a rule. Renaissance humanism was an intellectual movement in Europe of the later Middle Ages and the Early Modern
period. The 19th-century German historian Georg Voigt (1827–91) identified Petrarch as the first Renaissance humanist.
Paul Johnson agrees that Petrarch was "the first to put into words the notion that the centuries between the fall
of Rome and the present had been the age of Darkness". According to Petrarch, what was needed to remedy this situation
was the careful study and imitation of the great classical authors. For Petrarch and Boccaccio, the greatest master
was Cicero, whose prose became the model for both learned (Latin) and vernacular (Italian) prose. In the high Renaissance,
in fact, there was a hope that more direct knowledge of the wisdom of antiquity, including the writings of the Church
fathers, the earliest known Greek texts of the Christian Gospels, and in some cases even the Jewish Kabbalah, would
initiate a harmonious new era of universal agreement. With this end in view, Renaissance Church authorities afforded
humanists what in retrospect appears a remarkable degree of freedom of thought. One humanist, the Greek Orthodox
Platonist Gemistus Pletho (1355–1452), based in Mystras, Greece (but in contact with humanists in Florence, Venice,
and Rome) taught a Christianised version of pagan polytheism. The humanists' close study of Latin literary texts
soon enabled them to discern historical differences in the writing styles of different periods. By analogy with what
they saw as decline of Latin, they applied the principle of ad fontes, or back to the sources, across broad areas
of learning, seeking out manuscripts of Patristic literature as well as pagan authors. In 1439, while employed in
Naples at the court of Alfonso V of Aragon (at the time engaged in a dispute with the Papal States) the humanist
Lorenzo Valla used stylistic textual analysis, now called philology, to prove that the Donation of Constantine, which
purported to confer temporal powers on the Pope of Rome, was an 8th-century forgery. For the next 70 years, however,
neither Valla nor any of his contemporaries thought to apply the techniques of philology to other controversial manuscripts
in this way. Instead, after the fall of the Byzantine Empire to the Turks in 1453, which brought a flood of Greek
Orthodox refugees to Italy, humanist scholars increasingly turned to the study of Neoplatonism and Hermeticism, hoping
to bridge the differences between the Greek and Roman Churches, and even between Christianity itself and the non-Christian
world. The refugees brought with them Greek manuscripts, not only of Plato and Aristotle, but also of the Christian
Gospels, previously unavailable in the Latin West. After 1517, when the new invention of printing made these texts
widely available, the Dutch humanist Erasmus, who had studied Greek at the Venetian printing house of Aldus Manutius,
began a philological analysis of the Gospels in the spirit of Valla, comparing the Greek originals with their Latin
translations with a view to correcting errors and discrepancies in the latter. Erasmus, along with the French humanist
Jacques Lefèvre d'Étaples, began issuing new translations, laying the groundwork for the Protestant Reformation.
Henceforth Renaissance humanism, particularly in the German North, became concerned with religion, while Italian
and French humanism concentrated increasingly on scholarship and philology addressed to a narrow audience of specialists,
studiously avoiding topics that might offend despotic rulers or which might be seen as corrosive of faith. After
the Reformation, critical examination of the Bible did not resume until the advent of the so-called Higher criticism
of the 19th-century German Tübingen school. The words of the comic playwright P. Terentius Afer reverberated across
the Roman world of the mid-2nd century BCE and beyond. Terence, an African and a former slave, was well placed to
preach the message of universalism, of the essential unity of the human race, that had come down in philosophical
form from the Greeks, but needed the pragmatic muscles of Rome in order to become a practical reality. The influence
of Terence's felicitous phrase on Roman thinking about human rights can hardly be overestimated. Two hundred years later Seneca ended his seminal exposition of the unity of humankind with a clarion call. Better acquaintance with Greek and Roman technical writings also influenced the development of European science (see the history of science
in the Renaissance). This was despite what A. C. Crombie (viewing the Renaissance in the 19th-century manner as a
chapter in the heroic March of Progress) calls "a backwards-looking admiration for antiquity", in which Platonism
stood in opposition to the Aristotelian concentration on the observable properties of the physical world. But Renaissance
humanists, who considered themselves as restoring the glory and nobility of antiquity, had no interest in scientific
innovation. However, by the mid-to-late 16th century, even the universities, though still dominated by Scholasticism,
began to demand that Aristotle be read in accurate texts edited according to the principles of Renaissance philology,
thus setting the stage for Galileo's quarrels with the outmoded habits of Scholasticism. Just as artist and inventor
Leonardo da Vinci – partaking of the zeitgeist though not himself a humanist – advocated study of human anatomy,
nature, and weather to enrich Renaissance works of art, so Spanish-born humanist Juan Luis Vives (c. 1493–1540) advocated
observation, craft, and practical techniques to improve the formal teaching of Aristotelian philosophy at the universities,
helping to free them from the grip of Medieval Scholasticism. Thus, the stage was set for the adoption of an approach
to natural philosophy, based on empirical observations and experimentation of the physical universe, making possible
the advent of the age of scientific inquiry that followed the Renaissance. Early humanists saw no conflict between
reason and their Christian faith (see Christian Humanism). They inveighed against the abuses of the Church, but not
against the Church itself, much less against religion. For them, the word "secular" carried no connotations of disbelief
– that would come later, in the nineteenth century. In the Renaissance to be secular meant simply to be in the world
rather than in a monastery. Petrarch frequently admitted that his brother Gherardo's life as a Carthusian monk was
superior to his own (although Petrarch himself was in Minor Orders and was employed by the Church all his life).
He hoped that he could do some good by winning earthly glory and praising virtue, inferior though that might be to
a life devoted solely to prayer. By embracing a non-theistic philosophic base, however, the methods of the humanists,
combined with their eloquence, would ultimately have a corrosive effect on established authority. George Eliot and her circle, which included her companion George Henry Lewes (the biographer of Goethe) and the abolitionist and social theorist
Harriet Martineau, were much influenced by the positivism of Auguste Comte, whom Martineau had translated. Comte
had proposed an atheistic culte founded on human principles – a secular Religion of Humanity (which worshiped the
dead, since most humans who have ever lived are dead), complete with holidays and liturgy, modeled on the rituals
of what was seen as a discredited and dilapidated Catholicism. Although Comte's English followers, like Eliot and
Martineau, for the most part rejected the full gloomy panoply of his system, they liked the idea of a religion of
humanity. Comte's austere vision of the universe, his injunction to "vivre pour autrui" ("live for others", from
which comes the word "altruism"), and his idealisation of women inform the works of Victorian novelists and poets
from George Eliot and Matthew Arnold to Thomas Hardy. Active in the early 1920s, F.C.S. Schiller labelled his work
"humanism" but for Schiller the term referred to the pragmatist philosophy he shared with William James. In 1929,
Charles Francis Potter founded the First Humanist Society of New York whose advisory board included Julian Huxley,
John Dewey, Albert Einstein and Thomas Mann. Potter was a minister from the Unitarian tradition and in 1930 he and
his wife, Clara Cook Potter, published Humanism: A New Religion. Throughout the 1930s, Potter was an advocate of
such liberal causes as women's rights, access to birth control, "civil divorce laws", and an end to capital punishment.
Humanistic psychology is a psychological perspective which rose to prominence in the mid-20th century in response
to Sigmund Freud's psychoanalytic theory and B. F. Skinner's Behaviorism. The approach emphasizes an individual's
inherent drive towards self-actualization and creativity. Psychologists Carl Rogers and Abraham Maslow introduced
a positive, humanistic psychology in response to what they viewed as the overly pessimistic view of psychoanalysis
in the early 1960s. Other sources include the philosophies of existentialism and phenomenology. Raymond B. Bragg,
the associate editor of The New Humanist, sought to consolidate the input of Leon Milton Birkhead, Charles Francis
Potter, and several members of the Western Unitarian Conference. Bragg asked Roy Wood Sellars to draft a document
based on this information which resulted in the publication of the Humanist Manifesto in 1933. Potter's book and
the Manifesto became the cornerstones of modern humanism, the latter declaring a new religion by saying, "any religion
that can hope to be a synthesising and dynamic force for today must be shaped for the needs of this age. To establish
such a religion is a major necessity of the present." It then presented 15 theses of humanism as foundational principles
for this new religion. Renaissance humanism was an activity of cultural and educational reform engaged in by civic
and ecclesiastical chancellors, book collectors, educators, and writers, who by the late fifteenth century began
to be referred to as umanisti – "humanists". It developed during the fourteenth and the beginning of the fifteenth
centuries, and was a response to the challenge of scholastic university education, which was then dominated by Aristotelian
philosophy and logic. Scholasticism focused on preparing men to be doctors, lawyers or professional theologians, and was taught from approved textbooks in logic, natural philosophy, medicine, law and theology. Humanists reacted against this utilitarian approach and the narrow pedantry associated with it. They sought to create a citizenry (frequently including women) able to speak and write with eloquence and clarity, and thus capable of engaging in the civic life of their communities and persuading others to virtuous and prudent actions. This was to be accomplished through the study of the studia humanitatis, today known as the humanities: grammar, rhetoric, history, poetry and moral philosophy. There were important centres of humanism at Florence, Naples, Rome, Venice, Mantua, Ferrara, and Urbino. As a program
to revive the cultural – and particularly the literary – legacy and moral philosophy of classical antiquity, Humanism
was a pervasive cultural mode and not the program of a few isolated geniuses like Rabelais or Erasmus as is still
sometimes popularly believed. Contemporary humanism entails a qualified optimism about the capacity of people, but
it does not involve believing that human nature is purely good or that all people can live up to the Humanist ideals
without help. If anything, there is recognition that living up to one's potential is hard work and requires the help
of others. The ultimate goal is human flourishing: making life better for all humans and, as the most conscious species,
also promoting concern for the welfare of other sentient beings and the planet as a whole. The focus is on doing
good and living well in the here and now, and leaving the world a better place for those who come after. In 1925,
the English mathematician and philosopher Alfred North Whitehead cautioned: "The prophecy of Francis Bacon has now
been fulfilled; and man, who at times dreamt of himself as a little lower than the angels, has submitted to become
the servant and the minister of nature. It still remains to be seen whether the same actor can play both parts".
Religious humanism is an integration of humanist ethical philosophy with religious rituals and beliefs that centre
on human needs, interests, and abilities. Though practitioners of religious humanism did not officially organise
under the name of "humanism" until the late 19th and early 20th centuries, non-theistic religions paired with human-centred
ethical philosophy have a long history. The Cult of Reason (French: Culte de la Raison) was an atheistic religion devised during the French Revolution by Jacques Hébert, Pierre Gaspard Chaumette and their supporters. In 1793 during
the French Revolution, the cathedral Notre Dame de Paris was turned into a "Temple to Reason" and for a time Lady
Liberty replaced the Virgin Mary on several altars. In the 1850s, Auguste Comte, the Father of Sociology, founded
Positivism, a "religion of humanity". One of the earliest forerunners of contemporary chartered humanist organisations
was the Humanistic Religious Association formed in 1853 in London. This early group was democratically organised,
with male and female members participating in the election of the leadership, and promoted knowledge of the sciences,
philosophy, and the arts. The Ethical Culture movement was founded in 1876. The movement's founder, Felix Adler,
a former member of the Free Religious Association, conceived of Ethical Culture as a new religion that would retain
the ethical message at the heart of all religions. Ethical Culture was religious in the sense of playing a defining
role in people's lives and addressing issues of ultimate concern. Polemics about humanism have sometimes assumed
paradoxical twists and turns. Early 20th century critics such as Ezra Pound, T. E. Hulme, and T. S. Eliot considered
humanism to be sentimental "slop" (Hulme) or "an old bitch gone in the teeth" (Pound) and wanted
to go back to a more manly, authoritarian society such as (they believed) existed in the Middle Ages. Postmodern
critics who are self-described anti-humanists, such as Jean-François Lyotard and Michel Foucault, have asserted that
humanism posits an overarching and excessively abstract notion of humanity or universal human nature, which can then
be used as a pretext for imperialism and domination of those deemed somehow less than human. "Humanism fabricates
the human as much as it fabricates the nonhuman animal", suggests Timothy Laurie, turning the human into what he
calls "a placeholder for a range of attributes that have been considered most virtuous among humans (e.g. rationality,
altruism), rather than most commonplace (e.g. hunger, anger)". Nevertheless, philosopher Kate Soper notes that by
faulting humanism for falling short of its own benevolent ideals, anti-humanism thus frequently "secretes a humanist
rhetoric". In his book, Humanism (1997), Tony Davies calls these critics "humanist anti-humanists". Critics of antihumanism,
most notably Jürgen Habermas, counter that while antihumanists may highlight humanism's failure to fulfil its emancipatory
ideal, they do not offer an alternative emancipatory project of their own. Others, like the German philosopher Heidegger,
considered themselves humanists on the model of the ancient Greeks, but thought humanism applied only to the German
"race" and specifically to the Nazis and thus, in Davies' words, were anti-humanist humanists. Such a reading of
Heidegger's thought is itself deeply controversial; Heidegger includes his own views and critique of Humanism in
Letter On Humanism. Davies acknowledges that after the horrific experiences of the wars of the 20th century "it should
no longer be possible to formulate phrases like 'the destiny of man' or the 'triumph of human reason' without an
instant consciousness of the folly and brutality they drag behind them". For "it is almost impossible to think of
a crime that has not been committed in the name of human reason". Yet, he continues, "it would be unwise to simply
abandon the ground occupied by the historical humanisms. For one thing humanism remains on many occasions the only
available alternative to bigotry and persecution. The freedom to speak and write, to organise and campaign in defence
of individual or collective interests, to protest and disobey: all these can only be articulated in humanist terms."
The ad fontes principle also had many applications. The re-discovery of ancient manuscripts brought a more profound
and accurate knowledge of ancient philosophical schools such as Epicureanism, and Neoplatonism, whose Pagan wisdom
the humanists, like the Church fathers of old, tended, at least initially, to consider as deriving from divine revelation
and thus adaptable to a life of Christian virtue. The line from a drama of Terence, Homo sum, humani nihil a me alienum puto (or with nil for nihil), meaning "I am a human being; I think nothing human alien to me", known since antiquity through the endorsement of Saint Augustine, gained renewed currency as epitomising the humanist attitude. The statement, in a play modeled on or borrowed from a (now lost) Greek comedy by Menander, may have originated in a lighthearted vein – as a comic rationale for an old man's meddling – but it quickly became a proverb and throughout the ages was quoted with a deeper meaning by Cicero and Saint Augustine, to name a few, and most notably by Seneca.
Davies identifies Paine's The Age of Reason as "the link between the two major narratives of what Jean-François Lyotard
calls the narrative of legitimation": the rationalism of the 18th-century Philosophes and the radical, historically
based German 19th-century Biblical criticism of the Hegelians David Friedrich Strauss and Ludwig Feuerbach. "The
first is political, largely French in inspiration, and projects 'humanity as the hero of liberty'. The second is
philosophical, German, seeks the totality and autonomy of knowledge, and stresses understanding rather than freedom
as the key to human fulfilment and emancipation. The two themes converged and competed in complex ways in the 19th
century and beyond, and between them set the boundaries of its various humanisms." Homo homini deus est ("The human
being is a god to humanity" or "god is nothing [other than] the human being to himself"), Feuerbach had written.
Earth was initially molten due to extreme volcanism and frequent collisions with other bodies. Eventually, the outer layer
of the planet cooled to form a solid crust when water began accumulating in the atmosphere. The Moon formed soon
afterwards, possibly as the result of a Mars-sized object with about 10% of the Earth's mass impacting the planet
in a glancing blow. Some of this object's mass merged with the Earth, significantly altering its internal composition,
and a portion was ejected into space. Some of the material survived to form an orbiting moon. Outgassing and volcanic
activity produced the primordial atmosphere. Condensing water vapor, augmented by ice delivered from comets, produced
the oceans. The Earth of the early Archean (4,000 to 2,500 million years ago) may have had a different
tectonic style. During this time, the Earth's crust cooled enough that rocks and continental plates began to form.
Some scientists think that, because the Earth was hotter, plate tectonic activity was more vigorous than it is today,
resulting in a much greater rate of recycling of crustal material. This may have prevented cratonisation and continent
formation until the mantle cooled and convection slowed down. Others argue that the subcontinental lithospheric mantle
is too buoyant to subduct and that the lack of Archean rocks is a function of erosion and subsequent tectonic events.
In contrast to the Proterozoic, Archean rocks are often heavily metamorphosed deep-water sediments, such as graywackes,
mudstones, volcanic sediments and banded iron formations. Greenstone belts are typical Archean formations, consisting
of alternating high- and low-grade metamorphic rocks. The high-grade rocks were derived from volcanic island arcs,
while the low-grade metamorphic rocks represent deep-sea sediments eroded from the neighboring island arcs and deposited
in a forearc basin. In short, greenstone belts represent sutured protocontinents. The geologic record of the Proterozoic
(2,500 to 541 million years ago) is more complete than that for the preceding Archean. In contrast to the deep-water
deposits of the Archean, the Proterozoic features many strata that were laid down in extensive shallow epicontinental
seas; furthermore, many of these rocks are less metamorphosed than Archean-age ones, and plenty are unaltered. Study of these rocks shows that the eon featured massive, rapid continental accretion (unique to the Proterozoic), supercontinent cycles, and wholly modern orogenic activity. Roughly 750 million years ago, the earliest-known supercontinent, Rodinia,
began to break apart. The continents later recombined to form Pannotia, 600–540 Ma. The Paleozoic spanned from roughly
541 to 252 million years ago (Ma) and is subdivided into six geologic periods; from oldest to youngest they are the
Cambrian, Ordovician, Silurian, Devonian, Carboniferous and Permian. Geologically, the Paleozoic starts shortly after
the breakup of a supercontinent called Pannotia and at the end of a global ice age. Throughout the early Paleozoic,
the Earth's landmass was broken up into a substantial number of relatively small continents. Toward the end of the
era the continents gathered together into a supercontinent called Pangaea, which included most of the Earth's land
area. The Cambrian is a major division of the geologic timescale that begins about 541.0 ± 1.0 Ma. Cambrian continents
are thought to have resulted from the breakup of a Neoproterozoic supercontinent called Pannotia. The waters of the
Cambrian period appear to have been widespread and shallow. Continental drift rates may have been anomalously high.
Laurentia, Baltica and Siberia remained independent continents following the break-up of the supercontinent of Pannotia.
Gondwana started to drift toward the South Pole. Panthalassa covered most of the southern hemisphere, and minor oceans
included the Proto-Tethys Ocean, Iapetus Ocean and Khanty Ocean. The Ordovician Period started with a major extinction event, the Cambrian–Ordovician extinction events, about 485.4 ± 1.9 Ma. During the Ordovician the
southern continents were collected into a single continent called Gondwana. Gondwana started the period in the equatorial
latitudes and, as the period progressed, drifted toward the South Pole. Early in the Ordovician the continents Laurentia,
Siberia and Baltica were still independent continents (since the break-up of the supercontinent Pannotia earlier),
but Baltica began to move toward Laurentia later in the period, causing the Iapetus Ocean to shrink between them.
Also, Avalonia broke free from Gondwana and began to head north toward Laurentia. The Rheic Ocean was formed as a
result of this. By the end of the period, Gondwana had approached the pole and was largely glaciated. The
most-commonly accepted theory is that these events were triggered by the onset of an ice age, in the Hirnantian faunal
stage that ended the long, stable greenhouse conditions typical of the Ordovician. The ice age was probably not as
long-lasting as once thought; study of oxygen isotopes in fossil brachiopods shows that it was probably no longer
than 0.5 to 1.5 million years. The event was preceded by a fall in atmospheric carbon dioxide (from 7000 ppm to 4400 ppm), which selectively affected the shallow seas where most organisms lived. As the southern supercontinent Gondwana drifted over the South Pole, ice caps formed on it. Evidence of these ice caps has been detected in Upper Ordovician rock
strata of North Africa and then-adjacent northeastern South America, which were south-polar locations at the time.
The Silurian is a major division of the geologic timescale that started about 443.8 ± 1.5 Ma. During the Silurian,
Gondwana continued a slow southward drift to high southern latitudes, but there is evidence that the Silurian ice
caps were less extensive than those of the late Ordovician glaciation. The melting of ice caps and glaciers contributed
to a rise in sea levels, recognizable from the fact that Silurian sediments overlie eroded Ordovician sediments,
forming an unconformity. Other cratons and continent fragments drifted together near the equator, starting the formation
of a second supercontinent known as Euramerica. The vast ocean of Panthalassa covered most of the northern hemisphere.
Other minor oceans include Proto-Tethys, Paleo-Tethys, Rheic Ocean, a seaway of Iapetus Ocean (now in between Avalonia
and Laurentia), and newly formed Ural Ocean. The Devonian spanned roughly from 419 to 359 Ma. The period was a time
of great tectonic activity, as Laurasia and Gondwana drew closer together. The continent Euramerica (or Laurussia)
was created in the early Devonian by the collision of Laurentia and Baltica, which rotated into the natural dry zone
along the Tropic of Capricorn. In these near-deserts, the Old Red Sandstone sedimentary beds formed, made red by
the oxidized iron (hematite) characteristic of drought conditions. Near the equator Pangaea began to consolidate
from the plates containing North America and Europe, further raising the northern Appalachian Mountains and forming
the Caledonian Mountains in Great Britain and Scandinavia. The southern continents remained tied together in the
supercontinent of Gondwana. The remainder of modern Eurasia lay in the Northern Hemisphere. Sea levels were high
worldwide, and much of the land lay submerged under shallow seas. The deep, enormous Panthalassa (the "universal
ocean") covered the rest of the planet. Other minor oceans were Paleo-Tethys, Proto-Tethys, Rheic Ocean and Ural
Ocean (which was closed during the collision with Siberia and Baltica). A global drop in sea level at the end of
the Devonian reversed early in the Carboniferous; this created the widespread epicontinental seas and carbonate deposition
of the Mississippian. There was also a drop in south polar temperatures; southern Gondwana was glaciated throughout
the period, though it is uncertain if the ice sheets were a holdover from the Devonian or not. These conditions apparently
had little effect in the deep tropics, where lush coal swamps flourished within 30 degrees of the northernmost glaciers.
A mid-Carboniferous drop in sea-level precipitated a major marine extinction, one that hit crinoids and ammonites
especially hard. This sea-level drop and the associated unconformity in North America separate the Mississippian
Period from the Pennsylvanian period. The Carboniferous was a time of active mountain building, as the supercontinent
Pangea came together. The southern continents remained tied together in the supercontinent Gondwana, which collided
with North America-Europe (Laurussia) along the present line of eastern North America. This continental collision
resulted in the Hercynian orogeny in Europe, and the Alleghenian orogeny in North America; it also extended the newly
uplifted Appalachians southwestward as the Ouachita Mountains. In the same time frame, much of present eastern Eurasian
plate welded itself to Europe along the line of the Ural mountains. There were two major oceans in the Carboniferous: Panthalassa and Paleo-Tethys. Other minor oceans were shrinking and eventually closed: the Rheic Ocean (closed by the assembly of South and North America), the small, shallow Ural Ocean (closed by the collision of the Baltica and Siberia continents, creating the Ural Mountains) and the Proto-Tethys Ocean. During the Permian all the
Earth's major land masses, except portions of East Asia, were collected into a single supercontinent known as Pangaea.
Pangaea straddled the equator and extended toward the poles, with a corresponding effect on ocean currents in the
single great ocean (Panthalassa, the universal sea), and the Paleo-Tethys Ocean, a large ocean that was between Asia
and Gondwana. The Cimmeria continent rifted away from Gondwana and drifted north to Laurasia, causing the Paleo-Tethys
to shrink. A new ocean was growing on its southern end, the Tethys Ocean, an ocean that would dominate much of the
Mesozoic Era. Large continental landmasses create climates with extreme variations of heat and cold ("continental
climate") and monsoon conditions with highly seasonal rainfall patterns. Deserts seem to have been widespread on
Pangaea. The remainder was the world-ocean known as Panthalassa ("all the sea"). All the deep-ocean sediments laid
down during the Triassic have disappeared through subduction of oceanic plates; thus, very little is known of the
Triassic open ocean. The supercontinent Pangaea was rifting during the Triassic—especially late in the period—but
had not yet separated. The first nonmarine sediments in the rift that marks the initial break-up of Pangea—which
separated New Jersey from Morocco—are of Late Triassic age; in the U.S., these thick sediments comprise the Newark
Supergroup. Because of the limited shoreline of one super-continental mass, Triassic marine deposits are globally relatively rare, despite their prominence in Western Europe, where the Triassic was first studied. In North America,
for example, marine deposits are limited to a few exposures in the west. Thus Triassic stratigraphy is mostly based
on organisms living in lagoons and hypersaline environments, such as Estheria crustaceans and terrestrial vertebrates.
The Jurassic Period extends from about 201.3 ± 0.2 to 145.0 Ma. During the early Jurassic, the supercontinent Pangaea
broke up into the northern supercontinent Laurasia and the southern supercontinent Gondwana; the Gulf of Mexico opened
in the new rift between North America and what is now Mexico's Yucatan Peninsula. The Jurassic North Atlantic Ocean
was relatively narrow, while the South Atlantic did not open until the following Cretaceous Period, when Gondwana
itself rifted apart. The Tethys Sea closed, and the Neotethys basin appeared. Climates were warm, with no evidence
of glaciation. As in the Triassic, there was apparently no land near either pole, and no extensive ice caps existed.
The Jurassic geological record is good in western Europe, where extensive marine sequences indicate a time when much
of the continent was submerged under shallow tropical seas; famous locales include the Jurassic Coast World Heritage
Site and the renowned late Jurassic lagerstätten of Holzmaden and Solnhofen. In contrast, the North American Jurassic
record is the poorest of the Mesozoic, with few outcrops at the surface. Though the epicontinental Sundance Sea left
marine deposits in parts of the northern plains of the United States and Canada during the late Jurassic, most exposed
sediments from this period are continental, such as the alluvial deposits of the Morrison Formation. The first of
several massive batholiths were emplaced in the northern Cordillera beginning in the mid-Jurassic, marking the Nevadan
orogeny. Important Jurassic exposures are also found in Russia, India, South America, Japan, Australasia and the
United Kingdom. During the Cretaceous, the late Paleozoic-early Mesozoic supercontinent of Pangaea completed its
breakup into present day continents, although their positions were substantially different at the time. As the Atlantic
Ocean widened, the convergent-margin orogenies that had begun during the Jurassic continued in the North American
Cordillera, as the Nevadan orogeny was followed by the Sevier and Laramide orogenies. Though Gondwana was still intact
in the beginning of the Cretaceous, Gondwana itself broke up as South America, Antarctica and Australia rifted away
from Africa (though India and Madagascar remained attached to each other); thus, the South Atlantic and Indian Oceans
were newly formed. Such active rifting lifted great undersea mountain chains along the welts, raising eustatic sea
levels worldwide. To the north of Africa the Tethys Sea continued to narrow. Broad shallow seas advanced across central
North America (the Western Interior Seaway) and Europe, then receded late in the period, leaving thick marine deposits
sandwiched between coal beds. At the peak of the Cretaceous transgression, one-third of Earth's present land area
was submerged. The Cretaceous is justly famous for its chalk; indeed, more chalk formed in the Cretaceous than in
any other period in the Phanerozoic. Mid-ocean ridge activity—or rather, the circulation of seawater through the
enlarged ridges—enriched the oceans in calcium; this made the oceans more saturated, as well as increased the bioavailability
of the element for calcareous nanoplankton. These widespread carbonates and other sedimentary deposits make the Cretaceous
rock record especially fine. Famous formations from North America include the rich marine fossils of Kansas's Smoky
Hill Chalk Member and the terrestrial fauna of the late Cretaceous Hell Creek Formation. Other important Cretaceous
exposures occur in Europe and China. In the area that is now India, massive lava beds called the Deccan Traps were
laid down in the very late Cretaceous and early Paleocene. The Cenozoic Era covers the 66 million years since the
Cretaceous–Paleogene extinction event up to and including the present day. By the end of the Mesozoic era, the continents
had rifted into nearly their present form. Laurasia became North America and Eurasia, while Gondwana split into South
America, Africa, Australia, Antarctica and the Indian subcontinent, which collided with the Asian plate. This impact
gave rise to the Himalayas. The Tethys Sea, which had separated the northern continents from Africa and India, began
to close up, forming the Mediterranean sea. In many ways, the Paleocene continued processes that had begun during
the late Cretaceous Period. During the Paleocene, the continents continued to drift toward their present positions.
Supercontinent Laurasia had not yet separated into three continents. Europe and Greenland were still connected. North
America and Asia were still intermittently joined by a land bridge, while Greenland and North America were beginning
to separate. The Laramide orogeny of the late Cretaceous continued to uplift the Rocky Mountains in the American
west, which ended in the succeeding epoch. South and North America remained separated by equatorial seas (they joined
during the Neogene); the components of the former southern supercontinent Gondwana continued to split apart, with
Africa, South America, Antarctica and Australia pulling away from each other. Africa was heading north toward Europe,
slowly closing the Tethys Ocean, and India began its migration to Asia that would lead to a tectonic collision and
the formation of the Himalayas. During the Eocene (56 million years ago - 33.9 million years ago), the continents
continued to drift toward their present positions. At the beginning of the period, Australia and Antarctica remained
connected, and warm equatorial currents mixed with colder Antarctic waters, distributing the heat around the world
and keeping global temperatures high. But when Australia split from the southern continent around 45 Ma, the warm
equatorial currents were deflected away from Antarctica, and an isolated cold water channel developed between the
two continents. The Antarctic region cooled down, and the ocean surrounding Antarctica began to freeze, sending cold
water and ice floes north, reinforcing the cooling. The present pattern of ice ages began about 40 million years
ago.[citation needed] The northern supercontinent of Laurasia began to break up, as Europe, Greenland and North America
drifted apart. In western North America, mountain building started in the Eocene, and huge lakes formed in the high
flat basins among uplifts. In Europe, the Tethys Sea finally vanished, while the uplift of the Alps isolated its
final remnant, the Mediterranean, and created another shallow sea with island archipelagos to the north. Though the
North Atlantic was opening, a land connection appears to have remained between North America and Europe since the
faunas of the two regions are very similar. India continued its journey away from Africa and began its collision
with Asia, creating the Himalayan orogeny. Antarctica continued to become more isolated and finally developed a permanent
ice cap. Mountain building in western North America continued, and the Alps started to rise in Europe as the African
plate continued to push north into the Eurasian plate, isolating the remnants of Tethys Sea. A brief marine incursion
marks the early Oligocene in Europe. There appears to have been a land bridge in the early Oligocene between North
America and Europe since the faunas of the two regions are very similar. During the Oligocene, South America was
finally detached from Antarctica and drifted north toward North America. This separation also allowed the Antarctic Circumpolar
Current to flow, rapidly cooling the continent. During the Miocene continents continued to drift toward their present
positions. Of the modern geologic features, only the land bridge between South America and North America was absent;
the subduction zone along the Pacific Ocean margin of South America caused the rise of the Andes and the southward
extension of the Meso-American peninsula. India continued to collide with Asia. The Tethys Seaway continued to shrink
and then disappeared as Africa collided with Eurasia in the Turkish-Arabian region between 19 and 12 Ma (ICS 2004).
Subsequent uplift of mountains in the western Mediterranean region and a global fall in sea levels combined to cause
a temporary drying up of the Mediterranean Sea resulting in the Messinian salinity crisis near the end of the Miocene.
South America became linked to North America through the Isthmus of Panama during the Pliocene, bringing a nearly
complete end to South America's distinctive marsupial faunas. The formation of the Isthmus had major consequences
on global temperatures, since warm equatorial ocean currents were cut off and an Atlantic cooling cycle began, with
cold Arctic and Antarctic waters dropping temperatures in the now-isolated Atlantic Ocean. Africa's collision with
Europe formed the Mediterranean Sea, cutting off the remnants of the Tethys Ocean. Sea level changes exposed the
land-bridge between Alaska and Asia. Near the end of the Pliocene, about 2.58 million years ago (the start of the
Quaternary Period), the current ice age began. The polar regions have since undergone repeated cycles of glaciation
and thaw, repeating every 40,000–100,000 years. The last glacial period of the current ice age ended about 10,000
years ago. Ice melt caused world sea levels to rise about 35 metres (115 ft) in the early part of the Holocene. In
addition, many areas above about 40 degrees north latitude had been depressed by the weight of the Pleistocene glaciers
and rose as much as 180 metres (591 ft) over the late Pleistocene and Holocene, and are still rising today. The sea
level rise and temporary land depression allowed temporary marine incursions into areas that are now far from the
sea. Holocene marine fossils are known from Vermont, Quebec, Ontario and Michigan. Other than higher latitude temporary
marine incursions associated with glacial depression, Holocene fossils are found primarily in lakebed, floodplain
and cave deposits. Holocene marine deposits along low-latitude coastlines are rare because the rise in sea levels
during the period exceeds any likely upthrusting of non-glacial origin. Post-glacial rebound in Scandinavia resulted
in the emergence of coastal areas around the Baltic Sea, including much of Finland. The region continues to rise,
still causing weak earthquakes across Northern Europe. The equivalent event in North America was the rebound of Hudson
Bay, as it shrank from its larger, immediate post-glacial Tyrrell Sea phase, to near its present boundaries.
A police force is a constituted body of persons empowered by the state to enforce the law, protect property, and limit civil
disorder. Their powers include the legitimized use of force. The term is most commonly associated with police services
of a sovereign state that are authorized to exercise the police power of that state within a defined legal or territorial
area of responsibility. Police forces are often defined as being separate from military or other organizations involved
in the defense of the state against foreign aggressors; however, gendarmerie are military units charged with civil
policing. Law enforcement, however, constitutes only part of policing activity. Policing has included an array of
activities in different situations, but the predominant ones are concerned with the preservation of order. In some
societies, in the late 18th and early 19th centuries, these developed within the context of maintaining the class
system and the protection of private property. Many police forces suffer from police corruption to a greater or lesser
degree. The police force is usually a public sector service, meaning they are paid through taxes. Law enforcement
in Ancient China was carried out by "prefects" for thousands of years since it developed in both the Chu and Jin
kingdoms of the Spring and Autumn period. In Jin, dozens of prefects were spread across the state, each having limited
authority and employment period. They were appointed by local magistrates, who reported to higher authorities such
as governors, who in turn were appointed by the emperor, and they oversaw the civil administration of their "prefecture",
or jurisdiction. Under each prefect were "subprefects" who helped collectively with law enforcement in the area.
Some prefects were responsible for handling investigations, much like modern police detectives. Prefects could also
be women. The concept of the "prefecture system" spread to other cultures such as Korea and Japan. As one of their
first acts after the end of the War of the Castilian Succession in 1479, Ferdinand and Isabella established the centrally
organized and efficient Holy Brotherhood (Santa Hermandad) as a national police force. They adapted an existing brotherhood
to the purpose of a general police acting under officials appointed by themselves, and endowed with great powers
of summary jurisdiction even in capital cases. The original brotherhoods continued to serve as modest local police-units
until their final suppression in 1835. In France during the Middle Ages, there were two Great Officers of the Crown
of France with police responsibilities: The Marshal of France and the Constable of France. The military policing
responsibilities of the Marshal of France were delegated to the Marshal's provost, whose force was known as the Marshalcy
because its authority ultimately derived from the Marshal. The marshalcy dates back to the Hundred Years' War, and
some historians trace it back to the early 12th century. Another organisation, the Constabulary (French: Connétablie),
was under the command of the Constable of France. The constabulary was regularised as a military body in 1337. Under
King Francis I (who reigned 1515–1547), the Maréchaussée was merged with the Constabulary. The resulting force was
also known as the Maréchaussée, or, formally, the Constabulary and Marshalcy of France. The first centrally organised
police force was created by the government of King Louis XIV in 1667 to police the city of Paris, then the largest
city in Europe. The royal edict, registered by the Parlement of Paris on March 15, 1667, created the office of lieutenant
général de police ("lieutenant general of police"), who was to be the head of the new Paris police force, and defined
the task of the police as "ensuring the peace and quiet of the public and of private individuals, purging the city
of what may cause disturbances, procuring abundance, and having each and everyone live according to their station
and their duties". This office was first held by Gabriel Nicolas de la Reynie, who had 44 commissaires de police
(police commissioners) under his authority. In 1709, these commissioners were assisted by inspecteurs de police (police
inspectors). The city of Paris was divided into 16 districts policed by the commissaires, each assigned to a particular
district and assisted by a growing bureaucracy. The scheme of the Paris police force was extended to the rest of
France by a royal edict of October 1699, resulting in the creation of lieutenants general of police in all large
French cities and towns. The word "police" was borrowed from French into the English language in the 18th century,
but for a long time it applied only to French and continental European police forces. The word, and the concept of
police itself, were "disliked as a symbol of foreign oppression" (according to Britannica 1911). Before the 19th
century, the first use of the word "police" recorded in government documents in the United Kingdom was the appointment
of Commissioners of Police for Scotland in 1714 and the creation of the Marine Police in 1798. In 1797, Patrick Colquhoun
was able to persuade the West Indies merchants who operated at the Pool of London on the River Thames, to establish
a police force at the docks to prevent rampant theft that was causing annual estimated losses of £500,000 worth of
cargo. The idea of a police, as it then existed in France, was considered as a potentially undesirable foreign import.
In building the case for the police in the face of England's firm anti-police sentiment, Colquhoun framed the political
rationale on economic indicators to show that a police dedicated to crime prevention was "perfectly congenial to
the principle of the British constitution." Moreover, he went so far as to praise the French system, which had reached
"the greatest degree of perfection" in his estimation. With the initial investment of £4,200, the new trial force
of the Thames River Police began with about 50 men charged with policing 33,000 workers in the river trades, of whom
Colquhoun claimed 11,000 were known criminals and "on the game." The force was a success after its first year, and
his men had "established their worth by saving £122,000 worth of cargo and by the rescuing of several lives." Word
of this success spread quickly, and the government passed the Marine Police Bill on 28 July 1800, transforming it
from a private to a public police agency; it is now the oldest police force in the world. Colquhoun published a book on the
experiment, The Commerce and Policing of the River Thames. It found receptive audiences far outside London, and inspired
similar forces in other cities, notably, New York City, Dublin, and Sydney. Colquhoun's utilitarian approach to the
problem – using a cost-benefit argument to obtain support from businesses standing to benefit – allowed him to achieve
what Henry and John Fielding had failed to achieve for their Bow Street detectives. Unlike the stipendiary system at Bow Street,
the river police were full-time, salaried officers prohibited from taking private fees. His other contribution was
the concept of preventive policing; his police were to act as a highly visible deterrent to crime by their permanent
presence on the Thames. Colquhoun's innovations were a critical development leading up to Robert Peel's "new" police
three decades later. Meanwhile, the authorities in Glasgow, Scotland successfully petitioned the government to pass
the Glasgow Police Act establishing the City of Glasgow Police in 1800. Other Scottish towns soon followed suit and
set up their own police forces through acts of parliament. In Ireland, the Irish Constabulary Act of 1822 marked
the beginning of the Royal Irish Constabulary. The Act established a force in each barony with chief constables and
inspectors general under the control of the civil administration at Dublin Castle. By 1841 this force numbered over
8,600 men. Peel, widely regarded as the father of modern policing, was heavily influenced by the social and legal
philosophy of Jeremy Bentham, who called for a strong and centralized, but politically neutral, police force for
the maintenance of social order, for the protection of people from crime and to act as a visible deterrent to urban
crime and disorder. Peel decided to standardise the police force as an official paid profession, to organise it in
a civilian fashion, and to make it answerable to the public. The 1829 Metropolitan Police Act created a modern police
force by limiting the purview of the force and its powers, and envisioning it as merely an organ of the judicial
system. Their job was apolitical; to maintain the peace and apprehend criminals for the courts to process according
to the law. This was very different to the 'Continental model' of the police force that had been developed in France,
where the police force worked within the parameters of the absolutist state as an extension of the authority of the
monarch and functioned as part of the governing state. In 1566, the first police investigator of Rio de Janeiro was
recruited. By the 17th century, most captaincies already had local units with law enforcement functions. On July
9, 1775 a Cavalry Regiment was created in the state of Minas Gerais for maintaining law and order. In 1808, the Portuguese
royal family relocated to Brazil, because of the French invasion of Portugal. King João VI established the "Intendência
Geral de Polícia" (General Police Intendancy) for investigations. He also created a Royal Police Guard for Rio de
Janeiro in 1809. In 1831, after independence, each province started organizing its local "military police", with
order maintenance tasks. The Federal Railroad Police was created in 1852. In Canada, the Royal Newfoundland Constabulary
was founded in 1729, making it the first police force in present-day Canada. It was followed in 1834 by the Toronto
Police, and in 1838 by police forces in Montreal and Quebec City. A national force, the Dominion Police, was founded
in 1868. Initially the Dominion Police provided security for parliament, but its responsibilities quickly grew. The
famous Royal Northwest Mounted Police was founded in 1873. The merger of these two police forces in 1920 formed the
world-famous Royal Canadian Mounted Police. In the American Old West, policing was often of very poor quality.[citation
needed] The Army often provided some policing alongside poorly resourced sheriffs and temporarily organized posses.[citation
needed] Public organizations were supplemented by private contractors, notably the Pinkerton National Detective Agency,
which was hired by individuals, businessmen, local governments and the federal government. At its height, the Pinkerton
Agency's numbers exceeded those of the United States Army.[citation needed] Michel Foucault claims that the contemporary
concept of police as a paid and funded functionary of the state was developed by German and French legal scholars
and practitioners in Public administration and Statistics in the 17th and early 18th centuries, most notably with
Nicolas Delamare's Traité de la Police ("Treatise on the Police"), first published in 1705. The German Polizeiwissenschaft (Science of Police) was first theorized by Philipp von Hörnigk, a 17th-century Austrian political economist and civil servant, and much more famously by Johann Heinrich Gottlob Justi, who produced an important theoretical work on the formulation of police within cameral science. Foucault cites Magdalene Humpert, author of Bibliographie der Kameralwissenschaften (1937), as noting that a substantial bibliography of over 4,000 works on the practice of Polizeiwissenschaft was produced; however, this may be a mistranslation of Foucault's own work, as Humpert's actual source states that over 14,000 items were produced between 1520 and 1850. As conceptualized
by the Polizeiwissenschaft, according to Foucault, the police had an administrative, economic and social duty ("procuring abundance"). It was in charge of demographic concerns and had to be incorporated into the Western political-philosophical system of raison d'état, giving the superficial appearance of empowering the population (while in fact supervising it), which, according to mercantilist theory, was to be the main strength of the state. Thus, its functions extended far beyond simple law enforcement to include public health concerns,
urban planning (which was important because of the miasma theory of disease; thus, cemeteries were moved out of town,
etc.), and surveillance of prices. Edwin Chadwick's 1829 article, "Preventive police" in the London Review, argued
that prevention ought to be the primary concern of a police body, which was not the case in practice. The reason,
argued Chadwick, was that "A preventive police would act more immediately by placing difficulties in obtaining the
objects of temptation." In contrast to a deterrent of punishment, a preventive police force would deter criminality
by making crime cost-ineffective: "crime doesn't pay". In the second draft of his 1829 Police Act, the "object" of the new Metropolitan Police was changed by Robert Peel to the "principal object," which was the "prevention of
crime." Later historians would attribute the perception of England's "appearance of orderliness and love of public
order" to the preventive principle entrenched in Peel's police system. Despite popular conceptions promoted by movies
and television, many US police departments prefer not to maintain officers in non-patrol bureaus and divisions beyond
a certain period of time, such as in the detective bureau, and instead maintain policies that limit service in such
divisions to a specified period of time, after which officers must transfer out or return to patrol duties. This is done in part based upon the perception that the most important and essential police work is accomplished on patrol, in which officers become acquainted with their beats, prevent crime by their presence, respond to crimes in progress, manage crises, and practice their skills. The terms international policing, transnational
policing, and/or global policing began to be used from the early 1990s onwards to describe forms of policing that
transcended the boundaries of the sovereign nation-state (Nadelmann, 1993), (Sheptycki, 1995). These terms refer
in variable ways to practices and forms for policing that, in some sense, transcend national borders. This includes
a variety of practices, but international police cooperation, criminal intelligence exchange between police agencies
working in different nation-states, and police development-aid to weak, failed or failing states are the three types
that have received the most scholarly attention. Historical studies reveal that policing agents have undertaken a
variety of cross-border police missions for many years (Deflem, 2002). For example, in the 19th century a number
of European policing agencies undertook cross-border surveillance because of concerns about anarchist agitators and
other political radicals. A notable example of this was the occasional surveillance by Prussian police of Karl Marx
during the years he remained resident in London. The interests of public police agencies in cross-border co-operation
in the control of political radicalism and ordinary law crime were primarily initiated in Europe, which eventually
led to the establishment of Interpol before the Second World War. There are also many interesting examples of cross-border
policing under private auspices and by municipal police forces that date back to the 19th century (Nadelmann, 1993).
It has been established that modern policing has transcended national boundaries from time to time almost from its
inception. It is also generally agreed that in the post–Cold War era this type of practice became more significant
and frequent (Sheptycki, 2000). Not a lot of empirical work on the practices of inter/transnational information and
intelligence sharing has been undertaken. A notable exception is James Sheptycki's study of police cooperation in
the English Channel region (2002), which provides a systematic content analysis of information exchange files and
a description of how these transnational information and intelligence exchanges are transformed into police case-work.
The study showed that transnational police information sharing was routinized in the cross-Channel region from 1968
on the basis of agreements directly between the police agencies and without any formal agreement between the countries
concerned. By 1992, with the signing of the Schengen Treaty, which formalized aspects of police information exchange
across the territory of the European Union, there were worries that much, if not all, of this intelligence sharing
was opaque, raising questions about the efficacy of the accountability mechanisms governing police information sharing
in Europe (Joubert and Bevers, 1996). Studies of this kind outside of Europe are even rarer, so it is difficult to
make generalizations, but one small-scale study that compared transnational police information and intelligence sharing
practices at specific cross-border locations in North America and Europe confirmed that low visibility of police
information and intelligence sharing was a common feature (Alain, 2001). Intelligence-led policing is now common
practice in most advanced countries (Ratcliffe, 2007) and it is likely that police intelligence sharing and information
exchange has a common morphology around the world (Ratcliffe, 2007). James Sheptycki has analyzed the effects of
the new information technologies on the organization of policing-intelligence and suggests that a number of 'organizational
pathologies' have arisen that make the functioning of security-intelligence processes in transnational policing deeply
problematic. He argues that transnational police information circuits help to "compose the panic scenes of the security-control
society". The paradoxical effect is that the harder policing agencies work to produce security, the greater the
feelings of insecurity. Police development-aid to weak, failed or failing states is another form of transnational
policing that has garnered attention. This form of transnational policing plays an increasingly important role in
United Nations peacekeeping and this looks set to grow in the years ahead, especially as the international community
seeks to develop the rule of law and reform security institutions in States recovering from conflict (Goldsmith and Sheptycki, 2007). With transnational police development-aid, the imbalances of power between donors and recipients
are stark and there are questions about the applicability and transportability of policing models between jurisdictions
(Hills, 2009). Perhaps the greatest question regarding the future development of transnational policing is: in whose
interest is it? At a more practical level, the question translates into one about how to make transnational policing
institutions democratically accountable (Sheptycki, 2004). For example, according to the Global Accountability Report
for 2007 (Lloyd, et al. 2007) Interpol had the lowest scores in its category (IGOs), coming in tenth with a score
of 22% on overall accountability capabilities (p. 19). As this report points out, and the existing academic literature
on transnational policing seems to confirm, this is a secretive area and one not open to civil society involvement.
Police can also be armed with non-lethal (more accurately known as "less than lethal" or "less-lethal") weaponry, particularly
for riot control. Non-lethal weapons include batons, tear gas, riot control agents, rubber bullets, riot shields,
water cannons and electroshock weapons. Police officers often carry handcuffs to restrain suspects. The use of firearms
or deadly force is typically a last resort only to be used when necessary to save human life, although some jurisdictions
(such as Brazil) allow its use against fleeing felons and escaped convicts. A "shoot-to-kill" policy was recently
introduced in South Africa, which allows police to use deadly force against any person who poses a significant threat
to them or civilians. With the country having one of the highest rates of violent crime, President Jacob Zuma stated
that South Africa needs to handle crime differently from other countries. Modern police forces make extensive use
of radio communications equipment, carried both on the person and installed in vehicles, to co-ordinate their work,
share information, and get help quickly. In recent years, vehicle-installed computers have enhanced the ability of
police communications, enabling easier dispatching of calls, criminal background checks on persons of interest to
be completed in a matter of seconds, and updating officers' daily activity logs and other required reports on a real-time
basis. Other common pieces of police equipment include flashlights/torches, whistles, police notebooks and "ticket
books" or citations. Unmarked vehicles are used primarily for sting operations or apprehending criminals without
alerting them to police presence. Some police forces use unmarked or minimally marked cars for traffic law enforcement,
since drivers slow down at the sight of marked police vehicles and unmarked vehicles make it easier for officers
to catch speeders and traffic violators. This practice is controversial; New York State, for example, banned it in 1996 on the grounds that it endangered motorists who might be pulled over by people impersonating
police officers. Motorcycles are also commonly used, particularly in locations that a car may not be able to reach,
to control potential public order situations involving meetings of motorcyclists and often in escort duties where
motorcycle police officers can quickly clear a path for escorted vehicles. Bicycle patrols are used in some areas
because they allow for more open interaction with the public. In addition, their quieter operation can facilitate
approaching suspects unawares and can help in pursuing those attempting to escape on foot. In the United States, August
Vollmer introduced other reforms, including education requirements for police officers. O.W. Wilson, a student of
Vollmer, helped reduce corruption and introduce professionalism in Wichita, Kansas, and later in the Chicago Police
Department. Strategies employed by O.W. Wilson included rotating officers from community to community to reduce their
vulnerability to corruption, establishing a non-partisan police board to help govern the police force, a strict
merit system for promotions within the department, and an aggressive recruiting drive with higher police salaries
to attract professionally qualified officers. During the professionalism era of policing, law enforcement agencies
concentrated on dealing with felonies and other serious crime, rather than a broader focus on crime prevention. In Miranda v. Arizona (1966), the court created safeguards against self-incriminating statements made after an arrest. The court held that
"The prosecution may not use statements, whether exculpatory or inculpatory, stemming from questioning initiated
by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action
in any significant way, unless it demonstrates the use of procedural safeguards effective to secure the Fifth Amendment's
privilege against self-incrimination." In Terry v. Ohio (1968) the court divided seizure into two parts, the investigatory
stop and arrest. The court further held that during an investigatory stop a police officer's search " [is] confined
to what [is] minimally necessary to determine whether [a suspect] is armed, and the intrusion, which [is] made for
the sole purpose of protecting himself and others nearby, [is] confined to ascertaining the presence of weapons"
(U.S. Supreme Court). Before Terry, every police encounter constituted an arrest, giving the police officer the full
range of search authority. Search authority during a Terry stop (investigatory stop) is limited to weapons only.
All police officers in the United Kingdom, whatever their actual rank, are 'constables' in terms of their legal position.
This means that a newly appointed constable has the same arrest powers as a Chief Constable or Commissioner. However,
certain higher ranks have additional powers to authorize certain aspects of police operations, such as a power to
authorize a search of a suspect's house (section 18 PACE in England and Wales) by an officer of the rank of Inspector,
or the power to authorize a suspect's detention beyond 24 hours by a Superintendent. In contrast, the police are
entitled to protect private rights in some jurisdictions. To ensure that the police would not interfere in the regular
competencies of the courts of law, some police acts require that the police may only interfere in such cases where
protection from courts cannot be obtained in time, and where, without interference of the police, the realization
of the private right would be impeded. This would, for example, allow police to establish a restaurant guest's identity
and forward it to the innkeeper in a case where the guest cannot pay the bill at nighttime because his wallet had
just been stolen from the restaurant table.
Genocide has become an official term used in international relations. The word was not in use before 1944. Before this, in 1941, Winston Churchill described the mass killing of Russian prisoners of war and civilians as "a crime without a name". In 1944, a Polish-Jewish lawyer named Raphael Lemkin coined the term genocide to describe the policies of systematic murder pursued by the Nazis. The word combines the Greek prefix geno- (meaning tribe or race) with the suffix -cide (from caedere, the Latin word for "to kill"). It denotes a specific set of violent crimes committed against a certain group with the intent to remove the entire group from existence or to destroy it. Genocide was later used as a descriptive term in the process of indictment, but not yet as a formal legal term. According to Lemkin, genocide was "a coordinated strategy to destroy a group of people, a
process that could be accomplished through total annihilation as well as strategies that eliminate key elements of
the group's basic existence, including language, culture, and economic infrastructure.” He envisioned mobilizing the international community to work together to prevent the recurrence of such events. Australian anthropologist Peg LeVine coined the term "ritualcide"
to describe the destruction of a group's cultural identity without necessarily destroying its members. The study
of genocide has mainly focused on the legal aspect of the term. Formally recognizing genocide as a crime makes it not only morally outrageous but also a legal liability that can be prosecuted under international law. In a general sense, genocide is viewed as the deliberate killing of a certain group, yet it commonly escapes trial and prosecution because it is more often than not committed by the officials in power in a state or region. In 1648, before the term genocide had been coined, the Peace of Westphalia was established to protect ethnic, national, racial and, in some instances, religious groups. During the 19th century, humanitarian intervention was invoked to justify some military actions in response to such conflicts. After the Holocaust, which had been perpetrated by Nazi Germany and its allies prior to and during World War II, Lemkin
successfully campaigned for the universal acceptance of international laws defining and forbidding genocides. In
1946, the first session of the United Nations General Assembly adopted a resolution that "affirmed" that genocide
was a crime under international law, but did not provide a legal definition of the crime. In 1948, the UN General
Assembly adopted the Convention on the Prevention and Punishment of the Crime of Genocide (CPPCG) which defined the
crime of genocide for the first time. The first draft of the Convention included political killings, but these provisions
were removed in a political and diplomatic compromise following objections from some countries, including the USSR,
a permanent security council member. The USSR argued that the Convention's definition should follow the etymology
of the term, and may have feared greater international scrutiny of its own Great Purge. Other nations feared that
including political groups in the definition would invite international intervention in domestic politics. However,
leading genocide scholar William Schabas states: “Rigorous examination of the travaux fails to confirm a popular
impression in the literature that the opposition to inclusion of political genocide was some Soviet machination.
The Soviet views were also shared by a number of other States for whom it is difficult to establish any geographic
or social common denominator: Lebanon, Sweden, Brazil, Peru, Venezuela, the Philippines, the Dominican Republic,
Iran, Egypt, Belgium, and Uruguay. The exclusion of political groups was in fact originally promoted by a non-governmental
organization, the World Jewish Congress, and it corresponded to Raphael Lemkin’s vision of the nature of the crime
of genocide.” In 2007, the European Court of Human Rights (ECHR) noted in its judgement in the Jorgic v. Germany case
that in 1992 the majority of legal scholars took the narrow view that "intent to destroy" in the CPPCG meant the
intended physical-biological destruction of the protected group and that this was still the majority opinion. But
the ECHR also noted that a minority took a broader view and did not consider that biological-physical destruction was
necessary as the intent to destroy a national, racial, religious or ethnic group was enough to qualify as genocide.
In the same judgement the ECHR reviewed the judgements of several international and municipal courts. It noted that the International Criminal Tribunal for the Former Yugoslavia and the International Court of Justice had
agreed with the narrow interpretation, that biological-physical destruction was necessary for an act to qualify as
genocide. The ECHR also noted that at the time of its judgement, apart from the courts in Germany, which had taken a broad view, there had been few cases of genocide under other Convention States' municipal laws and that "There are no reported cases in which the courts of these States have defined the type of group destruction the perpetrator
must have intended in order to be found guilty of genocide". The phrase "in whole or in part" has been subject to
much discussion by scholars of international humanitarian law. The International Criminal Tribunal for the Former
Yugoslavia found in Prosecutor v. Radislav Krstic – Trial Chamber I – Judgment – IT-98-33 (2001) ICTY8 (2 August
2001) that Genocide had been committed. In Prosecutor v. Radislav Krstic – Appeals Chamber – Judgment – IT-98-33
(2004) ICTY 7 (19 April 2004) paragraphs 8, 9, 10, and 11 addressed the issue of in part and found that "the part
must be a substantial part of that group. The aim of the Genocide Convention is to prevent the intentional destruction
of entire human groups, and the part targeted must be significant enough to have an impact on the group as a whole."
The Appeals Chamber goes into details of other cases and the opinions of respected commentators on the Genocide Convention
to explain how they came to this conclusion. The judges continue in paragraph 12, "The determination of when the
targeted part is substantial enough to meet this requirement may involve a number of considerations. The numeric
size of the targeted part of the group is the necessary and important starting point, though not in all cases the
ending point of the inquiry. The number of individuals targeted should be evaluated not only in absolute terms, but
also in relation to the overall size of the entire group. In addition to the numeric size of the targeted portion,
its prominence within the group can be a useful consideration. If a specific part of the group is emblematic of the
overall group, or is essential to its survival, that may support a finding that the part qualifies as substantial
within the meaning of Article 4 [of the Tribunal's Statute]." In paragraph 13 the judges raise the issue of the perpetrators'
access to the victims: "The historical examples of genocide also suggest that the area of the perpetrators’ activity
and control, as well as the possible extent of their reach, should be considered. ... The intent to destroy formed
by a perpetrator of genocide will always be limited by the opportunity presented to him. While this factor alone
will not indicate whether the targeted group is substantial, it can—in combination with other factors—inform the
analysis." The Convention came into force as international law on 12 January 1951 after the minimum 20 countries
became parties. At that time, however, only two of the five permanent members of the UN Security Council were parties
to the treaty: France and the Republic of China. The Soviet Union ratified in 1954, the United Kingdom in 1970, the
People's Republic of China in 1983 (having replaced the Taiwan-based Republic of China on the UNSC in 1971), and
the United States in 1988. This long delay in support for the Convention by the world's most powerful nations caused
the Convention to languish for over four decades. Only in the 1990s did the international law on the crime of genocide
begin to be enforced. Writing in 1998, Kurt Jonassohn and Karin Björnson stated that the CPPCG was a legal instrument
resulting from a diplomatic compromise. As such the wording of the treaty is not intended to be a definition suitable
as a research tool, and although it is used for this purpose, as it has an international legal credibility that others
lack, other definitions have also been postulated. Jonassohn and Björnson go on to say that none of these alternative
definitions have gained widespread support for various reasons. Jonassohn and Björnson postulate that the major reason
why no single generally accepted genocide definition has emerged is because academics have adjusted their focus to
emphasise different periods and have found it expedient to use slightly different definitions to help them interpret
events. For example, Frank Chalk and Kurt Jonassohn studied the whole of human history, while Leo Kuper and R. J.
Rummel in their more recent works concentrated on the 20th century, and Helen Fein, Barbara Harff and Ted Gurr have
looked at post World War II events. Jonassohn and Björnson are critical of some of these studies, arguing that they
are too expansive, and conclude that the academic discipline of genocide studies is too young to have a canon of
work on which to build an academic paradigm. The exclusion of social and political groups as targets of genocide
in the CPPCG legal definition has been criticized by some historians and sociologists, for example M. Hassan Kakar
in his book The Soviet Invasion and the Afghan Response, 1979–1982 argues that the international definition of genocide
is too restricted, and that it should include political groups or any group so defined by the perpetrator and quotes
Chalk and Jonassohn: "Genocide is a form of one-sided mass killing in which a state or other authority intends to
destroy a group, as that group and membership in it are defined by the perpetrator." While there are various definitions
of the term, Adam Jones states that the majority of genocide scholars consider that "intent to destroy" is a requirement
for any act to be labelled genocide, and that there is growing agreement on the inclusion of the physical destruction
criterion. Barbara Harff and Ted Gurr defined genocide as "the promotion and execution of policies by a state or
its agents which result in the deaths of a substantial portion of a group ...[when] the victimized groups are defined
primarily in terms of their communal characteristics, i.e., ethnicity, religion or nationality." Harff and Gurr also
differentiate between genocides and politicides by the characteristics by which members of a group are identified
by the state. In genocides, the victimized groups are defined primarily in terms of their communal characteristics,
i.e., ethnicity, religion or nationality. In politicides the victim groups are defined primarily in terms of their
hierarchical position or political opposition to the regime and dominant groups. Daniel D. Polsby and Don B. Kates,
Jr. state that "... we follow Harff's distinction between genocides and 'pogroms,' which she describes as 'short-lived
outbursts by mobs, which, although often condoned by authorities, rarely persist.' If the violence persists for long
enough, however, Harff argues, the distinction between condonation and complicity collapses." According to R. J.
Rummel, genocide has three different meanings. The ordinary meaning is murder by government of people due to their national,
ethnic, racial, or religious group membership. The legal meaning of genocide refers to the international treaty,
the Convention on the Prevention and Punishment of the Crime of Genocide. This also includes non-killings that in
the end eliminate the group, such as preventing births or forcibly transferring children out of the group to another
group. A generalized meaning of genocide is similar to the ordinary meaning but also includes government killings
of political opponents or otherwise intentional murder. It is to avoid confusion regarding what meaning is intended
that Rummel created the term democide for the third meaning. Highlighting the potential for state and non-state actors
to commit genocide in the 21st century, for example, in failed states or as non-state actors acquire weapons of mass
destruction, Adrian Gallagher defined genocide as 'When a source of collective power (usually a state) intentionally
uses its power base to implement a process of destruction in order to destroy a group (as defined by the perpetrator),
in whole or in substantial part, dependent upon relative group size'. The definition upholds the centrality of intent,
the multidimensional understanding of destroy, broadens the definition of group identity beyond that of the 1948
definition yet argues that a substantial part of a group has to be destroyed before it can be classified as genocide
(dependent on relative group size). All signatories to the CPPCG are required to prevent and punish acts of genocide,
both in peace and wartime, though some barriers make this enforcement difficult. In particular, some of the signatories—namely,
Bahrain, Bangladesh, India, Malaysia, the Philippines, Singapore, the United States, Vietnam, Yemen, and former Yugoslavia—signed
with the proviso that no claim of genocide could be brought against them at the International Court of Justice without
their consent. Despite official protests from other signatories (notably Cyprus and Norway) on the ethics and legal
standing of these reservations, the immunity from prosecution they grant has been invoked from time to time, as when
the United States refused to allow a charge of genocide brought against it by former Yugoslavia following the 1999
Kosovo War. Because international laws defining and forbidding genocide were not universally accepted until the 1948 promulgation of the Convention on the Prevention and Punishment of the Crime of Genocide (CPPCG), those criminals who were prosecuted after the war in international courts for taking part in the Holocaust were found guilty of crimes against humanity and other more specific crimes like murder. Nevertheless, the Holocaust is universally recognized
to have been a genocide and the term, that had been coined the year before by Raphael Lemkin, appeared in the indictment
of the 24 Nazi leaders, Count 3, which stated that all the defendants had "conducted deliberate and systematic genocide—namely,
the extermination of racial and national groups..." On 12 July 2007, the European Court of Human Rights, when dismissing the appeal by Nikola Jorgić against his conviction for genocide by a German court (Jorgic v. Germany), noted that the German courts' wider interpretation of genocide has since been rejected by international courts considering similar
cases. The ECHR also noted that in the 21st century "Amongst scholars, the majority have taken the view that ethnic
cleansing, in the way in which it was carried out by the Serb forces in Bosnia and Herzegovina in order to expel
Muslims and Croats from their homes, did not constitute genocide. However, there are also a considerable number of
scholars who have suggested that these acts did amount to genocide, and the ICTY has found in the Momcilo Krajisnik case that the actus reus of genocide was met in Prijedor: "With regard to the charge of genocide, the Chamber found that in spite of evidence of acts perpetrated in the municipalities which constituted the actus reus of genocide".
About 30 people have been indicted for participating in genocide or complicity in genocide during the early 1990s
in Bosnia. To date, after several plea bargains and some convictions that were successfully challenged on appeal,
two men, Vujadin Popović and Ljubiša Beara, have been found guilty of committing genocide, Zdravko Tolimir has been
found guilty of committing genocide and conspiracy to commit genocide, and two others, Radislav Krstić and Drago
Nikolić, have been found guilty of aiding and abetting genocide. Three others have been found guilty of participating
in genocide in Bosnia by German courts, one of whom, Nikola Jorgić, lost an appeal against his conviction in the European Court of Human Rights. A further eight men, former members of the Bosnian Serb security forces, were found guilty
of genocide by the State Court of Bosnia and Herzegovina (See List of Bosnian genocide prosecutions). Slobodan Milošević,
as the former President of Serbia and of Yugoslavia, was the most senior political figure to stand trial at the ICTY.
He died on 11 March 2006 during his trial where he was accused of genocide or complicity in genocide in territories
within Bosnia and Herzegovina, so no verdict was returned. In 1995, the ICTY issued a warrant for the arrest of Bosnian
Serbs Radovan Karadžić and Ratko Mladić on several charges including genocide. On 21 July 2008, Karadžić was arrested
in Belgrade and tried in The Hague on charges of genocide among other crimes. Ratko Mladić was arrested
on 26 May 2011 by Serbian special police in Lazarevo, Serbia. Karadžić was convicted of ten of the eleven charges
laid against him and sentenced to 40 years in prison on 24 March 2016. The International Criminal Tribunal for Rwanda
(ICTR) is a court under the auspices of the United Nations for the prosecution of offenses committed in Rwanda during
the genocide that began there on 6 April 1994. The ICTR was created on 8 November 1994
by the Security Council of the United Nations in order to judge those people responsible for the acts of genocide
and other serious violations of international law committed in the territory of Rwanda, or by Rwandan citizens
in nearby states, between 1 January and 31 December 1994. There has been much debate over categorizing the situation
in Darfur as genocide. The ongoing conflict in Darfur, Sudan, which started in 2003, was declared a "genocide" by
United States Secretary of State Colin Powell on 9 September 2004 in testimony before the Senate Foreign Relations
Committee. Since that time, however, no other permanent member of the UN Security Council has followed suit. In fact,
in January 2005, an International Commission of Inquiry on Darfur, authorized by UN Security Council Resolution 1564
of 2004, issued a report to the Secretary-General stating that "the Government of the Sudan has not pursued a policy
of genocide." Nevertheless, the Commission cautioned that "The conclusion that no genocidal policy has been pursued
and implemented in Darfur by the Government authorities, directly or through the militias under their control, should
not be taken in any way as detracting from the gravity of the crimes perpetrated in that region. International offences
such as the crimes against humanity and war crimes that have been committed in Darfur may be no less serious and
heinous than genocide." In March 2005, the Security Council formally referred the situation in Darfur to the Prosecutor
of the International Criminal Court, taking into account the Commission report but without mentioning any specific
crimes. Two permanent members of the Security Council, the United States and China, abstained from the vote on the
referral resolution. As of his fourth report to the Security Council, the Prosecutor has found "reasonable grounds
to believe that the individuals identified [in the UN Security Council Resolution 1593] have committed crimes against
humanity and war crimes," but did not find sufficient evidence to prosecute for genocide. Other authors have focused
on the structural conditions leading up to genocide and the psychological and social processes that create an evolution
toward genocide. Ervin Staub showed that economic deterioration and political confusion and disorganization were
starting points of increasing discrimination and violence in many instances of genocide and mass killing. These conditions
lead to the scapegoating of a group and to ideologies that identify that group as an enemy. A history of devaluation of the group
that becomes the victim, past violence against the group that becomes the perpetrator leading to psychological wounds,
authoritarian cultures and political systems, and the passivity of internal and external witnesses (bystanders) all
contribute to the probability that the violence develops into genocide. Intense conflict between groups that remains
unresolved and becomes intractable and violent can also lead to genocide. The conditions that lead to genocide provide guidance
to early prevention, such as humanizing a devalued group, creating ideologies that embrace all groups, and activating
bystander responses. There is substantial research to indicate how this can be done, but information is only slowly
transformed into action.
Saint-Barthélemy (French pronunciation: [sɛ̃baʁtelemi]), officially the Territorial collectivity
of Saint-Barthélemy (French: Collectivité territoriale de Saint-Barthélemy), is an overseas collectivity of France.
It is often abbreviated to Saint-Barth in French, or St. Barts or St. Barths in English; the indigenous people called the
island Ouanalao. St. Barthélemy lies about 35 kilometres (22 mi) southeast of St. Martin and north of St. Kitts.
Puerto Rico is 240 kilometres (150 mi) to the west in the Greater Antilles. Saint Barthélemy, a volcanic island fully
encircled by shallow reefs, has an area of 25 square kilometres (9.7 sq mi) and a population of 9,035 (Jan. 2011
estimate). Its capital is Gustavia, which also contains the main harbour to the island. It is the
only Caribbean island which was a Swedish colony for any significant length of time; Guadeloupe was under Swedish
rule only briefly at the end of the Napoleonic Wars. Symbolism from the Swedish national arms, the Three Crowns,
still appears in the island's coat of arms. The language, cuisine, and culture, however, are distinctly French. The
island is a popular tourist destination during the winter holiday season, especially for the rich and famous during
the Christmas and new year period. Saint Barthélemy was for many years a French commune forming part of Guadeloupe,
which is an overseas region and department of France. Through a referendum in 2003, island residents sought separation
from the administrative jurisdiction of Guadeloupe, and it was finally accomplished in 2007. The island of Saint
Barthélemy became an Overseas Collectivity (COM). A governing territorial council was elected for its administration,
which has provided the island with a certain degree of autonomy. The Hotel de Ville, which was the town hall, is
now the Hotel de la Collectivité. A senator represents the island in Paris. St. Barthélemy has retained its free
port status. Located approximately 250 kilometres (160 mi) east of Puerto Rico and the nearer Virgin Islands, St.
Barthélemy lies immediately southeast of the islands of Saint Martin and Anguilla. It is one of the Renaissance Islands.
St. Barthélemy is separated from Saint Martin by the Saint-Barthélemy Channel. It lies northeast of Saba and St Eustatius,
and north of St Kitts. Some small satellite islets belong to St. Barthélemy including Île Chevreau (Île Bonhomme),
Île Frégate, Île Toc Vers, Île Tortue and Gros Îlets (Îlots Syndare). A much bigger islet, Île Fourchue, lies on
the north of the island, in the Saint-Barthélemy Channel. Other rocky islets include Coco, the Roques (or little
Turtle rocks), the Goat, and the Sugarloaf. Grande Saline Bay provides temporary anchorage for small vessels while
Colombier Bay, to the northwest, has a 4-fathom patch near mid-entrance. In the bight of St. Jean Bay there is a
narrow cut through the reef. The north and east sides of the island are fringed, to a short distance from the shore,
by a visible coral reef. Reefs are mostly in shallow waters and are clearly visible. The coastal areas abound with
beaches and many of these have offshore reefs, some of which are part of a marine reserve. There are as many as 22
public beaches (most beaches on St. Barthélemy are known as "Anse de..." etc.) of which 15 are considered suitable
for swimming. They are categorized and divided into two groups, the leeward side (calm waters protected by the island
itself) and windward side (some of which are protected by hills and reefs). The windward beaches are popular for
windsurfing. The beach of St Jean is suitable for water sports and facilities have been created for that purpose.
The long beach at Lorient has shade and is a quiet beach as compared to St. Jean. The island covers an area of 25
square kilometres (2,500 ha). The eastern side is wetter than the western. Although the climate is essentially arid,
rainfall averages 1,000 mm annually, but with considerable variation over the terrain. Summer is from May
to November, which is also the rainy season. Winter from December to April is the dry season. Sunshine is very prominent
for nearly the entire year and even during the rainy season. Humidity, however, is not very high due to the winds.
The average temperature is around 25 °C with day temperatures rising to 32 °C. The average high and low temperatures
in January are 28 °C and 22 °C, respectively, while in July they are 30 °C and 24 °C. The lowest night temperature
recorded is 13 °C. The Caribbean sea waters in the vicinity generally maintain a temperature of about 27 °C. Residents
of Saint-Barthélemy (Saint-Barthélemoise people) are French citizens and work at establishments on the island. Most
of them are descendants of the first settlers, of Breton, Norman, Poitevin, Saintongeais and Angevin lineage. French
is the native tongue of the population. English is understood in hotels and restaurants, and a small population of
Anglophones have been resident in Gustavia for many years. The St. Barthélemy French patois is spoken by some 500–700
people in the leeward portion of the island and is superficially related to Quebec French, whereas Créole French
is limited to the windward side. Unlike other populations in the Caribbean, language preference between the Créole
and Patois is geographically, and not racially, determined. On 7 February 2007, the French Parliament
passed a bill granting COM status to both St. Barthélemy and (separately) to the neighbouring Saint Martin. The new
status took effect on 15 July 2007, when the first territorial council was elected, according to the law. The island
has a president (elected every five years), a unicameral Territorial Council of nineteen members who are elected
by popular vote and serve for five-year terms, and an executive council of seven members. Elections to these councils
were first held on 1 July 2007 with the last election in March 2012. One senator represents the island in the French
Senate. The first election was held on 21 September 2008 with the last election in September 2014. St. Barthélemy
became an overseas territory of the European Union on 1 January 2012, but the island's inhabitants remain French
citizens with EU status holding EU passports. France is responsible for the defence of the island and as such has
stationed a security force on the island comprising six policemen and thirteen gendarmes (posted on two-year terms).
Agricultural production on the island is difficult given the dry and rocky terrain, but the early settlers managed
to produce vegetables, cotton, pineapples, salt and bananas, and also engaged in fishing. Sweet potato is also grown in patches.
The islanders developed commerce through the port of Gustavia. Duty-free port attractions, retail trade, high-end
tourism (mostly from North America), international investment, and the island's luxury hotels and villas have increased
its prosperity, reflected in the high standard of living of its citizens. St. Barthélemy is considered a playground
of the rich and famous, especially as a winter haven, and is known for its beaches, gourmet dining and high-end designers. Most of
the food is imported by airplane or boat from the US or France. Tourism attracts about 200,000 visitors every year.
As a result, there is a boom in house building activity catering to the tourists and also to the permanent residents
of the island, with prices as high as €61,200,000 for a beachfront villa. St. Barthélemy has about 25 hotels, most
of them with 15 rooms or fewer. The largest has 58 rooms. Hotels are classified in the traditional French manner:
3 Star, 4 Star and 4 Star Luxe. Of particular note are Eden Rock and Cheval Blanc. Hotel Le Toiny, the most expensive
hotel on the island, has 12 rooms. Most places of accommodation are in the form of private villas, of which there
are some 400 available to rent on the island. The island's tourism industry, though expensive, attracts 70,000 visitors
every year to its luxury hotels and villas and another 130,000 people arrive by luxury boats. It also attracts a
labour force from Brazil and Portugal to meet the industry needs. As the terrain is generally arid, the hills have
mostly poor soil and support only cacti and succulent plants. During the rainy season the area turns green with vegetation
and grass. The eastern part of the island is greener as it receives more rainfall. A 1994 survey revealed several
hundred indigenous species of plants, including naturalized varieties of flora; some grow in irrigated areas
while the dry areas are dominated by cacti. Sea grapes and palm trees are a common sight, with mangroves
and shrubs surviving in the saline coastal swamps. Coconut palm was brought to the island from the Pacific islands.
Notable trees and plants on the island include the royal palm, sea grape trees in the form
of shrubs on the beaches and as 5 to 7 m trees in the interior areas of the island, aloe or aloe vera (brought from
the Mediterranean), the night blooming cereus, mamillaria nivosa, yellow prickly pear or barbary fig which was planted
as barbed wire defences against invading British army in 1773, Mexican cactus, stapelia gigantea, golden trumpet
or yellow bell which was originally from South America, bougainvillea and others. Marine mammals are many, such as
the dolphins, porpoises and whales, which are seen here during the migration period from December till May. Turtles
are a common sight along the coastline of the island. They are a protected species and are on the endangered list; it
is estimated that they take 15–50 years to attain reproductive age. Though they live in the sea,
the females come to the shore to lay eggs and are protected by private societies. Three species of turtles are particularly
notable. These are: the leatherback sea turtles, which have leathery skin instead of a shell and are the largest of
the type found here, sometimes measuring as much as 3 m (the average is about 1.5 m) and weighing about 450 kg (jellyfish
are their favourite food); the hawksbill turtles, which have hawk-like beaks, are found near reefs, are generally about
90 cm in diameter and weigh about 60 kg, and feed on crabs and snails; and the green turtles, herbivores
which have rounded heads, generally about 90 cm in diameter and live amidst tall sea grasses. The marine life found
here consists of anemones, urchins, sea cucumbers, and eels, which all live on the reefs along with turtles, conch
and many varieties of marine fishes. The marine aquafauna is rich in conch, which has pearly-pink shells. Its meat
is a favourite food and its shells are collectors' items. Other species of fish which are recorded
close to the shore line in shallow waters are: sergeant majors, the blue chromis, brown chromis, surgeon fish; blue
tangs and trumpet fish. On the shore are ghost crabs, which always live on the beach in small burrowed tunnels made
in sand, and the hermit crabs, which live on land but lay eggs in water and which also eat garbage and sewage.
They spend some months in the sea during and after the hatching season. Saint-Barthélemy has a marine nature reserve,
known as the Reserve Naturelle, which covers 1,200 ha and is divided into five zones all around the island to form a
network of protected areas. The Reserve includes the bays of Grand Cul de Sac, Colombier, Marigot, Petit Cul de Sac,
Petite Anse, as well as waters around offshore rocks such as Les Gros Îlets, Pain de Sucre, Tortue and Fourchue.
The Reserve is designed to protect the islands coral reefs, seagrass and endangered marine species including sea
turtles. The Reserve has two levels of protection, the yellow zones of protection where certain non-extractive activities,
like snorkeling and boating, are allowed and the red zones of high protection where most activities including SCUBA
are restricted in order to protect or recover marine life. Anchoring is prohibited in the Reserve and mooring buoys
are in place in some of the protected bays like Colombier. When the British invaded the harbour town in 1744,
the town's buildings were destroyed. New structures were subsequently built around the harbour area, and the Swedes
further added to the architectural beauty of the town with more buildings after they occupied it in 1785. Before their occupation, the
port was known as "Carénage". The Swedes renamed it Gustavia in honour of their king, Gustav III, and it became their
prime trading center. The port remained neutral during the Caribbean wars of the 18th century and
was used as a trading post for contraband; the city of Gustavia prospered, but this prosperity was short-lived. Musée
Territorial de St.-Barthélemy is a historical museum known as the "St. Barts Municipal Museum" also called the "Wall
House" (musée – bibliothèque) in Gustavia, which is located on the far end of La Pointe. The museum is housed in
an old stone house, a two-storey building which has been refurbished. The island’s history relating to French, Swedish
and British period of occupation is well presented in the museum with photographs, maps and paintings. Also on display
are the ancestral costumes, antique tools, models of Creole houses and ancient fishing boats. It also houses a library.
Among the notable structures in the town are the three forts built by the Swedes for defense purposes. One of these
forts, known as Fort Oscar (formerly Gustav Adolph), overlooks the sea from the far side of La Pointe;
its ruins have been replaced by a modern military building which now houses the local gendarmerie. The second
fort, Fort Karl, now presents only a few ruins. The third fort built by the Swedes, Fort Gustav,
is also in ruins, strewn around the weather station and the lighthouse. The fort, built in 1787 on a hill slope,
has ruins of ramparts, a guardhouse, a munitions depot, a wood-burning oven and so forth. French cuisine, West Indian cuisine,
Creole cuisine, Italian cuisine and Asian cuisine are common in St. Barthélemy. The island has over 70 restaurants
serving many dishes, including a significant number of gourmet restaurants; many of the finest restaurants are
located in the hotels. There are also a number of snack restaurants, which the French call "les snacks" or "les petits
creux", serving sandwiches, pizzas and salads. In West Indian cuisine, steamed vegetables with fresh fish are common;
Creole dishes tend to be spicier. The island hosts gastronomic events throughout the year, with dishes such as spring
roll of shrimp and bacon, fresh grilled lobster, Chinese noodle salad with coconut milk, and grilled beef fillet.
The Transat AG2R Race, held every other year, is an event which originates in Concarneau in Brittany, France,
reaching St. Barthélemy. It is a race of 10 m single-hulled boats carrying essential safety
equipment, each navigated by two sailors. Kitesurfing and other water sports have also become popular on
the island in recent years, especially at Grand Cul-de-Sac beach (Baie de Grand Cul de Sac) for wind sports such as kitesurfing,
and at Saint Jean Beach (Baie de Saint Jean), Lorient, Toiny and Anse des Cayes for surfing. Tennis is also popular
on the island and it has several tennis clubs: Tennis Club de Flamboyant in Grand Cul-de-Sac, AJOE Tennis Club in
Orient and ASCO in Colombier. St. Barthélemy has a small airport known as Gustaf III Airport on the north coast of
the island that is served by small regional commercial aircraft and charters. The nearest airport with a runway length
sufficient to land a typical commercial jet airliner is on the neighboring island of Sint Maarten: Princess Juliana
International Airport, which acts as a hub, providing connecting flights with regional carriers to St. Barthélemy.
Several international airlines and domestic Caribbean airlines operate in this sector.
Tajikistan (i/tɑːˈdʒiːkᵻstɑːn/, /təˈdʒiːkᵻstæn/, or /tæˈdʒiːkiːstæn/; Tajik: Тоҷикистон [tɔd͡ʒikɪsˈtɔn]; Persian: تاجيكستان), officially
the Republic of Tajikistan (Tajik: Ҷумҳурии Тоҷикистон, Çumhuriji Toçikiston/Jumhuriyi
Tojikiston; Persian: جمهورى تاجيكستان; Russian: Респу́блика Таджикистан, Respublika Tadzhikistan), is a mountainous, landlocked country in Central
Asia with an estimated 8 million people in 2013, and an area of 143,100 km2 (55,300 sq mi). It is bordered by Afghanistan
to the south, Uzbekistan to the west, Kyrgyzstan to the north, and China to the east. Pakistan lies to the south,
separated by the narrow Wakhan Corridor. Traditional homelands of Tajik people included present-day Tajikistan, Afghanistan
and Uzbekistan. The territory that now constitutes Tajikistan was previously home to several ancient cultures, including
the city of Sarazm of the Neolithic and the Bronze Age, and was later home to kingdoms ruled by people of different
faiths and cultures, including the Oxus civilization, Andronovo culture, Buddhism, Nestorian Christianity, Zoroastrianism,
and Manichaeism. The area has been ruled by numerous empires and dynasties, including the Achaemenid Empire, Sassanian
Empire, Hephthalite Empire, Samanid Empire, Mongol Empire, Timurid dynasty, and the Russian Empire. As a result of
the breakup of the Soviet Union, Tajikistan became an independent nation in 1991. A civil war was fought almost immediately
after independence, lasting from 1992 to 1997. Since the end of the war, newly established political stability and
foreign aid have allowed the country's economy to grow. Tajikistan means the "Land of the Tajiks". The suffix "-stan"
(Persian: ـستان -stān) is Persian for "place of" or "country" and Tajik is, most likely, the name of a pre-Islamic
(before the seventh century A.D.) tribe. According to the Library of Congress's 1997 Country Study of Tajikistan,
it is difficult to definitively state the origins of the word "Tajik" because the term is "embroiled in twentieth-century
political disputes about whether Turkic or Iranian peoples were the original inhabitants of Central Asia." The earliest
recorded history of the region dates back to about 500 BCE when much, if not all, of modern Tajikistan was part of
the Achaemenid Empire. Some authors have also suggested that in the 7th and 6th century BCE parts of modern Tajikistan,
including territories in the Zeravshan valley, formed part of Kambojas before it became part of the Achaemenid Empire.
After the region's conquest by Alexander the Great it became part of the Greco-Bactrian Kingdom, a successor state
of Alexander's empire. Northern Tajikistan (the cities of Khujand and Panjakent) was part of Sogdia, a collection
of city-states which was overrun by Scythians and Yuezhi nomadic tribes around 150 BCE. The Silk Road passed through
the region and following the expedition of Chinese explorer Zhang Qian during the reign of Wudi (141–87 BCE) commercial
relations between Han China and Sogdiana flourished. Sogdians played a major role in facilitating trade and also
worked in other capacities, as farmers, carpetweavers, glassmakers, and woodcarvers. The Kushan Empire, a collection
of Yuezhi tribes, took control of the region in the first century CE and ruled until the 4th century CE during which
time Buddhism, Nestorian Christianity, Zoroastrianism, and Manichaeism were all practiced in the region. Later the
Hephthalite Empire, a collection of nomadic tribes, moved into the region and Arabs brought Islam in the early eighth
century. Central Asia continued in its role as a commercial crossroads, linking China, the steppes to the north,
and the Islamic heartland. It was temporarily under the control of the Tibetan empire and the Chinese from 650 to 680 and
then under the control of the Umayyads in 710. The Samanid Empire, 819 to 999, restored Persian control of the region
and enlarged the cities of Samarkand and Bukhara (both cities are today part of Uzbekistan) which became the cultural
centers of Iran and the region was known as Khorasan. The Kara-Khanid Khanate conquered Transoxania (which corresponds
approximately with modern-day Uzbekistan, Tajikistan, southern Kyrgyzstan and southwest Kazakhstan) and ruled between
999–1211. Their arrival in Transoxania signaled a definitive shift from Iranian to Turkic predominance in Central
Asia, but gradually the Kara-khanids became assimilated into the Perso-Arab Muslim culture of the region. Russian
Imperialism led to the Russian Empire's conquest of Central Asia during the late 19th century's Imperial Era. Between
1864 and 1885 Russia gradually took control of the entire territory of Russian Turkestan, the Tajikistan portion
of which had been controlled by the Emirate of Bukhara and Khanate of Kokand. Russia was interested in gaining access
to a supply of cotton and in the 1870s attempted to switch cultivation in the region from grain to cotton (a strategy
later copied and expanded by the Soviets). By 1885 Tajikistan's territory was either ruled by the
Russian Empire or its vassal state, the Emirate of Bukhara; nevertheless, Tajiks felt little Russian influence.
During the late 19th century the Jadidists established themselves as an Islamic social movement throughout
the region. Although the Jadidists were pro-modernization and not necessarily anti-Russian, the Russians viewed the
movement as a threat. Russian troops were required to restore order during uprisings against the
Khanate of Kokand between 1910 and 1913. Further violence occurred in July 1916 when demonstrators attacked Russian
soldiers in Khujand over the threat of forced conscription during World War I. Despite Russian troops quickly bringing
Khujand back under control, clashes continued throughout the year in various locations in Tajikistan.
After the Russian Revolution of 1917 guerrillas throughout Central Asia, known as basmachi, waged a war against Bolshevik
armies in a futile attempt to maintain independence. The Bolsheviks prevailed after a four-year war, in which mosques
and villages were burned down and the population heavily suppressed. Soviet authorities started a campaign of secularization:
the practice of Islam, Judaism, and Christianity was discouraged and repressed, and many mosques, churches, and synagogues
were closed. As a consequence of the conflict and Soviet agriculture policies, Central Asia, Tajikistan included,
suffered a famine that claimed many lives. In 1924, the Tajik Autonomous Soviet Socialist Republic was created as
a part of Uzbekistan, but in 1929 the Tajik Soviet Socialist Republic (Tajik SSR) was made a separate constituent
republic; however, the predominantly ethnic Tajik cities of Samarkand and Bukhara remained in the Uzbek SSR. Between
1927 and 1934, collectivization of agriculture and a rapid expansion of cotton production took place, especially
in the southern region. Soviet collectivization policy brought violence against peasants and forced resettlement
occurred throughout Tajikistan. Consequently, some peasants fought collectivization and revived the Basmachi movement.
Some small scale industrial development also occurred during this time along with the expansion of irrigation infrastructure.
Two rounds of Soviet purges directed by Moscow (1927–1934 and 1937–1938) resulted in the expulsion of nearly 10,000
people, from all levels of the Communist Party of Tajikistan. Ethnic Russians were sent in to replace those expelled
and subsequently Russians dominated party positions at all levels, including the top position of first secretary.
Between 1926 and 1959 the proportion of Russians among Tajikistan's population grew from less than 1% to 13%. Bobojon
Ghafurov, Tajikistan's First Secretary of the Communist Party of Tajikistan from 1946 to 1956, was the only Tajikistani
politician of significance outside of the country during the Soviet era. He was followed in office by Tursun Uljabayev
(1956–61), Jabbor Rasulov (1961–1982), and Rahmon Nabiyev (1982–1985, 1991–1992). Tajiks began to be conscripted
into the Soviet Army in 1939 and during World War II around 260,000 Tajik citizens fought against Germany, Finland
and Japan. Between 60,000 (4%) and 120,000 (8%) of Tajikistan's 1,530,000 citizens were killed during World War II.
Following the war and Stalin's reign, attempts were made to further expand the agriculture and industry of Tajikistan.
During 1957–58 Nikita Khrushchev's Virgin Lands Campaign focused attention on Tajikistan, where living conditions,
education and industry lagged behind the other Soviet Republics. In the 1980s, Tajikistan had the lowest household
saving rate in the USSR, the lowest percentage of households in the two top per capita income groups, and the lowest
rate of university graduates per 1000 people. By the late 1980s Tajik nationalists were calling for increased rights.
Real disturbances did not occur within the republic until 1990. The following year, the Soviet Union collapsed, and
Tajikistan declared its independence. The nation almost immediately fell into civil war that involved various factions
fighting one another; these factions were often distinguished by clan loyalties. More than 500,000 residents fled
during this time because of persecution and increased poverty, and in search of better economic opportunities in the West or in other
former Soviet republics. Emomali Rahmon came to power in 1992, defeating former prime minister Abdumalik Abdullajanov
in a November presidential election with 58% of the vote. The elections took place shortly after the end of the war,
and Tajikistan was in a state of complete devastation. The estimated dead numbered over 100,000. Around 1.2 million
people were refugees inside and outside of the country. In 1997, a ceasefire was reached between Rahmon and opposition
parties under the guidance of Gerd D. Merrem, Special Representative to the Secretary General, a result widely praised
as a successful United Nations peacekeeping initiative. The ceasefire guaranteed that 30% of ministerial positions would
go to the opposition. Elections were held in 1999, though they were criticized by opposition parties and foreign
observers as unfair and Rahmon was re-elected with 98% of the vote. Elections in 2006 were again won by Rahmon (with
79% of the vote) and he began his third term in office. Several opposition parties boycotted the 2006 election and
the Organization for Security and Cooperation in Europe (OSCE) criticized it, although observers from the Commonwealth
of Independent States claimed the elections were legal and transparent. Rahmon's administration came under further
criticism from the OSCE in October 2010 for its censorship and repression of the media. The OSCE claimed that the
Tajik Government censored Tajik and foreign websites and instituted tax inspections on independent printing houses
that led to the cessation of printing activities for a number of independent newspapers. Russian border troops were
stationed along the Tajik–Afghan border until summer 2005. Since the September 11, 2001 attacks, French troops have
been stationed at the Dushanbe Airport in support of air operations of NATO's International Security Assistance Force
in Afghanistan. United States Army and Marine Corps personnel periodically visit Tajikistan to conduct joint training
missions of up to several weeks duration. The Government of India rebuilt the Ayni Air Base, a military airport located
15 km southwest of Dushanbe, at a cost of $70 million, completing the repairs in September 2010. It is now the main
base of the Tajikistan air force. There have been talks with Russia concerning use of the Ayni facility, and Russia
continues to maintain a large base on the outskirts of Dushanbe. In 2010, there were concerns among Tajik officials
that Islamic militarism in the east of the country was on the rise following the escape of 25 militants from a Tajik
prison in August, an ambush that killed 28 Tajik soldiers in the Rasht Valley in September, and another ambush in
the valley in October that killed 30 soldiers, followed by fighting outside Gharm that left 3 militants dead. To
date the country's Interior Ministry asserts that the central government maintains full control over the country's
east, and the military operation in the Rasht Valley was concluded in November 2010. However, fighting erupted again
in July 2012. In 2015, Russia announced it would send more troops to Tajikistan, as reported by the online magazine
STRATFOR. Tajikistan is officially a republic, and holds elections for the presidency and parliament, operating under
a presidential system. It is, however, a dominant-party system, where the People's Democratic Party of Tajikistan
routinely has a vast majority in Parliament. Emomalii Rahmon has held the office of President of Tajikistan continually
since November 1994. The Prime Minister is Kokhir Rasulzoda, the First Deputy Prime Minister is Matlubkhon Davlatov
and the two Deputy Prime Ministers are Murodali Alimardon and Ruqiya Qurbanova. The parliamentary elections of 2005
aroused many accusations from opposition parties and international observers that President Emomalii Rahmon corruptly
manipulates the election process. The most recent elections, in February 2010, saw the ruling PDPT
lose four seats in Parliament, yet still maintain a comfortable majority. The Organization for Security and Co-operation
in Europe election observers said the 2010 polling "failed to meet many key OSCE commitments" and that "these elections
failed on many basic democratic standards." The government insisted that only minor violations had occurred, which
would not affect the will of the Tajik people. Freedom of the press is officially guaranteed by the government,
but independent press outlets remain restricted, as does a substantial amount of web content. According to the Institute
for War & Peace Reporting, access is blocked to local and foreign websites including avesta.tj, Tjknews.com, ferghana.ru,
and centrasia.ru, and journalists are often obstructed from reporting on controversial events. In practice, no public
criticism of the regime is tolerated and all direct protest is severely suppressed and does not receive coverage
in the local media. Tajikistan is landlocked, and is the smallest nation in Central Asia by area. It lies mostly
between latitudes 36° and 41° N (a small area is north of 41°), and longitudes 67° and 75° E (a small area is east
of 75°). It is covered by mountains of the Pamir range, and more than fifty percent of the country is over 3,000
meters (9,800 ft) above sea level. The only major areas of lower land are in the north (part of the Fergana Valley),
and in the southern Kofarnihon and Vakhsh river valleys, which form the Amu Darya. Dushanbe is located on the southern
slopes above the Kofarnihon valley. Tajikistan's economy grew substantially after the war. The GDP of Tajikistan
expanded at an average rate of 9.6% over the period of 2000–2007 according to the World Bank data. This improved
Tajikistan's position among other Central Asian countries (namely Turkmenistan and Uzbekistan), which seem to have
degraded economically ever since. The primary sources of income in Tajikistan are aluminium production, cotton growing
and remittances from migrant workers. Cotton accounts for 60% of agricultural output, supporting 75% of the rural
population, and using 45% of irrigated arable land. The aluminium industry is represented by the state-owned Tajik
Aluminum Company – the biggest aluminium plant in Central Asia and one of the biggest in the world. Tajikistan's
rivers, such as the Vakhsh and the Panj, have great hydropower potential, and the government has focused on attracting
investment for projects for internal use and electricity exports. Tajikistan is home to the Nurek Dam, the highest
dam in the world. Russia's energy giant RAO UES has been working on the Sangtuda-1 hydroelectric power station
(670 MW capacity), which commenced operations on 18 January 2008. Other projects at the development stage include Sangtuda-2
by Iran, Zerafshan by the Chinese company SinoHydro, and the Rogun power plant that, at a projected height of 335
metres (1,099 ft), would supersede the Nurek Dam as highest in the world if it is brought to completion. A planned
project, CASA 1000, will transmit 1000 MW of surplus electricity from Tajikistan to Pakistan with power transit through
Afghanistan. The transmission line will run 750 km, and the project is planned on a public-private partnership
basis with the support of the World Bank, IFC, ADB, and IDB. The project cost is estimated at around US$865 million.
Other energy resources include sizable coal deposits and smaller reserves of natural gas and petroleum. According
to some estimates about 20% of the population lives on less than US$1.25 per day. Migration from Tajikistan and the
consequent remittances have been unprecedented in their magnitude and economic impact. In 2010, remittances from
Tajik labour migrants totaled an estimated US$2.1 billion, an increase from 2009. Tajikistan has achieved
transition from a planned to a market economy without substantial and protracted recourse to aid (of which it by
now receives only negligible amounts), and by purely market-based means, simply by exporting its main commodity of
comparative advantage: cheap labor. The World Bank Tajikistan Policy Note 2006 concludes that remittances have played
an important role as one of the drivers of Tajikistan's robust economic growth during the past several years, have
increased incomes, and as a result helped significantly reduce poverty. Drug trafficking is the major illegal source
of income in Tajikistan as it is an important transit country for Afghan narcotics bound for Russian and, to a lesser
extent, Western European markets; some opium poppy is also raised locally for the domestic market. However, with
increasing assistance from international organizations such as UNODC, and cooperation with US, Russian,
EU, and Afghan authorities, progress is being made in the fight against illegal drug trafficking. Tajikistan
holds third place in the world for heroin and raw opium confiscations (1,216.3 kg of heroin and 267.8 kg of raw opium
in the first half of 2006). Drug money corrupts the country's government; according to some experts, well-known
figures who fought on both sides of the civil war and held positions in the government after the
armistice was signed are now involved in the drug trade. UNODC is working with Tajikistan to strengthen border crossings,
provide training, and set up joint interdiction teams. It also helped to establish Tajikistan's Drug Control Agency.
As a landlocked country Tajikistan has no ports and the majority of transportation is via roads, air, and rail. In
recent years Tajikistan has pursued agreements with Iran and Pakistan to gain port access in those countries via
Afghanistan. In 2009, an agreement was made between Tajikistan, Pakistan, and Afghanistan to improve and build a
1,300 km (810 mi) highway and rail system connecting the three countries to Pakistan's ports. The proposed route
would go through the Gorno-Badakhshan Autonomous Province in the eastern part of the country. In 2012, the presidents
of Tajikistan, Afghanistan, and Iran signed an agreement to construct roads and railways as well as oil, gas, and
water pipelines to connect the three countries. In 2009 Tajikistan had 26 airports, 18 of which had paved runways,
of which two had runways longer than 3,000 meters. The country's main airport is Dushanbe International Airport which
as of April 2015, had regularly scheduled flights to major cities in Russia, Central Asia, as well as Delhi, Dubai,
Frankfurt, Istanbul, Kabul, Tehran, and Ürümqi amongst others. There are also international flights, mainly to Russia,
from Khujand Airport in the northern part of the country as well as limited international services from Kulob Airport,
and Qurghonteppa International Airport. Khorog Airport is a domestic airport and also the only airport in the sparsely
populated eastern half of the country. Tajikistan has a population of 7,349,145 (July 2009 est.) of which 70% are
under the age of 30 and 35% are between the ages of 14 and 30. Tajiks who speak Tajik (a dialect of Persian) are
the main ethnic group, although there are sizable minorities of Uzbeks and Russians, whose numbers are declining
due to emigration. The Pamiris of Badakhshan, a small population of Yaghnobi people, and a sizeable minority of Ismailis
are all considered to belong to the larger group of Tajiks. All citizens of Tajikistan are called Tajikistanis. The
Pamiri people of Gorno-Badakhshan Autonomous Province in the southeast, bordering Afghanistan and China, though considered
part of the Tajik ethnicity, nevertheless are distinct linguistically and culturally from most Tajiks. In contrast
to the mostly Sunni Muslim residents of the rest of Tajikistan, the Pamiris overwhelmingly follow the Ismaili sect
of Islam, and speak a number of Eastern Iranian languages, including Shughni, Rushani, Khufi and Wakhi. Isolated
in the highest parts of the Pamir Mountains, they have preserved many ancient cultural traditions and folk arts that
have been largely lost elsewhere in the country. Sunni Islam of the Hanafi school has been officially recognized
by the government since 2009. Tajikistan considers itself a secular state with a Constitution providing for freedom
of religion. The Government has declared two Islamic holidays, Id Al-Fitr and Idi Qurbon, as state holidays. According
to a U.S. State Department release and the Pew Research Center, the population of Tajikistan is 98% Muslim. Approximately
87%–95% of them are Sunni, roughly 3% are Shia, and roughly 7% are non-denominational Muslims. The remaining 2%
of the population are followers of Russian Orthodoxy, Protestantism, Zoroastrianism and Buddhism. A great majority
of Muslims fast during Ramadan, although only about one third in the countryside and 10% in the cities observe daily
prayer and dietary restrictions. Relationships between religious groups are generally amicable, although there is
some concern among mainstream Muslim leaders that minority religious groups undermine national unity. There
is a concern for religious institutions becoming active in the political sphere. The Islamic Renaissance Party (IRP),
a major combatant in the 1992–1997 Civil War and then-proponent of the creation of an Islamic state in Tajikistan,
constitutes no more than 30% of the government by statute. Membership in Hizb ut-Tahrir, a militant Islamic party
which today aims for an overthrow of secular governments and the unification of Tajiks under one Islamic state, is
illegal and members are subject to arrest and imprisonment. The number of large mosques appropriate for Friday prayers
is limited, and some feel this is discriminatory. By law, religious communities must register with the State
Committee on Religious Affairs (SCRA) and with local authorities. Registration with the SCRA requires a charter,
a list of 10 or more members, and evidence of local government approval of the prayer site location. Religious groups who
do not have a physical structure are not allowed to gather publicly for prayer. Failure to register can result in
large fines and closure of a place of worship. There are reports that registration on the local level is sometimes
difficult to obtain. People under the age of 18 are also barred from public religious practice. Despite repeated
efforts by the Tajik government to improve and expand health care, the system remains extremely underdeveloped and
poor, with severe shortages of medical supplies. The state's Ministry of Labor and Social Welfare reported that 104,272
disabled people were registered in Tajikistan in 2000. This group suffers most from poverty in Tajikistan.
The government of Tajikistan and the World Bank considered activities to support this part of the population described
in the World Bank's Poverty Reduction Strategy Paper. Public expenditure on health was at 1% of the GDP in 2004.
Public education in Tajikistan consists of 11 years of primary and secondary education but the government has plans
to implement a 12-year system in 2016. There is a relatively large number of tertiary education institutions including
Khujand State University which has 76 departments in 15 faculties, Tajikistan State University of Law, Business,
& Politics, Khorugh State University, Agricultural University of Tajikistan, Tajik State National University, and
several other institutions. Most, but not all, universities were established during the Soviet era. As of 2008,
tertiary education enrollment was 17%, significantly below the sub-regional average of 37%. Many Tajiks left the
education system due to low demand in the labor market for people with extensive educational training or professional
skills. Tajikistan consists of 4 administrative divisions. These are the provinces (viloyat) of Sughd and Khatlon,
the autonomous province of Gorno-Badakhshan (abbreviated as GBAO), and the Region of Republican Subordination (RRP
– Raiony Respublikanskogo Podchineniya in transliteration from Russian or NTJ – Ноҳияҳои тобеи ҷумҳурӣ in Tajik;
formerly known as Karotegin Province). Each region is divided into several districts, (Tajik: Ноҳия, nohiya or raion),
which in turn are subdivided into jamoats (village-level self-governing units) and then villages (qyshloqs). As of
2006, there were 58 districts and 367 jamoats in Tajikistan. Nearly 47% of Tajikistan's GDP comes from migrant
remittances (mostly from Tajiks working in Russia). The current economic situation remains fragile, largely owing
to corruption, uneven economic reforms, and economic mismanagement. With foreign revenue precariously dependent upon
remittances from migrant workers overseas and exports of aluminium and cotton, the economy is highly vulnerable to
external shocks. In FY 2000, international assistance remained an essential source of support for rehabilitation
programs that reintegrated former civil war combatants into the civilian economy, which helped keep the peace. International
assistance also was necessary to address the second year of severe drought that resulted in a continued shortfall
of food production. On August 21, 2001, the Red Cross announced that a famine was striking Tajikistan, and called
for international aid for Tajikistan and Uzbekistan; however, access to food remains a problem today. In January 2012,
680,152 of the people living in Tajikistan were living with food insecurity. Out of those, 676,852 were at risk of
Phase 3 (Acute Food and Livelihoods Crisis) food insecurity and 3,300 were at risk of Phase 4 (Humanitarian Emergency).
Those with the highest risk of food insecurity were living in the remote Murghob District of GBAO.
The University of Notre Dame du Lac (or simply Notre Dame /ˌnoʊtərˈdeɪm/ NOH-tər-DAYM) is a Catholic research university
located adjacent to South Bend, Indiana, in the United States. In French, Notre Dame du Lac means "Our Lady of the
Lake" and refers to the university's patron saint, the Virgin Mary. The main campus covers 1,250 acres in a suburban
setting and it contains a number of recognizable landmarks, such as the Golden Dome, the "Word of Life" mural (commonly
known as Touchdown Jesus), and the Basilica. Notre Dame rose to national prominence in the early 1900s for its Fighting
Irish football team, especially under the guidance of the legendary coach Knute Rockne. The university's athletic
teams are members of the NCAA Division I and are known collectively as the Fighting Irish. The football team, an
Independent, has accumulated eleven consensus national championships, seven Heisman Trophy winners, 62 members in
the College Football Hall of Fame and 13 members in the Pro Football Hall of Fame and is considered one of the most
famed and successful college football teams in history. Other ND teams, chiefly in the Atlantic Coast Conference,
have accumulated 16 national championships. The Notre Dame Victory March is often regarded as the most famous and
recognizable collegiate fight song. Besides its prominence in sports, Notre Dame is also a large, four-year, highly
residential research university, and is consistently ranked among the top twenty universities in the United States
and as a major global university. The undergraduate component of the university is organized into four colleges (Arts
and Letters, Science, Engineering, Business) and the Architecture School. The latter is known for teaching New Classical
Architecture and for awarding the globally renowned annual Driehaus Architecture Prize. Notre Dame's graduate program
has more than 50 master's, doctoral and professional degree programs offered by the five schools, with the addition
of the Notre Dame Law School and an MD-PhD program offered in combination with the Indiana University School of Medicine. It maintains a system
of libraries, cultural venues, artistic and scientific museums, including Hesburgh Library and the Snite Museum of
Art. Over 80% of the university's 8,000 undergraduates live on campus in one of 29 single-sex residence halls, each
with its own traditions, legacies, events and intramural sports teams. The university counts approximately 120,000
alumni, considered one of the strongest alumni networks among U.S. colleges. In 1842, the Bishop of Vincennes, Célestine
Guynemer de la Hailandière, offered land to Father Edward Sorin of the Congregation of the Holy Cross, on the condition
that he build a college in two years. Fr. Sorin arrived on the site with eight Holy Cross brothers from France and
Ireland on November 26, 1842, and began the school using Father Stephen Badin's old log chapel. He soon erected additional
buildings, including Old College, the first church, and the first main building. They immediately acquired two students
and set about building additions to the campus. The first degrees from the college were awarded in 1849. The university
was expanded with new buildings to accommodate more students and faculty. With each new president, new academic programs
were offered and new buildings built to accommodate them. The original Main Building built by Sorin just after he
arrived was replaced by a larger "Main Building" in 1865, which housed the university's administration, classrooms,
and dormitories. Beginning in 1873, a library collection was started by Father Lemonnier. By 1879 it had grown to
ten thousand volumes housed in the Main Building. The Main Building and the library collection were entirely
destroyed by a fire in April 1879; the school closed immediately and students were sent home. The university
founder, Fr. Sorin and the president at the time, the Rev. William Corby, immediately planned for the rebuilding
of the structure that had housed virtually the entire university. Construction began on May 17, and through
the remarkable zeal of administrators and workers the building was completed before the fall semester of 1879.
The library collection was also rebuilt and stayed housed in the new Main Building for years afterwards. Around the
time of the fire, a music hall was opened. Eventually becoming known as Washington Hall, it hosted plays and musical
acts put on by the school. By 1880, a science program was established at the university, and a Science Hall (today
LaFortune Student Center) was built in 1883. The hall housed multiple classrooms and science labs needed for early
research at the university. In 1919 Father James Burns became president of Notre Dame, and in three years he produced
an academic revolution that brought the school up to national standards by adopting the elective system and moving
away from the university's traditional scholastic and classical emphasis. By contrast, the Jesuit colleges, bastions
of academic conservatism, were reluctant to move to a system of electives. Their graduates were shut out of Harvard
Law School for that reason. Notre Dame continued to grow over the years, adding more colleges, programs, and sports
teams. By 1921, with the addition of the College of Commerce, Notre Dame had grown from a small college to a university
with five colleges and a professional law school. The university continued to expand and add new residence halls
and buildings with each subsequent president. One of the main driving forces in the growth of the University was
its football team, the Notre Dame Fighting Irish. Knute Rockne became head coach in 1918. Under Rockne, the Irish
would post a record of 105 wins, 12 losses, and five ties. During his 13 years the Irish won three national championships,
had five undefeated seasons, won the Rose Bowl in 1925, and produced players such as George Gipp and the "Four Horsemen".
Knute Rockne has the highest winning percentage (.881) in NCAA Division I/FBS football history. Rockne's offenses
employed the Notre Dame Box and his defenses ran a 7–2–2 scheme. The last game Rockne coached was on December 14,
1930 when he led a group of Notre Dame all-stars against the New York Giants in New York City. The success of its
football team made Notre Dame a household name. The success of Notre Dame reflected the rising status of Irish Americans
and Catholics in the 1920s. Catholics rallied around the team and listened to the games on the radio, especially
when it knocked off the schools that symbolized the Protestant establishment in America: Harvard, Yale, Princeton,
and Army. Yet this role as high-profile flagship institution of Catholicism made it an easy target of anti-Catholicism.
The most remarkable episode of violence was the clash between Notre Dame students and the Ku Klux Klan in 1924. Nativism
and anti-Catholicism, especially when directed towards immigrants, were cornerstones of the KKK's rhetoric, and Notre
Dame was seen as a symbol of the threat posed by the Catholic Church. The Klan decided to have a week-long Klavern
in South Bend. Clashes with the student body started on March 17, when students, aware of the anti-Catholic animosity,
blocked the Klansmen from descending from their trains in the South Bend station and ripped the KKK clothes and regalia.
On May 19 thousands of students massed downtown protesting the Klavern, and only the arrival of college president
Fr. Matthew Walsh prevented any further clashes. The next day, football coach Knute Rockne spoke at a campus rally
and implored the students to obey the college president and refrain from further violence. A few days later the Klavern
broke up, but the hostility shown by the students was an omen and a contribution to the downfall of the KKK in Indiana.
Holy Cross Father John Francis O'Hara was elected vice-president in 1933 and president of Notre Dame in 1934. During
his tenure at Notre Dame, he brought numerous refugee intellectuals to campus; he selected Frank H. Spearman, Jeremiah
D. M. Ford, Irvin Abell, and Josephine Brownson for the Laetare Medal, instituted in 1883. O'Hara strongly believed
that the Fighting Irish football team could be an effective means to "acquaint the public with the ideals that dominate"
Notre Dame. He wrote, "Notre Dame football is a spiritual service because it is played for the honor and glory of
God and of his Blessed Mother. When St. Paul said: 'Whether you eat or drink, or whatsoever else you do, do all for
the glory of God,' he included football." The Rev. John J. Cavanaugh, C.S.C. served as president from 1946 to 1952.
Cavanaugh's legacy at Notre Dame in the post-war years was devoted to raising academic standards and reshaping the
university administration to suit it to an enlarged educational mission and an expanded student body and stressing
advanced studies and research at a time when Notre Dame quadrupled in student census, undergraduate enrollment increased
by more than half, and graduate student enrollment grew fivefold. Cavanaugh also established the Lobund Institute
for Animal Studies and Notre Dame's Medieval Institute. Cavanaugh also presided over the construction of the Nieuwland
Science Hall, Fisher Hall, and the Morris Inn, as well as the Hall of Liberal Arts (now O'Shaughnessy Hall), made
possible by a donation from I.A. O'Shaughnessy, at the time the largest ever made to an American Catholic university.
Cavanaugh also established a system of advisory councils at the university, which continue today and are vital to
the university's governance and development. The Rev. Theodore Hesburgh, C.S.C. (1917–2015), served as president for
35 years (1952–87) of dramatic transformations. In that time the annual operating budget rose by a factor of 18 from
$9.7 million to $176.6 million, and the endowment by a factor of 40 from $9 million to $350 million, and research
funding by a factor of 20 from $735,000 to $15 million. Enrollment nearly doubled from 4,979 to 9,600, faculty more
than doubled from 389 to 950, and degrees awarded annually doubled from 1,212 to 2,500. Hesburgh is also credited with
transforming the face of Notre Dame by making it a coeducational institution. In the mid-1960s Notre Dame and Saint
Mary's College developed a co-exchange program whereby several hundred students took classes not offered at their
home institution, an arrangement that added undergraduate women to a campus that already had a few women in the graduate
schools. After extensive debate, merging with St. Mary's was rejected, primarily because of the differential in faculty
qualifications and pay scales. "In American college education," explained the Rev. Charles E. Sheedy, C.S.C., Notre
Dame's Dean of Arts and Letters, "certain features formerly considered advantageous and enviable are now seen as
anachronistic and out of place.... In this environment of diversity, the integration of the sexes is a normal and
expected aspect, replacing separatism." Thomas Blantz, C.S.C., Notre Dame's Vice President of Student Affairs, added
that coeducation "opened up a whole other pool of very bright students." Two of the male residence halls were converted
for the newly admitted female students that first year, while two others were converted for the next school year.
In 1971 Mary Ann Proctor became the first female undergraduate; she transferred from St. Mary's College. In 1972
the first woman to graduate was Angela Sienko, who earned a bachelor's degree in marketing. In the 18 years under
the presidency of Edward Malloy, C.S.C., (1987–2005), there was a rapid growth in the school's reputation, faculty,
and resources. He increased the faculty by more than 500 professors; the academic quality of the student body has
improved dramatically, with the average SAT score rising from 1240 to 1360; the number of minority students more
than doubled; the endowment grew from $350 million to more than $3 billion; the annual operating budget rose from
$177 million to more than $650 million; and annual research funding improved from $15 million to more than $70 million.
Notre Dame's most recent capital campaign raised $1.1 billion, far exceeding its goal of $767 million, and
is the largest in the history of Catholic higher education. Since 2005, Notre Dame has been led by John I. Jenkins,
C.S.C., the 17th president of the university. Jenkins took over the position from Malloy on July 1, 2005. In his
inaugural address, Jenkins described his goals of making the university a leader in research that recognizes ethics
and building the connection between faith and studies. During his tenure, Notre Dame has increased its endowment,
enlarged its student body, and undergone many construction projects on campus, including Compton Family Ice Arena,
a new architecture hall, additional residence halls, and the Campus Crossroads, a $400m enhancement and expansion
of Notre Dame Stadium. Because of its Catholic identity, a number of religious buildings stand on campus. The Old
College building has become one of two seminaries on campus run by the Congregation of Holy Cross. The current Basilica
of the Sacred Heart is located on the spot of Fr. Sorin's original church, which became too small for the growing
college. It is built in the French Revival style and decorated with stained glass windows imported directly from
France. The interior was painted by Luigi Gregori, an Italian painter invited by Fr. Sorin to be artist in residence.
The Basilica also features a bell tower with a carillon. Inside the church there are also sculptures by Ivan Mestrovic.
The Grotto of Our Lady of Lourdes, which was built in 1896, is a replica of the original in Lourdes, France. It is
very popular among students and alumni as a place of prayer and meditation, and it is considered one of the most
beloved spots on campus. A Science Hall was built in 1883 under the direction of Fr. Zahm, but in 1950 it was converted
to a student union building and named LaFortune Center, after Joseph LaFortune, an oil executive from Tulsa, Oklahoma.
Commonly known as "LaFortune" or "LaFun," it is a 4-story building of 83,000 square feet that provides the Notre
Dame community with a meeting place for social, recreational, cultural, and educational activities. LaFortune employs
35 part-time student staff and 29 full-time non-student staff and has an annual budget of $1.2 million. Many businesses,
services, and divisions of The Office of Student Affairs are found within. The building also houses restaurants from
national restaurant chains. Since the construction of its oldest buildings, the university's physical plant has grown
substantially. Over the years 29 residence halls have been built to accommodate students and each has been constructed
with its own chapel. Many academic buildings were added, together with a system of libraries, the most prominent of
which is the Theodore Hesburgh Library, built in 1963 and today containing almost 4 million books. Since 2004, several
buildings have been added, including the DeBartolo Performing Arts Center, the Guglielmino Complex, and the Jordan
Hall of Science. Additionally, a new residence for men, Duncan Hall, was begun on March 8, 2007, and began accepting
residents for the Fall 2008 semester. Ryan Hall was completed and began housing undergraduate women in the fall of
2009. A new engineering building, Stinson-Remick Hall, a new combination Center for Social Concerns/Institute for
Church Life building, Geddes Hall, and a law school addition have recently been completed as well. Additionally the
new hockey arena opened in the fall of 2011. The Stayer Center for Executive Education, which houses the Mendoza
College of Business Executive Education Department, opened in March 2013 just south of the Mendoza College of Business
building. Because of its long athletic tradition, the university also features many buildings dedicated to sports.
The most famous is Notre Dame Stadium, home of the Fighting Irish football team; it has been renovated several times
and today can hold more than 80,000 people. Prominent venues also include the Edmund P. Joyce Center, with
indoor basketball and volleyball courts, and the Compton Family Ice Arena, a two-rink facility dedicated to hockey.
There are also many outdoor fields, such as the Frank Eck Stadium for baseball. The University of Notre Dame has made
being a sustainability leader an integral part of its mission, creating the Office of Sustainability in 2008 to achieve
a number of goals in the areas of power generation, design and construction, waste reduction, procurement, food services,
transportation, and water. As of 2012, four building construction projects were pursuing LEED-Certified status
and three were pursuing LEED Silver. Notre Dame's dining services sources 40% of its food locally and offers sustainably
caught seafood as well as many organic, fair-trade, and vegan options. On the Sustainable Endowments Institute's
College Sustainability Report Card 2010, University of Notre Dame received a "B" grade. The university also houses
the Kroc Institute for International Peace Studies. Father Gustavo Gutierrez, the founder of Liberation Theology
is a current faculty member. The university owns several centers around the world used for international studies
and research, conferences abroad, and alumni support. The university has had a presence in London, England, since
1968. Since 1998, its London center has been based in the former United University Club at 1 Suffolk Street in Trafalgar
Square. The center enables the Colleges of Arts & Letters, Business Administration, Science, Engineering and the
Law School to develop their own programs in London, as well as hosting conferences and symposia. Other Global Gateways
are located in Beijing, Chicago, Dublin, Jerusalem and Rome. The College of Arts and Letters was established as the
university's first college in 1842 with the first degrees given in 1849. The university's first academic curriculum
was modeled after the Jesuit Ratio Studiorum from Saint Louis University. Today the college, housed in O'Shaughnessy
Hall, includes 20 departments in the areas of fine arts, humanities, and social sciences, and awards Bachelor of
Arts (B.A.) degrees in 33 majors, making it the largest of the university's colleges. There are around 2,500 undergraduates
and 750 graduates enrolled in the college. The College of Science was established at the university in 1865 by president
Father Patrick Dillon. Dillon's scientific courses were six years of work, including higher-level mathematics courses.
Today the college, housed in the newly built Jordan Hall of Science, includes over 1,200 undergraduates in six departments
of study – biology, chemistry, mathematics, physics, pre-professional studies, and applied and computational mathematics
and statistics (ACMS) – each awarding Bachelor of Science (B.S.) degrees. According to university statistics, its
science pre-professional program has one of the highest acceptance rates to medical school of any university in the
United States. The School of Architecture was established in 1899, although degrees in architecture were first awarded
by the university in 1898. Today the school, housed in Bond Hall, offers a five-year undergraduate program leading
to the Bachelor of Architecture degree. All undergraduate students study the third year of the program in Rome. The
university is globally recognized for its Notre Dame School of Architecture, a faculty that teaches (pre-modernist)
traditional and classical architecture and urban planning (e.g. following the principles of New Urbanism and New
Classical Architecture). It also awards the renowned annual Driehaus Architecture Prize. The College of Engineering
was established in 1920; however, early courses in civil and mechanical engineering had been part of the College of
Science since the 1870s. Today the college, housed in the Fitzpatrick, Cushing, and Stinson-Remick Halls of Engineering,
includes five departments of study – aerospace and mechanical engineering, chemical and biomolecular engineering,
civil engineering and geological sciences, computer science and engineering, and electrical engineering – with eight
B.S. degrees offered. Additionally, the college offers five-year dual degree programs with the Colleges of Arts and
Letters and of Business awarding additional B.A. and Master of Business Administration (MBA) degrees, respectively.
All of Notre Dame's undergraduate students are a part of one of the five undergraduate colleges at the school or
are in the First Year of Studies program. The First Year of Studies program was established in 1962 to guide incoming
freshmen in their first year at the school before they have declared a major. Each student is given an academic advisor
from the program who helps them to choose classes that give them exposure to any major in which they are interested.
The program also includes a Learning Resource Center which provides time management, collaborative learning, and
subject tutoring. The program has previously been recognized as outstanding by U.S. News & World Report. The university
first offered graduate degrees, in the form of a Master of Arts (MA), in the 1854–1855 academic year. The program
expanded to include Master of Laws (LL.M.) and Master of Civil Engineering in its early stages of growth, before
a formal graduate school was developed; at that stage, a thesis was not required to receive a degree. This changed
in 1924, when formal requirements for graduate degrees were established, including the offering of doctoral (PhD) degrees. Today
each of the five colleges offers graduate education. Most of the departments in the College of Arts and Letters
offer PhD programs, while a professional Master of Divinity (M.Div.) program also exists. All of the departments
in the College of Science offer PhD programs, except for the Department of Pre-Professional Studies. The School of
Architecture offers a Master of Architecture, while each of the departments of the College of Engineering offer PhD
programs. The College of Business offers multiple professional programs including MBA and Master of Science in Accountancy
programs. It also operates facilities in Chicago and Cincinnati for its executive MBA program. Additionally, the
Alliance for Catholic Education program offers a Master of Education program where students study at the university
during the summer and teach in Catholic elementary schools, middle schools, and high schools across the Southern
United States for two school years. The Joan B. Kroc Institute for International Peace Studies at the University
of Notre Dame is dedicated to research, education and outreach on the causes of violent conflict and the conditions
for sustainable peace. It offers PhD, Master's, and undergraduate degrees in peace studies. It was founded in 1986
through the donations of Joan B. Kroc, the widow of McDonald's owner Ray Kroc. The institute was inspired by the
vision of the Rev. Theodore M. Hesburgh CSC, President Emeritus of the University of Notre Dame. The institute has
contributed to international policy discussions about peace building practices. The library system of the university
is divided between the main library and each of the colleges and schools. The main building is the 14-story Theodore
M. Hesburgh Library, completed in 1963, which is the third building to house the main collection of books. The front
of the library is adorned with the Word of Life mural designed by artist Millard Sheets. This mural is popularly
known as "Touchdown Jesus" because of its proximity to Notre Dame Stadium and Jesus' arms appearing to make the signal
for a touchdown. The library system also includes branch libraries for Architecture, Chemistry & Physics, Engineering,
Law, and Mathematics as well as information centers in the Mendoza College of Business, the Kellogg Institute for
International Studies, the Joan B. Kroc Institute for International Peace Studies, and a slide library in O'Shaughnessy
Hall. A theology library was also opened in fall of 2015. Located on the first floor of Stanford Hall, it is the
first branch of the library system to be housed in a dorm room. The library system holds over three million volumes,
was the single largest university library in the world upon its completion, and remains one of the 100 largest libraries
in the country. Notre Dame is known for its competitive admissions: for the class entering in fall 2015, 3,577 students
were admitted from a pool of 18,156 applicants (19.7%). The academic profile of the enrolled class continues to rate among
the top 10 to 15 in the nation for national research universities. The university practices a non-restrictive early
action policy that allows admitted students to consider admission to Notre Dame as well as any other colleges to
which they were accepted. 1,400 of the 3,577 (39.1%) were admitted under the early action plan. Admitted students
came from 1,311 high schools and the average student traveled more than 750 miles to Notre Dame, making it arguably
the most representative university in the United States. While all entering students begin in the First Year of
Studies program, 25% have indicated they plan to study in the liberal arts or social sciences, 24% in engineering,
24% in business, 24% in science, and 3% in architecture. In 2015–16, Notre Dame ranked 18th overall among "national
universities" in the United States in U.S. News & World Report's Best Colleges 2016. In 2014, USA Today ranked Notre
Dame 10th overall for American universities based on data from College Factual. Forbes.com's America's Best Colleges
ranks Notre Dame 13th among colleges in the United States in 2015, 8th among Research Universities, and 1st in the
Midwest. U.S. News & World Report also lists Notre Dame Law School as 22nd overall. BusinessWeek ranks Mendoza College
of Business undergraduate school as 1st overall. It ranks the MBA program as 20th overall. The Philosophical Gourmet
Report ranks Notre Dame's graduate philosophy program as 15th nationally, while ARCHITECT Magazine ranked the undergraduate
architecture program as 12th nationally. Additionally, the study abroad program ranks sixth in the nation for participation
percentage, with 57.6% of students choosing to study abroad in 17 countries. According to payscale.com,
undergraduate alumni of the University of Notre Dame have a mid-career median salary of $110,000, making it the 24th highest
among colleges and universities in the United States. The median starting salary of $55,300 ranked 58th in the same
peer group. Father Joseph Carrier, C.S.C., was Director of the Science Museum and the Library and Professor of Chemistry
and Physics until 1874. Carrier taught that scientific research and its promise for progress were not antagonistic
to the ideals of intellectual and moral culture endorsed by the Church. One of Carrier's students was Father John
Augustine Zahm (1851–1921) who was made Professor and Co-Director of the Science Department at age 23 and by 1900
was a nationally prominent scientist and naturalist. Zahm was active in the Catholic Summer School movement, which
introduced Catholic laity to contemporary intellectual issues. His book Evolution and Dogma (1896) defended certain
aspects of evolutionary theory as true, and argued, moreover, that even the great Church teachers Thomas Aquinas
and Augustine taught something like it. The intervention of Irish American Catholics in Rome prevented Zahm's censure
by the Vatican. In 1913, Zahm and former President Theodore Roosevelt embarked on a major expedition through the
Amazon. In 1882, Albert Zahm (John Zahm's brother) built an early wind tunnel used to compare lift to drag of aeronautical
models. Around 1899, Professor Jerome Green became the first American to send a wireless message. In 1931, Father
Julius Nieuwland performed early work on basic reactions that was used to create neoprene. Study of nuclear physics
at the university began with the building of a nuclear accelerator in 1936, and continues now partly through a partnership
in the Joint Institute for Nuclear Astrophysics. The Lobund Institute grew out of pioneering research in germ-free life,
which began in 1928. This area of research originated in a question posed by Pasteur as to whether animal life was
possible without bacteria. Though others had taken up this idea, their research was short-lived and inconclusive.
Lobund was the first research organization to answer definitively that such life is possible and that it can be
prolonged through generations. But the objective was not merely to answer Pasteur's question but also to produce
the germ free animal as a new tool for biological and medical research. This objective was reached and for years
Lobund was a unique center for the study and production of germ free animals and for their use in biological and
medical investigations. Today the work has spread to other universities. In the beginning it was under the Department
of Biology and a program leading to the master's degree accompanied the research program. In the 1940s Lobund achieved
independent status as a purely research organization and in 1950 was raised to the status of an Institute. In 1958
it was brought back into the Department of Biology as an integral part of that department, but with its own program
leading to the degree of PhD in Gnotobiotics. The Review of Politics was founded in 1939 by Gurian, modeled after
German Catholic journals. It quickly emerged as part of an international Catholic intellectual revival, offering
an alternative vision to positivist philosophy. For 44 years, the Review was edited by Gurian, Matthew Fitzsimons,
Frederick Crosson, and Thomas Stritch. Intellectual leaders included Gurian, Jacques Maritain, Frank O'Malley, Leo
Richard Ward, F. A. Hermens, and John U. Nef. It became a major forum for political ideas and modern political concerns,
especially from a Catholic and scholastic tradition. The rise of Hitler and other dictators in the 1930s forced numerous
Catholic intellectuals to flee Europe; president John O'Hara brought many to Notre Dame. From Germany came Anton-Hermann
Chroust (1907–1982) in classics and law, and Waldemar Gurian, a German Catholic intellectual of Jewish descent. Positivism
dominated American intellectual life from the 1920s onward; in marked contrast, Gurian received a German Catholic
education and wrote his doctoral dissertation under Max Scheler. Ivan Meštrović (1883–1962), a renowned sculptor,
brought Croatian culture to campus, 1955–62. Yves Simon (1903–61), brought to ND in the 1940s the insights of French
studies in the Aristotelian-Thomistic tradition of philosophy; his own teacher Jacques Maritain (1882–1973) was a frequent
visitor to campus. As of 2012, research continued in many fields. The university president, John Jenkins,
described his hope that Notre Dame would become "one of the pre–eminent research institutions in the world" in his
inaugural address. The university has many multi-disciplinary institutes devoted to research in varying fields, including
the Medieval Institute, the Kellogg Institute for International Studies, the Kroc Institute for International Peace
studies, and the Center for Social Concerns. Recent research includes work on family conflict and child development,
genome mapping, the increasing trade deficit of the United States with China, studies in fluid mechanics, computational
science and engineering, and marketing trends on the Internet. As of 2013, the university is home to the Notre Dame
Global Adaptation Index which ranks countries annually based on how vulnerable they are to climate change and how
prepared they are to adapt. In 2014 the Notre Dame student body consisted of 12,179 students, with 8,448 undergraduates,
2,138 graduate, and 1,593 professional (Law, M.Div., Business, M.Ed.) students. Around 21–24% of
students are children of alumni, and although 37% of students come from the Midwestern United States, the student
body represents all 50 states and 100 countries. As of March 2007, The Princeton Review ranked the school
as the fifth-highest 'dream school' for parents to send their children. As of March 2015, The Princeton Review
ranked Notre Dame as the ninth highest. The school has been previously criticized for its lack of diversity, and
The Princeton Review ranks the university highly among schools at which "Alternative Lifestyles [are] Not an Alternative."
It has also been commended by some diversity-oriented publications; Hispanic Magazine in 2004 ranked the university
ninth on its list of the top–25 colleges for Latinos, and The Journal of Blacks in Higher Education recognized the
university in 2006 for raising enrollment of African-American students. With 6,000 participants, the university's
intramural sports program was named in 2004 by Sports Illustrated as the best program in the country, while in 2007
The Princeton Review named it as the top school where "Everyone Plays Intramural Sports." The annual Bookstore Basketball
tournament is the largest outdoor five-on-five tournament in the world with over 700 teams participating each year,
while the Notre Dame Men's Boxing Club hosts the annual Bengal Bouts tournament that raises money for the Holy Cross
Missions in Bangladesh. About 80% of undergraduates and 20% of graduate students live on campus. The majority of
the graduate students on campus live in one of four graduate housing complexes on campus, while all on-campus undergraduates
live in one of the 29 residence halls. Because of the religious affiliation of the university, all residence halls
are single-sex, with 15 male dorms and 14 female dorms. The university maintains a visiting policy (known as parietal
hours) for those students who live in dormitories, specifying times when members of the opposite sex are allowed
to visit other students' dorm rooms; however, all residence halls have 24-hour social spaces for students regardless
of gender. Many residence halls have at least one nun and/or priest as a resident. There are no traditional social
fraternities or sororities at the university, but a majority of students live in the same residence hall for all
four years. Some intramural sports are based on residence hall teams, where the university offers the only non-military
academy program of full-contact intramural American football. At the end of the intramural season, the championship
game is played on the field in Notre Dame Stadium. The university is affiliated with the Congregation of Holy Cross
(Latin: Congregatio a Sancta Cruce, abbreviated postnominals: "CSC"). While religious affiliation is not a criterion
for admission, more than 93% of students identify as Christian, with over 80% of the total being Catholic. Collectively,
Catholic Mass is celebrated over 100 times per week on campus, and a large campus ministry program provides for the
faith needs of the community. There are multitudes of religious statues and artwork around campus, most prominent
of which are the statue of Mary on the Main Building, the Notre Dame Grotto, and the Word of Life mural on Hesburgh
Library depicting Christ as a teacher. Additionally, every classroom displays a crucifix. There are many religious
clubs (Catholic and non-Catholic) at the school, including Council #1477 of the Knights of Columbus (KofC), Baptist
Collegiate Ministry (BCM), Jewish Club, Muslim Student Association, Orthodox Christian Fellowship, The Mormon Club,
and many more. The Notre Dame Knights of Columbus are known for being the first collegiate council of the order; they
operate a charitable concession stand during every home football game and own their own building on campus, which can be used as a cigar
lounge. Fifty-seven chapels are located throughout the campus. Architecturally, the school has a Catholic character.
Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building
and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the
Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place
of prayer and reflection. It is a replica of the grotto at Lourdes, France, where the Virgin Mary reputedly appeared
to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3
statues and the Gold Dome), is a simple, modern stone statue of Mary. The university is the major seat of the Congregation
of Holy Cross (albeit not its official headquarters, which are in Rome). Its main seminary, Moreau Seminary, is located
on the campus across St. Joseph Lake from the Main Building. Old College, the oldest building on campus and located
near the shore of St. Mary Lake, houses undergraduate seminarians. Retired priests and brothers reside in Fatima
House (a former retreat center), Holy Cross House, as well as Columba Hall near the Grotto. The university through
the Moreau Seminary has ties to theologian Frederick Buechner. While not Catholic, Buechner has praised writers from
Notre Dame and Moreau Seminary created a Buechner Prize for Preaching. As at most other universities, Notre Dame's
students run a number of news media outlets. The nine student-run outlets include three newspapers, both a radio
and television station, and several magazines and journals. Begun as a one-page journal in September 1876, the Scholastic
magazine is issued twice monthly and claims to be the oldest continuous collegiate publication in the United States.
The other magazine, The Juggler, is released twice a year and focuses on student literature and artwork. The Dome
yearbook is published annually. The newspapers have varying publication interests, with The Observer published daily
and mainly reporting university and other news, and staffed by students from both Notre Dame and Saint Mary's College.
Unlike Scholastic and The Dome, The Observer is an independent publication and does not have a faculty advisor or
any editorial oversight from the University. In 1987, when some students believed that The Observer began to show
a conservative bias, a liberal newspaper, Common Sense, was published. Likewise, in 2003, when other students believed
that the paper showed a liberal bias, the conservative paper Irish Rover went into production. Neither paper is published
as often as The Observer; however, all three are distributed to all students. Finally, in Spring 2008 an undergraduate
journal for political science research, Beyond Politics, made its debut. The television station, NDtv, grew from
one show in 2002 to a full 24-hour channel with original programming by September 2006. WSND-FM serves the student
body and larger South Bend community at 88.9 FM, offering students a chance to become involved in bringing classical
music, fine arts and educational programming, and alternative rock to the airwaves. Another radio station, WVFI,
began as a partner of WSND-FM. More recently, however, WVFI has been airing independently and is streamed on the
Internet. The first phase of Eddy Street Commons, a $215 million development located adjacent to the University of
Notre Dame campus and funded by the university, broke ground on June 3, 2008. The Eddy Street Commons drew union
protests when workers hired by the City of South Bend to construct the public parking garage picketed the private
work site after a contractor hired non-union workers. The developer, Kite Realty out of Indianapolis, has made agreements
with major national chains rather than local businesses, a move that has led to criticism from alumni and students.
Notre Dame teams are known as the Fighting Irish. They compete as a member of the National Collegiate Athletic Association
(NCAA) Division I, primarily competing in the Atlantic Coast Conference (ACC) for all sports since the 2013–14 school
year. The Fighting Irish previously competed in the Horizon League from 1982–83 to 1985–86, and again from 1987–88
to 1994–95, and then in the Big East Conference through 2012–13. Men's sports include baseball, basketball, crew,
cross country, fencing, football, golf, ice hockey, lacrosse, soccer, swimming & diving, tennis and track & field;
while women's sports include basketball, cross country, fencing, golf, lacrosse, rowing, soccer, softball, swimming
& diving, tennis, track & field and volleyball. The football team has competed as a Football Bowl Subdivision (FBS)
independent since its inception in 1887. Both fencing teams compete in the Midwest Fencing Conference, and the men's
ice hockey team competes in Hockey East. Notre Dame's conference affiliations for all of its sports except football
and fencing changed in July 2013 as a result of major conference realignment, and its fencing affiliation will change
in July 2014. The Irish left the Big East for the ACC during a prolonged period of instability in the Big East; while
they maintain their football independence, they have committed to play five games per season against ACC opponents.
In ice hockey, the Irish were forced to find a new conference home after the Big Ten Conference's decision to add
the sport in 2013–14 led to a cascade of conference moves that culminated in the dissolution of the school's former
hockey home, the Central Collegiate Hockey Association, after the 2012–13 season. Notre Dame moved its hockey team
to Hockey East. After Notre Dame joined the ACC, the conference announced it would add fencing as a sponsored sport
beginning in the 2014–15 school year. There are many theories behind the adoption of the athletics moniker but it
is known that the Fighting Irish name was used in the early 1920s with respect to the football team and was popularized
by alumnus Francis Wallace in his New York Daily News columns. The official colors of Notre Dame are Navy Blue and
Gold Rush which are worn in competition by its athletic teams. In addition, the color green is often worn because
of the Fighting Irish nickname. The Notre Dame Leprechaun is the mascot of the athletic teams. Created by Theodore
W. Drake in 1964, the leprechaun was first used on the football pocket schedule and later on the football program
covers. The leprechaun was featured on the cover of Time in November 1964 and gained national exposure. On July 1,
2014, the University of Notre Dame and Under Armour reached an agreement in which Under Armour will provide uniforms,
apparel, equipment, and monetary compensation to Notre Dame for 10 years. This contract, worth almost $100 million,
is the most lucrative in the history of the NCAA. The university marching band plays at home games for most of the
sports. The band, which began in 1846 and has a claim as the oldest university band in continuous existence in the
United States, was honored by the National Music Council as a "Landmark of American Music" during the United States
Bicentennial. The band regularly plays the school's fight song, the "Notre Dame Victory March", which Northern Illinois
University professor William Studwell named the most-played and most famous fight song. According to College Fight
Songs: An Annotated Anthology published in 1998, the "Notre Dame Victory March" ranks as the greatest fight song
of all time. The Notre Dame football team has a long history, first beginning when the Michigan Wolverines football
team brought football to Notre Dame in 1887 and played against a group of students. In the long history since then,
13 Fighting Irish teams have won consensus national championships (although the university only claims 11), along
with another nine teams being named national champion by at least one source. Additionally, the program has the most
members in the College Football Hall of Fame, is tied with Ohio State University for the most Heisman Trophies won,
and has the highest winning percentage in NCAA history. Over this long history, Notre Dame has accumulated many rivals,
and its annual game against USC for the Jeweled Shillelagh has been named by some as one of the most important in
college football and is often called the greatest intersectional rivalry in college football in the country. George
Gipp was the school's legendary football player during 1916–20. He played semiprofessional baseball and smoked, drank,
and gambled when not playing sports. He was also humble, generous to the needy, and a man of integrity. It was in
1928 that famed coach Knute Rockne used his final conversation with the dying Gipp to inspire the Notre Dame team
to beat the Army team and "win one for the Gipper." The 1940 film, Knute Rockne, All American, starred Pat O'Brien
as Knute Rockne and Ronald Reagan as Gipp. Today the team competes in Notre Dame Stadium, an 80,795-seat stadium
on campus. The current head coach is Brian Kelly, hired from the University of Cincinnati on December 11, 2009. Kelly's
record midway through his sixth season at Notre Dame is 52–21. In 2012, Kelly's Fighting Irish squad went undefeated
and played in the BCS National Championship Game. Kelly succeeded Charlie Weis, who was fired in November 2009 after
five seasons. Although Weis led his team to two Bowl Championship Series bowl games, his overall record was 35–27,
mediocre by Notre Dame standards, and the 2007 team had the most losses in school history. The football team generates
enough revenue to operate independently while $22.1 million is retained from the team's profits for academic use.
Forbes named the team the most valuable in college football, worth a total of $101 million in 2007. On football
game days, activities occur all around campus, and different dorms decorate their halls with a
traditional item (e.g. Zahm House's two-story banner). Traditional activities begin at the stroke of midnight with
the Drummers' Circle. This tradition involves the drum line of the Band of the Fighting Irish and ushers in the rest
of the festivities that will continue the rest of the gameday Saturday. Later that day, the trumpet section will
play the Notre Dame Victory March and the Notre Dame Alma Mater under the dome. The entire band will play a concert
at the steps of Bond Hall, from where they will march into Notre Dame Stadium, leading fans and students alike across
campus to the game. The men's basketball team has over 1,600 wins, making Notre Dame one of only 12 schools to have
reached that mark, and has appeared in 28 NCAA tournaments. Former player Austin Carr holds the record for most points scored in a
single game of the tournament with 61. Although the team has never won the NCAA Tournament, they were named by the
Helms Athletic Foundation as national champions twice. The team has orchestrated a number of upsets of number one
ranked teams, the most notable of which was ending UCLA's record 88-game winning streak in 1974. The team has beaten
an additional eight number-one teams, and those nine wins rank second, to UCLA's 10, all-time in wins against the
top team. The team plays in newly renovated Purcell Pavilion (within the Edmund P. Joyce Center), which reopened
for the beginning of the 2009–2010 season. The team is coached by Mike Brey, who, as of the 2014–15 season (his fifteenth
at Notre Dame), has achieved a 332–165 record. In 2009 the team was invited to the NIT, where it advanced to the semifinals
but was beaten by Penn State, which went on to beat Baylor in the championship. The 2010–11 team concluded its regular
season ranked number seven in the country, with a record of 25–5, Brey's fifth straight 20-win season, and a second-place
finish in the Big East. During the 2014–15 season, the team went 32–6 and won the ACC tournament, later
advancing to the Elite Eight, where the Fighting Irish lost on a missed buzzer-beater against then-undefeated Kentucky.
Led by NBA draft picks Jerian Grant and Pat Connaughton, the Fighting Irish beat the eventual national champion Duke
Blue Devils twice during the season. The 32 wins were the most by the Fighting Irish team since 1908–09. The "Notre
Dame Victory March" is the fight song for the University of Notre Dame. It was written by two brothers who were Notre
Dame graduates. The Rev. Michael J. Shea, a 1904 graduate, wrote the music, and his brother, John F. Shea, who earned
degrees in 1906 and 1908, wrote the original lyrics. The lyrics were revised in the 1920s; it first appeared under
the copyright of the University of Notre Dame in 1928. The chorus is, "Cheer cheer for old Notre Dame, wake up the
echoes cheering her name. Send a volley cheer on high, shake down the thunder from the sky! What though the odds be
great or small, old Notre Dame will win over all. While her loyal sons are marching, onward to victory!" In the film
Knute Rockne, All American, Knute Rockne (played by Pat O'Brien) delivers the famous "Win one for the Gipper" speech,
at which point the background music swells with the "Notre Dame Victory March". George Gipp was played by Ronald
Reagan, whose nickname "The Gipper" was derived from this role. This scene was parodied in the movie Airplane! with
the same background music, only this time honoring George Zipp, one of Ted Striker's former comrades. The song also
was prominent in the movie Rudy, with Sean Astin as Daniel "Rudy" Ruettiger, who harbored dreams of playing football
at the University of Notre Dame despite significant obstacles. Notre Dame alumni work in various fields. Alumni working
in political fields include state governors, members of the United States Congress, and former United States Secretary
of State Condoleezza Rice. A notable alumnus of the College of Science is Nobel laureate in Physiology or Medicine Eric F. Wieschaus.
A number of university heads are alumni, including Notre Dame's current president, the Rev. John Jenkins. Additionally,
many alumni are in the media, including talk show hosts Regis Philbin and Phil Donahue, and television and radio
personalities such as Mike Golic and Hannah Storm. With the university having high profile sports teams itself, a
number of alumni went on to become involved in athletics outside the university, including professional baseball,
basketball, football, and ice hockey players, such as Joe Theismann, Joe Montana, Tim Brown, Ross Browner, Rocket
Ismail, Ruth Riley, Jeff Samardzija, Jerome Bettis, Brett Lebda, Olympic gold medalist Mariel Zagunis, professional
boxer Mike Lee, former football coaches such as Charlie Weis, Frank Leahy and Knute Rockne, and Basketball Hall of
Famers Austin Carr and Adrian Dantley. Other notable alumni include prominent businessman Edward J. DeBartolo, Jr.
and astronaut Jim Wetherbee.
Anthropology is the study of humans and their societies in the past and present. Its main subdivisions are social and cultural anthropology, which describe the workings of societies around the world; linguistic anthropology, which investigates the influence of language on social life; and biological or physical anthropology, which concerns the long-term development of the human organism. Archaeology, which studies past human cultures through investigation of physical
evidence, is thought of as a branch of anthropology in the United States, while in Europe, it is viewed as a discipline
in its own right, or grouped under other related disciplines such as history. Early use of the term for this subject matter was sporadic, such as its use by Étienne Serres in 1838 to describe the natural history,
or paleontology, of man, based on comparative anatomy, and the creation of a chair in anthropology and ethnography
in 1850 at the National Museum of Natural History (France) by Jean Louis Armand de Quatrefages de Bréau. Various
short-lived organizations of anthropologists had already been formed. The Société Ethnologique de Paris, the first to use the term ethnology, was formed in 1839. Its members were primarily anti-slavery activists. When slavery was abolished in France in 1848, the Société was abandoned. Anthropology and many other current fields are the intellectual results
of the comparative methods developed in the earlier 19th century. Theorists in such diverse fields as anatomy, linguistics,
and Ethnology, making feature-by-feature comparisons of their subject matters, were beginning to suspect that similarities
between animals, languages, and folkways were the result of processes or laws unknown to them then. For them, the
publication of Charles Darwin's On the Origin of Species was the epiphany of everything they had begun to suspect.
Darwin himself arrived at his conclusions through comparison of species he had seen in agronomy and in the wild.
Darwin and Wallace unveiled evolution in the late 1850s. There was an immediate rush to bring it into the social
sciences. Paul Broca in Paris was in the process of breaking away from the Société de biologie to form the first
of the explicitly anthropological societies, the Société d'Anthropologie de Paris, meeting for the first time in
Paris in 1859.[n 4] When he read Darwin he became an immediate convert to Transformisme, as the French called evolutionism.
His definition now became "the study of the human group, considered as a whole, in its details, and in relation to
the rest of nature". Broca, being what today would be called a neurosurgeon, had taken an interest in the pathology
of speech. He wanted to localize the difference between man and the other animals, which appeared to reside in speech.
He discovered the speech center of the human brain, today called Broca's area after him. His interest was mainly
in Biological anthropology, but a German philosopher specializing in psychology, Theodor Waitz, took up the theme
of general and social anthropology in his six-volume work, entitled Die Anthropologie der Naturvölker, 1859–1864.
The title was soon translated as "The Anthropology of Primitive Peoples". The last two volumes were published posthumously.
Waitz defined anthropology as "the science of the nature of man". By nature he meant matter animated by "the Divine
breath"; i.e., he was an animist. Following Broca's lead, Waitz points out that anthropology is a new field, which
would gather material from other fields, but would differ from them in the use of comparative anatomy, physiology,
and psychology to differentiate man from "the animals nearest to him". He stresses that the data of comparison must
be empirical, gathered by experimentation. The history of civilization as well as ethnology are to be brought into
the comparison. It is to be presumed fundamentally that the species, man, is a unity, and that "the same laws of
thought are applicable to all men". Waitz was influential among the British ethnologists. In 1863 the explorer Richard
Francis Burton and the speech therapist James Hunt broke away from the Ethnological Society of London to form the
Anthropological Society of London, which henceforward would follow the path of the new anthropology rather than just
ethnology. It was the second society dedicated to general anthropology in existence. Representatives from the French
Société were present, though not Broca. In his keynote address, printed in the first volume of its new publication,
The Anthropological Review, Hunt stressed the work of Waitz, adopting his definitions as a standard.[n 5] Among the
first associates were the young Edward Burnett Tylor, inventor of cultural anthropology, and his brother Alfred Tylor,
a geologist. Previously Edward had referred to himself as an ethnologist; subsequently, an anthropologist. Similar
organizations in other countries followed: the Anthropological Society of Madrid (1865), the Anthropological Society of Vienna (1870), the Italian Society of Anthropology and Ethnology (1871), the American Anthropological Association (1902), and many others subsequently. The majority of these were evolutionist. One notable exception was the Berlin
Society of Anthropology (1869) founded by Rudolph Virchow, known for his vituperative attacks on the evolutionists.
Not religious himself, he insisted that Darwin's conclusions lacked empirical foundation. During the last three decades
of the 19th century a proliferation of anthropological societies and associations occurred, most independent, most
publishing their own journals, and all international in membership and association. The major theorists belonged
to these organizations. They supported the gradual osmosis of anthropology curricula into the major institutions
of higher learning. By 1898 the American Association for the Advancement of Science was able to report that 48 educational
institutions in 13 countries had some curriculum in anthropology. None of the 75 faculty members were under a department
named anthropology. This meagre statistic expanded in the 20th century to comprise anthropology departments in the
majority of the world's higher educational institutions, many thousands in number. Anthropology has diversified from
a few major subdivisions to dozens more. Practical anthropology, the use of anthropological knowledge and technique
to solve specific problems, has arrived; for example, the presence of buried victims might stimulate the use of a
forensic archaeologist to recreate the final scene. Organization has reached global level. For example, the World
Council of Anthropological Associations (WCAA), "a network of national, regional and international associations that
aims to promote worldwide communication and cooperation in anthropology", currently contains members from about three
dozen nations. Since the work of Franz Boas and Bronisław Malinowski in the late 19th and early 20th centuries, social
anthropology in Great Britain and cultural anthropology in the US have been distinguished from other social sciences
by their emphasis on cross-cultural comparisons, long-term in-depth examination of context, and the importance they place
on participant-observation or experiential immersion in the area of research. Cultural anthropology in particular
has emphasized cultural relativism, holism, and the use of findings to frame cultural critiques. This has been particularly
prominent in the United States, from Boas' arguments against 19th-century racial ideology, through Margaret Mead's
advocacy for gender equality and sexual liberation, to current criticisms of post-colonial oppression and promotion
of multiculturalism. Ethnography is one of its primary research designs as well as the text that is generated from
anthropological fieldwork. Anthropology is a global discipline where humanities, social, and natural sciences are
forced to confront one another. Anthropology builds upon knowledge from natural sciences, including the discoveries
about the origin and evolution of Homo sapiens, human physical traits, human behavior, the variations among different
groups of humans, how the evolutionary past of Homo sapiens has influenced its social organization and culture, and
from social sciences, including the organization of human social and cultural relations, institutions, social conflicts,
etc. Early anthropology originated in Classical Greece and Persia, where it sought to understand observable cultural diversity. As such, anthropology has been central in the development of several new (late 20th-century)
interdisciplinary fields such as cognitive science, global studies, and various ethnic studies. Sociocultural anthropology
has been heavily influenced by structuralist and postmodern theories, as well as a shift toward the analysis of modern
societies. Between the 1970s and the 1990s, there was an epistemological shift away from the positivist traditions that
had largely informed the discipline.[page needed] During this shift, enduring questions about the nature and production
of knowledge came to occupy a central place in cultural and social anthropology. In contrast, archaeology and biological
anthropology remained largely positivist. Due to this difference in epistemology, the four sub-fields of anthropology
have lacked cohesion over the last several decades. Sociocultural anthropology draws together the principal axes
of cultural anthropology and social anthropology. Cultural anthropology is the comparative study of the manifold
ways in which people make sense of the world around them, while social anthropology is the study of the relationships
among persons and groups. Cultural anthropology is more related to philosophy, literature and the arts (how one's
culture affects experience for self and group, contributing to more complete understanding of the people's knowledge,
customs, and institutions), while social anthropology is more related to sociology and history, in that it helps
develop understanding of social structures, typically of others and other populations (such as minorities, subgroups,
dissidents, etc.). There is no hard-and-fast distinction between them, and these categories overlap to a considerable
degree. Inquiry in sociocultural anthropology is guided in part by cultural relativism, the attempt to understand
other societies in terms of their own cultural symbols and values. Accepting other cultures on their own terms moderates
reductionism in cross-cultural comparison. This project is often accommodated in the field of ethnography. Ethnography
can refer to both a methodology and the product of ethnographic research, i.e. an ethnographic monograph. As methodology,
ethnography is based upon long-term fieldwork within a community or other research site. Participant observation
is one of the foundational methods of social and cultural anthropology. Ethnology involves the systematic comparison
of different cultures. The process of participant-observation can be especially helpful to understanding a culture
from an emic (conceptual, vs. etic, or technical) point of view. The study of kinship and social organization is
a central focus of sociocultural anthropology, as kinship is a human universal. Sociocultural anthropology also covers
economic and political organization, law and conflict resolution, patterns of consumption and exchange, material
culture, technology, infrastructure, gender relations, ethnicity, childrearing and socialization, religion, myth,
symbols, values, etiquette, worldview, sports, music, nutrition, recreation, games, food, festivals, and language
(which is also the object of study in linguistic anthropology). Archaeology is the study of the human past through
its material remains. Artifacts, faunal remains, and human-altered landscapes are evidence of the cultural and material
lives of past societies. Archaeologists examine these material remains in order to deduce patterns of past human
behavior and cultural practices. Ethnoarchaeology is a type of archaeology that studies the practices and material
remains of living human groups in order to gain a better understanding of the evidence left behind by past human
groups, who are presumed to have lived in similar ways. Linguistic anthropology (also called anthropological linguistics)
seeks to understand the processes of human communications, verbal and non-verbal, variation in language across time
and space, the social uses of language, and the relationship between language and culture. It is the branch of anthropology
that brings linguistic methods to bear on anthropological problems, linking the analysis of linguistic forms and
processes to the interpretation of sociocultural processes. Linguistic anthropologists often draw on related fields
including sociolinguistics, pragmatics, cognitive linguistics, semiotics, discourse analysis, and narrative analysis.
One of the central problems in the anthropology of art concerns the universality of 'art' as a cultural phenomenon.
Several anthropologists have noted that the Western categories of 'painting', 'sculpture', or 'literature', conceived
as independent artistic activities, do not exist, or exist in a significantly different form, in most non-Western
contexts. To surmount this difficulty, anthropologists of art have focused on formal features in objects which, without
exclusively being 'artistic', have certain evident 'aesthetic' qualities. Boas' Primitive Art, Claude Lévi-Strauss'
The Way of the Masks (1982) or Geertz's 'Art as Cultural System' (1983) are some examples in this trend to transform
the anthropology of 'art' into an anthropology of culturally specific 'aesthetics'. Media anthropology (also known
as anthropology of media or mass media) emphasizes ethnographic studies as a means of understanding producers, audiences,
and other cultural and social aspects of mass media. The types of ethnographic contexts explored range from contexts
of media production (e.g., ethnographies of newsrooms in newspapers, journalists in the field, film production) to
contexts of media reception, following audiences in their everyday responses to media. Other types include cyber
anthropology, a relatively new area of internet research, as well as ethnographies of other areas of research which
happen to involve media, such as development work, social movements, or health education. This is in addition to
many classic ethnographic contexts, where media such as radio, the press, new media and television have started to
make their presences felt since the early 1990s. Visual anthropology is concerned, in part, with the study and production
of ethnographic photography, film and, since the mid-1990s, new media. While the term is sometimes used interchangeably
with ethnographic film, visual anthropology also encompasses the anthropological study of visual representation,
including areas such as performance, museums, art, and the production and reception of mass media. Visual representations
from all cultures, such as sandpaintings, tattoos, sculptures and reliefs, cave paintings, scrimshaw, jewelry, hieroglyphics,
paintings and photographs are included in the focus of visual anthropology. Economic anthropology attempts to explain
human economic behavior in its widest historic, geographic and cultural scope. It has a complex relationship with
the discipline of economics, of which it is highly critical. Its origins as a sub-field of anthropology begin with work by the Polish-British anthropologist Bronisław Malinowski and the French sociologist Marcel Mauss on the
nature of gift-giving exchange (or reciprocity) as an alternative to market exchange. Economic Anthropology remains,
for the most part, focused upon exchange. The school of thought derived from Marx and known as Political Economy
focuses on production, in contrast. Economic Anthropologists have abandoned the primitivist niche they were relegated
to by economists, and have now turned to examine corporations, banks, and the global financial system from an anthropological
perspective. Political economy in anthropology is the application of the theories and methods of Historical Materialism
to the traditional concerns of anthropology, including, but not limited to, non-capitalist societies. Political Economy
introduced questions of history and colonialism to ahistorical anthropological theories of social structure and culture.
Three main areas of interest rapidly developed. The first of these areas was concerned with the "pre-capitalist"
societies that were subject to evolutionary "tribal" stereotypes. Sahlins' work on hunter-gatherers as the 'original
affluent society' did much to dissipate that image. The second area was concerned with the vast majority of the world's
population at the time, the peasantry, many of whom were involved in complex revolutionary wars such as in Vietnam.
The third area was on colonialism, imperialism, and the creation of the capitalist world-system. More recently, these
Political Economists have more directly addressed issues of industrial (and post-industrial) capitalism around the
world. Applied Anthropology refers to the application of the method and theory of anthropology to the analysis and
solution of practical problems. It is "a complex of related, research-based, instrumental methods which produce
change or stability in specific cultural systems through the provision of data, initiation of direct action, and/or
the formulation of policy". More simply, applied anthropology is the practical side of anthropological research;
it includes researcher involvement and activism within the participating community. It is closely related to Development
anthropology (distinct from the more critical Anthropology of development). Anthropology of development tends to
view development from a critical perspective. The kinds of issues it addresses involve asking: if a key development goal is to alleviate poverty, why is poverty increasing? Why is there such
a gap between plans and outcomes? Why are those working in development so willing to disregard history and the lessons
it might offer? Why is development so externally driven rather than having an internal basis? In short why does so
much planned development fail? Kinship can refer either to the study of the patterns of social relationships in one or more human cultures, or to the patterns of social relationships themselves. Over its history, anthropology
has developed a number of related concepts and terms, such as "descent", "descent groups", "lineages", "affines",
"cognates", and even "fictive kinship". Broadly, kinship patterns may be considered to include people related both
by descent (one's social relations during development), and also relatives by marriage. Feminist anthropology is
a four-field approach to anthropology (archaeological, biological, cultural, linguistic) that seeks to reduce male
bias in research findings, anthropological hiring practices, and the scholarly production of knowledge. Anthropology
engages often with feminists from non-Western traditions, whose perspectives and experiences can differ from those
of white European and American feminists. Historically, such 'peripheral' perspectives have sometimes been marginalized
and regarded as less valid or important than knowledge from the western world. Feminist anthropologists have claimed
that their research helps to correct this systematic bias in mainstream feminist theory. Feminist anthropologists
are centrally concerned with the construction of gender across societies. Feminist anthropology is inclusive of birth
anthropology as a specialization. Nutritional anthropology is a synthetic concept that deals with the interplay between
economic systems, nutritional status and food security, and how changes in the former affect the latter. If economic
and environmental changes in a community affect access to food, food security, and dietary health, then this interplay
between culture and biology is in turn connected to broader historical and economic trends associated with globalization.
Nutritional status affects overall health status, work performance potential, and the overall potential for economic
development (either in terms of human development or traditional western models) for any given group of people. Psychological
anthropology is an interdisciplinary subfield of anthropology that studies the interaction of cultural and mental
processes. This subfield tends to focus on ways in which humans' development and enculturation within a particular
cultural group—with its own history, language, practices, and conceptual categories—shape processes of human cognition,
emotion, perception, motivation, and mental health. It also examines how the understanding of cognition, emotion,
motivation, and similar psychological processes inform or constrain our models of cultural and social processes.
Cognitive anthropology seeks to explain patterns of shared knowledge, cultural innovation, and transmission over
time and space using the methods and theories of the cognitive sciences (especially experimental psychology and evolutionary
biology) often through close collaboration with historians, ethnographers, archaeologists, linguists, musicologists
and other specialists engaged in the description and interpretation of cultural forms. Cognitive anthropology is
concerned with what people from different groups know and how that implicit knowledge changes the way people perceive
and relate to the world around them. Political anthropology concerns the structure of political systems, looked at
from the basis of the structure of societies. Political anthropology developed as a discipline concerned primarily with politics in stateless societies; a new development, begun in the 1960s and still unfolding, saw anthropologists increasingly studying more "complex" social settings in which the presence of states, bureaucracies and markets
entered both ethnographic accounts and analysis of local phenomena. The turn towards complex societies meant that
political themes were taken up at two main levels. First of all, anthropologists continued to study political organization
and political phenomena that lay outside the state-regulated sphere (as in patron-client relations or tribal political
organization). Second of all, anthropologists slowly started to develop a disciplinary concern with states and their
institutions (and of course on the relationship between formal and informal political institutions). An anthropology
of the state developed, and it is a most thriving field today. Geertz' comparative work on "Negara", the Balinese state, is an early, famous example. Cyborg anthropology originated as a sub-focus group within the American Anthropological
Association's annual meeting in 1993. The sub-group was very closely related to STS and the Society for the Social
Studies of Science. Donna Haraway's 1985 Cyborg Manifesto could be considered the founding document of cyborg anthropology, as it first explored the philosophical and sociological ramifications of the term. Cyborg anthropology studies humankind
and its relations with the technological systems it has built, specifically modern technological systems that have
reflexively shaped notions of what it means to be human beings. Environmental anthropology is a sub-specialty within
the field of anthropology that takes an active role in examining the relationships between humans and their environment
across space and time. The contemporary perspective of environmental anthropology, and arguably at least the backdrop,
if not the focus of most of the ethnographies and cultural fieldworks of today, is political ecology. Many characterize
this new perspective as more informed with culture, politics and power, globalization, localized issues, and more.
The focus and data interpretation are often used in arguments for or against policies, or in their creation, and to prevent corporate
exploitation and damage of land. Often, the observer has become an active part of the struggle either directly (organizing,
participation) or indirectly (articles, documentaries, books, ethnographies). Such is the case with environmental
justice advocate Melissa Checker and her relationship with the people of Hyde Park. Ethnohistory is the study of
ethnographic cultures and indigenous customs by examining historical records. It is also the study of the history
of various ethnic groups that may or may not exist today. Ethnohistory uses both historical and ethnographic data
as its foundation. Its historical methods and materials go beyond the standard use of documents and manuscripts.
Practitioners recognize the utility of such source material as maps, music, paintings, photography, folklore, oral
tradition, site exploration, archaeological materials, museum collections, enduring customs, language, and place
names. Urban anthropology is concerned with issues of urbanization, poverty, and neoliberalism. Ulf Hannerz quotes
a 1960s remark that traditional anthropologists were "a notoriously agoraphobic lot, anti-urban by definition". Various
social processes in the Western World as well as in the "Third World" (the latter being the habitual focus of attention
of anthropologists) brought the attention of "specialists in 'other cultures'" closer to their homes. There are two
principal approaches in urban anthropology: examining the types of cities, or examining the social issues within the cities. These two methods overlap and depend on each other. By defining different types of cities,
one would use social factors as well as economic and political factors to categorize the cities. By directly looking
at the different social issues, one would also be studying how they affect the dynamic of the city. Anthrozoology
(also known as "human–animal studies") is the study of interactions between humans and other animals. It is a burgeoning interdisciplinary
field that overlaps with a number of other disciplines, including anthropology, ethology, medicine, psychology, veterinary
medicine and zoology. A major focus of anthrozoologic research is the quantifying of the positive effects of human-animal
relationships on either party and the study of their interactions. It includes scholars from a diverse range of fields,
including anthropology, sociology, biology, and philosophy.[n 7] Evolutionary anthropology is the interdisciplinary
study of the evolution of human physiology and human behaviour and the relation between hominins and non-hominin
primates. Evolutionary anthropology is based in natural science and social science, combining the study of human development with socioeconomic factors. Evolutionary anthropology is concerned with both the biological and cultural evolution of
humans, past and present. It is based on a scientific approach, and brings together fields such as archaeology, behavioral
ecology, psychology, primatology, and genetics. It is a dynamic and interdisciplinary field, drawing on many lines
of evidence to understand the human experience, past and present. Ethical commitments in anthropology include noticing
and documenting genocide, infanticide, racism, mutilation (including circumcision and subincision), and torture.
Topics like racism, slavery, and human sacrifice attract anthropological attention and theories ranging from nutritional
deficiencies to genes to acculturation have been proposed, not to mention theories of colonialism and many others
as root causes of Man's inhumanity to man. To illustrate the depth of an anthropological approach, one can take just
one of these topics, such as "racism", and find thousands of anthropological references stretching across all the major and minor sub-fields. By the 1940s, many of Boas' anthropologist contemporaries were active in the Allied
war effort against the "Axis" (Nazi Germany, Fascist Italy, and Imperial Japan). Many served in the armed forces,
while others worked in intelligence (for example, Office of Strategic Services and the Office of War Information).
At the same time, David H. Price's work on American anthropology during the Cold War provides detailed accounts of
the pursuit and dismissal of several anthropologists from their jobs for communist sympathies. Professional anthropological
bodies often object to the use of anthropology for the benefit of the state. Their codes of ethics or statements
may proscribe anthropologists from giving secret briefings. The Association of Social Anthropologists of the UK and
Commonwealth (ASA) has called certain scholarship ethically dangerous. The AAA's current 'Statement of Professional
Responsibility' clearly states that "in relation with their own government and with host governments ... no secret
research, no secret reports or debriefings of any kind should be agreed to or given." Anthropologists, along with
other social scientists, are working with the US military as part of the US Army's strategy in Afghanistan. The Christian
Science Monitor reports that "Counterinsurgency efforts focus on better grasping and meeting local needs" in Afghanistan,
under the Human Terrain System (HTS) program; in addition, HTS teams are working with the US military in Iraq. In
2009, the American Anthropological Association's Commission on the Engagement of Anthropology with the US Security
and Intelligence Communities released its final report concluding, in part, that, "When ethnographic investigation
is determined by military missions, not subject to external review, where data collection occurs in the context of
war, integrated into the goals of counterinsurgency, and in a potentially coercive environment – all characteristic
factors of the HTS concept and its application – it can no longer be considered a legitimate professional exercise
of anthropology. In summary, while we stress that constructive engagement between anthropology and the military is
possible, CEAUSSIC suggests that the AAA emphasize the incompatibility of HTS with disciplinary ethics and practice
for job seekers and that it further recognize the problem of allowing HTS to define the meaning of "anthropology"
within DoD." Biological anthropologists are interested both in human variation and in the possibility of human universals
(behaviors, ideas or concepts shared by virtually all human cultures). They use many different methods of study,
but modern population genetics, participant observation and other techniques often take anthropologists "into the
field," which means traveling to a community in its own setting, to do something called "fieldwork." On the biological
or physical side, human measurements, genetic samples, and nutritional data may be gathered and published as articles
or monographs. Along with dividing up their project by theoretical emphasis, anthropologists typically divide the
world up into relevant time periods and geographic regions. Human time on Earth is divided up into relevant cultural
traditions based on material, such as the Paleolithic and the Neolithic, of particular use in archaeology.[citation
needed] Further cultural subdivisions according to tool types, such as the Oldowan, Mousterian, or Levalloisian, help
archaeologists and other anthropologists in understanding major trends in the human past.[citation needed] Anthropologists
and geographers share approaches to Culture regions as well, since mapping cultures is central to both sciences.
By making comparisons across cultural traditions (time-based) and cultural regions (space-based), anthropologists
have developed various kinds of comparative method, a central part of their science. Some authors argue that anthropology
originated and developed as the study of "other cultures", both in terms of time (past societies) and space (non-European/non-Western
societies). For example, Ulf Hannerz, in the introduction to his seminal Exploring the City: Inquiries Toward an Urban Anthropology, a classic of urban anthropology, notes that the "Third World" had habitually received most of the attention; anthropologists who traditionally specialized in "other cultures" looked for them far away and started to look "across the tracks" only in the late 1960s. Since the 1980s it has become common for social and cultural anthropologists
to set ethnographic research in the North Atlantic region, frequently examining the connections between locations
rather than limiting research to a single locale. There has also been a related shift toward broadening the focus
beyond the daily life of ordinary people; increasingly, research is set in settings such as scientific laboratories,
social movements, governmental and nongovernmental organizations and businesses.
Montana (/mɒnˈtænə/) is a state in the Western region of the United States. The state's name is derived from the Spanish word
montaña (mountain). Montana has several nicknames, although none official, including "Big Sky Country" and "The Treasure
State", and slogans that include "Land of the Shining Mountains" and more recently "The Last Best Place". Montana
is ranked 4th in size, but 44th in population and 48th in population density of the 50 United States. The western
third of Montana contains numerous mountain ranges. Smaller island ranges are found throughout the state. In total,
77 named ranges are part of the Rocky Mountains. Montana schoolchildren played a significant role in selecting several
state symbols. The state tree, the ponderosa pine, was selected by Montana schoolchildren as the preferred state
tree by an overwhelming majority in a referendum held in 1908. However, the legislature did not designate a state
tree until 1949, when the Montana Federation of Garden Clubs, with the support of the state forester, lobbied for
formal recognition. Schoolchildren also chose the western meadowlark as the state bird, in a 1930 vote, and the legislature
acted to endorse this decision in 1931. Similarly, the secretary of state sponsored a children's vote in 1981 to
choose a state animal, and after 74 animals were nominated, the grizzly bear won over the elk by a 2–1 margin. The
students of Livingston started a statewide school petition drive and lobbied the governor and the state legislature
to name the Maiasaura as the state fossil in 1985. The state song was not composed until 21 years after statehood,
when a musical troupe led by Joseph E. Howard stopped in Butte in September 1910. A former member of the troupe who
lived in Butte buttonholed Howard at an after-show party, asked him to compose a song about Montana, and got another
partygoer, the city editor for the Butte Miner newspaper, Charles C. Cohan, to help. The two men worked up a basic
melody and lyrics in about a half-hour for the entertainment of party guests, then finished the song later that evening,
with an arrangement worked up the following day. Upon arriving in Helena, Howard's troupe performed 12 encores of
the new song to an enthusiastic audience and the governor proclaimed it the state song on the spot, though formal
legislative recognition did not occur until 1945. Montana is one of only three states to have a "state ballad", "Montana
Melody", chosen by the legislature in 1983. Montana was also the first state to adopt a state lullaby. Montana's
motto, Oro y Plata, Spanish for "Gold and Silver", recognizing the significant role of mining, was first adopted
in 1865, when Montana was still a territory. A state seal with a miner's pick and shovel above the motto, surrounded
by the mountains and the Great Falls of the Missouri River, was adopted during the first meeting of the territorial
legislature in 1864–65. The design was only slightly modified after Montana became a state and adopted it as the
Great Seal of the State of Montana, enacted by the legislature in 1893. The state flower, the bitterroot, was adopted
in 1895 with the support of a group called the Floral Emblem Association, which formed after Montana's Women's Christian
Temperance Union adopted the bitterroot as the organization's state flower. All other symbols were adopted throughout
the 20th century, save for Montana's newest symbols: the state butterfly, the mourning cloak, adopted in 2001, and
the state lullaby, "Montana Lullaby", adopted in 2007. The state also has five Micropolitan Statistical Areas centered
on Bozeman, Butte, Helena, Kalispell and Havre. These communities, excluding Havre, together with the state's three metropolitan centers, are colloquially known as the "big 7" Montana cities, as they are consistently the seven largest communities in Montana, with a significant population
difference when these communities are compared to those that are 8th and lower on the list. According to the 2010 U.S. Census, Montana's seven most populous cities, in rank order, are Billings, Missoula, Great Falls, Bozeman, Butte, Helena and Kalispell. Based on 2013 census estimates, they collectively contain 35 percent of Montana's population, and the counties containing these communities hold 62 percent of the state's population. The
geographic center of population of Montana is located in sparsely populated Meagher County, in the town of White
Sulphur Springs. Montana has 56 counties, and the United States Census Bureau states that Montana contains 364 "places",
broken down into 129 incorporated places and 235 census-designated places. Incorporated places consist of 52 cities,
75 towns, and two consolidated city-counties. Montana has one city, Billings, with a population over 100,000; and
two cities with populations over 50,000, Missoula and Great Falls. These three communities are considered the centers
of Montana's three Metropolitan Statistical Areas. The name Montana comes from the Spanish word Montaña, meaning
"mountain", or more broadly, "mountainous country". Montaña del Norte was the name given by early Spanish explorers
to the entire mountainous region of the west. The name Montana was added to a bill by the United States House Committee
on Territories, which was chaired at the time by Rep. James Ashley of Ohio, for the territory that would become Idaho
Territory. The name was changed by Representatives Henry Wilson (Massachusetts) and Benjamin F. Harding
(Oregon), who complained that Montana had "no meaning". When Ashley presented a bill to establish a temporary government
in 1864 for a new territory to be carved out of Idaho, he again chose Montana Territory. This time Rep. Samuel Cox,
also of Ohio, objected to the name. Cox complained that the name was a misnomer given that most of the territory
was not mountainous and that a Native American name would be more appropriate than a Spanish one. Other names such
as Shoshone were suggested, but it was eventually decided that the Committee on Territories could name it whatever
they wanted, so the original name of Montana was adopted. With a total area of 147,040 square miles (380,800 km2),
Montana is slightly larger than Japan. It is the fourth largest state in the United States after Alaska, Texas, and
California; the largest landlocked U.S. state; and the 56th largest national state/province subdivision in the world.
To the north, Montana shares a 545-mile (877 km) border with three Canadian provinces: British Columbia, Alberta, and Saskatchewan; it is the only state to border three provinces. It borders North Dakota and South Dakota to the east, Wyoming to the south
and Idaho to the west and southwest. The topography of the state is roughly defined by the Continental Divide, which
splits much of the state into distinct eastern and western regions. Most of Montana's 100 or more named mountain
ranges are concentrated in the western half of the state, most of which is geologically and geographically part of
the Northern Rocky Mountains. The Absaroka and Beartooth ranges in the south-central part of the state are technically
part of the Central Rocky Mountains. The Rocky Mountain Front is a significant feature in the north-central portion
of the state, and there are a number of isolated island ranges that interrupt the prairie landscape common in the
central and eastern parts of the state. About 60 percent of the state is prairie, part of the northern Great Plains.
The northern section of the Divide, where the mountains give way rapidly to prairie, is part of the Rocky Mountain
Front. The front is most pronounced in the Lewis Range, located primarily in Glacier National Park. Due to the configuration
of mountain ranges in Glacier National Park, the Northern Divide (which begins in Alaska's Seward Peninsula) crosses
this region and turns east in Montana at Triple Divide Peak. It causes the Waterton, Belly, and Saint Mary
rivers to flow north into Alberta, Canada. There they join the Saskatchewan River, which ultimately empties into
Hudson Bay. East of the divide, several roughly parallel ranges cover the southern part of the state, including the
Gravelly Range, the Madison Range, Gallatin Range, Absaroka Mountains and the Beartooth Mountains. The Beartooth
Plateau is the largest continuous land mass over 10,000 feet (3,000 m) high in the continental United States. It
contains the highest point in the state, Granite Peak, 12,799 feet (3,901 m) high. North of these ranges are the
Big Belt Mountains, Bridger Mountains, Tobacco Roots, and several island ranges, including the Crazy Mountains and
Little Belt Mountains. However, at the state level, the pattern of split ticket voting and divided government holds.
Democrats currently hold one of the state's U.S. Senate seats, as well as four of the five statewide offices (Governor,
Superintendent of Public Instruction, Secretary of State and State Auditor). The lone congressional district has
been Republican since 1996 and in 2014 Steve Daines won one of the state's Senate seats for the GOP. The Legislative
branch had split party control between the house and senate most years between 2004 and 2010, when the mid-term elections returned both chambers to Republican control. As of 2015, Republicans control the state Senate 29 to 21 and the State House of Representatives 59 to 41. In presidential elections, Montana was long classified as
a swing state, though the state has voted for the Republican candidate in all but two elections from 1952 to the
present. The state last supported a Democrat for president in 1992, when Bill Clinton won a plurality victory. Overall,
since 1889 the state has voted for Democratic governors 60 percent of the time and Democratic presidents 40 percent of the time; for Republican candidates, the figures are reversed (40 percent of governors, 60 percent of presidents). In the 2008 presidential election, Montana
was considered a swing state and was ultimately won by Republican John McCain, albeit by a narrow margin of two percent.
Bozeman Yellowstone International Airport is the busiest airport in the state of Montana, surpassing Billings Logan
International Airport in the spring of 2013. Montana's other major airports include Billings Logan International
Airport, Missoula International Airport, Great Falls International Airport, Glacier Park International Airport, Helena
Regional Airport, Bert Mooney Airport and Yellowstone Airport. Eight smaller communities have airports designated
for commercial service under the Essential Air Service program. Railroads have been an important method of transportation
in Montana since the 1880s. Historically, the state was traversed by the main lines of three east-west transcontinental
routes: the Milwaukee Road, the Great Northern, and the Northern Pacific. Today, the BNSF Railway is the state's
largest railroad, its main transcontinental route incorporating the former Great Northern main line across the state.
Montana RailLink, a privately held Class II railroad, operates former Northern Pacific trackage in western Montana.
Montana is home to the Rocky Mountain Elk Foundation and has a historic big game hunting tradition. There are fall
bow and general hunting seasons for elk, pronghorn antelope, whitetail deer and mule deer. A random draw grants a
limited number of permits for moose, mountain goats and bighorn sheep. There is a spring hunting season for black
bear and in most years, limited hunting of bison that leave Yellowstone National Park is allowed. Current law allows
both hunting and trapping of a specific number of wolves and mountain lions. Trapping of assorted fur bearing animals
is allowed in certain seasons and many opportunities exist for migratory waterfowl and upland bird hunting. Montana
has been a destination for its world-class trout fisheries since the 1930s. Fly fishing for several species of native
and introduced trout in rivers and lakes is popular for both residents and tourists throughout the state. Montana
is the home of the Federation of Fly Fishers and hosts many of the organization's annual conclaves. The state has
robust recreational lake trout and kokanee salmon fisheries in the west, walleye can be found in many parts of the
state, while northern pike, smallmouth and largemouth bass fisheries as well as catfish and paddlefish can be found
in the waters of eastern Montana. Robert Redford's 1992 film of Norman Maclean's novel, A River Runs Through It, was
filmed in Montana and brought national attention to fly fishing and the state. The Montana Territory was formed on
May 26, 1864, when the U.S. Congress passed the Organic Act. Schools started forming in the area before it was officially
a territory as families started settling into the area. The first schools were subscription schools typically held in the teacher's home. The first formal school on record was at Fort Owen in the Bitterroot Valley in 1862. The
students were Indian children and the children of Fort Owen employees. The first school term started in early winter
and only lasted until February 28. Classes were taught by Mr. Robinson. Another early subscription school was started
by Thomas Dimsdale in Virginia City in 1863. In this school students were charged $1.75 per week. The Montana Territorial
Legislative Assembly had its inaugural meeting in 1864. The first legislature authorized counties to levy taxes for
schools, which set the foundations for public schooling. Madison County was the first to take advantage of the newly
authorized taxes, and it formed the first public school in Virginia City in 1866. The first school year was scheduled
to begin in January 1866, but severe weather postponed its opening until March. The first school year ran through
the summer and did not end until August 17. One of the first teachers at the school was Sarah Raymond. She was a 25-year-old
woman who had traveled to Virginia City via wagon train in 1865. To become a certified teacher, Raymond took a test
in her home and paid a $6 fee in gold dust to obtain a teaching certificate. With the help of an assistant teacher,
Mrs. Farley, Raymond was responsible for teaching 50 to 60 students each day out of the 81 students enrolled at the
school. Sarah Raymond was paid at a rate of $125 per month, and Mrs. Farley was paid $75 per month. There were no
textbooks used in the school. In their place was an assortment of books brought in by various emigrants. Sarah quit
teaching the following year, but would later become the Madison County superintendent of schools. Montana contains
thousands of named rivers and creeks, 450 miles (720 km) of which are known for "blue-ribbon" trout fishing. Montana's
water resources provide for recreation, hydropower, crop and forage irrigation, mining, and water for human consumption.
Montana is one of the few geographic areas in the world whose rivers form parts of three major watersheds (i.e., where
two continental divides intersect). Its rivers feed the Pacific Ocean, the Gulf of Mexico, and Hudson Bay. The watersheds
divide at Triple Divide Peak in Glacier National Park. East of the divide the Missouri River, which is formed by
the confluence of the Jefferson, Madison and Gallatin rivers near Three Forks, flows due north through the west-central
part of the state to Great Falls. From this point, it then flows generally east through fairly flat agricultural
land and the Missouri Breaks to Fort Peck reservoir. The stretch of river between Fort Benton and the Fred Robinson
Bridge at the western boundary of Fort Peck Reservoir was designated a National Wild and Scenic River in 1976. The
Missouri enters North Dakota near Fort Union, having drained more than half the land area of Montana (82,000 square
miles (210,000 km2)). Nearly one-third of the Missouri River in Montana lies behind 10 dams: Toston, Canyon Ferry,
Hauser, Holter, Black Eagle, Rainbow, Cochrane, Ryan, Morony, and Fort Peck. The Yellowstone River rises on the continental
divide near Younts Peak in Wyoming's Teton Wilderness. It flows north through Yellowstone National Park, enters Montana
near Gardiner, and passes through the Paradise Valley to Livingston. It then flows northeasterly across the state
through Billings, Miles City, Glendive, and Sidney. The Yellowstone joins the Missouri in North Dakota just east
of Fort Union. It is the longest undammed, free-flowing river in the contiguous United States, and drains about a
quarter of Montana (36,000 square miles (93,000 km2)). There are at least 3,223 named lakes and reservoirs in Montana,
including Flathead Lake, the largest natural freshwater lake in the western United States. Other major lakes include
Whitefish Lake in the Flathead Valley and Lake McDonald and St. Mary Lake in Glacier National Park. The largest reservoir
in the state is Fort Peck Reservoir on the Missouri River, which is contained by the second largest earthen dam and
largest hydraulically filled dam in the world. Other major reservoirs include Hungry Horse on the Flathead River;
Lake Koocanusa on the Kootenai River; Lake Elwell on the Marias River; Clark Canyon on the Beaverhead River; Yellowtail
on the Bighorn River; and Canyon Ferry, Hauser, Holter, Rainbow, and Black Eagle on the Missouri River. Vegetation of the state includes lodgepole pine, ponderosa pine, Douglas fir, larch, spruce, aspen, birch, red cedar, hemlock, ash, alder, Rocky Mountain maple, and cottonwood trees. Forests cover approximately 25 percent of the state. Flowers
native to Montana include asters, bitterroots, daisies, lupins, poppies, primroses, columbine, lilies, orchids, and
dryads. Several species of sagebrush and cactus and many species of grasses are common. Many species of mushrooms
and lichens are also found in the state. Montana is home to a diverse array of fauna that includes 14 amphibian,
90 fish, 117 mammal, 20 reptile and 427 bird species. Additionally, there are over 10,000 invertebrate species, including
180 mollusks and 30 crustaceans. Montana has the largest grizzly bear population in the lower 48 states. Montana
hosts five federally endangered species (the black-footed ferret, whooping crane, least tern, pallid sturgeon, and white sturgeon) and seven threatened species, including the grizzly bear, Canada lynx, and bull trout. The Montana Department
of Fish, Wildlife and Parks manages fishing and hunting seasons for at least 17 species of game fish including seven
species of trout, walleye, and smallmouth bass, and at least 29 species of game birds and animals, including ring-necked pheasant, grey partridge, elk, pronghorn antelope, mule deer, whitetail deer, gray wolf and bighorn sheep. Average
annual precipitation is 15 inches (380 mm), but great variations are seen. The mountain ranges block the moist Pacific
air, holding moisture in the western valleys, and creating rain shadows to the east. Heron, in the west, receives
the most precipitation, 34.70 inches (881 mm). On the eastern (leeward) side of a mountain range, the valleys are
much drier; Lonepine averages 11.45 inches (291 mm), and Deer Lodge 11.00 inches (279 mm) of precipitation. The mountains
themselves can receive over 100 inches (2,500 mm); Grinnell Glacier in Glacier National Park, for example, gets
105 inches (2,700 mm). An area southwest of Belfry averaged only 6.59 inches (167 mm) over a sixteen-year period.
Most of the larger cities get 30 to 50 inches (0.76 to 1.27 m) of snow each year. Mountain ranges themselves can accumulate 300 inches (7.62 m) of snow during a winter. Heavy snowstorms may occur any time from September
through May, though most snow falls from November to March. Montana's personal income tax contains 7 brackets, with
rates ranging from 1 percent to 6.9 percent. Montana has no sales tax. In Montana, household goods are exempt from
property taxes. However, property taxes are assessed on livestock, farm machinery, heavy equipment, automobiles,
trucks, and business equipment. The amount of property tax owed is not determined solely by the property's value.
The property's value is multiplied by a tax rate, set by the Montana Legislature, to determine its taxable value.
The taxable value is then multiplied by the mill levy established by various taxing jurisdictions—city and county
government, school districts and others. Approximately 66,000 people of Native American heritage live in Montana.
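The property-tax calculation described above (value times a legislative tax rate, then times the combined mill levy) can be sketched in a few lines; the rate and levy values below are hypothetical placeholders, not actual Montana figures:

```python
# Illustration of the property-tax arithmetic described in the text.
# The tax rate and mill levy below are hypothetical, not actual Montana figures.

def property_tax(market_value: float, tax_rate: float, mill_levy: float) -> float:
    """Market value x tax rate gives the taxable value;
    taxable value x mill levy / 1,000 gives the tax (1 mill = $1 per $1,000)."""
    taxable_value = market_value * tax_rate
    return taxable_value * mill_levy / 1000.0

# Example: a $250,000 property, a 1.35% tax rate, and a 650-mill combined levy.
print(round(property_tax(250_000, 0.0135, 650), 2))
```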
Stemming from multiple treaties and federal legislation, including the Indian Appropriations Act (1851), the Dawes
Act (1887), and the Indian Reorganization Act (1934), seven Indian reservations, encompassing eleven tribal nations,
were created in Montana. A twelfth nation, the Little Shell Chippewa, is a "landless" people headquartered in Great
Falls, recognized by the state of Montana but not by the U.S. Government. The Blackfeet nation is headquartered on
the Blackfeet Indian Reservation (1851) in Browning, Crow on the Crow Indian Reservation (1851) in Crow Agency, Confederated
Salish and Kootenai and Pend d'Oreille on the Flathead Indian Reservation (1855) in Pablo, Northern Cheyenne on the
Northern Cheyenne Indian Reservation (1884) at Lame Deer, Assiniboine and Gros Ventre on the Fort Belknap Indian
Reservation (1888) in Fort Belknap Agency, Assiniboine and Sioux on the Fort Peck Indian Reservation (1888) at Poplar,
and Chippewa-Cree on the Rocky Boy's Indian Reservation (1916) near Box Elder. Approximately 63 percent of all Native people live off the reservations, concentrated in the larger Montana cities; the largest concentration of urban Indians is in Great Falls. The state also has a small Métis population, and 1990 census data indicated that people from as many
as 275 different tribes lived in Montana. While the largest European-American population in Montana overall is German,
pockets of significant Scandinavian ancestry are prevalent in some of the farming-dominated northern and eastern
prairie regions, parallel to nearby regions of North Dakota and Minnesota. Farmers of Irish, Scots, and English roots
also settled in Montana. The historically mining-oriented communities of western Montana such as Butte have a wider
range of European-American ethnicity; Finns, Eastern Europeans and especially Irish settlers left an indelible mark
on the area, as well as people originally from British mining regions such as Cornwall, Devon and Wales. The nearby
city of Helena, also founded as a mining camp, had a similar mix in addition to a small Chinatown. Many of Montana's
historic logging communities originally attracted people of Scottish, Scandinavian, Slavic, English and Scots-Irish
descent.[citation needed] Montana has a larger Native American population, both in number and as a percentage, than most U.S. states. Although the state ranked 45th in population (according to the 2010 U.S. Census), it ranked 19th in
total native people population. Native people constituted 6.5 percent of the state's total population, the sixth
highest percentage of all 50 states. Montana has three counties in which Native Americans are a majority: Big Horn,
Glacier, and Roosevelt. Other counties with large Native American populations include Blaine, Cascade, Hill, Missoula,
and Yellowstone counties. The state's Native American population grew by 27.9 percent between 1980 and 1990 (at a
time when Montana's entire population rose just 1.6 percent), and by 18.5 percent between 2000 and 2010. As of 2009,
almost two-thirds of Native Americans in the state live in urban areas. Of Montana's 20 largest cities, Polson (15.7
percent), Havre (13.0 percent), Great Falls (5.0 percent), Billings (4.4 percent), and Anaconda (3.1 percent) had
the greatest percentage of Native American residents in 2010. Billings (4,619), Great Falls (2,942), Missoula (1,838),
Havre (1,210), and Polson (706) have the most Native Americans living there. The state's seven reservations include
more than twelve distinct Native American ethnolinguistic groups. The climate has become warmer in Montana and continues
to do so. The glaciers in Glacier National Park have receded and are predicted to melt away completely in a few decades.
Many Montana cities set heat records during July 2007, the hottest month ever recorded in Montana. Winters are warmer,
too, and have fewer cold spells. Previously, these cold spells had killed off the bark beetles that are now attacking
the forests of western Montana. The combination of warmer weather, attack by beetles, and mismanagement during past
years has led to a substantial increase in the severity of forest fires in Montana. According to a study done for
the U.S. Environmental Protection Agency by the Harvard School of Engineering and Applied Science, portions of Montana
will experience a 200-percent increase in area burned by wildfires, and an 80-percent increase in related air pollution.
As white settlers began populating Montana from the 1850s through the 1870s, disputes with Native Americans ensued,
primarily over land ownership and control. In 1855, Washington Territorial Governor Isaac Stevens negotiated the
Hellgate treaty between the United States Government and the Salish, Pend d'Oreille, and the Kootenai people of western
Montana, which established boundaries for the tribal nations. The treaty was ratified in 1859. The treaty established what later became the Flathead Indian Reservation, but trouble with interpreters and confusion over its terms led whites to believe that the Bitterroot Valley was opened to settlement; the tribal nations disputed
those provisions. The Salish remained in the Bitterroot Valley until 1891. The first U.S. Army post established in
Montana was Camp Cooke on the Missouri River in 1866 to protect steamboat traffic going to Fort Benton, Montana.
More than a dozen additional military outposts were established in the state. Pressure over land ownership and control
increased due to discoveries of gold in various parts of Montana and surrounding states. Major battles occurred in
Montana during Red Cloud's War, the Great Sioux War of 1876, the Nez Perce War and in conflicts with Piegan Blackfeet.
The most notable of these were the Marias Massacre (1870), Battle of the Little Bighorn (1876), Battle of the Big
Hole (1877) and Battle of Bear Paw (1877). The last recorded conflict in Montana between the U.S. Army and Native
Americans occurred in 1887 during the Battle of Crow Agency in the Big Horn country. Indian survivors who had signed
treaties were generally required to move onto reservations. English is the official language in the state of Montana,
as it is in many U.S. states. English is also the language of the majority. According to the 2000 U.S. Census, 94.8
percent of the population aged 5 and older speak English at home. Spanish is the language most commonly spoken at
home other than English. There were about 13,040 Spanish-language speakers in the state (1.4 percent of the population)
in 2011. There were also 15,438 (1.7 percent of the state population) speakers of Indo-European languages other than
English or Spanish, 10,154 (1.1 percent) speakers of a Native American language, and 4,052 (0.4 percent) speakers
of an Asian or Pacific Islander language. Other languages spoken in Montana (as of 2013) include Assiniboine (about
150 speakers in Montana and Canada), Blackfoot (about 100 speakers), Cheyenne (about 1,700 speakers), Plains
Cree (about 100 speakers), Crow (about 3,000 speakers), Dakota (about 18,800 speakers in Minnesota, Montana, Nebraska,
North Dakota, and South Dakota), German Hutterite (about 5,600 speakers), Gros Ventre (about 10 speakers), Kalispel-Pend
d'Oreille (about 64 speakers), Kutenai (about 6 speakers), and Lakota (about 6,000 speakers in Minnesota, Montana, Nebraska, North Dakota, and South Dakota). The United States Department of Education estimated in 2009 that 5,274 students
in Montana spoke a language at home other than English. These included a Native American language (64 percent), German
(4 percent), Spanish (3 percent), Russian (1 percent), and Chinese (less than 0.5 percent). According to the 2010
Census, 89.4 percent of the population was White (87.8 percent Non-Hispanic White), 6.3 percent American Indian and
Alaska Native, 2.9 percent Hispanics and Latinos of any race, 0.6 percent Asian, 0.4 percent Black or African American,
0.1 percent Native Hawaiian and Other Pacific Islander, 0.6 percent from Some Other Race, and 2.5 percent from two
or more races. The largest European ancestry groups in Montana as of 2010 are: German (27.0 percent), Irish (14.8
percent), English (12.6 percent), Norwegian (10.9 percent), French (4.7 percent) and Italian (3.4 percent). The United
States Census Bureau estimates that the population of Montana was 1,032,949 on July 1, 2015, an increase of 43,534 people, or 4.40 percent, over the 2010 United States Census figure of 989,415. During the first decade of the new century, growth was mainly concentrated in
Montana's seven largest counties, with the highest percentage growth in Gallatin County, which saw a 32 percent increase
in its population from 2000 to 2010. The city with the largest percentage growth was Kalispell, at 40.1 percent, and the city with the largest increase in actual residents was Billings, whose population grew by 14,323 from 2000 to 2010. In 1940, Jeannette Rankin was once again elected to Congress, and in 1941, as she had in 1917,
she voted against the United States' declaration of war. This time she was the only vote against the war, and in
the wake of public outcry over her vote, she required police protection for a time. Other pacifists tended to be
those from "peace churches" who generally opposed war. Many individuals from throughout the U.S. who claimed conscientious
objector status were sent to Montana during the war as smokejumpers and for other forest fire-fighting duties. Simultaneously
with these conflicts, the bison, a keystone species and the primary protein source that Native people had survived on for centuries, were being destroyed. Some estimates say there were over 13 million bison in Montana in 1870. In 1875,
General Philip Sheridan pleaded to a joint session of Congress to authorize the slaughtering of herds in order to
deprive the Indians of their source of food. By 1884, commercial hunting had brought bison to the verge of extinction;
only about 325 bison remained in the entire United States. Tracks of the Northern Pacific Railroad (NPR) reached
Montana from the west in 1881 and from the east in 1882. However, the railroad played a major role in sparking tensions
with Native American tribes in the 1870s. Jay Cooke, the NPR president, launched major surveys into the Yellowstone valley in 1871, 1872 and 1873, which were challenged forcefully by the Sioux under Chief Sitting Bull. These clashes,
in part, contributed to the Panic of 1873 which delayed construction of the railroad into Montana. Surveys in 1874,
1875 and 1876 helped spark the Great Sioux War of 1876. The transcontinental NPR was completed on September 8, 1883,
at Gold Creek. Under Territorial Governor Thomas Meagher, Montanans held a constitutional convention in 1866 in a
failed bid for statehood. A second constitutional convention was held in Helena in 1884 that produced a constitution
ratified 3:1 by Montana citizens in November 1884. For political reasons, Congress did not approve Montana statehood until February 1889, when President Grover Cleveland signed an omnibus
bill granting statehood to Montana, North Dakota, South Dakota and Washington once the appropriate state constitutions
were crafted. In July 1889, Montanans convened their third constitutional convention and produced a constitution
acceptable to both the people and the federal government. On November 8, 1889, President Benjamin Harrison proclaimed Montana
the forty-first state in the union. The first state governor was Joseph K. Toole. In the 1880s, Helena (the current
state capital) had more millionaires per capita than any other United States city. The Homestead Act of 1862 provided
free land to settlers who could claim and "prove-up" 160 acres (0.65 km2) of federal land in the midwest and western
United States. Montana did not see a large influx of immigrants from this act because 160 acres was usually insufficient
to support a family in the arid territory. The first homestead claim under the act in Montana was made by David Carpenter
near Helena in 1868. The first claim by a woman was made near Warm Springs Creek by Miss Gwenllian Evans, the daughter
of the Deer Lodge, Montana, pioneer Morgan Evans. By 1880, there were farms in the more verdant valleys of central and
western Montana, but few on the eastern plains. The Desert Land Act of 1877 was passed to allow settlement of arid
lands in the west and allotted 640 acres (2.6 km2) to settlers for a fee of $0.25 per acre and a promise to irrigate
the land. After three years, a fee of one dollar per acre would be paid and the land would be owned by the settler.
This act brought mostly cattle and sheep ranchers into Montana, many of whom grazed their herds on the Montana prairie
for three years, did little to irrigate the land and then abandoned it without paying the final fees. Some farmers
came with the arrival of the Great Northern and Northern Pacific Railroads throughout the 1880s and 1890s, though
in relatively small numbers. In the early 1900s, James J. Hill of the Great Northern began promoting settlement in
the Montana prairie to fill his trains with settlers and goods. Other railroads followed suit. In 1902, the Reclamation
Act was passed, allowing irrigation projects to be built in Montana's eastern river valleys. In 1909, Congress passed
the Enlarged Homestead Act that expanded the amount of free land from 160 to 320 acres (0.6 to 1.3 km2) per family
and in 1912 reduced the time to "prove up" on a claim to three years. In 1916, the Stock-Raising Homestead Act allowed
homesteads of 640 acres in areas unsuitable for irrigation. This combination of advertising and changes in the Homestead
Act drew tens of thousands of homesteaders, lured by free land, with World War I bringing particularly high wheat
prices. In addition, Montana was going through a temporary period of higher-than-average precipitation. Homesteaders
arriving in this period were known as "Honyockers", or "scissorbills." Though the word "honyocker", possibly derived
from the ethnic slur "hunyak," was applied derisively to homesteaders as "greenhorns", "new at their business" or "unprepared", the reality was that a majority of these new settlers had previous farming experience,
though there were also many who did not. In June 1917, the U.S. Congress passed the Espionage Act of 1917 which was
later extended by the Sedition Act of 1918, enacted in May 1918. In February 1918, the Montana legislature had passed
the Montana Sedition Act, which was a model for the federal version. In combination, these laws criminalized criticism
of the U.S. government, military, or symbols through speech or other means. The Montana Act led to the arrest of
over 200 individuals and the conviction of 78, mostly of German or Austrian descent. Over 40 spent time in prison.
In May 2006, then-Governor Brian Schweitzer posthumously issued full pardons for all those convicted of violating
the Montana Sedition Act. When the U.S. entered World War II on December 8, 1941, many Montanans already had enlisted
in the military to escape the poor national economy of the previous decade. Another 40,000-plus Montanans entered
the armed forces in the first year following the declaration of war, and over 57,000 joined up before the war ended.
These numbers constituted about 10 percent of the state's total population, and Montana again contributed one of
the highest numbers of soldiers per capita of any state. Many Native Americans were among those who served, including
soldiers from the Crow Nation who became Code Talkers. At least 1,500 Montanans died in the war. Montana also was
the training ground for the First Special Service Force, or "Devil's Brigade," a joint U.S.-Canadian commando-style
force that trained at Fort William Henry Harrison for experience in mountainous and winter conditions before deployment.
Air bases were built in Great Falls, Lewistown, Cut Bank and Glasgow, some of which were used as staging areas to
prepare planes to be sent to allied forces in the Soviet Union. During the war, about 30 Japanese balloon bombs were
documented to have landed in Montana, though no casualties or major forest fires were attributed to them.
Punjab (Urdu, Punjabi: پنجاب, panj-āb, "five waters"), also spelled Panjab, is the most populous of the
four provinces of Pakistan. It has an area of 205,344 square kilometres (79,284 square miles) and a population of
91,379,615 in 2011, approximately 56% of the country's total population. Its provincial capital and largest city
is Lahore. Punjab is bordered by the Indian states of Jammu and Kashmir to the northeast and Punjab and Rajasthan
to the east. In Pakistan it is bordered by Sindh to the south, Balochistān and Khyber Pakhtunkhwa to the west, and
Islamabad and Azad Kashmir to the north. Punjab's geography mostly consists of the alluvial plain of the Indus River
and its four major tributaries in Pakistan, the Jhelum, Chenab, Ravi, and Sutlej rivers. There are several mountainous
regions, including the Sulaiman Mountains in the southwest part of the province, and Margalla Hills, Salt Range,
and Pothohar Plateau in the north. Agriculture is the chief source of income and employment in Punjab; wheat and
cotton are the principal crops. Since independence, Punjab has become the seat of political and economic power; it
remains the most industrialised province of Pakistan. It accounts for 39.2% of large-scale manufacturing and 70% of small-scale manufacturing in the country. Its capital, Lahore, is a major regional cultural, historical, and economic
centre. Punjab is Pakistan's second largest province in terms of land area at 205,344 km2 (79,284 sq mi), after Balochistan,
and is located at the north western edge of the geologic Indian plate in South Asia. The province is bordered by
Kashmir (Azad Kashmir, Pakistan and Jammu and Kashmir, India) to the northeast, the Indian states of Punjab and Rajasthan
to the east, the Pakistani province of Sindh to the south, the province of Balochistan to the southwest, the province
of Khyber Pakhtunkhwa to the west, and the Islamabad Capital Territory to the north. The capital and largest city
is Lahore which was the historical capital of the wider Punjab region. Other important cities include Faisalabad,
Rawalpindi, Gujranwala, Sargodha, Multan, Sialkot, Bahawalpur, Gujrat, Sheikhupura, Jhelum and Sahiwal. Undivided
Punjab is home to six rivers, of which five flow through Pakistani Punjab. From west to east, these are: the Indus, Jhelum, Chenab, Ravi, Beas and Sutlej. Nearly 60% of Pakistan's population lives in the Punjab. It is the nation's
only province that touches every other province; it also surrounds the federal enclave of the national capital city
at Islamabad. In the acronym P-A-K-I-S-T-A-N, the P is for Punjab. There are 48 departments in the Punjab government. Each department is headed by a Provincial Minister (a politician) and a Provincial Secretary (a civil servant, usually of BPS-20 or BPS-21 grade). All ministers report to the Chief Minister, who is the chief executive. All secretaries report
to the Chief Secretary of Punjab, who is usually a BPS-22 Civil Servant. The Chief Secretary in turn reports to the
Chief Minister. In addition to these departments, there are several Autonomous Bodies and Attached Departments that
report directly to either the Secretaries or the Chief Secretary. Punjab during Mahabharata times was known as Panchanada.
Punjab was part of the Indus Valley Civilization more than 4,000 years ago. The main site in Punjab was the city of Harappa. The Indus Valley Civilization spanned much of what is today Pakistan and eventually evolved into the
Indo-Aryan civilisation. The Vedic civilisation flourished along the length of the Indus River. This civilisation
shaped subsequent cultures in South Asia and Afghanistan. Although the archaeological site at Harappa was partially
damaged in 1857 when engineers constructing the Lahore-Multan railroad used brick from the Harappa ruins for track
ballast, an abundance of artefacts has nevertheless been found. Punjab was part of great ancient empires, including
the Gandhara Mahajanapadas, Achaemenids, Macedonians, Mauryas, Kushans, Guptas, and Hindu Shahi. For a period of time, it was also part of the Gujar empire, otherwise known as the Gurjara-Pratihara empire. Agriculture flourished and
trading cities (such as Multan and Lahore) grew in wealth. Due to its location, the Punjab region came under constant
attacks and influence from the west and witnessed centuries of foreign invasions by the Greeks, Kushans, Scythians,
Turks, and Afghans. The city of Taxila is said to have been founded by Taksha, the son of Bharata, who was the brother of Rama. It was reputed to house the oldest university in the world,[citation needed] Takshashila University. One of the teachers
was the great Vedic thinker and politician Chanakya. Taxila was a great centre of learning and intellectual discussion
during the Maurya Empire. It is a UN World Heritage site, valued for its archaeological and religious history. The
northwestern part of South Asia, including Punjab, was repeatedly invaded or conquered by various foreign empires,
such as those of Tamerlane, Alexander the Great and Genghis Khan. Having conquered Drangiana, Arachosia, Gedrosia
and Seistan in ten days, Alexander crossed the Hindu Kush and was thus fully informed of the magnificence of the
country and its riches in gold, gems and pearls. However, Alexander had to encounter and reduce the tribes on the
border of Punjab before entering the luxuriant plains. Having taken a northeasterly direction, he marched against
the Aspii (mountaineers), who offered vigorous resistance, but were subdued.[citation needed] Alexander then marched
through Ghazni, blockaded Magassa, and then marched to Ora and Bazira. Turning to the northeast, Alexander marched
to Pucela, the capital of the district now known as Pakhli. He entered Western Punjab, where the ancient city of
Nysa (at the site of modern-day Mong) was situated. A coalition was formed against Alexander by the Cathians, the
people of Multan, who were very skilful in war. Alexander committed many troops, eventually killing seventeen thousand
Cathians in this battle, and the city of Sagala (present-day Sialkot) was razed to the ground. Alexander left Punjab
in 326 B.C. and took his army to the heartlands of his empire.[citation needed] The Punjabis followed a diverse range of faiths, mainly Hinduism,[citation needed] when the Muslim Umayyad army led by Muhammad bin Qasim conquered
Sindh and Southern Punjab in 712, by defeating Raja Dahir. The Umayyad Caliphate was the second Islamic caliphate
established after the death of Muhammad. It was ruled by the Umayyad dynasty, whose name derives from Umayya ibn
Abd Shams, the great-grandfather of the first Umayyad caliph. Although the Umayyad family originally came from the
city of Mecca, their capital was Damascus. Muhammad bin Qasim was the first to bring the message of Islam to the population
of Punjab.[citation needed] Punjab was part of different Muslim Empires consisting of Afghans and Turkic peoples
in co-operation with local Punjabi tribes and others.[citation needed] In the 11th century, during the reign of Mahmud
of Ghazni, the province became an important centre, with Lahore as the second capital[citation needed] of the Ghaznavid
Empire based out of Afghanistan. The Punjab region became predominantly Muslim due to missionary Sufi saints whose
dargahs dot the landscape of the Punjab region. In 1758 the general of the Hindu Maratha Empire, Raghunath Rao, conquered
Lahore and Attock. Timur Shah Durrani, the son and viceroy of Ahmad Shah Abdali, was driven out of Punjab. Lahore,
Multan, Dera Ghazi Khan, Kashmir and other subahs on the south and eastern side of Peshawar were under the Maratha
rule for the most part. In Punjab and Kashmir, the Marathas were now major players. The Third Battle of Panipat took
place in 1761: Ahmad Shah Abdali invaded the Maratha territory of Punjab, captured the remnants of the Maratha Empire in the Punjab and Kashmir regions, and re-consolidated control over them. In the mid-fifteenth century, the religion of
Sikhism was born. During the Mughal empire, many Hindus increasingly adopted Sikhism. These became a formidable military
force against the Mughals and later against the Afghan Empire. After fighting Ahmad Shah Durrani in the later eighteenth
century, the Sikhs took control of Punjab and managed to establish the Sikh Empire under Maharaja Ranjit Singh, which
lasted from 1799 to 1849. The capital of Ranjit Singh's empire was Lahore, and the empire also extended into Afghanistan
and Kashmir. The Bhangi Misl was the first Sikh band to conquer Lahore and other towns of Punjab. Syed Ahmad Barelvi, a Muslim, waged jihad and attempted to create an Islamic state with strict enforcement of Islamic law. He arrived in 1821 with many supporters and spent two years organising popular and material support for his Punjab campaign.
He carefully developed a network of people through the length and breadth of India to collect funds and encourage
volunteers, travelling widely throughout India attracting a following among pious Muslims. In December 1826 Sayyid
Ahmad and his followers clashed with Sikh troops at Akora Khattak, but with no decisive result. In a major battle
near the town of Balakot in 1831, Sayyid Ahmad and Shah Ismail Shaheed with volunteer Muslims were defeated by the
professional Sikh Army. Maharaja Ranjit Singh's death in the summer of 1839 brought political chaos and the subsequent
battles of succession and the bloody infighting between the factions at court weakened the state. Relationships with
neighbouring British territories then broke down, starting the First Anglo-Sikh War; this led to a British official
being resident in Lahore and the annexation in 1849 of territory south of the Satluj to British India. After the
Second Anglo-Sikh War in 1849, the Sikh Empire became the last territory to be merged into British India. In Jhelum
35 British soldiers of HM XXIV regiment were killed by the local resistance during the Indian Rebellion of 1857.[citation
needed] Punjab witnessed major battles between the armies of India and Pakistan in the wars of 1965 and 1971. Since
the 1990s Punjab hosted several key sites of Pakistan's nuclear program such as Kahuta. It also hosts major military
bases such as at Sargodha and Rawalpindi. The peace process between India and Pakistan, which began in earnest in
2004, has helped pacify the situation. Trade and people-to-people contacts through the Wagah border are now starting
to become common. Indian Sikh pilgrims visit holy sites such as Nankana Sahib. The onset of the southwest monsoon
is anticipated to reach Punjab by May, but since the early 1970s the weather pattern has been irregular. The spring
monsoon has either skipped over the area or has caused it to rain so hard that floods have resulted. June and July
are oppressively hot. Although official estimates rarely place the temperature above 46 °C, newspaper sources claim
that it reaches 51 °C and regularly carry reports about people who have succumbed to the heat. Heat records were
broken in Multan in June 1993, when the mercury was reported to have risen to 54 °C. In August the oppressive heat
is punctuated by the rainy season, referred to as barsat, which brings relief in its wake. The hardest part of the
summer is then over, but cooler weather does not come until late October. The major and native language spoken in
the Punjab is Punjabi (written in the Shahmukhi script in Pakistan), and Punjabis comprise the largest ethnic group in the country. Punjabi is the provincial language of Punjab. There is not a single district in the province where the Punjabi language is the mother tongue of less than 89% of the population. The language is not given any official recognition
in the Constitution of Pakistan at the national level. Punjabis themselves are a heterogeneous group comprising different
tribes, clans (Urdu: برادری) and communities. In Pakistani Punjab these groupings have more to do with traditional occupations, such as blacksmiths or artisans, than with rigid social stratification. Punjabi dialects spoken in
the province include Majhi (Standard), Saraiki and Hindko. Saraiki is mostly spoken in south Punjab, and Pashto is spoken in some parts of northwest Punjab, especially in Attock District and Mianwali District. The Government of
Punjab is a provincial government in the federal structure of Pakistan and is based in Lahore, the capital of the Punjab
Province. The Chief Minister of Punjab (CM) is elected by the Provincial Assembly of the Punjab to serve as the head
of the provincial government in Punjab, Pakistan. The current Chief Minister is Shahbaz Sharif, who was restored to the office after a period of Governor's rule from 25 February 2009 to 30 March 2009, and was re-elected as a result of the 11 May 2013 elections. The Provincial Assembly of the Punjab is a unicameral legislature
of elected representatives of the province of Punjab, which is located in Lahore in eastern Pakistan. The Assembly
was established under Article 106 of the Constitution of Pakistan as having a total of 371 seats, with 66 seats reserved
for women and eight reserved for non-Muslims. Punjab has the largest economy in Pakistan, contributing most to the
national GDP. The province's economy has quadrupled since 1972. Its share of Pakistan's GDP was 54.7% in 2000 and
59% as of 2010. It is especially dominant in the service and agriculture sectors of Pakistan's economy, with its contribution ranging from 52.1% to 64.5% in the service sector and from 56.1% to 61.5% in the agriculture sector. It is also a major manpower contributor, as it has the largest pool of professionals and highly skilled (technically trained) manpower in Pakistan. It is also dominant in the manufacturing sector, though less overwhelmingly so, with historical contributions ranging from a low of 44% to a high of 52.6%. In 2007, Punjab achieved a growth rate of 7.8%,
and during the period 2002–03 to 2007–08 its economy grew at a rate of between 7% and 8% per year; during 2008–09 it grew at 6%, against total GDP growth for Pakistan of 4%. Despite the lack of a coastline, Punjab is the most industrialised
province of Pakistan; its manufacturing industries produce textiles, sports goods, heavy machinery, electrical appliances,
surgical instruments, vehicles, auto parts, metals, sugar mill plants, aircraft, cement, agricultural machinery,
bicycles and rickshaws, floor coverings, and processed foods. In 2003, the province manufactured 90% of the paper
and paper boards, 71% of the fertilizers, 69% of the sugar and 40% of the cement of Pakistan. Despite its tropical
wet and dry climate, extensive irrigation makes it a rich agricultural region. Its canal-irrigation system established
by the British is the largest in the world. Wheat and cotton are the largest crops. Other crops include rice, sugarcane,
millet, corn, oilseeds, pulses, vegetables, and fruits such as kinoo. Livestock and poultry production are also important.
Despite past animosities, the rural masses in Punjab's farms continue to use the Hindu calendar for planting and
harvesting. As of June 2012, Pakistan's electricity problems were so severe that violent riots were taking
place across Punjab. According to protesters, load shedding was depriving the cities of electricity 20–22 hours a
day, causing businesses to go bust and making daily life extremely hard. Gujranwala, Toba Tek Singh, Faisalabad, Sialkot,
Bahawalnagar and communities across Khanewal District saw widespread rioting and violence on Sunday 17 June 2012,
with the houses of several members of parliament being attacked as well as the offices of regional energy suppliers
Fesco, Gepco and Mepco being ransacked or attacked. The structure of a mosque is simple and it expresses openness.
Calligraphic inscriptions from the Quran decorate mosques and mausoleums in Punjab. The inscriptions on bricks and
tiles of the mausoleum of Shah Rukn-e-Alam (1320 AD) at Multan are outstanding specimens of architectural calligraphy.
The earliest existing building in South Asia with enamelled tile-work is the tomb of Shah Yusuf Gardezi (1150 AD)
at Multan. A specimen of the sixteenth century tile-work at Lahore is the tomb of Sheikh Musa Ahangar, with its brilliant
blue dome. The tile-work of Emperor Shah Jahan is of a richer and more elaborate nature. The pictured wall of Lahore
Fort is the last word in tile-work anywhere in the world. The fairs held at the shrines of Sufi saints are called
urs. They generally mark the death anniversary of the saint. On these occasions devotees assemble in large numbers
and pay homage to the memory of the saint. Soul-inspiring music is played and devotees dance in ecstasy. The music on these occasions is essentially folk in character and carries an appeal of its own, conveying mystic messages through folk forms.
The most important urs are: urs of Data Ganj Buksh at Lahore, urs of Hazrat Sultan Bahu at Jhang, urs of Hazrat Shah
Jewna at Jhang, urs of Hazrat Mian Mir at Lahore, urs of Baba Farid Ganj Shakar at Pakpattan, urs of Hazrat Bahaudin
Zakria at Multan, urs of Sakhi Sarwar Sultan at Dera Ghazi Khan, urs of Shah Hussain at Lahore, urs of Hazrat Bulleh
Shah at Kasur, urs of Hazrat Imam Bari (Bari Shah Latif) at Rawalpindi-Islamabad and urs of Shah Inayar Qadri (the
murshid of Bulleh Shah) in Lahore. Exhibitions and annual horse shows in all districts and a national horse and cattle show at Lahore are held with official patronage. The national horse and cattle show at Lahore is the biggest
festival where sports, exhibitions, and livestock competitions are held. It not only promotes agricultural products and livestock through its exhibitions but also showcases the rich cultural heritage of the province, with its strong rural roots. The province is home to several historical
sites, including the Shalimar Gardens, the Lahore Fort, the Badshahi Mosque, the Rohtas Fort and the ruins of the
ancient city of Harappa. The Anarkali Market and Jahangir's Tomb are prominent in the city of Lahore, as is the Lahore
Museum, while the ancient city of Taxila in the northwest was once a major centre of Buddhist and Hindu influence.
Several important Sikh shrines are in the province, including Nankana Sahib, the birthplace of the first Guru, Guru Nanak. There are a few famous hill stations, including Murree, Bhurban, Patriata and Fort Munro. Among
the Punjabi poets, the names of Sultan Bahu, Bulleh Shah, Mian Muhammad Baksh, and Waris Shah and folk singers like
Inayat Hussain Bhatti and Tufail Niazi, Alam Lohar, Sain Marna, Mansoor Malangi, Allah Ditta Lona wala, Talib Hussain
Dard, Attaullah Khan Essa Khailwi, Gamoo Tahliwala, Mamzoo Gha-lla, Akbar Jat, Arif Lohar, Ahmad Nawaz Cheena and
Hamid Ali Bela are well-known. In the composition of classical ragas, there are such masters as Malika-i-Mauseequi
(Queen of Music) Roshan Ara Begum, Ustad Amanat Ali Khan, Salamat Ali Khan and Ustad Fateh Ali Khan. Alam Lohar has
made significant contributions to folklore and Punjabi literature, by being a very influential Punjabi folk singer
from 1930 until 1979. In light music for the popular taste, particularly ghazals and folk songs, which have an appeal of their own, the names of Mehdi Hassan, Ghulam Ali, Nur Jehan, Malika Pukhraj, Farida Khanum, Roshen Ara Begum, and Nusrat Fateh Ali Khan are well known. Folk songs and dances of the Punjab reflect a wide range of moods:
the rains, sowing and harvesting seasons. Luddi, Bhangra and Sammi depict the joy of living. Love legends of Heer
Ranjha, Mirza Sahiban, Sohni Mahenwal and Saiful Mulk are sung in different styles.
Richmond is located at the fall line of the James River, 44 miles (71 km) west of Williamsburg, 66 miles (106 km) east of
Charlottesville, and 98 miles (158 km) south of Washington, D.C. Surrounded by Henrico and Chesterfield counties,
the city is located at the intersections of Interstate 95 and Interstate 64, and encircled by Interstate 295 and
Virginia State Route 288. Major suburbs include Midlothian to the southwest, Glen Allen to the north and west, Short
Pump to the west and Mechanicsville to the northeast. The site of Richmond had been an important village of the Powhatan
Confederacy, and was briefly settled by English colonists from Jamestown in 1609, and in 1610–1611. The present city
of Richmond was founded in 1737. It became the capital of the Colony and Dominion of Virginia in 1780. During the
Revolutionary War period, several notable events occurred in the city, including Patrick Henry's "Give me liberty
or give me death" speech in 1775 at St. John's Church, and the passage of the Virginia Statute for Religious Freedom
written by Thomas Jefferson. During the American Civil War, Richmond served as the capital of the Confederate States
of America. The city entered the 20th century with one of the world's first successful electric streetcar systems,
as well as a national hub of African-American commerce and culture, the Jackson Ward neighborhood. Richmond's economy
is primarily driven by law, finance, and government, with federal, state, and local governmental agencies, as well
as notable legal and banking firms, located in the downtown area. The city is home to both the United States Court
of Appeals for the Fourth Circuit, one of 13 United States courts of appeals, and the Federal Reserve Bank of Richmond,
one of 12 Federal Reserve Banks. Dominion Resources and MeadWestvaco, Fortune 500 companies, are headquartered in
the city, with others in the metropolitan area. In 1775, Patrick Henry delivered his famous "Give me Liberty or Give
me Death" speech in St. John's Church in Richmond, crucial for deciding Virginia's participation in the First Continental
Congress and setting the course for revolution and independence. On April 18, 1780, the state capital was moved from
the colonial capital of Williamsburg to Richmond, to provide a more centralized location for Virginia's increasing
westerly population, as well as to isolate the capital from British attack. The latter motive proved to be in vain,
and in 1781, under the command of Benedict Arnold, Richmond was burned by British troops, causing Governor Thomas
Jefferson to flee as the Virginia militia, led by Sampson Mathews, defended the city. Richmond recovered quickly
from the war, and by 1782 was once again a thriving city. In 1786, the Virginia Statute for Religious Freedom (drafted
by Thomas Jefferson) was passed at the temporary capitol in Richmond, providing the basis for the separation of church
and state, a key element in the development of the freedom of religion in the United States. A permanent home for
the new government, the Virginia State Capitol building, was designed by Thomas Jefferson with the assistance of
Charles-Louis Clérisseau, and was completed in 1788. After the American Revolutionary War, Richmond emerged as an
important industrial center. To facilitate the transfer of cargo from the flat-bottomed bateaux above the fall line
to the ocean-going ships below, George Washington helped design the James River and Kanawha Canal in the 18th century
to bypass Richmond's rapids, with the intent of providing a water route across the Appalachians to the Kanawha River.
The legacy of the canal boatmen is represented by the figure in the center of the city flag. As a result of this
and ample access to hydropower due to the falls, Richmond became home to some of the largest manufacturing facilities
in the country, including iron works and flour mills, the largest facilities of their kind in the South. The resistance
to the slave trade was growing by the mid-nineteenth century; in one famous case in 1848, Henry "Box" Brown made
history by having himself nailed into a small box and shipped from Richmond to abolitionists in Philadelphia, Pennsylvania,
escaping slavery. On April 17, 1861, five days after the Confederate attack on Fort Sumter, the legislature voted
to secede from the United States and joined the Confederacy. Official action came in May, after the Confederacy promised
to move its national capital to Richmond. The city was at the end of a long supply line, which made it somewhat difficult
to defend, although supplies continued to reach the city by canal and wagon for years, since it was protected by
the Army of Northern Virginia and arguably the Confederacy's best troops and commanders. It became the main target
of Union armies, especially in the campaigns of 1862 and 1864–65. In addition to Virginia and Confederate government
offices and hospitals, a railroad hub, and one of the South's largest slave markets, Richmond had the largest factory
in the Confederacy, the Tredegar Iron Works, which turned out artillery and other munitions, including the 723 tons
of armor plating that covered the CSS Virginia, the world's first ironclad used in war, as well as much of the Confederates'
heavy ordnance machinery. The Confederate Congress shared quarters with the Virginia General Assembly in the Virginia
State Capitol, with the Confederacy's executive mansion, the "White House of the Confederacy", located two blocks
away. The Seven Days Battles followed in late June and early July 1862, during which Union General McClellan threatened
to take Richmond but ultimately failed. Three years later, as March 1865 ended, the Confederate capital became indefensible.
On March 25, Confederate General John B. Gordon's desperate attack on Fort Stedman east of Petersburg failed. On
April 1, General Philip Sheridan, assigned to interdict the Southside Railroad, met brigades commanded by George
Pickett at the Five Forks junction, smashing them, taking thousands of prisoners, and encouraging General Grant to
order a general advance. When the Union Sixth Corps broke through Confederate lines on Boydton Plank Road south of
Petersburg, Confederate casualties exceeded 5,000, or about a tenth of Lee's defending army. General Lee then informed
Jefferson Davis that he was about to evacuate Richmond. Davis and his cabinet left the city by train that night,
as government officials burned documents and departing Confederate troops burned tobacco and other warehouses to
deny their contents to the victors. On April 2, 1865, General Godfrey Weitzel, commander of the 25th corps of the
United States Colored Troops, accepted the city's surrender from the mayor and group of leading citizens who remained.
The Union troops eventually managed to stop the raging fires, but about 25% of the city's buildings were destroyed.
President Abraham Lincoln visited General Grant at Petersburg on April 3, and took a launch to Richmond the next
day, while Jefferson Davis attempted to organize his Confederate government at Danville. Lincoln met Confederate
assistant secretary of War John A. Campbell, and handed him a note inviting Virginia's legislature to end their rebellion.
After Campbell spun the note to Confederate legislators as a possible end to the Emancipation Proclamation, Lincoln
rescinded his offer and ordered General Weitzel to prevent the Confederate state legislature from meeting. Union
forces killed, wounded or captured 8,000 Confederate troops at Saylor's Creek southwest of Petersburg on April 6.
General Lee continued to reject General Grant's surrender suggestion until Sheridan's infantry and cavalry appeared
in front of his retreating army on April 8. He surrendered his remaining approximately 10,000 troops at Appomattox
Court House the following morning. Jefferson Davis retreated to North Carolina, then further south, after Lincoln
rejected the surrender terms negotiated by General Sherman and envoys of North Carolina governor Zebulon Vance, which
failed to mention slavery. Davis was captured on May 10 near Irwinville, Georgia and taken back to Virginia, where
he was charged with treason and imprisoned for two years at Fort Monroe until freed on bail. Within a decade of the
Civil War, Richmond emerged from the smoldering rubble to resume its position as an economic powerhouse, with iron-front buildings
and massive brick factories. Canal traffic peaked in the 1860s and slowly gave way to railroads, allowing Richmond
to become a major railroad crossroads, eventually including the site of the world's first triple railroad crossing.
Tobacco warehousing and processing continued to play a role, boosted by the world's first cigarette-rolling machine,
invented by James Albert Bonsack of Roanoke in 1880–81. Contributing to Richmond's resurgence was the first successful
electrically powered trolley system in the United States, the Richmond Union Passenger Railway. Designed by electric
power pioneer Frank J. Sprague, the trolley system opened its first line in 1888, and electric streetcar lines rapidly
spread to other cities across the country. Sprague's system used an overhead wire and trolley pole to collect current,
with electric motors on the car's trucks. In Richmond, the transition from streetcars to buses began in May 1947
and was completed on November 25, 1949. By the beginning of the 20th century, the city's population had reached 85,050
in 5 square miles (13 km2), making it the most densely populated city in the Southern United States. In 1900, the
Census Bureau reported Richmond's population as 62.1% white and 37.9% black. Freed slaves and their descendants created
a thriving African-American business community, and the city's historic Jackson Ward became known as the "Wall Street
of Black America." In 1903, African-American businesswoman and financier Maggie L. Walker chartered St. Luke Penny
Savings Bank, and served as its first president, as well as the first female bank president in the United States.
Today, the bank is called the Consolidated Bank and Trust Company, and it is the oldest surviving African-American
bank in the U.S. Other figures from this time included newspaper editor John Mitchell, Jr. In 1910, the former city of Manchester
was consolidated with the city of Richmond, and in 1914, the city annexed Barton Heights, Ginter Park, and Highland
Park areas of Henrico County. In May 1914, Richmond became the headquarters of the Fifth District of the Federal
Reserve Bank. Between 1963 and 1965, there was a "downtown boom" that led to the construction of more than 700 buildings
in the city. In 1968, Virginia Commonwealth University was created by the merger of the Medical College of Virginia
with the Richmond Professional Institute. In 1970, Richmond's borders expanded by an additional 27 square miles (70
km2) on the south. After several years of court cases in which Chesterfield County fought annexation, more than 47,000
people who once were Chesterfield County residents found themselves within the city limits on January 1, 1970.
In 1996, lingering tensions surfaced in the controversy over adding a statue of African American Richmond native
and tennis star Arthur Ashe to the famed series of statues of Confederate heroes of the Civil War on Monument Avenue.
After several months of controversy, the bronze statue of Ashe was finally completed on Monument Avenue on July 10,
1996, facing the opposite direction from the Confederate heroes. Richmond is located at 37°32′N 77°28′W
(37.538, −77.462). According to the United States Census Bureau, the city has a total
area of 62 square miles (160 km2), of which 60 square miles (155 km2) is land and 2.7 square miles (7.0 km2) of it
(4.3%) is water. The city is located in the Piedmont region of Virginia, at the highest navigable point of the James
River. The Piedmont region is characterized by relatively low, rolling hills, and lies between the low, sea level
Tidewater region and the Blue Ridge Mountains. Significant bodies of water in the region include the James River,
the Appomattox River, and the Chickahominy River. Richmond's original street grid, laid out in 1737, included the
area between what are now Broad, 17th, and 25th Streets and the James River. Modern Downtown Richmond is located
slightly farther west, on the slopes of Shockoe Hill. Nearby neighborhoods include Shockoe Bottom, the historically
significant and low-lying area between Shockoe Hill and Church Hill, and Monroe Ward, which contains the Jefferson
Hotel. Richmond's East End includes neighborhoods like rapidly gentrifying Church Hill, home to St. John's Church,
as well as poorer areas like Fulton, Union Hill, and Fairmont, and public housing projects like Mosby Court, Whitcomb
Court, Fairfield Court, and Creighton Court closer to Interstate 64. The area between Belvidere Street, Interstate
195, Interstate 95, and the river, which includes Virginia Commonwealth University, is socioeconomically and architecturally
diverse. North of Broad Street, the Carver and Newtowne West neighborhoods are demographically similar to neighboring
Jackson Ward, with Carver experiencing some gentrification due to its proximity to VCU. The affluent area between
the Boulevard, Main Street, Broad Street, and VCU, known as the Fan, is home to Monument Avenue, an outstanding collection
of Victorian architecture, and many students. West of the Boulevard is the Museum District, the location of the Virginia
Historical Society and the Virginia Museum of Fine Arts. South of the Downtown Expressway are Byrd Park, Maymont,
Hollywood Cemetery, the predominantly black working class Randolph neighborhood, and white working class Oregon Hill.
Cary Street between Interstate 195 and the Boulevard is a popular commercial area called Carytown. The portion of
the city south of the James River is known as the Southside. Neighborhoods in the city's Southside area range from
affluent and middle class suburban neighborhoods Westover Hills, Forest Hill, Southampton, Stratford Hills, Oxford,
Huguenot Hills, Hobby Hill, and Woodland Heights to the impoverished Manchester and Blackwell areas, the Hillside
Court housing projects, and the ailing Jefferson Davis Highway commercial corridor. Other Southside neighborhoods
include Fawnbrook, Broad Rock, Cherry Gardens, Cullenwood, and Beaufont Hills. Much of Southside developed a suburban
character as part of Chesterfield County before being annexed by Richmond, most notably in 1970. Richmond has a humid
subtropical climate (Köppen Cfa), with hot and humid summers and generally cool winters. The mountains to the west
act as a partial barrier to outbreaks of cold, continental air in winter; Arctic air is delayed long enough to be
modified, then further warmed as it subsides in its approach to Richmond. The open waters of the Chesapeake Bay and
Atlantic Ocean contribute to the humid summers and mild winters. The coldest weather normally occurs from late December
to early February, and the January daily mean temperature is 37.9 °F (3.3 °C), with an average of 6.0 days with highs
at or below the freezing mark. Downtown areas straddle the border between USDA Hardiness zones 7B and 8A, and temperatures
seldom fall to 0 °F (−18 °C); the most recent subzero (°F) reading occurred on January 28, 2000, when the
temperature reached −1 °F (−18 °C). The July daily mean temperature is 79.3 °F (26.3 °C), and high temperatures reach
or exceed 90 °F (32 °C) approximately 43 days out of the year; while 100 °F (38 °C) temperatures are not uncommon,
they do not occur every year. Extremes in temperature have ranged from −12 °F (−24 °C) on January 19, 1940 up to
107 °F (42 °C) on August 6, 1918. Precipitation is rather uniformly distributed throughout the year. However,
dry periods lasting several weeks do occur, especially in autumn when long periods of pleasant, mild weather are
most common. There is considerable variability in total monthly amounts from year to year so that no one month can
be depended upon to be normal. Snow has been recorded during seven of the twelve months. Falls of 3 inches (7.6 cm)
or more within 24 hours occur on average once per year. Annual snowfall, however, is usually light, averaging 10.5
inches (27 cm) per season. Snow typically remains on the ground only one or two days at a time, but remained for
16 days in 2010 (January 30 to February 14). Ice storms (freezing rain or glaze) are not uncommon, but they are seldom
severe enough to do any considerable damage. The James River reaches tidewater at Richmond where flooding may occur
in every month of the year, most frequently in March and least in July. Hurricanes and tropical storms have been
responsible for most of the flooding during the summer and early fall months. Hurricanes passing near Richmond have
produced record rainfalls. In 1955, three hurricanes brought record rainfall to Richmond within a six-week period.
The most noteworthy of these were Hurricane Connie and Hurricane Diane, which brought heavy rains five days apart.
In 2004, the downtown area suffered extensive flood damage after the remnants of Hurricane Gaston dumped up to
12 inches (300 mm) of rainfall. As of the census of 2000, there were 197,790 people, 84,549 households, and 43,627
families residing in the city. The population density was 3,292.6 people per square mile (1,271.3/km²). There were
92,282 housing units at an average density of 1,536.2 per square mile (593.1/km²). The racial makeup of the city
was 38.3% White, 57.2% African American, 0.2% Native American, 1.3% Asian, 0.1% Pacific Islander, 1.5% from other
races, and 1.5% from two or more races. Hispanic or Latino of any race were 2.6% of the population. During the late
1980s and early 1990s, Richmond experienced a spike in overall crime, in particular, the city's murder rate. The
city had 93 murders in 1985, with a murder rate of 41.9 killings committed per 100,000 residents. Over
the next decade, the city saw a major increase in total homicides. In 1990 there were 114 murders, for a murder rate
of 56.1 killings per 100,000 residents. There were 120 murders in 1995, resulting in a murder rate of 59.1 killings
per 100,000 residents, one of the highest in the United States. Richmond has several historic churches. Because of
its early English colonial history from the early 17th century to 1776, Richmond has a number of prominent Anglican/Episcopal
churches including Monumental Church, St. Paul's Episcopal Church and St. John's Episcopal Church. Methodists and
Baptists made up another section of early churches, and First Baptist Church of Richmond was the first of these,
established in 1780. In the Reformed church tradition, the first Presbyterian Church in the City of Richmond was
First Presbyterian Church, organized on June 18, 1812. On February 5, 1845, Second Presbyterian Church of Richmond
was founded; a historic church that Stonewall Jackson attended, it was the first Gothic building and the
first gas-lit church built in Richmond. St. Peter's Church was dedicated and became the first Catholic church
in Richmond on May 25, 1834. The city is also home to the historic Cathedral of the Sacred Heart, which is the mother church
for the Roman Catholic Diocese of Richmond. The first Jewish congregation in Richmond was Kahal Kadosh Beth Shalom.
Kahal Kadosh Beth Shalom was the sixth-oldest Jewish congregation in the United States. By 1822 K.K. Beth Shalom members worshipped
in the first synagogue building in Virginia. They eventually merged with Congregation Beth Ahabah, an offshoot of
Beth Shalom. There are two Orthodox synagogues, Keneseth Beth Israel and Chabad of Virginia. There is an Orthodox
yeshiva K–12 school system known as Rudlin Torah Academy, which also includes a post-high-school program. There
are two Conservative synagogues, Beth El and Or Atid. There are three Reform synagogues, Bonay Kodesh, Beth Ahabah
and Or Ami. Along with such religious congregations, there are a variety of other Jewish charitable, educational
and social service institutions, each serving the Jewish and general communities. These include the Weinstein Jewish
Community Center, Jewish Family Services, Jewish Community Federation of Richmond and Richmond Jewish Foundation.
There are seven masjids in the Greater Richmond area, with three more under construction, accommodating
the growing Muslim population; the first was Masjid Bilal. In the 1950s, Muslims from the East End organized
under the Nation of Islam (NOI) and met in Temple #24 on North Avenue. After the NOI split in 1975, the Muslims
who joined mainstream Islam began meeting at the Shabaaz Restaurant on Nine Mile Road. By 1976, they were meeting
in a rented church. They tried to buy this church, but due to financial difficulties they instead
bought an old grocery store on Chimborazo Boulevard, the present location of Masjid Bilal. Initially, the place
was called "Masjid Muhammad #24"; it was not renamed "Masjid Bilal" until 1990. Masjid Bilal was followed
by the Islamic Center of Virginia (ICVA) masjid. The ICVA was established in 1973 as a non-profit, tax-exempt organization.
With aggressive fundraising, ICVA was able to buy land on Buford Road. Construction of the new masjid began in the
early 1980s. The remaining five masjids in the Richmond area are the Islamic Center of Richmond (ICR) in the
west end, Masjid Umm Barakah on 2nd Street downtown, the Islamic Society of Greater Richmond (ISGR) in the west end,
Masjidullah in the north side, and Masjid Ar-Rahman in the east end. Hinduism is actively practiced, particularly
in suburban areas of Henrico and Chesterfield. Some 6,000 families of Indian descent resided in the Richmond Region
as of 2011. Hindus are served by several temples and cultural centers. The two most familiar are the Cultural Center
of India (CCI) located off of Iron Bridge Road in Chesterfield County and the Hindu Center of Virginia in Henrico
County which has garnered national fame and awards for being the first LEED certified religious facility in the commonwealth.
Law and finance have long been driving forces in the economy. The city is home to both the United States Court of
Appeals for the Fourth Circuit, one of 13 United States courts of appeals, and the Federal Reserve Bank of Richmond,
one of 12 Federal Reserve Banks, as well as offices for international companies such as Genworth Financial, CapitalOne,
Philip Morris USA, and numerous other banks and brokerages. Richmond is also home to four of the largest law firms
in the United States: Hunton & Williams, McGuireWoods, Williams Mullen, and LeClairRyan. Another law firm with a
major Richmond presence is Troutman Sanders, which merged with Richmond-based Mays & Valentine LLP in 2001. Richmond
is home to the rapidly developing Virginia BioTechnology Research Park, which opened in 1995 as an incubator facility
for biotechnology and pharmaceutical companies. Located adjacent to the Medical College of Virginia (MCV) Campus
of Virginia Commonwealth University, the park currently has more than 575,000 square feet (53,400 m2) of research,
laboratory and office space for a diverse tenant mix of companies, research institutes, government laboratories and
non-profit organizations. The United Network for Organ Sharing, which maintains the nation's organ transplant waiting
list, occupies one building in the park. Philip Morris USA opened a $350 million research and development facility
in the park in 2007. Park officials expect the site, once fully developed, to employ roughly 3,000 scientists, technicians
and engineers. Richmond is also fast becoming known for its food scene, with several restaurants in the Fan, Church
Hill, Jackson Ward and elsewhere around the city generating regional and national attention for their fare. Departures
magazine named Richmond "The Next Great American Food City" in August 2014. Also in 2014, Southern Living magazine
named three Richmond restaurants – Comfort, Heritage and The Roosevelt – among its "100 Best Restaurants in the South",
while Metzger Bar & Butchery made its "Best New Restaurants: 12 To Watch" list. Craft beer and liquor production
is also growing in the River City, with twelve micro-breweries in the city proper; the oldest is Legend Brewery, founded
in 1994. Three distilleries, Reservoir Distillery, Belle Isle Craft Spirits and James River Distillery, were established
in 2010, 2013 and 2014, respectively. Additionally, Richmond is gaining attention from the film and television industry,
with several high-profile films shot in the metro region in the past few years, including the major motion picture
Lincoln, which earned Daniel Day-Lewis his third Oscar; Killing Kennedy, starring Rob Lowe, which aired on the National
Geographic Channel; and Turn, starring Jamie Bell and airing on AMC. In 2015, Richmond will be the main filming location for the
upcoming PBS drama series Mercy Street, which will premiere in Winter 2016. Several organizations, including the
Virginia Film Office and the Virginia Production Alliance, along with events like the Richmond International Film
Festival and French Film Festival, continue to draw supporters of film and media to the region. The Greater Richmond
area was named the third-best city for business by MarketWatch in September 2007, ranking behind only the Minneapolis
and Denver areas and just above Boston. The area is home to six Fortune 500 companies: electric utility Dominion
Resources; CarMax; Owens & Minor; Genworth Financial; MeadWestvaco; and Altria Group. However,
only Dominion Resources and MeadWestvaco are headquartered within the city of Richmond; the others are located in
the neighboring counties of Henrico and Hanover. In 2008, Altria moved its corporate HQ from New York City to Henrico
County, adding another Fortune 500 corporation to Richmond's list. In February 2006, MeadWestvaco announced that
they would move from Stamford, Connecticut, to Richmond in 2008 with the help of the Greater Richmond Partnership,
a regional economic development organization that also helped locate Aditya Birla Minacs, Amazon.com, and Honeywell
International, to the region. Other Fortune 500 companies, while not headquartered in the area, do have a major presence.
These include SunTrust Bank (based in Atlanta), Capital One Financial Corporation (officially based in McLean, Virginia,
but founded in Richmond with its operations center and most employees in the Richmond area), and the medical and
pharmaceutical giant McKesson (based in San Francisco). Capital One and Altria's Philip Morris USA are two
of the largest private Richmond-area employers. DuPont maintains a production facility in South Richmond known as
the Spruance Plant. UPS Freight, the less-than-truckload division of UPS and formerly known as Overnite Transportation,
has its corporate headquarters in Richmond. Several of the city's large general museums are located near the Boulevard.
On the Boulevard proper are the Virginia Historical Society and the Virginia Museum of Fine Arts, lending their name
to what is sometimes called the Museum District. Nearby on Broad Street is the Science Museum of Virginia, housed
in the neoclassical former 1919 Broad Street Union Station. Immediately adjacent is the Children's Museum of Richmond,
and two blocks away, the Virginia Center for Architecture. Within the downtown are the Library of Virginia and the
Valentine Richmond History Center. Elsewhere are the Virginia Holocaust Museum and the Old Dominion Railway Museum.
As the primary former capital of the Confederate States of America, Richmond is home to many museums and battlefields
of the American Civil War. Near the riverfront is the Richmond National Battlefield Park Visitors Center and the
American Civil War Center at Historic Tredegar, both housed in the former buildings of the Tredegar Iron Works, where
much of the ordnance for the war was produced. In Court End, near the Virginia State Capitol, is the Museum of the
Confederacy, along with the Davis Mansion, also known as the White House of the Confederacy; both feature a wide
variety of objects and material from the era. The temporary home of former Confederate General Robert E. Lee still
stands on Franklin Street in downtown Richmond. The history of slavery and emancipation is also increasingly represented:
there is a former slave trail along the river that leads to Ancarrow's Boat Ramp and Historic Site which has been
developed with interpretive signage, and in 2007, the Reconciliation Statue was placed in Shockoe Bottom, with parallel
statues placed in Liverpool and Benin representing points of the Triangle Trade. Other historical points of interest
include St. John's Church, the site of Patrick Henry's famous "Give me liberty or give me death" speech, and the
Edgar Allan Poe Museum, which features many of his writings and other artifacts of his life, particularly from when he lived
in the city as a child, a student, and a successful writer. The John Marshall House, the home of the former Chief
Justice of the United States, is also located downtown and features many of his writings and objects from his life.
Hollywood Cemetery is the burial ground of two U.S. Presidents as well as many Civil War officers and soldiers.
The city is home to many monuments and memorials, most notably those along Monument Avenue. Other monuments include
the A.P. Hill monument, the Bill "Bojangles" Robinson monument in Jackson Ward, the Christopher Columbus monument
near Byrd Park, and the Confederate Soldiers and Sailors Monument on Libby Hill. Located near Byrd Park is the famous
World War I Memorial Carillon, a 56-bell carillon tower. Dedicated in 1956, the Virginia War Memorial is located
on Belvidere Street overlooking the river, and is a monument to Virginians who died in battle in World War II, the Korean
War, the Vietnam War, the Gulf War, the War in Afghanistan, and the Iraq War. Richmond has a significant arts community,
some of which is contained in formal, publicly supported venues, and some of which is more DIY, such as privately
owned galleries, private music venues, nonprofit arts organizations, and organic, venueless arts movements
(e.g., house shows, busking, itinerant folk shows). This has led to tensions, as the City of Richmond levied an
"admissions tax" to fund large arts projects like CentreStage, leading to criticism that it is funding civic initiatives
on the backs of the organic local culture. Traditional Virginian folk music, including blues, country, and bluegrass,
is also notably present and plays a large part in the annual Richmond Folk Festival. The city's more formal arts
establishments include companies, theaters, galleries, and other large venues. As of 2015,
a variety of murals from internationally recognized street artists have appeared throughout the city as a result
of the efforts of Art Whino and RVA Magazine with The Richmond Mural Project and the RVA Street Art Festival. Artists
who have produced work in the city as a result of these festivals include ROA, Pixel Pancho, Gaia, Aryz, Alexis Diaz,
Ever Siempre, Jaz, 2501, Natalia Rak, Pose MSK, Vizie, Jeff Soto, Mark Jenkins, Etam Cru, and local artists Hamilton
Glass, Nils Westergard, and El Kamino. Both festivals are expected to continue this year with artists such as Ron
English slated to produce work. From earliest days, Virginia, and Richmond in particular, have welcomed live theatrical
performances. From Lewis Hallam's early productions of Shakespeare in Williamsburg, the focus shifted to Richmond's
antebellum prominence as a main colonial and early 19th century performance venue for such celebrated American and
English actors as William Macready, Edwin Forrest, and the Booth family. In the 20th century, Richmonders' love of
theater continued with many amateur troupes and regular touring professional productions. In the 1960s a small renaissance
or golden age accompanied the growth of professional dinner theaters and the fostering of theater by the Virginia
Museum, reaching a peak in the 1970s with the establishment of a resident Equity company at the Virginia Museum Theater
(now the Leslie Cheek) and the birth of Theatre IV, a company that continues to this day. Much of Richmond's early
architecture was destroyed by the Evacuation Fire in 1865. It is estimated that 25% of all buildings in Richmond
were destroyed during this fire. Even fewer now remain due to construction and demolition that has taken place since
Reconstruction. In spite of this, Richmond contains many historically significant buildings and districts. Buildings
remain from Richmond's colonial period, such as the Patteson-Schutte House and the Edgar Allan Poe Museum,
both built before 1750. Architectural classicism is heavily represented in all districts of the city,
particularly in Downtown, the Fan, and the Museum District. Several notable classical architects have designed buildings
in Richmond. The Virginia State Capitol was designed by Thomas Jefferson and Charles-Louis Clérisseau in 1785. It
is the second-oldest US statehouse in continuous use (after Maryland's) and was the first US government building
built in the neo-classical style of architecture, setting the trend for other state houses and the federal government
buildings (including the White House and The Capitol) in Washington, D.C. Robert Mills designed Monumental Church
on Broad Street. Adjoining it is the 1845 Egyptian Building, one of the few Egyptian Revival buildings in the United
States. The firm of John Russell Pope designed Broad Street Station as well as Branch House on Monument Avenue, designed
as a private residence in the Tudor style, now serving as the Branch Museum of Architecture and Design. Broad Street
Station (or Union Station), designed in the Beaux-Arts style, is no longer a functioning station but is now home
to the Science Museum of Virginia. Main Street Station, designed by Wilson, Harris, and Richards, has been returned
to use in its original purpose. The Jefferson Hotel and the Commonwealth Club were both designed by the classically
trained Beaux-Arts architects Carrère and Hastings. Many buildings on the University of Richmond campus, including
Jeter Hall and Ryland Hall, were designed by Ralph Adams Cram, most famous for his Princeton University Chapel and
the Cathedral of Saint John the Divine. Among Richmond's most interesting architectural features is its cast-iron
architecture. Second only to New Orleans in its concentration of cast-iron work, the city is home to a unique collection
of cast iron porches, balconies, fences, and finials. Richmond's position as a center of iron production helped to
fuel its popularity within the city. At the height of production in the 1890s, 25 foundries operated in the city, employing
nearly 3,500 metal workers, seven times the number of general construction workers employed
in Richmond at the time, illustrating the importance of its iron exports. Porches and fences in urban neighborhoods
such as Jackson Ward, Church Hill, and Monroe Ward are particularly elaborate, often featuring ornate iron casts
never replicated outside of Richmond. In some cases, casts were made for a single residential or commercial application.
Richmond is home to several notable instances of various styles of modernism. Minoru Yamasaki designed the Federal
Reserve Building which dominates the downtown skyline. The architectural firm of Skidmore, Owings & Merrill has designed
two buildings: the Library of Virginia and the General Assembly Offices at the Eighth and Main Building. Philip Johnson
designed the WRVA Building. The Richard Neutra-designed Rice House, a residence on a private island on the James
River, remains Richmond's only true International Style home. The W.G. Harris residence in Richmond was designed
by famed early modern architect and member of the Harvard Five, Landis Gores. Other notable architects to have worked
in the city include Rick Mather, I.M. Pei, and Gordon Bunshaft. There are also parks on two major islands in the
river: Belle Isle and Brown's Island. Belle Isle, at various former times a Powhatan fishing village, colonial-era
horse race track, and Civil War prison camp, is the larger of the two, and contains many bike trails as well as a
small cliff that is used for rock climbing instruction. One can walk the island and still see many of the remains
of the Civil War prison camp, such as an arms storage room and a gun emplacement that was used to quell prisoner
riots. Brown's Island is a smaller island and a popular venue of a large number of free outdoor concerts and festivals
in the spring and summer, such as the weekly Friday Cheers concert series or the James River Beer and Seafood Festival.
Two other major parks in the city along the river are Byrd Park and Maymont, located near the Fan District. Byrd
Park features a one-mile (1.6 km) running track, with exercise stops, a public dog park, and a number of small lakes
for small boats, as well as two monuments, Buddha house, and an amphitheatre. Prominently featured in the park is
the World War I Memorial Carillon, built in 1926 as a memorial to those who died in the war. Maymont, located adjacent
to Byrd Park, is a 100-acre (40 ha) Victorian estate with a museum, formal gardens, native wildlife exhibits, nature
center, carriage collection, and children's farm. Other parks in the city include Joseph Bryan Park Azalea Garden,
Forest Hill Park (former site of the Forest Hill Amusement Park), Chimborazo Park (site of the National Battlefield
Headquarters), among others. Richmond is not home to any major league professional sports teams, but since 2013,
the Washington Redskins of the National Football League have held their summer training camp in the city. There are
also several minor league sports in the city, including the Richmond Kickers of the USL Professional Division (third
tier of American soccer) and the Richmond Flying Squirrels of the Class AA Eastern League of Minor League Baseball
(an affiliate of the San Francisco Giants). The Kickers began playing in Richmond in 1993, and currently play at
City Stadium. The Squirrels opened their first season at The Diamond on April 15, 2010. From 1966 through 2008, the
city was home to the Richmond Braves, a AAA affiliate of the Atlanta Braves of Major League Baseball, until the franchise
relocated to Georgia. Auto racing is also popular in the area. The Richmond International Raceway (RIR) has hosted
NASCAR Sprint Cup races since 1953, as well as the Capital City 400 from 1962 to 1980. RIR also hosted IndyCar's Suntrust Indy Challenge from 2001 to 2009. Another track, Southside Speedway, has operated since 1959 and sits just southwest
of Richmond in Chesterfield County. This .333-mile (0.536 km) oval short-track has become known as the "Toughest
Track in the South" and "The Action Track", and features weekly stock car racing on Friday nights. Southside Speedway
has acted as the breeding grounds for many past NASCAR legends including Richard Petty, Bobby Allison and Darrell
Waltrip, and claims to be the home track of NASCAR superstar Denny Hamlin. The Richmond Times-Dispatch, the local
daily newspaper in Richmond with a Sunday circulation of 120,000, is owned by BH Media, a subsidiary of Warren Buffett's
Berkshire Hathaway company. Style Weekly is a standard weekly publication covering popular culture, arts, and entertainment,
owned by Landmark Communications. RVA Magazine, the city's only independent art, music, and culture publication, was once monthly but is now issued quarterly. The Richmond Free Press and the Voice cover the news from an African-American
perspective. The Richmond metro area is served by many local television and radio stations. As of 2010, the
Richmond-Petersburg designated market area (DMA) is the 58th largest in the U.S. with 553,950 homes according to
Nielsen Market Research. The major network television affiliates are WTVR-TV 6 (CBS), WRIC-TV 8 (ABC), WWBT 12 (NBC),
WRLH-TV 35 (Fox), and WUPV 65 (CW). Public Broadcasting Service stations include WCVE-TV 23 and WCVW 57. There are
also a wide variety of radio stations in the Richmond area, catering to many different interests, including news,
talk radio, and sports, as well as an eclectic mix of musical interests. Richmond city government consists of a city
council with representatives from nine districts serving in a legislative and oversight capacity, as well as a popularly
elected, at-large mayor serving as head of the executive branch. Citizens in each of the nine districts elect one
council representative each to serve a four-year term; council terms were lengthened to four years beginning with the November 2008 election. The city council elects from among its members one to serve as Council President and
one to serve as Council Vice President. The city council meets at City Hall, located at 900 E. Broad St., 2nd Floor,
on the second and fourth Mondays of every month, except August. In 1990 religion and politics intersected to impact
the outcome of the Eighth District election in South Richmond. With the endorsements of black power brokers, black
clergy and the Richmond Crusade for Voters, South Richmond residents made history, electing Reverend A. Carl Prince
to the Richmond City Council. Prince was the first African-American Baptist minister elected to the Richmond City Council, and his election paved the way for a political paradigm shift that persists today. Following Prince's
election, Reverend Gwendolyn Hedgepeth and the Reverend Leonidas Young, a former Richmond mayor, were elected to public
office. Prior to Prince's election, black clergy made political endorsements and served as appointees to the Richmond School Board and other boards throughout the city. Today, religion and politics continue to intersect in the Commonwealth of Virginia: the Honorable Dwight C. Jones, a prominent Baptist pastor and former chairman of the Richmond School Board and member of the Virginia House of Delegates, serves as Mayor of the City of Richmond. The city of Richmond
operates 28 elementary schools, nine middle schools, and eight high schools, serving a total student population of
24,000 students. There is one Governor's School in the city – the Maggie L. Walker Governor's School for Government
and International Studies. In 2008, it was named as one of Newsweek magazine's 18 "public elite" high schools, and
in 2012, it was rated #16 among America's best high schools overall. Richmond's public school district also runs one
of Virginia's four public charter schools, the Patrick Henry School of Science and Arts, which was founded in 2010.
The Richmond area has many major institutions of higher education, including Virginia Commonwealth University (public),
University of Richmond (private), Virginia Union University (private), Virginia College (private), South University
- Richmond (private, for-profit), Union Theological Seminary & Presbyterian School of Christian Education (private),
and the Baptist Theological Seminary in Richmond (BTSR—private). Several community colleges are found in the metro
area, including J. Sargeant Reynolds Community College and John Tyler Community College (Chesterfield County). In
addition, there are several Technical Colleges in Richmond including ITT Technical Institute, ECPI College of Technology
and Centura College. There are several vocational colleges also, such as Fortis College and Bryant Stratton College.
The Greater Richmond area is served by the Richmond International Airport (IATA: RIC, ICAO: KRIC), located in nearby
Sandston, seven miles (11 km) southeast of Richmond and within an hour's drive of historic Williamsburg, Virginia.
Richmond International is now served by nine airlines with over 200 daily flights providing non-stop service to major
destination markets and connecting flights to destinations worldwide. A record 3.3 million passengers used Richmond
International Airport in 2006, a 13% increase over 2005. Richmond is a major hub for intercity bus company Greyhound
Lines, with its terminal at 2910 N Boulevard. Multiple runs per day connect directly with Washington, D.C., New York,
Raleigh, and elsewhere. Direct trips to New York take approximately 7.5 hours. Discount carrier Megabus also provides
curbside service from outside of Main Street Station, with fares starting at $1. Direct service is available to Washington,
D.C., Hampton Roads, Charlotte, Raleigh, Baltimore, and Philadelphia. Most other connections to Megabus-served cities, such as New York, can be made from Washington, D.C. Richmond and the surrounding metropolitan area received a roughly $25 million grant from the U.S. Department of Transportation to support a newly proposed Rapid Transit
System, which would run along Broad Street from Willow Lawn to Rocketts Landing, in the first phase of an improved
public transportation hub for the region. Local transit and paratransit bus service in Richmond, Henrico, and Chesterfield
counties is provided by the Greater Richmond Transit Company (GRTC). The GRTC, however, serves only small parts of
the suburban counties. The far West End (Innsbrook and Short Pump) and almost all of Chesterfield County have no
public transportation despite dense housing, retail, and office development. According to a 2008 GRTC operations
analysis report, a majority of GRTC riders utilize their services because they do not have an available alternative
such as a private vehicle. The Richmond area also has two railroad stations served by Amtrak. Each station receives
regular service from points north of Richmond, including Washington, D.C., Philadelphia, and New York. The suburban Staples
Mill Road Station is located on a major north-south freight line and receives all service to and from all points
south, including Raleigh, Durham, Savannah, Newport News, Williamsburg, and Florida. Richmond's only railway station
located within the city limits, the historic Main Street Station, was renovated in 2004. As of 2010, the station
only receives trains headed to and from Newport News and Williamsburg due to track layout. As a result, the Staples
Mill Road station receives more trains and serves more passengers overall. Electricity in the Richmond Metro area
is provided by Dominion Virginia Power. The company, based in Richmond, is one of the nation's largest producers
of energy, serving retail energy customers in nine states. Electricity is provided in the Richmond area primarily
by the North Anna Nuclear Generating Station and Surry Nuclear Generating Station, as well as a coal-fired station
in Chester, Virginia. These three plants provide a total of 4,453 megawatts of power. Several other natural gas plants
provide extra power during times of peak demand. These include facilities in Chester and Surry, and two plants in
Richmond (Gravel Neck and Darbytown). The city's water treatment plant and its distribution system of water mains, pumping stations, and storage facilities provide drinking water to approximately 62,000 customers in the city. There is also a wastewater
treatment plant located on the south bank of the James River. This plant can treat up to 70 million gallons per day of sanitary sewage and stormwater before returning the water to the river. The wastewater utility also operates
and maintains 1,500 miles (2,400 km) of sanitary sewer and pumping stations, 38 miles (61 km) of intercepting sewer
lines, and the Shockoe Retention Basin, a 44-million-gallon stormwater reservoir used during heavy rains.
Among the vast varieties of microorganisms, relatively few cause disease in otherwise healthy individuals. Infectious disease
results from the interplay between those few pathogens and the defenses of the hosts they infect. The appearance
and severity of disease resulting from any pathogen depend upon the ability of that pathogen to damage the host as well as the ability of the host to resist the pathogen. However, a host's immune system can also cause damage to
the host itself in an attempt to control the infection. Clinicians therefore classify infectious microorganisms or
microbes according to the status of host defenses, either as primary pathogens or as opportunistic pathogens. One way of proving that a given disease is "infectious" is to satisfy Koch's postulates (first proposed by Robert Koch), which demand that the infectious agent be identified only in patients and not in healthy controls, and that patients
who contract the agent also develop the disease. These postulates were first used in the discovery that Mycobacteria
species cause tuberculosis. Koch's postulates cannot be applied ethically for many human diseases because they require
experimental infection of a healthy individual with a pathogen produced as a pure culture. Often, even clearly infectious diseases do not meet all of the postulates. For example, Treponema pallidum, the causative spirochete of syphilis,
cannot be cultured in vitro; the organism can, however, be cultured in rabbit testes. It is less certain that a culture is pure when it comes from an animal host than when it is derived from a plate culture.
Epidemiology is another important tool used to study disease in a population. For infectious diseases it helps to
determine if a disease outbreak is sporadic (occasional occurrence), endemic (regular cases often occurring in a
region), epidemic (an unusually high number of cases in a region), or pandemic (a global epidemic). Infectious diseases
are sometimes called contagious diseases when they are easily transmitted by contact with an ill person or their secretions
(e.g., influenza). Thus, a contagious disease is a subset of infectious disease that is especially infective or easily
transmitted. Other types of infectious/transmissible/communicable diseases with more specialized routes of infection,
such as vector transmission or sexual transmission, are usually not regarded as "contagious", and often do not require
medical isolation (sometimes loosely called quarantine) of victims. However, this specialized connotation of the
words "contagious" and "contagious disease" (easy transmissibility) is not always respected in popular use. Infection
begins when an organism successfully enters the body, grows and multiplies. This is referred to as colonization.
Most humans are not easily infected. Those who are weak, sick, or malnourished, or who have cancer or diabetes, have increased
susceptibility to chronic or persistent infections. Individuals who have a suppressed immune system are particularly
susceptible to opportunistic infections. Entrance to the host, at the host-pathogen interface, generally occurs through the mucosa of orifices such as the oral cavity, nose, eyes, genitalia, and anus, or through open wounds.
While a few organisms can grow at the initial site of entry, many migrate and cause systemic infection in different
organs. Some pathogens grow within the host cells (intracellular) whereas others grow freely in bodily fluids. Wound
colonization refers to nonreplicating microorganisms within the wound, while in infected wounds, replicating organisms
exist and tissue is injured. All multicellular organisms are colonized to some degree by extrinsic organisms, and
the vast majority of these exist in either a mutualistic or commensal relationship with the host. An example of the former are the anaerobic bacterial species that colonize the mammalian colon, and an example of the latter are the various species of staphylococcus that exist on human skin. Neither of these colonizations is considered an infection. The
difference between an infection and a colonization is often only a matter of circumstance. Non-pathogenic organisms
can become pathogenic given specific conditions, and even the most virulent organism requires certain circumstances
to cause a compromising infection. Some colonizing bacteria, such as Corynebacteria sp. and viridans streptococci,
prevent the adhesion and colonization of pathogenic bacteria and thus have a symbiotic relationship with the host,
preventing infection and speeding wound healing. Because it is normal to have bacterial colonization, it is difficult
to know which chronic wounds are infected. Despite the huge number of wounds seen in clinical practice, there are
limited quality data evaluating symptoms and signs. A review of chronic wounds in the Journal of the American
Medical Association's "Rational Clinical Examination Series" quantified the importance of increased pain as an indicator of infection. The review found that the most useful sign is an increase in the level of pain, which makes infection much more likely (likelihood ratio (LR) range, 11-20), whereas the absence of increasing pain does not rule out infection (summary negative LR range, 0.64-0.88). Disease can arise if the host's protective immune mechanisms
are compromised and the organism inflicts damage on the host. Microorganisms can cause tissue damage by releasing
a variety of toxins or destructive enzymes. For example, Clostridium tetani releases a toxin that paralyzes muscles,
and staphylococcus releases toxins that produce shock and sepsis. Not all infectious agents cause disease in all
hosts. For example, less than 5% of individuals infected with polio develop disease. On the other hand, some infectious
agents are highly virulent. The prion causing mad cow disease and Creutzfeldt–Jakob disease invariably kills all
animals and people that are infected. Persistent infections occur because the body is unable to clear the organism
after the initial infection. Persistent infections are characterized by the continual presence of the infectious
organism, often as latent infection with occasional recurrent relapses of active infection. There are some viruses
that can maintain a persistent infection by infecting different cells of the body. Some viruses once acquired never
leave the body. A typical example is the herpes virus, which tends to hide in nerves and become reactivated when
specific circumstances arise. Diagnosis of infectious disease sometimes involves identifying an infectious agent
either directly or indirectly. In practice most minor infectious diseases such as warts, cutaneous abscesses, respiratory
system infections and diarrheal diseases are diagnosed by their clinical presentation and treated without knowledge
of the specific causative agent. Conclusions about the cause of the disease are based upon the likelihood that a
patient came in contact with a particular agent, the presence of a microbe in a community, and other epidemiological
considerations. Given sufficient effort, all known infectious agents can be specifically identified. The benefits
of identification, however, are often greatly outweighed by the cost, as often there is no specific treatment, the
cause is obvious, or the outcome of an infection is benign. Diagnosis of infectious disease is nearly always initiated
by medical history and physical examination. More detailed identification techniques involve the culture of infectious
agents isolated from a patient. Culture allows identification of infectious organisms by examining their microscopic
features, by detecting the presence of substances produced by pathogens, and by directly identifying an organism
by its genotype. Other techniques (such as X-rays, CAT scans, PET scans or NMR) are used to produce images of internal
abnormalities resulting from the growth of an infectious agent. The images are useful in detection of, for example,
a bone abscess or a spongiform encephalopathy produced by a prion. Microbiological culture is a principal tool used
to diagnose infectious disease. In a microbial culture, a growth medium is provided for a specific agent. A sample
taken from potentially diseased tissue or fluid is then tested for the presence of an infectious agent able to grow
within that medium. Most pathogenic bacteria are easily grown on nutrient agar, a form of solid medium that supplies
carbohydrates and proteins necessary for growth of a bacterium, along with copious amounts of water. A single bacterium
will grow into a visible mound on the surface of the plate called a colony, which may be separated from other colonies
or melded together into a "lawn". The size, color, shape, and form of a colony are characteristic of the bacterial
species, its specific genetic makeup (its strain), and the environment that supports its growth. Other ingredients
are often added to the plate to aid in identification. Plates may contain substances that permit the growth of some
bacteria and not others, or that change color in response to certain bacteria and not others. Bacteriological plates
such as these are commonly used in the clinical identification of infectious bacteria. Microbial culture may also
be used in the identification of viruses: the medium in this case being cells grown in culture that the virus can
infect, and then alter or kill. In the case of viral identification, a region of dead cells results from viral growth,
and is called a "plaque". Eukaryotic parasites may also be grown in culture as a means of identifying a particular
agent. In the absence of suitable plate culture techniques, some microbes require culture within live animals. Bacteria
such as Mycobacterium leprae and Treponema pallidum can be grown in animals, although serological and microscopic
techniques make the use of live animals unnecessary. Viruses are also usually identified using alternatives to growth
in culture or animals. Some viruses may be grown in embryonated eggs. Another useful identification method is xenodiagnosis,
or the use of a vector to support the growth of an infectious agent. Chagas disease is the most significant example,
because it is difficult to directly demonstrate the presence of the causative agent, Trypanosoma cruzi in a patient,
which therefore makes it difficult to definitively make a diagnosis. In this case, xenodiagnosis involves the use
of the vector of the Chagas agent T. cruzi, an uninfected triatomine bug, which takes a blood meal from a person
suspected of having been infected. The bug is later inspected for growth of T. cruzi within its gut. Another principal
tool in the diagnosis of infectious disease is microscopy. Virtually all of the culture techniques discussed above
rely, at some point, on microscopic examination for definitive identification of the infectious agent. Microscopy
may be carried out with simple instruments, such as the compound light microscope, or with instruments as complex
as an electron microscope. Samples obtained from patients may be viewed directly under the light microscope, and
can often rapidly lead to identification. Microscopy is often also used in conjunction with biochemical staining
techniques, and can be made exquisitely specific when used in combination with antibody based techniques. For example,
the use of antibodies made artificially fluorescent (fluorescently labeled antibodies) can be directed to bind to
and identify specific antigens present on a pathogen. A fluorescence microscope is then used to detect fluorescently
labeled antibodies bound to internalized antigens within clinical samples or cultured cells. This technique is especially
useful in the diagnosis of viral diseases, where the light microscope is incapable of identifying a virus directly.
Other microscopic procedures may also aid in identifying infectious agents. Almost all cells readily stain with a
number of basic dyes due to the electrostatic attraction between negatively charged cellular molecules and the positive
charge on the dye. A cell is normally transparent under a microscope, and using a stain increases the contrast of
a cell with its background. Staining a cell with a dye such as Giemsa stain or crystal violet allows a microscopist
to describe its size, shape, internal and external components and its associations with other cells. The response
of bacteria to different staining procedures is used in the taxonomic classification of microbes as well. Two methods,
the Gram stain and the acid-fast stain, are the standard approaches used to classify bacteria and to diagnose disease. The Gram stain identifies the bacterial groups Firmicutes and Actinobacteria, both of which contain many
significant human pathogens. The acid-fast staining procedure identifies the Actinobacterial genera Mycobacterium
and Nocardia. The isolation of enzymes from infected tissue can also provide the basis of a biochemical diagnosis
of an infectious disease. For example, humans can make neither RNA replicases nor reverse transcriptase, and the
presence of these enzymes is characteristic of specific types of viral infections. The ability of the viral protein
hemagglutinin to bind red blood cells together into a detectable matrix may also be characterized as a biochemical
test for viral infection, although strictly speaking hemagglutinin is not an enzyme and has no metabolic function.
Serological methods are highly sensitive, specific and often extremely rapid tests used to identify microorganisms.
These tests are based upon the ability of an antibody to bind specifically to an antigen. The antigen, usually a
protein or carbohydrate made by an infectious agent, is bound by the antibody. This binding then sets off a chain
of events that can be visibly obvious in various ways, dependent upon the test. For example, "Strep throat" is often
diagnosed within minutes based on the detection of antigens made by the causative agent, S. pyogenes, retrieved from a patient's throat with a cotton swab. Serological tests, if available, are usually the preferred
route of identification; however, the tests are costly to develop and the reagents used in the test often require
refrigeration. Some serological methods are extremely costly, although when commonly used, such as with the "strep
test", they can be inexpensive. Complex serological techniques have been developed into what are known as immunoassays. Immunoassays use basic antibody–antigen binding to produce an electromagnetic or particle-radiation signal that can be detected by some form of instrumentation. Signals from unknown samples can be compared to those of standards, allowing quantitation of the target antigen. To aid in the diagnosis of infectious diseases, immunoassays
can detect or measure antigens from either infectious agents or proteins generated by an infected organism in response
to a foreign agent. For example, immunoassay A may detect the presence of a surface protein from a virus particle.
Immunoassay B on the other hand may detect or measure antibodies produced by an organism's immune system that are
made to neutralize and allow the destruction of the virus. Technologies based upon the polymerase chain reaction
(PCR) method are likely to become nearly ubiquitous gold standards of diagnostics in the near future, for several reasons.
First, the catalog of infectious agents has grown to the point that virtually all of the significant infectious agents
of the human population have been identified. Second, an infectious agent must grow within the human body to cause
disease; essentially it must amplify its own nucleic acids in order to cause a disease. This amplification of nucleic
acid in infected tissue offers an opportunity to detect the infectious agent by using PCR. Third, the essential tools
for directing PCR, primers, are derived from the genomes of infectious agents, and with time those genomes will be
known, if they are not already. Thus, the technological ability to detect any infectious agent rapidly and specifically is already available. The only remaining obstacles to the use of PCR as a standard tool of diagnosis are its
cost and application, neither of which is insurmountable. The diagnosis of a few diseases will not benefit from the
development of PCR methods, such as some of the clostridial diseases (tetanus and botulism). These diseases are fundamentally
biological poisonings by relatively small numbers of infectious bacteria that produce extremely potent neurotoxins.
A significant proliferation of the infectious agent does not occur, which limits the ability of PCR to detect the
presence of any bacteria. There is usually an indication for a specific identification of an infectious agent only
when such identification can aid in the treatment or prevention of the disease, or to advance knowledge of the course
of an illness prior to the development of effective therapeutic or preventative measures. For example, in the early
1980s, prior to the appearance of AZT for the treatment of AIDS, the course of the disease was closely followed by
monitoring the composition of patient blood samples, even though the outcome would not offer the patient any further
treatment options. In part, these studies on the appearance of HIV in specific communities permitted the advancement
of hypotheses as to the route of transmission of the virus. By understanding how the disease was transmitted, resources
could be targeted to the communities at greatest risk in campaigns aimed at reducing the number of new infections.
The specific serological diagnostic identification, and later genotypic or molecular identification, of HIV also
enabled the development of hypotheses as to the temporal and geographical origins of the virus, as well as a myriad
of other hypotheses. The development of molecular diagnostic tools has enabled physicians and researchers to monitor
the efficacy of treatment with anti-retroviral drugs. Molecular diagnostics are now commonly used to identify HIV
in healthy people long before the onset of illness and have been used to demonstrate the existence of people who
are genetically resistant to HIV infection. Thus, while there still is no cure for AIDS, there is great therapeutic
and predictive benefit to identifying the virus and monitoring the virus levels within the blood of infected individuals,
both for the patient and for the community at large. Techniques like hand washing, wearing gowns, and wearing face
masks can help prevent infections from being passed from one person to another. Frequent hand washing remains the
most important defense against the spread of unwanted organisms. There are other forms of prevention such as avoiding
the use of illicit drugs, using a condom, and having a healthy lifestyle with a balanced diet and regular exercise.
Cooking foods well and avoiding foods that have been left outside for a long time are also important. One of the ways
to prevent or slow down the transmission of infectious diseases is to recognize the different characteristics of
various diseases. Some critical disease characteristics that should be evaluated include virulence, distance traveled
by victims, and level of contagiousness. The human strains of Ebola virus, for example, incapacitate their victims
extremely quickly and kill them soon after. As a result, the victims of this disease do not have the opportunity
to travel very far from the initial infection zone. Also, this virus must spread through skin lesions or permeable
membranes such as the eye. Thus, the initial stage of Ebola is not very contagious since its victims experience only
internal hemorrhaging. As a result of the above features, the spread of Ebola is very rapid and usually stays within
a relatively confined geographical area. In contrast, the Human Immunodeficiency Virus (HIV) kills its victims very
slowly by attacking their immune system. As a result, many of its victims transmit the virus to other individuals
before even realizing that they are carrying the disease. Also, the relatively low virulence allows its victims to
travel long distances, increasing the likelihood of an epidemic. Another effective way to decrease the transmission
rate of infectious diseases is to recognize the effects of small-world networks. In epidemics, there are often extensive
interactions within hubs or groups of infected individuals and other interactions within discrete hubs of susceptible
individuals. Despite the low interaction between discrete hubs, the disease can jump to and spread in a susceptible
hub via a single or few interactions with an infected hub. Thus, infection rates in small-world networks can be reduced
somewhat if interactions between individuals within infected hubs are eliminated. However, infection rates
can be drastically reduced if the main focus is on the prevention of transmission jumps between hubs. The use of
needle exchange programs in areas with a high density of drug users with HIV is an example of the successful implementation
of this prevention method. Another example is the use of ring culling or vaccination of potentially susceptible livestock
in adjacent farms to prevent the spread of the foot-and-mouth virus in 2001. Resistance to infection (immunity) may
be acquired following a disease, by asymptomatic carriage of the pathogen, by harboring an organism with a similar
structure (crossreacting), or by vaccination. Knowledge of the protective antigens and specific acquired host immune
factors is more complete for primary pathogens than for opportunistic pathogens. There is also the phenomenon of
herd immunity, which offers a measure of protection to otherwise vulnerable people when a large enough proportion
of the population has acquired immunity from certain infections. The clearance of pathogens, whether treatment-induced
or spontaneous, can be influenced by the genetic variants carried by individual patients. For instance, for
genotype 1 hepatitis C treated with Pegylated interferon-alpha-2a or Pegylated interferon-alpha-2b (brand names Pegasys
or PEG-Intron) combined with ribavirin, it has been shown that genetic polymorphisms near the human IL28B gene, encoding
interferon lambda 3, are associated with significant differences in the treatment-induced clearance of the virus.
This finding, originally reported in Nature, showed that genotype 1 hepatitis C patients carrying certain genetic
variant alleles near the IL28B gene are more likely to achieve a sustained virological response after treatment
than others. A later report in Nature demonstrated that the same genetic variants are also associated with the natural
clearance of the genotype 1 hepatitis C virus. When infection attacks the body, anti-infective drugs can suppress
the infection. Several broad types of anti-infective drugs exist, depending on the type of organism targeted; they
include antibacterial (antibiotic; including antitubercular), antiviral, antifungal and antiparasitic (including
antiprotozoal and antihelminthic) agents. Depending on the severity and the type of infection, the antibiotic may
be given by mouth or by injection, or may be applied topically. Severe infections of the brain are usually treated
with intravenous antibiotics. Sometimes, multiple antibiotics are used in case there is resistance to one antibiotic.
Antibiotics only work for bacteria and do not affect viruses. Antibiotics work by slowing down the multiplication
of bacteria or killing the bacteria. The most common classes of antibiotics used in medicine include penicillin,
cephalosporins, aminoglycosides, macrolides, quinolones and tetracyclines.[citation needed] The top three single
agent/disease killers are HIV/AIDS, TB and malaria. While the number of deaths due to nearly every disease has decreased,
deaths due to HIV/AIDS have increased fourfold. Childhood diseases include pertussis, poliomyelitis, diphtheria,
measles and tetanus. Children also make up a large percentage of lower respiratory and diarrheal deaths. In 2012,
approximately 3.1 million people died of lower respiratory infections, making them the fourth leading cause
of death in the world. The medical treatment of infectious diseases falls into the medical field of Infectious Disease
and in some cases the study of propagation pertains to the field of Epidemiology. Generally, infections are initially
diagnosed by primary care physicians or internal medicine specialists. For example, an "uncomplicated" pneumonia
will generally be treated by the internist or the pulmonologist (lung physician). The work of the infectious diseases
specialist therefore entails working with both patients and general practitioners, as well as laboratory scientists,
immunologists, bacteriologists and other specialists. A number of studies have reported associations between pathogen
load in an area and human behavior. Higher pathogen load is associated with decreased size of ethnic and religious
groups in an area. This may be due to high pathogen load favoring avoidance of other groups, which may reduce pathogen
transmission, or a high pathogen load preventing the creation of large settlements and armies that enforce a common
culture. Higher pathogen load is also associated with more restricted sexual behavior, which may reduce pathogen
transmission. It is also associated with stronger preferences for health and attractiveness in mates. Higher fertility
rates and shorter or less parental care per child are another association, which may compensate for the higher
mortality rate. There is also an association with polygyny which may be due to higher pathogen load, making selecting
males with a high genetic resistance increasingly important. Higher pathogen load is also associated with more collectivism
and less individualism, which may limit contacts with outside groups and infections. There are alternative explanations
for at least some of the associations although some of these explanations may in turn ultimately be due to pathogen
load. Thus, polygyny may also be due to a lower male-to-female ratio in these areas, but this may ultimately be due to
male infants having increased mortality from infectious diseases. Another example is that poor socioeconomic factors
may ultimately in part be due to high pathogen load preventing economic development. Evidence of infection in fossil
remains is a subject of interest for paleopathologists, scientists who study occurrences of injuries and illness
in extinct life forms. Signs of infection have been discovered in the bones of carnivorous dinosaurs. When present,
however, these infections seem to tend to be confined to only small regions of the body. A skull attributed to the
early carnivorous dinosaur Herrerasaurus ischigualastensis exhibits pit-like wounds surrounded by swollen and porous
bone. The unusual texture of the bone around the wounds suggests they were afflicted by a short-lived, non-lethal
infection. Scientists who studied the skull speculated that the bite marks were received in a fight with another
Herrerasaurus. Other carnivorous dinosaurs with documented evidence of infection include Acrocanthosaurus, Allosaurus,
Tyrannosaurus and a tyrannosaur from the Kirtland Formation. The infections from both tyrannosaurs were received
by being bitten during a fight, like the Herrerasaurus specimen.
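The small-world hub dynamic described earlier in this article, where cutting the few links between hubs confines an outbreak far more effectively than thinning contacts within a hub, can be illustrated with a toy simulation. This is an illustrative sketch only: the hub sizes, transmission probability, and graph-building helpers below are assumptions for demonstration, not anything from the source.

```python
import random

def make_hub_graph(n_per_hub=50, n_hubs=4, p_within=0.2, bridges_per_gap=2, seed=1):
    # Dense within-hub contacts plus a few "bridge" contacts linking
    # consecutive hubs: a toy small-world contact structure.
    rng = random.Random(seed)
    hubs = [list(range(h * n_per_hub, (h + 1) * n_per_hub)) for h in range(n_hubs)]
    within = set()
    for hub in hubs:
        for i in hub:
            for j in hub:
                if i < j and rng.random() < p_within:
                    within.add((i, j))
    bridges = set()
    for h in range(n_hubs - 1):
        for _ in range(bridges_per_gap):
            bridges.add((rng.choice(hubs[h]), rng.choice(hubs[h + 1])))
    return within, bridges, n_per_hub * n_hubs

def outbreak_size(edges, n_nodes, p_transmit=0.5, seed=2):
    # Bond percolation: each contact transmits independently with
    # probability p_transmit; count everyone reachable from patient zero.
    rng = random.Random(seed)
    adj = {i: [] for i in range(n_nodes)}
    for i, j in sorted(edges):  # sorted for reproducible rng consumption
        if rng.random() < p_transmit:
            adj[i].append(j)
            adj[j].append(i)
    infected, frontier = {0}, [0]
    while frontier:
        for nxt in adj[frontier.pop()]:
            if nxt not in infected:
                infected.add(nxt)
                frontier.append(nxt)
    return len(infected)

within, bridges, n = make_hub_graph()
full = outbreak_size(within | bridges, n)  # bridges between hubs intact
cut = outbreak_size(within, n)             # hub-to-hub jumps blocked
# With the bridges severed, patient zero's outbreak cannot leave hub 0,
# so `cut` is bounded by the hub size (50) regardless of p_transmit.
print(full, cut)
```

Because patient zero sits in hub 0, removing the handful of bridge edges caps the outbreak at one hub no matter how transmissible the pathogen is within a hub, mirroring the needle-exchange and ring-culling examples above.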
Hunting is the practice of killing or trapping any animal, or pursuing or tracking it with the intent of doing so. Hunting
wildlife or feral animals is most commonly done by humans for food, recreation, to remove predators which are dangerous
to humans or domestic animals, or for trade. In the 2010s, lawful hunting is distinguished from poaching, which is
the illegal killing, trapping or capture of the hunted species. The species that are hunted are referred to as game
or prey and are usually mammals and birds. Furthermore, evidence exists that hunting may have been one of the multiple
environmental factors leading to extinctions of the Holocene megafauna and their replacement by smaller herbivores.
The North American megafauna extinction coincided with the Younger Dryas impact event, possibly making hunting
a less critical factor in prehistoric species loss than had been previously thought. However, in other locations
such as Australia, humans are thought to have played a very significant role in the extinction of the Australian
megafauna that was widespread prior to human occupation. While it is undisputed that early humans were hunters, the
importance of this for the emergence of the Homo genus from the earlier Australopithecines, including the production
of stone tools and eventually the control of fire, is emphasised in the hunting hypothesis and de-emphasised in
scenarios that stress omnivory and social interaction, including mating behaviour, as essential in the emergence
of human behavioural modernity. With the establishment of language, culture, and religion, hunting became a theme
of stories and myths, as well as rituals such as dance and animal sacrifice. Hunter-gathering lifestyles remained
prevalent in some parts of the New World, Sub-Saharan Africa, and Siberia, as well as all of Australia, until the
European Age of Discovery. They still persist in some tribal societies, albeit in rapid decline. Peoples that preserved
paleolithic hunting-gathering until the recent past include some indigenous peoples of the Amazonas (Aché), some
Central and Southern African (San people), some peoples of New Guinea (Fayu), the Mlabri of Thailand and Laos, the
Vedda people of Sri Lanka, and a handful of uncontacted peoples. In Africa, the only remaining full-time hunter-gatherers
are the Hadza of Tanzania.[citation needed] Archaeologist Lewis Binford criticised the idea that early hominids and
early humans were hunters. On the basis of the analysis of the skeletal remains of the consumed animals, he concluded
that hominids and early humans were mostly scavengers, not hunters, and this idea is popular among some archaeologists
and paleoanthropologists. Robert Blumenschine proposed the idea of confrontational scavenging, which involves challenging
and scaring off other predators after they have made a kill, which he suggests could have been the leading method
of obtaining protein-rich meat by early humans. Even as animal domestication became relatively widespread and after
the development of agriculture, hunting was usually a significant contributor to the human food supply. The supplementary
meat and materials from hunting included protein, bone for implements, sinew for cordage, fur, feathers, rawhide
and leather used in clothing. Man's earliest hunting weapons would have included rocks, spears, the atlatl, and bows
and arrows. Hunting is still vital in marginal climates, especially those unsuited for pastoral uses or agriculture.[citation
needed] For example, Inuit people in the Arctic trap and hunt animals for clothing and use the skins of sea mammals
to make kayaks, clothing, and footwear. On ancient reliefs, especially from Mesopotamia, kings are often depicted
as hunters of big game such as lions and are often portrayed hunting from a war chariot. The cultural and psychological
importance of hunting in ancient societies is represented by deities such as the horned god Cernunnos and lunar goddesses
of classical antiquity, the Greek Artemis or Roman Diana. Taboos are often related to hunting, and mythological association
of prey species with a divinity could be reflected in hunting restrictions such as a reserve surrounding a temple.
Euripides' tale of Artemis and Actaeon, for example, may be seen as a caution against disrespect of prey or impudent
boasting. In most parts of medieval Europe, the upper class obtained the sole rights to hunt in certain areas of
a feudal territory. Game in these areas was used as a source of food and furs, often provided via professional huntsmen,
but it was also expected to provide a form of recreation for the aristocracy. The importance of this proprietary
view of game can be seen in the Robin Hood legends, in which one of the primary charges against the outlaws is that
they "hunt the King's deer". In contrast, settlers in Anglophone colonies gloried democratically in hunting for all.
Hindu scriptures describe hunting as an acceptable occupation, as well as a sport of the kingly. Even figures considered
godly are described to have engaged in hunting. One of the names of the god Shiva is Mrigavyadha, which translates
as "the deer hunter" (mriga means deer; vyadha means hunter). The word Mriga, in many Indian languages including
Malayalam, not only stands for deer, but for all animals and animal instincts (Mriga Thrishna). Shiva, as Mrigavyadha,
is the one who destroys the animal instincts in human beings. In the epic Ramayana, Dasharatha, the father of Rama,
is said to have the ability to hunt in the dark. During one of his hunting expeditions, he accidentally killed Shravana,
mistaking him for game. During Rama's exile in the forest, Ravana kidnapped his wife, Sita, from their hut while
Rama, at Sita's request, was away pursuing a golden deer and his brother Lakshman had gone after him. According to the Mahabharata,
Pandu, the father of the Pandavas, accidentally killed the sage Kindama and his wife with an arrow, mistaking them
for a deer. Krishna is said to have died after being accidentally wounded by an arrow of a hunter. From early Christian
times, hunting has been forbidden to Roman Catholic Church clerics. Thus the Corpus Juris Canonici (C. ii, X, De
cleric. venat.) says, "We forbid to all servants of God hunting and expeditions through the woods with hounds; and
we also forbid them to keep hawks or falcons." The Fourth Council of the Lateran, held under Pope Innocent III, decreed
(canon xv): "We interdict hunting or hawking to all clerics." The decree of the Council of Trent is worded more mildly:
"Let clerics abstain from illicit hunting and hawking" (Sess. XXIV, De reform., c. xii), which seems to imply that
not all hunting is illicit, and canonists generally make a distinction declaring noisy (clamorosa) hunting unlawful,
but not quiet (quieta) hunting. Nevertheless, although a distinction between lawful and unlawful hunting is undoubtedly
permissible, it is certain that a bishop can absolutely prohibit all hunting to the clerics of his diocese, as was
done by synods at Milan, Avignon, Liège, Cologne, and elsewhere. Benedict XIV (De synodo diœces., l. II, c. x) declared
that such synodal decrees are not too severe, as an absolute prohibition of hunting is more conformable to the ecclesiastical
law. In practice, therefore, the synodal statutes of various localities must be consulted to discover whether they
allow quiet hunting or prohibit it altogether. New Zealand has a strong hunting culture. The islands making up New
Zealand originally had no land mammals apart from bats. However, once Europeans arrived, game animals were introduced
by acclimatisation societies to provide New Zealanders with sport and a hunting resource. Deer, pigs, goats, rabbits,
hare, tahr and chamois all adapted well to the New Zealand terrain, and with no natural predators, their population
exploded. Government agencies view the animals as pests due to their effects on the natural environment and on agricultural
production, but hunters view them as a resource. During the feudal and colonial times in British India, hunting was
regarded as a regal sport in the numerous princely states, as many maharajas and nawabs, as well as British officers,
maintained a whole corps of shikaris (big-game hunters), who were native professional hunters. They would be headed
by a master of the hunt, who might be styled mir-shikar. Often, they recruited the normally low-ranking local tribes
because of their traditional knowledge of the environment and hunting techniques. Big game, such as Bengal tigers,
might be hunted from the back of an elephant. Regional social norms are generally antagonistic to hunting, while
a few sects, such as the Bishnoi, lay special emphasis on the conservation of particular species, such as the antelope.
India's Wildlife Protection Act of 1972 bans the killing of all wild animals. However, the Chief Wildlife Warden
may, if satisfied that any wild animal from a specified list has become dangerous to human life, or is so disabled
or diseased as to be beyond recovery, permit any person to hunt such an animal. In this case, the body of any wild
animal killed or wounded becomes government property. Unarmed fox hunting on horseback with hounds is the type of
hunting most closely associated with the United Kingdom; in fact, "hunting" without qualification implies fox hunting.
What in other countries is called "hunting" is called "shooting" (birds) or "stalking" (deer) in Britain. Originally
a form of vermin control to protect livestock, fox hunting became a popular social activity for newly wealthy upper
classes in Victorian times and a traditional rural activity for riders and foot followers alike. Similar to fox hunting
in many ways is the chasing of hares with hounds. Pairs of sight hounds (or long-dogs), such as greyhounds, may be
used to pursue a hare in coursing, where the greyhounds are marked as to their skill in coursing the hare (but are
not intended to actually catch it), or the hare may be pursued with scent hounds such as beagles or harriers. Other
sorts of foxhounds may also be used for hunting stags (deer) or mink. Deer stalking with rifles is carried out on
foot without hounds, using stealth. Shooting as practised in Britain, as opposed to traditional hunting, requires
little questing for game—around thirty-five million birds are released onto shooting estates every year, some having
been factory farmed. Shoots can be elaborate affairs with guns placed in assigned positions and assistants to help
load shotguns. When in position, "beaters" move through the areas of cover, swinging sticks or flags to drive the
game out. Such events are often called "drives". The open season for grouse in the UK begins on 12 August, the so-called
Glorious Twelfth. The definition of game in the United Kingdom is governed by the Game Act 1831. Hunting is primarily
regulated by state law; additional regulations are imposed through United States environmental law in the case of
migratory birds and endangered species. Regulations vary widely from state to state and govern the areas, time periods,
techniques and methods by which specific game animals may be hunted. Some states make a distinction between protected
species and unprotected species (often vermin or varmints for which there are no hunting regulations). Hunters of
protected species require a hunting license in all states, for which completion of a hunting safety course is sometimes
a prerequisite. Hunting big game typically requires a "tag" for each animal harvested. Tags must be purchased in
addition to the hunting license, and the number of tags issued to an individual is typically limited. In cases where
there are more prospective hunters than the quota for that species, tags are usually assigned by lottery. Tags may
be further restricted to a specific area, or wildlife management unit. Hunting migratory waterfowl requires a duck
stamp from the Fish and Wildlife Service in addition to the appropriate state hunting license. Gun usage in hunting
is typically regulated by game category, area within the state, and time period. Regulations for big-game hunting
often specify a minimum caliber or muzzle energy for firearms. The use of rifles is often banned for safety reasons
in areas with high population densities or limited topographic relief. Regulations may also limit or ban the use
of lead in ammunition because of environmental concerns. Specific seasons for bow hunting or muzzle-loading black-powder
guns are often established to limit competition with hunters using more effective weapons. Hunting in the United
States is not associated with any particular class or culture; a 2006 poll showed seventy-eight percent of Americans
supported legal hunting, although relatively few Americans actually hunt. At the beginning of the 21st century, just
six percent of Americans hunted. Southerners in states along the eastern seaboard hunted at a rate of five percent,
slightly below the national average, and while hunting was more common in other parts of the South at nine percent,
these rates did not surpass those of the Plains states, where twelve percent of Midwesterners hunted. Hunting in
other areas of the country fell below the national average. Overall, in the 1996–2006 period, the number of hunters
over the age of sixteen declined by ten percent, a drop attributable to a number of factors including habitat loss
and changes in recreation habits. Regulation of hunting within the United States dates from the 19th century. Some
modern hunters see themselves as conservationists and sportsmen in the mode of Theodore Roosevelt and the Boone and
Crockett Club. Local hunting clubs and national organizations provide hunter education and help protect the future
of the sport by buying land for future hunting use. Some groups represent a specific hunting interest, such as Ducks
Unlimited, Pheasants Forever, or the Delta Waterfowl Foundation. Many hunting groups also participate in lobbying
the federal government and state government. Each year, nearly $200 million in hunters' federal excise taxes are
distributed to state agencies to support wildlife management programs, the purchase of lands open to hunters, and
hunter education and safety classes. Since 1934, the sale of Federal Duck Stamps, a required purchase for migratory
waterfowl hunters over sixteen years old, has raised over $700 million to help purchase more than 5,200,000 acres
(8,100 sq mi; 21,000 km2) of habitat for the National Wildlife Refuge System lands that support waterfowl and many
other wildlife species and are often open to hunting. States also collect money from hunting licenses to assist with
management of game animals, as designated by law. A key task of federal and state park rangers and game wardens is
to enforce laws and regulations related to hunting, including species protection, hunting seasons, and hunting bans.
Varmint hunting is an American phrase for the selective killing of non-game animals seen as pests. While not always
an efficient form of pest control, varmint hunting achieves selective control of pests while providing recreation
and is much less regulated. Varmint species are often responsible for detrimental effects on crops, livestock, landscaping,
infrastructure, and pets. Some animals, such as wild rabbits or squirrels, may be utilised for fur or meat, but often
no use is made of the carcass. Which species are varmints depends on the circumstance and area. Common varmints may
include various rodents, coyotes, crows, foxes, feral cats, and feral hogs. Some animals once considered varmints
are now protected, such as wolves. In the US state of Louisiana, a non-native rodent known as a nutria has become
so destructive to the local ecosystem that the state has initiated a bounty program to help control the population.
When Internet hunting was introduced in 2005, allowing people to hunt over the Internet using remotely controlled
guns, the practice was widely criticised by hunters as violating the principles of fair chase. As a representative
of the National Rifle Association (NRA) explained, "The NRA has always maintained that fair chase, being in the field
with your firearm or bow, is an important element of hunting tradition. Sitting at your desk in front of your computer,
clicking at a mouse, has nothing to do with hunting." There is a very active tradition of hunting of small to medium-sized
wild game in Trinidad and Tobago. Hunting is carried out with firearms and aided by the use of hounds; trap guns,
trap cages and snare nets are also used illegally. With approximately 12,000 sport hunters applying for hunting
licences in recent years (in a very small country, roughly the size of the state of Delaware at about 5,128 square
kilometers and with 1.3 million inhabitants), there is some concern that the practice might not be sustainable. In addition,
there are at present no bag limits and the open season is comparatively long (five months, October to February
inclusive). As such, hunting pressure from legal hunters is very high. Added to that, there is a thriving and very
lucrative black market for poached wild game (sold and enthusiastically purchased as expensive luxury delicacies)
and the number of commercial poachers in operation is unknown but presumed to be fairly high. As a result, the populations
of the five major mammalian game species (red-rumped agouti, lowland paca, nine-banded armadillo, collared peccary,
and red brocket deer) are thought to be quite low (although scientific population studies were only
just beginning as of 2013). It appears that the red brocket deer population has been extirpated on
Tobago as a result of over-hunting. Various herons, ducks, doves, the green iguana, the gold tegu, the spectacled
caiman and the common opossum are also commonly hunted and poached. There is also some poaching of 'fully protected
species', including red howler monkeys and capuchin monkeys, southern tamanduas, Brazilian porcupines, yellow-footed
tortoises, Trinidad piping guans and even one of the national birds, the scarlet ibis. Legal hunters pay very small
fees to obtain hunting licences and undergo no official basic conservation biology or hunting-ethics training. There
is presumed to be relatively very little subsistence hunting in the country (with most hunting for either sport or
commercial profit). The local wildlife management authority is under-staffed and under-funded, and as such very little
in the way of enforcement is done to uphold existing wildlife management laws, with hunting occurring both in and
out of season, and even in wildlife sanctuaries. There is some indication that the government is beginning to take
the issue of wildlife management more seriously, with well drafted legislation being brought before Parliament in
2015. It remains to be seen if the drafted legislation will be fully adopted and financially supported by the current
and future governments, and if the general populace will move towards a greater awareness of the importance of wildlife
conservation and change the culture of wanton consumption to one of sustainable management. Hunting is claimed to
give resource managers an important tool in managing populations that might exceed the carrying capacity of their
habitat and threaten the well-being of other species, or, in some instances, damage human health or safety.[citation
needed] However, in most circumstances carrying capacity is determined by a combination of habitat and food availability,
and hunting for 'population control' has no effect on the annual population of species.[citation needed] In some
cases, it can increase the population of predators such as coyotes by removing territorial bounds that would otherwise
be established, resulting in excess neighbouring migrations into an area, thus artificially increasing the population.
Hunting advocates[who?] assert that hunting reduces intraspecific competition for food and shelter, reducing mortality
among the remaining animals. Some environmentalists assert[who?] that (re)introducing predators would achieve the
same end with greater efficiency and fewer negative effects, such as the introduction of significant amounts of free lead into
the environment and food chain. In the 19th century, southern and central European sport hunters often pursued game
only for a trophy, usually the head or pelt of an animal, which was then displayed as a sign of prowess. The rest
of the animal was typically discarded. Some cultures, however, disapprove of such waste. In Nordic countries, hunting
for trophies was—and still is—frowned upon. Hunting in North America in the 19th century was done primarily as a
way to supplement food supplies, although it is now undertaken mainly for sport.[citation needed] The safari method
of hunting was a development of sport hunting that saw elaborate travel in Africa, India and other places in pursuit
of trophies. In modern times, trophy hunting persists and is a significant industry in some areas.[citation needed]
A scientific study in the journal Biological Conservation states that trophy hunting is of "major importance to
conservation in Africa by creating economic incentives for conservation over vast areas, including areas which may
be unsuitable for alternative wildlife-based land uses such as photographic ecotourism." However, another study states
that less than 3% of a trophy hunter's expenditures reach the local level, meaning that the economic incentive and
benefit is "minimal, particularly when we consider the vast areas of land that hunting concessions occupy." A variety
of industries benefit from hunting and support hunting on economic grounds. In Tanzania, it is estimated that a safari
hunter spends fifty to one hundred times that of the average ecotourist. While the average photo tourist may seek
luxury accommodation, the average safari hunter generally stays in tented camps. Safari hunters are also more likely
to use remote areas, uninviting to the typical ecotourist. Advocates argue that these hunters allow for anti-poaching
activities and revenue for local communities.[citation needed] Hunting also has a significant financial impact in
the United States, with many companies specialising in hunting equipment or speciality tourism. Many different technologies
have been created to assist hunters, even including iPhone applications. Today's hunters come from a broad range
of economic, social, and cultural backgrounds. In 2001, over thirteen million hunters averaged eighteen days hunting,
and spent over $20.5 billion on their sport.[citation needed] In the US, proceeds from hunting licenses contribute
to state game management programs, including preservation of wildlife habitat. However, excessive hunting and poaching
have also contributed heavily to the endangerment, extirpation and extinction of many animals, such as the quagga,
the great auk, Steller's sea cow, the thylacine, the bluebuck, the Arabian oryx, the Caspian and Javan tigers, the
markhor, the Sumatran rhinoceros, the bison, the North American cougar, the Altai argali sheep, the Asian elephant
and many more, primarily for commercial sale or sport. All these animals have been hunted to endangerment or extinction.
On 16 March 1934, President Franklin D. Roosevelt signed the Migratory Bird Hunting Stamp Act, which requires an
annual stamp purchase by all hunters over the age of sixteen. The stamps are created on behalf of the program by
the US Postal Service and depict wildlife artwork chosen through an annual contest. They play an important role in
habitat conservation because ninety-eight percent of all funds generated by their sale go directly toward the purchase
or lease of wetland habitat for protection in the National Wildlife Refuge System.[citation needed] In addition to
waterfowl, it is estimated that one third of the nation's endangered species seek food and shelter in areas protected
using Duck Stamp funds.[citation needed] Since 1934, the sale of Federal Duck Stamps has generated $670 million,
and helped to purchase or lease 5,200,000 acres (8,100 sq mi; 21,000 km2) of habitat. The stamps serve as a license
to hunt migratory birds, an entrance pass for all National Wildlife Refuge areas, and are also considered collectors'
items, often purchased for aesthetic reasons outside of the hunting and birding communities. Although non-hunters
buy a significant number of Duck Stamps, eighty-seven percent of their sales are contributed by hunters, which is
logical, as hunters are required to purchase them. Distribution of funds is managed by the Migratory Bird Conservation
Commission (MBCC). The Arabian oryx, a species of large antelope, once inhabited much of the desert areas of the
Middle East. However, the species' striking appearance made it (along with the closely related scimitar-horned oryx
and addax) a popular quarry for sport hunters, especially foreign executives of oil companies working in the region.[citation
needed] The use of automobiles and high-powered rifles destroyed their only advantage: speed. They became extinct
in the wild exclusively due to sport hunting in 1972. The scimitar-horned oryx followed suit, while the addax became
critically endangered. However, the Arabian oryx has now made a comeback and been upgraded from "extinct in the wild"
to "vulnerable" due to conservation efforts such as captive breeding. The American bison is a large bovid which inhabited
much of western North America prior to the 1800s, living on the prairies in large herds. However, the vast herds
of bison attracted market hunters, who killed dozens of bison for their hides only, leaving the rest to rot. Thousands
of these hunters quickly eliminated the bison herds, bringing the population from several million in the early 1800s
to a few hundred by the 1880s. Conservation efforts have allowed the population to increase, but the bison remains
near-threatened. In contrast, Botswana has recently been forced to ban trophy hunting following a precipitous wildlife
decline. The numbers of antelope plummeted across Botswana, with a resultant decline in predator numbers, while elephant
numbers remained stable and hippopotamus numbers rose. According to the government of Botswana, trophy hunting is at least partly to blame, though other factors such as poaching, drought, and habitat loss also contributed. Uganda recently did the same, arguing that "the share of benefits of sport hunting were lopsided and unlikely
to deter poaching or improve [Uganda's] capacity to manage the wildlife reserves."
Kathmandu (/ˌkɑːtmɑːnˈduː/; Nepali pronunciation: [kɑʈʰmɑɳɖu]) is the capital and largest municipality of Nepal. It also hosts
the headquarters of the South Asian Association for Regional Cooperation (SAARC). It is the only city of Nepal with
the administrative status of Mahanagar (Metropolitan City), as compared to Upa-Mahanagar (Sub-Metropolitan City)
or Nagar (City). Kathmandu is the core of Nepal's largest urban agglomeration located in the Kathmandu Valley consisting
of Lalitpur, Kirtipur, Madhyapur Thimi, Bhaktapur and a number of smaller communities. Kathmandu is also known informally
as "KTM" or the "tri-city". According to the 2011 census, Kathmandu Metropolitan City has a population of 975,453
and measures 49.45 km2 (19.09 sq mi). The city has a rich history, spanning nearly 2000 years, as inferred from inscriptions
found in the valley. Religious and cultural festivities form a major part of the lives of people residing in Kathmandu.
Most of Kathmandu's people follow Hinduism and many others follow Buddhism. There are people of other religious beliefs
as well, giving Kathmandu a cosmopolitan culture. Nepali is the most commonly spoken language in the city. English
is understood by Kathmandu's educated residents. Historic areas of Kathmandu were devastated by a 7.8 magnitude earthquake
on 25 April 2015. The city of Kathmandu is named after the Kasthamandap temple, which stood in Durbar Square. In Sanskrit, Kastha (काष्ठ) means "wood" and Mandap (मण्डप) means "covered shelter". This temple, also known as Maru Satal in
the Newar language, was built in 1596 by King Laxmi Narsingh Malla. The two-storey structure was made entirely of
wood, and used neither iron nails nor supports. According to legend, all the timber used to build the pagoda was obtained
from a single tree. The structure collapsed during the major earthquake on 25 April 2015. The colophons of ancient
manuscripts, dated as late as the 20th century, refer to Kathmandu as Kasthamandap Mahanagar in Nepal Mandala. Mahanagar
means "great city". The city is called "Kasthamandap" in a vow that Buddhist priests still recite to this day. Thus,
Kathmandu is also known as Kasthamandap. During medieval times, the city was sometimes called Kantipur (कान्तिपुर).
This name is derived from two Sanskrit words - Kanti and pur. "Kanti" is one of the names of the Goddess Lakshmi,
and "pur" means place. The ancient history of Kathmandu is described in its traditional myths and legends. According
to Swayambhu Purana, present-day Kathmandu was once a huge and deep lake named "Nagdaha", as it was full of snakes. The lake was drained by the Bodhisattva Manjusri, who cut an outlet with his sword; he then established a city called Manjupattan and made Dharmakar the ruler of the valley. Some time later, a demon named Banasur closed the outlet, and the valley again became a lake. Lord Krishna then came to Nepal, killed Banasur, and drained the water once more. He brought some Gops with him and made Bhuktaman the king of Nepal. Very few historical records
exist of the period before the medieval Licchavi rulers. According to the Gopalraj Vansawali, a genealogy of Nepali
monarchs, the rulers of Kathmandu Valley before the Licchavis were Gopalas, Mahispalas, Aabhirs, Kirants, and Somavanshi.
The Kirata dynasty was established by Yalamber. During the Kirata era, a settlement called Yambu existed in the northern
half of old Kathmandu. In some of the Sino-Tibetan languages, Kathmandu is still called Yambu. Another smaller settlement
called Yengal was present in the southern half of old Kathmandu, near Manjupattan. During the reign of the seventh
Kirata ruler, Jitedasti, Buddhist monks entered Kathmandu valley and established a forest monastery at Sankhu. The
Licchavis from the Indo-Gangetic plain migrated north and defeated the Kiratas, establishing the Licchavi dynasty.
During this era, following the genocide of Shakyas in Lumbini by Virudhaka, the survivors migrated north and entered
the forest monastery in Sankhu masquerading as Koliyas. From Sankhu, they migrated to Yambu and Yengal (Lanjagwal
and Manjupattan) and established the first permanent Buddhist monasteries of Kathmandu. This created the basis of
Newar Buddhism, which is the only surviving Sanskrit-based Buddhist tradition in the world. With their migration,
Yambu was called Koligram and Yengal was called Dakshin Koligram during most of the Licchavi era. Eventually, the
Licchavi ruler Gunakamadeva merged Koligram and Dakshin Koligram, founding the city of Kathmandu. The city was designed
in the shape of Chandrahrasa, the sword of Manjushri. The city was surrounded by eight barracks guarded by Ajimas.
One of these barracks is still in use at Bhadrakali (in front of Singha Durbar). The city served as an important
transit point in the trade between India and Tibet, leading to tremendous growth in architecture. Descriptions of
buildings such as Managriha, Kailaskut Bhawan, and Bhadradiwas Bhawan have been found in the surviving journals of
travelers and monks who lived during this era. For example, the famous 7th-century Chinese traveller Xuanzang described
Kailaskut Bhawan, the palace of the Licchavi king Amshuverma. The trade route also fostered cultural exchange.
The artistry of the Newar people—the indigenous inhabitants of the Kathmandu Valley—became highly sought after during
this era, both within the Valley and throughout the greater Himalayas. Newar artists travelled extensively throughout
Asia, creating religious art for their neighbors. For example, Araniko led a group of his compatriot artists through
Tibet and China. Bhrikuti, the princess of Nepal who married Tibetan monarch Songtsän Gampo, was instrumental in
introducing Buddhism to Tibet. The Licchavi era was followed by the Malla era. Rulers from Tirhut, upon being attacked
by Muslims, fled north to the Kathmandu valley. They intermarried with Nepali royalty, and this led to the Malla
era. The early years of the Malla era were turbulent, with raids and attacks from Khas and Turk Muslims. There was
also a devastating earthquake which claimed the lives of a third of Kathmandu's population, including the king Abhaya
Malla. These disasters led to the destruction of most of the architecture of the Licchavi era (such as Mangriha and
Kailashkut Bhawan), and the loss of literature collected in various monasteries within the city. Despite the initial
hardships, Kathmandu rose to prominence again and, during most of the Malla era, dominated the trade between India
and Tibet. Nepali currency became the standard currency in trans-Himalayan trade. During the later part of the Malla
era, Kathmandu Valley comprised four fortified cities: Kantipur, Lalitpur, Bhaktapur, and Kirtipur. These served
as the capitals of the Malla confederation of Nepal. These states competed with each other in the arts, architecture,
aesthetics, and trade, resulting in tremendous development. The kings of this period directly influenced or involved
themselves in the construction of public buildings, squares, and temples, as well as the development of water spouts,
the institutionalization of trusts (called guthis), the codification of laws, the writing of dramas, and the performance
of plays in city squares. Evidence of an influx of ideas from India, Tibet, China, Persia, and Europe among other
places can be found in a stone inscription from the time of king Pratap Malla. Books have been found from this era
that describe their tantric tradition (e.g. Tantrakhyan), medicine (e.g. Haramekhala), religion (e.g. Mooldevshashidev),
law, morals, and history. Amarkosh, a Sanskrit-Nepal Bhasa dictionary from 1381 AD, was also found. Architecturally
notable buildings from this era include Kathmandu Durbar Square, Patan Durbar Square, Bhaktapur Durbar Square, the
former durbar of Kirtipur, Nyatapola, Kumbheshwar, the Krishna temple, and others. The Gorkha Kingdom ended the Malla
confederation after the Battle of Kathmandu in 1768. This marked the beginning of the modern era in Kathmandu. The
Battle of Kirtipur was the start of the Gorkha conquest of the Kathmandu Valley. Kathmandu was adopted as the capital
of the Gorkha empire, and the empire itself was dubbed Nepal. During the early part of this era, Kathmandu maintained
its distinctive culture. Buildings with characteristic Nepali architecture, such as the nine-story tower of Basantapur,
were built during this era. However, trade declined because of continual war with neighboring nations. Bhimsen Thapa
supported France against Great Britain; this led to the development of modern military structures, such as modern
barracks in Kathmandu. The nine-storey tower Dharahara was originally built during this era. Kathmandu is located
in the northwestern part of the Kathmandu Valley to the north of the Bagmati River and covers an area of 50.67 km2
(19.56 sq mi). The average elevation is 1,400 metres (4,600 ft) above sea level. The city is directly bounded by
several other municipalities of the Kathmandu valley: south of the Bagmati by Lalitpur Sub-Metropolitan City (Patan)
with which it today forms one urban area surrounded by a ring road, to the southwest by Kirtipur Municipality and
to the east by Madyapur Thimi Municipality. To the north the urban area extends into several Village Development
Committees. However, the urban agglomeration extends well beyond the neighboring municipalities, e.g. to Bhaktapur
and just about covers the entire Kathmandu valley. Kathmandu is dissected by eight rivers, the main river of the
valley, the Bagmati and its tributaries, of which the Bishnumati, Dhobi Khola, Manohara Khola, Hanumant Khola, and
Tukucha Khola are predominant. The mountains from where these rivers originate are in the elevation range of 1,500–3,000
metres (4,900–9,800 ft), and have passes which provide access to and from Kathmandu and its valley. An ancient canal
once flowed from Nagarjuna hill through Balaju to Kathmandu; this canal is now extinct. The agglomeration of Kathmandu
has not yet been officially defined. The urban area of the Kathmandu valley is split among three different districts
(collections of local government units within a zone) which extend very little beyond the valley fringe, except towards
the southern ranges, which have comparatively small population. They have the three highest population densities
in the country. Within the districts lie VDCs (villages), 3 municipalities (Bhaktapur, Kirtipur, Madhyapur Thimi),
1 sub-metropolitan city (Lalitpur), and 1 metropolitan city (Kathmandu). Some district subdivisions remain legally
villages yet are densely populated; Gonggabu VDC notably recorded a density of over 20,000 people/km2 in the 2011 census.
The following data table describes the districts considered part of the agglomeration: Five major climatic regions
are found in Nepal. Of these, Kathmandu Valley is in the Warm Temperate Zone (elevation ranging from 1,200–2,300
metres (3,900–7,500 ft)), where the climate is fairly temperate, atypical for the region. This zone is followed by
the Cool Temperate Zone with elevation varying between 2,100–3,300 metres (6,900–10,800 ft). Under Köppen's climate
classification, portions of the city with lower elevations have a humid subtropical climate (Cwa), while portions
of the city with higher elevations generally have a subtropical highland climate. In the Kathmandu Valley, whose climate is representative of the city's, the average summer temperature varies from 28–30 °C (82–86 °F). The average
winter temperature is 10.1 °C (50.2 °F). The city generally has a climate with warm days followed by cool nights
and mornings. Unpredictable weather is expected, given that temperatures can drop to 1 °C (34 °F) or less during
the winter. During a 2013 cold front, the winter temperatures of Kathmandu dropped to −4 °C (25 °F), and the lowest
temperature was recorded on January 10, 2013, at −9.2 °C (15.4 °F). Rainfall is mostly monsoon-based (about 65% of
the total concentrated during the monsoon months of June to August), and decreases substantially (100 to 200 cm (39
to 79 in)) from eastern Nepal to western Nepal. Rainfall has been recorded at about 1,400 millimetres (55.1 in) for
the Kathmandu valley, and averages 1,407 millimetres (55.4 in) for the city of Kathmandu. On average humidity is
75%. The chart below is based on data from the Nepal Bureau of Standards & Meteorology, "Weather Meteorology" for
2005. The chart provides minimum and maximum temperatures during each month. The annual amount of precipitation was
1,124 millimetres (44.3 in) for 2005, as per monthly data included in the table above. The decade of 2000-2010 saw
highly variable and unprecedented precipitation anomalies in Kathmandu. This was mostly due to the annual variation
of the southwest monsoon. For example, 2003 was the wettest year ever in Kathmandu, totalling over
2,900 mm (114 in) of precipitation due to an exceptionally strong monsoon season. In contrast, 2001 recorded only
356 mm (14 in) of precipitation due to an extraordinarily weak monsoon season. The location and terrain of Kathmandu
have played a significant role in the development of a stable economy which spans millennia. The city is located
in an ancient lake basin, with fertile soil and flat terrain. This geography helped form a society based on agriculture.
This, combined with its location between India and China, helped establish Kathmandu as an important trading center
over the centuries. Kathmandu's trade is an ancient profession that flourished along an offshoot of the Silk Road
which linked India and Tibet. From centuries past, Lhasa Newar merchants of Kathmandu have conducted trade across
the Himalaya and contributed to spreading art styles and Buddhism across Central Asia. Other traditional occupations
are farming, metal casting, woodcarving, painting, weaving, and pottery. The economic output of the metropolitan
area alone, approximately NRs 550 billion (around $6.5 billion in nominal terms) per year, is worth more than one third of the national GDP, and per capita income is roughly $2,200, about three times the national average. Kathmandu exports handicrafts, artworks, garments, carpets, pashmina, and paper; trade accounts for 21% of its finances. Manufacturing is also
important and accounts for 19% of the revenue that Kathmandu generates. Garments and woolen carpets are the most
notable manufactured products. Other economic sectors in Kathmandu include agriculture (9%), education (6%), transport
(6%), and hotels and restaurants (5%). Kathmandu is famous for lokta paper and pashmina shawls. Tourism is considered
another important industry in Nepal. This industry started around 1950, as the country's political makeup changed
and ended the country's isolation from the rest of the world. In 1956, air transportation was established and the
Tribhuvan Highway, between Kathmandu and Raxaul (at India's border), was started. Separate organizations were created
in Kathmandu to promote this activity; some of these include the Tourism Development Board, the Department of Tourism
and the Civil Aviation Department. Furthermore, Nepal became a member of several international tourist associations.
Establishing diplomatic relations with other nations further accentuated this activity. The hotel industry, travel
agencies, training of tourist guides, and targeted publicity campaigns are the chief reasons for the remarkable growth
of this industry in Nepal, and in Kathmandu in particular. Since then, tourism in Nepal has thrived; it is the country's
most important industry. Tourism is a major source of income for most of the people in the city,
with several hundred thousand visitors annually. Hindu and Buddhist pilgrims from all over the world visit Kathmandu's
religious sites such as Pashupatinath, Swayambhunath, Boudhanath and Budhanilkantha. From a mere 6,179 tourists in
1961/62, the number jumped to 491,504 in 1999/2000. Following the end of the Maoist insurgency, there was a significant rise, with 509,956 tourist arrivals in 2009. Since then, tourism has improved further as the country became a democratic republic. In economic terms, foreign exchange from tourism registered 3.8% of the GDP in 1995/96 but then started declining.
The high level of tourism is attributed to the natural grandeur of the Himalayas and the rich cultural heritage of
the country. The neighbourhood of Thamel is Kathmandu's primary "traveller's ghetto", packed with guest houses, restaurants,
shops, and bookstores, catering to tourists. Another neighbourhood of growing popularity is Jhamel, a name for Jhamsikhel
coined to rhyme with Thamel. Jhochhen Tol, also known as Freak Street, is Kathmandu's original traveler's haunt,
made popular by the hippies of the 1960s and 1970s; it remains a popular alternative to Thamel. Asan is a bazaar
and ceremonial square on the old trade route to Tibet, and provides a fine example of a traditional neighbourhood.
With the opening of the tourist industry after the change in the political scenario of Nepal in 1950, the hotel industry
drastically improved. Kathmandu now boasts several luxury hotels, such as the Hyatt Regency, Dwarika's, the Yak & Yeti, The
Everest Hotel, Hotel Radisson, Hotel De L'Annapurna, The Malla Hotel, Shangri-La Hotel (which is not operated by
the Shangri-La Hotel Group) and The Shanker Hotel. There are several four-star hotels such as Hotel Vaishali, Hotel
Narayani, The Blue Star and Grand Hotel. The Garden Hotel, Hotel Ambassador, and Aloha Inn are among the three-star
hotels in Kathmandu. Hotels like Hyatt Regency, De L'Annapurna and Hotel Yak & Yeti are among the five-star hotels
providing casinos as well. Metropolitan Kathmandu is divided into five sectors: the Central Sector, the East Sector,
the North Sector, the City Core and the West Sector. For civic administration, the city is further divided into 35
administrative wards. The Council administers the Metropolitan area of Kathmandu city through its 177 elected representatives
and 20 nominated members. It holds biannual meetings to review, process and approve the annual budget and make major
policy decisions. The ward profile documents for the 35 wards, prepared by the Kathmandu Metropolitan Council, are detailed and provide information for each ward on population, the structure and condition of houses, the type of
roads, educational, health and financial institutions, entertainment facilities, parking space, security provisions,
etc. It also includes lists of development projects completed, on-going and planned, along with informative data
about the cultural heritage, festivals, historical sites and the local inhabitants. Ward 16 is the largest, with
an area of 437.4 ha; ward 26 is the smallest, with an area of 4 ha. The fire service, known as the Barun Yantra Karyalaya,
opened its first station in Kathmandu in 1937 with a single vehicle. An iron tower was erected to monitor the city
and watch for fire. As a precautionary measure, firemen were sent to the areas which were designated as accident-prone
areas. In 1944, the fire service was extended to the neighboring cities of Lalitpur and Bhaktapur. In 1966, a fire
service was established in Kathmandu airport. In 1975, a West German government donation added seven fire engines
to Kathmandu's fire service. The fire service in the city is also overseen by an international non-governmental
organization, the Firefighters Volunteer Association of Nepal (FAN), which was established in 2000 with the purpose
of raising public awareness about fire and improving safety. Over the years the city has been home to people of various
ethnicities, resulting in a range of different traditions and cultural practices. In one decade, the population increased
from 427,045 in 1991 to 671,805 in 2001. The population was projected to reach 915,071 in 2011 and 1,319,597 by 2021.
To keep up with this population growth, the KMC-controlled area of 5,076.6 hectares (12,545 acres) has expanded to 8,214
hectares (20,300 acres) in 2001. With this new area, the population density, which was 85 in 1991, remained 85 in 2001; it is projected to jump to 111 in 2011 and 161 in 2021. The largest ethnic groups are Newar (29.6%), Matwali (25.1%; Sunuwar, Gurung, Magar, Tamang, etc.), Khas Brahmins (20.51%), and Chettris (18.5%). Tamangs originating from surrounding
hill districts can be seen in Kathmandu. More recently, other hill ethnic groups and Caste groups from Terai have
come to represent a substantial proportion of the city's population. The major languages are Nepali and Nepal Bhasa,
while English is understood by many, particularly in the service industry. The major religions are Hinduism and Buddhism.
The ancient trade route between India and Tibet that passed through Kathmandu enabled artistic and architectural traditions from other cultures to be amalgamated with local art and architecture. The monuments of Kathmandu City
have been influenced over the centuries by Hindu and Buddhist religious practices. The architectural treasure of
the Kathmandu valley has been categorized under the well-known seven groups of heritage monuments and buildings.
In 2006 UNESCO declared these seven groups of monuments as a World Heritage Site (WHS). The seven monuments zones
cover an area of 188.95 hectares (466.9 acres), with the buffer zone extending to 239.34 hectares (591.4 acres).
The Seven Monument Zones (MZs), inscribed originally in 1979 with a minor modification in 2006, are the Durbar Squares of Hanuman Dhoka, Patan, and Bhaktapur; the Hindu temples of Pashupatinath and Changunarayan; and the Buddhist stupas of Swayambhu and Boudhanath. The literal meaning of Durbar Square is "place of palaces". There are three preserved Durbar Squares
in Kathmandu valley and one unpreserved in Kirtipur. The Durbar Square of Kathmandu is located in the old city and
has heritage buildings representing four kingdoms (Kantipur, Lalitpur, Bhaktapur, Kirtipur); the earliest is the
Licchavi dynasty. The complex has 50 temples distributed across the two quadrangles of the Durbar Square. The outer
quadrangle has the Kasthamandap, Kumari Ghar, and Shiva-Parvati Temple; the inner quadrangle has the Hanuman Dhoka
palace. The squares were severely damaged in the April 2015 Nepal earthquake. Kumari Ghar is a palace in the center
of the Kathmandu city, next to the Durbar square where a Royal Kumari selected from several Kumaris resides. Kumari,
or Kumari Devi, is the tradition of worshipping young pre-pubescent girls as manifestations of the divine female
energy or devi in South Asian countries. In Nepal the selection process is very rigorous. Kumari is believed to be
the bodily incarnation of the goddess Taleju (the Nepali name for Durga) until she menstruates, after which it is
believed that the goddess vacates her body. Serious illness or a major loss of blood from an injury are also causes
for her to revert to common status. The current Royal Kumari, Matina Shakya, age four, was installed in October 2008
by the Maoist government that replaced the monarchy. The Pashupatinath Temple is a famous 5th century Hindu temple
dedicated to Lord Shiva (Pashupati). Located on the banks of the Bagmati River in the eastern part of Kathmandu,
Pashupatinath Temple is the oldest Hindu temple in Kathmandu. It served as the seat of national deity, Lord Pashupatinath,
until Nepal was secularized. However, a significant part of the temple was destroyed by Mughal invaders in the 14th
century and little or nothing remains of the original 5th-century temple exterior. The temple as it stands today
was built in the 19th century, although the image of the bull and the black four-headed image of Pashupati are at
least 300 years old. The temple is a UNESCO World Heritage Site. Shivaratri, or the night of Lord Shiva, is the most
important festival that takes place here, attracting thousands of devotees and sadhus. Believers in Pashupatinath (mainly Hindus) are allowed to enter the temple premises, but non-Hindu visitors may view the temple only from across the Bagmati River. The priests who perform the services at this temple have
been Brahmins from Karnataka, South India since the time of Malla king Yaksha Malla. This tradition is believed to
have been started at the request of Adi Shankaracharya who sought to unify the states of Bharatam (Unified India)
by encouraging cultural exchange. This procedure is followed in other temples around India, which were sanctified
by Adi Shankaracharya. The Boudhanath, (also written Bouddhanath, Bodhnath, Baudhanath or the Khāsa Chaitya), is
one of the holiest Buddhist sites in Nepal, along with Swayambhu. It is a very popular tourist site. Boudhanath is
known as Khāsti by Newars and as Bauddha or Bodhnāth by speakers of Nepali. Located about 11 km (7 mi) from the center
and northeastern outskirts of Kathmandu, the stupa's massive mandala makes it one of the largest spherical stupas
in Nepal. Boudhanath became a UNESCO World Heritage Site in 1979. The base of the stupa has 108 small depictions
of the Dhyani Buddha Amitabha. It is surrounded by a brick wall with 147 niches, each with four or five prayer
wheels engraved with the mantra, om mani padme hum. At the northern entrance where visitors must pass is a shrine
dedicated to Ajima, the goddess of smallpox. Every year the stupa attracts many Tibetan Buddhist pilgrims who perform
full body prostrations in the inner lower enclosure, walk around the stupa with prayer wheels, chant, and pray. Thousands
of prayer flags are hoisted up from the top of the stupa downwards and dot the perimeter of the complex. The influx
of many Tibetan refugees from China has seen the construction of over 50 Tibetan gompas (monasteries) around Boudhanath.
Swayambhu is a Buddhist stupa atop a hillock at the northwestern part of the city. This is among the oldest religious
sites in Nepal. Although the site is considered Buddhist, it is revered by both Buddhists and Hindus. The stupa consists
of a dome at the base; above the dome, there is a cubic structure with the eyes of Buddha looking in all four directions. There are pentagonal toranas above each of the four sides, with statues engraved on them. Behind and above
the torana there are thirteen tiers. Above all the tiers, there is a small space above which lies a gajur. Kathmandu
valley is described as "an enormous treasure house of art and sculptures", which are made of wood, stone, metal,
and terracotta, and found in profusion in temples, shrines, stupas, gompas, chaityas, and palaces. The art objects
are also seen in street corners, lanes, private courtyards and in open ground. Most art is in the form of icons of
gods and goddesses. Kathmandu valley has had this art treasure for a very long time, but received worldwide recognition
only after the country opened to the outside world in 1950. The religious art of Nepal and Kathmandu in particular
consists of an iconic symbolism of the Mother Goddesses such as: Bhavani, Durga, Gaja-Lakshmi, Hariti-Sitala, Mahsishamardini,
Saptamatrika (seven mother goddesses), and Sri-Lakshmi (wealth goddess). From the 3rd century BC, apart from the Hindu
gods and goddesses, Buddhist monuments from the Ashokan period (it is said that Ashoka visited Nepal in 250 BC) have
embellished Nepal in general and the valley in particular. These art and architectural edifices encompass three major
periods of evolution: the Licchavi or classical period (500 to 900 AD); the post-classical period (1000 to 1400 AD), with strong influence of the Palla art form; and the Malla period (1400 onwards), which exhibited explicitly tantric influences
coupled with the art of Tibetan Demonology. Kathmandu is home to a number of museums and art galleries, including
the National Museum of Nepal and the Natural History Museum of Nepal. Nepal's art and architecture is an amalgamation
of two ancient religions, Hinduism and Buddhism. These are amply reflected in the many temples, shrines, stupas, monasteries, and palaces in the seven well-defined Monument Zones of the Kathmandu valley, which are part of a UNESCO World
Heritage Site. This amalgamation is also reflected in the planning and exhibitions in museums and art galleries throughout
Kathmandu and its sister cities of Patan and Bhaktapur. The museums display unique artifacts and paintings from the
5th century CE to the present day, including archaeological finds. The National Museum is located in the western
part of Kathmandu, near the Swayambhunath stupa in an historical building. This building was constructed in the early
19th century by General Bhimsen Thapa. It is the most important museum in the country, housing an extensive collection
of weapons, art and antiquities of historic and cultural importance. The museum was established in 1928 as a collection
house of war trophies and weapons, and the initial name of this museum was Chhauni Silkhana, meaning "the stone house
of arms and ammunition". Given its focus, the museum contains many weapons, including locally made firearms used
in wars, leather cannons from the 18th–19th century, and medieval and modern works in wood, bronze, stone and paintings.
The Tribhuvan Museum contains artifacts related to the King Tribhuvan (1906–1955). It has a variety of pieces including
his personal belongings, letters and papers, memorabilia related to events he was involved in and a rare collection
of photos and paintings of Royal family members. The Mahendra Museum is dedicated to king Mahendra of Nepal (1920–1972).
Like the Tribhuvan Museum, it includes his personal belongings such as decorations, stamps, coins and personal notes
and manuscripts, but it also has structural reconstructions of his cabinet room and office chamber. The Hanumandhoka
Palace, a lavish medieval palace complex in the Durbar, contains three separate museums of historic importance. These
museums include the Birendra museum, which contains items related to the second-last monarch, Birendra of Nepal.
The enclosed compound of the Narayanhity Palace Museum is in the north-central part of Kathmandu. "Narayanhity" comes
from Narayana, a form of the Hindu god Lord Vishnu, and Hiti, meaning "water spout" (Vishnu's temple is located opposite
the palace, and the water spout is located east of the main entrance to the precinct). Narayanhity is a new palace, built in 1970 in the form of a contemporary pagoda in front of the old palace of 1915, on the occasion of the marriage of King Birendra Bir Bikram Shah, then heir apparent to the throne. The southern gate
of the palace is at the crossing of Prithvipath and Darbar Marg roads. The palace area covers (30 hectares (74 acres))
and is fully secured with gates on all sides. This palace was the scene of the Nepali royal massacre. After the fall
of the monarchy, it was converted to a museum. The Taragaon Museum presents the modern history of the Kathmandu Valley.
It seeks to document 50 years of research and cultural heritage conservation of the Kathmandu Valley, documenting
what artists, photographers, architects, and anthropologists from abroad contributed in the second half of the 20th
century. The actual structure of the Museum showcases restoration and rehabilitation efforts to preserve the built
heritage of Kathmandu. It was designed by Carl Pruscha (master-planner of the Kathmandu Valley) in 1970 and constructed
in 1971. Restoration works began in 2010 to rehabilitate the Taragaon hostel into the Taragaon Museum. The design
uses local brick along with modern architectural design elements, as well as circles, triangles, and squares.
The Museum is within a short walk from the Boudhnath stupa, which itself can be seen from the Museum tower. Kathmandu
is a center for art in Nepal, displaying the work of contemporary artists in the country and also collections of
historical artists. Patan in particular is an ancient city noted for its fine arts and crafts. Art in Kathmandu is
vibrant, demonstrating a fusion of traditionalism and modern art, derived from a great number of national, Asian,
and global influences. Nepali art is commonly divided into two areas: the idealistic traditional painting, known as Paubhas in Nepal and perhaps more commonly as Thangkas in Tibet, which is closely linked to the country's religious history; and contemporary western-style painting, including nature-based compositions and the abstract artwork based on Tantric elements and social themes for which painters in Nepal are well noted. Internationally,
the British-based charity, the Kathmandu Contemporary Art Centre, is involved with promoting arts in Kathmandu. The
Srijana Contemporary Art Gallery, located inside the Bhrikutimandap Exhibition grounds, hosts the work of contemporary
painters and sculptors, and regularly organizes exhibitions. It also runs morning and evening classes in the schools
of art. Also of note is the Moti Azima Gallery, located in a three-storied building in Bhimsenthan, which contains
an impressive collection of traditional utensils and handmade dolls and items typical of a medieval Newar house,
giving an important insight into Nepali history. The J Art Gallery is also located in Kathmandu, near the Royal Palace in Durbarmarg, and displays the artwork of eminent, established Nepali painters. The Nepal Art Council
Gallery, located in the Babar Mahal, on the way to Tribhuvan International Airport contains artwork of both national
and international artists and extensive halls regularly used for art exhibitions. The National Library of Nepal is
located in Patan. It is the largest library in the country with more than 70,000 books. English, Nepali, Sanskrit,
Hindi, and Nepal Bhasa books are found here. The library is in possession of rare scholarly books in Sanskrit and
English dating from the 17th century AD. Kathmandu also contains the Kaiser Library, located in the Kaiser Mahal
on the ground floor of the Ministry of Education building. This collection of around 45,000 books is derived from
a personal collection of Kaiser Shamsher Jang Bahadur Rana. It covers a wide range of subjects including history,
law, art, religion, and philosophy, as well as a Sanskrit manual of Tantra, which is believed to be over 1,000 years
old. The 2015 earthquake caused severe damage to the Ministry of Education building, and the contents of the Kaiser
Library have been temporarily relocated. Kathmandu is home to Nepali cinema and theaters. The city contains several
theaters, including the National Dance Theatre in Kanti Path, the Ganga Theatre, the Himalayan Theatre and the Aarohan
Theater Group founded in 1982. The M. Art Theater is based in the city. The Gurukul School of Theatre organizes the
Kathmandu International Theater Festival, attracting artists from all over the world. A mini theater is also located
at the Hanumandhoka Durbar Square, established by the Durbar Conservation and Promotion Committee. Most of the cuisines
found in Kathmandu are non-vegetarian. However, the practice of vegetarianism is not uncommon, and vegetarian cuisines
can be found throughout the city. Consumption of beef is very uncommon and considered taboo in many places. Buff
(meat of water buffalo) is very common. There is a strong tradition of buff consumption in Kathmandu, especially
among Newars, which is not found in other parts of Nepal. Consumption of pork was considered taboo until a few decades
ago. Due to the intermixing with Kirat cuisine from eastern Nepal, pork has found a place in Kathmandu dishes. A
fringe population of devout Hindus and Muslims consider it taboo. Muslims forbid eating buff, as prescribed by the Quran, while Hindus eat all varieties of meat except beef, as they consider the cow a goddess and a symbol of purity. The chief breakfast for locals and visitors alike is mostly momo or chow mein. Kathmandu had only one western-style restaurant in 1955. A large
number of restaurants in Kathmandu have since opened, serving Nepali cuisine, Tibetan cuisine, Chinese cuisine and
Indian cuisine in particular. Many other restaurants have opened to accommodate locals, expatriates, and tourists.
The growth of tourism in Kathmandu has led to culinary creativity and the development of hybrid foods to accommodate tourists, such as American chop suey (a sweet-and-sour sauce with crispy noodles, commonly topped with a fried egg) and other westernized adaptations of traditional cuisine. Continental cuisine can be found in selected
places. International chain restaurants are rare, but some outlets of Pizza Hut and KFC have recently opened there.
It also has several outlets of the international ice-cream chain Baskin-Robbins. Kathmandu has a larger proportion
of tea drinkers than coffee drinkers. Tea is widely served; though the brew is weak by western standards, it is richer, with tea leaves boiled together with milk, sugar, and spices. Alcohol is widely drunk, and there are numerous local
variants of alcoholic beverages. Drinking and driving is illegal, and authorities have a zero tolerance policy. Ailaa
and thwon (alcohol made from rice) are the alcoholic beverages of Kathmandu, found in all the local bhattis (alcohol
serving eateries). Chhyaang, tongba (fermented millet or barley) and rakshi are alcoholic beverages from other parts
of Nepal which are found in Kathmandu. However, shops and bars in Kathmandu widely sell western and Nepali beers.
Most of the fairs and festivals in Kathmandu originated in the Malla period or earlier. Traditionally, these festivals
were celebrated by Newars. In recent years, these festivals have found wider participation from other Kathmanduites
as well. As the capital of the Republic of Nepal, various national festivals are celebrated in Kathmandu. With mass
migration to the city, the cultures of Khas from the west, Kirats from the east, Bon/Tibetan from the north, and
Mithila from the south meet in the capital and mingle harmoniously. Festivities such as the Ghode (horse) Jatra, Indra Jatra, the Dashain Durga Puja festival, Shivratri, and many more are observed by all Hindu and Buddhist communities of Kathmandu with devotional fervor and enthusiasm. Social regulation in the enacted codes incorporates Hindu traditions and ethics. These were followed by the Shah kings and previous kings, as devout Hindus and protectors of the Buddhist
religion. The Bagmati River which flows through Kathmandu is considered a holy river both by Hindus and Buddhists,
and many Hindu temples are located on the banks of this river. The importance of the Bagmati also lies in the fact
that Hindus are cremated on its banks, and Kirants are buried in the hills by its side. According to the Nepali Hindu
tradition, the dead body must be dipped three times into the Bagmati before cremation. The chief mourner (usually
the first son) who lights the funeral pyre must take a holy river-water bath immediately after cremation. Many relatives who join the funeral procession also bathe in the Bagmati River or sprinkle its holy water on their bodies at the end of the cremation, as the Bagmati is believed to purify people spiritually. The legendary Princess Bhrikuti (7th century) and the artist Araniko (1245–1306 AD) from the Kathmandu valley tradition played significant roles in spreading
Buddhism in Tibet and China. There are over 108 traditional monasteries (Bahals and Bahis) in Kathmandu based on
Newar Buddhism. Since the 1960s, the permanent Tibetan Buddhist population of Kathmandu has risen significantly so
that there are now over fifty Tibetan Buddhist monasteries in the area. Also, with the modernization of Newar Buddhism,
various Theravada Bihars have been established. Kirant Mundhum is one of the indigenous animistic practices of Nepal.
It is practiced by Kirat people. Some animistic aspects of Kirant beliefs, such as ancestor worship (worship of Ajima)
are also found in Newars of Kirant origin. Ancient religious sites believed to be worshipped by ancient Kirats, such
as Pashupatinath, Wanga Akash Bhairabh (Yalambar) and Ajima are now worshipped by people of all Dharmic religions
in Kathmandu. Kirats who have migrated from other parts of Nepal to Kathmandu practice Mundhum in the city. Sikhism
is practiced primarily at the Gurudwara at Kupundole; an earlier Sikh temple in Kathmandu is now defunct. Jainism is practiced by a small community, with a Jain temple in Gyaneshwar where Jains practice
their faith. According to the records of the Spiritual Assembly of the Baha'is of Nepal, there are approximately
300 Baha'is in Kathmandu valley. They have a National Office located in Shantinagar, Baneshwor. The Baha'is also
have classes for children at the National Centre and other localities in Kathmandu. Islam is practised in Kathmandu
but Muslims are a minority, accounting for about 4.2% of the population of Nepal. It is said that
in Kathmandu alone there are 170 Christian churches. Christian missionary hospitals, welfare organizations, and schools
are also operating. Nepali citizens who served as soldiers in the Indian and British armies and converted to Christianity while in service have continued to practice their religion after returning to Nepal. They have contributed to the spread of
Christianity and the building of churches in Nepal and in Kathmandu in particular. The Institute of Medicine, the central college of Tribhuvan University, is the first medical college of Nepal and is located in Maharajgunj, Kathmandu. It
was established in 1972 and started to impart medical education from 1978. A number of medical colleges including
Kathmandu Medical College, Nepal Medical College, KIST Medical College, Nepal Army Institute of Health Sciences,
National Academy of Medical Sciences (NAMS) and Kathmandu University School of Medical Sciences (KUSMS), are also
located in or around Kathmandu. Football and cricket are the most popular sports among the younger generation in Nepal, and there are several stadiums in the city. Football is governed by the All Nepal Football Association (ANFA)
from its headquarters in Kathmandu. The only international football stadium in the city is the Dasarath Rangasala
Stadium, a multi-purpose stadium used mostly for football matches and cultural events, located in the neighborhood
of Tripureshwor. It is the largest stadium in Nepal with a capacity of 25,000 spectators, built in 1956. Martyr's
Memorial League is also held in this ground every year. The stadium was renovated with Chinese help before the 8th
South Asian Games were held in Kathmandu and had floodlights installed. Kathmandu is home to the oldest football
clubs of Nepal such as RCT, Sankata and NRT. Other prominent clubs include MMC, Machhindra FC, Tribhuwan Army Club
(TAC) and MPC. The total length of roads in Nepal was recorded as 17,182 km (10,676 mi) as of 2003–04. This
fairly large network has helped the economic development of the country, particularly in the fields of agriculture,
horticulture, vegetable farming, industry and also tourism. In view of the hilly terrain, transportation in Kathmandu is mainly by road and air. Kathmandu is connected by the Tribhuvan Highway to the south, the Prithvi Highway to the west, and the Araniko Highway to the north. The BP Highway, connecting Kathmandu to the eastern part of Nepal, is
under construction. The main international airport serving Kathmandu and thus Nepal is the Tribhuvan International
Airport, located about 6 km (3.7 mi) from the city centre. Operated by the Civil Aviation Authority of Nepal, it has two terminals, one domestic and one international. At present, about 22 international airlines connect
Nepal to other destinations in Europe, Asia and the Middle East, to cities such as Istanbul, Delhi, Kolkata, Singapore,
Bangkok, Kuala Lumpur, Dhaka, Islamabad, Paro, Lhasa, Chengdu, and Guangzhou. A recent extension to the international
terminal has made the distance to the airplanes shorter and in October 2009 it became possible to fly directly to
Kathmandu from Amsterdam with Arkefly. Since 2013, Turkish Airlines connects Istanbul to Kathmandu. Regionally, several
Nepali airlines operate from the city, including Agni Air, Buddha Air, Cosmic Air, Nepal Airlines and Yeti Airlines,
to other major towns across Nepal. Kathmandu Metropolitan City (KMC), in order to promote international relations, has established an International Relations Secretariat (IRC). KMC's first international relationship was established
in 1975 with the city of Eugene, Oregon, United States. This activity has been further enhanced by establishing formal
relationships with 8 other cities: Matsumoto City of Japan, Rochester of the USA, Yangon (formerly Rangoon) of Myanmar, Xi'an of the People's Republic of China, Minsk of Belarus, and Pyongyang of the Democratic People's Republic of Korea. KMC's
constant endeavor is to enhance its interaction with SAARC countries, other International agencies and many other
major cities of the world to achieve better urban management and developmental programs for Kathmandu.
Myocardial infarction (MI) or acute myocardial infarction (AMI), commonly known as a heart attack, occurs when blood flow
stops to a part of the heart causing damage to the heart muscle. The most common symptom is chest pain or discomfort
which may travel into the shoulder, arm, back, neck, or jaw. Often it is in the center or left side of the chest
and lasts for more than a few minutes. The discomfort may occasionally feel like heartburn. Other symptoms may include
shortness of breath, nausea, feeling faint, a cold sweat, or feeling tired. About 30% of people have atypical symptoms,
with women more likely than men to present atypically. Among those over 75 years old, about 5% have had an MI with
little or no history of symptoms. An MI may cause heart failure, an irregular heartbeat, or cardiac arrest. Most
MIs occur due to coronary artery disease. Risk factors include high blood pressure, smoking, diabetes, lack of exercise,
obesity, high blood cholesterol, poor diet, and excessive alcohol intake, among others. The mechanism of an MI often
involves the complete blockage of a coronary artery caused by a rupture of an atherosclerotic plaque. MIs are less
commonly caused by coronary artery spasms, which may be due to cocaine, significant emotional stress, and extreme
cold, among others. A number of tests are useful to help with diagnosis, including electrocardiograms (ECGs), blood
tests, and coronary angiography. An ECG may confirm an ST elevation MI if ST elevation is present. Commonly used
blood tests include troponin and less often creatine kinase MB. Aspirin is an appropriate immediate treatment for
a suspected MI. Nitroglycerin or opioids may be used to help with chest pain; however, they do not improve overall
outcomes. Supplemental oxygen should be used in those with low oxygen levels or shortness of breath. In ST elevation
MIs, treatments which attempt to restore blood flow to the heart are typically recommended and include angioplasty,
where the arteries are pushed open, or thrombolysis, where the blockage is removed using medications. People who
have a non-ST elevation myocardial infarction (NSTEMI) are often managed with the blood thinner heparin, with the
additional use of angioplasty in those at high risk. In people with blockages of multiple coronary arteries and diabetes,
bypass surgery (CABG) may be recommended rather than angioplasty. After an MI, lifestyle modifications, along with
long term treatment with aspirin, beta blockers, and statins, are typically recommended. The onset of symptoms in
myocardial infarction (MI) is usually gradual, over several minutes, and rarely instantaneous. Chest pain is the
most common symptom of acute MI and is often described as a sensation of tightness, pressure, or squeezing. Chest
pain due to ischemia (a lack of blood and hence oxygen supply) of the heart muscle is termed angina pectoris. Pain
radiates most often to the left arm, but may also radiate to the lower jaw, neck, right arm, back, and upper abdomen,
where it may mimic heartburn. Levine's sign, in which a person localizes the chest pain by clenching their fists
over their sternum, has classically been thought to be predictive of cardiac chest pain, although a prospective observational
study showed it had a poor positive predictive value. Shortness of breath occurs when the damage to the heart limits
the output of the left ventricle, causing left ventricular failure and consequent pulmonary edema. Other symptoms
include diaphoresis (an excessive form of sweating), weakness, light-headedness, nausea, vomiting, and palpitations.
These symptoms are likely induced by a massive surge of catecholamines from the sympathetic nervous system, which
occurs in response to pain and the blood flow abnormalities that result from dysfunction of the heart muscle. Loss
of consciousness (due to inadequate blood flow to the brain and cardiogenic shock) and sudden death (frequently due
to the development of ventricular fibrillation) can occur in MIs. Atypical symptoms are more frequently reported
by women, the elderly, and those with diabetes when compared to their male and younger counterparts. Women also report
more numerous symptoms compared with men (2.6 on average vs. 1.8 symptoms in men). The most common symptoms of MI
in women include dyspnea, weakness, and fatigue. Fatigue, sleep disturbances, and dyspnea have been reported as frequently
occurring symptoms that may manifest as long as one month before the actual clinically manifested ischemic event.
In women, chest pain may be less predictive of coronary ischemia than in men. Women may also experience back or jaw
pain during an episode. At least one quarter of all MIs are silent, without chest pain or other symptoms. These cases
can be discovered later on electrocardiograms, using blood enzyme tests, or at autopsy without a prior history of
related complaints. Estimates of the prevalence of silent MIs vary between 22 and 64%. A silent course is more common
in the elderly, in people with diabetes mellitus and after heart transplantation, probably because the donor heart
is not fully innervated by the nervous system of the recipient. In people with diabetes, differences in pain threshold,
autonomic neuropathy, and psychological factors have been cited as possible explanations for the lack of symptoms.
Tobacco smoking (including secondhand smoke) and short-term exposure to air pollution such as carbon monoxide, nitrogen
dioxide, and sulfur dioxide (but not ozone) have been associated with MI. Other factors that increase the risk of
MI and are associated with worse outcomes after an MI include lack of physical activity and psychosocial factors
including low socioeconomic status, social isolation, and negative emotions. Shift work is also associated with a
higher risk of MI. Acute and prolonged intake of high quantities of alcoholic drinks (3–4 or more) increases the risk of a heart attack. The evidence for saturated fat is unclear: some state there is evidence of benefit from reducing saturated fat, specifically from eating polyunsaturated fat instead of saturated fat, while others state there is little evidence that reducing dietary saturated fat or increasing polyunsaturated fat intake affects heart
attack risk. Dietary cholesterol does not appear to have a significant effect on blood cholesterol and thus recommendations
about its consumption may not be needed. Trans fats do appear to increase risk. Genome-wide association studies have
found 27 genetic variants that are associated with an increased risk of myocardial infarction. Strongest association
of MI has been found with the 9p21 genomic locus, which contains genes CDKN2A & 2B, although the single nucleotide
polymorphisms that are implicated are within a non-coding region. The majority of these variants are in regions that
have not been previously implicated in coronary artery disease. The following genes have an association with MI:
PCSK9, SORT1, MIA3, WDR12, MRAS, PHACTR1, LPA, TCF21, MTHFDSL, ZC3HC1, CDKN2A, 2B, ABO, PDGF0, APOA5, MNF1ASM283,
COL4A1, HHIPC1, SMAD3, ADAMTS7, RAS1, SMG6, SNF8, LDLR, SLC5A3, MRPS6, KCNE2. Acute myocardial infarction refers
to two subtypes of acute coronary syndrome, namely non-ST-elevated and ST-elevated MIs, which are most frequently
(but not always) a manifestation of coronary artery disease. The most common triggering event is the disruption of
an atherosclerotic plaque in an epicardial coronary artery, which leads to a clotting cascade, sometimes resulting
in total occlusion of the artery. Atherosclerosis is the gradual buildup of cholesterol and fibrous tissue in plaques
in the wall of arteries (in this case, the coronary arteries), typically over decades. Bloodstream column irregularities
visible on angiography reflect artery lumen narrowing as a result of decades of advancing atherosclerosis. Plaques
can become unstable, rupture, and additionally promote the formation of a blood clot that occludes the artery; this
can occur in minutes. When a severe enough plaque rupture occurs in the coronary arteries, it leads to MI (necrosis
of downstream myocardium). It is estimated that one billion cardiac cells are lost in a typical MI. If impaired blood
flow to the heart lasts long enough, it triggers a process called the ischemic cascade; the heart cells in the territory
of the occluded coronary artery die (chiefly through necrosis) and do not grow back. A collagen scar forms in their
place. Recent studies indicate that another form of cell death, apoptosis, also plays a role in the process of tissue
damage following an MI. As a result, the person's heart will be permanently damaged. This myocardial scarring also
puts the person at risk for potentially life-threatening abnormal heart rhythms (arrhythmias), and may result in
the formation of a ventricular aneurysm that can rupture with catastrophic consequences. Injured heart tissue conducts
electrical impulses more slowly than normal heart tissue. The difference in conduction velocity between injured and
uninjured tissue can trigger re-entry or a feedback loop that is believed to be the cause of many lethal arrhythmias.
The most serious of these arrhythmias is ventricular fibrillation (V-Fib/VF), an extremely fast and chaotic heart
rhythm that is the leading cause of sudden cardiac death. Another life-threatening arrhythmia is ventricular tachycardia
(V-tach/VT), which can cause sudden cardiac death. However, VT usually results in rapid heart rates that prevent
the heart from pumping blood effectively. Cardiac output and blood pressure may fall to dangerous levels, which can
lead to further coronary ischemia and extension of the infarct. Myocardial infarction in the setting of plaque results
from underlying atherosclerosis. Inflammation is known to be an important step in the process of atherosclerotic
plaque formation. C-reactive protein (CRP) is a sensitive but nonspecific marker for inflammation. Elevated CRP blood
levels, especially measured with high-sensitivity assays, can predict the risk of MI, as well as stroke and development
of diabetes. Moreover, some drugs for MI might also reduce CRP levels. The use of high-sensitivity CRP assays as
a means of screening the general population is advised against, but it may be used optionally at the physician's
discretion in those who already present with other risk factors or known coronary artery disease. Whether CRP plays
a direct role in atherosclerosis remains uncertain. For a person to qualify as having a STEMI, in addition to reported
angina, the ECG must show new ST elevation in two or more adjacent ECG leads. This must be greater than 2 mm (0.2
mV) for males and greater than 1.5 mm (0.15 mV) in females if in leads V2 and V3 or greater than 1 mm (0.1 mV) if
it is in other ECG leads. A left bundle branch block that is believed to be new used to be considered the same as
ST elevation; however, this is no longer the case. In early STEMIs there may just be peaked T waves with ST elevation
developing later. In stable patients whose symptoms have resolved by the time of evaluation, technetium (99mTc) sestamibi
(i.e. a "MIBI scan") or thallium-201 chloride can be used in nuclear medicine to visualize areas of reduced blood
flow in conjunction with physiological or pharmacological stress. Thallium may also be used to determine viability
of tissue, distinguishing whether nonfunctional myocardium is actually dead or merely in a state of hibernation or
of being stunned. Medical societies and professional guidelines recommend that the physician confirm a person is
at high risk for myocardial infarction before conducting imaging tests to make a diagnosis. Patients who have a normal
ECG and who are able to exercise, for example, do not merit routine imaging. Imaging tests such as stress radionuclide
myocardial perfusion imaging or stress echocardiography can confirm a diagnosis when a patient's history, physical
exam, ECG, and cardiac biomarkers suggest the likelihood of a problem. There is some controversy surrounding the
effect of dietary fat on the development of cardiovascular disease. People are often advised to keep a diet where
less than 30% of the energy intake derives from fat, a diet that contains less than 7% of the energy intake in the
form of saturated fat, and a diet that contains less than 300 mg/day of cholesterol. Replacing saturated fat with mono- and polyunsaturated fats is also recommended, as the consumption of polyunsaturated fat instead of saturated fat may decrease
coronary heart disease. Olive oil, rapeseed oil and related products should be used instead of saturated fat. Aspirin
has been studied extensively in people considered at increased risk of myocardial infarction. Based on numerous studies
in different groups (e.g. people with or without diabetes), there does not appear to be a benefit strong enough to
outweigh the risk of excessive bleeding. Nevertheless, many clinical practice guidelines continue to recommend aspirin
for primary prevention, and some researchers feel that those with very high cardiovascular risk but low risk of bleeding
should continue to receive aspirin. The main treatments for MI with ECG evidence of ST elevation (STEMI) include thrombolysis
and percutaneous coronary intervention. Primary percutaneous coronary intervention (PCI) is the treatment of choice
for STEMI if it can be performed in a timely manner. If PCI cannot be performed within 90 to 120 minutes then thrombolysis,
preferably within 30 minutes of arrival at hospital, is recommended. If a person has had symptoms for 12 to 24 hours, the evidence for thrombolysis is weaker, and if symptoms have lasted for more than 24 hours it is not recommended. Thrombolysis
involves the administration of medication that activates the enzymes that normally destroy blood clots. Thrombolysis
agents include streptokinase, reteplase, alteplase, and tenecteplase. If no contraindications are present (such as
a high risk of bleeding), thrombolysis can be given in the pre-hospital or in-hospital setting. When given to people
suspected of having a STEMI within 6 hours of the onset of symptoms, thrombolytic drugs save the life of 1 in 43 of those who receive them. The risks are major bleeding (1 in 143) and brain bleeding (1 in 250). It is unclear whether
pre-hospital thrombolysis reduces death in people with STEMI compared to in-hospital thrombolysis. Pre-hospital thrombolysis
reduces time to thrombolytic treatment, based on studies conducted in higher income countries. People with an acute
coronary syndrome where no ST elevation is demonstrated (non-ST elevation ACS or NSTEACS) are treated with aspirin.
Clopidogrel is added in many cases, particularly if the risk of cardiovascular events is felt to be high and early
PCI is being considered. Depending on whether early PCI is planned, a factor Xa inhibitor or a potentiator of antithrombin
(fondaparinux or low molecular weight heparin respectively) may be added. In very high-risk scenarios, inhibitors
of the platelet glycoprotein IIb/IIIa receptor, such as eptifibatide or tirofiban, may be used. Cardiac rehabilitation
benefits many who have experienced myocardial infarction, even if there has been substantial heart damage and resultant
left ventricular failure; ideally other medical conditions that could interfere with participation should be managed
optimally. It should start soon after discharge from hospital. The program may include lifestyle advice, exercise,
social support, as well as recommendations about driving, flying, sport participation, stress management, and sexual
intercourse. Some risk factors for death include age, hemodynamic parameters (such as heart failure, cardiac arrest
on admission, systolic blood pressure, or Killip class of two or greater), ST-segment deviation, diabetes, serum
creatinine, peripheral vascular disease, and elevation of cardiac markers. Assessment of left ventricular ejection
fraction may increase the predictive power. Prognosis is worse if a mechanical complication such as papillary muscle
or myocardial free wall rupture occurs. Morbidity and mortality from myocardial infarction have improved over the
years due to better treatment. Complications may occur immediately following the heart attack (in the acute phase),
or may need time to develop (a chronic problem). Acute complications may include heart failure if the damaged heart
is no longer able to pump blood adequately around the body; aneurysm of the left ventricle myocardium; ventricular
septal rupture or free wall rupture; mitral regurgitation, in particular if the infarction causes dysfunction of
the papillary muscle; Dressler's syndrome; and abnormal heart rhythms, such as ventricular fibrillation, ventricular
tachycardia, atrial fibrillation, and heart block. Longer-term complications include heart failure, atrial fibrillation,
and an increased risk of a second MI. Ischemic heart disease (IHD) is becoming a more common cause of death in the developing
world. For example, in India, IHD had become the leading cause of death by 2004, accounting for 1.46 million deaths
(14% of total deaths) and deaths due to IHD were expected to double during 1985–2015. Globally, disability adjusted
life years (DALYs) lost to ischemic heart disease are predicted to account for 5.5% of total DALYs in 2030, making
it the second-most-important cause of disability (after unipolar depressive disorder), as well as the leading cause
of death by this date. At common law, in general, a myocardial infarction is a disease, but may sometimes be an injury.
This can create coverage issues in administration of no-fault insurance schemes such as workers' compensation. In
general, a heart attack is not covered; however, it may be a work-related injury if it results, for example, from
unusual emotional stress or unusual exertion. In addition, in some jurisdictions, heart attacks suffered by persons
in particular occupations such as police officers may be classified as line-of-duty injuries by statute or policy.
In some countries or states, a person having suffered from an MI may be prevented from participating in activity
that puts other people's lives at risk, for example driving a car or flying an airplane.
Before the 20th century, the term matter included ordinary matter composed of atoms and excluded other energy phenomena such
as light or sound. This concept of matter may be generalized from atoms to include any objects having mass even when
at rest, but this is ill-defined because an object's mass can arise from its (possibly massless) constituents' motion
and interaction energies. Thus, matter does not have a universal definition, nor is it a fundamental concept in physics
today. Matter is also used loosely as a general term for the substance that makes up all observable physical objects.
All the objects from everyday life that we can bump into, touch or squeeze are composed of atoms. This atomic matter
is in turn made up of interacting subatomic particles—usually a nucleus of protons and neutrons, and a cloud of orbiting
electrons. Typically, science considers these composite particles matter because they have both rest mass and volume.
By contrast, massless particles, such as photons, are not considered matter, because they have neither rest mass
nor volume. However, not all particles with rest mass have a classical volume, since fundamental particles such as
quarks and leptons (sometimes equated with matter) are considered "point particles" with no effective size or volume.
Nevertheless, quarks and leptons together make up "ordinary matter", and their interactions contribute to the effective
volume of the composite particles that make up ordinary matter. Matter commonly exists in four states (or phases):
solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical
phases, such as Bose–Einstein condensates and fermionic condensates. A focus on an elementary-particle view of matter
also leads to new phases of matter, such as the quark–gluon plasma. For much of the history of the natural sciences
people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks,
the so-called particulate theory of matter, was first put forward by the Greek philosophers Leucippus (~490 BC) and
Democritus (~470–380 BC). Matter should not be confused with mass, as the two are not quite the same in modern physics.
For example, mass is a conserved quantity, which means that its value is unchanging through time, within closed systems.
However, matter is not conserved in such systems, although this is not obvious in ordinary conditions on Earth, where
matter is approximately conserved. Still, special relativity shows that matter may disappear by conversion into energy,
even inside closed systems, and it can also be created from energy, within such systems. However, because mass (like
energy) can neither be created nor destroyed, the quantity of mass and the quantity of energy remain the same during
a transformation of matter (which represents a certain amount of energy) into non-material (i.e., non-matter) energy.
This is also true in the reverse transformation of energy into matter. Different fields of science use the term matter
in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings, from a
time when there was no reason to distinguish mass and matter. As such, there is no single universally agreed scientific
meaning of the word "matter". Scientifically, the term "mass" is well-defined, but "matter" is not. Sometimes in
the field of physics "matter" is simply equated with particles that exhibit rest mass (i.e., that cannot travel at
the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like
and particle-like properties, the so-called wave–particle duality. In the context of relativity, mass is not an additive
quantity: one cannot simply add the rest masses of particles in a system to get the total rest mass of the
system. Thus, in relativity the more general view is that it is not the sum of rest masses, but the energy–momentum
tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. "Matter" therefore
is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is
not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In
this view, light and other massless particles and fields are part of matter. The reason for this is that in this
definition, electromagnetic radiation (such as light) as well as the energy of electromagnetic fields contributes
to the mass of systems, and therefore appears to add matter to them. For example, light radiation (or thermal radiation)
trapped inside a box would contribute to the mass of the box, as would any kind of energy inside the box, including
the kinetic energy of particles held by the box. Nevertheless, isolated individual particles of light (photons) and
the isolated kinetic energy of massive particles are normally not considered to be matter. A source
of definition difficulty in relativity arises from two definitions of mass in common use, one of which is formally
equivalent to total energy (and is thus observer dependent), and the other of which is referred to as rest mass or
invariant mass and is independent of the observer. Only "rest mass" is loosely equated with matter (since it can
be weighed). Invariant mass is usually applied in physics to unbound systems of particles. However, energies which
contribute to the "invariant mass" may be weighed also in special circumstances, such as when a system that has invariant
mass is confined and has no net momentum (as in the box example above). Thus, a photon with no mass may (confusingly)
still add mass to a system in which it is trapped. The same is true of the kinetic energy of particles, which by
definition is not part of their rest mass, but which does add rest mass to systems in which these particles reside
(an example is the mass added by the motion of gas molecules of a bottle of gas, or by the thermal energy of any
hot object). Since such mass (kinetic energies of particles, the energy of trapped electromagnetic radiation and
stored potential energy of repulsive fields) is measured as part of the mass of ordinary matter in complex systems,
the "matter" status of "massless particles" and fields of force becomes unclear in such systems. These problems contribute
to the lack of a rigorous definition of matter in science, although mass is easier to define as the total stress–energy
above (this is also what is weighed on a scale, and what is the source of gravity). A definition
of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules
are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons.
This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that
are not simply atoms or molecules, for example white dwarf matter—typically, carbon and oxygen nuclei in a sea of
degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons, neutrons, and
electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons
and neutrons are made up of quarks and the force fields (gluons) that bind them together (see Quarks and leptons
definition below). Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and
neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to
be matter, it is natural to phrase the definition as: ordinary matter is anything that is made of the same things
that atoms and molecules are made of. (However, notice that one also can make from these building blocks matter that
is not atoms or molecules.) Then, because electrons are leptons, and protons and neutrons are made of quarks, this
definition in turn leads to the definition of matter as being quarks and leptons, which are the two types of elementary
fermions. Carithers and Grannis state: Ordinary matter is composed entirely of first-generation particles, namely
the [up] and [down] quarks, plus the electron and its neutrino. (Higher-generation particles quickly decay into
first-generation particles, and thus are not commonly encountered.) The quark–lepton definition of ordinary matter,
however, identifies not only the elementary building blocks of matter, but also includes composites made from the
constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents
together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of
an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper,
the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics)
and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the
"mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum
of the mass of the three quarks in a nucleon is approximately 12.5 MeV/c², which is low compared
to the mass of a nucleon (approximately 938 MeV/c²). The bottom line is that most of the mass
of everyday objects comes from the interaction energy of their elementary components. The Standard Model groups matter
particles into three generations, where each generation consists of two quarks and two leptons. The first generation
is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks,
the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino.
The most natural explanation for this would be that quarks and leptons of higher generations are excited states of
the first generation. If this turns out to be the case, it would imply that quarks and leptons are composite particles,
rather than elementary particles. Baryonic matter is the part of the universe that is made of baryons (including
all atoms). This part of the universe does not include dark energy, dark matter, black holes or various forms of
degenerate matter, such as compose white dwarf stars and neutron stars. Microwave light seen by Wilkinson Microwave
Anisotropy Probe (WMAP) suggests that only about 4.6% of that part of the universe within range of the best telescopes
(that is, matter that may be visible because light could reach us from it), is made of baryonic matter. About 23%
is dark matter, and about 72% is dark energy. In physics, degenerate matter refers to the ground state of a gas of
fermions at a temperature near absolute zero. The Pauli exclusion principle requires that only two fermions can occupy
a quantum state, one spin-up and the other spin-down. Hence, at zero temperature, the fermions fill up sufficient
levels to accommodate all the available fermions—and in the case of many fermions, the maximum kinetic energy (called
the Fermi energy) and the pressure of the gas becomes very large, and depends on the number of fermions rather than
the temperature, unlike normal states of matter. Strange matter is a particular form of quark matter, usually thought
of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons
and protons (which themselves are built out of up and down quarks), and with non-strange quark matter, which is a
quark liquid that contains only up and down quarks. At high enough density, strange matter is expected to be color
superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as
isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars). In bulk, matter
can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature
and volume. A phase is a form of matter that has a relatively uniform chemical composition and physical properties
(such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids,
liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein
condensates, ...). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of
magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called
phase transitions, and are studied in the field of thermodynamics. In nanomaterials, the vastly increased ratio of
surface area to volume results in matter that can exhibit properties entirely different from those of bulk material,
and not well described by any bulk phase (see nanomaterials for more details). In particle physics and quantum chemistry,
antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle
and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into
other particles with equal energy in accordance with Einstein's equation E = mc2. These new particles may be high-energy
photons (gamma rays) or other particle–antiparticle pairs. The resulting particles are endowed with an amount of
kinetic energy equal to the difference between the rest mass of the original particle–antiparticle pair and the rest
mass of the products of the annihilation, which is often quite large. Antimatter is not found naturally on Earth,
except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic
rays). This is because antimatter that came to exist on Earth outside the confines of a suitable physics laboratory
would almost instantly meet the ordinary matter that Earth is made of, and be annihilated. Antiparticles and some
stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than
test a few of its theoretical properties. There is considerable speculation both in science and science fiction as
to why the observable universe is apparently almost entirely matter, and whether other places are almost entirely
antimatter instead. In the early universe, it is thought that matter and antimatter were equally represented, and
the disappearance of antimatter requires an asymmetry in physical laws called the charge parity (or CP symmetry)
violation. CP symmetry violation can be obtained from the Standard Model, but at this time the apparent asymmetry
of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes
by which it came about are explored in more detail under baryogenesis. In astrophysics and cosmology, dark matter
is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly,
but whose presence can be inferred from gravitational effects on visible matter. Observational evidence of the early
universe and the big bang theory require that this matter have energy and mass, but not be composed of either elementary
fermions (as above) or gauge bosons. The commonly accepted view is that most of the dark matter is non-baryonic in
nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric
particles, which are not Standard Model particles, but relics formed at very high energies in the early phase of
the universe and still floating about. The pre-Socratics were among the first recorded speculators about the underlying
nature of the visible world. Thales (c. 624 BC–c. 546 BC) regarded water as the fundamental material of the world.
Anaximander (c. 610 BC–c. 546 BC) posited that the basic material was wholly characterless or limitless: the Infinite
(apeiron). Anaximenes (flourished 585 BC, d. 528 BC) posited that the basic stuff was pneuma or air. Heraclitus (c.
535–c. 475 BC) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c.
490–430 BC) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides
argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of
all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems. For example,
a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but
some aspect of it—its matter—does. The matter is not specifically described (e.g., as atoms), but consists of whatever
persists in the change of substance from grass to horse. Matter in this understanding does not exist independently
(i.e., as a substance), but exists interdependently (i.e., as a "principle") with form and only insofar as it underlies
change. It can be helpful to conceive of the relationship of matter and form as very similar to that between parts
and whole. For Aristotle, matter as such can only receive actuality from form; it has no activity or actuality in
itself, similar to the way that parts as such only have their existence in a whole (otherwise they would be independent
wholes). For Descartes, matter has only the property of extension, so its only activity aside from locomotion is
to exclude other bodies: this is the mechanical philosophy. Descartes makes an absolute distinction between mind,
which he defines as unextended, thinking substance, and matter, which he defines as unthinking, extended substance.
They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary
principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking)
as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an
actual independent thing in itself. Isaac Newton (1643–1727) inherited Descartes' mechanical conception of matter.
In the third of his "Rules of Reasoning in Philosophy", Newton lists the universal qualities of matter as "extension,
hardness, impenetrability, mobility, and inertia". Similarly in Opticks he conjectures that God created matter as
"solid, massy, hard, impenetrable, movable particles", which were "...even so very hard as never to wear or break
in pieces". The "primary" properties of matter were amenable to mathematical description, unlike "secondary" qualities
such as color or taste. Like Descartes, Newton rejected the essential nature of secondary qualities. There is an
entire literature concerning the "structure of matter", ranging from the "electrical structure" in the early 20th
century, to the more recent "quark structure of matter", introduced today with the remark: Understanding the quark
structure of matter has been one of the most important advances in contemporary physics.
In this connection, physicists speak of matter fields, and speak of particles as "quantum excitations of a mode of
the matter field". And here is a quote from de Sabbata and Gasperini: "With the word "matter" we denote, in this
context, the sources of the interactions, that is spinor fields (like quarks and leptons), which are believed to
be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduce
mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields)." In the late 19th
century with the discovery of the electron, and in the early 20th century, with the discovery
of the atomic nucleus, and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons
interacting to form atoms. Today, we know that even protons and neutrons are not indivisible: they can be divided
into quarks, while electrons are part of a particle family called leptons. Both quarks and leptons are elementary
particles, and are currently seen as being the fundamental constituents of matter. These quarks and leptons interact
through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard
Model of particle physics is currently the best description of these particles and their interactions, but despite decades of effort, gravity
cannot yet be accounted for at the quantum level; it is only described by classical physics (see quantum gravity
and graviton). Interactions between quarks and leptons are the result of an exchange of force-carrying particles
(such as photons) between quarks and leptons. The force-carrying particles are not themselves building blocks. As
one consequence, mass and energy (which cannot be created or destroyed) cannot always be related to matter (which
can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy).
Force carriers are usually not considered matter: the carriers of the electric force (photons) possess energy (see
Planck relation) and the carriers of the weak force (W and Z bosons) are massive, but neither is considered matter.
However, while these particles are not considered matter, they do contribute to the total mass of atoms,
subatomic particles, and all systems that contain them. The term "matter" is used throughout physics in a bewildering
variety of contexts: for example, one refers to "condensed matter physics", "elementary matter", "partonic" matter,
"dark" matter, "anti"-matter, "strange" matter, and "nuclear" matter. In discussions of matter and antimatter, normal
matter has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics, there
is no broad consensus as to a general definition of matter, and the term "matter" usually is used in conjunction
with a specifying modifier.
Why is blue light harmful?
Recent scientific research has shown that prolonged exposure to blue light can be harmful to sleeping patterns and ocular health. Exposure to blue light reduces the body's production of melatonin, a hormone that is known to affect a person's natural sleeping patterns. Reduced melatonin levels often make it difficult for people to fall asleep (Harvard Health Publishing). Excessive exposure to blue light has also been linked to eyestrain, retina damage, and an increased risk of cataracts (Canadian Association of Optometrists).
SolarSearch is built as a user experience that mimics reduced blue light exposure. SolarSearch uses Ethan Schoonover's Solarized color palette. Ethan developed Solarized to be used in code editing applications to reduce eyestrain for the programmers that use these applications for several hours every day. SolarSearch uses Ethan's Solarized palette to achieve the same effect of reduced eyestrain, while educating users about the effects of blue light exposure.
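As an illustration of the design choice, the Solarized light background (base3, #fdf6e3 in Schoonover's published palette) carries noticeably less blue than a pure white page. The comparison below is a minimal sketch; the hex values are Solarized's, while the pure-white baseline is an assumption for contrast:

```python
# Compare the blue channel of Solarized's light background (base3) with pure white.
def rgb(hex_color: str) -> tuple[int, int, int]:
    """Parse '#rrggbb' into an (r, g, b) tuple of 0-255 ints."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

SOLARIZED_BASE3 = "#fdf6e3"  # Solarized light background
PURE_WHITE = "#ffffff"

blue_base3 = rgb(SOLARIZED_BASE3)[2]
blue_white = rgb(PURE_WHITE)[2]
reduction = 1 - blue_base3 / blue_white

print(f"blue channel: {blue_base3} vs {blue_white} "
      f"({reduction:.0%} less blue)")  # roughly 11% less blue
```

Of course, the blue light actually emitted by a real display depends on the screen's emission spectrum, not just RGB channel values; this only illustrates the palette's warmer bias.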